| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,481,767 | <p>Let $A=\{3m-1 \mid m\in \mathbb{Z}\}$ and $B=\{4m+2 \mid m\in \mathbb{Z}\}$, and let $f:A\rightarrow B$ be defined by </p>
<p>$f(x)=\frac{4(x+1)}{3}-2$ . Is f surjective?</p>
<p>I'm not really sure how to prove this. By trying out certain values it seems it's surjective. This is my work so far:</p>
<p>$f(x)=y \iff \frac{4(x+1)}{3}-2 = y \iff x=\frac{3y+2}{4}$</p>
<p>If we substitute $y=4m+2$ then $x=\frac{3(4m+2)+2}{4} \iff x=\frac{12m+8}{4} \iff x=3m+2$. Although this is not exactly of the form $3m-1$, it seems that no matter which $m$ you choose you basically get the same set in the end. </p>
<p>Same if we do $f(A)=B \iff f(3m-1)=4m+2 \iff \frac{4(3m-1+1)}{3}-2=4m+2 \iff $</p>
<p>$\iff 4m-2=4m+2$. Obviously these two are not equal pointwise, yet they yield exactly the same sets since $m$ ranges over all integers. So is $f$ surjective? It seems like it, but these two arguments are not very precise.</p>
| Arnaldo | 391,612 | <p>Just see that </p>
<p>$$3m+2=3(m+1)-1=3M-1$$</p>
<p>and $M\in \Bbb Z$</p>
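<p>A quick numerical sanity check of this argument (a sketch; the window of $m$ values is an arbitrary choice):</p>

```python
# Check that f(x) = 4(x+1)/3 - 2 maps A = {3m-1 : m in Z} onto B = {4m+2 : m in Z}.
# For each target y = 4m+2, verify the candidate preimage x = (3y+2)/4 lies in A.
def f(x):
    return 4 * (x + 1) // 3 - 2  # exact: x + 1 is divisible by 3 whenever x is in A

for m in range(-50, 51):          # arbitrary finite window of B
    y = 4 * m + 2
    assert (3 * y + 2) % 4 == 0   # the preimage x = (3y+2)/4 is an integer
    x = (3 * y + 2) // 4
    assert (x + 1) % 3 == 0       # x has the form 3M - 1, so x is in A
    assert f(x) == y              # and f really sends x to y

print("every sampled element of B has a preimage in A")
```
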
|
4,037,697 | <p><span class="math-container">$a_n $</span> is a sequence defined this way:
<a href="https://i.stack.imgur.com/nHdiD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nHdiD.png" alt="enter image description here" /></a></p>
<p>and we define: <a href="https://i.stack.imgur.com/b01hX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b01hX.png" alt="enter image description here" /></a></p>
<p>I need to prove that the following holds:
<a href="https://i.stack.imgur.com/YMuE6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMuE6.png" alt="enter image description here" /></a></p>
<p>I have an exam soon with a lot of similar questions, but I have no idea where to start.</p>
| Nij | 226,212 | <p>Orthogonality is forced by requiring a scalar multiple of the identity; if any two columns <span class="math-container">$a_i, a_j$</span> were not orthogonal, there would be a nonzero entry in the corresponding entries <span class="math-container">$b_{ij}, b_{ji}$</span> of the product.</p>
<p>Orthogonality results in the absence of any shear/twist in the transformation, restricting it to only reflection, rotation and translation.</p>
<p>The eigenvalue squares as scalars allow for non-normality, as it is possible to scale the original space without affecting the angles between lines in it, provided <em>all</em> axes of the space are scaled by the same amount. If <span class="math-container">$A$</span> was instead orthonormal, you force a lack of scaling as well.</p>
<p>See also <a href="https://math.stackexchange.com/questions/3788789/">this question</a>, for which answers explain a converse point, why similarity transformation is a subtype of affine transformation.</p>
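<p>A small numerical illustration of the first paragraph (a sketch assuming real matrices, with NumPy standing in for the linear algebra):</p>

```python
import numpy as np

# If A^T A = c*I, every off-diagonal entry b_ij = a_i . a_j of the product
# vanishes, i.e. the columns of A are pairwise orthogonal.
theta, c = 0.7, 3.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.sqrt(c) * R                        # rotation plus uniform scaling
assert np.allclose(A.T @ A, c * np.eye(2))

# A shear fails: two columns are not orthogonal, so the corresponding
# off-diagonal entry of the product is nonzero.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert not np.isclose((S.T @ S)[0, 1], 0.0)
print("A^T A = cI forces pairwise-orthogonal columns (illustrated in 2D)")
```
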
|
4,492,566 | <blockquote>
<p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p>
</blockquote>
<p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p>
<p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p>
<p>Any insight on how to proceed?</p>
| orangeskid | 168,051 | <p>HINT:</p>
<p>The convex region <span class="math-container">$\{(x,y)\ | \ y \ge x^2\}$</span> has <span class="math-container">$\{0\} \times [0, \infty)$</span> as <a href="https://en.wikipedia.org/wiki/Recession_cone" rel="noreferrer">recession cone</a>.</p>
<p><span class="math-container">$\bf{Added:}$</span> We can ask a related question: given a convex function <span class="math-container">$f\colon \mathbb{R}\to \mathbb{R}$</span> (<span class="math-container">$f(0)=0$</span> to keep it simple), what are the cones with vertex at <span class="math-container">$(0,0)$</span> that fit inside the <a href="https://en.wikipedia.org/wiki/Epigraph_(mathematics)" rel="noreferrer">epigraph</a> of <span class="math-container">$f$</span>? Here is the answer: the derivative of <span class="math-container">$f$</span> is increasing, and its range is an interval <span class="math-container">$[m, M]$</span> (which could be infinite). Then the slopes of the cone are <span class="math-container">$m$</span>, <span class="math-container">$M$</span> (the terminal slopes of <span class="math-container">$f$</span>).</p>
<p>In our case for <span class="math-container">$f(x) = x^2$</span>, <span class="math-container">$m=-\infty$</span>, <span class="math-container">$M= \infty$</span>, so the cone becomes a half-line.</p>
<p>Let's take another example:</p>
<p><span class="math-container">$$f(x) = \log (\cosh x)$$</span></p>
<p>We have <span class="math-container">$f(0)= 0$</span>, <span class="math-container">$f'(x) = \tanh x$</span>. One checks that the largest cone that fits in has branches <span class="math-container">$y = \pm x$</span> (that is the recession cone).</p>
<p>Question: what is the largest angle of rotation to still get the graph of a function? (left or right will be the same in this case).</p>
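<p>The claim about <span class="math-container">$f(x)=\log(\cosh x)$</span> can be checked numerically (a sketch; the sample grid and tolerances are arbitrary):</p>

```python
import math

# The cone y = |x| fits inside the epigraph of f(x) = log(cosh x),
# and the gap |x| - f(x) tends to log 2, so the slopes +-1 are the
# terminal slopes: y = |x| bounds the largest cone (the recession cone).
def f(x):
    return math.log(math.cosh(x))

for k in range(-200, 201):
    x = 0.1 * k
    assert f(x) <= abs(x) + 1e-12      # cone sits inside the epigraph

gap = 30.0 - f(30.0)
assert abs(gap - math.log(2)) < 1e-9   # gap -> log 2, so the slopes are tight
print("recession cone of epi(log cosh) has branches y = +-x")
```
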
|
48,864 | <p>I can't resist asking this companion question to the <a href="https://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking"> one of Gowers</a>. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feely thinker like me.</p>
<p>So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics. </p>
<p>I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.</p>
<p>Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.</p>
| Kimball | 6,518 | <p>I suppose asymptotics for certain functions (e.g., Prime Number Theorem), or any sort of conjecture based on large empirical evidence, would count, but that's probably not what you mean.</p>
<p>Perhaps more interesting is the following. In high school/college, I was briefly interested in automated theorem proving and read about this (I don't remember the source and may get some details wrong--perhaps someone can help). Around the '60s or '70s, someone wrote an AI program to use numerical evidence to have a computer "deduce" many theorems/conjectures in number theory. They showed their answers to Knuth, and he marked the ones he thought were mathematically interesting. Among the things that stood out were several interesting "results" on highly composite numbers, which I think Ramanujan may have studied as well.</p>
|
48,864 | <p>I can't resist asking this companion question to the <a href="https://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking"> one of Gowers</a>. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feely thinker like me.</p>
<p>So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics. </p>
<p>I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.</p>
<p>Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.</p>
| John Pardon | 35,353 | <p>I disagree with the premise of this question.</p>
<p>Conventional computers follow a program written by a human. I think, for example, Daniel Moskovich's answer about simplicial sets is something that a <em>human programming a computer</em> (or a <em>computer scientist</em>) would think of when trying to program a computer.</p>
<p>Formalisms like these are things that we humans think of when programming a computer. Hence we have a tendency to think of them as "more mechanical", or "more like a computer", etc.; but I think it's a mistake to think that this is something that a computer "would come up with" on its own. Really it's us humans that come up with them, just when we are thinking in terms of computing.</p>
<p>There are computers which can be thought of as actually "thinking" in a way similar to humans (as opposed to just following a program), e.g. IBM's Watson computer. They need some large data set to learn from, though (just like we do), and if this large data set is all of the mathematics created by humans, then I think the mathematics produced by the computer would look a lot like things "a human would think of"!</p>
|
2,014,366 | <p>First of all, this is not from an exam sheet or anything like that; I'm just teaching myself about quantifiers.</p>
<p>I couldn't find a similar task to this one, so I had to ask here. </p>
<hr>
<p>Let '$x \mathrel{\heartsuit} y$' stand for 'x loves y'. Rewrite the sentence 'Someone loves everyone' using quantifiers in two different ways.</p>
| Bram28 | 256,001 | <p>Maybe the second interpretation would be something along the lines of 'there is someone for everyone' ... So for everyone there is someone who loves them. So switch the quantifiers for the second reading?</p>
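<p>A finite model makes the difference between the two quantifier orders concrete (a sketch; the relation is chosen purely for illustration):</p>

```python
# Two readings of "someone loves everyone" over a finite domain:
#   reading 1: exists x, forall y, L(x, y)  -- one person loves everybody
#   reading 2: forall y, exists x, L(x, y)  -- everybody is loved by someone
people = ["a", "b", "c"]
loves = {("a", "a"), ("b", "b"), ("c", "c")}  # illustrative relation

def L(x, y):  # "x loves y"
    return (x, y) in loves

exists_forall = any(all(L(x, y) for y in people) for x in people)
forall_exists = all(any(L(x, y) for x in people) for y in people)

assert exists_forall is False  # nobody here loves everyone
assert forall_exists is True   # yet everyone is loved by someone
print("the two quantifier orders are genuinely different readings")
```
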
|
3,076,504 | <p>The problem states: </p>
<p>Right triangle: the perimeter is <span class="math-container">$84$</span>, and the hypotenuse is <span class="math-container">$2$</span> greater than one of the legs. Find the area of this triangle. </p>
<p>I have tried different methods of solving this problem using Pythagorean Theorem and systems of equations, but cannot find any of the side lengths or the area of the right triangle. I looked for similar problems on StackExchange and around the internet, but could not find anything. </p>
<p>Does anyone know anything that could help find the side lengths of the triangle and the area as well?<br>
Method that I tried: </p>
<ol>
<li>Made a system with the values given.<br>
<span class="math-container">\begin{align}
a+b+c&=84 \\
c&=b+2
\end{align}</span></li>
<li>Substituted <span class="math-container">$c$</span> with <span class="math-container">$b+2$</span>.<br>
<span class="math-container">\begin{align}
a+b+b+2&=84 \\
a + 2b &= 82 & \text{subtracted $2$ from both sides}\\
a + \frac{a^2 - 4}{2} &= 82
\end{align}</span></li>
<li><span class="math-container">$c^2$</span> is <span class="math-container">$(b+2)(b+2)$</span>, so I used the Pythagorean Theorem to isolate one of the variables.<br>
<span class="math-container">\begin{align}
a^2+b^2 &=c^2\\
a^2 + b^2 &=(b+2)(b+2)\\
a^2+b^2 &=b^2+4b+4\\
a^2&=4b+4 & \text{ (Subtracted $b^2$ from both sides) }
\end{align}</span>
OR<br>
<span class="math-container">\begin{align}
a^2-4&=4b
\end{align}</span></li>
</ol>
<p>I do not know what to do after this point.</p>
| Creep Anonymous | 564,710 | <p>Let the sides of the right triangle be <span class="math-container">$x,y,x+2$</span>.</p>
<p>Given, </p>
<p><span class="math-container">$2x+y=82 \tag{1}$</span></p>
<p><span class="math-container">$x^2 + y^2 = (x+2)^2 \tag{2}$</span>
<span class="math-container">$$\implies x^2 + y^2 = x^2 +4x+4 $$</span>
<span class="math-container">$$\implies y^2 = 4x+4 $$</span></p>
<p>Now, substitute the value of <span class="math-container">$x$</span> from equation (1) in terms of <span class="math-container">$y,$</span> you will get a quadratic equation in <span class="math-container">$y$</span> whose roots can be easily found and hence, the sides and area.</p>
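<p>Carrying the hint to completion numerically (a sketch in plain Python; the quadratic below comes from substituting <span class="math-container">$y=82-2x$</span> into <span class="math-container">$y^2=4x+4$</span>):</p>

```python
import math

# Sides x, y, x+2 with 2x + y = 82 and y^2 = 4x + 4. Substituting
# y = 82 - 2x gives (82 - 2x)^2 = 4x + 4, i.e. x^2 - 83x + 1680 = 0.
a, b, c = 1, -83, 1680
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
x = min(roots)                        # the larger root makes y negative
y = 82 - 2 * x
assert (x, y) == (35.0, 12.0)         # legs 35 and 12, hypotenuse 37
assert x**2 + y**2 == (x + 2) ** 2    # Pythagoras holds
assert x + y + (x + 2) == 84          # perimeter holds
print("area =", x * y / 2)            # 210.0
```
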
|
3,538,305 | <blockquote>
<p>Given that the differential equation</p>
<p><span class="math-container">$f(x,y) \frac {dy}{dx} + x^2 +y = 0$</span> is exact and <span class="math-container">$f(0,y) =y^2$</span> , then <span class="math-container">$f(1,2)$</span> is</p>
</blockquote>
<p>choose the correct option</p>
<p><span class="math-container">$a)$</span> <span class="math-container">$5$</span></p>
<p><span class="math-container">$b)$</span> <span class="math-container">$4$</span></p>
<p><span class="math-container">$c)$</span> <span class="math-container">$6$</span></p>
<p><span class="math-container">$d)$</span> <span class="math-container">$0$</span></p>
<p>My attempt: <span class="math-container">$(x^2+y)dx +f(x,y) dy =0$</span>. Here <span class="math-container">$M =(x^2 +y)$</span>, <span class="math-container">$N=f(x,y)$</span>.</p>
<p>I know that for exactness <span class="math-container">$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$</span>, that is, <span class="math-container">$\frac{\partial f}{\partial x} =1$</span>.</p>
<p>After that I'm not able to proceed further.</p>
| Math_Is_Fun | 590,763 | <p><a href="https://i.stack.imgur.com/DBpj5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DBpj5.jpg" alt="enter image description here"></a></p>
<p>This is the solution, your answer should be option (A). </p>
<p>Let me know if you have questions. </p>
|
3,538,305 | <blockquote>
<p>Given that the differential equation</p>
<p><span class="math-container">$f(x,y) \frac {dy}{dx} + x^2 +y = 0$</span> is exact and <span class="math-container">$f(0,y) =y^2$</span> , then <span class="math-container">$f(1,2)$</span> is</p>
</blockquote>
<p>choose the correct option</p>
<p><span class="math-container">$a)$</span> <span class="math-container">$5$</span></p>
<p><span class="math-container">$b)$</span> <span class="math-container">$4$</span></p>
<p><span class="math-container">$c)$</span> <span class="math-container">$6$</span></p>
<p><span class="math-container">$d)$</span> <span class="math-container">$0$</span></p>
<p>My attempt: <span class="math-container">$(x^2+y)dx +f(x,y) dy =0$</span>. Here <span class="math-container">$M =(x^2 +y)$</span>, <span class="math-container">$N=f(x,y)$</span>.</p>
<p>I know that for exactness <span class="math-container">$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$</span>, that is, <span class="math-container">$\frac{\partial f}{\partial x} =1$</span>.</p>
<p>After that I'm not able to proceed further.</p>
| user577215664 | 475,762 | <p><span class="math-container">$$f(x,y) \frac {dy}{dx} + x^2 +y = 0$$</span>
<span class="math-container">$$(x^2+y)dx+fdy=0$$</span>
<span class="math-container">$$Mdx+Ndy=0$$</span>
<span class="math-container">$$ {\partial_y} M=\partial_x N $$</span>
<span class="math-container">$$ \implies 1=\partial_x f$$</span>
After integration we get:
<span class="math-container">$$f(x,y)=x+g(y)$$</span>
We are given <span class="math-container">$f(0,y)=y^2$</span>
<span class="math-container">$$ y^2 =g(y) \implies f(x,y)=x+y^2$$</span>
And
<span class="math-container">$$f(1,2)=1+2^2=5$$</span> </p>
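<p>The reconstruction can be verified without symbolic algebra (a sketch; the derivative check uses a central difference):</p>

```python
# f(x, y) = x + y^2 satisfies the exactness requirement df/dx = 1
# (since dM/dy = d(x^2 + y)/dy = 1), the condition f(0, y) = y^2,
# and gives f(1, 2) = 5, i.e. option (a).
f = lambda x, y: x + y * y

h = 1e-6
for x0, y0 in [(0.3, -1.2), (2.0, 0.5), (-4.0, 3.0)]:
    dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)  # central difference
    assert abs(dfdx - 1.0) < 1e-6

for y0 in range(-5, 6):
    assert f(0, y0) == y0 * y0        # boundary condition f(0, y) = y^2

assert f(1, 2) == 5
print("f(1, 2) =", f(1, 2))           # 5
```
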
|
2,947,829 | <p>Indeed this challenges my intuition of how different functions (that are not necessarily differentiable) interact to become differentiable which is nice.</p>
<p>I wonder if my proof suffices to show that it is indeed differentiable at <span class="math-container">$x = 0$</span>.</p>
<p><span class="math-container">$$\lim_{x\rightarrow0}\frac{x|x| - 0}{x-0} = \lim_{x\rightarrow0}\frac{|x|}{1} = \lim_{x\rightarrow0} |x| = 0$$</span></p>
<p>The existence of that limit by definition shows differentiability, right?</p>
| James | 549,970 | <p>Notice </p>
<p><span class="math-container">$$ \lim_{x \to 0^-} \frac{ f(x)-f(0)}{x-0} = \lim_{x \to 0^-} \frac{ x \cdot (-x)}{x} = \lim_{x \to 0^-} - x = 0 $$</span></p>
<p>and </p>
<p><span class="math-container">$$ \lim_{x \to 0^+} \frac{ f(x)-f(0)}{x-0} = \lim_{x \to 0^+} \frac{ x \cdot (+x)}{x} = \lim_{x \to 0^+} x = 0 $$</span></p>
<p>Hence,</p>
<p><span class="math-container">$$ f'(0) = 0 $$</span></p>
<p>Notice that </p>
<p><span class="math-container">$$ f(x) = \begin{cases} x^2 , \; \;\; x \geq 0 \\ -x^2 , \; \; \; x <0 \end{cases} $$</span></p>
|
2,947,829 | <p>Indeed this challenges my intuition of how different functions (that are not necessarily differentiable) interact to become differentiable which is nice.</p>
<p>I wonder if my proof suffices to show that it is indeed differentiable at <span class="math-container">$x = 0$</span>.</p>
<p><span class="math-container">$$\lim_{x\rightarrow0}\frac{x|x| - 0}{x-0} = \lim_{x\rightarrow0}\frac{|x|}{1} = \lim_{x\rightarrow0} |x| = 0$$</span></p>
<p>The existence of that limit by definition shows differentiability, right?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Your proof is correct and very elegant indeed. </p>
<p>You have said that it is counterintuitive for this function to be differentiable at <span class="math-container">$x=0$</span>, but if you look at the graph of this function you notice that it is very smooth at <span class="math-container">$x=0$</span>, so it is not surprising that the function is differentiable at <span class="math-container">$x=0$</span>.</p>
<p>However there is something very interesting about it. It is about the Wronskian of <span class="math-container">$x|x|$</span> and <span class="math-container">$x^2$</span>.</p>
<p>The Wronskian of these two functions is identically zero on the entire real line, but the functions are linearly independent over the real line.</p>
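<p>A numerical illustration of this remark (a sketch; the sample grid is arbitrary, and the derivative of <span class="math-container">$x|x|$</span> is <span class="math-container">$2|x|$</span>, consistent with the computation above):</p>

```python
# The Wronskian W(f, g) = f*g' - g*f' of f(x) = x|x| and g(x) = x^2
# vanishes at every sampled point, although f and g are not proportional
# on all of R: they agree for x >= 0 but have opposite signs for x < 0.
f = lambda x: x * abs(x)
g = lambda x: x * x
fp = lambda x: 2 * abs(x)   # derivative of x|x|
gp = lambda x: 2 * x

for k in range(-40, 41):
    x = k / 10
    W = f(x) * gp(x) - g(x) * fp(x)
    assert abs(W) < 1e-12   # identically zero on the sample

assert f(1.0) == g(1.0) and f(-1.0) == -g(-1.0)  # no single c with f = c*g
print("W(x|x|, x^2) = 0 everywhere, yet the functions are independent on R")
```
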
|
2,418,448 | <p>Let $\mathbb{N}^{\mathbb{N}}$ be the set of all sequences of positive integers. For $a=(n_1,n_2,\cdots),b=(m_1,m_2,\cdots)$ define $d(a,b)=\frac{1}{\min(i:n_i\neq m_i)}, a\neq b$, $d(a,a)=0$</p>
<hr>
<p>It can be easily shown that $d(a,b)\leq \max\{d(a,c),d(b,c)\}$; I want to prove that if two balls in this metric space intersect, then one is contained in the other.</p>
| orangeskid | 168,051 | <p>HINT: </p>
<p>Yours is an ultrametric space (the stronger inequality you quoted is satisfied).
So why don't you prove this fact: if $B = B(x_0, r)$ is a ball in this space and $x$ is a point in $B$, then $B = B(x,r)$. Then your statement is clear: the ball of larger radius contains the other.</p>
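<p>A brute-force check of this fact on truncated sequences (a sketch; the truncation length, two-letter alphabet, and sample sizes are arbitrary choices made to keep intersections frequent):</p>

```python
import random

random.seed(0)

def d(a, b):
    # d(a, b) = 1 / min{i : a_i != b_i}, with d(a, a) = 0 (indices from 1)
    for i, (x, y) in enumerate(zip(a, b), start=1):
        if x != y:
            return 1.0 / i
    return 0.0

N = 8                                        # truncation length
pts = [tuple(random.randint(1, 2) for _ in range(N)) for _ in range(200)]

def ball(center, r):
    return frozenset(p for p in pts if d(center, p) < r)

balls = [ball(c, 1.0 / i) for c in pts[:30] for i in range(1, N + 1)]
for B1 in balls:
    for B2 in balls:
        if B1 & B2:                          # the balls intersect...
            assert B1 <= B2 or B2 <= B1      # ...so one contains the other
print("every intersecting pair of sampled balls is nested")
```
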
|
2,532,280 | <p>If an N×N (N≥3) Hermitian matrix <strong>A</strong> meets the following conditions: </p>
<ol>
<li><strong>A</strong> is positive semi-definite (not positive definite, i.e. <strong>A</strong> has at least M zero eigenvalues, where M is a given parameter with 1≤M≤N-1).</li>
<li>The sum of the entries along each off-diagonal is 0, and the main diagonal elements are non-negative, as shown in the figure (with N=4 as an example).</li>
</ol>
<p>Then what is the general solution for <strong>A</strong>?</p>
<p>For example, a particular solution of <strong>A</strong> can be $$
\begin{pmatrix}
I_{M'} & 0 \\
0 & 0 \\
\end{pmatrix}
$$ where M≤N-M'≤N-1. This is just one particular solution; I wonder what the general solution is under these two conditions.</p>
<p><a href="https://i.stack.imgur.com/biXfS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/biXfS.png" alt="enter image description here"></a></p>
| Maadhav | 416,874 | <p>$4x^2+(5k+3)x+(2k^2-1)=4x^2-\alpha^2$</p>
<p>$k=\frac{-3}5$</p>
<p>${ \alpha} ^2=1-2\times \frac{9}{25}=\frac{7}{25}$</p>
<p>$ \alpha =\pm \frac{ \sqrt {7}}{5}$</p>
<p>$\alpha \in \mathbb{R}$, so the value of $k$ is valid.</p>
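<p>The arithmetic above can be checked with exact rationals (a sketch):</p>

```python
from fractions import Fraction

# 4x^2 + (5k+3)x + (2k^2 - 1) = 4x^2 - a^2 forces the linear coefficient
# to vanish, giving k = -3/5, and then a^2 = 1 - 2k^2 = 7/25.
k = Fraction(-3, 5)
assert 5 * k + 3 == 0              # linear term drops out
a_sq = 1 - 2 * k**2                # constant term: 2k^2 - 1 = -a^2
assert a_sq == Fraction(7, 25)     # so a = +-sqrt(7)/5, which is real
print("k =", k, " a^2 =", a_sq)
```
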
|
2,713,391 | <p>I can find the answers to similar questions online, but what I'm trying to do is develop my own intuition so I can find them myself. I am quite sure I am wrong, so could you look over my reasoning?</p>
<p>If $X = \{1,2,3,4,5,6,7,8\}$,</p>
<blockquote>
<ol>
<li>How many strings over X of length 5?</li>
</ol>
</blockquote>
<p>Reasoning: Each character of the string is an independent choice of one of the 8 symbols in $X$, so $8^5$.</p>
<blockquote>
<ol start="2">
<li>How many strings over X with length 5 don't contain $1$?</li>
</ol>
</blockquote>
<p>Reasoning: This is equivalent to strings of length 5 of an alphabet not containing $1$, so $7^5$.</p>
<blockquote>
<ol start="3">
<li>How many strings over X with length 5 contain $1$?</li>
</ol>
</blockquote>
<p>Reasoning: First get the strings for length 4 ($8^4$). Then select a position to insert a 1 ($5$). Compounded: $5\times 8^4$.</p>
<p>At this point I saw $8^5 \ne 7^5 + 5 \times 8^4$ and lost motivation. I am more sure 1 and 2 are correct than 3, so perhaps the simplest solution would be just $8^5-7^5$, but my logic for 3 "feels" sound. I am frustrated because I have never had these kinds of difficulties before. I look at solutions and though they too "feel" correct, I'd not have come up with them.</p>
<p>I apologize for the irregular question form, and understand if this isn't the site for this. </p>
| leonbloy | 312 | <p>The problem with the third argument is that it counts some valid strings twice (or more).</p>
<p>For example the string <code>12341</code> is counted as <code>[1][2341]</code> and <code>[1234][1]</code>.</p>
<p>The other way (subtracting the number of strings without '1's from the number of total strings) is correct, and it's indeed the simplest way.</p>
<p>Otherwise, you could correct the overcounting by <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion-exclusion</a>:</p>
<p>$$ \binom51 8^4 - \binom52 8^3 + \binom53 8^2 - \binom54 8^1 + 1 = 15961 $$</p>
<p>Alternatively, you could count all the strings with exactly $1,2 \cdots 5$ ones, as in @drhab's answer.</p>
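<p>Both counts are small enough to confirm by brute force (a sketch):</p>

```python
from itertools import product

# Enumerate all strings of length 5 over X = {1,...,8} and count those
# containing at least one 1; compare with 8^5 - 7^5 and the
# inclusion-exclusion total quoted above.
X = range(1, 9)
strings = list(product(X, repeat=5))
with_one = [s for s in strings if 1 in s]

assert len(strings) == 8 ** 5                  # 32768 strings in total
assert len(strings) - len(with_one) == 7 ** 5  # strings avoiding 1
assert len(with_one) == 8 ** 5 - 7 ** 5        # complement count
print(len(with_one))                           # 15961
```
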
<p>BTW: you say</p>
<blockquote>
<p>At this point I saw $8^5 \ne 7^5 + 5 \times 8^4$ and lost motivation.</p>
</blockquote>
<p>You definitely shouldn't. On the contrary, to fall into that "trap" (and to realize that something is wrong, and learn why) is rather the point of this problem. </p>
|
4,482,707 | <p><a href="https://math.stackexchange.com/questions/942263/really-advanced-techniques-of-integration-definite-or-indefinite/1885401#1885401">Here</a>, I saw the following formula:</p>
<p><span class="math-container">$$\int_{0}^{\infty }\frac{f(t)}{t}dt=\int_{0}^{\infty }\mathcal{L}\left \{ f(t) \right \}ds$$</span></p>
<hr />
<p>Say the integrand is only <span class="math-container">$f(t)$</span>, not <span class="math-container">$\frac{f(t)}{t}$</span>; then I think we can multiply and divide the integrand by <span class="math-container">$t$</span> and then apply the formula. For instance, if the integrand is <span class="math-container">$te^{-t}$</span>, then we should rewrite it as</p>
<p><span class="math-container">$$\frac{t^2e^{-t}}{t}$$</span></p>
<p>So <span class="math-container">$f(t)$</span> is actually <span class="math-container">$t^2e^{-t}$</span>, not <span class="math-container">$te^{-t}$</span>.</p>
<p>I know that if the integrand is <span class="math-container">$te^{-t}$</span>, then it can be done by integrating by parts. However, I think it is a good function to illustrate my question.</p>
<p>Am I right about multiplying and dividing by <span class="math-container">$t$</span> just to make use of the formula?</p>
<p>In other words, is the following true?</p>
<p><span class="math-container">$$\int_{0}^{\infty }f(t)dt=\int_{0}^{\infty }\mathcal{L}\left \{ tf(t) \right \}ds$$</span></p>
<hr />
<p>Second Question:</p>
<p>Can we, somehow, generalize the formula to any interval <span class="math-container">$(a,b)$</span> instead of <span class="math-container">$(0,\infty)$</span>?</p>
<p>I know that the Laplace transform will have the same limits of integration, <span class="math-container">$(0,\infty)$</span>, but I am asking about the original integral, not the integral from the <span class="math-container">$\mathcal{L}$</span>.</p>
<hr />
<p>Sorry for my bad English; I hope my questions are clear. Your help would be appreciated. Thanks!</p>
<hr />
<h2>Edit:</h2>
<p>For the second question, what I mean is</p>
<p>Given <span class="math-container">$a,b,$</span> and <span class="math-container">$h(x)$</span>, and that <span class="math-container">$\int_{a}^{b}h(x)dx = \int_{0}^{\infty}r(t)dt$</span>. How to find <span class="math-container">$r(t)$</span>?</p>
<p>This will make use of the first formula in this post.</p>
<p>For instance, <span class="math-container">$\int_{1}^{2}\sqrt{4x(2-x)}dx$</span> can be replaced by <span class="math-container">$\int_{0}^{\infty} \frac{dt}{1+t^2}$</span>. And now we can use the above formula.</p>
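<p>The quoted formula can be sanity-checked numerically with <span class="math-container">$f(t)=t^2e^{-t}$</span>, using the standard transform pair <span class="math-container">$\mathcal{L}\{t^2e^{-t}\}(s)=\frac{2}{(s+1)^3}$</span> (a sketch; the truncation limits and step counts are arbitrary):</p>

```python
import math

# Compare  int_0^inf (t^2 e^{-t})/t dt  with  int_0^inf L{t^2 e^{-t}}(s) ds.
# Both equal 1; we truncate the infinite ranges and use the trapezoid rule.
def trapz(g, a, b, n=200000):
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

lhs = trapz(lambda t: t * math.exp(-t), 0.0, 60.0)       # t^2 e^{-t} / t
rhs = trapz(lambda s: 2.0 / (s + 1.0) ** 3, 0.0, 200.0)  # Laplace side
assert abs(lhs - 1.0) < 1e-3
assert abs(rhs - 1.0) < 1e-3
print(round(lhs, 4), round(rhs, 4))
```
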
| Rebecca J. Stones | 91,818 | <p>You're quite right in that
<span class="math-container">$$
R_1 = \{(x,y):y=x+2,\ x \in X,\ y \in Y\} = \{(1,3),(3,5),(5,7)\}
$$</span>
which is a <a href="https://en.wikipedia.org/wiki/Relation_(mathematics)" rel="nofollow noreferrer">relation</a> over <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. You can also tell it's a relation because the "<span class="math-container">$x \in X, y \in Y$</span>" part of its definition implies <span class="math-container">$R_1 \subseteq X \times Y$</span>. I'm guessing the author intended the first set to be
<span class="math-container">$$
\{(x,x+2):x \in X\}
$$</span>
which would not be a relation over <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. (Or maybe the author meant to say "function" instead of "relation", as in <a href="https://math.stackexchange.com/questions/4482695/if-x-1-2-3-4-5-and-y-1-3-5-7-9-determine-which-of-the-following-sets-r#comment9400118_4482695">Theo Bendit's comment</a>.)</p>
<p>We know <span class="math-container">$R_2$</span> is not a relation since <span class="math-container">$(1,2) \not\in X \times Y$</span>. Both <span class="math-container">$R_3$</span> and <span class="math-container">$R_4$</span> are relations, and you can check the first coordinates all belong to <span class="math-container">$X$</span> and second coordinates all belong to <span class="math-container">$Y$</span>.</p>
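<p>The membership checks are mechanical (a sketch; the sets <span class="math-container">$X=\{1,2,3,4,5\}$</span> and <span class="math-container">$Y=\{1,3,5,7,9\}$</span> are taken from the linked question):</p>

```python
# R1 = {(x, x+2) : x in X, x+2 in Y}, and the test R is a subset of X x Y
# decides "relation over X and Y".
X = {1, 2, 3, 4, 5}
Y = {1, 3, 5, 7, 9}
XY = {(x, y) for x in X for y in Y}

R1 = {(x, x + 2) for x in X if x + 2 in Y}
assert R1 == {(1, 3), (3, 5), (5, 7)}
assert R1 <= XY                 # R1 is a relation over X and Y
assert (1, 2) not in XY         # so any set containing (1, 2) is not
print("R1 is a relation; (1,2) rules out R2")
```
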
|
293,341 | <p>My apologies if this question is more appropriate for mathisfun.com, but I can only get so far reading about combinatrics and set theory before the interlocking logic becomes totally blurred. If this is a totally fundamental concept, feel free just to name it so I can read and understand the math myself.</p>
<p>So the goal is to minimize repetition of questions on a quiz to avoid (or really to slow down) the creation of a master key. This is for a client and I've explained that to make this truly realistic the number of questions in the master pool would need to be huge, but I want to show them the math behind their idea.</p>
<p>So they suggested having a 20-question pool with a given quiz being a 5-member subset. I figured out that the total number of unique quizzes <span class="math-container">$\binom{20}{5}$</span> would be <span class="math-container">$\frac{20!}{5!(20-5)!}$</span> or 15504 unique quizzes. But I know that most of those quizzes will be nearly identical and that it won't take cheaters long to see all 20 questions and make the key. To prove this to myself (without knowing the math), I simplified the total combinations to <span class="math-container">$\binom{4}{3}$</span>, like so:</p>
<p>{a, b, c, d} = { {a,b,c}; {a,b,d}; {b,c,d}; {a,c,d} }</p>
<p>And I see that it only takes seeing any 2 quizzes to see all 4 members of the master set. So knowing that the number of combinations (binomial coefficient!) is not equivalent to the number of unique appearances of the master-set, I'd like to know the actual math involved to show the client that while they have a ton of quizzes, it only takes <span class="math-container">$x$</span> to know all members.</p>
<p>Thanks as always.</p>
<h2>Addendum</h2>
<p>A bit more research has introduced me to the NP-complete problem known as Exact Cover, which (if I'm reading it right) is a collection of disjoint subsets whose union equals the original master-set. I just want to clarify that this constraint of exact, non-overlapping coverage is not necessary for my question; I only need the minimum number of subsets whose union contains all master-set members, regardless of repetition, in order to demonstrate how many subsets are needed to know the original set (with the assumption that the seeker of the master-set knows the total membership count). I tweaked my micro-experiment from <span class="math-container">$\binom{4}{3}$</span> to <span class="math-container">$\binom{4}{2}$</span>, resulting in 6 combinations, and the master-set can no longer be derived from a specific number of arbitrary subsets. Instead I get:</p>
<p>{a, b, c, d} = { {ab} ; {ac} ; {ad} ; {bc} ; {bd} ; {cd} }</p>
<p>which could derive the master set using the first three (<span class="math-container">$a$</span>) groups, or the exact cover of <span class="math-container">${ {a,b}; {c,d} }$</span>. This has me thinking that the minimum number of subsets needed to derive the original set is equal to the number of subsets in which any given member occurs (so in this case 3 <span class="math-container">$a$</span>s), but this doesn't match up to the <span class="math-container">$\binom{4}{3}$</span> case, where it can be found with 2 subsets. The next obvious solution (to me) is that the minimum number needed to derive the master-set (blindly) is half of the total number of subsets, but I would really want a link to a proof or a simple-English demonstration of how a pool of 20 questions would require 7752 subsets to know with certainty that all 20 members have appeared at least once.</p>
<p>Again, thanks.</p>
<h2>Question as Probability:</h2>
<p>I have a bag of Scrabble tiles and I know the following:</p>
<ol>
<li>The bag contains 20 tiles,</li>
<li>Each tile is unique (no two tiles have the same character),</li>
<li>The tiles come from a much larger (and otherwise irrelevant) set of an expansion set including numbers and non-Roman alphabet characters, thus removing any advantage of knowing that this set of 20 comes from a larger-but-limited set (in other words, the characters are only informative to each other and I may get all Klingon or a mix of Chinese and Tamil. I should not assume anything about the set other than what is in the bag).</li>
</ol>
<p>I am allowed to perform the following steps in the order given as many times as I want:</p>
<ol>
<li>Pull out 5 tiles,</li>
<li>Write down the characters drawn,</li>
<li>Return the tiles to the bag.</li>
<li>Lather, Rinse, Repeat.</li>
</ol>
<p>Also: I have magical fingers that prevent me from drawing the same set of 5 twice, thus reducing the number of draws from infinity to 15504 possible draws.</p>
<p>My objective is to have all 20 characters written down eventually and then stop drawing characters.</p>
<p>I know that the total number of unique combinations I could draw is <span class="math-container">$\binom{20}{5}$</span> which is 15504. I also know that the minimum draws required is equal to <span class="math-container">$\lceil{20}/{5}\rceil$</span>, which would be very lucky. What I am interested in is the maximum number of draws required to reveal all 20 characters.</p>
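<p>The counts above are easy to double-check in a few lines of Python (a quick verification sketch, not part of the original question):</p>

```python
from math import comb, ceil

# Number of distinct 5-tile draws from a bag of 20 unique tiles.
total_draws = comb(20, 5)
print(total_draws)                  # 15504

# Best case: every draw reveals 5 new characters.
min_draws = ceil(20 / 5)
print(min_draws)                    # 4

# The smaller experiment: C(4, 2) = 6 combinations,
# and half of the 15504 draws (the guess above) is 7752.
print(comb(4, 2), total_draws // 2) # 6 7752
```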
| Student | 137,056 | <p>I am not sure if this is still relevant to you but I will make the following suggestions that have helped me have that click moment when it comes to proofs. </p>
<p>1) You need to realize how important definitions are. How can you prove a statement involving a concept that you do not fully grasp? It is essential to develop an intuition about the concepts: Why did we need to define X? Why is it relevant? Find examples that are concrete and familiar to you but also try to look for examples that feel "weird".
As the next step you should try to feel very comfortable with the formal definition and be able to manipulate it easily (for example, if it involves epsilons and deltas). You will find that this often comes easily if you have a good intuitive grasp of the concept.</p>
<p>2) Start with "follow your nose" type of proofs: If you have to prove a theorem and the only tool you have is a definition then often the proof will only involve manipulating a definition in a more or less straightforward way. After practising these for a while, they will come very naturally to you, no matter the area of mathematics you are studying.</p>
<p>3) Following an analogy from one of my lecturers: "Maths is like a beach filled with rock pools. Each rock-pool requires a different sneaky trick to cross it, and some of the sneaky tricks that you learn crossing one rock-pool can be applied to others." I think it is important for you to actively read different kinds of proofs and figure out what the key ideas and tricks are. You will often find that there are standard ways to attack certain types of proofs; often one ingenious trick that seemed completely mysterious when you first saw it becomes a standard thing to try and will come naturally to you.</p>
<p>4) When trying to prove a statement, first do a draft version. Look closely at what you have to show. How can you simplify the result you are trying to get? After you have simplified it, are there any theorems that seem appropriate? Look closely at the assumptions; play with the concepts and what you can deduce from the assumptions. A couple of strategies will come to you and often one of them will work! Try to get the general idea of the proof, and only afterwards should you care about formalizing it and filling in the details.</p>
<p>5) Learn from the proofs you read. Why would someone try a certain trick to attack this specific problem? What in the statement of the result prompts towards a given strategy or theorem? Even when things in proofs seem to come from nowhere and there are certain steps that you think you would never have thought of, it is very instructive to look hard for clues that would suggest trying those routes. If you really can't find anything, either discuss it with someone or file that trick under those "sneaky tricks" that you can apply to other rock-pools. You will eventually find that when you first read a statement you are trying to prove, you automatically jot down some possible strategies from prompts in the statement.</p>
<p>6) Be very careful not to assume things that might seem intuitively clear to you whereas in fact there is some weird counter-example that proves you wrong. This tends to happen when one is time pressured or frustrated so look out for that!</p>
<p>I hope some of this helps :). </p>
<p>P.S: I am an undergraduate, so take this with a pinch of salt. Also, I apologize if I have not expressed myself very well; English is not my first language and these kinds of things are often hard to explain. If someone feels like they can express any idea better, please feel free to edit my answer. </p>
|
2,414,011 | <p>In my recent works in PDEs, I'm interested in finding a family of cut-off functions satisfying following properties:</p>
<p>For each $\varepsilon >0$, find a function ${\psi _\varepsilon } \in {C^\infty }\left( \mathbb{R} \right)$ which is a non-decreasing function on $\mathbb{R}$ such that:</p>
<ol>
<li>${\psi _\varepsilon }\left( x \right) = \left\{ {\begin{array}{*{20}{l}}
{0 \mbox{ if } x \le \varepsilon ,}\\
{1\mbox{ if } x \ge 2\varepsilon ,}
\end{array}} \right.$ and</li>
<li>The function $x \mapsto x{\psi _\varepsilon }'\left( x \right)$ is bounded uniformly with respect to $\varepsilon$ as $\varepsilon \to 0$.</li>
</ol>
<p>The main problem here is that ${\psi _\varepsilon }'\left( x \right) \to \infty $ for some $x \in \left( {\varepsilon ,2\varepsilon } \right)$ as $\varepsilon \to 0$. I also started with <a href="https://en.wikipedia.org/wiki/Non-analytic_smooth_function" rel="nofollow noreferrer">this function</a> to define ${\psi _\varepsilon }$ explicitly on the interval $\left( {\varepsilon ,2\varepsilon } \right)$, but my attempts to adjust the referenced function failed. </p>
<p>Can you find an example of these cut-off functions?</p>
<p>Thanks in advance.</p>
| md2perpe | 168,433 | <p>First let $\Psi(x) = e^{-1/(1-x^2)}$ for $|x|<1$ and $=0$ otherwise (as <a href="https://en.wikipedia.org/wiki/Bump_function#Examples" rel="nofollow noreferrer">here</a>). This satisfies $\Psi \in C_c^\infty(\mathbb R)$ and $0 \leq \Psi(x) \leq e^{-1}$. </p>
<p>Then let $A = \int_{-\infty}^{\infty} \Psi(x) \, dx$ so that $\rho(x) = A^{-1} \int_{-\infty}^{x} \Psi(t) \, dt$ satisfies $\rho \in C^\infty(\mathbb R)$ with $\rho(x)=0$ for $x<-1$ and $\rho(x)=1$ for $x>1$, and $|\rho'(x)| \leq (Ae)^{-1}$.</p>
<p>Finally set $\psi_{\epsilon}(x) = \rho(\frac{2x}{\epsilon}-3).$ Then $\psi_\epsilon \in C^\infty(\mathbb R)$ with $\psi_\epsilon(x)=0$ for $x<\epsilon,$ $\psi_\epsilon(x)=1$ for $x>1,$ and $|\psi_\epsilon'(x)| = \frac{2}{\epsilon} |\rho'(\frac{2x}{\epsilon}-3)| \leq 2(Ae\epsilon)^{-1}$ so $|x \psi_\epsilon'(x)| \leq 2(Ae\epsilon)^{-1} \cdot 2\epsilon = 4(Ae)^{-1}.$</p>
<p>Thus, this $\psi_\epsilon$ satisfies your wishes.</p>
|
1,268,431 | <p>$$\lim_{x\to 2} \frac {\sin(x^2 -4)}{x^2 - x -2} $$</p>
<p>Attempt at solution:</p>
<p>So I know I can factor the denominator:</p>
<p>$$\frac {\sin(x^2 -4)}{(x-2)(x+1)} $$</p>
<p>So what's next? I feel like I'm supposed to multiply by conjugate of either num or denom.... but by what value...?</p>
<p>Don't tell me I'm simply supposed to plug in $x = 2$</p>
<p>I need to simplify fractions somehow first, how?</p>
| Martin Argerami | 22,857 | <p>$$
\frac{\sin(x^2-4)}{(x+1)(x-2)}=\frac{\sin(x^2-4)}{x^2-4}\,\frac{x^2-4}{(x+1)(x-2)}
$$</p>
<p>As $x \to 2$, the first factor tends to $1$ (since $\frac{\sin u}{u} \to 1$ as $u \to 0$), while the second simplifies to $\frac{x+2}{x+1}$ and tends to $\frac{4}{3}$. Hence the limit is $\frac{4}{3}$.</p>
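<p>A quick numerical check (a sanity sketch, not a proof) is consistent with the limit being $\frac{4}{3}$:</p>

```python
import math

def g(x):
    return math.sin(x * x - 4.0) / (x * x - x - 2.0)

# Approach x = 2 from both sides; values settle near 4/3.
for h in (1e-2, 1e-4, 1e-6):
    for x in (2.0 - h, 2.0 + h):
        assert abs(g(x) - 4.0 / 3.0) < 10.0 * h
print("values near x = 2 approach 4/3")
```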
|
271,844 | <p>I've installed the KnotTheory package, following the instructions <a href="http://katlas.org/wiki/Setup" rel="nofollow noreferrer">here</a>. But when I try to use it I get this error:</p>
<p><code>$CharacterEncoding: The byte sequence {139} could not be interpreted as a character in the UTF-8 character encoding.</code></p>
<p>Both <code><< KnotTheory` </code> and <code>Needs["KnotTheory`"]</code> raise this error (I found that byte sequence {139} is for "<"). "Needs" also raises</p>
<p><code>Needs: "Context KnotTheory` was not created when Needs was evaluated".</code></p>
<p>Parent directory of KnotTheory is in the <code>$Path</code>, but it doesn't help.</p>
<p>How can I fix it?</p>
| Nasser | 70 | <p>I just tried it. It works for me: it loads and I can call its functions, but I keep getting the warning <code>Get::path: ParentDirectory[File] in $Path is not a string.</code> even though it does give the same output as on the web site.</p>
<p>I think you need to edit <code>init.m</code> to fix these warnings. It seems to be an old package.</p>
<p>These are the steps I did:</p>
<p>Download the zip file and extract the folder <code>KnotTheory</code>. Then evaluate</p>
<pre><code>FileNameJoin[{$UserBaseDirectory,"Applications"}]
</code></pre>
<p>Which gives</p>
<pre><code>"C:\\Users\\Owner\\AppData\\Roaming\\Mathematica\\Applications"
</code></pre>
<p>Then copied the folder <code>KnotTheory</code> to the above folder. So now it looks like this</p>
<p><a href="https://i.stack.imgur.com/yGfR2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yGfR2.png" alt="enter image description here" /></a></p>
<p>Now from Mathematica</p>
<pre><code>Quit[]
<< KnotTheory`
</code></pre>
<p>Gives these warnings</p>
<p><a href="https://i.stack.imgur.com/lwMgA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lwMgA.png" alt="enter image description here" /></a></p>
<p>But it works</p>
<pre><code> Alexander[Knot[6, 2]][t]
</code></pre>
<p><img src="https://i.stack.imgur.com/010rE.png" alt="Mathematica graphics" /></p>
<p><img src="https://i.stack.imgur.com/wpOGb.png" alt="Mathematica graphics" /></p>
<p>I never used this package myself before. But the date on the webpage is 2013, which is old.</p>
<p><img src="https://i.stack.imgur.com/cKhzd.png" alt="Mathematica graphics" /></p>
|
1,721,565 | <p>I'm having trouble with what I have done wrong with the chain rule below. I have tried to show my working as much as possible for you to better understand my issue here.</p>
<p>So:</p>
<p>Find $dy/dx$ for $y=(x^2-x)^3$
<br> So bringing the power to the front gives $3(x^2-x)^2 \cdot (2x-1)$</p>
<p>Where did the $-1$ come from in $2x-1$?</p>
<p>How did they get that? </p>
<p>Thanks!</p>
| Aditya Agarwal | 217,555 | <p>So you are evidently confused about how the derivative of $x^2-x$ is computed.
We know that $\frac{d}{dx}k f(x)=k\frac d{dx}f(x)$ and $\frac d{dx}(f(x)+g(x))=\frac d{dx}f(x)+\frac d{dx}g(x)$. <br> So
$$\frac d{dx}(x^2-x)=\frac d{dx}x^2+\frac d{dx}(-x)=2x+(-1)\frac d{dx}(x)=2x+(-1)1=2x-1$$</p>
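<p>If it helps to see it concretely, the full chain-rule formula can be compared with a central finite difference (a quick numerical sketch, not part of the original answer):</p>

```python
def y(x):
    return (x * x - x) ** 3

def dy(x):
    # Chain rule: 3 * (x^2 - x)^2 * (2x - 1).
    return 3.0 * (x * x - x) ** 2 * (2.0 * x - 1.0)

h = 1e-6
for x in (-1.5, 0.3, 2.0):
    approx = (y(x + h) - y(x - h)) / (2.0 * h)
    assert abs(approx - dy(x)) < 1e-4
print("chain-rule formula matches finite differences")
```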
|
715,361 | <p>Let $\Omega$ be a bounded domain and $f_n\in L^2(\Omega)$ be a sequence such that
$$\int_\Omega f_nq\operatorname{dx}\leq C<\infty\qquad \text{for all}\quad q\in H^1(\Omega),\ \|q\|_{H^1(\Omega)}\leq1,\ n\in\mathbb{N}.\quad (1) $$
Is it then possible to conclude that
$$ \sup_{n\in\mathbb{N}}\|f_n\|_{L^2(\Omega)}\leq C. $$</p>
<p>Here, $H^1(\Omega)$ denotes the usual Sobolev space of $L^2(\Omega)$ functions with weak gradient in $L^2(\Omega)$, which is a Hilbert space.</p>
<p>Obviously, this statement would be true if we were to replace (1) with
$$\int_\Omega f_nq\operatorname{dx}\leq C<\infty\qquad \text{for all}\quad q\in L^2(\Omega),\ \|q\|_{L^2(\Omega)}\leq1,\ n\in\mathbb{N}. $$</p>
<p>and maybe the dense and compact embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$ is of help but I'm not sure of it.</p>
<p>Edit: By now I'm pretty sure that this statement doesn't hold. We only have a bound in the dual of $H^1(\Omega)$. But so far I have failed to put together a conclusive argument!</p>
| naslundx | 130,817 | <p>We may never divide by 0, so for a solution we require that the denominator $(x-1) \not =0$, meaning $x \not= 1$.</p>
<p>However, the only candidate value for $x$ (given by both you and the trainer) is $x=1$. Hence we are forced to conclude that no solutions exist. </p>
|
2,026,486 | <p>Let $h,g$ be <em>not</em> injective functions, can function $f:\mathbb{R}\rightarrow \mathbb{R}^2$ such that $f(x) = (h(x), g(x))$ be injective?</p>
<p>I know that, if I pick polynomials for $h$ and $g$, then may be not injective, if picked carefully. For example, I can check zeros of the polynomials, whether they collide or not, but I am really not sure whether there exist other, nontrivial counterexample to disprove injectivity of $f$.</p>
| Arthur | 15,500 | <p>$f$ can be injective. For instance, take $h(x) = x^2$ and $g(x) = (x + 1)^2$. Then $f(x) = (x^2, (x+1)^2)$ is injective: if $h(a) = h(b)$ and $a \neq b$, then $a = -b \neq 0$, but then $g(a) = (a+1)^2 \neq (1-a)^2 = g(b)$. Thus, if $a$ and $b$ are such that the first component of $f(a)$ is equal to the first component of $f(b)$, then the second components are different.</p>
<p>As for a non-trivial, non-injective $f$, one may take $h(x) = x^3 - x$ and $g(x) = x^2$. Then $f(-1) = f(1) = (0, 1)$.</p>
|
1,548,771 | <p>I have come up with the following constrained minimization problem:
\begin{eqnarray}
\min\ \sum_{i=1}^\infty x_i^2\\
\sum_{i=1}^\infty a_ix_i=1
\end{eqnarray}
If it were a finite-dimensional case it would be easily solved via Lagrange multipliers; in this case I ask your help since I don't know where to begin.</p>
| Amr | 29,267 | <p>Assume wlog that there are no zero terms in the sequence, by omitting them.</p>
<p>Assuming that the sequence $a_i$ is square summable, one can easily use the Cauchy–Schwarz inequality to show that </p>
<p>$$x_i := \frac {a_i} {\sum_{i=1}^{\infty}a_i^2} $$</p>
<p>minimizes the quantity $\sum_{i=1}^{\infty}x_i^2$. </p>
<hr>
<p>For the case where $a_i$ is not square summable i.e. $\sum_{i=1}^{\infty}a_i^2=\infty$, I will show that $$\inf\{\sum_{i=1}^{\infty}x_i^2: \sum_{i=1}^{\infty}a_ix_i=1\}=0$$</p>
<p>Consider the 'sequence' of sequences $\{x^{(n)}_i\}_{i\geq 1}$, where $x^{(n)}$ is defined as follows:</p>
<p>1) $x^{(n)}_i=\frac{a_i}{\sum_{j=1}^{n}a_j^2}\,\,\,$ if $i\leq n$</p>
<p>2) $x^{(n)}_i=0\,\,\,$ if $i> n$</p>
<p>It is easy to see that all the sequences $x^{(n)}$ satisfy the constraint, i.e. $\sum_{i=1}^{\infty}a_ix^{(n)}_i=1$. Moreover:</p>
<p>$$ \lim_{n\rightarrow \infty} \sum_{i=1}^{\infty}[x^{(n)}_i]^2=\lim_{n\rightarrow \infty}\frac{1}{\sum_{j=1}^{n}a_j^2}=0$$</p>
<p>$\square$</p>
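<p>A finite truncation illustrates the square-summable case numerically (a sketch with the hypothetical choice $a_i = 1/i$; any square-summable sequence works the same way):</p>

```python
import random

# Truncated sequence a_i = 1/i, i = 1..N (square summable).
N = 1000
a = [1.0 / i for i in range(1, N + 1)]
S = sum(ai * ai for ai in a)

# Candidate minimizer x_i = a_i / sum(a_j^2); it satisfies the constraint.
x = [ai / S for ai in a]
assert abs(sum(ai * xi for ai, xi in zip(a, x)) - 1.0) < 1e-9
best = sum(xi * xi for xi in x)          # equals 1 / S

# Any other feasible vector has a larger sum of squares (Cauchy-Schwarz).
random.seed(0)
for _ in range(5):
    y = [xi + random.uniform(-0.01, 0.01) for xi in x]
    c = sum(ai * yi for ai, yi in zip(a, y))
    y = [yi / c for yi in y]             # rescale to restore the constraint
    assert sum(yi * yi for yi in y) >= best - 1e-12
print("candidate minimizer beat all perturbations")
```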
|
3,910,013 | <p>I'm preparing for a high school math exam and I came across this question in an old exam.</p>
<p>Let <span class="math-container">$f(x) = \dfrac{1}{2(1+x^3)}$</span>.</p>
<p><span class="math-container">$\alpha \in (0, \frac{1}{2})$</span> is the only real number such that <span class="math-container">$f(\alpha) = \alpha$</span>.</p>
<p><span class="math-container">$(u_n)$</span> is the series such that
<span class="math-container">$$
\begin{cases}
u_0 = 0 \\
u_{n+1} = f(u_n) \quad \forall n \in \mathbb{N}
\end{cases}
$$</span>
Prove that <span class="math-container">$|u_{n+1} - \alpha| \le \frac{1}{2} |u_n - \alpha|$</span></p>
<p>The end goal is to prove that <span class="math-container">$(u_n)$</span> converges to <span class="math-container">$\alpha$</span>. But we have to do it this way instead of finding that both <span class="math-container">$(u_{2n})$</span> and <span class="math-container">$(u_{2n+1})$</span> converge to <span class="math-container">$\alpha$</span>.</p>
<p>It's supposed to be a very easy question. But I have been trying for the last two hours and I couldn't find it.</p>
<p>It's easy to prove that <span class="math-container">$u_n \in [0, \frac{1}{2}]$</span> by induction.</p>
<p>Here's what I tried:</p>
<ul>
<li>I tried separating the question into two cases <span class="math-container">$u_n \le \alpha$</span> and <span class="math-container">$u_n \ge \alpha$</span>.
I got that I just need to prove that <span class="math-container">$u_{n+1} + \frac{1}{2} u_n \ge \frac{3}{2} \alpha$</span>. But I didn't know how to go from here. Substituting <span class="math-container">$u_{n+1}$</span> with <span class="math-container">$\dfrac{1}{2(1+u_n^3)}$</span> seems to only make the problem more complicated.</li>
<li>I tried squaring both sides. It only made the expression more complicated.</li>
</ul>
<p>The problem vaguely reminds me of the epsilon-delta definition of <span class="math-container">$\lim_{x \to \alpha} f(x) = \alpha$</span>. But that seems like it won't lead anywhere.</p>
| Michelle | 718,613 | <p>We can write
<span class="math-container">$$
|u_{n+1} - \alpha| \le \frac{1}{2} |u_n - \alpha| \iff |f(u_{n}) - f(\alpha)| \le \frac{1}{2} |u_n - \alpha|
$$</span>
which reminds us of a famous theorem in analysis (the mean value theorem). Since <span class="math-container">$f \in \mathcal{C^\infty}$</span> and both <span class="math-container">$u_n$</span> and <span class="math-container">$\alpha$</span> lie in <span class="math-container">$[0, \frac12]$</span>, we only have to show that <span class="math-container">$\vert f' \vert \le \frac 1 2$</span> there to apply it. But
<span class="math-container">$$
f'(x)= \frac{-3x^2}{2(1+x^3)^2}
$$</span>
so, for <span class="math-container">$x \ge 0$</span>,
<span class="math-container">$$
\vert f'(x) \vert = \frac{3}{2} \cdot \frac{x^2}{(1+x^3)^2} \le \frac{1}{2} \iff \sqrt{3}\,x \le 1+x^3 \iff P(x)\ge0
$$</span>
where <span class="math-container">$P(x)=\frac{x^3+1}{\sqrt{3}}-x$</span>. After computing <span class="math-container">$P'$</span>, you see that <span class="math-container">$P(x) \ge 0$</span> for <span class="math-container">$x \ge 0$</span>. So by induction, we have
<span class="math-container">$$
|u_{n+1} - \alpha| \le \frac{1}{2} |u_n - \alpha| \le \left(\frac{1}{2}\right)^2 |u_{n-1} - \alpha| \le \cdots \le \left(\frac{1}{2}\right)^{n+1} |u_{0} - \alpha|.
$$</span></p>
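<p>A short numerical sketch (a verification aid, not part of the proof) locates <span class="math-container">$\alpha$</span> by bisection and checks the contraction along the orbit:</p>

```python
def f(x):
    return 1.0 / (2.0 * (1.0 + x ** 3))

# Locate the fixed point alpha in (0, 1/2) by bisection on f(x) - x,
# which is strictly decreasing there.
lo, hi = 0.0, 0.5
for _ in range(80):
    mid = (lo + hi) / 2.0
    if f(mid) > mid:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2.0

# Iterate u_{n+1} = f(u_n) from u_0 = 0; each step at least halves the distance.
u = 0.0
for _ in range(30):
    nxt = f(u)
    assert abs(nxt - alpha) <= 0.5 * abs(u - alpha) + 1e-12
    u = nxt
assert abs(u - alpha) < 1e-9
print(f"alpha = {alpha:.6f}")
```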
|
3,910,013 | <p>I'm preparing for a high school math exam and I came across this question in an old exam.</p>
<p>Let <span class="math-container">$f(x) = \dfrac{1}{2(1+x^3)}$</span>.</p>
<p><span class="math-container">$\alpha \in (0, \frac{1}{2})$</span> is the only real number such that <span class="math-container">$f(\alpha) = \alpha$</span>.</p>
<p><span class="math-container">$(u_n)$</span> is the series such that
<span class="math-container">$$
\begin{cases}
u_0 = 0 \\
u_{n+1} = f(u_n) \quad \forall n \in \mathbb{N}
\end{cases}
$$</span>
Prove that <span class="math-container">$|u_{n+1} - \alpha| \le \frac{1}{2} |u_n - \alpha|$</span></p>
<p>The end goal is to prove that <span class="math-container">$(u_n)$</span> converges to <span class="math-container">$\alpha$</span>. But we have to do it this way instead of finding that both <span class="math-container">$(u_{2n})$</span> and <span class="math-container">$(u_{2n+1})$</span> converge to <span class="math-container">$\alpha$</span>.</p>
<p>It's supposed to be a very easy question. But I have been trying for the last two hours and I couldn't find it.</p>
<p>It's easy to prove that <span class="math-container">$u_n \in [0, \frac{1}{2}]$</span> by induction.</p>
<p>Here's what I tried:</p>
<ul>
<li>I tried separating the question into two cases <span class="math-container">$u_n \le \alpha$</span> and <span class="math-container">$u_n \ge \alpha$</span>.
I got that I just need to prove that <span class="math-container">$u_{n+1} + \frac{1}{2} u_n \ge \frac{3}{2} \alpha$</span>. But I didn't know how to go from here. Substituting <span class="math-container">$u_{n+1}$</span> with <span class="math-container">$\dfrac{1}{2(1+u_n^3)}$</span> seems to only make the problem more complicated.</li>
<li>I tried squaring both sides. It only made the expression more complicated.</li>
</ul>
<p>The problem vaguely reminds me of the epsilon-delta definition of <span class="math-container">$\lim_{x \to \alpha} f(x) = \alpha$</span>. But that seems like it won't lead anywhere.</p>
| Calvin Lin | 54,563 | <p>As mentioned by others, the key is to show that</p>
<blockquote>
<p>Prove that for <span class="math-container">$x,y \in \left(0,\frac12\right)$</span> we have
<span class="math-container">$$|f(x)-f(y)| \le \frac12 |x-y|.$$</span>
and then apply this to <span class="math-container">$x = u_n$</span> and <span class="math-container">$y = \alpha$</span>.</p>
</blockquote>
<p>We can do so directly, without the need of calculus / MVT.</p>
<p>The LHS is</p>
<p><span class="math-container">$$ | \frac{1}{2(1+x^3 ) } - \frac{1}{2 ( 1 + y^3) } | = \frac{1}{2} |(x-y)| \times | \frac{ x^2 + xy + y^2 } { (1+x^3 ) ( 1+y^3)}|.$$</span></p>
<p>It remains to show that the last term is <span class="math-container">$\leq 1$</span>, which can be done in several ways like</p>
<p><span class="math-container">$$x^2 + xy + y^2 \leq (x+y)^2 \leq 1 \leq ( 1 + x^3 ) ( 1+y^3 ). $$</span></p>
|
52,841 | <p>In classical Mechanics, momentum and position can be paired together to form a symplectic manifold. If you have the simple harmonic oscillator with energy $H = (k/2)x^2 + (m/2)\dot{x}^2$. In this case, the orbits are ellipses. How is the vector field determined by the (symplectic) gradient, then? </p>
<p>Also, does anyone know an interpretation for the area inside a closed curve in phase space?</p>
| Patrick I-Z | 11,885 | <p>The symplectic area contained in a closed curve, that is, the boundary of a map of a disc, is the "action along the curve".
$$
\int_\sigma \omega = \int_\sigma d\lambda = \int_{\partial \sigma} \lambda = \int_0^{2\pi} \lambda_{\gamma(t)}(\dot \gamma(t)) dt,
$$
where $\sigma$ is a smooth map from the disc to $M$, and $\gamma = \partial \sigma$. In all cases, the pullback of the 2-form $\omega$ by $\sigma$ is exact since the disc is contractible, so there exists a primitive $\lambda$ on the disc, and you apply Stokes' theorem.</p>
<hr>
<p><em>[I apologize for the lengthy answer]</em></p>
<p>Let me try to elaborate a little bit on a not too complicated but not that simple example to see where the symplectic form makes sense. Let us consider a point on the sphere $S^2$, let
$$
TS^2 = \{ (x,v) \in S^2 \times {\bf R}^3 \mid x \cdot v = 0 \}
$$
Let
$$
L : TS^2 - S^2 \to {\bf R} \quad \mbox{with} \quad L(x,v) = \Vert v \Vert
$$
be the "length function" as lagrangian. And you look for the variational problem
$$
\delta \int L(x(t),\dot x(t))\ dt = \delta \int \Vert \dot x(t) \Vert\ dt = 0.
$$
I don't put the limits of the integral on purpose; it would lead to too long a discussion. Since the lagrangian is homogeneous of degree 1 in $v$, we have the Euler identity
$$
L(x,v) = \frac{\partial L(x,v)}{\partial v}(v)
$$
And the nature of the partial derivative involved above is a map from $TS^2-S^2$ to the cotangent $T^*S^2$
$$
\forall v \in T_xS^2 - \{0\}, \quad \frac{\partial L(x,v)}{\partial v} = \frac{\bar v}{\Vert v \Vert} \in T^*_xS^2
$$
where the bar denotes the transposed, that is $\bar v w = v \cdot w$. Let's call this map $P$
$$
P : TS^2 - S^2 \to T^*S^2 \quad \mbox{with} \quad P(x,v) = \left(x,\frac{\partial L(x,v)}{\partial v}\right) = \left(x, \frac{\bar v}{\Vert v \Vert}\right).
$$
Now let $\lambda = pdx$ the Liouville form on $T^*S^2$, its pullback by $P$, integrated along the curve $\gamma = [t \mapsto (x(t),\dot x(t))]$ is exactly the action
$$
\int \Vert \dot x(t) \Vert \ dt = \int_\gamma P^*(\lambda) = \int_{P \circ \gamma} \lambda.
$$
Now, let $\tilde \gamma = P \circ \gamma$, this is a path in the image $Y$ of $P$, which is the <em>unit-cotangent</em> bundle
$$
Y = {\rm Im}(P) = \{ (x, \bar u) \in T^*S^2 \mid \bar u u = 1 \}
$$
And the variational condition becomes then
$$
\delta \int_{\tilde \gamma} \lambda = \int d\lambda\left(\delta\tilde\gamma(t), \frac{d\tilde \gamma}{dt}\right)\ dt = 0.
$$
But $\varpi = d\lambda$ is a 2-form on $Y \simeq US^2 \simeq SO(3)$ which is of odd dimension, actually $3= 2\times 2 -1$. Now, $\varpi$ has a kernel of dimension 1, and $\gamma$ is a solution of the variational problem if and only if
$$
\frac{d\tilde \gamma}{dt} \in \ker \varpi_{\tilde \gamma(t)}
$$
In this case, the kernel is given explicitly by
$$
\frac{dx}{dt} = \alpha u \quad \mbox{and} \quad \frac{du}{dt}= -\alpha x.
$$
The quotient space ${\cal S} = Y/\ker\varpi$, the space of solutions of the variational problem, is then equivalent to the sphere $S^2$, thanks to the (SO(3)-moment map)
$$
\pi : (x,u) \mapsto x \times u.
$$
By construction this space inherits a symplectic form $\omega$ such that
$$
\pi^*(\omega) = \varpi.
$$
And $({\cal S}, \omega)$ is the space of oriented non parametrized geodesics of the sphere $S^2$ (which by chance is also a sphere $S^2$). Finally what do we get? A space $Y \simeq US^2 \simeq SO(3)$ made of couples $(x,u)$ or matrices $y=[x\ u \ x \times u]$, a 1-form $\lambda$, the "action-form" (actually called the "Cartan 1-form"), a characteristic distribution $y \mapsto \ker(d\lambda)$ whose leaves are the pre-images of the point of the sphere $S^2$ by the moment map $\mu : (x,u) \mapsto x \times u$, and the image of $\mu$ is a symplectic manifolds for the projection $\omega$ of $d\lambda$. Note that in this case $\omega$, proportional to the standard area-form, is closed but not exact.</p>
<p>Now you can ask the same question as previously: "What is the meaning of the area enclosed by a disc $\sigma : D^2 \to {\cal S}$?"</p>
<p>Consider the pullback by $\sigma$ of the $S^1$-principal bundle $\pi : Y \to {\cal S}$. This is a principal bundle over $D^2$; but $D^2$ is contractible, so this fiber bundle is trivial, and thus it admits a smooth section, that is, a lift $\tilde \sigma : D^2 \to Y$ with $\pi \circ \tilde \sigma = \sigma$. Now,
$$
\int_\sigma \omega = \int_{\pi\circ\tilde\sigma} \omega = \int_{\tilde\sigma} \pi^*(\omega) = \int_{\tilde\sigma} d\lambda = \int_{\tilde\gamma} \lambda \quad \mbox{with} \quad \tilde\gamma = \partial\tilde\sigma.
$$
Let us write $\tilde \gamma(s) = (x_s,\bar u_s) \in Y$, and let us assume that the parameter $s$ runs over $[0,2\pi]$ to describe $\tilde \gamma = \partial \tilde \sigma$, then
$$
\int_\sigma \omega = \int_{\tilde\gamma} \lambda = \int_0^{2\pi} \bar u_s \frac{dx_s}{ds} \ ds.
$$
And this is the action of the unit vector $s \mapsto u_s$ distribution along the curve $s \mapsto x_s$. And let us remember that the vector $x_s \times u_s$ describes a geodesic of the sphere $S^2$ for all $s$, and $s$ is not the time parameter of this geodesic.</p>
<hr>
<p>Note 1. This construction can be applied to any homogeneous Lagrangian; for a non-homogeneous Lagrangian, we first homogenize it and then apply this construction.</p>
<p><strong>Bibliography</strong> <em>Jean-Marie Souriau, "Structure des Systèmes Dynamiques", Dunod ed., Paris 1970</em></p>
|
4,577,651 | <p>From Gallian's "Contemporary Abstract Algebra", Part 2 Chapter 5</p>
<p>It looks like using Lagrange's theorem would work, since <span class="math-container">$|S_n| = n!$</span> and <span class="math-container">$\langle\alpha\rangle$</span> is a subgroup of <span class="math-container">$S_n$</span>. However, that hasn't been covered in the book at this point, so I'm assuming a different solution is expected</p>
<p><span class="math-container">$\alpha$</span> can be broken up into disjoint cycles <span class="math-container">$\alpha_1\dots\alpha_m$</span> such that <span class="math-container">$|\alpha_1| + \dots +|\alpha_m| = n$</span>, and then <span class="math-container">$|\alpha| = \operatorname{lcm}(|\alpha_1|, \dots, |\alpha_n|)$</span>. Don't know how to continue though</p>
| Mike F | 6,608 | <p>You've almost solved it yourself! The numbers <span class="math-container">$|\alpha_i|$</span> are all between <span class="math-container">$1$</span> and <span class="math-container">$n$</span> so, from what you wrote, <span class="math-container">$|\alpha|$</span> is the least common multiple of a set of numbers between <span class="math-container">$1$</span> and <span class="math-container">$n$</span>. On the other hand <span class="math-container">$n!$</span> is the product of all the numbers from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>. One definition of the least common multiple of a set of numbers is that it divides any number which is divisible by all the numbers in the set.</p>
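<p>As a concrete check (a brute-force verification sketch, not a proof), one can compute <span class="math-container">$|\alpha|$</span> as the lcm of the cycle lengths and confirm it divides <span class="math-container">$n!$</span> for every permutation of a small <span class="math-container">$S_n$</span>:</p>

```python
from math import gcd, factorial
from itertools import permutations

def order(p):
    # Order of the permutation p (a tuple of images of 0..n-1):
    # the lcm of its cycle lengths.
    n = len(p)
    seen = [False] * n
    result = 1
    for i in range(n):
        if not seen[i]:
            length = 0
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            result = result * length // gcd(result, length)
    return result

n = 6
assert all(factorial(n) % order(p) == 0 for p in permutations(range(n)))
print("order(p) divides 6! for all 720 permutations of S_6")
```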
|
133,604 | <p>"A cycloid is the curve traced by a point on the rim of a circular wheel as the wheel rolls along a straight line." - Wikipedia</p>
<p><img src="https://i.stack.imgur.com/Rl1WS.gif" alt="cycloid animation"></p>
<p>In many calculus books I have, the cycloid, in parametric form, is used in examples to find arc length of parametric equations. This is the parametric equation for the cycloid:</p>
<p>$$\begin{align*}x &= r(t - \sin t)\\
y &= r(1 - \cos t)\end{align*}$$</p>
<p>How are these equations found in the first place?</p>
| Robert Israel | 8,508 | <p>$t$ measures the angle through which the wheel has rotated, starting with your point in the "down" position. Since the wheel is rolling, the distance it has rolled is the distance along the circumference of the wheel from your point to the "down" position, which (since the wheel has radius $r$) is $rt$. So the centre of the wheel, which was initially at $(0,r)$, is now at $(rt,r)$. Your point is displaced from this by $-r\sin(t)$ horizontally
and $-r\cos(t)$ vertically, so it is at $(rt - r\sin(t), r - r\cos(t))$.</p>
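<p>These equations can be probed numerically. The sketch below checks that the tracing point starts at the origin, returns to the line after one full turn (shifted by the circumference $2\pi r$), and that one arch has arc length $8r$, a classical consequence of the parametrization (the code itself is just a verification aid):</p>

```python
import math

def cycloid(r, t):
    # The parametric equations derived above.
    return (r * (t - math.sin(t)), r * (1.0 - math.cos(t)))

r = 2.0
assert cycloid(r, 0.0) == (0.0, 0.0)           # point starts at the bottom

x, y = cycloid(r, 2.0 * math.pi)               # after one full turn of the wheel
assert abs(x - 2.0 * math.pi * r) < 1e-12 and abs(y) < 1e-12

# Polygonal approximation of the arc length of one arch.
n = 100_000
length = 0.0
px, py = cycloid(r, 0.0)
for k in range(1, n + 1):
    qx, qy = cycloid(r, 2.0 * math.pi * k / n)
    length += math.hypot(qx - px, qy - py)
    px, py = qx, qy
assert abs(length - 8.0 * r) < 1e-3
print(f"arc length of one arch: {length:.6f} (8r = {8.0 * r})")
```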
|
1,879,129 | <p>If $0 < y < 1$ and $-1 < x<1$, then prove that $$\left|\frac{x(1-y)}{1+yx}\right| < 1$$</p>
| Asinomás | 33,907 | <p>to prove $|\frac{x(1-y)}{1+yx}|<1$ it suffices to show $|1+yx|>|x(1-y)|$.</p>
<p>If $x\geq 0$ then $|1+yx|=1+yx\geq 1>x\geq x(1-y)=|x(1-y)|$</p>
<p>if $x<0$ then $|1+yx|=1+yx>1-y>|x||1-y|=|x(1-y)|$</p>
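<p>A quick random-sampling sketch (a sanity check, not a proof) supports the inequality on the stated ranges:</p>

```python
import random

random.seed(42)
for _ in range(10_000):
    x = random.uniform(-1.0, 1.0)   # open-interval endpoints have probability ~0
    y = random.uniform(0.0, 1.0)
    assert abs(x * (1.0 - y) / (1.0 + y * x)) < 1.0
print("inequality held on all samples")
```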
|
4,367,893 | <p>I need help finding the multiplicative inverse for a = 123, m = 256</p>
<p>I ran it through a Python script and it gave me 179.</p>
<p>I want to do it by pen and paper and see what the algorithm is doing.</p>
<p>so here is what I got so far</p>
<p>The Euclidean algorithm:</p>
<pre><code>256 = 123 * 2 + 10
123 = 10 * 12 + 3
10 = 3 * 3 + 1
</code></pre>
<p>Its equivalences:</p>
<pre><code>1 = 10 - 3 * 3
3 = 123 - 10 * 12
10 = 256 - 123 * 2
</code></pre>
<p>It is when I need to back-substitute that I get confused: since I need to substitute 3 by its equivalent, do I need to do it for both 3's?</p>
<pre><code>1 = 10 - (123 - 10 * 12) * (123 - 10 * 12)
</code></pre>
<p>A full step by step solution would be appreciated</p>
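<p>For what it's worth, in the back-substitution only the <em>remainder</em> is replaced by its expression; the other 3 is a coefficient and stays put: $1 = 10 - 3\cdot3 = 10 - 3(123 - 12\cdot10) = 37\cdot10 - 3\cdot123 = 37(256 - 2\cdot123) - 3\cdot123 = 37\cdot256 - 77\cdot123$, so the inverse is $-77 \equiv 179 \pmod{256}$. A recursive extended-Euclid sketch (a verification aid, not the pen-and-paper work itself) reproduces this:</p>

```python
def extended_gcd(a, b):
    # Returns (g, s, t) with g = gcd(a, b) and g == s*a + t*b.
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    # Unwind one substitution step of the back-substitution.
    return g, t, s - (a // b) * t

g, s, t = extended_gcd(123, 256)
assert g == 1                       # 123 and 256 are coprime
inverse = s % 256
print(inverse)                      # 179
assert (123 * inverse) % 256 == 1   # 123 * 179 = 22017 = 86 * 256 + 1
```

<p>(In Python 3.8+, <code>pow(123, -1, 256)</code> returns the same inverse directly.)</p>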
| Community | -1 | <p>Being isolated is a property defined only in terms of open sets, so if <span class="math-container">$f$</span> is the homeomorphism, then <span class="math-container">$x$</span> is isolated if and only if <span class="math-container">$f(x)$</span> is. Thus, <span class="math-container">$f$</span> induces a bijection between the isolated point sets.</p>
|
4,367,893 | <p>I need help finding the multiplicative inverse for a = 123, m = 256</p>
<p>I ran it through a Python script and it gave me 179.</p>
<p>I want to do it by pen and paper and see what the algorithm is doing.</p>
<p>so here is what I got so far</p>
<p>The Euclidean algorithm:</p>
<pre><code>256 = 123 * 2 + 10
123 = 10 * 12 + 3
10 = 3 * 3 + 1
</code></pre>
<p>Its equivalences:</p>
<pre><code>1 = 10 - 3 * 3
3 = 123 - 10 * 12
10 = 256 - 123 * 2
</code></pre>
<p>It is when I need to back-substitute that I get confused: since I need to substitute 3 by its equivalent, do I need to do it for both 3's?</p>
<pre><code>1 = 10 - (123 - 10 * 12) * (123 - 10 * 12)
</code></pre>
<p>A full step by step solution would be appreciated</p>
| Matematleta | 138,929 | <p>If <span class="math-container">$f:X\to Y$</span> is a homeomorphism, then it is an open map. Let <span class="math-container">$x$</span> be isolated, i.e. <span class="math-container">$\{x\}\in \tau_X.$</span> As <span class="math-container">$f$</span> is bijective and open, <span class="math-container">$f(\{x\})=\{f(x)\}$</span> is open, which shows <span class="math-container">$f(x)$</span> is isolated. To finish, apply the same reasoning to <span class="math-container">$f^{-1}.$</span></p>
|
1,985,905 | <p>I was wondering if the cardinality of a set is a well defined function, more specifically, does it have a well defined domain and range?</p>
<p>One would say you could assign a number to every finite set, and a cardinality for an infinite set. So the range would be clear, the set of cardinal numbers. But what about the domain, here we get a few problems. This should be the set of all sets, yet this concept isn't allowed in mathematics as it leads to paradoxes like Russell's paradox.</p>
<p>So how do we formalize the notion of 'cardinality'? It seems to behave like a function that maps sets into cardinal numbers, but you can't define it this way as that definition would be based on a paradoxical notion. Even if we only restrict ourselves to finite sets the problem pops up, as we could define the set {A} for every set, thereby showing a one-to-one correspondence between 'the set of all sets' (that doesn't exist) and the 'set of all sets with one element'.</p>
<p>So how should one look at the concept of cardinality? You can't reasonably call it a function. Formalizing this concept without getting into paradoxes seems very hard indeed.</p>
| Vladimir Kanovei | 304,155 | <p>Being equinumerous or bijective is an equivalence relation on sets: $x\equiv y$ iff there is a bijection $f:x\text{ onto }y$. The problem is to define a total set function (a proper class of course) $F$ satisfying $x\equiv y$ iff $F(x)=F(y)$. </p>
<p>In ZFC this is done by the cardinality function $F(x)=\text{card}(x)$, whose values are cardinals, and it satisfies $x\equiv F(x)$, that is, is a true transversal. </p>
<p>In ZF, this is done by $F(x)=$ the set of all sets $y$ with $x\equiv y$, which have the least von Neumann's rank among all such sets, and then generally $x\not\equiv F(x)$, of course. </p>
<p>I don't know, though, whether anyone has succeeded in proving that in ZF there is no true transversal for $\equiv$.</p>
|
108,953 | <p>Given a variety $X$ over $\mathbb{Q}$ with good reduction at $p$, proper smooth base change tells us that its $l$-adic cohomology groups are unramified at $p$ (and I'd guess some $p$-adic Hodge theory tells us its p-adic cohomology is crystalline).</p>
<p>My question is to what extent it's possible to find a converse to this statement. More precisely, I have yet to see a counterexample to the following "conjecture" (though I still suspect it's wrong).</p>
<p><strong>"Conjecture"</strong>: Let $K$ be a number field, $p$ and $l$ primes, and $V$ a geometric (say, coming from the variety $Y$) $l$-adic representation of $G_K$ that is unramified/crystalline at $\mathfrak{p}|p$. Then there exists a smooth proper variety $X$ such that $X$ has good reduction at $\mathfrak{p}$ and $V$ can be cut out of the cohomology of $X$.</p>
<p>From googling around, the things I know so far are (at least for $l \not= p$):</p>
<ul>
<li>If $Y$ is an abelian variety, the classical Neron-Ogg-Shafarevich condition means that $Y$ itself is a witness to the conjecture.</li>
<li>We can take torsors for abelian varieties with no $K$-rational points, and these can have the same representations, but fail to have good reduction (in this paper <a href="http://arxiv.org/abs/math/0605326">http://arxiv.org/abs/math/0605326</a> of Dalawat).</li>
<li>There exist curves which have bad reduction, but whose Jacobians have good reduction.</li>
</ul>
<p>If anyone knows any more about this story I'd be interested to hear. Ultimately I guess it would be nice to have a definition for when a motive is unramified/has good reduction, and cohomologically this surely has to mean unramified/crystalline, but it would be nice if this could always be realised "geometrically".</p>
<p>Thanks,
Tom.</p>
| Tzanko Matev | 421 | <p>Unfortunately I don't know much about motives in general, but this might be relevant to your question. One result of my thesis, which I am currently writing, is to prove Neron-Ogg-Shafarevich for 1-motives. The proof is not particularly difficult and it ultimately reduces to the corresponding results for the components of the 1-motive. I will describe below what good reduction means in this particular case.</p>
<p>A 1-motive $M = [u\colon Y\to G]$ over a scheme $S$ consists of a group scheme $Y$, which is locally etale isomorphic to $\mathbb{Z}^r$, a group scheme $G$ which is an extension of an abelian scheme $A$ by a torus $T$ and a homomorphism $u\colon Y\to G$. If $S$ is the spectrum of a field $K$, this means that $Y$ is a free finitely-generated $\mathbb{Z}$-module with a continuous action of the absolute Galois group $\Gamma_K$ and that $u$ is a $\Gamma_K$-equivariant homomorphism $u\colon Y\to G(\bar K)$.</p>
<p>If $R$ is a complete discrete valuation ring with a fraction field $K$ we say that a 1-motive $M$ over $K$ has good reduction if there exists a 1-motive $\widetilde{M}$ over $R$ whose generic fiber is isomorphic to $M$. This is equivalent to the following:</p>
<ul>
<li>$G$ has good reduction $\widetilde{G}$ over $R$, which is equivalent to saying that both $A$ and $T$ have good reduction;</li>
<li>The action of $\Gamma_K$ on $Y$ is unramified;</li>
<li>The image of $u(Y)$ is contained in the set of those points in $G(K')$ which can be reduced, where $K'/K$ is some finite field extension. Equivalently, $u(Y)$ is contained in the maximal compact subgroup of $G(K')$;</li>
</ul>
<p>With this definition, the criterion of Neron-Ogg-Shafarevich is as follows: Let $l,p$ be primes, with $l\neq p$. A 1-motive $M/\mathbb{Q}$ has good reduction mod p if and only if the Tate module $T_l(M)$ is unramified at $p$. For general number fields replace $p$ by a prime ideal.</p>
<p>If you want to learn more about reduction of 1-motives you can look at M. Raynaud's paper 1-Motifs et Monodromie Géométrique. </p>
|
2,574,221 | <p>Does divergence of $\sum a_k$ imply divergence of $\sum \frac{a_k}{1+a_k}$?</p>
<p>Note: $a_k > 0 $</p>
<p>I understand that, looking at the contrapositive statement, we can say that the convergence of the latter sum implies $\frac{a_k}{1+a_k}\rightarrow 0$, but from here, is it possible to deduce that $a_k\rightarrow 0$? It is not completely straightforward. If we assume $a_k$ to be convergent, this trivially follows, but it could diverge, in which case this is nontrivial to me.</p>
| Gribouillis | 398,505 | <p>In the case where the $a_n$ can change sign, let
$$b_n = \frac{(-1)^n}{\sqrt{n}}\quad \text{and}\quad a_n=\frac{b_n}{1-b_n} = \frac{(-1)^n}{\sqrt{n}}\frac{1}{1-\frac{(-1)^n}{\sqrt{n}}} =
\frac{(-1)^n}{\sqrt{n}} + \frac{1}{n} + O\left(\frac{1}{n^{3/2}}\right)$$
then one has $b_n = \frac{a_n}{1+a_n}$, the series $\sum a_n$ diverges but the series $\sum b_n$ converges.</p>
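<p>The relation $b_n=\frac{a_n}{1+a_n}$ and the divergence/convergence contrast can be illustrated numerically; this is an illustration only (partial sums cannot prove divergence), written as a small Python sketch:</p>

```python
def b(n):
    return (-1) ** n / n ** 0.5

def a(n):
    # a_n = b_n / (1 - b_n), so that a_n / (1 + a_n) = b_n exactly
    return b(n) / (1 - b(n))

# check the algebraic identity numerically
for n in range(2, 2000):
    assert abs(a(n) / (1 + a(n)) - b(n)) < 1e-12

# partial sums: sum of b_n settles down (alternating series),
# while sum of a_n keeps growing like log N because of the 1/n term
S_a = sum(a(n) for n in range(2, 20001))
S_b = sum(b(n) for n in range(2, 20001))
print(S_a, S_b)  # S_a is large (grows with the cutoff), S_b stays small
```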
|
1,201,900 | <p>This is a rather soft question, so I will tag it as such.</p>
<p>Basically what I am asking, is if anyone has a good explanation of what a homomorphism is and what an isomorphism is, and if possible specifically pertaining to beginner linear algebra.</p>
<p>This is because, in my courses we have talked about vector spaces, linear transformations, etc., but we have always for some reason skipped the sections on isomorphisms and homomorphisms.</p>
<p>And yes I have tried to look on wikipedia and such, but it just isn't really clicking for me what it is and what it represents/use of it.</p>
<p>I am under the impression that two spaces with bijection are isomorphic to one another, but that is about it.</p>
<p>Any ideas/opinions?</p>
<p>Thanks!</p>
| James S. Cook | 36,530 | <p>Two vector spaces are isomorphic if they have the same dimension. This is equivalent to the existence of a bijective linear mapping between the spaces since to say $dim(V)=n$ is to say $\beta = \{ v_1, \dots ,v_n \}$ is a basis for $V$ and to say $dim(W)=n$ is to say $\gamma = \{ w_1, \dots ,w_n \}$ is a basis for $W$. Then we can define a linear transformation by simply setting:
$$ T(v_1)=w_1, \ \ T(v_2)=w_2, \ \ \dots \ \ T(v_n)=w_n $$
then extend linearly. This makes $T: V \rightarrow W$ an invertible linear map. In short, $V$ and $W$ have the same <strong>vector space structure</strong></p>
<p>The term homomorphism is naturally identified with linear transformation in this context as a linear transformation preserves the linear structure of a vector space. However, linear transformations need not be injective or surjective hence a linear transformation may or may not be an isomorphism.</p>
<p>The more general use of the term homomorphism or isomorphism depends on context. For groups, homomorphisms preserve the group structure. For a Lie algebra, you get a vector space isomorphism which also preserves the Lie bracket. There are many, many more cases.</p>
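<p>To make the "map a basis to a basis and extend linearly" construction above concrete, here is a tiny Python sketch; the specific bases of $\mathbb{R}^2$ are my own illustrative choice, not from the answer:</p>

```python
# Hypothetical bases (chosen for illustration): beta for V = R^2, gamma for W = R^2
beta  = [(1.0, 0.0), (1.0, 1.0)]
gamma = [(2.0, 1.0), (0.0, 3.0)]

def T(c1, c2):
    """Image of the vector with beta-coordinates (c1, c2): T(b_i) = w_i, extended linearly."""
    w1, w2 = gamma
    return (c1 * w1[0] + c2 * w2[0], c1 * w1[1] + c2 * w2[1])

# Linearity is automatic from the formula; bijectivity holds because gamma is a basis,
# i.e. the matrix with columns w1, w2 has nonzero determinant.
det_gamma = gamma[0][0] * gamma[1][1] - gamma[1][0] * gamma[0][1]
print(det_gamma)  # 6.0, nonzero, so T is invertible, hence an isomorphism
```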
|
2,985,917 | <p>Would it be possible to calculate which function in the Schwartz class of infinitely differentiable functions with compact support is closest to the triangle wave?</p>
<p>Let us measure closeness as <span class="math-container">$$\langle f-g,f-g\rangle_{L_2} = \int_{-\infty}^{\infty}(f(x)-g(x))^2dx$$</span></p>
<p>I don't expect my knowledge in functional analysis to be strong and polished enough to answer this, but maybe one of you guys know how to? </p>
<hr>
<p><strong>EDIT</strong> of course we need to edit triangle wave to have compact support. Say it has compact support on <span class="math-container">$$x \in [-1-2N,1+2N], N\in \mathbb Z_+$$</span>
In other words, <span class="math-container">$2N+1$</span> whole periods and it goes down to <span class="math-container">$0$</span> just at both ends of support.</p>
| Federico | 180,428 | <p>The Schwartz class is not closed with respect to the <span class="math-container">$L^2$</span> norm, so there isn't necessarily a closest element. Indeed, in this case there isn't, as you can find arbitrarily close functions (take for instance convolutions with an <a href="https://en.wikipedia.org/wiki/Dirac_delta_function#nascent_delta_function" rel="nofollow noreferrer">approximate identity</a>), but the triangle wave itself is not in the Schwartz class.</p>
|
2,410,517 | <p>I feel like I'm missing something very simple here, but I'm confused at how Rudin proved Theorem 2.27 c:</p>
<p>If <span class="math-container">$X$</span> is a metric space and <span class="math-container">$E\subset X$</span>, then <span class="math-container">$\overline{E}\subset F$</span> for every closed set <span class="math-container">$F\subset X$</span> such that <span class="math-container">$E\subset F$</span>. Note: <span class="math-container">$\overline{E}$</span> denotes the closure of <span class="math-container">$E$</span>; in other words, <span class="math-container">$\overline{E} = E \cup E'$</span>, where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p>
<p>Proof: If <span class="math-container">$F$</span> is closed and <span class="math-container">$F \supset E$</span>, then <span class="math-container">$F\supset F'$</span>, hence <span class="math-container">$F\supset E'$</span>. Thus <span class="math-container">$F \supset \overline{E}$</span>.</p>
<p>What I'm confused about is how we know <span class="math-container">$F \supset E'$</span> from the previous facts?</p>
| K.K.McDonald | 302,349 | <p>Let's go by contradiction. We know from the assumption that <span class="math-container">$E \subset F$</span>, thus we just need to prove that the limit points of <span class="math-container">$E$</span> are also in <span class="math-container">$F$</span>, i.e. <span class="math-container">$E' \subset F$</span>. Assume <span class="math-container">$x\in E'$</span> but <span class="math-container">$x\notin F$</span>. Since <span class="math-container">$x\notin F$</span>, we have <span class="math-container">$x\in F^c$</span>, and <span class="math-container">$F$</span> is closed, so <span class="math-container">$F^c$</span> is open. This means there is a neighborhood <span class="math-container">$N_r(x)$</span> of <span class="math-container">$x$</span> that is contained in <span class="math-container">$F^c$</span> and hence does not intersect <span class="math-container">$F$</span> (i.e. <span class="math-container">$N_r(x) \cap F = \emptyset$</span>). Since <span class="math-container">$E \subset F$</span>, also <span class="math-container">$N_r(x) \cap E = \emptyset$</span>, which contradicts the definition of a limit point: every neighborhood of <span class="math-container">$x$</span> must intersect <span class="math-container">$E$</span>.</p>
|
1,611,390 | <p>How to show that the following function is an injective function?</p>
<p>$ \varphi : \mathbb{N}\times \mathbb{N} \rightarrow \mathbb{N} \\
\varphi(\langle n, k\rangle) = \frac{1}{2}(n+k+1)(n+k)+n$</p>
<p>I'm starting with $ \frac{1}{2}(a+b+1)(a+b)+a = \frac{1}{2}(c+d+1)(c+d)+c$, but how am I supposed to show from this equality that $\langle a, b\rangle = \langle c, d\rangle$, where $\langle a, b\rangle \in \mathbb{N}\times \mathbb{N}$ ?</p>
| Pedro | 23,350 | <p>You should <em>really</em> make a drawing. The function enumerates the pairs $(n,k)$ in a diagonal manner. Note that $$(0,0)\mapsto 0,(0,1)\mapsto 1,(1,0)\mapsto 2, (0,2)\mapsto 3,(1,1)\mapsto 4,(2,0)\mapsto 5,\ldots$$</p>
<p>Thus, at least empirically, the function is enumerating $\Bbb N\times\Bbb N$ by traversing the $(n,k)$ grid diagonally from the upmost right. Having this in mind, try to produce a proof that this function is in fact a bijection. One way to go is to note how the function behaves when $n+k$ is constant. You can partition $\Bbb N\times\Bbb N$ into the sets $$S[m]=\{(n,k):n+k=m\}$$</p>
<p>and then show that the image of $S[m]$ under your function is the interval (in $\Bbb N$!) $$\left[\binom{m+1}2, \binom {m+2}2-1\right]$$ in increasing order, thus proving your function is a bijection. </p>
<p>This shouldn't prove too difficult since when $n+k=m$ you function sends $(n,k)$ to $\binom{m+1}2 +n$, thus you can let $n$ move and keep $n+k=m$. This increases from $n=0$ to $n=m$; when $$\binom{m+1}2+m=\frac{m^2+3m}2=\binom{m+2}2-1$$</p>
|
512,590 | <p>According to the definition my professor gave us its okay for a matrix in echelon form to have a zero row, but a system of equations in echelon form cannot have an equation with no leading variable.</p>
<p>Why is this? Aren't they supposed to represent the same thing?</p>
| preferred_anon | 27,150 | <p>Assuming you are familiar with the formula $z^{n}=(\cos(\alpha)+i\sin(\alpha))^{n}=\cos(n\alpha)+i\sin(n\alpha)$ at least for integers $n$, then you can see that$$z^{n}+z^{-n}=\cos(n\alpha)+i\sin(n\alpha)+\cos(-n\alpha)+i\sin(-n\alpha)$$
Since $\cos$ is even and $\sin$ is odd, this simplifies to your result.</p>
|
325,765 | <p>Is there any method which allows us to describe all continuous functions (maps to $\mathbb{R}$) on the quotient space?</p>
<p>For example, how could I classify all continuous functions on $\mathbb{R}/[x\sim2x]$?</p>
| Julien | 38,053 | <p>You can classify them easily: these are the constant functions, whatever Hausdorff topological space $Z$ they land in.</p>
<p>If $f:\mathbb{R}/[x\sim 2x]\longrightarrow Z$ is continuous, then
$$
g(x):=f(\bar{x})
$$
is continuous on $\mathbb{R}$ by composition with the canonical surjection $x\longmapsto \bar{x}$ from $\mathbb{R}$ onto $\mathbb{R}/[x\sim 2x]$.</p>
<p>Then for every $x$
$$
g(x)=g\left(\frac{x}{2}\right)=\ldots=g\left(\frac{x}{2^n}\right)\qquad\forall n\geq 1.
$$
Letting $n$ tend to $+\infty$ and using continuity at $0$, we get
$$
g(x)=g(0)\quad\forall x\in\mathbb{R}.
$$
So $g$, hence $f$ is constant.</p>
<p>The converse is clear: every constant function on the quotient space is continuous.</p>
|
325,765 | <p>Is there any method which allows us to describe all continuous functions (maps to $\mathbb{R}$) on the quotient space?</p>
<p>For example, how could I classify all continuous functions on $\mathbb{R}/[x\sim2x]$?</p>
| Damien L | 59,825 | <p>Let's say that your quotient is described by a relation $\rm R$. Then the <strong>Universal Property</strong> of the quotient topology tells you that there is a bijection $$ \text{ continuous functions on } \mathbb R/\mathrm{R} \longleftrightarrow \mathrm{R}-\text{invariant continuous functions on } \mathbb R.$$</p>
<p>Where $\rm R$-invariant means that $f(x) = f(y)$ when $x \sim_{\rm R} y$.</p>
|
112,226 | <p>Prove that there are exactly</p>
<p>$$\displaystyle{\frac{(a-1)(b-1)}{2}}$$ </p>
<p>positive integers that <em>cannot</em> be expressed in the form </p>
<p>$$ax\hspace{2pt}+\hspace{2pt}by$$</p>
<p>where $x$ and $y$ are non-negative integers, and $a, b$ are positive integers such that $\gcd(a,b) =1$.</p>
| Gerry Myerson | 8,269 | <p>Hints: Prove </p>
<p>If $ax+by=c$, and $ax'+by'=c$, then $b$ divides $x-x'$, and $a$ divides $y-y'$, and $(x-x')/b=(y'-y)/a$. </p>
<p>$n$ can be expressed if and only if $((a-1)(b-1)/2)-1-n$ can't. </p>
|
2,909,480 | <p>Please notice the following before reading: the following text is translated from Swedish and it may contain wrong wording. Also note that I am a first year student at an university - in the sense that my knowledge in mathematics is limited.</p>
<p>Translated text:</p>
<p><strong>Example 4.4</strong> Show that for all integers $n$, $n^3 - n$ is evenly divisible by $3$.<sup>1</sup> </p>
<p>Here we face a statement about <em>all integers</em>, not just the positive ones. But it is enough to treat the cases where $n$ is non-negative, for if $n$ is negative, put $m = -n$. Then $m$ is positive, $n^3 - n = -(m^3 - m)$, and if $3$ divides $a$, then $3$ also divides $-a$.</p>
<p>Here there is also a statement for $n = 0$, so we have a sequence $p_0, p_1, p_2, \; \ldots$ of statements; that the first statement has number $0$ rather than $1$ is of course of no deeper significance. Statement number $0$ says that $0^3 - 0$, which equals $0$, is evenly divisible by $3$, which is obviously true. If statement number $n$ is true, i.e. $n^3 - n = 3b$ for some integer $b$, then statement number $n+1$ must also be true, for</p>
<p>$
\begin{split}
(n + 1)^3 - (n + 1) & = n^3 - n + 3n^2 + 3n \\
& = 3b + 3n^2 + 3n \\
& =3(b + n^2 + n)
\end{split}
$</p>
<p>and $b + n^2 + n$ is an integer. What we were supposed to show now follows from the induction principle. $\square$</p>
<p><p>1. That an integer $a$ is "evenly divisible by 3" is everyday language rather than mathematical. The precise meaning is that there exists another integer $b$ such that $a = 3b$.</p></p>
<p><strong>In the above written text</strong>, I understand everything (or at least I think so) except </p>
<p>$
\begin{split}
(n + 1)^3 - (n + 1) & = n^3 - n + 3n^2 + 3n \\
& = 3b + 3n^2 + 3n \\
& =3(b + n^2 + n).
\end{split}
$</p>
<p>Could someone please explain what happened, because I am totally lost?</p>
| ArsenBerk | 505,611 | <p>Since the $n$-th statement is assumed to be true, first it writes $n^3 - n = 3b$ because it should be divisible by $3$.</p>
<p>Then, for $(n+1)$-th statement, it rearranges the expression $(n+1)^3-(n+1)$ as $(n^3+3n^2+3n+1) - (n+1) = (n^3-n) + (3n^2+3n)$, then puts $3b$ in the place of $(n^3-n)$. Then it concludes that $3n^2+3n+3b$ is divisible by $3$.</p>
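<p>The identity used in the induction step is easy to double-check mechanically; a trivial Python sketch verifying both the expansion and the divisibility claim over a range of integers:</p>

```python
for n in range(-100, 101):
    # the expansion used in the induction step
    assert (n + 1) ** 3 - (n + 1) == (n ** 3 - n) + 3 * n ** 2 + 3 * n
    # the claim itself
    assert (n ** 3 - n) % 3 == 0

print("checked n = -100 .. 100")
```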
|
409,220 | <p>$$f(x,y)=6x^3y^2-x^4y^2-x^3y^3$$
$$\frac{\partial f}{\partial x}=18x^2y^2-4x^3y^2-3x^2y^3$$
$$\frac{\partial f}{\partial y}=12x^3y-2x^4y-3x^3y^2$$
The points at which both partial derivatives are equal to 0 are: $(3,2)$, $(x,0)$, $(0,y)$, where $x,y$ are any real numbers. Now I find the second derivatives:
$$\Delta_1=\frac{\partial^2 f}{\partial x^2}=36xy^2-12x^2y^2-6xy^3$$
$$\frac{\partial^2 f}{\partial y^2}=12x^3-2x^4-6x^3y$$
$$\frac{\partial^2 f}{\partial x \,\partial y}=\frac{\partial^2 f}{\partial y \,\partial x} = 36x^2y-8x^3y-9x^2y^2$$
$$\Delta_2=\begin{vmatrix}\frac{\partial^2 f}{\partial x^2}&\frac{\partial^2 f}{\partial y\partial x}\\\frac{\partial^2 f}{\partial x\partial y}& \frac{\partial^2 f}{\partial y^2} \end{vmatrix}$$
After plugging in the point $(3,2)$ we get $\Delta_1<0$ and $\Delta_2>0$, so $(3,2)$ is a local maximum. But when I try to plug in $(x,0)$ and $(0,y)$ I obviously get $\Delta_1=0$ and $\Delta_2=0$, and I can't tell, using Sylvester's criterion, whether those points are minima or maxima or neither. What should I do?</p>
| john | 79,781 | <p>You could try looking at further derivatives but generally in this case it's better to think of the function itself. Imagine you're at a point (0,y) for instance. How does f change when you move a little in the y-direction? How does f change when you move a little in the x-direction? </p>
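<p>A direct check of the second-derivative test at $(3,2)$, using the second partials already computed in the question, can be written as a plain-Python sketch:</p>

```python
# second partial derivatives of f(x,y) = 6x^3y^2 - x^4y^2 - x^3y^3, as in the question
def fxx(x, y): return 36 * x * y**2 - 12 * x**2 * y**2 - 6 * x * y**3
def fyy(x, y): return 12 * x**3 - 2 * x**4 - 6 * x**3 * y
def fxy(x, y): return 36 * x**2 * y - 8 * x**3 * y - 9 * x**2 * y**2

x, y = 3, 2
d1 = fxx(x, y)                                   # leading principal minor of the Hessian
d2 = fxx(x, y) * fyy(x, y) - fxy(x, y) ** 2      # Hessian determinant
print(d1, d2)  # -144 11664: d1 < 0 and d2 > 0, so (3, 2) is a local maximum
```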
|
2,619,131 | <p>How one can prove the following inequality?</p>
<p>$$58x^{10}-42x^9+11x^8+42x^7+53x^6-160x^5+118x^4+22x^3-56x^2-20x+74\geq 0$$ </p>
<p>I plotted the graph on Wolfram Alpha and found that the inequality seems to hold. I was unable to represent the polynomial as a sum of squares. </p>
<p>It seems quite tedious to locate the zeros of the derivative numerically and to show that the values near the local minima prove that the inequality really holds everywhere.</p>
| Johan Löfberg | 37,404 | <p>A sum-of-squares decomposition is given by $z^TQz$ where $z = (1,x,x^2,x^3,x^4,x^5)$ and the positive definite matrix $Q$ is</p>
<p>$Q = \begin{bmatrix}
74 & -10 & -38 & 9 & 8 & -30\\
-10 & 20 & 2 & -8 & -22 & 9\\
-38 & 2 & 118 & -28 & -40 & 21\\
9 & -8 & -28 & 115 & 0 & -32\\
8 & -22 & -40 & 0 & 75 & -21\\
-30 & 9 & 21 & -32 & -21 & 58
\end{bmatrix}$</p>
<p>Found by solving an integrality constrained sum-of-squares problem (i.e., mixed-integer semidefinite program) in the MATLAB Toolbox YALMIP</p>
<pre><code>sdpvar x
p = 58*x^10-42*x^9+11*x^8+42*x^7+53*x^6-160*x^5+118*x^4+22*x^3-56*x^2-20*x+74;
z = monolist(x,5);
Q = sdpvar(6);
optimize([integer(Q),coefficients(z'*Q*z-p,x)==0,Q>=0],sum(sum(Q)));
</code></pre>
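<p>The claimed identity $p(x) = z^TQz$ can be re-verified independently of MATLAB/YALMIP by comparing coefficients; a plain-Python sketch (it checks the algebraic identity only, not the positive definiteness certificate):</p>

```python
Q = [
    [ 74, -10, -38,   9,   8, -30],
    [-10,  20,   2,  -8, -22,   9],
    [-38,   2, 118, -28, -40,  21],
    [  9,  -8, -28, 115,   0, -32],
    [  8, -22, -40,   0,  75, -21],
    [-30,   9,  21, -32, -21,  58],
]

# coefficient of x^k in z^T Q z, where z_i = x^i, is the k-th antidiagonal sum of Q
coeffs = [sum(Q[i][k - i] for i in range(6) if 0 <= k - i < 6) for k in range(11)]

# the polynomial from the question, lowest degree first
target = [74, -20, -56, 22, 118, -160, 53, 42, 11, -42, 58]
assert coeffs == target
print("z^T Q z reproduces the polynomial")
```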
|
760,767 | <p>I don't understand the last part of this proof:</p>
<p><a href="http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup" rel="nofollow">http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup</a></p>
<p>where they say: $p \nmid \left[{N : P \cap N}\right]$, thus, $P \cap N$ is a Sylow p-subgroup of $N$. I don't see why this implication is true. On the other hand, I understand that $P$ being a Sylow p-subgroup of $G$ implies that $p \nmid [G : P]$, for $[G:P]=[G:N_G(P)][N_G(P):P]$ and $p$ does not divide either of these two factors. So, what I don't understand is why the converse implication is true, that is, if $p \nmid [G : P]$ then $P$ is a Sylow p-subgroup of $G$.</p>
| Mahkoe | 124,475 | <p>I just realized where I went wrong. The length of an arc is the <em>radius of curvature</em> times the angle, not the radial distance between the arc and the origin times the angle. Thanks to all the other answers</p>
|
1,372,376 | <p>For what values of $a$ and $b$, the two functions $f_a(x)=ax^2+3x+1$ and $g_b(x)=\frac{b}{x}$ are tangent to each other at a point where the $x\text{-coordinate}=1$.</p>
<p>The points of intersection are where:
$f_a(1)=g_b(1)$</p>
<p>which gives
$$a+4=b\text{ and } b-4=a$$</p>
<p>Now what to do with this information? Or if my approach is right?</p>
| Harish Chandra Rajpoot | 210,295 | <p>Notice, the slope of tangent of $f_{a}(x)=ax^2+3x+1$ at a general point is given as $$\frac{d}{dx}(f_{a}(x))=\frac{d}{dx}(ax^2+3x+1)$$$$\color{red}{f'_{a}(x)=2ax+3}$$ Similarly, the slope of tangent of $g_{b}(x)=\frac{b}{x}$ at a general point is given as $$\frac{d}{dx}(g_{b}(x))=\frac{d}{dx}\left(\frac{b}{x}\right)$$$$\color{red}{g'_{b}(x)=-\frac{b}{x^2}}$$ Since, the curves $f_{a}(x)$ & $g_{b}(x)$ are tangent to each other at a point having $x=1$ Thus the angle between tangents at $x=1$ must be zero i.e. slopes of common tangent at both the curves are equal hence, we have $$$$ </p>
<p>$$(f'_{a}(x))_{x=1}=(g'_{b}(x))_{x=1}$$ $$\left(2ax+3\right)_{x=1}=\left(-\frac{b}{x^2}\right)_{x=1}$$ $$2a+3=-b$$ $$\bbox[4pt, border: 1px solid blue;] {\color{blue}{2a+b+3=0}}$$ Above is the relation between $a$ & $b$ coming from the slope condition. Combined with the condition that the curves actually meet at $x=1$, i.e. $f_a(1)=g_b(1)\iff a+4=b$, it determines the values uniquely: $a=-\frac{7}{3}$, $b=\frac{5}{3}$.</p>
|
1,372,376 | <p>For what values of $a$ and $b$, the two functions $f_a(x)=ax^2+3x+1$ and $g_b(x)=\frac{b}{x}$ are tangent to each other at a point where the $x\text{-coordinate}=1$.</p>
<p>The points of intersection are where:
$f_a(1)=g_b(1)$</p>
<p>which gives
$$a+4=b\text{ and } b-4=a$$</p>
<p>Now what to do with this information? Or if my approach is right?</p>
| Michael Hoppe | 93,935 | <p>Isn't “to be tangent to each other in $x_0$” defined by $f(x_0)=g(x_0)$ and $g'(x_0)=f'(x_0)$? So we know that $f(1)=g(1)$ and $f'(1)=g'(1)$. In this case we derive two linear equations with solutions $a=-7/3$ and $b=5/3$.</p>
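<p>The two conditions $f(1)=g(1)$ and $f'(1)=g'(1)$ form a small linear system that can be solved and checked exactly; a short Python sketch using rational arithmetic:</p>

```python
from fractions import Fraction

# f(1) = g(1)   ->  a + 4 = b
# f'(1) = g'(1) ->  2a + 3 = -b
# substituting b = a + 4 into the slope condition gives 3a = -7:
a = Fraction(-7, 3)
b = a + 4  # = 5/3

# verify both tangency conditions at x = 1
assert a * 1**2 + 3 * 1 + 1 == b / 1        # f_a(1) == g_b(1)
assert 2 * a * 1 + 3 == -b / 1**2           # f_a'(1) == g_b'(1)
print(a, b)  # -7/3 5/3
```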
|
1,274,816 | <p>It seems known that there are infinitely many numbers that can be expressed as a sum of two positive cubes in at least two different ways (per the answer to this post: <a href="https://math.stackexchange.com/questions/1192338/number-theory-taxicab-number">Number Theory Taxicab Number</a>).</p>
<p>We know that</p>
<p>$$1729 = 10^3+9^3 = 12^3 + 1^3,$$</p>
<p>and I am wondering if there are infinitely many numbers like this that can be expressed as the sum of two positive cubes in <em>exactly</em> two ways?</p>
<p>In fact, are there even any other such numbers?</p>
<p>EDIT:
As provided by MJD in the comments section, here are other examples:
$$4104 = 2^3+16^3 = 9^3+15^3,$$
$$13832 = 20^3+18^3=24^3+2^3,$$
$$20683 = 10^3 +27^3 = 19^3 +24^3.$$ </p>
| SHOUNAK GUPTA | 857,041 | <p>I found a number recently which can be expressed as a sum of two cubes in exactly two different ways.</p>
<pre><code> 65673928=164³+394³=103³+401³
</code></pre>
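<p>Numbers with exactly two representations as a sum of two positive cubes are easy to enumerate by brute force; a short Python sketch (the search cutoff is an arbitrary choice of mine):</p>

```python
from collections import defaultdict

LIMIT = 100  # consider cubes a^3 + b^3 with 1 <= a <= b <= LIMIT
reps = defaultdict(list)
for a in range(1, LIMIT + 1):
    for b in range(a, LIMIT + 1):
        reps[a ** 3 + b ** 3].append((a, b))

# only totals up to LIMIT**3 are guaranteed to have all their representations found
exactly_two = sorted(n for n, ways in reps.items()
                     if n <= LIMIT ** 3 and len(ways) == 2)
print(exactly_two[:5])  # [1729, 4104, 13832, 20683, 32832]

# the two representations quoted in the answer above do coincide:
assert 164 ** 3 + 394 ** 3 == 103 ** 3 + 401 ** 3
```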
|
2,626,920 | <p>I am working on a homework problem and I am given this situation:</p>
<p>Let $A$ be the event that the $r$ numbers we
obtain are all different from each other. So, for example, if $n = 3$ and $r = 2$ the sample space is
$S = \{(1, 1),(1, 2),(1, 3),(2, 1),(2, 2),(2, 3),(3, 1),(3, 2),(3, 3)\}$
and the event $A$ is
$A = \{(1, 2),(1, 3),(2, 1),(2, 3),(3, 1),(3, 2)\}$.</p>
<p>My task is to solve for the general case and put together a formula. </p>
<p>For the random experiment described above, find the probability $P(A)$ for a general $n$ and $r$. [Hint: If
$r = 1$, we don't choose any duplicate numbers, so $P(A) = 1$. If $r > n$, then our choice of $r$ numbers must
contain some duplicates, so $P(A) = 0$. The interesting case is when $2 \leq r \leq n$.]</p>
<p>I found this relatively simple to do while programming on <em>R</em>, however I do not know where to begin when putting a formula together for the general case. Any explanations would be helpful!</p>
| patentfox | 17,495 | <p>Size of sample space can be calculated by <code>n^r</code>, as we are selecting r elements out of n choices, with repetition.</p>
<p>No of favourable outcomes (ordered selections with all entries distinct) = <code>nPr = n*(n-1)*...*(n-r+1)</code></p>
<p>So, the formula comes out to be</p>
<pre><code>nPr/n^r = n!/((n-r)! * n^r)
</code></pre>
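<p>The formula can be checked against direct enumeration for small $n$ and $r$; a Python sketch (the helper names are mine):</p>

```python
from itertools import product

def p_all_distinct(n, r):
    """P(A) by the falling-factorial formula n(n-1)...(n-r+1) / n^r."""
    num = 1
    for i in range(r):
        num *= n - i
    return num / n ** r

def p_by_enumeration(n, r):
    outcomes = list(product(range(1, n + 1), repeat=r))
    favorable = [t for t in outcomes if len(set(t)) == r]
    return len(favorable) / len(outcomes)

for n in range(1, 7):
    for r in range(1, 5):
        assert abs(p_all_distinct(n, r) - p_by_enumeration(n, r)) < 1e-12

print(p_all_distinct(3, 2))  # 6/9, matching |A|/|S| in the question's example
```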
|
476,095 | <p>I am attempting to learn about mathematical proofs on my own and this is where I've started. I think I can prove this by induction. Something like:</p>
<p>$n = 2k+1$ is odd by definition</p>
<p>$n = 2k+1 + 2$ (this is where I'm stuck, how do I show that this is odd?)</p>
<p>$n = 2(k+1) + 1$ (if I can show that it's odd, I can do the same here and prove my conjecture by induction, right?)</p>
<p>Thanks for any assistance</p>
| Argon | 27,624 | <p>Odd numbers have a remainder of $1$ when divided by $2$, thus</p>
<p>$$n+2 = 2k+3 \equiv 3 \equiv 1 \pmod 2 $$</p>
|
3,144,813 | <blockquote>
<p>Let <span class="math-container">$X : \mathbb{R} \to \mathbb{R}^n$</span> be a <span class="math-container">$C^1$</span> function. Let <span class="math-container">$\| .\|$</span> be the norm : <span class="math-container">$\| v \| = \max_{1 \leq i \leq n} | v_i |$</span>. Then is it true that :
<span class="math-container">$$\| X'(t) \| = (\| X(t) \|)'$$</span> ?</p>
</blockquote>
<p>I am wondering, in general: if I have any function <span class="math-container">$f : \mathbb{R}^n \to \mathbb{R}^p$</span> and a norm <span class="math-container">$N$</span> on <span class="math-container">$\mathbb{R}^p$</span>, is it always possible to interchange the norm with the differential operator, or the norm with the integral?</p>
<p>Thank you. </p>
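<p>A quick numerical experiment (an illustration, not a proof) suggests the boxed identity fails in general; the test curve below is my own choice:</p>

```python
from math import cos, sin

def X(t):
    return (cos(t), 2 * sin(t))

def max_norm(v):
    return max(abs(c) for c in v)

t, h = 0.3, 1e-6
lhs = max_norm((-sin(t), 2 * cos(t)))                      # ||X'(t)||
rhs = (max_norm(X(t + h)) - max_norm(X(t - h))) / (2 * h)  # (||X(t)||)' by central difference
print(lhs, rhs)  # roughly 1.911 vs -0.296: the two sides differ
```

<p>Near $t=0.3$ the max norm is governed by the first coordinate, $\|X(t)\|=\cos t$, so its derivative is negative, while $\|X'(t)\|$ is of course non-negative.</p>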
| dmtri | 482,116 | <p>The pde alone has an infinite number of solutions. We can visualize them as surfaces, but when we want one that goes through a specific curve, like the one you have here, there is only one solution surface. </p>
|
44,771 | <p>A capital delta ($\Delta$) is commonly used to indicate a difference (especially an incremental difference). For example, $\Delta x = x_1 - x_0$</p>
<p><strong>My question is: is there an analogue of this notation for ratios?</strong></p>
<p>In other words, what's the best symbol to use for $[?]$ in $[?]x = \dfrac{x_1}{x_0}$?</p>
| J. M. ain't a mathematician | 498 | <p>Not entirely standard, but in Peter Henrici's discussion of the (justly famous) quotient-difference (QD) algorithm in the books <a href="http://rads.stackoverflow.com/amzn/click/0471372412" rel="noreferrer"><em>Elements of Numerical Analysis</em></a> (see p. 163) and <a href="http://rads.stackoverflow.com/amzn/click/0471059048" rel="noreferrer"><em>Essentials of Numerical Analysis</em></a> (see p. 155), he defines the quotient operator as</p>
<p>$$Q\,x_n=\frac{x_{n+1}}{x_n}$$</p>
<p>in complete analogy with the (forward) difference operator $\Delta$.</p>
<p>Henrici's a pretty sharp mathematician, so I wouldn't mind borrowing notation from him if I were in your shoes...</p>
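<p>Henrici's quotient operator is as easy to implement as the forward difference; a tiny illustrative Python sketch:</p>

```python
def forward_difference(xs):
    # (Delta x)_n = x_{n+1} - x_n
    return [b - a for a, b in zip(xs, xs[1:])]

def quotient_operator(xs):
    # Henrici's Q: (Q x)_n = x_{n+1} / x_n
    return [b / a for a, b in zip(xs, xs[1:])]

arithmetic = [5, 8, 11, 14, 17]   # Delta is constant on arithmetic sequences
geometric  = [3, 6, 12, 24, 48]   # Q is constant on geometric sequences
print(forward_difference(arithmetic))  # [3, 3, 3, 3]
print(quotient_operator(geometric))    # [2.0, 2.0, 2.0, 2.0]
```

<p>The analogy is exact: $\Delta$ detects arithmetic progressions by a constant output, and $Q$ detects geometric progressions the same way.</p>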
<hr>
<p>Here's a screenshot of the relevant page of the first book (sorry, I don't have a digital copy of the other book):</p>
<p><img src="https://i.stack.imgur.com/mO2ME.png" alt="Henrici"></p>
|
392,835 | <p>In a concrete category (i.e., where the morphisms are functions between sets), I define a <strong>base</strong> of an object <span class="math-container">$A$</span> to be a set of elements <span class="math-container">$M$</span> of <span class="math-container">$A$</span> such that for any morphisms <span class="math-container">$F,G:A\to B$</span> that coincide on <span class="math-container">$M$</span>, we have <span class="math-container">$F=G$</span>.</p>
<p><strong>Question:</strong> Is there an established name for a <strong>base</strong> in that sense?</p>
<p><strong>Examples:</strong> In the category of vector spaces, generating sets are bases. In the category of sets, <span class="math-container">$A$</span> is the only base of <span class="math-container">$A$</span>.</p>
<p><strong>Note:</strong> The above definition does not really need a concrete category (an initial object is enough), but I decided to formulate it in a concrete category for simplicity.</p>
| Martin Brandenburg | 2,841 | <p>The term "base" should not be used, since, as you say, you are actually generalizing the notion of a generating set.</p>
<p>It is an <strong>epi-sink</strong>, also known as <strong>jointly epimorphic family</strong>. See <a href="http://katmat.math.uni-bremen.de/acc/acc.pdf" rel="nofollow noreferrer">Joy of Cats</a>, Definition 10.62 and (dual of) Definition 10.5. A family of morphisms <span class="math-container">$(f_i : A_i \to A)$</span> is called an epi-sink when for <span class="math-container">$u,v : A \to B$</span> we have <span class="math-container">$\forall i (u \circ f_i = v \circ f_i) \implies u=v$</span>. When the coproduct <span class="math-container">$\coprod_i A_i$</span> exists, this means that we have an epimorphism <span class="math-container">$\coprod_{i \in I} A_i \to A$</span>.</p>
<p>If you have a terminal object <span class="math-container">$1$</span>, morphisms <span class="math-container">$1 \to A$</span> are called global elements, and we can look at epi-sinks consisting of global elements of <span class="math-container">$A$</span>.</p>
<p>For many categories, though, global elements are not enough. When we have a forgetful functor <span class="math-container">$U$</span> to <span class="math-container">$\mathbf{Set}$</span> with a left adjoint <span class="math-container">$F$</span>, we have <span class="math-container">$U(-) \cong \hom(F(1),-)$</span>, so that elements of the underlying set can be seen as morphisms on <span class="math-container">$F(1)$</span>, and we can talk about epi-sinks on <span class="math-container">$F(1)$</span>.</p>
<p>But the most general form does not put any restrictrions on the domains at all.</p>
|
2,096,711 | <p>I need help finding a closed form of this finite sum. I'm not sure how to deal with sums that include division in it.</p>
<p>$$\sum_{i=1}^n \frac{2^i}{2^n}$$</p>
<p>Here's one of the attempts I made and it turned out to be wrong:</p>
<p>$$\frac{1}{2^n}\sum_{i=1}^n {2^i} = \frac{1}{2^n} (2^{n +1} - 1) = \frac{2^{n + 1} - 1} {2^n}$$</p>
<p>And then from there, simplifying it ended up with just a constant.</p>
<p>I also tried it in which I moved $2^{-n}$ to the outside of the sigma notation and went from there:</p>
<p>$$2^{-n}\sum_{i=1}^n {2^i} = 2^{-n} (2^{n +1} - 1) = 2 - 2^{-n}$$</p>
<p>I plugged the equation in to Wolfram Alpha to check my answers. It gave me $2 - 2^{1-n}$, which is close to what I got in that second method. I need help finding the error in my math. I keep looking over it and I guess I'm just not seeing something.</p>
| haqnatural | 247,767 | <p>$$\sum_{i=1}^{n} 2^{i} =\frac{2\left(1-2^{n}\right)}{1-2} = 2^{n+1}-2$$</p>
<p>Dividing by $2^n$ then gives $\dfrac{2^{n+1}-2}{2^{n}} = 2-2^{1-n}$, which matches Wolfram Alpha. The error in your attempts was using $2^{n+1}-1$, which is the value of $\sum_{i=0}^{n}2^{i}$ (starting at $i=0$), instead of $2^{n+1}-2$.</p>
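<p>As a quick sanity check (my addition, not part of the original answer), the closed form and the value of the sum in the question can be confirmed with exact rational arithmetic:</p>

```python
# Verify sum_{i=1}^{n} 2^i = 2^(n+1) - 2, and hence
# (1/2^n) * sum_{i=1}^{n} 2^i = 2 - 2^(1-n), for small n.
from fractions import Fraction

def check(n):
    s = sum(Fraction(2)**i for i in range(1, n + 1))
    assert s == 2**(n + 1) - 2
    assert s / Fraction(2)**n == 2 - Fraction(2)**(1 - n)

for n in range(1, 20):
    check(n)
print("verified for n = 1..19")
```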
|
2,353,272 | <p>Suppose that we are given the function $f(x)$ in the following product form:
$$f(x) = \prod_{k = -K}^K (1-a^k x)\,,$$
where $a$ is some real number.</p>
<p>I would like to find the expansion coefficients $c_n$, such that:
$$f(x) = \sum_{n = 0}^{2K+1} c_n x^n\,.$$</p>
<p>A closed form solution for $c_n$, or at least a relation between the coefficients $c_n$ (e.g. between $c_n$ and $c_{n+1}$) would be great! </p>
| Ronald Blaak | 458,842 | <p>Let the function $f_n(x)$ be given by
$$
f_n(x) = \prod_{k=-n}^{n} \left( 1 - a^k x \right)
$$
Since it is clear that it is a polynomial of degree $2 n +1$, it can be expressed as:
$$
f_n(x) = \sum_{k=0}^{2n+1} c_{n,k} x^k
$$
in some yet unknown coefficients $c_{n,k}$. For these coefficients it is easy to see that $c_{n,0}=1$ and $c_{n,2n+1}=-1$. More generally one could show that $c_{n,k} = -c_{n,2n+1-k}$.</p>
<p>The functions for different values of $n$ are related by:
$$
f_n(x) = f_{n-1}(x) * \left(1 - a^n x\right)\left(1-a^{-n}x\right) =
f_{n-1} * \left[1 - \left(a^n + a^{-n}\right)x + x^2\right]
$$
If we now substitute the expansion into this expression and group the terms with the same power of $x$ on both the left and right sides we can find a recurrence relation between the coefficients $c_{n,k}$ of successive functions:
$$
c_{n,k} = c_{n-1,k} - \left(a^n + a^{-n}\right) c_{n-1,k-1} + c_{n-1,k-2}
$$
Together with the conditions $c_{0,0} = 1$ and $c_{0,k} = 0$ for $k \neq 0$, this completely specifies the coefficients, and one could set up a program to evaluate them, because in general a simple and compact expression for them might not exist.</p>
<p>In this particular case, however, there is such a "simple" expression:
$$
c_{n,k} =\frac{\prod_{i=0}^{k-1} \left(a^{2n+1} - a^i \right)}{a^{k n}\prod_{i=1}^{k} \left(1 - a^i \right)}
$$
with $0 \leq k \leq 2n+1$ and the product by definition is unity if no terms are present. By rearranging the limits of products a few other but equivalent expressions exist.</p>
<p>Deriving them is a lot of work so I simply presented them. The fact that they are correct however, is a lot easier to show. For this one first observes that $c_{0,0}=1$. From there we only need to show that the expression satisfies the recurrence relation shown above and the correctness follows from induction.</p>
<p>I leave that proof as an exercise, but give the following hint. In the recurrence relation there are three coefficients on the right hand side for the same value of $n$. If you compare the coefficients $c_{n-1,k}$, $c_{n-1,k-1}$, and $c_{n-1,k-2}$ there are quite a few factors from the numerator and denominator that they have in common and which also appear in $c_{n,k}$. The simplest way to see such a thing is to write the recurrence relation out for some chosen values for $n$ and $k$.</p>
<p>As another remark consider $c_{n,k}$ for $k>n$, then the number of factors in the numerator and denominator keep on growing which appear to lead to very long expressions. This is however not the case and can be seen as well if you write all the factors out for some particular values of $n$ and $k$.</p>
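<p>The closed form can also be checked by machine. The sketch below is my own; it fixes the arbitrary value $a=3/2$, expands $f_n(x)$ directly with exact rationals, and compares each coefficient against the stated formula for $c_{n,k}$:</p>

```python
# Compare the direct expansion of f_n(x) = prod_{k=-n}^{n} (1 - a^k x)
# against the stated closed form for c_{n,k}, using a = 3/2 exactly.
from fractions import Fraction

a = Fraction(3, 2)

def poly_mul(p, q):
    # multiply two coefficient lists (index = power of x)
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def f_direct(n):
    p = [Fraction(1)]
    for k in range(-n, n + 1):
        p = poly_mul(p, [Fraction(1), -a**k])   # factor (1 - a^k x)
    return p

def c_closed(n, k):
    num = Fraction(1)
    for i in range(k):
        num *= a**(2*n + 1) - a**i
    den = a**(k*n)
    for i in range(1, k + 1):
        den *= 1 - a**i
    return num / den

for n in range(4):
    coeffs = f_direct(n)
    for k in range(2*n + 2):
        assert coeffs[k] == c_closed(n, k), (n, k)
print("closed form matches direct expansion for n = 0..3")
```

The recurrence relation can be verified in exactly the same way for the tested values.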
|
4,013,559 | <p>In the video <a href="https://youtu.be/eI4an8aSsgw?t=16354" rel="nofollow noreferrer">https://youtu.be/eI4an8aSsgw?t=16354</a> the professor says that the following equation
<span class="math-container">$$\sqrt{(x+c)^2 + y^2} + \sqrt{(x-c)^2+y^2}=2a$$</span>
simplifies to
<span class="math-container">$$
(a^2-c^2)x^2+a^2y^2=a^2(a^2-c^2)
$$</span>
I don't get it. I need someone who can explain it step by step</p>
| zwim | 399,263 | <p>Let's call <span class="math-container">$\begin{cases}U=\sqrt{(x+c)^2+y^2}\\V=\sqrt{(x-c)^2+y^2}\end{cases}\quad$</span> and we start from <span class="math-container">$U+V=2a$</span></p>
<p>Squaring we get <span class="math-container">$\quad U^2+2UV+V^2=4a^2$</span></p>
<p><span class="math-container">$U^2+V^2=(x+c)^2+y^2+(x-c)^2+y^2=2(x^2+y^2+c^2)$</span></p>
<p>Now we'll divide by <span class="math-container">$2$</span> and put the roots on one side and square again:</p>
<p><span class="math-container">$$U^2V^2=(2a^2-\tfrac 12(U^2+V^2))^2$$</span>
<span class="math-container">$\begin{align}U^2V^2
&=((x+c)^2+y^2)((x-c)^2+y^2)\\
&=((x+c)^2+(x-c)^2)y^2+(x+c)^2(x-c)^2+y^4\\
&=(2x^2+2c^2)y^2+(x^2-c^2)^2+y^4\\
\require{cancel}&=\cancel{2x^2y^2}+\cancel{2c^2y^2}+\cancel{x^4}-2c^2x^2+\cancel{c^4}+\cancel{y^4}
\end{align}$</span></p>
<p>It is equal to:</p>
<p><span class="math-container">$(2a^2-x^2-y^2-c^2)^2
\require{cancel}=4a^4+\cancel{x^4}+\cancel{y^4}+\cancel{c^4}-4a^2x^2-4a^2y^2-4a^2c^2+\cancel{2x^2y^2}+2x^2c^2+\cancel{2y^2c^2}$</span></p>
<p>Gathering remaining terms:</p>
<p><span class="math-container">$-2x^2c^2=4a^4-4a^2x^2-4a^2y^2-4a^2c^2+2x^2c^2\iff 4\Big(a^4-a^2x^2-a^2y^2-a^2c^2+x^2c^2\Big)=0$</span></p>
<p><span class="math-container">$$(-a^2+c^2)x^2-a^2y^2+a^2(a^2-c^2)=0$$</span></p>
<p>Which is the desired expression.</p>
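<p>A quick numeric spot check (my addition, with the arbitrary choice $a=5$, $c=3$) confirms that points whose focal distances sum to $2a$ satisfy the final equation:</p>

```python
# For a = 5, c = 3 the final equation should hold at any point whose
# distances to the foci (+-c, 0) sum to 2a.
import math

a, c = 5.0, 3.0
b = math.sqrt(a**2 - c**2)                    # semi-minor axis, b^2 = a^2 - c^2
for t in [0.3, 1.1, 2.5]:
    x, y = a * math.cos(t), b * math.sin(t)   # a point on the ellipse
    U = math.hypot(x + c, y)
    V = math.hypot(x - c, y)
    assert abs(U + V - 2*a) < 1e-9            # the defining property U + V = 2a
    lhs = (a**2 - c**2) * x**2 + a**2 * y**2
    rhs = a**2 * (a**2 - c**2)
    assert abs(lhs - rhs) < 1e-6
print("U + V = 2a and (a^2-c^2)x^2 + a^2 y^2 = a^2(a^2-c^2) both hold")
```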
|
101,098 | <p>I apologize in advance because I don't know how to enter code to format equations, and I apologize for how elementary this question is. I am trying to teach myself some differential geometry, and it is helpful to apply it to a simple case, but that is where I am running into a wall.</p>
<p>Consider $M=\mathbb{R}^2$ as our manifold of interest. I believe that the tangent space is also $\mathbb{R}^2$. From linear algebra, we know that a basis set for $\mathbb{R}^2$ is $$\left\{\left[\matrix{1\\0}\right], \left[\matrix{0\\ 1}\right] \right\}\;.$$</p>
<p>Now, from differential geometry, we are told that basis vectors are $\frac{d}{dx}$ and $\frac{d}{dy}$, where the derivatives are partial derivatives.</p>
<p>So my question is how does one obtain a two-component basis vector of linear algebra from a simple partial derivative?</p>
<hr>
<p>EDIT: Thanks to everyone for the replies. They have been very helpful, but thinking as a physicist, I would like to see how the methods of differential geometry could be used to derive the standard basis of linear algebra. It seems that there must be more to it than saying that there is an isomorphism between the space of derivatives at a point and R^n which sets up a natural correspondence between the basis vectors.</p>
<p>I may be completely off-the-wall wrong, but somehow I think that the answer involves partial derivatives of a local orthogonal coordinate system at point p.</p>
| mathNotebook | 23,648 | <p>This isomorphism is usually established in elementary textbooks (see Schutz, <em>Geometrical Methods</em>, or Wheeler, <em>Gravitation</em>) via the directional derivative. I could go through the argument, but you can find it here:</p>
<p><a href="http://en.wikipedia.org/wiki/Vector_(geometric)" rel="nofollow">http://en.wikipedia.org/wiki/Vector_(geometric)</a></p>
<p>under the section "The Vector as a Directional Derivative"</p>
<p>and ending with "Therefore any directional derivative can be identified with a corresponding vector, and any vector can be identified with a corresponding directional derivative."</p>
<p>(This is just a more informal version of Dylan's answer.)</p>
|
681,737 | <p>What is the simplest way we can find which one of $\cos(\cos(1))$ and $\cos(\cos(\cos(1)))$ [in radians] is greater without using a calculator [pen and paper approach]? I thought of using some inequality relating $\cos(x)$ and $x$, but do not know anything helpful.
We can use basic calculus. Please help. </p>
| Henno Brandsma | 4,280 | <p>The example given in the other answer shows why it is false. Note that the diameters of these sets are all infinite. If indeed we have countably many such closed sets <strong>and</strong> their diameters tend to $0$ as well, then completeness will give a point in the intersection (as picking points in the intersections will give a Cauchy sequence). Maybe this is what confused you?</p>
|
2,524,809 | <p>I am trying to solve the following problem:</p>
<blockquote>
<p>Show that $\frac{dy}{dt}=f(y/t)$ is equal to $t\frac{dv}{dt}+v=f(v)$, (which is a separable differential equation) by using substitution of $y = t \cdot v$ or $v =\frac{y}{t}$. </p>
</blockquote>
<p>I did the following:</p>
<p>By using the chain-rule, we can write down $\frac{dy}{dt} = \frac{dy}{dv} \cdot\frac{dv}{dt}$. The first part of this product, $\frac{dy}{dv}$ is equal to $t$, as $y=t \cdot v$. Using substitution, we can also see that $f(y/t)=f(v)$. Thus, we have found the following equation:
$ t\cdot\frac{dv}{dt} = f(v)$.</p>
<p>My question is what I did wrong, what did I do to lose the '$+ v$' part of the equation?</p>
<p>Thanks for your help,</p>
<p>K. Kamal</p>
<hr>
<p>If anyone is still interested, I forgot that v is a function of t and therefore we need to use the product rule.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>If we assume that $y=y(x)$, we get by the chain rule
$$2x+2yy'=0$$</p>
|
63,525 | <p>I asked this question in math.stackexchange but I didn't have much luck. It might be more appropiate for this forum. Let $z_1,z_2,…,z_n$ be i.i.d random points on the unit circle ($|z_i|=1$) with uniform distribution on the unit circle. Consider the random polynomial $P(z)$ given by
$$
P(z)=\prod_{i=1}^{n}(z−z_i).
$$
Let $m$ be the maximum absolute value of $P(z)$ on the unit circle $m=\max\{|P(z)|:|z|=1\}$.</p>
<p>How can I estimate $m$? More specifically, I would like to prove that there exist $\alpha>0$ such that the following holds almost surely as $n\to\infty$
$$
m\geq \exp(\alpha\sqrt{n}).
$$
Or at least that for every $\epsilon>0$ there exists $n$ sufficiently large such that
$$
\mathbb{P}(m\geq\exp(\alpha\sqrt{n}))>1-\epsilon
$$
for some $\alpha$ independent on $n$.</p>
<p>Any idea of what can be useful here?</p>
| Seva | 9,924 | <p>I believe you can obtain very reasonable bounds for your problem using the following approach. (I myself was too lazy to carry out the computations.) Split the unit circle into the union of an interval $I$ of length $4\pi/(n\log n)$ and $N\sim \pi n/\log n$ intervals $J_k$ of length about $2\log n/n$ each. (You may need to adjust the logarithmic factors at the optimization stage.) Almost surely, the interval $I$ will not contain any point $z_i$, whereas each of the intervals $J_k$ will contain at most $3\log n$ points. Now choose your point $z$ to be in the middle of the interval $I$ and compute $|P(z)|$ for the worst-case scenario, where each of the two intervals $J_k$ abating to $I$ contains $3\log n$ points $z_i$, all of these points at the distance $2\pi/(n\log n)$ from $z$, the two "next" intervals $J_k$ also contain $3\log n$ points each at the minimum possible distance from $z$ and so on.</p>
|
63,525 | <p>I asked this question in math.stackexchange but I didn't have much luck. It might be more appropiate for this forum. Let $z_1,z_2,…,z_n$ be i.i.d random points on the unit circle ($|z_i|=1$) with uniform distribution on the unit circle. Consider the random polynomial $P(z)$ given by
$$
P(z)=\prod_{i=1}^{n}(z−z_i).
$$
Let $m$ be the maximum absolute value of $P(z)$ on the unit circle $m=\max\{|P(z)|:|z|=1\}$.</p>
<p>How can I estimate $m$? More specifically, I would like to prove that there exist $\alpha>0$ such that the following holds almost surely as $n\to\infty$
$$
m\geq \exp(\alpha\sqrt{n}).
$$
Or at least that for every $\epsilon>0$ there exists $n$ sufficiently large such that
$$
\mathbb{P}(m\geq\exp(\alpha\sqrt{n}))>1-\epsilon
$$
for some $\alpha$ independent on $n$.</p>
<p>Any idea of what can be useful here?</p>
| Johan Wästlund | 14,302 | <p>I think $z$ should be chosen so that the deviations tend to go in the positive direction on all scales. The following approach seems to work: Suppose $n$ is a power of 2 (some fix is needed if it isn't). Suppose the points $z_i$ are sorted, say in counter-clockwise direction, and assume without loss of generality that we have $z_0 = z_n = 1$ (indexing modulo $n$). The points $z_0$ and $z_{n/2}$ split the unit circle in two sectors. We pick the larger of those two, and look at the point whose index is the mean of the indices at the endpoints (either $z_{n/4}$ or $z_{3n/4}$, the point that we expect to be near the midpoint of that larger sector). That point splits the sector in two, and we pick the larger of those two and continue. In the end we arrive at two consecutive points $z_i$, and we let $z$ be the midpoint of the sector between them. </p>
<p>It should now be possible to get a high-probability lower bound on $\log(P(z))$. I haven't done this in detail, but a simulation for $n=64,128,\dots,4096$ indicates that $\log(P(z))$ is rarely much smaller than $\sqrt{n}$. Since you ask for an idea that might be useful, I dare post this as an answer. </p>
<p>UPDATE:
Here's a simple argument that should almost, but not quite, give the desired bound. What it ought to show, although some precision in the analysis is still missing, is that for every $\epsilon>0$ there is an $\alpha>0$ such that $m\geq \exp(\alpha\sqrt{n})$ with probability at least $1-\epsilon$.</p>
<p>Here's how it works: Since the mean value of $\log\left|P(z)\right|$ is 0, we can always find a $z$ such that $\left|P(z)P(-z)\right| = 1$. Notice that $P(z)P(-z)$ is unchanged if we replace $z_i$ by $-z_i$. Therefore we can start by randomly generating $n$ pairs of diametrically opposite points $\{z_i, -z_i\}$, then find $z$ with $\left|P(z)P(-z)\right|=1$ given only this information, and finally fix the $z_i$'s by $n$ independent coin flips. </p>
<p>Now condition on the outcome of the first stage of the process, so that the $n$ pairs $\{z_i, -z_i\}$ are fixed. With high probability there is a bunch of say $n/2$ such pairs for which the quantity $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is affected by at least some constant depending on a coin flip (this just requires $z_i$ to be substantially closer to one of $z$ and $-z$ than to the other). Therefore the standard deviation of $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is at least some constant times $\sqrt{n}$, which means that $\max\left(\log\left|P(z)\right|, \log\left|P(-z)\right|\right)$ should be of order $\sqrt{n}$ most of the time. </p>
<p>I guess the argument can be made precise, but this choice of $z$ doesn't let us fix $\alpha>0$ and get a.a.s. the bound asked for. </p>
<p>Perhaps this helps to at least clarify the question. What the OP asks for is just beyond what we get with this argument.</p>
<p>EDIT: The more I think about it, the more I suspect that the first statement asked for in the OP is not true. Of course $\log\left|P(z)\right|$ will take large negative values when $z$ is extremely close to a $z_i$, but when it isn't, it seems that the irregularities in distribution of the $z_i$ on a smaller scale will be less significant than the large scale distribution. Roughly speaking this is because by the nature of the logarithm, the points $z_i$ close to $z$ will have only a mildly stronger influence on $\log\left|P(z)\right|$ than the points farther away. If the points $z_i$ happen to be unusually but not extremely uniformly distributed on the large scale, it seems that $\max\left(\log\left|P(z)\right|\right)$ need not be larger than any particular constant times $\sqrt{n}$.</p>
<p>If this is correct, it means that the values of $\log\left|P(z)\right|$ for different $z$ are significantly correlated even on a larger scale.
Needless to say, these speculations are still nothing close to a proof.</p>
<p>The second, weaker version, seems to be equivalent to what is claimed in the update above, and it should not be too hard to fill in the missing details.</p>
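<p>For what it's worth, a simulation along the lines mentioned above can be sketched as follows (my code; the number of random test angles and the values of $n$ are arbitrary). It estimates $\max_{|z|=1}\log|P(z)|$ by sampling and prints the ratio to $\sqrt{n}$:</p>

```python
# Draw n uniform points on the unit circle and estimate
# m = max_{|z|=1} |P(z)| by sampling log|P(z)| at many random test angles.
import cmath, math, random

def log_max_abs_P(n, rng, trials=2000):
    zs = [cmath.exp(2j * math.pi * rng.random()) for _ in range(n)]
    best = -math.inf
    for _ in range(trials):
        z = cmath.exp(2j * math.pi * rng.random())
        s = sum(math.log(abs(z - zi)) for zi in zs)   # log|P(z)|
        best = max(best, s)
    return best

rng = random.Random(0)
for n in [64, 256, 1024]:
    print(n, log_max_abs_P(n, rng) / math.sqrt(n))
```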
|
4,321,675 | <p>I'm struggling to derive the Finsler geodesic equations. The books I know either skip the computation or use the length functional directly. I want to use the energy. Let <span class="math-container">$(M,F)$</span> be a Finsler manifold and consider the energy functional <span class="math-container">$$E[\gamma] = \frac{1}{2}\int_I F^2_{\gamma(t)}(\dot{\gamma}(t))\,{\rm d}t\tag{1}$$</span>evaluated along a (regular) curve <span class="math-container">$\gamma\colon I \to M$</span>. We use tangent coordinates <span class="math-container">$(x^1,\ldots,x^n,v^1,\ldots, v^n)$</span> on <span class="math-container">$TM$</span> and write <span class="math-container">$g_{ij}(x,v)$</span> for the components of the fundamental tensor of <span class="math-container">$(M,F)$</span>. We may take for granted (using Einstein's convention) that <span class="math-container">$$F^2_x(v) = g_{ij}(x,v)v^iv^j, \quad \frac{1}{2}\frac{\partial F^2}{\partial v^i}(x,v) = g_{ij}(x,v)v^j, \quad\frac{\partial g_{ij}}{\partial v^k}(x,v)v^k = 0.\tag{2} $$</span></p>
<p>Setting <span class="math-container">$L(x,v) = (1/2) F_x^2(v)$</span>, and writing <span class="math-container">$(\gamma(t),\dot{\gamma}(t)) \sim (x(t),v(t))$</span>, the Euler-Lagrange equations are <span class="math-container">$$0 = \frac{{\rm d}}{{\rm d}t}\left(\frac{\partial L}{\partial v^k}(x(t),v(t))\right) -\frac{\partial L}{\partial x^k}(x(t),v(t)),\quad k=1,\ldots, n=\dim(M).\tag{3}$$</span>It's easy to see (omitting application points) that <span class="math-container">$$\frac{\partial L}{\partial x^k} = \frac{1}{2}\frac{\partial g_{ij}}{\partial x^k}\dot{x}^i\dot{x}^j\quad\mbox{and}\quad \frac{\partial L}{\partial v^k} = g_{ik}\dot{x}^i,\tag{4}$$</span>so <span class="math-container">$$\frac{\rm d}{{\rm d}t}\left(\frac{\partial L}{\partial v^k}\right) = \frac{\partial g_{ik}}{\partial x^j}\dot{x}^j\dot{x}^i +{\color{red}{ \frac{\partial g_{ik}}{\partial v^j} \ddot{x}^j\dot{x}^i }}+ g_{ik}\ddot{x}^i\tag{5}$$</span>
<strong>Problem:</strong> I cannot see for the life of me how to get rid of these <span class="math-container">$v^j$</span>-derivatives indicated in red, even using the last relation in (2), as the indices simply don't match. I am surely missing something obvious. Once we know that this term does vanish, then (4) and (5) combine to give <span class="math-container">$$ g_{ik}\ddot{x}^i + \left(\frac{\partial g_{ik}}{\partial x^j} - \frac{1}{2}\frac{\partial g_{ij}}{\partial x^k}\right)\dot{x}^i\dot{x}^j =0\tag{6}$$</span><a href="https://en.wikipedia.org/wiki/Finsler_manifold#Canonical_spray_structure_on_a_Finsler_manifold" rel="nofollow noreferrer">as in the Wikipedia page</a>.</p>
| Ivo Terek | 118,056 | <p>Someone (not on this website) also pointed me to the Bao, Chern, Shen book, but namely, to Exercise 1.2.1 on page 11 and to relation (1.4.5) on page 23. Using the suggestive coordinate notation on the book, the exercise says that</p>
<p>(a) <span class="math-container">$y^iF_{y^i} = F$</span> (I already knew that)</p>
<p>(b) <span class="math-container">$y^iF_{y^iy^j} = 0$</span> (that one too)</p>
<p>(c) <span class="math-container">$y^iF_{y^iy^jy^k} = -F_{y^jy^k}$</span> (that was the missing piece for me! also a consequence of Euler's theorem)</p>
<p>while (1.4.5) says that <span class="math-container">$$y^i\frac{\partial g_{ij}}{\partial y^k} = y^j\frac{\partial g_{ij}}{\partial y^k} = y^k\frac{\partial g_{ij}}{\partial y^k} = 0.$$</span>
In the notation of my original post, we just have to apply <span class="math-container">$v^i\partial_{v^j}$</span> to <span class="math-container">$g_{ik} = FF_{v^iv^k} + F_{v^i}F_{v^k}$</span> and use (c), and the conclusion follows easily.</p>
|
2,217,454 | <p>Let $\{ x_i : i \in I \}$ be a family of numbers $x_i \in \mathbb R$ with $I$ an arbitrary index set. We say that this family is summable with value $s$ (and write $s = \sum_{i \in I} x_i$ then) if for every $\varepsilon > 0$ there exists some finite set $I_{\varepsilon}$ such that for every finite superset $J \subseteq I$, i.e. such that $I_{\varepsilon} \subseteq J$, we have
$$
\left| \sum_{i \in J} x_i - s \right| < \varepsilon.
$$</p>
<p>Does there exists a family of numbers $\{x_i : i \in I\}$ with uncountable $I$ such that $\sum_{i \in I} x_i = 1$ and such that for every countable $J \subseteq I$ we have
$$
\sum_{j \in J} x_j < 1
$$
i.e. the countable "sub"-sums have a strictly smaller value?</p>
| Gio67 | 355,873 | <p>You cannot find such a sequence. Take a look at my answer in
<a href="https://math.stackexchange.com/questions/2126816/on-a-necessary-and-sufficient-condition-for-sum-k-in-mathbbza-k-l-a-k-i">on-a-necessary-and-sufficient-condition-for-sum-k-in-mathbbza-k-l-a-k-i</a></p>
<p>If $\sum_{i\in I}x_i=1$, then $\sum_{i\in I}|x_i|\le C$. This implies that $x_i=0$ for all but countably many $i$. To see this let $I_k=\{i\in I:\, |x_i|\ge \frac1k\}$. Then for every finite set $F$ of $I_k$,
$$\frac1k \text{card} F\le \sum_{i\in F}|x_i|\le C,$$
which shows that $\text{card}\, F\le k C$. In turn, $I_k$ must have only finitely many elements. Hence, $\{i\in I:\, x_i\ne 0\}=\bigcup_k I_k$ is countable. So the infinite sum is just a series.</p>
|
2,217,454 | <p>Let $\{ x_i : i \in I \}$ be a family of numbers $x_i \in \mathbb R$ with $I$ an arbitrary index set. We say that this family is summable with value $s$ (and write $s = \sum_{i \in I} x_i$ then) if for every $\varepsilon > 0$ there exists some finite set $I_{\varepsilon}$ such that for every finite superset $J \subseteq I$, i.e. such that $I_{\varepsilon} \subseteq J$, we have
$$
\left| \sum_{i \in J} x_i - s \right| < \varepsilon.
$$</p>
<p>Does there exists a family of numbers $\{x_i : i \in I\}$ with uncountable $I$ such that $\sum_{i \in I} x_i = 1$ and such that for every countable $J \subseteq I$ we have
$$
\sum_{j \in J} x_j < 1
$$
i.e. the countable "sub"-sums have a strictly smaller value?</p>
| s.harp | 152,424 | <p>If $\sum_{i\in I} x_i$ converges as a sum in $\Bbb R$ then $I(x_i{\neq0}):=\{i\in I\mid x_i\neq0\}$ has to be countable. This is because
$$I(x_i{\neq0})=\bigcup_{n\in\Bbb N} I(x_i{>\frac1n})\cup I(x_i{<-\frac1n})$$
and if this were uncountable, there would have to be one term in the union that is uncountable (since a countable union of countable things is countable).</p>
<p>This means you've got an $n$ so that infinitely many elements are larger than $1/n$ (or infinitely many are smaller than $-1/n$). Well for any finite $I_\epsilon$ you can consider $J_k=I_\epsilon\cup \{k\cdot n\text{ elements of }I(x_i{>\frac1n})\}$ and then $\sum_{i\in J_k}x_i>\sum_{i\in I_\epsilon} x_i +k$, where you can make $k$ as big as you like.</p>
<p>But you can have uncountable sums in other spaces.</p>
<p>For example if you consider the space of functions $\Bbb R\to\Bbb R$, this is a topological vector space equipped with a family of semi-norms $\{\|\cdot\|_r\mid r\in\Bbb R\}$ where $\|f\|_r:=|f(r)|$. If you let $\delta_{r}$ be the function that is one when $x=r$ and $0$ otherwise you have that
$$\sum_{r\in \Bbb R} f(r)\delta_r$$
converges to $f$ for any function in this topology. For example
$$\sum_{r\in\Bbb R}\delta_r$$
converges to the constant function $1$ and none of the uncountably many summands are zero elements.</p>
|
1,478,314 | <p>In this particular case, I am trying to <strong>find all points $(x,y)$ on the graph of $f(x)=x^2$ with tangent lines passing through the point $(3,8)$</strong>. </p>
<p>Now then, I know the <a href="http://www.meta-calculator.com/online/?panel-102-graph&data-bounds-xMin=-10&data-bounds-xMax=10&data-bounds-yMin=-7.28&data-bounds-yMax=7.28&data-equations-0=%22y%3Dx%5E2%22&data-rand=undefined&data-hideGrid=false" rel="nofollow">graph</a> of $x^2$. What now?</p>
| Michael Burr | 86,421 | <p>Step 1: For a point $(x,y)$ on the graph of $f(x)=x^2$, find the slope of the line between $(x,y)$ and $(3,8)$.</p>
<p>Step 2: Compute the slope of the tangent line to $f(x)=x^2$ at the point $(x,y)$. </p>
<p>Step 3: Set these two slopes equal to each other and find candidate $x$ values.</p>
<p>Step 4: Check your answers.</p>
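<p>Carrying the steps out for this problem (my worked sketch, not part of the hints): Step 1 gives the chord slope $(x^2-8)/(x-3)$, Step 2 gives the tangent slope $2x$, and Step 3 reduces to $2x(x-3)=x^2-8$, i.e. $x^2-6x+8=0$. The code below solves that quadratic and performs Step 4:</p>

```python
# Step 3: solve x^2 - 6x + 8 = 0 with the quadratic formula
import math

disc = 6**2 - 4 * 1 * 8
roots = [(6 - math.sqrt(disc)) / 2, (6 + math.sqrt(disc)) / 2]

# Step 4: check the tangent line at each (x, x^2) really passes through (3, 8)
for x in roots:
    tangent_at_3 = x**2 + 2*x * (3 - x)   # y = f(x) + f'(x)(t - x) at t = 3
    assert abs(tangent_at_3 - 8) < 1e-12
    print(f"tangent point: ({x}, {x**2})")   # prints (2.0, 4.0) and (4.0, 16.0)
```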
|
2,114,276 | <p>How to show that $(x^{1/4}-y^{1/4})(x^{3/4}+x^{1/2}y^{1/4}+x^{1/4}y^{1/2}+y^{3/4})=x-y$</p>
<p>Can anyone explain how to solve this question for me? Thanks in advance. </p>
| BranchedOut | 364,830 | <p>Start with $$(x^{1/4}-y^{1/4})(x^{3/4}+x^{1/2}y^{1/4}+x^{1/4}y^{1/2}+y^{3/4}).$$
Then distribute the terms:
$$=(x+x^{3/4}y^{1/4}+x^{1/2}y^{1/2}+x^{1/4}y^{3/4})-(x^{3/4}y^{1/4}+x^{1/2}y^{1/2}+x^{1/4}y^{3/4}+y).$$
Regrouping them gives
$$(x-y)+(x^{3/4}y^{1/4}-x^{3/4}y^{1/4})+(x^{1/2}y^{1/2}-x^{1/2}y^{1/2})+(x^{1/4}y^{3/4}-x^{1/4}y^{3/4}).$$
The terms cancel out, giving
$$(x-y)+0+0+0=x-y.$$</p>
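<p>A numeric spot check of the identity at a few positive points (my addition, just to double-check the algebra):</p>

```python
# Evaluate both sides of the identity at a few positive (x, y) pairs.
for x, y in [(2.0, 3.0), (5.0, 0.7), (1.5, 1.5)]:
    q, r = x**0.25, y**0.25          # x^(1/4) and y^(1/4)
    lhs = (q - r) * (q**3 + q**2 * r + q * r**2 + r**3)
    assert abs(lhs - (x - y)) < 1e-9
print("identity holds at the sampled points")
```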
|
4,216,602 | <p>In this book - <a href="https://www.oreilly.com/library/view/machine-learning-with/9781491989371/" rel="nofollow noreferrer">https://www.oreilly.com/library/view/machine-learning-with/9781491989371/</a> - I came across the differentiation of these two terms like this:</p>
<p>Train - Applying a learning algorithm to data using numerical approaches like gradient descent.
Fit - Applying a learning algorithm to data using analytical approaches.</p>
<p>I don't quite understand the difference.</p>
<p>Can someone please elaborate and/or provide examples? Thank you.</p>
| 1 2 100 | 47,907 | <p>In simple words, Train refers to choosing the algorithm you want to use to train your ML model; that is, which algorithm you will use to accomplish your task. Here you use your expertise and/or intuition about which algorithm might work for your ML model. For example, which regression algorithm you want to use for forecasting sales.</p>
<p>Fit refers to the step where you train your model using your training data. Here your data is applied to the ML algorithm you chose earlier. This is literally calling a function named Fit in most ML libraries, where you pass your training data as the first parameter and the labels/target values as the second parameter.</p>
<p>Example python code below is using Support Vector Classification algorithm and calling the fit function to do the actual training.</p>
<pre><code>from sklearn import svm  # X_train, y_train assumed prepared beforehand

clf = svm.SVC(kernel="rbf", gamma=10)  # choose the algorithm and its parameters
clf.fit(X_train, y_train)              # the actual training ("fit") step
</code></pre>
<p>In a typical workflow these steps will be followed by evaluation of your model. Depending on results you may tweak parameters of SVC class or change the algorithm and repeat the steps until you get satisfactory results.</p>
|
24,318 | <p>I have an expression as below:</p>
<pre><code>Equations = 2.0799361919940695` x[1] + 3.3534325557330327` x[1]^2 -
4.335179297091139` x[1] x[2] + 1.1989715511881491` x[2]^2 -
3.766597877399148` x[1] x[3] - 0.33254815073371535` x[2] x[3] +
1.9050048836042945` x[3]^2 + 1.1386715715291826` x[1] x[4] +
2.802846492104668` x[2] x[4] - 0.6210244597295915` x[3] x[4] +
4.943369095158792` x[4]^2
</code></pre>
<p>I want to write it in an output file. So I use the below code:</p>
<pre><code>removebracketvar[x_] :=
StringReplace[
StringReplace[
ToString[x], {"[" -> "", "]" -> "", "," -> "", "*^" -> "e",
".*" -> ".0*"}], Whitespace -> ""];
SetDirectory["C:\\folder"];
WriteString["eqfile.txt",
removebracketvar[
ToString[Equations , InputForm, NumberMarks -> False]] ];
Close["eqfile.txt"]
</code></pre>
<p>The slight problem with the code for me is that it writes the floating-point numbers with up to 16 digits of precision. I just want them rounded to about 10 digits.
When I use <code>SetPrecision[Equations,10]</code>, it weirdly changes <code>x[1]</code> etc. to <code>x[1.0000000]</code>, etc.! I want to leave the variables as they are but reduce the number of digits after the decimal point. What would be the best way of doing this?</p>
| Dr. belisarius | 193 | <pre><code>f = {# &, 3*# - 5 &, 0.1*#^2 &};
xvalues = Range[0, 500, 2.5];
t1 = Through[f[xvalues]] /. x_ /; x < 0 -> 0;
ListPlot[t1, DataRange -> {0, 500}]
</code></pre>
<p><img src="https://i.stack.imgur.com/kh38S.png" alt="enter image description here"></p>
|
2,119,178 | <p>I have this question:</p>
<blockquote>
<blockquote>
<p>Known that:
$$3pq-5p+4q=22$$
Find the value of $p + q$</p>
</blockquote>
</blockquote>
<p>I have solved 2 variables with 2 equations or more, but I have never encountered 1 equation with 2 variables. The answer is a positive integer. Can I have a hint or a guide?</p>
<p>Thanks!</p>
| Anurag A | 68,092 | <p>The given equation can be written as
$$(3p+4)(3q-5)=46$$
Since $p$ and $q$ are both integers (as what OP has mentioned in the comments). Therefore, we want factors of $46=ab$ such that
\begin{align*}
3p+4 & =a \\
3q-5 & =b
\end{align*}
Thus
$$3(p+q)=a+b+1 \implies a+b+1 \equiv 0 \pmod{3}.$$
But the only possible values for $a,b \in \{\pm 1, \pm 2, \pm 23, \pm 46\}$ (of course with $ab=46)$. However the solutions that satisfy $a+b+1 \equiv 0 \pmod{3}$ are $(a,b) \in \{(1,46), (46,1),(-2,-23),(-23,-2)\}$. </p>
<p>Hence $p+q \in \{16,-8\}$.</p>
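<p>A brute-force search (my addition; the search range is arbitrary but comfortably contains all the solutions found above) confirms the two possible values:</p>

```python
# Search integer pairs with 3pq - 5p + 4q = 22 and collect the values of p + q.
sums = set()
for p in range(-100, 101):
    for q in range(-100, 101):
        if 3*p*q - 5*p + 4*q == 22:
            sums.add(p + q)
print(sorted(sums))   # → [-8, 16]
```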
|
2,119,178 | <p>I have this question:</p>
<blockquote>
<blockquote>
<p>Known that:
$$3pq-5p+4q=22$$
Find the value of $p + q$</p>
</blockquote>
</blockquote>
<p>I have solved 2 variables with 2 equations or more, but I have never encountered 1 equation with 2 variables. The answer is a positive integer. Can I have a hint or a guide?</p>
<p>Thanks!</p>
| Lutz Lehmann | 115,115 | <p>Multiply with $3$, add a constant term and factorize to get
$$
(3p+4)(3q-5)=3(3pq−5p+4q)-20=46
$$</p>
<p>If the quadratic equation $$0=z^2-az+46$$ has the solutions $z_1=(3p+4)$, $z_2=(3q−5)$ then by the Viete rules $$a=z_1+z_2=(3p+4)+(3q−5)=3(p+q)-1$$ which is independent of the order of the roots.</p>
<p>As $46=z_1·z_2=1·46=2·23=(-2)·(-23)=(-1)·(-46)$ only has that many integer factorizations, one quickly finds the only solutions
\begin{align}
3(p+q)-1&=47:&p+q&=16;\\
3(p+q)-1&=25:&not~&possible\\
3(p+q)-1&=-25:&p+q&=-8\\
3(p+q)-1&=-47:&not~&possible\\
\end{align}</p>
|
1,927,394 | <blockquote>
<p>Number of all positive continuous functions <span class="math-container">$f(x)$</span> on <span class="math-container">$\left[0,1\right]$</span> which satisfy <span class="math-container">$\displaystyle \int^{1}_{0}f(x)dx=1$</span> and <span class="math-container">$\displaystyle \int^{1}_{0}xf(x)dx=\alpha$</span> and <span class="math-container">$\displaystyle \int^{1}_{0}x^2f(x)dx=\alpha^2$</span></p>
<p>where <span class="math-container">$\alpha$</span> is a given real number.</p>
</blockquote>
<p><span class="math-container">$\bf{My\; Try::}$</span> Adding <span class="math-container">$(1)$</span> and <span class="math-container">$(3)$</span> and subtracting <span class="math-container">$2\times (2)$</span>, we get <span class="math-container">$$\displaystyle \int^{1}_{0}(x-1)^2f(x)dx=(\alpha-1)^2$$</span> Now how can I solve it from here? Thanks</p>
| Joel Cohen | 10,553 | <p>I believe there are no such functions. Indeed, by combining the equations, we get that for every polynomial $P$ of degree $\le2$, we have </p>
<p>$$\int_0^1 P(x) f(x) dx = P(\alpha) $$</p>
<p>In particular, for $P=(x-\alpha)^2 $, we get </p>
<p>$$\int_0^1 (x-\alpha)^2 f(x) dx = 0$$</p>
<p>The integrand $x \mapsto (x-\alpha)^2 f(x)$ is a non-negative continuous function whose integral is zero, so it's zero, and $f=0$, which contradicts the fact that its integral over $[0,1]$ is $1$.</p>
|
2,222,514 | <p>A straight line OL rotates around the point O with a constant angular velocity !. A point M moves along the line OL with a speed proportional to the distance OM. Find the equation of the curve described by the point M</p>
<p>As it says the angular velocity is constant, which I think means
$$θ' =\text{constant}=ω$$</p>
<p>and after integrating I get $$θ = ω ⋅ t + θ_o$$</p>
<p>What can I do about the linear velocity?</p>
<p>Can someone help me with this problem?</p>
| Narasimham | 95,860 | <p>There is a constant ratio between the radial and transverse velocity components. There are several ways to say the same thing.</p>
<p>$$\frac { r \,d \theta}{ dr} = \frac { r \,d \theta/dt}{ dr/dt} =\frac { V_{\theta}}{ V_{r}}= \tan \psi = const. \tag{1} $$</p>
<p>Log variation (writing $k$ for $\cot \psi$):</p>
<p>$$\frac{dr}{d\theta}=kr\tag{2}$$</p>
<p>$$r= a e^{k \theta }\tag{3}$$</p>
<p>where $a$ is arbitrary</p>
<p>Linear variation</p>
<p>$$\frac{d\theta}{dt}=\omega \tag{4}$$</p>
<p>$$ \theta = \omega t + \alpha \tag{5}$$</p>
<p>where $\alpha$ is arbitrary</p>
<p>Plug into (2)</p>
<p>$$r=ae^{k (\omega t + \alpha )}\tag{6}$$</p>
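<p>A quick numeric check (illustrative Python, with arbitrary sample values for $a$, $k$, $\omega$, $\alpha$) that the velocity ratio in (1) stays constant along the trajectory (6):</p>

```python
import math

a, k, w, al = 2.0, 0.5, 3.0, 0.7  # hypothetical sample parameters

def r(t):
    return a * math.exp(k * (w * t + al))  # equation (6)

h = 1e-6
ratios = []
for t in [0.0, 0.4, 1.3]:
    Vr = (r(t + h) - r(t - h)) / (2 * h)  # radial speed dr/dt
    Vth = r(t) * w                        # transverse speed r*dθ/dt
    ratios.append(Vr / Vth)
print(ratios)  # each ≈ k = 0.5
```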
|
122,503 | <p>It seems that the current state of quantum Brownian motion is ill-defined. The best survey I can find is <a href="http://arxiv.org/pdf/1009.0843v1.pdf">this one</a> by László Erdös, but the closest the quantum Brownian motion comes to appearing is in this conjecture (p. 30):</p>
<blockquote>
<p>[<b>Quantum Brownian Motion Conjecture</b>]: For small [disorder] $\lambda$ and [dimension] $d \ge 3$, the location of the electron is governed by a heat equation in a vague sense: $$\partial_t \big|\psi_t(x)\big|^2 \sim \Delta_x \big|\psi_t(x)\big|^2 \quad \Rightarrow \quad \langle \, x^2 \, \rangle_t \sim t, \quad t \gg 1.$$
The precise formulation of the first statement requires a scaling limit. The second
statement about the diffusive mean square displacement is mathematically precise, but
what really stands behind it is a diffusive equation that on large scales mimics the
Schrödinger evolution. Moreover, the dynamics of the quantum particle converges to
the Brownian motion as a process as well; this means that the joint distribution of
the quantum densities $\big|\psi_t(x)\big|^2$ at different times $t_1 < t_2 < \dots < t_n$ converges to the corresponding finite dimensional marginals of the Wiener process.</p>
</blockquote>
<p>This is the Anderson model in $\mathbb R^d$ with disordered Hamiltonian $H = -\Delta + \lambda V$. The potential $V$ is disordered, and is generated by i.i.d. random fields; the parameter $\lambda$ controls the scale of the disorder.</p>
<hr>
<p><a href="http://en.wikipedia.org/wiki/Brownian_motion">Classical Brownian motion</a> admits many characterizations and generalizations. For example, <a href="http://en.wikipedia.org/wiki/Wiener_measure">Wiener measure</a> leads to the construction of an <a href="http://en.wikipedia.org/wiki/Abstract_Wiener_space">abstract Wiener space</a>, which is the appropriate setting for the powerful <a href="http://en.wikipedia.org/wiki/Malliavin_calculus">Malliavin calculus</a>. The <a href="http://en.wikipedia.org/wiki/Structure_theorem_for_Gaussian_measures">structure theorem of Gaussian measures</a> says that <i>all</i> <a href="http://en.wikipedia.org/wiki/Gaussian_measure#Gaussian_measures_on_infinite-dimensional_spaces">Gaussian measures</a> are abstract <a href="http://ncatlab.org/nlab/new/Wiener+measure">Wiener measures</a> in this way. I would love to know what all this theory looks like in the language of <a href="http://www.mitchener.staff.shef.ac.uk/free.pdf">non-commutative probability theory</a>.</p>
<p>The QBM Conjecture states roughly that a quantum particle in a weakly disordered environment should behave like a quantum Brownian motion. This is an important open problem, but it doesn't quite capture what a QBM <i>is</i>, nor what different <i>types</i> of QBM may exist. Thus my question:</p>
<blockquote>
<p>What kind of precise mathematical object is a quantum Brownian motion? </p>
</blockquote>
| Uwe Franz | 30,364 | <p>No answer, just another question:</p>
<p>Is quantum Brownian motion related to Quantum Noise or the quantum Wiener process? I think these notions have a well-established mathematical theory, e.g. there are quantum stochastic integrals defined for them.</p>
<p>For a more physical approach see</p>
<p>Gardiner, Zoller, Quantum Noise, Springer, 2004,</p>
<p>for more mathematical literature see, e.g.,</p>
<p>K. R. Parthasarathy, An Introduction to Quantum Stochastic Calculus, Springer, 1992,</p>
<p>P.A. Meyer, Quantum Probability for Probabilists, Lect. Notes in Math. 1538, Springer, 1995.</p>
<p>The quantum Wiener process has applications to quantum filtering, see, e.g.</p>
<p>L. Bouten, R. van Handel, M. James, An introduction to quantum filtering,
<a href="http://arxiv.org/abs/math/0601741">http://arxiv.org/abs/math/0601741</a>.</p>
|
122,503 | <p>It seems that the current state of quantum Brownian motion is ill-defined. The best survey I can find is <a href="http://arxiv.org/pdf/1009.0843v1.pdf">this one</a> by László Erdös, but the closest the quantum Brownian motion comes to appearing is in this conjecture (p. 30):</p>
<blockquote>
<p>[<b>Quantum Brownian Motion Conjecture</b>]: For small [disorder] $\lambda$ and [dimension] $d \ge 3$, the location of the electron is governed by a heat equation in a vague sense: $$\partial_t \big|\psi_t(x)\big|^2 \sim \Delta_x \big|\psi_t(x)\big|^2 \quad \Rightarrow \quad \langle \, x^2 \, \rangle_t \sim t, \quad t \gg 1.$$
The precise formulation of the first statement requires a scaling limit. The second
statement about the diffusive mean square displacement is mathematically precise, but
what really stands behind it is a diffusive equation that on large scales mimics the
Schrödinger evolution. Moreover, the dynamics of the quantum particle converges to
the Brownian motion as a process as well; this means that the joint distribution of
the quantum densities $\big|\psi_t(x)\big|^2$ at different times $t_1 < t_2 < \dots < t_n$ converges to the corresponding finite dimensional marginals of the Wiener process.</p>
</blockquote>
<p>This is the Anderson model in $\mathbb R^d$ with disordered Hamiltonian $H = -\Delta + \lambda V$. The potential $V$ is disordered, and is generated by i.i.d. random fields; the parameter $\lambda$ controls the scale of the disorder.</p>
<hr>
<p><a href="http://en.wikipedia.org/wiki/Brownian_motion">Classical Brownian motion</a> admits many characterizations and generalizations. For example, <a href="http://en.wikipedia.org/wiki/Wiener_measure">Wiener measure</a> leads to the construction of an <a href="http://en.wikipedia.org/wiki/Abstract_Wiener_space">abstract Wiener space</a>, which is the appropriate setting for the powerful <a href="http://en.wikipedia.org/wiki/Malliavin_calculus">Malliavin calculus</a>. The <a href="http://en.wikipedia.org/wiki/Structure_theorem_for_Gaussian_measures">structure theorem of Gaussian measures</a> says that <i>all</i> <a href="http://en.wikipedia.org/wiki/Gaussian_measure#Gaussian_measures_on_infinite-dimensional_spaces">Gaussian measures</a> are abstract <a href="http://ncatlab.org/nlab/new/Wiener+measure">Wiener measures</a> in this way. I would love to know what all this theory looks like in the language of <a href="http://www.mitchener.staff.shef.ac.uk/free.pdf">non-commutative probability theory</a>.</p>
<p>The QBM Conjecture states roughly that a quantum particle in a weakly disordered environment should behave like a quantum Brownian motion. This is an important open problem, but it doesn't quite capture what a QBM <i>is</i>, nor what different <i>types</i> of QBM may exist. Thus my question:</p>
<blockquote>
<p>What kind of precise mathematical object is a quantum Brownian motion? </p>
</blockquote>
| Carlo Beenakker | 11,260 | <p><strong>A.</strong> To the extent that you think of Brownian motion as a random walk, the natural quantum extension is the <em>quantum random walk</em>. For a physics perspective, see <A HREF="http://arxiv.org/abs/quant-ph/0303081">Quantum random walks - an introductory overview</A>, but you might prefer the more math-oriented exposition of <A HREF="http://arxiv.org/abs/math/0211356">Martin boundary theory of some quantum random walks</A> and <A HREF="http://arxiv.org/abs/quant-ph/0510128">On algebraic and quantum random walks</A>.</p>
<blockquote>
<p>We give a concise prescription of the concept of a quantum random walk
(QRW), using the example of QRW on integers as paradigm. It briefly
explains the notion of quantum coin system and the coin tossing map,
and summarizes two emblematic properties of that walk, namely the
quadratic enhancement of its diffusion rate due to quantum
entanglement between the walker and the entropy increase without
majorization effect of its probability distributions. We conclude with
a group theoretical scheme of classification of various known QRW's.</p>
</blockquote>
<hr>
<p><strong>B.</strong> Concerning the relation between Wiener processes and quantum Brownian motion: A quantum version of the wavelet expansion of a Wiener process has been developed in <A HREF="http://www.applebaum.staff.shef.ac.uk/qbbwave.pdf">A Levy-Cielsielski expansion for quantum Brownian motion and the construction of quantum Brownian bridges</A>.</p>
<blockquote>
<p>Classical Brownian motion has a delightful wavelet expansion obtained
by combining the Schauder system with a sequence of i.i.d. standard
normals. Our main technical result is to obtain a quantum version of
this expansion and so construct quantum Brownian motion in Fock space.
Consequently, only the discrete skeleton provided by a "quantum random
walk" is required to generate the continuous time process. Our result
seems easier to establish than the classical one of Lévy-Cielsielski
as we don’t require logarithmic growth estimates on the squares of
i.i.d. Gaussians, thanks to the nice action of annihilation operators
on exponential vectors.</p>
</blockquote>
<hr>
<p><strong>C.</strong> Concerning a mathematical description of the <em>physical phenomenon</em> of Brownian motion: We are then concerned with the effect of an environment having a large (infinite) number of degrees of freedom on the dynamics of a particle with a few degrees of freedom. So we are seeking a quantum theory of friction, diffusion, and thermalization. The seminal paper here is the path integral theory of <A HREF="http://en.wikipedia.org/wiki/Quantum_dissipation">Caldeira and Leggett.</A> The literature is very extensive, an older but still relevant review is <A HREF="http://www.researchgate.net/publication/222287864_Quantum_Brownian_motion_The_functional_integral_approach">Quantum Brownian Motion: The Functional Integral Approach</A>.</p>
<blockquote>
<p>The quantum mechanical dynamics of a particle coupled to a heat bath
is treated by functional integral methods and a generalization of the
Feynman-Vernon influence functional is derived. The extended theory
describes the time evolution of nonfactorizing initial states and of
equilibrium correlation functions. The theory is illuminated through
exactly solvable models.</p>
</blockquote>
|
36,568 | <p>To do Algebraic K-theory, we need a technical condition that a ring $R$ satisfies $R^m=R^n$ if and only if $m=n$. I know some counterexamples for a ring $R$ satisfies $R=R^2$. </p>
<p>Are there any some example that $R\neq R^3$ but $R^2 = R^4$ or something like that?</p>
<p>(c.f. if $R^2=R^4$, then we need that $R^3=R^5=\ldots =R^{2n+1}$ for any $n>1$)</p>
| Pace Nielsen | 3,199 | <p>There are examples of exotic behavior like that which you propose. The specific objects which you should look for are the Leavitt algebras of type (2,2). A very good source on how to create many such examples is George Bergman's paper "Coproducts and some universal ring constructions" although there are easier methods for the specific example you seek.</p>
|
2,909,022 | <p>I don't understand how to get from the first to the second step here and get $1/3$ in front.</p>
<p>In the second step, $g(x)$ stands for $x^3 + 1$.</p>
<p>\begin{align*}
\int_0^2 \frac{x^2}{x^3 + 1} \,\mathrm{d}x
&= \frac{1}{3} \int_{0}^{2} \frac{1}{g(x)} g'(x) \,\mathrm{d}x
= \frac{1}{3} \int_{1}^{9} \frac{1}{u} \,\mathrm{d}u \\
&= \left. \frac{1}{3} \ln(u) \,\right|_{1}^{9}
= \frac{1}{3} ( \ln 9 - \ln 1 )
= \frac{\ln 9}{3}
\end{align*}</p>
| Jan | 529,121 | <p>Let $u:=x^3+1$. Then it follows that </p>
<p>$$\frac{\operatorname{d}u}{\operatorname{d}x}=3x^2 \Longleftrightarrow \operatorname{d}x=\frac{\operatorname{d}u}{3x^2}.$$</p>
<p>Putting this into the integral leads to</p>
<p>$$\int_{0}^2 \frac{x^2}{x^3+1}\operatorname{d}x= \int_{1}^9 \frac{x^2}{u} \cdot \frac{\operatorname{d}u}{3x^2}= \frac{1}{3} \int_{1}^9 \frac{\operatorname{d}u}{u}$$</p>
<p>and then the rest like above.</p>
|
2,989,494 | <p>I am trying to derive properties of natural log and exponential just from the derivative properties.</p>
<p>Let <span class="math-container">$f : (0,\infty) \to \mathbb{R}$</span> and <span class="math-container">$g : \mathbb{R} \to \mathbb{R}$</span>.
Without knowing or stating that <span class="math-container">$f = \ln(x)$</span> and <span class="math-container">$g = e^x$</span>, but only knowing that <span class="math-container">$f'(x) = \frac{1}{x}$</span> and <span class="math-container">$g'(x) = g(x)$</span> how do I show from just the derivative that:</p>
<ul>
<li><span class="math-container">$f(xy) = f(x)+f(y)$</span> and <span class="math-container">$g(x+y) = g(x)g(y)$</span></li>
<li><span class="math-container">$f(x^y) = yf(x)$</span></li>
<li><span class="math-container">$\lim_{x \to \infty}f(x) = \infty$</span> and <span class="math-container">$\lim_{x \to 0}f(x) = -\infty$</span></li>
<li><span class="math-container">$\lim_{x\to \infty}g(x) = \infty$</span> and <span class="math-container">$\lim_{x \to -\infty}g(x) = 0$</span></li>
<li><span class="math-container">$g \circ f$</span> is an identity of <span class="math-container">$(0,\infty)$</span> and <span class="math-container">$f \circ g$</span> is an identity on <span class="math-container">$\mathbb{R}$</span>.</li>
</ul>
<p>I know how to do that in general, but using <span class="math-container">$\textbf{only}$</span> the derivative, I am a bit stuck, and would appreciate help. The text that inspired me to do this problem (I expanded it a bit), also expects me not to use integration.</p>
| Martund | 609,343 | <p>Assume the contrary: then $x<50$ and $y<50$. Adding both inequalities, we get $x+y<100$, a contradiction.
Hence proved.</p>
|
2,348,131 | <p>In our class, we encountered a problem that is something like this: "A ball is thrown vertically upward with ...". Since the motion of the object is rectilinear and is free fall, we all agree that the magnitude of the acceleration $a(t)$ is 32 feet per second squared. However, we are confused about the sign of $a(t)$, whether it is positive or negative. </p>
<p>Now, various references stated that if we let the upward direction be positive then $a$ is negative, and if we let downward be the positive direction, then $a$ is positive. The problem with their claim is that they did not explain well how they arrived at that conclusion. </p>
<p>My question now is: why is the acceleration $a$ negative if we choose the upward direction to be positive? Note: I need a simple but comprehensive answer. Thanks in advance. </p>
| Jr Antalan | 207,778 | <p>I will try to answer my own question, but correct me if I am wrong. This answer is due to @Yves's answer and to Serway's book, which states that the negative sign in $a$ simply means that the acceleration is in the negative direction. </p>
<p>Clearly, if we set the upward direction to be positive, the gravitational force acting on the object points in the negative direction. Using Newton's law $F=ma$, the net force is negative, so $ma$ is negative; since mass is a positive scalar, $a$ must be negative. That is, the negative sign in $a$ means the acceleration points downward. Thanks for all your answers and comments. </p>
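<p>A tiny numeric illustration of the sign convention (hedged Python sketch; the initial speed $v_0$ is hypothetical, not from the problem): with upward positive and $a=-32\ \mathrm{ft/s^2}$, the velocity decreases to zero at the top and the height formula gives the expected maximum.</p>

```python
g, v0 = 32.0, 64.0  # ft/s^2 and a hypothetical initial upward speed in ft/s

def v(t):
    return v0 - g * t                 # velocity with upward positive, a = -g

def h(t):
    return v0 * t - 0.5 * g * t * t   # height above the launch point

t_peak = v0 / g                       # v(t_peak) = 0 at the top
print(t_peak, h(t_peak))              # 2.0 64.0
```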
|
634,344 | <p>I'm trying to work alone through Fulton's Introduction to Algebraic Topology. He asks whether there is a function $g$ on a region such that $dg$ is the form:
$$\omega =\dfrac{-ydx+xdy}{x^2+y^2}$$
in some regions. I know you can do it on the upper half plane by considering $-\arctan(x/y)$. But I'm a bit confused. I know that you can measure the angle, except maybe on a fixed half line, like taking away the negative axis, but on the other hand, the tangent is bijective on intervals of length $\pi$. So, can you find such a function on the union of the right half plane and the upper half plane? Thanks </p>
| Eric Auld | 76,333 | <p>By the way, this form is notable in that it is closed but not exact on $\mathbb{R}^2 \setminus \{0\}$. (You can tell it is not exact by integrating around the unit circle and getting a nonzero result.)</p>
<p>Be careful of the meaning he intends for the word "region"...I suppose the general meaning is "open connected set".</p>
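<p>The nonzero circulation is easy to confirm numerically (illustrative Python, not part of the original answer): integrating $\omega$ around the unit circle gives $2\pi$, not $0$.</p>

```python
import math

N = 100000
total = 0.0
for i in range(N):
    th = 2 * math.pi * i / N
    x, y = math.cos(th), math.sin(th)
    dx = -math.sin(th) * (2 * math.pi / N)  # dx = x'(θ) dθ
    dy = math.cos(th) * (2 * math.pi / N)   # dy = y'(θ) dθ
    total += (-y * dx + x * dy) / (x * x + y * y)
print(round(total, 6))  # 6.283185 ≈ 2π
```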
|
634,344 | <p>I'm trying to work alone through Fulton's Introduction to Algebraic Topology. He asks whether there is a function $g$ on a region such that $dg$ is the form:
$$\omega =\dfrac{-ydx+xdy}{x^2+y^2}$$
in some regions. I know you can do it on the upper half plane by considering $-\arctan(x/y)$. But I'm a bit confused. I know that you can measure the angle, except maybe on a fixed half line, like taking away the negative axis, but on the other hand, the tangent is bijective on intervals of length $\pi$. So, can you find such a function on the union of the right half plane and the upper half plane? Thanks </p>
| Brian Rushton | 51,970 | <p>Yes, you can let $g$ be the function that takes a polar representation with $r>0$ and $-\pi<\theta<\pi$ and returns just $\theta$. This is equal to shifted versions of your arctan function pieced together.</p>
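<p>In code, this angle function is <code>atan2</code>; a quick numeric check (illustrative Python) that its partial derivatives reproduce $\omega$ away from the negative axis:</p>

```python
import math

h = 1e-6
errs = []
for x, y in [(1.0, 0.5), (-0.3, 2.0), (0.7, -1.1)]:  # sample points off the cut
    gx = (math.atan2(y, x + h) - math.atan2(y, x - h)) / (2 * h)
    gy = (math.atan2(y + h, x) - math.atan2(y - h, x)) / (2 * h)
    r2 = x * x + y * y
    errs.append(max(abs(gx + y / r2), abs(gy - x / r2)))  # dg = (-y dx + x dy)/r²
print(max(errs))  # ≈ 0 (finite-difference error only)
```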
|
2,325,968 | <p>I was trying to calculate $e^{i\pi /3}$.
So here is what I did: $e^{i\pi /3} = (e^{i\pi})^{1/3} = (-1)^{1/3} = -1$</p>
<p>Yet when I plug : $e^{i\pi /3}$ in my calculator it just prints : $0.5 + 0.866i$</p>
<p>Where am I wrong?</p>
| MaudPieTheRocktorate | 256,345 | <p>$(e^{i\pi /3})^3=-1$, but that doesn't mean $e^{i\pi /3}=(-1)^{1/3}$. Similarly, $(-1)^2=1$, but $-1\neq1^{1/2}=1$</p>
<p>There are three different cubic roots of $-1$, and $-1$ is just one of them. $e^{i\pi /3}$ is another, and $e^{-i\pi /3}$ is the third one.</p>
<p>The problem is essentially that taking the cubic root, as taking the square root, is not strictly speaking a function. When you take the cubic root of a nonzero number, you have three possible results, and you need to choose one in order to get a function.</p>
<p>Your calculator simply chose $-1$ as "the" cubic root of $-1$, and "forgot to tell you" that there are two more roots.</p>
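<p>A quick check with Python's <code>cmath</code> (illustrative, not part of the original answer):</p>

```python
import cmath

# the three cube roots of -1
roots = [cmath.exp(1j * cmath.pi / 3), -1, cmath.exp(-1j * cmath.pi / 3)]
for z in roots:
    assert abs(z ** 3 - (-1)) < 1e-12  # all three cube to -1

z = cmath.exp(1j * cmath.pi / 3)       # the calculator's value
print(round(z.real, 3), round(z.imag, 3))  # 0.5 0.866
```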
|
2,325,968 | <p>I was trying to calculate $e^{i\pi /3}$.
So here is what I did: $e^{i\pi /3} = (e^{i\pi})^{1/3} = (-1)^{1/3} = -1$</p>
<p>Yet when I plug : $e^{i\pi /3}$ in my calculator it just prints : $0.5 + 0.866i$</p>
<p>Where am I wrong?</p>
| Saketh Malyala | 250,220 | <p>There exist three numbers that all have the property that $z^3=- 1$.</p>
<p>Your calculator uses $e^{ix}=\cos(x)+i\sin(x)$, which is why it spit out the value $\displaystyle \frac{1}{2}+\frac{\sqrt{3}}{2}i$.</p>
<p>The other two numbers are $-1$ and $\displaystyle \frac{1}{2}-\frac{\sqrt{3}}{2}i$.</p>
|
2,433,174 | <p>I'm struggling with the following (<em>is it true?</em>):</p>
<blockquote>
<p>Let <span class="math-container">$X$</span> be a set and denote <span class="math-container">$\aleph(X)$</span> the <em><strong>cardinality</strong></em> of <span class="math-container">$X$</span>. Suppose that <span class="math-container">$\aleph(X)\geq \aleph_0$</span>, the cardinality <span class="math-container">$\aleph_0:=\aleph(\Bbb N)$</span>, with <span class="math-container">$\Bbb N$</span> the set of natural numbers.</p>
<p>Consider two subsets <span class="math-container">$A_1,A_2\subset X$</span> with <span class="math-container">$\aleph(A_i)<\aleph(X)$</span>, <span class="math-container">$i=1,2$</span>. Show that <span class="math-container">$\aleph(\bigcup_1^2 A_i)<\aleph(X)$</span>.</p>
</blockquote>
<p>From this, it would follow by induction that <span class="math-container">$\aleph(\bigcup_1^n A_i)<\aleph(X)$</span>, for any sets <span class="math-container">$A_1,\dots, A_n\subset X$</span> with <span class="math-container">$\aleph(A_i)<\aleph(X)$</span>, <span class="math-container">$i=1,\dots,n$</span>.</p>
<p>With this result, I would be able to solve <em>Dugundji's Exercise 1-b (Chapter III - Topological Spaces)</em>, which asserts:</p>
<blockquote>
<p>1.b) If <span class="math-container">$\aleph(X)\geq \aleph_0$</span>, then <span class="math-container">$\scr{A}_1$$=\{\emptyset\}\cup \{A\,|\, \aleph(X-A)<\aleph(X)\}$</span> is a topology on <span class="math-container">$X$</span>.</p>
</blockquote>
<p>The fact that <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> are in <span class="math-container">$\scr A_1$</span> is easy. Arbitrary unions of elements of <span class="math-container">$\scr A_1$</span> is in <span class="math-container">$\scr A_1$</span>, since the complement in <span class="math-container">$X$</span> of such an union is an <em><strong>intersection</strong></em> of complements, which have, each of of them, cardinality strictly smaller than <span class="math-container">$\aleph(X)$</span> (intersections decrease cardinality). But to show that finite intersections of elements of <span class="math-container">$\scr A_1$</span> are in <span class="math-container">$\scr A_1$</span>, I need the fact above...</p>
<p>It is easy to see that it is true if <span class="math-container">$\aleph(X)=\aleph_0$</span>, since in this case <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> must be finite. The problem arises if <span class="math-container">$\aleph_0\leq \aleph(A_i)<\aleph(X)$</span>...</p>
| Deepesh Meena | 470,829 | <p>Assume $h=4x-y$ and $k=3x-2y$; now take the image of the point $(h,k)$ with respect to the line $4x+2y=6$. You will get a point, and then you can get the slope easily.</p>
|
1,603,272 | <p>I'm trying to figure out whether the sequence $e^{(-n)^n}$, where $n$ is a natural number, has a convergent subsequence. It's from a past exam paper. I know that I can't apply the Bolzano-Weierstrass theorem because it's not a bounded sequence, but I'm not sure how to test for a convergent subsequence if the original sequence is not convergent. Thanks in advance</p>
| adjan | 219,722 | <p>The subsequence of odd indices $(a_{2n+1})$ converges to zero, since for odd $n$ the exponent $(-n)^n=-n^n$ is negative and tends to $-\infty$.</p>
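<p>Numerically (illustrative Python), the odd-index terms shrink extremely fast:</p>

```python
import math

# a_{2n+1} = exp((-(2n+1))^(2n+1)) = exp(-(2n+1)^(2n+1))
for n in [1, 2, 3]:
    m = 2 * n + 1
    assert math.exp(-(m ** m)) < 10.0 ** (-n)
print(math.exp(-(3 ** 3)))  # ≈ 1.88e-12
```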
|
1,552 | <p>Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200 digit number in a few hours (days? months? or what?).
Can current methods certify that an arbitrary 1000 digit number is prime, or composite in a few hours (days? months? not at all?).</p>
<p>Any broad-brush comments on the current status of primality proving, and how active this field is would be appreciated as well. Same for factoring.</p>
<hr>
<p>Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.". it would be appreciated. (I could answer it in 1990, but what is the status today?)</p>
| Michael Lugo | 143 | <p>I don't think this is the right question to be asking. People aren't going to store long lists of primes. You might be able to store the first 10^12 or so primes (in some compressed form) on your hard drive; but testing the first number not on your table for primality, or factoring it, would be trivial.</p>
<p>There are similar questions you could ask where people actually <em>do</em> keep such lists -- for example, what's the smallest <a href="http://en.wikipedia.org/wiki/Mersenne_prime" rel="noreferrer">Mersenne number</a> (number of form 2^n-1) whose primality is not known? This looks to be <a href="http://en.wikipedia.org/wiki/Great_Internet_Mersenne_Prime_Search" rel="noreferrer">somewhere in the neighorhood of n = 15 million</a>.</p>
|
1,552 | <p>Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200 digit number in a few hours (days? months? or what?).
Can current methods certify that an arbitrary 1000 digit number is prime, or composite in a few hours (days? months? not at all?).</p>
<p>Any broad-brush comments on the current status of primality proving, and how active this field is would be appreciated as well. Same for factoring.</p>
<hr>
<p>Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.". it would be appreciated. (I could answer it in 1990, but what is the status today?)</p>
| Ian H | 151 | <p>What is the smallest integer whose even/odd status is not known?</p>
|
624,715 | <p>Find the solution of the following ODE:
$$y''-y'+y=x^2e^x\cos{x}$$</p>
<p>I know how to solve the following three cases:
$$y''-y'+y=x^2\cos{x}$$
$$y''-y'+y=e^x\cos{x}$$
$$y''-y'+y=x^2e^x$$</p>
<p>But for $f(x)=x^2e^x\cos{x}$, I can't.</p>
<p>Thank you very much!</p>
| Mikasa | 8,581 | <p>Honestly, I don't know how the three cases you made are related to the original ODE. Let me point you to a criterion:</p>
<blockquote>
<p>Let $a_ny^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1y'+a_0y=Q(x)$ where $a_n\neq0$ and $Q(x)$ is not zero in an interval $I$. If <em>no term</em> of $Q(x)$ is the same as a term of $y_c(x)$, then the particular solution $y_p(x)$ is a linear combination of the terms in $Q(x)$ and all its linearly independent derivatives.</p>
</blockquote>
<p>Here, we have $y_c(x)=\exp(x/2)\left(C_1\sin(\frac{\sqrt{3}}{2}x)+C_2\cos(\frac{\sqrt{3}}{2}x)\right)$ so, according to the criterion above, we get: $$y_p(x)=(Ae^x+Bxe^x+Cx^2e^x)\sin(x)+(De^x+Exe^x+Fx^2e^x)\cos(x)$$ Now we can use the <a href="http://en.wikipedia.org/wiki/Method_of_undetermined_coefficients" rel="nofollow">Undetermined coefficient method</a> to find $A,B,C,D,E,F$. Let's find $y_p'(x)$ and $y_p''(x)$:</p>
<p>$$y_p(x)=(A\exp(x)+B\exp(x)+Bx\exp(x)+2Cx\exp(x)+Cx^2\exp(x))\sin(x)+(A\exp(x)+Bx\exp(x)+Cx^2\exp(x))\cos(x)+(D\exp(x)+E\exp(x)+Ex\exp(x)+2Fx\exp(x)+Fx^2\exp(x))*cos(x)-(D\exp(x)+Ex\exp(x)+Fx^2\exp(x))\sin(x)$$ and $$y''_p(x)=(A\exp(x)+2B\exp(x)+Bx\exp(x)+2C\exp(x)+4Cx\exp(x)+Cx^2\exp(x))\sin(x)+(2(A\exp(x)+B\exp(x)+Bx\exp(x)+2Cx\exp(x)+Cx^2\exp(x)))\cos(x)-(A\exp(x)+Bx\exp(x)+Cx^2\exp(x))\sin(x)+(D\exp(x)+2E\exp(x)+Ex\exp(x)+2F\exp(x)+4Fx\exp(x)+Fx^2\exp(x))\cos(x)-(2(D\exp(x)+E\exp(x)+Ex\exp(x)+2Fx\exp(x)+Fx^2\exp(x)))\sin(x)-(D\exp(x)+Ex\exp(x)+Fx^2\exp(x))\cos(x)$$ Now we have $$y''_p(x)-y'_p(x)+y_p(x)=\exp(x)(2\sin(x)Cx+\cos(x)Bx+4\cos(x)Cx+\cos(x)Cx^2+2\cos(x)Fx-\sin(x)Ex-4\sin(x)Fx-\sin(x)Fx^2+\sin(x)B+2\sin(x)C+\cos(x)A+2\cos(x)B+\cos(x)E+2\cos(x)F-\sin(x)D-2\sin(x)E)$$ If we make the latter identity equal to $x^2\exp(x)\cos(x)$ we get $$2C-E-4F=0,~~B+4C+2F=0,~~C=1,~~-F=0,~~B+2C-D-2E=0,~~A+2B+E+2F=0$$ and finally, $$A=6,B=-4,C=1,F=0,E=2,D=-6$$</p>
|
624,715 | <p>Find the solution of the following ODE:
$$y''-y'+y=x^2e^x\cos{x}$$</p>
<p>I know how to solve the following three cases:
$$y''-y'+y=x^2\cos{x}$$
$$y''-y'+y=e^x\cos{x}$$
$$y''-y'+y=x^2e^x$$</p>
<p>But for $f(x)=x^2e^x\cos{x}$, I can't.</p>
<p>Thank you very much!</p>
| Leox | 97,339 | <p>One more way to solve it is using the Laplace transformation $\mathcal{L}.$</p>
<p>Let $\mathcal{L}(y)=Y(p).$ Then
$$
\mathcal{L}(y')=p Y(p)-y(0),\\
\mathcal{L}(y'')=p^2 Y(p)-y'(0)-p y(0),\\ \text{and, using the standard rules for L.t.}\\
\mathcal{L}({x}^{2}{{\rm e}^{x}}\cos \left( x \right))=\frac{1}{\left( p-1-i \right) ^{3}}+ \frac{1}{\left( p-1+i \right) ^{3}}
$$
We have now
$$
\left( {p}^{2}-p+1 \right) Y \left( p \right) -py \left( 0 \right) +y
\left( 0 \right) - y' \left( 0 \right) =
\frac{1}{\left( p-1-i \right) ^{3}}+ \frac{1}{\left( p-1+i \right) ^{3}},
$$
or
$$
Y(p)={\frac {1}{ \left( p-1-i \right) ^{3} \left( {p}^{2}-p+1 \right) }}+{\frac {1}{ \left( p-1+i \right) ^{3} \left( {p}^{2}
-p+1 \right) }}-{\frac {-py \left( 0 \right) +y \left( 0 \right) -
y' \left( 0 \right) }{{p}^{2}-p+1}}.
$$
To find $y$ we compute the inverse Laplace transform for $Y(p).$ After some calculation we get
$$
y(x)=\mathcal{L}^{-1}(Y(p))=\left( 2\,\cos \left( x \right) \left( -3+x \right) +\sin \left( x \right) \left( 6-4\,x+{x}^{2} \right) \right) {{\rm e}^{x}}+1/3\,
\left( 3\,\cos \frac{\sqrt {3}x}{2} \left( 6+y \left( 0
\right) \right) +\sqrt{3}\,\sin \frac{\sqrt {3}x}{2}
\left( -10+2\, y' \left( 0 \right) -y \left( 0
\right) \right) \right) {{\rm e}^{\frac 12\,x}}.
$$
After simplification taking into account that $y(0), y'(0)$ are constant we get
$$
y(x)={{e}^{\frac 12\,x}}(\sin \frac{\sqrt {3}x}{2} { C_1}+\cos \frac{\sqrt {3}x}{2} {C_2})+ \left( 2\,\cos \left( x \right) \left( x-3
\right) +\sin \left( x \right) \left( 6-4\,x+{x}^{2} \right)
\right) {{ e}^{x}},
$$
for some constant $C_1, C_2.$</p>
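<p>The final formula can be sanity-checked numerically (illustrative Python, finite differences, with arbitrary sample constants $C_1, C_2$):</p>

```python
import math

C1, C2 = 1.3, -0.7  # arbitrary sample constants

def y(x):
    yh = math.exp(x / 2) * (C1 * math.sin(math.sqrt(3) * x / 2)
                            + C2 * math.cos(math.sqrt(3) * x / 2))
    yp = (2 * math.cos(x) * (x - 3)
          + math.sin(x) * (6 - 4 * x + x * x)) * math.exp(x)
    return yh + yp

h = 1e-4
max_res = 0.0
for x in [0.0, 1.0, 2.5]:
    d1 = (y(x + h) - y(x - h)) / (2 * h)              # y'
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)   # y''
    max_res = max(max_res,
                  abs(d2 - d1 + y(x) - x * x * math.exp(x) * math.cos(x)))
print(max_res)  # ≈ 0 up to finite-difference error
```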
|
<p>A simplex is regular if all its edges have the same length.</p>
<p>How to test in Mathematica whether a <code>Simplex</code> is regular or not, without checking all the edges manually? I'm not really familiar with loops in Mathematica. I also can't find in the documentation how to access the vertices of a <code>Simplex</code>.</p>
| J. M.'s persistent exhaustion | 50 | <p>As I mentioned in my comment, you can use <code>Subsets[]</code> to enumerate the edges of your simplex:</p>
<pre><code>regularSimplexQ[Simplex[vertices_]] :=
MatrixQ[vertices] && Subtract @@ Dimensions[vertices] == 1 &&
Equal @@ EuclideanDistance @@@ Subsets[vertices, {2}];
regularSimplexQ[_] := False
</code></pre>
<p>Try it out:</p>
<pre><code>regularSimplexQ[Simplex[{{0, 0, 1}, {1, 0, 0}, {1, 0, 1}, {1, 1, 1}}]]
False
regularSimplexQ[Simplex[{{0, 1, 0}, {1, 0, 0}, {0, 0, 1}, {1, 1, 1}}]]
True
</code></pre>
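<p>For comparison, the same idea in Python (illustrative port, with a float tolerance; unlike the Mathematica version above it only checks equal pairwise distances, not the vertex-count condition):</p>

```python
import itertools
import math

def regular_simplex_q(vertices):
    # all pairwise distances equal, up to floating-point tolerance
    dists = [math.dist(p, q) for p, q in itertools.combinations(vertices, 2)]
    return all(abs(d - dists[0]) < 1e-9 for d in dists)

print(regular_simplex_q([(0, 0, 1), (1, 0, 0), (1, 0, 1), (1, 1, 1)]))  # False
print(regular_simplex_q([(0, 1, 0), (1, 0, 0), (0, 0, 1), (1, 1, 1)]))  # True
```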
|
3,125,958 | <p>I've been asked to consider this parabolic equation. </p>
<p><span class="math-container">$ 3\frac{∂^2u}{∂x^2} + 6\frac{∂^2u}{∂x∂y} +3\frac{∂^2u}{∂y^2} - \frac{∂u}{∂x} - 4\frac{∂u}{∂y} + u = 0$</span></p>
<p>I calculated the characteristic coordinates to be <span class="math-container">$ξ = y - x, η = x$</span>. The question then asks to transform the equation to the canonical form. I've got the method in other questions but can't seem to work out how to transfer the method from those examples to this one. </p>
| Dylan | 135,643 | <p>I'm using subscripts as partials to save time</p>
<p><span class="math-container">\begin{align}
u_x &= u_\xi\xi_x + u_\eta\eta_x = - u_\xi + u_\eta \\
u_y &= u_\xi\xi_y + u_\eta\eta_y = u_\xi
\end{align}</span></p>
<p>Using the chain rule again</p>
<p><span class="math-container">\begin{align}
u_{xx} &= (-u_\xi + u_\eta)_x = -u_{\xi\xi}\xi_x - u_{\xi\eta}\eta_x + u_{\eta\xi}\xi_x + u_{\eta\eta}\eta_x = u_{\xi\xi} - 2u_{\xi\eta} + u_{\eta\eta} \\
u_{yy} &= (u_\xi)_y = u_{\xi\xi}\xi_y + u_{\xi\eta}\eta_y = u_{\xi\xi} \\
u_{xy} &= (u_\xi)_x = u_{\xi\xi}\xi_x + u_{\xi\eta}\eta_x = -u_{\xi\xi} + u_{\eta\xi}
\end{align}</span></p>
<p>Combining everything:</p>
<p><span class="math-container">$$ 3u_{xx} + 6u_{xy} + 3u_{yy} - u_x - 4u_y + u = 3u_{\eta\eta} - 3u_\xi - u_\eta + u = 0 $$</span></p>
<p>which is indeed parabolic</p>
|
126,901 | <p>How to evaluate this determinant $$\det\begin{bmatrix}
a& b&b &\cdots&b\\ c &d &0&\cdots&0\\c&0&d&\ddots&\vdots\\\vdots &\vdots&\ddots&\ddots& 0\\c&0&\cdots&0&d
\end{bmatrix}?$$</p>
<p>I am looking for different approaches.</p>
| bgins | 20,321 | <p>Let $A_n=(a_{ij})$ be the $n\times n$ matrix with
$$
a_{ij}=\left\{\matrix{a&i=j=1\\b&i=1\ne j\\c&i\ne1=j\\d&i=j\ne1\\0&\text{otherwise}}\right.
$$
and $\Delta_n=\det A_n$. Then ($\Delta_1=a$ according to my definition above, which may differ from your implicit definition), $\Delta_2=ad-bc$ and for $n\ge2$,
$$
\Delta_{n+1}=d\,\Delta_n-bc\,d^{n-1}=\left(\Delta_n-bcd^{n-2}\right)d
$$
expanding on the bottom row (or equivalently the rightmost column, but with the matrix below transposed and with $c$ instead of $b$).
This is because the $n\times n$ matrix below has determinant
$$\det\begin{bmatrix}
b&b &\cdots&b&b\\
d &0&\cdots&0&0\\
0&d&\ddots&\vdots&0\\
\vdots&\ddots&\ddots&0&0\\
0&\cdots&0&d&0
\end{bmatrix}
= (-1)^{n-1}bd^{n-1}\,,
$$
and is multiplied by $(-1)^nc$ from $a_{n+1,1}=c$,
giving the second term above. Thus
$$
\eqalign{
\frac{\Delta_{n+1}}{d^{n-1}}&=
\frac{\Delta_{n} }{d^{n-2}}-bc
\quad
\text{and}
\quad
\frac{\Delta_1}{d^{-1}}=ad
\quad
\implies
\\\\
\frac{\Delta_{n}}{d^{n-2}}&=
ad-(n-1)bc
\qquad
\implies
\\\\
\Delta_{n}&=\big[ad-(n-1)bc\big]d^{n-2}
}
$$
for $n\ge1$ by induction.</p>
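A quick numerical check of this closed form (my own sketch; the entry values below are arbitrary):

```python
import numpy as np

def Delta(n, a, b, c, d):
    # n x n matrix: a in the top-left corner, b across the rest of the
    # first row, c down the rest of the first column, d on the diagonal
    M = np.zeros((n, n))
    M[0, 0] = a
    M[0, 1:] = b
    M[1:, 0] = c
    M[1:, 1:] = d * np.eye(n - 1)
    return np.linalg.det(M)

a, b, c, d = 2.0, 3.0, 5.0, 7.0
for n in range(2, 8):
    closed_form = (a*d - (n - 1)*b*c) * d**(n - 2)
    assert np.isclose(Delta(n, a, b, c, d), closed_form)
print("closed form matches numpy for n = 2..7")
```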
|
1,579,349 | <p>Need help setting this thing up. I don't really get how to get the derivative. Is it $0$ if you just plug everything in, since there would be no variable left?</p>
| user299317 | 299,317 | <p>I think I got it. I was confused about whether you plug everything in at first, which is where I was stuck. You have to find the derivative first, and then plug in the $x$ value to get your answer:
$y' = 6\cdot 2\cdot(2^2-1)^2 = 108$.</p>
<p>Thanks for the help</p>
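For later readers: assuming the function was $y=(x^2-1)^3$ evaluated at $x=2$ (the thread never states the actual problem, so this is only a guess that reproduces the arithmetic $6\cdot 2\cdot(2^2-1)^2$), SymPy agrees:

```python
import sympy as sp

x = sp.symbols('x')
y = (x**2 - 1)**3      # hypothetical function; not stated in the thread
dy = sp.diff(y, x)     # chain rule gives 6*x*(x**2 - 1)**2
print(dy.subs(x, 2))   # 108
```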
|
2,309,721 | <p>The problem is: Prove that $7|x^2+y^2$ only if $7|x$ and $7|y$ for $x,y∈Z$.</p>
<p>I found a theorem in my book that allows the following transformation:
if $a|b$ and $a|c$ -> $a|(b+c)$</p>
<p>So, can I prove it like this: $7|x^2+y^2 =>7|x^2, 7|y^2 => 7|x*x, 7|y*y => 7|x, 7|y$ ?</p>
<p>I am not really sure because I have this simple example in my head that even if $6|18$ => $6|14+4$, $6 ∤ 14$ and $6 ∤ 4$.</p>
<p>Any help will be appreciated, thank you!</p>
| Qwerty | 290,058 | <p>No, you can't!</p>
<p>The book says if $a|b$ and $a|c$ then $a|(b+c)$, and <strong>not</strong> the reverse. The example you have is a simple counterexample to the converse.</p>
<p>As far as the problem goes, I can give you a way to proceed.</p>
<p>Take $x=7\alpha +\beta$, $y=7\gamma +\delta$. Find out what $x^2+y^2$ evaluates to. Check what the necessary condition on $\beta$ and $\delta$ must be for $7$ to divide $x^2+y^2$.</p>
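A brute-force check of where that leads (a small Python sketch of the residue computation):

```python
# Residues of squares mod 7: beta^2 mod 7 for beta = 0..6
squares_mod7 = sorted({b * b % 7 for b in range(7)})
print(squares_mod7)  # [0, 1, 2, 4]

# The only way p + q = 0 (mod 7) with p, q squares mod 7 is p = q = 0,
# so 7 | x^2 + y^2 forces 7 | x and 7 | y.
for p in squares_mod7:
    for q in squares_mod7:
        if (p + q) % 7 == 0:
            assert p == 0 and q == 0
print("only beta = delta = 0 (mod 7) works")
```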
|
67,516 | <p>The book by Durrett "Essentials of Stochastic Processes" states on page 55 that:</p>
<blockquote>
<p>If the state space S is finite then there is at least one stationary
distribution.</p>
</blockquote>
<ol>
<li><p>How can I find the stationary distribution, for example, for the square 2x2 matrix $[[a,b],[1-a, 1-b]]$? I have tested this with WA, and chains of this form seem to converge to certain probabilities, as <a href="http://www.wolframalpha.com/input/?i=%7B%7B0.01,%200.09%7D,%7B0.99,%200.91%7D%7D%5E101" rel="nofollow">here</a>. But if you look at the general case, <a href="http://www.wolframalpha.com/input/?i=%7B%7Ba,%20b%7D,%7B1-a,%201-b%7D%7D%5E40" rel="nofollow">here</a>, I feel quite confused trying to find the general formula. I just know that it exists, but I cannot see any general formula as $n \rightarrow \infty$.</p></li>
<li><p>But look <a href="http://www.wolframalpha.com/input/?i=%7B%7B0,%201%7D,%7B1,%200%7D%7D%5E38" rel="nofollow">here</a>: the powers of $[[0,1],[1,0]]$ do not converge! So am I right to say that this chain is dependent on initial conditions? If $M=[[0,1],[1,0]]$, the chain will not converge to a stationary distribution. But how can I tell this from the matrix? (not through a series of observations)</p></li>
<li><p>And how can I know when a certain Markov chain is dependent on initial conditions? For example, in the above example, $\det = a-b$ and the eigenvalues are $\lambda_{1} =1$ and $\lambda_{2} = a-b$.</p></li>
<li><p>Now <a href="http://hypertextbook.com/chaos/11.shtml" rel="nofollow">Hypertextbook</a> mentions that <code>"behavior, which exhibits sensitive dependence on initial conditions, is said to be chaotic"</code>, and it looks like we found a case with sensitivity to initial conditions. Is it chaotic?</p></li>
</ol>
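For concreteness, here is the numerical experiment from the first link redone in NumPy (my own sketch; the candidate limit $(1/12,\,11/12)$ comes from solving $Mv=v$ by hand for these entries):

```python
import numpy as np

# column-stochastic matrix from the first WA link: [[a, b], [1-a, 1-b]]
# with a = 0.01, b = 0.09
M = np.array([[0.01, 0.09],
              [0.99, 0.91]])

P = np.linalg.matrix_power(M, 101)
print(P)

# solving M v = v gives v proportional to (b, 1 - a) = (0.09, 0.99),
# i.e. v = (1/12, 11/12); both columns of M^101 should be close to v
v = np.array([1/12, 11/12])
assert np.allclose(P, np.column_stack([v, v]), atol=1e-9)
```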
| Robert Israel | 8,508 | <p>Draw tangent lines at three points on the circle, and see where they intersect.</p>
|
67,516 | <p>The book by Durrett "Essentials of Stochastic Processes" states on page 55 that:</p>
<blockquote>
<p>If the state space S is finite then there is at least one stationary
distribution.</p>
</blockquote>
<ol>
<li><p>How can I find the stationary distribution, for example, for the square 2x2 matrix $[[a,b],[1-a, 1-b]]$? I have tested this with WA, and chains of this form seem to converge to certain probabilities, as <a href="http://www.wolframalpha.com/input/?i=%7B%7B0.01,%200.09%7D,%7B0.99,%200.91%7D%7D%5E101" rel="nofollow">here</a>. But if you look at the general case, <a href="http://www.wolframalpha.com/input/?i=%7B%7Ba,%20b%7D,%7B1-a,%201-b%7D%7D%5E40" rel="nofollow">here</a>, I feel quite confused trying to find the general formula. I just know that it exists, but I cannot see any general formula as $n \rightarrow \infty$.</p></li>
<li><p>But look <a href="http://www.wolframalpha.com/input/?i=%7B%7B0,%201%7D,%7B1,%200%7D%7D%5E38" rel="nofollow">here</a>: the powers of $[[0,1],[1,0]]$ do not converge! So am I right to say that this chain is dependent on initial conditions? If $M=[[0,1],[1,0]]$, the chain will not converge to a stationary distribution. But how can I tell this from the matrix? (not through a series of observations)</p></li>
<li><p>And how can I know when a certain Markov chain is dependent on initial conditions? For example, in the above example, $\det = a-b$ and the eigenvalues are $\lambda_{1} =1$ and $\lambda_{2} = a-b$.</p></li>
<li><p>Now <a href="http://hypertextbook.com/chaos/11.shtml" rel="nofollow">Hypertextbook</a> mentions that <code>"behavior, which exhibits sensitive dependence on initial conditions, is said to be chaotic"</code>, and it looks like we found a case with sensitivity to initial conditions. Is it chaotic?</p></li>
</ol>
| davidlowryduda | 9,754 | <p>It is. But it is not unique, i.e. infinitely many triangles can be drawn from a single circle. (To see this, draw many non-similar triangles, find their incircles, and then scale them so that the circles are all the same size. Then the triangles have the same incircle, though they're different).</p>
<p>I think the easiest would be to fit an equilateral triangle around it. From the center of the circle, mark off 120 degree arcs, and then place the tangent to each arc. These lines will form an equilateral triangle whose incircle is the desired circle.</p>
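A quick numeric illustration of this construction (my own sketch): place the three tangent lines $120$ degrees apart around a unit circle and check that the triangle they cut out is equilateral.

```python
import numpy as np

r = 1.0  # incircle radius
# tangent line to the circle at angle t: cos(t) x + sin(t) y = r
angles = np.deg2rad([90.0, 210.0, 330.0])

vertices = []
for i in range(3):
    t1, t2 = angles[i], angles[(i + 1) % 3]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    vertices.append(np.linalg.solve(A, np.array([r, r])))

sides = [np.linalg.norm(vertices[i] - vertices[(i + 1) % 3]) for i in range(3)]
print(sides)  # all equal to 2*sqrt(3)*r, about 3.4641
```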
|
2,176,081 | <p>I am trying to compute </p>
<blockquote>
<p>$$ \int_0^\infty \frac{\ln x}{x^2 +4}\,dx,$$</p>
</blockquote>
<p>which I find <a href="https://math.stackexchange.com/questions/2173289/integrating-int-0-infty-frac-ln-xx24-dx-with-residue-theorem/2173342">here</a>, without complex analysis. I am consistently getting the wrong answer and am hoping someone can spot the error. </p>
<p>First, denote the integral by $I$, and take $x = \frac{1}{t}$. Hence, the integral becomes </p>
<p>$$I = \int_\infty^0 \frac{\ln(1/t)}{1/t^2 + 4} \left(-\frac{1}{t^2} dt \right) = \int_0^\infty \frac{\ln(1)}{1+4t^2} dt - \int_0^\infty \frac{\ln(t)}{1+4t^2} dt$$</p>
<p>Note that the leftmost integral on the right-hand side is zero. Now, letting $u = 2t$, we get </p>
<p>$$I = -\frac{1}{2} \int_0^\infty \frac{\ln(2u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2}\,du - \frac{1}{2} \int_0^\infty \frac{\ln(u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2} - \frac{I}{2}$$</p>
<p>and therefore $I = - \frac{\ln(2)}{3} \int_0^\infty \frac{1}{1+u^2}du = - \frac{\pi \ln(2)}{6}$. This, however, is not the right answer. So, where did I go wrong?</p>
| Jack D'Aurizio | 44,121 | <p>It is probably easier to perform the following manipulations:
$$ \int_{0}^{+\infty}\frac{\log x}{x^2+4}\,dx\stackrel{x\mapsto 2z}{=} 2\int_{0}^{+\infty}\frac{\log(2)+\log(z)}{4+4z^2}\,dz \\= \color{red}{\frac{\log(2)}{2}\int_{0}^{+\infty}\frac{dz}{1+z^2}}+\color{blue}{\frac{1}{2}\int_{0}^{+\infty}\frac{\log z}{1+z^2}\,dz}$$
leading to $\color{red}{\frac{\pi\log(2)}{4}}$, since the blue integral vanishes by the substitution $z\mapsto\frac{1}{z}$.</p>
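Numerically (a quick sanity check with SciPy; the integral is split at $1$ so the integrable log singularity at $0$ and the infinite tail are handled in separate pieces):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(x) / (x**2 + 4)

# split at 1: quad handles the log endpoint singularity and the
# infinite tail separately
I = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
print(I, np.pi * np.log(2) / 4)  # both approximately 0.544397
```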
|
155,429 | <p>Consider a square skew-symmetric $n\times n$ matrix $A$. We know that $\det(A)=\det(A^T)=(-1)^n\det(A)$, so if $n$ is odd, the determinant vanishes.</p>
<p>If $n$ is even, my book claims that the determinant is the square of a polynomial function of the entries, and Wikipedia confirms this. The polynomial in question is called the <a href="http://en.wikipedia.org/wiki/Pfaffian">Pfaffian</a>.</p>
<p>I was wondering if there was an easy (clean, conceptual) way to show that this is the case, without mucking around with the symmetric group. </p>
| Matt E | 221 | <p>Here is an elaboration of Qiaochu's comment above: </p>
<p>A $2n\times 2n$ matrix $A$ induces a pairing (say on column vectors), namely
$$\langle v,w \rangle := v^T A w.$$
Thus we can think of $A$ as being an element of $(V\otimes V)^*$ (which is
the space of all bilinear pairings on $V$), where $V$ is the space of $2n$-dimensional column vectors.</p>
<p>If $A$ is skew-symmetric, then this pairing is anti-symmetric, and so we can actually regard $A$ as an element of $\wedge^2 V^*$. We can then take the $n$th exterior power of $A$, so as to obtain an element of $\wedge^{2n} V^*$. This latter space is $1$-dimensional, and so if we fix some appropriately normalized basis for it, the $n$th exterior power of $A$ can be thought of just as a number. This is the Pfaffian of $A$ (provided we chose the right basis for $\wedge^{2n} V^*$).</p>
<p>How does this compare to the usual description of determinants via exterior powers: </p>
<p>For this, we regard $A$ as an endomorphism $V \to V$, which induces an endomorphism $\wedge^{2n} V \to \wedge^{2n} V$, which is a scalar (being an endomorphism of a $1$-dimensional space); this is $\det A$.</p>
<p>So now we see where the formula $\det(A) =$ Pf$(A)^2$ comes from: computing the determinant involves taking a $2n$th exterior power of $A$, while computing the Pfaffian involves only taking an $n$th exterior power (because we use the skew-symmetry of $A$ to get an exterior square "for free", so to speak).</p>
<p>Sorting out the details of all this should be a fun exercise.</p>
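For a concrete sanity check in the smallest nontrivial case (my own addition, using the well-known explicit $4\times 4$ Pfaffian $\operatorname{Pf}(A)=a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T  # random real skew-symmetric 4x4 matrix

# explicit 4x4 Pfaffian (0-based indices)
pf = A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

print(np.isclose(np.linalg.det(A), pf**2))  # True
```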
|
155,429 | <p>Consider a square skew-symmetric $n\times n$ matrix $A$. We know that $\det(A)=\det(A^T)=(-1)^n\det(A)$, so if $n$ is odd, the determinant vanishes.</p>
<p>If $n$ is even, my book claims that the determinant is the square of a polynomial function of the entries, and Wikipedia confirms this. The polynomial in question is called the <a href="http://en.wikipedia.org/wiki/Pfaffian">Pfaffian</a>.</p>
<p>I was wondering if there was an easy (clean, conceptual) way to show that this is the case, without mucking around with the symmetric group. </p>
| Qmechanic | 11,127 | <p>Here is an approach using (possibly complex) <a href="https://en.wikipedia.org/wiki/Grassmann_number" rel="nofollow noreferrer">Grassmann variables</a> and Berezin integration<span class="math-container">$^1$</span> to prove the required relation <span class="math-container">$${\rm Det}(A)~=~{\rm Pf}(A)^2. \tag{1}$$</span> This approach isn't purely conceptual, but at least it is easy, we don't fudge the overall sign, we don't muck around much with the symmetric group, and Grassmann variables do implement exterior calculus.</p>
<ol>
<li><p>Define the <a href="https://en.wikipedia.org/wiki/Pfaffian" rel="nofollow noreferrer">Pfaffian</a> of a (possibly complex) antisymmetric matrix <span class="math-container">$A^{jk}=-A^{kj}$</span> (in <span class="math-container">$n$</span> dimensions) as<span class="math-container">$^2$</span>
<span class="math-container">$$ \begin{align}
{\rm Pf}(A)&~:=~\int \!d\theta_n \ldots d\theta_1~
e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
\cr &~=~(-1)^{\lfloor\frac{n}{2}\rfloor} \int \!d\theta_1 \ldots d\theta_n~
e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
\cr &~=~(-1)^{\frac{n}{2}} \int \!d\theta_1 \ldots d\theta_n~
e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
\cr &~=~i^n \int \!d\theta_1 \ldots d\theta_n~
e^{\frac{1}{2}\theta_j A^{jk}\theta_k}
\cr &~=~ \int \!d\theta_1 \ldots d\theta_n~
e^{-\frac{1}{2}\theta_j A^{jk}\theta_k}.\end{align}
\tag{2}$$</span>
In the last equality of eq. (2), we rotated the Grassmann variables <span class="math-container">$\theta_k\to i\theta_k$</span> with the <a href="https://en.wikipedia.org/wiki/Imaginary_unit" rel="nofollow noreferrer">imaginary unit</a>. </p></li>
<li><p>Define the determinant as
<span class="math-container">$$ {\rm Det}(A)~:=~\int \!d\theta_1 ~d\widetilde{\theta}_1 \ldots d\theta_n ~d\widetilde{\theta}_n~ e^{\widetilde{\theta}_j A^{jk}\theta_k}
. \tag{3}$$</span><br>
It is not hard to prove via coordinate substitution that eq. (3) indeed reproduces the standard definition of the <a href="https://en.wikipedia.org/wiki/Determinant" rel="nofollow noreferrer">determinant</a>.</p></li>
<li><p>If we make a change of coordinates
<span class="math-container">$$ \theta^{\pm}_k~=~ \frac{\theta_k\pm \widetilde{\theta}_k}{\sqrt{2}}, \qquad k~\in~\{1,\ldots,n\},\tag{4} $$</span>
in eq. (3), the super-Jacobian becomes <span class="math-container">$(-1)^n$</span>.</p></li>
<li><p>Therefore we calculate
<span class="math-container">$$\begin{align} {\rm Det}(A)&\stackrel{(3)+(4)}{=}~(-1)^n\int \!d\theta^+_1 ~d\theta^-_1 \ldots d\theta^+_n ~d\theta^-_n~
e^{\frac{1}{2}\theta^+_j A^{jk}\theta^+_k
-\frac{1}{2}\theta^-_j A^{jk}\theta^-_k}\cr
&~~=~\int \!d\theta^-_1 \ldots d\theta^-_n~d\theta^+_n\ldots d\theta^+_1 ~~e^{\frac{1}{2}\theta^+_j A^{jk}\theta^+_k}
e^{-\frac{1}{2}\theta^-_j A^{jk}\theta^-_k}\cr
&~~\stackrel{(2)}{=}~{\rm Pf}(A)^2, \end{align}\tag{5}$$</span><br>
which proves eq. (1).<span class="math-container">$\Box$</span></p></li>
</ol>
<p>--</p>
<p><span class="math-container">$^1$</span> We use the sign convention that Berezin integration <span class="math-container">$$\int d\theta_i~\equiv~\frac{\partial}{\partial \theta_i}\tag{6} $$</span> is the same as differentiation wrt. <span class="math-container">$\theta_i$</span> acting from left. See e.g. <a href="https://physics.stackexchange.com/q/15786/2451">this</a> Phys.SE post and <a href="https://math.stackexchange.com/q/1718670/11127">this</a> Math.SE post.</p>
<p><span class="math-container">$^2$</span> The <a href="https://en.wikipedia.org/wiki/Parity_of_a_permutation" rel="nofollow noreferrer">sign</a> of the permutation <span class="math-container">$(1, \ldots, n)\mapsto(n, \ldots, 1)$</span> is given by <span class="math-container">$(-1)^{\frac{n(n-1)}{2}}=(-1)^{\lfloor\frac{n}{2}\rfloor}$</span>, where <span class="math-container">$\lfloor\frac{n}{2}\rfloor$</span> denotes the <a href="https://planetmath.org/integerpart" rel="nofollow noreferrer">integer part</a> of <span class="math-container">$\frac{n}{2}$</span>. One may show that the Pfaffian (2) vanishes in odd dimensions <span class="math-container">$n$</span>.</p>
|
162,655 | <p>Does there exist a Ricci-flat Riemannian or Lorentzian manifold which is geodesically complete but not flat? And is there any theorem about Ricci-flat but non-flat manifolds? </p>
<p>I am especially interested in the case of a Lorentzian manifold whose signature is $(-,+,+,+)$. Of course, the example need not be restricted to the Lorentzian case.</p>
<p>I know there are many Ricci-flat cases in general relativity, namely the vacuum solutions to Einstein's equations. But the ones I know, such as the Kerr solution, are all geodesically incomplete. So I want a geodesically complete example and a reference. Thanks!</p>
| Otis Chodosh | 1,540 | <p>Your claim that "all solutions from general relativity are geodesically incomplete" is not true. The classical Schwarzschild/Kerr black hole solutions are geodesically incomplete, and along with Minkowski space (which is flat), these are the most well known explicit metrics. </p>
<p>However, many solutions to the (matterless) Einstein equations are geodesically complete. This follows, for example, from the work of Christodoulou-Klainerman <a href="http://press.princeton.edu/titles/5159.html">http://press.princeton.edu/titles/5159.html</a>, which (<em>very</em> roughly) says that for a sufficiently small appropriate perturbation of $(\mathbb{R}^3,\delta)$, using this as initial data for the Einstein equations yields a Lorentzian Ricci flat manifold which is geodesically complete (and much more: they showed it was asymptotic to Minkowski space in some sense). A nice discussion is given <a href="http://www.numdam.org/item?id=SEDP_1989-1990____A15_0">here</a>.</p>
<hr>
<p>Continuing the theme of physically related Ricci flat metrics, here is an interesting (explicit) Riemannian example:</p>
<p>The Lorentzian Schwarzschild metric (in $4$-dimensions) is incomplete, as you say. However, a strange observation is that formally setting $\tau = it$ (known to the physicists as "Wick rotation") yields a <em>Riemannian</em> manifold which is Ricci flat. The amazing thing is that this metric turns out to be complete, as long as $\tau$ is considered to be periodic with the appropriate period. </p>
<p>This gives a complete Ricci flat metric on $S^2\times S^1\times\mathbb{R}$. </p>
<p>See, for example, section 2 of <a href="http://arxiv.org/pdf/hep-th/9112065v1.pdf">http://arxiv.org/pdf/hep-th/9112065v1.pdf</a>. Or you can search for "Euclidean black hole" or "Schwarzschild instanton" for more physics literature. </p>
|