2,953,837
<p>Given <span class="math-container">$n_1$</span> number of a's, <span class="math-container">$n_2$</span> number of b's, <span class="math-container">$n_3$</span> number of c's.</p> <p>They form a sequence using all these characters such that no two a's and no two b's are adjacent.</p> <p>(a and b can be adjacent, but two a's or two b's cannot be adjacent. c has no restrictions.)</p> <p>For example, acbccab is valid for <span class="math-container">$n_1=2$</span>, <span class="math-container">$n_2=2$</span>, <span class="math-container">$n_3=3$</span>,</p> <p>but</p> <p>cbcbcaa is not valid, as two a's are adjacent.</p> <p>I have tried a lot of things but nothing worked.</p> <p>Can somebody tell me how to solve this problem?</p>
Ross Millikan
1,827
<p>I would define coupled recurrences. Let <span class="math-container">$A(x,y,z)$</span> be the number of strings with <span class="math-container">$x\ a$</span>s, <span class="math-container">$y\ b$</span>s, and <span class="math-container">$z\ c$</span>s that end in <span class="math-container">$a$</span>, and similarly for <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. Then <span class="math-container">$$A(x,y,z)=B(x-1,y,z)+C(x-1,y,z)$$</span> and similarly for <span class="math-container">$B$</span>. The recurrence for <span class="math-container">$C$</span> is slightly different, since a <span class="math-container">$c$</span> may follow any letter. A recursive computer program will make it easy. There might also be a generating function approach.</p>
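A minimal sketch of that recursive program in Python (the function names and base cases are my own filling-in of the recurrences above):

```python
from functools import lru_cache

# A(x, y, z): number of valid strings with x a's, y b's, z c's that end in 'a';
# B and C are analogous. The base cases handle the one-letter strings.

@lru_cache(maxsize=None)
def A(x, y, z):
    if x == 0:
        return 0
    if (x, y, z) == (1, 0, 0):
        return 1
    return B(x - 1, y, z) + C(x - 1, y, z)  # an 'a' may follow a 'b' or a 'c'

@lru_cache(maxsize=None)
def B(x, y, z):
    if y == 0:
        return 0
    if (x, y, z) == (0, 1, 0):
        return 1
    return A(x, y - 1, z) + C(x, y - 1, z)  # a 'b' may follow an 'a' or a 'c'

@lru_cache(maxsize=None)
def C(x, y, z):
    if z == 0:
        return 0
    if (x, y, z) == (0, 0, 1):
        return 1
    # a 'c' may follow any letter
    return A(x, y, z - 1) + B(x, y, z - 1) + C(x, y, z - 1)

def count(x, y, z):
    """Total number of valid strings with the given letter counts."""
    return A(x, y, z) + B(x, y, z) + C(x, y, z)
```

For instance, `count(2, 0, 1)` finds the single valid string `aca`.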
514
<p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p> <p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p> <p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p> <p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p> <p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p> <p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p> <p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p> <p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p> <p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
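The iteration described is a couple of lines of Python; this is a plain sketch, not the asker's actual cloud/dynamic-programming setup:

```python
def collatz_steps(n):
    """Number of applications of the Collatz map needed for n to reach 1.

    The while loop terminating for every n >= 1 is exactly the conjecture.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, `collatz_steps(6)` is 8, via 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1.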
Patrick Da Silva
10,704
<p>I was so pissed after testing one of my own conjectures that I remembered this question and wanted to post it here.</p> <p>I conjectured after numerical observations that for every prime $p$ and integers $k \ge 1, n \ge 1$, $$ p^k \,\|\, 2^n-1 \quad \Longleftrightarrow \quad p^{k-1} \,\|\, n \quad \text{and} \quad O(2,p) \,\mid\, n, $$ where $O(2,p)$ is the least integer $m$ such that $2^m \equiv 1 \pmod p$, and $\|$ stands for exact division (i.e. $p^k \,\|\, m$ means $p^k \mid m$ but $p^{k+1} \nmid m$). This conjecture happens to be true for the first $180$ primes and the first $3000$ multiples of $O(2,p)$ (when $n$ is not a multiple of $O(2,p)$ we already know what happens). But it so happens that $1093$ is prime, that $O(2,1093) = 364$ and $2^{364} \equiv 1 \pmod {1093^2}$, so the statement above fails for $k=1$, $n = 364$ and $p=1093$, because the division on the LHS is not exact.</p>
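The counterexample is easy to reproduce; here is a quick Python check (my own sketch) of the two facts used above:

```python
def order_of_2(p):
    """O(2, p): the least m >= 1 with 2^m == 1 (mod p)."""
    x, m = 2 % p, 1
    while x != 1:
        x = (2 * x) % p
        m += 1
    return m

p = 1093
m = order_of_2(p)                  # the order O(2, 1093)
wieferich = pow(2, m, p * p) == 1  # 1093^2 | 2^364 - 1, so 1093 || 2^364 - 1 fails
```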
4,002
<p>I'm trying to obtain a series of points on the unit sphere with a somewhat homogeneous distribution, by minimizing a function depending on distances (I took $\exp(-d)$). My points are represented by spherical angles $\theta$ and $\phi$, starting by choosing equidistributed random vectors:</p> <pre><code>pts = Apply[{2 π #1, ArcCos[2 #2 - 1]} &amp;, RandomReal[1, {100, 2}], 1]; </code></pre> <p>The energy function is defined first:</p> <pre><code>energy[p_] := Module[{cart}, cart = Apply[{Sin[#1]*Cos[#2], Sin[#1]*Sin[#2], Cos[#1]} &amp;, p, 1]; Total[Outer[Exp[-Norm[#1 - #2]] &amp;, cart, cart, 1], 2] ] </code></pre> <p>But now, I can’t manage to get the right routine for minimization. I tried <code>FindMinimum</code>, which does local minimization from a given starting point, which is what I want. But it should operate on an expression of literal variables, so I'm kind of screwed:</p> <pre><code>FindMinimum[energy[p], {p, pts}] </code></pre> <p>This gives the errors:</p> <pre><code>Outer::normal: Nonatomic expression expected at position 2 in Outer[Exp[-Norm[#1-Slot[&lt;&lt;1&gt;&gt;]]]&amp;,p,p,1]. &gt;&gt; FindMinimum::nrnum: The function value […] is not a real number at {p} = […] &gt;&gt; </code></pre> <p>The above obviously doesn't work, but I don't think it's wise to introduce a series of 200 literal variables. There has to be another way, hasn't there? Or is there an efficient way of introducing a lot of variables?</p>
Jens
245
<p>To get arbitrarily many formal variables, you can use <code>Array</code>. But with those variables, your function definition won't work because of the <code>Apply</code> statement. So I modified your definition as follows (I reduced the point number for testing purposes):</p> <pre><code>pts = Apply[{ArcCos[2 #2 - 1], 2 \[Pi] #1} &amp;, RandomReal[1, {10, 2}], 1]; Clear[energy]; Clear[a]; vars = Array[a, {Length[pts], 2}]; energy[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Total[Outer[Exp[-Norm[#1 - #2]] &amp;, cart, cart, 1], 2]]; FindMinimum[energy[vars], Transpose[{Flatten@vars, Flatten@pts}]] </code></pre> <blockquote> <p><code>{32.2548, {a[1, 1] -&gt; 1.93787, a[1, 2] -&gt; 1.72361, a[2, 1] -&gt; 1.11355, a[2, 2] -&gt; 0.893035, a[3, 1] -&gt; 6.21077, a[3, 2] -&gt; 2.1405, a[4, 1] -&gt; 3.06917, a[4, 2] -&gt; 2.14062, a[5, 1] -&gt; 1.06997, a[5, 2] -&gt; 2.50937, a[6, 1] -&gt; 4.21367, a[6, 2] -&gt; 1.69561, a[7, 1] -&gt; 5.07748, a[7, 2] -&gt; 2.48594, a[8, 1] -&gt; 4.31041, a[8, 2] -&gt; 0.111206, a[9, 1] -&gt; 4.25016, a[9, 2] -&gt; 3.31368, a[10, 1] -&gt; 5.11923, a[10, 2] -&gt; 0.955784}}</code></p> </blockquote> <p>The form of the array passed to <code>energy</code> matches the $N\times2$ dimension list that is expected by the line creating the <code>cart</code> variable. In the <code>FindMinimum</code> statement the dummy variables and initial conditions are specified as a single list of pairs by using <code>Flatten</code> on both. 
</p> <p>There is the usual wrinkle that the minimization may need to be tweaked for precision, but that's a different issue.</p> <p>Finally, to get the minimizing point list, you have to do </p> <pre><code>vars/.Last[%] </code></pre> <p><strong>Edit</strong></p> <p>Depending on the function to be optimized, it's sometimes faster to avoid the use of derivatives by specifying the initial conditions for <code>FindMinimum</code> in the form of three numbers:</p> <pre><code>FindMinimum[energy[vars], Transpose[{Flatten@vars, Flatten@pts, Flatten@pts - .1, Flatten@pts + .1}]] </code></pre> <p><strong>Edit 2</strong></p> <p>I did get a significant speed-up with this for your example, but the performance depends on the random starting points (and on the choice of bracket width) so I can't say anything definitive. That seems like a topic for a different question.</p> <p><strong>Edit 3</strong></p> <p>Though I didn't look at the speed issue in detail, forcing <code>FindMinimum</code> to work with <em>numerical</em> derivatives may be the worst option here. That will happen if you define your function <code>energy</code> only for numerical arguments, as in </p> <pre><code>energy[p : {{_?NumericQ, _?NumericQ} ..}] := </code></pre> <p>followed by either your own or my initial definition above. So although that's common advice people give in these applications, it is not going to be the fastest approach here.</p> <p><strong>Edit 4</strong></p> <p>I just had another idea on how to improve the speed of my solution: the use of <code>Norm</code> might make it harder to estimate the Hessian for this function. And indeed, when I got rid of <code>Norm</code> there was a significant speed gain (note that the initial solution above is already faster than the <code>_?NumericQ</code> approach even when the latter is <em>compiled</em> while mine is not). 
I think this is worth adding here because <code>Norm</code> seems like a natural thing to use in pair potentials, even if the energy expression becomes more complicated than the one in this question.</p> <p>So here is the new version, with <code>Norm</code> replaced by <code>Sqrt[(#1 - #2).(#1 - #2)]</code>. Observe that I have now put back the original particle number of <code>100</code> because on my laptop this takes less than 8 seconds to evaluate!</p> <pre><code>pts = Apply[{ArcCos[2 #2 - 1], 2 \[Pi] #1} &amp;, RandomReal[1, {100, 2}], 1]; Clear[energy]; Clear[a]; vars = Array[a, {Length[pts], 2}]; energy[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Total[Outer[Exp[-Sqrt[(#1 - #2).(#1 - #2)]] &amp;, cart, cart, 1], 2]]; FindMinimum[energy[vars], Transpose[{Flatten@vars, Flatten@pts}]] </code></pre> <p>Oh, and one more thing I changed (unrelated to the question), is to switch your definitions of polar and azimuthal angles in <code>pts</code>.</p>
4,002
Mark McClure
36
<p>Here are a few comments.</p> <p>First, I believe that you have switched the roles of $\phi$ and $\theta$ in your first definition. Thus, a slight edit of your code yields the following:</p> <pre><code>pts = Apply[{ArcCos[2 #2 - 1], 2 #1*Pi} &amp;, RandomReal[1, {10000, 2}], 1]; pts3D = {Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp; /@ pts; Graphics3D[Point[pts3D]] </code></pre> <p>Now, <code>pts3D</code> is already a nice uniform distribution on the sphere. Is this all you want?</p> <p>If you do need to go through the energy minimization, then you can use <code>FindMinimum</code> since it <em>does</em> allow the variables to be tensors, and your original <code>FindMinimum</code> command is fine. The problem is that you haven't restricted your definition of <code>energy</code> to work only with numeric values. Thus, you can minimize the energy (with compilation for speed) like so:</p> <pre><code>energy1[p:{{_?NumericQ,_?NumericQ}..}] := Module[{cart}, cart = Apply[{Sin[#1]*Cos[#2], Sin[#1]*Sin[#2], Cos[#1]} &amp;, p, 1]; Total[Outer[Exp[-Norm[#1 - #2]] &amp;, cart, cart, 1], 2] ]; cEnergy2 = Compile[{{p,_Real,2}},Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]],Cos[#[[1]]]} &amp;, p, 1]; Sum[Exp[-Sqrt[(u-v).(u-v)]],{u,cart},{v,cart}] ], CompilationTarget -&gt; "C", RuntimeOptions -&gt; "Speed"]; energy2[p:{{_?NumericQ,_?NumericQ}..}] := cEnergy2[p]; SeedRandom[1]; pts=Apply[{ArcCos[2 #2-1], 2#1*Pi}&amp;,RandomReal[1,{20,2}],1]; FindMinimum[energy2[p],{p,pts}]//AbsoluteTiming FindMinimum[energy1[p],{p,pts}]//AbsoluteTiming </code></pre> <p>I think this produces the result that you want.</p> <p>Note that I've used compilation to speed up the code by a factor of nearly twenty on my machine. But the time complexity is such that even 100 points is out of reach.</p>
26,451
<p>I am trying to solve the following:</p> <p>$\begin{align*} &amp;X \sim N(1,1)\\ &amp;\mathrm{cov}(X, X^3) = \text{?} \end{align*}$</p> <p>where $\mathrm{cov}$ is the covariance.</p> <p>How would you do this in <em>Mathematica</em>?</p> <p>I have tried</p> <pre><code>X = NormalDistribution[1, 1] cov[x_, y_] := Mean[TransformedDistribution[a*b, {a \[Distributed] x, b \[Distributed] y}]] - Mean[x] Mean[y] cov[X, TransformedDistribution[a^3, a \[Distributed] X]] </code></pre> <p>But this doesn't seem to work.</p>
Silvia
17
<p><em>(This is too long for a comment.)</em></p> <p>About your comment under 0x4A4D's answer: I think you didn't make it clear enough if your $X$ is a <strong>random variable</strong> or <strong>fixed data</strong> generated from some distribution. Usually, we interpret the notation in your question with the former meaning. In that case, $Y=X^3$ merely means a certain relationship between the <strong>distributions</strong> of two <strong>independent random variables</strong>, so $\mathbb{E}(XY)\neq\mathbb{E}(X\cdot X^3)$.</p> <ul> <li><p>Compare the differences between the following two cases:</p> <pre><code>xdata = RandomVariate[NormalDistribution[1, 1], 10^6]; x3data = xdata^3; Covariance[xdata, x3data] </code></pre> <blockquote> <p><code>5.96164</code></p> </blockquote> <pre><code>xdata = RandomVariate[NormalDistribution[1, 1], 10^6]; x3data = RandomVariate[NormalDistribution[1, 1], 10^6]^3; Covariance[xdata, x3data] </code></pre> <blockquote> <p><code>0.00830199</code></p> </blockquote></li> </ul> <p><em>Mathematica</em> has the ability to deal with multivariable distributions:</p> <pre><code>multiDist = TransformedDistribution[{x1, x2^3}, {x1 \[Distributed] NormalDistribution[1, 1], x2 \[Distributed] NormalDistribution[1, 1]}] Covariance[multiDist, 1, 2] </code></pre> <blockquote> <p><code>0</code></p> </blockquote> <p>In the latter case:</p> <pre><code>multiDist = TransformedDistribution[{x, x^3}, x \[Distributed] NormalDistribution[1, 1]]; Covariance[multiDist, 1, 2] </code></pre> <blockquote> <p><code>6</code></p> </blockquote> <p>Also we can calculate the covariance with the expanded formula:</p> <pre><code>xyDist = TransformedDistribution[x1 x2^3, {x1 \[Distributed] NormalDistribution[1, 1], x2 \[Distributed] NormalDistribution[1, 1]}] Exy = Expectation[xy, xy \[Distributed] xyDist] Ex = Expectation[x, x \[Distributed] NormalDistribution[1, 1]] Ey = Expectation[x^3, x \[Distributed] NormalDistribution[1, 1]] COVxy = Exy - Ex Ey </code></pre> <blockquote> 
<p><code>0</code></p> </blockquote> <p>In the latter case:</p> <pre><code>xyDist = TransformedDistribution[x x^3, x \[Distributed] NormalDistribution[1, 1]]; Exy = Expectation[xy, xy \[Distributed] xyDist]; Ex = Expectation[x, x \[Distributed] NormalDistribution[1, 1]]; Ey = Expectation[x^3, x \[Distributed] NormalDistribution[1, 1]]; COVxy = Exy - Ex Ey </code></pre> <blockquote> <p><code>6</code></p> </blockquote> <p>Please note the difference between the distributions.</p>
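As an independent cross-check of the value $6$, the covariance can be computed by hand from the raw moments of $N(1,1)$; a small Python sketch using the standard normal moment formulas:

```python
# For X ~ N(mu, s2): E[X^3] = mu^3 + 3*mu*s2 and E[X^4] = mu^4 + 6*mu^2*s2 + 3*s2^2.
mu, s2 = 1, 1
EX  = mu                                  # E[X]   = 1
EX3 = mu**3 + 3 * mu * s2                 # E[X^3] = 4
EX4 = mu**4 + 6 * mu**2 * s2 + 3 * s2**2  # E[X^4] = 10
cov = EX4 - EX * EX3                      # Cov(X, X^3) = E[X^4] - E[X] E[X^3] = 6
```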
3,595,622
<p><strong>Problem: Give an example of a linear continuum which is not the real line <span class="math-container">$\mathbb{R}$</span>, nor topologically equivalent to a subspace of <span class="math-container">$\mathbb{R}$</span>.</strong></p> <p><strong>Definition of Linear Continuum:</strong> Let <span class="math-container">$X$</span> be a linearly ordered set with order <span class="math-container">$&lt;$</span>. We say that <span class="math-container">$X$</span> is a linear continuum iff it satisfies the following two axioms:</p> <p>(1) LUB: <span class="math-container">$X$</span> has the least upper bound property. (2) Betweenness: <span class="math-container">$\forall x &lt; y \in X$</span>, <span class="math-container">$\exists z \in X$</span> such that <span class="math-container">$x &lt; z &lt; y$</span>.</p> <p>This is how I did it; I'm not sure whether it's accurate.</p> <p>I tried to prove that <span class="math-container">$I \times I$</span> is not connected under the subspace topology of <span class="math-container">$\mathbb{R}^{2}$</span> with the dictionary order. Since <span class="math-container">$\{x\} \times I$</span> is open in <span class="math-container">$I \times I$</span> for each <span class="math-container">$x \in I$</span>, taking, say, <span class="math-container">$(\{a\} \times [a,b]) \cup (\{y\} \times I)$</span> where <span class="math-container">$y \in [a,b]$</span>, we can clearly say that their intersection will be <span class="math-container">$\emptyset$</span>, i.e. <span class="math-container">$(\{a\} \times [a,b]) \cup (\{y\} \times I) = I \times I$</span> and <span class="math-container">$(\{a\} \times [a,b]) \cap (\{y\} \times I) = \emptyset$</span>. Hence, it's not connected under the subspace topology of <span class="math-container">$\mathbb{R}^{2}$</span>, but it is still a linear continuum.</p> <p>I was trying to come up with something else, but unfortunately I couldn't get a better example. I need help from someone on this. I appreciate your time and patience.</p>
The_Sympathizer
11,172
<p>A ready example that comes to my mind from prior interests of mine, and that is fairly natural in that it is not specifically contrived for this purpose, is the <strong>Dedekind completion of a suitably large non-Archimedean ordered field.</strong></p> <p>Non-Archimedean ordered fields can be made of any cardinality one desires: it suffices to truncate the construction process for Conway's <em>surreal numbers</em> "just before" a suitably large "birthday", where "just before" means to take the union of all the preceding steps, not including the step with that birthday. The cardinality of this set will be at least as large as that of the ordinal labelling the birthday (it may be equal; I don't know this part), and if that ordinal is large enough, one can guarantee it is a field; in particular, any uncountable ordinal will work, so one can use the initial ordinal of a cardinal larger than the real continuum. Then take the Dedekind completion: by inclusion, it must also surpass the real continuum in cardinality, and it will be a linear (general) continuum, though algebraically a rather bad structure. As the cardinality surpasses that of <span class="math-container">$\mathbb{R}$</span>, topologically this cannot be homeomorphic thereto.</p> <hr> <p>ADD: in fact, you don't need to worry so much about the field property, since this is a purely topological matter. It suffices to take the stage set <span class="math-container">$S_\alpha$</span> of all surreals with birthday up to and including <span class="math-container">$\alpha$</span>, an uncountable ordinal larger than the initial ordinal of the continuum, e.g. <span class="math-container">$\alpha = \mathrm{init}(\beth_2)$</span>, and then Dedekind-complete it, i.e. form <span class="math-container">$\mathrm{DC}(S_\alpha)$</span>. 
It will be compact (surreals "equivalent" to <span class="math-container">$\pm \alpha$</span> will bookend it), but also fail the homeomorphism properties regardless.</p>
1,401,898
<p>I need a test for primality that I apply to $2^{255}-19$ (which is claimed to be prime) and certify to be correct with the ACL2 theorem prover. This means that I must be able to code the test in Common LISP, run it on this case in a reasonable period of time (I'd be happy if it ran in a day), and write a proof of correctness of the test that is simple enough to be mechanized in the ACL2 logic.</p>
Peter
82,961
<p>This very small program written in PARI/GP shows the result and the time needed for the calculation. I have run this multiple times, and while the times differed, it never took longer than $200\,\mathrm{ms}$. The routine certifies the primality using the Adleman–Pomerance–Rumely test (APR test), which is one of the fastest known algorithms for certifying primes.</p> <pre><code>? gettime();print(isprime(2^255-19,2));gettime() 1 %1 = 110 ? </code></pre> <p>Asymptotically, there must be a better algorithm, because it is now known that the decision problem "is a given natural number $n&gt;1$ prime?" is in $P$; that is, it can be solved in polynomial time. </p>
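If PARI/GP is unavailable, even plain Python handles a number of this size instantly; here is a hedged sketch of a Miller–Rabin test (note: unlike the APR test, this gives only probable-prime evidence, not the certified primality the question requires):

```python
def miller_rabin(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Strong probable-prime test of n to the given fixed bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

`miller_rabin(2**255 - 19)` reports the number as a probable prime in a fraction of a second.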
340,855
<p>Say there is a matrix A:</p> <p>$$\begin{bmatrix} 1 &amp; 2 &amp; 0 &amp; 2 \\ 0 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix}$$</p> <p><strong>What is the column space of A?</strong> I am confused about whether to exclude the non-pivot columns.</p> <p><strong>What is the dimension of the column space?</strong> The dimension of the column space, or the dimension of the basis of the column space? Can it be either 4 or 3?</p> <p><strong>What is the basis of the column space?</strong> This is just the pivot columns. Is this the so-called $\operatorname{Col}A$?</p>
Sepideh Bakhoda
36,591
<p>The dimension of the column space of this matrix cannot be 4, because the dimension of the column space equals the dimension of the row space, and the number of rows is 3, so the number of linearly independent rows (and hence the dimension) is at most 3.</p>
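To illustrate (my own sketch, not part of the answer): exact row reduction of the matrix from the question gives rank 3 for both the matrix and its transpose, i.e. row rank equals column rank here.

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col, nrows, ncols = 0, 0, len(m), len(m[0])
    while rank < nrows and col < ncols:
        # find a pivot in the current column at or below the current row
        pivot = next((r for r in range(rank, nrows) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate the pivot column from all other rows
        for r in range(nrows):
            if r != rank and m[r][col] != 0:
                factor = m[r][col] / m[rank][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

A = [[1, 2, 0, 2],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
```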
4,167,747
<p>I need to prove the following statement: If <span class="math-container">$1+ \alpha = \alpha$</span>, then <span class="math-container">$\alpha$</span> is an infinite ordinal.</p> <p>I am trying to use the Cantor–Bernstein–Schröder theorem (CBS) to show that if <span class="math-container">$1+ \alpha \leq \alpha$</span>, i.e., there is an injection from <span class="math-container">$1+ \alpha$</span> to <span class="math-container">$\alpha$</span>, then <span class="math-container">$\alpha$</span> must be an infinite ordinal.</p> <p>Is this the right approach? I feel like this statement can be proven using a simpler approach like induction, but I can't seem to think of one.</p> <p>Also, does the converse always hold?</p> <p>Thank you in advance.</p>
Cooler Paradox
654,036
<p>In the following, we will use the von Neumann definition of ordinal numbers.</p> <p>First suppose that <span class="math-container">$1 + \alpha = \alpha$</span>. This means that there is an order isomorphism <span class="math-container">$f: B \rightarrow \alpha$</span>, where <span class="math-container">$B$</span> is the set</p> <p><span class="math-container">$B = \{(0, 0)\} \sqcup \alpha \times \{1\}$</span></p> <p>imbued with an ordering <span class="math-container">$&lt;$</span> defined by</p> <p><span class="math-container">$(0, 0) &lt; (\beta, 1)$</span> for all <span class="math-container">$\beta \in \alpha$</span> and <span class="math-container">$(\beta, 1) &lt; (\beta', 1)$</span> if and only if <span class="math-container">$\beta &lt; \beta'$</span> for all <span class="math-container">$\beta, \beta' \in \alpha$</span>.</p> <p>To prove that <span class="math-container">$\alpha$</span> is infinite, we just need to construct an injection <span class="math-container">$h: \omega \rightarrow \alpha$</span>. To this end, let <span class="math-container">$\iota: \alpha \rightarrow B$</span> denote the natural inclusion <span class="math-container">$\iota(\beta) = (\beta, 1)$</span>, and let <span class="math-container">$g = f \circ \iota: \alpha \rightarrow \alpha$</span>. Since <span class="math-container">$f$</span> and <span class="math-container">$\iota$</span> are both injective, so is <span class="math-container">$g$</span>. Then we can define <span class="math-container">$h: \omega \rightarrow \alpha$</span> by <span class="math-container">$h(n) = g^n(f(0, 0))$</span>, where by convention <span class="math-container">$g^0 = \mathbb{1}_{\alpha}$</span> is the identity map.</p> <p>To show that <span class="math-container">$h$</span> is injective, suppose for a contradiction that there are <span class="math-container">$m, n \in \omega$</span> such that <span class="math-container">$h(m) = h(n)$</span> but <span class="math-container">$m \neq n$</span>. 
From the definition of <span class="math-container">$h$</span>, this means <span class="math-container">$g^m(f(0, 0)) = g^n(f(0, 0))$</span>. Assume without loss of generality that <span class="math-container">$m &lt; n$</span>. Then because <span class="math-container">$g$</span> is injective, <span class="math-container">$g^0(f(0, 0)) = g^{n - m}(f(0, 0))$</span>. That is, <span class="math-container">$f(0, 0) = g^{n - m}(f(0, 0))$</span>. But <span class="math-container">$f(0, 0)$</span> is not in the image of <span class="math-container">$g$</span>, since <span class="math-container">$(0, 0)$</span> is not in the image of the inclusion <span class="math-container">$\iota$</span>.</p> <p>We conclude that <span class="math-container">$h$</span> is injective, hence <span class="math-container">$\alpha$</span> is infinite. Note that we did not have to use the fact that <span class="math-container">$f$</span> is an order isomorphism to prove that <span class="math-container">$\alpha$</span> is infinite, only that <span class="math-container">$f$</span> is injective.</p> <p>The converse is also true. To see this, suppose that <span class="math-container">$\alpha$</span> is infinite. Then <span class="math-container">$\omega \subseteq \alpha$</span>, so we need only find an order isomorphism <span class="math-container">$\sigma: 1 + \omega \rightarrow \omega$</span>. But writing the elements of <span class="math-container">$1 + \omega$</span> as</p> <p><span class="math-container">$0' &lt; 0 &lt; 1 &lt; 2 &lt; \cdots$</span></p> <p>and the elements of <span class="math-container">$\omega$</span> as</p> <p><span class="math-container">$0 &lt; 1 &lt; 2 &lt; 3 &lt; \cdots$</span>,</p> <p>the isomorphism is clear. We just take <span class="math-container">$\sigma(0') = 0$</span> and <span class="math-container">$\sigma(n) = S(n)$</span> for all <span class="math-container">$n \in \omega$</span>.</p>
897,043
<p>I'm having issues getting my head around cartesian products and their cardinalities.</p> <p>$A = \{0, 1, \{2, 3, 4\}\}$<br> $B = \{1,5\}$<br> $D = B \times N$ (where $N$ is the set of natural numbers)</p> <p><strong>The first problem:</strong> What is the cardinality of:</p> <p>(a) $A \times B$ (cartesian product)</p> <p>(b) $A \times D$</p> <p><strong>Part 2:</strong> true/false (a) $N$ is a subset of $D$</p> <p>for (a) I used $|A \times B|$ = $|A| * |B|$ and got $3*2 = 6$ </p> <p>is this the correct way to do this?</p> <p>for (b) I assumed that the cardinality was infinite since it involved the set of natural numbers, am I correct in assuming this?</p> <p>for part 2 (a) I assumed that it was true since $D$ contains the natural set so presumably the natural set is a subset of $D$, am I correct in assuming this?</p>
Akash Patalwanshi
168,676
<p>$|A \times B| = |A| \cdot |B| = 6$, but $|A \times D| = \aleph_0$; i.e., the set $A \times D$ and the set of naturals are equipotent: there exists a bijection between $A \times D$ and $N$. Note that $D$ is not a subset of $N$ (nor is $N$ a subset of $D$), because the elements of $D$ are $(1,1), (1,2), (1,3), \ldots, (5,1), (5,2), (5,3), \ldots$; i.e., the elements of $D$ are 2-tuples, whereas the elements of $N$ are just ordinary numbers. So $N$ is not a subset of $D$.</p>
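The finite count is easy to sanity-check in Python (a tiny sketch; note that the set $\{2,3,4\}$ counts as a single element of $A$):

```python
from itertools import product

A = [0, 1, frozenset({2, 3, 4})]  # {2, 3, 4} is one element, so |A| = 3
B = [1, 5]
pairs = list(product(A, B))       # the Cartesian product A x B
# len(pairs) equals len(A) * len(B) == 6
```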
1,848,222
<p>Very simple and quick question. Usually distribution notation is such that you give the name of the distribution, then its mean, and finally the variance, for example for normal distribution:</p> <p>$$N(0,1)$$</p> <p>The 0 means that the distribution has mean zero, and the 1 tells that the variance is one. However, for standard uniform distribution:</p> <p>$$U(0,1)$$</p> <p>The zero is the minimum value the distribution generates and 1 is the maximum value. Using the more standard notation it should be:</p> <p>$$U(0.5, \frac{1}{12})$$</p> <p>At least it would make more sense to me if it was U]0,1[. Can anyone explain why the notation is so, and whether there are any other exceptions?</p>
parsiad
64,601
<p>Are you from France or somewhere in Europe? In France, you use $]0,1[$ to denote the open unit interval. Some other countries use $(0,1)$. Regardless, $U$ can be regarded as a function taking two inputs and producing a mathematical object.</p> <p>I find $U(0,1)$ much more intuitive than $U(0.5,1/12)$. I would say it is more convenient to be given the bounds of the interval than the mean and variance and having to figure out what the bounds are (though regardless of which direction we go, it is a trivial computation).</p>
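For what it's worth, the mean-and-variance reading mentioned in the question checks out numerically: for a general $U(a,b)$, the mean is $(a+b)/2$ and the variance is $(b-a)^2/12$. A quick sketch:

```python
def uniform_mean_var(a, b):
    """Mean and variance of the uniform distribution on [a, b]."""
    return (a + b) / 2, (b - a) ** 2 / 12

mean, var = uniform_mean_var(0, 1)  # (0.5, 1/12), i.e. the U(0.5, 1/12) of the question
```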
4,032,767
<blockquote> <p>Prove that there exists no bijective function <span class="math-container">$f: \Bbb{N} \to \Bbb{N}$</span> such that <span class="math-container">$$f(mn)=f(m)+f(n)+3f(m)f(n)$$</span> for <span class="math-container">$m,n \geqslant1.$</span></p> </blockquote> <p>This was a problem from a Putnam practice book and I couldn't seem to figure out how to show this. Initially, I started to wonder that if the function were bijective then it has to be injective as well as surjective. So trying to determine if its injective let <span class="math-container">$a,b \in \mathbb{N}$</span> then if it were injective <span class="math-container">$$f(mn)=f(ab) \implies f(m)+f(n)+3f(m)f(n) =f(a)+f(b)+3f(a)f(b)$$</span> but this doesn't seem to help since I don't have explicitly <span class="math-container">$f$</span>. How should I look at this problem?</p>
NoNames
884,560
<p>Let's assume <span class="math-container">$0\in\mathbb{N}$</span>, since otherwise the condition <span class="math-container">$m,n\ge1$</span> would be redundant. Then, <span class="math-container">$m=n=1$</span> implies <span class="math-container">$f(1)=0$</span>. If we define <span class="math-container">$g(n)=3\,f(n)+1$</span>, we obtain <span class="math-container">$$g(mn)=g(m)g(n).\tag{mult}$$</span> So we have a completely multiplicative bijection between <span class="math-container">$\mathbb{N}$</span> and <span class="math-container">$\mathbb{M}=\{m\in\mathbb{N}: m=1\bmod3\}$</span>. Now the numbers <span class="math-container">$10,34,55,187$</span> all belong to <span class="math-container">$\mathbb{M}$</span>, so there must be natural numbers <span class="math-container">$n_1, n_2, n_3, n_4$</span> with <span class="math-container">$$g(n_1)=10,\quad g(n_2)=34,\quad g(n_3)=55,\quad g(n_4)=187.$$</span> The <span class="math-container">$n_i$</span> must be different, and they must be primes, because <span class="math-container">$d|n_i$</span> implies <span class="math-container">$g(d)|g(n_i)$</span> through (mult), and <span class="math-container">$g(n_i)$</span> by definition don't have non-trivial divisors in <span class="math-container">$\mathbb{M}$</span>. But then, we have <span class="math-container">$$g(n_1n_4)=g(n_1)\,g(n_4)=g(n_2)\,g(n_3)=g(n_2n_3),$$</span> and since <span class="math-container">$g$</span> is injective, <span class="math-container">$$n_1n_4=n_2n_3,$$</span> but this is impossible for four different primes.</p>
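The arithmetic facts the argument relies on are quick to verify in Python (a small sketch):

```python
def divisors_in_M(m):
    """Divisors d of m with 1 < d < m and d = 1 (mod 3)."""
    return [d for d in range(2, m) if m % d == 0 and d % 3 == 1]

witnesses = [10, 34, 55, 187]
# all four lie in M = {m : m = 1 (mod 3)} ...
all_in_M = all(m % 3 == 1 for m in witnesses)
# ... none has a nontrivial divisor in M (they are "primes" of M) ...
all_irreducible = all(not divisors_in_M(m) for m in witnesses)
# ... and 10 * 187 == 34 * 55 (both 1870), forcing the contradiction
products_match = 10 * 187 == 34 * 55
```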
307,264
<p>A professor of mine has suggested that I look at this theorem and find a problem related to it to explain in a future class. I found an understandable proof in "Linear operators" by Dunford-Schwartz and I think I have studied it, so now I know how to prove Brouwer's theorem. Now I am thinking of some interesting related problem I could try to solve; do you have any suggestions for something not too hard (I am a second-year undergraduate student!)?</p> <p>Thank you very much! </p> <p>EDIT: I put PDE in the tags as this professor is mainly interested in this area, so if you have any idea related to that, even better (I guess most of the applications to PDEs will be very hard though!!)</p>
UwF
59,239
<p>How about Ky Fan's inequality? The one from his 1972 paper, "A minimax inequality and its applications" (there are several inequalities frequently called Ky Fan inequality). This Ky Fan Inequality is used to establish the existence of equilibria in various games studied in economics.</p> <p>For the applications to PDEs that I know you will need infinite-dimensional generalisations of Brouwer's fixed point theorem.</p> <p>Maybe the Brouwer invariance of domain theorem is more accessible. It is discussed in Terry Tao's blog, see <a href="http://terrytao.wordpress.com/2011/06/13/brouwers-fixed-point-and-invariance-of-domain-theorems-and-hilberts-fifth-problem/" rel="nofollow">http://terrytao.wordpress.com/2011/06/13/brouwers-fixed-point-and-invariance-of-domain-theorems-and-hilberts-fifth-problem/</a>.</p>
1,661,986
<p>I was out all last week sick with the flu and am trying to get caught up in my Discrete Mathematics course. One set of questions in my book goes as follows:</p> <p>Find the least integer $n$ such that $f(x) \in O(x^n)$ for each of the functions:</p> <p>(a). $f(x) = 8x+4$<br> (b). $f(x) = x\sqrt(x)$<br> (c). $f(x) = log_3 9$<br> (d). $f(n) = n!$</p> <p>Can someone walk me through this? I have no idea where to even begin on these or what it is I am attempting to do in the end.</p>
grand_chat
215,011
<p>The general concept of "Big-O" is that function $f$ is in Big-O of function $g$ if $f(x)$ grows <strong>no faster than</strong> $g(x)$ as $x\to\infty$. Generally speaking both $f$ and $g$ take positive values, so more precise ways of saying this are:</p> <ul> <li><p>$f\in O(g)$ if there is a constant $C&gt;0$ such that $f(x)\le Cg(x)$ for all large $x$</p></li> <li><p>$f\in O(g)$ if the ratio $f(x)/g(x)$ remains bounded as $x\to\infty$.</p></li> </ul> <p>The function $g$ is therefore an upper bound on the asymptotic growth rate of $f$. Since $g$ is only an upper bound, there are many choices for $g$.</p> <p>The point of a big-O analysis is to make a statement about the growth rate of a function, which in turn allows you to compare growth rates ("complexity") of different functions. In analysis of algorithms, you generally prefer algorithms that don't grow too fast with the size of the problem. For example if you want to sort a list of $n$ items, a naive 'selection' algorithm might build the sorted list by selecting the smallest item from the list, then the next smallest, and so on; this algorithm has complexity $O(n^2)$. The fastest sort algorithms have a complexity that is $O(n\log n)$, which grows considerably slower than the selection algorithm, and therefore can handle much larger problems.</p> <p>There are related concepts $f\in o(g)$ ("little oh", a strict upper bound: $f$ grows strictly slower than $g$), $f\in\Omega(g)$ ("big omega", which gives a lower bound on the growth of $f$), and $f\in\Theta(g)$ ("theta", which gives both upper and lower bounds).</p> <p>Back to your problem: You have to find the smallest integer $n$ such that $f(x)\le Cx^n$ as $x\to\infty$. 
Taking the smallest $n$ amounts to finding the sharpest upper bound.</p> <p>To solve part (a), you have $f(x)=8x+4$ is a linear function, so certainly $f\in O(x)$.</p> <ul> <li><p>Proof (1): $f(x)\le 10x$ for all large $x$.</p></li> <li><p>Proof (2): As $x\to\infty$, we see that $f(x)/x\to 8$ so the ratio $f(x)/x$ remains bounded as $x\to\infty$.</p></li> </ul> <p>It's also true that $f\in O(x^2)$ and that $f\in O(x^{2.3})$, but $n=1$ wins because it's the smallest $n$ such that $f(x)/x^n$ stays bounded as $x\to\infty$.</p> <p>For the remaining parts, just ask yourself "how does $f(x)$ grow as $x$ gets larger, and can I place a polynomial upper bound on this growth?" (The answer to the second question can be "no".)</p>
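As a quick numeric illustration for part (a) (a hypothetical demo of my own, not from the book), you can watch the ratio $f(x)/x^n$ stay bounded for $n=1$ and blow up for $n=0$:

```python
# Demo: f(x) = 8x + 4 satisfies f(x) <= C*x for large x, so f is O(x).
def f(x):
    return 8 * x + 4

xs = [10, 100, 1000, 10**6]
ratios_n1 = [f(x) / x for x in xs]   # tends to 8: bounded, so n = 1 works
ratios_n0 = [f(x) / 1 for x in xs]   # grows without bound, so n = 0 fails
```

The same experiment applied to part (d), $f(n)=n!$, would show $n!/n^k$ unbounded for every fixed $k$, which is why no such integer exists there.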
1,821,411
<p>Let $f:[a,b]\rightarrow \mathbb{R}$ be integrable on $[a,b]$.</p> <p>So we need to prove:</p> <p>$$\int_{-b}^{-a}f(-x)dx=\int_{a}^{b}f(x)dx$$</p> <p>1.) So we'll use a property of definite integrals: (homogeneity I think it's called?)</p> <p>$$\int_{-b}^{-a}f(-x)dx=-1\int_{-b}^{-a}f(x)dx$$</p> <p>2.) Great, now using the fundamental theorem of calculus:</p> <p>$$-1\int_{-b}^{-a}f(x)dx=(-1)^2\int_{-a}^{-b}f(x)dx=\int_{-a}^{-b}f(x)dx$$</p> <p>This is where I'm stuck. For some reason I think it might be smarter to skip step 2, to leave it as:</p> <p>$$-1\int_{-b}^{-a}f(x)dx$$ </p> <p>because graphically, we've "flipped" the graph about the x-axis, but we're still calculating the same area. Proving that using properties seems to have stumped me.</p> <p>I prefer hints over solutions, thanks.</p>
egreg
62,967
<p>Your step 1 is wrong and you can realize the error by considering $a=0$, $b=1$, $f(x)=e^x$.</p> <p>Then $$ \int_{0}^{1}e^x\,dx=e-1, \qquad -\int_{-1}^0e^x\,dx=\frac{1}{e}-1 $$ which are quite different.</p> <p>You can prove the statement by the definition; I'll use Riemann sums. A Riemann sum for $\int_{a}^{b}f(x)\,dx$ consists first of a choice $S$ of points $$ a=x_0&lt;x_1&lt;x_2&lt;\dots&lt;x_{n-1}&lt;x_n=b, \qquad c_i\in[x_{i-1},x_i],\ i=1,2,\dots,n $$ and then in considering $$ \sigma(f;S)=\sum_{i=1}^n f(c_i)(x_i-x_{i-1}) $$ Define $\delta(S)=\max\{x_1-x_0,x_2-x_1,\dots,x_n-x_{n-1}\}$; then it is not too difficult to give a meaning to $$ \lim_{\delta(S)\to0}\sigma(f;S) $$ and, if this exists, it is called the integral.</p> <p>Now note that for each Riemann sum for $f(x)$ over $[a,b]$ we can define a Riemann sum $\hat{S}$ for $g(x)=f(-x)$ over $[-b,-a]$ by simply taking the negative of each point (and renaming indices, if you prefer to make your life difficult). Then $$ \sigma(g;\hat{S})=\sum_{i=1}^n g(-c_i)(-x_{i-1}-(-x_i)) = \sum_{i=1}^n f(c_i)(x_i-x_{i-1}) = \sigma(f;S) $$ Thus the two limits are equal, because, conversely, each Riemann sum for $g$ over $[-b,-a]$ corresponds to a Riemann sum for $f$ over $[a,b]$, by the same construction.</p> <p>Similarly if you define integrals with upper and lower sums.</p> <hr> <p>If the function $f$ is continuous, you can use substitutions (through the fundamental theorem of calculus): \begin{align} \int_{-b}^{-a}f(-x)\,dx &amp;=\int_{b}^{a}f(t)\cdot(-1)\,dt &amp;&amp; -x=t,\quad dx=-dt \\ &amp;=-\int_{b}^{a}f(t)\,dt \\ &amp;=\int_{a}^{b}f(t)\,dt \\ &amp;=\int_{a}^{b}f(x)\,dx &amp;&amp; x=t,\quad dt=dx \end{align}</p>
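If you want to convince yourself numerically before writing the proof, here is a small sketch (my own check, using $f(x)=e^x$ on $[0,1]$ as in the counterexample above) comparing midpoint Riemann sums of both sides:

```python
import math

def midpoint_sum(f, a, b, n=20000):
    # Midpoint Riemann sum approximating the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, 1.0
lhs = midpoint_sum(lambda x: math.exp(-x), -b, -a)  # integral of f(-x) over [-b, -a]
rhs = midpoint_sum(math.exp, a, b)                  # integral of f(x) over [a, b]
# both approximate e - 1
```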
4,219,360
<p>I was having some problems understanding how he found <span class="math-container">$\gamma(t)$</span> from the given <span class="math-container">$\Sigma$</span>, and I was hoping someone could explain it to me, if that is ok.</p> <p>So the problem goes like this:</p> <p>Given the vector field <span class="math-container">$F(x, y, z) = (z, x, y)$</span>, compute the flux of the curl of <span class="math-container">$F$</span> through the surface <span class="math-container">$\Sigma = \{(x, y, z) \in \mathbb{R}^3 : z = xy,\ x^2 + y^2 \le 1\}$</span>,</p> <p>oriented so that the unit normal vector points upward.</p> <p>So what the professor did was first compute <span class="math-container">$\gamma(t)$</span> using a parametrization, and he immediately writes</p> <p><span class="math-container">$\gamma(t)=(\cos(t),\sin(t), \cos(t)\sin(t))$</span> with <span class="math-container">$t\in[0,2\pi]$</span>, and from here he finds <span class="math-container">$\gamma'$</span> and then computes</p> <p><span class="math-container">$\int_\Sigma \operatorname{rot} F \cdot n \, d\sigma = \int_0^{2\pi} F(\gamma(t))\cdot\gamma'(t)\,dt$</span></p> <p>and from there it is history, I can do it myself.</p> <p>But what I couldn't understand is how he got the <span class="math-container">$\gamma$</span>.</p>
RobPratt
683,666
<p>A correct starting point is instead: <span class="math-container">$$\mathop{\mathbb{E}}[X^Y] = \sum_{y=0}^\infty \mathop{\mathbb{E}}[X^Y|Y=y] \mathop{\mathbb{P}}[Y=y] = \sum_{y=0}^\infty \mathop{\mathbb{E}}[X^y] \mathop{\mathbb{P}}[Y=y] $$</span></p>
1,303,485
<p>Evaluate the integral $$\int_{C}\frac{z^2}{z^2+9}dz$$ where C is the circle $|z|=4$</p> <p>I know that if f is analytic in simply connected domain $D$, $C$ a simple closed positively oriented contour that lies in D and $z_o$ lies interior to $C$, then $$\int_{C}\frac{f(z)}{z-z_o}dz=2\pi i f(z_o)$$</p> <p>But for this problem, the circle contains both interior points which is $3i$ and $-3i$. And I found that reducing the fraction into partial fraction seems to be useless in solving the problem. So what is the $f(z)$ here?</p>
kmitov
84,067
<p>Write $\frac{z^2}{z^2+9}=\frac{z^2}{(z-3i)(z+3i)}$ and apply the residue theorem to the two simple poles $z=\pm 3i$, both of which lie inside $|z|=4$:</p> <p>$I=2\pi i \left(\frac{(3i)^2}{3i+3i}+\frac{(-3i)^2}{-3i-3i}\right)=2\pi i\left(\frac{-9}{6i}+\frac{9}{6i}\right)=0$</p>
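As a sanity check (my own addition, not part of the original answer), one can approximate the contour integral numerically by parametrizing $z=4e^{it}$:

```python
import cmath

def contour_integral(g, radius=4.0, n=4000):
    # Trapezoid rule over the closed contour |z| = radius: z = R e^{it}, dz = i z dt
    total = 0j
    step = 2 * cmath.pi / n
    for k in range(n):
        z = radius * cmath.exp(1j * k * step)
        total += g(z) * 1j * z * step
    return total

I = contour_integral(lambda z: z**2 / (z**2 + 9))
# |I| is numerically zero, matching the residue computation
```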
4,350,450
<p>For me, <span class="math-container">$\Bbb N$</span> includes <span class="math-container">$0$</span>. I am referencing, yet again, <a href="https://www.math.uni-leipzig.de/%7Eeisner/book-EFHN.pdf" rel="nofollow noreferrer">this</a> text, exercise <span class="math-container">$19$</span>, page <span class="math-container">$30$</span>.</p> <blockquote> <p>Let <span class="math-container">$K$</span> be a compact Hausdorff space, and <span class="math-container">$\phi:K\to K$</span> continuous and surjective - i.e. <span class="math-container">$(K;\phi)$</span> is a surjective topological dynamic system.</p> <p>Let <span class="math-container">$K^\omega=\prod_{n\in\Bbb N}K$</span>, and let <span class="math-container">$\psi:K^\omega\to K^\omega,\,(x_1,x_2,\cdots)\mapsto(\phi(x_1),x_1,x_2,\cdots)$</span>. By Tychonoff's theorem, <span class="math-container">$(K^\omega;\psi)$</span> is a topological system. Let <span class="math-container">$L=\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\subseteq K^\omega$</span>.</p> </blockquote> <p>It is &quot;shown&quot; earlier in the book (Corollary <span class="math-container">$2.27$</span>, page <span class="math-container">$20$</span>), that <span class="math-container">$L$</span> is the maximal (by set inclusion) surjective subsystem of <span class="math-container">$K^\omega$</span>.</p> <blockquote> <p>Show that <span class="math-container">$\pi(L)=K$</span>, where <span class="math-container">$\pi:K^\omega\to K$</span> is the projection onto the first component.</p> </blockquote> <p>I can do this fine, but I fear it is a bit unrigorous in equality <span class="math-container">$1$</span>:</p> <blockquote> <p><span class="math-container">$$\pi(L)=\pi\left(\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\right)\color{red}{\overset{1}=}\bigcap_{n\in\Bbb N}(\pi\circ\psi^n)(K^\omega)$$</span>Note that <span class="math-container">$\psi^n(x_1,x_2,\cdots)=(\phi^n(x_1),\phi^{n-1}(x_1),\cdots)$</span>, and <span 
class="math-container">$\pi\circ\psi^n$</span> therefore maps <span class="math-container">$(x_1,x_2,\cdots)\mapsto\phi^n(x_1)$</span>. As <span class="math-container">$\phi$</span> is a surjection on <span class="math-container">$K$</span>, <span class="math-container">$(\pi\circ\psi^n)(K^\omega)=K$</span> regardless of <span class="math-container">$n$</span>, from which it follows that <span class="math-container">$\pi(L)=K$</span>.</p> </blockquote> <p>However, they leave a hint suggesting more rigour is required:</p> <blockquote> <p>Hint: For <span class="math-container">$y\in K$</span> apply Lemma <span class="math-container">$2.26$</span> to the <span class="math-container">$\psi$</span>-invariant set <span class="math-container">$\pi^{-1}\{y\}$</span>.</p> <p>Lemma <span class="math-container">$2.26$</span>: Suppose that <span class="math-container">$(K;\phi)$</span> is a topological system and that <span class="math-container">$\varnothing\neq A\subseteq K$</span> is closed and invariant (<span class="math-container">$\phi(A)\subseteq A$</span>). Then there is a closed set <span class="math-container">$B$</span>, <span class="math-container">$\varnothing\neq B\subseteq A$</span>, with <span class="math-container">$\phi(B)=B$</span>. 
Explicitly, <span class="math-container">$B=\bigcap_{n\in\Bbb N_1}\phi^n(A)$</span>.</p> </blockquote> <p>Assuming for the moment that <span class="math-container">$\pi^{-1}\{y\}$</span> is indeed <span class="math-container">$\psi$</span>-invariant, then this lemma can &quot;solve&quot; the problem similarly (I am unsure why it is needed, but I tried to indulge them nonetheless):</p> <blockquote> <p><span class="math-container">$$\begin{align}\pi(L)&amp;=\pi\left(\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\right)\\&amp;=\pi\left(\bigcap_{n\in\Bbb N}\bigcup_{\mathbf{x}\in K^\omega}\psi^n(\mathbf{x})\right)\\&amp;=\pi\left(\bigcap_{n\in\Bbb N}\bigcup_{y\in K}\psi^n(\pi^{-1}\{y\})\right)\\&amp;\color{red}{\overset{2}{=}}\pi\left(\bigcup_{y\in K}\bigcap_{n\in\Bbb N}\psi^n(\pi^{-1}\{y\})\right)\\&amp;=\pi\left(\bigcup_{y\in K}B_y\right)\\&amp;=\bigcup_{y\in K}\pi(B_y)\\&amp;=\bigcup_{y\in K}y\\&amp;=K\end{align}$$</span></p> </blockquote> <p>Why we need to go down that route, I am very unsure. It seems like a strange detour to take, so I feel like I'm missing their intended solution. Moreover, this approach introduces a second dubious equality, <span class="math-container">$2$</span>, that I don't know how to justify. 
My proof seems much shorter and more elegant, but also uses a potentially dubious equality in <span class="math-container">$1$</span>.</p> <p>Returning to the invariance of <span class="math-container">$\pi^{-1}\{y\}$</span> - I do not believe it is invariant:</p> <blockquote> <p><span class="math-container">$$\pi^{-1}\{y\}=\{(y,x_1,x_2,\cdots):x_1,x_2,\cdots\in K\}=\{y\}\times K^\omega\\\psi(\pi^{-1}\{y\})=\{(\phi(y),y,x_1,x_2,\cdots):x_1,x_2,\cdots\in K\}=\{(\phi(y),y)\}\times K^\omega\not\subset\pi^{-1}(y)$$</span>Whenever <span class="math-container">$\phi(y)\neq y$</span>.</p> </blockquote> <p>What am I missing with regards to the alleged <span class="math-container">$\psi$</span>-invariance, and are the equalities <span class="math-container">$1,2$</span> correct? That is, is my proposed proof of <span class="math-container">$\pi(L)=K$</span> correct?</p>
Augusto Santos
162,474
<p>That <span class="math-container">$\pi(L)\subseteq K$</span> is clear.</p> <p>Here is my take for the rest.</p> <p>Conditioned on <span class="math-container">$\pi^{-1}(\left\{y\right\})$</span> being invariant for all <span class="math-container">$y\in K$</span>, then (and building on your derivation before identity '2')</p> <p><span class="math-container">$$\pi(L)=\pi\left(\bigcap_{n\in\mathbb{N}}\bigcup_{y\in K} \psi^{n}\left(\pi^{-1}(\left\{y\right\})\right)\right)\supseteq \pi\left(\bigcap_{n\in\mathbb{N}}\bigcup_{y\in K} \psi^{n}\left(C_y\right)\right)=\pi\left(\bigcap_{n\in\mathbb{N}}\bigcup_{y\in K} C_y\right)=\pi\left(\bigcup_{y\in K} C_y\right)=K$$</span></p> <p>where in the inclusion above, we resorted to Lemma 2.26 -- in particular, for each <span class="math-container">$y$</span> there exists <span class="math-container">$C_y\subseteq \pi^{-1}(\left\{y\right\})$</span> with <span class="math-container">$\psi(C_y)=C_y$</span>.</p> <p>Regarding the <span class="math-container">$\psi$</span>-invariance of <span class="math-container">$\pi^{-1}(\left\{y\right\})$</span>, this would be the case if instead <span class="math-container">$\psi(x)=\left(x_1,\phi(x_1),x_2,\ldots\right)$</span> -- which might be blatantly inappropriate for some fundamental reason that I am missing.</p>
1,392,257
<p><strong>The definition of a conjugate element</strong> </p> <p>We say that $x$ is conjugate to $y$ in $G$ if $y = g^{-1}xg $ for some $g \in G$.</p> <p>Now for the group $G=Q_8$, we have the group presentation $$Q_8 = \big&lt;a,b: a^4 =1,b^2 = a^2, b^{-1}ab = a^{-1} \big&gt;$$</p> <p>Now the elements of $Q_8$ are $\{1,a,a^2,a^3,ab,a^2b,a^3b,b\}$ and after some calculation we would get $5$ different conjugacy classes, namely $a^G = \{a,a^3\}$ where $a^G$ denotes the conjugacy class of $a$ in $G = Q_8$,</p> <p>also we have </p> <p>$1^G = \{1 \}$, ${a^2}^G = \{ a^2 \}$, ${(a^2b)}^G = \{a^2b,b \}$ and ${(ab)}^G = \{ab,a^3b\}$</p> <p>Of course, there is no surprise that for every element $x \in G$ we have $x \in x^G$ because $x = 1^{-1}x1$. However, we see that all the conjugacy classes for $Q_8$ contain the element and its inverse. Like $a^{-1} = a^3$, ${(a^2)}^{-1} = a^2$, ${(a^2b)}^{-1} = b$ and so on.</p> <p>My question is: does this hold true for all groups?</p> <p>More formally, is it true that for every element $x \in G$ we have $x,x^{-1} \in x^G$?</p>
Mark Bennet
2,906
<p>No. Think of an abelian group. All the conjugacy classes contain a single element. Only elements of order ($1$ or) $2$ are their own inverse.</p>
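A tiny sketch in Python (a hypothetical example of my own, using $\mathbb{Z}_4$ under addition mod $4$) makes the point concrete:

```python
# In the abelian group Z_4, conjugation g + x - g = x fixes every element,
# so each conjugacy class is a singleton.
n = 4

def conj_class(x):
    return {(g + x - g) % n for g in range(n)}

classes = {x: conj_class(x) for x in range(n)}
inverses = {x: (-x) % n for x in range(n)}
# the class of 1 is {1}, but the inverse of 1 is 3, which is not in that class
```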
3,430,008
<p>Currently stuck on the last part of 15.2.8(e) of this problem:</p> <p><a href="https://i.stack.imgur.com/UvGJK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UvGJK.png" alt="enter image description here"></a></p> <p>I don't know how to apply Fubini's theorem since one index relies on the other.</p> <p>Having slept on it, I've almost got it figured out except for one thing. Part of Fubini's theorem states (in my book) that if</p> <p><span class="math-container">$$\sum_{(n,m) \in \mathbb{N} \times \mathbb{N}} f(n,m)$$</span> </p> <p>converges absolutely to some limit <span class="math-container">$L$</span>, then </p> <p><span class="math-container">$$\sum_{n=0}^{\infty} \left ( \sum_{m=0}^{\infty} f(n, m)\right )$$</span></p> <p>also converges absolutely to <span class="math-container">$L$</span>.</p> <p>What I'm trying to figure out is if the converse of this statement is true. Because it seems that in order to apply Fubini's theorem to this problem it needs to be.</p>
Masacroso
173,262
<p>Note that if the terms are non-negative or the double series converges absolutely then <span class="math-container">$$ \begin{align*} \sum_{m\geqslant 0}d_m(x-b)^m&amp;=\sum_{m\geqslant 0}\sum_{n\geqslant m}\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\\ &amp;=\sum_{n\geqslant m\geqslant 0}\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\\ &amp;=\sum_{n\geqslant 0}\sum_{m=0}^n\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\\ &amp;=\sum_{n\geqslant 0}c_n(b-a+x-b)^n\\ &amp;=\sum_{n\geqslant 0}(x-a)^nc_n \end{align*} $$</span></p> <p>To "free" one variable of the other in the summation signs you also can use some indicator function as follows: <span class="math-container">$$ \begin{align*} \sum_{m\geqslant 0}d_m(x-b)^m&amp;=\sum_{m\geqslant 0}\sum_{n\geqslant m}\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\\ &amp;=\sum_{m\geqslant 0}\sum_{n\geqslant 0}\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\,\chi _{\Bbb N \cap [m,\infty )}(n) \end{align*} $$</span> and writing the last expression as a double integral with respect to the counting measure <span class="math-container">$\delta $</span> we have <span class="math-container">$$ \int_{\Bbb N }\int_{\Bbb N }\binom{n}{m}(b-a)^{n-m}(x-b)^mc_n\,\chi _{\Bbb N \cap [m,\infty )}(n)\,\delta (n)\, \delta (m) $$</span></p>
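A numeric sketch (my own check, using the exponential series $c_n=1/n!$, $a=0$, $b=1/2$, truncated at $N$ terms) confirms that the recentered coefficients reproduce the original sum:

```python
import math

a, b, x, N = 0.0, 0.5, 1.3, 30
c = [1 / math.factorial(n) for n in range(N)]        # c_n = 1/n!
d = [sum(math.comb(n, m) * (b - a)**(n - m) * c[n]   # d_m from the double sum
         for n in range(m, N))
     for m in range(N)]
recentered = sum(d[m] * (x - b)**m for m in range(N))
original = sum(c[n] * (x - a)**n for n in range(N))
# both approximate e^{1.3}
```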
482,801
<p>What does it mean that the characteristic function <span class="math-container">$f(x)=1_{[b \le x \lt \infty]}$</span> is right continuous with left limits? Here <span class="math-container">$x ,b \in \mathbb{R}$</span>.</p>
Stefan Hansen
25,632
<p>The graph of $f(x)=\mathbf{1}\{x\geq b\}$ looks like this:</p> <p><img src="https://i.stack.imgur.com/nzR9r.png" alt="enter image description here"></p> <p>Clearly, approaching any number from the right yields the same value of $f$ meaning that $f$ is right-continuous. That $f$ has left limits just means that the limit exists and is finite when approaching any number from the left. This is also obvious from the graph.</p> <p>Note also what happens if the filled dot and the hollow dot swap places. Then we're looking at the graph of $f(x)=\mathbf{1}\{x&gt;b\}$ instead, and this is left-continuous with right limits.</p>
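A quick numeric peek at the jump point (my own sketch) shows the same thing as the graph:

```python
b = 2.0

def f(x):
    # indicator of [b, infinity)
    return 1.0 if x >= b else 0.0

from_right = [f(b + 10**-k) for k in range(1, 8)]  # all 1.0: right limit equals f(b)
from_left = [f(b - 10**-k) for k in range(1, 8)]   # all 0.0: left limit exists but differs
```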
2,732,562
<p>I'm trying to figure out how to find the number of ternary strings of length $n$ that have 3 or more consecutive 2's. So far I've been able to establish that there are $n(2^{n-1})$ with a single 2. And I think (but am not certain) that this can be extrapolated to the number of strings with a single group of 2's of length $x$ by: $$\bigl(n-(x-1)\bigr)(2^{n-x})$$ What I'm getting caught on is the "or more" part; any help would be greatly appreciated.</p>
saulspatz
235,128
<p>Trying to count them directly will get you nowhere, because of double counting. The best approach is to come up with a recurrence relation, for the number of acceptable strings of length $n$ in terms of the number of shorter acceptable strings. Call a ternary string with at least three consecutive $2$'s "good" and other ternary strings "bad".</p> <p>Let $a_n$ be the number of good strings of length $n$. To make a good string of length $n+1$ we can add any digit to a good string of length $n$ or we can add a $2$ to the end of a bad string of length $n$ that ends in two $2$'s. So we have $$a_{n+1}=3a_n+b_n$$ So, how many bad strings of length $n$ end in two $2$'s? We get them by adding two $2$'s to a bad string of length $n-2$ that doesn't end in a $2$. $$b_{n}=c_{n-2}$$ So, how many bad strings of length $n-2$ don't end in $2$? We just add $0$ or $1$ to a bad string of length $n-3$, and there are $3^{n-3}-a_{n-3}$ bad strings of length $n-3$. $$ a_{n+1}=3a_{n}+2(3^{n-3}-a_{n-3})\implies a_{n+1}-3{a_n}+2a_{n-3}=2\cdot 3^{n-3}\tag 1$$</p> <p>(I think the definitions of $b_n, c_n$ are obvious from the context.)</p> <p>We have the initial data $$a_0=a_1=a_2=0,a_3=1$$</p> <p>A little experimentation with Python confirms this formula. The recurrence is not easy to solve. Besides the obvious root at $1$, the characteristic equation has one positive root, which <a href="https://www.wolframalpha.com/input/?i=solve%20r%5E4-3r%5E3%2B2%3D0" rel="nofollow noreferrer">Wolfram Alpha</a> calculates as $2.9196.$ (You can click on the "Exact Forms" button to get the expression in terms of radicals, for what it's worth.) </p> <p>You can try to use the <a href="http://reference.wolfram.com/language/tutorial/SolvingRecurrenceEquations.html" rel="nofollow noreferrer">RSolve</a> feature of Wolfram Alpha to get an exact solution. 
I have no experience with this myself.</p> <p>For grins, I did it with <a href="https://www.wolframalpha.com/input/?i=RSolve%5B%7Ba%5Bn%5D%3D3*a%5Bn-1%5D-2*a%5Bn-4%5D%2B2*3%5E(n-4),a%5B0%5D%3D0,a%5B1%5D%3D0,a%5B2%5D%3D0,a%5B3%5D%3D1%7D,a%5Bn%5D,n%5D" rel="nofollow noreferrer">RSolve.</a></p> <p>I also wrote a Python script to calculate it up to $n=15$ in three different ways: explicitly, by generating the ternary strings and counting the good ones; from the recurrence relation; and from the approximate formula from Wolfram Alpha. Here's the code:</p> <pre><code>from itertools import product

print("Explicit")
chars = '0 1 2'.split()
for n in range(4,16):
    print(n, len([s for s in product(chars, repeat=n) if '222' in ''.join(s)]))

print("Recurrence")
a=[0,0,0,1]
for n in range(4,16):
    a.append(3*a[n-1]-2*a[n-4]+2*3**(n-4))
    print(n, a[n])

r1 = .0231038-.0550705j
r2 = -.45982+.688173j
r3 = -1.04621
r4 = 2.91964

print("Formula")
for n in range(3,16):
    x = r1*r2**n
    print(n, 2*x.real+r3*r4**n+3**n)
</code></pre> <p>This produced the output:</p> <pre><code>Explicit
4 5
5 21
6 81
7 295
8 1037
9 3555
10 11961
11 39667
12 130049
13 422403
14 1361385
15 4359115
Recurrence
4 5
5 21
6 81
7 295
8 1037
9 3555
10 11961
11 39667
12 130049
13 422403
14 1361385
15 4359115
Formula
3 0.9999269125799088
4 4.999775013651643
5 20.999310049238204
6 80.99788957631085
7 294.99355667854866
8 1036.9803663807425
9 3554.940278866641
10 11960.818633386094
11 39666.45003106547
12 130047.3346004755
13 422397.96336414735
14 1361369.7860350916
15 4359069.095182693
</code></pre> <p>The recurrence gives the exact value, and is more efficient to compute than the approximate formula. Of course, it would be possible to get exact values for the constants in the formula, since they're the roots of a cubic, but that would make the formula even harder to compute. </p>
2,011,003
<p>I stumbled upon this logic question in a math class recently. </p> <p>My teacher told us that a statement that is not tested/is empty is true. For example, if I stated, "if team A wins the game, I am gonna buy you a coke", and then team B goes on and wins the game, the statement would be true, regardless of whether I buy you a coke. Could anybody elaborate how this can be the case, and why?</p> <p>It came up as an explanation of why the empty set is both an open and a closed set. </p>
FraGrechi
348,690
<p>The general idea at which you are hinting is that of material implication (the logical conditional), which states that for two statements, $P$ and $Q$, the statement $P$ <em>implies</em> $Q$ is given by</p> <p>$$(P \implies Q) \iff (Q \lor \lnot P),$$ </p> <p>where "$\implies$" denotes the implication operator.</p> <p>To better understand this, consider the following timely scenario. A politician states: $$\textrm{"If } \underbrace{\textrm{I win the elections, }}_{\textrm{Statement } P} \textrm{then } \underbrace{\textrm{taxes will go down"}}_{\textrm{Statement } Q}.$$ This statement would have a truth value of $T$ if he/she does not lie, and a truth value of $F$ if he/she does lie. </p> <ul> <li>If $P$ is $T$, and $Q$ is $T$, then the politician has not lied; he/she was elected, and taxes went down. Hence $P \implies Q$ is $T$.</li> <li>If $P$ is $T$, and $Q$ is $F$, then the politician has lied; he/she was elected, and taxes did not go down. Hence $P \implies Q$ is $F$.</li> <li>If $P$ is $F$, regardless of what $Q$ is, the politician cannot lie; he/she was never elected, and so never got the opportunity to lie. Hence $P \implies Q$ is $T$.</li> </ul>
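The full truth table is easy to tabulate (a small sketch of my own):

```python
def implies(p, q):
    # material implication: P -> Q is logically (not P) or Q
    return (not p) or q

truth_table = [(p, q, implies(p, q))
               for p in (True, False) for q in (True, False)]
# the two rows with p == False come out True: the losing politician cannot have lied
```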
1,095,621
<p>I am looking for a way to integrate $$\int \sqrt{x^2-4}\ dx $$ using trigonometric substitutions. </p> <p>All my attempts so far lead to complicated solutions that were uncomputable.</p>
Aaron Maroja
143,413
<p>Hint: Let $x = 2 \cosh \theta \Rightarrow dx = 2 \sinh \theta\ d\theta $.</p> <p><strong>Edit:</strong> The OP said that he hasn't seen hyperbolic functions just yet. Then what about $x = 2\sec \theta \Rightarrow dx = 2\sec \theta \tan\theta\ d\theta$</p> <p>So </p> <p>$$2\int \sqrt{4\sec^2\theta - 4}\sec\theta\tan\theta \ d\theta = 4\int \sec\theta \tan^2\theta d\theta$$</p>
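For a numeric sanity check (my addition: the closed form below is the standard textbook antiderivative, assumed rather than derived from the hint), compare it against a midpoint-rule approximation:

```python
import math

def F(x):
    # standard antiderivative of sqrt(x^2 - 4) for x > 2 (assumed, not derived here):
    # (x/2) sqrt(x^2 - 4) - 2 ln|x + sqrt(x^2 - 4)|
    s = math.sqrt(x * x - 4)
    return x * s / 2 - 2 * math.log(x + s)

def midpoint(a, b, n=200000):
    # midpoint-rule approximation of the integral of sqrt(x^2 - 4) over [a, b]
    h = (b - a) / n
    return sum(math.sqrt((a + (i + 0.5) * h)**2 - 4) for i in range(n)) * h

a, b = 2.5, 5.0
difference = F(b) - F(a)
approx = midpoint(a, b)
```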
3,054,898
<h3>Problem</h3> <p>Evaluate <span class="math-container">$$\int_0^{2\pi}(t-\sin t)(1-\cos t)^2{\rm d}t.$$</span></p> <h3>Comment</h3> <p>It's very complicated to compute the integral by standard methods. I obtained the result by resorting to the handy formula</p> <blockquote> <p><span class="math-container">$$\int_0^{2\pi}xf(\cos x){\rm d}x=\pi\int_0^{2\pi}f(\sin x){\rm d}x,$$</span> where <span class="math-container">$f(x) \in C[-1,1].$</span></p> </blockquote> <p><span class="math-container">\begin{align*} \require{begingroup} \begingroup \newcommand{\dd}{\;{\rm d}}\int_0^{2\pi} (t-\sin t)(1-\cos t)^2 \dd t &amp;= \int_0^{2\pi} t(1-\cos t)^2 \dd t - \int_0^{2\pi} \sin t(1-\cos t)^2 \dd t \\ &amp;= \pi\int_0^{2\pi} (1-\sin t)^2 \dd t - \int_0^{2\pi} (1-\cos t)^2 \dd (1-\cos t) \\ &amp;= \pi\int_0^{2\pi} \left(\frac32-\frac12\cos2t-2\sin t\right) \dd t - \left[\frac13(1-\cos t)^3\right]_0^{2\pi}\\ &amp;= \pi\left[\frac32t-\frac14\sin2t+2\cos t\right]_0^{2\pi}\\ &amp;= 3\pi^2 \endgroup \end{align*}</span></p> <p><strong>But is there any other solution?</strong></p>
egreg
62,967
<p>The integral <span class="math-container">$$ \int_0^{2\pi}\sin t(1-\cos t)^2\,dt=\Bigl[\frac{(1-\cos t)^3}{3}\Bigr]_0^{2\pi}=0 $$</span> is immediate. Thus we can concentrate on <span class="math-container">$$ \int_0^{2\pi}t(1-\cos t)^2\,dt=[\dots t=2u \dots]= 16\int_0^{\pi}u\sin^4u\,du=\int_0^\pi u(e^{iu}-e^{-iu})^4\,du $$</span> For integer <span class="math-container">$a$</span>, we have <span class="math-container">$$ \int_0^\pi ue^{2iau}\,du=\Bigl[\frac{ue^{2iau}}{2ia}\Bigr]_0^\pi-\frac{1}{2ia}\int_0^\pi e^{2iau}\,du=\frac{\pi e^{2ia\pi}}{2ia}+\frac{1}{4a^2}\Bigl[e^{2iau}\Bigr]_0^\pi=\frac{\pi}{2ia} $$</span> Since <span class="math-container">$(e^{iu}-e^{-iu})^4=e^{4iu}-4e^{2iu}+6-4e^{-2iu}+e^{-4iu}$</span>, and the contributions of the exponential terms cancel in pairs, we see that <span class="math-container">$$ 16\int_0^\pi u\sin^4u\,du=\int_0^\pi 6u\,du=3\pi^2 $$</span></p>
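One more way to gain confidence (my own numeric check) is simply to approximate the original integral and compare with $3\pi^2$:

```python
import math

def midpoint(f, a, b, n=100000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

value = midpoint(lambda t: (t - math.sin(t)) * (1 - math.cos(t))**2,
                 0.0, 2 * math.pi)
# value is approximately 3*pi^2, i.e. about 29.6088
```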
2,912,152
<p>I know there is already a question about resolving a quadrilateral from three sides and two angles, but I want to ask about a special case. Firstly, two of the sides are known to be of equal size. Secondly, I'm only interested in the area, not in the remaining angles or lengths. Can anyone suggest a simple formula?</p> <p><a href="https://i.stack.imgur.com/Z9nCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z9nCn.png" alt="enter image description here"></a></p>
hmakholm left over Monica
14,366
<p>Here's your basic mistake:</p> <blockquote> <p>If we don't do that, (6) is a contradiction in the case $A'$ is empty and so the whole proof is wrong. Am I the wrong one?</p> </blockquote> <p>There's nothing wrong with having contradictory <em>assumptions</em> at some point in a proof. On the contrary, that is a <em>good</em> thing to happen, because it means you can use the Principle of Explosion to conclude <em>whatever you want</em> right away and be done with that branch of the proof.</p> <p>A more semantic way of saying this is that, finding yourself with contradictory assumptions means that you're in a branch of the proof that doesn't correspond to a possible situation you're trying to prove something about. Therefore <em>no proof is needed</em> in that branch -- or, in other words, "eh, whatever" will suffice as a proof. That's the reasoning behind the principle of explosion.</p> <p>In your particular case, you don't even <em>have</em> a contradicting assumption, only the <em>possibility</em> that there is no $x\in A'$ to prove anything about. Again, this is not a problem for a proof: if it turns out that $A'$ is empty, it still doesn't harm you to have had a plan for what to do with its elements.</p>
264,745
<p>When I was learning statistics I noticed that a lot of things in the textbook I was using were phrased in vague terms of "this is a function of that" e.g. a statistic is a function of a sample from a distribution. I realized that while I know the definition of a function as a relation and I have an intuitive notion of what "function of" means, it's unclear to me how you transform this into a rigorous definition of "function of". So what is the actual definition of "function of"?</p>
Christopher A. Wong
22,059
<p>A function $f$ is called "a function of $x$", if, for each $x$ (in some domain $X$), there is a unique corresponding output, denoted by $f(x)$.</p> <p>So a statistic is a function of a sample from a distribution means that, given a sample $S$, a statistic takes that sample $S$ and spits out a unique statistic value $f(S)$.</p>
2,130,807
<p>How to Prove $G$ is connected, if $G$ is an acyclic graph on $n \ge 1$ vertices containing exactly $n − 1$ edges?</p>
Kuifje
273,220
<p>If $G$ is acyclic with $n$ vertices and $n-1$ edges then $G$ is a connected tree.</p> <p>Proof: (by induction on $n$)</p> <p>If $n=1$ (or $n=2$), the result is trivial.</p> <p>Suppose the property holds for a given $n$, and consider an acyclic graph with $n+1$ vertices and $n$ edges. Since $G$ is acyclic and has at least one edge, the endpoint of any maximal path has degree $1$, so there is at least one vertex $v$ with degree $1$. The graph induced by deleting this vertex is acyclic with $n$ vertices and $n-1$ edges: it is a connected tree by the induction hypothesis. Linking $v$ back to the tree does not create a cycle; it follows that $G$ is a connected tree.</p>
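The statement is easy to test mechanically; here is a small sketch (my own, not part of the proof) that checks connectivity of an edge list by depth-first search:

```python
def is_connected(n, edges):
    # depth-first search from vertex 0 over an undirected edge list
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

# a path on 5 vertices: acyclic with n - 1 = 4 edges, hence connected
tree_ok = is_connected(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
# an acyclic graph with only n - 2 edges need not be connected
forest_ok = is_connected(4, [(0, 1), (2, 3)])
```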
2,653,708
<p>In this question I am using the Euclidean metric to determine the distance between two points.</p> <p>I want to define a function $f(x)=$ the minimum distance between $y=x$ and $y=e^x$ at each given point $x$; is there an efficient way of doing this?</p> <p>Second, related question: if I knew $e^x$ was the shortest distance between $g(x)$ and $y = x$, could I figure out a closed-form solution for $g(x)$?</p>
g.kov
122,782
<p>Consider the point $(x_1,g(x_1))$. At this point the distance to the line $y=x$ is $d(x_1)=\exp(x_1)$. </p> <p><a href="https://i.stack.imgur.com/4rFBp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4rFBp.png" alt="enter image description here"></a></p> <p>As we can see, for every $x$ there are two suitable points,</p> <p>\begin{align} P_1&amp;=(x_1,x_1+\sqrt2\,d(x_1)) \\ \text{and }\quad P_2&amp;=(x_1,x_1-\sqrt2\,d(x_1)) , \end{align}<br> so at least there are two suitable continuous functions,</p> <p>\begin{align} g_1(x)&amp;=x+\sqrt2\,d(x) ,\\ g_2(x)&amp;=x-\sqrt2\,d(x) . \end{align} </p>
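A quick numerical check (my own sketch, using the point-to-line distance formula $|y-x|/\sqrt2$ for the line $y=x$) confirms that both candidate curves sit at distance exactly $e^x$ from the line:

```python
import math

def g1(x):
    # candidate curve above the line y = x
    return x + math.sqrt(2) * math.exp(x)

def g2(x):
    # candidate curve below the line y = x
    return x - math.sqrt(2) * math.exp(x)

def dist_to_line(x, y):
    # Euclidean distance from the point (x, y) to the line y = x
    return abs(y - x) / math.sqrt(2)
```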
116,037
<p>I would warmly appreciate it if someone could tell me whether the following question has an affirmative answer. I am new to the field of commutative algebra, so I am simply trying to fill in some (huge) gaps. Thanks!</p> <p>Let $ (R,{\frak{m}}) $ be a Noetherian local (commutative unital) ring. Let $ I $ be an ideal of $ R $ with minimal generating set $ \lbrace x_{1},\ldots,x_{n} \rbrace $, and let $ \beta: R^{n} \rightarrow I $ be the surjective $ R $-linear map defined by $ \beta(r_{1},\ldots,r_{n}) = r_{1} x_{1} + \cdots + r_{n} x_{n} $. Viewing $ I $ as an $ R $-module, does there exist a free resolution of $ I $ of the form $$ 0 \longrightarrow R^{n-1} \stackrel{\alpha}{\longrightarrow} R^{n} \stackrel{\beta}{\longrightarrow} I \longrightarrow 0, $$ where the map $ \alpha $ is left-multiplication by some matrix $ M \in {\text{M}_{n \times (n-1)}}(R) $?</p>
Daniel Litt
6,950
<p>No. Consider $\mathfrak{m}:=(x,y,z)\subset k[x,y,z]_{(x,y,z)}=:R$. Then the kernel of the map $$R^3\to \mathfrak{m}$$ defined by the minimal generating set $x,y,z$ is minimally generated by $$k_1:=(y, -x, 0), k_2:=(z, 0, -x), k_3:=(0, z, -y).$$ But $$zk_1-yk_2-xk_3=0$$ so the submodule of $R^3$ that these generate is not free. Rather, we have a free resolution $$0\to R\to R^3\to R^3\to \mathfrak{m}\to 0.$$ where the middle map is defined by the matrix $(k_1 ~k_2 ~k_3)$ and the first map sends a generator of $R$ to</p> <p>$$\begin{pmatrix} z \\ -y\\ -x \end{pmatrix}.$$</p> <p>(I should mention--one can know this example will work by "pure thought" using local cohomology; essentially there is a natural way of identifying the local cohomology of $(R, \mathfrak{m})$, with the coherent cohomology of $\mathbb{P}^2$. But this does not always vanish in degree $2$, so there cannot be a length $2$ resolution...this argument also shows that there's no way of, say, choosing different generators to get a shorter resolution.)</p>
1,299,266
<p>How many zeros are there in the number $50!$?</p> <p>My attempt:</p> <p>The zeros in every number come from the 10s that make up the number. The 10s are, in turn, made up of 2s and 5s.</p> <p>So: $\frac{50}{5*2} = 5$ zeros?</p>
5xum
112,884
<p>Much more than $5$, surely. Since $50!$ is divisible by $2\cdot 5 \cdot 10\cdot 20\cdot 30\cdot 40\cdot 50 = 120{,}000{,}000$, and this number already ends in $7$ zeroes, you can be sure that $50!$ has at least $7$ zeroes.</p> <p>In fact, you need to count up how many twos and how many fives appear in the factorization of $50!$. Then, the number of trailing zeroes is the smaller of these two counts.</p>
1,299,266
<p>How many zeros are there in the number $50!$?</p> <p>My attempt:</p> <p>The zeros in every number come from the 10s that make up the number. The 10s are, in turn, made up of 2s and 5s.</p> <p>So: $\frac{50}{5*2} = 5$ zeros?</p>
Community
-1
<p>The number of trailing 0's is equal to the exponent of 5 in the prime factorization of 50!. This is because the prime decomposition of 50! has more factors of 2 than factors of 5, and each pair of a factor 2 and a factor 5 combines into a factor 10, tacking a 0 onto the end of the number.</p> <p>The exponent of 5 is $\lfloor{\frac{50}{5}}\rfloor + \lfloor\frac{50}{25}\rfloor = 10 + 2 = 12$, from an application of Legendre's Formula, which can be found <a href="http://en.wikipedia.org/wiki/Legendre%27s_formula" rel="nofollow">here</a></p>
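For readers who want to verify the count, here is a small Python sketch (my own; the function names are mine) that computes the exponent of $5$ via Legendre's formula and cross-checks it against the decimal digits of $n!$ directly:

```python
import math

def trailing_zeros_factorial(n):
    # Legendre's formula: exponent of 5 in n! (factors of 5 are scarcer than 2s)
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

def trailing_zeros_direct(n):
    # count trailing '0' digits of n! by inspection
    s = str(math.factorial(n))
    return len(s) - len(s.rstrip('0'))
```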
4,164,553
<p>Can anybody enlighten me about the applications of <a href="https://en.wikipedia.org/wiki/Intuitionistic_logic" rel="noreferrer">intuitionistic logic</a>? I am familiar with this system only by <a href="https://www.maa.org/publications/maa-reviews/proof-theory" rel="noreferrer">G.Takeuti's book</a>, where it is described as one of the examples of axiomatic systems of logic. Is it possible to explain to a non-specialist why this system deserves studying?</p> <p>I suspect that this is easier to explain on the example of <a href="https://encyclopediaofmath.org/wiki/Intuitionistic_propositional_calculus" rel="noreferrer">propositional intuitionistic calculus</a>, so if somebody could cast a light on this, this would be especially valuable.</p>
Noah Schweber
28,111
<p>The <strong>internal logic of a topos</strong> is inherently intuitionistic, not classical. This means that if we want to &quot;prove a fact in all topoi (satisfying some conditions),&quot; we should generally look for <em>constructive</em> arguments.</p> <p>Even if we don't care about the internal structure of a topos, this is still worth thinking about since we may then be able to &quot;de-toposify&quot; the result we get to arrive at a purely classical fact (usually involving sheaves somehow). For example, see <a href="https://math.stackexchange.com/a/394981/28111">this answer of Ingo Blechschmidt</a> where he shows roughly that a particular <em>constructive</em> argument about rings yields the <em>classical</em> fact that for <span class="math-container">$X$</span> a reduced scheme and <span class="math-container">$\mathcal{F}$</span> an <span class="math-container">$\mathcal{O}_X$</span>-module of locally finite type, <span class="math-container">$\mathcal{F}$</span> is locally free iff its rank is constant.</p>
1,150,805
<p>An unfair 3-sided die is rolled twice. The probability of rolling a 3 is $0.5$, the probability of rolling a 1 is $0.25$, and the probability of rolling a 2 is $0.25$. Let $X$ be the outcome of the first roll and $Y$ the outcome of the second.</p> <ul> <li><p>Find the Joint Distribution of $X$ and $Y$ in a Table.</p> <p>The outcome of $X = \{1,2,3\}$.</p> <p>The outcome of $Y = \{1,2,3\}$.</p> <p>Would I just make a table of all the roll possibilities?</p></li> <li><p>Find the Probability $\mathrm{P}(X+Y \geq 5)$.</p> <p>The only roll that will make this is a 3 or a 2. Should I just take the same of every possible roll to find this probability?</p></li> </ul>
Community
-1
<p>Since the set is finite, without loss of generality, assume that $X_n = \{1,2, \ldots, n\}$. Then, note that: $$\mathcal{P}(X_1) = \mathcal{P}(\{1\}) = \{\phi,\{1\} \} = \{\phi, X_1\}$$ $$\mathcal{P}(X_n) = \mathcal{P}(X_{n-1}) \cup \{Y \cup \{n\} : Y \in\mathcal{P}(X_{n-1})\}\; \forall \; n \ge 2$$</p> <p>This is a recursive definition of $\mathcal{P}(X_n)$ with only union as the operation in the definition. In language of sets, the power set of $X = X_n$ is just the union of power set of $X_{n-1}$ and the set of unions of $\{n\}$ with elements of $\mathcal{P}(X_{n-1}) \; \forall \; n \ge 2$.</p>
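The recursive definition translates directly into code. This Python sketch (my own illustration; the function name is mine) builds $\mathcal{P}(X_n)$ exactly as in the recursion above, using only unions:

```python
def power_set(n):
    # P(X_1) = {Ø, {1}};  P(X_n) = P(X_{n-1}) ∪ { Y ∪ {n} : Y ∈ P(X_{n-1}) }
    if n == 1:
        return [frozenset(), frozenset({1})]
    prev = power_set(n - 1)
    return prev + [y | {n} for y in prev]
```

Each recursion step doubles the size of the list, so `power_set(n)` has $2^n$ elements, as expected.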
827,154
<p>I need help with the definition of "within 1":</p> <ul> <li><p>If $x = 8$ and $y = 7$, then $x$ is "within 1" of $y$. </p></li> <li><p>If $x = 8$ and $y = 9$, then $x$ is "within 1" of $y$.</p></li> <li><p>If $x = 8$ and $y = 8$, is $x$ still "within 1" of $y$?</p></li> </ul> <p>It's my understanding that this would still be true, but I'm being asked for something to back up my assumption, so I guess I'm looking for a second opinion.</p>
RJ Hill
155,968
<p>It means that the value lies within the limits of +/− 1. </p> <p>If you were to say 'within 1' of 20, that means that 19, 20, and 21 are all valid numbers because they're 'within 1' of 20. </p> <p>The most common term for this is 'plus or minus 1' or whatever range you're looking in. Symbols used to denote this particular range are +/− and ±. This is used a good bit in statistics and the sciences.</p>
2,010,069
<p>I am looking on the solution to this problem presented in the book <em>"Fifty Challenging Problems in Probability with Solutions"</em> by Mosteller (p.18-19).</p> <blockquote> <p>On average, how many times must a die be thrown until one gets a 6?</p> </blockquote> <p>There are many ways to solve this problem as this is simple example of a geometric distribution, but I don't quite get the trick the author did it with the $qm$. <br> <strong>I am looking for explanation/interpretation of this trick (transition (2) (3) )</strong>? <br> <br> The first expression is clear, it is just the expansion of the expected value definition</p> <p><br> (p be the probability of a 6 on a given trial) <br> $$ m = p + 2 pq + 3pq^2 + 4pq^3 + ... \quad \quad \quad (1)$$ <br> Further the "trick" with qm has been used <br> $$qm =\ \ \ \ \ \ \ \ pq + 2pq^2 + 3pq^3 + ... \quad \quad \quad (2)$$ so that $$m - qm = p + pq + pq^2 + ... \quad \quad \quad (3)$$ $$m(1-q) = 1 \quad \quad \quad (4)$$ $$m=\frac{1}{p} \quad \quad \quad (5)$$</p>
Somos
438,089
<p>I think an intuitive way to think about matrix multiplication is to regard it as a combination of coordinate extraction and scalar multiplication. First, given any basis for a finite vector space, such as $\mathbb{R}^n$, we usually express any vector $\mathbf{v}=\sum_i c_i \mathbf{e}_i=[c_1,c_2,\dots,c_n]$ as an $n$-tuple of scalars. Given any vector $\mathbf{v}$, the functions that extract the $i$-th coordinate $c_i$ are linear functionals, and form a basis for the dual space. They can be thought of as similar to projection maps.</p> <p>Second, given any scalar $c$, the operation of multiplying a vector $\mathbf{v}$ by $c$ to produce $c\mathbf{v}$ is a linear transformation to $\mathbb{R}^n$. The composition of extracting the $j$-th coordinate of a vector $\mathbf{v}$ and then multiplying another vector $\mathbf{e}_i$ by that scalar, as a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$, we denote by $T_{ij}.$ Now a matrix $\mathbf{A}$ with entries $a_{ij}$ is associated with the finite sum $\sum_{ij} a_{ij}T_{ij}$ as a linear transformation. Note that the identity map $I_n=\sum_i T_{ii}$ is the sum of projection maps $T_{ii}.$</p> <p>We can think of this in two ways. First, the $i$-th row of the matrix $\mathbf{A}$ is associated with the composite map $\mathbf{v} \mapsto (\sum_j a_{ij} c_j)\mathbf{e}_i$ which is a dot product scalar multiplied by a basis vector. Second, the $j$-th column of the matrix $\mathbf{A}$ is associated with the composite map $\mathbf{v} \mapsto c_j(\sum_i a_{ij}\mathbf{e}_i)$ which is the scalar $c_j$, the $j$-th coordinate of $\mathbf{v}$, times the $j$-th column vector of $\mathbf{A}$. Your original understanding of matrix multiplication was pretty good.</p>
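The two readings correspond to the usual row and column pictures of matrix–vector multiplication. A small Python sketch (plain lists, my own illustration) shows they agree:

```python
def matvec_rows(A, v):
    # row view: each output coordinate is a dot product
    # (extract coordinates of v, then scale the i-th basis vector)
    return [sum(a_ij * c_j for a_ij, c_j in zip(row, v)) for row in A]

def matvec_columns(A, v):
    # column view: the j-th coordinate of v times the j-th column of A, summed
    m, n = len(A), len(A[0])
    out = [0] * m
    for j in range(n):
        for i in range(m):
            out[i] += v[j] * A[i][j]
    return out
```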
2,028,703
<p>I'm having this example for a simple <a href="https://en.wikipedia.org/wiki/Binary_symmetric_channel" rel="nofollow noreferrer">binary symmetric channel</a> (BSC) to bound the mutual information of $X$ and $Y$ as</p> <p>\begin{align*} I(X;Y) &amp;= H(Y) - H(Y|X)\\ &amp;= H(Y) - \sum p(x) H(Y \mid X = x) \\ &amp;= H(Y) - \sum p(x) H(p) \\ &amp;= H(Y) - H(p) \\ &amp;\leq 1 - H(p) \end{align*}</p> <p>However, as the title states, I don't really understand why I can write</p> <p>\begin{align*} \sum p(x) H(Y \mid X = x) = \sum p(x) H(p) \end{align*}</p> <p>I know that</p> <p>\begin{align*} \mathbb{P}[Y = 0 \mid X = 0 ] &amp;= 1 - p \\ \mathbb{P}[Y = 1 \mid X = 0 ] &amp;= p \\ \mathbb{P}[Y = 1 \mid X = 1 ] &amp;= p \\ \mathbb{P}[Y = 0 \mid X = 1 ] &amp;= 1 - p \end{align*}</p> <p>but let's assume I set $p = \frac{1}{3}$, would that mean that I have</p> <p>\begin{align*} I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}) \approx 0.4716 \text{ bit} \end{align*}</p> <p>I ask because if this is the case, why is it not</p> <p>\begin{align*} I(X;Y) \leq 1- H(1-p) = 1- H(\frac{2}{3}) \approx 0.61 \text{ bit} \end{align*}</p> <p>instead? </p> <p>Or, and this would make the most sense to me, it's actually $p = (p_{error}, 1-p_{error})= (\frac{1}{3}, \frac{2}{3})$ and thus we have</p> <p>\begin{align*} I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}, \frac{2}{3}) \approx 0.0817 \text{ bit} \end{align*}</p>
Stefan Falk
70,606
<p>I just realized that it's actually very simple to show that</p> <p>\begin{align*} \sum_{x \in X} \mathbb{P}[X = x]H(Y|X=x) &amp;= H_B(p) \end{align*}</p> <p>We start by observing that of course</p> <p>\begin{align*} \sum_{x \in X} \mathbb{P}[X = x]H(Y|X=x) &amp;= p_X(0) \cdot H(Y|X=0) + p_X(1) \cdot H(Y|X = 1) \\ &amp;= \sum_{x \in X} p_X(x) \cdot \Big( - \sum_{y \in Y} p_{Y|X}(y|x) \log_2 p_{Y|X}(y|x) \Big) \end{align*}</p> <p>and since</p> <p>\begin{align*} \mathbb{P}[X = 0] H(Y|X=0) &amp;= p_X(0) \cdot \Big( -p_{Y|X}(0|0) \log_2 p_{Y|X}(0|0) - p_{Y|X}(1|0) \log_2 p_{Y|X}(1|0) \Big) \\ &amp;= p_X(0) \cdot \underbrace{\Big(- (1-p)\log_2 (1-p) - p \log_2p \Big)}_\text{H((1-p),p)} \\ &amp;=p_X(0) \cdot H((1-p),p) \\ &amp;=p_X(0) \cdot H_B(p) \\ \mathbb{P}[X = 1] H(Y|X=1) &amp;= p_X(1) \cdot H_B(p) \end{align*}</p> <p>we have</p> <p>\begin{align*} -\sum_{x \in X} p_X(x)\sum_{y\in Y} p_{Y|X}(y|x) \log_2 p_{Y|X}(y|x) &amp;= \mathbb{P}[X = 0] H(Y|X=0) + \mathbb{P}[X = 1] H(Y|X=1) \\ &amp;= p_X(0) \cdot H_B(p) + p_X(1) \cdot H_B(p) \\ &amp;= H_B(p) \end{align*}</p>
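Numerically this is easy to confirm. The following Python sketch (function names my own) checks that $H(Y\mid X)=H_B(p)$ for any input distribution $p_X$, and that $1-H_B(1/3)\approx 0.0817$ bit, matching the last option above:

```python
import math

def h_binary(p):
    # binary entropy H_B(p) = H((1-p), p), in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conditional_entropy_bsc(p, p_x0):
    # H(Y|X) = p_X(0) H(Y|X=0) + p_X(1) H(Y|X=1); each term is H_B(p) for a BSC
    return p_x0 * h_binary(p) + (1 - p_x0) * h_binary(p)
```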
1,476,313
<p>I want to simplify this fraction</p> <p>$$ \frac{\sqrt{6} + \sqrt{10} + \sqrt{15} + 2}{\sqrt{6} - \sqrt{10} + \sqrt{15} - 2} $$</p> <p>I've tried to group up the denominator members like $ (\sqrt{6} + \sqrt{15}) - (\sqrt{10} + 2) $ and then amplify with $ (\sqrt{6} + \sqrt{15}) + (\sqrt{10} + 2) $ </p>
Community
-1
<p>For this <a href="https://math.stackexchange.com/a/2308483/4414">system</a> we implemented finding a split form for the denominator: $$a + \sqrt{p}\,b$$ Such that $\sqrt{p}$ is a new radical. For a quotient we then have: $$\frac{c}{a + \sqrt{p}\,b} = \frac{c\,(a - \sqrt{p}\,b)}{a^2 - p\,b^2}$$ Let's give it a try: $$\frac{2+\sqrt{6}+\sqrt{10}+\sqrt{15}}{-2+\sqrt{6}-\sqrt{10}+\sqrt{15}} = $$ $$\frac{2+\sqrt{15}+\sqrt{2}(\sqrt{3}+\sqrt{5})}{- 2+\sqrt{15}+\sqrt{2}(\sqrt{3}-\sqrt{5})} =$$ $$\frac{(2+\sqrt{15}+\sqrt{2}(\sqrt{3}+\sqrt{5}))\,(- 2+\sqrt{15}-\sqrt{2}(\sqrt{3}-\sqrt{5}))}{(- 2+\sqrt{15})^2-2(\sqrt{3}-\sqrt{5})^2} = $$ $$\frac{15+6\sqrt{6}}{3} = 5+2\sqrt{6}$$</p>
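A floating-point check of the final value (my own sketch):

```python
import math

s6, s10, s15 = math.sqrt(6), math.sqrt(10), math.sqrt(15)
# the original quotient and its claimed closed form 5 + 2*sqrt(6)
value = (2 + s6 + s10 + s15) / (-2 + s6 - s10 + s15)
expected = 5 + 2 * s6
```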
1,985,552
<p>Where $p_n \rightarrow p$. I'm trying to prove that for $E=\{ p_n : n \in \mathbb{N}$ and $lim_{n\rightarrow \infty} p_n =p \}$, then $Cl(E)=E \cup \{p \}$ and $Cl(E)$ is compact. </p> <p>Also, I'm currently using the definition of limit points as p is a limit point if $\forall r&gt;0, (E \cap N_r(p)) \backslash \{p\} \neq \emptyset$. </p> <p>Here's a rough outline for what I have:</p> <p><strong>Proving $Cl(E)=E \cup \{p \}$:</strong></p> <p>By a theorem, we konw that $\{ p_n \}$ converges to $p \in E$ iff every neighborhood of p contains $p_n$ for all but finitely many n. So I'm thinking this theorem shows that $\forall r&gt;0, (E \cap N_r(p)) \backslash \{p\} \neq \emptyset$.However, I'm not sure I'm supposed to prove that p is the only limit point. </p> <p><strong>Proving compact</strong></p> <p>I'm also still unsure how to prove $Cl(E)$ is compact just by the definition. Since $d(p_n, p) &lt; \epsilon$, does this imply that all open covers $\mathcal{G}$ is somehow bounded by this and we therefore have finite subcovers? </p>
true blue anil
22,388
<p>It is akin to putting 30 identical balls in 4 distinct boxes</p> <p>Put 1 in the first, 4 in the second, 6 in the third and 10 in the fourth, 9 more remain to be put.</p> <p>Allotments in which you put $\ge 5$ in the first box and $\ge6$ in the others will be invalid.<br> [ Note that with the given figures, you can violate the constraints in only one box]</p> <p>Applying <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow">stars and bars</a> with <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow">inclusion-exclusion</a>, we get</p> <p>$\binom{12}3 - \binom73 - \binom31\binom63 = 125$</p>
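The stars-and-bars count can be double-checked by brute force over the 9 leftover balls (a Python sketch of my own; per the constraints above, the first box can take at most 4 more and each of the others at most 5 more):

```python
from itertools import product
from math import comb

def count_allotments():
    # distribute the 9 remaining balls subject to the per-box caps 4, 5, 5, 5
    return sum(
        1
        for a, b, c, d in product(range(5), range(6), range(6), range(6))
        if a + b + c + d == 9
    )
```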
88,156
<p>I understand that <code>Round</code> give the nearest even integer for cases where the number is between two integers, i.e. <code>Round[2.5] = 2</code> and <code>Round[3.5] = 4</code> (see this <a href="https://mathematica.stackexchange.com/questions/29122/why-do-numberform-and-round-apparently-use-different-tie-breaking-methods/33939#33939">question</a>). This carries over to rounding to non-integers that are not orders of magnitude as well:</p> <pre><code>Round[1.3, 0.2] = 1.2 Round[1.5, 0.2] = 1.6 Round[Range[1, 2, 0.1], 0.2] = {1., 1.2, 1.2, 1.2, 1.4, 1.6, 1.6, 1.6, 1.8, 1.8, 2.} </code></pre> <p>However, the <code>Floor</code> function seems to behave oddly in a similar situation:</p> <pre><code>Floor[Range[1, 2, 0.1], 0.2] = {1., 1., 1., 1.2, 1.2, 1.4, 1.6, 1.6, 1.8, 1.8, 2.} </code></pre> <p>but <code>Ceiling</code> does what I would expect it to do:</p> <pre><code>Ceiling[Range[1, 2, 0.1], 0.2] = {1., 1.2, 1.2, 1.4, 1.4, 1.6, 1.6, 1.8, 1.8, 2., 2.} </code></pre> <p>My question therefore is why does <code>Floor[1.2, 0.2] = 1.</code>? I understand the behavior of <code>Round</code> for this test set, but I do not understand <code>Floor</code>. 
Then, with <code>Floor</code> giving something I don't expect, I am very surprised that <code>Ceiling</code> gives exactly what I would expect.</p> <hr> <h2>Background for my Problem</h2> <p>I have a regular array of 2D data (<code>{x, y, probability}</code>) that I want to bin by summing, so <code>Round</code> doesn't work because different numbers of elements are included in consecutive bins because it rounds to evens, but I expected <code>Floor</code> to work.</p> <pre><code>gathered = GatherBy[predictedProbabilityDist, { Round[#[[1]], newGridSpacing], Round[#[[2]], newGridSpacing] } &amp;]; </code></pre> <p>versus</p> <pre><code>gathered = GatherBy[predictedProbabilityDist, { Floor[#[[1]], newGridSpacing], Floor[#[[2]], newGridSpacing] } &amp;]; </code></pre> <p>or the nearly equivalent expression with <code>Ceiling</code>, followed by something like</p> <pre><code>predictedProbDist = Map[ { Min[#[[All, 1]]], Min[#[[All, 2]]], Total[#[[All, 3]]] } &amp;, gathered]; </code></pre> <p>Based on the testing with the simple <code>Range</code> above, it looks like I need to use <code>Ceiling</code>, but I don't know why for sure that is the case.</p>
Eric Towers
16,237
<p>Sjoerd C. de Vries correctly diagnoses the problem and provides a resolution. An alternative is to work in exact numbers and convert to floating point at the end.</p> <pre><code>Floor[Range[1, 2, 1/10], 2/10] (* Output: {1, 1, 6/5, 6/5, 7/5, 7/5, 8/5, 8/5, 9/5, 9/5, 2} *) N[Floor[Range[1, 2, 1/10], 2/10]] (* Output: {1., 1., 1.2, 1.2, 1.4, 1.4, 1.6, 1.6, 1.8, 1.8, 2.} *) </code></pre> <p>Note that you may still have trouble with your binning at one or both endpoints of your range.<br> Alternatively, since you have a (full?) regular grid, you may be happier using Partition[] since it does what you want structurally instead of by means of comparisons. (The following is in no sense optimal. For instance, if your {x,y,prob} triples are sorted in y and then by x, partitioning [[All,3]] would give you the dataTable directly.)</p> <pre><code>inData = (* linear array of {x,y,probability} triples *) ((intermediateData[#[[1]], #[[2]]] = #[[3]]) &amp;) /@ inData dataTable = Table[intermediateData[x,y],{x,(* min x *), (* max x *)}, {y, (* min y *), (* max y *)}] (* the minimum and maximum values can be extracted from inData or just supplied by hand. *) Total[Flatten[#]] &amp; /@ # &amp; /@ Partition[dataTable, {(* columns per grid *), (* rows per grid *)}] </code></pre> <p>for instance (my random numbers might not match your random numbers), with a 10x10 initial grid and 5x5 cells...</p> <pre><code>inData = Flatten[Table[{x, y, Random[]}, {x, 1, 10}, {y, 1, 10}], 1] ((intermediateData[#[[1]], #[[2]]] = #[[3]]) &amp;) /@ inData dataTable = Table[intermediateData[x, y], {x, 1, 10}, {y, 1, 10}] Total[Flatten[#]] &amp; /@ # &amp; /@ Partition[dataTable, {5, 5}] (* last Output: {{12.0149, 15.3413}, {12.7288, 11.8542}} *) </code></pre> <p>Partition[] will provide partial cells at the upper ends of the x and y ranges if the cell size does not exactly divide the number of rows or columns.</p>
2,567,607
<p>$$\arctan 2x +\arctan 3x = \left(\frac{\pi}{4}\right)$$ $$\arctan \left(\frac{2x+3x}{1-2x*3x}\right)=\frac {\pi}{4}$$ $$\frac {5x}{1-6x^2}=\tan \frac{\pi}{4}=1$$ $$6x^2 + 5x -1 = 0$$ $$(6x-1)(x+1)=0$$ $$x=-1, \frac{1}{6}$$</p> <p>The answer however rejects the solution $x=-1$ saying that it makes the L.H.S of the equation negative. I don't understand this, I don't see how $x=-1$ makes the L.H.S. negative.</p>
Maadhav
416,874
<blockquote> <p>$\arctan x + \arctan y = \arctan\left(\dfrac{x+y}{1-xy}\right), \quad\quad xy &lt; 1$</p> </blockquote> <p>So</p> <p>$x = -1$ doesn't work as $2x \times 3x = 6x^2 = 6 \nless 1$.</p> <p>$x=\frac16$ works as $6x^2 = \frac1{6} \lt 1$.</p>
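A numerical check (my own sketch) shows why $x=-1$ must be rejected: the left-hand side is negative there (in fact $-3\pi/4$), while $x=\tfrac16$ gives exactly $\pi/4$:

```python
import math

def lhs(x):
    # left-hand side of the original equation arctan(2x) + arctan(3x)
    return math.atan(2 * x) + math.atan(3 * x)
```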
3,126,936
<p>Numbers between <span class="math-container">$1 - 1000$</span> which leave no remainder when divided by <span class="math-container">$4$</span> and divided by <span class="math-container">$6$</span> but not by <span class="math-container">$21$</span>?</p> <p>I tried <span class="math-container">$$\frac{1000}{12} = 83 - \frac{83}{21} = 83-3 = 80$$</span></p> <p>Am I correct? Can someone please explain to me how it works?</p>
Community
-1
<p>A number divisible by both <span class="math-container">$4$</span> and <span class="math-container">$6$</span> is a multiple of <span class="math-container">$\operatorname{lcm}(4,6)=12$</span>, and there are <span class="math-container">$\lfloor 1000/12\rfloor = 83$</span> multiples of <span class="math-container">$12$</span> between <span class="math-container">$1$</span> and <span class="math-container">$1000$</span>. From these, remove the multiples of <span class="math-container">$12$</span> that are also divisible by <span class="math-container">$21$</span>. Since <span class="math-container">$\gcd(12,21)=3$</span>, the numbers <span class="math-container">$12$</span> and <span class="math-container">$21$</span> are <em>not</em> coprime, so these are the multiples of <span class="math-container">$\operatorname{lcm}(12,21)=84$</span>, not of <span class="math-container">$12 \times 21 = 252$</span>. There are <span class="math-container">$\lfloor 1000/84\rfloor = 11$</span> of them, so the count is <span class="math-container">$83 - 11 = 72$</span>. Your computation subtracts only the <span class="math-container">$3$</span> multiples of <span class="math-container">$252$</span>, which removes too few numbers.</p>
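A direct enumeration (Python sketch of my own) settles the count; note that <span class="math-container">$\gcd(12,21)=3$</span>, so the numbers divisible by both <span class="math-container">$12$</span> and <span class="math-container">$21$</span> are exactly the multiples of <span class="math-container">$84$</span>:

```python
def count_special(limit=1000):
    # divisible by 4 and by 6  <=>  divisible by lcm(4, 6) = 12;
    # then exclude the multiples of 21
    return sum(1 for n in range(1, limit + 1) if n % 12 == 0 and n % 21 != 0)
```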
422,761
<blockquote> <p>Prove that there is no Integer such that $x≡2 \pmod 6$ and $x≡3 \pmod 9$ are both true.</p> </blockquote> <p>How should I approach this question?<br> I attempted using contra-positive proof, so $x=6p+2$ and $x=9q+3$ where $p,q$ are integers.<br> Then $6p+2=9q+3$. </p>
Arnab Dutta
80,339
<p>@TomDavies92 </p> <p>I think the simple answer he asks for is what the generating function $ \sum\limits_{i=1}^n \ln^n3 $ means:</p> <p>The above expression is better written as $ \sum\limits_{i=1}^n \ln^i3 $.</p> <p>The answer is simple; its expansion is:<br> $\ln 3 + \ln^2 3 + \ln^3 3 + \dots + \ln^n 3 $ $ = \ln 3 + (\ln 3)^2 + (\ln 3)^3 + \dots + (\ln 3)^n$ -------(1)</p> <p>Now:<br> $\ln 3 &gt; 1$ (in fact $\ln 3 \approx 1.0986$)</p> <p>So every term of equation (1) is at least $\ln 3 &gt; 1$, and the partial sums grow without bound as $n$ increases. Hence the series is divergent.</p>
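Since every term exceeds $1$, the partial sums grow at least linearly. A quick Python sketch (my own; the function name is mine) illustrates this:

```python
import math

def partial_sum(n):
    # partial sums of sum_{i=1}^n (ln 3)^i, a geometric-type series with ratio > 1
    r = math.log(3)
    return sum(r ** i for i in range(1, n + 1))
```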
4,348,969
<p>Let <span class="math-container">$a&lt;b\in\mathbb{R}$</span>. A sequence <span class="math-container">$P:=(p_0,\ldots,p_n)$</span> is a called a partition of <span class="math-container">$[a,b]$</span> if <span class="math-container">$$a=p_0&lt;\ldots&lt;p_n=b.$$</span> The size of <span class="math-container">$P$</span> is taken to be <span class="math-container">$\max_i(p_{i+1}-p_i)$</span>.</p> <p>Now, suppose we are given <span class="math-container">$\delta&gt;0$</span>. There exists <span class="math-container">$n\in\mathbb{N}_{\geq 1}$</span> such that <span class="math-container">$(b-a)/n&lt;\delta$</span>. I can divide <span class="math-container">$[a,b]$</span> into <span class="math-container">$n$</span> equal sub-intervals, by writing <span class="math-container">$$I_k:=\left[a+\frac{k}{n}(b-a),a+\frac{k+1}{n}(b-a)\right]$$</span> for all <span class="math-container">$0\leq k\leq n-1$</span>. Clearly the length of each <span class="math-container">$I_k$</span> is <span class="math-container">$&lt;\delta$</span>. We take <span class="math-container">$p_k=a+\frac{k}{n}(b-a)$</span> for <span class="math-container">$0\leq k\leq n$</span>.</p> <p>The above gives us a partition. Are there other ways of constructing partitions of <span class="math-container">$[a,b]$</span> whose size is <span class="math-container">$&lt;\delta$</span>?</p>
RRL
148,510
<p>You gave an example of a uniform partition.</p> <p>The dyadic partition <span class="math-container">$P_n = (x_0,x_1,\ldots,x_{2^n})$</span> with <span class="math-container">$(b-a)/2^n &lt; \delta$</span> and</p> <p><span class="math-container">$$x_k = a + \frac{b-a}{2^n}k, \quad k=0,1,\ldots, 2^n,$$</span></p> <p>is also uniform and is useful because it forms a sequence of refining partitions. That is, <span class="math-container">$P_{n+1} \supset P_n$</span> for all <span class="math-container">$n \geqslant 1$</span> -- which does not hold for your sequence of uniform partitions.</p> <p>There is an endless variety of other possibilities.</p>
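With exact rational arithmetic the refinement property <span class="math-container">$P_n \subset P_{n+1}$</span> is easy to verify. A Python sketch (my own) using `fractions.Fraction`:

```python
from fractions import Fraction

def dyadic_partition(a, b, n):
    # the 2**n + 1 points x_k = a + (b - a) * k / 2**n, as exact rationals
    a, b = Fraction(a), Fraction(b)
    step = (b - a) / 2 ** n
    return [a + k * step for k in range(2 ** n + 1)]
```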
4,348,969
<p>Let <span class="math-container">$a&lt;b\in\mathbb{R}$</span>. A sequence <span class="math-container">$P:=(p_0,\ldots,p_n)$</span> is a called a partition of <span class="math-container">$[a,b]$</span> if <span class="math-container">$$a=p_0&lt;\ldots&lt;p_n=b.$$</span> The size of <span class="math-container">$P$</span> is taken to be <span class="math-container">$\max_i(p_{i+1}-p_i)$</span>.</p> <p>Now, suppose we are given <span class="math-container">$\delta&gt;0$</span>. There exists <span class="math-container">$n\in\mathbb{N}_{\geq 1}$</span> such that <span class="math-container">$(b-a)/n&lt;\delta$</span>. I can divide <span class="math-container">$[a,b]$</span> into <span class="math-container">$n$</span> equal sub-intervals, by writing <span class="math-container">$$I_k:=\left[a+\frac{k}{n}(b-a),a+\frac{k+1}{n}(b-a)\right]$$</span> for all <span class="math-container">$0\leq k\leq n-1$</span>. Clearly the length of each <span class="math-container">$I_k$</span> is <span class="math-container">$&lt;\delta$</span>. We take <span class="math-container">$p_k=a+\frac{k}{n}(b-a)$</span> for <span class="math-container">$0\leq k\leq n$</span>.</p> <p>The above gives us a partition. Are there other ways of constructing partitions of <span class="math-container">$[a,b]$</span> whose size is <span class="math-container">$&lt;\delta$</span>?</p>
B. S. Thomson
281,004
<p>Here is my favorite:</p> <p><strong>LEMMA</strong>. [Cousin partitioning lemma] Let <span class="math-container">$\delta(x)$</span> be a positive function defined on some fixed interval <span class="math-container">$[a,b]$</span>. Then for any subinterval <span class="math-container">$[c,d]\subset [a,b]$</span> there must exist points <span class="math-container">$$c=x_0&lt;x_1&lt; x_2&lt; \dots &lt; x_k = d$$</span> and points <span class="math-container">$\xi_i\in [x_{i-1},x_i]$</span> subject to the constraint that <span class="math-container">$(x_{i}-x_{i-1})&lt; \delta(\xi_i)$</span> for each <span class="math-container">$i=1,2,\dots, k$</span>.</p> <p>Cousin's lemma first appeared in an 1895 paper by the Belgian mathematician Pierre Cousin, who was a student of Poincaré. It was discovered again by Goursat, who included it in a paper that appeared in the very first issue of the American Math. Society Transactions journal in 1900.</p> <p>It is very useful. Think of it as equivalent to the nested interval property. Anything you have proved before with the nested interval property (or the Bolzano-Weierstrass theorem, or the Heine-Borel theorem, or the least upper bound property) usually has a simpler proof using Cousin's lemma.</p> <p>If you use partitions of small size <span class="math-container">$\delta&gt;0$</span> as you mentioned, you can define the Riemann integral. If you use Cousin's partitions monitored by a small positive function <span class="math-container">$\delta(x)$</span> you can define the Lebesgue integral.</p>
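For a continuous positive gauge <span class="math-container">$\delta$</span>, such a tagged partition can actually be constructed by repeated bisection. Here is a Python sketch (my own, with midpoint tags). The lemma itself needs no continuity, but this greedy procedure is only guaranteed to terminate when <span class="math-container">$\delta$</span> is bounded below on <span class="math-container">$[a,b]$</span>, e.g. when it is continuous:

```python
def cousin_partition(a, b, delta):
    # bisect [a, b] until each piece [c, d] has d - c < delta(xi)
    # at its midpoint tag xi
    pieces = []

    def split(c, d):
        xi = (c + d) / 2
        if d - c < delta(xi):
            pieces.append((c, xi, d))  # (left endpoint, tag, right endpoint)
        else:
            split(c, xi)
            split(xi, d)

    split(a, b)
    return pieces
```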
310,462
<p>I am looking for an elegant proof of the fact that a countable metric space is complete iff its underlying topology is discrete.</p> <p>It is easy to see that a discrete space is complete because its topology can be derived from the distance <span class="math-container">$d(x,y)=1$</span> iff <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are distinct, so that every Cauchy sequence must be eventually constant, and converge to this constant point that is inside the space. But the proof in the other side, that the topology underlying a countable and complete metric space must be discrete does not seem so easy. </p> <p>By the way, I take it that:</p> <p>(i) If the space is a singleton, <span class="math-container">$d(x,x)=0$</span> is a distance whose underlying topology is the discrete (and unique) topology on <span class="math-container">$X$</span>, so that the only (Cauchy) sequence on x is constant and convergent to x, making the space complete;</p> <p>(ii) If the space is empty, it is a countable space whose only topology can be considered as discrete. We can also consider that the only function from the empty set into the positive reals (and similarly into the natural numbers) is the empty function, so that no Cauchy sequence exists making our empty space complete. Gérard Lang. </p>
Tomasz Kania
15,129
<p>This is not true. Take an infinite countable successor ordinal, for example <span class="math-container">$\omega+1$</span>, with the order topology. This topology makes it compact, hence Polish, yet it is not discrete (the point <span class="math-container">$\omega$</span> is not isolated).</p>
484,367
<p>I've been trying to find a tight upper bound for the series</p> <p>$$S (x) = e^{-x} \sum_{k=0}^{\infty} \frac{x^k}{k!} \sqrt{k+1}$$</p> <p>So far, I've managed to get a reasonable bound for small values of $x$ by using the inequality $\sqrt{k+1} \leq \sqrt{\frac{k^{2}}{4} + k + 1} = \frac{k}{2} + 1 ~\forall~k \geq 0$, but it becomes very loose when $x$ is large. I've also tried taking a Taylor series approximation to $\sqrt{k+1}$, but this leads to a complicated infinite sum of weighted Bell polynomials which, as far as I'm aware, doesn't have a closed form. Any suggestions would be greatly appreciated!</p>
robjohn
13,854
<p><strong>Upper and Lower Bounds</strong></p> <p>Note that $$ e^{-x}\sum_{k=0}^\infty\frac{x^k}{k!}=1\tag{1} $$ and that $$ e^{-x}\sum_{k=0}^\infty(k+1)\frac{x^k}{k!}=x+1\tag{2} $$ Since $\sqrt{x}$ is concave, <a href="http://en.wikipedia.org/wiki/Jensen%27s_inequality" rel="nofollow">Jensen's Inequality</a> gives $$ e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!}\le\sqrt{x+1}\tag{3} $$ Also, $$ e^{-x}\sum_{k=0}^\infty\frac1{k+1}\frac{x^k}{k!}=\frac{1-e^{-x}}{x}\tag{4} $$ Since $1/\sqrt{x}$ is convex, Jensen's Inequality gives $$ \begin{align} e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!} &amp;\ge\sqrt{\frac{x}{1-e^{-x}}}\\ &amp;\ge\sqrt{x}\tag{5} \end{align} $$ Therefore, we get the bounds $$ \sqrt{x}\le e^{-x}\sum_{k=0}^\infty\sqrt{k+1}\frac{x^k}{k!}\le\sqrt{x+1}\tag{6} $$</p> <hr> <p><strong>Asymptotic Expansion</strong></p> <p>Using Stirling's Expansion and the Binomial Theorem, we get $$ \begin{align} \frac1{4^n}\binom{2n}{n} &amp;=\frac1{\sqrt{\pi n}} \left(1-\frac1{8n}+\frac1{128n^2}+\frac5{1024n^3}-\frac{21}{32768n^4}+\dots\right)\\ &amp;=\frac1{\sqrt{\pi(n+1)}} \left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\tag{7} \end{align} $$ and therefore, $$ \begin{align} \frac{\sqrt{n+1}}{n!} &amp;=\frac{4^n}{\sqrt{\pi}}\frac{n!}{(2n)!}\left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\\ &amp;=\frac{2^n}{\sqrt{\pi}}\frac1{(2n-1)!!}\left(1+\frac3{8n}-\frac{23}{128n^2}+\frac{89}{1024n^3}-\frac{1509}{32768n^4}+\dots\right)\\ &amp;=\frac{2^n}{\sqrt{\pi}}\small\left(\frac1{(2n{-}1)!!}+\frac{3/4}{(2n{+}1)!!}+\frac{1/32}{(2n{+}3)!!}+\frac{9/128}{(2n{+}5)!!}+\frac{491/2048}{(2n{+}7)!!}+\dots\right)\tag{8} \end{align} $$ Note that $$ \begin{align} \int_x^\infty e^{-t^2/2}\,\mathrm{d}t &amp;=\frac1x\int_x^\infty\frac{x}{t}e^{-t^2/2}\,\mathrm{d}t^2/2\\ &amp;\le\frac1x\int_x^\infty e^{-t^2/2}\,\mathrm{d}t^2/2\\ &amp;=\frac1xe^{-x^2/2}\tag{9} \end{align} $$ therefore, since both the following 
sum and integral satisfy $f'=1+xf$ and agree at $x=0$, $$ \begin{align} \sum_{k=0}^\infty\frac{x^{2k+1}}{(2k+1)!!} &amp;=e^{x^2/2}\int_0^xe^{-t^2/2}\,\mathrm{d}t\\ &amp;=\sqrt{\frac\pi2}\ e^{x^2/2}+O\left(\frac1x\right)\\ \frac1{\sqrt{2x}}\sum_{k=0}^\infty\frac{(2x)^{k+1}}{(2k+1)!!} &amp;=\sqrt{\frac\pi2}e^x+O\left(\frac1{\sqrt{x}}\right)\\ e^{-x}\sum_{k=0}^\infty\frac{(2x)^{k+1}}{(2k+1)!!} &amp;=\sqrt{\pi x}+O\left(e^{-x}\right)\tag{10} \end{align} $$ Multiplying $(8)$ by $e^{-x}x^n$, summing, and applying $(10)$ yields the asymptotic expansion that Raymond Manzoni got: $$ \begin{align} e^{-x}\sum_{n=1}^\infty\frac{\sqrt{n+1}}{n!}x^n &amp;=\sqrt{x}\small\left(1+\frac3{8x}+\frac1{128x^2}+\frac9{1024x^3}+\frac{491}{32768x^4}+O\left(\frac1{x^5}\right)\right)\tag{11} \end{align} $$</p>
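The bounds in $(6)$ and the leading behaviour in $(11)$ can be checked numerically. Here is a Python sketch (my own; the truncation length is an arbitrary choice that is more than enough for the $x$ values tested) evaluating the series:

```python
import math

def S(x, terms=400):
    # S(x) = e^{-x} * sum_{k>=0} sqrt(k+1) x^k / k!, truncated after `terms` terms
    total, term = 0.0, 1.0  # term holds x^k / k!
    for k in range(terms):
        total += term * math.sqrt(k + 1)
        term *= x / (k + 1)
    return math.exp(-x) * total
```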
3,080,230
<p>On <a href="https://en.wikipedia.org/wiki/Net_(mathematics)#Properties" rel="nofollow noreferrer">Wikipedia</a> it states that a space <span class="math-container">$X$</span> is compact if and only if every net has a convergent subnet. It then states that a net in the product topology has a limit if and only if each projection has a limit. I understand why both of these facts are true. However it then states this leads to a slick proof of Tychonoff's theorem, and I don't quite see how.</p> <p>In particular, it seems to me that the first fact implies that every compact space is sequentially compact. Since every sequence is also a net, it has a convergent subnet, which gives a convergent subsequence. This is obviously not true, since <span class="math-container">$\{0,1\}^\mathbb{R}$</span> is not sequentially compact, but it is compact by Tychonoff's theorem.</p>
bof
111,012
<p>Here is an example of a sequence which has no convergent subsequence, but it has a convergent subnet <strong>assuming the axiom of choice</strong>.</p> <p>Let <span class="math-container">$I=\{0,1\}^\mathbb N$</span>.</p> <p>The product space <span class="math-container">$X=\{0,1\}^I$</span> is a compact space which is not sequentially compact.</p> <p>For <span class="math-container">$n\in\mathbb N$</span> define <span class="math-container">$f_n:I\to\{0,1\}$</span> by setting <span class="math-container">$f_n(i)=i(n)$</span>. Then <span class="math-container">$\langle f_n:n\in\mathbb N\rangle$</span> is a sequence in <span class="math-container">$X$</span> with no convergent subsequence.</p> <p>Let <span class="math-container">$\mathcal U$</span> be a uniform ultrafilter on <span class="math-container">$\mathbb N$</span>. (I suppose it can be done without using ultrafilters, but I'm more used to filters than nets.)</p> <p>Define <span class="math-container">$f:I\to\{0,1\}$</span> so that, for each <span class="math-container">$i\in I$</span>, <span class="math-container">$\{n\in\mathbb N:i(n)=f(i)\}\in\mathcal U$</span>.</p> <p>Let <span class="math-container">$D$</span> be the collection of all finite subsets of <span class="math-container">$I$</span>, directed by <span class="math-container">$\subseteq$</span>.</p> <p>For <span class="math-container">$K\in D$</span>, let <span class="math-container">$h(K)$</span> be the least <span class="math-container">$n\in\mathbb N$</span> such that <span class="math-container">$i(n)=f(i)$</span> for all <span class="math-container">$i\in K$</span>; this defines a monotone final function <span class="math-container">$h:D\to\mathbb N$</span>.</p> <p>Define <span class="math-container">$g_K=f_{h(K)}\in X$</span>; then <span class="math-container">$\langle g_K:K\in D\rangle$</span> is a subnet of <span class="math-container">$\langle f_n:n\in\mathbb N\rangle$</span> which converges to <span 
class="math-container">$f$</span>.</p>
1,822,336
<p>My friend asked me what the roots of $y=x^3+x^2-2x-1$ was.</p> <p>I didn't really know and when I graphed it, it had no integer solutions. So I asked him what the answer was, and he said that the $3$ roots were $2\cos\left(\frac {2\pi}{7}\right), 2\cos\left(\frac {4\pi}{7}\right)$ and $2\cos\left(\frac {8\pi}{7}\right)$.</p> <blockquote> <p><strong>Question:</strong> How would you get the roots without using a computer such as Mathematica? Can other equations have roots in Trigonometric forms? </p> </blockquote> <p>Anything helps!</p>
achille hui
59,379
<p>Let $p(x) = x^3+x^2-2x-1$, we have $$p(t + t^{-1}) = t^3 + t^2 + t + 1 + t^{-1} + t^{-2} + t^{-3} = \frac{t^7-1}{t^3(t-1)}$$</p> <p>The RHS has roots of the form $t = e^{\pm \frac{2k\pi}{7}i}$ ( coming from the $t^7 - 1$ factor in numerator ) for $k = 1,2,3$. So $p(x)$ has roots of the form $$e^{\frac{2k\pi}{7} i} + e^{-\frac{2k\pi}{7} i} = 2\cos\left(\frac{2 k\pi}{7}\right)$$ for $k = 1,2,3$.</p>
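<p><em>Editorial check (my addition):</em> a few lines of numerics confirm that these cosines really are the three roots; the last assertion is Vieta's formula, since the $x^2$ coefficient of $p$ forces the roots to sum to $-1$.</p>

```python
import math

def p(x):
    return x**3 + x**2 - 2*x - 1

roots = [2 * math.cos(2 * k * math.pi / 7) for k in (1, 2, 3)]
for r in roots:
    assert abs(p(r)) < 1e-12          # each cosine is a root of p
assert abs(sum(roots) + 1) < 1e-12    # Vieta: roots sum to -1
```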
1,659,075
<p>In linear algebra, the Rank-Nullity theorem states that given a vector space $V$ and an $n\times n$ matrix $A$, $$\text{rank}(A) + \text{null}(A) = n$$ or that $$\text{dim(image}(A)) + \text{dim(ker}(A)) = \text{dim}(V).$$</p> <hr> <p>In abstract algebra, the Orbit-Stabilizer theorem states that given a group $G$ of order $n$, and an element $x$ of the set $G$ acts on, $$|\text{orb}(x)||\text{stab}(x)| = |G|.$$</p> <hr> <p>Other than the visual similarity of the expressions, is there some deeper, perhaps category-theoretic connection between these two theorems? Is there, perhaps, a functor from the category of groups $\text{Grp}$ to some category where linear transformations are morphisms? Am I even using the words functor and morphism correctly in this context?</p>
Nick
27,349
<p>As was pointed out in the comments by Clement Guerin and Berci above, the Rank-Nullity Theorem is more properly seen as an immediate consequence of the First Isomorphism Theorem, which says that $\mathrm{Im}(A) \cong V / \mathrm{Ker}(A)$. Taking dimensions of these spaces gives the statement of the Rank-Nullity Theorem, since the "rank" is the dimension of the image of $A$, and the "nullity" is the dimension of the kernel, and the dimension of $V / \mathrm{Ker}(A)$ is just the difference in dimensions $\dim(V) - \dim(\mathrm{Ker}(A))$.</p>
234,945
<p>Let $a \in \mathbb R$, what values of $t$ solve the equation $at + \sin(t) = 0$?</p>
preferred_anon
27,150
<p>Since we can write it as $at=-\sin(t)$, the roots lie where the line with gradient $a$ and the curve $-\sin(t)$ intersect. Note that $t=0$ is always a root.<br> If $|a|\ge1$, it is the only root.<br> For $-1&lt;a&lt;0$ there are finitely many solutions whose values can only be computed numerically. There will be no roots with $|t|&gt;T$, where $T=1/|a|$, since there $|at|&gt;1\ge|\sin(t)|$.<br> The closer $a$ is to $0$, the more roots there are. At $a=0$, there are infinitely many roots, at $t=n\pi$ for all $n \in \mathbb{Z}$.<br> If $0&lt;a\le A$, there are again finitely many solutions. ($A$ is the critical gradient at which the line is tangent to the curve, so that $a=A$ has exactly one positive solution.)<br> When $a&gt;A$, the only solution is $t=0$.<br> Notice that when $a=A$, the line is tangent to the curve, meaning that for some $t_{0}$, $At_{0}=-\sin(t_{0})$ and $A=-\cos(t_{0})$. Dividing these, we get $t_{0}=\tan(t_{0})$. We can solve numerically to get $t_{0}=4.493409...$ and thus $A=0.217234...$ </p>
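<p><em>Editorial check (my addition):</em> the tangency computation at the end can be reproduced with a plain bisection on $t=\tan(t)$ over $(\pi, 3\pi/2)$, where the relevant solution lies.</p>

```python
import math

def bisect(f, lo, hi, tol=1e-13):
    # plain bisection; assumes f changes sign exactly once on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t0 = bisect(lambda t: math.tan(t) - t, math.pi + 0.1, 3 * math.pi / 2 - 0.1)
A = -math.cos(t0)
assert abs(t0 - 4.493409) < 1e-5
assert abs(A - 0.217234) < 1e-5
assert abs(A * t0 + math.sin(t0)) < 1e-9   # tangency: A*t0 = -sin(t0)
```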
234,945
<p>Let $a \in \mathbb R$, what values of $t$ solve the equation $at + \sin(t) = 0$?</p>
Douglas B. Staple
65,886
<p>The function $$\operatorname{sinc}(x)\equiv\frac{\sin(x)}{x}$$ is called the <a href="https://en.wikipedia.org/wiki/Sinc_function" rel="nofollow">$\operatorname{sinc}$</a> function. Since $at+\sin(t)=0$ is equivalent to $\operatorname{sinc}(t)=-a$ (for $t\ne0$), you need the (multi-valued) inverse of this function, which is a transcendental function with no name. We could call it the "arcsinc" function. Defining $\operatorname{arcsinc}(x)$ by $$ \operatorname{sinc}(\operatorname{arcsinc}(x))\equiv x,$$ the solution to your equation is $$t=\operatorname{arcsinc}(-a).$$</p>
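<p><em>Editorial sketch (my addition; "arcsinc" is the answer's coinage, not a library routine):</em> one way to realize the principal branch numerically is to invert $\operatorname{sinc}$ on $(0,\pi]$, where it decreases from $1$ to $0$; this picks out the smallest positive root when $-1 &lt; a &lt; 0$.</p>

```python
import math

def sinc(t):
    return math.sin(t) / t if t else 1.0

def arcsinc(y, tol=1e-12):
    # inverts sinc on (0, pi], where it decreases from 1 to 0; needs 0 < y < 1
    lo, hi = 1e-9, math.pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sinc(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a = -0.5
t = arcsinc(-a)                      # smallest positive root of a*t + sin(t) = 0
assert abs(a * t + math.sin(t)) < 1e-9
```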
2,734,338
<p>I would like to show that $$\forall n\in\mathbb{N}^*, \quad \sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$$</p> <p>I'm interested in more ways of proofing this.</p> <p>My method :</p> <p>suppose that $\sqrt{\frac{n}{n+1}}\in \mathbb{Q}$ then there exist $(p,q)\in\mathbb{Z}\times \mathbb{N}^*$ such that $\sqrt{\frac{n}{n+1}}=\frac{p}{q}$ thus </p> <p>$$\dfrac{n}{n+1}=\dfrac{p^2}{q^2} \implies nq^2=(n+1)p^2 \implies n(q^2-p^2)=p^2$$ since $p\neq q\implies p^2\neq q^2$ then</p> <p>$$n=\dfrac{p^2}{(q^2-p^2)}$$</p> <p>since $n\in \mathbb{N}^*$ then $n\in \mathbb{Q}$</p> <ul> <li>I'm stuck here and I would like to see different ways to prove $\sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$</li> </ul>
Hw Chu
507,264
<p>If $\displaystyle\sqrt{\frac{n}{n+1}} \in \mathbb Q$, then $\displaystyle\sqrt{n(n+1)} = (n+1)\sqrt{\frac{n}{n+1}} \in \mathbb Q$, and since $n \in \mathbb N_{&gt;0}$, $\sqrt{n(n+1)} \in \mathbb Z$.</p> <p>But $n^2 &lt; n(n+1) &lt; (n+1)^2$, so $\sqrt{n(n+1)}$ lies strictly between the consecutive integers $n$ and $n+1$, a contradiction.</p> <hr> <p><em>Remark:</em> Actually the proof in your post is still on the right track. You can assume $\gcd(p,q) = 1$. Factorize</p> <p>$$ n = \frac{p^2}{(q-p)(q+p)}. $$</p> <p>Since $\gcd(p, q-p) = \gcd(p, q+p) = 1$, and $n$ is an integer, the denominator $(q-p)(q+p)$ must equal $1$, so $q-p = q+p = 1$, forcing $p = 0$, which is impossible if $n \in \mathbb N_{&gt;0}$.</p>
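<p><em>Editorial check (my addition):</em> the first argument rests on $n(n+1)$ never being a perfect square, since it is trapped strictly between $n^2$ and $(n+1)^2$. A brute-force confirmation over a range of $n$:</p>

```python
import math

for n in range(1, 10_000):
    m = n * (n + 1)
    assert n * n < m < (n + 1) * (n + 1)      # strictly between consecutive squares
    assert math.isqrt(m) ** 2 != m            # hence m is not a perfect square
```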
1,128,623
<p>Question: Let S be the set of sequences of $0$s and $1$s. For $x = (x_1, x_2, x_3, ...)$ and $y = (y_1, y_2, y_3, ...)$. Define</p> <p>$d(x,y)=\sum_{i=1}^\infty \dfrac{|x_i - y_i|}{2^i}$ </p> <p>Proof the infinite sum in the definition of $d(x,y)$ converges for all $x$ and $y$. </p> <p>Incomplete answer: Since the max of $d(x,y)$ happens when all elements of one of $x$ and $y$ is $1$ and the other is $0$, and the min of $d(x,y)$ happens when $x_n=y_n$ for all indices $n$, then </p> <p>$0=\sum_{i=1}^\infty \dfrac{0}{2^i}\leq d(x,y)=\sum_{i=1}^\infty \dfrac{|x_i - y_i|}{2^i}\leq \sum_{i=1}^\infty \dfrac{1}{2^i}=1 $</p> <p>But how to prove that $d(x,y)$ converges to some point when $x$ and $y$ have fixed arbitrary elements?</p> <p>Thank you. </p>
doeo
180,343
<p>You have the right idea of bounding the infinite sum. Now consider what the bounds would be if we started the series at the $n^{th}$ term. Since each $|x_i - y_i| \leq 1$, we get that</p> <p>$\sum_{i=n}^\infty \dfrac{|x_i-y_i|}{2^i}\leq \sum_{i=n}^\infty \dfrac{1}{2^i} = \dfrac{1}{2^{n-1}}\to0$</p> <p>as $n \to\infty$, so the partial sums form a Cauchy sequence, which shows that the series converges.</p>
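<p><em>Editorial check (my addition):</em> the tail bound is just the geometric tail $\sum_{i\ge n} 2^{-i} = 2^{-(n-1)}$, combined with $|x_i-y_i|\le 1$. A quick numerical confirmation:</p>

```python
def tail(n, terms=60):
    # partial tail of sum_{i >= n} 2^{-i}; 60 terms put the remainder below 1e-18
    return sum(2.0 ** -i for i in range(n, n + terms))

for n in range(1, 20):
    assert abs(tail(n) - 2.0 ** -(n - 1)) < 1e-12
```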
1,802,215
<p>I'd like to show that $Cov(aX+b, Y+Z)=aCov(X,Y)+aCov(X,Z)$.</p> <p>Therefore I use:</p> <ul> <li>$Cov(X,Y)=E(XY)-E(X)\cdot E(Y)$.</li> </ul> <p>So $Cov(aX+b, Y+Z)=$</p> <p>$=E[(aX+b)(Y+Z)]-E(aX+b)E(Y+Z)$</p> <p>$=E(aXY+aZ+bEY+bEZ)-[(aEX+b)(EY+EZ)]$</p> <p>$=aE(XY)+aEZ+bEY+bEZ-[aEXEY+aEXEZ+bEY+bEZ]$</p> <p>?? I can't use that $X, Y$ are independent; if they were, then $EXEY=E(XY)$.</p> <p>What I want to show in the end is:</p> <p>$... = aE(XY)-a(EX)(EY)+aE(XZ)-a(EX)(EY)$, right?</p>
Dark
208,508
<p>It is much faster to use the fact that $Cov$ is a bi-linear map.</p> <p>Hence $Cov(aX+b,Y+Z) = a Cov(X,Y+Z) + Cov(b,Y+Z)$ (linearity w.r.t first variable)</p> <p>$b$ is constant so $Cov(b,Y+Z)=0$.</p> <p>Then $Cov(aX+b,Y+Z) = aCov(X,Y+Z) = a(Cov(X,Y)+Cov(X,Z))$. (linearity w.r.t second variable)</p>
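<p><em>Editorial check (my addition; pure Python, no particular library assumed):</em> bilinearity holds for the empirical (sample) covariance as well, so the identity can be checked exactly, up to floating point, on simulated data.</p>

```python
import random

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / len(u)

random.seed(0)
n = 1000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(0, 1) for _ in range(n)]
Z = [random.gauss(0, 1) for _ in range(n)]
a, b = 3.0, 5.0

lhs = cov([a * x + b for x in X], [y + z for y, z in zip(Y, Z)])
rhs = a * cov(X, Y) + a * cov(X, Z)
assert abs(lhs - rhs) < 1e-9   # exact identity, so only float noise remains
```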
1,802,215
<p>I'd like to show that $Cov(aX+b, Y+Z)=aCov(X,Y)+aCov(X,Z)$.</p> <p>Therefore I use:</p> <ul> <li>$Cov(X,Y)=E(XY)-E(X)\cdot E(Y)$.</li> </ul> <p>So $Cov(aX+b, Y+Z)=$</p> <p>$=E[(aX+b)(Y+Z)]-E(aX+b)E(Y+Z)$</p> <p>$=E(aXY+aZ+bEY+bEZ)-[(aEX+b)(EY+EZ)]$</p> <p>$=aE(XY)+aEZ+bEY+bEZ-[aEXEY+aEXEZ+bEY+bEZ]$</p> <p>?? I can't use that $X, Y$ are independent; if they were, then $EXEY=E(XY)$.</p> <p>What I want to show in the end is:</p> <p>$... = aE(XY)-a(EX)(EY)+aE(XZ)-a(EX)(EY)$, right?</p>
grand_chat
215,011
<p>Your line $$E(aXY+aZ+bEY+bEZ)-[(aEX+b)(EY+EZ)]$$ has a mistake: it should be $$E(aXY+a\color{red}{XZ}+bY+bZ)-[(aEX+b)(EY+EZ)]\tag1$$ After you fix this, you should be able to collect terms properly: $$ \begin{align} (1)&amp;=aE(XY)+aE(XZ)+bEY+bEZ-[aEXEY+aEXEZ+bEY+bEZ]\\ &amp;=aE(XY)-aEXEY +aE(XZ)-aEXEZ \end{align} $$ since the terms $bEY$ and $bEZ$ drop out.</p>
257,121
<p>The question is very simple and I apologize for that, but I am not an expert of this kind of problem. Given the polynomial $$ P(x_1,\ldots,x_{2n})=x_1^2+\ldots+x_n^2-x_{n+1}^2-\ldots-x_{2n}^2,$$ I would like to know if there are non trivial integer roots $(y_1,\ldots, y_{2n})$ such that $$y_1+\cdots+y_{n}=y_{n+1}+\cdots+ y_{2n}.$$ With non trivial I mean the ones like $$y_1=y_{n+1},\ldots,y_{n}=y_{2n},$$ or their permutations.</p>
Bugs Bunny
5,301
<p>Yes, there are. For instance, (1,4,6,7,2,3,5,8)</p> <p>The general principle behind this solution is that $$ n^2+(n+1)^2=((n-1)^2+(n+2)^2)-4. $$ Combining two such collections on the opposite sides always gives you a solution.</p>
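<p><em>Editorial check (my addition):</em> both the specific example and the building-block identity are easy to verify. Here the first four entries are the $y_1,\ldots,y_n$ side and the last four the $y_{n+1},\ldots,y_{2n}$ side.</p>

```python
left, right = (1, 4, 6, 7), (2, 3, 5, 8)
assert sum(left) == sum(right)                                  # equal linear sums
assert sum(x * x for x in left) == sum(x * x for x in right)    # P vanishes

# the identity behind the construction
for n in range(-100, 100):
    assert n**2 + (n + 1)**2 == ((n - 1)**2 + (n + 2)**2) - 4
```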
4,305,538
<p>I am trying to compute the Fourier transform of <span class="math-container">$(x_{1}+ix_{2})^{-1}$</span> in <span class="math-container">$S'(\mathbb{R}^{2})$</span>. i.e. as a tempered distribution.</p> <p>It might be useful to note that for <span class="math-container">$\mu \in S'(\mathbb{R}^{2})$</span> and <span class="math-container">$\psi \in S(\mathbb{R}^{2})$</span> we define <span class="math-container">$\langle\hat{\mu},\psi\rangle=\langle\mu,\hat{\psi}\rangle$</span>.</p> <p>In my attempt, I noted that the definition of the Fourier transform in <span class="math-container">$S(\mathbb{R}^{2})$</span>:</p> <p><span class="math-container">$$ \hat{f}(\lambda)=\int_{\mathbb{R}^{2}}f(x)e^{-i \lambda \cdot x} dx, $$</span> gave us that <span class="math-container">$\widehat{(-i \partial_{1}+\partial_{2}) \delta}=x_{1}+ix_{2}$</span>. I'm not sure how to use this fact to help me complete the problem. Any help would be greatly appreciated.</p>
Ninad Munshi
698,724
<p>To calculate this Fourier transform, we will need another well known Fourier transform pair in <span class="math-container">$1$</span>D:</p> <p><span class="math-container">$$\int_{-\infty}^\infty e^{-|a||x|}e^{-i\lambda x}dx = \int_{-\infty}^0e^{(|a|-i\lambda)x}dx + \int_0^\infty e^{-(|a|+i\lambda)x}dx $$</span></p> <p><span class="math-container">$$= \frac{1}{|a|-i\lambda}+\frac{1}{|a|+i\lambda} = \frac{2|a|}{a^2+\lambda^2}$$</span></p> <p>which gives us the integral</p> <p><span class="math-container">$$\frac{1}{2\pi}\int_{-\infty}^\infty\frac{2|a|}{a^2+\lambda^2}e^{i\lambda x}d\lambda = e^{-|a||x|}$$</span></p> <p>for free. Note that since both functions are real and even, the difference between forward and reverse Fourier transforms is negligible.</p> <p>Back to our function, rewrite it as</p> <p><span class="math-container">$$\frac{1}{x_1+ix_2} = \frac{x_1-ix_2}{x_1^2+x_2^2}$$</span></p> <p>and let's compute the Fourier transform of the real part only</p> <p><span class="math-container">$$\int_{-\infty}^\infty\int_{-\infty}^\infty \frac{x_1}{x_1^2+x_2^2}e^{-i\lambda_2x_2}e^{-i\lambda_1x_1}dx_2dx_1 = \int_{-\infty}^\infty \frac{\pi x_1}{|x_1|}e^{-|x_1||\lambda_2|}e^{-i\lambda_1x_1}dx_1$$</span></p> <p><span class="math-container">$$= \frac{\pi}{|\lambda_2|+i\lambda_1}-\frac{\pi}{|\lambda_2|-i\lambda_1} = \frac{-2\pi i \lambda_1}{\lambda_1^2+\lambda_2^2}$$</span></p> <p>And without any extra work, the full Fourier transform is</p> <p><span class="math-container">$$-2\pi\frac{i\lambda_1+\lambda_2}{\lambda_1^2+\lambda_2^2} = \frac{2\pi}{i\lambda_1-\lambda_2}$$</span></p> <p>by linearity.</p>
2,508,381
<p>Starting from the idea that $$\sum_{n=1}^\infty n = -\frac{1}{12}$$ It's fairly natural to ask about the series of odd numbers $$\sum_{n=1}^{\infty} (2n - 1)$$ I worked this out in two different ways, and get two different answers. By my first method $$\sum_{n=1}^{\infty} (2n - 1) + 2\bigg( \sum_{n=1}^\infty n \bigg) = \sum_{n=1}^\infty n$$ $$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = - \sum_{n=1}^\infty n = \frac{1}{12}$$ But then by the second $$\sum_{n=1}^{\infty} (2n - 1) - \sum_{n=1}^\infty n = \sum_{n=1}^\infty n$$ $$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = 2 \sum_{n=1}^\infty n = - \frac{1}{6}$$ Is there any reason to prefer one of these answers over the other? Or is the sum over all odd numbers simply undefined? In which case, was there a way to tell that in advance?</p> <p>I'm also curious if this extends to other series of a similar form $$\sum_{n=1}^{\infty} (an + b)$$ Are such series undefined whenever $b \neq 0$?</p>
spaceisdarkgreen
397,125
<p>With the usual caveat that $$ \sum_{n=1}^\infty n \ne -\frac{1}{12}$$ we can do a similar zeta function regularization for the sum of odd integers. We start with the fact that $$ \sum_{n = 1}^\infty \frac{1}{(2n-1)^s} =(1-2^{-s})\zeta(s)$$ for $\Re(s) &gt; 1$ and then analytically continue to $s=-1$ to get $$ \sum_{n=1}^\infty(2n-1) "=" (1-2)\zeta(-1) = \frac{1}{12}$$</p> <p><strong>Edit</strong></p> <p>Zeta function regularization and Ramanujan summation get the same answer here. As for why your first method gets the "right answer" and the second doesn't, note that the first is argued by the exact same formal steps used to derive $$ \sum_{n=1}^\infty\frac{1}{(2n-1)^s} = (1-2^{-s})\zeta(s)$$ while the second uses both linearity and index shifting which are generally not preserved by the regularization methods.</p>
46,772
<p>I have the following to set into mathematica. </p> <p>Assume a stress(sigma) applied on a specimen which has a number of fibers within it. All fibers that have a strength less than the applied stress should fail. I set a cumulative distribution of the fiber strengths with a mean and standard deviation.</p> <p>I want to use this cumulative distribution to tell me how many fibers are less than the applied stress which have failed and store this number somewhere. This procedure should be repeated until all fibers have failed. so there is a convergence in the two variables. I am new in mathematica so sorry for a vague question. </p> <p>thanks,</p> <p>Nick </p>
george2079
2,079
<p>Here is how you do this analytically: (using user2790167's <code>pull</code> in pure function form)</p> <pre><code> nFibers = 50; mean = 100; stdev = 20; fibers = Sort@RandomVariate[NormalDistribution[mean, stdev], nFibers]; Show[{ Plot[ InverseCDF[ NormalDistribution[ mean, stdev] , nweak/nFibers] (nFibers - nweak + 1 ), {nweak, 1, nFibers - 1}, PlotStyle -&gt; {Red, Dashed}], ListPlot[ Rest@First@ Transpose@ NestList[{First@#[[2]] Length@#[[2]], Rest@#[[2]]} &amp;, {0, fibers}, nFibers], Joined -&gt; True, AxesLabel -&gt; {"Broken Fibers", "Net Load"}]}, PlotRange -&gt; All, AxesOrigin -&gt; {0, 0}] </code></pre> <p><img src="https://i.stack.imgur.com/IHslx.png" alt="enter image description here"></p> <p>Also it is useful to plot against the strength of the fiber that is about to fail: (which correlates to strain assuming equal stiffness of the fibers )</p> <pre><code> nFibers = 50; mean = 100; stdev = 20; fibers = Sort@RandomVariate[NormalDistribution[mean, stdev], nFibers]; Show[{ Plot[ s (nFibers (1 - CDF[ NormalDistribution[ mean, stdev] , s]) ) , {s, 0, 200}, PlotStyle -&gt; {Red, Dashed}], ListPlot[ First@ Transpose@ NestList[{First@#[[2]] {1, Length@#[[2]]}, Rest@#[[2]]} &amp;, {{0, 0}, fibers}, nFibers], Joined -&gt; True, AxesLabel -&gt; {"min surviving strength", "Net Load"}]}, PlotRange -&gt; All, AxesOrigin -&gt; {0, 0}, AspectRatio -&gt; 1/GoldenRatio] </code></pre> <p><img src="https://i.stack.imgur.com/ZfAxI.png" alt="enter image description here"></p> <p>for large nFibers the simulations converge to the analytic form.</p>
983,566
<p>Here is the problem in full:</p> <blockquote> <p>A heap has $x$ marbles, where $x$ is a positive integer. The following process is repeated until the heap is broken down into single marbles: choose a heap with more than 1 marble and form two non-empty heaps from it. One will contain $n$ marbles and the other $m$ marbles. Each time this is done, record the product $nm$. Use induction to show the sum of your recorded products is $\binom{x}{2}$ no matter what the sequence of dividing the heap is.</p> </blockquote> <p>So, using induction: We assume truth when there are $k$ marbles, where $k$ is a positive integer. We test the basis case $k = 1$, which is a one marble pile. $\binom{1}{2} = 0$, and splitting into a pile with 1 marble and a pile with 0 marbles also gives a product of 0, so the basis case is true. Next we demonstrate truth when $x=k+1$. We begin by making a pile of size 1 and a pile of size $k$. The product that we get from this split is $k$. The induction hypothesis says $k$ marbles ends up with a result of $\binom{k}{2}$. I can't prove it just yet (I need help here too) but I know $k + \binom{k}{2} = \binom{k+1}{2}$. Therefore the proof is shown.</p> <p>Now, I need help here. I don't think this is right because of the part, "no matter what the sequence of dividing the heap is." I think I violated this when I make a pile of size 1 specifically during the induction step. Could strong induction somehow help me here? I just can't really see how.</p> <p>Additionally, if anyone could point me to a proof for $k + \binom{k}{2} = \binom{k+1}{2}$ I would really appreciate being able to understand this one from an analytic standpoint too.</p> <p>Thanks.</p>
David Holden
79,543
<p>this has a computation-lite intuitive solution. think of the marbles as a squadron of $K$ tiny space-craft. suppose craft belonging to the same pile can communicate by walkie-talkie, but if any two are separated a long-range communications link is set up between them. </p> <p>so when a pile of $m+n$ craft is split into two piles of $m$ and $n$ respectively, this requires $mn$ new long-range communications links to be set up. by the stage when each craft occupies its own pile, all possible 2-way communication links will have been set up, numbering $\binom{K}{2}$ </p>
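<p><em>Editorial check (my addition):</em> the invariant is easy to test by simulation. However we choose which heap to split and where, the recorded products always total $\binom{x}{2}$.</p>

```python
import random
from math import comb

def split_products(x, rng):
    # split random heaps until all heaps are singletons, summing the products m*n
    heaps, total = [x], 0
    while any(h > 1 for h in heaps):
        i = rng.choice([j for j, h in enumerate(heaps) if h > 1])
        h = heaps.pop(i)
        m = rng.randint(1, h - 1)
        total += m * (h - m)
        heaps += [m, h - m]
    return total

rng = random.Random(0)
for x in range(1, 25):
    for _ in range(5):
        assert split_products(x, rng) == comb(x, 2)
```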
2,502,224
<p>I don't quite understand <a href="https://math.stackexchange.com/a/1866099/185631">this example given by Mike Haskel</a>. I want to find an example about</p> <p><span class="math-container">$$\operatorname{Hom}_R\left ( M ,\bigoplus_{i\in I} N_{i}\right )\not \cong\bigoplus_{i\in I} \operatorname{Hom}_R\left ( M ,N_{i}\right ).$$</span></p> <p>Mike Haskel's example is that:</p> <blockquote> <p>It's not true when <span class="math-container">$I$</span> is infinite, due exactly to the problem you encounter. Consider the case where <span class="math-container">$R = \mathbb{R}$</span>, <span class="math-container">$M$</span> is an infinite dimensional vector space, and each <span class="math-container">$N_i$</span> is <span class="math-container">$\mathbb{R}$</span>, with <span class="math-container">$I$</span> infinite. Convince yourself that <span class="math-container">$\operatorname{Hom}(M,\bigoplus_i N_i)$</span> corresponds to infinite matrices whose columns each have finitely many nonzero entries, while <span class="math-container">$\bigoplus_i \operatorname{Hom}(M,N_i)$</span> corresponds to infinite matrices with only a finite number of nonzero rows.</p> </blockquote> <p>It's not quite clear to me why &quot;<span class="math-container">$\bigoplus_i \operatorname{Hom}(M,N_i)$</span> corresponds to infinite matrices with only a finite number of nonzero rows&quot;.</p> <p>I don't know how to write a formal rigorous proof that <span class="math-container">$\operatorname{Hom}_R\left ( M ,\bigoplus_{i\in I} N_{i}\right )\not \cong\bigoplus_{i\in I} \operatorname{Hom}_R\left ( M ,N_{i}\right )$</span> in this case.</p>
No One
185,631
<p>I think they are probably isomorphic to each other. Let me know where I am wrong.</p> <p>The dimension of LHS is $|\bigoplus_{i=1}^{\infty}\mathbb R|^{\dim(M)}=|\mathbb R|^{\dim(M)}$, while the dimension of the RHS is $|\mathbb N||\dim(M^*)|=|\mathbb N||\mathbb R ^{\dim(M)}|=|\mathbb R ^{\dim(M)}|$.</p> <p>Reference:</p> <p><a href="https://mathoverflow.net/questions/168596/dim-homv-w">https://mathoverflow.net/questions/168596/dim-homv-w</a></p>
409,626
<p>I'm studying elementary group theory, and just seeing the ways in which groups break apart into simpler groups, specifically, a group can be broken up as the sort of product of any of its normal subgroups with the quotient group of that subgroup. So I wondered how you could do the inverse of that operation:</p> <ol> <li>Given two groups $A$ and $B$, construct a group $G$ which admits a normal subgroup $H$ isomorphic to $A$, such that $G/H$ is isomorphic to $B$.</li> </ol> <p>I think I have a proof that the cartesian product $A \times B$ (with the usual component-wise operation) verifies (1), but since I'm just starting out I'm not totally confident in my construction. Furthermore, if I'm right, is this the <em>only</em> group up to isomorphism satisfying (1)?</p> <p>Edit: I just noticed <a href="https://math.stackexchange.com/questions/220487/proving-the-direct-product-d-of-two-groups-g-h-has-a-normal-subgroup-n-such-th?rq=1">Proving the direct product D of two groups G &amp; H has a normal subgroup N such that N isomorphic to G and D/N isomorphic to H</a>, which seems to positively answer my question. In that case I'd like to draw attention to the follow up question above (uniqueness up to isomorphism).</p>
Najib Idrissi
10,014
<p>Yes, the direct product $A \times B$ satisfies the property, as you've noticed. But it's not unique up to isomorphism. For example, the <a href="https://en.wikipedia.org/wiki/Dihedral_group">dihedral group</a> $D_n$ has a normal subgroup $H \simeq \mathbb Z/n \mathbb Z$, with $G/H \simeq \mathbb Z/2 \mathbb Z$, but $D_n$ is not isomorphic (for $n &gt; 1$) to $\mathbb Z/2\mathbb Z \times \mathbb Z/n\mathbb Z$.</p> <p>More generally, you're asking whether, given an <a href="https://en.wikipedia.org/wiki/Exact_sequence">exact sequence</a> of the form $1 \to N \to G \to H \to 1$, is $G$ isomorphic to $N \times H$? The answer is no, as I've shown. Many counterexamples are provided by <a href="https://en.wikipedia.org/wiki/Semi-direct_product">semidirect products</a> (something you'll learn soon enough if you're studying elementary group theory).</p> <p>For abelian groups, the concept of <a href="https://en.wikipedia.org/wiki/Ext_functor">Ext functor</a> allows one to classify all such extensions (given abelian groups $A,B$, "how many" groups $G$ are there with an exact sequence $0 \to B \to G \to A \to 0$ is given by $\mathrm{Ext}(A,B)$), but this is much more advanced.</p>
2,003
<p>I use some custom shortcut keys in <code>KeyEventTranslations.tr</code>. One is for the <code>Delete All Output</code> function: </p> <pre><code>Item[KeyEvent["w", Modifiers -&gt; {Control}], FrontEnd`FrontEndExecute[FrontEnd`FrontEndToken["DeleteGeneratedCells"]]] </code></pre> <p>or simply:</p> <pre><code>Item[KeyEvent["w", Modifiers -&gt; {Control}], "DeleteGeneratedCells"] </code></pre> <p>This works as expected, putting up the dialog: "Do you really want to delete all the output cells in the notebook?". Is there any way to set up <code>KeyEventTranslations.tr</code> that when I hit <kbd>Ctrl</kbd>+<kbd>w</kbd> the dialog is automatically acknowledged and I don't have to hit <kbd>Enter</kbd>? The same goes for the <code>Quit kernel</code> function, that also puts up a dialog.</p>
Szabolcs
12
<p>Try using this:</p> <pre><code>FrontEndExecute[ {FrontEnd`NotebookFind[FrontEnd`SelectedNotebook[], "Output", All, CellStyle, AutoScroll-&gt;False], FrontEnd`FrontEndToken["Clear"]}] </code></pre> <p>(Untested in <code>KeyEventTranslations.tr</code>, but works as a button!)</p> <hr> <p>Regarding automating confirming the dialog---I don't think it is possible from within Mathematica. I'd like to note though that you can press <kbd>Space</kbd> to confirm the dialog (instead of using <kbd>Enter</kbd>), which is considerably easier for me due to the size and position of the key.</p> <hr> <p><strong>Update:</strong> As Albert Retey pointed out in a comment, this will only remove output cells, but not <code>"Message"</code> or <code>"Print"</code> cells. Those need to be added separately to the command, and this is still a workaround to finding all <code>GeneratedCell</code>s.</p>
2,919,561
<p>What is the following limit ? How do I solve it ? $$\lim \limits_{x\to0}\frac{6\sin x}{x-3\tan x}$$</p>
Matheus Andrade
508,844
<p>Since the limit has the indeterminate form $\frac{0}{0}$, apply l'Hôpital's rule (differentiate numerator and denominator): $$\begin{align*} \lim \limits_{x\to0}\frac{6\sin x}{x-3\tan x} &amp;= \lim \limits_{x\to0} \frac{6\cos x}{1 - 3\sec^2 x} \\ &amp;=\frac{6}{1-3} \\ &amp;= -3\end{align*}$$</p>
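<p><em>Editorial check (my addition):</em> a numerical sanity check agrees with the limit; expanding the Taylor series suggests the error behaves roughly like $2x^2$ near $0$, which the quadratic tolerance below reflects.</p>

```python
import math

for x in (1e-2, 1e-3, 1e-4):
    val = 6 * math.sin(x) / (x - 3 * math.tan(x))
    assert abs(val + 3) < 3 * x * x   # error shrinks quadratically toward the limit -3
```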
629,989
<p>Given function sequence $\{f_n(x)\}^\infty$ defined as $f_n(x) = \frac{nx}{2 + n + x}. (0 \le x \le 1)$</p> <p>I need to find the limit function and whether it converges uniformly or not uniformly.</p> <p>I found that the limit is:</p> <p>$$\lim_{n \to \infty} \frac{nx}{2 + n + x} = \lim_{n \to \infty} \frac{x}{\frac{2}{n} + \frac{n}{n} + \frac{x}{n}} = \lim_{n \to \infty} \frac{x}{0+1+0} = \lim_{n \to \infty} \frac{x}{1} = \lim_{n \to \infty} x = x.$$ </p> <p>The book confirms that,</p> <p>but it says that it converges uniformly to $x$.</p> <p>Why? the limit is dependent with $x$, so how does the convergence is uniformly?</p> <p>Doesn't the definition says the a sequence of function converge uniformly to $f(x)$ if it dependent only on $\varepsilon &gt; 0$?</p> <p>Thanks in advance.</p>
Clarinetist
81,560
<p>Julián explains it very well in the comments, but I'll expand the discussion a bit. A sequence $\{f_{n}\}$ such that $f_{n}: E\subseteq \mathbb{R} \to \mathbb{R}$ for all $n$ converges uniformly to $f$ if for all $\epsilon &gt; 0$ there is an $N(\epsilon) \in \mathbb{N}$ such that for all $n \geq N(\epsilon)$, $|f_{n}(x) - f(x)| &lt; \epsilon$ for all $x \in E$. <strong>It is not</strong> $f$ which is dependent on $\epsilon$; rather, it is $N$. In order for you to see that a particular sequence of functions is uniformly convergent, you would need to do more work than to simply take $\lim\limits_{n \to \infty}f_{n}(x)$ (unless there's some trick that I don't know about). </p> <p>Let's prove that $\left\{\dfrac{nx}{2+n+x}\right\}^{\infty}_{1}$ converges uniformly to $f(x) = x$ for all $x \in [0, 1]$. </p> <p><strong>Proof</strong>. Let $\epsilon &gt; 0$ be given. Choose $N:= \lceil 3/\epsilon \rceil$. Then for all $n \geq N$, </p> <p>$\begin{align}\left|\dfrac{nx}{2+n+x}-x\right| &amp;= \left|\dfrac{nx-x(2+n+x)}{2+n+x}\right| \\ &amp;= \left|\dfrac{-2x-x^{2}}{2+n+x}\right| \\ &amp;= \left|\dfrac{2x+x^{2}}{2+n+x}\right| \\ &amp;\leq \left|\dfrac{2x+x^{2}}{2+n}\right| \text{ since } x \in [0, 1] \\ &amp;\leq \left|\dfrac{3}{2+n}\right| \text{ since } x \in [0, 1] \\ &amp;&lt; \left|\dfrac{3}{n}\right| \\ &amp;= \dfrac{3}{n} \\ &amp;\leq \dfrac{3}{N} \leq \dfrac{3}{\left(\dfrac{3}{\epsilon}\right)} = \epsilon\end{align}$ </p> <p>for all $x \in [0, 1]$. Hence $\{f_{n}\} \to f$ uniformly for all $x \in [0, 1]$. The point is the $N$ that we chose doesn't depend on $x$. (If it did, we would only have pointwise convergence.) </p>
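<p><em>Editorial check (my addition):</em> the error $|f_n(x)-x| = \frac{x(2+x)}{2+n+x}$ is increasing on $[0,1]$, so the worst case is at $x=1$, giving $\sup_{[0,1]}|f_n-f| = \frac{3}{3+n} \le \frac{3}{n}$. A grid check:</p>

```python
def err(n, x):
    return abs(n * x / (2 + n + x) - x)

for n in (1, 10, 100, 1000):
    sup = max(err(n, i / 1000) for i in range(1001))
    assert abs(sup - 3 / (3 + n)) < 1e-12   # supremum attained at x = 1
    assert sup <= 3 / n                     # the bound used in the proof
```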
3,896,314
<p>I am trying to show that <span class="math-container">$\log 15$</span> and <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> is irrational.</p> <p>For <span class="math-container">$\log 15$</span> I feel like I have no issues showing this is irrational.</p> <p>By contradiction assume <span class="math-container">$\log 15$</span> is rational then</p> <p><span class="math-container">$\log 15 = \frac{m}{n}$</span> where <span class="math-container">$m,n\in\mathbb{N}$</span> and <span class="math-container">$n\not= 0$</span></p> <p><span class="math-container">$15 = 10^{\frac{m}{n}}\Rightarrow 15^n = 10^m \Rightarrow 3^n5^n = 5^m2^m$</span></p> <p>The left side of the equation has an odd multiple of <span class="math-container">$5$</span> but the right side has an even multiple of <span class="math-container">$5$</span> and we have arrived at a contradiction.</p> <p>But for <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> I do not even know where to begin, I am assuming I cannot use the fact that <span class="math-container">$\log 15 = \log 3$</span> + <span class="math-container">$\log 5$</span> as that would be trivial. Should I show they are individually irrational? But that leads me to my next thought as the sum of two irrational numbers is not always irrational (even though in this case I know the sum is irrational). And then would that lead me to breaking it into a case by case where I assume one is rational and the other is not, etc. I am still an undergrad so I do not know much advanced number theory so I would appreciate a more intuitive approach if possible.</p>
Community
-1
<p>The sum of two irrational numbers can be rational. Take <span class="math-container">$a$</span> irrational, and <span class="math-container">$1-a$</span>, which is also irrational (in fact <span class="math-container">$-a$</span> is even simpler, but I guess that this would not satisfy the readers). So there is no way to prove that the sum of two general irrationals is irrational, as this is false.</p> <p>If you are not allowed to use that <span class="math-container">$\log 3+\log 5=\log 15$</span> and only that <span class="math-container">$\log 3$</span> and <span class="math-container">$\log 5$</span> are irrational, I don't see a way to obtain a proof. You could claim that these irrationals are &quot;special&quot; in that they are logarithms of integers, but this brings you back to the sum/product property. If they have no special property, then no proof.</p>
1,694,495
<p>Graphically, I am searching for something like this:</p> <p><a href="https://i.stack.imgur.com/Rskpk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rskpk.png" alt="enter image description here"></a></p> <p>The only additional requirement would be that the elements are defined by a closed formula or "simple" recursion, i.e. no definition by cases (Fallunterscheidung) and such.</p>
Oscar Lanzi
248,217
<p>This is not exactly the same thing, but consider the "Tower of Hanoi" sequence:</p> <p>1 2 1 3 1 2 1 4 1 2 1 3 1 ...</p> <p>$a_n = k$ when $n$ is an odd number times $2^{k-1}$</p> <p>We need more than 1000 terms before we see any term greater than 10 (in the Tower of Hanoi puzzle that means if you have 11 layers you need over 1000 moves to expose the bottom disk). Yet the sequence is ultimately unbounded.</p>
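The sequence described above is easy to generate programmatically. A quick Python sketch (the bit trick <code>n &amp; -n</code> isolates the largest power of $2$ dividing $n$, which is exactly what "$n$ is an odd number times $2^{k-1}$" asks for):

```python
def ruler(n: int) -> int:
    # a_n = k where n = (odd number) * 2^(k-1), i.e. one plus the
    # exponent of the largest power of 2 dividing n
    return (n & -n).bit_length()

print([ruler(n) for n in range(1, 14)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1]

# first index where a term exceeds 10: n = 1024 = 2^10
first_big = next(n for n in range(1, 2000) if ruler(n) > 10)
print(first_big)
```

Indeed more than 1000 terms pass before a value greater than 10 appears, matching the remark about an 11-layer puzzle.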
1,311,466
<p>My concept of real numbers is not very clear. Please also explain the logic behind the question. The expression is true for 19; is it true for all the multiples? </p>
gt6989b
16,192
<p>In $(3n)! = 1 \times 2 \times \ldots \times 3n$ there are $n$ numbers divisible by 3 directly ($3, 6, \ldots, 3n$) and at least $n$ even numbers, so $(3n)!$ is divisible by $(3!)^n = 3^n \cdot 2^n$.</p>
1,311,466
<p>My concept of real numbers is not very clear. Please also explain the logic behind the question. The expression is true for 19; is it true for all the multiples? </p>
alkabary
96,332
<p>Here is a proof by induction.</p> <p>Base case $n=0$: we have $$\frac{(3 \times 0)!}{(3!)^0} = \frac{0!}{6^0} = \frac{1}{1} = 1$$ because $0! = 1$ and $6^0 = 1$, and $1$ is an integral number, so the base case works.</p> <p>Now assume it works for an integer $k \geq 0$; then we need to prove it for $k+1$. Here is how.</p> <p>Since it works for $k$, we have $$\frac{(3k)!}{(3!)^k} = m$$ for some integral number $m$. Now consider $$\frac{(3(k+1))!}{(3!)^{k+1}}$$ which is equal to $$\color{blue}{\frac{(3k + 3)!}{(3!)^k \times 3!}}$$</p> <p>Now $(3k+3)! = (3k +3)(3k +2)(3k+1)(3k)!$, so the blue fraction is equal to $$\frac{ (3k +3)(3k +2)(3k+1)(3k)!}{(3!)^k \times 3!}$$ which is equal to $$\frac{ (3k +3)(3k +2)(3k+1)}{3!} \times \frac{(3k)!}{(3!)^k}$$ and remember that $$\frac{(3k)!}{(3!)^k} = m$$ and so we have $$\frac{ (3k +3)(3k +2)(3k+1)}{3!} \times m.$$ Now $(3k +3)(3k +2)(3k+1)$ is a product of three consecutive integers and hence divisible by $3! = 6$ ($\color{red}{I \text{ will leave it to you to prove it!}}$), say $(3k +3)(3k +2)(3k+1) = 6a$ for some integer $a$. Then $$\frac{ (3k +3)(3k +2)(3k+1)}{3!} \times \frac{(3k)!}{(3!)^k} = am$$ which is an integral number as well, hence done!</p>
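The claim that $(3n)!/(3!)^n$ is always an integer can also be sanity-checked numerically; a small Python check using `math.factorial`:

```python
from math import factorial

def is_integral(n: int) -> bool:
    # (3n)! should be divisible by (3!)^n = 6^n for every n >= 0
    return factorial(3 * n) % 6 ** n == 0

print(all(is_integral(n) for n in range(60)))  # True
```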
1,130,855
<p>I've searched this website and while there are a few questions similar to mine, I couldn't find what I was looking for/a specific method for what I want to do.</p> <p>I want to understand how one would prove that the remainder of $5^{336}$ by $23$ is $8$, or in other words, $$5^{336}\equiv 8 \pmod{23}.$$</p> <p>Can anyone help me understand how one would approach a problem like this?</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\ {\rm mod}\ 23\!:\ \underbrace{\color{#c00}{5^{\large 22}}\equiv\color{#c00}1}_{\rm little\ Fermat}\Rightarrow\, 5^{\large 336}\equiv 5^{\large 6+\color{#c00}{22}(15)}\equiv \color{#0a0}{5}^{\large \color{#0a0}{2}\cdot 3}\,\color{#c00}{(5^{\large 22})}^{\large 15}\equiv \color{#0a0}2^{\large 3} \color{#c00}{(1)}^{\large 15}\equiv 8 $</p> <p>where we have used the fundamental <a href="https://math.stackexchange.com/a/879262/242">Congruence Product and Power Rules.</a></p> <p><strong>Remark</strong> $ $ Generally, if $\,a^e\equiv 1\pmod m\,$ then exponents on $\,a\,$ can be considered mod $\,e,\,$ i.e. $\ a^{\large j}\equiv a^{\large k}\pmod m\,\ $ if $\,\ j\equiv k\pmod e.\ $ This may be proved exactly as above, i.e.</p> <p>$$ \begin{array}{}\color{#c00}{a^{\large e}\equiv 1}\\ j = k\! +\! en\end{array}\Rightarrow\,\ a^{\large j}\equiv a^{\large k+en}\equiv a^{\large k}\color{#c00}{(a^{\large e})}^{\large n}\equiv a^{\large k}\color{#c00}{(1)}^{\large n}\equiv a^k\!\!\pmod m\qquad $$</p>
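The reduction in the hint is easy to confirm with Python's built-in three-argument `pow`, which performs modular exponentiation:

```python
# little Fermat: 5^22 ≡ 1 (mod 23), so the exponent can be reduced mod 22
assert 336 % 22 == 6
assert pow(5, 6, 23) == 8      # 5^6 = (5^2)^3 ≡ 2^3 = 8 (mod 23)
assert pow(5, 336, 23) == 8    # the full computation agrees
print("5^336 ≡ 8 (mod 23)")
```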
2,559,560
<blockquote> <p>Show that there are two distinct positive integers $a,b$ such that: $1394\mid 2^a-2^b$</p> </blockquote> <p>I'm sure the pigeonhole principle applies here, but I don't recognize the holes. Another problem statement is: show that there are two positive integers $a,b$ such that: $$2^a\equiv 2^b\pmod {1394}$$<br> Of course we have $1394$ residue classes mod $1394$, but what are the pigeons?</p>
Steven Alexis Gregory
75,410
<p>Note $1394=2 \times 17 \times 41$. </p> <p>Since $\varphi(17 \times 41) = 16\times 40 = 640$, </p> <p>then $2^{640} \equiv 1 \pmod{697}$.</p> <p>Hence $2^{641} \equiv 2 \pmod{1394}$.</p> <p>In other words, $1394 \mid 2^{641}-2^1$.</p> <p>Note. Actually, we can do better. Checking the divisors of $640$, we find that $2^{40} \equiv 1 \pmod{697}$. So $1394 \mid 2^{41}-2^1$.</p>
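Both claims can be confirmed in a few lines of Python, and a pigeonhole-style scan over the powers of $2$ modulo $1394$ locates the first repeated residue (the scan is my own illustration of the pigeonhole idea, not part of the answer):

```python
m = 1394
assert pow(2, 641, m) == 2      # via phi(697) = 640
assert pow(2, 41, m) == 2       # the sharper claim

# pigeonhole scan: iterate 2^k mod 1394 until some residue repeats
seen, r, k = {}, 1, 0
while True:
    k += 1
    r = r * 2 % m
    if r in seen:
        break
    seen[r] = k
print(seen[r], k)  # the residue of 2^1 first repeats at 2^41
```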
1,250,132
<p>Below is part of a solution to a critical points question. I'm just not sure how the equation on the left becomes the equation on the right. Could someone please show me the steps in-between? Thanks.</p> <blockquote> <p>$$\frac{-1}{x^2}+2x=0 \implies 2x^3-1=0$$</p> </blockquote>
Community
-1
<p>Just multiply both sides by $x^2$ to get $-1+2x^3=0$; then you get $2x^3-1=0$ by commutativity. </p>
1,014,476
<p>I pick 6 cards from a set of 13 (ace-king). If ace = 1 and jack,queen,king = 10 what is the probability of the sum of the cards being a multiple of 6? </p> <p><strong>Tried so far:</strong> I split the numbers into sets with values: 6n, 6n+1, 6n+2, 6n+3 like so:</p> <p>{6}{1,7}{2,8}{3,9}{4,10,j,q,k}{5}</p> <p>and then grouped the combinations that added to a multiple of 6:</p> <p>(5c4)(1c1)(2c1) + (2c2)(2c2)(2c2) + (5c4)(2c2) + (5c2)(1c1)(1c1)(2c1)(2c1) / (13c6)</p> <p>= 10/1716</p> <p>I am almost certain I am missing combinations but am having trouble finding out which.</p>
user145600
145,600
<p>Here is a partial answer to your question. You want the sum of two cards to be 6, 12 or 18.</p> <p>For the sum of 6, the possibilities are 1+5, 2+4, and 3+3</p> <p>The probability of an ace and a 5 is 1/13 times 1/12.</p> <p>Same with 2+4, to wit, 1/(13*12) = 1/156.</p> <p>For two threes, we have 1/13 times 3/51.</p> <p>The sum of these is 2/156 + 3/(13*51)</p> <p>You still need to do 12 and 18</p>
1,014,476
<p>I pick 6 cards from a set of 13 (ace-king). If ace = 1 and jack,queen,king = 10 what is the probability of the sum of the cards being a multiple of 6? </p> <p><strong>Tried so far:</strong> I split the numbers into sets with values: 6n, 6n+1, 6n+2, 6n+3 like so:</p> <p>{6}{1,7}{2,8}{3,9}{4,10,j,q,k}{5}</p> <p>and then grouped the combinations that added to a multiple of 6:</p> <p>(5c4)(1c1)(2c1) + (2c2)(2c2)(2c2) + (5c4)(2c2) + (5c2)(1c1)(1c1)(2c1)(2c1) / (13c6)</p> <p>= 10/1716</p> <p>I am almost certain I am missing combinations but am having trouble finding out which.</p>
Steve Kass
60,500
<p>You can “cast out” any $6$s from the values of the cards, since it will not affect whether the sum is a multiple of $6$. So you have 13 cards valued 1,2,3,4,5,0,1,2,3,4,4,4, and 4. How many ways can you make a multiple of 6 from the sum of 6 of these numbers? Consider the number of 4s used. If none, you can make a sum of 12 (1,2,3,1,2,3) or (0,1,1,2,3,5). With one 4, you can enumerate the possibilities, and so on. </p>
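Both the "casting out 6s" step and the final probability can be brute-forced in Python over all $\binom{13}{6} = 1716$ hands (a verification sketch, not part of the answer):

```python
from itertools import combinations
from math import comb

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]  # A..10, J, Q, K
reduced = [v % 6 for v in values]                      # 6s cast out

hands = list(combinations(range(13), 6))               # choose card positions
assert len(hands) == comb(13, 6) == 1716

hits = sum(sum(values[i] for i in h) % 6 == 0 for h in hands)
hits_reduced = sum(sum(reduced[i] for i in h) % 6 == 0 for h in hands)
assert hits == hits_reduced    # casting out 6s changes nothing

print(hits, "/ 1716")
```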
4,050,307
<p>You have a black box function to which you can give real number inputs and from which you can receive real number outputs. <strong>How would you test whether it is likely to be a polynomial?</strong></p> <p>One expensive idea is to use finite differences:</p> <ol> <li>Choose a maximum degree <em>n</em> of the &quot;polynomial&quot; you are testing.</li> <li>Choose a consecutive sequence with random step size, and evaluate the function there to get an output sequence. E.g., <span class="math-container">$$[ 2, 2.3, 2.6, 2.9,\dots] \to [ 4.81, 5.02, 5.05, 4.90,\dots]$$</span></li> <li>Using the output sequence as S[0], define S[n] so that its k^th entry S[n][k] = S[n-1][k+1]-S[n-1][k]. E.g. S[1] = [5.02-4.81,5.05-5.02,4.90-5.05,...] = [0.21,0.03,-0.15,...]</li> <li>If the function is a polynomial (of degree at most <em>n</em>), then the sequence S[n+1] should be all zeros.</li> </ol> <p>Some issues about programming this method:</p> <ul> <li>Would be <a href="https://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences" rel="nofollow noreferrer">expensive for large <em>n</em></a></li> <li>If S[0] has <em>large</em> values, computer arithmetic will produce bad results for S[1] and beyond.</li> </ul>
Petrus1904
808,320
<p>I think the largest problem here is that, thanks to things like Taylor series, many functions can eventually be closely approximated by polynomials. As such, even a finite range of measured outputs of a black box representing a sine function might look like a polynomial. Measuring a large range is computationally difficult, as higher powers tend to yield very large values for increasing inputs. Anyway, this is what I would do: suppose you have an input set <span class="math-container">$x\in \mathbb{R}^{n+1\times1}$</span> and the corresponding output set <span class="math-container">$S\in \mathbb{R}^{n+1\times1}$</span>, where <span class="math-container">$n$</span> is the estimated polynomial order (or better, larger than it). Then the polynomial itself can be estimated as follows:</p> <ol> <li>Create a square matrix <span class="math-container">$X = \begin{bmatrix}x^0 &amp; x^1 &amp; x^2 &amp; ... &amp; x^{n}\end{bmatrix} \in \mathbb{R}^{n+1\times n+1}$</span> containing elementwise numerical powers of the input vector.</li> <li>Suppose we construct <span class="math-container">$A = \begin{bmatrix} a_0 &amp; a_1 &amp; ... &amp; a_{n} \end{bmatrix}^T$</span> representing the black-box polynomial parameters such that <span class="math-container">$f(x) = a_0 + a_1x+a_2x^2 + ... + a_{n}x^{n}$</span>. Then <span class="math-container">$f(x) = XA$</span>.</li> <li>Use the output vector <span class="math-container">$S$</span> to compute <span class="math-container">$A$</span> as follows: <span class="math-container">$A = X^{-1}S$</span>, provided that <span class="math-container">$X$</span> is invertible (which is the case whenever all entries of <span class="math-container">$x$</span> are distinct).</li> </ol> <p>If the black box does represent a perfect polynomial, the resulting <span class="math-container">$A$</span> vector should be unique, and every entry of order higher than this polynomial should be a flat zero (or within floating-point precision of it). The uniqueness can be verified by plugging in various different input sets.</p> <p>This method can become just as troublesome for very large orders <span class="math-container">$n$</span>, and can yield just an estimated Taylor series of the function (at least, I tested it with a sine function and it did return the Taylor series expansion of it, where the even powers had coefficients smaller than <span class="math-container">$10^{-7}$</span>).</p>
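The steps above can be sketched in Python with NumPy (the function names, sample interval, and random seed are my own test choices, not part of the answer):

```python
import numpy as np

def fit_coeffs(f, n, seed=0):
    # sample n+1 distinct inputs and solve the (n+1)x(n+1) Vandermonde system
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, n + 1)
    X = np.vander(x, n + 1, increasing=True)   # columns x^0 .. x^n
    return np.linalg.solve(X, f(x))            # A = X^{-1} S

poly = fit_coeffs(lambda x: 2 * x**3 - 1, n=6)
sine = fit_coeffs(np.sin, n=6)
print(np.round(poly, 6))   # entries above degree 3 are ~0
print(np.round(sine, 6))   # no clean cutoff: sin is not a polynomial
```

For the true cubic, every coefficient above degree 3 collapses to round-off noise; for the sine, the degree-5 coefficient stays near its Taylor value $1/120$, so the cutoff test fails as expected.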
2,359,455
<blockquote> <p>Let <span class="math-container">$M$</span> be the largest subset of <span class="math-container">$\{1,\dots,n\}$</span> such that for each <span class="math-container">$x\in M$</span>, <span class="math-container">$x$</span> divides at most one other element in <span class="math-container">$M$</span>. Prove that<span class="math-container">$$ |M|\leq \left\lceil \frac{3n}4\right\rceil. $$</span></p> </blockquote> <p><strong>My attempt:</strong></p> <p>Let <span class="math-container">$M_1 = \{x\in M;\;\exists !y\in M:\; x|y \} $</span> and <span class="math-container">$M_0 = M\setminus M_1$</span>. Obviously both <span class="math-container">$M_0$</span> and <span class="math-container">$M_1$</span> are an antichains in <span class="math-container">$M$</span> and they make a partition of <span class="math-container">$M$</span>. And this is all I can find. I was thinking about Dilworth theorem but... Also, I was thinking, what if I add a new element <span class="math-container">$k\in M^C$</span> to <span class="math-container">$M$</span>. Then <span class="math-container">$M\cup \{k\}$</span> is no more ''good'' set...</p>
Asinomás
33,907
<p>Divide the integers $1,\dots,n$ into $\lceil\frac{n}{2}\rceil$ groups, depending on the largest odd divisor. Notice we can take at most two elements from each group, while a group of size $1$ contributes at most one; we do the analysis mod $4$.</p> <p>$n=4k:$</p> <p>number of groups: $2k$, number of groups of size $1: k$, upper bound: $3k=\lceil\frac{12k}{4}\rceil=\lceil\frac{3n}{4}\rceil$</p> <p>$n=4k+1:$</p> <p>number of groups: $2k+1$, number of groups of size $1: k+1$, upper bound: $3k+1=\lceil\frac{12k+3}{4}\rceil=\lceil\frac{3n}{4}\rceil$</p> <p>$n=4k+2:$</p> <p>number of groups: $2k+1$, number of groups of size $1: k$, upper bound: $3k+2=\lceil\frac{12k+6}{4}\rceil=\lceil\frac{3n}{4}\rceil$</p> <p>$n=4k+3:$</p> <p>number of groups: $2k+2$, number of groups of size $1: k+1$, upper bound: $3k+3=\lceil\frac{12k+12}{4}\rceil=\lceil\frac{3n}{4}\rceil$</p>
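For small $n$ the bound can also be verified exhaustively in Python (a brute-force sanity check, not part of the proof):

```python
from itertools import combinations
from math import ceil

def valid(M):
    # each x in M divides at most one *other* element of M
    return all(sum(y % x == 0 for y in M if y != x) <= 1 for x in M)

def largest(n):
    # size of the largest valid subset of {1, ..., n}
    for size in range(n, 0, -1):
        if any(valid(c) for c in combinations(range(1, n + 1), size)):
            return size
    return 0

for n in range(1, 11):
    assert largest(n) <= ceil(3 * n / 4)
print("bound holds for n = 1..10")
```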
2,061,363
<p>I have the complex power series $ \sum_{k=1}^{\infty}(\frac{z^4}{4} - \frac{\pi}{7})^k$. </p> <p>Through algebraic manipulation I obtain $ \sum_{k=1}^{\infty}(\frac{1}{4})^k(z^4 - \frac{4}{7}\pi)^k$. I now argue that this is a power series around $\frac{4}{7}\pi$ with radius of convergence R = 4, using the euler root test, asserting all the while that the power 4 in the argument z doesn't affect those two quantities. However, I'm not sure how to prove or even heuristically show that last bit. In fact, I'm not even sure I'm right in asserting that the power doesn't matter, it's really more of a hunch. Does anyone know how to handle this scenario? </p>
epi163sqrt
132,007
<blockquote> <p><em>Hint:</em> This is a geometric series \begin{align*} \sum_{k=1}^\infty\left(\frac{z^4}{4}-\frac{\pi}{7}\right)^k =\frac{1}{1-\left(\frac{z^4}{4}-\frac{\pi}{7}\right)}-1 \end{align*}</p> <p>converging for $\left|\frac{z^4}{4}-\frac{\pi}{7}\right|&lt;1$</p> </blockquote>
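A quick numerical spot check of the closed form (the sample point is chosen arbitrarily inside the region of convergence):

```python
from math import pi

z = 0.9 + 0.4j
r = z**4 / 4 - pi / 7            # common ratio of the geometric series
assert abs(r) < 1                # inside the region of convergence

partial = sum(r**k for k in range(1, 200))
closed = 1 / (1 - r) - 1         # equivalently r / (1 - r)
print(abs(partial - closed))     # agreement to machine precision
```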
1,809,022
<p>I've been working on some very basic differential equations, but I came to a problem where I need to figure out the behavior of $y(t)$ as $t \rightarrow \infty$ Given that $$\frac{dy}{dt} = \frac{3t}{1+2e^{y}}.$$ In this case, it was very apparent to me that I would not be able to solve for a simple solution of $y(t)$ due to the equation $1+2e^y$ in the denominator. However, solving for this was rather straightforward: $$\frac{dy}{dt}(1+2e^y) = 3t$$ $$\implies \frac{dy}{dt}+2e^y\frac{dy}{dt} = 3t$$ $$\implies \int \frac{dy}{dt}+2e^y\frac{dy}{dt} dt= \int 3t dt$$ $$\implies y(t) + 2e^{y(t)} = \frac{3}{2}t^2 + C.$$ However, it now has become quite difficult for me to figure out how to figure out the behavior of $y(t)$ as $t \rightarrow \infty$. Someone suggested that I should look for a particular in equality, but I am not sure how I could manipulate the right-hand side to provide me with the desired information. Any suggestions on this?</p>
Eric Towers
123,905
<p>As $t \rightarrow \infty$, the right-hand side of your (implicit) solution goes to infinity, so the left-hand side must also.</p> <p>Note that $\dfrac{\mathrm{d}}{\mathrm{d}u} u + 2 \mathrm{e}^u = 1 + 2 \mathrm{e}^u$, which is positive for all values of $u$. Consequently, if $y(t)$ increases, the left-hand side increases and if $y$ decreases, the left-hand side decreases. Therefore, since the left hand-side must go to infinity as $t \rightarrow \infty$ and the left-hand side is finite for every positive value of $y(t)$, we must have $y(t) \rightarrow \infty$ as well.</p> <p>You can see some of this from the original differential equation: $$ \dfrac{\mathrm{d}y}{\mathrm{d}t} = \frac{3t}{1+2 \mathrm{e}^y} \text{.} $$ Since the denominator always positive (greater than $1$, even), as $t \rightarrow \infty$, the numerator is also positive, so the derivative of $y$ is always positive, so $y$ strictly monotonically increases (maybe not to $\infty$). (Since by the previous paragraph, we know $y \rightarrow \infty$, it must be that $y$ does not grow too fast, otherwise the slope of $y$ would get too flat and would fail to escape to $\infty$.)</p>
3,547,384
<p>I saw this equation<span class="math-container">$$S(q)=\int_a^bL(t,q(t),\dot q(t))dt$$</span> in <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation" rel="nofollow noreferrer">wikipedia</a>.</p> <p>So I would think that <span class="math-container">$f(x,y)$</span> must be equal to <span class="math-container">$f(x,y,t)$</span> if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are the function of <span class="math-container">$t$</span>. So let's take an experiment.</p> <p>Firstly, just let <span class="math-container">$f = f(x,y)$</span>, where <span class="math-container">$x = x(t)$</span>, <span class="math-container">$y=y(t)$</span>, so <span class="math-container">$f=f\left(x(t),y(t)\right)$</span></p> <p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}\qquad (1)$$</span></p> <p>Secondly, since you all see that <span class="math-container">$f$</span> is actually also a function of <span class="math-container">$t$</span>, so we have</p> <p><span class="math-container">$$f = f(x,y)=f(x,y,t)\qquad (2)$$</span></p> <p>Now, </p> <p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}+\cfrac{\partial f}{\partial t}\cfrac{dt}{dt}\qquad (3)$$</span></p> <p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}+\cfrac{\partial f}{\partial t}\qquad (4)$$</span></p> <p>Because it is always correct that <span class="math-container">$\cfrac{df}{dt}=\cfrac{\partial f}{\partial t}$</span>,</p> <p><span class="math-container">$$\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}=0\qquad (5)$$</span></p> <p>Substitute (5) into (1),</p> <p><span class="math-container">$$\cfrac{df}{dt}=0\qquad (6)$$</span></p> <p>This is not a correct 
outcome, since (6) is not equal to zero in all cases.</p> <p>So what's wrong?</p>
Empy2
81,790
<p>Suppose <span class="math-container">$f(x,y)$</span> is the height of a hillside above sea level, and <span class="math-container">$(x(t),y(t))$</span> the latitude and longitude of your car. Then <span class="math-container">$\frac{df}{dt}$</span> is the rate you are going up and down because the road slopes up or down.<br> <span class="math-container">$\frac{\partial f}{\partial t}$</span> is the amount the road is physically rising or falling, for example because of continental drift or because a mine below it collapsed.</p>
3,102,336
<p>I have been looking for fixed points of the <a href="https://simple.wikipedia.org/wiki/Riemann_zeta_function" rel="nofollow noreferrer">Riemann Zeta function</a> and found something very interesting: it has two fixed points in <span class="math-container">$\mathbb{C}\setminus\{1\}$</span>.</p> <p>The first fixed point is in the right half-plane, viz. <span class="math-container">$\{z\in\mathbb{C}:Re(z)&gt;1\}$</span>, and it lies precisely on the real axis (value: approximately <span class="math-container">$1.83377$</span>).</p> <p><strong>Question:</strong> I want to show that the Zeta function has no other fixed points in the right half-plane excluding the real axis, <span class="math-container">$D=\{z\in\mathbb{C}:Im(z)\ne 0,Re(z)&gt;1\}$</span>.</p> <p><strong>Tried:</strong> In <span class="math-container">$D$</span> the Zeta function is defined as <span class="math-container">$\displaystyle\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}$</span>. If possible, let it have a fixed point, say <span class="math-container">$z=a+ib\in D$</span>. Then, <span class="math-container">$$\zeta(z)=z\\ \implies\sum_{n=1}^\infty\frac{1}{n^z}=z\\ \implies \sum_{n=1}^\infty e^{-z\log n}=z\\ \implies \sum_{n=1}^\infty e^{-(a+ib)\log n}=a+ib$$</span> Equating real and imaginary parts we get, <span class="math-container">$$\sum_{n=1}^\infty e^{-a\log n}\cos(b\log n)=a...(1) \\ \sum_{n=1}^\infty e^{-a\log n}\sin(b\log n)=-b...(2)$$</span> where <span class="math-container">$b\ne 0, a&gt;1$</span>.</p> <p><strong>Problem:</strong> How am I supposed to show that the relation (2) can <strong>NOT</strong> hold?</p> <p>Any hint/answer/link/research paper/note will be highly appreciated. Thanks in advance.</p> <p>Please visit <a href="https://math.stackexchange.com/questions/3145277/counting-numbers-of-fixed-point-of-zeta-function-by-argument-principle">this</a>.</p>
Conrad
298,272
<p>I do not think your statement about fixed points in the plane is true - it may be true for <span class="math-container">$Re(z)&gt;1$</span> in the sense of there being just one fixed point there, but otherwise <span class="math-container">$(s-1)\zeta(s)$</span> is an entire function of order 1 and maximal type (by the usual properties of the critical strip zeros - e.g. their ~<span class="math-container">$T\log(T)$</span> density and general facts about entire functions of finite order - the usual notion of density of zeros for entire functions and the one for <span class="math-container">$\zeta$</span> differ a little, but they have the same order of magnitude), and subtracting a polynomial like <span class="math-container">$s(s-1)$</span> doesn't change order 1 or maximal type, as those depend on the Taylor coefficients at infinity for any entire function. So in particular <span class="math-container">$(s-1)\zeta(s) - s(s-1)$</span> is entire of order 1 and maximal type, and such functions have lots of zeros - either their density grows faster than T at infinity, or the conditional sum of their reciprocals is not convergent, by a theorem of Lindelöf.</p> <p>Note that the reciprocal of the Gamma function is of order 1 and maximal type but has zero density ~T (say on the disk of radius T centered at the origin), as its zeros are just the non-positive integers (so, in particular, the conditional sum of their reciprocals is not convergent); so it is possible for the number of fixed points of <span class="math-container">$\zeta$</span> to be of order T only. Similar considerations apply to any equation of the type <span class="math-container">$\zeta(s)=Polynomial(s)$</span>, by multiplying with s-1 and reducing to considerations about entire functions of order 1 and maximal type.</p>
2,538,521
<blockquote> <p>Let $x$ be a function of $C^1(I,R)$ where $I\subset \mathbb{R}$ , such that $$x'(t)\leq a(t) x(t)+b(t),$$ where $a$ and $b$ are continuous functions on $I$ in $R$ then $$ x(t)\leq x(t_0) \exp\left(\int_{t_0}^{t}a(s)ds\right)+\int_{t_0}^{t}\exp\left(\int_{s}^t a(\sigma)d\sigma\right)b(s)ds$$</p> </blockquote> <p>How to prove this proposition please ?</p> <p>Thank you</p>
Robert Lewis
67,071
<p>I, too, first guessed this would be a problem suggesting the application of <a href="https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s_inequality" rel="nofollow noreferrer">Grönwall's inequality</a>, but it seems an even more elementary solution avails itself:</p> <p>Given that</p> <p><span class="math-container">$x'(t) \le a(t) x(t) + b(t), \tag 1$</span></p> <p>we have the equivalent form</p> <p><span class="math-container">$x'(t) - a(t) x(t) \le b(t); \tag 2$</span></p> <p>and since</p> <p><span class="math-container">$\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) &gt; 0 \tag 3$</span></p> <p>we further have</p> <p><span class="math-container">$\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right )(x'(t) - a(t) x(t)) \le \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) b(t); \tag 4$</span></p> <p>we observe that</p> <p><span class="math-container">$\displaystyle \left ( \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \right )' = \displaystyle -a(t) \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) + \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x'(t)$</span> <span class="math-container">$= \displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right )(x'(t) - a(t)x(t)); \tag 5$</span></p> <p>thus (4) becomes</p> <p><span class="math-container">$\displaystyle \left ( \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \right )' \le \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) b(t); \tag 6$</span></p> <p>we integrate (6) 'twixt <span class="math-container">$t_0$</span> and <span class="math-container">$t$</span>:</p> <p><span class="math-container">$\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) - x(t_0) = \int_{t_0}^t \left ( \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) x(s) \right )'\; ds$</span> <span class="math-container">$\le \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag
7$</span></p> <p>whence</p> <p><span class="math-container">$\displaystyle \exp \left (-\int_{t_0}^t a(\sigma)\; d\sigma \right ) x(t) \le x(t_0) + \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag 8$</span></p> <p>which we multiply through by</p> <p><span class="math-container">$\displaystyle \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) &gt; 0 \tag 9$</span></p> <p>to obtain</p> <p><span class="math-container">$x(t) \le$</span> <span class="math-container">$\displaystyle x(t_0) \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right )$</span> <span class="math-container">$+ \exp \left (\displaystyle \int_{t_0}^t a(\sigma)\; d\sigma \right ) \displaystyle \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds; \tag{10}$</span></p> <p>finally,</p> <p><span class="math-container">$\displaystyle \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \int_{t_0}^t \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$</span> <span class="math-container">$= \displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{11}$</span></p> <p>and since</p> <p><span class="math-container">$\displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) \exp \left (-\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$</span> <span class="math-container">$= \displaystyle \int_{t_0}^t \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma -\int_{t_0}^s a(\sigma)\; d\sigma \right ) b(s) \; ds$</span> <span class="math-container">$= \displaystyle \int_{t_0}^t \exp \left (\int_s^t a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{12}$</span></p> <p>(10) becomes</p> <p><span class="math-container">$x(t) \le \displaystyle x(t_0) \exp \left (\int_{t_0}^t a(\sigma)\; d\sigma \right ) + \int_{t_0}^t \exp \left (\int_s^t a(\sigma)\; d\sigma \right ) b(s) \; ds, \tag{14}$</span></p> <p>which, since <span 
class="math-container">$s$</span> and <span class="math-container">$\sigma$</span> are in fact merely so-called &quot;dummy&quot; variables of integration, is indeed the desired result.</p>
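As a concrete sanity check of the final inequality (my own example, not part of the proof): take $a(t)=1$, $b(t)=t$, $t_0=0$, $x(0)=1$, and $x'(t)=x(t)+t-1 \le x(t)+t$, whose exact solution is $x(t)=e^t-t$. The bound of the proposition then evaluates to $e^t x(0) + \int_0^t e^{t-s} s \, ds = 2e^t - t - 1$:

```python
from math import exp

# x' = x + t - 1 <= x + t, x(0) = 1, exact solution x(t) = e^t - t.
# Bound: x(t) <= e^t * x(0) + int_0^t e^(t-s) * s ds = 2*e^t - t - 1.
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    x = exp(t) - t
    bound = 2 * exp(t) - t - 1
    assert x <= bound + 1e-12
    print(f"t={t}: x={x:.4f} <= bound={bound:.4f}")
```

The inequality $e^t - t \le 2e^t - t - 1$ reduces to $e^t \ge 1$, so the check holds for all $t \ge 0$.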
1,662,090
<p>First of all, hi. I am new here.</p> <p>Let $$X_1,\dots, X_n$$ be i.i.d. exponential random variables.</p> <p>$$ \Pr\left(\max_i X_i &gt; \sum_{i=1}^n X_i-\max_i X_i \right) = ? $$</p> <p>I think we should take integrals of the exponential distribution functions over the corresponding intervals, but I could not make it work.</p> <p>Thanks.</p> <p>edit: my initial work was similar to, and a more general version of, problem 6.45 in <a href="http://www.stat.washington.edu/~hoytak/teaching/current/stat395/" rel="nofollow">http://www.stat.washington.edu/~hoytak/teaching/current/stat395/</a></p> <p>Instead of 1s as in the uniform distribution, I tried to put in the exponential pdf. But nothing good seemed to come out of it.</p> <p>2nd edit: thanks a lot to all of you, especially sinbadh and BGM. You spent much time. I could not write up my initial work thoroughly because I am just getting used to the LaTeX format. Hopefully after earning some reputation I will start giving upvotes.</p>
Intelligenti pauca
255,730
<p>HINT.</p> <p>You must solve the equation $(4x^2+2)(3x^2+2)=50052$, where $50052=(75583)_9=2^2\cdot3\cdot43\cdot97$.</p>
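Both pieces of the hint check out numerically. Substituting $u=x^2$ reduces the equation to the quadratic $12u^2+14u-50048=0$ with integer root $u=64$, i.e. $x=\pm 8$ (the substitution step is my own unpacking of the hint):

```python
from math import isqrt

assert int("75583", 9) == 50052          # (75583)_9 = 50052

# (4u + 2)(3u + 2) = 50052 with u = x^2  =>  12u^2 + 14u - 50048 = 0
disc = 14**2 - 4 * 12 * (-50048)
assert isqrt(disc) ** 2 == disc          # 2402500 = 1550^2, a perfect square
u = (-14 + isqrt(disc)) // 24
assert u == 64                           # so x = ±8
assert (4 * u + 2) * (3 * u + 2) == 50052
print("x = ±8")
```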
24,876
<p>Since it is possible to see the last time you or others visited M.SE, I wonder if one can see statistics of visits for one's own account or for a specific user over a period of time (the last year, let's say).</p>
user642796
8,348
<p>This is more of a supplement to <a href="https://math.meta.stackexchange.com/a/24877">quid's answer</a>.</p> <p>Regular users can only see the visits information for their own account. Where in <a href="https://math.stackexchange.com/users/current?tab=profile">your own profile</a> you might see</p> <p><a href="https://i.stack.imgur.com/VmCxK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VmCxK.png" alt="my profile with visits data"></a></p> <p>looking at <a href="https://math.stackexchange.com/users/8348/arjafi?tab=profile">another user's profile</a> will show</p> <p><a href="https://i.stack.imgur.com/tcnx8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tcnx8.png" alt="another&#39;s profile without visits data"></a></p> <p>On the other hand site moderators (and SE employees) can see any user's complete visit information, including the little calendar showing exact (UTC) dates visited.</p>
4,544,787
<p>These sums showed up in a probability problem I was working on. They're not quite the Stirling numbers of the first kind since it's possible to have e.g. <span class="math-container">$i_1 = i_2$</span>. Denoting the sum by <span class="math-container">$(k\mid n)$</span> we have the recurrence relation</p> <p><span class="math-container">$(k\mid n) = n(k-1\mid n) + (k\mid n-1)$</span></p> <p>I thought maybe the average term of the sum would be similar to the average product over a random list of <span class="math-container">$k$</span> numbers from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>. But it seems to always be a few times greater.</p> <p>I'm beginning to despair of there being a closed form expression for <span class="math-container">$(k\mid n)$</span> in terms of factorials and powers. Does anyone know how to analyze the sum further and perhaps find a good approximation?</p>
orangeskid
168,051
<p>Consider the expansion</p> <p><span class="math-container">$$x^N = \sum_{k=0}^N {N\brace k} (x)_k$$</span></p> <p>where <span class="math-container">$(x)_k = x(x-1)\cdots (x-k+1)$</span>, and <span class="math-container">${N\brace k}$</span> is the <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind#Definition" rel="nofollow noreferrer">Stirling number</a> of the second kind. It follows that for every <span class="math-container">$n$</span> between <span class="math-container">$0$</span> and <span class="math-container">$N$</span><br /> <span class="math-container">$$\sum_{k=0}^n {N\brace k} (x)_k$$</span> is the <a href="https://en.wikipedia.org/wiki/Lagrange_polynomial" rel="nofollow noreferrer">Lagrange interpolation </a> polynomial for the function <span class="math-container">$f(x) = x^N$</span> and nodes <span class="math-container">$0$</span>, <span class="math-container">$1$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$n$</span>.</p> <p>The leading term <span class="math-container">${N \brace n}$</span> of this polynomial can also be calculated with <a href="https://en.wikipedia.org/wiki/Cramer%27s_rule" rel="nofollow noreferrer">Cramer's rule</a> as a quotient of determinants <span class="math-container">$$\frac{\Delta'(x_0, x_1, \ldots, x_n)}{\Delta(x_0, x_1, \ldots, x_n)}$$</span></p> <p>evaluated at <span class="math-container">$(x_0, x_1, \ldots, x_n) = (0, 1, \ldots, n)$</span>.
Here, <span class="math-container">$\Delta$</span> is the usual <a href="https://mathworld.wolfram.com/VandermondeDeterminant.html" rel="nofollow noreferrer">Vandermonde determinant</a>, while <span class="math-container">$\Delta'$</span> is obtained from <span class="math-container">$\Delta$</span> by having in last column the exponent <span class="math-container">$N$</span> instead of <span class="math-container">$n$</span>.</p> <p>Now, it is a fact that the above expression in <span class="math-container">$x_0$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$x_n$</span> is the <a href="https://en.wikipedia.org/wiki/Complete_homogeneous_symmetric_polynomial" rel="nofollow noreferrer">complete symmetric polynomial</a> of degree <span class="math-container">$N-n$</span> in <span class="math-container">$x_0$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$x_n$</span>.</p> <p>We are done.</p>
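For a concrete check, assume (as the recurrence in the question suggests) that $(k\mid n)$ denotes $\sum_{1\le i_1\le\cdots\le i_k\le n} i_1\cdots i_k$, i.e. the complete homogeneous symmetric polynomial $h_k(1,\ldots,n)$; the argument above then yields the closed form $(k\mid n)={n+k\brace n}$. A brute-force Python verification of both the recurrence and this closed form (helper names are ad hoc):

```python
from itertools import combinations_with_replacement
from math import prod, comb, factorial

def bracket(k, n):
    """(k | n): sum of products over nondecreasing k-tuples from {1, ..., n}."""
    return sum(prod(t) for t in combinations_with_replacement(range(1, n + 1), k))

def stirling2(m, j):
    """Stirling number of the second kind {m brace j}, by inclusion-exclusion."""
    return sum((-1) ** i * comb(j, i) * (j - i) ** m for i in range(j + 1)) // factorial(j)

for n in range(1, 6):
    for k in range(1, 6):
        # the recurrence from the question: (k|n) = n (k-1|n) + (k|n-1)
        assert bracket(k, n) == n * bracket(k - 1, n) + bracket(k, n - 1)
        # the closed form implied by the answer: (k|n) = {n+k brace n}
        assert bracket(k, n) == stirling2(n + k, n)

print(bracket(2, 3))  # 1+2+3+4+6+9 = 25 = {5 brace 3}
```

So the sum does have a closed description (a Stirling number of the second kind), even if not a formula purely in factorials and powers.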
1,135,045
<p>I need to compute \begin{align} S = \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1} \end{align} but I only want to access the elements of $f$ once, so I would prefer something like \begin{align} \sum_k f_k \sum_m ... \end{align} Here is what I did: substitute $l=m-1+k$ to get \begin{align} S &amp;= \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1} \\ &amp;=\sum_{k=-\infty}^j \sum_{l+1-k=-1}^2 w_{k,l+1-k} f_{l} \\ &amp;=\sum_{k=-\infty}^j \sum_{l=k-2}^{k+1} w_{k,l+1-k} f_{l} \end{align} But when I try to get $f_l$ out of the inner sum I'm messing something up. Can anyone produce the correct sum for looping over $f$ only once? Thanks in advance.</p> <p>Since the highest index of $f$ that is accessed is $j+1$, I assume that the outer sum should be \begin{align} \sum_{k=-\infty}^{j+1} f_k \end{align}</p>
AlexR
86,940
<p>The problem is you are introducing a dependency on $k$. To get around this we define $$s_k := \cases{1&amp;$k\ge0$\\0&amp;$k&lt;0$}$$ Then we have $$\begin{align*} \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1} &amp; = \sum_{k=-\infty}^j w_{k,-1} f_{k-2} + w_{k,0} f_{k-1}+w_{k,1} f_k + w_{k,2} f_{k+1} \\ &amp;= \sum_{k=-\infty}^{j-2} w_{k+2,-1} f_k + \sum_{k=-\infty}^{j-1} w_{k+1,0}f_k + \sum_{k=-\infty}^j w_{k,1} f_k + \sum_{k=-\infty}^{j+1} w_{k-1,2} f_k \\ &amp; = \sum_{k=-\infty}^{j+1} s_{j-2-k} w_{k+2,-1} f_k + \sum_{k=-\infty}^{j+1} s_{j-1-k} w_{k+1,0}f_k \\ &amp; \qquad + \sum_{k=-\infty}^{j+1} s_{j-k} w_{k,1} f_k + \sum_{k=-\infty}^{j+1} s_{j+1-k} w_{k-1,2} f_k \\ &amp; = \sum_{k=-\infty}^{j+1} f_k (s_{j-2-k} w_{k+2,-1} + s_{j-1-k} w_{k+1,0} + s_{j-k} w_{k,1} + s_{j+1-k} w_{k-1,2}) \\ &amp; = \sum_{k=-\infty}^{j+1} f_k \sum_{l=-1}^2 s_{j-1+l-k} w_{k+1-l,l} \end{align*}$$ A final step for simplification would then be an <em>inner</em> substitution to get rid of the $s_\cdot$ terms.</p>
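A quick numerical sanity check of the final rearrangement, using ad hoc integer weights with finite support so the formally infinite sums terminate:

```python
from collections import defaultdict

j = 3
w = defaultdict(int)   # w[(k, m)], zero outside a finite support
f = defaultdict(int)   # f[k]
for k in range(-3, 4):
    for m in range(-1, 3):
        w[(k, m)] = 10 * (k + 3) + (m + 2)
for k in range(-6, 7):
    f[k] = k * k + 1

def s(k):              # the step function s_k from the answer
    return 1 if k >= 0 else 0

# original double sum (k effectively ranges over the support of w)
S1 = sum(w[(k, m)] * f[k + m - 1]
         for k in range(-10, j + 1) for m in range(-1, 3))

# rearranged sum: each f_k is accessed exactly once
S2 = sum(f[k] * sum(s(j - 1 + l - k) * w[(k + 1 - l, l)] for l in range(-1, 3))
         for k in range(-10, j + 2))

assert S1 == S2
print(S1)
```

The inner weight attached to `f[k]` depends only on `k`, `j` and the `w` table, which is exactly what a single pass over `f` needs.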
142,035
<p>I have sets of functions of material parameters where the first input argument denotes the material. Here's a simple version:</p> <pre><code>tf1["a", x_] = x^3; tf1["b", x_] = x^4; </code></pre> <p>Now, I'd like to make a function which needs the derivative of <code>tf1</code> with respect to x. My goal is the following:</p> <blockquote> <p>myderivative["a",x]=3 x^2</p> <p>myderivative["b",x]=4 x^3</p> </blockquote> <p>The way to implement this should be something like</p> <pre><code>myderivative[i_, x_] = Derivative[2][tf1[i, #] &amp;][x] myderivativeb[i_, x_] = Derivative[0, 2][tf1][i, x] </code></pre> <p>I even tried</p> <pre><code>tf2[i_] = tf1[i, #] &amp; myderivative2[i_, x_] = Derivative[2][tf2[i]][x] </code></pre> <p>But all of the variants return</p> <pre><code>(tf1^(0,2))[i,x] </code></pre> <p>I tried using <code>SetDelayed</code> (<code>:=</code>) in all of the above variants but this doesn't return anything else. Also, approaches with <code>D</code> or <code>[Esc] pd [Esc]</code> failed.</p> <p>So how do I take the derivative of this overloaded function? It should also work with distinct numbers, e.g. I expect</p> <blockquote> <p>myderivative["a",2]=12</p> </blockquote>
David G. Stork
9,735
<pre><code>tf1[myname_String, x_] := Which[myname == "a", x^3, myname == "b", x^4]; D[tf1["a", x], x] </code></pre> <p>(* 3 x^2 *)</p> <pre><code>D[tf1["b", x], x] </code></pre> <p>(* 4 x^3 *)</p> <p>If you have a "default" function for all string variables not yet entered, then:</p> <pre><code>tf1[myname_String, x_] := Which[myname == "a", x^3, myname == "b", x^4, True, Sin[x]]; D[tf1["c", x], x] </code></pre> <p>(* Cos[x] *)</p>
142,035
<p>I have sets of functions of material parameters where the first input argument denotes the material. Here's a simple version:</p> <pre><code>tf1["a", x_] = x^3; tf1["b", x_] = x^4; </code></pre> <p>Now, I'd like to make a function which needs the derivative of <code>tf1</code> with respect to x. My goal is the following:</p> <blockquote> <p>myderivative["a",x]=3 x^2</p> <p>myderivative["b",x]=4 x^3</p> </blockquote> <p>The way to implement this should be something like</p> <pre><code>myderivative[i_, x_] = Derivative[2][tf1[i, #] &amp;][x] myderivativeb[i_, x_] = Derivative[0, 2][tf1][i, x] </code></pre> <p>I even tried</p> <pre><code>tf2[i_] = tf1[i, #] &amp; myderivative2[i_, x_] = Derivative[2][tf2[i]][x] </code></pre> <p>But all of the variants return</p> <pre><code>(tf1^(0,2))[i,x] </code></pre> <p>I tried using <code>SetDelayed</code> (<code>:=</code>) in all of the above variants but this doesn't return anything else. Also, approaches with <code>D</code> or <code>[Esc] pd [Esc]</code> failed.</p> <p>So how do I take the derivative of this overloaded function? It should also work with distinct numbers, e.g. I expect</p> <blockquote> <p>myderivative["a",2]=12</p> </blockquote>
Pillsy
531
<p>I suggest doing the derivative and then substituting in a value. This can be done as follows: </p> <pre><code>der[i_, x_] := D[tf1[i, \[FormalX]], \[FormalX]] /. \[FormalX] -&gt; x; </code></pre> <p>Using a "formal" variable means you don't have to worry about it being defined in scope and screwing things up.</p>
4,517,429
<p>Let <span class="math-container">$ {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}}$</span> be a sequence of independent random variables, each of those random variable follow a Gamma distribution.</p> <p>For the summation of those random variable:</p> <p><span class="math-container">$ {\displaystyle {\bar {X}}_{n}\equiv {\frac {X_{1}+\cdots +X_{n}}{n}}}$</span></p> <p>Question: Is the summation of independent Gamma distributions a Gamma distribution or a normal distribution ?</p> <p>Here is the confusion: the summation of Gamma distribution should still be a Gamma distribution. On the other hand, central limit theorem says that such summation should approach a normal distribution. which of those viewpoint is correct ?</p>
heropup
118,193
<p>The answer depends on whether the gamma distributions have the same rate parameter.</p> <p>If <span class="math-container">$X_i \sim \operatorname{Gamma}(a_i, b)$</span> then their sample total will also be gamma distributed: <span class="math-container">$$\sum X_i \sim \operatorname{Gamma}\left(\sum a_i, b \right),$$</span> hence the sample mean will also be gamma: <span class="math-container">$$\bar X \sim \operatorname{Gamma}\left(\sum a_i, nb\right);$$</span> that is to say, <span class="math-container">$\bar X$</span> is gamma with shape equal to the sum of shapes, and rate equal to <span class="math-container">$n$</span> times the original rate.</p> <p>However, if the rate parameter is not the same for each <span class="math-container">$X_i$</span>, then in general, the sample mean will not be gamma distributed. There is no general closed form.</p> <p>That said, the asymptotic behavior of the sample mean does obey the central limit theorem. The <em>exact</em> distribution in the common rate case above is indeed gamma; as <span class="math-container">$n \to \infty$</span>, such a gamma distribution tends to a normal distribution with mean <span class="math-container">$\mu = \bar a/b$</span> where <span class="math-container">$\bar a = \lim_{n \to \infty} \frac{1}{n} \sum a_i$</span>.</p>
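A quick sanity check of the common-rate case via the first two moments, in exact rational arithmetic (here $\operatorname{Gamma}(a,b)$ means shape $a$ and rate $b$, so mean $a/b$ and variance $a/b^2$; the particular shapes and rate are arbitrary choices):

```python
from fractions import Fraction as F

a = [F(2), F(3), F(5)]          # shapes a_i
b = F(4)                        # common rate
n = len(a)

# moments of X-bar computed directly from the X_i
mean_direct = sum(ai / b for ai in a) / n
var_direct = sum(ai / b**2 for ai in a) / n**2     # independence

# moments of the claimed Gamma(sum a_i, n*b) distribution
A = sum(a)
mean_claimed = A / (n * b)
var_claimed = A / (n * b) ** 2

assert mean_direct == mean_claimed
assert var_direct == var_claimed
print(mean_direct, var_direct)   # 5/6 5/72
```

Matching two moments is of course only a sanity check, not a proof that the distribution is gamma; the full statement follows from the moment generating function.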
3,566,603
<p>I need a simple way to show <span class="math-container">$\mathbb R^2$</span> is not isomorphic <span class="math-container">$\mathbb{R}[x]/(x^2)$</span>. Both are not integral domains, and both are not fields, so I’m not sure how to go about it.</p>
Geoffrey Trang
684,071
<p>The ring <span class="math-container">$\mathbb{R} \times \mathbb{R}$</span> has 4 ideals, namely <span class="math-container">$0$</span>, <span class="math-container">$\mathbb{R} \times 0$</span>, <span class="math-container">$0 \times \mathbb{R}$</span>, and <span class="math-container">$\mathbb{R} \times \mathbb{R}$</span>. In contrast, the ring <span class="math-container">$\mathbb{R}[x]/(x^2)$</span> has only 3 ideals, namely <span class="math-container">$0$</span>, <span class="math-container">$(x)$</span>, and <span class="math-container">$\mathbb{R}[x]/(x^2)$</span>. Hence, they could not be isomorphic.</p>
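The same counting argument can be watched in miniature over $\mathbb F_2$, where both rings are finite and the ideals can be enumerated by brute force (the representation is ad hoc: an element of either ring is stored as a pair):

```python
from itertools import combinations

def ideals(elems, add, mul):
    """Brute force: subsets containing 0 = (0, 0), closed under addition and
    under multiplication by arbitrary ring elements (ad hoc, pairs only)."""
    zero = (0, 0)
    found = []
    rest = [e for e in elems if e != zero]
    for size in range(len(rest) + 1):
        for extra in combinations(rest, size):
            I = {zero, *extra}
            if all(add(u, v) in I for u in I for v in I) and \
               all(mul(r, u) in I for r in elems for u in I):
                found.append(I)
    return found

elems = [(a, b) for a in (0, 1) for b in (0, 1)]

# F2 x F2: componentwise addition and multiplication
add2 = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)
mul_prod = lambda u, v: (u[0] * v[0] % 2, u[1] * v[1] % 2)

# F2[x]/(x^2), with (a, b) standing for a + bx: (a+bx)(c+dx) = ac + (ad+bc)x
mul_dual = lambda u, v: (u[0] * v[0] % 2, (u[0] * v[1] + u[1] * v[0]) % 2)

print(len(ideals(elems, add2, mul_prod)), len(ideals(elems, add2, mul_dual)))  # 4 3
```

The counts 4 versus 3 are exactly the ideal counts used above for the real versions of these rings.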
3,695,868
<p>In right triangle <span class="math-container">$ABC,$</span> <span class="math-container">$\angle C = 90^\circ.$</span> Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be points on <span class="math-container">$\overline{AC}$</span> so that <span class="math-container">$AP = PQ = QC.$</span> If <span class="math-container">$QB = 67$</span> and <span class="math-container">$PB = 76,$</span> find <span class="math-container">$AB.$</span></p> <p><a href="https://i.stack.imgur.com/BIPQ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIPQ0.png" alt="enter image description here"></a></p> <p>How do I use ratios and given side lengths to create a proportion to solve for <span class="math-container">$AB$</span>? Is there any other way to solve this?</p> <p>I would think the best way to approach this is to relate <span class="math-container">$QB/CB = AB/CB$</span>, though that would make <span class="math-container">$CB$</span> for both the same. I guess the relation of <span class="math-container">$AB/AC = QB/QC$</span> can also be used.</p>
Vishu
751,311
<p><strong>Hint:</strong></p> <p>Suppose <span class="math-container">$AP=PQ=QC=x$</span>. Then, we know <span class="math-container">$$BC^2 = 67^2-x^2 =76^2 -(2x)^2 $$</span> You can solve for <span class="math-container">$x$</span> from here; then, since <span class="math-container">$AC=3x$</span>, you get <span class="math-container">$AB$</span> from <span class="math-container">$AB^2=(3x)^2+BC^2$</span>.</p>
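Carrying the hint through numerically (fair warning: this reveals the final answer; $AC=3x$, and $AB$ comes from the Pythagorean theorem):

```python
from math import isclose, sqrt

# 67^2 - x^2 = 76^2 - (2x)^2  =>  3 x^2 = 76^2 - 67^2
x2 = (76**2 - 67**2) / 3              # x^2
BC2 = 67**2 - x2                      # BC^2, from right triangle QCB
assert isclose(BC2, 76**2 - 4 * x2)   # consistent with right triangle PCB

AB = sqrt(9 * x2 + BC2)               # AB^2 = AC^2 + BC^2 = (3x)^2 + BC^2
print(AB)  # 89.0
```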
277,217
<p>I am stuck on the following problem, which I do not believe to be so difficult.</p> <p>Let $X$ and $Y$ be Banach spaces. Let $f:X\times X\rightarrow Y$ be a function such that for any fixed $x_0$, $f(x,x_0)$ and $f(x_0,x)$ are continuous in $x$. Then is $f(x,x)$ continuous in $x$?</p> <p>I tried taking an arbitrary convergent subsequence $\{x_n\}$ which converges to some $x$ and trying to argue that $f(x_n,x_n)$ converges to $f(x,x)$ using continuity in both terms, but I cannot seem to make this work for some reason.</p> <p>Any help is greatly appreciated.</p>
Ittay Weiss
30,953
<p>Consider the function $f:\mathbb R\times \mathbb R\to \mathbb R$ given by $f(x,y)=\dfrac{xy}{x^2+y^2}$ for $(x,y)\ne(0,0)$ and $f(0,0)=0$. </p> <p>For every fixed $x_0$, the sections $f(x,x_0)$ and $f(x_0,x)$ are continuous in $x$: for $x_0\ne 0$ the denominator never vanishes, and $f(x,0)=f(0,x)=0$ for all $x$. However, $f(x,x)=1/2$ for every $x \ne 0$ while $f(0,0)=0$, so $f(x,x)$ is discontinuous at the origin. </p> <p>Intuitively what is going on here is that there are infinitely many directions to consider in $\mathbb R \times \mathbb R$ and knowledge of the behaviour of the function along the horizontal and vertical lines is not sufficient to determine its behaviour with respect to the other infinity of directions. </p>
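The function $f(x,y)=xy/(x^2+y^2)$ (with $f(0,0)=0$) is the classical function of this type: every horizontal and vertical section is continuous, yet the diagonal section jumps at the origin. A quick check:

```python
def f(x, y):
    return 0.0 if (x, y) == (0, 0) else x * y / (x * x + y * y)

# sections along the axes vanish identically
assert all(f(t, 0) == 0 and f(0, t) == 0 for t in (-1.0, -0.5, 0.5, 1.0))

# along the diagonal, f(t, t) = 1/2 for every t != 0, while f(0, 0) = 0
for t in (1.0, 1e-3, 1e-9, -1e-9):
    assert f(t, t) == 0.5
assert f(0, 0) == 0.0
```

So $f(t,t)$ sits at $1/2$ arbitrarily close to the origin but drops to $0$ there.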
1,993,217
<p>Let $\left\{f_{n}\right\}$ be a sequence of equicontinuous functions where $f_n: [0,1] \rightarrow \mathbf{R}$. If $\{f_n(0)\}$ is bounded, why is $\left\{f_{n}\right\}$ uniformly bounded?</p>
zhw.
228,045
<p>Let's do it for an equicontinuous family $\mathcal F$ of functions on $[0,1]$ such that</p> <p>$$\sup_{f\in \mathcal F}|f(0)| =C &lt; \infty.$$</p> <p>Choose $m \in \mathbb N$ such that $|y-x|\le 1/m$ implies $|f(y)-f(x)| \le 1$ for all $f\in \mathcal F.$ Then for any $f\in \mathcal F,$</p> <p>$$f(k/m) = [f(k/m) - f((k-1)/m) ]+ [f((k-1)/m) - f((k-2)/m)]\,+$$ $$ \cdots + [f(1/m) -f(0)] + f(0).$$</p> <p>for $k= 1,\dots , m.$ Take absolute values to see this implies $|f(k/m)| \le k + C \le m+C.$ Now any $x\in [0,1]$ is within $1/m$ of one of the $k/m$ points. It follows that $|f(x)| \le 1 +m + C,$ for all $x\in [0,1],$ for all $f\in \mathcal F.$</p>
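To see the bound in action for one concrete family, take $\mathcal F = \{x \mapsto \sin(ax) : |a|\le 1\}$ on $[0,1]$: every member is 1-Lipschitz, so $m=1$ works for the $\epsilon=1$ modulus, and $C=\sup|f(0)|=0$, giving the uniform bound $1+m+C=2$ (a loose but uniform bound; the family and sample grid here are ad hoc choices):

```python
from math import sin

C, m = 0, 1                       # sup |f(0)| and the modulus step for eps = 1
bound = 1 + m + C                 # the uniform bound from the proof

fam = [lambda x, a=a / 10: sin(a * x) for a in range(-10, 11)]
xs = [i / 100 for i in range(101)]

worst = max(abs(f(x)) for f in fam for x in xs)
assert worst <= bound
print(worst, "<=", bound)
```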
74,347
<blockquote> <p>Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$.</p> </blockquote> <p>This question is just after the definition of differentiation and the theorem that if $f$ is finitely derivable at $c$, then $f$ is also continuous at $c$. Please help, my textbook does not have the answer. </p>
Did
6,179
<p>$$\ \ \ \ \mathsf{W}\ \ \ \ $$</p>
521,740
<p>$x$,$y$ are real numbers satisfying $(x-1)^{2}+4y^{2}=4$<br> find the maximum of $xy$ and justify it without calculus.<br> Does there exist a tricky solution using elementary inequalities (AM-GM or Cauchy-Schwarz) ?</p> <p>I tried and got it's when $x=\dfrac{3+\sqrt{33}}{4}$</p>
user2566092
87,313
<p>If you set $y = k/x$, then you get a quartic equation in $x$ and you want to know the maximal $k$ such that there is a real solution for $x$. If you trace through all the complicated equations that define the solutions for a quartic equation, you should be able to figure it out. However I doubt this is the most efficient approach without calculus.</p>
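Whatever the method, the claimed maximizer can at least be checked numerically by parametrizing the ellipse as $x=1+2\cos t$, $y=\sin t$ and scanning (the grid size is an arbitrary choice):

```python
from math import cos, sin, pi, sqrt

# parametrize (x - 1)^2 + 4 y^2 = 4 by x = 1 + 2 cos t, y = sin t
N = 200_000
prod_max, t_max = max(((1 + 2 * cos(t)) * sin(t), t)
                      for t in (2 * pi * i / N for i in range(N)))
x_max = 1 + 2 * cos(t_max)

assert abs(x_max - (3 + sqrt(33)) / 4) < 1e-3   # agrees with the claimed maximizer
print(x_max, prod_max)
```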
439,620
<p>As we know, the QR-factorization <span class="math-container">$Q\cdot R=A$</span> of any real symmetric <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> with full rank is <em><strong>unconditionally</strong></em> <em>numerically stable</em>. Further, when A is rank-1-updated, the factorization can be updated in <span class="math-container">$\mathcal{O}(n^2)$</span>, <em><strong>and</strong></em> <em>the factorization remains stable</em> after the update as well!</p> <p>Now, when <span class="math-container">$A$</span> is symmetric, I seek a decomposition with all the same properties, plus that the factorization itself is symmetric. So to summarize, these are the properties of a factorization that I seek:</p> <ol> <li>The factorization is unconditionally numerically stable (i.e., no conditions on inertia, spectrum, norm, M-property, reordering, growth factor, etc., are permitted to be imposed on <span class="math-container">$A$</span> whatsoever).</li> <li>The factorization is inherently symmetric (e.g., <span class="math-container">$Q^T \cdot R^T \cdot D \cdot R \cdot Q$</span>), i.e., exact multiplication of the factors yields a symmetric matrix.</li> <li>The factorization can be updated in <span class="math-container">$\mathcal{O}(n^2)$</span> and remains stable afterwards.</li> </ol> <p>Remark 1: For general symmetric <span class="math-container">$A$</span>, no assertions can be given on the stability of LDL-factorizations. (In some cases, reorderings do exist, but upon rank-1-update, the matrix would have to be reordered from the start, thus assertions on the stability of the factorization after the rank-1-update do not exist.)</p> <p>Remark 2: I am likewise interested in a result that such a decomposition cannot exist.</p>
Federico Poloni
1,898
<p>[EDIT: not working in fact because the update of <span class="math-container">$Q$</span> cannot be merged efficiently, see comments] A simple eigendecomposition <span class="math-container">$A=QDQ^*$</span> should work, since <a href="https://arxiv.org/abs/1405.7537" rel="nofollow noreferrer">it can be updated</a> in <span class="math-container">$O(n^2)$</span>.</p>
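As a small illustration that a symmetric matrix really does admit the inherently symmetric factorization $A=QDQ^*$ in question, here is a self-contained pure-Python sketch using classical Jacobi rotations (this demonstrates only the factorization itself, not the $O(n^2)$ updating of the linked paper; pivoting and tolerances are ad hoc):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def jacobi_eig(A, iters=60):
    """Classical Jacobi: returns orthogonal Q and (nearly) diagonal D with A = Q D Q^T."""
    n = len(A)
    D = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        # pick the largest off-diagonal entry and rotate it to zero
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(D[ij[0]][ij[1]]))
        if abs(D[p][q]) < 1e-14:
            break
        phi = 0.5 * math.atan2(2 * D[p][q], D[q][q] - D[p][p])
        c, s = math.cos(phi), math.sin(phi)
        J = [[float(i == j) for j in range(n)] for i in range(n)]
        J[p][p] = J[q][q] = c
        J[p][q], J[q][p] = s, -s
        D = matmul(transpose(J), matmul(D, J))
        Q = matmul(Q, J)
    return Q, D

A = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 1.0],
     [0.5, 1.0, 2.0]]
Q, D = jacobi_eig(A)
R = matmul(matmul(Q, D), transpose(Q))   # reconstruct A symmetrically

err = max(abs(R[i][j] - A[i][j]) for i in range(3) for j in range(3))
offdiag = max(abs(D[i][j]) for i in range(3) for j in range(3) if i != j)
assert err < 1e-9 and offdiag < 1e-9
```

A full eigendecomposition like this costs $O(n^3)$, which is exactly why the question of a stable $O(n^2)$ update (the subject of the edit above) is delicate.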
1,131,622
<p>The question itself is a very easy one:<br/></p> <blockquote> <p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p> </blockquote> <p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/> But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, all the possible cases are (girl,girl),(girl,boy),(boy,girl) which yields that the probability is two cases out of three and is thus $\frac23$.<br/> But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not what my friends explained to me. However, I just can't dissuade them of their opinion, nor can I prove that I am wrong.<br/> So what on earth is the probability, and why? Any help is appreciated. Thanks in advance.<hr/> Esp. Can anybody show why <strong>my</strong> explanation is wrong? Isn't whether the other kid is a boy or a girl a 50/50 event? <hr/> EDIT:<br/> Thanks for all the help you provided for me, and special thanks will go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement of this problem.<br/> As is pointed out in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a>, there are two distinct interpretations of the statement "one of whom is a girl" that give rise to ambiguity:<br/></p> <blockquote> <p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/> From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p> </blockquote>
Community
-1
<p>A lot of great explanations about the correct answer. Your answer is wrong because it ignores the fact that people are distinguishable, but the information you got <em>doesn't</em> tell you <em>which</em> child is the girl. You are ignoring this and treating the problem as if you knew that, for example, Child 1 is a girl. As others here have stated, this is only half the number of cases that are consistent with the information you got.</p>
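The two readings listed in the question's edit can be separated by exhaustive enumeration with exact arithmetic — enumerate each equally likely (child 1, child 2) pair, and in the second protocol also which child was observed:

```python
from fractions import Fraction
from itertools import product

pairs = list(product("BG", repeat=2))   # 4 equally likely families

# Reading 1: condition on "the family has at least one girl"
have_girl = [p for p in pairs if "G" in p]
p1 = Fraction(sum("B" in p for p in have_girl), len(have_girl))

# Reading 2: a uniformly chosen child is observed to be a girl
outcomes = [(p, i) for p in pairs for i in (0, 1)]          # 8 equally likely
observed_girl = [(p, i) for p, i in outcomes if p[i] == "G"]
p2 = Fraction(sum(p[1 - i] == "B" for p, i in observed_girl), len(observed_girl))

print(p1, p2)   # 2/3 1/2
```

Under the first reading the probability of at least one boy is 2/3; under the second it is 1/2, which is the computation the OP had in mind.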
394,517
<p>How can I evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?</p>
tom
59,101
<p>Almost always when you have a limit of the $\sqrt{A}-\sqrt{B}$ type, it is a good idea to multiply it by $\frac{\sqrt{A}+\sqrt{B}}{\sqrt{A}+\sqrt{B}}$, so that you are left with the limit of $\frac{A-B}{\sqrt{A}+\sqrt{B}}$.</p> <p>Thus for your limit you get: $$\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}} = \frac{ (x+\sqrt{x} ) - (x-\sqrt{x})}{\sqrt{x+\sqrt{x}}+\sqrt{x-\sqrt{x}}}=\frac{ 2\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x-\sqrt{x}}} \rightarrow 1$$ as $x \rightarrow \infty$.</p>
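Numerically, both the raw expression and the rationalized form approach 1 (the raw form is the one prone to cancellation for very large $x$, which is another reason to prefer the rationalized one):

```python
from math import sqrt

def raw(x):
    return sqrt(x + sqrt(x)) - sqrt(x - sqrt(x))

def rationalized(x):
    return 2 * sqrt(x) / (sqrt(x + sqrt(x)) + sqrt(x - sqrt(x)))

for x in (1e4, 1e8, 1e12):
    assert abs(rationalized(x) - 1) < 1e-2
assert abs(rationalized(1e12) - 1) < 1e-6
assert abs(raw(1e12) - rationalized(1e12)) < 1e-3
```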
29,115
<p>I just read a proof and, after struggling some time with a mental leap, I think that it uses tacitly the following:</p> <p>Let $\kappa$ be a regular cardinal, $\theta &gt; \kappa$ a regular cardinal too then: $ S \subset \kappa$ is stationary if and only if $\forall \mathcal{A} = (H(\theta), \in, &lt;,..) \exists M \prec \mathcal{A}, |M| &lt; \kappa,$ such that $sup(M \cap \kappa) \in S$.</p> <p>Now my questions are:</p> <ol> <li><p>Is this statement above even true? (I think so as I have a proof, but this doesn't have to mean anything)</p></li> <li><p>It appears to me that the latter part of this characterization is a quite strong assumption as $\mathcal{A}$ might contain a lot of additional information, so is there a possibility to weaken it? Or could you mention any similar statements to the one above?</p></li> </ol> <p>Thank you</p> <p>EDIT: I accepted the answer of Philip, simply because he has lower points. Francois answer would have deserved it too.</p>
François G. Dorais
2,000
<p>Yes, the statement is true. </p> <p>The forward direction is clear since the set $$C_{\mathcal{A}} = \{\sup(M\cap\kappa) : M \prec \mathcal{A}, |M| &lt; \kappa\}$$ is a club. Indeed, let $\langle M_\alpha : \alpha &lt; \kappa \rangle$ be an elementary chain of elementary submodels of $\mathcal{A}$ with size less than $\kappa$ such that:</p> <ul> <li>$\langle M_\alpha : \alpha &lt; \beta \rangle \in M_{\beta+1}$ for every $\beta &lt; \kappa$, and</li> <li>$M_\gamma = \bigcup_{\alpha&lt;\gamma} M_\alpha$ for every limit $\gamma &lt; \kappa$. </li> </ul> <p>Then $\langle \sup(M_\alpha \cap \kappa) : \alpha &lt; \kappa \rangle$ enumerates a closed unbounded subset of $\kappa$ which is contained in $C_{\mathcal{A}}$.</p> <p>For the converse, let $C \subseteq \kappa$ be a closed unbounded set and consider the structure $\mathcal{A} = (H(\theta),{\in},{&lt;},C)$. If $M \prec \mathcal{A}$ then $M$ satisfies "$C$ is closed unbounded in $\kappa = \sup C$," and so $C \cap M$ is closed unbounded in $\sup(C \cap M) = \sup(\kappa \cap M)$. It follows that $C_{\mathcal{A}} \subseteq C$, where $C_{\mathcal{A}}$ is defined as above. Thus it is sufficient to consider the structures $\mathcal{A}$ as I just described.</p>
3,883,164
<p>I evaluated the following limit with Taylor series, but for practice I am trying to evaluate it using L'Hopital's Rule:</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sinh x-x\cosh x+\frac{x^3}3}{x^2\tan^3x}=\lim_{x\to0}\cfrac{f(x)}{g(x)}$$</span> <span class="math-container">$f(x)=\sinh x-x\cosh x+\frac{x^3}3 ,f(0)=0$</span></p> <p><span class="math-container">$f'(x)=-x\sinh x+x^2, f'(0)=0$</span></p> <p><span class="math-container">$f''(x)=-\sinh x-x\cosh x+2x, f''(0)=0$</span></p> <p><span class="math-container">$f'''(x)=-2\cosh x-x\sinh x+2, f'''(0)=0$</span></p> <p>It seems it is going to be <span class="math-container">$0$</span> for further derivatives.</p> <p>Also for <span class="math-container">$g(x)=x^2\tan^3x$</span>, Wolfram Alpha gives this result:</p> <p><a href="https://i.stack.imgur.com/vPSik.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vPSik.png" alt="enter image description here" /></a></p> <p>It seems we have <span class="math-container">$g^{(n)}(0)=0$</span> too.</p> <p>So is there any way to evaluate the limit by applying L'Hopital's Rule?</p>
user
505,767
<p>To simplify the evaluation we can use that</p> <p><span class="math-container">$$\frac{\sinh x-x\cosh x+\frac{x^3}3}{x^2\tan^3x}=\frac{x^3}{\tan^3x}\frac{\sinh x-x\cosh x+\frac{x^3}3}{x^5}$$</span></p> <p>and since <span class="math-container">$\frac{x^3}{\tan^3x} \to 1$</span> we are reduced to evaluating by l'Hospital</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sinh x-x\cosh x+\frac{x^3}3}{x^5}$$</span></p>
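The reduced limit (and hence the original one) equals $-1/30$, which a small-$x$ evaluation confirms despite some cancellation in the numerator (the sample points are chosen to keep rounding error below the series truncation error):

```python
from math import sinh, cosh, tan

def reduced(x):
    return (sinh(x) - x * cosh(x) + x**3 / 3) / x**5

def full(x):
    return (sinh(x) - x * cosh(x) + x**3 / 3) / (x**2 * tan(x)**3)

for x in (0.05, 0.02):
    assert abs(reduced(x) + 1 / 30) < 1e-4    # reduced limit -> -1/30
    assert abs(full(x) + 1 / 30) < 2e-4       # original limit -> -1/30
    assert abs((x / tan(x))**3 - 1) < 1e-2    # the discarded factor -> 1
```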