1,775,965
<p>I'm learning about measure theory, specifically the Lebesgue integral of nonnegative functions, and need help with the following problem. </p> <blockquote> <p>Let $f:\mathbb{R}\to[0,\infty)$ be measurable and $f\in L^1$. Show that $F(x)=\int_{-\infty}^x f$ is continuous.</p> </blockquote> <p>I know it isn't much, but the only thing I could think of is that, given $x, y \in \mathbb{R}$ with $x &lt; y$, we note that $F(x) \leq F(y)$, i.e. $F$ is increasing. So maybe we can apply one of the convergence theorems of Lebesgue integration theory.</p> <hr> <p>I was also wondering if this problem can be solved using only Riemann integration theory. </p>
Aloizio Macedo
59,234
<p>Let $x \in \mathbb{R}$. It suffices to prove that $F(x_n) \to F(x)$ for every $x_n \to x$. Therefore, let $x_n \to x$.</p> <p>We have $F(x_n)=\int_{-\infty}^{\infty}f \cdot \chi_{[-\infty,x_n]} $. </p> <p>It is easy to see that $f \cdot \chi_{[-\infty,x_n]} \to f \cdot \chi_{[-\infty,x]}$ (except possibly at $x$).</p> <p>$f \cdot \chi_{[-\infty,x_n]}$ is dominated by an integrable function (namely, $f$). Therefore, we have $F(x_n) \to F(x)$.</p>
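The dominated-convergence argument above can be illustrated numerically. A minimal sketch, assuming a concrete integrable choice $f(t)=e^{-t^2}$ and a crude midpoint rule (both my own choices, not from the answer):

```python
import math

def F(x, lo=-10.0, n=20000):
    # Midpoint-rule approximation of the integral of f(t) = exp(-t^2) over (lo, x];
    # the tail below lo is negligible for this choice of f.
    if x <= lo:
        return 0.0
    h = (x - lo) / n
    return sum(math.exp(-(lo + (i + 0.5) * h) ** 2) for i in range(n)) * h

# F(x_n) -> F(x) for x_n -> x: the differences shrink as x_n approaches x = 0.3.
x = 0.3
diffs = [abs(F(x + 1.0 / k) - F(x)) for k in (10, 100, 1000)]
assert diffs[0] > diffs[1] > diffs[2]
assert diffs[2] < 1e-3
```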
1,154,763
<p>I'm given this equation:</p> <p>$$ u(x,y) = \begin{cases} \dfrac{(x^3 - 3xy^2)}{(x^2 + y^2)}\quad&amp; \text{if}\quad (x,y)\neq(0,0)\\ 0\quad&amp; \text{if} \quad (x,y)=(0,0). \end{cases} $$</p> <p>It seems like L'Hôpital's rule has been used, but I'm confused because</p> <ol> <li>there is no limit here; it's just straight up $x$ and $y$ equal zero.</li> <li>if I have to invoke a limit here to use L'Hôpital's rule, there are two variables $x$ and $y$. How do I take the limit on both of them?</li> </ol>
TravisJ
212,738
<p>Probably, it is a piecewise definition. $u(x,y)$ makes sense as long as $x$ and $y$ are not both $0$. It probably just means that: if $(x,y)\neq (0,0)$ then $u(x,y)$ is as defined (with the formula), otherwise $u(x,y)=0$. You certainly cannot plug $(0,0)$ into the formula. It may be that the point is chosen because there is a "hole" and that $0$ would fill in the hole, but this is not necessarily the case.</p>
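To back up the "hole" remark: for this particular formula the limit at the origin is in fact $0$ (in polar coordinates $|u|\le 4r$), so defining $u(0,0)=0$ makes $u$ continuous there. A quick numerical sketch, with approach directions of my own choosing:

```python
def u(x, y):
    # The formula branch of the piecewise definition; only valid for (x, y) != (0, 0).
    return (x**3 - 3 * x * y**2) / (x**2 + y**2)

# Approach (0,0) along several directions; writing x = r cos(t), y = r sin(t)
# gives |u| = r * |cos^3(t) - 3 cos(t) sin^2(t)| <= 4r, so the values shrink with r.
for dx, dy in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -2.0)]:
    s = (dx * dx + dy * dy) ** 0.5
    for r in (1e-1, 1e-3, 1e-6):
        assert abs(u(r * dx / s, r * dy / s)) <= 4 * r
```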
2,258,557
<p>Why is the equation of an arbitrary straight line in the complex plane $zz_0 + \bar z \bar z_0 = D$, where $D \in \mathbb{R}$?</p> <p>I understand that a vertical straight line can be defined by the equation $z+\bar z= D$ because, supposing $z =x+yi$, then $\bar z = x-yi$. Thus $z+\bar z = x+yi+x-yi=2x$, which is an arbitrary vertical straight line in the $w$-plane.</p> <p>But why is $zz_0 + \bar z \bar z_0 = D$ an arbitrary straight line in the complex plane?</p>
Mark Viola
218,419
<p><strong>HINT:</strong></p> <p>$$z+\bar z=2\text{Re}(z)\implies zz_0+\bar z\bar z_0=2\text{Re}(zz_0)$$</p>
2,258,557
<p>Why is the equation of an arbitrary straight line in the complex plane $zz_0 + \bar z \bar z_0 = D$, where $D \in \mathbb{R}$?</p> <p>I understand that a vertical straight line can be defined by the equation $z+\bar z= D$ because, supposing $z =x+yi$, then $\bar z = x-yi$. Thus $z+\bar z = x+yi+x-yi=2x$, which is an arbitrary vertical straight line in the $w$-plane.</p> <p>But why is $zz_0 + \bar z \bar z_0 = D$ an arbitrary straight line in the complex plane?</p>
dxiv
291,201
<p>Hint: given any two points $z_1, z_2 \in \mathbb{C}\,$, then $z$ is collinear with $z_1, z_2$ iff there exists $\lambda \in \mathbb{R}$ such that $z-z_1 = \lambda(z-z_2)$. Eliminate $\lambda$ between the following, then define $z_0, D$ appropriately:</p> <p>$$ \begin{cases} \begin{align} z-z_1 &amp;= \lambda(z-z_2) \\ \bar z- \bar z_1 &amp;= \lambda(\bar z- \bar z_2) \end{align} \end{cases} $$</p>
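Carrying out the elimination (my own working of the hint, not part of the answer) gives $z z_0+\bar z\bar z_0=D$ with $z_0=i(\bar z_1-\bar z_2)$ and $D=-2\operatorname{Im}(\bar z_1 z_2)$, after multiplying through by $i$ so that the two coefficients are conjugates. A quick numerical check of these formulas:

```python
# Collinear points z = (z1 - lam*z2) / (1 - lam) for real lam (from z - z1 = lam*(z - z2)),
# checked against z*z0 + conj(z)*conj(z0) == D with the z0, D derived above.
z1, z2 = 2 + 1j, -1 + 3j
z0 = 1j * (z1.conjugate() - z2.conjugate())
D = -2 * (z1.conjugate() * z2).imag

for lam in (-2.0, 0.5, 3.0):
    z = (z1 - lam * z2) / (1 - lam)
    val = z * z0 + z.conjugate() * z0.conjugate()
    assert abs(val.imag) < 1e-12          # the combination is real
    assert abs(val.real - D) < 1e-9       # and equals the constant D
```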
21,372
<blockquote> <p>Let $ y = \min \{ (x + 6), (4 - x) \}$, then find $y$.</p> </blockquote> <p>How to solve this problem?</p>
Isaac
72
<p>Try graphing $y=x+6$ and $y=4-x$ together on one graph, then highlight or otherwise mark the parts of those graphs that make up $y=\min\{x+6,\,4-x\}$. The resulting shape should be a familiar type of basic function, perhaps translated and/or reflected.</p>
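For the record, the familiar shape the answer alludes to is the reflected absolute-value graph $y = 5 - |x+1|$ (this closed form is my own observation, not stated in the answer). A quick numerical check:

```python
# The two lines cross at x = -1 with value 5; on each side the minimum
# is the line with the smaller value, which is exactly 5 - |x + 1|.
for i in range(-100, 101):
    x = i / 10.0
    assert abs(min(x + 6, 4 - x) - (5 - abs(x + 1))) < 1e-12
```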
1,989,259
<p><strong>Can modus tollens be a statement of proof by contradiction, or is it just a specific case of contradiction?</strong></p> <p>i.e. we know that, in general, proof by contradiction is stated as follows</p> <p><span class="math-container">$[P' \implies (q \land q')] \implies P$</span></p> <p>And by modus tollens, we have</p> <p><span class="math-container">$[(P' \implies q) \land q'] \implies P$</span></p> <p>Here we assume <span class="math-container">$P'$</span> true and show q' happens, which should not happen: a contradiction. I tend to think modus tollens is the foundation of proof by contradiction, but it seems just a specific case of contradiction...</p>
PMar
383,670
<p>Actually, these are two different views of the same thing. In one case you begin with q' already proven, and prove P' -> q; then you apply the logical equivalence P' -> q &lt;==> q' -> P, and use modus-ponens on the latter (plus q') to arrive at P. In the other case you prove P' -> (q^q'), apply the same equivalence to infer (q^q')' -> P, observe that (q^q')' is a tautology hence true, and apply modus-ponens to infer P.</p> <p>And in principle you can convert the second form to the first: First prove P' -> q, introduce the tautology q' -> (P' -> q'), apply modus-ponens to that (plus q') to infer P' -> q', then combine with the first to derive P' -> (q^q').</p> <p>It is really a question of what you have and what you need. In the first case you have neither q nor q' and so have to prove the contradiction from P' in its entirety. In the second case you have HALF of the contradiction from other sources, so you only need to prove the other half from P'.</p>
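The claim that both schemas are tautologies can be checked mechanically with a truth table; a small sketch (P and q are plain booleans here, with ' rendered as negation):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b
    return (not a) or b

for P, q in product([False, True], repeat=2):
    # Proof by contradiction: [P' -> (q and q')] -> P
    contradiction_form = implies(implies(not P, q and not q), P)
    # Modus tollens form: [(P' -> q) and q'] -> P
    tollens_form = implies(implies(not P, q) and not q, P)
    assert contradiction_form  # a tautology
    assert tollens_form        # also a tautology
```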
37,900
<p>I use the following code to find the number of consecutive prime numbers using a formula $n^2+n+i$ found by Euler (starting from n=0):</p> <pre><code>Nbs = {}; Do[Nbs = Union[Nbs, Select[Range[5000], (PrimeQ[#^2 + # + i] == False &amp;), 1]], {i, 1, 5000}]; Nbs </code></pre> <p>How can I also get, in the output list, the value of the <code>Do</code> iterator $i$ corresponding to each number of consecutive primes?</p> <p>I would like to get something like this:</p> <pre><code>{{1,2},...,{40,41}} </code></pre>
ubpdqn
1,997
<p><strong>EDIT</strong></p> <p>My previous answer related to the title of the question and was directed to "reaping" or "catching" first cases of a condition being met from a loop. </p> <p>The Euler formula, as I now understand, was a method for generating consecutive prime numbers: $n^2+n+k$, where $k$ is prime and $n$ ranges from 0 to $k-2$. With all due respect this does not appear consistent with the aim of the code. A relevant reference <a href="http://en.wikipedia.org/wiki/Heegner_number#Consecutive_primes" rel="nofollow noreferrer">here</a>.</p> <p>The following relates to the use of the Euler formula to generate consecutive primes:</p> <pre><code>f[u_?PrimeQ] := Module[{s = {}, j = 0}, While[PrimeQ[j^2 + j + u] &amp;&amp; j &lt;= u - 2, AppendTo[s, j^2 + j + u]; j++]; {u, Length@s, s}] f[u_] := Sequence[]; </code></pre> <p>Then:</p> <pre><code>Grid[Prepend[ f /@ Range[100], {"k", "number of consecutive primes", "Primes"}], Frame -&gt; All] </code></pre> <p>yields:</p> <p><img src="https://i.stack.imgur.com/dvfUa.png" alt="enter image description here"></p> <p>Further, if the intention is just to explore the effect of the original code: it miscounts, because when i is composite the corresponding value j will be 1 greater than the count of consecutive primes, e.g. i=77 -> {79, 83, 89, 97, 107}, i.e. 5 primes, not 6 as listed by the OP code (the value for n=6 being 119 = 7 x 17). 
My original answer listed all 5000 "answers" (in compact form) based on merely answering the title of the question.</p> <p>If the purpose is to explore both composite and prime i:</p> <pre><code>g[u_?PrimeQ] := Module[{set = {}, j = 0}, While[PrimeQ[j^2 + j + u] &amp;&amp; j &lt;= u - 2, AppendTo[set, j^2 + j + u]; j++]; {u, Length@set, set}] g[u_] := Module[{set = {}, j = 1}, While[PrimeQ[j^2 + j + u] &amp;&amp; j &lt;= u - 2, AppendTo[set, j^2 + j + u]; j++]; {u, Length@set, set}] </code></pre> <p>I note that up to 100000, the case i=41 yields the longest run of consecutive primes:</p> <pre><code>prim = g /@ Range[100000]; mx = Max[prim[[All, 2]]]; Cases[prim, {_, mx, _}] </code></pre> <p>yields:</p> <blockquote> <p>{{41, 40, {41, 43, 47, 53, 61, 71, 83, 97, 113, 131, 151, 173, 197,<br> 223, 251, 281, 313, 347, 383, 421, 461, 503, 547, 593, 641, 691,<br> 743, 797, 853, 911, 971, 1033, 1097, 1163, 1231, 1301, 1373, 1447,<br> 1523, 1601}}}</p> </blockquote>
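The headline fact in the k = 41 row can be replayed in a few lines of stdlib Python (a sketch, independent of the Mathematica code above):

```python
def is_prime(n):
    # Naive trial division; fine for the small values involved here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# n^2 + n + 41 is prime for n = 0, ..., 39 (40 consecutive primes), but not for n = 40.
assert all(is_prime(n * n + n + 41) for n in range(40))
assert not is_prime(40 * 40 + 40 + 41)  # 1681 = 41^2
```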
69,508
<p>I was just wondering, when I call the <code>CopulaDistribution</code> function in Mathematica, am I calling its cumulative function or its density function?</p> <p>I have looked up the help and am still a little bit unsure.</p> <p>EDIT: In particular, what does it mean when I take a RandomVariate from this CopulaDistribution? Surely I would have to sample from either the CDF or PDF.</p>
Romke Bontekoe
1,178
<p>I recently asked WRI about a similar behaviour on probability distributions. The answer was that the output IS generated, but erroneously the result IS NOT displayed.</p> <p>Try</p> <pre><code>dist = ProbabilityDistribution[1 - Abs[x], {x, -1, 1}] </code></pre> <p>with </p> <pre><code>Mean[dist] </code></pre> <p>(* out 0 *)</p> <p>or </p> <pre><code>cdist = CopulaDistribution[{"FGM", .2}, {NormalDistribution[-1, 2], NormalDistribution[1, 1/2]}] </code></pre> <p>with</p> <pre><code>Mean[cdist] Variance[cdist] </code></pre> <p>(* out {-1, 1} and {4, 1/4} *)</p> <p>So it does work. However, it is confusing.</p>
1,533,362
<p>I need to prove this identity:</p> <p>$\sum_{k=0}^n \frac{1}{k+1}{2k \choose k}{2n-2k \choose n-k}={2n+1 \choose n}$</p> <p>without using the identity:</p> <p>$C_{n+1}=\sum_{k=0}^n C_kC_{n-k}$.</p> <p>I can't figure out how.</p>
user
293,846
<p>Rewriting the LHS as: <span class="math-container">$$ \sum_{k=0}^n \frac{1}{1+2k}{1+2k \choose k}{2n-2k \choose n-k} $$</span> one observes that this is a particular case (<span class="math-container">$a=1,b=2,c=2n$</span>) of the <a href="https://en.wikipedia.org/wiki/Rothe%E2%80%93Hagen_identity" rel="nofollow noreferrer">Rothe–Hagen identity</a>: <span class="math-container">$$ \sum_{k=0}^n\frac a{a+bk}\binom{a+bk}k\binom{c-bk}{n-k}=\binom{a+c}n, $$</span> valid for any <strong>complex</strong> numbers <span class="math-container">$a,b,c$</span>.</p>
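Both the rewriting step and the resulting identity are easy to spot-check numerically (a small sketch using `math.comb`; the helper `catalan` is my own name for $\binom{2k}{k}/(k+1)$):

```python
from math import comb

def catalan(k):
    # C_k = binom(2k, k) / (k + 1), always an integer
    return comb(2 * k, k) // (k + 1)

for n in range(12):
    lhs = sum(catalan(k) * comb(2 * n - 2 * k, n - k) for k in range(n + 1))
    # The rewriting: 1/(k+1)*binom(2k,k) equals 1/(1+2k)*binom(1+2k,k).
    lhs2 = sum(comb(2 * k + 1, k) // (2 * k + 1) * comb(2 * n - 2 * k, n - k)
               for k in range(n + 1))
    assert lhs == lhs2 == comb(2 * n + 1, n)
```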
1,141,074
<p>I need help with this integral: $$\int\frac{\sqrt{\tan x}}{\cos^2x}dx$$ I tried substitution and other methods, but all have led me to this expression: $$2\int\sqrt{\tan x}(1+\tan^2 x)dx$$ where I can't calculate anything... Any suggestions? Thanks!</p>
Aaron Maroja
143,413
<p>Hint: Let $u = \tan x$ then $du = \sec^2 x\ \ dx = \frac{1}{\cos^2 x} dx$</p>
1,141,074
<p>I need help with this integral: $$\int\frac{\sqrt{\tan x}}{\cos^2x}dx$$ I tried substitution and other methods, but all have led me to this expression: $$2\int\sqrt{\tan x}(1+\tan^2 x)dx$$ where I can't calculate anything... Any suggestions? Thanks!</p>
Bernard
202,857
<p>Set $t=\tan x$; then $\,\mathrm d\mkern1.5mu t=\dfrac1{\cos^2x}\mathrm d\mkern1.5mu x$ so the integral becomes $$\int\sqrt t\,\mathrm d\mkern1.5mu t = \frac 23 t^{\frac32}+C=\frac23\tan x\sqrt{\tan x}+C.$$</p>
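A quick sanity check of the antiderivative: differentiating it numerically should recover the original integrand (a sketch with a central difference; the constant of integration drops out):

```python
import math

def integrand(x):
    return math.sqrt(math.tan(x)) / math.cos(x) ** 2

def antiderivative(x):
    t = math.tan(x)
    return 2.0 / 3.0 * t * math.sqrt(t)   # (2/3) * tan(x)^(3/2)

# Central difference of the antiderivative matches the integrand on (0, pi/2).
for x in (0.2, 0.7, 1.2):
    h = 1e-6
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) / integrand(x) < 1e-4
```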
7,981
<p>I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?</p>
PAD
27,304
<p>Let $H_n$ be the $n$th harmonic number, i.e. $ H_n = 1 + \frac12 + \frac13 + \dots + \frac1n.$ Then the Riemann hypothesis is true if and only if</p> <p>$$ \sum_{d | n}{d} \le H_n + \exp(H_n)\log(H_n)$$</p> <p>holds for every $n \ge 1$ (this is Lagarias's criterion; equality holds only for $n=1$).</p>
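The inequality is easy to test for small $n$, which of course proves nothing, but shows how the two sides compare (a stdlib sketch with a naive divisor sum):

```python
import math

def sigma(n):
    # Sum of the divisors of n (naive, fine for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Check sigma(n) <= H_n + exp(H_n) * log(H_n) for n up to 2000.
# (For n = 1 the bound is H_1 = 1, since log(1) = 0, and equality holds.)
H = 0.0
for n in range(1, 2001):
    H += 1.0 / n
    assert sigma(n) <= H + math.exp(H) * math.log(H) + 1e-9
```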
188,102
<p>I have the following list: </p> <pre><code>m={{14, "extinguisher"}, {54, "virgule"}, {55, "turnoff"}, {51, "sofa"}, {77, "beachcomber"}, {61, "stoic"}, {6, "isomorphism"}, {34, "leftist"}, {84, "spline"}, {42, "heartiness"}, {35, "postnatal"}, {41, "stratified"}, {66, "silkworm"}, {95, "conformance"}, {38, "hemophiliac"}, {19, "abdication"}, {13, "reimpose"}, {82, "cowhide"}, {78, "banteringly"}, {26, "contention"}}; </code></pre> <p>I wonder if it is possible to make a spiral bubble chart of this in Mathematica, where each number determines how big the bubble should be and each bubble is labeled with the corresponding word. </p> <p>In fact I am expecting to make something as follows: <a href="https://i.stack.imgur.com/nErYz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nErYz.png" alt="enter image description here"></a></p>
JimB
19,758
<p>This is an answer but might likely be interpreted as just a comment: If you want to construct a spiral bubble chart as an example of poor information transfer, then by all means go for it. If not, don't do it. For such data a simple bar chart might be your best bet.</p> <pre><code>m = Sort[m] BarChart[m[[All, 1]], BarOrigin -&gt; Left, ChartLabels -&gt; (Style[#, 14] &amp;) /@ m[[All, 2]], BarSpacing -&gt; Large, GridLines -&gt; Automatic, Frame -&gt; True] </code></pre> <p><a href="https://i.stack.imgur.com/WKq0L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WKq0L.png" alt="enter image description here"></a></p> <p><strong><em>Reluctant edit</em></strong></p> <p>@march is right. Some things are interesting to program even when there are no apparent redeeming social aspects of the result. If the objective is to confuse or get puzzled looks, here is one way to construct that uninformative/confusing figure.</p> <pre><code>(* Sort the data from high to low - assuming values are proportional to area *) m = {{14, "extinguisher"}, {54, "virgule"}, {55, "turnoff"}, {51, "sofa"}, {77, "beachcomber"}, {61, "stoic"}, {6, "isomorphism"}, {34, "leftist"}, {84, "spline"}, {42, "heartiness"}, {35, "postnatal"}, {41, "stratified"}, {66, "silkworm"}, {95, "conformance"}, {38, "hemophiliac"}, {19, "abdication"}, {13, "reimpose"}, {82, "cowhide"}, {78, "banteringly"}, {26, "contention"}}; m = -Sort[-m]; (* Determine associated relative radius *) r = m[[All, 1]]^0.5; r = r/Max[r]; (* Make array to hold coordinates of circle centers *) xy = ConstantArray[{0, 0}, Length[r]]; (* Set the coordinates of the second circle just to the right of the first circle *) xy[[2]] = {r[[1]] + r[[2]], 0}; (* Function that determines the coordinates of the i-th circle *) coordinates[base_, i_] := Module[{sol, rMatrix, rxy}, sol = NSolve[{(x - xy[[base, 1]])^2 + (y - xy[[base, 2]])^2 == (r[[base]] + r[[i]])^2, (x - xy[[i - 1, 1]])^2 + (y - xy[[i - 1, 2]])^2 == (r[[i - 1]] + 
r[[i]])^2}, {x, y}]; (* Choose the solution that will be counter-clockwise to the previous circle *) rMatrix = RotationMatrix[-ArcTan[xy[[i - 1, 1]], xy[[i - 1, 2]]]]; rxy1 = rMatrix.{x, y} /. sol[[1]]; rxy2 = rMatrix.{x, y} /. sol[[2]]; If[ArcTan[rxy1[[1]], rxy1[[2]]] &gt;= ArcTan[rxy2[[1]], rxy2[[2]]], {x, y} /. sol[[1]], {x, y} /. sol[[2]]]] base = 1; (* base is the index of the circle to which the next circle will touch *) (* It is assumed the the current circle will always touch the previous circle *) Do[ xy[[i]] = coordinates[base, i]; (* Is there any overlap with previous circles? *) (* If so, make the base circle next in the list *) overlap = False; Do[If[(xy[[i, 1]] - xy[[j, 1]])^2 + (xy[[i, 2]] - xy[[j, 2]])^2 &lt; (r[[i]] + r[[j]])^2, overlap = True], {j, 1, i - 1}]; If[overlap, base = base + 1; xy[[i]] = coordinates[base, i]], {i, 3, Length[m]}] Show[ListPlot[xy, PlotStyle -&gt; White, AspectRatio -&gt; 1, PlotRange -&gt; {1.4 MinMax[xy], 1.4 MinMax[xy]}, Axes -&gt; False], Graphics[Flatten[{Red, Table[{Text[m[[i, 2]], xy[[i]]], Circle[xy[[i]], r[[i]]]}, {i, Length[m]}]}]]] </code></pre> <p><a href="https://i.stack.imgur.com/OMrR9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OMrR9.png" alt="Sprial bubble chart"></a></p>
3,615,117
<p>I want to find the intersection of the sphere <span class="math-container">$x^2+y^2+z^2 = 1$</span> and the plane <span class="math-container">$x+y+z=0$</span>. </p> <p><span class="math-container">$z=-(x+y)$</span> that gives <span class="math-container">$x^2+y^2+xy= \frac 12$</span></p> <p>How do I represent this in the standard form of ellipse? Any help is appreciated to proceed further. Thanks in advance.</p>
Z Ahmed
671,540
<p><span class="math-container">$$x^2+y^2+xy=1/2 \implies \frac{(x+y)^2+(x-y)^2}{2}+\frac{(x+y)^2-(x-y)^2}{4}=\frac{1}{2}$$</span> <span class="math-container">$$\implies \frac{3}{2}(x+y)^2+\frac{(x-y)^2}{2}=1$$</span> <span class="math-container">$$\implies \frac{(\frac{x+y}{\sqrt{2}})^2}{1/3}+\frac{(\frac{x-y}{\sqrt{2}})^2}{1}=1,$$</span> which is an ellipse.</p>
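The computation rests on the identity $2(x^2+y^2+xy)=\frac32 (x+y)^2+\frac12 (x-y)^2$, which can be spot-checked numerically (the sample points are my own choice):

```python
# Verify 2*(x^2 + y^2 + xy) == (3/2)*(x+y)^2 + (1/2)*(x-y)^2 at sample points,
# so x^2 + y^2 + xy = 1/2 is exactly the ellipse (3/2)(x+y)^2 + (1/2)(x-y)^2 = 1.
for x, y in [(0.3, -1.2), (2.0, 5.0), (-0.7, -0.7), (1.0, 0.0)]:
    lhs = 2 * (x * x + y * y + x * y)
    rhs = 1.5 * (x + y) ** 2 + 0.5 * (x - y) ** 2
    assert abs(lhs - rhs) < 1e-12
```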
2,267,005
<p>I have been asked to evaluate the $\int{Fdr}$ over a curve $C$ where $F = yz\mathbf{i} + 2xz\mathbf{j} + e^{xy}\mathbf{k}$ and $C$ is the curve $x^2 + y^2 = 16, z =5$ with downward orientation</p> <p>I want to use Stokes theorem, so I am thinking of parametrizing this surface as $(x, y, z)=(4 \cos t, 4\sin t,z)$ but I am confused as to what the bounds for $z$ would be since I have only been given $z =5$. I know $t \in (0, 2 \pi)$.</p> <p>Any help would be greatly appreciated. Thanks</p>
Steve Kass
60,500
<p>The car can certainly go at least $450$ and no more than $600$ km on one set of tires, regardless of how the tires are swapped, so the function $r=f(S)$ that gives the range $r$ of a swapping strategy $S$ is bounded and has a maximum value over all swapping strategies. Call this optimal range $K$. ($f$ is continuous in the sense that a tiny modification of the swapping strategy will have a tiny effect on the range.)</p> <p>Altogether, the tires spend a total of $4K$ tire-km on the road. The total number of “front-km” equals the number of “rear-km,” so $2K$ of these km are front-km, and the average number of front-km a tire spends on the road is $K/2$. Some tire must therefore spend at least $K/2$ km on the front. If any tire spends more than $K/2$ km on the front, another tire spends less than $K/2$ on the front and is not fully worn at the end of the trip (because it had a less weary front-to-back ratio), so the swapping strategy can be improved if these two tires are swapped to equalize their front and back km.</p> <p>This argument implies that $K$ is attained when each tire accumulates exactly $K/2$ front-km (and consequently $K-K/2=K/2$ back-km). A tire wears down $1/450$ of its tread for each front km and $1/600$ of its tread for each back km. Therefore each tire wears down $\frac{K}2/450 + \frac{K}2/600$ of its tread in all. The tires wear down completely after $K$ km, so $\frac{K}2/450 + \frac{K}2/600=1$, whence $K=3600/7$.</p>
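The closing equation can be confirmed with exact rational arithmetic (a small sketch using Python's fractions module):

```python
from fractions import Fraction

# Each tire spends K/2 km on the front (wear rate 1/450 per km) and K/2 on the
# rear (wear rate 1/600 per km); complete wear means the two parts sum to 1.
K = Fraction(3600, 7)
assert (K / 2) / 450 + (K / 2) / 600 == 1

# K lies strictly between the two single-position ranges, as the argument requires.
assert 450 < K < 600
```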
148,374
<p>I have checked all Mathematica color schemes, and I think "Hue" is the most vibrant, beautiful one. However, it has one issue: the two ends of the spectrum are red (though, different reds). I would like a spectrum from, say, red to blue. Is it possible to manipulate Hue and remove the pink and the second red? </p> <p>Consider the following:</p> <pre><code>DensityPlot[Sin[x y], {x, 0.1, 1}, {y, 0.1, 1}, PlotLegends -&gt; Automatic, Frame -&gt; True, ColorFunction -&gt; Hue] </code></pre> <p>The output is <a href="https://i.stack.imgur.com/NKaue.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKaue.jpg" alt="enter image description here"></a></p> <p>As you see, the two extremes are red. </p>
kglr
125
<p>From <a href="http://reference.wolfram.com/mathematica/ref/Hue.html" rel="nofollow noreferrer">Documentation >> Hue >> Details</a></p> <p><img src="https://i.stack.imgur.com/Zy63W.png" alt="Mathematica graphics"></p> <p>So, we need to rescale the function values (that run from <code>-1</code> to <code>1</code>) to the unit interval:</p> <pre><code>DensityPlot[Sin[x y], {x, 0.1, 1}, {y, 0.1, 1}, PlotLegends -&gt; Automatic, Frame -&gt; True, ColorFunction -&gt; (Hue[Rescale[#, {-1, 1}, {0, 1}]] &amp;)] </code></pre> <p><img src="https://i.stack.imgur.com/f6JbP.png" alt="Mathematica graphics"></p> <p>Or, using the option ColorFunctionScaling -> False:</p> <pre><code>DensityPlot[Sin[x y], {x, 0.1, 1}, {y, 0.1, 1}, PlotLegends -&gt; Automatic, Frame -&gt; True, ColorFunction -&gt; Hue, ColorFunctionScaling -&gt; False] </code></pre> <p><img src="https://i.stack.imgur.com/oXNTF.png" alt="Mathematica graphics"></p>
4,127,149
<p>I understand that the addition and subtraction of complex number is the same as vector addition and subtraction. But what is the vector equivalent of multiplication and division of complex numbers?</p>
José Carlos Santos
446,262
<p>In general, there is none. In some cases, as in the case of complex numbers and of quaternions, such operations can be defined. And in the case of quaternions, you have two divisions, not just one (if <span class="math-container">$q$</span> and <span class="math-container">$r$</span> are quaternions and <span class="math-container">$r\ne0$</span>, then, in general, <span class="math-container">$qr^{-1}\ne r^{-1}q$</span> since the product is not commutative in this case).</p>
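For the complex case specifically, multiplication by $z_0=re^{i\theta}$ acts on vectors as a rotation by $\theta$ combined with a scaling by $r$; a short numerical illustration (my own supplement to the answer):

```python
import cmath, math

z0 = 1 + 1j                        # multiplier: r = sqrt(2), theta = pi/4
r, theta = abs(z0), cmath.phase(z0)

z = 3 - 2j                         # the "vector" (3, -2)
w = z0 * z

# Same result via the 2x2 rotation-scaling matrix r*[[cos t, -sin t], [sin t, cos t]]
x, y = z.real, z.imag
wx = r * (math.cos(theta) * x - math.sin(theta) * y)
wy = r * (math.sin(theta) * x + math.cos(theta) * y)
assert abs(w.real - wx) < 1e-12 and abs(w.imag - wy) < 1e-12
```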
3,752,402
<p>I want to find out the existence of the solutions in diophantine equations of the style:</p> <p><span class="math-container">$$-259y ^2+ 2400yx + 1817y + 2122x = $$</span> <span class="math-container">$$1057364602723981500371957207036553770637547302056514367123547565680640946707606178926389130616$$</span></p> <p>The point is that solving it with current methods (elliptic curves) takes a long time, so I want to find out whether or not it has solutions.</p> <p>What I have already tried:</p> <ol> <li><p>If the GCD of the coefficients does not divide the independent term, then it has no solutions. Even if I divided it, it may be that the equation has solutions or it may not have them</p> </li> <li><p>Shaping the <span class="math-container">$x$</span> and the <span class="math-container">$y$</span>: for example <span class="math-container">$x,y$</span> pairs and come to a contradiction (sometimes it works, sometimes it doesn't)</p> </li> </ol> <p>My questions are:</p> <p>-Do you know of any procedure that allows deciding whether or not it has solutions given any second-degree diophantine equation with two unknowns?</p> <p>-The equation above is of a hyperbolic type, is it possible to modify it to make it an elliptical type? If yes, how do I do it?</p>
dan_fulea
550,003
<p>The general form of the &quot;type&quot; of the equation is (4) in</p> <p><a href="https://mathworld.wolfram.com/PellEquation.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/PellEquation.html</a></p> <p>And there are algorithms to reduce the equation. The idea is to homogenize, thus getting an equation of the shape <span class="math-container">$$ q(x,y,z):= a_{11}x^2 +a_{22}y^2+a_{33}z^2+2a_{12}xy+2a_{23}yz+2a_{13}xz = 0\ . $$</span> See also <a href="https://encyclopediaofmath.org/wiki/Quadratic_form" rel="nofollow noreferrer">https://encyclopediaofmath.org/wiki/Quadratic_form</a>.</p> <p>Solutions <span class="math-container">$[x:y:z]$</span> with <span class="math-container">$z\ne 0$</span> of <span class="math-container">$q=0$</span> (over <span class="math-container">$\Bbb Q$</span> or equivalently over <span class="math-container">$\Bbb Z$</span>) correspond to solutions of the original equation over <span class="math-container">$\Bbb Q$</span>.</p> <p>The second question has a clear answer: the discriminant is an invariant of a ternary quadratic form and there is no linear transformation / no base change switching its sign. A &quot;naive definition of type&quot; is unclear in this homogeneous version, because after grouping squares in some linear combinations <span class="math-container">$X,Y,Z$</span> of <span class="math-container">$x,y,z$</span> we can write equivalently for <span class="math-container">$a,b,c&gt;0$</span> <span class="math-container">$$ \begin{aligned} aX^2 + bY^2 &amp;= cZ^2\ ,\\ aX^2 &amp;= cZ^2-bY^2\ , \end{aligned} $$</span> and dehomogenizing w.r.t. <span class="math-container">$Z$</span> on the one side, w.r.t. <span class="math-container">$X$</span> on the other side, we get different &quot;naive types&quot;.</p> <hr /> <p>Note that the given diophantine equation is not related to elliptic curves. (There the world is much more complicated. There is no local-to-global (Hasse) principle holding. 
An elliptic curve may have solutions in all place / localizations, without having solutions over <span class="math-container">$\Bbb Q$</span>. In this given quadratic case the Hasse principle is a theorem.)</p> <hr /> <p>To solve it, let us denote by <span class="math-container">$N$</span> the big number on the R.H.S. of the given equation, so we can write it in one line: <span class="math-container">$$ 259y^2 - 2400xy - 2122x - 1817y + N =0\ . $$</span> We multiply with <span class="math-container">$4\cdot 259$</span> and first group squares in all terms involving <span class="math-container">$y$</span>, getting successively: <span class="math-container">$$ \begin{aligned} 0 &amp;= 259y^2 - 2400xy - 2122x - 1817y + N \ ,\\ 0 &amp;= 4\cdot259^2y^2 - 2\cdot 259\cdot 4800xy - 4\cdot 2122\cdot 259x - 4\cdot259\cdot 1817y + 1036N \ ,\\ 0 &amp;= (518y - 2400x - 1817)^2 \\ &amp;\qquad -5760000x^2 - 10919992x + 1036N - 1817^2 \ ,\\ 0 &amp;= (518y - 2400x - 1817)^2 \\ &amp;\qquad -\left(2400^2x^2 + 2\cdot \underbrace{\frac {10919992}{2\cdot 2400 }}_{:=s} \cdot 2400x +s^2\right) \\ &amp;\qquad\qquad + s^2 + 1036N - 1817^2 \ ,\\ \end{aligned} $$</span> and because of <span class="math-container">$s=1364999/600$</span>, we multiply with the denominator <span class="math-container">$600^2$</span> of <span class="math-container">$s^2$</span>, thus getting: <span class="math-container">$$ \tag{$\dagger$} \\ 0= 600^2(518y - 2400x - 1817)^2 - (600\cdot 2400 x + 1817^2)^2 + \underbrace{ 1364999^2 +600^2( 1036N - 3301489)}_{:=M} \ . 
$$</span> The &quot;(conceptually) simple problem&quot; of solving a quadratic diophantine equation turns out to lead to the factorization of a &quot;big number&quot;, <span class="math-container">$$ \tiny M= 394354702231936140378725159936353094296979641774997598362398300096251847484068800492386090829229590001\ , $$</span> so that we can match factors in the equivalent equation <span class="math-container">$(\dagger\dagger)$</span>: <span class="math-container">$$ \tag{$\dagger\dagger$} \\ 259(1200y + 1061)(2880000x - 310800y + 2455199) =M\ . $$</span> And <span class="math-container">$M$</span> is indeed divisible by <span class="math-container">$259$</span>, so let <span class="math-container">$M'=M/259$</span>. <span class="math-container">$$ \tiny M' = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139\ . $$</span> It turns out that we are here in a lucky situation. Writing <span class="math-container">$M'=(\pm1)\cdot(\pm M')$</span> and trying to match in all four cases the two factors to the factorization of the L.H.S. above leads to solutions in <span class="math-container">$\Bbb Q$</span>, and one pairing even gives a solution over <span class="math-container">$\Bbb Z$</span>. So we do not need the factorization for the existence question. (Finding all solutions would require the prime factors. This would be practically a more complicated question. Yes, here we could use elliptic curves.)</p> <p>By inspection we see that <span class="math-container">$M'+1061$</span> has last digits <span class="math-container">$00$</span>, so this must be the chance we have. 
(All other pairings fail when taken modulo <span class="math-container">$100$</span>.)</p> <p>From here on, i will use <a href="https://www.sagemath.org" rel="nofollow noreferrer">sage</a> to compute and type the steps, and find one solution of the initial problem over <span class="math-container">$\Bbb Z$</span>.</p> <pre><code>sage: N = 1057364602723981500371957207036553770637547302056514367123547565680640946707606178926389130616 sage: var('x,y'); sage: def f(x,y): return -259*y^2 + 2400*y*x + 1817*y + 2122*x - N sage: R.&lt;X,Y&gt; = PolynomialRing(ZZ) sage: M = 1364999^2 + 600^2*( 1036*N - 3301489 ) sage: def g(x,y): return 600^2*(518*Y - 2400*X -1817)^2 - (600*2400*X + 1364999)^2 + M sage: 4*259*600^2*f(X,Y) + g(X,Y) 0 sage: factor(g(X,Y) - M) 7 * 37 * (1200*Y + 1061) * (-2880000*X + 310800*Y - 2455199) sage: MM = ZZ(M/259) sage: MM 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 sage: eq1 = 1200*y + 1061 == -MM sage: eq2 = -2880000*x + 310800*y - 2455199 == 1 sage: sol = solve([eq1, eq2], [x,y], solution_dict=True)[0] sage: sol {x: -136928716052755604298168458311233713297562375616318610542499409755643002598635000170967392649039, y: -1268837523268777800446348648443864524765056762467817240548257078816769136049127414711666958910006} sage: x0, y0 = sol[x], sol[y] sage: f(x0, y0) 0 </code></pre> <hr /> <p>The integer solution is explicitly: <span class="math-container">$$ \tiny \begin{aligned} x &amp;= -136928716052755604298168458311233713297562375616318610542499409755643002598635000170967392649039 \ ,\\ y &amp;= -1268837523268777800446348648443864524765056762467817240548257078816769136049127414711666958910006 \ . 
\end{aligned} $$</span></p> <hr /> <p>Later EDIT:</p> <p>Let us verify the above condition:</p> <pre><code>x0, y0 = sol[x], sol[y] print(f&quot;x0 = {x0}&quot;) print(f&quot;y0 = {y0}&quot;) print(f&quot;f(x0, y0) is {f(x0, y0)}&quot;) </code></pre> <p>Result:</p> <pre><code>x0 = -136928716052755604298168458311233713297562375616318610542499409755643002598635000170967392649039 y0 = -1268837523268777800446348648443864524765056762467817240548257078816769136049127414711666958910006 f(x0, y0) is 0 </code></pre> <p>So these values verify the given equation.</p> <p>This verification is here, since there is a comment below claiming that the result of inserting the above solution in the LHS of the given equation is not the RHS, but another one, of the shape <span class="math-container">$10573\dots8340$</span>. OK, let us check the last digit. Then for the above solution, working <strong>modulo ten</strong> (the last digits of $y_0$ and $x_0$ being $6$ and $9$), we have <span class="math-container">$$ -259y_0^2+ 2400y_0x_0 + 1817y_0 + 2122x_0 \equiv 1\cdot 6^2 + 0 + 7\cdot 6+2\cdot 9\equiv 6+2+8\equiv 6 \pmod{10}\ .$$</span> So the last digit is <span class="math-container">$6$</span>, not zero. (Please check your own computation; it has to be done using a proper high precision calculator like pari/gp.)</p>
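The whole matching argument can be replayed with plain Python integers, no sage required (a sketch that follows the answer's equations exactly: it recomputes M and M' and solves the two linear matching equations):

```python
N = 1057364602723981500371957207036553770637547302056514367123547565680640946707606178926389130616

# M as defined in (dagger) after completing the square.
M = 1364999**2 + 600**2 * (1036 * N - 3301489)
assert M % 259 == 0
Mp = M // 259

# Matching 259*(1200y + 1061)*(2880000x - 310800y + 2455199) = M via the pairing
# 1200y + 1061 = -M'  and  -2880000x + 310800y - 2455199 = 1:
assert (-Mp - 1061) % 1200 == 0
y = (-Mp - 1061) // 1200
assert (310800 * y - 2455199 - 1) % 2880000 == 0
x = (310800 * y - 2455199 - 1) // 2880000

# The resulting pair (x, y) solves the original diophantine equation.
assert -259 * y**2 + 2400 * y * x + 1817 * y + 2122 * x == N
```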
853,659
<p>Evaluate the integral:</p> <p>$$\int \frac{x^6}{x^4-1} \, \mathrm{d}x$$</p> <p>After a lot of help I have reached this point:</p> <p>$x^2 = Ax^3 - Ax + Bx^2 - B + Cx^3 + Cx^2 + Cx + C + Dx^3 - Dx^2 + Dx - D$</p> <p>But now I don't really know how to solve for $A, B, C$, and $D$. Please help!</p>
Tunk-Fey
123,277
<p>Rewrite the integrand as \begin{align} \frac{x^6}{x^4-1}&amp;\stackrel{\color{red}{[1]}}=\color{darkgreen}{x^2}+\color{blue}{\frac{x^2}{x^4-1}}\\ &amp;=\color{darkgreen}{x^2}+\color{blue}{\frac{x^2}{(x^2-1)(x^2+1)}}\\ &amp;\stackrel{\color{red}{[2]}}=\color{darkgreen}{x^2}+\color{blue}{\frac{1}{2}\left(\color{black}{\frac{1}{x^2+1}}+\color{red}{\frac{1}{x^2-1}}\right)}\\ &amp;=\color{darkgreen}{x^2}+\color{blue}{\frac{1}{2}\left(\color{black}{\frac{1}{x^2+1}}+\color{red}{\frac{1}{(x-1)(x+1)}}\right)}\\ &amp;\stackrel{\color{red}{[3]}}=\color{darkgreen}{x^2}+\color{blue}{\frac{1}{2}\left(\color{black}{\frac{1}{x^2+1}}+\color{red}{\frac12\left[\frac1{x-1}-\frac1{x+1}\right]}\right)}\\ &amp;=x^2+\color{red}{\underbrace{\color{black}{\frac{1}{2(x^2+1)}}}_{\color{blue}{\large \color{black}{\text{set}}\ x=\tan\theta}}}+\color{red}{\underbrace{\color{black}{\frac1{4(x-1)}}}_{\color{blue}{\large \color{black}{\text{set}}\ u=x-1}}}-\color{red}{\underbrace{\color{black}{\frac1{4(x+1)}}}_{\color{blue}{\large \color{black}{\text{set}}\ v=x+1}}} \end{align}</p> <hr> <p><strong>Notes :</strong></p> <p>$\color{red}{[1]}\;\;\;$Polynomial long division</p> <p>$\color{red}{[2]}\text{ and }\color{red}{[3]}\;\;\;$Partial fractions decomposition</p>
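Before integrating term by term, the final decomposition can be spot-checked numerically (sample points chosen to avoid the poles at $x=\pm1$):

```python
# Verify x^6/(x^4 - 1) == x^2 + 1/(2(x^2+1)) + 1/(4(x-1)) - 1/(4(x+1))
# at a few sample points away from x = +/-1.
for x in (0.5, 2.0, -3.0, 7.0):
    lhs = x**6 / (x**4 - 1)
    rhs = x**2 + 1 / (2 * (x**2 + 1)) + 1 / (4 * (x - 1)) - 1 / (4 * (x + 1))
    assert abs(lhs - rhs) < 1e-9
```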
259,083
<p>There was a question asked: <a href="https://math.stackexchange.com/q/136204/8348">An open subset $U\subseteq R^n$ is the countable union of increasing compact sets.</a> There Davide gave an <a href="https://math.stackexchange.com/a/136209/">answer</a>. Can anyone tell me how the equality holds, and the motivation behind this construction?</p>
user642796
8,348
<p>Let's first fix a $k$ and consider Davide's set (which I will slightly rewrite) $$X_k := U \cap A_k \cap B_k$$ where $$\begin{gather}A_k := \{x : \lVert x\rVert\leq k\} \\ B_k := \{x : d(x,U^c)\geq k^{-1}\} = \{ x : B ( x; k^{-1} ) \subseteq U\}.\end{gather}$$ One thing to note is that as $B_k \subseteq U$, we actually have that $$X_k = A_k \cap B_k.$$ It is quite easy to show that $A_k$ and $B_k$ are individually closed, and that $A_k$ is bounded, so $X_k$ must be closed and bounded, <em>i.e.</em>, compact.</p> <blockquote> <p>To see that $B_k$ is closed, note that if $x \notin B_k$, then we may take $y \in B ( x ; k^{-1} ) \cap U^c$, and let $\delta = \frac{k^{-1} - d(x,y)}{2}$. Given any $z \in B ( x; \delta )$, we have that $$d ( z , y ) \leq d ( z,x) + d (x,y) &lt; \delta + d(x,y) = \frac{k^{-1}+d(x,y)}{2} &lt; k^{-1}$$ and so $B ( z;k^{-1}) \not\subseteq U$. Therefore $B ( x;\delta ) \subseteq B_k^c$.</p> </blockquote> <p>It is also easy to show that $X_k \subseteq X_{k+1}$, and so we have an increasing sequence of compact subsets of $U$. It is also clear that $\bigcup_k X_k \subseteq U$, so we need only show the reverse inclusion.</p> <p>If $x \in U$, then since $U$ is open there is a $k_0 \in \mathbb{N}$ such that $B ( x ; k_0^{-1} ) \subseteq U$. Also, there is a $k_1 \in \mathbb{N}$ such that $\| x \| \leq k_1$. Letting $k = \max \{ k_0 , k_1 \}$ it follows that $x \in A_k \cap B_k = X_k$.</p> <p>The basic idea of the construction is, I think, as follows:</p> <ul> <li>The sets $B_k$ consist of those points of $U$ which are "far away" from the boundary of $U$. As $U$ is open, every point in $U$ is some positive distance from the boundary of $U$, and so there must be a $k$ such that $d ( x , U^c ) \geq k^{-1}$. We actually have that $U = \bigcup_k B_k$. 
However, if $U$ is itself an unbounded set, it could be that certain of the $B_k$ are unbounded, and so it does not suffice to only consider these sets.</li> <li>The sets $A_k$ are there to ensure that the given set is bounded. As $\mathbb{R}^n = \bigcup_k A_k$, we also have that $U \subseteq \bigcup_k A_k$.</li> </ul>
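A concrete one-dimensional illustration of this construction (a sketch: the open set $U = (0,1) \cup (2,\infty) \subset \mathbb{R}$ and the sample points are chosen just for the demonstration):

```python
# X_k = {x : |x| <= k and dist(x, U^c) >= 1/k} should be increasing in k
# and should eventually absorb any given point of U.
def dist_to_complement(x):
    # U = (0,1) ∪ (2,∞), so U^c = (-inf, 0] ∪ [1, 2]
    if not (0 < x < 1 or x > 2):
        return 0.0
    if 0 < x < 1:
        return min(x, 1 - x)
    return x - 2  # x > 2

def in_X(x, k):
    return abs(x) <= k and dist_to_complement(x) >= 1.0 / k

samples = [0.001, 0.4, 0.999, 2.05, 3.0, 50.0]
for x in samples:
    # every point of U eventually lands in some X_k ...
    assert any(in_X(x, k) for k in range(1, 2000))
    # ... and once it is in, it stays in (the X_k are increasing)
    ks = [k for k in range(1, 2000) if in_X(x, k)]
    assert ks == list(range(ks[0], 2000))
print("exhaustion verified on sample points")
```

Points very close to the boundary (like 0.001) or very far out (like 50) need a large $k$, but each is eventually captured.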
168,819
<p>I was looking for a free PDF from which I can review MV calculus.</p> <p>Specifically:</p> <ol> <li>MV Limits, Continuity, Differentiation.</li> <li>Differentiation of vector and scalar fields</li> <li>Surface/Multiple Integrals</li> </ol> <p>A succinct book would be great, (coherent) course notes and presentations would do as well.</p> <p>I ran google searches with <code>filetype:pdf</code> but I couldn't find one which fits all my requirements.</p>
Alex Nelson
31,693
<p>Michael Corral's <a href="http://www.mecmath.net/" rel="nofollow"><em>Vector Calculus</em></a> is a good free reference too.</p>
157,876
<p>Can anyone tell me how to find all normal subgroups of the symmetric group $S_4$?</p> <p>In particular are $H=\{e,(1 2)(3 4)\}$ and $K=\{e,(1 2)(3 4), (1 3)(2 4),(1 4)(2 3)\}$ normal subgroups?</p>
Douglas S. Stones
139
<p>As suggested by Babak Sorouh, the answer can be found easily using <a href="http://www.gap-system.org/" rel="nofollow">GAP</a> using the <a href="http://www.gap-system.org/Packages/sonata.html" rel="nofollow">SONATA</a> library. Here's the code:</p> <pre><code>G:=SymmetricGroup(4); S:=Filtered(Subgroups(G),H-&gt;IsNormal(G,H)); for H in S do Print(StructureDescription(H),"\n"); od; </code></pre> <p>So as to not spoil Arturo Magidin's answer, here's the output if I replace <code>G:=SymmetricGroup(4);</code> with <code>G:=DihedralGroup(32);</code> (the dihedral group of order $32$)</p> <pre><code>1 C2 C4 C8 D16 D16 C16 D32 </code></pre>
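For readers without GAP, the particular claims about $H$ and $K$ can be cross-checked by brute force in plain Python, with permutations of $\{0,1,2,3\}$ standing in for elements of $S_4$:

```python
# Check that K = {e, (12)(34), (13)(24), (14)(23)} is normal in S4
# while H = {e, (12)(34)} is not.
from itertools import permutations

def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
e = (0, 1, 2, 3)
H = {e, (1, 0, 3, 2)}                              # {e, (12)(34)}
K = {e, (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}  # Klein four-group

def is_normal(subgroup):
    return all(
        {compose(compose(g, h), inverse(g)) for h in subgroup} == subgroup
        for g in S4
    )

assert not is_normal(H)   # conjugating (12)(34) by (23) gives (13)(24), not in H
assert is_normal(K)
print("H normal:", is_normal(H), "| K normal:", is_normal(K))
```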
23,911
<p>I am teaching a course on Riemann Surfaces next term, and would <strong>like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory of non-singular varieties</strong> (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who have taken 1 year of topology and 1 semester of complex analysis.</p> <p>Here are some examples that I thought of:</p> <p><strong>1.</strong> Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.</p> <p><strong>2.</strong> Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global function $\tilde{g}$ that agrees with $f$ on $V$ and is identically zero outside of $U$.</p> <p>By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.</p> <p><strong>3.</strong> If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian. </p> <p><em><strong>What are some more examples?</strong></em></p> <p>Answers illustrating the difference between real manifolds and complex manifolds are also welcome.</p>
Charles Staats
5,094
<p>Any two compact surfaces (without boundary) of the same genus are diffeomorphic. However, if S is a surface of genus g > 0, there are uncountably many non-isomorphic complex (or, equivalently, algebraic) structures on S.</p>
23,911
<p>I am teaching a course on Riemann Surfaces next term, and would <strong>like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory of non-singular varieties</strong> (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who have taken 1 year of topology and 1 semester of complex analysis.</p> <p>Here are some examples that I thought of:</p> <p><strong>1.</strong> Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.</p> <p><strong>2.</strong> Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global function $\tilde{g}$ that agrees with $f$ on $V$ and is identically zero outside of $U$.</p> <p>By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.</p> <p><strong>3.</strong> If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian. </p> <p><em><strong>What are some more examples?</strong></em></p> <p>Answers illustrating the difference between real manifolds and complex manifolds are also welcome.</p>
Kevin H. Lin
83
<p>A proper variety doesn't have (non-constant) global sections. A real manifold, compact or not, has lots of global sections.</p> <p>There are lots of maps between real manifolds. Maps between varieties are much more restricted (e.g. by Riemann-Hurwitz in the case of curves). </p>
683,513
<p>There is much discussion both in the education community and the mathematics community concerning the challenge of (epsilon, delta) type definitions in real analysis and the student reception of it. My impression has been that the mathematical community often holds an upbeat opinion on the success of student reception of this, whereas the education community often stresses difficulties and their "baffling" and "inhibitive" effect (see below). A typical educational perspective on this was recently expressed by Paul Dawkins in the following terms: </p> <p><em>2.3. Student difficulties with real analysis definitions. The concepts of limit and continuity have posed well-documented difficulties for students both at the calculus and analysis level of instructions (e.g. Cornu, 1991; Cottrill et al., 1996; Ferrini-Mundy &amp; Graham, 1994; Tall &amp; Vinner, 1981; Williams, 1991). Researchers identified difficulties stemming from a number of issues: the language of limits (Cornu, 1991; Williams, 1991), multiple quantification in the formal definition (Dubinsky, Elderman, &amp; Gong, 1988; Dubinsky &amp; Yiparaki, 2000; Swinyard &amp; Lockwood, 2007), implicit dependencies among quantities in the definition (Roh &amp; Lee, 2011a, 2011b), and persistent notions pertaining to the existence of infinitesimal quantities (Ely, 2010). Limits and continuity are often couched as formalizations of approaching and connectedness respectively. However, the standard, formal definitions display much more subtlety and complexity. That complexity often baffles students who cannot perceive the necessity for so many moving parts. Thus learning the concepts and formal definitions in real analysis are fraught both with need to acquire proficiency with conceptual tools such as quantification and to help students perceive conceptual necessity for these tools. 
This means students often cannot coordinate their concept image with the concept definition, inhibiting their acculturation to advanced mathematical practice, which emphasizes concept definitions.</em> </p> <p>See <a href="http://dx.doi.org/10.1016/j.jmathb.2013.10.002" rel="nofollow noreferrer">http://dx.doi.org/10.1016/j.jmathb.2013.10.002</a> for the entire article (note that the online article provides links to the papers cited above).</p> <p>To summarize, in the field of education, researchers decidedly have <em>not</em> come to the conclusion that epsilon, delta definitions are either "simple", "clear", or "common sense". Meanwhile, mathematicians often express contrary sentiments. Two examples are given below. </p> <p><em>...one cannot teach the concept of limit without using the epsilon-delta definition. Teaching such ideas intuitively does not make it easier for the student it makes it harder to understand. Bertrand Russell has called the rigorous definition of limit and convergence the greatest achievement of the human intellect in 2000 years! The Greeks were puzzled by paradoxes involving motion; now they all become clear, because we have complete understanding of limits and convergence. Without the proper definition, things are difficult. With the definition, they are simple and clear.</em> (see Kleinfeld, Margaret; Calculus: Reformed or Deformed? Amer. Math. Monthly 103 (1996), no. 3, 230-232.) </p> <p><em>I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.)</em> (see Bishop, Errett; Book Review: Elementary calculus. Bull. Amer. Math. Soc. 83 (1977), no. 
2, 205--208.)</p> <p>When one compares the upbeat assessment common in the mathematics community and the somber assessments common in the education community, sometimes one wonders whether they are talking about the same thing. How does one bridge the gap between the two assessments? Are they perhaps dealing with distinct student populations? Are there perhaps education studies providing more upbeat assessments than Dawkins' article would suggest? </p> <p>Note 1. See also <a href="https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions">https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions</a></p> <p>Note 2. Two approaches have been proposed to account for this difference of perception between the education community and the math community: (a) sample bias: mathematicians tend to base their appraisal of the effectiveness of these definitions in terms of the most active students in their classes, which are often the best students; (b) student/professor gap: mathematicians base their appraisal on their own scientific appreciation of these definitions as the "right" ones, arrived at after a considerable investment of time and removed from the original experience of actually learning those definitions. Both of these sound plausible, but it would be instructive to have field research in support of these approaches.</p> <p>We recently published <a href="http://dx.doi.org/10.5642/jhummath.201701.07" rel="nofollow noreferrer">an article</a> reporting the result of student polling concerning the comparative educational merits of epsilon-delta definitions and infinitesimal definitions of key concepts like continuity and convergence, with students favoring the infinitesimal definitions by large margins.</p>
Marcel Besixdouze
29,892
<p>The opinions are not in conflict. Something can be simple, obvious, intuitive, etc. and a person can still fail to grok that it is simple, obvious, intuitive, etc. The notion of <em>building intuition</em> is an oxymoron according to a common understanding of <em>intuition</em>, but is in fact central to the understanding of <em>intuition</em> relevant to mathematical training. </p> <p>A joke every student of mathematics eventually hears: </p> <blockquote> <p>[...] our professor then formulated a theorem, wrote its statement on the board, and declared to us that "the proof is obvious". Another student raised a hand in objection. "I'm sorry but I don't see the proof immediately, could you elaborate?" Our professor stopped for a moment, and mulled over the statement. He paced back and forth in front of the board, stroking his beard in deep puzzlement, and then wandered out of the classroom. Us students sat dumbfounded for half the remaining class period, a good quarter hour in all, until our professor returned. With a large smile beaming on his face, he announced to the class "indeed, it is obvious!", and continued the lecture without further comment. </p> </blockquote> <p>Obvious($X$) $\not\rightarrow$ Obvious(Obvious($X$)). The mathematicians are declaring Obvious($X$), while the educators are declaring $\neg$Obvious(Obvious($X$)). There is no conflict between these propositions. </p>
3,543,150
<p>My question : two indefinite integrals of a function being given , how to express one indefinite integral in terms of the other? </p> <p><a href="https://i.stack.imgur.com/VkMzJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VkMzJ.png" alt="enter image description here"></a></p>
J. W. Tanner
615,567
<p>Fractional powers of negative numbers aren't uniquely defined. </p> <p>There are three cube roots of <span class="math-container">$-1$</span>: <span class="math-container">$-1$</span>, <span class="math-container">$\frac12+\frac{\sqrt{3}}2i$</span>, and <span class="math-container">$\frac12-\frac{\sqrt3}2i$</span>. </p> <p>The answer given by Google for <span class="math-container">$(-1)^{4/3}$</span> was the fourth power of the middle one of those three.</p>
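A numerical illustration with Python's complex arithmetic, which (like Google's calculator) uses the principal branch:

```python
# The three cube roots of -1, and the principal value of (-1)**(4/3).
import cmath, math

roots = [cmath.exp(1j * (math.pi + 2 * math.pi * k) / 3) for k in range(3)]
for r in roots:
    assert abs(r**3 - (-1)) < 1e-12

# Python picks the principal cube root exp(i*pi/3) = 1/2 + (sqrt(3)/2) i
principal = (-1) ** (1 / 3)
assert abs(principal - (0.5 + math.sqrt(3) / 2 * 1j)) < 1e-12

# so (-1)**(4/3) is the fourth power of that root: -1/2 - (sqrt(3)/2) i
assert abs((-1) ** (4 / 3) - principal**4) < 1e-12
print((-1) ** (4 / 3))
```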
3,246,244
<p>Consider the action of <span class="math-container">$G$</span> on <span class="math-container">$X$</span>.</p> <p>Let it be a property of <span class="math-container">$G,X$</span> that <span class="math-container">$\forall x,y,\exists g:g\cdot x=g\cdot y$</span>. This is not quite a transitive action - it describes for example a sequence of inclusions. <strong>What is the name for this type of action?</strong> I can't pair it with an appropriate definition from <a href="https://en.wikipedia.org/wiki/Group_action_(mathematics)#Types_of_actions" rel="nofollow noreferrer">here</a>.</p> <p>My attempt? There seem to be several things going on here, none of which I can associate with documented group theory at the moment.</p> <p><span class="math-container">$G$</span> seems to define a "contracting epimorphism"</p> <p><span class="math-container">$G$</span> seems to define the identity function on the trivial group having the powerset of <span class="math-container">$X$</span> as its element.</p>
Mark Kamsma
661,457
<p>This is only possible if <span class="math-container">$X$</span> is a singleton (or, vacuously, if <span class="math-container">$X$</span> is empty). To see this: let <span class="math-container">$x, y \in X$</span> and suppose <span class="math-container">$g$</span> is such that <span class="math-container">$g \cdot x = g \cdot y$</span>. Then <span class="math-container">$x = g^{-1}g \cdot x = g^{-1}g \cdot y = y$</span>. So all elements in <span class="math-container">$X$</span> are equal, and thus <span class="math-container">$X$</span> is a singleton. It is not hard to see that if <span class="math-container">$X$</span> is a singleton, this property holds.</p> <p>Since this property can only hold in uninteresting cases, I doubt it has a name. </p>
518,627
<p>Prove that: $ \sum\limits_{n=1}^{p} \left\lfloor \frac{n(n+1)}{p} \right\rfloor= \frac{2p^2+3p+7}{6} $ <br> where $p$ is a prime number such that $p \equiv 7 \mod{8}$. <br> <br>I tried to separate the sum into parts but it does not seem to go anywhere. I also tried to make a substitution for $p$, but I don't think it is entirely correct to call $p=7+8t$. Any ideas?</p>
mercio
17,445
<p>$$\sum_{n=1}^p \frac{n(n+1)}p = \frac1 p \frac {p(p+1)(p+2)}3 = \frac{p^2+3p+2}3$$, and $$\frac{p^2+3p+2}3 - \frac{2p^2+3p+7}6 = \frac{p-1}2$$</p> <p>So you are asking to prove that $$\sum_{n=1}^p \frac{n(n+1)}p - \lfloor\frac{n(n+1)}p\rfloor = \frac{p-1}2$$.</p> <p>The term being summed is $\dfrac 1 p$ times the residue of $n(n+1)$ modulo $p$. So this becomes showing $\displaystyle \sum_{n \in \Bbb F_p} (n(n+1) \pmod p) = p(p-1)/2$</p> <p>Let $f(x)$ be the number of solutions to $n(n+1)=x$ in $\Bbb F_p$. $n(n+1) = x \iff n^2 + n = x \iff (2n+1)^2 = 4x+1$, hence $\displaystyle f(x) = 1 + \binom{4x+1}p$,</p> <p>and the sum becomes $\displaystyle \sum_{x=0}^{p-1} x f(x) = \sum x + \sum x \binom{4x+1}p$.<br> The first sum is $p(p-1)/2$, so we are left with showing that the second sum is zero.</p> <p>Let us do a last rearrangement by setting $y = 1+4x$ and writing the sum as $\displaystyle \sum x(y) \binom y p $, where $x(y) = \frac 14 (y-1 + k(y)p)$ and $k(y)$ is the remainder of $y-1$ mod $4$.</p> <p>Let $\displaystyle S_i = \sum_{y \equiv i \pmod 4} \binom y p$.<br> Since $-1$ is not a square, $S_0 = - S_3$ and $S_1 = - S_2$.<br> Since $2$ is a square, $S_1 + S_3 = S_2 + S_3$ (and $S_0 + S_2 = S_0 + S_1$) hence $S_1 = S_2 = 0$.</p> <p>We can rewrite the sum into $$\frac 1 4 \left(\sum y\binom y p + (3p-1)S_0 - S_1 + (p-1)S_2 + (2p-1)S_3\right) = \frac 1 4 \left(\sum y\binom y p + p(S_0 + S_2)\right)$$</p> <p>Since $(-1)$ is not a square, $$\sum y \binom y p = \sum_0^{(p-1)/2} (2y - p) \binom y p$$ By <a href="https://math.stackexchange.com/questions/114293/evaluate-a-character-sum-sum-limits-r-1p-1-2r-left-fracrp-r">this question</a>, this is $$-p \sum_0^{(p-1)/2} \binom y p = -p \sum_0^{(p-1)/2} \binom {2y} p = -p(S_0 + S_2)$$</p>
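Before (or after) working through the argument, the identity itself is easy to confirm by brute force for the first few primes $p \equiv 7 \pmod 8$:

```python
# Direct check of sum_{n=1}^{p} floor(n(n+1)/p) = (2p^2 + 3p + 7)/6.
def floor_sum(p):
    return sum(n * (n + 1) // p for n in range(1, p + 1))

for p in [7, 23, 31, 47, 71, 79, 103]:
    assert p % 8 == 7                       # primes congruent to 7 mod 8
    assert floor_sum(p) == (2 * p * p + 3 * p + 7) // 6
print("identity verified for p = 7, 23, 31, 47, 71, 79, 103")
```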
1,693,045
<p>I know if $x=e^{\frac{2\pi i}{17}}$ then $x^{17}=1$ and $\Re(x)=\cos\left(\frac{2\pi}{17}\right)$.</p> <p>But how do I form a polynomial which has root $\cos\left(\frac{2\pi}{17}\right)$.</p> <p>I know you can consider de Moivre's theorem and expand the LHS using binomial theorem but that will take a long time.</p>
Darío G
27,454
<p>The number $x=\cos\left(\frac{2\pi}{17}\right)$ is a root of the polynomial equation $$\sum_{k=0}^{8} \binom{17}{2k+1}x^{2k+1}\cdot i^{16-2k}\cdot (1-x^2)^{8-k}=1,$$ where $i^{16-2k}=(-1)^{8-k}=(-1)^k$, so every term is real.</p>
1,693,045
<p>I know if $x=e^{\frac{2\pi i}{17}}$ then $x^{17}=1$ and $\Re(x)=\cos\left(\frac{2\pi}{17}\right)$.</p> <p>But how do I form a polynomial which has root $\cos\left(\frac{2\pi}{17}\right)$.</p> <p>I know you can consider de Moivre's theorem and expand the LHS using binomial theorem but that will take a long time.</p>
lhf
589
<p>Let $c=\cos\left(\frac{2\pi}{17}\right)$ and $s=\sin\left(\frac{2\pi}{17}\right)$.</p> <p>Then</p> <p>$ 1=\Re(1)=\Re ((c+s\, i)^{17})=c^{17}-136 c^{15} s^2+2380 c^{13} s^4-12376 c^{11} s^6+24310 c^9 s^8-19448 c^7 s^{10}+6188 c^5 s^{12}-680 c^3 s^{14}+17 c s^{16} $</p> <p>Note that $s$ appears only with even powers. Now replace $s^2=1-c^2$ to get a polynomial in $c$.</p>
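A numeric check of the resulting relation, substituting $s^2 = 1-c^2$ into the expansion above (plain Python):

```python
# Verify that c = cos(2*pi/17) makes the degree-17 expression equal 1.
import math

c = math.cos(2 * math.pi / 17)
s2 = 1 - c * c  # s^2 with s = sin(2*pi/17)

# signed coefficients binom(17, 2m) * (-1)^m, m = 0..8, matching the expansion
coeffs = [1, -136, 2380, -12376, 24310, -19448, 6188, -680, 17]
value = sum(coeff * c ** (17 - 2 * m) * s2**m for m, coeff in enumerate(coeffs))

assert abs(value - 1) < 1e-9
print(value)
```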
3,053,975
<p>How can <span class="math-container">$3^6-3^3 +1$</span> be factored? The factors are 37 and 19, but how can one find them by factoring? I tried <span class="math-container">$3^3(3^3-1)+1$</span>, but I cannot find a way to put the 1 inside.</p>
Bill Dubuque
242
<p>It's a special case of: <em>completing</em> the square leads to a <em>difference</em> of squares, i.e.</p> <p><span class="math-container">$$\begin{eqnarray}\overbrace{3^{\large 6}+1}^{\rm incomplete}-\,3^{\large 3}&amp;=\ &amp;\!\!\!\! \overbrace{(3^{\large 3}+1)^{\large 2}}^{\rm\!\!\! complete\ the\ square\!\!\!}\!\!\!\!-\color{#c00}{3\,3^{\large 3}}\ \ \text{so, factoring this} \it\text{ difference of squares}\\[.3em] &amp;\!\!\!=&amp;\! (\underbrace{3^{\large 3}+1\,\ -\, \color{#c00}{3^{\large 2}}}_{\Large 19})\ (\underbrace{3^{\large 3}+1\ +\,\color{#c00}{3^2}}_{\Large 37})\\ \end{eqnarray}$$</span></p> <p>Generally <span class="math-container">$\ a^{\large 6} + b\, a^{\large 3} + c^{\large 2}\,$</span> factors if <span class="math-container">$\ \color{#c00}{ b= 2c\!-\!ad^{\large 2}}$</span> for some <span class="math-container">$d\,$</span> (above is <span class="math-container">$\,a,b,c,d = 3,-1,1,1)$</span></p> <p><span class="math-container">$$\qquad\ \begin{eqnarray}\overbrace{a^{\large 6}+c^{\large 2}}^{\rm incomplete}\!+b\,a^{\large 3}&amp;=\ &amp;\!\!\!\! \overbrace{(a^{\large 3}+c)^{\large 2}}^{\rm\!\!\! complete\ the\ square\!\!\!}\!\!\!\!+\color{#c00}{\overbrace{(b-2c)}^{\!\!\large -d^{\Large 2}a}\,a^{\large 3}}\ \ \text{so, factoring this} \it\text{ difference of squares}\\[.3em] &amp;\!\!\!=&amp;\! ({a^{\large 3}+c\,\ -\, \color{#c00}{da^{\large 2}}})\ ({a^{\large 3}+c\ +\,\color{#c00}{da^{\large 2}}})\\ \end{eqnarray}$$</span></p> <hr> <p><strong>Remark</strong> <span class="math-container">$ $</span> Below is another well-known example</p> <p><span class="math-container">$$\begin{eqnarray} n^4+4k^4 &amp;\,=\,&amp; \overbrace{(n^2\!+2k^2)^2}^{\rm\!\!\! complete\ the\ square\!\!\!}\!\!\!-\!(\color{#c00}{2nk})^2\ \ \text{so, factoring this} \it\text{ difference of squares}\\ &amp;\,=\,&amp; (n^2\!+2k^2\ -\,\ \color{#c00}{2nk})\,(n^2\!+2k^2+\,\color{#c00}{2nk})\\ &amp;\,=\,&amp;(\underbrace{(n-k)^2}_{\rm\!\!\!\!\!\!\!\!\!\!\! 
complete\ the\ square\!\!\!\!\!\!\!\!}\ +\ \,k^2)\ \ \underbrace{((n+k)^2}_{\rm\!\!\!\!\!\!\!\!\!\!\! complete\ the\ square\!\!\!\!\!\!\!\!\!\!\!\!\!\!} +\,k^2)\\ \end{eqnarray}$$</span></p>
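Both the special case and the general identity are easy to sanity-check in a few lines of Python (the ranges below are arbitrary):

```python
# Check 3^6 - 3^3 + 1 = 19 * 37 and, more generally,
# a^6 + b*a^3 + c^2 = (a^3 + c - d*a^2)(a^3 + c + d*a^2) when b = 2c - a*d^2.
assert 3**6 - 3**3 + 1 == 19 * 37 == (3**3 + 1 - 3**2) * (3**3 + 1 + 3**2)

for a in range(-5, 6):
    for c in range(-5, 6):
        for d in range(-5, 6):
            b = 2 * c - a * d * d
            lhs = a**6 + b * a**3 + c * c
            rhs = (a**3 + c - d * a * a) * (a**3 + c + d * a * a)
            assert lhs == rhs
print("identity verified")
```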
3,200,354
<p>How can I find the maximal value in the range <span class="math-container">$[-1,1]$</span> for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> of the following expression:</p> <p><span class="math-container">$$\sin(\pi x)(y-3)/2.$$</span></p> <p>I tried taking the derivative with respect to both <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, but it seemed there could be an easier way.</p>
José Carlos Santos
446,262
<p>Yes, they are equivalent. Asserting that <span class="math-container">$A\subset B$</span> is equivalent to asserting that <span class="math-container">$(\forall a\in A):a\in B$</span>. And asserting that <span class="math-container">$\Phi(B)\subset B$</span>, in particular, is equivalent to <span class="math-container">$(\forall b\in B):\Phi(b)\in B$</span>.</p>
2,196,037
<p>Let $E$ be a universal set and $\{A_{\alpha}\}_{\alpha \in J},$ for some index set $J,$ be a family of subsets of $E.$</p> <p>Prove that: (a)$E-\bigcup_{\alpha \in J}A_{\alpha} = \bigcap_{\alpha \in J}($R$-A_{\alpha}).$</p> <p>I do not know what $R$ is, or whether it is a mistake in the question. Could anyone help me?</p> <p>(b)$E-\bigcap_{\alpha \in J}A_{\alpha} = \bigcup_{\alpha \in J}($E$-A_{\alpha}).$</p> <p>Shall I prove it by induction? But what about the index set: is it countably infinite, finite, or uncountable, and how will the proof differ?</p>
Graham Kemp
135,106
<p>Whatever $R$ means should have been identified earlier in your reference book; otherwise it is a mystery.</p> <p>As to the second, just use the definitions that $$\bigcup_{\alpha\in J} X_\alpha := \{x~:~ \exists \alpha \in J~ (x\in X_\alpha)\} = \{x~:~\bigvee_{\alpha\in J}(x\in X_\alpha)\}\\ \bigcap_{\alpha\in J} X_\alpha := \{x~:~ \forall \alpha\in J~(x\in X_\alpha)\}=\{x~:~ \bigwedge_{\alpha\in J}(x\in X_\alpha)\}\\ B- A = \{x ~:~ x \in B~\wedge~ x\notin A\}$$</p>
2,196,037
<p>Let $E$ be a universal set and $\{A_{\alpha}\}_{\alpha \in J},$ for some index set $J,$ be a family of subsets of $E.$</p> <p>Prove that: (a)$E-\bigcup_{\alpha \in J}A_{\alpha} = \bigcap_{\alpha \in J}($R$-A_{\alpha}).$</p> <p>I do not know what $R$ is, or whether it is a mistake in the question. Could anyone help me?</p> <p>(b)$E-\bigcap_{\alpha \in J}A_{\alpha} = \bigcup_{\alpha \in J}($E$-A_{\alpha}).$</p> <p>Shall I prove it by induction? But what about the index set: is it countably infinite, finite, or uncountable, and how will the proof differ?</p>
Nosrati
108,128
<p>\begin{eqnarray*} (\bigcap_{i\in\Lambda}A_i)^c &amp;=&amp; \{x|x\notin \bigcap_{i\in\Lambda}A_i\} \\ &amp;=&amp; \{x|\exists i\in\Lambda,~~x\notin A_i\}\\ &amp;=&amp; \{x|\exists i\in\Lambda,~~x\in A_i^c\} \\ &amp;=&amp; \bigcup_{i\in\Lambda}A_i^c \end{eqnarray*}</p>
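The same De Morgan law is easy to illustrate on finite families with plain Python sets (a toy check on random families, of course not a proof for arbitrary index sets):

```python
# Check E - intersection(A_i) == union(E - A_i) for random finite families.
import random

random.seed(1)
E = set(range(10))

for _ in range(100):
    family = [set(random.sample(sorted(E), random.randint(1, 9)))
              for _ in range(random.randint(1, 5))]
    intersection = set.intersection(*family)
    union_of_complements = set.union(*[E - A for A in family])
    assert E - intersection == union_of_complements
print("De Morgan verified on random finite families")
```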
620,045
<p>What are the mean and variance of a squared Gaussian, $Y=X^2$, where $X\sim\mathcal{N}(0,\sigma^2)$?</p> <p>It is interesting to note that the Gaussian R.V. here is zero-mean, so the non-central Chi-square distribution doesn't apply.</p> <p>Thanks.</p>
iballa
116,491
<p>Note that $X^2 \sim \sigma^2 \chi^2_1$ where $\chi^2_1$ is the Chi-squared distribution with 1 degree of freedom. Since $E[\chi^2_1] = 1, \text{Var}[\chi^2_1] = 2$ we have $E[X^2] = \sigma^2, \text{Var}[X^2] = 2 \sigma^4$.</p>
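A Monte Carlo sanity check of these two values (plain Python, fixed seed; the tolerances are deliberately loose):

```python
# Estimate E[X^2] and Var[X^2] for X ~ N(0, sigma^2) and compare with
# sigma^2 and 2*sigma^4.
import random

random.seed(0)
sigma = 1.5
N = 200_000
ys = [random.gauss(0, sigma) ** 2 for _ in range(N)]

mean = sum(ys) / N
var = sum((y - mean) ** 2 for y in ys) / N

assert abs(mean - sigma**2) < 0.05 * sigma**2
assert abs(var - 2 * sigma**4) < 0.05 * 2 * sigma**4
print(f"mean = {mean:.4f} (expect {sigma**2}), var = {var:.4f} (expect {2 * sigma**4})")
```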
129
<p>Is there some criterion for whether a space has the homotopy type of a closed manifold (smooth or topological)? Poincare duality is an obvious necessary condition, but it's almost certainly not sufficient. Are there any other special homotopical properties of manifolds?</p>
Benjamin Antieau
100
<p>Jacob Lurie gave a talk last week at Peter May's birthday conference on noncommutative Poincaré duality. The idea is to take an $n$-manifold $M$ and an $(n-1)$-connected space $X$. Then, he showed that the compact mapping space $\mbox{Map}_c(M,X)$ is isomorphic to a certain homotopy colimit over a certain category of open subsets of $M$. This is equivalent to the usual commutative Poincaré duality. However, it is not clear (to me) what the natural generalization of the statement is to non-manifolds. So, I am not sure how to use it as a test. However, if you could use it as a test of being a manifold, it seems feasible that if the noncommutative statement held for your test space $M$ and all $(n-1)$-connected spaces $X$ for some $n$, then it would seem reasonable to ask whether your test space is the homotopy type of an n-manifold.</p> <p>The category of open sets over which Lurie takes the colimit is the category of disjoint balls (homeomorphic to $\mathbb{R}^n$) in $M$. Thus, a guess might be something like: if $M$ is a space, and if $U$ is a category of open sets of $M$ that cover $M$, and if $\mbox{Map}_c(M,X)$ is equivalent to the homotopy colimit of $\mbox{Map}_c(U_i,X)$ for all $U_i$ in $U$ for all $(n-1)$-connected $X$ for some $n$, then $M$ has the homotopy type of an $n$-manifold.</p> <p>I have no idea if this is true, and even if it is true, it is not clear if it would be useful.</p>
796,262
<p>So, I am computing something seemingly simple involving complex gaussians and constants, but I am getting a big contradiction in my calculations. </p> <p><strong>The setup:</strong></p> <ul> <li>Let $C$ be a complex constant, that is, $C = c_r + jc_i$. </li> <li>Let $G$ be a complex gaussian variable, $G = g_r + jg_i$, where $g_r$ and $g_i$ are uncorrelated, and where each is $\sim\mathcal{N}(0,\sigma^2)$.</li> </ul> <p>I am computing $z = |C + G|^2$. </p> <p><strong>The problem:</strong> Now, before I go on, it is obvious that the variable $z$ must always be greater than or equal to $0$, owing to the $| \cdot |^2$ operation. However when I open up and compute $z$, I get an expression that seems like it CAN be less than $0$. </p> <p>Opening up $z$, I get</p> <p>$$ z = (c_r^2 + c_i^2) + 2\Big[c_rg_r + c_ig_i \Big] + (g_r^2 + g_i^2) $$</p> <p>The first term is a constant, and will always be greater than or equal to zero. The last term has a gamma distribution, and by definition, will also always be greater than or equal to zero. However, the <em>middle</em> term is simply a summation of two gaussians, and there is a finite probability that it takes a value less than zero, meaning that $z$ can also be less than zero! </p> <p>But $z$ can never be less than zero. This is the contradiction... I am not sure where I am making a mistake in my reasoning...</p>
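Update: a quick simulation (fixed seed, arbitrary constants, $\sigma = 1$) confirms that $z$ never goes negative. The catch seems to be that the cross term is not independent of the other two: pointwise, $2ab \geq -(a^2+b^2)$, so the middle term can never undercut the sum of the outer two.

```python
# Simulate z = |C + G|^2 via its expanded form and check it stays >= 0,
# even though the cross term alone is frequently very negative.
import random

random.seed(42)
cr, ci = 0.7, -1.2
zs = []
for _ in range(100_000):
    gr, gi = random.gauss(0, 1), random.gauss(0, 1)
    cross = 2 * (cr * gr + ci * gi)          # often negative ...
    z = (cr**2 + ci**2) + cross + (gr**2 + gi**2)
    assert z >= 0                            # ... yet z never drops below 0
    zs.append(z)
print("min z =", min(zs))
```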
Rebecca J. Stones
91,818
<p>Since the spanning trees are subgraphs of $K_{3,4}$, the degree sequences are of the form $(d_1,d_2,d_3,d_4), (d_5,d_6,d_7)$ where</p> <p>\begin{align*} d_1+d_2+d_3+d_4 &amp;= 6, \\ d_5+d_6+d_7 &amp;= 6, \\ d_i &amp; \geq 1 &amp; \text{for all } i \in \{1,\ldots,7\}, \text{ and} \\ d_i &amp; \leq 4 &amp; \text{for all } i \in \{1,\ldots,7\} \\ \end{align*}</p> <p>This just leaves $(d_1,d_2,d_3,d_4) \in \{(1, 1, 2, 2),(1, 1, 1, 3)\}$ and $(d_5,d_6,d_7) \in \{(1,1,4),(1,2,3),(2,2,2)\}$.</p> <p>Going through the possible degree sequences one by one, we find the following seven spanning trees:</p> <p><a href="https://i.stack.imgur.com/9itRt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9itRt.png" alt="enter image description here"></a></p> <p>If a spanning tree has $(d_1,d_2,d_3,d_4) = (1, 1, 1, 3)$, then the graph is unique up to isomorphism (the degree-$3$ vertex is adjacent to the three vertices in the other part, and the neighbors of the degree-$1$ vertices are determined by $(d_5,d_6,d_7)$). This gives the top row above.</p> <p>If a spanning tree has $(d_1,d_2,d_3,d_4) = (1, 1, 2, 2)$, then the two degree-$2$ vertices together with their neighbors induce a $5$-vertex path. We add two degree-$1$ vertices to this in all possible ways, and if we're systematic and careful, we obtain the latter $4$ spanning trees above (1. attach both to the middle of the path, 2. attach both to one end of the path, 3. attach one to one end of the path and one to the middle of the path, 4. attach one to each end of the path).</p>
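The count can be confirmed exhaustively in a few lines of Python (a sketch: since the two parts have different sizes, any isomorphism between spanning trees must preserve the bipartition, so it suffices to relabel within each part):

```python
# Enumerate all spanning trees of K_{3,4} (parts {0,1,2} and {3,4,5,6})
# and count them up to isomorphism.
from itertools import combinations, permutations

edges = [(a, b) for a in range(3) for b in range(3, 7)]

def is_spanning_tree(subset):
    # 6 acyclic edges on 7 vertices form a spanning tree
    parent = list(range(7))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in subset:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False  # cycle
        parent[ra] = rb
    return True

trees = [t for t in combinations(edges, 6) if is_spanning_tree(t)]
assert len(trees) == 3**3 * 4**2 == 432   # matrix-tree count for K_{3,4}

def canonical(tree):
    best = None
    for p3 in permutations(range(3)):
        for p4 in permutations(range(3, 7)):
            relabel = dict(enumerate(p3))
            relabel.update({i + 3: v for i, v in enumerate(p4)})
            img = tuple(sorted((relabel[a], relabel[b]) for a, b in tree))
            if best is None or img < best:
                best = img
    return best

assert len({canonical(t) for t in trees}) == 7
print("432 labeled spanning trees, 7 up to isomorphism")
```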
3,557,840
<p>Find the quadratic polynomial <span class="math-container">$p(x)$</span> for given data points <span class="math-container">$$p(x_0)=y_0, p'(x_1)=y_1', p(x_2)=y_2 \text{ with } x_0 \neq x_2.$$</span></p> <p><strong>My approach</strong></p> <p>I tried the problem taking <span class="math-container">$p(x)=a+bx+c x^2$</span> but I am not sure how to proceed.</p> <p>Any help is appreciated.</p>
user5713492
316,404
<p>The most general quadratic that goes through <span class="math-container">$(x_0,y_0)$</span> and <span class="math-container">$(x_2,y_2)$</span> is <span class="math-container">$$p(x)=y_0+\frac{(y_2-y_0)}{(x_2-x_0)}(x-x_0)+C(x-x_0)(x_2-x)$$</span> Then we require <span class="math-container">$$p^{\prime}(x_1)=y_1^{\prime}=\frac{(y_2-y_0)}{(x_2-x_0)}+C(x_0+x_2-2x_1)$$</span> If <span class="math-container">$x_0+x_2-2x_1=0$</span> there may be no solution. Otherwise, solve for <span class="math-container">$C$</span> and substitute it back into <span class="math-container">$p(x)$</span> above.</p>
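A quick numeric check of this construction on made-up sample data (the numbers are arbitrary):

```python
# Build p(x) = y0 + slope*(x - x0) + C*(x - x0)*(x2 - x) and verify
# the three interpolation conditions.
x0, y0 = 0.0, 1.0
x2, y2 = 2.0, 5.0
x1, yp1 = 0.5, 3.0   # prescribed slope p'(x1)

slope = (y2 - y0) / (x2 - x0)
denom = x0 + x2 - 2 * x1
assert denom != 0                     # otherwise there may be no solution
C = (yp1 - slope) / denom

def p(x):
    return y0 + slope * (x - x0) + C * (x - x0) * (x2 - x)

def p_prime(x):
    return slope + C * (x0 + x2 - 2 * x)

assert abs(p(x0) - y0) < 1e-12
assert abs(p(x2) - y2) < 1e-12
assert abs(p_prime(x1) - yp1) < 1e-12
print("interpolation conditions satisfied")
```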
218,915
<blockquote> <p>Prove that for any integer $n$, $\gcd (3n^2+5n+7, n^2+1)=1$ or $41$.</p> </blockquote> <p>The following answer is convoluted because I've intentionally created excess solutions. However, I can't figure out how to eliminate them! Anyone?</p> <p>Let $$d=\gcd (3n^2+5n+7, n^2+1).$$ Then $$d|[(3n^2+5n+7)-3(n^2+1)]$$ $$d |(5n+4)$$ And $$d | [5(3n^2+5n+7)-3n(5n+4)]$$ $$d |(13n+35)$$ And $$d |[5(13n+35)-13(5n+4)]$$ $$d |123$$ Therefore, $d= 1$ or $3$ or $41$ or $123$.</p>
Community
-1
<p>From your last step, we get that $d = 1,3,41,123$.</p> <p>Recall that $$n^2 \equiv 0,1 \pmod{3} \text{ (Why?)}$$ Hence neither $3$ nor $123$ divides $n^2+1$.</p> <p><strong>EDIT</strong></p> <p>Note that any $n$ is either $0 \pmod{3}$ or $\pm1 \pmod{3}$.</p> <p>Hence, $n^2 \equiv 0,1 \pmod{3}$. (Recall that if $x \equiv y \pmod{a}$, then $x^k \equiv y^k \pmod{a}$.)</p> <p>Hence, $n^2 + 1 \equiv 1,2 \pmod{3}$. This means that $3$ does not divide $n^2+1$. Hence, $3$ cannot divide any divisor of $n^2+1$. This enables us to rule out $d=3$ and $d=123$.</p>
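The residue fact the answer relies on can be checked in a couple of lines (a sanity check only, since $n^2+1 \bmod 3$ depends only on $n \bmod 3$):

```python
# n^2 + 1 mod 3 over a full residue system, plus a larger range for good measure
residues = sorted({(n * n + 1) % 3 for n in range(3)})
assert all((n * n + 1) % 3 != 0 for n in range(-100, 100))
```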
218,915
<blockquote> <p>Prove that for any integer $n$, $\gcd (3n^2+5n+7, n^2+1)=1$ or $41$.</p> </blockquote> <p>The following answer is convoluted because I've intentionally created excess solutions. However, I can't figure out how to eliminate them! Anyone?</p> <p>Let $$d=\gcd (3n^2+5n+7, n^2+1).$$ Then $$d|[(3n^2+5n+7)-3(n^2+1)]$$ $$d |(5n+4)$$ And $$d | [5(3n^2+5n+7)-3n(5n+4)]$$ $$d |(13n+35)$$ And $$d |[5(13n+35)-13(5n+4)]$$ $$d |123$$ Therefore, $d= 1$ or $3$ or $41$ or $123$.</p>
P..
39,722
<p>Or, you can write $$(-5n+4)(3n^2+5n+7)+(15n+13)(n^2+1)=41 \ .$$</p>
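The displayed combination, and the resulting gcd claim, are easy to machine-check over a range of $n$ (the helper name is mine):

```python
from math import gcd

def bezout_value(n):
    # the combination displayed in the answer
    return (-5 * n + 4) * (3 * n * n + 5 * n + 7) + (15 * n + 13) * (n * n + 1)

for n in range(-200, 201):
    assert bezout_value(n) == 41
    assert gcd(3 * n * n + 5 * n + 7, n * n + 1) in (1, 41)
```

Both values $1$ and $41$ really occur: for instance $n=32$ gives gcd $41$.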
218,915
<blockquote> <p>Prove that for any integer $n$, $\gcd (3n^2+5n+7, n^2+1)=1$ or $41$.</p> </blockquote> <p>The following answer is convoluted because I've intentionally created excess solutions. However, I can't figure out how to eliminate them! Anyone?</p> <p>Let $$d=\gcd (3n^2+5n+7, n^2+1).$$ Then $$d|[(3n^2+5n+7)-3(n^2+1)]$$ $$d |(5n+4)$$ And $$d | [5(3n^2+5n+7)-3n(5n+4)]$$ $$d |(13n+35)$$ And $$d |[5(13n+35)-13(5n+4)]$$ $$d |123$$ Therefore, $d= 1$ or $3$ or $41$ or $123$.</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\, $ Let $\rm\:d = gcd$, so $\rm\:d\:|\ i^2\!+1,\, 7+5\,i+3\,i^2.\:$ Then, like taking norms of Gaussian integers, $$\rm\:mod\ d\!:\,\ i^2\equiv -1\ \Rightarrow\ 0\equiv 7+5\,i+3\,i^2\equiv 4+5\,i\ \Rightarrow\ 0\equiv (4+5\,i)(4-5\,i)\equiv 4^2\!+5^2\equiv 41$$</p>
2,042,428
<p>If I'm correct, hidden induction is when we use something along the lines of "etc..." in a proof by induction. Are there any examples of when this would be appropriate (or when it's not appropriate but used anyway)?</p>
Bill Dubuque
242
<p>It's ubiquitous in inductive proofs by <em>telescopy</em>, e.g. multiplicative telescopic cancellation </p> <p>$\qquad\qquad\, \displaystyle (x-1)(x+1)(x^{\large 2}\!+1)(x^{\large 4}\!+1)\qquad\! \cdots\qquad (x^{\large 2^{\rm N}}\!+\,1)$</p> <p>$\qquad\ \ \ = \ \displaystyle \frac{\color{#0a0}{x-1}}{\color{#90f}1} \frac{\color{brown}{x^{\large 2}-1}}{\color{#0a0}{x-1}}\frac{\color{royalblue}{x^{\large 4}-1}}{\color{brown}{x^{\large 2}-1}}\frac{\phantom{f(3)}}{\color{royalblue}{x^{\large 4}-1}}\, \cdots\, \frac{\color{#c00}{\large x^{\large 2^{\rm N}}\!-1}}{\phantom{f(b)}}\frac{x^{\large 2^{\large \rm N+1}}\!-1}{\color{#c00}{x^{\large \rm 2^N}\!-1}} \,=\, \frac{x^{\large 2^{\rm N+1}}-1}{\color{#90f}1} $</p> <p>As to your question about rigor, informal proofs like the above can be <em>mechanically</em> rewritten into a rigorous inductive proof by anyone who is proficient with telescopic induction. But that is not necessarily the case for someone who is not (esp. for hairer problems where the telescopic cancellation is not so obvious). </p> <p>Thus typically it depends upon the context whether or not such informal proofs will be accepted as complete. If we are in a context where it is assumed that telescopic induction is known then such informal proofs may indeed be deemed acceptable. Otherwise more needs to be said to convince the reader that you know how to complete the proof into standard inductive form.</p> <p>Similar remarks hold for other common forms of inductive proofs. For example, many of my posts illustrate how the use of modular arithmetic (congruences) allows us to transform many inductive divisibility problems into a trivial induction such as $\, x\equiv 1\,\Rightarrow\, x^n\equiv 1,\,$ a consequence of the Congruence Power Rule (which has an <em>obvious</em> simple inductive proof). 
In a number theoretical context you would not be expected to give a rigorous inductive proof of the concluding inference, essentially $\,1^n\equiv 1\,$ (or, similarly $(-1)^{2n}\equiv 1).$ But in other contexts you would be expected to be more explicit, esp. if you are working without the simplifying language of congruences, so the innate algebraic structure may be much more obfuscated, greatly complicating the intuition needed to devise the inductive step. For example <a href="https://math.stackexchange.com/a/695494/242">see this post</a> where I explain how a divisibility inductive proof that is typically pulled out of a hat like magic is nothing but a special case of the Congruence Product Rule, and the inductive proof becomes obvious from the algebraic perspective.</p>
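The telescoping product above is easy to confirm for small cases with exact arithmetic (function name mine):

```python
from fractions import Fraction

def telescoped(x, N):
    # (x-1)(x+1)(x^2+1)(x^4+1)...(x^(2^N)+1)
    prod = x - 1
    for k in range(N + 1):
        prod *= x ** (2 ** k) + 1
    return prod

# each partial product collapses to x^(2^(N+1)) - 1
for x in (2, 3, 10, Fraction(1, 2)):
    for N in range(6):
        assert telescoped(x, N) == x ** (2 ** (N + 1)) - 1
```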
1,309,728
<p>I know what a 3x10 looks like, but I cannot seem to find a distinguishable pattern to extend it to a 3x14.</p> <p>The 3x10 pattern I'm using looks like the one at the top right of figure 6 of <a href="http://faculty.olin.edu/~sadams/DM/ktpaper.pdf" rel="nofollow">this paper</a>.</p> <p>Any help would be greatly appreciated.</p>
rschwieb
29,335
<p>Just keep going with using $X-a$ and $Y-b$ to translate stuff.</p> <p>$(X-a)(Y-b)+b(X-a)+a(Y-b)=XY-aY-bX+1+bX-1+aY-1=XY-1$</p>
638,875
<p>Let $P$ be a $p$-group and let $A$ be maximal among abelian normal subgroups of $P$. Show that $A=C_P(A)$.</p> <p>This is the second part of a problem in which I successfully proved the following: Let $P$ be a finite $p$-group and let $U&lt;V$ be normal subgroups of $P$. Show that there exists $W \triangleleft P$ with $U&lt;W \le V$ and $|W:U|=p$.</p> <p>I did this by observing that since $U&lt;V$ are normal in $P$, $(V/U) \triangleleft P/U$ and so $(V/U) \cap Z(P/U)$ is nontrivial. Now suppose that $|V/U|=p$. Then it easily follows that $U \triangleleft V \triangleleft P$ and $|V:U|=p$. Now suppose that $|V/U|&gt;p$. Then choose a subgroup of $(V/U) \cap Z(P/U)$ of order $p$, which is normal (since it is central) in $P/U$. This subgroup is of the form $W/U$ for some $W&lt;P$. Then by the Correspondence Theorem, we have $U \triangleleft W \triangleleft P$ and $|W:U|=p$.</p> <p>I have been told to apply the first part with $U=A$ and $V=C_P(A)$ and show that $W$ is abelian. I tried using the same strategy as above, i.e. choosing $W$ from $Z(P/U)$. However, abelian-ness isn't necessarily preserved under the canonical homomorphism from $P$ to $P/U$. Even if I could obtain such a $W$ I don't see how $W$ abelian implies that $A=C_P(A)$.</p> <p>I would appreciate a hint to point me in the right direction with this. Thanks.</p>
zcn
115,654
<p>Since $A$ is abelian, $A \subseteq C_P(A)$. If $A \neq C_P(A)$, then $C_P(A)/A$ would be a nontrivial normal subgroup of the $p$-group $P/A$, hence would intersect the center of $P/A$ nontrivially. Picking an nonidentity element $\overline{a} \in Z(P/A) \cap (C_P(A)/A)$, and lifting back to $P$, gives an element $a \in C_P(A)$, such that $\langle A, a \rangle$ is an abelian normal subgroup of $P$ properly containing $A$, contradicting maximality.</p>
10,615
<p>The tag <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> was created less than a year ago; see <a href="https://math.meta.stackexchange.com/questions/6324/summation-tag-for-finite-and-formal-summations">&quot;summation&quot; tag for finite and formal summations</a>.</p> <p>Before that, posts about finite sums were usually tagged as <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a>. Since then, new questions have usually been correctly tagged as <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a>, and some of the older questions have been retagged too.</p> <p>The <a href="https://math.stackexchange.com/tags/summation/info">tag-excerpt and tag-wiki for summation</a> say:</p> <blockquote> <p>Use <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> for sums of infinite series and questions of convergence; use <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> for questions about finite sums and simplification of expressions involving sums.</p> </blockquote> <p>Based on this it seems that a typical question on finite sums should be tagged only <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> and not <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a>. (Of course, there might be exceptions where both tags are appropriate.) Yet we have <a href="https://math.stackexchange.com/questions/tagged/summation+sequences-and-series">many questions</a> tagged with both these tags.</p> <p>I have to say that I am usually careful with removing the tags that the OP has chosen, especially if I am not entirely sure. (So some of the occurrences of both tags are due to my retags.)</p> <p>I would like to ask for the opinion of the community about this combination of tags. Perhaps this will encourage more people (including me) to use the tags correctly. And if most users disagree with the guide given in the tag-wiki, we can change the tag-wiki.</p> <blockquote> <ul> <li>Should the tag <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> be removed from questions about finite sums, and should only the tag <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> be used from this pair of tags?</li> </ul> </blockquote>
doraemonpaul
30,938
<p>In fact the tag wikis of <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> and <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> are not properly scoped, and the tags are not even named properly.</p> <p>A sequence is an ordered list of objects (the objects can be numbers, functions, etc.). A series originally referred to the sum of the terms of a sequence, but in modern mathematics the notion has been extended to cover summation-type kernel functions as well; a complicated series can contain several summation signs (e.g. <a href="http://en.wikipedia.org/wiki/Kamp%C3%A9_de_F%C3%A9riet_function" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Kamp%C3%A9_de_F%C3%A9riet_function</a>). Summation refers only to the operation of summing, so the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag has an inappropriate name.</p> <p>Questions about sequences are not necessarily interested in the sum of the terms, while questions about series are usually interested in, say, the existence of a closed form or convergence, rather than in which sequence the series came from, so the <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> tag also has an inappropriate name.</p> <p>Even the tag wikis of the <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> tag and the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag are poorly written.</p> <p>The tag wiki of the <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> tag:</p> <blockquote> <p>Recurrence relations, convergence tests, identifying sequences</p> </blockquote> <p>Isn't there already the <a href="https://math.stackexchange.com/questions/tagged/recurrence-relations" class="post-tag" title="show questions tagged &#39;recurrence-relations&#39;" rel="tag">recurrence-relations</a> tag specifically for questions about recurrence relations?</p> <p>The tag wiki of the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag:</p> <blockquote> <p>Questions about evaluating summations, especially finite summations. For infinite series, please consider the (sequences-and-series) tag instead.</p> </blockquote> <p>The first and second sentences contradict each other: the first implies that questions about infinite series may use the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag, while the second says that they may not. No wonder, then, that quite a few people use both the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag and the <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> tag.</p> <p>So the <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged &#39;summation&#39;" rel="tag">summation</a> tag and the <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged &#39;sequences-and-series&#39;" rel="tag">sequences-and-series</a> tag should be deleted. The replacement should be to create the <a href="https://math.stackexchange.com/questions/tagged/sequences" class="post-tag" title="show questions tagged &#39;sequences&#39;" rel="tag">sequences</a> tag specifically for questions about sequences, the <a href="https://math.stackexchange.com/questions/tagged/finite-series" class="post-tag" title="show questions tagged &#39;finite-series&#39;" rel="tag">finite-series</a> tag specifically for questions about finite series, and the <a href="https://math.stackexchange.com/questions/tagged/infinite-series" class="post-tag" title="show questions tagged &#39;infinite-series&#39;" rel="tag">infinite-series</a> tag specifically for questions about infinite series.</p>
1,917,313
<p>I am to find a combinatorial argument for the following identity:</p> <p>$$\sum_k \binom {2r} {2k-1}\binom{k-1}{s-1} = 2^{2r-2s+1}\binom{2r-s}{s-1}$$</p> <p>For the right hand side, I was thinking that it would just be the number of ways to choose at least $s-1$ elements out of a $[2r-s]$ set. However, for the left hand side, I don't really know what it is representing.</p> <p>Any help would be greatly appreciated!</p>
Marko Riedel
44,883
<p>Suppose we seek to verify that $$\sum_{k=1}^r {2r\choose 2k-1} {k-1\choose s-1} = 2^{2r-2s+1} {2r-s\choose s-1}$$</p> <p>where presumably $s\ge 1$. The lower limit is set to $k=1$ as the first binomial coefficient is zero when $k=0.$</p> <p>Introduce $${2r\choose 2k-1} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r-2k+2}} \frac{1}{(1-z)^{2k}} \; dz.$$</p> <p>This provides range control and vanishes when $k\gt r$ so we may extend the range to infinity, obtaining</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r+2}} \sum_{k\ge 1} {k-1\choose s-1} \frac{z^{2k}}{(1-z)^{2k}} \; dz.$$</p> <p>This yields</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r+2}} \sum_{k\ge s} {k-1\choose s-1} \frac{z^{2k}}{(1-z)^{2k}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r+2}} \frac{z^{2s}}{(1-z)^{2s}} \sum_{k\ge 0} {k+s-1\choose s-1} \frac{z^{2k}}{(1-z)^{2k}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r+2}} \frac{z^{2s}}{(1-z)^{2s}} \frac{1}{(1-z^2/(1-z)^2)^s} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r-2s+2}} \frac{1}{((1-z)^2-z^2)^s} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{2r-2s+2}} \frac{1}{(1-2z)^s} \; dz.$$</p> <p>This is</p> <p>$$[z^{2r-2s+1}] \frac{1}{(1-2z)^s} = 2^{2r-2s+1} {2r-2s+1+s-1\choose s-1} \\ = 2^{2r-2s+1} {2r-s\choose s-1}$$</p> <p>and we have the claim.</p>
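Before (or after) wading through the contour-integral manipulations, the identity can be sanity-checked for small parameters:

```python
from math import comb

def lhs(r, s):
    # sum over k of C(2r, 2k-1) * C(k-1, s-1); terms vanish once 2k-1 > 2r
    return sum(comb(2 * r, 2 * k - 1) * comb(k - 1, s - 1) for k in range(1, r + 1))

def rhs(r, s):
    return 2 ** (2 * r - 2 * s + 1) * comb(2 * r - s, s - 1)

for r in range(1, 13):
    for s in range(1, r + 1):
        assert lhs(r, s) == rhs(r, s)
```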
78,478
<blockquote> <p>Prove that $\frac{1}{n} \sum_{k=2}^n \frac{1}{\log k}$ converges to $0.$</p> </blockquote> <p>Okay, seriously, it's like this question is mocking me. I know it converges to $0$. I can feel it in my blood. I even proved it was Cauchy, but then realized that didn't tell me what the limit <em>was</em>. I've been working on this for an hour, so can one of you math geniuses help me?</p> <p>Thanks!</p>
N. S.
9,176
<p>Stolz–Cesàro:</p> <p>$$\lim \frac{1}{n} \sum_{k=2}^n \frac{1}{\log k} = \lim \frac{1}{ \log (n+1)}=0$$</p> <p><strong>Edit</strong> Here is a direct proof:</p> <p>$$0 \leq \frac{\sum_{k=2}^n \frac{1}{\log k}}{n} = \frac{\sum_{k=2}^{\sqrt{n}} \frac{1}{\log k}}{n} + \frac{\sum_{k=\sqrt{n}}^n \frac{1}{\log k}}{n} \leq \frac{\sqrt{n} \cdot \frac{1}{\log 2}}{n} + \frac{(n-\sqrt{n})\frac{1}{\log \sqrt{n}}}{n} $$</p> <p>Now each of the last two sequences converges to $0$, so squeeze it.</p>
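A numerical illustration of the limit (the cutoffs are arbitrary; this is only a plausibility check, not a proof): the averages decrease roughly like $1/\log n$.

```python
from math import log

def avg(n):
    # (1/n) * sum_{k=2}^{n} 1/log(k)
    return sum(1 / log(k) for k in range(2, n + 1)) / n

values = [avg(10 ** j) for j in (2, 3, 4)]
```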
352,849
<p>I have to show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$. </p> <hr> <p>I am not sure if this is correct, but I did it like this: $(2n)!=(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))\cdot (n!)$ so I have $$\displaystyle \frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}$$ and $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0$$ Is this correct? If not, why?</p>
Zev Chonoles
264
<p>It's correct, but I imagine you're expected to show a bit more work to <em>justify</em> your assertion that $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0$$ An easy way to do this is to bound this sequence of fractions with another, simpler one whose limit you know is 0.</p>
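One concrete choice of bounding sequence (my own, spelling out the hint): each of the $n$ factors $(n+1),\dots,(2n)$ is at least $n+1$, so the fraction is at most $(n+1)^{-n}$, which tends to $0$. An exact-arithmetic check:

```python
from fractions import Fraction
from math import factorial

def ratio(n):
    # n!/(2n)! = 1/((n+1)(n+2)...(2n))
    return Fraction(factorial(n), factorial(2 * n))

for n in range(1, 30):
    # each of the n factors in the denominator is at least n+1
    assert ratio(n) <= Fraction(1, (n + 1) ** n)
```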
2,596,213
<p>I'm having huge trouble with problems like this. I know the following:</p> <p>$$\frac{\sin{x}}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)$$</p> <p>and </p> <p>$$\ln{(1+t)}=t-\frac{t^2}{2}+\frac{t^3}{3}+O(t^4)$$</p> <p>So</p> <p>$$\ln{\left(1+\left(-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right)\right)}=\\\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]-\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^2}{2}+\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^3}{3}+O(x^8).$$</p> <p>But how on earth would one simplify this? Obviously I should not need to manually expand something of the form $(a+b+c+d+e)^n$. I seriously don't understand what is happening here.</p> <p>Also, how should I know to what order $O(x^?)$ I should expand the initial functions?</p>
Jack D'Aurizio
44,121
<p>$$\frac{\sin x}{x}=\prod_{n\geq 1}\left(1-\frac{x^2}{n^2\pi^2}\right) \tag{1}$$ implies $$ \log\frac{\sin x}{x} = -\sum_{n\geq 1}\sum_{m\geq 1}\frac{x^{2m}}{m n^{2m}\pi^{2m}}=-\sum_{m\geq 1}\frac{\zeta(2m)\,x^{2m}}{m\pi^{2m}} \tag{2}$$ and by recalling $\zeta(2)=\frac{\pi^2}{6},\zeta(4)=\frac{\pi^4}{90},\zeta(6)=\frac{\pi^6}{945}$ we get $$ \log\frac{\sin x}{x} = -\frac{x^2}{6}-\frac{x^4}{180}-\frac{x^6}{2835}+O(x^8).\tag{3}$$</p>
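To address the original "how would one simplify this" question directly: in practice one never expands $(a+b+c+\dots)^n$ by hand; one multiplies polynomials truncated at the working order. A sketch with exact rational coefficients (helper names mine; since $u = \sin x/x - 1 = O(x^2)$, the terms $u^4$ and beyond are $O(x^8)$ and can be dropped when working to order $x^6$):

```python
from fractions import Fraction as F

N = 6  # keep terms through x^6

def mul(p, q):
    # product of coefficient lists (index = degree), truncated at degree N
    r = [F(0)] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= N:
                r[i + j] += a * b
    return r

# sin(x)/x = 1 - x^2/3! + x^4/5! - x^6/7! + ...
s = [F(1), F(0), F(-1, 6), F(0), F(1, 120), F(0), F(-1, 5040)]
u = [F(0)] + s[1:]              # u = s - 1, starts at x^2

u2 = mul(u, u)                  # starts at x^4
u3 = mul(u2, u)                 # starts at x^6
# log(1 + u) = u - u^2/2 + u^3/3 - ...; u^4 = O(x^8) lies above the cutoff
log_s = [a - b / 2 + c / 3 for a, b, c in zip(u, u2, u3)]
```

The resulting coefficients reproduce the expansion $-\frac{x^2}{6}-\frac{x^4}{180}-\frac{x^6}{2835}$ obtained above from the Euler product.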
109,298
<p>I'm an applied model theorist, and open image theorems are important in the mathematical structures I study (they limit the number of types of elements being realised, and therefore keep things model theoretically nice e.g. stable). </p> <p>So I have some idea as to why these open image theorems should hold from a model theoretic viewpoint, and I know that these are regarded as important theorems, but I don't think I've ever come across a diophantine application of an open image theorem in the literature and I'd like to see one.</p> <p>I'm most familiar with Serre's open image theorem for elliptic curves so an example in this context would be ideal.</p>
Barinder Banwait
5,744
<p>Serre's open image theorem (on page IV-20 in his book "Abelian $l$-adic representations...) for non-CM elliptic curves $E/K$ is equivalent to the statement that, for almost all $l$ (how large depending on $E$ and $K$), the $l$-adic representation attached to $T_l(E)$ is surjective. Ditto for the mod-$l$ representation. These are nice examples of number theoretic applications. Explicit bounds are also known (work of Hall, Cojocaru and others...). Note that how large $l$ must be is expected to be independent of $E$, and should depend only on $K$. For example, if $K = \mathbb{Q}$, it is hoped that 37 is large enough for any non-CM elliptic curve. </p> <p>It is conjectured that, for $A/K$ any abelian variety for which $End_{\overline{K}}(A) = \mathbb{Z}$, there should be a similar open-image theorem. This is known (by work of Serre) when the dimension of $A$ is 2,6, or odd. In particular, for such an $A/K$, and for sufficiently large $l$, the mod-$l$ representation has image $GSp_{2g}(\mathbb{F}_l)$. This is also nice.</p> <p>(Work of Bogomolov says that the $l$-adic image of $A$ is open (with the $l$-adic topology) in $G_{A,l}(\mathbb{Q}_l)$ ; here $G_{A,l}$ is the $l$-adic algebraic monodromy group. See <a href="http://www.martinorr.name/blog/2010/11/27/images-of-galois-representations" rel="nofollow">this blog post of Martin Orr</a> for a discussion of these groups.)</p>
381,059
<p><span class="math-container">$\newcommand\R{\mathbb R}$</span>Let <span class="math-container">$f\colon\R^p\to\R$</span> be a continuous function. For <span class="math-container">$u=(u_1,\dots,u_p)$</span> and <span class="math-container">$v=(v_1,\dots,v_p)$</span> in <span class="math-container">$\R^p$</span>, let <span class="math-container">$[u,v]:=\prod_{r=1}^p[u_r,v_r]$</span>; <span class="math-container">$u\wedge v:=\big(\min(u_1,v_1),\dots,\min(u_p,v_p)\big)$</span>; <span class="math-container">$u\vee v:=\big(\max(u_1,v_1),\dots,\max(u_p,v_p)\big)$</span>; <span class="math-container">$$\int_u^v dx\, f(x):= (-1)^{\sum_{r=1}^p\,1(u_r&gt;v_r) }\int_{[u\wedge v,u\vee v]}dx\,f(x).$$</span> Let <span class="math-container">$F\colon\R^p\to\R$</span> be any antiderivative of <span class="math-container">$f$</span>, in the sense that <span class="math-container">$$D_1\cdots D_p F=f,$$</span> where <span class="math-container">$D_j$</span> is the operator of the partial differentiation with respect to the <span class="math-container">$j$</span>th argument; it is assumed that the result of this repeated partial differentiation does not depend on the order of the arguments with respect to which the partial derivatives are taken. Let <span class="math-container">$[p]:=\{1,\dots,p\}$</span>. 
For each set <span class="math-container">$J\subseteq[p]$</span>, let <span class="math-container">$|J|$</span> denote the cardinality of <span class="math-container">$J$</span>.</p> <p>Then it is not hard to establish the following multidimensional generalization of the fundamental theorem of calculus (<a href="https://arxiv.org/abs/1705.09159" rel="nofollow noreferrer">Lemma 5.1</a>): <span class="math-container">\begin{equation} \int_u^v dx\, f(x)=\sum_{J\subseteq[p]}(-1)^{p-|J|}F(v_J), \end{equation}</span> where <span class="math-container">$v_J:=\big(v_1\,1(1\in J)+u_1\,1(1\notin J),\dots,v_p\,1(p\in J)+u_p\,1(p\notin J)\big)$</span>.</p> <p>Has anyone seen this or similar statement elsewhere? (I am only asking about references, not proofs.)</p>
Zach Teitler
88,133
<p>The <span class="math-container">$p=2$</span> dimensional case is an exercise in Rogawski's calculus textbook. It is exercise 47 on page 885, section 15.1 (Integration in Several Variables) in the 2008 Early Transcendentals edition.</p>
381,059
<p><span class="math-container">$\newcommand\R{\mathbb R}$</span>Let <span class="math-container">$f\colon\R^p\to\R$</span> be a continuous function. For <span class="math-container">$u=(u_1,\dots,u_p)$</span> and <span class="math-container">$v=(v_1,\dots,v_p)$</span> in <span class="math-container">$\R^p$</span>, let <span class="math-container">$[u,v]:=\prod_{r=1}^p[u_r,v_r]$</span>; <span class="math-container">$u\wedge v:=\big(\min(u_1,v_1),\dots,\min(u_p,v_p)\big)$</span>; <span class="math-container">$u\vee v:=\big(\max(u_1,v_1),\dots,\max(u_p,v_p)\big)$</span>; <span class="math-container">$$\int_u^v dx\, f(x):= (-1)^{\sum_{r=1}^p\,1(u_r&gt;v_r) }\int_{[u\wedge v,u\vee v]}dx\,f(x).$$</span> Let <span class="math-container">$F\colon\R^p\to\R$</span> be any antiderivative of <span class="math-container">$f$</span>, in the sense that <span class="math-container">$$D_1\cdots D_p F=f,$$</span> where <span class="math-container">$D_j$</span> is the operator of the partial differentiation with respect to the <span class="math-container">$j$</span>th argument; it is assumed that the result of this repeated partial differentiation does not depend on the order of the arguments with respect to which the partial derivatives are taken. Let <span class="math-container">$[p]:=\{1,\dots,p\}$</span>. 
For each set <span class="math-container">$J\subseteq[p]$</span>, let <span class="math-container">$|J|$</span> denote the cardinality of <span class="math-container">$J$</span>.</p> <p>Then it is not hard to establish the following multidimensional generalization of the fundamental theorem of calculus (<a href="https://arxiv.org/abs/1705.09159" rel="nofollow noreferrer">Lemma 5.1</a>): <span class="math-container">\begin{equation} \int_u^v dx\, f(x)=\sum_{J\subseteq[p]}(-1)^{p-|J|}F(v_J), \end{equation}</span> where <span class="math-container">$v_J:=\big(v_1\,1(1\in J)+u_1\,1(1\notin J),\dots,v_p\,1(p\in J)+u_p\,1(p\notin J)\big)$</span>.</p> <p>Has anyone seen this or similar statement elsewhere? (I am only asking about references, not proofs.)</p>
Abdelmalek Abdesselam
7,410
<p>For an elementary fact like this, which may have been reinvented a thousand times, it is hard to find the first paper where this appeared. However, let me give some missing context. There is a whole industry in <strong>constructive quantum field theory</strong> and <strong>statistical mechanics</strong> about related &quot;smart&quot; interpolation formulas or Taylor formulas with integral remainders. These are used to perform so-called <strong>cluster expansions</strong>. For the OP's identity, there is no loss of generality in taking <span class="math-container">$u=(0,0,\ldots,0)$</span> and <span class="math-container">$v=(1,1,\ldots,1)$</span>. In this case, via <em>Möbius inversion in the Boolean lattice</em>, the formula comes from the following identity.</p> <p>Let <span class="math-container">$L$</span> be a finite set. Let <span class="math-container">$f:\mathbb{R}^L\rightarrow \mathbb{R}$</span>, <span class="math-container">$\mathbf{x}=(x_{\ell})_{\ell\in L}\mapsto f(\mathbf{x})$</span> be a sufficiently smooth function, and let <span class="math-container">$\mathbf{1}=(1,\ldots,1)\in\mathbb{R}^L$</span>, then <span class="math-container">$$ f(\mathbf{1})=\sum_{A\subseteq L}\int_{[0,1]^A}d\mathbf{h} \left[\left(\prod_{\ell\in A}\frac{\partial}{\partial x_{\ell}}\right)f\right](\psi_A(\mathbf{h})) $$</span> where <span class="math-container">$\psi_A(\mathbf{h})$</span> is the element <span class="math-container">$\mathbf{x}=(x_{\ell})_{\ell\in L}$</span> of <span class="math-container">$\mathbb{R}^L$</span> defined from the element <span class="math-container">$\mathbf{h}=(h_{\ell})_{\ell\in A}$</span> in <span class="math-container">$[0,1]^A$</span> by the rule: <span class="math-container">$x_{\ell}=0$</span> if <span class="math-container">$\ell\notin A$</span> and <span class="math-container">$x_{\ell}=h_{\ell}$</span> if <span class="math-container">$\ell\in A$</span>. 
Of course one needs to 1) apply this to all <span class="math-container">$L$</span>'s which are subsets of <span class="math-container">$[p]$</span>, 2) use Möbius inversion in the Boolean lattice, and 3) specialize to <span class="math-container">$L=[p]$</span>, and this gives the OP's identity.</p> <p>The above formula is the most naive one of its kind used to do a &quot;pair of cubes&quot; cluster expansion. See formula III.1 in the article</p> <p>A. Abdesselam and V. Rivasseau, <a href="https://arxiv.org/abs/hep-th/9409094" rel="noreferrer">&quot;Trees, forests and jungles: a botanical garden for cluster expansions&quot;</a>.</p> <p>It is also explained in words on page 115 of the book</p> <p>V. Rivasseau, <a href="http://www.rivasseau.com/resources/book.pdf" rel="noreferrer">&quot;From Perturbative to Constructive Renormalization&quot;</a>.</p> <p>Now the formula is a particular case of a much more powerful one, namely, Lemma 1 in</p> <p>A. Abdesselam and V. Rivasseau, <a href="https://arxiv.org/abs/hep-th/9605094" rel="noreferrer">&quot;An explicit large versus small field multiscale cluster expansion&quot;</a>,</p> <p>where one sums over &quot;allowed&quot; sequences <span class="math-container">$(\ell_1,\ldots,\ell_k)$</span> of arbitrary length of elements of <span class="math-container">$L$</span>, instead of subsets of <span class="math-container">$L$</span>. The notion of allowed is based on an arbitrary stopping rule. The above identity corresponds to &quot;allowed&quot;<span class="math-container">$=$</span>&quot;without repeats&quot;, or the stopping rule that one should not tack on an <span class="math-container">$\ell$</span> at the end of a sequence where it already appeared. 
By playing with this kind of choice of stopping rule one can use Lemma 1 of my article with Rivasseau, to prove the Hermite-Genocchi formula, the anisotropic Taylor formula by Hairer in Appendix A of <a href="https://arxiv.org/abs/1303.5113" rel="noreferrer">&quot;A theory of regularity structures&quot;</a> and many other things. When <span class="math-container">$f$</span> is the exponential of a linear form for instance, one can obtain various algebraic identities as in the MO posts</p> <p><a href="https://mathoverflow.net/questions/74102/rational-function-identity/74280#74280">rational function identity</a></p> <p><a href="https://mathoverflow.net/questions/334201/identity-involving-sum-over-permutations/334204#334204">Identity involving sum over permutations</a></p> <p>I forgot to mention, one can use Lemma 1 to derive the Taylor formula from calculus 1. This corresponds to <span class="math-container">$L$</span> having one element and defining allowed sequences as the ones of length at most <span class="math-container">$n$</span>. See</p> <p><a href="https://math.stackexchange.com/questions/3753212/is-there-any-geometrical-intuition-for-the-factorials-in-taylor-expansions/3753600#3753600">https://math.stackexchange.com/questions/3753212/is-there-any-geometrical-intuition-for-the-factorials-in-taylor-expansions/3753600#3753600</a></p>
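For a separable antiderivative the alternating-sum formula can be verified mechanically. The sketch below (my own choice of $F$ and endpoints, including one reversed coordinate to exercise the sign convention) checks the $p=3$ case, where the signed integral factors as $\prod_r (v_r^3-u_r^3)$:

```python
from fractions import Fraction as F
from itertools import product

p = 3
u = [F(-1), F(0), F(1, 2)]
v = [F(2), F(1), F(-1)]   # v[2] < u[2] exercises the sign convention

def bigF(x):
    # separable antiderivative: F(x) = prod_r x_r^3, so f(x) = prod_r 3*x_r^2
    out = F(1)
    for t in x:
        out *= t ** 3
    return out

# exact signed integral of f over [u, v]: prod_r (v_r^3 - u_r^3)
exact = F(1)
for a, b in zip(u, v):
    exact *= b ** 3 - a ** 3

# the alternating sum over subsets J of [p] from the identity
alt = F(0)
for choice in product([0, 1], repeat=p):
    xJ = [v[r] if choice[r] else u[r] for r in range(p)]
    alt += (-1) ** (p - sum(choice)) * bigF(xJ)
```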
1,285,941
<p>I have a question for which I couldn't find any clue. The question is to evaluate</p> <p>$$\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+\cdots$$</p> <p>I could get the general term as $t_n=\frac{1\cdot3\cdot5\cdot7\cdots(2n-1)}{1\cdot2\cdot3\cdot4\cdot5\cdot6\cdots2n}$.</p> <p>I have also tried to put the sequence into telescoping form, but couldn't. Any hint will be appreciated.</p>
wythagoras
236,048
<p>$$t_n=\frac{1\cdot3\cdot5\cdot7\dots(2n-1)}{1\cdot2\cdot3\cdot4\cdot5\cdot6\dots2n}=\frac{(2n-1)!!}{(2n)!}=\frac{(2n-1)!!}{(2n-1)!!(2n)!!}=\frac{1}{(2n)!!}=\frac{1}{2^n n!}$$</p> <p>Therefore this sum converges. </p> <p>Invoking $e^x = \sum^{\infty}_{n=0} \frac{x^n}{n!}$, we can compute the sum: $$\sum^{\infty}_{n=1} \frac{(\frac{1}{2})^n}{n!} = \sum^{\infty}_{n=0} \frac{(\frac{1}{2})^n}{n!}-1 = e^{\frac{1}{2}}-1$$</p>
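Not part of the original answer, but both the simplification of the general term and the closed form are easy to sanity-check numerically; a small Python sketch (the helper name `t` is mine):

```python
import math

def t(n):
    """General term: product of the odd numbers up to 2n-1, divided by (2n)!."""
    num = 1
    for odd in range(1, 2 * n, 2):
        num *= odd
    return num / math.factorial(2 * n)

# t_n should equal 1 / (2^n n!), and the series should sum to e^(1/2) - 1
partial_sum = sum(t(n) for n in range(1, 30))
closed_form = math.exp(0.5) - 1
```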
1,521,720
<p>I'm working through a book on logic and wanted to check one of my steps in a derivation I'm working out. Given the two quantifier negation equalities:</p> <ol> <li><p>$\lnot\exists P(x) = \forall\lnot P(x)$</p></li> <li><p>$\exists\lnot P(x) = \lnot\forall P(x)$</p></li> </ol> <p>I'm trying to derive (2) from (1). I'm not asking for the solution if I have it wrong -- I think I've got it, but I want to find out if my last step is valid.</p> <p>Negating both sides of (1):</p> <p>$\lnot\lnot\exists P(x) = \lnot\forall\lnot P(x)$</p> <p>Then removing the double negation, you get:</p> <p>$\exists P(x) = \lnot\forall\lnot P(x)$</p> <p>Now, it's not at all clear to me that algebraically, you can just slap a negation of both sides <em>inside the quantifiers</em> to get to (2), preserving correctness. But I notice that this equation has the same "shape" as (2). Intuitively, it seems to be saying the same thing, because you could define a predicate $R(x) = \lnot P(x)$ and obtain:</p> <p>$\exists\lnot R(x) = \lnot\forall R(x)$</p> <p>...but I don't like "intuitively". This step feels like a hand-wavy leap of faith -- is it valid? If so, is there a name for it?</p>
Graham Kemp
135,106
<p>The full statement is in second-order logic:</p> <ol> <li>$\forall P\; \big(\neg \exists x \; P(x) \;\leftrightarrow\; \forall x\;\neg P(x)\big)$</li> </ol> <p>Then we can use Universal Instantiation</p> <p>1.1. $\neg \exists x\;\neg Q(x)\;\leftrightarrow\; \forall x\; \neg\neg Q(x)$</p> <p>Double Negation Elimination</p> <p>1.2. $\neg \exists x\;\neg Q(x)\;\leftrightarrow\; \forall x\; Q(x)$</p> <p>Equivalence $\neg a\leftrightarrow b \iff a\leftrightarrow \neg b$</p> <p>1.3. $\exists x\;\neg Q(x)\;\leftrightarrow\; \neg \forall x\; Q(x)$</p> <p>And Universal Generalisation</p> <ol start="2"> <li>$\forall P\;\big(\exists x\;\neg P(x)\;\leftrightarrow\; \neg \forall x\; P(x)\big)$</li> </ol>
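As a sanity check (not a substitute for the derivation, which must hold over arbitrary domains), both quantifier-negation equivalences can be brute-forced over a small finite domain, where a predicate is just a truth assignment; a minimal Python sketch:

```python
from itertools import product

DOMAIN = range(3)

def forall(pred):
    return all(pred(x) for x in DOMAIN)

def exists(pred):
    return any(pred(x) for x in DOMAIN)

def both_equivalences_hold():
    # every predicate on a 3-element domain is a truth assignment in {F,T}^3
    for values in product([False, True], repeat=len(DOMAIN)):
        P = lambda x: values[x]
        # (1)  not(exists x P(x))  <->  forall x not P(x)
        if (not exists(P)) != forall(lambda x: not P(x)):
            return False
        # (2)  exists x not P(x)  <->  not(forall x P(x))
        if exists(lambda x: not P(x)) != (not forall(P)):
            return False
    return True
```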
4,530,792
<p>I have the following sequence <span class="math-container">$\left \{k \sin \left(\frac{1}{k}\right) \right\}^{\infty}_{1}$</span>. I don't know how to show that this is monotonically increasing.</p> <p>I tried taking the derivative of the corresponding function <span class="math-container">$f(x) = x \sin \left(\frac{1}{x}\right)$</span>, and showing that <span class="math-container">$f^{\prime} \geq 0$</span> for <span class="math-container">$x \geq 1$</span>, but the derivative is kind of messy. The problem boils down to showing that <span class="math-container">$$\sin(\frac{1}{x}) - \frac{\cos{\frac{1}{x}}}{x} \geq 0.$$</span></p> <p>I am open to other approaches too, maybe some outside-the-box approach that doesn't even need the first derivative. But it feels like one should be able to show that the inequality holds. Thank you!</p>
GReyes
633,848
<p>It is easier to look at the function <span class="math-container">$g(x)=\frac{\sin x}{x}$</span> and show that it is decreasing on <span class="math-container">$(0,1)$</span> (notice that <span class="math-container">$f(x)=g(1/x)$</span>). You have <span class="math-container">$$ g'(x)=\frac{x\cos x-\sin x}{x^2} $$</span> and the numerator is negative, since <span class="math-container">$x&lt;\tan x$</span> on <span class="math-container">$(0,\pi/2)$</span>.</p>
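A quick numerical check (reassuring, though no substitute for the proof) that $g(x)=\sin(x)/x$ is decreasing on $(0,1)$ and that the original sequence $k\sin(1/k)$ is therefore increasing; a Python sketch:

```python
import math

def g(x):
    return math.sin(x) / x

def seq(k):
    return k * math.sin(1.0 / k)   # equals g(1/k)

# sample g on a grid in (0,1), and the sequence for k = 1..1000
g_values = [g(0.001 * j) for j in range(1, 1000)]
seq_values = [seq(k) for k in range(1, 1001)]
```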
2,012,947
<p>I'm trying to prove that if f,g are continuous functions, and if E is a dense subset of X $(\text{or } Cl(E) = X)$ and if $f(x)=g(x) \forall x \in E$ then $f(x)=g(x) \forall x \in X$. </p> <p>I understand that if f,g are continuous, then:</p> <blockquote> <p>$\exists \delta_1, \delta_2$ such that $\forall X \in E$ with $d(x,p)&lt; \delta_1$, $|f(x) - f(p)| &lt; \epsilon$ and similarly $\forall X \in E$ with $d(x,p)&lt; \delta_2$, $|g(x) - g(p)| &lt; \epsilon$</p> </blockquote> <p>And by definition of closure, I know that:</p> <blockquote> <p>$Cl(E) = E \cup E'$ where E' is the set accumulation points of E, where p is an accumulation point if $\forall r&gt;0, (E\cup N_r(p)) \backslash \{p\} \neq \emptyset $</p> </blockquote> <p>I have zero clue on how to approach this problem. If $f(x) = g(x)$, then I'm guessing it implies that $|f(x) - f(p)| = |g(x) - g(p)|$. And so I'm guessing that $\delta_1 = \delta_2$. </p> <p>Help would be very much appreciated. </p>
Firepi
379,085
<p>Remember that $x\in Cl(E)$ if and only if there is a sequence of elements in $E$ which converges to $x$.</p> <p>If $\{x_n\}$ is a sequence in $X$ such that $\lim_{n \to \infty}x_n=x$ for some $x\in X$ then $\lim_{n\to \infty}f(x_n)=f(x)$ and $\lim_{n\to \infty}g(x_n)=g(x)$ since $f$ and $g$ are continuous in $X$. For $x$ there is a sequence $\{y_n\}$ of elements in $E$ such that $\lim_{n \to \infty}y_n=x$. Then $$\lim_{n \to \infty}f(y_n)=f(x)$$ and $$\lim_{n \to \infty}g(y_n)=g(x).$$ But $y_n \in E$ for all natural $n$, hence $f(x)=g(x)$.</p>
3,391,225
<p>What is the term for a (connected?) set <span class="math-container">$S$</span> of the plane <span class="math-container">$\mathbb{R}^2$</span> such that the intersection of <span class="math-container">$S$</span> with every horizontal line <span class="math-container">$\ell_{b}: y=b$</span> is either empty, or an interval of the line <span class="math-container">$\ell_b$</span>? </p>
RobPratt
683,666
<p>Such a set is called <em>horizontally convex</em>.</p>
4,394,247
<p>I know how to represent the sentence “there is exactly one person that is happy”,</p> <p>∀y∀x((Happy(x)∧Happy(y))→(x=y))</p> <p>Edit: ∃x∀y(y=x↔Happy(y)) (NOW, I actually know how to represent it)</p> <p>Where x and y represent a person.</p> <p>However, my problem is that I can’t figure out how to say “there are exactly 3 people that are happy” in predicate logic.</p>
ryang
21,813
<blockquote> <p>I know how to represent the sentence “there is exactly one person that is happy”: <span class="math-container">$$∀y∀x((\text{Happy}(x)∧\text{Happy}(y))→(x=y))$$</span></p> </blockquote> <p>Correction: <span class="math-container">$$∃x \,∀p\;\Big( p=x \leftrightarrow\text{Happy}(p) \Big).$$</span></p> <p>(Note that even though this sentence looks simpler than Tom's suggestion, they are actually logically equivalent to each other.)</p> <blockquote> <p>I can’t figure out how to say “there are exactly 3 people that are happy” in predicate logic.</p> </blockquote> <p><span class="math-container">$$∃x ∃y ∃z \,∀p\;\bigg(x\neq y\land y\neq z\land z\neq x \land \Big( (p=x\lor p=y\lor p=z ) \leftrightarrow\text{Happy}(p) \Big)\bigg).$$</span></p>
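Not part of the answer, but the "exactly three" sentence can be verified mechanically on small finite domains: it should be true precisely when the set of happy individuals has size 3. A Python brute-force sketch (function names are mine):

```python
from itertools import product

def exactly_three(domain, happy):
    # direct translation of the displayed sentence:
    # exists x,y,z forall p ( x!=y and y!=z and z!=x
    #                         and ((p=x or p=y or p=z) <-> Happy(p)) )
    for x, y, z in product(domain, repeat=3):
        if x == y or y == z or z == x:
            continue
        if all((p in (x, y, z)) == happy(p) for p in domain):
            return True
    return False

def formula_matches_cardinality(domain_size):
    domain = range(domain_size)
    # enumerate every possible "Happy" predicate on the domain
    for bits in product([False, True], repeat=domain_size):
        happy = lambda p: bits[p]
        if exactly_three(domain, happy) != (sum(bits) == 3):
            return False
    return True
```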
3,794,101
<blockquote> <p>Show that <span class="math-container">$f: \mathbb{R^3} \to \mathbb{R}$</span> <span class="math-container">$$f(x, y, z) = xy + z^2$$</span> is continuous.</p> </blockquote> <p>One could just deduce that since it's a polynomial it's continuous, but how would I show this using <span class="math-container">$(\varepsilon, \delta)$</span>? I'm not familiar on using the method with multivariate functions.</p>
peek-a-boo
568,204
<p>Hint: Denote <span class="math-container">$\xi = (x,y,z)$</span> and <span class="math-container">$\alpha = (a,b,c)$</span>. Then, <span class="math-container">\begin{align} |f(\xi) - f(\alpha)| &amp;= |(xy+z^2) - (ab+c^2)|\\ &amp;= |(x-a)y + a(y-b) + (z-c)(z+c)|\\ &amp; \leq |y| |x-a| + |a||y-b| + |z+c| |z-c| \\ &amp;\leq \left( |y| + |a| + |z+c|\right) \lVert\xi - \alpha \rVert \end{align}</span> Now, if <span class="math-container">$\lVert \xi-\alpha\rVert &lt; 1$</span>, can you find an upper bound on the thing in brackets?</p> <p>If this is still too far of a leap, I suggest you take a look at the proof that sums and products of continuous functions are continuous. In particular look at the proof of why <span class="math-container">$t\mapsto t^2$</span> is continuous from single-variable analysis.</p> <hr /> <p>Notice that the idea is to use basic algebra and &quot;force&quot; terms like <span class="math-container">$x-a$</span> to appear, because if this is small enough, you can make the entire thing small enough.</p>
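The final inequality (which uses that each of $|x-a|$, $|y-b|$, $|z-c|$ is at most $\lVert\xi-\alpha\rVert$) can be spot-checked numerically before relying on it; a Python sketch, not part of the hint:

```python
import math
import random

def f(x, y, z):
    return x * y + z * z

def bound_holds(xi, alpha):
    x, y, z = xi
    a, b, c = alpha
    lhs = abs(f(x, y, z) - f(a, b, c))
    # math.dist gives the Euclidean norm of xi - alpha
    rhs = (abs(y) + abs(a) + abs(z + c)) * math.dist(xi, alpha)
    return lhs <= rhs + 1e-9   # tiny slack for floating-point roundoff

random.seed(0)
pairs = [(tuple(random.uniform(-10, 10) for _ in range(3)),
          tuple(random.uniform(-10, 10) for _ in range(3)))
         for _ in range(1000)]
```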
3,464,342
<blockquote> <p>For <span class="math-container">$j\in \mathbb{N}$</span> let <span class="math-container">$$M_j=\{f\in L^2([0,1]):\int_0^1 |f|^2 dx \leq j\}$$</span> (a) Establish that <span class="math-container">$L^2([0,1])=\cup_{j\in \mathbb{N}}M_j$</span>.</p> <p>(b) Show that each <span class="math-container">$M_j$</span> is a closed subset in <span class="math-container">$L^1([0,1])$</span>.</p> <p>(c) Show that the interior of each <span class="math-container">$M_j$</span> in the norm topology of <span class="math-container">$L^1([0,1])$</span> is empty.</p> <p>(d) From (a)-(c) it appears that <span class="math-container">$L^2([0,1])$</span> is the countable union of sets with empty interior. Explain why this does not contradict Baire's theorem.</p> </blockquote> <p>I believe (a) is obvious.</p> <p>For (b) I need to show if <span class="math-container">$\int |f_n|^2 \leq j$</span> for <span class="math-container">$j\in \mathbb{N}$</span> and <span class="math-container">$\int |f_n-f| \to 0$</span> then also <span class="math-container">$\int |f|^2\leq j$</span>. To relate <span class="math-container">$L^2$</span> and <span class="math-container">$L^1$</span> I was thinking of using Cauchy-Schwarz and saying <span class="math-container">$\int |f_n-f|\leq (\int |f_n-f|^2)^{\frac12}$</span> but I need the inequality to go the other way.</p> <p>For (c) I assume there exists an <span class="math-container">$M_j$</span> such that <span class="math-container">$O\in M_j$</span> where <span class="math-container">$O$</span> is open. Then there exists <span class="math-container">$f \in M_j$</span> and a sequence <span class="math-container">$g_j \in M_j$</span> such that <span class="math-container">$\int |f-g_j|&lt;\epsilon$</span>. 
I somehow want to obtain a contradiction.</p> <p>I found this helpful post <a href="https://math.stackexchange.com/questions/2560595/set-with-empty-interior-in-l10-1">Set with empty interior in $L^1([0,1])$</a> but there <span class="math-container">$f\in L^1([0,1])$</span> in the definition of <span class="math-container">$M_j$</span> so I am not sure if I can use it. There they argue if <span class="math-container">$M_j \ni f_k\to f$</span> in <span class="math-container">$L^1$</span> then for some subsequence <span class="math-container">$f_{k_n}\to f$</span> a.e. Then by Fatou <span class="math-container">$\int |f|^2\leq \lim \inf \int |f_{k_n}|^2\leq j$</span> so <span class="math-container">$f\in M_j$</span>. Wouldn't this argument show <span class="math-container">$M_j$</span> is closed in any <span class="math-container">$L^p$</span>-space then as we can always extract an almost everywhere converging subsequence?</p> <p>For (d) is the problem that <span class="math-container">$M_j$</span> is closed and of empty interior in <span class="math-container">$L^1([0,1])$</span> but it is defined as a subset of <span class="math-container">$L^2([0,1])$</span>?</p>
Ian
83,396
<p>In b, say you have a sequence in <span class="math-container">$M_j$</span> converging in <span class="math-container">$L^1$</span>, pass to a subsequence to get an a.e. convergent subsequence, then Fatou's Lemma does what you want.</p> <p>In c, find a family <span class="math-container">$f_{\epsilon,A}$</span> with <span class="math-container">$\| f_{\epsilon,A}\|_{L^1}=\epsilon$</span> but <span class="math-container">$\| f_{\epsilon,A} \|_{L^2}=A$</span>, then for each <span class="math-container">$\epsilon$</span> and each <span class="math-container">$f\in M_j$</span>, perturb <span class="math-container">$f$</span> using an appropriate function of this family to conclude that <span class="math-container">$M_j$</span> does not contain the ball centered at <span class="math-container">$f$</span> of radius <span class="math-container">$\epsilon$</span> in the <span class="math-container">$L^1$</span> norm.</p> <p>In d the point is that <span class="math-container">$L^2([0,1])$</span> with the <span class="math-container">$L^1$</span> norm is not a complete metric space. Indeed one can use my suggestion in part c to see that.</p>
833,827
<p>I am trying to refresh on algorithm analysis. I am looking for a refresher on summation formulas.<br> E.g.<br> I can derive the $$\sum_{i = 0}^{N-1}i$$ to be N(N-1)/2 but I am rusty on the more complex ones, e.g. something like $$\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}1}$$<br> Is there good refresher material for this?<br> In my example my result of the inner most loop is:<br> $$N(N-1)(N-2)/2$$</p> <p>which is wrong though </p> <p><strong>UPDATE</strong><br> The sums I am describing are basically representing the following algorithm: </p> <pre><code>for (i = 0; i &lt; n; i++) {
    for (j = i+1; j &lt; n; j++) {
        for (k = j+1; k &lt; n; k++) {
            //code
        }
    }
}
</code></pre> <p>This algorithm is <code>O(N^3)</code> according to all textbooks by definition of its structure. I am not sure why the answers are giving me an <code>O(N^4)</code></p>
mlk
155,406
<p>Well, the basic time-proven technique for simpler problems like this is guessing + proof by induction, something you can learn best by exercise. There certainly is way more advanced stuff even going into analytic number theory, but at least initially I would suggest you should get some problem book on induction or google a bit for exercises (there are hundreds around) and just start. This really helps to get some intuition, which I believe is really important to understand the more complex things.</p> <p>If you want some literature instead, you should search for some books about discrete mathematics, there are several that are centered around computer science and should cover the topics you need. I've always been told that "Concrete Mathematics" by Graham, Knuth and Patashnik is a classic, but I haven't read it myself, so I won't guarantee anything.</p>
833,827
<p>I am trying to refresh on algorithm analysis. I am looking for a refresher on summation formulas.<br> E.g.<br> I can derive the $$\sum_{i = 0}^{N-1}i$$ to be N(N-1)/2 but I am rusty on the more complex ones, e.g. something like $$\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}1}$$<br> Is there good refresher material for this?<br> In my example my result of the inner most loop is:<br> $$N(N-1)(N-2)/2$$</p> <p>which is wrong though </p> <p><strong>UPDATE</strong><br> The sums I am describing are basically representing the following algorithm: </p> <pre><code>for (i = 0; i &lt; n; i++) {
    for (j = i+1; j &lt; n; j++) {
        for (k = j+1; k &lt; n; k++) {
            //code
        }
    }
}
</code></pre> <p>This algorithm is <code>O(N^3)</code> according to all textbooks by definition of its structure. I am not sure why the answers are giving me an <code>O(N^4)</code></p>
acegs
52,463
<p>Based on your code, we can say that the code inside the innermost loop executes in a constant time $A$. Then its overall execution time can be approximated by: $$T_{N} = \sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}}A$$</p> <p>Using the summation properties given in Perry Iverson's answer, we can now solve.</p> <p>Solving: \begin{align} T_{N} &amp;= \sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}A} \\ &amp;= A\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}1} \\ &amp;= A\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}[(N-1) - (j+1) + 1]} \\ &amp;= A\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}[(N-1) - j]} \\ &amp;= A\sum_{i = 0}^{N-1}\left(\sum_{j = i+1}^{N-1}(N-1) - \sum_{j = i+1}^{N-1}j\right) \\ &amp;= A\sum_{i = 0}^{N-1}\left((N-1)\sum_{j = i+1}^{N-1}1 - \dfrac{1}{2}[N(N-1)-i(i+1)]\right) \\ &amp;= A\sum_{i = 0}^{N-1}\left((N-1)(N-1-i) - \dfrac{1}{2}[N(N-1)-i(i+1)]\right) \\ &amp;= A\sum_{i = 0}^{N-1}\left(\dfrac{1}{2}(N-1)(N-2)+\dfrac{1}{2}i^2+i\left(\dfrac{3}{2}-N\right)\right) \\ &amp;= \dfrac{A}{2}\sum_{i = 0}^{N-1}\left((N-1)(N-2)-i(2N-3)+i^2\right) \\ &amp;= \dfrac{A}{2}\left((N-1)(N-2)\sum_{i = 0}^{N-1}1-(2N-3)\sum_{i = 0}^{N-1}i+\sum_{i = 0}^{N-1}i^2\right) \\ &amp;= \dfrac{A}{2}\left((N-1)(N-2)N-(2N-3)\dfrac{N(N-1)}{2}+\dfrac{N(N-1)(2N-1)}{6}\right) \\ &amp;= \dfrac{AN(N-1)(N-2)}{6} \end{align}</p> <p>To check it, try running this program:</p> <pre><code>#include &lt;stdio.h&gt;

int getNumLoop(int p_num){
    int sum = 0;
    for(int i = 0; i &lt; p_num; ++i)
        for(int j = i+1; j &lt; p_num; ++j)
            for(int k = j+1; k &lt; p_num; ++k)
            {
                sum++;
            }
    return sum;
}

int computeNumLoop(int p_num){
    return p_num*(p_num-1)*(p_num-2)/6;
}

int main(void){
    for(int N = 0; N &lt; 10; ++N){
        printf("N = %d: loop[%d], compute[%d]\n", N, getNumLoop(N), computeNumLoop(N));
    }
    return 0;
}
</code></pre> <p><strong>Update:</strong></p> <p>I found out that this is just $A{ N \choose 3}$.</p> <p>If you increase the number of nested loops to, say, 5, the result will be $A{ N \choose 5}$.</p> <p>In general, the total execution time of this kind of loop structure is $A{ N \choose k}$, where $k$ is the number of nested loops.</p>
1,376,651
<p>To be specific here is the system:</p> <p>$$x-2y=0 \tag{1}$$ $$x-2(k+2)y=0 \tag{2}$$ $$x-(k+3)y=-k \tag{3}$$ </p> <p>I have already solved it for equations $(1)$ and $(2)$... what should I do with the 3rd equation?</p> <p>Just to make sure everything goes well here is my method:</p> <p>$D=-2(k+2)$ and $D_x=D_y=0$</p> <p>If $k=-2$ then $D=0$ so there are infinitely many solutions. If $k\not=-2$ then $D\not=0$ so the solution is $(0,0)$</p>
mvw
86,776
<p>The volume of the big box is $V_B = 7\cdot 9 \cdot 11 = 693$, the total volume of the small boxes is $V_b = 77 \cdot 3 \cdot 3 \cdot 1 = 693$.</p> <p>This means the volume of the small boxes is sufficient and we need to use all small boxes.</p> <p>Let us try to model this problem Tetris style: </p> <ul> <li>We have a base field, e.g. $7\times 9$, and need to drop all 77 small boxes over it.</li> <li>For each drop we have two decisions: <ul> <li>where to put the center of the box $c = (c_x, c_y, c_z)$ over the base field $(c_x, c_y) \in I_x \times I_y$ with $I_x = \{ 1, \ldots, 7 \}$ and $I_y = \{ 1, \ldots, 9 \}$</li> <li>how to orientate the $3\times 3\times 1$ box. There seem to be only three feasible orientations: <ul> <li>a large $3\times 3$ side as base, like a pizza box ("O") </li> <li>a small $3 \times 1$ side as base, orientated along the $x$-axis ("-")</li> <li>a small $3 \times 1$ side as base, orientated along the $y$-axis ("|")</li> </ul></li> </ul></li> <li>We lose if after the drop some part of the box sticks outside the big volume</li> <li>We win if we dropped all $77$ boxes without losing.</li> </ul> <p>This is a search space of $77\times 7 \times 9 \times 3 = 14553$ drop configurations. Not that much for a machine.</p> <p>We could avoid the drop simulation and instead have $c_z$ as another choice. This would enlarge the search space to $77\times 7 \times 9 \times 11 \times 3 = 160083$ configurations. In both cases we need to check that boxes do not intersect.</p> <p>This should be sufficient to code a solver which visits all configurations of the search space (brute force) and will answer the question by either listing feasible configurations or reporting that there is no solution.</p> <p>Note: I submitted this before MJD published a counter argument.</p>
613,940
<p>Given two parameters $a$ and $b$ (both positive integers), please estimate the order of growth of the following function:</p> <p>$$F(t)=\left\{\begin{array}{ll} 1, \, &amp;t\le a \\ F(t-1) + b\cdot F(t-a),&amp;t&gt;a\end{array}\right.$$ </p> <p>My guess is $\Theta\left(b^{t/a}\right)$. Any answer that might help to confirm or deny this is welcome. The same with helpful references or suggestions.</p>
Slade
33,433
<p>We have $F(n) = \Theta (\kappa^n)$, where $\kappa$ is the unique positive root of $p(x) = x^a - x^{a-1} -b$.</p> <p>This <em>can</em> be close to the crude guess of $b^{n/a}=(\sqrt[a]{b})^n$, but it can also be quite far. For example, if $a=1$, then this gives the correct asymptotics of $\Theta((b+1)^n)$, while the crude guess gives $\Theta(b^n)$. If $b=1$ as well, these are <em>extremely</em> different!</p> <p>Estimating $\kappa$ is interesting in itself, but I'd have to write for dozens of paragraphs to do justice to it. Steven's comment mentions one way of generating approximations, which is the same or similar to iterating $\kappa' = (\kappa^{a-1} + b)^{1/a}$. So $b^{1/a}$ is a good first approximation, $(b^{1-1/a}+b)^{1/a}$ is a good second approximation, and so on.</p> <p>A cheap bound worth mentioning is $\sqrt[a]{b} &lt; \kappa \leq b+1$, and $\kappa\leq b$ for $a,b \geq 2$. One easy way to check these bounds is by using the fact that $\kappa$ is always greater than $1$, and $p(x)$ is increasing on $[1,\infty)$... but again, I could go on for quite a long time about this, and it's a little beyond the scope of the question.</p> <p>Anyways—why is my claim accurate? Well, we're looking at the asymptotics of the sequence $\{f_n\}$, where $f_n = f_{n-1} + bf_{n-a}$ for $n\geq a+1$, and $f_n = 1$ for $n=1,\ldots, a$.</p> <p>The characteristic polynomial of this sequence is $p(x) = x^a - x^{a-1}-b$, which has no repeated roots. By the standard theory of such recurrences (quick summary: one varies the initial conditions to get a vector space of dimension $a$, then shows that the $a$ roots give linearly independent solutions), we can write $f_n = \sum_{i=1}^a c_i \kappa_i^n$, where $\kappa_1, \ldots,\kappa_a$ are the roots of $p(x)$.</p> <p>The conclusion is more or less immediate, though we should be slightly careful. We should make sure that the $\kappa^n$ term is actually nonzero, and that it is actually the term of largest magnitude. 
These are not complicated facts, but in the short time I thought about this I only came up with complicated proofs, given below.</p> <p>To show that $\kappa$ is the root with the largest magnitude, we can use Rouché's Theorem on the circle with radius $\kappa$. This proof (if you follow it through) shows that $z^a$ and $p(z)$ have the same number of roots in this disc, and that all the roots of $p(z)$ but $\kappa$ lie in the interior.</p> <p>We check that the coefficient of $\kappa^n$ is nonzero. Since $\kappa$ is the root of largest magnitude, given any nonnegative initial values $f_1, \ldots f_a$, we must get a nonnegative coefficient of $\kappa^n$, since otherwise the sequence, which is always positive, would have to approach $-\infty$. But the $a$ sequences given by initial values $\{1,0,\ldots 0\}, \{0,1,0,\ldots 0\}$, etc. generate the $a$-dimensional vector space of all sequences satisfying our recurrence. Hence at least one of them must have a positive coefficient of $\kappa^n$, and so their sum does as well.</p>
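To see the claim in action, one can iterate the fixed-point map mentioned above, $\kappa' = (\kappa^{a-1}+b)^{1/a}$, and compare the result with the empirical growth ratio $f_{n+1}/f_n$ of the recurrence; a Python sketch (the variable names are mine):

```python
def kappa(a, b, iters=200):
    # fixed-point iteration kappa' = (kappa^(a-1) + b)^(1/a),
    # starting from the crude first approximation b^(1/a)
    k = b ** (1.0 / a)
    for _ in range(iters):
        k = (k ** (a - 1) + b) ** (1.0 / a)
    return k

def f_seq(a, b, n):
    f = [1.0] * a                      # f_1 = ... = f_a = 1
    while len(f) < n:
        f.append(f[-1] + b * f[-a])    # f_n = f_{n-1} + b f_{n-a}
    return f

a, b = 3, 2
f = f_seq(a, b, 200)
growth_ratio = f[-1] / f[-2]           # should approach kappa
```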
19,521
<p>I am trying to integrate a hat function for a project that I am doing and have found a method to do so but I find it sloppy. Currently I have the basis function</p> <pre><code>\[Psi][z_] := z - Subscript[Z, i]/ \[CapitalDelta]z + 1; </code></pre> <p>which I am trying to integrate from $z_{i-1}$ to $z_{i+1}$. I break the basis function up into two pieces and integrate the left side from $z_{i-1}$ to $z_{i}$ and then the right side from $z_i$ to $z_{i+1}$. My first question is, is there a way to integrate piecewise functions? The second question I have is, is there a way to set global assumptions like $z_{i-1} &lt; z_i &lt; z_{i+1}$, $z_i - z_{i-1} = \Delta z$ , etc?</p> <p><strong>Edit</strong>: This is the piecewise function taken directly from my code I am trying to integrate</p> <pre><code>\[Psi][z_, c_] := Piecewise[{{(z - c)/\[CapitalDelta]z + 1, z &lt;= c}, {-(z - c)/\[CapitalDelta]z + 1, z &gt; c}}]; </code></pre> <p>where $c$ is the center of the hat function. Here is my attempt to integrate the piecewise function</p> <pre><code> FullSimplify[ Integrate[\[Psi][z, Subscript[Z, i]], {z, Subscript[Z, i - 1], Subscript[Z, i + 1]}], Assumptions -&gt; {-(Subscript[Z, i + 1] - Subscript[Z, i ]) == -\[CapitalDelta]z, Subscript[Z, i + 1] - Subscript[Z, i ] == \[CapitalDelta]z, -(Subscript[Z, i] - Subscript[Z, i - 1 ]) == -\[CapitalDelta]z, Subscript[Z, i] - Subscript[Z, i - 1 ] == \[CapitalDelta]z}] </code></pre> <p>I do not get a usable answer. Am I doing something wrong (i.e., can one integrate a piecewise function)?</p>
Niki Estner
242
<p>Your integral would have worked too, if you had added assumptions that all variables are real:</p> <pre><code>(* New assumptions so Integrate can do its work *) Assuming[{z \[Element] Reals, c \[Element] Reals, \[CapitalDelta]z \[Element] Reals, Subscript[Z, i - 1] \[Element] Reals, Subscript[Z, i + 1] \[Element] Reals}, (* The rest is just copied from the question *) FullSimplify[ Integrate[\[Psi][z, Subscript[Z, i]], {z, Subscript[Z, i - 1], Subscript[Z, i + 1]}], Assumptions -&gt; {-(Subscript[Z, i + 1] - Subscript[Z, i ]) == -\[CapitalDelta]z, Subscript[Z, i + 1] - Subscript[Z, i ] == \[CapitalDelta]z, -(Subscript[Z, i] - Subscript[Z, i - 1 ]) == -\[CapitalDelta]z, Subscript[Z, i] - Subscript[Z, i - 1 ] == \[CapitalDelta]z}]] </code></pre> <p>Result:</p> <p>$\begin{array}{cc} \{ &amp; \begin{array}{cc} \frac{3 \left(Z_i-Z_{i+1}\right){}^2}{\text{$\Delta $z}} &amp; Z_i&gt;Z_{i+1} \\ \frac{\left(Z_i-Z_{i+1}\right){}^2}{\text{$\Delta $z}} &amp; Z_i&lt;Z_{i+1} \\ \end{array} \\ \end{array}$</p>
1,585,408
<p>I have the equation: (1-x<sup>2</sup>)u<sup>''</sup> -xu<sup>'</sup>+ku=0, where ' represents differentiation with respect to x and k is a constant.</p> <p>I am asked to show that cos(k<sup>1/2</sup>cos<sup>-1</sup>x) is a solution to this equation.</p> <p>I assumed to show this you need to set u=cos(k<sup>1/2</sup>cos<sup>-1</sup>x) and substitute it into the the Tchebycheff equation and show it equals zero. However, when doing this I get quite a lot of messy differentiation. </p> <p>The question before this asked to put the equation into Sturm Liouville form, so I am unsure if that is meant to be used to solve the question.</p> <p>Any hints on how to advance would be greatly appreciated.</p>
Jan Eerland
226,665
<p>HINT:</p> <p>$$(1-x^2)u''(x)-xu'(x)+ku(x)=0\Longleftrightarrow$$</p> <hr> <p>Let $t=i\sqrt{k}\ln(\sqrt{x^2-1}+x)$, which gives $x=\frac{1}{2}e^{-\frac{it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)$:</p> <hr> <p>$$\left(-\frac{1}{4}e^{-\frac{2it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)^2+1\right)u''(x)-\frac{1}{2}e^{-\frac{it}{\sqrt{k}}}\left(1+e^{\frac{2it}{\sqrt{k}}}\right)u'(x)+ku(x)=0\Longleftrightarrow$$</p> <hr> <p>Apply the chain rule $\frac{\text{d}u(x)}{\text{d}x}=\frac{\text{d}u(t)}{\text{d}t}\frac{\text{d}t}{\text{d}x}$:</p> <hr> <p>$$k\left(u''(t)+u(t)\right)=0\Longleftrightarrow$$</p> <hr> <p>Assume a solution will be proportional to $e^{\lambda t}$ for some constant $\lambda$. Substitute $u(t)=e^{\lambda t}$ into the differential equation:</p> <hr> <p>$$k\cdot\frac{\text{d}^2}{\text{d}t^2}(e^{\lambda t})+k\cdot e^{\lambda t}=0\Longleftrightarrow$$</p> <hr> <p>Substitute $\frac{\text{d}^2}{\text{d}t^2}(e^{\lambda t})=\lambda^2e^{\lambda t}$:</p> <hr> <p>$$e^{\lambda t}\left(k+k\lambda^2\right)=0\Longleftrightarrow$$</p> <hr> <p>Since $e^{\lambda t}\ne 0$ for any finite $\lambda$, the zeros must come from the polynomial:</p> <hr> <p>$$k+k\lambda^2=0\Longleftrightarrow$$ $$k\left(\lambda^2+1\right)=0\Longleftrightarrow$$ $$\lambda=\pm i$$
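Before wading through the substitution, one can check numerically that $u(x)=\cos(\sqrt{k}\cos^{-1}x)$ really satisfies the equation, using central finite differences for the derivatives (a Python sketch, not part of the hint):

```python
import math

def u(x, k):
    return math.cos(math.sqrt(k) * math.acos(x))

def residual(x, k, h=1e-4):
    # (1 - x^2) u'' - x u' + k u, with central finite differences
    up  = (u(x + h, k) - u(x - h, k)) / (2 * h)
    upp = (u(x + h, k) - 2 * u(x, k) + u(x - h, k)) / (h * h)
    return (1 - x * x) * upp - x * up + k * u(x, k)

# sample the residual for k = 2 on a grid in (-0.8, 0.8)
max_residual = max(abs(residual(0.1 * j, 2.0)) for j in range(-8, 9))
```

For $k=4$ the candidate reduces to the Chebyshev polynomial $\cos(2\cos^{-1}x)=2x^2-1$, which can be checked exactly.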
432,964
<p>Let $X\in \mathbb{R}^{n \times n}$. Then, is the function</p> <p>$$ \text{Tr}\left( (X^T X )^{-1} \right)$$ </p> <p>convex in $X$? ($\text{Tr}$ denotes the trace operator)</p>
MathIsArt
450,658
<p>The previous answer is unfortunately not complete due to a nontrivial sign mistake. In fact, even the local convexity result does not hold in this case. To see the obstruction, let <span class="math-container">$X$</span> be an invertible matrix, <span class="math-container">$V$</span> arbitrary, and compute <span class="math-container">$$ (X + t\,V)^\intercal(X+t\,V) = X^\intercal X + t(V^\intercal X + X^\intercal V) + t^2\,V^\intercal V := S^{-2} + t\,Z + t^2\,V^\intercal V,$$</span> where <span class="math-container">$S^2 = (X^\intercal X)^{-1}$</span> is positive definite and <span class="math-container">$Z=Z^\intercal$</span>. Then, series expansion yields <span class="math-container">\begin{equation} \begin{split} \left((X+t\,V)^{\intercal}(X+t\,V)\right)^{-1} &amp;= \left(S^{-1}\left(I + t\,SZS + t^2\,SV^\intercal VS\right)S^{-1}\right)^{-1} \\ &amp;= S\left(I + t\,SZS + t^2\,SV^\intercal VS\right)^{-1}S \\ &amp;= S\left(I + t\left(SZS + t\,SV^\intercal VS\right)\right)^{-1}S \\ &amp;= S\left(I - t\left(SZS + t\,SV^\intercal VS\right) + t^2\left(SZS + t\,SV^\intercal VS\right)^2 + \mathcal{O}(t^3)\right)S \\ &amp;= S\left(I - t\,SZS + t^2\left(SZS^2ZS - SV^\intercal VS\right)\right)S + \mathcal{O}(t^3). \end{split} \end{equation}</span> Notice the minus sign in the <span class="math-container">$\mathcal{O}(t^2)$</span> term, so that positive semidefiniteness is no longer obvious. On the other hand, <span class="math-container">\begin{equation} \begin{split} ZS^2Z - V^\intercal V &amp;= \left(V^\intercal X + X^\intercal V\right)X^{-1}X^{-\intercal}\left(V^\intercal X + X^\intercal V\right) - V^\intercal V \\ &amp;= \left(X^\intercal(VX^{-1}V) + (VX^{-1}V)^\intercal X\right) + (X^\intercal VX^{-1})(X^\intercal VX^{-1})^\intercal \\ &amp;:= A + BB^\intercal. 
\end{split} \end{equation}</span> Therefore, by linearity of the trace, we have <span class="math-container">\begin{equation} \begin{split} \frac{d^2}{dt^2}\bigg|_{t=0}\mathrm{tr}\left(\left((X+t\,V)^{\intercal}(X+t\,V)\right)^{-1}\right) &amp;= 2\,\mathrm{tr}\left(S^2(A+BB^\intercal)S^2\right) \\ &amp;= 2\,\mathrm{tr}\left(S^2AS^2\right) + 2\,\mathrm{tr}\left((S^2B)(S^2B)^\intercal\right), \end{split} \end{equation}</span> and it remains to see if this is nonnegative. The second term is obviously nonnegative since <span class="math-container">$BB^\intercal$</span> is positive semidefinite. However, we compute <span class="math-container">$$ S^2 A S^2 = S^2\left((X^{-1}V)^2\right)^\intercal + (X^{-1}V)^2 S^2 = X^{-1}\left(\left(VX^{-1}VX^{-1}\right)^\intercal + VX^{-1}VX^{-1}\right)X^{-\intercal}, $$</span> which is clearly symmetric, but not necessarily positive semidefinite. Interestingly, using that <span class="math-container">$S^2B = X^{-1}VX^{-1}$</span> we can write <span class="math-container">$$ S^2AS^2 + (S^2B)(S^2B)^\intercal = X^{-1}\left((VX^{-1}VX^{-1})^\intercal + VX^{-1}VX^{-1} + (VX^{-1})(VX^{-1})^\intercal\right)X^{-\intercal}, $$</span> where the term in parentheses is one term away from <span class="math-container">$(C + C^\intercal)^2$</span> for <span class="math-container">$C = VX^{-1}$</span>. So, it is conceivable that the trace of all this could be nonnegative, but in fact it is generally not. One counterexample is afforded by the matrices <span class="math-container">$$ X= \begin{pmatrix}0.9 &amp; 0.85 \\ 0.37 &amp; 0.2\end{pmatrix}, \quad V= \begin{pmatrix}0.08 &amp; 0.34 \\ 0.66 &amp; 0.77\end{pmatrix}. 
$$</span> Computing <span class="math-container">$\mathrm{tr}\left(\left[(X+t\,V)^{\intercal}(X+t\,V)\right]^{-1}\right)$</span> symbolically on the Wolfram Cloud and plotting the result yields the following:</p> <p><a href="https://i.stack.imgur.com/PZNlR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PZNlR.png" alt="enter image description here" /></a></p> <p>Which is concave around <span class="math-container">$t=0$</span>. Hence, the function in the OP cannot be convex at all, despite the fact that it is a composition of convex functions.</p>
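The concavity visible in the plot can be reproduced without symbolic software: for a $2\times 2$ matrix $M$ one has $\operatorname{tr}\left((M^\intercal M)^{-1}\right)=\|M\|_F^2/\det(M)^2$, so a central second difference at $t=0$ suffices. A Python sketch (not part of the original computation; the function name `g` is mine):

```python
def g(t, X, V):
    # tr((M^T M)^{-1}) = ||M||_F^2 / det(M)^2 for a 2x2 matrix M = X + tV
    m = [[X[i][j] + t * V[i][j] for j in range(2)] for i in range(2)]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    frob2 = sum(m[i][j] ** 2 for i in range(2) for j in range(2))
    return frob2 / det ** 2

# the counterexample matrices from above
X = [[0.9, 0.85], [0.37, 0.2]]
V = [[0.08, 0.34], [0.66, 0.77]]

h = 0.01
# negative second difference indicates concavity around t = 0
second_difference = g(h, X, V) - 2 * g(0.0, X, V) + g(-h, X, V)
```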
426,499
<p>Let <span class="math-container">$X$</span> be a separable metric space which is <em>homogeneous</em>, i.e. for every two points <span class="math-container">$x,y\in X$</span> there is a homeomorphism <span class="math-container">$h$</span> of <span class="math-container">$X$</span> onto itself such that <span class="math-container">$h(x)=y$</span>.</p> <p>A compactification of <span class="math-container">$X$</span> is a compact metric space which contains a dense homeomorphic copy of <span class="math-container">$X$</span>.</p> <p>Does <span class="math-container">$X$</span> have a homogeneous compactification?</p> <p>Examples of homogeneous compactifications include the circle for the real line, the torus for the plane etc.</p>
Nate Eldredge
4,832
<p>The countable discrete space <span class="math-container">$\omega$</span> is a counterexample.</p> <p>Suppose <span class="math-container">$Y$</span> is a homogeneous compactification of <span class="math-container">$\omega$</span>, with <span class="math-container">$X \subset Y$</span> being homeomorphic to <span class="math-container">$\omega$</span>. As <span class="math-container">$Y$</span> is infinite, it necessarily contains at least one limit point. So by homogeneity, every point of <span class="math-container">$Y$</span> is a limit point of <span class="math-container">$Y$</span>, including those that are in <span class="math-container">$X$</span>. But since <span class="math-container">$X$</span> is dense in <span class="math-container">$Y$</span>, this implies that each point of <span class="math-container">$X$</span> is a limit point of <span class="math-container">$X$</span>. Thus <span class="math-container">$X$</span> is not discrete, a contradiction.</p>
636,467
<p>What is it that makes something a paradox? It seems to me that paradoxes are just, in many cases, misunderstandings about the properties some object can have and so misunderstandings about definitions. Is there something I might be missing? How is this kind of thought handled in logic?</p>
Carl Mummert
630
<p>"Paradox" is not a formally defined term. Many modern authors use "paradox" for all sorts of surprising or unexpected results; you can see examples by searching for "paradox" on Google News.</p> <p>A more substantial use of the word "paradox" refers to a result that shows that a particular naive intuition is not sound. For example, the Banach-Tarski paradox shows that is it not possible to have a measure of volume for arbitrary subsets of Euclidean space, if that measure satisfies certain basic properties such as invariance under rigid motions and finite additivity. This goes against a certain naive thought that every subset of Euclidean space must have a well-defined volume (it might be 0 for strangely defined sets of points, but at least it should be defined, this intuition would say). </p> <p>The classical paradoxes of logic (also called "antinomies") are somewhat different because they show that our intuition about logic itself is not valid. In particular, these show that the naive ways we talk about "truth" and "sets" in natural language lead to contradictions. An example is <a href="http://plato.stanford.edu/entries/curry-paradox/#2" rel="nofollow">Curry's paradox</a>, which shows that the naive way we prove "if/then" statements in normal mathematics can lead to false results when combined with self-referential sentences (even when there is no negation in the sentences). </p> <p>The thing that makes the classical paradoxes more genuinely <em>paradoxical</em> is that it is hard to see where the problem comes from, or any straightforward way to resolve the issue. Consider the sentence of Curry's paradox</p> <blockquote> <p>If this sentence is true, then 0 = 1</p> </blockquote> <p>We can prove this sentence in the usual way: assume the hypothesis "this sentence is true" and prove that the conclusion must follow. That is how we prove many other implications in mathematics. 
But then, because the sentence is true, its hypothesis is true, so its conclusion must also be true: 0=1. It is very difficult to find a hidden assumption in this argument as with the Banach-Tarski "paradox". </p> <p>One resolution in mathematics is to take refuge in formal logic, where the self-reference of the quoted sentence is impossible. But that does not resolve the issue that the sentence seems to be perfectly clear English, and yet applying the usual methods to it leads to a contradiction.</p>
514
<p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p> <p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p> <p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p> <p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p> <p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p> <p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p> <p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p> <p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p> <p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
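<p>For reference, the iteration described above takes only a few lines of Python (a simple single-number sketch, not the asker's cloud-scale code):</p>

```python
def collatz_steps(n):
    """Number of Collatz steps for n to reach 1.
    (Loops forever if the conjecture fails for n -- no such n is known.)"""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is the classic slow starter among small numbers:
print(collatz_steps(27))  # 111
```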
Eric O. Korman
9
<p>The wikipedia article on the Collatz conjecture gives these three examples of conjectures that were disproved with large numbers:</p> <p><a href="http://en.wikipedia.org/wiki/P%C3%B3lya_conjecture">Polya conjecture</a>.</p> <p><a href="http://en.wikipedia.org/wiki/Mertens_conjecture">Mertens conjecture</a>.</p> <p><a href="http://en.wikipedia.org/wiki/Skewes%27_number">Skewes number</a>.</p>
514
Larry Wang
73
<p>A famous example that is not quite as large as these others is the <a href="http://mathworld.wolfram.com/ChebyshevBias.html">prime race</a>. </p> <p>The conjecture states, roughly: Consider the first n primes, not counting 2 or 3. Divide them into two groups: A contains all of those primes congruent to 1 modulo 3 and B contains those primes congruent to 2 modulo 3. A will never contain more numbers than B. The smallest value of n for which this is false is 23338590792.</p>
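<p>The race is easy to replay for small bounds (the lead change at <span class="math-container">$n = 23338590792$</span> is of course far beyond this); a brute-force Python sketch:</p>

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Team A: primes congruent to 1 mod 3; team B: primes congruent to 2 mod 3.
# The conjecture says A never leads B -- which indeed holds up to this bound.
a = b = 0
never_behind = True
for p in primes_up_to(100_000):
    if p <= 3:
        continue  # 2 and 3 don't participate in the race
    if p % 3 == 1:
        a += 1
    else:
        b += 1
    if a > b:
        never_behind = False
print(never_behind)  # True below this small bound
```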
514
Charles Stewart
100
<p>For an old example, Mersenne made the following <a href="http://en.wikipedia.org/wiki/Mersenne_conjectures" rel="noreferrer">conjecture</a> in 1644:</p> <p><i>The Mersenne numbers, <span class="math-container">$M_n=2^n − 1$</span>, are prime for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and no others.</i></p> <p>Pervushin observed that the Mersenne number <span class="math-container">$M_{61}$</span> is prime, thus refuting the conjecture.</p> <p><span class="math-container">$M_{61}$</span> is quite large by the standards of the day: 2 305 843 009 213 693 951.</p> <p>According to Wikipedia, there are 51 known Mersenne primes as of 2018.</p>
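<p>For Mersenne numbers specifically, the Lucas–Lehmer test settles primality quickly, so Mersenne's errors on both sides of his list are easy to exhibit today. A short Python sketch (not from the original answer):</p>

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and s_{i+1} = s_i^2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(61))   # True  -- Pervushin's prime, absent from Mersenne's list
print(lucas_lehmer(67))   # False -- on Mersenne's list, yet composite
print(lucas_lehmer(257))  # False -- also on the list, also composite
print((1 << 61) - 1)      # 2305843009213693951
```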
514
Jon Bannon
941
<p>I don't know if I would consider this accessible or 'large', but the counterexample of Adyan to the famous <a href="http://en.wikipedia.org/wiki/Burnside%27s_problem">General Burnside Problem</a> in group theory requires an odd exponent greater than or equal to 665. The "shorter" counterexample (proof) due to Olshanskii requires an exponent greater than $10^{10}$. The reason for the large number in the latter proof is essentially due to 'large scale' consequences of Gauss-Bonnet theorem for certain planar graphs expressing relations in groups. It may be that a finer analysis can show that a counterexample can occur at exponent as low as 5, but this is still not known. </p> <p>This is probably essentially different than what you are asking, since we aren't forced to consider 665 because the cases 1-664 are known to be true. I thought it may be fun to point out, here, though! </p>
514
P..
39,722
<p>It is well known that <a href="http://en.wikipedia.org/wiki/Goldbach&#39;s_conjecture" rel="noreferrer">Goldbach's conjecture</a> is one of the oldest unsolved problems in mathematics. A counterexample if it exists it will be a number greater than $4\cdot10^{18}$. </p> <p>What is not well-known is that Goldbach made another conjecture which turned out to be false. The conjecture was </p> <blockquote> <p>All odd numbers are either prime, or can be expressed as the sum of a prime and twice a square.</p> </blockquote> <p>The first of only two known counterexamples is $5777$ (The second being $5993$).</p> <p>This number is not "extremely large" for today's data but surely it was on 1752 when Goldbach proposed this conjecture in a <a href="http://eulerarchive.maa.org/correspondence/letters/OO0878.pdf" rel="noreferrer">letter</a> to Euler who <a href="http://eulerarchive.maa.org/correspondence/letters/OO0881.pdf" rel="noreferrer">failed</a> to find the counterexample. It was found a century later in 1856 by <a href="http://en.wikipedia.org/wiki/Moritz_Abraham_Stern" rel="noreferrer">Moritz Abraham Stern</a> (see <a href="http://archive.numdam.org/ARCHIVE/NAM/NAM_1856_1_15_/NAM_1856_1_15__23_0/NAM_1856_1_15__23_0.pdf" rel="noreferrer">this</a>). The prime numbers that cannot be written as a sum of a (smaller) prime and twice a square are called <a href="http://en.wikipedia.org/wiki/Stern_prime" rel="noreferrer">Stern primes</a>. It is believed that there are only finitely many Stern primes.</p>
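<p>Both counterexamples are small enough to verify by brute force; a Python sketch (helper names are mine):</p>

```python
def is_prime(n):
    """Trial division; fine for four-digit inputs."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_plus_twice_square(n):
    """Is odd n expressible as p + 2*k^2 with p prime and k >= 1?"""
    k = 1
    while 2 * k * k < n:
        if is_prime(n - 2 * k * k):
            return True
        k += 1
    return False

# Odd composites below 6000 with no such representation:
exceptions = [n for n in range(9, 6000, 2)
              if not is_prime(n) and not prime_plus_twice_square(n)]
print(exceptions)  # [5777, 5993]
```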
514
Joe K
64,292
<p>My favorite example, which I'm surprised hasn't been posted yet, is the conjecture:</p> <blockquote> <p>$n^{17}+9 \text{ and } (n+1)^{17}+9 \text{ are relatively prime}$</p> </blockquote> <p>The first counterexample is $n=8424432925592889329288197322308900672459420460792433$</p>
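<p>Thanks to Python's arbitrary-precision integers, this counterexample can be checked directly (the seventeenth powers involved run to roughly 900 digits):</p>

```python
from math import gcd

n = 8424432925592889329288197322308900672459420460792433

# The conjecture holds for small n ...
assert all(gcd(m**17 + 9, (m + 1)**17 + 9) == 1 for m in range(1, 1000))

# ... but fails at this 52-digit value: the two numbers share a common factor.
g = gcd(n**17 + 9, (n + 1)**17 + 9)
print(g > 1)  # True
```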
514
Yuriy S
269,624
<p>In this paper <a href="http://arxiv.org/abs/math/0602498" rel="nofollow noreferrer">http://arxiv.org/abs/math/0602498</a> a sequence of integers is proposed, which, when started with <span class="math-container">$1$</span> begins like this:</p> <p><span class="math-container">$$1, 1, 2, 1, 1, 2, 2, 2, 3, 1, 1, 2, 1, 1, 2, 2, 2, 3, 2, \dots$$</span></p> <p>This is also the sequence <a href="http://oeis.org/A090822" rel="nofollow noreferrer">A090822</a> at OEIS. The description there is somewhat better:</p> <blockquote> <p>Gijswijt's sequence: <span class="math-container">$a(1) = 1$</span>; for <span class="math-container">$n&gt;1$</span>, <span class="math-container">$a(n) =$</span> largest integer <span class="math-container">$k$</span> such that the word <span class="math-container">$a(1)a(2)...a(n-1)$</span> is of the form <span class="math-container">$xy^k$</span> for words <span class="math-container">$x$</span> and <span class="math-container">$y$</span> (where <span class="math-container">$y$</span> has positive length), i.e. 
the maximal number of repeating blocks at the end of the sequence so far.</p> </blockquote> <p>The rules are better explained by demonstration:</p> <p><span class="math-container">$$\color{blue}{1} \to 1$$</span></p> <p><span class="math-container">$$\color{blue}{1} \color{red}{1} \to 2$$</span></p> <p><span class="math-container">$$11 \color{blue}{2} \to 1$$</span></p> <p><span class="math-container">$$112 \color{blue}{1} \to 1$$</span></p> <p><span class="math-container">$$112 \color{blue}{1}\color{red}{1} \to 2$$</span></p> <p><span class="math-container">$$ \color{blue}{112}\color{red}{112} \to 2$$</span></p> <p><span class="math-container">$$11211 \color{blue}{2}\color{red}{2} \to 2$$</span></p> <p><span class="math-container">$$11211 \color{blue}{2}\color{red}{2}\color{green}{2} \to 3$$</span></p> <p><span class="math-container">$$11211222 \color{blue}{3} \to 1$$</span></p> <p>etc.</p> <p>What's really surprising:</p> <ul> <li><span class="math-container">$4$</span> appears for the first time in position <span class="math-container">$220$</span></li> <li><span class="math-container">$5$</span> appears for the first time in approximately position <span class="math-container">$10^{10^{23}}$</span> (sic !)</li> <li>The sequence is <strong>unbounded</strong></li> </ul> <p>To clarify, this fits the question like this: If someone tried to check this sequence for large numbers experimentally they would most likely conclude that it's bounded, and has no numbers larger than <span class="math-container">$4$</span></p> <hr /> <p><strong>Edit</strong></p> <p>Curiously, <a href="http://neilsloane.com/doc/g4g7.pdf" rel="nofollow noreferrer">this paper</a> explicitly states that the authors initially thought that no number greater than <span class="math-container">$4$</span> appears in the sequence.</p>
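<p>The rule is straightforward to implement naively; a Python sketch (the function name is mine) reproducing the opening terms above and the first 4 at position 220:</p>

```python
def gijswijt(n_terms):
    """Generate Gijswijt's sequence (OEIS A090822) by brute force:
    a(n) is the maximal k such that the word a(1)..a(n-1) ends in
    k consecutive copies of some nonempty block."""
    seq = [1]
    while len(seq) < n_terms:
        m = len(seq)
        best = 1
        for block_len in range(1, m // 2 + 1):
            block = seq[m - block_len:]
            k = 1
            # Count how many times the length-block_len suffix repeats.
            while (k + 1) * block_len <= m and \
                  seq[m - (k + 1) * block_len : m - k * block_len] == block:
                k += 1
            best = max(best, k)
        seq.append(best)
    return seq

a = gijswijt(250)
print(a[:19])          # [1, 1, 2, 1, 1, 2, 2, 2, 3, 1, 1, 2, 1, 1, 2, 2, 2, 3, 2]
print(a.index(4) + 1)  # 220 -- the first occurrence of a 4
```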
514
Trevor
493,232
<p>I had a <a href="https://math.stackexchange.com/questions/3305125/is-it-true-that-for-any-two-integers-with-the-same-least-prime-factor-there-mus/3459621#3459621">conjecture</a> that for any two natural numbers with the same least prime factor, there must be at least one number in between them with a higher least prime factor. It seemed extremely robust for small numbers and gave every indication via empirical trends that it would hold for arbitrarily large numbers as well.</p> <p>Just this morning, I discovered a counterexample at 724968762211953720363081773921156853174119094876349. While this may not be the smallest one possible, it's easy to show that any counterexample that does exist can't be too much smaller. I was amazed to see such a big number pop out of a relatively simple problem statement.</p>
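<p>A machine check over small ranges shows why the conjecture looked so robust; a Python sketch (it suffices to check consecutive occurrences of each least prime factor, since a witness for some consecutive sub-pair is a witness for any wider pair containing it):</p>

```python
def least_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

N = 20_000
lpf = [0, 0] + [least_prime_factor(n) for n in range(2, N + 1)]

# For each pair of consecutive numbers sharing a least prime factor p,
# require some number strictly between them with least prime factor > p.
last_seen = {}
holds = True
for n in range(2, N + 1):
    p = lpf[n]
    if p in last_seen:
        prev = last_seen[p]
        if not any(lpf[m] > p for m in range(prev + 1, n)):
            holds = False
    last_seen[p] = n
print(holds)  # True up to this bound -- the counterexample is ~7.2e50
```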
514
Enzo Creti
820,969
<p>I have a conjecture that can perhaps be disproved by an extremely large counterexample:</p> <p>Consider the sequence <a href="https://oeis.org/A301806" rel="nofollow noreferrer">https://oeis.org/A301806</a>.</p> <p>I conjecture that if n divides a(n), then a(n) + 1 is prime.</p>
256,649
<p>I am trying to plot a phase diagram and a Poincare map, but I cannot get the Poincare map shown in the image below.</p> <p>Phase Space</p> <pre><code>sol = NDSolve[{v'[t] == 0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t], x'[t] == v[t], x[0] == 0, v[0] == 0}, {x, v}, {t, 0, 1500}]; ParametricPlot[{x[t], v[t]} /. sol, {t, 200, 1000}, AxesLabel -&gt; {&quot;x&quot;, &quot;v&quot;}, PlotRange -&gt; Full, PlotStyle -&gt; LightGray, Axes -&gt; False, Frame -&gt; True, FrameTicksStyle -&gt; Directive[Black, 20], ImageSize -&gt; {700, 350}, AspectRatio -&gt; Full] </code></pre> <p>Poincare Map</p> <pre><code>poincare[A_, gamma_, omega_, ndrop_, nplot_, psize_] := (T = 2*Pi/omega; g[{xold_, vold_}] := {x[T], v[T]} /. NDSolve[{v'[t] == 0.320 x[t] - 1.65 x[t]^3 - gamma*v[t] + A*Cos[omega*t], x'[t] == v[t], x[0] == xold, v[0] == vold}, {x, v}, {t, 0, T}][[1]]; lp = ListPlot[Drop[NestList[g, {0, 0}, nplot + ndrop], ndrop], PlotStyle -&gt; {PointSize[psize], Black}, Axes -&gt; False, Frame -&gt; True, FrameTicksStyle -&gt; Directive[Black, 20], PlotRange -&gt; All, AxesLabel -&gt; {&quot;x&quot;, &quot;v&quot;}, ImageSize -&gt; {700, 350}, AspectRatio -&gt; Full]) poincare[0.855, 0.005, 1.2, 1000, 200, 0.01] </code></pre> <p>I want a diagram like the one shown in the image below (the phase diagram will be different for my code).</p> <p><a href="https://i.stack.imgur.com/RYPDV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RYPDV.png" alt="Phase Diagram I want" /></a></p>
Chris K
6,358
<p>If you run your first code for the same amount of time as the Poincare section, you'll see that they do agree (and that they're not particularly interesting parameter values):</p> <pre><code>T = 2*Pi/1.2; sol = NDSolve[{v'[t] == 0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t], x'[t] == v[t], x[0] == 0, v[0] == 0}, {x, v}, {t, 0, 1200 T}][[1]]; pp = ParametricPlot[{x[t], v[t]} /. sol, {t, 1000 T, 1200 T}, AxesLabel -&gt; {&quot;x&quot;, &quot;v&quot;}, PlotRange -&gt; Full, PlotStyle -&gt; LightGray, Axes -&gt; False, Frame -&gt; True, FrameTicksStyle -&gt; Directive[Black, 20], ImageSize -&gt; {700, 350}, AspectRatio -&gt; Full, MaxRecursion -&gt; 7]; poincare[0.855, 0.005, 1.2, 1000, 200, 0.005]; Show[pp, lp] </code></pre> <p><a href="https://i.stack.imgur.com/ryetM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ryetM.png" alt="enter image description here" /></a></p>
4,002
<p>I'm trying to obtain a series of points on the unit sphere with a somewhat homogeneous distribution, by minimizing a function depending on distances (I took $\exp(-d)$). My points are represented by spherical angles $\theta$ and $\phi$, starting by choosing equidistributed random vectors:</p> <pre><code>pts = Apply[{2 π #1, ArcCos[2 #2 - 1]} &amp;, RandomReal[1, {100, 2}], 1]; </code></pre> <p>The energy function is defined first:</p> <pre><code>energy[p_] := Module[{cart}, cart = Apply[{Sin[#1]*Cos[#2], Sin[#1]*Sin[#2], Cos[#1]} &amp;, p, 1]; Total[Outer[Exp[-Norm[#1 - #2]] &amp;, cart, cart, 1], 2] ] </code></pre> <p>But now, I can’t manage to get the right routine for minimization. I tried <code>FindMinimum</code>, which does local minimization from a given starting point, which is what I want. But it should operate on an expression of literal variables, so I'm kind of screwed:</p> <pre><code>FindMinimum[energy[p], {p, pts}] </code></pre> <p> </p> <pre><code>Outer::normal: Nonatomic expression expected at position 2 in Outer[Exp[-Norm[#1-Slot[&lt;&lt;1&gt;&gt;]]]&amp;,p,p,1]. &gt;&gt; FindMinimum::nrnum: The function value […] is not a real number at {p} = […] &gt;&gt; </code></pre> <p>The above obviously doesn't work, but I don't think it's wise to introduce a series of 200 literal variables. There has to be another way, hasn't it? Or is there an efficient way of introducing a lot of variables?</p>
Daniel Lichtblau
51
<p>I played around with a few plausible variants on the energy. In all cases I compared the result using the original energy function. Some things I learned:</p> <p>(1) Some variants will tend to give results that do quite well when gauged via the original energy.</p> <p>(2) Others (not shown below) will do poorly because they weigh the far values too heavily. This, alas, means we cannot easily use the GaussNewton (LevenbergMarquardt) method, since it is the operations of squaring that hurts us. Well, maybe there are ways around this.</p> <p>(3) Summing over only distinct pairs rather than all pairs cuts the time in half. I will speculate that the bulk of time is spent in evaluating derivatives and not in the function evaluations themselves, as I am fairly certain Total[Outer[...]] will beat Sum even when the latter only need account for half or so as many pairs.</p> <p>(4) For one energy variant I got a modest speed improvement using the ConjugateGradient method.</p> <p>(5) Scaling appears to be quadratic in the number of points (no huge surprise, I guess).</p> <p>(6) We can handle 200 points in 24 seconds on my desktop machine.</p> <pre><code>In[254]:= pts = Apply[{ArcCos[2 #2 - 1], 2 \[Pi] #1} &amp;, RandomReal[1, {100, 2}], 1]; Clear[a]; vars = Array[a, {Length[pts], 2}]; </code></pre> <p>Here is the basic case.</p> <pre><code>In[292]:= energy[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Total[Outer[Exp[-Sqrt[(#1 - #2).(#1 - #2)]] &amp;, cart, cart, 1], 2]]; In[293]:= t = Timing[{min, vals} = Quiet[FindMinimum[energy[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000]];]; {t, min} Out[294]= {{14.1, Null}, 2978.01} </code></pre> <p>We use Sum over distinct pairs from here onward.</p> <pre><code>In[295]:= energy2[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Sum[Exp[-Sqrt[(cart[[j]] - cart[[k]]).(cart[[j]] - 
cart[[k]])]], {j, Length[p] - 1}, {k, j + 1, Length[p]}]]; In[296]:= t2 = Timing[{min2, vals2} = Quiet[FindMinimum[energy2[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000]];]; {t2, min2, energy[vars /. vals2]} Out[297]= {{6.58, Null}, 1439., 2978.01} </code></pre> <p>Minimize the sum of reciprocals of the pairwise distances squared.</p> <pre><code>In[298]:= energy3[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Sum[1/((cart[[j]] - cart[[k]]).(cart[[j]] - cart[[k]])), {j, Length[p] - 1}, {k, j + 1, Length[p]}] ] In[299]:= t3 = Timing[{min3, vals3} = Quiet[FindMinimum[energy2[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000]];]; {t3, min3, energy[vars /. vals3]} Out[300]= {{6.72, Null}, 1439., 2978.01} </code></pre> <p>This variant on energy happened to get a bit faster using a nondefault method setting.</p> <pre><code>In[301]:= t3b = Timing[{min3b, vals3b} = Quiet[FindMinimum[energy3[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000, Method -&gt; "ConjugateGradient"]];]; {t3b, min3b, energy[vars /. vals3b]} Out[302]= {{5.23, Null}, 5340.65, 2978.01} </code></pre> <p>Maximize sum of distances. I will mention that using the sum of squares, which i would prefer to do, fails to give a useful result. It puts half the points one place and the other half at the polar opposite, I believe. That comes from the further distances getting relatively more weight in the objective function.</p> <pre><code>In[304]:= energy4[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Sum[Sqrt[((cart[[j]] - cart[[k]]).(cart[[j]] - cart[[k]]))], {j, Length[p] - 1}, {k, j + 1, Length[p]}] ] In[305]:= t4 = Timing[{min4, vals4} = Quiet[FindMaximum[energy4[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000]];]; {t4, min4, energy[vars /. 
vals4]} Out[306]= {{8.44, Null}, 6662.64, 2978.} </code></pre> <p>Similar to a couple of tries above, but with distances instead of squared distances.</p> <pre><code>In[308]:= energy5[p_] := Module[{cart}, cart = Map[{Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp;, p]; Sum[1/((cart[[j]] - cart[[k]]).(cart[[j]] - cart[[k]]))^(1/2), {j, Length[p] - 1}, {k, j + 1, Length[p]}] ] In[309]:= t5 = Timing[{min5, vals5} = Quiet[FindMinimum[energy5[vars], Transpose[{Flatten@vars, Flatten@pts}], MaxIterations -&gt; 1000]];]; {t5, min5, energy[vars /. vals5]} Out[310]= {{6.04, Null}, 4448.45, 2978.01} </code></pre> <p>Notice that all of these agreed fairly closely to six places in terms of the original energy function.</p> <p>Now we'll go to 200 points and use the fastest variant from above.</p> <pre><code>In[319]:= pts200 = Apply[{ArcCos[2 #2 - 1], 2 \[Pi] #1} &amp;, RandomReal[1, {200, 2}], 1]; vars200 = Array[a, {Length[pts200], 2}]; t200 = Timing[{min200, vals200} = Quiet[FindMinimum[energy3[vars200], Transpose[{Flatten@vars200, Flatten@pts200}], MaxIterations -&gt; 1000, Method -&gt; "ConjugateGradient"]];]; {t200, min200, energy[vars200 /. vals200]} Out[322]= {{23.59, Null}, 24816.3, 11891.3} </code></pre> <p>Here I crib Mark McClure's code to show both the original points and the result of the optimization. The pictures will have to speak for themselves because i'm not going to speak for them.</p> <pre><code>In[323]:= pts3D = {Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp; /@ pts200; Graphics3D[Point[pts3D]] </code></pre> <p><img src="https://i.stack.imgur.com/58zl6.gif" alt="enter image description here"></p> <pre><code>In[325] := pts3Db = {Sin[#[[1]]]*Cos[#[[2]]], Sin[#[[1]]]*Sin[#[[2]]], Cos[#[[1]]]} &amp; /@ (vars200 /. vals200); Graphics3D[Point[pts3Db]] </code></pre> <p><img src="https://i.stack.imgur.com/wAVzb.gif" alt="enter image description here"></p>
3,805,989
<p>I'm doing Exercise 4 in the textbook Algebra by Saunders MacLane and Garrett Birkhoff.</p> <p><a href="https://i.stack.imgur.com/JQww8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JQww8.png" alt="enter image description here" /></a></p> <blockquote> <p>Show that, if <span class="math-container">$F$</span> is a field, the group of all those automorphisms of <span class="math-container">$F[x]$</span> which leave all elements of <span class="math-container">$F$</span> fixed, consists of substitutions given by <span class="math-container">$x \mapsto a x+b, a \neq 0$</span> and <span class="math-container">$b$</span> in <span class="math-container">$F$</span>.</p> </blockquote> <p>Could you please verify if my understanding is correct? Thank you so much for your help!</p> <hr /> <p><strong>My attempt:</strong></p> <p>Consider a map <span class="math-container">$f: \sum a_n x^n \mapsto \sum a_n (ax+b)^n$</span>. It suffices to show that <span class="math-container">$f$</span> is an automorphism. It's trivial to show that it is a homomorphism. Hence it remains to show that it is bijective.</p> <p>Let <span class="math-container">$p = \sum a_n x^n\in F[x]$</span>. By polynomial division, there are unique polynomials <span class="math-container">$q_1,r_1$</span> such that <span class="math-container">$p = (ax+b)q_1+r_1$</span> and <span class="math-container">$\deg r_1 &lt; \deg(ax+b) = 1$</span>, i.e., <span class="math-container">$r_1 \in F$</span>. Inductively, <span class="math-container">$p = \sum b_n (ax+b)^n$</span> for some <span class="math-container">$b_n$</span>'s. The surjectivity then follows. 
Because such <span class="math-container">$b_n$</span>'s are unique, the injectivity then follows.</p> <hr /> <p><strong>Update:</strong> I add the proof for &quot;If <span class="math-container">$f$</span> is an automorphism on <span class="math-container">$F[x]$</span> such that <span class="math-container">$f(c)=c$</span> for all <span class="math-container">$c \in F$</span>, then <span class="math-container">$f(x)=ax+b$</span> for some <span class="math-container">$a \neq 0$</span> and <span class="math-container">$b$</span> in <span class="math-container">$F$</span>&quot; here.</p> <p>If <span class="math-container">$\deg f(x) &lt; 1$</span>, then <span class="math-container">$\operatorname{im} f \subseteq F$</span>. If <span class="math-container">$\deg f(x) &gt; 1$</span>, then <span class="math-container">$\operatorname{im} f$</span> does not contain such polynomials whose degrees are <span class="math-container">$1$</span>. In both cases, <span class="math-container">$f$</span> is not surjective. As such, <span class="math-container">$\deg f(x) = 1$</span>.</p>
Community
-1
<p>As stated in a comment by OP, it remains to show that for all automorphisms <span class="math-container">$\sigma: F[x] \to F[x]$</span> that leave every element of <span class="math-container">$F$</span> fixed, we have that <span class="math-container">$\sigma(x) = ax + b$</span> for some <span class="math-container">$a, b \in F$</span> with <span class="math-container">$a \neq 0$</span>, or equivalently, <span class="math-container">$\text{deg}\left(\sigma(x)\right) = 1$</span>.</p> <p>Suppose that <span class="math-container">$\sigma$</span> is an automorphism of <span class="math-container">$F[x]$</span> that leaves every element of <span class="math-container">$F$</span> fixed. Then certainly <span class="math-container">$\sigma(x) \notin F$</span> (otherwise we would have <span class="math-container">$x \in F$</span>). Hence, the degree of <span class="math-container">$\sigma(x)$</span> is at least <span class="math-container">$1$</span>. Letting <span class="math-container">$k$</span> be the degree of <span class="math-container">$\sigma(x)$</span>, there exist scalars <span class="math-container">$a_0, a_1, \dots, a_k$</span> in <span class="math-container">$F$</span> with <span class="math-container">$a_k \neq 0$</span> such that</p> <p><span class="math-container">$$\sigma(x) = \sum_{i = 0}^k a_i x^i.$$</span> Noting that <span class="math-container">$\sigma^{-1}$</span> is an automorphism of <span class="math-container">$F[x]$</span> that leaves every element of <span class="math-container">$F$</span> fixed, by an identical argument used to show that the degree of <span class="math-container">$\sigma(x)$</span> is larger than or equal to <span class="math-container">$1$</span>, we get that <span class="math-container">$\text{deg}\left(\sigma^{-1}(x)\right) \geq 1$</span>. 
Letting <span class="math-container">$l$</span> be the degree of <span class="math-container">$\sigma^{-1} (x)$</span>, there exist scalars <span class="math-container">$b_0, b_1, \dots, b_l$</span> in <span class="math-container">$F$</span> with <span class="math-container">$b_l \neq 0$</span> such that <span class="math-container">$$\sigma^{-1}(x) = \sum_{j = 0}^l b_j x^j.$$</span> Moreover</p> <p><span class="math-container">\begin{align} x &amp;= (\sigma^{-1} \sigma)(x) \\ &amp;= \sum_{i = 0}^k a_i \left(\sigma^{-1} (x)\right)^i \\ &amp;= \sum_{i = 0}^k a_i \left( \sum_{j = 0}^l b_j x^j \right)^i. \end{align}</span></p> <p>The coefficient of <span class="math-container">$x^{kl}$</span> in the above is <span class="math-container">$a_k (b_l)^k$</span>, which is nonzero. We know that <span class="math-container">$l \geq 1$</span>, so if <span class="math-container">$k \geq 2$</span>, then <span class="math-container">$kl \geq 2$</span>, implying that the degree of the polynomial <span class="math-container">$x$</span> exceeds <span class="math-container">$1$</span>. Since the latter is not true, we have that <span class="math-container">$k \leq 1$</span>. We showed earlier that <span class="math-container">$k \geq 1$</span>, and so <span class="math-container">$$ \text{deg} \left(\sigma(x)\right) = 1.$$</span></p>
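For a concrete sanity check of the forward direction (that the substitution $x \mapsto ax+b$ with $a \neq 0$ is invertible on polynomials), one can compose coefficient lists over $\mathbb{Q}$ and recover the original polynomial. This is an illustrative Python sketch; the particular substitution $x \mapsto 2x+3$ is just an example, not part of the proof:

```python
from fractions import Fraction

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def substitute(p, q):
    """Coefficient list of p(q(x)), computed by Horner's scheme on polynomials."""
    result = [Fraction(0)]
    for c in reversed(p):
        result = polymul(result, q)
        result[0] += Fraction(c)
    return result

def trim(p):
    """Drop trailing zero coefficients."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# the substitution sigma: x -> 2x + 3, and its inverse tau: x -> (x - 3)/2
sigma = [Fraction(3), Fraction(2)]
tau = [Fraction(-3, 2), Fraction(1, 2)]

p = [1, 1, 0, 1]                                   # 1 + x + x^3
recovered = trim(substitute(substitute(p, sigma), tau))
```

Applying sigma and then tau returns the original coefficient list, mirroring the bijectivity argument in the question.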
103,776
<p>I am curious as to why Wolfram|Alpha is graphing a logarithm the way that it is. I was always taught that a graph of a basic logarithm function $\log{x}$ should look like this:</p> <p><img src="https://i.stack.imgur.com/3SRqI.png" alt="enter image description here"></p> <p>However, Wolfram|Alpha is graphing it like this:</p> <p><img src="https://i.stack.imgur.com/W7JuQ.png" alt="enter image description here"></p> <p>As you can see, there is a "real" range in the region $(-\infty, 0)$, and an imaginary part indicated by the orange line. Is there a part about log graphs that I am missing which would explain why Wolfram|Alpha shows the range of the log function as $\mathbb{R}$?</p>
N. S.
9,176
<p>$\ln(x)$ is formally defined as the solution to the equation $e^y=x$.</p> <p>If $x$ is positive, this equation has a unique real solution, whereas if $x$ is negative it doesn't have a <strong>real</strong> solution. But it has <strong>complex</strong> roots.</p> <p>Indeed, $\ln(x)= a+ib$ is equivalent to </p> <p>$$x= e^{a+ib}= e^{a} (\cos(b)+i \sin (b)) \,.$$ </p> <p>If $x &lt;0$ we need $e^{a}=|x|$, $\cos(b)=-1$ and $\sin(b)=0$. </p> <p>Thus, $a= \ln(|x|)$ and $b=\pi+2k\pi$....</p>
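As a quick numerical illustration of what Wolfram|Alpha is plotting: Python's `cmath` implements exactly this principal branch (the value with $k=0$, so $b=\pi$):

```python
import cmath
import math

x = -5.0
z = cmath.log(x)            # principal branch of the complex logarithm

# real part is ln|x|, imaginary part is the principal angle pi
real_part_matches = math.isclose(z.real, math.log(abs(x)))
imag_part_is_pi = math.isclose(z.imag, math.pi)

# and e^z recovers x, confirming z solves e^y = x
w = cmath.exp(z)
```

So the orange curve in the plot is the constant imaginary part $\pi$ for $x<0$, while the real part is $\ln|x|$.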
3,753,474
<p><strong>Question:</strong></p> <blockquote> <p>If <span class="math-container">$\alpha,\beta,\gamma$</span> are the roots of the equation, <span class="math-container">$x^3+x+1=0$</span>, then find the equation whose roots are: <span class="math-container">$({\alpha}-{\beta})^2,({\beta}-{\gamma})^2,({\gamma}-{\alpha})^2$</span></p> </blockquote> <p>Now, the normal way to solve this question would be to use the theory of equations and find the sum of roots taken one at a time, two at a time and three at a time. Using this approach, we get the answer as <span class="math-container">$(x+1)^3+3(x+1)^2+27=0$</span>. However, I feel that this is a very lengthy approach to this problem. Is there an easier way of doing it?</p>
Jean Marie
305,862
<p>Final constant term <span class="math-container">$1+3+27=31$</span> can be obtained at once (or checked at once) by considering that it is the opposite of the product of roots</p> <p><span class="math-container">$$(({\alpha}-{\beta})({\beta}-{\gamma})({\gamma}-{\alpha}))^2$$</span></p> <p>which is the classical <strong>discriminant</strong> <span class="math-container">$-(4p^3+27q^2)$</span> of a reduced 3rd degree equation <span class="math-container">$X^3+pX+q=0$</span> with <span class="math-container">$p=q=1$</span>. (<a href="https://en.wikipedia.org/wiki/Discriminant#Degree_3" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Discriminant#Degree_3</a>)</p>
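Both claims, that the $y_i=(\alpha-\beta)^2$ etc. satisfy $(y+1)^3+3(y+1)^2+27=0$ and that their product is the discriminant $-31$, are easy to confirm numerically. A Python sketch (the roots are found by bisection plus the quadratic formula rather than by hand):

```python
import cmath

# real root of x^3 + x + 1 = 0 by bisection (f(-1) < 0 < f(0))
lo, hi = -1.0, 0.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid ** 3 + mid + 1 < 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2

# factoring out (x - alpha), the other roots solve x^2 + alpha*x + (1 + alpha^2) = 0
disc = cmath.sqrt(alpha ** 2 - 4 * (1 + alpha ** 2))
beta, gamma = (-alpha + disc) / 2, (-alpha - disc) / 2

ys = [(alpha - beta) ** 2, (beta - gamma) ** 2, (gamma - alpha) ** 2]

# plug each y into the claimed transformed equation
residuals = [abs((y + 1) ** 3 + 3 * (y + 1) ** 2 + 27) for y in ys]

# product of the y's = discriminant of x^3 + x + 1 = -(4*1^3 + 27*1^2) = -31
product = ys[0] * ys[1] * ys[2]
```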
3,753,474
<p><strong>Question:</strong></p> <blockquote> <p>If <span class="math-container">$\alpha,\beta,\gamma$</span> are the roots of the equation, <span class="math-container">$x^3+x+1=0$</span>, then find the equation whose roots are: <span class="math-container">$({\alpha}-{\beta})^2,({\beta}-{\gamma})^2,({\gamma}-{\alpha})^2$</span></p> </blockquote> <p>Now, the normal way to solve this question would be to use the theory of equations and find the sum of roots taken one at a time, two at a time and three at a time. Using this approach, we get the answer as <span class="math-container">$(x+1)^3+3(x+1)^2+27=0$</span>. However, I feel that this is a very lengthy approach to this problem. Is there an easier way of doing it?</p>
lab bhattacharjee
33,337
<p>Hint:</p> <p>Let <span class="math-container">$y=(a-b)^2=(a+b)^2-4ab=(-c)^2-\dfrac4{-c}$</span> as <span class="math-container">$abc=-1, a+b=-c$</span></p> <p><span class="math-container">$$\iff c^3-cy+4=0\ \ \ \ (1) $$</span></p> <p>Again we have <span class="math-container">$$c^3+c+1=0\ \ \ \ (0)$$</span></p> <p>Solve the two simultaneous equations for <span class="math-container">$c,c^3$</span> and use <span class="math-container">$c^3=(c)^3$</span> to eliminate <span class="math-container">$c$</span></p>
3,349,206
<p>This comes theorem 17.1 of commutative ring theory by Matsumura:</p> <p><a href="https://i.stack.imgur.com/TtGXD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TtGXD.png" alt="enter image description here"></a></p> <blockquote> <p>It is easy to see that if <span class="math-container">$\text{Ext}^i_A(N_j/N_{j+1},M)=0$</span> for each <span class="math-container">$j$</span> then <span class="math-container">$\text{Ext}^i_A(N,M)=0$</span>...</p> </blockquote> <p>I am not very familiar with identities of the Ext functor, why is this easy to see?</p>
Angina Seng
436,618
<p>From the short exact sequence <span class="math-container">$$0\to N_1/N_2\to N_0/N_2\to N_0/N_1\to0$$</span> we get a long exact sequence <span class="math-container">$$ \cdots\to\text{Ext}_A^i(N_0/N_1,M) \to\text{Ext}_A^i(N_0/N_2,M) \to\text{Ext}_A^i(N_1/N_2,M)\to\cdots$$</span> and as the outer groups here are zero, so is the inner term. One now proves <span class="math-container">$\text{Ext}_A^i(N_0/N_3,M)=0$</span> etc.</p>
3,349,206
<p>This comes theorem 17.1 of commutative ring theory by Matsumura:</p> <p><a href="https://i.stack.imgur.com/TtGXD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TtGXD.png" alt="enter image description here"></a></p> <blockquote> <p>It is easy to see that if <span class="math-container">$\text{Ext}^i_A(N_j/N_{j+1},M)=0$</span> for each <span class="math-container">$j$</span> then <span class="math-container">$\text{Ext}^i_A(N,M)=0$</span>...</p> </blockquote> <p>I am not very familiar with identities of the Ext functor, why is this easy to see?</p>
E.R
325,912
<p>Suppose we have a filtration <span class="math-container">$0=N_n\subset N_{n-1}\subset \cdots \subset N_1\subset N_0 =N$</span>. For each <span class="math-container">$j=0,1,\dots,n-1$</span> we have a short exact sequence <span class="math-container">$$0\to N_{j+1}\to N_j\to N_j/N_{j+1}\to 0,$$</span> and hence a long exact sequence, a portion of which reads <span class="math-container">$$\cdots\to \text{Ext}^i_A(N_j/N_{j+1}, M)\to \text{Ext}^i_A(N_j, M)\to \text{Ext}^i_A(N_{j+1}, M)\to\cdots.$$</span> By hypothesis the left-hand term vanishes, so exactness at the middle term says that the map <span class="math-container">$\text{Ext}^i_A(N_j, M)\to \text{Ext}^i_A(N_{j+1}, M)$</span> is injective. Chaining these injections, we get <span class="math-container">$$\text{Ext}^i_A(N, M)=\text{Ext}^i_A(N_{0}, M)\hookrightarrow \text{Ext}^i_A(N_{1}, M)\hookrightarrow\cdots\hookrightarrow \text{Ext}^i_A(N_{n}, M)=\text{Ext}^i_A(0, M)=0.$$</span> So <span class="math-container">$\text{Ext}^i_A(N, M)=0$</span>.</p>
755,227
<p>Would it be possible for a ring to have elements that are their own additive inverses? What I mean is, would it be possible to have a ring $K$ of mathematical objects $A$ such that: $$A+A=i,\;\forall A\in K$$</p> <p>Where $i$ is the additive identity?</p>
Omran Kouba
140,450
<p>Yes, consider $K=\Bbb{Z}/2\Bbb{Z}=\{0,1\}$, with addition and multiplication Modulo 2.</p>
755,227
<p>Would it be possible for a ring to have elements that are their own additive inverses? What I mean is, would it be possible to have a ring $K$ of mathematical objects $A$ such that: $$A+A=i,\;\forall A\in K$$</p> <p>Where $i$ is the additive identity?</p>
Robert Lewis
67,071
<p>Yes, it is possible; consider any ring of characteristic $2$; since $1 + 1 = 0$ in such a ring by definition, we have for all $a$ in the ring $a + a = a(1 + 1) = a0 = 0$; this implies $-a = a$; an example, as Dietrich Burde mentioned in his comment, is $\Bbb Z_2 = \Bbb Z/2\Bbb Z$. A more complex example is $\Bbb Z_2[x]$, the polynomial ring over $\Bbb Z_2$, or $GF(2^n)$, the finite field with $2^n$ elements, or $GF(2^n)[x]$, the polynomials with coefficients in $GF(2^n)$; the list goes on . . .</p> <p>Note that the characteristic of a commutative unital ring $A$ is the minimum number of times $1_A$ must be added to itself to produce $0$; it is denoted by $\text{char}A$; it is considered infinite if there is no finite number of times $1_A$ may be added to itself to produce $0$; see <a href="http://en.wikipedia.org/wiki/Characteristic_%28algebra%29" rel="nofollow">this Wikipedia entry</a>.</p> <p>Note that if $\text{char}A = 2$, then $A[x]$ is an infinite ring of characteristic $2$.</p> <p>Hope this helps. Cheers,</p> <p>and as always,</p> <p><strong><em>Fiat Lux!!!</em></strong></p>
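A tiny computational illustration of the characteristic-2 examples above, with coefficient lists standing in for $\Bbb Z_2[x]$ (a sketch, not tied to any particular library):

```python
# Z/2Z: addition modulo 2
Z2 = [0, 1]
add2 = lambda a, b: (a + b) % 2

# every element is its own additive inverse
self_inverse_in_Z2 = all(add2(a, a) == 0 for a in Z2)

# the same holds coefficientwise in Z/2Z[x]; represent a polynomial
# as a list of mod-2 coefficients, lowest degree first
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [(a + b) % 2 for a, b in zip(p, q)]

p = [1, 0, 1, 1]                 # 1 + x^2 + x^3 over Z/2Z
self_inverse_poly = poly_add(p, p)   # the zero polynomial
```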
3,953,153
<p>I need help to prove this: If <span class="math-container">$\gcd(a,b)=1$</span> then <span class="math-container">$\gcd(a+b, a^2-ab+b^2)$</span> is equal to <span class="math-container">$1$</span> or <span class="math-container">$3$</span>. I have done this:</p> <p>Let <span class="math-container">$d$</span> be the g.c.d. of <span class="math-container">$(a+b, a^2-ab+b^2)$</span>, then <span class="math-container">$d$</span> divides <span class="math-container">$a+b$</span> and <span class="math-container">$d$</span> divides <span class="math-container">$a^2-ab+b^2$</span>. That implies <span class="math-container">$d$</span> divides <span class="math-container">$(a+b)n + (a^2-ab+b^2)m$</span>, for some <span class="math-container">$n,m$</span> integers. Let <span class="math-container">$n$</span> and <span class="math-container">$m$</span> be <span class="math-container">$a$</span> and <span class="math-container">$1$</span> respectively, then <span class="math-container">$d$</span> divides <span class="math-container">$2a^2+ b$</span>. With the same argument but with <span class="math-container">$n=b$</span> we get <span class="math-container">$d$</span> divides <span class="math-container">$a^2+2b$</span>. Then <span class="math-container">$d$</span> divides <span class="math-container">$3a^2+3b^2$</span>. That implies <span class="math-container">$3a^2+3b^2 \geq d$</span>, and we get that <span class="math-container">$3 \geq d$</span>, because <span class="math-container">$\gcd(a^2+b^2)=1$</span>. So <span class="math-container">$d$</span> must be <span class="math-container">$3$</span> or <span class="math-container">$1$</span> because if <span class="math-container">$d =2$</span>, <span class="math-container">$d$</span> has to divide <span class="math-container">$a^2+2b$</span> and <span class="math-container">$2a^2+ b$</span>, but we see that no. So <span class="math-container">$d=3$</span> or <span class="math-container">$1$</span>. 
I don't know if I did it well.</p>
nonuser
463,553
<p>You started right. Let <span class="math-container">$d =\gcd(a+b, a^2-ab+b^2)$</span>, then <span class="math-container">$d$</span> divides <span class="math-container">$a+b$</span> and <span class="math-container">$a^2-ab+b^2$</span>. But from here it is pretty messy.</p> <p>We can say <span class="math-container">$$d\mid (a+b)^2- (a^2-ab+b^2) = 3ab.$$</span></p> <p>So if a prime <span class="math-container">$p\mid d$</span> we have <span class="math-container">$p\mid 3$</span> or <span class="math-container">$p\mid a$</span> or <span class="math-container">$p\mid b$</span>. Say <span class="math-container">$p\nmid 3$</span> and say <span class="math-container">$p\mid a$</span>. Since <span class="math-container">$p\mid a+b$</span>, we then have <span class="math-container">$p\mid b$</span>, a contradiction. So <span class="math-container">$p\mid 3$</span> and thus <span class="math-container">$d=3^n$</span>.</p> <p>Say <span class="math-container">$n\geq 2$</span>, then <span class="math-container">$9\mid 3ab$</span> so <span class="math-container">$3\mid ab$</span> so <span class="math-container">$3\mid a$</span> or <span class="math-container">$3\mid b$</span>. In both cases <span class="math-container">$3$</span> then also divides the other number, since <span class="math-container">$3\mid a+b$</span>, a contradiction. So <span class="math-container">$n\leq 1$</span> and we are done.</p>
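A brute-force check over small coprime pairs confirms both that $d\in\{1,3\}$ and that both values actually occur (a numerical sanity check, not a substitute for the proof):

```python
from math import gcd

# collect gcd(a+b, a^2 - ab + b^2) over coprime pairs (a, b)
values = set()
for a in range(1, 60):
    for b in range(1, 60):
        if gcd(a, b) == 1:
            values.add(gcd(a + b, a * a - a * b + b * b))
```

For example, $a=1, b=2$ gives $\gcd(3,3)=3$, while $a=1, b=1$ gives $\gcd(2,1)=1$.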
1,821,411
<p>$f:[a,b]\rightarrow R$ that is integrable on [a,b]</p> <p>So we need to prove:</p> <p>$$\int_{-b}^{-a}f(-x)dx=\int_{a}^{b}f(x)dx$$</p> <p>1.) So we'll use a property of definite integrals: (homogeny I think it's called?)</p> <p>$$\int_{-b}^{-a}f(-x)dx=-1\int_{-b}^{-a}f(x)dx$$</p> <p>2.) Great, now using the fundamental theorem of calculus:</p> <p>$$-1\int_{-b}^{-a}f(x)dx=(-1)^2\int_{-a}^{-b}f(x)dx=\int_{-a}^{-b}f(x)dx$$</p> <p>This is where I'm stuck. For some reason I think it might be smarter to skip step 2, to leave it asL</p> <p>$$-1\int_{-b}^{-a}f(x)dx$$ </p> <p>because graphically, we've "flipped" the graph about the x-axis, but we're still calculating the same area. Proving that using properties seems to have stumped me.</p> <p>I prefer hints over solutions, thanks.</p>
Dinesh.hmn
308,545
<p>Let f(x) = x; then f(-x) = -x. Substituting -a and -b into the limits of integration: if f(a) = a, then f(-a) = -(-a) = a.</p> <p>The point is simply that you are multiplying both the limits and the function by -1; when both are multiplied, the two sign changes neutralize each other.</p>
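Whatever route the proof takes, the identity being proved is easy to confirm numerically with midpoint sums; here with the sample choice $f(x)=e^x$ on $[1,3]$ (a sanity check, not a proof):

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 3.0
f = math.exp

lhs = midpoint(lambda x: f(-x), -b, -a)   # integral of f(-x) over [-b, -a]
rhs = midpoint(f, a, b)                   # integral of f over [a, b]
```

The two sums sample exactly the same function values, just in reverse order, which is the geometric "flip" described in the question.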
4,374,521
<p>In the <span class="math-container">$(x,t)$</span>-plane, the characteristics of the initial value problem <span class="math-container">$$u_t+uu_x=0$$</span> with <span class="math-container">$$u(x,0)=x,0\leq x\leq 1$$</span> are</p> <p><span class="math-container">$1$</span>. parallel straight lines.</p> <p><span class="math-container">$2.$</span> straight lines which intersect at <span class="math-container">$(0,-1)$</span>.</p> <p><span class="math-container">$3.$</span> non-intersecting parabolas.</p> <p><span class="math-container">$4.$</span> concentric circles with center at the origin.</p> <p>I am learning partial differential equations, so I don't have a good knowledge of them yet. As I understand it, the characteristic equations are</p> <p><span class="math-container">$$\frac{dt}{1}=\frac{dx}{u}=\frac{du}{0}$$</span> Now <span class="math-container">$u=c$</span> by the last fraction. So by the first two fractions I have <span class="math-container">$x-ct=k$</span>, where <span class="math-container">$c$</span> and <span class="math-container">$k$</span> are constants. Now I don't know how to use the initial condition <span class="math-container">$u(x,0)=x$</span>, or what the final answer is. I see that <span class="math-container">$x-ct-k=0$</span> are straight lines in the <span class="math-container">$(x,t)$</span>-plane. Please help me reach the final option. Thank you.</p>
EditPiAf
418,542
<p>The resolution of the initial value problem is discussed in <a href="https://math.stackexchange.com/q/305727/418542">this post</a>. So we end up with the set of curves <span class="math-container">$$ u=C_1, \qquad x-ut=C_2 $$</span> where <span class="math-container">$C_1$</span>, <span class="math-container">$C_2$</span> are constants. These curves are straight lines in the <span class="math-container">$x$</span>-<span class="math-container">$t$</span> plane, thus options 3. and 4. are eliminated. Now we implement the boundary condition <span class="math-container">$u(x,0)=x$</span> at <span class="math-container">$t=0$</span>: <span class="math-container">$$ x=C_1, \qquad x-x\cdot 0 = C_2 , $$</span> i.e. <span class="math-container">$C_1=C_2=c$</span>. To see if the curves <span class="math-container">$x-ct=c$</span> are parallel, we look at the slope in <span class="math-container">$x$</span>-<span class="math-container">$t$</span> coordinates, whose value equals <span class="math-container">$$c=u = \frac{x}{1+t}.$$</span> Are these curves parallel? Lastly you could check for the intersection of two characteristics by solving the system <span class="math-container">$$ x-a t = a, \qquad x-b t = b $$</span> with respect to <span class="math-container">$(x,t)$</span> for <span class="math-container">$0\leq a\neq b \leq 1$</span>.</p>
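Carrying out that last step: subtracting the two equations gives $(b-a)t=a-b$, so $t=-1$ and then $x=a(1+t)=0$; every pair of distinct characteristics meets at $(0,-1)$, which is option 2. A small numerical double-check (the finite-difference part also verifies that $u=x/(1+t)$ solves the PDE):

```python
def intersection(a, b):
    """Meeting point of the characteristics x - a*t = a and x - b*t = b (a != b)."""
    t = (a - b) / (b - a)       # subtract the two equations: (b - a)*t = a - b
    x = a + a * t               # back-substitute into x - a*t = a
    return x, t

# every pair of distinct characteristics meets at the same point (0, -1)
points = {intersection(a, b) for a in (0.2, 0.5, 0.9) for b in (0.1, 0.7)}

# finite-difference check that u(x,t) = x / (1 + t) satisfies u_t + u*u_x = 0
u = lambda x, t: x / (1 + t)
h = 1e-6
x0, t0 = 0.7, 0.3
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
residual = u_t + u(x0, t0) * u_x
```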
3,888,146
<p>When we give a proof that the tangent is the sine to cosine ratio of an oriented angle,</p> <p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\tan \alpha=\frac{\sin\alpha}{\cos \alpha}}$$</span> with <span class="math-container">$\cos \alpha \neq 0$</span>, we take the tangent <span class="math-container">$t$</span> at <span class="math-container">$A(1,0)\equiv S$</span> to the circle with center <span class="math-container">$O(0,0)$</span> and radius <span class="math-container">$r=1$</span>. See the image</p> <p><a href="https://i.stack.imgur.com/LPQPV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LPQPV.png" alt="https://www.youmath.it/images/stories/funzioni-elementari/definizione-tangente.png" /></a></p> <blockquote> <p>Has the name tangent been given because we consider the tangent to the circle of radius <span class="math-container">$1$</span> at the point <span class="math-container">$A\equiv S$</span>, or for another reason?</p> </blockquote>
Tbw
498,717
<p>As Christian said, the center has equal distance to <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and therefore lies on the perpendicular bisector of <span class="math-container">$AB$</span>. This center is therefore at the intersection of <span class="math-container">$AB$</span> and <span class="math-container">$ℓ$</span>, and the radius is simply the distance of this center to <span class="math-container">$A$</span> or <span class="math-container">$B$</span>.</p>
3,755,288
<p>I'm trying to solve this:</p> <blockquote> <p>Which of the following is the closest to the value of this integral?</p> <p><span class="math-container">$$\int_{0}^{1}\sqrt {1 + \frac{1}{3x}} \ dx$$</span></p> <p>(A) 1</p> <p>(B) 1.2</p> <p>(C) 1.6</p> <p>(D) 2</p> <p>(E) The integral doesn't converge.</p> </blockquote> <p>I've found a lower bound by manually calculating <span class="math-container">$\int_{0}^{1} \sqrt{1+\frac{1}{3}} \ dx \approx 1.1547$</span>. This eliminates option (A). I also see no reason why the integral shouldn't converge. However, to pick an option out of (B), (C) and (D) I need to find an upper bound too. Ideas? Please note that I'm not supposed to use a calculator to solve this.</p> <p>From <strong>GRE problem sets by UChicago</strong></p>
IPPK
719,117
<p>Let's try to use integration by parts on <span class="math-container">$I = \int\limits_0^1 \sqrt{1 + \frac{1}{3x}}dx$</span>. First, transform the integral into <span class="math-container">$\frac{2}{\sqrt3}\int\limits_0^1\frac{\sqrt{1 + 3x}}{2\sqrt{x}}\,dx$</span>. Now <span class="math-container">$u = \sqrt{3x+1}$</span> and <span class="math-container">$dv = \frac{dx}{2\sqrt{x}}$</span> and what we get after IBP is <span class="math-container">$$\frac{2}{\sqrt3}\sqrt{x(3x+1)}|_0^1 - \sqrt{3}\int\limits_0^1 \sqrt{\frac{x}{3x+1}}dx = \frac{4}{\sqrt3} - \sqrt{3}\int\limits_0^1 \sqrt{\frac{x}{3x+1}}dx$$</span>. We have <span class="math-container">$$\frac{5}{2\sqrt3} = \frac{4}{\sqrt3} - \sqrt{3}\int\limits_0^1 \sqrt{\frac{x}{3x + x}}dx &lt; I &lt;\frac{4}{\sqrt3} - \sqrt3\int\limits_0^1 \sqrt{\frac{x}{3 + 1}}dx = \frac{4}{\sqrt3} - \frac{\sqrt3}{2} \frac{2}{3}x\sqrt{x}|_0^1 = \sqrt3$$</span> <span class="math-container">$\frac{5}{2\sqrt3} \approx 1.44$</span> and <span class="math-container">$\sqrt3 \approx 1.73$</span>, so the answer is (C).</p> <p>If one doesn't know the value of <span class="math-container">$\sqrt3$</span>, we can check that <span class="math-container">$1.7^2 &lt; 3 &lt; 1.8^2$</span> and then <span class="math-container">$3 &lt; 1.75^2$</span>. Therefore, <span class="math-container">$\sqrt3 &lt; 1.75$</span>. From this we have <span class="math-container">$\frac{5}{2\sqrt3} &gt; \frac{5}{2\cdot1.75} &gt; 1.42$</span> and, for the integral, <span class="math-container">$1.42 &lt; I &lt; 1.75$</span>.</p>
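A direct numerical evaluation backs this up. The midpoint rule copes with the integrable singularity at $0$ because it never samples $x=0$ (a Python sketch):

```python
import math

def integrand(x):
    return math.sqrt(1 + 1 / (3 * x))

# midpoint rule on [0, 1]; the midpoints stay away from the singularity at 0
n = 200_000
h = 1.0 / n
I = sum(integrand((i + 0.5) * h) for i in range(n)) * h

lower = 5 / (2 * math.sqrt(3))   # ~1.443, the lower bound derived above
upper = math.sqrt(3)             # ~1.732, the upper bound derived above
closest = min([1, 1.2, 1.6, 2], key=lambda opt: abs(opt - I))
```

This gives $I \approx 1.59$, inside the bounds and closest to option (C).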
1,182,844
<p>I have completed Velleman's book, 'How to Prove It', and have also worked through Apostol Vol. 1. I have dabbled in many rigorous single-variable calculus textbooks, e.g., Apostol, Spivak, Courant, Lang, etc. I had started working through Lang's 'Calculus of Several Variables' but set it aside to do a book like Edwards, 'Advanced Calculus: A Differential Forms Approach.' I now see that book to be a waste of time, because it is extremely 'hand-wavy'. I can't stand the mathematics in physics texts, and I will certainly not tolerate the same from an actual math book. Therefore I am back at my starting point: I have finished linear algebra via Lang and have been perusing Hubbard and Hubbard's book and Artin. I do not like that Hubbard and Hubbard is so wordy; I don't have the patience to put up with long-winded explanations of trivial facts just to get to the meat.</p> <p>I would really like to work through Spivak's 'Calculus on Manifolds'; the problem is that I need to know whether I can do without a book like Lang's 'Calculus of Several Variables'. My goal is to get to manifolds and skip 'vector calculus', but I do not want to shortchange myself on computation if Spivak's book would leave me in that state.</p> <p>I need help determining whether it is worth the time to work through Lang's book or whether I can just skip it. I want to be able to apply forms, etc., to physics, although I am a math major.</p>
Moya
192,336
<p>I honestly don't think skipping vector calculus is a particularly good idea. Yes, you can certainly do it and make do by jumping immediately to calculus on manifolds, but you're going to have a severe gap in your understanding from never having sat down to do computations you might consider too simple. At some point you have to get your hands dirty with calculations, and you need to do it early on so you understand what's going on later. Yes, abstraction is great and allows you to prove some truly striking things, but pretty much every professor I've had, in both undergraduate and graduate work, agrees that you don't understand the material if you can't do the computations, regardless of how well you can prove something.</p> <p>To the question itself: Spivak has a mixture of straight computational problems and good theoretical problems, but the book is so short that there are not nearly enough problems, in my opinion, to get a really solid grasp of vector calculus and calculus on (Euclidean) submanifolds. His goal is not to give you a solid calculus textbook, or even a solid introduction to manifolds, but rather to make sense of Stokes' Theorem as quickly as possible, and he addresses at least one topic (integration on chains) which, as far as I'm aware, is pretty outdated.</p> <p>In terms of using a textbook to self-study vector calculus, I think you should read a combination of Lang's book, Munkres's <em>Analysis on Manifolds</em> (which is similar to Spivak, though a little more drawn out, with a few more computational exercises), and honestly any decent standard multivariable calculus textbook like Hubbard or even Stewart, used just for basic problem solving. These have a lot of problems to solve, and should give you the solid computational background in vector calculus that you'll need.</p>
115,821
<p>Here are some sample data:</p> <pre><code>data = {{1}, {50., 53, 52, 52}, {100., 105, 104, 104}, {150., 157, 156, 156}, {200., 209, 208, 208}, {250., 261, 260, 260}, {300., 313, 312, 313}, {2}, {50., 53, 52, 51}, {100., 106, 105, 102}, {150., 158, 157, 153}, {200., 211, 210, 204}, {250., 265, 264, 256}, {300., 319, 318, 307}, {3}, {50., 53, 52, 52}, {100., 106, 105, 104}, {150., 158, 158, 156}, {200., 211, 210, 209}, {250., 264, 263, 261}, {300., 317, 316, 313}, {4}, {50., 51, 50, 51}, {100., 102, 101, 102}, {150., 153, 152, 152}, {200., 204, 203, 204}, {250., 256, 256, 254}, {300., 309, 309, 305}, {5}, {50., 52, 51, 52}, {100., 104, 104, 104}, {150., 156, 155, 156}, {200., 208, 208, 208}, {250., 260, 260, 260}, {300., 312, 311, 312}} </code></pre> <p>As we can see, there are five sets of four columns. Now we will delete the indices (1,2,3,4,5) and create the corresponding sub lists.</p> <pre><code>d2 = SplitBy[data, Dimensions][[2 ;; ;; 2]]; </code></pre> <p>The first column is time, while the other three are some integers (nx, ny, nz). I want to see the time-evolution of these integers. In particular, I want to <code>ListPlot</code> all five <code>nx</code> versus time in a single plot (one on top of each other). Then I suppose it would be trivial to do the same for <code>ny</code> and <code>nz</code> versus time.</p> <p>Any suggestions?</p>
demm
30,122
<p>Maybe you can consider the min and max of the individual tables within <code>normData</code>:</p> <pre><code>min = Map[Min, normData]; max = Map[Max, normData];
Table[
 ListVectorPlot[data[[i]], PlotRange -&gt; All,
  PlotLegends -&gt; BarLegend[{"Rainbow", {min[[i]], max[[i]]}}],
  VectorColorFunction -&gt;
   Function[{x, y, vx, vy, n},
    ColorData["Rainbow"][Rescale[n, {min[[i]], max[[i]]}]]],
  VectorColorFunctionScaling -&gt; False, ImageSize -&gt; 300], {i, 1, 2}]
</code></pre>
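For what it's worth, the per-dataset normalization that `Rescale[n, {min[[i]], max[[i]]}]` performs above is just a linear map of each dataset's own min/max onto [0, 1]. A rough Python sketch of that idea, with made-up data purely for illustration:

```python
def rescale(value, vmin, vmax):
    """Map value in [vmin, vmax] linearly onto [0, 1], like Rescale[]."""
    if vmax == vmin:
        return 0.0
    return (value - vmin) / (vmax - vmin)

# per-dataset min/max, mirroring min = Map[Min, normData]; max = Map[Max, normData]
norms = [[0.2, 0.5, 0.9], [10.0, 30.0, 50.0]]  # hypothetical magnitudes
mins = [min(d) for d in norms]
maxs = [max(d) for d in norms]

# each dataset's own extremes map to 0 and 1, independently of the others
scaled = [[rescale(v, mins[i], maxs[i]) for v in d] for i, d in enumerate(norms)]
print(scaled)
```

This is the same effect as setting `VectorColorFunctionScaling -> False` and rescaling explicitly: each plot's color range spans exactly its own data, not the global range.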
571,941
<p>I know that $\sum _{ n=1 }^{ \infty }{ { (-1) }^{ n+1 }\frac { 1 }{ n } =\ln(2) }$ .</p> <p>How about the series $\sum _{ n=1 }^{ \infty }{ { (-1) }^{ n+1 } } \frac { 1 }{ \sqrt { n } }$ </p> <p>To what number does it converge?</p>
Adam
93,671
<p>Let's consider partial sums: let $A(x)$ be the total number of problems solved in the first $x$ days. Now, what is the remainder of $A(x)$ upon division by $229$? As the days go by $A(x)$ changes, but there are only $229$ possible remainders and $365$ days, so there must be two days $x$ and $y$ for which $A(x)$ and $A(y)$ have the same remainder. Say $x$ is the later day; then $A(x)-A(y)$ is positive, divisible by $229$, and represents the number of problems solved over a certain stretch of days (the two initial periods overlap, so their difference is just the sum over the part of the larger period outside the smaller one).</p> <p>To see that this difference is exactly $229$: in any $55$ days he must have solved at least $55$ problems (at least one per day), so in the first $310$ days he solved at most $500-55=445$ problems, which is less than $458 = 2\cdot 229$. There are only $229$ remainders but $310$ days, so within those $310$ days there must be two days $x$ and $y$ such that $A(x)-A(y)$ is divisible by $229$; since it is also positive and smaller than $458$, it has to equal $229$.</p>
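The question body above doesn't restate the underlying puzzle, but the argument reads as the classic setup of $500$ problems solved over $365$ days, at least one per day. Under that assumption, here is an illustrative Python sketch of the prefix-sum pigeonhole (the schedule is randomly generated, not data from the problem):

```python
import random

def block_summing_to(daily, target=229):
    """Pigeonhole on prefix sums: among A(0)..A(310), two must agree
    mod `target`, and their difference is then exactly `target` because
    A(310) <= 445 < 2 * 229 when every day has at least one problem."""
    prefix = [0]
    for d in daily:
        prefix.append(prefix[-1] + d)
    seen = {}                       # remainder -> first index where it occurred
    for idx in range(311):          # the 311 partial sums A(0), ..., A(310)
        r = prefix[idx] % target
        if r in seen:
            return seen[r], idx     # days seen[r]+1 .. idx sum to exactly target
        seen[r] = idx
    return None

# hypothetical schedule: 365 days, at least 1 problem per day, 500 in total
random.seed(0)
daily = [1] * 365
for _ in range(500 - 365):
    daily[random.randrange(365)] += 1

i, j = block_summing_to(daily)
print(sum(daily[i:j]))  # 229
```

The restriction to the first $311$ prefix sums is what forces the difference to be $229$ rather than some larger multiple, exactly as in the counting argument above.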
39,387
<p>It is usual to use the second, third and fourth moments of a distribution to describe certain properties. Do partial moments, or moments higher than the fourth, describe any useful properties of a distribution?</p>
Community
-1
<p>That depends on what you mean by "useful properties". For instance, all of the even moments are required to characterize sub-Gaussian random variables, which are those variables whose tail is majorized by that of a Gaussian random variable. A random variable X is sub-Gaussian if and only if there exists a non-negative number b such that $E[X^{2k}] \leq \frac{(2k)!b^{2k}}{2^k k!}$ for all $k \geq 1$.</p>
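As a quick sanity check (my addition, not part of the original answer): with $b=1$ the right-hand side $\frac{(2k)!}{2^k k!}$ equals the double factorial $(2k-1)!!$, which is exactly the $2k$-th moment of a standard Gaussian, so the bound is tight in the Gaussian case. A small Python verification:

```python
import math

def subgaussian_bound(k, b=1.0):
    # E[X^(2k)] <= (2k)! * b^(2k) / (2^k * k!) for sub-Gaussian X
    return math.factorial(2 * k) * b ** (2 * k) / (2 ** k * math.factorial(k))

def gaussian_even_moment(k):
    # E[X^(2k)] = (2k - 1)!! for X ~ N(0, 1)
    result = 1
    for m in range(1, 2 * k, 2):
        result *= m
    return result

for k in range(1, 8):
    assert subgaussian_bound(k) == gaussian_even_moment(k)
print("bound with b = 1 matches the standard Gaussian even moments")
```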
39,387
<p>It is usual to use the second, third and fourth moments of a distribution to describe certain properties. Do partial moments, or moments higher than the fourth, describe any useful properties of a distribution?</p>
Yemon Choi
763
<p>Not sure if this is what you had in mind, but a bounded random variable (i.e. one which almost surely takes values in some bounded subset of $\mathbb R$) is uniquely determined by its moments.</p> <p>Off the top of my head, one way to see this - probably not the most efficient! - is that the distribution of a random variable $X$ is <a href="http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)" rel="nofollow">uniquely determined</a> by the set of values ${\mathbb E}f(X)$ as $f$ runs over all bounded continuous functions $\mathbb R\to \mathbb R$; and now since $X$ is a.s. bounded, the <a href="http://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem#Weierstrass_approximation_theorem" rel="nofollow">Weierstrass approximation theorem</a> implies that $X$ is uniquely determined by the set of values $\mathbb E p(X)$ as $p$ runs over all polynomials.</p> <p>Thus in some settings, knowing the moments is equivalent to knowing the distribution. (This point of view is much loved by those working in "free probability", but that's another story altogether...)</p> <p>I should also mention that the question of just when we can say that the moment sequence determines the distribution has also been studied: see <a href="http://en.wikipedia.org/wiki/Carleman%27s_condition" rel="nofollow">here</a> for instance.</p>
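To make the Weierstrass step concrete, here is an illustrative sketch of my own (the example below is mine, not from the answer): for $X$ uniform on $[0,1]$, one can approximate $\mathbb{E}f(X)$ using only the moments of $X$, by expanding the Bernstein polynomial of $f$ in powers of $x$ and replacing each $x^m$ by $\mathbb{E}X^m$.

```python
import math

def moment_uniform01(k):
    # E[X^k] = 1/(k+1) for X ~ Uniform(0, 1)
    return 1.0 / (k + 1)

def expectation_from_moments(f, moment, n=12):
    """Approximate E[f(X)] for X supported in [0,1] using only the
    moments of X: take the degree-n Bernstein polynomial of f, expand
    it in powers of x, and replace each x^m by the moment E[X^m]."""
    total = 0.0
    for k in range(n + 1):
        c = f(k / n) * math.comb(n, k)
        for j in range(n - k + 1):   # expand (1-x)^(n-k) binomially
            total += c * math.comb(n - k, j) * (-1) ** j * moment(k + j)
    return total

approx = expectation_from_moments(math.sin, moment_uniform01)
exact = 1 - math.cos(1)   # E[sin X] = int_0^1 sin(x) dx
print(abs(approx - exact))  # small: the moments alone pin down E[f(X)]
```

Since Bernstein polynomials converge uniformly to $f$ on $[0,1]$, taking $n$ larger recovers $\mathbb{E}f(X)$ to any accuracy from the moment sequence alone, which is the content of the argument above (the modest degree here keeps the alternating sums numerically stable).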
4,188,656
<p><strong>Problem</strong>: How many strings of length <span class="math-container">$ n $</span> over <span class="math-container">$ \{ 1,2,3,4,5,6 \} $</span> are there such that the sum of all characters in the string is divisible by <span class="math-container">$ 3 $</span>?</p> <p><strong>Attempt</strong>: Initially I thought about solving this using generating functions ( <span class="math-container">$ (x^0 + x^3 +...)^n = (\frac{1}{1-x^3})^n $</span> ), but that is a little bit problematic since we don't have an equation equal to some number, and maybe this way is too difficult. Then I thought about using a recurrence relation, and I did it like this:<br /> Let <span class="math-container">$ a_n $</span> denote the number of strings of length <span class="math-container">$ n $</span> such that the sum of all characters in the string is divisible by <span class="math-container">$ 3 $</span>. Let's look at the first character: if it is divisible by 3, then there are two choices - <span class="math-container">$ 3,6 $</span> - and the rest of the string of length <span class="math-container">$ n-1 $</span> is legal; similarly, if the first character is not divisible by 3, then there are four choices - <span class="math-container">$ 1,2,4,5 $</span> - and the rest of the string of length <span class="math-container">$ n-1 $</span> is legal. So we have the recurrence relation <span class="math-container">$ a_n = a_{n-1} + a_{n-1} = 2a_{n-1} $</span>.<br /> Obviously this recurrence relation is wrong, <strong>but why?</strong> I keep making mistakes like this when creating recurrence relations, and I'd like to know where my mistake is here. ( Besides having wrong values for different <span class="math-container">$ n $</span>, <strong>where was the fallacy in logic that led me to this recurrence relation?</strong> Maybe it is that if the first character is from <span class="math-container">$\{ 3,6 \} $</span>, then the rest of the string is not necessarily legal? )</p>
Arctic Char
629,362
<p>It's not true even when <span class="math-container">$A$</span> is finite. Example: <span class="math-container">$f_1 = \chi_{[0,1]}$</span> and <span class="math-container">$f_2 = \chi_{[1,2]}$</span>. Then</p> <p><span class="math-container">$$\sup \left\{ \int f_1, \int f_2\right\} = 1 &lt; 2 = \int \sup \{ f_1, f_2\}.$$</span></p>
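A quick numerical illustration of this counterexample (a midpoint rule stands in for the Lebesgue integral, which agrees for these step functions):

```python
def f1(x):  # indicator of [0, 1]
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f2(x):  # indicator of [1, 2]
    return 1.0 if 1.0 <= x <= 2.0 else 0.0

def sup_f(x):
    return max(f1(x), f2(x))

def integrate(f, a, b, n=20000):
    """Midpoint rule; exact enough for these step functions."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = max(integrate(f1, 0, 2), integrate(f2, 0, 2))  # sup of the integrals
rhs = integrate(sup_f, 0, 2)                          # integral of the sup
print(lhs, rhs)  # ~1.0 vs ~2.0
```

The sup of the two integrals is 1, while the integral of the pointwise sup covers both intervals and equals 2, matching the displayed inequality.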
2,363,840
<blockquote> <p>As the heading states, find all $p(x)$ satisfying $p(x+1)=p(x)+3x(x+1)+1$ for all real $x$.</p> </blockquote> <p>I have no idea how to approach this. Any solution, or a guide on how to solve these kinds of questions, would be appreciated!</p>
Jack D'Aurizio
44,121
<p>For any $x\in\mathbb{N}$ we have $$ p(x+1)-p(x) = 6 \binom{x+1}{2}+1 \tag{1} $$ hence by applying $\sum_{x=0}^{n}$ to both sides and exploiting the <a href="https://en.wikipedia.org/wiki/Hockey-stick_identity" rel="nofollow noreferrer">hockey stick identity</a>: $$ p(n+1)-p(0) = 6\binom{n+2}{3}+(n+1) = (n+1)^3 \tag{2} $$ so, assuming $p$ is a polynomial, $p(x)$ and $x^3+p(0)$ agree at every natural number, hence everywhere, and $p(x) = \color{red}{x^3+C}$ with $C=p(0)$; conversely, every such $p$ satisfies the given equation.</p>
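A quick numerical check (illustrative only) that $p(x)=x^3+C$ satisfies the original functional equation for all real $x$, via the identity $(x+1)^3-x^3=3x(x+1)+1$:

```python
def p(x, C=5.0):
    # the claimed general solution p(x) = x^3 + C (C arbitrary)
    return x ** 3 + C

# (x+1)^3 - x^3 = 3x^2 + 3x + 1 = 3x(x+1) + 1, so the equation holds
for x in [-3.0, -1.5, 0.0, 0.25, 2.0, 10.0]:
    residual = p(x + 1) - p(x) - (3 * x * (x + 1) + 1)
    assert abs(residual) < 1e-9
print("p(x+1) - p(x) = 3x(x+1) + 1 holds at all sampled points")
```

The constant $C$ cancels in the difference $p(x+1)-p(x)$, which is why the solution set is a one-parameter family.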