qid: int64 (values 1 – 4.65M)
question: large_string (lengths 27 – 36.3k)
author: large_string (lengths 3 – 36)
author_id: int64 (values -1 – 1.16M)
answer: large_string (lengths 18 – 63k)
173,387
<p>How can I indent properly long code in <em>Mathematica</em>? Are there some best practices?</p>
b3m2a1
38,205
<p>Here's a quick plug for some stuff in that question I linked. I have it all built into my GitHub so you can load a thing to let your cells be indentable by:</p> <pre><code>loadIndenter[] := (
  BeginPackage["Indenter`"];
  Indenter`MakeIndentable::usage = "Makes indentable";
  BeginPackage["`Package`"];
  Indenter`Package`$PackageName = "Indenter";
  EndPackage[];
  Get["https://github.com/b3m2a1/mathematica-BTools/raw/master/Packages/FrontEnd/StylesheetEdits.m"];
  Get["https://github.com/b3m2a1/mathematica-BTools/raw/master/Packages/FrontEnd/IndentableCells.m"];
  EndPackage[];
  )
</code></pre> <p>Then with that loaded just make a cell and run:</p> <pre><code>MakeIndentable@yourCellHere
</code></pre> <p>And it'll be indentable. Here's a demo:</p> <p><a href="https://i.stack.imgur.com/GTPuq.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/GTPuq.gif" alt="enter image description here"></a></p> <p>Indenting is done by selecting a piece and using <kbd>Command</kbd>+<kbd>Shift</kbd>+<kbd>]</kbd><br> Dedenting is done with <kbd>Command</kbd>+<kbd>Shift</kbd>+<kbd>[</kbd><br> Toggling between <code>"\n"</code> newlines and <code>"\[IndentingNewLine]"</code> newlines is done with <kbd>Command</kbd>+<kbd>Alt</kbd>+<kbd>[</kbd></p> <p>Whenever you try to indent a cell, any <code>"\[IndentingNewLine]"</code> newlines get converted to <code>"\n"</code>-type newlines, and I attempt to preserve the formatting if possible (that's the last thing I do in that GIF).</p> <hr> <h3>Indentable notebooks</h3> <p>That function can also make a full notebook or stylesheet indentable. If you open up a Package notebook with this:</p> <pre><code>makeNewPackageNotebook[ops : OptionsPattern[Notebook]] :=
  With[
    {
      flops = FilterRules[{ops}, Options@Notebook]
      },
    FrontEndTokenExecute["NewPackage"];
    SetOptions[Notebooks[][[1]], flops];
    Notebooks[][[1]]
    ]
</code></pre> <p>You can immediately make it indentable by:</p> <pre><code>MakeIndentable@
  makeNewPackageNotebook[]
</code></pre> <p>This'll hang for a bit while it finds and makes a stylesheet for the package.</p> <p>Alternately, you can scrape what makes it indentable off a <code>Notebook</code> and pass that in directly:</p> <pre><code>$indentingStyleSheet =
  With[{nb = CreateDocument[{}, Visible -&gt; False]},
    MakeIndentable@nb;
    (NotebookClose[nb]; #) &amp;@Options[nb, StyleDefinitions]
    ];

newIndentablePackage[ops : OptionsPattern[Notebook]] :=
  makeNewPackageNotebook[
    Flatten@{ops,
      $indentingStyleSheet /.
        (StyleDefinitions -&gt; "Default.nb") -&gt;
          (StyleDefinitions -&gt; "Package.nb")}
    ]
</code></pre> <p>Now any package notebook made with that function will have this indentation autoconfigured.</p>
4,574,692
<p>The theorem states: Let <span class="math-container">$A_{1}, A_{2}, \ldots \in \mathcal{A}$</span> with <span class="math-container">$A_{N}$</span> increasing to <span class="math-container">$\Omega$</span> and <span class="math-container">$\mu (A_{N}) &lt; \infty$</span> for all <span class="math-container">$N \in \mathbb{N}$</span>. For measurable <span class="math-container">$f, g: \Omega \xrightarrow{} E$</span> where <span class="math-container">$E$</span> is a metric space, define</p> <p><span class="math-container">$\tilde{d}(f, g) := \sum_{N = 1}^{\infty} \frac{2^{-N}}{1 + \mu(A_{N})} \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu$</span>.</p> <p>Then <span class="math-container">$\tilde{d}$</span> is a metric that induces convergence in measure: if <span class="math-container">$f, f_{1}, ...$</span> are measurable, then <span class="math-container">$f_{n} \xrightarrow{} f$</span> in measure iff <span class="math-container">$\tilde{d}(f, f_{n}) \xrightarrow{} 0$</span>.</p> <p>In the proof, the author defines <span class="math-container">$\tilde{d}_{N}(f, g) = \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu$</span>.</p> <p>He says <span class="math-container">$\tilde{d}(f, f_{n}) \xrightarrow{} 0 $</span> iff <span class="math-container">$\tilde{d}_{N}(f, f_{n}) \xrightarrow{} 0 $</span> for all <span class="math-container">$N$</span>. Why do we have this? Is first taking the infinite sum and then the limit the same as first taking the limits and then summing? How can this be justified?</p>
Tom
986,425
<p>Thanks to @geetha290krm for the hint!</p> <p>Following this hint, fix <span class="math-container">$m \in \mathbb{N}$</span>; we have that</p> <p><span class="math-container">$\tilde{d}(f, f_{n}) \leq \frac{1}{2^{m}} + \sum_{N = 1}^{m}\frac{2^{-N}}{1 + \mu(A_{N})} \tilde{d}_{N}(f, f_{n})$</span>.</p> <p>Then, if all <span class="math-container">$\tilde{d}_{N}(f, f_{n})$</span> converge to zero, the limit of the RHS is <span class="math-container">$1/2^{m}$</span>. So by the definition of limit, for such <span class="math-container">$m$</span>, there is a <span class="math-container">$k \in \mathbb{N}$</span> s.t. if <span class="math-container">$n \geq k$</span>, we have</p> <p>RHS <span class="math-container">$\leq 1/2^{m} + 1/2^{m}$</span> where the first <span class="math-container">$1/2^{m}$</span> is the limit and the second serves as the <span class="math-container">$\epsilon$</span>.</p> <p>Then for <span class="math-container">$n \geq k$</span>, the LHS is also smaller than <span class="math-container">$1/2^{m} + 1/2^{m}$</span>. Since <span class="math-container">$m$</span> can be arbitrarily large, the limit of the LHS is zero.</p> <p>If not all <span class="math-container">$\tilde{d}_{N}$</span> converge to zero, then it's easy to see that <span class="math-container">$\tilde{d}$</span> doesn't go to zero.</p>
1,894,867
<p>Let $n=3^{1000}+1$. Is $n$ prime?</p> <p>My working so far:</p> <p>$n=3^{1000}+1 \equiv 1 \pmod 3$</p> <p>I notice that $n$ is of the form $3^k+1$.</p> <p>Seeking advice, tips, and methods on progressing with this.</p>
Ennar
122,131
<p>Since we have</p> <p>$$3\equiv 1 \pmod 2 \implies 3^{1000} \equiv 1 \pmod 2 \implies 3^{1000}+1\equiv 0\pmod 2$$</p> <p>$3^{1000}+1$ is not a prime.</p>
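The parity argument is easy to sanity-check directly. The snippet below (a Python sketch added for illustration; it is not part of the original answer) confirms that $3^{1000}+1$ is even and greater than $2$, hence composite:

```python
# 3 is odd, so 3^1000 is odd, and adding 1 gives an even number.
n = 3 ** 1000 + 1

print(n % 2)   # divisibility by 2
print(n > 2)   # an even number greater than 2 cannot be prime
```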
1,102,928
<p>Let $\mathcal{H}$ be a Hilbert space. I am trying to show that every self-adjoint idempotent continuous linear transformation is the orthogonal projection onto some closed subspace of $\mathcal{H}$. If $P$ is such an operator, the obvious thing is to consider $S=\{Px:x\in\mathcal{H}\}$. However, I'm having trouble showing that S is in fact closed even though I'm sure this should be almost trivial. I tried to show that if $x_n\to x$ and $x_n\in S$ then $x\in S$ but somehow I just can't quite do it...</p>
tomasz
30,222
<p><strong>Hint</strong>: use continuity of $P$. Then show that the kernel of $P$ is orthogonal to $S$.</p>
1,842,340
<p>A polynomial with integer coefficients is called primitive if its coefficients are relatively prime. For example, $$3{x^2} + 7x + 9$$ is primitive while $$10{x^2} + 5x + 15$$ is not.</p> <p>(a) Prove that the product of two primitive polynomials is primitive.</p> <p>(b) Use this to prove Gauss's Lemma: If a polynomial with integer coefficients can be factored into polynomials with rational coefficients, it can also be factored into primitive polynomials with integer coefficients.</p>
Benjamin Lindqvist
96,816
<p>This will not be possible because the new code would be MDS. It is known that such codes do not exist for $n&gt;k+1$.</p>
2,964,359
<blockquote> <p>Let <span class="math-container">$(X, d)$</span> be a metric space with no isolated points, and let <span class="math-container">$A$</span> be a relatively discrete subset of <span class="math-container">$X$</span>. Prove that <span class="math-container">$A$</span> is nowhere dense in <span class="math-container">$X$</span>.</p> </blockquote> <p><strong>Relatively discrete subset of</strong> <span class="math-container">$X$</span>: a subset <span class="math-container">$A$</span> of a topological space <span class="math-container">$(X,\mathscr T)$</span> is relatively discrete provided that for each <span class="math-container">$a\in A$</span>, there exists <span class="math-container">$U\in \mathscr T$</span> such that <span class="math-container">$U \cap A=\{a\}$</span>.</p> <p>My aim is to prove <span class="math-container">$\operatorname{int}(\overline{A})=\emptyset$</span>. Suppose, if possible, that <span class="math-container">$\operatorname{int}(\overline{A})\neq \emptyset$</span>, and let <span class="math-container">$x\in \operatorname{int}(\overline{A})$</span>, which implies that there exists <span class="math-container">$B_d(x,\epsilon)\subset \overline A=A\cup A'$</span>.</p> <p>How do I complete the proof?</p> <p>If the metric space is replaced by an arbitrary topological space, will the result still hold?</p>
José Carlos Santos
446,262
<p>Let <span class="math-container">$x\in\mathring{\overline A}$</span>. Then there is an <span class="math-container">$r&gt;0$</span> such that <span class="math-container">$B_r(x)\subset\overline A$</span>. Since <span class="math-container">$B_r(x)$</span> is an open set which is contained in <span class="math-container">$\overline A$</span>, it contains some element <span class="math-container">$a\in A$</span>. But then, if <span class="math-container">$r'=r-d(x,a)$</span>, <span class="math-container">$B_{r'}(a)\subset B_r(x)$</span>. In particular, <span class="math-container">$B_{r'}(a)\subset\overline A$</span>. This is impossible, since <span class="math-container">$A$</span> is relatively discrete.</p> <p>I will not answer the question from the title, since you already got an answer in the comments.</p>
2,946,384
<p>How to prove that any integer n which is not divisible by 2 or 3 is not divisible by 6?</p> <p>The point was to prove separately the inverse, converse and contrapositive statements of the given statement: "for all integers n, if n is divisible by 6, then n is divisible by 3 and n is divisible by 2". I have the proof for the converse and inverse similar to that given in comments. I have trouble only with the proof that an integer not divisible by 2 or 3 is not divisible by 6.</p> <p>As I review my proof for the inverse statement, I'm not sure of it as well. "For all integers n, if n is not divisible by 6, n is not divisible by 3 or n is not divisible by 2."</p> <p>n = 6*x where x is not an integer<br> n = 2*3*x<br> n/2 = 3*x and n/3 = 2*x where 2x or 3x is not an integer,<br> so n is not divisible by 2 or 3</p>
Bernard
202,857
<p>If you make the substitution <span class="math-container">$\;t=\mathrm e^x\iff x=\ln t$</span>, so that <span class="math-container">$\;\mathrm dx=\dfrac{\mathrm d t}t$</span>, we obtain <span class="math-container">$$\int_{1}^{\infty}\frac{\mathrm e^{x}+\mathrm e^{3x}}{\mathrm e^{x}-\mathrm e^{5x}}\,\mathrm dx=\int_{\mathrm e}^{\infty}\frac{t+t^3}{t-t^5}\dfrac{\mathrm d t}t = \int_{\mathrm e}^{\infty}\frac{1+t^2}{t-t^5}\,\mathrm d t.$$</span> Now you can prove the convergence using the comparison test: near <span class="math-container">$+\infty$</span> the integrand has a simple equivalent: <span class="math-container">$$\frac{1+t^2}{t-t^5}\sim_{+\infty}\frac{t^2}{-t^5}=-\frac1{t^3},$$</span> which has a convergent integral on the same interval.</p>
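The stated equivalent can be confirmed numerically: $t^3$ times the integrand should approach $-1$ as $t$ grows. A quick Python check (illustrative only; the helper name is mine, not from the answer):

```python
def integrand(t: float) -> float:
    # the integrand after the substitution t = e^x
    return (1 + t * t) / (t - t ** 5)

# t^3 * integrand(t) should tend to -1, matching the -1/t^3 equivalent
for t in [10.0, 100.0, 1000.0]:
    print(t, t ** 3 * integrand(t))
```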
2,946,384
<p>How to prove that any integer n which is not divisible by 2 or 3 is not divisible by 6?</p> <p>The point was to prove separately the inverse, converse and contrapositive statements of the given statement: "for all integers n, if n is divisible by 6, then n is divisible by 3 and n is divisible by 2". I have the proof for the converse and inverse similar to that given in comments. I have trouble only with the proof that an integer not divisible by 2 or 3 is not divisible by 6.</p> <p>As I review my proof for the inverse statement, I'm not sure of it as well. "For all integers n, if n is not divisible by 6, n is not divisible by 3 or n is not divisible by 2."</p> <p>n = 6*x where x is not an integer<br> n = 2*3*x<br> n/2 = 3*x and n/3 = 2*x where 2x or 3x is not an integer,<br> so n is not divisible by 2 or 3</p>
Henry Lee
541,220
<p>I think it should be: <span class="math-container">$$I=\int_1^\infty\frac{e^x+e^{3x}}{e^x-e^{5x}}dx$$</span> <span class="math-container">$u=e^x$</span> so <span class="math-container">$dx=\frac{du}{u}$</span> so: <span class="math-container">$$I=\int_e^\infty\frac{u+u^3}{u-u^5}\frac{1}{u}du=\int_e^\infty\frac{1+u^2}{u(1-u^4)}du=\int_e^\infty\frac{1+u^2}{u(1-u^2)(1+u^2)}du=\int_e^\infty\frac{1}{u(1-u^2)}du$$</span> <span class="math-container">$v=1-u^2$</span> <span class="math-container">$$I=-\frac{1}{2}\int_{1-e^2}^{-\infty}\frac{1}{v(1-v)}dv=\frac{1}{2}\int_{-\infty}^{1-e^2}\frac{1}{v(1-v)}dv$$</span> and this can easily be solved using partial fractions</p>
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a>, when explaining that not all sets are countable, states the following:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is Theorem 1.5.2, found on page 13 of the article.</p>
Billy
39,970
<p>$\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality, so there are (lots of) bijective maps from one to the other. In fact, there is one (or perhaps a few) that you might call "obvious" or "natural" bijections, e.g. $(a,b) \mapsto a+bi$. This is more than just a bijection:</p> <ul> <li>$\mathbb{R}^2$ and $\mathbb{C}$ are also metric spaces (under the 'obvious' metrics), and this bijection is an isometry, so these spaces "look the same".</li> <li>$\mathbb{R}^2$ and $\mathbb{C}$ are also groups under addition, and this bijection is a group homomorphism, so these spaces "have the same addition".</li> <li>$\mathbb{R}$ is a subfield of $\mathbb{C}$ in a natural way, so we can consider $\mathbb{C}$ as an $\mathbb{R}$-vector space, where it becomes isomorphic to $\mathbb{R}^2$ (this is more or less the same statement as above).</li> </ul> <p>Here are some differences:</p> <ul> <li>Viewing $\mathbb{R}$ as a ring, $\mathbb{R}^2$ is actually a direct (Cartesian) product of $\mathbb{R}$ with itself. Direct products of rings <em>in general</em> come with a natural "product" multiplication, $(u,v)\cdot (x,y) = (ux, vy)$, and it is not usually the case that $(u,v)\cdot (x,y) = (ux-vy, uy+vx)$ makes sense or is interesting in general direct products of rings. The fact that it makes $\mathbb{R}^2$ look like $\mathbb{C}$ (in a way that preserves addition and the metric) is in some sense an accident. (Compare $\mathbb{Z}[\sqrt{3}]$ and $\mathbb{Z}^2$ in the same way.)</li> <li>Differentiable functions $\mathbb{C}\to \mathbb{C}$ are not the same as differentiable functions $\mathbb{R}^2\to\mathbb{R}^2$. (The meaning of "differentiable" changes in a meaningful way with the base field. See complex analysis.) The same is true of linear functions. (The map $(a,b)\mapsto (a,-b)$, or $z\mapsto \overline{z}$, is $\mathbb{R}$-linear but not $\mathbb{C}$-linear.)</li> </ul>
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a>, when explaining that not all sets are countable, states the following:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is Theorem 1.5.2, found on page 13 of the article.</p>
James S. Cook
36,530
<p>My thought is this: $\mathbb{C}$ is not $\mathbb{R}^2$. However, $\mathbb{R}^2$ paired with the operation $(a,b) \star (c,d) = (ac-bd, ad+bc)$ provides a <strong>model</strong> of the complex numbers. There are others, though. For example, a colleague of mine insists that complex numbers are $2 \times 2$ matrices of the form: $$ \left[ \begin{array}{cc} a &amp; -b \\ b &amp; a \end{array} \right] $$ but another insists, no, complex numbers have the form $$ \left[ \begin{array}{cc} a &amp; b \\ -b &amp; a \end{array} \right] $$ but they both agree that complex multiplication and addition are mere matrix multiplication rules for a specific type of matrix. Another friend says, no, that's nonsense, you can't teach matrices to undergraduates, they'll never understand it. Maybe they'll calculate it, but they won't really understand. Students get algebra. We should model the complex numbers as the quotient by the ideal generated by $x^2+1$ in the polynomial ring $\mathbb{R}[x]$; in fact, $$ \mathbb{C} = \mathbb{R}[x]/\langle x^2+1\rangle.$$ So, why is it that $\mathbb{C} = \mathbb{R}^2$ paired with the operation $\star$? It's because that model is easily implemented by the rule $i^2=-1$: just <strong>proceed normally</strong>. In other words, if you know how to do real algebra then the rule $i^2=-1$ paired with those real algebra rules gets you fairly far, at least until you face the dangers of exponents. For example, $$ -1 = \sqrt{-1} \sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1 $$ oops. Of course, this is easily remedied either by choosing a branch of the square root or working with sets of values as opposed to single-valued functions.</p> <p>All of this said, I like Rudin's answer for your question.</p>
106,219
<blockquote> <p>Define a sequence of functions $f_n: (0,1)\rightarrow\mathbb{R}$ by<br> $f_n(x) = \begin{cases} 1/q^n &amp; \text{if } x = p/q \text{ (nonzero, in lowest terms)}\\ 0 &amp; \text{otherwise} \end{cases}$<br> Find the pointwise limit $f$ of $\{f_n\}$ and show $\{f_n\}$ converges uniformly. </p> </blockquote> <p>$f$ looks like a modification of Thomae's function to me, but I can't see how a function that converges uniformly can also have a pointwise limit -- I thought uniform convergence was a stronger type of convergence?</p>
VSJ
21,330
<p>For every irrational $x \in (0,1)$, $$\lim_{n \to \infty} f_n(x) = 0$$ is obvious. For a rational $x = \frac{p}{q}$ in lowest terms, it's again easy to see that $$\lim_{n \to \infty} f_n(x) = \lim_{n \to \infty} \frac{1}{q^n} = 0$$<br> Thus the pointwise limit is $0$ at all $x \in (0,1)$.<br> To show uniform convergence, note that $f_1(x) \leq 1$ for all $x$. Also $f_2(x) \leq \frac{1}{2}$ and so on; it's easy to see (since for any rational $\frac{p}{q} \in (0,1)$ in lowest terms, $q \geq 2$) that $$f_n(x) \leq \frac{1}{2^n}$$<br> Now given $\epsilon &gt; 0$, choose $N$ such that $$\frac{1}{2^N} &lt; \epsilon$$ and you're done.</p>
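The uniform bound can be spot-checked over a finite grid of rationals. Below is a small Python sketch (illustrative only, assuming the $1/q^n$ definition with $p/q$ in lowest terms); it computes the supremum of $f_n$ over all rationals in $(0,1)$ with denominator up to 60:

```python
from fractions import Fraction

def f_n(x: Fraction, n: int) -> Fraction:
    # at a nonzero rational p/q in lowest terms, f_n takes the value 1/q^n
    return Fraction(1, x.denominator ** n)

# supremum of f_n over rationals p/q in (0,1) with q up to 60;
# Fraction(p, q) reduces to lowest terms automatically
sups = []
for n in range(1, 6):
    sup = max(f_n(Fraction(p, q), n)
              for q in range(2, 61) for p in range(1, q))
    sups.append(sup)
    print(n, sup)
```

The supremum is attained at $x = 1/2$, consistent with the bound $f_n(x) \le 1/2^n$.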
2,038,520
<p>I know that the series b. converges, as $\sum \frac{1}{n^p}$ converges for $p&gt;1$, so a. also converges. I want to know the sum.</p> <blockquote> <blockquote> <p>a. $1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}+\cdots$</p> <p>b. $1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\cdots$</p> </blockquote> </blockquote>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>$$1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}+\cdots=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\frac{1}{25}+\frac{1}{36}+\frac{1}{49}+\cdots-(\frac{1}{4}+\frac{1}{16}+\frac{1}{36}+\cdots)$$ Now $$\frac{1}{4}+\frac{1}{16}+\frac{1}{36}+\cdots=\frac{1}{4}(1+\frac{1}{4}+\frac{1}{9}+\cdots)$$</p>
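The hint's bookkeeping can be verified numerically: the even-denominator terms are a quarter of the full sum, so series a. is $3/4$ of series b. A quick Python check of the partial sums (illustrative only, not part of the hint):

```python
K = 200_000  # number of terms in each partial sum

full = sum(1.0 / k ** 2 for k in range(1, K + 1))    # series b.
odd = sum(1.0 / k ** 2 for k in range(1, K + 1, 2))  # series a.

# per the hint: odd-square sum = full sum - (1/4) * full sum
print(odd, 0.75 * full)
```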
189,069
<p>The Survival Probability for a walker starting at the origin is defined as the probability that the walker stays positive through n steps. Thanks to the Sparre-Andersen Theorem I know this probability is given by</p> <pre><code>Plot[Binomial[2 n, n]*2^(-2 n), {n, 0, 100}] </code></pre> <p>However, I want to validate this empirically.</p> <p>My attempt to validate this for <code>n=100</code>:</p> <pre><code>FoldList[ If[#2 &lt; 0, 0, #1 + #2] &amp;, Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0]] </code></pre> <p>I want <code>FoldList</code> to stop if <code>#2 &lt; 0</code> evaluates to <code>True</code>, not just substitute in 0.</p>
m_goldberg
3,066
<p>It seems to me that this is a problem to which <code>Catch</code> and <code>Throw</code> can be usefully applied.</p> <pre><code>SeedRandom[1];
Module[{result = {0}, s},
  Catch[
    Fold[
      If[#2 &lt; 0,
        Throw[Null],
        result = {result, s = #1 + #2}; s] &amp;,
      0,
      Accumulate[RandomVariate[NormalDistribution[0, 1], 100]]]];
  result // Flatten]
</code></pre> <p><a href="https://i.stack.imgur.com/IP9wP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IP9wP.png" alt="result"></a></p>
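For comparison, the same early-exit fold pattern can be sketched outside Mathematica. Here is a rough Python analogue (names and structure are mine, not from the answer) where `itertools.takewhile` plays the role of `Catch`/`Throw`, stopping at the first negative value instead of substituting 0:

```python
import itertools
import random

random.seed(1)
steps = [random.gauss(0.0, 1.0) for _ in range(100)]
walk = list(itertools.accumulate(steps))  # cumulative sums, as in Accumulate

# keep folding only while the incoming values stay non-negative;
# takewhile stops outright at the first negative value (like Throw)
kept = list(itertools.takewhile(lambda v: v >= 0, walk))
result = [0.0] + list(itertools.accumulate(kept))
print(len(kept), result[:3])
```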
1,121,845
<p>Let $G$ be the multiplicative group of non-zero complex numbers. Consider the group homomorphism $\phi:G\rightarrow G$ defined by $\phi(z)=z^4$.</p> <p>1. Identify the kernel $H=\ker\phi$.</p> <p>2. Identify $G/H$.</p> <p>My try:</p> <p>Let $z\in \ker \phi$; then $\phi(z)=1\implies z^4=1$. Let $z=re^{i\theta}$; then $r^4\cos 4\theta =1$ and $r^4\sin 4\theta =0$.</p> <p>Then $r=1$, and $\cos 4\theta=1\implies 4\theta=2n\pi\implies \theta=\frac{n\pi}{2}$.</p> <p>Is it correct?</p> <p>I can't proceed with the 2nd problem.</p> <p>Any hints in this regard?</p>
Andreas Caranti
58,401
<p>You are right about the first point; just finish off by noting that the kernel has four elements, and try and list them (this will turn out to be easy).</p> <p>As to the second point, remember the so-called fundamental theorem of algebra: in particular, for each complex number $a$, there is $z$ such that $z^{4} = a$.</p>
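As a numeric cross-check that $z^4 = 1$ has exactly four solutions on the unit circle, here is a small Python computation (illustrative only; not part of the hint):

```python
import cmath

# candidate kernel elements: e^{2 pi i k / 4} for k = 0, 1, 2, 3
roots = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]
for z in roots:
    print(z, abs(z ** 4 - 1))  # each fourth power equals 1, up to rounding
```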
1,488,388
<p><strong>The Statement of the Problem:</strong></p> <p>Let $G$ be a finite abelian group. Let $w$ be the product of all the elements in $G$. Prove that $w^2 = 1$.</p> <p><strong>Where I Am:</strong></p> <p>Well, I know that the commutator subgroup of $G$, call it $G'$, is simply the identity element, i.e. $1$. But, can I conclude from this that $\forall g \in G, g=g^{-1}$, i.e., $\forall g \in G, g^2 = gg^{-1} = 1$, which is our desired result? That just seems... strange. But, it kind of makes sense. After all, each element in $G$ has an associated inverse element (because it's a group), and because it's abelian, we can always position an element next to its inverse, i.e.</p> <p>$$ w^2 = (g_1g_1^{-1}g_2g_2^{-1}g_3g_3^{-1}\cdot \cdot \cdot g_ng_n^{-1})^2 = (1\cdot 1\cdot 1\cdot \cdot \cdot 1)^2=1.$$</p> <p>Is that all there is to it? Actually, looking at it now, I don't even need to mention the commutator subgroup, do I...</p>
Jacob Maibach
159,592
<p>Consider the following "proof" that $w = 1$. See if you can patch it up to reach the conclusion that $w^{2} = 1$ instead.</p> <p>We partition the non-identity elements of $G$ into two sets, which we call $S = \{g_{1}, g_{2}, \dots\}$ and $S' = \{g_{1}^{-1}, g_{2}^{-1}, \dots\}$. We do this by iteratively building up the two sets. We start with each set empty, and at each step pick an element $g$ from those not in either of $S$ or $S'$. We put $g$ in $S$, and $g^{-1}$ in $S'$.</p> <p>Then we can write out the product $w$ in a convenient way. \begin{equation*} w = \prod_{g \in G}g = 1 \cdot \left(\prod_{i}g_{i}\right) \left(\prod_{i}g_{i}^{-1}\right) = \prod_{i}g_{i}g_{i}^{-1} = \prod_{i}1 = 1 \end{equation*} Note that we used that $G$ is abelian going from $(\prod g_{i})(\prod g_{i}^{-1})$ to $\prod g_{i}g_{i}^{-1}$. Thus we get $w = 1$.</p> <p>Hint: Are $S$ and $S'$ disjoint?</p>
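The theorem itself is easy to spot-check on concrete abelian groups. Here is a small Python experiment over the multiplicative groups $(\mathbb{Z}/n\mathbb{Z})^*$ (an illustration added here, not part of the proof):

```python
from functools import reduce
from math import gcd

def squared_product_is_identity(n: int) -> bool:
    # product of all units mod n, squared, should be 1 mod n
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    w = reduce(lambda x, y: (x * y) % n, units, 1)
    return (w * w) % n == 1

print(all(squared_product_is_identity(n) for n in range(2, 60)))
```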
51,509
<p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
Bob Hanlon
9,362
<pre><code>NumberForm[N[1/243,135],DigitBlock-&gt;27] </code></pre> <blockquote> <p>0.004115226337448559670781893 004115226337448559670781893 004115226337448559670781893 004115226337448559670781893 004115226337448559670781893 00</p> </blockquote> <p>let x = 0.004115226337448559670781893... then for it to repeat forever would require that</p> <pre><code>eqn = (10^27 -1) x == 4115226337448559670781893; Solve[eqn, x] </code></pre> <blockquote> <p>{{x->1/243}}</p> </blockquote> <p>Hence it repeats forever.</p>
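The same conclusion can be reached without Mathematica: for a fraction $1/q$ with $\gcd(q,10)=1$, the length of the repeating block is the multiplicative order of $10$ modulo $q$. A short Python check (illustrative, not part of the original answer):

```python
def decimal_period(q: int) -> int:
    # multiplicative order of 10 modulo q (assumes gcd(q, 10) == 1),
    # which equals the length of the repeating block of 1/q
    k, r = 1, 10 % q
    while r != 1:
        r = (r * 10) % q
        k += 1
    return k

print(decimal_period(243))  # length of the repeating block of 1/243
```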
1,537,648
<p>For example, let's say we have a password alphabet of (a,b,c,d). If the password length is 1, then we have 4 possible passwords (a,b,c,d); if the length is at most 2, then we have 20 possible passwords (a,b,...,dc,dd). I calculated this manually; what is the general rule for counting the possibilities?</p>
yoki
28,262
<p>Every addition of a character multiplies the number of possibilities by $4$. For instance, if you have $N$ possible passwords, and I now add another letter, then any sequence $[xyz...w]$ generates four new ones: $$[xyz...wa], [xyz...wb], [xyz...wc], [xyz...wd].$$</p> <p>So, for a password of length exactly $m$, we have $4^m$ possibilities.</p> <p>According to your question I understand that you consider all the shorter lengths as well, so the number of possibilities for a password of length $m$ or less would be the sum of all possibilities, beginning from $n=1$, that is:</p> <p>$$ A = 4 + 4^2 + ... + 4^m = \sum_{n=1}^m 4^n. $$</p> <p>This is a geometric sum that has a closed form: $A = \frac{1-4^{m+1}}{1-4}-1$, where we subtract $1$ because the formula applies when starting the summation from $n=0$ and not $n=1$.</p>
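Both the direct count and the closed form can be sanity-checked directly (a Python sketch, added for illustration):

```python
def num_passwords(alphabet_size: int, max_len: int) -> int:
    # passwords of every length from 1 up to max_len
    return sum(alphabet_size ** n for n in range(1, max_len + 1))

print(num_passwords(4, 1))   # matches the hand count of 4
print(num_passwords(4, 2))   # matches the hand count of 20

m = 2
closed_form = (1 - 4 ** (m + 1)) // (1 - 4) - 1  # geometric-sum formula
print(closed_form)
```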
3,042,149
<p>We can't exactly draw a line of length square root of 2, but in an isosceles right triangle with legs of 1 unit each, the length of the hypotenuse will be the square root of 2. Does that mean we can get a line of exactly that length?</p> <p>How is it possible? How can we get a line of exact length square root of 2, which we can't construct exactly due to its infinite decimal expansion? So does the Pythagorean theorem mislead us or create a paradox?</p>
Keith Backman
29,783
<p>The apparent paradox results from the difference between the ideal triangle you can construct in your mind, with a hypotenuse of <span class="math-container">$\sqrt{2}$</span>, and an actual figure that you can draw, where making two legs of <em>exactly</em> unit length, meeting at a <em>perfectly</em> right angle, can be approximated, but not actually accomplished. Thus any figure you draw will not feature a line of length exactly <span class="math-container">$\sqrt{2}$</span>, even though the perfect triangle you can conceive of will.</p>
1,115,645
<p>I understand that a primitive polynomial is a polynomial that generates all elements of an extension field from a base field. However I am not sure how to apply this definition to answer my question. Can someone explain to me how I need to start please?</p>
user208259
208,259
<p>The polynomial is irreducible for the reasons given in Will Brooks' answer. $F_{49}^{*}$ is cyclic of order 48. You want to check that a root $\alpha$ of the polynomial must have order 48. Since every element of $F_{49}^{*}$ has order dividing $48$, it's enough to check that $\alpha^{16} \ne 1$ and $\alpha^{24} \ne 1$. So it's enough to check that your given polynomial doesn't divide $X^{24} - 1$ or $X^{16} - 1$. You can do that by computing powers of $X$ modulo $f(X)$. (Successively compute $X^2$, $X^4$, $X^8$, $X^{16}$ and $X^{24}$ modulo $f(X)$.)</p>
2,426,892
<blockquote> <p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p> </blockquote> <p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p> <p>Update: Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
Dr. Sonnhard Graubner
175,066
<p>It is $$44&lt;\sqrt{2017}&lt;45,$$ since $$44^2=1936$$ and $$45^2=2025.$$</p>
2,426,892
<blockquote> <p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p> </blockquote> <p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p> <p>Update: Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
Simon Fraser
717,270
<p><span class="math-container">$\sqrt{1600} = 40$</span> and <span class="math-container">$\sqrt{2500} = 50$</span>. <span class="math-container">$(40+4)^2 = 40^2 + 8\cdot 40 + 16 = 1936$</span> and <span class="math-container">$(40+5)^2 = 40^2 + 10\cdot 40 +25 = 2025$</span>. Hence the desired answer is <span class="math-container">$44$</span> and <span class="math-container">$45$</span>.</p>
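For a purely mechanical cross-check, the integer floor of the square root settles it at once (a Python sketch, added for illustration):

```python
import math

n = 2017
k = math.isqrt(n)  # largest integer k with k*k <= n
print(k, k * k, (k + 1) ** 2)  # 44 falls between 44^2 and 45^2
```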
2,788,498
<p>Suppose $T([a,-b])=[−x,y]$ and $T([a,b])=[x,y]$. Find a matrix $A$ such that $T(x)=Ax$ for all $x\in\mathbb{R}^2$.</p>
MR ASSASSINS117
546,265
<p>$$y''+y=4xe^x$$</p> <p>the characteristic equation is $m^2+1=0$ with solutions $m_{12}=\pm\ i$</p> <p>$$\mathbf{y_{h}(x)=C_1 \cos(x)+C_2\sin(x)}$$</p> <p>from here you can use <em>Undetermined Coefficients</em> or <em>Variation of parameters</em> \begin{align} y_p(x)&amp;=(Ax+B)e^x \\~ \\ y'_p(x)&amp;=Ae^x+Axe^x+Be^x \\ y''_p(x)&amp;=2Ae^x+Axe^x+Be^x \end{align} replace in the ODE $$ (2Ae^x+Axe^x+Be^x)+(Axe^x+Be^x)=4xe^x \\ (2A+2B)e^x+2Axe^x=4xe^x $$ then $ A=2$ and $ B=-2$ $$\mathbf{y_p(x)=2(x-1)e^x} \\~\\ \mathbf{y(x)=C_1 \cos(x)+C_2\sin(x)+2(x-1)e^x} $$</p>
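The particular solution can be double-checked numerically with finite differences; the residual $y_p'' + y_p - 4xe^x$ should vanish at every sample point. A Python sketch (added for illustration; the helper names are mine):

```python
import math

def y_p(x: float) -> float:
    # proposed particular solution y_p(x) = 2 (x - 1) e^x
    return 2.0 * (x - 1.0) * math.exp(x)

def residual(x: float, h: float = 1e-4) -> float:
    # central-difference approximation of y_p'' + y_p - 4 x e^x
    ypp = (y_p(x + h) - 2.0 * y_p(x) + y_p(x - h)) / (h * h)
    return ypp + y_p(x) - 4.0 * x * math.exp(x)

for x in [-1.0, 0.0, 0.5, 2.0]:
    print(x, residual(x))  # each residual should be near zero
```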
3,575,334
<p>I am trying to show that <span class="math-container">$\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt \to 0$</span> as <span class="math-container">$N \to \infty$</span> where <span class="math-container">$|f(N+1/2+it)| \le A/(1+(N+1/2)^2)$</span> for some constant <span class="math-container">$A$</span>.</p> <p>To show this, I have <span class="math-container">$|\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt |\le \frac{A}{1+N^2} \int_{-b}^b \frac{1}{|e^{2\pi i(N+\frac{1}{2})} e^{-2\pi t} - 1|} dt \le \frac{A}{1+N^2} \int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}.$</span></p> <p>My proof would be complete if the integral <span class="math-container">$\int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}$</span> existed. However, I don't think it does. How can I show that the contour integral vanishes in the limit?</p>
fleablood
280,126
<p><span class="math-container">$\color{magenta}{(A\setminus C)}\cap \color{green}{(C\setminus B)}$</span></p> <p><a href="https://i.stack.imgur.com/tzQ3j.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzQ3j.jpg" alt="enter image description here"></a></p> <p>If <em>roundsquare</em> wants to steal this image and add it to his/her excellent answer, s/he has my permission and I will delete this derivative post.</p> <p>But... colors help.</p> <p>Oh... and hopefully J.W. Tanner's point is abundantly clear: <span class="math-container">$\color{magenta}{(A\setminus C)}$</span> contains utterly no part of <span class="math-container">$C$</span> (because <span class="math-container">$C$</span> was subtracted), while <span class="math-container">$\color{green}{(C\setminus B)}$</span> contains <em>ONLY</em> parts of <span class="math-container">$C$</span> because everything was subtracted <em>FROM</em> <span class="math-container">$C$</span>.</p> <p>So the intersection contains no part of <span class="math-container">$C$</span> and <em>only</em> parts of <span class="math-container">$C$</span>. Thus the intersection is empty.</p>
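The identity $(A\setminus C)\cap(C\setminus B)=\emptyset$ is also easy to test mechanically on random sets (a Python sketch, added for illustration):

```python
import random

random.seed(0)
all_empty = True
for _ in range(1000):
    A = {random.randrange(10) for _ in range(5)}
    B = {random.randrange(10) for _ in range(5)}
    C = {random.randrange(10) for _ in range(5)}
    # A - C avoids C entirely, while C - B lies inside C, so they cannot meet
    all_empty = all_empty and ((A - C) & (C - B) == set())
print(all_empty)
```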
2,674,102
<p>Is the following Proof Correct?</p> <blockquote> <p>Given that $T\in\mathcal{L}(\mathbf{R}^2)$ defined by $T(x,y) = (-3y,x)$. $T$ has no eigenvalues.</p> </blockquote> <p><em>Proof.</em> Let $\sigma_T$ denote the set of all eigenvalues of $T$ and assume that $\sigma_T\neq\varnothing$ then for some $\lambda\in\sigma_T$ we have $T(x,y) = \lambda(x,y) = (-3y,x)$ where $(x,y)\neq (0,0)$, equivalently $\lambda x = -3y\text{ and }\lambda y = x$. but then $\lambda(\lambda y) = -3y$ equivalently $y(\lambda^2+3) = 0$. The equation $\lambda^2+3 = 0$ has no solutions in $\mathbf{R}$ consequently $y=0$ and then by equation $\lambda y = x$ it follows that $x=0$ thus $(x,y) = (0,0)$ contradicting the fact that $(x,y)\neq (0,0)$.</p> <p>$\blacksquare$</p>
Jesse Meng
536,610
<p>The proof looks correct to me. Though you could have also handled the case where $\lambda$ is zero separately, since otherwise multiplying both sides by $0$ would reduce the solution set for $x$ and $y$.</p> <p><em>Edit:</em> The case is not necessary, as you are actually using substitution instead of multiplying both sides by $\lambda$.</p>
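A quick numerical cross-check (my addition): in the standard basis $T$ has matrix $\begin{pmatrix}0&-3\\1&0\end{pmatrix}$ with characteristic polynomial $\lambda^2+3$, whose roots are $\pm i\sqrt 3$ — purely imaginary, so there is no real eigenvalue:

```python
import cmath

# Matrix of T(x, y) = (-3y, x) in the standard basis of R^2:
#     [[0, -3],
#      [1,  0]]
trace = 0 + 0
det = 0 * 0 - (-3) * 1          # = 3
# Characteristic polynomial: t^2 - trace*t + det = t^2 + 3.
disc = trace ** 2 - 4 * det     # = -12 < 0, so no real roots
assert disc < 0

roots = [(trace + s * cmath.sqrt(disc)) / 2 for s in (1, -1)]
for lam in roots:
    # Eigenvalues are +/- i*sqrt(3): nonzero imaginary part.
    assert abs(lam.real) < 1e-12
    assert abs(abs(lam.imag) - 3 ** 0.5) < 1e-12
```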
1,278,719
<p>This is a problem from Artin's book "Algebra". In the fifth miscellaneous problem of the chapter "Vector spaces", he asks us to prove that:</p> <p>If $\alpha$ is a cube root of $2$, then the real numbers $a+b\alpha +c\alpha ^2$ with $a,b,c \in \mathbb{Q}$ form a field.</p> <p>I am stuck at proving this. For example, what would be the inverse of $1+\alpha +\alpha^2$? In the previous subpart, we were asked to prove that $1,\alpha, \alpha^2$ are linearly independent over $\mathbb{Q}$, and that went well. Using this result seems to suggest that there is no inverse of $1+\alpha + \alpha^2$ in the above set, which can't happen in a field.</p> <p>Am I missing something? Thanks.</p>
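For the motivating example there is a slick identity: since $\alpha^3 = 2$, we get $(\alpha - 1)(1+\alpha+\alpha^2) = \alpha^3 - 1 = 1$, so the inverse of $1+\alpha+\alpha^2$ is $\alpha - 1 = -1 + \alpha + 0\cdot\alpha^2$, which does lie in the set. A one-line numeric check (my addition):

```python
# alpha^3 = 2, and (alpha - 1)(1 + alpha + alpha^2) = alpha^3 - 1 = 1,
# so alpha - 1 is the inverse of 1 + alpha + alpha^2.
alpha = 2 ** (1 / 3)
assert abs((alpha - 1) * (1 + alpha + alpha ** 2) - 1) < 1e-12
```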
Sam Christopher
239,327
<p>Try the Leibniz theorem for alternating series:</p> <p>$\sum_{n=0}^\infty (-1)^{n}u_{n}$ converges when</p> <p>i) $\lim_{n \to \infty}u_n =0$ and</p> <p>ii) $\{u_n\}$ is a monotonically decreasing sequence. </p>
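As an illustration of the test (my own example), the alternating harmonic series $\sum_{n=0}^\infty (-1)^n/(n+1)$ satisfies both conditions and converges to $\ln 2$, with the error after $N$ terms bounded by the first omitted term:

```python
import math

def partial(N):
    """Partial sum sum_{n=0}^{N-1} (-1)^n / (n+1)."""
    return sum((-1) ** n / (n + 1) for n in range(N))

# u_n = 1/(n+1) decreases monotonically to 0, so the series converges
# by the Leibniz test; its sum is ln 2, and the error after N terms
# is bounded by the first omitted term 1/(N+1).
for N in (10, 100, 1000):
    assert abs(partial(N) - math.log(2)) <= 1 / (N + 1)
```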
4,572,505
<p>There are many approximations of <span class="math-container">$\pi$</span> using trigonometric and rational numbers. But I created this one: <span class="math-container">$$\pi \approx \sqrt[11]{294204}$$</span> Which is correct to almost <span class="math-container">$8$</span> decimal places. Are there any other approximations of <span class="math-container">$\pi$</span> using radicals? I know of <span class="math-container">$\sqrt{10}$</span>, <span class="math-container">$\sqrt[3]{31}$</span>, <span class="math-container">$\sqrt[4]{97}$</span>, and so on. But are there more beautiful ways (like using <span class="math-container">$\varphi$</span> since it is composed of radicals)?</p>
Claude Leibovici
82,404
<p>Playing your game, let me define <span class="math-container">$$A_n=\text{Round}\left[\pi ^{p_n}\right]$$</span> and compute <span class="math-container">$$\Delta_n=\log_{10}\Bigg[\left| A_n^{\frac{1}{p_n}}-\pi \right| \Bigg]$$</span> where <span class="math-container">$p_n$</span> is the <span class="math-container">$n^{\text{th}}$</span> prime number <span class="math-container">$$\left( \begin{array}{ccc} n &amp; A_n &amp; \Delta_n\\ 5 &amp; 294204 &amp; -7.758 \\ 6 &amp; 2903677 &amp; -7.647 \\ 7 &amp; 282844564 &amp; -9.568 \\ 8 &amp; 2791563950 &amp; -10.62 \\ 9 &amp; 271923706894 &amp; -12.71 \\ 10 &amp; 261424513284461 &amp; -16.27 \\ 11 &amp; 2580156526864959 &amp; -16.72 \\ 12 &amp; 2480534602660760780 &amp; -19.83 \\ 13 &amp; 241626620923575111130 &amp; -22.04 \\ 14 &amp; 2384759161287667022585 &amp; -23.76 \\ 15 &amp; 232297222236041657886386 &amp; -25.24 \\ 16 &amp; 223328039155291367651964249 &amp; -28.00 \\ 17 &amp; 214705163466259581466085919504 &amp; -32.68 \\ 18 &amp; 2119055026283205737110078195749 &amp; -31.92 \\ 19 &amp; 2037236602860106711985144771884927 &amp; -35.04 \\ 20 &amp; 198445365705802004702001456166116206 &amp; -37.30 \\ \end{array} \right)$$</span> showing that, more or less, <span class="math-container">$\Delta_n=6.156-2.190\,n$</span></p>
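The first row of the table is easy to reproduce; a short Python check (my addition):

```python
import math

def primes_first(k):
    """First k primes by trial division (fine for small k)."""
    out, n = [], 2
    while len(out) < k:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

p = primes_first(5)                 # [2, 3, 5, 7, 11]
A5 = round(math.pi ** p[4])         # Round[pi^11]
assert A5 == 294204

# Delta_5 = log10 |A_5^(1/11) - pi|; the table reports about -7.758,
# i.e. the 11th root of 294204 matches pi to more than 7 decimals.
delta5 = math.log10(abs(A5 ** (1 / 11) - math.pi))
assert delta5 < -7
```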
4,572,505
<p>There are many approximations of <span class="math-container">$\pi$</span> using trigonometric and rational numbers. But I created this one: <span class="math-container">$$\pi \approx \sqrt[11]{294204}$$</span> Which is correct to almost <span class="math-container">$8$</span> decimal places. Are there any other approximations of <span class="math-container">$\pi$</span> using radicals? I know of <span class="math-container">$\sqrt{10}$</span>, <span class="math-container">$\sqrt[3]{31}$</span>, <span class="math-container">$\sqrt[4]{97}$</span>, and so on. But are there more beautiful ways (like using <span class="math-container">$\varphi$</span> since it is composed of radicals)?</p>
Piquito
219,998
<p>COMMENT: By way of simple pertinent information.</p> <p>There are many remarkable approximations of <span class="math-container">$\pi$</span>. For example <span class="math-container">$$\pi\approx\dfrac{22}{17}+\dfrac{37}{47}+\dfrac{88}{43}\\\pi\approx\sqrt[4]{\frac{2143}{22}}\\\pi\approx\sqrt[9]{\frac{34041350274878}{1141978491}}\\\pi\approx\frac{\ln(640320^3+744)}{\sqrt{163}} $$</span> The first and second each give <span class="math-container">$9$</span> exact decimal places, the third gives <span class="math-container">$15$</span> and the fourth gives <span class="math-container">$30$</span>.</p> <p><strong>Buffon's experiment</strong>.- On any flat surface covered by parallel straight bands of equal width, a rod of length equal to the width of the bands is thrown a given number <span class="math-container">$T$</span> of times. In each throw the rod either stays within a band or cuts one of the parallel delimiting lines. Let <span class="math-container">$C$</span> be the number of cuts obtained; the quotient <span class="math-container">$\dfrac{2T}{C}$</span> will in principle be closer to <span class="math-container">$\pi$</span> the larger the number <span class="math-container">$T$</span>.</p> <p>In 1901, the Italian M. Lazzarini, using a rod with a length equal to <span class="math-container">$\dfrac56$</span> of the width of the bands (in which case the theoretical probability of cutting is <span class="math-container">$\dfrac{5}{3\pi}$</span>), got in 3408 throws the rational approximation <span class="math-container">$\dfrac{355}{113}\approx3.141592$</span>.</p>
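Buffon's experiment is straightforward to simulate. Below is a Monte-Carlo sketch in Python (my addition); with rod length equal to the band width the crossing probability is $2/\pi$, so $2T/C$ estimates $\pi$:

```python
import math
import random

def buffon_pi(T, length=1.0, spacing=1.0, seed=0):
    """Estimate pi by throwing a rod of the given length T times
    onto bands of the given spacing (requires length <= spacing)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(T):
        y = rng.uniform(0, spacing / 2)          # center to nearest line
        theta = rng.uniform(0, math.pi / 2)      # rod angle vs the lines
        if (length / 2) * math.sin(theta) >= y:  # rod cuts a line
            crossings += 1
    # P(cross) = 2*length/(pi*spacing)  =>  pi ~ 2*length*T/(spacing*C)
    return 2 * length * T / (spacing * crossings)

est = buffon_pi(200_000)
assert abs(est - math.pi) < 0.05
```

The convergence is slow (error shrinks like $1/\sqrt{T}$), which is why Lazzarini's six-decimal result from only 3408 throws is usually considered suspiciously lucky.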
2,602,799
<p>This is exactly what is written in Walter Rudin chapter 2, Theorem 2.41:</p> <p>If $E$ is not closed, then there is a point $\mathbf{x}_o \in \mathbb{R}^k$ which is a limit point of $E$ but not a point of $E$. For $n = 1,2,3, \dots $ there are points $\mathbf{x}_n \in E$ such that $|\mathbf{x}_n-\mathbf{x}_o| &lt; \frac{1}{n}$. Let $S$ be the set of these points $\mathbf{x}_n$. Then $S$ is infinite (otherwise $|\mathbf{x}_n-\mathbf{x}_o|$ would have a constant positive value, for infinitely many $n$), $S$ has $\mathbf{x}_o$ as a limit point, and $S$ has no other limit point in $\mathbb{R}^k$. For if $\mathbf{y} \in \mathbb{R}^k, \mathbf{y} \neq \mathbf{x}_o$, then \begin{align} |\mathbf{x}_n-\mathbf{y}| \geq{} &amp;|\mathbf{x}_o-\mathbf{y}| - |\mathbf{x}_n-\mathbf{x}_o|\\ \geq {} &amp; |\mathbf{x}_o-\mathbf{y}| - \dfrac{1}{n} \geq \dfrac{1}{2} |\mathbf{x}_o-\mathbf{y}| \end{align} for all but finitely many $n$. This shows that $\mathbf{y}$ is not a limit point of $S$.</p> <p>The question:</p> <p>I'm stuck in understanding the reason behind why $S$ is infinite. Also I need clarification why the last inequality holds. May someone help, please?</p>
Sahiba Arora
266,110
<p>Suppose $S$ is finite, say $S=\{y_1,\cdots,y_k\}\subseteq\{x_n:n \in \mathbb N\}\subseteq E.$ Then the possible values each $|x_n-x_0|$ can take are $$|y_1-x_0|,\cdots,|y_k-x_0|.$$</p> <p>So there will exists $i \in \{1,\cdots,k\}$ such that $|x_n-x_0|=|y_i-x_0|$ for infinitely many $n.$ So we have $$|y_i-x_0|\leq \frac 1n$$ for infinitely many $n.$ Thus, $x_0 = y_i \in E,$ which is a contradiction.</p> <hr> <p>Since, $x_0 \neq y,$ then there exists $n_0 \in \mathbb N$ such that $$|x_0-y|&gt;\frac {2}{n_0}&gt;\frac 2n$$ for all $n \geq n_0.$ Thus, $$\frac 12 |x_0 -y|&gt;\frac 1n$$ for all $n \geq n_0.$</p> <p>So, $$|x_0-y|-\frac 12 |x_0-y|&gt;\frac 1n$$ for all $n\geq n_0.$ Hence, the last inequality follows.</p>
1,679,920
<p>I'm working for a firm, who can only use straight lines and (parts of) circles.</p> <p>Now I would like to do the following: imagine a square of size $5\times5$. I would like to expand it with $2$ in the $x$-direction and $1$ in the $y$-direction. The expected result is a rectangle of size $7\times9$. Until here, everything is OK.</p> <p>Now I would like the edges to be rounded, but as the length expanding is different in $x$ and $y$ direction, the rounding should be based on ellipses, not on circles, but I don't have ellipses to my disposal, so I'll need to approximate those ellipses using circular arcs.</p> <p>I've been looking on the internet for articles about this matter (searching for splines, Bézier, interpolation, ...), but I don't find anything. I have tried myself to invent my own approximation using standard curvature calculations, but I don't find a way to glue the curvature circular arcs together.</p> <p>Does somebody know a way to approximate ellipses using circular arcs?</p> <p>Thanks<br> Dominique</p>
Jean Marie
305,862
<p>Being given any prescribed ellipse curve, it is possible to find a parametric family of circles having this ellipse as its <em>envelope</em> (see figure 2 below). The more circles you take, the more precise you are.</p> <p>How are these circles obtained? As an oblique projection of level sets of an ellipsoid (Figure 1) with equation:</p> <p><span class="math-container">$$\tag{1}x^2+y^2+\dfrac{z^2}{c^2}=1 \ \iff \ x^2+y^2=1-\dfrac{z^2}{c^2}$$</span></p> <p>i.e., as circles of radius <span class="math-container">$r:=\sqrt{1-\tfrac{z^2}{c^2}}$</span>.</p> <p>These circles are parametrized by <span class="math-container">$z$</span> with parametric equations:</p> <p><span class="math-container">$$(C_z) \ \begin{cases} x=r \cos(u)+z\\y=r \sin(u) \end{cases} \ \ \text{for any } z \in [-c,c].$$</span></p> <p>Here is a Matlab program that generates this family of circles (changing the value of parameter <span class="math-container">$c$</span> changes the eccentricity of the generated ellipse):</p> <blockquote> <pre><code>clear all;close all;axis equal;axis off;hold on c=1; u=0:0.01:2*pi; for z=-c:0.1:c; r=sqrt(1-(z/c)^2); x=r*cos(u)+z; y=r*sin(u); plot(x,y); end; </code></pre> </blockquote> <p>In fact there are two circles with radius 0, i.e., points that are the foci of this ellipse. This will become clearer with the following modification and its physical interpretation :</p> <p>In fact, one can have more evenly spaced circles (it is in this way that Figure 2 has been generated) by setting <span class="math-container">$z=c \sin(t)$</span> for parameter <span class="math-container">$t \in [-\tfrac{\pi}{2},\tfrac{\pi}{2}]$</span>. It means that the loop in the program above is to be replaced by :</p> <blockquote> <pre><code>for t=-pi/2:pi/24:pi/2; x=cos(t)*cos(u)+c*sin(t); y=cos(t)*sin(u); plot(x,y); end; </code></pre> </blockquote> <p>A nice physical interpretation of this new family of circles is as sound waves propagation (see fig. 
2) (<a href="http://www.math.ubc.ca/%7Ecass/courses/m309-01a/dawson/index.html" rel="nofollow noreferrer">http://www.math.ubc.ca/~cass/courses/m309-01a/dawson/index.html</a>).</p> <p><strong>Remarks about machining concerns:</strong></p> <ul> <li><p>it is easy to retrieve the intersection points of successive circles where the tool must proceed to the next arc (dropping of course certain circles that remain interior to the elliptical envelope).</p> </li> <li><p>a better fit can be achieved by introducing tangents between the arcs. A supplementary benefit would be to obtain a smooth curve, but without continuity of the derivative.</p> </li> </ul> <p><a href="https://i.stack.imgur.com/rpgac.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rpgac.jpg" alt="enter image description here" /></a></p> <p>Fig. 1: Oblique (45°) projection on plane xOy of the level sets of the ellipsoid (called &quot;prolate spheroid&quot;) with equation (1).</p> <p><a href="https://i.stack.imgur.com/wD0Fr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wD0Fr.jpg" alt="enter image description here" /></a></p> <p>Fig. 2: Sound waves propagation interpretation: A sound emmitted from one of the foci of the ellipse is reflected on the ellipse and concentrated in the other focus.</p>
1,679,920
<p>I'm working for a firm, who can only use straight lines and (parts of) circles.</p> <p>Now I would like to do the following: imagine a square of size $5\times5$. I would like to expand it with $2$ in the $x$-direction and $1$ in the $y$-direction. The expected result is a rectangle of size $7\times9$. Until here, everything is OK.</p> <p>Now I would like the edges to be rounded, but as the length expanding is different in $x$ and $y$ direction, the rounding should be based on ellipses, not on circles, but I don't have ellipses to my disposal, so I'll need to approximate those ellipses using circular arcs.</p> <p>I've been looking on the internet for articles about this matter (searching for splines, Bézier, interpolation, ...), but I don't find anything. I have tried myself to invent my own approximation using standard curvature calculations, but I don't find a way to glue the curvature circular arcs together.</p> <p>Does somebody know a way to approximate ellipses using circular arcs?</p> <p>Thanks<br> Dominique</p>
bubba
31,744
<p>If you want to control the error in the approximation, then biarc interpolation/approximation is what you need, as indicated in the answer from @Paul H.</p> <p>A biarc curve is usually constructed from two points and two tangent vectors. This actually leaves one degree of freedom unfixed, and there are several different ways to fix this. It can be shown that the "junction" where the two arcs meet lies somewhere on a certain circle, and you can choose where. See <a href="http://www.rug.nl/research/portal/files/14477957/05c5.pdf" rel="nofollow">this paper</a> for more than you probably want to know.</p> <p>To get a good approximation, you'll probably have to use several biarc curves, strung end-to-end. Start out with the end-points and end-tangents of your quarter ellipse, and construct a single biarc. Calculate 10 or 15 points along the ellipse, and measure their distance to the biarc, to estimate the approximation error. If the error is too large, compute a point and tangent at some interior location on the ellipse, and approximate with two biarcs. Rinse and repeat until you have an approximation that's satisfactory. More details in <a href="http://msekce.karlin.mff.cuni.cz/~sir/papers/planar-biarcs.pdf" rel="nofollow">this paper</a>.</p> <p>If you just want a simple approximation, and you don't care very much about the error, then there is a technique that draftsmen have used to draw ellipses for decades, sometimes called the "four center" method. See section 5.2.0 of <a href="http://navybmr.com/study%20material/14069a/14069A_ch5.pdf" rel="nofollow">this document.</a>, or <a href="http://www.had2know.com/makeit/ellipse-approximation-normal-circular-arc.html" rel="nofollow">this web page</a>.</p>
242,636
<p>I am interested in the proof of the following result: Suppose that $A &gt; 1$, $\lambda \in \mathbb{R}$, and for $0 &lt; Z \leq 1$, let $U(Z)$ be the number of integer solutions $v$ of \begin{eqnarray} |v| &lt; ZA \ \ \ \text{ and } \ \ \ \| \lambda v \| &lt; Z A^{-1}. \end{eqnarray} Then, if $0 &lt; Z_1 &lt; Z_2 \leq 1$, we have $$ U(Z_1) \gg (Z_1/Z_2) \ U(Z_2). $$</p> <p>I would greatly appreciate any comments or hints on this! Thank you very much!</p> <p>PS Here $\|x\|$ denotes the distance the closest integer. </p>
Anton
22,733
<p>If you don't mind, I'll reformulate your problem slightly. Let $X = ZA$, $B = A^2$. Then $XB^{-1} = ZA^{-1}$. We would like to know the number of integer solutions $U'(X)$ to the system of inequalities</p> <p>$$ \begin{cases}|v| &lt; X;\\\|\lambda v\|&lt;XB^{-1}.\end{cases} $$</p> <p>Let $\lfloor x \rfloor$ denote the largest integer not exceeding $x$, and put $\{x\} = x - \lfloor x \rfloor$. Then</p> <p>$$ \|\lambda v\| = \|(\lfloor \lambda \rfloor + \{\lambda\})v\| = \|\lfloor \lambda\rfloor v + \{\lambda\}v\| = \|\{\lambda\}v\|. $$</p> <p>Since $0 \leq \{\lambda\} &lt; 1$, without loss of generality we may assume that $\lambda$ satisfies $0 \leq \lambda &lt; 1$.</p> <p>Suppose that $XB^{-1} &gt; 1/2$. Then $0 \leq \|\lambda v\| \leq 1/2 &lt; XB^{-1}$ is true for any choice of $\lambda$ or $v$, which means that $U'(X)$ is equal to $2\lfloor X\rfloor + 1$. The same observation applies to the case when $\lambda = 0$.</p> <p>Suppose that $XB^{-1} \leq 1/2$ and $0 &lt; \lambda &lt; 1$. Denote the largest integer in the interval $(-\lambda X-1, \lambda X+1)$ by $k$. The fact that $\|\lambda v\| &lt; XB^{-1}$ simply means that there exists some integer $n \in \{-k, -k+1, \ldots, k-1, k\}$ such that</p> <p>$$ \left|n - \lambda v\right| &lt; XB^{-1}. $$</p> <p>Verify that there are at least $\lambda X/2$ and at most $2 \lambda X +3$ possible values of $n$. </p> <p>Further, each $\lambda v$ is contained in exactly one interval of the form $(n - XB^{-1},n + XB^{-1})$, whose length is $2XB^{-1}$. Verify that this interval contains at least $X(\lambda B)^{-1}$ and at most $2X(\lambda B)^{-1} + 1$ numbers of the form $\lambda v$, where $v$ is an integer.</p> <p>We conclude that, when $X/B \leq 1/2$ and $0 &lt; \lambda &lt; 1$,</p> <p>$$ \frac{\lambda X}{2}\cdot \frac{X}{\lambda B} \leq U'(X) \leq (2 \lambda X + 3)(2 X(\lambda B)^{-1} + 1). $$</p> <p>Otherwise,</p> <p>$$ U'(X) = 2\lfloor X\rfloor + 1. $$</p> <p>Finally, note that $U(Z) = U'(ZA)$. 
Plug in $B = A^2$ and $X=ZA$ inside the above expressions and deduce $U(Z_1) \gg (Z_1/Z_2)U(Z_2)$.</p>
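The two-sided bound is easy to sanity-check by brute force for small parameters; a Python sketch (my addition, with arbitrary sample values of $\lambda$, $X$, $B$ satisfying $XB^{-1}\le 1/2$ and $0<\lambda<1$):

```python
import math

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

def count_solutions(lam, X, B):
    """U'(X): number of integers v with |v| < X and ||lam*v|| < X/B."""
    R = math.ceil(X)
    return sum(1 for v in range(-R, R + 1)
               if abs(v) < X and dist_to_int(lam * v) < X / B)

# Spot-check the bounds (lam*X/2)*(X/(lam*B)) <= U' <= upper on
# sample parameters with X/B <= 1/2 and 0 < lam < 1.
for lam, X, B in [(0.3, 50, 200), (0.7, 41, 100), (0.123, 30, 120)]:
    U = count_solutions(lam, X, B)
    lower = (lam * X / 2) * (X / (lam * B))          # = X^2 / (2B)
    upper = (2 * lam * X + 3) * (2 * X / (lam * B) + 1)
    assert lower <= U <= upper
```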
1,713,778
<p>Let $P=\{p_1,p_2,\ldots ,p_n\}$ the set of the first $n$ prime numbers and let $S\subseteq P$. Let $$A=\prod_{p\in S}p$$ and $$B=\prod_{p\in P-S}p.$$ Show that if $A+B&lt;p_{n+1}^2$, then the number $A+B$ is prime. Also, if $$1&lt;|A-B|&lt;p_{n+1}^2,$$ then the number $|A-B|$ is prime.</p>
S.C.B.
310,930
<p><strong>HINT</strong></p> <p>For any general $n$ with $n&lt;(p_{k+1})^2$, where $p_{k+1}$ is the $(k+1)$th prime, to check that $n$ is prime we just need to check whether $n$ is divisible by $p_i$ for $1 \le i \le k$.</p> <p>But note that for any $p_i$, having $$p_i \in S \text{ and } p_i \in P-S$$ is a contradiction. Thus, since $p_i$ is a prime, it can only divide one of $A$ or $B$. In other words, if $A \equiv 0 \pmod {p_i}$ then $B \not \equiv 0 \pmod {p_i}$.</p>
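The statement can be tested exhaustively for small $n$; a brute-force Python check (my addition) over every subset $S$ of the first $n$ primes:

```python
from itertools import combinations
from math import prod, isqrt

def primes_first(k):
    """First k primes by trial division."""
    out, n = [], 2
    while len(out) < k:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

# Check both claims exhaustively for the first n primes, n = 1..6.
for n in range(1, 7):
    ps = primes_first(n + 1)
    P, p_next = ps[:n], ps[n]
    for r in range(n + 1):
        for S in combinations(P, r):
            A = prod(S)
            B = prod(p for p in P if p not in S)
            if A + B < p_next ** 2:
                assert is_prime(A + B)
            if 1 < abs(A - B) < p_next ** 2:
                assert is_prime(abs(A - B))
```

Note how the size condition matters: for $n=6$ and $S=\varnothing$ one gets $A+B = 1 + 30030 = 30031 = 59\cdot 509$, composite, but $30031 \ge p_7^2 = 289$, so the hypothesis excludes it.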
3,183,274
<p>This is a reinterpretation of my old question <a href="https://math.stackexchange.com/questions/3177594/fit-data-to-function-gt-frac1001-alpha-e-beta-t-by-using-least-s">Fit data to function $g(t) = \frac{100}{1+\alpha e^{-\beta t}}$ by using least squares method (projection/orthogonal families of polynomials)</a>. I need to understand things in terms of orthogonal projections and inner products and the answers were for common regression techniques.</p> <blockquote> <p>t --- 0 1 2 3 4 5 6</p> <p>F(t) 10 15 23 33 45 58 69</p> <p>Adjust <span class="math-container">$F$</span> by a function of the type <span class="math-container">$$g(t) = \frac{100}{1+\alpha e^{-\beta t}}$$</span> by the discrete least squares method</p> </blockquote> <p>First of all, we cannot work with the function <span class="math-container">$g(t)$</span> as it is. The way I'm trying to see the problem is via projections. </p> <p>So let's try to transform the problem like this:</p> <p><span class="math-container">$$\frac{100}{g(t)}-1 = \alpha e^{-\beta t}\implies \ln \left(\frac{100}{g(t)}-1\right) = \ln \alpha -\beta t$$</span></p> <p>Since we want to fit the function to the points, we want to minimize the distance of the function from the set of points, that is:</p> <p><span class="math-container">$$\min_{\alpha,\beta} \left(\ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t\right)$$</span></p> <p>Without using derivative and equating things to <span class="math-container">$0$</span>, there's a way to see this problem as an orthogonal projection problem. 
</p> <p>I know I need to end up with something like this:</p> <p><span class="math-container">$$\langle \ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t, 1\rangle = 0\\ \langle \ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t, t\rangle=0$$</span> </p> <p>And I know this comes from the knowledge that our minimum is related to some projection and this projection lives in a space where the inner product with <span class="math-container">$\operatorname{span}\{1, t\}$</span> (because of <span class="math-container">$\ln\alpha,\beta t$</span>), gives <span class="math-container">$0$</span>.</p> <p>In order to end up with </p> <p><span class="math-container">$$\begin{bmatrix} \langle 1,1\rangle &amp; \langle t,1\rangle \\ \langle 1,t\rangle &amp; \langle t,t\rangle \\ \end{bmatrix} \begin{bmatrix} \ln \alpha \\ -\beta \\ \end{bmatrix}= \begin{bmatrix} \langle \ln\left(\frac{100}{g(t)}-1\right) , 1\rangle \\ \langle \ln\left(\frac{100}{g(t)}-1\right) , t\rangle \\ \end{bmatrix}$$</span></p> <p>Where the inner product is </p> <p><span class="math-container">$$\langle f,g\rangle = \sum f_i g_i $$</span></p> <p>*why?</p> <p>Can someone tell me what reasoning gets me to the inner products above, if I did everything right, and how to finish the exercise?</p>
Yuri Negometyanov
297,350
<p><span class="math-container">$\color{brown}{\textbf{Via linear model}}$</span></p> <p>Let <span class="math-container">$$h(t) = \ln\left(\dfrac{100}{g(t)}-1\right),\tag1$$</span> then the data table is <span class="math-container">\begin{vmatrix} i &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 &amp; 7\\ t_i &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6\\ g_i &amp; 10 &amp; 15 &amp; 23 &amp; 33 &amp; 45 &amp; 58 &amp; 69\\ h_i &amp; 2.197225 &amp; 1.734631 &amp; 1.208311 &amp; 0.708185 &amp; 0.200671 &amp; -0.322773 &amp; -0.800119\\ h(t_i) &amp; 2.215988 &amp; 1.711902 &amp; 1.207816 &amp; 0.703730 &amp; 0.199644 &amp; -0.304442 &amp; -0.808528\\ g(t_i) &amp; 9.83239 &amp; 15.29172 &amp; 23.00877 &amp; 33.09858 &amp; 45.02541 &amp; 57.55280 &amp; 69.17958\\ r(t_i) &amp; 0.16761 &amp; -0.29172 &amp; -0.00877 &amp; -0.09858 &amp; -0.02541 &amp; 0.44720 &amp; -0.17958\\ g_1(t_i) &amp; 9.83245 &amp; 15.29853 &amp; 23.02728 &amp; 33.13320 &amp; 45.07696 &amp; 57.61634 &amp; 69.2460\\ \tag2 \end{vmatrix}</span></p> <p>The task is to estimate the parameters of the function <span class="math-container">$h(t)$</span> in the form <span class="math-container">$$h(t) = \ln\alpha + \beta_* t.\tag 3$$</span></p> <p>The least squares method provides minimization of the discrepancy function <span class="math-container">$$d_h(\alpha,\beta_*) = \sum\limits_{i=1}^7 (\ln\alpha + \beta_* t_i - h_i)^2\tag 4$$</span> as a function of the parameters <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta_*.$</span> </p> <p>The minimum of the quadratic function is attained at the single stationary point, which can be determined from the system <span class="math-container">$(d_h)'_{\ln\alpha} = (d_h)'_{\beta_*}= 0,$</span> or <span class="math-container">\begin{cases} 2\sum\limits_{i=1}^7 (\ln\alpha + \beta_* t_i - h_i) = 0\\ 2\sum\limits_{i=1}^7 (\ln\alpha + \beta_* t_i - h_i)t_i = 0.\tag5 
\end{cases}</span></p> <p>The system <span class="math-container">$(5)$</span> can be presented in the form <span class="math-container">\begin{cases} 7\ln\alpha + a_1 \beta_* = b_0\\ a_1\ln\alpha + a_2 \beta_* = b_1, \end{cases}</span> where <span class="math-container">$$a_1 = \sum\limits_{i=1}^7 t_i = 21,\quad a_2 = \sum\limits_{i=1}^7 t_i^2 = 91,$$</span> <span class="math-container">$$b_0 = \sum\limits_{i=1}^7 h_i = 4.926100,\quad b_1 = \sum\limits_{i=1}^7 t_i h_i = 0.663879.$$</span> The determinants are <span class="math-container">$$\Delta = \begin{vmatrix}7 &amp; 21 \\ 21 &amp; 91\end{vmatrix} = 196,$$</span> <span class="math-container">$$\Delta_1 = \begin{vmatrix}4.9261 &amp; 21 \\ 0.663879 &amp; 91\end{vmatrix} \approx 434.33364,$$</span> <span class="math-container">$$\Delta_2 = \begin{vmatrix} 7 &amp; 4.9261 \\ 21 &amp;0.663879 \end{vmatrix} \approx -98.80095.$$</span></p> <p>Then <span class="math-container">$$\alpha = e^{\large \frac{\Delta_1}\Delta} \approx 9.170465,\quad \beta = -\dfrac{\Delta_2}\Delta \approx 0.504086,$$</span> <span class="math-container">$$d_h(\alpha, \beta) \approx 0.001295,\quad d_g(\alpha, \beta)\approx 0.355863.$$</span></p> <p>Results of the calculations, shown in table <span class="math-container">$(2),$</span> confirm the obtained parameter values.</p> <p><span class="math-container">$\color{brown}{\textbf{Orthogonal projections approach}}$</span></p> <p>The method of orthogonal projections is used to solve problems of large dimension. The essence of the method is that the parameters of the linear model are calculated one by one.</p> <p>The dependences already fitted should be subtracted at each stage.</p> <p>In the given case, the residuals after the first stage show no essential correlations. 
Linear approximation of the difference <span class="math-container">$r_i = g_i - g(t_i)$</span> in the form of <span class="math-container">$$r_i = -0.043425+0.014987 t$$</span> gives <span class="math-container">$d_r = 0.349557$</span>.</p> <p><span class="math-container">$\color{brown}{\textbf{Via the gradient descent.}}$</span></p> <p>The solution obtained via the linear model is not optimal for the discrepancy in the form of <span class="math-container">$$d_g(\alpha,\beta)=\sum\limits_{i=1}^7\left(\dfrac{100}{1+\alpha e^{-\beta t_i}} - g_i\right)^2.$$</span></p> <p>To verify the orthogonal projections approach, the gradient descent method can be used.</p> <p>Indeed, the gradient is <span class="math-container">$$\binom uv = \left(\begin{matrix} \dfrac {\partial d_*}{\partial \alpha}\\[4pt] \dfrac{\partial d_*}{\partial \beta}\end{matrix}\right) = 200\left(\begin{matrix} -\sum\limits_{i=1}^7 \dfrac{e^{-\beta t_i}}{\left(1+\alpha e^{-\beta t_i}\right)^2} \left(\dfrac{100}{1+\alpha e^{-\beta t_i}} - g_i\right)\\[4pt] \sum\limits_{i=1}^7 \dfrac{t_ie^{-\beta t_i}}{\left(1+\alpha e^{-\beta t_i}\right)^2} \left(\dfrac{100}{1+\alpha e^{-\beta t_i}} - g_i\right) \end{matrix}\right),$$</span> <span class="math-container">$$\binom uv =\frac1{50}\left(\begin{matrix} \sum\limits_{i=1}^7 e^{-\beta t_i}g^2(t_i)r_i \\[4pt] -\sum\limits_{i=1}^7 t_i e^{-\beta t_i}g^2(t_i)r_i \end{matrix}\right) =\binom{0.26390}{-2.32907}\not=\binom00.$$</span></p> <p>An optimization step with <span class="math-container">$\Delta d_r = -0.000223$</span> gives <span class="math-container">$$\binom{\alpha_1}{\beta_1} = \binom{\alpha}{\beta} +\binom{\Delta\alpha}{\Delta\beta} = \binom\alpha\beta + \Delta d_r\binom uv\approx\binom{9.170406} {0.504605}.$$</span> Then <span class="math-container">$$d_g(\alpha_1,\beta_1) \approx 0.349343,\quad \operatorname{grad} d_g(\alpha_1,\beta_1) = \dbinom{-0.036480}{-0.081239}.$$</span></p> <p>The data in the table <span class="math-container">$(2)$</span> 
confirm the same estimation accuracy. </p>
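The linear-model stage is easy to reproduce numerically. The short Python check below (my addition) fits $h_i=\ln(100/g_i-1)$ by ordinary least squares and recovers $\alpha\approx 9.1705$, $\beta\approx 0.5041$:

```python
import math

t = list(range(7))
g = [10, 15, 23, 33, 45, 58, 69]
h = [math.log(100 / gi - 1) for gi in g]

# Ordinary least squares for the line h = ln(alpha) - beta * t.
n = len(t)
st, sh = sum(t), sum(h)
stt = sum(ti * ti for ti in t)
sth = sum(ti * hi for ti, hi in zip(t, h))
slope = (n * sth - st * sh) / (n * stt - st * st)
intercept = (sh - slope * st) / n

alpha = math.exp(intercept)
beta = -slope
assert abs(alpha - 9.1705) < 1e-3
assert abs(beta - 0.504086) < 1e-4
```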
194,134
<p>For some FittedModel, the "BestFitParameters" are given in terms of the symbols used to define the model. </p> <pre><code>fit = NonlinearModelFit[{10,11,12},a*x+c,{a,c},x]; fit["BestFitParameters"] </code></pre> <p>returns <code>{a-&gt;1.,c-&gt;9.}</code></p> <p>This can be problematic if I define <code>a</code> or <code>c</code> somewhere else. One option is to use the model in a module, and try to localize variables but this generates unique keys that I have to keep track of:</p> <pre><code>fit=Module[ {a,c}, NonlinearModelFit[{10,11,12},a*x+c,{a,c},x] ]; fit["BestFitParameters"] </code></pre> <p>returns <code>{a$768206 -&gt; 1., c$768206 -&gt; 9.}</code></p> <p>Often I'll do something like</p> <pre><code>fit=Module[ {a,c,lf}, lf=NonlinearModelFit[{10,11,12},a*x+c,{a,c},x]; {"a"-&gt;a,"c"-&gt;c}/lf]; fit["BestFitParameters"] </code></pre> <p>which returns <code>{"a"-&gt;1.,"c"-&gt;9.}</code> with strings as keys.</p> <p>I actually prefer this since now <code>"a"</code> and <code>"c"</code> can't clash with a definition of the symbols <code>a</code> or <code>c</code>, but it's a pain because I lose access to the <code>FittedModel</code> object which may be useful later on. </p> <p>Since I can't use the strings <code>"a"</code> or <code>"c"</code> in the <code>NonlinearModelFit</code> function, my question is this: Is there a way to modify the <code>FittedModel</code> object, such that requesting <code>"BestFitParameters"</code> returns the a list of rules with strings as the keys?</p> <p>Alternatively, does anyone have a more elegant way of working with these objects, so I don't have to keep track of what symbols I use in fits and make sure not to use the same symbols to define similar values elsewhere?</p>
N.J.Evans
11,777
<p>One straightforward way to handle this is to accept the unique keys generated inside the module and write a function that replaces these with de-unique-ified strings when the best fit parameters are needed:</p> <pre><code>getBestFit[fit_FittedModel] := Module[ {bf, newKeys, oldkeys}, bf = fit["BestFitParameters"]; oldkeys = Keys@bf; newKeys = First /@ StringSplit[ToString /@ oldkeys, "$"]; Rule @@@ Transpose@{newKeys, oldkeys /. bf} ]; fit = Module[ {a, c, x}, NonlinearModelFit[{10, 11, 12}, a*x + c, {a, c}, x] ]; fit["BestFitParameters"] </code></pre> <p>shows that the unique symbols generated inside the module are retained</p> <pre><code>{a$772530 -&gt; 1., c$772530 -&gt; 9.} </code></pre> <p>while</p> <pre><code>getBestFit@fit </code></pre> <p>returns the desired rules with strings as keys</p> <pre><code>{"a"-&gt;1.,"c"-&gt;9.} </code></pre>
1,855,824
<blockquote> <p>Given $a_1=1$ and $a_n=a_{n-1}+4$ where $n\geq2$ calculate, $$\lim_{n\to \infty }\frac{1}{a_1a_2}+\frac{1}{a_2a_3}+\cdots+\frac{1}{a_na_{n-1}}$$</p> </blockquote> <p>First I calculated few terms $a_1=1$, $a_2=5$, $a_3=9,a_4=13$ etc. So $$\lim_{n\to \infty }\frac{1}{a_1a_2}+\frac{1}{a_2a_3}+\cdots+\frac{1}{a_na_{n-1}}=\lim_{n\to \infty }\frac{1}{5}+\frac{1}{5\times9}+\cdots+\frac{1}{a_na_{n-1}} $$</p> <p>Now I got stuck. How to proceed further? Should I calculate the sum ? Please help.</p>
Behrouz Maleki
343,616
<p>As lab bhattacharjee mentioned, for every $n\in\mathbb{N}$, we have $$a_n-a_{n-1}=4$$ $$\begin{align} &amp; {{I}_{n}}=\sum\limits_{i=1}^{n-1}{\frac{1}{{{a}_{i}}{{a}_{i+1}}}}=\sum\limits_{i=1}^{n-1}{\frac{1}{{{a}_{i+1}}-{{a}_{i}}}\left( \frac{1}{{{a}_{i}}}-\frac{1}{{{a}_{i+1}}} \right)}=\frac{1}{4}\sum\limits_{i=1}^{n-1}{\left( \frac{1}{{{a}_{i}}}-\frac{1}{{{a}_{i+1}}} \right)}=\frac{1}{4}\left( \frac{1}{{{a}_{1}}}-\frac{1}{{{a}_{n}}} \right) \\ &amp; {{I}_{n}}=\frac{1}{4}\left( \frac{1}{{{a}_{1}}}-\frac{1}{{{a}_{1}}+4n-4} \right) \\ \end{align} $$ therefore $$\underset{n\to \infty }{\mathop{\lim }}\,{{I}_{n}}=\frac{1}{4{{a}_{1}}}=\frac{1}{4}$$</p>
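A quick numerical confirmation of the telescoping limit (my addition), using the closed form $a_n = 4n-3$:

```python
def a(n):
    """a_1 = 1, a_n = a_{n-1} + 4, i.e. a_n = 4n - 3."""
    return 4 * n - 3

def partial_sum(N):
    """I_N = sum_{i=1}^{N-1} 1/(a_i * a_{i+1})."""
    return sum(1 / (a(i) * a(i + 1)) for i in range(1, N))

# Telescoping: I_N = (1/4) * (1/a_1 - 1/a_N), which tends to 1/4.
for N in (10, 100, 10_000):
    assert abs(partial_sum(N) - 0.25 * (1 - 1 / a(N))) < 1e-12
assert abs(partial_sum(100_000) - 0.25) < 1e-5
```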
156,179
<p>Let $A$ be a closed subset of $\mathbb{R}^{n}$. Can the quotient space $\mathbb{R}^{n}/A$ be embedded in some Euclidean space $\mathbb R^{m}$? In particular, assume that $A$ is an algebraic variety of degree $k$, can we control $m$ in term of $n$ and $k$? </p>
Daniele Zuddas
23,193
<p>For $A$ compact the answer is that given by Joseph Van Name. If $A$ is not compact then the answer is negative. For example take the standard $\Bbb R \subset \Bbb R^2$. Then the quotient is not II-countable (the same holds for any unbounded closed subset of $\Bbb R^n$, with unbounded complement).</p> <p>In the compact setting there is an important special case: that of cellular subsets. $A\subset \Bbb R^n$ is <em>cellular</em> if it is the countable intersection of a sequence of nested $n$-balls: $$A = \bigcap_{i\in \Bbb N} B_i,$$ $B_i \cong B^n$ and $B_{i+1} \subset \text{Int}\, B_i$. For cellular $A$, the projection map $\pi : \Bbb R^n \to \Bbb R^n/A$ can be approximated by homeomorphisms which are the identity outside a neighborhood of $A$, with respect to a distance function on the quotient (whose existence follows from the Urysohn metrization theorem). As a consequence, $\Bbb R^n/A \cong \Bbb R^n$. This fact plays an important role in the proof of the topological Schoenflies theorem in dimension $n$. A reference is M. Brown, <em>"A proof of the generalized Schoenflies theorem"</em>, Bull. Amer. Math. Soc. 66 (1960), 74-76.</p>
3,789,494
<p>I'm stuck solving this strange and beautiful equation: <span class="math-container">$3= 3^{z}$</span>, since it says 'Solve' and not 'Prove'.</p> <p>Also, I really don't understand what is meant by <span class="math-container">$3^{z}$</span>. Will <span class="math-container">$3^z$</span> form a set?</p> <p>I will be very grateful if you can help me or even give me a little hint to make headway with this problem ...</p> <p>To be clear, find all <span class="math-container">$z \in \left\{ z : 3 = 3^z \right\}.$</span></p>
Muhammad
804,057
<p>Since <span class="math-container">$z$</span> is a complex number then <span class="math-container">$z=x+iy$</span> where <span class="math-container">$x,y \in \mathbb{R}$</span></p> <p>Also Since <span class="math-container">$0,1 \in \mathbb{R}$</span> then <span class="math-container">$z=x+iy=(1)+i(0)=1$</span></p> <p>Thus <span class="math-container">$1 \in \mathbb{Z}$</span> And <span class="math-container">$3^{1}=3 \in 3^{z}$</span></p>
106,031
<p>I need to "monochromize" a large amount of plots (mostly coming from <code>ListPlot</code>) and export them to PDF. The problem is that I no longer have the data used to generate the plots, I only have notebooks that contain the plots. I attempted copy-pasting one plot and then something along the lines of <code>Show[plot,PlotTheme-&gt;"Monochrome"]</code> but that obviously did not work. I also tried <code>ColorConvert[plot,"Grayscale"]</code> but then the exported PDF was of extremely poor quality although I used the <code>ImageResolution</code> option of <code>Export</code>. One could, I think, extract the points from <code>FullForm</code> of the plots and simply plot them once more but this seems to be quite complicated given the quantity of the plots I have and their various structure (some of the <code>ListPlot</code> outputs contain two lists, some of them three, I also have a couple of simple graphics coming from <code>Plot</code>). Method of last resort is using some other software like Photoshop, but that makes the PDF many times larger - I need to use the plots with LaTeX so this would be the last option. </p>
Aisamu
8,238
<p>In the same spirit of <a href="https://mathematica.stackexchange.com/users/57/sjoerd-c-de-vries">Sjoerd's</a> answer, you can "steal" the theme-generated directives and replace them in the target plot.</p> <p>With a dummy plot created using the desired theme,</p> <pre><code>dummyPlot = Plot[{1, 2, 3}, {x, 0, 1}, PlotTheme -&gt; "Monochrome"] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/0X7rqm.png" alt="Dummy Plot"></p> </blockquote> <p>You can collect the generated directives,</p> <pre><code>directives = Cases[dummyPlot, Directive[a__] :&gt; List@a, Infinity] </code></pre> <blockquote> <p>{{Opacity[1.],GrayLevel[0],CapForm[Butt],AbsoluteThickness[1.6],AbsoluteDashing[{}]},&lt;&lt;2>>,{&lt;&lt;1>>}}</p> </blockquote> <p>And apply them to the target plot (cycling through the list)!</p> <pre><code>plot = Plot[{Cos[x], Sin[x], Tan[x]}, {x, 0, 2 π}]; plot /. Directive[a__] :&gt; (directives = RotateLeft[directives]; Directive[Last@directives]) </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/js40Bm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/js40Bm.png" alt="Re-colored Plot"></a></p> </blockquote> <p>You must make a dummy plot with an equivalent number of plotted functions, otherwise you'll have to drop/add elements to the <code>directives</code> list. </p>
19,598
<p>I have two independent ODE systems. </p> <pre><code>A = NDSolve[..., {x, y}, {t, 0, 10}]; B = NDSolve[..., {a, b}, {t, 0, 10}]; </code></pre> <p>I can draw a <code>ParametricPlot</code> from one ODE. That is, </p> <pre><code>ParametricPlot[Evaluate[{x[t], y[t]} /. A], {t, 0, 10}] </code></pre> <p>I wonder if I can draw a <code>ParametricPlot</code> from the two independent ODE systems. That is, a <code>ParametricPlot</code> of <code>x[t]</code> taken from A and <code>a[t]</code> taken from B.</p>
2island
6,590
<p>You can simply use the following command:</p> <pre><code>sol = NDSolve[{x'[t] == Sin[t], a'[t] == Cos[t], a[0] == 1, x[0] == 1}, {x, a}, {t, 0, 10}]; ParametricPlot[{x[t], a[t]} /. sol, {t, 0, 10}, AxesLabel -&gt; {x[t], a[t]}] </code></pre> <p>First, you may combine everything into <em>one</em> system, so as to call <code>NDSolve</code> only once, and then use <code>ParametricPlot</code>. It is the same if you add the equations for <code>y[t]</code> and <code>b[t]</code>, as you say in your question. </p> <p>So your conclusions are right!</p>
322,140
<p>$$\int \left ( r\sqrt{R^2-r^2} \right )dr$$</p> <p>It looks simple. I know that the derivative of </p> <p>$$\left (R^2-r^2 \right )^\frac{3}{2}$$</p> <p>Is the stuff in the integral.</p> <p>However, what about if I don't know?</p> <p>How in general do we solve integral of</p> <p>$$G(r)^n$$</p>
Bombyx mori
32,240
<p>This kind of integral is usually dealt with by standard trigonometric substitutions. You can use $r=R\cos[\theta]$, $\sqrt{R^{2}-r^{2}}=R\sin[\theta]$, $dr=-R\sin[\theta]d\theta$, for example. I am sure you can find some other ingenious ways to do the job as well. </p> <p>The main difficulty for this approach is to integrate $\cos[\theta]\sin^{2}[\theta]=\frac{1}{2}\sin[2\theta]\sin[\theta]=\frac{1}{4}(\cos[2\theta-\theta]-\cos[2\theta+\theta])$. You may also try to integrate by parts. </p>
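<p>Both the antiderivative and the product-to-sum step can be sanity-checked numerically; a stdlib-only sketch (the value $R=2$ is an arbitrary choice of mine):</p>

```python
import math

R = 2.0

def F(r):
    # candidate antiderivative of r*sqrt(R^2 - r^2), namely -(R^2 - r^2)^(3/2)/3
    return -((R * R - r * r) ** 1.5) / 3.0

def integrand(r):
    return r * math.sqrt(R * R - r * r)

def product_to_sum_error(t):
    # cos(t)*sin(t)^2 = (1/2)*sin(2t)*sin(t) = (cos(t) - cos(3t))/4
    return abs(math.cos(t) * math.sin(t) ** 2 - (math.cos(t) - math.cos(3 * t)) / 4)
```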
1,399,935
<p>I'm reading Kleene's introduction to logic and in the beginning he mentions something that I have thought about for a while. The question is: how can we treat logic mathematically without using logic in the treatment? He mentions that in order to deal with this we separate the logic we are studying from the logic we are using to study it (the <em>object language and metalanguage, respectively</em>). How does this answer the question? Aren't we still using logic to build logic? </p> <p>And I have a feeling that the answer is to some extent that we use simpler logics to build more complex ones, but then don't we run into a paradox of what's the simplest logic, for won't any system of logic, no matter how simple, be a metalanguage for a simpler language? </p>
Andreas Blass
48,510
<p>We use logic to <em>study</em> logic, not to <em>create</em> logic. Our study is usually not intended to justify some logic but rather to understand how it works. For example, we might try to prove that, whenever a conclusion $c$ follows from an infinite set $H$ of hypotheses then $c$ already follows from a finite subset of $H$. Many logical systems have this finiteness property; many others do not. And that's quite independent of the logic that we use in studying this property and trying to prove or disprove it for one or another logical system.</p> <p>Here's an analogy: Suppose a biologist is writing a paper about the origin of trees. He could use a wooden pencil to write the paper. That pencil was made using wood from trees, so its existence presupposes that the origin of trees actually happened. Nevertheless, there is nothing circular here. The pencil that is being used probably consists of wood quite different from that in prehistoric trees. And even if it wasn't different, there's no problem with using the pencil to describe those ancient trees. </p> <p>Similarly, there's no problem using ordinary reasoning, also called logic, to describe and analyze the process of reasoning.</p>
2,728,248
<blockquote> <p>Let $K=\mathbb{Q}(\sqrt{-2})$. Show that $\mathcal{O}(K)$ is a principal ideal domain. Deduce that every prime $p\equiv 1, 3$ (mod 8) can be written as $p = x^2 + 2y^2$ with $x, y \in \mathbb{Z}$.</p> </blockquote> <p>As $-2$ is squarefree and $-2\not\equiv 1$ (mod 4), we have $\mathcal{O}(K) = \mathbb{Z}[\sqrt{-2}]$. The discriminant is $\Delta = -8$. The degree is $n = 2$. The signature is $(0, 1)$. Thus the Minkowski bound is</p> <p>$$ B_K = \frac{2!}{2^2}\cdot\frac{4}{\pi}\cdot \sqrt{8} = \frac{4\sqrt{2}}{\pi}&lt;2$$</p> <p>Hence $Cl(K)$ is generated by the empty set of ideal classes and so $Cl(K) = \{1\}$. So this means $\mathcal{O}(K)$ is a principal ideal domain I believe...</p> <p>Ok, now let $p \equiv 1$ or $3$ (mod 8). By quadratic reciprocity, $-2 \equiv \alpha^2 \pmod{p}$ for some integer $\alpha$. Thus</p> <p>$$X^2 + 2 \equiv (X + \alpha)(X - \alpha) \pmod{p}.$$</p> <p>Ok, now I am slightly stuck; can we apply some theorem here? Not sure if what I have above is correct to get to the desired result.</p>
Jack D'Aurizio
44,121
<p>An elementary proof starting from scratch. If $p\equiv 1\pmod{8}$, in $\mathbb{F}_p^*$ there is an element with order $8$, which we may call $\alpha$. Since $\alpha^4+1=0$ we have $(\alpha-\alpha^{-1})^2=-2 $ and $-2$ is a quadratic residue $\pmod{p}$. If $p\equiv 3\pmod{8}$ we may consider the degree of the splitting field of $\Phi_8(x)=x^4+1$ over $\mathbb{F}_p$, which is given by the least $k$ such that $8\mid(p^k-1)$, i.e. by $2$. So we have $\alpha\in\mathbb{F}_{p^2}$ with order $8$, and by the Frobenius automorphism one of the quadratic factors of $x^4+1$ is given by $$(x-\alpha)(x-\alpha^3)=x^2-(\alpha+\alpha^3)x-1$$ and the other quadratic factor is given by $$(x-\alpha^5)(x-\alpha^7) = x^2-(\alpha^5+\alpha^7)x-1.$$ On the other hand $(\alpha+\alpha^3)^2 = \alpha^2+\alpha^6-2 = -2$, so $-2$ is a quadratic residue in this case, too.</p> <p>Let $u\in\{1,\ldots,\frac{p-1}{2}\}$ be such that $u^2+2\equiv 0\pmod{p}$. We have $u^2+2 = kp $ for some $k\in\{1,\ldots,\lfloor\frac{p}{4}\rfloor\}$. Let $v$ be the minimum between $u\pmod{k}$ and $(k-u)\pmod{k}$. We have $v^2+2=kq$ for some $q&lt;k$ and by the Lagrange-Brahmagupta-Fibonacci identity $$ (a^2+2b^2)(c^2+2d^2)= (ac+2bd)^2 + 2(bc-ad)^2 $$ we have $$ (u^2+2)(v^2+2) = (uv+2)^2+2(v-u)^2 =k^2 pq.$$ On the other hand both $uv+2$ and $v-u$ are multiples of $k$, so by starting with a representation of a multiple of $p$ as $A^2+2B^2$ we got a representation as $A^2+2B^2$ of a <em>smaller</em> multiple $$ \left(\frac{uv+2}{k}\right)^2+2\left(\frac{v-u}{k}\right)^2 = qp $$ and the trick can be iterated again, ultimately leading to a representation of $p$ as $A^2+2B^2$.</p>
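<p>The descent in the second paragraph is effectively an algorithm; a stdlib-only Python sketch of it (the function name is mine):</p>

```python
def two_square_rep(p):
    """Return (x, y) with x*x + 2*y*y == p, for a prime p congruent to 1 or 3 mod 8."""
    # find u with u^2 + 2 divisible by p; it exists by the quadratic-residue argument
    u = next(u for u in range(1, p) if (u * u + 2) % p == 0)
    u = min(u, p - u)                     # u <= (p-1)/2, so k < p
    a, b = u, 1                           # invariant: a^2 + 2*b^2 = k*p
    k = (a * a + 2 * b * b) // p
    while k > 1:
        # minimal-absolute-value residues of (a, b) mod k
        c, d = a % k, b % k
        if c > k // 2:
            c -= k
        if d > k // 2:
            d -= k
        # Brahmagupta-Fibonacci: (a^2+2b^2)(c^2+2d^2) = (ac+2bd)^2 + 2(bc-ad)^2,
        # and both ac+2bd and bc-ad are divisible by k, so the multiple shrinks
        a, b = (a * c + 2 * b * d) // k, (b * c - a * d) // k
        k = (a * a + 2 * b * b) // p
    return abs(a), abs(b)
```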
2,065,254
<p>Let $f: \mathbb{R} \to \mathbb{R}$ be a function that is twice differentiable.</p> <p>We know that: $$\lim_{x\to-\infty}\ f(x) = 1$$</p> <p>$$\lim_{x\to\infty}\ f(x) = 0$$</p> <p>$$f(0) = \pi$$</p> <p>We have to prove that there exist at least two points $x$ at which $f''(x) = 0$.</p> <p>How could we do it in a rigorous way? It is pretty intuitive, but in a rigorous way it isn't that simple for me...</p>
Yiorgos S. Smyrlis
57,021
<p>Since the limits of $f$, as $x$ tends to $\infty$ and $-\infty$ both exist, and $$ \lim_{x\to\infty}f(x),\,\,\lim_{x\to-\infty}f(x)&lt;f(0)=\pi, $$ then $f$ attains a total maximum, say at $x_0$, with $f(x_0)\ge\pi$, and thus $f'(x_0)=0$. </p> <p>The fact that $\,\lim_{x\to-\infty}f(x)&lt;f(0)$, implies the existence of $x_1\in (-\infty,x_0)$, where $f'(x_1)&gt;0$, and similarly, an $x_2\in (x_0,\infty)$, where $f'(x_2)&lt;0$. But the existence of the limits $\lim_{x\to\infty}f(x)$ and $\lim_{x\to-\infty}f(x)$ also implies that</p> <p>$$ \lim_{x\to-\infty} f(x+1)-f(x)=0\quad\text{and}\quad\lim_{x\to\infty} f(x+1)-f(x)=0, $$ and hence, $f(x+1)-f(x)$ can become arbitrarily small both for $x$ or $-x$ sufficiently large. Due to Mean Value Theorem $f(x+1)-f(x)=f'(\xi)$, for some $\xi\in(x,x+1)$, and hence there exist $\xi_1&lt;x_1$ and $\xi_2&gt;x_2$, such that $$ \xi_1&lt;x_1&lt;x_0&lt;x_2&lt;\xi_2 $$ and $$ f'(\xi_1)&lt;f'(x_1)&gt;f'(x_0) \quad \text{and}\quad f'(x_0)&gt;f'(x_2)&lt;f'(\xi_2). $$ Hence $f''$ vanishes at some point in each of the intervals $$ (\xi_1,x_0) \quad\text{and}\quad (x_0,\xi_2). $$</p>
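<p>To see the statement in action one can test it on a concrete function meeting all three hypotheses; the particular $f$ below is my own choice, not part of the proof, and $f''$ is approximated by a central second difference:</p>

```python
import math

def f(x):
    # lim_{x -> -inf} f = 1,  lim_{x -> +inf} f = 0,  f(0) = pi
    return 1 / (1 + math.exp(x)) + (math.pi - 0.5) * math.exp(-x * x)

def f2(x, h=1e-4):
    # central second difference approximating f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def sign_changes_of_f2(lo=-6.0, hi=6.0, steps=240):
    # count sign changes of the approximate f'' on a grid
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    vals = [f2(x) for x in xs]
    return sum(1 for v, w in zip(vals, vals[1:]) if v * w < 0)
```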
3,043,296
<p>Prop: For sets A and B, say A ~ B iff there exists a bijection from A to B. Then ~ is an equivalence relation on sets.</p> <p>I understand that an equivalence relation has the properties of reflexivity, symmetry, and transitivity. I am also aware of their definitions; however, I am struggling to write a proof for this proposition.</p> <p>I would assume we can suppose there is a bijection between A and B, which would mean that the two sets have equal cardinality, but from this point on I am completely lost; the direction of the proof seems very unclear.</p> <p>A hidden answer (written proof) would be great with some visible guidance or hints to enhance my understanding.</p> <p>Thank you.</p>
Sambo
454,855
<p>Showing that these properties hold is a straightforward application of the definitions, with some elementary properties of bijections. So, I suspect what you are having trouble with is formulating a proper proof. I will show reflexivity as an example.</p> <p>We want to show that for any set <span class="math-container">$A$</span>, we have <span class="math-container">$A \sim A$</span>. Because of how we defined <span class="math-container">$\sim$</span>, this means that for any set <span class="math-container">$A$</span>, we need to show that there exists a bijection <span class="math-container">$f : A \rightarrow A$</span>. How do we show this? We must construct such a function explicitly. Since we know nothing of the contents of <span class="math-container">$A$</span>, there is really only one choice: the identity function.</p> <p>Let <span class="math-container">$f : A \rightarrow A$</span> be the function such that <span class="math-container">$f(a) = a$</span> for every <span class="math-container">$a \in A$</span>. We must check that <span class="math-container">$f$</span> is bijective. First, <span class="math-container">$f$</span> is injective: if <span class="math-container">$f(a) = f(b)$</span>, then <span class="math-container">$a = f(a) = f(b) = b$</span>, so <span class="math-container">$a = b$</span>. Next, <span class="math-container">$f$</span> is surjective: if <span class="math-container">$a \in A$</span>, then there exists an element <span class="math-container">$x \in A$</span> such that <span class="math-container">$f(x) = a$</span>: namely, <span class="math-container">$x = a$</span>.</p> <p>So, we have shown that for any set <span class="math-container">$A$</span>, we can construct a bijection <span class="math-container">$f : A \rightarrow A$</span>, which shows that <span class="math-container">$A \sim A$</span>. This shows the reflexivity of <span class="math-container">$\sim$</span>. 
Others have provided hints for the other two properties.</p>
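<p>On finite sets the three constructions behind the proof (identity, inverse, composition) can be checked mechanically; a small Python sketch (all names are mine):</p>

```python
def is_bijection(f, A, B):
    # f is a dict from A to B; bijective = injective and surjective
    injective = len(set(f.values())) == len(f)
    return set(f) == set(A) and set(f.values()) == set(B) and injective

A, B, C = {1, 2, 3}, {"a", "b", "c"}, {10, 20, 30}
f = {1: "a", 2: "b", 3: "c"}        # witnesses A ~ B
g = {"a": 10, "b": 20, "c": 30}     # witnesses B ~ C

identity = {x: x for x in A}              # reflexivity:  A ~ A
inverse = {v: k for k, v in f.items()}    # symmetry:     B ~ A
composed = {x: g[f[x]] for x in A}        # transitivity: A ~ C
```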
4,498,199
<p>Exercise 1.2.1(vii) from Page 5 of Keith Devlin's &quot;The Joy of Sets&quot;:</p> <blockquote> <p>Prove the following assertion directly from the definitions. The drawing of &quot;Venn diagrams&quot; is forbidden; this is an exercise in the manipulation of logical formalisms. <span class="math-container">$$(x\subseteq y)\leftrightarrow(x\cap y =x)\leftrightarrow(x\cup y=y)$$</span></p> </blockquote> <p><strong>Attempt at solution</strong>:</p> <p>This is not a difficult claim to understand or prove in terms of sets, but I can't figure out the pure logical formalism. I guess I should break this into seven individual implications?</p> <ol> <li><span class="math-container">$\forall w(w\in x\rightarrow w\in y)\rightarrow \forall z(z\in x\rightarrow (z\in x\enspace\wedge\enspace z\in y)) $</span></li> <li><span class="math-container">$\forall w(w\in x\rightarrow w\in y)\rightarrow \forall z((z\in x\enspace\wedge\enspace z\in y)\rightarrow z\in x))$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w(w\in x\rightarrow w\in y)$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w((w \in x \enspace\lor\enspace w\in y)\rightarrow w\in y)$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w(w\in y\rightarrow(w \in x \enspace\lor\enspace w\in y))$</span></li> <li><span class="math-container">$\forall w((w\in x\enspace\lor\enspace w\in y)\rightarrow w\in y)\rightarrow\forall z((z\in x \enspace\wedge\enspace z\in y)\rightarrow z\in x)$</span></li> <li><span class="math-container">$\forall w((w\in x\enspace\lor\enspace w\in y)\rightarrow w\in y)\rightarrow\forall z(z\in x \rightarrow (z\in x \enspace\wedge\enspace z\in y))$</span></li> </ol> <p>In addition to wondering if I'm doing this correctly, I also feel like I gained nothing 
from writing out these implications.</p>
ryang
21,813
<blockquote> <p>Exercise 1.2.1(vii) from <strong>page 5</strong> of Keith Devlin's &quot;The Joy of Sets&quot;:</p> <blockquote> <p>Prove the following assertion <strong>directly from the definitions</strong>. this is an exercise in the <strong>manipulation of logical formalisms</strong>. <span class="math-container">$$x\subseteq y\;\leftrightarrow\;x\cap y =x\;\leftrightarrow\;x\cup y=y.\tag{*}$$</span></p> </blockquote> </blockquote> <p>Even though this exercise is requesting &quot;logical formalism&quot; and sentence <span class="math-container">$(*)$</span> does not <a href="https://math.stackexchange.com/a/4495116/21813">technically</a> mean that those three atomic statements are equivalent to one another, the author quite certainly is wanting you to prove just that.</p> <blockquote> <p>I understand from your linked post that it is not necessary to prove all seven implications; proving <span class="math-container">$(A→B),\; (B→C)$</span> and <span class="math-container">$(C→A)$</span> would be sufficient.</p> </blockquote> <p>Yes. So, we just need</p> <ol> <li><span class="math-container">$∀p\;(p\in X→p\in Y)→∀q\;(q\in X\cap Y ↔ q\in X)$</span></li> <li><span class="math-container">$∀q\;(q\in X\cap Y ↔ q\in X)→∀r\;(r\in X\cup Y ↔ r\in Y)$</span></li> <li><span class="math-container">$∀r\;(r\in X\cup Y ↔ r\in Y)→∀p\;(p\in X→p\in Y)$</span></li> </ol> <p>or, alternatively,</p> <ol> <li><span class="math-container">$∀p\;(p\in X→p\in Y)→∀q\;(q\in X\cap Y ↔ q\in X)$</span></li> <li><span class="math-container">$∀p\;(p\in X→p\in Y)→∀r\;(r\in X\cup Y ↔ r\in Y)$</span></li> <li><span class="math-container">$∀q\;(q\in X\cap Y ↔ q\in X)→∀p\;(p\in X→p\in Y)$</span></li> <li><span class="math-container">$∀r\;(r\in X\cup Y ↔ r\in Y)→∀p\;(p\in X→p\in Y)$</span></li> </ol> <p>(the definition of subset has been applied).</p> <hr /> <p><strong>Addendum 1</strong></p> <blockquote> <p>I believe my #1 and #2 from the OP are equivalent to your #1. 
If you look at <strong>my #1</strong> for example, I really don't know what more I can do to &quot;prove&quot; this.</p> </blockquote> <p>Okay, let's reformat <strong>your #1</strong>, using <span class="math-container">$Ab$</span> to denote <span class="math-container">$b\in A:$</span> <span class="math-container">$$∀p\:(Xp → Yp) → ∀q\:(Xq → (Xq ∧ Yq)).\tag 4$$</span> This is logically equivalent to <span class="math-container">$$∀q\:∃p\:\Big((Xp → Yp) → (Xq → (Xq ∧ Yq))\Big),\tag 3$$</span> which is logically equivalent to <span class="math-container">$$∀q\:∃p\:\Big((Xp → Yp) → (Xq → Yq)\Big),\tag 2$$</span> which is a logical consequence (choose <span class="math-container">$p:=q$</span>) of <span class="math-container">$$∀q\:\Big((Xq → Yq) → (Xq → Yq)\Big),\tag 1$$</span> which is clearly true. Notice that this proof is purely logical, utilising no result from Set Theory.</p> <p>On the other hand, after the relevant results have been introduced after page 5, then a proof of the first part of <span class="math-container">$(*),$</span> <span class="math-container">$$X\subseteq Y\implies X\cap Y =X,$$</span> can simply (with the relevant justifications) be: <span class="math-container">$$X⊆Y \implies X∩Y⊆X\\ \text{and }X⊆Y \implies X⊆X∩Y.$$</span></p> <hr /> <p><strong>Addendum 2</strong></p> <blockquote> <p>Thanks. 
I don't understand why <span class="math-container">$(4)$</span> and <span class="math-container">$(3)$</span> are logically equivalent.</p> </blockquote> <p>Paring them down (here, <span class="math-container">$Mn$</span> denotes that object <span class="math-container">$n$</span> holds property <span class="math-container">$M):$</span> <span class="math-container">\begin{gather}∀p\:Fp → ∀q\:Gq\tag 4 \\∀q\:∃p\:\big(Fp → Gq\big)\tag 3\end{gather}</span></p> <p>Sentence <span class="math-container">$(3)$</span> is the <a href="https://en.wikipedia.org/wiki/Prenex_normal_form#Implication" rel="nofollow noreferrer">Prenex form</a> of sentence <span class="math-container">$(4).$</span> They are logically equivalent because (here, <span class="math-container">$\psi$</span> is a sentence) <span class="math-container">\begin{gather}\psi → ∀q\:Gq\quad\equiv\quad ∀q\;(\psi → Gq),\tag{L1}\\ ∀p\:Fp → \psi \quad\equiv\quad ∃p\;(Fp → \psi).\tag{L2} \end{gather}</span></p> <p><span class="math-container">$(\mathrm L1)$</span> and <span class="math-container">$(\mathrm L2)$</span> contain four logical entailments; their formal proofs can be found in exercises 3 &amp; 6 on p. 114 &amp; 116 of <a href="https://forallx.openlogicproject.org/forallxyyc-solutions.pdf" rel="nofollow noreferrer">this book</a>. I'll illustrate the least obvious one <span class="math-container">$$∀p\:Fp → \psi \quad\models\quad ∃p\;(Fp → \psi)$$</span> with an example:</p> <ul> <li>premise: “Dark matter exists if every perfumier is French.”</li> <li>conclusion: “For some perfumier, dark matter exists if they're French.”</li> </ul> <p>If every perfumier is French, then the conclusion clearly follows from the premise. On the other hand, if some perfumier isn't French, then for <em>that</em> perfumier, “dark matter exists if they're French” is indeed (vacuously) true, and the conclusion is again true.</p>
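<p>The equivalence of $(4)$ and $(3)$ can also be checked by brute force over small nonempty domains (nonemptiness matters for the existential step); a Python sketch of mine:</p>

```python
from itertools import product

def equivalent_on(n):
    """Check  (forall p Fp) -> (forall q Gq)  ==  forall q exists p (Fp -> Gq)
    over a domain of size n, for all predicates F, G."""
    dom = range(n)
    for F in product([False, True], repeat=n):
        for G in product([False, True], repeat=n):
            lhs = (not all(F[p] for p in dom)) or all(G[q] for q in dom)
            rhs = all(any((not F[p]) or G[q] for p in dom) for q in dom)
            if lhs != rhs:
                return False
    return True
```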
1,837,220
<p>In this post: <a href="https://math.stackexchange.com/questions/1056058/computing-int-sqrt14x2-dx">Computing $\int \sqrt{1+4x^2} \, dx$</a> someone mentioned Euler substitution to compute the following integral:</p> <p>$$\int \sqrt{1+4x^2} \, dx$$</p> <p>I tried to follow this advice and got very nice result, namely I substituted $\sqrt{1+4x^2}=t-2x$ which after raising to the square and reducing gives $x=\frac{t^2-1}{4t}$ and $t-2x=\frac{t^2+1}{2t}$, then derivative is equal to $\frac{dx}{dt}=\frac{t^2+1}{4t^2}$ and the whole integral:</p> <p>$$\int \sqrt{1+4x^2} \, dx = \int \frac{(t^2+1)^2}{8t^3} \, dt$$</p> <p>Could you please check my solution, because it seems a lot easier than all these trigonometric substitution (too easy which is suspicious...). Thanks in advance.</p>
egreg
62,967
<p>If you set $\sqrt{1+4x^2}=t-2x$, you have $$ 1+4x^2=t^2-4tx+4x^2 $$ so $4tx=t^2-1$ and therefore $$ x=\frac{t^2-1}{4t}=\frac{t}{4}-\frac{1}{4t} $$ Thus $$ dx=\left(\frac{1}{4}+\frac{1}{4t^2}\right)\,dt=\frac{t^2+1}{4t^2}\,dt $$ and $$ \sqrt{1+4x^2}=t-\frac{t}{2}+\frac{1}{2t}=\frac{t^2+1}{2t} $$ so the integral becomes $$ \int\frac{(t^2+1)^2}{8t^3}\,dt= \frac{1}{8}\int\left(t+\frac{2}{t}+\frac{1}{t^3}\right)\,dt $$ that's elementary.</p> <p>Yes, your computation is right.</p> <hr> <p>The alternative way is setting $2x=\sinh(t/2)$, so $$ \sqrt{1+4x^2}=\cosh\frac{t}{2}, \qquad dx=\frac{1}{4}\cosh\frac{t}{2}\,dt $$ and the integral is $$ \frac{1}{4}\int\cosh^2\frac{t}{2}\,dt= \frac{1}{8}\int(\cosh t+1)\,dt $$ remembering that $2\cosh^2\frac{t}{2}-1=\cosh t$.</p> <hr> <p>Another (tricky) way is to do $2x=t$, that reduces to computing, up to a scalar factor, $$\DeclareMathOperator{\arsinh}{arsinh} I=\int\sqrt{1+t^2}\,dt=\int\frac{1+t^2}{\sqrt{1+t^2}}\,dt= \arsinh t+\int t\frac{t}{\sqrt{1+t^2}}\,dt= \arsinh t+t\sqrt{1+t^2}-I $$ (the last one by parts).</p>
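<p>The computation can be verified numerically: with $t=2x+\sqrt{1+4x^2}$, the elementary antiderivative $\frac18\left(\frac{t^2}{2}+2\ln t-\frac{1}{2t^2}\right)$, read as a function of $x$, should differentiate back to $\sqrt{1+4x^2}$ (a stdlib-only sketch):</p>

```python
import math

def antiderivative(x):
    # t = 2x + sqrt(1 + 4x^2) is positive for every real x
    t = 2 * x + math.sqrt(1 + 4 * x * x)
    return (t * t / 2 + 2 * math.log(t) - 1 / (2 * t * t)) / 8

def integrand(x):
    return math.sqrt(1 + 4 * x * x)
```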
1,808,441
<p>I want to know, for two circles, why the ratio of arc lengths is equal to the ratio of the two central angles in geometry. It must have something to do with the concept of similarity in geometry. I have scoured the Internet looking for answers but only found the ratio of circumferences to the ratio of their diameters.</p> <p>If my question seems crude it is because I have never posted a question before.</p>
Xaver
302,955
<p>The circumference of a circle with radius $r$ is $2\pi r$. The full circle has a central angle of $\varphi=2\pi$. Therefore, given a central angle $\varphi$, you can calculate the arc length by $r\varphi$. So the following statements hold:</p> <ul> <li>For a fixed central angle $\varphi$, the arc length $r\varphi$ is proportional to the radius $r$.</li> <li>For a fixed radius $r$, the arc length $r\varphi$ is proportional to the central angle $\varphi$.</li> </ul>
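<p>The formula $s=r\varphi$ and both proportionality statements are easy to confirm numerically by approximating the arc with short chords; a small Python sketch:</p>

```python
import math

def arc_length_numeric(r, phi, steps=20000):
    # approximate the arc from angle 0 to phi by many short chords
    pts = [(r * math.cos(phi * i / steps), r * math.sin(phi * i / steps))
           for i in range(steps + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
```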
1,528,235
<p>Recall that <a href="http://en.wikipedia.org/wiki/Tetration" rel="noreferrer">tetration</a> ${^n}x$ for $n\in\mathbb N$ is defined recursively: ${^1}x=x,\,{^{n+1}}x=x^{({^n}x)}$. </p> <p>Its inverse function with respect to $x$ is called <a href="http://en.wikipedia.org/wiki/Tetration#Super-root" rel="noreferrer">super-root</a> and denoted $\sqrt[n]y_s$ (the index $_s$ is not a variable, but is part of the notation &mdash; it stands for "super"). For $y&gt;1, \sqrt[n]y_s=x$, where $x$ is the unique solution of ${^n}x=y$ satisfying $x&gt;1$. It is known that $\lim\limits_{n\to\infty}\sqrt[n]2_s=\sqrt{2}$. We are interested in the convergence speed. It appears that the following limit exists and is positive: $$\mathcal L=\lim\limits_{n\to\infty}\frac{\sqrt[n]2_s-\sqrt2}{(\ln2)^n}\tag1$$ Numerically, $$\mathcal L\approx0.06857565981132910397655331141550655423...\tag2$$</p> <hr> <p>Can we prove that the limit $(1)$ exists and is positive? Can we prove that the digits given in $(2)$ are correct? Can we find a closed form for $\mathcal L$ or at least a series or integral representation for it?</p>
mick
39,261
<p>I got a message from Tommy1729.</p> <p>He considered nonzero $T$ such that $$T = \lim_{n\to\infty} \frac{A^n}{f(n,2) - \sqrt 2 - L\,(\ln 2)^n},$$ where $f(n,2)$ is the $n$-th super-root of $2$, $L$ is the constant from the OP and $A$ is a constant.</p> <p>If the limit $T$ does not exist, at least the best-fitting $A$ is considered. In other words: $$A^n \sim \frac{1}{f(n,2) - \sqrt 2 - L\,(\ln 2)^n}.$$</p> <p>Generalizing the OP's question is easy, and the problem was not new to him.</p> <p>These generalizations seem even harder and he did not provide formal proofs.</p> <p>However, what amazed me was this: $$L \ln 2 = r = 0.0475...,$$ where $r$ is conjectured to be the radius from Gottfried's answer!</p> <p>Sorry to add questions instead of answers, but this is too much for a comment.</p>
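<p>The numbers above are easy to reproduce; a stdlib-only Python sketch of mine (bisection for the $n$-th super-root; the clamping constant is an implementation detail to avoid float overflow):</p>

```python
import math

def tower(x, n, cap=50.0):
    # power tower of height n: x^(x^(...^x)), clamped to avoid overflow
    v = x
    for _ in range(n - 1):
        if v > cap:
            return cap
        v = x ** v
    return min(v, cap)

def super_root(n, y=2.0):
    # solve tower(x, n) == y by bisection; tower is increasing in x for x > 1
    lo, hi = 1.0, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if tower(mid, n) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

<p>With this, successive errors $f(n,2)-\sqrt 2$ shrink by a factor close to $\ln 2$, and the scaled error $(f(n,2)-\sqrt2)/(\ln 2)^n$ is already near $0.0686$ for moderate $n$.</p>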
4,587,657
<p>The call function is defined as</p> <p><span class="math-container">$$ \text{call}: \begin{cases} (\mathbb{R}^{I}\times I) \to \mathbb{R} \\ (f,x) \mapsto f(x) \end{cases} $$</span></p> <p>is &quot;<span class="math-container">$\text{call}$</span>&quot; a measurable function? In other words: for a random field <span class="math-container">$Z:\mathbb{R}^n\to\mathbb{R}$</span> and a random location <span class="math-container">$X\in\mathbb{R}^n$</span>, is <span class="math-container">$Z(X)$</span> a random variable?</p> <p>To address a comment: This is not simply a question of chaining measurable functions, as both <span class="math-container">$Z$</span> and <span class="math-container">$X$</span> depend on <span class="math-container">$\omega$</span>. I.e we really have <span class="math-container">$Z(\omega):\mathbb{R}^n\to\mathbb{R}$</span>, so <span class="math-container">$\omega\mapsto Z(\omega)(X(\omega))=\text{call}(Z(\omega), X(\omega))$</span> needs to be measurable.</p> <p>I would guess this should be correct tough. At least for continuous random fields. But I am not sure what to search for. In a programming context &quot;call&quot; would be appropriate (cf. <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call</a>). I am mostly looking for a reference to cite.</p>
Felix B.
445,105
<p>EDIT: previously this presented a dead end, but with @FShrike's answer this does not make any sense anymore. Instead I want to elaborate on @FShrike's answer. Partly to explain it to myself.</p> <h1>The Continuity Approach</h1> <p>The main idea is: A common <strong>sufficient</strong> criterion for measurability with regard to the <strong>Borel</strong> sigma algebra is continuity. So if we get continuity, we get Borel measurability for free. This opens three questions:</p> <ol> <li>Is our evaluation/call function actually continuous?</li> <li>Is the Borel sigma algebra what we want? <ul> <li>is the sigma algebra we want a Borel sigma algebra at all?</li> <li>have we generated the Borel sigma algebra from the correct topology?</li> </ul> </li> <li>If the evaluation/call function is not continuous, can it still be measurable?</li> </ol> <h2>1 Is the evaluation function continuous?</h2> <p>This is the main concern of @FShrike's <a href="https://math.stackexchange.com/a/4587832/445105">answer</a>. The first question is of course, what topology can we give <span class="math-container">$\mathbb{R}^I$</span>?</p> <p>It appears there is the concept of an <a href="https://encyclopediaofmath.org/wiki/Exponential_topology" rel="nofollow noreferrer">Exponential Topology</a>, which exists for very general <span class="math-container">$I$</span> (<a href="https://en.wikipedia.org/wiki/Locally_compact_space" rel="nofollow noreferrer">locally compact</a> <a href="https://en.wikipedia.org/wiki/Hausdorff_space" rel="nofollow noreferrer">Hausdorff</a> is sufficient). And the evaluation function is always continuous with regard to this topology.</p> <h2>2 Is the Borel Sigma algebra what we want?</h2> <p>@FShrike also addresses the question whether we have the correct topology somewhat. 
It appears that</p> <p>the <a href="https://encyclopediaofmath.org/wiki/Exponential_topology" rel="nofollow noreferrer">exponential topology</a> is identical to the <a href="https://en.wikipedia.org/wiki/Compact-open_topology" rel="nofollow noreferrer">compact-open topology</a> if and only if the evaluation function is continuous with regard to the compact-open topology.</p> <p>The compact-open topology is only defined on the set of continuous functions <span class="math-container">$C(I)$</span> though. But according to <a href="https://en.wikipedia.org/wiki/Compact-open_topology" rel="nofollow noreferrer">wikipedia</a>, the evaluation function is continuous with regard to the compact-open topology. This implies that the two topologies <em>do</em> coincide, which makes the exponential topology a reasonable extension for non-continuous functions.</p> <h3>What sigma algebra do we want?</h3> <p>Let us now start from the probability side: What do we want? Due to Kolmogorov's extension theorem, probabilists like to work with the sigma algebra <span class="math-container">$\mathcal{A}$</span> generated by finite cylinder sets.</p> <p>What does that mean?</p> <p>For a finite subset <span class="math-container">$J=\{t_1,\dots,t_n\}\subset I$</span> we define the finite dimensional projection</p> <p><span class="math-container">$$ \pi_J(f):= (f(t_1),\dots,f(t_n)) $$</span></p> <p>A cylinder is of the form <span class="math-container">$\pi_J^{-1}(A_1\times\dots\times A_n)$</span>, because for any value <span class="math-container">$t\not\in J$</span> the functions in this set can take any value. So informally we have</p> <p><span class="math-container">$$ \pi_J^{-1}(A_1\times\dots\times A_n) = A_1\times\dots\times A_n \times \mathbb{R}\times\mathbb{R}\times... 
\subset \mathbb{R}^I $$</span></p> <p>So the sigma algebra we want is <span class="math-container">$$ \begin{aligned} \mathcal{A} &amp;= \{\pi_J^{-1}(A), J\subset I \text{ countable}, A\in \mathcal{B}(\mathbb{R}^{|J|})\}\\ &amp;= \sigma( \pi_J^{-1}(A), J\subset I \text{ finite}, A\in \mathcal{B}(\mathbb{R}^{|J|}))\\ &amp;= \sigma( \pi_{\{x\}}^{-1}(A), x\in I, A\in\mathcal{B}(\mathbb{R})) \end{aligned} $$</span></p> <p>The middle line is usually the definition, because it meshes well with <a href="https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem" rel="nofollow noreferrer">Kolmogorov's extension theorem</a>, which uses a consistent set of finite-dimensional marginal distributions.</p> <p>If you view a sigma algebra as the set of questions you are allowed to ask, the sigma algebra <span class="math-container">$\mathcal{A}$</span> essentially allows you to ask questions about a countable set of points of your function. For general functions this does not allow you to separate a lot of functions. The set <span class="math-container">$$ \{ f\in \mathbb{R}^I : f\equiv c \} $$</span> is not measurable for example. But if we restrict ourselves to continuous functions, this set suddenly becomes measurable, because for separable $I$ a continuous function is determined by its values on a countable dense subset $D(I)$. 
More precisely, we have</p> <p><span class="math-container">$$ \mathcal{A}|_{C(I)} = \mathcal{B}(C(I))\tag{*} $$</span></p> <p>where the topology of the Borel sigma algebra <span class="math-container">$\mathcal{B}(C(I))$</span> is generated by the metric</p> <p><span class="math-container">$$ d(f,g) := \sum_{k=1}^\infty \frac{\min\{d_k(f,g),1\}}{2^k}, \quad d_k(f,g) := \sup_{x\in\overline{B_k(0)}}|f(x)-g(x)| $$</span></p> <p>where the truncation <span class="math-container">$\min\{\cdot,1\}$</span> makes the series converge, and where we assume that the ball <span class="math-container">$\overline{B_k(0)}$</span> with radius <span class="math-container">$k$</span> is compact (need <strong>Heine-Borel</strong>).</p> <h5>Proof of (*):</h5> <h6><span class="math-container">$\subseteq$</span>:</h6> <p>It is sufficient to prove that the <span class="math-container">$\pi_{\{x\}}$</span> are continuous and therefore measurable, because <span class="math-container">$\mathcal{A}$</span> is the smallest sigma algebra to make them all measurable.</p> <h6><span class="math-container">$\supseteq$</span>:</h6> <p>Because the open balls around rational polynomials form a countable basis of the topology of <span class="math-container">$C(I)$</span> induced by <span class="math-container">$d$</span>, it is sufficient to prove that the open balls are in <span class="math-container">$\mathcal{A}$</span>.
If we show that <span class="math-container">$$ h_0: f\mapsto d(f_0, f) $$</span> is measurable with regard to <span class="math-container">$\mathcal{A}$</span> we are done, because then <span class="math-container">$$ B_\epsilon(f_0) = h_0^{-1}([0,\epsilon)) \in \mathcal{A} $$</span></p> <p>To prove <span class="math-container">$h_0$</span> is measurable it is sufficient to show that each <span class="math-container">$h_k: f\mapsto d_k(f_0, f)$</span> is measurable.</p> <p>But because</p> <p><span class="math-container">$$ h_k(f) = \sup_{x\in \overline{B_k(0)}\cap D(I)} |f(x)-f_0(x)| = \sup_{x\in \overline{B_k(0)}\cap D(I)} |\pi_{\{x\}}(f)-\pi_{\{x\}}(f_0)| $$</span></p> <p>is just a countable supremum of measurable functions, it is measurable. Here we used the separability of <span class="math-container">$I$</span> and the continuity of <span class="math-container">$f$</span> and <span class="math-container">$f_0$</span> to replace the supremum over <span class="math-container">$\overline{B_k(0)}$</span> by the supremum over its intersection with the countable dense set <span class="math-container">$D(I)$</span>.</p> <h4>Comparison with the Borel(Compact-Open Topology)</h4> <p>The compact-open topology is defined as</p> <p><span class="math-container">$$ \tau( U_K(O): K\subset I \text{ compact}, O\in \tau_{\mathbb{R}}) $$</span> where <span class="math-container">$\tau_{\mathbb{R}}$</span> are the open sets in <span class="math-container">$\mathbb{R}$</span> and we define <span class="math-container">$$ U_K(O):= \{ f\in C(I) : f(K)\subset O\} =\pi_K^{-1}(O\times\dots\times O).
$$</span> The second equality is not quite rigorous because for a compact <span class="math-container">$K$</span> we need the cartesian product of <span class="math-container">$|K|$</span> many <span class="math-container">$O$</span>, which is generally uncountable.</p> <p>For continuity of the evaluation function with regard to <span class="math-container">$d$</span> we only need to show that the topology generated by <span class="math-container">$d$</span> is larger than the compact-open topology. For this, it is sufficient to show that every <span class="math-container">$U_K(O)$</span> is open with regard to <span class="math-container">$d$</span>.</p> <p>So let <span class="math-container">$U_K(O)$</span> be given. To show it is open, we need to find an epsilon ball around any element <span class="math-container">$f\in U_K(O)$</span>, which we now assume as given. Using <strong>Heine-Borel</strong> again, there exists some <span class="math-container">$k\in\mathbb{N}$</span> such that <span class="math-container">$K\subseteq \overline{B_k(0)}$</span>.</p> <p>Let <span class="math-container">$\eta$</span> be the minimum distance between <span class="math-container">$f(K)$</span> and <span class="math-container">$O^\complement$</span>; it is non-zero because <span class="math-container">$f(K)$</span> is compact and a subset of the open set <span class="math-container">$O$</span>. Then for <span class="math-container">$d(f,g)\le 2^{-k}$</span>, we have <span class="math-container">$$ \sup_{x\in K} | f(x) - g(x)| \le d_k(f,g) \overset{d(f,g)\le 2^{-k}}\le 2^k d(f,g) $$</span> So with an appropriate <span class="math-container">$\epsilon&gt;0$</span> we can ensure by <span class="math-container">$d(f,g)\le\epsilon$</span> that <span class="math-container">$$ \sup_{x\in K} | f(x) - g(x)|\le \eta. \tag{1} $$</span> With (1), we can guarantee <span class="math-container">$g(K)\subset O$</span> for all <span class="math-container">$d(f,g)&lt;\epsilon$</span>.
We have found our <span class="math-container">$\epsilon$</span> ball.</p> <h4>Comparison with Borel(Exponential Topology)</h4> <p>For Borel measurability to imply measurability with regard to the cylinder sigma algebra, the Borel sigma algebra of the exponential topology would have to be smaller than (or equal to) the one generated by all finite projections. Given that</p> <blockquote> <p>I am reasonably sure that the projections <span class="math-container">$\pi_J$</span> are measurable in the Borel-algebra, but I doubt that the Borel-algebra equals the cylinder algebra. I don't know for sure. To me, that is quite a weird thing to do, to take a product algebra for <span class="math-container">$I=\Bbb R^n$</span> (which is enormous!) - @FShrike</p> </blockquote> <p>If the <span class="math-container">$\pi_J$</span> are measurable in the Borel-algebra, the smallest sigma algebra that makes them measurable, namely <span class="math-container">$\mathcal{A}$</span>, is going to be smaller or equal. If they are not equal, measurability with regard to the Borel-algebra does not imply measurability with regard to <span class="math-container">$\mathcal{A}$</span>.</p> <p>Because I do not really understand the definition of the Exponential Topology well in the first place, I doubt that I will be able to make a more concrete statement in the near future.</p>
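As a quick numerical sanity check (my own sketch, not part of the original argument), the metric above can be approximated on finite grids. I use the truncation min(d_k, 1), the standard choice that makes the series converge (without it, d(f, f) would not even be 0), and cut the sum after finitely many terms:

```python
import math

def d_k(f, g, k, grid=1000):
    # sup-distance on the closed ball [-k, k], approximated on a finite grid
    xs = [-k + 2 * k * i / grid for i in range(grid + 1)]
    return max(abs(f(x) - g(x)) for x in xs)

def d(f, g, terms=30):
    # truncated version of d(f, g) = sum_k 2^-k * min(d_k(f, g), 1)
    return sum(min(d_k(f, g, k), 1.0) / 2 ** k for k in range(1, terms + 1))

f, g = math.sin, math.cos
print(d(f, f))   # 0.0: the metric vanishes on the diagonal
print(d(f, g))   # just under 1: every min(d_k, 1) saturates at 1 here
```

Since |sin x - cos x| already exceeds 1 on [-1, 1], every term contributes exactly 2^-k for this pair, so the truncated sum is 1 - 2^-30.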
1,795,836
<p>Let's say that $A \subset X$ is a deformation retract. It follows that $A$ is both a retract and a space homotopically equivalent to $X$. Is the converse true? Probably not, but I couldn't find any example yet.</p> <p>More specifically the converse would be:</p> <p>If $A \subset X$ is a retract which is homotopic to $X$ as a topological space then does there exist a homotopy between the retraction and the identity map: $$H:X \times [0, 1] \to X$$ such that $H(x,0)=x$, $H(x,1)\in A$ and $H(a,1)=a$ for $a\in A$.</p>
Faraad Armwood
317,914
<p>If your question is whether every space $Y$ which is homotopically equivalent to a space $X$ must be a retract of $X$, then this is certainly not true. One main requirement for a deformation retract is that $A \subset X$. One can easily produce homeomorphic spaces which aren't subspaces of each other. Consider two disjoint disks of different radii in the plane. Does this answer your question? </p>
1,838,002
<p>There is the famous <a href="https://en.wikipedia.org/wiki/Quillen-Suslin_theorem" rel="nofollow">Quillen-Suslin theorem</a>, which states that every finitely generated projective module over a ring of polynomials $k[x_1,...,x_n]$, where $k$ is a field, is free.</p> <p>I have never carefully read a proof of this theorem, which is given, for example, in <strong>Lang's Algebra</strong>. <code>Probably it is based on Quillen's original ideas.</code> - this is not true, as was pointed out in the answer below.</p> <p><strong>Questions:</strong> Is every finitely generated projective module over $\mathbb{Z}[x_1,...,x_n]$ free? </p> <p>If yes, then is the proof a modification of the one given in <strong>Lang's Algebra</strong>?</p> <p>And if yes, then how about polynomial rings over other Dedekind domains or number rings?</p>
Joel92
349,727
<p>It should be true for any PID. See the book by Lam, <em>Serre's Conjecture</em>.</p>
2,259,109
<p>If the value of $f(z_0)$ or $f^\prime(z_0)$ is a complex number, then is $f(z)$ analytic at $z_0$?</p>
Lutz Lehmann
115,115
<p>No. But if $f(z)$ is differentiable as a 2D real function and its derivative $f'(z)$ is expressible as a complex number, and that is true for all points in a neighborhood of $z_0$, then $f$ is holomorphic or analytic in that open set and thus at $z_0$.</p>
1,705,081
<p>It's a matrix solved with least-squares equations (probably). I used a calculator but can't get this outcome. If you have a way to get to this, please explain how.</p> <p>[The math on that image is: $$A = \left[\matrix{4&amp;3&amp;1&amp;0&amp;1\cr 5&amp;2&amp;1&amp;0&amp;1\cr 4&amp;2&amp;1&amp;1&amp;1\cr 3&amp;1&amp;0&amp;1&amp;1\cr 1&amp;1&amp;0&amp;1&amp;1\cr}\right], \quad\vec b = \left[\matrix{4\cr 6\cr 6\cr 3\cr 1\cr}\right]$$ "Least Squares" of $A\vec x = \vec b$ is $\vec x = \left[\matrix{1\cr -1\cr 2\cr 1\cr 1\cr}\right]$.]</p> <p>Pic related: <a href="http://postimg.org/image/7ellc2pnl/" rel="nofollow">http://postimg.org/image/7ellc2pnl/</a></p>
Alexander
316,927
<p>Your answer is correct: $101^{16}$ is a constant, so it does not grow with $n$ at all. $n^{100}$ grows more slowly than $1.5^{n}$ for large $n$. And $(n!)^{2}$ grows faster than all of the others.</p>
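A quick sketch of my own comparing orders of magnitude of these four quantities; note that the exponential only overtakes the polynomial for fairly large $n$:

```python
import math

def log10_vals(n):
    # log10 of each quantity; lgamma avoids computing the huge factorial itself
    return {
        "101^16": 16 * math.log10(101),   # constant: independent of n
        "n^100": 100 * math.log10(n),
        "1.5^n": n * math.log10(1.5),
        "(n!)^2": 2 * math.lgamma(n + 1) / math.log(10),
    }

for n in (200, 20000):
    v = log10_vals(n)
    print(n, sorted(v, key=v.get))
# n = 200:   ['101^16', '1.5^n', 'n^100', '(n!)^2']  (polynomial still ahead)
# n = 20000: ['101^16', 'n^100', '1.5^n', '(n!)^2']  (exponential has overtaken)
```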
2,328,505
<p>Let $X$ be an exponential random variable with $\lambda =5$ and $Y$ a uniformly distributed random variable on $(-3,X)$. Find $\mathbb E(Y)$.</p> <p>My attempt:</p> <p>$$\mathbb E(Y)= \mathbb E(\mathbb E(Y|X))$$ </p> <p>$$\mathbb E(Y|X) = \int^{x}_{-3} y \frac{1}{x+3} dy = \frac{x^2+9}{2(x+3)}$$</p> <p>$$ \mathbb E(\mathbb E(Y|X))= \int^{\infty}_{0} \frac{x^2+9}{2(x+3)} 5 e^{-5x} \, dx$$</p>
Yining Wang
158,147
<p>You made a mistake in your calculation of $\mathbb E[Y|X]$. The correct calculation is $$ E[Y|X=x] = \frac{1}{x+3}\int_{-3}^x{y dy} = \frac{1}{x+3} \left(\frac{1}{2}y^2\right)\bigg|_{-3}^x = \frac{x^2-9}{2(x+3)} = \frac{x-3}{2}. $$</p> <p>We then have that $$ \mathbb E[Y] = \int_0^{\infty}\frac{x-3}{2}\cdot 5e^{-5x}dx = \left(\frac{1}{10}e^{-5x}(14-5x)\right)\bigg|_0^{\infty} = -\frac{7}{5}. $$</p>
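A quick Monte Carlo check of the value $-7/5$ (my own sketch, using only the standard library):

```python
import random

random.seed(0)
N = 500_000
total = 0.0
for _ in range(N):
    x = random.expovariate(5)        # X ~ Exponential(lambda = 5)
    total += random.uniform(-3, x)   # Y | X = x  ~  Uniform(-3, x)
print(total / N)   # close to -7/5 = -1.4
```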
226,323
<p>Let $X$ and $Y$ be complex Banach spaces and $B(X,Y)$ be the Banach space of all bounded operators. An operator $T\in B(X,Y)$ is weakly compact if $T(\{ x\in X;\; \| x\| \leq 1\})$ is relatively compact in the weak topology of $Y$. If $X$ or $Y$ is reflexive, then every operator in $B(X,Y)$ is weakly compact. I guess that the converse holds as well: if every operator in $B(X,Y)$ is weakly compact then either $X$ or $Y$ has to be reflexive. But I cannot find a satisfactory argument for this. I know about some characterizations of weakly compact operators: factorization through a reflexive Banach space or continuity with respect to the right topology in $X$. However it seems to me that there must be a simple argument which forces that either $X$ or $Y$ is reflexive if every operator in $B(X,Y)$ is weakly compact. I am asking if someone knows this simple argument or if there is a paper with an answer to my question.</p>
M.González
39,421
<p>The fact that each $T\in B(X,Y)$ is weakly compact does not imply $X$ or $Y$ reflexive. For example, every non-weakly compact operator $T:\ell_\infty\to Y$ is an isomorphism on a subspace isomorphic to $\ell_\infty$ (see Prop. 2.f.4 in Classical Banach spaces I, by Lindenstrauss and Tzafriri). </p> <p>Thus if $Y$ is a non-reflexive space containing no copy of $\ell_\infty$ (e.g. $\ell_1$, or a separable non-reflexive space) then every $T\in B(\ell_\infty,Y)$ is weakly compact.</p>
1,439,429
<p>Is it possible to calculate the Sagitta, knowing the Segment Area and Radius? Alternatively, is there a way to calculate the Chord Length, knowing the Segment Area and Radius?</p>
Rajat
177,357
<ol> <li><p>Non-convex.</p></li> <li><p>Gradient descent is an unconstrained optimization method, but its success depends on various conditions on the function, like differentiability, a Lipschitz condition, etc. If a function is not convex, then you cannot guarantee convergence to the global optimum.</p></li> </ol>
1,651,991
<p>Let $p(x)$ be an odd-degree polynomial and let $q(x)=(p(x))^2+ 2p(x)-2$ </p> <p>a) The equation $q(x)=p(x)$ admits at least two distinct real solutions.</p> <p>b) The equation $q(x)=0$ admits at least two distinct real solutions.</p> <p>c) The equation $p(x)q(x)=4$ admits at least two distinct real solutions.</p> <p>Which of the following are true?</p> <p>I know that all three are true but do not know how to prove them.</p>
Claude Leibovici
82,404
<p>The equation of a straight line is $$y=a + b x$$ So, just apply it to each point</p> <p>$$-0.5=a-2b$$ $$0.25=a+0.5b$$ Solve for $a$ and $b$ and apply $$?=a - b$$</p> <p>I am sure that you can take it from here.</p>
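In code, with the point values read off the two equations above (a sketch of my own):

```python
def fit_line(x1, y1, x2, y2):
    # solve y = a + b*x through the two given points
    b = (y2 - y1) / (x2 - x1)   # slope
    a = y1 - b * x1             # intercept
    return a, b

a, b = fit_line(-2, -0.5, 0.5, 0.25)
print(a, b)      # a = 0.1, b = 0.3 (up to float rounding)
print(a - b)     # the requested value at x = -1
```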
3,328,737
<p>For any rational number, <span class="math-container">$\frac{p}{q}$</span> , <span class="math-container">$p$</span> and <span class="math-container">$q$</span> should be integers, <span class="math-container">$q\neq0$</span> and <span class="math-container">$p,q$</span> should not have any common factors. Now, if we have two even numbers, say <span class="math-container">$2m$</span> and <span class="math-container">$2n$</span> where <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are integers. <span class="math-container">$$\frac{\text{even}}{\text{even}}=\frac{2m}{2n}=\frac{m}{n}$$</span> where(<span class="math-container">$\frac{m}{n}$</span>) nature is still unknown. So, what nature does <span class="math-container">$\frac{\text{even}}{\text{even}}$</span> have, rational or irrational?</p> <p>additional reference: <a href="https://www.quora.com/Is-2-4-rational" rel="nofollow noreferrer">https://www.quora.com/Is-2-4-rational</a></p>
Mark Bennet
2,906
<p>The fact is that if you have $p$ and $q\neq 0$ integers then $|p|$ and $|q|$ are positive integers, or $p=0$ when $\frac pq=0$ is rational. If you cancel a common factor $2$ to obtain $|m|\lt |p|$ and $|n|\lt |q|$ you have smaller positive integers.</p> <p>You can keep dividing out common factors and obtain a decreasing sequence of positive integers for the numerator, and another for the denominator. Since a decreasing sequence of positive integers must eventually be constant, this process will terminate in a (form of the) fraction in which numerator and denominator have no common positive integer factor apart from $1$. This is what you have defined as a rational number.</p>
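The argument above translates directly into a terminating procedure (my own sketch):

```python
from math import gcd

def lowest_terms(p, q):
    # cancel common factors until none remain; this terminates because
    # the (positive) numerator and denominator strictly decrease each step
    while gcd(p, q) > 1:
        g = gcd(p, q)
        p, q = p // g, q // g
    return p, q

print(lowest_terms(2, 4))     # (1, 2): an even/even fraction is still rational
print(lowest_terms(48, 36))   # (4, 3)
```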
439,941
<p>I ran into this question and I am finding it very difficult to solve:</p> <blockquote> <p>How many different expressions can you get by inserting parentheses into: $$x_{1}-x_{2}-\cdots-x_{n}\quad ?$$</p> </blockquote> <p>For example:</p> <p>$$\begin{align*} x_{1}-(x_{2}-x_{3}) &amp;= x_{1}-x_{2}+x_{3}\\ (x_{1}-x_{2})-x_{3}&amp;=x_{1}-x_{2}-x_{3}\\ x_{1}-((x_{2}-x_{3})-x_{4})&amp;=x_{1}-x_{2}+x_{3}+x_{4}\\ \end{align*}$$</p> <p>I'm really desperate for a full answer. I've been working on this for 3 hours. Thanks in advance.</p>
Ross Millikan
1,827
<p>The answer is $2^{n-2}$. $x_1$ must always be positive and $x_2$ must always be negative. Then you can pick the signs on all the rest any way you want, starting with $x_3$. For a string of length $n$, start with a string of length $n-1$ that has the signs the way you want up to there. If you want the sign before $x_n$ to be negative, leave it outside the parentheses. If you want it positive, add it to the last set of parentheses if $x_{n-1}$ is inside one, or group it with $x_{n-1}$ if not. Steven Stadnicki's example of $x_1-x_2+x_3+x_4=x_1-(x_2-x_3-x_4)$ shows this in action.</p>
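One can confirm the count $2^{n-2}$ by brute force (my own sketch): recursively generate every parenthesization of $x_1-\dots-x_n$ as a vector of signs and count the distinct results.

```python
def sign_patterns(i, j):
    # all sign vectors obtainable from x_i - x_{i+1} - ... - x_j
    # by inserting parentheses
    if i == j:
        return {(1,)}
    out = set()
    for k in range(i, j):                      # split at a top-level minus
        for left in sign_patterns(i, k):
            for right in sign_patterns(k + 1, j):
                # the minus in front of the right factor flips all its signs
                out.add(left + tuple(-c for c in right))
    return out

for n in range(2, 8):
    print(n, len(sign_patterns(1, n)))   # 1, 2, 4, 8, 16, 32 = 2^(n-2)
```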
2,281,894
<blockquote> <p>The Hardy space <span class="math-container">$H^2(\mathbb{D})$</span> is defined to be the space of all functions <span class="math-container">$f$</span> holomorphic on the unit disk <span class="math-container">$\mathbb{D}$</span> such that the norm <span class="math-container">$\lVert \cdot \rVert_H$</span></p> <p><span class="math-container">$\lVert f \rVert_H^2=\sup_{0&lt;r&lt;1}\int_0^{2\pi}|f(re^{i\theta})|^2 d\theta$</span></p> <p>is finite.</p> <p>Show that <span class="math-container">$H^2(\mathbb{D})$</span> is a Hilbert space.</p> </blockquote> <p>I have shown that if <span class="math-container">$f(z)=\sum_n c_nz^n$</span>, then <span class="math-container">$\lVert f \rVert_H^2=2\pi \sum_n|c_n|^2$</span>. How does this imply <span class="math-container">$H^2(\mathbb{D})$</span> is a Hilbert space? What is the inner product induced by the norm?</p>
Teebro Prokash
481,770
<p>Try using this fact (the parallelogram law, with the inner product recovered via the <em>polarization identity</em>): </p> <blockquote> <p>A <em>Banach space</em> $\mathcal{B}$ with <em>norm</em> $\parallel \ .\parallel$ is a <em>Hilbert space</em> iff $$\forall f,g \in \mathcal{B}, \ \ \ \ \parallel f+g\ \parallel^2 +\parallel f-g \ \parallel^2 = 2\left(\parallel f \ \parallel^2 +\parallel g \ \parallel^2 \right)$$ with $\langle f,g\rangle = \dfrac{1}{4} \left(\left(\parallel f+g \parallel^2 - \parallel f-g \parallel^2 \right) + i \left( \parallel f + ig \parallel^2 - \parallel f - ig \parallel^2 \right) \right)$</p> </blockquote> <p>In fact, letting $f(z) = \sum_{n}{a_n z^n}$ and $g(z) = \sum_{n}{b_n z^n}$, we have $$|a_n + b_n|^2+|a_n - b_n|^2 = 2(|a_n|^2+|b_n|^2) \ \ \ \ \ \forall n\in \mathbb{N}$$ (check! [$a_n, b_n \in \mathbb{C}$]), and because all the terms in all the series are positive, we can write $$\parallel f+g\ \parallel^2 +\parallel f-g \ \parallel^2 = 2\pi \left(\sum_{n}{|a_n + b_n|^2}+ \sum_{n}{|a_n - b_n|^2}\right) = 2 \pi \left(2\left(\sum_{n}{|a_n|^2}+\sum_{n}{|b_n|^2}\right) \right) = 2\left(\parallel f \ \parallel^2 +\parallel g \ \parallel^2 \right)$$ Now you can try proving that the inner product you thus obtain is the same as the one @Fred has described! In fact, check that $$\langle f,g \rangle = 2\pi \sum_{n}{a_n \overline{b_n}}$$ [Note that, even when you encounter a <em>Banach space</em> which you don't know the <em>inner product</em> formula for, you can thus check if it is a <em>Hilbert space</em> and describe an <em>inner product</em>.] </p> <p>To show completeness, do the following: </p> <ul> <li>Let $\{f_n\}$ be <em>Cauchy</em> in $\mathbb{H}^2$. Say, $f_n(z) = \sum_{i}{a_{n,i} z^i}$. Using your formula for $\parallel . \parallel_{\mathbb{H}^2}$, $\{s_n\}:s_n = \{a_{n,i}\}_{i=1}^{\infty}$ is <em>Cauchy</em> in $l^2(\mathbb{N})$. </li> <li>$l^2(\mathbb{N})$ is complete.
$$\Rightarrow \ \ \ \exists \{a_i\} \in l^2(\mathbb{N}), \ \ \ \{a_{n,i}\} \xrightarrow[n\rightarrow \infty]{l^2(\mathbb{N})} \{a_i\}$$</li> <li>Define, $f(z) : = \sum_{i}{a_{i} z^i}$. Again, using your formula for $\parallel . \parallel_{\mathbb{H}^2}$, show that, $$f_{n} \xrightarrow[n\rightarrow \infty]{\mathbb{H}^2} f$$</li> </ul> <p>This finishes the proof.</p>
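A small numerical check of the two identities on finitely supported coefficient sequences (my own sketch; note the norms enter squared):

```python
import math

TWO_PI = 2 * math.pi

def norm_sq(c):
    # ||f||^2 = 2*pi * sum |c_n|^2 for f(z) = sum c_n z^n
    return TWO_PI * sum(abs(x) ** 2 for x in c)

def inner(a, b):
    # the claimed inner product <f, g> = 2*pi * sum a_n * conj(b_n)
    return TWO_PI * sum(x * y.conjugate() for x, y in zip(a, b))

def comb(a, b, s):
    # coefficient sequence of f + s*g
    return [x + s * y for x, y in zip(a, b)]

a = [1 + 2j, 0.5, -1j]
b = [0.3, 1 - 1j, 2]

# parallelogram law (squared norms)
lhs = norm_sq(comb(a, b, 1)) + norm_sq(comb(a, b, -1))
rhs = 2 * (norm_sq(a) + norm_sq(b))
print(abs(lhs - rhs))   # ~0

# polarization identity recovers <f, g>
pol = 0.25 * ((norm_sq(comb(a, b, 1)) - norm_sq(comb(a, b, -1)))
              + 1j * (norm_sq(comb(a, b, 1j)) - norm_sq(comb(a, b, -1j))))
print(abs(pol - inner(a, b)))   # ~0
```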
2,325,436
<p>I was reading <em>Introduction to quantum mechanics</em> by David J. Griffiths and came across the following paragraph:</p> <blockquote> <p><span class="math-container">$3$</span>. The eigenvectors of a hermitian transformation span the space.</p> <p>As we have seen, this is equivalent to the statement that any hermitian matrix can be diagonalized. <strong>This rather technical fact is</strong>, in a sense, <strong>the mathematical support on which much of quantum mechanics leans</strong>. It turns out to be a thinner reed than one might have hoped, because <strong>the proof does not carry over to infinite-dimensional spaces.</strong></p> </blockquote> <p>My thoughts:</p> <p>If much of quantum mechanics leans on it, but the proof does not carry over to infinite-dimensional spaces, then hermitian transformations in infinite dimensions are spurious.</p> <p>But there is an infinite set of separable solutions for e.g. the particle in a box. So the Hamiltonian for that system has a spectrum with an infinite number of eigenvectors and is of infinite dimensionality.</p> <p>If we can't prove that this infinite set of eigenvectors spans the space, then how can we use completeness all the time?</p> <p>Am I missing something here? Any misconceptions?</p> <p>I'd appreciate any help.</p>
Robert Israel
8,508
<p>Physicists persist in writing things that do not make sense mathematically, but there is a mathematically rigorous version of it: the Spectral Theorem for densely defined self-adjoint operators on Hilbert space. Most functional analysis texts will cover it.</p>
4,479,972
<p>The suspension <span class="math-container">$SX$</span> of a topological space <span class="math-container">$X$</span> is defined as follows: <span class="math-container">$${\displaystyle S(X)=(X\times I)/\{(x_{1},0)\sim (x_{2},0){\mbox{ and }}(x_{1},1)\sim (x_{2},1){\mbox{ for all }}x_{1},x_{2}\in X\}}.$$</span></p> <p>My question is that: what is the dimension of <span class="math-container">$S(\mathbb{R}P^2)$</span>, the suspension of real projective plane?</p>
GSofer
509,052
<p>The dimension of the suspension is equal to the dimension of <span class="math-container">$\mathbb{R}P^2$</span> plus <span class="math-container">$1$</span>, so <span class="math-container">$3$</span>. This is usually the case with 'nice' spaces - taking the suspension increases the dimension by <span class="math-container">$1$</span>.</p> <p>In fact, in terms of homology, taking the suspension &quot;shifts&quot; all of the (reduced) homology groups up by one dimension.</p>
4,479,972
<p>The suspension <span class="math-container">$SX$</span> of a topological space <span class="math-container">$X$</span> is defined as follows: <span class="math-container">$${\displaystyle S(X)=(X\times I)/\{(x_{1},0)\sim (x_{2},0){\mbox{ and }}(x_{1},1)\sim (x_{2},1){\mbox{ for all }}x_{1},x_{2}\in X\}}.$$</span></p> <p>My question is that: what is the dimension of <span class="math-container">$S(\mathbb{R}P^2)$</span>, the suspension of real projective plane?</p>
Mihail
201,204
<p>It is not correct to talk about the dimension of a suspension because it is not a manifold in our example. See <a href="https://math.stackexchange.com/questions/784962/easier-proof-about-suspension-of-a-manifold">this</a></p>
2,038,323
<p>I am a first year college student studying linear algebra. </p> <p>I understand that all linear transformations can be represented by a matrix mapping, and more specifically, the matrix mapping can be constructed by taking the column vectors of the images of the standard basis vectors. However, if the transformation is not linear, you would not be able to construct a matrix mapping for it.</p> <p>But then, my textbook proves that a rotation is a linear transformation by assuming that the transformation is linear, then constructing the corresponding matrix mapping. Then it argues that since the transformation can be represented by a matrix, it is linear. </p> <p>To me, this seems like an invalid proof, because the condition for representing a transformation with a matrix is that the transformation should be linear (also it starts off by assuming what you are trying to prove). Can someone tell me what is wrong with my reasoning? </p>
q.Then
222,237
<p>By definition, </p> <blockquote> <p>Any linear transformation can be represented as a matrix...</p> </blockquote> <p>Any rotation can be represented as an orthogonal matrix in the general form of $$\begin{bmatrix} \cos\theta &amp; -\sin\theta \\ \sin\theta &amp; \cos\theta \end{bmatrix}$$ and this can be generalized to $\mathbb{R}^n$. You can also check that multiplication by any matrix $A$ is linear.</p> <p><a href="https://en.wikipedia.org/wiki/Rotation_matrix" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Rotation_matrix</a></p>
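A direct numerical check of linearity for this matrix (my own sketch, plain Python):

```python
import math, random

def rotate(theta, v):
    # apply the 2x2 rotation matrix to v = (x, y)
    x, y = v
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

random.seed(1)
theta = 0.7
for _ in range(100):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.uniform(-5, 5)
    # additivity: R(u + v) = R(u) + R(v)
    lhs = rotate(theta, (u[0] + v[0], u[1] + v[1]))
    rhs = tuple(a + b for a, b in zip(rotate(theta, u), rotate(theta, v)))
    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
    # homogeneity: R(c*u) = c*R(u)
    lhs = rotate(theta, (c * u[0], c * u[1]))
    rhs = tuple(c * a for a in rotate(theta, u))
    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
print("linearity checks passed")
```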
3,643
<p>Is there a quick method to transpose uneven lists without conditionals?</p> <p>With:</p> <pre><code>Drop[Table[q, {10}], #] &amp; /@ Range[10] </code></pre> <p>Thus the first list would have the first element of all the lists, the 2nd list would have all the 2nd elements of all the lists, etc. If there are no elements left, it skips them. I have a feeling that this should incorporate Mathematica's Reap / Sow functions, but I am unfamiliar with them.</p>
Sjoerd C. de Vries
57
<p>Just to show that there are always a zillion ways to do things in Mathematica, here is my version. Actually, I myself would have used <code>Flatten</code> and its mind-shattering second argument after having learned of its existence a couple of months ago. </p> <p>Contrary to the <code>Flatten</code> method this one is straightforward and easy to understand, but boorish and probably not very efficient.</p> <p>I'll start with a slightly modified test table to better demonstrate the results:</p> <pre><code>(t = Drop[Table[q[i, #], {i, 10}], #] &amp; /@ Range[10]) // TableForm </code></pre> <p><img src="https://i.stack.imgur.com/hndSs.png" alt="Mathematica graphics"></p> <p>The ragged array is changed to a rectangular one using <code>PadRight</code>, padding it with a unique symbol generated by <code>Unique</code>. I then <code>Transpose</code> the matrix and remove the unique symbol by replacing it with <code>Sequence</code> (think of it as a black hole).</p> <pre><code>u = Unique[]; m = Length /@ t // Max; Transpose[PadRight[#, m, u] &amp; /@ t] /. u -&gt; Sequence[] // TableForm </code></pre> <p><img src="https://i.stack.imgur.com/2AYBe.png" alt="Mathematica graphics"></p> <hr> <p>This is probably how Mr.Wizard does it:</p> <pre><code>Module[{u}, Transpose[PadRight[t, Automatic, u]] /. u -&gt; Sequence[]] </code></pre> <p>Same result, but more compact code.</p>
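For comparison, a rough Python analogue of the same pad/transpose/strip idea (my own sketch, not Mathematica; `object()` plays the role of the unique padding symbol):

```python
from itertools import zip_longest

def ragged_transpose(rows):
    # pad with a unique sentinel, transpose, then strip the sentinel --
    # the same idea as PadRight / Transpose / replace-with-Sequence above
    pad = object()                      # guaranteed not to collide with any data
    return [[x for x in col if x is not pad]
            for col in zip_longest(*rows, fillvalue=pad)]

t = [[1, 2, 3], [4, 5], [6]]
print(ragged_transpose(t))   # [[1, 4, 6], [2, 5], [3]]
```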
3,987,718
<p>Let <span class="math-container">$L \in \mathbb{R}$</span> and let <span class="math-container">$f$</span> be a function that is differentiable on a deleted neighborhood of <span class="math-container">$x_{0} \in \mathbb{R}$</span> such that <span class="math-container">$\lim_{x \to x_{0}}f'(x)=L$</span>.</p> <p>Find a function satisfying the above, and such that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x_{0}$</span>.</p> <p>--</p> <p>So I think that I do not completlely understand, when a function is indeed differentiable at <span class="math-container">$x_{0}$</span> and when it's not, and why in both cases I can still find its <span class="math-container">$f'$</span>?</p> <p>I will appreciate some explanation about that.</p> <p>Moreover, I thought of <span class="math-container">$f(x)=x^x$</span> or <span class="math-container">$f(x)=\ln(x^x)$</span>.</p> <p>If I understand it correctly, than both my <span class="math-container">$f$</span>'s does not differentiable at <span class="math-container">$x_{0}=0$</span>, because:</p> <p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{x^x-0^0}{x-0}$</span> which is undefined?</p> <p>or</p> <p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{\ln(x^x)-\ln(0^0)}{x-0}$</span> which is undefined?</p> <p>Thanks a lot!</p>
leoli1
649,658
<p>Take <span class="math-container">$f:\Bbb R\to\Bbb R$</span>, with <span class="math-container">$$f(x)=\begin{cases}Lx~~~\text{ for }x&lt;x_0\\Lx+1~\text{for }x\geq x_0\end{cases}$$</span></p>
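Numerically, the failure at $x_0$ shows up in the one-sided difference quotients (my own sketch, taking $x_0=0$ and $L=2$):

```python
def f(x, L=2.0):
    # the piecewise function above, with x0 = 0
    return L * x if x < 0 else L * x + 1

h = 1e-6
# away from x0 the derivative exists and equals L:
print((f(1 + h) - f(1)) / h)          # ~2
# at x0 the right-hand quotient is fine, the left-hand one blows up:
print((f(h) - f(0)) / h)              # ~2
print((f(-h) - f(0)) / (-h))          # ~2 + 1/h: no finite limit as h -> 0
```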
23,502
<p><em>Edit: I wrote the following question and then immediately realized an answer to it, and moonface gave the same answer in the comments. Namely, $\mathbb C(t)$, the field of rational functions of $\mathbb C$, gives a nice counterexample. Note that it is of dimension $2^{\mathbb N}$.</em></p> <p>The following is one statement of Schur's lemma:</p> <blockquote> <p>Let $R$ be an associative unital algebra over $\mathbb C$, and let $M$ be a simple $R$-module. Then ${\rm End}_RM = \mathbb C$.</p> </blockquote> <p>My question is: are there extra conditions required on $R$? In particular, how large can $R$ be?</p> <p>In particular, the statement is true when $\dim_{\mathbb C}R &lt;\infty$ and also when $R$ is countable-dimensional. But I have been told that the statement fails when $\dim_{\mathbb C}R$ is sufficiently large.</p> <p>How large must $\dim_{\mathbb C}R$ be to break Schur's lemma? I am also looking for an explicit example of Schur's lemma breaking for $\dim_{\mathbb C}R$ sufficiently large?</p>
Kevin McGerty
1,878
<p>This is standard stuff: for example, if $A$ is an associative algebra over a field $k$ and $M$ is a simple module over $A$ whose dimension as a $k$-vector space is smaller than the cardinality of $k$, then any element of $\text{End}_k(M)$ is algebraic over $k$ (one just needs to consider the $k$-dimension of $k(\alpha)$ if $\alpha$ is a nonzero endomorphism -- see Kevin Buzzard's comment above for example). On the other hand, there's a nice (short) article of Quillen in Proceedings of the AMS about Schur's Lemma for filtered algebras: he checks that for a filtered algebra $U$ over $k$ whose associated graded is commutative and finitely generated, if $M$ a simple $U$-module, and $\theta \in \text{End}_U(M)$, then one has $\theta$ algebraic over $k$.</p>
2,285,299
<p>For $ c&gt; b&gt;a&gt;0 $, is this inequality true? $$ c^2+ab&gt; ac+bc $$</p> <p>If yes, can anybody please provide a hint so I can solve it? </p>
Dr. Sonnhard Graubner
175,066
<p>We have $$c^2-ac+ab-bc&gt;0$$ which is equivalent to $$c(c-a)+b(a-c)&gt;0$$ and hence to $$(c-a)(c-b)&gt;0$$ This is true, since we have $$c&gt;b&gt;a&gt;0$$</p>
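A quick random sanity check of the factored form (my own sketch):

```python
import random

random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.01, 10.0)
    b = a + random.uniform(0.01, 10.0)   # force 0 < a < b
    c = b + random.uniform(0.01, 10.0)   # force b < c
    assert (c - a) * (c - b) > 0
    assert c * c + a * b > a * c + b * c
print("all samples satisfy the inequality")
```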
3,299,469
<p>Consider the set <span class="math-container">$A=\{n\ a \}$</span> where <span class="math-container">$a&gt;0$</span> is a constant and <span class="math-container">$n \in \mathbb{N}$</span></p> <p><strong>How shall we write this set <span class="math-container">$A$</span> in set theory?</strong></p> <p>If we write it as <span class="math-container">$A=\{n\ a\ \backslash n \in \mathbb{N}, a&gt;0 \}$</span> or <span class="math-container">$A=\{n\ a\ / n \in \mathbb{N}, a&gt;0 \}$</span> will it mean just one set or a set of infinite sets?</p>
Chinnapparaj R
378,881
<ul> <li>If both <span class="math-container">$n$</span> and <span class="math-container">$a$</span> are fixed, then the set is the singleton <span class="math-container">$\{na\}$</span></li> <li>If <span class="math-container">$a$</span> is fixed and <span class="math-container">$n \in \Bbb N$</span> is a varying quantity, then the set is <span class="math-container">$\{na: n \in \Bbb N\}=\{a,2a,3a,\cdots\}$</span></li> </ul> <p>In both cases, <span class="math-container">$A$</span> is <em>one</em> set, with cardinality <span class="math-container">$1$</span> and infinite (of course, <span class="math-container">$\aleph_0$</span>), respectively!</p>
4,049,293
<p>I am learning about the cross entropy, defined by Wikipedia as <span class="math-container">$$H(P,Q)=-\text{E}_P[\log Q]$$</span> for distributions <span class="math-container">$P,Q$</span>.</p> <p>I'm not happy with that notation, because it implies symmetry; <span class="math-container">$H(X,Y)$</span> is often used for the joint entropy; and lastly, I want to use a notation which is consistent with the notation for entropy: <span class="math-container">$$H(X)=-\text{E}_P[\log P(X)]$$</span></p> <p>When dealing with multiple distributions, I like to write <span class="math-container">$H_P(X)$</span> so it's clear with respect to which distribution I'm taking the entropy. When dealing with multiple random variables, I think it's sensible to make precise the random variable with respect to which the expectation is taken by using the subscript <span class="math-container">$_{X\sim P}$</span>. My notation for entropy thus becomes <span class="math-container">$$H_{X\sim P}(X)=-\text{E}_{X\sim P}[\log P(X)]$$</span></p> <p>Now comes the point I don't understand about the definition of cross entropy: Why doesn't it reference a random variable <span class="math-container">$X$</span>? Applying analogous reasoning as above, I would assume that cross entropy has the form <span class="math-container">\begin{equation}H_{X\sim P}(Q(X))=-\text{E}_{X\sim P}[\log Q(X)]\tag{1}\end{equation}</span> however, Wikipedia makes no mention of any such random variable <span class="math-container">$X$</span> in the article on cross entropy. It speaks of</p> <blockquote> <p>the cross-entropy between two probability distributions <span class="math-container">$p$</span> and <span class="math-container">$q$</span></p> </blockquote> <p>which, like the notation <span class="math-container">$H(P,Q)$</span>, implies a function whose argument is a pair of distributions, whereas entropy <span class="math-container">$H(X)$</span> is said to be a function of a random variable.
In any case, to take an expected value I need a (function of) a random variable, which <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are not.</p> <p>Comparing the definitions for the discrete case: <span class="math-container">$$H(p,q)=-\sum_{x\in\mathcal{X}}p(x)\log q(x)$$</span> and <span class="math-container">$$H(X)=-\sum_{i=1}^n P(x_i)\log P(x_i)$$</span></p> <p>where <span class="math-container">$\mathcal{X}$</span> is the support of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>, there would only be a qualitative difference if the events <span class="math-container">$x_i$</span> didn't cover the whole support (though I could just choose an <span class="math-container">$X$</span> which does).</p> <p>My questions boil down to the following:</p> <ol> <li><p>Where is the random variable necessary to take the expected value which is used to define the cross entropy <span class="math-container">$H(P,Q)=-\text{E}_{P}[\log Q]$</span></p> </li> <li><p>If I am correct in my assumption that one needs to choose a random variable <span class="math-container">$X$</span> to compute the cross entropy, is the notation I used for (1) free of ambiguities.</p> </li> </ol>
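Once a random variable with finite support is fixed, definition (1) is straightforward to compute. Here is a minimal Python sketch using natural logarithms; the distributions <code>p</code> and <code>q</code> are made-up examples, not taken from the question:

```python
import math

def cross_entropy(p, q):
    # H(P, Q) = -sum_x p(x) * log q(x), over a shared finite support
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # distribution P of X (example values)
q = [0.9, 0.1]   # distribution Q over the same support (example values)

h_pq = cross_entropy(p, q)   # cross entropy H(P, Q)
h_pp = cross_entropy(p, p)   # ordinary entropy H(P)

# Gibbs' inequality: H(P, Q) >= H(P), with equality iff P = Q
assert h_pq >= h_pp
```

Note that the same support is used for both distributions, matching the discrete formula quoted above.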
Somos
438,089
<p>The question asks how to prove a polynomial <span class="math-container">$\,f(n)\,$</span> takes only integer values for any integer <span class="math-container">$\,n\,.$</span> If the polynomial is of degree <span class="math-container">$0$</span>, then it is a constant and if that constant is an integer we are done. If the polynomial is of degree <span class="math-container">$1$</span>, then if any two consecutive values, <span class="math-container">$\,f(k), f(k+1)\,$</span> are integers, then it is an arithmetic progression and we are done. In <strong>general</strong>, for any degree <span class="math-container">$\,d\,$</span> polynomial <span class="math-container">$\,f(n),\,$</span> it is sufficient to verify that the <span class="math-container">$\,d+1\,$</span> consecutive values <span class="math-container">$$f(k),f(k+1), \dots,f(k+d) $$</span> are all integers for some integer <span class="math-container">$\,k,\,$</span> which implies that the polynomial takes on only integer values for all integer <span class="math-container">$\,n.\,$</span></p> <p>A proof of this can be done by using a <strong>difference table</strong> to find an explicit expression for the polynomial as a sum of binomial coefficients. 
For example, using the particular polynomial in the question, <span class="math-container">$$ f(n) := \frac{1}{5}n^5+\frac{1}{3}n^3+\frac{7}{15}n $$</span> the forward difference table is:</p> <p><span class="math-container">$$ \begin{matrix} \Delta^6f(n) &amp;&amp;&amp;&amp;&amp;&amp;&amp; 0 \\ \Delta^5f(n) &amp;&amp;&amp;&amp;&amp;&amp; 24 &amp;&amp; 24 \\ \Delta^4f(n) &amp;&amp;&amp;&amp;&amp; 48 &amp;&amp; 72 &amp;&amp; 96 \\ \Delta^3f(n) &amp;&amp;&amp;&amp; 32 &amp;&amp; 80 &amp;&amp; 152 &amp;&amp; 248 \\ \Delta^2f(n) &amp;&amp;&amp; 8 &amp;&amp; 40 &amp;&amp; 120 &amp;&amp; 272 &amp;&amp; 520\\ \Delta f(n) &amp;&amp; 1 &amp;&amp; 9 &amp;&amp; 49 &amp;&amp; 169 &amp;&amp; 441 &amp;&amp; 961 \\ f(n) &amp; 0 &amp;&amp; 1 &amp;&amp; 10 &amp;&amp; 59 &amp;&amp; 228 &amp;&amp; 669 &amp;&amp; 1630 \\ n &amp; 0 &amp;&amp; 1 &amp;&amp; 2 &amp;&amp; 3 &amp;&amp; 4 &amp;&amp; 5 &amp;&amp; 6 \end{matrix} $$</span> where the 5th differences are constant as expected for a 5th degree polynomial. Thus, <span class="math-container">$$ f(n) = 0{n \choose 0} +1{n \choose 1} + 8{n \choose 2} + 32{n \choose 3} + 48{n \choose 4} + 24{n \choose 5}$$</span> where the coefficient of <span class="math-container">$\,{n\choose k}=\Delta^kf(0),\,$</span> the <span class="math-container">$k$</span>th difference of <span class="math-container">$\,f\,$</span> at zero. The Wikipedia article <a href="https://en.wikipedia.org/wiki/Finite_difference" rel="nofollow noreferrer">finite difference</a> explains some of this theory. The forward differences of <span class="math-container">$\,f\,$</span> are <span class="math-container">$$ \Delta f(n) := f(n+1)-f(n), \:\: \Delta^2 f(n) := \Delta f(n+1) - \Delta f(n), \:\: \dots. 
$$</span></p> <p>In this <strong>particular</strong> case, the 1st backward difference is the <a href="https://oeis.org/A058031" rel="nofollow noreferrer">OEIS sequence A058031</a> <span class="math-container">$$ a_n := (n^2-n+1)^2 = \nabla f(n) := f(n)-f(n-1).$$</span> Thus the polynomial is the partial sums of an integer sequence and therefore is an integer sequence itself.</p>
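The difference-table computation above can be reproduced mechanically. The following Python sketch uses exact rational arithmetic to recover the Newton coefficients $\Delta^k f(0)$ and confirm integrality:

```python
from fractions import Fraction as F

def f(n):
    # f(n) = n^5/5 + n^3/3 + 7n/15, kept exact with rational arithmetic
    n = F(n)
    return n**5 / 5 + n**3 / 3 + 7 * n / 15

# Build the forward-difference table from the values f(0), ..., f(6).
row = [f(n) for n in range(7)]
newton_coeffs = []            # collects Delta^k f(0) for k = 0, ..., 5
for k in range(6):
    newton_coeffs.append(row[0])
    row = [b - a for a, b in zip(row, row[1:])]

# Coefficients of f(n) = sum_k Delta^k f(0) * C(n, k), as in the answer:
assert newton_coeffs == [0, 1, 8, 32, 48, 24]

# Since all Newton coefficients are integers, f(n) is an integer for every n.
assert all(f(n).denominator == 1 for n in range(-10, 11))
```

The integer Newton coefficients are exactly what guarantees integer values at every integer argument, positive or negative.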
1,138,212
<p>I am given $f(x) = 1 + x - \frac{sin(x)}{(x e^x)} $ and am asked to solve this for when x ≃ 0.</p> <p>I'm doing the following steps but am getting stuck halfway through:</p> <p>$$f(x) = 1 + x - \frac {x - \frac{x^3}{6} + \frac{x^5}{120}}{xe^x} $$</p> <p>$$= 1 + x - \frac{e^{-x} (x - \frac{x^3}{6} + \frac{x^5}{120})}{x} $$</p> <p>$$= 1 + x - \frac {(1 - x + \frac{x^2}{2} - \frac{x^3}{3!} + \frac{x^4}{4!}) (x - \frac{x^3}{6} + \frac{x^5}{120})}{x} $$</p> <p>At this point however I'm not really sure what I should be doing next. I figure that multiplying the numerator part is not necessary, so I don't see what else to do. Could someone provide me with a hint on how to finish this?</p>
Emilio Novati
187,568
<p>Since $$ \lim_{x\rightarrow 0}\dfrac{\sin x}{x}=1 $$ you have $$ \lim_{x\rightarrow 0}\dfrac{\sin x}{xe^x}=1 $$ and $$ \lim_{x\rightarrow 0}\left(1+x-\dfrac{\sin x}{xe^x}\right)=1+0-1=0 $$ so, for every $\epsilon&gt;0$ you have $|f(x)|&lt;\epsilon$ whenever $x$ is close enough to $0$. </p>
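As a quick numerical sanity check of this limit (a sketch only, not a proof; the Taylor expansion gives $f(x)=2x+O(x^2)$ near $0$):

```python
import math

def f(x):
    # f(x) = 1 + x - sin(x) / (x * e^x), defined for x != 0
    return 1 + x - math.sin(x) / (x * math.exp(x))

# f(x) -> 0 as x -> 0, at the rate of roughly 2x
for x in (1e-2, 1e-4, 1e-6):
    assert abs(f(x)) < 3 * abs(x)
```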
1,290,176
<p>Can anybody help me with this limit? I think the answer should be $0$ as $0$ to the power $1$ should be $0$ but it doesn't match with the book's answer.</p> <p>$$ \lim_{x\to 0} |x|^{\lfloor\cos{x}\rfloor}$$</p>
Wolfgang Brehm
223,307
<p>$$ \lim_{x\rightarrow0}|x|^{\lfloor\cos{x}\rfloor}\\ \lim_{x\rightarrow0}\lfloor\cos{x}\rfloor=0\\ y^0 = 1 $$ The floor function makes this a special case: for values of $x \neq 0$ near $0$ we have $\lfloor\cos x\rfloor = 0$, so the function equals $|x|^0 = 1$ and the limit is $1$; but at $x = 0$ we have $\lfloor\cos 0\rfloor = 1$, so the function takes the value $0^1 = 0$. This means that the limit is $1$ while the value of the function at $x=0$ is $0$, as there is a jump in your function.</p>
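The jump at the origin is easy to see numerically (a small sketch; the values of $x$ are arbitrary small samples):

```python
import math

def g(x):
    # |x| raised to the power floor(cos x)
    return abs(x) ** math.floor(math.cos(x))

# For small x != 0, floor(cos x) = 0, so g(x) = |x|^0 = 1 ...
for x in (1e-3, -1e-3, 1e-6):
    assert g(x) == 1.0

# ... but at x = 0, cos 0 = 1 and g(0) = 0^1 = 0: the limit is 1, the value is 0.
assert g(0) == 0.0
```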
1,563,518
<p>Give an example of a natural number $n &gt; 1$ and a polynomial $f(x) ∈ \Bbb Z_n[x]$ of degree $&gt; 0$ that is a unit in $\Bbb Z_n[x]$.</p> <p>I am trying to understand how units work in polynomial rings. My book doesn't really define it and I need a bit of help with this.</p>
Alekos Robotis
252,284
<p>Your answer is completely correct. More generally, the solution is $45+77k: k\in \mathbb{Z}$ as stated above, because $$ 45+77k\equiv 45\mod 77.$$ If you're concerned about getting the negative answer first, it is simple to just add $77$ as you did to find the first positive value.</p>
1,563,518
<p>Give an example of a natural number $n &gt; 1$ and a polynomial $f(x) ∈ \Bbb Z_n[x]$ of degree $&gt; 0$ that is a unit in $\Bbb Z_n[x]$.</p> <p>I am trying to understand how units work in polynomial rings. My book doesn't really define it and I need a bit of help with this.</p>
ale
270,082
<p>You must find the inverse of $12$ in $\mathbb{Z}/77\mathbb{Z}$, and for that you can use Euclid's algorithm. That is a general way to find the solutions.</p>
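For instance, the extended Euclidean algorithm gives the inverse directly; a small Python sketch (the built-in three-argument <code>pow</code> with exponent <code>-1</code> does the same thing in Python 3.8+):

```python
def egcd(a, b):
    # Extended Euclid: returns (g, s, t) with g = gcd(a, b) = s*a + t*b
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, _ = egcd(12, 77)
assert g == 1                 # 12 is invertible mod 77 since gcd(12, 77) = 1
assert s % 77 == 45           # the inverse of 12 modulo 77 is 45
assert (12 * 45) % 77 == 1
assert pow(12, -1, 77) == 45  # same result from the built-in (Python 3.8+)
```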
194,671
<p>I'm searching for two symbols - considering they exist - (1) unknown value; (2) unknown probability.</p> <p><strong>Note</strong>: I thought that $x$ was used in a temporary context, whenever I see it, it remains unknown until an evaluation is made. I was thinking in a "unknown and impossible to be known" context. I'm not sure if this context exists or if $x$ also express it.</p>
kjetil b halvorsen
32,967
<p>This arises often in statistics. For example, the statistical programming language R has two special values: NaN (mentioned in another answer) and NA.</p> <ul> <li>NaN, "Not a Number", can be the result of 0/0 and other illegal operations.</li> <li>NA, "Not Available", is used to represent "logically has a value, but we do not know the value".</li> </ul> <p>A typical use case is data from a questionnaire, where some respondent didn't answer one specific question. If he/she did not answer a question of Gender: woman () man (), that does not mean (s)he does not have a gender! Or it might be data from some agriculture experiment, where some plot was destroyed by a tractor. Logically, that plot should have some value (kgs harvested), but that year we didn't get to measure it, since it was destroyed. So the value is NA. It would be wrong to use the value zero, even if that was the actual quantity harvested, because that would inform us only about an accident irrelevant to the research design.</p> <p>NA respects a kind of three-valued logic: for instance, NA or TRUE = TRUE, while NA and TRUE is NA, since we do not have the information to answer the question! Note also the weird logic of equality: NA == NA has the value NA. For arithmetical comparisons we have 4 &lt; NA is NA (the NA we compare to 4 might be 6, in which case we have TRUE, or it might be 2, in which case it is FALSE; so we do not know, that is, the value is NA).</p>
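This NA propagation can be mimicked in Python with <code>None</code> standing in for NA; the sketch below is a rough rendering of Kleene's three-valued logic, not R's actual implementation:

```python
NA = None  # stand-in for R's NA: "a value exists, but it is unknown"

def na_or(a, b):
    # TRUE wins regardless of the unknown; otherwise NA propagates
    if a is True or b is True:
        return True
    if a is NA or b is NA:
        return NA
    return a or b

def na_and(a, b):
    # FALSE wins regardless of the unknown; otherwise NA propagates
    if a is False or b is False:
        return False
    if a is NA or b is NA:
        return NA
    return a and b

assert na_or(NA, True) is True     # NA | TRUE  is TRUE
assert na_and(NA, True) is NA      # NA & TRUE  is NA
assert na_and(NA, False) is False  # NA & FALSE is FALSE
```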
2,818,427
<p>Let $f \in \mathrm{End} (\mathbb{C^2})$ be defined by its image on the standard basis $(e_1,e_2)$: </p> <p>$f(e_1)=e_1+e_2$</p> <p>$f(e_2)=e_2-e_1$</p> <p>I want to determine all eigenvalues of f and the bases of the associated eigenspaces.</p> <p>First of all how does the transformation matrix of $f$ look like? Is it </p> <p>$\begin{pmatrix}1 &amp;-1 \\1 &amp;1 \end{pmatrix}$?</p>
Dylan
135,643
<p>The method is very simple. Start with the general form of a homogeneous, second-order ODE</p> <p>$$ y'' + a(x)y' + b(x)y = 0 $$</p> <p>You know the two solutions, so you can plug them into the equation to get</p> <p>\begin{align} 2 + 2xa(x) + x^2b(x) &amp;= 0 \\ e^{-x} - e^{-x}a(x) + e^{-x}b(x) &amp;= 0 \end{align}</p> <p>This is a system of equations in $a(x)$ and $b(x)$. You can easily solve this algebraically to get</p> <p>$$ a(x) = \frac{x^2-2}{x^2+2x}, \quad b(x) = -\frac{2x+2}{x^2+2x} $$</p> <p>Note that the coefficients are <em>non-constant</em> here, so the characteristic equation will not help.</p>
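The derived coefficients can be sanity-checked numerically. This sketch assumes the two given solutions were $y_1=x^2$ and $y_2=e^{-x}$, which is what the substituted equations above suggest:

```python
import math

# Coefficients derived above
def a(x):
    return (x**2 - 2) / (x**2 + 2 * x)

def b(x):
    return -(2 * x + 2) / (x**2 + 2 * x)

def residual(y, dy, d2y, x):
    # y'' + a(x) y' + b(x) y, which should vanish for a solution of the ODE
    return d2y(x) + a(x) * dy(x) + b(x) * y(x)

for x in (0.5, 1.0, 3.7):
    # y1 = x^2:  y1' = 2x, y1'' = 2
    assert abs(residual(lambda t: t**2, lambda t: 2 * t, lambda t: 2.0, x)) < 1e-12
    # y2 = e^{-x}:  y2' = -e^{-x}, y2'' = e^{-x}
    assert abs(residual(lambda t: math.exp(-t), lambda t: -math.exp(-t),
                        lambda t: math.exp(-t), x)) < 1e-12
```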
178,302
<p>Assume that $H$ is a separable Hilbert space. Is there a polynomial $p(x)\in \mathbb{C}[x]$ with $\deg(p)&gt;1$ with the following property?</p> <p>Every densely defined operator $A:D(A)\to D(A),\;D(A)\subset H$ with $p(A)=0$ is necessarily a bounded operator on $H$.</p> <p>That is, the polynomial-operator equation $p(A)=0$ has only bounded solutions.</p>
user52733
56,229
<p>I do not think so. </p> <p><strong>Observation:</strong> Without loss of generality, $p(x)$ can be taken to be monic (constant multiples won't affect either $p(A) = 0$ or boundedness). </p> <p><strong>Case 1:</strong> $p$ is degree $2$.</p> <p>By the above reduction, $p(x) = (x - \lambda)(x - \mu)$ for some $\lambda$ and $\mu$ in $\mathbb{C}$ (since $A$ clearly commutes with itself and with $I$, and since $A$ maps $D(A)$ to itself, this decomposition is reasonable). Yet if $p(A) = 0$, take the operator $\displaystyle B = A - \frac{\lambda + \mu}{2} I$ (with the same domain), and we see that for $\displaystyle \nu = \frac{\lambda - \mu}{2}$, $B$ satisfies $(B + \nu)(B - \nu) = 0$, or $B^2 - \nu^2 = 0$. We will show that an unbounded choice of $B$ exists, satisfying $B: D(B) \to D(B)$, hence an unbounded choice of $A$ exists, with $A: D(A) \to D(A)$. </p> <p><em>Subcase 1:</em> $\nu = 0$. Then take $H = \ell^2(\mathbb{N})$, let $H_0 = D(B)$ be the sequences with only finitely many nonzero elements, and let $B$ be the operator represented by the infinite matrix </p> <p>$$ \begin{pmatrix} 0 &amp; 1 &amp; &amp; &amp; &amp; &amp; \cdots \\ 0 &amp; 0 &amp; &amp; &amp; &amp; &amp; \cdots \\ &amp; &amp; 0 &amp; 2 &amp; &amp; &amp; \cdots \\ &amp; &amp; 0 &amp; 0 &amp; &amp; &amp; \cdots \\ &amp; &amp; &amp; &amp; 0 &amp; 3 &amp; \cdots \\ &amp; &amp; &amp; &amp; 0 &amp; 0 &amp; \ddots \\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \ddots \end{pmatrix},$$ which is clearly well-defined on $H_0$ and clearly maps $H_0$ to itself. Then $B^2 = 0$, but letting $e_j$ be the $j$th basis vector, $B e_{2j} = j e_{2j - 1}$, so clearly $B$ is unbounded.</p> <p><em>Subcase 2:</em> $\nu \neq 0$. Then again take $H = \ell^2(\mathbb{N})$, and $H_0$ the almost-everywhere-$0$ sequences. 
We now define $B$ by the matrix </p> <p>$$ \begin{pmatrix} 0 &amp; \nu &amp; &amp; &amp; &amp; &amp; \cdots \\ \nu &amp; 0 &amp; &amp; &amp; &amp; &amp; \cdots \\ &amp; &amp; 0 &amp; 2\nu &amp; &amp; &amp; \cdots \\ &amp; &amp; \frac{1}{2}\nu &amp; 0 &amp; &amp; &amp; \cdots \\ &amp; &amp; &amp; &amp; 0 &amp; 3\nu &amp; \cdots \\ &amp; &amp; &amp; &amp; \frac{1}{3}\nu &amp; 0 &amp; \ddots \\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \ddots \end{pmatrix}.$$</p> <p>(Both entries in each block carry the same sign, so that each block squares to $+\nu^2$.) Again, $B$ maps $H_0$ to itself, and $B^2 e_j = \nu^2 e_j$ for all $j$, so $B^2 = \nu^2$ on $H_0$. Yet $Be_{2j} = j\nu\, e_{2j - 1}$, so $B$ is unbounded.</p> <p><strong>Case 2:</strong> $\deg p &gt; 2$.</p> <p>Well, then $p(x) = q_1(x) q_2(x)$, with $\deg q_1 = 2$, $\deg q_2 \geq 1$. Again take $H$ and $H_0$ as above, and take $A$ to be an unbounded solution to $q_1(A) = 0$, satisfying $A: D(A) \to D(A)$. Then $p(A) = q_1(A) q_2(A) = 0 q_2(A) = 0$, and $A$ is unbounded. [Again, my naive factoring really depends on $D(A) \subseteq D(A^n)$, hence I am strongly using the $A: D(A) \to D(A)$ fact here.] QED.</p> <p>Of course, Case 2 is sort of a cheat. 
I think there should be a "natural" example in general, since you can construct unbounded examples to $A^n = 0$ by just increasing the order of the nilpotency, and $A^n = I$ by taking positive real numbers $a_1, \dotsc, a_n$ with $a_1 a_2 \dotsc a_n = 1$ and letting the building-block matrix be</p> <p>$$ \begin{pmatrix} 0 &amp; a_1 &amp; &amp; &amp; \cdots &amp; &amp; \\ &amp; 0 &amp; a_2 &amp; &amp; \cdots &amp; &amp; \\ &amp; &amp; 0 &amp; a_3 &amp; \cdots &amp; &amp; \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \ddots &amp; &amp;\\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \ddots &amp; \\ &amp; &amp; &amp; &amp; &amp; 0 &amp; a_{n-1} \\ a_n &amp; 0 &amp; 0 &amp; &amp; \dotsc &amp; 0 &amp; 0 \end{pmatrix} $$ Then again make a block-diagonal infinite matrix such that as we repeat the blocks, $a_3, \dotsc, a_n$ are constant, and $a_1 \to \infty$ and $a_2 \to 0$ (or somesuch). </p>
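The building-block algebra in Subcase 2 is easy to verify exactly: a $2\times2$ block with off-diagonal entries $k\nu$ and $\nu/k$ of the same sign squares to $\nu^2 I$ (with opposite signs it would square to $-\nu^2 I$ instead). A small exact check in Python, as a finite-dimensional illustration only:

```python
from fractions import Fraction as F

def matmul2(m, n):
    # product of two 2x2 matrices
    return [
        [m[0][0] * n[0][0] + m[0][1] * n[1][0], m[0][0] * n[0][1] + m[0][1] * n[1][1]],
        [m[1][0] * n[0][0] + m[1][1] * n[1][0], m[1][0] * n[0][1] + m[1][1] * n[1][1]],
    ]

nu = F(3)  # any nonzero value works; a rational keeps the check exact
for k in range(1, 6):
    block = [[F(0), k * nu], [nu / k, F(0)]]   # entries k*nu and nu/k, same sign
    assert matmul2(block, block) == [[nu**2, F(0)], [F(0), nu**2]]
```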
232,424
<p>Are there any claims and counterclaims to mathematics being in some certain cases a result of common sense thinking? Or can some mathematical results be figured out using just pure common sense i.e. no mathematical methods? </p> <p>I'd also appreciate any mentions relating to sciences, social sciences or ordinary life.</p>
glebovg
36,367
<p>Many basic theorems can be proven using common sense, not to mention that almost all axioms in mathematics, except for the axioms of set theory, are based on common sense. According to <a href="http://mathworld.wolfram.com/Axiom.html" rel="nofollow">MathWorld</a>, an axiom is a proposition regarded as self-evidently true without proof, which is just another way of saying it is based on common sense. The reason why I mentioned set theory is that common sense leads to numerous paradoxes in naive set theory, hence the name. In general, common sense does help, for example, in understanding what a limit or a continuous function is, or why a certain theorem is true, but it is not enough. Everything in mathematics must be rigorous and every word is important. If you study or read books about epistemology and metaphysics, you should realize that it is very difficult to define common sense, so I think your question is very hard to answer indeed.</p>
38,193
<p>For simplicity, let me pick a particular instance of Gödel's Second Incompleteness Theorem:</p> <p>ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that ZFC is consistent.</p> <p>(Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms, but this is not the issue here.)</p> <p>This theorem has been interpreted by many as saying &quot;we can never know whether mathematics is consistent&quot; and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematicians who think about foundations) is that we believe that there is no problem with ZFC, we just can't prove the consistency of it.</p> <p>A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this:</p> <p>(*) &quot;What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well.&quot;</p> <p>In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness. Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance).</p> <p>Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence (in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like &quot;the existence of an inaccessible cardinal is not provable in ZFC&quot;? 
In other words, does this reduce the Second Incompleteness Theorem to a mere technicality without any philosophical implication that goes beyond the First Incompleteness Theorem (which states that there is some sentence <span class="math-container">$\phi$</span> such that neither <span class="math-container">$\phi$</span> nor <span class="math-container">$\neg\phi$</span> follow from ZFC)?</p>
Kaveh
7,507
<p>The answer is the following observation due to Hilbert: </p> <blockquote> <p>If we can prove the consistency of $ZFC$ using <em>elementary</em> methods, then any <em>elementary theorem</em> of $ZFC$ has an <em>elementary proof</em>, i.e. we don't need <em>ideal/abstract objects</em> like sets or real number for dealing with concrete/finite objects like numbers. </p> </blockquote> <p>The importance of Godel's theorems is not that $ZFC$ can't prove its own consistency but rather the weaker result that elementary methods (assuming that listing these methods is easy, i.e. recursively enumerable) cannot prove all elementary results, in other words, we need abstract objects even for doing elementary number theory. Hilbert wanted to show that although abstract objects are helpful for elementary mathematics in practice, they are not essential and can be avoided (at least in theory) if needed. But Godel's <strong>first incompleteness theorem</strong> already shows that this is not true. (Here elementary can arguably be identified with unbounded-quantifier-free formulas or $\Pi_1$ sentences.)</p>
38,193
<p>For simplicity, let me pick a particular instance of Gödel's Second Incompleteness Theorem:</p> <p>ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that ZFC is consistent.</p> <p>(Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms, but this is not the issue here.)</p> <p>This theorem has been interpreted by many as saying &quot;we can never know whether mathematics is consistent&quot; and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematicians who think about foundations) is that we believe that there is no problem with ZFC, we just can't prove the consistency of it.</p> <p>A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this:</p> <p>(*) &quot;What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well.&quot;</p> <p>In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness. Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance).</p> <p>Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence (in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like &quot;the existence of an inaccessible cardinal is not provable in ZFC&quot;? 
In other words, does this reduce the Second Incompleteness Theorem to a mere technicality without any philosophical implication that goes beyond the First Incompleteness Theorem (which states that there is some sentence <span class="math-container">$\phi$</span> such that neither <span class="math-container">$\phi$</span> nor <span class="math-container">$\neg\phi$</span> follow from ZFC)?</p>
Carl Mummert
5,442
<p>The fact that the second incompleteness theorem refers to consistency is important for several applications, both philosophical and mathematical. </p> <p>Philosophically, the second incompleteness theorem is what lets us know that we cannot, in general, prove the existence of a (set) model of ZFC within ZFC itself. This is a fundamental obstruction to naive methods of proving relative consistency results. We cannot show, for example, that the continuum hypothesis is unprovable in ZFC by constructing a set model of ZFC where CH fails <em>using methods that themselves can be formalized in ZFC</em>. Philosophically, this says we should not be surprised that the relative consistency results that we do have require methods that cannot be formalized within ZFC. </p> <p>Second, there are some theorems (perhaps less well known) that leverage the second incompleteness theorem to prove the existence of special kinds of models. These are mathematical results, not philosophical ones. </p> <p><strong>Theorem</strong> (Harvey Friedman). Let $S$ be an effective theory of second-order arithmetic that contains the theory ACA<sub>0</sub>. If there is a countable $\omega$-model of $S$, then there is a countable $\omega$-model of $S$ + "there is no countable $\omega$-model of $S$."</p> <p>The proof proceeds by showing that, if the conclusion fails, a certain effective theory obtained from $S$ is consistent and proves its own consistency. The type of model constructed by the theorem is useful for proving that certain systems of second-order arithmetic are not the same. </p>
38,193
<p>For simplicity, let me pick a particular instance of Gödel's Second Incompleteness Theorem:</p> <p>ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that ZFC is consistent.</p> <p>(Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms, but this is not the issue here.)</p> <p>This theorem has been interpreted by many as saying &quot;we can never know whether mathematics is consistent&quot; and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematicians who think about foundations) is that we believe that there is no problem with ZFC, we just can't prove the consistency of it.</p> <p>A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this:</p> <p>(*) &quot;What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well.&quot;</p> <p>In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness. Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance).</p> <p>Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence (in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like &quot;the existence of an inaccessible cardinal is not provable in ZFC&quot;? 
In other words, does this reduce the Second Incompleteness Theorem to a mere technicality without any philosophical implication that goes beyond the First Incompleteness Theorem (which states that there is some sentence <span class="math-container">$\phi$</span> such that neither <span class="math-container">$\phi$</span> nor <span class="math-container">$\neg\phi$</span> follow from ZFC)?</p>
user8248
8,248
<p>John H. Conway proves and discusses the incompleteness theorem in his badass Wolf Prize lectures: <a href="http://www.math.princeton.edu/facultypapers/Conway/" rel="nofollow">http://www.math.princeton.edu/facultypapers/Conway/</a> Anyone who hasn't seen these talks is missing out. </p>
38,193
<p>For simplicity, let me pick a particular instance of Gödel's Second Incompleteness Theorem:</p> <p>ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that ZFC is consistent.</p> <p>(Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms, but this is not the issue here.)</p> <p>This theorem has been interpreted by many as saying &quot;we can never know whether mathematics is consistent&quot; and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematicians who think about foundations) is that we believe that there is no problem with ZFC, we just can't prove the consistency of it.</p> <p>A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this:</p> <p>(*) &quot;What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well.&quot;</p> <p>In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness. Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance).</p> <p>Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence (in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like &quot;the existence of an inaccessible cardinal is not provable in ZFC&quot;? 
In other words, does this reduce the Second Incompleteness Theorem to a mere technicality without any philosophical implication that goes beyond the First Incompleteness Theorem (which states that there is some sentence <span class="math-container">$\phi$</span> such that neither <span class="math-container">$\phi$</span> nor <span class="math-container">$\neg\phi$</span> follow from ZFC)?</p>
nickname
5,462
<p>There is another nice consequence of the Gödel first incompleteness theorem. Indeed, by proving that there exists an undecidable sentence, the theorem is offering a formal proof of the consistency of ZFC (if it were not consistent, then it would prove anything). The only problem is that it is doing so <em>inside</em> ZFC, so the proof is not really worth anything, because it would go through just as well if ZFC were inconsistent.</p> <p>I think this is also related to your sentence "a mainstream opinion in mathematics ... is that we believe that there is no problem with ZFC".</p>
4,253,640
<p>For example: suppose we need to find <strong>x</strong> given that <strong>x mod 7 = 5</strong> and <strong>x mod 13 = 8</strong>.</p> <p><strong>x = 47</strong> is a solution but needs hit and trial.</p> <p>Is there any shortcut to calculate such number?</p>
Roddy MacPhee
903,195
<p>There exist many ways to do this:</p> <ul> <li>Sequentially try values of <span class="math-container">$x$</span> (basic, 47 guesses maximum)</li> <li>Use the lowest congruence and work up in sequence (meh, 13 guesses maximum)</li> <li>Use the highest congruence and work up in sequence (okay, 7 guesses maximum)</li> <li>Note that if <span class="math-container">$x$</span> is odd, the first congruence can only work at even multiples of <span class="math-container">$7$</span>, and the second only at odd multiples of <span class="math-container">$13$</span> (cool)</li> <li>Use the Chinese remainder theorem or an extension of it (typical)</li> </ul>
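The Chinese remainder theorem route can be sketched for the original pair $x\equiv5\pmod 7$, $x\equiv8\pmod{13}$ as follows; this is a minimal two-modulus version that assumes the moduli are coprime (the three-argument <code>pow</code> needs Python 3.8+):

```python
def crt(r1, m1, r2, m2):
    # Solve x = r1 (mod m1), x = r2 (mod m2) for coprime m1, m2.
    # Write x = r1 + m1*k and solve m1*k = r2 - r1 (mod m2).
    k = (pow(m1, -1, m2) * (r2 - r1)) % m2
    return (r1 + m1 * k) % (m1 * m2)

x = crt(5, 7, 8, 13)
assert x == 47                      # the value found by trial in the question
assert x % 7 == 5 and x % 13 == 8   # and it satisfies both congruences
```

All solutions are then $47 + 91k$ for integer $k$, since $7\cdot13=91$.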
1,216,983
<p>Let $f(x)$ be a polynomial with complex coefficients such that $\exists n_0 \in \mathbb Z^+$ such that $f(n) \in \mathbb Z , \forall n \ge n_0$, then is it true that $f(n) \in \mathbb Z , \forall n \in \mathbb Z$ ?</p>
Clément Guérin
224,918
<p>Interesting question, but I think only constant sequences verify this. Take $(a_n)$ a sequence verifying a recurrence equation:</p> <p>$$a_n=\sum_{k=2}^r\lambda_ka_{\lfloor \frac{n}{k} \rfloor}$$</p> <p>Then, evaluating it for $n=0$ we get:</p> <p>$$a_0=\sum_{k=2}^r\lambda_ka_0 $$</p> <p>That is, either $a_0=0$ or:</p> <p>$$\sum_{k=2}^r\lambda_k=1 $$</p> <p>First, if $a_0=0$ then $a_1=\sum_{k=2}^r\lambda_k a_0=0$, and with a trivial induction you have that $a_n=0$ for all $n$.</p> <p>Suppose now that $a_0\neq 0$; then we will show by induction that $a_n=a_0$. The case $n=0$ is trivial. Assume that $a_l=a_0$ for $0\leq l&lt;n$; then for all $k\geq 2$:</p> <p>$$\lfloor \frac{n}{k} \rfloor &lt;n$$</p> <p>So by the induction hypothesis:</p> <p>$$a_{\lfloor \frac{n}{k} \rfloor}=a_0 $$</p> <p>Finally:</p> <p>$$a_n=\sum_{k=2}^r\lambda_ka_{\lfloor \frac{n}{k} \rfloor}=\sum_{k=2}^r\lambda_ka_0=a_0$$</p> <p>If we put all the pieces together, this shows that any function verifying a recurrence equation involving such a floor-division must be constant.</p>
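The conclusion is easy to test numerically. Here is a sketch with a made-up choice of weights $\lambda_2=\lambda_3=\tfrac12$ (which sum to $1$) and a memoized recursion:

```python
from fractions import Fraction as F
from functools import lru_cache

# Example weights with lambda_2 + lambda_3 = 1 (chosen only for illustration)
lambdas = {2: F(1, 2), 3: F(1, 2)}
a0 = F(7)  # any starting value

@lru_cache(maxsize=None)
def a(n):
    if n == 0:
        return a0
    return sum(lam * a(n // k) for k, lam in lambdas.items())

# Every term equals a_0, as the induction in the answer shows.
assert all(a(n) == a0 for n in range(200))
```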
3,325,658
<blockquote> <p>Count the number of 5 cards such that there's exactly 2 suits</p> </blockquote> <p>Suppose we draw five cards from a standard deck of 52 cards. I want to count the number of ways I can draw five cards such that the hand contains exactly 2 suits.</p> <p>Here's my intuition:<br/> There are two cases, one case involves a single card with a different suit from the other 4, and the other case involves two cards with the same suit, and the other three have a different suit.</p> <p>Case 1: <span class="math-container">${4 \choose 1} {13 \choose 1} \cdot {3 \choose 1} {13 \choose 4}$</span></p> <p>Case 2: <span class="math-container">${4 \choose 1} {13 \choose 2} \cdot {3 \choose 1} {13 \choose 3}$</span></p> <p>Am I correct with the cases? I'm confused on the coefficients for choosing the suits for the first group of cards since it will limit the number of suits to choose for the next group of cards. Should I multiply each case by <span class="math-container">$2$</span> ?</p>
RobPratt
683,666
<p>Yes, this is correct without multiplying by 2. You can check by computing a different way. Choose two suits, choose all five cards from these two suits, and subtract the ways that yield only one suit: <span class="math-container">$$\binom{4}{2}\left(\binom{26}{5}-\binom{2}{1}\binom{13}{5}\right)=379236$$</span></p>
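Both routes to the count can be cross-checked in a few lines of Python (<code>math.comb</code> needs Python 3.8+):

```python
from math import comb

# Split 1 + 4: pick the suit of the lone card, then the other suit
case1 = comb(4, 1) * comb(13, 1) * comb(3, 1) * comb(13, 4)
# Split 2 + 3: pick the suit of the pair, then the other suit
case2 = comb(4, 1) * comb(13, 2) * comb(3, 1) * comb(13, 3)

# Independent check: choose 2 suits, take any 5 of their 26 cards,
# then discard the hands that use only one of the two suits
alt = comb(4, 2) * (comb(26, 5) - comb(2, 1) * comb(13, 5))

assert case1 + case2 == alt == 379236
```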
4,196,583
<p>More precisely:</p> <blockquote> <p><strong>Definition.</strong><br /> A subset <span class="math-container">$S \subset \Bbb R$</span> is called <em>good</em> if the following hold:</p> <ol> <li>if <span class="math-container">$x, y \in S$</span>, then <span class="math-container">$x + y \in S,$</span> and</li> <li>if <span class="math-container">$(x_n)_{n = 1}^\infty \subset S$</span> is a sequence in <span class="math-container">$S$</span> and <span class="math-container">$\sum_{n = 1}^\infty x_n$</span> converges, then <span class="math-container">$\sum_{n = 1}^\infty x_n \in S$</span>.</li> </ol> </blockquote> <p>In other words, a good subset is closed under finite sums and countable sums whenever the sum does exist.</p> <blockquote> <p><strong>Question:</strong> What are all the good subsets of <span class="math-container">$\Bbb R$</span>?</p> </blockquote> <hr /> <p><strong>Origin</strong></p> <p><a href="https://math.stackexchange.com/questions/4194005/">This question</a> was asked recently and <a href="https://math.stackexchange.com/users/152568/conifold">Conifold</a> had <a href="https://math.stackexchange.com/questions/4194005/does-1-frac14-frac19-frac116-cdots-frac-pi26-imply-that-the-rational#comment8699081_4194005">commented</a> how the only subsets of <span class="math-container">$\Bbb R$</span> closed under countable summations are <span class="math-container">$\varnothing$</span> and <span class="math-container">$\{0\}$</span>. 
It was then natural to ask &quot;closed under countable summation, assuming it exists&quot;.</p> <hr /> <p><strong>My thoughts</strong></p> <p>Here are some examples of familiar sets which are good: <span class="math-container">$\varnothing$</span>, <span class="math-container">$\{0\}$</span>, <span class="math-container">$\Bbb Z_{\geq 0}$</span>, <span class="math-container">$\Bbb Z_{&gt; 0}$</span>, <span class="math-container">$\Bbb Z$</span>, <span class="math-container">$n\Bbb Z$</span>, <span class="math-container">$\Bbb R$</span>.<br /> We even have the following:<br /> <span class="math-container">$$r \Bbb Z := \{rn : n \in \Bbb Z\},$$</span> where <span class="math-container">$r$</span> is any real number.</p> <p>But the examples apart from <span class="math-container">$\Bbb R$</span> are good for a trivial reason: Those are sets that are closed under finite summation and have the property that they are discrete enough so that the only convergent sums are those where the terms are eventually <span class="math-container">$0$</span>. (In the case of <span class="math-container">$\Bbb Z_{&gt; 0}$</span>, there is no such sum.)</p> <p>On the same note, intervals of the form <span class="math-container">$[a, \infty)$</span> and <span class="math-container">$(a, \infty)$</span> are good for <span class="math-container">$a &gt; 0$</span>.</p> <p>In general, suppose that <span class="math-container">$S$</span> satisfies the following: <span class="math-container">$S$</span> is closed under finite sums and there exists <span class="math-container">$\epsilon &gt; 0$</span> such that <span class="math-container">$|s| &gt; \epsilon$</span> for all <span class="math-container">$s \in S$</span>. 
Then, <span class="math-container">$S$</span> and <span class="math-container">$S \cup \{0\}$</span> are good.</p> <p>Another example: <span class="math-container">$[0, \infty)$</span> and <span class="math-container">$(0, \infty)$</span> are good and do not follow the criteria above.</p> <hr /> <p>The following are some examples of <em>not</em> good sets: <span class="math-container">$\Bbb Q$</span>, <span class="math-container">$\Bbb R \setminus \Bbb Q$</span>, <span class="math-container">$\Bbb R \setminus \Bbb Z$</span>, a proper cofinite subset of <span class="math-container">$\Bbb R$</span>, any bounded set apart from <span class="math-container">$\{0\}$</span> and <span class="math-container">$\varnothing$</span>. In fact, excluding <span class="math-container">$\Bbb Q$</span>, the other ones are not even closed under finite sums.<br /> In the same vein as <span class="math-container">$\Bbb Q$</span>, we also have the set of real algebraic numbers which is not good (but is indeed closed under finite sums).</p> <p>Here's a nontrivial one: Consider the set <span class="math-container">$$B = \left\{\frac{1}{2^k} : k \in \Bbb Z_{&gt; 0}\right\}.$$</span> Then, <em>any</em> countable subset of <span class="math-container">$\Bbb R$</span> that contains <span class="math-container">$B$</span> is not good.<br /> <em>Proof.</em> <span class="math-container">$(0, 1]$</span> is uncountable and every element in it can be written as a sum of elements of <span class="math-container">$B$</span>. (Binary expansions.) 
<span class="math-container">$\Box$</span></p> <p>A bit more thought actually shows that more is true: Since <span class="math-container">$(0, 1]$</span> is contained in the set of all possible sums, any good set containing <span class="math-container">$B$</span> must contain all of <span class="math-container">$(0, \infty)$</span>.</p> <p>Another one: let <span class="math-container">$(a_n)_{n \ge 1}$</span> be any real sequence such that <span class="math-container">$\sum a_n$</span> converges conditionally. Then, the only good subset containing <span class="math-container">$\{a_n\}_{n \ge 1}$</span> is <span class="math-container">$\Bbb R$</span>, by the Riemann rearrangement theorem.</p> <hr /> <p><strong>Additional comments</strong></p> <p>There are some variants that come to mind. Not sure if any of them are any more interesting. But I'd be happy with an answer that answers only one of the following variants as well.</p> <ol> <li>What if I exclude point 1. from my definition? Let's call such a set nice.<br /> In that case, the set <span class="math-container">$\{1\}$</span> is nice but not good. (Of course, if <span class="math-container">$0 \in S$</span>, then nice is equivalent to good.) What are the nice subsets of <span class="math-container">$\Bbb R$</span>? What are the nice subsets which are not good?</li> <li>What if I consider only those sums which converge absolutely?</li> </ol> <hr /> <p><strong>More observations</strong><br /> (These are edits, which I'm adding later)</p> <p>Here are some additional observations:</p> <ol> <li>Arbitrary intersection of good sets is good and <span class="math-container">$\Bbb R$</span> is good. 
Thus, it makes sense to talk about the smallest good set containing a given subset of <span class="math-container">$\Bbb R$</span>.<br /> So, given a subset <span class="math-container">$A \subset \Bbb R$</span>, let us call this smallest good set the <em>good set generated by <span class="math-container">$A$</span></em> and notationally denote it as <span class="math-container">$\langle A \rangle$</span>.<br /> (In particular, <span class="math-container">$A$</span> is good iff <span class="math-container">$A = \langle A \rangle$</span>.)</li> <li><span class="math-container">$A \subset B \implies \langle A \rangle \subset \langle B \rangle$</span>.</li> <li><span class="math-container">$\langle (0, \epsilon) \rangle = (0, \infty)$</span> and similar symmetric results.</li> <li>Suppose <span class="math-container">$S \subset (0, \infty)$</span> is dense in <span class="math-container">$(0, \infty)$</span>, then <span class="math-container">$\langle S \rangle = (0, \infty)$</span>.<br /> Indeed, pick <span class="math-container">$a_0 \in (0, \infty)$</span>.<br /> Then, there exists <span class="math-container">$s_1 \in (a_0/2, a_0)$</span>.<br /> Put <span class="math-container">$a_1 := a_0 - s_1$</span>. Then, <span class="math-container">$a_1 \in (0, a_0/2)$</span>.<br /> Now pick <span class="math-container">$s_2 \in (a_1/2, a_1)$</span> and so on. Then, <span class="math-container">$\sum s_n = a_0$</span>.<br /> Symmetric results apply. 
In particular, the only dense good subset of <span class="math-container">$\Bbb R$</span> (or <span class="math-container">$\Bbb R^+$</span> or <span class="math-container">$\Bbb R^-$</span>) is the whole set.</li> <li>Suppose <span class="math-container">$S \subset (0, \infty)$</span> contains arbitrarily small elements (i.e., <span class="math-container">$S \cap (0, \epsilon) \neq \varnothing$</span> for all <span class="math-container">$\epsilon &gt; 0$</span>), then <span class="math-container">$\langle S \rangle = (0, \infty)$</span>.<br /> To see this, let <span class="math-container">$a &gt; 0$</span> be arbitrary. Pick <span class="math-container">$s_1 \in (0, a)$</span>.<br /> Let <span class="math-container">$n_1$</span> be the largest positive integer such that <span class="math-container">$n_1 s_1 &lt; a$</span>.<br /> Then pick <span class="math-container">$s_2 \in (0, a - n_1s_1)$</span>. Let <span class="math-container">$n_2$</span> be the largest positive integer such that <span class="math-container">$n_2s_2 &lt; a - n_1s_1$</span> and so on.<br /> Then, <span class="math-container">$$\underbrace{s_1 + \cdots + s_1}_{n_1} + \underbrace{s_2 + \cdots + s_2}_{n_2} + \cdots = a.$$</span></li> </ol>
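The binary-expansion step behind the claim about $B = \{1/2^k\}$ can be illustrated numerically; this is an illustrative sketch of mine, not part of the original post. Greedily subtracting powers of $1/2$ writes a dyadic target in $(0, 1]$ as a finite sum of elements of $B$ (for a non-dyadic target the same greedy process simply never terminates, giving an infinite convergent sum):

```python
from fractions import Fraction

def greedy_halves(target, max_terms=60):
    """Greedily express `target` as a sum of distinct powers 1/2^k."""
    terms, remainder, k = [], Fraction(target), 1
    while remainder > 0 and len(terms) < max_terms:
        p = Fraction(1, 2 ** k)
        if p <= remainder:        # take 1/2^k whenever it still fits
            terms.append(p)
            remainder -= p
        k += 1
    return terms, remainder

terms, rem = greedy_halves(Fraction(11, 16))
print(terms, rem)  # [Fraction(1, 2), Fraction(1, 8), Fraction(1, 16)] 0
```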
Sangchul Lee
9,340
<p>Write <span class="math-container">$S^{\times} = S\setminus\{0\}$</span>. Then we will prove the following claim:</p> <blockquote> <p><strong>Claim.</strong> A subset <span class="math-container">$S$</span> of <span class="math-container">$\mathbb{R}$</span> is good if and only if <span class="math-container">$S$</span> satisfies one of the following options:</p> <ol> <li><span class="math-container">$S$</span> is one of <span class="math-container">$\varnothing$</span>, <span class="math-container">$\{0\}$</span>, <span class="math-container">$(0, \infty)$</span>, <span class="math-container">$[0, \infty)$</span>, <span class="math-container">$(-\infty, 0)$</span>, <span class="math-container">$(-\infty, 0]$</span>, or <span class="math-container">$\mathbb{R}$</span>.</li> <li><span class="math-container">$S^{\times}$</span> is a semigroup contained in <span class="math-container">$[r, \infty)$</span> for some <span class="math-container">$r &gt; 0$</span>.</li> <li><span class="math-container">$S^{\times}$</span> is a semigroup contained in <span class="math-container">$(-\infty, -r]$</span> for some <span class="math-container">$r &gt; 0$</span>.</li> <li><span class="math-container">$S = \alpha \mathbb{Z}$</span> for some <span class="math-container">$\alpha &gt; 0$</span>.</li> </ol> </blockquote> <p><em>Proof.</em> It is obvious that a set <span class="math-container">$S$</span> satisfying one of the above options is always good. For the opposite direction, let <span class="math-container">$S$</span> be a good subset of <span class="math-container">$\mathbb{R}$</span>. Then both <span class="math-container">$S^+ = S \cap (0, \infty)$</span> and <span class="math-container">$S^- = S \cap (-\infty, 0)$</span> are good, since they are intersections of good sets. 
Then we have the following trichotomy for <span class="math-container">$S^+$</span>:</p> <p><span class="math-container">\begin{equation*} S^+ = \varnothing, \quad\text{or}\quad S^+ = (0, \infty), \quad\text{or}\quad \inf S^+ &gt; 0, \end{equation*}</span></p> <p>and similarly for <span class="math-container">$S^-$</span>:</p> <p><span class="math-container">\begin{equation*} S^- = \varnothing, \quad\text{or}\quad S^- = (-\infty, 0), \quad\text{or}\quad \sup S^- &lt; 0. \end{equation*}</span></p> <p>Now let us examine all the nine possibilities:</p> <p><span class="math-container">$$ \begin{array}{c|ccc} &amp; S^+ = \varnothing &amp; S^+ = (0, \infty) &amp; \inf S^+ &gt; 0 \\ \hline S^- = \varnothing &amp; S^{\times} = \varnothing &amp; S^{\times} = (0, \infty) &amp; \text{Option 2} \\ S^- = (-\infty, 0) &amp; S^{\times} = (-\infty, 0) &amp; S = \mathbb{R} &amp; \text{impossible} \\ \sup S^- &lt; 0 &amp; \text{Option 3} &amp; \text{impossible} &amp; \text{Option 4} \\ \end{array} $$</span></p> <p>Here we only cover non-trivial cases:</p> <ul> <li><p>Suppose <span class="math-container">$S^+ = (0, \infty)$</span> and <span class="math-container">$\sup S^- &lt; 0$</span>. Then by picking <span class="math-container">$a &gt; 0$</span> so that <span class="math-container">$-a \in S^-$</span>, we find that <span class="math-container">$(-a, \infty) = (-a) + S^{+} \subseteq S$</span>. This contradicts <span class="math-container">$\sup S^- &lt; 0$</span>.</p> </li> <li><p>Suppose <span class="math-container">$\inf S^+ &gt; 0$</span> and <span class="math-container">$\sup S^- &lt; 0$</span>. In this case, we claim:</p> <blockquote> <p><strong>Claim.</strong> Whenever <span class="math-container">$a \in S^+$</span> and <span class="math-container">$b \in S^-$</span>, we have <span class="math-container">$a/b \in \mathbb{Q}$</span>.</p> </blockquote> <p>Let us first check that this claim implies the assertion in Option 4. 
For any <span class="math-container">$a \in S^+$</span> and <span class="math-container">$b \in S^-$</span>, write <span class="math-container">$a/b = -p/q$</span> for some <span class="math-container">$p, q \in \mathbb{Z}^{&gt;0}$</span> in lowest terms and write <span class="math-container">$g = a/p = -b/q$</span>. Then by Bézout's identity, we can find integers <span class="math-container">$x, y$</span> such that <span class="math-container">$xp - yq = 1$</span>. By replacing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> by <span class="math-container">$x+qk$</span> and <span class="math-container">$y+pk$</span> if necessary, we may assume that both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are positive. Then <span class="math-container">$xa + yb = g \in S$</span>. Similar reasoning also shows that <span class="math-container">$-g \in S$</span>. So the good set generated by <span class="math-container">$\{a, b\}$</span> is precisely <span class="math-container">$g\mathbb{Z}$</span>.</p> <p>Next, it is easy to see that the claim implies that both <span class="math-container">$S^+$</span> and <span class="math-container">$S^-$</span> are countable. So we may enumerate them by <span class="math-container">$S^+ = \{a_1, a_2, \dots\}$</span> and <span class="math-container">$S^- = \{ b_1, b_2, \dots\}$</span>. Then the above observation yields <span class="math-container">$$\langle \{a_1, b_1, a_2, b_2, \dots, a_n, b_n\}\rangle = g_n \mathbb{Z}$$</span> for some <span class="math-container">$g_n &gt; 0$</span> such that <span class="math-container">$g_n/g_{n+1} \in \mathbb{Z}$</span> for all <span class="math-container">$n$</span>. Moreover, the assumption implies that <span class="math-container">$(g_n)_{n\geq 1}$</span> eventually stabilizes at some value <span class="math-container">$\alpha &gt; 0$</span>. 
Therefore <span class="math-container">$S = \alpha \mathbb{Z}$</span>.</p> <p>So it remains to prove the claim. Suppose otherwise. Then there exist <span class="math-container">$a \in S^+$</span> and <span class="math-container">$b \in S^-$</span> such that <span class="math-container">$a/b \notin \mathbb{Q}$</span>. Write <span class="math-container">$\xi = -a/b$</span> and note that <span class="math-container">$\xi$</span> is irrational. So by the Dirichlet approximation theorem, for any <span class="math-container">$\epsilon &gt; 0$</span> we can find positive integers <span class="math-container">$x, y$</span> such that <span class="math-container">$0 &lt; \left| x \xi - y \right| &lt; \epsilon/|b|$</span>. This then implies that <span class="math-container">$0 &lt; \left| xa + yb \right| &lt; \epsilon$</span>, hence contradicts the assumption that <span class="math-container">$\inf S^+ &gt; 0$</span> and <span class="math-container">$\sup S^- &lt; 0$</span>. Therefore the claim must be true.</p> </li> </ul>
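The Bézout step in the proof, finding positive $x, y$ with $xa + yb = g$, can be checked concretely; the numbers below ($a = 3$, $b = -5$, so $p = 3$, $q = 5$, $g = 1$) are my own illustrative choices, not from the answer:

```python
from math import gcd

def positive_bezout(p, q):
    """Positive integers x, y with x*p - y*q = 1, for coprime positive p, q."""
    assert gcd(p, q) == 1
    x = pow(p, -1, q)            # modular inverse: x*p ≡ 1 (mod q); needs Python 3.8+
    y = (x * p - 1) // q
    while x <= 0 or y <= 0:      # shift x -> x+q, y -> y+p until both are positive
        x, y = x + q, y + p
    return x, y

a, b = 3, -5                     # a in S^+, b in S^-, with a/b = -3/5 in lowest terms
p, q, g = 3, 5, 1                # g = a/p = -b/q
x, y = positive_bezout(p, q)
print(x, y, x * a + y * b)       # x*a + y*b = g, so g lies in the good set <{a, b}>
```

Here $x = 2$, $y = 1$ gives $2\cdot 3 + 1\cdot(-5) = 1$, so the good set generated by $\{3, -5\}$ is $\mathbb{Z}$, matching the claim.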
210,735
<p>The Cantor set is closed, so its complement is open. So the complement can be written as a countable union of disjoint open intervals. Why can we not just enumerate all endpoints of the countably many intervals, and conclude the Cantor set is countable?</p>
user642796
8,348
<p>Because the Cantor set includes numbers which are not the endpoints of any intervals removed. For example, the number $\frac{1}{4}$ (0.02020202020... in ternary) belongs to the Cantor set, but is not an endpoint of any interval removed.</p>
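This can be double-checked with exact rational arithmetic (a small addition of mine, not part of the original answer): the ternary digits of $\frac14$ cycle $0, 2$, so the digit $1$ never appears and $\frac14$ survives every stage of the middle-thirds construction.

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n ternary digits of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)     # the next digit is the integer part
        digits.append(d)
        x -= d
    return digits

digits = ternary_digits(Fraction(1, 4), 10)
print(digits)  # [0, 2, 0, 2, 0, 2, 0, 2, 0, 2]
```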
59,954
<p>I can rather easily imagine that some mathematician/logician had the idea to symbolize "it <strong>E</strong>xists" by $\exists$ - a reversed E - and after that some other (imitative) mathematician/logician had the idea to symbolize "for <strong>A</strong>ll" by $\forall$ - a reversed A. Or vice versa. (Maybe it was one and the same person.)</p> <p>What is hard (for me) to imagine is, how the one who invented $\forall$ could fail to consider the notations $\vee$ and $\wedge$ such that today $(\forall x \in X) P(x)$ must be spelled out $\bigwedge_{x\in X} P(x)$ instead of $\bigvee_{x\in X}P(x)$? (Or vice versa.)</p> <p>Since I know that this is not a real question, let me ask it like this: Where can I find more about this observation?</p>
Bill Dubuque
242
<p>See <a href="http://jeff560.tripod.com/set.html">Earliest Uses of Symbols of Set Theory and Logic</a> for this and much more.</p>
Doug Spoonwood
11,300
<p>I've misplaced my copy of it, but I recall S. C. Kleene, in his Mathematical Logic, noting that "v" originated as an abbreviation of "vel". In Latin, "vel" is one of the words commonly translated as the English "or", and it was believed that "vel" comes closer to alternation (equivalently, inclusive <a href="http://plato.stanford.edu/entries/disjunction/#MytVelAut" rel="nofollow">disjunction</a>) than any of the other words commonly translated as "or". Since Russell read Peano, and <a href="http://www.archive.org/details/arithmeticespri00peangoog" rel="nofollow">Peano's book on arithmetic</a> was written in Latin, it does seem at least plausible that Russell first used "v" for alternation, as Bill's reference states.</p> <p>That "∀" gets interpreted by "⋀", as you correctly state, in at least some cases (though not always) may seem strange at first, I agree. But one way to think about it is to take the set of truth values as linearly ordered. If you do this with "0" as the least truth value and "1" as the greatest, then "⋀" most closely corresponds to (if not in fact is) the infimum, while "v" most closely corresponds to the supremum. Both "infimum" and "supremum", at least according to my intuition of them, involve notions just as strange if you don't look at them carefully.</p>
robjohn
13,854
<p>My understanding of the quantifier symbols $\bigvee$ ("there exists") and $\bigwedge$ ("for all") was that they were supposed to be large versions of $\vee$ ("or") and $\wedge$ ("and"). Then $\bigvee_{x\in X}Fx$ would mean $Fx_1\vee Fx_2\vee Fx_3\vee\dots$ whereas $\bigwedge_{x\in X}Fx$ would mean $Fx_1\wedge Fx_2\wedge Fx_3\wedge\dots$</p> <p>This is similar to the notation $\bigcup_{x\in X}S_x$ for $S_{x_1}\cup S_{x_2}\cup S_{x_3}\cup\dots$ and $\bigcap_{x\in X}S_x$ for $S_{x_1}\cap S_{x_2}\cap S_{x_3}\cap\dots$</p> <p>I don't see that these symbols "fail". I can see that $\forall$ could be confused for $\bigvee$, and that would be bad since $\forall$ means the same as $\bigwedge$. However, I don't see that as being a condemnation of one over the other, and there would be no confusion if only one were being used at a time.</p>
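As a side note of mine (not part of the original answer), the same big-operator reading survives in programming: Python's built-ins `all` and `any` are precisely the finite $\bigwedge$ and $\bigvee$ of a predicate over an index set.

```python
X = range(1, 6)

# "for all x in X, P(x)": an iterated AND, i.e. the big-wedge of the P(x)
forall_positive = all(x > 0 for x in X)

# "there exists x in X with Q(x)": an iterated OR, i.e. the big-vee of the Q(x)
exists_mult_of_4 = any(x % 4 == 0 for x in X)

print(forall_positive, exists_mult_of_4)  # True True
```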
Ricardo
145,198
<p>That's a nice question, but you misunderstand how the quantifiers "for all" and "there exists" came about. The symbol does look as if it were derived from the letter A, and I suppose it is, but it did not emerge that way. The first to introduce quantifiers in the way we know them today was Gottlob Frege, a German mathematician, in his book <em>Begriffsschrift</em>. See the Wikipedia article about the book; it shows the notation he used. So that clarifies your question about $\forall$ and $\exists$. As for "and" and "or", they are just Boolean connectives, introduced by George Boole in <em>An Investigation of the Laws of Thought</em>. The use of $\vee$ for "or" is just a convention; Quine explains it in his <em>Mathematical Logic</em> as an abbreviation of the Latin word "vel". The use of $\wedge$ for "and" is, again, a convention; many mathematicians in the analytic tradition use dots (.) instead. As for your last question: you are thinking about quantification the wrong way. It is not just about "quantifying" (read Quine's ML); it is about binding, that is, marking the place of a variable in a statement. Using Quine's example: suppose you want to say that every number is less than $0$, equal to $0$, or greater than $0$. You could say "Whatever number you may select, it $&lt;0$ $\vee$ it $=0$ $\vee$ it $&gt;0$", or "whatever number (it $&lt;0$ $\vee$ it $=0$ $\vee$ it $&gt;0$)". For simplification, instead of the last one, just say "$(x)$ number ($x&gt;0 \vee x=0 \vee x&lt;0)$"; mathematically, you just say $(x)(x \epsilon Number (x&gt;0 \vee x=0 \vee x&lt;0))$. So $\bigwedge _{x\epsilon Number} P(x)$ is just an abbreviation of $(x)(x \epsilon Number (P(x)))$. But what if you wanted to say just $(x)(x=x)$, without specifying a class like Number or $X$? You would have to introduce quantification into the notation, and it would get messy. 
I know my example doesn't clarify a lot, but the business of quantification isn't as clear as everyone thinks; there is a lot of disagreement among the professionals. Russell, for example, talks a lot about quantification in his <em>Mathematical logic as based on the theory of types</em>, and you can see it isn't simple: it involves a lot of philosophy. So read Quine and Russell. Have a nice day. P.S.: Consider $(x)(x \epsilon N .\supset. P(x))$. Suppose we want to say the same thing with the "and" quantifier; it then becomes $(x_1,..,x_n \epsilon N)(P(x_1) \wedge .. \wedge P(x_n))$, which is a lot more work, and we still have to use quantification. Definitions are a means of simplification. But read Russell (the <em>Principia</em> and the article I mentioned) and Quine; the quantification used in modern mathematical logic comes from their works. See whether the philosophical approach corresponds to what we just did.</p>
2,089,502
<blockquote> <p>How many numbers are there from $1$ to $1400$ which satisfy these conditions: when divided by $5$ the remainder is $3$, and when divided by $7$ the remainder is $2$?</p> </blockquote> <p>How can I start? I am a newbie in modular arithmetic. All I can figure out is that the number $= 5k_1+3 = 7k_2+2$. </p>
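For what it's worth, here is a brute-force sanity check (my addition, not one of the posted answers). The two congruences combine, by the Chinese Remainder Theorem, into the single condition $x \equiv 23 \pmod{35}$:

```python
# Direct count: 1 <= n <= 1400 with n % 5 == 3 and n % 7 == 2
brute = sum(1 for n in range(1, 1401) if n % 5 == 3 and n % 7 == 2)

# CRT view: both congruences hold iff n ≡ 23 (mod 35),
# so the solutions are 23, 58, ..., i.e. 23 + 35k <= 1400
crt = len(range(23, 1401, 35))

print(brute, crt)  # 40 40
```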
Max
130,322
<p>$\left|f(x)-f(y)\right|\leq \left(\sup\limits_{z\in [x,y]}\left|f'(z)\right|\right)\cdot \left|x-y\right|$</p>
30,220
<p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p> <p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> &quot;Clarifying the nature of the infinite: the development of metamathematics and proof theory&quot;.</p> <blockquote> <p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as part and parcel of what he refers to as the “second birth” of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:</p> </blockquote> <blockquote> <blockquote> <p>A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).</p> </blockquote> </blockquote> <blockquote> <p>In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation.</p> </blockquote> <p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here &quot;calculation&quot; means any type of routine technicality.) 
Category theory and topoi may provide some examples.</p> <p>Thanks in advance.</p>
Bill Dubuque
6,716
<p>Some of the prettiest examples of Dedekind's structuralism arise from revisiting proofs in elementary number theory from a highbrow viewpoint, e.g. by reformulating them after noticing hidden structure (ideals, modules, etc). A striking example of such is the generalization and unification of elementary irrationality proofs of n'th roots by way of Dedekind's notion of <strong>conductor ideal</strong>. This gem seems to be little-known (even to some number theorists, e.g. Estermann and Niven). Since I've already explained this at length elsewhere I'll simply <a href="https://math.stackexchange.com/a/39769/242">link</a> to it.</p> <p>At first glance the various &quot;elementary&quot; proofs seem to be magically pulled out of a hat since the crucial structure of the conductor ideal is obfuscated by the descent &quot;calculations&quot; of various lemmas (that have all been inlined vs. abstracted out). However, once one abstracts out the hidden innate structure the proof becomes a striking one-liner: simply remark that in a PID a conductor ideal is principal so cancelable, thus PIDs are integrally closed. Here, the complexity of the calculations verifying the descent (induction) etc are abstracted out and tidily encapsulated once-and-for-all in the lemma that Euclidean domains are PIDs. Following Dedekind's ground-breaking insight, we recognize in many number-theoretical contexts the innate structure of an ideal, and we exploit that structure whenever possible. For much further detail and discussion see <em>all</em> of my posts in the thread <a href="https://math.stackexchange.com/a/39769/242">1</a> (click on the thread's title/subject at the top of the frame to see a threaded view in the Google Groups usenet web interface)</p> <p>When I teach such topics I emphasize that one should always look for &quot;hidden ideals&quot; and other obfuscated innate structure. 
Alas, too many students cannot resist the urge to dive in and &quot;calculate&quot; before pursuing conceptual investigations. It was such methodological principles that led Dedekind to discover most all of the fundamental algebraic structures. Nowadays we often take for granted such structural abstractions and methodology. But it was certainly a nontrivial task to discover these in the rarefied mathematical atmosphere of Dedekind's day (and it remains so even nowadays for students when first learning such topics). Emmy Noether wasn't joking when she said &quot;it's all already in Dedekind&quot;. It deserves emphasis that this remark also remains true for methodological principles.</p>
Andreas Blass
6,794
<p>When I was a student, I once watched a professor (a famous and brilliant mathematician) spend a whole class period proving that the functor $M\otimes-$ is right exact. (This was in the context of modules over a commutative ring.) He was working from the generators-and-relations definition of the tensor product. With what I'd consider the "right" definition of $M\otimes-$, as the left adjoint of a Hom functor, the proof becomes trivial: Left adjoints preserve colimits, in particular coequalizers. Since the functors in question are additive, $M\otimes-$ also preserves 0 and therefore preserves cokernels. And that's what right-exactness means.</p>
1,818,976
<p>Let there be many numbers $a_1,a_2,a_3,\dots,a_n$.</p> <p>I want to find the first digit of their product, i.e. of $A=a_1\times a_2\times a_3\times a_4\times \dots\times a_n$.</p> <p>These numbers are huge and multiplying all of them exceeds the time limit.</p> <p>Is there any shortcut to find the most significant digit of $A$ (first digit from the left)?</p>
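No answer to this question is reproduced above, but one standard shortcut (my addition, not from the thread) avoids the big multiplication entirely: add the base-10 logarithms and look at the fractional part of the sum.

```python
import math

def leading_digit(numbers):
    """Leading digit of the product of `numbers`, via summed base-10 logarithms.

    Caveat: floating-point rounding can misreport the digit when the
    fractional part of the log sum sits extremely close to a digit boundary.
    """
    s = sum(math.log10(a) for a in numbers)
    frac = s - math.floor(s)      # fractional part of log10 of the product
    return int(10 ** frac)        # 10**frac lies in [1, 10)

nums = [123456789, 987654321, 555555555]
exact = int(str(math.prod(nums))[0])   # exact cross-check via big-int arithmetic
print(leading_digit(nums), exact)
```

This runs in time proportional to $n$ regardless of how large the factors are, since each number contributes only one logarithm.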
hmakholm left over Monica
14,366
<p><strong>Hint:</strong> The unit circle (or even the closed unit disc) is compact. And if $r$ is irrational, then the $e^{i2\pi rn}$s are all different ...</p>
78,569
<p><img src="https://i.stack.imgur.com/FhX2B.png" alt="Limit of both sides of function"></p> <p>I need to solve for <code>c</code> such that the function is continuous at <code>x=2</code>. How do I do this automatically?</p> <p>I have expressions for the limit of both sides of the function as x->2, but how would I use the Solve command to find the value of c that makes f(x) continuous at x = 2? Would I set both outputs equal to each other? Once I get the value of c, how do I use the Piecewise command to define f(x) explicitly?</p>
Nasser
70
<pre><code>myfunc[x_] := Piecewise[{{c x^2 + 2 x, x &lt;= 2}, {x^3 - c x, True}}];
lim1 = Limit[myfunc[x], x -&gt; 2, Direction -&gt; -1]
lim2 = Limit[myfunc[x], x -&gt; 2, Direction -&gt; 1]
sol = c /. First@Solve[{lim1 == lim2}, c]
(*2/3*)

Plot[myfunc[x] /. c -&gt; sol, {x, 1, 3},
 Epilog -&gt; {Red, PointSize[.015], Point[{2, myfunc[2] /. c -&gt; sol}]}]
</code></pre> <p><img src="https://i.stack.imgur.com/Vgj3J.png" alt="Mathematica graphics"></p> <blockquote> <p>Once I get the value of C how do i use the Piecewise command to define f(x) explicitly</p> </blockquote> <p><code>f(x)</code> is already defined. No need to redefine it. Just substitute the <code>c</code> value found by <code>Solve</code> each time you use the function.</p> <p>Notice that the function is continuous at <code>x=2</code> but not differentiable at <code>x=2</code>, and Mathematica can tell that:</p> <pre><code>D[(myfunc[x] /. c -&gt; sol), x]
</code></pre> <p><img src="https://i.stack.imgur.com/uhCRp.png" alt="Mathematica graphics"></p>
3,257,799
<blockquote> <p>Find all values of <span class="math-container">$a$</span> for which the equation <span class="math-container">$$ (a-1)4^x + (2a-3)6^x = (3a-4)9^x $$</span> has only one solution.</p> </blockquote> <p><br> I have two cases: one when <span class="math-container">$a = 1$</span> and another when the discriminant <span class="math-container">$= 0$</span>.<br> The answers I get are <span class="math-container">$a = 1$</span> and <span class="math-container">$a = 0.5$</span>. <br> Since I don't have the answers, could somebody tell me if I am right?</p>
nonuser
463,553
<p>Here is another way to solve this.</p> <p>Let <span class="math-container">$t=2^x/3^x&gt;0$</span>; dividing by <span class="math-container">$9^x$</span> we get <span class="math-container">$$a(t^2+2t-3) = t^2+3t-4,$$</span> so <span class="math-container">$$a(t+3)(t-1)=(t+4)(t-1).$$</span></p> <p>Then for every <span class="math-container">$a$</span>, the number <span class="math-container">$t=1$</span>, i.e. <span class="math-container">$x=0$</span>, is a solution. Suppose <span class="math-container">$t\neq 1$</span>; then <span class="math-container">$$a ={t+4\over t+3}\implies a\in (1,{4\over 3})$$</span> since <span class="math-container">$t&gt;0$</span>.</p> <p>So for each <span class="math-container">$a\in (1,{4\over 3})$</span> there is a second root, <span class="math-container">$t=(4-3a)/(a-1)$</span>. It coincides with <span class="math-container">$t=1$</span> exactly when <span class="math-container">$a={5\over 4}$</span>; in that case the two roots merge, so <span class="math-container">$a={5\over 4}$</span> also gives a unique solution. Hence <span class="math-container">$$\boxed{a\in (-\infty, 1]\cup \left\{{5\over 4}\right\}\cup [{4\over 3},\infty)}$$</span></p> <hr> <p>Example: if <span class="math-container">$a=2$</span> we get <span class="math-container">$$2\cdot 9^x = 4^x+6^x\implies t\in\{1,-2\}$$</span></p> <p>But <span class="math-container">$t&gt;0$</span>, so we have only one solution. </p>
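<p>A quick numerical cross-check of the factorization (my own sketch; solutions in <span class="math-container">$t$</span> correspond one-to-one to solutions in <span class="math-container">$x$</span> via <span class="math-container">$t=(2/3)^x$</span>, and the borderline value <span class="math-container">$a=5/4$</span>, where the second root collides with <span class="math-container">$t=1$</span>, is worth noting):</p>

```python
def positive_roots(a, tol=1e-9):
    """Distinct positive roots t of (a-1)t^2 + (2a-3)t - (3a-4) = 0,
    via the factorization (t - 1)*((a-1)t + (3a-4)) = 0; `tol` guards
    against floating-point noise at the boundary cases."""
    roots = [1.0]                            # t = 1 (x = 0) always works
    if abs(a - 1) > tol:
        t2 = (4 - 3 * a) / (a - 1)           # root of the second factor
        if t2 > tol and abs(t2 - 1.0) > tol:
            roots.append(t2)                 # a genuinely second solution
    return sorted(roots)

for a in (0.5, 1, 1.2, 1.25, 4/3, 2):
    print(a, positive_roots(a))              # two roots only for a in (1, 4/3) with a != 5/4
```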
69,448
<p>What <code>Method</code> options are allowed for <code>DensityPlot</code> and <code>ContourPlot</code>? I am unable to find this information either in MMA documentation or in SE. Thanks.</p>
Community
-1
<p>One should not confuse a method with an option.</p> <p>A method in the sense of Mathematica (see: <a href="http://reference.wolfram.com/language/ref/Method.html" rel="nofollow noreferrer">Method</a>):</p> <p><img src="https://i.stack.imgur.com/9JTnC.png" alt="enter image description here"></p> <p>Options in the sense of Mathematica (see: <a href="http://reference.wolfram.com/language/ref/Options.html" rel="nofollow noreferrer">Options</a>):</p> <p><img src="https://i.stack.imgur.com/jlGG2.png" alt="enter image description here"></p> <p>For <code>DensityPlot</code> or <code>ContourPlot</code> you can query the options with <code>??DensityPlot</code> or <code>??ContourPlot</code>:</p> <p><img src="https://i.stack.imgur.com/H57Gd.png" alt="enter image description here"></p> <p>The <a href="https://mathematica.stackexchange.com/questions/65680/how-to-extract-a-list-of-available-method-s">link</a> provided by @Karsten 7. (in the answer by @Nasser) is a really useful strategy to "unearth" the methods of a function.</p> <p>For example (algorithm omitted), with:</p> <pre><code>r = getList2["NDSolve"]; Grid[r, Frame -&gt; All, Alignment -&gt; Left] </code></pre> <p>all "Methods" approved for <code>NDSolve</code> are listed:</p> <p><img src="https://i.stack.imgur.com/Dx3aU.png" alt="enter image description here"></p> <p><strong>Edit</strong></p> <p>Since DensityPlot, ContourPlot, and others have the same options as Graphics, those rules are applicable (see <a href="http://reference.wolfram.com/language/ref/Graphics.html" rel="nofollow noreferrer">Graphics</a>, and see "The following Method options can be given" ibid.). For functions, Mathematica states:</p> <blockquote> <p>With the default setting Method-&gt;Automatic, the Wolfram Language will automatically try to pick the best method for a particular computation.</p> </blockquote> <p>See <a href="http://reference.wolfram.com/language/ref/Method.html" rel="nofollow noreferrer">Method</a>.</p>
4,549,340
<p>I have heard people say that the flight from Fort Lauderdale to Seattle is the longest possible flight within the continental United States. On a flat 2D map that looks plausible, but the Earth's curvature complicates things: a cross section of the Earth has a smaller circumference the farther it is from the equator, so equal horizontal distances on a map do not correspond to equal true distances. Flying around the globe along the equator takes longer than circling it along Seattle's latitude, because the Earth is "wider" at Florida's latitude than at Seattle's. By the same reasoning, east–west travel near Florida's latitude (say Fort Lauderdale to San Diego) covers more true distance per unit of map distance than east–west travel near Seattle's latitude (say Maine to Seattle).</p> <p>So: does the Earth's curvature make the apparent (2D map) distance between Fort Lauderdale and Seattle misleading? Although Seattle is north of Fort Lauderdale and thus farther on the map, there is added distance closer to the equator due to this curvature. Could there be a location in the continental United States south of Seattle whose flight time from Fort Lauderdale is longer, simply because of this change in cross-sectional circumference?</p>
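<p>The comparison can be sketched numerically with the haversine (great-circle) formula. The coordinates below are rough approximations I am assuming, and the Earth is modeled as a perfect sphere:</p>

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (latitude, longitude) points
    on a spherical Earth, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

FLL = (26.12, -80.14)    # Fort Lauderdale (approximate)
SEA = (47.61, -122.33)   # Seattle (approximate)
SAN = (32.72, -117.16)   # San Diego (approximate)

print(great_circle_km(*FLL, *SEA))   # Seattle is the farther of the two
print(great_circle_km(*FLL, *SAN))
```

<p>On this model Fort Lauderdale–Seattle comes out several hundred kilometers longer than Fort Lauderdale–San Diego, so the shrinking circumference at northern latitudes does not overturn the usual claim — though it does make flat-map comparisons unreliable.</p>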
Damian Pavlyshyn
154,826
<p>When doing induction arguments, it is helpful to write down exactly what your induction hypothesis <span class="math-container">$P(p)$</span> is and how it depends on the variable <span class="math-container">$p$</span>. The way that you've written it is not very clear, and this is what's causing you to (correctly) be worried.</p> <p>As I understand it, the precise induction hypothesis that you're proving is:</p> <p><span class="math-container">$P(p)$</span>: For all <span class="math-container">$\epsilon &gt; 0$</span>, there exists <span class="math-container">$n \in \mathbf{Z}_{\geq 1}$</span> such that, for any <span class="math-container">$m \in [n, n+p]$</span>, we have <span class="math-container">$|x_n - x_m| &lt; \epsilon$</span>.</p> <p>By induction, you have proved that <span class="math-container">$P(p)$</span> holds for all <span class="math-container">$p \in \mathbf{Z}_{\geq 1}$</span>. However, this is not the same as the Cauchy condition: compare the following statements:</p> <ul> <li><span class="math-container">$\forall p \in \mathbf{Z}_{\geq 1}\; \forall \epsilon &gt; 0\; \exists n \in \mathbf{Z}_{\geq 1}\; \forall m\in [n, n+p]: |x_n - x_m| &lt; \epsilon$</span> (what you've proved)</li> <li><span class="math-container">$\forall \epsilon &gt; 0\; \exists n \in \mathbf{Z}_{\geq 1}\; \forall p \in \mathbf{Z}_{\geq 1}\; \forall m\in [n, n+p]: |x_n - x_m| &lt; \epsilon$</span> (Cauchy condition)</li> </ul> <p>As an exercise, check that the sequence <span class="math-container">$x_n = \sum_{k=1}^n 1/k$</span> satisfies your condition but not the Cauchy condition.</p> <p>Basically, induction is not suitable for this problem. A hint for how to proceed is to notice that <span class="math-container">$$ |x_n - x_m| \leq \sum_{k=n}^{m-1}|x_{k} - x_{k+1}|. $$</span></p>
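<p>The suggested exercise can be checked numerically (a quick sketch; the particular <span class="math-container">$n$</span> and <span class="math-container">$p$</span> are arbitrary choices of mine):</p>

```python
def H(n):
    """n-th partial sum of the harmonic series, x_n = sum_{k=1}^n 1/k."""
    return sum(1.0 / k for k in range(1, n + 1))

n, p = 10**5, 5

# For a FIXED window length p the terms eventually cluster: |x_n - x_m| <= p/n.
window = max(abs(H(n + j) - H(n)) for j in range(1, p + 1))
print(window)           # tiny

# Yet the sequence is not Cauchy: the block from n to 2n never shrinks.
print(H(2 * n) - H(n))  # stays near ln 2
```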
191,984
<p>In this context composition series means the same thing as defined <a href="http://en.wikipedia.org/wiki/Composition_series#For_groups" rel="noreferrer">here.</a></p> <p>As the title says, given a finite group <span class="math-container">$G$</span> and <span class="math-container">$H \unlhd G$</span> I would like to show there is a composition series containing <span class="math-container">$H.$</span></p> <p>Following is my attempt at it.</p> <p>The main argument of the claim is showing the following.</p> <blockquote> <p><strong>Lemma.</strong> If <span class="math-container">$H \unlhd G$</span> and <span class="math-container">$G/H$</span> is not simple then there exists a subgroup <span class="math-container">$I$</span> with <span class="math-container">$H \lneq I \lneq G$</span> such that <span class="math-container">$H \unlhd I \unlhd G.$</span></p> </blockquote> <p>The proof follows from the 4th isomorphism theorem, since if <span class="math-container">$G/H$</span> is not simple then there is a nontrivial proper normal subgroup <span class="math-container">$\overline{I} \unlhd G/H$</span>, which has the form <span class="math-container">$\overline{I} = I/H$</span> for some <span class="math-container">$I$</span> with <span class="math-container">$H \unlhd I \unlhd G.$</span></p> <p>Suppose now that <span class="math-container">$G/H$</span> is not simple. Using the above lemma we deduce that there exists a finite chain of groups (since <span class="math-container">$G$</span> is finite) such that <span class="math-container">$$H \unlhd I_1 \unlhd \cdots \unlhd I_k \unlhd G$$</span> and <span class="math-container">$G/I_k$</span> is simple. Now one has to repeat this process for all other pairs <span class="math-container">$I_{i+1}/I_{i}$</span> and for <span class="math-container">$I_1/H$</span> until the quotients are simple groups. 
This is all fine since all the subgroups are finite as well.</p> <p>Now if <span class="math-container">$H$</span> is simple we are done otherwise there is a group <span class="math-container">$J \unlhd H$</span> and we inductively construct the composition for <span class="math-container">$H.$</span></p> <p>Is the above &quot;proof&quot; correct? If so, is there a way to make it less messy?</p>
Marc van Leeuwen
18,880
<p>Yes your proof is essentially correct. You can make it look less messy as follows. Noticing that the proof that $G$ has a composition series in the first place has the same structure as your proof, you could do a two-in-one-blow induction on the order of $G$ by showing: "$G$ has a composition series, and if $H$ is a nontrivial proper normal subgroup of $G$, then it can be chosen to pass through $H$".</p> <p>The base cases are $G$ trivial or simple, and the statement is obvious then (the second part is vacuously satisfied). Otherwise there exists a nontrivial proper normal subgroup $H$ of $G$; choose it, or for the second part take the one that is imposed. Now by induction $H$ has a composition series, which we stick below $H$ as part of our final composition series. The quotient $G/H$ also has a composition series, which we "lift" by taking the preimage in $G$ (under the projection modulo $H$) of every subgroup in the series to obtain the part of our composition series between $G$ and $H$, by the isomorphism theorem.</p>