3,746,597
<p>My assumption would be</p> <p><span class="math-container">$$\int_{-a}^a x^n\ dx=0 \quad \text{for odd } n$$</span></p> <p>Am I on the right track here? Also, for indefinite integrals</p> <p><span class="math-container">$$\int f(x)\ dx$$</span></p> <p>would this be correct as well?</p> <p><strong>Background</strong></p> <p>My professor raised this question in his lecture and I provided the following</p> <p><span class="math-container">\begin{align}\int_{-a}^{a}\left(x^3\right)dx&amp;= 0\end{align}</span></p> <p>and</p> <p><span class="math-container">\begin{align}\int_{-a}^{a}\left(x^7\right)dx&amp;= 0\end{align}</span></p> <p>to support that integrals of odd powers will always equal zero. The professor stated my evaluations were correct; however, I couldn't use the fact that it works for two particular positive odd exponents to deduce conclusively that the result holds for all positive odd exponents. Thus, my assumption is that</p> <p><span class="math-container">$$\int_{-a}^a x^n\ dx=0$$</span></p> <p>covers all positive odd integers <span class="math-container">$n$</span> simultaneously. Any help with this would be appreciated!</p>
user0102
322,814
<p>Here is a more general result which may help you.</p> <p>Suppose that the function <span class="math-container">$f:\textbf{R}\to\textbf{R}$</span> is odd. This means that <span class="math-container">$f(-x) = -f(x)$</span>. Thus one has <span class="math-container">\begin{align*} \int_{-a}^{a}f(x)\mathrm{d}x &amp; = \int_{-a}^{0}f(x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x\\\\ &amp; = \int_{0}^{a}f(-x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x\\\\ &amp; = -\int_{0}^{a}f(x)\mathrm{d}x + \int_{0}^{a}f(x)\mathrm{d}x = 0 \end{align*}</span></p> <p>where the change of variable <span class="math-container">$u = -x$</span> has been used in the first integral.</p> <p>In particular, in your case, <span class="math-container">$f(x) = x^{2n+1}$</span>, which is odd.</p>
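A quick numerical sanity check of this cancellation (my own sketch, not part of the original answer; the midpoint-rule integrator is ad hoc):

```python
def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# For odd integrands, the integral over a symmetric interval should vanish.
for p in (1, 3, 7):
    print(p, midpoint_integral(lambda x, p=p: x**p, -2.0, 2.0))
```

An even power such as $x^2$ gives a nonzero value instead, which is why the parity of the exponent is the decisive point.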
4,566,254
<p>Let <span class="math-container">$F$</span> be a functor <span class="math-container">$\mathscr{C}^\text{op}\times\mathscr{C}\to\mathbf{Set}$</span>, and let <span class="math-container">$S$</span> be an arbitrary set. Can we write the following? <span class="math-container">$$ \int^{C:\mathscr{C}} S\times F(C,C) \cong S\times\int^{C:\mathscr{C}} F(C,C) $$</span> This seems intuitively reasonable by analogy to integrals, and it would be generally useful in proofs. Is it true and if so how can I prove it?</p> <p>(edited: of course it should be <span class="math-container">$\cong$</span>, not <span class="math-container">$=$</span>.)</p>
Dabouliplop
426,049
<p>Yes, this is true. It can be proved in several ways. Let's see it in a concrete way. A point of <span class="math-container">$∫^c S×F(c,c)$</span> is given by some triple <span class="math-container">$(c,s,x)$</span> with <span class="math-container">$\newcommand{\C}{\mathscr{C}}c∈\C$</span>, <span class="math-container">$s∈S$</span> and <span class="math-container">$x ∈ F(c,c)$</span>. For each <span class="math-container">$f : c→c'$</span> in <span class="math-container">$\C$</span>, <span class="math-container">$s ∈ S$</span> and <span class="math-container">$x ∈ F(c',c)$</span>, we impose the relation <span class="math-container">$(c,s,F(f,1)(x)) = (c',s,F(1,f)(x))$</span>.</p> <p>From this description, we see that it results in <span class="math-container">$S$</span> copies of <span class="math-container">$∫^c F(c,c)$</span>.</p> <p>Another possibility is to use the universal property of the coend: <span class="math-container">$$\begin{align*}\newcommand{\Hom}{\operatorname{Hom}}\Hom\left(∫^c S×F(c,c), X\right) &amp;≅ ∫_c \Hom(S×F(c,c),X)\\ &amp;≅ ∫_c \Hom(F(c,c),X^S)\\ &amp;≅ \Hom\left(∫^c F(c,c),X^S\right)\\ &amp;≅ \Hom\left(S×∫^c F(c,c),X\right)\end{align*}$$</span></p> <p>Or we can write the coend as a colimit and use the fact that products distribute over colimits in sets.</p>
1,302,708
<p>I have 2 sets of elements, say $A=\{a\}$ (only 1 element) and $B = \{b_1, b_2,..., b_n\}$.</p> <p>The probability of picking $A$ is $0.3$ and the probability of picking $B$ is $0.7$, and all elements in $B$ have equal chances.</p> <p>Now combining everything together:</p> <p>What would be the probability of picking $a$, and the probability of picking any given element in $B$?</p> <p>Am I right to say:</p> <p>$P(a) = 0.3$</p> <p>$P(\text{picking any element in}~B) = \frac{0.7}{n}$ ?</p>
mjqxxxx
5,546
<p>If either of the exponents is strictly positive $(\alpha &gt; 0)$, then $(0,1)$ or $(1,0)$ is a solution, because $0^{\alpha} + 1^{\beta} = 1$ for any $\beta$. This occurs for all $\theta \in (-\pi/2, \pi)$, and leaves two cases:</p> <ul> <li>One exponent is zero and the other is strictly negative: $\theta\in\{-\pi/2,\pi\}$. In this case there is no solution... any integer raised to the zeroth power is $1$ (if defined), but no integer raised to a negative power is $0$.</li> <li>Both exponents are strictly negative: $\theta \in (\pi, 3\pi/2)$.</li> </ul> <p>In the second case, we want $$ \frac{1}{x^\alpha} + \frac{1}{y^\sqrt{1-\alpha^2}}=1 $$ for some $\alpha \in (0,1)$ and integers $x,y$. If $x$ or $y$ is $0$ or negative, the expression is undefined, and if $x$ or $y$ is $1$, then the left-hand side is strictly greater than $1$; so we must have $x,y\ge 2$. Fix $x$ and $y$ and consider the behavior of the left-hand side as we vary $\alpha$: $$ f(\alpha;x,y)=e^{-\alpha \log x}+e^{-\sqrt{1-\alpha^2}\log y}, $$ so $$ f'(\alpha;x,y)=-\log x e^{-\alpha\log x}+\frac{\alpha \log y}{\sqrt{1-\alpha^2}}e^{-\sqrt{1-\alpha^2}\log y}, $$ which goes from $-\log x &lt; 0$ at $\alpha=0$ to $+\infty &gt; 0$ at $\alpha=1$; and $$ f''(\alpha;x,y)=(\log x)^2 e^{-\alpha\log x}+\left(\frac{\alpha^2 (\log y)^2}{1-\alpha^2}+\frac{\log y}{\sqrt{1-\alpha^2}}-\frac{\alpha^2 \log y}{(1-\alpha^2)^{3/2}}\right)e^{-\sqrt{1-\alpha^2}\log y}, $$ which is always positive. (*I think.) So for each $x,y$, there is a unique minimum of $f(\alpha;x,y)$ with respect to $\alpha$, at which $$ \frac{\alpha\log y}{y^{\sqrt{1-\alpha^2}}}=\frac{\sqrt{1-\alpha^2}\log x}{x^{\alpha}}. $$ Depending on whether the value of $f$ at this point is less than, equal to, or greater than $1$, there will be $2$, $1$, or $0$ solutions in $\alpha$ to the original equation for this $(x,y)$ pair. 
Checking this out numerically, we find that there are no solutions for $(x,y)=(2,2)$ and $(x,y)=(2,3)$, and there are two solutions for each other $(x,y)$ pair with $x,y\ge 2$.</p>
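The closing numerical claim can be spot-checked with a coarse grid search over $\alpha$ (my own hedged sketch; `f` is the left-hand side defined in the answer above):

```python
import math

def f(alpha, x, y):
    """Left-hand side 1/x^alpha + 1/y^sqrt(1-alpha^2) for alpha in (0,1)."""
    return x ** (-alpha) + y ** (-math.sqrt(1.0 - alpha * alpha))

def min_f(x, y, steps=20_000):
    """Approximate minimum of f(.; x, y) over alpha in (0,1) by grid search."""
    return min(f(i / steps, x, y) for i in range(1, steps))

print(min_f(2, 2), min_f(2, 3))  # both exceed 1: no solutions in alpha
print(min_f(2, 4))               # dips below 1: two solutions in alpha
```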
1,302,708
<p>I have 2 sets of elements, say $A=\{a\}$ (only 1 element) and $B = \{b_1, b_2,..., b_n\}$.</p> <p>The probability of picking $A$ is $0.3$ and the probability of picking $B$ is $0.7$, and all elements in $B$ have equal chances.</p> <p>Now combining everything together:</p> <p>What would be the probability of picking $a$, and the probability of picking any given element in $B$?</p> <p>Am I right to say:</p> <p>$P(a) = 0.3$</p> <p>$P(\text{picking any element in}~B) = \frac{0.7}{n}$ ?</p>
Piquito
219,998
<p>For every pair of natural numbers $(n,m)$ we have $\frac{1}{n} + \frac{1}{m}\leq n^{\cos\theta}+ m^{\sin\theta}\leq n+m$ because<br> $\frac{1}{n}\leq n^{\cos\theta}\leq n$ and the same for the sine; therefore, when $\frac{1}{n} + \frac{1}{m}&lt;1$ one has, by continuity, a value of $\theta$ (actually infinitely many, by periodicity) for which the asked equality is verified. Hence if $3\leq n, m$ there is always a solution for $(n,m)$. Which values of $\theta$? I don't know. For $(2,2)$ and $(2,3)$ there is no solution because the minima of $2^{\cos\theta}+2^{\sin\theta}$ and $2^{\cos\theta}+3^{\sin\theta}$ are both greater than $1$. However, for $(2,m)$ with $4\leq m$ we have solutions (it was good but my edit was bad, so I erased it).</p>
4,126,470
<p><span class="math-container">$$\sum_{n=2}^{\infty} \frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}$$</span></p> <p>I know that there are several methods of finding the convergence of a series: the ratio test, the comparison test, the limit comparison test. There is also this theorem: If a series <span class="math-container">$\sum_{n=1}^{\infty}a_n$</span> of real numbers converges then <span class="math-container">$\lim_{n \to \infty}a_n = 0$</span>.</p> <p><strong>So can I every time just apply this theorem instead of using all the tests?</strong> For example here,</p> <p><span class="math-container">$\lim _{n\to \infty }\left(\frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}\right) = 0$</span>, so can I just conclude that <span class="math-container">$\sum_{n=2}^{\infty} \frac{1}{n\left(\left(\ln\left(n\right)\right)^3+\ln\left(n\right)\right)}$</span> converges?</p> <p>It seems to me that most of the time I can just get away without all that comparison by using this theorem, or am I getting the wrong idea?</p>
Gary
83,800
<p>You can use comparison and integral test. The terms of the series are all positive and we have <span class="math-container">\begin{align*} \sum\limits_{n = 2}^\infty {\frac{1}{{n(\log ^3 n + \log n)}}} &amp; \le \sum\limits_{n = 2}^\infty {\frac{1}{{n\log ^2 n}}} &lt; \frac{1}{{2\log ^2 2}} + \sum\limits_{n = 3}^\infty {\int_{n - 1}^n {\frac{{dt}}{{t\log ^2 t}}} } \\ &amp; = \frac{1}{{2\log ^2 2}} + \int_2^{ + \infty } {\frac{{dt}}{{t\log ^2 t}}} = \frac{1}{{2\log ^2 2}} + \frac{1}{{\log 2}} &lt; + \infty \end{align*}</span> whence the series converges.</p>
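A numerical cross-check of this bound (my own sketch, not part of the answer): the partial sums indeed stay below the constant $\frac{1}{2\log^2 2}+\frac{1}{\log 2}\approx 2.483$.

```python
import math

bound = 1 / (2 * math.log(2) ** 2) + 1 / math.log(2)  # ~2.483

partial = 0.0
for n in range(2, 100_000):
    L = math.log(n)
    partial += 1 / (n * (L ** 3 + L))

print(partial, bound)  # increasing partial sums, bounded above
```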
2,168,906
<blockquote> <p>The task is to find necessary and sufficient condition on <span class="math-container">$b$</span> and <span class="math-container">$c$</span> for the equation <span class="math-container">$x^3-3b^2x+c=0$</span> to have three distinct real roots.</p> </blockquote> <p>Are there any formulas (such as <span class="math-container">$x_1x_2=c/a$</span> and <span class="math-container">$x_1+x_2=-b/a$</span> for roots in <span class="math-container">$ax^2+bx+c=0$</span>), but for equations of 3rd power?</p>
Mark Viola
218,419
<p>If $y=e^{x^x}$, then $\log(y)=x^x\ne x\log(e^x)=\log(e^{x^2})$</p> <p>Note that $e^{x^x}\ne (e^x)^x=e^{x^2}$.</p> <p>So, to differentiate $y$, we use $\log(\log(y))=x\log(x)$. Then, </p> <p>$$\frac{d\log(\log(y))}{dx}=\log(x)+1=\frac{1}{y\log(y)}\frac{dy}{dx}$$</p> <p>whence solving for $\frac{dy}{dx}$ and using $y=e^{x^x}$ and $\log(y)=x^x$ yields the coveted result.</p>
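Unwinding the last step gives $\frac{dy}{dx}=e^{x^x}\,x^x(\log(x)+1)$, which a central finite difference confirms (my own sketch; the sample point $x=1.5$ is arbitrary):

```python
import math

def y(x):
    return math.exp(x ** x)

def dy_formula(x):
    # dy/dx = y * log(y) * (log x + 1), with log(y) = x^x
    return math.exp(x ** x) * x ** x * (math.log(x) + 1.0)

x0, h = 1.5, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric, dy_formula(x0))  # the two values agree closely
```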
1,101,371
<p>Any book that I find on abstract algebra is somewhat advanced and not suitable for self-learning. I am a high-school student with high-school math knowledge. Could someone please tell me a book on abstract algebra that would be fine for me? Thanks a lot.</p>
Theodore Sternberg
207,475
<p>Try Pinter's <em>A Book of Abstract Algebra</em>. Over half the book is extended problems that you're gently led through as you sort of discover algebra on your own. There are nice applications to computer science, genetics and kinship networks too. You'll have a great time!</p> <p>Stahl's <em>Introductory Modern Algebra, a historical approach</em> is worth a look for the way it takes you very early on, after a minimum of pain, to a handwaving (but satisfying for me!) understanding of some of the classic results (quadrature of the circle, constructibility of regular polygons). He also covers an offbeat topic -- the cubic equation (i.e. the next thing after the quadratic equation) -- which is surprisingly interesting (even after you apply the formula, it can take some trickery to simplify your result). But I wouldn't embark on Stahl if I wanted to learn the standard results in the standard order; consider it supplementary reading.</p> <p>Herstein is way harder than either of these. If you're in high school and you can handle Herstein (like, you can work his medium-difficulty problems on your own), you should look for a pro mathematician to mentor you. (At that point you're probably one of the best students your high school teacher has ever seen, maybe he/she could help you find someone.)</p>
184,601
<p>A user in the chat asked how he could make something that caps when it reaches a specific value, like 20. The behavior would be as follows:</p> <p>$f(...)=...$</p> <p>$f(18)=18$</p> <p>$f(19)=19$</p> <p>$f(20)=20$</p> <p>$f(21)=20$</p> <p>$f(22)=20$</p> <p>$f(...)=20$</p> <p>He said he would like to compute it with a regular calculator. Is it possible to do this?</p>
celtschk
34,930
<p>While we are at fancy expressions, what about $$20-\lim_{n\to\infty}\frac1n\ln\left(1+\mathrm e^{n(20-x)}\right)$$</p>
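Numerically, this expression does tend to $\min(x,20)$ as $n$ grows (my own hedged sketch; the `log1p` rearrangement is only there to avoid overflow in the exponential when x < 20):

```python
import math

def capped(x, n=200.0):
    """Approximates min(x, 20) via 20 - (1/n) * log(1 + exp(n*(20 - x)))."""
    t = n * (20.0 - x)
    # Stable log(1 + e^t): for t > 0 write it as t + log(1 + e^(-t)).
    log1pexp = t + math.log1p(math.exp(-t)) if t > 0 else math.log1p(math.exp(t))
    return 20.0 - log1pexp / n

for x in (18, 19, 20, 21, 22):
    print(x, capped(x))
```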
245,464
<p>I only have one region plot and still want to get the legend (both marker and label). I tried the following, but why does the legend marker not show up?</p> <p><code>RegionPlot[x^2 &lt; y^3 + 1 &amp;&amp; y^2 &lt; x^3 + 1, {x, -2, 5}, {y, -2, 5}, PlotLegends -&gt; Placed[&quot;MyLegend&quot;, {0.15, 0.08}]]</code></p> <p><a href="https://i.stack.imgur.com/OQjmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OQjmj.png" alt="enter image description here" /></a></p> <hr /> <p>Edit: Thanks @kglr for pointing out that {&quot;MyLegend&quot;} works in the original simple example. This helped me realize that what I want to achieve is slightly different. Since I use mesh to highlight the region, I was trying to get the legend for the mesh. Please see the code below:</p> <p><code>RegionPlot[x^2 &lt; y^3 + 1 &amp;&amp; y^2 &lt; x^3 + 1, {x, -2, 5}, {y, -2, 5}, MeshFunctions -&gt; {#1 - #2 &amp;}, Mesh -&gt; 12, MeshStyle -&gt; {Hue[0.75], Opacity[0.3]}, PlotStyle -&gt; None, BoundaryStyle -&gt; None, PlotLegends -&gt; Placed[{&quot;MyLegend&quot;}, {0.15, 0.15}]]</code></p> <p><a href="https://i.stack.imgur.com/DVaw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DVaw4.png" alt="enter image description here" /></a></p>
kglr
125
<p>If the only reason for using <code>Mesh</code> is to get a hatch filling for the region, you can use <a href="https://reference.wolfram.com/language/ref/HatchFilling.html" rel="nofollow noreferrer"><code>HatchFilling</code></a>:</p> <pre><code>RegionPlot[x^2 &lt; y^3 + 1 &amp;&amp; y^2 &lt; x^3 + 1, {x, -2, 5}, {y, -2, 5}, PlotStyle -&gt; Directive[Hue @ .75, Opacity @ .3, HatchFilling[Pi/4, 0, 10]], BoundaryStyle -&gt; None, PlotLegends -&gt; Placed[SwatchLegend[{&quot;MyLegend&quot;}, LegendMarkerSize -&gt; 30], {0.2, 0.1}]] </code></pre> <p><a href="https://i.stack.imgur.com/Sy5k6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Sy5k6.png" alt="enter image description here" /></a></p> <p>If you <em>have to</em> use <code>Mesh</code> + <code>MeshStyle</code> you can do</p> <pre><code>legendmarker = RegionPlot[True, {x, 0, 1}, {y, 0, 1}, Frame -&gt; False, MeshFunctions -&gt; {#1 - #2 &amp;}, Mesh -&gt; 7, MeshStyle -&gt; Directive[Hue[0.75], Opacity[0.3]], PlotStyle -&gt; None, BoundaryStyle -&gt; None]; RegionPlot[x^2 &lt; y^3 + 1 &amp;&amp; y^2 &lt; x^3 + 1, {x, -2, 5}, {y, -2, 5}, MeshFunctions -&gt; {#1 - #2 &amp;}, Mesh -&gt; 12, MeshStyle -&gt; {Hue[0.75], Opacity[0.3]}, PlotStyle -&gt; None, BoundaryStyle -&gt; None, PlotLegends -&gt; Placed[SwatchLegend[ {&quot;MyLegend&quot;}, LegendMarkerSize -&gt; {20, 20}, LegendMarkers -&gt; {legendmarker}], {0.2, 0.075}]] </code></pre> <p><a href="https://i.stack.imgur.com/2MGul.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2MGul.png" alt="enter image description here" /></a></p>
2,666,772
<blockquote> <p>$W$ = $\begin{bmatrix} 1 &amp; 1 &amp; 1 &amp; 1 \\ 1 &amp; -1 &amp; 1 &amp; -1 \\ 1 &amp; 1 &amp; -1 &amp; -1 \\ 1 &amp; -1 &amp; -1 &amp; 1 \end{bmatrix}$. Use W to build an 8x8 matrix encoding an orthonormal basis in $R^8$ by scaling A = $\begin{bmatrix} W &amp; W \\ W &amp; -W \end{bmatrix}$ in the right way.</p> </blockquote> <p>Am I wrong here? Is it not just this matrix placed beside itself four times? Doesn't that work?</p>
user
505,767
<p>Note that $$W^T\cdot W=4I$$ then</p> <p>$$\begin{bmatrix}W&amp;0\\0&amp;W\end{bmatrix}^T\begin{bmatrix}W&amp;0\\0&amp;W\end{bmatrix}=4\begin{bmatrix}I&amp;0\\0&amp;I\end{bmatrix}$$</p> <p>then assume</p> <p>$$U=\begin{bmatrix} \frac1{\sqrt4} &amp; \frac1{\sqrt4} &amp; \frac1{\sqrt4} &amp; \frac1{\sqrt4} \\ \frac1{\sqrt4} &amp; -\frac1{\sqrt4} &amp; \frac1{\sqrt4} &amp; -\frac1{\sqrt4} \\ \frac1{\sqrt4} &amp; \frac1{\sqrt4} &amp; -\frac1{\sqrt4} &amp; -\frac1{\sqrt4} \\ \frac1{\sqrt4} &amp; -\frac1{\sqrt4} &amp; -\frac1{\sqrt4} &amp; \frac1{\sqrt4} \end{bmatrix}$$</p> <p>such that $$U^T\cdot U=I.$$ Then $$\frac1{\sqrt2}\begin{bmatrix}U&amp;U\\U&amp;-U\end{bmatrix}=\frac1{2\sqrt2}A$$ satisfies $$\left(\frac1{2\sqrt2}A\right)^T\cdot\left(\frac1{2\sqrt2}A\right)=\frac18 A^T\cdot A=I,$$ so scaling $A$ by $\frac1{2\sqrt2}$ gives the desired orthonormal basis of $R^8$.</p>
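A small pure-Python check (mine, not part of the answer) confirms that scaling $A$ by $\frac{1}{2\sqrt 2}=\frac{1}{\sqrt 8}$ makes its columns orthonormal, since $A^TA=8I$:

```python
from math import sqrt

W = [[1, 1, 1, 1],
     [1, -1, 1, -1],
     [1, 1, -1, -1],
     [1, -1, -1, 1]]

# Block matrix A = [[W, W], [W, -W]], then scale by 1/sqrt(8).
A = [row + row for row in W] + [row + [-v for v in row] for row in W]
s = 1.0 / sqrt(8.0)
Q = [[s * v for v in row] for row in A]

def gram(M):
    """Return M^T M for a square matrix M."""
    n = len(M)
    return [[sum(M[k][i] * M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

G = gram(Q)
ok = all(abs(G[i][j] - (1.0 if i == j else 0.0)) < 1e-12
         for i in range(8) for j in range(8))
print(ok)  # True: the columns of Q are orthonormal
```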
3,773,695
<p>I have been trying to get some upper bound on the coefficient of <span class="math-container">$x^k$</span> in the polynomial <span class="math-container">$$(1-x^2)^n (1-x)^{-m}, \text{ $m \le n$}.$$</span></p> <p>A straightforward calculation shows that for even <span class="math-container">$k$</span>, the coefficient can be expressed as <span class="math-container">$$\sum_{i=0}^{k/2} (-1)^i \binom{n}{i} (-1)^{k-2i} \binom{-m}{k-2i} = \sum_{i=0}^{k/2} (-1)^i\binom{n}{i} \binom{m+k-2i-1}{k-2i}$$</span> and therefore simply using <span class="math-container">$\binom{n}{k} \le n^{k}$</span>, one gets a bound of <span class="math-container">$$(k/2+1) (n+(m+k)^2)^{\frac{k}{2}} .$$</span></p> <p>I'm wondering if one could get a better bound, ideally with a better dependence on <span class="math-container">$k$</span>?</p>
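The displayed sum can be cross-checked against a direct convolution of the two power series (my hedged sketch; the closed form $\binom{m+j-1}{j}$ for the coefficients of $(1-x)^{-m}$ is the standard negative binomial series):

```python
from math import comb

def coeff_formula(k, n, m):
    """Sum_i (-1)^i C(n,i) C(m+k-2i-1, k-2i), as displayed in the question."""
    return sum((-1) ** i * comb(n, i) * comb(m + k - 2 * i - 1, k - 2 * i)
               for i in range(k // 2 + 1))

def coeff_direct(k, n, m):
    """Coefficient of x^k in (1-x^2)^n (1-x)^(-m) via explicit convolution."""
    # Expand (1 - x^2)^n by repeated polynomial multiplication.
    p = [1]
    for _ in range(n):
        q = [0] * (len(p) + 2)
        for d, c in enumerate(p):
            q[d] += c
            q[d + 2] -= c
        p = q
    # (1 - x)^(-m) has coefficients C(m+j-1, j).
    return sum(p[d] * comb(m + (k - d) - 1, k - d)
               for d in range(min(k, len(p) - 1) + 1))

for (k, n, m) in [(4, 5, 3), (8, 6, 2), (10, 10, 7)]:
    print((k, n, m), coeff_formula(k, n, m), coeff_direct(k, n, m))
```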
Brian Moehring
694,754
<p><strong>A short disclaimer/warning:</strong> I haven't studied these types of problems formally, so there may be a standard, simpler way to deal with this. This is just the method that occurred to me.</p> <p>Also, in the general proof, I abuse the symbol <span class="math-container">$y'$</span> a fair bit to mean the derivative on different curves, even in the same equation. Hopefully the context gives enough clues to what I mean, but if anyone knows a standard notation for this application, I'd appreciate comments to that effect.</p> <hr /> <p>For the moment, let's consider <span class="math-container">$y=x^3$</span> and <span class="math-container">$y=x^3-3x$</span> as you have done. Note that the range of the derivative of each function is of the form <span class="math-container">$[c_0,\infty)$</span> for different values of <span class="math-container">$c_0$</span> (<span class="math-container">$0$</span> and <span class="math-container">$-3$</span> respectively). This difference is what we want to capture, but we need to define a quantity that both makes sense and is independent of similarity.</p> <p>To do this, first we make the interval bounded by applying <span class="math-container">$\arctan$</span> so the range of <span class="math-container">$\arctan(y')$</span> is of the form <span class="math-container">$[c_1,\pi/2)$</span> for different values of <span class="math-container">$c_1=\arctan(c_0)$</span>. Then take [Lebesgue] measures, giving <span class="math-container">$$\lambda(\arctan(y'(\mathbb{R}))) = \frac{\pi}{2} - c_1.$$</span> This gives us the quantity we want.</p> <hr /> <p>More generally, we can consider any smooth curve <span class="math-container">$C \subset \mathbb{R}^2$</span> and let <span class="math-container">$[C]$</span> denote the equivalence class of all curves which are similar to <span class="math-container">$C$</span>. 
Then I claim that the function <span class="math-container">$$[C] \mapsto \lambda(\arctan(y'(C)))$$</span> is well-defined. To see this, it's enough to see how it behaves under translation, uniform scaling, reflection, and rotation.</p> <ul> <li>Translation and uniform scaling: Note that <span class="math-container">$y'(C)$</span> is unchanged under both, so obviously the above value is unchanged.</li> <li>The particular reflection <span class="math-container">$R_y(x,y) = (x,-y)$</span> satisfies <span class="math-container">$$\begin{align*}\lambda(\arctan(y'(R_yC))) &amp;= \lambda(\arctan(-y'(C))) \\ &amp;= \lambda(-\arctan(y'(C))) \\ &amp;= \lambda(\arctan(y'(C)))\end{align*}$$</span> and any other reflection can be written as <span class="math-container">$T\rho R_y \rho^{-1}T^{-1}$</span> where <span class="math-container">$T$</span> is a translation and <span class="math-container">$\rho$</span> is a rotation</li> <li>For <span class="math-container">$0 \leq \theta &lt; \pi$</span>, any counterclockwise rotation <span class="math-container">$\rho_{\theta}$</span> by <span class="math-container">$\theta$</span> about some point satisfies, for <span class="math-container">$(x,y) \in C$</span>, <span class="math-container">$$\arctan(y'(\rho_\theta(x,y))) = \begin{cases}\arctan(y'(x,y)) + \theta &amp; \text{ if } \arctan(y'(x,y)) + \theta \leq \frac{\pi}{2} \\ \arctan(y'(x,y)) + \theta - \pi &amp; \text{ otherwise}\end{cases}$$</span> Using this, we may show directly that <span class="math-container">$\lambda(\arctan(y'(\rho_\theta C))) = \lambda(\arctan(y'(C))).$</span></li> <li>For <span class="math-container">$\pi \leq \theta &lt; 2\pi$</span>, just note <span class="math-container">$\rho_\theta = (\rho_{\theta/2})^2$</span> and <span class="math-container">$0 \leq \theta/2 &lt;\pi$</span>.</li> </ul> <hr /> <p>Finally, applying the above to <span class="math-container">$C_A : y=x^3 + Ax,$</span> we find that <span 
class="math-container">$$\lambda(\arctan(y'(C_A))) = \frac{\pi}2 - \arctan(A)$$</span> so none of these curves are similar to one another.</p>
3,277,555
<p>For a math class I was given the assignment to make a game of chance. For my game, the person must roll 4 dice and get a 6, a 5, and a 4 in a row in 3 rolls or less to qualify; the remaining die must be over 3 for you to win. My question, though, is how can I find out the probability of rolling the 6, 5, and 4 in a single roll?</p> <p>My thought was <span class="math-container">$\frac{4}{24} + \frac{3}{15} + \frac{2}{8} = 0.61$</span></p> <p>Please tell me if this is correct or if I need to do it by another method.</p> <p>Thank you!</p>
drhab
75,923
<p>If I understand well then the question is: "if <span class="math-container">$4$</span> dice are thrown then what is the probability that <span class="math-container">$4$</span>, <span class="math-container">$5$</span> and <span class="math-container">$6$</span> are among the results?" (please correct me if I am wrong)</p> <hr> <p>Number the dice and let <span class="math-container">$E_i$</span> be the event that the numbers <span class="math-container">$6,5,4$</span> are rolled with the <span class="math-container">$3$</span> dice that have numbers in <span class="math-container">$\{1,2,3,4\}-\{i\}$</span>.</p> <p>If <span class="math-container">$E$</span> is the event described above then <span class="math-container">$E=\bigcup_{i=1}^4E_i$</span> and using <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion/exclusion</a> and symmetry we find:</p> <p><span class="math-container">$$P(E)=P\left(\bigcup_{i=1}^4E_i\right)=4P(E_1)-6P(E_1\cap E_2)=4\cdot3!\cdot6^{-3}-6\cdot3!\cdot6^{-4}=\boxed{\frac1{12}}$$</span></p> <p>Note that <span class="math-container">$E_1$</span> occurs if the outcome is e.g. <span class="math-container">$(x,4,6,5)$</span> where <span class="math-container">$x$</span> can take any of the <span class="math-container">$6$</span> values.</p> <p>Note that <span class="math-container">$E_1\cap E_2$</span> occurs if the outcome is e.g. <span class="math-container">$(5,5,4,6)$</span>.</p> <p>Note the <span class="math-container">$E_1\cap E_2\cap E_3$</span> and <span class="math-container">$E_1\cap E_2\cap E_3\cap E_4$</span> are both empty (i.e. cannot occur).</p>
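The value $\frac1{12}$ can be confirmed by brute-force enumeration of all $6^4=1296$ outcomes (a hedged sketch of mine, not part of the answer):

```python
from itertools import product
from fractions import Fraction

# Count 4-dice outcomes that contain at least one 4, one 5 and one 6.
hits = sum(1 for roll in product(range(1, 7), repeat=4)
           if {4, 5, 6} <= set(roll))
print(hits, Fraction(hits, 6**4))  # 108 outcomes out of 1296, i.e. 1/12
```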
1,110,543
<p>I am dealing with Galois theory at the moment and I came across an example in the lecture about which I have a question:</p> <p>Let $K=\mathbb Q$ and $L=\mathbb Q(\sqrt{2},\sqrt{3})\subset \mathbb C$. Let's consider the field extension $L/K$.</p> <p>We have $[L:K]=[L:K(\sqrt{2})]\cdot [K(\sqrt{2}):K]=4$.</p> <p><strong>Question 1:</strong> How can we argue that the minimal polynomial of $\sqrt{3}$ over $K(\sqrt{2})$ is still $x^2-3$? I.e., we have to show that $\sqrt{3}\notin K(\sqrt{2})$. It seems trivial, but I would like to see an argument.</p> <p>Because $L$ is a splitting field of $f(X)=(x^2-3)(x^2-2)$, the extension $L/K$ is a Galois extension.</p> <p><strong>Question 2:</strong> Then we concluded that $|G|=|Gal(L/K)|=4$, but why? Does it hold that $[L:K]=|G|$ in every case? The definition of $Gal(L/K)$ is $Gal(L/K)=Aut_K(L)$, but I thought that $K$-automorphisms are already determined by the images of all generators of the extension, hence $|G|$ must be $2$.</p> <p>The rest is clear to me. We noticed that there are only $2$ groups of order $4$, namely $\mathbb Z_4$ and $\mathbb Z_2 \times \mathbb Z_2$. And we saw that this group can't be cyclic, hence it must be $\mathbb Z_2 \times \mathbb Z_2$.</p> <p>I hope someone can answer my questions. Also, if someone knows a useful link on this topic I would be very glad about it.</p>
Andrea Mori
688
<p>(1) Suppose that $\sqrt{3}=a+b\sqrt{2}$ with $a,b\in\Bbb Q$. Taking squares $$ (a^2+2b^2)+2ab\sqrt{2}=3. $$ Can you derive a contradiction?</p> <p>(2) The cardinality of the Galois group of a Galois extension always coincides with the degree. What are the generators in this case, and what is the effect of the automorphisms on them?</p>
3,504,422
<blockquote> <p>Find: <span class="math-container">$$\displaystyle\lim_{x\to \infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln x}$$</span></p> </blockquote> <p>My attempt:</p> <p><span class="math-container">$\displaystyle\lim_{x\to\infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\lim_{x\to\infty}\left(1+\frac{\ln(x^2+3x+4)-\ln(x^2+2x+3)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\\\displaystyle\lim_{x\to\infty}\left(1+\frac{\ln\left(\frac{x^2+3x+4}{x^2+2x+3}\right)}{\ln(x^2+2x+3)}\right)^{x\ln x}=\lim_{x\to\infty}\left(1+\frac{\ln\left(1+\frac{x+1}{x^2+2x+3}\right)}{\ln(x^2+2x+3)}\right)^{x\ln x}$</span></p> <p>What I used: <span class="math-container">$\displaystyle\lim_{x\to\infty}\ln\left(1+\frac{x+1}{x^2+2x+3}\right)=0\;\;\&amp;\;\;\lim_{x\to\infty}\ln(x^2+2x+3)=+\infty$</span></p> <p>In the end, I got an indeterminate form: <span class="math-container">$\displaystyle\lim_{x\to\infty}1^{x\ln x}=1^{\infty}$</span></p> <p>Have I made a mistake anywhere? It seems suspicious.</p> <p>Added: replacing <span class="math-container">$\frac{x+1}{x^2+2x+3}$</span> with <span class="math-container">$\frac{1}{x}$</span> wasn't appealing either.</p> <p>Would: <span class="math-container">$$\lim_{x\to\infty}\Big(\Big(1+\frac{1}{x}\Big)^x\Big)^{\ln x}=x=\infty$$</span> be wrong?</p> <p>// A few days after users had provided hints and answered the question, we discussed this with our assistant and he suggested a table formula that can also be applied (essentially the last step in the methods provided in the answers I received): if <span class="math-container">$$\lim_{x\to c}f(x)=1\;\&amp;\;\lim_{x\to c}g(x)=\pm\infty$$</span> then <span class="math-container">$$\lim_{x\to c}f(x)^{g(x)}=e^{\lim_{x\to c}(f(x)-1)g(x)}$$</span> //</p>
Michael Rozenberg
190,319
<p>By your work and since <span class="math-container">$e^x$</span> and <span class="math-container">$\ln$</span> are continuous function, we obtain: <span class="math-container">$$\lim_{x\rightarrow+\infty}\left(\frac{\ln(x^2+3x+4)}{\ln(x^2+2x+3)}\right)^{x\ln{x}}=\lim_{x\rightarrow+\infty}\left(1+\frac{\ln\frac{x^2+3x+4}{x^2+2x+3}}{\ln(x^2+2x+3)}\right)^{\frac{\ln(x^2+2x+3)}{\ln\frac{x^2+3x+4}{x^2+2x+3}}\cdot\frac{x\ln{x}\ln\frac{x^2+3x+4}{x^2+2x+3}}{\ln(x^2+2x+3)}}=$$</span> <span class="math-container">$$=e^{\lim\limits_{x\rightarrow+\infty}\frac{\ln{x}\ln\left(1+\frac{x+1}{x^2+2x+3}\right)^x}{2\ln{x}+\ln\left(1+\frac{2}{x}+\frac{3}{x^2}\right)}}=e^{\frac{1}{2}\ln\lim\limits_{x\rightarrow+\infty}\left(1+\frac{x+1}{x^2+2x+3}\right)^{\frac{x^2+2x+3}{x+1}\cdot\frac{x(x+1)}{x^2+2x+3}}}=e^{\frac{1}{2}\ln{e}}=\sqrt{e}.$$</span></p>
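A floating-point evaluation at a large argument is consistent with the value $\sqrt e\approx 1.6487$ (my own sketch, not part of the answer; convergence is only about $O(1/x)$, so the agreement is approximate):

```python
import math

def f(x):
    num = math.log(x * x + 3 * x + 4)
    den = math.log(x * x + 2 * x + 3)
    return (num / den) ** (x * math.log(x))

print(f(1e5), math.sqrt(math.e))  # close to sqrt(e)
```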
441,962
<p>I am looking for a proof of this theorem but I have not found one yet.</p> <p>A link or even a sketch of how it goes will be very appreciated.</p> <p>A linear map is self-adjoint</p> <p>iff</p> <p>the matrix representation with respect to an orthonormal basis is self-adjoint.</p> <p>By the way, isn't that true for every self-adjoint matrix, and not just for matrix representations with respect to an orthonormal basis?</p> <p>Thanks in advance!!</p>
Hagen von Eitzen
39,174
<p>The adjoint $A^*$ of a linear map $A$ on a space with scalar product is determined by the property $\langle Ax,y\rangle = \langle x, A^*y\rangle$ for all $x,y$. If we let $x,y$ run through the base vectors of an ON basis, then $\langle Ae_i,e_j\rangle$ is the $i,j$ entry of the matrix for $A$ and $\langle e_i,A^*e_j\rangle$ is the $j,i$ entry of the matrix for $A^*$. So if $A=A^*$ then the corresponding matrix (with respect to the ON basis considered) is self adjoint</p>
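For a real inner-product space with the standard basis this is easy to sanity-check numerically: the transpose plays the role of the adjoint (my hedged sketch, not part of the answer):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

random.seed(1)
n = 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]

# <Ax, y> = <x, A^T y>: the transpose is the matrix of the adjoint.
lhs = dot(matvec(A, x), y)
rhs = dot(x, matvec(transpose(A), y))
print(abs(lhs - rhs))  # ~0 up to rounding
```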
979,144
<p>I am searching for a formula for the sum of binomial coefficients $^{n}C_{k}$ where $k$ is fixed but $n$ varies over a given range. Does any such formula exist?</p>
robjohn
13,854
<p>As shown in <a href="https://math.stackexchange.com/a/399213">this answer</a>, we have the formula $$ \sum_{j=k}^{n-m}\binom{j}{k}\binom{n-j}{m}=\binom{n+1}{k+m+1} $$ If we set $m=0$, we get $$ \sum_{j=k}^n\binom{j}{k}=\binom{n+1}{k+1} $$</p>
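Both identities are easy to verify exhaustively for small parameters (my own sketch):

```python
from math import comb

def hockey_stick(n, k):
    return sum(comb(j, k) for j in range(k, n + 1))

def vandermonde_like(n, k, m):
    return sum(comb(j, k) * comb(n - j, m) for j in range(k, n - m + 1))

for n in range(0, 16):
    for k in range(0, n + 1):
        assert hockey_stick(n, k) == comb(n + 1, k + 1)
        for m in range(0, n - k + 1):
            assert vandermonde_like(n, k, m) == comb(n + 1, k + m + 1)
print("both identities hold for all n < 16")
```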
3,702,649
<p>It's obvious that it's symmetric because <span class="math-container">$a_{\left(i+1\right)j}=\left(m+1\right)\left(i+1+j\right) = a_{i\left(j+1\right)}=\left(m+1\right)\left(i+j+1\right)$</span>, but how can I prove that it's a Latin square and that its diagonal consists of different elements?</p> <p>I thought about showing that the sum of the elements in any row/column is equal to $0+1+...+(n-1)$, but it's not working out:</p> <p><span class="math-container">$\left(m+1\right)\left[\left(i+j\right)+\left(i+j+1\right)+...+\left(i+j+n-1\right)\right] \pmod{n}$</span></p> <p><span class="math-container">$\left(m+1\right)\left[n\left(i+j\right)+\left(1\right)+...+\left(n-1\right)\right] \pmod{n}$</span></p> <p><span class="math-container">$\left(m+1\right)\left[n\left(i+j\right)+\frac{n\left(n-1\right)}{2}\right] \pmod{n}$</span></p> <p><span class="math-container">$\left(m+1\right) \left[0\right] \pmod{n}$</span></p> <p>This gives me $0$ or $kn$ where $k$ is an integer, so summing only tells us something about any one row/column (I get the same for the other rows also).</p>
Rebecca J. Stones
91,818
<p>It's true if and only if <span class="math-container">$m+1$</span> and <span class="math-container">$n$</span> are coprime. In particular, the cells <span class="math-container">$(i,j)$</span> and <span class="math-container">$(i,j+n/\gcd(m+1,n))$</span> are in the same row and contain the same symbol. However, they are the same cell if and only if <span class="math-container">$m+1$</span> and <span class="math-container">$n$</span> are coprime, i.e., provided <span class="math-container">$\gcd(m+1,n)=1$</span>.</p> <p>As you say, it's obvious that it's symmetric, since <span class="math-container">$a_{ij} = a_{ji}$</span> by commutativity of + (i.e., we know <span class="math-container">$i+j = j+i$</span>). However, checking <span class="math-container">$a_{(i+1)j} = a_{i(j+1)}$</span> does not show it's symmetric (it shows some other property, not symmetry). We need only show the rows are Latin, i.e., there are no repeated symbols in any row (the columns are the same but transposed, so we don't need to re-check them). I don't think summing the rows is going to lead to a proof: it won't show there are no duplicate symbols in each row.</p> <p>To prove the rows are Latin, suppose <span class="math-container">$a_{ij} = a_{ij'}$</span> for columns <span class="math-container">$j$</span> and <span class="math-container">$j'$</span>. 
Then we deduce from the definition of <span class="math-container">$a_{ij}$</span> and <span class="math-container">$a_{ij'}$</span> that <span class="math-container">$$j(m+1) \equiv j'(m+1) \pmod n.$$</span></p> <p>If <span class="math-container">$m+1$</span> and <span class="math-container">$n$</span> are coprime, then we can &quot;cancel out&quot; the <span class="math-container">$m+1$</span> (or more formally, multiply by the <a href="https://en.wikipedia.org/wiki/Modular_multiplicative_inverse" rel="nofollow noreferrer">multiplicative inverse</a> of <span class="math-container">$m+1$</span> on both sides) to obtain <span class="math-container">$j \equiv j' \pmod n$</span> which implies <span class="math-container">$a_{ij}$</span> and <span class="math-container">$a_{ij'}$</span> refer to the same cell, and thus there are no duplicate symbols in the arbitrary row <span class="math-container">$i$</span>.</p>
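The coprimality criterion can also be verified by brute force for small $n$ (my own sketch, with $a_{ij}=(m+1)(i+j)\bmod n$ as in the question):

```python
from math import gcd

def is_latin(n, m):
    """Check whether a_ij = (m+1)(i+j) mod n is a Latin square of order n."""
    rows = [[(m + 1) * (i + j) % n for j in range(n)] for i in range(n)]
    symbols = set(range(n))
    return (all(set(row) == symbols for row in rows)
            and all(set(col) == symbols for col in zip(*rows)))

for n in range(2, 12):
    for m in range(n):
        assert is_latin(n, m) == (gcd(m + 1, n) == 1)
print("Latin square iff gcd(m+1, n) == 1, verified for n < 12")
```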
148,032
<p>What is the larger of the two numbers?</p> <p>$$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$ I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks!</p>
Robert Mastragostino
28,869
<p>$$\sqrt2^{\sqrt 3}&lt;^?\sqrt3^{\sqrt 2}$$ Raise both sides to the power $2\sqrt 2$, and get an equivalent problem: $$2^{\sqrt 6}&lt;^?9$$ Since $\sqrt 6&lt;3$, we have: $$2^{\sqrt 6}&lt; 2^3 = 8 &lt;9$$ So ${\sqrt 2}^{\sqrt 3}$ is smaller than $\sqrt3^{\sqrt 2}$.</p>
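A quick floating-point sanity check of both the conclusion and the key intermediate inequality:

```python
import math

lhs = math.sqrt(2) ** math.sqrt(3)   # ~ 1.82
rhs = math.sqrt(3) ** math.sqrt(2)   # ~ 2.17
assert lhs < rhs

# the inequality obtained after raising both sides to the power 2*sqrt(2)
assert 2 ** math.sqrt(6) < 8 < 9
```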
1,243,661
<p>Let $\Theta$ be an unknown random variable with mean $1$ and variance $2$. Let $W$ be another unknown random variable with mean $3$ and variance $5$. $\Theta$ and $W$ are independent.</p> <p>Let: $X_1=\Theta+W$ and $X_2=2\Theta+3W$. We pick measurement $X$ at random, each having probability $\frac{1}{2}$ of being chosen. This choice is independent of everything else.</p> <p>How does one calculate $Var(X)$ in this case? Is </p> <p>$$ Var(X)\;\; = \;\; \frac{1}{2}(Var(\Theta)+Var(W))+\frac{1}{2}(Var(2\Theta)+Var(3W)) \;\; =\;\; \frac{1}{2}(5Var(\Theta)+10Var(W))? $$</p>
BruceET
221,800
<p>First, $Var(X_2) = Var(2\Theta + 3W) = 4Var(\Theta) + 9Var(W)$ and similarly for $Var(X_1)$. However, you cannot average the two variances to get $Var(X)$. This does not take account of the randomness of the coin toss to decide whether to pick $X_1$ or $X_2.$ [And even if you deterministically chose each $X_i$ alternately, you still could not average the variances because they are, in general, centered on different means.]</p> <p>Here is a simulation that 'performs' a similar experiment a million times to show you cannot just average the variances of $X_1$ and $X_2$ to get the variance of your $X.$ I used $X_1$ normal with mean 100 and variance 100, and $X_2$ normal with mean 70 and variance 25. (You can, however, average the means.)</p> <pre><code> x1 = rnorm(10^6, 100, 10) x2 = rnorm(10^6, 70, 5) b = rbinom(10^6, 1, .5) # coin tosses x = b*x1 + (1-b)*x2 cbind(x1, x2, x)[1:10,] # first 10 results x1 x2 x [1,] 100.67966 70.06460 100.67966 # chose 1st [2,] 80.60085 70.76861 70.76861 # chose 2nd [3,] 80.97220 69.87782 69.87782 # chose 2nd [4,] 99.28911 72.56239 72.56239 # etc... [5,] 78.18894 65.62820 65.62820 [6,] 107.14613 77.80526 77.80526 [7,] 94.79299 76.17693 94.79299 [8,] 95.45661 66.13161 66.13161 [9,] 107.01943 71.48198 71.48198 [10,] 117.05664 72.13655 117.05664 var(x1); var(x2); var(x) ## 99.89692 # Exact is V(X1) = 100 ## 25.02683 # Exact is V(X2) = 25 ## 287.2787 # Approx result using other "Answers" mean(x1); mean(x2); mean(x) ## 99.9913 # Exact is E(X1) = 100 ## 69.99634 # Exact is E(X2) = 70 ## 84.99333 # Exact E(X) IS avg of 100 and 70 </code></pre>
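The exact value in the simulated example follows from the law of total variance: $Var(X) = E[Var(X\mid B)] + Var(E[X\mid B]) = (100+25)/2 + ((100-85)^2+(70-85)^2)/2 = 62.5 + 225 = 287.5$, matching the simulated $287.28$ above. A Python analogue of that simulation (means and variances taken from the answer's example):

```python
import random

# mixture of N(100, 100) and N(70, 25), chosen by a fair coin toss
random.seed(0)
xs = [random.gauss(100, 10) if random.random() < 0.5 else random.gauss(70, 5)
      for _ in range(500_000)]
m = sum(xs) / len(xs)
var = sum((x - m) ** 2 for x in xs) / len(xs)

# law of total variance: E[Var(X|B)] + Var(E[X|B]) = 62.5 + 225 = 287.5
exact = (100 + 25) / 2 + ((100 - 85) ** 2 + (70 - 85) ** 2) / 2
assert abs(var - exact) < 5
```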
596,671
<blockquote> <p>$$\sum^{\infty}_{n=1}\frac{(-1)^n}{n^a\ln n}$$ $$a&gt;0$$</p> <p>Does the series converge/converge absolutely/diverge ?</p> </blockquote> <p>I tried to divide to cases and factor the series:</p> <p>$\sum^{\infty}_{n=1}\frac{(-1)^n}{n^a\ln n}=\sum\frac{(-1)^n}{n^a}\sum\frac{1}{\ln n}$</p> <p>for $a \le 1$ the series $\sum\frac{(-1)^n}{n^a}$ converges (from Leibniz), and doesn't converge absolutely but $\sum\frac{1}{\ln n} $ diverges. So I guess factoring the series into two is not how to solve this. </p> <p>Now trying the condensation test: </p> <p>$\sum^{\infty}_{n=1}\frac{2^n(-1)^{2^n}}{{2^n}^a\ln 2^n} \Rightarrow \frac{2^{n(1-a)}}{\ln 2^n}$</p> <p>If $a \ge 1$ it goes to infinity. </p> <p>If $a \le 1$ it also goes to infinity. </p> <p>I think I'm doing somthing wrong. </p> <p>Any advice ?</p>
Community
-1
<p>Firstly note that your series should be $$\sum_{n=2}^{\infty}\frac{(-1)^n}{n^a \ln n}$$ since the logarithm function becomes zero at $n=1$. Let us consider the absolute valued series first,</p> <p>$$\sum_{n=2}^{\infty}\frac{1}{n^a \ln n}$$</p> <p>Note that, </p> <p>$$\frac{1}{n^a \ln n}&lt;\frac{1}{n^a}\mbox{ for }n\geq 3$$</p> <p>Hence by the direct comparison test we have $\sum_{n=2}^{\infty}\frac{1}{n^a \ln n}$ converges when $a&gt;1$. That is $\sum_{n=2}^{\infty}\frac{(-1)^n}{n^a \ln n}$ absolutely converges when $a&gt;1$.</p> <p>It remains to show the case $a\leq 1$. I still haven't come up with a method for that. :) </p>
596,671
<blockquote> <p>$$\sum^{\infty}_{n=1}\frac{(-1)^n}{n^a\ln n}$$ $$a&gt;0$$</p> <p>Does the series converge/converge absolutely/diverge ?</p> </blockquote> <p>I tried to divide to cases and factor the series:</p> <p>$\sum^{\infty}_{n=1}\frac{(-1)^n}{n^a\ln n}=\sum\frac{(-1)^n}{n^a}\sum\frac{1}{\ln n}$</p> <p>for $a \le 1$ the series $\sum\frac{(-1)^n}{n^a}$ converges (from Leibniz), and doesn't converge absolutely but $\sum\frac{1}{\ln n} $ diverges. So I guess factoring the series into two is not how to solve this. </p> <p>Now trying the condensation test: </p> <p>$\sum^{\infty}_{n=1}\frac{2^n(-1)^{2^n}}{{2^n}^a\ln 2^n} \Rightarrow \frac{2^{n(1-a)}}{\ln 2^n}$</p> <p>If $a \ge 1$ it goes to infinity. </p> <p>If $a \le 1$ it also goes to infinity. </p> <p>I think I'm doing somthing wrong. </p> <p>Any advice ?</p>
robjohn
13,854
<p>The <a href="http://en.wikipedia.org/wiki/Alternating_series_test" rel="nofollow">alternating series test</a> says that $$ \sum_{n=2}^\infty\frac{(-1)^n}{n^a\log(n)} $$ converges conditionally since $\frac1{n^a\log(n)}$ monotonically converges to $0$.</p> <p>For absolute convergence, by comparison to $$ \sum_{n=3}^\infty\frac1{n^a} $$ the series converges absolutely for $a\gt1$.</p> <p>For $a=1$, the <a href="http://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">integral test</a> shows that the series diverges absolutely since $$ \int_2^M\frac1{x\log(x)}\,\mathrm{d}x=\log(\log(M))-\log(\log(2)) $$ diverges as $M\to\infty$.</p> <p>If you can't use the integral test, you can use the <a href="http://en.wikipedia.org/wiki/Cauchy_condensation_test" rel="nofollow">condensation test</a> for $a=1$: $$ \sum_{n=1}^\infty2^n\frac1{2^n\log(2^n)}=\frac1{\log(2)}\sum_{n=1}^\infty\frac1n $$ diverges since the <a href="http://en.wikipedia.org/wiki/Harmonic_series_%28mathematics%29" rel="nofollow">harmonic series</a> diverges.</p> <p>Since the series diverges for $a=1$, the comparison test with $$ \sum_{n=2}^\infty\frac1{n\log(n)} $$ shows that the series diverges absolutely for $a&lt;1$.</p>
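For the borderline case $a=1$, a numeric illustration of the integral-test estimate: the partial sums are unbounded but grow only like $\log\log M$, which is why the divergence is so slow.

```python
import math

# partial sums of 1/(n log n) for a = 1
def partial(M):
    return sum(1 / (n * math.log(n)) for n in range(2, M + 1))

for M in (10**3, 10**4, 10**5):
    s = partial(M)
    # unbounded, but only doubly-logarithmic growth:
    # log(log(M)) < s < log(log(M)) + 1.2 by integral comparison
    assert math.log(math.log(M)) < s < math.log(math.log(M)) + 1.2
```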
1,567,229
<p>Let Y1 and Y2 have the joint probability density function given by:</p> <p>$ f (y_1, y_2) = 6(1−y_2), \text{for } 0≤y_1 ≤y_2 ≤1$</p> <p>Find $P(Y_1≤3/4,Y_2≥1/2).$</p> <p>Answer:</p> <p>$$\int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 + \int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = 7/64 + 24/64 = 31/64 $$</p> <p>My problem is I don't understand the logic for the second part. We know for a fact that:</p> <p>$$P(a_1 ≤ Y_1 ≤ a_2, b_1 ≤ Y_2 ≤ b_2) = \int^{b_2}_{b_1}\int^{a_2}_{a_1}6(1− y_2 )dy_1dy_2$$</p> <p>Thus, I can make the following equations</p> <p>$$ \int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 3/4 | Y_1 \leq Y_2) $$</p> <p>This part I understand the logic of why we would add this integral to the expression</p> <p>$$\int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$$. </p> <p>I don't understand the logic of how adding both expressions leads to $P(Y_1≤3/4,Y_2≥1/2).$. In fact, I don't see the logic of choosing $P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$.</p>
Alex
38,873
<p>An 8-digit number can't start with 0. Hence: choose 3 out of 7 positions for the 0s, then 3 out of 5 for the 2s, and multiply by 2!</p>
1,567,229
<p>Let Y1 and Y2 have the joint probability density function given by:</p> <p>$ f (y_1, y_2) = 6(1−y_2), \text{for } 0≤y_1 ≤y_2 ≤1$</p> <p>Find $P(Y_1≤3/4,Y_2≥1/2).$</p> <p>Answer:</p> <p>$$\int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 + \int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = 7/64 + 24/64 = 31/64 $$</p> <p>My problem is I don't understand the logic for the second part. We know for a fact that:</p> <p>$$P(a_1 ≤ Y_1 ≤ a_2, b_1 ≤ Y_2 ≤ b_2) = \int^{b_2}_{b_1}\int^{a_2}_{a_1}6(1− y_2 )dy_1dy_2$$</p> <p>Thus, I can make the following equations</p> <p>$$ \int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 3/4 | Y_1 \leq Y_2) $$</p> <p>This part I understand the logic of why we would add this integral to the expression</p> <p>$$\int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$$. </p> <p>I don't understand the logic of how adding both expressions leads to $P(Y_1≤3/4,Y_2≥1/2).$. In fact, I don't see the logic of choosing $P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$.</p>
true blue anil
22,388
<p><em>A simple way</em></p> <p>You computed the answer as $1120$.<br> Just multiply it by $\dfrac58$, the probability that it starts with a non-zero digit.</p> <p>The answer it yields (which others have also got) is $700$;</p> <p>the options are definitely wrong.</p>
362,854
<blockquote> <p>Show that every subgroup of $Q_8$ is normal.</p> </blockquote> <p>Is there any sophisticated way to do this ? I mean without needing to calculate everything out.</p>
Geoff Robinson
13,147
<p>It depends what you call sophisticated. There is only one subgroup of order $8,$ one subgroup of order $2$ and one subgroup of order $1$ in $Q_{8},$ so each of those is normal. In any finite $p$-group, every maximal subgroup is normal, so each subgroup of order $4$ of $Q_{8}$ is normal. This is not really substantially different from Cameron Buie's more explicit answer.</p>
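For a decidedly unsophisticated complement to the argument above, the claim can be verified by brute force. A sketch modeling $Q_8$ as the eight unit quaternions $\{\pm1,\pm i,\pm j,\pm k\}$ under the Hamilton product (all helper names are mine):

```python
from itertools import product

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(a):
    # the inverse of a unit quaternion is its conjugate
    w, x, y, z = a
    return (w, -x, -y, -z)

# Q8 = {±1, ±i, ±j, ±k}
Q8 = [tuple(s if pos == k else 0 for pos in range(4))
      for k in range(4) for s in (1, -1)]

def closure(gens):
    H = set(gens) | {(1, 0, 0, 0)}
    while True:
        new = H | {qmul(a, b) for a in H for b in H}
        if new == H:
            return frozenset(H)
        H = new

# every subgroup of Q8 is generated by at most two elements
subgroups = {closure(pair) for pair in product(Q8, repeat=2)}
assert sorted(len(H) for H in subgroups) == [1, 2, 4, 4, 4, 8]
for H in subgroups:
    for g in Q8:
        assert {qmul(qmul(g, h), qinv(g)) for h in H} == H   # gHg^{-1} = H
```

This confirms both counts used above (one subgroup each of orders 1, 2, 8, and three of order 4) and the normality of each.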
221,729
<p>Till now, I have proved followings;</p> <p>Suppose $X,Y$ are metric spaces and $E$ is dense in $X$ and $f:E\rightarrow Y$ is uniformly continuous. Then,</p> <ol> <li><p>$Y=\mathbb{R}^k \Rightarrow \exists$ a continuous extension.</p></li> <li><p>$Y$ is compact $\Rightarrow \exists$ a continuous extension.</p></li> <li><p>$Y$ is complete $\Rightarrow \exists$ a continuous extension. (AC$_\omega$)</p></li> <li><p>$E$ is countable &amp; $Y$ is complete $\Rightarrow \exists$ a continuous extension.</p></li> </ol> <p>What are true and what are false if $f$ is replaced by a 'continuous function', not uniformly?</p>
Tsz Chung Ho
136,092
<p>The definition of the derivative can be expressed using asymptotic notation.</p> <p>We say f has a derivative at x if there exists M such that:</p> <p><span class="math-container">$$f(x+\epsilon) = f(x) + M\epsilon + o(\epsilon)$$</span></p> <p>We denote this M as f '(x)</p> <p>(edited as per Antonio's correction)</p>
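A small numeric illustration of the definition, with $f(x)=x^2$ at $x=3$ (so $M=6$): the remainder after subtracting $f(x)+M\epsilon$ is exactly $\epsilon^2$, and the ratio remainder$/\epsilon$ shrinks to $0$, which is what $o(\epsilon)$ means.

```python
# f(x+e) - f(x) - M*e should be o(e): dividing by e gives something -> 0
f = lambda t: t * t
x, M = 3.0, 6.0
ratios = [abs(f(x + e) - f(x) - M * e) / e for e in (1e-1, 1e-2, 1e-3)]
assert ratios[0] > ratios[1] > ratios[2]   # shrinking toward 0
assert ratios[2] < 1e-2
```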
546,701
<p>Find the number of positive integers $$n &lt;9,999,999 $$ for which the sum of the digits in n equals 42.</p> <p>Can anyone give me any hints on how to solve this?</p>
Brian M. Scott
12,042
<p>Let the digits be $d_1,d_2,d_3,d_4,d_5,d_6$, and $d_7$, where we allow leading zeroes so as to make each number in the specified interval a seven-digit integer. You’re looking for all solutions in non-negative integers to </p> <p>$$d_1+d_2+d_3+d_4+d_5+d_6+d_7=42\;,$$</p> <p>with the restriction that $d_k\le 9$ for $k=1,\ldots,7$. Without the restriction this is a standard <a href="http://en.wikipedia.org/wiki/Stars_and_bars_%28probability%29" rel="nofollow noreferrer">stars-and-bars problem</a>, whose solution is </p> <p>$$\binom{42+7-1}{7-1}=\binom{48}6\;.\tag{1}$$</p> <p>Both the formula that I used here and a pretty decent explanation/derivation of it can be found at the link. However, $(1)$ includes unwanted solutions in which one or more of the $d_k$ exceeds $9$. To remove these, you can use an <a href="http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion-exclusion argument</a>. <a href="https://math.stackexchange.com/a/203839/12042">This answer</a> shows such an argument in some detail in a smaller problem of this kind.</p>
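Carrying out the stars-and-bars count together with the inclusion-exclusion correction in code (the function name is mine); the $i$-th term forces $i$ chosen digits to be at least $10$:

```python
from math import comb

def count_digit_sum(num_digits, target):
    # solutions of d_1 + ... + d_k = target with 0 <= d_i <= 9:
    # stars and bars, corrected by inclusion-exclusion over the
    # digits forced to exceed 9
    total = 0
    for i in range(num_digits + 1):
        rem = target - 10 * i
        if rem < 0:
            break
        total += (-1) ** i * comb(num_digits, i) * comb(rem + num_digits - 1,
                                                        num_digits - 1)
    return total

print(count_digit_sum(7, 42))  # 209525

# sanity check against brute force: 3-digit strings with digit sum 12
brute = sum(1 for n in range(1000) if sum(map(int, f"{n:03d}")) == 12)
assert brute == count_digit_sum(3, 12)
```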
1,033,208
<p>What do square brackets mean next to sets? Like $\mathbb{Z}[\sqrt{-5}]$, for instance. I'm starting to assume it depends on context because google is of no use.</p>
annimal
157,326
<p>It's the polynomial ring in the bracketed element with coefficients in $\mathbf{Z}$. For example, $\mathbf{Z}[x]$ is the ring of polynomials in $x$ with coefficients in $\mathbf{Z}$, like $x^3 + 2x^2 + 3$; for $\mathbf{Z}[\sqrt{-5}]$, just replace $x$ by $\sqrt{-5}$. The only difference is that $(\sqrt{-5})^2 = -5$, which is in $\mathbf{Z}$, whereas $x$ is independent of $\mathbf{Z}$.<br/> You can also find a more formal definition in Lang's Algebra book (in the chapter on rings, I think).</p>
3,276,332
<p>I'm studying for my exam in discrete mathematics and found the following problem on last years exam:</p> <p>Find a closed formula without using induction for <span class="math-container">$\sum_{k=0}^n k^3$</span>.</p> <p>I tried it by finding the Generating Function first:</p> <p><span class="math-container">$F(x) = F_0 + \sum_{k=1}^nF_nx^n = \sum_{k=1}^n (F_{n-1}+n^3)x^n = \sum_{k=1}^n F_{n-1}x^n + n^3x^n = \sum_{k=0}^n F_n x^{n+1} + \sum_{k=1}^n n^3x^n = xF(x) + \sum_{k=1}^nn^3x^n$</span></p> <p>The problem seems to be, that I lack an actual recursive definition of <span class="math-container">$\sum_{k=0}^n k^3$</span> which is, as far as I know, needed to find a generating function. Above, I pretty much used, that <span class="math-container">$X_n = X_{n-1}+n^3$</span>, but obviously, that<span class="math-container">`</span>s not enough. Because a recursive definition was always given in our lectures, I don't now other possibilities to solve this, except for finding the Generating Function with help of recursive definitions.</p>
Acccumulation
476,070
<p>There are several methods. There's the linear algebra method of finding difference formulas for <span class="math-container">$k^m$</span>, treating a difference formula for a particular <span class="math-container">$m$</span> as a vector, then finding <span class="math-container">$\sum k^m$</span> in terms of those vectors. For instance, for <span class="math-container">$m=1$</span>, we have <span class="math-container">$k^m-(k-1)^m = 1$</span>. For <span class="math-container">$m=2$</span>, we have <span class="math-container">$k^m-(k-1)^m = 2k-1$</span>. If we call those <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span> respectively, we have <span class="math-container">$k= \frac {v_2+v_1}2$</span>. Thus, to find <span class="math-container">$\sum k$</span>, we can take <span class="math-container">$\sum \frac {v_2+v_1}2$</span>. Since <span class="math-container">$v_2$</span> is, by definition, the difference formula for <span class="math-container">$k^m$</span>, <span class="math-container">$\sum v_2$</span> is just <span class="math-container">$n^2$</span>. That is, <span class="math-container">$\sum_{k=0}^n v_2 = n^2$</span>. Similarly <span class="math-container">$\sum v_1 = n$</span>. So <span class="math-container">$\sum k = \frac {n^2+n}2$</span>. A similar, albeit more tedious, method works for finding <span class="math-container">$\sum k^3$</span>: find <span class="math-container">$c_m$</span> such that <span class="math-container">$k^3=\sum_{m=1}^4c_mv_m$</span>, and <span class="math-container">$\sum k^3$</span> will be <span class="math-container">$\sum_{m=1}^4c_m n^m$</span></p>
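A sketch of this method in code, using exact rational arithmetic: expand each $v_m = k^m-(k-1)^m$, solve for the $c_m$ by back-substitution (the system is triangular since $\deg v_m = m-1$), and use the telescoping identity $\sum_{k=1}^n v_m(k) = n^m$. Variable names are mine.

```python
from fractions import Fraction
from math import comb

def v_coeffs(m):
    # v_m(k) = k^m - (k-1)^m, as coefficients of k^0 .. k^{m-1}
    return [Fraction(-comb(m, j) * (-1) ** (m - j)) for j in range(m)]

# solve k^3 = sum_{m=1}^{4} c_m v_m(k) by back-substitution
target = [Fraction(0), Fraction(0), Fraction(0), Fraction(1)]  # k^3
c = [Fraction(0)] * 5
for m in range(4, 0, -1):
    vm = v_coeffs(m)
    c[m] = target[m - 1] / vm[m - 1]     # match the leading coefficient
    for j in range(m):
        target[j] -= c[m] * vm[j]        # subtract c_m * v_m

# sum_{k=1}^{n} v_m(k) telescopes to n^m, so sum k^3 = sum_m c_m n^m
n = 10
closed = sum(c[m] * n ** m for m in range(1, 5))
assert closed == sum(k ** 3 for k in range(1, n + 1))          # both 3025
print([str(c[m]) for m in range(1, 5)])  # ['0', '1/4', '1/2', '1/4']
```

The coefficients recovered give the familiar $\sum k^3 = \frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4} = \left(\frac{n(n+1)}{2}\right)^2$.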
3,276,332
<p>I'm studying for my exam in discrete mathematics and found the following problem on last years exam:</p> <p>Find a closed formula without using induction for <span class="math-container">$\sum_{k=0}^n k^3$</span>.</p> <p>I tried it by finding the Generating Function first:</p> <p><span class="math-container">$F(x) = F_0 + \sum_{k=1}^nF_nx^n = \sum_{k=1}^n (F_{n-1}+n^3)x^n = \sum_{k=1}^n F_{n-1}x^n + n^3x^n = \sum_{k=0}^n F_n x^{n+1} + \sum_{k=1}^n n^3x^n = xF(x) + \sum_{k=1}^nn^3x^n$</span></p> <p>The problem seems to be, that I lack an actual recursive definition of <span class="math-container">$\sum_{k=0}^n k^3$</span> which is, as far as I know, needed to find a generating function. Above, I pretty much used, that <span class="math-container">$X_n = X_{n-1}+n^3$</span>, but obviously, that<span class="math-container">`</span>s not enough. Because a recursive definition was always given in our lectures, I don't now other possibilities to solve this, except for finding the Generating Function with help of recursive definitions.</p>
Community
-1
<p><span class="math-container">$$\sum_{k=0}^0 k^3=0, \\\sum_{k=0}^1 k^3=1, \\\sum_{k=0}^2 k^3=9, \\\sum_{k=0}^3 k^3=36, \\\sum_{k=0}^4 k^3=100. $$</span></p> <p>The requested formula must be a quartic polynomial, because the difference <span class="math-container">$P(n)-P(n-1)=n^3$</span> is a cubic polynomial. This polynomial is uniquely determined as the Lagrangian interpolation polynomial by the five above points.</p> <p><a href="https://www.wolframalpha.com/input/?i=interpolate+%7B%7B0,+0%7D,+%7B1,1%7D,+%7B2,9%7D,+%7B3,36%7D,%7B4,100%7D%7D" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=interpolate+%7B%7B0,+0%7D,+%7B1,1%7D,+%7B2,9%7D,+%7B3,36%7D,%7B4,100%7D%7D</a></p>
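A quick exact-arithmetic check of the interpolation claim; the evaluator below is just the generic Lagrange formula applied to the five points.

```python
from fractions import Fraction

# the five interpolation points (n, sum_{k=0}^n k^3)
pts = [(0, 0), (1, 1), (2, 9), (3, 36), (4, 100)]

def lagrange_eval(x):
    # evaluate the Lagrange interpolation polynomial through pts at integer x
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# the quartic interpolant agrees with the sum well beyond the five nodes
for n in range(30):
    assert lagrange_eval(n) == sum(k ** 3 for k in range(n + 1))
```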
3,251,233
<blockquote> <p>Calculate <span class="math-container">$\int_3^4 \sqrt {x^2-3x+2}\, dx$</span> using Euler's substitution</p> </blockquote> <p><strong>My try:</strong> <br><span class="math-container">$$\sqrt {x^2-3x+2}=x+t$$</span> <span class="math-container">$$x=\frac{2-t^2}{2t+3}$$</span> <span class="math-container">$$\sqrt {x^2-3x+2}=\frac{2-t^2}{2t+3}+t=\frac{t^2+3t+2}{2t+3}$$</span> <span class="math-container">$$dx=\frac{-2(t^2+3t+2)}{(2t+3)^2} dt$$</span> <span class="math-container">$$\int_3^4 \sqrt {x^2-3x+2}\, dx=\int_{\sqrt {2} -3}^{\sqrt {2} -4} \frac{t^2+3t+2}{2t+3}\cdot \frac{-2(t^2+3t+2)}{(2t+3)^2}\, dt=2\int_{\sqrt {2} -4}^{\sqrt {2} -3}\frac{(t^2+3t+2)^2}{(2t+3)^3}\, dt$$</span><br> However I think that I can have a mistake because Euler's substition it should make my task easier, meanwhile it still seems quite complicated and I do not know what to do next.<br><br>Can you help me?<br><br>P.S. I must use Euler's substitution because that's the command.</p>
egreg
62,967
<p>You can first observe that <span class="math-container">$$ \sqrt{x^2-3x+2}=\frac{1}{2}\sqrt{4x^2-12x+8}=\frac{1}{2}\sqrt{(2x-3)^2-1} $$</span> so with <span class="math-container">$2x-3=t$</span>, you get <span class="math-container">$$ \frac{1}{4}\int_3^5\sqrt{t^2-1}\,dt $$</span> Now use the Euler substitution <span class="math-container">$\sqrt{t^2-1}=t-u$</span>, so <span class="math-container">$t^2-1=t^2-2tu+u^2$</span> and <span class="math-container">$t=\frac{u^2+1}{2u}$</span>. Thus <span class="math-container">$$ 2t=u+\frac{1}{u},\qquad 2\,dt=\left(1-\frac{1}{u^2}\right)\,du=\frac{u^2-1}{u^2}\,du $$</span> and the integral becomes <span class="math-container">$$ \frac{1}{8}\int_{3-2\sqrt{2}}^{5-2\sqrt{6}}\frac{u^2-1}{u^2}\left(\frac{u^2+1}{2u}-u\right)\,du= -\frac{1}{16}\int_{3-2\sqrt{2}}^{5-2\sqrt{6}}\left(u-\frac{2}{u}+\frac{1}{u^3}\right)\,du $$</span> Not nice, but less ugly. Check the computations, please.</p>
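A numeric sanity check of the substitution chain: expanding the integrand gives $\frac{u^2-1}{u^2}\left(\frac{u^2+1}{2u}-u\right) = -\frac{(u^2-1)^2}{2u^3}$, so the final integral is $-\frac{1}{16}\int\left(u-\frac{2}{u}+\frac{1}{u^3}\right)du$; both sides evaluate to about $1.93499$.

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; the signed step h also handles b < a
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

orig = simpson(lambda x: math.sqrt(x * x - 3 * x + 2), 3, 4)
u_lo, u_hi = 3 - 2 * math.sqrt(2), 5 - 2 * math.sqrt(6)
sub = -simpson(lambda u: u - 2 / u + 1 / u ** 3, u_lo, u_hi) / 16
assert abs(orig - sub) < 1e-7
```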
1,444,820
<p>I want to solve the following funktion for $x$, is that possible? And how woult it look like?</p> <p>$y = xp -qx^{2}$</p> <p>Thanks for Help!</p>
Titus
156,008
<p>We can reduce the problem to the case that $g(x) = x^k$ for $k \in \mathbb{N}$. We find that if $f(x) = a_0 + a_1 x + a_2x^2$, then $$ \int_0^1 f(x)g(x) dx = g(1/2) ~~ \textrm{ implies } ~~ {a_0 \over k+1} + {a_1 \over k+2} + {a_2 \over k+3} = {1 \over 2^k} ~~ \forall k \in \mathbb{N}.$$ Writing this equation for each of $k = 0,1,2$ we get three equations in three variables, which can be expressed as the augmented matrix $$ \left[ \begin{array}{ccc|c} 1 &amp; 1/2 &amp; 1/3 &amp; 1 \\ 1/2 &amp; 1/3 &amp; 1/4 &amp; 1/2 \\ 1/3 &amp; 1/4 &amp; 1/5 &amp; 1/4 \end{array} \right].$$ Reducing this we find that <em>if</em> $f$ exists, its coefficients must be $$ a_0 = -3/2, ~~ a_1 = 15, ~~ a_2 = -15. $$ Plugging these back into the first display, $$ {-3/2 \over k+1 } + {15 \over k+2} - { 15 \over k+3 } = {1 \over 2^k} $$ which simplifies to $$ 2^k(-1.5k^2 + 7.5k + 6) - k^3 -6k^2-11k-6 = 0 $$ and we need this equation to hold for all nonnegative $k$. It does coincidentally hold for $k = 3$ as well, but for no other choices of $k$.</p>
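The final claim — that the polynomial identity holds only for $k = 0, 1, 2, 3$ — is easy to confirm exactly in code (for larger $k$ the left side is eventually negative while the right side grows, so a modest search range suffices):

```python
from fractions import Fraction

# all k with 2^k(-1.5k^2 + 7.5k + 6) = k^3 + 6k^2 + 11k + 6, exactly
sols = [k for k in range(50)
        if 2 ** k * (Fraction(-3, 2) * k * k + Fraction(15, 2) * k + 6)
        == k ** 3 + 6 * k * k + 11 * k + 6]
print(sols)  # [0, 1, 2, 3]
```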
358,786
<p>Why does Egorov's theorem not hold in the case of infinite measure? It turns out that, for example, $f_n = \chi_{[n,n+1]}$ does not converge almost uniformly; that is, there is no set $F$ with $m(E\setminus F) &lt; \epsilon$ on which the convergence is uniform. Is this simply true because $f_n$ takes on the value 1 for each $n$ but suddenly hits 0 as $n \to \infty$?</p>
Alp Uzman
169,085
<p>Set $\forall n\geq1: f_n:[0,\infty[\to\{0,1\}, f_n:=\chi_{[n-1,n]}$. Then $f_n\to0$ pointwise on $\mathbb{R}$. Suppose $\exists F\subseteq\mathbb{R}: f_n\stackrel{u.}{\to}0$ on $F$, i.e. that</p> <p>$$\forall \epsilon&gt;0,\exists N,\forall n\geq N,\forall x\in F: |f_n(x)|&lt;\epsilon.$$</p> <p>For $\epsilon:=1, \exists N,\forall n\geq N,\forall x\in F:|f_n(x)|&lt;1$, so that $x\not\in[N,\infty[$. Thus $F\subseteq [0,N[$, and consequently $m(\mathbb{R}-F)\geq m([N,\infty[)=\infty$.</p> <hr> <p>The moral of the story is then that for this example any set on which we have uniform convergence has to be of finite measure, and we cannot make the rest have arbitrarily small measure.</p>
594,975
<p>What will be the basis of vector space <span class="math-container">$\Bbb C$</span> over a field of rational numbers <span class="math-container">$\Bbb Q$</span>?</p> <p>I think it will be an infinite basis! I think it will be <span class="math-container">$B=\{r_1+r_2i \mid r_1, r_2 \in \Bbb Q^{c}\}\cup\{1,i\}$</span>. But this generator is an uncountable set. Can a basis of a vector space be that big? If it is true does it mean that <span class="math-container">$\Bbb Q$</span>-module (Vector space) <span class="math-container">$\Bbb C$</span> is free?</p>
Dylan Yott
62,865
<p>Any basis for $\Bbb C$ as a $\Bbb Q$ vector space must be infinite, since any finite dimensional $\Bbb Q$ vector space is countable, but $\Bbb C$ is not. Constructing such a basis requires the axiom of choice.</p>
594,975
<p>What will be the basis of vector space <span class="math-container">$\Bbb C$</span> over a field of rational numbers <span class="math-container">$\Bbb Q$</span>?</p> <p>I think it will be an infinite basis! I think it will be <span class="math-container">$B=\{r_1+r_2i \mid r_1, r_2 \in \Bbb Q^{c}\}\cup\{1,i\}$</span>. But this generator is an uncountable set. Can a basis of a vector space be that big? If it is true does it mean that <span class="math-container">$\Bbb Q$</span>-module (Vector space) <span class="math-container">$\Bbb C$</span> is free?</p>
Ryan Reich
3,547
<p>The straightforward answer for "what is a basis for $\mathbb{C}/\mathbb{Q}$" is that we don't know. The sneaky answer is that we do know there is one, because any maximal linearly independent set is a basis, and exists by Zorn's Lemma. This shows that $\mathbb{C}$ is a free $\mathbb{Q}$-module, since that concept is literally equivalent to the existence of a basis, and the same argument in fact proves that any vector space is a free module over its field. As noted in the comments, for <em>finite-dimensional</em> vector spaces one could deduce this from the fact that fields are PIDs and their modules are torsion-free, but that uses a very fancy theorem of ring theory that, moreover, reduces to computing a basis! So it's enough just to say that we could, in principle, find one.</p> <p>As for <em>how</em> you would find one, the answer is not satisfying. One way is by ordinal induction: for every ordinal $\alpha$ less than the cardinality of $\mathbb{C}$, suppose we have found a $\mathbb{Q}$-linearly independent set in $\mathbb{C}$ of cardinality that of $\alpha$. If it doesn't span $\mathbb{C}$, then it can be extended by some new element and that gives such a set for $\alpha + 1$. For any limit ordinal $\beta$, construct the union of the linearly independent sets for lesser $\alpha$, which is linearly independent since they are, by construction, nested. Then by the time we pass through all ordinals less than or equal to the cardinality of $\mathbb{C}$, we must have found a basis, since we will have run out of complex numbers!</p> <p>I do not encourage you to actually implement this construction.</p>
4,015,203
<p>How would you prove the following property about covariances?</p> <p><a href="https://i.stack.imgur.com/y18wd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y18wd.png" alt="enter image description here" /></a></p> <p>I found it here:</p> <p><a href="https://www.probabilitycourse.com/chapter5/5_3_1_covariance_correlation.php" rel="nofollow noreferrer">https://www.probabilitycourse.com/chapter5/5_3_1_covariance_correlation.php</a></p>
Vons
274,987
<p>You can prove the result <span class="math-container">$Cov(aX+bY,cW+dZ)=acCov(X,W)+adCov(X,Z)+bcCov(Y,W)+bdCov(Y,Z)$</span> as follows.</p> <p><span class="math-container">$$\begin{split}Cov(aX+bY,cW+dZ)&amp;=E[(aX+bY-E(aX+bY))(cW+dZ-E(cW+dZ))]\\ &amp;=E[(a(X-E(X))+b(Y-E(Y)))(c(W-E(W))+d(Z-E(Z)))]\\ &amp;=acE[(X-E(X))(W-E(W))]+adE[(X-E(X))(Z-E(Z))]\\ &amp;\quad+bcE[(Y-E(Y))(W-E(W))]+bdE[(Y-E(Y))(Z-E(Z))]\\ &amp;=acCov(X,W)+adCov(X,Z)+bcCov(Y,W)+bdCov(Y,Z)\end{split}$$</span></p> <p>Repeated application of this gives the desired result.</p>
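Bilinearity also holds exactly for sample covariances (the sample covariance is bilinear for the same algebraic reason), which gives a quick numeric check of the expansion. The data and coefficients below are arbitrary.

```python
import random

def cov(xs, ys):
    # sample covariance with population (1/n) normalization
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

random.seed(0)
n = 100
X = [random.random() for _ in range(n)]
Y = [random.random() for _ in range(n)]
W = [random.random() for _ in range(n)]
Z = [random.random() for _ in range(n)]
a, b, c, d = 2.0, -1.5, 0.5, 3.0

lhs = cov([a*x + b*y for x, y in zip(X, Y)], [c*w + d*z for w, z in zip(W, Z)])
rhs = a*c*cov(X, W) + a*d*cov(X, Z) + b*c*cov(Y, W) + b*d*cov(Y, Z)
assert abs(lhs - rhs) < 1e-12
```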
3,484,293
<p>In the <span class="math-container">$xy$</span> - plane, the point of intersection of two functions <span class="math-container">$f(x) = x^2$</span> and <span class="math-container">$g(x) = x + 2$</span> lies in which quadrant/s ?</p> <p>I have no idea how to begin with this question.</p>
Community
-1
<p>Proceed as you would mathematically, that is, by equating the two functions.</p> <p><span class="math-container">$f(x) = g(x)$</span></p> <p><span class="math-container">$x^2 = x + 2$</span></p> <p><span class="math-container">$x^2 - x - 2 = 0$</span></p> <p><span class="math-container">$x = -1$</span> or <span class="math-container">$x = 2$</span></p> <p>The corresponding points are (-1,1) and (2,4), which lie in the <span class="math-container">$2^{nd}$</span> and <span class="math-container">$1^{st}$</span> quadrants respectively.</p>
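The same computation in a few lines of code (the quadrant classifier is my own helper):

```python
# solve x^2 = x + 2 over a small integer range and classify the quadrants
roots = [x for x in range(-10, 11) if x * x == x + 2]
pts = [(x, x * x) for x in roots]
assert pts == [(-1, 1), (2, 4)]

def quadrant(p):
    x, y = p
    return {(True, True): 1, (False, True): 2,
            (False, False): 3, (True, False): 4}[(x > 0, y > 0)]

assert [quadrant(p) for p in pts] == [2, 1]
```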
2,368,179
<p>The answer should be in radians, like $\pi/4$ (45°) or $\pi/2$ (90°). I used the $\tan(A+B)$ formula and got $5/7$ as the answer, but that's obviously wrong.</p>
Dr. Sonnhard Graubner
175,066
<p>use that $$\tan(A+B)=\frac{\tan(A)+\tan(B)}{1-\tan(A)\tan(B)}$$</p>
2,546,161
<p>I came across this interesting question in an interview:</p> <p>Given $X$ and $Y$, these two independent standard normal. We have the following probability of $P(X&gt;0| X+Y&gt;0) = 0.75$. One can get this easily by draw a 2d plane and find out the required area.</p> <p>Now, if $X$ and $Y$ are joint normal with correlation $\rho$, and we are given $P(X&gt;0| X+Y&gt;0) = 0.8$, what is the value of $\rho$? </p> <p>We can find this by writing out the pdf of the joint normal $(X, Y)$ and compute the required probability and solve for $\rho$. I want to know if there is a more intuitive way other than the cumbersome double integral? </p> <p>I am thinking about making some transformation of $X$ and $Y$, but I don't have much clue how. </p>
Did
6,179
<p>For approaches "less cumbersome than the double gaussian integral", start from the fact that the 2D standard normal distribution is invariant under rotations; thus, for every $(X_0,Y_0)$ standard normal and every <strong>angular sector $S$ of angle $\vartheta$</strong>, $$P((X_0,Y_0)\in S)=\vartheta/(2\pi)$$ If $(X,Y)$ is standard normal, this yields $P(X&gt;0\mid X+Y&gt;0)$, using $(X_0,Y_0)=^d(X,Y)$, as follows.</p> <ul> <li>The event $\{X+Y&gt;0\}$ corresponds to an angular sector of angle $\pi$.</li> <li>The event $\{X&gt;0,X+Y&gt;0\}$ corresponds to an angular sector of angle $3\pi/4$.</li> </ul> <p>Hence, $$P(X&gt;0\mid X+Y&gt;0)=(3\pi/4)/\pi=3/4$$</p> <p>Likewise, if $(X,Y)$ is centered normal with unit variances and correlation $\varrho$, then, considering $\sigma=\sqrt{1-\varrho^2}$, one gets $$(X,Y)=^d(X_0,\varrho X_0+\sigma Y_0)$$ The event $\{X+Y&gt;0\}$ again corresponds to an angular sector of angle $\pi$. On the other hand, $$\{X&gt;0,X+Y&gt;0\}=\{X_0&gt;0,X_0+\varrho X_0+\sigma Y_0&gt;0\}=\{X_0&gt;0,X_0+\tau Y_0&gt;0\}$$ with $\tau=\sigma/(1+\varrho)$, which corresponds to an angular sector of angle $\vartheta$ in $(\frac12\pi,\pi)$ with $$\tan\vartheta=-\tau=-\sqrt{\frac{1-\varrho}{1+\varrho}}$$ The double angle formula $$\cos(2\vartheta)=\frac{1-\tan^2\vartheta}{1+\tan^2\vartheta}$$ then readily yields </p> <blockquote> <p>$$\varrho=\cos(2\vartheta)=\cos(2\pi P(X&gt;0\mid X+Y&gt;0))$$</p> </blockquote> <p>For example, if $P(X&gt;0\mid X+Y&gt;0)=0.8$, one gets $$\varrho=\cos(2\pi/5)=(\sqrt5-1)/4$$</p>
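A Monte Carlo sanity check of the closed form $\varrho=\cos(2\pi P)$: with $\varrho=(\sqrt5-1)/4\approx0.309$, the conditional probability should come out near $0.8$.

```python
import math
import random

random.seed(1)
rho = (math.sqrt(5) - 1) / 4            # claimed correlation for P = 0.8
sigma = math.sqrt(1 - rho * rho)
hits = total = 0
for _ in range(200_000):
    x0, y0 = random.gauss(0, 1), random.gauss(0, 1)
    x, y = x0, rho * x0 + sigma * y0    # (X, Y) unit-variance with correlation rho
    if x + y > 0:
        total += 1
        if x > 0:
            hits += 1
p_hat = hits / total
assert abs(p_hat - 0.8) < 0.01
```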
2,773,515
<p>Given $X_1 \sim \exp(\lambda_1)$ and $X_2 \sim \exp(\lambda_2)$, and that they are independent, how can I calculate the probability density function of $X_1+X_2$? </p> <hr> <p>I tried to define $Z=X_1+X_2$ and then: $f_Z(z)=\int_{-\infty}^\infty f_{Z,X_1}(z,x) \, dx = \int_0^\infty f_{Z,X_1}(z,x) \, dx$.<br> And I don't know how to continue from this point.</p>
tarkovsky123
518,320
<p>Just as has been pointed out by the other answers, you can simply calculate the pdf for $X_1 + X_2$ by using the principle of <em>convolution</em>. In fact, in general one can show that if $X_1,X_2,...X_n$ are i.i.d variables with exponential distribution with parameter $\lambda$ then $S = \sum_{k=1}^{n}X_k \sim \Gamma (n,\lambda)$.</p>
3,692,083
<p>I wish to show that the closed unit ball in <span class="math-container">$l^1$</span> is not compact, for which I believe it would be easiest to show that it is not bounded. For this I want to consider the sequence {1, 1/2, 1/3, ... , 1/n, ...}, since the harmonic series is known to be divergent. But will this sequence actually be in the unit ball of <span class="math-container">$l^1$</span>? I'm confused by the definition of the norm given to me. </p>
Kavi Rama Murthy
142,385
<p>The closed unit ball in any normed linear space is bounded. Your sequence does not belong to <span class="math-container">$\ell^{1}$</span>. </p> <p>To show that the closed unit ball in <span class="math-container">$\ell^{1}$</span> is not compact, consider the elements <span class="math-container">$e_n=(0,0,...,0,1,0,0...)$</span> where <span class="math-container">$1$</span> is in the <span class="math-container">$n$</span>-th place. This is a sequence in this ball with no convergent subsequence: if at all a subsequence converges, the limit has to be <span class="math-container">$(0,0,...)$</span> (since convergence in <span class="math-container">$\ell^{1}$</span> implies convergence of each coordinate) but <span class="math-container">$\|e_n-(0,0,...)\|=1$</span> for all <span class="math-container">$n$</span>. </p>
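The underlying obstruction is that the $e_n$ are pairwise at $\ell^1$-distance $2$, so no subsequence can be Cauchy. Since each $e_n$ has finite support, finite truncations capture this exactly:

```python
# pairwise l^1 distances between the unit vectors e_n are all exactly 2
def l1_dist(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

N = 6
es = [[1 if i == n else 0 for i in range(N)] for n in range(N)]
assert all(l1_dist(es[i], es[j]) == 2
           for i in range(N) for j in range(N) if i != j)
```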
2,420,255
<p>If the price of an article is increased by percent $p$, then the decrease in percent of sales must not exceed $d$ in order to yield the same income. The value of $d$ is: $\textbf{(A)}\ \frac{1}{1+p} \qquad \textbf{(B)}\ \frac{1}{1-p} \qquad \textbf{(C)}\ \frac{p}{1+p} \qquad \textbf{(D)}\ \frac{p}{p-1}\qquad \textbf{(E)}\ \frac{1-p}{1+p}$</p> <p>My try :</p> <p>Suppose that original price=$x$ , then price after increase =$x+px =x (1+p)$</p> <p>Sales before increase = $x.n$ </p> <p>Sales after increase =$ x.n. (1+p)$</p> <p>For the sales to be equal $ \to $</p> <p>$x.n=x.n. (1+p).d \to d=\frac {1}{p+1}$</p>
R. J. Mathar
478,393
<p>The proof is easy with standard vector algebra: The line from (B,D) to (A,C) is parametrized by point coordinates (x,y) = (B,D)+t(A-B,C-D), 0&lt;=t&lt;=1, and the line from (C,D) to (A,B) is parametrized by point coordinates (x,y) = (C,D)+t'(A-C,B-D), 0&lt;= t'&lt;=1. The intersection between the two lines is found by equating the x and y coordinates of the two lines, solving B+t(A-B)=C+t'(A-C) together with D+t(C-D)=D+t'(B-D), which is a simple 2x2 linear system of equations. The solution is t=(D-B)/(A-C+D-B), t'=(D-C)/(A-C+D-B). Insertion of t and t' back into the line equations gives (B,D)+t(A-B,C-D) = [(AD-CB)/(A-C+D-B), (AD-CB)/(A-C+D-B)]. So indeed the intersection always stays on the diagonal because the x and y coordinates are always the same.</p>
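A floating-point spot check of the algebra, with random coefficients (near-degenerate denominators skipped):

```python
import random

random.seed(0)
max_dev = 0.0
for _ in range(1000):
    A, B, C, D = (random.uniform(-5, 5) for _ in range(4))
    denom = A - C + D - B
    if abs(denom) < 1e-2:
        continue                      # nearly parallel; skip
    t = (D - B) / denom
    x = B + t * (A - B)               # intersection from the first line
    y = D + t * (C - D)
    expected = (A * D - C * B) / denom
    max_dev = max(max_dev, abs(x - expected), abs(y - expected))
assert max_dev < 1e-8                 # both coordinates equal (AD-CB)/(A-C+D-B)
```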
518,140
<p>What is the relation between the definition of homotopy of two functions</p> <blockquote> <p>"A homotopy between two continuous functions $f$ and $g$ from a topological space $X$ to a topological space $Y$ is defined to be a continuous function $H : X × [0,1] → Y$ from the product of the space $X$ with the unit interval $[0,1]$ to $Y$ such that, if $x \in X$ then $H(x,0) = f(x)$ and $H(x,1) = g(x)$".</p> </blockquote> <p>and the definition of the homotopy between two morphisms of chain complexes</p> <blockquote> <p>"Let $A$ be an additive category. The homotopy category $K(A)$ is based on the following definition: if we have complexes $A, B$ and maps $f, g$ from $A$ to $B$, a chain homotopy from $f$ to $g$ is a collection of maps $h^n \colon A^n \to B^{n - 1}$ (not a map of complexes) such that $f^n - g^n = d_B^{n - 1} h^n + h^{n + 1} d_A^n$, or simply $f - g = d_B h + h d_A$." </p> </blockquote> <p>Please help me. Thank you!</p>
Asaf Karagila
622
<p>When I took a course in mathematical history, the only real achievement of medieval mathematics (according to the professor teaching the course) was the following:</p> <blockquote> <p>$$\sum_{n=1}^\infty\frac1n=\infty$$</p> </blockquote> <p>The proof is due to Oresme, who gave a very nice proof of the following flavor:</p> <p>$$1+\frac12+\frac13+\frac14+\frac15+\frac16+\frac17+\frac18+\ldots\geq\\1+\frac12+\frac14+\frac14+\frac18+\frac18+\frac18+\frac18+\ldots=\\1+\frac12+\frac12+\frac12+\ldots=\infty$$</p> <p>You can read about Oresme, and the proof <a href="http://en.wikipedia.org/wiki/Nicole_Oresme">on this Wikipedia page</a>.</p>
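Oresme's grouping amounts to the bound <span class="math-container">$H_{2^k}\ge 1+k/2$</span> on the partial sums, which forces divergence. A small exact check of the first few cases (code illustrative, not part of the proof):

```python
from fractions import Fraction

# H(n): the n-th harmonic partial sum, computed exactly with rationals.
def H(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Oresme's grouping gives H(2^k) >= 1 + k/2 for every k.
checks = [H(2 ** k) >= 1 + Fraction(k, 2) for k in range(8)]
print(checks)  # all True
```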
518,140
<p>What is the relation between the definition of homotopy of two functions</p> <blockquote> <p>"A homotopy between two continuous functions $f$ and $g$ from a topological space $X$ to a topological space $Y$ is defined to be a continuous function $H : X × [0,1] → Y$ from the product of the space $X$ with the unit interval $[0,1]$ to $Y$ such that, if $x \in X$ then $H(x,0) = f(x)$ and $H(x,1) = g(x)$".</p> </blockquote> <p>and the definition of the homotopy between two morphisms of chain complexes</p> <blockquote> <p>"Let $A$ be an additive category. The homotopy category $K(A)$ is based on the following definition: if we have complexes $A, B$ and maps $f, g$ from $A$ to $B$, a chain homotopy from $f$ to $g$ is a collection of maps $h^n \colon A^n \to B^{n - 1}$ (not a map of complexes) such that $f^n - g^n = d_B^{n - 1} h^n + h^{n + 1} d_A^n$, or simply $f - g = d_B h + h d_A$." </p> </blockquote> <p>Please help me. Thank you!</p>
Per Erik Manne
33,572
<p>You can find a list of mathematicians from this period <a href="http://www-history.mcs.st-and.ac.uk/Indexes/500_999.html" rel="nofollow noreferrer">here</a> and <a href="http://www-history.mcs.st-and.ac.uk/Indexes/1000_1499.html" rel="nofollow noreferrer">here</a>, with links to biographies.</p> <p>Alcuin of York wrote <a href="http://www-history.mcs.st-and.ac.uk/PrintHT/Alcuin_book.html" rel="nofollow noreferrer">Problems to Sharpen the Young</a> around 800 A.D., which is a nice collection of problems of recreational mathematics. Problem 18 is very well known, and I believe this is the oldest reference we have for this problem.</p> <p>Leonardo Pisano (who acquired the name Fibonacci some 600 years after his death) has some neat number theory in his <a href="https://rads.stackoverflow.com/amzn/click/com/0126431302" rel="nofollow noreferrer">Book of Squares</a>. He starts with the result <span class="math-container">$1+3+\dots+(2n-1)=n^2$</span>, and develops from this a number of methods for solving Diophantine equations. The last problem that he solves is to find numbers <span class="math-container">$a,b,c$</span> such that the three numbers <span class="math-container">$a+b+c+a^2$</span>, <span class="math-container">$a+b+c+a^2+b^2$</span>, and <span class="math-container">$a+b+c+a^2+b^2+c^2$</span> all are squares. His solution: <span class="math-container">$a=35$</span>, <span class="math-container">$b=144$</span>, <span class="math-container">$c=360$</span>.</p> <p>Levi ben Gerson used mathematical induction in France in the early 1300's, giving proofs of formulas such as <span class="math-container">$C_k^n={n(n-1)\cdots(n-k+1)\over 1\cdot 2\cdots k}$</span>, of <span class="math-container">$(1+2+\cdots+n)^2=1^3+2^3+\cdots+n^3$</span>, and of the associative law of multiplication for any number of factors. 
(Reference: <a href="https://rads.stackoverflow.com/amzn/click/com/0321387007" rel="nofollow noreferrer">Katz</a>.)</p> <p>The oldest surviving copies of the texts of <a href="http://www.claymath.org/library/historical/euclid/" rel="nofollow noreferrer">Euclid</a> and <a href="http://archimedespalimpsest.org/about/" rel="nofollow noreferrer">Archimedes</a> were written in the Byzantine empire in the ninth and tenth century. Obviously, there was interest in mathematics in this part of Europe, though what has survived is mostly commentaries on older texts.</p>
230,887
<p>Let $(F^\bullet,d_F)$ and $(G^\bullet,d_G)$ be two complexes in an abelian category $\mathbf{A}$.</p> <p>The complex cone $Cone(\varphi)^\bullet$ of a morphism of complexes $\varphi:F^\bullet \to G^\bullet$ is defined as</p> <p>$$Cone(\varphi)^i=G^i\oplus F^{i+1},$$</p> <p>and its differential is</p> <p>$$d(g^i,f^{i+1})=(d_G(g^i)+\varphi(f^{i+1}),-d_F(f^{i+1})).$$</p> <p>then there are natural maps $G^\bullet \to Cone(\varphi)^\bullet$ and $Cone(\varphi)^\bullet \to F[1]^\bullet$ that make</p> <p>$$F\to G\to Cone(\varphi) \to F[1]$$</p> <p>into a distinguished triangle inside the derived category $\mathbf{D}^b(\mathbf{A})$.</p> <p>My question is: what is the reason behind the "twisting" of the first component of the differential with $\varphi(f^{i+1})$? Shouldn't one obtain an honest complex even without that? It must be required by some interesting property of the cone itself.</p>
Jason Starr
13,265
<p>You might want to take a look at Problem 1 on the following problem set from a course on homological algebra. <br> <a href="http://www.math.stonybrook.edu/~jstarr/M536f15/M536f15ps10.pdf" rel="nofollow">http://www.math.stonybrook.edu/~jstarr/M536f15/M536f15ps10.pdf</a> <br> In particular, the mapping complex satisfies a property (up to homotopy) that makes it seem like a kernel, and it simultaneously satisfies a property (up to homotopy) that makes it seem like a cokernel.</p>
2,408,521
<blockquote> <p>The plane $x - y = 0$ </p> </blockquote> <p>This seems very easy, but I will do it in case I'm barking up the wrong tree. Also, if there is a more efficient way to do it please tell me.</p> <p>$y$ is a free variable so let it equal $r$<br> $x = y = r$ therefore,<br> $$v = \begin{bmatrix} r \\ r \\ \end{bmatrix} = r\begin{bmatrix} 1 \\ 1 \\\end{bmatrix} $$ </p> <p>Therefore, the dimension is 1, and the basis vector is<br> $$ \begin{bmatrix} 1 \\ 1 \\ \end{bmatrix} $$</p> <p>Ok so going further on what @StackTD said $x - y + (0)z = 0$<br> $z = r$<br> $y = s$<br> $x = s - (0)r = s$<br> Therefore<br> $$\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} = \begin{bmatrix} s \\ s \\ r \\ \end{bmatrix} = s\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + r \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} $$</p> <p>I know this is correct because @Fred said.</p>
Shaun
104,041
<p>It's already in conjunctive normal form.</p>
2,408,521
<blockquote> <p>The plane $x - y = 0$ </p> </blockquote> <p>This seems very easy, but I will do it in case I'm barking up the wrong tree. Also, if there is a more efficient way to do it please tell me.</p> <p>$y$ is a free variable so let it equal $r$<br> $x = y = r$ therefore,<br> $$v = \begin{bmatrix} r \\ r \\ \end{bmatrix} = r\begin{bmatrix} 1 \\ 1 \\\end{bmatrix} $$ </p> <p>Therefore, the dimension is 1, and the basis vector is<br> $$ \begin{bmatrix} 1 \\ 1 \\ \end{bmatrix} $$</p> <p>Ok so going further on what @StackTD said $x - y + (0)z = 0$<br> $z = r$<br> $y = s$<br> $x = s - (0)r = s$<br> Therefore<br> $$\begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} = \begin{bmatrix} s \\ s \\ r \\ \end{bmatrix} = s\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + r \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} $$</p> <p>I know this is correct because @Fred said.</p>
skyking
265,767
<p>It's both in conjunctive and disjunctive normal form. However, the terms/factors are different. To make it clearer, in conjunctive normal form it's</p> <p>$$(\neg a) \land (\neg b)$$</p> <p>and in disjunctive normal form it's</p> <p>$$(\neg a \land \neg b)$$</p> <p>where the factors/terms are inside the parentheses (in the latter there's only one term, which makes the $\lor$ sign absent). If the terms/factors are only literals or their negations (that is, apart from literals and negated literals there are only $\lor$s or only $\land$s), it's in both DNF and CNF.</p> <p>Note that the normal forms do not require the expression to be minimal in any sense (which isn't guaranteed to be unique anyway). So there may be many solutions to the problem.</p>
3,896,562
<p>Suppose <span class="math-container">$f:\mathbb{S}^n\rightarrow Y$</span> is a continuous map null homotopic to a constant map <span class="math-container">$c$</span>. In other words: <span class="math-container">$F: f\simeq c$</span>, where <span class="math-container">$c(x)=y$</span></p> <p>Now, we may extend <span class="math-container">$f$</span> to a continuous map <span class="math-container">$g: D^{n+1}\rightarrow Y$</span> by defining</p> <p><span class="math-container">$g(x)= y$</span> if <span class="math-container">$0\leq ||x||\leq \frac{1}{2}$</span> and <span class="math-container">$F(\frac{x}{||x||},2-2||x||)$</span> if <span class="math-container">$\frac{1}{2}\leq ||x||\leq 1$</span></p> <p>Now, on <span class="math-container">$||x||=\frac{1}{2}$</span>, <span class="math-container">$F(\frac{x}{||x||},2-2||x||)=F(\frac{x}{||x||},1)=c=y$</span>.</p> <p>Hence <span class="math-container">$g$</span> is continuous by the gluing lemma.</p> <p>I was wondering as to what the intuition is for constructing such a function, what geometrical clues allow one to define such a function <span class="math-container">$F$</span></p>
NHL
841,510
<p>let <span class="math-container">$h=g-f&gt;0$</span> on <span class="math-container">$[a,b]$</span>. Since <span class="math-container">$h$</span> is continuous on a closed interval, it has a global minimum (extreme value theorem).</p> <p>then let <span class="math-container">$\delta=\min_{x\in[a,b]}{h(x)}$</span></p> <p>Then, by definition <span class="math-container">$\delta&gt;0$</span> and <span class="math-container">$\forall x\in[a,b], f(x)+\delta\leq g(x) $</span></p>
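A small numerical illustration of the argument (the particular <span class="math-container">$f$</span> and <span class="math-container">$g$</span> below are my own choice): the minimum gap <span class="math-container">$\delta$</span> is positive and <span class="math-container">$f+\delta\le g$</span> on the interval.

```python
# Concrete pair on [a, b] = [0, 1] with g - f = 0.5 + 0.5 x > 0.
def f(x): return x * x
def g(x): return x * x + 0.5 + 0.5 * x

grid = [i / 1000 for i in range(1001)]
delta = min(g(x) - f(x) for x in grid)        # attained at x = 0
ok = all(f(x) + delta <= g(x) for x in grid)  # f + delta <= g everywhere
print(delta, ok)  # 0.5 True
```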
194,096
<p>Is it possible to find an expression for: $$S(N)=\sum_{k=0}^{+\infty}\frac{1}{\sum_{n=0}^{N}k^n}?$$</p> <p>For $N=1$ we have</p> <p>$$S(1) = \displaystyle\sum_{k=0}^{+\infty}\frac{1}{1 + k} = \displaystyle\sum_{k=1}^{+\infty}\frac{1}{k}$$</p> <p>which is the (divergent) harmonic series. Thus, $S(1) = \infty$.</p> <p>For $N=2$ this sum is: $$S(2)=\sum_{k=0}^{+\infty}\frac{1}{1+k+k^2}$$ which can be expressed as: $$S(2)=-1+\frac{1}{3}\sqrt 3 \pi \tanh(\frac{1}{2}\pi\sqrt 3)\approx 0.798$$</p> <p>For $N=3$ we have: $$S(3)=\frac{1}{4}\Psi(I)+\frac{1}{4I}\Psi(I)-\frac{1}{4I}\pi\coth(\pi)+\frac{1}{4}\pi\coth(\pi)+\frac{1}{4}\Psi(1+I)-\frac{1}{4I}\Psi(1+I)-\frac{1}{2}+\frac{1}{2}\gamma \approx 0.374$$</p>
Marko Riedel
44,883
<p>Here is an approach using Mellin transforms to enrich the collection of solutions. Write</p> <p>$$S(N) = 1 + \frac{1}{N+1} + \sum_{k\ge 2} \frac{1}{\sum_{n=0}^N k^n} = 1 + \frac{1}{N+1} + \sum_{k\ge 2} \frac{k-1}{k^{N+1}-1} \\= 1 + \frac{1}{N+1} + \sum_{k\ge 2} \frac{k}{k^{N+1}-1} - \sum_{k\ge 2} \frac{1}{k^{N+1}-1}.$$</p> <p>There are two harmonic sums with the same base function here which we now evaluate.</p> <p>Introduce $$S_1(x, M) = \sum_{k\ge 2} \frac{1}{(xk)^M-1} \quad\text{and}\quad S_2(x, M) = \sum_{k\ge 2} \frac{k}{(xk)^M-1}$$ so that we are interested in $S_2(1, N+1)-S_1(1, N+1).$</p> <p>Recall the harmonic sum identity $$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) = \left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ where $g^*(s)$ is the Mellin transform of $g(x).$</p> <p>In the present case we have for $k\ge 2$ that (do $S_1$ first as $S_2$ will follow) $$\lambda_k = 1, \quad \mu_k = k \quad \text{and} \quad g(x) = \frac{1}{x^M-1}.$$</p> <p>We need the Mellin transform $g^*(s)$ of $g(x)$ which is $$\int_0^\infty \frac{1}{x^M-1} x^{s-1} dx.$$ We take $M = N+1 &gt; 2$ since the sum from the beginning diverges when $N=1.$</p> <p>The fundamental strip of this Mellin transform is $\langle 0,M\rangle.$ We will in fact use $\langle 0, M-1\rangle.$</p> <p>Now to evaluate this (seemingly divergent) transform we use a slice contour with the bottom side of the slice aligned with the positive real axis and the origin. The angle of the slice is $2\pi/M.$ The radius of the slice is $R$ and we let it go to infinity so that we can easily see the contribution from the arc that connects the two straight sides vanishes by the ML bound because its length is $\Theta(R)$ and the integrand is $$\Theta(1/R^M\times R^{(M-1)-1}) = \Theta(1/R^2).$$</p> <p>We will be integrating through the poles at $x=1$ and $x=\exp(2\pi i/M),$ thereby picking up half the residues from these poles. 
The integral along the horizontal side $\Gamma_1$ of the slice is our transform $g^*(s)$. The contribution from the arc, call it $\Gamma_2$, vanishes. Along the slanted side, call it $\Gamma_3$, we put $x=\exp(2\pi i/M)t$ to obtain the following integral:</p> <p>$$\int_{\Gamma_3} \frac{1}{x^M-1} x^{s-1} dx \\= \exp(2\pi i/M) \int_\infty^0 \frac{1}{(\exp(2\pi i/M) t)^M-1} (\exp(2\pi i/M) t)^{s-1} dt \\ = - \exp(2\pi i/M) \int_0^\infty \frac{1}{t^M-1} (\exp(2\pi i/M) t)^{s-1} dt \\= - \exp(2\pi i \times s/M) \int_0^\infty \frac{1}{t^M-1} t^{s-1} dt = - \exp(2\pi i \times s/M) \times g^*(s).$$</p> <p>Putting it all together we have for $g^*(s)$ that $$g^*(s) \left(1-e^{2\pi i \times s/M}\right) = \pi i \left( \mathrm{Res}\left(\frac{1}{x^M-1} x^{s-1}; x=1\right)+ \mathrm{Res}\left(\frac{1}{x^M-1} x^{s-1}; x=\exp(2\pi i/M)\right) \right).$$</p> <p>These poles are simple and may be evaluated with a single derivative which gives $1/(M x^{M-1})$ and produces $$ \pi i \left(\frac{1}{M} + \frac{1}{M \exp(2\pi i/M)^{M-1}} \exp(2\pi i \times (s-1)/M)\right).$$ This is $$\frac{\pi i}{M} \left(1 + \exp(2\pi i \times (s-1-(M-1))/M)\right) \\= \frac{\pi i}{M} \left(1 + \exp(2\pi i \times (s-M)/M)\right) \\= \frac{\pi i}{M} \left(1 + \exp(2\pi i \times s/M)\right).$$ Returning to $g^*(s)$ we finally obtain $$g^*(s) = \frac{\pi i}{M} \frac{1 + e^{2\pi i \times s/M}}{1-e^{2\pi i \times s/M}} = \frac{\pi}{M} \frac{i(e^{-\pi i \times s/M} + e^{\pi i \times s/M})} {e^{-\pi i \times s/M}-e^{\pi i \times s/M}} = - \frac{\pi}{M} \cot(\pi s/M).$$</p> <p>It follows that the Mellin transform $Q_1(s)$ of the harmonic sum $S_1(x,M)$ is given by</p> <p>$$Q_1(s) = - \frac{\pi}{M} \cot(\pi s/M) (-1+\zeta(s)) \\ \text{because}\\ \sum_{k\ge 2} \frac{\lambda_k}{\mu_k^s} = \sum_{k\ge 2} \frac{1}{k^s} = -1+\zeta(s)$$ for $\Re(s) &gt; 1.$</p> <p>Similarly the Mellin transform $Q_2(s)$ of the harmonic sum $S_2(x,M)$ is given by</p> <p>$$Q_2(s) = - \frac{\pi}{M} \cot(\pi s/M) (-1+\zeta(s-1)) \\ 
\text{because}\\ \sum_{k\ge 2} \frac{\lambda_k}{\mu_k^s} = \sum_{k\ge 2} k \times \frac{1}{k^s} = -1+\zeta(s-1)$$ for $\Re(s) &gt; 2.$</p> <p>The Mellin inversion integral for the first one is $$\frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} Q_1(s)/x^s ds$$ which we evaluate by shifting it to the right for an expansion about infinity. For the second one we get $$\frac{1}{2\pi i} \int_{5/2-i\infty}^{5/2+i\infty} Q_2(s)/x^s ds$$</p> <p>Now note that the first pole to the right of the abscissa of convergence is at $s=M$ and since $M\ge 3&gt;5/2&gt;3/2$ we may in fact join these two inversion integrals and write</p> <p>$$\frac{1}{2\pi i} \int_{5/2-i\infty}^{5/2+i\infty} (Q_2(s)-Q_1(s))/x^s ds$$ which is (the minus sign disappears because we are integrating clockwise in the right half-plane) $$\frac{1}{2\pi i} \int_{5/2-i\infty}^{5/2+i\infty} \frac{\pi}{M} \cot(\pi s/M) (\zeta(s-1)-\zeta(s))/x^s ds.$$ Observe that $$\mathrm{Res}\left(\cot(\pi s/M); s=qM\right) = \frac{M}{\pi}$$ with $q$ an integer. </p> <p>Collecting the residues at the poles at $s = qM$ in the right half plane and setting $x=1$ we obtain the convergent series for $S_2(1, M)-S_1(1, M)$</p> <p>$$\sum_{q\ge 1} \frac{\pi}{M} \frac{M}{\pi} (\zeta(qM-1)-\zeta(qM)) = \sum_{q\ge 1} (\zeta(qM-1)-\zeta(qM)).$$</p> <p>Returning to $N$ and the sum we started with we can confirm the earlier result that $$S(N) = 1 + \frac{1}{N+1} + \sum_{q\ge 1} (\zeta(q(N+1)-1)-\zeta(q(N+1))).$$</p>
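A numerical sanity check of the final identity for <span class="math-container">$N=2$</span> (the helper `zeta()` below is a crude Euler-Maclaurin evaluation of the zeta function; names and cutoffs are my own choices):

```python
# zeta(s): partial sum plus the first Euler-Maclaurin tail corrections.
def zeta(s, M=20000):
    tail = M ** (1 - s) / (s - 1) - 0.5 * M ** -s + s * M ** (-s - 1) / 12
    return sum(n ** -s for n in range(1, M + 1)) + tail

N, K = 2, 20000
# Direct sum of 1 / (1 + k + ... + k^N), with tail sum_{k>K} ~ 1/(K+1).
direct = sum(1.0 / sum(k ** n for n in range(N + 1)) for k in range(K + 1)) \
         + 1.0 / (K + 1)
# The zeta-series from the answer: 1 + 1/(N+1) + sum_q (zeta(q(N+1)-1) - zeta(q(N+1))).
series = 1 + 1.0 / (N + 1) + sum(zeta(q * (N + 1) - 1) - zeta(q * (N + 1))
                                 for q in range(1, 15))
print(direct, series)  # both ~ 1.798147
```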
1,883,459
<p>I am trying to make a block diagonal matrix from a given matrix by multiplying the given matrix to some other matrices. Say $A$ is an $N \times N$ matrix, I want to make an $A^\prime$ matrix with size $kN \times kN$ such that $A^\prime$ has $A$ as its diagonal element $k$ times. In fact $A^\prime$ is the direct sum of A as $A^\prime = \bigoplus_{i =1}^{k} A$.</p> <p>What I am looking for is what the elements of $B$ and $C$ should be to have $$ BAC =A^\prime . $$</p> <p><strong>Use case:</strong> Using $A^\prime$ in a linear optimization problem if $A$ can be transformed to $A^\prime$ linearly.</p>
Nick Alger
3,060
<p>As Med points out, it is generally not possible to do what you want with only one "matrix sandwich". However, it is possible with $k$ "matrix sandwiches" as follows:</p> <p>$$A' = \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix}A \begin{bmatrix} I &amp; 0 &amp; \dots &amp; 0 \end{bmatrix} + \begin{bmatrix} 0 \\ I \\ \vdots \\ 0 \end{bmatrix}A \begin{bmatrix} 0 &amp; I &amp; \dots &amp; 0 \end{bmatrix} + \dots + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ I \end{bmatrix}A \begin{bmatrix} 0 &amp; 0 &amp; \dots &amp; I \end{bmatrix},$$ where $0$ and $I$ are the zero matrix and identity matrix of the appropriate sizes, respectively. More succinctly, $$A' = B_1 A B_1^T + B_2 A B_2^T + \dots + B_k A B_k^T,$$ where the matrices $B_i$ are the tall rectangular matrices with zero and identity blocks as shown above.</p> <p>Also note that the map from $A$ to $A'$ is linear, and the <a href="https://en.wikipedia.org/wiki/Vectorization_(mathematics)" rel="nofollow">vectorization</a> of the map can be expressed using the <a href="https://en.wikipedia.org/wiki/Kronecker_product" rel="nofollow">Kronecker product</a> as follows: $$\text{vec}(A') = \left(B_1 \otimes B_1 + B_2 \otimes B_2 + \dots + B_k \otimes B_k\right)\text{vec}(A).$$</p>
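A small pure-Python sketch of the construction <span class="math-container">$A' = \sum_i B_i A B_i^T$</span> (the matrix helpers are mine):

```python
# Naive dense matrix product on nested lists.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# embed(i, k, N): B_i, the kN x N matrix with the identity in block row i (0-indexed).
def embed(i, k, N):
    return [[1 if r == i * N + c else 0 for c in range(N)] for r in range(k * N)]

def block_diag(A, k):
    N = len(A)
    out = [[0] * (k * N) for _ in range(k * N)]
    for i in range(k):
        B = embed(i, k, N)
        Bt = [list(row) for row in zip(*B)]
        term = matmul(matmul(B, A), Bt)   # B_i A B_i^T
        out = [[out[r][c] + term[r][c] for c in range(k * N)] for r in range(k * N)]
    return out

A = [[1, 2], [3, 4]]
print(block_diag(A, 2))  # [[1, 2, 0, 0], [3, 4, 0, 0], [0, 0, 1, 2], [0, 0, 3, 4]]
```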
1,555,429
<p>Hi, I am trying to find the sum of this series:</p> <p>$$ 11 + 2 + \frac 4 {11} + \frac 8 {121} + \cdots $$</p> <p>I know it's a geometric series, but I cannot find the pattern around this. </p>
Esperluet
293,654
<p>Your series is a geometric series with a common ratio of $q =\dfrac{2}{11}$ and a first term $a_0$ equal to $11$.<br> $-1 &lt; \dfrac{2}{11}&lt; 1$ so the series is convergent.<br> Its sum is equal to $$a_0 \cdot \dfrac{1}{1-q} = 11 \cdot \dfrac{1}{1-\dfrac{2}{11}} = \dfrac{121}{9}$$</p>
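An exact check with rationals (illustrative): the partial sums converge to the closed form <span class="math-container">$a_0/(1-q)=121/9$</span>.

```python
from fractions import Fraction

a0, q = Fraction(11), Fraction(2, 11)   # first term and common ratio
partial = sum(a0 * q ** n for n in range(60))  # (2/11)^60 is negligible
closed = a0 / (1 - q)
print(closed, float(closed))  # 121/9 ~ 13.444...
```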
3,735,904
<p><span class="math-container">$\mathbf{Question:}$</span> Prove that <span class="math-container">$(A\cap C)-B=(C-B)\cap A$</span></p> <p><span class="math-container">$\mathbf{My\ attempt:}$</span></p> <p>Looking at LHS, assuming <span class="math-container">$(A\cap C)-B \neq \emptyset$</span></p> <p>Let <span class="math-container">$x\in (A\cap C)-B$</span></p> <p>This implies <span class="math-container">$x\in A$</span> and <span class="math-container">$x\in C$</span> and <span class="math-container">$x\notin B$</span></p> <p>Looking at RHS, assuming <span class="math-container">$(C-B)\cap A \neq \emptyset$</span>,</p> <p>Let <span class="math-container">$y \in (C-B)\cap A$</span></p> <p>This implies <span class="math-container">$y\in C$</span> and <span class="math-container">$y\notin B$</span> and <span class="math-container">$y\in A$</span></p> <p>By comparing the LHS and RHS, we find that: <span class="math-container">$$ x,y\in A $$</span></p> <p><span class="math-container">$$ x,y\in C $$</span></p> <p><span class="math-container">$$ x,y\notin B $$</span></p> <p>Thus LHS = RHS.</p> <p>Is this correct?</p>
user0102
322,814
<p><span class="math-container">\begin{align*} (A\cap C) - B = (A\cap C)\cap\overline{B} = (C\cap\overline{B})\cap A = (C - B)\cap A \end{align*}</span></p>
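The one-line identity can also be brute-force checked on random subsets of a small universe (a sketch, not a proof):

```python
import random

random.seed(0)
U = range(20)
for _ in range(100):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert (A & C) - B == (C - B) & A   # the identity, via Python set algebra
print("identity holds on all random samples")
```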
4,551,674
<p>The question is:</p> <blockquote> <p><span class="math-container">$f: A\to R$</span> is a continuous, real-valued function, where <span class="math-container">$A\subseteq\mathbb{R}^n$</span>.</p> <p>If <span class="math-container">$f(x)\to\infty$</span> as <span class="math-container">$\|x\|\to\infty,$</span> show that <span class="math-container">$f$</span> attains a minimum.</p> </blockquote> <p>Where I’ve gotten so far is I’ve written down the definition of this limit, and that tells that for all <span class="math-container">$M &gt; 0$</span>, there exists some <span class="math-container">$L &gt; 0$</span> such that if <span class="math-container">$\|x\| &gt; L$</span>, then <span class="math-container">$f(x) &gt; M$</span>.</p> <p>I can kind of see that this means that I need to take some <span class="math-container">$[-L, L]\subseteq A$</span> and use E.V.T. for compact sets here, and prove that <span class="math-container">$f(x)$</span> needs to be larger in <span class="math-container">$[-L, L]^c$</span>, but I’m not really sure how to actually do any of that. Any help would be greatly appreciated.</p>
mathcounterexamples.net
187,663
<p>The result is false without further hypothesis on <span class="math-container">$A$</span>. For example if <span class="math-container">$n=1$</span> and <span class="math-container">$A=\mathbb Q \subset \mathbb R$</span>, then the map <span class="math-container">$f(x)=(x-\sqrt 2)^2$</span> satisfies the hypothesis of the question, but has no minimum.</p> <p>Suppose that <span class="math-container">$A= \mathbb R^n$</span>. Then according to the hypothesis on the limit of <span class="math-container">$f$</span> at <span class="math-container">$\infty$</span>, there exists <span class="math-container">$R \gt 0$</span> such that for <span class="math-container">$\lVert x \rVert \ge R$</span>, we have <span class="math-container">$f(x) \ge f(0) + 1$</span>. On the compact disk <span class="math-container">$D$</span> centered on the origin and of radius <span class="math-container">$R$</span>, <span class="math-container">$f$</span> is bounded and attains its minimum since it is continuous. This minimum is also a global minimum.</p>
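A one-dimensional illustration of the argument (the specific <span class="math-container">$f$</span> and radius <span class="math-container">$R$</span> below are my choice): outside <span class="math-container">$[-R,R]$</span> the function exceeds <span class="math-container">$f(0)+1$</span>, so the minimum over the compact interval is global.

```python
def f(x): return x * x - 4 * x    # f(x) -> infinity as |x| -> infinity

R = 5.0
# Spot-check that f(x) >= f(0) + 1 = 1 outside the disk of radius R.
assert all(f(x) >= f(0) + 1 for x in [-100.0, -10.0, -R, R, 10.0, 100.0])

grid = [-R + i * (2 * R) / 10000 for i in range(10001)]
xmin = min(grid, key=f)
print(xmin, f(xmin))  # ~ 2.0 -4.0: the global minimum, attained inside the disk
```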
3,989,591
<p>I read the following question in a book, but the book does not give a proof, and I doubt the correctness of the result.</p> <p>Let <span class="math-container">$p&gt;3$</span> be a prime number. Prove or disprove <span class="math-container">$$(x+1)^{2p^2}\equiv x^{2p^2}+\binom{2p^2}{p^2}x^{p^2}+1\pmod {p^2}\tag{1}$$</span></p> <p>I think that by the binomial theorem it would suffice to show <span class="math-container">$$\binom{2p^2}{k}\equiv 0\pmod {p^2},\quad k=1,2,\cdots,p^2-1,p^2+1,\cdots,2p^2-1\tag{2}$$</span> but on the other hand I think (2) is not right, because for <span class="math-container">$p=5,k=5$</span> we have <span class="math-container">$$\binom{2p^2}{k}=\binom{50}{5}=\dfrac{50\cdot 49\cdot 48\cdot 47\cdot 46}{5\cdot 4\cdot 3\cdot 2\cdot 1}\not\equiv 0\pmod {25},$$</span> so I think <span class="math-container">$(1)$</span> is not right?</p>
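The divisibility claim (2) is easy to check by machine for <span class="math-container">$p=5$</span> (an illustrative sketch):

```python
from math import comb

# For p = 5, list the k in (0, 2p^2) with k != p^2 whose binomial
# coefficient C(2p^2, k) is NOT divisible by p^2 = 25.
p = 5
bad = [k for k in range(1, 2 * p * p)
       if k != p * p and comb(2 * p * p, k) % (p * p) != 0]
print(bad[0], comb(50, 5) % 25)  # 5 10: k = 5 already violates (2)
```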
Kenta S
404,616
<p><span class="math-container">\begin{align} \int_0^2e^xdx&amp;=\lim_{n\to\infty}\sum_{i=0}^{n-1}\frac2{n}e^{2i/n}\\ &amp;=\lim_{n\to\infty}\frac2{n}\left(\frac{e^{2}-1}{e^{2/n}-1}\right)\\ &amp;=e^2-1. \end{align}</span></p>
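The computation above can be checked numerically: the left Riemann sums, and the closed geometric-sum form used in the second line, both approach <span class="math-container">$e^2-1$</span> (function names are mine).

```python
from math import e, exp

# Left Riemann sum of e^x on [0, 2] with n subintervals.
def left_sum(n):
    return sum((2 / n) * exp(2 * i / n) for i in range(n))

# The geometric-sum form (2/n) (e^2 - 1) / (e^{2/n} - 1) from the answer.
def closed_form(n):
    return (2 / n) * (e ** 2 - 1) / (exp(2 / n) - 1)

print(left_sum(1000), closed_form(1000), e ** 2 - 1)  # ~6.3827 ~6.3827 ~6.3891
```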
629,347
<p>I understand <strong>how</strong> to calculate the dot product of the vectors. But I don't actually understand <strong>what</strong> a dot product is, and <strong>why</strong> it's needed.</p> <p>Could you answer these questions?</p>
Michael Hoppe
93,935
<p>If the length of $B$ is $1$ then $\langle A,B\rangle$ is the coordinate of $A$ in direction $B$.</p> <p>There is a nice interpretation of the scalar product where $B$ has arbitrary length. Let $B=(b_1,b_2)$, then define $J(B):=(-b_2,b_1)$; you'll get $J(B)$ by rotating $B$ counterclockwise by $\pi/2$. Observe that $$\langle A, B\rangle=\det\bigl(A,J(B)\bigr),$$ that is: <em>the dot product is the (orientated) area of the parallelogram spanned by $A$ and $J(B)$.</em></p>
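The identity <span class="math-container">$\langle A,B\rangle=\det\bigl(A,J(B)\bigr)$</span> is easy to verify on random vectors (a quick sketch; function names are mine):

```python
import random

def dot(a, b): return a[0] * b[0] + a[1] * b[1]
def J(b): return (-b[1], b[0])              # rotate b counterclockwise by pi/2
def det(u, v): return u[0] * v[1] - u[1] * v[0]

random.seed(1)
for _ in range(100):
    A = (random.uniform(-5, 5), random.uniform(-5, 5))
    B = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(dot(A, B) - det(A, J(B))) < 1e-12
print("identity verified on all samples")
```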
2,956,330
<p>Hi all I think I found a new proof that <span class="math-container">$[0, 1]$</span> is compact but I am not 100% if it is correct, could you help me check? In the usual proof we just take the <code>sup{x in [0, 1] : [0, x] is covered by finitely many intervals}</code> etc.</p> <p>My proof goes like this:</p> <p>First we show that compactness for <span class="math-container">$[0, 1]$</span> is equivalent to <em>countable compactness</em>: if we have an uncountable open cover of <span class="math-container">$[0, 1]$</span> then let us first notice that if a point is covered by more than 2 intervals one of the intervals is contained in the union of the other two so we may as well assume that any point in <span class="math-container">$[0, 1]$</span> is covered by at most two intervals.</p> <p>Next, select a rational from inside every interval in the cover and note that since every rational is contained in at most two intervals by the above and since there are countably many rationals we get a countable cover.</p> <p>Finally, to show that <span class="math-container">$[0, 1]$</span> is countably compact, we can do the following : suppose <span class="math-container">$I_1, I_2, ..., I_n, ...$</span> is a countable cover such that <span class="math-container">$[0,1]$</span> is not covered by <span class="math-container">$I_1, ..., I_k$</span> for any natural k. Now <span class="math-container">$[0,1] - I_1$</span> is a union of at most two closed intervals. It must be the case that at least one of these two intervals is never covered by <span class="math-container">$I_1,...,I_k$</span> for any natural k. Select this interval, wlog let's call it <span class="math-container">$J_1$</span>. Next, <span class="math-container">$J_1-I_2$</span> is also a union of at most two closed intervals, one of which cannot be covered by <span class="math-container">$I_1,...,I_k$</span> for any natural k.Let's call this interval <span class="math-container">$J_2$</span>. 
Inductively we get <span class="math-container">$J_1 \supset J_2 \supset J_3$</span> ... a descending sequence of closed intervals whose intersection is non-empty and not covered by any <span class="math-container">$I_n$</span>, contradiction.</p> <p>Is this correct? I find this proof much more intuitive than the usual one, don't you agree?</p> <p>N.B. actually a student of mine found this proof</p>
Calum Gilhooley
213,690
<p>It's not clear to me how the selection procedure in your third and fourth paragraphs would work.</p> <p>Let <span class="math-container">$\alpha = 1/\sqrt{2}$</span>, and consider the cover of <span class="math-container">$[0, 1]$</span> by the uncountable collection consisting of (i) the open interval <span class="math-container">$K = (-1, 2)$</span>, (ii) all the open intervals <span class="math-container">$I(\beta) = (\beta, \beta + 1)$</span>, where <span class="math-container">$\alpha &lt; \beta &lt; 1$</span>, and (iii) all the open intervals <span class="math-container">$J(\gamma) = (\gamma - 1, \gamma)$</span>, where <span class="math-container">$0 &lt; \gamma &lt; \alpha$</span>.</p> <p>Every rational in <span class="math-container">$[0, 1]$</span> belongs to <span class="math-container">$K$</span>, and either infinitely many intervals <span class="math-container">$I(\beta)$</span> or infinitely many intervals <span class="math-container">$J(\gamma)$</span>.</p> <p>For rational <span class="math-container">$q &gt; \alpha$</span>, I cannot see why your procedure, whatever it is, might not be tricked into considering only three of the intervals <span class="math-container">$I(\beta)$</span>, for example <span class="math-container">$I((3\alpha + q)/4)$</span>, <span class="math-container">$I((\alpha + q)/2)$</span>, and <span class="math-container">$I((\alpha + 3q)/4)$</span>, and then choosing <span class="math-container">$I((3\alpha + q)/4)$</span> and <span class="math-container">$I((\alpha + 3q)/4)$</span>, perhaps because their union contains <span class="math-container">$I((\alpha + q)/2)$</span>. 
Similarly for rational <span class="math-container">$q &lt; \alpha$</span>, with <span class="math-container">$J$</span> in place of <span class="math-container">$I$</span>.</p> <p>If this were to happen, then <span class="math-container">$K$</span> would not be chosen (even when a rational number is selected from it), and the selected countable subcollection of open intervals would not cover <span class="math-container">$[0, 1]$</span>.</p> <p>Have I got the wrong end of the stick? Quite likely! But even so, some clarification seems to be needed.</p> <p>Everything depends on exactly how your words are to be interpreted. I hope I have not put too twisted an interpretation on them. I have tried not to enforce any single detailed interpretation, by mere "nitpicking".</p> <p>Can you spell out in more detail exactly what would happen to the uncountable cover that I defined above?</p>
2,956,330
<p>Hi all I think I found a new proof that <span class="math-container">$[0, 1]$</span> is compact but I am not 100% if it is correct, could you help me check? In the usual proof we just take the <code>sup{x in [0, 1] : [0, x] is covered by finitely many intervals}</code> etc.</p> <p>My proof goes like this:</p> <p>First we show that compactness for <span class="math-container">$[0, 1]$</span> is equivalent to <em>countable compactness</em>: if we have an uncountable open cover of <span class="math-container">$[0, 1]$</span> then let us first notice that if a point is covered by more than 2 intervals one of the intervals is contained in the union of the other two so we may as well assume that any point in <span class="math-container">$[0, 1]$</span> is covered by at most two intervals.</p> <p>Next, select a rational from inside every interval in the cover and note that since every rational is contained in at most two intervals by the above and since there are countably many rationals we get a countable cover.</p> <p>Finally, to show that <span class="math-container">$[0, 1]$</span> is countably compact, we can do the following : suppose <span class="math-container">$I_1, I_2, ..., I_n, ...$</span> is a countable cover such that <span class="math-container">$[0,1]$</span> is not covered by <span class="math-container">$I_1, ..., I_k$</span> for any natural k. Now <span class="math-container">$[0,1] - I_1$</span> is a union of at most two closed intervals. It must be the case that at least one of these two intervals is never covered by <span class="math-container">$I_1,...,I_k$</span> for any natural k. Select this interval, wlog let's call it <span class="math-container">$J_1$</span>. Next, <span class="math-container">$J_1-I_2$</span> is also a union of at most two closed intervals, one of which cannot be covered by <span class="math-container">$I_1,...,I_k$</span> for any natural k.Let's call this interval <span class="math-container">$J_2$</span>. 
Inductively we get <span class="math-container">$J_1 \supset J_2 \supset J_3$</span> ... a descending sequence of closed intervals whose intersection is non-empty and not covered by any <span class="math-container">$I_n$</span>, contradiction.</p> <p>Is this correct? I find this proof much more intuitive than the usual one, don't you agree?</p> <p>N.B. actually a student of mine found this proof</p>
DanielWainfleet
254,665
<p>I don't see how your 2nd step is guaranteed to produce a countable cover of all the irrationals in <span class="math-container">$[0,1].$</span></p> <p>I suggest this: Let <span class="math-container">$C$</span> be a cover of <span class="math-container">$[0,1]$</span> by open subsets of <span class="math-container">$\Bbb R.$</span> For each <span class="math-container">$x\in [0,1]$</span> choose a bounded open real interval <span class="math-container">$j(x)$</span> with rational end-points and such that <span class="math-container">$x\in j(x)\subset c$</span> for some <span class="math-container">$c\in C.$</span></p> <p>Now <span class="math-container">$D=\{j(x):x\in [0,1]\}$</span> is a subset of the family of open real intervals with rational end-points, so <span class="math-container">$D$</span> is countable. And <span class="math-container">$D$</span> is a cover of <span class="math-container">$[0,1].$</span> </p> <p>So if there exists a finite <span class="math-container">$E\subset D$</span> such that <span class="math-container">$\cup E\supset [0,1]$</span> then for each <span class="math-container">$e\in E$</span> choose <span class="math-container">$c_e\in C$</span> such that <span class="math-container">$e\subset c_e.$</span> Then <span class="math-container">$\{c_e:e\in E\}$</span> is a finite subset of <span class="math-container">$C$</span> and a cover of <span class="math-container">$[0,1]$</span>.</p> <p>In general: (i). A space whose topology has a countable base (basis) is called second-countable. And the family of all real open intervals with rational end-points is a countable base (basis) for the topology of <span class="math-container">$\Bbb R.$</span> If <span class="math-container">$Y$</span> is any subspace of a second-countable space then <span class="math-container">$Y$</span> is second-countable. (ii). A space for which every open cover has a countable sub-cover is called a Lindelof space. A second-countable space is always Lindelof. 
(iii). To prove that a space <span class="math-container">$X$</span> is compact, it suffices to find a base <span class="math-container">$B$</span> for <span class="math-container">$X$</span> such that any cover of <span class="math-container">$X$</span> by a subset of <span class="math-container">$B$</span> has a finite sub-cover. </p> <p>So it DOES suffice to prove that any cover of <span class="math-container">$[0,1]$</span> by a countable family of open real intervals has a finite sub-cover, and your proof of that is good.</p>
312,238
<p>Reading my textbook, I came across exercises for nested quantifiers.</p> <p>The question: Let $L(x, y)$ be the statement “$x$ loves $y$,” where the domain for both $x$ and $y$ consists of all people in the world. Use quantifiers to express each of these statements.</p> <p>i) Everyone loves himself or herself.</p> <p>Textbook answer: $$ \forall xL(x, x) $$</p> <p>Is this equivalent to my answer? : $$ \forall x\forall y((x=y)\to L(x,y)) $$</p>
Peter Smith
35,151
<p>Petr Pudlák's answer is of course exactly right about the equivalence, and he gives a pair of proofs which show why it holds. But it is worth remarking as a footnote that this equivalence is (of course!) only available if you are already using the language of first-order logic <em>with identity</em>. </p> <p>Now, it is natural and indeed pretty standard to (1) first introduce the language of first-order logic without identity and then (a later chapter!) (2) add the identity relation. Note, then, that "Everyone loves himself or herself" can already be perfectly well rendered into our formalism at stage (1): we <em>do not</em> have to have identity explicitly in the formal language to do the translation. </p> <p>It is a good principle, when rendering English into the language of FOL, to only expose as much structure as we need (translating as simply as we can, without unnecessarily going round the houses -- as with any translation). That is why the simpler translation already available at level (1) would be preferred as a <em>translation</em> to the unnecessary circumlocution of the more complex (though provably equivalent) sentence that becomes available at level (2). </p>
1,119,010
<p>Write down the assumptions in a form of clauses and give a resolution proof that the proposition $$\Big((p \rightarrow q) \land ( q \rightarrow r) \land p \Big) \rightarrow r$$ is a tautology.</p>
amWhy
9,003
<p>Well, you have $p$ in the antecedent, and you have $p\rightarrow q$, and together, by modus ponens, you get $q$. Now, $q$, with the implication $q\rightarrow r$ give you $r$, again, using modus ponens.</p> <p>So the conjunction in the antecedent (i.e. the three conjuncts in the antecedent) imply $r$. </p> <p>Can you now use this to complete your assignment? Remember, the main connective here is an implication. The only way it can be false is if the antecedent is true but the consequent is false. By using the above rules of inference, you are guaranteed that if the antecedent (and hence each of the conjuncts) is true, so is the conclusion. I.e, the implication cannot be false.</p> <p>Alternatively, you can proceed as follows (but you supply the justification for each line):$$\begin{align}\Big((p \rightarrow q) \land ( q \rightarrow r) \land p \Big) \rightarrow r &amp;\equiv \lnot \big((p\rightarrow q) \land (q\rightarrow r) \land p\big) \lor r\\ \\ &amp;\equiv \lnot\big((\lnot p \lor q) \land (\lnot q \lor r) \land p\big) \lor r\\ \\ &amp;\equiv (\lnot(\lnot p \lor q) \lor \lnot(\lnot q \lor r) \lor \lnot p)\lor r\\ \\ &amp;\equiv (p \land \lnot q) \lor (q \land \lnot r)\lor \lnot p \lor r\\ \\ &amp;\equiv [(\lnot p \lor p) \land (\lnot p \lor \lnot q)] \lor [(r\lor q) \land (r \lor \lnot r)] \\ \\ &amp;\equiv \lnot p \lor \lnot q \lor r \lor q\\ \\ &amp;\equiv T\end{align}$$</p>
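Not part of the original answer, but the tautology claim is also easy to confirm mechanically: the following Python sketch brute-forces all eight truth assignments (the helper names `implies` and `formula` are mine).

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

def formula(p, q, r):
    # ((p -> q) and (q -> r) and p) -> r
    antecedent = implies(p, q) and implies(q, r) and p
    return implies(antecedent, r)

# a tautology is true under every assignment of truth values
assert all(formula(p, q, r) for p, q, r in product([False, True], repeat=3))
```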
3,971,833
<p>If there is an <span class="math-container">$n$</span> by <span class="math-container">$n$</span> matrix where each element is either 1 or -1, how many unique matrices are there such that each row and each column multiplies to 1?</p> <p>I solved for the trivial case of <span class="math-container">$n = 2$</span>, which is 2. However, for larger <span class="math-container">$n$</span>, I'm not sure how to find a systematic way to count. I have tried using the fact that you can transform an existing correct matrix into another correct matrix by selecting 4 points that form a rectangle to all become negative, but I am not sure how I could count these in an organized way.</p>
Donald Splutterwit
404,247
<p>Each row with product <span class="math-container">$1$</span> can be filled with <span class="math-container">$1$</span>'s &amp; <span class="math-container">$-1$</span>'s in <span class="math-container">$2^{n-1}$</span> ways. Fill in the first <span class="math-container">$n-1$</span> rows &amp; then complete the last row to give each column a positive parity. The last row then automatically has product <span class="math-container">$1$</span> as well, since the product of all <span class="math-container">$n^2$</span> entries equals both the product of the row products and the product of the column products. So there are <span class="math-container">$\color{red}{2^{(n-1)^2}}$</span> ways to construct these matrices.</p>
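As a sanity check (my addition, not the answerer's), the count can be confirmed by exhaustive enumeration for small n in Python:

```python
from itertools import product
from math import prod

def count_parity_matrices(n):
    """Brute-force count of n-by-n matrices over {1, -1} in which
    every row product and every column product equals 1."""
    count = 0
    for entries in product([1, -1], repeat=n * n):
        rows = [entries[i * n:(i + 1) * n] for i in range(n)]
        if all(prod(r) == 1 for r in rows) and all(prod(c) == 1 for c in zip(*rows)):
            count += 1
    return count

# matches the claimed formula 2^((n-1)^2) for the cases we can enumerate
for n in [1, 2, 3]:
    assert count_parity_matrices(n) == 2 ** ((n - 1) ** 2)
```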
3,622,508
<p>I’m not sure exactly about the conditions needed for a subset <span class="math-container">$S$</span> to localise a ring <span class="math-container">$R$</span>. I know <span class="math-container">$S$</span> has to be multiplicative. But does <span class="math-container">$S$</span> also have to be a subset of the non-zero divisors of <span class="math-container">$R$</span> or does it have to be a subset of the group of units of <span class="math-container">$R$</span>?</p> <p>I can’t find a clear answer. </p>
Andrea Mori
688
<p>The only thing that is "forbidden" is that <span class="math-container">$0\in S$</span> or otherwise the localized ring is the <span class="math-container">$0$</span>-ring. Also you better have <span class="math-container">$1\in S$</span> (or to be precise in the saturation of <span class="math-container">$S$</span>) or else the canonical map <span class="math-container">$$ \phi:A\longrightarrow S^{-1}A,\qquad \phi(a)=\frac a1 $$</span> is not defined.</p> <p>Of course <span class="math-container">$S$</span> can contain zero-divisors, but be aware that when <span class="math-container">$A$</span> is not a domain the equivalence relation on <span class="math-container">$A\times S$</span> that leads to <span class="math-container">$S^{-1}A$</span> by taking quotient is <span class="math-container">$$ (a,s)\sim(a',s')\Longleftrightarrow\text{$t(as'-a's)=0$ for some $t\in S$}. $$</span> This is forced because if you ask just that <span class="math-container">$as'-a's=0$</span> (like when one constructs the field of fractions of a domain) you don't get a transitive relation.</p> <p>It is now straightforward to check that if <span class="math-container">$t\in S$</span> is a zero-divisor then <span class="math-container">$\phi(t)=0$</span>.</p>
1,158,970
<p>In the lecture notes <a href="http://users.jyu.fi/~pkoskela/quasifinal.pdf" rel="nofollow">http://users.jyu.fi/~pkoskela/quasifinal.pdf</a> (Prof. Koskela has made them freely available from his webpage, so I am guessing it is OK that I paste the link here) quasiconformality is defined by saying that $\displaystyle \limsup\limits_{r \rightarrow 0} \frac{L_{f}(x,r)}{l_{f}(x,r)}$ must be uniformly bounded in $x,$ where $\displaystyle L_{f}(x,r):=\sup\limits_{\vert x-y \vert \leq r} \{ \vert f(x)-f(y) \vert \}$ and $\displaystyle l_{f}(x,r):=\inf\limits_{\vert x-y \vert \geq r} \{ \vert f(x)-f(y) \vert \}.$</p> <p>I have three questions concerning this definition:</p> <p>1) The main question: When he proves that a conformal mapping is quasiconformal he says (at the beginning of page 5): "Thus, given a vector h, we have that $|Df(x, y)h| = |∇u||h|$. By the complex differentiability of f we conclude that: $\limsup\limits_{r \rightarrow 0} \dfrac{L_{f}(x,r)}{l_{f}(x,r)}=1$"</p> <p>And I don't quite understand how he did that step. Is he perhaps using the mean value theorem and the maximum modulus principle?</p> <p>2) Second question: Even accepting the previous argument, he only shows that conformal mappings are quasiconformal in dimension $2.$ How to do this in general? Also, is this definition the same if we replace $\vert x-y \vert \leq r$ and $\vert x-y \vert \geq r$ by $\vert x-y \vert =r$? The former bounds the latter trivially, but more than that I do not know.</p> <p>3) What would be a nice visual interpretation of a quasiconformal mapping? How would a map with possibly infinite distortion at some points look?</p> <p>Thanks</p>
matthew
233,974
<p>Regarding point (3), you could not have "infinite distortion at some points" and still have an injective function. But, you can have, say, points with arbitrarily large distortion as you approach the boundary of your domain. A relatively simple example on the unit disk is $f(z) = \text{Re}\left( \frac{i}{2} \text{Log}\left(\frac{i+z}{i-z}\right)\right) + i \text{ Im}\left( \frac{1}{2} \text{Log}\left(\frac{1+z}{1-z}\right)\right)$, which maps the disk onto a square as illustrated below. This particular example comes from the Poisson integral formula; it is locally quasiconformal but with unbounded distortion near the boundary. If you're curious, see <a href="http://www.jimrolf.com/explorationsInComplexVariables/bookChapters/Ch5.pdf" rel="nofollow noreferrer">http://www.jimrolf.com/explorationsInComplexVariables/bookChapters/Ch5.pdf</a> and <a href="http://www.jimrolf.com/explorationsInComplexVariables/bookChapters/Ch4.pdf" rel="nofollow noreferrer">http://www.jimrolf.com/explorationsInComplexVariables/bookChapters/Ch4.pdf</a> for more background and examples along these lines.<br> <img src="https://i.stack.imgur.com/EhDEU.jpg" alt="Image of unit disk under $f(z)$"></p>
3,628,159
<p>I have <span class="math-container">$1,2,\ldots, n$</span> numbers and I want pick <span class="math-container">$k$</span> of them with replacement and such that order matters. </p> <p>So for <span class="math-container">$n=10$</span> and <span class="math-container">$k=4$</span> I can get: <span class="math-container">$(1,2,2,4), (1,2,4,2), (1,2,3,10)$</span>,...etc</p> <p>I then have <span class="math-container">$n^k$</span> possible combinations. But now I only want to count <strong>the tuples which have a unique number</strong>. So <span class="math-container">$(1,2,2,4)$</span> and <span class="math-container">$(1,1,1,2)$</span> would be included but <span class="math-container">$(1,1,2,2)$</span> would not be included since both 1 and 2 are not unique? How do I count these?</p> <p>I figured that I can pick a number out of <span class="math-container">$n$</span> for the first element in my tuple and then the remaining <span class="math-container">$k-1$</span> elements out of the remaining <span class="math-container">$n-1$</span> numbers, so the number of combinations would be <span class="math-container">$n\,(n-1)^{k-1}$</span>. Since I have <span class="math-container">$k$</span> possible locations for the unique number I get <span class="math-container">$k\, n\, (n-1)^{k-1}$</span>. However, clearly I am counting some combinations multiple times and I am not sure how to discount them.</p>
drhab
75,923
<p>Assume that it is not true for every positive integer <span class="math-container">$n$</span>.</p> <p>Then according to WOP a positive integer <span class="math-container">$m$</span> exists such that <span class="math-container">$3\nmid4^m+5$</span> and <span class="math-container">$3\mid4^n+5$</span> for every positive integer <span class="math-container">$n$</span> that satisfies <span class="math-container">$n&lt;m$</span>. </p> <p>Evidently <span class="math-container">$m&gt;1$</span> so <span class="math-container">$m=k+1$</span> where <span class="math-container">$k$</span> is a positive integer that satisfies <span class="math-container">$k&lt;m$</span>.</p> <p>Then <span class="math-container">$4^k+5=3l$</span> for some integer <span class="math-container">$l$</span>.</p> <p>Then <span class="math-container">$4^m+5=4^{k+1}+5=4(4^k+5)-15=3(4l-5)$</span> contradicting that <span class="math-container">$3\nmid4^m+5$</span>.</p> <p>So a contradiction is found and we conclude that the assumption is wrong.</p>
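A quick empirical check of the divisibility claim and of the witness used in the proof (a Python sketch I have added for illustration):

```python
# 4^n + 5 should be divisible by 3 for every positive integer n
assert all((4 ** n + 5) % 3 == 0 for n in range(1, 1000))

# the witness l from the proof: if 4^k + 5 = 3*l, then 4^(k+1) + 5 = 3*(4*l - 5)
for k in range(1, 50):
    l = (4 ** k + 5) // 3
    assert 4 ** k + 5 == 3 * l
    assert 4 ** (k + 1) + 5 == 3 * (4 * l - 5)
```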
390,145
<p>Let p be a prime and a belong to Z. Find all solutions to the equation $$(x-a)^2(x-a-1) + p \equiv 0 \bmod{p^3}$$</p> <p>I'm having a hard time working with this as such few variables are given. We know p is prime and a is an integer, and we are solving for x. I tried letting another variable $y=x-a$, but that leaves me with $$y^2(y-1)=p(kp^2-1)$$ which tells me that either $p|y$ or $p|(y-1)$, but not much about the possible values of x... I'm uncertain whether it is looking for a set of numbers (and whether that set would be dependent on a and p? I couldn't imagine it not be), or if the answer is no solution. Any advice would be great.</p>
Ivan Loh
61,044
<p>Note that $p \|(x-a)^2(x-a-1)$, so $p \nmid x-a$ and $p \| x-a-1$. Write $x-a=rp+1$, where $p \nmid r$. We have $(rp+1)^2rp+p \equiv 0 \pmod{p^3}$, so $0 \equiv (rp+1)^2r+1 \equiv 2r^2p+(r+1) \pmod{p^2}$. Thus $p \mid r+1$. Write $r=sp-1$. We have $2(sp-1)^2p+sp \equiv 0\pmod{p^2}$, so $0 \equiv 2(sp-1)^2+s \equiv 2+s \pmod{p}$. Therefore we can write $s=tp-2$. </p> <p>We now have $x=a+rp+1=a+(sp-1)p+1=a+((tp-2)p-1)p+1=tp^3-2p^2-p+(a+1)$, where $t \in \mathbb{Z}$. In other words, $x \equiv -2p^2-p+(a+1) \pmod{p^3}$.</p>
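The closed form can be verified numerically. The Python sketch below (my addition) brute-forces all residues mod $p^3$ and checks that exactly one solution exists and that it matches $x \equiv -2p^2-p+(a+1) \pmod{p^3}$:

```python
def solutions(p, a):
    # all residues x mod p^3 with (x-a)^2 * (x-a-1) + p ≡ 0 (mod p^3)
    m = p ** 3
    return [x for x in range(m) if ((x - a) ** 2 * (x - a - 1) + p) % m == 0]

for p in [3, 5, 7]:
    for a in [0, 1, 10]:
        m = p ** 3
        expected = (-2 * p * p - p + a + 1) % m
        # the derivation says the solution is unique mod p^3
        assert solutions(p, a) == [expected]
```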
390,145
<p>Let p be a prime and a belong to Z. Find all solutions to the equation $$(x-a)^2(x-a-1) + p \equiv 0 \bmod{p^3}$$</p> <p>I'm having a hard time working with this as such few variables are given. We know p is prime and a is an integer, and we are solving for x. I tried letting another variable $y=x-a$, but that leaves me with $$y^2(y-1)=p(kp^2-1)$$ which tells me that either $p|y$ or $p|(y-1)$, but not much about the possible values of x... I'm uncertain whether it is looking for a set of numbers (and whether that set would be dependent on a and p? I couldn't imagine it not be), or if the answer is no solution. Any advice would be great.</p>
Robert Israel
8,508
<p>First of all, consider it mod $p$. Either $x \equiv a$ or $a+1 \mod p$. Then write $x = a + t p$ or $x = a+1 + tp$ ...</p>
1,031,632
<p>I have a problem with the sum:</p> <p>$$ \sum_{k=0}^n \dbinom{n}{k}(\cos \alpha)^k(i\sin \alpha)^{n-k}\,\, $$ Apparently, I have an imaginary unit, so I need to distinguish even and odd powers of $i$; to do so I need to introduce $2k$ as in: $$ \sum_{k=0}^n f(k) = \sum_{k=0}^{n/2} g(2k) $$ and eventually find $g$ starting from $f$.</p> <p>The goal of the exercise is to separate the real part and the imaginary part of this sum to find real expressions for $\sin (n\alpha)$ and $\cos (n\alpha)$</p>
epi163sqrt
132,007
<blockquote> <p><em>Hint:</em> Using a slightly <em>extended version</em> of the binomial coefficient (see e.g. <a href="http://www.math.upenn.edu/~wilf/gfology2.pdf" rel="nofollow">Wilf</a> p.15) with </p> <p>$$\binom{n}{k}=0\qquad k&gt;n$$</p> <p>the calculation can be written more compactly:</p> </blockquote> <p>\begin{align*} (\cos \alpha + i \sin \alpha)^n&amp;= \sum_{k=0}^n\binom{n}{k}(i\sin\alpha)^k(\cos\alpha)^{n-k}\\ &amp;=\sum_{k=0}^{n}\binom{n}{2k}(-1)^k(\sin\alpha)^{2k} (\cos\alpha)^{n-2k}\tag{1}\\ &amp;\qquad + i\sum_{k=0}^{n}\binom{n}{2k+1}(-1)^k(\sin\alpha)^{2k+1} (\cos\alpha)^{n-(2k+1)}\tag{2} \end{align*}</p> <blockquote> <p>Observe that $\binom{n}{2k}=0$ if $2k &gt; n$ in (1) and a similar argument applies to (2).</p> </blockquote>
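Numerically, the two sums in (1) and (2) do reproduce $\cos(n\alpha)$ and $\sin(n\alpha)$. Here is a short Python check I have added (note that `math.comb` already follows the convention $\binom{n}{k}=0$ for $k>n$ used above):

```python
from math import comb, cos, sin, isclose

def cos_n(n, a):
    # real part (1): sum over even indices 2k
    return sum(comb(n, 2 * k) * (-1) ** k * sin(a) ** (2 * k) * cos(a) ** (n - 2 * k)
               for k in range(n + 1))

def sin_n(n, a):
    # imaginary part (2): sum over odd indices 2k+1
    return sum(comb(n, 2 * k + 1) * (-1) ** k * sin(a) ** (2 * k + 1) * cos(a) ** (n - 2 * k - 1)
               for k in range(n + 1))

for n in range(1, 10):
    for a in [0.3, 1.1, 2.5]:
        assert isclose(cos_n(n, a), cos(n * a), abs_tol=1e-9)
        assert isclose(sin_n(n, a), sin(n * a), abs_tol=1e-9)
```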
4,105,812
<p>could someone help me check if my proof is valid?</p> <p>Use direct proof to prove the following theorem: <span class="math-container">$$ A \lor (B \rightarrow A), B \vdash_R A $$</span></p> <p>We aren't allowed to use proof by resolution, we can only use logic axioms and inference rules such as hypothetical and disjunctive syllogism, constructive and destructive dilemma, modus ponens and modus tolens. Also, we can use similar equivalencies like contraposition <span class="math-container">$(A \Rightarrow B) \Leftrightarrow (\lnot B \Rightarrow \lnot A)$</span></p> <p>Here is my proof:</p> <ol> <li><span class="math-container">$A \lor(B \rightarrow A)$</span>, premise</li> <li><span class="math-container">$B$</span>, premise</li> </ol> <ol start="3"> <li><span class="math-container">$\lnot A$</span>, (assumption)</li> <li><span class="math-container">$\lnot A \rightarrow(B \rightarrow A)$</span>, elimination of disjunction from (1)</li> <li><span class="math-container">$B\rightarrow A $</span>, (modus ponens from (3), (4))</li> <li><span class="math-container">$A$</span>, (modus ponens from (2), (5))</li> </ol> <p>The reason I'm asking is because I'm not sure if it is valid, since I made an assumption that A is incorrect and used it as a premise until I got to a contradiction at <span class="math-container">$6.$</span></p> <p>Since I have used an incorrect assumption as a premise, should I start anew but using the assumption that A is true, albeit me getting a contradiction?</p>
Lutz Lehmann
115,115
<p>Take the derivative of the equation and test if the coefficient of <span class="math-container">$y''$</span> can be divided out, modulo the original equation. If the equation can be transformed into a Clairaut equation, then this should work. <span class="math-container">\begin{align} x(y')^2+yy'&amp;=\frac3{5x^2}\\ (2xy'+y)y''+2(y')^2&amp;=-\frac6{5x^3}=-2\left((y')^2+\frac{yy'}x\right)\\ (2xy'+y)(xy''+2y')&amp;=0 \end{align}</span> So on segments where the first factor is zero one has <span class="math-container">$y'=-\frac{y}{2x}$</span>, hence <span class="math-container">$yy'=-2x(y')^2$</span>; inserting this into the original equation gives <span class="math-container">$$ \frac3{5x^2}=\frac12yy'=-\frac{y^2}{4x}\implies y=\pm2\sqrt{-\frac3{5x}}, $$</span> which is real only for <span class="math-container">$x&lt;0$</span>. On segments where the second factor is zero, <span class="math-container">$x^2y'=C$</span> can be inserted, giving <span class="math-container">$$ y=\frac{3}{5x^2y'}-xy'=\frac{3}{5C}-\frac{C}{x} $$</span></p>
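The one-parameter family can be verified by plugging $y = \frac{3}{5C} - \frac{C}{x}$ back into the equation. A small numerical Python sketch (my addition, not part of the answer):

```python
from math import isclose

def residual(C, x):
    # y = 3/(5C) - C/x has y' = C/x^2; evaluate x*(y')^2 + y*y' - 3/(5x^2),
    # which should vanish identically
    y = 3 / (5 * C) - C / x
    yp = C / x ** 2
    return x * yp ** 2 + y * yp - 3 / (5 * x ** 2)

for C in [0.5, 1.0, -2.0]:
    for x in [0.3, 1.0, 4.0]:
        assert isclose(residual(C, x), 0.0, abs_tol=1e-9)
```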
52,874
<p>Consider a coprime pair of integers $a, b.$ As we all know ("Bezout's theorem") there is a pair of integers $c, d$ such that $ac + bd=1.$ Consider the smallest (in the sense of Euclidean norm) such pair $c_0, d_0$, and consider the ratio $\frac{\|(c_0, d_0)\|}{\|(a, b)\|}.$ The question is: what is the statistics of this ratio as $(a, b)$ ranges over all <em>visible</em> pairs in, for example, the square $1\leq a \leq N, 1 \leq b \leq N?$</p> <p>Experiment shows the following amazing histogram:<img src="https://dl.dropbox.com/u/5188175/histogram.jpg" alt="alt text"></p> <p><strong>EDIT</strong> by popular demand: the histogram is for an experiment for $N=1000.$ The $x$ axis is the ratio, the $y$ axis is the number of points in the bin. The total number of points is $1000000/\zeta(2),$ so there are $100$ bins each with around $6000$ points.</p> <p>But no immediate proof leaps to mind.</p>
Gerry Myerson
3,684
<p>I did a little experiment. Fix $a=29$, let $b=1,2,\dots,28$. So, you get 28 data points. Well, these points are already extremely regularly distributed. Taking just the first half, $1\le b\le14$, and rearranging the ratios in increasing order, they are (to three decimals) $$.034,.069,.103,.138,.172,.207,.242,.275,.310,.345,.379,.414,.448,.483$$ To three decimals, and modulo round-off errors, these are the numbers $1/29,2/29,\dots,14/29$, which is to say they are about as regularly distributed as possible. The ratios for $15\le b\le28$ are essentially the same numbers - in fact, the ratio for $(a,b)$ seems to be pretty nearly the ratio for $(b-a,b)$. </p> <p>If what's happening for 29 happens in general, I think it would explain the original histogram. </p> <p>EDIT: So I think I see what's going on. We're looking at the numbers $$\sqrt{c^2+d^2\over a^2+b^2}$$ But $b$ is very close to $-ac/d$ (since $ac+bd=1$), so these numbers are very close to $$\sqrt{c^2+d^2\over a^2+(ac/d)^2}$$ which simplifies to $|d|/a$. For fixed $a$, as $b$ runs through the units modulo $a$, so does $d$, since $bd\equiv1\pmod a$. So our ratios are as uniformly distributed as the fractions $|d|/a$, which is very. </p>
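The observation that the ratio is essentially $|d|/a$ is easy to reproduce. Below is a Python reconstruction of the experiment (my own sketch, not the answerer's code); it finds the minimal-norm Bezout pair via the extended Euclidean algorithm and the fact that all solutions have the form $(c_0+bt,\,d_0-at)$:

```python
from math import hypot

def min_bezout(a, b):
    """Smallest-norm (c, d) with a*c + b*d == 1, assuming gcd(a, b) == 1."""
    # extended Euclid for one particular solution (c0, d0)
    old_r, r, old_c, c, old_d, d = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_c, c = c, old_c - q * c
        old_d, d = d, old_d - q * d
    c0, d0 = old_c, old_d
    # general solution (c0 + b*t, d0 - a*t); the norm is quadratic in t,
    # minimised near t* = (a*d0 - b*c0) / (a^2 + b^2)
    t_star = (a * d0 - b * c0) / (a * a + b * b)
    candidates = [int(t_star) - 1, int(t_star), int(t_star) + 1]
    return min(((c0 + b * t, d0 - a * t) for t in candidates),
               key=lambda cd: hypot(*cd))

a = 29
for b in range(1, a):
    c, d = min_bezout(a, b)
    assert a * c + b * d == 1
    ratio = hypot(c, d) / hypot(a, b)
    # the answer's observation: the ratio is very close to |d| / a
    assert abs(ratio - abs(d) / a) < 0.05
```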
1,567,152
<blockquote> <p>Theorem: $X$ is a finite Hausdorff. Show that the topology is discrete.</p> </blockquote> <p>My attempt: $X$ is Hausdorff then $T_2 \implies T_1$ Thus for any $x \in X$ we have $\{x\}$ is closed. Thus $X \setminus \{x\}$ is open. Now for any $y\in X \setminus \{x\}$ and $x$ using Hausdorff property, we get $\{x\}$ is open. Am I right till here? And how to proceed further? </p>
skyking
265,767
<p>You're a bit sloppy in assuming that $\{x\}$ is open.</p> <p>The thing you have to prove is that any subset of $X$ is open. This is quite straightforward, as every subset of $X$ is $X$ minus a finite number of points; if it is not $X$ itself (which is open anyway), it is $X$ minus a finite positive number of points. That is, you can write the subset as a finite intersection:</p> <p>$$\bigcap_j \left(X\setminus \{x_j\}\right)$$</p> <p>but each set $X\setminus \{x_j\}$ is open, as you pointed out. And a finite intersection of open sets is open. So any subset of $X$ is therefore open.</p> <p>The same reasoning can be used in particular to prove that $\{x\}$ is open, but we can prove the topology to be discrete directly here.</p>
1,567,152
<blockquote> <p>Theorem: $X$ is a finite Hausdorff. Show that the topology is discrete.</p> </blockquote> <p>My attempt: $X$ is Hausdorff then $T_2 \implies T_1$ Thus for any $x \in X$ we have $\{x\}$ is closed. Thus $X \setminus \{x\}$ is open. Now for any $y\in X \setminus \{x\}$ and $x$ using Hausdorff property, we get $\{x\}$ is open. Am I right till here? And how to proceed further? </p>
Community
-1
<p>Let <span class="math-container">$X$</span> be a finite Hausdorff space. Let <span class="math-container">$x\in X$</span>. For each <span class="math-container">$y\not =x\in X$</span> let <span class="math-container">$U_y$</span> and <span class="math-container">$V_y$</span> be disjoint open sets with <span class="math-container">$x\in U_y$</span> and <span class="math-container">$y \in V_y$</span>. Set <span class="math-container">$V=\cup_{y\not = x} V_y$</span>. Then <span class="math-container">$V$</span> is open... So <span class="math-container">$X\setminus V=\{x\}$</span> is closed.</p> <p>Thus every point in <span class="math-container">$X$</span> is closed. Since <span class="math-container">$X$</span> is finite, every point is also open (its complement is a finite union of closed sets).</p> <p>We could actually say that every finite <span class="math-container">$T_1$</span> space is discrete, since points being closed is equivalent to being <span class="math-container">$T_1$</span>.</p>
3,235,300
<p>I tried the following: whenever <span class="math-container">$x &gt; y$</span>, this implies <span class="math-container">$p(x) - p(y) = (5/13)^x (1-(13/5)^{(x-y)}) + (12/13)^x (1- (13/12)^{(x-y)}) &gt; 0 $</span>. But here I don't understand why the answer is no.</p>
auscrypt
675,509
<p>No, it is in fact strictly decreasing. Note that <span class="math-container">$\frac{5}{13}$</span> and <span class="math-container">$\frac{12}{13}$</span> are both less than <span class="math-container">$1$</span>, and so <span class="math-container">$\left(\frac{5}{13}\right)^x$</span> and <span class="math-container">$\left(\frac{12}{13}\right)^x$</span> are both strictly decreasing. This implies that their sum is strictly decreasing; the constant term in <span class="math-container">$p(x)$</span> is irrelevant.</p>
2,405,505
<p>How to prove that the infinite product $\prod_{n=1}^{+\infty} \left(1-\frac{1}{2n^2}\right)$ is positive ?</p> <p>Thanks</p>
Zhihao.Lu
474,641
<p>For $n\ge 2$ we have $\left(1-\dfrac{1}{2n^2}\right)^2=1-\dfrac{1}{n^2}+\dfrac{1}{4n^4}&gt;1-\dfrac{1}{n^2}$, hence $1-\dfrac{1}{2n^2}&gt;\sqrt{1-\dfrac{1}{n^2}}$.</p> <p>Since $\prod_{n=2}^{N}\left(1-\dfrac{1}{n^2}\right)=\dfrac{N+1}{2N}&gt;\dfrac12$, the partial products satisfy $$\prod_{n=1}^{N}\left(1-\frac{1}{2n^2}\right)&gt;\frac12\sqrt{\frac{N+1}{2N}}&gt;\frac{1}{2\sqrt{2}}.$$ The partial products are decreasing (each factor is less than $1$) and bounded below by the positive constant $\dfrac{1}{2\sqrt{2}}$, so the limit exists and is positive.</p>
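Numerically, the partial products of this infinite product do settle down to a positive limit, about 0.358. A quick Python check (my addition, not part of the answer):

```python
from math import sqrt

partial = 1.0
prev = float("inf")
for n in range(1, 100001):
    partial *= 1 - 1 / (2 * n * n)
    assert partial < prev          # partial products strictly decrease
    prev = partial

# they stay bounded away from 0 (the limit is about 0.358)
assert partial > 1 / (2 * sqrt(2))
```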
4,286,136
<p>I'm trying to find the general solution to <span class="math-container">$xy' = y^2+y$</span>, although I'm unsure as to whether I'm approaching this correctly.</p> <p>What I have tried:</p> <p>dividing both sides by x and substituting <span class="math-container">$u = y/x$</span> I get:</p> <p><span class="math-container">$$y' = u^2x^2+u$$</span></p> <p>Then substituting <span class="math-container">$y' = u'x + u$</span> I get the following: <span class="math-container">$$u'x+u = u^2x^2+u \implies u' = u^2x \implies \int\frac{du}{u^2}=\int x dx$$</span> Proceeding on with simplification after integration: <span class="math-container">$$\frac{1}{u}=\frac{x^2}{2}+c\implies y = \frac{2x}{x^2+c}$$</span></p> <p>However, the answer shows <span class="math-container">$y=\frac{x}{(c-x)}$</span></p>
Botnakov N.
452,350
<p>You say that <span class="math-container">$$y' = u^2x^2+u$$</span> but <span class="math-container">$$y' = \frac{y^2+y}{x} = \bigg(\frac{y}{x}\bigg)^2 x + u = u^2 x +u.$$</span> So here's a mistake.</p> <p>The right solution:</p> <p><span class="math-container">$$xdy = (y^2 + y)dx$$</span> <span class="math-container">$$\frac{dy}{y^2+y} = \frac{dx}{x}$$</span> <span class="math-container">$$\frac{dy}{y} - \frac{dy}{y+1} = \frac{dx}{x}$$</span> <span class="math-container">$$\ln|y| - \ln|y+1| = \ln|x| + C_1$$</span> <span class="math-container">$$\frac{y}{y+1} = C x$$</span> <span class="math-container">$$1-\frac{1}{y+1} = C x$$</span> <span class="math-container">$$1-Cx = \frac{1}{y+1}$$</span> and so on.</p>
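Solving the last line for $y$ gives $y = \frac{Cx}{1-Cx}$, which we can sanity-check numerically. A Python sketch I have added (away from the pole $x = 1/C$):

```python
from math import isclose

def residual(C, x):
    # y = C*x / (1 - C*x) should satisfy x*y' = y^2 + y
    y = C * x / (1 - C * x)
    yp = C / (1 - C * x) ** 2   # derivative of y by the quotient rule
    return x * yp - (y * y + y)

for C in [0.5, -1.0, 2.0]:
    for x in [0.1, 0.3, -0.4]:
        if abs(1 - C * x) > 1e-9:   # stay away from the pole x = 1/C
            assert isclose(residual(C, x), 0.0, abs_tol=1e-9)
```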
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
Matt Zaremsky
164,670
<p>The &quot;Cleary group&quot; <span class="math-container">$F_\tau$</span> is a version of Thompson's group <span class="math-container">$F$</span>, introduced by Sean Cleary, that is defined using the golden ratio, and it's definitely of interest in the world of Thompson's groups. See <em><a href="https://arxiv.org/abs/1806.00108" rel="noreferrer">An Irrational-slope Thompson's Group</a></em> ( Publ. Mat. 65(2): 809-839 (2021). DOI: 10.5565/PUBLMAT6522112 ). Very roughly, where <span class="math-container">$F$</span> arises by &quot;cutting things in half&quot;, <span class="math-container">$F_\tau$</span> arises in an analogous way by &quot;cutting things using the golden ratio&quot;. There are lots of similarities between <span class="math-container">$F_\tau$</span> and <span class="math-container">$F$</span>, but also plenty of mysteries, for example I believe it's still open whether <span class="math-container">$F_\tau$</span> embeds into <span class="math-container">$F$</span> (i.e., whether there exists a subgroup of <span class="math-container">$F$</span> isomorphic to <span class="math-container">$F_\tau$</span>).</p>
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
wlad
75,761
<p>Consider the function which is the limit of the sequence <span class="math-container">$x^{2^0}, \sqrt{1 + x^{2^1}}, \sqrt{1 + \sqrt{1 + x^{2^2}}}, \dotsc$</span> Call this function <span class="math-container">$U(x)$</span>, with domain <span class="math-container">$\{x \in \mathbb R \mid x \geq 0\}$</span>. Notice that for <span class="math-container">$x \in [0,1]$</span>, <span class="math-container">$U(x)$</span> is equal to the golden ratio. It can be shown that for <span class="math-container">$x &gt; 1$</span>, <span class="math-container">$U(x)$</span> is greater than the golden ratio and <span class="math-container">$U(x) \sim x$</span>. Finally, this function can be used to show that problem (I) is equivalent to problem (II).</p> <p>Problem (I): Find the limit, and upper bound on convergence rate, for the sequence <span class="math-container">$\sqrt{u_1}, \sqrt{u_1 + \sqrt{u_2}}, \sqrt{u_1 + \sqrt{u_2 + \sqrt{u_3}}}$</span>, etc. where all <span class="math-container">$u_n$</span> are non-negative.</p> <p>Problem (II): Find a way to compute the terms of the sequence <span class="math-container">$n \mapsto \sup_{k \geq 0} u_{n+k}^{2^{-(n+k)}}$</span>.</p> <p>This is a &quot;constructive&quot; analogue of Herschfeld's Convergence Theorem. Herschfeld's theorem gives a necessary and sufficient condition for an infinite radical to converge. The above gives a necessary and sufficient condition for being able to compute what an infinite radical converges to. The two necessary and sufficient conditions are quite similar.</p> <p>Postscript: There is a sense in which <span class="math-container">$U(x)$</span> is a family of <em>transfinite radicals</em>, which generalises the notion of an infinite radical. For more, see the preprint <em>Constructive proof of Herschfeld's Convergence Theorem</em>, arXiv:<a href="https://arxiv.org/abs/1907.02700" rel="nofollow noreferrer">1907.02700</a> by R Gutin</p>
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
Alexandre Eremenko
25,510
<p>Yes, it does:</p> <p>Lyubich, Mikhail; Milnor, John The Fibonacci unimodal map. J. Amer. Math. Soc. 6 (1993), no. 2, 425–457.</p> <p>and two more papers of the same authors studying what they call Fibonacci map.</p>
403,184
Roland Bacher
4,556
<p>The golden ratio occurs in the asymptotic growth-rate for the number of numerical semigroups of given genus:</p> <p>A numerical semigroup of genus <span class="math-container">$g$</span> is a subset <span class="math-container">$S$</span> of <span class="math-container">$\mathbb N=\{0,1,2,\ldots\}$</span> such that <span class="math-container">$S=S+S$</span> and <span class="math-container">$S$</span> lacks exactly <span class="math-container">$g$</span> elements of <span class="math-container">$\mathbb N$</span>.</p> <p>The number <span class="math-container">$n(g)$</span> of numerical semigroups of genus <span class="math-container">$g$</span> is easily shown to be finite and is asymptotically given by <span class="math-container">$C \omega^g$</span> for some constant <span class="math-container">$C$</span> with <span class="math-container">$\omega=(1+\sqrt{5})/2$</span> the golden number. This was conjectured by M. Bras-Amoros, <em>Fibonacci-like behaviour of the number of numerical semigroups of a given genus</em>, Semigroup Forum <strong>76</strong>, No 2, 379--384 (2008), and proven by A. Zhai, <em>Fibonacci-like growth of numerical semigroups of a given genus</em>, Semigroup Forum <strong>86</strong>, No 3, 634--662 (2013). A different and hopefully more comprehensive proof (sorry, self-promotion) is contained in <a href="https://arxiv.org/abs/2105.04200" rel="noreferrer">https://arxiv.org/abs/2105.04200</a>.</p>
403,184
David Richter
11,264
<p>See the article &quot;<a href="https://doi.org/10.1007/BF01388660" rel="nofollow noreferrer">Generalized Dehn–Sommerville relations for polytopes, spheres and Eulerian partially ordered sets</a>&quot; by Margaret Bayer and Louis Billera. These authors aimed to extend the Dehn–Sommerville equations to homology spheres and face lattices of non-simplical convex polytopes. One of their results is that the dimension of the affine hull of a certain set of flag vectors for an Eulerian poset of rank <span class="math-container">$d$</span> is the <span class="math-container">$d$</span>th Fibonacci number.</p>
403,184
Sylvain JULIEN
13,625
<p>I'm not sure whether it qualifies or not, but I found a link between the golden ratio and the Riemann Hypothesis some years ago: <a href="https://math.stackexchange.com/questions/2380532/is-there-a-hidden-connection-between-rh-and-the-golden-ratio">Is there a hidden connection between RH and the golden ratio?</a>.</p> <p>No matter how anecdotic it might be, I nevertheless find this aesthetically pleasant.</p>
1,147,808
<p>I am trying to prove that $2^{n-1}$ elements of the field $\mathbf{F}_{2^{n}}$ have trace $1$, while the other $2^{n-1}$ elements have trace $0$.</p> <p>I started by showing that $\operatorname{Trace}(1) = 1$, and I tried to use the additivity of the trace, but I wasn't successful. Any advice?</p>
Timbuc
118,527
<p>This is just linear algebra: the trace map is a linear functional $\;\Bbb F_{2^n}\to\Bbb F_2\;$, and since the extension $\;\Bbb F_{2^n}/\Bbb F_2\;$ is separable it is <strong>not</strong> the zero functional (or just show there's some element with trace different from zero), from which it follows that it is onto (this much is true for <em>any</em> nonzero linear functional on <em>any</em> vector space), and thus $\;\dim\ker\,Tr.=n-1\;$, which means that</p> <p>$$|\ker Tr.|=|\Bbb F_{2^{n-1}}|=2^{n-1}$$</p> <p>and we're done.</p>
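<p>For a concrete brute-force illustration (a Python sketch, not part of the argument above): in <span class="math-container">$\Bbb F_{2^4}$</span>, modelled as bit-vectors modulo the irreducible polynomial <span class="math-container">$x^4+x+1$</span>, the trace <span class="math-container">$\operatorname{Tr}(x)=x+x^2+x^4+x^8$</span> takes each of the values <span class="math-container">$0$</span> and <span class="math-container">$1$</span> exactly <span class="math-container">$8=2^{4-1}$</span> times.</p>

```python
def gf_mul(a, b, mod=0b10011, n=4):
    # multiply in GF(2^n): carry-less product reduced by the irreducible
    # polynomial encoded in `mod` (default x^4 + x + 1 for GF(16))
    r = 0
    for _ in range(n):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= mod
    return r

def trace(x, mod=0b10011, n=4):
    # Tr(x) = x + x^2 + x^4 + ... + x^(2^(n-1)); the field sum (XOR) lands in {0, 1}
    t, y = 0, x
    for _ in range(n):
        t ^= y
        y = gf_mul(y, y, mod, n)
    return t

traces = [trace(x) for x in range(16)]
assert traces.count(0) == 8 and traces.count(1) == 8
```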
426,306
<p>If <span class="math-container">$K = \mathbb{Q}(\sqrt{d})$</span> is a real quadratic field, then any unit <span class="math-container">$u \in \mathcal{O}_K^\times$</span> with <span class="math-container">$u &gt; 1$</span> must not be too small: indeed, such a <span class="math-container">$u = u_1 + u_2 \sqrt{d}$</span> with <span class="math-container">$u_1, u_2 &gt; 0$</span> must satisfy <span class="math-container">$u_1^2 - d u_2^2 = \pm 1$</span>, so <span class="math-container">$u_1 \gg \sqrt{d}$</span>, say. Thus the gap between the smallest unit <span class="math-container">$u \in \mathcal{O}_K^\times$</span> and <span class="math-container">$1$</span> must be quite large.</p> <p>This seems a particular quirk of fields whose unit group is 1-generated. For real cubic fields, whose unit group is 2-generated, there seems to be substantial possibility for cancellation and even possible for the smallest unit <span class="math-container">$u &gt; 1$</span> to be arbitrarily close to <span class="math-container">$1$</span> as the fields vary.</p> <p>My question is this: let <span class="math-container">$K$</span> be a cyclic cubic field (in particular, necessarily totally real) having discriminant <span class="math-container">$d_K = c_K^2$</span>. Let <span class="math-container">$u_K \in \mathcal{O}_K^\times$</span> be the smallest (in terms of usual archimedean valuation) element satisfying <span class="math-container">$u_K &gt; 1$</span>. Write</p> <p><span class="math-container">$$\displaystyle u_K = 1 + \kappa_K.$$</span></p> <p>Can one effectively bound <span class="math-container">$\kappa_K$</span> in terms of <span class="math-container">$d_K$</span> (or equivalently, <span class="math-container">$c_K$</span>)?</p>
KConrad
3,272
<p>Asking about a smallest unit bigger than <span class="math-container">$1$</span> in a unit group of rank greater than <span class="math-container">$1$</span> feels like the wrong question, sort of like asking for a smallest algebraic integer of absolute value greater than <span class="math-container">$1$</span> in a number field (inside <span class="math-container">$\mathbf C$</span>) of degree greater than <span class="math-container">$1$</span>. The ring of integers is discrete when you use all Archimedean embeddings, but not fewer Archimedean embeddings (e.g., <span class="math-container">$\mathbf Z[\sqrt{2}]$</span> is dense in <span class="math-container">$\mathbf R$</span> but its image in <span class="math-container">$\mathbf R^2$</span> using the Euclidean embedding is a lattice). Likewise, the unit group of that ring is discrete using the logarithm mapping to a hyperplane in <span class="math-container">$\mathbf R^{r_1+r_2}$</span>.</p> <p>Use the regulator as a measure of the size of the unit group. For a unit group of rank <span class="math-container">$1$</span> with a real embedding, taking logarithms shows that comparing the size of the smallest unit greater than <span class="math-container">$1$</span> with a power of the absolute value of the discriminant is like comparing the regulator of that unit group with a multiple of the logarithm of the absolute value of the discriminant. For a cubic field of unit rank <span class="math-container">$1$</span>, Artin showed that the regulator <span class="math-container">$R$</span> and the (negative) discriminant <span class="math-container">$D$</span> satisfy <span class="math-container">$R &gt; \log(|D|/4 - 6)$</span> except for the unique cubic field of discriminant <span class="math-container">$-23$</span>. 
For totally real cubic fields, the unit rank is <span class="math-container">$2$</span> and Cusick showed the regulator <span class="math-container">$R$</span> and (positive) discriminant <span class="math-container">$D$</span> for such fields satisfy <span class="math-container">$R &gt; (1/16)(\log(D/4))^2$</span>. See Theorem 5.8 <a href="https://kconrad.math.uconn.edu/blurbs/gradnumthy/unittheorem.pdf" rel="nofollow noreferrer">here</a>.</p>
390,640
<p>Please help me to find a closed form for the following integral: $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$</p> <p>I was told it could be calculated in a closed form.</p>
Start wearing purple
73,025
<p>$$\boxed{\displaystyle\int_0^1\log\log\left(\frac1x+\sqrt{\frac1{x^2}-1}\right)\mathrm dx=-\gamma-2\ln\frac{2\Gamma(3/4)}{\Gamma(1/4)}}\tag{$\heartsuit$}$$</p> <hr> <p><strong>Derivation</strong>:</p> <p>After the change of variables $x=\frac{1}{\cosh u}$ the integral becomes $$\int_0^{\infty}\ln u \frac{\sinh u}{\cosh^2 u}du,$$ as was noticed above by Eric. We would like to integrate by parts to kill the logarithm but we get two divergent pieces. To go around this, let us consider another integral, $$I(s)=\int_0^{\infty}u^s \frac{\sinh u}{\cosh^2 u}du,$$ with $s&gt;0$. The integral we actually want to compute is equal to $I'(0)$, which will later be obtained in the limit.</p> <p>Indeed, integrating once by parts one finds that \begin{align} I(s)&amp;=s\int_0^{\infty}\frac{u^{s-1}du}{\cosh u}=s\cdot 2^{1-2 s}\Gamma(s)\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right]=\\ &amp;=2^{1-2 s}\Gamma(s+1)\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right], \end{align} where $\zeta(s,a)=\sum_{n=0}^{\infty}(n+a)^{-s}$ denotes the Hurwitz zeta function (along the way we have used its integral representation (5) from <a href="http://mathworld.wolfram.com/HurwitzZetaFunction.html" rel="noreferrer">here</a>). </p> <p>Now to get ($\heartsuit$), it suffices to use \begin{align} &amp;\frac{\partial}{\partial s}\left[2^{1-2 s}\Gamma(s+1)\right]_{s=0}=-2\gamma-4\ln 2,\\ &amp;\zeta\left(0,\frac14\right)-\zeta\left(0,\frac34\right)=\frac12, \\ &amp;\frac{\partial}{\partial s}\left[\zeta\left(s,\frac14\right)-\zeta\left(s,\frac34\right)\right]_{s=0}=-\ln\frac{\Gamma(\frac34)}{\Gamma(\frac14)}. \end{align} [See formulas (10) and (16) on the <a href="http://mathworld.wolfram.com/HurwitzZetaFunction.html" rel="noreferrer">same page</a>].</p>
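<p>A numerical cross-check of <span class="math-container">$(\heartsuit)$</span> (a Python sketch; the truncation at <span class="math-container">$u=40$</span> and the step count are arbitrary choices): after the substitution the integrand is <span class="math-container">$\ln u\,\sinh u/\cosh^2 u$</span>, which composite Simpson handles directly.</p>

```python
import math

def integrand(u):
    return math.log(u) * math.sinh(u) / math.cosh(u) ** 2

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(integrand, 1e-9, 40.0, 100_000)
gamma = 0.5772156649015329  # Euler-Mascheroni constant
closed = -gamma - 2 * math.log(2 * math.gamma(0.75) / math.gamma(0.25))
assert abs(numeric - closed) < 1e-4
```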
684,755
<p>In section 10 of <em>Topology</em> by Munkres, the minimal uncountable well-ordered set $S_{\Omega}$ is introduced. Furthermore, it is remarked that,</p> <blockquote> <p>Note that $S_{\Omega}$ is an uncountable well-ordered set every section of which is countable. Its order type is in fact uniquely determined by this condition.</p> </blockquote> <p>However, how to justify its uniqueness?</p>
Unwisdom
124,220
<p>Suppose that $\alpha$ and $\beta$ are two uncountable ordinals. Suppose further that for every $\gamma&lt;\alpha$, $\gamma$ is countable. Likewise, suppose that for every $\eta&lt;\beta$, $\eta$ is countable. You want to show that $\alpha=\beta$.</p> <p>Suppose that $\alpha&lt;\beta$. Then $\eta=\alpha$ is uncountable, and satisfies $\eta&lt;\beta$. This is a contradiction. Thus $\alpha\not&lt;\beta$. Likewise $\beta\not&lt;\alpha$, so by trichotomy $\alpha=\beta$.</p>
148,037
<p>For example, I have data</p> <pre><code>Clear[data]; data[n_] := Join[RandomInteger[{1, 10}, {n, 2}], RandomReal[1., {n, 1}], 2]; </code></pre> <p>then <code>data[3]</code> gives</p> <pre><code>{{4, 8, 0.264842}, {9, 5, 0.539251}, {3, 1, 0.884612}} </code></pre> <p>in each sublist, the first two values are the matrix index, and the last is the <strong>matrix element; elements with the same matrix index have to be added together</strong>.</p> <p>I want to transform the data into a matrix. Usually I do it like this:</p> <pre><code>Clear[toSparse] toSparse[data_] := SparseArray@ Normal@Merge[Thread[data[[;; , 1 ;; 2]] -&gt; data[[;; , -1]]], Total] </code></pre> <p>I care about performance:</p> <pre><code>In[171]:= toSparse[data[1000]]; // AbsoluteTiming Out[171]= {0.00836793, Null} In[172]:= toSparse[data[10000]]; // AbsoluteTiming Out[172]= {0.0644464, Null} In[173]:= toSparse[data[100000]]; // AbsoluteTiming Out[173]= {1.35507, Null} In[174]:= toSparse[data[1000000]]; // AbsoluteTiming Out[174]= {200.862, Null} </code></pre> <p>Any faster way to do this?</p>
Edmund
19,542
<p>You may use <a href="http://reference.wolfram.com/language/ref/GroupBy.html" rel="nofollow noreferrer"><code>GroupBy</code></a>.</p> <pre><code>Clear[toSparse] toSparse[data_] := SparseArray@Normal@GroupBy[data, Most -&gt; Last, Total] </code></pre> <p>Then</p> <pre><code>toSparse[data[1000]]; // AbsoluteTiming </code></pre> <blockquote> <pre><code>{0.0019702, Null} </code></pre> </blockquote> <pre><code>toSparse[data[10000]]; // AbsoluteTiming </code></pre> <blockquote> <pre><code>{0.0155542, Null} </code></pre> </blockquote> <pre><code>toSparse[data[100000]]; // AbsoluteTiming </code></pre> <blockquote> <pre><code>{0.181737, Null} </code></pre> </blockquote> <pre><code>toSparse[data[1000000]]; // AbsoluteTiming </code></pre> <blockquote> <pre><code>{2.0271, Null} </code></pre> </blockquote> <p>Hope this helps.</p>
2,877,916
<p>Can someone please tell me how to prove that the Holder space is a normed linear space?</p> <p>The Holder space $C^{k,\gamma}(\bar{U})$ consists of all $u \in C^k(\bar{U})$ for which the norm</p> <p>$$\|u\|_{C^{k,\gamma}(\bar{U})}:= \sum_{|\alpha|\le k} \|D^\alpha u \|_{C(\bar{U})}+\sum_{|\alpha|=k} [D^\alpha u]_{C^{0,\gamma}(\bar{U})}$$</p> <p>is finite.</p> <p><strong>Definition 1:</strong> </p> <p>If $u:U\to \mathbb{R}$ is bounded and continuous, we write</p> <p>$$\|u\|_{C(\bar{U})}:=\sup_{x\in U}|u(x)|.$$</p> <p><strong>Definition 2</strong></p> <p>The $\gamma^{th} -$ Holder seminorm of $u:U\to \mathbb{R}$ is </p> <p>$$[u]_{C^{0,\gamma}(\bar{U})}:=\sup_{\substack{x,y\in U \\ x \neq y}} \left\{\frac{|u(x)-u(y)|}{|x-y|^\gamma} \right\},$$</p> <p>and the $\gamma^{th} -$ Holder Norm is</p> <p>$$\|u\|_{C^{0,\gamma}(\bar{U})}:=\|u\|_{C(\bar{U})}+[u]_{C^{0,\gamma}(\bar{U})}.$$</p> <p>Please also explain those norms. I was trying to understand them but I can't. Thank you very much.</p>
Community
-1
<p>Pick any of the points remaining in the square. Draw loops starting at that point, encircling a single one of the removed points, without self intersections, and without intersecting the other two loops.</p> <p>Since the interior of the loops have a removed point each, then they can be retracted to the points of the loop. The exterior of the concatenation of all the loops can be retracted to the points of the loops as well.</p>
1,037,736
<p>$$\sum \limits_{v=1}^n v=\frac{n^2+n}{2}$$</p> <p>please don't downvote if this proof is stupid, it is my first proof, and I am only in grade 5, so I haven't a teacher for any of this 'big sums'</p> <p>proof:</p> <p>if we look at $\sum \limits_{v=1}^3 v=1+2+3,\sum \limits_{v=1}^4 v=1+2+3+4,\sum \limits_{v=1}^5 v=1+2+3+4+5$</p> <p>I learnt rainbow numbers in class three years ago, so I use that knowledge here:</p> <p>$n=3,1+3=4$ and $2$.</p> <p>$n=4,1+4$ and $2+3$</p> <p>$n=5,1+5$ and $2+4$ and $3$</p> <p>and more that I have done on paper that I don't wanna type.</p> <p>we can see from this for the odd case that we have $(n+1)$ added together moving in from the outside, so we get to add $(n+1)$ to the total $\frac{(n-1)}2$ times plus the center number, which is $\frac{n+1}2$, giving $\frac{n-1}2(n+1)+\frac{n+1}2=\frac{(n+1)(n-1)}{2}+\frac{n+1}{2}$ and I can get $\frac{n^2-1}2+\frac{n+1}2=\frac{n^2+n}2$ which is what we want.</p> <p>so the odd case is proven.</p> <p>for even we have a simpler problem: we have $n+1$ on each pair of numbers going in. since we are even numbers, we have $1+n=n+1$, with $n$ even, $2+(n-1)=n+1$ and we can see this is good for all numbers since we increase one side by one and lower the other by 1. so we get $\frac{n}2$ times $n+1$ gives $\frac{n^2+n}{2}$</p> <p>thus it is proven for all cases.</p>
Deepak
151,732
<p>I'm not familiar with "rainbow numbers", and I'm afraid I can't follow every step of your proof. But if you're just looking for a very elementary proof of this, here's the easiest one I can think of:</p> <p>Write the sum forwards:</p> <p>$S_n = 1 + 2 + 3 + ... + n$</p> <p>and then backwards:</p> <p>$S_n = n + (n-1) + (n-2) + ... + 1$</p> <p>and then sum term by term to get:</p> <p>$2S_n = (n+1) + (n+1) +... + (n+1)$</p> <p>where there are exactly $n$ of those terms.</p> <p>So $2S_n = n(n+1)$</p> <p>and $S_n = \frac{1}{2}n(n+1)$.</p> <p>My apologies if this doesn't answer your question. I just thought you might want a nice elementary method to approach this (and it looks like less work than splitting into cases, etc.)</p>
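<p>If you want to convince yourself numerically, a two-line check (in Python) suffices:</p>

```python
def triangular(n):
    # Gauss pairing: the forward and reversed sums line up into n copies of (n+1)
    return n * (n + 1) // 2

for n in range(1, 500):
    assert sum(range(1, n + 1)) == triangular(n)
```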
164,060
<p>When I plot the data I have using <code>ListStepPlot</code> and <code>ListLinePlot[data,InterpolationOrder -&gt; 0]</code> I am getting two different plots. I guess there is a bug in <code>ListStepPlot</code>.</p> <pre><code>data={{{0, 1}, {0.0582215, 2}, {0.597255, 3}, {1.17158, 4}}, {{1.17158, 4}, {1.36478, 5}, {1.424, 6}, {1.4586, 7}}, {{1.4586, 7}, {1.73938, 8}, {1.88332, 9}, {2.03753, 10}}, {{2.03753, 10}, {2.17872, 11}, {2.46005, 12}, {2.71547, 13}}, {{2.71547, 13}, {3.16095, 14}, {3.30726, 15}, {3.5329, 16}}, {{3.5329, 16}, {3.63022, 17}, {4.34524, 18}, {5.20954, 19}}}; ListLinePlot[data, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600, InterpolationOrder -&gt; 0] </code></pre> <p><a href="https://i.stack.imgur.com/jBaN4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBaN4.png" alt="enter image description here"></a></p> <pre><code>ListStepPlot[data, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600] </code></pre> <p><a href="https://i.stack.imgur.com/9NJWd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NJWd.png" alt="enter image description here"></a></p> <p>Here is different data that shows the same problem:</p> <pre><code>data1={{{0, 1}, {0.139219, 2}, {0.607566, 3}, {1.18343, 4}}, {{1.18343, 4}, {1.22964, 5}, {2.01722, 6}, {2.62576, 7}}, {{2.62576, 7}, {3.69976, 8}, {3.90317, 9}, {4.49939, 10}}, {{4.49939, 10}, {4.83385, 11}, {4.92839, 12}, {5.3667, 13}}, {{5.3667, 13}, {5.37191, 14}, {5.75267, 15}, {5.86257, 16}}, {{5.86257, 16}, {6.49011, 17}, {6.56514, 18}, {6.73022, 19}}}; </code></pre> <p>"11.1.0 for Microsoft Windows (64-bit) (March 13, 2017)"</p>
m_goldberg
3,066
<p>There is no bug. <code>ListStepPlot</code> gives a different result from <code>ListLinePlot</code> because it is using a different plotting algorithm.</p> <p><code>ListStepPlot</code> draws steps (horizontal lines) through the data points and gives the user a choice of three positions for where the data point stands on the horizontal line. It will always extend the first or last step as required to get a full step at that point.</p> <p><code>ListLinePlot</code> with <code>InterpolationOrder -&gt; 0</code> joins the data points with a staircase polyline, which is quite a different thing.</p> <p>To visualize the differences of the plotting methods, we need only give the option <code>Mesh -&gt; Full</code> which makes the data points visible.</p> <pre><code>ListStepPlot[data, Right, Frame -&gt; True, PlotTheme -&gt; "Detailed", Mesh -&gt; Full] </code></pre> <p><a href="https://i.stack.imgur.com/Yj11A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yj11A.png" alt="right"></a></p> <p><code>Right</code> is the default value for mesh point position.</p> <pre><code>ListStepPlot[data, Center, Frame -&gt; True, PlotTheme -&gt; "Detailed", Mesh -&gt; Full] </code></pre> <p><a href="https://i.stack.imgur.com/V7yoC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V7yoC.png" alt="center"></a></p> <pre><code>ListStepPlot[data, Left, Frame -&gt; True, PlotTheme -&gt; "Detailed", Mesh -&gt; Full] </code></pre> <p><a href="https://i.stack.imgur.com/IQdAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IQdAz.png" alt="left"></a></p> <pre><code>ListLinePlot[data, Frame -&gt; True, PlotTheme -&gt; "Detailed", InterpolationOrder -&gt; 0, Mesh -&gt; Full] </code></pre> <p><a href="https://i.stack.imgur.com/wd7bO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wd7bO.png" alt="line"></a></p> <p>It is up to you to choose the one which displays your data more clearly. </p>
2,556,339
<p>This is the function $f(x)=\frac{1}{\sqrt{3x-2}}$. To find its derivative from the limit definition, I wrote $$\lim_{h\to 0}\frac{\frac{\sqrt{3x+3h-2}}{3x+3h-2}-\frac{\sqrt{3x-2}}{3x-2}}{h}.$$ I am not able to continue further.</p>
Mark Viola
218,419
<p><strong>HINT:</strong></p> <p>Note that </p> <p>$$\begin{align} \frac{1}{\sqrt{3(x+h)-2}}-\frac{1}{\sqrt{3x-2}}&amp;=\frac{\frac{1}{3(x+h)-2}-\frac{1}{3x-2}}{\frac{1}{\sqrt{3(x+h)-2}}+\frac{1}{\sqrt{3x-2}}}\\\\ &amp;=\frac{\frac{-3h}{(3(x+h)-2)(3x-2)}}{\frac{1}{\sqrt{3(x+h)-2}}+\frac{1}{\sqrt{3x-2}}} \end{align}$$</p> <p>Divide by $h$ and let $h\to 0$.</p>
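<p>Completing the hint gives <span class="math-container">$f'(x)=-\dfrac{3}{2(3x-2)^{3/2}}$</span>; a numerical difference quotient agrees (a Python check at the arbitrary sample point <span class="math-container">$x=2$</span>):</p>

```python
import math

def f(x):
    return 1 / math.sqrt(3 * x - 2)

def fprime(x):
    # the limit of the difference quotient worked out from the hint
    return -3 / (2 * (3 * x - 2) ** 1.5)

x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x)) / h
assert abs(numeric - fprime(x)) < 1e-5
```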
1,364,936
<p>How can one prove that the real cubic equation $$P(X)=X^3+pX+q$$ is not solvable by <strong>real radicals</strong> when $$D=-4p^3 - 27q^2 &gt;0?$$</p> <p>Which means that there is no sequence of extension: $$\mathbb R=L_0 \subset L_1 \subset ... \subset L_n=L$$ with $a\in L$ root of $P$ and for $0 \leqslant i \leqslant n-1$, $L_{i+1}= L_i(u_i)$, where $u_i^{p_i} \in L_{i-1}$, $p_i$ being a prime number, and $u_i$ a stricly positive real.</p>
Jack D'Aurizio
44,121
<p>$$\begin{eqnarray*}S_N=\!\!\!\sum_{\substack{-N\leq z_1,z_2\leq N\\ z_1\neq z_2}}\!\!(z_1^2-2z_1 z_2)&amp;=&amp;\sum_{-N\leq z_1,z_2\leq N}(z_1^2-2z_1 z_2)+\sum_{-N\leq z\leq N}z^2\\&amp;=&amp;(2N+2)\sum_{-N\leq z\leq N}z^2-2\,\left(\sum_{-N\leq z\leq N}z\right)^2\\&amp;=&amp;(4N+4)\sum_{n=1}^{N}n^2\\&amp;=&amp;\frac{2}{3}N(N+1)^2(2N+1).\end{eqnarray*}$$</p>
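<p>A brute-force check of the closed form (a short Python verification, not part of the derivation):</p>

```python
def s_direct(N):
    # sum z1^2 - 2*z1*z2 over all pairs -N <= z1, z2 <= N with z1 != z2
    return sum(z1 * z1 - 2 * z1 * z2
               for z1 in range(-N, N + 1)
               for z2 in range(-N, N + 1) if z1 != z2)

def s_closed(N):
    # (2/3) N (N+1)^2 (2N+1); the product is always divisible by 3
    return 2 * N * (N + 1) ** 2 * (2 * N + 1) // 3

for N in range(0, 12):
    assert s_direct(N) == s_closed(N)
```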
1,292,490
<blockquote> <p>Let $(a_{ij})$ be a real $n \times n$ matrix satisfying,</p> <ol> <li>$a_{ii} &gt; 0 \space (1 \leq i \leq n) ,$</li> <li>$a_{ij} \leq 0 \space (i \ne j, 1 \leq i,j \leq n) ,$</li> <li>$\sum_{i=1}^ {i=n} \space a_{ij} &gt; 0 (1 \leq j \leq n).$ </li> </ol> <p>Then $\det (A) &gt; 0$</p> </blockquote> <p>How to prove this? I have no idea.</p>
AlexR
86,940
<p><strong>Hint</strong><br> Conditions 2. and 3. make $A$ strictly diagonally dominant by columns (<em>why?</em>), with positive diagonal entries, so by the Gershgorin circle theorem every eigenvalue of $A$ lies in a disc centred at some $a_{jj}&gt;0$ with radius less than $a_{jj}$; hence every eigenvalue has positive real part. Since non-real eigenvalues occur in conjugate pairs, this suffices to show that $\det A = \prod_{i=1}^n \lambda_i &gt; 0$ where $\lambda_i$ are the eigenvalues of $A$.</p>
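<p>A randomized sanity check (a Python sketch; it samples matrices satisfying conditions 1–3 and confirms the positive determinant numerically):</p>

```python
import random

def det(m):
    # determinant by Gaussian elimination with partial pivoting
    n = len(m)
    m = [row[:] for row in m]
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

random.seed(0)
for _ in range(100):
    n = random.randint(1, 5)
    # off-diagonal entries <= 0; then pick the diagonal so each column sum is positive
    a = [[-random.random() if i != j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        a[j][j] = -sum(a[i][j] for i in range(n)) + random.uniform(0.5, 2.0)
    assert det(a) > 0
```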
3,407,852
<p>A space X is said to be h-homogeneous if every non-empty clopen subset of <span class="math-container">$X$</span> is homeomorphic to <span class="math-container">$X.$</span></p> <p>Is the space <span class="math-container">$L = 2^{\mathbb N} - \{p\}$</span> for <span class="math-container">$p \in 2^{\mathbb N}$</span> h-homogeneous?</p>
David G. Stork
210,401
<p><span class="math-container">$$n = 10^{10} - \left( \lfloor \sqrt{10^{10}} \rfloor + \lfloor \sqrt[3]{10^{10}} \rfloor + \lfloor \sqrt[5]{10^{10}} \rfloor - \lfloor \sqrt[6]{10^{10}} \rfloor - \lfloor \sqrt[15]{10^{10}} \rfloor - \lfloor \sqrt[10]{10^{10}} \rfloor + \lfloor \sqrt[30]{10^{10}} \rfloor \right) = 9,999,897,804$$</span></p>
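<p>Reading the formula as inclusion–exclusion over perfect squares, cubes, and fifth powers up to <span class="math-container">$10^{10}$</span>, the arithmetic checks out (a Python verification):</p>

```python
def iroot(n, k):
    # integer k-th root: the largest r with r**k <= n
    r = int(round(n ** (1.0 / k)))
    while r ** k > n:
        r -= 1
    while (r + 1) ** k <= n:
        r += 1
    return r

N = 10 ** 10
f = {k: iroot(N, k) for k in (2, 3, 5, 6, 10, 15, 30)}
# inclusion-exclusion over squares, cubes and fifth powers
count = N - (f[2] + f[3] + f[5] - f[6] - f[10] - f[15] + f[30])
assert count == 9_999_897_804
```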
3,401,044
<p>I am solving Section 38 Exercise 5 in <em>Topology</em> by Munkres.</p> <p>I showed that there is a continuous, surjective, closed map <span class="math-container">$$f : \beta(S_\Omega) \rightarrow Y$$</span> for any compactification <span class="math-container">$Y$</span> of <span class="math-container">$S_\Omega$</span>,</p> <p>and that the one-point compactification of <span class="math-container">$S_\Omega$</span> is equivalent to the Stone-Cech compactification.</p> <p>However, I am stuck on the last part: that every compactification of <span class="math-container">$S_\Omega$</span> is equivalent to the one-point compactification.</p> <p>Could you help me with the details?</p>
Henno Brandsma
4,280
<p>In part a) of that exercise it is shown that any continuous function <span class="math-container">$f: S_\Omega \to \Bbb R$</span> is eventually constant, in the sense that there is some <span class="math-container">$\alpha_0 \in S_\Omega$</span> and some <span class="math-container">$p \in \Bbb R$</span> such that <span class="math-container">$\forall \alpha \ge \alpha_0: f(\alpha) = p$</span>. (In particular all continuous real-valued functions are bounded.)</p> <p>This means that in particular the inclusion of <span class="math-container">$S_\Omega$</span> into <span class="math-container">$\overline{S}_\Omega$</span> obeys the extension property: every continuous <span class="math-container">$f: S_\Omega \to \Bbb R$</span> has a continuous extension <span class="math-container">$\bar{f}$</span> to <span class="math-container">$\overline{S}_\Omega = S_\Omega \cup \{\Omega\}$</span>: we just give <span class="math-container">$\bar{f}$</span> the value <span class="math-container">$p$</span> on <span class="math-container">$\Omega$</span> too, and as basic neighbourhoods of <span class="math-container">$\Omega$</span> in <span class="math-container">$\overline{S}_\Omega$</span> are of the form <span class="math-container">$(\alpha, \Omega]$</span> (as it's the maximal element), and so the extension is continuous (still constant on almost all basic neighbourhoods).</p> <p>Now theorem 38.5 essentially says that <span class="math-container">$\overline{S}_\Omega$</span> is equivalent to <span class="math-container">$\beta S_\Omega$</span>. 
And as <span class="math-container">$\overline{S}_\Omega\setminus S_\Omega$</span> has one point, <span class="math-container">$\overline{S}_\Omega$</span> is also equivalent to the one-point compactification of <span class="math-container">$S_\Omega$</span>.</p> <p>That's all there is to it: the one-point compactification <span class="math-container">$\overline{S}_\Omega$</span> obeys the extension property so it's essentially the Cech-Stone compactification.</p> <p>And if <span class="math-container">$h: S_\Omega \to C$</span> is any compactification (so <span class="math-container">$h\restriction S_\Omega$</span> is an embedding into a compact Hausdorff <span class="math-container">$C$</span> where <span class="math-container">$h[S_\Omega]$</span> is dense in <span class="math-container">$C$</span>) then Thm. 38.4 in that paragraph says that <span class="math-container">$h$</span> has a continuous extension <span class="math-container">$\beta h$</span> from <span class="math-container">$\overline{S}_\Omega$</span> to <span class="math-container">$C$</span> and then <span class="math-container">$C = \beta h[\overline{S}_\Omega]$</span> by density and continuity and <span class="math-container">$\beta h$</span> is a homeomorphism between <span class="math-container">$\overline{S}_\Omega$</span> and <span class="math-container">$C$</span>. (a 1-1 continuous map from a compact space onto a Hausdorff one.). So <strong>all</strong> compactifications of <span class="math-container">$S_\Omega$</span> are just <span class="math-container">$\overline{S}_\Omega$</span> in essence.</p>
2,332,419
<p>What's the angle between the two hands of the clock when the time is 15:15? The answer I heard was 7.5 degrees and I really cannot understand it. Can someone help? Is it true, and why?</p>
MPW
113,214
<p>If this is a 12-hour clock, then the minute hand is at 3 and the hour hand is 1/4 of the way between 3 and 4. Thus the angle between them is $\frac14(\frac{360^{\circ}}{12})=7.5^{\circ}$.</p> <p>(Note that the angle between two successive numbers on the face, like 3 and 4, is 1/12 of the full circle; that's where the $\frac{360^{\circ}}{12}$ comes from.)</p>
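<p>The same computation works for an arbitrary time, as a small Python function:</p>

```python
def clock_angle(hour, minute):
    # angles in degrees, measured clockwise from 12 o'clock:
    # the minute hand moves 6 deg/min; the hour hand 30 deg/hour plus 0.5 deg/min
    minute_angle = 6.0 * minute
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360.0 - diff)

assert clock_angle(15, 15) == 7.5
```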
569,012
<p>Let $I$ be the incenter of $\triangle{ABC}$. Let $R$ be the radius of the circle that circumscribes $\triangle{IAB}$. Find a formula for $R$ in term of other elements $a, b, c, A, B, C, r, R$ of $\triangle{ABC}$. I need this formula in order to prove a geometric inequality.</p>
chloe_shi
45,070
<p><img src="https://i.stack.imgur.com/pQXw1.gif" alt="enter image description here"></p> <p>suppose that $AI$ cuts the circumcircle of $\triangle ABC$ at $D$.<br> $\angle DBI=\angle IBC+\angle DBC=\angle IBA+\angle BAD=\angle BID$<br> thus, $\triangle DBI$ is isosceles and likewise, $\triangle DCI$ is isosceles.<br> that is, $DB=DI=DC$ and then $D$ is the circumcenter of $\triangle IBC$.<br> It's called the theorem of <strong>Mention</strong>, who was a French mathematician.<br> hence, the circumradius of $\triangle IBC$ is $DB$.<br> by the sine law, $\dfrac{BD}{2R}=\sin\dfrac{\angle A}{2}$ and then $BD=2R\sin\dfrac{\angle A}{2}$<br> Likewise, we know that the circumradius of $\triangle IAB$ is $2R\sin\dfrac{\angle C}{2}$</p>
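<p>A numeric check of the final claim on a sample triangle (a Python sketch; the incenter is the side-length-weighted average of the vertices, and the chosen vertices are arbitrary):</p>

```python
import math

def circumradius(p, q, r):
    # R = abc / (4 * area), with the area from Heron's formula
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4 * area)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
I = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
     (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))
R = circumradius(A, B, C)
angleC = math.acos(((A[0] - C[0]) * (B[0] - C[0])
                    + (A[1] - C[1]) * (B[1] - C[1])) / (a * b))
# circumradius of triangle IAB should equal 2 R sin(C/2)
assert abs(circumradius(I, A, B) - 2 * R * math.sin(angleC / 2)) < 1e-9
```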
2,886,973
<p>The Wikipedia article gives an <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem#Interpretation_and_significance" rel="nofollow noreferrer">interesting example</a> of the Gauss-Bonnet theorem:</p> <blockquote> <p>As an application, a torus has Euler characteristic 0, so its total curvature must also be zero. ... It is also possible to construct a torus by identifying opposite sides of a square, in which case the Riemannian metric on the torus is flat and has constant curvature 0, again resulting in total curvature 0.</p> </blockquote> <p>But there are of course other ways of closing up the square by identifying points on its boundary. If we identify opposite sides with one pair's orientation flipped, we get the Klein bottle, which also has Euler characteristic 0. But if we flip both pair's orientations then we get the real projective plane, which has Euler characteristic 1. And if we identify the entire boundary together then <a href="https://math.stackexchange.com/questions/24785/the-n-disk-dn-quotiented-by-its-boundary-sn-1-gives-sn">we get</a> the sphere, with Euler characteristic 2. These are all closed surfaces so the Gauss-Bonnet theorem would naively imply that their total curvature equals $2\pi$ times their Euler characteristic, but this only works for the torus and Klein bottle identifications, but not the real projective plane or sphere identifications. Why?</p> <p>For concreteness, consider the manifold $M$, the closed square (or unit disk) quotiented by its boundary. I think that $M$ has the topological structure of $S^2$ but not the differential structure - i.e. it's homeomorphic but not diffeomorphic to $S^2$. Is this correct? But $M$ can't be an <a href="https://en.wikipedia.org/wiki/Exotic_sphere" rel="nofollow noreferrer">exotic sphere</a> because they don't exist in two dimensions, so it must not be a differentiable manifold at all. 
If I'm correct, then the identification process means that $M$ is not actually differentiable at the point which is the identified boundary - it's perfectly homogeneous as a topological manifold, but fails to be a differentiable manifold because there's one problematic point at which the Gaussian curvature is undefined. Similarly, I guess the identified-edge square is homeomorphic but not diffeomorphic to the real projective plane? Then are the squares with the torus and Klein bottle identifications differentiable at the identified edges and corners? If so, why are the torus- and Klein-bottle-identified squares differentiable at their boundary but not the RPP- and sphere-identified squares?</p>
Angina Seng
436,618
<p>When you join up the edges of the flat square to make a projective plane, what happens to its corners? Each corner gets identified with the opposite corner. So at two points in the projective plane you have two corner bits of squares identified. They may be flat, but round the corner, you have only $\pi$ worth of angle instead of $2\pi$. This means you can't extend the flat metric to these corners, so the quotient isn't naturally a Riemannian manifold.</p> <p>An easier illustration: consider the surface of a standard cube in $\Bbb R^3$. Each face is flat. What about the edges? They aren't trouble: one can think of each pair of adjacent faces as a $2\times1$ rectangle folded over, and that has a flat metric. So if you remove the vertices, the surface of the cube has a flat Riemannian metric all over it.</p> <p>But those pesky vertices! If you make a little circuit about one of these, in effect you're going through an angle of $3\pi/2$. If you parallel-transport a vector round the path, it comes back turned through a right angle. This means there's no way of extending this flat Riemannian metric to the vertex.</p> <p>This works for every compact surface; you can express it as a simplicial complex, and if you delete the vertices then you can put a flat metric on it. Alas, one can rarely extend it to the whole surface (certainly not when it has nonzero Euler characteristic).</p>
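The corner bookkeeping above can be made quantitative with the standard extension of Gauss–Bonnet to flat surfaces with cone points (stated here for convenience; the cone angles are read off from how the square's $\pi/2$ corners are identified):

```latex
\int_M K\,dA \;+\; \sum_i \bigl(2\pi - \theta_i\bigr) \;=\; 2\pi\,\chi(M).
```

For the torus and the Klein bottle all four corners meet at a single point, so $\theta = 4\cdot\frac{\pi}{2} = 2\pi$ and the deficit term vanishes: $0 + 0 = 2\pi\cdot 0$. For the projective plane the corners meet in two pairs, giving two cone points of angle $\theta = \pi$, and the total deficit is $2\pi = 2\pi\cdot 1$, matching the Euler characteristic.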
1,252,167
<p>I'm trying to understand what a vector of functions is, from trying to understand how to solve linear homogeneous differential equations. </p> <p>It seems that functions can be manipulated as vectors as long as they are not interpreted as having real values.<br> Suppose the solution space of a linear homogeneous diff equation is spanned by $\cos(x)$, $\sin(x)$, then the solution is $a \sin(x) + b \cos(x)$, and it's a vector.</p> <p>But, if $y$ is a vector $y = a \sin(x) + b \cos(x)$, then how is it that for any value of $x$, $y$ is always a scalar value?</p> <p>If $x$ is a set of values rather than a symbol then how can $y$ remain a vector if, for each element of $x$, $y$ is scalar?</p>
Brian M. Scott
12,042
<p>As you’ve already discovered, the answer is <em>no</em>. In fact, you can’t even guarantee an irreducible open refinement. For $n\in\Bbb N$ let $U_n=\{k\in\Bbb N:k&lt;n\}$, and let $\tau=\{U_n:n\in\Bbb N\}\cup\{\Bbb N\}$; then $\tau$ is a $T_0$ topology on $\Bbb N$, and $\tau\setminus\{\Bbb N\}$ is an open cover of $\Bbb N$ with no irreducible open refinement.</p> <p>A well-known positive result is that every point-finite open cover of a space has an irreducible subcover. Thus, every open cover of a metacompact space has an irreducible open refinement. (A space $X$ is <em>metacompact</em> if every open cover of $X$ has a point-finite open refinement. A family of sets is <em>point-finite</em> if each point of the space lies in only finitely many of the sets.)</p>
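For completeness, here is the step left implicit in the example: an open refinement $\mathcal{V}$ of $\tau\setminus\{\Bbb N\}$ consists only of sets $U_n$, and

```latex
\bigcup\mathcal{V}=\Bbb N \;\Longrightarrow\; \{\,n : U_n\in\mathcal{V}\,\}\ \text{is unbounded} \;\Longrightarrow\; \forall\,U_m\in\mathcal{V}\ \exists\,U_n\in\mathcal{V},\ n>m:\ \bigcup\bigl(\mathcal{V}\setminus\{U_m\}\bigr)=\Bbb N,
```

so no member of $\mathcal{V}$ is essential, and $\mathcal{V}$ cannot be irreducible.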
96,970
<p>I would need to identify the types of regular polygons forming the surface of a convex hull of 3D points. If I e.g. take the following example of a regular polyhedron</p> <pre><code>ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]] </code></pre> <p>The convex hull routine returns a triangulated mesh surface. Is there any simple way to convince <em>Mathematica</em> to return the surface as polygons (in this case pentagons) instead of a triangulation?</p> <p>To illustrate the issue further, e.g. if one applies</p> <pre><code>MeshCells[ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]], 2] </code></pre> <p><em>Mathematica</em> only returns triangles.</p> <p>If one applies</p> <pre><code>ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]] // FullForm </code></pre> <p>There is the option <code>"CoplanarityTolerance"</code>. But I do not know how to use it.</p> <p>Any ideas?</p>
Taiki
5,906
<p>The procedure groups triangles based on the same unit normal vector, then uses the vertices in each group to form a new polygon. The vertices are sorted in such a way that their polygon is not self-intersecting.</p> <p>This method doesn't allow for coplanar tolerance. Triangles in the same group have the same unit normal vector determined to within the second argument of <code>Round</code> (<code>10^-5</code> here).</p> <p>The sorting function <code>sort</code> is modified from <a href="https://mathematica.stackexchange.com/a/48105/5906">#48091</a>, which is a 2D method. <code>sort</code> uses the XY-projection of the points, unless they're colinear in X or Y.</p> <pre><code>sort[pts_] := Module[ {p, subspaceselector}, p = coord[[#]] &amp; /@ pts; subspaceselector = Which[ p[[1, 1]] == p[[2, 1]] == p[[3, 1]], Rest, p[[1, 2]] == p[[2, 2]] == p[[3, 2]], Drop[#, {2}] &amp;, True, Most ]; SortBy[pts, N[ArcTan @@ subspaceselector[coord[[#]] - Mean[p]]] &amp;] ]; unitnormal[verts_] := Round[ Normalize[Cross[verts[[2]] - verts[[1]], verts[[3]] - verts[[1]]]], 10^-5 ]; convexhull = ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]]; coord = MeshCoordinates[convexhull]; trivertices = Level[MeshCells[convexhull, 2], {-2}]; polysets = GatherBy[ trivertices, unitnormal[Function[i, coord[[i]]] /@ #] &amp; ]; polyvertices = Map[sort][Union @@ # &amp; /@ polysets]; MeshRegion[coord, Polygon /@ polyvertices] </code></pre> <p><a href="https://i.stack.imgur.com/OGUpp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OGUpp.png" alt="New convex hull"></a></p>
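For readers outside Mathematica, the core idea — gather triangles whose rounded unit normals agree, then merge their vertex sets into one face — can be shown with a small self-contained sketch. This is an illustrative pure-Python analogue using a hard-coded triangulated cube with outward-consistent winding, not a port of the answer's actual code:

```python
from math import sqrt
from collections import defaultdict

# Vertices of a unit cube; index = 4*x + 2*y + z for (x, y, z) in {0, 1}^3.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Each face as a quad with outward-consistent winding, split into two triangles.
quads = [(0, 1, 3, 2), (4, 6, 7, 5),   # x = 0, x = 1
         (0, 4, 5, 1), (2, 3, 7, 6),   # y = 0, y = 1
         (0, 2, 6, 4), (1, 5, 7, 3)]   # z = 0, z = 1
tris = [t for a, b, c, d in quads for t in ((a, b, c), (a, c, d))]

def unit_normal(tri, digits=5):
    """Rounded unit normal of a triangle, used as a grouping key."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = sqrt(nx * nx + ny * ny + nz * nz)
    return (round(nx / norm, digits), round(ny / norm, digits), round(nz / norm, digits))

# Coplanar triangles of a convex surface share a normal, so grouping by the
# rounded normal recovers the original (here: square) faces.
groups = defaultdict(list)
for t in tris:
    groups[unit_normal(t)].append(t)

faces = {n: sorted({i for t in g for i in t}) for n, g in groups.items()}
print(len(faces))  # 6 faces, one per cube side
```

As in the Mathematica version, the rounding step (here `digits=5`, mirroring the `Round[..., 10^-5]` above) is what makes floating-point normals usable as grouping keys.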
96,970
<p>I would need to identify the types of regular polygons forming the surface of a convex hull of 3D points. If I e.g. take the following example of a regular polyhedron</p> <pre><code>ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]] </code></pre> <p>The convex hull routine returns a triangulated mesh surface. Is there any simple way to convince <em>Mathematica</em> to return the surface as polygons (in this case pentagons) instead of a triangulation?</p> <p>To illustrate the issue further, e.g. if one applies</p> <pre><code>MeshCells[ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]], 2] </code></pre> <p><em>Mathematica</em> only returns triangles.</p> <p>If one applies</p> <pre><code>ConvexHullMesh[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]]] // FullForm </code></pre> <p>There is the option <code>"CoplanarityTolerance"</code>. But I do not know how to use it.</p> <p>Any ideas?</p>
J. M.'s persistent exhaustion
50
<p>Here is a solution that uses undocumented functionality to generate an appropriate <code>MeshRegion[]</code> object:</p> <pre><code>Graphics`Mesh`MeshInit[]; FirstCase[ConvexHull3D[N[PolyhedronData["Dodecahedron", "VertexCoordinates"]], FlatFaces -&gt; False], GraphicsComplex[pts_, stuff_] :&gt; MeshRegion[pts, Cases[stuff, _Polygon, ∞]], ∞] </code></pre> <p><img src="https://i.stack.imgur.com/NrZpT.png" alt="dodecahedron as a MeshRegion[]"></p>
167,262
<p>I make a circle with a given radius as below</p> <pre><code>Ctest = Table[{0.05*Cos[Theta*Degree], 0.05*Sin[Theta*Degree]}, {Theta, 1, 360}] // N; </code></pre> <p>And here is my list of data points</p> <pre><code>pts = {{0., 0.}, {0.00493604, -0.00994539}, {0.00987001, -0.0198918}, {0.0148019, -0.0298392}, {0.0197318, -0.0397877}, {0.0246596, -0.0497372}, {0.0295853, -0.0596877}, {0.0345089, -0.0696392}, {0.0394305, -0.0795918}, {0.04435, -0.0895453}, {0.0492675, -0.0994999}, {0.0541829, -0.109456}, {0.0590962, -0.119412}, {0.0640075, -0.12937}, {0.0689166, -0.139328}, {0.0738238, -0.149288}, {0.0787288, -0.159249}, {0.0836318, -0.169211}, {0.0885327, -0.179173}, {0.0934316, -0.189137}, {0.0983284, -0.199102}, {0.103223, -0.209068}, {0.108116, -0.219034}, {0.113006, -0.229002}, {0.117895, -0.238971}, {0.122781, -0.248941}, {0.127666, -0.258912}}; </code></pre> <p>I would like to find the intersection between the circle and the list of data points, as shown in the figure below. How can I program this so that it works automatically? I mean that if one day I change the radius of the circle, the program should still work.</p> <p><a href="https://i.stack.imgur.com/ckZuP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ckZuP.jpg" alt="enter image description here"></a></p>
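Since the data points trace a polyline, one radius-independent approach is to solve the segment–circle quadratic $|a + t\,d|^2 = r^2$ on each consecutive pair of points and keep roots with $0 \le t \le 1$. A minimal sketch, assuming a circle centred at the origin and using short illustrative sample data rather than the full list above:

```python
from math import sqrt, hypot

def circle_polyline_intersections(pts, r, center=(0.0, 0.0)):
    """Points where the circle |p - center| = r crosses the polyline pts."""
    cx, cy = center
    hits = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        ax, ay = x0 - cx, y0 - cy              # segment start relative to center
        dx, dy = x1 - x0, y1 - y0              # segment direction
        # |a + t d|^2 = r^2  =>  A t^2 + B t + C = 0
        A = dx * dx + dy * dy
        B = 2 * (ax * dx + ay * dy)
        C = ax * ax + ay * ay - r * r
        disc = B * B - 4 * A * C
        if A == 0 or disc < 0:
            continue
        for t in ((-B - sqrt(disc)) / (2 * A), (-B + sqrt(disc)) / (2 * A)):
            if 0 <= t <= 1:
                hits.append((x0 + t * dx, y0 + t * dy))
    return hits

# Illustrative data: a short polyline heading away from the origin.
sample = [(0.0, 0.0), (0.03, -0.06), (0.06, -0.12)]
crossing = circle_polyline_intersections(sample, 0.05)
print(crossing)
```

Changing the radius only changes the argument `r`; note that a crossing exactly at a shared vertex of two segments would be reported twice by this sketch.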
OkkesDulgerci
23,291
<pre><code> eq = 2 x11 + 8 x12 + 6 x13 - 4 x14 + 7 x15 + 3 x16 - 5 x17 + 4 x18 + 2 x19 + 2 x20 + 10 x21 + 10 x22 + 4 x23 + 3 x24 + 2 x25 + 2 x26 - 5 x27 + 9 x28 + 6 x29 + 9 x30 + 7 x31 - x32 - 4 x33 - 3 x34 + x35 + 2 x36 + x37 - 2 x38 - x39 + x40; var = Variables@eq; NSolve[eq == 0 &amp;&amp; And @@ (0 &lt;= # &lt;= 1 &amp; /@ var), var, Integers] </code></pre> <p>There are more than 600k solutions!</p>
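The solution count can be cross-checked without enumerating all $2^{30}$ assignments: a dynamic program over partial sums tallies how many 0/1 vectors reach each value. This is an independent illustrative sketch (not the <code>NSolve</code> call itself), with the coefficients transcribed from the equation above:

```python
from collections import defaultdict

# Coefficients of x11 .. x40 in the equation above.
coeffs = [2, 8, 6, -4, 7, 3, -5, 4, 2, 2,
          10, 10, 4, 3, 2, 2, -5, 9, 6, 9,
          7, -1, -4, -3, 1, 2, 1, -2, -1, 1]

# counts[s] = number of 0/1 assignments of the variables processed so far
# whose weighted sum equals s.
counts = {0: 1}
for c in coeffs:
    nxt = defaultdict(int)
    for s, n in counts.items():
        nxt[s] += n        # this variable set to 0
        nxt[s + c] += n    # this variable set to 1
    counts = dict(nxt)

print(counts[0])  # number of 0/1 solutions of the equation
```

The possible sums span only a range of about 130 integers, so the table stays tiny and the whole count takes a few thousand operations instead of a billion.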
85,717
<p>Nowadays we can associate to a topological space $X$ a category called the fundamental (or Poincare) $\infty$-groupoid given by taking $Sing(X)$.</p> <p>There are many different categories that one can associate to a space $X$. For example, one could build the small category whose object set is the set of points with only the identity morphisms from a point to itself. It is claimed that the classifying space of this category returns the space: $BX=X$</p> <p>The inspiration for these examples comes from three primary sources: Graeme Segal's famous 1968 paper <em>Classifying Spaces and Spectral Sequences</em>, Raoul Bott's Mexico notes (taken by Lawrence Conlon) <em>Lectures on characteristic classes and foliations</em>, and a 1995 pre-print called <em>Morse Theory and Classifying Spaces</em> by Ralph Cohen, G. Segal and John Jones. </p> <p>In each of these papers there is a notion of a topological category. It is not just a category enriched in <strong>Top</strong>, since the set of objects can have non-discrete topology. Here is the definition that I can glean from these articles:</p> <p>A <strong>topological category</strong> consists of a pair of spaces $(Obj,Mor)$ with four continuous structure maps:</p> <ul> <li>$i:Obj\to Mor$, which sends an object to the identity morphism</li> <li>$s:Mor\to Obj$, which gives the source of an arrow</li> <li>$t:Mor\to Obj$, which gives the target of an arrow</li> <li>$\circ:Mor\times_{t,s}Mor\to Mor$, which is composition.</li> </ul> <p>Where $i$ is a section of both $s$ and $t$, and all the axioms of a small category hold.</p> <p><strong>Is the appropriate modern terminology to describe this a <a href="http://ncatlab.org/nlab/show/Segal+space" rel="nofollow noreferrer">Segal Space</a>? What would Lurie call it?</strong> Based on reading <a href="https://mathoverflow.net/questions/29728/a-model-category-of-segal-spaces">Chris Schommer-Pries's MO post</a> and elsewhere this seems to be true. 
Would the modern definition of the above be a Segal Space where the Segal maps are identities? Also, why do we demand that the topology on objects be discrete for Segal Categories? <strong>Is there something wrong with allowing the object sets to have topologies?</strong></p>
Toby Bartels
8,508
<p>I would call this an <a href="http://ncatlab.org/nlab/show/internal+category">internal category</a> in the category of topological spaces and continuous maps.</p>
97,877
<p>Does anyone know a reference for the 2-dimensional version of the Schoenflies theorem? To be precise, I'd like a reference for the fact that every continuous, 1-1 map $S^1\rightarrow \mathbb{R}^2$ extends to a homeomorphism $\mathbb{R}^2 \rightarrow \mathbb{R}^2$. The discussions of the Jordan Curve Theorem that I can remember don't prove this stronger statement.</p> <p>This statement is mentioned on the Wikipedia page for the <a href="http://en.wikipedia.org/wiki/Schoenflies_problem"> Schoenflies problem </a>. I looked through several papers on the generalized Schoenflies problem (which requires extra hypotheses in higher dimensions to rule out things like the Alexander Horned Sphere), but no luck...</p>
Ryan Budney
1,465
<p>In the smooth case the idea is to take a linear height function on the plane, which is generically Morse on the curve. Apply the Jordan curve theorem + basic Morse theory, this tells you the compact region bounded by the curve is a union of discs, glued together along common arcs, and the "gluing pattern" is that of a tree. An induction argument finishes it.</p> <p>If you really need it for the topological category that's a fair bit more work. Larry Siebenmann has a recent article on this</p> <p><a href="http://hopf.math.purdue.edu/cgi-bin/generate?/Siebenmann/Schoen-02Sept2005" rel="noreferrer">L C Siebenmann 2005 Russ. Math. Surv. 60 645</a></p> <p>His article seems to have pretty much all the historic references.</p>