| qid (int64) | question (string) | author (string) | author_id (int64) | answer (string) |
|---|---|---|---|---|
3,382,464 | <p>Let <span class="math-container">$g$</span> be a <strong>smooth</strong> Riemannian metric on the closed <span class="math-container">$n$</span>-dimensional unit disk <span class="math-container">$\mathbb{D}^n$</span>. Let <span class="math-container">$f$</span> be a harmonic function w.r.t <span class="math-container">$g$</span>.</p>
<blockquote>
<p>Is it true that <span class="math-container">$f$</span> must be real-analytic?</p>
</blockquote>
<p>I <em>think</em> that this is true if we assume that <span class="math-container">$g$</span> is real-analytic, but I am not sure. Is it true in that case? I would like to find a reference.</p>
<p>This should be related to whether or not the Riemannian laplacian <span class="math-container">$\Delta_g$</span> is "analytically hypoelliptic".</p>
| Community | -1 | <p>Given that <span class="math-container">$a^5 - a^3 + a=2$</span> we write <span class="math-container">$$a^3\left(a^2-1+\frac{1}{a}\right)=2$$</span>
This implies <span class="math-container">$$a^3\left(\left(a+\frac{1}{a}\right)^2-3\right)=2$$</span>
Note that <span class="math-container">$a\gt 0$</span>, since for <span class="math-container">$a\le 0$</span> the left-hand side is non-positive. Now by the AM-GM inequality <span class="math-container">$a+\frac{1}{a}\ge 2$</span>, with equality only at <span class="math-container">$a=1$</span>, which does not satisfy the equation; hence <span class="math-container">$a+\frac{1}{a}\gt 2$</span>, and so <span class="math-container">$\left(a+\frac{1}{a}\right)^2\gt 4$</span>. Therefore,<span class="math-container">$$a^3(4-3)\lt a^3\left(\left(a+\frac{1}{a}\right)^2-3\right)=2\implies a^3\lt 2\implies a^6\lt 4$$</span></p>
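<p>The conclusion <span class="math-container">$a^6\lt 4$</span> can be sanity-checked numerically; a minimal sketch (the bracket <span class="math-container">$[1,2]$</span> and the bisection are my choices, not part of the answer; note <span class="math-container">$f(1)=-1\lt 0$</span>, <span class="math-container">$f(2)=24\gt 0$</span>, and <span class="math-container">$f'(a)=5a^4-3a^2+1\gt 0$</span>, so the real root is unique):</p>

```python
# Numerical sanity check of the bound a^6 < 4 for the real root of
# a^5 - a^3 + a = 2 (a sketch; the bracket [1, 2] is my choice).
def f(a):
    return a ** 5 - a ** 3 + a - 2

lo, hi = 1.0, 2.0
for _ in range(60):              # bisection down to machine precision
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

a = (lo + hi) / 2
print(a, a ** 6)  # a is about 1.206 and a^6 is about 3.07, comfortably below 4
```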
|
1,939,382 | <p>I've read about integration, and I believe I understood the concept correctly. But, unfortunately, the simplest exercise has already got me stumped. I need to find the integral of $x{\sqrt {x+x^2}}$. So I proceed as follows.</p>
<p>By the fundamental theorem of calculus:</p>
<p>$f(x)=\int[f'(x)]=\int[x\sqrt{x+x^2}]$,</p>
<p>First I've tried to apply the chain rule and I end up with: </p>
<p>$f'(x)=xu^\frac{1}{2}\frac{du}{2x+1}$ , and I'm not sure how to proceed in this case.</p>
<p>Next I've tried to apply product rule:</p>
<p>If $f(x)=i(x)j(x)$, then $x\sqrt{x+x^2}=i'(x)j(x)+j'(x)i(x)$,</p>
<p>Using the sum rule I could assume that, $f(x)=i(x)j(x)-\int[j'(x)i(x)]$,</p>
<p>Now, finding that $i'(x)=x, i(x)=\frac{x^2}{2}, j(x)=\sqrt{x+x^2}$ and $j'(x)=\frac{2x+1}{2\sqrt{x+x^2}}$, </p>
<p>$f(x)$ should be of the form $f(x)=\frac{x^2\sqrt{x+x^2}}{2}-\int\frac{x^2(2x+1)}{ 4\sqrt{x+x^2}}$, so now I should find the integral of this fraction,</p>
<p>If I can assume that $p(x)=\frac{a(x)}{b(x)}$, then $\frac{x^2(2x+1)}{ 4\sqrt{x+x^2}}=\frac{a'(x)b(x)-a(x)b'(x)}{(b(x))^2}$, hence:</p>
<p>$b(x)=2(x+x^2)^\frac{1}{4}, b'(x)=\frac{1}{2(x+x^2)^\frac{3}{4}}$ and as a result $a(x)$ should be $a(x)=\frac{(x+x^2)^\frac{3}{4}(4a'(x)(x+x^2)^\frac{1}{4}-4x^3-2x^2)}{2x+1}$, but now I don't know how to substitute $a'(x)$; if I differentiate this expression I will get $a''(x)$. </p>
<p>So my question is, what substitution shall I perform to obtain $a(x)$, $a'(x)$?</p>
<p>Thank you! And forgive my ignorance. </p>
| Enrico M. | 266,764 | <p>Hint</p>
<p>$$x\sqrt{x + x^2} = x\sqrt{\left(x + \frac{1}{2}\right)^2 - \frac{1}{4}}$$</p>
<p>Then you may think about setting</p>
<p>$$x + \frac{1}{2} = y$$</p>
<p>Et cetera.</p>
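<p>One possible continuation of the hint (my own computation, not part of the original answer), using the suggested substitution:</p>

```latex
% With x = y - 1/2, we have x + x^2 = y^2 - 1/4 and dx = dy, so
\int x\sqrt{x+x^2}\,dx
  = \int \Bigl(y-\tfrac{1}{2}\Bigr)\sqrt{y^{2}-\tfrac{1}{4}}\,dy
  = \tfrac{1}{3}\Bigl(y^{2}-\tfrac{1}{4}\Bigr)^{3/2}
    - \tfrac{1}{2}\int \sqrt{y^{2}-\tfrac{1}{4}}\,dy
```

<p>and the remaining integral is a standard form, handled by a trigonometric or hyperbolic substitution.</p>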
|
3,996,218 | <p>Well, I wanted to know whether or not <span class="math-container">$y = x^2 + x + 7$</span> is a quadratic equation, since the general form is <span class="math-container">$ax^2 + bx + c = 0$</span>, but here the equation
<span class="math-container">$y=x^2+x+7$</span> isn't set equal to zero, so I'm a bit confused.</p>
| Toby Mak | 285,313 | <p>The general form is not <span class="math-container">$ax^2+bx+c=0$</span>, but <span class="math-container">$y = ax^2+bx+c$</span>. To have a function, you must be able to put one number in (<span class="math-container">$x$</span> in this case) and get one number out (<span class="math-container">$y$</span>).</p>
<p><span class="math-container">$ax^2+bx+c =0$</span> when <span class="math-container">$y = 0$</span>, so when you write this, you are trying to find which <span class="math-container">$x$</span> values the function outputs <span class="math-container">$0$</span> (or the roots). This doesn't help you figure out if <span class="math-container">$y=x^2+x+7$</span> is a quadratic equation.</p>
|
3,996,218 | <p>Well, I wanted to know whether or not <span class="math-container">$y = x^2 + x + 7$</span> is a quadratic equation, since the general form is <span class="math-container">$ax^2 + bx + c = 0$</span>, but here the equation
<span class="math-container">$y=x^2+x+7$</span> isn't set equal to zero, so I'm a bit confused.</p>
| Deepak | 151,732 | <p>When you write <span class="math-container">$y=x^2 + x+7$</span>, that is <em>not</em> generally considered a "quadratic equation" in the commonly used sense. Most of the time, that is taken to mean a functional relationship between two variables, namely <span class="math-container">$y$</span> and <span class="math-container">$x$</span>. Because the right hand side takes the form of a quadratic polynomial, you are justified in calling it a "quadratic function" of <span class="math-container">$x$</span>. When writing the relationship between two variables in this form, you're looking to answer questions like: what is the value of <span class="math-container">$y$</span> for a given value of <span class="math-container">$x$</span>? What does a plot of <span class="math-container">$y$</span> against <span class="math-container">$x$</span> look like? And so forth.</p>
<p>A quadratic equation is (already, or can easily be rearranged into) something of the canonical form <span class="math-container">$ax^2 + bx +c =0$</span>. The last term on the left hand side is a constant, while the right hand side is zero.</p>
<p>So these are quadratic equations:</p>
<ol>
<li><p><span class="math-container">$x^2 + x +7 =0$</span> (already in the canonical form)</p>
</li>
<li><p><span class="math-container">$x^2 + x + 7 = 2$</span> (can be immediately rearranged into the canonical form)</p>
</li>
<li><p><span class="math-container">$x^2 + x + 7 = k$</span> (where <span class="math-container">$k$</span> is specified as a constant, even if it's not a known constant, allowing rearrangement into the proper form)</p>
</li>
</ol>
<p>Note that the quadratic functional relationship <span class="math-container">$y = x^2 + x +7$</span> can be made into a quadratic equation if we ask and try to answer questions like:</p>
<ol>
<li><p>What value(s) of <span class="math-container">$x$</span> makes <span class="math-container">$y = 10$</span>? In this case <span class="math-container">$x^2 + x +7 =10$</span>, which is a quadratic equation with two real roots, so you have your two possible <span class="math-container">$x$</span> values.</p>
</li>
<li><p>Does the curve <span class="math-container">$y = x^2 + x +7$</span> intersect the <span class="math-container">$x$</span> axis? The answer is 'no' because the quadratic equation <span class="math-container">$x^2 + x + 7 =0$</span> has no real roots, only complex ones.</p>
</li>
</ol>
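<p>The "no real roots" claim in point 2 above is easy to verify directly from the discriminant; a quick sketch (variable names are mine):</p>

```python
import cmath

# Quick check that x^2 + x + 7 = 0 has no real roots: the discriminant
# b^2 - 4ac is negative, so the two roots form a complex-conjugate pair.
a, b, c = 1, 1, 7
disc = b * b - 4 * a * c                      # 1 - 28 = -27 < 0
r1 = (-b + cmath.sqrt(disc)) / (2 * a)
r2 = (-b - cmath.sqrt(disc)) / (2 * a)
print(disc, r1, r2)  # -27, and a conjugate pair of complex roots
```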
<p>I hope I've understood the essence of your question and answered clearly enough.</p>
|
1,431,464 | <p>Does anyone know a good reference where it is shown that the Schwartz class $\mathcal{S}(\mathbb R)$ is a dense subset of $L^2(\mathbb R)$?</p>
<p>Many thanks</p>
| Silvia Ghinassi | 258,310 | <p>Dan gave a good bunch of references. Another proof can be found in Lieb and Loss' "Analysis", Lemma 2.19. The following is a quick sketch of how the proof goes.</p>
<hr>
<p>In Rudin's "Real and Complex Analysis", Theorem 3.14, it is proved that $C_c(\mathbb R)$ is dense in $L^p (\mathbb R)$ (you can find this also in Folland, Proposition 7.9).</p>
<p>We know that $C_c^{\infty}(\mathbb R) \subset \mathcal{S}(\mathbb R)$, so to show density it is enough to show $C_c^{\infty}(\mathbb R)$ is dense in $L^2(\mathbb R)$. To this purpose it is enough to show that $C_c^{\infty}(\mathbb R)$ is dense in $C_c(\mathbb R)$ since we already know that the latter is dense in $L^2(\mathbb R)$.</p>
<p>Now, let $\rho_{\frac1n}$ be a family of <a href="https://en.wikipedia.org/wiki/Mollifier" rel="nofollow">mollifiers</a>. Then, if $f \in C_c(\mathbb R)$, we have $f_n=\rho_{\frac1n} * f \in C_c^{\infty}(\mathbb R)$ and $f_n \to f$ in $L^p(\mathbb R)$, for $p \in [1,+\infty)$ (see, for instance Theorem 2.1 in Duoandikoetxea's "Fourier Analysis", or section 8.2 in Folland) and this gives us the desired result.</p>
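<p>The mollification step can be illustrated numerically. The following is a pure-Python sketch under choices of my own (a hat function for $f$, the standard bump for the mollifier, a uniform grid); it only suggests the $L^2$ convergence, it does not prove it:</p>

```python
import math

# Sketch: f_eps = rho_eps * f approaches f in L^2 as eps -> 0.
# The hat function f, the bump, and the grid are my choices.
N = 1201
xs = [-3.0 + 6.0 * i / (N - 1) for i in range(N)]
dx = 6.0 / (N - 1)
f = [max(0.0, 1.0 - abs(x)) for x in xs]          # f in C_c(R), kinks at -1, 0, 1

def bump(t):
    """The standard bump, supported in (-1, 1), before normalization."""
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def mollify(eps):
    """Discrete convolution rho_eps * f on the grid."""
    m = int(round(eps / dx))
    rho = [bump(k * dx / eps) for k in range(-m, m + 1)]
    total = sum(rho) * dx
    rho = [r / total for r in rho]                 # now sum(rho) * dx == 1
    out = []
    for i in range(N):
        acc = 0.0
        for k in range(-m, m + 1):
            j = i - k
            if 0 <= j < N:
                acc += rho[k + m] * f[j]
        out.append(acc * dx)
    return out

def l2_error(eps):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mollify(eps), f)) * dx)

print(l2_error(0.5), l2_error(0.1))  # the error shrinks as eps -> 0
```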
|
20,314 | <p>Hi all.
I'm looking for English books with good coverage of distribution theory.
I'm a fan of Folland's Real analysis, but it only gives elementary notions on distributions.
Thanks in advance.</p>
| 7-adic | 2,666 | <p>I would say Fourier analysis, by Javier Duoandikoetxea, AMS.</p>
|
20,314 | <p>Hi all.
I'm looking for English books with good coverage of distribution theory.
I'm a fan of Folland's Real analysis, but it only gives elementary notions on distributions.
Thanks in advance.</p>
| John Stillwell | 1,587 | <p>For a really gentle introduction I would recommend
Kolmogorov and Fomin's <em>Introductory Real Analysis</em>,
available as a Dover paperback. They have a nice
introduction to distributions as "generalized functions"
in Section 21.</p>
|
20,314 | <p>Hi all.
I'm looking for English books with good coverage of distribution theory.
I'm a fan of Folland's Real analysis, but it only gives elementary notions on distributions.
Thanks in advance.</p>
| Michael Hardy | 6,316 | <p>There's the book by Ian Richards and Heekyung Youn. It describes itself as a "non-technical introduction", which apparently means you don't need to know measure theory, topology, or functional analysis. Nonetheless you do need to think more like a mathematician than a physicist or the like in order to appreciate their approach.</p>
|
20,314 | <p>Hi all.
I'm looking for English books with good coverage of distribution theory.
I'm a fan of Folland's Real analysis, but it only gives elementary notions on distributions.
Thanks in advance.</p>
| user45664 | 94,943 | <p>"Mathematics for the Physical Sciences", Laurent Schwartz, Dover 2008 is a simplified English language book that covers some (maybe even much) of Schwartz's theory of distributions. Very readable, helpful and interesting (also $19.95). The title sounds more general than it actually is--really is focused on distributions, and their applications. Schwartz says in the preface: 'This work is concerned with the mathematical methods of physics'.</p>
|
842,266 | <p>I have a tiny little doubt related to one proof given in Ahlfors' textbook. I'll copy the statement and the first part of the proof, which is the part where my doubt lies.</p>
<p><strong>Statement</strong>
The stereographic projection transforms every straight line in the $z$-plane into a circle on $S$ which passes through the pole $(0,0,1)$ and the converse is also true. More generally, any circle on the sphere corresponds to a circle or straight line in the $z$-plane.</p>
<p><strong>Proof</strong></p>
<p>To prove this we observe that a circle on the sphere lies in a plane $\alpha_1x_1+\alpha_2x_2+\alpha_3x_3=\alpha_0$, where we can assume ${\alpha_1}^2+{\alpha_2}^2+{\alpha_3}^2=1$ and $0\leq \alpha_0 <1$</p>
<p>I don't understand why it is always the case that the condition $0\leq \alpha_0 <1$ can be satisfied. I mean, a plane can be described as:</p>
<p>$$\Pi: \space n.(v-v_0)=0 \tag{1}$$ where $v$ and $v_0$ are two vectors with endpoints lying on $\Pi$. I know that $n$ is a perpendicular vector to the plane, and I understand that if $n=(\alpha_1,\alpha_2,\alpha_3)$ doesn't satisfy $||n||=1$, then the vector $n'=\dfrac{n}{||n||}$ is a unit vector which also satisfies equation (1).</p>
<p>Equation (1) is the same as $$\space n.v=n.v_0 \tag{2}$$</p>
<p>In this problem, $\alpha_0=n.v_0$, I don't understand why we can always choose $n$ and $v_0$ such that all the conditions said in my previous lines are satisfied.</p>
<p>I put the title "complex-analysis" but I am not sure if it is the proper tag, if anyone can think of a better tag for this post, tell me and I'll change it.</p>
| Emily | 31,475 | <p>$0 \le \alpha_0< 1$ is satisfied because the plane must intersect the Riemann sphere, on which the maximum component of any point is bounded by 1.</p>
|
4,411,096 | <p>I know the closure of a connected set in a topological space must be connected as well. However, I can't understand why this counterexample fails.
Take <span class="math-container">$X=[0,2)\cup\{3\}, B_2(1)=(0,2)$</span>, which is connected. Now take the closed ball <span class="math-container">$C_2(1)=[0,2)\cup \{3\}$</span>, which is clearly not connected. I appreciate your help.</p>
| José Carlos Santos | 446,262 | <p>It turns out that the closure of <span class="math-container">$(0,2)$</span> in <span class="math-container">$[0,2)\cup\{3\}$</span> is <span class="math-container">$[0,2)$</span>, not <span class="math-container">$[0,2)\cup\{3\}$</span>. The closure of an open ball is not always the corresponding closed ball.</p>
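<p>The key point, that <span class="math-container">$3$</span> is isolated from <span class="math-container">$(0,2)$</span> inside <span class="math-container">$X$</span>, can be illustrated with a tiny computation (a sketch; the sampling grid is my choice):</p>

```python
# Sampled illustration that 3 is not in the closure of (0,2) inside
# X = [0,2) ∪ {3}: the ball of radius 1/2 around 3, intersected with X,
# contains no point of (0,2).
X = [i / 1000 for i in range(0, 2000)] + [3.0]   # dense sample of [0,2), plus {3}
ball = [p for p in X if abs(p - 3.0) < 0.5]      # sampled B(3, 1/2) ∩ X
print(ball)  # only the point 3.0 itself: the ball misses (0,2)
```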
|
3,369,069 | <p>Let <span class="math-container">$l_1$</span> and <span class="math-container">$l_2$</span> be two distributions in disjoint variables <span class="math-container">$x_1, ..., x_n$</span> and <span class="math-container">$y_1, ..., y_m$</span>. Then it is said to be possible to define a product distribution.</p>
<p>However, I am fundamentally confused. Distributions are in fact linear functionals on the space of smooth and compactly supported functions. Then, how does the 'product' of linear functionals again become a linear functional?</p>
<p>Especially, what can be a definition of <span class="math-container">$\delta(x_1)\delta(x_2)$</span> such that it is equal to <span class="math-container">$\delta(x_1, x_2)$</span>? I am just stuck.</p>
| Robert Furber | 184,596 | <p>As mathcounterexamples.net says, this is related to the tensor product of distributions. Here is an outline of how to do this. The full proof is rather involved, depending on your background knowledge, so I'll give a reference.</p>
<p>Let's write <span class="math-container">$\newcommand{\D}{\mathcal{D}}\newcommand{\R}{\mathbb{R}}\D(\R^n)$</span> for the space of smooth, compactly supported real-valued functions on <span class="math-container">$\R^n$</span>. We write <span class="math-container">$\D'(\R^n)$</span> for the dual space, the space of distributions (continuous linear functions <span class="math-container">$\D(\R^n) \rightarrow \R$</span>). So your <span class="math-container">$l_1 \in \D'(\R^n)$</span> and <span class="math-container">$l_2 \in \D'(\R^m)$</span>. To avoid confusion with the pointwise product, I'm going to write the product in <span class="math-container">$\D'(\R^{n+m})$</span> as <span class="math-container">$l_1 \otimes l_2$</span>. But in order to define this, I'll define the analogous operation for <span class="math-container">$\D(\R^{n+m})$</span> first. </p>
<p>If <span class="math-container">$f \in \D(\R^n)$</span> and <span class="math-container">$g \in \D(\R^m)$</span>, define <span class="math-container">$f \otimes g \in \D(R^{n+m})$</span> by:
<span class="math-container">$$
(f \otimes g)(x_1, \ldots, x_n, y_1, \ldots, y_m) = f(x_1,\ldots,x_n)g(y_1,\ldots,y_m)
$$</span>
Importantly, the linear span of <span class="math-container">$\{ f \otimes g \mid f \in \D(\R^n) \text{ and } g \in \D(\R^m) \}$</span> is <em>dense</em> in <span class="math-container">$\D(\R^{n+m})$</span> in its usual topology. This means we can uniquely define distributions in <span class="math-container">$\D'(\R^{n+m})$</span> by defining them on smooth functions of the form <span class="math-container">$f \otimes g$</span>. </p>
<p>So for <span class="math-container">$l_1 \in \D'(\R^n)$</span> and <span class="math-container">$l_2 \in \D'(\R^m)$</span>, define
<span class="math-container">$$
(l_1 \otimes l_2)(f \otimes g) = l_1(f) l_2(g),
$$</span>
or more generally
<span class="math-container">$$
(l_1 \otimes l_2)\left(\sum_{i=1}^p \alpha_i f_i \otimes g_i\right) = \sum_{i=1}^p \alpha_i l_1(f_i) l_2(g_i),
$$</span>
where <span class="math-container">$\alpha_i \in \R$</span>, <span class="math-container">$f_i \in \D(\R^n)$</span> and <span class="math-container">$g_i \in \D(\R^m)$</span>. This defines a continuous linear map, which can therefore be extended to <span class="math-container">$l_1 \otimes l_2 : \D(\R^{n+m}) \rightarrow \R$</span>, defining a distribution. </p>
<p>To fill in the gaps in what I'm saying above, consult a standard reference text on distributions, such as Hörmander's <em>The Analysis of Linear Partial Differential Operators I</em>, Theorem 5.1.1.</p>
<p>You asked to see how this works out for <span class="math-container">$\delta$</span> functions. So for any <span class="math-container">$h \in \D(\R^p)$</span>, <span class="math-container">$\delta(h) = h(0)$</span> is the usual definition. Now, two continuous linear maps that agree on a subset of <span class="math-container">$\D(\R^{n+m})$</span> whose linear span is dense must be equal. To avoid confusion, let's write <span class="math-container">$\delta_n$</span> for the <span class="math-container">$\delta$</span> function in <span class="math-container">$\D'(R^n)$</span>. So we see that
<span class="math-container">$$
(\delta_n \otimes \delta_m)(f \otimes g) = \delta(f)\delta(g) = f(0)g(0) = (f \otimes g)(0) = \delta_{n+m}(f \otimes g),
$$</span>
and this proves that <span class="math-container">$\delta_n \otimes \delta_m = \delta_{n+m}$</span>.</p>
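<p>The last identity can be sanity-checked numerically by replacing each <span class="math-container">$\delta$</span> with a narrow Gaussian and the pairing with a Riemann sum (a sketch; the test functions below are mere stand-ins for elements of <span class="math-container">$\D(\R)$</span>, and the width <span class="math-container">$\varepsilon$</span> is my choice):</p>

```python
import math

# Check of (delta ⊗ delta)(f ⊗ g) = f(0) g(0) with delta replaced by a
# narrow Gaussian rho_eps and the pairing by a Riemann sum on a grid
# covering 15 standard deviations.
eps, h, n = 0.02, 0.002, 301
pts = [-0.3 + h * i for i in range(n)]
rho = [math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
       for x in pts]

f = math.cos                       # stand-in test function
g = lambda y: 1.0 / (1.0 + y * y)  # another stand-in

S = sum(rho[i] * rho[j] * f(pts[i]) * g(pts[j]) * h * h
        for i in range(n) for j in range(n))
print(S)  # close to f(0) * g(0) = 1
```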
|
214,486 | <p><a href="https://i.stack.imgur.com/rZXpG.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/rZXpG.gif" alt="enter image description here"></a></p>
<p>I made it with other software, and ran into some problems converting it into MMA code.</p>
<pre><code>f[x_] := Graphics[
Line[AnglePath[{90 °, -90 °}[[
1 + Nest[Join[#, {0}, Reverse[1 - #]] &, {0}, x]]]]]];
f /@ Range[5]
</code></pre>
<p>The effect is weird.</p>
<p>It has two affine rules</p>
<p><span class="math-container">$(x,y)\to(0.5x-0.5y,0.5x+0.5y)$</span> and <span class="math-container">$(x,y)\to(-0.5x-0.5y+1,0.5x-0.5y)$</span></p>
<p>for example: </p>
<pre><code>g[{x_, y_}] := {{0.5 x - 0.5 y, 0.5 x + 0.5 y},
  {-0.5 x - 0.5 y + 1, 0.5 x - 0.5 y}}
h[x_] := Partition[Flatten[g /@ x], 2]
NestList[h, {{0, 0}}, 13] // ListPlot
</code></pre>
<p>gives <a href="https://i.stack.imgur.com/qX8Xq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qX8Xq.png" alt="enter image description here"></a></p>
<p>So, I know how to plot a still picture, but I have no idea how to animate it.</p>
| chyanog | 2,090 | <p>I think the OP may want an animation with transition effects. Compare these two effects:<br>
<a href="https://i.stack.imgur.com/c81s4.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/c81s4.gif" alt="enter image description here"></a><br>
Then translation<br>
<a href="https://i.stack.imgur.com/kbTXW.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/kbTXW.gif" alt="enter image description here"></a></p>
<pre><code>Clear["`*"]
cf = Compile[{{M, _Real, 2}, t},
With[{A = M[[1]], B = M[[2]]},
With[{P = (A + B + t Cross[B - A])/2}, {{A, P}, {B, P}}]], RuntimeAttributes -> Listable
];
f[n_] := Flatten[Nest[cf[#, 1] &, {{{0, 0}, {1, 0}}}, Floor@n], Floor@n];
g[n_] := Flatten[cf[f[n], FractionalPart[n]], 1];
Manipulate[Graphics[{Line[f[n]]}, PlotRange -> {{-0.4, 1.2}, {-0.4, 0.7}}], {n, 0, 12}]
Manipulate[Graphics[{Line[g[n]]}, PlotRange -> {{-0.4, 1.2}, {-0.4, 0.7}}], {n, 0, 12}]
Manipulate[
With[{i = Floor[n], TF = TranslationTransform},
Graphics[{
Table[Line[TF[{2 j, 0}]@f[j]], {j, 0, n}],
Line@If[n - i < 0.5, TF[{4 n - 2 i, 0}]@f[n], TF[{2 i + 2, 0}]@g[2 n - i - 1]]
}, ImageSize -> 670, PlotRange -> {{-0.2, 13.2}, {-0.5, 0.8}}]],
{n, 0, 6}]
</code></pre>
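<p>For what it's worth, the two affine maps from the question can also be iterated outside Mathematica; a Python sketch (function and variable names are my own):</p>

```python
# The question's two affine maps, iterated in plain Python for comparison.
def step(points):
    out = []
    for x, y in points:
        out.append((0.5 * x - 0.5 * y, 0.5 * x + 0.5 * y))         # first map
        out.append((-0.5 * x - 0.5 * y + 1.0, 0.5 * x - 0.5 * y))  # second map
    return out

pts = [(0.0, 0.0)]
for _ in range(13):          # mirrors NestList[h, {{0, 0}}, 13] in the question
    pts = step(pts)

print(len(pts))  # 2^13 = 8192 points approximating the curve
```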
|
214,486 | <p><a href="https://i.stack.imgur.com/rZXpG.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/rZXpG.gif" alt="enter image description here"></a></p>
<p>I made it with other software, and ran into some problems converting it into MMA code.</p>
<pre><code>f[x_] := Graphics[
Line[AnglePath[{90 °, -90 °}[[
1 + Nest[Join[#, {0}, Reverse[1 - #]] &, {0}, x]]]]]];
f /@ Range[5]
</code></pre>
<p>The effect is weird.</p>
<p>It has two affine rules</p>
<p><span class="math-container">$(x,y)\to(0.5x-0.5y,0.5x+0.5y)$</span> and <span class="math-container">$(x,y)\to(-0.5x-0.5y+1,0.5x-0.5y)$</span></p>
<p>for example: </p>
<pre><code>g[{x_, y_}] := {{0.5 x - 0.5 y, 0.5 x + 0.5 y},
  {-0.5 x - 0.5 y + 1, 0.5 x - 0.5 y}}
h[x_] := Partition[Flatten[g /@ x], 2]
NestList[h, {{0, 0}}, 13] // ListPlot
</code></pre>
<p>gives <a href="https://i.stack.imgur.com/qX8Xq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qX8Xq.png" alt="enter image description here"></a></p>
<p>So, I know how to plot a still picture, but I have no idea how to animate it.</p>
| A little mouse on the pampas | 42,417 | <p>I have similar code made by <a href="https://github.com/ChenMinQi/chenminqi.github.io/tree/master/Koch%E6%9B%B2%E7%BA%BF%E7%9A%84%E5%8A%A8%E6%80%81%E5%8C%96" rel="nofollow noreferrer">Apple</a>, just for reference.</p>
<pre><code>Clear["Global`*"]
rotate[p4_, p2_] := Evaluate[Simplify@RotationTransform[1. Pi/3, p2][p4]];
generate[p1_, p5_] := Module[{p2, p3, p4},
p2 = (p5 - p1)/3 + p1;
p4 = 2 (p5 - p1)/3 + p1;
p3 = rotate[p4, p2];
{p1, p2, p3, p4}];
data[0]=N@{{ 0, 0}, {1, 0}};
data[n_] := data[n] = Flatten[{generate @@@ Partition[data[n - 1], 2, 1], {{{ 1, 0}}}}, 2];
move[{p1_, p2_, p3_, p4_, p5_}, t_] := {{p1, p2, (1 - t) p4 + t p3}, {(1 - t) p2 + t p3, p4, p5}};
AllMove[data_, t_] := move[#, t] & /@ Partition[data, 5, 4];
newdata[t_] := Flatten[AllMove[data[Quotient[t + 1, 1]], Mod[t, 1]], 1];
Manipulate[ListLinePlot[newdata[t], PlotRange -> {{ 0, 1}, {-0.02, 0.3}}, AspectRatio -> 0.32,
Axes -> False, PlotStyle -> RGBColor[0.353, 0.741, 0.913], ImageSize -> {500, 200}],
{t, 0, 4, 0.03},SaveDefinitions -> True]
</code></pre>
|
3,761,689 | <p>I was watching a YouTube video that showed how the length of daylight changes depending on the time of year, and I was curious and wanted to try calculating how long the daylight lasts at the Tropic of Cancer (23.5 degrees latitude) during the winter solstice: apparently 10 hours and 33 minutes or so, according to the video. Here is the <a href="https://www.youtube.com/watch?v=WLRA87TKXLM&t=5m27s" rel="noreferrer">timestamp</a> for reference.</p>
<p>This is my work (the yellow blobs represent 23.5 degrees and the pink blobs 43 degrees):</p>
<p><a href="https://i.stack.imgur.com/yKmZe.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/yKmZe.jpg" alt="enter image description here" /></a></p>
<p><span class="math-container">$\sin(66.5 \text{ degrees}) = (\text{yellow leg + orange leg}) / r$</span> implies <span class="math-container">$0.917060r = \text{yellow leg + orange leg}$</span></p>
<p><span class="math-container">$\cos(66.5 \text{ degrees}) = \text{purple leg} / r$</span> implies <span class="math-container">$0.398749r = \text{purple leg}$</span></p>
<p><span class="math-container">$\tan(23.5 \text{ degrees}) = \text{orange leg / purple leg}$</span> implies <span class="math-container">$0.434812 \cdot \text{ purple leg} = \text{orange leg}$</span></p>
<p>Subbing in the value we already got from the purple leg, we get <span class="math-container">$0.173381r = \text{orange leg}$</span></p>
<p>That means the orange leg is <span class="math-container">$0.173381r/ 0.917060r$</span> fraction of the yellow and orange leg, about <span class="math-container">$0.189061784$</span>. This represents how much extra darkness there is along the line.</p>
<p>Since this darkness is on both sides of the globe, I multiply it by two, to get <span class="math-container">$0.37812$</span>.</p>
<p>So the daylight is about <span class="math-container">$37.81$</span>% shorter, down from <span class="math-container">$12$</span> hours to about <span class="math-container">$7.46$</span> hours. Way off compared to the video's <span class="math-container">$10$</span> hours <span class="math-container">$33$</span> minutes.</p>
<p>Where is my mistake?</p>
| Hagen von Eitzen | 39,174 | <ul>
<li>The purple line at latitude <span class="math-container">$\alpha$</span> is <span class="math-container">$r\sin\alpha$</span></li>
<li>Then the orange line is <span class="math-container">$r\sin\alpha\tan\alpha$</span></li>
<li>The radius of the latitude circle is <span class="math-container">$r\cos\alpha$</span>.</li>
<li>Hence the orange line divided by the radius is <span class="math-container">$\tan^2\alpha $</span></li>
</ul>
<p>Now if the angle between 6 o'clock and sunrise is <span class="math-container">$\beta$</span>, we have <span class="math-container">$\sin\beta=\tan^2\alpha$</span> and so obtain a daytime length of
<span class="math-container">$$ \left(1-\frac{\arcsin\tan^2\alpha}{90^\circ}\right)\cdot{12\,\text{h}}=\arccos\tan^2\alpha\cdot\frac{12\,\text{h}}{90^\circ}$$</span></p>
<p>For <span class="math-container">$\alpha=23.5^\circ$</span>, this gives me <span class="math-container">$10.55$</span> hours, or <span class="math-container">$10:32:49$</span>.</p>
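<p>The final formula can be evaluated directly; a quick Python sketch (this only checks the arithmetic, not the underlying spherical geometry, and the function name is mine):</p>

```python
import math

# Day length at the winter solstice from the formula
# arccos(tan^2(alpha)) * 12h / 90 deg.
def daylight_hours(latitude_deg):
    t2 = math.tan(math.radians(latitude_deg)) ** 2
    return math.degrees(math.acos(t2)) * 12.0 / 90.0

print(daylight_hours(23.5))  # about 10.55 hours, i.e. roughly 10 h 33 min
```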
|
2,035,186 | <p>This is a probability question where I am asked to integrate a region that represents the probability of a scenario. X, Y, and U are random variables, where U = X-Y. I need to find the probability </p>
<p>$P(U \leq u) = P(X-Y \leq u)$, where the density function I'm integrating over is defined by f(x,y) = 1, for 0 $\leq x \leq 2 \quad 0 \leq y \leq 1 \quad 2 y \leq x$. </p>
<p>Why can't I capture this region with a single double integral, $\int_{0}^{1}\int_{2y}^{u+y}dx\,dy$? </p>
| rogerl | 27,542 | <p>The area of the circle is $400\pi\ \text{cm}^2$.</p>
<p>Since the area of the triangle is $60$, the other leg is $8$, so that the hypotenuse is $17$. Now, call the two non-right-angle vertices of the triangle $A$ and $B$, and let the three points of tangency be $P$, $Q$, and $R$, from right to left. Then $AP = AQ$ and $BQ = BR$. Now, if you finish drawing the square of which your picture shows parts of two sides, the side of that square is $15+8+AP+BR$, so the radius of the circle is $\frac{23+AP+BR}{2}$. But $AP+BR = AQ+BQ = AB = 17$, so the circle's radius is $20$ and its area is $400\pi$.</p>
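<p>The arithmetic here is easy to verify; a quick sketch (variable names are mine): legs $15$ and $8$ give area $60$, hypotenuse $17$, radius $(15+8+17)/2=20$.</p>

```python
import math

# Arithmetic check of the steps above.
leg_a, leg_b = 15.0, 8.0
area_triangle = leg_a * leg_b / 2
hyp = math.hypot(leg_a, leg_b)
radius = (leg_a + leg_b + hyp) / 2
circle_area = math.pi * radius ** 2
print(area_triangle, hyp, radius, circle_area)  # 60.0, 17.0, 20.0, 400*pi
```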
|
161,024 | <p>I was recently having a discussion with someone, and we found that we could not agree on what an exponential function is, and thus we could not agree on what exponential growth is. </p>
<p>Wikipedia claims it is $e^x$, whereas I thought it was $k^x$, where $k$ could be any unchanging number. For example, in my Computer Science classes, I do everything in base 2. Is $2^x$ not an exponential function? The classical example of exponential growth is something that doubles every increment, which is perfectly fulfilled by $2^x$. I'd also thought $10^x$ was a common case of exponential growth, that is, increasing by an order of magnitude each time. Or am I wrong in this, and only things that follow the natural exponential are exponential functions, and thus examples of exponential growth?</p>
| hmakholm left over Monica | 14,366 | <p>$e^x$ is <strong>the</strong> exponential function, but $c\cdot k^x$ is <strong>an</strong> exponential function for any $k$ ($> 0, \ne1$) and $c$ ($\ne 0$).</p>
<p>The terminology is a bit confusing, but is so well settled that one just has to get used to it.</p>
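<p>One reason the terminology is harmless: every base-$k$ exponential is the natural exponential with a rescaled argument, $k^x = e^{x\ln k}$. A quick numeric check (a sketch):</p>

```python
import math

# Any base-k exponential equals the natural exponential with a rescaled
# argument: k**x == e**(x * ln k), so the growth is "exponential" in
# exactly the same sense regardless of base.
for k in (2.0, 10.0):
    for x in (0.5, 3.0, 10.0):
        assert math.isclose(k ** x, math.exp(x * math.log(k)))

print(math.exp(10 * math.log(2)))  # essentially 1024 == 2**10
```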
|
935,454 | <p>Suppose we have the integral operator $T$ defined by</p>
<p>$$Tf(y) = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}f(xy)\,dx,$$</p>
<p>where $f$ is assumed to be continuous and of polynomial growth at most (just to guarantee the integral is well-defined). If we are to inspect the kernel of the operator, we would want to solve</p>
<p>$$0 = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}f(xy)\,dx.$$</p>
<p>If $f$ were odd, this would be trivially zero so we would like to consider the case that $f$ is even. My hunch is that $f$ should be identically zero but I haven't been able to convincingly prove it to myself. The reason that I feel like it should be the zero function is that by scaling the Gaussian, we can make it arbitrarily close to $0$ or $1$. I'll sketch some thoughts of mine.</p>
<p>If $y=0$, then we would have that $0 = \sqrt{2\pi}f(0)$, forcing $f(0)=0$. Making use of evenness and supposing instead that $y\neq 0$, we can make a change of variable to get</p>
<p>$$0 = \int_0^{\infty} e^{-\frac{x^2}{2y^2}}f(x)\,dx.$$</p>
<p>Since $f$ has polynomial growth at most, given a fixed $y$, for any $\varepsilon > 0$, there exists $R_y$ such that</p>
<p>$$\left|e^{-\frac{x^2}{2y^2}}f(x)\right| \le \frac{\varepsilon}{1+x^2}$$</p>
<p>for all $x > R_y$. Thus we can focus instead on the integral from $0$ to $R_y$ since the tail effectively integrates to zero:</p>
<p>$$0 = \int_0^{R_y}e^{-\frac{x^2}{2y^2}}f(x)\,dx.$$</p>
<p>Since $[0,R_y]$ is compact and $f$ is a continuous function, it can be approximated uniformly by (even) polynomials with constant term $0$ by Stone-Weierstrass, i.e.</p>
<p>$$f(x) = \lim_n p_n(x),$$</p>
<p>where $p_n(x) = \sum\limits_{m=1}^n a_{m,y}x^{2m}$. Here the coefficients are tacitly dependent upon the upper bound (so I've made it explicit to prevent any confusion). From here, we have</p>
<p>$$0 = \int_0^{R_y}e^{-\frac{x^2}{2y^2}}\lim_n p_n(x)\,dx.$$</p>
<p>However since the convergence is uniform, we can commute limit and integral to get that</p>
<p>$$0 = \lim_n\sum_{m=0}^n a_{m,y}\int_0^{R_y}e^{-\frac{x^2}{2y^2}}x^{2m}\,dx.$$</p>
<p>Making use of a change of variable, this gives</p>
<p>$$0 = \lim_n\sum_{m=0}^n a_{m,y}y^{2m+1}\int_0^{\frac{R_y}{y}} e^{-\frac{x^2}{2}}x^{2m}\,dx.$$</p>
<p>I would like to be able to say that the coefficients must be zero but this is pretty messy at this point. Does anyone have any clue as to how to proceed? Or is there a better way to do this? (I would like to avoid Fourier transform-based or Weierstrass transform-based arguments.)</p>
| capea | 86,132 | <p>Easy: there are infinitely many solutions, for example $$y^2 \sin (x)+x \cos (x)$$ $$e^{-x} y^2+e^{-x} x$$
Done.</p>
|
947,254 | <p>The problem is part (b):</p>
<p><b>1.4.7.</b> A pair of dice is cast until either the sum of seven or eight appears.</p>
<p> <b>(a)</b> Show that the probability of a seven before an eight is 6/11.</p>
<p> <b>(b)</b> Next, this pair of dice is cast until a seven appears twice or until each of a six and eight has appeared at least once. Show that the probability of the six and eight occurring before two sevens is 0.546.</p>
<p>I would like to try to solve this problem using Markov chains, but I'm encountering a dilemma. To calculate the probability, I would need to multiply down the branches that lead to a terminating state, and then sum all of those branches. But I have loops in my diagram, so I'm not sure how to account for the fact that I could remain in a state for an indefinite number of rolls:</p>
<p>[I only drew the branches corresponding to rolling a 6, but there are of course the two other branches (and sub-branches) for rolling a 7 or 8.]</p>
<p><img src="https://i.stack.imgur.com/Cy6S6.jpg" alt="enter image description here"></p>
<p>If that's hard to read, here <a href="https://i.imgur.com/T8BE5ix.jpg" rel="nofollow noreferrer">is a higher resolution</a>. This is my chain of reasoning: We start out in a state of not having a 6, 7, or 8 yet. We could stay here indefinitely. Rolling a 6 takes us to the next state. We could also stay here indefinitely, or roll an 8 and get an accept state. Or we could roll a 7. At that state, we could roll another 7 and get an accept state, or roll an 8, or indefinitely roll a 6 (or any other number). All of those probabilities are noted in the transitions.</p>
<p>How do I account for these possibilities?</p>
| Alijah Ahmed | 124,032 | <p>Drawing a state diagram in terms of Markov chains will help in calculating the probabilities to some extent, and you are right in that we need to sum all the branches. </p>
<p>The scenario of an indefinite number of rolls can be dealt with by realising that we will end up with a sum to infinity of geometric progressions whose ratio is a positive number less than $1$ (as the ratios are simply probabilities). Thus, there will be a finite sum to infinity, which will correspond to the probability we wish to find.</p>
<p>To find the probability that a $6$ and $8$ occur before two $7$s, there are $4$ branches we need to add together:-</p>
<ol>
<li><p>For $n\geq2$ rolls of the dice, we obtain <em>at least one $6$</em> and $(n-2)$ sums which are not $6$,$7$ or $8$, and the final roll results in the sum of $8$.</p></li>
<li><p>For $n\geq3$ rolls of the dice, we roll and obtain <em>at least one $6$</em> and $(n-3)$ dice sums which are not $6$,$7$ or $8$ and <em>one $7$</em>, and the final roll results in the sum of $8$.</p></li>
<li><p>For $n\geq2$ rolls of the dice, we obtain <em>at least one $8$</em> and $(n-2)$ sums which are not $6$,$7$ or $8$, and the final roll results in the sum of $6$.</p></li>
<li><p>For $n\geq3$ rolls of the dice, we roll and obtain <em>at least one $8$</em> and $(n-3)$ sums which are not $6$,$7$ or $8$ and <em>one $7$</em>, with the final roll resulting in the sum of $6$.</p></li>
</ol>
<p>Next we need to evaluate the probability of each these four branches occurring, which is obtained by summing to $n\rightarrow\infty$ dice rolls for each branch.</p>
<p>Let us denote as $P(S=x)$ the probability that the sum of the two dice equals $2\leq x \leq 12$ for a particular roll of the dice. </p>
<p>There are a total of $36$ possible outcomes, of which $5$ each correspond to the sums of $6$ and $8$, and $6$ outcomes correspond to the sum of $7$. This leads to
$$P(S=6)=P(S=8)=\frac{5}{36},P(S=7)=\frac{6}{36}=\frac{1}{6}$$
There are $20$ outcomes which do not correspond to the sums of $6$,$7$ and $8$, so that
$$P(S\notin \{6,7,8\} )=\frac{20}{36}=\frac{5}{9}$$</p>
<hr>
<p>Let us consider the probability of Branch 1 occurring with $n$ rolls of the dice, which we denote as $P_1(n)$. </p>
<p>The probability of obtaining such a result, for $n\geq2$ rolls, is given as follows (where $1\leq k\leq n-1$ is the number of rolls on which the sum of $6$ is obtained, and there are $n-1 \choose k$ ways of selecting $k$ 6's from the first $n-1$ rolls; the final roll will be the sum of $8$)</p>
$$P_{1}(n)=P(S=8)\left(\sum_{k=1}^{n-1}{n-1\choose k}P(S=6)^kP(S\notin \{6,7,8\})^{n-1-k}\right)\\=P(S=8)\left(\color{blue}{\left[\sum_{k=0}^{n-1}{n-1\choose k}P(S=6)^kP(S\notin \{6,7,8\})^{n-1-k}\right]}-P(S\notin \{6,7,8\})^{n-1}\right)\\=P(S=8)(\color{blue}{(P(S=6)+P(S\notin\{6,7,8\}))^{n-1}}-P(S\notin \{6,7,8\})^{n-1})\\=\frac{5}{36}\left(\left(\frac{25}{36}\right)^{n-1}-\left(\frac{5}{9}\right)^{n-1}\right)$$
Thus the probability of branch $1$ occurring is the difference between two sum to infinities of geometric series
$$P_1=\sum_{n=2}^{\infty}P_1(n)=\frac{5}{36}\sum_{n=2}^{\infty}\left(\left(\frac{25}{36}\right)^{n-1}-\left(\frac{5}{9}\right)^{n-1}\right)\\=\frac{5}{36}\left(\frac{25}{11}-\frac{5}{4}\right)$$</p>
<hr>
<p>Next we consider branch 2, where at most one $7$ is obtained. </p>
<p>This is a more involved process, as we need to consider one $7$, one or more $6$'s and $(n-3)$ sums which are not $6$,$7$ or $8$, and the final $8$. </p>
<p>Maintaining consistency in notation, for $n\geq3$ rolls of the dice we have the following (the highlighted factor $(n-1)$ counts the number of positions in which the single outcome of $7$ can occur).
$$P_2(n)=P(S=8)\color{red}{(n-1)}P(S=7)\left(\sum_{k=1}^{n-2}{n-2\choose k}P(S=6)^kP(S\notin \{6,7,8\})^{n-2-k}\right)\\=\frac{5}{36}(n-1)\frac{1}{6}\left(\left(\frac{25}{36}\right)^{n-2}-\left(\frac{5}{9}\right)^{n-2}\right)$$
Thus the probability of branch $2$ occurring is
$$P_2=\frac{5}{216}\sum_{n=3}^{\infty}(n-1)\left(\left(\frac{25}{36}\right)^{n-2}-\left(\frac{5}{9}\right)^{n-2}\right)$$
To evaluate the sum to infinity, note that the sum is a differential of the sum of a geometric series, where
$$\sum_{n=3}^{\infty}(n-1)x^{(n-2)}=\frac{d}{dx}\left(\sum_{n=3}^{\infty}x^{n-1}\right)=\frac{d}{dx}\left(\frac{x^2}{1-x}\right)=\frac{x(2-x)}{(1-x)^2}$$
Using this result, and setting $x=\frac{25}{36}$ and $x=\frac{5}{9}$, we obtain
$$P_2=\frac{5}{216}\left(\left(\frac{36}{11}\right)^2\left(\frac{25}{36}\right)\left(\frac{47}{36}\right)-\left(\frac{9}{4}\right)^2\left(\frac{5}{9}\right)\left(\frac{13}{9}\right)\right)$$</p>
<hr>
<p>Having gone through the detailed process for branch 1, evaluation of branch 3 is done in a similar manner, whereby
$$P_{3}(n)=P(S=6)\left(\sum_{k=1}^{n-1}{n-1\choose k}P(S=8)^kP(S\notin \{6,7,8\})^{n-1-k}\right)$$
noting that $P(S=8)=P(S=6)$, we have $P_3(n)=P_1(n)$, so that $$P_3=P_1=\frac{5}{36}\left(\frac{25}{11}-\frac{5}{4}\right)$$</p>
<hr>
<p>Evaluation of branch 4, where the single sum of $7$ has to be dealt with, results in
$$P_4(n)=P(S=6)(n-1)P(S=7)\left(\sum_{k=1}^{n-2}{n-2\choose k}P(S=8)^kP(S\notin \{6,7,8\})^{n-2-k}\right)\\=\frac{5}{36}(n-1)\frac{1}{6}\left(\left(\frac{25}{36}\right)^{n-2}-\left(\frac{5}{9}\right)^{n-2}\right)$$
and exploiting the fact that $P(S=8)=P(S=6)$, we have
$$P_4=P_2=\frac{5}{216}\left(\left(\frac{36}{11}\right)^2\left(\frac{25}{36}\right)\left(\frac{47}{36}\right)-\left(\frac{9}{4}\right)^2\left(\frac{5}{9}\right)\left(\frac{13}{9}\right)\right)$$</p>
<hr>
<p>Having evaluated all four branches, the total probability is given by
$$\begin{align}P =& P_1+P_2+P_3+P_4\\=&2(P_1+P_2)\\=&\frac{5}{18}\left(\frac{25}{11}-\frac{5}{4}\right)+\frac{5}{108}\left(\left(\frac{36}{11}\right)^2\left(\frac{25}{36}\right)\left(\frac{47}{36}\right)-\left(\frac{9}{4}\right)^2\left(\frac{5}{9}\right)\left(\frac{13}{9}\right)\right)\\\approx& 0.546 \end{align}$$</p>
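<p>The branch-by-branch sums above can be cross-checked with a short exact computation over the absorbing Markov chain, conditioning on the next state-changing roll so the indefinite self-loops disappear (a Python sketch of my own, not part of the original answer):</p>

```python
from fractions import Fraction
from functools import lru_cache

SIX = Fraction(5, 36)    # P(sum = 6) on one roll of two dice
EIGHT = Fraction(5, 36)  # P(sum = 8)
SEVEN = Fraction(6, 36)  # P(sum = 7)

@lru_cache(maxsize=None)
def win(seen6, seen8, sevens):
    """P(see both a 6 and an 8 before the second 7), from this state.

    Rolls whose sum changes nothing (including a repeated 6 or 8) only
    loop the state, so we condition on the next state-changing roll.
    """
    if seen6 and seen8:
        return Fraction(1)
    if sevens == 2:
        return Fraction(0)
    events = [(SEVEN, win(seen6, seen8, sevens + 1))]
    if not seen6:
        events.append((SIX, win(True, seen8, sevens)))
    if not seen8:
        events.append((EIGHT, win(seen6, True, sevens)))
    total = sum(w for w, _ in events)
    return sum(w / total * v for w, v in events)

p = win(False, False, 0)
print(p, float(p))  # 4225/7744 ≈ 0.5456, which rounds to 0.546
```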
|
42,040 | <p>Suppose the polynomial $t^k - a$ has a root (hence splits) in $\mathbb{Q}(\zeta_k)$. For which $k$ does it follow that one of the roots of $t^k - a$ is rational? In particular, are there infinitely many such $k$? </p>
<p>A counting argument shows this is true whenever $k$ has the property that $\varphi(k)$ is a power of a prime relatively prime to $k$. Unfortunately, I think it's an open problem whether there are infinitely many such $k$. </p>
<p><strong>Motivation:</strong> If enough $k$ have this property then I think I can complete my solution to <a href="https://math.stackexchange.com/questions/41774/is-an-integer-uniquely-determined-by-its-multiplicative-order-mod-every-prime/42022#42022">"Is an integer uniquely determined by its multiplicative order mod every prime?"</a></p>
| Gerry Myerson | 8,269 | <p>I take it $a$ is to be a (rational) integer, otherwise you could take any old $\beta$ in ${\bf Q}(\zeta_k)$ and let $a=\beta^k$ and then $t^k-a$ would have a root in ${\bf Q}(\zeta_k)$ but, in general, not in $\bf Q$. </p>
<p>Now $t^k-a$ is irreducible over the rationals unless it is of the form $t^{pr}-b^p$ for some prime $p$ or else of the form $t^{4r}+4b^4$. And if it's irreducible, then its roots have degree $k$ over the rationals and thus can't be in ${\bf Q}(\zeta_k)$ which has degree at most $k-1$. So the hypothesis that $t^k-a$ has a root in ${\bf Q}(\zeta_k)$ already only holds true in a very few special cases. </p>
|
2,969,203 | <p>Let <span class="math-container">$f$</span> be a <span class="math-container">$C''$</span> function on <span class="math-container">$(a, b)$</span> and suppose there is a point <span class="math-container">$c$</span> in (a, b) with <span class="math-container">$$f(c)= f'(c)=f''(c) = 0$$</span> Show that there is a continuous function <span class="math-container">$h$</span> on <span class="math-container">$(a, b)$</span> with <span class="math-container">$$f(x) =(x-c)^2h(x)$$</span> for all <span class="math-container">$x$</span> in <span class="math-container">$(a, b)$</span>.</p>
| seamp | 606,999 | <p>Define <span class="math-container">$h(x) = \frac{f(x)}{(x-c)^2}$</span> for all <span class="math-container">$x \in (a,b)$</span> different from <span class="math-container">$c$</span>. Then try to show that <span class="math-container">$h$</span> can be extended by continuity at <span class="math-container">$x = c$</span> using the hypothesis. </p>
|
<p>In books like Calculus (Larson), in the statements of theorems like Rolle's theorem, when they talk about continuity, they use closed intervals [a,b]. But when they talk about differentiability they use open intervals (a,b). </p>
<p>Why are closed intervals used for continuity and open intervals for differentiability?</p>
<p>Why can't you say "differentiable on the closed interval [a,b]"?</p>
<p><a href="https://i.stack.imgur.com/FX2a4.jpg" rel="noreferrer">Rolle's Theorem definition</a></p>
| H. H. Rugh | 355,946 | <p>In the particular case of Rolle's theorem you need continuity on $[a,b]$, but you only need differentiability in $(a,b)$. This being said, in ${\Bbb R}$ there is no problem in defining differentiability on $[a,b]$ (differentiability from the right/left). </p>
<p>In higher dimensions this gets more complicated. It is 'easier' to define differentiability on an open domain. Whereas continuity may be defined without problems on the boundary of a domain. For example, think of the continuous image of a compact set being compact is an extremely useful property and really needs the definition on the compact set.</p>
|
878,939 | <p>I have found the eigenvalues, and I also know that you can find the eigenvectors through Gauss-Jordan elimination.
For the vector $x_1$, Gauss-Jordan gives me the rows $(1, -1/3)$ and $(0, 0)$, so $[a, b] = [1,3]$.
For the vector $x_2$, Gauss-Jordan gives $(1, -2/5)$ and $(0, 0)$, so I would assume $[a,b] = [2,5]$, but why did they choose to go with $[-2,-5]$? I don't get it.</p>
<p>A bigger picture is on this webpage if needed;</p>
<p><a href="http://oi59.tinypic.com/2v7unw1.jpg" rel="nofollow noreferrer">http://oi59.tinypic.com/2v7unw1.jpg</a></p>
<p><img src="https://i.stack.imgur.com/5h7Ix.jpg" alt="enter image description here"></p>
| Community | -1 | <p>Because $n = 5q + r$ is actually a digression from what you're trying to prove.</p>
<p>The thing is, with $n = 4q + r$, you can theoretically set $q$ and $r$ to any integer values you want. But, if $|r| \geq 4$, you can choose a different value of $q$ so that $0 \leq r < 4$, and that way you reduce the infinity of $\mathbb{Z}$ to just four cases: $r = 0$, $r = 1$, $r = 2$, $r = 3$. If $r = 0$, then $n = 4q$ and it's an even number. If $r = 1$, then $n$ is odd. If $r = 2$, then $n = 4q + 2$ and it's even. And if $r = 3$, then $n$ is odd. You don't have to worry about $r = 4$ because then you can increment $q$ to $q + 1$, reset $r$ to $0$ and then you're back to $n = 4q + 0$. If $r = 5$, you can likewise change things so you're dealing with $r = 1$ instead, etc., etc.</p>
<p>But with $n = 5q + r$, you'd have way more than just four cases to worry about. I think you'd have like twenty cases to worry about, which you'd wind up converting to $n = 4q + r$, e.g., if $n = 5q + r$ and $q$ is a multiple of $4$ and $r$ is even, this can be rewritten as $n = 20k + r$ where $k = \frac{q}{4}$. So why not start with $n = 4q + r$ right from the start?</p>
<p>By the way, you could also do $n = 8q + r$. You'd have just eight cases to consider, but still, you'd be making things more complicated than they need to be. Your first instinct to use $n = 4q + r$ was correct.</p>
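<p>A quick exhaustive check (a Python sketch of my own) that the four residues cover every integer and that the parity of $n = 4q + r$ is decided by $r$ alone:</p>

```python
# Sketch: the four residues r = 0, 1, 2, 3 of n = 4q + r cover every
# integer, and the parity of n is decided by r alone, since 4q is even.
def parity_case(n):
    q, r = divmod(n, 4)
    assert n == 4 * q + r and r in (0, 1, 2, 3)
    return "even" if r in (0, 2) else "odd"

for n in range(-100, 101):
    assert parity_case(n) == ("even" if n % 2 == 0 else "odd")
```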
|
22,207 | <p>How to make a defined symbol stay in symbol form?</p>
<pre><code>w = 3; g = 4;
{w, g}[[2]]
</code></pre>
<blockquote>
<p><code>4</code></p>
</blockquote>
<p>I want the output to be <strong><code>g</code></strong> and not <code>3</code>. For example, if I want to save different definitions by <code>DumpSave</code> in different files like below:</p>
<p><code>Table[DumpSave["/Users/simonlausen/Desktop/Input/ex"<>ToString[i]<>".mx",
{w,g}[[i]]],{i,1,2}]</code></p>
<p>Any suggestions?</p>
| Jens | 245 | <p>You have to sacrifice <em>something</em>, but it depends on your preferences <em>what</em> you want to keep and what you want to give up. Let's assume you don't want to sacrifice being able to use assignments such as <code>w=3</code>, then you may have to give up using <code>{...}</code> as a wrapper grouping the <em>names</em> of the variables together. You could then define a new wrapper <code>myList</code> to be used instead of <code>{...}</code>:</p>
<pre><code>w = 3; g = 4;
myList[x__] := {ReleaseHold[HoldForm /@ Hold[x]]}
SetAttributes[myList, HoldAll]
myList[w, g][[2]]
</code></pre>
<blockquote>
<p><code>g</code></p>
</blockquote>
<p>This is done by wrapping all elements of <code>myList</code> in <code>HoldForm</code>. You can retrieve the values of these held expressions by applying <code>ReleaseHold</code> to them.</p>
<p><strong>Edit</strong></p>
<p>To address the question about <code>DumpSave</code>, we have to modify the strategy a little:</p>
<pre><code>myList[x__] := {ReleaseHold[Unevaluated /@ Hold[x]]}
Table[
DumpSave["ex" <> ToString[i] <> ".mx", #] &[myList[w, g][[i]]], {i,
1, 2}]
(* ==> {{3}, {4}} *)
</code></pre>
<p>Here, I replaced <code>HoldForm</code> by <code>Unevaluated</code> in <code>myList</code>, so that the symbols <code>g</code> and <code>w</code> are now wrapped by it and will also print with that wrapper. The list of <code>DumpSave</code> commands in the <code>Table</code> then can take these unevaluated arguments as input. However, since <code>DumpSave</code> itself has attribute <code>HoldRest</code>, it wouldn't evaluate these wrappers either. So instead I feed them into <code>DumpSave</code> using the anonymous function <code>... #]& [..]</code> which doesn't obey the <code>HoldRest</code> attribute. </p>
|
2,951,825 | <p>I want to show formally that </p>
<p><span class="math-container">$$M =\{(t, \vert t \vert) \text{ }\vert t \in \mathbb{R} \} $$</span> </p>
<p>is not a smooth <span class="math-container">$C^{\infty}$</span>-submanifold of <span class="math-container">$\mathbb{R}^2$</span>. </p>
<p>My attempts: Intuitively it's clear that the problem is the origin point <span class="math-container">$(0,0)$</span>. Indeed, <span class="math-container">$M$</span> is the graph of the absolute value function which is not differentiable in the origin. </p>
<p>But I have some problems to show that <span class="math-container">$M$</span> isn't smooth manifold in a rigorous formal way. </p>
<p>In the lecture we are working with following definition: <span class="math-container">$M$</span> is a <span class="math-container">$n$</span>-dimensional smooth (so <span class="math-container">$C^{\infty}$</span>) submanifold of <span class="math-container">$\mathbb{R}^{n+k}$</span> iff for every <span class="math-container">$p \in M$</span> there exist an open subset <span class="math-container">$U \subset \mathbb{R}^{n+k}$</span> with <span class="math-container">$p \in U$</span>, an open <span class="math-container">$V \subset \mathbb{R}^n$</span> and a smooth function <span class="math-container">$\gamma \in C^{\infty}(V,U)$</span> with following properties</p>
<p><span class="math-container">$\gamma(V) = U \cap M$</span></p>
<p><span class="math-container">$rank(D\gamma \vert _v) = n$</span> at every <span class="math-container">$v \in V$</span> where <span class="math-container">$D\gamma \vert _v$</span> is the differential of <span class="math-container">$\gamma$</span> at <span class="math-container">$v$</span></p>
<p><span class="math-container">$\gamma$</span> is a homeomorphism from <span class="math-container">$V$</span> to <span class="math-container">$M \cap U$</span></p>
<p>I know that there are some other equivalent definitions of smooth manifolds but I want to know how to get a contradiction using this criterion.</p>
<p>The problem is to show that no such function with the properties above exists: if I take some <span class="math-container">$\gamma$</span> which maps onto <span class="math-container">$(0,0)$</span>, how do I show that there exists some <span class="math-container">$v_0 \in V$</span> with <span class="math-container">$rank(D\gamma \vert _{v_0}) = 0$</span>?</p>
<p>Another idea would be to get a contradiction by showing that <span class="math-container">$\gamma'$</span> can't be continuous along some curve, right? But here, too, I don't see how to construct the contradiction formally using the submanifold criterion above. </p>
| Nate Eldredge | 822 | <p>Reduce it to calculus: <span class="math-container">$\gamma$</span> is a smooth map from an open subset of <span class="math-container">$\mathbb{R}^1$</span> into <span class="math-container">$\mathbb{R}^2$</span>; i.e. it's a smooth curve. Write <span class="math-container">$\gamma(s) = (x(s), y(s))$</span> and suppose <span class="math-container">$\gamma(s_0) = (0,0)$</span>. Note that since <span class="math-container">$(x(s), y(s))$</span> is a point of <span class="math-container">$M$</span>, we have <span class="math-container">$y(s) = |x(s)|$</span>.</p>
<p>If <span class="math-container">$y'(s_0) \ne 0$</span> then by definition of the derivative, there would exist <span class="math-container">$s$</span> near <span class="math-container">$s_0$</span> with <span class="math-container">$y(s) < y(s_0) = 0$</span>, which is impossible. So <span class="math-container">$y'(s_0) = 0$</span>. Now note that <span class="math-container">$$|x'(s_0)| = \lim_{s \to s_0} \frac{|x(s)-x(s_0)|}{|s-s_0|} = \lim_{s \to s_0} \frac{|x(s)|}{|s-s_0|} = \lim_{s \to s_0} \frac{|y(s)|}{|s-s_0|} = |y'(s_0)| = 0.$$</span></p>
<p>Since <span class="math-container">$x'(s_0) = y'(s_0) = 0$</span>, we have <span class="math-container">$D\gamma_{s_0} = 0$</span> whose rank is 0, not 1.</p>
|
372,198 | <blockquote>
<p>If $G$ is a group, $H$ and $K$ both subgroups of $G$, $K \subseteq H$, $\left[G:H\right]$ and $\left[H:K\right]$ both finite then $\left[G:K\right]=\left[G:H\right]\cdot\left[ H:K \right].$</p>
</blockquote>
<p>I am not sure if this is standard notation but $\left[ G : K \right]$ denotes the number of right or left cosets of $K$ in $G$.</p>
<p>I haven't tried to do the case where if $G$ is finite but I imagine the result would immediately follow by using Lagrange's Theorem. I am trying to think about the case where $G$ is infinite.</p>
<p>I at least made an example to show myself that the index $\left[ G: K \right]$ could be a finite number but $K$ could have size of infinity. Any leads?</p>
<p>Thanks very much</p>
| Elchanan Solomon | 647 | <p>Suppose that $f(x)$ has a root $y$ in $F$. Then $y^{p} =a$. If $p$ is odd,</p>
<p>$$f(x) = x^p - a = x^p - y^p = x^{p} + (-y)^{p} = (x-y)^p$$</p>
<p>So $f(x)$ splits. If $p$ is even, then it is $2$, and if you have one root of a quadratic, you have the other.</p>
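<p>The step $ x^{p} + (-y)^{p} = (x-y)^{p} $ is the "freshman's dream", valid when the field has characteristic $p$ (as is presumably the setting of the question): every interior binomial coefficient of $(x-y)^p$ is divisible by $p$. A quick check of that divisibility (Python sketch):</p>

```python
from math import comb

# Sketch: (x - y)^p = x^p - y^p in characteristic p, because every
# interior binomial coefficient C(p, k), 0 < k < p, is divisible by p.
def interior_coeffs_vanish_mod(p):
    return all(comb(p, k) % p == 0 for k in range(1, p))

for p in (2, 3, 5, 7, 11, 13):
    assert interior_coeffs_vanish_mod(p)
```

<p>Note that this fails for composite moduli, e.g. $\binom{4}{2}=6$ is not divisible by $4$, which is why primality matters here.</p>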
|
46,837 | <p>I am looking for jokes which involve some serious mathematics. Sometimes, a totally absurd argument is surprisingly convincing and this makes you laugh. I am looking for jokes which make you laugh and think at the same time. </p>
<p>I know that a similar <a href="https://mathoverflow.net/questions/1083/do-good-math-jokes-exist-closed">question</a> was closed almost a year ago, but this went too much in the direction "<span class="math-container">$e^x$</span> was walking down the street ...". There is also the community wiki <a href="https://mathoverflow.net/questions/38856/jokes-in-the-sense-of-littlewood-examples">Jokes in the sense of Littlewood</a>, but that is more about notational curiosities. In order to motivate you, let me give an example:</p>
<blockquote>
<p>The real numbers are countable. Indeed, let <span class="math-container">$r_1,r_2,r_3,\dots$</span> be a list of real numbers and suppose that there is a real number missing. Just add it to the list.</p>
</blockquote>
<p>If moderators or audience decide to close this question as off-topic or duplicate, I can fully understand. I just thought it could be interesting and entertaining to have this question open for at least some time.</p>
<hr>
<p><strong>Added by joro Sat Apr 27 08:59:45 UTC 2019</strong> There is <a href="https://chat.stackexchange.com/rooms/92902/jokes">chat room</a> about general jokes and it appears close resistant.</p>
| Mikael Vejdemo-Johansson | 102 | <p>The first time I ran into the <em>carry</em> operation from grade school addition presented as a non-trivial group cocycle generating part of the group cohomology of <span class="math-container">$\mathbb Z/10$</span>, it was introduced as a joke embedded completely within mathematics.</p>
<p>Specifically, for those who haven't seen this yet, the carry operation <span class="math-container">$c(n,m)$</span> is defined as <span class="math-container">$c(n,m) = 0$</span> if <span class="math-container">$n+m < 10$</span> and <span class="math-container">$c(n,m) = 1$</span> for <span class="math-container">$n+m ≥ 10$</span>. You can verify the cocycle condition reasonably easily, and then it remains to check there is no endomap <span class="math-container">$g:\mathbb Z/10\to\mathbb Z/10$</span> with <span class="math-container">$c$</span> as its coboundary.</p>
<p>More information here: <a href="https://chromotopy.org/latex/talks/pme-talk.pdf" rel="nofollow noreferrer">https://chromotopy.org/latex/talks/pme-talk.pdf</a></p>
|
46,837 | <p>I am looking for jokes which involve some serious mathematics. Sometimes, a totally absurd argument is surprisingly convincing and this makes you laugh. I am looking for jokes which make you laugh and think at the same time. </p>
<p>I know that a similar <a href="https://mathoverflow.net/questions/1083/do-good-math-jokes-exist-closed">question</a> was closed almost a year ago, but this went too much in the direction "<span class="math-container">$e^x$</span> was walking down the street ...". There is also the community wiki <a href="https://mathoverflow.net/questions/38856/jokes-in-the-sense-of-littlewood-examples">Jokes in the sense of Littlewood</a>, but that is more about notational curiosities. In order to motivate you, let me give an example:</p>
<blockquote>
<p>The real numbers are countable. Indeed, let <span class="math-container">$r_1,r_2,r_3,\dots$</span> be a list of real numbers and suppose that there is a real number missing. Just add it to the list.</p>
</blockquote>
<p>If moderators or audience decide to close this question as off-topic or duplicate, I can fully understand. I just thought it could be interesting and entertaining to have this question open for at least some time.</p>
<hr>
<p><strong>Added by joro Sat Apr 27 08:59:45 UTC 2019</strong> There is <a href="https://chat.stackexchange.com/rooms/92902/jokes">chat room</a> about general jokes and it appears close resistant.</p>
| Quadrescence | 556 | <p><em>This is from <a href="http://symbo1ics.com/blog/?p=389%20%22my%20blog%22">my blog</a>, which I interestingly just posted today (at the time of this posting).</em></p>
<p>Several mathematicians are asked, "how do you put an elephant in a refrigerator?"</p>
<p><strong>Real Analyst</strong>: Let $\epsilon\gt0$. Then for all such $\epsilon$, there exists a $\delta\gt0$ such that $$\left|\frac{\mathit{elephant}}{2^n}\right|\lt\epsilon$$ for all $n\gt\delta$. Therefore $$\lim_{n\to\infty} \frac{\mathit{elephant}}{2^n}=0.$$ Since $1/2^n \lt 1/n^2$ for $n\ge 5$, by comparison, we know that $$\sum_{n\ge 1}\frac{\mathit{elephant}}{2^n}$$ converges — in fact, identically to $\mathit{elephant}$. As such, cut the elephant in half, put it in the fridge, and repeat.</p>
<p><strong>Differential Geometer</strong>: Differentiate it and put into the refrigerator. Then integrate it in the refrigerator.</p>
<p><strong>Set Theoretic Geometer</strong>: Apply the Banach–Tarski theorem to form a refrigerator with more volume.</p>
<p><strong>Measure Theorist</strong>: Let $E$ be the subset of $\mathbb{R}^3$ assumed by the elephant and $\Phi\in\mathbb{R}^3$ be that by the fridge. First, construct a partition $e_1,\ldots,e_i$ on $E$ for $1\le i \le N$. Since $\mu(E)=\mu(\Phi)$, and $$\mu(E)=\mu\left(\bigcup_{1\le i \le N}e_i\right)=\sum_{1\le i \le N}\mu(e_i),$$ we can just embed each partition of $E$ in $\Phi$ with no problem.</p>
<p><strong>Number Theorist</strong>: You can always squeeze a bit more in. So if, for $i\ge 0$. you can fit $x_i$ in, then you can fit $x_i + x_{i-1}$ in. You can fit in a bit of the elephant $x_n$ for fixed $n$, so just use induction on $i$.</p>
<p><strong>Algebraist</strong>: Show that parts of it can be put into the refrigerator. Then show that the refrigerator is closed under addition.</p>
<p><strong>Topologist</strong>: The elephant is compact, so it can be put into a finite collection of refrigerators. That’s usually good enough.</p>
<p><strong>Linear Algebraist</strong>: Let $F$ mean "put inside fridge". Since $F$ is linear — $F(x+y)=F(x)+F(y)$ — just put 10% of the elephant in, showing that $F\left(\frac{1}{10}\mathit{elephant}\right)$ exists. Then, by linearity, $F(\mathit{elephant})$ does too.</p>
<p><strong>Affine Geometer</strong>: There exists an affine transformation $F:\mathbb{R}^3\to\mathbb{R}^3:\vec{p}\mapsto A\vec{p}+\vec{q}$ that will allow the elephant to be put into the refrigerator. Just make sure $\det A\neq 0$ so you can take the elephant back out, and $\det A \gt 0$ so you don't end up with a bloody mess.</p>
<p><strong>Geometer</strong>: Create an axiomatic system in which "an elephant can be placed in a refrigerator" is an axiom.</p>
<p><strong>Complex Analyst</strong>: Put the refrigerator at the origin and the elephant outside the unit circle. Then get the image under inversion.</p>
<p><strong>Fourier Analyst</strong>: Will $\mathcal{F}^{-1}[\mathcal{F}(\mathit{elephant})\cdot\mathcal{F}(\mathit{fridge})]$ do?</p>
<p><strong>Numerical Analyst</strong>: Eh, $\mathit{elephant}=\mathit{trunk}+\varepsilon$, and $$\mathrm{fridge}(\mathit{elephant})=\mathrm{fridge}(\mathit{trunk}+\varepsilon)=\mathrm{fridge}(\mathit{trunk})+O(\varepsilon),$$ so just put the trunk in for a good approximation.</p>
<p><strong>Probabilist</strong>: Keep trying to push it in in random ways and eventually it will fit.</p>
<p><strong>Combinatorist</strong>: Discretize the elephant, partition it, and find a suitable rearrangement.</p>
<p><strong>Statistician</strong>: Put its tail in the refrigerator as a sample, and say, "done!"</p>
<p><strong>Logician</strong>: I know it's possible, I just can't do it.</p>
<p><strong>Category Theorist</strong>: Isn't this just a special case of Yoneda's lemma?</p>
<p><strong>Theoretical Computer Scientist</strong>: I can't decide.</p>
<p><strong>Experimental Mathematician</strong>: I think it'd be much more interesting to get the refrigerator inside the elephant.</p>
<p><strong>Set Theorist</strong>: Force it.</p>
|
3,552,555 | <p>Let <span class="math-container">$S$</span> be the set of all column matrices
<span class="math-container">$
\begin{bmatrix}
b_1 \\
b_2 \\
b_3
\end{bmatrix}
$</span>
such that <span class="math-container">$b_1,b_2,b_3 \in \mathbb{R}$</span> and the system of equations (in real variables)
<span class="math-container">$$\begin{align*}
-x+2y+5z &=b_1 \nonumber\\
2x-4y+3z &=b_2 \nonumber\\
x-2y+2z &=b_3
\end{align*}$$</span>
has at least one solution.Then, which of the following system(s)(in real variables) has (have) at least one solution for each
<span class="math-container">$$
\begin{bmatrix}
b_1 \\
b_2 \\
b_3
\end{bmatrix} \in S?
$$</span></p>
<p><strong>A.</strong> <span class="math-container">$x+2y+3z=b_1$</span>, <span class="math-container">$4y+5z=b_2$</span> and <span class="math-container">$x+2y+6z=b_3$</span></p>
<p><strong>B.</strong> <span class="math-container">$x+y+3z=b_1$</span>, <span class="math-container">$5x+2y+6z=b_2$</span> and <span class="math-container">$-2x-y-3z=b_3$</span></p>
<p><strong>C.</strong> <span class="math-container">$-x+2y-5z=b_1$</span>, <span class="math-container">$2x-4y+10z=b_2$</span> and <span class="math-container">$x-2y+5z=b_3$</span></p>
<p><strong>D.</strong> <span class="math-container">$x+2y+5z=b_1$</span>, <span class="math-container">$2x+3z=b_2$</span> and <span class="math-container">$x+4y-5z=b_3$</span></p>
<p>Can anyone please help me with this problem? I am really clueless how to proceed.</p>
| Peter Rasmussen | 750,159 | <p>You start by making it a single fraction:</p>
<p><span class="math-container">$$\frac pq = \frac{a}{b+\sqrt c} + \frac d{\sqrt c} = \frac{(d+a)\sqrt c + bd}{b\sqrt c + c}$$</span></p>
<p>with <span class="math-container">$p, q\in \mathbb Z$</span> and <span class="math-container">$q\neq 0$</span>. Then rewrite as
<span class="math-container">$ p(b\sqrt c + c) = q((d+a)\sqrt c + bd)$</span> and finally isolate <span class="math-container">$\sqrt c$</span> to get
<span class="math-container">$$ (pb-q(d+a))\sqrt c = qbd-pc.$$</span> </p>
<p>Since <span class="math-container">$\sqrt c$</span> is irrational and <span class="math-container">$(pb-q(d+a))$</span> is rational, the left-hand side can be rational only if <span class="math-container">$pb-q(d+a)=0$</span>, which forces <span class="math-container">$qbd-pc=0$</span> as well. Rearranging the first equation yields <span class="math-container">$p=q(d+a)/b$</span>, and inserting this into the second equation, we get <span class="math-container">$q(bd-(d+a)c/b)=0$</span>. Since <span class="math-container">$q\neq 0$</span>, we have <span class="math-container">$bd-(d+a)c/b=0$</span>, which yields the conclusion <span class="math-container">$b^2d=(d+a)c$</span>.</p>
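<p>A numerical sanity check of the conclusion, with example values I chose (not from the question): fixing $c=2$ and solving $b^2d=(d+a)c$ for $a$ makes the original expression collapse to the rational number $bd/c$:</p>

```python
from fractions import Fraction
from math import sqrt, isclose

# Sketch with values I chose: fix c = 2, pick rational b, d, and solve
# b^2 d = (d + a) c for a. With (a + d) = b^2 d / c, the numerator
# (a+d)*sqrt(c) + b*d factors as (b*d/c)(b*sqrt(c) + c), so the whole
# expression equals the rational number b*d/c.
def combined(a, b, d, c):
    return a / (b + sqrt(c)) + d / sqrt(c)

c = 2
for b, d in [(1, 1), (3, 2), (5, 7)]:
    a = Fraction(b * b * d, c) - d          # forces b^2 d = (d + a) c
    assert isclose(combined(float(a), b, d, c), b * d / c)
```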
|
4,058,884 | <p>I have an orthonormal basis <span class="math-container">${\bf{b}}_1$</span> and <span class="math-container">${\bf{b}}_2$</span> in <span class="math-container">$\mathbb{R}^2$</span>. I want to find out the angle of rotation. I added a little picture here. I essentially want to find <span class="math-container">$\theta$</span></p>
<p><a href="https://i.stack.imgur.com/YH2Vt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YH2Vt.png" alt="enter image description here" /></a></p>
<p>I know that I can compute the angle between two vectors but then there are <span class="math-container">$4$</span> combinations here</p>
<ul>
<li><span class="math-container">$\theta_{11} = \arccos({\bf{b}}_1^\top{\bf{e}}_1)$</span></li>
<li><span class="math-container">$\theta_{12} = \arccos({\bf{b}}_1^\top{\bf{e}}_2)$</span></li>
<li><span class="math-container">$\theta_{21} = \arccos({\bf{b}}_2^\top{\bf{e}}_1)$</span></li>
<li><span class="math-container">$\theta_{22} = \arccos({\bf{b}}_2^\top{\bf{e}}_2)$</span></li>
</ul>
<p>How would one know which angle is correct? Importantly, here I used the labels <span class="math-container">$b_1$</span>, <span class="math-container">$b_2$</span> in the same order as <span class="math-container">$e_1$</span> and <span class="math-container">$e_2$</span> but that's not necessarily the same order geometrically!</p>
| Somos | 438,089 | <p>You may want to consider using the complex plane in this situation. Each
point in the <span class="math-container">$\,x\,y\,$</span> plane is associated with a complex number. Thus,
<span class="math-container">$\,\mathbf{e}_1 \equiv 1\,$</span> and <span class="math-container">$\,\mathbf{e}_2 \equiv i.\,$</span> Similarly, for any two orthogonal <strong>unit</strong> basis vectors <span class="math-container">$\,\mathbf{b}_1 \equiv z_1\,$</span> and
<span class="math-container">$\,\mathbf{b}_2 \equiv z_2\,$</span> where <span class="math-container">$\,z_2 = z_1 i.\,$</span> More precisely, if
<span class="math-container">$\,\mathbf{b}_1 := (x_1,y_1),\,$</span> then <span class="math-container">$\,\mathbf{b}_2 := (x_2,y_2) = (-y_1,x_1).$</span>
You may need to reverse the roles of <span class="math-container">$\,\mathbf{b}_1\,$</span> and <span class="math-container">$\,\mathbf{b}_2\,$</span> to
ensure this.</p>
<p>Because <span class="math-container">$\,\mathbf{b}_1\,$</span> is a unit vector, we get <span class="math-container">$\,z_1 = e^{i\theta}\,$</span>
for some angle <span class="math-container">$\,\theta.$</span> Compute this angle using
<span class="math-container">$\,\theta = \text{atan2}(y,x)\,$</span> where
<a href="https://en.wikipedia.org/wiki/Atan2" rel="nofollow noreferrer">atan2</a> is the "2-argument arctangent". If <span class="math-container">$\,\text{atan2}\,$</span> is not available, use the
identity
<span class="math-container">$$\text{atan2}(y,x) =
2\arctan\left(\frac{y}{x+\sqrt{x^2+y^2}}\right).$$</span></p>
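<p>As a quick check of that half-angle identity (my own snippet, not part of the original answer), one can compare it against the library <code>atan2</code> away from the non-positive real axis, where the formula's denominator vanishes:</p>

```python
from math import atan2, atan, sqrt, isclose

def atan2_via_atan(y, x):
    # Half-angle form of atan2; breaks down only for x <= 0 with y == 0.
    return 2 * atan(y / (x + sqrt(x * x + y * y)))

for x, y in [(1, 0), (1, 1), (0, 1), (-1, 1), (0.3, -0.7)]:
    assert isclose(atan2_via_atan(y, x), atan2(y, x))
```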
|
2,646,363 | <p>Let $A_1, A_2, \ldots , A_{63}$ be the 63 nonempty subsets of $\{ 1,2,3,4,5,6 \}$. For each of these sets $A_i$, let $\pi(A_i)$ denote the product of all the elements in $A_i$. Then what is the value of $\pi(A_1)+\pi(A_2)+\cdots+\pi(A_{63})$?</p>
<p>Here is the solution </p>
<p>For size 1: sum of the elements, which is 21
For size 2: $ 1 \cdot (2 + 3 + 4 + 5 + 6) = 20 $, $ 2 \cdot (3 + 4 + 5 + 6) = 36 $, $ 3 \cdot (4 + 5 + 6) = 45 $, $ 4 \cdot (5 + 6) = 44 $, $ 5 \cdot 6 = 30 $. Sum is 175.
For size 3: Those with least element 1: $ 6, 8, 10, 12, 12, 15, 18, 20, 24, 30 = 155 $. Those with least element 2: $ 24, 30, 36, 40, 48, 60 = 238 $. Those with least element 3: $ 60 + 72 + 90 = 222 $. Those with least element 4: only one possible subset, which is $ \{4, 5, 6\} $, the $ \pi $ of which is 120. The total sum here is 735.
For size 4: Least element 1: $ 24 + 30 + 36 + 40 + 48 + 60 + 60 + 72 + 90 + 120 = 580 $; least element 2: $ 120 + 144 + 180 + 240 + 360 = 1044 $; least element 3: only one, which is $ 3 \cdot 4 \cdot 5 \cdot 6 = 360 $. The total sum here is 1984.
For size 5: Exclude each one individually to get $ 720 + 360 + 240 + 180 + 144 + 120 = 1764 $
For size 6: $ 6! = 720 $</p>
<p>The final answer is $ 21 + 175 + 735 + 1984 + 1764 + 720 = \boxed{5399} $</p>
<p>Is there any shorter way for doing this ?</p>
<p>Thank a lot </p>
| quasi | 400,434 | <p>It's just one less than the product
$$(1+1)(1+2)(1+3)(1+4)(1+5)(1+6)$$
Equivalently, it's $f(1)-1$ where
$$f(x) = (x+1)(x+2)(x+3)(x+4)(x+5)(x+6)$$
By Vieta's formulas, the coefficients of all powers of $x$ other than $x^6$ in the expanded form of $f(x)$ are the sums of products that you want.</p>
<p>More precisely, for $1 \le k \le 6$, the coefficient of $x^{6-k}$ in the expanded form of $f(x)$ is the sum of all products of $k$ elements of $\{1,2,3,4,5,6\}$.</p>
<p>But summing those coefficients is the same as substituting $x=1$ into $f(x)$, except that you need to subtract $1$ to correct for the extra summand from the term $x^6$.</p>
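<p>A brute-force cross-check of this product trick (my addition, not part of the original answer):</p>

```python
from itertools import combinations
from math import prod

elems = [1, 2, 3, 4, 5, 6]

# Direct sum of pi(A) over all 63 nonempty subsets A:
brute = sum(prod(c) for k in range(1, 7) for c in combinations(elems, k))

# Vieta-style shortcut: expand (1+1)(1+2)...(1+6) and drop the empty-product term 1:
shortcut = prod(1 + e for e in elems) - 1

assert brute == shortcut == 5039
```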
|
2,646,363 | <p>Let $A_1, A_2, \ldots , A_{63}$ be the 63 nonempty subsets of $\{ 1,2,3,4,5,6 \}$. For each of these sets $A_i$, let $\pi(A_i)$ denote the product of all the elements in $A_i$. Then what is the value of $\pi(A_1)+\pi(A_2)+\cdots+\pi(A_{63})$?</p>
<p>Here is the solution </p>
<p>For size 1: sum of the elements, which is 21
For size 2: $ 1 \cdot (2 + 3 + 4 + 5 + 6) = 20 $, $ 2 \cdot (3 + 4 + 5 + 6) = 36 $, $ 3 \cdot (4 + 5 + 6) = 45 $, $ 4 \cdot (5 + 6) = 44 $, $ 5 \cdot 6 = 30 $. Sum is 175.
For size 3: Those with least element 1: $ 6, 8, 10, 12, 12, 15, 18, 20, 24, 30 = 155 $. Those with least element 2: $ 24, 30, 36, 40, 48, 60 = 238 $. Those with least element 3: $ 60 + 72 + 90 = 222 $. Those with least element 4: only one possible subset, which is $ \{4, 5, 6\} $, the $ \pi $ of which is 120. The total sum here is 735.
For size 4: Least element 1: $ 24 + 30 + 36 + 40 + 48 + 60 + 60 + 72 + 90 + 120 = 580 $; least element 2: $ 120 + 144 + 180 + 240 + 360 = 1044 $; least element 3: only one, which is $ 3 \cdot 4 \cdot 5 \cdot 6 = 360 $. The total sum here is 1984.
For size 5: Exclude each one individually to get $ 720 + 360 + 240 + 180 + 144 + 120 = 1764 $
For size 6: $ 6! = 720 $</p>
<p>The final answer is $ 21 + 175 + 735 + 1984 + 1764 + 720 = \boxed{5399} $</p>
<p>Is there any shorter way for doing this ?</p>
<p>Thank a lot </p>
| Donald Splutterwit | 404,247 | <p>Include the empty set in the sum (we can subtract at the end) </p>
<p>The contribution from each of the subsets will correspond to a term in the following product
\begin{eqnarray*}
(1+1)(1+2)(1+3)(1+4)(1+5)(1+6)
\end{eqnarray*}
So the answer is $5040-1=\color{red}{5039}$, subtracting $1$ for the empty set's contribution.</p>
<p>In your question the values should be $21,175,735,\color{red}{1624},1764,720$.</p>
|
1,282,489 | <p>I have a simple problem that I need to solve. Given a height (in blue), and an angle (eg: 60-degrees), I need to determine the length of the line in red, based on where the green line ends. The green line comes from the top of the blue line and is always 90-degrees.</p>
<p>The height of the blue line is variable.
The angle of the blue line is variable.</p>
<p>Also, I do not know the length of the green dashed-line. Is there a way to figure out the length of the red line without knowing the length of the green?</p>
<p>-Adam</p>
<p>Any help would be much appreciated!</p>
<p><img src="https://i.stack.imgur.com/3ccLl.png" alt="enter image description here"></p>
| Sufyan Naeem | 199,112 | <p>Use the Law of Sines: $$\frac{a}{\sin{A}}=\frac{b}{\sin{B}}=\frac{c}{\sin{C}}$$</p>
<p>Let, </p>
<p>$\angle{A}=60^o$</p>
<p>$\angle{B}=90^o$</p>
<p>$\angle{C}=30^o$</p>
<p>$a=?$</p>
<p>$b=?$</p>
<p>$c=10\ \text{cm}$</p>
<p>From Law of sines we have,</p>
<p>$$\frac{a}{\sin{A}}=\frac{c}{\sin{C}}$$</p>
<p>Put the values and find $a$.</p>
<p>Now, from Law of sines we have,</p>
<p>$$\frac{a}{\sin{A}}=\frac{b}{\sin{B}}$$</p>
<p>Put the known values and find $b$. </p>
<p>Congratulations, you are done!</p>
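<p>For the sample numbers used above (angles $60^\circ, 90^\circ, 30^\circ$ and $c = 10$ cm, which come from the figure rather than from any general rule), the two Law of Sines steps can be carried out like this (my own illustrative snippet):</p>

```python
from math import sin, radians, isclose

A, B, C = 60.0, 90.0, 30.0        # degrees; the three angles sum to 180
c = 10.0                          # cm, the known side opposite C
k = c / sin(radians(C))           # common ratio a/sin A = b/sin B = c/sin C
a = k * sin(radians(A))           # first step: a = 10*sqrt(3) ~ 17.32 cm
b = k * sin(radians(B))           # second step: b = 20 cm
assert isclose(a, 10 * 3 ** 0.5) and isclose(b, 20.0)
```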
|
3,085,842 | <p>What can be said about the uniform Convergence of <span class="math-container">$\sum_{n=1}^{\infty}\frac{x}{[(n-1)x+1][nx+1]}$</span> in the interval <span class="math-container">$[0,1]$</span>?</p>
<p>The sequence inside the summation bracket doesn't seem to yield to root or ratio tests. The pointwise convergence itself seems doubtful. Should we use Cauchy-Criterion directly here? Maybe some comparison would be useful in this case? And what about uniform convergence? Any hints? Thanks beforehand.</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> <span class="math-container">$\displaystyle\frac1{\bigl((n-1)x+1\bigr)(nx+1)}=\frac1{(n-1)x+1}-\frac1{nx+1}$</span>.</p>
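<p>Following the hint, the series telescopes: the $N$-th partial sum is $1-\frac{1}{Nx+1}$, so the pointwise limit is $1$ for $x>0$ but $0$ at $x=0$; a discontinuous limit of continuous partial sums rules out uniform convergence on $[0,1]$. A numeric sanity check of the telescoping (my addition):</p>

```python
def partial_sum(x, N):
    return sum(x / (((n - 1) * x + 1) * (n * x + 1)) for n in range(1, N + 1))

def telescoped(x, N):
    # each term is 1/((n-1)x+1) - 1/(nx+1), so the sum collapses
    return 1 - 1 / (N * x + 1)

for x in (1.0, 0.5, 0.01):
    assert abs(partial_sum(x, 1000) - telescoped(x, 1000)) < 1e-12
assert partial_sum(0.0, 1000) == 0.0   # every term vanishes at x = 0
```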
|
143,324 | <p>I want to know how to simplify the following expression by using the fact that $\sum_{i=0}^\infty \frac{X^i}{i!}=e^X$. The expression to be simplified is as follows:</p>
<p>$$\sum_{i=0}^{\infty} \sum_{j=0}^i \frac{X^{i-j}}{(i-j)!} \cdot \frac{Y^j}{j!}\;,$$ where $X$ and $Y$ are square matrices (not commutative). (That is, $X\cdot Y \neq Y \cdot X$).</p>
| Qiaochu Yuan | 232 | <p>Even if $X$ and $Y$ don't commute, it's still true that this expression is equal to $e^X e^Y$; it's just not true that this is equal to $e^{X+Y}$. </p>
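<p>The double sum is exactly the Cauchy product of the two exponential series, which is why it collapses to $e^Xe^Y$ regardless of commutativity. A small check with $2\times 2$ nilpotent matrices (my own illustration in plain Python; the specific matrices are chosen just so the sums are easy to verify by hand):</p>

```python
from math import factorial, cosh, sinh

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(A, s):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def power(A, n):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = mul(R, A)
    return R

def expm(A, terms=25):
    # e^A as the truncated series sum_i A^i / i!
    S = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(terms):
        S = add(S, scale(power(A, i), 1.0 / factorial(i)))
    return S

X = [[0.0, 1.0], [0.0, 0.0]]   # X^2 = 0
Y = [[0.0, 0.0], [1.0, 0.0]]   # Y^2 = 0, and XY != YX

# The double sum from the question, truncated:
D = [[0.0, 0.0], [0.0, 0.0]]
for i in range(25):
    for j in range(i + 1):
        t = scale(mul(power(X, i - j), power(Y, j)),
                  1.0 / (factorial(i - j) * factorial(j)))
        D = add(D, t)

EXEY = mul(expm(X), expm(Y))   # equals [[2,1],[1,1]] for these X, Y
assert all(abs(D[i][j] - EXEY[i][j]) < 1e-9 for i in range(2) for j in range(2))

# ...but e^{X+Y} is different, since X and Y do not commute:
EXY = expm(add(X, Y))          # cosh(1) on the diagonal, sinh(1) off it
assert abs(EXY[0][0] - cosh(1)) < 1e-9 and abs(EXY[0][1] - sinh(1)) < 1e-9
```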
|
4,247,888 | <p>I'm having a lot of trouble about an apparently simple task. I have the following trigonometric equation:</p>
<p><span class="math-container">$A\cos(\omega_1t+\phi_1)=B\cos(\omega_2t+\phi_2)$</span></p>
<p>which holds for every <span class="math-container">$t \in [0,+\infty)$</span>, where <span class="math-container">$\omega_1,\omega_2,\phi_1,\phi_2 \in \mathbb{R}$</span> are fixed, with <span class="math-container">$\omega_1 \neq 0$</span> and <span class="math-container">$\omega_2 \neq 0$</span>.</p>
<p>With <span class="math-container">$A$</span> and <span class="math-container">$B$</span> positive, I need to show that <span class="math-container">$A=B$</span>, but I'm really stuck. I tried to find a value of <span class="math-container">$t$</span> such that <span class="math-container">$\omega_1t+\phi_1=\pi$</span> and so on, but after a lot of calculation I can't conclude anything. Any help or hint would be really appreciated!</p>
| The_Sympathizer | 11,172 | <p>The easiest way to prove this is <em>not</em> with a direct solution, but with a <em>contrapositive</em> proof. That is, instead of trying to prove that if</p>
<p><span class="math-container">$$\forall t \in [0, \infty)\ [A \cos(\omega_1 t + \phi_1) = B \cos(\omega_2 t + \phi_2)]$$</span></p>
<p>implies</p>
<p><span class="math-container">$$A = B$$</span></p>
<p>try to prove that</p>
<p><span class="math-container">$$A \ne B$$</span></p>
<p>implies</p>
<p><span class="math-container">$$\exists t \in [0, \infty)\ [A \cos(\omega_1 t + \phi_1) \ne B \cos(\omega_2 t + \phi_2)]$$</span></p>
<p>which is the logical negative of the statement you gave before. Note that this is an existential statement, so we only need to find <strong>one</strong> <span class="math-container">$t$</span> that works. And that's easy to do.</p>
<p>Cosine has a range of <span class="math-container">$[-1, 1]$</span>. Hence <span class="math-container">$t \mapsto A \cos(\omega_1 t + \phi_1)$</span> has a range of <span class="math-container">$[-A, A]$</span>. Similarly, the one involving <span class="math-container">$B$</span> has a range of <span class="math-container">$[-B, B]$</span>.</p>
<p>Suppose <span class="math-container">$A > B$</span>. Consider a point <span class="math-container">$t$</span> where <span class="math-container">$A \cos(\omega_1 t + \phi_1) = A$</span>. At this same point, <span class="math-container">$B \cos(\omega_2 t + \phi_2)$</span> <em>cannot equal</em> <span class="math-container">$A$</span>, because this latter expression can only get as large as <span class="math-container">$B$</span>. Hence we have found the value <span class="math-container">$t$</span> for which the two sides are unequal.</p>
<p>The case for <span class="math-container">$A < B$</span> is pretty much the same way. Thus we have proven the contrapositive, and so too, proven the original statement.</p>
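<p>A numeric illustration of this argument (my own addition, with hypothetical parameter values): pick $A>B$, move $t$ to a point where the left side attains its maximum $A$, and note that the right side is capped at $B$.</p>

```python
from math import cos, pi

A, B = 2.0, 1.0                     # hypothetical amplitudes with A > B
w1, p1, w2, p2 = 3.0, 0.7, 5.0, -0.2

t = -p1 / w1                        # makes w1*t + p1 = 0, so the left side equals A
while t < 0:                        # shift by full periods to land in [0, infinity)
    t += 2 * pi / w1

lhs = A * cos(w1 * t + p1)
rhs = B * cos(w2 * t + p2)
assert abs(lhs - A) < 1e-12         # left side is at its maximum A
assert rhs <= B < lhs               # right side can never exceed B < A
```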
|
1,427,816 | <p>This is kind of a philosophical question, I guess. But are the elements of the topological closure inside the linear space $X$ all the time? Or do they become apparent when we introduce the topology? And hence, do we introduce the topology to control these elements of the space, which are there but out of control when we only have algebraic structure? I.e. (I think), is the linear space smaller than the "same" space with the topology? Let's assume we have a reasonable topology.</p>
| Eric Auld | 76,333 | <p>Here's how I would put it. We start out with a large linear topological space $\Omega$. Within $\Omega$ we isolate a linear subspace $X$ with nice properties (perhaps $X$ is the linear span of countably many elements, for example), and it may or may not be the case that each point of $\Omega$ is a topological limit of elements in $X$. </p>
<p>On the other hand, there is a situation where we can start with a metric space and form its "completion", which refers to taking equivalence classes of Cauchy sequences, and the completion then becomes a metric space (hence a topological space) in its own right, with the original space as a subspace. This may be what you're referring to. Then we are really introducing something larger. </p>
<p>Completion is one example of a situation when we start out with a topological space and then form a larger topological space from the original one (another is compactification). But these are not to be confused with the topological closure, which is non-vacuous only in reference to a larger space $\Omega$ which we already know contains $X$.</p>
<p>One thing you may be confused about is that (as far as I can tell) it does not make sense to say that taking the closure "adds a topology". The closure makes sense only if there is already a topology present--only if $X$ is already a topological subspace of a larger topological space. (The larger space could be $X$ itself, but this makes the discussion vacuous.)</p>
<p>You should think of a topology as specifying which elements are close to other elements. In the case of a linear space, there is already some notion of which elements are close to others (those which have close coefficients), and so the topology that we choose will back up this notion of closeness (unless for some bizarre reason we want to proclaim that elements with close coefficients are not the elements that are close to each other).</p>
<p>A good example to keep in mind is when $X$ is the finite linear combinations of trig functions on a compact interval $C$ and $\Omega$ is $L^2(C)$. In that case, the closure of $X$ is in fact all of $\Omega$, although $X \subsetneq \Omega$. In particular, the closure of $X$ includes all continuous functions on $C$.</p>
|
3,225,784 | <p>Solve for x:</p>
<blockquote>
<p><span class="math-container">$$2\sin(x) + 3\sin(2x) = 0 $$</span></p>
<p><span class="math-container">$$2\sin(x)(1 + 3\cos(x)) = 0$$</span></p>
</blockquote>
<p>Stuck here. The solution mentions some arccos function, but I need a detailed explanation on this one.</p>
| Robert Israel | 8,508 | <p>Hint: if the product of two numbers is <span class="math-container">$0$</span>, at least one of them is <span class="math-container">$0$</span>.</p>
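<p>Following the hint: either $\sin x=0$ (so $x=k\pi$) or $1+3\cos x=0$ (so $x=\pm\arccos(-\tfrac13)+2k\pi$ — this is where the arccos in the book's solution comes from). A quick numeric check of representatives of both families (my addition):</p>

```python
from math import sin, acos, pi

# Representatives of both solution families in [0, 2*pi):
roots = [0.0, pi, acos(-1 / 3), 2 * pi - acos(-1 / 3)]
for x in roots:
    assert abs(2 * sin(x) + 3 * sin(2 * x)) < 1e-12
```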
|
573,964 | <blockquote>
<p>Let set $S$ be the set of all functions $f:\mathbb{Z_+} \rightarrow \mathbb{Z_+}$. Define a relation $R$ on $S$ by $(f,g)\in R$ iff there is a constant $M$ such that $\forall n (\frac{1}{M} < \frac{f(n)}{g(n)}<M). $ Prove that $R$ is an equivalence relation and that there are infinitely many equivalence classes.</p>
</blockquote>
<p><strong>Attempt</strong>: Would it work if I define $f$ as $f_k(n)=kn$ and g as $g_k(n)=(M-k)n$ where $M>k$. Then, $$\forall n ((\frac{1}{M} < \frac{f(n)}{g(n)}<M)=(\frac{1}{M} < \frac{k}{M-k} <M))$$ is true as long as $M>k$. </p>
<p>So, $R$ is <strong>reflexive</strong>: $(f,f) \in R$
$$\forall n ((\frac{1}{M} < \frac{f(n)}{f(n)}<M)=(\frac{1}{M} < 1 <M)), \space M>1$$</p>
<p>$R$ is <strong>symmetric</strong>: $(f,g)\in R \Rightarrow (g,f)\in R$
$$\forall n ((\frac{1}{M} < \frac{f(n)}{g(n)}<M)=(\frac{1}{M} < \frac{k}{M-k} <M))$$
$$\forall n ((\frac{1}{M} < \frac{g(n)}{f(n)}<M)=(\frac{1}{M} < \frac{M-k}{k} <M))$$</p>
<p>for $M>k$.</p>
<p>$R$ is <strong>transitive</strong>: $(f,g)\in R \wedge (g,h) \in R \Rightarrow (f,h)\in R$
$$\frac{1}{M} < \frac{f(n)}{g(n)} < M,\ \frac{1}{M} < \frac{g(n)}{h(n)} < M \Rightarrow \frac{f(n)}{h(n)}=\frac{f(n)}{g(n)}\cdot\frac{g(n)}{h(n)} \Rightarrow \forall n \left(\frac{1}{M^2} < \frac{f(n)}{h(n)} < M^2\right) $$</p>
<p>Then, $R$ is an equivalence relation.</p>
| BaronVT | 39,526 | <p>Subspaces have to be closed under more general linear combinations than just $x + y$. That is, you have to have </p>
<p>$$
c_1 x + c_2 y \in S
$$
whenever $x,y \in S$ for any $c_1,c_2 \in \mathbb R$ (since the original vector space is a real-vector space, the scalars for linear combinations are, in general, real numbers)</p>
|
783,502 | <p>Here in my exercise I have to study the function and draw its graph. Can you please tell me the best method to do this? I don't think it's reasonable to use the input-output method; it's quite imprecise.
$$f(x)={|x+1|\over x}$$</p>
<p>Thank you!!!</p>
| Caleb Stanford | 68,107 | <ul>
<li><p>There is a zero at $x = -1$.</p></li>
<li><p>The function does not exist at $0$ and grows infinitely large near $0$.</p></li>
<li><p>The function is positive when $x > 0$ and negative when $x < 0$.</p></li>
<li><p>When the function hits the $x$-axis at $x = -1$ it is a sharp corner rather than a smooth bump, since the absolute value function changes abruptly.</p></li>
<li><p>As $x \to \infty$, the function approaches $1$ from above.</p></li>
<li><p>As $x \to -\infty$, the function approaches $-1$ from above.</p></li>
</ul>
<p>Along with a few plotted points, this should give you enough information to draw a precise graph.</p>
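<p>Those bullet points can be sanity-checked numerically before sketching (my own snippet):</p>

```python
def f(x):
    return abs(x + 1) / x

assert f(-1) == 0                                  # zero at x = -1
assert 1 < f(1e6) < 1 + 1e-5                       # -> 1 from above as x -> +infinity
assert -1 < f(-1e6) < -1 + 1e-5                    # -> -1 from above as x -> -infinity
assert f(1e-9) > 1e8 and f(-1e-9) < -1e8           # blows up near x = 0
```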
|
1,920,994 | <p>My calculus teacher gave us this interesting problem: Calculate</p>
<p>$$ \int_{0}^{1}F(x)\,dx,\ $$ where $$F(x) = \int_{1}^{x}e^{-t^2}\,dt $$</p>
<p>The only thing I can think of is using the Taylor series for $e^{-t^2}$ and go from there, but since we've never talked about uniform convergence and term by term integration, I suppose that there is an easier way to do this.</p>
| mickep | 97,236 | <p>No fancy stuff is needed. I think you could just integrate by parts.</p>
<p>$$\int_0^1 F(x)\,dx=[xF(x)]_0^1-\int_0^1 xF'(x)\,dx$$</p>
<p>The boundary term vanishes, since $F(1)=0$ and the lower limit contributes $0\cdot F(0)=0$; and by the fundamental theorem of calculus, $F'(x)=e^{-x^2}$. Thus</p>
<p>$$\int_0^1 F(x)\,dx=-\int_0^1 xe^{-x^2}\,dx
$$
from where I think you can finish.</p>
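<p>To finish: $\int_0^1 xe^{-x^2}\,dx=\left[-\tfrac12 e^{-x^2}\right]_0^1=\tfrac{1-e^{-1}}{2}$, so the answer is $\tfrac{e^{-1}-1}{2}\approx-0.316$. A numeric double-integration check of this value (my addition, simple midpoint rule):</p>

```python
from math import exp

m = 500
h = 1.0 / m

def F(x):
    # F(x) = integral from 1 to x of e^{-t^2} dt (negative for x < 1), midpoint rule
    step = (x - 1.0) / m
    return step * sum(exp(-(1.0 + (k + 0.5) * step) ** 2) for k in range(m))

outer = h * sum(F((k + 0.5) * h) for k in range(m))   # integral of F over [0, 1]
closed = (exp(-1) - 1) / 2                            # value from integration by parts
assert abs(outer - closed) < 1e-5
```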
|
1,162,147 | <p>The definition of open set is different in metric space and topological space, though metric space is a special case of topological space. The definition in metric space seems to convey the idea that all the points are isolated from the outside, while the definition in topological space is intended to separate different points, so I don't know how to link them.</p>
| Mnifldz | 210,719 | <p>The definition of a <em>topology</em> $\mathcal{T}$ on a set $X$ is given as follows: Let $\mathcal{T}$ be a collection of subsets of $X$ satisfying the following:</p>
<ol>
<li>Both the empty set and $X$ itself belong to $\mathcal{T}$.</li>
<li>The union of any collection of subsets $\{U_\alpha \; | \; \alpha \in I\}$ of $\mathcal{T}$ is in $\mathcal{T}$, namely $\bigcup_{\alpha \in I} U_\alpha \in \mathcal{T}$.</li>
<li>The intersection of any finite collection $\{U_1, \ldots, U_n\}$ of sets in $\mathcal{T}$ is contained in $\mathcal{T}$, or rather $\bigcap_{k=1}^n U_k \in \mathcal{T}$.</li>
</ol>
<p>Once we have a collection $\mathcal{T}$ defined like this for a set $X$, we <em>define the elements of $\mathcal{T}$ to be open in $X$.</em> You may have seen in your analysis class that open sets in a metric space satisfy the above three conditions, but in general topological spaces don't need to have a metric associated with them. The simplest example of how a metric can't be attached to all topological spaces is the indiscrete topology: For any set $X$ define a topology $\mathcal{T} =\{\emptyset, X\}$. The idea that you mentioned about separating points is not true for all topological spaces since the topology $\mathcal{T} = \{\emptyset, X\}$ cannot separate any of the points of $X$.</p>
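<p>On a finite set the three axioms can be checked mechanically, which makes the indiscrete example above concrete (an illustrative sketch of my own; note that for a finite collection, closure under pairwise unions already gives closure under arbitrary unions):</p>

```python
def is_topology(X, T):
    T = set(T)
    if frozenset() not in T or frozenset(X) not in T:
        return False                                              # axiom 1
    return all(A | B in T and A & B in T for A in T for B in T)   # axioms 2-3

X = {1, 2, 3}
indiscrete = {frozenset(), frozenset(X)}
bad = {frozenset(), frozenset(X), frozenset({1}), frozenset({2})}  # missing {1,2}

assert is_topology(X, indiscrete)
assert not is_topology(X, bad)
```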
|
1,415,505 | <p>I was trying to find out how to prove </p>
<p>$$ \sin(A-\arcsin(0.3 \ \sin \ A)) \ \cdot \ \sin(A+\arcsin(0.3 \ \sin \ A)) \ = \ 0.91 \ \sin^2 \ A \ \ . $$
When I put this equation into my calculator both sides appear to be exactly the same, but I have no idea how to prove it.</p>
| haqnatural | 247,767 | <p>$$\sin { \left( A-\arcsin { \left( 0.3\sin { \left( A \right) } \right) } \right) } \cdot \sin { \left( A+\arcsin { \left( 0.3\sin { \left( A \right) } \right) } \right) =0.91\sin ^{ 2 }{ \left( A \right) } }$$</p>
<p><strong>Solution</strong>:
$$\left( \sin A\cos \left( \arcsin \left( 0.3\sin A \right) \right) -\cos A \sin \left( \arcsin \left( 0.3\sin A \right) \right) \right) \cdot \left( \sin A\cos \left( \arcsin \left( 0.3\sin A \right) \right) +\cos A \sin \left( \arcsin \left( 0.3\sin A \right) \right) \right) \\ = \left( \sin A\sqrt { 1-0.09\sin ^{ 2 } A } -0.3\cos A \sin A \right) \left( \sin A\sqrt { 1-0.09\sin ^{ 2 } A } +0.3\cos A \sin A \right) \\ =\sin ^{ 2 } A \left( 1-0.09\sin ^{ 2 } A \right) -0.09\cos ^{ 2 } A \sin ^{ 2 } A \\ =\sin ^{ 2 } A \left( 1-0.09\left( \sin ^{ 2 } A +\cos ^{ 2 } A \right) \right) =\sin ^{ 2 } A \left( 1-0.09 \right) =0.91\sin ^{ 2 } A $$</p>
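<p>The identity also follows in one line from $\sin(A-t)\sin(A+t)=\sin^2 A-\sin^2 t$ with $t=\arcsin(0.3\sin A)$, and it is easy to spot-check numerically (my addition):</p>

```python
from math import sin, asin

def lhs(A):
    t = asin(0.3 * sin(A))
    return sin(A - t) * sin(A + t)

for A in (0.3, 1.0, 2.5, -1.2):
    assert abs(lhs(A) - 0.91 * sin(A) ** 2) < 1e-12
```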
|
122,274 | <p>I have a question; I think it concerns field theory.</p>
<blockquote>
<p>Why is the polynomial $$x^{p^n}-x+1$$ irreducible in ${\mathbb{F}_p}$ only when $n=1$ or $n=p=2$?</p>
</blockquote>
<p>Thanks in advance. It has been bothering me for several days.</p>
| Hoang Nguyen | 934,337 | <p>I have another solution that might be easier to follow.</p>
<p>Let <span class="math-container">$\alpha$</span> be a root of <span class="math-container">$q(x)=x^{p^n}-x+1$</span>. Note that <span class="math-container">$\alpha + a$</span> is also a root of <span class="math-container">$q(x)$</span> for all <span class="math-container">$a \in \mathbb{F}_{p^n}$</span>. Consider the cyclic multiplicative group <span class="math-container">$\mathbb{F}_{p^n}^{\times} = \langle\theta\rangle$</span> for some generator <span class="math-container">$\theta$</span> (so <span class="math-container">$\mathbb{F}_{p^n} = \mathbb{F}_{p}(\theta)$</span>); then <span class="math-container">$\alpha + \theta$</span> and <span class="math-container">$\alpha$</span> are roots of <span class="math-container">$q(x)$</span>, so (extensions of finite fields being normal) they belong to <span class="math-container">$\mathbb{F}_{p}(\alpha)$</span>, which shows that <span class="math-container">$\theta \in \mathbb{F}_{p}(\alpha)$</span>, hence <span class="math-container">$\mathbb{F}_{p^n} \subset \mathbb{F}_{p}(\alpha)$</span>. We have <span class="math-container">$\mathbb{F}_{p} \subset \mathbb{F}_{p^n} \subset \mathbb{F}_{p}(\alpha)$</span>.</p>
<p>If <span class="math-container">$q(x)$</span> is irreducible over <span class="math-container">$\mathbb{F}_p$</span>, then <span class="math-container">$[\mathbb{F}_{p}(\alpha):\mathbb{F}_{p}] = p^n$</span>, hence <span class="math-container">$|\mathbb{F}_{p}(\alpha)|=p^{p^n}$</span>. Consider the endomorphism <span class="math-container">$\sigma$</span>: <span class="math-container">$\mathbb{F}_{p}(\alpha) \to \mathbb{F}_{p}(\alpha)$</span> which sends <span class="math-container">$\alpha \to \alpha^{p^n}$</span> (why is it an endomorphism?). Consider the subgroup of automorphisms <span class="math-container">$H = \langle \sigma \rangle$</span>. <span class="math-container">$H$</span> fixes <span class="math-container">$\mathbb{F}_{p^n}$</span> (Why?), so we have <span class="math-container">$[\mathbb{F}_{p}(\alpha): \mathbb{F}_{p^n}]=|H|=p$</span> (<span class="math-container">$\sigma^p$</span> is the identity map). Then <span class="math-container">$[\mathbb{F}_{p}(\alpha):\mathbb{F}_{p}] = [\mathbb{F}_{p}(\alpha): \mathbb{F}_{p^n}][\mathbb{F}_{p^n}:\mathbb{F}_{p}]$</span>, which means <span class="math-container">$p^{n}=pn$</span>, and this only happens when <span class="math-container">$n=1$</span> or <span class="math-container">$n=p=2$</span>.</p>
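<p>The statement can be spot-checked for small $p,n$ by brute-force trial division over $\mathbb{F}_p$ (my own pure-Python sketch, not part of the proof; it only searches for factors up to half the degree, which is all that is needed):</p>

```python
from itertools import product

def poly_mod(a, b, p):
    # Remainder of a mod b over F_p; coefficient lists, lowest degree first.
    # Assumes b is monic.
    a = a[:]
    db = len(b) - 1
    while True:
        while a and a[-1] % p == 0:
            a.pop()
        if len(a) - 1 < db:
            return a
        c = a[-1] % p
        shift = len(a) - 1 - db
        for i in range(db + 1):
            a[shift + i] = (a[shift + i] - c * b[i]) % p

def reducible(f, p):
    # Does monic f have a monic factor of degree 1..deg(f)//2 over F_p?
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for tail in product(range(p), repeat=d):
            g = list(tail) + [1]                 # monic candidate of degree d
            if not poly_mod(f, g, p):            # empty remainder => g divides f
                return True
    return False

def artin(p, n):
    f = [0] * (p ** n + 1)
    f[0], f[1], f[p ** n] = 1, p - 1, 1          # x^{p^n} - x + 1 over F_p
    return f

# Irreducible exactly in the cases n = 1 and n = p = 2:
assert not reducible(artin(2, 1), 2)
assert not reducible(artin(3, 1), 3)
assert not reducible(artin(5, 1), 5)
assert not reducible(artin(2, 2), 2)
assert reducible(artin(2, 3), 2)
assert reducible(artin(3, 2), 3)
```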
|
3,396,882 | <blockquote>
<p>Let <span class="math-container">$X$</span> be a non-negative random variable, and suppose that <span class="math-container">$P(X \geq
n) \geq 1/n$</span> for each <span class="math-container">$n \in \mathbb{N}$</span>. Prove that <span class="math-container">$E(X) = \infty$</span>.</p>
</blockquote>
<p>I have been stuck with this problem for a few days now. I guess it can make some sense intuitively because you have some probability mass everywhere, and we're looking at probability of it being greater than some value. I tried to use inequalities like Markov's and Chebyshev's with no luck. I was hoping if someone can please explain to me how to answer this problem. It is coming from an introductory probability with measure theory book, and I am trying my best to get better at these kind of problems.</p>
| GReyes | 633,848 | <p>I am giving a proof for the continuous case. Integrating by parts in the definition of expected value and observing that <span class="math-container">$P(X<-t)=0$</span> for any <span class="math-container">$t>0$</span> you have
<span class="math-container">$$
\mathbb{E}[X]=\int\limits_0^{\infty}P(X>t)\,dt\ge \int\limits_0^{\infty}\frac{1}{t+1}\,dt=\infty
$$</span>
(since, for non-integer <span class="math-container">$t$</span>, <span class="math-container">$P(X>t)\ge P(X\ge \lceil t\rceil)\ge \frac{1}{\lceil t\rceil}\ge \frac{1}{t+1}$</span>).</p>
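<p>A concrete random variable satisfying the hypothesis exactly is $X=1/U$ with $U$ uniform on $(0,1)$: $P(X\ge n)=P(U\le 1/n)=1/n$. A simulation (my addition) shows the empirical tail matching $1/k$ while the sample means drift upward, roughly like $\log n$, instead of settling:</p>

```python
import random

random.seed(7)
n = 200_000
# X = 1/U; using 1 - random.random() keeps U in (0, 1] and avoids division by zero.
xs = [1.0 / (1.0 - random.random()) for _ in range(n)]

for k in (2, 5, 10, 50):
    frac = sum(x >= k for x in xs) / n
    assert abs(frac - 1.0 / k) < 0.01            # empirical P(X >= k) ~ 1/k

# Running means over ever-larger prefixes typically keep growing (~ log n):
means = [sum(xs[:10 ** k]) / 10 ** k for k in (2, 3, 4, 5)]
print(means)
```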
|
154,722 | <p>Let $A = \pmatrix{1 & 0 \\ \alpha & 1} $ and $ B = \pmatrix{1 & 1 \\ 0 & 1}$, where $\alpha \in \mathbb{C}$ is a complex parameter.</p>
<p>Now consider the family of representations $r_{\alpha}$ of the free group on two generators $F_2 = \langle a,b\rangle$ in $\mathrm{SL}(2, \mathbb{C})$ setting $r_{\alpha}(a) = A$ and $r_{\alpha}(b) = B$. One can see that when $\alpha$ is transcendental over $\mathbb{Q}$, the representation $r_{\alpha}$ is faithful (see T. Church & A. Pixton "Separating twists and the Magnus representation of the Torelli group" Lemma 5.1). </p>
<p>The question I am interested in is the following : when is (or is not) $r_{\alpha}(F_2)$ a discrete subgroup of $\mathrm{SL}(2, \mathbb{C})$ ? </p>
<p>I suppose this is a difficult question of dynamics, I am curious if anyone has ever studied similar questions. </p>
| Venkataramana | 23,291 | <p>When $\alpha$ is very large one can see that the two elements play ping pong:</p>
<p>It is clear that if $\alpha ,n$ are large, $A,B^n$ play ping pong; so they are free. The proof of freeness also shows that the group generated by $A,B^n$ acts properly discontinuously on a piece of ${\mathbb P}^1({\mathbb C})$ and hence form a discrete subgroup of $SL_2({\mathbb C})$. </p>
<p>To reduce to the above situation, conjugate $A,B$ by a diagonal matrix so that $B$ is replaced by $B^n$ and $A$ is replaced by $A'= \begin{pmatrix}1 & 0 \cr \alpha /n &1\end{pmatrix}$ . If $n$ is large, and $\alpha /n$ is large, then by the preceding para, $A'$ and $B^n$ generate a discrete subgroup, hence so do the conjugates $A,B$. </p>
|
1,118,259 | <p>Consider a sphere of radius $a$ with 2 cylindrical holes of radius $b<a$ drilled such that both pass through the center of the sphere and are orthogonal to one another. What is the volume of the remaining solid?</p>
<p>Can someone help me at least setting up the integral? I know that there is a similar problem but it was a sphere with one hole. </p>
| achille hui | 59,379 | <p>WOLOG, choose the coordinate system such that the sphere of radius $a$ is centered at origin $O$ and the axes of the two holes are aligned along the $x$ and $y$ axis. Let</p>
<p>$$c = \sqrt{a^2 - b^2},\quad
d = \begin{cases}\sqrt{b^2 - c^2},& b > \frac{a}{\sqrt{2}}\\0,& \text{otherwise}\end{cases}
\quad\text{ and }\quad
e = \min(b,c)
$$
When one intersect the sphere with two holes with a plane of $z = const$, one find:</p>
<ul>
<li>$|z| > a \leadsto$ the intersection is empty.</li>
<li>$a \ge |z| \ge b \leadsto$ the intersection is a circle of radius $\rho = \sqrt{a^2-z^2}$.</li>
<li><p>$b \ge |z| \ge d \leadsto$ the part of circle within a distance
$\lambda = \sqrt{b^2 - z^2}$ from the $x$ or $y$-axis has been removed. The intersection split into $4$ pieces and the shaded area $ABC$ in following figure is one of these pieces:<br>
$\hspace0.8in$ <img src="https://i.stack.imgur.com/tNsPQ.jpg" alt="a piece of the the cross section"><br>
It is clear
$$\begin{align}
\text{Area}(ABC)
&= \text{Area}(OBC) - (\text{Area}(OBA) + \text{Area}(OAC))\\
&= \frac12\rho^2(2\phi) - 2(\frac12\lambda(c-\lambda))
= \rho^2\phi - \lambda(c - \lambda)\end{align}$$
where $\phi$ is the half angle of the circular arc $BC$ with respect to $O$.</p></li>
<li><p>Finally, when $b > \frac{a}{\sqrt{2}}$, $d > 0$ and for those $|z| < d$, the intersection is empty.</p></li>
</ul>
<p>If we extend the definition of $\phi$ to $\frac{\pi}{4}$ and $\lambda$ to $0$
for $|z| \in [b,a]$, the volume of the sphere with two holes can be expressed as
$$\text{Vol} = 8 \int_d^a \left( \rho^2 \phi - \lambda ( c - \lambda )\right) dz \tag{*1}$$</p>
<p>It is clear $\frac{d\phi}{dz} = 0$ for $z > b$. Let $\theta = \frac{\pi}{4} - \phi$ be the angle between $OB$ and the $x$-axis. For $z \in [d,b]$, we have:
$$\begin{align}
\lambda = c \tan\theta
& \implies d\lambda = c (\tan^2\theta + 1)d\theta\\
& \implies d\phi = -d\theta = -\frac{c d\lambda}{\lambda^2 + c^2}
= \frac{cz dz}{\lambda(\lambda^2+c^2)}
\end{align}
$$
Notice $\rho^2 dz = d\Delta$ where $\displaystyle\;\Delta(z) = (a^2 - \frac{z^2}{3})z\;$, we can integrate $(*1)$ by part and get</p>
<p>$$\begin{align}
\text{Vol}
&= 8 \left\{\left[\phi(z)\Delta(z)\right]_d^a -
\int_d^a \left(\Delta(z)\frac{d\phi}{dz} + \lambda(c - \lambda)\right) dz
\right\}\\
&= \frac{4\pi a^3}{3} + \frac{8}{3}\underbrace{\left(3b^2(b-d) - (b^3-d^3)\right)}_{\text{comes from }\int_d^b \lambda^2 dz} - 8b^2c\mathcal{O}
\end{align}\tag{*2}
$$
where
$$\mathcal{O} = \frac{1}{b^2}\int_d^b \left[\left(a^2 - \frac{z^2}{3}\right)\frac{z^2}{(a^2 - z^2)\sqrt{b^2-z^2}} + \sqrt{b^2 - z^2}\right] dz$$</p>
<p>Let </p>
<ul>
<li>$z = b\cos\psi$ and $\psi_0 = \cos^{-1}\frac{d}{b}$.</li>
<li>$t = \tan\psi$ and $t_0 = \tan\psi_0 = \frac{\sqrt{b^2-d^2}}{d} = \frac{e}{d}$, </li>
</ul>
<p>We find
$$\begin{align}
\mathcal{O}
&= \int_0^{\psi_0}\left[
\left(a^2 - \frac{b^2}{3}\cos^2\psi\right)
\frac{\cos^2\psi}{(a^2-b^2\cos^2\psi)} + \sin^2\psi
\right] d\psi\\
&= \int_0^{\psi_0} \left( \frac{2a^2}{3}\frac{\cos^2\psi}{a^2-b^2\cos^2\psi}
+ \frac13\cos^2\psi + \sin^2\psi \right) d\psi\\
&= \frac23\psi_0 -\frac13\sin\psi_0\cos\psi_0 + \frac{2a^2}{3}
\int_0^{t_0} \frac{dt}{(c^2+a^2t^2)(1+t^2)}\\
&= \frac23\psi_0 -\frac13\sin\psi_0\cos\psi_0 + \frac{2a^2}{3b^2}
\left(\frac{a}{c}\tan^{-1}\left(\frac{at_0}{c}\right) - \psi_0\right)\\
&= \frac{2a^3}{3b^2c}\tan^{-1}\left(\frac{ae}{dc}\right) - \frac{2c^2}{3b^2}\cos^{-1}\left(\frac{d}{b}\right) - \frac{de}{3b^2}
\end{align}
$$
Substitute this into $(*2)$, we find Vol is equal to
$$
\frac{4\pi a^3}{3} + \frac{8}{3}\left(2b^3 - 3b^2d + d^3 + cde\right)
- \frac{16a^3}{3}\tan^{-1}\left(\frac{ae}{dc}\right) + \frac{16c^3}{3}\cos^{-1}\left(\frac{d}{b}\right)
$$</p>
<p>Notice</p>
<ul>
<li>when $b > \frac{a}{\sqrt{2}}$, $e = c$,</li>
<li>when $b \le \frac{a}{\sqrt{2}}$, $d = 0$</li>
</ul>
<p>We can use this to simplify above expression and get</p>
<p>$$
\bbox[8pt,border:1px solid blue]{
\text{Vol}
= \frac{16}{3} \times \begin{cases}
\displaystyle\;a^3\left(\tan^{-1}\left(\frac{d}{a}\right) - \frac{\pi}{4}\right) + b^2(b-d) + c^3\cos^{-1}\left(\frac{d}{b}\right), & b > \frac{a}{\sqrt{2}}\\
\\
\displaystyle\;-\frac{\pi}{4}a^3 + b^3 + \frac{\pi}{2} c^3,& b < \frac{a}{\sqrt{2}}
\end{cases}
}
$$</p>
<p>The formula for $b < \frac{a}{\sqrt{2}}$ has a simple geometric interpretation. We can rewrite it as
$$\text{Vol}_{small} =
\underbrace{\frac{4\pi}{3}a^3}_{I} -
\underbrace{2 \times \frac{4\pi}{3}(a^3 - c^3)}_{II}
+ \underbrace{\frac{16}{3} b^3}_{III}$$</p>
<p>One can show that if we drill a single cylindrical hole of radius $b$ through a sphere of radius $a$, the volume of the remaining solid is $\frac{4\pi}{3}c^3$. This means
the volume of the cylinder removed is $\frac{4\pi}{3} (a^3 - c^3)$. Now, let us consider the intersection of two such cylinders, one aligned along the $x$-axis, the other along the $y$-axis. If we intersect this intersection by a plane with $|z| < b$, we will obtain a square of side $2\sqrt{b^2 - z^2}$. This means the volume of the intersection of two such cylinders is given by $$\int_{-b}^b 4(b^2 - z^2) dz = \frac{16}{3}b^3.$$</p>
<p>Comparing this with the expression for $\text{Vol}_{small}$ above, we find that $\text{Vol}_{small}$ can be calculated as:</p>
<ul>
<li>start with $I$, the volume of the sphere.</li>
<li>subtract $II$, the volume of the 2 cylinder removed.</li>
<li>add back $III$, the volume of the intersection of the two cylinders which has been over-subtracted in previous step.</li>
</ul>
<p>This is the inclusion-exclusion mentioned in Christian-Blatter's answer.</p>
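<p>A Monte-Carlo sanity check of the boxed formula, covering both cases (my addition, not part of the derivation; tolerances are generous relative to the sampling noise):</p>

```python
import random
from math import pi, sqrt, atan, acos

def volume_formula(a, b):
    # The boxed closed form from the answer above.
    c = sqrt(a * a - b * b)
    if b > a / sqrt(2):
        d = sqrt(b * b - c * c)
        return (16 / 3) * (a ** 3 * (atan(d / a) - pi / 4)
                           + b * b * (b - d) + c ** 3 * acos(d / b))
    return (16 / 3) * (-pi / 4 * a ** 3 + b ** 3 + pi / 2 * c ** 3)

def volume_mc(a, b, n=300_000, seed=1):
    # Rejection sampling inside the bounding cube [-a, a]^3.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = (rng.uniform(-a, a) for _ in range(3))
        if (x * x + y * y + z * z <= a * a       # inside the sphere
                and y * y + z * z > b * b        # outside the x-axis hole
                and x * x + z * z > b * b):      # outside the y-axis hole
            hits += 1
    return hits / n * (2 * a) ** 3

for a, b in [(1.0, 0.4), (1.0, 0.9)]:
    assert abs(volume_formula(a, b) - volume_mc(a, b)) < 0.05
```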
|
4,545,364 | <blockquote>
<p>Solve the quartic polynomial:
<span class="math-container">$$x^4+x^3-2x+1=0$$</span>
where <span class="math-container">$x\in\Bbb C$</span>.</p>
<p>Algebraic, trigonometric and all possible methods are allowed.</p>
</blockquote>
<hr />
<p>I am aware that there exists a general quartic formula (Ferrari's formula). But, the author says, this equation doesn't require the general formula; we need some substitutions here.</p>
<p>I realized that there is no rational root, by the rational root theorem.</p>
<p>The harder part is, WolframAlpha says the factorisation over <span class="math-container">$\Bbb Q$</span> is impossible.</p>
<p>Another solution method can be considered as the quasi-symmetric equations approach. (divide by <span class="math-container">$x^2$</span>).</p>
<p><span class="math-container">$$x^2+\frac 1{x^2}+x-\frac 2x=0$$</span></p>
<p>But the substitution <span class="math-container">$z=x+\frac 1x$</span> doesn't make any sense.</p>
<p>I want to ask the question here to find possible smarter ways to solve the quartic.</p>
| Bob Dobbs | 221,315 | <p><span class="math-container">$(x^2+e^{\theta i}x+e^{\phi i})(x^2+e^{-\theta i}x+e^{-\phi i})=x^4+2\cos\theta x^3+(1+2\cos\phi)x^2+2\cos(\theta-\phi)x+1$</span>.</p>
<p>Then <span class="math-container">$\cos\theta=\frac{1}{2}$</span>, <span class="math-container">$\cos\phi=-\frac{1}{2}$</span>, <span class="math-container">$\cos(\theta-\phi)=-1.$</span></p>
<p>Then <span class="math-container">$\theta$</span> is <span class="math-container">$\pm\frac{\pi}{3}$</span> and <span class="math-container">$\phi$</span> is <span class="math-container">$\pm\frac{2\pi}{3}$</span>.</p>
<p>Their difference is <span class="math-container">$\pm\pi$</span>. Let <span class="math-container">$\theta=\frac{\pi}{3}$</span> and <span class="math-container">$\phi=-\frac{2\pi}{3}$</span></p>
<p><span class="math-container">$(x^2+e^{\frac{\pi}{3}i}x+e^{-\frac{2\pi}{3} i})(x^2+e^{-\frac{\pi}{3} i}x+e^{\frac{2\pi}{3} i})=0.$</span></p>
<p>Then we solve by quadratic formula.</p>
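<p>As a double-check, here is a small numeric verification of this factorisation: it computes the four roots of the two quadratic factors with the quadratic formula and substitutes them back into <span class="math-container">$x^4+x^3-2x+1$</span>. A Python sketch:</p>

```python
import cmath

theta = cmath.pi / 3      # gives 2*cos(theta) = 1 (the x^3 coefficient)
phi = -2 * cmath.pi / 3   # gives 1 + 2*cos(phi) = 0 (the x^2 coefficient)

roots = []
for s in (+1, -1):        # the quadratic factor and its conjugate
    a = cmath.exp(s * 1j * theta)  # coefficient of x
    b = cmath.exp(s * 1j * phi)    # constant term
    d = cmath.sqrt(a * a - 4 * b)
    roots += [(-a + d) / 2, (-a - d) / 2]

# every root should satisfy the original quartic
residual = max(abs(x**4 + x**3 - 2*x + 1) for x in roots)
```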
|
2,179,253 | <p>$$n{n-1 \choose 2}={n \choose 2}{(n-2)}$$
Give a conceptual
explanation of why this formula is true.</p>
| Dando18 | 274,085 | <p>Here's a derivation:
$$ n \binom{n-1}{2} = \frac{n (n-1)!}{2!(n-3)!} = \frac{n!}{2!(n-3)!} = \frac{1}{2}(n-2)(n-1)n$$</p>
<p>$$ \binom{n}{2}(n-2) = \frac{n!(n-2)}{2!(n-2)!} = \frac{n!}{2!(n-3)!} = \frac{1}{2}(n-2)(n-1)n $$</p>
<p>So it follows that:</p>
<p>$$ n \binom{n-1}{2} = \binom{n}{2}(n-2) $$</p>
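<p>For the conceptual explanation the question asks for: both sides count a chair together with a disjoint 2-person committee chosen from $n$ people. The left side picks the chair first ($n$ ways) and then the committee from the remaining $n-1$ people; the right side picks the committee first and then the chair from the remaining $n-2$ people. A one-line check in Python:</p>

```python
from math import comb

# n*C(n-1,2) == C(n,2)*(n-2): both sides count (chair, 2-committee)
# pairs with the chair outside the committee.
ok = all(n * comb(n - 1, 2) == comb(n, 2) * (n - 2) for n in range(2, 100))
```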
|
68,145 | <p>All the statements below are considered over local rings, so by regular, I mean a regular local ring and so on;</p>
<p>It is well-known that every regular ring is Gorenstein and every Gorenstein ring is Cohen-Macaulay. There are examples demonstrating that the converses of these statements do not hold. For example, $A=k[[x,y,z]]/(x^2-y^2, y^2-z^2, xy, yz, xz)$, where $k$ is a field, is Gorenstein but not regular, and $k[[x^3, x^5, x^7]]$ is C.M. but not Gorenstein.
Now, here is my question:</p>
<p>I want to know where these examples have come from. I mean, were they created through some systematic translation into algebraic combinatorics (as Stanley did), or even algebraic geometry, or do they simply exist as they are, arriving to their creators like flashes of inspiration?</p>
| Sándor Kovács | 10,076 | <p>All of these conditions are very important in algebraic geometry. I don't know much about the algebraic combinatorics aspect of these notions, but my feeling is that it came from geometry and not vice versa.</p>
<p>The reason we care about these notions is that even though it would be nice to always work with non-singular varieties (a.k.a. regular) it can't always be done. For instance, families of non-singular varieties may degenerate to singular ones and most of the time there is no way to resolve these singularities in the families. </p>
<p>For instance, any families of hypersurfaces (e.g., plane curves) degenerate to singular ones. However, hypersurfaces are Gorenstein so if we can handle those we are fine. So, in particular, to give an example of a Gorenstein but not regular ring you only need to find a singular hypersurface. For example $k[[x,y]]/(x^2-y^3)$ is such an example. </p>
<p>Now if you study more general varieties than hypersurfaces you might not always be able to guarantee that they degenerate to Gorenstein varieties. On the other hand, if you consider stable families, then if the general fiber is smooth, then all fibers are Cohen-Macaulay. This is a non-trivial result. You can find it <a href="http://www.ams.org/journals/jams/2010-23-03/S0894-0347-10-00663-6/home.html" rel="nofollow noreferrer">here</a>.</p>
<p>As Kevin mentioned, the Gorenstein and Cohen-Macaulay properties can be measured by the dualizing complex. $X$ is Cohen-Macaulay if and only if its dualizing complex is a sheaf and it is Gorenstein if and only if it is Cohen-Macaulay and its dualizing sheaf is a line bundle. I am not totally sure what he means by the last statement, but $X$ is regular if the sheaf of differentials is locally free. If it isn't regular one needs to think about what "top differentials" mean. For a discussion of that see <a href="https://mathoverflow.net/questions/35736/the-canonical-line-bundle-of-a-normal-variety/46663#46663">this MO answer</a>.</p>
<p>Anyway, this gives us an easy way to construct Cohen-Macaulay but not Gorenstein varieties. You "only" need a Cohen-Macaulay variety whose canonical bundle is not a line bundle.
An easy way to do that is to use the fact that rational singularities are always Cohen-Macaulay. For surface singularities you can ensure that they are rational from their resolution graph (see Artin's <a href="http://www.jstor.org/pss/2373050" rel="nofollow noreferrer">paper</a>) and it is easy to cook up a resolution graph that makes sure that the canonical sheaf of the singularity is not a line bundle. </p>
<p>Another way to make sure that a singularity is Cohen-Macaulay is to compute its local cohomology. See Lemma 4.1 of this <a href="http://arxiv.org/abs/1005.5207" rel="nofollow noreferrer">paper</a> of Patakfalvi for a condition. That tells you when a cone is Cohen-Macaulay and then just pick a variety with the right embedding and it will give you something non-Gorenstein. For instance, take a cone over $\mathbb P^1\times \mathbb P^1$ embedded by the $(2,1)$ line bundle. </p>
|
2,068,986 | <p>Consider the function</p>
<p>$$K(u) = \frac 1 {\sqrt {2\pi}} \left( \Bbb e ^{-\frac 1 2 \left( \frac {u-5.3} h \right)^2 } + \Bbb e ^{-\frac 1 2 \left( \frac {u-1.6} h \right)^2 } + \Bbb e ^{-\frac 1 2 \left( \frac {u-2.1} h \right)^2 } + \Bbb e ^{-\frac 1 2 \left( \frac {u-1.7} h \right)^2 } + \Bbb e ^{-\frac 1 2 \left( \frac {u-1.9} h \right)^2 } \right) .$$</p>
<p>I know how to compute the derivative of each term in order to find its extrema, but how should I proceed in order to find the extrema of the whole sum?</p>
| Barry Cipra | 86,747 | <p>Try </p>
<p>$$a_n=\lfloor2^n\ln2\rfloor$$</p>
<p>and use the inequalities $2^n\ln2-1\lt a_n\le2^n\ln2$ in the Squeeze Theorem: Since $1-{1\over2^n}\lt1$, we have</p>
<p>$$\left(1-{1\over2^n}\right)^{2^n\ln2}\le\left(1-{1\over2^n}\right)^{a_n}\lt\left(1-{1\over2^n}\right)^{2^n\ln2-1}$$</p>
<p>The left- and right-hand expressions are easily seen to tend to $(e^{-1})^{\ln2}={1\over e^{\ln2}}={1\over2}$ as $n\to\infty$.</p>
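<p>A quick numerical illustration of the squeeze (Python), evaluating $\left(1-{1\over2^n}\right)^{a_n}$ with $a_n=\lfloor2^n\ln2\rfloor$ for growing $n$:</p>

```python
import math

def term(n):
    a_n = math.floor(2**n * math.log(2))  # a_n = floor(2^n ln 2)
    return (1 - 2.0**(-n)) ** a_n

values = [term(n) for n in (5, 10, 20, 30)]  # should approach 1/2
```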
|
2,741,686 | <p>If I have the vector space $V$ spanned by $\{e_0, e_1, e_2\}$, where $e_0(x) = 1$, $e_1(x) = x$ and $e_2(x) = x^2$, and I want to determine whether these vectors are linearly dependent, how can I proceed? I thought of following the definition of linear independence $$c_0e_0 + c_1e_1 + c_2e_2 = c_0+ c_1x + c_2x^2=0\iff c_0 = c_1 = c_2 = 0$$
but I cannot set up a system of equations because of the $x^2$.</p>
<p>I know that $c_0 = c_1 = c_2 = 0$ is a solution, but I want to know if there is another solution of the equation $c_0+ c_1x + c_2x^2=0$ with $c_0 \neq 0$, $c_1 \neq 0$ and $c_2 \neq 0$. </p>
| user | 505,767 | <p>Note that for the <a href="http://mathworld.wolfram.com/ZeroPolynomial.html" rel="nofollow noreferrer">zero polynomial</a> property</p>
<p>$$c_0e_0 + c_1e_1 + c_2e_2 = c_0+ c_1x + c_2x^2=0 \quad \forall x\iff c_0 = c_1 = c_2 = 0$$</p>
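<p>One concrete way to see why the zero-polynomial property forces $c_0=c_1=c_2=0$: the identity holds for <em>every</em> $x$, so in particular at $x=0,1,2$, and the resulting $3\times 3$ linear system has a nonzero Vandermonde determinant. A small Python sketch (the three evaluation points are an arbitrary choice):</p>

```python
# Evaluating c0 + c1*x + c2*x^2 = 0 at x = 0, 1, 2 gives a linear
# system whose coefficient matrix M is a Vandermonde matrix.
xs = [0, 1, 2]
M = [[x**j for j in range(3)] for x in xs]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

d = det3(M)  # Vandermonde determinant (1-0)*(2-0)*(2-1) = 2
```

<p>Since $d\neq 0$, the only solution is the trivial one, so $\{e_0,e_1,e_2\}$ is linearly independent.</p>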
|
510,130 | <p>Let $(r_i)_{i=1}^m$ be a sequence
of positive reals such that
$\sum_i r_i < 1$
and let $t$ be a positive real.
Consider the sequence $T(n)$
defined by $T(0) = t$,
$T(n) = \sum_i T(\lfloor r_i n \rfloor) $
for $n \ge 1$.</p>
<p>Show that
$T(n) = o(n)$,
that is,
$\lim_{n \to \infty} \dfrac{T(n)}{n}
= 0
$.</p>
<p>Note:
This is a variation on
<a href="https://math.stackexchange.com/questions/506489/if-tn-un-sum-i-t-lfloor-r-i-n-rfloor-show-that-tn-thetan">If $T(n) = un + \sum_i T(\lfloor r_i n \rfloor) $, show that $T(n) = \Theta(n)$</a>.
It is gotten by setting $u=0$ there.</p>
<p>I am close to a solution,
and hope to have one in a few days.
If I find one,
I will post it.</p>
<p>Note:
It is easy to prove that
$T(n) = O(n)$.
The problem is showing that
$T(n)/n \to 0$.</p>
| Marko Riedel | 44,883 | <p>I have some very exciting news for Mr. M. Cohen and other potential readers of this thread. I hope it gets viewed often because the result is really very pretty. We show that we can solve a more general case than in the first post using Dirichlet series and Mellin transforms, getting exact formulas for $T(n)$ and for the leading asymptotic term.</p>
<p>Suppose that $r_k = 1/p_k$ for all $k$ and $\sum_k 1/p_k < 1$, where the $p_k\ge 2$ are $m\ge 2$ distinct integers. Note that these can be arbitrary integers now, as opposed to powers of a single integer in our first post. Furthermore let $T(0) = 1.$</p>
<p>Now evidently what we are facing here is a tree where nodes branch out for every $p_k$ as long as $n$ is not zero and we want a count of the leaves. The following observation is the key: the finite Dirichlet series in $s$ given by
$$D_l(s) = \left(\sum_k \frac{1}{p_k^s}\right)^l = \sum_q \frac{a_{l,q}}{q^s}$$
perfectly encodes by means of a bijection all paths of length $l$ (that is, $l$ edges) from the root to a leaf and the multiplier of $n$ corresponding to each path. If $q> \lfloor n/p_k \rfloor$ and $q\le n$ then appending a step along $p_k$ produces $a_{l,q}$ zero values, i.e. leaves.</p>
<p>Hence we have the following exact formula (exact for all $n$):
$$T(n) = \sum_k \sum_{l=0}^{\lfloor \log_2 n \rfloor}
\sum_{\lfloor n/p_k \rfloor < q \le n} a_{l, q}.$$</p>
<p>Now observe that we can safely replace this by
$$T(n) = \sum_k \sum_{l=0}^\infty
\sum_{\lfloor n/p_k \rfloor < q \le n} a_{l, q}$$
because $a_{l,q}$ never contributes when $q>n$, and for $l > \lfloor \log_2 n \rfloor$ the smallest denominator occurring in the corresponding $D_l(s)$ is $(\min_k p_k)^{\lfloor \log_2 n \rfloor + 1} \ge 2^{\lfloor \log_2 n \rfloor + 1} > n.$</p>
<p>Introducing the Dirichlet series
$$D(s) = \sum_{l\ge 0} D_l(s) =
\sum_{l\ge 0} \left(\sum_k \frac{1}{p_k^s}\right)^l =
\frac{1}{1 - \sum_k \frac{1}{p_k^s}} = \sum_q \frac{a_q}{q^s}$$
we thus obtain
$$T(n) = \sum_k\sum_{\lfloor n/p_k \rfloor < q \le n} a_q.$$</p>
<p>Now put $$M(n) = \sum_{q=1}^n a_q$$ so that
$$T(n) = \sum_k \left(M(n) - M(\lfloor n/p_k \rfloor)\right).$$
We will evaluate $M(n)$ by means of the Wiener-Ikehara theorem, which is a form of Mellin summation. Note that by the intermediate value theorem $$1-\sum_k \frac{1}{p_k^s}$$ has a root $\rho$ with $0<\rho<1$ strictly: the expression is negative at $s=0$ (where it equals $1-m$) and positive at $s=1$ (since $\sum_k 1/p_k<1$).
Now $$\frac{1}{1-\sum_k \frac{1}{p_k^s}}$$ has a simple pole there with residue
$$\operatorname{Res}
\left(\frac{1}{1-\sum_k \frac{1}{p_k^s}}; s=\rho\right) =
\frac{1}{\sum_k \frac{\log p_k}{p_k^\rho}}.$$
We may thus conclude that
$$M(n) \sim \left(\sum_k \frac{\log p_k}{p_k^\rho}\right)^{-1} \frac{n^\rho}{\rho}$$
obtaining finally that
$$T(n) \sim \left(\sum_k \frac{\log p_k}{p_k^\rho}\right)^{-1}
\sum_k\left(\frac{n^\rho}{\rho} - \frac{(n/p_k)^\rho}{\rho}\right)\\
= \left(\sum_k \frac{\log p_k}{p_k^\rho}\right)^{-1}
\frac{n^\rho}{\rho}\sum_k\left(1 - (1/p_k)^\rho\right)
= (m-1) \left(\sum_k \frac{\log p_k}{p_k^\rho}\right)^{-1}
\frac{n^\rho}{\rho}.$$
This asymptotic formula converges very nicely and the quotient between $T(n)$ and this leading term goes to one very quickly, as numerical experiments show.
In particular $$\frac{T(n)}{n}\in\Theta(n^{\rho-1})$$ so that
$$\frac{T(n)}{n}\to 0$$
as claimed since $\rho-1<0.$</p>
<p>I encourage readers to fill in the details and I am available for questions. Thanks go to M. Cohen for asking such a wonderful question.</p>
<p>This is the Maple code that I used to verify these formulas.</p>
<pre>
ex :=
proc(l, n)
option remember;
local k, r, t, ds, cfs, terms, tval, pos;
if n=0 then return 1 fi;
r := 0;
for k from 0 to ilog[2](n) do
if k=0 then
for t in l do
if t>n then r := r+1; fi;
od;
else
ds :=
map(simplify, expand(add(1/l[q]^s, q=1..nops(l))^k));
ds := convert(ds,list);
cfs := map(t->subs(s=0, t), ds);
terms := [seq(ds[pos]/cfs[pos], pos=1..nops(ds))];
for pos to nops(ds) do
for t in l do
tval := op(1, terms[pos]);
if tval>floor(n/t) and tval<=n then
r := r+cfs[pos];
fi;
od;
od;
fi;
od;
r;
end;
T :=
proc(l, n)
option remember;
local t, r;
if n=0 then return 1 fi;
r := 0;
for t in l do
r := r+T(l, floor(n/t));
od;
r;
end;
rho :=
proc(l)
option remember;
fsolve(1-add(1/l[k]^s, k=1..nops(l)), s);
end;
lterm :=
proc(l, n)
(nops(l)-1)*1/
add(log(l[k])/l[k]^rho(l), k=1..nops(l))*n^rho(l)/rho(l);
end;
</pre>
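<p>For readers without Maple, here is a short cross-check of the recursion itself in Python; the divisor set $\{2,5\}$ is an illustrative choice with $\frac12+\frac15<1$. It only confirms empirically that $T(n)/n$ decreases toward $0$, as the asymptotics predict, not the exact constant:</p>

```python
from functools import lru_cache

P = (2, 5)  # illustrative p_k with sum of reciprocals < 1

@lru_cache(maxsize=None)
def T(n):
    # T(0) = 1, T(n) = sum_k T(floor(n / p_k))
    if n == 0:
        return 1
    return sum(T(n // p) for p in P)

ratios = [T(10**k) / 10**k for k in range(1, 6)]  # T(n)/n over decades
```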
|
510,130 | <p>Let $(r_i)_{i=1}^m$ be a sequence
of positive reals such that
$\sum_i r_i < 1$
and let $t$ be a positive real.
Consider the sequence $T(n)$
defined by $T(0) = t$,
$T(n) = \sum_i T(\lfloor r_i n \rfloor) $
for $n \ge 1$.</p>
<p>Show that
$T(n) = o(n)$,
that is,
$\lim_{n \to \infty} \dfrac{T(n)}{n}
= 0
$.</p>
<p>Note:
This is a variation on
<a href="https://math.stackexchange.com/questions/506489/if-tn-un-sum-i-t-lfloor-r-i-n-rfloor-show-that-tn-thetan">If $T(n) = un + \sum_i T(\lfloor r_i n \rfloor) $, show that $T(n) = \Theta(n)$</a>.
It is gotten by setting $u=0$ there.</p>
<p>I am close to a solution,
and hope to have one in a few days.
If I find one,
I will post it.</p>
<p>Note:
It is easy to prove that
$T(n) = O(n)$.
The problem is showing that
$T(n)/n \to 0$.</p>
| Marko Riedel | 44,883 | <p>I am presenting an important addendum. The code for my first answer does not work properly when some of the $p_k$ are repeated even though the math is right, and it is not all that efficient. I have remedied this defect and I am presenting code that works for duplicate $p_k$ and is amazingly fast even for large arguments $n$ to $T(n).$ I have used this code to verify the correctness of the above mathematical argument empirically for a number of sets of $p_k.$ </p>
<pre>
mul_dir :=
proc(l1, l2)
option remember;
local r, res, p, t, t1, t2, pos;
r := [];
for t1 in l1 do
for t2 in l2 do
r := [op(r), [t1[1]*t2[1], t1[2]*t2[2]]];
od;
od;
r := sort(r, (p1, p2) -> p1[2] < p2[2]);
res := []; p := r[1];
for pos from 2 to nops(r) do
if p[2] <> r[pos][2] then
res := [op(res), p];
p := r[pos];
else
p[1] := p[1] + r[pos][1];
fi;
od;
res := [op(res), p];
res;
end;
ex :=
proc(l, n)
option remember;
local k, r, t, f, ds, cfs, terms, tval, pos;
if n=0 then return 1 fi;
r := 0; f:= [seq([1, l[q]], q=1..nops(l))];
for k from 0 to ilog[2](n) do
if k=0 then
ds := [[1,1]];
else
ds := mul_dir(ds, f);
fi;
for pos to nops(ds) do
for t in l do
tval := ds[pos][2];
if tval>floor(n/t) and tval<=n then
r := r+ds[pos][1];
fi;
od;
od;
od;
r;
end;
T :=
proc(l, n)
option remember;
local t, r;
if n=0 then return 1 fi;
r := 0;
for t in l do
r := r+T(l, floor(n/t));
od;
r;
end;
rho :=
proc(l)
option remember;
fsolve(1-add(1/l[k]^s, k=1..nops(l)), s);
end;
lterm :=
proc(l, n)
(nops(l)-1)*1/
add(log(l[k])/l[k]^rho(l), k=1..nops(l))*n^rho(l)/rho(l);
end;
</pre>
|
1,448,585 | <blockquote>
<p>Let $\alpha \in \mathbb{R}^n$, $n \geq 2$, be a non-zero vector. Define a reflection in the hyperplane perpendicular to $\alpha$ by:
$$\sigma_{\alpha}(v) = v - \dfrac{2(v, \alpha)}{(\alpha, \alpha)} \cdot \alpha$$
($(x, y)$ is the usual inner product on $\mathbb{R}^n$).</p>
<p>1) Show $\sigma_{\alpha}$ is a linear map that fixes the hyperplane orthogonal to $\alpha$ and sends $\alpha$ to $-\alpha$.</p>
<p>2) Given $\alpha, \beta$ non-zero vectors, determine when the subgroup $\langle \sigma_{\alpha}, \sigma_{\beta} \rangle$ is infinite. Find its order when it is finite.</p>
</blockquote>
<p>For 2) I don't understand what the group is. If $\sigma_{\alpha}$ and $\sigma_{\beta}$ are elements of a group, what other elements do they generate? Like for example, $\sigma_{\alpha}(\sigma_{\beta}(v)) = \left(v - \dfrac{2(v, \beta)}{(\beta, \beta)} \cdot \beta \right) - \dfrac{2\left(v - \dfrac{2(v, \beta)}{(\beta, \beta)} \cdot \beta, \beta \right)}{(\beta, \beta)} \cdot \beta$ which I guess makes sense (in the sense that dot products work in this function since the dot product is between vectors). But how do I know when there will be an infinite many number of these, and when there will be finitely many?</p>
<p>I can't even find an identity function $\sigma$, because a composition of $\sigma_{\alpha}$ and $\sigma_{\beta}$ is $\sigma_{\beta}$ only when $\sigma_{\alpha} = v$, but this is a constant function and does not reflect $\alpha$ about the hyperplane to $-\alpha$, so this constant function cannot be in the group.</p>
| Lee Mosher | 26,501 | <p>For your first subquestion of 1), the hyperplane is described in the question: it is the hyperplane orthogonal to $\alpha$. You know from linear algebra that this hyperplane is the solution set of the equation $\alpha \cdot v = 0$. So your goal is to take any $v$ in that hyperplane, i.e. take any $v$ such that $\alpha\cdot v=0$, and prove the equation $\sigma_\alpha(v)=v$. </p>
<p>For your second subquestion of 1), $\alpha$ is a vector, and it is a constant, so it is a constant vector (unlike $v$ which is a vector, and it is a variable, so it is a variable vector). To say that a function $f$ sends $a$ to $b$ means $f(a)=b$. So to say that the function $\sigma_\alpha$ sends $\alpha$ to $-\alpha$ means that $\sigma_\alpha(\alpha)=-\alpha$. That's the equation you are asked to prove.</p>
<p>For 2), you are correct that a function is not a group, but the question does not ask you to believe that a function is a group. Instead, the question asks you to believe that the <strong>set</strong> of all linear isomorphisms of $\mathbb{R}^n$ is a group under the binary operation of <strong>composition</strong> --- you may have heard of this group, it is denoted $GL(n,\mathbb{R})$. Also, you are asked to believe that if $\alpha$ is a constant vector then $\sigma_\alpha$ is an element of the group $GL(n,\mathbb{R})$. Also, if you fix two constant vectors $\alpha$ and $\beta$, then you are asked to believe that there is a subgroup of $GL(n,\mathbb{R})$ denoted $\langle \sigma_\alpha,\sigma_\beta \rangle$ and called the subgroup of $GL(n,\mathbb{R})$ that is generated by $\sigma_\alpha,\sigma_\beta$.</p>
|
1,448,585 | <blockquote>
<p>Let $\alpha \in \mathbb{R}^n$, $n \geq 2$, be a non-zero vector. Define a reflection in the hyperplane perpendicular to $\alpha$ by:
$$\sigma_{\alpha}(v) = v - \dfrac{2(v, \alpha)}{(\alpha, \alpha)} \cdot \alpha$$
($(x, y)$ is the usual inner product on $\mathbb{R}^n$).</p>
<p>1) Show $\sigma_{\alpha}$ is a linear map that fixes the hyperplane orthogonal to $\alpha$ and sends $\alpha$ to $-\alpha$.</p>
<p>2) Given $\alpha, \beta$ non-zero vectors, determine when the subgroup $\langle \sigma_{\alpha}, \sigma_{\beta} \rangle$ is infinite. Find its order when it is finite.</p>
</blockquote>
<p>For 2) I don't understand what the group is. If $\sigma_{\alpha}$ and $\sigma_{\beta}$ are elements of a group, what other elements do they generate? Like for example, $\sigma_{\alpha}(\sigma_{\beta}(v)) = \left(v - \dfrac{2(v, \beta)}{(\beta, \beta)} \cdot \beta \right) - \dfrac{2\left(v - \dfrac{2(v, \beta)}{(\beta, \beta)} \cdot \beta, \beta \right)}{(\beta, \beta)} \cdot \beta$ which I guess makes sense (in the sense that dot products work in this function since the dot product is between vectors). But how do I know when there will be an infinite many number of these, and when there will be finitely many?</p>
<p>I can't even find an identity function $\sigma$, because a composition of $\sigma_{\alpha}$ and $\sigma_{\beta}$ is $\sigma_{\beta}$ only when $\sigma_{\alpha} = v$, but this is a constant function and does not reflect $\alpha$ about the hyperplane to $-\alpha$, so this constant function cannot be in the group.</p>
| whacka | 169,605 | <p>Get some intuition from three dimensions first. Say the intersection of the two planes is the axis spanned by $\gamma$. Then $\{\alpha,\beta,\gamma\}$ is a basis, and the reflections only act on the $\alpha$ and $\beta$ components of any vector. This generalizes: prove that ${\rm span}\{\alpha,\beta\}$ is the orthogonal complement of the planes' intersection.</p>
<p>(More generally, $A^\perp\cap B^\perp=(A+B)^\perp$ for any subspaces $A,B$ of an inner product space.)</p>
<p>So really, you only need to worry about what the reflections do to the plane $\alpha$ and $\beta$ generate. That's only two dimensions to worry about. Without loss of generality, say one reflection is across the $x$-axis and the other is across the line $y=\tan(\theta)x$ (which makes an angle of $\theta$ with the $x$-axis). What exactly is the composition of the two reflections then?</p>
<p>If it helps, draw these two lines on a piece of paper, and put a point $P$ just under the $x$-axis in the fourth quadrant. Reflect across the $x$-axis to get a point $Q$, then reflect across the other line to get point $R$. If you label all of the angles made (between the lines and the imaginary line segments joining the origin to the three points) you should be able to make some deductions about the angles, and then get an idea for what the composition of the two reflections is.</p>
<p>(Spoiler: you'll be thinking about $n$-gons and dihedral groups soon after that.)</p>
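<p>The picture above can also be checked by a direct matrix computation: the composition of the two reflections is the rotation by twice the angle between the lines, so $\langle \sigma_\alpha,\sigma_\beta \rangle$ is finite exactly when that angle is a rational multiple of $\pi$ (giving a dihedral group). A Python sketch with an arbitrary sample angle:</p>

```python
import math

def reflection(t):
    # 2x2 matrix of the reflection across the line through the origin
    # making angle t with the x-axis
    c, s = math.cos(2 * t), math.sin(2 * t)
    return [[c, s], [s, -c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7  # arbitrary angle between the two mirror lines
# reflect across the x-axis first, then across the tilted line
R = matmul(reflection(theta), reflection(0.0))

rot = [[math.cos(2 * theta), -math.sin(2 * theta)],
       [math.sin(2 * theta),  math.cos(2 * theta)]]
err = max(abs(R[i][j] - rot[i][j]) for i in range(2) for j in range(2))
```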
|
2,193,171 | <p>Question: Let $\{a_n\}$ and $\{b_n\}$ be convergent sequences with $a_n \to L$ and $b_n \to M$ as $n \to \infty$. </p>
<p>Prove that $a_nb_n \to LM$.</p>
<p>Solution: (My Attempt). Instead of redoing it could someone just tell me what I'm doing wrong. Thx</p>
<p>WTS: </p>
<p>(1) $\exists L \in R, \forall \epsilon > 0, \exists N_1 > 0$ such that for all $n \in N_1$, if $n > N_1$, then </p>
<p>$|a_n - L| < \text{(We dont know yet)}$</p>
<p>(2) $\exists M \in R, \forall \epsilon > 0, \exists M > 0$, such that for all $m \in M$, if $m > M$, then </p>
<p>$|b_n - M| < \text{(We dont know yet)}$</p>
<p>Choose N = $\text{we dont know yet} > 0$</p>
<p>Suppose $n > N$ and $m > M$, then</p>
<p>$$|a_nb_n - LM| = |a_nb_n - a_nM + a_nM - LM| $$</p>
<p>$$= |a_n(b_n - M) + M(a_n - L)| \text{ by algebra}$$</p>
<p>$$\leq |a_n(b_n-M)| + |M(a_n - L)| \text{ triangle inequality}$$ </p>
<p>$$= |a_n||b_n - M| + |M||a_n - L|$$</p>
<p>Can we say $|a_n||b_n - M| = \epsilon/2$ same with $|M||a_n - L| = \epsilon/2$ ? Then Q.E.D? With N = $max(N_1, M)$ ? </p>
<p>I have no idea what I'm doing. </p>
| Mark Viola | 218,419 | <p>Note that for $x>0$ the exponential satisfies the inequality</p>
<p>$$\begin{align}
e^x&\ge 1+x+\cdots +\frac{x^{k+1}}{(k+1)!}\\\\
&>\frac{x^{k+1}}{(k+1)!}\tag 1
\end{align}$$</p>
<p>Using $(1)$, it is easy to see that</p>
<p>$$\begin{align}
\frac{e^{-1/t}}{t^k}&=\frac{1}{t^ke^{1/t}}\\\\
&\le \frac{1}{t^k\left(\frac{1}{(k+1)!t^{k+1}}\right)}\\\\
&=(k+1)!t
\end{align}$$</p>
<p>Hence, given $\epsilon>0$, $\frac{e^{-1/t}}{t^k}<\epsilon$ whenever $0<t<\delta=\epsilon/(k+1)!$.</p>
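<p>A numeric spot-check of the bound derived above, $\frac{e^{-1/t}}{t^k}\le (k+1)!\,t$, for a sample $k$ (Python):</p>

```python
import math

k = 3
bound = math.factorial(k + 1)  # (k+1)!

def f(t):
    return math.exp(-1.0 / t) / t**k

ts = [10.0**(-j) for j in range(1, 8)]
ok = all(f(t) <= bound * t for t in ts)
small = f(ts[-1])  # f(t) -> 0 as t -> 0+
```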
|
2,636,712 | <p>I have question about my proof. I could not tell whether it is sufficient enough since my professor approached it differently. </p>
<p><strong>The problem:</strong></p>
<blockquote>
<p>Let $z \in \mathbb{C}^{*}$. If $|z| \neq 1$, prove that the order of $z$ is infinite. </p>
</blockquote>
<p><strong>My proof:</strong> (by contradiction)</p>
<p>Let $z = r\cos(\theta)+ir\sin(\theta)=r\operatorname{cis}(\theta)$, where $r> 0, \theta \in [0, 2\pi)$. Since $|z| \neq 1$, we have $r \neq 1$. </p>
<p>Suppose that the order of $z$ is finite i.e. $\exists m \in \mathbb{Z}_{+} s.t. z^{m}=1$. Then, observe that: </p>
<p>$z^{m}=r^{m}\operatorname{cis}(m\theta)$, so</p>
<p>$|z^{m}|=|1| \implies r^{m}\,|\operatorname{cis}(m\theta)|=1 \implies r^{m}=1$, since $|\operatorname{cis}(m\theta)|=1$.</p>
<p>However, since $m> 0$ and $r> 0$, the equation $r^{m}=1$ forces $r=1$. Yet, this contradicts the assumption that $r\neq 1$. </p>
<p>Hence, we proved that the order of $z$ can’t be finite. </p>
<p><strong>My question:</strong> </p>
<p>Is this proof complete? I did not do the way my professor talked about, which using induction and state that $z \neq 1$ is equivalent to $z \in \mathbb{Q}^{*}$. </p>
<p>Any suggestion or different approach is highly appreciated.</p>
| egreg | 62,967 | <p>The proof is good, but has several redundant steps.</p>
<blockquote>
<p>Suppose $z$ has finite order. Then there exists an integer $m>0$ such that $z^m=1$. Hence $|z|^m=1$ which implies $|z|=1$.</p>
</blockquote>
<p>No contradiction, but “contrapositive”: if $z$ has finite order, then $|z|=1$.</p>
|
7,237 | <p>This came up in class yesterday, and I feel like my explanation could have been clearer and more rigorous. The students were given the task of finding the solutions of the equation $$6x^2 = 12x$$ and one of the students did $$\frac{6x^2}{6x}=\frac{12x}{6x}$$ $$x = 2$$ which is a valid solution, but this method eliminates the other solution, $$ x = 0.$$ When the student brought it up, I explained that if $x = 0, 2$ and we divide by $6x$, there is a possibility that we would be dividing by $0$, which is undefined. The student, very reasonably, responded "Well, obviously I didn't know that zero was an answer when I was doing the problem". The student understands why we can't divide by $0$ but is still struggling with how that connects to dividing by $x$. I went on to explain that by dividing by $x$ you are "dropping a solution", because the problem, which was quadratic, is now linear. Again, this didn't seem to click with the student. Does anyone have an axiom/law/theorem I can show the student to give a rigorous reason why you can't just divide by $x$?</p>
| Aeryk | 401 | <p>Just before dividing, you can reason "Either $x=0$ or I can divide by $x$." This creates two separate cases to be analyzed.</p>
<p>This works for dividing by anything. You want to divide by $\sin(x)$? You need to make two cases: $\sin(x) \neq 0$ and $\sin(x)=0$. And then analyze each independently.</p>
|
2,638,679 | <p><a href="https://i.stack.imgur.com/S4p0Y.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S4p0Y.jpg" alt="enter image description here"></a></p>
<p>Due apologies for this rustic image. But while drawing this lattice arrangement about the "square numbers" , I discovered a pattern here wherein if I add the alternate red dots (as depicted in the image above) to the square number, I get the next square number. For instance, $4 + 5(red\ dot) = 9$ , $9+7(red\ dot)=16$, $16+9(red\ dot)=25$, $25+11(red\ dot)=36$, $36+13 (red\ dot)=49$.</p>
<p>The red dotted numbers themselves have a pattern, as is obvious from the image. Is there any mathematical explanation for this pattern?</p>
| N. S. | 9,176 | <p>The characteristic polynomial of $A^{-1}$, for any invertible $n \times n$ matrix $A$, is
$$P_{A^{-1}}(x)=\det(xI-A^{-1})=\det(A^{-1}) \det(xA-I)=x^n\det(A^{-1})\det(A-\frac{1}{x}I)\\=(-1)^nx^n \det(A^{-1}) P_{A}(\frac{1}{x})$$</p>
<p>Now use the fact that for a $2\times 2$ matrix the characteristic polynomial is
$$P_B(x)=x^2-\operatorname{tr}(B)x+\det(B)$$</p>
|
121,450 | <p>I am trying to prove that the series <span class="math-container">$\sum \dfrac {1} {\left( m_{1}^{2}+m_{2}^{2}+\cdots +m_{r }^{2}\right)^{\mu} } $</span> in which the summation extends over all positive and negative integral values and zero values of <span class="math-container">$m_1, m_2,\dots, m_r$</span>, except the set of simultaneous zero values, is absolutely convergent if <span class="math-container">$\mu > \dfrac {r} {2}$</span>.</p>
<p>Any help with a proof strategy would be much appreciated.</p>
| Community | -1 | <p>What about comparing with an integral on $\mathbb{R}^r$? And then an appropriate change of variable?</p>
|
3,541,524 | <blockquote>
<p>Decide whether the following is true or false:
<span class="math-container">$$\lvert\arcsin z \rvert \le \left\lvert \frac {\pi z} {2} \right\rvert $$</span>
whenever <span class="math-container">$z\in\Bbb C$</span>.</p>
</blockquote>
<p><span class="math-container">$\arcsin z =-i \text{Log } (\sqrt{1-z^2}+iz)$</span>, </p>
<p><span class="math-container">$\text{Log }z=\log|z|+i\arg z,\arg z\in(-\pi,\pi] $</span></p>
<p>The problem is related to <a href="https://math.stackexchange.com/questions/2533309/show-that-the-series-sum-n-1-infty-textarcsinn-2z-converges-norma">the series <span class="math-container">$\sum_{n=1}^{\infty}\arcsin(n^{-2}z) $</span> converges normally in the whole complex plane</a>. </p>
| River Li | 584,414 | <p><strong>Proof</strong>: We split into four cases:</p>
<p>1) <span class="math-container">$z = x \in (1, +\infty)$</span>: From 4.23.20 in [1], we have
<span class="math-container">$$|\arcsin x| = \sqrt{\frac{1}{4}\pi^2 + \ln^2 (\sqrt{x^2-1} + x)}.\tag{1}$$</span>
It suffices to prove that
<span class="math-container">$$\tfrac{1}{2}\pi\sqrt{x^2-1} - \ln (\sqrt{x^2-1} + x) \ge 0.\tag{2}$$</span>
With the substitution <span class="math-container">$x = \frac{1+u^2}{2u}$</span> for <span class="math-container">$u > 1$</span>, the inequality above becomes
<span class="math-container">$$f(u) = \tfrac{1}{2}\pi \frac{u^2-1}{2u} - \ln u \ge 0, \quad \forall u > 1.\tag{3}$$</span>
We have <span class="math-container">$f'(u) = \frac{\pi (1 + u^2) - 4u}{4u^2} \ge \frac{\pi \cdot 2u - 4u}{4u^2} > 0$</span>.
Also, <span class="math-container">$f(1) = 0$</span>. Thus, <span class="math-container">$f(u) \ge 0$</span> for <span class="math-container">$u > 1$</span>. The inequality is true.</p>
<p>2) <span class="math-container">$z = x\in (-\infty, -1)$</span>: From 4.23.21 in [1], we have
<span class="math-container">$$|\arcsin x| = \sqrt{\frac{1}{4}\pi^2 + \ln^2 (\sqrt{x^2-1} - x)}.\tag{4}$$</span>
From Case 1), the inequality is true. </p>
<p>3) <span class="math-container">$z = x \in [-1, 1]$</span>: It suffices to prove that
<span class="math-container">$$g(u) = \tfrac{1}{2}\pi u - \arcsin u \ge 0, \quad \forall u \in [0, 1].$$</span>
We have <span class="math-container">$g'(u) = \tfrac{1}{2}\pi - \frac{1}{\sqrt{1-u^2}}$</span>.
Denote <span class="math-container">$u_0 = \frac{\sqrt{\pi^2 -4}}{\pi}$</span>.
We know that <span class="math-container">$g(u)$</span> is strictly increasing on <span class="math-container">$[0, u_0)$</span>,
and strictly decreasing on <span class="math-container">$(u_0, 1]$</span>.
Also, <span class="math-container">$g(0) = g(1) = 0$</span>. Thus, <span class="math-container">$g(u) \ge 0$</span> on <span class="math-container">$[0, 1]$</span>. The inequality is true.</p>
<p>4) <span class="math-container">$z = x + y\mathrm{i}$</span> with <span class="math-container">$y \ne 0$</span>:
From 4.23.34 in [1], we have
<span class="math-container">$$|\arcsin z| = \sqrt{\arcsin^2 \beta + \ln^2 (\sqrt{\alpha^2 - 1} + \alpha) }\tag{5}$$</span>
where
<span class="math-container">\begin{align}
\alpha &= \tfrac{1}{2}\sqrt{(x+1)^2 + y^2} + \tfrac{1}{2}\sqrt{(x-1)^2 + y^2}, \tag{6}\\
\beta &= \tfrac{1}{2}\sqrt{(x+1)^2 + y^2} - \tfrac{1}{2}\sqrt{(x-1)^2 + y^2}. \tag{7}
\end{align}</span>
It suffices to prove that
<span class="math-container">$$\tfrac{1}{4}\pi^2 (x^2 + y^2) \ge \arcsin^2 \beta + \ln^2 (\sqrt{\alpha^2 - 1} + \alpha). \tag{8}$$</span></p>
<p>Clearly, we only need to prove the case when <span class="math-container">$x\ge 0$</span> and <span class="math-container">$y > 0$</span>. We have
<span class="math-container">$$x\ge 0, \ y > 0
\quad \Longleftrightarrow \quad
\alpha > 1, \ 0 \le \beta < 1. \tag{9}$$</span>
<em>Proof</em>: “<span class="math-container">$\Longrightarrow$</span>” part is easy.
“<span class="math-container">$\Longleftarrow$</span>” part: Indeed, from (6), (7) and <span class="math-container">$\alpha > 1, \ 0 \le \beta < 1$</span>,
we uniquely obtain <span class="math-container">$x = \alpha \beta$</span> and <span class="math-container">$y = \sqrt{(\alpha^2 - 1) (1-\beta^2)}$</span>.</p>
<p>Also, it is easy to prove that <span class="math-container">$x^2+y^2 = \alpha^2 + \beta^2 - 1$</span>.
Thus, it suffices to prove that for <span class="math-container">$\alpha > 1$</span> and <span class="math-container">$0\le \beta < 1$</span>,
<span class="math-container">$$\tfrac{1}{4}\pi^2 (\alpha^2 + \beta^2 - 1) \ge \arcsin^2 \beta + \ln^2 (\sqrt{\alpha^2 - 1} + \alpha).\tag{10}$$</span>
With the substitutions <span class="math-container">$\alpha = \frac{1+u^2}{2u}$</span> and
<span class="math-container">$\beta = \sin v$</span> for <span class="math-container">$u > 1$</span> and <span class="math-container">$v \in [0, \frac{1}{2}\pi)$</span>,
the inequality above becomes
<span class="math-container">$$\tfrac{1}{4}\pi^2 \Big(\frac{(1+u^2)^2}{4u^2} + \sin^2 v - 1\Big)
\ge v^2 + \ln^2 u, \quad \forall u > 1, \ v \in [0, \tfrac{1}{2}\pi).\tag{11}$$</span>
It is easy to prove that <span class="math-container">$\frac{\pi}{2} \sin v \ge v$</span> for <span class="math-container">$v \in [0, \frac{1}{2}\pi)$</span>.
Thus, it suffices to prove that
<span class="math-container">$$\tfrac{1}{4}\pi^2 \Big(\frac{(1+u^2)^2}{4u^2} - 1\Big)
\ge \ln^2 u, \quad \forall u > 1, \tag{12}$$</span>
or
<span class="math-container">$$\frac{\pi(u^2-1)}{4u} \ge \ln u, \quad \forall u > 1. \tag{13}$$</span>
This has been proved in Case 1) (see (3)). </p>
<p>We are done.</p>
<p><em>Reference</em></p>
<p>[1] <a href="https://dlmf.nist.gov/4.23" rel="nofollow noreferrer">https://dlmf.nist.gov/4.23</a></p>
|
1,747,696 | <p>First of all: beginner here, sorry if this is trivial.</p>
<p>We know that $ 1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2 $ .</p>
<p>My question is: what if instead of moving by 1, we moved by an arbitrary number, say 3 or 11? $ 11+22+33+44+\ldots+11n = $ ?
The way I've understood the usual formula is that the first number plus the last equals the second number plus second to last, and so on.
In this case, this is also true but I can't seem to find a way to generalize it.</p>
| Gottfried Helms | 1,714 | <p>The key here is to look at the constant(!) differences. In $1+2+3+\ldots+n$ the difference is $d=1$, and, calling the first number $a$, the last number is $a+(n-1)d=1+(n-1)=n$. In your next example the difference is $d=11$, and with $a=11$ the last number is $a+(n-1)d=11+(n-1)\cdot 11 = 11n$. The sum can then be computed analogously to the first one, pairing the first and last terms: $11+22+\ldots+11n = \frac{n(11+11n)}{2} = 11\cdot\frac{n(n+1)}{2}$.</p>
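<p>A quick Python sketch (illustrative only; the function name is mine) that checks this pairing formula against a brute-force sum:</p>

```python
def arith_sum(a, d, n):
    """Sum of n terms of the arithmetic progression a, a+d, ..., a+(n-1)d."""
    last = a + (n - 1) * d       # last term of the progression
    return n * (a + last) // 2   # pair first and last terms, n/2 pairs

# Brute-force check for the d = 11 example: 11 + 22 + ... + 11n
for n in range(1, 101):
    brute = sum(11 * k for k in range(1, n + 1))
    assert arith_sum(11, 11, n) == brute == 11 * n * (n + 1) // 2
print("ok")
```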
|
4,045,074 | <p><strong>Let <span class="math-container">$X$</span> be the random variable whose cumulative distribution function is
<span class="math-container">$$
F_X (x) = \begin{cases}
0, & \text{for} \space x\lt 0 \\
\frac{1}{2}, & \text{for} \space 0\le x\le 1 \\
1, & \text{for} \space x\gt 1 \\
\end{cases}.
$$</span>
Let <span class="math-container">$Y$</span> be a random variable independent of <span class="math-container">$X$</span> and uniformly distributed over the interval <span class="math-container">$(0,1)$</span>. Define the random variable <span class="math-container">$Z$</span> as
<span class="math-container">$$
Z = \begin {cases}
X, & \text{if} \space X\le \frac{1}{2} \\
Y, & \text{if} \space X\gt \frac{1}{2} \\
\end{cases}
$$</span>
Determine <span class="math-container">$\mathbb{P} (Z\le \frac{1}{5})$</span>.</strong></p>
<p>I believe that <span class="math-container">$X$</span> only takes the discrete values <span class="math-container">$0$</span> and <span class="math-container">$1$</span> with equal probability, but I'm not entirely sure. By intuition, I think that the answer is <span class="math-container">$\frac{1}{2}$</span>. I'm unsure about this question, so any advice would be appreciated.</p>
| reuns | 276,986 | <p><span class="math-container">$$(1+O(\frac1{\log n}))n\log n = \log n! = \sum_{p^k \le n} \lfloor n/p^k \rfloor\log p $$</span> <span class="math-container">$$= (1+O(\frac1{\log n}))\sum_{p \le n} n \frac{\log p}{p}\tag{1}$$</span>
Followed by a partial summation
<span class="math-container">$$\sum_{p\le n} \frac1p = \frac{\sum_{p\le n} \frac{\log p}p}{\log n}+\sum_{m \le n-1} (\sum_{p\le m} \frac{\log p}p) (\frac1{\log m}-\frac1{\log (m+1)})$$</span>
<span class="math-container">$$ = \frac{(1+O(\frac1{\log n}))\log n}{\log n}+\sum_{m\le n-1}(1+O(\frac1{\log m})) \log m \frac{1+O(\frac1{m})}{m\log^2 m}$$</span> <span class="math-container">$$=\log \log n+C+O(\frac1{\log n})$$</span></p>
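<p>A numeric illustration (a sketch I am attaching, not part of the derivation above): the constant $C$ is Mertens' constant, numerically about $0.2615$, and the partial sums $\sum_{p\le n} 1/p$ track $\log\log n + C$ closely even for modest $n$.</p>

```python
import math

def prime_reciprocal_sum(n):
    """Sum of 1/p over primes p <= n, via a simple sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = 0
    return sum(1.0 / p for p in range(2, n + 1) if is_prime[p])

M = 0.26149  # Mertens' constant, approximate
for n in (10**3, 10**4, 10**5):
    print(n, prime_reciprocal_sum(n), math.log(math.log(n)) + M)
```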
|
1,932,961 | <p>Prove by mathematical induction that
$$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$$
holds $\forall n\in\mathbb{N}$.</p>
<hr>
<p>(1) Assume that $n=1$. Then left side is $1^2 =1$ and right side is $6/6 = 1$, so both sided are equal and expression holds for $n = 1$.</p>
<p>(2) Let $k \in \mathbb{N}$ is given. Assume that for $n = k$ expression holds. Then for $n = k+1$ we get
$$\sum_{i = 1}^{k+1} i^2 = \left(\sum_{i = 1}^{k} i^2\right) + (k+1)^2 = \frac{k(k+1)(2k+1)}{6} + k^2 + 2k + 1 = \frac{2k^3 + 9k^2 + 13k + 6}{6}.$$
Factoring the result we get that $\frac{2k^3 + 9k^2 + 13k + 6}{6} = \frac{(k+1)(k+2)(2k+3)}{6}$ and thus expression holds for $n = k+1$.</p>
<p>Combining (1) and (2) we can conclude that the expression holds $\forall n \in \mathbb{N}$.</p>
<hr>
<p>I have a few questions:</p>
<ol>
<li>Is my proof correct?</li>
<li>If you would be a math professor, is this style of writing math proofs right and sufficient for freshman? Or is there something I miss?</li>
</ol>
| Tintarn | 197,823 | <p>Substitute $a=\frac{x}{x-1}, b=\frac{y}{y-1}, c=\frac{z}{z-1}$.</p>
<p>Then we have $x=\frac{a}{a-1}$ and the similar identities so that the condition implies $abc=(a-1)(b-1)(c-1)$ and hence $ab+ac+bc=a+b+c-1$.</p>
<p>We want to prove $a^2+b^2+c^2 \ge 1$ which is equivalent to $(a+b+c)^2 - 2(ab+ac+bc)-1 \ge 0$ or, using the condition, $(a+b+c)^2 -2(a+b+c)+1 \ge 0$.</p>
<p>But the LHS is $(a+b+c-1)^2$ which is clearly non-negative. Hence the result.</p>
|
104,375 | <p>How am I supposed to transform the following function in order to apply the Laplace transform?</p>
<p>$f(t) = t[u(t)-u(t-1)]+2t[u(t-1) - u(t-2)]$</p>
<p>I know that it has to be like this</p>
<p>$L\{f(t-t_0)u(t-t_0)\} = e^{-st_0}F(s), F(s) = L\{f(t)\}$</p>
| Community | -1 | <p>I'll try to put it this way:</p>
<blockquote>
<p>Define a relation <span class="math-container">$\sim$</span> on <span class="math-container">$\mathbb Z$</span>, such that <span class="math-container">$a \sim b \iff \exists k \in \mathbb Z ~~ \text{such that}~~~~a-b=3k$</span></p>
<p>What does this say?</p>
<p>Integers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are related if and only if their difference is a multiple of <span class="math-container">$3$</span>. Since the remainder of <span class="math-container">$a-b$</span> upon division by <span class="math-container">$3$</span> is the difference of the remainders of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> (all taken <span class="math-container">$\bmod 3$</span>), we conclude:
<em>Integers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are related if and only if they leave the same remainder when divided by <span class="math-container">$3$</span>.</em></p>
</blockquote>
<p>Now try to put all those numbers that are related to each other in the same "cell" and those that are not related in different "cells".</p>
<p>But, now notice that the number of distinct cells you'll need for the purpose is no more than <span class="math-container">$3$</span> and no less! (Why?)</p>
<p>Construct these "cells" to see how they coincide with what you have written down in your class.</p>
<p>And, now call these cells "equivalence classes".</p>
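<p>The partition can be made concrete with a few lines of Python (an illustration I am attaching, built on a small window of integers):</p>

```python
# Build the three "cells" of ~ on a window of integers around 0.
window = range(-9, 10)
cells = {r: [n for n in window if n % 3 == r] for r in range(3)}

for r, members in sorted(cells.items()):
    print(r, members)

# a ~ b  iff  3 divides a - b  iff  a and b land in the same cell:
for a in window:
    for b in window:
        assert ((a - b) % 3 == 0) == (a % 3 == b % 3)
```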
|
3,142,417 | <p>If <span class="math-container">$a , b , c$</span> and <span class="math-container">$d$</span> are positive integers,
and <span class="math-container">$ab$</span> is greater than <span class="math-container">$cd$</span>,
then is it always true that <span class="math-container">$a+b$</span> is greater than or equal to <span class="math-container">$c+d$</span>?</p>
| Yanko | 426,577 | <p>Not at all.</p>
<p>Consider <span class="math-container">$a=b=3$</span> and <span class="math-container">$c=1,d=8$</span>. Then <span class="math-container">$ab=9$</span> while <span class="math-container">$cd=8$</span> however <span class="math-container">$a+b=6$</span> while <span class="math-container">$c+d=9$</span>.</p>
<p>If you relax the positivity requirement and allow <span class="math-container">$c=0$</span>, there are even simpler examples: take any nonzero <span class="math-container">$a,b$</span>, so that <span class="math-container">$ab>0=cd$</span>, and then take <span class="math-container">$d$</span> as large as you like.</p>
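<p>A brute-force search over small positive integers (a Python sketch of mine) shows such counterexamples are plentiful and recovers the one above:</p>

```python
# Find all (a, b, c, d) with 1 <= a,b,c,d <= 8 such that ab > cd yet a+b < c+d.
counterexamples = [
    (a, b, c, d)
    for a in range(1, 9) for b in range(1, 9)
    for c in range(1, 9) for d in range(1, 9)
    if a * b > c * d and a + b < c + d
]
print(len(counterexamples), counterexamples[:3])
assert (3, 3, 1, 8) in counterexamples  # the example given above
```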
|
1,768,700 | <p>According to my knowledge, to prove that $24^{31}$ is congruent to $23^{32}$ mod 19, we must show that both numbers are divisible by 19 i.e. their remainders must be equal with mod 19. Please correct me if I'm wrong.</p>
<p>So, I was able to reduce $23^{32}$ and find its mod 19, which is 17 but I am having a bit of problem with $24^{31}$ since 31 is a prime number and I do not know how to break it down. Please help me with that. </p>
| lhf | 589 | <p>$
24 \equiv 5 \bmod 19
$</p>
<p>$
23 \equiv 4 \bmod 19
$</p>
<p>$
5 \cdot 4 \equiv 1 \bmod 19
$</p>
<p>$
5^{-31} 4^{32} \equiv 4^{31} 4^{32} \equiv 4^{63} \equiv 4^9 = 2^{18} \equiv 1 \bmod 19
$</p>
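<p>This chain of congruences is easy to confirm with Python's built-in three-argument <code>pow</code> (a sanity check I am attaching):</p>

```python
# Both sides reduce to the same residue mod 19, so 24^31 ≡ 23^32 (mod 19).
lhs = pow(24, 31, 19)
rhs = pow(23, 32, 19)
print(lhs, rhs)  # both are 17
assert lhs == rhs

# The intermediate facts used above:
assert 24 % 19 == 5 and 23 % 19 == 4
assert (5 * 4) % 19 == 1              # 4 is the inverse of 5 mod 19
assert pow(4, 63, 19) == 1            # since 4^9 ≡ 1 and 63 = 9·7
assert pow(2, 18, 19) == 1            # Fermat's little theorem
```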
|
965,851 | <p>I have a computer problem that I was able to reduce to an equation in quadratic form, and thus I can solve the problem, but it's a little messy. I was just wondering if anybody sees any tricks to simplify it?</p>
<p>$$\sin^2\beta \cdot d^4 + c^2\left(\cos^2\beta\cdot\cos^2\alpha-\frac{\cos^2\beta}{2}-\frac12\right)d^2 + \sin^2\beta\cdot\frac{c^4}{16} = 0$$</p>
<p>Obviously I am using the quadratic formula to solve for $d^2$.</p>
<p>$\beta$, $\alpha$, and $c$ are known. </p>
<p>That middle term is so ugly. Perhaps this could even be simplified and solved without the quadratic formula? </p>
<p>By dividing through with $\sin^2\beta$ I came up with (assuming I did it correctly, I didn't double check it):</p>
<p>$$d^4 + \left[\cot^2\beta\cos^2\alpha - \frac{\csc^2\beta}{2}-\frac{\cot^2\beta}{2}\right]c^2d^2 + \frac{c^4}{16} = 0$$</p>
<p>Which is a little less ugly. Am I missing a cool trick to simplify this?</p>
| Alexandru Ionescu | 596,292 | <p>We see by observation that <span class="math-container">$x = 2$</span> and <span class="math-container">$x = 4$</span> are clearly solutions. We will prove by induction that <span class="math-container">$2^n > n^2$</span> for all <span class="math-container">$n \ge 5$</span>.</p>
<p>For the base case, <span class="math-container">$n = 5$</span> gives <span class="math-container">$32 > 25$</span>, which is true.</p>
<p>For the inductive case, we know that <span class="math-container">$2^n > n^2$</span> and we want to show that <span class="math-container">$2^{n+1} > (n+1)^2$</span>. We have <span class="math-container">$2^{n+1} > 2n^2$</span>, and because <span class="math-container">$2n^2 = n^2 + n^2 > n^2 + 2n + 1$</span> when <span class="math-container">$n \ge 5$</span> (since <span class="math-container">$n^2 - 2n - 1 = (n-1)^2 - 2 > 0$</span> for <span class="math-container">$n \ge 5$</span>), we also have <span class="math-container">$2^{n+1} > (n+1)^2$</span>, as we wanted to show.</p>
<p>By induction <span class="math-container">$2^n > n^2$</span> for <span class="math-container">$n \ge 5$</span>, and the only solutions are <span class="math-container">$x=2$</span> and <span class="math-container">$x = 4$</span>.</p>
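<p>A quick computational check (an illustration I am attaching; induction is still needed to cover all $n$) of the base case, the inductive inequality, and the two solutions:</p>

```python
# The only positive integers with 2^x = x^2 are x = 2 and x = 4:
solutions = [x for x in range(1, 100) if 2**x == x**2]
print(solutions)  # [2, 4]
assert solutions == [2, 4]

# Base case, the helper inequality n^2 - 2n - 1 > 0, and the claim itself:
assert 2**5 > 5**2
assert all(n * n - 2 * n - 1 > 0 for n in range(5, 100))
assert all(2**n > n**2 for n in range(5, 100))
```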
|
4,450,470 | <p>Let <span class="math-container">$X$</span> be a vector space and let <span class="math-container">$\Gamma \subset X^{\ast}$</span>. We will say that <span class="math-container">$\Gamma$</span> is <strong>total</strong> in <span class="math-container">$X$</span> if <span class="math-container">$f(x)=0$</span>, <span class="math-container">$\forall f \in \Gamma$</span> implies that <span class="math-container">$x=0$</span>. I have to prove that if <span class="math-container">$\Gamma$</span> is total in <span class="math-container">$X$</span> and <span class="math-container">$x_1, x_2, \cdots, x_n$</span> are linearly independent in <span class="math-container">$X$</span>, then there exist <span class="math-container">$f_1, f_2, \cdots, f_n$</span> in <span class="math-container">$\Gamma$</span> such that <span class="math-container">$f_i(x_j)=\delta_{ij}$</span>, where <span class="math-container">$\delta_{ij}$</span> is the Kronecker delta function, that is
<span class="math-container">$$f_i(x_j)=\delta_{ij}=\begin{cases}1 \qquad i=j \\ 0 \qquad i \ne j \end{cases}$$</span></p>
<p><strong>My attempt</strong>. I have to define <span class="math-container">$f_i: X \to \mathbb{K}$</span>, where <span class="math-container">$\mathbb{K}=\mathbb{C}$</span> or <span class="math-container">$\mathbb{R}$</span>. The issue is that I have no guarantee that the space is finite-dimensional; if it were finite-dimensional the problem would be easier, and I'm stuck here. I would really appreciate some help.</p>
| angryavian | 43,949 | <p>Let <span class="math-container">$x_0 := (x_1+x_2)/2$</span> and let <span class="math-container">$h = (x_2-x_1)/2$</span>.
The desired inequality is <span class="math-container">$f(x_0) \le \frac{1}{2}(f(x_0-h) + f(x_0+h))$</span>,
which can be rearranged as
<span class="math-container">$$f(x_0) - f(x_0-h) \le f(x_0 + h) - f(x_0).\tag{$*$}$$</span>
By Taylor's theorem, there exist <span class="math-container">$\xi_-$</span> and <span class="math-container">$\xi_+$</span> in <span class="math-container">$[0, h]$</span> such that
<span class="math-container">$$f(x_0-h) = f(x_0) - h f'(x_0) + \frac{h^2}{2} f''(x_0-\xi_-)$$</span>
and
<span class="math-container">$$f(x_0+h) = f(x_0) + h f'(x_0) + \frac{h^2}{2} f''(x_0+\xi_+).$$</span>
Can you use these two equations to prove the above inequality (<span class="math-container">$*$</span>)?</p>
|
4,450,470 | <p>Let <span class="math-container">$X$</span> be a vector space and let <span class="math-container">$\Gamma \subset X^{\ast}$</span>. We will say that <span class="math-container">$\Gamma$</span> is <strong>total</strong> in <span class="math-container">$X$</span> if <span class="math-container">$f(x)=0$</span>, <span class="math-container">$\forall f \in \Gamma$</span> implies that <span class="math-container">$x=0$</span>. I have to prove that if <span class="math-container">$\Gamma$</span> is total in <span class="math-container">$X$</span> and <span class="math-container">$x_1, x_2, \cdots, x_n$</span> are linearly independent in <span class="math-container">$X$</span>, then there exist <span class="math-container">$f_1, f_2, \cdots, f_n$</span> in <span class="math-container">$\Gamma$</span> such that <span class="math-container">$f_i(x_j)=\delta_{ij}$</span>, where <span class="math-container">$\delta_{ij}$</span> is the Kronecker delta function, that is
<span class="math-container">$$f_i(x_j)=\delta_{ij}=\begin{cases}1 \qquad i=j \\ 0 \qquad i \ne j \end{cases}$$</span></p>
<p><strong>My attempt</strong>. I have to define <span class="math-container">$f_i: X \to \mathbb{K}$</span>, where <span class="math-container">$\mathbb{K}=\mathbb{C}$</span> or <span class="math-container">$\mathbb{R}$</span>. The issue is that I have no guarantee that the space is finite-dimensional; if it were finite-dimensional the problem would be easier, and I'm stuck here. I would really appreciate some help.</p>
| B. S. Thomson | 281,004 | <blockquote>
<p><strong>Definition</strong>. a function <span class="math-container">$f$</span> on an interval <span class="math-container">$(a,b)$</span> is said to be
<strong>midpoint convex</strong> if <span class="math-container">$$f\left(\frac{x_1 + x_2} {2}\right) \le \frac{1}{2}[f(x_1) + f(x_2)] \tag{1}$$</span> for all <span class="math-container">$x_1, x_2 \in (a,b)$</span>.</p>
</blockquote>
<p><em>Equivalently you can replace (1) with (2):</em> <span class="math-container">$$f(x+h) +f(x-h) - 2f(x) \geq 0 \tag{2}$$</span> for all
<span class="math-container">$x,x+h,x-h\in (a,b)$</span>.</p>
<p>As the OP correctly notes, every convex function is midpoint convex, so it is sufficient to rely on any theorem that asserts that a function is convex.</p>
<p>Or...one might prove directly since (as observed in a comment) that defeats the spirit of the problem.</p>
<blockquote>
<p><strong>Problem</strong>. Show [directly] that a differentiable function <span class="math-container">$f$</span> with a nondecreasing derivative <span class="math-container">$f'$</span> is midpoint convex.</p>
</blockquote>
<p>The simplest proof maybe. Consider the function
<span class="math-container">$$t \to \frac{f(x+t) + f(x-t) -2 f(x)}{t}$$</span>
and apply the Cauchy Mean Value theorem [see below] on the interval <span class="math-container">$[0,h]$</span> to obtain <span class="math-container">$\tau \in (0,h)$</span> with
<span class="math-container">$$
\frac{f(x+h) + f(x-h) -2 f(x)}{h} = \frac{f'(x+\tau) - f'(x-\tau)}{1} \geq 0.$$</span>
QED.</p>
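<p>For a concrete sanity check of the conclusion, here is a short Python sketch (an illustration of mine) testing the midpoint inequality for <span class="math-container">$f = \exp$</span>, a differentiable function whose derivative is nondecreasing:</p>

```python
import math
import random

random.seed(1)
f = math.exp  # differentiable, with nondecreasing derivative

for _ in range(10_000):
    x1 = random.uniform(-5.0, 5.0)
    x2 = random.uniform(-5.0, 5.0)
    mid = f((x1 + x2) / 2)
    avg = (f(x1) + f(x2)) / 2
    assert mid <= avg + 1e-12  # small tolerance for floating point
print("midpoint convexity holds for exp on all samples")
```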
<p><em>But I jumped in here with a totally different motive.</em></p>
<p><strong>What about this notion of midpoint convexity.</strong> Is that a thing? If every convex function is midpoint convex, then is every midpoint convex function really just convex?</p>
<ol>
<li><p>No. Not every midpoint convex function is convex.</p>
</li>
<li><p>But every <em>continuous</em> midpoint convex function is convex.</p>
</li>
<li><p>If a midpoint convex function is not convex then it is pretty weird. Blumberg (1919) and Sierpiński (1920) independently proved that every <em>measurable</em> midpoint convex function must be convex.</p>
</li>
<li><p>But there are plenty of nonmeasurable functions that are midpoint convex. All of them are unbounded in every open subinterval, so not your ordinary everyday function.</p>
</li>
<li><p>There is a big literature on midpoint convex functions
which I can only encourage interested parties to consult.</p>
</li>
</ol>
<p><strong>Notes:</strong></p>
<p><em>Calculus students are not provided with many tools. About the only reliable and often-used one is the mean-value theorem. I would suggest that you memorize a small upgrade. The weak version is easy to remember; this is almost as easy.</em></p>
<p><strong>Cauchy's Mean Value Theorem</strong>: Let <span class="math-container">$F,G:R→R$</span> be continuous on <span class="math-container">$[a, b] $</span> and differentiable on <span class="math-container">$(a, b)$</span>. Suppose that <span class="math-container">$G(b)≠G(a)$</span>. Then there exists <span class="math-container">$c∈(a, b)$</span> such that <span class="math-container">$G′(c)≠0$</span> and such that
<span class="math-container">$$\frac{F(b) - F(a)}{G(b) - G(a)} = \frac{F'(c)}{G'(c)}.$$</span></p>
|
3,449,274 | <p>I have the equation <span class="math-container">$2b^2 - 72b - 406=0$</span>. I divided it by 2 and I got <span class="math-container">$b^2 - 36b - 203=0$</span>. My teacher then wrote <span class="math-container">$(b-29)(b-7)=0$</span> but I don’t understand how he got that. When I try to solve that equation I get <span class="math-container">$18 \pm\sqrt{527}$</span>. How did he get <span class="math-container">$29$</span> and <span class="math-container">$7$</span> and how did he factorize that?</p>
| Quanto | 686,284 | <p>Note that the function does not cross the <span class="math-container">$y$</span>-axis due to the singularity at <span class="math-container">$x=0$</span>, which makes <span class="math-container">$f(x)$</span> discontinuous. For <span class="math-container">$x>0$</span>, <span class="math-container">$f(x)$</span> crosses <span class="math-container">$x=1$</span> while still approaching the horizontal asymptote.</p>
<p><img src="https://i.stack.imgur.com/TySm6.jpg" alt="enter image description here"></p>
|
825,703 | <p>I have been working with vector spaces for a while and I now take for granted what a vector space does. I feel like I don't really understand why multiplication and addition must be defined on a vector space. For example, it feels like adding two vectors and having their sum contained within the space is just a name for a vector space, and I don't get what necessarily happens IF the two vectors' sum isn't in the space. In other words, I don't know why addition and multiplication must be defined on a vector space; is it to take advantage of nice properties? Thanks!</p>
| mathematician | 98,943 | <p>There are lots of "natural" spaces that happen to satisfy the properties of a vector space. For example $\mathbb{R}^n$ and $C(\mathbb{R})$. Spaces with this kind of addition and scalar multiplication come up so much that we came up with the abstract definition of a vector space. That way we can just do one proof using abstract vector spaces and be able to apply that to $\mathbb{R}^n$ and $C(\mathbb{R})$. Sometimes we are also interested in topology of a particular space, and for that we use the theory of topological vector spaces/Banach spaces. So my point is, a vector space has addition and multiplication because those are the things that we are actually interested in.</p>
|
1,737,674 | <p>I am trying to understand how to find all congruence classes in $\mathbb{F}_2[x]$ modulo $x^2$. How can I compute them? Can someone get me started with this? I am having trouble understanding $\mathbb{F}_2[x]$: is it the set $\{ f(x) = a_nx^n + \cdots + a_1 x + a_0 : a_i = 0,1 \}$?</p>
| Ángel Mario Gallegos | 67,622 | <p>If you only want the measure of the angle (and will afterwards deduce the rest with a geometric/trigonometric procedure), Geogebra gives the following:</p>
<p><a href="https://i.stack.imgur.com/pZQlP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pZQlP.png" alt="enter image description here"></a></p>
|
3,027,286 | <p>I am a little confused as to proving that <span class="math-container">$(C^*)^{-1} = (C^{-1})^*$</span> where <span class="math-container">$C$</span> is an invertible matrix which is complex. </p>
<p>Initially, I thought that it would have something to do with the identity matrix, where <span class="math-container">$CC^{-1}=C^{-1}C = I$</span>, but I don't seem to be getting anywhere with that. </p>
<p>Thank you! </p>
| Sri-Amirthan Theivendran | 302,692 | <p>The statement is false. Put
<span class="math-container">$$
p_j=\frac{6}{\pi^2}\frac{1}{j^2}\quad (j\geq 1)
$$</span>
where the constant is for normalization. Let <span class="math-container">$N$</span> be distributed according to this pmf. Then
<span class="math-container">$$
\sum_{j=1}^\infty jp_j=EN=\frac{6}{\pi^2}\sum_{j=1}^\infty\frac{1}{j}=\infty
$$</span></p>
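<p>A numeric sketch (an illustration I am attaching) showing both that the <span class="math-container">$p_j$</span> sum to <span class="math-container">$1$</span> and that the partial sums of <span class="math-container">$j\,p_j$</span> grow without bound, like a multiple of the harmonic series:</p>

```python
import math

c = 6 / math.pi**2
p = lambda j: c / j**2  # the pmf from above

total = sum(p(j) for j in range(1, 10**6))
print(total)  # close to 1; the tail beyond N is O(1/N)

partial_mean = lambda N: sum(j * p(j) for j in range(1, N + 1))
for N in (10**2, 10**4, 10**6):
    print(N, partial_mean(N))  # grows roughly like (6/pi^2) * ln N
```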
|
4,271,909 | <p>I'm struggling with improper integrals (Calc I). I've calculated the following:</p>
<p>If <span class="math-container">$a = 0$</span>:
<span class="math-container">$$\int_{0}^{\infty}\cos(x)dx =\lim\limits_{R\to\infty} \int_{0}^{R}\cos(x)dx =\lim\limits_{R\to\infty} \sin(R) $$</span>
Which diverges?</p>
<p>If <span class="math-container">$a > 0$</span>:</p>
<p><span class="math-container">$$\int_{0}^{\infty}e^{-ax}\cos(x)dx = \lim\limits_{R\to\infty}\int_{0}^{R}e^{-ax}\cos(x)dx = \lim\limits_{R\to\infty}\left[\frac{a}{a+1}+\left(\frac{\sin(R)-a\cos(R)}{a^{2}+1}\right)e^{-aR}\right] $$</span>
But what happens to this expression? Does it equal <span class="math-container">$\frac{a}{a+1}$</span>, and how can I decide the greatest area from this? (That is if the expression I got is correct...)</p>
| Jean Marie | 305,862 | <p>Let us write this integral under the more general form:</p>
<p><span class="math-container">$$\int_{0}^{\infty}e^{-sx}\cos(Ax)\,dx \ \text{with} \ A=1$$</span></p>
<p>This integral is classical : it is the Laplace Transform (see formula 8 <a href="https://tutorial.math.lamar.edu/pdf/Laplace_Table.pdf" rel="nofollow noreferrer">here</a>) of <span class="math-container">$\cos(At)$</span> for <span class="math-container">$A=1$</span>, i.e. can be expressed as</p>
<p><span class="math-container">$$F(s)=\dfrac{s}{s^2+A^2}=\dfrac{s}{s^2+1}$$</span></p>
<p>which is maximal at the value <span class="math-container">$s=1$</span> that annihilates <span class="math-container">$F'(s)=\frac{1-s^2}{(s^2+1)^2}$</span> (<span class="math-container">$F$</span> is increasing before <span class="math-container">$s=1$</span> and decreasing for <span class="math-container">$s>1$</span>).</p>
<p>For this value <span class="math-container">$s=1$</span>, we have</p>
<p><span class="math-container">$$F(1)=\dfrac{1}{2}$$</span></p>
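<p>As a numerical cross-check (a Python sketch I am attaching; the cutoff and step size are illustrative choices, and the tail beyond the cutoff is negligible for the <span class="math-container">$s$</span> values used):</p>

```python
import math

def laplace_cos(s, T=60.0, steps=120_000):
    """Trapezoidal approximation of the integral of e^{-s x} cos(x) over [0, T]."""
    dx = T / steps
    total = 0.5 * (1.0 + math.exp(-s * T) * math.cos(T))  # endpoint terms; integrand(0) = 1
    total += sum(math.exp(-s * k * dx) * math.cos(k * dx) for k in range(1, steps))
    return total * dx

for s in (0.5, 1.0, 2.0):
    exact = s / (s * s + 1)
    print(s, laplace_cos(s), exact)  # numeric and closed form agree

# F(s) = s/(s^2+1) is maximized at s = 1, where F(1) = 1/2:
assert abs(laplace_cos(1.0) - 0.5) < 1e-3
```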
|
4,215,724 | <p><span class="math-container">$f\colon \mathbb{R}^2\to \mathbb{R}$</span> such that <span class="math-container">$f_x(2,-1)=1$</span> and <span class="math-container">$f_y(2,-1)=1$</span> and <span class="math-container">$g(x,y)=\langle x^2y,x-y\rangle$</span> and <span class="math-container">$h = f\circ g$</span> then find <span class="math-container">$h_y(1,2)$</span>. The options given are <span class="math-container">$-2,2,0,5,-5,-3,10,3$</span></p>
<p><strong>My Attempt</strong></p>
<p><span class="math-container">$$
h_y(1,2)=(f\circ g)_y(x,y)\Big|_{x=1,y=2}=f_y(g(1,2))\cdot g_y(1,2)=f_y(2,-1)\cdot g_y(1,2)\\=1 \cdot g_y(1,2)
$$</span></p>
<p>Is this the right way towards finding the solution? How do I find <span class="math-container">$g_y(1,2)$</span>?</p>
| Dhanvi Sreenivasan | 332,720 | <p>Since <span class="math-container">$f$</span> is a multivariable function, and <span class="math-container">$g$</span> is a vector function, we can write it as</p>
<p><span class="math-container">$$\frac{\partial h}{\partial y} = \nabla f \cdot \frac{\partial \vec{g}}{\partial y}$$</span></p>
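<p>To make this concrete, here is a hedged Python sketch (an illustration of mine): the particular <code>f</code> below is only a sample choice satisfying <span class="math-container">$f_x(2,-1)=f_y(2,-1)=1$</span>, since the problem does not pin <span class="math-container">$f$</span> down; any such <span class="math-container">$f$</span> gives the same value by the chain rule.</p>

```python
# Sample f with f_x = f_y = 1 everywhere (so in particular at (2, -1)):
f = lambda u, v: u + v
g = lambda x, y: (x**2 * y, x - y)
h = lambda x, y: f(*g(x, y))

# g(1, 2) = (2, -1), exactly the point where the partials of f are given.
assert g(1, 2) == (2, -1)

# Chain rule: h_y = f_x * d(x^2 y)/dy + f_y * d(x - y)/dy = 1*x^2 + 1*(-1).
eps = 1e-6
h_y = (h(1, 2 + eps) - h(1, 2 - eps)) / (2 * eps)  # central difference
print(round(h_y, 6))  # 0.0, matching 1*1 + 1*(-1) = 0
```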
|
2,264,614 | <p>Is there a way to evaluate, </p>
<p>$$
\large \cos x \cdot \cos \frac{x}{2} \cdot \cos \frac{x}{4} ... \cdot \cos \frac{x}{2^{n-1}} \tag*{(1)}
$$</p>
<p>I asked this to one of my teachers and what he told is something like this, </p>
<p>Multiply and divide the last term of $(1)$ with $\boxed{\sin \frac{x}{2^{n-1}}}$</p>
<p>So,
$$
\large \frac{\cos \frac{x}{2^{n-1}} \cdot \sin \frac{x}{2^{n-1}}}{\sin \frac{x}{2^{n-1}}} \\ \tag*{(2)}
$$
$$
\large \implies \frac{\sin (2x)}{2^n \cdot \sin \frac{x}{2^{n-1}}} \\
$$
$$
\large \implies \frac{\sin (2x)}{2^n \cdot \frac{\sin \frac{x}{2^{n-1}}}{\frac{x}{2^{n-1}}} \cdot \frac{x}{2^{n-1}}} \tag*{(3)}
$$</p>
<p>Now, as $n \to \infty$, we have $\frac{x}{2^{n-1}} \to 0$.
Recall the standard limit
<p>$$
\boxed{ \lim_{x \to 0} \frac{\sin x}{x} = 1}
$$</p>
<p>Using this in $(3)$, we have,
$$
\large \boxed{\frac{\sin (2x)}{2x}} \tag*{(4)}
$$</p>
<p>All the steps <strong>sort of</strong> make sense. My doubts are, </p>
<ol>
<li>How do I do this for other trigonometric ratios?</li>
<li>How does the step 2 happen? </li>
</ol>
<p>I need help looking into it more intuitionally. </p>
<p>Please provide necessary reading suggestions. </p>
<p>Regards.</p>
| Doug M | 317,162 | <p>How does step 2 happen:</p>
<p>multiplying the last factor by <span class="math-container">$\frac {\sin \frac{x}{2^{n-1}}}{\sin \frac{x}{2^{n-1}}}$</span></p>
<p>gives us</p>
<p><span class="math-container">$\cos x \cdot \cos \frac x2\cdots \cos \frac x{2^{n-2}}\cdot \frac{\cos \frac{x}{2^{n-1}} \cdot \sin \frac{x}{2^{n-1}}}{\sin \frac{x}{2^{n-1}}}$</span></p>
<p>Double angle formula.</p>
<p><span class="math-container">$\cos \frac{x}{2^{n-1}} \cdot \sin \frac{x}{2^{n-1}} = \frac 12 \sin \frac{x}{2^{n-2}}$</span></p>
<p>Applying this we get:</p>
<p><span class="math-container">$\cos x \cdot \cos \frac x2\cdots \cdot \frac{\cos \frac x{2^{n-2}}\cdot\sin \frac{x}{2^{n-2}}}{2\sin \frac{x}{2^{n-1}}}$</span></p>
<p>And we can apply the double angle formula again. And do it repeatedly until all of the <span class="math-container">$\cos$</span> factors have been devoured.</p>
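<p>The telescoping can also be verified numerically; a Python sketch (an illustration I am attaching) comparing the product with the closed form for a few values:</p>

```python
import math

def cos_product(x, n):
    """cos(x) * cos(x/2) * ... * cos(x/2^(n-1))."""
    prod = 1.0
    for k in range(n):
        prod *= math.cos(x / 2**k)
    return prod

def closed_form(x, n):
    """sin(2x) / (2^n * sin(x / 2^(n-1))), from repeated double-angle steps."""
    return math.sin(2 * x) / (2**n * math.sin(x / 2**(n - 1)))

for x in (0.3, 0.7, 1.2):
    for n in (1, 5, 10):
        assert abs(cos_product(x, n) - closed_form(x, n)) < 1e-12
print("identity verified on samples")
```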
|
401,002 | <p>$\forall x \neg A \implies \neg \exists xA$<br>
I won't ask you to solve this for me, but can you please give some guidelines on how to approach a proof in NDFOL?<br>
There are many tricks that the TA shows in class, that I could not dream of...</p>
<p>P.S. I managed to prove $\neg \exists xA \implies \forall x \neg A$ but could not get on from there.<br>
Thanks!</p>
<hr>
<p>After the proposed answer, let me see if I got this correct: </p>
<ol>
<li>$\exists x A \implies \exists x A$ (<strike>axiom</strike> assumption)</li>
<li>$\exists x A \implies A$ (from 1)</li>
<li>$\exists x A, \forall x \neg A \implies \forall x \neg A$ (<strike>axiom</strike> assumption)</li>
<li>$\exists x A, \forall x \neg A \implies \neg A$ ($\forall$ extraction, from 3)</li>
<li>$\forall x \neg A \implies \neg \exists x A$ (from 2,4)</li>
</ol>
<p>Am I correct?<br>
I could not understand the justification going from (1) to (2)</p>
| Lord_Farin | 43,351 | <p>To prove an implication, the general guideline for constructing formal proofs is: Assume the premise and the negation of the consequence, and derive a contradiction.</p>
<p>In your present case:</p>
<hr>
<p>Assume $\exists x A(x)$. By Existential Instantiation, we have $A(t)$ for some (unspecified but fixed) $t$.</p>
<p>Assume $\forall x \neg A(x)$. By Universal Instantiation, we have $\neg A(t)$.</p>
<p>Now $A(t)$ and $\neg A(t)$ combine into a contradiction $\bot$.</p>
<p>We use Negation Introduction on the open assumption $\exists x A(x)$ to conclude $\neg \exists x A(x)$.</p>
<p>Finally, by Implication Introduction, we conclude the desired $\forall x \neg A(x) \implies \neg \exists x A(x)$ holds without any assumption.</p>
<p>Q.E.D.</p>
<hr>
<p><em>Addressing OP's efforts:</em></p>
<p>As you see, there is a difference in our notations. I have parametrised $A$ as $A(x)$, while you haven't. However, this is <em>crucial</em> for the inference of $(2)$ from $(1)$. The expression $A(t)$ contains a $t$, which we can think of as an arbitrary "witness" of $\exists x A(x)$. It is <em>this witness</em> $t$ we apply Universal Instantiation / $\forall$ Extraction to: Since $\forall x \neg A(x)$, in particular $\neg A(t)$, where $t$ is the witness to $\exists x A(x)$.</p>
<p>This working with witnesses requires some practice, and even then one can sometimes mix things up. They are however crucial for the validity of the reasoning, so be sure to try and derive some more "trivialities" containing existential quantifiers!</p>
<p>A final remark (inspired by the comment by Peter Smith) is that in place of where you wrote "axiom", it's better to write one of "assumption" or "hypothesis", because these words have different meanings in mathematical lingo.</p>
|
3,098,838 | <blockquote>
<p>The displacement of a particle varies according to <span class="math-container">$x=3(\cos t +\sin t)$</span>.
Then find the amplitude of the oscillation of the particle.</p>
</blockquote>
<p>Can someone kindly explain the concept of amplitude and oscillation and how to solve it?</p>
<p>Any hints for solving the problem would be helpful.</p>
| David Holden | 79,543 | <p><span class="math-container">$$
ax^2 + 2bxy + dy^2 = a\left(x + \frac{by}{a}\right)^2 + \frac1a(ad-b^2)y^2
$$</span></p>
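<p>A quick numeric check (a sketch I am attaching) of the completed-square identity, with the remainder coefficient <span class="math-container">$ad - b^2$</span> (note the <span class="math-container">$d$</span>, matching the <span class="math-container">$dy^2$</span> term of the quadratic):</p>

```python
import random

random.seed(2)
for _ in range(1000):
    a = random.choice([v for v in range(-9, 10) if v != 0])  # need a != 0
    b, d = random.randint(-9, 9), random.randint(-9, 9)
    x, y = random.randint(-9, 9), random.randint(-9, 9)
    lhs = a * x * x + 2 * b * x * y + d * y * y
    rhs = a * (x + b * y / a) ** 2 + (a * d - b * b) * y * y / a
    assert abs(lhs - rhs) < 1e-8
print("identity holds on all samples")
```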
|
254,695 | <p>The concept of dimension seems to be:</p>
<blockquote>
<p>In physics and mathematics, the dimension of a space or object is
informally defined as the minimum number of coordinates needed to
specify any point within it.</p>
</blockquote>
<p>According to <a href="http://en.wikipedia.org/wiki/Dimension_%28mathematics_and_physics%29" rel="nofollow">wikipedia.</a> But the fourth dimension seems to have also some kind of connection with <a href="http://en.wikipedia.org/wiki/Spacetime" rel="nofollow">spacetime</a> which seems to be related to <a href="http://en.wikipedia.org/wiki/Minkowski_space" rel="nofollow">Minkowski spaces</a>. I want to understand dimensionality and also learn about these issues about spacetime.</p>
<p>I'm searching for references on what I should read in order to understand this, I'm searching for a serious way (no pop-science) to understand it, I'm searching for a list of topics and also some recomendations on textbooks for it.</p>
<p>I hope I'm not asking too much, but I'm very curious to grasp this subject. I'm also realistic on this, I do not think it's something easy of fast to learn. </p>
<p><strong>EDIT</strong>: I've found some books on Minkowski spaces, I guess I'll be able to understand from those in the near future.</p>
| Wolphram jonny | 43,048 | <p>It is easy to be confused because of the different meanings often given to the "fourth dimension". In the simplest and most natural case, the fourth dimension is just another spatial dimension, and you can have as many dimensions as you want. An introductory book on linear algebra should make it easy to understand (look for metric spaces in particular). This fourth dimension has nothing to do with time. But...</p>
<p>In physics, relativity theory defines a relationship between space and time, in the sense that space and time can be interchanged if two observers move relative to each other. The equations for this relationship resemble those of metric spaces: they look as if time behaves like an "imaginary" (because it involves imaginary numbers) extra spatial dimension. In the end, the mathematical structure can be described as something called Minkowski space, although it does not behave like an actual four-dimensional spatial space. Because of this relationship between time and space, people usually consider time as the fourth dimension, but it is not a good analogy. A fourth spatial dimension would behave very differently: moving into a 4D space for somebody inhabiting 3D space would be equivalent to moving into 3D space for somebody living on a 2D planar surface. You can read about hypercubes to get a better intuition.</p>
<p>But remember: don't mix up extra spatial dimensions with time as a fourth dimension; they are completely different things.</p>
|
254,695 | <p>The concept of dimension seems to be:</p>
<blockquote>
<p>In physics and mathematics, the dimension of a space or object is
informally defined as the minimum number of coordinates needed to
specify any point within it.</p>
</blockquote>
<p>According to <a href="http://en.wikipedia.org/wiki/Dimension_%28mathematics_and_physics%29" rel="nofollow">wikipedia.</a> But the fourth dimension seems to have also some kind of connection with <a href="http://en.wikipedia.org/wiki/Spacetime" rel="nofollow">spacetime</a> which seems to be related to <a href="http://en.wikipedia.org/wiki/Minkowski_space" rel="nofollow">Minkowski spaces</a>. I want to understand dimensionality and also learn about these issues about spacetime.</p>
<p>I'm searching for references on what I should read in order to understand this; I'm searching for a serious way (no pop-science) to understand it; I'm searching for a list of topics and also some recommendations on textbooks for it.</p>
<p>I hope I'm not asking too much, but I'm very curious to grasp this subject. I'm also realistic about this: I do not think it's something easy or fast to learn. </p>
<p><strong>EDIT</strong>: I've found some books on Minkowski spaces, I guess I'll be able to understand from those in the near future.</p>
| Ross Millikan | 1,827 | <p>There are two different concepts of the fourth dimension in what you talk about. Four dimensional Euclidean space, $\mathbb R^4$ has four equivalent coordinates. To find the distance between two points, you sum the squares of the differences of coordinates and take the square root. Many things are similar to your experience in $\mathbb R^3$</p>
<p>Spacetime is a different creature. The time axis is fundamentally different from the three space axes. Your normal experience should tell you this, and it doesn't lie. Mathematically it is because the "metric" has a minus sign on the difference in time: $d^2=\Delta x^2+\Delta y^2+\Delta z^2 -c^2\Delta t^2$. This means "distances" are not positive definite. Anywhere a light ray goes is zero distance. If the distance between two points is greater than zero, they can't influence each other and there is a reference frame where they happen at the same time. If the distance is less than zero, one precedes the other, and does so in all reference frames.</p>
|
<p>I would like to determine whether the following series is absolutely convergent or not. I'm not sure how to begin in general. I would say no, because when taking the absolute value of the fraction and adding all of the terms together the series doesn't converge... could someone give me a general road map for how to manage this?</p>
<p>$$\sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$$</p>
| Lost1 | 44,877 | <p>Do a comparison: the sum of the absolute values of your terms against $\sum \frac{1}{3n}$. Note that the latter diverges; why?</p>
|
<p>I would like to determine whether the following series is absolutely convergent or not. I'm not sure how to begin in general. I would say no, because when taking the absolute value of the fraction and adding all of the terms together the series doesn't converge... could someone give me a general road map for how to manage this?</p>
<p>$$\sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$$</p>
| user76568 | 74,917 | <p>By definition, a series $\sum_{n=1}^{\infty}c_n$ is absolutely convergent when $\sum_{n=1}^{\infty}|c_n|$ converges. </p>
<p>So,we compare $|a_n|=|\frac{(-1)^n}{2n+1}|=\frac{1}{2n+1}$ with $b_n=\frac{1}{n}$:
$$\lim_{n \to \infty}\frac{b_n}{|a_n|}=\lim_{n \to \infty}\frac{2n+1}{n}=\lim_{n \to \infty}(2+\frac{1}{n})=2.$$
We know the harmonic series diverges, and conclude by comparison that $\sum_{n=0}^\infty |a_n|=1+\sum_{n=1}^\infty |a_n|$ diverges. </p>
<p>More generally, for any $a,b,c \in \mathbb{N}$:
$$\sum_{n=1}^{\infty}\frac{a}{bn+c}$$ diverges.</p>
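A quick numeric illustration (not a proof; plain Python, helper name `partial_sum` is mine) of how slowly these partial sums diverge — logarithmically, like the harmonic series:

```python
# Partial sums of sum_{n>=0} 1/(2n+1) grow without bound, roughly like
# (1/2)*ln(n); the gap between N=10^4 and N=10^6 terms is about ln(10).
import math

def partial_sum(N):
    """Sum of 1/(2n+1) for n = 0..N-1."""
    return sum(1.0 / (2 * n + 1) for n in range(N))

s4 = partial_sum(10**4)
s6 = partial_sum(10**6)
gap = s6 - s4  # close to (1/2)*ln(10**6 / 10**4) = ln(10) ~ 2.3026
```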
|
3,466,680 | <p>I'm solving a problem in ODE:</p>
<blockquote>
<p>Solve in <span class="math-container">$\left (-\dfrac{\pi}{2},\dfrac{\pi}{2} \right )$</span> the ODE <span class="math-container">$y''(t) \cos t + y (t) \cos t=1$</span></p>
</blockquote>
<p>In my lecture, we are given three theorems:</p>
<blockquote>
<p><span class="math-container">$\textbf{Theorem 1} \quad$</span> If <span class="math-container">$y_1$</span> and <span class="math-container">$y_2$</span> are two linearly independent solutions to <span class="math-container">$y''+ay'+by=c$</span>. Then</p>
<p>i) The system <span class="math-container">$$\begin{bmatrix}y_1 & y_2 \\ y_1' & y_2' \\ \end{bmatrix} \begin{bmatrix}h \\ k \\ \end{bmatrix} = \begin{bmatrix}0 \\ c \\ \end{bmatrix}$$</span> (in which the unknown functions are <span class="math-container">$h$</span> and <span class="math-container">$k$</span>) has a unique solution.</p>
<p>ii) All the solutions <span class="math-container">$s$</span> are of the form <span class="math-container">$t \mapsto H y_1 +K y_2$</span> where <span class="math-container">$H,K$</span> are anti-derivatives of <span class="math-container">$h$</span> and <span class="math-container">$k$</span>.</p>
</blockquote>
<p>and</p>
<blockquote>
<p><span class="math-container">$\textbf{Theorem 2} \quad$</span> Homogeneous case</p>
<p>If we can find a solution <span class="math-container">$y_1$</span> to <span class="math-container">$y''+ay'+by=0$</span>, then we can determine another solution <span class="math-container">$y_2$</span> by using undetermined constant method to look for a solution of the form <span class="math-container">$y_2 = \lambda y_1$</span> in which <span class="math-container">$\lambda$</span> is a function.</p>
</blockquote>
<p>and</p>
<blockquote>
<p><span class="math-container">$\textbf{Theorem 3} \quad$</span> Superposition Principle</p>
<p>Consider <span class="math-container">$y''+ay'+by= c_i \quad (E_i)$</span> in which <span class="math-container">$c_i$</span> are functions. If <span class="math-container">$y_i$</span> is the solution to <span class="math-container">$(E_i)$</span> then <span class="math-container">$\sum_{i=1}^n \alpha_i y_i$</span> is the solution to <span class="math-container">$y''+ay'+by=\sum_{i=1}^n \alpha_i c_i$</span>.</p>
</blockquote>
<hr />
<p>I'm unable to apply those theorems to solve this ODE. Unfortunately, my professor's never solved an example with non-constant coefficients in class. The lectures are very likely to contain typos. I'm sorry for that because I'm unable to recognize them.</p>
<p>Could you please elaborate on how to solve this ODE?</p>
| nmasanta | 623,924 | <p><span class="math-container">\begin{equation} y''(t) \cos t + y (t) \cos t=1\\
\implies y''(t) + y (t)=\sec t\\
\implies (D^2+1)y=\sec t\tag1
\end{equation}</span>
where <span class="math-container">$~D\equiv \dfrac{d}{dt}~$</span></p>
<p>Let <span class="math-container">$~y=e^{mt}~$</span> be solution of <span class="math-container">$$(D^2+1)y=0\tag2$$</span> Putting the value of <span class="math-container">$y$</span> in <span class="math-container">$(2)$</span> we have <span class="math-container">$$m^2+1=0\implies m=\pm ~i$$</span></p>
<p>So solution of equation <span class="math-container">$(2)$</span> is <span class="math-container">$$y=A\cos t +B\sin t$$</span>where <span class="math-container">$~A,~B~$</span>are constant of integration.</p>
<p>Here <strong>Complementary function</strong> of equation <span class="math-container">$(1)$</span> is <span class="math-container">$$y_c=A\cos t +B\sin t$$</span>where <span class="math-container">$~A,~B~$</span>are constant of integration.</p>
<p>Now for <strong>particular integral</strong> <span class="math-container">$y_p$</span>,<br></p>
<p>let <span class="math-container">$~u=\cos t~$</span>, <span class="math-container">$~v=\sin t~$</span></p>
<p>Here <span class="math-container">$$W=\begin{vmatrix}
u & v \\
u' & v'
\end{vmatrix}=\begin{vmatrix}
\cos t & \sin t \\
-\sin t & \cos t
\end{vmatrix}=1\ne 0$$</span>
Then <span class="math-container">$$y_p=uf(t)+vg(t)$$</span>
where <span class="math-container">$$f(t)=-\int \dfrac{vR}{W}~dt=-\int \sin t\cdot \sec t~dt=\log(\cos t)$$</span>
<span class="math-container">$$g(t)=\int \dfrac{uR}{W}~dt=\int \cos t \cdot\sec t~dt=t$$</span>
<span class="math-container">$R~:~$</span>the non-homogeneous part of equation <span class="math-container">$(1)$</span>, namely <span class="math-container">$\sec t$</span>.</p>
<p>Hence <span class="math-container">$$y_p=\cos t~\log(\cos t)+t~\sin t $$</span></p>
<p>Therefore the general solution of equation <span class="math-container">$(1)$</span> is <span class="math-container">$$y=y_c+y_p$$</span>
<span class="math-container">$$\implies y= A\cos t +B\sin t+\cos t~\log(\cos t)+t~\sin t$$</span>where <span class="math-container">$~A,~B~$</span>are constant of integration to be determined by the given condition.</p>
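A symbolic sanity check of this general solution, assuming sympy is available:

```python
# Verify that y = A*cos(t) + B*sin(t) + cos(t)*log(cos(t)) + t*sin(t)
# satisfies y'' + y = sec(t), i.e. the original y'' cos t + y cos t = 1.
import sympy as sp

t, A, B = sp.symbols('t A B')
y = A*sp.cos(t) + B*sp.sin(t) + sp.cos(t)*sp.log(sp.cos(t)) + t*sp.sin(t)
residual = sp.simplify(sp.diff(y, t, 2) + y - sp.sec(t))  # should be 0
```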
|
462,983 | <h2>The Question:</h2>
<p>This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:</p>
<blockquote>
<p>let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$. That is, prove that a matrix commutes with its inverse, that the left-inverse is also the right-inverse</p>
</blockquote>
<h2>My thoughts so far:</h2>
<p>This is particularly annoying to me because it seems like it should be easy.</p>
<p>We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have
$$
\left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right)
$$
But is that enough?</p>
<p>It might help to talk about <a href="http://en.wikipedia.org/wiki/Generalized_inverse" rel="nofollow">generalized inverses</a>.</p>
| Hagen von Eitzen | 39,174 | <p>Assume $A,B$ are $n\times n$ matrices with $BA=I$.
Let $\alpha\colon V\to V$ with $V=K^n$ be the endomorphism described by $A$ and similarly with $\beta$ for $B$. Then we are given that $\beta\circ\alpha=\operatorname{id}_V$, hence $\alpha$ is injective. The image of the standard basis of $V$ is therefore a linearly independent family of $n$ vectors in $V$, hence is in fact a basis, hence $\alpha$ is also surjective. Thus for any $v\in V$, we can find $w\in V$ with $v=\alpha w$ and then we have $\alpha\beta v=\alpha\beta\alpha w=\alpha w=v$, i.e. $\alpha\beta=\operatorname{id}_V$. Translated back to the matrices this means $AB=I$.
Note that it was essential that $\dim V<\infty$.</p>
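A numeric illustration of the statement, assuming numpy is available: solve BA = I for B, then observe that AB = I comes along for free.

```python
# A left inverse of a square matrix is automatically a right inverse.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))  # almost surely invertible

# B*A = I  <=>  A^T * B^T = I, so solve the transposed system for B^T.
B = np.linalg.solve(A.T, np.eye(n)).T

left = B @ A   # = I by construction
right = A @ B  # = I by the theorem
```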
|
<p>I have the function <span class="math-container">$f(x)= \frac{\sqrt{x^2-1}}{x+\log x}$</span> in the set <span class="math-container">$E=[1,+ \infty)$</span> and I have to discuss the uniform continuity of f in E.</p>
<p>I've calculated the derivative <span class="math-container">$y'$</span> and it tends to <span class="math-container">$0$</span> as <span class="math-container">$x$</span> tends to <span class="math-container">$\infty$</span></p>
<p>Can this fact be used to prove that <span class="math-container">$f(x)$</span> is uniformly continuous in <span class="math-container">$E$</span>?</p>
| Peter | 82,961 | <p>This site</p>
<p><a href="https://en.wikipedia.org/wiki/Linnik%27s_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Linnik%27s_theorem</a></p>
<p>shows what has been found out for the least prime in an arithmetic progression. The conjecture <span class="math-container">$$p(a,d)<d^2$$</span> would imply that you can always find a prime this way, but this has not been unconditionally proven.</p>
<p>For Goldbach's conjecture we need two numbers which are simultaneously prime, so even if we can always find primes this way, there is no guarantee that they can be summed up to a given even number. I cannot see a way to use this approach for solving Goldbach's conjecture.</p>
|
<p>I have the function <span class="math-container">$f(x)= \frac{\sqrt{x^2-1}}{x+\log x}$</span> in the set <span class="math-container">$E=[1,+ \infty)$</span> and I have to discuss the uniform continuity of f in E.</p>
<p>I've calculated the derivative <span class="math-container">$y'$</span> and it tends to <span class="math-container">$0$</span> as <span class="math-container">$x$</span> tends to <span class="math-container">$\infty$</span></p>
<p>Can this fact be used to prove that <span class="math-container">$f(x)$</span> is uniformly continuous in <span class="math-container">$E$</span>?</p>
| Community | -1 | <p><span class="math-container">$$n\equiv-(x^{-1})\bmod p$$</span> are knocked out as lead coefficients. They create a number divisible by p any time p is not a factor of x. That means just 57 survive for x=100, p<10 and only 23 survive that for x=101 . Accounting for overlap is really the only tricky part for me. I count it wrong without code. </p>
|
838,690 | <p>True or false question</p>
<p>If B is a subset of A then {B} is an element of power set A. </p>
<p>I think this is true.</p>
<p>Because if B is {1,2}, say, and A is {1,2,3}, then the power set of A includes </p>
<p>$\{\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{3,2\},\{1,2,3\},\emptyset\}$</p>
<p>Unless {B} means $\{\{1,2\}\}$</p>
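A quick concrete check with Python frozensets, treating {B} literally as the one-element set whose member is B (an assumption about the intended reading; the `powerset` helper is mine):

```python
# For A = {1,2,3} and B = {1,2}: B itself lies in P(A) since B is a
# subset of A, but the literal set {B} does not, because {B} being a
# subset of A would require B to be an element of A.
from itertools import combinations

A = frozenset({1, 2, 3})
B = frozenset({1, 2})

def powerset(s):
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

PA = powerset(A)
b_in_PA = B in PA                      # B is a subset of A
brace_b_in_PA = frozenset({B}) in PA  # {B} read literally
```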
| Muhammad Kashif | 269,168 | <p>I think it is true... because in the asked question $B$ is in the power set of $A$. It means all the elements of $B$ exist in $A$, as set $B$ is a subset of set $A$. So the power set of $A$ will also contain all the subsets of set $B$.</p>
|
547,971 | <p>I have to show that for $f,g$ analytic on some domain and $a$ a double zero of $g$, we have:</p>
<p>$$\operatorname{Res} \left(\frac{f(z)}{g(z)}, z=a\right) = \frac{6f'(a)g''(a)-2f(a)g'''(a)}{3[g''(a)]^2}.$$</p>
<p>The problem is that direct calculation using the formula (for pole of order $2$):</p>
<p>$$\operatorname{Res}(h(z),z=a)=\lim_{z \to a} \frac{d}{dz}\left( (z-a)^2h(z) \right)$$</p>
<p>is extremely ugly, given that we're dealing with a quotient. Is there some sort of trick to make the calculation more manageable?</p>
| Daniel Fischer | 83,702 | <p>A little Taylor expansion takes you a long way. Say</p>
<p>$$g(z) = (z-a)^2\left(a_2 + a_3(z-a) + (z-a)^2\cdot \tilde{g}(z)\right)$$</p>
<p>with $a_2 \neq 0$, and</p>
<p>$$f(z) = b_0 + b_1(z-a) + (z-a)^2\cdot \tilde{f}(z).$$</p>
<p>Then</p>
<p>$$\begin{align}
\frac{f(z)}{g(z)} &= \frac{b_0 + b_1(z-a) +(z-a)^2\tilde{f}(z)}{(z-a)^2\left(a_2 + a_3(z-a) + (z-a)^2\tilde{g}(z)\right)}\\
&= \frac{1}{a_2(z-a)^2}\frac{b_0 + b_1(z-a) + (z-a)^2\tilde{f}(z)}{1 + \frac{a_3}{a_2}(z-a) + (z-a)^2h(z)}\\
&= \frac{1}{a_2(z-a)^2}\left(b_0 + b_1(z-a)\right)\left(1-\frac{a_3}{a_2}(z-a)\right) + \tilde{h}(z)\\
&= \frac{c}{(z-a)^2} + \frac{b_1 - (b_0a_3)/a_2}{a_2(z-a)} + k(z),
\end{align}$$</p>
<p>so the residue is</p>
<p>$$\frac{b_1a_2 - b_0a_3}{a_2^2} = \frac{f'(a)\frac12g''(a) - f(a)\frac16g'''(a)}{\left(\frac12g''(a)\right)^2} = \frac{6f'(a)g''(a) - 2f(a)g'''(a)}{3g''(a)^2}.$$</p>
<p>That didn't hurt, doctor.</p>
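A spot-check of the final formula on a concrete example, assuming sympy is available (the choices of f and g below are illustrative, not from the question):

```python
# f(z) = 1 + 2z, g(z) = z^2*(1+z): g has a double zero at a = 0.
# Compare sympy's residue with the closed-form expression.
import sympy as sp

z = sp.symbols('z')
f = 1 + 2*z
g = z**2 * (1 + z)
a = 0

direct = sp.residue(f / g, z, a)
formula = (6*f.diff(z).subs(z, a)*g.diff(z, 2).subs(z, a)
           - 2*f.subs(z, a)*g.diff(z, 3).subs(z, a)) \
          / (3*g.diff(z, 2).subs(z, a)**2)
```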
|
316,965 | <p>How do I interpret following types of matrices as special types of transformations?
I mean what are the transformative properties of following types of matrices, from $\mathbb{R}^n $ to $ \mathbb{R}^n$, or $\mathbb{C^n}$ to $\mathbb{C^n}$?</p>
<p><strong>Normal and Anti Hermitian Matrices</strong>?</p>
<p><strong>ADDED</strong></p>
<p>I expect answer something like this for orthogonal matrices <a href="http://en.wikipedia.org/wiki/Orthogonal_matrix">Quoting Wikipedia:</a>
As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other words, it is a unitary transformation.</p>
<p><strong>ADDED</strong></p>
<p>I managed to collect some more information:
Unitary matrices as Linear Maps <strong>preserve inner products</strong>, may be not necessarily for reals only, as orthogonal ones(???).</p>
| Christian Blatter | 1,303 | <p><a href="http://en.wikipedia.org/wiki/Normal_matrix" rel="nofollow">Wikipedia</a> lists about 10 properties of a linear transformation that are equivalent with normality. Here is a property with a geometrical touch that is not mentioned there (it's an exercice in Halmos' <em>Finite-dimensional vector spaces</em>):</p>
<p>A linear transformation $A:\ V\to V$ of a finite-dimensional unitary space is normal iff the orthogonal complement of any invariant subspace is invariant as well.</p>
|
62,000 | <p>Let $I,J,K$ be three non-void sets, and let $\gamma:I\times J\times K\rightarrow\mathbb{N}$.
Is there some nonempty set $X$, together with some functions $\{ f_{i}:X\rightarrow X;\ i\in I\}$,
some subsets $\{ \Omega_{j}\subset X;\ j\in J\}$, and some
points $\{p_{k}\in X;\ k\in K\}$ s.t. $\mid f_{i}^{-1}\left(p_{k}\right)\cap\Omega_{j}\mid=\gamma\left(i,j,k\right)$
$\left(i\in I,j\in J,k\in K\right)$, and $\mid f_{i}^{-1}\left(p\right)\mid\leq\mid\mathbb{R}\mid$ $\left(i\in I,p\in X\right)$?
In other words, is $\gamma$ ''representable'' as the number of
solutions of some ''reasonable'' equations? [An elementary problem,
indeed.] </p>
| Mark Meilstrup | 14,479 | <p>This is basically a detailed description of a solution, based on Gerhard's answer.</p>
<p>Let $X=\mathbb{N}_0 \times I_0 \times K $, where $\mathbb{N}_0$ includes $0$ and similarly $I_0=I\cup 0$ with $0\not \in I$.</p>
<p>Let $p_k =(0,0,k)$, and let $\displaystyle \Omega_j=\bigcup_i \bigcup_k \bigcup_{n=1}^{\gamma_{ijk}} (n,i,k)$.</p>
<p>Define $f_i(n,i,k)=(0,0,k)=p_k$ and $f_i(n,i',k)=(n+1,i',k)$ for $i'\neq i$. We add $1$ to $n$ so that $f_i(p_k)\neq p_k$.</p>
<p>Note that for $x\neq p_k$, $|f_i^{-1}(x)|\leq 1$. On the other hand, $f_i^{-1}(p_k)=\mathbb{N} \times \{i\}\times \{k\} $, and then $f_i^{-1}(p_k)\cap \Omega_j =\{(n,i,k)\mid 1\leq n\leq \gamma_{ijk} \}$, which has the desired cardinality $\gamma_{ijk} $. </p>
<p>Remark: In a comment to Gerhard's answer, Ady says "I think you're underestimating the size of J." The size of $J$ actually plays no role in this problem as stated, as the sets $\Omega_j$ may overlap (and will overlap a lot in my construction). If you want to require the $\Omega_j$ to be disjoint, note that the other conditions force $|J|\leq |\mathbb{R}|$, as $\left|\bigcup_j \left( f_i^{-1}(p_k)\cap \Omega_j \right)\right| \leq |f_i^{-1}(p_k)|\leq |\mathbb R|$. We can modify the construction above by replacing $\mathbb N$ with $\mathbb R$, and can assure that the $\Omega_j$ are disjoint if we are more careful in choosing which $\gamma_{ijk}$ elements of $\mathbb{R}\times\{i\}\times\{k\}$ to include in $\Omega_j$ (above I chose the points $(n,i,k)$ for $n$ from $1$ to $\gamma_{ijk}$).</p>
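A finite sketch of this construction in Python (illustrative only: $\mathbb{N}_0$ is truncated to {0,...,N}, and the index sets and gamma values are made up):

```python
# X = {0..N} x (I u {'0'}) x K, p_k = (0,'0',k),
# f_i(n,i,k) = p_k, f_i(n,i',k) = (n+1,i',k) for i' != i,
# Omega_j = {(n,i,k) : 1 <= n <= gamma(i,j,k)}.
I, J, K = ['a', 'b'], ['u', 'v'], ['x']
gamma = {('a', 'u', 'x'): 2, ('a', 'v', 'x'): 3,
         ('b', 'u', 'x'): 1, ('b', 'v', 'x'): 4}

N = 50  # truncation level; any N exceeding every gamma value works
X = [(n, i0, k) for n in range(N + 1) for i0 in I + ['0'] for k in K]

def f(i, pt):
    n, i0, k = pt
    return (0, '0', k) if i0 == i else (n + 1, i0, k)

Omega = {j: {(n, i, k) for i in I for k in K
             for n in range(1, gamma[(i, j, k)] + 1)} for j in J}

checks = []
for i in I:
    for j in J:
        for k in K:
            fiber = [pt for pt in X if f(i, pt) == (0, '0', k)]
            count = sum(1 for pt in fiber if pt in Omega[j])
            checks.append(count == gamma[(i, j, k)])
```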
|
4,513,678 | <p>Suppose <span class="math-container">$f(x) = ax^3 + bx^2 + cx + d$</span> is a cubic equation with roots <span class="math-container">$\alpha, \beta, \gamma.$</span> Then we have:</p>
<p><span class="math-container">$\alpha + \beta + \gamma= -\frac{b}{a}\quad (1)$</span></p>
<p><span class="math-container">$\alpha\beta + \beta\gamma + \gamma\alpha = \frac{c}{a}\quad (2)$</span></p>
<p><span class="math-container">$\alpha\beta\gamma = -\frac{d}{a}\quad (3)$</span></p>
<p>We can find <span class="math-container">$\alpha^2\beta + \beta^2\gamma + \gamma^2\alpha + \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha$</span> in terms of <span class="math-container">$a,b,c,d$</span> with the formula:</p>
<p><span class="math-container">$$ \alpha^2\beta + \beta^2\gamma + \gamma^2\alpha + \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha = (\alpha+\beta+\gamma)(\alpha\beta+\alpha\gamma+\beta\gamma) - 3\alpha\beta\gamma $$</span>
<span class="math-container">$$=\left(\frac{-b}{a}\right) \left(\frac{c}{a}\right) - 3\left(-\frac{d}{a}\right).$$</span></p>
<p>But I was wondering if there was some way to find <span class="math-container">$ \alpha^2\beta + \beta^2\gamma + \gamma^2\alpha\ $</span> and therefore also <span class="math-container">$\ \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha\ $</span> in terms of <span class="math-container">$a,b,c,d,\ $</span> with some algebraic manipulation, i.e. without <a href="https://mathworld.wolfram.com/CubicFormula.html" rel="nofollow noreferrer">finding the roots with a cubic formula</a>?</p>
<p>Notice that there are <em>two</em> possible values of <span class="math-container">$\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha,$</span> namely <span class="math-container">$\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha = \beta^2\gamma+\gamma^2\alpha+\alpha^2\beta = \gamma^2\alpha+\alpha^2\beta+\beta^2\gamma$</span> and <span class="math-container">$\alpha^2\gamma+\gamma^2\beta+\beta^2\alpha = \gamma^2\beta+\beta^2\alpha+\alpha^2\gamma = \beta^2\alpha + \alpha^2\gamma+\gamma^2\beta.$</span></p>
| Jyrki Lahtonen | 11,619 | <p>You know the elementary symmetric polynomials evaluated at the roots (by the Vieta relations):
<span class="math-container">$$
\begin{aligned}s_1&=\alpha+\beta+\gamma=-b/a,\\
s_2&=\alpha\beta+\beta\gamma+\gamma\alpha=c/a,\\
s_3&=\alpha\beta\gamma=-d/a.
\end{aligned}
$$</span>
The fundamental theorem of symmetric polynomials says that every symmetric polynomial can be written in terms of the elementary ones.</p>
<p>You are interested in finding
<span class="math-container">$$
\begin{aligned}
u&=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha,&\text{or}\\
v&=\alpha^2\gamma+\gamma^2\beta+\beta^2\alpha.
\end{aligned}
$$</span>
The problem, as explained by the others, is that you cannot tell which is which because <span class="math-container">$u$</span> and <span class="math-container">$v$</span> only follow cyclic symmetry.</p>
<p>However, the combinations <span class="math-container">$u+v$</span> and <span class="math-container">$uv$</span> are fully symmetric. A banal (but a bit tedious) calculation shows that
<span class="math-container">$$
\begin{aligned}
u+v&=s_1s_2-3s_3,\\
uv&=s_1^3s_3-6s_1s_2s_3+s_2^3+9s_3^2.
\end{aligned}
$$</span>
This means that we know the coefficients of the quadratic polynomial
<span class="math-container">$$
p(x)=(x-u)(x-v)=x^2-[u+v]x+uv
$$</span>
that has <span class="math-container">$u$</span> and <span class="math-container">$v$</span> as its roots. All you need to do is plug in the known values of <span class="math-container">$s_1,s_2,s_3$</span> into the formulas above, and solve the quadratic <span class="math-container">$p(x)=0$</span>. The roots are the two choices.</p>
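A symbolic confirmation of the two symmetric-function expressions, assuming sympy is available (note the coefficient 9 on the $s_3^2$ term of $uv$; with roots 1, 2, 3 one can check $u=23$, $v=25$ by hand):

```python
# Check u+v = s1*s2 - 3*s3 and u*v = s1^3*s3 - 6*s1*s2*s3 + s2^3 + 9*s3^2
# as polynomial identities in the roots.
import sympy as sp

a, b, c = sp.symbols('alpha beta gamma')
s1, s2, s3 = a + b + c, a*b + b*c + c*a, a*b*c

u = a**2*b + b**2*c + c**2*a
v = a**2*c + c**2*b + b**2*a

sum_ok = sp.expand(u + v - (s1*s2 - 3*s3)) == 0
prod_ok = sp.expand(u*v - (s1**3*s3 - 6*s1*s2*s3 + s2**3 + 9*s3**2)) == 0
```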
|
2,583,047 | <p>I have an $m$-dimensional vector space $V$, and we define $\wedge^rV^*$ as the collection of antisymmetric $r$-tensors. Why is $\wedge^rV^* = 0$ if $r>m$?</p>
<p>I have no idea.</p>
| C. Falcon | 285,416 | <p>Let $(e_1,\ldots,e_m)$ be a basis of $V$, then for any integer $r$, $\Lambda^rV^*$ is spanned by elements of the form:
$$\mathrm{d}e_{i_1}\wedge\ldots\wedge\mathrm{d}e_{i_r},$$
where $i_1,\ldots,i_r$ are elements of $\{1,\ldots,m\}$. Now, if $r>m$, then for each choice of $i_1,\ldots,i_r$ at least one index appears twice, so that $\mathrm{d}e_{i_1}\wedge\cdots\wedge\mathrm{d}e_{i_r}$ vanishes identically. Whence the result.</p>
|
2,583,047 | <p>I have an $m$-dimensional vector space $V$, and we define $\wedge^rV^*$ as the collection of antisymmetric $r$-tensors. Why is $\wedge^rV^* = 0$ if $r>m$?</p>
<p>I have no idea.</p>
| Community | -1 | <p>Because:</p>
<ul>
<li>$V^*$ is also an $m$-dimensional vector space.</li>
<li>$\Lambda^r U \cong 0$ for <em>any</em> vector space $U$ of dimension less than $r$</li>
</ul>
<p>An easy way to see the latter fact is to use (multi-)linearity to decompose any wedge product into a linear combination of wedges of basis vectors. If $U$ has dimension less than $r$, then any collection of $r$ basis vectors must contain a repeated vector, and thus the wedge is zero.</p>
|
1,979,226 | <p>Use Bayes' theorem or a tree diagram to calculate the indicated probability. Round your answer to four decimal places.
Y1, Y2, Y3 form a partition of S.</p>
<p>P(X | Y1) = .8, P(X | Y2) = .1, P(X | Y3) = .9, P(Y1) = .1, P(Y2) = .4. </p>
<p>Find P(Y1 | X).</p>
<p>P(Y1 | X) =</p>
<p>For this one I thought that all I had to do was P(X | Y1)*P(Y1) / [P(X | Y1)*P(Y1) + P(X | Y2)*P(Y2) + P(X | Y3)*P(Y3)]</p>
<p>But when I do that I am not getting the correct answer. Is it possible that the value for P(Y3) is not .1, and if it is not, what is it? </p>
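For what it's worth, a direct computation in Python, assuming P(Y3) = 1 - P(Y1) - P(Y2) = 0.5 because Y1, Y2, Y3 partition S:

```python
# Bayes' theorem: P(Y1|X) = P(X|Y1)P(Y1) / sum_i P(X|Yi)P(Yi).
p_x_given_y = {1: 0.8, 2: 0.1, 3: 0.9}
p_y = {1: 0.1, 2: 0.4}
p_y[3] = 1 - p_y[1] - p_y[2]  # the partition forces P(Y3) = 0.5

denom = sum(p_x_given_y[i] * p_y[i] for i in (1, 2, 3))
p_y1_given_x = p_x_given_y[1] * p_y[1] / denom
answer = round(p_y1_given_x, 4)
```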
| Nicolas FRANCOIS | 288,125 | <p>Another way of seeing this is to note that $(u_n)$ also verifies the recurrence formula
$$u_{n+1}=\sqrt{u_n+u_n^2}$$
and study the function $f:x\mapsto \sqrt{x+x^2}$. This function stabilizes the interval $[0,+\infty[$, is strictly increasing and it is easy to prove that $x>0$ implies $f(x)>x$.</p>
<p>So the sequence $(u_n)$ is strictly increasing, and cannot be bounded because, as you said, the only fixed point by $f$ is $0$.</p>
<p>It would be funny then to find an equivalent of $u_n$ :-)</p>
<p>This is what I found: from $u_{n+1}=\sqrt{u_n+u_n^2}$, you can first derive, as you pointed out,
$$u_{n+1}-u_n=\frac{u_n}{u_n+u_{n+1}}< \frac{1}{2}$$
because $(u_n)$ is strictly increasing. This leads to
$$u_n=u_1+\sum_{k=1}^{n-1} u_{k+1}-u_k\le 1+\frac{n-1}{2}=\frac{n+1}{2}$$
But you can also derive
$$u_{n+1}=u_n\sqrt{1+\frac{1}{u_n}}$$
and because we proved $\lim u_n=+\infty$, we can use the development :
$$u_{n+1}=u_n(1+\frac{1}{2u_n}+o(\frac{1}{u_n})) = u_n+\frac12 +o(1)$$
so
$$u_{n+1}-u_n=\frac{1}{2}+o(1)\sim \frac12$$
and by Cesàro's theorem (or by summation of equivalents):
$$u_n = u_1+\sum_{k=1}^{n-1} u_{k+1}-u_k \sim \frac n2$$
So now you have the limit, an upper bound and an equivalent :-)</p>
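A numeric check of the equivalent $u_n \sim n/2$ (illustrative, plain Python):

```python
# Iterate u_{n+1} = sqrt(u_n + u_n^2) from u_1 = 1 and compare with n/2.
import math

u = 1.0
n = 1
while n < 10**5:
    u = math.sqrt(u + u * u)
    n += 1

ratio = u / (n / 2)  # should be close to 1 for large n
```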
|
2,098,810 | <p>In a triangle,what is the ratio of the distance between a vertex and the orthocenter and the distance of the circumcenter from the side opposite vertex.</p>
| Simply Beautiful Art | 272,831 | <p>Notice that</p>
<p>$$\int_{-1}^1\sqrt{1+x^2}\ dx<\int_{-1}^1\sqrt{1+1^2}\ dx=2\sqrt2$$</p>
<p>Likewise,</p>
<p>$$\int_{-1}^1\sqrt{1+x^2}\ dx>\int_{-1}^1\sqrt{1+0^2}\ dx=2$$</p>
<p>where we used</p>
<p>$$\int_a^b\min_{t\in(a,b)}f(t)\ dx\le\int_a^bf(x)\ dx\le\int_a^b\max_{t\in(a,b)}f(t)\ dt$$</p>
|
2,740,954 | <p>Determine price elasticity of demand and marginal revenue if $q = 30-4p-p^2$, where q is quantity demanded and p is price and p=3.</p>
<p>I solved it for first part-</p>
<p>Price elasticity of demand = $-\frac{p}{q} \frac{dq}{dp}$</p>
<p>on solving above i got answer as $\frac{10}{3}$</p>
<p>But on solving for marginal revenue I am getting $-10$, while the correct answer given is $\frac{21}{10}$.</p>
<p>Any hint is appreciated; please help.</p>
| user636814 | 636,814 | <p>Marginal revenue <span class="math-container">$= p\left(1+\frac{1}{\text{elasticity}}\right)$</span>, where the signed elasticity is <span class="math-container">$-\frac{10}{3}$</span>, so it equals <span class="math-container">$3\left(1-\frac{3}{10}\right)= \frac{21}{10}$</span>.</p>
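A sketch of the computation with sympy (assumed available), using marginal revenue = dR/dq = (dR/dp)/(dq/dp):

```python
# Revenue R = p*q with q = 30 - 4p - p^2; evaluate at p = 3.
import sympy as sp

p = sp.symbols('p', positive=True)
q = 30 - 4*p - p**2

elasticity = (-p / q * sp.diff(q, p)).subs(p, 3)     # 10/3
mr = (sp.diff(p * q, p) / sp.diff(q, p)).subs(p, 3)  # 21/10
```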
|
1,760,687 | <p>Can anyone explain to me why this equality is true?</p>
<p>$x^k(1-x)^{-k} = \sum_{n = k}^{\infty}{{n-1}\choose{k-1}}x^n$</p>
<p>I really don't see how any manipulation could give me this result. </p>
<p>Thanks!</p>
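A quick numeric spot-check of the identity for one choice of $x$ and $k$ (illustrative, using Python's `math.comb`):

```python
# x^k * (1-x)^(-k) = sum_{n>=k} C(n-1, k-1) * x^n  for |x| < 1;
# truncate the series far enough that the tail is negligible.
from math import comb

x, k = 0.3, 4
lhs = x**k * (1 - x)**(-k)
rhs = sum(comb(n - 1, k - 1) * x**n for n in range(k, 200))
```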
| Arthur | 15,500 | <p>That depends on your definition. Some would say you need to have two terms in order to have a well-defined difference. Some would say that $1$ is an arithmetic progression (of length $1$ and any difference you like). </p>
<p>Personally, I consider the empty sequence $\{\}$ an arithmetic progression as well, simply because "there is a number $d$ such that the difference between any two adjacent terms is equal to $d$" is <em>vacuously</em> true.</p>
|
4,492,566 | <blockquote>
<p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p>
</blockquote>
<p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p>
<p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p>
<p>Any insight on how to proceed?</p>
| M. Imaninezhad | 61,045 | <p><a href="https://i.stack.imgur.com/FKAvf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKAvf.png" alt="enter image description here" /></a></p>
<p><span class="math-container">$$\lim_{n\to\infty}{\alpha_n}=\frac{\pi}{2}$$</span>
So the graph of <span class="math-container">$y=x^2$</span> ceases to be the graph of a function after rotation through any nonzero angle.</p>
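A numeric illustration (plain Python; the angle and helper `rotate` are my choices): after rotating by even 0.01 rad, two distinct points of the parabola share the same x-coordinate, so the vertical line test fails.

```python
# Under rotation by t, a point (x, x^2) maps to
# (x*cos(t) - x^2*sin(t), x*sin(t) + x^2*cos(t)).
# The new x-coordinate is a downward quadratic in x with vertex at
# x0 = cos(t)/(2*sin(t)), so points symmetric about x0 collide in x.
import math

theta = 0.01
c, s = math.cos(theta), math.sin(theta)

def rotate(x):
    return (x * c - x**2 * s, x * s + x**2 * c)

x0 = c / (2 * s)
(xa, ya) = rotate(x0 - 1)
(xb, yb) = rotate(x0 + 1)  # same rotated x, very different rotated y
```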
|