qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
1,049,808 | <p>Let $u_0 = 1$ and $u_{n+1} = \frac{u_n}{1+u_n^2}$ for all $n \in \mathbb{N}$.</p>
<p>I can show that $u_n \sim \frac{1}{\sqrt{2n}}$, but I would like one more term in the asymptotic development, something like $u_n = \frac{1}{\sqrt{2n}}+\frac{\alpha}{n\sqrt{n}} + o\bigl(\frac{1}{n^{3/2}}\bigr)$.</p>
<p>Here is the outline of my proof of $u_n \sim \frac{1}{\sqrt{2n}}$: </p>
<ul>
<li>$(u_n)$ is decreasing, and bounded from below by $0$, hence converges.</li>
<li>The limit $\ell$ satisfies $\ell = \frac{\ell}{1+\ell^2}$, hence $\ell = 0$.</li>
<li>A computation gives $v_n = u_{n+1}^{-2} - u_n^{-2} \to 2$.</li>
<li>Using Cesàro lemma, $\frac{1}{n} \sum_{k=0}^{n-1} v_k \to 2$.</li>
<li>Hence $\frac{1}{n} u_n^{-2} \to 2$.</li>
</ul>
| not all wrong | 37,268 | <p><strong>Note:</strong> This answer does not contain a proof, but I'm fairly sure it's right. Posted in the hope that perhaps the answer will guide you towards a proof thereof.</p>
<p>Looking at the graph, one can see that the next correction is in fact an $n^{-3/2} \log n$ term, dominating the subsequent $n^{-3/2}$ term. Given this, one can do a plausible self-consistency check using the recurrence relation to find the answer.</p>
<p>Specifically, suppose</p>
<p>$$u_n \sim \frac \alpha {\sqrt{n}} + \frac {\beta \log n}{n^{3/2}}+\text{a sufficiently nice smaller series, in particular }\mathcal{O}\left(\frac{1}{n^{3/2}}\right)$$</p>
<p>Then
$$u_{n+1}=\frac{u_n}{1+u_n^2}$$
implies
$$u_{n+1}-\frac{u_n}{1+u_n^2} \sim \frac 1 2 (\alpha - 2 \alpha^3) \frac 1 {n^{3/2}}+\cdots $$
so that $\alpha=0,-1/\sqrt 2,1/\sqrt 2$ are the options. You have checked that $\alpha=1/\sqrt{2}$.</p>
<p>Substituting this back in gives
$$u_{n+1}-\frac{u_n}{1+u_n^2} \sim \left(-\frac 1 {8\sqrt 2} - \beta\right) \frac 1 {n^{5/2}}+\cdots $$
and hence it seems the only consistent result has
$$\boxed{\displaystyle u_{n} \sim \frac{1}{\sqrt{2n}}-\frac{\log n}{8\sqrt 2 n^{3/2}}}$$</p>
<p>Numerically, I find the error
$$\epsilon = \frac{u_n - \frac{1}{\sqrt{2n}}}{-\frac{\log n}{8\sqrt 2 n^{3/2}}} - 1\approx \frac{3.4}{\log n} \to 0$$
suggesting that indeed this is correct with the next term in the series being the expected $n^{-3/2}$ term.</p>
<p>However, the coefficient of the $n^{-3/2}$ term cannot be determined by this self-consistency procedure, reflecting the fact that this coefficient actually depends on the initial datum $u_0$.</p>
<p><strong>Left to do:</strong> (Not that I think I'll come back to this myself, but others can!)</p>
<ul>
<li>Rigorously show the above logarithmic correction.</li>
<li>Figure out the dependence of the next term on the initial data.</li>
</ul>
<hr>
<p>Mathematica code for my verification:</p>
<pre><code>min = 2;
max = 5000000;
dat = RecurrenceTable[{u[n + 1] == u[n]/(N[1, 50] + u[n]^2), u[0] == 1}, u, {n, min, max}];
errordat =
Table[{M, (dat[[M - min + 1]] -
1/Sqrt[2 M])/(-Log[M]/(8 Sqrt[2] M^(3/2))) - 1}, {M, min, max, 100000}];
nextconst = ((dat[[max - min + 1]] -
1/Sqrt[2 max])/(-Log[max]/(8 Sqrt[2] max^(3/2))) - 1)*Log[max]
Show[ListPlot[errordat, AxesOrigin -> {0, 0}, PlotStyle -> PointSize[Large]], Plot[nextconst/Log[x], {x, min, max}, PlotStyle -> Red, PlotRange -> All]]
</code></pre>
<p>Output: Estimate of $3.446...$ for next constant, and a plot of the $\epsilon$ against $n$ with a fitted curve based on a logarithmic next correction:
<img src="https://i.stack.imgur.com/56axN.jpg" alt="enter image description here">
Obviously the decay to 0 here is slow, which is to be expected given the predicted logarithmic correction. Feel free to go to larger ranges.</p>
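For anyone without Mathematica, here is a rough Python equivalent of the same check (my own naming; plain double precision, so don't push $n$ too far):

```python
import math

def u_n(n, u0=1.0):
    """Iterate u_{k+1} = u_k / (1 + u_k^2) for n steps, starting from u0."""
    u = u0
    for _ in range(n):
        u = u / (1.0 + u * u)
    return u

def one_term(n):
    """Leading asymptotic term 1/sqrt(2n)."""
    return 1.0 / math.sqrt(2.0 * n)

def two_term(n):
    """Leading term plus the conjectured logarithmic correction."""
    return one_term(n) - math.log(n) / (8.0 * math.sqrt(2.0) * n ** 1.5)
```

At $n = 10^4$ the two-term approximation is noticeably closer to $u_n$ than the leading term alone, consistent with the boxed formula.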
|
2,015,717 | <p>I know that the big-O notation tells me to find a natural number $n_0$ and a real $c>0$ such that, for all $n \ge n_0$,
$$2^{(n^2)} \le c\cdot2^{2n} $$
Only step that I can think about is this:
$$2^{(n^2)} \le c\cdot2^{n+n} $$
$$2^{(n^2)} \le c\cdot2^n\cdot2^n $$
But I have no clue what to do next. (Some estimate?)</p>
<p>Do you have some hints please?</p>
<p>EDIT: sorry it should be $2^{(n^2)}$</p>
| Arthur | 15,500 | <p>This is, in fact, not true. Given a $c$, there is some natural number $m$ such that $2^m>c$. Now, for each $n>1+2\sqrt{ m+1}$ we have $n^2>2n+m$, which gives
$$
2^{n^2}>2^{2n+m}=2^{2n}\cdot 2^m>2^{2n}\cdot c
$$</p>
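To see the argument in action numerically (illustrative values of my own choosing): with $c = 2^{10}$, the inequality $2^{n^2} \le c\cdot 2^{2n}$ already fails for every $n \ge 8 > 1+2\sqrt{11}$.

```python
def dominates(c, n):
    """True when 2**(n*n) exceeds c * 2**(2*n), using exact integers."""
    return 2 ** (n * n) > c * 2 ** (2 * n)
```

So $2^{(n^2)}$ eventually outgrows $c\cdot 2^{2n}$, whatever constant one tries.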
|
1,549,843 | <p>Prove the following limits:</p>
<p>$$\lim_{x \rightarrow 0^+} x^x = 1$$
$$\lim_{x \rightarrow 0^+} x^{\frac{1}{x}}=0$$
$$\lim_{x \rightarrow \infty} x^{\frac{1}{x}}=1$$</p>
<p>They are not that hard using l'Hospital or the Sandwich theorem. But I curious if they can be solved with the basic knowledge of limits. I have been trying to make some famous limits like the definition of $e$ but without luck.
Thank you for your help.</p>
| Kushal Bhuyan | 259,670 | <p>For the third you can use $$\lim_{n\to \infty}(a_n)^{1/n}=l$$ if $$\lim_{n\to \infty}\frac{a_{n+1}}{a_n}=l$$</p>
<p>Here one can take $a_n=n$, so that $\frac{a_{n+1}}{a_n}=\frac{n+1}{n}\to 1$ and hence $n^{1/n}\to 1$.</p>
|
3,917,283 | <p>I have the following : <span class="math-container">$f_n(x)=\frac{nx}{1+nx^3} \quad n=1,2,\dots \quad$</span> and <span class="math-container">$f(x)=\lim_{n \to \infty} f_n(x) ,$</span></p>
<p>and I have done the following : <span class="math-container">$$|f_n(x) -f(x)| = \biggl|\frac{nx}{1+nx^3}-\frac{1}{x^2}\biggr| = \biggl|\frac{1}{x^2(1+nx^3)}\biggr |$$</span> where <span class="math-container">$\frac{1}{x^2}$</span> is the pointwise limit of the sequence of functions (for <span class="math-container">$x>0$</span>). Now I want <span class="math-container">$\Bigl|\frac{1}{x^2(1+nx^3)}\Bigr|< \epsilon$</span>, but I don't know how to choose <span class="math-container">$N$</span> in order to fully prove whether it is uniformly convergent or not. Can somebody help me proceed with the proof?</p>
| zhw. | 228,045 | <p>Hint: The <span class="math-container">$f_n$</span>'s are bounded on <span class="math-container">$(0,\infty).$</span> If the convergence were uniform there, then <span class="math-container">$f$</span> would be bounded there.</p>
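A small numeric illustration of the hint (not a proof): each $f_n$ is bounded on $(0,\infty)$, but along the moving points $x_n = 1/n$ the gap $|f_n(x_n) - f(x_n)|$ grows without bound, so $\sup_{(0,\infty)}|f_n-f|$ cannot tend to $0$.

```python
def f_n(n, x):
    """f_n(x) = n*x / (1 + n*x^3)."""
    return n * x / (1.0 + n * x ** 3)

def f(x):
    """Pointwise limit on (0, infinity)."""
    return 1.0 / x ** 2

def gap(n):
    """|f_n - f| evaluated at the moving point x = 1/n; roughly n^2 - 1."""
    x = 1.0 / n
    return abs(f_n(n, x) - f(x))
```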
|
3,464,477 | <p>An odd perfect number n is of the form <span class="math-container">$n=q^{k}m^{2}$</span> where <span class="math-container">$q$</span> is prime and both <span class="math-container">$q,k \equiv 1 \mod 4$</span>. Also, n satisfies <span class="math-container">$\sigma (n)=2n$</span> so that <span class="math-container">$\sigma (q^{k}m^{2})=2q^{k}m^{2}$</span>.
My questions are about <span class="math-container">$q,k$</span>. </p>
<p>1) Is it known whether it is possible that <span class="math-container">$k>q$</span> ? and also if so</p>
<p>2) Can <span class="math-container">$k=a\cdot q$</span>, where <span class="math-container">$a$</span> is a positive integer?</p>
| JoshuaZ | 64,009 | <p>We can't answer these questions at present. We can't rule out almost any specific choice of this sort, such as <span class="math-container">$k=1$</span> and <span class="math-container">$q=5$</span>. That said, we can rule out a few choices of <span class="math-container">$q$</span> and <span class="math-container">$k$</span>. For example, we can rule out <span class="math-container">$q=1049$</span> using that <span class="math-container">$q+1|2n$</span>, and so if <span class="math-container">$1049=q$</span> then <span class="math-container">$105|n$</span> but it isn't hard to show that any number of the form in OP that is divisible by 105 must be abundant (since 9(5)(49) is abundant and any multiple of an abundant number is abundant). But other than examples of <span class="math-container">$k$</span> and <span class="math-container">$q$</span> where they force an abundant divisor of <span class="math-container">$n$</span>, we can't say anything much. </p>
|
3,900,435 | <p>Suppose that <span class="math-container">$\mathbf{Y}\sim N_3(0,\,\sigma^2\mathbf{I}_3)$</span> and that <span class="math-container">$Y_0$</span> is <span class="math-container">$N(0,\,\sigma_0^2)$</span>, independently of the <span class="math-container">$Y_i$</span>'s.
My question is: does <span class="math-container">$(\mathbf{Y}, Y_0)$</span> also have a multivariate normal distribution? By using moment generating functions, it suffices to show that <span class="math-container">$\mathbf Y$</span> and <span class="math-container">$Y_0$</span> are independent. But is this the case?</p>
| mathcounterexamples.net | 187,663 | <p>To elaborate on the example given by <a href="https://math.stackexchange.com/users/570087/perpetuallyconfused">perpetuallyconfused</a>, indeed <span class="math-container">$\mathbb C$</span> and <span class="math-container">$\mathbb C(x)$</span> provide a counterexample.</p>
<p>Some additional elements.</p>
<p><span class="math-container">$\mathbb C$</span> is algebraically closed: this is well known. <span class="math-container">$\mathbb C(x)$</span> is not. In particular, the polynomial <span class="math-container">$p(t) = t^2-x \in \mathbb C(x)[t]$</span> can't have a root <span class="math-container">$\frac{r(x)}{s(x)} \in \mathbb C(x)$</span>. If that were the case, we would have <span class="math-container">$r^2(x)=x s^2(x)$</span>, with the contradiction that the left-hand side of the equality has even degree and the right-hand side has odd degree. Therefore <span class="math-container">$\mathbb C$</span> and <span class="math-container">$\mathbb C(x)$</span> are not isomorphic.</p>
<p>Also, the identity is an obvious embedding <span class="math-container">$\mathbb C \hookrightarrow \mathbb C(x)$</span>.</p>
<p>Regarding an embedding <span class="math-container">$\mathbb C(x) \hookrightarrow \mathbb C$</span>, you have to know that <a href="https://math.stackexchange.com/questions/3623568/two-algebraically-closed-fields-are-isomorphic-if-and-only-if-they-have-the-same">Two algebraically closed fields are isomorphic if and only if they have the same transcendence degree over their prime fields</a> (proof provided in the link). And also that the <a href="https://en.wikipedia.org/wiki/Algebraic_closure" rel="nofollow noreferrer">algebraic closure</a> of an infinite field <span class="math-container">$F$</span> has the same cardinality as <span class="math-container">$F$</span>. As the cardinality of <span class="math-container">$\mathbb C(x)$</span> equals that of <span class="math-container">$\mathbb C$</span>, the algebraic closure <span class="math-container">$\overline{\mathbb C(x)}$</span> of <span class="math-container">$\mathbb C(x)$</span> is isomorphic to <span class="math-container">$\mathbb C$</span> and therefore you can embed <span class="math-container">$\mathbb C(x)$</span> into <span class="math-container">$\mathbb C$</span>.</p>
|
2,535,111 | <p>Let $n$ be a natural number, and let $1\leq j,k\leq n$ be chosen uniformly at random.
Show that if $k\mid n$ ($k$ divides $n$), then the probability of
$k$ also dividing $j$ is $\frac{1}{k}$, that is, $\text{P}\left(k\mid j\right)=\frac{1}{k}$.</p>
<p>One way I was thinking about is to compute directly
$$\text{P}\left(k\mid j\,\biggl|\,k\mid n\right)=\frac{\text{P}\left(k\mid j\,\cap\,k\mid n\right)}{\text{P}\left(k\mid n\right)}
$$But I don't really know how to find either of the probabilities on the right side.
(Any help with that?)</p>
<p>Another way I was thinking about is this: if $k\mid n$, then there is
an $s\in\mathbb{N}$ such that $n=k\cdot s$, so we can divide $\left\{ 1,\ldots,n\right\} $
into $s$ distinct sets or into $k$ distinct sets, for example in these ways:
$$\begin{aligned}(1)\quad & \left\{ 1,\ldots,k\right\} ,\left\{ k+1,\ldots,2k\right\} ,\dots,\left\{ \left(s-1\right)k+1,\ldots,sk\right\} \\
(2)\quad & \left\{ 1,k+1,\ldots,\left(s-1\right)k+1\right\} ,\left\{ 2,k+2,\ldots,\left(s-1\right)k+2\right\} ,\ldots,\left\{ k,2k,\ldots sk\right\}
\end{aligned}
$$Somehow I think that using the 2nd way would be more helpful. I know
that the probability of choosing a random $j$ from $\left\{ 1,\ldots,n\right\} $
is $\frac{1}{n}$, but how do I find the probability that it is also
in the last set $\left\{ k,2k,\ldots,sk\right\} $?</p>
| Bernard | 202,857 | <p>First, the volume of the half ball $B$ with radius $R$ is $\frac23\pi R^3$. In <code>spherical coordinates</code> $(R,\theta,\varphi)$, the coordinates of the centre of gravity $G$ are given by the integrals
\begin{align}\newcommand\d{\mathop{}\!\mathrm{d}}
x_G&=\frac3{2\pi R^3}\iiint_B x\d x\d y\d z=\frac3{2\pi R^3}\iiint_{\substack{0\le r \le R\\[0.35ex]0\le\theta\le 2\pi\\[0.2ex]0\le\varphi\le\tfrac\pi2}}(r\sin\varphi\cos\theta)\,r^2\sin\varphi\d r\d\theta\d\varphi \\
y_G&=\frac3{2\pi R^3}\iiint_B y\d x\d y\d z=\frac3{2\pi R^3}\iiint_{\substack{0\le r \le R\\[0.35ex]0\le\theta\le 2\pi\\[0.2ex]0\le\varphi\le\tfrac\pi2}}(r\sin\varphi\sin\theta)\,r^2\sin\varphi\d r\d\theta\d\varphi\\
z_G&=\frac3{2\pi R^3}\iiint_B z\d x\d y\d z=\frac3{2\pi R^3}\iiint_{\substack{0\le r \le R\\[0.35ex]0\le\theta\le 2\pi\\[0.2ex]0\le\varphi\le\tfrac\pi2}}(r\cos\varphi)\,r^2\sin\varphi\d r\d\theta\d\varphi
\end{align}</p>
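For what it's worth, a quick Monte-Carlo evaluation of these integrals with $R=1$ (an illustrative check, not part of the derivation) is consistent with $x_G = y_G = 0$ and $z_G = 3/8$:

```python
import random

def half_ball_centroid(samples=200_000, seed=0):
    """Estimate the centroid of the half ball x^2+y^2+z^2 <= 1, z >= 0."""
    rng = random.Random(seed)
    sx = sy = sz = 0.0
    count = 0
    for _ in range(samples):
        # Rejection sampling from the bounding box [-1,1] x [-1,1] x [0,1].
        x, y, z = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1)
        if x * x + y * y + z * z <= 1.0:
            sx += x
            sy += y
            sz += z
            count += 1
    return sx / count, sy / count, sz / count
```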
|
80,452 | <p>When can the curvature operator of a Riemannian manifold $(M,g)$ be diagonalized by a basis of the following form:</p>
<p>$\{E_i\wedge E_j\}$, where $\{E_i\}$ is an orthonormal basis of the tangent space? If the manifold is three-dimensional then it is always possible. But what about higher-dimensional cases?</p>
| Willie Wong | 3,948 | <p>A sufficient condition is if the Riemannian manifold is conformally flat, this implies that the Weyl curvature vanishes, and the Riemann curvature tensor is a linear combination of the identity operator on two forms and the operator formed by the Kulkarni-Nomizu product of the Ricci curvature and the metric. Using that the Ricci curvature is a symmetric bilinear form, you can diagonalize it relative to the metric, and explicitly show (as in the 3 dimensional case) that the Kulkarni-Nomizu product of Ricci and the metric can be diagonalized over a basis formed by $\{e_i\wedge e_j\}$. </p>
<p>On the other hand, there are also large classes of manifolds for which it is impossible to satisfy your requirement. For example, consider the four dimensional (anti)-self-dual Einstein manifolds with nonvanishing Weyl curvature. The Einstein equation $Ric = \lambda g$ means that the Ricci and scalar parts of the curvature are just multiplies of the identity. But the self-duality of the Weyl part means any eigen-twoform of the curvature operator must be either self-dual or anti-self-dual, which rules them out from being rank two. </p>
<hr>
<p>Here are also some possibly relevant papers. </p>
<ul>
<li><a href="http://projecteuclid.org/euclid.pjm/1102779976">Vilms considered in this paper</a> conditions related to the curvature operator having bounded rank. </li>
<li><a href="http://www.jstor.org/stable/1998025">In this paper</a> the same author studied curvature operators of the form $R = b\wedge b$, where $b$ is symmetric bilinear. In general one sees that a necessary and sufficient condition for curvature operators to be diagonalisable in your sense is that $R = \sum_{i = 1}^{M} b_i\wedge b_i$, where the $b_i$'s are symmetric bilinear forms that can all be simultaneously diagonalised. </li>
</ul>
|
893,839 | <p>I am dealing with the dual spaces for the first time.</p>
<p>I just wanted to ask: is there any practical application of the dual space, or is it just some random mathematical thing? If there is, please give a few examples.</p>
| Joonas Ilmavirta | 166,535 | <p>Distributions, or "generalized functions", such as the Dirac delta are most conveniently described by studying the dual space of a space of "nice" (smooth or something of the kind) functions.
Heuristic calculations with distributions are commonplace in physics, but if you need a rigorous understanding of what is going on and what operations are allowed, you end up studying dual spaces.</p>
|
13,616 | <p>Are there enough interesting results that hold for general locally ringed spaces for a book to have been written? If there are, do you know of a book? If you do, pelase post it, one per answer and a short description.</p>
<p>I think that the tags are relevant, but feel free to change them.</p>
<p>Also, have there been any attempts to classify locally ringed spaces? Certainly, two large classes of locally ringed spaces are schemes and manifolds, but this still doesn't cover all locally ringed spaces.</p>
| Emerton | 2,874 | <p>In addition to the examples mentioned in the question, of manifolds and schemes, other commonly occuring types of locally ringed spaces are formal schemes and complex analytic spaces. </p>
<p>I don't know how extensive the taxonomy of locally ringed spaces is. For example,
if $A$ is a local ring, we can form the locally ringed space consisting of a single point,
with $A$ sitting on top of it. These are the topologically simplest locally ringed spaces
(after the empty space).
If $A$ is a field, one obtains a scheme. If $A$ is a complete local ring, one obtains a formal scheme. In general, this doesn't fit into any particular taxonomic grouping that I know of.</p>
<p>Incidentally, it might be worth mentioning that the various taxonomic classes can interact:
for example, analytification of schemes over ${\mathbb C}$ is conveniently described in terms of
maps (in the category of locally ringed spaces) to complex analytic spaces.</p>
|
1,576,561 | <p>I'd like to show that $$\sum\limits_{n = 1}^\infty {{{{x^{n + 1}}} \over {n(n + 1)}}} $$ absolutely converges for $|x| < 1$</p>
| Berci | 41,488 | <p>You can also use this directly:</p>
<p>If $|x|<1$, then $|x^{n+1}|<1$, so for any $N>n$, we have</p>
<p>$$\left|\sum_{n=1}^N\frac{x^{n+1}}{n(n+1)}\right| \le \sum_{n=1}^N\frac{|x^{n+1}|}{n(n+1)}\le\sum_{n=1}^N\frac1{n(n+1)} =\sum_{n=1}^N\left(\frac1n-\frac1{n+1}\right) = 1-\frac1{N+1}\,.$$</p>
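A quick numerical confirmation of the telescoping bound (illustrative only):

```python
def partial_sum(x, big_n):
    """Partial sum of x^(n+1) / (n(n+1)) for n = 1..big_n."""
    return sum(x ** (n + 1) / (n * (n + 1)) for n in range(1, big_n + 1))

def telescoped_bound(big_n):
    """The telescoping majorant 1 - 1/(N+1)."""
    return 1.0 - 1.0 / (big_n + 1)
```

The partial sums stay below the bound for any $|x|<1$, so they are absolutely convergent (bounded increasing absolute partial sums).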
|
927,815 | <p>Find an equation of the plane that passes through the points $A(0, 1, 0)$, $B(1, 0, 0)$ and $C(0, 0, 1)$.</p>
| JimmyK4542 | 155,509 | <p><strong>Hint</strong>: The sum of the $x$-coordinate, $y$-coordinate, and $z$-coordinate for each of those points is $1$. </p>
<p>Now, turn that into an equation. </p>
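To turn the hint into a mechanical check: the normal of the plane is $\vec{AB}\times\vec{AC}$, which comes out proportional to $(1,1,1)$, so the plane is $x+y+z=1$ (a small illustrative computation):

```python
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

A, B, C = (0, 1, 0), (1, 0, 0), (0, 0, 1)
AB = tuple(b - a for a, b in zip(A, B))
AC = tuple(c - a for a, c in zip(A, C))
normal = cross(AB, AC)                     # (-1, -1, -1), proportional to (1, 1, 1)
d = sum(n * a for n, a in zip(normal, A))  # plane equation: normal . (x, y, z) = d
```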
|
3,566,083 | <p>I have a game that assigns the probability of finding items in treasure chests by making several independent checks until it fills a quota, with duplicates not allowed. I am trying to figure out how to calculate the resulting probabilities of finding each item - without breaking the bank in terms of calculation brute force.</p>
<p>For an example: The % chance of getting a small chest is 30. For a medium chest it's 15, and for a big chest it's 5. (There is no requirement that these add to 100.)</p>
<p>The algorithm:</p>
<ul>
<li>Roll random against the large's 5%.</li>
<li>If successful, it's a large. If not, roll random against the medium's 15%.</li>
<li>If successful, it's a medium. If not, roll random against the small's 30%.</li>
<li>If successful, it's a small. If not, return to rolling for the large, and repeat the process indefinitely until <em>something</em> is successful.</li>
</ul>
<p>This is then repeated over three layers. The layers are:</p>
<ul>
<li>size of treasure chest</li>
<li>item category (e.g. weapon or armour)</li>
<li>specific item (e.g. for weapons: sword or pike; for armour: helmet, gloves, or boots)</li>
</ul>
<p>So first the game makes this "roll until success" check to decide what chest to get. The chest selected determines which item pool is drawn from and how many items are needed. For each item needed, the game does a "roll until success" check for category, and then another for item. It repeats these two checks until it has the requisite number of items. Duplicates are not allowed; if an item would be a duplicate, the process is restarted. (Which I think is identical to changing the % chance of potential duplicates to 0.)</p>
<p>I am trying to, given the % chances for everything in all 3 layers, calculate the final probability that you will get a chest that has a specific item in it. I'm not looking for a one-time result of any particular example data: I'd like help in figuring out how to formulate the algorithm so I can apply it to any data.</p>
<p>This is giving me a headache for two reasons:</p>
<ul>
<li>It's really close to being a geometric distribution, but because the "success rate" at each step is not identical, it isn't. Also you have to fudge the meaning of "success" because what item you get matters.</li>
<li>Ignoring duplicates is a pain. If item 2 is in a different category than item 1, there's no effect. But if item 2 is in the same category - or if one category has been completely exhausted - the rest of the things in that level all have different rates for item 2.</li>
</ul>
<p>The brute force way of formulating this doesn't seem too hard (e.g. doing the <span class="math-container">$(1-p)^k(p)$</span> thing for each p in order a bunch of times). But I don't want to use brute force, because I have to present the data <a href="https://meta.wikimedia.org/wiki/Help:Calculation" rel="nofollow noreferrer">using MediaWiki</a>, which <em>can</em> do this given the variable and loop extensions but I imagine doing this a ton will not be ideal - taking 2 items from a chest that has 4 categories and 3 things in each category looks like I need 21 iterations of <span class="math-container">$(1-p)^k(p)$</span> (1 for "chance of picking this", 1 for each other option to get "chance of picking this given I previously picked that"). If possible, I'm looking for something more pragmatic.</p>
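Not part of the original question, but the single "roll until success" layer described above does have a closed form that avoids the brute force: one pass through the list picks option $i$ with probability $q_i = p_i\prod_{j<i}(1-p_j)$, and fails outright with probability $F=\prod_j(1-p_j)$; since a failed pass simply restarts, the final probability of option $i$ is $q_i/(1-F)$. A Python sketch of this (my own naming; the duplicate handling would still need the renormalisation discussed above):

```python
import random

def until_success_probs(ps):
    """Final probabilities for 'roll each option in order, restart on total failure'."""
    q, alive = [], 1.0
    for p in ps:
        q.append(alive * p)   # reach option i with all earlier rolls failed, then succeed
        alive *= 1.0 - p      # probability the pass is still unresolved
    return [x / (1.0 - alive) for x in q]   # condition on the pass resolving

def simulate(ps, trials=100_000, seed=1):
    """Direct simulation of the same procedure, for cross-checking."""
    rng = random.Random(seed)
    wins = [0] * len(ps)
    for _ in range(trials):
        while True:
            hit = next((i for i, p in enumerate(ps) if rng.random() < p), None)
            if hit is not None:
                wins[hit] += 1
                break
    return [w / trials for w in wins]
```

For chest sizes rolled large-first with chances 5%, 15%, 30% (as in the example), this gives roughly 11.5%, 32.8%, and 55.7%.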
<hr>
<p>Other notes:</p>
<ul>
<li>Some items can appear in more than one size of chest, with different rates. It should be easy enough to calculate them separately and proportionally add them together.</li>
</ul>
<p>Related-looking questions:</p>
<ul>
<li><a href="https://math.stackexchange.com/questions/2805529/algorithm-game-item-drop-probability">Exhibit A</a> - Very similar, but no answer.</li>
<li><a href="https://math.stackexchange.com/questions/1341703/what-is-the-probability-of-an-item-in-a-list-being-chosen-if-both-the-list-and-i">Exhibit B</a> - The same problem in essence (including the part where there's multiple levels of selection), but without the all-important "repeat until something succeeds" condition.</li>
</ul>
<hr>
<p>Examples of possible data:</p>
<pre><code>Set 1
1% large chest (pick 2)
50% weapon
30% big sword
10% claymore
20% armour
50% steel helmet
50% steel gauntlets
50% steel boots
10% jewel
20% ruby
30% sapphire
40% emerald
20% potion
10% red potion
20% blue potion
30% green potion
40% yellow potion
10% medium chest (pick 2)
50% potion
40% water
20% red potion
10% weapon
40% medium sword
20% armour
40% iron helmet
40% iron boots
100% small chest (10% chance of pick 2, 90% chance of pick 1)
100% creature bits
15% tail
70% hair
Set 2
5% large chest (pick 2)
50% weapon
30% claymore
30% big sword
20% giant club
30% armour
100% magic bracelet
10% jewel
50% emerald
100% diamond
30% potion
10% red potion
20% blue potion
30% green potion
40% yellow potion
15% medium chest (pick 2)
40% potion
80% water
10% red potion
5% weapon
40% small sword
20% medium sword
25% armour
40% iron helmet
40% iron boots
100% small chest (10% chance of pick 2, 90% chance of pick 1)
100% creature bits
40% tail
70% hair
</code></pre>
| Brian Moehring | 694,754 | <p>In general, we have <span class="math-container">$$\ln(b^x) = x\ln(b) + 2\pi i k$$</span> for some <span class="math-container">$k \in \mathbb{Z}$</span> (which may depend on both <span class="math-container">$x$</span> and <span class="math-container">$b$</span>), for any fixed branch of the complex logarithm.</p>
<p>In your case, this gives
<span class="math-container">$$(b^x)^y = \exp[y(\ln b^x)] = \exp[y(x\ln(b)+2\pi ik)] = b^{xy}\cdot\exp[2\pi iyk]$$</span></p>
<p>If <span class="math-container">$yk \in \mathbb{Z},$</span> then we can show <span class="math-container">$(b^x)^y = b^{xy}$</span> for that branch, but usually this isn't reasonable to assume unless <span class="math-container">$y \in \mathbb{Z}$</span>.</p>
|
214,705 | <p>If $a(x) + b(x) = x^6-1$ and $\gcd(a(x),b(x))=x+1$ then find a pair of polynomials of $a(x)$,$b(x)$.</p>
<p>Prove or disprove that there exists more than one distinct pair of such polynomials.</p>
| André Nicolas | 6,312 | <p>There is too much freedom. Let $a(x)=x+1$ and $b(x)=(x^6-1)-(x+1)$. Or else use $a(x)=k(x+1)$, $b(x)=x^6-1-k(x+1)$, where $k$ is any non-zero integer. </p>
|
1,914,536 | <p><strong>Problem</strong>: Solve $3^x \equiv 2 \pmod{29}$ using Shank's Baby-Step Giant-Step method.</p>
<p>I choose $k=6$ and calculated $3^i \pmod {29}$ for $i=1,2,...,6$.
$$3^1 \equiv 3 \pmod {29}$$
$$3^2 \equiv 9 \pmod {29}$$
$$3^3 \equiv 27 \pmod {29}$$
$$3^4 \equiv 23 \pmod {29}$$
$$3^5 \equiv 11 \pmod {29}$$
$$3^6 \equiv 4 \pmod {29}$$</p>
<p>Then I have calculated $3^{-1} \equiv 10 \pmod {29}$ and started calculating second list:</p>
<p>$$2 \cdot 3^{-6} \equiv 2 \cdot 10^6 \equiv 2 \cdot 22 = 44 \equiv 15 \pmod {29}$$
$$2 \cdot 3^{-12} \equiv 2 \cdot {3^{-6}}^2 \equiv 2 \cdot 15^2 \equiv 2 \cdot 22 = 44 \equiv 15 \pmod {29}$$
$$2 \cdot 3^{-18} \equiv 2 \cdot {3^{-12}}^2 \equiv 2 \cdot 22^2 \equiv 2 \cdot 20 = 40 \equiv 11\pmod {29}$$</p>
<p>And now I can stop. I can see that:
$$ 3^5 \equiv 11 \pmod {29}\ and\ 2 \cdot 3^{-18} \equiv 11 \pmod {29}$$
Therefore $x = 5 + 18 = 23$</p>
<p>But when I plugin $x=23$ above I get that $3^{23} \equiv 8 \pmod {29}$.
So where am I wrong?</p>
| clzola | 262,220 | <p>I found the flaw in my calculation: in the second expression of the second list, it should be</p>
<p>$$ 2 \cdot 3^{-12} = 2 \cdot (3^{-6})^2 \equiv 2 \cdot 22^2 = 2 \cdot 484 \equiv 11 \pmod {29}$$</p>
<p>Hence, $x = 5 + 12 = 17 $ which is correct.</p>
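The corrected value is easy to cross-check by exhaustive search (illustrative Python, independent of the baby-step giant-step bookkeeping):

```python
def discrete_log(base, target, mod):
    """Smallest x >= 0 with base**x congruent to target modulo mod, by exhaustion."""
    value = 1
    for x in range(mod):
        if value == target:
            return x
        value = value * base % mod
    return None
```

It also confirms the intermediate facts used above, e.g. $3^{-1}\equiv 10 \pmod{29}$ and $3^{23}\equiv 8 \pmod{29}$.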
|
2,447,014 | <p>I attempted to solve this question using a method learned from a previous answer on here. I was just looking for a bit more guidance.
This is what I have for this problem:</p>
<blockquote>
<p>$y=\frac{4x}{x+1}\\
y(x+1) =4x \\
yx + y = 4x \\
y = 4x-yx \\
y= x(4-y) \\
\frac{y}{4-y}=x \\
\frac{y}{4-y} > 0 \\
0 < y < 4 $</p>
</blockquote>
<p>Is this the correct approach? Also, it says to prove my answer. Would this just be doing the same steps in reverse?
Thanks,</p>
<p>EDIT: It came to my attention that proving it might just be plugging in the values and showing that they satisfy the original domain + target space. Is this correct?</p>
| Community | -1 | <p>$\dfrac {4x}{x+1}=\dfrac{4x+4-4}{x+1}=4-\dfrac {4}{x+1}$</p>
<p>When $x$ tends to $0$ then $4-\dfrac {4}{x+1}$ tends to $0$. When $x$ tends to $+\infty$ then $4-\dfrac {4}{x+1}$ tends to $4$.</p>
<p>So, because your function is continuous, its image is $(0,4)$.</p>
<p>Alternative approach:</p>
<p>$\dfrac {y}{4-y} >0$</p>
<p>This is possible if and only if $y>0$ and $4-y>0$, or if $y<0$ and $4-y<0$. See which of these two cases is possible.</p>
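A quick numerical look (assuming, as the limits above suggest, the domain $x > 0$) agrees with the image $(0,4)$:

```python
def f(x):
    """f(x) = 4x / (x + 1), considered on x > 0."""
    return 4.0 * x / (x + 1.0)

# Sample over many orders of magnitude of x.
samples = [f(10.0 ** k) for k in range(-6, 7)]
```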
|
265,549 | <p>$$ \left\{ x \in\mathbb{R}\; \middle\vert\; \tfrac{x}{|x| + 1} < \tfrac{1}{3} \right\}$$</p>
<p>What is the supremum and infimum of this set? I thought the supremum is $\frac{1}{3}$. But can we say, for any set of the form $\{x : x < n\}$, that $n$ is the supremum of the set? And for the infimum I have no idea at all. Also, let us consider this example:</p>
<p>$$ \left\{\tfrac{-1}{n} \;\middle\vert\; n \in \mathbb{N}_0\right\}$$</p>
<p>How can I find the infimum and supremum of this set? It confuses me a lot. I know that as $n$ gets bigger $\frac{-1}{n}$ asymptotically approaches $0$ and if $n$ gets smaller $\frac{-1}{n}$ approaches infinity, but that's about it. </p>
| Nameless | 28,087 | <p>Hint: Find the range of
$$f(x)=\frac{x}{\left|x\right|+1}-\frac13$$
This function is continuous and increasing.</p>
|
2,873,449 | <p>For a continuous function $f: R \to R $, determine:
$$ \lim_{x \to 0} \frac{1}{x^2} \int_{0}^{x} f(t)t \space dt $$
Since the function is continuous I can assume that it's also integrable since continuity implies integrability. I assume furthermore that there exists a function $F$ which is an antiderivative of $f$ for which the following are true:</p>
<p>$$\int_{a}^{b} f(x) dx =F(b)-F(a) \space\space\space\space\space a,b\in R \space $$
$$ \lim_{x \to x_{0}} \frac{F(x)-F(x_{0})}{x-x_{0}}=f(x_{0})$$
And that for $f$
$$ \lim_{x \to x_{0}} f(x)=f(x_{0})$$</p>
<p>In order to find the limit, I used partial integration and ended up with:
$$ \lim_{x \to 0} \frac{F(x)(x-1) +F(0)}{x^2}$$
At this point, I tried to use L'Hôpital's rule and ended up with the value $\frac{f(0)}{2}$, which seems totally wrong to me.
Any advice would be appreciated; I mainly think that my solution idea is wrong, but I am stuck.</p>
| Clement C. | 75,808 | <p><em>An approach not relying on L'Hopital's Rule.</em></p>
<p>We have, for $x\neq 0$ and with the change of variable $u=\frac{t}{x}$, $$
\frac{1}{x^2}\int_0^x t f(t)dt
= \int_0^1 u f(xu) du
$$
Now, it is not hard to prove that
$$
\lim_{x\to 0}\int_0^1 u f(xu) du
= \int_0^1 u \lim_{x\to 0} f(xu) du
= \int_0^1 u f(0) du
= f(0)\left [\frac{u^2}{2}\right]^1_0
= \frac{f(0)}{2}
$$
(where the swapping limit/integral can be justified e.g. by arguing about uniform convergence, using continuity of $f$).</p>
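As a sanity check with a concrete (arbitrary) choice $f(t)=\cos t$, for which $f(0)/2 = 1/2$: here $\int_0^x t\cos t\,dt = \cos x + x\sin x - 1$ (antiderivative $t\sin t + \cos t$), so the quotient can be evaluated in closed form:

```python
import math

def quotient(x):
    """(1/x^2) * integral_0^x t*cos(t) dt, via the closed-form antiderivative."""
    return (math.cos(x) + x * math.sin(x) - 1.0) / (x * x)
```

Indeed `quotient(x)` tends to $1/2$ as $x \to 0$; the small-$x$ expansion is $\frac12 - \frac{x^2}{8} + \cdots$.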
|
2,873,449 | <p>For a continuous function $f: R \to R $, determine:
$$ \lim_{x \to 0} \frac{1}{x^2} \int_{0}^{x} f(t)t \space dt $$
Since the function is continuous I can assume that it's also integrable since continuity implies integrability. I assume furthermore that there exists a function $F$ which is an antiderivative of $f$ for which the following are true:</p>
<p>$$\int_{a}^{b} f(x) dx =F(b)-F(a) \space\space\space\space\space a,b\in R \space $$
$$ \lim_{x \to x_{0}} \frac{F(x)-F(x_{0})}{x-x_{0}}=f(x_{0})$$
And that for $f$
$$ \lim_{x \to x_{0}} f(x)=f(x_{0})$$</p>
<p>In order to find the limit, I used partial integration and ended up with:
$$ \lim_{x \to 0} \frac{F(x)(x-1) +F(0)}{x^2}$$
At this point, I tried to use L'Hôpital's rule and ended up with the value $\frac{f(0)}{2}$, which seems totally wrong to me.
Any advice would be appreciated; I mainly think that my solution idea is wrong, but I am stuck.</p>
| Paramanand Singh | 72,031 | <p>Write $f(t) $ as $f(t) - f(0)+f(0)$ and then the desired limit is $$\lim_{x\to 0}\frac{1}{x^2}\int_{0}^{x}\{f(t)-f(0)\}t\,dt+\frac{f(0)}{2}\tag{1}$$ Next we can show that the first limit above is $0$. Let $\epsilon >0$ be given. Then by continuity we have a $\delta>0$ such that $$|f(x) - f(0)|<\epsilon $$ whenever $|x|<\delta$. Thus we have $$\left|\int_{0}^{x}\{f(t)-f(0)\}t\,dt\right |\leq\int_{0}^{x}|f(t)-f(0)|t\,dt<\frac{\epsilon x^2}{2}$$ whenever $0<x<\delta$. Similar inequality holds when $-\delta <x<0$. It thus follows that the first limit in $(1)$ above is $0$. Thus the desired limit is $f(0)/2$.</p>
|
3,528,946 | <p><a href="https://i.stack.imgur.com/JG89d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JG89d.png" alt="enter image description here"></a></p>
<p>This is an example from Friedberg's Linear Algebra book. It gives an example about the null space (denoted by <span class="math-container">$N(T)$</span>) and the range, denoted by <span class="math-container">$R(T)$</span>. What I don't understand is why <span class="math-container">$N(T_{0})=V$</span> and not <span class="math-container">$0$</span>. I thought that, since <span class="math-container">$N(T)$</span>={<span class="math-container">$x \in V: T(x)=0$</span>}, it should always be <span class="math-container">$0$</span>; why not?</p>
| gjl | 565,375 | <p>If <span class="math-container">$A^TAv = 0$</span> then also <span class="math-container">$AA^TAv = 0$</span> and <span class="math-container">$Av$</span> would be an eigenvector of eigenvalue <span class="math-container">$0$</span> unless <span class="math-container">$Av$</span> is the zero vector. So this is what you need to prove, that if <span class="math-container">$A^TAv = 0$</span> then <span class="math-container">$Av$</span> is the zero vector.</p>
<p>You have that <span class="math-container">$x = Av$</span> is in the column space of <span class="math-container">$A$</span>. But <span class="math-container">$A^TAv = A^Tx = 0$</span> which means that <span class="math-container">$x$</span> is also in the left nullspace of <span class="math-container">$A$</span>. By the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_algebra" rel="nofollow noreferrer">fundamental theorem of linear algebra</a> the only vector satisfying both is the zero vector.</p>
<p>Edit: There's a more straightforward proof.
<span class="math-container">$A^TAv = 0 \Rightarrow v^TA^TAv = 0 \Rightarrow (Av)^T(Av) = 0 \Rightarrow ||Av||^2 = 0 \Rightarrow Av = 0$</span></p>
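A small numeric illustration of this implication (editor-added; the matrix $A$ and vector $v$ are made-up examples, with the third column of $A$ equal to the sum of the first two):

```python
# Editor-added illustration: if A^T A v = 0 then A v = 0, since v^T A^T A v = ||Av||^2.
# A below is rank-deficient by construction (col3 = col1 + col2), so v = (1, 1, -1) works.
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 9.0],
     [7.0, 8.0, 15.0]]
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
v = [1.0, 1.0, -1.0]
AtAv = matvec(AtA, v)   # the zero vector
Av = matvec(A, v)       # also the zero vector, as the argument predicts
```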
|
1,405,787 | <p>We're given the following function :</p>
<p>$$f(x,y)=\dfrac{1}{1+x-y}$$</p>
<p>Now, how do we prove that the given function is differentiable at $(0,0)$?</p>
<p>I found the partial derivatives to be $f_x(0,0)=(-1)$ and $f_y(0,0)=1$.</p>
<p>Clearly the partial derivatives are continuous, but that doesn't guarantee differentiability, does it?</p>
<p>Is there any other way to prove the same?</p>
| mathcounterexamples.net | 187,663 | <p>You have several ways to approach the question.</p>
<p><strong>First one</strong>
$f$ is the ratio of two differentiable functions, the denominator one not vanishing in the neighborhood of the origin. Hence $f$ is differentiable at the origin.</p>
<p><strong>Second one</strong>
Using a theorem stating that if $f$ is continuous in an open set $U$ and has continuous partial derivatives in $U$ then $f$ is continuously differentiable at all points in $U$.</p>
<p><strong>Third one</strong>
Using the definition of the derivative, prove that $$\lim\limits_{(h,k) \to (0,0)} \frac{f(h,k)-f(0,0)+h-k}{\sqrt{h^2+k^2}}=0$$</p>
|
4,253,761 | <p><span class="math-container">$$\begin{array}{ll} \text{extremize} & xy+2yz+3zx\\ \text{subject to} & x^2+y^2+z^2=1\end{array}$$</span></p>
<p>How to find the maximum/minimum using Lagrange multipliers?</p>
<p>Context: This is not a homework problem, my friend and I often make up problems to challenge each other. We both love Maths and we are both students.</p>
<p>I have improved my answer based on user247327's suggestion, and I have found the maximum value of 2.056545. Thank you for contributing ideas to my questions.</p>
| ryang | 21,813 | <p>I think of the <strong>trivial case</strong> as the case that invariably arises as the problem’s parameters vary.</p>
<p>In particular, the <strong>trivial solution</strong> of a homogeneous system of linear equations is the zero vector.</p>
|
2,086,770 | <p>$[c^2, c^3, c^4] \text { parallel to/same direction as } [1,-2,4]$</p>
<p>Find $c$ if it exists.</p>
<p>How do I see if they are parallel and find a c?</p>
<p>Generally, we can say that $r[1,-2,4] = [c^2, c^3, c^4]$</p>
<p>But then I have two unknowns and I don't know how to solve like this?</p>
| Bernard | 202,857 | <p>As $[c^2,c^3,c^4]=c^2[1,c,c^2]$ is parallel to and has the same direction as $[1,c,c^2]$, this amounts to solving the same problem for the latter. Actually, one can obviously find $c$ such that $[1,c,c^2]$ is not only parallel, but <em>equal</em> to $[1,-2,4]$.</p>
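A quick editor-added check that $c=-2$ works: then $[c^2,c^3,c^4]=4\,[1,-2,4]$, a positive multiple of the given vector.

```python
# Editor-added check: with c = -2 the vector [c^2, c^3, c^4] is a positive
# multiple of [1, -2, 4], so it is parallel and points in the same direction.
c = -2
vec = (c ** 2, c ** 3, c ** 4)
r = vec[0]                          # candidate scale factor (coefficient of the 1)
same_direction = r > 0 and vec == (r * 1, r * -2, r * 4)
```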
|
1,201,904 | <p>I have to implement a circuit following the boolean equation A XOR B XOR C, however the XOR gates I am using only have two inputs (I am using the 7486 XOR gate to be exact, in case that makes a difference)... is there a way around this?</p>
| coffeemath | 30,316 | <p>Take the output of A XOR B and pipe it into an XOR having C as the other input.
(This implements (A XOR B) XOR C, and XOR is associative.)</p>
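Since the truth table is tiny, the associativity claim can be checked exhaustively; here is an editor-added sketch in Python:

```python
# Editor-added exhaustive check that XOR is associative, which justifies
# chaining two 2-input gates to get a 3-input XOR.
results = []
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            chained = (a ^ b) ^ c        # output of two 2-input gates in series
            parity = (a + b + c) % 2     # odd-parity definition of 3-input XOR
            results.append(chained == parity)
```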
|
2,707,749 | <p>I know this has to be extremely easy but I'm not going to solve this problem.</p>
<p>The task is to find the angle at point $A$.</p>
<p>Thanks!</p>
<p><img src="https://i.stack.imgur.com/8NTEO.png" alt="Here is the image."></p>
| ericw31415 | 333,981 | <p>Notice that $\angle EOF+\angle BOC+49^\circ+175^\circ=360^\circ$. It follows that $\angle EOF+\angle BOC=136^\circ$.</p>
<p>$\angle OCA=\dfrac{180^\circ-\angle BOC}{2}$ and $\angle OEA=\dfrac{180^\circ-\angle EOF}{2}$ because $\triangle OBC$ and $\triangle OEF$ are isosceles respectively.</p>
<p>Now $\angle OAC+\angle OCA+\angle BOC+\angle BOA=180^\circ$ and $\angle OAE+\angle OEA+\angle EOF+\angle FOA=180^\circ$. Summing these two gives $$\begin{align}
(\angle OAC+\angle OAE)+\angle OCA+\angle OEA+(\angle BOC+\angle EOF)+(\angle BOA+\angle FOA)&=360^\circ\\
\angle EAC+\angle OCA+\angle OEA+136^\circ+49^\circ&=\\
\angle EAC+\frac{180^\circ-\angle BOC}{2}+\frac{180^\circ-\angle EOF}{2}+136^\circ+49^\circ&=\\
2\angle EAC+(180^\circ-\angle BOC)+(180^\circ-\angle EOF)+370^\circ&=720^\circ\\
2\angle EAC+360^\circ-(\angle BOC+\angle EOF)+370^\circ&=\\
2\angle EAC+360^\circ-136^\circ+370^\circ&=\\
2\angle EAC&=126^\circ\\
\angle EAC&=63^\circ
\end{align}$$</p>
|
4,469,136 | <p>I took a number theory course this past semester and I found the idea of there being different primes in different fields interesting. The only fields that we did in detail were the set of reals and the set of Gaussian integers.</p>
<p>What other fields are there, and what numbers are prime in those fields?</p>
| anomaly | 156,999 | <p>Fields don't have any proper ideals, so the idea of looking literally at primes in a field isn't very interesting. The usual construction is to start with a number field <span class="math-container">$k$</span> (i.e., a finite extension of <span class="math-container">$\mathbb{Q}$</span>) and consider the algebraic integers <span class="math-container">$\cal{O}_k$</span>: those elements <span class="math-container">$\alpha\in k$</span> for which the minimal monic polynomial of <span class="math-container">$\alpha$</span> over <span class="math-container">$\mathbb{Q}$</span> has coefficients in <span class="math-container">$\mathbb{Z}$</span>. (For <span class="math-container">$k = \mathbb{Q}$</span> itself, the minimal polynomial of <span class="math-container">$\alpha$</span> is just <span class="math-container">$X - \alpha$</span>, and so <span class="math-container">$\cal{O}_\mathbb{Q} = \mathbb{Z}$</span>.) This turns out to be a ring with some nice properties that parallel the situation with <span class="math-container">$\mathbb{Z}$</span> and <span class="math-container">$\mathbb{Q}$</span>:</p>
<ul>
<li><span class="math-container">$\operatorname{Frac}{\cal O}_k = k$</span>;</li>
<li>Every ideal of <span class="math-container">${\cal O}_k$</span> is generated by (at most) two elements;</li>
<li>Every ideal of <span class="math-container">${\cal O}_k$</span> has a unique factorization into prime ideals of <span class="math-container">${\cal O}_k$</span>.</li>
</ul>
<p>Thus <span class="math-container">${\cal O}_k$</span> is almost a principal ideal domain, and it looks like a unique factorization domain if we consider ideals rather than individual elements.</p>
<p>To that end, we can consider the prime ideals inside <span class="math-container">${\cal O}_k$</span>. There's a whole area of number theory devoted to figuring out exactly what these prime ideals look like, but one of the basic questions is to figure out what happens to rational primes <span class="math-container">$p$</span> (i.e., ordinary primes in <span class="math-container">$\mathbb{Z}$</span>) inside <span class="math-container">${\cal O}_k$</span>: Does the ideal <span class="math-container">$(p)$</span> remain prime, or does it split into a product of prime ideals? (The third point in the list above means that there's no ambiguity about this product in the latter case.) The general situation is well understood, especially in the case where <span class="math-container">$k/\mathbb{Q}$</span> is Galois. The particular case of <span class="math-container">$k = \mathbb{Q}(\sqrt{d})$</span> for squarefree <span class="math-container">$d$</span> is particularly easy to state: For an odd rational prime <span class="math-container">$p$</span>, the ideal <span class="math-container">$(p)$</span> remains prime; has <span class="math-container">$(p) = \mathfrak{p}\mathfrak{q}$</span> for distinct prime ideals <span class="math-container">$\mathfrak{p},\mathfrak{q}\subset {\cal O}_k$</span>; or has <span class="math-container">$(p) = \mathfrak{p}^2$</span> for some prime ideal <span class="math-container">$\mathfrak{p}\subset {\cal O}_k$</span> depending on whether the Jacobi symbol <span class="math-container">$(d/p)$</span> is <span class="math-container">$-1, 1,$</span> or <span class="math-container">$0$</span>, respectively. This result is also one of many that leads to or leads from quadratic reciprocity. There's quite a lot more to say about the subject, and there are many other kinds of fields that come up just within number theory, but this is a start.</p>
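As an editor-added illustration of the quadratic case for $d=-1$ only (where the splitting of an odd prime $p$ in the Gaussian integers is equivalent to $p$ being a sum of two squares), a brute-force check that splitting matches $p \equiv 1 \pmod 4$:

```python
# Editor-added illustration for k = Q(i), d = -1: an odd prime p splits in Z[i]
# exactly when p = a^2 + b^2, which the theory says happens iff (-1/p) = 1,
# i.e. iff p = 1 (mod 4).
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sum_of_two_squares(p):
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        # check b and b+1 to guard against floating-point flooring
        if b * b == b2 or (b + 1) * (b + 1) == b2:
            return True
        a += 1
    return False

checks = [sum_of_two_squares(p) == (p % 4 == 1)
          for p in range(3, 100, 2) if is_prime(p)]
```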
|
2,819,667 | <p>The problem is the same as <a href="https://math.stackexchange.com/questions/14190/average-length-of-the-longest-segment">here</a>. </p>
<blockquote>
<p>A stick of 1m is divided into three pieces by two random points. Find the average length of the largest segment.</p>
</blockquote>
<p>I tried solving it in a different way, and the logic seems fine, however I get a different result to $\frac{11}{18}$. </p>
<p>Here is my solution. Please let me know what I did wrong. </p>
<p>Let $X$ be the length of the stick from the beginning to the first cut. $Y$ be the length of the stick between the first and second cut and $1-X-Y$ the length between the second cut and the end of the stick. </p>
<p>We want to find the CDF of the following random variable: $Z=\max(X,Y,1-X-Y)$. (I believe that if anything is wrong, this might be it).</p>
<p>$$\begin{split}
F_Z(z) = P(Z\leq z) & = P(\max(X,Y,1-X-Y) \leq z)\\ & = P(X\leq z, Y\leq z, 1-X-Y\leq z)\\ &= P(1-Y-z\leq X \leq z, Y\leq z)
\end{split}
$$</p>
<p>Since we have $1-Y-z\leq z$ we deduce that $Y\geq 1-2z$. Hence:
$$\begin{split}
F_Z(z) &= \int_{1-2z}^z\int_{1-y-z}^z 1 dx dy = \int_{1-2z}^z (z-1+y+z) dy\\ &= (2z-1)(z-1+2z) + \left. \frac{y^2}{2}\right|_{y=1-2z}^{y=z} \\ &=(2z-1)(3z-1) + \frac{1}{2}(z^2- (2z-1)^2) \\ & = (2z-1)(3z-1) +\frac{1}{2}(-3z^2 + 4z -1) \\ & = \frac{1}{2}(3z-1)^2
\end{split}
$$
Now, the pdf of $Z$ is :
$$f_Z(z) = \frac{d}{dz}F_Z(z) = 9z-3
$$</p>
<p>And now, in order to find the expected value of the largest length, we need to integrate over $(\frac{1}{3},1)$ as the largest piece needs to be greater than $\frac{1}{3}$. Hence</p>
<p>$$\begin{split}
E[Z] = \int_{\frac{1}{3}}^{1} z f_Z(z) dz = \int_{\frac{1}{3}}^{1} z (9z-3) dz = \frac{14}{9}
\end{split}
$$
The result is obviously wrong as it needs to be something between $0$ and $1$, however after going over the solution multiple times, and checking the calculations with Wolfram, I cannot seem to figure out what went wrong. </p>
| Pi_die_die | 564,003 | <p>This is just a suggestion, not an answer: could you try solving it keeping the distance of the first division as $x$ and the second as $y$, making the lengths of the segments $x$, $y-x$, and $1-y$?</p>
|
2,819,667 | <p>The problem is the same as <a href="https://math.stackexchange.com/questions/14190/average-length-of-the-longest-segment">here</a>. </p>
<blockquote>
<p>A stick of 1m is divided into three pieces by two random points. Find the average length of the largest segment.</p>
</blockquote>
<p>I tried solving it in a different way, and the logic seems fine, however I get a different result to $\frac{11}{18}$. </p>
<p>Here is my solution. Please let me know what I did wrong. </p>
<p>Let $X$ be the length of the stick from the beginning to the first cut. $Y$ be the length of the stick between the first and second cut and $1-X-Y$ the length between the second cut and the end of the stick. </p>
<p>We want to find the CDF of the following random variable: $Z=\max(X,Y,1-X-Y)$. (I believe that if anything is wrong, this might be it).</p>
<p>$$\begin{split}
F_Z(z) = P(Z\leq z) & = P(\max(X,Y,1-X-Y) \leq z)\\ & = P(X\leq z, Y\leq z, 1-X-Y\leq z)\\ &= P(1-Y-z\leq X \leq z, Y\leq z)
\end{split}
$$</p>
<p>Since we have $1-Y-z\leq z$ we deduce that $Y\geq 1-2z$. Hence:
$$\begin{split}
F_Z(z) &= \int_{1-2z}^z\int_{1-y-z}^z 1 dx dy = \int_{1-2z}^z (z-1+y+z) dy\\ &= (2z-1)(z-1+2z) + \left. \frac{y^2}{2}\right|_{y=1-2z}^{y=z} \\ &=(2z-1)(3z-1) + \frac{1}{2}(z^2- (2z-1)^2) \\ & = (2z-1)(3z-1) +\frac{1}{2}(-3z^2 + 4z -1) \\ & = \frac{1}{2}(3z-1)^2
\end{split}
$$
Now, the pdf of $Z$ is :
$$f_Z(z) = \frac{d}{dz}F_Z(z) = 9z-3
$$</p>
<p>And now, in order to find the expected value of the largest length, we need to integrate over $(\frac{1}{3},1)$ as the largest piece needs to be greater than $\frac{1}{3}$. Hence</p>
<p>$$\begin{split}
E[Z] = \int_{\frac{1}{3}}^{1} z f_Z(z) dz = \int_{\frac{1}{3}}^{1} z (9z-3) dz = \frac{14}{9}
\end{split}
$$
The result is obviously wrong as it needs to be something between $0$ and $1$, however after going over the solution multiple times, and checking the calculations with Wolfram, I cannot seem to figure out what went wrong. </p>
| Doug M | 317,162 | <p>Here is how I would do it.</p>
<p>Let's define $x$ to be the short stick, $y$ to be the medium stick and $z$ to be the long stick.</p>
<p>$x\le y\le z\\
z = 1-x-y\\
x\le y \le \frac {1-x}{2}\\
x\le \frac 13$</p>
<p>$$ \bar z = \frac {\displaystyle\int_0^\frac 13\int_x^{\frac {1-x}{2}} 1-x-y\ dy\ dx}{\displaystyle\int_0^\frac 13\int_x^{\frac {1-x}{2}} 1\ dy\ dx}$$</p>
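A Monte Carlo estimate (editor-added; the sample size and seed are arbitrary choices) agrees with the known answer $11/18 \approx 0.611$:

```python
import random

# Editor-added Monte Carlo check of the average longest piece; N and the seed
# are arbitrary choices.
random.seed(0)
N = 200000
total = 0.0
for _ in range(N):
    u, v = random.random(), random.random()
    lo, hi = min(u, v), max(u, v)
    total += max(lo, hi - lo, 1 - hi)   # the longest of the three pieces
estimate = total / N
```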
|
2,902,768 | <p>$f:\mathbb{R}^2 \to \mathbb{R}$</p>
<p>$f\Bigg(\begin{matrix}x\\y\end{matrix}\Bigg)=\begin{cases}\frac{xy^2}{x^2+y^2},(x,y)^T \neq(0,0)^T \\0 , (x,y)^T=(0,0)^T\end{cases}$</p>
<p>I need to determine all partial derivatives for $(x,y)^T \in \mathbb{R}^2$:</p>
<p>$f_x=y^2/(x^2+y^2)-2x^2y^2/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>$f_y=2xy/(x^2+y^2)-2xy^3/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>and $f_x=f_y=0$ for $(x,y)^T = (0,0)$.</p>
<p>Then I need to determine $\frac{\partial f}{\partial v}((0,0)^T)$ for all $v=(v_1,v_2)^T \in \mathbb{R}^2$.</p>
<p>I tried: $\frac{1}{s} (f(x+sv)-f(x))$ at $x=(0,0)^T$ is equal to $\frac{1}{s} f(sv)$=$\frac{1}{s} f\Big(\begin{matrix}sv_1\\sv_2\end{matrix}\Big)$.</p>
<p>Which is either equal to $0$ when the argument is $(0,0)^T$ or it is $\frac{1}{s}\frac{sv_1s^2v_2^2}{s^2v_1^2+s^2v_2^2}$, which equals $\frac{v_1v_2^2}{v_1^2+v_2^2}$ for every $s\neq 0$ and hence converges to it as $s \to 0$.</p>
<p>Is that correct so far?</p>
<p>And how do I know if $f$ is continuously partially differentiable on $\mathbb{R}^2$? According to our professor $f$ is not differentiable at $0$. How do I show that? As far as I know it has something to do with some map failing to be linear, but I don't know what exactly. So I guess it can't be continuously partially differentiable on $\mathbb{R}^2$ either, but I am not sure about that.</p>
<p>Thanks for your help!</p>
| Rigel | 11,776 | <p>The function $f$ is not differentiable at the origin.
You can verify this fact (at least) in two ways.</p>
<p>1) The map $v \mapsto \frac{\partial f}{\partial v} (\mathbf{0})$ is not a linear map.</p>
<p>2) Using the definition of differentiability.</p>
<p>In this second case, you have to show that
$$
\varphi(x,y) := \frac{f(x,y) - f(0,0) - \nabla f(0,0) \cdot (x,y)}{\sqrt{x^2+y^2}}
= \frac{xy^2}{(x^2+y^2)^{3/2}}
$$
does not go to $0$ as $(x,y) \to (0,0)$.</p>
<p>This fact can be easily checked since
$$
\varphi(x,x) = \frac{x^3}{2^{3/2}|x|^3}
$$
does not converge to $0$ as $x\to 0$.</p>
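An editor-added numeric view of the same obstruction: $\varphi$ is constant $2^{-3/2}$ along the diagonal but constant $0$ along the $x$-axis, so no single limit at the origin exists.

```python
# Editor-added: evaluating phi(x, y) = x*y^2 / (x^2 + y^2)^(3/2) along two paths
# to the origin shows two different constant values.
def phi(x, y):
    return x * y * y / (x * x + y * y) ** 1.5

along_diagonal = [phi(t, t) for t in (0.1, 0.01, 0.001)]  # stays at 1/2**1.5
along_axis = [phi(t, 0.0) for t in (0.1, 0.01, 0.001)]    # stays at 0
```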
|
2,902,768 | <p>$f:\mathbb{R}^2 \to \mathbb{R}$</p>
<p>$f\Bigg(\begin{matrix}x\\y\end{matrix}\Bigg)=\begin{cases}\frac{xy^2}{x^2+y^2},(x,y)^T \neq(0,0)^T \\0 , (x,y)^T=(0,0)^T\end{cases}$</p>
<p>I need to determine all partial derivatives for $(x,y)^T \in \mathbb{R}^2$:</p>
<p>$f_x=y^2/(x^2+y^2)-2x^2y^2/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>$f_y=2xy/(x^2+y^2)-2xy^3/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p>
<p>and $f_x=f_y=0$ for $(x,y)^T = (0,0)$.</p>
<p>Then I need to determine $\frac{\partial f}{\partial v}((0,0)^T)$ for all $v=(v_1,v_2)^T \in \mathbb{R}^2$.</p>
<p>I tried: $\frac{1}{s} (f(x+sv)-f(x))$ at $x=(0,0)^T$ is equal to $\frac{1}{s} f(sv)$=$\frac{1}{s} f\Big(\begin{matrix}sv_1\\sv_2\end{matrix}\Big)$.</p>
<p>Which is either equal to $0$ when the argument is $(0,0)^T$ or it is $\frac{1}{s}\frac{sv_1s^2v_2^2}{s^2v_1^2+s^2v_2^2}$, which equals $\frac{v_1v_2^2}{v_1^2+v_2^2}$ for every $s\neq 0$ and hence converges to it as $s \to 0$.</p>
<p>Is that correct so far?</p>
<p>And how do I know if $f$ is continuously partially differentiable on $\mathbb{R}^2$? According to our professor $f$ is not differentiable at $0$. How do I show that? As far as I know it has something to do with some map failing to be linear, but I don't know what exactly. So I guess it can't be continuously partially differentiable on $\mathbb{R}^2$ either, but I am not sure about that.</p>
<p>Thanks for your help!</p>
| leonbloy | 312 | <p>If a funcion is differentiable, the the directional derivative equals $\nabla_f \cdot v $. But here $\nabla_f =(0,0)$, hence all directional derivatives should be zero.</p>
<p>Now, setting $y=ax$ we get $$f(x,y)=f(x,ax)=\frac{ a x^3}{x^2 +a^2 x^2}=\frac{a}{1+a^2}x$$</p>
<p>Then, the derivative take different values (it's only zero for $a=0$... and $a=\infty$, which corresponds to the partial derivatives). Then, the function is not differentiable.</p>
|
6,990 | <p>The Fourier transform of a periodic function $f$ yields an $l^2$-sequence of the function's coefficients when $f$ is represented as a countable linear combination of $\sin$ and $\cos$ functions.</p>
<ul>
<li><p>To what extent can this be generalized to other countable sets of functions? For example, if we keep our inner product, can we obtain another Schauder basis by an appropriate transform? What can we say about the bases in general?</p></li>
<li><p>Does this generalize to other function spaces, say, periodic functions with one singularity?</p></li>
<li><p>What do these thoughts lead to when considering the continuous FT?</p></li>
</ul>
| Konstantin Slutsky | 896 | <p>It is not what you want, but it may be worth mentioning. There is a huge branch of abstract harmonic analysis on (abelian) locally compact groups, which generalizes the Fourier transform on the reals and the circle. The main point about sin and cos (or rather the complex exponential $e^{i n x}$) is that it is a character (a continuous homomorphism from the group to the circle), and it is not hard to see that those are the only characters of the circle. That is what makes the Fourier transform so powerful. If you generalize it along a direction which drops characters, you'll probably get a much weaker theory.</p>
|
6,990 | <p>The Fourier transform of a periodic function $f$ yields an $l^2$-sequence of the function's coefficients when $f$ is represented as a countable linear combination of $\sin$ and $\cos$ functions.</p>
<ul>
<li><p>To what extent can this be generalized to other countable sets of functions? For example, if we keep our inner product, can we obtain another Schauder basis by an appropriate transform? What can we say about the bases in general?</p></li>
<li><p>Does this generalize to other function spaces, say, periodic functions with one singularity?</p></li>
<li><p>What do these thoughts lead to when considering the continuous FT?</p></li>
</ul>
| Bob Bauer | 2,396 | <p>Any compact normal operator on a Hilbert space has an orthonormal basis of eigenvectors. If I remember correctly, the standard Fourier series comes from the second derivative operator on $L^2(0,2\pi)$ with boundary conditions $f(0)=f(2\pi)$ and $f'(0)=f'(2\pi)$. This operator is not compact, but its inverse is (and has the same eigenvectors). Using other compact normal operators (usually inverses of differential operators with certain boundary conditions) you obtain other orthonormal bases.</p>
|
2,083,475 | <p>This is the conic $$x^2+6xy+y^2+2x+y+\frac{1}{2}=0$$
the matrices associated with the conic are:
$$
A'=\left(\begin{array}{cccc}
\frac{1}{2} & 1 & \frac{1}{2} \\
1 & 1 & 3 \\
\frac{1}{2} & 3 & 1
\end{array}\right),
$$</p>
<p>$$
A=\left(\begin{array}{cccc}
1 & 3 \\
3 & 1
\end{array}\right),
$$</p>
<p>Its characteristic polynomial is: $p_A(\lambda) = \lambda^2-2\lambda-8$<br>
$A$ has eigenvalues of opposite sign $(\lambda = 4, \lambda = -2)$, so the conic is a hyperbola.
Then I found that the center of the conic is: $(-\frac{1}{16}, -\frac{5}{16})$<br>
Then with the eigenvalues I found the two lines passing through the center:
$$4x-4y-1=0$$
$$8x+8y+3=0$$</p>
<p><a href="https://i.stack.imgur.com/OlKbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OlKbQ.png" alt="enter image description here"></a></p>
<p>Now I want to find the foci and the asymptotes but I have no idea how to do it. Is there a way to find these two things from the data I have now, or do I need the canonical form of the conic? Thanks</p>
| Ng Chung Tak | 299,599 | <p>You've two principal axes:</p>
<p>\begin{align*}
x^2+6xy+y^2+2x+y+\frac{1}{2} & \equiv
A\left( \frac{4x-4y-1}{\sqrt{4^2+4^2}} \right)^2+
B\left( \frac{8x+8y+3}{\sqrt{8^2+8^2}} \right)^2+C \\
& \equiv
-2\left( \frac{4x-4y-1}{4\sqrt{2}} \right)^2+
4\left( \frac{8x+8y+3}{8\sqrt{2}} \right)^2+\frac{9}{32}
\end{align*}</p>
<p>Now
\begin{align*}
\frac{4x-4y-1}{4\sqrt{2}} &= x'\\
\frac{8x+8y+3}{8\sqrt{2}} &= y'\\
a &= \frac{3}{8} \\
b &= \frac{3}{8\sqrt{2}} \\
\frac{x'^2}{a^2}-\frac{y'^2}{b^2} &=1
\end{align*}</p>
<p>Asymptotes</p>
<p>$$b x' \pm a y'=0$$</p>
<p>Foci
$$(x',y')=(\pm \sqrt{a^2+b^2},0)$$</p>
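A numeric spot-check of the principal-axis decomposition above (editor-added; the sample points are arbitrary):

```python
import random

# Editor-added spot-check of the identity
# x^2+6xy+y^2+2x+y+1/2 == -2 x'^2 + 4 y'^2 + 9/32 at random points.
def lhs(x, y):
    return x * x + 6 * x * y + y * y + 2 * x + y + 0.5

def rhs(x, y):
    xp = (4 * x - 4 * y - 1) / (4 * 2 ** 0.5)
    yp = (8 * x + 8 * y + 3) / (8 * 2 ** 0.5)
    return -2 * xp ** 2 + 4 * yp ** 2 + 9 / 32

random.seed(1)
points = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
diffs = [abs(lhs(x, y) - rhs(x, y)) for x, y in points]
```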
|
2,083,475 | <p>This is the conic $$x^2+6xy+y^2+2x+y+\frac{1}{2}=0$$
the matrices associated with the conic are:
$$
A'=\left(\begin{array}{cccc}
\frac{1}{2} & 1 & \frac{1}{2} \\
1 & 1 & 3 \\
\frac{1}{2} & 3 & 1
\end{array}\right),
$$</p>
<p>$$
A=\left(\begin{array}{cccc}
1 & 3 \\
3 & 1
\end{array}\right),
$$</p>
<p>Its characteristic polynomial is: $p_A(\lambda) = \lambda^2-2\lambda-8$<br>
$A$ has eigenvalues of opposite sign $(\lambda = 4, \lambda = -2)$, so the conic is a hyperbola.
Then I found that the center of the conic is: $(-\frac{1}{16}, -\frac{5}{16})$<br>
Then with the eigenvalues I found the two lines passing through the center:
$$4x-4y-1=0$$
$$8x+8y+3=0$$</p>
<p><a href="https://i.stack.imgur.com/OlKbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OlKbQ.png" alt="enter image description here"></a></p>
<p>Now I want to find the foci and the asymptotes but I have no idea how to do it. Is there a way to find these two things from the data I have now, or do I need the canonical form of the conic? Thanks</p>
| Will Jagy | 10,400 | <p>These two examples are from a book that does not assume linear algebra... Both good and bad, as they are giving concrete methods, which are a bit much to memorize. Worth doing both ways, really, get them to agree; this way, then linear algebra and coordinate changes (translation to center followed by rotation). They do asymptotes and foci below</p>
<p><a href="https://i.stack.imgur.com/l6YRe.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l6YRe.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/sbcJo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sbcJo.jpg" alt="enter image description here"></a></p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Quixotic | 2,109 | <p>$$ \sin \theta \cdot \sin \bigl(60^\circ - \theta \bigr) \cdot \sin \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \sin 3\theta$$</p>
<p>$$ \cos \theta \cdot \cos \bigl(60^\circ - \theta \bigr) \cdot \cos \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \cos 3\theta$$</p>
<p>$$ \tan \theta \cdot \tan \bigl(60^\circ - \theta \bigr) \cdot \tan \bigl(60^\circ + \theta \bigr) = \tan 3\theta $$</p>
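These three identities are easy to confirm numerically; here is an editor-added check at a few angles (chosen to avoid the poles of the tangent):

```python
import math

# Editor-added numeric check of the three triple-angle product identities.
deg = math.pi / 180
ok = []
for t in (1, 10, 37, 52):   # angles in degrees
    th = t * deg
    s = math.sin(th) * math.sin(60 * deg - th) * math.sin(60 * deg + th)
    c = math.cos(th) * math.cos(60 * deg - th) * math.cos(60 * deg + th)
    ta = math.tan(th) * math.tan(60 * deg - th) * math.tan(60 * deg + th)
    ok.append(abs(s - math.sin(3 * th) / 4) < 1e-12)
    ok.append(abs(c - math.cos(3 * th) / 4) < 1e-12)
    ok.append(abs(ta - math.tan(3 * th)) < 1e-9)
```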
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| tomerg | 9,659 | <p>M. V. Subbarao's criterion: an integer $n>22$ is a prime number iff it satisfies,</p>
<p>$$n\sigma(n)\equiv 2 \pmod {\phi(n)}$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| draks ... | 19,341 | <p>What is 42?</p>
<p>$$
6 \times 9 = 42 \text{ base } 13
$$
I always knew that there is something wrong with this universe.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Doug Spoonwood | 11,300 | <p>$\lnot$(A$\land$B)=($\lnot$A$\lor$$\lnot$B) and
$\lnot$(A$\lor$B)=($\lnot$A$\land$$\lnot$B) (De Morgan's laws), because they mean that negation converts each form into an "equal form".</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| liaombro | 90,999 | <p>$$27\cdot56=2\cdot756,$$
$$277\cdot756=27\cdot7756,$$
$$2777\cdot7756=277\cdot77756,$$
and so on.</p>
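The pattern can be checked mechanically (editor-added sketch; the helper functions `a` and `b` simply build the digit strings):

```python
# Editor-added check of the pattern 27*56 = 2*756, 277*756 = 27*7756, ...
def a(k):
    # 2, 27, 277, 2777, ...  (a "2" followed by k sevens)
    return int("2" + "7" * k)

def b(k):
    # 56, 756, 7756, ...  (k-1 sevens followed by "56")
    return int("7" * (k - 1) + "56")

# the pattern says a(k) * b(k) == a(k-1) * b(k+1) for every k >= 1
checks = [a(k) * b(k) == a(k - 1) * b(k + 1) for k in range(1, 8)]
```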
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Stefan4024 | 67,746 | <p>Here's one clever trigonometric identity that impressed me in my high-school days. Add $\sin \alpha$ to both the numerator and the denominator of $\sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$, get rid of the square root, and nothing changes. In other words:</p>
<p>$$\frac{1 - \cos \alpha + \sin \alpha}{1 + \cos \alpha + \sin \alpha} = \sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$$</p>
<p>If you take a closer look you'll notice that the RHS is the formula for tangent of a half-angle. Actually if you want to prove those, nothing but the addition formulas are required.</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Vladimir Reshetnikov | 19,661 | <p>$$\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^\pi}dx=\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^e}dx$$</p>
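An editor-added numeric check, using the substitution $x=\tan t$ to turn both integrals into integrals over a bounded interval (both equal $\pi/4$, since $x\mapsto 1/x$ swaps the integrand with its complement):

```python
import math

# Editor-added check: after x = tan t both integrals become
# integral_0^{pi/2} dt / (1 + tan(t)^a), easy to evaluate by the midpoint rule.
def integral(a, n=20000):
    h = (math.pi / 2) / n
    return sum(h / (1 + math.tan((k + 0.5) * h) ** a) for k in range(n))

I_pi = integral(math.pi)
I_e = integral(math.e)
```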
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| ILikeMath | 86,744 | <p>If we define $P$ as the infinite lower triangular matrix where $P_{i,j} = \binom{i}{j}$ (we can call it the Pascal Matrix), then $$P^k_{i,j} = \binom{i}{j}k^{i-j}$$</p>
<p>where $P^k_{i,j}$ is the element of $P^k$ in the position $i,j.$</p>
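An editor-added brute-force check of the closed form for a small matrix size and power (the size $6$ and power $3$ are arbitrary choices):

```python
from math import comb

# Editor-added check that (P^k)_{i,j} = C(i, j) * k^(i-j) for the
# lower-triangular Pascal matrix P.
def pascal_power(n, k):
    # multiply the n x n Pascal matrix by itself k times
    P = [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = [[sum(R[i][t] * P[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
    return R

n, k = 6, 3
Pk = pascal_power(n, k)
formula = [[comb(i, j) * k ** (i - j) if j <= i else 0 for j in range(n)]
           for i in range(n)]
```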
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| happymath | 129,901 | <p><strong>Voronoi summation formula:</strong></p>
<p>$\sum \limits_{n=1}^{\infty}d(n)(\frac{x}{n})^{1/2}\{Y_1(4\pi \sqrt{nx})+\frac{2}{\pi}K_1(4\pi \sqrt{nx})\}+x \log x +(2 \gamma-1)x +\frac{1}{4}=\sum \limits _{n\leq x}'d(n)$</p>
|
4,556,592 | <p>Check if the applications defined below are linear transformations:</p>
<p>a) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_1 – 1, x_2).$</span></p>
<p>b) <span class="math-container">$T: \mathbb R^2 \to \mathbb R^2$</span>, <span class="math-container">$T(x_1, x_2) = (x_2, x_1).$</span></p>
<p>c) <span class="math-container">$T: M_{2,2} \to \mathbb R, T (\left [ \begin{matrix}
a & b \\
c & d \\
\end{matrix} \right ]) = a – 2b + c – 2d$</span>.</p>
<hr />
<p>My attempt:</p>
<p>a) Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_1 – 1 + v_1 – 1, u_2 + v_2) = (u_1 + v_1 – 2, u_2 + v_2) \ne T(u) + T(v)$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (c(u_1 - 1), cu_2) = (cu_1 - c, u_2) \ne cT(u)$</span>.</p>
<p>No.</p>
<p>b)Let <span class="math-container">$u = (u_1, u_2)$</span> e <span class="math-container">$v = (v_1, v_2)$</span></p>
<p><span class="math-container">$T(u+v) = T(u_1+v_1, u_2+v_2) = (u_2 + v_2 , u_1 + v_1) = T(v) + T(u).$</span></p>
<p><span class="math-container">$T(cu) = T(cu_1, cu_2) = (cu_2, cu_1) = c(u_2, u_1) \ne cT(u).$</span></p>
<p>No.</p>
<p>c)Let <span class="math-container">$u = (a, b, c, d)$</span> e <span class="math-container">$v = (e, f, g, h)$</span></p>
<p><span class="math-container">$T(u+v) = T([a, b, c, d] + [e, f, g, h]) = (a+b) – 2(b+f) + (c+g) – 2(d+h) = (a -2b +c -2d, e -2f + c -2h) = T(u) + T(v)$</span></p>
<p><span class="math-container">$T(ku) = T(k[a, b, c, d]) = ka – k2b + kc – k2d = k(a – 2b + c – 2d) = kT(u).$</span></p>
<p>Yes.</p>
<p>Are my checks correct?</p>
<p>Thanks.</p>
| A. P. | 1,027,216 | <ul>
<li>In a) you have <span class="math-container">$T(u+\beta v)=(u_{1}+\beta v_{1}-1,u_{2}+\beta v_{2})\not=T(u)+\beta T(v)$</span>. Hence, <span class="math-container">$T$</span> is not a linear transformation.</li>
<li>In b), <span class="math-container">$T(u+\beta v)=(u_{2}+\beta v_{2},u_{1}+\beta v_{1})=T(u)+\beta T(v)$</span>. Hence <span class="math-container">$T$</span> is a linear transformation.</li>
<li>In c) <span class="math-container">\begin{align*}T(u+\beta v)&=(a_{1}+\beta a_{2})-2(b_{1}+\beta b_{2})+(c_{1}+\beta c_{2})-2(d_{1}+\beta d_{2})\\&=T(u)+\beta T(v)\end{align*}</span>Hence <span class="math-container">$T$</span> is a linear transformation.</li>
</ul>
<p>Notice that <span class="math-container">$T$</span> is a linear transformation iff <span class="math-container">$T(u+\beta v)=T(u)+\beta T(v)$</span> for all <span class="math-container">$u,v\in V$</span> and <span class="math-container">$\beta\in \mathbb{F}$</span>, with the application <span class="math-container">$T:V\to W$</span> and vector spaces <span class="math-container">$V$</span> and <span class="math-container">$W$</span> defined over a field <span class="math-container">$\mathbb{F}$</span>.</p>
<p>In general your checks have a good structure, but you are not being careful with the arithmetic operations. For example, in a) you wrote <span class="math-container">$T(u+v)=(u_{1}-1+v_{1}-1,u_{2}+v_{2})$</span>, but that is not correct because <span class="math-container">$$T(u+v)=T((u_{1},u_{2})+(v_{1},v_{2}))=T(u_{1}+v_{1},\,u_{2}+v_{2})=(u_{1}+v_{1}-1,\,u_{2}+v_{2}),$$</span> which subtracts <span class="math-container">$1$</span> only once; what you computed is <span class="math-container">$T(u)+T(v)$</span>.
Then in b) you said <span class="math-container">$T(cu)\not=cT(u)$</span>, but writing out all the operations shows the opposite: the RHS is <span class="math-container">$cT(u)=c(u_{2},u_{1})=(cu_{2},cu_{1})$</span>, which is exactly what you found for <span class="math-container">$T(cu)$</span>, so in fact <span class="math-container">$T(cu)=cT(u)$</span>. The conclusion of c) is correct, but note that the transformation acts on <span class="math-container">$M_{2\times 2}$</span>, so it is clearer to write <span class="math-container">$\begin{pmatrix}a&b\\c&d\end{pmatrix}$</span> instead of the <span class="math-container">$4$</span>-tuple <span class="math-container">$[a,b,c,d]$</span>. If you write out all the operations, you reduce the risk of being wrong.</p>
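Editor-added numeric spot-checks of the three maps on sample vectors (the sample values are arbitrary); they agree with the conclusions above:

```python
# Editor-added spot-checks of the three maps on arbitrary sample inputs.
T_a = lambda x1, x2: (x1 - 1, x2)
T_b = lambda x1, x2: (x2, x1)
T_c = lambda a, b, c, d: a - 2 * b + c - 2 * d

u, v = (1.0, 2.0), (3.0, 4.0)

# a) additivity fails: T(u+v) subtracts 1 once, T(u)+T(v) subtracts it twice
a_fails = T_a(u[0] + v[0], u[1] + v[1]) != tuple(x + y for x, y in zip(T_a(*u), T_a(*v)))

# b) the swap map passes both additivity and homogeneity on this sample
b_add = T_b(u[0] + v[0], u[1] + v[1]) == tuple(x + y for x, y in zip(T_b(*u), T_b(*v)))
b_hom = T_b(5 * u[0], 5 * u[1]) == tuple(5 * x for x in T_b(*u))

# c) the matrix functional is additive on this sample
m1, m2 = (1.0, 2.0, 3.0, 4.0), (5.0, -1.0, 0.5, 2.0)
c_add = T_c(*[x + y for x, y in zip(m1, m2)]) == T_c(*m1) + T_c(*m2)
```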
|
4,388,974 | <p>Let <span class="math-container">$f$</span> be an analytic function on the closed unit disc with its center
at the point <span class="math-container">$\alpha\in\mathbb{R}$</span>; then:
<span class="math-container">\begin{equation*}
\int_0^\pi\frac{f\left(\alpha+e^{ix}\right)+f\left(\alpha+e^{-ix}\right)}{1+2p\cos(x)+p^2}\mathrm dx=\frac{2\pi}{1-p^2}f(\alpha+p) ,
\end{equation*}</span>
for <span class="math-container">$|p|<1$</span>.</p>
<p>My attempt: By
<span class="math-container">\begin{align*}
\therefore\quad \sum_{n=1}^{\infty}p^{n}\sin(nx)=\frac{p\sin (x)}{1-2p\cos (x)+p^2},\qquad|p|<1
\end{align*}</span>
adjusting <span class="math-container">$p\to -p$</span> and highlighting <span class="math-container">$\displaystyle \frac1{1+2p\cos(x)+p^2}$</span>:
<span class="math-container">\begin{align*}
\frac1{1+2p\cos(x)+p^2}=-\frac{1}{\sin(x)}\sum_{n=1}^\infty(-p)^{n}\sin(nx).
\end{align*}</span>
Thus:
<span class="math-container">\begin{align*}
\int_0^\pi\frac{f\left(\alpha+e^{ix}\right)+f\left(\alpha+e^{-ix}\right)}{1+2p\cos(x)+p^2}\mathrm dx=-\frac1p\sum_{n=1}^\infty(-p)^{n}\int_0^\pi\left\{f\left(\alpha+e^{ix}\right)+f\left(\alpha+e^{-ix}\right)\right\}\frac{\sin(nx)}{\sin(x)}\mathrm dx\tag{1}
\end{align*}</span>
by the <em>Dirichlet kernel</em> identity <span class="math-container">$\displaystyle \sum_{k=0}^{N-1}e^{2ikx}=e^{i(N-1)x}\frac{\sin(Nx)}{\sin(x)}$</span>, setting <span class="math-container">$N\to n$</span>, <span class="math-container">$n\in\mathbb{N}$</span>, and applying it in <span class="math-container">$(1)$</span>, it follows that:
<span class="math-container">\begin{align*}
&=-\sum_{n=1}^\infty(-p)^{n}\sum_{k=0}^{n-1}\int_0^\pi\left\{f\left(\alpha+e^{ix}\right)+f\left(\alpha+e^{-ix}\right)\right\}e^{-(n-1)x}e^{2ikx}\mathrm dx,\quad\left(e^{ix}\to z\right)\\
&=...
\end{align*}</span><br />
At this point I'm out of ideas. I would like some light on my last step, or another approach that is similar to this one.</p>
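As a sanity check, the Dirichlet-kernel identity <span class="math-container">$\sum_{k=0}^{N-1}e^{2ikx}=e^{i(N-1)x}\,\sin(Nx)/\sin(x)$</span> can be verified numerically at a few sample points:

```python
import cmath
import math

# Spot-check sum_{k=0}^{N-1} e^{2ikx} = e^{i(N-1)x} * sin(Nx)/sin(x)
# at sample points where sin(x) != 0.
for N in (1, 2, 5, 8):
    for x in (0.3, 1.1, 2.7):
        lhs = sum(cmath.exp(2j * k * x) for k in range(N))
        rhs = cmath.exp(1j * (N - 1) * x) * math.sin(N * x) / math.sin(x)
        assert abs(lhs - rhs) < 1e-12
print("identity verified")
```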
| Quanto | 686,284 | <p>Integrate as follows
<span class="math-container">\begin{align}
&\int_0^\pi\frac{f\left(\alpha+e^{ix}\right)+f\left(\alpha+e^{-ix}\right)}{1+2p\cos x+p^2} dx\\
=&\int_0^\pi\frac{\sum_{k=0}^\infty \frac{f^{(k)}(\alpha)}{k!}\left(e^{ikx}+e^{-ikx}\right)}{1+2p\cos x+p^2} dx
= 2\int_0^\pi\frac{\sum_{k=0}^\infty \frac{f^{(k)}(\alpha)}{k!} \cos(kx)}{1+2p\cos x+p^2} dx\\
=& \> 2\sum_{k=0}^\infty \frac{f^{(k)}(\alpha)}{k!} \int_0^\pi \frac{\cos(kx)}{1+2p\cos x+p^2} dx\\
=& \> 2\sum_{k=0}^\infty \frac{f^{(k)}(\alpha)}{k!}\cdot \frac{\pi (-p)^k}{1-p^2}=
\frac{2\pi}{1-p^2}f(\alpha-p)
\end{align}</span>
where
<span class="math-container">$$\eqalign{
\frac{1-p^2}{1+2p\cos x+p^2}
=1+2\sum_{j=1}^\infty (-p)^j\cos(j x)
}
$$</span>
is used to integrate
<span class="math-container">$\int_0^\pi \frac{\cos(kx)}{1+2p\cos x+p^2} dx=\frac{\pi (-p)^k}{1-p^2}$</span>.</p>
|
495,069 | <p>Find an equation of the plane.
The plane that passes through the line of intersection of the planes
$x − z = 1$ and $y + 4z = 1$
and is perpendicular to the plane
$x + y − 2z = 2$.</p>
<p>I keep getting the answer of $7x-y+5z=6$ and I am told that it is wrong. I do not understand what I am doing wrong.
I have the 1st normal vector as $\langle 1,0,-1\rangle$ and the second normal vector as $\langle0,1,4\rangle$,
and when I took their cross product, I got $\langle1,-4,1\rangle$ as the direction of the line.
I then took a 2nd vector parallel to the desired plane, $\langle1,1,-2\rangle$, since it is perpendicular to $x+y-2z=2$.
I got a normal vector for the desired plane by taking the cross product of these two vectors, and the result was $\langle 7,-1,5\rangle$.
Then I plugged $\langle7,-1,5\rangle$ into the scalar equation of the plane for $\langle a,b,c\rangle$ and used the point $(1,1,0)$,
and for my final answer I got $7x-y+5z=6$.
I need help figuring out where I went wrong.</p>
| Pratyush Sarkar | 64,618 | <p>You made a small error in your last cross product. The result is $(7, 3, 5)$. So the equation is of the form $7x + 3y + 5z = c$. Using the point $(1, 1, 0)$, we get $c = 10$. So the equation of the plane is $7x + 3y + 5z = 10$.</p>
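A quick numerical check of the corrected computation, with a hand-rolled cross product (no external libraries):

```python
# Verify both cross products and the plane through (1, 1, 0).
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

d = cross((1, 0, -1), (0, 1, 4))  # direction of the line of intersection
print(d)                          # (1, -4, 1)
n = cross(d, (1, 1, -2))          # normal of the desired plane
print(n)                          # (7, 3, 5)
# Plane 7x + 3y + 5z = c through the point (1, 1, 0):
print(7 * 1 + 3 * 1 + 5 * 0)      # 10
```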
|
39,828 | <p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that.</p>
<p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p>
<ul>
<li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li>
<li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li>
<li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li>
</ul>
<p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p>
<p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting.</p>
<blockquote>
<p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p>
</blockquote>
<p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p>
<p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p>
<p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. But one broad question behind my specific one is</p>
<blockquote>
<p>How much would you subscribe to the statement that
EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p>
</blockquote>
<p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p>
<hr>
<p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p>
<hr>
<p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
| Per Alexandersson | 1,056 | <p>Right now? "If you can use it in Quantum mechanics, it is worth studying" seems to be the general idea behind a lot of math these days (I have 2 ongoing articles with this explanation)..</p>
|
39,828 | <p><em>(Question repeated verbatim; see above.)</em></p>
| Jon Bannon | 6,269 | <p>In some sense, mathematical structure is simply analogy at a very high level. One tries to fill in details in a way that is likely to pay off. (E.g. looking for a natural way to make a semigroup you are looking at into a group may just pay off, simply because groups are ubiquitous and useful.) This may be the reason why an eye toward mathematical structure is a good thing to cultivate. This is usually a decent way to meet algebraic problems that need attention, when a "picture" needs to be filled in. Ultimately, this "picture" should provide some unification or better understanding of diverse phenomena, or the solution of a reticent problem. Looking for or working on mathematical (or simply algebraic) structure is just another strategy for building a better conceptual picture of the mathematical landscape.</p>
|
39,828 | <p><em>(Question repeated verbatim; see above.)</em></p>
| Ronnie Brown | 19,949 | <p>I have just now looked again at this interesting blog and thought to add a few points. </p>
<p>1) Methodology: You could read the comments of Grothendieck on "speculation": <a href="http://groupoids.org.uk/Grothendieck-speculation.html" rel="nofollow noreferrer">http://groupoids.org.uk/Grothendieck-speculation.html</a>. I also think that in private one should test an idea "beyond the bounds of human thought": that is, just for fun, take it as far as you think it can possibly go, assuming all goes as well as possible. This I call the "ideal scenario". If, under the ideal scenario, the result does not look all that exciting, then you might put it aside. On the other hand, if, under the ideal scenario, the result would be wonderful, then you might say to yourself: "Life is not like that, there must be some obstructions to this working." So you look for obstructions, small things that you think you might be able to do. If these obstructions turn out to be real, then that would be interesting, and you should modify your scenario. On the other hand, if these obstructions disappear one by one, that would be even more interesting! Either way, this is a win-win research strategy. If some negative person (these abound in mathematics!) says "your idea cannot work because....", then that gives another obstruction to work on. </p>
<p>I also like the idea of writing a (draft!) paper on your new idea, in which a key part is the Introduction, which should be as free ranging as possible, following flights of fancy, catching ideas as they occur. These can always be later relegated to another document (the great advantage of mathematical wordprocessing). The process of writing can make these ideas more real. So can talking about them, though you do sometimes get funny looks from superior people! </p>
<p>You may write a draft 4 times, ending in failure, then the fifth time the paper writes itself! (It took me 9 years, and many drafts which ran into sand, trying to write a paper on a new homotopy double groupoid, before realising with Philip Higgins in 1974 that it was useful to try a definition for a pair of spaces, rather than a plain space!) </p>
<p>2) The composer Ravel said you should copy. If you have some originality, then this might come out as you copy. If not, then never mind! I feel copying is a way of getting the rusty wheels of the brain slowly turning! The originality may come out later. So I advise trying to write up a known piece of mathematics in as "nice" a way as you can. Nothing can be lost by this. </p>
<p>3) A question for Scott: Is there a (hopefully useful) groupoid version of quandles related to the fundamental groupoid and a "peripheral subgroupoid"? </p>
<p>4) A dictum of the algebraist Philip Hall was that one should try to make the algebra model the geometry rather than force the geometry into an already existing algebraic mold. For me, an example of this "forcing" is to try and get a group, and then bring in the idea of change of base point, when the naturally occurring structure is a groupoid. There are many other examples! </p>
|
39,828 | <p><em>(Question repeated verbatim; see above.)</em></p>
| Timothy Chow | 3,106 | <p>I'm going to interpret your question in the language of Gowers's "two cultures" essay as follows:</p>
<blockquote>
<p>How does one get good at theory-building?</p>
</blockquote>
<p>The process of developing a good theory can seem deceptively simple. One takes some definitions, perhaps by generalizing some known definitions, and deduces simple consequences of them. In comparison with the work required to solve a hard problem, this seems easy---perhaps <i>too</i> easy. The catch, of course, is the one you raised: there is a significant risk of spending a lot of time studying something that ultimately has very little mathematical value. Of course there is also the risk of wasted effort when trying to solve a specific problem, but in that case, it's at least clear what you were trying to accomplish. In the case of theory-building, the signposts are less clear; maybe you succeeded in proving some things, so your efforts weren't entirely fruitless, but at the same time, how do you know that you actually got somewhere when there was no clear endpoint?</p>
<p>The number one principle that I keep in mind when trying to build a theory is this:</p>
<blockquote>
<p>Relentlessly pursue the goal of understanding <i>what's really going on</i>.</p>
</blockquote>
<p>I'm reminded of a wonderful sentence that Loring Tu wrote in his May 2006 <i>Notices</i> article on "The Life and Works of Raoul Bott." Tu wrote, "I. M. Singer remarked that in their younger days, whenever they had a mathematical discussion, the most common phrase Bott uttered was “I don't understand,” and that a few months later Bott would emerge with a beautiful paper on precisely the subject he had repeatedly not understood." Von Neumann reportedly said that in mathematics, you don't understand things; you just get used to them. This can be valuable advice to a young mathematician who hasn't yet grasped that the reason we're doing research is precisely that we don't really understand what we're doing. However, the key to theory-building is to insist on thorough understanding, <i>especially of things that are widely regarded as being already understood</i>. Often, such subjects are not really as well understood as others would have you believe. If you start asking probing questions---why are things defined this way and not that way? why doesn't this argument actually prove something more (or maybe it does?)?---you will find surprisingly often that what seems like a very basic question has not really been addressed before.</p>
<p>You asked:</p>
<blockquote>
<p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p>
</blockquote>
<p>My reply is that the generalization is worth studying if it helps you understand the original concept better. Perhaps the generalization was obtained by weakening an axiom, and you can now see more clearly that certain theorems hold more generally while others don't, so you get some insight into which specific hypotheses of your original object are needed for which conclusions. The heuristic as you've stated it, on the other hand, doesn't sound too convincing to me. I see too much risk of wandering off into a fruitless direction if you're not firmly grounded in trying to understand your original object better.</p>
<p>Keeping firmly in mind that your goal is a thorough understanding of some particular subject is also important because your efforts will, at least initially, not be greeted with enthusiasm by others. You will appear to be a complete idiot who doesn't understand even very basic things that other people think are obvious. Even when you start getting some fresh insights, they will seem trivial to others, who will claim that they "already knew that" (which they probably did, implicitly if not explicitly). Constantly adjusting definitions also appears to others to be an unproductive use of time. Even if you get to the point where your approach leads to a new and wonderfully clear presentation of the subject, and raises important new questions that nobody thought to ask before, you may not get credit for original thinking. Thus it is important that your internal compass is pointed firmly in the right direction. To repeat: ask yourself, am I driving towards an understanding of <i>what's really going on</i> in this important piece of mathematics? If so, keep at it. If not, then you've lost the thread somewhere along the way.</p>
|
3,843,559 | <p>Can someone please give me a hint why for metric spaces we have</p>
<p><span class="math-container">$d_1(x,y)<d(x,y)\Rightarrow \{x|d(x,y)<\varepsilon\}\subset \{x|d_1(x,y)<\varepsilon\}$</span></p>
<p>I have expected the opposite:</p>
<p><span class="math-container">$d_1(x,y)<d(x,y)\Rightarrow \{x|d(x,y)<\varepsilon\}\supset \{x|d_1(x,y)<\varepsilon\}$</span></p>
| Victor Hugo | 322,450 | <p>Yes, we claim that <span class="math-container">$\{x:d(x,y)<\varepsilon\}\subset \{x:d_1(x,y)<\varepsilon\}$</span>. Indeed, let <span class="math-container">$x_0 \in \{x:d(x,y)<\varepsilon\}$</span> then, <span class="math-container">$d(x_0,y)<\varepsilon$</span>. Since <span class="math-container">$d_1(x_0,y)<d(x_0,y)$</span> we obtain <span class="math-container">$d_1(x_0,y)<\varepsilon$</span>, that is, <span class="math-container">$x_0 \in \{x: d_1(x,y)<\varepsilon\}$</span></p>
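A toy numerical illustration of the inclusion (assumptions: the real line with the usual metric, and the hypothetical smaller metric $d_1 = d/2$):

```python
# Toy illustration on the real line with the hypothetical smaller metric
# d1 = d/2 (so d1(x, y) < d(x, y) whenever x != y).
def d(x, y):
    return abs(x - y)

def d1(x, y):
    return abs(x - y) / 2

y, eps = 0.0, 1.0
pts = [k / 100 for k in range(-300, 301)]     # sample points in [-3, 3]
d_ball = {x for x in pts if d(x, y) < eps}    # the (smaller) d-ball
d1_ball = {x for x in pts if d1(x, y) < eps}  # the (larger) d1-ball
print(d_ball <= d1_ball)   # True: the d-ball is contained in the d1-ball
print(d1_ball <= d_ball)   # False: the inclusion is strict here
```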
|
<p>I know this is a soft and opinion-based question, and I risk having it closed/downvoted, but I still wanted to know what other people who are interested in mathematics think about my question.</p>
<p>Whenever people are talking about the most beautiful equation/identity Euler's identity is cited in this fashion:</p>
<p>$$e^{i\pi}+1=0.$$</p>
<p>While I would agree that this is a beautiful identity (see my avatar) I personally always wondered why not </p>
<p>$$e^{2i\pi}-1 = 0$$</p>
<p>is the most beautiful identity. It has $e$, $i$, $\pi$, $0$ and the number $2$ in it. I prefer it because the number $2$ is the first and at the same time the only even prime number. Having the prime numbers, which are in some way the atoms of mathematics, included makes this formula even more pleasant for me. The minus sign seems a little bit "negative" but the good part is that it is displaying the principle of inversion.</p>
<blockquote>
<p>So my question is, why is this not the form in which it is most often
presented?</p>
</blockquote>
| Lwins | 134,950 | <p>In my opinion, you definitely have the authority to decide for yourself what is beautiful and what is not. Your post reminds me of the constant $\tau = 2\pi$ (see <a href="https://tauday.com/" rel="nofollow noreferrer">https://tauday.com/</a>), which some people consider more "beautiful" and more "natural" than $\pi$, since so many formulas involve $2\pi$.</p>
<p>I personally think that we should be tolerant of different perceptions of beauty. If you think
$$ e^{\pi i} + 1 = 0 $$
is the greatest, it is fine. For those people who consider
$$ e^{2 \pi i} - 1 = 0$$
as the most fascinating I would say it is totally OK. And in case a person insists that
$$ \sqrt{2} e^{\pi i/4} = 1+i $$
is the best (since it can imply the above two equations) I would not refute because it is more like a personal preference.</p>
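All three identities can be checked numerically with Python's `cmath` (my addition; note that the last one uses the angle $\pi/4$, since $\sqrt2\,e^{i\pi/4} = 1+i$):

```python
import cmath

# Each identity holds up to floating-point rounding.
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12   # e^{i pi} + 1 = 0
assert abs(cmath.exp(2j * cmath.pi) - 1) < 1e-12   # e^{2 i pi} - 1 = 0
assert abs((2 ** 0.5) * cmath.exp(1j * cmath.pi / 4) - (1 + 1j)) < 1e-12  # sqrt(2) e^{i pi/4} = 1 + i
```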
|
1,729,893 | <p>Let $\nu_1, \nu_2, \nu_3 \in \mathbb{R}$ not all be zero.</p>
<p>I wish to show
$$\nu_1^2 + \nu_1\nu_2 + \nu_2^2+\nu_2\nu_3 + \nu_3^2 > 0\text{.}$$</p>
<p><a href="http://www.wolframalpha.com/input/?i=x%5E2%2Bx*y%2By%5E2%2By*z%2Bz%5E2+%3E+0" rel="nofollow">Wolfram</a> seems to suggest splitting this into cases, but I'm wondering if there's a shorter way to approach this. This expression seems very similar to the binomial expansion, minus the fact that we have three terms that are squared and two cross-terms occurring (rather than one cross-term multiplied by $2$).</p>
<p>As for my work, if any one of these are $0$, (I think) this is a trivial exercise. If any two of these are $0$, this is a trivial exercise (you're left with a squared non-zero term). But if all three are non-zero? Then I'm at a loss on how to pursue this, because there isn't a clean way to deal with three variables (or is there?). I've tried seeing if Wolfram could perhaps factor the above. It can't, but maybe, I thought, we could try working with
$$(\nu_1 + \nu_2 + \nu_3)^2 = (\nu_1^2 + \nu_1\nu_2 + \nu_2^2+\nu_2\nu_3 + \nu_3^2) +2\nu_1\nu_3+\nu_1\nu_2+\nu_2\nu_3$$
but there is no guarantee that this is $> 0$ either (take $\nu_3 = -(\nu_1 + \nu_2)$, for example).</p>
| user5713492 | 316,404 | <p>I could more or less instantly spot
$$\nu_1^2+\nu_1\nu_2+\nu_2^2+\nu_2\nu_3+\nu_3^2=(\nu_1+\frac12\nu_2)^2+\left(\frac1{\sqrt2}\nu_2\right)^2+(\frac12\nu_2+\nu_3)^2$$</p>
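A numeric spot check of this decomposition (my addition, not part of the answer):

```python
import random

# Check the sum-of-squares decomposition at random points.
for _ in range(1000):
    v1, v2, v3 = (random.uniform(-10, 10) for _ in range(3))
    lhs = v1**2 + v1*v2 + v2**2 + v2*v3 + v3**2
    rhs = (v1 + v2/2)**2 + (v2 / 2**0.5)**2 + (v2/2 + v3)**2
    assert abs(lhs - rhs) < 1e-9
    assert lhs >= 0  # positivity is now evident from the right-hand side
```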
|
239,900 | <p>Hatcher states the following theorem on page 114 of his Algebraic Topology:</p>
<blockquote>
<p>If $X$ is a space and $A$ is a nonempty closed subspace that is a deformation retract of some neighborhood in $X$, then there is an exact sequence
$$...\longrightarrow\widetilde{H}_n(A)\overset{i_*}\longrightarrow \widetilde{H}_n(X)\overset{j_*}\longrightarrow\widetilde{H}_n(X/A)\overset{\partial}\longrightarrow \widetilde{H}_{n-1}(A)\overset{i_*}\longrightarrow... $$</p>
<p>where $i: A\hookrightarrow X$ is the inclusion and $j:X \rightarrow X/A$ is the quotient map. </p>
</blockquote>
<p>Perhaps I am having a brain malfunction at the moment, but what are some interesting nonempty spaces which do not satisfy this criterion? By interesting, I mean something that appears "in nature." </p>
| Elchanan Solomon | 647 | <p>If you're looking for an arbitrary topological space with this property, take $X = \{a,b,c\}$ with the topology $\mathcal{T} = \{\emptyset, \{a,b,c\},\{c\},\{a,c\},\{b,c\}\}$, and consider the closed subset $C = \{a,b\}$. The only open set containing $C$ is the whole space $X$, so a retraction would have to be defined on all of $X$. Suppose $r(a) = a, r(b) = b, r(c) = b$. The set $\{a\} = C \cap \{a,c\}$ is open in $C$, but its preimage under $r$ is $\{a\}$, which is not an open set in $X$, so $r$ is not continuous. The other candidate retraction, with $r(c) = a$, fails symmetrically with the open set $\{b\}$.</p>
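Since the space is finite, this can even be verified by brute force (a sketch I added; it simply tests continuity of both candidate retractions):

```python
X = {'a', 'b', 'c'}
opens_X = [set(), {'a', 'b', 'c'}, {'c'}, {'a', 'c'}, {'b', 'c'}]
C = {'a', 'b'}
# Subspace topology on C: intersections of open sets of X with C.
opens_C = [U & C for U in opens_X]

def continuous(r):
    # r: X -> C is continuous iff preimages of open sets of C are open in X.
    return all({x for x in X if r[x] in V} in opens_X for V in opens_C)

# Candidate retractions fix a and b, so only r(c) varies.
retractions = [{'a': 'a', 'b': 'b', 'c': image_c} for image_c in C]
assert not any(continuous(r) for r in retractions)  # no continuous retraction X -> C
```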
|
698,474 | <p>I am trying to find $\varphi(18)$. Using an online calculator, it says it is $6$, but I'm getting $4$.
<br/>
The method I am using is to break $18$ down into primes and then multiply the values of $\varphi$ at those primes.</p>
<p>$$=\varphi (18)$$
$$=\varphi (3) \cdot \varphi(3) \cdot \varphi(2)$$
$$= 2 \cdot 2 \cdot 1$$
$$= 4$$</p>
| Community | -1 | <p>The phi function is <a href="http://en.wikipedia.org/wiki/Multiplicative_function">multiplicative</a>, but not completely multiplicative: Thus if $a, b$ are relatively prime, we have that</p>
<p>$$\varphi(ab) = \varphi(a) \varphi(b)$$</p>
<p>but this is <em>not</em> necessarily true if $a$ and $b$ have a common prime factor. In particular, it's true that</p>
<p>$$\varphi(18) = \varphi(2) \varphi(9)$$</p>
<p>but not $\varphi(2) \varphi(3)^2$. </p>
|
1,243,750 | <p>$A \in \mathbb{R}^{n\times n}$, with $A^2 = 1$ and $A\ne\pm1$</p>
<p>Show that the only eigenvalues of $A$ are $1$ and $-1$.</p>
| BaronVT | 39,526 | <p>Hint: if $\lambda$ is an eigenvalue of $A$, what can you say about eigenvalues of $A^2$?</p>
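Following the hint, here is a concrete sketch (my own example, not from the answer): the swap matrix satisfies $A^2 = I$ and $A \neq \pm I$, and its eigenvalues are exactly $\pm 1$:

```python
# A 2x2 example: A swaps the coordinate axes, so A^2 = I but A != +-I.
A = [[0, 1],
     [1, 0]]

# Check A^2 = I.
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert A2 == [[1, 0], [0, 1]]

# Eigenvalues of a 2x2 matrix from the characteristic polynomial
# lambda^2 - tr(A)*lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = (tr**2 - 4 * det) ** 0.5
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
assert eigs == [-1.0, 1.0]
```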
|
1,105,352 | <p>Here are a few definition from my workbook:</p>
<p>Let $a\in\Bbb{R}\cup\{+\infty, -\infty\}$, $D\subset\Bbb{R}$. Then point $a$ is a limit point of $D$ if and only if there exists a sequence $\{x_n\}_{n \geq n_0}$ with terms in set $D - \{a\}$ such that $x_n \rightarrow a$.</p>
<p>And now we have a limit of a function $f: D \rightarrow \Bbb{R}$ where $a$ is a limit point of $D$ and $g \in \Bbb{R} \cup \{+\infty,-\infty\}$ defined as follows:</p>
<p>$g$ is a limit of $f$ at point $a$ if and only if for every sequence $\{x_n\}_{n \geq n_0}$ with terms in $D - \{a\}$ such that $x_n \rightarrow a$ we have $f(x_n)\rightarrow g$.</p>
<p>Now let's analyze function $f(x)=2$ defined at set $D=[0;1]$. We're analyzing point $a=1$. Clearly, we haven't got any right-sided limit there, as there doesn't exist any $d\in D$ which is greater than $1$. But according to these definitions, the limit at $1$ still exists and is equal to $2$ as for every sequence $x_n$ such that $x_n \rightarrow 1$ we have $f(x_n)\rightarrow 2$. Am I interpreting these definitions correctly?</p>
| Zubin Mukerjee | 111,946 | <p>I think you are reading the definitions correctly. Since $f$ is <em>defined</em> only on D, there is no such thing as a "right-hand limit" at $1$. </p>
<p>This means that the limit existing at $1$ is not a contradiction (nothing in the limit definition says anything about the right and left hand limits).</p>
<p>Similarly if a function is defined from $\mathbb{R}$ to $\mathbb{R}$, and you want the limit at $0$, then the limit of the continuation of that function along the imaginary axis in $\mathbb{C}$ is irrelevant.</p>
|
778,294 | <p>If $G$ is an open subset of $ \mathbb R$ such that $ 0 \notin G $ , then is it true that $H:=\{ xy \mid x , y \in G \}$ is an open subset of $ \mathbb R$ ?</p>
| Hagen von Eitzen | 39,174 | <p>For $y\ne 0$, the map $x\mapsto xy$ is a homeomorphism $\mathbb R\to\mathbb R$, hence with $G$ also $Gy$ is open. Then $H=\bigcup_{y\in G} Gy$ is the union of open sets, hence open.</p>
<hr>
<p>Remarkably, the <em>result</em> does not change if we allow $0\in G$, though the <em>method</em> of proof above becomes problematic: if $0\in H$ then $0\in G$, so some $(-\epsilon,\epsilon)\subseteq G$, hence $(-\epsilon^2,\epsilon^2)\subseteq H$, and $0$ is an inner point of $H$. For the rest consider $G\setminus\{0\}$ and the first paragraph.</p>
|
331,169 | <p>Let $\Omega\subset \mathbb{R}^n$ be bounded and let $X:=H^1(\Omega)$. Let $a:\Omega\times \mathbb{R} \to \mathbb{R}, (x,z)\mapsto a(x,z)$ be a bounded function such that $a(x,.)$ is continuous on $\mathbb{R}$ for every $x$ and $a(.,z)$ is measurable on $\Omega$ for every $z$.</p>
<p>For $v,\phi\in H^1$ one finds in pde the integral
$$\int_\Omega a(x,v(x))\nabla v(x) \cdot \nabla\phi(x) dx$$
All measures should be the Lebesgue measure
My question is: Why is $a(x,v(x))$ a measurable function, why is $a(x,z)$ measurable on $\Omega\times \mathbb{R}$ or why is the integral defined?</p>
<p>Usually one cannot conclude from measurability in each component to measurability on the product space. Somehow I feel I need more measure theory than I know yet :)</p>
| apnorton | 23,353 | <p>As per comments, the group in question is $\mathbb{Z} \times \mathbb{Z}$.</p>
|
520,203 | <p>Here's what I'm reading: every regular bipartite graph has a 1-factor.</p>
<p>But I understand that not every regular graph has a 1-factor.</p>
<p>So, I was thinking if it's possible to find a $k$-regular simple graph without 1-factor for any fixed $k$?</p>
| Elchanan Solomon | 647 | <p>If $k$ is even, then $K_{k+1}$ is $k$ regular with $k+1$ (odd) vertices, and hence has no $1$-factor (as a perfect matching requires an even number of vertices).</p>
<p>If $k$ is odd, consider the following graph. Start at an initial vertex, then branch out in $k$ directions. From each of those new vertices, branch out $k-1$ many times. For each collection of $k-1$ vertices, draw another $k-1$ vertices opposite it, and connect each vertex on one side with every vertex on the other. Lastly, since $k-1$ is even, in the opposite $k-1$ collection, pair up each vertex with one other vertex. Note that the initial vertex has $k$ neighbors, and each ensuing vertex has $k$ neighbors. Every vertex in the first $k-1$ collection is connected to the one that preceded it and to the $k-1$ vertices in the matching $k-1$ set, so it has $k$ neighbors. Lastly, each vertex in the $k-1$ matching set is connected to every vertex in the initial $k-1$ set, and one more. Thus our graph is $k$-regular. However, if we delete the initial vertex, each component has
$$(k-1) + (k-1) + 1 = 2k - 1$$
vertices, which is an odd number, and so we get $k$ odd components. By Tutte's theorem, this means that our graph could not have had a $1$-factor.</p>
<p>Here is an illustrative drawing in the $5$ case:</p>
<p><img src="https://i.stack.imgur.com/rgGJy.png" alt="enter image description here"></p>
|
3,290,073 | <p>I'm trying to help my daughter learn maths. She is struggling with factors, which is to work out what numbers go into a larger number (division).</p>
<p>I've already learned that if a number's digits sum to a multiple of 3, the number is divisible by 3. I also know the rules for 2, 5, 6, 9 and 10.</p>
<p>I'm trying to see if there is a rule for 4. I'm thinking not.</p>
<p><a href="https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work" rel="nofollow noreferrer">https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work</a> shows the following</p>
<blockquote>
<p>The divisibility rule for 4 is: in any large number, if the number formed by the digits in the tens and units places is divisible by 4, then the whole number is divisible by 4.</p>
</blockquote>
<p>This doesn't make sense. 56 divides by 4. However, the 2 numbers add to 11, and so can't be divided by 4.</p>
<p>It may very well get a "no" answer, but is there any pattern/method I can use for determining if a number can be divided by 4 if it is less than 100 (and greater than 4)</p>
| Robert Israel | 8,508 | <p>Tens place even and units <span class="math-container">$0$</span>, <span class="math-container">$4$</span> or <span class="math-container">$8$</span> (i.e. divisible by <span class="math-container">$4$</span>), or tens place odd and units <span class="math-container">$2$</span> or <span class="math-container">$6$</span> (even but not divisible by <span class="math-container">$4$</span>).</p>
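A brute-force check of this criterion against every two-digit combination (my addition):

```python
def rule(n):
    # The criterion above, using only the last two digits.
    tens, units = (n // 10) % 10, n % 10
    if tens % 2 == 0:
        return units in (0, 4, 8)
    return units in (2, 6)

# Agrees with actual divisibility by 4 for every n below 100.
assert all(rule(n) == (n % 4 == 0) for n in range(100))
```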
|
3,290,073 | <p>I'm trying to help my daughter learn maths. She is struggling with factors, which is to work out what numbers go into a larger number (division).</p>
<p>I've already learned that if a number's digits sum to a multiple of 3, the number is divisible by 3. I also know the rules for 2, 5, 6, 9 and 10.</p>
<p>I'm trying to see if there is a rule for 4. I'm thinking not.</p>
<p><a href="https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work" rel="nofollow noreferrer">https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work</a> shows the following</p>
<blockquote>
<p>The divisibility rule for 4 is: in any large number, if the number formed by the digits in the tens and units places is divisible by 4, then the whole number is divisible by 4.</p>
</blockquote>
<p>This doesn't make sense. 56 divides by 4. However, the 2 numbers add to 11, and so can't be divided by 4.</p>
<p>It may very well get a "no" answer, but is there any pattern/method I can use for determining if a number can be divided by 4 if it is less than 100 (and greater than 4)</p>
| Questioner | 894,469 | <p>There <em>is</em> a rule of divisibility for the number <span class="math-container">$4$</span>. Here it is:</p>
<p>To figure out if a number is divisible by four, you first need to look at the last two digits, and if they're divisible by four together, you can assume that the whole number is divisible by <span class="math-container">$4$</span>.</p>
<p>Why does this work? Well, <span class="math-container">$100$</span> is divisible by four, and every place value at or above the hundreds place is a multiple of <span class="math-container">$100$</span>. For example, in the number <span class="math-container">$2,375$</span>, the <span class="math-container">$2$</span> in the thousands place stands for <span class="math-container">$2,000$</span>, and <span class="math-container">$100\times 20=2,000$</span>, so <span class="math-container">$2,000$</span> is a multiple of <span class="math-container">$100$</span>. This means everything above the tens and units places is automatically divisible by four, so whether the whole number is divisible by four depends only on the number formed by its last two digits.</p>
<p>Hopefully this helped you with your question.</p>
|
341,621 | <p>Is there an inequality such as
$$(a+b)^2 \leq 2(a^2 + b^2)$$
for higher powers of $k$
$$(a+b)^k \leq C(a^k + b^k)?$$</p>
| lsp | 64,509 | <p>The first inequality is true because $2ab \le a^2+b^2$.</p>
<p>For higher powers there is an analogous inequality: for non-negative $a,b$, convexity of $t \mapsto t^k$ gives $\left(\frac{a+b}{2}\right)^k \le \frac{a^k+b^k}{2}$, i.e. $(a+b)^k \le 2^{k-1}(a^k + b^k)$. So $C = 2^{k-1}$ works, and taking $a=b$ shows this constant is best possible.</p>
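A numeric sanity check of the constant $C = 2^{k-1}$ (my addition; it relies on $a,b$ being non-negative):

```python
import random

# For non-negative a, b, convexity of t -> t^k gives
# (a + b)^k <= 2^(k-1) * (a^k + b^k).
for _ in range(1000):
    k = random.randint(1, 8)
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    assert (a + b)**k <= 2**(k - 1) * (a**k + b**k) * (1 + 1e-12)

# a = b gives equality: both sides equal (2a)^k, so the constant is sharp.
assert (3.0 + 3.0)**4 == 2**3 * (3.0**4 + 3.0**4)
```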
|
2,703,559 | <p>My assignment asks me to prove that the only automorphism of order 2 of $\mathbb{Z}_q$ is $m \mapsto -m$ for $q = 3$ and $q = 5$ and $q = 7$. I have been stuck for ages, and now I wonder if it is true. I need some help to get started. This is my attempt: Assume $k$ and $q$ are relatively prime (this is necessary, otherwise $\phi$ is not an automorphism), then:
$$
m = \phi^2(m) = k^2m \implies k^2 \equiv 1 \mod q
$$
But I don't know how to go on from there. I can't see that $k = -1$ is the only option.</p>
| lhf | 589 | <p><em>Hint:</em> When $q$ is prime, $ k^2 \equiv 1 \bmod q \iff k \equiv \pm1 \bmod q$. What is the order of $x \mapsto kx$ when $k=1$ ?</p>
|
2,703,559 | <p>My assignment asks me to prove that the only automorphism of order 2 of $\mathbb{Z}_q$ is $m \mapsto -m$ for $q = 3$ and $q = 5$ and $q = 7$. I have been stuck for ages, and now I wonder if it is true. I need some help to get started. This is my attempt: Assume $k$ and $q$ are relatively prime (this is necessary, otherwise $\phi$ is not an automorphism), then:
$$
m = \phi^2(m) = k^2m \implies k^2 \equiv 1 \mod q
$$
But I don't know how to go on from there. I can't see that $k = -1$ is the only option.</p>
| Asinomás | 33,907 | <p>Suppose $k^2\equiv 1 \bmod p$, then $(k-1)(k+1)\equiv 0 \bmod p$. That is the reason why there are only $2$ options.</p>
<p>In general we can do these sorts of things in any integral domain (polynomials can't have more roots than their degree).</p>
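A quick verification over a few small primes (my addition):

```python
# For each small prime p, the solutions of k^2 = 1 (mod p) are exactly +-1.
for p in (3, 5, 7, 11, 13):
    roots = {k for k in range(1, p) if (k * k) % p == 1}
    assert roots == {1, p - 1}  # p - 1 is -1 mod p
```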
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution ?</p>
| Axel Kemper | 58,610 | <p>The left-hand-side input and the right-hand-side output can be imagined as binary numbers in a kind of truth table: </p>
<p><img src="https://i.stack.imgur.com/rgfSR.jpg" alt="enter image description here"> </p>
<p>All eight output bits can be calculated from the seven input bits evaluating simple Boolean expressions.</p>
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution ?</p>
| devnull69 | 164,488 | <p>The first multiplicand is given. So the open question is "what is the second multiplicand"?</p>
<p>The list can be grouped in sixes. So lines 1-6 is one group, the rules count for each group.</p>
<ul>
<li>Define fm as the given first multiplicand of the row. Start with <code>4</code>. Increment fm by one on each row</li>
<li>Set cm (current multiplicand) to 1</li>
<li>The result for each row is <code>result = fm * cm</code>. You only change <code>cm</code> from row to row</li>
</ul>
<p>These are the rules for the six rows</p>
<ol>
<li>cm := cm + 1</li>
<li>cm := fm * cm</li>
<li>cm := fm - 1</li>
<li>cm := fm</li>
<li>cm := cm * fm / 2</li>
<li>cm := (fm - 1) / 2</li>
</ol>
<p>The sequence of <code>cm</code> would be</p>
<p>2, 10, 5, 7, 28, 4, 5, 55, 11, 13, 91, 7, 8, 136, 17, 19, 190, 10</p>
<p>I think you can continue like that</p>
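Replaying these rules in code (my own transcription of the scheme above) does reproduce the five given rows:

```python
# fm is the first multiplicand of each row; cm evolves by the six rules.
results = []
fm, cm = 3, 1
for row in range(5):
    fm += 1            # fm runs 4, 5, 6, 7, 8 over the five rows
    rule = row % 6
    if rule == 0:
        cm = cm + 1
    elif rule == 1:
        cm = fm * cm
    elif rule == 2:
        cm = fm - 1
    elif rule == 3:
        cm = fm
    elif rule == 4:
        cm = cm * fm // 2
    results.append(fm * cm)

assert results == [8, 50, 30, 49, 224]  # matches the puzzle rows and answer
```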
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution ?</p>
| WizardLizard | 87,676 | <p>$A\cdot B = C$</p>
<p>$\dfrac{AB^2}{\gcd(AB^2,6)} = C$ or $\dfrac{\text{lcm}(AB^2,6)}6 = C$</p>
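Both forms of this formula can be checked against the puzzle's rows (my addition):

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# (A, B, C) rows of the puzzle, including the interview's answer 7 x 8 = 224.
rows = [(3, 4, 8), (4, 5, 50), (5, 6, 30), (6, 7, 49), (7, 8, 224)]
for a, b, c in rows:
    n = a * b * b                  # A * B^2
    assert n // gcd(n, 6) == c     # AB^2 / gcd(AB^2, 6)
    assert lcm(n, 6) // 6 == c     # the equivalent lcm form
```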
|
1,401,760 | <p>First I tried to use integration:
$$y=\lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{n}$$
$$\log y=\lim_{n\to\infty}\sum_{r=1}^n\log\frac{a}{r}$$
But I could not express it as a <em>Riemann integral</em>. Now I am thinking about the sandwich theorem.</p>
<p>$$\frac{a^n}{n!}=\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{t} \cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}=\frac{a^t}{t!}\cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}$$
Since $\frac{a}{t+1}>\frac{a}{t+2}>\frac{a}{t+3}>\cdots>\frac{a}{n}$
$$\frac{a^n}{n!}<\frac{a^t}{t!}\cdot\big(\frac{a}{t+1}\big)^{n-t}$$
since $\frac{a}{t+1}<1$, $$\lim_{n\to\infty}\big(\frac{a}{t+1}\big)^{n-t}=0$$
Hence, $$\lim_{n\to\infty}\frac{a^t}{t!}\big(\frac{a}{t+1}\big)^{n-t}=0$$
And by using sandwich theorem, $y=0$. Is this correct?</p>
| ajotatxe | 132,456 | <p>You missed the lower slice of bread in your sandwich :)</p>
<p>Assuming that $a>0$, you should make clear that all terms are positive, no matter how obvious it seems to be.</p>
<p>You should also write explicitly that there is some $t\in\Bbb N$ such that $t>a$. This is called "Archimedean property" of real numbers.</p>
<p>This would be <strong>my proof</strong> (I insist, assuming that $a>0$):</p>
<p>There exists some natural $t$ such that $t>a$. Then, for $n> t$
$$0<\frac{a^n}{n!}=\frac{a^t}{t!}\frac{a^{n-t}}{(t+1)\cdots n}<\frac{a^t}{t!}\left(\frac at\right)^{n-t}$$</p>
<p>Since $a/t<1$, the rightmost expression tends to $0$, and hence, by the sandwich theorem
$$\frac{a^n}{n!}\to 0$$</p>
<p><em>Remark</em>: If $a\le 0$, the limit is still $0$, but in this case you should use this fact:</p>
<blockquote>
<p>If $a_n$ is a sequence of real numbers such that $\lim |a_n|=0$ then $\lim a_n=0$.</p>
</blockquote>
|
1,401,760 | <p>First I tried to use integration:
$$y=\lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{n}$$
$$\log y=\lim_{n\to\infty}\sum_{r=1}^n\log\frac{a}{r}$$
But I could not express it as a <em>Riemann integral</em>. Now I am thinking about the sandwich theorem.</p>
<p>$$\frac{a^n}{n!}=\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{t} \cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}=\frac{a^t}{t!}\cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}$$
Since $\frac{a}{t+1}>\frac{a}{t+2}>\frac{a}{t+3}>\cdots>\frac{a}{n}$
$$\frac{a^n}{n!}<\frac{a^t}{t!}\cdot\big(\frac{a}{t+1}\big)^{n-t}$$
since $\frac{a}{t+1}<1$, $$\lim_{n\to\infty}\big(\frac{a}{t+1}\big)^{n-t}=0$$
Hence, $$\lim_{n\to\infty}\frac{a^t}{t!}\big(\frac{a}{t+1}\big)^{n-t}=0$$
And by using sandwich theorem, $y=0$. Is this correct?</p>
| Michael Hardy | 11,667 | <p>Here's an easy way that I'm surprised is not yet here:</p>
<p>$$
\frac{a^n}{n!} = \frac{\overbrace{a\cdots\cdots\cdots\cdots a}}{\underbrace{1\cdot2\cdot3\cdots\,\cdots n}}
$$
When $n$ reaches the point of being twice as big as $a,$ then every time you increment $n$ by $1$ after that, you multiply the numerator by $a$ and the denominator by more than $2a,$ so the fraction gets multiplied by something whose absolute value is less than $1/2.$ Multiplying by something less than $1/2$ over and over again will give you a product approaching $0.$</p>
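A numeric illustration of this halving argument (my addition, with $a = 10$):

```python
from math import factorial

a = 10.0
terms = [a**n / factorial(n) for n in range(0, 101)]

# Once n exceeds 2a = 20, each step multiplies the term by a/(n+1) < 1/2.
assert all(terms[n + 1] < terms[n] / 2 for n in range(20, 100))
assert terms[100] < 1e-50  # the sequence has collapsed towards 0
```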
|
3,635,266 | <p>As stated in the title, I am curious how to construct a sequence <span class="math-container">$\{a_n\}_{n=1}^{\infty}$</span> such that <span class="math-container">$\sum_{n=1}^{\infty} a_n^2 < +\infty$</span> but <span class="math-container">$\sum_{n=1}^{\infty}\frac{a_n}{\sqrt{n}} = \infty$</span>.</p>
<p>This problem arises when I try to show that <span class="math-container">$f(x) = \sum_{n=1}^{\infty} \frac{x_n}{\sqrt{n}}$</span> defined on <span class="math-container">$l_2$</span> space is not bounded.</p>
| Integrand | 207,050 | <p>Try <span class="math-container">$a_n = (\sqrt{n}\log(n))^{-1},$</span> for <span class="math-container">$n\geq 2$</span>.</p>
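A numeric illustration of why this hint works (my addition): the partial sums of $a_n^2 = \frac1{n\log^2 n}$ stay bounded (integral test), while those of $\frac{a_n}{\sqrt n} = \frac1{n\log n}$ keep growing like $\log\log N$:

```python
from math import log

# Partial sums for a_n = 1/(sqrt(n) * log(n)), n >= 2.
N = 10**6
sum_sq = sum(1 / (n * log(n)**2) for n in range(2, N))     # sum of a_n^2
sum_weighted = sum(1 / (n * log(n)) for n in range(2, N))  # sum of a_n/sqrt(n)

assert sum_sq < 2.5        # bounded: the series converges
assert sum_weighted > 2.5  # unbounded: grows like log(log N), very slowly
```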
|
642,497 | <p>How do we prove by vector method that "if the diagonals of a trapezium have equal length then the non-parallel sides of the trapezium have equal length." ? </p>
<p>(taking $ABCD$ to be the trapezium with
$AD$ || $BC$ and $O$ the intersection point of diagonals , if it can be shown by vector method that
$OB=OC$ , then $AB=CD$ follows ; I can show $OB=OC$ but not by vector method and thus the problem , though any other line of approach is acceptable.) </p>
| Belgi | 21,335 | <p><strong>Hint</strong>: Taylor series around $x=-2$</p>
<p>Your way is good too!</p>
|
642,497 | <p>How do we prove by vector method that "if the diagonals of a trapezium have equal length then the non-parallel sides of the trapezium have equal length." ? </p>
<p>(taking $ABCD$ to be the trapezium with
$AD$ || $BC$ and $O$ the intersection point of diagonals , if it can be shown by vector method that
$OB=OC$ , then $AB=CD$ follows ; I can show $OB=OC$ but not by vector method and thus the problem , though any other line of approach is acceptable.) </p>
| Stefan4024 | 67,746 | <p>Here's another way:</p>
<p>$$\begin{array}{c|c|c|c|c|c}
1 & 0 & -2 & 6 & 0 & 1 \\ \hline
1 & -2 & 2 & 2 & -4 & 9 \\[0.55ex]
1 & -4 & 10 & -18 & 32\\[0.55ex]
1 & -6 & 22 & -62 \\[0.55ex]
1 & -8 & 38 \\[0.55ex]
1 & -10\\[0.55ex]
1
\end{array}$$</p>
<p>Now let's look at how the table is obtained. The first row holds the coefficients of the polynomial. Every number in the left-most column is the same and equals the leading coefficient. Because we want to represent the polynomial as a sum of powers of $(x+2)$, the expansion point is $x=-2$. Every other number is obtained by multiplying the number to its left by $x=-2$ and adding the number above it.</p>
<p>So for example for $10$ in the third row we have: $(-2) \times (-4) + 2 = 10$ and so on.</p>
<p>Now use the values in the main diagonal and the given polynomial will be equivalent to:</p>
<p>$$(x+2)^5 - 10(x+2)^4 + 38(x+2)^3 - 62(x+2)^2 + 32(x+2)^1 + 9(x+2)^0$$</p>
<p>This method is called Horner Method/Scheme/Division and in fact represents polynomial division.</p>
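The repeated synthetic division can be put into code (a sketch I added; `shift_poly` is a hypothetical helper name, not from the answer):

```python
def shift_poly(coeffs, c):
    # Rewrite p(x) = sum coeffs[i] * x^(n-i) as sum b_k * (x - c)^k
    # by repeated synthetic division, exactly as in the table above.
    coeffs = list(coeffs)
    n = len(coeffs) - 1
    out = []
    for _ in range(n + 1):
        # One synthetic division by (x - c); the remainder is the next b_k.
        for i in range(1, len(coeffs)):
            coeffs[i] += c * coeffs[i - 1]
        out.append(coeffs.pop())
    return out  # [b_0, b_1, ..., b_n]

# p(x) = x^5 - 2x^3 + 6x^2 + 1, expanded around x = -2 (powers of x + 2):
b = shift_poly([1, 0, -2, 6, 0, 1], -2)
assert b == [9, 32, -62, 38, -10, 1]  # the table's main diagonal
```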
|
2,307,268 | <p>I tried to compute the power tower 2^2^2^2 on the Google calculator and on my Casio calculator, but they give different results. The same is true for 3^3^3.
Please explain the difference between the two expressions.</p>
<p><a href="https://i.stack.imgur.com/2CMDt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2CMDt.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/XxKtW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XxKtW.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/AP7vm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AP7vm.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/2GHzu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2GHzu.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Xqysx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xqysx.png" alt="enter image description here"></a></p>
| Community | -1 | <p>Your calculator is interpreting it as this:</p>
<p>$${{2^2}^2}^2 = ((2^2)^2)^2 = (4^2)^2 = 16^2 = 256$$</p>
<p>Google is interpreting it as this:</p>
<p>$${{2^2}^2}^2 = 2^{(2^{(2^2)})} = 2^{(2^4)} = 2^{16} = 65536$$</p>
<p>Similarly with the threes.</p>
<p>Technically Google is correct because order of operations says to do exponents first. So when we want to evaluate $2^{\color{red}{2}^{\color{blue}{2^2}}}$, order of operations says to evaluate the $\color{red}{{2^{\color{blue}{2^2}}}}$ first, i.e., evaluate the exponent first. Apply this rule again and it tells us we're supposed to evaluate the $\color{blue}{2^2}$ first, which is $4.$ Therefore $\color{red}{{2^{\color{blue}{2^2}}}} = 2^4 = 16$, and so $2^{\color{red}{2}^{\color{blue}{2^2}}} = 2^{16} = 65536$.</p>
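The two readings are easy to reproduce in Python (my addition), whose `**` operator is right-associative like Google's parser:

```python
# Python's ** is right-associative, matching Google's reading:
assert 2 ** 2 ** 2 ** 2 == 2 ** (2 ** (2 ** 2)) == 65536
# A left-to-right calculator computes this instead:
assert ((2 ** 2) ** 2) ** 2 == 256
# Same with the threes:
assert 3 ** 3 ** 3 == 3 ** 27 == 7625597484987
assert (3 ** 3) ** 3 == 19683
```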
|
4,356,457 | <p>Let <span class="math-container">$k$</span> be a finite field of size <span class="math-container">$q$</span>, let <span class="math-container">$n\ge1$</span>, and let <span class="math-container">$\mathrm{GL}_n(k)$</span> and <span class="math-container">$M_n(k)$</span> be the group of invertible matrices and ring of <span class="math-container">$n\times n$</span> matrices, respectively.</p>
<p>Now, I have calculated (by grouping by the characteristic polynomial) that the number of conjugacy classes are:</p>
<ul>
<li><span class="math-container">$q-1$</span> in <span class="math-container">$\mathrm{GL}_1(k)=k^\times$</span>, <span class="math-container">$q$</span> in <span class="math-container">$M_1(k)=k$</span>;</li>
<li><span class="math-container">$q^2-1$</span> in <span class="math-container">$\mathrm{GL}_2(k)$</span>, <span class="math-container">$q^2+q$</span> in <span class="math-container">$M_2(k)$</span>; and</li>
<li><span class="math-container">$q^3-q$</span> in <span class="math-container">$\mathrm{GL}_3(k)$</span>, <span class="math-container">$q^3+q^2+q$</span> in <span class="math-container">$M_3(k)$</span>.</li>
</ul>
<p>I conjecture that the pattern continues on the <span class="math-container">$M_n(k)$</span>-side, i.e., that there are always <span class="math-container">$q^n+\dots+q$</span> conjugacy classes in <span class="math-container">$M_n(k)$</span>. However, I cannot seem to prove this. How should I proceed?</p>
<p>Also, is there an analogous formula for the number of conjugacy classes in <span class="math-container">$\mathrm{GL}_n(k)$</span>?</p>
<p><strong>Note:</strong> By conjugacy class in <span class="math-container">$M_n(k)$</span>, I mean the equivalence classes under the relation, for <span class="math-container">$a,b\in M_n(k)$</span>, of <span class="math-container">$a\sim b$</span> iff there exists a <span class="math-container">$u\in\mathrm{GL}_n(k)$</span> such that <span class="math-container">$a=ubu^{-1}$</span>.</p>
| Kenta S | 404,616 | <p>This is based on reuns's idea.</p>
<p>Fix a field <span class="math-container">$k$</span> of cardinality <span class="math-container">$q$</span>.</p>
<p><strong>Lemma 1</strong> There are <span class="math-container">$\frac1n\sum_{d|n}\mu(\frac nd)q^d$</span> irreducible monic polynomials of degree <span class="math-container">$n$</span> in <span class="math-container">$k[x]$</span>.</p>
<p><strong>Proof:</strong> Let <span class="math-container">$A_d$</span> be the set of <span class="math-container">$\alpha\in\overline k$</span> such that <span class="math-container">$k(\alpha)/k$</span> has degree exactly <span class="math-container">$d$</span>. Then we have that <span class="math-container">$\sum_{d|n}|A_d|=q^n$</span>, so that by Mobius inversion <span class="math-container">$|A_n|=\sum_{d|n}\mu(\frac nd)q^d$</span>. Now, the <span class="math-container">$n$</span> Galois conjugates of an <span class="math-container">$\alpha\in\overline k$</span> with <span class="math-container">$[k(\alpha):k]=n$</span> all give rise to the same minimal polynomial, so the number of irreducible polynomials of degree <span class="math-container">$n$</span> is <span class="math-container">$|A_n|/n$</span>.</p>
<p>Now for every fixed monic irreducible polynomial <span class="math-container">$f\in k[x]$</span> of degree <span class="math-container">$n$</span> and <span class="math-container">$e\ge 1$</span>, the possible multiplicities of the summand <span class="math-container">$k[x]/(f^e)$</span> in a <span class="math-container">$k[x]$</span>-module can be encoded in the generating function <span class="math-container">$\frac1{1-x^{en}}$</span>.</p>
<p>We conclude that the generating function for the conjugacy classes in <span class="math-container">$M_n(k)$</span> is:</p>
<p><span class="math-container">\begin{align*}
\prod_{f\text{ monic irreducible}}\prod_{e\ge1}\frac1{1-x^{e\deg f}}&=\prod_{n\ge1}\left(\prod_{e\ge1}\frac1{1-x^{en}}\right)^{\frac1n\sum_{d|n}\mu(\frac nd)q^d}.
\end{align*}</span>
I wonder whether this can be simplified.</p>
<hr />
<p>There is also a <em>very</em> indirect way to simplify the expression.</p>
<p>By Mobius inversion we have <span class="math-container">$q^n=\sum_{j|n}\mu(j)\sum_{d|\frac nj}q^d=\sum_{dij=n}\mu(j)q^d$</span>, so that</p>
<p><span class="math-container">\begin{align*}
\sum_{n\ge1}\frac1n\sum_{d|n}\mu\big(\frac nd\big)q^d\log\left(\frac1{1-x^n}\right)&=\sum_{d\ge1}\sum_{j\ge1}\frac{\mu(j)}{dj}\,q^d\log\left(\frac1{1-x^{dj}}\right)\\
&=\sum_{d\ge1}\sum_{j\ge1}\sum_{i\ge1}\frac{\mu(j)}{dij}\,q^dx^{dij}\\
&=\sum_{n\ge1}\frac1n\sum_{dij=n}\mu(j)q^dx^n\\
&=\sum_{n\ge1}\frac1nq^nx^n=\log\big(\frac1{1-qx}\big).
\end{align*}</span>
This shows
<span class="math-container">$$\prod_{n\ge1}\left(\prod_{e\ge1}\frac1{1-x^{en}}\right)^{\frac1n\sum_{d|n}\mu(\frac nd)q^d}=\prod_{e\ge1}\left(\frac1{1-qx^e}\right).$$</span></p>
<hr />
<p><strong>P.S.</strong> In particular,
<span class="math-container">$$\prod_{e\ge1}\left(\frac1{1-qx^e}\right)=1+qx+(q^2+q)x^2+(q^3+q^2+q)x^3+(q^4+q^3+2q^2+q)x^4+(q^5+q^4+2q^3+2q^2+q)x^5+\dots,$$</span>
so my conjecture was wrong.</p>
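<p>As a sanity check on the product formula, here is a small pure-Python sketch (standard library only) that expands $\prod_{e\ge1}(1-qx^e)^{-1}$ up to $x^5$; each series coefficient is a polynomial in $q$, stored as a dict mapping powers of $q$ to integer coefficients:</p>

```python
# Expand prod_{e>=1} 1/(1 - q*x^e) up to x^N.  Each series coefficient is a
# polynomial in q, stored as a dict {power of q: integer coefficient}.
N = 5

def mul_factor(series, e):
    """Multiply a truncated series by 1/(1 - q*x^e) = sum_m q^m x^(e*m)."""
    out = [dict() for _ in range(N + 1)]
    for n in range(N + 1):
        m = 0
        while e * m <= n:
            for p, c in series[n - e * m].items():
                out[n][p + m] = out[n].get(p + m, 0) + c
            m += 1
    return out

series = [{0: 1}] + [dict() for _ in range(N)]  # the constant series 1
for e in range(1, N + 1):                       # factors with e > N cannot affect x^0..x^N
    series = mul_factor(series, e)

for n, poly in enumerate(series):
    print(n, dict(sorted(poly.items())))
```

<p>For $n=4$ this prints <code>{1: 1, 2: 2, 3: 1, 4: 1}</code>, i.e. $q^4+q^3+2q^2+q$, matching the expansion in the P.S. above.</p>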
|
3,912,635 | <p>I am currently trying to understand the proof of Proposition 4.3.18 in Pedersen's Analysis now, which reads</p>
<blockquote>
<p>To each Tychonoff space <span class="math-container">$X$</span> there is a Hausdorff compactification <span class="math-container">$\beta(X)$</span>, with the property that every continuous function <span class="math-container">$\Phi: X \to Y$</span>, where <span class="math-container">$Y$</span> is a compact Hausdorff space, extends to a continuous function <span class="math-container">$\beta \Phi: \beta(X) \to Y$</span>.</p>
</blockquote>
<p>The proof starts by noting that <span class="math-container">$C_b(X)$</span> is a commutative unital C<span class="math-container">$^*$</span>-algebra, and is therefore isometrically isomorphic to a (commutative and unital) C<span class="math-container">$^*$</span>-algebra of the form <span class="math-container">$C(\beta(X))$</span>, where <span class="math-container">$\beta(X)$</span> is a compact Hausdorff space.</p>
<p>By the Gelfand duality between the category of commutative and unital C<span class="math-container">$^*$</span>-algebras and the category of compact Hausdorff spaces, we can take <span class="math-container">$\beta(X) = \Omega(C_b(X))$</span>, the space of characters on <span class="math-container">$C_b(X)$</span>.</p>
<p>Then we can define a map <span class="math-container">$\iota: X \to \beta(X)$</span>, where <span class="math-container">$\iota(x)(\phi) := \phi(x)$</span> for all <span class="math-container">$x \in X$</span> and <span class="math-container">$\phi \in \beta(X)$</span>.</p>
<p>The particular part of the proof that I am struggling to understand is the proof that <span class="math-container">$\iota(X)$</span> is dense in <span class="math-container">$\beta(X)$</span>.</p>
<p>He argues that if <span class="math-container">$\iota(X)$</span> is not dense in <span class="math-container">$\beta(X)$</span>, then there is a non-zero continuous map <span class="math-container">$f: \beta(X) \to \mathbb{C}$</span> vanishing on <span class="math-container">$\iota(X)$</span>. This I understand. He then says that under the identification <span class="math-container">$C_b(X) = C(\beta(X))$</span>, this is impossible. This is the sentence I am stuck on. Why is it impossible under this identification?</p>
<p>We have that <span class="math-container">$C_b(X)$</span> is isometrically isomorphic to <span class="math-container">$C(\Omega(C_b(X)))$</span> via the map <span class="math-container">$\delta: g \mapsto (\delta_g: \Omega(C_b(X)) \to \mathbb{C}, \phi \mapsto \phi(g))$</span>. I am pretty sure what Pedersen is getting at is that the map <span class="math-container">$\delta^{-1}(f)$</span> is zero, but I am not able to show that this is the case. <a href="https://math.stackexchange.com/questions/260794/stone-%C4%8Cech-via-c-bx-cong-c-beta-x">This answer</a> also claims that a similar map is zero.</p>
<p>In summary, my question is:</p>
<blockquote>
<p>Can we show that <span class="math-container">$\iota(X)$</span> is dense in <span class="math-container">$\beta(X)$</span> by showing that <span class="math-container">$\delta^{-1}(f) = 0$</span>? If so, how do we do this?</p>
</blockquote>
| s.harp | 152,424 | <p>Consider a special set of characters of <span class="math-container">$C_b(X)$</span>, for each <span class="math-container">$x\in X$</span> define:</p>
<p><span class="math-container">$$\delta_x: C_b(X)\to\Bbb C, \quad g\mapsto g(x)$$</span></p>
<p>Since the (non-zero) characters of <span class="math-container">$C_b(X)$</span> are the points of <span class="math-container">$\beta X$</span> this gives you a way of embedding <span class="math-container">$X$</span> into <span class="math-container">$\beta X$</span>. Now if <span class="math-container">$f$</span> is some continuous function on <span class="math-container">$\beta X$</span> we may identify it also with an element <span class="math-container">$\tilde f\in C_b(X)$</span>, namely <span class="math-container">$\tilde f = \delta^{-1}(f)$</span> using your notation. Remember that
<span class="math-container">$$f(\delta_x) = \delta(\tilde f)\,(\delta_x) = [\phi \mapsto \phi(\tilde f)]\,(\delta_x)= \delta_x(\tilde f) = \tilde f(x) $$</span></p>
<p>Asking that <span class="math-container">$f$</span> vanishes on <span class="math-container">$X$</span> is asking that <span class="math-container">$f(\delta_x)=0$</span> for all <span class="math-container">$x\in X$</span>, in particular looking at <span class="math-container">$\tilde f$</span> this becomes:
<span class="math-container">$$\tilde f(x)=0\quad \forall x\in X$$</span>
the only function in <span class="math-container">$C_b(X)$</span> satisfying this property is the zero function.</p>
|
792,813 | <p>Let A be a random variable defined as:</p>
<ul>
<li>With probability $p[i]$, the random variable $B[i]$ is drawn</li>
<li>$B[i] \sim N(\mu[i], \sigma[i])$</li>
<li>probabilities $p[i]$ sum up to one</li>
</ul>
<p>I know how to compute the mean, which is given by:</p>
<p>$$E[A] = p[1]\,\mu[1] + \dots + p[N]\,\mu[N]$$</p>
<p>I would like to know how to compute the variance.</p>
<p><img src="https://i.stack.imgur.com/J94Al.png" alt="Tree random variable"></p>
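<p>For reference, the law of total variance gives $\mathrm{Var}[A] = \sum_i p[i]\,(\sigma[i]^2 + \mu[i]^2) - E[A]^2$, taking $\sigma[i]$ to be standard deviations. A small sketch (the specific parameter values are made up for illustration), verified by Monte Carlo:</p>

```python
import random

# Hypothetical mixture parameters, chosen only for illustration.
p = [0.5, 0.3, 0.2]          # mixture weights, summing to 1
mu = [0.0, 2.0, -1.0]        # component means
sigma = [1.0, 0.5, 2.0]      # component standard deviations

# Law of total variance: Var[A] = sum_i p_i*(sigma_i^2 + mu_i^2) - E[A]^2
mean = sum(pi * mi for pi, mi in zip(p, mu))
var = sum(pi * (si ** 2 + mi ** 2) for pi, si, mi in zip(p, sigma, mu)) - mean ** 2

# Monte Carlo check: draw a component index, then a normal sample from it.
rng = random.Random(0)
samples = [rng.gauss(mu[i], sigma[i])
           for i in (rng.choices(range(3), weights=p)[0] for _ in range(200_000))]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
print(var, mc_var)
```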
| Community | -1 | <p>You're correct. I give another proof. Let
$$F(z)=\int_0^z\cos(x^2)x^3\,dx$$
and by the fundamental theorem of calculus and l'Hôpital's rule we have
$$\lim_{z \to 0^+} \frac{1}{z^4} \int_{0}^{z} \cos(x^2)x^3\,dx=\lim_{z \to 0^+} \frac{F(z)}{z^4}=\lim_{z \to 0^+} \frac{\cos(z^2)z^3}{4z^3}=\frac14$$</p>
|
2,670,258 | <p>Just to explain the motivation behind some other thing, I shall ask first this question:</p>
<blockquote>
<p>Is $\frac{1}{x}$ continuously differentiable (or we can examine $\frac{1}{x^3}$ also) on the set $$E = (-\infty, 0) \cup (0, \infty)?$$</p>
</blockquote>
<p>The reason why I'm asking this is that when we consider a solution to a given initial value problem, we define the interval of definition of that solution as a single connected set (i.e., an interval), and the reason for this was explained to me as being that we are trying to find solutions $y(x) \in C^1$; but when I discard the problematic points from my domain, why should there be any problem with the condition that $y \in C^1$?</p>
<p><strong>Edit:</strong></p>
<p>For example, let's say that we solve an ODE, and the general solution is of the form
$$y(x) = c / x.$$
Then if the initial condition is given as $y(1) = 1$, we see that $c = 1$, and the interval of definition is $(0,\infty)$, which contains the initial point $x = 1$. However, I do not understand the motivation behind why the interval of definition is not $(-\infty, 0) \cup (0, \infty)$.</p>
| MPW | 113,214 | <p>Your notation is inadequate. The function is in $C^1(E)$. It is also in $C^1(U)$ for every interval $U\subset E$. You must specify the domain in this notation if it isn’t implicit, especially when you are considering several of them.
In other words, $C^1$ without a specified domain is ambiguous.</p>
|
2,670,258 | <p>Just to explain the motivation behind some other thing, I shall ask first this question:</p>
<blockquote>
<p>Is $\frac{1}{x}$ continuously differentiable (or we can examine $\frac{1}{x^3}$ also) on the set $$E = (-\infty, 0) \cup (0, \infty)?$$</p>
</blockquote>
<p>The reason why I'm asking this is that when we consider a solution to a given initial value problem, we define the interval of definition of that solution as a single connected set (i.e., an interval), and the reason for this was explained to me as being that we are trying to find solutions $y(x) \in C^1$; but when I discard the problematic points from my domain, why should there be any problem with the condition that $y \in C^1$?</p>
<p><strong>Edit:</strong></p>
<p>For example, let's say that we solve an ODE, and the general solution is of the form
$$y(x) = c / x.$$
Then if the initial condition is given as $y(1) = 1$, we see that $c = 1$, and the interval of definition is $(0,\infty)$, which contains the initial point $x = 1$. However, I do not understand the motivation behind why the interval of definition is not $(-\infty, 0) \cup (0, \infty)$.</p>
| symplectomorphic | 23,611 | <p>Yes, $1/x$ and $1/x^3$ are continuously differentiable (in fact, infinitely differentiable) on your domain (the set of all nonzero reals).</p>
<p>The reason we typically assume that the domain of a solution to an IVP is connected is to ensure uniqueness. If you try to solve the IVP $f'(x)=-1/x^2$ with $f(1)=1$ on your domain, the solution isn't unique: the values of the function aren't determined on the negative half-line $(-\infty,0)$.</p>
|
2,072,061 | <p>My Working: </p>
<p>\begin{align}
a_0=&-0+2=2\\
a_1=&-1+2=1\\
a_2=&-2+2=0.
\end{align}
and $$a_2=a_{2-1}+2a_{2-2}+2(2)-9=-4.$$</p>
<p>I don't know what's wrong with my solution. I used this method on various questions but now I'm just ending up with the wrong answer.</p>
| Kanwaljit Singh | 401,635 | <p>Solving it further we have -</p>
<p>$a_{2} = a_{2-1} + 2a_{2-2} + 2(2) - 9$</p>
<p>$a_{2} = a_{1} + 2a_{0} + 2(2) - 9$</p>
<p>$a_{2} = 1 + 2(2) + 2(2) - 9$</p>

<p>$a_{2} = 1 + 4 + 4 - 9$</p>
<p>= 0</p>
<p>Your mistake is that you are using the value $a_{0} = 0$, but it is $2$.</p>
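<p>A quick script (assuming, from the worked values above, that the recurrence is $a_n = a_{n-1} + 2a_{n-2} + 2n - 9$ with $a_0 = 2$ and $a_1 = 1$) confirms the computation:</p>

```python
# Recurrence assumed from the worked values: a_n = a_{n-1} + 2*a_{n-2} + 2n - 9,
# with a_0 = 2 and a_1 = 1.
def a(n):
    vals = [2, 1]
    for k in range(2, n + 1):
        vals.append(vals[k - 1] + 2 * vals[k - 2] + 2 * k - 9)
    return vals[n]

print([a(n) for n in range(5)])  # a_2 comes out as 0
```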
|
812,263 | <p>To further explain the title:</p>
<p>Is there a probabilistic reason as to why a 6-sided die has the opposing sides summing to 7?</p>
<p>My argument began when a friend decided to use <a href="http://ecx.images-amazon.com/images/I/51OvRGphnOL.jpg" rel="nofollow">this die</a> instead of <a href="http://nerdywithchildren.com/wp-content/uploads/2013/05/d20.jpg" rel="nofollow">this die</a>.</p>
<p>I understand that having 20 sides, each is as likely to come up, but does the different pattern affect the subsequent rolls?</p>
<p>Thanks in advance!</p>
| Henry | 6,460 | <p>If dice were cuboids rather than cubes due to a manufacturing defect, then each face would no longer have equal probability, but by symmetry the expected value of each throw would still be $\dfrac{7}{2}$.</p>
<p>I have no idea whether this was a consideration.</p>
|
470,081 | <p>There is a question from an old topology prelim that is somewhat giving me a hard time. Consider the cylinder $X= S^1 \times [-1,1]$. Now we define an equivalence relation $\sim$ as follows: For points $v,v' \in S^1$, we have $(v,-1) \sim (v',-1)$ and $(v,1) \sim (v',1)$. I am asked to show that the quotient space $X^{*}= S^1 \times [-1,1]/\sim$ is homeomorphic to the unit sphere $S^2$. The problem is I can't off the top of my head come up with a decent continuous bijection from the quotient space onto $S^2$. What might work here? </p>
<p>Suppose I had some sort of continuous bijection $h: X^{*} \rightarrow S^2$. Now the quotient map $p: X \rightarrow X^{*}$ is continuous and surjective, and since $X$ is compact, so is $X^{*}$. We also know that $S^2$ being a topological manifold is Hausdorff. Recall that if there is a continuous bijection between the compact space $X^{*}$ (any compact space for that matter) and the Hausdorff space $S^2$ (or any Hausdorff space), then that continuous bijection is a homeomorphism. This is what I intended to do, but I still can't come up with such a continuous bijection. Also, perhaps I am a bit confused in trying to visualize the quotient space. I would really appreciate some input on this, and any ideas that may prove useful.</p>
| kahen | 1,269 | <p>The obvious projection map from $X$ (as a subset of $\mathbb R^3$) to the unit sphere in $\mathbb R^3$ is a continuous surjection. By the <a href="https://en.wikipedia.org/wiki/Quotient_space#Properties" rel="nofollow">universal property of the quotient topology</a> it descends to a map $f: X^* \to S^2$. Now show that $f$ has the desired property.</p>
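<p>One concrete choice of such a projection (an assumption on my part; any map with the same collapsing behaviour works) is
$$q\colon S^1\times[-1,1]\to S^2,\qquad q\big((\cos\theta,\sin\theta),t\big)=\left(\sqrt{1-t^2}\cos\theta,\ \sqrt{1-t^2}\sin\theta,\ t\right),$$
which sends $S^1\times\{1\}$ and $S^1\times\{-1\}$ to the poles $(0,0,1)$ and $(0,0,-1)$ respectively; it is therefore constant on equivalence classes and descends to the quotient.</p>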
|
1,934,259 | <blockquote>
<p><strong>Problem.</strong></p>
<p>Let <span class="math-container">$\emptyset \subset A\subset X$</span> and <span class="math-container">$\emptyset \subset B\subset Y$</span>. If <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are connected, show that <span class="math-container">$(X\times Y)\setminus (A\times B)$</span> is also connected by <strong>using the criterion of connectedness: if every continuous function <span class="math-container">$f:X\to \{\pm1\}$</span> is constant, then <span class="math-container">$X$</span> is connected</strong>.</p>
</blockquote>
<p>I began by assuming that there exists a function <span class="math-container">$f:(X\times Y)\setminus (A\times B)\to\{\pm1\}$</span> which is continuous but not constant but couldn't proceed any further beyond that.</p>
| Alex Mathers | 227,652 | <p>The proof is essentially the same as the one linked to. Suppose we have a function $f:(X\times Y)\setminus(A\times B)\to\{\pm1\}$. Begin by choosing $a\in X\setminus A$ and $b\in Y\setminus B$ (which is possible because both are proper subsets). Now, let $(x,y)\in(X\times Y)\setminus(A\times B)$ be arbitrary. We will show that $f(x,y)=f(a,b)$. </p>
<p>Because $(x,y)\notin A\times B$, either $x\notin A$ or $y\notin B$. Without loss of generality, suppose $x\notin A$. Then $\{x\}\times Y$ is homeomorphic to $Y$ and contained in $(X\times Y)\setminus(A\times B)$, so the restriction $f|_{\{x\}\times Y}$ is constant. Similarly, $X\times\{b\}$ is homeomorphic to $X$ and contained in $(X\times Y)\setminus(A\times B)$, so $f|_{X\times\{b\}}$ is constant. Hence</p>
<p>$$f(x,y)=f(x,b)=f(a,b)$$</p>
<p>and we are done.</p>
|
2,003,660 | <p>I got this exercise from the textbook Book of Proof, CH4 E12. I've tackled this problem in the following manner:</p>
<p>Suppose $x$ is real and $0 < x < 4$. It follows that</p>
<p>\begin{align*}
&\Rightarrow 0 - 2 < x - 2 < 4 - 2 \\
&\Rightarrow 4 < (x - 2)^2 < 4\\
&\Rightarrow 0 \leq (x - 2)^2 < 4
\end{align*}</p>
<p>Since, $x(4 - x) = 4x - x^2 = 4 - (x - 2)^2$, then</p>
<p>$$\dfrac{4}{x(4 - x)} = \dfrac{4}{4 - (x - 2)^2}.$$</p>
<p>This expression is greater than or equal to $1$ for
$0 \leq (x - 2)^2 < 4$. Thus,</p>
<p>$$\dfrac{4}{x(4 - x)} \geq 1.$$</p>
<p>I'm quite new to proof techniques, and I'm using this book to self-learn logic and proof writing. My question is: is the solution stated above logically sound? Would my arguments be considered sufficient to prove that $P \Rightarrow Q$?</p>
| Barry Cipra | 86,747 | <p>Your approach is actually quite nice, but there is one error: It is incorrect to say that $0-2\lt x-2 \lt 4-2$ implies $4\lt(x-2)^2\lt4$. What you want to say instead is something like</p>
<p>$$0-2\lt x-2\lt4-2\implies|x-2|\lt2\implies0\le(x-2)^2\lt4$$</p>
|
1,493,783 | <p>We have $4$ pockets, each containing $4$ placement positions; we want to obtain an estimate of the number of possible arrangements when the exact set of allowed elements is given:
$$ls=[a,a,a,a,b,b,c,c,d,d,x,y] \tag{1}$$
and numbers [1:4] are used for empty slots. For example if one pocket is of the form -,a,-,- then it corresponds to $1$ empty slot then letter a then $2$ empty slots.</p>
<p>The only real restriction is that each pocket can contain at most 4 elements, and the size is always 4. Two different examples to clarify the size discussion: a,b,c,d has $4$ letters, thus size $4$; another would be 1,a,2, which has $1$ letter, thus 1+1+2 = 4. Another clarification: for example, a,b,c,d is distinct from b,a,c,d, else we would be counting combinations.</p>
<p>I'm really interested to learn how one should go about doing the count of arrangements of all four pockets, when there are restrictions such as above on pockets. For simplicity, let us assume for the moment that all items in $(1)$ without exception should be inserted into pockets. One example of an arrangement would be: $$a,b,c,d - c,b,a,a - d,a,x,y - 4$$</p>
<ul>
<li>Essentially the question is: how many possible arrangements are there like the above?</li>
<li>The main difficulty, to me, lies in the fact that we have restrictions, and that once one pocket is filled and we move to the next, the set of elements available for the remaining pockets has changed. If we had only one pocket, then the problem would translate to: permutations of $n$ objects taken $4$ at a time.</li>
</ul>
<hr>
<p>As pointed out by Ian Miller, the problem can effectively be formulated in terms of the number of permutations of $n$ elements with repetition. So, as an additional sub-question: how does one include restrictions such as the a's being allowed only in a certain set of positions (e.g. positions 1 to 8, i.e. the first 2 pockets, or 8 to 12, etc.)?</p>
| true blue anil | 22,388 | <p>There is a typo in Miller's answer, it should be $\dfrac{16!}{4!4!2!2!2!}$, as there are $2$ each of $b,c,d$</p>
<p>The ans can also be got by <em>placing</em> elements which will make dealing with the restrictions easier. </p>
<p>$\dbinom{16}{4}\dbinom{12}{4}\dbinom 82\dbinom 62 \dbinom 42\dbinom21\dbinom11 = 4540536000 = T $, say</p>
<p>To come now to the restrictions:</p>
<p><em>x and y can't be next to each other :</em></p>
<p>They <em>can</em> be next to each other as $xy$ or $yx$, and we treat as if there are only 15 slots,</p>
<p>so $2*\dbinom{15}1\dbinom{14}4\dbinom{10}4\dbinom62\dbinom42\dbinom22 = P$, say</p>
<p>and ans = $T - P$</p>
<p><em>4 a's can only be in pockets 1-8:</em> Place them first and go on</p>
<p>$\dbinom84\dbinom{12}4\dbinom82\dbinom62\dbinom42\dbinom21\dbinom11$</p>
<p><em>PS</em></p>
<p>The expressions can, of course, be condensed as multinomial coefficients,</p>
<p>e.g. the $2_{nd}$ answer would be $\dbinom{16}{4,4,2,2,2,1,1}-2\dbinom{15}{1,4,4,2,2,2}$</p>
|
2,131,224 | <blockquote>
<p>Say I have vectors $x, y$, then is $\text{proj }_x y $ a scalar multiple of $x$?</p>
</blockquote>
<p>I have a book saying that it is, but I have no clue why this true. Is this really true?</p>
| angryavian | 43,949 | <p>Yes this is true. You should check any definitions/formulas for the projection that you have.</p>
<p>In general, $$\operatorname{proj}_x y = \frac{\langle y,x\rangle }{\langle x,x\rangle } x$$
from which your claim is clear.</p>
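<p>A tiny numerical illustration of the formula (plain Python, vectors as lists):</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(x, y):
    """Orthogonal projection of y onto the line spanned by x."""
    c = dot(y, x) / dot(x, x)        # the scalar <y, x> / <x, x>
    return [c * a for a in x], c

x = [3.0, 4.0]
y = [1.0, 2.0]
p, c = proj(x, y)
print(p, c)                                       # p = c * x, a scalar multiple of x
print(dot([y[i] - p[i] for i in range(2)], x))    # residual is orthogonal to x
```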
|
483,533 | <p>How to prove that the following linear map :</p>
<p>$T : \Bbb R^2 \to \Bbb R^2$ defined by $$T(e_1)=e_2$$
$$T(e_2)=0$$ </p>
<p>can't be diagonalized with respect to any basis. Here $e_i$s are standard basis.</p>
| Ben Grossmann | 81,360 | <p>A transformation is diagonalizable iff its eigenvectors form a basis of the vector space. In this case, $T$'s only eigenvectors are the multiples of $e_2$. Since these do not form a basis of $\mathbb R^2$, $T$ is not diagonalizable.</p>
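<p>Another way to see it: in the standard basis $T$ has matrix $\begin{pmatrix}0&0\\1&0\end{pmatrix}$, which is nonzero but squares to zero, and a nonzero nilpotent matrix is never diagonalizable (a diagonalizable matrix whose eigenvalues are all $0$ must be the zero matrix). A two-line check:</p>

```python
# Columns are T(e1) = e2 and T(e2) = 0.
A = [[0, 0],
     [1, 0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(A, A))   # A^2 = 0 even though A != 0, so A is nilpotent
```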
|
4,552,098 | <p>The rules: Two people rolls a single dice. If the dice rolls 1,2,3 or 4, person A gets a point. For the rolls 5 and 6, person B gets a point. One person needs a 2 point lead to win the game.</p>
<p>This is a question taken from my math book. The answer says the probability is <span class="math-container">$\frac{4}{5}$</span> for person A to win the game, which I don't understand.</p>
<p>My thought process: Let's look at all four possible outcomes of the first two rolls. These would be AA, BB, AB or BA. AA means person A gets a point two times in a row. Below are the probabilities for these scenarios:</p>
<p><span class="math-container">$P(AA)=(\frac{2}{3})^2=\frac{4}{9}$</span></p>
<p><span class="math-container">$P(BB)=(\frac{1}{3})^2=\frac{1}{9}$</span></p>
<p><span class="math-container">$P(AB)=\frac{2}{3}\cdot\frac{1}{3}=\frac{2}{9}$</span></p>
<p><span class="math-container">$P(BA)=\frac{1}{3}\cdot\frac{2}{3}=\frac{2}{9}$</span></p>
<p>If AB or BA happens, they have an equal number of points again, no matter how far they are into the game. The probability of this would then be <span class="math-container">$2\cdot\frac{2}{9}=\frac{4}{9}$</span>. Since they have an equal number of points, you can regard that as the game having restarted.</p>
<p>That would mean person A has to get two points in a row to win, no matter what. Would that not mean the probability is <span class="math-container">$\frac{4}{9}$</span> for person A to win? Can someone tell me where my logic is flawed and what the correct logic would be?</p>
| Parcly Taxel | 357,390 | <p>After you show that the game effectively restarts after <span class="math-container">$AB$</span> or <span class="math-container">$BA$</span> and is played out in batches of two, the remaining probabilities must be "scaled up" so that they add to <span class="math-container">$1$</span> to give the true chances of <span class="math-container">$A$</span> or <span class="math-container">$B$</span> winning. This is done by dividing by the sum, so <span class="math-container">$A$</span> wins with probability
<span class="math-container">$$\frac{4/9}{4/9+1/9}=\frac45$$</span></p>
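<p>The value $4/5$ is easy to corroborate by simulation; a small sketch:</p>

```python
import random

def a_wins_game(rng):
    """One game: A scores with probability 2/3; first player 2 points ahead wins."""
    lead = 0                          # A's score minus B's score
    while abs(lead) < 2:
        lead += 1 if rng.random() < 2 / 3 else -1
    return lead > 0

rng = random.Random(0)
games = 100_000
p_a = sum(a_wins_game(rng) for _ in range(games)) / games
print(p_a)                            # close to 4/5
```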
|
2,081,001 | <p>The number $\frac{22}{7}$ is irrational in our base-$10$ system, but in, say, base-$14$, it is rational (it comes out to $3.2$ in that system).</p>
<p>It's easy for fractions that are irrational as decimals, as you can just represent them in a base that's double the denominator of the fraction. However, what if I have a number like $\pi$, or $\log(2)$?</p>
<p>For those numbers, it could easily be represented as a rational number if it is in base-($\pi\cdot 2$) or base-($\log(2)\cdot 2$), but is it possible to represent them in any rational-based number system?</p>
| Mark Bennet | 2,906 | <p>Whether a number is rational or not is independent of the base in which the number may be expressed.</p>
<p>On the other hand the fraction $\frac ab: a,b\in \mathbb Z, b\gt 0$ may terminate or eventually recur when expressed as a "decimal" (Hardy could find no better word - Hardy and Wright, Introduction to the Theory of Numbers). Choosing the base $b$ automatically ensures that the expression terminates. This doesn't work if $b=1$, but then you have an integer anyway.</p>
|
859,209 | <p>I'm looking for an efficient method of solving the following inequality: $$\left(\frac{x-3}{x+1}\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 <0$$</p>
<p>I've tried first determining when the absolute value will be positive or negative, etc., and then giving it the appropriate sign according to the range it is in, but it turned out to be quite complex and apparently also wrong. Are there any other ways?</p>
| Elimination | 160,028 | <p><strong>Hint (for a start):</strong> $$t = \frac{x-3}{x+1}$$</p>
|
859,209 | <p>I'm looking for an efficient method of solving the following inequality: $$\left(\frac{x-3}{x+1}\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 <0$$</p>
<p>I've tried first determining when the absolute value will be positive or negative, etc., and then giving it the appropriate sign according to the range it is in, but it turned out to be quite complex and apparently also wrong. Are there any other ways?</p>
| Mary Star | 80,708 | <p>$$\left(\frac{x-3}{x+1}\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 \lt 0 \Rightarrow \\ \left(\left|\frac{x-3}{x+1}\right|\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 \lt 0 $$</p>
<p>$$\Delta=\left(-7\right)^2-4 \cdot 10=49-40=9$$</p>
<p>$$\left|\frac{x-3}{x+1}\right|_{1,2}=\frac{-\left(-7\right) \pm \sqrt{\Delta}}{2}=\frac{7 \pm \sqrt{9}}{2}=\frac{7 \pm 3}{2}$$</p>
<p>$\left(\left|\frac{x-3}{x+1}\right|\right)^2-7 \left|\frac{x-3}{x+1}\right|+ 10 \lt 0 \Rightarrow \left|\frac{x-3}{x+1}\right| \in \left(\frac{7-3}{2},\frac{7+3}{2}\right)=\left(2,5\right)$</p>
<p>$$\left|\frac{x-3}{x+1}\right|\gt 2 \Rightarrow \frac{x-3}{x+1}\lt -2 \text{ OR } \frac{x-3}{x+1}\gt 2 $$
and
$$\left|\frac{x-3}{x+1}\right|\lt 5 \Rightarrow -5\lt \frac{x-3}{x+1}\lt 5 $$</p>
<p>Solve at each case for $x$.</p>
<p>Can you continue?</p>
|
2,737,144 | <p>I tried to prove it by contradiction. </p>
<p>Suppose it is not true that $1\ge\frac{3}{x(x-2)}$, so $1\lt\frac{3}{x(x-2)}$. Then $\frac{3}{x(x-2)}-1\gt0$. Multiply both sides of $\frac{3}{x(x-2)}-1\gt0$ by ${x(x-2)}$.</p>
<p>$(\frac{3}{x(x-2)}-1\gt0)({x(x-2)}\gt0(x(x-2)$</p>
<p>${3-(x(x-2)\gt0}$</p>
<p>${3-x^2-2x\gt0}$</p>
<p>${-x^2-2x+3\gt0}$</p>
<p>${-1(x^2+2x-3)\gt0}$</p>
<p>Dividing both sides by $-1$ (flipping the inequality):</p>
<p>${(x-1)(x+3)\lt0}$</p>
<p>At this point I really do not know what to do after this point or if I really even went about it the right way. Thank you for the help.</p>
| marty cohen | 13,079 | <p>$x > 3
\implies x-2 > 1
\implies x(x-2) > 3
\implies 1 > \dfrac{3}{x(x-2)}
$</p>
|
1,886,239 | <p>I am trying to figure out whether or not the following series is convergent: $$\sum_{k=1}^{\infty} \frac{\ln(3^k-2k)}{3k+k^2}$$ </p>
<p>Now, I know from the back of the book that it is divergent, but I haven't been able to show it. I think I am supposed to compare it to some other series, but I don't know which one. I have tried looking at the integral $\int_{1}^{\infty} f(x) dx$ but that integral was really hard to solve (haven't managed it) which makes me think that there should be an easier way. </p>
<p>In general, I am having some trouble with this type of exercise where I should use comparison tests. I never know what I should compare it to.</p>
| Zestylemonzi | 270,448 | <p>The bog-standard comparison test works well too. Note that (for $k \ge 3$)
\begin{align*}
\frac{\log(3^k - 2k)}{3k+k^2} &\ge \frac{\log(3^k - \frac{1}{2}3^k)}{3k^2+k^2}\\
&=\frac{\log(\frac{1}{2}3^k)}{4k^2}\\
&=\frac{\log(3)}{4k} - \frac{\log(2)}{4k^2}
\end{align*}
and hence the series diverges.</p>
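<p>Numerically, the $k$-th term does behave like $\log(3)/k$, which is why comparison with the harmonic series works; a quick check:</p>

```python
import math

def term(k):
    # k-th term of the series; 3**k is an exact Python integer, and math.log
    # handles arbitrarily large integers.
    return math.log(3 ** k - 2 * k) / (3 * k + k * k)

for k in (10, 100, 1000, 10_000):
    print(k, k * term(k))             # tends to log(3), about 1.0986
```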
|
14,765 | <p>I like to make the "dominoes" analogy when I teach my students induction.</p>
<p>I recently came across the following video:</p>
<p><a href="https://www.youtube.com/watch?v=-BTWiZ7CYoI" rel="noreferrer">https://www.youtube.com/watch?v=-BTWiZ7CYoI</a></p>
<p>In this video, a sequence of concrete block wall caps is set up like dominoes on the top of a wall. The first wall cap is knocked down, setting off the domino effect. The blocks are spaced so that they are resting on each other when they fall, but just barely. So rather than resting flat, each block is supported slightly by its successor. When the last block falls, however, it falls flat (having no subsequent block to rest on). This causes the block behind it to slip off and lie flat, which causes the block behind it to slip off and lie flat, until all the blocks are lying flat perfectly end to end.</p>
<p>Is there any instance of a similar phenomenon occurring in mathematics? I am thinking of a situation in which you want to prove both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span> (say). If you are able to prove: </p>
<ol>
<li><span class="math-container">$P(1)$</span></li>
<li><span class="math-container">$\forall k \in \{1,2,3, \dots, 99\} P(k) \implies P(k+1)$</span></li>
<li><span class="math-container">$P(100) \implies Q(100)$</span></li>
<li><span class="math-container">$\forall k \in \{ 100, 99, 98, \dots, 3,2\}, Q(k) \implies Q(k-1)$</span></li>
</ol>
<p>Then it will follow that both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> are true for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span>.</p>
<p>If an example is found, it could be a great example for teaching because it would force students to think through the logic of why induction works rather than blindly following a certain form of "an induction proof".</p>
| Benoît Kloeckner | 187 | <p>Not a perfect solution, but I would insist on the meaning of <span class="math-container">$\sqrt{\cdot}$</span>: I have observed that this symbol quickly becomes a meaningless mantra to students. I regularly ask (notably in 1st year of master for future high school teacher, also in France) what <span class="math-container">$\sqrt{2}$</span> means. First time students stare at me, and I go with them through the definition: <span class="math-container">$\sqrt{2}$</span> is the unique positive real number whose square is <span class="math-container">$2$</span>. Then I question the definition: does it make sense? Why is there one such number? Why only one? Is there a non-positive one? (intermediate value theorem / monotony / look at the graph of the square function and draw a horizontal line).</p>
<p>It will not replace drill, but it complements it. Often, this kind of mistake come from the loss of meaning. Bring meaning back any way you can.</p>
|
3,643,917 | <p>Find the limit of
<span class="math-container">$$\lim_{x\to 0+} \frac{1}{x^2} \int_{0}^{x} t^{1+t} dt$$</span>. </p>
<p>My idea is to use L'Hospital's rule, but I am not sure why I can use it and how I should do it. Many thanks to those who are willing to help.</p>
| Reveillark | 122,262 | <p>Formally, <span class="math-container">$t^{1+t}=te^{t\ln(t)}\to 0$</span> when <span class="math-container">$t\to 0^+$</span>, because <span class="math-container">$\lim_{t\to 0^+} t\ln(t)=0$</span>. It follows that, if <span class="math-container">$f(x)=\int_0^x t^{1+t}dt$</span>,
<span class="math-container">$$
\lim_{x\to 0^+}f(x)=0
$$</span>
The function <span class="math-container">$f$</span> is differentiable at <span class="math-container">$0$</span>. Having checked all the hypotheses for L'Hôpital's Rule, </p>
<p><span class="math-container">$$
\lim_{x\to 0^+} \frac{f(x)}{x^2}=\lim_{x\to 0^+} \frac{x^{1+x}}{2x}=\lim_{x\to 0^+} \frac{x^x}{2}=\frac{1}{2}
$$</span></p>
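<p>As a numerical sanity check, a midpoint-rule estimate of $f(x)=\int_0^x t^{1+t}\,dt$ for small $x$ shows $f(x)/x^2$ approaching $1/2$:</p>

```python
def f(x, n=2000):
    """Midpoint-rule estimate of the integral of t^(1+t) over [0, x]."""
    h = x / n
    return h * sum(((i + 0.5) * h) ** (1 + (i + 0.5) * h) for i in range(n))

for x in (0.1, 0.01, 0.001):
    print(x, f(x) / x ** 2)           # tends to 0.5 as x -> 0+
```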
|
1,574,290 | <p>How do I prove this? </p>
<p>For the Fibonacci numbers defined by $f_1=1$, $f_2=1$, and $f_n = f_{n-1} + f_{n-2}$ for $n ≥ 3$, prove that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$ for all $n≥ 1$.</p>
| egreg | 62,967 | <p>The recursion can also be written as
$$
f_{n+2}=f_{n+1}+f_n
$$
So you want to prove that the formula holds for $n=1$ (just plug in the value) and that, if it holds for $n$, then it holds for $n+1$.</p>
<p>Thus, assume you know that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$. Then
$$
f_{n+2}^2-f_{n+2}f_{n+1}-f_{n+1}^2=
(f_{n+1}+f_n)^2-(f_{n+1}+f_n)f_{n+1}-f_{n+1}^2
$$
Expand and apply the induction hypothesis</p>
|
1,574,290 | <p>How do I prove this? </p>
<p>For the Fibonacci numbers defined by $f_1=1$, $f_2=1$, and $f_n = f_{n-1} + f_{n-2}$ for $n ≥ 3$, prove that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$ for all $n≥ 1$.</p>
| lhf | 589 | <p>You can reduce it immediately to <a href="http://en.wikipedia.org/wiki/Cassini_and_Catalan_identities" rel="nofollow">Cassini's identity</a>:
$$
f^2_{n+1} - f_{n+1}f_n - f^2_n
=
f_{n+1}(f_{n+1} - f_n) - f^2_n
=
f_{n+1}f_{n-1} - f^2_n
= (-1)^n
$$
Cassini's identity has a nice proof using determinants:
$$
f_{n-1}f_{n+1} - f_n^2
=\det\left[\begin{matrix}f_{n+1}&f_n\\f_n&f_{n-1}\end{matrix}\right]
=\det\left[\begin{matrix}1&1\\1&0\end{matrix}\right]^n
=\left(\det\left[\begin{matrix}1&1\\1&0\end{matrix}\right]\right)^n
=(-1)^n
$$
This matrix formulation of Fibonacci numbers is well worth knowing.</p>
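<p>The identity is also pleasant to verify by machine; a short sketch using integer matrix powers:</p>

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(M, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, M)
    return R

Q = [[1, 1], [1, 0]]
for n in range(1, 10):
    Qn = matpow(Q, n)                 # [[f_{n+1}, f_n], [f_n, f_{n-1}]]
    det = Qn[0][0] * Qn[1][1] - Qn[0][1] * Qn[1][0]
    lhs = Qn[0][0] ** 2 - Qn[0][0] * Qn[0][1] - Qn[0][1] ** 2
    print(n, det, lhs)                # both equal (-1)^n
```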
|
2,423,086 | <p>Show, by listing the elements or how we can list the elements, that </p>
<p>$\mathbb{N}^3 = \mathbb{N}\times\mathbb{N}\times\mathbb{N}$ is a countable set.</p>
<p><strong>Attempt at a solution:</strong></p>
<p>I was thinking about making use of Cantor's diagonal argument. If this were $\mathbb{N}\times\mathbb{N}$ I could just do:</p>
<p>(1,1), (1,2), (1,3),...</p>
<p>(2,1), (2,2), (2,3)...</p>
<p>(3,1), (3,2), (3, 3)...</p>
<p>and snake my way through it.</p>
<p>However, I am having trouble finding a way to list $\mathbb{N}\times\mathbb{N}\times\mathbb{N}$ such that I can implement the diagonal argument.</p>
| mechanodroid | 144,766 | <p>$\mathbb{N}^3$ is a countable union of finite sets:</p>
<p>$$\mathbb{N}^3 = \bigcup_{n\in\mathbb{N}}\{(i,j,k)\in\mathbb{N^3} : i + j + k = n\}$$</p>
<p>Hence, $\mathbb{N}^3$ is countable.</p>
<p>For an explicit injection:</p>
<p>Notice that every $n \in \mathbb{N}$ can be written in a unique way as a product of a power of $2$, power of $3$, and a number not divisible by $2$ nor $3$.
$$n = 2^{i-1}3^{j-1}k$$</p>
<p>Now define $f : \mathbb{N}\to\mathbb{N^3}$ as $f(n) = (i,j,k)$. This map is injective (though not surjective, since its third coordinate is never divisible by $2$ or $3$); combined with the injection $\mathbb{N}^3\to\mathbb{N}$ given by $(i,j,k)\mapsto 2^i3^j5^k$, the Cantor–Schröder–Bernstein theorem yields a bijection.</p>
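<p>The countable-union decomposition can be turned into an explicit listing; a small sketch (taking $\mathbb{N}=\{1,2,3,\dots\}$) that enumerates the finite shells $i+j+k=n$ in order:</p>

```python
def shell(n):
    """All triples (i, j, k) of positive integers with i + j + k = n."""
    return [(i, j, n - i - j)
            for i in range(1, n - 1)
            for j in range(1, n - i)]

# Concatenating the shells for n = 3, 4, 5, ... lists N^3 without repetition.
enumeration = [t for n in range(3, 9) for t in shell(n)]
print(len(enumeration), enumeration[:5])
```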
|
2,642,547 | <p>If a subset A of $\mathbb{R}^n$ has no interior, must it be closed?</p>
<p>Can I prove this using the example of a subset A that consists of a single point, so A has no interior yet it is closed?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>The set of rational numbers has empty interior, and it is not closed.</p>
<p>Note that a set is closed exactly when its complement is open; the complement of the rationals is the set of irrationals, which is not open.</p>
|
2,008,341 | <p>Consider the standard presentation of $D_{2n}$:</p>
<p>$\langle r, s : r^n = s^2 =1, rs = sr^{-1}\rangle$.</p>
<p>I have seen the latter relation given as $sr = r^{-1}s$ a few times. Is this correct, as well?</p>
| pjs36 | 120,540 | <p>I would believe it simply because the group defined "doing everything backwards" is isomorphic to the group you get "doing everything normally." </p>
<p>In particular, given a group $(G, *)$, define its opposite group $(G^{\rm op}, *^{\rm op})$ by $g *^{\rm op} h = h * g$. It's a fun exercise to show that the map $G \to G^{\rm op}$ defined by $g \mapsto g^{-1}$ is an isomorphism.</p>
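<p>For a concrete check of that exercise, one can test the homomorphism property $(gh)^{-1} = g^{-1} *^{\rm op} h^{-1}$ on a small nonabelian group, say $S_3$ modeled as permutations:</p>

```python
from itertools import permutations

# Model S3 as permutations of (0, 1, 2); compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def op_compose(p, q):          # the opposite group's multiplication
    return compose(q, p)

G = list(permutations(range(3)))
# Check that g -> g^{-1} is a homomorphism G -> G^op (hence an isomorphism,
# since it is clearly a bijection).
ok = all(inverse(compose(g, h)) == op_compose(inverse(g), inverse(h))
         for g in G for h in G)
print(ok)
```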
|