qid | question | author | author_id | answer |
|---|---|---|---|---|
2,433,614 | <p>The solution to the initial-value problem $y' = y^{2} + 1$ with $y(0)=1$ is $y = \tan(x + \frac{\pi}{4})$. I would like to show that this is the correct solution in a way that is analogous to my solution of the differential equation $y' = y - 12$.</p>
<p><strong>Solution to Differential Equation</strong></p>
<p>If $y' = y - 12$,
\begin{equation*}
\frac{y'}{y - 12} = 1 ,
\end{equation*}
\begin{equation*}
\left(\ln(y - 12)\right)' = 1 ,
\end{equation*}
\begin{equation*}
\ln(y - 12) = t + C
\end{equation*}
for some constant $C$. If $C' = e^{C}$,
\begin{equation*}
y = C' e^{t} + 12 .
\end{equation*}</p>
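<p>(A quick symbolic check, not part of the original post: the claimed solution of the initial-value problem can be verified with SymPy.)</p>

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x + sp.pi / 4)  # claimed solution of y' = y^2 + 1, y(0) = 1

# d/dx tan(u) = tan(u)^2 + 1, so the ODE holds identically
assert sp.simplify(sp.diff(y, x) - (y**2 + 1)) == 0
# and the initial condition holds: tan(pi/4) = 1
assert y.subs(x, 0) == 1
```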
| John Hughes | 114,036 | <p>Apostol's calculus book has (I believe) a nice approach to sine/cosine: they lay out a few "axioms" that, taken together, define sine and cosine on all points of the form $\frac{n\pi}{2^k}$, where $n$ and $k$ are integers. </p>
<p>Then they show that as a function on this set, sine and cosine are continuous. </p>
<p>Then they show that this set is dense in $\Bbb R$, and hence the sine and cosine functions so defined actually have a unique continuous extension to the whole real line. </p>
<p>Since one of the "axioms" is that $\sin^2 t + \cos^2 t = 1$ for all $t$, it's then pretty easy to show that for every point on the circle, there's a $t$ with $(\cos t, \sin t)$ being that point, via the intermediate value theorem. </p>
<p>That's not the approach you were looking for, I know, but it IS a nice rigorous way to get sines and cosines, which may be what you wanted, and it doesn't require convergence of power series and other such things. </p>
|
927,815 | <p>Find an equation of the plane that passes through the points $A(0, 1, 0)$, $B(1, 0, 0)$ and $C(0, 0, 1)$.</p>
| Paul Sundheim | 88,038 | <p>HINT: The equation of a plane can be expressed as $ax+by+cz=d$. Since the plane goes through $(1,0,0)$, $a=d$. Use the other two points in the same way to get the equation.</p>
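<p>(A numerical sketch, not from the original answer and not the hint's approach: one standard alternative is to take a cross product of two edge vectors to get a normal vector, then confirm that all three points satisfy the resulting equation, which here reduces to $x+y+z=1$.)</p>

```python
import numpy as np

A = np.array([0, 1, 0])
B = np.array([1, 0, 0])
C = np.array([0, 0, 1])

n = np.cross(B - A, C - A)  # a normal vector to the plane through A, B, C
d = n @ A                   # plane: n . x = d

# all three points satisfy the equation
for P in (A, B, C):
    assert n @ P == d

# up to scaling this is x + y + z = 1
assert np.allclose(n / d, [1, 1, 1])
```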
|
944,992 | <p>I'm trying to understand each part of this completed proof that my professor did, here is my interpretation in parentheses, please advise as necessary.</p>
<p>Proof: Assume that $2^{1/2}$ is rational; then $2^{1/2} = p/q$ for some integers $p$ and $q$ with $q \neq 0$ and $\gcd(p,q) = 1$.</p>
<p>$$(p/q)^2 = 2 = p^2 / q^2 $$ $$\text{(What was the purpose of squaring } \sqrt{2} \text{ and } p/q\text{?)}$$</p>
<p>$$p^2 = 2q^2$$ $$\text{(Is this basically just canceling the } q^2 \text{ on the left side and multiplying it onto the right side?)}$$</p>
<p>$2 \mid p^2$ (2 divides $p^2$ since 2 is a factor)</p>
<p>$2 \mid p$ (thus 2 divides $p$)</p>
<p>$p = 2k$ for some integer $k$ (definition of even)</p>
<p>$(2k)^2 = 4k^2 = 2q^2$ (Why are we squaring $2k$, and how does $4k^2 = 2q^2$?)</p>
<p>$q^2 = 2k^2$ (canceled a 2 on one side by dividing the other)</p>
<p>$2 \mid q^2$ (2 divides $q^2$)</p>
<p>$2 \mid q$ (thus 2 divides $q$)</p>
<p>$2 \mid \gcd(p,q)$ (2 divides the gcd of $p$ and $q$)</p>
<p>$2 \mid 1$, which is a contradiction (no clue how I got here)</p>
| Chris Culter | 87,023 | <p>The proof <em>strategy</em> is valid, but it needs some formalization if we want to be confident that it proves exactly what it claims. In this case, the statement is false as written; you need to assume that the polygon is convex!</p>
<p>Here's one way to formalize your proof. In physics, gravity tends to minimize the distance from the center of mass to the floor. Mathematically, then, we want to argue that there exists a point $M$ on $P$ with minimal distance to $A$, and that $M$ lies in the interior of a side $S$, rather than on a vertex. (Here we need $P$ to be convex. Otherwise, we'll get stuck.) Then, we can argue that $MA\perp S$.</p>
<p>Looking back, our physical intuition helps us to decide which mathematical tools to employ, and the mathematical deductive process forces us to clarify the statement before we claim a full proof.</p>
|
1,918,885 | <p>I would like to show that $\mathbb{Q}$ is the smallest field between $\mathbb{Z}$ and $\mathbb{R}$. In other words, if there is a field $\mathbb{F}$ with $\mathbb{Z} \subset \mathbb{F} \subset \mathbb{R}$, then it must also be that $\mathbb{Q} \subset \mathbb{F}$.</p>
<p>My proof is by contradiction, assume that it is not the case $\mathbb{Q} \subset \mathbb{F}$. Then, it is possible to find a $y \in \mathbb{Q}$ such that $y \notin \mathbb{F}$. Suppose that $y \in \mathbb{Q}$, then, by definition of rational numbers, we may find a $c,d \in \mathbb{Z}$ where $d \neq 0$ such that $y=\frac{c}{d}$.</p>
<p>BUT, we know from the assumption above that $\mathbb{Z} \subset \mathbb{F}$. This means that there exists $a,b \in \mathbb{Z}$ where $a=c$ and $b=d$. Furthermore, because $\mathbb{Z} \subset \mathbb{F}$, $a,b \in \mathbb{F}$. Now, due to the multiplicative inverse properties of $\mathbb{F}$, we know that $\frac{a}{b}$ exists and that $\frac{a}{b} \in \mathbb{F}$. </p>
<p>Putting it all together, we have that:</p>
<p>$$
y= \frac{c}{d} = \frac{a}{b} \in \mathbb{F}.
$$</p>
<p>But this contradicts the assumption that $y \notin \mathbb{F}$. </p>
<p>I am not sure if the above proof works, because in a proof by contradiction, I normally contradict something in the assumption area. However, it seems here that I am contradicting the negated result. </p>
<p>In other words, if I let $P$ be the statement: <strong>"if there is a field $\mathbb{F}$ where $\mathbb{Z} \subset \mathbb{F} \subset \mathbb{R}$"</strong></p>
<p>and $Q$ be the statement <strong>"then it must also be that $\mathbb{Q} \subset \mathbb{F}$"</strong>, </p>
<p>then in a direct proof I normally have $P \implies Q$. In a contradiction proof, what I understand is that we assume NOT $Q$, then try to contradict something in $P$. But here, it seems I am not really contradicting anything in $P$, but rather something in $Q$, and so seems almost circular. </p>
<p>Could anyone help me see what is missing? Thanks!</p>
| Rafael | 303,887 | <p>The contradiction is because you are assuming that $\mathbb{Z}\subset\mathbb{F}$. If you assume NOT Q you check NOT P: $\mathbb{Q}\not\subset\mathbb{F}\Rightarrow\mathbb{Z}\not\subset\mathbb{F}$.</p>
|
64,653 | <p>Consider the two (inequivalent) $\mathbb{Z}$-representations $\phi,\psi$ of the symmetric group $S=S_3$ given by</p>
<p>$(1,2)^\phi=\left(\begin{array}{rr}0 &-1\\\ -1 & 0\end{array}\right), \qquad
(1,2,3)^\phi=\left(\begin{array}{rr}0 &1\\\ -1 & -1\end{array}\right);$</p>
<p>$(1,2)^\psi=\left(\begin{array}{rr}0 &1\\\ 1 & 0\end{array}\right), \qquad
(1,2,3)^\psi=\left(\begin{array}{rr}0 &1\\\ -1 & -1\end{array}\right).$</p>
<p>Now, let $F=\langle x,y\rangle$ be a free 2-generated group. The representation $\phi$ can be "lifted" to an embedding $\tau:S\to\rm{Aut}(F)$ as follows:</p>
<p>$(1,2)^\tau=[x\mapsto y^{-1};\quad y\mapsto x^{-1}], \qquad
(1,2,3)^\tau=[x\mapsto y;\quad y\mapsto x^{-1}y^{-1}].$</p>
<p><strong>Question.</strong> Can one similarly lift $\psi$?</p>
<p><strong>Remark 1.</strong> By "lifting" a representation $\phi:S\to\rm{GL}_2(\mathbb{Z})$ I mean finding an embedding $\tau:S\to\rm{Aut}(F)$ such that $\phi=\tau\alpha$, where $\alpha:\rm{Aut}(F)\to\rm{GL}_2(\mathbb{Z})$ is the natural epimorphism.</p>
<p><strong>Remark 2.</strong> A naïve attempt to send
$(1,2)\ \mapsto\ [x\mapsto y;\quad y\mapsto x], \qquad
(1,2,3)\ \mapsto\ [x\mapsto y;\quad y\mapsto x^{-1}y^{-1}]$</p>
<p>does <em>not</em> give a lifting of $\psi$.</p>
| Allen Hatcher | 23,571 | <p>As Tom Goodwillie noted in his comment, $GL(2,\Bbb Z)$ can be identified with $Out(F_2)$, so the question can be rephrased in terms of lifting subgroups of $Out(F_2)$ to $Aut(F_2)$. There is a Realization Theorem for finite subgroups of $Aut(F_n)$ and $Out(F_n)$ which says that such a subgroup can always be realized as a group of symmetries of some finite connected graph with fundamental group $F_n$, where the symmetries fix a basepoint in the graph in the case of $Aut(F_n)$. When $n=2$ there are only two graphs to consider, and the relevant one for $S_3$ is the join of two points with three points. This has two symmetry groups isomorphic to $S_3$, but only one of these two groups fixes a basepoint, so this should answer the question.</p>
<p>The Realization Theorem is discussed in Karen Vogtmann's survey paper "Automorphism groups of free groups and outer space", section II.6. The references given there are to papers by M. Culler, B. Zimmermann, and D. G. Khramtsov from 1981 to 1984.</p>
|
221,026 | <p>I need to find the value of <span class="math-container">$z$</span> for a particular value of <span class="math-container">$D_c$</span> (eg. <span class="math-container">$500$</span>), but <span class="math-container">$z$</span> is inside an integral, and I'm not able to use <code>Solve</code> since the integral is giving <code>Hypergeometric2F1</code> function as the output.</p>
<pre><code>OmegaM = 0.3111;
OmegaLambda = 0.6889;
Dc = 500;
eqn = Integrate[(OmegaM (1 + z1)^3 + OmegaLambda)^(-1/2), {z1, 0, z},
Assumptions -> z > 0]
</code></pre>
<blockquote>
<pre><code>-1.1473+(1.20482+1.20482z)Hypergeometric2F1[0.333333,0.5,1.33333,-0.451589(1.+z)^3]
</code></pre>
</blockquote>
<pre><code>zvalue = Solve[eqn == Dc, z]
</code></pre>
<blockquote>
<pre><code>Solve was unable to solve the system with inexact coefficients or the
system obtained by direct rationalization of inexact numbers present
in the system. Since many of the methods used by Solve require exact
input, providing Solve with an exact version of the system may help.
</code></pre>
</blockquote>
<p>Is there any other way I can solve this equation? </p>
<p>Also, Integrate is taking some time and I'd like it to be fast since I need to put it in a loop with lots of <span class="math-container">$z$</span> values to be computed for corresponding <span class="math-container">$D_c$</span> values. </p>
| Bob Hanlon | 9,362 | <pre><code>OmegaM = 0.3111 // Rationalize;
OmegaLambda = 0.6889 // Rationalize;
Dc = 500;
eqn = Integrate[(OmegaM (1 + z1)^3 + OmegaLambda)^(-1/2), {z1, 0, z},
Assumptions -> z > 0]
</code></pre>
<p><a href="https://i.stack.imgur.com/qheZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qheZS.png" alt="enter image description here"></a></p>
<p>For <code>z > 0</code>, <code>eqn</code> is monotonically increasing</p>
<pre><code>Assuming[z > 0, D[eqn, z] > 0 // Simplify]
(* True *)
</code></pre>
<p>The maximum value of <code>eqn</code> is</p>
<pre><code>(lim = Limit[eqn, z -> Infinity]) // N
(* 3.25664 *)
LogLinearPlot[{lim, eqn}, {z, 10^-2, 10^4},
PlotLegends -> Placed["Expressions", {.3, .7}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/hpPac.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hpPac.png" alt="enter image description here"></a></p>
<p>Consequently, <code>eqn</code> can never equal the specified value of <code>Dc</code></p>
<p>Using instead</p>
<pre><code>Dc = 2;
</code></pre>
<p>Use <a href="https://reference.wolfram.com/language/ref/NSolve.html" rel="nofollow noreferrer"><code>NSolve</code></a></p>
<pre><code>zvalue = NSolve[{eqn == Dc, z > 0}, z]
(* {{z -> 7.13731}} *)
</code></pre>
<p>Or <a href="https://reference.wolfram.com/language/ref/FindRoot.html" rel="nofollow noreferrer"><code>FindRoot</code></a></p>
<pre><code>zvalue = FindRoot[eqn == Dc, {z, 1}]
(* {z -> 7.13731} *)
</code></pre>
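<p>(For comparison outside Mathematica — a sketch, not part of the original answer — the same root can be found in Python with SciPy, using the parameter values from the question and <code>Dc = 2</code> as above.)</p>

```python
from scipy.integrate import quad
from scipy.optimize import brentq

OmegaM, OmegaLambda = 0.3111, 0.6889
Dc = 2.0

def comoving(z):
    # numerically evaluate Integrate[(OmegaM (1 + z1)^3 + OmegaLambda)^(-1/2), {z1, 0, z}]
    val, _err = quad(lambda z1: (OmegaM * (1 + z1)**3 + OmegaLambda)**-0.5, 0, z)
    return val

# the integral is monotone in z and bounded above by ~3.25664, so Dc = 2 is reachable
zvalue = brentq(lambda z: comoving(z) - Dc, 1e-9, 100.0)
print(zvalue)  # approximately 7.13731
```

<p>Wrapping <code>comoving</code> and <code>brentq</code> in a loop over many <code>Dc</code> values avoids the slow symbolic <code>Integrate</code> entirely.</p>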
<p>Or <a href="https://reference.wolfram.com/language/ref/Reduce.html" rel="nofollow noreferrer"><code>Reduce</code></a> (provides the exact value as a Root expression)</p>
<pre><code>zvalue = Reduce[{eqn == Dc, z > 0}, z]
</code></pre>
<p><a href="https://i.stack.imgur.com/6193M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6193M.png" alt="enter image description here"></a></p>
<pre><code>zvalue // N
(* z == 7.13731 *)
</code></pre>
<p>Similarly with <a href="https://reference.wolfram.com/language/ref/Solve.html" rel="nofollow noreferrer"><code>Solve</code></a></p>
<pre><code>zvalue = Solve[{eqn == Dc, z > 0}, z][[1]]
</code></pre>
<p><a href="https://i.stack.imgur.com/78HaJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/78HaJ.png" alt="enter image description here"></a></p>
|
4,292,091 | <p>I'm reading the definition of <span class="math-container">$\inf\emptyset$</span> and <span class="math-container">$\sup\emptyset$</span>.</p>
<p>a) I'm wondering why <span class="math-container">$\inf\emptyset = \infty$</span> and <span class="math-container">$\sup\emptyset = -\infty$</span>. I would have expected both to be undefined.</p>
<p>b) In general, can something equal infinity if it's not in the extended real number system? Should I assume they are using the extended real numbers in these definitions?</p>
| José Carlos Santos | 446,262 | <p>Having<span class="math-container">$$\inf\emptyset=\infty\text{ and }\sup\emptyset=-\infty\tag1$$</span>is the only way of defining <span class="math-container">$\inf\emptyset$</span> and <span class="math-container">$\sup\emptyset$</span> so that you always have<span class="math-container">$$A\subset B\implies \inf A\geqslant\inf B\quad\text{and}\quad\sup A\leqslant\sup B.$$</span>And, yes, you can only have <span class="math-container">$(1)$</span> if we are working with the extended real numbers.</p>
|
2,707,749 | <p>I know this has to be extremely easy, but I'm not able to solve this problem.</p>
<p>The task is to find the angle at point $A$.</p>
<p>Thanks!</p>
<p><img src="https://i.stack.imgur.com/8NTEO.png" alt="Here is the image."></p>
| user | 293,846 | <p>By <a href="http://2000clicks.com/mathhelp/geometrytriangleinscribedanglecircle2.aspx" rel="nofollow noreferrer">extended central angle theorem</a> the angle $A=(175^\circ- 49^\circ)/2=63^\circ $.</p>
|
4,469,136 | <p>I took a number theory course this past semester and I found the idea of there being different primes in different fields interesting. The only fields that we covered in detail were the set of reals and the set of Gaussian integers.</p>
<p>What other fields are there, and what numbers are prime in those fields?</p>
| Ethan Bolker | 72,858 | <p>What you are calling a "field" is officially a ring of algebraic integers.</p>
<p>There is lots known about unique factorization and what the primes are in the rings of integers in the fields <span class="math-container">$\mathbb{Q}(\sqrt{d})$</span>.</p>
<p>See the wikipedia page <a href="https://en.wikipedia.org/wiki/Quadratic_field" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quadratic_field</a> and this stackexchange question: <a href="https://math.stackexchange.com/questions/3751518/list-of-quadratic-field-with-the-ufd-property">List of quadratic field with the UFD property</a></p>
|
1,999,643 | <p>The title states my question: what aspect of closed sets makes them attractive for optimization?</p>
| the_candyman | 51,370 | <p>Consider these problems:</p>
<p>$$\begin{cases}
\max x \\
\text{s.t.}\\
x \in (0,1)
\end{cases},
\begin{cases}
\max x \\
\text{s.t.}\\
x \in [0,1]
\end{cases}.$$</p>
<p>The first has no solution, while the second one does (namely, $x=1$).</p>
<hr>
<p>Summarizing, it is better to have a closed set, since sometimes the optimal value is on the frontier of the set, and a closed set includes its frontier. In this way, you don't lose solutions.</p>
|
925,941 | <p>Solve:</p>
<p>$xy=-30$<br>
$x+y=13$</p>
<p>{15, -2} is a particular solution, but how would I know if it is the only solution, or what would be the way to solve this without "guessing"?</p>
| Kim Jong Un | 136,641 | <p>Start with
$$
x+y=13 \iff y=13-x
$$
so that
$$
-30=xy=x(13-x)=13x-x^2 \iff x^2-13x-30=0
$$
which solves to give $x=-2$ and $x=15$. The corresponding $y$ values are $15$ and $-2$.</p>
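<p>(A quick numerical confirmation, not part of the original answer: the two roots of $x^2-13x-30=0$ do satisfy both equations.)</p>

```python
import numpy as np

roots = np.roots([1, -13, -30])  # roots of x^2 - 13x - 30
x, y = roots

# (x, y) solves the original system in either order
assert np.isclose(x * y, -30)
assert np.isclose(x + y, 13)
```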
|
925,941 | <p>Solve:</p>
<p>$xy=-30$<br>
$x+y=13$</p>
<p>{15, -2} is a particular solution, but how would I know if it is the only solution, or what would be the way to solve this without "guessing"?</p>
| Community | -1 | <p>The solutions of the system of equations $xy=p$ and $x+y=s$ are the roots of the quadratic equation:</p>
<p>$$x^2-sx+p=0$$
so in your case we solve
$$x^2-13x-30=0$$
using the discriminant:
$$\Delta=13^2+4\times 30=17^2$$
so the two roots are
$$x_1=\frac{13+17}{2}=15\quad;\quad x_2=\frac{13-17}{2}=-2$$</p>
|
6,990 | <p>The Fourier transform of a periodic function $f$ yields an $\ell^2$-sequence of the function's coefficients when it is represented as a countable linear combination of $\sin$ and $\cos$ functions.</p>
<ul>
<li><p>To what extent can this be generalized to other countable sets of functions? For example, if we keep our inner product, can we obtain another Schauder basis by an appropriate transform? What can we say about the bases in general?</p></li>
<li><p>Does this generalize to other function spaces, say, periodic functions with one singularity?</p></li>
<li><p>What do these thoughts lead to when considering the continuous FT?</p></li>
</ul>
| Akhil Mathew | 344 | <p>You need the orthogonality condition to get such an integral representation for the coefficients; otherwise it would probably be more complicated. </p>
<p>The Fourier series of any $L^2$ function converges not only in the norm (which follows from the fact that $\{e^{inx}\}$ is an orthonormal basis) but also almost everywhere (the <a href="http://en.wikipedia.org/wiki/Carleson-Hunt_theorem" rel="nofollow">Carleson-Hunt theorem</a>). Both these assertions are also true in any $L^p,p>1$ but at least the first one requires different methods than Hilbert space ones. In $L^1$, by contrast, a function's Fourier series may diverge everywhere.</p>
<p>There are many conditions that describe when a function's Fourier series converges to the appropriate value at a given point (e.g. having a derivative at that point should be sufficient). Simple continuity is insufficient; one can construct continuous functions whose Fourier series diverge at a dense $G_{\delta}$. The problem arises because the Dirichlet kernels that one convolves with the given function to get the Fourier partial sums at each point are not bounded in $L^1$ (while by contrast, the Fejer kernels or Abel kernels related respectively to Cesaro and Abel summation are, and consequently it is much easier to show that the Fourier series of an $L^1$ function can be summed to the appropriate value using either of those methods). Zygmund's book <em>Trigonometric Series</em> contains plenty of such results. </p>
<p>There is a version of the Carleson-Hunt theorem for the Fourier transform as well.</p>
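<p>(The contrast between the Dirichlet and Fejér kernels mentioned above can be seen numerically — a sketch, not part of the original answer: the $L^1$ norms of the Dirichlet kernels grow like $\log n$, while the Fejér kernels, being nonnegative with integral $1$, have constant $L^1$ norm.)</p>

```python
import numpy as np

def kernel_L1(n, kind, m=400_000):
    # midpoint rule for (1/2pi) * integral over (-pi, pi) of |K_n(t)| dt;
    # midpoints avoid the removable singularity at t = 0
    t = -np.pi + (np.arange(m) + 0.5) * (2 * np.pi / m)
    if kind == "dirichlet":
        K = np.sin((n + 0.5) * t) / np.sin(t / 2)
    else:  # Fejer
        K = (np.sin((n + 1) * t / 2) / np.sin(t / 2)) ** 2 / (n + 1)
    return np.mean(np.abs(K))

d10, d100, d1000 = (kernel_L1(n, "dirichlet") for n in (10, 100, 1000))
assert d10 < d100 < d1000                          # Lebesgue constants grow without bound
assert abs(kernel_L1(1000, "fejer") - 1) < 1e-3    # Fejer L^1 norm stays at 1
```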
|
2,083,475 | <p>This is the conic $$x^2+6xy+y^2+2x+y+\frac{1}{2}=0$$
the matrices associated with the conic are:
$$
A'=\left(\begin{array}{cccc}
\frac{1}{2} & 1 & \frac{1}{2} \\
1 & 1 & 3 \\
\frac{1}{2} & 3 & 1
\end{array}\right),
$$</p>
<p>$$
A=\left(\begin{array}{cccc}
1 & 3 \\
3 & 1
\end{array}\right),
$$</p>
<p>Its characteristic polynomial is: $p_A(\lambda) = \lambda^2-2\lambda-8$<br>
$A$ has eigenvalues of opposite sign ($\lambda = 4$, $\lambda = -2$), so it's a hyperbola.
Then I found that the center of the conic is: $(-\frac{1}{16}, -\frac{5}{16})$<br>
Then with the eigenvalues I found the two lines passing through the center:
$$4x-4y-1=0$$
$$8x+8y+3=0$$</p>
<p><a href="https://i.stack.imgur.com/OlKbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OlKbQ.png" alt="enter image description here"></a></p>
<p>Now I want to find the foci and the asymptotes, but I have no idea how to do it. Is there a way to find these two things from the data I have now, or do I need the canonical form of the conic? Thanks</p>
| amd | 265,466 | <p>For a hyperbola in standard position, $\tan\alpha=\frac b a$, where $\alpha$ is the angle that the asymptotes make with the transverse axis. If you transform the general equation into standard form, you’ll find that $a^2=-|A'|/\lambda_2|A|$ and $b^2=|A'|/\lambda_1|A|$, where $\lambda_1$ is the positive eigenvalue, so $\tan\alpha=\sqrt{-\lambda_2/\lambda_1}$. (Note that $|A|=\lambda_1\lambda_2$, so you don’t have to compute that separately.) With this value in hand, you can then use the formulas for tangents of the sum and difference of angles to find the slopes of the asymptotes. </p>
<p>We have $\tan\alpha=\sqrt{2/4}=1/\sqrt2$, and the slope of the transverse axis is $-1$, so for the slopes of the asymptotes we get $${-1+1/\sqrt2\over1-(-1)\cdot1/\sqrt2}=-3+2\sqrt2$$ and $${-1-1/\sqrt2\over1+(-1)\cdot1/\sqrt2}=-3-2\sqrt2.$$ From this, we have $$y+\frac5{16}=(-3\pm2\sqrt2)\left(x+\frac1{16}\right)$$ for equations of the asymptotes, which you can then rearrange as you see fit. </p>
<p>For the foci, we can use the fact that the eccentricity of a hyperbola is $e=\sqrt{1+b^2/a^2}=\sqrt{1-\lambda_2/\lambda_1}$ and that the distance from the center to the focus is $f=ea$. For this hyperbola, we have $$e=\sqrt{1+\frac24}=\frac{\sqrt6}2 \\ a^2=-{|A'|\over\lambda_2|A|}=-{-9/4\over(-2)(-8)}=\frac9{64}$$ so $$f=\frac{\sqrt6}2\cdot\frac38={3\sqrt6\over16}$$ which means that the foci are at $$\left(-\frac1{16},-\frac5{16}\right)\pm{3\sqrt6\over16}\left(\frac1{\sqrt2},-\frac1{\sqrt2}\right),$$ approximately $(0.262,-0.637)$ and $(-0.387,0.012)$.</p>
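<p>(A numerical sanity check of these formulas — not part of the original answer: sampling points on the conic and testing the focal property $\lvert d_1-d_2\rvert = 2a$ of a hyperbola, with the foci and $a = 3/8$ computed above.)</p>

```python
import numpy as np

center = np.array([-1 / 16, -5 / 16])
f = 3 * np.sqrt(6) / 16                 # center-to-focus distance
u = np.array([1.0, -1.0]) / np.sqrt(2)  # unit vector along the transverse axis
F1, F2 = center + f * u, center - f * u
a = 3 / 8                               # semi-transverse axis, a^2 = 9/64

# points on x^2 + 6xy + y^2 + 2x + y + 1/2 = 0: fix x, solve the quadratic in y
for x in np.linspace(0.5, 2.0, 5):
    for y in np.roots([1, 6 * x + 1, x**2 + 2 * x + 0.5]):
        P = np.array([x, y])
        d1, d2 = np.linalg.norm(P - F1), np.linalg.norm(P - F2)
        assert np.isclose(abs(d1 - d2), 2 * a)
```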
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| user02138 | 2,720 | <p>\begin{eqnarray}
\zeta(0) = \sum_{n \geq 1} 1 = -\frac{1}{2}
\end{eqnarray}</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Pedro | 23,350 | <p>Considering the principal branches</p>
<p>$$i^i = \exp\left(-\frac{\pi}{2}\right)$$</p>
<p>$$\root i \of i = \exp\left(\frac{\pi}{2}\right) $$</p>
<p>And
$$ \frac{4}{\pi } = \displaystyle 1 + \frac{1}{{3 +\displaystyle \frac{{{2^2}}}{{5 + \displaystyle\frac{{{3^2}}}{{7 +\displaystyle \frac{{{4^2}}}{{9 +\displaystyle \frac{{{n^2}}}{{\left( {2n + 1} \right) + \cdots }}}}}}}}}} $$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Community | -1 | <p>The Cayley-Hamilton theorem:</p>
<p>If $A \in \mathbb{R}^{n \times n}$ and $I_{n} \in \mathbb{R}^{n \times n}$ is the identity matrix, then the characteristic polynomial of $A$ is $p(\lambda) = \det(\lambda I_n - A)$. Then the Cayley-Hamilton theorem can be obtained by "<strong><em>substituting</em></strong>" $\lambda = A$, since $$p(A) = \det(AI_n-A) = \det(0) = 0$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Diego Silvera | 41,495 | <p>The following number is prime</p>
<blockquote>
<p>$p = 785963102379428822376694789446897396207498568951$</p>
</blockquote>
<p>and $p$ in base 16 is</p>
<blockquote>
<p>$89ABCDEF012345672718281831415926141424F7$</p>
</blockquote>
<p>which includes counting in hexadecimal, and digits of $e$, $\pi$, and $\sqrt{2}$.</p>
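<p>(Both claims are easy to check by machine — a sketch, not part of the original answer, using SymPy's primality test.)</p>

```python
from sympy import isprime

p = 785963102379428822376694789446897396207498568951
assert isprime(p)
# its hexadecimal digits: counting 89ABCDEF01234567, then digits of e, pi, sqrt(2)
assert format(p, 'X') == "89ABCDEF012345672718281831415926141424F7"
```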
<p>Do you think this is surprising or not?</p>
<p>$$11 \times 11 = 121$$
$$111 \times 111 = 12321$$
$$1111 \times 1111 = 1234321$$
$$11111 \times 11111 = 123454321$$
$$\vdots$$</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| 2'5 9'2 | 11,123 | <p>\begin{align}\frac{64}{16}&=\frac{6\!\!/\,4}{16\!\!/}\\&=\frac41\\&=4\end{align}</p>
<p>For more examples of these <em>weird fractions</em>, see "How Weird Are Weird Fractions?",
Ryan Stuffelbeam, <strong>The College Mathematics Journal</strong>, Vol. 44, No. 3 (May 2013), pp. 202-209.</p>
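<p>(The two-digit <em>weird fractions</em> of this kind can be enumerated by brute force — a sketch, not part of the original answer: fractions where "cancelling" the shared digit happens to give the correct value.)</p>

```python
from fractions import Fraction

# fractions (10a+b)/(10b+c) that equal a/c after "cancelling" the digit b
weird = [(10 * a + b, 10 * b + c)
         for a in range(1, 10) for b in range(1, 10) for c in range(1, 10)
         if 10 * a + b != 10 * b + c
         and Fraction(10 * a + b, 10 * b + c) == Fraction(a, c)]

assert weird == [(16, 64), (19, 95), (26, 65), (49, 98)]
```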
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Felix Marin | 85,343 | <p>\begin{align}
E &=
\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}}
=
mc^{2}
+
\left[\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}\right]
\\[3mm]&=
mc^{2}
+
{\left(pc\right)^{2}
\over
\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}}
=
mc^{2}
+
{p^{2}/2m
\over
1 + {\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2} \over 2mc^{2}}}
\\[3mm]&=
mc^{2}
+
{p^{2}/2m
\over
1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}}}
=
mc^{2}
+
{p^{2}/2m
\over 1 +
{p^{2}/2m \over
1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}}}}
\end{align}</p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Shivam Patel | 95,509 | <p>Here's an interesting one again:<br>
$3435=3^3+4^4+3^3+5^5$</p>
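<p>(A short search — not part of the original answer — confirms the identity and shows that, below $10^5$ and with a digit $0$ contributing $0$, $3435$ is the only number greater than $1$ equal to the sum of its digits raised to their own powers.)</p>

```python
# verify 3435 = 3^3 + 4^4 + 3^3 + 5^5 and search for other examples
assert 3435 == 3**3 + 4**4 + 3**3 + 5**5

hits = [n for n in range(2, 10**5)
        if n == sum(d**d for d in map(int, str(n)) if d != 0)]
assert hits == [3435]
```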
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Gerry Myerson | 8,269 | <p>$$\lim_{\omega\to\infty}3=8$$ The "proof" is by rotation through $\pi/2$. More of a joke than an identity, I suppose. </p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Neves | 1,747 | <p><span class="math-container">$$
\dfrac{1}{2}=\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\dfrac{\dfrac{1}{2}}{\dfrac{1}{2}+\cdots}}}}}}
$$</span></p>
<p>and more generally we have
<span class="math-container">$$
\dfrac{1}{n+1}=\frac{\dfrac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\dfrac{\dfrac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\dfrac{\dfrac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\dfrac{\dfrac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\dfrac{\dfrac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\dfrac{\frac{1}{n(n+1)}}{\dfrac{1}{n(n+1)}+\ddots}}}}}}
$$</span></p>
|
8,814 | <p>Here is a funny exercise
$$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$
(If you prove it don't publish it here please).
Do you have similar examples?</p>
| Neves | 1,747 | <p>$$
\frac{\pi}{2}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{2^{2k}}
$$
$$
\frac{\pi}{3}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{6^{2k}}
$$
where
$
\eta(n)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{n}}
$</p>
|
2,101,059 | <p>Let $d,n$ be positive integers.</p>
<p>$\int{y(1-y^d)^n}dy$</p>
<p>Is it possible to solve it? If you know the method, please teach me.</p>
| Eff | 112,061 | <p><strong>Hint.</strong> Expand the term $(1-y^d)^n$ using the binomial theorem, and multiply by $y$. Then you will simply have a polynomial, and polynomials are easy to integrate.</p>
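<p>(A quick check of this hint with SymPy for one choice of $d$ and $n$ — the values are illustrative, not from the original post.)</p>

```python
import sympy as sp

y = sp.symbols('y')
d, n = 3, 4  # illustrative small values

integrand = y * (1 - y**d)**n
# the binomial expansion turns the integrand into a polynomial,
# which integrates term by term
F = sp.integrate(sp.expand(integrand), y)
# differentiating the term-by-term antiderivative recovers the integrand
assert sp.simplify(sp.diff(F, y) - integrand) == 0
```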
|
3,687,484 | <p>Here, <span class="math-container">$H^1(\mathbb{R}^2)$</span> is the standard Sobolev spaces for <span class="math-container">$L^2(\mathbb{R}^2)$</span> functions whose weak derivative belongs to <span class="math-container">$L^2(\mathbb{R}^2).$</span></p>
<p>My question in the title comes from calculus of variations. It is usually the case that a minimizer of some given energy functional defined on <span class="math-container">$H^1(\mathbb{R}^2)$</span> is known to be continuous (or even <span class="math-container">$C^2(\mathbb{R}^2))$</span>. I want to know the behavior of this minimizer at infinity.</p>
<p>If <span class="math-container">$u \in L^2(\mathbb{R}^2),$</span> then it is known that <span class="math-container">$\liminf_{|x| \to \infty} u(x) = 0.$</span> But one cannot say that <span class="math-container">$\limsup_{|x| \to \infty} u(x) = 0,$</span> since counterexamples exist.</p>
<p>If we assume <span class="math-container">$u \in H^{1+\epsilon}(\mathbb{R}^2)$</span> for some <span class="math-container">$\epsilon > 0,$</span> then the classical Morrey inequality implies uniform Hölder continuity of <span class="math-container">$u.$</span> So we can conclude <span class="math-container">$\limsup_{|x| \to \infty} u(x) = 0$</span> via proof by contradiction.</p>
<p>So my problem is about the case <span class="math-container">$\epsilon = 0.$</span> That is, when
<span class="math-container">$$u \in H^1(\mathbb{R}^2) \cap C(\mathbb{R}^2),$$</span>
is it true that
<span class="math-container">$$\limsup_{|x| \to \infty} u(x) = 0?$$</span></p>
<p>Using proof by contradiction, I think this should be true. Here is my non-rigorous argument.</p>
<blockquote>
<p>Assume not, then there are <span class="math-container">$\epsilon > 0$</span> and <span class="math-container">$x_n \in \mathbb{R}^2$</span> such that <span class="math-container">$|x_n| \to \infty$</span> and <span class="math-container">$|u(x_n)| \geq 2\epsilon.$</span> By the continuity, there is <span class="math-container">$r_n > 0$</span> such that <span class="math-container">$|u(x)| \geq \epsilon$</span> for all <span class="math-container">$x \in B(x_n, r_n).$</span>
Since <span class="math-container">$u \in L^2, r_n \to 0$</span> as <span class="math-container">$n \to \infty.$</span>
<strong>I think non-rigorously that</strong>
<span class="math-container">$$ \int_{B(x_n, r_n)} |\nabla u|^2 \gtrsim \int_{B(x_n, r_n)} (\frac{\epsilon}{r_n})^2 = \epsilon^2$$</span>
for large <span class="math-container">$n$</span> and
<span class="math-container">$$
\int_{\mathbb{R}^2} |\nabla u|^2 \geq \sum_{n\,\text{is large}} \int_{B(x_n, r_n)} |\nabla u|^2.
$$</span>
So they imply a contradiction <span class="math-container">$\int_{\mathbb{R}^2} |\nabla u|^2 = \infty$</span></p>
</blockquote>
<p>I appreciate any discussion.</p>
<p><strong>Edit</strong>: How about <span class="math-container">$u$</span> is additionally assumed to be <span class="math-container">$C^1(\mathbb{R}^2)$</span> or even <span class="math-container">$C^2(\mathbb{R}^2)?$</span> Is there any proof or counterexample?</p>
| Claudio Moneo | 454,365 | <p>It does not hold for general functions <span class="math-container">$u \in H^{1}(\mathbb{R}^{2})$</span>, even if they are assumed to be smooth.
The reason lies in the following lemma:</p>
<p>Let <span class="math-container">$N \ge 2$</span>. Then for any <span class="math-container">$x_{k} \in \mathbb{R}^{N}, \; \epsilon>0$</span> and <span class="math-container">$\delta>0$</span> there exists a radial smooth function <span class="math-container">$u_{k}$</span> such that:</p>
<ol>
<li><p><span class="math-container">$u_{k}(x_{k})=1$</span></p>
</li>
<li><p><span class="math-container">$u_{k}(x)=0$</span> for <span class="math-container">$|x-x_{k}|>\delta$</span></p>
</li>
<li><p><span class="math-container">$\int_{\mathbb{R}^{n}} |\nabla{u_{k}}|^2 \le \epsilon$</span></p>
</li>
</ol>
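<p>(A standard construction behind such a lemma — a sketch under my own assumptions, not taken from the original answer — is the logarithmic cutoff, reflecting the fact that points in $\mathbb{R}^2$ have zero capacity. For $0<\rho<\delta$ set)</p>

```latex
u_k(x) =
\begin{cases}
1, & |x-x_k|\le \rho,\\
\dfrac{\log\bigl(\delta/|x-x_k|\bigr)}{\log(\delta/\rho)}, & \rho<|x-x_k|<\delta,\\
0, & |x-x_k|\ge \delta,
\end{cases}
\qquad
\int_{\mathbb{R}^2}|\nabla u_k|^2\,dx=\frac{2\pi}{\log(\delta/\rho)}
\;\xrightarrow[\rho\to 0]{}\;0.
```

<p>This $u_k$ is only Lipschitz; mollifying it gives the smooth function the lemma asks for, and taking $\rho$ small enough pushes the Dirichlet energy below any prescribed $\epsilon$.</p>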
<p>Now choose a sequence of points <span class="math-container">$x_{k}$</span> going to infinity, choose corresponding
<span class="math-container">$\delta_{n}=\frac{1}{n^{2}}$</span> and <span class="math-container">$\epsilon_{n}=\frac{1}{n^{2}}$</span>.</p>
<p>Let <span class="math-container">$u=\sum_{n} u_{n}$</span>.</p>
<p>Then <span class="math-container">$u \in H^{1}(\mathbb{R}^{2})$</span>.</p>
<p>But for any <span class="math-container">$r>0$</span> we find some point <span class="math-container">$x_{r}$</span> with <span class="math-container">$|x_{r}|>r$</span> such that <span class="math-container">$u(x_{r})=1$</span>.</p>
|
39,828 | <p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that.</p>
<p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p>
<ul>
<li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li>
<li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li>
<li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li>
</ul>
<p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p>
<p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. You can think of a natural generalisation, which you personally consider interesting.</p>
<blockquote>
<p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p>
</blockquote>
<p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p>
<p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p>
<p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. But one broad question behind my specific one is</p>
<blockquote>
<p>How much would you subscribe to the statement that
EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p>
</blockquote>
<p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p>
<hr>
<p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p>
<hr>
<p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
| Minhyong Kim | 1,826 | <p>Dear Alex,</p>
<p>It seems to me that the general question in the background of your query on algebra really is the better one to focus on, in that we can forget about irrelevant details. That is, as you've mentioned, one could be asking the question about motivation and decision in any kind of mathematics, or maybe even life in general. In that form, I can't see much useful to write other than the usual cliches: there are safer investments and riskier ones; most people stick to the former generically with occasional dabbling in the latter, and so on. This, I think, is true regardless of your status. Of course, going back to the corny financial analogy that Peter has kindly referred to, just <em>how</em> risky an investment is depends on how much money you have in the bank. We each just make decisions in as informed a manner as we can.</p>
<p>Having said this, I rather like the following example: <a href="http://en.wikipedia.org/wiki/Kac%E2%80%93Moody_algebra">Kac-Moody algebras</a> could be considered 'idle' generalizations of finite-dimensional simple Lie algebras. One considers the construction of simple Lie algebras by generators and relations starting from a Cartan matrix. When a positive definiteness condition is dropped from the matrix, one arrives at general Kac-Moody algebras. I'm far from knowledgeable on these things, but I have the impression that the initial definition by Kac and Moody in 1968 really was somewhat just for the sake of it. Perhaps indeed, the main (implicit) justification was that the usual Lie algebras were such successful creatures. Other contributors here can describe with far more fluency than I just how dramatically the situation changed afterwards, accelerating especially in the 80's, as a consequence of the interaction with conformal field theory and string theory. But many of the real experts here seem to be rather young and perhaps regard vertex operator algebras and the like as being just so much bread and butter. However, when I started graduate school in the 1980's, this story of Kac-Moody algebras was still something of a marvel.
There must be at least a few other cases involving a rise of comparable magnitude. </p>
<p>Meanwhile, I do hope some expert will comment on this. I fear somewhat that my knowledge of this story is a bit of the fairy-tale version.</p>
<p>Added: In case someone knowledgeable reads this, it would also be nice to get a comment about further generalizations of Kac-Moody algebras. My vague memory is that some naive generalizations have not done so well so far, although I'm not sure what they are. Even if one believes it to be the purview of masters, it's still interesting to ask if there is a pattern to the kind of generalization that ends up being fruitful. Interesting, but probably hopeless.</p>
<p>Maybe I will add one more personal comment, in case it sheds some darkness on the question. I switched between several supervisors while working towards my Ph.D. The longest I stayed was with Igor Frenkel, a well-known expert on many structures of the Kac-Moody type. I received several personal tutorials on vertex operator algebras, where Frenkel expressed his strong belief that these were really fundamental structures, 'certainly more so than, say, Jordan algebras.' I stubbornly refused to share his faith, foolishly, as it turns out (so far).</p>
<p>Added again:</p>
<p>In view of Andrew L.'s question I thought I'd add a few more clarifying remarks.</p>
<p>I explained in the comment below what I meant with the story about vertex operator algebras.
Meanwhile, I can't genuinely regret the decision not to work on them because I quite
like the mathematics I do now, at least in my own small way. So I think what I had in mind was just
the platitude that most decisions in mathematics,
like those of life in general, are mixed: you might gain
some things and lose others.</p>
<p>To return briefly to the original question, maybe I do have some practical
remarks to add. It's obvious stuff, but no one seems to have written it so far on this page.
Of course, I'm not in a position to give anyone advice, and your question didn't really ask for it,
so you should read this with the usual reservations. (I feel, however, that what I write <em>is</em> an
answer to the original question, in some way.)</p>
<p>If you have a strong feeling about a structure or an idea, of course
keep thinking about it. But it may take a long time for your ideas
to mature, so keep other things going as well, enough to build up
a decent publication list. The part of work that belongs
to quotidian maintenance is part of the trade,
and probably a helpful routine for most people. If you go about it sensibly, it's really
not that hard either. As for the truly original
idea, I suspect it will be of interest to many people at some point, if
you keep at it long enough. Maybe the real difference between
starting mathematicians and established ones is the length of time
they can afford to invest in a strange idea before feeling
like they're running out of money. But by keeping a suitably interesting
business going on the side, even a young person can afford
to dream. Again, I suppose all this is obvious to you and many other people.
But it still is easy to forget in the helter-skelter of life.</p>
<p>By the way, I object a bit to how several people have described this question
of community interest as a two-state affair. Obviously, there are many different
degrees of interest, even in the work of very famous people. </p>
|
356,940 | <p>According to Wolfie:</p>
<p>$2^{-1} \bmod 5 = 3$</p>
<p><a href="http://www.wolframalpha.com/input/?i=2%5E-1+mod+5" rel="nofollow">http://www.wolframalpha.com/input/?i=2%5E-1+mod+5</a></p>
<p>Why is that?</p>
| André Nicolas | 6,312 | <p>The result says that the (multiplicative) inverse of $2$ modulo $5$ is $3$. This is another way of saying that $3\cdot 2= 1\bmod{5}$.</p>
<p>In general, if $p$ is a prime, and $a$ is a number between $1$ and $p-1$, we can say that $a^{-1}\bmod{p}$ is the number $b$, between $1$ and $p-1$, such that $ba=1\bmod{p}$.</p>
<p>So $b$, in the "mod $p$" arithmetic, behaves structurally like reciprocal does in ordinary arithmetic. </p>
<p>More generally, let $m$ be a positive integer, and let $a$ be a number relatively prime to $m$ and between $1$ and $m$. Then there is a unique $b$ between $1$ and $m$ such that $ba\bmod{m}=1$.</p>
<p>This number $b$ can be called $a^{-1}\bmod{m}$. The notation $a^{-1}$ comes from Group Theory. </p>
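<p>A short sketch of how such an inverse is computed in practice, via the extended Euclidean algorithm (the function names below are my own, just for illustration):</p>

```python
def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # the b with b*a == 1 (mod m); it exists iff gcd(a, m) == 1
    g, x, _ = egcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} is not invertible mod {m}")
    return x % m

print(mod_inverse(2, 5))  # 3, since 3*2 = 6 = 1 mod 5
```

<p>(In Python 3.8+ the built-in <code>pow(2, -1, 5)</code> returns the same value.)</p>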
|
1,105,352 | <p>Here are a few definition from my workbook:</p>
<p>Let $a\in\Bbb{R}\cup\{+\infty, -\infty\}$, $D\subset\Bbb{R}$. Then point $a$ is a limit point of $D$ if and only if there exists a sequence $\{x_n\}_{n \geq n_0}$ with terms in set $D - \{a\}$ such that $x_n \rightarrow a$.</p>
<p>And now we have a limit of a function $f: D \rightarrow \Bbb{R}$ where $a$ is a limit point of $D$ and $g \in \Bbb{R} \cup \{+\infty,-\infty\}$ defined as follows:</p>
<p>$g$ is a limit of $f$ at point $a$ if and only if for every sequence $\{x_n\}_{n \geq n_0}$ with terms in $D - \{a\}$ such that $x_n \rightarrow a$ we have $f(x_n)\rightarrow g$.</p>
<p>Now let's analyze function $f(x)=2$ defined at set $D=[0;1]$. We're analyzing point $a=1$. Clearly, we haven't got any right-sided limit there, as there doesn't exist any $d\in D$ which is greater than $1$. But according to these definitions, the limit at $1$ still exists and is equal to $2$ as for every sequence $x_n$ such that $x_n \rightarrow 1$ we have $f(x_n)\rightarrow 2$. Am I interpreting these definitions correctly?</p>
| hmakholm left over Monica | 14,366 | <p>Depending on what situation you're in, it can either be convenient to say that there is a limit in such cases, or to insist that only the one-sided limit exists. Neither choice is inherently wrong, but of course it pays to be consistent in our use of words. (But beware that textbooks are not always consistent in this respect, especially between each other, so you may find some that assume one meaning and some that assume the other -- and they may not be explicit about their assumptions if the text <em>uses</em> the limit concept as a prerequisite rather than teach it from scratch).</p>
<p>The <em>most common</em> choice is to say that a limit does exist in this case, such that for example $\lim_{x\to 0}\sqrt x = 0$ without needing to specify a one-sided limit from the right. Formally we would take our definition of limit to be "for all $\varepsilon>0$ there is a $\delta>0$ such that for every $x$ <em>in the domain of $f$</em> with $0<|x-x_0|<\delta$ it holds that such-and-such".</p>
<p>This choice has the pragmatic advantage that it is now easy to express the <em>other</em> concept when that is what we need -- we can simply say something like "assume that $f$ is defined in a punctured neighborhood of $x_0$ and $\lim_{x\to x_0}f(x)=L$" to get the more restrictive concept.</p>
<p>In contrast if the default meaning of "limit" included a requirement of $f$ always being defined close to $x_0$, it would be quite cumbersome to express the more lenient sense of limit when <em>that</em> is what is needed.</p>
|
2,378,717 | <p>I have wondered many times how the editors of mathematical journals know that the content of a paper or article submitted by an author is original, that is, that the main content of its calculations and reasoning is unpublished.</p>
<blockquote>
<p><strong>Question.</strong> How do editors of mathematical journals know that a submitted paper is original, that is, unpublished? <strong>Many thanks.</strong></p>
</blockquote>
<p>I would like to know out of curiosity, because I imagine that the professors working for mathematical journals are experts in some field of mathematics. Then one day they receive a submission, a paper or a remark with perhaps several results, and they have to evaluate whether it was published or known previously.
Additionally, I imagine that these professors or editors have tools for searching the literature for this purpose, and maybe they are in contact with other colleagues who have historical memory of what results, and of what kind, have been published. Am I wrong in these beliefs? How do they recognize the originality of such mathematical content?</p>
| thetha | 283,262 | <p>Usually they have paid staff and referees: professors who work in this field. They send your paper to them, and the referees review it. Based on their review, it will be published or not. How thoroughly your work is checked depends on the quality of the journal. </p>
|
61,659 | <p>How to find: $$\lim_{x \to \frac{\pi}{2}} \frac{\tan{2x}}{x - \frac{\pi}{2}}$$ I know that $\tan(2\theta)=\frac{2\tan\theta}{1-\tan^{2}\theta}$ but don't know how to apply it here.</p>
| Beni Bogosel | 7,327 | <p><a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">L'Hôpital's rule</a> works here: the indeterminate form is $0/0$, and differentiating numerator and denominator gives $$\lim_{x \to \frac{\pi}{2}} \frac{2\sec^{2} 2x}{1} = 2\sec^{2}\pi = 2.$$ </p>
|
3,276,264 | <h1>My attempt</h1>
<p>Based on the sine rule and the graph of <span class="math-container">$\sin A = k a$</span> (where <span class="math-container">$k$</span> is a constant) in interval <span class="math-container">$(0,\pi)$</span>,
increasing <span class="math-container">$a$</span> up to <span class="math-container">$1/k$</span> will either</p>
<ul>
<li>increase <span class="math-container">$A$</span> up to <span class="math-container">$90^\circ$</span>.</li>
<li>decrease <span class="math-container">$A$</span> up to <span class="math-container">$90^\circ$</span>.</li>
</ul>
<p>So I cannot conclude that increasing <span class="math-container">$a$</span> will increase <span class="math-container">$A$</span>.</p>
<p>Now I use the cosine rule (it is promising because the cosine is decreasing in the given interval).</p>
<p><span class="math-container">\begin{align}
A &= \cos^{-1}\left(\frac{b^2+c^2-a^2}{2bc}\right)\\
B &= \cos^{-1}\left(\frac{a^2+c^2-b^2}{2ac}\right)\\
C &= \cos^{-1}\left(\frac{a^2+b^2-c^2}{2ab}\right)\\
\end{align}</span></p>
<p>It is hard to show that <span class="math-container">$0^\circ<A\leq B\leq C<180^\circ$</span> for any <span class="math-container">$\triangle ABC$</span> with <span class="math-container">$0<a\leq b\leq c$</span>. Could you show it?</p>
<p>It means that I need to show that </p>
<p><span class="math-container">$$
-1<\frac{a^2+b^2-c^2}{2ab}\leq \frac{a^2+c^2-b^2}{2ac} \leq \frac{b^2+c^2-a^2}{2bc}<1
$$</span></p>
<p>for <span class="math-container">$0<a\leq b\leq c$</span>.</p>
| Thomas Andrews | 7,933 | <p>If <span class="math-container">$a^2+b^2=c^2,$</span> with <span class="math-container">$a,b,c$</span> positive, you'd have: <span class="math-container">$c>a,b>0$</span> and thus <span class="math-container">$ca^2>a^3$</span> and <span class="math-container">$cb^2>b^3.$</span> So you get:</p>
<p><span class="math-container">$$c^3=c\cdot c^2=c(a^2+b^2)=ca^2+cb^2>a^3+b^3.$$</span></p>
<p>So <span class="math-container">$c^3>a^3+b^3.$</span></p>
<hr>
<p>You can prove more generally for any triple <span class="math-container">$(a,b,c)$</span> of positive reals, there is at most one positive <span class="math-container">$n$</span> such that <span class="math-container">$a^n+b^n=c^n.$</span> If <span class="math-container">$a^n+b^n=c^n$</span> then <span class="math-container">$c>a,b>0$</span> and for <span class="math-container">$m>n$</span> you have:</p>
<p><span class="math-container">$$\begin{align}c^m&=c^{m-n}\cdot c^n
\\&=c^{m-n}(a^n+b^n)\\&=c^{m-n}a^n+c^{m-n}b^n\\
&>a^{m-n}a^n+b^{m-n}b^n\\
&=a^m+b^m.\end{align}$$</span></p>
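<p>The uniqueness claim is easy to see numerically as well: for <span class="math-container">$0 < a, b < c$</span> the function <span class="math-container">$f(n) = (a/c)^n + (b/c)^n$</span> strictly decreases from <span class="math-container">$2$</span> at <span class="math-container">$n = 0$</span> towards <span class="math-container">$0$</span>, so <span class="math-container">$f(n) = 1$</span> has exactly one root, which bisection finds. A sketch (the function name is mine):</p>

```python
def exponent_for(a, b, c, tol=1e-12):
    # the unique n > 0 with a**n + b**n == c**n, assuming 0 < a, b < c;
    # f(n) = (a/c)**n + (b/c)**n decreases strictly from 2 to 0
    f = lambda n: (a / c) ** n + (b / c) ** n - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:              # bracket the root
        hi *= 2
    while hi - lo > tol:          # bisect
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(exponent_for(3, 4, 5), 6))  # 2.0, the Pythagorean case
```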
|
225,481 | <p>I would like to write a program that gives me the Klein Gordon Equations given a metric. I will explain.</p>
<p>My code is in the following:</p>
<p><strong>I) Standard Quantities</strong></p>
<p>I have no doubts about this first part; it is <a href="http://web.physics.ucsb.edu/%7Egravitybook/mathematica.html" rel="nofollow noreferrer">given by Hartle's notebooks</a>.</p>
<pre><code>Clear[coord, metric, inversemetric, affine, riemann, ricci,
scalar, einstein, t, r, θ, ϕ]
n = 4;
coord = {t, r, θ, ϕ};
metric = {{-(1 - ((2*m)/(r))), 0, 0, 0},
{0, (1)/(1 - ((2*m)/(r))), 0, 0},
{0, 0, r^2, 0},
{0, 0, 0, r^2*(Sin[θ]*Sin[θ])}};
inversemetric = Simplify[Inverse[metric]];
Det[metric]
</code></pre>
<p><strong>II) MY TRY</strong></p>
<p>I wrote the components by hand:</p>
<pre><code>KG00 = FullSimplify[((1)/(Sqrt[-Det[metric]]))*
D[(Sqrt[-Det[metric]])*(inversemetric[[1, 1]])*
D[Ξ[t, r, θ, ϕ], t], t]];
KG11 = FullSimplify[((1)/(Sqrt[-Det[metric]]))*
D[(Sqrt[-Det[metric]])*(inversemetric[[2, 2]])*
D[Ξ[t, r, θ, ϕ], r], r]];
KG22 = FullSimplify[((1)/(Sqrt[-Det[metric]]))*
D[(Sqrt[-Det[metric]])*(inversemetric[[3, 3]])*
D[Ξ[t, r, θ, ϕ], θ], θ]];
KG33 = FullSimplify[((1)/(Sqrt[-Det[metric]]))*
D[(Sqrt[-Det[metric]])*(inversemetric[[4, 4]])*
D[Ξ[t, r, θ, ϕ], ϕ], ϕ]];
KG00 + KG11 + KG22 + KG33
</code></pre>
<p><strong>III) What I would Like</strong></p>
<p>I would like to use summation convention on the code of section <strong>II)</strong>, since the Klein-Gordon equations are given by:</p>
<p><span class="math-container">$$ \frac{1}{\sqrt{-g}}\sum_{\mu=1}^{4}\sum_{\nu=1}^{4}\partial_{\mu}\Bigg(\sqrt{-g}g^{\mu\nu}\partial_{\nu} \Psi(r,\theta,\phi,t) \Bigg) \tag{1}$$</span></p>
<p><strong>IV) Hartle's code on summation convention</strong></p>
<p>Actually, Hartle's <span class="math-container">$[1]$</span> gives a way to work with tensor indices; for instance, the Christoffel symbols are given by:</p>
<p><span class="math-container">$$ \Gamma^{i}_{jk}=\sum_{s=1}^{4}\frac{1}{2}g^{is}\Bigg(g_{sj,k} + g_{sk,j} - g_{jk,s} \Bigg) \tag{2}$$</span></p>
<p>and the code using summation is:</p>
<pre><code>affine :=
affine = Simplify[
Table[(1/2)*
Sum[
inversemetric[[i, s]]*(D[metric[[s, j]], coord[[k]]] +
D[metric[[s, k]], coord[[j]]] -
D[metric[[j, k]], coord[[s]]]),
{s, 1, n}
],
{i, 1, n}, {j, 1, n}, {k, 1, n}
] ];
listaffine :=
Table[
If[UnsameQ[affine[[i, j, k]], 0],
{ToString[Γ[i - 1, j - 1, k - 1]],
affine[[i, j, k]]}
],
{i, 1, n}, {j, 1, n}, {k, 1, j}
];
TableForm[
Partition[DeleteCases[Flatten[listaffine], Null], 2],
TableSpacing -> {2, 2}
]
</code></pre>
| Alex Trounev | 58,388 | <p>We can use this code for Klein-Gordon:</p>
<pre><code>Clear[coord, metric, inversemetric, affine, riemann, ricci, scalar, \
einstein, t, r, θ, ϕ]
n = 4;
coord = {t, r, θ, ϕ};
g = {{-(1 - ((2*m)/(r))), 0, 0, 0}, {0, 1/(1 - ((2*m)/(r))), 0,
0}, {0, 0, r^2, 0}, {0, 0, 0, r^2*(Sin[θ]*Sin[θ])}};
g1 = Simplify[Inverse[g]];
dg = Det[g];
KG = 1/Sqrt[-dg] Sum[
D[Sqrt[-dg] g1[[i, j]] D[psi[t, r, θ, ϕ], coord[[j]]],
coord[[i]]], {i, n}, {j, n}]
</code></pre>
<p>Now we can compare <code>KG</code> with spherical Laplacian</p>
<pre><code>KG /. m -> 0 // FullSimplify
Laplacian[psi[t, r, θ, ϕ], {r, θ, ϕ},
"Spherical"] // FullSimplify
</code></pre>
<p>And we see that the two expressions differ only by the second time derivative <span class="math-container">$\partial_t \partial_t \psi$</span>, as expected.</p>
|
2,703,559 | <p>My assignment asks me to prove that the only automorphism of order 2 of $\mathbb{Z}_q$ is $m \mapsto -m$ for $q = 3$, $q = 5$ and $q = 7$. I have been stuck for ages, and now I wonder if it is true. I need some help to get started. This is my attempt: assume $k$ and $q$ are relatively prime (this is necessary, as otherwise $\phi$ is not an automorphism); then:
$$
m = \phi^2(m) = k^2m \implies k^2 \equiv 1 \mod q
$$
But I don't know how to go on from there. I can't see that $k = -1$ is the only option.</p>
| Dietrich Burde | 83,966 | <p>For $q$ prime, $\mathbb{Z}_q$ is a field, so $k^2-1=0$ there has only two solutions, $k=1$ or $k=-1$.</p>
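<p>A quick brute-force check (just an illustration, not a replacement for the field argument) confirms this for the moduli in the assignment, and shows how it fails for a composite modulus such as $8$, where $\mathbb{Z}_8$ is not a field:</p>

```python
for q in (3, 5, 7, 8):
    roots = [k for k in range(1, q) if (k * k) % q == 1]
    print(q, roots)
# 3 [1, 2]
# 5 [1, 4]
# 7 [1, 6]
# 8 [1, 3, 5, 7]   <- not a field, so x^2 - 1 = 0 has extra roots
```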
|
2,941,579 | <p>In Taylor's series, to determine the number of terms needed to obtain the desired accuracy, sometimes one needs to solve inequalities of the form <span class="math-container">$$\frac{a^n}{n!}<b,$$</span>
where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are fixed positive numbers. In most textbooks in calculus, the only introduced method to solve <span class="math-container">$\frac{a^n}{n!}<b$</span> for <span class="math-container">$n$</span> is trial and error. While this method works well in many cases, I feel that it is inefficient when <span class="math-container">$a$</span> is large and <span class="math-container">$b$</span> is small. (For example, how about solving <span class="math-container">$\frac{1000^n}{n!}<0.01$</span>?) </p>
<p>My Question: Apart from using brutal force, is there another method to solve the inequality <span class="math-container">$\frac{a^n}{n!}<b$</span> for <span class="math-container">$n$</span>?</p>
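<p>To make the difficulty concrete, here is the trial-and-error scan written out, done in logarithmic scale via the log-gamma function so that a large $a$ does not overflow (the function name is mine; note that it is still a term-by-term scan, which is exactly the inefficiency I would like to avoid):</p>

```python
import math

def smallest_n(a, b):
    # smallest n with a**n / n! < b; compare logarithms, since a**n and n!
    # overflow floating point long before their ratio becomes small
    log_a, log_b = math.log(a), math.log(b)
    n = 1
    while n * log_a - math.lgamma(n + 1) >= log_b:
        n += 1
    return n

n = smallest_n(1000, 0.01)  # the terms only start shrinking once n exceeds a = 1000
```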
| Siong Thye Goh | 306,553 | <p><span class="math-container">$$|4x-12|=4|x-3|$$</span></p>
<p>we have <span class="math-container">$|x-3| < \delta$</span>, hence </p>
<p><span class="math-container">$$|4x-12|=4|x-3|< 4 \delta$$</span></p>
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution ?</p>
| Denis | 66,241 | <p>These interview problems are sometimes weird: the notation is bad, the rules are arbitrary, and they expect only one answer where several could fit.</p>
<p>Here is one, which could be the expected one, but probably not:</p>
<p>To compute $a \times b$, take the numerator of $\dfrac{ab^2}{6}$ after simplification of the fraction.</p>
<p>I don't see how they could argue it is wrong.</p>
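<p>The rule is easy to check mechanically; a quick sketch in exact rational arithmetic (the function name is mine) reproduces all five lines of the puzzle:</p>

```python
from fractions import Fraction

def puzzle_product(a, b):
    # numerator of a*b^2/6 after reducing the fraction to lowest terms
    return Fraction(a * b * b, 6).numerator

for a, b in [(3, 4), (4, 5), (5, 6), (6, 7), (7, 8)]:
    print(a, b, puzzle_product(a, b))
# 3 4 8
# 4 5 50
# 5 6 30
# 6 7 49
# 7 8 224
```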
|
866,921 | <p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p>
<p>$$3\times 4=8$$
$$4\times 5=50$$
$$5\times 6=30$$
$$6\times 7=49$$
$$7\times 8=?$$</p>
<p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p>
<blockquote class="spoiler">
<p> $224$</p>
</blockquote>
<p>How do we find this solution ?</p>
| maddog2k | 164,410 | <p>$7 \times 8 = 56$. Did the question explicitly say there was a pattern to be found, or is it just as you've presented it here? The symbols for multiplication ($\times$) and equality ($=$) have well-defined mathematical meaning, and therefore $7 \times 8 = 56$ regardless of whatever misleading noise was written before. It may just be a test of the ability to avoid presumption.</p>
|
63,641 | <p>Here is the problem: two mathematicians meet at a bar. They like each other and tend to collaborate, but it is not clear what problems could be of common interest to both of them. Of course, the traditional way is for them to keep describing their work, or their field in general, in the hope of catching something in the end. But is there any reference, graph, table or whatever that they could use to help them? This, of course, makes sense only if such a reference is kept updated based on the continuous production in mathematics.</p>
| Qiaochu Yuan | 290 | <p>Here's a comment which might as well be written down. If $f$ is required to be an inner automorphism, then for $G$ finite this question can be understood using the character table of $G$:</p>
<blockquote>
<p>$x$ is conjugate to its inverse if and only if $\chi(x)$ is real for all characters $\chi$.</p>
</blockquote>
<p>Since $\chi(x^{-1}) = \overline{ \chi(x) }$, one direction is clear. In the other direction, if $\chi(x)$ is real then $\chi(x) = \chi(x^{-1})$ for all characters $\chi$, hence $c(x) = c(x^{-1})$ for all class functions $c$. One also has the following cute result: the number of conjugacy classes which are closed under inversion is equal to the number of irreducible characters all of whose values are real (equivalently, the number of self-dual irreps). Since there exist plenty of groups (even simple groups) whose character tables have complex entries, there are plenty of groups with elements not conjugate to their inverses.</p>
<p>This is one way to address the question for finite groups with no outer automorphisms. </p>
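<p>For a concrete instance, one can check by brute force that in the Frobenius group of order $21$, realized inside $S_7$ as generated by $x \mapsto x+1$ and $x \mapsto 2x$ mod $7$, the identity is the <em>only</em> element conjugate to its inverse, consistent with the fact that every nontrivial irreducible character of this group takes some non-real value. A sketch (permutations as tuples acting on $\{0,\dots,6\}$; all names are mine):</p>

```python
n = 7
a = tuple((i + 1) % n for i in range(n))   # x -> x + 1 (mod 7)
b = tuple((2 * i) % n for i in range(n))   # x -> 2x    (mod 7)

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    inv = [0] * n
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

# close {a, b} under composition; in a finite group this yields <a, b>
G = {a, b}
while True:
    new = {compose(p, q) for p in G for q in G} - G
    if not new:
        break
    G |= new

conj_to_inv = [g for g in G
               if any(compose(compose(h, g), inverse(h)) == inverse(g)
                      for h in G)]
print(len(G), len(conj_to_inv))  # 21 1
```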
|
642,497 | <p>How do we prove by vector method that "if the diagonals of a trapezium have equal length then the non-parallel sides of the trapezium have equal length." ? </p>
<p>(taking $ABCD$ to be the trapezium with
$AD$ || $BC$ and $O$ the intersection point of diagonals , if it can be shown by vector method that
$OB=OC$ , then $AB=CD$ follows ; I can show $OB=OC$ but not by vector method and thus the problem , though any other line of approach is acceptable.) </p>
| Claude Leibovici | 82,404 | <p>I think that an easy way is to build a Taylor expansion around $x=-2$. As a result, you should have
$$9 + 32 (2 + x) - 62 (2 + x)^2 + 38 (2 + x)^3 - 10 (2 + x)^4 + (2 + x)^5$$</p>
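<p>These coefficients can be confirmed mechanically with a Horner-style change of variable in integer arithmetic (a sketch; the helper names are mine):</p>

```python
def poly_mul(p, q):
    # coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def expand_about(coeffs, a):
    # coefficients c_k with p(x) = sum c_k * (x - a)**k, obtained as the
    # coefficients of p(t + a) via Horner's scheme
    q = [coeffs[-1]]
    for c in reversed(coeffs[:-1]):
        q = poly_mul(q, [a, 1])   # multiply by (t + a)
        q[0] += c
    return q

# p(x) = x^5 - 2x^3 + 6x^2 + 1, expanded about x = -2
print(expand_about([1, 0, 6, -2, 0, 1], -2))
# [9, 32, -62, 38, -10, 1]
```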
|
642,497 | <p>How do we prove by vector method that "if the diagonals of a trapezium have equal length then the non-parallel sides of the trapezium have equal length." ? </p>
<p>(taking $ABCD$ to be the trapezium with
$AD$ || $BC$ and $O$ the intersection point of diagonals , if it can be shown by vector method that
$OB=OC$ , then $AB=CD$ follows ; I can show $OB=OC$ but not by vector method and thus the problem , though any other line of approach is acceptable.) </p>
| Arthur | 15,500 | <p>Alternative solution: You know you need $1\cdot (x + 2)^5$, since $1$ is the coefficient in front of $x^5$. Calculate $$x^5 - 2x^3 + 6x^2 + 1 - 1\cdot(x + 2)^5$$ and deduce how many $(x + 2)^4$ you need. Rinse and repeat.</p>
|
758,181 | <p><img src="https://i.stack.imgur.com/erzie.png" alt="enter image description here"></p>
<p>Above is just an example I'm trying to work from, as I have the solutions. I've seen lots of definitions of what inversions are, but they use symbols like sigma without really explaining what the sigma is. So if anyone could give me an easy-to-understand definition and use the above as a possible example, that would be great. Thank you.</p>
<p>In my book it says it has 3 inversions (3,1), (3,2), (2,1); how would these be found?</p>
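<p>In plain terms, an inversion of a permutation is a pair of entries appearing in the wrong relative order: a larger value standing somewhere before a smaller one. A short sketch (assuming, since I cannot see the image here, that the permutation in question is $(3,2,1)$, which is consistent with the three pairs the book lists):</p>

```python
def inversions(perm):
    # pairs (perm[i], perm[j]) with i < j but perm[i] > perm[j]
    return [(perm[i], perm[j])
            for i in range(len(perm))
            for j in range(i + 1, len(perm))
            if perm[i] > perm[j]]

print(inversions((3, 2, 1)))  # [(3, 2), (3, 1), (2, 1)] -> 3 inversions
```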
| Fly by Night | 38,495 | <p>If $\vec{u}\cdot \vec{w} = 2\times 1 + 1 \times 3$ then that implies that $\vec{u} = (2,1)$ and $\vec{w}=(1,3)$.</p>
<p>If that is the case then $\|\vec{w}\| = \sqrt{1^2+3^2} = \sqrt{10}$ and not $\sqrt{45}$ as you seem to think.</p>
<p>From your calculation that $\|\vec{w}\| = \sqrt{3^2+(-6)^2}$, I assume that $\vec{w} = (3,-6)$.</p>
<p>Is $\vec{w} = (1,3)$ or is $\vec{w}=(3,-6)$?</p>
|
758,181 | <p><img src="https://i.stack.imgur.com/erzie.png" alt="enter image description here"></p>
<p>Above is just an example I'm trying to work from, as I have the solutions. I've seen lots of definitions of what inversions are, but they use symbols like sigma without really explaining what the sigma is. So if anyone could give me an easy-to-understand definition and use the above as a possible example, that would be great. Thank you.</p>
<p>In my book it says it has 3 inversions (3,1), (3,2), (2,1); how would these be found?</p>
| Matias Morant | 140,918 | <p>$$|W|=\sqrt{3^2+1^2}=\sqrt{10}$$
$$\cos^{-1}\left(\frac{5}{\sqrt{10}\,\sqrt{5}}\right)=45^{\circ}$$
|
398,002 | <p>There is a <a href="https://en.wikipedia.org/wiki/Lie_group%E2%80%93Lie_algebra_correspondence" rel="noreferrer">classical correspondence</a> between Lie algebras (over <span class="math-container">$\mathbb{R}$</span> or <span class="math-container">$\mathbb{C}$</span>) and Lie groups in the finite dimensional case: to every Lie group <span class="math-container">$G$</span> there is an associated Lie algebra <span class="math-container">$\mathfrak{g}$</span>, and conversely. Moreover, this correspondence is one-to-one if one requires <span class="math-container">$G$</span> to be simply connected. One also has maps <span class="math-container">$\exp$</span> and <span class="math-container">$\log$</span> that map between a Lie group and its associated Lie algebra, and the Baker–Campbell–Hausdorff (BCH) formula gives a Lie series <span class="math-container">$z$</span> for given <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$\mathfrak{g}$</span> such that <span class="math-container">$\exp x\exp y=\exp z$</span> (at least for sufficiently small <span class="math-container">$x$</span> and <span class="math-container">$y$</span>).</p>
<p>Every Lie group gives rise to a Lie algebra, but in the infinite-dimensional case there are ‘non-enlargeable’ Lie algebras which don't correspond to a Lie group.</p>
<p>However, I came across a <a href="https://mathoverflow.net/questions/41644/does-a-finite-dimensional-lie-algebra-always-exponentiate-into-a-universal-cover#comment98452_41650">remark</a> in <a href="https://mathoverflow.net/questions/41644/does-a-finite-dimensional-lie-algebra-always-exponentiate-into-a-universal-cover">Does a finite-dimensional Lie algebra always exponentiate into a universal covering group</a> saying that every Lie algebra over <span class="math-container">$\mathbb{R}$</span> can be exponentiated to give an abstract group, but that it is a non-trivial theorem. Rather than poking this old comment to another question (where I may have taken something out of context), I thought this deserves to be explicitly asked.</p>
<blockquote>
<p>Let <span class="math-container">$\mathfrak{g}$</span> be a Lie algebra over a field <span class="math-container">$k$</span>. Is there an abstract group <span class="math-container">$G$</span> with which <span class="math-container">$\mathfrak{g}$</span> is naturally associated? What if we restrict to <span class="math-container">$k$</span> having characteristic zero — or even <span class="math-container">$k=\mathbb{R}$</span> or <span class="math-container">$k=\mathbb{C}$</span>?</p>
</blockquote>
<p>By ‘naturally associated’ I am thinking of a bijective correspondence reminiscent of the classical situation, where the algebra and the group are linked by the BCH formula, and Lie algebra homomorphisms <span class="math-container">$\phi$</span> lift to group homomorphisms <span class="math-container">$\Phi=\exp \phi\log$</span>. However I am not expecting a differentiable structure on <span class="math-container">$G$</span>. (That said, if <span class="math-container">$G$</span> comes with some extra structure for free, I would be interested to hear about it.)</p>
<p>I have looked at other MO questions such as <a href="https://mathoverflow.net/questions/4765/lie-groups-and-lie-algebras">Lie Groups and Lie Algebras</a> but it's not clear to me that infinite dimensional Lie algebras are considered there.</p>
<p>A further observation is that the Lie series encountered in the classical case converge in the usual sense on sufficiently small neighbourhoods. However, in some cases of interest to me, such as free Lie algebras, the Lie algebra can be embedded in an (associative) algebra of formal power series with non-commuting indeterminates, in which case there are no restrictions on when the Lie series converge. So it seems to me that a subcase of the question above corresponding to the situation where all Lie series converge may well have been studied. There is a 1948 Doklady paper <em>On normed Lie algebras over the field of rational numbers</em> by Mal'cev which sounds relevant but has proven resistant to my attempts to track it down.</p>
<p>I would be grateful for any pointers to books or articles that discuss this.</p>
| Alexander Schmeding | 46,510 | <p>Another source discussing the (non-)integrability of infinite dimensional Lie algebras is
T. Robart: Around the exponential mapping, in: Banyaga et al (Eds) Infinite dimensional Lie groups in geometry and representation theory, 2002</p>
<p>There, a detailed account of "the exponential catastrophe" can be found, together with an overview of older research on integrability criteria (e.g. for Banach Lie algebras).</p>
|
1,201,352 | <p>I have a very simple question which is bugging me.</p>
<p>We are 3 roommates and our total electricity bill is $61 this month,
I was home for the whole month,
Friend X was home for 15 days,
Friend Y was home for 20 days</p>
<p>Now the easy question: how much should each person pay?</p>
<p>My calculation is </p>
<pre><code>61/3=20,33
Guy X : 20,33 / 2 = 10.16
Guy Y : (20,33 * 2) / 3 = 13.55
Me : 61 - (10.16+13.55) = 37.29 which doesn't make sense at all!!!
</code></pre>
<p>Help me!!!</p>
| Jimmy R. | 128,037 | <p>One way to do it is to think that this month had $$30+15+20=65$$ separate days (or say: electricity days). The electricity bill is \$61, and it should be split as follows:</p>
<ol>
<li>you should pay $$\$61\cdot \frac{30}{65}=\$28.15$$</li>
<li>friend X should pay $$\$61\cdot \frac{15}{65}=\$14.08$$ and</li>
<li>friend Y should pay $$\$61\cdot \frac{20}{65}=\$18.77$$</li>
</ol>
<p>Observe now that the 3 amounts sum up to \$61.</p>
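<p>A quick numeric check of the proportional split (a Python sketch, not part of the original answer):</p>

```python
# Split the bill in proportion to each person's "person-days" at home.
bill = 61
days = {"me": 30, "X": 15, "Y": 20}
total_days = sum(days.values())            # 65 "electricity days"

shares = {who: bill * d / total_days for who, d in days.items()}
# shares: me -> 28.15..., X -> 14.07..., Y -> 18.76...; they sum to 61
```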
|
3,534,896 | <p>If <span class="math-container">${\sqrt 3} - {\sqrt 2}, 4- {\sqrt 6}, p{\sqrt 3} - q {\sqrt 2}$</span> form a geometric progression, find the values of p and q.</p>
<p>So I take the second term <span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2}) (r)$</span> , where r is the common ratio.</p>
<p><span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2})( 2{\sqrt3} + {\sqrt2 })$</span></p>
<p>And found that the common ratio, r = <span class="math-container">$2{\sqrt3} + {\sqrt2 }$</span></p>
<p>To find the third term, I multiplied the second term with the common ratio.</p>
<p><span class="math-container">$(4-{\sqrt 6})( 2{\sqrt3} + {\sqrt2 })= p{\sqrt 3} - q {\sqrt 2}$</span> </p>
<p><span class="math-container">$8{\sqrt 3} + 4{\sqrt2} - 6 {\sqrt 2} - 2{\sqrt 6} = p{\sqrt 3} - q {\sqrt 2}$</span> </p>
<p>I am unable to proceed beyond this step. </p>
| Parcly Taxel | 357,390 | <p>You've made a computation error, perhaps?
<span class="math-container">$$(4-\sqrt6)(2\sqrt3+\sqrt2)=8\sqrt3+4\sqrt2-6\sqrt2-2\sqrt3=6\sqrt3-2\sqrt2$$</span></p>
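<p>A quick floating-point check of this expansion (a Python sketch, added for illustration):</p>

```python
import math

# (4 - sqrt(6)) * (2*sqrt(3) + sqrt(2)) should equal 6*sqrt(3) - 2*sqrt(2)
lhs = (4 - math.sqrt(6)) * (2 * math.sqrt(3) + math.sqrt(2))
rhs = 6 * math.sqrt(3) - 2 * math.sqrt(2)
# both sides evaluate to about 7.5639
```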
|
859,493 | <p>Given any fixed integer $n>0$, let $S=\{1,2,3,4,\dots,n\}$. A Red-Blue (RB) subset of $S$ is a subset $T$ in which every element of $T$ is given a colour (either red or blue). For instance $\{17 (\text{red})\}, \{1 (\text{red}), 5 (\text{red})\}$ and $\{1(\text{red}),5(\text{blue})\}$ are $3$ different RB-coloured subsets of $S$.</p>
<p>Determine the number of different RB-coloured subsets of $S$.</p>
<p>First thing I did was look at the problem without the colour restrictions to find out how many different subsets I can have. The number of different subsets I calculated was $2^n - 1$. The $-1$ is from subtracting the null set. </p>
<p>I'm not sure how to count the number of possibilities when the colour restrictions are introduced.</p>
<p>Also how would I go about determining the number of different subsets of size $i$, where $0<i<n$?</p>
| agha | 118,032 | <p>$$(2x^2+x-3)^8=(2x^2+x-3)\cdot(2x^2+x-3) \cdots (2x^2+x-3)$$</p>
<p>Note that a term $ax$ arises in only one way: choose $x$ from one of the eight parentheses and $-3$ from each of the other seven, so this coefficient is $\displaystyle 8 \cdot (-3)^7 $.</p>
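<p>The coefficient can be confirmed by brute-force expansion (a small Python sketch using coefficient lists, added for illustration):</p>

```python
# Expand (2x^2 + x - 3)^8 as a coefficient list (index = power of x)
# by repeated convolution, then read off the coefficient of x^1.
poly = [1]                       # the constant polynomial 1
factor = [-3, 1, 2]              # -3 + x + 2x^2, lowest power first
for _ in range(8):
    out = [0] * (len(poly) + len(factor) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(factor):
            out[i + j] += a * b
    poly = out

coeff_x = poly[1]
# equals 8 * (-3)**7 = -17496
```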
|
2,670,258 | <p>Just to explain the motivation behind some other thing, I shall ask first this question:</p>
<blockquote>
<p>Is $\frac{1}{x}$ continuously differentiable (or we can examine
$\frac{1}{x^3}$ also) on the set $$E = (- \infty, 0) \cup (0, \infty)?$$</p>
</blockquote>
<p>The reason why I'm asking this is that when we consider a solution to a given initial value problem, we define the definition of interval of that solution as a single connected set (or just an interval), and the reason for this is explained to me as that we are trying to find solutions $y(x) \in C^1$, but when I discard the problematic points from my domain, why should I get any problem about the condition that $y \in C^1$ ?</p>
<p><strong>Edit:</strong></p>
<p>For example, let say that we solve an ODE, and the general solution is of the form
$$y(x) = c / x.$$
Then if the initial condition is given as $y(1) = 1$, we see that $c = 1$, and the interval of definition is $(0,\infty)$, which contains the initial point $x = 1$. However, I do not understand the motivation behind the reason why the interval of definition is not $(-\infty, 0) \cup (0, \infty)$.</p>
| 5xum | 112,884 | <p>The function $f(x)=\frac1x$ is an element of $C^1(\mathbb R\setminus\{0\})$.</p>
<p>To comment on your question about the initial value problems, I would need more information about what is unclear to you.</p>
|
2,706,061 | <p>This can be rewritten as $A\vec{x}=0$, where $A=(1, -2, 3), \vec{x}=(x, y, z)^T$.
I understand that one can find vectors perpendicular to this plane by finding the basis for the null space, as the null space is perpendicular to the row space.</p>
<p>This basis is $(2, 1, 0), (-3, 0, 1)$. I believe that the space describing all perpendicular vectors can be written as $a(2, 1, 0)^T+b (-3, 0, 1)^T$? In order to find the basis for this space, we solve for the null space of
$$
B=
\left[ {\begin{array}{ccc}
2 & 1 & 0 \\
-3 & 0 & 1 \\
\end{array} } \right]
$$
<p>This gives you $(1, -2, 3)^T$ which makes sense as this describes the original plane. But I don't understand why we combine the two solutions for the null space and transpose them to find the basis for all vectors $a(2, 1, 0)^T+b (-3, 0, 1)^T$. Could somebody explain this to me? Thank you.</p>
<p>Note: <a href="https://math.stackexchange.com/questions/2681592/find-a-basis-for-all-vectors-perpendicular-to-x-2y3z-0">This question</a> is about the same problem but has a different misunderstanding.</p>
| José Carlos Santos | 446,262 | <p>Because we're working in a $3$-dimensional vector space and we have found $2$ linearly independent solutions $s_1$ and $s_2$. Therefore, the space $S$ of <em>all</em> solutions must have dimension <em>at least</em> $2$. But it can't be $3$, since that would mean that $S=\mathbb{R}^3$, but it isn't ($(1\ \ -2\ \ 3)^T\notin S$). Therefore, $\dim S=2$ and $\{s_1,s_2\}$ is a basis of $S$.</p>
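<p>A quick dot-product check (a Python sketch, added for illustration) that both basis vectors really are orthogonal to the normal vector $(1,-2,3)$, i.e. lie in the plane:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

normal = (1, -2, 3)              # normal vector of the plane x - 2y + 3z = 0
s1, s2 = (2, 1, 0), (-3, 0, 1)   # the two null-space basis vectors
d1, d2 = dot(normal, s1), dot(normal, s2)
# both dot products vanish, so s1 and s2 lie in the plane
```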
|
4,453,356 | <p>I had been having trouble understanding a proof of the irrational nature of √2.</p>
<p>I found this proof on the first page of the foreword to <a href="https://www.cs.ru.nl/%7Efreek/comparison/comparison.pdf" rel="nofollow noreferrer"><em>17 theorem provers of the world</em></a> where a 'geometrical proof' (is that the right term?) of the irrationality of √2 is mentioned (page 2 of the pdf).</p>
<p>It goes like this:</p>
<blockquote>
<p>Call the original triangle ABC, with the right angle at C. Let the hypothenuse AB = p, and let the legs AC = BC = q. As remarked, p² = 2q².</p>
<p>Reflect ABC around AC obtaining the congruent copy ADC. On AB position E so that BE = q. Thus AE = p − q. On CD position F so that BF = p. Thus DF = 2q − p. The triangle BFE is congruent to the original triangle ABC. EF is perpendicular to AB, the lines EF and AD are parallel.</p>
<p>Now, position G on AD so that AG = EF = q. Since AEFG is a rectangle, we find AG = q. Thus, DG = FG = AE = p − q. So, the triangle DFG is an isosceles right triangle with a leg = p − q and hypothenuse = 2q − p.</p>
<p>If there were commensurability of p and q, we could find an example with integer lengths of sides and with the perimeter p + 2q a minimum. But we just constructed another example with a smaller perimeter p, where the sides are also obviously integers. Thus, assuming commensurability leads to a contradiction.</p>
<p><a href="https://i.stack.imgur.com/nEkgA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nEkgA.png" alt="enter image description here" /></a></p>
</blockquote>
<p>What I understood was, we assume that we have got the smallest possible isosceles right triangle: ABC.</p>
<p>It has hypotenuse p and leg q, where p,q ∈ ℤ.</p>
<p>ie, p and q are smallest pair integers for which ABC would be a right isosceles triangle.</p>
<p>That means p² = 2q².</p>
<p>Then we derive a smaller isosceles right triangle (DFG) with hypotenuse 2q-p and leg p-q.</p>
<p>Thus, the assumption that ABC was the smallest right isosceles triangle was wrong.</p>
<p>But how can that be used to say that p and q are not commensurable?</p>
<p>(p and q being commensurable means p/q is rational, right?)</p>
<p>My level of math knowledge is very basic.</p>
<p>Could you help me understand this?</p>
| Aaron Montgomery | 485,314 | <p>This is an example of <strong>argument by contradiction</strong>. The key idea is to take a fact that you hope to eventually show is false, and instead assume that it is <em>true</em>, but that the assumption of truth leads you to a contradiction.</p>
<p>The classic example of this style of argument is the proof that there's no largest prime; if there <em>was</em>, then that would mean I could make a finite list of all primes, such as <span class="math-container">$\{2, 3, \dots, p\}$</span>, where <span class="math-container">$p$</span> was the largest prime. But if that were true, I would notice that the expression <span class="math-container">$2 \cdot 3 \cdots p + 1$</span> would not be divisible by any of my primes -- hence, it must be prime itself, and it's necessarily larger than <span class="math-container">$p$</span>, which leads me to a contradiction. This implies that my assumption ("there exists a largest prime") must have been incorrect.</p>
<p>In this case, we start by assuming that there exists a rational isosceles right triangle (meaning one whose sides are all fractions) -- that is, <span class="math-container">$\left( \frac a b \right)^2 + \left( \frac a b \right)^2 = \left(\frac c d \right)^2$</span>. If so, multiplying both sides by <span class="math-container">$(bd)^2$</span> would give us an <em>integer</em> isosceles right triangle, as we'd have <span class="math-container">$(ad)^2 + (ad)^2 = (cb)^2$</span>.</p>
<p>If there are <em>integer</em> isosceles right triangles, then by looking at the length of the shorter sides we see that there must be a <em>smallest</em> integer isosceles right triangle (this is an application of the <a href="https://en.wikipedia.org/wiki/Well-ordering_principle" rel="nofollow noreferrer">well-ordering principle</a>). But, the argument above shows that we could find a <em>smaller</em> one than the one we had, which contradicts that we had the <em>smallest</em> one.</p>
<p>Since there can't be a <em>smallest</em> integer isosceles right triangle, there can't be <em>any</em> integer isosceles right triangles at all, which also means there couldn't have been a <em>rational</em> isosceles right triangle to begin with.</p>
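<p>The descent step behind the geometric argument replaces the pair $(p,q)$ by the smaller pair $(2q-p,\,p-q)$. That this preserves the relation $p^2 = 2q^2$ rests on an algebraic identity, which the following Python sketch (added for illustration) checks on random integers:</p>

```python
import random

random.seed(0)
pairs = [(random.randint(1, 10**6), random.randint(1, 10**6)) for _ in range(1000)]
# The identity (2q - p)^2 - 2(p - q)^2 = -(p^2 - 2q^2) shows the smaller
# pair (2q - p, p - q) satisfies p'^2 = 2q'^2 exactly when (p, q) does.
identity_holds = all(
    (2 * q - p)**2 - 2 * (p - q)**2 == -(p**2 - 2 * q**2) for p, q in pairs
)
```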
|
2,003,660 | <p>I got this exercise from the textbook Book of Proof, CH4 E12. I've tackle this problem in the following manner:</p>
<p>Suppose x is a real and $0 < x < 4$, it follows that,</p>
<p>\begin{align*}
&\Rightarrow 0 - 2 < x - 2 < 4 - 2 \\
&\Rightarrow (x - 2)^2 < 4\\
&\Rightarrow 0 \leq (x - 2)^2 < 4
\end{align*}</p>
<p>Since, $x(4 - x) = 4x - x^2 = 4 - (x - 2)^2$, then</p>
<p>$$\dfrac{4}{x(4 - x)} = \dfrac{4}{4 - (x - 2)^2}.$$</p>
<p>This expression is greater than or equal to $1$ for
$0 \leq (x - 2)^2 < 4$. Thus,</p>
<p>$$\dfrac{4}{x(4 - x)} \geq 1.$$</p>
<p>I'm quite new to proof technique and I'm using this book to self-learn logic and proofing writing. My question is: is the solution stated above logically sound? Would my arguments be considered sufficient to prove that $P \Rightarrow Q$?</p>
| Guy Fsone | 385,707 | <p>Hint: see that $0\lt x\lt 4$ implies $0\lt 4-x\lt 4$, and then
$$0\lt x(4-x) =-(x-2)^2+4\le4.$$</p>

<p>This directly implies your inequality: $$\dfrac{4}{x(4 - x)} \geq 1.$$</p>
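<p>A numerical grid check of this bound on the open interval $(0,4)$ (a Python sketch, not part of the original hint):</p>

```python
# Sample 0 < x < 4 and confirm 4 / (x(4 - x)) >= 1 throughout.
points = [4 * k / 10_000 for k in range(1, 10_000)]
min_value = min(4 / (x * (4 - x)) for x in points)
# the minimum is 1.0, attained at x = 2 where x(4 - x) = 4
```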
|
1,465,755 | <p>How do I find the angles $\alpha ,\beta ,\theta $ between the vector $(1,0,-1)$ and the unit vectors $i,j,k$ along the axes?</p>
<p>This question is not making sense to me. I know that in order to find the angle between any two nonzero vectors, I just have to take their dot product and divide it by the product of their lengths as such: $\cos { \theta } =\frac { \overrightarrow { v } \cdot \overrightarrow { w } }{ \left\| v \right\| \left\| w \right\| } $</p>
<p>How can I extend this knowledge to the 3 dimensional vector I was given? I don't know how I can get the dot product of the given vector with the given unit vectors. </p>
<p>Hints only, please. No actual solution. </p>
| Ben Grossmann | 81,360 | <p>For example, we have
$$
(1,0,-1)\cdot \hat i = (1,0,-1) \cdot (1,0,0) =
(1)(1) + (0)(0) + (-1)(0) = 1
$$</p>
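<p>Carrying this one dot product through numerically (a Python sketch; it takes the hint one step further, so skip it if you only want hints):</p>

```python
import math

v = (1, 0, -1)
i_hat = (1, 0, 0)
dot = sum(a * b for a, b in zip(v, i_hat))      # v . i = 1
norm_v = math.sqrt(sum(a * a for a in v))       # |v| = sqrt(2); |i| = 1
alpha = math.acos(dot / norm_v)
# alpha = pi/4, i.e. 45 degrees
```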
|
2,375,714 | <p>I know this is a really easy question but for some reason I'm having trouble with it.</p>
<p>If $M$ is an object in an additive category $\mathcal C$, and $\text{Hom}_{\mathcal C}(M,M) = 0$, then $M = 0$.</p>
<p>I know that this implies $\text{id}_M =0$ but I'm having trouble showing that $M$ satisfies the condition to be the zero object, or showing that the map $0\rightarrow M$ is necessarily invertible.</p>
<p>We have that this composite $M \rightarrow 0 \rightarrow M$ is the $0$ map and the identity simultaneously but I'm stumped.</p>
<p>I think I'm thinking about it too much.</p>
| Pece | 73,610 | <p>You always have that $0 \to M \to 0$ equals $\mathrm{id}_0$, and you just showed that $M\to 0 \to M$ equals $\mathrm{id}_M$. Hence $0\to M$ is an iso (with inverse $M \to 0$).</p>
<p>Remark: the additive structure is not really used here, this is rather a property of pointed categories.</p>
|
160,765 | <p>I have a functor $F:C \to D$ between poset-enriched categories, and I'd like to show that the induced map on classifying spaces is a homotopy-equivalence. To this end, I am trying to establish the presence of initial objects in all the fibers $d\setminus F$ and use the 2-categorical version of Quillen's Theorem A due to Bullejos and Cegarra (see their pdf <a href="http://hera.ugr.es/doi/14976262.pdf">here</a>). It'd be nice if these fibers had initial elements, so</p>
<blockquote>
<p>What is an initial object $i$ in a 2-category $C$, and where can I find a reference in the literature?</p>
</blockquote>
<p>I imagine that instead of having a unique morphism $1 \to c$ for every object $c$ of $C$ like one does for a 1-categorical initial object, we'd now want the category of all morphisms from $1$ to $c$ in $C$ to be contractible. So if $C$ is poset-enriched it would suffice for the poset $C(i,c)$ to have a minimal element for all objects $c$. Is this accurate, and if so, what can I cite as a reference? </p>
| Zhen Lin | 11,640 | <p>There are several possible definitions of initial object in a 2-category <span class="math-container">$\mathfrak{K}$</span>; which one is appropriate depends on your applications.</p>
<ol>
<li><p>A 2-category has an underlying ordinary category, so we may just reuse the standard definition of initial object.</p>
</li>
<li><p>A 2-category can be regarded as a category enriched over categories, so we may use the definition of initial object in an enriched category. Concretely, this refers to an object <span class="math-container">$a$</span> such that the hom-category <span class="math-container">$\mathfrak{K} (a, b)$</span> is the terminal category for all objects <span class="math-container">$b$</span> in <span class="math-container">$\mathfrak{K}$</span>.</p>
<p>Clearly, every enriched initial object is an initial object in the underlying ordinary category, but the converse is not true. (For example, the unique object of <span class="math-container">$\mathfrak{K} (a, b)$</span> may have non-trivial endomorphisms.) This definition is standard: see e.g. [Kelly, <em>Basic concepts of enriched category theory</em>].</p>
</li>
<li><p>We can do things up to isomorphism in a 2-category, so we might define an initial object to be an object <span class="math-container">$a$</span> such that the hom-category <span class="math-container">$\mathfrak{K} (a, b)$</span> has only one object up to isomorphism. This is the same thing as an initial object in the "homotopy category" <span class="math-container">$\operatorname{Ho} \mathfrak{K}$</span> obtained by identifying all isomorphic morphisms in <span class="math-container">$\mathfrak{K}$</span>.</p>
<p>Clearly, an object that is initial in the underlying ordinary category of <span class="math-container">$\mathfrak{K}$</span> is also initial in this sense, but the converse is not true. As far as I know, this definition is not used.</p>
</li>
<li><p>Every 2-category is also a bicategory, so we can take the definition from there. A bicategorical initial object in <span class="math-container">$\mathfrak{K}$</span> is an object <span class="math-container">$a$</span> such that the hom-categories <span class="math-container">$\mathfrak{K} (a, b)$</span> are contractible <em>groupoids</em>. More concretely, this means there is a morphism <span class="math-container">$a \to b$</span> for every <span class="math-container">$b$</span> and there is a unique 2-cell between any parallel pair of morphisms <span class="math-container">$a \to b$</span>.</p>
<p>Every bicategorical initial object is also initial in the sense of (3). This definition is a special case of the general notion of bicolimit: see e.g. [Kelly, <em>Elementary observations on 2-categorical limits</em>].</p>
</li>
<li><p>Every 2-category can be regarded as a simplicially enriched category by replacing each hom-category with its nerve. We could therefore define an initial object in <span class="math-container">$\mathfrak{K}$</span> to be an object <span class="math-container">$a$</span> such that the nerve of <span class="math-container">$\mathfrak{K} (a, b)$</span> is contractible for all <span class="math-container">$b$</span>.</p>
<p>It is not hard to see that bicategorical initial objects are also initial in this sense, but the converse is not true. (For example, <span class="math-container">$\mathfrak{K} (a, b)$</span> might have non-invertible morphisms.) I have not seen this definition before, but it may be useful if <span class="math-container">$\mathfrak{K} (a, b)$</span> is actually standing in for a homotopy type.</p>
</li>
</ol>
<p>If the hom-categories of <span class="math-container">$\mathfrak{K}$</span> are posets (and I mean partially ordered set, <em>not</em> preordered set) then definitions (1) – (4) coincide. This is because the only isomorphisms in a poset are the identities.</p>
|
457,427 | <p>Find the derivative of the function:<br>
$$\int_{x^2}^{\sin(x)}\sqrt{1+t^4}dt$$<br></p>
<p>In class we had the following solution:<br>
By the fundamental theorem of calculus we know that <br>
$$\left(\int_a^xf(t)dt\right)'=f(x)$$ So<br>
$$\int_{x^2}^0\sqrt{1+t^4}dt+\int_0^{\sin(x)}\sqrt{1+t^4}dt=$$<br>
$$\int_0^{\sin(x)}\sqrt{1+t^4}dt-\int_0^{x^2}\sqrt{1+t^4}dt=$$<br>
Letting $g(t)=\sqrt{1+t^4} $<br>
$$g(\sin(x))(\sin(x))'-g(x^2)(x^2)'=$$<br>
$$\sqrt{1+\sin(x)^4}\cdot \cos(x)-\sqrt{1+x^8} \cdot 2x$$<br></p>
<p>However, if we have that $\left(\int_a^xf(t)dt \right)'=f(x)$ wouldn't the answer just be <br>
$$\sqrt{1+\sin(x)^4}-\sqrt{1+x^8}?$$</p>
| Eric Auld | 76,333 | <blockquote>
<p>However, if we have that $\left(\int_a^xf(t)dt \right)'=f(x)$ wouldn't the answer just be $$\sqrt{1+\sin(x)^4}-\sqrt{1+x^8}?$$</p>
</blockquote>
<p>Note that hidden in the first formula you mention is multiplication by $\frac {d}{dx} (x)$. The reason you don't see it is because it is equal to one! It might be more clear to write </p>
<p>$$\left(\int_a^x f(t)\,dt\right)' = f(x) \frac{d}{dx}(x)$$ </p>
<p>since the more general formula is </p>
<p>$$\left(\int_{u(x)}^{v(x)} f(t)\,dt\right)' = f(v(x)) \frac{dv}{dx} - f(u(x))\frac{du}{dx}$$</p>
<p>or even more generally, for $f(x,t)$ continuous,</p>
<p>$$\left(\int_{u(x)}^{v(x)} f(x,t)\,dt\right)' = \int_{u(x)}^{v(x)}\frac{\partial } {\partial x} f(x,t)\,dt + f(x,v(x)) \frac{dv}{dx} - f(x,u(x))\frac{du}{dx}.$$</p>
<p>This last formula easily reduces to the simpler case, since $\frac{\partial } {\partial x} f(t) =0$. </p>
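<p>A numerical sanity check of the simple case from the question, comparing a finite-difference derivative of $\int_{x^2}^{\sin x}\sqrt{1+t^4}\,dt$ with the closed form (an illustrative Python sketch; the step sizes and tolerances are arbitrary choices):</p>

```python
import math

def f(t):
    return math.sqrt(1 + t**4)

def integral(a, b, n=2000):
    # composite Simpson's rule on [a, b] (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def F(x):
    # F(x) = integral of sqrt(1 + t^4) from x^2 to sin(x)
    return integral(x**2, math.sin(x))

x = 0.7
h = 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)          # central difference
exact = math.sqrt(1 + math.sin(x)**4) * math.cos(x) - math.sqrt(1 + x**8) * 2 * x
# numeric and exact agree to many decimal places
```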
|
4,552,098 | <p>The rules: Two people roll a single die. If the die shows 1, 2, 3 or 4, person A gets a point. For rolls of 5 and 6, person B gets a point. A player needs a 2-point lead to win the game.</p>
<p>This is a question taken from my math book. The answer says the probability is <span class="math-container">$\frac{4}{5}$</span> for person A to win the game, which I don't understand.</p>
<p>My thought process: Let's look at all four possible outcomes of the first two rolls. These would be AA, BB, AB or BA. AA means person A gets a point two times in a row. Below are the probabilities for all these scenarios:</p>
<p><span class="math-container">$P(AA)=(\frac{2}{3})^2=\frac{4}{9}$</span></p>
<p><span class="math-container">$P(BB)=(\frac{1}{3})^2=\frac{1}{9}$</span></p>
<p><span class="math-container">$P(AB)=\frac{2}{3}\cdot\frac{1}{3}=\frac{2}{9}$</span></p>
<p><span class="math-container">$P(BA)=\frac{1}{3}\cdot\frac{2}{3}=\frac{2}{9}$</span></p>
<p>If AB or BA happens they have an equal amount of points again, no matter how far they are into the game. The probability of this would then be <span class="math-container">$2\cdot\frac{2}{9}=\frac{4}{9}$</span>. Since they have an equal amount of points, you can look at that as the game has restarted.</p>
<p>Meaning person A has to get two points in a row to win no matter what. Would that not mean the probability is <span class="math-container">$\frac{4}{9}$</span> for person A to win? Can someone tell me where my logic is flawed and what the correct logic would be?</p>
| N. F. Taussig | 173,070 | <p>As you showed, person <span class="math-container">$A$</span> can win the first round of two rolls with probability <span class="math-container">$4/9$</span>. However, as Fishbane pointed out in the comments, person <span class="math-container">$A$</span> can also win if both person <span class="math-container">$A$</span> and person <span class="math-container">$B$</span> fail to win in the first round and person <span class="math-container">$A$</span> obtains two points in a subsequent round before person <span class="math-container">$B$</span> does. Hence, the probability that person <span class="math-container">$A$</span> wins should be higher than <span class="math-container">$4/9$</span> since person <span class="math-container">$A$</span> does not have to win the game in the first round of two rolls.</p>
<p><strong>Method 1:</strong> We add the probabilities that person <span class="math-container">$A$</span> wins in the <span class="math-container">$k$</span>th round of two rolls.</p>
<p>Since the probability neither <span class="math-container">$A$</span> nor <span class="math-container">$B$</span> wins a round is <span class="math-container">$4/9$</span>, the probability neither <span class="math-container">$A$</span> nor <span class="math-container">$B$</span> wins in any of the first <span class="math-container">$k - 1$</span> rounds of two rolls is <span class="math-container">$(4/9)^{k - 1}$</span>. The probability that person <span class="math-container">$A$</span> then wins the <span class="math-container">$k$</span>th round of two rolls is <span class="math-container">$4/9$</span>. Hence, the probability that person <span class="math-container">$A$</span> wins in the <span class="math-container">$k$</span>th round is <span class="math-container">$(4/9)^k$</span>. Therefore, the probability that person <span class="math-container">$A$</span> wins is
<span class="math-container">\begin{align*}
\sum_{k = 1}^{\infty} \left(\frac{4}{9}\right)^k & = \frac{4}{9}\sum_{k = 1}^{\infty} \left(\frac{4}{9}\right)^{k - 1}\\
& = \frac{4}{9} \cdot \frac{1}{1 - \frac{4}{9}}\\
& = \frac{4}{9} \cdot \frac{1}{\frac{5}{9}}\\
& = \frac{4}{9} \cdot \frac{9}{5}\\
& = \frac{4}{5}
\end{align*}</span></p>
<p><strong>Method 2:</strong> Let <span class="math-container">$p$</span> be the probability that <span class="math-container">$A$</span> wins the game. You showed that the probability <span class="math-container">$A$</span> wins in the first round is <span class="math-container">$4/9$</span>. You also showed that the probability that neither <span class="math-container">$A$</span> nor <span class="math-container">$B$</span> wins the first round is <span class="math-container">$4/9$</span>, at which point the game restarts, so <span class="math-container">$A$</span> again has probability <span class="math-container">$p$</span> of winning. Hence,
<span class="math-container">$$p = \frac{4}{9} + \frac{4}{9}p$$</span>
Solving for <span class="math-container">$p$</span> yields <span class="math-container">$p = 4/5$</span>, as before.</p>
|
2,081,001 | <p>The number $\frac{22}{7}$ is irrational in our base-$10$ system, but in, say, base-$14$, it is rational (it comes out to $3.2$ in that system).</p>
<p>It's easy for fractions that are irrational as decimals, as you can just represent them in a base that's double the denominator of the fraction. However, what if I have a number like $\pi$, or $\log(2)$?</p>
<p>For those numbers, it could easily be represented as a rational number if it is in base-($\pi\cdot 2$) or base-($\log(2)\cdot 2$), but is it possible to represent them in any rational-based number system?</p>
| FuzzyPixelz | 376,394 | <p>Always refer to definitions: a number $x$ is called <em>irrational</em> iff $\forall p,q \in \Bbb Z,\ q\neq 0 : x\neq\frac pq$, that is, when you can't express it as a ratio of two integers, not based on how it <em>looks</em> using a different number system. <strong>EDIT:</strong> This means that you can never find two integers whose ratio <em>precisely</em> equals $\pi$, for example: $\frac 31$, $\frac{22}{7}$, $\frac{333}{106}$, $\frac{355}{113}$, $\frac{103993}{33102}$, $\dots$ won't <em>equal</em> $\pi$; they are all rationals, whose decimal expansions terminate or eventually repeat, while the decimal expansion of the irrational $\pi$ never does. </p>
|
2,038,617 | <p>The question asks to show that $$\sum_{k=0}^{3n}(-3)^k \binom{6n}{2k}=2^{6n}$$ by considering the binomial expansion </p>
<p>I thought about the use of $$(1+z)^n=\sum_{k=0}^{n}\binom{n}{k}z^k$$
with suitable complex number $z$, as the formula shows the $(-3)^k$ term might suggest the use of complex number, which take the imaginary part of the expansion</p>
<p>However, I cannot find such $z$ that makes the sum to $2^{6n}$</p>
<p>Any hints are appreciated!</p>
| Robert Z | 299,698 | <p>Consider the real part of the binomial expansion of $(1+i\sqrt{3})^{6n}=(2e^{i\pi/3})^{6n}=2^{6n}$:
$$2^{6n}=\mbox{Re}\left((1+i\sqrt{3})^{6n}\right)=
\mbox{Re}\left(\sum_{j = 0}^{6n}{6n \choose j}(i\sqrt{3})^{j} \right)
=\sum_{k = 0}^{3n}{6n \choose 2k}(-3)^k.$$</p>
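<p>For small $n$ the identity can be confirmed with exact integer arithmetic (a Python sketch, added for illustration):</p>

```python
from math import comb

def lhs(n):
    # sum over k of (-3)^k * C(6n, 2k)
    return sum((-3)**k * comb(6 * n, 2 * k) for k in range(3 * n + 1))

checks = all(lhs(n) == 2**(6 * n) for n in range(1, 8))
# e.g. lhs(1) = 1 - 3*15 + 9*15 - 27*1 = 64 = 2^6
```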
|
2,737,144 | <p>I tried to prove it by contradiction. </p>
<p>Suppose it is not true that $1\ge\frac{3}{x(x-2)}$, so $1\lt\frac{3}{x(x-2)}$. Then $\frac{3}{x(x-2)}-1\gt0$. Multiply both sides of $\frac{3}{x(x-2)}-1\gt0$ by ${x(x-2)}$.</p>
<p>$\left(\frac{3}{x(x-2)}-1\right)\cdot x(x-2)\gt 0\cdot x(x-2)$</p>

<p>${3-x(x-2)\gt0}$</p>

<p>${3-x^2+2x\gt0}$</p>

<p>${-x^2+2x+3\gt0}$</p>

<p>${-(x^2-2x-3)\gt0}$</p>

<p>$\frac{-(x^2-2x-3)}{-1}\lt\frac{0}{-1}$</p>

<p>${(x-3)(x+1)\lt0}$</p>
<p>At this point I really do not know what to do after this point or if I really even went about it the right way. Thank you for the help.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>From $$x-3\geq 0$$ it follows that:
$$x-1\geq 2$$
$$(x-1)^2\geq 4$$
$$x^2-2x+1\geq 4$$
$$x^2-2x\geq 3$$
$$x(x-2)\geq 3$$ so $$1\geq \frac{3}{x(x-2)}$$</p>
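<p>A quick grid check of the conclusion for $x \ge 3$ (a Python sketch; the grid is an arbitrary choice):</p>

```python
# For x >= 3, check numerically that 3 / (x(x - 2)) <= 1.
xs = [3 + 0.01 * k for k in range(10_000)]
all_ok = all(3 / (x * (x - 2)) <= 1 for x in xs)
max_ratio = max(3 / (x * (x - 2)) for x in xs)
# the maximum ratio is 1, attained at x = 3 where x(x - 2) = 3
```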
|
1,679,772 | <p>A binary relation $R$ on $\mathbb{N}\times\mathbb{N}$ is defined as follows: $(a,b)R(c,d)$ if $a\leq c$ or $b\leq d$. Consider the following
propositions:</p>
<p>$P: R$ is reflexive</p>
<p>$Q: R$ is transitive</p>
<p>Which one of the following statements is TRUE?</p>
<ol>
<li>Both $P$ and $Q$ are true. </li>
<li>$P$ is true and $Q$ is false. </li>
<li>$P$ is false and $Q$ is true. </li>
<li>Both $P$ and $Q$ are false.</li>
</ol>
<hr>
<hr>
<p>My attempt :</p>
<p>Reflexive: $(a, a) R (a, a)$, since $a \leq a$ or $a \leq a$.</p>
<p>Transitive: if $(a, b) R (c, d)$ and $(c, d) R (m, n)$, then $(a, b) R (m, n)$.
Suppose $(a, b) R (c, d)$</p>
<p>$\implies a \leq c$ or $b \leq d$
and $(c, d) R(m, n)$</p>
<p>$\implies c \leq m, d \leq n$</p>
<p>Since $a \leq c$, or $c \leq m$ so $a \leq m$</p>
<p>$b \leq d$ or $d \leq n$, so $b \leq n$</p>
<p>$\implies (a, b) R(m, n)$</p>
<blockquote>
<p>Can you explain in a formal way, please?</p>
</blockquote>
| MooS | 211,913 | <p>We can also directly use the definition of compactness: For the sake of notation, let $K_1$ be the ambient topological space.</p>
<p>Let the intersection be $\emptyset$; then we have an open cover $K_1 = \bigcup_{i=2}^\infty K_i^c$ (complements taken in $K_1$). By definition we find a finite subcover, i.e. $K_1 = \bigcup_{i=2}^N K_i^c$ for some $N$. This means $K_N = \bigcap_{i=1}^N K_i = \emptyset$, contradiction!</p>
|
14,765 | <p>I like to make the "dominoes" analogy when I teach my students induction.</p>
<p>I recently came across the following video:</p>
<p><a href="https://www.youtube.com/watch?v=-BTWiZ7CYoI" rel="noreferrer">https://www.youtube.com/watch?v=-BTWiZ7CYoI</a></p>
<p>In this video, a sequence of concrete block wall caps are set up like dominoes on the top of a wall. The first wall cap is knocked down, setting off the domino effect. The blocks are spaced so that they are resting on each other when they fall, but just barely. So rather than resting flat each block is supported slightly by its successor. When the last block falls, however, it falls flat (having no subsequent block to rest on). This causes the block behind it to slip off, and lay flat, which causes the brick behind it to slip off and lie flat, until all the blocks are lying flat perfectly end to end.</p>
<p>Is there any instance of a similar phenomenon occurring in mathematics? I am thinking of a situation in which you want to prove both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span> (say). If you are able to prove: </p>
<ol>
<li><span class="math-container">$P(1)$</span></li>
<li><span class="math-container">$\forall k \in \{1,2,3, \dots, 99\} P(k) \implies P(k+1)$</span></li>
<li><span class="math-container">$P(100) \implies Q(100)$</span></li>
<li><span class="math-container">$\forall k \in \{ 100, 99, 98, \dots, 3,2\}, Q(k) \implies Q(k-1)$</span></li>
</ol>
<p>Then it will follow that both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> are true for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span>.</p>
<p>If an example is found, it could be a great example for teaching because it would force students to think through the logic of why induction works rather than blindly following a certain form of "an induction proof".</p>
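<p>The forward-then-backward scheme in conditions 1&ndash;4 can be simulated directly; here is a minimal Python sketch in which boolean propagation stands in for the hypothetical proofs of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>:</p>

```python
# Simulate the two-phase "domino" induction: P runs forward 1..100,
# then Q is seeded from P(100) and runs backward 100..1.
# The implications themselves are placeholders (assumed to hold).
N = 100
P = {1: True}                      # base case: P(1)
for k in range(1, N):              # P(k) => P(k+1), forward
    P[k + 1] = P[k]
Q = {N: P[N]}                      # P(100) => Q(100), the "last block falls flat"
for k in range(N, 1, -1):          # Q(k) => Q(k-1), backward
    Q[k - 1] = Q[k]

assert all(P[n] and Q[n] for n in range(1, N + 1))
```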
| Acccumulation | 8,989 | <p>The square root symbol <span class="math-container">$\sqrt {\cdot}$</span> means "the principal square root". For positive numbers, it's understood that it means the positive square root. But your students clearly haven't advanced enough for this to be implicit. Insist that if they want to represent the positive square root of <span class="math-container">$x$</span>, they have to write <span class="math-container">$\sqrt[+] x$</span>. Note that I didn't write <span class="math-container">$+\sqrt x$</span>. I'm treating positive sign as <em>part</em> of the square root symbol. When you take the square root, write it as <span class="math-container">$\sqrt[\pm] x$</span> rather than <span class="math-container">$\pm \sqrt x$</span> to emphasize that you're not taking "the" square root and then taking plus and minus that number; the square root operation itself gives two outputs. You should eliminate as much as possible the concept of "the" square root from your students' minds.</p>
|
14,765 |
| WeCanLearnAnything | 7,151 | <p>Telling and explaining is really not enough. Working exclusively in <em>square</em> roots of numbers all but guarantees blocking vs interleaving problems and encourages shallow robotic thinking and generalizing involving roots.</p>
<p>A few ideas:</p>
<ol>
<li>Error analysis. Have them look at a <em>variety</em> of equations that boil down to something of the form <span class="math-container">$x^n=m$</span>, where <span class="math-container">$n$</span> is an even or odd whole number and <span class="math-container">$m$</span> is a real number. Show them various (fictitious) students' work of varying qualities. Have them check and assess the work, then discuss. Why do some have two solutions and others have only one? Why do some have more?</li>
<li>Multiple representations. Have them graph <span class="math-container">$y=x^2$</span> and have them use the graph to show the solution(s), if there are any, to <span class="math-container">$x^2=9$</span>, <span class="math-container">$x^2=0$</span>, <span class="math-container">$x^2=-9$</span>, etc. Then use the graph of <span class="math-container">$y=x^3$</span> to solve <span class="math-container">$x^3=1$</span>, <span class="math-container">$x^3=8$</span>, <span class="math-container">$x^3=-8$</span>, etc. Compare and contrast. Encourage them to generalize as to how many solutions the equations will have depending on the degree of the polynomial. Relate this to the error analysis above.</li>
<li>Positive incentives. Make a list of the 5 or 10 common mistakes that you see in class. Create a bonus quiz where students, on a blank piece of paper, recall all 10 mistakes from memory, explain and/or prove the wrongness in each, and demonstrate and check the right way to think about it. Another positive incentive: Tell them that regardless of the topic the class is on, at least 5% of every assessment will involve one of those 10 mistakes, so they have to keep studying them. Distributed practice makes a huge difference.</li>
<li><p>Negative incentives. Given the list of 5 or 10 most common mistakes has been released, tell them that any mistakes of that sort on the test will result in a zero (or a max score of 50% or something like that) for that portion of the test regardless of all other work.</p></li>
<li><p>A gentler version of those negative incentives. Only give feedback and grades on work that does not contain those common mistakes. e.g. Write something like this on their quiz. "On question 8, you made one of the common mistakes. Search for it and fix it, then I will grade it. If you choose not to fix it, you get zero for question 8."</p></li>
</ol>
<p>Good luck with this!</p>
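<p>The compare-and-contrast in idea 2 can also be made concrete in code; this is an illustrative sketch (the function name and case analysis are mine), enumerating the real solutions of <span class="math-container">$x^n = m$</span> by the parity of <span class="math-container">$n$</span> and the sign of <span class="math-container">$m$</span>:</p>

```python
import math

def real_roots(n, m):
    """Real solutions of x**n = m for a positive integer n (illustrative)."""
    if m == 0:
        return [0.0]
    r = abs(m) ** (1.0 / n)            # positive real n-th root of |m|
    if n % 2 == 1:                     # odd degree: exactly one real root
        return [math.copysign(r, m)]
    if m > 0:                          # even degree, m > 0: two real roots
        return [-r, r]
    return []                          # even degree, m < 0: no real roots

print(real_roots(2, 9), real_roots(2, -9), real_roots(3, -8))
```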
|
14,765 |
| Daniel R. Collins | 5,563 | <p>We are all dealing with this, of course. In a theoretical/philosophical sense, you can't "make" students learn anything (<a href="https://www.phrases.org.uk/meanings/you-can-lead-a-horse-to-water.html" rel="nofollow noreferrer">"You can lead a horse to water..."</a>). In a practical/punitive sense, this is what course grades are for. </p>
<p>So, consider one or more of the following options:</p>
<ul>
<li>Simply grade exams rigorously on this concept. </li>
<li>Give a lead-in multiple-choice or short-answer question on the Fundamental Theorem of Algebra. </li>
<li>Award only half the maximum score for the question if only one of two solutions is given.</li>
<li>Require in applications where only a positive solution is practically useful (e.g., solving a right triangle) that the mathematical two solutions be expressed (for extra practice), and then a separate natural-language statement using only the positive result.</li>
<li>For the most radical approach, consider mastery grading; if this particular item is absolutely critical, then do not pass any student from the course until they can successfully answer such a question. </li>
</ul>
|
14,765 |
| Dominique | 8,190 | <p>When you just know the basics of complex number theory, then you know that <span class="math-container">$x^n = a$</span> has <span class="math-container">$n$</span> solutions, so for <span class="math-container">$n=2$</span>, <span class="math-container">$x^n = a$</span> has two solutions.</p>
<p>What's the big deal?<br>
For <span class="math-container">$n=2$</span>, all roots (in case <span class="math-container">$a>0$</span>) are real, which means you have 2 real roots.<br>
For <span class="math-container">$n=3$</span>, there is one real root and two non-real complex ones.<br>
For <span class="math-container">$n=4$</span> (in case <span class="math-container">$a>0$</span>), two roots are real and two are purely imaginary.<br>
...</p>
<p>You can easily show this in a nice drawing:<br>
For <span class="math-container">$n=5$</span>, the roots are on a regular pentagon.<br>
For <span class="math-container">$n=4$</span>, the roots are on a regular quadrilateral.<br>
For <span class="math-container">$n=3$</span>, the roots are on a regular triangle.<br>
For <span class="math-container">$n=2$</span>, the roots are on a line segment.</p>
<p>I truly believe that once you've shown this in a graphical manner, they won't forget it anymore.</p>
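<p>The polygon picture can be checked numerically; here is an illustrative helper (the function name is mine) that computes the <span class="math-container">$n$</span> complex solutions of <span class="math-container">$x^n = a$</span>, which all lie on a circle of radius <span class="math-container">$|a|^{1/n}$</span> at the vertices of a regular <span class="math-container">$n$</span>-gon:</p>

```python
import cmath

def nth_roots(a, n):
    # modulus |a|**(1/n), arguments (arg(a) + 2*pi*k)/n for k = 0..n-1
    r = abs(a) ** (1.0 / n)
    theta = cmath.phase(a)
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

quartic = nth_roots(16, 4)   # vertices of a square: 2, 2i, -2, -2i
assert all(abs(z ** 4 - 16) < 1e-9 for z in quartic)
assert sum(abs(z.imag) < 1e-9 for z in quartic) == 2   # two real roots
cubic = nth_roots(8, 3)      # vertices of an equilateral triangle
assert sum(abs(z.imag) < 1e-9 for z in cubic) == 1     # one real root
```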
|
197,667 | <p>I have a table with 10 columns, obtained as import from file. I would like to use 4 columns as <code>{x, y, z, u}</code> params for <code>BubbleChart3d</code>, and an additional column to color the points. I'm unable to make this work. The code I have is like this:</p>
<p><code>t</code> is a table from a file import. I am using columns 1, 2, 9, and 10 for the xyz-coordinates and the size of the bubble, and I am trying to use column 7 for the color of the bubble.</p>
<pre><code>BubbleChart3D[Map[Function[r, {r[[1]], r[[2]], r[[9]], r[[10]]}]][t],
ColorFunction -> Map[Function [c, RGBColor[1, 0, c[[7]]]]][t]]
</code></pre>
<p>This doesn't work: I get a bubble plot in which every bubble has the same color, even though the 7th column does have differing values. What am I doing wrong?</p>
<p>Example of table t:</p>
<pre><code>{{1, -1, 1, 0, 15, 0, 82899, 177, 1, 0}, {1, -1, 1, 0, 15, 0, 10231991, 177, 1, 0},
{1, 0, 1, 0, 15, 0, 4633, 177, 1, 0}, {2, -1, 2, 0, 0, 0, 10231991, 204, 1, 1},
{2, 1, 2, 0, 0, 0, 0, 204, 0, 1}, {4, 3, 4, 4, 354, 1, 2, 13, 0, 2},
{4, -1, 4, 4, 354, 1, 10231991, 13, 1, 2}, {4, 0, 4, 4, 354, 1, 4633, 13, 1, 2},
{4, 4, 4,4, 354, 1, 0, 13, 0, 2}, {5, 5, 5, 0, 0, 0, 0, 64, 0, 2},
{5, -1, 5, 0, 0, 0, 82899, 64, 1, 2}, {5, -1, 5, 0, 0, 0, 10231991, 64, 1, 2},
{5, 0, 5, 0, 0, 0, 4633, 64, 1, 2}, {5, 6, 5, 0, 0, 0, 0, 64, 0,2},
{6, 7, 6, 2, 0, 0, 0, 519, 0, 3}, {6, 8, 6, 2, 0, 0, 0, 519, 0,3},
{6, -1, 6, 2, 0, 0, 10231991, 519, 1, 3}, {6, 9, 6, 2, 0, 0, 0, 519, 0, 3},
{6, -1, 6, 2, 0, 0, 82899, 519, 1, 3}, {7, -1, 7, 0, 0, 0, 10231991, 0, 1, 0}}
</code></pre>
| Michael E2 | 4,999 | <blockquote>
<p>Does this mean I can be 100% confident that the expression <span class="math-container">$f$</span>
does not converge at <span class="math-container">$x=1/a$</span>?</p>
</blockquote>
<p><strong><em>No:</em></strong></p>
<pre><code>f = ((E + x)^2 - E^2 - x^2 - 2 E x)/(a x - 1);
f /. x -> 1/a
(* ComplexInfinity -- message omitted *)
Limit[f, x -> 1/a]
(* 0 *)
</code></pre>
<p>In fact, this <span class="math-container">$f$</span> is zero:</p>
<pre><code>Simplify[f]
(* 0 *)
</code></pre>
|
1,574,290 | <p>How do I prove this? </p>
<p>For the Fibonacci numbers defined by $f_1=1$, $f_2=1$, and $f_n = f_{n-1} + f_{n-2}$ for $n ≥ 3$, prove that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$ for all $n≥ 1$.</p>
| Alexander Heyes | 286,131 | <p>Subscripts and superscripts are a real nuisance in maths - mathematicians tend to privilege intense efficiency and minimalism of notation over readability - it makes sense in the long run, but for a learner it is quite confusing. </p>
<p>Try reading any subscript as "the first $f$, the second $f$" etc., and any superscript as you normally would; squared, cubed etc.</p>
<p>In plain english, the question asks:</p>
<p>"Given a sequence where the first number is $1$, the second is $1$, and after that, each number is equal to the sum of the previous two numbers (so $f_3=1+1=2$), then prove that for the $n$th number in the sequence, if you take the square of the next number, subtract the product of that next number and the $n$th number, then subtract the square of the $n$th number, you will get either plus or minus $1$, depending on whether $n$ was even or odd."</p>
<p>Have a read of that paragraph, try and understand where I got each translation from, then read on!</p>
<p>To prove this by induction, we follow the same basic plan:</p>
<p>1) Base case. Is it true for $n=1$? Well, $$f_2^2-f_2f_1-f_1^2=1^2-1\cdot1-1^2=-1=(-1)^1$$ So yes, it is true.</p>
<p>2) Hypothesis. You should always state your hypothesis clearly, as it helps you and your reader/marker see exactly what you needed to get to the conclusion. Here, you say "Suppose $f_{n+1}^2-f_{n+1}f_n-f_n^2=(-1)^n$ for $n=k$"; that is, assume the equation works for some number $k$.</p>
<p>3) Now show, using your assumption, that it must hold for $k+1$. That is, we want to show that $f_{k+2}^2-f_{k+2}f_{k+1}-f_{k+1}^2=(-1)^{k+1}$. We can do this by relating $f_{k+2}$ to $f_{k+1}$ and $f_k$ using the rule by which we constructed the sequence in the first place. As other answers have shown, you can then take the equation you want to prove, rearrange, substitute the construction rule, and apply the hypothesis to conclude it really does work for $n=k+1$. </p>
<p>4) Conclude! Since it holds for $n=1$, and given it holds for any number $n=k$, it will also hold for $n=k+1$, it must hold for all natural numbers. The End.</p>
|
2,138,963 | <p>finding $\displaystyle \int^{\pi}_{-\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg]dx$</p>
<p>Attempt:</p>
<p>\begin{align}
& \int^{\frac{\pi}{3}}_{-\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg] \, dx \\[10pt]
+ {} & \int^\pi_{\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg] \, dx
\end{align}</p>
<p>where we split the integral because $\displaystyle \cos x- \frac 1 2 =0$ at $\displaystyle x= \frac \pi 3$.</p>
<p>I wasn't able to go any further; could someone help me?</p>
| Claude Leibovici | 82,404 | <p><em>Assuming that the solution requires numerical integration.</em></p>
<p>Considering $$f(x)=\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)$$ let us compute $$I_1(k)=\int_{-\frac \pi 3}^{\frac \pi 3-10^{-k}}f(x)\,dx\qquad I_2(k)=\int_{\frac \pi 3+10^{-k}}^\pi f(x)\,dx\qquad I(k)=I_1(k)+I_2(k)$$ As a function of $k$, the following results would be obtained
$$\left(
\begin{array}{cccc}
k & I_1(k) & I_2(k) & I(k) \\
1 & 3.633155522 & -3.703785769 & -0.07063024720 \\
2 & 3.778692433 & -3.849483982 & -0.07079154961 \\
3 & 3.792872380 & -3.863664096 & -0.07079171605 \\
4 & 3.794286526 & -3.865078242 & -0.07079171622 \\
5 & 3.794427902 & -3.865219618 & -0.07079171622 \\
6 & 3.794442039 & -3.865233755 & -0.07079171622 \\
7 & 3.794443453 & -3.865235169 & -0.07079171622
\end{array}
\right)$$</p>
|
2,423,086 | <p>Show, by listing the elements or how we can list the elements, that </p>
<p>$\mathbb{N}^3 = \mathbb{N}\times\mathbb{N}\times\mathbb{N}$ is a countable set.</p>
<p><strong>Attempt at a solution:</strong></p>
<p>I was thinking about making use of Cantor's zigzag enumeration (often loosely called the diagonal argument). If this were $\mathbb{N}\times\mathbb{N}$ I could just do:</p>
<p>(1,1), (1,2), (1,3),...</p>
<p>(2,1), (2,2), (2,3)...</p>
<p>(3,1), (3,2), (3, 3)...</p>
<p>and snake my way through it.</p>
<p>However, I am having trouble finding a way to list $\mathbb{N}\times\mathbb{N}\times\mathbb{N}$ so that I can implement the same zigzag enumeration.</p>
| Eric Towers | 123,905 | <p>To go your way, $(x,y,z) \in \mathbb{N} \times \mathbb{N} \times \mathbb{N}$ with $x+y+z = c$ is a finite set. Viewed in $\mathbb{R}^3$ it is a triangle. So explain how to snake across each of these triangles (which will look remarkably like what you are doing in $\mathbb{N}^2$).</p>
<p>Alternatively, let $x = \cdots 0 x_{N(x)}\cdots x_2 x_1 x_0$ be $x$'s base 10 digital representation (preceded by an infinite number of zeroes so we may avoid an astounding amount of fussy notation) and similarly for $y$ and $z$. Then consider the number $\cdots 0 \cdots x_2 y_2 z_2 x_1 y_1 z_1 x_0 y_0 z_0$ formed by interleaving the digits of $x$, $y$, and $z$...</p>
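<p>The "triangle" enumeration from the first paragraph can be sketched in Python: list the triples level by level, where level $c$ holds the finitely many triples with $x+y+z=c$ (the snaking order within each triangle is an arbitrary choice here):</p>

```python
# Enumerate N^3 by levels x + y + z = c; each level is finite, so every
# triple is reached after finitely many steps.
from itertools import count, islice

def triples():
    for c in count(0):
        for x in range(c + 1):
            for y in range(c - x + 1):
                yield (x, y, c - x - y)

print(list(islice(triples(), 10)))
```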
|
11,652 | <p>Last August, so-called "review audits" <a href="https://math.meta.stackexchange.com/q/10740/43351">were introduced</a> on MSE.</p>
<p>These are fake review samples added to the review queues, designed to catch those users who (in a dash to get a badge) practice mindless button mashing instead of careful consideration while reviewing.</p>
<p>Prior to the introduction, people were enthusiastic -- me included.</p>
<hr>
<p>However, in three months, we have generated:</p>
<ul>
<li><a href="https://math.meta.stackexchange.com/q/10836/43351">STOP! Look and Listen.</a> (+58)</li>
<li><a href="https://math.meta.stackexchange.com/q/10924/43351">Examples of poor review audits</a> (+26)</li>
<li><a href="https://math.meta.stackexchange.com/q/11641/43351">Audits: this is getting ridiculous.</a> (+27)</li>
<li><a href="https://math.meta.stackexchange.com/q/11307/43351">Try to comment: Fail review audit</a> (+13)</li>
</ul>
<p>in which threads, more and more people start stating their resentment of the whole auditing thing, with wordings like <a href="https://math.meta.stackexchange.com/a/11322/43351">Vedran Šego's</a>:</p>
<blockquote>
<p>The first time I got an audit, I thought it was a good idea. Well, not anymore.</p>
</blockquote>
<hr>
<p>It is time to evaluate. Some valid concerns have been raised, but not addressed by the SE developers. A tour around Meta.StackOverflow reveals a similar picture. The developers are unwilling to invest the time to do something about at least the most-heard complaints about the audit system.</p>
<p>However, the current implementation has a <strong>far too high</strong> "false negative" statistic: Too many conscientious reviewers are being shouted at by the audit system for trying to be helpful to their best judgement.</p>
<blockquote>
<p>Therefore, pending improvements to the audit system to reduce the "false negatives", I want the audits to be disabled on Math.StackExchange. Do you agree?</p>
</blockquote>
<p>Please find two polling answers below -- you can use an upvote to indicate your preference.</p>
<hr>
<p>Seeing as (to the best of my knowledge) the audit system is standardised SE-network-wide, the relevant feature requests for improving the audit system should be filed at <a href="http://meta.stackoverflow.com">Meta.StackOverflow</a>.</p>
<hr>
<p>The vote tallies have more or less crystallised. Disregarding downvotes, and taking into account my own, the final result is:</p>
<blockquote>
<ul>
<li>Disable audits: 24 votes</li>
<li>Keep audits: 15 votes</li>
</ul>
</blockquote>
<p>This falls short (and has consistently fallen short throughout the poll) of the "qualified majority" (two thirds) I had in mind for requesting the termination of the audits. That means the audits remain, if I am to decide.</p>
| Lord_Farin | 43,351 | <p>Yes.</p>
<p>Please disable the review audits.</p>
|
1,489,078 | <p>Let $P(S)$ denote the power set of a set $S$. Which of the following is always true?</p>
<ol>
<li>$P(P(S)) = P(S)$</li>
<li>$P(S) ∩ P(P(S)) = \{ Ø \}$</li>
<li>$P(S) ∩ S = P(S)$</li>
<li>$S ∉ P(S)$</li>
</ol>
<hr>
<p>I will try to explain:
if $S$ is the set $\{x, y, z\}$, then the subsets of $S$ are</p>
<p>$\emptyset$ (the empty set, also denoted $\{\}$ or $\phi$),
$\{x\}$, $\{y\}$, $\{z\}$, $\{x, y\}$, $\{x, z\}$, $\{y, z\}$, and $\{x, y, z\}$,</p>
<p>and hence the power set of $S$ is</p>
<p>$P(S) =$ $\{\{\}, \{x\}, \{y\}, \{z\}, \{x, y\}, \{x, z\}, \{y, z\}, \{x, y, z\}\}$.</p>
<p>Similarly,</p>
<p>$P(P(S))=\{\{\{\}\}, \{\{x\}\}, \{\{y\}\}, \{\{z\}\}, \{\{x, y\}\}, \{\{x, z\}\}, \{\{y, z\}\}, \{\{x, y, z\}\}, \ldots\}$</p>
<p>Therefore,</p>
<p>$P(S) ∩ P(P(S)) = \{ Ø \}$</p>
<p>Note that $\{ Ø \}$ is always an element of the power set of a set, and also $\{ Ø \}$ is a subset of a set; in other words, every subset of a set is an element of its power set.</p>
<hr>
<blockquote>
<p>My question is: are $\{\}$ and $\{\{\}\}$ the same element?</p>
</blockquote>
| layman | 131,740 | <p>No, $\{\}$ is the empty set, and $\{ \{ \} \}$ is the set <em>containing</em> the empty set. They aren't the same -- $\{\}$ has <strong>no elements</strong>, and $\{ \{ \} \}$ has <strong>one element</strong>.</p>
<p>By the way, in $P(P(S))$, you forgot to add the element $\{ \}$, since the empty set is a subset of every set.</p>
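<p>A small illustration with Python frozensets, where <code>frozenset()</code> plays the role of $\{\}$: for $S=\{x,y,z\}$ the two power sets share only the empty set, and $\{\}$ is not the same object as $\{\{\}\}$.</p>

```python
# Build P(S) and P(P(S)) as sets of frozensets and compare them.
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

S = {'x', 'y', 'z'}
PS = powerset(S)
PPS = powerset(PS)

empty = frozenset()
print(len(PS), len(PPS))              # 8 256
print(empty == frozenset([empty]))    # False: {} is not {{}}
print(PS & PPS == {empty})            # True: only the empty set is shared
```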
|
317,666 | <p>What is the difference between $\mathbb{H}$ and $Q_8$? Both are called <em>quaternions.</em></p>
| Chris Brooks | 7,676 | <p>A good analogy here is the difference between the complex numbers $\mathbb{C}$ and the cyclic group with 4 elements, which can be realized as the group $\{1, i, -1, -i\}\subset\mathbb{C}$ with complex multiplication.</p>
<p>$\mathbb{C}$ is a 2-dimensional vector space over $\mathbb{R}$, and there is a group inside it denoted $\mathbb{Z}/4\mathbb{Z}=\{\pm 1, \pm i\}$ with complex multiplication. This is the cyclic group with four elements.</p>
<p>Analogously, $\mathbb{H}$ is a 4-dimensional vector space over $\mathbb{R}$, and there is a group called $Q_8$ inside of it, namely the standard basis (with negatives) $\{\pm 1, \pm i, \pm j, \pm k\}\subset\mathbb{H}$ where the operation is quaternion multiplication.</p>
<p>tl;dr -- $Q_8$ sits inside $\mathbb{H}$. The first is known as <em>the quaternion group</em>, and the second thing is <em>the quaternions</em>.</p>
|
102,932 | <p>This is a naive question, but I hope that the answers will be educational. When is it the case that a finitely presented group $G$ admits a faithful $2$-dimensional complex representation, i.e. an embedding into $\text{GL}_2(\mathbb{C})$? (I am mostly interested in sufficient conditions.)</p>
<p>I think I can figure out the finite groups with this property (they can be conjugated into $\text{U}(2)$ and taking determinants reduces to the classification of finite subgroups of $\text{SU}(2)$ and an extension problem) as well as the f.g. abelian groups with this property (there can't be too much torsion). But already I don't know what finitely presented groups appear as, say, congruence subgroups of $\text{GL}_2(\mathcal{O}_K)$ for $K$ a number field. </p>
<p>What can be said if you are given, say, a nice space $X$ with fundamental group $G$? I hear that in this case linear representations of $G$ are related to vector bundles on $X$ with flat connection. </p>
| HJRW | 1,463 | <p>This is a very nice question which, as Agol says, is probably out of reach at the moment. To say that there is a classification of the fp subgroups of $GL_2(\mathbb{C})$ would be to say that the set of presentations of those subgroups is recursively enumerable and has solvable Isomorphism Problem. It's hard to guess which way to jump for either of these properties: the Isomorphism Problem is known to be unsolvable for finitely presented subgroups of $GL_N(\mathbb{Z})$ for some large $N$, by work of Bridson--Miller and Haglund--Wise.</p>
<p>There is one nice, tangential but positive, result that I know of. Oddly enough, it says that we have some sort of limited decision-theoretic understanding of which groups are <em>not</em> subgroups of $GL_2(\mathbb{C})$. Apparently it was known to Mal'cev; Daniel Groves and I re-discovered it for ourselves, but then found it in Lubotzky and Segal's book.</p>
<p>It's well known that most nice (more precisely, Markov) classes of finitely presented groups aren't recursively recognizable, by the Adian--Rabin Theorem. This applies to subgroups of $GL_2(\mathbb{C})$, essentially because the word problem is solvable, as Igor Belegradek pointed out above. However, in nice cases, it turns out that the word problem is the only obstruction.</p>
<p>More precisely, Groves, Manning and I call a class $\mathcal{C}$ of fp groups <em>recursive modulo the word problem</em> if $\mathcal{C}\cap\mathcal{D}$ can be recursively recognized in any class of groups $\mathcal{D}$ in which the word problem is uniformly solvable. This holds for the trivial group, abelian groups, nilpotent groups of class at most $k$ for fixed $k$, free groups, Sela's limit groups, surface groups, 3-manifold groups...</p>
<p>There is some hope that this could hold for subgroups of $GL_2(\mathbb{C})$. As I said above, it's unknown whether the class of these subgroups is recursively enumerable. However, it is true that the complement of this class is recursively enumerable modulo the word problem. That is, the following holds:</p>
<p><strong>Theorem:</strong> Let $\mathcal{L}_n$ be the set of all finite presentations of subgroups of $GL_n(\mathbb{C})$. There is a Turing machine that determines in finite time if a presentation $P\notin\mathcal{L}_n$, using a solution to the word problem in $P$.</p>
<p>The key point is that a group is in $\mathcal{L}_n$ if and only if it is fully residually $\mathcal{L}_n$, and this latter property can be reduced to the decidability of the elementary theory of $\mathbb{C}$ (Tarski). </p>
<p>===========</p>
<p><strong>Added details:</strong></p>
<p>As far as I know, the theorem above isn't written down anywhere. The proof I had in mind is essentially an application of the following theorem.</p>
<p><strong>Theorem (Mal'cev):</strong> A finitely generated group $G$ is a subgroup of a group $GL_n(\mathbb{C})$ if and only if $G$ is fully residually $GL_n(\mathbb{C})$; that is, for any finite subset $X\subseteq G\smallsetminus 1$ there is a homomorphism $f:G\to GL_n(\mathbb{C})$ with $1\notin f(X)$.</p>
<p>This is Theorem 16.4.1 in Lubotzky and Segal's book <em>Subgroup growth</em>.</p>
<p><em>Proof of the first theorem.</em> Let $G$ be the group presented by $P$. Using the word problem, enumerate balls $B(n)$ in $G$. If $G$ is not embeddable in $GL_n(\mathbb{C})$ then, by Mal'cev's theorem, for some $n$, every homomorphism $f:G\to GL_n(\mathbb{C})$ kills a non-trivial element of $B(n)$. This last statement can be rephrased as a system of equations and inequations over $\mathbb{C}$ with integral coefficients, and hence solved. <em>QED</em></p>
|
102,932 |
| YCor | 14,094 | <p>For a finitely generated group $G$, a necessary and sufficient condition to be embeddable into $\text{GL}_2(\mathbf{C})$ is the following strengthening of being residually finite: for every finite subset $F$ of $G$ and any $m$ there exists a finite field $K$ of characteristic $>m$ and a homomorphism from $G$ to $\text{GL}_2(K)$ that is injective on $F$.</p>
<p>Indeed, if this condition is satisfied, write $G=\bigcup F_n$ (increasing union) and consider such homomorphisms $G\to\text{GL}_2(K_n)$ with $K_n$ of characteristic $>n$. If $K$ is an ultraproduct of the $K_n$ then it is isomorphic to a subfield of $\mathbf{C}$ and $G\to\text{GL}_2(K)$ is an injective homomorphism. For the converse, we have to know that for every finitely generated domain of characteristic zero $R$ the intersection of maximal ideals is reduced to 0 (it's indeed a Jacobson ring). It follows that $\text{GL}_2(R)$ satisfies the above property. Indeed, let $F$ be a finite subset. For every non-identity element $b$ in $FF^{-1}$ pick a nonzero entry of $b-1$ and let $x$ be the product of all these nonzero entries. Take a maximal ideal $M$ not containing $m!x$. Then $R/M$ is a finite field of characteristic $>m$ in which $x$ is nonzero, so the homomorphism $\text{GL}_2(R)\to\text{GL}_2(R/M)$ is injective on $F$.</p>
<p>If you remove the condition "characteristic $>m$", you similarly characterize being isomorphic to a subgroup of $\text{GL}$ over an arbitrary field $K$. Also this works the same with $\text{GL}_k$ and $\text{SL}_k$, etc. </p>
|
411,549 | <p>Here is an modular equation</p>
<p>$$5x \equiv 6 \bmod 4$$</p>
<p>And I can solve it, $x = 2$.</p>
<p>But what if each side of the above equation times <strong>8</strong>, which looks like this</p>
<p>$$40x \equiv 48 \bmod 4$$</p>
<p>Apparently now, $x = 0$. Why is that? Am I not solving the modular equation in the right way, or should I divide both sides by their greatest common divisor before solving it?</p>
<p>P.S.</p>
<p>To clarify, I was solving a system of modular equations, using <strong>Gaussian Elimination</strong>, and after applying the elimination on the coefficient matrix, the last row of the echelon-form matrix is :</p>
<p>$$0, \dots, 40 | 48$$</p>
<p>but I think each row in the echelon-form should have been divided by its greatest common divisor, that turns it into :</p>
<p>$$0, \dots, 5 | 6$$</p>
<p>But apparently they result in different solutions: one gives $x = 0,1,2,3,\ldots$, the other $x = 2$. Why is that? Am I applying <strong>Gaussian elimination</strong> wrong?</p>
| Key Ideas | 78,535 | <p>Hint: mod $\,4\!:\,\ 40\equiv 0\equiv 48,\ $ so $\,\ 40\cdot x\equiv 48\iff 0\cdot x\equiv 0,\ $ true for all $\,x.$</p>
<p>Generally, scaling an equation by a noninvertible factor may increase the solution set.</p>
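<p>A quick brute-force check (a sketch added for illustration, not part of the original hint; the helper name is arbitrary) makes this concrete:</p>

```python
def solutions(a, b, m):
    """Brute-force the residues x in {0, ..., m-1} with a*x ≡ b (mod m)."""
    return [x for x in range(m) if (a * x - b) % m == 0]

print(solutions(5, 6, 4))    # [2]           -> unique solution mod 4
print(solutions(40, 48, 4))  # [0, 1, 2, 3]  -> every residue works
```

<p>Scaling a row is only safe when the factor is invertible mod $m$; multiplying by $8$ here collapses the equation to $0\equiv 0$.</p>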
|
702,506 | <p>We have an exam in $3$ hours, and I need help with solving trigonometric equations like this on a given interval.</p>
<p>How to solve</p>
<p>$$\sin x - \cos x = -1$$</p>
<p>for the interval $(0, 2\pi)$.</p>
| Michael Hoppe | 93,935 | <p>Since $\cos(\pi/4)=\sin(\pi/4)=\sqrt{2}/2$, multiplying the equation by $\sqrt{2}/2$ and using the angle sum formula gives
$$\sin(x)\sin(\pi/4)-\cos(x)\cos(\pi/4)=-\sqrt{2}/2\iff\cos(x+\pi/4)=\sqrt{2}/2$$
$$\iff
x+\pi/4=\pi/4\lor x+\pi/4=7\pi/4\iff x=3\pi/2$$
as $x\in(0,2\pi)$.</p>
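<p>A numerical sanity check (my own sketch, not part of the answer) confirms $x=3\pi/2$ and shows why the other formal solutions are excluded by the open interval:</p>

```python
import math

x = 3 * math.pi / 2                  # the claimed solution
print(math.sin(x) - math.cos(x))     # -1.0, up to floating-point rounding

# x = 0 and x = 2*pi also satisfy sin x - cos x = -1,
# but both lie outside the open interval (0, 2*pi)
for excluded in (0.0, 2 * math.pi):
    assert math.isclose(math.sin(excluded) - math.cos(excluded), -1.0)
```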
|
<p>I feel like an idiot for asking this, but I can't get my formula to work with negative numbers.</p>
<p>Assume you want to know the percentage increase/decrease between two numbers:</p>
<pre><code>2.39 1.79 =100-(1.79/2.39*100)=> which is 25.1% decrease
</code></pre>
<p>but how would i change this formula when there are some negative numbers?</p>
<pre><code>6.11 -3.73 =100-(-3.73/6.11*100) which is 161% but should be -161%
</code></pre>
<p>The negative sign is lost. What am I missing here?</p>
<p>also</p>
<pre><code>-2.1 0.6 =100-(-3.73/6.11*100) which is 128.6% ??? is it?
</code></pre>
| ismael | 55,205 | <h2>Conventional Formula</h2>
<p>The conventional formula for computing the relative growth between two values <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is <span class="math-container">$\displaystyle \frac{b-a}{a}$</span>.</p>
<p>For example, if <span class="math-container">$a = 50$</span> and <span class="math-container">$b = 60$</span>, the relative growth is <span class="math-container">$\displaystyle \frac{60-50}{50} = 0.2 = 20\%$</span>.</p>
<p>So far, so good. But what if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> have different signs? For example, <span class="math-container">$a = -10$</span> and <span class="math-container">$b = 20$</span>?</p>
<p>The conventional formula would return a <em>negative</em> growth of <span class="math-container">$-300\%$</span>, which does not make much sense.</p>
<h2>Adjusted Formula</h2>
<p>Believe it or not, there is no universally-accepted formula for the relative growth of signed values.</p>
<p>The one used by most statisticians is <span class="math-container">$\displaystyle \frac{b-a}{|a|}$</span>, where <span class="math-container">$|a|$</span> is the absolute value of <span class="math-container">$a$</span> (<span class="math-container">$-a$</span> when <span class="math-container">$a$</span> is negative).</p>
<p>This formula is perfectly valid, but usually leads to counter-intuitive results. For example, the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$20$</span> is <span class="math-container">$300\%$</span>, and the relative growth between <span class="math-container">$-20$</span> and <span class="math-container">$20$</span> is <span class="math-container">$200\%$</span>. Both pairs of values end at the same exact value (<span class="math-container">$20$</span>), yet the absolute growth for the first pair (<span class="math-container">$30$</span>) is lower than the absolute growth for the second (<span class="math-container">$40$</span>), while the relative growth is greater for the first than the second. How can that be?</p>
<h2>Interpretation</h2>
<p>To better understand what is going on, it usually helps to think of the relative growth between two values of different signs as being composed of two separate parts: the relative growth from the first value to zero, plus the relative growth from zero to the second value. For example, the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$20$</span> is equal to the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$0$</span>, plus the relative growth between <span class="math-container">$0$</span> and <span class="math-container">$20$</span>.</p>
<p>The relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$0$</span> is <span class="math-container">$\displaystyle \frac{0-(-10)}{|-10|} = \frac{10}{10} = 1 = 100\%$</span> according to our previous formula.</p>
<p>In fact, the relative growth between <em>any</em> negative value and <span class="math-container">$0$</span> is always equal to <span class="math-container">$100\%$</span>, which actually makes sense when one comes to think about it. But things become a bit more tricky when we need to compute the relative growth between <span class="math-container">$0$</span> and a positive value, like <span class="math-container">$20$</span> in our previous example. There, we cannot simply use our formula, because it would lead to a division by zero. Instead, we have to compute this relative growth in relation to the previous one, by computing the <em>ratio</em> of two absolute growths.</p>
<p>For example, when going from <span class="math-container">$-10$</span> to <span class="math-container">$20$</span>, we gain <span class="math-container">$10$</span> in absolute terms between <span class="math-container">$-10$</span> and <span class="math-container">$0$</span>, then <span class="math-container">$20$</span> in absolute terms again between <span class="math-container">$0$</span> and <span class="math-container">$20$</span>. Therefore, we gained twice as much going from <span class="math-container">$0$</span> to <span class="math-container">$20$</span> as we did when going from <span class="math-container">$-10$</span> to <span class="math-container">$0$</span>. And as we have seen earlier, the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$0$</span> is <span class="math-container">$100\%$</span>, therefore the relative growth between <span class="math-container">$0$</span> and <span class="math-container">$20$</span> should be twice that amount, or <span class="math-container">$200\%$</span>, and the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$20$</span> should be the sum of our two relative growths, or <span class="math-container">$100\% + 200\% = 300\%$</span>, which matches what our adjusted formula gave us in the first place. In other words, we found two ways of getting to the same result, but this should not come as a surprise if you look at the original equation.</p>
<p>Indeed, if <span class="math-container">$a$</span> is negative, we can rewrite <span class="math-container">$\displaystyle \frac{b-a}{|a|}$</span> as <span class="math-container">$\displaystyle \frac{b-a}{-a}$</span>. Then, we can split the fraction as <span class="math-container">$\displaystyle \frac{b}{-a} + \frac{-a}{-a}$</span> and further simplify it into <span class="math-container">$\displaystyle \frac{b}{-a} + 1$</span>, or even <span class="math-container">$\displaystyle 1 + \frac{b}{-a}$</span>. This addition has two addends, <span class="math-container">$1$</span> and <span class="math-container">$\displaystyle \frac{b}{-a}$</span>. The former is the relative growth from the first value to zero (always equal to <span class="math-container">$1$</span>), and the latter is the ratio of the absolute growth between the first value and <span class="math-container">$0$</span> and the absolute growth between <span class="math-container">$0$</span> and the second value, which we can write as <span class="math-container">$\displaystyle \frac{b - 0}{0 -a}$</span>.</p>
<p>When we look at the formula through that angle, we realize that a relative growth between two values of different signs is only affected by the ratio between the two values <span class="math-container">$\displaystyle \frac{b}{-a}$</span>, unlike the relative growth between two values of the same sign, which is the ratio between the difference of the two values and the first value <span class="math-container">$\displaystyle \frac{b-a}{a}$</span>. But this is where the adjusted formula becomes misleading, because we should <em>not</em> look at the former as a ratio between two values. Instead, we should look at it as a ratio between two absolute growths <span class="math-container">$\displaystyle \frac{b - 0}{0 -a}$</span>.</p>
<p>Make no mistake though: this is not a trivial arithmetic rewriting. Instead, by writing <span class="math-container">$\displaystyle \frac{b}{-a}$</span> as <span class="math-container">$\displaystyle \frac{b - 0}{0 -a}$</span>, we clearly communicate the fact that our fraction is a ratio between two absolute growths, instead of a ratio between two values. And this is what explains that the relative growth between <span class="math-container">$-10$</span> and <span class="math-container">$20$</span> (<span class="math-container">$300\%$</span>) is greater than the relative growth between <span class="math-container">$-20$</span> and <span class="math-container">$20$</span> (<span class="math-container">$200\%$</span>).</p>
<p>As explained earlier, to grow from <span class="math-container">$-10$</span> to <span class="math-container">$20$</span>, you first need to grow to from <span class="math-container">$-10$</span> to <span class="math-container">$0$</span>, then grow from <span class="math-container">$0$</span> to <span class="math-container">$20$</span>, and the first growth component to <span class="math-container">$0$</span> is always equal to <span class="math-container">$100\%$</span>. But when you grow from <span class="math-container">$0$</span> to <span class="math-container">$20$</span>, while this absolute growth (<span class="math-container">$20$</span>) is the same whether you started from <span class="math-container">$-10$</span> or <span class="math-container">$-20$</span>, the relative growth is twice as much if you started from <span class="math-container">$-10$</span> than if you started from <span class="math-container">$-20$</span>.</p>
<p>In other words, what the relative growth of values of different signs really measures is the relative growth <em>after</em> you have crossed the zero line, because the relative growth <em>before</em> you cross the zero line is always the same (<span class="math-container">$100\%$</span>).</p>
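<p>The adjusted formula is easy to put into code. The following sketch (my own illustration; the function name is arbitrary) reproduces the worked values from above:</p>

```python
def relative_growth(a, b):
    """Adjusted relative growth (b - a) / |a|, defined for a != 0."""
    if a == 0:
        raise ValueError("relative growth from 0 is undefined")
    return (b - a) / abs(a)

print(relative_growth(50, 60))   # 0.2  ->  20%
print(relative_growth(-10, 20))  # 3.0  -> 300%
print(relative_growth(-20, 20))  # 2.0  -> 200%
```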
|
2,721,372 | <p>A sequence is defined by $a_1=2$ and $a_n=3a_{n-1}+1$. Find the sum $a_1+a_2+\cdots+a_n$.</p>
<p>How do I find the sum, given $a_1=2,a_2=7,\ldots$?</p>
<p>Also, I found the value $a_n=\frac{5}{6}\cdot3^n-\frac{1}{2}$.</p>
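<p>A short numerical check of this closed form against the recurrence (a sketch added for verification; the function name is arbitrary):</p>

```python
def a_closed(n):
    """Proposed closed form a_n = (5/6) * 3^n - 1/2."""
    return 5 * 3**n / 6 - 0.5

a = 2
for n in range(1, 10):
    assert a == a_closed(n)    # matches the recurrence term by term
    a = 3 * a + 1

# the requested sum a_1 + ... + a_n, e.g. for n = 5
print(sum(a_closed(k) for k in range(1, 6)))   # 300.0
```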
| Mauro ALLEGRANZA | 108,274 | <p>The proof relies on <a href="https://en.wikipedia.org/wiki/Principle_of_explosion#Proof" rel="nofollow noreferrer">Ex falso</a> :</p>
<blockquote>
<p>$\vdash \lnot P \to (P \to Q)$.</p>
</blockquote>
<p>We have to apply it in the form :</p>
<blockquote>
<p>$\lnot (x \in \emptyset) \to (x \in \emptyset \to x \in B)$.</p>
</blockquote>
<p>We have (axiom or theorem) : $\mathsf {ZF} \vdash \forall x \ \lnot (x \in \emptyset)$.</p>
<p>By <a href="https://en.wikipedia.org/wiki/Universal_instantiation" rel="nofollow noreferrer">Universal instantiation</a> we get : $\lnot (x \in \emptyset)$ and thus from <em>Ex falso</em>, by <em>Modus Ponens</em> : $(x \in \emptyset \to x \in B)$.</p>
<p>Finally, by <a href="https://en.wikipedia.org/wiki/Universal_generalization" rel="nofollow noreferrer">Universal generalization</a> we conclude with :</p>
<blockquote>
<p>$\mathsf {ZF} \vdash \forall x \ (x \in \emptyset \to x \in B)$.</p>
</blockquote>
|
2,380,880 | <p>The following question was asked in an exam:<br>
Consider the problem:<br>
Maximize $2y_1+3y_2+5y_3+4y_4$<br>
subject to<br>
$y_1+y_2\leq 1,$ $y_2+y_3\leq 1,$<br>
$y_4+y_1\leq 1,$ $y_3+y_4\leq 1$ and $y_i\geq 0$ for i=1,2,3,4.<br>
Then the optimum value is<br>
1. equal to 8<br>
2. between 8 and 9<br>
3. greater than or equal to 7<br>
4. less than or equal to 7<br>
I solved the above problem by the usual simplex method and got the optimal value to be 7. Now, I am just curious to know if there's any other simpler and faster method to solve such problems in competitive exams where time is a constraint.<br>
Thank you in advance!</p>
| farruhota | 425,072 | <p>The answer is option $4$ (less than or equal to $7$).</p>
<p>Express it as:
$$2y_1+3y_2+5y_3+4y_4=2(\underbrace{y_1+y_2}_{\le 1})+(\underbrace{y_2+y_3}_{\le 1})+4(\underbrace{y_3+y_4}_{\le 1})\le7.$$
Equality occurs when $(y_1,y_2,y_3,y_4)=(1,0,1,0); (\frac12,\frac12,\frac12,\frac12);(0,1,0,1); etc.$</p>
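<p>As a crude cross-check (a sketch, not a substitute for the simplex method): the equality cases above have coordinates in $\{0,\frac12,1\}$, so an exhaustive scan of that small grid already attains the bound $7$:</p>

```python
from itertools import product

feasible = (
    (y1, y2, y3, y4)
    for y1, y2, y3, y4 in product((0.0, 0.5, 1.0), repeat=4)
    if y1 + y2 <= 1 and y2 + y3 <= 1 and y4 + y1 <= 1 and y3 + y4 <= 1
)
best = max(2*y1 + 3*y2 + 5*y3 + 4*y4 for y1, y2, y3, y4 in feasible)
print(best)   # 7.0
```

<p>The grid only demonstrates that $7$ is attained; the inequality above is what certifies that no feasible point exceeds it.</p>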
|
15,316 | <blockquote>
<p>What is the length $f(n)$ of the shortest nontrivial group word $w_n$ in $x_1,\ldots,x_n$ that collapses to $1$ when we substitute $x_i=1$ for any $i$?</p>
</blockquote>
<p>For example, $f(2)=4$, with the commutator $[x_1,x_2]=x_1 x_2 x_1^{-1} x_2^{-1}$ attaining the bound. </p>
<p>For any $m,n \ge 1$, the construction $w_{m+n}(\vec{x},\vec{y}):=[w_m(\vec{x}),w_n(\vec{y})]$ shows that $f(m+n) \le 2 f(m) + 2 f(n)$.</p>
<p>Is $f(1),f(2),\ldots$ the same as sequence <a href="http://oeis.org/A073121" rel="nofollow noreferrer" title="A073121">A073121</a>:
$$ 1,4,10,16,28,40,52,64,88,112,136,\ldots ?$$</p>
<p><strong>Motivation:</strong> Beating the iterated commutator construction would improve the best known bounds in <a href="https://mathoverflow.net/questions/15022/size-of-the-smallest-group-not-satisfying-an-identity/15065#15065">size of the smallest group not satisfying an identity</a>.</p>
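<p>For reference, the word lengths produced by the balanced iterated-commutator construction can be tabulated with a short script (a sketch; the recursion just applies $f(m+n) \le 2f(m) + 2f(n)$ with a balanced split), and they match the listed values:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def L(n):
    """Length of the balanced iterated-commutator word on n letters."""
    if n == 1:
        return 1
    m = n // 2
    return 2 * L(n - m) + 2 * L(m)

print([L(n) for n in range(1, 12)])
# [1, 4, 10, 16, 28, 40, 52, 64, 88, 112, 136]
```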
| Erik Demaine | 4,015 | <p>In an unpublished manuscript "Picture-Hanging Puzzles" by Erik D. Demaine, Martin L. Demaine, Yair N. Minsky, and Joseph S. B. Mitchell, we prove the $O(n^2)$ upper bound that comes from iterated commutator with a balanced split, same as sequence A073121. (Indeed, the manuscript cites that sequence.) We conjecture that there's an $\Omega(n^2)$ lower bound (and indeed that A073121 is exactly tight), but haven't proved it. If you come up with a proof, it might breathe some life into that manuscript and we could consider joining forces.</p>
|
4,107,656 | <p>Let <span class="math-container">$G$</span> be a finite abelian group with a neutral element <span class="math-container">$e$</span>. Prove that for any element <span class="math-container">$g$</span> of <span class="math-container">$G$</span>: <span class="math-container">$g^{|G|}=e$</span>. <span class="math-container">$|G|$</span> shall be the number of elements within <span class="math-container">$G$</span>.</p>
<p>Use and show that multiplication by <span class="math-container">$g$</span> defines a bijection on <span class="math-container">$G$</span>; therefore <span class="math-container">$\prod_{h\in G}h=\prod_{h\in G}gh$</span> holds.</p>
<p>I do not have any idea how to solve it, as I do not understand the idea behind all this. I found out that it has to do with Lagrange's theorem, but we did not learn that in class. Every hint is welcome!</p>
<p>Thank you very much!</p>
| Community | -1 | <p>Slight variation on the proof. It's straightforward to show that multiplication by an element of <span class="math-container">$G$</span> defines a bijection. Now that implies that any coset <span class="math-container">$gH$</span> has order <span class="math-container">$|H|$</span>, for <span class="math-container">$H\le G$</span>.</p>
<p>Next notice that being in a coset <span class="math-container">$gH$</span> defines an equivalence relation. That is <span class="math-container">$g\sim h\iff gh^{-1}\in H$</span>. I'll leave it to you to check symmetry, reflexivity and transitivity.</p>
<p>As a consequence the cosets, which are all the size of <span class="math-container">$H$</span>, partition <span class="math-container">$G$</span>. It follows that we can define <span class="math-container">$[G:H]:=n$</span> for <span class="math-container">$n=|G|/|H|$</span>.</p>
<p>Now let <span class="math-container">$H=\langle g\rangle$</span>, the cyclic subgroup generated by <span class="math-container">$g$</span>. Then <span class="math-container">$g^{|H|}=e$</span> (fairly easy). As a result <span class="math-container">$g^{|G|}=g^{n|H|}=(g^{|H|})^n=e^n=e$</span>.</p>
<p>Note that here <span class="math-container">$G$</span> doesn't have to be abelian.</p>
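<p>A concrete sanity check of both steps (a sketch using the group of units mod $12$, which is finite and abelian; the modulus is an arbitrary choice):</p>

```python
from math import gcd

n = 12
G = [a for a in range(1, n) if gcd(a, n) == 1]   # units mod 12: [1, 5, 7, 11]
order = len(G)                                   # |G| = 4

for g in G:
    # multiplication by g is a bijection of G onto itself ...
    assert sorted((g * h) % n for h in G) == sorted(G)
    # ... and g^|G| is the neutral element
    assert pow(g, order, n) == 1
```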
|
18,530 | <p>Sorry about the title, I have no idea how to describe these types of problems.</p>
<p>Problem statement:</p>
<p>$A(S)$ is the set of 1-1 mappings of $S$ onto itself. Let $S \supset T$ and consider the subset $U(T) = $ { $f \in A(S)$ | $f(t) \in T$ for every $t \in T$ }. $S$ has $n$ elements and $T$ has $m$ elements. Show that there is a mapping $F:U(T) \rightarrow S_m$ such that $F(fg) = F(f)F(g)$ for $f, g \in U(T)$ and $F$ is onto $S_m$.</p>
<p>How do I write up this reasoning: When I look at the sets $S$ = { 1, 2, ..., $n$} and $T$ = { 1, 2, ..., $m$}, I can see that there are a bunch of permutations of the elements of $T$ within $S$. I can see there are $(n - m)!$ members of $S$ for each permutation of $T$'s elements. But there needs to be some way to get a handle on the positions of the elements in $S$ and $T$ in order to compare them to each other. But $S$ isn't any particular set, like a set of integers, so how can I relate the positions of the elements to one another? Or, is this the wrong way to go about it?</p>
<p>Example:</p>
<p>$U(T_3 \subset S_6) = \left(
\begin{array}{cccccc}
1 & 2 & 3 & 4 & 5 & 6 \\
1 & 2 & 3 & 4 & 6 & 5 \\
1 & 2 & 3 & 5 & 4 & 6 \\
1 & 2 & 3 & 5 & 6 & 4 \\
1 & 2 & 3 & 6 & 4 & 5 \\
1 & 2 & 3 & 6 & 5 & 4 \\
1 & 3 & 2 & 4 & 5 & 6 \\
1 & 3 & 2 & 4 & 6 & 5 \\
1 & 3 & 2 & 5 & 4 & 6 \\
1 & 3 & 2 & 5 & 6 & 4 \\
1 & 3 & 2 & 6 & 4 & 5 \\
1 & 3 & 2 & 6 & 5 & 4 \\
2 & 1 & 3 & 4 & 5 & 6 \\
2 & 1 & 3 & 4 & 6 & 5 \\
2 & 1 & 3 & 5 & 4 & 6 \\
2 & 1 & 3 & 5 & 6 & 4 \\
2 & 1 & 3 & 6 & 4 & 5 \\
2 & 1 & 3 & 6 & 5 & 4 \\
2 & 3 & 1 & 4 & 5 & 6 \\
2 & 3 & 1 & 4 & 6 & 5 \\
2 & 3 & 1 & 5 & 4 & 6 \\
2 & 3 & 1 & 5 & 6 & 4 \\
2 & 3 & 1 & 6 & 4 & 5 \\
2 & 3 & 1 & 6 & 5 & 4 \\
3 & 1 & 2 & 4 & 5 & 6 \\
3 & 1 & 2 & 4 & 6 & 5 \\
3 & 1 & 2 & 5 & 4 & 6 \\
3 & 1 & 2 & 5 & 6 & 4 \\
3 & 1 & 2 & 6 & 4 & 5 \\
3 & 1 & 2 & 6 & 5 & 4 \\
3 & 2 & 1 & 4 & 5 & 6 \\
3 & 2 & 1 & 4 & 6 & 5 \\
3 & 2 & 1 & 5 & 4 & 6 \\
3 & 2 & 1 & 5 & 6 & 4 \\
3 & 2 & 1 & 6 & 4 & 5 \\
3 & 2 & 1 & 6 & 5 & 4
\end{array}
\right),A(T_3) = \left(
\begin{array}{ccc}
1 & 2 & 3 \\
1 & 3 & 2 \\
2 & 1 & 3 \\
2 & 3 & 1 \\
3 & 1 & 2 \\
3 & 2 & 1
\end{array}
\right)$</p>
| NebulousReveal | 2,548 | <p><a href="https://math.stackexchange.com/questions/16342/point-of-logarithms">This</a> question might be of relevance to you. It asks about the purpose of logarithms. <a href="http://www.math.umt.edu/tmme/vol5no2and3/TMME_vol5nos2and3_a14_pp.337_344.pdf" rel="nofollow noreferrer">This</a> site gives a brief history of logarithms. As Qiaochu explained, logarithms are very useful in information theory. The <a href="https://math.stackexchange.com/questions/3419/is-it-possible-to-represent-every-huge-number-in-abreviated-form/3421#3421">answer</a> to this question elegantly uses logarithms. </p>
|
18,530 | <p>Sorry about the title, I have no idea how to describe these types of problems.</p>
<p>Problem statement:</p>
<p>$A(S)$ is the set of 1-1 mappings of $S$ onto itself. Let $S \supset T$ and consider the subset $U(T) = $ { $f \in A(S)$ | $f(t) \in T$ for every $t \in T$ }. $S$ has $n$ elements and $T$ has $m$ elements. Show that there is a mapping $F:U(T) \rightarrow S_m$ such that $F(fg) = F(f)F(g)$ for $f, g \in U(T)$ and $F$ is onto $S_m$.</p>
<p>How do I write up this reasoning: When I look at the sets $S$ = { 1, 2, ..., $n$} and $T$ = { 1, 2, ..., $m$}, I can see that there are a bunch of permutations of the elements of $T$ within $S$. I can see there are $(n - m)!$ members of $S$ for each permutation of $T$'s elements. But there needs to be some way to get a handle on the positions of the elements in $S$ and $T$ in order to compare them to each other. But $S$ isn't any particular set, like a set of integers, so how can I relate the positions of the elements to one another? Or, is this the wrong way to go about it?</p>
<p>Example:</p>
<p>$U(T_3 \subset S_6) = \left(
\begin{array}{cccccc}
1 & 2 & 3 & 4 & 5 & 6 \\
1 & 2 & 3 & 4 & 6 & 5 \\
1 & 2 & 3 & 5 & 4 & 6 \\
1 & 2 & 3 & 5 & 6 & 4 \\
1 & 2 & 3 & 6 & 4 & 5 \\
1 & 2 & 3 & 6 & 5 & 4 \\
1 & 3 & 2 & 4 & 5 & 6 \\
1 & 3 & 2 & 4 & 6 & 5 \\
1 & 3 & 2 & 5 & 4 & 6 \\
1 & 3 & 2 & 5 & 6 & 4 \\
1 & 3 & 2 & 6 & 4 & 5 \\
1 & 3 & 2 & 6 & 5 & 4 \\
2 & 1 & 3 & 4 & 5 & 6 \\
2 & 1 & 3 & 4 & 6 & 5 \\
2 & 1 & 3 & 5 & 4 & 6 \\
2 & 1 & 3 & 5 & 6 & 4 \\
2 & 1 & 3 & 6 & 4 & 5 \\
2 & 1 & 3 & 6 & 5 & 4 \\
2 & 3 & 1 & 4 & 5 & 6 \\
2 & 3 & 1 & 4 & 6 & 5 \\
2 & 3 & 1 & 5 & 4 & 6 \\
2 & 3 & 1 & 5 & 6 & 4 \\
2 & 3 & 1 & 6 & 4 & 5 \\
2 & 3 & 1 & 6 & 5 & 4 \\
3 & 1 & 2 & 4 & 5 & 6 \\
3 & 1 & 2 & 4 & 6 & 5 \\
3 & 1 & 2 & 5 & 4 & 6 \\
3 & 1 & 2 & 5 & 6 & 4 \\
3 & 1 & 2 & 6 & 4 & 5 \\
3 & 1 & 2 & 6 & 5 & 4 \\
3 & 2 & 1 & 4 & 5 & 6 \\
3 & 2 & 1 & 4 & 6 & 5 \\
3 & 2 & 1 & 5 & 4 & 6 \\
3 & 2 & 1 & 5 & 6 & 4 \\
3 & 2 & 1 & 6 & 4 & 5 \\
3 & 2 & 1 & 6 & 5 & 4
\end{array}
\right),A(T_3) = \left(
\begin{array}{ccc}
1 & 2 & 3 \\
1 & 3 & 2 \\
2 & 1 & 3 \\
2 & 3 & 1 \\
3 & 1 & 2 \\
3 & 2 & 1
\end{array}
\right)$</p>
| Travelling Salesman | 141,167 | <p>As Yuan says above, logarithms are numbers describing numbers, intended to give more information about something but in less space or time. </p>
<p>If you want a very simple everyday analogy you could consider some physical cases like steepness, density, or percentages for price increases. All of these are like cruder, simpler versions of a log: they give information about other numbers in a more general, more compact way. </p>
<p>Is that more what you were looking for?</p>
|
655,005 | <p>Show that if $k \in \mathbb{Z}$, then the integers $6k-1$, $6k+1$, $6k+2$, $6k+3$, and $6k+5$ are pairwise relatively prime.
I am still new and uncomfortable with proofs. Any help would be great. </p>
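<p>Not a proof, but a brute-force check (a sketch added for illustration) may build confidence; note that any common divisor of a pair also divides the pair's difference, which here is one of $1,2,3,4,6$:</p>

```python
from math import gcd
from itertools import combinations

offsets = (-1, 1, 2, 3, 5)
for k in range(-100, 101):
    nums = [6 * k + r for r in offsets]
    # every pair of the five numbers has gcd 1
    assert all(gcd(a, b) == 1 for a, b in combinations(nums, 2))

print("pairwise coprime for k in [-100, 100]")
```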
| MJD | 25,554 | <p>The short answer to the issue raised your question is this: $K^\ast$ is the set of <em>finite</em> sequences of elements of $K$. If $K$ is a finite set, then $K^\ast$ is countable, and Cantor's theorem tells us that the real numbers are not countable.</p>
<p>The mapping may be injective, but the real question is whether it is <em>surjective</em>: for any real number $r$, is there some sequence that is mapped to $r$. But there surely is not.</p>
<p>I began by asking how you plan to represent $\frac 13$, since the scheme you described, which is supposed to map rational numbers onto elements of $\{0, 1, \ldots 9\}^\ast$, does <em>not</em> succeed with $\frac13$, which is $0._825252525\ldots$ and does not receive a terminating base-8 representation, so does not correspond to a finite sequence of symbols in your system. You arbitrarily assigned $\frac13$ the representation <code>8</code>.</p>
<p>But it's rather ingenuous to ask “How is this not a listing of the reals?” when you didn't actually explain how you plan to map $\{8, 9\}^\ast$ to the "irrationals or whatever", where "whatever" now apparently includes $\frac13$. If you had given a specific mapping, it would be possible to produce a real number that you missed; this is exactly what Cantor's theorem does. But in the absence of a particular mapping, all one can do is appeal to Cantor's theorem in general.</p>
<p>But these sequences are <em>not</em> members of $\{0,1\}^\ast$, because that is the set of <em>finite</em> sequences of zeroes and ones, and many real numbers are only represented by an <em>infinite</em> sequence. For example, $\frac13$ is $0._201010101\ldots$.</p>
<p>And indeed Cantor's theorem tells us that there is <em>no</em> system in which every real number has a finite representation.</p>
|
4,473,264 | <p>I have part of a circle described by three two dimensional vectors.</p>
<ul>
<li>start point <code>s1</code></li>
<li>center point <code>c1</code></li>
<li>end point <code>e</code></li>
</ul>
<p>I move the start point <code>s1</code> by <code>m1</code>, which is a <strong>known</strong> two dimensional vector. My question is: Can I calculate the new center point <code>c2</code> from the data I have? And if so, how?</p>
<p>Problem</p>
<p><a href="https://i.stack.imgur.com/3wWOr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3wWOr.jpg" alt="enter image description here" /></a></p>
<p>I'm creating a svg-manuipulation-app (drawing-app) in javascript where I want to edit one point of an arc, but keep the shape of the arc intact by appropriately moving the center of the arc.</p>
<p>It only looks like I want to keep the <code>x</code> value the same. Small coincidence I didn't realised. The question should cover any vector <code>m1</code>, no matter where the new center <code>c2</code> would end up.</p>
| Lee Mosher | 26,501 | <p>One thing which has not been mentioned explicitly in the comments or the other answer is the following fact:</p>
<blockquote>
<p>Your function <span class="math-container">$f(x)=\sqrt{1+\dfrac{1}{x^4}}$</span> <em>does</em> have an indefinite integral, i.e. an antiderivative, on any interval where it is defined.</p>
</blockquote>
<p>For example it has an antiderivative on the interval <span class="math-container">$0 < x < +\infty$</span> (we have to avoid <span class="math-container">$x=0$</span> because it is not in the domain of <span class="math-container">$f(x)$</span>).</p>
<p>That fact is true by application of the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#First_part" rel="nofollow noreferrer">first fundamental theorem of calculus</a>. For your particular example, that theorem tells you that the formula
<span class="math-container">$$F(x) = \int_1^x \sqrt{1+\dfrac{1}{t^4}} \, dt
$$</span>
defines an antiderivative of your function <span class="math-container">$f(x)$</span> for all values <span class="math-container">$0 < x < \infty$</span>.</p>
<p>Now, you might not particularly like that formula for <span class="math-container">$F(x)$</span>, because it's hard to evaluate.</p>
<p>What you <em>can</em> always do is to use that formula for <span class="math-container">$F(x)$</span> to get a numerical estimate of its output value for any input value <span class="math-container">$x \in (0,\infty)$</span>, by using the trapezoidal rule for example, or any other numerical integration method of your choice.</p>
<p>What you can <em>sometimes</em> do for other functions <span class="math-container">$f(x)$</span> is to use methods of integration to write down an explicit elementary formula for <span class="math-container">$F(x)$</span>.</p>
<p>But for this particular function <span class="math-container">$f(x) = \sqrt{1+\dfrac{1}{x^4}}$</span> you <em>cannot</em> find an elementary formula for <span class="math-container">$F(x)$</span>. See the other answer for a fuller explanation of this point.</p>
|
24,361 | <p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p>
<p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p>
<p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
| Martin Brandenburg | 2,841 | <p>Definitely not. If $X$ is a ringed space, then an $\mathcal{O}_X$-module $F$ is called locally free of rank $1$ if $X$ is covered by open subsets $U_i$ such that $F|_{U_i}$ is free of rank $1$ over $\mathcal{O}_{U_i}$. These correspond to line bundles on $X$. Line bundles form a group, called the Picard group of $X$ and denoted by $\text{Pic}(X)$. This group does not have to vanish: If $X$ is a CW complex and we take the sheaf of continuous functions, then $\text{Pic}(X)$ is isomorphic to $H^1(X,\mathbb{Z}/2)$. For $X=S^1$, the Möbius strip is the nontrivial element here. The corresponding example in algebraic geometry is $\text{Pic}(\mathbb{P}^n_k)=\mathbb{Z}$, the generator given by the Serre twist $\mathcal{O}(1)$.</p>
<p>Of course, there are more easy counterexamples, but I wanted to indicate that there is a rich theory coming from the observation that two locally isomorphic sheaves are not isomorphic.</p>
|
24,361 | <p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p>
<p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p>
<p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
| Robin Chapman | 4,213 | <p>Since there are locally free sheaves that are not globally free, the answer is
clearly no. Indeed the existence of the term "locally free" is a meta-proof
of this. For a concrete example, consider the circle embedded as the central circle
in a smooth Mobius strip. Its normal bundle is locally isomorphic to the
constant sheaf with fibres $\mathbb{R}$ but it is not a constant sheaf as it has
no global sections which are everywhere nonzero.</p>
|
2,783,129 | <p>How many different value of x from 0° to 180° for the equation $(2\sin x-1)(\cos x+1) = 0$?</p>
<p>The solution shows that one of these is true:</p>
<p>$\sin x = \frac12$ and thus $x = 30^\circ$ or $120^\circ$ </p>
<p>$\cos x = -1$ and thus $x = 180^\circ$</p>
<p><strong>Question:</strong> Inserting the $\arcsin$ of $1/2$ will yield to $30°$, how do I get $120^\circ$? and what is that $120^\circ$, why is there $2$ value but when you substitute $\frac12$ as $x$, you'll only get $1$ value which is the $30^\circ$?</p>
<p>Also, when I do it inversely: $\sin(30^\circ)$ will result to 1/2 which is true as $\arcsin$ of $1/2$ is $30^\circ$. But when you do $\sin(120^\circ)$, it will be $\frac{\sqrt{3}}{2}$, and when you calculate the $\arcsin$ of $\frac{\sqrt{3}}{2}$, it will result to $60^\circ$ and not $120^\circ$. Why?</p>
| mallan | 557,176 | <p>$0\leq \sin(x)\leq1$ in quadrants I <em>and</em> II. That is to say, $\sin x$ (and $\cos x$) come from a circle, which is periodic. There are infinitely many $x$ that satisfy, for example, $\sin(x)=\frac{1}{2}$, but the simplest answer suffices.</p>
<p>Specifically, $\sin^{-1}x$ isn't a function unless the sine's domain is restricted to $-\pi/2\leq x\leq\pi/2$ (equivalently, the range of $\sin^{-1}$ is restricted to these values).</p>
<p>Likewise for $\cos^{-1}x$: it's not a function unless the cosine's domain is restricted to $0\leq x\leq\pi$.</p>
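<p>To see the range restriction concretely, here is a small Python sketch (standard library only): both $30^\circ$ and $150^\circ$ solve $\sin x=\frac12$ on $[0^\circ,180^\circ]$, yet $\arcsin$ only ever returns the principal value.</p>

```python
import math

# Two different angles in [0, 180] degrees with the same sine value:
for deg in (30, 150):
    assert abs(math.sin(math.radians(deg)) - 0.5) < 1e-12

# asin can hand back only one of them: the principal value,
# because its outputs are restricted to [-90, 90] degrees.
principal = math.degrees(math.asin(0.5))
assert abs(principal - 30) < 1e-9
assert -90 <= math.degrees(math.asin(-1.0)) <= 90
```

This is why solving $\sin x = \frac12$ needs the unit circle (or the identity $\sin(180^\circ - x) = \sin x$), not just a calculator's inverse-sine key.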
|
138,866 | <p>I have data in a csv file. The first row has labels, and the first column, too.</p>
<pre><code>Datos = Import["C:\\Users\\jodom\\Desktop\\Data.csv"]
</code></pre>
<p>The data in the csv file is the following:</p>
<pre><code>{{"No", "Vol", "Vel"}, {1, 500, 45}, {2, 700, 67}, {3, 350, 87}, {4,
123, 23}, {5, 587, 45}, {6, 435, 89}, {7, 896, 65}, {8, 125,
45}, {9, 476, 27}, {10, 987, 80}}
</code></pre>
<p>I put those csv data into a dataset:</p>
<pre><code>B = Dataset[Datos]
</code></pre>
<p>You can see an image of how it looks in Mathematica after the import here:
<a href="https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing" rel="noreferrer">https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing</a></p>
<p>Now I want to convert the first row, which has the labels, into headers of the dataset, and the first column into row labels, so that I can get data from this dataset like </p>
<pre><code>Dataset[labelrow, labelcolumn]
</code></pre>
| gwr | 764 | <p>What you can do is this:</p>
<pre><code>association = AssociationThread[ First @ data, ## ]& /@ data; (* data = Datos *)
dataset = Association /* Dataset @@ (
association // Query[ All, Lookup[##, "No"] -> Part[##, 2 ;; 3] & ]
)
</code></pre>
<p><img src="https://i.stack.imgur.com/nPuXK.png" alt="Mathematica graphics"></p>
<p>Then</p>
<pre><code>dataset[ 2, "Vol" ]
</code></pre>
<blockquote>
<p>700</p>
</blockquote>
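<p>For comparison only (not Wolfram code): the same "key rows by <code>No</code>, look fields up by header" idea can be sketched in Python with a plain dictionary.</p>

```python
# Hypothetical Python analog of the Dataset query above: the header row
# supplies the field names, the "No" column supplies the row keys.
rows = [["No", "Vol", "Vel"],
        [1, 500, 45], [2, 700, 67], [3, 350, 87], [4, 123, 23], [5, 587, 45],
        [6, 435, 89], [7, 896, 65], [8, 125, 45], [9, 476, 27], [10, 987, 80]]

header, *records = rows
dataset = {row[0]: dict(zip(header[1:], row[1:])) for row in records}

assert dataset[2]["Vol"] == 700    # same lookup as dataset[ 2, "Vol" ]
assert dataset[10]["Vel"] == 80
```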
|
509,856 | <p>Let $K_n$ be the complete graph on $n$ vertices with a color set $c=\{\text{Red}, \text{Blue}\}$. Every edge of the complete graph is colored either $\text{Red}$ or $\text{Blue}$. Since $R(3, 3)=6$, the $K_6$ graph must contain at least one monochromatic $K_3$ subgraph. How can I prove that this graph must contain another (different) monochromatic $K_3$ subgraph? I saw proofs which use the fact that there are at most $18$ non-monochromatic $K_3$ subgraphs. Since there are $20$ $K_3$ subgraphs (how can one calculate this?) there are at least 2 monochromatic $K_3$ subgraphs. Are there other proofs?</p>
| Sean Eberhard | 23,805 | <p>Since $R(3,3)=6$ there is a monochromatic triangle $\Delta$. Let's say it's blue. Look at the other three vertices. If there is no red edge between them then we've found a second blue triangle, so suppose we have found a red edge $xy$, $x,y\notin\Delta$. If there are two blue edges from $x$ to $\Delta$ then we've found a second blue triangle, so assume there are two red edges from $x$ to $\Delta$. Similarly assume there are two red edges from $y$ to $\Delta$. But that means that there is a $z\in\Delta$ such that $xz$ and $yz$ are both red, so we've found a red triangle.</p>
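<p>Since $K_6$ is small, the claim can also be confirmed by brute force over all $2^{15}$ edge colorings; the $20$ triangles are just $\binom{6}{3}$. A Python sketch (standard library only):</p>

```python
from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))           # the 15 edges of K6
edge_index = {e: i for i, e in enumerate(edges)}
triangles = [tuple(edge_index[p] for p in combinations(t, 2))
             for t in combinations(vertices, 3)]  # the 20 triangles, as edge-index triples

def mono_count(coloring):
    return sum(coloring[i] == coloring[j] == coloring[k] for i, j, k in triangles)

# Exhaust every red/blue coloring; even the best coloring still
# contains 2 monochromatic triangles.
minimum = min(mono_count(c) for c in product((0, 1), repeat=len(edges)))
assert minimum == 2
```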
|
701,176 | <p>A function $f$ is differentiable over its domain and has the following properties:</p>
<ol>
<li><p>$\displaystyle f(x+y)=\frac{f(x)+f(y)}{1-f(x)f(y)}$</p></li>
<li><p>$\lim_{h \to 0} f(h) = 0$</p></li>
<li><p>$\lim_{h \to 0} f(h)/h = 1$</p></li>
</ol>
<p>i) Show that $f(0)=0$</p>
<p>ii) show that $f'(x)=1+[f(x)]^2$ by using the def of derivatives Show how the above properties are involved.</p>
<p>iii) find $f(x)$ by finding the antiderivative. Use the boundary condition from part (i).</p>
<hr>
<p>So basically I think I found out how to do part 1 because if $x+y=0$ then the top part of the fraction will always have to be zero.</p>
<p>part 2 and 3 are giving me trouble. The definition is the limit $(f(x+h)-f(x))/h$</p>
<p>So I can set $x+y=h$ and make the numerator equal to $f(h)$?</p>
<p>Thanks for all who help</p>
| Lord Soth | 70,323 | <p>Suppose $n^3-6n^2 = 2$ for some integer $n$. It is easy to check that $n$ cannot be odd. Hence $n = 2k$ for some integer $k$. We have $8k^3-24k^2 = 2$, or equivalently, $k^3-3k^2 = \frac{1}{4}$, which is a contradiction since $k$ was supposed to be an integer.</p>
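<p>The argument is easy to spot-check numerically over a window of integers (this complements, rather than replaces, the parity proof):</p>

```python
# No integer n in a generous window satisfies n^3 - 6n^2 = 2.
assert all(n**3 - 6*n**2 != 2 for n in range(-1000, 1001))

# The contradiction step: for even n = 2k the equation would force
# k^3 - 3k^2 = 1/4, i.e. 4*(k^3 - 3k^2) = 1, impossible for integers.
assert all(4 * (k**3 - 3*k**2) != 1 for k in range(-1000, 1001))
```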
|
3,310,812 | <p>Solve the equation :
<span class="math-container">$$p^k=kl+1, $$</span> with <span class="math-container">$p$</span> a prime number and <span class="math-container">$k,l\ge 1$</span> two integers.</p>
<p>I know that <span class="math-container">$(p,k,l)=(3,1,2)$</span> is a solution, but can we find all solutions to the equation ?</p>
| Rory Daulton | 161,807 | <p>This problem has two degrees of freedom: there are three column widths, but the sum of those widths is known so the degrees of freedom are reduced to only two. Therefore this will be easier to solve if we use two variables.</p>
<p>Let's use <span class="math-container">$x$</span> for the width of the first column and <span class="math-container">$y$</span> for the width of the second. Then the width of the third column is <span class="math-container">$W-x-y$</span> where <span class="math-container">$W$</span> is the known sum of the widths.</p>
<p>Then, using the known areas of the cells on the main diagonal, the heights of the three rows are</p>
<p><span class="math-container">$$\frac ax,\ \frac by,\ \frac{c}{W-x-y}$$</span></p>
<p>The height of the complete table is then</p>
<p><span class="math-container">$$H = \frac ax + \frac by + \frac{c}{W-x-y}$$</span></p>
<p>The values of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are limited by <span class="math-container">$0 < x,\ 0 < y,\ x+y < W$</span> so there are no boundaries. Thus the minimum value of <span class="math-container">$H$</span>, if one exists, is where the two partial derivatives are zero. So we get</p>
<p><span class="math-container">$$\frac{\partial H}{\partial x} = -\frac{a}{x^2}+0-\frac{c}{(W-x-y)^2}(-1)=0 $$</span></p>
<p>and</p>
<p><span class="math-container">$$\frac{\partial H}{\partial y} = 0-\frac{b}{y^2}-\frac{c}{(W-x-y)^2}(-1)=0 $$</span></p>
<p>So we get</p>
<p><span class="math-container">$$\frac{c}{(W-x-y)^2}=\frac{a}{x^2}$$</span></p>
<p>and </p>
<p><span class="math-container">$$\frac{c}{(W-x-y)^2}=\frac{b}{y^2}$$</span></p>
<p>Equating the two right-hand sides and solving yields</p>
<p><span class="math-container">$$y=\sqrt{\frac ba}x$$</span></p>
<p>We now are down to only one independent variable, <span class="math-container">$x$</span>. You substitute the expression for <span class="math-container">$y$</span> into either of the two expressions for <span class="math-container">$\frac{c}{(W-x-y)^2}$</span> and solve for <span class="math-container">$x$</span>. Finally, you substitute that expression for <span class="math-container">$x$</span> into the formula for <span class="math-container">$H$</span> and get your final answer. But a shorter way is to see that the problem is symmetric in <span class="math-container">$a, b, c$</span> and we could have set <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to the first and last columns. Following our work above would then yield</p>
<p><span class="math-container">$$z=W-x-y=\sqrt{\frac ca}x$$</span></p>
<p>I'll let you finish from here, as in the bulk of @Henry's answer. Note that the intuition of @Henry regarding the proportions of each column/row is correct: <span class="math-container">$\sqrt a: \sqrt b: \sqrt c$</span>.</p>
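<p>A numerical sanity check of the $\sqrt a: \sqrt b: \sqrt c$ proportions in Python (standard library only). Since $H$ is a sum of convex terms, the stationary point is the global minimum, and carrying the algebra through gives the minimal height $(\sqrt a+\sqrt b+\sqrt c)^2/W$:</p>

```python
import math
import random

def height(a, b, c, W, x, y):
    z = W - x - y
    return a / x + b / y + c / z

random.seed(0)
for _ in range(100):
    a, b, c = (random.uniform(0.5, 5.0) for _ in range(3))
    W = random.uniform(1.0, 10.0)
    s = math.sqrt(a) + math.sqrt(b) + math.sqrt(c)
    # widths in the ratio sqrt(a) : sqrt(b) : sqrt(c)
    x, y = W * math.sqrt(a) / s, W * math.sqrt(b) / s
    h_opt = height(a, b, c, W, x, y)
    assert abs(h_opt - s * s / W) < 1e-9        # closed-form minimum
    # nearby feasible perturbations never do better
    for _ in range(20):
        dx, dy = (random.uniform(-0.01, 0.01) * W for _ in range(2))
        if 0 < x + dx and 0 < y + dy and (x + dx) + (y + dy) < W:
            assert height(a, b, c, W, x + dx, y + dy) >= h_opt - 1e-9
```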
|
738,934 | <p>Write an equation for the line tangent to the graph of $x=y^2+4$ at the point $(5,1)$.</p>
<p>I have found the derivative $y'=\frac{1}{2y}$ but I do not know what to do next.</p>
| user137794 | 137,794 | <p>Find $y'$ at the point $(5, 1)$. That is the slope of the line. Then you can use point-slope formula.</p>
|
738,934 | <p>Write an equation for the line tangent to the graph of $x=y^2+4$ at the point $(5,1)$.</p>
<p>I have found the derivative $y'=\frac{1}{2y}$ but I do not know what to do next.</p>
| Sam Ye | 139,806 | <p>Since $y^{\prime}=\frac{1}{2y}$, find $y^{\prime}$ by substituting $(5, 1)$ into the equation, which gives $y^{\prime} = \frac{1}{2}$, the gradient of the tangent line.</p>
<p>Equation of a line: $$\begin{align}y-y_1=m(x-x_1)\end{align}$$</p>
<p>Thus, the equation should be $$\begin{align}y-1=\frac{1}{2}(x-5)\end{align}$$</p>
<p>Tidying it up, this becomes $$\begin{align}x-2y-3=0\end{align}$$</p>
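<p>Both membership and tangency are easy to verify: substituting $x=y^2+4$ into $x-2y-3=0$ gives $(y-1)^2=0$, a double root at $y=1$. A quick Python check:</p>

```python
# The point (5, 1) lies on the curve x = y^2 + 4 and on the line x - 2y - 3 = 0.
x, y = 5, 1
assert x == y**2 + 4
assert x - 2*y - 3 == 0

# Substituting the curve into the line yields (y - 1)^2, so y = 1 is a
# double root: the line meets the curve tangentially.
for t in range(-5, 6):
    assert (t**2 + 4) - 2*t - 3 == (t - 1)**2
```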
|
1,335,446 | <p>An ordinary sphere in $\mathbb{R}^3$ is a two-dimensional object (2-sphere), i.e. it requires at least two coordinates to define a point on the surface. As I notice, however, there is a catch. <br/>
If we use spherical coordinates,$(\phi,\theta)$, then there are two points where $\phi$ is not defined (polar regions). If we use stereographic coordinates, $j=x+iy$, then there is one point where both $x$ and $y$ are infinite, hence not defined. But if we use three coordinates (e.g. cartesian), then we have no such problem. <br/>
In that sense, is 2-sphere a three-dimensional object, i.e. there is some dimension function that is equal three for 2-sphere (but equal two for 2-sphere with two points excluded)? Or is there a way to use two coordinates and not to encounter such 'problematic' points? </p>
| Emilio Novati | 187,568 | <p>A spherical surface in $\mathbb{R}^3$ is a two dimension <a href="https://en.wikipedia.org/wiki/Manifold" rel="nofollow">manifold</a> that is not globally <a href="https://en.wikipedia.org/wiki/Diffeomorphism" rel="nofollow">diffeomorphic</a> to a plane. But we can establish a diffeomorphism locally. This means that in a neighborhood of any point we can define a smooth function that is a diffeomorphism between the neighborhood and $\mathbb{R}^2$.The spherical coordinates $(\phi, \theta)$ are an example of how we can do this. The dimension of the manifold is, by definition, the dimension $n$ of the locally diffeomorphic space, so, for the sphere this dimension is $2$.</p>
|
1,335,446 | <p>An ordinary sphere in $\mathbb{R}^3$ is a two-dimensional object (2-sphere), i.e. it requires at least two coordinates to define a point on the surface. As I notice, however, there is a catch. <br/>
If we use spherical coordinates,$(\phi,\theta)$, then there are two points where $\phi$ is not defined (polar regions). If we use stereographic coordinates, $j=x+iy$, then there is one point where both $x$ and $y$ are infinite, hence not defined. But if we use three coordinates (e.g. cartesian), then we have no such problem. <br/>
In that sense, is 2-sphere a three-dimensional object, i.e. there is some dimension function that is equal three for 2-sphere (but equal two for 2-sphere with two points excluded)? Or is there a way to use two coordinates and not to encounter such 'problematic' points? </p>
| Community | -1 | <blockquote>
<p>some dimension function that is equal three for 2-sphere (but equal two for 2-sphere with two points excluded)? </p>
</blockquote>
<p>Yes: the smallest number $n$ such that the space can be embedded into $\mathbb{R}^n$, in the sense of being diffeomorphic to a submanifold of $\mathbb{R}^n$. This is sometimes called the <em>embedding dimension</em>. </p>
<ul>
<li>The embedding dimension of $S^2$ is $3$, since it embeds into $\mathbb{R}^3$ but not into $\mathbb{R}^2$. </li>
<li>The embedding dimension of $S^2$ with two points excluded is $2$, since this set is diffeomorphic to $\mathbb{R}^2$ minus a point. </li>
</ul>
<p>(The embedding dimension of $S^2$ with <em>one</em> point excluded is also $2$, thanks to the stereographic projection.)</p>
|
18,224 | <p>I need to generate a very large sparse block matrix, with blocks consisting only of ones along the diagonal. I have tried several ways of doing this, but I seem to always run out of memory.</p>
<p>The fastest way of doing this that I've come up with so far is as follows:</p>
<p>(Typically, I will need <code>n</code> to be at least 2500 and <code>m</code> of the order of 50.)</p>
<pre><code>tmp= SparseArray[{}, {n,n}, 1];
SparseArray@
ArrayFlatten@
Table[If[i == j, tmp, 0], {i, m}, {j, m}]
</code></pre>
<p>Example when n=2, m=4:</p>
<p><img src="https://i.stack.imgur.com/PIpvo.png" alt="Matrix"></p>
<p>The problem with this construction is that <code>ArrayFlatten</code> for some reason converts the result to a normal matrix, and I run out of memory. That is, when it works, this code computes the end result very quickly and the result does not take up much memory. At some specific number however, it suddenly crashes as the intermediate <code>ArrayFlatten</code> step clogs up the memory.</p>
<p>Any help will be greatly appreciated!</p>
| Carl Woll | 45,431 | <p>I think <a href="http://reference.wolfram.com/language/ref/KroneckerProduct" rel="nofollow noreferrer"><code>KroneckerProduct</code></a> is the right tool for this question:</p>
<pre><code>kp[n_,m_] := KroneckerProduct[
SparseArray[IdentityMatrix[m]],
SparseArray[SparseArray[{}, {n,n}, 1], Automatic, 0]
]
</code></pre>
<p>Note that I convert the <a href="http://reference.wolfram.com/language/ref/SparseArray" rel="nofollow noreferrer"><code>SparseArray</code></a> in the second argument into a <a href="http://reference.wolfram.com/language/ref/SparseArray" rel="nofollow noreferrer"><code>SparseArray</code></a> that has <code>0</code> as the default element. Otherwise, the output of <a href="http://reference.wolfram.com/language/ref/KroneckerProduct" rel="nofollow noreferrer"><code>KroneckerProduct</code></a> is no longer a <a href="http://reference.wolfram.com/language/ref/SparseArray" rel="nofollow noreferrer"><code>SparseArray</code></a> object. Here is Mr.Wizard's last suggestion:</p>
<pre><code>fn[n_,m_] := SparseArray[
Tuples[Range@# - {1,0,0}] . {Rest@#, {1,0},{0,1}}&@{m,n,n} -> 1
]
</code></pre>
<p>And a comparison:</p>
<pre><code>r1 = fn[300, 20]; //MaxMemoryUsed //AbsoluteTiming
r2 = kp[300, 20]; //MaxMemoryUsed //AbsoluteTiming
r1 === r2
</code></pre>
<blockquote>
<p>{0.959416, 379069184}</p>
<p>{0.023961, 59095360}</p>
<p>True</p>
</blockquote>
|
1,460,561 | <p>Let $\lambda =1$ be the eigenvalue corresponding to the single Jordan block $J$. Prove $J^m \sim J$ for an arbitrary positive integer $m$.</p>
<p>My try: Because $\lambda = 1$ is an eigenvalue, $(J-I)^m =0$. After that $(J-I)^{m-1} J = (J-I)^{m-1}$. At this point, I do not know how to continue.</p>
| Nyfiken | 284,069 | <p>I would do it this way: $J=I+N$ where $N$ is the $n\times n$ nilpotent upper triangular matrix. Taking the $m$-th power we obtain
$$
J^m=(I+N)^m=\sum_{k=0}^m\binom{m}{k}N^k=I+mN+(\text{super-super-diagonals}),
$$
from which we get $J^m-I=mN+(\text{super-super-diagonals})$ and, therefore,
$$
\text{rank}(J^m-I)=\text{rank}(N)=n-1.
$$
It gives us $\text{dim}\ker(J^m-I)=1$, which means only one Jordan block for $J^m$. The block is $J$.</p>
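<p>The rank claim can be spot-checked with exact rational arithmetic in Python (a sketch for a few small sizes, not part of the proof):</p>

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, m):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        R = matmul(R, A)
    return R

def rank(M):
    """Row reduction over the rationals; returns the number of pivots."""
    A = [[Fraction(v) for v in row] for row in M]
    n, r = len(A), 0
    for col in range(n):
        pivot = next((i for i in range(r, n) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(n):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [u - f * v for u, v in zip(A[i], A[r])]
        r += 1
    return r

for n in (2, 3, 5):
    # Jordan block with eigenvalue 1: ones on the diagonal and superdiagonal.
    J = [[int(i == j) + int(j == i + 1) for j in range(n)] for i in range(n)]
    for m in (1, 2, 3, 7):
        D = matpow(J, m)
        for i in range(n):
            D[i][i] -= 1              # J^m - I
        assert rank(D) == n - 1       # so dim ker(J^m - I) = 1: a single block
```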
|
1,953,628 | <p>20 choices for the 1st person.
17 choices for the 2nd person (must exclude 1st and his/her two neighbours) </p>
<p>For 2 of these choices of 2nd person, there is one shared neighbour, so 15 remaining choices. (e.g. if they are numbered 1 to 20 in a circle, 1st person is #1, 2nd is #3, then people 20,1,2,3,4 are excluded).
For the other 15 choices of 2nd person, there are no shared neighbors, so 14 remaining choices. </p>
<p>So if order matters, total is $20 \cdot (2 \cdot 15 + 15 \cdot 14) $
but since order does not matter, divide by $3! = 6$ to account for the permutations in order of the 3 people.
So total = $20 \cdot \frac{2 \cdot 15 + 15 \cdot 14}{6}$</p>
<p>just redid it; does this make any sense?</p>
| N. F. Taussig | 173,070 | <p>Assuming the seating assignments are fixed, the problem is tantamount to arranging $17$ blue chairs and $3$ green chairs in a circle so that no two of the green chairs are adjacent. We will solve the problem for a line, then adjust our answer to account for the fact that the chairs are arranged in a circle.</p>
<p>Line up $17$ blue chairs. This creates $18$ spaces, $16$ between successive blue chairs and two at the ends of the row. Choose three of the spaces in which to place a green chair. This can be done in $\binom{18}{3}$ ways.</p>
<p>However, we have counted selections in which both the first and last chairs are green. If both ends are selected, there are $16$ remaining spaces in which to place the third green chair. Hence, there are $16$ linear arrangements in which no two of the green chairs are adjacent in which there is a green chair at both ends of the row. Since we will be joining the ends of the row to form a circle, these arrangements must be excluded. </p>
<p>Therefore, the number of ways of circular arrangements of $17$ blue chairs and $3$ green chairs so that no two of the green chairs are adjacent is
$$\binom{18}{3} - \binom{16}{1} = 800$$
as you found. </p>
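<p>The count is small enough to confirm by brute force:</p>

```python
from itertools import combinations

n = 20  # chairs in a circle

def adjacent(i, j):
    return (i - j) % n in (1, n - 1)

# Choose 3 of the 20 seats so that no two chosen seats are adjacent.
count = sum(1 for trio in combinations(range(n), 3)
            if not any(adjacent(a, b) for a, b in combinations(trio, 2)))
assert count == 800
```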
|
754,603 | <p>I have two equations:</p>
<ol>
<li>$x = 2^n (p+i)^{3n}$ </li>
<li>$x = 14^n p^{3n} $</li>
</ol>
<p>Here $n$, $p$, and $i$ are all integers $\geq0$.</p>
<p>I worked out (using a spreadsheet) that if $i > 1$ then the value of x in expression 1 is larger than the value of x in expression 2.</p>
<p>How can I show this mathematically?</p>
| Git Gud | 55,235 | <p>Given two matrices $M_{m\times n},N_{m\times p}$, there are two ways to interpret the entity $\begin{bmatrix} M & N \end{bmatrix}$.</p>
<p>One is the $m\times (n+p)$ matrix whose $(i,j)$ entry is $\begin{cases} (M)_{(i,j)}, &\text{if }j\leq n\\ (N)_{(i, j-n)}, &\text{if }j\ge n+1\end{cases}$.</p>
<p>In this case I'd rather denote the matrix described above as $\begin{bmatrix} M \mid N \end{bmatrix}$, (the <a href="http://en.wikipedia.org/wiki/Augmented_matrix" rel="nofollow">augmented matrix</a>). This is standard notation.</p>
<p>The other is a $1\times 2$ matrix whose first entry is the matrix $M$ and whose second entry is the matrix $N$.</p>
<p>Under the first interpretation one has $$\begin{bmatrix} M & N \end{bmatrix}^T=\begin{bmatrix} M \mid N \end{bmatrix}^T=\begin{bmatrix} M^T \\ \overline{N^T} \end{bmatrix}.$$</p>
<p>Under the second interpretation one has $$\begin{bmatrix} M & N \end{bmatrix}^T=\begin{bmatrix} M \\ N \end{bmatrix}.$$</p>
<p>The second interpretation is very uncommon. Most of the time it's safe to assume one is under the first interpretation.</p>
<p><strong>An example:</strong> Let $M=\begin{bmatrix} 1 & 0 & 1\\ 2 & 3 & 5\end{bmatrix}_{2\times 3}$ and $N=\begin{bmatrix} 1 & 1\\ 0 & 1\end{bmatrix}_{2\times 2}$.</p>
<p>The first interpretation yields the matrix $A$ where $$(A)_{ij}=\begin{cases} (M)_{(i,j)}, &\text{if }j\leq 3\\ (N)_{(i, j-n)}, &\text{if }j\ge 4\end{cases}, \text{ for all }(i,j)\in \{1,2\}\times\{1,2,3,4,5\}.$$</p>
<p>That is
$$ A=\left[\begin{array}{ccc|cc}
(M)_{11} & (M)_{12} & (M)_{13} & (N)_{11} & (N)_{12}\\
(M)_{21} & (M)_{22} & (M)_{23} & (N)_{21} & (N)_{22}
\end{array}\right]=\left[\begin{array}{ccc|cc}
1 & 0 & 1 & 1 & 1\\
2 & 3 & 5 & 0 & 1
\end{array}\right].$$</p>
<p>Transposing yields $A^T=\left[\begin{array}{cc}1 & 2\\ 0 & 3\\ 1 & 5\\ \hline 1 & 0\\ 1 & 1 \end{array}\right]=\left[\begin{array}{c}M^T\\ \hline N^T \end{array}\right]$.</p>
<p>The second interpretation gives $A_{1\times 2}=\left[\begin{matrix} (A)_{11} & (A)_{12}\end{matrix}\right]_{1\times 2}$ where $(A)_{11}=M$ and $(A)_{12}=N$, transposing: $$\left(A^T\right)_{2\times 1}=\begin{bmatrix} (A)_{11}\\ (A)_{21}\end{bmatrix}_{2\times 1}=\begin{bmatrix} M\\ N\end{bmatrix}_{2\times 1}=\begin{bmatrix}\begin{bmatrix} 1 & 0 & 1\\ 2 & 3 & 5\end{bmatrix}_{2\times 3}\\ \begin{bmatrix} 1 & 1\\ 0 & 1\end{bmatrix}_{2\times 2} \end{bmatrix}_{2\times 1}.$$</p>
<p>Here the entries in the matrix just happen to be matrices themselves, you can create matrices in which their entries are whatever you want. For instance, $\begin{bmatrix} 1 & \begin{bmatrix} 1 & 2\\ 3 & 4\end{bmatrix} & \spadesuit\\ \implies & \huge{〠} & +\end{bmatrix}$ is a matrix.</p>
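<p>The transpose rule for the first interpretation is easy to check numerically with the example matrices $M$ and $N$ above (a small Python sketch):</p>

```python
M = [[1, 0, 1],
     [2, 3, 5]]
N = [[1, 1],
     [0, 1]]

# The augmented matrix [M | N]: glue the rows of M and N side by side.
A = [row_m + row_n for row_m, row_n in zip(M, N)]

def transpose(X):
    return [list(col) for col in zip(*X)]

# [M | N]^T is M^T stacked on top of N^T.
assert transpose(A) == transpose(M) + transpose(N)
```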
|
3,727,772 | <p>Let <span class="math-container">$Y_1,Y_2,\ldots,Y_{n+1}$</span> be non-empty subsets of <span class="math-container">$\{1,2,3,\ldots,n\}$</span>. Prove that there exist non-empty disjoint subsets <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> of <span class="math-container">$\{1,2,3,\ldots,n+1\}$</span> such that <span class="math-container">$$\bigcup\limits_{i\in A_1} Y_{i}=\bigcup\limits_{j\in A_2} Y_{j}.$$</span></p>
<p>Please give a hint for this problem. I am trying but could not proceed.</p>
| Daniel | 391,594 | <p>Previously this was a hint. However, as @mindlack pointed out, it does not seem to be straightforward to deduce from here that there are two disjoint subsets.</p>
<p><strong>Attempt.</strong> Define <span class="math-container">$[n] = \{1, 2, \ldots, n\}$</span> and let <span class="math-container">$\mathcal{P}([n])$</span> be the power set of <span class="math-container">$[n]$</span>, i.e., the set of all subsets of <span class="math-container">$[n]$</span>.</p>
<p>Consider the function <span class="math-container">$f: \mathcal{P}([n+1])\setminus\{\varnothing\} \to \mathcal{P}([n])$</span> such that <span class="math-container">$A\mapsto \cup_{i \in A} Y_i$</span>. The pigeonhole principle implies that there must be <span class="math-container">$2$</span> different <span class="math-container">$A$</span>s with the same image.</p>
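<p>For small $n$ the full statement, disjointness included, can be confirmed exhaustively (a brute-force sketch; it does not by itself settle the general case):</p>

```python
from itertools import combinations, product

def nonempty_subsets(universe):
    elems = list(universe)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def has_disjoint_pair(Y):
    """Do disjoint nonempty index sets A1, A2 with equal unions exist?"""
    index_sets = nonempty_subsets(range(len(Y)))
    for A1, A2 in combinations(index_sets, 2):
        if A1 & A2:
            continue
        if (frozenset().union(*(Y[i] for i in A1))
                == frozenset().union(*(Y[j] for j in A2))):
            return True
    return False

# Every choice of n+1 non-empty subsets of {1,...,n} admits such a pair.
for n in (1, 2, 3):
    choices = nonempty_subsets(range(1, n + 1))
    assert all(has_disjoint_pair(Y) for Y in product(choices, repeat=n + 1))
```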
|
806,779 | <p>Prove, without expanding, that
\begin{vmatrix}
1 &a &a^2-bc \\
1 &b &b^2-ca \\
1 &c &c^2-ab
\end{vmatrix} vanishes.</p>
<p>Any hints ?</p>
| adfriedman | 153,126 | <p>Simply add a multiple of the first column to the last
$$\begin{vmatrix} 1 & a & a^2-bc\\1&b&b^2-ac\\1&c&c^2-ab\end{vmatrix}
=\begin{vmatrix} 1 & a & a^2-bc+(1)(ab+ac+bc)\\1&b&b^2-ac+(1)(ab+ac+bc)\\1&c&c^2-ab+(1)(ab+ac+bc)\end{vmatrix}
= \begin{vmatrix} 1 & a & a(a+b+c)\\1&b&b(a+b+c)\\1&c&c(a+b+c)\end{vmatrix}
= 0$$
The final step follows because the second and third columns are linearly dependent.</p>
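<p>Since every entry is an integer polynomial in $a,b,c$, the vanishing can be spot-checked exactly at random integer points:</p>

```python
import random

def det3(M):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

random.seed(1)
for _ in range(200):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    M = [[1, a, a*a - b*c],
         [1, b, b*b - c*a],
         [1, c, c*c - a*b]]
    assert det3(M) == 0   # exact integer arithmetic, no rounding
```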
|