qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
3,738,579 | <blockquote>
<p>What is the cardinality of set <span class="math-container">$\big\{(x,y,z)\mid x^2+y^2+z^2= 2^{2018}, xyz\in\mathbb{Z} \big\}$</span>?</p>
</blockquote>
<p>Since I have very limited knowledge in number theory, I tried using logarithms and then manipulating the equation so that we get <span class="math-container">$$10^{2018}+2=x^2+y^2+z^2.$$</span>
Then setting one of <span class="math-container">$x,y,z$</span> equal to <span class="math-container">$\sqrt{2}$</span> we find all values of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> where <span class="math-container">$$2x^2+y^2=10^{2018}.$$</span>
Finally we use combinatorics to get the required answer.
However, this led nowhere.</p>
<p>What is the correct way to solve this problem?</p>
| DonAntonio | 31,254 | <p><span class="math-container">$$\frac{2}{3}(4^n - 1) + 2^{2n + 1}= \frac23\left(\color{red}{4^n}-1+\overbrace{\color{red}{3\cdot2^{2n}}}^{=3\cdot4^n}\right)=\frac23\left(\color{red}{4\cdot4^n}-1\right) = \frac{2}{3}(4^{n + 1} - 1)$$</span></p>
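<p>A quick sanity check of the identity above (my addition, not part of the answer), using exact rational arithmetic so there is no floating-point error:</p>

```python
# Verify (2/3)(4^n - 1) + 2^(2n+1) = (2/3)(4^(n+1) - 1) exactly, for many n.
from fractions import Fraction

for n in range(0, 30):
    lhs = Fraction(2, 3) * (4**n - 1) + 2**(2*n + 1)
    rhs = Fraction(2, 3) * (4**(n + 1) - 1)
    assert lhs == rhs
```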
|
3,738,579 | <blockquote>
<p>What is the cardinality of set <span class="math-container">$\big\{(x,y,z)\mid x^2+y^2+z^2= 2^{2018}, xyz\in\mathbb{Z} \big\}$</span>?</p>
</blockquote>
<p>Since I have very limited knowledge in number theory, I tried using logarithms and then manipulating the equation so that we get <span class="math-container">$$10^{2018}+2=x^2+y^2+z^2.$$</span>
Then setting one of <span class="math-container">$x,y,z$</span> equal to <span class="math-container">$\sqrt{2}$</span> we find all values of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> where <span class="math-container">$$2x^2+y^2=10^{2018}.$$</span>
Finally we use combinatorics to get the required answer.
However, this led nowhere.</p>
<p>What is the correct way to solve this problem?</p>
| Community | -1 | <p>You want to prove</p>
<p><span class="math-container">$$\frac23(4^{n+1}-1)-\frac23(4^n-1)=2^{2n+1}.$$</span></p>
<p>Simplifying the <span class="math-container">$-1$</span> and dividing by <span class="math-container">$2\cdot4^n$</span>,
<span class="math-container">$$\frac13\cdot4-\frac13=1.$$</span></p>
<hr />
<p>Note that this is just a case of the sum of a geometric progression</p>
<p><span class="math-container">$$\sum_{k=0}^{n-1}ab^k=a\frac{b^n-1}{b-1},$$</span> proven by</p>
<p><span class="math-container">$$ab^n=a\frac{b^{n+1}-1}{b-1}-a\frac{b^n-1}{b-1}=ab^n\frac{b-1}{b-1}.$$</span></p>
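<p>The geometric-sum formula can be checked with exact arithmetic (my addition, assuming Python):</p>

```python
# Verify sum_{k=0}^{n-1} a*b^k = a*(b^n - 1)/(b - 1) with exact rationals.
from fractions import Fraction

a, b = Fraction(7, 2), Fraction(5, 3)
for n in range(1, 20):
    direct = sum(a * b**k for k in range(n))
    closed = a * (b**n - 1) / (b - 1)
    assert direct == closed
```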
|
<p>With regard to an undergraduate statistics course, I am developing a standardized list of point deductions with the TAs (doctoral students) so that graders are consistent about what they deduct intermediate points for. For example, most problems are 10 points total, and my proposed point deductions for intermediate math errors are (for example):</p>
<ul>
<li>-2 pts, erroneous +, - , *, /</li>
<li>-2 pts, erroneous sign, e.g. 3.02 instead of -3.02</li>
<li>-3 pts, failed to square, e.g. (x) instead of (x)^2</li>
<li>-3 pts, failed to take square root, e.g. (x) instead of sqrt(x)</li>
</ul>
<p>Suppose that, after grading, you discover on a particular exam that the final answers for five 10-point questions are incorrect, each because of a single minor -2 point intermediate error. The student could then conceivably obtain a score of 80% on the exam, having lost only 2 points per question (40/50).</p>
<p>However, in statistics, there is a contextual element to every question, not just solving for a numerical answer -- that is, in addition to the worked problem, students need to write a text-based response for the following:</p>
<ul>
<li>(2 pts) state whether the hypothesis test is significant or not</li>
<li>(2 pts) state whether the null hypothesis is rejected or accepted</li>
<li>(2 pts) state whether the p-value is less than 0.05 or not.</li>
</ul>
<p>So if there was only one minor (-2 point) intermediate error made, causing an incorrect final numerical answer, the student will also incorrectly respond to the final text-based answers (above) as well.</p>
<p>Thus, would you also take off e.g. -2 points for an incorrect final numerical answer, as well as -6 points for missing the final text-based sub-items listed above?</p>
<p>In other words, would you only deduct -2 points for a complex (multi-step) algebra or calculus question if only a minor intermediate step was erroneous, or would you also deduct for having an incorrect final numerical answer as well?</p>
<p>Maybe I could propose to the TAs to augment the point deduction list with:</p>
<ul>
<li>-1 pt, incorrect final numerical answer</li>
<li>-1 pt, state whether the hypothesis test is significant or not</li>
<li>-1 pt, state whether the null hypothesis is rejected or accepted</li>
<li>-1 pt, state whether the p-value is less than 0.05 or not.</li>
</ul>
| Xander Henderson | 8,571 | <p>Part of the problem is writing the exam questions in the first place. Others have noted that, when designing a grading rubric, you should identify what the key skills in the problem are. This seems backwards to me. <em>First</em>, identify the key skill that you want to test, and <em>then</em> write the exam questions which hit those skills.</p>
<p>Once the exam has been written, I would very much recommend that you make the rubric as simple as possible, given that you want consistent grading across a (possibly large) group of TAs. I typically grade on a 3-point scale:</p>
<ul>
<li>[3] The answer is nearly perfect.</li>
<li>[2] The answer contains errors that are mechanical in nature (e.g. missing signs, incorrect computations, etc), but not conceptual. The mechanical errors are minor or are not central to the skill(s) being tested by the question.</li>
<li>[1] There are serious mechanical errors and/or conceptual errors, but <em>something</em> correct or relevant has been written on the page, in a way which clearly demonstrates at least some conceptual understanding.</li>
<li>[0] The answer is essentially ungradable (it is blank, or nonsensical, or whatever).</li>
</ul>
<p>I will note that my [2] and [1] are essentially the 50% category in <a href="https://matheducators.stackexchange.com/a/25854/">this answer</a>. I think that it is worthwhile to distinguish between "dumb" arithmetic mistakes and more fundamental conceptual errors. That being said, my scheme is essentially the same idea—simple and quick to implement.</p>
<p>Experience has shown me that students really don't <em>like</em> this system. The students I work with are used to a grading scale in which an A is anything over 90%, a B is anything over 80%, and so on. Thus when they get 2 points out of 3 (67%), they feel like they are failing (since 67% is a D). However, my feeling is that [2] represents, roughly speaking, B or C level response, while a [1] represents a D or low C. This is something which has to be thought about when assigning letter grades—either add more "free" points into the course elsewhere, or grade on a different percentage scale, or accept that more students are going to fail.</p>
<p>If you are really stuck on a 90/80/70% scale, then (for example) remap [3] to 5 points, [2] to 4 points, [1] to 3 points, and [0] to 0 points.</p>
<p>If you want to weight different questions differently, continue to grade them on a 3 point scale, but weight them differently (easy-peasy). Students will be most happy if all of the questions are worth multiples of 3 points (because they don't really want to think about weighting).</p>
<p>In any event, the overall goal is to construct a grading scheme which is <strong>fast</strong> (you don't want your graders to have to spend a lot of time on things), and <strong>consistent</strong> (different graders should score a given response in the same way). Creating lots of deductions or opportunities for partial credit makes grading slower, hence I would tend to avoid it. Consistency is also easier to attain if there are fewer categories.</p>
|
4,309,812 | <p>Recently I knew about <a href="https://en.m.wikipedia.org/wiki/Heron%27s_formula" rel="nofollow noreferrer">Heron's formula</a> for the area of some triangle, and its generalizations to quadrilaterals by Bretschneider's formula. According to Wikipedia there are also generalizations for pentagons and hexagons inscribed in a circle.</p>
<p>My question: has someone researched on the possibility to generalize this formula for <span class="math-container">$n$</span>-gons? Is there any underlying reason for this to be a difficult (or impossible) task?</p>
<p>Thanks!</p>
| Ninad Munshi | 698,724 | <p>I'm assuming your bounds were actually</p>
<p><span class="math-container">$$y=x \hspace{15 pt} y=x+2 \hspace{15 pt} y = \frac{1}{x} \hspace{15 pt} y = \frac{2}{x}$$</span></p>
<p>It is completely possible to find <span class="math-container">$x+y$</span>; the trick is noticing the degree of the terms. <span class="math-container">$u$</span> is quadratic in <span class="math-container">$x,y$</span> while <span class="math-container">$v$</span> is linear, so we should expect another linear quantity to arise as a square root of some combination of <span class="math-container">$u$</span> and <span class="math-container">$v^2$</span>. For example</p>
<p><span class="math-container">$$v^2 = y^2-2xy+x^2$$</span></p>
<p><span class="math-container">$$v^2+4u = y^2+2xy+x^2 $$</span></p>
<p><span class="math-container">$$\sqrt{v^2+4u} =x+y$$</span></p>
<p>Now that you have formulas for <span class="math-container">$y-x$</span> and <span class="math-container">$y+x$</span>, you may be tempted to invert and solve for the Jacobian, but that would be a waste of your time. It's simpler, derivative-wise, to compute the inverse Jacobian instead</p>
<p><span class="math-container">$$J^{-1} = \left|\begin{vmatrix}y & x \\ -1 & 1\end{vmatrix}\right| = y+x \implies J = \frac{1}{J^{-1}} = \frac{1}{x+y}$$</span></p>
<p>In other words finding the formula for <span class="math-container">$x+y$</span> was unnecessary because the Jacobian would have canceled it out. I showed the trick for solving for <span class="math-container">$x+y$</span> because it can still be useful for other problems. This means our integral is</p>
<p><span class="math-container">$$\int_0^2\int_1^2 v\:dvdu$$</span></p>
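<p>The two identities the answer relies on — that the Jacobian determinant <span class="math-container">$\partial(u,v)/\partial(x,y)$</span> equals <span class="math-container">$x+y$</span>, and that <span class="math-container">$\sqrt{v^2+4u}=x+y$</span> — can be checked numerically (my addition; the sampling assumes <span class="math-container">$y\ge x$</span>, consistent with the stated bounds):</p>

```python
# Numeric check (stdlib only): for u = x*y, v = y - x,
# the Jacobian determinant d(u,v)/d(x,y) equals x + y,
# and sqrt(v^2 + 4u) recovers x + y on the region y >= x.
import math
import random

random.seed(1)
for _ in range(100):
    x = random.uniform(0.5, 3.0)
    y = random.uniform(x, x + 2.0)      # region lies between y = x and y = x + 2
    u, v = x * y, y - x
    # partials of (u, v) w.r.t. (x, y): [[y, x], [-1, 1]]
    det = y * 1 - x * (-1)
    assert abs(det - (x + y)) < 1e-12
    assert abs(math.sqrt(v * v + 4 * u) - (x + y)) < 1e-9
```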
|
4,187,932 | <p>Is there a general <strong>algebraic</strong> form to the integral <span class="math-container">$$\int_{k_1}^{k_2} x^2 e^{-\alpha x^2}dx?$$</span> I know that if this integral is an improper one, then the integral can be calculated quite easily (i.e. is a well known result). However, when these bounds are not imposed, I am getting results in the form of the error function which turns out to be another integral. So I am confused if this can even be solved by hand or the only way to do so is to plug it in through a calculator.</p>
<p>The reason I am asking is because this integral is highly related to physics as in that it is the proportional to the Maxwell Boltzmann distribution. So if I want to know the number of particles between two velocities, I have to calculate this. I am sorry if this is a basic question. I just could not find anything similar online.</p>
<p>I would also be interested in the results if only the upper bound is <span class="math-container">$\infty$</span> and the lower bound is <span class="math-container">$k$</span>.</p>
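<p>As the question suspects, the integral does have a closed form, but only in terms of the error function: integrating by parts gives <span class="math-container">$\int x^2 e^{-\alpha x^2}\,dx = \frac{\sqrt{\pi}}{4\alpha^{3/2}}\operatorname{erf}(\sqrt{\alpha}\,x) - \frac{x e^{-\alpha x^2}}{2\alpha} + C$</span>. A numerical cross-check (my addition):</p>

```python
# Check the antiderivative
#   F(x) = sqrt(pi)/(4 a^(3/2)) * erf(sqrt(a) x) - x exp(-a x^2)/(2a)
# against a crude midpoint-rule quadrature of x^2 exp(-a x^2).
import math

def closed_form(a, k1, k2):
    F = lambda x: (math.sqrt(math.pi) / (4 * a**1.5) * math.erf(math.sqrt(a) * x)
                   - x * math.exp(-a * x * x) / (2 * a))
    return F(k2) - F(k1)

def midpoint(a, k1, k2, n=200000):
    h = (k2 - k1) / n
    return h * sum((k1 + (i + 0.5) * h)**2 * math.exp(-a * (k1 + (i + 0.5) * h)**2)
                   for i in range(n))

a, k1, k2 = 1.3, 0.2, 2.5
assert abs(closed_form(a, k1, k2) - midpoint(a, k1, k2)) < 1e-8
```

<p>For the half-infinite case <span class="math-container">$[k,\infty)$</span>, take the limit <span class="math-container">$k_2\to\infty$</span>: the erf term tends to <span class="math-container">$\frac{\sqrt{\pi}}{4\alpha^{3/2}}$</span> and the boundary term vanishes.</p>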
| Timur Bakiev | 855,963 | <p><span class="math-container">$dv$</span> is not even a number in mathematics; it is another kind of object, so physicists clearly abuse some natural ways of understanding and visualising the differential. In mathematics, expressions like <span class="math-container">$v^2 + (dv)^2$</span> or <span class="math-container">$v + dv$</span> usually make no sense.</p>
|
1,626,362 | <p><code>The following is a short extract from the book I am reading:</code> </p>
<blockquote>
<p>If given a Homogeneous ODE:
$$\frac{\mathrm{d}^2 y}{\mathrm{d}x^2}+5\frac{\mathrm{d} y}{\mathrm{d}x}+4y=0\tag{1}$$
Letting
$$D=\frac{\mathrm{d}}{\mathrm{d}x}$$ then $(1)$ becomes
$$D^2 y + 5Dy + 4y=(D^2+5D+4)y$$
$$\implies\color{blue}{(D+1)(D+4)y=0}\tag{2}$$
$$\implies (D+1)y=0 \space\space\text{or}\space\space (D+4)y=0$$ which has solutions $$y=Ae^{-x}\space\space\text{or}\space\space y=Be^{-4x}\tag{3}$$ respectively, where $A$ and $B$ are both constants. </p>
<p>Now if $(D+4)y=0$, then $$(D+1)(D+4)y=(D+1)\cdot 0=0$$
so any solution of $(D + 4)y = 0$ is a solution of the differential equation $(1)$ or $(2)$. Similarly, any solution of $(D + 1)y = 0$ is a solution of $(1)$ or $(2)$. $\color{red}{\text{Since the two solutions (3) are linearly independent, a linear combination}}$ $\color{red}{\text{of them contains two arbitrary constants and so is the general solution.}}$ Thus $$y=Ae^{-x}+Be^{-4x}$$ is the general solution of $(1)$ or $(2)$.</p>
</blockquote>
<p>The part I don't understand in this extract is marked in $\color{red}{\mathrm{red}}$.</p>
<ol>
<li>Firstly; <em>How</em> do we know that the two solutions: $y=Ae^{-x}\space\text{and}\space y=Be^{-4x}$ are linearly independent?</li>
<li>Secondly; <em>Why</em> does a linear combination of linearly independent solutions give the general solution. Or, put in another way, I know that $y=Ae^{-x}\space\text{or}\space y=Be^{-4x}$ are both solutions. But <em>why</em> is their <strong>sum</strong> a solution: $y=Ae^{-x}+Be^{-4x}$? </li>
</ol>
| Valentin | 31,877 | <p>The equation can be written in the following form:
$$L[y]=0 \tag{1}$$
where $L=\frac{d^2}{dx^2} + 5\frac{d}{dx} + 4$ is a second-order linear operator on the space of twice differentiable functions $C^2$. Let $y_1,y_2\in C^2$ be two solutions, that is $L[y_1] = L[y_2] =0$ (assuming we know they exist). By linearity $$L[Ay_1+By_2] = AL[y_1] + BL[y_2] = 0$$ where $A$, $B$ are constants. Hence $y_g = Ay_1 + By_2$ is also a solution.
Now $(1)$ is of second order, so the general solution must involve 2 arbitrary constants. $y_1$ and $y_2$ must be linearly independent to form the general solution, otherwise, if $y_2\equiv Cy_1$ where $C$ is some constant, $y_g$ collapses into $(A+BC)y_1=Dy_1$ so there is really only one arbitrary constant.
How do we know if $y_1$ and $y_2$ are linearly independent? Take arbitrary $x=x_0$ and write:
$$Ay_1(x_0) + By_2(x_0)=0 \tag{2}$$
$\color{red}{\text{By definition of linear independence the only solution to}}$ $\color{red}{(2)}$ $\color{red}{\text{must be}}$ $\color{red}{A=B=0}$. Now differentiating $Ay_1(x) + By_2(x)$ with respect to $x$ and evaluating again at $x=x_0$ we get another equation:
$$Ay'_1(x_0) + By'_2(x_0)=0 \tag{3}$$
Again the only solution has to be $A=B=0$. $(2)$ and $(3)$ together give a criterion for linear independence: taken as a system, their determinant (known as the <a href="https://en.wikipedia.org/wiki/Wronskian" rel="nofollow">Wronskian</a>):
$$W(x_0) = \left|\begin{matrix}y_1(x_0) & y_2(x_0) \\ y'_1(x_0) & y'_2(x_0)\end{matrix}\right|$$
must not vanish for any permitted $x_0$.</p>
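<p>For the two solutions in question, the Wronskian works out to $W(x) = e^{-x}\cdot(-4e^{-4x}) - (-e^{-x})\cdot e^{-4x} = -3e^{-5x}$, which never vanishes. A quick numeric confirmation (my addition):</p>

```python
# Wronskian of y1 = e^{-x}, y2 = e^{-4x}:
# W(x) = y1*y2' - y1'*y2 = -3*e^{-5x}, never zero,
# so the two solutions are linearly independent.
import math

def W(x):
    y1, dy1 = math.exp(-x), -math.exp(-x)
    y2, dy2 = math.exp(-4 * x), -4 * math.exp(-4 * x)
    return y1 * dy2 - dy1 * y2

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(W(x) - (-3 * math.exp(-5 * x))) < 1e-9 * max(1.0, math.exp(-5 * x))
    assert W(x) != 0
```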
|
1,626,362 | <p><code>The following is a short extract from the book I am reading:</code> </p>
<blockquote>
<p>If given a Homogeneous ODE:
$$\frac{\mathrm{d}^2 y}{\mathrm{d}x^2}+5\frac{\mathrm{d} y}{\mathrm{d}x}+4y=0\tag{1}$$
Letting
$$D=\frac{\mathrm{d}}{\mathrm{d}x}$$ then $(1)$ becomes
$$D^2 y + 5Dy + 4y=(D^2+5D+4)y$$
$$\implies\color{blue}{(D+1)(D+4)y=0}\tag{2}$$
$$\implies (D+1)y=0 \space\space\text{or}\space\space (D+4)y=0$$ which has solutions $$y=Ae^{-x}\space\space\text{or}\space\space y=Be^{-4x}\tag{3}$$ respectively, where $A$ and $B$ are both constants. </p>
<p>Now if $(D+4)y=0$, then $$(D+1)(D+4)y=(D+1)\cdot 0=0$$
so any solution of $(D + 4)y = 0$ is a solution of the differential equation $(1)$ or $(2)$. Similarly, any solution of $(D + 1)y = 0$ is a solution of $(1)$ or $(2)$. $\color{red}{\text{Since the two solutions (3) are linearly independent, a linear combination}}$ $\color{red}{\text{of them contains two arbitrary constants and so is the general solution.}}$ Thus $$y=Ae^{-x}+Be^{-4x}$$ is the general solution of $(1)$ or $(2)$.</p>
</blockquote>
<p>The part I don't understand in this extract is marked in $\color{red}{\mathrm{red}}$.</p>
<ol>
<li>Firstly; <em>How</em> do we know that the two solutions: $y=Ae^{-x}\space\text{and}\space y=Be^{-4x}$ are linearly independent?</li>
<li>Secondly; <em>Why</em> does a linear combination of linearly independent solutions give the general solution. Or, put in another way, I know that $y=Ae^{-x}\space\text{or}\space y=Be^{-4x}$ are both solutions. But <em>why</em> is their <strong>sum</strong> a solution: $y=Ae^{-x}+Be^{-4x}$? </li>
</ol>
| Community | -1 | <p>If you recall from linear algebra, abstract functional spaces can be considered as vector spaces. We define the zero function to serve as the zero vector, and pointwise addition/multiplication as the vector space operations.</p>
<p>For a collection of vectors, to show linear independence, we want to show that none of the chosen vectors can be written as a linear combination of the others; each vector describes a 'different' part of the space. For a vector space of functions, e.g. the space of differentiable functions, to show linear independence, we must show there are no non-zero scalars $a,b$ such that
\begin{align*}
af(t)+bg(t)=0
\end{align*}
for $\textit{all}$ values of $t$ in the domain. It isn't enough that we can find one or two values of $t$ where the sum equals zero, but instead for all values of the domain.</p>
<p>Now, to show the two functions in your problem are linearly independent. Suppose we could find two non zero numbers $a$ and $b$ such that
\begin{align*}
ae^{-t}+be^{-4t}=0
\end{align*}
For all $t$. Differentiate this expression.
\begin{align*}
-ae^{-t}-4be^{-4t}=0
\end{align*}
Adding the two equations together gives
\begin{align*}
-3be^{-4t}&=0
\end{align*}
The exponential function is strictly positive, so we must have $b=0$. This implies $ae^{-t}=0$, and similarly $a=0$. </p>
<p>Now, why must a linear combination also be a solution? Well the differential equation described is $\textbf{linear}$, so any element of the span of linearly independent solutions will always yield another solution. They all get mapped to zero.</p>
|
2,746,153 | <p>Assume $m$ and $n$ are two relatively prime positive integers.</p>
<p>Given $x \equiv a\ \pmod m$ and $x \equiv a\ \pmod n$.</p>
<p>Prove that $x \equiv a\ \pmod {mn}$ by using the Chinese Remainder Theorem.<br/></p>
<p>And I did the following:
<br>
$$M_1 = n \quad\text{and}\quad M_2 = m \\
y_1 = n' \quad\text{and}\quad y_2 = m' \\
\text{where}\quad n\cdot n'\equiv 1 \pmod m \quad\text{and}\quad m\cdot m'\equiv 1 \pmod n \\
\text{Then}\quad x\equiv (a\cdot n\cdot n' + a\cdot m\cdot m')\pmod{mn} $$
<br>
But how could I conclude
"$x \equiv a \pmod{mn}$" from the last statement, or did I do it wrongly? I would be grateful for your help :)</p>
| thesmallprint | 438,651 | <p>$x\equiv a\bmod n$ implies there exists a $k\in\Bbb Z$ such that $x=nk+a$. Now, we have $$nk+a\equiv a\bmod m\Rightarrow nk\equiv0\bmod m\Rightarrow k\equiv0\bmod m$$ (the last step because $\gcd(n,m)=1$), so there exists a $j\in\Bbb Z$ such that $k=jm$. Substituting this in our equation for $x$ gives $$x=njm+a,$$ which means that $x\equiv a\bmod nm$.</p>
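<p>The implication can also be checked empirically (my addition): for coprime $m,n$, every $x$ congruent to $a$ modulo both is congruent to $a$ modulo $mn$.</p>

```python
# Empirical check: if gcd(m, n) = 1, then
#   x ≡ a (mod m) and x ≡ a (mod n)  implies  x ≡ a (mod m*n).
import math
import random

random.seed(0)
for _ in range(200):
    m, n = random.randint(2, 30), random.randint(2, 30)
    if math.gcd(m, n) != 1:
        continue
    a = random.randint(0, 1000)
    for x in range(3 * m * n):
        if x % m == a % m and x % n == a % n:
            assert x % (m * n) == a % (m * n)
```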
|
3,167,261 | <blockquote>
<p>Let <span class="math-container">$\mathcal{O}$</span> be an open subset of the plane <span class="math-container">$\mathbb{R}^{2}$</span> and
let the mapping <span class="math-container">$F : \mathcal{O} \rightarrow \mathbb{R}^{2}$</span> be
represented by <span class="math-container">$F(x, y) = (u(x, y), v(x, y))$</span> for <span class="math-container">$(x, y)$</span> in
<span class="math-container">$\mathcal{O}$</span>. Then, we say the mapping <span class="math-container">$F : \mathcal{O} \rightarrow
\mathbb{R}^{2}$</span> is called a <em>Cauchy-Riemann mapping</em> provided that
each of the functions <span class="math-container">$u : \mathcal{O} \rightarrow \mathbb{R}$</span> and <span class="math-container">$v
: \mathcal{O} \rightarrow \mathbb{R}$</span> has continuous second-order
partial derivatives and <span class="math-container">$$\frac{\partial u}{\partial x}(x, y) =
\frac{\partial v}{\partial y}(x, y) \hspace{1em} \text{ and }
\hspace{1em} \frac{\partial u}{\partial y} = -\frac{\partial
v}{\partial x}(x, y)$$</span> </p>
<p>for all <span class="math-container">$(x, y)$</span> in <span class="math-container">$\mathcal{O}$</span>.</p>
</blockquote>
<p>Is it necessarily true that <span class="math-container">$u(x, y)$</span> and <span class="math-container">$v(x, y)$</span> are harmonic?</p>
| Ernie060 | 592,621 | <p>Yes. For instance,
<span class="math-container">$$
\frac{\partial^2 u}{\partial^2 x} + \frac{\partial^2 u}{\partial^2 y} =
\frac{\partial }{\partial x}\frac{\partial u}{\partial x} + \frac{\partial }{\partial y}\frac{\partial u}{\partial y}
= \frac{\partial }{\partial x}\frac{\partial v}{\partial y} - \frac{\partial }{\partial y}\frac{\partial v}{\partial x} = \frac{\partial^2 v}{\partial x\partial y} - \frac{\partial^2 v}{\partial x\partial y} = 0.
$$</span>
The calculation for <span class="math-container">$v$</span> is similar.</p>
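<p>A concrete instance (my addition): <span class="math-container">$u = x^2-y^2$</span>, <span class="math-container">$v = 2xy$</span> satisfy the Cauchy-Riemann equations (<span class="math-container">$u_x = 2x = v_y$</span>, <span class="math-container">$u_y = -2y = -v_x$</span>), and a five-point numerical Laplacian confirms both are harmonic:</p>

```python
# Five-point finite-difference Laplacian; exact (up to rounding) for quadratics.
def laplacian(f, x, y, h=1e-4):
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

u = lambda x, y: x * x - y * y   # real part of z^2
v = lambda x, y: 2 * x * y       # imaginary part of z^2

for (x, y) in [(0.3, -1.2), (2.0, 0.5), (-1.0, -1.0)]:
    assert abs(laplacian(u, x, y)) < 1e-5
    assert abs(laplacian(v, x, y)) < 1e-5
```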
|
2,669,278 | <p>I noticed a strange thing with my calculator.<br>
When I start with any number like 1,2,3 or 1.2, 1.34 .... or even 0.<br>
And repeatedly take the cosine function of this number.<br>
I get the same number, shown below. I don't think this is a coincidence, since it happens with any number I try. </p>
<pre>0.99984774153108811295981076866798</pre>
<p>It's pretty astonishing how accurate this number is. I wouldn't have asked this question if only a few (4 or 5) decimals matched for every number, but it's 32 decimal places I get for every number I try.<br>
You've got to try it yourself to believe it.<br>
I want to know if there's a reason behind this. And why don't other functions like sine or tangent show similar properties?
Note that the calculator is set to degrees.</p>
| Travis Willse | 155,629 | <p>First, the number is a (the) fixed point $x_0$ of the map $x \mapsto \cos x^{\circ}$; here, $\cdot^{\circ}$ denotes interpreting $x$ as an angle measure of $x$ degrees. Alternatively, we can avoid mention of degrees by saying this number is (the) fixed point of the map $T : x \mapsto \cos \frac{\pi x}{180}$.</p>
<p>Applying the cosine function to any real number gives a number in the interval $I := [-1, 1]$, so we can just as well ask why repeatedly applying cosine to any number $x$ in this interval gives a sequence $x, Tx, T^2 x, \ldots$ converging to this particular number, and the standard tool for proving this is the <a href="https://en.wikipedia.org/wiki/Banach_fixed-point_theorem" rel="nofollow noreferrer">Banach Fixed-Point Theorem</a>:</p>
<p>On this interval, the derivative stays small: $|T'(x)| = \frac{\pi}{180}\left|\sin \frac{\pi x}{180}\right| \leq \frac{\pi}{180} < 1$ for all $x \in I$, so by the Mean Value Theorem, for any $x, y \in I$ we have $|Tx - Ty| \leq \frac{\pi}{180} |x - y|$, and hence $T$ is a <em>contraction</em> on that interval. Thus, by the B.F.-P.T., the map $T : x \mapsto \cos \frac{\pi x}{180}$ has a unique fixed point, and for every starting point $x \in I$ (and hence, by our previous observation, for every $x \in \Bbb R$), the sequence $(x, Tx, T^2 x, \ldots)$ converges to $x_0$.</p>
<p>The unit of degree is somewhat arbitrary, so it's more common to consider the corresponding operator $x \mapsto \cos x$ (mechanically, this just amounts to putting your calculator in radian mode). In that case, the fixed point is the <em><a href="http://mathworld.wolfram.com/DottieNumber.html" rel="nofollow noreferrer">Dottie Number</a></em>, $0.739085\!\ldots$.</p>
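<p>The rapid convergence is easy to reproduce (my addition): iterating the degree-mode map $T(x)=\cos\frac{\pi x}{180}$ from any start lands on the number quoted in the question.</p>

```python
# Iterating T(x) = cos(pi*x/180), i.e. cosine with the calculator in degrees,
# converges from any starting value to the fixed point ~0.9998477415310881.
import math

def T(x):
    return math.cos(math.pi * x / 180)

for start in [-50.0, 0.0, 1.0, 2.7, 1000.0]:
    x = start
    for _ in range(50):
        x = T(x)
    assert abs(x - T(x)) < 1e-15                    # x is (numerically) fixed
    assert abs(x - 0.9998477415310881) < 1e-12      # matches the quoted value
```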
|
168,020 | <p>Let $R$ be an local Artinian ring, with maximal ideal $\mathfrak{m}$.</p>
<p>Let $e$ be the smallest positive integer for which $\mathfrak{m}^e=(0)$.</p>
<p>Let $t$ be the smallest positive integer for which $x^t=0$ for all $x \in \mathfrak{m}$.</p>
<p>We know $t \leq e$, with equality holding whenever $\mathfrak{m}$ is a principal ideal (i.e., $R$ is a principal ideal ring). Moreover, equality holds whenever $e \leq 2$.</p>
<p>What (else) is known about the relationship between these two integers?</p>
<p>What about the case when $R$ is the Artinian ring associated to a point of an algebraic curve that is contained in two distinct irreducible components?</p>
| Neil Epstein | 19,045 | <p>To complement Mohan's answer, it is worth noting that there are counterexamples when $R$ contains a field $k$ of prime characteristic $p$. Indeed, when $p\geq 3$, let $R=k[\![X,Y]\!]/(X^p, Y^p)$, and denote the images of $X$, $Y$ in $R$ by $x$, $y$ respectively. Then I claim that $t=p$ but $e\geq 2p-2>p$. To see this, note that any element of $f\in\mathfrak m$ is of the form $f=xg+yh$, and then by Freshman's Dream, $f^p = x^p g^p + y^p h^p = 0$, whereas clearly $x^{p-1} \neq 0$, showing that $t=p$. On the other hand, $0 \neq x^{p-1} y^{p-1} \in {\mathfrak m}^{2p-2}$.</p>
<p>A characteristic 2 counterexample is given by $k[\![X,Y]\!]/(X^4, Y^4)$ ($k$ any field of char $2$), in which case $t=4$ but $e\geq 6$.</p>
<p>To summarize, your question of equality has a 'yes' answer if you are willing to assume the ring contains $\mathbb Q$, but can be 'no' if $R$ contains a field of any other characteristic. <s> I don't know what happens in mixed characteristic.</s></p>
<p>EDIT: Equality fails in any mixed characteristic $(p^c, p)$. To see this, let $A := {\mathbb Z}/(p^c)$ and $R := A[X,Y]/(X^p, Y^p)$. First note that $0\neq p^{c-1} (xy)^{p-1} \in {\mathfrak m}^{c+2p-3}$, whence $e>c+2p-3$. However, I claim that $t \leq c+2p-3$. To see this, note that any element of $\mathfrak m$ has the form $pf+xg+yh$. We have $(xg+yh)^{2p-1}=0$ since every term in the expansion is divisible by $x^p$ or $y^p$, and by a similar computation we have $$
(xg+yh)^{2p-2} = {2p-2 \choose p-1} (xygh)^{p-1}.
$$
We have $$
(pf+xg+yh)^{c+2p-3} = \sum_{i=0}^{c+2p-3} {c+2p-3 \choose i} (pf)^i (xg+yh)^{c+2p-3-i},
$$
and by the above considerations, the only term that potentially survives is the term where $i=c-1$. That is, $$
(pf+xg+yh)^{c+2p-3} = {c+2p-3 \choose c-1} (pf)^{c-1} (xg+yh)^{2p-2} = {c+2p-3 \choose c-1} (pf)^{c-1} {2p-2 \choose p-1} (xygh)^{p-1}.
$$
But it is elementary to check that $p \mid {2p-2 \choose p-1}$, whence $p^c$ divides the displayed term, which is then $0$ in $R$. </p>
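<p>The "elementary to check" divisibility $p \mid \binom{2p-2}{p-1}$ can be confirmed quickly (my addition). In fact $\binom{2n}{n} = (n+1)\,C_n$ with $C_n$ the $n$-th Catalan number, so $\binom{2p-2}{p-1} = p \cdot C_{p-1}$.</p>

```python
# Check p | C(2p-2, p-1) for small primes p >= 3
# (true since C(2p-2, p-1) = p * Catalan(p-1)).
from math import comb

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29]:
    assert comb(2 * p - 2, p - 1) % p == 0
```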
|
114,122 | <p>I am trying to figure out the maximum possible combinations of a (HEX) string, with the following rules:</p>
<ul>
<li>All characters in uppercase hex (ABCDEF0123456789)</li>
<li>The output string must be exactly 10 characters long</li>
<li>The string must contain at least 1 letter</li>
<li>The string must contain at least 1 number</li>
<li>A number or letter can not be represented more than 2 times</li>
</ul>
<p>I am thinking the easy way to go here (I am most likely wrong, so feel free to correct me):</p>
<ol>
<li>Total possible combinations: $16^{10} = 1,099,511,627,776$</li>
<li>Minus all combinations with just numbers: $10^{10} = 10,000,000,000$</li>
<li>Minus all combinations with just letters: $6^{10} = 60,466,176$</li>
<li>etc...</li>
</ol>
<p>Can someone tell me if this is the right way to go and, if so, how to get the total number of possible combinations where a letter or a number occurs more than twice.</p>
<p>Any input or help would be highly appreciated!</p>
<p>Muchos thanks!</p>
<p>PS.</p>
<p>I don't know if I tagged this question right, sorry :(</p>
<p>DS.</p>
| Syed | 321,426 | <p>Four years and no answer? To continue with your original reasoning...</p>
<ol>
<li>Total amount of numbers: $10^{10}$ or $10,000,000,000$</li>
<li>Total amount of letters: $6^{10}$ or $60,466,176$</li>
<li>Subtract the above two numbers $10^{10}$ and $6^{10}$ from... (drum roll please)... your last criterion! ("A number or letter cannot be represented more than $2$ times").</li>
</ol>
<p>So your final answer is: $264,683,239,424$.</p>
<p>I'm sure you're wondering what the probability of your last criterion is. It would be: $(16^2)\cdot(15^2)\cdot(14^2)\cdot(13^2)\cdot(12^2)$.</p>
<p>That is, for every two positions I reduce the base by one, because once a character has appeared twice it cannot appear again. Thanks.</p>
|
784,753 | <p>In spherical coordinates, we have</p>
<p>$ x = r \sin \theta \cos \phi $;</p>
<p>$ y = r \sin \theta \sin \phi $; and </p>
<p>$z = r \cos \theta $; so that</p>
<p>$dx = \sin \theta \cos \phi\, dr + r \cos \phi \cos \theta \,d\theta – r \sin \theta \sin \phi \,d\phi$;</p>
<p>$dy = \sin \theta \sin \phi \,dr + r \sin \phi \cos \theta \,d\theta + r \sin \theta \cos \phi \,d\phi$; and</p>
<p>$dz = \cos \theta\, dr – r \sin \theta\, d\theta$</p>
<p>The above is obtained by applying the chain rule of partial differentiation.</p>
<p>But in a physics book I’m reading, the authors define a volume element $dv = dx\, dy\, dz$, which when converted to spherical coordinates, equals $r \,dr\, d\theta r \sin\theta \,d\phi$. How do the authors obtain this form?</p>
| Community | -1 | <p>$dV=dx\,dy\,dz=\left|\frac{\partial(x,y,z)}{\partial(r,\theta,\phi)}\right|dr\,d\theta\, d\phi=r^2\sin\theta\,dr\,d\theta\,d\phi$</p>
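<p>The Jacobian determinant $r^2\sin\theta$ can be verified numerically from the coordinate formulas in the question (my addition, stdlib only):</p>

```python
# Check that the spherical-coordinate Jacobian determinant is r^2 sin(theta),
# using central finite differences.
import math

def spherical(r, th, ph):
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def jac_det(r, th, ph, h=1e-6):
    cols = []
    for i in range(3):
        p_plus = [r, th, ph];  p_plus[i] += h
        p_minus = [r, th, ph]; p_minus[i] -= h
        f1, f2 = spherical(*p_plus), spherical(*p_minus)
        cols.append([(a - b) / (2 * h) for a, b in zip(f1, f2)])
    # 3x3 determinant with m[i][j] = d(x_i)/d(q_j)
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

r, th, ph = 2.0, 0.7, 1.1
assert abs(jac_det(r, th, ph) - r**2 * math.sin(th)) < 1e-6
```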
|
<p>How can we evaluate $\lim _{x\rightarrow \infty }\int _0^x\:e^{t^2}\,dt$?</p>
<p>P.S.: This was my method:
$\int _0^x\:\:e^{t^2}\,dt>\int _1^x\:e^t\,dt=e^x-e$ for $x>1$, which is divergent. All your answers helped me to think otherwise; maybe my method helps someone else :D</p>
| Teoc | 190,244 | <p>$e^{t^2}> e^t$ for $t\ge 1$, and the part from $0$ to $1$ is finite; the integral $\int_1^\infty e^t\,dt$ diverges. Therefore, by the comparison test, our integral diverges too.</p>
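<p>A small numeric illustration of the comparison (my addition): even a crude estimate of $\int_0^x e^{t^2}\,dt$ already dominates $e^x - e$.</p>

```python
# Midpoint-rule estimate of the integral of e^{t^2} on [0, x]; since e^{t^2} is
# convex, the midpoint rule underestimates, yet still beats e^x - e.
import math

def lower_integral(x, n=20000):
    h = x / n
    return h * sum(math.exp(((i + 0.5) * h)**2) for i in range(n))

for x in [2.0, 3.0, 4.0]:
    assert lower_integral(x) > math.exp(x) - math.e
```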
|
1,934,033 | <p>I'm new here. I wish to ask a question regarding predicate logic:</p>
<p>I was given three predicates:</p>
<p><strong>parent(p,q): p is the parent of q.</strong></p>
<p><strong>female(p): p is a female.</strong></p>
<p><strong>p = q: p and q are the same person.</strong></p>
<p>Now, I was tasked with translating this sentence: Alice has a daughter.</p>
<p>My answer was: <strong>There exists a q such that parent(Alice,female(q)).</strong></p>
<p>The answer given is: <strong>There exists a q such that female(q) AND parent(Alice,q).</strong></p>
<p>Is it correct to have a predicate (in this case, female) within another predicate (in this case, parent)?</p>
<p>Much appreciated.</p>
| Parcly Taxel | 357,390 | <p>Intuitively, a predicate in a predicate doesn't make sense; predicates only take terms as arguments. Using the normal convention of abbreviating predicates and terms by letters ($P(p,q)$ for parent, $F(q)$ for female, $a$ for Alice), your example is
$$\exists q\ P(a,F(q))$$
which is interpreted as "Alice is the parent of <em>true</em>" – absurd, since children aren't truth values. In other words, your attempt does not produce a well-formed formula.</p>
<p>The given answer translates as
$$\exists q\ F(q)\land P(a,q)$$
which is well-formed.</p>
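<p>The distinction can be made concrete by modelling predicates as boolean-valued functions over a tiny made-up domain (my addition; the names and facts below are illustrative only):</p>

```python
# Model of ∃q. female(q) ∧ parent(alice, q): predicates take *terms* (people)
# as arguments and return booleans; they never take other predicates.
people = ["alice", "bob", "carol"]
parent = {("alice", "carol")}      # parent(p, q): p is the parent of q
female = {"alice", "carol"}        # female(p)

has_daughter = any(q in female and ("alice", q) in parent for q in people)
assert has_daughter

# The malformed parent(alice, female(q)) would pass a truth value where a
# person is expected -- the type error the answer points out.
```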
|
2,616,280 | <p>This question seems obvious, but I'm not secure of my proof.</p>
<blockquote>
<p>If a compact set $V\subset \mathbb{R^n}$ is covered by a finite union of open balls of common radii $C(r):=\bigcup_{i=1}^m B(c_i,r)$, then is it true that there exists $0<s<r$ such that $V\subseteq C(s)$ as well? The centers are fixed.</p>
</blockquote>
<p>I believe this statement is true and this is my attempt to prove it:</p>
<p>Each point $v\in V$ is an interior point of at least one ball (suppose its index is $j_v$); that is, there exists $\varepsilon_v>0$ such that $B(v,\varepsilon_v)\subseteq B(c_{j_v},r)$, so $v\in B(c_{j_v},r-\varepsilon_v)$. Let's consider only the greatest $\varepsilon_v$ such that this holds. Then defining $\varepsilon:=\inf\{\varepsilon_v\mid v\in V\}$ and $s=r-\varepsilon$ we get $V\subseteq C(s)$.</p>
<blockquote>
<p>But why is $\varepsilon$ not zero? I thought that considering the greatest $\varepsilon_v$ was important, but still couldn't convince myself.</p>
</blockquote>
<p>I would appreciate any help.</p>
| Umberto P. | 67,536 | <p>Let $X$ denote the set of centers: $X = \{c_1,\ldots,c_m\}$. </p>
<p>The function $\phi(x) = \mathop{\rm dist} (x,X)$ is continuous on $\mathbb R^n$ and attains a maximum value on $V$ because $V$ is compact. </p>
<p>Note that if $x \in V$, then by definition $\phi(x) < r$. Whatever maximum it attains must be less than $r$. </p>
<p>Choose $s$ to lie in between this maximum and $r$.</p>
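<p>As a quick numerical sanity check of this argument (a sketch with made-up centers and a finite sample standing in for the compact set $V$, not part of the original answer):</p>

```python
import math

# Toy instance: V is a grid sample of the unit square, covered by open
# balls of radius r around the four corners.
centers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
r = 0.8

def phi(p):
    # Distance from p to the finite set of centers (continuous in p).
    return min(math.dist(p, c) for c in centers)

n = 50
V = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]

m = max(phi(p) for p in V)   # the maximum is attained (finite sample)
assert m < r                  # every sample point lies in some open ball
s = (m + r) / 2               # any s strictly between the maximum and r works
assert all(phi(p) < s for p in V) and s < r
```

<p>Here the maximum of $\phi$ is attained at the center of the square, $\sqrt{1/2}\approx 0.707 < 0.8$, so any $s$ in between (e.g. $\approx 0.754$) still covers the sample.</p>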
|
2,616,280 | <p>This question seems obvious, but I'm not secure of my proof.</p>
<blockquote>
<p>If a compact set $V\subset \mathbb{R^n}$ is covered by a finite union of open balls of common radius $r$, $C(r):=\bigcup_{i=1}^m B(c_i,r)$, then is it true that there exists $0<s<r$ such that $V\subseteq C(s)$ as well? The centers are fixed.</p>
</blockquote>
<p>I believe this statement is true and this is my attempt to prove it:</p>
<p>Each point $v\in V$ is an interior point of at least one ball (suppose its index is $j_v$), that is, there exists $\varepsilon_v>0$ such that $B(v,\varepsilon_v)\subseteq B(c_{j_v},r)$, so $v\in B(c_{j_v},r-\varepsilon_v)$. Let's consider only the greatest $\varepsilon_v$ such that this holds. Then defining $\varepsilon:=\inf\{\varepsilon_v\mid v\in V\}$ and $s=r-\varepsilon$ we get $V\subseteq C(s)$.</p>
<blockquote>
<p>But why is $\varepsilon$ not zero? I thought that considering the greatest $\varepsilon_v$ was important, but still couldn't convince myself.</p>
</blockquote>
<p>I would appreciate any help.</p>
| Mikhail Katz | 72,694 | <p>Replace each open ball $B_i$ of radius $r$ in the cover by the union of concentric open balls of radii strictly smaller than $r$. You get an infinite cover of $V$. By compactness there is a finite subcover. By construction the radii are smaller than before. Finally we choose the maximal radius (for all of the finitely many balls) which is still smaller than $r$.</p>
|
79,869 | <p>Let <span class="math-container">$(X,\mu,\mathcal{F})$</span> be a probability space. The paper <em><a href="http://projecteuclid.org/euclid.aoms/1177693405" rel="nofollow noreferrer">Equiconvergence of Martingales</a></em> by Edward Boylan introduced a pseudometric on sub-<span class="math-container">$\sigma$</span>-fields (sub-<span class="math-container">$\sigma$</span>-algebras) of <span class="math-container">$\mathcal{F}$</span> as follows:</p>
<p><span class="math-container">$\rho(\mathcal{G},\mathcal{H})
:= \sup_{A\in \mathcal{G}} \inf_{B\in \mathcal{H}} \mu(A \triangle B) + \sup_{B\in \mathcal{H}} \inf_{A\in \mathcal{G}} \mu(A \triangle B)$</span></p>
<p>where <span class="math-container">$A \triangle B$</span> is symmetric difference.</p>
<p>It seems to be called the Hausdorff pseudometric on <span class="math-container">$\sigma$</span>-fields in later papers. (Does anyone know why?) Further, if we only consider <span class="math-container">$\mu$</span>-complete <span class="math-container">$\sigma$</span>-fields, then <span class="math-container">$\rho$</span> is a metric. Also, the paper shows that <span class="math-container">$\rho$</span> is complete.</p>
<blockquote>
<p>Is this metric <span class="math-container">$\rho$</span>
separable---assuming, say, <span class="math-container">$X=[0,1]$</span>
and <span class="math-container">$\mu$</span> is the Lebesgue measure?</p>
</blockquote>
<p>My guess is that it is not, but I cannot off-hand come up with a witnessing set to show this. Considering the paper is 40 years old, I imagine this might be well-known. And if it is not separable, then my follow-up question is this:</p>
<blockquote>
<p>Is there a known separable, complete
metric on the space of
<span class="math-container">$\mu$</span>-complete sub-<span class="math-container">$\sigma$</span>-fields?</p>
</blockquote>
<p>For reference, I found the following list <a href="https://groups.google.com/forum/#!searchin/sci.math/%22Hausdorff-metric$20of$20sigma-fields%22%7Csort:date/sci.math/iz249jCUvEU/rkKOei1NV5YJ" rel="nofollow noreferrer">online</a>, compiled by Dave L. Renfro, of papers dealing with metrics on <span class="math-container">$\sigma$</span>-fields (listed in chronological order). I quickly looked though these papers and didn't find what I was looking for, but maybe I missed something.</p>
<ol>
<li><p>Edward S. Boylan, "Equiconvergence of martingales",<br>
Annals of Mathematical Statistics 42
(1971), 552-559. [MR 44 #7603; Zbl 218.60049]</p></li>
<li><p>Jacques Neveu, "Note on the tightness of the metric on the
set of complete sub sigma-algebras of a probability space",
Annals of Mathematical Statistics 43 (1972), 1369-1371.
[MR 48 #5133; Zbl 241.60036]</p></li>
<li><p>Hirokichi Kudo, "A note on the strong convergence of
sigma-algebras", Annals of Probability 2 (1974), 76-83.
[MR 51 #6900; Zbl 275.60007]</p></li>
<li><p>Lothar Rogge, "Uniform inequalities for conditional
expectations", Annals of Probability 2 (1974), 486-489.
[MR 50 #14858; Zbl 285.28010]</p></li>
<li><p>Louis H. Blake, "Some further results concerning
equiconvergence of martingales", Revue Roumaine de
Mathématiques Pures et Appliquées 28 (1983), 927-932.
[MR 86i:60130; Zbl 524.60029]</p></li>
<li><p>Hari G. Mukerjee, "Almost sure equiconvergence of
conditional expectations", Annals of Probability 12
(1984), 733-741. [MR 86c:28012; Zbl 557.28001]</p></li>
<li><p>Beth Allen, "Convergence of sigma-fields and applications
to mathematical economics", pp. 161-174 in Gerald Hammer
and Diethard Pallaschke (editors), SELECTED TOPICS IN
OPERATIONS RESEARCH AND MATHEMATICAL ECONOMICS (Proceedings,
Karlsruhe, West Germany, 22-25 August 1983), Lecture Notes
in Economics and Mathematical Systems #226, Springer-Verlag, 1984.
[MR 86f:90029; Zbl 547.28001]</p></li>
<li><p>Dieter Landers and Lothar Rogge, "An inequality for the
Hausdorff-metric of sigma-fields", Annals of Probability
14 (1986), 724-730. [MR 87h:60006; Zbl 597.60003]</p></li>
<li><p>Abdallah M. Al-Rashed, "On countable unions of sigma
algebras", Journal of Karachi Mathematical Association
8 (1986), 57-63. [MR 88f:28001; Zbl 639.28001]</p></li>
<li><p>Maxwell B. Stinchcombe, "A further note on Bayesian
information topologies", Journal of Mathematical Economics
22 (1993), 189-193. [MR 93k:60011; Zbl 773.90016]</p></li>
<li><p>Timothy Van Zandt, "The Hausdorff metric of sigma-fields
and the value of information", Annals of Probability 21
(1993), 161-167. [MR 94d:62012; Zbl 777.62007]</p></li>
<li><p>Xikui Wang, "Completeness of the set of sub-sigma-algebras",
International Journal of Mathematics and Mathematical
Sciences 16 (1993), 511-514. [MR 94f:28002; Zbl 782.28001]</p></li>
<li><p>Zvi Artstein, "Compact convergence of sigma-fields and
relaxed conditional expectation", Probability Theory and
Related Fields [= Zeitschrift für Wahrscheinlichkeitstheorie]
120 (2001), 369-394. [MR 2002g:28003; Zbl 992.28001]</p></li>
</ol>
| Yuri Bakhtin | 2,968 | <p>I suspect that the following is a dense set:</p>
<p>For each $n\in\mathbb{N}$ take all sub-algebras of the finite sigma-algebra generated by intervals of the form $[i/2^n,(i+1)/2^n)$, $i=0,\ldots,2^n-1$.</p>
|
39,423 | <ul>
<li><p>case1</p>
<pre><code>Options[f] = {"t" -> "0"};
f[___, OptionsPattern[]] := StringReplace["content", "t" :> OptionValue["t"]]
f[]
(*
con0en0
*)
</code></pre></li>
<li><p>case2</p>
<pre><code>rule = {"t" -> OptionValue["t1"]};
Options[gg] = {"t1" -> "T1", "t2" -> "1"};
gg[___, OptionsPattern[]] := StringReplace["content", rule]
gg[1]
(*
con~~OptionValue[t1]~~en~~OptionValue[t1]
*)
</code></pre></li>
</ul>
<p>Here <code>OptionValue</code> couldn't get the value of "t1".
So, how can I make case 2 work like case 1? </p>
<hr>
<p>One solution I found is this:</p>
<pre><code>Options[gg]={"t1"->"T1","t2"->"1"};
gg[___,OptionsPattern[]]:=Hold[StringReplace]["content",rule]//ReleaseHold//Evaluate
</code></pre>
<p>Any simpler methods?</p>
| Mr.Wizard | 121 | <p>It would help if you outlined you intended use of this behavior, as without that it is not clear what is and is not helpful.</p>
<h3>Single function case</h3>
<p>You can use the two-argument form of <code>OptionValue</code>:</p>
<pre><code>rule = {"t" :> OptionValue[gg, "t1"]}; (* note RuleDelayed *)
Options[gg] = {"t1" -> "T1", "t2" -> "1"};
gg[___, OptionsPattern[]] := StringReplace["content", rule]
gg[1]
</code></pre>
<blockquote>
<pre><code>"conT1enT1"
</code></pre>
</blockquote>
<p>This works just fine with a single function (<code>gg</code>), but it is not directly applicable if you intend to use this rule in multiple functions.</p>
<h3>Arbitrary function case</h3>
<p>As you apparently understand based on your workaround, the single-argument <code>OptionValue</code> expression must appear literally on the right-hand side of a rule or definition with <code>OptionsPattern</code>, which your use of <code>Evaluate</code> accomplishes. Any other method that does the same can be used, e.g.:</p>
<pre><code>With[{rule = rule},
gg[___, OptionsPattern[]] := StringReplace["content", rule]
]
</code></pre>
<p>Or:</p>
<pre><code>(gg[___, OptionsPattern[]] := StringReplace["content", #]) & @ rule
</code></pre>
<p>For the arbitrary-function case I see no simpler method than these.</p>
|
<p>Since I self-study mathematical analysis without a <em>formal</em> teacher, I can only appeal for help from our site most of the time. It's obvious that to grasp the underlying concepts in mathematics, we must roll up our sleeves and solve problems.</p>
<p>It's clear that there are actually mistakes and misunderstandings that are too subtle for me to recognize, so it's very natural for me to ask for proof verifications.</p>
<p>Even though I tried to write my proofs as detailed and clear as possible, they seem to attract little attention from other users. It seems to me that proof checking is a boring and tedious job, but it is essential for me (and possibly for all of us) to know whether and where I get wrong.</p>
<p>How can I make my post for proof verification more attractive and consequently attract more attention?</p>
<p>Below are questions for which I have not received any answer. Most of them are related to the Cantor-Bernstein-Schröder theorem. It would be great if someone could help me improve them so that they get an answer. Thank you so much!</p>
<p><a href="https://math.stackexchange.com/questions/2813526/is-this-a-mistake-in-the-proof-of-halls-marriage-theorem-from-https-proofwiki">Is this a mistake in the proof of Hall's Marriage Theorem from https://proofwiki.org?</a></p>
<p><a href="https://math.stackexchange.com/questions/2748266/top-down-and-bottom-up-proofs-of-a-lemma-used-to-prove-cantor-bernstein-schr%C3%B6der">Top-down and Bottom-up proofs of a lemma used to prove Cantor-Bernstein-Schröder theorem and their connection</a> (this is the question I would most like to receive an answer to)</p>
<p><a href="https://math.stackexchange.com/questions/2759971/is-my-proof-of-cantor-bernstein-schr%C3%B6der-theorem-correct">Is my proof of Cantor-Bernstein-Schröder theorem correct?</a></p>
<p><a href="https://math.stackexchange.com/questions/2751389/bottom-up-proof-of-a-lemma-used-to-prove-bernstein-schr%C3%B6der-theorem">Bottom-up proof of a lemma used to prove Bernstein-Schröder theorem</a></p>
<p><a href="https://math.stackexchange.com/questions/2749527/julius-k%C3%B6nigs-proof-of-schr%C3%B6der-bernstein-theorem">Julius König's proof of Schröder–Bernstein theorem</a></p>
| Andres Mejia | 297,998 | <p>I'm not sure about appealing, but if you're looking for an answer, I think that there are some alternatives. I'm mostly just speaking from personal experience, in the hope of adding to the nice list of suggestions by Arnaud Mortier.</p>
<ol>
<li><p>Explain succinctly the proof idea. If it is novel/different and seems likely to work, I will usually read the rest of the question more carefully. I.e., if the proof is interesting you should mention why. Your question <a href="https://math.stackexchange.com/questions/2751389/bottom-up-proof-of-a-lemma-used-to-prove-bernstein-schr%C3%B6der-theorem">here</a> I honestly would not read, because I can't see from a bunch of $\subseteq$ how anything is "bottom up" etc. This additionally will help garner comments, because specialists will usually say "this will not work because of $x$." <a href="https://math.stackexchange.com/questions/2228755/attempted-proof-of-an-open-mapping-theorem-for-lie-groups">Here</a>, I added a single little sentence at the beginning that I think helped get a good comment that made me feel reassured and a nice answer.</p></li>
<li><p>When the problem is localized (you are unsure about a statement or two,) put that at the <em>top</em> of the question and mention that you will need it to prove $xyz$.</p></li>
<li><p>If you are just uncertain about a proof, as you were in <a href="https://math.stackexchange.com/questions/2759971/is-my-proof-of-cantor-bernstein-schr%C3%B6der-theorem-correct">this question</a>, and the problem is not "localized," then just mention why you are unsure.</p></li>
<li><p>If you are not convinced that a particular method works, ask a question in slightly greater generality. I did that <a href="https://math.stackexchange.com/questions/2803546/are-there-intersection-theoretic-proofs-for-ham-sandwich-type-theorems">here</a> and it seemed to gather some attention (although no answer) <em>even though I had a specific proof in mind</em>. Basically these are along the lines of "will this type of approach work?" Questions of that sort on this sight (granted that a genuine attempt is provided) do a few things. They give answerers a little more freedom to add something interesting, open the subject matter up to a wider range of audiences, and are generally just interesting to casual readers.</p></li>
</ol>
|
108,010 | <p>It is not necessarily true that the closure of an open ball $B_{r}(x)$ is equal
to the closed ball of the same radius $r$ centered at the same point $x$. For a quick example, take $X$ to be any set and define a metric
$$
d(x,y)=
\begin{cases}
0\qquad&\text{if and only if $x=y$}\\
1&\text{otherwise}
\end{cases}
$$
The open ball of radius $1$ around any point $x$ is the singleton set $\{x\}$. Its closure is also the singleton set. However, the closed ball of radius $1$ around $x$ is the whole space. </p>
<p>I like this example (even though it is quite artificial) because it can show that this often-assumed falsehood can fail in catastrophic ways. My question is: are there necessary and sufficient conditions that can be placed on the metric space $(X,d)$ which would force the balls to be equal? </p>
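<p>In a finite discrete space both balls can be computed exhaustively, which makes the failure concrete (an illustrative sketch on a made-up three-point set):</p>

```python
X = {"a", "b", "c"}

def d(x, y):
    # The discrete metric from the question.
    return 0 if x == y else 1

def open_ball(x, r):
    return {y for y in X if d(x, y) < r}

def closed_ball(x, r):
    return {y for y in X if d(x, y) <= r}

B = open_ball("a", 1)
# In the discrete metric every subset is closed, so the closure of B is B.
assert B == {"a"}
assert closed_ball("a", 1) == X       # the closed ball is everything
assert B != closed_ball("a", 1)       # closure of open ball != closed ball
```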
| JDH | 413 | <p>Here is a characterization that is straight from the definitions, but which it seems may be useful when verifying that a particular space has the property.</p>
<p>For any metric space <span class="math-container">$(X,d)$</span>, the following are equivalent:</p>
<ul>
<li>For any <span class="math-container">$x\in X$</span> and radius <span class="math-container">$r$</span>, the closure of the open ball of radius <span class="math-container">$r$</span> around <span class="math-container">$x$</span> is the closed ball of radius <span class="math-container">$r$</span>.</li>
<li>For any two distinct points <span class="math-container">$x,y$</span> in the space and any positive <span class="math-container">$\epsilon$</span>, there is a point <span class="math-container">$z$</span> within <span class="math-container">$\epsilon$</span> of <span class="math-container">$y$</span>, and closer to <span class="math-container">$x$</span> than <span class="math-container">$y$</span> is.
That is, for every <span class="math-container">$x\neq y$</span> and <span class="math-container">$\epsilon\gt 0$</span>, there is <span class="math-container">$z$</span> with <span class="math-container">$d(z,y)<\epsilon$</span> and <span class="math-container">$d(x,z)<d(x,y)$</span>.</li>
</ul>
<p>Proof. If the closed ball property holds, then fix any <span class="math-container">$x,y$</span> with <span class="math-container">$r=d(x,y)$</span>. Since the closure of <span class="math-container">$B_r(x)$</span> includes <span class="math-container">$y$</span>, the second property follows. Conversely, if the second property holds, then if <span class="math-container">$r=d(x,y)$</span>, then the property ensures that <span class="math-container">$y$</span> is in the closure of <span class="math-container">$B_r(x)$</span>, and so the closure of the open ball includes the closed ball (and it is easy to see it does not include anything more than this, since if <span class="math-container">$g$</span> belongs to the closure of <span class="math-container">$B_r(x)$</span> then <span class="math-container">$d(x,g) \le r$</span> and so <span class="math-container">$g$</span> must also belong to the closed ball of radius <span class="math-container">$r$</span> centered at <span class="math-container">$x$</span>).
QED</p>
|
2,612,308 | <p>Obviously we can rearrange for <span class="math-container">$x$</span> in a polynomial of degree 2. </p>
<p>Let <span class="math-container">$y=ax^2+bx+c$</span></p>
<p>then </p>
<p><span class="math-container">$x=\frac{-b\pm\sqrt{b^2-4ac+4ay}}{2a}$</span></p>
<p>Similarly, for <span class="math-container">$y=ax^3+bx^2+cx+d$</span>, although it is very difficult and long, there apparently also exists a way to make <span class="math-container">$x$</span> the subject. </p>
<p>Now I'm wondering whether it is always possible to make <span class="math-container">$x$</span> the subject when <span class="math-container">$y=p_n(x)$</span>, where <span class="math-container">$p_n(x)$</span> is any polynomial of degree <span class="math-container">$n$</span>.</p>
<p>If so, is it always possible to make <span class="math-container">$x$</span> the subject when <span class="math-container">$y=f(x)$</span>, where <span class="math-container">$f(x)$</span> is any function of <span class="math-container">$x$</span>. </p>
<p>And lastly, is there always an exact way to get a desired expression on one side of an equation, obviously still being equivalent to the initial one. If this sounds vague, here is the equation that got me thinking about this:</p>
<p><span class="math-container">$y^3+x^3=3xy$</span></p>
<p>is there a way to make <span class="math-container">$x$</span> the subject?</p>
| Ross Millikan | 1,827 | <p>You have to define what equations you care about and what you would find desirable, but it seems likely the answer is no. You are presumably familiar with the solutions of linear and quadratic equations. Your equation is a cubic, so you can feed it to <a href="https://en.wikipedia.org/wiki/Cubic_function" rel="noreferrer">Cardano's formula</a> and get $x=$ stuff. Quartic equations can be solved, too, but it is such a mess most people ignore the fact. We know there is no general solution in radicals for quintic polynomials. Equations which mix exponentials and polynomials also often cannot be solved for one variable. We get asked about them a lot and usually recommend numerical methods to find an approximate solution.</p>
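<p>As a sketch of the numerical fallback mentioned above (a toy example of my own choosing, assuming a strictly increasing polynomial so the inverse is well defined):</p>

```python
def invert(f, y, lo=-1e6, hi=1e6, tol=1e-12):
    # Solve f(x) = y by bisection; assumes f is increasing on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = lambda x: x ** 3 + x       # monotone, so "x as the subject" exists
x = invert(f, 10.0)            # x^3 + x = 10 has the root x = 2
```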
|
1,285,774 | <p>I have looked at similar questions under 'Questions that may already have your answer" and unless I have missed it, I cannot find a similar question.</p>
<p>I am trying to answer the following:</p>
<p>Let $A = \left(\begin{matrix}
a & b \\
b & d \\
\end{matrix}\right)$ be a symmetric 2 x 2 matrix. Prove that $A$ is positive definite if and only if $a > 0$ and $\det(A) > 0$. [Hint: $ax^2 + 2bxy + dy^2 = a\left(x+\frac{b}{a}y\right)^2 + \left(d-\frac{b^2}{a}\right)y^2$.]</p>
<p>I can see that in order for $\det(A)$ to be greater than $0$, $ad > b^2$.
I have also tried to find the eigenvalues corresponding to the standard matrix $A$ (of the quadratic form) and somehow, I get to $ad < b^2$, which is a contradiction. Could someone point me in the right direction? </p>
| abel | 9,252 | <p>suppose $\pmatrix{a&b\\b&d} $ is positive definite. we need $$ax^2 + 2bxy + dy^2 > 0 \text{ for all } x, y, x^2 + y^2 \neq 0.$$ taking $x = 1, y = 0$ gives $a > 0.$ </p>
<p>taking $x = -b, y = a$ gives $ab^2 -2ab^2+da^2 > 0 \to ad - b^2 > 0$ </p>
<p>now, for the other direction. suppose $a > 0 \text{ and }ad - b^2 > 0.$ then $$a(ax^2 + 2bxy + dy^2) = (ax+by)^2 + (ad-b^2)y^2 > 0$$ which in turn implies $$ax^2 + 2bxy+dy^2 > 0.$$</p>
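<p>The completion-of-squares identity from the hint is easy to verify numerically (a quick sketch on sample values of my own choosing, not part of the answer):</p>

```python
def form(a, b, d, x, y):
    # The quadratic form of the symmetric matrix [[a, b], [b, d]].
    return a * x * x + 2 * b * x * y + d * y * y

# The identity: a*(ax^2 + 2bxy + dy^2) = (ax + by)^2 + (ad - b^2) y^2
a, b, d = 2.0, 1.0, 3.0
for x in (-1.0, 0.5, 2.0):
    for y in (-2.0, 0.0, 1.0):
        lhs = a * form(a, b, d, x, y)
        rhs = (a * x + b * y) ** 2 + (a * d - b * b) * y * y
        assert abs(lhs - rhs) < 1e-9

# With a > 0 and ad - b^2 > 0 the right side is positive unless x = y = 0;
# a matrix with ad - b^2 < 0 is indefinite, witnessed by a direction:
assert form(2, 1, 3, 1.0, -1.0) > 0    # 2 - 2 + 3 = 3
assert form(1, 2, 1, 1.0, -1.0) < 0    # 1 - 4 + 1 = -2
```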
|
2,342,124 | <p><a href="https://i.stack.imgur.com/QdbFG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QdbFG.png" alt="enter image description here"></a></p>
<p>Well this seems like <span class="math-container">$1-|t|$</span> for <span class="math-container">$|t|<1$</span> and <span class="math-container">$0$</span> for <span class="math-container">$|t|>1$</span> . Taking the Fourier transform <span class="math-container">$$X(ω) = \int_{-\infty}^\infty(1-|t|)e^{-jωt}dt\\=\int_{-\infty}^\infty e^{-jωt}dt -\int_{-\infty}^\infty|t|e^{-jωt}dt\\=2\piδ(ω)-\int_{-1}^1|t|e^{-jωt}dt\\=2πδ(ω)-\int_{-1}^0-te^{-jωt}dt-\int_{0}^1te^{-jωt}dt$$</span></p>
<p>The infinity integral goes from -1 to 1 since the function is zero elsewhere. Now I'm having trouble continuing with this. It doesn't seem to give me the result of the problem's solution which is <span class="math-container">$sinc^2(\frac{ω}{2\pi})$</span> , the sampling function.</p>
| Michael Hardy | 11,667 | <p>If you have $dx\,dy\,dz = \rho^2 \sin\varphi \, d\rho\, d\theta\,d\varphi$, then this is
\begin{align}
& \int_0^\pi \int_0^{2\pi} \int_0^1 \rho (\rho^2\sin\varphi \, d\rho \, d\theta \, d\varphi) \\[10pt]
= {} & \int_0^\pi\left( \int_0^{2\pi} \left( \int_0^1 \rho^3 \sin\varphi \, d\rho \right) d\theta \right) d\varphi \\[10pt]
= {} & \int_0^\pi\left( \sin\varphi \left( \int_0^{2\pi} \left( \int_0^1 \rho^3 \, d\rho \right) d\theta \right) \right) d\varphi \text{ since } \sin\varphi \text{ does not depend on } \rho \text{ or } \theta \\[10pt]
= {} & \int_0^\pi \sin\varphi\,d\varphi \cdot \int_0^{2\pi} \int_0^1 \rho^3 \,d\rho\,d\theta \text{ since the latter integral does not depend on } \varphi \\[10pt]
= {} & \int_0^\pi \sin\varphi\,d\varphi \cdot \int_0^1 \rho^3 \,d\rho \cdot \int_0^{2\pi} 1\,d\theta \text{ since the inner integral above does not depend on } \theta \\[10pt]
= {} & 2\cdot \frac 1 4 \cdot 2\pi = \pi.
\end{align}</p>
<p>Three times we used the fact that $\displaystyle \int_a^b \text{constant} \times f(\alpha)\,d\alpha = \text{constant} \times \int_a^b f(\alpha)\,d\alpha.$ <b>“Constant”</b> means something that does not change as $\alpha$ changes.</p>
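<p>The factorization into three one-dimensional integrals can be confirmed with a crude midpoint rule (a numerical sanity check, not part of the derivation above):</p>

```python
import math

def midpoint(f, a, b, n=2000):
    # Crude midpoint-rule quadrature of f on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

I_phi = midpoint(math.sin, 0.0, math.pi)          # close to 2
I_rho = midpoint(lambda r: r ** 3, 0.0, 1.0)      # close to 1/4
I_theta = 2.0 * math.pi                           # integral of dtheta over [0, 2pi]

product = I_phi * I_rho * I_theta                 # close to 2 * 1/4 * 2pi = pi
```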
|
3,832,484 | <p>Title's all there is to say.
I'm very new to linear algebra and haven't wrapped my head around determinant rules yet.
Any help would be appreciated.</p>
| fleablood | 280,126 | <p>You should wrestle this to the ground to get a feel for it.</p>
<p>If <span class="math-container">$A = (a_{ij})$</span> then <span class="math-container">$A + A = (a_{ij}+a_{ij}) = (2a_{ij}) = (b_{ij})$</span> where <span class="math-container">$b_{ij} = 2a_{ij}$</span></p>
<p>Now the determinant of an <span class="math-container">$n\times n$</span> matrix will be some sum/difference combination of products of <span class="math-container">$n$</span> of the <span class="math-container">$a_{ij}$</span> terms. For example, for a <span class="math-container">$3\times 3$</span> matrix the determinant is <span class="math-container">$a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}$</span>. We can write that as <span class="math-container">$\sum\limits_{\text{some conditions}}\pm a_{i,j}a_{k,l}a_{m,n}$</span>. (We don't actually <em>care</em> about the details here. Those terms are what we <em>base</em> the determinant on.)</p>
<p>so the determinant of <span class="math-container">$(b_{ij}) = (2a_{ij})$</span> will be</p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm b_{i,j}b_{k,l}b_{m,n}=$</span></p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm 2\cdot a_{i,j}2\cdot a_{k,l}2\cdot a_{m,n}=$</span></p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm 2^3a_{i,j}a_{k,l}a_{m,n}=$</span></p>
<p><span class="math-container">$2^3\sum\limits_{\text{some conditions}}\pm a_{i,j}a_{k,l}a_{m,n}=$</span></p>
<p><span class="math-container">$2^3 \det A$</span></p>
<p>....</p>
<p>And for an <span class="math-container">$n\times n$</span> matrix we would have</p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm b_{i,j}b_{k,l}b_{m,p}.......=$</span></p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm 2\cdot a_{i,j}2\cdot a_{k,l}2\cdot a_{m,p}.......=$</span></p>
<p><span class="math-container">$\sum\limits_{\text{some conditions}}\pm 2^na_{i,j}a_{k,l}a_{m,p}......=$</span></p>
<p><span class="math-container">$2^n\sum\limits_{\text{some conditions}}\pm a_{i,j}a_{k,l}a_{m,p}......=$</span></p>
<p><span class="math-container">$2^n \det A$</span>.</p>
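<p>The scaling rule is easy to confirm with a small pure-Python determinant via Laplace expansion (an illustrative check on a sample matrix of my own choosing):</p>

```python
def det(M):
    # Determinant by Laplace expansion along the first row (fine for small n).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 0], [3, -1, 4], [2, 2, 1]]
n = len(A)
A2 = [[2 * x for x in row] for row in A]   # A + A = 2A
assert det(A2) == 2 ** n * det(A)          # det(2A) = 2^n det(A)
```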
|
4,017,964 | <p>Write down in roster notation a set of cardinality 3, of which all elements are sets, and which
satisfies the following property:
∀A, B ∈ U (A ⊆ B ⇒ B ⊆ A).
No justification is needed. Do not use ellipses (“…”) in your answer.</p>
<p>What I understand from this question is that all 3 sets have to be the same to fulfill the statement. An example of an answer would be { {1,2,3},{1,2,3},{1,2,3}}. Am I right?</p>
| Ross Millikan | 1,827 | <p><span class="math-container">$U$</span> is a set of sets. The inner sets have to have three members each but nothing is specified about the size of <span class="math-container">$U$</span>. Your example has two problems. First, the elements of a set have to be distinct. Second, you need a set of outer braces for the set.</p>
<p>It turns out any set of sets of cardinality three where the sets are disjoint works. For any finite sets <span class="math-container">$A,B$</span> of the same size <span class="math-container">$A \subseteq B \implies B \subseteq A$</span>. This is because the only way one finite set can be a subset of another finite set of the same size is for the two sets to be equal.</p>
<p>The simplest <span class="math-container">$U$</span> that satisfies the requirement is the empty set. Since there are no sets that are elements of <span class="math-container">$U$</span> the requirement is vacuously satisfied.</p>
|
3,186,627 | <p>Proposition: Let A be a subset of R which is bounded below. Let B be a subset of R which is bounded above. If <span class="math-container">$\inf(A) < \sup (B) $</span> then there is some <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span> such that <span class="math-container">$a<b$</span>.</p>
<p>Proof: </p>
<p>Let <span class="math-container">$\inf(A) = I$</span> and <span class="math-container">$\sup(B) = S$</span> </p>
<p>So <span class="math-container">$I<S$</span></p>
<p>By definition, <span class="math-container">$\forall a \in A, I\leq a$</span> and <span class="math-container">$\forall b \in B, b\leq S$</span> </p>
<p>Case 1: If I <span class="math-container">$\in A$</span> and S <span class="math-container">$\in B$</span>, we are done.</p>
<p>Case 2: If I <span class="math-container">$\notin A$</span> and S <span class="math-container">$\notin B$</span>, then assume BWOC <span class="math-container">$\nexists a \in A $</span> and <span class="math-container">$b\in B$</span> such that <span class="math-container">$a<b$</span></p>
<p>So <span class="math-container">$\forall a \in A$</span> and <span class="math-container">$\forall b \in B, b \leq a$</span></p>
<p>So <span class="math-container">$\forall a \in A$</span>, <span class="math-container">$a$</span> is an upper bound for B.</p>
<p>But since S is the least upper bound for B, <span class="math-container">$b\leq S<a$</span></p>
<p><span class="math-container">$S<a$</span></p>
<p>So <span class="math-container">$S$</span> is also a lower bound for <span class="math-container">$A.$</span></p>
<p>But since <span class="math-container">$I$</span> is the greatest lower bound for <span class="math-container">$A$</span>, <span class="math-container">$S<I$</span>, which is a contradiction.</p>
<p>Is my proof valid? If not, what am I missing? Would I have to show that this holds for other cases as well? </p>
| Clayton | 43,239 | <p>Your proof works fine; note that what you've done in case <span class="math-container">$2$</span> is actually sufficient to prove the full statement, though. As you say, assume that there do not exist elements <span class="math-container">$a\in A$</span> and <span class="math-container">$b\in B$</span> such that <span class="math-container">$a<b$</span>. Then for any <span class="math-container">$a\in A$</span>, we have that <span class="math-container">$a\geq b$</span> for any <span class="math-container">$b\in B$</span>. In particular, each <span class="math-container">$a$</span> is an upper bound for <span class="math-container">$B$</span>, so that <span class="math-container">$\sup B\leq a$</span>. Since this holds for every <span class="math-container">$a\in A$</span>, it follows that <span class="math-container">$\sup B$</span> is a lower bound for <span class="math-container">$A$</span>, so <span class="math-container">$\sup B\leq \inf A$</span>, and this is a contradiction.</p>
<p>In fact, you don't need to use contradiction; the statement is not too hard to prove directly. Define <span class="math-container">$\varepsilon=\sup B - \inf A$</span>. By definition of infimum and supremum, there exists an <span class="math-container">$a\in[\inf A,\inf A+\varepsilon/2)$</span> and similarly there exists a <span class="math-container">$b\in(\sup B-\varepsilon/2,\sup B]$</span>. By construction, <span class="math-container">$a<b$</span> so the statement is proven. </p>
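<p>The direct argument can be illustrated on finite sets, where infima and suprema are just minima and maxima (a toy check with made-up numbers, not part of the answer):</p>

```python
A = [3.0, 4.5, 7.0]   # inf A = 3.0
B = [1.0, 2.0, 5.0]   # sup B = 5.0
inf_A, sup_B = min(A), max(B)
assert inf_A < sup_B

eps = sup_B - inf_A
# By definition of inf/sup there are a, b within eps/2 of the endpoints:
a = min(x for x in A if x < inf_A + eps / 2)
b = max(x for x in B if x > sup_B - eps / 2)
assert a < b          # 3.0 < 5.0, as the argument guarantees
```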
|
79,542 | <p>The <em>polarization identity</em> expresses a symmetric bilinear form on a vector space in terms of its associated quadratic form:
$$
\langle v,w\rangle = \frac{1}{2}(Q(v+w) - Q(v) - Q(w)),
$$
where $Q(v) = \langle v,v\rangle$. More generally (over fields of characteristic $0$), for any homogeneous polynomial
$h(x_1,\dots,x_n)$ of degree $d$ in $n$ variables, there is a unique symmetric $d$-multilinear polynomial $F({\mathbf x}_1,\dots,{\mathbf x}_d)$, where each ${\mathbf x}_i$ consists of $n$ indeterminates, such that $h(x_1,\dots,x_n) = F({\mathbf x},\dots,{\mathbf x})$, where ${\mathbf x} = (x_1,\dots,x_n)$. There is a formula which expresses $F({\mathbf x}_1,\dots,{\mathbf x}_d)$ in terms of $h$, generalizing the above formula for a bilinear form in terms of a quadratic form, and it is also called a polarization identity.</p>
<p>Where did the meaning of "polarization", in this context, come from? Weyl uses it in his book <em>The classical groups</em> (see pp. 5 and 6 on Google books) but I don't know if this is the first place it appeared. Jeff Miller's extensive math etymology website doesn't include this term. See <a href="http://jeff560.tripod.com/p.html">http://jeff560.tripod.com/p.html</a>.</p>
| Scot Adams | 812,550 | <p>If you are looking at a single tangent space V to a Riemannian 2-manifold, then there is a positive definite quadratic form Q on V, and you can use that quadratic form to define a function r that represents lengths of the tangent vectors in V. Specifically, for any point v in V, r(v) will be the square root of Q(v). However, if you want to set up <em>polar</em> coordinates, you will also need to be able to compute <em>angles</em>, and the <em>polarization</em> B of Q allows you to do that. Specifically, the angle between two unit vectors v,w is the arccos of B(v,w). So polarization allows you to go from knowing which vectors are unit vectors to knowing how close two unit vectors are to pointing in the same direction. If you're traveling in some 2-dimensional Riemannian manifold, like the surface of the earth, it is useful to know how fast you're going, but it's also important to know whether you're pointed toward the north pole, the south pole, or in some other direction. The geometry of speed of travel comes from Q, but the second "polar" geometry comes from the polarization of Q. Something like that anyway ...</p>
|
2,138,916 | <p>My question read: </p>
<p>Show that $S_{10}$ contains elements of orders $10,20,30$. Does it contain an element of order $40$? </p>
<p>I am not too sure what the question is asking. Would I have to explicitly write out all the permutations in $S_{10}$ first and then find the orders for all of them? </p>
<p>Update: I understand I only need to show a few examples of disjoint cycles, but I am not sure how to show whether order 40 is possible.</p>
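<p>As a computational cross-check (a Python sketch added for illustration; the helper names are made up): the order of a permutation equals the lcm of its disjoint-cycle lengths, so the attainable orders in $S_{10}$ are exactly the lcms of the partitions of $10$.</p>

```python
from math import gcd
from functools import reduce

def partitions(n, max_part=None):
    """Yield the partitions of n as tuples of parts (possible cycle lengths)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def lcm_all(nums):
    return reduce(lambda a, b: a * b // gcd(a, b), nums, 1)

# The order of a permutation is the lcm of its disjoint-cycle lengths,
# so the attainable orders in S_10 are the lcms of partitions of 10.
orders = {lcm_all(p) for p in partitions(10)}
print(sorted(orders))
```

<p>Running this shows $10$, $20$, $30$ all occur (e.g. from the cycle types $(10)$, $(5,4,1)$, $(5,3,2)$), while $40 = 8\cdot 5$ would need cycles of lengths $8$ and $5$, which already sum to $13 > 10$, so order $40$ is impossible.</p>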
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<a href="https://i.stack.imgur.com/Zx7BL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zx7BL.png" alt="enter image description here" /></a></p>
<p><span class="math-container">\begin{align}
&\bbox[15px,#ffd]{\int_{-\infty}^{\infty}{\ln\pars{\root{x^{2} + a^{2}}} \over x^{2} + b^{2}}\,\dd x} =
\Re\int_{-\infty}^{\infty}{\ln\pars{a + \ic x} \over x^{2} + b^{2}}\,\dd x
\\[5mm]
\stackrel{\large x\ =\ \pars{a - s}\ic}{\large\vphantom{A}=}\,\,\,&
\Re\int_{\large a - \infty\ic}^{\large a + \infty\ic}
{\ln\pars{s} \over
\bracks{\pars{a - s}\ic + b\ic}\bracks{\pars{a - s}\ic - b\ic}}
\,\pars{-\ic}\,\dd s
\\[5mm] = &\
-\Im\int_{\large a - \infty\ic}^{\large a + \infty\ic}
{\ln\pars{s} \over
\bracks{s - \pars{a + b}}\bracks{s - \pars{a - b}}}\,\dd s
\\[5mm] = &\
-\Im\bracks{-2\pi\ic
{\ln\pars{a + b} \over \pars{a + b} - \pars{a - b}}} =
\bbx{{\pi \over b}\ln\pars{a + b}}
\end{align}</span>
<span class="math-container">$\ds{\ln}$</span> is the <em>principal branch of the logarithm</em>. The contribution to the integral from the arc vanishes
<span class="math-container">$\ds{\pars{~\mbox{its magnitude is}\ < {\pi\ln\pars{R} \over R}\ \mbox{as}\ R \to \infty~}}$</span> as the arc radius <span class="math-container">$\ds{R \to \infty}$</span>.</p>
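<p>As a numerical sanity check on the boxed result (a sketch added here, not part of the contour argument): substituting $x = b\tan t$ turns the integral into $\frac{1}{b}\int_{-\pi/2}^{\pi/2}\ln\sqrt{a^{2} + b^{2}\tan^{2}t}\,dt$, which a midpoint rule handles despite the integrable logarithmic growth at the endpoints.</p>

```python
import math

def integral_numeric(a, b, n=200_000):
    # Substitute x = b*tan(t), so dx/(x^2 + b^2) = dt/b with t in (-pi/2, pi/2).
    # The midpoint rule avoids the endpoints, where the integrand grows
    # logarithmically (still integrable).
    h = math.pi / n
    total = 0.0
    for k in range(n):
        t = -math.pi / 2 + (k + 0.5) * h
        total += 0.5 * math.log(a * a + (b * math.tan(t)) ** 2)
    return total * h / b

a, b = 2.0, 1.0
print(integral_numeric(a, b), math.pi / b * math.log(a + b))  # both ~3.4516
```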
|
3,438,048 | <p>I've recently obtained my university entrance papers from 1967 (yes, 52 years ago!) and I found the question below difficult. I presume the answer is a symmetric expression in the differences between alpha, beta and gamma. Am I missing some obvious trick? Any help would be appreciated.</p>
<p>Simplify and evaluate the determinant
<a href="https://i.stack.imgur.com/Dfft4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfft4.png" alt="enter image description here"></a></p>
<p>and show that its value is independent of theta.</p>
| Mark | 470,733 | <p>I'm not sure you really understood the definition of <span class="math-container">$\tau$</span>. A set <span class="math-container">$U$</span> is in <span class="math-container">$\tau$</span> if for each <span class="math-container">$x\in U$</span> there is some <span class="math-container">$\epsilon>0$</span> (which might depend on <span class="math-container">$x$</span>) for which <span class="math-container">$(x-\epsilon,x+\epsilon)\subseteq U$</span>. This has nothing to do with taking <span class="math-container">$\epsilon$</span> to <span class="math-container">$0$</span> or to <span class="math-container">$\infty$</span>. </p>
<p>An example: for each <span class="math-container">$x\in\mathbb{R}$</span> we have <span class="math-container">$(x-1,x+1)\subseteq\mathbb{R}$</span>. So for each <span class="math-container">$x\in\mathbb{R}$</span> we can take <span class="math-container">$\epsilon=1$</span>. Hence <span class="math-container">$\mathbb{R}\in\tau$</span>. </p>
<p>Now the empty set: well, if we suppose it isn't in <span class="math-container">$\tau$</span> then there must be some <span class="math-container">$x\in\emptyset$</span> such that for all <span class="math-container">$\epsilon>0$</span> we have that <span class="math-container">$(x-\epsilon,x+\epsilon)$</span> is not contained in <span class="math-container">$\emptyset$</span>. But this is a contradiction because there can't be any elements <span class="math-container">$x\in\emptyset$</span>. </p>
<p>Now let's show <span class="math-container">$\tau$</span> is closed under unions. Let's say <span class="math-container">$\{A_i\}_{i\in I}$</span> is a collection of sets in <span class="math-container">$\tau$</span> and we want to show their union is in <span class="math-container">$\tau$</span>. Let <span class="math-container">$x\in\cup_{i\in I} A_i$</span>. Then there is some <span class="math-container">$j\in I$</span> such that <span class="math-container">$x\in A_j$</span>. Since <span class="math-container">$A_j\in\tau$</span> we know there is some <span class="math-container">$\epsilon>0$</span> for which <span class="math-container">$(x-\epsilon,x+\epsilon)\subseteq A_j\subseteq\cup_{i\in I} A_i$</span>. So we proved exactly what we wanted: the union is in <span class="math-container">$\tau$</span> as well.</p>
<p>Finally, we have to show that <span class="math-container">$\tau$</span> is closed under finite intersections. Let <span class="math-container">$A,B\in\tau$</span>. We want to show that their intersection is in <span class="math-container">$\tau$</span>. Let <span class="math-container">$x\in A\cap B$</span>. Since <span class="math-container">$x\in A$</span> and <span class="math-container">$A\in\tau$</span> there is <span class="math-container">$\epsilon_1>0$</span> for which <span class="math-container">$(x-\epsilon_1,x+\epsilon_1)\subseteq A$</span>. Similarly, there is <span class="math-container">$\epsilon_2>0$</span> such that <span class="math-container">$(x-\epsilon_2,x+\epsilon_2)\subseteq B$</span>. Let <span class="math-container">$\epsilon=\min\{\epsilon_1,\epsilon_2\}$</span>. Then <span class="math-container">$(x-\epsilon,x+\epsilon)\subseteq A\cap B$</span>. </p>
|
88,363 | <p>It is easy to truncate Series up to some order, say $n$. My question is how do I remove low orders? Let us say my series is a power series in $x$. I want to remove the terms with negative powers because they diverge at $x = 0$. I can simply write</p>
<p>s1-s2, where</p>
<p>s1=Normal[Series[blah, {x, 0, n}]]</p>
<p>s2=Normal[Series[blah, {x, 0, -1}]]</p>
<p>but Mathematica does not automatically cancel the removed terms because they are complicated. The solution would be to use Collect[s1-s2, x, Simplify], but this is horribly slow as I increase $n$ above even 2. I suppose I could simply delete the terms by hand, but the outputs are very messy, and there must be a proper way to do this.</p>
| Marius Ladegård Meyer | 22,099 | <p>I'm not sure this approach is applicable to all series, but from a quick test it seems to work for rational exponents:</p>
<p>Looking at the <code>FullForm</code> of</p>
<pre><code>ser = Series[Exp[x]/x^(2/3), {x, 0, 5}]
(* x^(-2/3) + x^(1/3) + x^(4/3)/2 + x^(7/3)/6 + x^(10/3)/24 + x^(13/3)/120 + O[x]^(16/3) *)
</code></pre>
<p>gives</p>
<pre><code>FullForm[ser]
(* SeriesData[x,0,List[1,0,0,1,0,0,Rational[1,2],0,0,Rational[1,6],0,0,Rational[1,24],0,0,Rational[1,120]],-2,16,3] *)
</code></pre>
<p>As we see, <code>[[3]]</code> contains a list of coefficients, while the lowest and highest powers are given by <code>[[4]]/[[6]]</code> and <code>[[5]]/[[6]]</code> respectively. If we want to eliminate all negative powers we may simply remove the first <code>-[[4]]</code> coefficients from the coefficient list, and set <code>[[4]]</code> to 0 afterwards. That is:</p>
<pre><code>ser2 = ReplacePart[ser, {3 -> Drop[ser[[3]], -ser[[4]]], 4 -> 0}]
(* x^(1/3) + x^(4/3)/2 + x^(7/3)/6 + x^(10/3)/24 + x^(13/3)/120 + O[x]^(16/3) *)
</code></pre>
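<p>The same filtering idea, stripped of Mathematica internals, can be stated in any language: store the truncated series as an exponent-to-coefficient map and drop the negative keys. A toy Python sketch (the exponents below just mirror the example above; nothing here is Mathematica-specific):</p>

```python
from fractions import Fraction

# Truncated series stored as {exponent: coefficient}, mirroring the
# SeriesData example above: x^(-2/3) + x^(1/3) + x^(4/3)/2 + x^(7/3)/6 + ...
series = {
    Fraction(-2, 3): Fraction(1),
    Fraction(1, 3): Fraction(1),
    Fraction(4, 3): Fraction(1, 2),
    Fraction(7, 3): Fraction(1, 6),
}

# Removing the divergent part is just a filter on the exponents.
nonnegative_part = {e: c for e, c in series.items() if e >= 0}
print(sorted(nonnegative_part))
```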
|
545,003 | <p>I have a statement that I am trying to prove, and I am getting stuck at the inductive step. This is my theorem:</p>
<blockquote>
<p>For all integers $n>3$, the following is true: $n + 3 < n!$.</p>
</blockquote>
<p>I have proven true for $n = 4$, and will assume true for some arbitrary value $k$, i.e.,</p>
<p>$$k + 3 < k!,$$</p>
<p>and I want to prove for $k+1$, i.e.,</p>
<p>$$(k+1) + 3 < (k+1)!.$$</p>
<p>Consider the $k+1$ term:</p>
<p>$$(k+1)+3 = ?$$</p>
<p>I am confused on how to approach the next step.</p>
<p>Ok here is how I am proceeding. It seems really long so if anyone has a better way let me know:
$$
=(k+3)+1
$$
$$
<(k!)+1
$$
$$
<k!+k!
$$
$$
=2k!
$$
$$
<(k+1)k!
$$
$$
=(k+1)!
$$
Therefore the inequality holds for $k+1$, completing the induction.</p>
| JessicaK | 102,435 | <p>Since by induction hypothesis,</p>
<p>$$k+3< k!$$</p>
<p>for $k>3$, multiply both sides by $(k+1)$ to get</p>
<p>$$(k+3)(k+1) < k! (k+1)$$</p>
<p>or</p>
<p>$$(k+3)(k+1) < (k+1)!$$</p>
<p>I'll leave the rest for you to think about, as a hint, remember that's an inequality.</p>
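<p>A quick numerical confirmation of the claim (no substitute for the induction, just a sanity check that also shows why the base case must be $n=4$):</p>

```python
from math import factorial

# n + 3 < n! first holds at n = 4 (7 < 24) and then stays true,
# since the right-hand side grows much faster.
holds = {n: n + 3 < factorial(n) for n in range(1, 21)}
print([n for n, ok in holds.items() if ok])
```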
|
545,003 | <p>I have a statement that I am trying to prove, and I am getting stuck at the inductive step. This is my theorem:</p>
<blockquote>
<p>For all integers $n>3$, the following is true: $n + 3 < n!$.</p>
</blockquote>
<p>I have proven true for $n = 4$, and will assume true for some arbitrary value $k$, i.e.,</p>
<p>$$k + 3 < k!,$$</p>
<p>and I want to prove for $k+1$, i.e.,</p>
<p>$$(k+1) + 3 < (k+1)!.$$</p>
<p>Consider the $k+1$ term:</p>
<p>$$(k+1)+3 = ?$$</p>
<p>I am confused on how to approach the next step.</p>
<p>Ok here is how I am proceeding. It seems really long so if anyone has a better way let me know:
$$
=(k+3)+1
$$
$$
<(k!)+1
$$
$$
<k!+k!
$$
$$
=2k!
$$
$$
<(k+1)k!
$$
$$
=(k+1)!
$$
Therefore the inequality holds for $k+1$, completing the induction.</p>
| sundaycat | 102,804 | <p>Suppose $k! \gt k+3$ is true:</p>
<blockquote>
<p>\begin{align*}
\ (k+1)! &=k!\cdot(k+1)
\\ &\gt(k+3)(k+1)
\\ &=k^2+4k+3
\\ &\gt k^2+k+3
\\ &\gt (k+1)+3\ldots(\text{where}\space k\gt 3)
\end{align*}</p>
</blockquote>
|
2,030,739 | <p>Find <span class="math-container">$\frac{d^2y}{dx^2}$</span> of:</p>
<blockquote>
<p><span class="math-container">$$4y^2+2=3x^2$$</span></p>
</blockquote>
<h2>My Attempt</h2>
<p>I attempted the problem by first solving for the first derivative:</p>
<blockquote>
<p><span class="math-container">$8y*y'=6x$</span><br>
<span class="math-container">$y'=\frac{3x}{4y}$</span></p>
</blockquote>
<p>Then I differentiated again; however, I was a bit confused and ended up getting</p>
<blockquote>
<p><span class="math-container">$y''=\frac{6y-3x(2*y')}{16y^2}$</span></p>
</blockquote>
<p>Would I then substitute the first derivative back in to get:</p>
<blockquote>
<p><span class="math-container">$y''=\frac{6y-6x(\frac{3x}{4y})}{4y^2}$</span></p>
</blockquote>
<p>and then finally</p>
<h2>Final Answer</h2>
<p><span class="math-container">$$y''=\frac{6y^2-9x^2}{4y^3}$$</span></p>
<p>Is my final answer correct? If not, what is the correct answer, please?</p>
| Shraddheya Shendre | 384,307 | <p>Your final answer is wrong, and since you only ask for the correct final answer, here you go:
$$\frac{d^2y}{dx^2} = \frac{12y^2-9x^2}{16y^3}$$</p>
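<p>The corrected formula is easy to cross-check numerically: solve the constraint for $y$ on the branch $y>0$ and compare against a finite-difference second derivative. A hedged Python sketch (function names are mine):</p>

```python
import math

def y(x):
    # positive branch of 4y^2 + 2 = 3x^2 (needs 3x^2 >= 2)
    return math.sqrt((3 * x * x - 2) / 4)

def ypp_formula(x):
    # the corrected second derivative stated above
    yy = y(x)
    return (12 * yy * yy - 9 * x * x) / (16 * yy ** 3)

def ypp_numeric(x, h=1e-5):
    # central second difference of y along the curve
    return (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)

print(ypp_formula(2.0), ypp_numeric(2.0))  # both ~ -0.09487
```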
|
2,068,906 | <p>Recall, with the birthday problem, with 23 people, the odds of a shared birthday is APPROXIMATELY .5 (correct?)</p>
<p>P(no sharing of dates with 23 people) = $$\frac{365}{365}*\frac{364}{365}*\frac{363}{365}*...*\frac{343}{365} $$</p>
<p>$$= \frac{365!}{342!}*\frac{1}{365^{23}} $$</p>
<p>I want to do this multiplication, but nothing I have can handle it.
How can I know for sure it actually is around 0.5?</p>
<p>$$\frac{365!}{342!}*\frac{1}{365^{23}} \approx 0.5$$</p>
| Beni Bogosel | 7,327 | <p>You can use Pari GP in order to do this. You need multiple precision arithmetic due to the large powers and factorials. Pari GP is usually the right way to go if you need to do this kind of computation. Just open the program, type <code>1.0* 365!/342!/365^23</code> and you'll get the result
$$ 0.49270276567601459277458277166296749976 $$</p>
<p>--</p>
<p>There are smarter ways to do this, by paying attention to what you're doing. For example, in Matlab/Octave you can do the following:</p>
<p><code>>> v = 343:365</code> </p>
<p><code>>> v = v/365</code></p>
<p><code>>> prod(v)</code></p>
<p>and you get $0.492702765676015$</p>
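<p>For completeness, the same product takes one line of Python as well (a sketch; any language with double-precision floats gives the same digits):</p>

```python
from math import prod

# 365/365 * 364/365 * ... * 343/365, i.e. P(no shared birthday among 23 people)
p_no_match = prod((365 - k) / 365 for k in range(23))
print(p_no_match)  # ~0.4927, so P(at least one shared birthday) ~ 0.5073
```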
|
2,068,906 | <p>Recall, with the birthday problem, with 23 people, the odds of a shared birthday is APPROXIMATELY .5 (correct?)</p>
<p>P(no sharing of dates with 23 people) = $$\frac{365}{365}*\frac{364}{365}*\frac{363}{365}*...*\frac{343}{365} $$</p>
<p>$$= \frac{365!}{342!}*\frac{1}{365^{23}} $$</p>
<p>I want to do this multiplication, but nothing I have can handle it.
How can I know for sure it actually is around 0.5?</p>
<p>$$\frac{365!}{342!}*\frac{1}{365^{23}} \approx 0.5$$</p>
| heropup | 118,193 | <p>You can certainly do this in Excel, and here's how you would do it:</p>
<p>$$\begin{array}{|c|c|c|c|} \hline & \text{A} & \text{B} & \text{C} \\ \hline
1 & 365 & \text{=A1} & \text{=B1/A1} \\
2 & \text{=A1} & \text{=B1-1} & \text{=B2/A2} \\
3 & \text{=A2} & \text{=B2-1} & \text{=B3/A3} \\
4 & \text{=A3} & \text{=B3-1} & \text{=B4/A4} \\
\vdots & \vdots & \vdots & \vdots \\
23 & \text{=A22} & \text{=B22-1} & \text{=B23/A23} \\ \hline
& & & \text{=PRODUCT(C1:C23)} \\ \hline
\end{array}$$
This shows the formulas you need to enter into the respective cells. You start with entering <code>365</code> into A1, then type in <code>=A1</code> into cells A2 and B1. Next, type in the formula <code>=B1-1</code> into B2, and <code>=B1/A1</code> into C1. Next, copy down all the formulas up to row 23. Column C then computes each ratio in your original expression, and <code>=PRODUCT(C1:C23)</code> computes the product.</p>
|
2,122,389 | <p>The problem goes as follows: you have a parking lot with 8 parking spaces and 8 cars, of which 4 are red and 4 are white. What is the probability of :</p>
<p>a) 4 white cars being parked next to each other ?</p>
<p>b) 4 white cars and 4 red cars being parked next to each other ?</p>
<p>c) red and white cars being parked alternately ( red-white-red...) ?</p>
<p>Any help will be greatly appreciated. :-)</p>
| Brevan Ellefsen | 269,764 | <p><strong>Mathematica Output</strong> </p>
<hr>
<p>Mathematica instantly produces the following:
$$\frac{1}{2} x \left(2 \log \left(m^2+2 m \cos (x)+1\right)-2 \log \left(\frac{m+e^{i
x}}{m}\right)-2 \log \left(1+m e^{i x}\right)+i x\right)+i
\operatorname{Li}_2\left(\frac{-e^{i x}}{m}\right)+i \operatorname{Li}_2\left(-e^{i x} m\right)+C$$
I am working on a way to prove this by expanding and greatly improving on Durgesh's answer, but I am not there yet. Should all else fail, converting each of the terms to integral form and combining should work. There are definitely patterns emerging in the output; for example, the derivatives of the two Dilogarithm terms in the output are the latter two logarithm terms present in the output, i.e.
$$\frac{d}{dx} i \operatorname{Li}_2\left(\frac{-e^{i x}}{m}\right) = \log \left(\frac{m+e^{i
x}}{m}\right)$$
$$\frac{d}{dx} i \operatorname{Li}_2\left(-e^{i x} m\right) = \log \left(1+m e^{i x}\right)$$</p>
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it English, but I wouldn't expect it to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| Derek Jennings | 1,301 | <p>I think the key to your problem is in your first paragraph. You say, "The correct answers came fast and intuitively. I never studied." This is the classic high school con that can lead one to doubt one's own abilities as soon as the going gets more challenging.</p>
<p>No matter what your abilities, to do worthwhile work in mathematics you will need to study and work hard. You say in your last paragraph that you know it will require practice and hard work, but I do not think you have fully taken this on board.</p>
<p>When you do, you will stop worrying about making mistakes. They are an essential part of the learning process and not something to beat yourself up about. Forget the belief, instilled in high school, that you should be able to come up with the correct answer straight away. Those questions were crafted to have short, snappy answers within the reach of rote learning.</p>
<p>Your mathematical career has now gone beyond this stage. So get down to study and make as many mistakes as you like on the path to a deeper understanding, and enjoy!</p>
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it English, but I wouldn't expect it to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| Community | -1 | <p>There are some great answers / advice here already ... what I would like to add is that it demonstrates, in my opinion, the clear link between communication, speech, language patterns and numerical skills: that part of the brain that visualizes these patterns and helps you make sense of the answer ... </p>
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it English, but I wouldn't expect it to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| Community | -1 | <p>I'm in my late 30s and still remember being crap at maths while at school (this might sound like cold comfort, but run with this a little), because it's like I said in a Yahoo Q&A some time ago now: a little each day will keep your brain sharp and active, and you'll still be able to do a little of the advanced statistical mathematics. I used to be able to do long multiplications in my head (e.g. 1687x432, coming to the figure 728700, whereas the correct answer is 728784; manually calculating stuff like that isn't as easy these days), but I'd only be worried for you if you lost the ability to do the basics.</p>
|
1,988,191 | <p>Today I coded the multiplication of quaternions and vectors in Java. This is less of a coding question and more of a math question though:</p>
<pre><code>Quaternion a = Quaternion.create(0, 1, 0, Spatium.radians(90));
Vector p = Vector.fromXYZ(1, 0, 0);
System.out.println(a + " * " + p + " = " + Quaternion.product(a, p));
System.out.println(a + " * " + p + " = " + Quaternion.product2(a, p));
</code></pre>
<p>What I am trying to do is rotate a point $\mathbf{p}$ using the quaternion $\mathbf{q}$. The functions <code>product()</code> and <code>product2()</code> calculate the product in two different ways, so I am quite certain that the output is correct:</p>
<pre><code>(1.5707964 + 0.0i + 1.0j + 0.0k) * (1.0, 0.0, 0.0) = (-1.0, 0.0, -3.1415927)
(1.5707964 + 0.0i + 1.0j + 0.0k) * (1.0, 0.0, 0.0) = (-1.0, 0.0, -3.1415927)
</code></pre>
<p>However, I can't wrap my head around why the result is the way it is. I expected to rotate $\mathbf{p}$ 90 degrees around the y-axis, which should have resulted in <code>(0.0, 0.0, -1.0)</code>.</p>
<p>Wolfram Alpha's visualization also suggests the same:
<a href="https://www.wolframalpha.com/input/?i=(1.5707964+%2B+0.0i+%2B+1.0j+%2B+0.0k)" rel="nofollow">https://www.wolframalpha.com/input/?i=(1.5707964+%2B+0.0i+%2B+1.0j+%2B+0.0k)</a></p>
<p>So what am I doing wrong here? Are both functions really giving invalid results, or am I not understanding something about quaternions?</p>
| Emilio Novati | 187,568 | <p>I don't fully understand your code, but it seems that you have multiplied the quaternion by the vector on one side only, and this is wrong.</p>
<p>The rotation of the vector $\vec v = \hat i$ by $\theta=\pi/2$ around the axis $\mathbf{u}=\hat j$ is represented by means of quaternions as ( see <a href="https://math.stackexchange.com/questions/1175209/representing-rotations-using-quaternions/1176318#1176318">Representing rotations using quaternions</a>):</p>
<p>$$
R_{\mathbf{u},\pi/2}(\vec v)= e^{\frac{\pi}{4}\hat j}\cdot \hat i \cdot e^{-\frac{\pi}{4}\hat j}
$$
where the exponential is given by the formula ( see <a href="https://math.stackexchange.com/questions/1030737/exponential-function-of-quaternion-derivation/1047246#1047246">Exponential Function of Quaternion - Derivation</a>):
$$
e^{\frac{\pi}{4}\hat j}=\cos\frac{\pi}{4} +\hat j \sin\frac{\pi}{4}
$$
so we have:
$$
R_{\mathbf{u},\pi/2}(\vec v)=\left( \frac{\sqrt{2}}{2}+\hat j\frac{\sqrt{2}}{2} \right)(\hat i) \left(\frac{\sqrt{2}}{2}-\hat j\frac{\sqrt{2}}{2} \right)=
$$
$$
=\frac{1}{2}\hat i-\frac{1}{2}\hat k-\frac{1}{2}\hat k-\frac{1}{2}\hat i=-\hat k
$$</p>
<p>Note that we need two multiplications, by a unit quaternion and by its inverse, each with an angle that is one half of the final rotation angle.</p>
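<p>For the original poster's Java setting, the same computation in a few lines of illustrative Python (a sketch, not the poster's library) shows the expected result $(0, 0, -1)$ when the two-sided product $qvq^{-1}$ is used with the half-angle quaternion:</p>

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def rotate(v, q):
    # conjugate the pure quaternion (0, v) by the unit quaternion q
    q_conj = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + v), q_conj)[1:]

half = math.pi / 4  # half of the 90-degree rotation angle
q = (math.cos(half), 0.0, math.sin(half), 0.0)  # e^{(pi/4) j}
print(rotate((1.0, 0.0, 0.0), q))  # ~ (0, 0, -1)
```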
|
1,012,236 | <p>A continuous-time process is null for t < 0. Under which conditions is it stationary (WSS)?</p>
<p>I know that E[x(t)] must be a constant and the autocorrelation function must depend only on the time difference t2-t1. Are there any other conditions?</p>
| user2345215 | 131,872 | <p>Both sequences have $e$ as the limit, so it suffices to show that the left sequence is increasing and the right sequence is decreasing.</p>
<p>Use the AM-GM inequality to get
$$\frac{n+2}{n+1}=\frac{\frac{n+1}n+\ldots+\frac{n+1}n+1}{n+1}>\sqrt[n+1\,]{\left(\frac{n+1}n\right)^n}$$
It follows that
$$\left(1+\frac1n\right)^n<\left(1+\frac1{n+1}\right)^{n+1}$$</p>
<p>For the other sequence use the HM-GM inequality to get
$$\frac{n+2}{n+1}=\frac{n+2}{\frac n{n+1}+\ldots+\frac n{n+1}+1}<\sqrt[n+2\,]{\left(\frac{n+1}n\right)^{n+1}}$$
It follows that
$$\left(1+\frac1n\right)^{n+1}>\left(1+\frac1{n+1}\right)^{n+2}$$</p>
|
99,572 | <p>One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflect back on the combinatorics of the polytopes. This construction requires that the polytope be rational, which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes, and even polyhedral spheres (and more general objects), where the toric variety construction does not work.</p>
<p>I am aware of very general constructions by M. Davis and T. Januszkiewicz
(one relevant paper might be <a href="http://www.math.osu.edu/%7Edavis.12/old_papers/DJ_toric.dmj.pdf" rel="nofollow noreferrer">Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991)</a> and several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres and perhaps even in greater generality.</p>
<p>I ask about an explanation of the scope of these constructions, and, in simple terms as possible, how does the construction go?</p>
| Fiammetta Battaglia | 32,295 | <p>Dear Gil, in addition to Dan's answer let me mention that the construction of toric varieties à la Cox has been generalized to arbitrary convex polytopes in <a href="http://www.worldscientific.com/doi/abs/10.1142/S0129167X11007562">Geometric spaces from arbitrary convex polytope, Int. J. Math., 23, (2012)</a> (the simple case had been treated in a previous <a href="http://imrn.oxfordjournals.org/content/2001/24/1315.abstract">paper</a> joint with E. Prato).</p>
|
3,362,654 | <p>Let's say <span class="math-container">$C$</span> is a category, and <span class="math-container">$\mathscr{C}$</span> is a collection of morphisms in <span class="math-container">$C$</span>. I have come across the following sentence </p>
<p>"<span class="math-container">$C$</span> admits pullbacks along morphisms from <span class="math-container">$\mathscr{C}$</span>, and <span class="math-container">$\mathscr{C}$</span> is pullback stable" </p>
<p>I know what a pullback is, but have no idea what this sentence means. Here are my attempts.</p>
<ol>
<li><span class="math-container">$C$</span> admits pullbacks along morphisms from <span class="math-container">$\mathscr{C}$</span>: Suppose <span class="math-container">$(P,p,q)$</span> is a pullback of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, then <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are in <span class="math-container">$\mathscr{C}$</span>.</li>
<li><span class="math-container">$\mathscr{C}$</span> is pullback stable: Suppose <span class="math-container">$(P,p,q)$</span> is a pullback of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, then if <span class="math-container">$g$</span> has a certain property so does <span class="math-container">$p$</span>. But which property are we talking about here? Is it talking about belonging to <span class="math-container">$\mathscr{C}$</span>?</li>
</ol>
<p>Any correction or help will be greatly appreciated.</p>
| Henry | 6,460 | <p>You asked about how to find this in R.</p>
<p>You could do something like:</p>
<pre><code>twelve <- 12
probwin <- numeric(twelve)
for (n in 1:twelve){
if(n == 1){
probwin[n] <- 1 / twelve
}else{
probwin[n] <- (1 - sum(probwin[1:(n-1)])) * n / twelve
}
}
probwin
</code></pre>
<p>which would give </p>
<pre><code> [1] 8.333333e-02 1.527778e-01 1.909722e-01 1.909722e-01 1.591435e-01
[6] 1.114005e-01 6.498360e-02 3.094457e-02 1.160421e-02 3.223393e-03
[11] 5.909554e-04 5.372322e-05
</code></pre>
<p>though the one-liner corresponding to <span class="math-container">$\frac{12!\, n}{(12-n)!\, 12^{n+1}}$</span></p>
<pre><code>(1:12) * factorial(12) / factorial(12-(1:12)) / 12^((1:12)+1)
</code></pre>
<p>should give the same result.</p>
<p>By inspection, the third and fourth positions are both best with probability <span class="math-container">$\frac{55}{288}$</span>, though using the <code>which.max</code> function in R would probably only identify one of these due to floating point precision issues</p>
|
2,798,207 | <p>This problem also needs to be extended to an $n \times m$ chessboard. I tried to think like this:</p>
<p>First I choose a place for the first king in $64$ ways. Then I have a choice of $64-5 = 59$ squares for the second king. But this solution is not right, because this is not the case if I place the first king in the outermost layer of squares. Then I have $64-4 = 60$ squares for the other king. How can I solve this problem?</p>
| Donald Splutterwit | 404,247 | <p>The first King could be on a corner square($4$ ways), leaving $60$ other squares for the next King.</p>
<p>The first King could be on a edge square($24$ ways), leaving $58$ other squares for the next King.</p>
<p>The first King could be on a central square($36$ ways), leaving $55$ other squares for the next King.</p>
<p>This will double count the configurations, so we have $(4 \times 60 + 24 \times 58 + 36 \times 55)/2 =\color{red}{1806}$.</p>
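<p>The count $1806$ can be confirmed by brute force (a quick sketch): enumerate unordered pairs of squares and keep those at Chebyshev distance greater than $1$, i.e. pairs of non-attacking kings.</p>

```python
# Unordered pairs of squares on an 8x8 board where two kings do not attack
# each other: the Chebyshev (king-move) distance must exceed 1.
squares = [(r, c) for r in range(8) for c in range(8)]
count = sum(
    1
    for i, (r1, c1) in enumerate(squares)
    for (r2, c2) in squares[i + 1:]
    if max(abs(r1 - r2), abs(c1 - c2)) > 1
)
print(count)  # 1806
```

<p>The same loop with ranges $n$ and $m$ answers the $n \times m$ version asked about in the question.</p>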
|
1,714,654 | <p>Show that a box (rectangular parallelepiped) of maximum volume V with prescribed surface area is a cube.
Let $$V=xyz$$
$$S=2xy + 2yz + 2zx$$
$S$ is constant.</p>
<p>Using the Lagrange method, I am stuck at $V_{xx} = 0 = V_{yy} = V_{zz}$ at the (only) critical point. How should I approach this?</p>
| mrprottolo | 84,266 | <p>You can use polar coordinates here. Set $x=r\cos\theta$, $y=r\sin\theta$, then notice that $x^2-y^2=r^2\cos 2\theta$. Then the limit becomes
$$\lim_{r \to 0} \frac{\sin (r^2\cos 2\theta)}{r^2\cos 2 \theta}.$$</p>
<p>Clearly you have to exclude the case $\theta=\pm \pi/4$ because $f$ is not defined there, even if you can try to extend it by continuity.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| YorkT | 135,410 | <p>Some important conjectures in matroid theory have been settled recently, for instance the Rota conjecture on excluded minors (by Geelen, Gerards and Whittle, still unpublished; a note claiming the proof is <a href="http://www.ams.org/notices/201407/rnoti-p736.pdf" rel="noreferrer">here</a>) and the log-concavity conjecture (also due to Rota) for the characteristic polynomial (<a href="http://arxiv.org/abs/1511.02888" rel="noreferrer">arxiv.org/abs/1511.02888</a>). The method behind the latter has had several further applications in matroid theory.</p>
<p>edit: let me add to that Liu's <a href="http://arxiv.org/abs/1606.05033" rel="noreferrer">counterexample</a> to the extension space conjecture</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Sean Lawton | 12,218 | <p>A Margulis spacetime is the quotient of the Minkowski space by a free proper orientation-preserving isometric action of a free group of rank at least two.</p>
<p>From <a href="https://arxiv.org/pdf/1306.2240.pdf" rel="noreferrer"> Danciger, Kassel, and Guéritaud</a>:</p>
<blockquote>
<p>"Based on a question of Margulis, Drumm–Goldman conjectured
in the early 1990s that all Margulis spacetimes should be tame, meaning
homeomorphic to the interior of a compact manifold."</p>
</blockquote>
<p>In a series of papers, I believe Choi, Drumm, and Goldman, and independently Danciger, Kassel, and Guéritaud resolved this conjecture affirmatively.</p>
<p>Links: </p>
<ol>
<li><a href="https://arxiv.org/abs/1204.5308" rel="noreferrer">Topological tameness of Margulis spacetimes</a>, by Suhyoung Choi, William Goldman</li>
<li><a href="https://arxiv.org/abs/1710.09162" rel="noreferrer">Tameness of Margulis space-times with parabolics</a>, by Suhyoung Choi, Todd Drumm, William Goldman</li>
<li><a href="https://arxiv.org/abs/1306.2240" rel="noreferrer">Geometry and topology of complete Lorentz spacetimes of constant curvature</a>, by Jeffrey Danciger, François Guéritaud, Fanny Kassel</li>
<li><a href="https://arxiv.org/abs/1407.5422" rel="noreferrer">Margulis spacetimes via the arc complex</a>, by Jeffrey Danciger, François Guéritaud, Fanny Kassel</li>
</ol>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Neal | 20,796 | <p>A "hot spot" on a sufficiently regular domain is an interior extremum of the first nonconstant Neumann eigenfunction of the Laplace operator. The Hot Spots conjecture states that hot spots do not exist on convex planar domains.</p>
<p>Chris Judge and Sugata Mondal have settled the Hot Spots conjecture in the affirmative for all Euclidean triangles: <a href="https://annals.math.princeton.edu/2020/191-1/p03" rel="noreferrer"><em>Euclidean triangles have no hot spots</em>, Annals of Mathematics 191-1 (2020) 167-211</a>. (<a href="https://arxiv.org/abs/1802.01800" rel="noreferrer">preprint</a>)</p>
<p>This conjecture was the subject of <a href="https://polymathprojects.org/category/hot-spots/" rel="noreferrer">Polymath 7</a>.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| David White | 11,540 | <p>The <a href="https://en.wikipedia.org/wiki/Kervaire_invariant" rel="nofollow noreferrer">Kervaire Invariant One Problem</a> (1969) is a question about which framed manifolds can be converted into spheres via surgery. It's related to the classification of exotic smooth structures on spheres (like Milnor's Fields Medal winning structure on <span class="math-container">$S^7$</span> that started the whole field of differential topology by displaying that a homeomorphism need not be a diffeomorphism). After a flurry of work in the 1950s and 1960s, this problem languished with no progress from 1969 until 2009, when it was <a href="https://arxiv.org/abs/0908.3724" rel="nofollow noreferrer">resolved by Hill, Hopkins, and Ravenel</a> (published in Annals), in all dimensions except 126. The authors have <a href="https://www.cambridge.org/core/books/equivariant-stable-homotopy-theory-and-the-kervaire-invariant-problem/E2648B96D05293C5197B29B104FA80C2" rel="nofollow noreferrer">a wonderful new book</a> explaining the proof and the history of the problem. I have <a href="http://personal.denison.edu/%7Ewhiteda/files/Slides/Lehigh%20beamer%20N-infty.pdf" rel="nofollow noreferrer">some slides</a> where I explain a bit about it (but the importance in differential topology is much more than what I discuss).</p>
|
316,770 | <p>It is known that for a subring $R$ of some (commutative) ring $S$, the nilradical of $R$ $$\text{nil }R=R\cap\text{nil }S.$$ Moreover for Jacobson rings $R\subset S$, this means that the Jacobson radical of $R$ can also be written in this way, i.e., $J(R)=R\cap J(S)$.</p>
<p><strong>Edit.</strong> Are there separate-case counterexamples where this is not true more generally? For example, (credited to YACP) to show that the Jacobson radical of a subring can properly contain the intersection, consider the following: let $R$ be a local integral domain and $S=R[X]$. Then $J(S)=(0)$ and $J(R)=\mathfrak m$.</p>
<p>What about where the Jacobson radical of a subring does not contain the intersection?</p>
| Community | -1 | <p>Take $R$ an integral domain, $\mathfrak m$ a maximal ideal, and $S=R_{\mathfrak m}$. Then $J(S)\cap R=\mathfrak m$ and obviously $J(R)$ does not necessarily contain $J(S)\cap R$. </p>
|
316,770 | <p>It is known that for a subring $R$ of some (commutative) ring $S$, the nilradical of $R$ $$\text{nil }R=R\cap\text{nil }S.$$ Moreover for Jacobson rings $R\subset S$, this means that the Jacobson radical of $R$ can also be written in this way, i.e., $J(R)=R\cap J(S)$.</p>
<p><strong>Edit.</strong> Are there separate-case counterexamples where this is not true more generally? For example, (credited to YACP) to show that the Jacobson radical of a subring can properly contain the intersection, consider the following: let $R$ be a local integral domain and $S=R[X]$. Then $J(S)=(0)$ and $J(R)=\mathfrak m$.</p>
<p>What about where the Jacobson radical of a subring does not contain the intersection?</p>
| zacarias | 35,464 | <p>Let $\mathbb Z_4=\left\{0, 1, 2, 3\right\}$ be the residue classes modulo $4$. Consider the matrix ring $S=M_2(\mathbb Z_4)$. Since for a ring $A$ the nilradical of the matrix ring $M_n(A)$ is $M_n(\mathcal N(A))$ where $\mathcal N(A)$ is the nilradical of $A$, it follows that $\mathcal N(M_n(\mathbb Z_4))=M_n(I)$, where $I=\{0, 2\}$. Now, consider the subring </p>
<p>$$R=\left\{\left[\begin{array}{cc}x&y\\0&x\end{array}\right]:\;x,y\in\mathbb Z_4\right\}.$$</p>
<p>Note that $R$ is commutative. So, the nilradical of $R$ is the set of all nilpotent elements. We have that </p>
<p>$$w=\left[\begin{array}{cc}0&1\\0&0\end{array}\right]$$
is nilpotent, so is in the radical of $R$. But $w\notin M_n(I) $. Hence $\mathcal N(R)\neq R\cap \mathcal N(S) $</p>
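<p>A brute-force check of this example (a Python sketch; the encoding of $2\times 2$ matrices over $\mathbb Z_4$ as nested tuples is mine):</p>

```python
MOD = 4  # work over Z_4

def mat_mul(A, B):
    # 2x2 matrix product with entries reduced mod 4
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % MOD
                       for j in range(2)) for i in range(2))

def is_nilpotent(A):
    # Checking powers up to the 4th is enough for these matrices.
    P = A
    for _ in range(4):
        if P == ((0, 0), (0, 0)):
            return True
        P = mat_mul(P, A)
    return False

# The subring R = {[[x, y], [0, x]]} and I = {0, 2}, so R ∩ nil(S) = R ∩ M_2(I).
R = [((x, y), (0, x)) for x in range(MOD) for y in range(MOD)]
nil_R = {A for A in R if is_nilpotent(A)}
R_cap_nil_S = {A for A in R if all(a in {0, 2} for row in A for a in row)}

print(len(nil_R), len(R_cap_nil_S))  # 8 4
print(((0, 1), (0, 0)) in nil_R)     # True: w is nilpotent but not in M_2(I)
```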
|
122,848 | <p>Is my calculation correct for this rotation around a point?</p>
<p>A point a(-19.94,392.11) is rotated -49.45 degrees, what is the new coordinates of point a?</p>
<p>My solution:</p>
<pre><code>x' = x*cos(theta) - y*sin(theta)
y' = x*sin(theta) + y*cos(theta)
x' = (-12.961) - (-298.0036)
y' = (15.15) + (254.92)
x' = 285.04
y' = 270.07
</code></pre>
| Raymond Manzoni | 21,783 | <p>I don't know why this old question emerged... Anyway, let's try this using complex numbers:
$$(-19.94+392.11 i)\cdot e^{2\pi i\dfrac{-49.45}{360}}\approx 284.98 + 270.07i$$</p>
<p>So that the OP's answer looked not so bad!</p>
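<p>The same computation in Python, for the record (a sketch; the point is treated as a complex number and multiplied by a unit complex number to rotate it):</p>

```python
import cmath, math

point = complex(-19.94, 392.11)
angle = math.radians(-49.45)             # rotation angle in radians
rotated = point * cmath.exp(1j * angle)  # multiply by e^{i*theta}
print(round(rotated.real, 2), round(rotated.imag, 2))  # 284.98 270.07
```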
|
25,100 | <p>Suppose one has a set $S$ of positive real numbers, such that the usual numerical ordering on $S$ is a well-ordering. Is it possible for $S$ to have any countable ordinal as its order type, or are the order types that can be formed in this way more restricted than that?</p>
| gowers | 1,459 | <p>You can get any order type. Let's assume you can get all order types up to but not including $\alpha$, using subsets of $(0,1]$. If $\alpha=\beta + 1$ then squash your representation of $\beta$ and add an extra point. If $\alpha$ is a limit ordinal, choose a sequence of ordinals that converges to $\alpha$ and put the first one into $(0,1/2]$, the second into $(1/2,3/4]$, etc., and the result will have order type $\alpha$.</p>
|
25,100 | <p>Suppose one has a set $S$ of positive real numbers, such that the usual numerical ordering on $S$ is a well-ordering. Is it possible for $S$ to have any countable ordinal as its order type, or are the order types that can be formed in this way more restricted than that?</p>
| Andrew Marks | 6,151 | <p>Using wellorderings of positive reals is actually the standard way to construct an Aronszajn tree.</p>
|
3,536,822 | <p>A man has three bags filled with balls. One bag contains balls weighing <span class="math-container">$9$</span> grams, the second bag contains balls weighing <span class="math-container">$10$</span> grams and the third bag contains balls weighing <span class="math-container">$11$</span> grams. The man got confused and does not know which bag contains which balls. But in each bag all the balls weigh the same. He has an old-fashioned scale that is about to break. This means that he can only weigh once with it. How does the man find out which ball weighs what?</p>
| Bram28 | 256,001 | <blockquote>
<p>Since <span class="math-container">$(A\vdash B)$</span> and <span class="math-container">$(A \mkern-2mu\not\mkern2mu\Rightarrow B)$</span>, therefore, <span class="math-container">$\text{If}\ A \vdash B\ \text{then}\ A \mkern-2mu\not\mkern2mu\Rightarrow B$</span> can be concluded.</p>
</blockquote>
<p>Careful! If you are trying to say that this "If <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \not \Rightarrow B$</span>" holds for <em>any</em> <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, then you are clearly mistaken. For example, we have that <span class="math-container">$A \vdash A$</span>, but we also have that <span class="math-container">$A \Rightarrow A$</span></p>
<p>Indeed, since soundness means that <em>for every</em> <span class="math-container">$A$</span> and <span class="math-container">$B$</span> we have that "If <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \Rightarrow B$</span>", the system being unsound means that we don't have "If <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \Rightarrow B$</span>" for <em>some</em> <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.</p>
<p>OK, so should we say that we can conclude that for some <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, we have "If <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \not \Rightarrow B$</span>"?</p>
<p>Well, technically, that is true ... but it is not very interesting at all! Note that <span class="math-container">$A \not \Rightarrow \neg A$</span>, and since any conditional is trivially true as soon as its consequent is true, you'd immediately have that "If <span class="math-container">$A \vdash \neg A$</span>, then <span class="math-container">$A \not \Rightarrow \neg A$</span>", and thus that we have "If <span class="math-container">$A \vdash B$</span>, then <span class="math-container">$A \not \Rightarrow B$</span>" for <em>some</em> <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. So note that the latter is true also for systems that are perfectly sound!</p>
<p>Indeed, the much <em>stronger</em> and more interesting claim would be to say that for some <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, we have both <span class="math-container">$A \vdash B$</span> <em>and</em> <span class="math-container">$A \not \Rightarrow B$</span>: <em>that</em> is the characteristic feature of an unsound system.</p>
|
3,479,883 | <p>I know that (I might be wrong):</p>
<ul>
<li>Symbol for empty or null set : {Ø} or {}</li>
<li>Null or empty set is 'subset of all sets' as well as 'empty or null set' set</li>
<li>So, { {} } is same as { Ø }</li>
</ul>
<p>I just want to know whether { {} } or { Ø } is an empty set or not. And if yes, then we can conclude that if a set contains only a null set (which is by definition always true), then it must be a null or empty set.</p>
<p>(Here I am assuming empty and null are the same, because I've read that they are sometimes taken as different.)</p>
| StackTD | 159,845 | <blockquote>
<p>I know that (I might be wrong):</p>
<ul>
<li>Symbol for empty or null set : {Ø} or {}</li>
</ul>
</blockquote>
<p>You write the empty set as "Ø" or "{}" so your first notation, "{Ø}" is already <em>a set containing the empty set</em>!</p>
<p>So don't mix:</p>
<ul>
<li>the empty set: "Ø" or "{}"</li>
<li>a set containing the empty set: "{Ø}" or "{ {} }"</li>
</ul>
<blockquote>
<p>So, { {} } is same as { Ø }</p>
</blockquote>
<p>Correct!</p>
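<p>The distinction is easy to see with Python's <code>frozenset</code> (a sketch; <code>frozenset</code> is used only because sets of sets need hashable elements):</p>

```python
empty = frozenset()                  # Ø: a set with no elements
contains_empty = frozenset({empty})  # {Ø}: a set with exactly one element, Ø

print(len(empty), len(contains_empty))  # 0 1
print(empty == contains_empty)          # False: {Ø} is not empty
print(empty in contains_empty)          # True: Ø is the element of {Ø}
```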
|
1,015,826 | <p>For $r>1$, prove the sequence $$X_n=\left(1+r^n\right)^{1/n}$$ is decreasing. I understand intuitively that the sequence is decreasing and that its limit is $r$. I am just not sure about the algebra. My thought is to show $X_n>X_{n+1}$ by showing $X_n-X_{n+1}>0$ for all $n$. I could also use induction; however, I am not sure how that would be done. </p>
<p>If someone is willing to give me a push in the right direction, it would be much appreciated! </p>
| orangeskid | 168,051 | <p>Here is how you show that if $x_1$, $\ldots$, $x_k >0$ and $k \ge 2$ then the function
$$(0 , \infty) \ni s \mapsto (x_1^s + \cdots +x_k^s)^{\frac{1}{s}}$$ is strictly decreasing. </p>
<p>Let $0< s< t$. Want to show </p>
<p>$$ (x_1^{s} + \cdots +x_k^s)^{\frac{1}{s}}> (x_1^{t} + \cdots +x_k^t)^{\frac{1}{t}}$$</p>
<p>This is equivalent (raise both sides to the power $t$, then divide by $\left(x_1^{s} + \cdots +x_k^s\right)^{t/s}$) to: </p>
<p>$$\sum_i \left( \frac{x_i^s}{x_1^{s} + \cdots +x_k^s}\right)^{\frac{t}{s}} < 1$$
and you note that $\frac{t}{s} > 1$ and
$$\sum_i \left( \frac{x_i^s}{x_1^{s} + \cdots +x_k^s}\right)=1.$$
Each ratio lies strictly between $0$ and $1$ (here $k\ge 2$ is used), so raising it to the power $\frac{t}{s}>1$ strictly decreases it, and the sum of the powers is indeed strictly less than $1$.</p>
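<p>A quick numerical sanity check of this monotonicity (a Python sketch with random positive inputs; it illustrates the statement, it is not a proof):</p>

```python
import random

random.seed(0)
for _ in range(100):
    xs = [random.uniform(0.1, 10) for _ in range(4)]
    f = lambda s: sum(x ** s for x in xs) ** (1 / s)
    values = [f(s) for s in (0.5, 1, 2, 3, 10)]
    # strictly decreasing in s on every sample
    assert all(a > b for a, b in zip(values, values[1:]))
print("strictly decreasing on all samples")
```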
|
4,593,212 | <p>Question: A coin is tossed where the probability it lands on heads is <span class="math-container">$1/3$</span>. What is the expected number of heads before tails?</p>
<p>My answer: number of heads before tails = <span class="math-container">$\left(\frac{1}{3}\right)^1+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^3+\cdots$</span> <span class="math-container">$=$</span> <span class="math-container">$\frac{1}{1-\frac{1}{3}} = \frac{3}{2}$</span></p>
<p>I don't know if this is right or not; any input would be great. Thanks.</p>
| Daniel S. | 362,911 | <p>Almost there, it should be <span class="math-container">$3/2-1=0.5$</span>. There is a chance you will get no heads before tails! So, you subtract 1 from your solution. There are two versions of the geometric distribution: <a href="https://en.wikipedia.org/wiki/Geometric_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Geometric_distribution</a></p>
|
4,593,212 | <p>Question: A coin is tossed where the probability it lands on heads is <span class="math-container">$1/3$</span>. What is the expected number of heads before tails?</p>
<p>My answer: number of heads before tails = <span class="math-container">$\left(\frac{1}{3}\right)^1+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^3+\cdots$</span> <span class="math-container">$=$</span> <span class="math-container">$\frac{1}{1-\frac{1}{3}} = \frac{3}{2}$</span></p>
<p>I don't know if this is right or not; any input would be great. Thanks.</p>
| RyRy the Fly Guy | 412,727 | <p>Let <span class="math-container">$X$</span> be a geometric random variable such that <span class="math-container">$X=k$</span> if and only if the first instance of tails is the <span class="math-container">$k$</span>th flip.</p>
<p>With probability <span class="math-container">$p = \frac{2}{3}$</span> of getting tails on any given flip, this random variable has expectation</p>
<p><span class="math-container">$$ E[X] = \frac{1}{p} = \frac{1}{\Big( \frac{2}{3} \Big)} = \frac{3}{2} = 1.5$$</span></p>
<p>In other words, we expect to get tails in <span class="math-container">$1.5$</span> flips. We then minus one flip to obtain the number of heads we expect to obtain before the instance of tails</p>
<p><span class="math-container">$$ \frac{3}{2} - 1 = \frac{1}{2} $$</span></p>
<p>Hence, we expect to get heads <span class="math-container">$0.5$</span> times before we get tails.</p>
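<p>A quick simulation supports the value $\frac{1}{2}$ (a Python sketch; heads probability $\frac{1}{3}$, and the sample size is arbitrary):</p>

```python
import random

random.seed(1)

def heads_before_tails(p_heads=1/3):
    # Flip until the first tails; count the heads seen before it.
    count = 0
    while random.random() < p_heads:
        count += 1
    return count

n = 200_000
estimate = sum(heads_before_tails() for _ in range(n)) / n
print(estimate)  # close to 0.5
```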
|
3,768,198 | <p>Show that <span class="math-container">$\|uv^T-wz^T\|_F^2\le \|u-w\|_2^2+\|v-z\|_2^2$</span>, assuming <span class="math-container">$u,v,w,z$</span> are all unit vectors.</p>
| Daniel Li | 294,291 | <p>Let <span class="math-container">$a=u^Tw,b=v^Tz.$</span></p>
<p><span class="math-container">$\|uv^T-wz^T\|_F^2=\operatorname{tr}\!\left((uv^T-wz^T)^T(uv^T-wz^T)\right)=2-2ab$</span></p>
<p>And RHS=<span class="math-container">$4-2(a+b).$</span></p>
<p>Check that <span class="math-container">$2-2ab\le4-2(a+b) \iff a+b-ab\le1,$</span> using that <span class="math-container">$|a|\le1, |b|\le 1.$</span></p>
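<p>The identities above are easy to sanity-check numerically (a pure-Python sketch with random unit vectors; <code>lhs</code> and <code>rhs</code> are the two sides of the inequality, computed entry by entry):</p>

```python
import math, random

random.seed(0)

def unit(n=3):
    v = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(c * c for c in v))
    return [c / norm for c in v]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

for _ in range(1000):
    u, v, w, z = unit(), unit(), unit(), unit()
    a, b = dot(u, w), dot(v, z)
    # ||u v^T - w z^T||_F^2, expanded entry by entry
    lhs = sum((u[i] * v[j] - w[i] * z[j]) ** 2
              for i in range(3) for j in range(3))
    rhs = (sum((p - q) ** 2 for p, q in zip(u, w))
           + sum((p - q) ** 2 for p, q in zip(v, z)))
    assert abs(lhs - (2 - 2 * a * b)) < 1e-9   # trace identity
    assert abs(rhs - (4 - 2 * (a + b))) < 1e-9
    assert lhs <= rhs + 1e-12
print("all checks passed")
```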
|
3,768,198 | <p>Show that <span class="math-container">$\|uv^T-wz^T\|_F^2\le \|u-w\|_2^2+\|v-z\|_2^2$</span>, assuming <span class="math-container">$u,v,w,z$</span> are all unit vectors.</p>
| user1551 | 1,551 | <p>Let <span class="math-container">$A=(u-w)v^T$</span> and <span class="math-container">$B=w(v-z)^T$</span>. The inequality in question is then equivalent to
<span class="math-container">$$
\|A+B\|_F^2\le\|A\|_F^2+\|B\|_F^2.
$$</span>
It is true if and only if <span class="math-container">$\langle A,B\rangle_F\le0$</span>. Indeed, this is the case because
<span class="math-container">$$
\langle A,B\rangle_F
=\left[w^T(u-w)\right]\left[v^T(v-z)\right]
=(w^Tu-1)(1-v^Tz)\le0.
$$</span></p>
|
2,094,123 | <p>A plane curve is printed on a piece of paper with the directions of both axes specified. How can I (roughly) verify if the curve is of the form $y=a e^{bx}+c$ without fitting or doing any quantitative calculation?</p>
<p>For example, for linear curves, I can choose two points on the curve and check if the midpoint is also on the curve. For parabolas, I can examine the geometric relationship between the tangent at a point and the secant connecting the peak and that point. Does the exponential curve have any similar geometric features that I can take advantage of?</p>
| Anonymous | 399,787 | <p><strong>Are the doubling points evenly spaced?</strong></p>
<p>Assume $a$ and $b$ are positive (if not, it's easy to see and readjust - $a$ is positive if the curve flattens to the left, $b$ has the same sign as $a$ if the curve is increasing). Mentally reset the $x$ axis at the height to which the curve tends to become horizontal to the far left (that's $c=\lim_{x\rightarrow -\infty} ae^{bx}+c$) . Note: there's a way to be really rigorous about this and obtain $c$ with ruler and compass (see below) but then it's not really "eyeballing".</p>
<p>With $c$ set to $0$, take a point $x_1$ on the (new) $x$ axis, and a random multiplier $M>1$; say, $M=2$. Eyeball the point $x_2$ at which $y(x_2)$ is roughly $M$ times $y(x_1)$. Then eyeball the point $x_3$ at which $y(x_3)$ is roughly $M$ times $y(x_2)$, and so on, and check if the points $x_1,x_2,x_3,...$ at which $y$ multiplies by $M$ are evenly spaced, as they must be if your curve is an exponential.</p>
<p>Easier to do than to say... as long as you have found enough "multiplication" points on your piece of paper. If your curve is very "flat" you'd have to choose an $M$ very close to $1$, but it still works <em>in theory</em> (or if you do it with ruler and compass with sufficient accuracy - but again, that's not really "eyeballing"). In practice it will be hard to tell such a "flat" exponential from a parabola or even a line: remember that $e^{bx}$ is very close to $1+bx$ if $bx\ll 1$, i.e. if $\frac{1}{b}$ is much larger than the largest $x$ you have on your piece of paper. Then again, even a "sufficiently flat" parabola is hard to tell from a line...</p>
<p>If your piece of paper on the other hand is sufficiently large, as a bonus, you can also gauge $a$ (or more accurately $a \cdot e$), $b$ (or more accurately $\frac{\ln M}{b}$) and $c$. We know that $c$ is the height to which the curve converges as $x\rightarrow -\infty$, while ($a \cdot e$) is the distance between said height and the $y$ axis intersect. And the horizontal spacing $\Delta x$ between the points at which you check the curve is $\frac{\ln M}{b}$ since $e^{b\Delta x}=M$.</p>
<p><strong>Let's be rigorous!</strong> </p>
<p>First of all, how can we rigorously construct $c$ with ruler and compass? To do it, all we have to do is to take a random positive horizontal spacing $\Delta x$, and obtain $\Delta^0_y=y(0)-y(-\Delta x)$, and $\Delta^1_y=y(-\Delta x) - y(-2\Delta)$. Call $\rho<1$ the ratio $\frac{\Delta^1_y}{\Delta^0_y}$. It's immediate that $c$ is located at distance $\sum_{i=0}^\infty \rho^i \Delta^0_y = \Delta^0_y \frac{1}{1-\rho}$ below the intersect of the curve with the $y$ axis, which we can easily if somewhat laboriously obtain with ruler and compass noting that $\Delta^0_y$ is mid-proportional between said distance and $\Delta^0_y-\Delta^1_y$.</p>
<p>Also, as Rahul correctly points out, to be really formal one would have to choose $M$ sufficiently "randomly" (uniformly at random in any arbitrary small non-degenerate interval suffices), so that the probability of encountering a function in the form $e^{bx+f(x)}$ with $f(x)$ periodic with a period that's exactly an integer multiple of $M$ would be $0$. In practice, since you are only eyeballing for <em>rough</em> exponentiality, checking that your curve does not "wiggle" is enough to rule out such cases!</p>
|
1,802,515 | <blockquote>
<p>Say you have a bank account in which your invested money yields 3% every year, continuously compounded. Also, you have estimated that you spend $1000 every month to pay your bills, that are withdrawn from this account.</p>
<p>Create a differential model for that, find its equilibriums and determine its stability.</p>
</blockquote>
<p>My problem here is that the \$1000 withdrawal is not continuous on time, it's discrete. The best I could achieve is, if <span class="math-container">$S(t)$</span> is the current balance: <span class="math-container">$\dot S (t) = 0,0025S(t) - 1000$</span>. I'm using <span class="math-container">$0,0025$</span> as the interest rate because it yields 3% every year, so it should yield 0,25% every month. But I'm pretty confident that it's wrong. Any help would be highly appreciated! Thanks!</p>
| mvw | 86,776 | <p>The money flow consists of two contributions
$$
\dot{S} = \dot{S}_y + \dot{S}_m
$$
with the continous contribution
$$
\dot{S}_y = a S \quad (*)
$$
where $a$ must be adjusted to give the yearly interest rate such that
$$
S_y(1\text{y}) = (1 + p) S_y(0\text{y})
$$
for $p = 3\% = 3/100$ and the monthly part
$$
\dot{S}_m = -M \sum_{k=1}^\infty \delta(t - k\cdot 1\text{m})
$$
with $M = 1000 \$$.</p>
<p>Determining $a$ from $p$: Equation $(*)$ means
$$
S_y(t) = S_y(0\text{y}) e^{at}
$$
and
$$
S_y(1\text{y}) = S_y(0\text{y}) e^{a \cdot 1 \text{y}} = (1 + p) S_y(0\text{y})
$$
so $a = \ln(1+p)/1\text{y}$.
In summary:
$$
\dot{S} = \frac{\ln(1+p)}{1\text{y}}\, S(t) - M \sum_{k=1}^\infty \delta(t - k\cdot 1\text{m})
$$</p>
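<p>A small simulation makes the equilibrium and its (in)stability concrete. The sketch below (not part of the derivation above) uses the exact one-month map of this model: the balance grows continuously for a month by the factor $g=(1+p)^{1/12}$, then the withdrawal $M$ is taken:</p>

```python
p, M = 0.03, 1000.0
g = (1 + p) ** (1 / 12)    # continuous growth over one month
S_star = M / (g - 1)       # equilibrium: S*g - M = S

def run(S, months=120):
    for _ in range(months):
        S = S * g - M      # grow for a month, then withdraw
    return S

print(round(S_star, 2))           # roughly 4.05e5
print(run(S_star + 1) > S_star)   # True: above equilibrium the balance grows
print(run(S_star - 1) < S_star)   # True: below it the balance shrinks
```

<p>Since a deviation $d$ from $S^*$ turns into $g\,d$ after each month and $g>1$, the equilibrium is unstable.</p>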
|
433,403 | <ol>
<li>Let F(x,y) be the statement, “x can fool y,” where the domain consists of all of the people in the world. Translate this statement into symbolic logic.
a. Everyone can be fooled by somebody.</li>
</ol>
<p>Would it be: For every x, y in W, F(x,y) is in W?</p>
<p>I am not getting the gist of this...</p>
| Amr | 29,267 | <p>$$\forall x \exists y F(y,x)$$</p>
|
229,161 | <p>A sequence of positive integer is defined as follows</p>
<blockquote>
<ul>
<li>The first term is $1$.</li>
<li>The next two terms are the next two even numbers $2$, $4$.</li>
<li>The next three terms are the next three odd numbers $5$, $7$, $9$.</li>
<li>The next $n$ terms are the next $n$ even numbers if $n$ is even or the next $n$ odd numbers if $n$ is odd.</li>
</ul>
</blockquote>
<p>What is the general term $a_n?$</p>
<p><strong>Please, proofs of all these formulas would be nice</strong></p>
| Jean-Sébastien | 31,493 | <p>Not a general term, but an interesting reformulation of your recursion. Given $a(1)=1$,</p>
<p>$$
a(n):=\begin{cases}
a(n-1)+1& \text{if} \,\, a(n-1) \text{ is a square}\\
a(n-1)+2 &\text{otherwise}
\end{cases}
$$</p>
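<p>One can check that this recursion reproduces the block construction from the question (a Python sketch; <code>by_blocks</code> builds the sequence directly from the block description):</p>

```python
import math

def by_blocks(n_terms):
    # Block n contributes the next n numbers of the parity of n.
    seq, block = [1], 2
    while len(seq) < n_terms:
        start = seq[-1] + 1
        if start % 2 != block % 2:   # fix parity if needed
            start += 1
        seq.extend(start + 2 * i for i in range(block))
        block += 1
    return seq[:n_terms]

def by_recursion(n_terms):
    seq = [1]
    while len(seq) < n_terms:
        prev = seq[-1]
        step = 1 if math.isqrt(prev) ** 2 == prev else 2
        seq.append(prev + step)
    return seq

print(by_blocks(9))                         # [1, 2, 4, 5, 7, 9, 10, 12, 14]
print(by_blocks(200) == by_recursion(200))  # True
```

<p>The two agree because each block of $n$ terms ends exactly at $n^2$, so the recursion steps by $1$ precisely at the block boundaries.</p>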
|
138,243 | <p><a href="https://i.stack.imgur.com/kRmeb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRmeb.png" alt="Area and Perimeter"></a></p>
<p>How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of radius <code>r</code> of the outer circle, and find the points of intersection of the inner circles.</p>
| m_goldberg | 3,066 | <p>This isn't really a Mathematica problem. It is a Euclidean geometry problem and can be solve by a little classic geometry reasoning. Like ubpdqn I will work in the 1st quadrant and invoke symmetry. </p>
<p><a href="https://i.stack.imgur.com/0fAM8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0fAM8.png" alt="diagram"></a></p>
<p>By construction</p>
<p>$\qquad$OC = OE = R<br>
$\qquad$OB = OD = DA = BA = R/2</p>
<p>By observation</p>
<p>$\qquad$Quadrant perimeter = EA + AO + OA + AC + EC<br>
$\qquad$OBAD is a square</p>
<p>Arcs EA + AO and OA + AC are each one half the circumference of the equal circles centered at B and D, which have radii R/2, so EA + AO + OA + AC = circumference of an inner circle = π R. EC is one quarter of the circumference of the outer circle, so EC = (2 π R)/4 = π R/2. The quadrant perimeter is therefore π R + π R/2 = 3 π R/2. </p>
<p>It follows that the full perimeter, 4 x (quadrant perimeter), is 6 π R.</p>
<p>Point A is one of the points where the inner circles intersect and it clearly lies at {R/2, R/2}. By symmetry, the four points of intersection are </p>
<p>$\qquad${{R/2, R/2}, {-R/2, R/2}, {-R/2, -R/2}, {R/2, -R/2}}.</p>
<p>Finding the area is a little more complicated, but not much. </p>
<p>The area, a1, between the two arcs ending at points O and A is clearly twice the difference of the area between the arc OA and the dashed line OA. This in turn is the area of a quadrant of inner circle centered at B less the half the square OBAD. Thus, </p>
<p>$\qquad$a1 = 2 ((π (R/2)^2)/4 - ((R/2)^2)/2) = 1/8 (π - 2) R^2</p>
<p>The area, a2, bordered by the arcs EC, EA and AC is the area of the quadrant less the area of 2 quadrants of an inner circle less the area of the square OBAD. This is given by</p>
<p>$\qquad$a2 = (π R^2)/4 - (π (R/2)^2)/2 - (R/2)^2 = 1/8 (π - 2) R^2</p>
<p>Note that a1 = a2 (which I find an interesting result in itself). Therefore, the full area is </p>
<p>$\qquad$4 (2 a1) = (π - 2) R^2</p>
|
138,243 | <p><a href="https://i.stack.imgur.com/kRmeb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRmeb.png" alt="Area and Perimeter"></a></p>
<p>How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of radius <code>r</code> of the outer circle, and find the points of intersection of the inner circles.</p>
| Carl Woll | 45,431 | <p>I would use <a href="http://reference.wolfram.com/language/ref/BooleanRegion.html" rel="noreferrer"><code>BooleanRegion</code></a>:</p>
<pre><code>reg = BooleanRegion[
Xor,
{Disk[{-1,0},1], Disk[{0,1},1], Disk[{1,0},1], Disk[{0,-1},1], Disk[{0,0},2]}
];
RegionMeasure @ reg
RegionMeasure @ RegionBoundary @ reg
</code></pre>
<blockquote>
<p>4 (-2 + Pi)</p>
<p>12 Pi</p>
</blockquote>
|
842,271 | <p>Evaluation of $\displaystyle \int \frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}dx$</p>
<p>$\bf{My\; Try::}$ Let $x=t^4\;,$ Then $dx = 4t^3dt$</p>
<p>So the integral is $\displaystyle \int\frac{\sqrt[3]{t^4+t}}{t^2} \cdot 4t^3\,dt$</p>
<p>So the integral is $\displaystyle 4\int t^{\frac{7}{3}}\cdot (1+t^{-3})^{\frac{1}{3}}\,dt$</p>
<p>Now how can I solve it from here?</p>
<p>Help me</p>
<p>Thanks</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}
\newcommand{\wt}[1]{\widetilde{#1}}$
$\ds{\int{\root[3]{x + \root[4]{x}} \over \root{x}}\,\dd x:\ {\large ?}}$</p>
<blockquote>
<p>$$
\mbox{Lets consider}\quad\fermi\pars{x}\equiv
\int_{0}^{x}{\root[3]{t + \root[4]{t}} \over \root{t}}\,\dd t\,,\qquad x > 0
$$</p>
</blockquote>
<p>\begin{align}
\fermi\pars{x}&=
\int_{0}^{x^{1/4}}{\root[3]{t^{4} + t} \over t^{2}}\,4t^{3}\,\dd t
=4\int_{0}^{x^{1/4}}\root[3]{t^{3} + 1}t^{4/3}\,\dd t
\\[3mm]&=4\int_{0}^{x^{3/4}}\pars{1 + t}^{1/3}t^{4/9}\,{1 \over 3}\,t^{-2/3}\dd t
={4 \over 3}\int_{0}^{x^{3/4}}\pars{1 + t}^{1/3}t^{-2/9}\dd t
\end{align}</p>
<blockquote>
<p>Set $\ds{\xi \equiv {1 \over 1 + t}\quad\imp\quad t = {1 \over \xi} - 1}$:
\begin{align}
\fermi\pars{x}&={4 \over 3}\int_{1}^{1/\pars{1 + x^{3/4}}}
\xi^{-1/3}\pars{{1 \over \xi} - 1}^{-2/9}\pars{-\,{\dd\xi \over \xi^{2}}}
\\[3mm]&={4 \over 3}\int_{1/\pars{1 + x^{3/4}}}^{1}
\xi^{-19/9}\pars{1 - \xi}^{-2/9}\,\dd\xi
\end{align}</p>
</blockquote>
<p>The final result can be expressed in terms of the
<a href="http://functions.wolfram.com/GammaBetaErf/Beta4/" rel="nofollow">Generalized Incomplete Beta Function</a>
$\ds{{\rm B}\pars{z_{1},z_{2},a,b}}$:
\begin{align}
\fermi\pars{x}\equiv
\color{#66f}{\large\int_{0}^{x}{\root[3]{t + \root[4]{t}} \over \root{t}}\,\dd t
={4 \over 3}\,
{\rm B}\pars{{1 \over 1 + x^{3/4}},1,-\,{10 \over 9},{7 \over 9}}}
\end{align}</p>
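<p>As a numerical sanity check of the chain of substitutions (a sketch, assuming SciPy is available; the test point $x = 2$ is an arbitrary choice):</p>

```python
from scipy.integrate import quad

def original(x):
    # f(x) = ∫_0^x (t + t^{1/4})^{1/3} / sqrt(t) dt  (integrable singularity at 0)
    val, _ = quad(lambda t: (t + t**0.25)**(1/3) / t**0.5, 0, x)
    return val

def beta_form(x):
    # (4/3) ∫_{1/(1 + x^{3/4})}^1 u^{-19/9} (1 - u)^{-2/9} du, the final form above
    lo = 1.0 / (1.0 + x**0.75)
    val, _ = quad(lambda u: u**(-19/9) * (1.0 - u)**(-2/9), lo, 1)
    return (4.0 / 3.0) * val

assert abs(original(2.0) - beta_form(2.0)) < 1e-6
```
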
|
207,865 | <p>It is known that $B$, $C$ and $D$ are all $3 \times 3$ matrices, and that the eigenvalues of $B$ are $1, 2, 3$; those of $C$ are $4, 5, 6$; and those of $D$ are $7, 8, 9$. What are the eigenvalues of the $6 \times 6$ matrix
$$\begin{pmatrix}
B & C\\0 & D
\end{pmatrix}$$
where $0$ is the $3 \times 3$ matrix whose entries are all $0$.
From my intuition, I think the eigenvalues of the new $6 \times 6$ matrix are the eigenvalues of $B$ and $D$. But how can I show that? </p>
| hadi | 743,402 | <p><span class="math-container">$BE=-CD^{-1}$</span>.
By your assumptions <span class="math-container">$B$</span> is invertible, hence we have
<span class="math-container">$E = -B^{-1}CD^{-1}$</span>.</p>
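<p>The intuition in the question (that the eigenvalues of the block upper triangular matrix are those of $B$ together with those of $D$) can be checked numerically. A sketch with NumPy, using arbitrary matrices that have the stated spectra:</p>

```python
import numpy as np

B = np.diag([1.0, 2.0, 3.0])               # any B, D with the stated spectra work;
D = np.diag([7.0, 8.0, 9.0])               # diagonal ones are the simplest choice
C = np.arange(1.0, 10.0).reshape(3, 3)     # the off-diagonal block does not matter

M = np.block([[B, C], [np.zeros((3, 3)), D]])
eigs = sorted(np.linalg.eigvals(M).real)
assert np.allclose(eigs, [1, 2, 3, 7, 8, 9])
```
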
|
2,293,600 | <p>How do I calculate $X \cap \{X\}$ for finite sets, to develop an intuition for intersections?</p>
<p>If $X = \{1,2,3\}$, then what is $X \cap \{X\}$? </p>
| Affineline | 448,123 | <p>$\varnothing$ ... The set $\{1,2,3\} \notin \{1,2,3\}$, so the intersection is empty.</p>
<p>A collection which contains itself is not a set (by the axiom of regularity).</p>
|
1,081,417 | <p>This is exercise number $57$ in Hugh Gordon's <em>Discrete Probability</em>. </p>
<hr>
<p>For $n \in \mathbb{N}$, show that</p>
<p>$$\binom{\binom{n}{2}}{2}=3\binom{n+1}{4}$$</p>
<hr>
<p>My algebraic solution:</p>
<p>$$\binom{\binom{n}{2}}{2}=3\binom{n+1}{4}$$</p>
<p>$$\binom{\frac{n(n-1)}{2}}{2}=\frac{3n(n+1)(n-1)(n-2)}{4 \cdot 3 \cdot 2}$$</p>
<p>$$2\left(\frac{n(n-1)}{2}\right)\left(\frac{n(n-1)}{2}-1\right)=\frac{n(n+1)(n-1)(n-2)}{2}$$</p>
<p>$$2n(n-1)\frac{n^2-n-2}{2} = n(n+1)(n-1)(n-2)$$</p>
<p>$$n(n-1)(n-2)(n+1)=n(n+1)(n-1)(n-2)$$</p>
<p>This finishes the proof.</p>
<hr>
<p>I feel like this is not what the point of the exercise was; it feels like an unclean, inelegant bashing with the factorial formula for binomial coefficients. Is there a nice counting argument to show the identity? Something involving committees perhaps?</p>
| Marc van Leeuwen | 18,880 | <p>For the right hand side, add a special element $s$ to your $n$-element set; then choose $4$ elements from the extended set, and a partition of those $4$ into $2$ sets of size $2$ (the latter is possible in $3$ ways). If $s$ was not among the selected elements retain the two disjoint pairs; otherwise let the pairs be $\{s,x\}$ and $\{y,z\}$, and retain the sets $\{x,y\}$ and $\{x,z\}$. Every pair of pairs in the left hand side is counted once.</p>
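<p>The identity is easy to confirm by machine for small $n$ (a quick sketch in Python):</p>

```python
from math import comb

# binom(binom(n,2), 2) == 3 * binom(n+1, 4) for all small n
for n in range(50):
    assert comb(comb(n, 2), 2) == 3 * comb(n + 1, 4)
```
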
|
2,752,511 | <p>Prove that if $X$ is Hausdorff, $\Delta=\{(x, x)\mid x\in X\}$ is closed in $X\times X$ (with the product topology).</p>
<p><strong>My attempt:</strong></p>
<p>Let $x_1, x_2\in X$ s.t. $x_1\ne x_2$.</p>
<p>There exist neighborhoods $U_1$ and $U_2$ of $x_1$ and $x_2$ that are disjoint.</p>
<p>$U_1\times U_2$ is a basis element in the product topology on $X\times X$. So, $U_1\times U_2$ is open in $X\times X$.</p>
<p>Let $x\in X$. </p>
<p>$(x, x)\in U_1\times U_2\implies x\in U_1$ and $x\in U_2\implies x\in U_1\cap U_2$, which contradicts the fact that $U_1$ and $U_2$ are disjoint.</p>
<p>So, $(x, x)\notin U_1\times U_2$.</p>
<p>I feel that I'm on the right track but don't know how to proceed. Could someone please help me out?</p>
| Mike Earnest | 177,399 | <p>Your work shows that
$$
\Delta^c=\bigcup_{\substack{(x_1,x_2)\in X\times X\\x_1\neq x_2}} U_1(x_1)\times U_2(x_2),
$$
where $U_1(x_1)$ and $U_2(x_2)$ are separating sets for $x_1,x_2$. This shows the complement of $\Delta$ is a union of open sets, so the complement of $\Delta$ is open, so $\Delta$ is closed.</p>
|
1,114,258 | <p>I am new to differential geometry and Riemannian geometry. </p>
<p>I have on two separate occasions (separated by 6 months) encountered exercises where I feel like I am not giving a complete answer. </p>
<p>Problem 1: </p>
<p><em>Show that the Gaussian curvature of the surface of a cylinder is zero.</em></p>
<p>Problem 2: </p>
<p><em>Use Cartesian coordinates to write out and solve the geodesic equations for a two-dimensional flat plane and show the solutions are straight lines.</em> </p>
<p>In both cases, my argument went something like </p>
<ol>
<li>Define the metric.</li>
<li>Show the Christoffel symbols are zero.</li>
<li>Use this to show there is no curvature. </li>
</ol>
<p>But I feel like this is just me avoiding the real stuff. In other words, I resort to calculus because I don't actually know what I'm doing. So I make a roundabout argument rather than diving right into the mathematics itself. </p>
<p>Is my argument sensible? Is there a more formal way to approach this type of problem? If you could do an example that would be fantastic. </p>
| Steven Stadnicki | 785 | <p>Unfortunately, there is no more elementary argument than going through some form of AC, because the result actually does depend on some amount of choice. As shown by e.g. C.J. Ash (<a href="http://journals.cambridge.org/download.php?file=%2FJAZ%2FJAZ1_19_03%2FS1446788700031505a.pdf&code=680c42651efdb40c1e3a1a9fe14560df">see this 1973 J. Australian Math Society paper</a>), an isomorphism between $(\mathbb{R},+)$ and $(\mathbb{C},+)$ implies the existence of a non-measurable set of reals. The paper has the full argument, but the short version is that (assuming that all sets of reals are measurable) one takes an isomorphism $f:\mathbb{R}\oplus\mathbb{R}\mapsto\mathbb{R}$, defines the sets $S_n=f[\mathbb{R}\oplus[n,n+1)]\cap(0,1)$ (that is, the image of $\mathbb{R}\oplus[n,n+1)$ under $f()$, intersected with the unit interval), and then shows that (a) the $S_n$ partition $(0,1)$ and (b) they all have the same measure. This is enough to contradict countable additivity.</p>
|
1,029,485 | <p>I wish to show the following statement:</p>
<p>$
\forall x,y \in \mathbb{R}
$</p>
<p>$$
(x+y)^4 \leq 8(x^4 + y^4)
$$</p>
<p>What is the scope for generalisation?</p>
<p><strong>Edit:</strong></p>
<p>Apparently the above inequality can be shown using the Cauchy-Schwarz inequality. Could someone please elaborate, stating the vectors you are using in the Cauchy-Schwarz inequality: </p>
<p>$\ \ \forall \ \ v,w \in V, $ an inner product space,</p>
<p>$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
<p>where $\langle v,w\rangle$ is an inner product.</p>
| DeepSea | 101,504 | <p>Apply Cauchy-Schwarz inequality twice: $x^4 + y^4 \geq \dfrac{1}{2}\left(x^2+y^2\right)^2 \geq \dfrac{1}{2}\left(\dfrac{1}{2}\left(x+y\right)^2\right)^2 = \dfrac{1}{8}\left(x+y\right)^4$.</p>
|
1,029,485 | <p>I wish to show the following statement:</p>
<p>$
\forall x,y \in \mathbb{R}
$</p>
<p>$$
(x+y)^4 \leq 8(x^4 + y^4)
$$</p>
<p>What is the scope for generalisation?</p>
<p><strong>Edit:</strong></p>
<p>Apparently the above inequality can be shown using the Cauchy-Schwarz inequality. Could someone please elaborate, stating the vectors you are using in the Cauchy-Schwarz inequality: </p>
<p>$\ \ \forall \ \ v,w \in V, $ an inner product space,</p>
<p>$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
<p>where $\langle v,w\rangle$ is an inner product.</p>
| Milly | 182,459 | <p>A more general result is ($x,y\geq 0$, $p\geq 1$)
$$(x+y)^p \leq 2^{p-1} (x^p+y^p),$$
which is direct consequence of convexity of $t\mapsto t^p$.</p>
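<p>A quick numerical spot check of this general inequality (random samples with a fixed seed; the small slack term only absorbs floating-point error):</p>

```python
import random

random.seed(0)                       # deterministic spot check
for _ in range(1000):
    x = random.uniform(0.0, 10.0)
    y = random.uniform(0.0, 10.0)
    p = random.uniform(1.0, 6.0)
    assert (x + y)**p <= 2**(p - 1) * (x**p + y**p) + 1e-9   # slack for float error
```

Note that equality holds when $x = y$, e.g. $(1+1)^4 = 16 = 2^3(1^4 + 1^4)$.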
|
187,545 | <p><span class="math-container">$\DeclareMathOperator\GL{GL}\DeclareMathOperator\L{\mathfrak{L}}$</span>The free Lie algebra <span class="math-container">$\L(V)$</span> generated by an <span class="math-container">$r$</span>-dimensional vector space <span class="math-container">$V$</span> is, in the
language of <a href="https://en.wikipedia.org/wiki/Free_Lie_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Free_Lie_algebra</a>,
the free Lie algebra generated by any choice of basis <span class="math-container">$e_1, \ldots , e_r$</span> for the vector space <span class="math-container">$V$</span>. (Work over the field <span class="math-container">${\mathbb R}$</span> or <span class="math-container">${\mathbb C}$</span>, whichever you prefer.)
It is a graded Lie algebra<br />
<span class="math-container">$$\L(V) = V \oplus \L_2 (V) \oplus \L_3 (V) \oplus \ldots .$$</span>
The general linear group <span class="math-container">$\GL(V)$</span> of <span class="math-container">$V$</span> acts on <span class="math-container">$\L(V)$</span> by gradation-preserving Lie algebra automorphisms.
Thus each graded piece <span class="math-container">$\L_k (V)$</span> is a finite dimensional
representation space for <span class="math-container">$\GL(V)$</span>. (The 'weight' of <span class="math-container">$\L_k (V)$</span> is <span class="math-container">$k$</span> in the sense that <span class="math-container">$\lambda \mathrm{Id} \in \GL(V)$</span> acts on <span class="math-container">$\L_k (V)$</span> by scalar multiplication by <span class="math-container">$\lambda^k$</span>.)
QUESTION: How does <span class="math-container">$\L_k (V)$</span> break up into <span class="math-container">$\GL(V)$</span>-irreducibles?</p>
<p>I only really know that <span class="math-container">$\L_2 (V) = \Lambda ^2 (V)$</span>, which is already irreducible.</p>
<p>To start the game off, perhaps some reader out there already is familiar with <span class="math-container">$\L_3 (V)$</span>
as a <span class="math-container">$\GL(V)$</span>-rep, and can tell me its irreps in terms of the Young diagrams / Schur theory involving 3 symbols?</p>
<p>(My motivation arises from trying to understand some details of the subRiemannian geometry <a href="https://en.wikipedia.org/wiki/Sub-Riemannian_manifold" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sub-Riemannian_manifold</a> of the Carnot group whose Lie algebra is the free <span class="math-container">$k$</span>-step Lie algebra, which is <span class="math-container">$\L(V)$</span>-truncated after step <span class="math-container">$k$</span>. )</p>
| F. C. | 10,881 | <p>Here is the result, computed using sage:</p>
<pre><code>sage: def lie(n):
....: p = SymmetricFunctions(QQ).p()
....: return p.sum_of_terms((Partition([d for j in range(ZZ(n / d))]),
....: moebius(d) / n) for d in divisors(n))
sage: s = SymmetricFunctions(QQ).schur()
sage: [s(lie(i)) for i in range(1,8)]
[s[1],
s[1, 1],
s[2, 1],
s[2, 1, 1] + s[3, 1],
s[2, 1, 1, 1] + s[2, 2, 1] + s[3, 1, 1] + s[3, 2] + s[4, 1],
s[2, 1, 1, 1, 1] + 2*s[2, 2, 1, 1] + s[3, 1, 1, 1] + 3*s[3, 2, 1] + s[3, 3] + 2*s[4, 1, 1] + s[4, 2] + s[5, 1],
s[2, 1, 1, 1, 1, 1] + 2*s[2, 2, 1, 1, 1] + 2*s[2, 2, 2, 1] + 2*s[3, 1, 1, 1, 1] + 5*s[3, 2, 1, 1] + 3*s[3, 2, 2] + 3*s[3, 3, 1] + 3*s[4, 1, 1, 1] + 5*s[4, 2, 1] + 2*s[4, 3] + 2*s[5, 1, 1] + 2*s[5, 2] + s[6, 1]]
</code></pre>
<p>This uses a classical formula for the character of the symmetric group acting on Lie(n).</p>
|
1,241,970 | <p>A fair coin is tossed three times. Let $X$ be the number of heads that turn up on the first two tosses and $Y$ the number of heads that turn up on the third toss. Give the distribution of $X$, $Y$, $X + Y$, $X − Y$ and $XY$.</p>
| Brian Tung | 224,454 | <p>The distribution of $X+Y$ is a binomial distribution, as expected:</p>
<p>$$
q_0 = q_3 = 1/8 \\
q_1 = q_2 = 3/8
$$</p>
<p>where $q_i = P(X+Y = i)$. This can be reasoned out as follows: You can obtain $0$ only if $X = Y = 0$, so the probability is $(1/4)(1/2) = 1/8$. You can obtain $3$ only if $X = 2, Y = 1$, so again the probability is $(1/4)(1/2) = 1/8$. Finally, you can obtain $1$ if $X = 1, Y = 0$, or <em>vice versa</em>, and you can obtain $2$ if $X = 2, Y = 0$ or $X = Y = 1$, and in either case the probability is $(1/4)(1/2)+(1/2)(1/2) = 3/8$.</p>
<p>Interestingly, the difference $X-Y$ has the same spread, only shifted down by one:</p>
<p>$$
r_{-1} = r_2 = 1/8 \\
r_0 = r_1 = 3/8
$$</p>
<p>where $r_i = P(X-Y = i)$. Can you see, by a similar consideration of the different possibilities, why that is?</p>
<p>ETA: Ahh, OK, for $XY$, again, a similar consideration of cases gives us</p>
<p>$$
p_0 = P(X = \mbox{anything}, Y = 0) + P(X = 0, Y = 1)
= 1/2 + (1/4)(1/2) = 5/8 \\
p_1 = P(X = Y = 1) = (1/2)(1/2) = 1/4 \\
p_2 = P(X = 2, Y = 1) = (1/4)(1/2) = 1/8
$$</p>
<p>where $p_i = P(XY = i)$.</p>
|
340,886 | <p>Suppose $x=(x_1,x_2),y = (y_1,y_2) \in \mathbb{R}^2$. I noticed that
\begin{align*}
\|x\|^2 \|y\|^2 - \langle x,y \rangle^2 &=
x_1^2y_1^2 + x_1^2 y_2^2 + x_2^2 y_1^2 + x_2^2 y_2 ^2 - (x_1^2 y_1^2 + 2 x_1 y_1 x_2 y_2 + x_2^2 y_2^2) \\
&=(x_1 y_2)^2 - 2x_1 y_2 x_2 y_1 + (x_2 y_1)^2 \\
&=(x_1 y_2 - x_2 y_1)^2
\end{align*}
which proves the CSB inequality in dimension two. This begs the question:</p>
<blockquote>
<p>If $x = (x_1,\ldots,x_n),y=(y_1,\ldots,y_n) \in \mathbb{R}^n$, is there a polynomial $p \in \mathbb{R}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ such that $ \|x\|^2 \|y\|^2 - \langle x,y \rangle^2 = p^2$?</p>
</blockquote>
| Community | -1 | <p>$\sum x_i^2\sum y_i^2-(\sum x_iy_i)^2$</p>
<p>$=\sum x_i^2y_i^2+\sum_{i\neq j} x_i^2y_j^2-\sum x_i^2y_i^2-\sum_{i\neq j} x_iy_ix_jy_j$</p>
<p>$=\sum_{i<j} (x_iy_j-x_jy_i)^2$</p>
|
2,185,118 | <p>Find the sum of the series. For what values of the variable does the series converge to this sum?</p>
<p>$$1+\frac{x} {2}+\frac{x^2} {4}+\frac{x^3} {8}...$$</p>
<p>Summation notation: $\sum_{n=0}^\infty \frac{x^n} {2^n}$</p>
<p>I know you use the formula $\frac{a} {1-r}$ to find the sum of geometric series but I'm confused about the x</p>
| joeb | 362,915 | <p>$\sum_{n=0}^\infty \frac{x^n}{2^n} = \sum_{n=0}^\infty y^n$, where $y = x/2$. The series converges when $|x|/2 = |y| < 1$.</p>
|
2,185,118 | <p>Find the sum of the series. For what values of the variable does the series converge to this sum?</p>
<p>$$1+\frac{x} {2}+\frac{x^2} {4}+\frac{x^3} {8}...$$</p>
<p>Summation notation: $\sum_{n=0}^\infty \frac{x^n} {2^n}$</p>
<p>I know you use the formula $\frac{a} {1-r}$ to find the sum of geometric series but I'm confused about the x</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: prove by induction that $$\sum_{n=0}^m\frac{x^n}{2^n}=-\frac{2^{-m} \left(2^{m+1}-x^{m+1}\right)}{x-2}$$</p>
|
231,887 | <p>I'm learning to do proofs, and I'm a bit stuck on this one.
The question asks to prove for any positive integer $k \ne 0$, $\gcd(k, k+1) = 1$.</p>
<p>First I tried: $\gcd(k,k+1) = 1 = kx + (k+1)y$ : But I couldn't get anywhere.</p>
<p>Then I tried assuming that $\gcd(k,k+1) \ne 1$ , therefore $k$ and $k+1$ are not relatively prime, i.e. they have a common divisor $d$ s.t. $d \mid k$ and $d \mid k+1$ $\implies$ $d \mid 2k + 1$</p>
<p>Actually, it feels obvious that two integers next to each other, $k$ and $k+1$, could not have a common divisor. I don't know, any help would be greatly appreciated.</p>
| Bob Dobbs | 221,315 | <p>For <span class="math-container">$k>0$</span>, consider the polynomial <span class="math-container">$p(x)=x^{k+1}-x^k=x^k(x-1)$</span> whose only roots are <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Let <span class="math-container">$d=\gcd(k,k+1)\geq 1$</span>. Then <span class="math-container">$p(e^{\frac{2\pi i}{d}})=0$</span>. Since, <span class="math-container">$e^{\frac{2\pi i}{d}}\neq 0$</span>, we conclude that <span class="math-container">$e^{\frac{2\pi i}{d}}=1$</span> and <span class="math-container">$d=1$</span>.</p>
<p>For <span class="math-container">$k<0$</span>, it is left to the reader.</p>
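<p>A quick empirical check of the statement itself (not of the argument above):</p>

```python
from math import gcd

# consecutive positive integers are always coprime
assert all(gcd(k, k + 1) == 1 for k in range(1, 100_000))
```
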
|
40,474 | <p>There is a binomial formula:</p>
<p>$$(x+y)^n=\displaystyle\sum_{k=0}^n \binom{n}{k} x^{n-k} y^k$$</p>
<p>When operations are done in $GF(2^m)$ then all positive integers are reduced $\bmod2$, so binomial formula for $n=2^i$ in $GF(2^m)$ is:</p>
<p>$$(x+y)^{2^i}=x^{2^i} + y^{2^i} $$</p>
<p>So now the question. If all binomial coefficients $\binom{n}{k}$ are reduced, then why are exponents like $2^i$ not reduced?</p>
| Luboš Motl | 10,599 | <p>It's because multiplication by a coefficient is periodic in the coefficient with the right period $P$:
$$ (a+P)\cdot b = a\cdot b + P\cdot b \equiv a\cdot b \quad {\rm mod} \quad P$$
because the equivalence modulo $P$ is defined so that it allows me to subtract multiples of $P$ such as $P\cdot b$ above, while powers are not periodic functions of the exponent with the period $P$:
$$ a^{b+P} \neq a^b \quad {\rm mod}\quad P$$
For a simple example, look at powers of $2$ computed modulo $2$:
$$2^0 = 1, \quad 2^1 = 2 = 0, \quad 2^2 = 4 = 0, \dots $$
All the following powers are equal to $0$ modulo $2$ but the initial one, the zeroth power, is not. (Pick more complicated numbers to get the point - to see how complicated the powers may be especially if $P$ is large.) So the results of the exponentiation are not periodic as a function of the exponent, and you are never allowed to simplify the exponents by considering them modulo $P$.</p>
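<p>The same point in code: in the prime field $\mathbb{Z}/p$ (used here for illustration), coefficients may be reduced mod $p$, but exponents of nonzero elements reduce mod $p-1$ (by Fermat's little theorem), never mod $p$. A sketch:</p>

```python
p = 7          # work in Z/p for a small prime p
a = 3

# coefficients ARE periodic mod p:
assert (a + p) % p == a % p

# exponents are NOT periodic mod p:
assert pow(a, 1 + p, p) != pow(a, 1, p)

# for nonzero a, exponents are periodic mod p - 1 (Fermat's little theorem):
assert pow(a, 1 + (p - 1), p) == pow(a, 1, p)
```
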
|
40,474 | <p>there is a binomial formula:</p>
<p>$$(x+y)^n=\displaystyle\sum_{k=0}^n \binom{n}{k} x^{n-k} y^k$$</p>
<p>When operations are done in $GF(2^m)$ then all positive integers are reduced $\bmod2$, so binomial formula for $n=2^i$ in $GF(2^m)$ is:</p>
<p>$$(x+y)^{2^i}=x^{2^i} + y^{2^i} $$</p>
<p>So now the question. If there are reduced all binomial coefficients $\binom{n}{k}$, then why exponents like $2^i$ are not reduced?</p>
| Phira | 9,325 | <p>Regard the simplest non-trivial example to get a better understanding.</p>
<p>For example $\{0,1,x,x+1\}$ with $2=0$ and $x^2=x+1$.</p>
|
2,952,014 | <p>So I was doing some self-study and came across a proposition in one of my chemical engineering course's prescribed textbooks. I can't quite get the proof out. It's to do with a particle moving through a medium such that when it makes contact with either of two plates <span class="math-container">$L$</span> units apart (i.e. one at <span class="math-container">$0$</span> and one at <span class="math-container">$L$</span>), it remains there.</p>
<blockquote>
<p>Consider that the movement of a single particle follows a random walk which can be described by a Markov chain with states <span class="math-container">$[0, L]$</span> where <span class="math-container">$P(X_n = -1) = p_{-1}$</span>,<span class="math-container">$P(X_n = 0) = p_{0}$</span> and <span class="math-container">$P(X_n = 1) = p_{1}$</span> with <span class="math-container">$p_{-1} + p_{0} + p_{1} = 1$</span>. Show that if states <span class="math-container">$0$</span> and <span class="math-container">$L$</span> are completely absorbing, then there does not exist a stationary distribution. <strong>Hint:</strong> Start by considering <span class="math-container">${\pi} = \pi P$</span> </p>
</blockquote>
<p>This makes sense intuitively since we have two recurrent classes <span class="math-container">$\{0\}$</span> and <span class="math-container">$\{L\}$</span> and one transient class <span class="math-container">$\{1, 2, ..., L - 2, L - 1\}$</span>. However, once I try and expand <span class="math-container">${\pi} = \pi P$</span>, I don't know how to proceed next. Ideally I'd like a few more hints rather than an answer. </p>
| amd | 265,466 | <p>Write your equation as <span class="math-container">$\mathbf\pi(P-I)=0$</span>. The first and last rows of <span class="math-container">$P-I$</span> are zero, so <span class="math-container">$(1,0,\dots,0)$</span> and <span class="math-container">$(0,\dots,0,1)$</span> are obvious independent solutions of the equation.</p>
|
1,700 | <p>Is there an algorithm in literature to compute an efficient (pareto optimal) and envy-free cake cutting when there are only $n=2$ players and a mediator?</p>
| TonyK | 767 | <p>Huh? I cut, you choose. Why do we need a Mediator?</p>
|
4,155,718 | <p>I'm given <span class="math-container">$n$</span> points <span class="math-container">$(p_1, p_2, \ldots, p_n)$</span>, lying on the boundary of a polygon and constituting this polygon (not necessarily convex), where those points are given to me in clockwise order, and I want to compute the convex hull of this polygon, i.e. determine the set of points <span class="math-container">$S \subseteq \{p_1, p_2, \ldots, p_n\}$</span> s.t. the points in <span class="math-container">$S$</span> form the convex hull of the polygon. The tricky part is that my runtime constraint is <span class="math-container">$O(n)$</span>, which is why I'm very stuck on this task. One should output the points of the convex hull in counterclockwise order.</p>
| Glärbo | 933,372 | <p>Given an unordered set of points, <a href="https://en.wikipedia.org/wiki/Graham_scan" rel="nofollow noreferrer">Graham scan</a> yields their convex hull in <i>O</i>(<i>N</i> log <i>N</i>) time complexity. The limiting factor is the sort phase; everything else is linear in time, as the Wikipedia article explains.</p>
<p>When <i>N</i> is a positive integer with a known maximum (for example, 2<sup>32</sup>-1 when using C <code>uint32_t</code>, or 2<sup>64</sup>-1 when using C <code>uint64_t</code> type), you can use <a href="https://en.wikipedia.org/wiki/Radix_sort" rel="nofollow noreferrer">radix sort</a> to do the sort phase in linear time, yielding an implementation of Graham scan with linear time complexity, but with a fixed upper limit for <i>N</i>.</p>
<p>If the points are given in a suitable order, other algorithms can be used to achieve linear time complexity.</p>
<p>Quite a few algorithms assume the points given are the vertices of a polygon in order – meaning that each consecutive pair of points given are connected with an edge in the polygon, with the initial and final points also connected with an edge.</p>
<p><a href="https://doi.org/10.1016/0020-0190(87)90086-X" rel="nofollow noreferrer">Melkman's Convex Hull Algorithm</a> is probably the best known of these; the Wikipedia <a href="https://en.wikipedia.org/wiki/Convex_hull_of_a_simple_polygon" rel="nofollow noreferrer">Convex hull of a simple polygon</a> article lists a couple more. These assume that the polygon given is <a href="https://en.wikipedia.org/wiki/Convex_hull_of_a_simple_polygon" rel="nofollow noreferrer">simple</a> (does not intersect itself, and has no holes); some also assume the polygon is not degenerate (a line segment) or that consecutive points (polygon vertices) are not collinear.</p>
<p>I would recommend you examine your data properties, to see how you can get under the <i>O</i>(<i>N</i> log <i>N</i>) time complexity boundary (due to sorting); either by (implicitly) limiting <i>N</i> (so you can use radix sort with linear time complexity), or by leveraging data order (avoiding the need to sort altogether).</p>
<p>Finally, note that <em>time complexity</em> is not a measure of <em>computational efficiency</em>, but of how an algorithm <em>scales</em> as the problem set grows. In particular, radix sort algorithms tend to be "slow" in practice, with several <i>O</i>(<i>N</i> log <i>N</i>) being "faster" up to a very large <i>N</i> (last time I checked, on the order of millions to hundreds of millions, depending on exact implementation – computer cache architecture details being a deciding factor!).</p>
|
2,712,631 | <p>The problem is to use a power series to evaluate the integral to six decimal places. The upper limit of integration is one and the lower limit of integration is zero.</p>
<p>To start the problem I factored $x$ out and focused on $\arctan(3x)$.
I knew that by taking the derivative I could get this equation
in the form $\frac{1}{1-r}$. After I put it in that form I integrated and solved for $c$. I then put $x$ back into the equation and integrated.</p>
<p>Is this an effective method for solving this problem?
I determined that the 19th term would give me an answer accurate to the sixth decimal place; is this correct?</p>
<p><img src="https://i.stack.imgur.com/0BfNf.jpg" alt="Here is the problem worked through in detail"></p>
| geometryfan | 546,282 | <p>You method is good. Although, I think at the beginning you lost some exponents that were supposed to be even; an $x^{2n}$ that became $x^n$. As a consequence some of the coefficients and exponents are not right.</p>
<p>I would have probably computed the primitive $$g(x)=\frac{1}{18}((9x^2 + 1)\arctan(3 x) - 3 x)$$</p>
<p>And then spend all the effort of approximation on the evaluating $g(0.1)$. Fortunately, $g(0)=0$. </p>
<p>The Taylor series of $$\arctan(3x)=\sum_{n=0}^{\infty}(-1)^n3^{2n+1}\frac{x^{2n+1}}{2n+1}$$
is alternating for $x>0$. That means that we even have good control on the error after truncation. The error is bounded by the absolute value of the next term after the truncation evaluated at $0.1$.</p>
<p>If you impose that $(0.3)^{2n+1}/(2n+1)<10^{-7}$, that ensures that at least the error of approximating $\arctan(0.1)$ is going to be small enough.</p>
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order of the new presentation. I mean I am not sure what the elements of the new $G$ look like. (Certainly not of the form $c^id^j$ and $d^kc^l$, otherwise $|G|\leq 5$.)</p>
<p>Is it a good step to reduce the number of generators, or is that not necessary?</p>
| Community | -1 | <p>Consider $D := \langle a, b\rangle$ and $C = \langle c \rangle$. Note that</p>
<p>$$ab = ba \implies D = \langle a \rangle \langle b \rangle$$</p>
<p>so $|D| \leq 4$. Further, the last two relations tell you that $c \in N_{G}(D)$ (the normalizer in $G$ of $D$), so $C \leq N_{G}(D)$. It follows that $DC \leq G$, but $DC$ contains every generator of $G$; therefore, $DC = G$. We then have</p>
<p>$$|G| = |DC| = \frac{|D||C|}{|D \cap C|} \leq \frac{|D||C|}{1} \leq \frac{4 \cdot 3}{1} = 12$$</p>
<p>It just remains to find a group of order $12$ satisfying the given relations - so how many groups of order $12$ do you know?</p>
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order of the new presentation. I mean I am not sure what the elements of the new $G$ look like. (Certainly not of the form $c^id^j$ and $d^kc^l$, otherwise $|G|\leq 5$.)</p>
<p>Is it a good step to reduce the number of generators, or is that not necessary?</p>
| Mikasa | 8,581 | <p>Although my approach is not the way you expected, it is a nice way for your group. This way is called <a href="http://en.wikipedia.org/wiki/Coset_enumeration" rel="nofollow noreferrer">Coset enumeration</a> or <a href="http://en.wikipedia.org/wiki/Todd%E2%80%93Coxeter_algorithm" rel="nofollow noreferrer">Todd-Coxeter Algorithm</a>. You have see $12$ rows completed as follows. In fact, I found $[G:\langle e\rangle]=12$:</p>
<p><img src="https://i.stack.imgur.com/tx1gm.jpg" alt="enter image description here"></p>
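<p>The same coset enumeration can be reproduced in software; SymPy's <code>FpGroup</code> implements a Todd-Coxeter procedure (a sketch; the relators below encode the stated relations):</p>

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b, c = free_group("a, b, c")
G = FpGroup(F, [a**2, b**2, c**3,
                a*b*a**-1*b**-1,            # ab = ba
                c*a*c**-1*b**-1,            # cac^{-1} = b
                c*b*c**-1*(a*b)**-1])       # cbc^{-1} = ab
assert G.order() == 12
```
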
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order of the new presentation. I mean I am not sure what the elements of the new $G$ look like. (Certainly not of the form $c^id^j$ and $d^kc^l$, otherwise $|G|\leq 5$.)</p>
<p>Is it a good step to reduce the number of generators, or is that not necessary?</p>
| AG. | 80,733 | <p>We can construct a directed graph called the Cayley diagram of the group $G := \langle a,b,c | a^2=b^2=c^3=1, ab=ba, ca=bc, cb=abc \rangle$ with respect to the generators $a,b,c$. The vertices of this digraph will be the group elements. The set of arcs will be of of the form $\{(g,gs): g \in G, s \in \{a,b,c\} \}$. After this digraph is constructed completely, each vertex $g \in G$ will have 3 outgoing arcs, labeled with colors $a,b,c$, and 3 incoming arcs with colors $a,b,c$, and the number of vertices will be the order of the group. </p>
<p>To construct this digraph, start with the identity vertex 1. Draw an arc from 1 to $a$, and label the arc as $a$. This is an edge-colored arc, denoted $(1,a,a)$, where the third coordinate refers to the color of the arc $(1,a)$. Add a new arc from vertex $a$ to vertex $a^2$ with color $a$, and notice that the relation $a^2=1$ forces us to identify the vertex $a^2$ and $1$. So we really have a directed cycle of length 2 at this point. Another directed cycle of length 2 arises due to arcs $(1,b,b)$ and $(b,1,b)$. Next we draw a directed cycle of length 3 on vertices $1,c,c^2$. </p>
<p>Next start at vertex $a$ and draw its 3 outgoing arcs, one of which will $(a,b,ab)$. Draw three outgoing arcs from $b$, one of which will be $(b,a,ba)$. Since $ab=ba$, we identify (i.e. merge) these two vertices $ab$ and $ba$. In addition, we start at the existing vertices (say the vertex 1), and <em>enforce</em> each relation at each vertex. Thus, starting at 1, the directed path $ab$ of length 2 and the directed path $ba$ of length 2 must end up at the same vertex. Similarly, $ca$ and $bc$ end up at the same vertex, as do $cb$ and $abc$. </p>
<p>Continue this process until every vertex has three outgoing and three incoming arcs, and until each relation is enforced at each vertex. We identify vertices whenever a relation forces us to do so. The number of vertices will then end up being the order of the group. I constructed this digraph in your example and verified it has order 12. </p>
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that way we may also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| Ross Millikan | 1,827 | <p>When you go from $ax=b$ to $x=\frac ba$ you have multiplied both sides by $\frac 1a$ and it would be good to remark that this can only be done if $a \ne 0$ (though people often forget this). What you are wanting to do under <strong>Funny Way</strong> is to start with $0=0$, then divide both sides by zero (which is not allowed) and get whatever you want on each side. As you say in your introduction, this would allow us to conclude that all things are equal. </p>
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| achille hui | 59,379 | <p>One can look at the issue from the angle of information.</p>
<p>When we multiply $b$ and $c$ by a non-zero number $a$, no information is loss:</p>
<p>$$b = c \to a b = a c\\
b \ne c \to a b \ne a c$$
Since no information is lost, we can reverse the "logic" and cancel $a$ in both side of equation.
In contrast, when we multiply $b$ and $c$ by $0$, one lose the information of "equality":
$$b = c \to 0 b = 0 c\\b \ne c \to 0 b = 0 c$$
This means one can no longer reverse the "logic" and deduce $b = c$ from $0 b = 0 c$.</p>
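<p>A tiny numerical illustration of this information loss (my own sketch in Python):</p>

```python
a, b = 1, 2

# Multiplying by a nonzero constant preserves (in)equality,
# so the step is reversible:
assert (a == b) == (3 * a == 3 * b)

# Multiplying by 0 collapses everything to 0 = 0, so the
# original (in)equality can no longer be recovered:
assert a != b
assert 0 * a == 0 * b
```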
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| Barbara Osofsky | 59,437 | <p>A function actually consists of two things: a domain (in first courses in calculus the domain is assumed to be the natural domain) and a rule for assigning a unique value to each real number in that domain. The domain of $\sin^2(\theta)$ is all of $\mathbb R$, and the domain of $\tan(\theta)$ is the set of reals that are not odd integer multiples of $\pi/2$. Hence this is not really an identity, because the functions on each side of the $=$ have different (natural) domains. It is an identity if you restrict the domains of $\sin$ and $\cos$ to the reals which are not of the form $\frac{(2n+1)\pi}{2}$. </p>
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| Jim | 23,117 | <p>Suppose</p>
<p>$a \neq b$</p>
<p>$a\cdot0 \neq b\cdot0$</p>
<p>But, since $a\cdot0=0$ and $b\cdot0=0$</p>
<p>We get, by substitution </p>
<p>$0\neq0$ </p>
<p>Combine this with your discovery, and I expect that this site will self-desctruct in 3,...2,...1,... (Wait, don't do it, it's just a jo........</p>
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| user217187 | 217,187 | <p>This is an interesting point. I will do this on the fly....</p>
<p>OK. So, false proof.</p>
<p>\begin{align*}
1 & = 2\\
1 \cdot 0 & =2 \cdot 0\\
0 & =0
\end{align*}</p>
<p>Q.E.D.</p>
<p>Um... I can't see anything wrong with this. So $0 = 0$, ok. So $1 \cdot 0 = 2 \cdot 0$... ok. So $1 = 2$... Not ok. What is wrong? The flaw must be in the conclusion, so what did we do from one step to the next? We divided by $0$... Ahhhhh. Ok, since $0/0$ is an indeterminate form (not undefined: undefined means no solution, as in $0 \cdot$ what $= 1$? Indeterminate means infinitely many solutions, as in $0 \cdot$ what $= 0$?), we could "prove" anything by substituting one value for another. See, the argument only runs in one direction, which is why it proves nothing. </p>
|
1,303,577 | <p>I have started to learn about the properties of the <a href="http://en.wikipedia.org/wiki/Quadratic_residue" rel="nofollow">quadratic residues modulo n (link)</a> and, reviewing the list of quadratic residues modulo $n$ lying in $[1,n-1]$, I found the following possible property:</p>
<blockquote>
<p>(1) $\forall\ p \gt 3\in \Bbb P, \ (number\ of\ Quadratic\ Residues\ mod\ kp)=p\ when\ k\in\{2,3\}$</p>
</blockquote>
<p>In other words: (a) if $n$ is $2p$ or $3p$, where $p$ is a prime number greater than $3$, then the total number of the quadratic residues modulo $n$ is exactly the prime number $p$. (b) And every prime number $p$ is the number of quadratic residues modulo $2p$ and $3p$.</p>
<blockquote>
<p>E.g.:</p>
<p>$n=22$, the list of quadratic residues is $\{1,3,4,5,9,11,12,14,15,16,20\}$, the total number is $11 \in \Bbb P$ and $22=11*2$.</p>
<p>$n=33$, the list of quadratic residues is $\{1,3,4,9,12,15,16,22,25,27,31\}$, the total number is $11 \in \Bbb P$ and $33=11*3$.</p>
</blockquote>
<p>I did a quick Python test initially in the interval $[1,10^4]$, no counterexamples found. Here is the code:</p>
<pre><code>def qrmn():
from sympy import is_quad_residue
from gmpy2 import is_prime
def list_qrmn(n):
lqrmn = []
for i in range (1,n):
if is_quad_residue(i,n):
lqrmn.append(i)
return lqrmn
tested1 = 0
tested2 = 0
for n in range (4,10000,1):
lqrmn = list_qrmn(n)
# Test 1
if is_prime(len(lqrmn)):
if n==3*len(lqrmn) or n==2*len(lqrmn):
print("SUCCESS1 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
tested1 = tested1 + 1
# Test 2
if n==3*len(lqrmn) or n==2*len(lqrmn):
if is_prime(len(lqrmn)):
print("SUCCESS2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
tested2 = tested2 + 1
else:
print("ERROR2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
if tested1 == tested2:
        print("\nTEST SUCCESS: iff condition is true")
else:
        print("\nTEST ERROR: iff condition is not true: " + str(tested1) + " " + str(tested2))
qrmn()
</code></pre>
<p>I am sure this is due to a well known property of the quadratic residues modulo $n$, but my knowledge is very basic (self learner) and initially, reviewing online, I can not find a property of the quadratic residues to understand if that possible property is true or not.</p>
<p>Please I would like to share with you the following questions:</p>
<blockquote>
<ol>
<li><p>Is (1) a trivial property due to the definition of the quadratic residue modulo n?</p></li>
<li><p>Is there a counterexample?</p></li>
</ol>
</blockquote>
<p>Thank you!</p>
| fretty | 25,381 | <p>The Chinese Remainder theorem lets us conclude that counting squares mod $mn$ is the same as counting pairs of squares mod $m$ and mod $n$ separately...whenever $m,n$ are coprime.</p>
<p>Note that there are exactly two squares mod $2$ and same mod $3$.</p>
<p>It is also known that modulo an odd prime there are $\frac{p-1}{2} + 1$ squares (including $0 \bmod p$).</p>
<p>So putting this together, when counting squares mod $2p$ or $3p$ and $p>3$ (so is coprime to $2,3$) we should get exactly $2(\frac{p-1}{2}+1) = p+1$ squares.</p>
<p>You seem to be discounting $0 \bmod kp$ as a square, hence why you got exactly $p$ non-zero squares.</p>
|
1,148,760 | <p>$\displaystyle \int x^7\cos x^4 dx$</p>
<p>I tried first letting $x^4 = u$ and then using integration by parts, assigning $f(u) = u^{7/4}$ and $g'(u) = \cos(u)$. After applying parts twice I end up with the same integral on the RHS as the one we are looking for, so I move it to the LHS, combine, and get $\displaystyle \cos x^4 \bigg (\frac{4 u^{11/4}}{11} \bigg)$.</p>
| Claude Leibovici | 82,404 | <p><strong>Hint</strong></p>
<p>$$I=\int x^7 \cos(x^4)\,dx=\frac 14\int (4x^3)\, x^4 \cos(x^4)\,dx$$ So, let $x^4=u$ and then $$I=\frac 14\int u \cos(u) \, du$$ I am sure that you can take from here.</p>
|
934,660 | <p>Prove that for $ n \geq 2$, n has at least one prime factor.</p>
<p>I'm trying to use induction. For n = 2, 2 = 1 x 2. For n > 2, n = n x 1, where 1 is a prime factor. Is this sufficient to prove the result? I feel like I may be mistaken here.</p>
| Adriano | 76,987 | <p>If $n \ge 2$ is prime, then we're done, since $n$ is our desired prime factor of $n$. Otherwise, if $n \ge 2$ is not prime, then $n = ab$ for some $a,b \in \mathbb N$, where $1 < a \leq b < n$. But then since $a \geq 2$, it follows by the induction hypothesis that $a$ has at least one prime factor, say $p$, so that $a = pk$ for some $k \in \mathbb Z$. But then since $n = (pk)b = p\underbrace{(kb)}_{\in ~ \mathbb Z}$, we have that $p$ is also a prime factor of $n$, as desired.</p>
|
3,604,745 | <p>(I Prefer to open new question because those are my homework and i want to understand my way)</p>
<p>In my homework i need to solve the integral: </p>
<p><span class="math-container">$$
\int \frac{e^x}{2e^x + \sqrt{e^x}}dx
$$</span></p>
<p>I tried the substitution method: </p>
<p><span class="math-container">$$
t = e^x \Rightarrow dt = e^x dx
$$</span></p>
<p>Therefore i get: </p>
<p><span class="math-container">$$
\int \frac{dt}{2t + \sqrt{t}}
$$</span></p>
<p>But what now? How can i proceed? Is this the right way? </p>
<p>I prefer hint and not whole answer - those are my homework</p>
<p>Thanks. </p>
| Surb | 154,545 | <p><strong>Hint</strong></p>
<ul>
<li><p>Make the substitution <span class="math-container">$u=\sqrt t$</span> in your last integral.</p></li>
<li><p>Or do at the beginning the substitution <span class="math-container">$t=\sqrt{e^x}$</span> instead of <span class="math-container">$t=e^x$</span>.</p></li>
</ul>
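<p>Carrying out the first suggested substitution: with $u=\sqrt t$ we have $dt = 2u\,du$, so the integral becomes $\int \frac{2u}{2u^2+u}\,du = \int \frac{2}{2u+1}\,du = \ln(2u+1)+C$, i.e. $\ln\!\left(2\sqrt{e^x}+1\right)+C$ in terms of the original variable. A numerical sanity check with SymPy (my own, not part of the hint):</p>

```python
import sympy as sp

x = sp.symbols('x')

integrand = sp.exp(x) / (2 * sp.exp(x) + sp.sqrt(sp.exp(x)))
F = sp.log(2 * sp.sqrt(sp.exp(x)) + 1)  # candidate antiderivative

# F' should equal the integrand; check numerically at a few points.
err = sp.diff(F, x) - integrand
for val in [0.3, 1.0, 2.5]:
    assert abs(err.subs(x, val).evalf()) < 1e-9
```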
|