| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,549,490 | <p>$f(x) = 3x - \frac{1}{x^2}$</p>
<p>I am finding this problem to be very tricky:</p>
<p><a href="https://i.stack.imgur.com/7RjYG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7RjYG.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/jwUHp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jwUHp.jpg" alt="enter image description here"></a></p>
| Ian Miller | 278,461 | <p>There is no benefit in joining the $3$ into the other fraction; it just makes your algebra harder and messier.</p>
<p>$$\lim_{h\to0}\frac{3(x+h)-\frac{1}{(x+h)^2}-3x+\frac{1}{x^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\frac{3h-\frac{1}{(x+h)^2}+\frac{1}{x^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\frac{3h-\frac{x^2}{x^2(x+h)^2}+\frac{(x+h)^2}{x^2(x+h)^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\frac{3h+\frac{(x+h)^2-x^2}{x^2(x+h)^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\frac{3h+\frac{2xh+h^2}{x^2(x+h)^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\frac{3h+\frac{h(2x+h)}{x^2(x+h)^2}}{h}$$</p>
<p>$$=\lim_{h\to0}\left(3+\frac{2x+h}{x^2(x+h)^2}\right)$$</p>
<p>$$=3+\frac{2x}{x^2\cdot x^2}$$</p>
<p>$$=3+\frac{2}{x^3}$$</p>
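<p><em>(Added note, not part of the original answer.)</em> The result can be sanity-checked numerically: a small-$h$ difference quotient of $f$ should agree with $3+2/x^3$. A minimal Python sketch:</p>

```python
def f(x):
    # the original function f(x) = 3x - 1/x^2
    return 3 * x - 1 / x ** 2

def f_prime(x):
    # the derivative obtained above: 3 + 2/x^3
    return 3 + 2 / x ** 3

# compare against a forward difference quotient at a few sample points
h = 1e-7
checks = [abs((f(x + h) - f(x)) / h - f_prime(x)) < 1e-4
          for x in (0.5, 1.0, 2.0, -3.0)]
```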
|
757,917 | <p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p>
<p>But how do you show this? I know of no rules that work with addition inside square roots.</p>
<p>I noticed I could do this:</p>
<p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p>
<p>But I still don't see how I should show this, since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition.</p>
| Git Gud | 55,235 | <p><strong>Hint:</strong> Since they are both positive numbers, they are equal if, and only if, their squares are equal.</p>
|
757,917 | <p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p>
<p>But how do you show this? I know of no rules that work with addition inside square roots.</p>
<p>I noticed I could do this:</p>
<p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p>
<p>But I still don't see how I should show this, since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition.</p>
| Bill Dubuque | 242 | <p>One can easily <em>discover</em> the denesting using my simple <a href="https://math.stackexchange.com/a/664987/242">radical denesting algorithm.</a></p>
<p>$\ w = 5+\sqrt{24}\,$ has norm $\,n = ww' = 5^2-24 = 1.\,$ Subtracting out $\,\sqrt{n}=1\,$ yields $\,4+\sqrt{24}.$ </p>
<p>This has trace $\,t = 8,\,$ so dividing $\,\sqrt{t} = 2\sqrt{2}\,$ out of $\,4+\sqrt{24}=4+2\sqrt{6}\,$ yields</p>
<p>$$ \frac{4+2\sqrt{6}}{2\sqrt{2}}\,=\, \frac{2+\sqrt{6}}{\sqrt{2}\ } \,=\, \sqrt{2}+\sqrt{3}$$</p>
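<p><em>(Added note, not part of the original answer.)</em> A quick floating-point check of the denesting, as a Python sketch:</p>

```python
import math

lhs = math.sqrt(5 + math.sqrt(24))   # the nested radical
rhs = math.sqrt(2) + math.sqrt(3)    # the denested form
diff = abs(lhs - rhs)
```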
|
757,917 | <p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p>
<p>But how do you show this? I know of no rules that work with addition inside square roots.</p>
<p>I noticed I could do this:</p>
<p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p>
<p>But I still don't see how I should show this, since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition.</p>
| Community | -1 | <p>You kind of have to assume that the nested radical can be rewritten as the sum of two surds (or radicals) in the form $\sqrt{a+b\sqrt{c}}=\sqrt{x}+\sqrt{y}$.</p>
<p>So in your question, we have $\sqrt{5+\sqrt{24}}=\sqrt{x}+\sqrt{y}$. Squaring both sides gives you: $$5+\sqrt{24}=x+y+2\sqrt{xy}$$</p>
<p>This can be easily solved by finding two numbers ($x$ and $y$) that sum to $5$ and multiply to $6$ (matching $2\sqrt{xy}=\sqrt{24}$ gives $xy=6$). The numbers $3$ and $2$ work, so $$\sqrt{5+\sqrt{24}}=\sqrt{3}+\sqrt{2}$$</p>
<p>NOTE: You can generalize this and develop a formula.</p>
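<p><em>(Added note, not part of the original answer.)</em> The small system $x+y=5$, $xy=6$ is easy to brute-force; a Python sketch:</p>

```python
# find integer pairs with x + y = 5 and x * y = 6, taking x >= y
solutions = [(x, y)
             for x in range(1, 6)
             for y in range(1, 6)
             if x + y == 5 and x * y == 6 and x >= y]
```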
|
47,143 | <p>I want to create a table of replacement rules. </p>
<pre><code>g[a_, b_] := a -> b
t1 = Table[10 i + j, {i, 5}, {j, 3}]
t2 = Table[ i + j, {i, 5}, {j, 3}]
g[ # & @@@ t1, # & @@@ t2 ]
</code></pre>
<p>The output I want is below:</p>
<pre><code>{{11 -> 2, 12 -> 3, 13 -> 4},
{21 -> 3, 22 -> 4, 23 -> 5},
{31 -> 4, 32 -> 5, 33 -> 6},
{41 -> 5, 42 -> 6, 43 -> 7},
{51 -> 6, 52 -> 7, 53 -> 8}}
</code></pre>
<p>Instead I am getting:</p>
<pre><code> {11, 21, 31, 41, 51} -> {2, 3, 4, 5, 6}
</code></pre>
<p>This shows two concepts I am still trying to wrap my mind around how to accomplish in <em>Mathematica</em>:</p>
<p>In a 2-D list, how do you select each row and then perform an operation on each element of that row (in this case, take the element and make a replacement rule a -> b), and then iterate through every row?</p>
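<p><em>(Added note, not part of the original question.)</em> The elementwise pairing being asked for — walk both 2-D lists in lockstep and combine corresponding entries — can be sketched in Python terms (the Mathematica-specific syntax is a separate matter):</p>

```python
# build t1 and t2 analogously to the Table expressions above
t1 = [[10 * i + j for j in range(1, 4)] for i in range(1, 6)]
t2 = [[i + j for j in range(1, 4)] for i in range(1, 6)]

# pair corresponding elements row by row ("a -> b" becomes a tuple)
rules = [[(a, b) for a, b in zip(row1, row2)]
         for row1, row2 in zip(t1, t2)]
```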
| Albert Retey | 169 | <p>There are other approaches to achieve what you have described. As I guess this is only a simplification of what you really need, here is how you would do it with <code>WhenEvent</code>:</p>
<pre><code>equation = x'[t] + (x[t] - λ[t]) == 0;
sol = NDSolveValue[{equation, x[0] == 0, λ[0] == 1,
WhenEvent[x'[t] == 0.25, λ[t] -> x[t]]}, x, {t, 0, 5},
DiscreteVariables -> {λ}]
</code></pre>
<p>The trick is to make <code>λ</code> a discrete dependent variable for the equation. I think this is not straightforward to find in the documentation; I remember struggling with it myself when I first needed it. A discrete dependent variable needs an initial value but no differential equation, and it can be changed at any event just like the other dependent variables (and only there). It also becomes a function of time and can appear in the list of variables to solve for at position 2 of the call to <code>NDSolve</code>. Once one knows how it works, all of that seems very consistent. The documentation for <code>DiscreteVariables</code> has some examples which are very close to your problem...</p>
<p><strong>EDIT 2023-03-01</strong></p>
<p>As @xzczd mentioned, this stopped working in newer versions. It can be made to work either by adding the option <code>SolveDelayed->True</code> or, as that seems to be deprecated, by using its newer replacement: <code>Method -> {"EquationSimplification" -> "Residual"}</code></p>
|
3,684,331 | <p>I am working on a problem from my qualifying exam:</p>
<p>"Let <span class="math-container">$T:V\to V$</span> be a bounded linear map where <span class="math-container">$V$</span> is a Banach space. Assume for each <span class="math-container">$v\in V$</span>, there exists <span class="math-container">$n$</span> s.t. <span class="math-container">$T^n(v)=0$</span>. Prove that <span class="math-container">$T^n=0$</span> for some <span class="math-container">$n$</span>."</p>
<p>My impression is that it looks like an algebraic problem, but I know nothing about the structure of <span class="math-container">$V$</span> (whether it is finitely generated or anything like that), so that approach is off.</p>
<p>The hypothesis seems to be set up for the Uniform Boundedness Principle. Indeed, for each <span class="math-container">$v$</span>, <span class="math-container">$\sup_n \{||T^n(v)|| \}$</span> is finite, so by the principle, <span class="math-container">$\sup_n ||T^n||$</span> is finite. </p>
<p>I'm stuck here. I don't think there is a relation between <span class="math-container">$||T^n||$</span> and <span class="math-container">$||T||$</span>; even if there is, I cannot shake the feeling that the sequence <span class="math-container">$||T^n||$</span> may be eventually constant. I am hoping <span class="math-container">$||T^n||=0$</span> eventually, but I have failed to prove it.</p>
| robjohn | 13,854 | <p>The product formula for sine says
<span class="math-container">$$
\frac{\sin(x)}{x}=\prod_{k=1}^\infty\left(1-\frac{x^2}{k^2\pi^2}\right)\tag1
$$</span>
Thus, for <span class="math-container">$0\lt x\le\pi$</span>
<span class="math-container">$$
\begin{align}
\frac{\sin(x)}x
&\le1-\frac{x^2}{\pi^2}\tag2\\[3pt]
&\le1-\frac{x^2}{(x+\pi)^2}\tag3\\
x-\sin(x)
&\ge\frac{x^3}{(x+\pi)^2}\tag4
\end{align}
$$</span>
Explanation:<br>
<span class="math-container">$(2)$</span>: follows from <span class="math-container">$(1)$</span><br>
<span class="math-container">$(3)$</span>: <span class="math-container">$x\gt0$</span><br>
<span class="math-container">$(4)$</span>: subtract from <span class="math-container">$1$</span> (which reverses the inequality)<br>
<span class="math-container">$\phantom{\text{(4):}}$</span> and multiply by <span class="math-container">$x$</span></p>
<hr>
<p>For <span class="math-container">$x\gt\pi$</span>,
<span class="math-container">$$
\begin{align}
(x-\sin(x))(x+\pi)^2
&\ge(x-1)(x+\pi)^2\tag5\\[3pt]
&=x^3+(2\pi-1)x^2+\left(\pi^2-2\pi\right)\!x-\pi^3\tag6\\[3pt]
&\ge x^3\tag7\\
x-\sin(x)
&\ge\frac{x^3}{(x+\pi)^2}\tag8
\end{align}
$$</span>
Explanation:<br>
<span class="math-container">$(5)$</span>: <span class="math-container">$\sin(x)\lt1$</span><br>
<span class="math-container">$(6)$</span>: expand product<br>
<span class="math-container">$(7)$</span>: plug in <span class="math-container">$x\gt\pi$</span><br>
<span class="math-container">$(8)$</span>: divide by <span class="math-container">$(x+\pi)^2$</span></p>
<hr>
<p>Inequalities <span class="math-container">$(4)$</span> and <span class="math-container">$(8)$</span> together show that for <span class="math-container">$x\gt0$</span>,
<span class="math-container">$$
x-\sin(x)\ge\frac{x^3}{(x+\pi)^2}\tag9
$$</span></p>
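<p><em>(Added note, not part of the original answer.)</em> Inequality <span class="math-container">$(9)$</span> can be spot-checked on a grid of positive <span class="math-container">$x$</span> values; a Python sketch:</p>

```python
import math

# check x - sin(x) >= x^3/(x + pi)^2 on a grid in (0, 20],
# with a tiny slack for floating-point rounding
ok = all(
    x - math.sin(x) >= x ** 3 / (x + math.pi) ** 2 - 1e-12
    for x in (k * 0.01 for k in range(1, 2001))
)
```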
|
268,635 | <p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability of the center of the circle be contained within the triangle?</p>
| nickdon2006 | 446,107 | <p>Assume two points are drawn at random; call the first $A$ and the second $B$. Connecting each point to the center (and extending the lines across the circle), we get two lines and four "quarters" (their areas are not all the same). Only when the third point falls into one particular quarter does the triangle formed contain the center. </p>
<p>Now apply symmetry: the points $A$ and $B$ are completely random, so the sizes of the quarters are completely random, and the events that the third point falls into any given quarter are symmetric. </p>
<p>Thus the probability is 1/4. This is a very intuitive explanation. </p>
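<p><em>(Added note, not part of the original answer.)</em> The $1/4$ answer is easy to confirm by simulation, using the fact that the triangle contains the centre exactly when no arc between consecutive points exceeds a semicircle. A Python sketch:</p>

```python
import math
import random

random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    a = sorted(random.uniform(0, 2 * math.pi) for _ in range(3))
    # arc gaps between consecutive points around the circle
    gaps = (a[1] - a[0], a[2] - a[1], 2 * math.pi - (a[2] - a[0]))
    # the centre is inside the triangle iff every gap is below pi
    if max(gaps) < math.pi:
        hits += 1
estimate = hits / trials
```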
|
268,635 | <p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability of the center of the circle be contained within the triangle?</p>
| orangeskid | 168,051 | <p>Fix a point $A$. The angular coordinates, measured from $A$, of the other two points lie in a square (say $[-\pi,\pi]^2$). In the picture below, the green region is the desired one. It has area $1/4$ the area of the square. So the probability is $1/4$.</p>
<p><a href="https://i.stack.imgur.com/B45j3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B45j3.png" alt="enter image description here"></a></p>
|
1,761,228 | <p>I have a group of order 35 and I want to know if it contains elements of order 7 and 5. I know that it does and there is a proof that is much longer, but I wanted to know if the following worked to show that it does contain elements of order 5 and 7. </p>
<p><strong>My Approach:</strong> Let $|G| = 35$. We know that any group of order 35 is cyclic so $G$ is isomorphic to $C_7$ x $C_5$ and $C_7$ contains an element of order 7 and $C_5$ contains an element of order 5. So, there are elements of order 5 and order 7 in $G$. </p>
| Dietrich Burde | 83,966 | <p>It follows by <a href="https://en.wikipedia.org/wiki/Cauchy's_theorem_%28group_theory%29" rel="nofollow noreferrer">Cauchy's theorem</a>, with $p=5$ and $p=7$. The proof of Cauchy's theorem is elementary. Furthermore we could use the result that all groups of order $pq$ with $p<q$ prime and $p\nmid q-1$ are cyclic - see <a href="https://math.stackexchange.com/questions/67129/groups-of-order-pq-without-using-sylow-theorems">here</a>.</p>
|
1,761,228 | <p>I have a group of order 35 and I want to know if it contains elements of order 7 and 5. I know that it does and there is a proof that is much longer, but I wanted to know if the following worked to show that it does contain elements of order 5 and 7. </p>
<p><strong>My Approach:</strong> Let $|G| = 35$. We know that any group of order 35 is cyclic so $G$ is isomorphic to $C_7$ x $C_5$ and $C_7$ contains an element of order 7 and $C_5$ contains an element of order 5. So, there are elements of order 5 and order 7 in $G$. </p>
| Community | -1 | <p>You can also show this using an elementary counting argument. First, note by Lagrange's theorem that every element must have order <span class="math-container">$1$</span>, <span class="math-container">$5$</span>, <span class="math-container">$7$</span>, or <span class="math-container">$35$</span>.</p>
<p>If there is an element of order <span class="math-container">$35$</span> then the group is cyclic, so there is some <span class="math-container">$g$</span> with order <span class="math-container">$35$</span>. Then <span class="math-container">$g^5$</span> has order <span class="math-container">$7$</span>, and <span class="math-container">$g^7$</span> has order <span class="math-container">$5$</span>.</p>
<p>Now suppose there is no element of order <span class="math-container">$35$</span>. Only the identity has order <span class="math-container">$1$</span>, so every non-identity element must have order <span class="math-container">$5$</span> or order <span class="math-container">$7$</span>.</p>
<p>If there is no element with order <span class="math-container">$7$</span>, then every non-identity element has order <span class="math-container">$5$</span>. Therefore <span class="math-container">$G$</span> is the union of <span class="math-container">$n$</span> subgroups of order <span class="math-container">$5$</span>. Since <span class="math-container">$5$</span> is prime, each pair of subgroups intersects trivially, which means that we must have <span class="math-container">$|G| = 35 = 4n+1$</span>, but there is no integer <span class="math-container">$n$</span> satisfying this equation.</p>
<p>Similarly, if there is no element with order <span class="math-container">$5$</span>, then every non-identity element has order <span class="math-container">$7$</span>. Then <span class="math-container">$G$</span> is the union of <span class="math-container">$n$</span> subgroups of order <span class="math-container">$7$</span>. Since <span class="math-container">$7$</span> is prime, again the subgroups intersect trivially, so <span class="math-container">$|G| = 35 = 6n+1$</span>. Again, there is no integer <span class="math-container">$n$</span> satisfying this equation.</p>
<p>We conclude that <span class="math-container">$G$</span> must contain at least one element of order <span class="math-container">$5$</span> and at least one element of order <span class="math-container">$7$</span>.</p>
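<p><em>(Added note, not part of the original answer.)</em> Both the conclusion and the two counting equations can be checked computationally in the cyclic group <span class="math-container">$\mathbb{Z}/35$</span>; a Python sketch:</p>

```python
# orders of the non-identity elements of Z/35 under addition mod 35
def additive_order(g, n=35):
    k, x = 1, g % n
    while x != 0:
        x = (x + g) % n
        k += 1
    return k

orders = {additive_order(g) for g in range(1, 35)}

# the equations 35 = 4n + 1 and 35 = 6n + 1 have no positive
# integer solutions, as claimed in the answer
no_4n = all(4 * n + 1 != 35 for n in range(1, 35))
no_6n = all(6 * n + 1 != 35 for n in range(1, 35))
```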
|
1,176,381 | <p>I have many sets containing three values, like $\{1, -2, 5\}$.
I want to write, in mathematical form, a filter for the sets in which one value has a different sign from the others (and, for sure, none of them should be zero).</p>
<p>I am not sure about the tag (please correct it if it is not correct). </p>
| martini | 15,379 | <p>Note that you want to filter the sets where not all values have the same sign, that is there exists values of both signs, and zero does not occur. If now $\mathcal A$ denotes your collection of sets, then the filtered collection, which contains only the sets that fulfill your condition is
$$ \mathcal F = \{A \in \mathcal A \mid \forall a \in A: a\ne 0 \land \exists a \in A : a < 0 \land \exists a \in A : a > 0 \} $$</p>
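<p><em>(Added note, not part of the original answer.)</em> The set-builder condition above translates directly into code; a Python sketch using some made-up example sets alongside the one from the question:</p>

```python
# hypothetical sample collection; {1, -2, 5} is from the question
collection = [{1, -2, 5}, {1, 2, 3}, {-1, -4, -9}, {0, -2, 5}, {-7, 3, 8}]

filtered = [
    A for A in collection
    if all(a != 0 for a in A)      # no zero occurs
    and any(a < 0 for a in A)      # a negative value exists
    and any(a > 0 for a in A)      # a positive value exists
]
```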
|
452,292 | <p>Let $E$ be a Lebesgue measurable set in $\mathbb{R}$. Prove that
$$\lim_{x\rightarrow 0} m(E\cap (E+x))=m(E).$$ </p>
| Alex Becker | 8,173 | <p>Recall that a Lebesgue measurable set $E$ can be approximated arbitrarily well by a finite union of disjoint closed intervals $S=I_1\cup\ldots\cup I_n$, i.e. $\epsilon=m(E\Delta S)$ can be made arbitrarily small. Thus we have that
$$\begin{align}
|m(E\cap (E+x))-m(S\cap (S+x))|&\leq m((E\cap (E+x))\Delta(S\cap (S+x)))\\
&\leq m(E\Delta S)+m((E+x)\Delta(S+x))\\
&\leq 2m(E\Delta S) = 2\epsilon
\end{align}$$
can be made arbitrarily small, so it suffices to show that $\lim\limits_{x\to 0} m(S\cap (S+x))=m(S)$.
Note that for sufficiently small $x$, the intervals $I_i$ and $I_j$ for $i\ne j$ are separated by more than $x$, so $I_i\cap (I_j+x)=\emptyset$. Thus for sufficiently small $x$ we have
$$S\cap (S+x)=(I_1\cap (I_1+x))\cup \cdots \cup (I_n\cap (I_n+x))$$
and each set in the union is disjoint. Observing that the desired result clearly holds for intervals, we have
$$\begin{align}
\lim\limits_{x\to 0}m(S\cap (S+x))&=\lim\limits_{x\to 0}\sum\limits_{i=1}^nm(I_i\cap (I_i+x))\\
&= \sum\limits_{i=1}^n\lim\limits_{x\to 0} m(I_i\cap (I_i+x))\\
&= \sum\limits_{i=1}^nm(I_i)=m(S).
\end{align}$$ </p>
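<p><em>(Added note, not part of the original answer.)</em> For a finite union of intervals the quantity $m(S\cap(S+x))$ can be computed directly; a Python sketch with the example $S=[0,1]\cup[2,3]$, where $m(S)=2$:</p>

```python
def overlap(i, j):
    # length of the intersection of two closed intervals
    return max(0.0, min(i[1], j[1]) - max(i[0], j[0]))

E = [(0.0, 1.0), (2.0, 3.0)]  # m(E) = 2

def m_shift(x):
    # m(E ∩ (E + x)); summing pairwise overlaps is valid here
    # because for |x| < 1 the overlapping pieces are disjoint
    shifted = [(a + x, b + x) for (a, b) in E]
    return sum(overlap(i, j) for i in E for j in shifted)
```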
|
1,906,332 | <p>I know every complex differentiable function is continuous. I would like to know if the converse is true. If not, could someone give me some counterexamples?</p>
<p><strong>Remark:</strong> I know this is not true for the real functions (e.g. $f(x)=|x|$ is a continuous function in $\mathbb R$ which is not differentiable at the origin).</p>
| Jean Marie | 305,862 | <p>Another classical family of $\mathbb{C}$-continuous but not $\mathbb{C}$-differentiable functions: functions whose expression is analytic but involves a conjugation: </p>
<p>$f$ defined by $f(z):=\bar z \sin(z)+\cos(\bar z)$ is one of these.</p>
|
446,796 | <p>I am an undergraduate mathematics student, and I'm struggling with number theory ... I have this gcd problem and do not know how to do it, and I have not yet studied congruences, Diophantine equations, or other more advanced topics ... I'm used to divisibility and some properties and/or theorems about the gcd ... Help me, please ...</p>
<p>Question:
a) Show that if</p>
<p>$(a, b) = 1 \Longrightarrow (a \cdot c, b) = (c, b)$.</p>
| Tomas | 83,498 | <p><strong>Hint 1:</strong> It is sufficient to show:
$$d\mid c \wedge d\mid b \Leftrightarrow d\mid ac \wedge d\mid b$$
for any $d\in\mathbb Z$ (<em>why?</em>)</p>
<p><strong>Hint 2:</strong> For the "$\Leftarrow$"-implication, use that $d\mid ac$ and $d\mid bc$, so $d$ divides their gcd.</p>
|
446,796 | <p>I am an undergraduate mathematics student, and I'm struggling with number theory ... I have this gcd problem and do not know how to do it, and I have not yet studied congruences, Diophantine equations, or other more advanced topics ... I'm used to divisibility and some properties and/or theorems about the gcd ... Help me, please ...</p>
<p>Question:
a) Show that if</p>
<p>$(a, b) = 1 \Longrightarrow (a \cdot c, b) = (c, b)$.</p>
| felasfa | 55,243 | <p>We use the following theorem from Number Theory.</p>
<p>$$
g=(a,b) \Longleftrightarrow ax+by=g
$$ for some
integers $x$ and $y$ (the "$\Rightarrow$" direction is Bézout's identity; for the "$\Leftarrow$" direction, $g$ must additionally be a positive common divisor of $a$ and $b$).</p>
<p>Now consider $(a,b)=1$ then you can write
$$
ax+by=1\quad (1)
$$
Let $g=(a\cdot c,b)$ then
invoking the above theorem again
$$
ac(x_{1})+b(y_{1})=g\quad (2)
$$
for some integers $x_{1}$ and $y_{1}$.</p>
<p>Now if we show the following
$$
c(ax_{2})+b(y_{2})=g
$$
we are done. To do that, look at the hint below.</p>
<p><strong>Hint:</strong> Consider the two equations above.
Multiply equation $(1)$ by $g$ and do some algebra. </p>
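<p><em>(Added note, not part of the original answer.)</em> The identity $(ac,b)=(c,b)$ for $(a,b)=1$ can be stress-tested on random triples; a Python sketch:</p>

```python
import math
import random

random.seed(1)
ok = True
for _ in range(1000):
    b = random.randint(1, 500)
    c = random.randint(1, 500)
    a = random.randint(1, 500)
    while math.gcd(a, b) != 1:      # enforce (a, b) = 1
        a = random.randint(1, 500)
    if math.gcd(a * c, b) != math.gcd(c, b):
        ok = False
```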
|
536,068 | <p>If $X$ and $Y$ are iid random variables with distribution $F(x)=e^{-e^{-x}}$ and we let $Z=X-Y$, find the distribution function $F_Z(x)$. I get $F_Z(x)=\frac{e^x}{1+e^x}$, but that doesn't match the answer that the professor gave us. Is what I have correct? I used the standard approach, where I integrate the joint density (just the product of the two densities) over a region. Nothing too fancy; it's just that I think my professor got it wrong. </p>
| Stefan Hansen | 25,632 | <p>$$
\begin{align}
P(Z\leq z)&=P(X\leq z+Y)=\int_{-\infty}^\infty P(X\leq z+y) f(y)\,\mathrm dy\\
&=\int_{-\infty}^\infty\exp(-e^{-(z+y)})\exp(-y-e^{-y})\,\mathrm dy \\
&=\int_{-\infty}^\infty \exp(-y-e^{-y}(1+e^{-z}))\,\mathrm dy\\
&=\left[\frac{1}{1+e^{-z}}\exp(-(1+e^{-z})e^{-y})\right]_{-\infty}^\infty\\
&=\frac{1}{1+e^{-z}}.
\end{align}
$$</p>
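<p><em>(Added note, not part of the original answer.)</em> The closed form $\frac{1}{1+e^{-z}}$ (a logistic distribution, equal to the asker's $\frac{e^z}{1+e^z}$) agrees with simulation: variates with CDF $e^{-e^{-x}}$ can be drawn by inverse-transform sampling. A Python sketch:</p>

```python
import bisect
import math
import random

random.seed(42)

def gumbel():
    # inverse-CDF sampling for F(x) = exp(-exp(-x));
    # u is kept strictly inside (0, 1) to avoid log(0)
    u = (random.getrandbits(53) + 1) / (2 ** 53 + 2)
    return -math.log(-math.log(u))

n = 100_000
zs = sorted(gumbel() - gumbel() for _ in range(n))

def ecdf(z):
    # empirical CDF of the simulated differences
    return bisect.bisect_right(zs, z) / n

errs = [abs(ecdf(z) - 1 / (1 + math.exp(-z))) for z in (-1.0, 0.0, 1.5)]
```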
|
1,897,771 | <p>Let $(a, b)$ be the open interval $\left\{z\in\mathbb{R} : a < z < b\right\}$. </p>
<p>Write the theorem "If $x,y\in(a,b)$ then $|x − y| < b − a$" in logic form, and then prove the theorem.</p>
| Planche | 330,526 | <p>Let $x,y \in (a,b)$.
By definition,
$$ a<x<b , a<y<b$$
so $$ -b< -x<-a, a<y<b.$$
$$-b+a<y-x<b-a \mbox{ and similarly } -b+a<x-y<b-a$$
so $$\left | x-y \right| < b-a.$$</p>
|
557,468 | <p>Let $f$ be a continuous map from $[0,1]$ to $[0,1].$ Show that there exists $x$ with $f(x)=x. $</p>
<p>I have $f$ being a continuous map from $[0,1]$ to $[0,1]$, thus $f: [0,1]\to [0,1]$. Then I know from the intermediate value theorem that there exists an $x$ with $f(x)=x$, but I don't know how to prove it formally. </p>
<p>Is there another way of proving this besides using $g(x) = f(x) - x$? </p>
| Deven Ware | 14,334 | <p>Yeah it is by the intermediate value theorem. </p>
<p>Consider the function $g(x) = f(x) - x$. </p>
<p>What can you say about $g(0)$? $g(1)$? Now apply the IVT.</p>
<p><strong>Edit:</strong> If you want to do it without $g$ or the IVT explicitly you can use the proof idea of IVT and say: </p>
<p>If not, then the sets</p>
<p>$\{x: f(x) < x \}$ and $\{x: f(x) > x \}$ </p>
<p>are open, non-empty (since $f(0) > 0$ and $f(1) < 1$), and cover $[0,1]$, which contradicts the connectedness of $[0,1]$. </p>
|
3,956,392 | <p>So the question is as follows:</p>
<blockquote>
<p>An urn contains m red balls and n blue balls. Two balls are drawn uniformly at random
from the urn, without replacement.</p>
</blockquote>
<blockquote>
<p>(a) What is the probability that the first ball drawn is red?</p>
</blockquote>
<blockquote>
<p>(b) What is the probability that the second ball drawn is red?</p>
</blockquote>
<p>The answer to (a) quite clearly works out to be <span class="math-container">$\frac{m}{(m+n)}$</span>, but the answer to (b) turns out to be the same, and my tutor said this is intuitive by a symmetry argument.</p>
<p>i.e. that <span class="math-container">$P(A_1)$</span> = <span class="math-container">$P(A_2)$</span> where <span class="math-container">$A_i$</span> is the event that a red ball is drawn on the ith turn. However I am struggling to see how this is evident, can anyone explain this?</p>
| true blue anil | 22,388 | <p>Part <strong>(b)</strong> doesn't give any information about the first ball, it is just asking for the probability that the second ball in the line is red.</p>
<p>Now red balls (or those of any other color!) <strong>don't have any preference for positions in the line</strong>, hence if you randomly pick up <strong>any</strong> ball from the line, its probability of being red will be the same.</p>
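<p><em>(Added note, not part of the original answer.)</em> The symmetry claim is easy to confirm by simulation, here with the illustrative values <span class="math-container">$m=3$</span>, <span class="math-container">$n=5$</span>. A Python sketch:</p>

```python
import random

random.seed(7)
m, n = 3, 5            # 3 red, 5 blue (illustrative values)
trials = 200_000
second_red = 0
for _ in range(trials):
    urn = ['R'] * m + ['B'] * n
    random.shuffle(urn)
    if urn[1] == 'R':  # look only at the second position
        second_red += 1
estimate = second_red / trials
```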
|
83,764 | <p>I have come across an interesting property of a dynamical system being transformed by a map, but I haven't been able to figure out <em>why</em> this is happening (for quite some time now, actually). Any help is greatly appreciated. Here goes, then:</p>
<p>Let $M$ be an $n$-dimensional manifold and $\dot x=F(x)u_1, F\in \mathbb{R}^{n\times m}, x \in \mathbb{R}^{n}, u_1 \in \mathbb{R}^{m}$ be a control system evolving on $M$ ($F$ is the system matrix, i.e. the state transition function, and $u_1$ is the input of the system; for all practical purposes $u_1$ is an $m$-vector from an input space $\mathbb{R}^{m}$). Now let $x=\Psi (y)$ be a coordinate change on $M$ and $u_2=M(y)u_1$ a transformation of the input $u_1$ of the first system. By applying these maps to the system, you get the new equations $\dot y=F(y)u_2$. As you may notice, $F$ is <em>the same</em> in both systems. The problem is <em>why</em> this is happening, i.e. for what systems and transformations does this property hold?</p>
<h2>A little more elaboration</h2>
<p>It is useful to investigate the maps more closely. In the general case one has</p>
<p>$\dot x=D\Psi \dot y$<br>
$\dot x= F(x)u_1$</p>
<p>thus<br>
$\dot y=D\Psi ^{-1} F(x)u_1$, (1)</p>
<p>where $D\Psi$ is the Jacobian matrix of $\Psi$. In our case it actually turns out that: </p>
<p>$\dot y=F(y)M(y)u_1$. (2)</p>
<p>You can then consider that $u_2=M(y)u_1$ and get the final system,</p>
<p>$\dot y=F(y)u_2$,</p>
<p>that is, <em>the same</em> system.
By (1),(2) you get,</p>
<p>$D\Psi ^{-1} F(x)u_1=F(y)M(y)u_1 \Rightarrow (D\Psi ^{-1} F(x)-F(y)M(y))u_1=0$. </p>
<p>Since this holds for every $u_1$, you have the condition,</p>
<p>$F(\Psi (y))=D\Psi F(y)M(y)$</p>
<p>So, what does this condition imply? What systems $F$ and maps $\Psi$ satisfy this property (of system invariance)? I should note that $F$ is nonlinear; a case study where this actually happens is the kinematic model of a unicycle robot, i.e. <a href="http://planning.cs.uiuc.edu/node660.html" rel="noreferrer">this</a>. Any ideas?</p>
| John B | 74,138 | <p>Let me reply taking M to be the identity (indeed M is somewhat cosmetic to the discussion).</p>
<p>The identity $F\circ Ψ=DΨ\circ F$ is what one considers for example in the Grobman-Hartman theorem, passing from a dynamics to its linearization say at a fixed point. The possibilities for F are endless; $F$ would then be a topological conjugacy, perhaps locally, although maybe not very regular, more precisely at most Hölder continuous in general. Moreover, $F$ need not satisfy any invariance properties, which if I understand correctly is your main concern.</p>
|
1,804,360 | <p>The following is a classically valid deduction for any propositions <span class="math-container">$A,B,C$</span>.
<span class="math-container">$\def\imp{\rightarrow}$</span></p>
<blockquote>
<p><span class="math-container">$A \imp B \lor C \vdash ( A \imp B ) \lor ( A \imp C )$</span>.</p>
</blockquote>
<p>But I'm quite sure it isn't intuitionistically valid, although I don't know how to prove it, which is my first question.</p>
<p>If my conjecture is true, my next question is what happens if we add this rule to intuitionistic logic. I do not think we will get classical logic. Is my guess right?</p>
<p>[Edit: The user who came to close and downvote this 4 years after I asked this presumably did not see my comment: "The BHK interpretation suggests to me that it isn't intuitionistically valid, but that's just my intuition...". If you are not familiar with Kripke frames (and I was not at that time), good luck trying to figure out what to try!]</p>
| user21820 | 21,820 | <p>Here is an answer that builds on the intuition I had 4 years ago when asking this question. The idea is to use the <a href="https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov_interpretation" rel="nofollow noreferrer">BHK interpretation</a> where a witness of <span class="math-container">$P→Q$</span> is a program that given any witness of <span class="math-container">$P$</span> as input will produce a witness of <span class="math-container">$Q$</span> as output. Intuitionistic logic proves only sentences that have such a witness.</p>
<p>Then to answer the first question, it suffices to find <span class="math-container">$A,B,C$</span> such that we can computably convert any given witness of <span class="math-container">$A$</span> into a witness of <span class="math-container">$B∨C$</span>, but such that we cannot construct a witness for <span class="math-container">$A→B$</span> nor a witness for <span class="math-container">$A→C$</span>. To do so, consider <span class="math-container">$A :≡ X∨Y$</span> and <span class="math-container">$B :≡ X$</span> and <span class="math-container">$C :≡ Y$</span> where <span class="math-container">$X,Y$</span> are distinct propositional variables. Then clearly <span class="math-container">$A→B∨C$</span> has a witness, and hence is intuitionistically provable. However, <span class="math-container">$A→B ≡ X∨Y→X$</span> has no witness, since a witness of <span class="math-container">$X∨Y$</span> may be a witness of <span class="math-container">$Y$</span> rather than <span class="math-container">$X$</span>. Similarly, <span class="math-container">$A→C$</span> has no witness. Thus there is no witness of <span class="math-container">$(A→B)∨(A→C)$</span>, and hence it is intuitionistically unprovable. Therefore the rule cannot be intuitionistically valid.</p>
<p>I do not see an obvious way to address the second question using the BHK interpretation, so for that please refer to Hanno's answer using Kripke frames. =)</p>
|
1,804,360 | <p>The following is a classically valid deduction for any propositions <span class="math-container">$A,B,C$</span>.
<span class="math-container">$\def\imp{\rightarrow}$</span></p>
<blockquote>
<p><span class="math-container">$A \imp B \lor C \vdash ( A \imp B ) \lor ( A \imp C )$</span>.</p>
</blockquote>
<p>But I'm quite sure it isn't intuitionistically valid, although I don't know how to prove it, which is my first question.</p>
<p>If my conjecture is true, my next question is what happens if we add this rule to intuitionistic logic. I do not think we will get classical logic. Is my guess right?</p>
<p>[Edit: The user who came to close and downvote this 4 years after I asked this presumably did not see my comment: "The BHK interpretation suggests to me that it isn't intuitionistically valid, but that's just my intuition...". If you are not familiar with Kripke frames (and I was not at that time), good luck trying to figure out what to try!]</p>
| MJD | 25,554 | <p>It's interesting to me that different people's intuitions were so different on this. Different models are helpful with different examples, and nobody in the thread mentioned the connection between intuitionistic logic and programming language semantics, which for this example I found very helpful.</p>
<p>In the programming language model, <span class="math-container">$$(A\to(B\lor C))\to((A\to B)\lor(A\to C))$$</span> is the type of a function <span class="math-container">$f$</span> which is given a value <span class="math-container">$in$</span> of type <span class="math-container">$$A\to(B\lor C)$$</span> and which transforms it into a value <span class="math-container">$out$</span> of type <span class="math-container">$$(A\to B)\lor(A\to C).$$</span> How might we implement such a function <span class="math-container">$f$</span>?</p>
<p>An experienced computer programmer will think something like this:</p>
<blockquote>
<p>I would need to produce a function that turns <span class="math-container">$A$</span> into <span class="math-container">$B$</span>. I can't do that because all I have is a function that turns <span class="math-container">$A$</span> into <span class="math-container">$B\lor C$</span> and it might not give me the <span class="math-container">$B$</span> I need.</p>
</blockquote>
<p>That's the intuition. The intuition is a shortcut for the following thought process.</p>
<p>In code, our function <span class="math-container">$f$</span> will look like this:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = ??? -- `out` should have type (A → B) ∨ (A → C)
</code></pre>
<p><span class="math-container">$\def\lt{\mathtt{Left}\ }\def\rt{\mathtt{Right}\ }$</span>
A value of type <span class="math-container">$(A\to B)\lor(A\to C)$</span> must have the form <span class="math-container">$\lt out_L$</span> where <span class="math-container">$out_L$</span> has type <span class="math-container">$A\to B$</span>, or <span class="math-container">$\rt out_R$</span> where <span class="math-container">$out_R$</span> has type <span class="math-container">$A\to C$</span>. Those are the only two choices. Let's try the first one:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Left outL -- `outL` should have type A → B
</code></pre>
<p>Since <code>outL</code> has type <span class="math-container">$A\to B$</span>, it is a function that takes a value of type <span class="math-container">$A$</span> and turns it into a value of type <span class="math-container">$B$</span>:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Left outL
outL a = ??? -- we want to produce a value of type B
</code></pre>
<p>Where is <span class="math-container">$out_L$</span> going to get a value of type <span class="math-container">$B$</span>? The only tools it has available are <span class="math-container">$in$</span> (of type <span class="math-container">$A\to(B\lor C)$</span>) and <span class="math-container">$a$</span> (of type <span class="math-container">$A$</span>) and there is only one thing it can do with these: it must give the argument <span class="math-container">$a$</span> to the function <span class="math-container">$in$</span>, producing a result of type <span class="math-container">$B\lor C$</span>:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Left outL
outL a = ?? in a ?? -- `in a` has type B ∨ C
-- we want to produce a value of type B
</code></pre>
<p>The <code>in a</code> expression produces a value of type <span class="math-container">$B\lor C$</span>. A value of type <span class="math-container">$B\lor C$</span> looks either like <span class="math-container">$\lt b$</span> or like <span class="math-container">$\rt c$</span>, so we need a <code>case</code> expression to handle the two cases.</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Left outL
outL a = case in a of -- we want to produce a value of type B
Left b -> ???
Right c -> ???
</code></pre>
<p>In the <span class="math-container">$\lt b$</span> case it's easy to see how to proceed: just return the <span class="math-container">$b$</span>:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Left outL
outL a = case in a of -- we want to produce a value of type B
Left b -> b -- … and we did!
Right c -> ???
</code></pre>
<p>In the <span class="math-container">$\rt c$</span> case there's no way to proceed, because <span class="math-container">$out_L$</span> has available a value <span class="math-container">$c$</span> of type <span class="math-container">$C$</span>, but it still needs to produce a value of type <span class="math-container">$B$</span> and it has no way to get one. There is no way to complete the implementation.</p>
<p>Perhaps we made a mistake back at the beginning when we chose to define <code>out = Left outL</code>? Would <code>out = Right outR</code> work better? No, the problem will be exactly the same:</p>
<pre><code>def f in = out where -- `in` has type A → (B ∨ C)
out = Right outR
outR a = case in a of -- we want to produce a value of type C
Left b -> ??? -- … but we don't have one
Right c -> c
</code></pre>
<p>This exhausts the possibilities. There is no way to implement <span class="math-container">$f$</span>.</p>
<p>The points I want to make here are not only that programmers can and do acquire this sort of intuition for what can't be implemented and why not, but also that they can convert the intuition into an argument that doesn't rely on the intuition alone. Once the intuition is turned into a detailed argument, the <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="nofollow noreferrer">Curry-Howard correspondence</a> can be used to convert the failed implementation directly to an LJ sequent calculus proof or analytic tableau proof that the statement is invalid.</p>
<p>[ Addendum: Note by the way that the reverse implication, which is intuitionistically valid, is easy to prove, and it is just as easy to construct a corresponding program:</p>
<pre><code>def fINV out a = -- fINV has type ((A→B)∨(A→C))→(A→(B∨C))
case out of Left outL -> Left (outL a) -- outL has type A→B
Right outR -> Right (outR a) -- outR has type A→C
</code></pre>
<p>and again the Curry-Howard correspondence converts the program into a proof or vice-versa. ]</p>
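<p>For readers who would rather experiment in a mainstream language, here is a small Python sketch of the valid reverse direction, using <code>("Left", …)</code>/<code>("Right", …)</code> tuples to stand in for the sum type (the names are mine, not from the answer above):</p>

```python
def f_inv(out):
    """((A -> B) v (A -> C)) -> (A -> (B v C)): unwrap the tag, wrap the result."""
    tag, fn = out
    if tag == "Left":                      # fn : A -> B
        return lambda a: ("Left", fn(a))
    else:                                  # fn : A -> C
        return lambda a: ("Right", fn(a))

# Example with A = int, B = str, C = bool:
g = f_inv(("Left", str))
h = f_inv(("Right", lambda a: a > 0))
print(g(42))    # ('Left', '42')
print(h(-1))    # ('Right', False)
```

<p>The case analysis on the tag mirrors the <code>case out of</code> expression in the pseudocode.</p>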
|
2,247,445 | <p>Find isomorphism between <span class="math-container">$\mathbb F_2[x]/(x^3+x+1)$</span> and <span class="math-container">$\mathbb F_2[x]/(x^3+x^2+1)$</span>.</p>
<hr />
<p>It is easy to construct an injection <span class="math-container">$f$</span> satisfying <span class="math-container">$f(a+b)=f(a)+f(b)$</span> and <span class="math-container">$f(ab)=f(a)f(b)$</span>. However, I am stuck how to construct such a mapping that is bijective.</p>
<p>Thank you for help!</p>
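<p>(A brute-force sanity check one can run: any such isomorphism is determined by where it sends the class of <span class="math-container">$x$</span>, which must be a root of <span class="math-container">$x^3+x+1$</span> inside <span class="math-container">$\mathbb F_2[x]/(x^3+x^2+1)$</span>. Encoding field elements as 3-bit integers, a short Python sketch, with all names mine, finds the three candidates; one of them is <span class="math-container">$x+1$</span>, so <span class="math-container">$x\mapsto x+1$</span> gives an isomorphism.)</p>

```python
def mulmod(a, b, mod, deg=3):
    """Multiply two GF(2)[x] polynomials (bitmask-encoded) mod a degree-3 modulus."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
    for i in range(2 * deg - 2, deg - 1, -1):   # reduce degree 4, then degree 3
        if (p >> i) & 1:
            p ^= mod << (i - deg)
    return p

MOD = 0b1101            # x^3 + x^2 + 1
def is_root(t):         # does t satisfy t^3 + t + 1 = 0 in F2[x]/(x^3+x^2+1)?
    t3 = mulmod(mulmod(t, t, MOD), t, MOD)
    return t3 ^ t ^ 1 == 0

roots = [t for t in range(8) if is_root(t)]
print(roots)            # [3, 5, 6], i.e. x+1, x^2+1, x^2+x
```

<p>The three roots are the Frobenius conjugates of one another, matching the three isomorphisms.</p>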
| zahbaz | 176,922 | <p>$$75\sin^2\alpha + \frac{75}{4}\sin\alpha$$</p>
<p><a href="https://i.stack.imgur.com/6IdrJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IdrJ.png" alt="Draw a right triangle with alpha in one corner."></a></p>
<p>\begin{align}
\frac{75}{10} + \frac{75\sqrt{10}}{4\cdot10}
\\
\\
\frac{15}{2} + \frac{15\sqrt{10}}{8}
\end{align}</p>
|
4,046,265 | <p>I'm evaluating a line integral of the function <span class="math-container">$T= x^2 + 4xy + 2yz^3$</span> from <span class="math-container">$a = (0,0,0)$</span> to <span class="math-container">$b=(1,1,1)$</span> on the path <span class="math-container">$z = x^2$</span>, and <span class="math-container">$y = x$</span> without using the fundamental theorem.</p>
<p>My question is how to factor in the boundaries of the integral when I parameterize the path in terms of <span class="math-container">$t$</span></p>
<p>So far I have:<br />
Let <span class="math-container">$x = t$</span><br />
so <span class="math-container">$r= \langle t,t,t^2\rangle$</span> and <span class="math-container">$dr = \langle 1,1,2t\rangle\,dt$</span></p>
<p>how do I factor in the boundaries <span class="math-container">$a=(0,0,0)$</span> and <span class="math-container">$b=(1,1,1)$</span> for my integral? After I have the boundaries, solving the line integral is not a problem</p>
<p>Thank you</p>
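<p>(Once <span class="math-container">$x=t$</span> is chosen, the endpoints force <span class="math-container">$t$</span> to run over <span class="math-container">$[0,1]$</span>. A numerical sanity check, a Python sketch with finite-difference partials and Simpson's rule, helper names mine, recovers <span class="math-container">$T(1,1,1)-T(0,0,0)=7$</span>:)</p>

```python
def T(x, y, z):
    return x**2 + 4*x*y + 2*y*z**3

def grad_T(x, y, z, h=1e-6):
    # central finite differences for the three partial derivatives
    return ((T(x + h, y, z) - T(x - h, y, z)) / (2 * h),
            (T(x, y + h, z) - T(x, y - h, z)) / (2 * h),
            (T(x, y, z + h) - T(x, y, z - h)) / (2 * h))

def integrand(t):
    x, y, z = t, t, t * t            # the path r(t); t runs over [0, 1]
    dr = (1.0, 1.0, 2 * t)           # dr/dt
    return sum(g * d for g, d in zip(grad_T(x, y, z), dr))

# Simpson's rule on [0, 1]
n = 1000
h = 1.0 / n
s = integrand(0) + integrand(1.0)
s += sum((4 if k % 2 else 2) * integrand(k * h) for k in range(1, n))
result = s * h / 3
print(result)    # approximately 7.0, which equals T(1,1,1) - T(0,0,0)
```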
| Daàvid | 433,274 | <p>Well, when you defined the map <span class="math-container">$\overline{\phi}: TU \to \phi(U) \times \mathbb{R}^{n}$</span>, you didn't write it quite right: it is given by <span class="math-container">$\overline{\phi}(p,v) = (\phi(p), D_p\phi(v))$</span>, so the second component depends, through the derivative, on the point of the manifold as well, and not just on the vector in the tangent space.</p>
<p>Now take the jacobian of a change of coordinates <span class="math-container">$\overline{\phi}_{\alpha} \circ \overline{\phi}_{\beta} ^{-1} : V_{\beta} \times \mathbb{R}^n \to V_{\alpha}\times \mathbb{R}^n$</span>. Call <span class="math-container">$(x_1,...,x_n)$</span> the coordinates in <span class="math-container">$V_{\beta}$</span> and <span class="math-container">$(x_{n+1},...,x_{2n})$</span> the coordinates in the <span class="math-container">$\mathbb{R}^n$</span> of the domain, and do the same with <span class="math-container">$y$</span>'s for the codomain. To compute the jacobian we need the partials <span class="math-container">$\frac{\partial y_j}{\partial x_i}$</span> for <span class="math-container">$i,j = 1,...,2n$</span>. These vanish when <span class="math-container">$1\leq j \leq n$</span> and <span class="math-container">$n+1 \leq i \leq 2n$</span>, so the upper right quadrant of the matrix representation is zero; the remaining blocks, however, won't be zero in general. The jacobian is therefore a lower triangular block matrix, whose determinant equals the product of the determinants of <span class="math-container">$[\frac{\partial y_j}{\partial x_i}]_{i,j=1,...,n}$</span> and <span class="math-container">$[\frac{\partial y_j}{\partial x_i}]_{i,j=n+1,...,2n}$</span>, as you can check here <a href="https://math.stackexchange.com/questions/75293/determinant-of-a-block-lower-triangular-matrix">Determinant of a block lower triangular matrix</a>.</p>
<p>Now notice that the second determinant in that product is exactly <span class="math-container">$det (J(D_p \phi_{\beta}^{-1} \circ D_p \phi_{\alpha}))$</span>, where <span class="math-container">$p \in U$</span> is now fixed because we're calculating partial derivatives with respect to the variables indexed by <span class="math-container">$n+1,...,2n$</span>. But <span class="math-container">$D_p \phi_{\beta}^{-1} \circ D_p \phi_{\alpha}$</span> is a linear map from <span class="math-container">$\mathbb{R}^n$</span> to itself, and the derivative of a linear map is the map itself, so its jacobian is the matrix representation of <span class="math-container">$D_p \phi_{\beta}^{-1} \circ D_p \phi_{\alpha}$</span>, which in turn is exactly <span class="math-container">$J(\phi_{\beta}^{-1}\circ \phi_{\alpha})$</span>.</p>
<p>We have now concluded that the determinant of the jacobian of <span class="math-container">$\overline{\phi}_{\alpha} \circ \overline{\phi}_{\beta} ^{-1}$</span> is exactly <span class="math-container">$det(J(\phi_{\beta}^{-1}\circ \phi_{\alpha}))^2 >0$</span>.</p>
|
1,190,798 | <p>I'm having difficulty understanding when to use $\cos$ and $\sin$ to find $x$ and $y$ components of a vector.
Do we always use $\cos$ for $x$-component or what?</p>
| Floris | 101,979 | <p>It depends on your definition of the angle:</p>
<p><img src="https://i.stack.imgur.com/Sgu3w.png" alt="enter image description here"></p>
<p>In the picture as drawn, $x$ is $r\cos\alpha$ and $y$ is $r\sin\alpha$. But if I chose a different convention for $x$, $y$ or $\alpha$ I would need a different equation.</p>
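<p>The convention dependence is easy to encode; a small Python sketch (function names mine):</p>

```python
import math

def components(r, alpha):
    """alpha measured from the positive x-axis, as in the picture."""
    return r * math.cos(alpha), r * math.sin(alpha)

def components_from_y_axis(r, beta):
    """If the angle is measured from the y-axis instead, the roles swap."""
    return r * math.sin(beta), r * math.cos(beta)

print(components(2.0, math.pi / 6))              # approximately (1.732, 1.0)
print(components_from_y_axis(2.0, math.pi / 3))  # same vector, since beta = pi/2 - alpha
```

<p>So it is not "always cosine for <span class="math-container">$x$</span>"; it depends on which side of the triangle the angle sits.</p>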
|
172,157 | <p>Today I was asked if you can determine the divergence of $$\int_0^\infty \frac{e^x}{x}dx$$ using the limit comparison test.</p>
<p>I've tried things like $e^x$, $\frac{1}{x}$, I even tried changing bounds by picking $x=\ln u$, then $dx=\frac{1}{u}du$. Then the integral, with bounds changed becomes $\int_1^\infty \frac{1}{\ln u}du$ This didn't help either.</p>
<p>This problem intrigued me, so any helpful pointers would be greatly appreciated.</p>
| Pedro | 23,350 | <p>You are presented with $$I=\int_0^\infty \frac{e^x}{x}dx$$</p>
<p>It is clear the function is bounded at any point inside $(0,\infty)$ so we're worried about the extrema of the interval. Split the integral at, say $1$, we have</p>
<p>$$I=\int_0^1 \frac{e^x}{x}dx+\int_1^\infty \frac{e^x}{x}dx$$</p>
<p>We need to analyze, then</p>
<p>$$\lim_{\epsilon \to 0}\int_\epsilon^1 \frac{e^x}{x}dx$$</p>
<p>and</p>
<p>$$\lim_{m \to \infty}\int_1^m \frac{e^x}{x}dx$$</p>
<p>But note that for $x\in(0,1)$, we have</p>
<p>$$\frac{1}{x}<\frac{e^x}{x}$$</p>
<p>so that for $\epsilon >0$</p>
<p>$$\int_\epsilon^1\frac{dx}{x}<\int_\epsilon^1\frac{e^x}{x}dx$$</p>
<p>If we let $\epsilon \to 0$ we see that </p>
<p>$$\lim_{\epsilon \to 0}\int_\epsilon^1\frac{dx}{x}<\lim_{\epsilon \to 0}\int_\epsilon^1\frac{e^x}{x}dx$$</p>
<p>But $\displaystyle \lim_{\epsilon \to 0}\int_\epsilon^1\frac{dx}{x}$ diverges, so $\displaystyle \lim_{\epsilon \to 0}\int_\epsilon^1 \frac{e^x}{x}dx$ necessarily diverges too.</p>
<p>Now consider $e^{x/2}$ in $(1,\infty)$. You can check that </p>
<p>$$e^{x/2}<\frac{e^x}{x}$$</p>
<p>so</p>
<p>$$\int_1^me^{x/2}dx<\int_1^m\frac{e^x}{x}dx$$</p>
<p>for $m>1$. But now if we let $m\to \infty$ we see that </p>
<p>$$\lim_{m \to \infty}\int_1^me^{x/2}dx$$</p>
<p>diverges, so $$\lim_{m \to \infty}\int_1^m\frac{e^x}{x}dx$$ necessarily diverges too.</p>
<p>In conclusion, your integral diverges.</p>
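<p>Numerically, the comparison with $\frac1x$ near $0$ is visible if we split off the bounded part $\frac{e^x-1}{x}$: the blow-up is then exactly $-\ln\epsilon$ plus a convergent piece (pure-Python sketch; helper names are mine):</p>

```python
import math

# e^x/x = 1/x + (e^x - 1)/x; the second term is bounded near 0, so the
# divergence is exactly the -ln(eps) coming from the 1/x comparison.
g = lambda x: (math.exp(x) - 1) / x if x != 0 else 1.0

def trapezoid(f, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

totals = [-math.log(eps) + trapezoid(g, eps, 1.0) for eps in (1e-2, 1e-4, 1e-8)]
print([round(t, 3) for t in totals])   # strictly increasing, unbounded as eps -> 0
```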
|
172,157 | <p>Today I was asked if you can determine the divergence of $$\int_0^\infty \frac{e^x}{x}dx$$ using the limit comparison test.</p>
<p>I've tried things like $e^x$, $\frac{1}{x}$, I even tried changing bounds by picking $x=\ln u$, then $dx=\frac{1}{u}du$. Then the integral, with bounds changed becomes $\int_1^\infty \frac{1}{\ln u}du$ This didn't help either.</p>
<p>This problem intrigued me, so any helpful pointers would be greatly appreciated.</p>
| ncmathsadist | 4,154 | <p>In that case, do this</p>
<p>You have $$e^x/x\sim {1\over x}$$
as $x\to 0$ and
$$\int _{0^+} {dx\over x} = +\infty.$$
Now you are done.</p>
|
<p>Assume that we estimate the expected value of some measurements by $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead use $x_2$ in place of $x_3$ and $x_4$? Then we get the expression $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p>
<p>How do I know if $v$ is an unbiased estimator of $x$?</p>
<p>I am not sure how to approach this problem, any ideas are appreciated!</p>
| Michael Hardy | 11,667 | <p>\begin{align}
\text{your answer} & = 10^x\cdot \ln10\cdot \log_{10} x+\frac{1}{x\cdot \ln10}\cdot 10^x \\[10pt]
& = 10^x \ln x + \frac{10^x}{x\ln 10} \\[10pt]
& = 10^x \left( \ln x + \frac 1 {x\ln 10} \right) = \text{Wolfram's answer}.
\end{align}
Note that where Wolfram writes $\log x$ or $\log 10$ with no base specified, it means the base is $e$, so it's the natural logarithm.</p>
<p>Note also that we used the identity $\ln10\cdot\log_{10}x = \ln x$. That is an instance of the change-of-base formula for logarithms.</p>
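<p>A quick numerical check (a Python sketch; names mine) confirms that the two forms of the derivative agree:</p>

```python
import math

f = lambda x: 10**x * math.log10(x)
closed = lambda x: 10**x * (math.log(x) + 1 / (x * math.log(10)))

h = 1e-6
for x in (0.5, 1.0, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    print(x, round(numeric, 4), round(closed(x), 4))
```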
|
<p>Assume that we estimate the expected value of some measurements by $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead use $x_2$ in place of $x_3$ and $x_4$? Then we get the expression $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p>
<p>How do I know if $v$ is an unbiased estimator of $x$?</p>
<p>I am not sure how to approach this problem, any ideas are appreciated!</p>
| Jan Eerland | 226,665 | <p>$$\frac{\text{d}}{\text{d}x}\left(10^x\cdot\log_{10}(x)\right)=\frac{\text{d}}{\text{d}x}\left(\frac{10^x\ln(x)}{\ln(10)}\right)=$$
$$\frac{\frac{\text{d}}{\text{d}x}\left(10^x\ln(x)\right)}{\ln(10)}=\frac{\ln(x)\frac{\text{d}}{\text{d}x}(10^x)+10^x\frac{\text{d}}{\text{d}x}(\ln(x))}{\ln(10)}=$$
$$\frac{10^x\ln(10)\ln(x)+10^x\frac{\text{d}}{\text{d}x}(\ln(x))}{\ln(10)}=\frac{10^x\ln(10)\ln(x)+\frac{10^x}{x}}{\ln(10)}=$$
$$10^x\left(\ln(x)+\frac{1}{x\ln(10)}\right)$$</p>
|
440,452 | <blockquote>
<p>Let $b,c \in \mathbb{Z} $ and let $n \in \mathbb{N} $, $n \ge 2. $ Let $f(x) = x^{n} -bx+c$. Prove that $$\hbox{disc} (f(x)) = n^{n }c^{ n-1}-(n-1)^{n-1 }b^{n }.$$</p>
</blockquote>
<p>Here $\hbox{disc} (f(x)) = \prod_{i} f'(\alpha_{i} )$ where $\alpha_{1}, \dots, \alpha_{n}$ are the roots of $f(x)$.</p>
<p>After some calculations I obtained $\hbox{disc} (f(x)) = \frac{\prod_{i} \alpha_{i}(n-1)b \ - \ nc }{\prod_{i} \alpha_{i}} $, but I'm afraid this is the wrong way.</p>
| Community | -1 | <p>Here is a brute force approach:</p>
<p><span class="math-container">$f'(x) = nx^{n-1} - b$</span>, and we want to compute <span class="math-container">$\prod_i f'(\alpha_i)$</span>. We do this by looking for the minimal polynomial with roots <span class="math-container">$\alpha_i^{n-1}$</span>.</p>
<p>Note that</p>
<p><span class="math-container">$$\begin{array}{rl}
x^n - bx+c = 0 &\Leftrightarrow x(x^{n-1} - b) = -c \\
&\Leftrightarrow x^{n-1} (x^{n-1} - b)^{n-1} = (-1)^{n-1}c^{n-1}
\end{array}$$</span></p>
<p>let <span class="math-container">$y_i = \alpha_i^{n-1}$</span>, and <span class="math-container">$z_i = f'(\alpha_i) = ny_i - b$</span>. We have found the minimal polynomial for <span class="math-container">$y_i$</span>:
<span class="math-container">$$y(y-b)^{n-1} = (-1)^{n-1}c^{n-1}$$</span>
and we want the product <span class="math-container">$\prod_i z_i$</span>. Consider the change of variable <span class="math-container">$z = ny-b$</span>, i.e. <span class="math-container">$y = \frac{z+b}{n}$</span>. Substitute into the above equation, we get
<span class="math-container">$$\frac{z+b}{n} \left(\frac{z - (n-1)b}{n}\right)^{n-1} = (-1)^{n-1}c^{n-1} \\
\Leftrightarrow (z+b)(z-(n-1)b)^{n-1} - (-1)^{n-1} n^n c^{n-1} = 0$$</span>
The constant term of this polynomial in <span class="math-container">$z$</span> is <span class="math-container">$(-1)^n \prod_i z_i$</span>, therefore
<span class="math-container">$$\prod_i z_i = (-1)^n \left((-1)^{n-1}b^n(n-1)^{n-1} - (-1)^{n-1} n^n c^{n-1}\right) = n^n c^{n-1} - (n-1)^{n-1}b^{n}$$</span></p>
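<p>One can spot-check the identity numerically. The sketch below (pure Python; the Durand–Kerner root finder and all names are mine, not from the answer) computes <span class="math-container">$\prod_i f'(\alpha_i)$</span> from approximate roots and compares it with <span class="math-container">$n^nc^{n-1}-(n-1)^{n-1}b^n$</span>:</p>

```python
def prod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

def roots_dk(coeffs, iters=1000):
    """All complex roots of a monic polynomial [1, a_{n-1}, ..., a_0]
    via Durand-Kerner simultaneous iteration."""
    n = len(coeffs) - 1
    p = lambda x: sum(c * x ** (n - k) for k, c in enumerate(coeffs))
    zs = [complex(0.4, 0.9) ** k for k in range(n)]   # standard starting points
    for _ in range(iters):
        zs = [z - p(z) / prod(z - w for w in zs if w is not z) for z in zs]
    return zs

def check(n, b, c):
    alphas = roots_dk([1] + [0] * (n - 2) + [-b, c])   # x^n - b x + c
    lhs = prod(n * a ** (n - 1) - b for a in alphas)   # product of f'(alpha_i)
    rhs = n ** n * c ** (n - 1) - (n - 1) ** (n - 1) * b ** n
    return lhs, rhs

print(check(3, 2, 5))   # lhs approximately 643 = rhs
print(check(4, 3, 2))   # lhs approximately -139 = rhs
```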
|
975 | <p>The usual <code>Partition[]</code> function is a very handy little thing:</p>
<pre><code>Partition[Range[12], 4]
{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}
Partition[Range[13], 4, 3]
{{1, 2, 3, 4}, {4, 5, 6, 7}, {7, 8, 9, 10}, {10, 11, 12, 13}}
</code></pre>
<p>One application I'm working on required me to write a particular generalization of <code>Partition[]</code>'s functionality, which allows the generation of sublists of unequal lengths, as long as the lengths are appropriately commensurate. (Let's assume for the purposes of this question that the list lengths being commensurate is guaranteed, but you're welcome to generalize further to the incommensurate case.) Here's my generalization in action:</p>
<pre><code>multisegment[lst_List, scts_List] := Block[{acc},
acc = Prepend[Accumulate[PadRight[scts, Length[lst]/Mean[scts], scts]], 0];
Inner[Take[lst, {#1, #2}] &, Most[acc] + 1, Rest[acc], List]]
multisegment[CharacterRange["a", "x"], {3, 1, 2}]
{{"a", "b", "c"}, {"d"}, {"e", "f"}, {"g", "h", "i"}, {"j"}, {"k", "l"},
{"m", "n", "o"}, {"p"}, {"q", "r"}, {"s", "t", "u"}, {"v"}, {"w", "x"}}
</code></pre>
<p>(Thanks to halirutan for optimization help with <code>multisegment[]</code>.)</p>
<p>The problem I've hit into is that I wanted <code>multisegment[]</code> to also support offsets, just like in <code>Partition[]</code>. I want to be able to do something like the following:</p>
<pre><code>multisegment[Range[14], {4, 3}, {3, 1}]
{{1, 2, 3, 4}, {4, 5, 6}, {5, 6, 7, 8},
{8, 9, 10}, {9, 10, 11, 12}, {12, 13, 14}}
</code></pre>
<p>How might a version of <code>multisegment[]</code> with offsets be accomplished?</p>
| Simon | 34 | <p>Here's my version using the <code>Sow</code> and <code>Reap</code> combination.</p>
<pre><code>multisegment::arglen =
"The argument `1` is not of the same length as the argument `2`.";
multisegment[lst_List, scts_List, offsets_List] :=
Module[{len = Length[lst], slen = Length[scts], i = 1, j = 1},
Reap[If[slen =!= Length[offsets],
Message[multisegment::arglen, scts, offsets]; Sow[$Failed],
Do[Sow[Take[lst, {i, i + scts[[j]] - 1}]];
i = i + offsets[[j]]; j = Mod[j + 1, slen, 1];
If[i + scts[[j]] - 1 > len, Break[]],
{len/Total[offsets]*slen}]]]][[2, 1]]
multisegment[lst_List, scts_List] := multisegment[lst, scts, scts]
</code></pre>
<p>Note that you should also add checks to make sure that the <code>scts</code> and <code>offsets</code> arguments are all integers and that <code>Total[offsets] > 0</code> etc...</p>
<hr>
<p>Here's the relative timings (using my <code>TimeAv</code> code) for running</p>
<pre><code>multisegment[Range[n], {4, 3}, {3, 1}]; // TimeAv
</code></pre>
<p>with various values of <code>n</code> and the different solutions presented so far.
The timing of my version of <code>multisegment</code> is normalised to 1.</p>
<pre><code> Mike H Heike
n = 200 0.488689, 2.17595
n = 20 000 0.444445, 4.00373
n = 200 000 0.495761, 54.6492
</code></pre>
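<p>For readers outside Mathematica, the looping logic is straightforward to port; here is a pure-Python sketch (mine, not from the post; it assumes positive offsets) that reproduces the offset example from the question:</p>

```python
def multisegment(lst, scts, offsets=None):
    """Take chunks of cyclic lengths scts, advancing the start by cyclic offsets."""
    if offsets is None:
        offsets = scts          # no-offset case behaves like plain segmentation
    out, i, j = [], 0, 0
    while i + scts[j] <= len(lst):
        out.append(lst[i:i + scts[j]])
        i += offsets[j]
        j = (j + 1) % len(scts)
    return out

print(multisegment(list(range(1, 15)), [4, 3], [3, 1]))
# [[1, 2, 3, 4], [4, 5, 6], [5, 6, 7, 8], [8, 9, 10], [9, 10, 11, 12], [12, 13, 14]]
```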
|
975 | <p>The usual <code>Partition[]</code> function is a very handy little thing:</p>
<pre><code>Partition[Range[12], 4]
{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}
Partition[Range[13], 4, 3]
{{1, 2, 3, 4}, {4, 5, 6, 7}, {7, 8, 9, 10}, {10, 11, 12, 13}}
</code></pre>
<p>One application I'm working on required me to write a particular generalization of <code>Partition[]</code>'s functionality, which allows the generation of sublists of unequal lengths, as long as the lengths are appropriately commensurate. (Let's assume for the purposes of this question that the list lengths being commensurate is guaranteed, but you're welcome to generalize further to the incommensurate case.) Here's my generalization in action:</p>
<pre><code>multisegment[lst_List, scts_List] := Block[{acc},
acc = Prepend[Accumulate[PadRight[scts, Length[lst]/Mean[scts], scts]], 0];
Inner[Take[lst, {#1, #2}] &, Most[acc] + 1, Rest[acc], List]]
multisegment[CharacterRange["a", "x"], {3, 1, 2}]
{{"a", "b", "c"}, {"d"}, {"e", "f"}, {"g", "h", "i"}, {"j"}, {"k", "l"},
{"m", "n", "o"}, {"p"}, {"q", "r"}, {"s", "t", "u"}, {"v"}, {"w", "x"}}
</code></pre>
<p>(Thanks to halirutan for optimization help with <code>multisegment[]</code>.)</p>
<p>The problem I've hit into is that I wanted <code>multisegment[]</code> to also support offsets, just like in <code>Partition[]</code>. I want to be able to do something like the following:</p>
<pre><code>multisegment[Range[14], {4, 3}, {3, 1}]
{{1, 2, 3, 4}, {4, 5, 6}, {5, 6, 7, 8},
{8, 9, 10}, {9, 10, 11, 12}, {12, 13, 14}}
</code></pre>
<p>How might a version of <code>multisegment[]</code> with offsets be accomplished?</p>
| J. M.'s persistent exhaustion | 50 | <p>I've managed to slightly build on Mike's answer. There's a minimum (i.e., woefully incomplete) amount of checking done, but it should mostly work:</p>
<pre><code>multisegment[lst_List, scts:{__Integer?Positive}, offset:{__Integer?NonNegative}]:=
Module[{n = Length[lst], k, offs},
k = Ceiling[n/Mean[offset]];
offs = Prepend[Accumulate[PadRight[offset, k, offset]], 0];
Take[lst, #] & /@ TakeWhile[
Transpose[{offs + 1, offs + PadRight[scts, k + 1, scts]}],
Apply[And, Thread[# <= n]] &]] /; Length[scts] == Length[offset]
multisegment[lst_List, scts:{__Integer?Positive}] :=
multisegment[lst, scts, scts] /; Mod[Length[lst], Total[scts]] == 0
</code></pre>
|
<p>The function <span class="math-container">$f(x)=\cot^{-1} x$</span> is well known to be neither even nor odd because <span class="math-container">$\cot^{-1}(-x)=\pi-\cot^{-1} x$</span>. Its domain is <span class="math-container">$(-\infty, \infty)$</span> and its range is <span class="math-container">$(0, \pi)$</span>. Today, I was surprised to notice that Mathematica treats it as an odd function, and yields its plot as given below:</p>
<p><a href="https://i.stack.imgur.com/IcH3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IcH3w.png" alt="enter image description here" /></a></p>
<p>How to reconcile this ? I welcome your comments.</p>
<p>Edit: I used <code>Plot[ArcCot[x], {x, -3, 3}]</code> to produce the plot.</p>
| user | 505,767 | <p>We have that <span class="math-container">$\cot^{-1}(-x)$</span> is invertible only on suitable restrictions, in this case it seems Mathematica is considering the following definition</p>
<p><span class="math-container">$$f(x)=\cot^{-1}(x): \mathbb R \to \left(-\frac \pi 2, \frac \pi 2\right)$$</span></p>
<p>that is also the definition used by <a href="https://reference.wolfram.com/language/ref/ArcCot.html" rel="nofollow noreferrer">Wolfram</a>.</p>
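<p>The two conventions can be placed side by side in code (a Python sketch; function names mine). Mathematica's <code>ArcCot</code> effectively behaves like <span class="math-container">$\arctan(1/x)$</span>, which is why the plot looks odd-symmetric with a jump at <span class="math-container">$0$</span>:</p>

```python
import math

def arccot_odd(x):
    """Mathematica-style branch: arctan(1/x), odd away from 0."""
    return math.pi / 2 if x == 0 else math.atan(1.0 / x)

def arccot_continuous(x):
    """Textbook convention: range (0, pi), continuous, neither even nor odd."""
    return math.pi / 2 - math.atan(x)

print(round(arccot_odd(-1), 4), round(arccot_continuous(-1), 4))
# -0.7854 vs 2.3562: same cotangent, different branch
```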
|
1,430,369 | <p>I know how to prove this by induction but the text I'm following shows another way to prove it and I guess this way is used again in the future. I'm confused by it.</p>
<p>So the expression for first n numbers is:
$$\frac{n(n+1)}{2}$$</p>
<p>And this second proof starts out like this. It says since:</p>
<p>$$(n+1)^2-n^2=2n+1$$</p>
<p>Absolutely no idea where this expression came from, doesn't explain where it came from either.</p>
<p>Then it proceeds to say:
\begin{align}
2^2-1^2&=2*1+1 \\
3^2-2^2&=2*2+1\\
&\dots\\
n^2-(n-1)^2&=2(n-1)+1\\
(n+1)^2-n^2&=2n+1
\end{align}
At this point I'm completely lost.
But it continues to say "adding and noting the cancellations on the left, we get"
\begin{align}
(n+1)^2-1&=2(1+2+...+n)+n \\
n^2+n&=2(1+2+...+n) \\
(n(n+1))/2&=1+2+...+n
\end{align}</p>
<p>Which proves it but I have no clue what has happened. I am entirely new to these math proofs. I'm completely lost. I was great at high school math and calculus but now I haven't got the slightest clue of what's going on. Thanks</p>
| Dipole | 152,639 | <p>A simple way to "prove" the formula you give is to note that you can pair </p>
<p>$1$ with $n-1$</p>
<p>$2$ with $n-2$</p>
<p>$3$ with $n-3$</p>
<p>... and so on. Then you can see that the sum will be $(n+1)/2$ (the $+1$ coming from the fact that we basically pair $n$ with zero) of these pairs, each of which is the number $n$. </p>
|
157,985 | <p>Question from a beginner. I have data containing dates and values of the format:</p>
<pre><code> data = {{{2015, 1, 1}, 2}, {{2015, 1, 2}, 3}, {{2015, 2, 1}, 4}, {{2015, 2, 2}, 5}, {{2016, 1, 1}, 6}, {{2016, 1, 2}, 7}}
</code></pre>
<p>Aim is to multiply the values of each day in a month, e.g. for January 2015, the result should be 2*3=6, for February 2015 4*5=20 and so on. Ideally, the output would be a list of the format {{January 2015, 6}, {February 2015, 20},etc}, but just a list of the results of the multiplications would be fine.</p>
<p>To group the data by month, I use:</p>
<pre><code>selectElements[list_, start_, end_] := Module[{s = AbsoluteTime@start,
e = AbsoluteTime@end}, Select[list, Composition[s <= # <= e &, AbsoluteTime, First]]]
</code></pre>
<p>I then create a table multiplying the values of the data grouped by month:</p>
<pre><code>test1 = Table[Times @@ selectElements[data, {y, m, 1}, {y, m, 31}], {y, 2015, 2016}, {m, 1, 12}]
</code></pre>
<p>However, this multiplies not only the values, but also the dates themselves giving me:</p>
<pre><code> {{{{4060225, 1, 2}, 6}, {{4060225, 4, 2}, 20}, 1, 1, 1, 1, 1, 1, 1, 1,1, 1}, {{{4064256, 1, 1}, 42}, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}}
</code></pre>
<p>I'm sure there is an easy way to get just the dates and the values I'm interested in (i.e. 6,20,42, ideally with month/year), but so far I couldn't find it. I'd be very grateful for any pointers.</p>
| Carl Woll | 45,431 | <p>I think this is a simpler variant of @Alan's answer:</p>
<pre><code>GroupBy[
data,
Most@*First -> Last,
Apply[Times]
]
</code></pre>
<blockquote>
<p><|{2015, 1} -> 6, {2015, 2} -> 20, {2016, 1} -> 42|></p>
</blockquote>
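<p>For comparison, the same grouping idea outside Mathematica, as a Python sketch (not part of the original answer):</p>

```python
from collections import defaultdict

data = [((2015, 1, 1), 2), ((2015, 1, 2), 3), ((2015, 2, 1), 4),
        ((2015, 2, 2), 5), ((2016, 1, 1), 6), ((2016, 1, 2), 7)]

products = defaultdict(lambda: 1)
for (year, month, _day), value in data:
    products[(year, month)] *= value   # multiply all values sharing a (year, month) key

print(dict(products))
# {(2015, 1): 6, (2015, 2): 20, (2016, 1): 42}
```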
|
51,903 | <p>Does anyone know of a tool which</p>
<ol>
<li>Can display formulas neatly, preferably like this website without hassle. (Unlike wikipedia with <code>:<math></code>) </li>
<li>Has a wiki like structure: i.e categories of pages, individual articles with hyperlinked sections, subsections etc. </li>
<li>Preferrably can be used online and does not require installation of some software.</li>
<li>Comes with a free host, i.e for people with little money and no university server.</li>
</ol>
<p>So basically I need a notebook on steroids :)</p>
<p><em>Update</em>: Edited this question to remove the essay I wrote on blogs. You may refer to the revision history if you're interested. I am keeping it open in case anyone knows of further alternatives than have already been mentioned. At present I have settled on and am fairly content with Drupal.</p>
| 410 gone | 8,572 | <p>This site uses <a href="http://www.mathjax.org/" rel="nofollow noreferrer">MathJax</a>, which has pretty much become the web standard Latex tool, because it's so easy to use.</p>
<p>You can get a free account at the <a href="http://www.tumblr.com/" rel="nofollow noreferrer">Tumblr</a> blogging site, and <a href="http://www.mathjax.org/demos/use-in-web-platforms/#Tumblr" rel="nofollow noreferrer">link MathJax in to it</a>.</p>
<p>AFAIK, you can't incorporate MathJax into freely-hosted Wordpress.com sites. You can <a href="http://www.mathjax.org/docs/1.1/platforms/wordpress.html" rel="nofollow noreferrer">incorporate it into Wordpress</a> if it's self-hosted. There may be other free Wordpress hosting sites where you could do it.</p>
<p>And apparently you can <a href="http://idmsj.wordpress.com/2011/05/23/latex-to-blogger-powered-by-mathjax-experimental/" rel="nofollow noreferrer">incorporate Mathjax</a> into <a href="https://www.blogger.com/" rel="nofollow noreferrer">Blogger</a>.</p>
<p>You can also <a href="http://www.mathjax.org/docs/1.1/platforms/index.html" rel="nofollow noreferrer">link Mathjax to your own installation</a> of various content management systems: Movable Type, Joomla, Drupal, MediaWiki, TiddlyWiki, and Moodle.</p>
<p>And just in case this question does get closed, similar questions could be asked and answered over on the <a href="https://webapps.stackexchange.com/">WebApps StackExchange</a>.</p>
<p>Edit: so, if it's a wiki you want, search for "free mediawiki hosting" or "free tiddlywiki hosting", and then look for one which will allow MathJax too.</p>
|
180,323 | <p>apologies if this is a naive question. Consider two Galois extensions, K and L, of the rational numbers. For each extension, consider the set of rational primes that split completely in the extensions, say Split(K) and Split(L).</p>
<p>If Split(K) = Split(L), then is it necessarily true that K and L are isomorphic as Galois extensions of the rationals?</p>
<p>If so, for a given set of rational primes, S, is there a way to construct the extension over which S is the set of completely split primes?</p>
<p>References welcomed! Thanks, Martin</p>
| Zavosh | 2,604 | <p>This result is due to Bauer and dated to 1916:</p>
<p>$\textbf{Theorem}:$ Let $K$ be an algebraic number field, $L/K$ and $M/K$ finite Galois extensions, and $\text{Spl}(L/K)$, $\text{Spl}(M/K)$ the set of prime ideals of $K$ which split completely in $L$ and $M$, respectively. Then $L \subseteq M$ if and only if $\text{Spl}(M/K) \subseteq \text{Spl}(L/K)$.</p>
<p>The case of $K=\mathbb{Q}$ answers your first question in the affirmative. </p>
<p>As for the second question, not every subset of rational primes is $\text{Spl}(K/\mathbb{Q})$ for a number field $K$, simply by cardinality arguments. Class field theory gives a description of $\text{Spl}(L/K)$ when $\text{Gal}(L/K)$ is abelian, but the general problem is open and very hard.</p>
<p>An excellent source for this material is Keith Conrad's notes on the history of class field theory, which you can find under the Expository Notes section on his website here: <a href="http://www.math.uconn.edu/~kconrad">http://www.math.uconn.edu/~kconrad</a></p>
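<p>A concrete instance of the abelian case handled by class field theory: for <span class="math-container">$K=\mathbb{Q}(i)$</span>, an odd prime splits completely exactly when <span class="math-container">$x^2+1$</span> has a root mod <span class="math-container">$p$</span>, i.e. when <span class="math-container">$p\equiv 1 \pmod 4$</span>. A few lines of Python (a sketch; names mine) confirm this for small primes:</p>

```python
def splits_completely(p):
    """Odd prime p splits completely in Q(i) iff x^2 + 1 has a root mod p."""
    return any((x * x + 1) % p == 0 for x in range(p))

primes = [p for p in range(3, 60)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]
split = [p for p in primes if splits_completely(p)]
print(split)   # [5, 13, 17, 29, 37, 41, 53], exactly the primes congruent to 1 mod 4
```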
|
3,837,856 | <p>In <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29.6143&rep=rep1&type=pdf" rel="nofollow noreferrer">Fast Exact Multiplication by the Hessian</a> equation 1,</p>
<p><span class="math-container">$O(\left\Vert\Delta w\right\Vert^2)$</span> gets taken from the RHS to the LHS, and <span class="math-container">$\Delta w$</span> is substituted as <span class="math-container">$rv$</span>, where <span class="math-container">$r$</span> is a small scalar and <span class="math-container">$v$</span> is a vector. I understand that <span class="math-container">$O(\left\Vert rv\right\Vert^2) = O(r^2\left\Vert v\right\Vert^2)$</span>. But what I don't get is how the sign of the <span class="math-container">$O$</span> term did not become negative when it was taken to the LHS, and why <span class="math-container">$\left\Vert v\right\Vert^2 $</span> disappeared from the <span class="math-container">$O$</span>. Is it because <span class="math-container">$r$</span> is tiny, so it governs the <span class="math-container">$O$</span> term and <span class="math-container">$\left\Vert v\right\Vert^2$</span> doesn't matter?</p>
| Steven Stadnicki | 785 | <p><span class="math-container">$O(f(x))$</span> refers to a quantity that's bounded (in the limit) by some multiple of <span class="math-container">$f(x)$</span>; it properly (IMHO) represents a <em>set</em>. That is, to say that <span class="math-container">$g(x)\in O(f(x))$</span> means that there's some constant C such that for all sufficiently large (or sufficiently small, depending on the direction of the limit) <span class="math-container">$x$</span>, <span class="math-container">$|g(x)|\lt C|f(x)|$</span>. But this means in particular that <span class="math-container">$O()$</span> is 'sign-agnostic'; if <span class="math-container">$g(x)\in O(f(x))$</span>, then <span class="math-container">$cg(x)\in O(f(x))$</span> for all constants <span class="math-container">$c$</span>, positive or negative. This same fact is also what 'removes' the dependency on <span class="math-container">$\left\Vert v\right\Vert^2$</span>; because the limit that's implicit in <span class="math-container">$O()$</span> is being taken with respect to <span class="math-container">$r$</span> and <span class="math-container">$v$</span> doesn't depend on <span class="math-container">$r$</span>, the quantity <span class="math-container">$\left\Vert v\right\Vert^2$</span> is 'constant' and can be absorbed into the constant in <span class="math-container">$O()$</span>.</p>
|
481,673 | <p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p>
<p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
| Mohsen Shahriari | 229,831 | <blockquote>
<p>This is my answer to the question <a href="https://math.stackexchange.com/questions/1221042/solving-functional-equation-fxy-fxfy-fxfyfxy-for-all-real-numb">solving functional equation <span class="math-container">$f(x+y)+f(x)f(y)=f(x)+f(y)+f(xy)$</span> for all real numbers</a>, which I recently found to be a duplicate. I thought it might be useful to post it here.</p>
</blockquote>
<p>The only solutions to the functional equation
<span class="math-container">$$g(x+y)+g(x)g(y)=g(x)+g(y)+g(xy)\tag0\label0$$</span>
are the identity function <span class="math-container">$g(x)=x$</span>, and constant functions <span class="math-container">$g(x)=0$</span> and <span class="math-container">$g(x)=2$</span>.</p>
<p>To observe this, if we set <span class="math-container">$x=y=2$</span> we get <span class="math-container">$g(2)=0$</span> or <span class="math-container">$g(2)=2$</span>.
By setting <span class="math-container">$x=y=1$</span> we get <span class="math-container">$\big(g(1)\big)^2-3g(1)+g(2)=0$</span>. So if <span class="math-container">$g(2)=0$</span> then <span class="math-container">$g(1)=0$</span> or <span class="math-container">$g(1)=3$</span>, and if <span class="math-container">$g(2)=2$</span> then <span class="math-container">$g(1)=1$</span> or <span class="math-container">$g(1)=2$</span>.</p>
<ol>
<li>If <span class="math-container">$g(2)=0$</span> and <span class="math-container">$g(1)=3$</span>, by letting <span class="math-container">$y=1$</span> in \eqref{0}, we have <span class="math-container">$g(x+1)=3-g(x)$</span> which yields <span class="math-container">$g(x+2)=3-g(x+1)=3-\big(3-g(x)\big)=g(x)$</span>. Now if we put <span class="math-container">$x=\frac{1}{2}$</span> and <span class="math-container">$y=2$</span> in \eqref{0} we get <span class="math-container">$g(1)=0$</span> which leads to a contradiction. So this case can't happen.</li>
<li>If <span class="math-container">$g(2)=0$</span> and <span class="math-container">$g(1)=0$</span>, by letting <span class="math-container">$y=1$</span> in \eqref{0}, we have <span class="math-container">$g(x+1)=2g(x)$</span> which inductively yields <span class="math-container">$g(x+n)=2^ng(x)$</span> for any nonnegative integer <span class="math-container">$n$</span>. Now if we put <span class="math-container">$y=2$</span> in \eqref{0} we get <span class="math-container">$g(2x)=3g(x)$</span> and then <span class="math-container">$g(4x)=3g(2x)=9g(x)$</span>. Again, putting <span class="math-container">$y=4$</span> in \eqref{0} we conclude that <span class="math-container">$g(4x)=15g(x)$</span> since <span class="math-container">$g(4)=2^2g(2)=0$</span>. Hence <span class="math-container">$9g(x)=15g(x)$</span> and so <span class="math-container">$g$</span> is the constant zero function.</li>
<li>If <span class="math-container">$g(2)=2$</span> and <span class="math-container">$g(1)=2$</span>, by letting <span class="math-container">$y=1$</span> in \eqref{0}, we have <span class="math-container">$g(x+1)=2$</span>. So <span class="math-container">$g$</span> is the constant two function.</li>
<li>If <span class="math-container">$g(2)=2$</span> and <span class="math-container">$g(1)=1$</span>, by letting <span class="math-container">$y=1$</span> in \eqref{0}, we have <span class="math-container">$g(x+1)=g(x)+1$</span> which inductively yields <span class="math-container">$g(x+n)=g(x)+n$</span> for any integer <span class="math-container">$n$</span>. By putting <span class="math-container">$y=n$</span> in \eqref{0} we have <span class="math-container">$g(nx)=ng(x)$</span> since <span class="math-container">$g(n)=g\big(1+(n-1)\big)=n$</span>. Substituting <span class="math-container">$2x$</span> for <span class="math-container">$x$</span> and <span class="math-container">$2y$</span> for <span class="math-container">$y$</span> in \eqref{0} we get:
<span class="math-container">$$g(2x+2y)+g(2x)g(2y)=g(2x)+g(2y)+g(4xy)$$</span>
<span class="math-container">$$\therefore 2g(x+y)+4g(x)g(y)=2g(x)+2g(y)+4g(xy)$$</span>
Multiplying \eqref{0} by <span class="math-container">$2$</span> and subtracting the last equation, we get:
<span class="math-container">$$g(xy)=g(x)g(y)\tag1\label1$$</span>
Subtracting \eqref{0} and \eqref{1} we have:
<span class="math-container">$$g(x+y)=g(x)+g(y)\tag2\label2$$</span>
It's well known that if <span class="math-container">$g$</span> satisfies \eqref{1} and \eqref{2}, then it's either the constant zero function or the identity function (hint: \eqref{1} implies that <span class="math-container">$g(x)$</span> is nonnegative for nonnegative <span class="math-container">$x$</span>. By \eqref{2} we conclude that <span class="math-container">$g$</span> is increasing.)</li>
</ol>
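<p>As a sanity check (my addition, not part of the proof above), the three solutions can be tested against the functional equation on a grid of sample points, alongside a non-solution for contrast:</p>

```python
def satisfies(g, pts, tol=1e-9):
    """Test g(x+y) + g(x)g(y) == g(x) + g(y) + g(xy) on all pairs from pts."""
    return all(abs(g(x + y) + g(x) * g(y) - (g(x) + g(y) + g(x * y))) < tol
               for x in pts for y in pts)

pts = [-2.5, -1.0, 0.0, 0.5, 1.0, 3.0]
identity_ok = satisfies(lambda t: t, pts)      # g(x) = x
zero_ok = satisfies(lambda t: 0.0, pts)        # g(x) = 0
two_ok = satisfies(lambda t: 2.0, pts)         # g(x) = 2
square_fails = not satisfies(lambda t: t * t, pts)  # g(x) = x^2 is not a solution
```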
|
416,514 | <p>I really think I have no talents in topology. This is a part of a problem from <em>Topology</em> by Munkres:</p>
<blockquote>
<p>Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$. </p>
</blockquote>
<p>I always have the feeling that it is easy to understand the problem intuitively but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well; could anyone give me some advice? Thanks so much!</p>
| Luis Banegas Saybe | 420,416 | <p>Let $f$ : A $\longrightarrow$ $\mathbb{R}$ such that a $\mapsto$ d(x, a), where $\mathbb{R}$ is the topological space induced by the $<$ relation, the order topology.</p>
<p>For all open intervals (b, c) in $\mathbb{R}$, $f^{-1}((b, c))$ = {a $\in$ A $\vert$ d(x, a) $>$ b} $\cap$ {a $\in$ A $\vert$ d(x, a) $<$ c}, an open set. Therefore $f$ is continuous.</p>
<p>(Munkres) Theorem 27.4 Let $f$ : X $\longrightarrow$ Y be continuous, where Y is an ordered set in the order topology. If X is compact, then there exist points c and d in X such that $f$(c) $\leq$ $f$(x) $\leq$ $f$(d) for every x $\in$ X</p>
<p>By Theorem 27.4, $\exists$ r $\in$ A, d(x, r) = inf{ d(x, a) $\vert$ a $\in$ A}</p>
<p>Therefore d(x, A) = d(x, a) for some a $\in$ A</p>
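<p>A small numerical sketch of why compactness matters here (my addition; the set <span class="math-container">$A=[2,5]$</span> and the point <span class="math-container">$x=0$</span> are arbitrary choices): minimizing the continuous map <span class="math-container">$a \mapsto d(x,a)$</span> over a (discretized) compact set actually attains the infimum at a point of the set.</p>

```python
# Discretize the compact set A = [2, 5] and minimize a |-> d(x, a) = |x - a|.
x = 0.0
A = [2.0 + 3.0 * k / 10_000 for k in range(10_001)]

dist_to_A = min(abs(x - a) for a in A)        # inf of the distances
minimizer = min(A, key=lambda a: abs(x - a))  # a point of A realizing it
```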
|
1,136,458 | <p>Suppose $G$ is an abelian group. Then the subset $H= \big\{ x\in G\ | \ x^3=1_G \big\}$ is a subgroup of $G$.</p>
<p>I was able to show the statement is correct but the weird thing is that I didn't use the fact $G$ is abelian. <strong>Is it possible that the fact $G$ is abelian is redundant?</strong></p>
<p>Here is how I proved the statement above without using the fact $G$ is abelian:</p>
<p><strong>(Identity)</strong> Of course the identity is in H because $1_G^3=1_G$.</p>
<p><strong>(Closure)</strong> Suppose $a,b\in H$ then $a^3=1_G$ and $b^3=1_G$. Therefore we have:
$(ab)^3 = a^3b^3 = 1_G1_G = 1_G$.</p>
<p><strong>(Inverse)</strong> Suppose $a\in H$ then $a^3=1_G \implies aa^2=1_G \implies a^{-1} = a^2$</p>
<p>Am I correct?</p>
| 5xum | 112,884 | <p>It is probably a precision thing. Computers do not have infinite space and can therefore represent only a finite set of numbers exactly. Any operation you do on the numbers which results in a number that the computer cannot represent will result in rounding.</p>
<p>This means that there exists some <em>smallest number</em> which is still greater than $1$ and is still representable by the computer, and if the result of $1+\frac1n$ is close enough to $1$, the computer will make an estimation that $1+\frac1n = 1$, therefore, $\left(1+\frac1n\right)^n = 1^n=1.$</p>
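<p>This collapse is easy to reproduce in IEEE 754 double precision (a quick demo I'm adding, not part of the original answer): once <span class="math-container">$\frac1n$</span> drops below machine epsilon, <span class="math-container">$1+\frac1n$</span> literally evaluates to <span class="math-container">$1$</span>, and so does the power.</p>

```python
import sys

eps = sys.float_info.epsilon    # ~2.22e-16 for IEEE 754 doubles
n = 1e20                        # 1/n is far below eps
collapsed = 1.0 + 1.0 / n       # rounds to exactly 1.0
power = collapsed ** n          # so (1 + 1/n)^n evaluates to 1.0, not e
```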
|
2,418,181 | <p><strong>Question:</strong>
Let $P_1,\ldots,P_n$ be propositional variables. When is the statement $P_1 \oplus \cdots \oplus P_n$ true?</p>
<p>I'm currently learning the basics of discrete math. I am stuck on this last question of my assignment... not really sure how to go about solving it.</p>
<p>I do know that a propositional variable can either be true or false.</p>
<p>Thanks</p>
| paw88789 | 147,810 | <p>Hint: $x \oplus {\rm False}$ has the same truth value as $x$.</p>
<p>$x \oplus {\rm True}$ has the opposite truth value to $x$.</p>
<p>So every time you have a True among the arguments, the truth value flips.</p>
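<p>The resulting rule — the chain is true exactly when an odd number of the <span class="math-container">$P_i$</span> are true — can be confirmed by brute-force enumeration (a check I'm adding):</p>

```python
from functools import reduce
from itertools import product
from operator import xor

def xor_chain(values):
    """Fold XOR over a tuple of booleans, left to right."""
    return reduce(xor, values)

# For every truth assignment to n variables, XOR agrees with odd parity.
n = 4
agrees = all(xor_chain(v) == (sum(v) % 2 == 1)
             for v in product([False, True], repeat=n))
```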
|
747,997 | <p>in <a href="https://math.stackexchange.com/questions/239521/why-no-trace-operator-in-l2">Why no trace operator in $L^2$?</a>
it is mentioned that there exists a linear continuous trace operator from $L^2(\Omega)$ to $H^\frac12(\partial\Omega)$* for sufficiently smooth boundary. Can you give me any reference for this statement? I need something like this and cannot find it anywhere else.</p>
| bastienchaudet | 390,718 | <p>@Michael: I think that this result is not true, even when the boundary is smooth. I've been looking for it (for a while now...) in the literature, but unfortunately I haven't been able to find anything..! It seems that the continuity of the trace operator $\gamma:H^s(\Omega)\rightarrow H^{s-\frac{1}{2}}(\partial\Omega)$ holds iff $s>1/2$ (for smooth enough domains). Even the limit case $H^{\frac{1}{2}}(\Omega)\rightarrow L^2(\partial\Omega)$ does not work (see Lions & Magenes, same reference as Tomás', theorem 9.5).</p>
<p>Actually, in order to have a trace exactly in $L^p(\partial\Omega)$, $p>1$, the function has to be in some Besov space, see Cornelia Schneider's article <em>Trace operators in Besov and Triebel-Lizorkin spaces</em> (Corollary 3.17).</p>
<p>@Tomás: In the reference you suggest (Lions & Magenes), I assume you are referring to Theorem 9.4. But, when considering the trace operator, i.e. $\mu=0$, the assumption $\mu<s-\frac{1}{2}$ implies that $s$ has to be strictly greater than $\frac{1}{2}$ for the result to apply.</p>
|
2,038,498 | <p>How do you evaluate this limit
$$\lim_{x \to\infty}\frac{x^{x^2}}{2^{2^x}}$$</p>
<p>Since both top and bottom approach infinity, I assume L'Hospital's rule is the way to solve it, but after the first step I'm stuck
$$\lim_{x \to\infty}\frac{x^2 \log x}{2^x\log 2}$$
So how can I solve this problem? It seems the answer is infinity, but I don't know how to show that.</p>
| manthanomen | 67,750 | <p>The relation says $(x, y, t) \sim (x', y', t')$ if and only if either 1) $y = y'$ and $t = t' = 0$ or 2) $x = x'$ and $t = t' = 1$</p>
|
1,866,801 | <p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p>
<p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p>
<p>Thanks for your help</p>
| fleablood | 280,126 | <p>This is probably improper vocabulary but I think these concepts are fundamental.</p>
<p>Your "basic" angle is $0 \le \theta \le \pi/2$. All other angles are "essentially" equivalent "upto reflection on one or two axes". (Or up to periods of multiples of $2\pi$.)</p>
<p>Example. $\theta$ and $\pi - \theta$ are "equivalent up to reflection on the y-axis" and $\sin \theta = \sin (\pi - \theta)$ and $\cos \theta = - \cos (\pi - \theta)$ and "sines change signs on reflection over x axis" and "cosines change signs on reflection over y axis".</p>
<p>$\theta$ and $\pi + \theta$ are equivalent "up to reflection on both axes" and $\sin \theta = - \sin (\pi + \theta)$ and $\cos \theta = - \cos (\pi + \theta)$.</p>
<p>Similarly, $\theta$ and $-\theta$ are equivalent "up to reflection on the x axis" and $\sin \theta = - \sin (- \theta)$ and $\cos \theta = \cos (- \theta)$.</p>
<p>.......
.......</p>
<p>Okay, so $\cos x + \cos y = 0\implies \cos x = - \cos y$ means $x$ and $y$ are equivalent up to reflection over the y-axis.</p>
<p>So $\sin x = \pm \sin y$ but $\sin x + \sin y \ne 0$ means $\sin x = \sin y$ meaning $x$ and $y$ are not reflected over the x-axis. </p>
<p>So $y = \pi - x$.</p>
<p>And now as $2\sin x= 2\sin y = 1$, the "basic angle" is $x|y = \pi/6$ and $y|x = \pi - \pi/6 = 5\pi/6$. </p>
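<p>The reflection rules quoted above, and the final pair of angles, check out numerically (a verification sketch I'm appending; the test angle is arbitrary):</p>

```python
import math

theta = 0.7  # an arbitrary test angle
reflections_ok = all([
    math.isclose(math.sin(theta), math.sin(math.pi - theta)),   # over the y-axis
    math.isclose(math.cos(theta), -math.cos(math.pi - theta)),
    math.isclose(math.sin(theta), -math.sin(math.pi + theta)),  # over both axes
    math.isclose(math.cos(theta), -math.cos(math.pi + theta)),
])

x, y = math.pi / 6, 5 * math.pi / 6
solution_ok = (math.isclose(2 * math.sin(x), 1.0)
               and math.isclose(2 * math.sin(y), 1.0)
               and abs(math.cos(x) + math.cos(y)) < 1e-12)
```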
|
3,885,025 | <p>Let <span class="math-container">$(A,M)$</span> be a local ring (noetherian, if that is needed), <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> two prime ideals with <span class="math-container">$P_1\subseteq P_2$</span>, and <span class="math-container">$a\in M\setminus P_2$</span>. I don't know whether the following statement holds:</p>
<p>If <span class="math-container">$P_1+(a)=P_2+(a)$</span>, then <span class="math-container">$P_1=P_2$</span>.</p>
<p>Can someone help me?
Maybe in a more general setting it is possible to say that if <span class="math-container">$P+I=Q+I$</span>, then <span class="math-container">$P=Q$</span>.</p>
<p>Thank you.</p>
| Atticus Stonestrom | 663,661 | <p>I assume <span class="math-container">$A$</span> is commutative since you have commutative algebra as a tag, and I do use a hypothesis that <span class="math-container">$A$</span> is noetherian. I am not sure whether the result holds in more general settings.</p>
<hr />
<p>Lemma: if <span class="math-container">$R$</span> is a noetherian integral domain, and <span class="math-container">$x\in R$</span> is a non-unit, then the only prime ideal strictly contained in <span class="math-container">$xR$</span> is <span class="math-container">$\{0\}$</span>.</p>
<p>Proof: Let <span class="math-container">$P\subset xR$</span> be a prime ideal strictly contained in <span class="math-container">$xR$</span>. Certainly <span class="math-container">$(xR)P=xP=P$</span>: indeed, given any <span class="math-container">$p\in P$</span>, since <span class="math-container">$p\in xR$</span> there is <span class="math-container">$r\in R$</span> such that <span class="math-container">$p=xr$</span>, whence – because <span class="math-container">$P$</span> is prime and <span class="math-container">$x\notin P$</span> – we must have <span class="math-container">$r\in P$</span>. Now, <span class="math-container">$R$</span> is noetherian, so <span class="math-container">$P$</span> is a finitely generated ideal – ie a finitely generated <span class="math-container">$R$</span>-module – and thus we may apply Nakayama's lemma to obtain an element <span class="math-container">$xy\in xR$</span> such that <span class="math-container">$(1-xy)P=\{0\}$</span>. Now, if <span class="math-container">$P$</span> is non-zero – say <span class="math-container">$p\in P\setminus\{0\}$</span> – then, because <span class="math-container">$(1-xy)p=0$</span> and <span class="math-container">$R$</span> is an integral domain, we have that <span class="math-container">$xy=1$</span> and so <span class="math-container">$x$</span> is a unit, a contradiction. Thus <span class="math-container">$P$</span> must indeed be the zero ideal.</p>
<hr />
<p>Proof of main result: We consider the quotient ring <span class="math-container">$R:=A/P_1$</span>. Note, since <span class="math-container">$P_1$</span> is prime, <span class="math-container">$R$</span> is an integral domain. Also, since <span class="math-container">$(a)+P_1\subseteq M$</span> is a proper ideal of <span class="math-container">$A$</span>, the principal ideal <span class="math-container">$(a+P_1)R$</span> is certainly a proper ideal of <span class="math-container">$R$</span>.</p>
<p>Now note, <span class="math-container">$P_2/P_1\subseteq (P_2+(a))/P_1=(P_1+(a))/P_1=(a+P_1)R$</span>. We claim this inclusion is proper; indeed, otherwise we have <span class="math-container">$a+P_1\in P_2/P_1$</span>, whence there are <span class="math-container">$p_2\in P_2$</span> and <span class="math-container">$p_1\in P_1$</span> such that <span class="math-container">$a=p_2+p_1\in P_2$</span>, a contradiction.</p>
<p>So <span class="math-container">$P_2/P_1$</span> is a strict subset of the proper principal ideal <span class="math-container">$(a+P_1)R$</span>. It is also a prime ideal, since <span class="math-container">$P_2$</span> is, and hence we may apply our lemma above to obtain <span class="math-container">$P_2/P_1=0$</span>. But this means precisely that <span class="math-container">$P_2=P_1$</span>, as desired.</p>
|
3,097,640 | <p>So I have the following problem: $a + b = c + c$.
I want to prove that the equation has infinitely many relatively prime integer solutions.</p>
<p>What I did first was factor the right side to get:
(</p>
| Sam | 640,956 | <p>The equation above has the parametric solution shown below:</p>
<p><span class="math-container">$(a^2+b^2)=c(c^4+1)$</span></p>
<p><span class="math-container">$a=u^5+uv^4+2u^3v^2+v$</span></p>
<p><span class="math-container">$b=v^5+u^4v+2u^2v^3-u$</span></p>
<p><span class="math-container">$c=u^2+v^2$</span></p>
<p>For <span class="math-container">$(u,v)=(3,2)$</span> we get:</p>
<p><span class="math-container">$(509^2+335^2)=13(13^4+1)$</span></p>
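<p>The parametrization can be verified exactly, in integer arithmetic, over a range of <span class="math-container">$(u,v)$</span> (a check I'm adding, not part of the original answer):</p>

```python
def abc(u, v):
    """The parametric family above for a^2 + b^2 = c(c^4 + 1)."""
    a = u**5 + u * v**4 + 2 * u**3 * v**2 + v
    b = v**5 + u**4 * v + 2 * u**2 * v**3 - u
    c = u**2 + v**2
    return a, b, c

identity_holds = all(a * a + b * b == c * (c**4 + 1)
                     for u in range(1, 8) for v in range(1, 8)
                     for (a, b, c) in [abc(u, v)])
example = abc(3, 2)   # should reproduce the worked example (509, 335, 13)
```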
|
2,219,266 | <blockquote>
<p>Compute integral $\displaystyle \int_{-\infty}^{\infty}e^{-k^2t+ikx}\, dk$.</p>
</blockquote>
<p>Hint: Complete the square in the exponent.</p>
<p>Okay, for the exponent, we have
$$
-k^2t+ikx=-t\cdot\left(k-\frac{ix}{2t}\right)^2-\frac{x^2}{4t}.
$$</p>
<p>Now, is it easier to compute
$$
\int_{-\infty}^{\infty}\exp\left(-t\cdot\left(k-\frac{ix}{2t}\right)^2-\frac{x^2}{4t}\right)\, dk?
$$
Don't see the trick.</p>
| Connor Harris | 102,456 | <p>There's a $3/7$ chance of one male child, a $3/7$ chance of two, and a $1/7$ chance of three, so the total probability is $\left(\dfrac{1}{3} \times \dfrac{3}{7}\right) + \left(\dfrac{2}{3} \times \dfrac{3}{7}\right) + \left( \dfrac{3}{3} \times \dfrac{1}{7} \right) = \dfrac{4}{7}.$ Alternatively, if you want Bayes' theorem, $P(\text{meeting a boy}) = \dfrac{1}{2}$ if you know nothing about the woman's family, $P(\text{at least one boy}) = \dfrac{7}{8}$ if you haven't met any of the woman's children, and of course $P(\text{at least one boy}|\text{meeting a boy}) = 1$, so by Bayes,
$$P(\text{meeting a boy}|\text{at least one boy}) = \frac{P(\text{meeting a boy}) P(\text{at least one boy}|\text{meeting a boy})}{P(\text{at least one boy})} = \frac{ (1/2) \times (1)}{7/8} = \frac{4}{7}.$$</p>
<p>For the question you ask in comments, "Given that you met a (presumably randomly selected) boy, what's the chance that she has at least one girl?" $$P(\text{at least one girl}|\text{meeting a boy}) = \frac{P(\text{at least one girl}) P(\text{meeting a boy}|\text{at least one girl})}{P(\text{meeting a boy})}.$$ The prior probabilities $P(\text{at least one girl}) = 7/8$ and $P(\text{meeting a boy}) = 1/2$, same as above, and $P(\text{meeting a boy}|\text{at least one girl})$ is the same (by symmetry) as $P(\text{meeting a girl}|\text{at least one boy})$, which is $3/7$, the complement of the probability that we calculated in the first problem. The probability is $\dfrac{(7/8) \times (3/7)}{1/2} = \dfrac{3}{4}$. Alternatively, taking a page from drhab's answer here, of the seven equally likely triplets BBB BBG BGB BGG GBB GBG GGB, there are $12$ boys of whom $9$ appear with at least one girl, and $9/12 = 3/4$.</p>
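<p>Both fractions can be reproduced by exact enumeration over the eight equally likely birth sequences (my sketch, not part of the original answer):</p>

```python
from fractions import Fraction
from itertools import product

families = list(product("BG", repeat=3))        # 8 equally likely triplets
with_boy = [f for f in families if "B" in f]    # condition on at least one boy

boys = sum(f.count("B") for f in with_boy)
# P(a uniformly chosen child is a boy | family has at least one boy)
p_meet_boy = Fraction(boys, 3 * len(with_boy))
# P(family has at least one girl | the randomly met child is a boy)
boys_with_girl = sum(f.count("B") for f in with_boy if "G" in f)
p_girl_given_boy = Fraction(boys_with_girl, boys)
```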
|
2,219,266 | <blockquote>
<p>Compute integral $\displaystyle \int_{-\infty}^{\infty}e^{-k^2t+ikx}\, dk$.</p>
</blockquote>
<p>Hint: Complete the square in the exponent.</p>
<p>Okay, for the exponent, we have
$$
-k^2t+ikx=-t\cdot\left(k-\frac{ix}{2t}\right)^2-\frac{x^2}{4t}.
$$</p>
<p>Now, is it easier to compute
$$
\int_{-\infty}^{\infty}\exp\left(-t\cdot\left(k-\frac{ix}{2t}\right)^2-\frac{x^2}{4t}\right)\, dk?
$$
Don't see the trick.</p>
| drhab | 75,923 | <p>Alternatively you can think of 8 triplets (BBB,BBG, et cetera). </p>
<p>Symmetrically there are $12$ boys and $12$ girls. </p>
<p>Now the triple GGG is taken away so that $12$ boys and $9$ girls remain all having equal probability to be met on the street. </p>
<p>That gives probability $\frac{12}{12+9}=\frac47$ to meet a boy.</p>
|
903,117 | <p>I am trying to evaluate
$$\int_{-\infty}^{\infty} \frac{\sin(x)^2}{x^2} dx $$
Would a contour work? I have tried using a contour but had no success.
Thanks.</p>
<p>Edit: About 5 minutes after posting this question I suddenly realised how to solve it. Therefore, sorry about that. But thanks for all the answers anyways.</p>
| Mhenni Benghorbal | 35,472 | <p>Here is another approach. Write the integral as</p>
<blockquote>
<p>$$I = 2\int_{0}^{\infty} \frac{\sin^2(x)}{x^2}\,dx. $$</p>
</blockquote>
<p>Recalling the Mellin transform of a function $f$</p>
<blockquote>
<p>$$ \int_{0}^{\infty} x^{s-1} f(x)dx $$</p>
</blockquote>
<p>our integral is the Mellin transform of $\sin(x)^2$ with $s=-1$. The Mellin transform of $\sin(x)^2$ is given by</p>
<p>$$ -\frac{1}{2}\,{\frac {\sqrt {\pi }\,\Gamma \left( 1+s/2 \right) }{s\,\Gamma
\left( -s/2 + 1/2 \right) }}.$$</p>
<p>Taking the limit as $s\to -1$ gives $\frac{\pi}{2}$ for the half-line integral, and hence $I=\pi$ for the original integral. See <a href="https://math.stackexchange.com/questions/390456/laplace-transform-int-0-infty-frac-sin4-xx3-dx">other approaches</a>.</p>
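<p>As an independent numerical sanity check (mine, not part of the Mellin argument): trapezoidal integration of <span class="math-container">$\sin^2 x/x^2$</span> over <span class="math-container">$[0,R]$</span>, plus the <span class="math-container">$\approx 1/(2R)$</span> tail estimate, reproduces <span class="math-container">$\frac{\pi}{2}$</span> for the half-line integral. The window and grid sizes below are assumptions of the sketch, chosen to keep the quadrature error small.</p>

```python
import math

def integrand(x):
    # sin(x)^2 / x^2, extended continuously by its limit 1 at x = 0
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

R, n = 400.0, 200_000                # window and grid size (assumed adequate)
h = R / n
half_line = h * (0.5 * (integrand(0.0) + integrand(R))
                 + sum(integrand(k * h) for k in range(1, n)))
tail = 1.0 / (2.0 * R)               # since the tail beyond R contributes ~1/(2R)
approx = half_line + tail
```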
|
2,617,467 | <p>I have been trying to solve this limit for more than an hour and I'm stuck.</p>
<blockquote>
<p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}$$</p>
</blockquote>
<p>What I have so far is:</p>
<p>$$ \lim_{n \to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}\\ \lim_{n \to\infty} \frac{3^{n}}{n!+2^{(n+1)}}+\lim_{n \to\infty} \frac{\sqrt{n}}{n!+2^{(n+1)}}\\\lim_{n \to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}+\lim_{n \to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}\\ \lim_{n \to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}*0\\ \lim_{n \to\infty} \frac{3^{n}}{1+0}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1+0}*0\\ \lim_{n \to\infty} \frac{3^{n}}{1}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1}*0\\ \lim_{n \to\infty}{3^{n}}*0+\lim_{n \to\infty} \sqrt{n}*0\\ \infty\cdot 0+\infty\cdot 0$$</p>
<p>Which is an indeterminate form.
And I don't know how to continue it.
Can someone help me?
Please let me know if something isn't very clear. Thank you</p>
| Arthur | 15,500 | <p>Hint: Compare your original fraction to $\frac{2\cdot 3^n}{n!}$, which is a lot easier to work with.</p>
|
2,617,467 | <p>I have been trying to solve this limit for more than an hour and I'm stuck.</p>
<blockquote>
<p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}$$</p>
</blockquote>
<p>What I have so far is:</p>
<p>$$ \lim_{n \to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}\\ \lim_{n \to\infty} \frac{3^{n}}{n!+2^{(n+1)}}+\lim_{n \to\infty} \frac{\sqrt{n}}{n!+2^{(n+1)}}\\\lim_{n \to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}+\lim_{n \to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}\\ \lim_{n \to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}*0\\ \lim_{n \to\infty} \frac{3^{n}}{1+0}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1+0}*0\\ \lim_{n \to\infty} \frac{3^{n}}{1}*0+\lim_{n \to\infty} \frac{\sqrt{n}}{1}*0\\ \lim_{n \to\infty}{3^{n}}*0+\lim_{n \to\infty} \sqrt{n}*0\\ \infty\cdot 0+\infty\cdot 0$$</p>
<p>Which is an indeterminate form.
And I don't know how to continue it.
Can someone help me?
Please let me know if something isn't very clear. Thank you</p>
| user | 505,767 | <p>You can split the limit into two pieces, but you can't take the limit in only one part at a time, as was done here:</p>
<p>$$\color{red}{\lim_{n\to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}*0}$$</p>
<p>The limit can be easily handled as follows:</p>
<p>$$\frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}=\frac{3^{n}}{n!}\frac{ 1 +\frac{\sqrt{n}}{3^{n}}}{1+\frac{2^{(n+1)}}{n!}}\to0\cdot\frac{1+0}{1+0}=0$$</p>
<p>indeed by ratio test</p>
<p>$$\frac{3^{n}}{n!}\to 0 \quad \quad \frac{2^{(n+1)}}{n!} \to0 \quad \quad \frac{\sqrt{n}}{3^{n}}\to0$$</p>
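<p>The three ratio-test limits, and the original quotient, can be watched collapsing to <span class="math-container">$0$</span> numerically (a sketch I'm adding, not part of the original answer):</p>

```python
import math

def quotient(n):
    return (3**n + math.sqrt(n)) / (math.factorial(n) + 2**(n + 1))

samples = [quotient(n) for n in (5, 10, 20, 40)]
decreasing = all(a > b for a, b in zip(samples, samples[1:]))

# The three pieces from the ratio test, evaluated at n = 40:
pieces = (3**40 / math.factorial(40),
          2**41 / math.factorial(40),
          math.sqrt(40) / 3**40)
```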
|
3,238,054 | <blockquote>
<p>Find parameter <span class="math-container">$a$</span> for which <span class="math-container">$$\frac{ax^2+3x-4}{a+3x-4x^2}$$</span> takes all real values for <span class="math-container">$x \in \mathbb{R}$</span></p>
</blockquote>
<p>I have equated the function to a real value, say <span class="math-container">$k$</span>,
which gets me a quadratic in <span class="math-container">$x$</span>. I have then put <span class="math-container">$D\geq 0$</span> (since <span class="math-container">$x \in \mathbb{R}$</span>), which gets me <span class="math-container">$a \geq -9/16$</span></p>
<p>How do I proceed further to get the other bound for <span class="math-container">$a$</span>?</p>
| Lozenges | 219,277 | <p>Find <span class="math-container">$a$</span> so that the equation </p>
<p><span class="math-container">$$y= \frac{a x^2+3x -4}{-4x^2+3x+a}$$</span></p>
<p>has a root which is in the domain of the function</p>
<p>The discriminant must be <span class="math-container">$\geq 0$</span> for all <span class="math-container">$y$</span></p>
<p><span class="math-container">$$(9+16a)y^2+\left(4a^2+46\right)y +(9+16a)\geq 0$$</span> </p>
<p>This is the case if its discriminant is negative and <span class="math-container">$9+16a>0$</span></p>
<p><span class="math-container">$$16 (-7+a) (-1+a) (4+a)^2<0$$</span></p>
<p>which says <span class="math-container">$a$</span> must be in the interval <span class="math-container">$(1,7)$</span></p>
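<p>The discriminant factorization used above can be confirmed in exact integer arithmetic (my check, not part of the original answer):</p>

```python
def disc(a):
    """Discriminant (in y) of (9+16a)y^2 + (4a^2+46)y + (9+16a)."""
    return (4 * a**2 + 46) ** 2 - 4 * (9 + 16 * a) ** 2

def factored(a):
    return 16 * (a - 7) * (a - 1) * (a + 4) ** 2

identity_ok = all(disc(a) == factored(a) for a in range(-20, 21))
negative_inside = all(disc(a) < 0 for a in (2, 3, 4, 5, 6))  # sample of (1, 7)
```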
|
3,745,097 | <p>In my general topology textbook there is the following exercise:</p>
<blockquote>
<p>If <span class="math-container">$F$</span> is a non-empty countable subset of <span class="math-container">$\mathbb R$</span>, prove that <span class="math-container">$F$</span> is not an open set, but that <span class="math-container">$F$</span> may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>.</p>
</blockquote>
<p>I already proved that <span class="math-container">$F$</span> is not opened in the euclidean topology, but why is the second part true?</p>
<p>If <span class="math-container">$F$</span> is countable then <span class="math-container">$F \sim \mathbb N$</span>. This means that we can list the elements of <span class="math-container">$F$</span>, so we can write: <span class="math-container">$F=\{f_1,...,f_k,...\}$</span></p>
<p><span class="math-container">$\mathbb R \setminus F= (-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1})$</span></p>
<p>We have that <span class="math-container">$(-\infty, f_1) \in \tau$</span> and that every <span class="math-container">$(f_i,f_{i + 1}) \in \tau$</span>. Because the union of elements of <span class="math-container">$\tau$</span> is also a element of <span class="math-container">$\tau$</span>, we have that <span class="math-container">$(-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1}) \in \tau$</span>, then <span class="math-container">$F$</span> is closed.</p>
<p>Is this correct, because the statement says that "may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>"?</p>
| D. Brogan | 404,162 | <p>The problem here is that you are supposing that you can write <span class="math-container">$F=\{f_1,f_2,\ldots\}$</span> where the <span class="math-container">$f_i$</span>'s are in increasing order in <span class="math-container">$\mathbb{R}.$</span> This isn't true, for example consider <span class="math-container">$\mathbb Z\subset\mathbb R$</span>. However, this is still closed. If you want a countable set which is not closed, you should consider a sequence approaching a given point, say the set <span class="math-container">$\{\frac{1}{n}\;|\;n\in\mathbb N\}$</span>. Can you show that this isn't closed?</p>
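<p>A quick numeric illustration of the hint (my addition): <span class="math-container">$0$</span> is a limit point of <span class="math-container">$\{\frac1n\}$</span> that the set misses — points of the set get arbitrarily close to <span class="math-container">$0$</span>, yet <span class="math-container">$0$</span> itself is absent, which is exactly what failing to be closed means.</p>

```python
N = 10_000
S = {1.0 / n for n in range(1, N + 1)}   # a finite truncation of {1/n}

gap = min(S)              # distance from the limit point 0 to the set
zero_missing = 0.0 not in S
```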
|
757,656 | <p>Find the inverse of the function $f(x)= \dfrac{2x-1}{x^2-1}.$</p>
<p>We switch the $x$ and $y$ letters and then solve the equation, but it became kind of complicated while solving.</p>
| kingW3 | 130,953 | <p>Just expanding on the Mitsos answer
$$t(x^2-1)=2x-1\\tx^2-2x-t+1=0\\x_{1,2}=\frac{2\pm\sqrt{4-4t(1-t)}}{2t}$$</p>
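<p>Numerically, these two roots really do invert <span class="math-container">$f(x)=\frac{2x-1}{x^2-1}$</span> — each branch recovers one preimage (a check I'm adding; the sample point is arbitrary):</p>

```python
import math

def f(x):
    return (2 * x - 1) / (x * x - 1)

def preimages(t):
    """Roots of t*x^2 - 2x + (1 - t) = 0, via the quadratic formula."""
    d = math.sqrt(4 - 4 * t * (1 - t))
    return ((2 + d) / (2 * t), (2 - d) / (2 * t))

x0 = 3.0
t0 = f(x0)                    # 5/8
roots = preimages(t0)
recovered = any(math.isclose(r, x0) for r in roots)
round_trip = all(math.isclose(f(r), t0) for r in roots)
```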
|
757,656 | <p>Find the inverse of the function $f(x)= \dfrac{2x-1}{x^2-1}.$</p>
<p>We switch the $x$ and $y$ letters and then solve the equation, but it became kind of complicated while solving.</p>
| Tunk-Fey | 123,277 | <p>You have
$$
y= \frac{2x-1}{x^2-1}.
$$
As you mentioned, switch $x$ and $y$. It becomes
$$
x= \frac{2y-1}{y^2-1}.
$$
Now, do the following steps:
$$
\begin{align}
x(y^2-1)&=2y-1\\
xy^2-x&=2y-1\\
xy^2-2y&=x-1.\tag1
\end{align}
$$
Now, multiply both sides of $(1)$ by $x$ so that it's easy to use the completing-the-square method to obtain $y$.
$$
\begin{align}
x(xy^2-2y)&=x(x-1)\\
x^2y^2-2xy&=x^2-x\\
(xy-1)^2-1&=x^2-x\\
(xy-1)^2&=x^2-x+1\\
xy-1&=\pm\sqrt{x^2-x+1}\\
xy&=1\pm\sqrt{x^2-x+1}\\
y&=\frac{1\pm\sqrt{x^2-x+1}}{x}.\tag2
\end{align}
$$
For the last step, replace $y$ with $y^{-1}$ in $(2)$. Thus, the inverse is
$$
y^{-1}=\frac{1\pm\sqrt{x^2-x+1}}{x}.
$$</p>
|
827,072 | <p>How to find the equation of a circle which passes through these points $(5,10), (-5,0),(9,-6)$
using the formula
$(x-q)^2 + (y-p)^2 = r^2$.</p>
<p>I know I need to use that formula but have no idea how to start; I have tried, but I don't think my answer is right.</p>
| Donn S. Miller | 155,974 | <p>$\begin{vmatrix}
x^2+y^2&x&y&1\\
5^2+10^2&5&10&1\\
(-5)^2+0^2&-5&0&1\\
9^2+(-6)^2&9&-6&1\\
\end{vmatrix}=
\begin{vmatrix}
x^2+y^2&x&y&1\\
125&5&10&1\\
25&-5&0&1\\
117&9&-6&1\\
\end{vmatrix}
=
0$</p>
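<p>Expanding that determinant along its first row is one route; equivalently (a numeric sketch I'm adding), subtracting the circle equation at pairs of points leaves a linear system for the center <span class="math-container">$(q,p)$</span>, and all three points come out equidistant from it:</p>

```python
import math

pts = [(5.0, 10.0), (-5.0, 0.0), (9.0, -6.0)]

def circle_center(p1, p2, p3):
    """Subtracting (x-q)^2 + (y-p)^2 = r^2 at two points kills the quadratic
    terms, leaving a 2x2 linear system for (q, p); solve it by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1, c1 = 2 * (x2 - x1), 2 * (y2 - y1), x2**2 + y2**2 - x1**2 - y1**2
    a2, b2, c2 = 2 * (x3 - x1), 2 * (y3 - y1), x3**2 + y3**2 - x1**2 - y1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

q, p = circle_center(*pts)
radii = [math.hypot(x - q, y - p) for x, y in pts]   # should all coincide
```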
|
198,612 | <p>Let $(P,\leq)$ be a partially ordered set. A <em>down-set</em> is a set $d\subseteq P$ such that $x\in d$ and $x'\in P, x'\leq x$ imply $x'\in d$. If the down-set is totally ordered, we say it is a totally ordered down-set (tods).</p>
<p>Let $d_1, d_2$ be tods. We say that they are <em>incompatible</em> if neither $d_1\subseteq d_2$ nor $d_2\subseteq d_1$ holds. A set of pairwise incompatible tods is called a <em>club</em>. A club $C$ said to be <em>complete</em> if for every maximal chain $m\subseteq P$ there is $c\in C$ such that $c\subseteq m$.</p>
<p>Given a club $D$ consisting of finite members only, is there a complete club $C$ also consisting of finite sets only, and $C \supseteq D$?</p>
| Dominic van der Zypen | 8,628 | <p>The answer is No.</p>
<p>We define two sets of elements of $\{0,1,2\}^\omega$ in the following way:</p>
<ol>
<li>For $n\in\omega$ let $u_n$ be defined by $u_n(k)=1$ for $k\leq n$
and $u_n(k)=0$ for $k>n$;</li>
<li>For $n\in\omega$ let $t_n$ be defined by $t_n(k)=1$ for $k\leq n$
and $t_n(n+1) = 2$ and $t_n(k)=0$ for $k>n+1$;</li>
</ol>
<p>Note that informally speaking, the $u_n$ have the form $(1,1,1,\ldots, 1,0,0,\ldots)$
and the $t_n$ have the form $(1,1,1,\ldots, 1,2,0,\ldots)$.</p>
<p>Then we set $$E:=\{u_n:n\in \omega\}\cup\{t_n: n\in \omega\}.$$ The ordering
on $E$ is componentwise (equivalently, the ordering inherited from the product
ordering on $\{0,1,2\}^\omega$).</p>
<p><strong>Step 1</strong>. Let $c_n = \downarrow t_n$. Then $P = \{c_n: n\in \omega\}$ is a club.</p>
<p><em>Proof.</em> We have to show that for $m<n $ the tods $c_m, c_n$ are
incompatible. This is the case if we find incomparable elements
in $c_m, c_n$. This is easy: $t_m\in c_m$ and $t_n\in c_n$ are incomparable
in the ordering we chose for $E$.</p>
<p><strong>Step 2</strong>. The set $m = \{u_n: n\in \omega\}$ is a maximal tods.</p>
<p><em>Proof.</em> It's easy to see that $m$ is a chain and a down-set. Next, maximality: if we consider $m\cup\{t_n\}$ for some $n\in \omega$, the elements
$u_{n+2}$ and $t_n$ are not comparable, so $m\cup\{t_n\}$
is not totally ordered.</p>
<p><strong>Step 3</strong>. If $x$ is a finite tods with $x\subseteq m$, then
$x$ is compatible with a member of $P$. </p>
<p><em>Proof.</em> Any finite tods $x\subseteq m$ has the form
$x = \downarrow u_n$ for some $n\in \omega$. So $c_n = \downarrow
t_n \supseteq x$ therefore $c_n \cup x = c_n$ is a tods,
so by definition $c_n$ and $x$ are compatible.</p>
<p><strong>Conclusion</strong>. There is no complete club consisting of finite sets $P'\supseteq P$
such that $P$ contains a finite subset of $m$. Therefore the answer to the question is No.</p>
|
1,981,553 | <p>When a person asks: "What is the smallest number (natural numbers) with two digits?"</p>
<p>You answer: "10".</p>
<p>But by which convention 04 is no valid 2 digit number?</p>
<p>Thanks alot in advance</p>
| hmakholm left over Monica | 14,366 | <p>The question asks for a <em>number</em>, not a <em>representation</em> of a number.</p>
<p>The digit sequence <code>04</code> is a representation of the number $4$ (also known as $1+1+1+1$), but <code>4</code> is <em>also</em> a representation of that same number.</p>
<p>Every natural number has many such representations, with varying amounts of leading zeroes, but when the question asks about a "number with two digits" it must implicitly assume that a number <em>has</em> a well-defined number of digits. The only reasonable way to make sense of that is to interpret the question as</p>
<blockquote>
<p>What is the smallest number whose <em>usual decimal representation</em> has two digits?</p>
</blockquote>
<p>(where "usual" means with no leading zeroes except that $0$ itself is represented as <code>0</code> because using the empty string of digits to represent something would be confusing).</p>
<p>$4$ does not qualify as an answer to this because its <em>usual</em> representation is <code>4</code> rather than <code>04</code>.</p>
|
291,284 | <p>Let $(\Omega, \cal{A}, \mathbb{P})$ be a probability space and $X$ a random variable on $\Omega$. Let, also, $f:\Omega\to\mathbb{R}$ be a Borel function. Then:<br>
$X$ and $f(X)$ are independent $\Longleftrightarrow$ there exists some $t\in\mathbb{R}$ such that $\mathbb{P}[f(X)=t]=1$, that is $f(X)$ is a degenerate r.v. </p>
<p>The only thing that I could make out is that if $X$ and $f(X)$ are independent, then<br>
$\mathbb{P}[f(X)\in B]=0$ or $1$ for every Borel subset of $\mathbb{R}$, since $\sigma(f(X))\subseteq \sigma(X)$ and hence $f(X)$ is independent of itself. Suppose, now, that $\mathbb{P}[f(X)\leq x]=0$ for all $x\in\mathbb{R}$. Then:<br>
$\mathbb{P}[f(X)\in\mathbb{R}]=\mathbb{P}[\bigcup_{n=0}^{\infty}[f(X)\leq n]]\leq\sum_{n=0}^{\infty}\mathbb{P}[f(X)\leq n]=0$, which obviously is a contradiction since $\mathbb{P}[f(X)\in\mathbb{R}]=1$. </p>
<p>However, I don't know how to prove this and my attempt isn't likely to become a complete solution.<br>
Any help would be appreciated.<br>
Thanks in advance! </p>
| Ilya | 5,887 | <p>There is another proof of this fact, which follows immediately from the fact that you proved, namely $\Bbb P[f(X)\in B]\in \{0,1\}$, and is reminiscent of the Nested Intervals Theorem. As shorthand, let
$$
\mu(B):=\Bbb P[f(X)\in B]
$$
denote the distribution of $f(X)$. </p>
<p>So we have $\mu(B) \in \{0,1\}$ for every Borel set $B$, and $\mu(\Bbb R) = 1$. Let us prove that this implies that $\mu$ is a degenerate distribution, i.e. there exists $x\in \Bbb R$ such that $\mu(\{x\}) = 1$.</p>
<ol>
<li><p>There exists a bounded closed interval $I_n =[-n,n]$ such that $\mu(I_n) = 1$. Indeed, if there is no such interval then
$$
1 = \mu(\Bbb R) = \mu\left(\bigcup_n I_n\right) = \lim_n \mu(I_n) = 0
$$
which is a contradiction.</p></li>
<li><p>Now we construct a sequence of nested intervals. Denote $J_0 = I_n$, and let
$$
J_0^{-} = [-n,0],\quad J_0^+ = [0,n].
$$
There is at least one of these intervals of measure $1$. Denote it by $J_1$.</p></li>
<li><p>Repeat by induction the procedure for $J_k$ where $k = 1,2,\dots$: divide it symmetrically into two parts and put $J_{k+1}$ be any of these parts such that $\mu(J_{k+1}) = 1$.</p></li>
<li><p>In the end, you have a decreasing sequence of compact intervals $J_0,\dots,J_k,\dots$ such that $\mathrm{diam}(J_k)\leq 2^{1-k}n $ thus $\bigcap_k J_k$ is some single point. We have
$$
\mu\left(\bigcap_k J_k\right) = \lim_k \mu(J_k) = 1
$$
so that we found $x = \bigcap_k J_k$.</p></li>
</ol>
|
3,995,913 | <p>We are looking at the following expression:</p>
<p><span class="math-container">$$\frac{d}{dx}\int_{u(x)}^{v(x)}f(x) dx$$</span></p>
<p>The solution is straightforward for this: <span class="math-container">$\frac{d}{dx}\int_{u(x)}^{v(x)}f(t) dt$</span>. Do we evaluate the given expression in like manner? Do we treat the <span class="math-container">$f(x) dx$</span> as if it were <span class="math-container">$f(t) dt$</span>?</p>
| Community | -1 | <p>This expression is badly written. It only makes sense to understand it as
<span class="math-container">$$
\frac{d}{dx}\int_{u(x)}^{v(x)}f(t)dt
$$</span>
where the dummy variable for the definite integral should <em>not</em> be the same as any variable in the bounds.</p>
<p>The general method for dealing with such derivative is called the <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule" rel="nofollow noreferrer">Leibniz integral rule</a>, which tells you how to find
<span class="math-container">$$
\frac{d}{dx}\int_{a(x)}^{b(x)}g(x,t)\,dt
$$</span></p>
<p>Your question is a special case where <span class="math-container">$g(x,t)=f(t)$</span>.</p>
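<p>This special case can be sanity-checked numerically. The Python sketch below uses an illustrative integrand and limits of our own choosing ($f(t)=t^2$, $u(x)=x$, $v(x)=x^2$), comparing a difference quotient of the evaluated integral against $f(v(x))v'(x)-f(u(x))u'(x)$:</p>

```python
def f(t):
    return t ** 2          # illustrative integrand (our choice)

def u(x):
    return x               # lower limit

def v(x):
    return x ** 2          # upper limit

def integral(x, n=20000):
    # Midpoint rule for the integral of f(t) dt from u(x) to v(x)
    a, b = u(x), v(x)
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def leibniz_rhs(x, h=1e-6):
    # d/dx of the integral when the integrand has no x: f(v) v' - f(u) u'
    vp = (v(x + h) - v(x - h)) / (2 * h)
    up = (u(x + h) - u(x - h)) / (2 * h)
    return f(v(x)) * vp - f(u(x)) * up

x0 = 1.5
numeric = (integral(x0 + 1e-5) - integral(x0 - 1e-5)) / 2e-5
assert abs(numeric - leibniz_rhs(x0)) < 1e-3
print(leibniz_rhs(x0))
```

<p>Both sides agree to several decimal places at the sample point; any smooth $f$, $u$, $v$ would do.</p>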
|
2,671,384 | <p>I want to solve following difference equation:</p>
<p>$a_i = \frac13a_{i+1} + \frac23a_{i-1}$, where $a_0=0$ and $a_{i+2} = 1$</p>
<p><strong>My approach:</strong>
Substituting $i=1$ in the equation,<br>
$3a_1 = a_2 + 2a_0$ <br>
$a_2 = 3a_1$<br>
Similarly substituting $i = 2, 3 ...$<br>
$a_3 = 7a_1$<br>
$a_4 = 15a_1$<br>
...</p>
<p>Generalizing,<br>
$a_i = (2^i - 1)a_1$<br></p>
<p>I am not sure how to get rid of $a_1$ to get the final answer of
$\frac{(2^i - 1)}{(2^{2+i} - 1)}$</p>
| user | 505,767 | <p>The standard method to solve <a href="https://brilliant.org/wiki/linear-recurrence-relations/" rel="nofollow noreferrer">recurrence equations</a> is to set as trial solution</p>
<p>$$a_i=x^i \implies x^i = \frac13x^{i+1} + \frac23 x^{i-1}\implies x =\frac13 x^2+\frac23\implies x^2-3x+2=0$$</p>
<p>then find the roots $x_1$ and $x_2$; the general solution is</p>
<p>$$a_i = Ax_1^i+Bx_2^i$$</p>
<p>with $A$ and $B$ determined by the boundary conditions.<br>
Solving, $x_1=1, x_2=2$.<br>
Substituting, $a_i = A + 2^iB$. <br>
Using the boundary conditions $a_0=0$ and $a_{i+2}=1$ and solving through, $A = \frac{-1}{2^{i+2}-1}$ and $B = \frac{1}{2^{i+2}-1}$ <br>
Plugging these back into the general solution and rearranging:<br>
$a_i = \frac{2^{i}-1}{2^{i+2}-1}$</p>
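<p>As a quick check, the closed form can be verified against the recurrence with exact rational arithmetic in Python. Here the fixed endpoint is written as $N$ (playing the role of the $i+2$ in the boundary condition); this is a sketch, not part of the derivation:</p>

```python
from fractions import Fraction

def closed_form(i, N):
    # Candidate solution a_i = (2^i - 1) / (2^N - 1); assumes boundary
    # conditions a_0 = 0 and a_N = 1, with N the fixed endpoint index
    return Fraction(2 ** i - 1, 2 ** N - 1)

N = 10
a = [closed_form(i, N) for i in range(N + 1)]

assert a[0] == 0 and a[N] == 1          # boundary conditions

# interior recurrence a_i = (1/3) a_{i+1} + (2/3) a_{i-1}
for i in range(1, N):
    assert a[i] == Fraction(1, 3) * a[i + 1] + Fraction(2, 3) * a[i - 1]

print("recurrence verified for N =", N)
```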
|
884,342 | <p>$N$ is a normal subgroup of $G$ if $aNa^{-1}$ is a subset of $N$ for all elements $a $ contained in $G$. Assume, $aNa^{-1} = \{ana^{-1}|n \in N\}$.</p>
<p>Prove that in that case $aNa^{-1}= N.$</p>
<p>If $x$ is in $N$ and $N$ is a normal subgroup of $G$, for any element $g$ in $G$, $gxg^{-1}$ is in $G$. Suppose $x$ is in $N$, and $y=axa^{-1}$ as is defined. Since $N$ is normal, $aNa^{-1}$ is a subset of N.
$x= a^{-1}ya$. Given that $x$ is in $N$, and $x=a^{-1}ya$, $y$ is also in $N$. If $y$ is in N, then $axa^{-1}$ is also in $N$. $X$ is in $aNa^{-1}$.</p>
<p>Does the proof make sense?</p>
| pre-kidney | 34,662 | <p>Here's another way to see where the result comes from. If $N$ is normal,
$$\begin{align*}
aNa^{-1}&\subset N\\
aN&\subset Na.
\end{align*}$$
But the definition is symmetric in $a$! Swapping the roles of $a$ and $a^{-1}$, we also get $Na\subset aN$. Thus $aN=Na$, which gives you $aNa^{-1}=N$.</p>
<p>Philosophical aside: Groups can be annoying to work with when elements don't commute. However, oftentimes the next best thing is knowing that certain <strong>subgroups</strong> commute. That's what it means to be normal: you commute with all elements. All the important properties of normal subgroups (especially the formation of quotients) follow from this observation.</p>
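<p>Here is a small, self-contained Python check of the same fact in a concrete case (our own illustration, not part of the proof): $N = A_3$ is normal in $S_3$, and conjugation by any element gives back exactly $N$, not just a subset:</p>

```python
from itertools import permutations

# Permutations of {0, 1, 2} represented as tuples
def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}     # A3, the 3-cycles plus identity

for a in G:
    conj = {compose(compose(a, n), inverse(a)) for n in N}
    assert conj == N                       # aNa^{-1} = N, not merely a subset

print("A3 is normal in S3")
```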
|
1,686,012 | <p>Let $T$: $\mathbb{R}^3 \to \mathbb{R}$ be a linear transformation.</p>
<p>Show that either $T$ is surjective, or $T$ is the zero linear transformation.</p>
<p>My approach:</p>
<p>First we start off by supposing T is not surjective and we want to show that $\forall \overrightarrow v \in \mathbb{R}^3, T(\overrightarrow v)=\overrightarrow 0$.</p>
<p>$T$ not surjective implies that $Image(T) \not = \mathbb{R}$ and that $\exists \overrightarrow y \in \mathbb{R}$ such that
$$
\begin{bmatrix}
a & b & c
\end{bmatrix} . \overrightarrow v \not = \overrightarrow y$$ with a,b,c $\in \mathbb{R}$</p>
<p>$\Rightarrow$ $ax_{1}+bx_{2}+cx_{3} \not = y_{1}$</p>
<p>From here, I want to show that a,b,c are equal to zero. Should I say that an equation of a plane to a real number should have infinite solutions unless the coefficients are zero and conclude that a,b,c are zero or is there another way?</p>
| user296113 | 296,113 | <p>A simple approach is to say that the image of $T$ is a subspace of $\Bbb R$ so it would be $\Bbb R$ or $\{0\}$. The former case is when $T$ is surjective and the last is when $T=0$.</p>
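<p>The dichotomy is easy to see concretely: any such $T$ is $T(\overrightarrow v)=\overrightarrow w\cdot\overrightarrow v$ for a fixed $\overrightarrow w\in\mathbb{R}^3$, and if $\overrightarrow w\neq 0$ an explicit preimage of any target $y$ is $y\,\overrightarrow w/\lVert\overrightarrow w\rVert^2$. A minimal Python sketch (the particular $\overrightarrow w$ is an arbitrary choice):</p>

```python
def T(v, w):
    # T(v) = w . v, the general linear map from R^3 to R
    return sum(wi * vi for wi, vi in zip(w, v))

def is_zero_map(w):
    return all(wi == 0 for wi in w)

def preimage_of(y, w):
    # If w != 0, scaling w itself hits any target y: T(y w / |w|^2) = y
    n2 = sum(wi * wi for wi in w)
    return [y * wi / n2 for wi in w]

w = (2.0, -1.0, 3.0)                     # arbitrary nonzero example
assert not is_zero_map(w)
v = preimage_of(7.5, w)
assert abs(T(v, w) - 7.5) < 1e-9         # T is onto when w != 0
print(v)
```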
|
2,753,504 | <p>Where $a,b$ and $c$ are positive real numbers.</p>
<p>So far I have shown that $$a^2+b^2+c^2 \ge ab+bc+ac$$ and that $$a^2+b^2+c^2 \ge a\sqrt{bc} + b\sqrt{ac} + c\sqrt{ab}$$ but I am at a loss what to do next... I have tried adding various forms of the two inequalities but always end up with something extra on the side of $ab+bc+ac$. Any help appreciated!</p>
| DeepSea | 101,504 | <p>Another answer is using the "friendly" Cauchy-Schwarz inequality:</p>
<p>$$a\sqrt{bc}+b\sqrt{ca}+ c\sqrt{ab}= \sqrt{ab}\sqrt{ac}+\sqrt{bc}\sqrt{ab}+\sqrt{ac}\sqrt{bc}\le \sqrt{(\sqrt{ab})^2+(\sqrt{bc})^2+(\sqrt{ac})^2}\times\sqrt{(\sqrt{ac})^2+(\sqrt{ab})^2+(\sqrt{bc})^2}$$ $$= \sqrt{ab+bc+ca}\times\sqrt{ab+ac+bc}= \sqrt{(ab+bc+ca)^2}= ab+bc+ca$$</p>
<p>Point is: if it can be solved by AM-GM inequality, then it can be solved by Cauchy-Schwarz inequality. These famous inequalities are like "therapeutic doctors" at a clinic...</p>
|
47,724 | <p>I am looking at a past exam written by a student. There was a question I believed he got correct but received only 1/4. The marker wrote down "4 more compositions, order matters".</p>
<p>This is the problem:</p>
<p>List all 3 part compositions of 5. (recall that compositions have no zeros)</p>
<p>$(1, 1, 3)
(1, 2, 2)
(1, 3, 1)
(3, 1, 1)
(2, 1, 2)
(2, 2, 1)$ (what the student has on paper) <strong>EDIT: I made some mistakes copying</strong></p>
<p>My guess is that the student wrote only (1,1,3) and (2,2,1) ,but corrected it after exam was returned. I just want to make sure that this is the case, so that I don't miss something.</p>
| Brian M. Scott | 12,042 | <p>What the student has is still wrong, assuming that you didn't typo it in writing up the question: $(3,1,1)$ has been duplicated, and $(1,2,2)$ has been omitted. Your understanding is correct, and my guess is the same as yours.</p>
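<p>For reference, the six compositions can be listed by brute force; a minimal Python sketch (not part of the answer):</p>

```python
from itertools import product

# Ordered triples of positive integers summing to 5
comps = [(a, b, c) for a, b, c in product(range(1, 6), repeat=3)
         if a + b + c == 5]

assert len(comps) == 6        # C(4, 2) = 6 three-part compositions of 5
print(comps)
```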
|
101,157 | <p>Let $G$ be an algebraic group defined over a char 0
local field $k$. Following Borel and Tits (73) we define
the group $G^+(k)$ or $G^+$ by the subgroup of $G(k)$
generated by the unipotent elements of $G(k)$. </p>
<p>Suppose $G$ is generated by a finite set of unipotent
$k$-subgroups, say $U_1,\cdots, U_n$. Is it true that
the group generated by $U_1(k), \cdots, U_n(k)$ is
$G^+$? </p>
<p>I feel the answer is positive but do not know how to prove it.
It seems that the ideas of the original paper of Borel and Tits can
help, but I still do not read French (which I always plan to learn) yet. </p>
| Lee Mosher | 20,787 | <p>The conjecture is not known for any other surfaces.</p>
<p>I'll throw out that there are other known recipes for constructing all pseudo-Anosov mapping classes using train track representatives; see for example my paper "The classification of pseudo-Anosovs", and the Bestvina-Handel paper "Train-tracks for surface homeomorphisms". However, none of these can be regarded as positive evidence for Penner's conjecture.</p>
|
101,157 | <p>Let $G$ be an algebraic group defined over a char 0
local field $k$. Following Borel and Tits (73) we define
the group $G^+(k)$ or $G^+$ by the subgroup of $G(k)$
generated by the unipotent elements of $G(k)$. </p>
<p>Suppose $G$ is generated by a finite set of unipotent
$k$-subgroups, say $U_1,\cdots, U_n$. Is it true that
the group generated by $U_1(k), \cdots, U_n(k)$ is
$G^+$? </p>
<p>I feel the answer is positive but do not know how to prove it.
It seems that the ideas of the original paper of Borel and Tits can
help, but I still do not read French (which I always plan to learn) yet. </p>
| Autumn Kent | 1,335 | <p>Shin and Strenner have shown that the conjecture is false when 3g + n > 4. </p>
<p>See <a href="http://arxiv.org/abs/1410.6974">http://arxiv.org/abs/1410.6974</a></p>
|
1,617,372 | <blockquote>
<p>The number of policies that an agent sells has a Poisson distribution
with modes at $2$ and $3$. $K$ is the smallest number such that the
probability of selling more than $K$ policies is less than 25%.
Calculate K.</p>
</blockquote>
<p>I know that the parameter lambda is $3$, of the Poisson distribution but I'm not sure how to calculate the integer $K$.</p>
<p>Correct answer: 4.</p>
| heropup | 118,193 | <p>Recall that the probability mass function of a Poisson-distributed random variable $X$ is $$\Pr[X = x] = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x = 0, 1, 2, \ldots.$$ If the mode is at $X = 2$ and $X = 3$, this means $$\Pr[X = 2] = \Pr[X = 3],$$ or $$e^{-\lambda} \frac{\lambda^2}{2!} = e^{-\lambda} \frac{\lambda^3}{3!},$$ or $3 \lambda^2 = \lambda^3$, or $\lambda = 3$ (since we require $\lambda > 0$). Then we sequentially compute $\Pr[X \le x]$ for $x = 0, 1, 2, \ldots$, until we find the first instance $X = K$ where this value is greater than $0.75$, thus implying that $\Pr[X > K] < 0.25$. Clearly, we have $$\Pr[X \le x] = \sum_{k=0}^x e^{-\lambda} \frac{\lambda^k}{k!},$$ so we just try it out. $$\Pr[X \le 0] \approx 0.0497871, \\ \Pr[X \le 1] \approx 0.199148, \\ \Pr[X \le 2] \approx 0.42319, \\ \Pr[X \le 3] \approx 0.647232, \\ \Pr[X \le 4] \approx 0.815263. $$ Therefore, $K = 4$ is the smallest value for which $\Pr[X > K] < 0.25$.</p>
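<p>The same search in a few lines of Python (standard library only); it reproduces the cumulative probabilities above:</p>

```python
from math import exp, factorial

lam = 3  # equal modes at 2 and 3 force lambda = 3

def pmf(k):
    return exp(-lam) * lam ** k / factorial(k)

cdf, K = 0.0, None
for k in range(20):
    cdf += pmf(k)
    if cdf > 0.75:        # then P[X > k] < 0.25
        K = k
        break

print("K =", K)           # K = 4
```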
|
1,617,372 | <blockquote>
<p>The number of policies that an agent sells has a Poisson distribution
with modes at $2$ and $3$. $K$ is the smallest number such that the
probability of selling more than $K$ policies is less than 25%.
Calculate K.</p>
</blockquote>
<p>I know that the parameter lambda is $3$, of the Poisson distribution but I'm not sure how to calculate the integer $K$.</p>
<p>Correct answer: 4.</p>
| ELO Petro | 573,265 | <p>Then that's completely wrong. If Pr[X>K]<0.25, and K = 4. The equation becomes Pr[X>4]. Which is the probability that X is greater than or equal to 5. However, if we let K=3, then Pr[X>3] is the probability that X is greater than or equal to 4. K=3 is the smallest number at which "selling more than K policies (aka. 4 or more) is less than 25%. </p>
|
1,522,062 | <p>Identify for which values of $x$ there is subtraction of nearly equal numbers, and find an alternate form that avoids the problem:
$$E = \frac{1}{1+x} - \frac{1}{1-x} = -\frac{2x}{1-x^2} = \frac{2x}{x^2-1} $$
How come $-2x/(1-x^2)$ can be changed to $2x/(x^2 - 1)$ according to the homework solutions? Why does the denominator change its digits/variables places? Shouldn't it be just $2x/(1-x^2)$? Thanks!</p>
| Ian Miller | 278,461 | <p>$\require{cancel}$
$$\frac{-2x}{1-x^2}=\frac{-1\times2x}{-1\times(-1+x^2)}$$
$$=\frac{\cancel{-1\times}2x}{\cancel{-1\times}(-1+x^2)}$$
$$=\frac{2x}{x^2-1}$$</p>
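<p>The identity $-2x/(1-x^2)=2x/(x^2-1)$ can also be spot-checked with exact rational arithmetic; a tiny Python sketch (sample points are arbitrary):</p>

```python
from fractions import Fraction

# Multiplying numerator and denominator by -1 leaves the value unchanged
for x in [Fraction(1, 3), Fraction(5, 2), Fraction(-7, 4)]:
    assert -2 * x / (1 - x ** 2) == 2 * x / (x ** 2 - 1)

print("equal at every sample point")
```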
|
2,158,981 | <p>How should i solve : $$\sum_{r=1}^n (2r-1)\cos(2r-1)\theta $$</p>
<p>I can solve $\sum_{r=1}^n cos(2r-1)\theta $ by considering
$\Re \sum_{r=1}^n z^{2r-1} $ and using summation of geometric series, but I can't seem to find a common geometric ratio when $ 2r-1 $ is involved in the summation.</p>
<p>Visually : $\sum_{r=1}^n z^{2r-1} = z +z^3+...+z^{2r-1}$ where the common ratio $ r= z^2 $ can easily be seen, but in the case of $\sum_{r=1}^n (2r-1)z^{2r-1} = z + 3z^3 + 5z^5 +...+ (2r-1)z^{2r-1}$, how should i solve this ? A hint would be appreciated.</p>
| NickD | 416,685 | <p>Here's a hint: $$ {d \over dz} z^n = n z^{n-1}$$. What do you get if you differentiate the geometric series term by term?</p>
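<p>In other words, $z\,\dfrac{d}{dz}\sum_{r=1}^n z^{2r-1} = \sum_{r=1}^n (2r-1)z^{2r-1}$. A short numerical Python check of this identity (a sketch; the values of $z$ and $n$ are arbitrary):</p>

```python
def direct(z, n):
    # sum over r of (2r - 1) z^(2r - 1)
    return sum((2 * r - 1) * z ** (2 * r - 1) for r in range(1, n + 1))

def via_derivative(z, n, h=1e-6):
    # z * d/dz of the plain sum of z^(2r - 1), derivative by central difference
    g = lambda w: sum(w ** (2 * r - 1) for r in range(1, n + 1))
    return z * (g(z + h) - g(z - h)) / (2 * h)

z, n = 0.7, 6
assert abs(direct(z, n) - via_derivative(z, n)) < 1e-6
print(direct(z, n))
```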
|
2,812,472 | <p>I am trying to show that</p>
<p>$$
\frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n!,
$$ for $x = 1$. I tried to prove it by induction but I failed because I lack axioms and rules for this type of derivatives. </p>
<p>Can someone give me a hint?</p>
| achille hui | 59,379 | <p>Apply <a href="https://en.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow noreferrer">General Leibniz rule</a></p>
<p>$$(fg)^{(n)}(x) = \sum_{k=0}^n \binom{n}{k} f^{(n-k)}(x) g^{(k)}(x)$$</p>
<p>to $f(x) = (x+1)^n$ and $g(x) = (x-1)^n$. Notice for $k \le n$, we have
$$g^{(k)}(x) = \frac{n!}{(n-k)!} (x-1)^{n-k} \implies g^{(k)}(1) = \begin{cases} 0, & k < n\\n! & k = n\end{cases}$$</p>
<p>We obtain</p>
<p>$$\left. (x^2-1)^{(n)} \right|_{x=1} = \binom{n}{n}f(1)g^{(n)}(1) = (1+1)^n n! = 2^n n!$$</p>
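<p>The result can be double-checked by exact integer arithmetic on the polynomial coefficients; a small Python sketch, separate from the argument above:</p>

```python
from math import comb, factorial

def nth_derivative_at_one(n):
    # (x^2 - 1)^n = sum_k C(n, k) (-1)^(n - k) x^(2k)  (binomial theorem)
    coeffs = {2 * k: comb(n, k) * (-1) ** (n - k) for k in range(n + 1)}
    # d^n/dx^n x^m = m!/(m - n)! x^(m - n) for m >= n; evaluate at x = 1
    return sum(c * factorial(m) // factorial(m - n)
               for m, c in coeffs.items() if m >= n)

for n in range(1, 8):
    assert nth_derivative_at_one(n) == 2 ** n * factorial(n)

print("verified for n = 1..7")
```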
|
2,812,472 | <p>I am trying to show that</p>
<p>$$
\frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n!,
$$ for $x = 1$. I tried to prove it by induction but I failed because I lack axioms and rules for this type of derivatives. </p>
<p>Can someone give me a hint?</p>
| Stefan4024 | 67,746 | <p>You can use Taylor's formula and expand $(x^2-1)^n$ around $x=1$; the coefficient of $(x-1)^n$ is then $\frac{d^n}{dx^n} (x^2-1)^n{\Big|_{x=1}} \cdot \frac{1}{n!}$.</p>
<p>We have $x^2 - 1 = (x-1)((x-1)+2)$ and so $(x^2-1)^n = (x-1)^n((x-1)+2)^n$. From here it's not hard to conclude that the coefficient in front of $(x-1)^n$ is $2^n$. You can use the Binomial Formula or explicitly just write out the $n$ factors and see that you have to multiply by $2$ in each one just to keep the exponent $n$ in the expansion. Hence we have that:</p>
<p>$$2^n = \frac{d^n}{dx^n} (x^2-1)^n{\Big|_{x=1}} \cdot \frac{1}{n!}$$</p>
<p>Hence the proof.</p>
|
128,784 | <p>Consider the two lists</p>
<pre><code>list1={1,2,a[1],8,b[4],9};
list2={8,b[4],9,1,2,a[1]};
</code></pre>
<p>it is evident by inspection that <code>list2</code> is just a cyclic rotation of <code>list1</code>. Considering an equivalence class of lists under cyclic rotations, I would like to have a function <code>cycRot[x_List]</code> that takes a list and returns a cyclically rotated representative of that list, which would be independent of the initial cyclic order of the list. Such that</p>
<pre><code>cycRot[list1]==cycRot[list2]
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
<p>is guaranteed (the exact resulting rotation is irrelevant as long as the function returns the same result for any cyclically equivalent list).
Is there such a function in Mathematica? Or maybe one can implement it efficiently? Thanks for any suggestion!</p>
| bill s | 1,783 | <p>Here is a function</p>
<pre><code>cyc[list_] := RotateLeft[list, First@Ordering[list, 1]]
</code></pre>
<p>For your lists:</p>
<pre><code>list1 = {1, 2, a[1], 8, b[4], 9};
list2 = {8, b[4], 9, 1, 2, a[1]};
cyc[list1] == cyc[list2]
True
</code></pre>
|
1,060,213 | <p>I have searched the site quickly and have not come across this exact problem. I have noticed that a Pythagorean triple <code>(a,b,c)</code> where <code>c</code> is the hypotenuse and <code>a</code> is prime, is always of the form <code>(a,b,b+1)</code>: The hypotenuse is one more than the non-prime side. Why is this so?</p>
| Will Jagy | 10,400 | <p>The only possibility is, with positive integers $r > s,$
$$ a = r^2 - s^2, $$
$$ b = 2rs, $$
$$ c = r^2 + s^2. $$</p>
<p>In order to have $a= (r-s)(r+s)$ prime, we must have $(r-s) = 1,$ or
$$ r=s+1. $$ So, in fact, we have
$$ a = 2s+1, $$
$$ b = 2s^2 + 2 s, $$
$$ c = 2s^2 + 2 s + 1. $$</p>
<p>There you go. </p>
|
1,060,213 | <p>I have searched the site quickly and have not come across this exact problem. I have noticed that a Pythagorean triple <code>(a,b,c)</code> where <code>c</code> is the hypotenuse and <code>a</code> is prime, is always of the form <code>(a,b,b+1)</code>: The hypotenuse is one more than the non-prime side. Why is this so?</p>
| iadvd | 189,215 | <p>After reading the great answers, just wanted to add one easy-to-follow rule that makes it possible to build a Pythagorean triple $(a,b,b+1)$ starting with any odd number $a \gt 3$, as follows:</p>
<p>$a$ is the original odd number</p>
<p>$b=\frac{a^2-1}{2}$</p>
<p>$c=b+1$</p>
<p>As that works for any odd number greater than $3$, the prime numbers greater than $3$ are also able to be found in a triplet of that shape.</p>
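<p>The rule is easy to verify mechanically; a minimal Python sketch (the sample values of $a$ are arbitrary, and include primes):</p>

```python
def triple_from_odd(a):
    # Recipe from the post: b = (a^2 - 1) / 2, c = b + 1, for odd a
    b = (a * a - 1) // 2
    return a, b, b + 1

for a in [3, 5, 7, 11, 13]:     # a = 3 gives the classic (3, 4, 5)
    x, y, z = triple_from_odd(a)
    assert x ** 2 + y ** 2 == z ** 2 and z == y + 1
    print((x, y, z))
```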
|
47,492 | <p>Consider a continuous irreducible representation of a compact Lie group on a finite-dimensional complex Hilbert space. There are three mutually exclusive options:</p>
<p>1) it's not isomorphic to its dual (in which case we call it 'complex')</p>
<p>2) it has a nondegenerate symmetric bilinear form (in which case we call it 'real')</p>
<p>3) it has a nondegenerate antisymmetric bilinear form (in which case we call it 'quaternionic')</p>
<p>It's 'real' in this sense iff it's the complexification of a representation on a real vector space, and it's 'quaternionic' in this sense iff it's the underlying complex representation of a representation on a quaternionic vector space.</p>
<p>Offhand, I know just <b>four compact Lie groups whose continuous irreducible representations on complex vector spaces are all either real or quaternionic in the above sense</b>:</p>
<p>1) the group Z/2 </p>
<p>2) the trivial group</p>
<p>3) the group SU(2)</p>
<p>4) the group SO(3)</p>
<p>Note that I'm so desperate for examples that I'm including 0-dimensional compact Lie groups, i.e. finite groups! </p>
<p>1) is the group of unit-norm real numbers, 2) is a group covered by that, 3) is the group of unit-norm quaternions, and 4) is a group covered by <i>that</i>. This probably explains why these are all the examples I know. For 1), 2) and 4), all the continuous irreducible representations are in fact real.</p>
<p><b>What are all the examples?</b> </p>
| Torsten Ekedahl | 4,008 | <p>An irreducible representation is real or quaternionic precisely when its
character is real-valued. By the Peter-Weyl theorem all characters are
real-valued precisely when every element in the group is conjugate to its
inverse. When the group is connected a more precise answer is as follows: The
Weyl group (in its tautological representation) must contain multiplication by
$-1$ and this is true precisely when all indecomposable root system factors have
that property. I don't remember off hand which indecomposable root systems have
this property but it is of course well known (type A is out, type B/C is in,
type D depends on the parity of the rank).</p>
<p><b>Addendum</b>: Found the relevant places in Bourbaki. All characters are real-valued precisely when the element he calls $w_0$ is $-1$ (Ch. VIII,Prop. 7.5.11) and one can also read off if a given representation is real or quaternionic (loc. cit. Prop 12). From the tables in Chapter 6 one gets that $w_0=-1$ precisely for $A_1$, B/C, D for even rank, $E_7$, $E_8$, $F_4$ and $G_2$.</p>
|
47,492 | <p>Consider a continuous irreducible representation of a compact Lie group on a finite-dimensional complex Hilbert space. There are three mutually exclusive options:</p>
<p>1) it's not isomorphic to its dual (in which case we call it 'complex')</p>
<p>2) it has a nondegenerate symmetric bilinear form (in which case we call it 'real')</p>
<p>3) it has a nondegenerate antisymmetric bilinear form (in which case we call it 'quaternionic')</p>
<p>It's 'real' in this sense iff it's the complexification of a representation on a real vector space, and it's 'quaternionic' in this sense iff it's the underlying complex representation of a representation on a quaternionic vector space.</p>
<p>Offhand, I know just <b>four compact Lie groups whose continuous irreducible representations on complex vector spaces are all either real or quaternionic in the above sense</b>:</p>
<p>1) the group Z/2 </p>
<p>2) the trivial group</p>
<p>3) the group SU(2)</p>
<p>4) the group SO(3)</p>
<p>Note that I'm so desperate for examples that I'm including 0-dimensional compact Lie groups, i.e. finite groups! </p>
<p>1) is the group of unit-norm real numbers, 2) is a group covered by that, 3) is the group of unit-norm quaternions, and 4) is a group covered by <i>that</i>. This probably explains why these are all the examples I know. For 1), 2) and 4), all the continuous irreducible representations are in fact real.</p>
<p><b>What are all the examples?</b> </p>
| Skip | 6,486 | <p>Torsten answered this question perfectly for the definition of real/complex/quaternionic in John's original question. But this usage of real/complex/quaternionic is foreign to my experience. Specifically, if you look at an irreducible real representation of a group, then its endomorphism ring is (by Schur and Frobenius) R, C, or H. And this seems to give a natural meaning of the terms "real", "complex" and "quaternionic" for irreps. This definition does not agree with John's, as you can see by considering the spin reps of Spin(7,1).</p>
<p>My definition is also what you find in Noah Snyder's answer <a href="https://mathoverflow.net/questions/17495/what-rings-groups-have-interesting-quaternionic-representations">here</a> and in Wikipedia's definition of quaternionic representation.</p>
|
1,689,853 | <p>If I have that $B_t$ is a standard brownian motion process, is $B_t^2 - \frac{t}{2}$ a martingale w.r.t. brownian motion? I know that $B_t^2 - t$ is but can't see it for the latter. </p>
| Math-fun | 195,344 | <p>\begin{align}
E(B_t^2-\frac t2 \Big|B_s)&=E((B_t-B_s+B_s)^2-\frac t2 \Big|B_s)\\
&=E((B_t-B_s)^2+B_s^2+2B_s(B_t-B_s)-\frac t2 \Big|B_s)\\
&=t-s+B_s^2-0-\frac t2 \\
&=B_s^2-\frac s2+\frac {t-s}2 \\
& \geq B_s^2-\frac s2
\end{align}</p>
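<p>So the conditional expectation exceeds $B_s^2-\frac s2$ whenever $t>s$, and the process cannot be a martingale. A quick Monte Carlo illustration in Python (sample sizes, seed, and tolerances are arbitrary choices) checks the unconditional version, $E[B_t^2 - t/2] = t/2$, which grows with $t$ instead of staying constant:</p>

```python
import random

random.seed(0)

def mean_at(t, trials=200_000):
    # Estimate E[B_t^2 - t/2] by sampling B_t ~ N(0, t)
    s = 0.0
    for _ in range(trials):
        b = random.gauss(0.0, t ** 0.5)
        s += b * b - t / 2
    return s / trials

for t in [1.0, 2.0, 4.0]:
    m = mean_at(t)
    assert abs(m - t / 2) < 0.1    # consistent with E[B_t^2] = t
    print(t, round(m, 3))
```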
|
3,688,151 | <blockquote>
<p>A necklace is made up of <span class="math-container">$3$</span> beads of one sort and <span class="math-container">$6n$</span> of another, those of each sort being similar. Show that the number of arrangement of the beads is <span class="math-container">$3n^2+3n +1$</span>.</p>
</blockquote>
<p><strong>My attempt</strong>:</p>
<p>There are total <span class="math-container">$6n+3$</span> beads. Also, it is a necklace, so their clockwise and anticlockwise arrangements are same. Hence, no. of their cyclic arrangements are given by,
<span class="math-container">$$\frac{1}{2}\frac{(6n+2)!}{3!\times 6n!}$$</span></p>
<p>This is obviously not what we want to prove. Where am I going wrong? </p>
<p>I found few questions on MSE related to this topic <a href="https://math.stackexchange.com/questions/706595/what-are-the-number-of-circular-arrangements-possible">Q.1</a>, <a href="https://www.google.com/url?sa=t&source=web&rct=j&url=https://math.stackexchange.com/questions/2690473/how-many-different-necklaces-can-be-formed-with-6-white-and-5-red-beads&ved=2ahUKEwjrwq2wtMnpAhWZyDgGHRvXBV0QFjABegQIBxAB&usg=AOvVaw07b2tsMZXoXwM3PDpPAnC2" rel="nofollow noreferrer">Q.2</a>, <a href="https://math.stackexchange.com/questions/3507865/circular-permutations-with-same-objects">Q.3</a> (I doubt this solutions are wrong) . But it didn't address my concern. Also, I'm a high school student and Burnside Lemma is out of my scope. Please help me with an alternate method. </p>
| Mike Earnest | 177,399 | <p>Let us say there are <span class="math-container">$3$</span> white beads and <span class="math-container">$6n$</span> black beads. A necklace is determined by the lengths of the three sections of black beads comprising the necklace. Call these lengths <span class="math-container">$a,b$</span> and <span class="math-container">$c$</span>, then the number of ways to choose three nonnegative integers <span class="math-container">$a,b,c$</span> which satisfy <span class="math-container">$a+b+c=6n$</span> is given by stars and bars to be
<span class="math-container">$$
\#\{(a,b,c):a+b+c=6n, \;\;a,b,c\in \mathbb Z_{\ge 0}\}=\binom{6n+3-1}{2}
$$</span>
However, this is overcounting the number of distinct necklaces, because rotation and reflection show that the necklaces <span class="math-container">$(a,b,c),(b,c,a),(a,c,b)$</span> are all the same, and the same holds for the other three permutations. To correct for this overcounting, we have to count the number of all possible patterns of <span class="math-container">$(a,b,c)$</span> separately<span class="math-container">$^*$</span>:</p>
<ul>
<li><p>The number of necklaces like <span class="math-container">$(a,a,a)$</span>, where all three lengths are equal, is just <span class="math-container">$\boxed{1}$</span>, as you need <span class="math-container">$a=2n$</span>. </p></li>
<li><p>The number of necklaces like <span class="math-container">$(a,a,b)$</span>, where <span class="math-container">$a\neq b$</span>, is the number of solutions to <span class="math-container">$2a+b=6n$</span>. Here, <span class="math-container">$a$</span> can be any number between <span class="math-container">$0$</span> and <span class="math-container">$3n$</span> inclusive, <em>except</em> for <span class="math-container">$2n$</span>, because this was counted in the previous case. Therefore, there are <span class="math-container">$3n$</span> necklaces in this case. </p></li>
<li><p>The number of necklaces like <span class="math-container">$(a,b,c)$</span>, where <span class="math-container">$a\neq b\neq c\neq a$</span>, is given by taking all <span class="math-container">$\binom{6n+2}{2}$</span> tuples <span class="math-container">$(a,b,c)$</span>, subtracting the <span class="math-container">$1+3\cdot 3n$</span> tuples counted in previous cases, and then dividing by <span class="math-container">$6$</span> to account for the fact that all permutations are the same. The number is<span class="math-container">$$
\frac16\Big(\binom{6n+2}{2}-1-3\cdot 3n\Big)
$$</span></p></li>
</ul>
<p>Adding all three of these together, you get <span class="math-container">$3n^2+3n+1$</span>.</p>
<p><span class="math-container">$^*$</span> A better way is to use Burnside's Lemma, but you said in the comments you wanted to avoid this.</p>
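<p>For small <span class="math-container">$n$</span> the formula can be confirmed by brute force in Python (this enumeration is our own check, not part of the argument): place the <span class="math-container">$3$</span> white beads among <span class="math-container">$6n+3$</span> positions and count orbits under rotation and reflection.</p>

```python
from itertools import combinations

def necklace_count(n):
    # Count placements of 3 white beads among 3 + 6n positions,
    # up to rotation and reflection (dihedral symmetry)
    L = 3 + 6 * n
    seen = set()
    count = 0
    for white in combinations(range(L), 3):
        base = tuple(1 if i in white else 0 for i in range(L))
        if base in seen:
            continue                       # already counted in an earlier orbit
        count += 1
        for r in range(L):                 # mark the whole dihedral orbit
            rot = base[r:] + base[:r]
            seen.add(rot)
            seen.add(rot[::-1])
    return count

for n in range(1, 4):
    assert necklace_count(n) == 3 * n ** 2 + 3 * n + 1

print([necklace_count(n) for n in range(1, 4)])   # [7, 19, 37]
```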
|
3,188,327 | <p>I derived the operator <span class="math-container">$\mathscr{L} = \dfrac{\partial}{\partial{x}} + u\dfrac{\partial}{\partial{y}}$</span> from the PDE <span class="math-container">$u_x + uu_y = 0$</span> in order to figure out whether it is linear. </p>
<p>The textbook solutions take the following steps in finding whether the operator is linear:</p>
<p><span class="math-container">$$\mathscr{L}(u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + (u + v)\dfrac{\partial{(u + v)}}{\partial{y}} = \dots = \mathscr{L}u + \mathscr{L}v + uv_y + vu_y,$$</span></p>
<p>which is obviously nonlinear. But I proceeded as follows:</p>
<p><span class="math-container">$$\mathscr{L}(u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + (u)\dfrac{\partial{(u + v)}}{\partial{y}} = \dots = \mathscr{L}u + \mathscr{L}v,$$</span></p>
<p>which obviously <em>is</em> linear.</p>
<p>I don't understand why the author let <span class="math-container">$u$</span> in the operator become <span class="math-container">$u = u + v$</span>? It seemed to me that, since <span class="math-container">$u$</span> is part of the operator, it should remain as <span class="math-container">$u$</span> and <em>not</em> <span class="math-container">$u + v$</span>?</p>
<p>I would greatly appreciate it if people could please take the time to clarify this. <em>Why</em> is it wrong? My point is that <span class="math-container">$u$</span> is part of the operator, and the operator is acting upon <span class="math-container">$(u + v)$</span>, so why is it that <span class="math-container">$\mathscr{L}(u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + (u + v)\dfrac{\partial{(u + v)}}{\partial{y}}$</span> instead of <span class="math-container">$\mathscr{L}(u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + (u)\dfrac{\partial{(u + v)}}{\partial{y}}$</span>?</p>
<p>EDIT: For future reference, I would also like to present another interesting example.</p>
<p>Take the operator <span class="math-container">$\mathscr{L} u = u_x + u_y + 1$</span>. Therefore, <span class="math-container">$\mathscr{L} = \dfrac{\partial}{\partial{x}} + \dfrac{\partial}{\partial{y}} + \dfrac{1}{u}$</span>. And so,</p>
<p><span class="math-container">$$\mathscr{L} (u + v) = \left( \dfrac{\partial}{\partial{x}} + \dfrac{\partial}{\partial{y}} + \dfrac{1}{u + v} \right) (u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + \dfrac{\partial{(u + v)}}{\partial{y}} + 1$$</span></p>
<p>EDIT2: </p>
<p>I think the derivation of <span class="math-container">$\mathscr{L}$</span> in the above edit is incorrect. Here is what I think the correct derivation is:</p>
<p>Take the operator <span class="math-container">$\mathscr{L} u = u_x + u_y + 1$</span>. Therefore, <span class="math-container">$\mathscr{L} = \dfrac{\partial}{\partial{x}} + \dfrac{\partial}{\partial{y}} + 1$</span>. And so,</p>
<p><span class="math-container">$$\mathscr{L} (u + v) = \dfrac{\partial{(u + v)}}{\partial{x}} + \dfrac{\partial{(u + v)}}{\partial{y}} + 1$$</span></p>
| Arctic Char | 629,362 | <p>Let me just say that </p>
<p><span class="math-container">$$\mathcal L = \frac{\partial}{\partial x} + u\frac{\partial }{\partial y}$$</span></p>
<p>is NOT the correct notation. The above implies that <span class="math-container">$u$</span> is a fixed function, and the operator acts as </p>
<p><span class="math-container">$$\mathcal L (f) = \frac{\partial f}{\partial x} + u \frac{\partial f}{\partial y}$$</span></p>
<p>for all differentiable functions <span class="math-container">$f$</span>.</p>
<p>To avoid confusion, your operator should be written as </p>
<p><span class="math-container">$$ \mathcal L' (f) = \frac{\partial f}{\partial x} + f \frac{\partial f}{\partial y}$$</span></p>
<p>and now it is clear why we have </p>
<p><span class="math-container">$$\mathcal L' (f_1+ f_2) = \frac{\partial}{\partial x} (f_1+f_2) + (f_1+f_2) \frac{\partial }{\partial y} (f_1+f_2).$$</span></p>
|
2,063,038 | <p>Let <span class="math-container">$S$</span> be the region in the plane that is inside the circle <span class="math-container">$(x-1)^2 + y^2 = 1$</span> and outside the circle <span class="math-container">$x^2 + y^2 = 1 $</span>. I want to calculate the area of <span class="math-container">$S$</span>.</p>
<h3>Try:</h3>
<p>First, the circles intersect when <span class="math-container">$x^2 = (x-1)^2$</span>, that is, when <span class="math-container">$x = 1/2$</span> and so <span class="math-container">$y =\pm \frac{ \sqrt{3} }{2} $</span>. So, using the washer method, we have</p>
<p><span class="math-container">$$Area(S) = \pi \int\limits_{- \sqrt{3}/2}^{ \sqrt{3}/2} [ (1+ \sqrt{1-y^2})^2 - (1-y^2) ] dy $$</span></p>
<p>Is this the correct setup for the area I'm looking for?</p>
| Michael R | 399,606 | <p>I would recommend using a polar coordinate system, i.e.</p>
<p>$$ x = r\cos\theta $$ </p>
<p>$$ y = r\sin\theta $$</p>
<p>which implies: $ x^2 + y^2 = r^2\sin^2\theta + r^2\cos^2\theta = r^2(\sin^2\theta + \cos^2\theta) = r^2 $.</p>
<p>To use this method, you must find the intersection points (as you have already done) in order to find the angle $ \theta $. If you substitute $ x = \frac{1}{2} $ or $ y = \pm \frac{\sqrt3}{2} $ into either of the above polar equations, you get $ \theta = \pm \frac{\pi}{3} $.</p>
<p>The general formula for finding the area under a curve using polar coordinates is:</p>
<p>$$ A = \int_{a}^{b} \frac{1}{2} r^2 d\theta $$</p>
<p>If you are interested in finding the area between two curves, the formula becomes:</p>
<p>$$ A = \int_{a}^{b} \frac{1}{2} \big( (r_{outer})^2 - (r_{inner})^2 \big)d\theta $$</p>
<p>In these formulas, $ a $ is the lower bound of the angle $ \theta $, and $ b $ is the upper bound for the angle $ \theta $. Finally, $ r $ is the radius equation that defines the region your integral is "sweeping over."</p>
<p>In this case, we have: $ A = \frac{1}{2} \int_{-\frac{\pi}{3}}^{\frac{\pi}{3}} \big( (r_{outer})^2 - (r_{inner})^2 \big)d\theta $</p>
<p>To find the outer radius and the inner radius, you must use the fact that $ x^2 + y^2 = r^2 $.</p>
<p><strong>Circle 1</strong> (outer circle $ \rightarrow (x-1)^2 + y^2 = 1 $ ):</p>
<p>$ (x-1)^2 + y^2 = 1 \rightarrow x^2 - 2x + 1 + y^2 = 1 \rightarrow x^2 + y^2 = 2x \rightarrow r^2 = 2r\cos\theta \rightarrow r = 2\cos\theta $</p>
<p><strong>Circle 2</strong> (inner circle $ \rightarrow x^2 + y^2 = 1 $ ): </p>
<p>$ x^2 + y^2 = 1 \rightarrow r^2 = 1 $</p>
<p>Putting this all together, we get:</p>
<p>$ A = \frac{1}{2} \int_{-\frac{\pi}{3}}^{\frac{\pi}{3}} \big( (r_{outer})^2 - (r_{inner})^2 \big)d\theta = \frac{1}{2} \int_{-\frac{\pi}{3}}^{\frac{\pi}{3}} \big( (2\cos\theta)^2 - (1)^2 \big)d\theta = \frac{1}{6} (3\sqrt3 + 2\pi) \approx 1.91 $</p>
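<p>A quick numerical sanity check of this final value (midpoint rule; the step count is an arbitrary choice):</p>

```python
from math import cos, sqrt, pi

# midpoint-rule approximation of A = (1/2) * integral of ((2 cos t)^2 - 1)
# over [-pi/3, pi/3]
n = 100_000
a, b = -pi / 3, pi / 3
h = (b - a) / n
area = sum(0.5 * ((2.0 * cos(a + (i + 0.5) * h)) ** 2 - 1.0) * h for i in range(n))

exact = (3.0 * sqrt(3.0) + 2.0 * pi) / 6.0   # closed form from above
```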
|
1,170,708 | <p>What functions satisfy $f(x)+f(x+1)=x$?</p>
<p>I tried but I do not know if my answer is correct.
$f(x)=y$</p>
<p>$y+f(x+1)=x$</p>
<p>$f(x+1)=x-y$</p>
<p>$f(x)=x-1-y$</p>
<p>$2y=x-1$</p>
<p>$f(x)=(x-1)/2$</p>
| kryomaxim | 212,743 | <p>Concluding from $f(x+1)=x-y$ that $f(x)=x-1-y$ is wrong: $y$ stands for $f(x)$ and therefore depends on $x$, so when you replace $x$ by $x-1$ you must change $y$ as well.</p>
<p>Try the Ansatz $f(x)=ax+b$ for some coefficients $a,b$ to solve this equation.</p>
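<p>Carrying out the ansatz gives $a=\tfrac12$, $b=-\tfrac14$; a quick numerical check (the sample points are arbitrary) confirms that $f(x)=\tfrac{x}{2}-\tfrac14$ works while the proposed $(x-1)/2$ does not:</p>

```python
# Ansatz f(x) = a*x + b: the equation f(x) + f(x+1) = x forces
# (2a - 1)*x + (a + 2b) = 0 for all x, hence a = 1/2 and b = -1/4.
def f(x):
    return 0.5 * x - 0.25

def candidate(x):            # the attempt from the question: (x - 1)/2
    return (x - 1.0) / 2.0

pts = (-3.7, 0.0, 1.0, 2.5)
ok = all(abs(f(x) + f(x + 1) - x) < 1e-12 for x in pts)
bad = all(abs(candidate(x) + candidate(x + 1) - x) < 1e-12 for x in pts)
```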
|
1,734,819 | <p>I think I'm on the right track with constructing this proof. Please let me know.</p>
<p>Claim: Prove that there exists a unique real number $x$ between $0$ and $1$ such that
$x^{3}+x^{2} -1=0$</p>
<p>Using the intermediate value theorem we get
$$r^{3}+r^{2}-1=c^{3}+c^{2}-1$$
......
$$r^{3}+r^{2}-c^{3}-c^{2}=0$$</p>
<p>Factoring gives us</p>
<p>$$(r-c)[(r^{2}+rc+c^{2})+(r+c)]=0$$
I'm lost now. How do I prove that there exists a number between $0$ and $1$.</p>
| Vik78 | 304,290 | <p>You want an $x$ in $(0, 1)$ such that $x^2 + x = 1/x$. Note that as $x$ goes to zero $x^2 + x < 1/x,$ and as $x$ goes to one $x^2 + x > 1/x$. Since both curves are continuous they must intersect at least once on $(0, 1),$ and since $x^2 + x$ is increasing on that interval whereas $1/x$ is decreasing they intersect exactly once.</p>
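<p>For a numerical illustration, bisection on $g(x)=x^3+x^2-1$ (an equivalent form of the equation) locates the unique root in $(0,1)$:</p>

```python
# bisection on g(x) = x^3 + x^2 - 1; g(0) = -1 < 0 < 1 = g(1),
# and g is increasing on (0, 1), so the root is unique
def g(x):
    return x**3 + x**2 - 1.0

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid

root = 0.5 * (lo + hi)   # approximately 0.7549
```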
|
1,786,421 | <p>I have the following equality to prove. </p>
<p>Given $X \sim Bin(n, p)$ and $Y \sim Bin(n, 1 - p)$ prove that $P(X \leq k) = P(Y \geq n - k)$. I have been trying to come up with a solution but cannot find one. I am looking for suggestions and not a complete answer as this is a homework question.</p>
<p>What I did until now is the following:</p>
<p>$P(X \leq k) = \sum_{i = 0}^{k} \binom{n}{i} p^i (1 - p)^{n - i} = 1 - \sum_{i = k + 1}^{n} \binom{n}{i} p^i (1 - p)^{n - i} = \sum_{j = 0}^{n - k - 1} \binom{n}{j + k + 1} p^{j + k + 1} (1 - p)^{n - j - k - 1}$ </p>
<p>Here, $j = i - k - 1$. </p>
| peterwhy | 89,922 | <p>$$\sum_{i=0}^k\binom{n}{i}p^i(1-p)^{n-i} = \sum_{j=n-k}^{n}\binom{n}{n-j}(1-1+p)^{n-j}(1-p)^j$$
where $j=n-i$, or $i=n-j$.</p>
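<p>A numerical spot-check of the identity (the values $n=10$, $p=0.3$ are arbitrary):</p>

```python
from math import comb

def p_le(n, p, k):           # P(X <= k) for X ~ Bin(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def p_ge(n, q, m):           # P(Y >= m) for Y ~ Bin(n, q)
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(m, n + 1))

n, p = 10, 0.3
gaps = [abs(p_le(n, p, k) - p_ge(n, 1 - p, n - k)) for k in range(n + 1)]
```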
|
174,075 | <p>What is the difference when a line is said to be normal to another and a line is said to be perpendicular to other?</p>
| S Jagdish | 477,526 | <p>I agree with the other answers: "normal" is mainly used in 3-D (a line and a surface), while "perpendicular" is mainly used in 2-D (two lines in one plane). </p>
<p>That is why we say a line is normal to a surface but perpendicular to another line.</p>
|
1,307,280 | <p>I am at a point X. I am 2 blocks up from a point A and 3 blocks down from my home H. Every time I walk one block I flip a coin.</p>
<pre>H
.
.
.
X
.
.
A</pre>
<p>If the coin shows heads I go one block up, and if it shows tails I go one block down.</p>
<p>What is the probability of arriving home before reaching point A?</p>
<hr>
<p>What I really want to do is to solve that problem in a recursive way. Maybe it can be solved with a binomial distribution... But is it also recursive?</p>
| Community | -1 | <p>Probably a better intuitive definition is $f(x)$ can be made arbitrarily close to $L$ by making $x$ close enough to $a$.</p>
<p>You avoid the awkwardness about the constant case. Additionally this emphasizes that it is for ALL $\epsilon$. It's not just that $f(x)$ gets "closer", it's that it can be made as close as you'd like.</p>
<p>As a counterexample to "gets closer", consider $\lim\limits_{x\to 0} -x^2$. I could propose that $\lim\limits_{x\to 0} -x^2 = 1$, because as $x$ gets closer to $0$, $-x^2$ gets closer to $1$. However, it never gets within $\frac12$ of $1$.</p>
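<p>A tiny numerical illustration (sample points mine): the distances to $1$ shrink, yet never drop below $1$:</p>

```python
# distances from -x^2 to the (false) "limit" 1 as x -> 0
xs = [1.0, 0.5, 0.1, 0.01, 0.001]
dists = [abs(-x * x - 1.0) for x in xs]
shrinking = dists == sorted(dists, reverse=True)   # values do get closer to 1 ...
never_close = all(d >= 1.0 for d in dists)         # ... but never within 1/2 of it
```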
|
1,307,280 | <p>I am at a point X. I am 2 blocks up from a point A and 3 blocks down from my home H. Every time I walk one block I flip a coin.</p>
<pre>H
.
.
.
X
.
.
A</pre>
<p>If the coin shows heads I go one block up, and if it shows tails I go one block down.</p>
<p>What is the probability of arriving home before reaching point A?</p>
<hr>
<p>What I really want to do is to solve that problem in a recursive way. Maybe it can be solved with a binomial distribution... But is it also recursive?</p>
| CiaPan | 152,299 | <p>The 'working definition' is actually NOT working and you should NOT use it. To make it work, replace it with </p>
<blockquote>
<p>$f$ gets <strong>arbitrarily</strong> close to $L$ if $x$ <em>sufficiently</em> approaches $a$</p>
</blockquote>
<p>where 'gets arbitrarily close' does not mean just $f$ <em>may</em> get closer to $L$ but rather that it certainly <em>will not get apart</em> from $L$ farther than any arbitrarily set distance.</p>
<p>In other words</p>
<blockquote>
<p>$f$ gets <strong>arbitrarily</strong> close to $L$ and <strong>stays there</strong></p>
</blockquote>
|
1,906,013 | <blockquote>
<p>Let $f$ be a smooth function such that $f'(0) = f''(0) = 1$. Let $g(x) = f(x^{10})$. Find $g^{(10)}(x)$ and $g^{(11)}(x)$ when $x=0$.</p>
</blockquote>
<p>I tried applying chain rule multiple times:</p>
<p>$$g'(x) = f'(x^{10})(10x^9)$$</p>
<p>$$g''(x) = \color{red}{f'(x^{10})(90x^8)}+\color{blue}{(10x^9)f''(x^{10})}$$</p>
<p>$$g^{(3)}(x)=\color{red}{f'(x^{10})(720x^7) + (90x^8)f''(x^{10})(10x^9)}+\color{blue}{(10x^9)(10x^9)f^{(3)}(x^{10})+f''(x^{10})(90x^8)}$$</p>
<p>The observation here is that, each time we take derivative, one "term" becomes two terms $A$ and $B$, where $A$ has power of $x$ decreases and $B$ has power of $x$ increases. $A$ parts will become zero when evaluated at zero, but what about $B$ parts?</p>
| Domates | 8,065 | <p>\begin{align*}
g'(x) &= 10x^9f'(x^{10})\\
g''(x) &= 10\cdot 9\,x^8f'(x^{10})+10^2x^{18}f''(x^{10})\\
& \vdots\\
g^{(9)}(x)&= 10\cdot 9\cdots 2\,x\,f'(x^{10})+p(x)\\
g^{(10)}(x)&= 10!\,f'(x^{10})+q(x)\\
g^{(11)}(x)&= 10!\cdot 10x^9f''(x^{10})+r(x)
\end{align*}
where $p(x)$, $q(x)$ and $r(x)$ are sums of terms of the form $c\,x^mf^{(j)}(x^{10})$ with $m\geq 1$, so they all vanish at $x=0$. Consequently $g^{(10)}(0)=10!\,f'(0)=10!$ and $g^{(11)}(0)=0$.</p>
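<p>As a sanity check, take the hypothetical concrete choice $f(t) = t + t^2/2$ (so $f'(0)=f''(0)=1$) and differentiate $g(x)=f(x^{10})=x^{10}+x^{20}/2$ formally as a polynomial:</p>

```python
from math import factorial

# g(x) = x^10 + x^20/2, stored as {power: coefficient}
g = {10: 1.0, 20: 0.5}

def deriv(poly):
    # formal derivative of a polynomial in coefficient-dict form
    return {k - 1: k * c for k, c in poly.items() if k >= 1}

d = g
for _ in range(10):
    d = deriv(d)
g10_at_0 = d.get(0, 0.0)           # 10th derivative at x = 0
g11_at_0 = deriv(d).get(0, 0.0)    # 11th derivative at x = 0
```

<p>This matches the Taylor-coefficient count: $g$ has an $x^{10}$ term with coefficient $f'(0)=1$ and no $x^{11}$ term.</p>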
|
1,494,167 | <p>Using only addition, subtraction, multiplication, division, and "remainder" (modulo), can the absolute value of any integer be calculated?</p>
<p>To be explicit, I am hoping to find a method that does not involve a piecewise function (i.e. branching, <code>if</code>, if you will.)</p>
| Lasoloz | 283,199 | <p>I think you need this: <a href="https://stackoverflow.com/questions/9772348/get-absolute-value-without-using-abs-function-nor-if-statement">https://stackoverflow.com/questions/9772348/get-absolute-value-without-using-abs-function-nor-if-statement</a>.</p>
<p>Please note that "remainder" refers to the modulo operation, while "absolute value" is the modulus; the question's wording should be corrected accordingly, but I can't suggest an edit.</p>
|
3,605,368 | <p>Imagine a <span class="math-container">$9 \times 9$</span> square array of pigeonholes, with one pigeon in each pigeonhole. Suppose that all at once, all the pigeons move up, down, left, or right by one hole. (The pigeons on the edges are not allowed to move out of the array.) Show that some pigeonhole winds up with two pigeons in it.</p>
<p>Let each side of the square be <span class="math-container">$n$</span>. There are <span class="math-container">$n^2$</span> pigeons and pigeonholes. If the pigeons are shifted in any direction, then there will be <span class="math-container">$n$</span> empty pigeonholes on the side opposite to the direction. Furthermore, now <span class="math-container">$n^2$</span> pigeons are trying to fit into <span class="math-container">$n^2 - n$</span> pigeonholes. We can invoke the pigeonhole principle as follows: Let the entire set of pigeons be <span class="math-container">$X$</span> and the set of pigeonholes to be populated after the shift be <span class="math-container">$Y$</span>. If <span class="math-container">$|X| > k|Y|$</span> for some integer <span class="math-container">$k$</span> and <span class="math-container">$f: X \to Y$</span>, then some <span class="math-container">$k+1$</span> elements <span class="math-container">$x_1, \dots, x_{k+1}$</span> of <span class="math-container">$X$</span> satisfy <span class="math-container">$f(x_1) = \cdots = f(x_{k+1})$</span>.</p>
<p>So, <span class="math-container">$81 > 72 k$</span> which means <span class="math-container">$k > 1.125$</span> which means <span class="math-container">$k = 2$</span>. This means that there are at least <span class="math-container">$3$</span> instances with <span class="math-container">$2$</span> pigeons in it.</p>
<p>Now intuitively I know there ought to be <span class="math-container">$9$</span> instances. Where did I go wrong? Forgive me if I have butchered the whole thing. I am new to this type of math. </p>
| David G. Stork | 210,401 | <p>Hint: This figure suggests why you cannot do it with <span class="math-container">$9 \times 9$</span> but (modified) shows you <em>can</em> with <span class="math-container">$8 \times 8$</span>.</p>
<p><a href="https://i.stack.imgur.com/uF0jT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uF0jT.png" alt="enter image description here"></a></p>
|
38,915 | <p>It has to be the silliest question, but it’s not clear to me how to calculate eigenvectors quickly. I am just talking about a very simple 2-by-2 matrix.</p>
<p>When I have already calculated the eigenvalues from a characteristic polynomial, I can start to solve the equations with $A\mathbf{v}_1 = e_1\mathbf{v}_1$ and $A\mathbf{v}_2 = e_2\mathbf{v}_2$, but in this case it always requires writing lines of equations and solving them.</p>
<p>On the other hand I figured out that just by looking at the matrix you can come up with the eigenvectors very quickly. But I'm a bit confused in this part.</p>
<p>When you have the matrix with subtracted $e_1$ values like this:
$$\left(\begin{array}{cc}
A&B\\
C&D
\end{array}\right).$$</p>
<p>Then for me, it always worked to use the eigenvector.
$$\left(\begin{array}{r}-B\\A\end{array}\right)$$</p>
<p>But in some guides I find that they are using A C as an eigenvector.
$$\left(\begin{array}{c}A\\C
\end{array}\right).$$</p>
<p>And when I check it, they are indeed multiples of each other. But this other method is not clear to me, how could $A$ and $C$ mean anything about the eigenvector, when both of them are connected to $x$, without having to do anything with $y$. But it’s still working. Was it just a coincidence?</p>
<p>So is the recommended method for calculating them is just to subtract the eigenvalues from the matrix and look at
$$\left(\begin{array}{r}
-B\\A\end{array}\right)\qquad\text{or}\qquad\left(\begin{array}{r}
-D\\A
\end{array}\right).$$</p>
| Arturo Magidin | 742 | <p>(Note that you are using $A$ for two things in your post: it is the original matrix, and then it's an entry of the matrix; that's very bad form. and likely to lead to confusion; never use the same symbol to represent two different things).</p>
<p>So, if I understand you: you start with a matrix $\mathscr{A}$,
$$\mathscr{A} = \left(\begin{array}{cc}
a_{11} & a_{12}\\
a_{21} & a_{22}
\end{array}\right).$$</p>
<p>Then, if you know that $e_1$ is an eigenvalue, then you look at the matrix you get when you subtract $e_1$ from the diagonal:
$$\left(\begin{array}{cc}
a_{11}-e_1 & a_{12}\\
a_{21} & a_{22}-e_1
\end{array}\right) = \left(\begin{array}{cc}A&B\\C&D
\end{array}\right).$$</p>
<p>Now, the key thing to remember is that, because $e_1$ is an eigenvalue, that means that the matrix is <em>singular</em>: an eigenvector corresponding to $e_1$ will necessarily map to $\mathbf{0}$. That means that the determinant of this matrix is equal to $0$, so $AD-BC=0$.</p>
<p>Essentially: one of the rows of the matrix is a multiple of the other; one of the columns is a multiple of the other. </p>
<p>What this means is that the vector $\left(\begin{array}{r}-B\\A\end{array}\right)$ is mapped to $0$: because
$$\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\left(\begin{array}{r}-B\\A\end{array}\right) = \left(\begin{array}{c}-AB+AB\\-BC+AD \end{array}\right) = \left(\begin{array}{c}0\\0\end{array}\right)$$
because $AD-BC=0$. If $A$ and $B$ are not both zero, then this gives you an eigenvector.</p>
<p>If both $A$ and $B$ are zero, though, this method does not work because it gives you the zero vector. In that case, the matrix you are looking at is
$$\left(\begin{array}{cc}0&0\\C&D
\end{array}\right).$$
One vector that is mapped to zero is $\left(\begin{array}{r}-D\\C\end{array}\right)$; that gives you an eigenvalue unless $C$ and $D$ are both zero as well, in which case any vector will do. </p>
<p>On the other hand, what about the vector $\left(\begin{array}{r}A\\C\end{array}\right)$? That vector is mapped to a multiple of itself by the matrix:
$$\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\left(\begin{array}{c}A\\C\end{array}\right) = \left(\begin{array}{c}A^2 + BC\\AC+DC\end{array}\right) = \left(\begin{array}{c}A^2+AD\\AC+DC\end{array}\right) = (A+D)\left(\begin{array}{c}A\\C\end{array}\right).$$
<strong>However,</strong> you are looking for a vector that is mapped to $\left(\begin{array}{c}0\\0\end{array}\right)$ by $\left(\begin{array}{cc}A&B\\C&D\end{array}\right)$, <em>not</em> for a vector that is mapped to a multiple of itself by this matrix. </p>
<p>Now, if your <em>original</em> matrix has zero determinant already, and you haven't subtracted the eigenvalue from the diagonal, then here's why this will work: the sum of the two eigenvalues of the matrix equals the trace, and the product of the two eigenvalues equals the determinant. Since the determinant is $0$ under this extra assumption, then one of the eigenvalues is $0$, so the other eigenvalue equals the trace, $a_{11}+a_{22}$. In this case, the vector $\left(\begin{array}{c}A\\C\end{array}\right)$ is an eigenvector of $\mathscr{A}$ corresponding to $a_{11}+a_{22}$, unless $A=C=0$ (in which case $\left(\begin{array}{r}a_{22}\\a_{11}\end{array}\right)$ is an eigenvector unless $\mathscr{A}$ is the zero matrix). </p>
<p><strong>Added.</strong> As Robert Israel points out, though, there is another point here. Remember that $e_1+e_2 = a_{11}+a_{22}$, $A=a_{11}-e_1 = e_2-a_{22}$; and that $a_{11}a_{22}-a_{12}a_{21} = e_1e_2$; if we take the vector $\left(\begin{array}{c}A\\C\end{array}\right)$ with the <em>original</em> matrix $\mathscr{A}$, we have:
$$\begin{align*}
\left(\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22}\end{array}\right)\left(\begin{array}{c}A\\C\end{array}\right) &= \left(\begin{array}{c}
a_{11}A + a_{12}C\\a_{21}A + a_{22}C.\end{array}\right)\\
&= \left(\begin{array}{c}
a_{11}(e_2-a_{22}) + a_{12}a_{21}\\
a_{21}(e_2-a_{22}) + a_{22}a_{21}
\end{array}\right) = \left(\begin{array}{c}
e_2a_{11} + (a_{12}a_{21}-a_{11}a_{22})\\
e_2a_{21} + (a_{22}a_{21} - a_{22}a_{21})
\end{array}\right)\\
&= \left(\begin{array}{c}
e_2a_{11} - e_1e_2\\
e_2a_{21}
\end{array}\right) = \left(\begin{array}{c}
e_2(a_{11}-e_1)\\e_2a_{21}
\end{array}\right)\\
&= e_2\left(\begin{array}{c}
a_{11}-e_1\\a_{21}
\end{array}\right) = e_2\left(\begin{array}{c}A\\C
\end{array}\right).
\end{align*}$$
So if $A$ and $C$ are not both zero, then $\left(\begin{array}{c}A\\C\end{array}\right)$ is an eigenvector for the <strong>other</strong> eigenvalue of $\mathscr{A}$. </p>
<p>To summarize:</p>
<ol>
<li><p>If you subtract the eigenvalue from the diagonal, <strong>and</strong> in the <em>resulting</em> matrix the first row is not equal to $0$, then your method will produce an eigenvector corresponding to the eigenvalue you subtracted.</p></li>
<li><p>If you subtract the eigenvalue from the diagonal, <strong>and</strong> in the resulting matrix the first <em>column</em> is not equal to $0$, then taking that column will produce an eigenvector corresponding to the <strong>other</strong> eigenvalue of $\mathscr{A}$ (<em>not</em> the one you subtracted). </p></li>
</ol>
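<p>Here is a small numeric illustration of points 1 and 2, using the hypothetical matrix $\mathscr{A}=\left(\begin{array}{cc}2&1\\1&2\end{array}\right)$ with eigenvalues $3$ and $1$:</p>

```python
# example matrix with eigenvalues e1 = 3 and e2 = 1
a11, a12, a21, a22 = 2.0, 1.0, 1.0, 2.0
e1 = 3.0
A, B, C, D = a11 - e1, a12, a21, a22 - e1   # subtract e1 from the diagonal

def apply(v):
    # multiply the ORIGINAL matrix by the vector v
    return (a11 * v[0] + a12 * v[1], a21 * v[0] + a22 * v[1])

v = (-B, A)               # point 1: eigenvector for the subtracted eigenvalue e1
w = apply(v)
e2 = (a11 + a22) - e1     # trace minus e1 gives the other eigenvalue
u = (A, C)                # point 2: eigenvector for the OTHER eigenvalue e2
z = apply(u)
```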
|
70,500 | <p>I am trying to find $\cosh^{-1}1$. I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$ which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do as anything I do gives me an incredibly complicated problem or the wrong answer.</p>
| John Alexiou | 3,301 | <p>start with</p>
<p>$$\cosh(y)=x$$</p>
<p>since</p>
<p>$$\cosh^2(y)-\sinh^2(y)=1$$ or $$x^2-\sinh^2(y)=1$$</p>
<p>then</p>
<p>$$\sinh(y)=\sqrt{x^2-1},$$</p>
<p>taking the nonnegative square root since the principal value has $y\ge 0$.</p>
<p>now add $\cosh(y)=x$ to both sides to make</p>
<p>$$\sinh(y)+\cosh(y) = \sqrt{x^2-1} + x $$</p>
<p>whose left-hand side simplifies to $e^y$,</p>
<p>so the answer is $$y=\ln\left(\sqrt{x^2-1}+x\right)$$</p>
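<p>A quick numerical cross-check of this closed form against the library implementation (the sample points are arbitrary):</p>

```python
from math import cosh, log, sqrt, acosh

def arcosh(x):
    # the closed form derived above: ln(x + sqrt(x^2 - 1))
    return log(x + sqrt(x * x - 1.0))

pts = (1.0, 1.5, 2.0, 10.0)
max_err = max(abs(arcosh(x) - acosh(x)) for x in pts)       # vs math.acosh
round_trip = max(abs(cosh(arcosh(x)) - x) for x in pts)     # cosh undoes arcosh
at_one = arcosh(1.0)                                        # the OP's case x = 1
```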
|
70,500 | <p>I am trying to find $\cosh^{-1}1$. I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$ which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do as anything I do gives me an incredibly complicated problem or the wrong answer.</p>
| Christian Blatter | 1,303 | <p>You have found out that the unknown $y$ satisfies the equation $e^y+e^{-y}=2$. Multiply by $e^y$ and rearrange terms. You then get
$$e^{2y}-2e^y+1=0\ .$$
Now use the following trick: Put $e^y=:u$ with a new unknown $u$. This $u$ has to satisfy the quadratic equation
$$u^2-2u+1=0\ ,\quad{\rm i.e.,}\quad (u-1)^2=0\ .$$
The last equation has the unique solution $u=1$. The corresponding $y$ therefore satisfies the equation $e^y=1$, and there is only one such real $y$, namely $y=0$.</p>
<p>All in all we have shown that $\cosh^{-1}(1)=0$, which is corroborated by the fact that conversely $\cosh(0)={1\over2}(e^0+e^{-0})=1$. </p>
|
389,425 | <blockquote>
<p>What kind of math topics exist?</p>
</blockquote>
<p>The question says everything I want to know, but for more details: I enjoy studying mathematics but the problem is that I can't find any information with a summary of all math topics, collected together. I also googled this and took a look at other websites and searched this website but without success.</p>
<p>So if someone knows most of the topics, then please let me know them. The topics I am looking for are the ones from basics, to the university, and beyond university limit. Any comprehensive information you can provide would be useful.</p>
<p>Thanks in advance, and have a nice day.</p>
| nitrous2 | 30,074 | <p>You might find the "Princeton Companion to Mathematics" helpful.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0691118809" rel="nofollow">http://www.amazon.com/Princeton-Companion-Mathematics-Timothy-Gowers/dp/0691118809</a></p>
|
1,090,658 | <p>I'm doing some previous exams sets whilst preparing for an exam in Algebra.</p>
<p>I'm stuck with doing the below question in a trial-and-error manner:</p>
<p>Find all $ x \in \mathbb{Z}$ where $ 0 \le x \lt 11$ that satisfy $2x^2 \equiv 7 \pmod{11}$</p>
<p>Since 11 is prime (and therefore not composite), the Chinese Remainder Theorem is of no use? I also thought about quadratic residues, but they don't seem to solve the question in this case.</p>
<p>Thanks in advance</p>
| DeepSea | 101,504 | <p>$2x^2 - 7 \equiv 2x^2 + 4 - 11 \equiv 2(x^2+2) \pmod{11}$. Since $\gcd(2,11)=1$, this forces $x^2 + 2 \equiv 0 \pmod{11}$, i.e. $x^2 \equiv -2 \equiv 9 \pmod{11}$. Hence $(x-3)(x+3) \equiv 0 \pmod{11}$, so $x \equiv 3 \pmod{11}$ or $x \equiv -3 \equiv 8 \pmod{11}$. Since $0\leq x < 11$, the solutions are $x = 3, 8$.</p>
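<p>A brute-force check over all residues in the given range confirms the two solutions:</p>

```python
# test every residue 0 <= x < 11 against 2x^2 = 7 (mod 11)
sols = [x for x in range(11) if (2 * x * x - 7) % 11 == 0]
```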
|
3,021,631 | <p>I've been strongly drawn recently to the matter of the fundamental definition of the exponential function, & how it connects with its properties such as the exponential of a sum being the product of the exponentials, and it's being the eigenfunction of simple differentiation, etc. I've seen various posts inwhich clarification or demonstration or proof of such matters as how such & such a property <em>proceeds</em> from its definition as <span class="math-container">$$e^z\equiv\lim_{k\rightarrow\infty}\left(1+\frac{z}{k}\right)^k .$$</span> I'm looking at how there is a <em>web</em> of interconnected theorems about this; and I am trying to <em>spin</em> the web <em>as a whole</em>. This is not necessary for proving <em>some particular item to be proven</em> - a path along some one thread or sequence of threads is sufficient for that; but I think the matter becomes 'unified', and by reason of that actually <em>simplified</em>, when the web is perceived as an entirety. This is why I <em>bother</em> with things like combinatorial demonstrations of how the terms in a binomial expansion <em>evolve towards</em> the terms in a Taylor series as some parameter tends to ∞, when the matter at hand is actually susceptible of a simpler proof by taking the logarithm of both sides ... & other seeming redundancies.</p>
<p>To this end, another 'thread' I am looking at is that of showing that the coefficients in the Taylor series for <span class="math-container">$\ln(1+z)$</span> actually are a <em>consequence</em> of the requirement that the logarithm of a product be the sum of the logarithms, and ... if not quite a combinatorial <em>derivation</em> of them <em>from</em> that requirement, at least a <em>reverse- (or sideways-) engineering <strong>equivalent</em></strong> of it - the combinatorially showing that if the coefficients <em>be plugged into</em> the Taylor series, then the property follows</p>
<p>Taking the approach that <span class="math-container">$$(1+x)(1+y) = 1+x+y+xy ,$$</span> plugging <span class="math-container">$z=x+y+xy$</span> into the series for <span class="math-container">$\ln(1+z)$</span> and hoping that all non-fully-homogeneous terms cancel out, leaving sum of the series for <span class="math-container">$\ln(1+x)$</span> & that for <span class="math-container">$\ln(1+y)$</span>, we are left with proving that</p>
<p><span class="math-container">$$\sum_{k=1}^\infty\left((-1)^k(k-1)!×\sum_{p\inℕ_0,q\inℕ_0,r\inℕ_0,p+q+r=k}\frac{x^{p+r}y^{q+r}}{p!q!r!}\right)$$</span><span class="math-container">$$=$$</span><span class="math-container">$$\sum_{k=1}^\infty(-1)^k\frac{x^k+y^k}{k} .$$</span></p>
<p>The <em>inner</em> sum of the LHS of this quite appalling-looking theorem is the trinomial expansion of <span class="math-container">$(x+y+xy)^k$</span> for arbitrary <span class="math-container">$k$</span>, & the outer sum is simply the logarithm Taylor series expansion (with its <span class="math-container">$k$</span> in the denominator 'absorbed' into the combinatorial <span class="math-container">$k!$</span> in the numerator of the inner sum) . This theorem can be quite easily verified by 'brute force' - simply <em>doing</em> the expansions at the first (very!) few terms; but the labour of it escalates <em>extremely</em> rapidly. An algebraic manipulation package would no-doubt verify it at a good few more terms; but what I am looking-for is a showing of the fully general case: but I do not myself have the combinatorial toolage for accomplishing this. </p>
<p>So I am asking whether anyone can show me an outline of what to do ... or even actually <em>do</em> it for me, although that would probably take up a <em>very</em> great deal of space and be an <em>extremely</em> laborious task for the person doing it ... so I'm content to ask for just an outline of doing it, or for some 'signposts' as to how to do it - maybe someone knows some text on this kind of thing that they would recommend.</p>
| Ira Gessel | 437,380 | <p>Your identity is a special case of a well-known binomial coefficient identity called Vandermonde's theorem:
<span class="math-container">$$\sum_{r=0}^m \binom a{m-r}\binom nr = \binom {a+n}{m}.$$</span>
This is an identity of polynomials in <span class="math-container">$a$</span> and <span class="math-container">$n$</span>, where <span class="math-container">$m$</span> is a nonnegative integer.
(The binomial coefficient <span class="math-container">$\binom br$</span> is defined for general <span class="math-container">$b$</span> by
<span class="math-container">$\binom br = b(b-1)\cdots (b-r+1)/r!$</span> where <span class="math-container">$r$</span> is a nonnegative integer.) You can easily find many proofs of Vandermonde's theorem on the internet, so I won't give a reference here.</p>
<p>We also need <span class="math-container">$\binom{b}{r} = (-1)^r \binom{-b+r-1}{r}$</span>, an identity of polynomials in <span class="math-container">$b$</span> that follows easily from the definition of binomial coefficients.</p>
<p>Your identity is equivalent to
<span class="math-container">$$\sum_{r=0}^m(-1)^{m-r}\binom{m+n-r-1}{m-r}\binom nr = 0.$$</span>
By the second identity above, the sum is equal to
<span class="math-container">$$\sum_{r=0}^m\binom{-n}{m-r}\binom nr.$$</span>
By Vandermonde's theorem this is <span class="math-container">$\binom 0m$</span>, which is <span class="math-container">$0$</span> for <span class="math-container">$m>0$</span>.</p>
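<p>A numerical spot-check of Vandermonde's theorem on nonnegative integers (the range $0$ to $7$ is an arbitrary choice; <code>math.comb(a, k)</code> returns $0$ when $k > a$, matching the convention here):</p>

```python
from math import comb

# sum_{r=0}^{m} C(a, m-r) * C(n, r) == C(a+n, m)
ok = all(
    sum(comb(a, m - r) * comb(n, r) for r in range(m + 1)) == comb(a + n, m)
    for a in range(8) for n in range(8) for m in range(8)
)
```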
|
197,393 | <p>Playing around on wolframalpha shows $\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$. I know $\tan^{-1}(1)=\pi/4$, but how could you compute that $\tan^{-1}(2)+\tan^{-1}(3)=\frac{3}{4}\pi$ to get this result?</p>
| Per Erik Manne | 33,572 | <p>The simplest way is by using complex numbers. It is a trivial computation to show that $$(1+i)(1+2i)(1+3i)=-10$$
Now recall the geometric description of complex multiplication (multiply the lengths and add the angles), and take the argument on both sides of this equation. This gives
$$\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$$</p>
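<p>Both claims are easy to verify numerically:</p>

```python
from math import atan, pi

total = atan(1.0) + atan(2.0) + atan(3.0)    # should equal pi
product = (1 + 1j) * (1 + 2j) * (1 + 3j)     # should equal -10
```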
|
503,827 | <p>I need help to prove the following:</p>
<p>Let $a$, $b$, and $c$ be any integers. If $a \mid b$, then $a \mid bc$</p>
<p>My brain is in overload and just not working.</p>
| user66733 | 66,733 | <p>If $a \mid b$ then $\exists q \in \mathbb{Z}: aq=b$. So, $aqc=bc$. If you take $Q=qc$ you see that $\exists Q \in \mathbb{Z}: aQ=bc$, therefore, $a \mid bc$.</p>
<p>Now you can prove this one on your own for practicing: </p>
<p>If $a \mid b$ and $c \mid d$ then $ac \mid bd$.</p>
<p>And this one:</p>
<p>If $a \mid b$ and $a \mid c$ then $a \mid bx+cy$ for any $x,y \in \mathbb{Z}$.</p>
<p>Just applying definitions should help you prove them straightforwardly.</p>
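<p>A small brute-force spot-check of the first statement (with $b = aq$ ranging over multiples of $a$):</p>

```python
# if a | b, i.e. b = a*q, then b*c = a*(q*c), so a | b*c
ok = all(
    (a * q * c) % a == 0
    for a in range(1, 10)
    for q in range(-5, 6)
    for c in range(-5, 6)
)
```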
|