qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
2,483,231 | <p>If $F(x)=f(g(x))$, where $f(5) = 8$, $f'(5) = 2$, $f'(−2) = 5$, $g(−2) = 5$, and
$g'(−2) = 9$, find $F'(−2)$. I'm totally lost on this problem; I assume it involves the Chain Rule. I get $5(5) \cdot 9 = 225$, but that is incorrect.</p>
<p>Update: Thanks guys, I see where I messed up thanks!</p>
| ultrainstinct | 177,777 | <p>We have that $$F'(x) = f'(g(x))g'(x),$$
and since we know $g(-2)$, $g'(-2)$, and $f'(5)$, we can find the final answer of
$$F'(-2) = f'(g(-2))g'(-2) = f'(5)\cdot 9 = 18.$$</p>
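<p>One quick way to sanity-check the value $18$ is to build concrete functions that satisfy the given data and differentiate numerically. The linear $f$ and $g$ below are invented purely for illustration; any functions matching the given values would give the same derivative at $-2$:</p>

```python
# Hypothetical functions chosen only to match the given data:
# f(5) = 8, f'(5) = 2, g(-2) = 5, g'(-2) = 9.
def f(x):
    return 8 + 2 * (x - 5)

def g(x):
    return 5 + 9 * (x + 2)

def F(x):
    return f(g(x))

# Central finite difference; essentially exact here since F is linear.
h = 1e-6
Fprime = (F(-2 + h) - F(-2 - h)) / (2 * h)
print(Fprime)  # the chain rule predicts f'(g(-2)) * g'(-2) = 2 * 9 = 18
```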
|
1,643,649 | <blockquote>
<p>I need to show that $\Bbb Z^*_8$ is not isomorphic to $\Bbb Z^*_{10}$.</p>
</blockquote>
<p>$\Bbb Z^*_n$ means the set of integers up to $n$ that are coprime with $n$.</p>
<p>I do not know how to do this. I have difficulties doing proofs involving isomorphisms. A methodological answer would be highly appreciated.</p>
<p>Thanks in advance!</p>
| user26857 | 121,097 | <p>The order of $3$ in $\mathbb Z_{10}^*$ is $4$, while every element of $\mathbb Z_8^*$ has order at most $2$.</p>
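<p>This kind of invariant is easy to check by brute force; a small sketch in plain Python, computing the element orders of both groups:</p>

```python
from math import gcd

def units(n):
    # elements of Z_n^*: integers in [1, n) coprime with n
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a, n):
    # multiplicative order of a modulo n
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# The multiset of element orders is an isomorphism invariant.
orders_8 = sorted(order(a, 8) for a in units(8))     # [1, 2, 2, 2]
orders_10 = sorted(order(a, 10) for a in units(10))  # [1, 2, 4, 4]
print(orders_8, orders_10)
```

<p>Since isomorphic groups must have the same multiset of element orders, the mismatch $[1,2,2,2]$ versus $[1,2,4,4]$ proves the two groups are not isomorphic.</p>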
|
121,362 | <p>I have a set of sample time-series data below of monthly prices for two companies. </p>
<p>Q1. I want to calculate monthly and quarterly log returns. What is the most expedient way to do this? <code>TimeSeriesAggregate[]</code> only has the standard <code>Mean</code>, etc. </p>
<p>Q2. With the returns from Q1, what is the most expedient method to calculate the correlation of the monthly returns between the two companies?</p>
<p>Q3. How would it be possible to calculate six-monthly log returns and then create a series of overlapping 6M log returns so I can derive $7\times 6M$ outcomes from the limited dataset below; i.e. <code>[1m-6m, 2m-7m, 3m-8m, ...]</code> (and then calculate a correlation between these)?</p>
<pre><code>(data1 = {{Date, CompanyA, CompanyB}, {"16/01/2007", 3655,
1000}, {"16/02/2007", 3655, 1000}, {"16/03/2007", 3655,
1000}, {"16/04/2007", 3655, 1000}, {"16/05/2007", 3655,
1000}, {"16/06/2007", 3435, 1011}, {"16/07/2007", 3528,
1012}, {"16/08/2007", 3348, 1013}, {"16/09/2007", 3648,
1022}, {"16/10/2007", 3648, 1022}, {"16/11/2007", 3648,
1022}, {"16/12/2007", 3648, 1022}});
(data2 = MapAt[DateList[{#, {"Day", "Month", "Year"}}] &,
data1, {2 ;;, 1}]) // Grid
</code></pre>
<p>Thanks</p>
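<p>(Not Mathematica, but for reference, the computations in Q1–Q3 can be sketched in plain Python on the prices in the table above; the helper names below are made up for illustration.)</p>

```python
import math

# Prices copied from the data1 table above, header row dropped.
prices_a = [3655, 3655, 3655, 3655, 3655, 3435, 3528, 3348, 3648, 3648, 3648, 3648]
prices_b = [1000, 1000, 1000, 1000, 1000, 1011, 1012, 1013, 1022, 1022, 1022, 1022]

def log_returns(p, lag=1):
    # lag=1: monthly; lag=3: quarterly; lag=6: overlapping six-month returns
    return [math.log(p[i + lag] / p[i]) for i in range(len(p) - lag)]

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ra, rb = log_returns(prices_a), log_returns(prices_b)  # Q1: monthly returns
corr = correlation(ra, rb)                             # Q2
overlap_6m_a = log_returns(prices_a, lag=6)            # Q3
print(len(ra), round(corr, 3), len(overlap_6m_a))
```

<p>Note that 12 monthly prices yield only 6 overlapping six-month returns; the $7\times 6M$ windows mentioned in Q3 would need one more observation.</p>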
| J. M.'s persistent exhaustion | 50 | <p>I guess something like this:</p>
<pre><code>With[{n = 7},
BlockRandom[SeedRandom["triangles"];
Graphics[Table[{RandomColor[],
RegularPolygon[{Sqrt[3] (j + i - 1),
3 j + Boole[EvenQ[i]]}/2,
{1, (-1)^i π/6}, 3]},
{i, 2 n - 1}, {j, n - Quotient[i, 2]}]]]]
</code></pre>
<p><img src="https://i.stack.imgur.com/ZIcD8.png" alt="some colored triangles"></p>
|
121,362 | <p>I have a set of sample time-series data below of monthly prices for two companies. </p>
<p>Q1. I want to calculate monthly and quarterly log returns. What is the most expedient way to do this? <code>TimeSeriesAggregate[]</code> only has the standard <code>Mean</code>, etc. </p>
<p>Q2. With the returns from Q1, what is the most expedient method to calculate the correlation of the monthly returns between the two companies?</p>
<p>Q3. How would it be possible to calculate six-monthly log returns and then create a series of overlapping 6M log returns so I can derive $7\times 6M$ outcomes from the limited dataset below; i.e. <code>[1m-6m, 2m-7m, 3m-8m, ...]</code> (and then calculate a correlation between these)?</p>
<pre><code>(data1 = {{Date, CompanyA, CompanyB}, {"16/01/2007", 3655,
1000}, {"16/02/2007", 3655, 1000}, {"16/03/2007", 3655,
1000}, {"16/04/2007", 3655, 1000}, {"16/05/2007", 3655,
1000}, {"16/06/2007", 3435, 1011}, {"16/07/2007", 3528,
1012}, {"16/08/2007", 3348, 1013}, {"16/09/2007", 3648,
1022}, {"16/10/2007", 3648, 1022}, {"16/11/2007", 3648,
1022}, {"16/12/2007", 3648, 1022}});
(data2 = MapAt[DateList[{#, {"Day", "Month", "Year"}}] &,
data1, {2 ;;, 1}]) // Grid
</code></pre>
<p>Thanks</p>
| Wjx | 6,084 | <p>This question is not hard at all:</p>
<pre><code>mat = {{1, 0}, {1/2, Sqrt[3]/2}};
draw[n_] :=
Graphics[Table[{RandomColor[],
Triangle[{{i + n + 1 - #, j + n + 1 - #}, {i, j + 1}, {i + 1,
j}}.mat]}, {i, n}, {j, # - i}] & /@ {n, n + 1}];
draw[8]
</code></pre>
<p>The code is straightforward; check it yourself~</p>
|
370,007 | <p>A river boat can travel at 20 km per hour in still water. The boat travels 30 km upstream against the current, then turns around and travels the same distance back with the current. If the total trip took 7.5 hours, what is the speed of the current? Solve this question algebraically as well as graphically.</p>
<p>I started the algebra solution with
$x=(V_{\text{still}}-V_{\text{current}})\,t_1$ (when it goes upstream) and
$x=(V_{\text{still}}+V_{\text{current}})\,t_2$ (when it comes back downstream)...</p>
<p>I have the same question on a quiz in 1 hour and I need to know how to do this; please show a solution :D thanks</p>
| Math Gems | 75,092 | <p><strong>Hint</strong> <span class="math-container">$\rm\ (ab)^k\! = 1\Rightarrow a^k\! = b^{-k}\! =\color{#c00}c \in \langle a\rangle\cap\langle b\rangle\Rightarrow ord\,c\mid m,n\,\Rightarrow\, ord\,c\mid(m,n)\!=\!1\,\Rightarrow\, \color{#c00}{c\! =\! 1},\,$</span> thus <span class="math-container">$\rm\ a^k\! = 1 = b^k\,$</span> <a href="https://math.stackexchange.com/q/2322114/242">thus</a> <span class="math-container">$\rm\, m,n\mid k\:\Rightarrow\:\ell \!=\! lcm(m,n)\mid k.\ $</span> Conversely <span class="math-container">$\rm\:m,n\mid \ell \:\Rightarrow\:(ab)^\ell\! = 1.$</span></p>
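<p>The hint's conclusion — for commuting elements of coprime orders $m,n$, the order of $ab$ is $\operatorname{lcm}(m,n)=mn$ — can be spot-checked numerically, e.g. in the cyclic group $(\Bbb Z/101)^*$ of order $100$ (the modulus is chosen arbitrarily):</p>

```python
def order(a, n):
    # multiplicative order of a modulo n
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

n = 101  # (Z/101)* is cyclic of order 100
a = next(g for g in range(2, n) if order(g, n) == 4)   # some element of order 4
b = next(g for g in range(2, n) if order(g, n) == 25)  # some element of order 25
# gcd(4, 25) = 1, so the hint predicts ord(ab) = lcm(4, 25) = 100
print(a, b, order(a * b % n, n))
```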
|
1,356,367 | <p>Is it true that projection is a normal matrix? It's clear that orthogonal projection is, but what about non-orthogonal projection?</p>
<p>By normal matrix, I mean matrix A such that $AA' = A'A$.</p>
| quid | 85,306 | <p><a href="http://en.wikipedia.org/wiki/Binomial_distribution" rel="nofollow">From the relevant Wikipedia page:</a> </p>
<blockquote>
<p>The binomial distribution is frequently used to model the number of successes in a sample of size $n$ drawn with replacement from a population of size $N$. </p>
</blockquote>
<p>Note two things: </p>
<ul>
<li><p>What is key is the <em>number</em> of successes. So if with four tries you have $(S,F,F,S)$ or $(F,S,F,S)$ is irrelevant, in both cases you have two successes. </p></li>
<li><p>Since one draws with replacement each try is independent from what happened before. Thus $(S,F,F,S)$ and $(F,S,F,S)$ certainly have the same probability. </p></li>
</ul>
<p>And, $\binom{n}{k}$ is the number of ways in which you can have $k$ successes among $n$ tries. </p>
<p>If you do not draw with replacement, but without replacement, then the second point is no longer true and you get a different distribution, called <a href="http://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="nofollow">hyper-geometric distribution</a>. </p>
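<p>A small numerical illustration of the contrast (population and sample sizes below are arbitrary): the with-replacement (binomial) and without-replacement (hypergeometric) probabilities differ, although both distributions have the same mean $nK/N$.</p>

```python
from math import comb

N, K, n = 20, 8, 5   # population of 20, of which 8 are "successes"; draw 5
p = K / N

def binom_pmf(k):       # drawing WITH replacement
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def hypergeom_pmf(k):   # drawing WITHOUT replacement
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

for k in range(n + 1):
    print(k, round(binom_pmf(k), 4), round(hypergeom_pmf(k), 4))
```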
|
1,018,248 | <p>Let $X:=(X_t)_{t\geq0}$ be a Lévy process with triple $(b,A,\nu)$. Is there any known relation between the "distribution" of its jumps and the Lévy measure $\nu$? E.g. can we express something like $\mathbb{P}[X$ has $n$ jumps in $[0,1]]$ or $\mathbb{P}[X$ has a jump of absolute value $>u$ in $[0,1]]$ for some $u>0$ in terms of $\nu$?</p>
| saz | 36,150 | <p>Yes, there is a very strong relation between the (distribution of the) jumps of a Lévy process and its Lévy measure. In fact, the Lévy measure describes the jump behaviour of the corresponding Lévy process:</p>
<p>Define the jump counting measure</p>
<p>$$N([0,t] \times B) := |\{0 \leq s \leq t; \Delta X_s \in B\}| \tag{1}$$</p>
<p>where $\Delta X_s := X_s-X_{s-}$ denotes the jump height at time $s$. So, basically, $N([0,t] \times B)(\omega)$ gives the number of jumps of height $\in B$ during the time interval $[0,t]$ of the sample path $s \mapsto X_s(\omega)$. For a fixed Borel set $B$ such that $0 \notin \overline{B}$, set</p>
<p>$$N_t := N([0,t] \times B).$$</p>
<p>Then one can show that $(N_t)_{t \geq 0}$ is again a Lévy process; more precisely a Poisson process, and</p>
<p>$$\mathbb{E}(N_t) = t \cdot \nu(B). \tag{2}$$</p>
<p>This means that $\nu$ characterizes the jump behavior of the process $(X_t)_{t \geq 0}$. Some important consequences:</p>
<ul>
<li>Whenever $\nu(B)=0$, then the process $(X_t)_{t \geq 0}$ has almost surely no jumps of height $B$. For example for the Poisson process we have $\nu = \delta_1$; hence, $\nu(B) = 0$ whenever $1 \notin B$. Consequently, by the above considerations, the Poisson process can only have jumps of size $1$.</li>
<li>The same argumentation shows that a Lévy process with Lévy measure $\nu=0$ does not have any jumps.</li>
<li>If $B$ is such that $\nu(B)<\infty$, then in any finite time interval $[0,T]$ we have only finitely many jumps of size $B$.</li>
<li>We have $$\mathbb{P}(N_t = n) = \exp(-t \nu(B)) \cdot \frac{(t \nu(B))^n}{n!};$$ note that the left-hand side equals the probability that $(X_s)_{s \geq 0}$ has $n$ jumps of size $B$ during the time interval $[0,t]$.</li>
</ul>
<p>In fact, one can show that $(1)$ defines a so-called Poisson random measure and define stochastic integrals with respect to such (random) measures. This leads finally to the Lévy-Itô decomposition.</p>
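<p>The Poisson formula in the last bullet is easy to verify numerically: truncating the series, the probabilities sum to $1$ and the expected number of jumps is $t\cdot\nu(B)$, in line with $(2)$. The values of $t$ and $\nu(B)$ below are arbitrary:</p>

```python
from math import exp, factorial

t, nu_B = 2.0, 1.5   # hypothetical time horizon and Levy mass nu(B)
lam = t * nu_B

def pmf(n):
    # P(N_t = n) = exp(-t nu(B)) (t nu(B))^n / n!
    return exp(-lam) * lam**n / factorial(n)

total = sum(pmf(n) for n in range(100))
mean = sum(n * pmf(n) for n in range(100))
print(round(total, 10), round(mean, 10))  # close to 1 and to t * nu(B) = 3
```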
|
2,825,789 | <p>I struggle to understand the following theorem (not the proof, I can't even validate it to be true). Note: I don't have a math background.</p>
<blockquote>
<p>If S is not the empty set, then (f : T → V) is injective if and only if Hom(S, f) is injective.</p>
<p>Hom(S, f) : Hom(S, T) → Hom(S, V)</p>
</blockquote>
<p>As I understand, to prove</p>
<p><strong>f is injective ↔ Hom(S, f) is injective</strong></p>
<p>we can go two ways. We can either prove</p>
<ol>
<li><strong>f</strong> is injective → <strong>Hom(S, f)</strong> is injective AND</li>
<li><strong>f</strong> is not injective → <strong>Hom(S, f)</strong> is not injective</li>
</ol>
<p>Or we can prove</p>
<ol>
<li><strong>Hom(S, f)</strong> is injective → <strong>f</strong> is injective AND</li>
<li><strong>Hom(S, f)</strong> is not injective → <strong>f</strong> is not injective</li>
</ol>
<p>Both ways should give the same result, because biconditional is symmetric, right?!</p>
<p>Then I draw the following diagram:</p>
<p><a href="https://i.stack.imgur.com/1IRGM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1IRGM.png" alt="enter image description here" /></a></p>
<p>where I see <strong>f</strong> as injective but <strong>Hom(S, f)</strong> as not!</p>
<p>Where am I wrong? How can I visualize <strong>Hom(S, f)</strong> correctly?</p>
| Paramanand Singh | 72,031 | <p>It is worthwhile to give another proof for Riemann integrability of functions which are continuous on a closed interval.</p>
<p>The proof below is taken from <em>Calculus</em> by Spivak and I must say it is novel enough. It does not make use of uniform continuity but rather invokes the mean value theorem for derivatives.</p>
<p>The central idea is to show that if $f:[a, b] \to\mathbb {R} $ is continuous on $[a, b] $ then the upper and lower Darboux integrals of $f$ on $[a, b] $ are equal ie $$\overline{\int} _{a} ^{b} f(x) \, dx=\underline{\int} _{a} ^{b} f(x) \, dx$$ Now to establish the above identity Spivak considers the upper Darboux integrals as a function of the upper limit of integration. Thus following Spivak we consider the function $$J(x) =\overline{\int} _{a}^{x} f(t) \, dt$$ and show that $J'(x) =f(x) $ for all $x\in[a, b] $. Similarly we have $j'(x) =f(x) $ for all $x\in[a, b] $ where $$j(x) =\underline{\int} _{a} ^{x} f(t) \, dt$$ The derivative of function $F=J-j$ vanishes everywhere on $[a, b] $ and $F(a) =0$ so that $F$ vanishes on whole of $[a, b] $.</p>
<p>The key point which needs to be established here is the relation $$J'(x) =f(x) =j'(x), \forall x\in[a, b] $$ and the proof is almost the same as that of first fundamental theorem of calculus. The upper Darboux integrals enjoy the same additive property as Riemann integrals and we have $$J(x+h) - J(x) =\overline{\int} _{x} ^{x+h} f(t) \, dt$$ Further given $\epsilon >0$ the continuity of $f$ at $x$ ensures the existence of a $\delta>0$ such that $$f(x) - \epsilon<f(t) <f(x) +\epsilon$$ whenever $t\in(x-\delta, x+\delta) $. If $0<h<\delta$ then the above inequality yields $$h(f(x) - \epsilon) \leq J(x+h) - J(x) \leq h(f(x) +\epsilon) $$ or $$\left|\frac{J(x+h) - J(x)} {h} - f(x) \right|\leq \epsilon$$ The same identity holds even when $-\delta<h<0$ and hence by definition of derivative we have $J'(x) =f(x) $. The proof for $j'(x) =f(x) $ is exactly the same (using lower Darboux integrals). </p>
|
1,925,867 | <p>I can't find any. For saying $H$ is a subgroup of $G$ we have notation but it seems none exists for subrings.</p>
| user259242 | 308,784 | <p>If $S$ is a subring of $R$ we write $S\hookrightarrow R$. The hooked arrow means 'monomorphism', which encapsulates the idea of $S$ being isomorphic to a sub-object of $R$ which is in this case a sub<em>ring</em>.</p>
<p><strong>EDIT</strong></p>
<p>After the discussion in quid's answer I will explain this a little. </p>
<p>The symbol '$\subset$' means 'subset'. The symbol '$\hookrightarrow$' means 'subobject'. Hence the latter can be used to denote a subgroup, a subring, a sub(topological/metric/&c.)space and indeed even a subset. </p>
|
3,792,135 | <p><strong>Question:</strong> Sum of the series <span class="math-container">$1-3x^2+5x^4 - ... + (-1)^{n-1} (2n-1)x^{2n-2} = \sum\limits_{n=0}^{\infty} (-1)^{n-1} (2n-1)x^{2n-2}$</span></p>
<p>My first idea is to integrate to get <span class="math-container">$\int f(x) dx = x -x^3 + x^5 - ... + (-1)^{ n-1}x^{2n-1} = \sum\limits_{n=1}^{\infty} (-1)^{n-1} x^{2n-1}$</span>. Now I am trying to modify this to a geometric form:</p>
<p><span class="math-container">$$\sum\limits_{n=1}^{\infty} (-1)^{n-1} x^{2n-1}$$</span></p>
| Riemann'sPointyNose | 794,524 | <p>You had the right idea. Now,</p>
<p><span class="math-container">$${\sum_{n=1}^{\infty}(-1)^{n-1}x^{2n-1}=\left(-\frac{1}{x}\right)\sum_{n=1}^{\infty} (-1)^{n}x^{2n}=\left(-\frac{1}{x}\right)\sum_{n=1}^{\infty}(-x^2)^{n}}$$</span></p>
<p>Now, we get that this is a Geometric Series with ratio <span class="math-container">${(-x^2)}$</span> - but the index starts from <span class="math-container">${1}$</span>, not <span class="math-container">${0}$</span>. Hence</p>
<p><span class="math-container">$${=\left(-\frac{1}{x}\right)\frac{(-x^2)}{1-(-x^2)}=\frac{x}{1+x^2}}$$</span></p>
<p>Now simply differentiate this expression, giving</p>
<p><span class="math-container">$${\frac{1-x^2}{(1+x^2)^2}}$$</span></p>
|
1,186,825 | <p>Prove $$\lim_{n\to\infty}\int_0^1 \left(\cos{\frac{1}{x}} \right)^n\mathrm dx=0$$</p>
<p>I tried, but failed. Any help will be appreciated.</p>
<p>At most points $(\cos 1/x)^n\to 0$, but how can I prove that the integral tends to zero clearly and convincingly?</p>
| Villetaneuse | 222,146 | <p>We know that $\left|\cos(\frac{1}{x})\right|<1$ except on a countable set, which hence has measure 0.</p>
<p>Therefore, for almost any $x \in [0;1]$, $\lim_{n \to +\infty} \left(\cos \frac{1}{x}\right)^n =0$.</p>
<p>Since for any $x \in [0,1]$ and any $n$, $|\cos(\frac{1}{x})^n|\leq 1$, we can conclude by using Lebesgue dominated convergence theorem.</p>
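<p>A numerical illustration supports this (midpoint rule with an ad hoc step count, so the rapidly oscillating region near $x=0$ is only approximated; even exponents are used so the integrand is nonnegative): the integrals decrease toward $0$ as predicted.</p>

```python
from math import cos

def integral(n, steps=200_000):
    # midpoint rule on (0, 1]; f(0) = 0 by definition, so the endpoint is harmless
    h = 1.0 / steps
    return sum(cos(1.0 / ((i + 0.5) * h)) ** n for i in range(steps)) * h

vals = [integral(n) for n in (2, 20, 200)]
print([round(v, 4) for v in vals])  # decreasing toward 0
```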
|
1,186,825 | <p>Prove $$\lim_{n\to\infty}\int_0^1 \left(\cos{\frac{1}{x}} \right)^n\mathrm dx=0$$</p>
<p>I tried, but failed. Any help will be appreciated.</p>
<p>At most points $(\cos 1/x)^n\to 0$, but how can I prove that the integral tends to zero clearly and convincingly?</p>
| yakaqi | 220,822 | <p>you can try to substitute $t=\frac1x$.</p>
<p>$$\lim_{n\to\infty}\int_0^1\cos^n \frac1x\,\mathrm dx \Rightarrow \lim_{n\to\infty}\int_{+\infty}^1 \frac{\cos^n(t)}{-t^2}\,\mathrm dt \Rightarrow
\lim_{n\to\infty}\int_1^{+\infty} \frac{\cos^n(t)}{t^2}\,\mathrm dt$$</p>
<p>Next we consider the integral.</p>
<p>\begin{align}
\left|\int_1^{+\infty}\ \frac {cos^n(t)}{t^2}\mathrm dt\right|
&=\left|\int_1^{2\pi}\ \frac {cos^n(t)}{t^2}\mathrm dt+\sum_{i=1}^\infty \int_{2 \pi i}^{2\pi(i+1)}\ \frac {cos^n(t)}{t^2}\mathrm dt\right|\\
&\leqslant\left|\int_1^{2\pi}\ \frac {cos^n(t)}{t^2}\mathrm dt|+\sum_{i=1}^{\infty} |\int_{2\pi i}^{2\pi(i+1)}\ \frac {cos^n(t)}{t^2}\mathrm dt\right|\\
&\leqslant \int_1^{2\pi}\ \left|\frac {cos^n(t)}{t^2}\right|\mathrm dt+\sum_{i=1}^\infty \int_{2 \pi i}^{2\pi(i+1)}\ \left|\frac {cos^n(t)}{t^2}\right|\mathrm dt.
\end{align}</p>
<p>We denote the unsigned area under $\cos^n(t)$ between $0$ and $2\pi$ as $S_n$.</p>
<p>$$\int_1^{2\pi} \left|\frac{\cos^n(t)}{t^2}\right|\mathrm dt+\sum_{i=1}^\infty \int_{2\pi i}^{2\pi(i+1)} \left|\frac{\cos^n(t)}{t^2}\right|\mathrm dt \leq \frac{S_n}{1^2}+\sum_{i=1}^\infty \frac{S_n}{(2\pi i)^2}=S_n\left(1+\sum_{i=1}^\infty \frac{1}{(2\pi i)^2}\right).$$</p>
<p>Since $\displaystyle 1+\sum_{i=1}^\infty \frac {1}{(2\pi i)^2}$ is convergent, we denote it by $M$.<br/></p>
<p>So we have $\displaystyle \lim_{n\to\infty}\left|\int_1^{+\infty} \frac{\cos^n(t)}{t^2}\,\mathrm dt\right| \leq \lim_{n\to\infty} S_nM$.</p>
<p>since $S_n \to 0$ as $n\to\infty$ (this is shown below).</p>
<p>so $\displaystyle \lim_{n\to\infty}\left|\int_1^{+\infty} \frac{\cos^n(t)}{t^2}\,\mathrm dt\right|=0 \Rightarrow
\lim_{n\to\infty}\int_1^{+\infty} \frac{\cos^n(t)}{t^2}\,\mathrm dt=0 \Rightarrow \lim_{n\to\infty}\int_0^1\cos^n \frac1x\,\mathrm dx=0$</p>
<p>Why is $\lim_{n\to\infty}S_n=0$?<br/></p>
<p>First we focus only on the area $T_n$ under $\cos^n(t)$ between $0$ and $\frac \pi2$.<br/>
Then, by the symmetry of $\cos^n(t)$, we have $S_n=4T_n$.<br/>
$T_n=\int_0^{\frac\pi2}|\cos^n(t)|\,\mathrm dt=\int_0^{\frac\pi2}\cos^n(t)\,\mathrm dt$</p>
<p>Next we notice that if $0\leq x <1$, then $\lim_{n\to\infty} x^n=0$, and that $0\leq \cos^n(t)\leq 1$ as $t$ ranges from $0$ to $\frac \pi2$.<br/>
So $\lim_{n\to\infty}\cos^n(t)=0$ if $t\neq0$ and $\lim_{n\to\infty}\cos^n(t)=1$ if $t=0$.<br/></p>
<p>Let $f_n(t)=\cos^n(t)$ and $g(t)=\cos(t)$.<br/>
Then $|f_n(t)|\leq g(t)$ as $t$ ranges from $0$ to $\frac \pi2$.<br/></p>
<p>According to the Dominated Convergence Theorem,<br/>
we have $\lim_{n\to\infty}\int_0^{\frac\pi2}\cos^n(t)\,\mathrm dt=\int_0^{\frac\pi2}\lim_{n\to\infty}\cos^n(t)\,\mathrm dt$.<br/>
The left-hand side is $\lim_{n\to\infty}T_n$, and the right-hand side gives us $0$.<br/>
$\lim_{n\to\infty}T_n=0\Rightarrow\lim_{n\to\infty}S_n=0$</p>
|
374,909 | <p>If <span class="math-container">$A\subseteq\mathbb{N}$</span> is a subset of the positive integers, we let <span class="math-container">$$\mu^+(A) = \lim\sup_{n\to\infty}\frac{|A\cap\{1,\ldots,n\}|}{n}$$</span> be the <em>upper density</em> of <span class="math-container">$A$</span>.</p>
<p>For <span class="math-container">$n\in\mathbb{N}$</span> we let <span class="math-container">$\sigma(n)$</span> be the number of divisors of <span class="math-container">$n$</span>, the numbers <span class="math-container">$1$</span> and <span class="math-container">$n$</span> included.</p>
<p>Do we have <span class="math-container">$\mu^+\big(\sigma^{-1}(\{k\})\big) = 0$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>? If not, what is the value of <span class="math-container">$\sup\big\{\mu^+\big(\sigma^{-1}(\{k\})\big):k\in\mathbb{N}\big\}$</span>?</p>
| Random | 88,679 | <p>Notice that <span class="math-container">$\sigma(p^{k-1}) = k$</span> and so the image of <span class="math-container">$\sigma$</span> is all of <span class="math-container">$\mathbb{N}$</span>.</p>
<p>By the way, <span class="math-container">$\sigma$</span> is usually used for the sum of divisors function, and it is more standard to use <span class="math-container">$d$</span> or <span class="math-container">$\tau$</span> for your function.</p>
<p>EDIT: I misread the question. I will use <span class="math-container">$\tau$</span> instead of <span class="math-container">$\sigma$</span>.</p>
<p>I claim that <span class="math-container">$\mu ^ {+}(\tau^{-1}(\{k\})) = 0$</span>. Take a number <span class="math-container">$m$</span> in this set, and let us look at <span class="math-container">$m$</span>'s prime factorization: <span class="math-container">$m = p_1^{\alpha_1} p_2^{\alpha_2}\cdots p_r^{\alpha_r}$</span>. Notice that there are finitely many options for the <span class="math-container">$\alpha_i$</span> (up to a permutation), because <span class="math-container">$(\alpha_1 + 1)(\alpha_2 + 1) \cdots (\alpha_r + 1)=k$</span>, so it is enough to show that the upper density of numbers of the form <span class="math-container">$p_1 ^ {\alpha_1} p_2 ^{\alpha_2} \cdots p_r^{\alpha_r}$</span> where <span class="math-container">$r, \alpha_i$</span> are fixed is zero.</p>
<p>Let us look at numbers in this set that are at most <span class="math-container">$x$</span>. Then if we fix <span class="math-container">$p_1$</span>, we need to choose primes <span class="math-container">$p_2, \cdots p_r$</span> such that <span class="math-container">$p_2 ^{\alpha_2} \cdots p_r^{\alpha_r} \leq \frac{x}{p_1 ^{\alpha_1}}$</span>.</p>
<p>By induction we can assume that the amount of numbers of the form <span class="math-container">$p_2 ^{\alpha_2} \cdots p_r^{\alpha_r}$</span> which are at most <span class="math-container">$x$</span> is <span class="math-container">$o(x)$</span>, and if <span class="math-container">$\alpha_1 \geq 2$</span> then this shows that the amount of numbers of the form <span class="math-container">$p_1 ^{\alpha_1} p_2 ^{\alpha_2} \cdots p_r^{\alpha_r}$</span> is <span class="math-container">$o(x)$</span> by summing over the options of <span class="math-container">$p_1$</span> (and using the fact that <span class="math-container">$\sum_{p} \frac{1}{p^2} \leq \sum_{n} \frac{1}{n^2}$</span> converges). Therefore it is enough to handle the case where all <span class="math-container">$\alpha_i$</span> are 1, that is, to show that the amount of numbers of the form <span class="math-container">$p_1 \cdots p_r$</span> up to <span class="math-container">$x$</span> is <span class="math-container">$o(x)$</span> (for <span class="math-container">$r$</span> fixed).</p>
<p>Fixing <span class="math-container">$p_1$</span> we see that <span class="math-container">$p_2$</span> can be any prime that is at most <span class="math-container">$\frac{x}{p_1}$</span>. and then <span class="math-container">$p_3$</span> can be anything that is at most <span class="math-container">$\frac{x}{p_1 p_2}$</span>, ... and <span class="math-container">$p_r$</span> is at most <span class="math-container">$\frac{x}{p_1 p_2 \cdots p_{r-1}}$</span>. So we see that the amount of numbers at most <span class="math-container">$x$</span> is</p>
<p><span class="math-container">$$\sum_{p_1 \leq x} \sum_{p_2 \leq \frac{x}{p_1}} \cdots \sum_{p_{r-1} \leq \frac{x}{p_1 \cdots p_{r-2}}} \pi (\frac{x}{p_1 \cdots p_{r-1}})$$</span></p>
<p>From here we can use the simple bound <span class="math-container">$\pi(x) \leq \frac{cx}{\log x}$</span> for some constant <span class="math-container">$c$</span> and see that this sum is small.</p>
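<p>The conclusion <span class="math-container">$\mu^+\big(\tau^{-1}(\{k\})\big)=0$</span> can be illustrated numerically for <span class="math-container">$k=2$</span> (where <span class="math-container">$\tau(n)=2$</span> exactly when <span class="math-container">$n$</span> is prime): the fraction of such <span class="math-container">$n$</span> up to <span class="math-container">$N$</span> decays slowly, like <span class="math-container">$1/\log N$</span>.</p>

```python
def prime_fraction(N):
    # tau[n] = number of divisors of n, computed by a sieve
    tau = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            tau[m] += 1
    return sum(1 for n in range(1, N + 1) if tau[n] == 2) / N

fracs = [prime_fraction(N) for N in (1_000, 10_000, 100_000)]
print([round(f, 4) for f in fracs])  # slowly decreasing toward 0
```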
|
3,029,208 | <p>Hi, I have been trying to find a combinatorial proof of <span class="math-container">${kn \choose 2}= k{n \choose 2}+n^2{k \choose 2}$</span>. </p>
| Daniel Robert-Nicoud | 60,713 | <p><strong>Hint:</strong> You want to pick <span class="math-container">$2$</span> elements out of <span class="math-container">$k$</span> buckets of <span class="math-container">$n$</span> elements each. You have two possible ways to do it: either you pick a bucket, and then you take <span class="math-container">$2$</span> elements from this bucket, or you pick <span class="math-container">$2$</span> buckets and then you choose one element from each of those <span class="math-container">$2$</span> buckets.</p>
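<p>Before hunting for the bijective argument, the identity itself is easy to confirm numerically:</p>

```python
from math import comb

for k in range(1, 12):
    for n in range(1, 12):
        lhs = comb(k * n, 2)
        rhs = k * comb(n, 2) + n * n * comb(k, 2)
        assert lhs == rhs, (k, n)
print("identity verified for 1 <= k, n <= 11")
```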
|
2,895,284 | <blockquote>
<p>Find $\frac{d}{dx}\frac{x^3}{{(x-1)}^2}$</p>
</blockquote>
<p>I start by finding the derivative of the denominator, since I have to use the chain rule. </p>
<p>Thus, I make $u=x-1$ and $g=u^{-2}$. I find that $u'=1$ and $g'=-2u^{-3}$. I then multiply the two together and substitute $u$ in to get:</p>
<p>$$\frac{d}{dx}(x-1)^{2}=2(x-1)$$</p>
<p>After having found the derivative of the denominator I find the derivative of the numerator, which is $3x^2$. With the two derivatives found I apply the quotient rule, which states that </p>
<p>$$\frac{d}{dx}(\frac{u(x)}{v(x)})=\frac{v'u-vu'}{v^2}$$</p>
<p>and substitute in the numbers</p>
<p>$$\frac{d}{dx}\frac{x^3}{(x-1)^2}=\frac{3x^2(x-1)^2-2x^3(x-1)}{(x-1)^4}$$</p>
<p>Can I simplify this any further? Is the derivation correct?</p>
| Umberto P. | 67,536 | <p>No it isn't. When you apply the quotient rule you should be differentiating the denominator $v(x) = (x-1)^2$, not its reciprocal.</p>
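<p>For reference, the standard quotient rule is $\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$; a quick finite-difference check of it on this function (evaluation point $x_0 = 2.5$ chosen arbitrarily, away from the singularity at $x=1$):</p>

```python
def f(x):
    return x**3 / (x - 1) ** 2

def fprime(x):
    # standard quotient rule (u'v - u v') / v^2 with u = x^3, v = (x - 1)^2
    return (3 * x**2 * (x - 1) ** 2 - x**3 * 2 * (x - 1)) / (x - 1) ** 4

x0, h = 2.5, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central finite difference
print(round(numeric, 6), round(fprime(x0), 6))  # the two values agree
```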
|
4,243,344 | <blockquote>
<p><span class="math-container">${43}$</span> equally strong sportsmen take part in a ski race; 18 of
them belong to club <span class="math-container">${A}$</span>, 10 to club <span class="math-container">${B}$</span>, and 15 to club <span class="math-container">${C}$</span>. What is the
average place for (a) the best participant from club <span class="math-container">${B}$</span>; (b) the
worst participant from club <span class="math-container">${B}$</span>?</p>
</blockquote>
<hr />
<p>I've found the possible range of places the participant could get for both cases. In the case (a), the best participant from club <span class="math-container">${B}$</span> can be at any place between <span class="math-container">$1$</span> and <span class="math-container">$34$</span>. As for the case (b), the worst participant from club <span class="math-container">$B$</span> can get any place between <span class="math-container">$10$</span> and <span class="math-container">$43$</span>. To find the average place I need to compute the expected (mean) value of the this variable. But I'm not sure how to find the chances for getting each place. I suppose they should be equal, but neither <span class="math-container">$\frac{1}{33}$</span> nor <span class="math-container">$\frac{1}{43}$</span> seem to give the right answer.</p>
| Mike Earnest | 177,399 | <p>Instead of finding the probability of each place and doing <span class="math-container">$\sum k\,p(k)$</span>, you can use this trick.</p>
<p>When all the people are lined up in order of place, the ten people in the <span class="math-container">$B$</span> group will divide the line into <span class="math-container">$11$</span> contiguous sections. These sections are composed of the <span class="math-container">$43-10=33$</span> people from groups <span class="math-container">$A$</span> and <span class="math-container">$C$</span>. Furthermore, each person in <span class="math-container">$A\cup C$</span> is equally likely to be in any of these <span class="math-container">$11$</span> spots. It follows that the expected number of <span class="math-container">$A\cup C$</span> members in each section is <span class="math-container">$33/11=3$</span>. Using this, you should be able to deduce the average positions of the best from the <span class="math-container">$B$</span> club and the worst from the <span class="math-container">$B$</span> club.</p>
<p>Here is an illustrative picture. <span class="math-container">$B_1$</span> is the best, <span class="math-container">$B_{10}$</span> is the worst.
<span class="math-container">$$
\newcommand{\s}{\,\boxed{\,\;3\;\,}\,}
\s B_1
\s B_2
\s B_3
\s B_4
\s B_5
\s B_6
\s B_7
\s B_8
\s B_9
\s B_{10}
\s
$$</span></p>
<hr />
<p>Why is each of the gaps equally likely? Let <span class="math-container">$X$</span> be a particular person in clubs <span class="math-container">$A$</span> or <span class="math-container">$C$</span>. We will group all of the orderings into sets of <span class="math-container">$11$</span>, where <span class="math-container">$X$</span> is in a different gap in each set. Therefore, the fraction of orderings where <span class="math-container">$X$</span> is in any particular gap is <span class="math-container">$1/11$</span>.</p>
<p>In each set, all of the people except for the <span class="math-container">$B$</span> club and <span class="math-container">$X$</span> will retain their positions, while <span class="math-container">$X$</span> will swap with some of the <span class="math-container">$B$</span> members as follows. Each <span class="math-container">$\cdots$</span> obscures a sequence of <span class="math-container">$A$</span> and <span class="math-container">$C$</span> members that is the same for all <span class="math-container">$11$</span> orderings.
<span class="math-container">$$
\#1) \cdots X\cdots B_1\cdots B_2\cdots B_3\cdots B_4 \cdots B_5\cdots B_6\cdots B_7\cdots B_8\cdots B_9\cdots B_{10}\cdots\\\,\\
\#2)\cdots B_1\cdots X\cdots B_2\cdots B_3\cdots B_4 \cdots B_5\cdots B_6\cdots B_7\cdots B_8\cdots B_9\cdots B_{10}\cdots\\\,\\
\#3)\cdots B_1\cdots B_2\cdots X\cdots B_3\cdots B_4 \cdots B_5\cdots B_6\cdots B_7\cdots B_8\cdots B_9\cdots B_{10}\cdots\\\,\\
\vdots\\\,\\
\#11)\cdots B_1\cdots B_2\cdots B_3\cdots B_4 \cdots B_5\cdots B_6\cdots B_7\cdots B_8\cdots B_9\cdots B_{10}\cdots X\cdots
$$</span></p>
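<p>The averages suggested by the gap argument ($3+1=4$ for the best and $43-3=40$ for the worst from club $B$) can be confirmed by simulation (trial count and seed below are arbitrary):</p>

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
K, B, trials = 43, 10, 100_000
best_sum = worst_sum = 0
for _ in range(trials):
    places = random.sample(range(1, K + 1), B)  # the B-club members' places
    best_sum += min(places)
    worst_sum += max(places)
mean_best, mean_worst = best_sum / trials, worst_sum / trials
print(round(mean_best, 2), round(mean_worst, 2))  # close to 4 and 40
```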
|
1,013,484 | <p>I've this function : $f(x,y)= \dfrac{(1+x^2)x^2y^4}{x^4+2x^2y^4+y^8}$ for $(x,y)\ne (0,0)$ and $0$ for $(x,y)=(0,0)$</p>
<p>It's admits directional derivatives at the origin?</p>
| user72272 | 72,272 | <p>A function admits directional derivative at a point if its gradient $\nabla{f}$ exists at that point. The gradient of your function is given by,
$$\nabla{f}=\left(\begin{array}{cc} -\frac{2\, x\, y^4\, \left(2\, x^6 + 3\, x^4 - 2\, y^4 + 1\right)}{{\left(x^6 + x^2 + 2\, y^4\right)}^2} & \frac{4\, x^2\, y^3\, \left(x^6 + x^4 + x^2 + 1\right)}{{\left(x^6 + x^2 + 2\, y^4\right)}^2} \end{array}\right)$$
which doesn't exist at $(x,y)=(0,0)$, i.e. the origin, since you have $\frac{0}{0}$ which is undefined.</p>
|
1,013,484 | <p>I've this function : $f(x,y)= \dfrac{(1+x^2)x^2y^4}{x^4+2x^2y^4+y^8}$ for $(x,y)\ne (0,0)$ and $0$ for $(x,y)=(0,0)$</p>
<p>It's admits directional derivatives at the origin?</p>
| MickG | 135,592 | <p>Let $\nu=(\cos\alpha,\sin\alpha)$ be any unit vector in the plane $\mathbb{R}^2$. Let us calculate the limit (which will depend on $\alpha$) that gives the directional derivative of $f$ with respect to $\nu$:
$$\frac{\partial f}{\partial\nu}(0,0)=\lim_{t\to0^+}\frac{f(t\cos\alpha,t\sin\alpha)}{t}=\lim\frac{(1+t^2\cos^2\alpha)t^6\cos^2\alpha\sin^4\alpha}{(t^2\cos^2\alpha+t^4\sin^4\alpha)^2t}.$$
By an asymptotic relationship I can get rid of the $t^2$ term above and of the $t^4$ term below, since they go to 0 faster than the rest. So the limit above is the following:
$$\lim\frac{t^6\cos^2\alpha\sin^4\alpha}{t^4\cos^4\alpha\cdot t}=\lim t\frac{\cos^2\alpha\sin^4\alpha}{\cos^4\alpha}=0.$$
Remember the limit is for $t\to0$. So not only do <em>all</em> d.d.'s exist, but they are also all equal to $\nabla f(0,0)\cdot\nu$, where $\cdot$ is the scalar product. Is it my impression, or does this imply the function is <em>differentiable</em> at the origin? And has the zero function as its differential? Well that <em>is</em> interesting. This means the converse of the total differential theorem is not true, since the theorem states that if the partial derivatives exist in a neighborhood of a point and are continuous at that point then the function is differentiable at that point, and here we have a function whose derivatives exist in the neighborhood but are not continuous at the point, and yet it is differentiable. Which makes us wonder: is there any necessary and sufficient condition for differentiability?</p>
<p><strong>Update:</strong>
No, the function is not differentiable, nor even continuous, at the origin. So having all d.d.s and them being equal to gradient times direction versor doesn't imply differentiability. I wonder how I could have thought that. The function is not continuous because approaching from $y=\sqrt x$ we get the limit of $\frac{(1+x^2)x^2x^2}{(x^2+x^2)^2}=\frac{(1+x^2)x^4}{4x^4}=\frac{1+x^2}{4}$, and that limit, for $x\to0$ is $\frac14\neq0$.</p>
<p>And now the dollar that was previously missing is in place, and you can read this update at last :).</p>
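<p>The two limits mentioned in the update are easy to confirm numerically:</p>

```python
def f(x, y):
    return (1 + x**2) * x**2 * y**4 / (x**4 + 2 * x**2 * y**4 + y**8)

# Along the line y = x the values tend to 0 ...
print([round(f(t, t), 8) for t in (1e-1, 1e-2, 1e-3)])
# ... but along the curve y = sqrt(x), i.e. points (t^2, t), they tend to 1/4,
# so f is not continuous (hence not differentiable) at the origin.
print([round(f(t**2, t), 8) for t in (1e-1, 1e-2, 1e-3)])
```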
|
4,192,869 | <p>What is the difference between a set being an element of a <span class="math-container">$\sigma$</span>-algebra compared to being a subset of a <span class="math-container">$\sigma$</span>-algebra?</p>
| user0102 | 322,814 | <p>Let <span class="math-container">$\Omega$</span> be a nonempty set. We say that a class of subsets of <span class="math-container">$\Omega$</span> denoted by <span class="math-container">$\Sigma$</span> is a <span class="math-container">$\sigma$</span>-algebra iff</p>
<ol>
<li><span class="math-container">$\Omega\in\Sigma$</span>,</li>
<li><span class="math-container">$A^{c}\in\Sigma$</span> whenever <span class="math-container">$A\in\Sigma$</span>,</li>
<li><span class="math-container">$\Sigma$</span> is closed under countable unions, that is to say:
<span class="math-container">\begin{align*}
A_{k}\in\Sigma \text{ for all } k \Rightarrow \bigcup_{k=1}^{\infty}A_{k}\in\Sigma
\end{align*}</span></li>
</ol>
<p>So an <em>element</em> of <span class="math-container">$\Sigma$</span> is a subset of <span class="math-container">$\Omega$</span>, while a <em>subset</em> of <span class="math-container">$\Sigma$</span> is a collection of subsets of <span class="math-container">$\Omega$</span>.</p>
<p>Hopefully this helps!</p>
|
205,479 | <p>There are $K$ items indexed $X_1, X_2, \ldots, X_K$ in the pool. Person A first randomly take $K_A$ out of these $K$ items and put them back to the pool. Person B then randomly take $K_B$ out of these $K$ items. What is the expectation of items that was picked by B but not taken by A before?</p>
<p>Assuming $K_A \geq K_B$, the formula I get is,</p>
<p>\begin{equation}
E = \sum_{i=1}^{K_B} i \frac{{{K}\choose{K_A}}{{K_A}\choose{K_B - i}}{{K - K_A}\choose{i}}}{{{K}\choose{K_A}}{{K}\choose{K_B}}}
\end{equation}</p>
<p>When $K_B > K_A$, I can derive similar formulas. I am wondering if there is a way to simplify this formula? Thanks.</p>
| Community | -1 | <p>André's solution is the best one, of course. </p>
<p>But for the sheer fun of it, let's calculate the sum
\begin{equation}
E = \sum_{i=1}^{K_B} i \frac{{{K}\choose{K_A}}{{K_A}\choose{K_B - i}}{{K - K_A}\choose{i}}}{{{K}\choose{K_A}}{{K}\choose{K_B}}}
\end{equation}
First, cancel the common factor
$$E = \sum_{i} i \frac{{K_A\choose K_B - i}{{K - K_A}\choose{i}}}{{{K}\choose{K_B}}}.$$</p>
<p>The <a href="http://en.wikipedia.org/wiki/Binomial_coefficient#Identities_involving_binomial_coefficients" rel="nofollow">absorption identity (4)</a> lets us get rid of the factor "$i$"<br>
$$E = \sum_{i} \frac{{K_A\choose K_B - i}(K-K_A) {{K - K_A-1}\choose{i-1}}}{{{K}\choose{K_B}}},$$
so that<br>
$$E = {(K-K_A)\over {K\choose K_B}} \sum_{i} {K_A\choose K_B - i} {K - K_A-1 \choose i-1}.$$</p>
<p>Using <a href="http://en.wikipedia.org/wiki/Binomial_coefficient#Identities_involving_binomial_coefficients" rel="nofollow">Vandermonde's convolution (7a)</a> we get
$$E = {(K-K_A)\over {K\choose K_B}} {K-1 \choose K_B - 1},$$
and using the absorption identity once more we arrive at
$$E = (K-K_A)\,{K_B\over K}.$$</p>
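<p>For what it's worth, the identity can be spot-checked numerically (a Python sketch; the parameter triples in the test are arbitrary choices with $K_A\ge K_B$):</p>

```python
from math import comb
from fractions import Fraction

def e_sum(K, KA, KB):
    # direct evaluation of the sum, after cancelling the common C(K, K_A)
    return sum(Fraction(i * comb(KA, KB - i) * comb(K - KA, i), comb(K, KB))
               for i in range(1, KB + 1))

def e_closed(K, KA, KB):
    # the simplified answer: E = (K - K_A) * K_B / K
    return Fraction((K - KA) * KB, K)
```

(Python's `math.comb` returns $0$ when the lower index exceeds the upper one, which matches the convention used in the sum.)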
|
6,431 | <p>I hate to sound like a broken record, but closing <a href="https://math.stackexchange.com/q/219906/12042">this question</a> as <em>not constructive</em> makes no sense to me. The canned explanation reads in relevant part:</p>
<blockquote>
<p>We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.</p>
</blockquote>
<p>In fact the question has a short, simple answer that can be supported by facts: the information given is not sufficient to answer the question, and this can easily be illustrated with a couple of examples.</p>
<p><em>Not a real question</em> comes a little closer to being a legitimate reason for closure; that was the reason chosen by those who closed the OP’s <a href="https://math.stackexchange.com/q/218843/12042">one previous question</a>. But even that isn’t really accurate, since it seems quite clear what the question is asking, and indeed I provided an answer before the question was closed.</p>
<p>I said nothing when that question was closed, because it <em>had</em> been answered. This one has not, even in the comments, and I can see no reason to have closed it instead of answering it, let alone closing it with a specious reason. I should really like to know the thinking behind doing so. This is MSE, after all, not MO.</p>
<p>I’m not suggesting that the question should be re-opened, by the way: it now has an answer in the comments that is at least marginally adequate. I do think, though, that the OP has been treated rather shabbily.</p>
| Noah Snyder | 8 | <p>One thing to keep in mind is that the software picks the reason that got a majority of the close votes. I agree that "not constructive" is not applicable to this question and is a little harsh. I voted as "too localized" as it didn't seem to me that this question is of interest to anyone not working on this exact exercise.</p>
|
1,617,269 | <p>Let X and Y be independent random variables with probability density functions
$$f_X(x) = e^{-x} , x>0$$
$$f_Y(y) = 2e^{-2y} , y>0$$</p>
<p>Derive the PDF of $Z_1 = X + Y$</p>
<p>other cases: $Z =min(X,Y)$ , $Z =1/Y^2 $ , $Z =e^{-2y} $ </p>
<p>Just considering the 1st part, I understand that I should start from
$P(X + Y \le z)$, then $$\int_{0}^{z} f_{XY}(z-y,y)\, dy $$ since they are independent I integrate $f_X(z-y) f_Y(y)$ with respect to $y$. I come to $-e^{-z}$.</p>
<p>This is as much as I have managed to pick up, but I am still very much unsure.
What would I need to look out for in the other cases? As for $\min(X,Y)$, I'd have no idea of how to start.</p>
| Em. | 290,196 | <p>It appears that
\begin{align*}
P(X+Y<z)&=\int_0^z\int_0^{z-y} f_X(x)f_Y(y)\,dxdy\tag 1\\
&=\int_0^z\int_0^{z-y} e^{-x}\cdot2e^{-2y}\,dxdy\\
&=\int_0^z 2e^{-2y}\left(1-e^{-(z-y)}\right)\,dy\\
&=\int_0^z 2e^{-2y}\,dy-2e^{-z}\int_0^z e^{-y}\,dy\\
&=\left(1-e^{-2z}\right)-2e^{-z}\left(1-e^{-z}\right)\\
&=1-2e^{-z}+e^{-2z}
\end{align*}
where $(1)$ is true by independence, and the outer variable runs only up to $z$ because the region $\{x>0,\ y>0,\ x+y<z\}$ forces $y<z$. Differentiating in $z$ gives the pdf $f_{Z_1}(z)=2e^{-z}-2e^{-2z}$ for $z>0$.</p>
<p>As for $Z = \min\{X,Y\}$, I would go after the cdf
\begin{align*}
P(Z\leq z) &= 1-P(Z>z)\\
&=1-P(X> z, Y> z)\\
&=1-P(X> z)P(Y> z)\tag2\\
\end{align*}
where $(2)$ is true by independence.</p>
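<p>As a sanity check: since the region $\{x>0,\,y>0,\,x+y<z\}$ forces $y<z$, the CDF of the sum works out to $1-2e^{-z}+e^{-2z}$, and the minimum satisfies $\min(X,Y)\sim\operatorname{Exp}(3)$. A quick Monte Carlo sketch (sample size and seed are arbitrary choices) agrees with both:</p>

```python
import math
import random

random.seed(1)
N = 200_000
xs = [random.expovariate(1.0) for _ in range(N)]  # X ~ Exp(1), density e^{-x}
ys = [random.expovariate(2.0) for _ in range(N)]  # Y ~ Exp(2), density 2 e^{-2y}

def cdf_sum(z):
    # derived CDF of X + Y
    return 1 - 2 * math.exp(-z) + math.exp(-2 * z)

def cdf_min(z):
    # min(X, Y) is exponential with rate 1 + 2 = 3
    return 1 - math.exp(-3 * z)

def empirical(samples, z):
    return sum(s < z for s in samples) / len(samples)

sums = [x + y for x, y in zip(xs, ys)]
mins = [min(x, y) for x, y in zip(xs, ys)]
```

With $N=200{,}000$ samples, the empirical CDFs match the formulas to well within $0.01$.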
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| mpiktas | 4,742 | <p>In <strong>Probability theory</strong> I was told by my professor, that that 3 most important theorems are Central Limit Theorem, Law of Large Numbers and Law of the Iterated Logarithm.</p>
|
3,009,387 | <p>I'm asking the following: is it true that if <span class="math-container">$K$</span> is a normal subgroup of <span class="math-container">$G$</span> and <span class="math-container">$K\leq H\leq G$</span> then <span class="math-container">$K$</span> is normal in <span class="math-container">$H$</span>? I tried to prove it but I failed to do so, so I'm starting to suspect that it is not true. Can you provide me a proof or a counterexample of this statement, or a hint about its proof? </p>
| Peter Szilas | 408,605 | <p>Numerator</p>
<p><span class="math-container">$2x^2-50=2(x-5)(x+5)$</span>.</p>
<p>Denominator</p>
<p><span class="math-container">$2x^2+3x -35 =(2x-7)(x+ 5)$</span></p>
<p><span class="math-container">$\dfrac{2(x-5)(x+5)}{(2x-7)(x+5)}=$</span></p>
<p><span class="math-container">$\dfrac{2(x-5)}{2x-7}.$</span></p>
<p>Take the limit <span class="math-container">$x \rightarrow -5$</span> to get <span class="math-container">$\dfrac{2(-10)}{-17}=\dfrac{20}{17}$</span>.</p>
<p>Try to factorize the original expression.The term <span class="math-container">$(x+5) $</span> cancels out .</p>
|
2,853,401 | <p>Assume $E\neq \emptyset $, $E \neq \mathbb{R}^n $. Then prove $E$ has at least one boundary point. (i.e $\partial E \neq \emptyset $).</p>
<p>================= </p>
<p>Here is what I tried.<br>
Consider $P_0=(x_1,x_2,\dots,x_n)\in E,P_1=(y_1,y_2,\dots,y_n)\notin E $.<br>
Denote $P_t=(ty_1+(1-t)x_1,ty_2+(1-t)x_2,\dots,ty_n+(1-t)x_n) $, $0\le t\le 1$.<br>
$ t_0=\sup\{t |P_t \in E\} $. And then I wanted to prove that $P_{t_0}\in \partial E$. </p>
<p>A. If $P_{t_0}\in E$, then $t_0\neq 1$, otherwise $P_{t_0}=P_1$. And by definition, $P_t \notin E$ for $t_0 \lt t \leq 1 $. Choose $t_n$ such that $1\gt t_n\gt t_0$, $t_n \to t_0$, which makes $P_{t_n} \notin E$, but $P_{t_n} \to P_{t_0}$. Then $P_{t_0} \in \partial E$. </p>
<p>B. If $P_{t_0}\notin E$, then $t_0\neq 0$, otherwise $P_{t_0}=P_0$. Then choose $t_n$ such that $0\lt t_n\lt t_0$, $t_n \to t_0$, therefore $P_{t_n} \to P_{t_0}$ and $P_{t_n} \in E$. Hence $P_{t_0} \in \partial E$.</p>
<p>Thus we have $\partial E \neq \emptyset$. </p>
<p>Am I correct? The construction of $P_t$ is a hint from my elder. What I am wondering about is this step in A (and similarly in B):
$$P_{t_n} \to P_{t_0} \Rightarrow P_{t_0} \in \partial E$$
I can somewhat imagine this. But how do I make this step rigorous?</p>
| DanielWainfleet | 254,665 | <p>Take $p\in E.$ For $0\ne q\in \Bbb R^n$ let $S(q)=\{x\geq 0: \{p+yq: 0\leq y\leq x\}\subset E\}.$ </p>
<p>Take $q$ such that $z=\sup S(q)<\infty$. Such $q$ exists, otherwise $E=\Bbb R^n.$ </p>
<p>(i). If $p+zq\in E$ then $p+zq$ is in the closure of $E^c\cap \{p+yq:y>z\},$ which is a subset of $\overline {E^c},$ so $p+zq\in \overline {E^c}\cap E\subset \partial E.$ </p>
<p>(ii). If $p+zq\in E^c$ then $z>0$ and $p+zq$ is in the closure of $\{p+yq:0\leq y<z\},$ which is a subset of $\overline E,$ so $p+zq\in \overline E \cap E^c\subset \partial E.$</p>
|
58,209 | <p>Question: Of the following, which is the best approximation of
$$\sqrt{1.5}(266)^{3/2}$$</p>
<p>$$(A)~1,000~~~~(B)~2,700~~~~(C)~3,200~~~~(D)~4,100~~~~(E)~5,300$$</p>
<p>I used $1.5\approx1.44=1.2^2$ and $266\approx256=16^2$. Therefore the approximation by me is $4096$, so I chose $(D)$ which is wrong. The correct answer is $(E)$.</p>
<p>How should I find it out?</p>
| anon | 11,763 | <p>$$\sqrt{1.5}\cdot266^{3/2}\approx1.2 \times 16^3 = 4915.2$$</p>
<p>The closest answer is (E) 5300. Great intuition on how to find simple approximations, but you forgot to multiply by $1.2$! Also note that $1.44<1.5$ and $256<266$, so you know the true answer must be above the discovered approximation, leaving only the last answer.</p>
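<p>A quick numerical check of these estimates (a Python sketch):</p>

```python
import math

exact = math.sqrt(1.5) * 266 ** 1.5   # the true value, about 5313
rough = 1.2 * 16 ** 3                 # using 1.5 ~ 1.44 and 266 ~ 256
options = {"A": 1000, "B": 2700, "C": 3200, "D": 4100, "E": 5300}
closest = min(options, key=lambda k: abs(options[k] - exact))
```

As noted, the rough estimate $4915.2$ sits below the true value, and the nearest listed option is (E).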
|
58,209 | <p>Question: Of the following, which is the best approximation of
$$\sqrt{1.5}(266)^{3/2}$$</p>
<p>$$(A)~1,000~~~~(B)~2,700~~~~(C)~3,200~~~~(D)~4,100~~~~(E)~5,300$$</p>
<p>I used $1.5\approx1.44=1.2^2$ and $266\approx256=16^2$. Therefore the approximation by me is $4096$, so I chose $(D)$ which is wrong. The correct answer is $(E)$.</p>
<p>How should I find it out?</p>
| kuch nahi | 8,365 | <p>$$\sqrt{\frac{3}{2}} \cdot ( \sqrt{266})^3 =\sqrt{\frac{3\cdot 266}{2}}\cdot (\sqrt{266})^2 = \sqrt{399} \cdot 266 \approx 266 \cdot 20 = 5320$$</p>
<p>This is closest to option (E)</p>
<p><strong>Edit</strong>: Note that the only approximation I used here is $\sqrt{399}\approx \sqrt{400}$ so the result will differ by a factor of $\frac{\sqrt{399}}{20}$. This can be quickly approximated too, $\frac{\sqrt{399}}{20} = \sqrt{1-1/400} \approx 1 - 1/800 =0.9987 $.</p>
|
1,917,942 | <p>Prove that every homogeneous equation of second degree in $x$ and $y$ represents a pair of lines, each passing through the origin.</p>
<p>My Attempt:
Let $ax^2+2hxy+by^2=0$ be a homogeneous equation of second degree in $x$ and $y$.</p>
<p>We can write this equation as
$$by^2+2hxy+ax^2=0$$
Dividing both sides by $x^2$,</p>
<p>$$b\frac {y^2}{x^2} + 2h\frac {y}{x} +a=0$$.</p>
<p>Now, what should I do to continue.</p>
<p>Please help.</p>
| coffeemath | 30,316 | <p>You have divided by $x^2,$ but no loss there since if $x^2=0$ then $x=0$ and that leads to $y=0$ provided $b \neq 0.$ In that case the point $(0,0)$ is on it.</p>
<p>If it happens that $b=0,$ then it factors as $x(ax+2hy)=0,$ and setting each factor to zero gives a line through the origin as desired.</p>
<p>Now assume $x \neq 0$ so your steps so far are OK. Then your final equation is a quadratic in the quantity $u=y/x$ (unless $b=0$ already dealt with). So the next step would be to solve this quadratic for $u.$ If you can show you always get two solutions, then that leads back to two lines on putting $y/x$ equal to the two solutions of the quadratic. </p>
<p>I haven't checked what happens if there's a double root, or imaginary roots.</p>
<p>Added note: the equation $y^2=0$ is homogeneous and represents only one line in the plane, namely the $x$ axis. As another example, only the point $(0,0)$ satisfies $x^2+xy+y^2=0$ so in that case the equation represents only a single point (the origin) which would likely not qualify as "two lines." So to make sure one gets two lines extra condition(s) are needed on $a,b,c.$</p>
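<p>To make the quadratic-in-$u$ step concrete, here is a small sketch (assuming $b\neq0$): it returns the slopes $m$ of the lines $y=mx$, an empty list when the form is definite, and a repeated slope when the two lines coincide.</p>

```python
import math

def line_slopes(a, h, b):
    """Slopes m of the lines y = m x solving a x^2 + 2 h x y + b y^2 = 0 (b != 0).

    Substituting y = m x gives b m^2 + 2 h m + a = 0.
    """
    disc = h * h - a * b          # quarter-discriminant of b m^2 + 2 h m + a
    if disc < 0:
        return []                 # definite form: only the origin
    r = math.sqrt(disc)
    return sorted([(-h - r) / b, (-h + r) / b])
```

For example, $x^2-y^2=(x-y)(x+y)$ gives slopes $\pm1$, while $x^2+xy+y^2=0$ gives no real lines, matching the added note above.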
|
1,917,942 | <p>Prove that every homogeneous equation of second degree in $x$ and $y$ represents a pair of lines, each passing through the origin.</p>
<p>My Attempt:
Let $ax^2+2hxy+by^2=0$ be a homogeneous equation of second degree in $x$ and $y$.</p>
<p>We can write this equation as
$$by^2+2hxy+ax^2=0$$
Dividing both sides by $x^2$,</p>
<p>$$b\frac {y^2}{x^2} + 2h\frac {y}{x} +a=0$$.</p>
<p>Now, what should I do to continue.</p>
<p>Please help.</p>
| H. H. Rugh | 355,946 | <p>Writing $ax^2+2b xy +c y^2$ for the quadratic form, the number of lines depends upon the eigenvalues of the matrix</p>
<p>$$ \left(\begin{matrix} a & b \\ b & c \end{matrix} \right)$$</p>
<p>If the matrix is positive or negative definite there is only the origin when setting the form to zero (I assume we are in real space?). If zero is an eigenvalue there is one line only. If there is one positive, one negative eigenvalue you get two lines.
Assume $a\neq0$.
You may rewrite the matrix in the form:
$$
\left(\begin{matrix} 1 & 0 \\ b/a & 1 \end{matrix} \right)
\left(\begin{matrix} a & 0 \\ 0 & c-b^2/a \end{matrix} \right)
\left(\begin{matrix} 1 & b/a \\ 0 & 1 \end{matrix} \right)
$$ Here $c-b^2/a$ must be negative (to get two lines) and your quadratic form being zero gives:
$$ a (x+b/a\; y)^2 = (b^2/a - c)y^2$$
Taking $\pm$ square roots you get two lines. The case $c\neq0$ is treated in a similar way and the case $a=c=0$ is easily solved. </p>
<p>Remark: there is probably a solution with a nicer symmetry in the constants.</p>
|
362,926 | <p>I have a problem that looks like this:</p>
<p>$$\frac{20x^5y^3}{5x^2y^{-4}}$$</p>
<p>Now they said that the "rule" is that when dividing exponents, you bring them on top as a negative like this:</p>
<p>$$4x^{5-2}*y^{3-(-4)}$$</p>
<p>That doesn't make too much sense though. A term like $y^{-4}$ is essentially saying $\large \frac 1{y^4}$ in the denominator because a negative exponent is the opposite of a positive exponent and you use division. And so here you are dividing by $y$ four times. So if that's the case, you cross multiply: $\large \frac{1}{y^4} \frac{y^4}{1}$ on bottom and then of course to keep balance, you multiply $\large \frac{y^4}{1}$ on top to get this:</p>
<p>$$4x^{5 - 2}y^{3 + 4}$$</p>
<p>Now look at my solution and look at the other one. They get the same answer but through different means. I don't see how they get $y^{3-(-4)}$.</p>
| amWhy | 9,003 | <p>There is nothing wrong with your thinking. You are correct.</p>
<p>But so is the text: Note that you do indeed arrive at the same answer. </p>
<p>$$ 4x^{5-2}y^{3-(-4)}= 4x^{5-2}y^{3+4} = 4x^3y^7 $$</p>
<p>The textbook did exactly what you <em>both did</em> with $\dfrac{x^5}{x^2} = x^{5-2}$: subtracting the exponent $2$ which is in the denominator, from the exponent of $x$ in the numerator: $x^{5-2}$ Since $y^{-4}$ is in the denominator, we <em>subtract</em> $-4$ from the exponent of $y$ in the numerator, giving $y^{3-(-4)} =y^{3+4}$.</p>
<p>Note that what the text refers to as a "rule" is simply a method for obtaining the correct value of the exponent. But the rule is based in the laws of exponents, and can be similarly justified consistently with the logic you applied.</p>
<p>You approached this, if I understand correctly, like this:</p>
<p>$$\frac{20x^5y^3}{5x^2y^{-4}} = 4x^{5-2}\cdot \frac{y^3}{\frac 1{y^{4}}} \cdot \frac{y^4}{y^4} = 4x^{5-2}y^{3+4} = 4x^3 y^7$$</p>
<p>And that's a perfectly fine way to handle the problem.</p>
<p>Essentially, what's happening here is that we have $$\frac{20x^5y^3}{5x^2y^{-4}} = \dfrac{\not{5}\cdot 4 \cdot \not{x^2}\cdot x^3\cdot y^3\cdot y^{4}}{\not{5}\cdot \not{x^2}\cdot {\large \frac{1}{{ \not{y^4}}}}\cdot \not{y^4}} = 4x^3y^{3+4} = 4x^3y^7$$</p>
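<p>A quick numerical spot check of the simplification, at arbitrary nonzero test values:</p>

```python
x, y = 1.7, 0.9  # arbitrary nonzero test values
lhs = 20 * x**5 * y**3 / (5 * x**2 * y**-4)
rhs = 4 * x**3 * y**7
```

Both sides agree to floating-point precision, as the exponent laws predict.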
|
4,345,671 | <p>I have a series of cubic polynomials that are being used to create a trajectory, where constraints can be applied to each polynomial so that these 4 parameters are satisfied:</p>
<ul>
<li>Initial position</li>
<li>Final position</li>
<li>Initial velocity</li>
<li>Final velocity</li>
</ul>
<p>The polynomials are pieced together such that the ends of one polynomial are identical to the beginnings of the next to preserve continuity.</p>
<p>I instead want to represent these polynomials as cubic Bézier curves.</p>
<p><strong>How would I find the x,y position of each control point for the cubic Bézier curves, such that it matches the curvature of the cubic polynomial?</strong></p>
<p>Here is what I have so far, made in desmos.</p>
<p><a href="https://www.desmos.com/calculator/agsywptfno" rel="nofollow noreferrer">https://www.desmos.com/calculator/agsywptfno</a></p>
<p>Currently the Bézier curve is defined parametrically, with one polynomial for each of X and Y,
e.g. Bezier = (X(t), Y(t)).</p>
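<p>For reference, a standard Hermite-to-Bézier conversion that I believe applies here (a sketch under the assumption that each segment is parameterized on $t\in[0,1]$, so the endpoint velocities are derivatives with respect to that parameter): the control points are $p_0$, $p_0+v_0/3$, $p_1-v_1/3$, $p_1$.</p>

```python
def hermite_to_bezier(p0, p1, v0, v1):
    """Control points of the cubic Bezier with endpoint positions p0, p1
    and endpoint velocities v0, v1, for a segment parameterized on [0, 1]."""
    b1 = (p0[0] + v0[0] / 3.0, p0[1] + v0[1] / 3.0)
    b2 = (p1[0] - v1[0] / 3.0, p1[1] - v1[1] / 3.0)
    return p0, b1, b2, p1

def bezier_point(ctrl, t):
    """Evaluate the cubic Bezier with control points ctrl at parameter t."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = ctrl
    u = 1.0 - t
    x = u**3 * x0 + 3 * u**2 * t * x1 + 3 * u * t**2 * x2 + t**3 * x3
    y = u**3 * y0 + 3 * u**2 * t * y1 + 3 * u * t**2 * y2 + t**3 * y3
    return x, y
```

This follows from $B'(0)=3(b_1-b_0)$ and $B'(1)=3(b_3-b_2)$ for a cubic Bézier; if a segment spans a parameter interval of length $h$ instead of $1$, the velocity offsets scale to $v\,h/3$.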
| bubba | 31,744 | <p>After suitable scaling, the sides of the pseudo-triangle shown in the answer from Andrew Hwang are the graphs of <span class="math-container">$\cos(x)$</span> and <span class="math-container">$-\cos(x)$</span> curves, over the range <span class="math-container">$0$</span> to <span class="math-container">$\tfrac12 \pi$</span>. By integrating the cosine function over this range, it's easy to show that the pseudo-triangle has <span class="math-container">$4/\pi$</span> times the area of the corresponding triangle.</p>
<p>The OP got the answer <span class="math-container">$\pi^2 R^2$</span> by adding up the areas of the triangles. If you multiply this by the area ratio <span class="math-container">$4/\pi$</span>, you get the correct answer: <span class="math-container">$\pi^2 R^2 \times 4/\pi = 4\pi R^2$</span>.</p>
|
764,905 | <p>Calculate $$\int_{D}(x-2y)^2\sin(x+2y)\,dx\,dy$$ where $D$ is a triangle with vertices in $(0,0), (2\pi,0),(0,\pi)$.</p>
<p>I've tried using the substitution $g(u,v)=(2\pi u, \pi v)$ to make it a BIT simpler but honestly, it doesn't help much.</p>
<p>What are the patterns I need to look for in these problems so I can get an integral that's viable to calculate? Everything I try always leads to integrating a huge function and that's extremely error prone.</p>
<p>I mean, I can obviously see the $x-2y$ and $x+2y$ but I don't know how to use it to my advantage. Also, when I do my substitution, I get $\sin(2\pi(u+v))$ and in the triangle I have, $u+v$ goes from 0 to 1, so the $\sin$ goes full circle. Again, no idea if that helps me.</p>
<p>Any help appreciated.</p>
| colormegone | 71,645 | <p>This problem seems to have been designed for the use of variable substitutions and a Jacobian determinant. <strong>Luka Horvat</strong>'s intuition is proper, and <strong>Santiago Canez</strong> makes the proposal, that substitutions <span class="math-container">$ \ u \ = \ x - 2y \ , \ v \ = \ x + 2y \ $</span> , will be helpful. The triangular region of integration is transformed into one symmetrical about the <span class="math-container">$ \ y-$</span> axis, as seen below.</p>
<p><img src="https://i.stack.imgur.com/BS3Ut.png" alt="enter image description here" /></p>
<p>In order to complete the expression of the transformed integral, we need to calculate the Jacobian determinant <span class="math-container">$ \ \mathfrak{J} \ $</span> of the transformation. We can either find the determinant of the <em>inverse</em> transformation,</p>
<p><span class="math-container">$$ \mathfrak{J}^{-1} \ = \ \left| \ \begin{array}{cc} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{array} \ \right| \ = \ \left| \ \begin{array}{cc} 1 & -2 \\ 1 & 2 \end{array} \ \right| \ = \ 4 \ \ , $$</span></p>
<p>and use <span class="math-container">$ \ \mathfrak{J} \ = \ \frac{1}{\mathfrak{J}^{-1}} \ = \ \frac{1}{4} \ $</span> , <strong>or</strong> solve for <span class="math-container">$ \ x \ $</span> and <span class="math-container">$ \ y \ $</span> in terms of <span class="math-container">$ \ u \ $</span> and <span class="math-container">$ \ v \ $</span> [not difficult for <em>these</em> variables] to obtain <span class="math-container">$ \ x \ = \ \frac{u+v}{2} \ , $</span> <span class="math-container">$ y \ = \ \frac{v-u}{4} \ $</span> and the determinant for the transformation,</p>
<p><span class="math-container">$$ \mathfrak{J} \ = \ \left| \ \begin{array}{cc} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{array} \ \right| \ = \ \left| \ \begin{array}{cc} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{4} & \frac{1}{4} \end{array} \ \right| \ = \ \frac{1}{4} \ \ . $$</span></p>
<p>The integral carried over the transformed triangle can be split (at least for the moment) into left- and right-hand halves as</p>
<p><span class="math-container">$$ \int_{-2 \pi}^0 \int_{-u}^{2 \pi} \ \mathfrak{J} \cdot (u^2 \sin v) \ \ dv \ du \ \ + \ \ \int^{2 \pi}_0 \int_{u}^{2 \pi} \ \mathfrak{J} \cdot (u^2 \sin v) \ \ dv \ du $$</span></p>
<p><span class="math-container">$$ = \ \ \frac{1}{4} \ \left[ \ \int_{-2 \pi}^0 \ (-u^2 \cos v) \vert_{-u}^{2 \pi} \ \ du \ \ + \ \ \int^{2 \pi}_0 \ (-u^2 \cos v) \vert_{u}^{2 \pi} \ \ du \ \right] $$</span></p>
<p><span class="math-container">$$ = \ \ \frac{1}{4} \ \left[ \ \int_{-2 \pi}^0 \ \left( -u^2 \ [\cos (2 \pi) \ - \ \cos(-u) ] \ \right) \ \ du \ \ + \ \ \int^{2 \pi}_0 \ \left( -u^2 \ [\cos (2 \pi) \ - \ \cos(u) ] \ \right) \ \ du \ \right] $$</span></p>
<p><span class="math-container">$$ = \ \ \frac{1}{4} \ \left[ \ \int_{-2 \pi}^0 \ \left( -u^2 \ [1 \ - \ \cos(u) ] \ \right) \ \ du \ \ + \ \ \int^{2 \pi}_0 \ \left( -u^2 \ [1 \ - \ \cos(u) ] \ \right) \ \ du \ \right] $$</span></p>
<p>[the terms of the integrands are even functions, so we can merge the integrals and exploit the symmetry]</p>
<p><span class="math-container">$$ = \ \ \frac{1}{4} \ \int_{-2 \pi}^{2 \pi} \ u^2 \cos u \ - \ u^2 \ \ du \ \ = \ \ \frac{1}{4} \cdot 2 \ \int_0^{2 \pi} \ u^2 \cos u \ - \ u^2 \ \ du $$</span></p>
<p>[integrating the first term of the integrand by parts twice]</p>
<p><span class="math-container">$$ = \ \ \frac{1}{2} \ \left( \ [ \ (u^2 - 2) \sin u \ + \ 2u \cos u \ ] \ - \ \frac{1}{3}u^3 \ \right) \vert_0^{2 \pi} $$</span></p>
<p><span class="math-container">$$ = \ \frac{1}{2} \ \left[ \ 2 \cdot 2 \pi \cdot \cos (2 \pi) \ - \ \frac{1}{3} (2 \pi)^3 \ \right] \ \ = \ \ 2 \pi \ - \ \frac{4 \pi^3}{3} \ \ . $$</span></p>
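<p>The value $2 \pi - \frac{4 \pi^3}{3} \approx -35.06$ can be sanity-checked with a midpoint Riemann sum over the original triangle (a Python sketch; the grid size is an arbitrary choice):</p>

```python
import math

def integral_numeric(n=1000):
    # midpoint rule on an n-by-n grid over the bounding box [0, 2pi] x [0, pi],
    # keeping only midpoints inside the triangle x + 2y <= 2pi
    hx, hy = 2 * math.pi / n, math.pi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            y = (j + 0.5) * hy
            if x + 2 * y <= 2 * math.pi:      # hypotenuse of the triangle
                total += (x - 2 * y) ** 2 * math.sin(x + 2 * y)
    return total * hx * hy

exact = 2 * math.pi - 4 * math.pi ** 3 / 3
```

The boundary error is small here because the integrand vanishes on the hypotenuse, where $\sin(x+2y)=\sin(2\pi)=0$.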
|
3,200,330 | <p>Suppose <span class="math-container">$\Phi: A\to A$</span> is a transformation of the set <span class="math-container">$A$</span>. I want to understand what it means for a subset <span class="math-container">$B\subseteq A$</span> to be invariant under <span class="math-container">$\Phi$</span>. </p>
<p><a href="https://press.princeton.edu/titles/3098.html" rel="nofollow noreferrer">Halmos</a> states that this means <span class="math-container">$\forall b\in B (\Phi(b)\in B)$</span>. Later on, in the same book, he characterizes an invariant subset <span class="math-container">$B$</span> as one satisfying <span class="math-container">$\Phi(B)\subset B$</span>. I want to understand if these two are equivalent.</p>
<p>It seems to me that the statement <span class="math-container">$\forall b\in B (\Phi(b)\in B)$</span> is equivalent to <span class="math-container">$B\subset \Phi^{-1}(B)$</span>, (<span class="math-container">$\Phi^{-1}$</span> here denotes the inverse image, which is always defined; not the inverse of the function) and not <span class="math-container">$\Phi(B)\subset B$</span>. So my question is this: </p>
<blockquote>
<p>What does it really mean for a subset <span class="math-container">$B$</span> to be invariant under <span class="math-container">$\Phi$</span>? is it <span class="math-container">$B\subset \Phi^{-1}(B)$</span> or <span class="math-container">$\Phi(B)\subset B$</span>; or are the above characterizations equivalent for an arbitrary <span class="math-container">$\Phi$</span>?</p>
</blockquote>
<p>A tiny playing around with the definitions tells me that <span class="math-container">$A\subset B \Rightarrow \Phi(A)\subset \Phi(B)$</span>, but in general the arrow doesn't go the other way around, i.e., <span class="math-container">$\Phi(A)\subset \Phi(B)$</span> does not, in general, imply that <span class="math-container">$A\subset B$</span>. So the above definitions of invariant subset are not equivalent.</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$B \subset \Phi^{-1}(B)$</span> is equivalent to <span class="math-container">$\Phi (B) \subset B$</span>. Both say the same thing: whenever <span class="math-container">$b \in B$</span> we also have <span class="math-container">$\Phi (b) \in B$</span>.</p>
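<p>The equivalence can even be checked exhaustively on a small finite example (a Python sketch; the particular map is an arbitrary choice):</p>

```python
from itertools import combinations

A = set(range(8))
phi = {a: (3 * a) % 8 for a in A}        # an arbitrary self-map of A

def image(B):
    return {phi[b] for b in B}

def preimage(B):
    return {a for a in A if phi[a] in B}

def equivalence_holds():
    # check Phi(B) subset B  <=>  B subset Phi^{-1}(B) for every subset B of A
    for r in range(len(A) + 1):
        for combo in combinations(sorted(A), r):
            B = set(combo)
            if (image(B) <= B) != (B <= preimage(B)):
                return False
    return True
```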
|
1,220,790 | <p>Consider the Black-Scholes equation,</p>
<p>$$
\frac{\partial V}{\partial t } + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2 } + ( r-q )S\frac{\partial V}{\partial S }-rV =0
$$</p>
<p>How do I show that if $V( S, t)$ is a solution, then $S(\frac{\partial V}{\partial S })$ is also a solution?</p>
<p>I tried substituting $S(\frac{\partial V}{\partial S })$ into the equation and working through the calculations, but it doesn't seem to work out.</p>
<p>On a related note, how do we also show that for $ \beta = 1-2(r-q)/\sigma^2$, </p>
<p>$$
W(S, t) = S^\beta V(\frac{1}{S}, t)
$$</p>
<p>is also a solution?</p>
<p>The relevant partial derivatives are, </p>
<p>$$
\begin{align}
\frac{\partial W}{\partial S} & = \beta S^{\beta -1 }V- S^{\beta -2}\frac{\partial V}{\partial S}\\
\frac{\partial^2 W}{\partial S^2} & = S^{\beta -4}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -3}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta-2}V
\end{align}
$$</p>
<p>So the various terms in the PDE are,</p>
<p>$$\begin{align}
(r-q)S\frac{\partial W}{\partial S} & = (r-q) \left[ \beta S^{\beta }V- S^{\beta -1}\frac{\partial V}{\partial S} \right] \\
\frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}& =\frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta}V \right]
\end{align}
$$</p>
<p>I can see that $ (r-q) \beta S^{\beta }V$ in the top term cancels with $\frac{1}{2}\sigma^2\beta(\beta -1 )S^{\beta}V $ in the bottom term.</p>
<p>So we end up with, </p>
<p>$$
(r-q)S\frac{\partial W}{\partial S}+\frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}=-(r-q) S^{\beta -1}\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} \right]
$$</p>
<p>But beyond this I kind of stuck despite trying various manipulations.</p>
<p>Any help will be greatly appreciated!</p>
| Mark Joshi | 106,024 | <p>$$\frac{\partial V}{\partial t } + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2 } + ( r-q )S\frac{\partial V}{\partial S }-rV =0$$
We can regroup
$$\frac{\partial V}{\partial t } + \frac{1}{2}\sigma^2 \left( S\frac{\partial }{\partial S}\right)^2 V + ( r-q - \sigma^2/2 )S\frac{\partial V }{\partial S } -rV =0.$$</p>
<p>Now observe that $S \frac{\partial}{\partial S}$ commutes with all the coefficients (i.e. terms in front) of $V$, since it commutes with itself and doesn't interact with $\frac{\partial}{\partial t}.$</p>
<p>The result is now immediate. </p>
<p>(see my book Concepts etc section 5.8)</p>
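<p>A numerical illustration (a Python sketch; the parameters and the choice of the dividend-adjusted call as the known solution $V$ are illustrative assumptions, not from the question): form $W=S\,\partial V/\partial S$ by finite differences and check that the PDE residual is small for both $V$ and $W$.</p>

```python
import math

def N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# illustrative parameters (my choice)
r, q, sigma, K, T = 0.05, 0.02, 0.3, 100.0, 1.0

def V(S, t):
    """Dividend-adjusted Black-Scholes call: a known solution of the PDE."""
    tau = T - t
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * math.exp(-q * tau) * N(d1) - K * math.exp(-r * tau) * N(d2)

def W(S, t):
    """The claimed new solution W = S * dV/dS, via a central difference."""
    h = 1e-5 * S
    return S * (V(S + h, t) - V(S - h, t)) / (2.0 * h)

def pde_residual(f, S, t):
    """f_t + (sigma^2/2) S^2 f_SS + (r - q) S f_S - r f, by finite differences."""
    hS, ht = 1e-3 * S, 1e-4
    f_t = (f(S, t + ht) - f(S, t - ht)) / (2.0 * ht)
    f_S = (f(S + hS, t) - f(S - hS, t)) / (2.0 * hS)
    f_SS = (f(S + hS, t) - 2.0 * f(S, t) + f(S - hS, t)) / hS**2
    return f_t + 0.5 * sigma**2 * S**2 * f_SS + (r - q) * S * f_S - r * f(S, t)
```

Both residuals come out tiny at interior points, consistent with the operator argument.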
|
1,220,790 | <p>Consider the Black-Scholes equation,</p>
<p>$$
\frac{\partial V}{\partial t } + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2 } + ( r-q )S\frac{\partial V}{\partial S }-rV =0
$$</p>
<p>How do I show that if $V( S, t)$ is a solution, then $S(\frac{\partial V}{\partial S })$ is also a solution?</p>
<p>I tried substituting $S(\frac{\partial V}{\partial S })$ into the equation and working through the calculations, but it doesn't seem to work out.</p>
<p>On a related note, how do we also show that for $ \beta = 1-2(r-q)/\sigma^2$, </p>
<p>$$
W(S, t) = S^\beta V(\frac{1}{S}, t)
$$</p>
<p>is also a solution?</p>
<p>The relevant partial derivatives are, </p>
<p>$$
\begin{align}
\frac{\partial W}{\partial S} & = \beta S^{\beta -1 }V- S^{\beta -2}\frac{\partial V}{\partial S}\\
\frac{\partial^2 W}{\partial S^2} & = S^{\beta -4}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -3}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta-2}V
\end{align}
$$</p>
<p>So the various terms in the PDE are,</p>
<p>$$\begin{align}
(r-q)S\frac{\partial W}{\partial S} & = (r-q) \left[ \beta S^{\beta }V- S^{\beta -1}\frac{\partial V}{\partial S} \right] \\
\frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}& =\frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} + \beta(\beta -1 )S^{\beta}V \right]
\end{align}
$$</p>
<p>I can see that $ (r-q) \beta S^{\beta }V$ in the top term cancels with $\frac{1}{2}\sigma^2\beta(\beta -1 )S^{\beta}V $ in the bottom term.</p>
<p>So we end up with, </p>
<p>$$
(r-q)S\frac{\partial W}{\partial S}+\frac{1}{2}\sigma^2S^2\frac{\partial^2W}{\partial S^2}=-(r-q) S^{\beta -1}\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2\left[ S^{\beta -2}\frac{\partial^2 V}{\partial S^2} -2S^{\beta -1}\frac{\partial V}{\partial S} \right]
$$</p>
<p>But beyond this I kind of stuck despite trying various manipulations.</p>
<p>Any help will be greatly appreciated!</p>
| Danny | 203,396 | <p>Ok for the second part on $W(S,t)=S^\beta V(1/S,t)$, I've figured it out. Just some carelessness on my part when tossing terms around. It goes as follows...</p>
<p>The partial derivatives are,
$$\begin{align}
\frac{\partial W}{\partial S} & = \beta S^{\beta -1 }V- S^{\beta -2}\frac{\partial V}{\partial S}\\
\frac{\partial^2 W}{\partial S^2} & = \beta \left[ -S^{\beta-3}\frac{\partial V}{\partial S} + (\beta-1)S^{\beta-2}V \right] - \left[ -S^{\beta-4} \frac{\partial^2 V}{\partial S^2} + (\beta-2)S^{\beta -3}\frac{\partial V}{\partial S} \right]
\end{align}$$</p>
<p>Let $x=1/S$ and factor out $S^\beta$, </p>
<p>$$\begin{align}
\frac{\partial W}{\partial S} & = S^{\beta } \left[ \beta xV -x^2\frac{\partial V}{\partial S} \right] \\
\frac{\partial^2 W}{\partial S^2}& = S^{\beta } \left[ \beta \left( -x^3\frac{\partial V}{\partial S} + (\beta-1)x^2V \right) -\left( -x^4\frac{\partial^2 V}{\partial S^2} +(\beta-2)x^3\frac{\partial V}{\partial S}\right)\right]
\end{align}$$
We exclude $S^{\beta} $ since it's a common factor for all terms. The PDE terms are,</p>
<p>$$\begin{align}
( r-q )S\frac{\partial W}{\partial S} & = ( r-q )x^{-1}\frac{\partial W}{\partial S} \\ & = ( r-q )\left[ \beta V -x\frac{\partial V}{\partial S} \right]\\
\frac{1}{2}\sigma^2S^2\frac{\partial^2 W}{\partial S^2} &= \frac{1}{2}\sigma^2x^{-2}\frac{\partial^2 W}{\partial S^2}\\ &= \frac{1}{2}\sigma^2 \left[ \beta \left( -x\frac{\partial V}{\partial S} + (\beta-1)V \right) -\left( -x^2\frac{\partial^2 V}{\partial S^2} +(\beta-2)x\frac{\partial V}{\partial S}\right) \right]
\end{align}$$</p>
<p>As mentioned earlier, the terms involving $V$ cancels, therefore
$$\begin{align}
( r-q )S\frac{\partial W}{\partial S}+\frac{1}{2}\sigma^2S^2\frac{\partial^2 W}{\partial S^2} & = -( r-q )\left[x\frac{\partial V}{\partial S}\right] + \frac{1}{2}\sigma^2\left[(2-2\beta)x\frac{\partial V}{\partial S}+x^2\frac{\partial^2 V}{\partial S^2} \right] \\& =-( r-q )\left[x\frac{\partial V}{\partial S}\right] +2( r-q )\left[x\frac{\partial V}{\partial S}\right] +\frac{1}{2}\sigma^2x^2\frac{\partial^2 V}{\partial S^2}\\&=\frac{1}{2}\sigma^2x^2\frac{\partial^2 V}{\partial S^2}+( r-q )x\frac{\partial V}{\partial S}
\end{align}
$$</p>
<p>Combining with the other two terms we get, </p>
<p>$$
S^\beta \left[ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2x^2\frac{\partial^2 V}{\partial S^2}+( r-q )x\frac{\partial V}{\partial S} -rV \right]=S^\beta\cdot 0=0
$$
Therefore we've shown that if $V(S,t)$ is a solution to the Black-Scholes PDE, then $W(S,t)=S^\beta V(1/S,t)$ is also a solution. </p>
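<p>As a sanity check, the symmetry can be verified symbolically. The sketch below (my own; the test function $V$ and the parameter values are arbitrary choices, not from the derivation) confirms that applying the Black-Scholes operator to $W(S,t)=S^\beta V(1/S,t)$ yields $S^\beta$ times the operator applied to $V$ at $x=1/S$, provided $\beta$ is the nonzero root of $\frac{1}{2}\sigma^2\beta(\beta-1)+(r-q)\beta=0$, which is exactly the cancellation condition used above:</p>

```python
import sympy as sp

S, t, x = sp.symbols('S t x', positive=True)
sigma, r, q = sp.Rational(2, 5), sp.Rational(1, 20), sp.Rational(1, 50)
beta = 1 - 2*(r - q)/sigma**2   # nonzero root of sigma^2*b*(b-1)/2 + (r-q)*b = 0

def bs_op(f, s):
    """Black-Scholes operator in the spatial variable s."""
    return (sp.diff(f, t) + sp.Rational(1, 2)*sigma**2*s**2*sp.diff(f, s, 2)
            + (r - q)*s*sp.diff(f, s) - r*f)

V = sp.exp(-t)*x**3 + sp.sin(t)*x            # arbitrary smooth test function
W = S**beta * V.subs(x, 1/S)

residual = sp.simplify(bs_op(W, S) - S**beta * bs_op(V, x).subs(x, 1/S))
print(residual)   # 0: W solves the PDE whenever V does
```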
|
58,772 | <p>Is there any general way to find the coefficients of a polynomial?</p>
<p>For example, in<br>
$(x-a)(x-b)$ the constant term is $ab$, the coefficient of $x$ is $-(a+b)$, and the coefficient of $x^2$ is $1$.</p>
<p>I have a polynomial $(x-a)(x-b)(x-c)$.
What if the product is extended to $n$ factors?</p>
| Gadi A | 1,818 | <p>Try opening it up to get a feel for it yourself; in general, the coefficient of $x^k$ is the sum of all products of $n-k$ of the possible $n$ roots, multiplied by $(-1)^{n-k}$. So for $(x-a)(x-b)(x-c)(x-d)$ you'll get that the coefficient of $x^2$ is $ab+ac+ad+bc+bd+cd$.</p>
<p>For a precise discussion see</p>
<p><a href="http://en.wikipedia.org/wiki/Vi%C3%A8te%27s_formulas" rel="nofollow">http://en.wikipedia.org/wiki/Vi%C3%A8te%27s_formulas</a></p>
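<p>For a computational check of this, here is a small Python sketch (the function name is my own) that builds the coefficients of $\prod_i (x-r_i)$ from the elementary symmetric sums of the roots:</p>

```python
from itertools import combinations
from math import prod

def coeffs_from_roots(roots):
    """Vieta: the coefficient of x^(n-k) is (-1)^k * e_k(roots);
    returned from the leading coefficient down to the constant term."""
    n = len(roots)
    return [(-1)**k * sum(prod(c) for c in combinations(roots, k))
            for k in range(n + 1)]

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(coeffs_from_roots([1, 2, 3]))   # [1, -6, 11, -6]
```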
|
2,208,113 | <p>Let $x$ and $y \in \mathbb{R}^{n}$ be non-zero column vectors, and form the matrix $A=xy^{T}$, where $y^{T}$ is the transpose of $y$. Then what is the rank of $A$?</p>
<hr>
<p>I am getting $1$, but need confirmation.</p>
| quasi | 400,434 | <p>Suppose $x,y,z$ are real numbers such that
\begin{align*}
x + y &= \sqrt{4z-1}\\[4pt]
y + z &= \sqrt{4x-1}\\[4pt]
z + x &= \sqrt{4y-1}\\[4pt]
\end{align*}</p>
<p>Let $s = x + y + z$. Since $x + y \ge 0,\;\;y + z \ge 0,\;\;z + x \ge 0$, we have $s \ge 0$.
<p>
Then from the original system of equations, we get</p>
<p>\begin{align*}
(s-z)^2 &= 4z-1\\[4pt]
(s-x)^2 &= 4x-1\\[4pt]
(s-y)^2 &= 4y-1\\[4pt]
\end{align*}</p>
<p>Let $f(u) = (s - u)^2 - (4u - 1) = u^2 - (2s + 4)u + (s^2+1)$.
<p>
Then, $x,y,z$ are roots of $f$, hence, since $f$ is quadratic in $u$, at least two of $x,y,z$ must be equal.
<p>
Without loss of generality, assume $z = y$.
<p>
Suppose $x \ne y$. Then since $x,y$ are roots of $f$, Vieta's formulas yield</p>
<p>\begin{align*}
&x + y = 2s + 4\\[4pt]
&xy = s^2 +1\\[4pt]
\end{align*}</p>
<p>hence, since $s \ge 0$, we get $x,y > 0$. Then also $z > 0$, since $z=y$. But then</p>
<p>\begin{align*}
&x,y,z > 0\\[4pt]
\implies\; &x + y < s\\[4pt]
\implies\; &2s + 4 < s\\[4pt]
\implies\; &s < -4\\[4pt]
\end{align*}</p>
<p>contradiction.
<p>
It follows that $x = y$, hence $x = y = z$, so</p>
<p>\begin{align*}
&(s - z)^2 = 4x-1\\[4pt]
\implies\; &(2x)^2 = 4x -1\\[4pt]
\implies\; &(2x-1)^2 = 0\\[4pt]
\implies\ &x = {\small{\frac{1}{2}}}\\[4pt]
\implies\; &x = y = z = {\small{\frac{1}{2}}}\\[4pt]
\end{align*}</p>
<p>It's easily verified that the triple</p>
<p>$$
(x,y,z) =
{\small{
\left(
\frac{1}{2},
\frac{1}{2},
\frac{1}{2}
\right)
}}
$$</p>
<p>satisfies the original system of equations, hence it's the only solution.</p>
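<p>A quick numerical sanity check of that final verification (a sketch; by symmetry, checking one of the three equations suffices):</p>

```python
from math import isclose, sqrt

x = y = z = 0.5
assert isclose(x + y, sqrt(4*z - 1))   # each equation reads 1 = sqrt(1)
print("(1/2, 1/2, 1/2) satisfies the system")
```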
|
166,925 | <p>I have a function <code>u[y]</code> and I want to find the upper limit of integration for which the integral equals zero.</p>
<pre><code>Λ = -30;
u[η_] := (2*η - 2*η^3 + η^4) + Λ/6*(η - 3*η^2 + 3*η^3 - η^4);
θ = Integrate[u[η]*(1 - u[η]), {η, 0, 1}] // N;
δ = 1/θ;
u[y_] := Piecewise[{{1,y > δ}}, (2*y/δ - 2*(y/δ)^3 + (y/δ)^4) + Λ/6*
((y/δ) - 3*(y/δ)^2 + 3*(y/δ)^3 - (y/δ)^4)];
FindRoot[Integrate[u[y], {y, 0, yd}] , {yd, 5}]
</code></pre>
<p>I have the following error:
"Unable to prove that integration limits {0,yd} are real. Adding assumptions may help."</p>
| Community | -1 | <p><strong>Attempt 3.</strong></p>
<p>It turns out you can trick Mathematica into not complaining about lack of derivatives by adding a dependent derivative in a separate equation:</p>
<pre><code>sum3 = NDSolveValue[
{Laplacian[dummy[x, y], {x, y}] == 0,
s[x, y] == f1[x, y] + f2[x, y]
}, s, Element[{x, y}, mesh]]
</code></pre>
<p>Of course this is slower than chuy's method.</p>
|
887,200 | <p>So I have the permutations:
$$\pi=\left( \begin{array}{ccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
2 & 3 & 7 & 1 & 6 & 5 & 4 & 9 & 8
\end{array} \right)$$
$$\sigma=\left( \begin{array}{ccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
9 & 5 & 6 & 8 & 7 & 1 & 2 & 4 & 3
\end{array} \right)$$</p>
<p>I found $\pi\sigma$ and $\sigma\pi$ to be</p>
<p>$$\pi\sigma=\left( \begin{array}{ccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
5 & 6 & 2 & 9 & 1 & 7 & 8 & 3 & 4
\end{array} \right)$$</p>
<p>$$\sigma\pi=\left( \begin{array}{ccc}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
8 & 6 & 5 & 9 & 4 & 2 & 3 & 1 & 7
\end{array} \right)$$.</p>
<p>So my question is: did I do these correctly or did I mix them up (i.e, the permutation matrix I have for $\pi\sigma$ is actually $\sigma\pi$)?</p>
| user1729 | 10,513 | <p>It depends.</p>
<p>Where I did my undergrad we learned to write permutations on the right, so $3\sigma\pi=\ldots$. This can be found in older books. The more common (people will claim, "standard") notation today is to write $\sigma\pi(3)$. Therefore, it depends on what your book or lecturer is telling you. If you have been taught to write permutations on the right, then you are correct (yay!), but if you have been taught to write them on the left then you are incorrect.</p>
<p>My personal preference is to write them on the right. This is for a variety of reasons (I work with automorphism groups, and to me writing maps on the right makes seeing how the automorphisms compose much clearer, as $x\phi\psi=(x\phi)\psi$ while (because I want my diagrams to be nice) writing on the right yields $\phi\circ\psi(x)=\psi(\phi(x))$), but the most relevant one here is the following:</p>
<p>When you write permutations in disjoint cycle form, then I find it much clearer for the following reason:</p>
<ul>
<li>Writing your maps on the left (common) means you have to read left-to-right in each permutation, but right-to-left through the permutations. For example, when working out where $(42)(341)(2314)$ sends $2$, I see that $2\mapsto 3\mapsto4\mapsto2$. But my brain is mush.</li>
<li>Writing your maps on the right (less common) means you have to read left-to-right <em>throughout</em> the calculation. For example, now when working out where $(42)(341)(2314)$ sends $2$, I see that $2\mapsto 4\mapsto1\mapsto4$. And my brain is no longer mush!</li>
</ul>
<p>Having spouted my opinion I should, however, give you a warning: You should follow the standards of your peers and superiors. If your lecturer is telling you to write them on the right, then you do that. If they say "on the left!", then do that. Don't try to force your opinions on them - it is a bad reason to fail an exam! (Or, in my case, have a paper rejected (okay, not the only reason, but it is the one I took away from it...) - and seriously, it is a pain in the arse having to change your notation consistently throughout!).</p>
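<p>The two conventions can be compared directly in code. Here is a Python sketch (the dictionaries encode the asker's $\pi$ and $\sigma$; the names are my own):</p>

```python
def compose_left(f, g):
    """Left-action composition: (f o g)(x) = f(g(x)), i.e. apply g first."""
    return {x: f[g[x]] for x in g}

pi    = {1: 2, 2: 3, 3: 7, 4: 1, 5: 6, 6: 5, 7: 4, 8: 9, 9: 8}
sigma = {1: 9, 2: 5, 3: 6, 4: 8, 5: 7, 6: 1, 7: 2, 8: 4, 9: 3}

# The asker's table for "pi sigma" sends 1 -> 5, which is sigma(pi(1)),
# i.e. the right-action reading; the left-action reading gives 8 instead.
print(compose_left(sigma, pi)[1])   # 5
print(compose_left(pi, sigma)[1])   # 8
```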
|
2,536,553 | <p>I know that it is possible to apply the spectral decomposition (diagonalization) to a matrix when the sum of the dimensions of its eigenspaces is equal to the size of the matrix.</p>
<p>The spectral decomposition is:</p>
<p>$$
F=P\Lambda P^{-1}
$$</p>
<p>where $\Lambda$ is the diagonal matrix of eigenvalues and $P$ is the matrix of eigenvectors. </p>
<p>I have the following matrix:</p>
<p>$$
F=\begin{pmatrix}\phi_{1} & \phi_{2} & \phi_{3} & \cdots & \phi_{p-1} & \phi_{p}\\
1 & 0 & 0 & \cdots & 0 & 0\\
0 & 1 & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}
$$</p>
<p>where $\phi_{p}\neq0$ and all $\phi_{i}$ are real valued. How can I be sure that it's possible to apply the spectral decomposition to $F$?</p>
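<p>One relevant fact (not stated above): for a companion matrix of this form, the characteristic polynomial equals the minimal polynomial, so $F$ is diagonalizable exactly when its eigenvalues are distinct. A numerical sketch with illustrative coefficient values:</p>

```python
import numpy as np

phi = [0.5, 0.2, 0.1]              # assumed coefficients, with phi_p != 0
p = len(phi)
F = np.zeros((p, p))
F[0, :] = phi                      # first row: phi_1, ..., phi_p
F[1:, :-1] = np.eye(p - 1)         # subdiagonal of ones

eigvals, P = np.linalg.eig(F)
recon = P @ np.diag(eigvals) @ np.linalg.inv(P)
print(np.allclose(recon, F))       # True here: the eigenvalues are distinct
```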
| JayTuma | 506,755 | <p>You know that in a PID a element is irreducible if and only if it is prime. Then, if $p_1$ on the left divides $q_1 \cdot \ldots \cdot q_m$ then it must divide one of this factors, lets say $q_i$. But $q_i$ is irreducible, thus</p>
<p>$$ p_1 = uq_i $$</p>
<p>where $u$ is invertible. By induction, you can then show that $m=n$</p>
<p>After applying this $k$ times, if $m$ is less than $k$ (which can happen only when $m<n$), we get
$$ p_k \cdot \ldots \cdot p_n = u_k, $$
which is a contradiction (a product of prime elements cannot be invertible). So $m \geq k$ and
$$ p_k \cdot \ldots \cdot p_n = u_k q_k \cdots q_m. $$
We can then apply the same argument explained above.</p>
<p>If instead $m > n$, eventually you will have
$$ 1 = u_n q_n \cdot \ldots \cdot q_m, $$
which is a contradiction, as shown before.</p>
|
1,413,145 | <p>I would like to find a way to show that the sequence $a_n=\big(1+\frac{1}{n}\big)^n+\frac{1}{n}$ is eventually increasing.</p>
<p>$\hspace{.3 in}$(Numerical evidence suggests that $a_n<a_{n+1}$ for $n\ge6$.)</p>
<p>I was led to this problem by trying to prove by induction that $\big(1+\frac{1}{n}\big)^n\le3-\frac{1}{n}$, as in</p>
<p>$\hspace{.4 in}$ <a href="https://math.stackexchange.com/questions/1087545/a-simple-proof-that-bigl1-frac1n-bigrn-leq3-frac1n">A simple proof that $\bigl(1+\frac1n\bigr)^n\leq3-\frac1n$?</a></p>
| Robert Israel | 8,508 | <p>Let
$$ \eqalign{f(n) = \dfrac{1}{n} + \left( 1 + \dfrac{1}{n}\right)^n &= \dfrac{1}{n} + \exp\left( n \ln\left(1+\dfrac{1}{n}\right)\right) \cr &=
\dfrac{1}{n} + \exp\left(1 - \dfrac{1}{2n} + \dfrac{1}{3n^2} + O\left(\dfrac{1}{n^3}\right)\right) \cr &= e - \dfrac{e-2}{2n} + \dfrac{11e}{24 n^2} + O\left(\dfrac{1}{n^3}\right) }$$</p>
<p>Then $$\eqalign{f(n+1) &= e - \dfrac{e-2}{2n+2} + \dfrac{11e}{24 (n+1)^2} + O\left(\dfrac{1}{n^3}\right)\cr
&= e - \dfrac{e-2}{2n} + \dfrac{23 e - 24}{24 n^2} + O\left(\dfrac{1}{n^3}\right) \cr
f(n+1) - f(n) &= \dfrac{12e-24}{24n^2} + O\left(\dfrac{1}{n^3}\right)}$$</p>
<p>and since $e > 2$, this is positive for sufficiently large $n$.</p>
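<p>A numerical check of both the monotonicity claim and the leading term of the difference (a sketch; the tested range is arbitrary):</p>

```python
from math import e

def a(n):
    return (1 + 1/n)**n + 1/n

assert all(a(n + 1) > a(n) for n in range(6, 2000))   # increasing from n = 6
n = 1000
print(a(n + 1) - a(n), (12*e - 24)/(24*n**2))         # both ~ 3.6e-07
```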
|
2,738,957 | <p>I did the following to derive the value of $\pi$, you might want to grab a pencil and a piece of paper:</p>
<p>Imagine a unit circle with center point $b$ and two points $a$ and $c$ on the circumference of the circle such that triangle $abc$ is an obtuse triangle. You can see that if $\theta$ denotes the angle $\angle acb$ then $0<\theta<90$ and that the angle of the sector $abc$ is $180 -2\theta$, so the area of sector $abc$ is $\frac{180-2\theta}{360}\pi = \frac{90-\theta}{180}\pi$. If we extend the radius $bc$ to form a diameter $D$, then the angle between line $ab$ and $D$ is $180-(180-2\theta) = 2\theta$; so if we define the distance between the point $a$ and the line $D$ as $h$ we get $h = \sin(2\theta)$. This allows us to derive the area of triangle $abc$ as $\frac{1}{2}\sin(2\theta)$. The area of segment $ac$ is the area of sector $abc$ minus the area of triangle $abc$:
$$
\frac{90-\theta}{180}\pi - \frac{1}{2}\sin(2\theta)
$$
We can see that as $\theta$ approaches $0$, the area of segment $ac$ approaches half the area of the circle, which is $\frac{\pi}{2}$:
$$
\lim_{\theta \to 0} \frac{90-\theta}{180}\pi - \frac{1}{2}\sin(2\theta) = \frac{\pi}{2}
$$
$$
\lim_{\theta \to 0} \frac{90-\theta}{90}\pi - \sin(2\theta) = \pi
$$
$$
\lim_{\theta \to 0} \pi\Big[\frac{90-\theta}{90} - 1\Big] = \lim_{\theta \to 0} \sin(2\theta)
$$
$$
\lim_{\theta \to 0} -\frac{\theta\pi}{90} = \lim_{\theta \to 0} \sin(2\theta)
$$
$$
\pi = -\lim_{\theta \to 0} \frac{90\sin(2\theta)}{\theta}
$$
However, this limit approaches $-3.1415\ldots$</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Your mathematics is good up to </p>
<p>$$\lim_{\theta \to 0} -\frac{\theta\pi}{90} = \lim_{\theta \to 0} \sin(2\theta)$$ which is simply $$0=0$$ which is the same as $$-0=0$$ </p>
<p>You want to manipulate $$0=0$$ by dividing both sides by $0$ to generate the undefined $$0/0$$ and get a negative value for $\pi$.</p>
<p>Well, notice that $$\lim_{\theta \to 0} -\frac{\theta\pi}{90}= \lim_{\theta \to 0} \frac{\theta\pi}{90}$$
(both limits are $0$), and with the same argument you could just as well get $\pi = 3.14\ldots$ </p>
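<p>The asker's $-3.1415\ldots$ is easy to reproduce numerically; the $90/\theta$ factor compensates for $\theta$ being measured in degrees (a sketch with an arbitrary small angle):</p>

```python
from math import sin, radians

theta = 1e-6                                   # degrees
print(-90 * sin(radians(2 * theta)) / theta)   # approximately -3.14159...
```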
|
28,532 | <p><code>MapIndexed</code> is a very handy built-in function. Suppose that I have the following list, called <code>list</code>:</p>
<pre><code>list = {10, 20, 30, 40};
</code></pre>
<p>I can use <code>MapIndexed</code> to map an arbitrary function <code>f</code> across <code>list</code>:</p>
<pre><code>MapIndexed[f, list]
{f[10, {1}], f[20, {2}], f[30, {3}], f[40, {4}]}
</code></pre>
<p>where the second argument to <code>f</code> is the part specification of each element of the list.</p>
<p>But, now, what if I would like to use <code>MapIndexed</code> only at certain elements? Suppose, for example, that I want to apply <code>MapIndexed</code> to only the second and third elements of <code>list</code>, obtaining the following:</p>
<pre><code>{10, f[20, {2}], f[30, {3}], 40}
</code></pre>
<p>Unfortunately, there is no built-in "<code>MapAtIndexed</code>", as far as I can tell. What is a simple way to accomplish this? Thanks for your time.</p>
| Jacob Akkerboom | 4,330 | <p><strong>Level one version</strong></p>
<p>This is an adaptation of amr's answer (based on Kuba's answer)</p>
<pre><code>mapAtLevOneIndexed[f_, list_, pos_] :=
ReplacePart[list,
Inner[Rule[#, f[#2, #]] &, pos, Part[list, pos], List]]
</code></pre>
<p>Example</p>
<pre><code>mapAtLevOneIndexed[f, {1, 2, 6, 7}, {2, 3}]
</code></pre>
<p>-> {1, f[2, 2], f[6, 3], 7}</p>
<p>In the case you work at level one, I think the most convenient way to enter a position is just a single integer. Also I think that is the most convenient way for the position to occur in <code>f</code>, but this makes it a little different from <code>MapIndexed</code>.</p>
<p>This may be a case where <code>Thread</code> is faster than <code>Inner</code>, maybe I will check later. The idea here is that we use both <code>Part</code> and <code>ReplacePart</code> both only once, to speed things up. This use of <code>Part</code> only works on level one. Despite this I don't think it is faster than rm-rf's answer, which is my favorite. Maybe it can be faster if the positions at which we want to "mapindex" are very sparse.</p>
<p><strong>Deeper level version</strong></p>
<p>Of course you can let <code>Extract</code> do the job of <code>Part</code> here and make it work on deeper levels. Below I use <code>Thread</code> just convenience, as <code>Inner</code> did not deal with lists of lists as I want. I do not answer which one is faster.</p>
<p>Deeper level version:</p>
<pre><code>mapAtIndexed[f_, list_, pos_] :=
ReplacePart[list,
Thread[Unevaluated[
Rule[#, f[#2 // First, #]] &[pos, Extract[list, pos]]]]]
</code></pre>
<p><strong>Examples</strong></p>
<pre><code>mapAtIndexed[f, {{{1}}, 2}, {{1, 1}}]
</code></pre>
<p>-> {{f[{1}, {1, 1}]}, 2}</p>
<pre><code>mapAtIndexed[f, {{{1}}, 2}, {{1, 1, 1}, {2}}]
</code></pre>
<p>-> {{{f[1, {1, 1, 1}]}}, f[1, {2}]}</p>
<p>Here you have to be careful to enter the positions in the form <code>{pos1, pos2}</code>, rather than just <code>pos1</code>, but that can be easily overcome.</p>
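<p>For comparison outside the Wolfram Language, the level-one behaviour of such a "MapAtIndexed" can be sketched in Python (the names and 1-based indexing, chosen to mirror Mathematica's conventions, are my own):</p>

```python
def map_at_indexed(f, lst, positions):
    """Apply f(element, index) only at the given 1-based positions."""
    pos = set(positions)
    return [f(x, i) if i in pos else x
            for i, x in enumerate(lst, start=1)]

print(map_at_indexed(lambda x, i: ('f', x, i), [10, 20, 30, 40], [2, 3]))
# [10, ('f', 20, 2), ('f', 30, 3), 40]
```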
|
2,337,583 | <p>I cannot understand the inductive dimension properly. I read something on Google, but mostly there are only conditions or properties, not a definition. I got to know about it from the book “The Fractal Geometry of Nature”. (I am a 12th grader.)</p>
| Lehs | 171,248 | <p>The small inductive dimension can be defined inductively by </p>
<ol>
<li><p>$\text{ind}(\emptyset)=-1$ </p></li>
<li><p>$\text{ind}(\{x\})=0$ </p></li>
<li><p>$\text{ind}(X)$ is the smallest number $n$ such that for all $x\in X$ and every open set $U\ni x$ there is an open set $V\ni x$ with $\bar{V}\subseteq U$ such that $\text{ind}(\partial V)\leq n-1$</p></li>
</ol>
<p><a href="https://en.wikipedia.org/wiki/Inductive_dimension#Formal_definition" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Inductive_dimension#Formal_definition</a></p>
|
81,715 | <p>I am a graduate student in physics trying to learn differential geometry on my own, out of a book written by Fecko.</p>
<p>He defines the gradient of a function as:</p>
<p>$
\nabla f = \sharp_g df = g^{-1}(df, \cdot )
$</p>
<p>This makes enough sense to me. However, when I try to calculate the gradient of a function in spherical coordinates:</p>
<p>$
g^{-1} (df, \cdot ) = g^{ij}\,(df)_i\, \partial_j = g^{ij} \partial_i f\, \partial_j
$</p>
<p>So the $j^{th}$ component of the gradient of f is:</p>
<p>$
g^{ij} \partial_if
$</p>
<p>The coefficients of the metric tensor are:</p>
<p>$
g =
\begin{pmatrix}
1 & 0 & 0 \\
0 & r^2 & 0 \\
0 & 0 & r^2 \sin^2{\theta}
\end{pmatrix}
$</p>
<p>So the inverse of a diagonal matrix ($g^{-1}$) is just a diagonal matrix whose entries are the reciprocals of the original matrix:</p>
<p>$
g^{-1} =
\begin{pmatrix}
1 & 0 & 0 \\
0 & r^{-2} & 0 \\
0 & 0 & r^{-2} \csc^2{\theta}
\end{pmatrix}
$</p>
<p>So it seems our expression doesn't match the vector calculus definition of the gradient in spherical coordinates. For instance, differential geometry gives us a $\hat{\theta}$ component of $ r^{-2} \partial_\theta f$ but vector calculus tells us this is $ r^{-1} \partial_\theta f$.</p>
<p>Where is my mistake?</p>
| Marián Fecko | 19,396 | <p>In vector calculus one usually computes in terms of <strong>orthonormal</strong> (rather than <strong>coordinate</strong>) components of vector fields.
This stuff is discussed in detail a bit later in the book :-)
See pages 183-4 (Section 8.5.)
You find explicit expressions in both components, there.
Good luck!</p>
|
1,192,338 | <p>How can one prove that there are infinitely many taxicab numbers?
I was reading this: <a href="http://en.wikipedia.org/wiki/Taxicab_number#Known_taxicab_numbers" rel="nofollow">http://en.wikipedia.org/wiki/Taxicab_number#Known_taxicab_numbers</a>
and thought of this question. Any ideas?</p>
| Joshua Tilley | 389,601 | <p>Use Ramanujan's identity that
<span class="math-container">$$\left(x^2+7xy-9y^2\right)^3+\left(2x^2-4xy+12y^2\right)^3=\left(2x^2+10y^2\right)^3+\left(x^2-9xy-y^2\right)^3$$</span></p>
<p>Reference:</p>
<p><a href="http://mathworld.wolfram.com/DiophantineEquation3rdPowers.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/DiophantineEquation3rdPowers.html</a></p>
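<p>The identity can be verified by brute force over a grid of integers (a sketch; since it is a polynomial identity, any finite check is only a sanity test):</p>

```python
def lhs(x, y):
    return (x**2 + 7*x*y - 9*y**2)**3 + (2*x**2 - 4*x*y + 12*y**2)**3

def rhs(x, y):
    return (2*x**2 + 10*y**2)**3 + (x**2 - 9*x*y - y**2)**3

assert all(lhs(x, y) == rhs(x, y)
           for x in range(-20, 21) for y in range(-20, 21))
print(lhs(1, 2), rhs(1, 2))   # 64827 64827
```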
|
78,423 | <p>How far can one go in proving facts about projective space using just its universal property?</p>
<p>Can one prove Serre's theorem on generation by global sections, calculate cohomology, classify all invertible line bundles on projective space?</p>
<p>I don't find many proofs of some basic technical facts very aesthetic, because one has to consider homogeneous prime ideals, homogeneous localizations, etc. Do there exist nice clean conceptual proofs which avoid the above unpleasantries?</p>
<p>If you include references in your answer it would be very helpful, thanks.</p>
| Daniel Litt | 6,950 | <p>I agree with Anton that it would be too much to hope for to get serious results (e.g. cohomology of line bundles) from the "nice" universal property of projective space, but one can indeed prove that there are no non-constant regular functions on $\mathbb{P}^n$ using only the universal property. </p>
<p>Namely, it suffices to check that $\mathbb{P}^n$ is proper and connected. For properness, one may use the valuative criterion. Namely, let $R$ be a valuation ring and $K$ its fraction field. Then a map from $\operatorname{Spec}(K)\to \mathbb{P}^n$ is a surjection $K^{n+1}\to K$; then the image of $R^{n+1}\hookrightarrow K^{n+1}\to K$, where the inclusion is the obvious one, is isomorphic to $R$. In particular, the map $K^{n+1}\to K$ lifts to a surjective map $R^{n+1}\to R$, which is a map $\operatorname{Spec}(R)\to \mathbb{P}^n$ fulfilling the valuative criterion (its uniqueness up to automorphisms of the given diagram is also clear).</p>
<p>As for connectedness, it suffices to check that $\mathbb{P}^n$ is path-connected in the following sense--for any two geometric points, represented by surjections $x_0: \bar k^{n+1}\to\bar k, x_1: \bar k^{n+1}\to\bar k$, there is a ``path" connecting them; namely a map $f: \mathbb{A}^1\to \mathbb{P}^n$ with $f(0)=x_0, f(1)=x_1$. This is a surjection $\bar k[t]^{n+1}\to \bar k[t]$ such that reducing mod $(t), (t-1)$ gives the desired maps. Translating, we must choose polynomials $f_1, ..., f_{n+1}$ such that $f_i(0)=x_0(e_i), f_i(1)=x_1(e_i)$, where $e_i$ are the standard basis of $\bar k^{n+1}$, and where all the $f_i$ do not vanish simultaneously.</p>
<p>But one can do this by Lagrange interpolation; choose any $f_1$ with the desired values at $0,1$, then any $f_2$ with the desired values at $0, 1$ and not vanishing at the other zeros of $f_1$, then any $f_3$ analogously, and so on. </p>
|
899,230 | <p>It seems that both isometric and unitary operators on a Hilbert space have the following property:</p>
<p><span class="math-container">$U^*U = I$</span> (<span class="math-container">$U$</span> is an operator and <span class="math-container">$I$</span> is an identity operator, <span class="math-container">$^*$</span> is a binary operation.) </p>
<p>What is the difference between <strong>isometry</strong> and <strong>unitary</strong>? Which one is more general,
or are they the same? Are they isomorphic?</p>
| Jonas Meyer | 1,424 | <p>An isometric operator on a (complex) Hilbert space is a linear operator that preserves distances. That is, $T$ is an isometry if (by definition) $\|Tx-Ty\|=\|x-y\|$ for all $x$ and $y$ in the space. By linearity, this is equivalent to $\|Tx\|=\|x\|$ for all $x$. Because of the definition of the norm in terms of the inner product and the definition of adjoint operators, this is equivalent to $\langle T^*Tx,x\rangle=\langle x,x\rangle$ for all $x$. This <a href="https://math.stackexchange.com/q/57350/">implies</a> that $T^*T=I$. Conversely, if $T^*T=I$, you can show that $T$ is an isometry (this direction is easier). </p>
<p>A unitary operator $U$ does indeed satisfy $U^*U=I$, and therefore in particular is an isometry. However, unitary operators must also be surjective (by definition), and are therefore isometric and invertible. They are the isometric isomorphisms on Hilbert space. One way to characterize them algebraically is to say that $U$ is a unitary if $U^*U=UU^*=I$.</p>
<p>On infinite dimensional Hilbert spaces (unlike in finite dimensional cases), there are always nonunitary isometries. For example, on $\ell^2$, the operator sending $(a_0,a_1,a_2,a_3,\ldots)$ to $(0,a_0,a_1,a_2,\ldots)$ is a nonunitary isometry. </p>
<p>I'm not sure what you mean by "isomorphic". One notion of equivalence of linear transformations is similarity; but a surjective operator is never similar to a nonsurjective operator. A stronger notion is unitary equivalence, i.e., similarity induced by a unitary transformation (since these are the isometric isomorphisms of Hilbert space), which again cannot happen between a nonunitary isometry and a unitary operator (or between any nonunitary operator and a unitary operator).</p>
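<p>A finite-dimensional model of the shift makes the distinction concrete: as a map $\mathbb{C}^n\to\mathbb{C}^{n+1}$ it satisfies $S^*S=I$ but not $SS^*=I$ (a NumPy sketch; sizes are arbitrary):</p>

```python
import numpy as np

n = 4
S = np.zeros((n + 1, n))
for k in range(n):
    S[k + 1, k] = 1.0        # S e_k = e_{k+1}: a truncated shift

print(np.allclose(S.T @ S, np.eye(n)))       # True:  S*S = I, an isometry
print(np.allclose(S @ S.T, np.eye(n + 1)))   # False: SS* != I, not unitary
```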
|
464,426 | <p>Find the limit of $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p>
<p>How should I approach it? I tried to use L'Hopital's Rule but it just keeps giving me 0/0.</p>
| Ovi | 64,460 | <p>With L'Hopital's rule:</p>
<p>$\lim_{x\to1}\large \frac{\frac {d}{dx} (x^{1/5}-1)}{\frac {d}{dx} (x^{1/3}-1)}=\lim_{x\to1} \large\frac { \frac 15 x^{-4/5}}{\frac 13 x^{-2/3}}$</p>
<p>Since we are dividing, we subtract the exponents of $x$ and get:</p>
<p>$$ \lim_{x \to 1} \frac 35 x^{(-\frac 45)- (-\frac 23)}=\lim_{x \to 1} \frac 35 x^{-2/15}$$</p>
<p>Now we can substitute $1$ directly for $x$ and get the answer of $\frac 35$</p>
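<p>A quick numerical check near $x=1$ (a sketch; the offset is arbitrary):</p>

```python
x = 1 + 1e-8
print((x**0.2 - 1) / (x**(1/3) - 1))   # approximately 0.6
```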
|
435,079 | <p>This is an exercise from my lecturer, for IMC preparation. I haven't found any ideas.</p>
<p>Find the value of</p>
<p>$$\lim_{n\rightarrow\infty}n^2\left(\int_0^1 \left(1+x^n\right)^\frac{1}{n} \, dx-1\right)$$</p>
<p>Thank you</p>
| Start wearing purple | 73,025 | <p>Mathematica evaluates the integral to
$$\int_0^{1}(1+x^n)^{1/n}dx={}_2F_1\left(-\frac{1}{n},\frac1n,1+\frac1n;-1\right).\tag{1}$$
Next, let us write the standard series representation for the hypergeometric function
$$_2F_1(a,b,c;t)=\sum_{k=0}^{\infty}\alpha_kt^k,\qquad \alpha_k=\frac{\Gamma(a+k)\Gamma(b+k)\Gamma(c)}{k!\,\Gamma(a)\Gamma(b)\Gamma(c+k)}.$$
Now an easily verified claim: as $n\rightarrow\infty$, for the parameters as in (1), we have $\alpha_0=1$ and
$$\alpha_k\sim-\frac{1}{k^2n^2}+O(n^{-3}).$$
Hence we obtain
\begin{align}
\lim_{n\rightarrow\infty}n^2\left(\int_0^{1}(1+x^n)^{1/n}dx-1\right)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^2}=\frac{\pi^2}{12},
\end{align}
which is also confirmed by numerical calculation with $n\sim 10^4-10^8$.</p>
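<p>A self-contained numerical check (my own Simpson-rule sketch): the value $\pi^2/12 \approx 0.8225$ is approached slowly, since the error is $O(1/n)$ per the expansion above.</p>

```python
from math import pi

def integral(n, m=200001):
    """Simpson's rule for the integral of (1 + x**n)**(1/n) over [0, 1]."""
    h = 1.0 / (m - 1)
    s = 0.0
    for i in range(m):
        x = i * h
        w = 1 if i in (0, m - 1) else (4 if i % 2 else 2)
        s += w * (1 + x**n)**(1.0 / n)
    return s * h / 3

n = 500
val = n * n * (integral(n) - 1)
print(val, pi * pi / 12)   # roughly 0.821 vs 0.8225
```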
|
1,681,205 | <p>I would like a <strong>hint</strong> for the following, more specifically, what strategy or approach should I take to prove the following?</p>
<p><em>Problem</em>: Let $P \geq 2$ be an integer. Define the recurrence
$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor$$
with initial conditions:
$$p_0 = P + \left\lfloor \frac{P}{2} \right\rfloor$$
$$p_1 = P + 2\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_2 = P + 3\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_3 = P + 4\left\lfloor \frac{P}{2} \right\rfloor$$</p>
<p>Prove that the following limit converges:
$$\lim_{n\rightarrow \infty} \frac{p_n}{z^n}$$
where $z$ is the positive real solution to the equation $x^4 - x^3 - \frac{1}{2} = 0$.</p>
<p><em>Note</em>: I've already proven the following:
$$\lim_{n\rightarrow \infty} \frac{p_n}{p_{n-1}} = z$$
Any ideas? Not sure if this result helps. Also $\lim_{n\rightarrow \infty}p_n/z^n$ is also bounded above and below. I've attempted to show $\lim_{n\rightarrow \infty} \frac{p_n}{z^n}$ is Cauchy, but had no luck with that. I don't know what the limit converges to either.</p>
<p><em>Edit</em>: I believe the limit should converge as $p_n$ achieves an end behaviour of the form $cz^n$ for $c \in \mathbb{R}$ (this comes from the fact that the limit of the ratios of $p_n$ converge to $z$), however I do not know how to make this rigorous.</p>
<p><em>Edit 2</em>: Proving the limit exists is equivalent to showing
$$p_0 \cdot \prod_{n=1}^{\infty} \left( \frac{p_n/p_{n-1}}{z} \right)$$
converges.</p>
<p><strong>UPDATED:</strong></p>
<p>If someone could prove that $|p_n-z \cdot p_{n-1}|$ is bounded above (or converges, or diverges), then the proof is complete.</p>
| mjqxxxx | 5,546 | <p>Consider the vector $X_n = (p_n, p_{n-1}, p_{n-2}, p_{n-3})$. We have
$$
\begin{eqnarray}
X_{n+1} &=& (p_{n+1},p_n, p_{n-1},p_{n-2}) \\
&=& \left(p_n+\left\lfloor \frac{1}{2}p_{n-3}\right\rfloor, p_n, p_{n-1}, p_{n-2}\right) \\ &=& (p_n+ \frac{1}{2}p_{n-3}, p_n, p_{n-1}, p_{n-2}) - (\varepsilon_n, 0, 0, 0)\\ &=&\hat{M}\cdot X_{n} + \varepsilon_n E,
\end{eqnarray}
$$
where $|\varepsilon_n| < 1$, $E=(-1,0,0,0)$, and
$$
\hat{M}=\left(
\begin{matrix}
1 & 0 & 0 & \frac{1}{2} \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0
\end{matrix}\right).
$$
Let $\lambda_{i=1,2,3,4}$ and $u_i$ be the eigenvalues and normalized eigenvectors of $\hat{M}$. Only $\lambda_1 = z \approx 1.25372$ has magnitude greater than $1$; the other eigenvalues have magnitudes strictly less than $1$. We can write $E=\sum_i e_i u_i$ for some fixed coefficients $e_i$. Now, writing $X_n=z^n \sum_i c_{i,n} u_i$, we have
$$
z^{n+1}\sum_i c_{i,n+1}u_i = X_{n+1}=\hat{M}\cdot X_n + \varepsilon_n E = z^n \sum_i c_{i,n} \lambda_i u_i + \varepsilon_n \sum_i e_i u_i;
$$
or simply
$$
c_{i,n+1} = \left(\frac{\lambda_i}{z}\right) c_{i,n} + \frac{\varepsilon_n e_i}{z^{n+1}}.
$$
Because $|\varepsilon_n|$ is bounded, $\lim_{n\rightarrow \infty}c_{i,n}$ exists for each $i$ (and is zero for $i\neq 1$). Therefore $X_n / z^n=\sum_i c_{i,n}u_i$ has a limit, as does its first component, $p_n/z^n$.</p>
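<p>A numerical sketch (with the arbitrary choice $P=2$) showing that $p_n/z^n$ settles down to a constant, as the argument predicts:</p>

```python
def z_root():
    """Bisect for the positive root of x^4 - x^3 - 1/2 = 0 (~1.2537)."""
    lo, hi = 1.0, 2.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid**4 - mid**3 - 0.5 < 0:
            lo = mid
        else:
            hi = mid
    return lo

P = 2
z = z_root()
p = [P + P//2, P + 2*(P//2), P + 3*(P//2), P + 4*(P//2)]
for n in range(4, 120):
    p.append(p[-1] + p[-4]//2)

early = p[80] / z**80
late = p[119] / z**119
print(early, late)   # nearly equal: p_n / z^n converges
```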
|
1,746,180 | <p>I have already solved a few integrals with substitution, but in this case I have no idea how to start. How do I solve the integral $$\int_0^{\pi/2} \frac{\sqrt{\sin(x)}}{\sqrt{\sin(x)}+\sqrt{\cos(x)}}\, dx$$ using the substitution $x=\frac{\pi}{2}-t$? Can you tell me how to start?
It would be great!</p>
| GoodDeeds | 307,825 | <p>$$\int_a^b f(x) dx=\int_a^b f(a+b-x) dx$$
$$\tag1I=\int_0^{\pi/2} \frac{\sqrt{\sin(x)}}{\sqrt{\sin(x)}+\sqrt{\cos(x)}} dx,$$
Replace $x$ by $\frac{\pi}2-x$.
$$I=\int_0^{\pi/2} \frac{\sqrt{\sin(\frac{\pi}2-x)}}{\sqrt{\sin(\frac{\pi}2-x)}+\sqrt{\cos(\frac{\pi}2-x)}} dx,$$
$$\tag2I=\int_0^{\pi/2} \frac{\sqrt{\cos(x)}}{\sqrt{\cos(x)}+\sqrt{\sin(x)}} dx,$$
Adding $(1)$ and $(2)$,
$$2I=\int_0^{\pi/2}dx$$
which can be solved easily.</p>
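<p>A numerical confirmation that the integral equals $\pi/4$ (a midpoint-rule sketch; the grid size is arbitrary):</p>

```python
from math import sin, cos, sqrt, pi

m = 100000
h = (pi / 2) / m
total = h * sum(sqrt(sin(x)) / (sqrt(sin(x)) + sqrt(cos(x)))
                for x in (h * (i + 0.5) for i in range(m)))
print(total, pi / 4)   # both ~0.7853981...
```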
|
1,275,461 | <p>I am trying to prove that $-1$ is a square in $\mathbb{F}_p$ for $p = 1 \mod{4}$. Of course, this is really easy if one uses the Legendre Symbol and Euler's criterion. However, I do not want to use those. In fact, I want to prove this using as few assumptions as possible.</p>
<p>What I tried so far is not really helpful:</p>
<p>We can easily show that $\mathbb{F}_p^*/(\mathbb{F}_p^*)^2 = \{ 1\cdot (\mathbb{F}_p^*)^2, a\cdot (\mathbb{F}_p^*)^2 \}$ where $a$ is not a square (this $a$ exists because the map $x \mapsto x^2$ is not surjective). Now $-1 = 4\cdot k = 2^2 \cdot k$ for some $k\in \mathbb{F}_p$.</p>
<p>From here I am trying to find some relation between $p = 1 \mod{4}$ and $-1$ not being a product of a square and a non-square.</p>
| Mathmo123 | 154,802 | <p>For $p\ge 3$, $\mathbb F_p^*$ is a cyclic group of order $p-1$. If $g$ is a generator, then $g^\frac{p-1}2= -1$. In particular $-1$ will be a square if and only if $\frac{p-1}{2}$ is even - i.e. if $p \equiv 1 \pmod 4$.</p>
<p>Note that this proof isn't fundamentally different from those using Fermat's little theorem. Indeed utilising the group theoretic structure of $\mathbb F_p^*$ gives one way of proving it.</p>
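<p>A brute-force numerical check of the statement for small primes (a sketch; names are my own):</p>

```python
def has_sqrt_of_minus_one(p):
    """True if x^2 = -1 has a solution modulo p."""
    return any((x*x + 1) % p == 0 for x in range(1, p))

primes = [p for p in range(3, 200)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
assert all(has_sqrt_of_minus_one(p) == (p % 4 == 1) for p in primes)
print([p for p in primes if has_sqrt_of_minus_one(p)][:5])   # [5, 13, 17, 29, 37]
```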
|
158,662 | <p>I know that to prove a language is regular, drawing an NFA/DFA that accepts it is a decent way. But what should one do in cases like</p>
<p>$$
L=\{ww \mid w \text{ belongs to } \{a,b\}^*\}
$$</p>
<p>where we need to find it it is regular or not. Pumping lemma can be used for irregularity but how to justify in a case where it can be regular?</p>
| Boris Trayvas | 33,272 | <p>An alternative way of proving a language is regular/irregular is the <a href="http://en.wikipedia.org/wiki/Myhill%E2%80%93Nerode_theorem" rel="nofollow">Myhill-Nerode theorem</a>.</p>
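<p>As a hedged sketch (mine, not from the answer) of how Myhill–Nerode applies to the language $L=\{ww \mid w \in \{a,b\}^*\}$ from the question:</p>

```latex
% The strings a^i b, for i = 0, 1, 2, ..., are pairwise inequivalent for L:
% if i \ne j, appending a^i b distinguishes them, since
%     a^i b \, a^i b \in L \qquad\text{but}\qquad a^j b \, a^i b \notin L.
% Hence L has infinitely many Myhill--Nerode classes, so L is not regular.
```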
|
1,246,705 | <p>I was doing some linear algebra exercises and came across the following tough problem:</p>
<blockquote>
<p>Let $M_{n\times n}(\mathbf{R})$ denote the set of all the matrices whose entries are real numbers. Suppose $\phi:M_{n\times n}(\mathbf{R})\to M_{n\times n}(\mathbf{R})$ is a nonzero linear transform (i.e. there is a matrix $A$ such that $\phi(A)\neq 0$) such that for all $A,B\in M_{n\times n}(\mathbf{R})$
$$\phi(AB)=\phi(A)\phi(B).$$
Prove that there exists a invertible matrix $T\in M_{n\times n}(\mathbf{R})$ such that
$$\phi(A)=TAT^{-1}$$
for all $A\in M_{n\times n}(\mathbf{R})$.</p>
</blockquote>
<p>This is an exercise from my textbook, and I was all thumbs when I attempted to solve it.</p>
<p>Can someone tell me how I should, at least, start the problem?</p>
| user1551 | 1,551 | <p>Here is an alternative proof that is much less computational. It works over any field. Let $E_{ij}$ be the matrix with a $1$ at the $(i,j)$-th entry and zeros elsewhere.</p>
<ol>
<li><p>$\phi$ is injective and hence an automorphism. Suppose the contrary that $\phi(A)=0$ for some nonzero $A$. Since $A$ is nonzero, every square matrix $B$ can be written as a finite sum of the form $\sum_k X_kAY_k$ (that's simply because Kronecker products of the form $Y_k^T\otimes X_k$ span the set of all $n^2\times n^2$ matrices). But then we would have $\phi(B)=\sum_k\phi(X_k)\phi(A)\phi(Y_k)=0$ for every $B$, which is a contradiction because $\phi\ne0$.</p></li>
<li><p>$\phi$ preserves the rank/nullity of a matrix. As $\phi$ is an automorphism, $\phi$ and $\phi^{-1}$ preserve the linear dependence/independence of any set of matrices. In turn,
$$
\frac1n\dim\{X\in M_n(\mathbb C):\,AX=0\}=\frac1n\dim\{Y\in M_n(\mathbb C):\,\phi(A)Y=0\}.
$$
As the two sides are the respective nullities of $A$ and $\phi(A)$, the conclusion follows. In particular, $\phi$ preserves singularity/invertibility of matrices and $\phi(I)=I$.</p></li>
<li><p>As $\phi$ is an automorphism, a polynomial $p$ annihilates a matrix $A$ if and only if it annihilates $\phi(A)$. It follows that</p>
<ul>
<li>$A$ and $\phi(A)$ always have identical minimal polynomials.</li>
<li>So, if $A$ is diagonalisable, $\phi(A)$ must be diagonalisable too and the two matrices must share the same set of eigenvalues. The multiplicities of the eigenvalues must be identical too, because $A-\lambda I$ and $\phi(A)-\lambda I$ have the same rank for every scalar $\lambda$. In other words, $\phi(A)$ must be similar to $A$.</li>
<li>As $\phi$ also preserve the linear independence of any set of matrices as well as matrix rank, it must map the set of all diagonal matrices onto an $n$-dimensional subspace of commuting diagonalisable matrices. Since commuting diagonalisable matrices are simultaneously diagonalisable, we may assume that $\phi$ maps diagonal matrices to diagonal matrices.</li>
<li>So, each $\phi(E_{ii})$ is a diagonal matrix that is similar to $E_{ii}$. Hence $\phi(E_{ii})=E_{jj}$ for some $j$. As $\phi$ is injective, we may assume without loss of generality that $\phi(E_{ii})=E_{ii}$ for each $i$. Hence $\phi(D)=D$ for every diagonal matrix $D$.</li>
</ul></li>
<li><p>Here comes the computational part. Let $C$ be the circulant permutation matrix whose super-diagonal as well as the bottom left entry are filled with ones. By comparing the coefficients on both sides of $\phi(D_1CD_2)=D_1\phi(C)D_2$ for arbitrary diagonal matrices $D_1$ and $D_2$, we see that $\phi(C)=DC$ for some diagonal matrix $D$. Note that $\det D=1$ because $I=\phi(C^n)=(DC)^n=\det(D)I$. Write $D=\operatorname{diag}(d_1,\ldots,d_n)$. Then $DC=\tilde{D}C\tilde{D}^{-1}$ where
$$
\tilde{D}=\operatorname{diag}\left(\prod_{j=1}^nd_j,\ \prod_{j=2}^nd_j,\ \prod_{j=3}^nd_j,\ \ldots,\ d_n\right).
$$
Since every matrix $A$ can be written as $A=\sum_{k=0}^{n-1}D_kC^k$ for some diagonal matrices $D_0,D_1,\ldots,D_{n-1}$, we obtain
\begin{align}
\phi(A)
&=\sum_{k=0}^{n-1}\phi(D_k)\,\phi(C)^k\\
&=\sum_{k=0}^{n-1}D_k\left(\tilde{D}C\tilde{D}^{-1}\right)^k\\
&=\sum_{k=0}^{n-1}D_k\tilde{D}C^k\tilde{D}^{-1}\\
&=\sum_{k=0}^{n-1}\tilde{D}D_kC^k\tilde{D}^{-1}\\
&=\tilde{D}A\tilde{D}^{-1}.
\end{align}</p></li>
</ol>
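<p>As a numerical sanity check of the conclusion (my own illustration, not part of the proof): any map of the form $A \mapsto TAT^{-1}$ is indeed linear, multiplicative and unital. A minimal check for $2\times 2$ integer matrices, with a hypothetical $T$ of determinant $1$ so that $T^{-1}$ is also integral:</p>

```python
# Sanity check (hypothetical T): conjugation A -> T A T^{-1} is an
# automorphism of the matrix ring, as the problem asserts.

def mm(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

T    = [[2, 1], [1, 1]]    # det T = 1, so T^{-1} has integer entries
Tinv = [[1, -1], [-1, 2]]  # T * Tinv = I

def phi(A):
    return mm(mm(T, A), Tinv)

A = [[3, -2], [5, 7]]
B = [[0, 4], [1, -6]]

I2 = [[1, 0], [0, 1]]
assert mm(T, Tinv) == I2                      # Tinv really is T^{-1}
assert phi(mm(A, B)) == mm(phi(A), phi(B))    # multiplicative
assert phi(add(A, B)) == add(phi(A), phi(B))  # additive (linear)
assert phi(I2) == I2                          # unital
```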
|
4,598,275 | <p>Let <span class="math-container">$(X, \mathcal{F})$</span> be a measurable space, and <span class="math-container">$\mu_{n}, \mu$</span> probability measures on it. <span class="math-container">$\mu_{n}$</span> is said to converge weakly to <span class="math-container">$\mu$</span> if for any bounded continuous functions <span class="math-container">$f$</span> on <span class="math-container">$X$</span>, <span class="math-container">$\int f d\mu_{n} \xrightarrow{} \int f d\mu$</span>.</p>
<p>The professor mentioned if <span class="math-container">$X = \mathbb{R}^{d}$</span> and <span class="math-container">$\mathcal{F}$</span> is the Borel sigma algebra, then it is enough to check the convergence of integrals for any compactly supported continuous function <span class="math-container">$f$</span>. But why is this true?</p>
| Tom | 986,425 | <p>After discussing the problem with others, I'm now trying to show the progress I have made so far.</p>
<p>Suppose <span class="math-container">$g \in C_{b}(\mathbb{R}^{d})$</span>. We want to show</p>
<p>(1): <span class="math-container">$\int g d\mu_{n} \xrightarrow{} \int g d\mu$</span>.</p>
<p>Now fix any <span class="math-container">$\epsilon > 0$</span>. Since there is a sequence of closed cubes increasing to the whole space, by lower continuity of measure there is a closed cube <span class="math-container">$K$</span> such that</p>
<p>(2): <span class="math-container">$\mu(K) \geq 1-\epsilon$</span>.</p>
<p>Now fix <span class="math-container">$\delta$</span> to be a small positive number. Consider the set <span class="math-container">$O = \{x: d(x, K) \leq \delta\}$</span>, where <span class="math-container">$d$</span> is the usual distance function, continuous because <span class="math-container">$K$</span> is closed (<a href="https://math.stackexchange.com/questions/944659/distance-to-a-closed-set-is-continuous">Distance to a closed set is continuous.</a>). It's also an easy result that <span class="math-container">$d(x, K) = 0$</span> iff <span class="math-container">$x \in K$</span>. Since it's continuous and <span class="math-container">$[0, \delta]$</span> is closed, we know <span class="math-container">$O$</span> is a closed set. Since <span class="math-container">$K$</span> is bounded, <span class="math-container">$O$</span> is bounded. And so <span class="math-container">$O$</span> is a compact set.</p>
<p>Now define a function <span class="math-container">$f = \max(1 - \frac{1}{\delta}d(x, K), 0)$</span>. The maximum function is continuous and so <span class="math-container">$f$</span> is continuous. Furthermore, on <span class="math-container">$K$</span>, <span class="math-container">$f = 1$</span>; on <span class="math-container">$O - K$</span>, <span class="math-container">$f \in [0, 1)$</span>; on <span class="math-container">$O^{c}$</span>, <span class="math-container">$f = 0$</span>.</p>
<p>Notice <span class="math-container">$f \in C_{c}$</span> and <span class="math-container">$f$</span> is no larger than the indicator function of <span class="math-container">$O$</span> so we have <span class="math-container">$\mu_{n}(O) \geq \int f d\mu_{n}, \forall n$</span>. By our assumption, <span class="math-container">$\int f d\mu_{n} \xrightarrow{} \int f d\mu \geq \int_{K} f d\mu = \mu(K) \geq 1-\epsilon$</span>. So there is an integer <span class="math-container">$N$</span> such that for all <span class="math-container">$n \geq N$</span>, <span class="math-container">$\mu_{n}(O) \geq \int f d\mu_{n} \geq 1 - 2\epsilon$</span>. This means <span class="math-container">$\mu_{n}(O^{c}) \leq 2\epsilon, n \geq N$</span>.</p>
<p>Now up to discarding <span class="math-container">$\mu_{1}, ..., \mu_{N-1}$</span> the sequence of measures <span class="math-container">$\mu_{n}$</span> is uniformly tight. What next?</p>
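<p>(As a concrete check that the cut-off function behaves as claimed, here is a one-dimensional sketch with the hypothetical choices <span class="math-container">$K=[-1,1]$</span> and <span class="math-container">$\delta=0.5$</span>:)</p>

```python
# One-dimensional sketch of the cut-off: K = [-1, 1], delta = 0.5,
# O = {x : d(x, K) <= delta} = [-1.5, 1.5], f = max(1 - d(x, K)/delta, 0).
delta = 0.5

def d(x):
    """Distance from x to K = [-1, 1]."""
    return max(abs(x) - 1.0, 0.0)

def f(x):
    return max(1.0 - d(x) / delta, 0.0)

assert f(0.0) == 1.0 and f(1.0) == 1.0      # f = 1 on K
assert 0.0 < f(1.2) < 1.0                   # 0 <= f < 1 on O - K
assert f(2.0) == 0.0                        # f = 0 outside O
assert all(0.0 <= f(i / 100) <= 1.0 for i in range(-300, 301))
```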
|
489,109 | <p>I've been stumped on this problem for hours and cannot figure out how to do it from tons of tutorials.</p>
<p>Please note: This is an intro to calculus, so we haven't learned derivatives or anything too complex.</p>
<p>Here's the question: </p>
<p>Let $f(x) = x^5 + x + 7$. Find the value of the inverse function at a point.
$f^{-1}(1035) = $_<strong>__</strong>?</p>
<p>I tried setting $f(x)$ as $y$.. and solving for $x$. Clearly that doesn't help lol. I've tried many different approaches and cannot figure out the answer. I used wolframalpha, my textbook, notes, examples, and tons of Google searches and nothing makes sense. Can someone please help? Thanks!!</p>
| davidlowryduda | 9,754 | <p><strong>HINT(s)</strong></p>
<ol>
<li>$f$ is an increasing function.</li>
<li>Since $f$ is increasing, you will be able to modify your guesses to close in on the answer quickly.</li>
</ol>
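<p>A sketch of the guess-and-refine approach the hints suggest: since $f$ is increasing, bisection closes in on the answer (the bracket $[0,10]$ below is an arbitrary starting guess):</p>

```python
def f(x):
    return x**5 + x + 7

# f is strictly increasing, so one preimage; bisect a bracketing interval.
a, b = 0.0, 10.0
assert f(a) < 1035 < f(b)
for _ in range(100):
    m = (a + b) / 2
    if f(m) < 1035:
        a = m
    else:
        b = m
root = (a + b) / 2
assert abs(root - 4) < 1e-9   # f(4) = 1024 + 4 + 7 = 1035
```

<p>Indeed $f(4)=4^5+4+7=1024+11=1035$, so $f^{-1}(1035)=4$.</p>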
|
1,501,876 | <blockquote>
<p>I want to prove $A_n$ has no subgroups of index 2. </p>
</blockquote>
<p>I know that if there exists such a subgroup $H$ then $\vert H \vert = \frac{n!}{4}$ and that $\vert \frac{A_n}{H} \vert = 2$ but am stuck there. I have tried using the proof that $A_4$ has no subgroup of order 6 to get some ideas but am still stuck. Sorry I don't have much else to add at this point. Thanks a bunch.</p>
| David Hill | 145,687 | <p>As has been stated in the comments, if you know that $A_n$ is simple for $n\geq 5$, and subgroups of index 2 are normal, you are done with that case. </p>
<p>For the $n=4$ case, you do need to do a bit more work. Your idea to show that $A_n$ has no subgroup of order 6 is correct. Well, a subgroup of order $6$ in $A_4$ is either cyclic, or isomorphic to $S_3$. Since $A_4$ has no element of order 6, we can exclude the case of a cyclic subgroup and any subgroup of $A_4$ of order 6 must be isomorphic to $S_3$. </p>
<p>To see that this can't happen, suppose
$$\phi:S_3\to A_4$$
is an injective homomorphism and use the fact that $S_3$ is generated by the simple transpositions $(12),(23)$. The image of these elements must be elements of order 2 in $A_4$ which are disjoint 2-cycles. Any two of these generate a subgroup of order 4 in $A_4$. Hence, no such homomorphism exists.</p>
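<p>Both facts used above — that $A_4$ has no element of order $6$, and, more directly, that it has no subgroup of order $6$ — are small enough to verify by brute force:</p>

```python
from itertools import permutations, combinations

def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations of {0,1,2,3} given as tuples."""
    return tuple(p[q[i]] for i in range(4))

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
assert len(A4) == 12
e = (0, 1, 2, 3)

def order(p):
    q, n = p, 1
    while q != e:
        q, n = compose(q, p), n + 1
    return n

# No element of order 6 (orders in A_4 are only 1, 2, 3):
assert {order(p) for p in A4} == {1, 2, 3}

# No subgroup of order 6: a subgroup must contain e and be closed.
others = [p for p in A4 if p != e]
for rest in combinations(others, 5):
    S = {e, *rest}
    if all(compose(a, b) in S for a in S for b in S):
        raise AssertionError("found a subgroup of order 6")
print("A_4 has no subgroup of order 6")
```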
|
3,168,119 | <p>How do I solve for n?</p>
<p><span class="math-container">$125 = x * 2^n$</span></p>
<p>This is what I have so far:</p>
<p><span class="math-container">$5^3 = x * 2^n$</span></p>
<p>I do remember that, according to the exponent rules, the powers should be the same if the equation is like this:</p>
<p><span class="math-container">$8 = 2^n$</span></p>
<p><span class="math-container">$2^3 = 2^n \iff 2^3 = 2^3$</span></p>
<p>I am not sure if this rule can be used in the equation above.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: We get <span class="math-container">$$\frac{125}{x}=2^n$$</span> so <span class="math-container">$$\ln\left(\frac{125}{x}\right)=n\ln(2)$$</span></p>
|
4,139,141 | <p>If the conditions of the theorem are met for some ordinary differential equation, then we are guaranteed that a solution exists. However, I don't fully understand what it means for a solution to exist. If we can show that a solution exists, does that mean that it can be found explicitly using known methods? Or, are there some differential equations, that we know exist because of the theorem, but for which we can not find a general solution, and are thus forced to use numerical methods for an approximation?</p>
| Henno Brandsma | 4,280 | <ol>
<li><p>The <span class="math-container">$V_a$</span> are open for each <span class="math-container">$a \in A$</span> (that's how it's chosen in the application of Hausdorffness) and each <span class="math-container">$a \in A$</span> is covered by "its own" <span class="math-container">$V_a$</span>, so it's clear that <span class="math-container">$A \subseteq \bigcup \{V_a\mid a \in A\}$</span> which is what it means to cover <span class="math-container">$A$</span>.</p>
</li>
<li><p>is just a reformulation (using some notation) of the fact that <span class="math-container">$\{V_a\mid a \in A\}$</span> has a finite subcover. So finitely many <span class="math-container">$V_a$</span> can be found (for different <span class="math-container">$a \in A$</span>) so that <span class="math-container">$A$</span> is still a subset of their union. The proof just gives a <em>name</em> to the <span class="math-container">$a$</span> whose <span class="math-container">$V_a$</span> are used in this finite subcover: there are <span class="math-container">$a_1, a_2, \ldots a_n$</span> (so <span class="math-container">$n\in \Bbb N$</span> is the number of sets in the finite subcover) so that <span class="math-container">$A \subseteq \bigcup_{i=1}^n V_{a_i} = \bigcup \{V_{a_i}\mid i=1,2,\ldots n\}$</span>. The name <span class="math-container">$V$</span> is introduced for this finite union.</p>
</li>
<li><p>For those same <span class="math-container">$a_1,a_2,\ldots,a_n$</span> we have <em>corresponding</em> <span class="math-container">$U_{a_i}$</span>, which are their disjoint counterparts that all contain <span class="math-container">$x$</span> by construction (<span class="math-container">$x$</span> is the fixed arbitrary point outside <span class="math-container">$A$</span> that we're working with in this part of the proof), and so <span class="math-container">$U:= \bigcap_{i=1}^n U_{a_i}$</span> is a <strong>finite</strong> (this is essential) intersection of open sets that all contain <span class="math-container">$x$</span>; hence <span class="math-container">$x \in U$</span> and <span class="math-container">$U$</span> is open.</p>
</li>
<li><p>That <span class="math-container">$A \subseteq V$</span> is already part of 2 and the definition of <span class="math-container">$V$</span> as the union of the finite subcover. That <span class="math-container">$x \in U$</span> was noted in 3 already. No new info.</p>
</li>
<li><p><span class="math-container">$U \cap V = \emptyset$</span> is the whole point: if <span class="math-container">$p \in U \cap V$</span> then <span class="math-container">$p \in V_{a_j}$</span> for some <span class="math-container">$1 \le j \le n$</span> (definition of union) and also <span class="math-container">$p \in U_{a_j}$</span> for that same <span class="math-container">$j$</span> (as <span class="math-container">$p$</span> is in the intersection of the <span class="math-container">$U_{a_i}$</span>, so in <em>all</em> of them). Contradiction, as these sets <span class="math-container">$U_{a_j}$</span> and <span class="math-container">$V_{a_j}$</span> are the <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> sets (for <span class="math-container">$a=a_j$</span>) that were chosen disjointly by Hausdorffness! So no such <span class="math-container">$p \in U \cap V$</span> can exist.</p>
</li>
<li><p>As <span class="math-container">$A$</span> is a subset of <span class="math-container">$V$</span>, <span class="math-container">$U$</span> and <span class="math-container">$A$</span> are certainly disjoint too.</p>
</li>
<li><p>So reformulated: <span class="math-container">$x \in U \subseteq X-A$</span> (two sets are disjoint iff one is a subset of the compelment of the other, a simple logical reformulation).</p>
</li>
</ol>
<p>As each point of <span class="math-container">$X-A$</span> is an interior point of <span class="math-container">$X-A$</span>, <span class="math-container">$X-A$</span> is open and so <span class="math-container">$A$</span> is closed.</p>
|
14,238 | <p>In question #7656, Peter Arndt asked <a href="https://mathoverflow.net/questions/7656/why-does-the-gamma-function-complete-the-riemann-zeta-function">why the Gamma function completes the Riemann zeta function</a> in the sense that it makes the functional equation easy to write down. Several of the answers were from the perspective of Tate's thesis, which I don't really have the background to appreciate yet, so I'm asking for another perspective.</p>
<p>The perspective I want goes something like this: the Riemann zeta function is a product of the local zeta functions of a point over every finite prime $p$, and the Gamma function should therefore be the "local zeta function of a point at the infinite prime."</p>
<p><strong>Question 1:</strong> Can this intuition be made precise without the machinery of Tate's thesis? (It's okay if you think the answer is "no" as long as you convince me why I should try to understand Tate's thesis!)</p>
<p>Multiplying the local zeta functions for the finite and infinite primes together, we get the Xi function, which has the nice functional equation. Now, as I learned from Andreas Holmstrom's <a href="https://mathoverflow.net/questions/2040/why-are-functional-equations-important">excellent answer to my question about functional equations</a>, for the local zeta functions at finite primes the functional equation</p>
<p>$$\zeta(X,n-s) = \pm q^{ \frac{nE}{2} - Es} \zeta(X, s)$$</p>
<p>(notation explained at <a href="http://en.wikipedia.org/wiki/Weil_conjectures#Statement_of_the_Weil_conjectures" rel="nofollow noreferrer">the Wikipedia article</a>), which for a point is just the statement $\frac{1}{1 - p^s} = -p^{-s} \frac{1}{1 - p^{-s}}$, reflects Poincare duality in etale cohomology, and the hope is that the functional equation for the Xi function reflects Poincare duality in some conjectural "arithmetic cohomology theory" for schemes over $\mathbb{Z}$ (or do I mean $\mathbb{F}_1$?).</p>
<p><strong>Question 2:</strong> Can the reflection formula for the Gamma function be interpreted as "Poincare duality" for some cohomology theory of a point "at the infinite prime"? (Is this question as difficult to answer as the more general one about arithmetic cohomology?)</p>
| Kevin Buzzard | 1,384 | <p>I am no expert, but let me give you some guesses as to the answers.</p>
<p>Q1) I am going to go for "no". I think it's precisely Tate's thesis that shows that the gamma factor for Riemann zeta can be interpreted as a local factor analogous to the usual local factors at the finite primes. However let me absolutely stress that you do not need to read all the technical details of Tate's thesis to understand the analogue completely. Any function on a local field (including $\mathbf{Q}_p$ and $\mathbf{R}$) has a Fourier transform, which is another such function. If you normalise things in a sane way, then the characteristic function of $\mathbf{Z}_p$ is its own Fourier transform, and there's a standard function on $\mathbf{R}$ that is its own Fourier transform. Now do a certain explicit integral to these functions---the same sort in both cases---this is in Tate's thesis. On the $p$-adic side you get $(1-p^{-s})^{-1}$ and on the real side you get the correct Gamma factor. None of this is a mystery really and is really just scratching the surface of Tate's thesis (the meat of which is the functional equation, not the definition!). In fact, let's do it now.</p>
<p>So $k$ is either $\mathbf{Q}_p$ or $\mathbf{R}$, and $\mu^*$ is a Haar measure on $k^*$. Now if $f$ is a function on $k$, let's define
$$\zeta(f,s)=\int_{k^\times}f(t)|t|^s d\mu^*$$
(the integral should be over $k^\times$)
Let's now compute this integral for some choices of $f$, $k$. If $k=\mathbf{Q}_p$
and $f$ is the characteristic function of $\mathbf{Z}_p$ (which turns out to be
its own Fourier transform) and $\mu^*$ is normalised to make $\mathbf{Z}_p^*$ have
measure 1, then the integral is (breaking up $\mathbf{Z}_p\backslash\{0\}$ into a sum of $p^m\mathbf{Z}_p^\times$ for $m\geq0$)
$$\zeta(f,s)=\sum_{m\geq0}p^{-ms}=(1-p^{-s})^{-1}.$$</p>
<p>Now let's try another example: let's let $f$ be $e^{-\pi x^2}$, which is its own Fourier transform, let's let Haar measure on $\mathbf{R}^\times$ be $dx/|x|$, and let's compute the integral. It's done on p317 of Cassels-Froehlich---Tate's thesis: we need to compute
$$\int_{x\in\mathbf{R}}e^{-\pi x^2}|x|^{s-1} dx$$
which is readily checked to be $\pi^{-s/2}\Gamma(s/2)$.</p>
<p>Q2) I think there is a misunderstanding here (perhaps mine, perhaps yours). The global functional equation reflects Poincare duality in the function field case. I don't really see a local functional equation for $(1-p^{-s})^{-1}$ so I don't really know what you're asking.</p>
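<p>Both local computations can be checked numerically — the $p$-adic side is just a geometric series, and the archimedean integral can be compared against $\pi^{-s/2}\Gamma(s/2)$. (A rough trapezoid-rule sketch; the sample values $p=3$, $s=2$ are my own choices.)</p>

```python
import math

# p-adic side: sum_{m >= 0} p^(-ms) = (1 - p^(-s))^(-1); sample p = 3, s = 2.
p, s = 3, 2.0
series = sum(p ** (-m * s) for m in range(200))
assert abs(series - 1 / (1 - p ** (-s))) < 1e-12

# archimedean side: integral over R of e^(-pi x^2) |x|^(s-1) dx
# should equal pi^(-s/2) * Gamma(s/2); crude trapezoid rule on [-6, 6].
def integrand(x):
    return math.exp(-math.pi * x * x) * abs(x) ** (s - 1)

N, L = 200_000, 6.0
h = 2 * L / N
total = (integrand(-L) + integrand(L)) / 2 + sum(integrand(-L + i * h)
                                                 for i in range(1, N))
total *= h
assert abs(total - math.pi ** (-s / 2) * math.gamma(s / 2)) < 1e-6
```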
|
115,367 | <p>Let $f(z)$ be an analytic function on $D=\{z : |z|\leq 1\}$, with $|f(z)| < 1$ if $|z|=1$. How does one show that there exists $z_0 \in D$ such that $f(z_0)=z_0$? I tried to define $f(z)/z$ and use the Schwarz Lemma, but was not successful.</p>
<p>Edit: The hypothesis is changed to $|f(z)| < 1$ if $|z|=1$. I tried the following. If $f$ is constant, then the conclusion is true. Suppose that $f$ is not constant and $f(z_0)\neq z_0$ for all $z_0\in D$. Then $g(z)=\frac{1}{f(z)-z}$ is analytic. If I can show that $g$ is bounded, then we are done. But it seems that $g$ is not bounded.</p>
| davin | 2,881 | <p>Fixed point theorem should suffice, but here's a slightly more complex-analysis-style proof:</p>
<p>By Rouché's theorem, since $|f(z)|<1=|z|$ on $\{|z|=1\}$, we have $N_{f(z)-z}(D)=N_z(D)=1$, so we've proven something slightly stronger: there exists <em>a unique</em> $z_0 \in D$ such that $f(z_0)-z_0=0$, i.e. $f(z_0)=z_0$.</p>
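<p>(The zero count can also be seen numerically via the argument principle: the winding number of $g(z)=f(z)-z$ around $0$ along $|z|=1$ equals the number of fixed points of $f$ in $D$. Below, the hypothetical test function $f(z)=z^2/2$, which satisfies $|f(z)|=1/2<1$ on $|z|=1$, gives winding number $1$, matching its unique fixed point $z_0=0$.)</p>

```python
import cmath, math

def f(z):
    return z * z / 2           # hypothetical test map, |f| = 1/2 on |z| = 1

def g(z):
    return f(z) - z            # zeros of g in D = fixed points of f

# Argument principle: winding number of g along |z| = 1 counts zeros in D.
N = 100_000
total = 0.0
prev = cmath.phase(g(1 + 0j))
for k in range(1, N + 1):
    cur = cmath.phase(g(cmath.exp(2j * math.pi * k / N)))
    diff = cur - prev
    if diff > math.pi:
        diff -= 2 * math.pi    # unwrap the branch cut of phase
    elif diff < -math.pi:
        diff += 2 * math.pi
    total += diff
    prev = cur
winding = round(total / (2 * math.pi))
assert winding == 1            # exactly one fixed point in the disk (z_0 = 0)
```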
|
795,193 | <p><strong>THEOREM</strong>: Suppose $\{f_n\}$ is a sequence of continuous functions from $[a,b]$ to $\Bbb R$ that converge pointwise to a continuous function $f$ over $[a,b]$. If $f_{n+1}\leq f_n$, then convergence is uniform. </p>
<p>Then, why is the continuity of the functions $f_i$ important for the theorem?</p>
<p><strong>Almost all textbooks seem to use the property of compactness, which I will get to learn only in the Metric Spaces chapter. Can anyone please give me an alternate reasoning?</strong></p>
<p><strong>Attempt</strong>: If each $f_k$ is continuous on $[a,b]$, it is bounded as well. Let the infimum of $f_k$ on $[a,b]$ be denoted $m_k$ and the supremum $M_k$. Then, since the sequence of functions is given to be monotonically decreasing, this means:</p>
<p>$m_k \geq m_{k+1} \geq \cdots \geq m_n \geq \cdots$ and $M_k \geq M_{k+1} \geq \cdots \geq M_n \geq \cdots$</p>
<p>How do I proceed next?</p>
<p><strong>Edit</strong> : Unless $f_n(x)$ is continuous at every point, we might not be able to infer the very definition of uniform convergence which states that a uniformily convergent sequence of functions $f_i$ such that $lim ~f_i = f$, then unless each $f_i$ is continuous, we won't be able to find the value of $f_i(x)$ at every point $x$ and hence won't be able to write the following definition of uniform convergence "</p>
<p>$\forall \epsilon >0, \exists ~m \in N$ s.t $f(x)-f_i(x) < \epsilon ~~\forall~~ n\geq m ~~\forall x ~\in ~ I$ where $I$ is the given interval</p>
<p>Would this be correct??</p>
| Sameer Kailasa | 117,021 | <p>As far as I know, there isn't a way to prove this theorem without using properties of compactness. In fact, compactness of domain is necessary for the statement to be true.</p>
<p>So, let's just develop those relevant properties here, in $\mathbb{R}$ instead of in metric spaces. Hopefully working through these definitions and exercises will help.</p>
<p><strong>Definition</strong>: A <em>limit point</em> of a set $S \subset \mathbb{R}$ is a point $p$ such that, for every $\epsilon>0$, there is $x\in S$ with $x \neq p$ and $|p-x|<\epsilon$.</p>
<p><strong>Definition</strong>: A subset $S\subset \mathbb{R}$ is <em>open</em> if, for all $x\in S$, there is some $\epsilon > 0$ such that $(x-\epsilon, x+\epsilon) \subset S$.</p>
<p><strong>Definition</strong>: A subset $S\subset \mathbb{R}$ is <em>closed</em> if it contains all of its limit points.</p>
<p><strong>Exercise 1</strong>: Prove that a set $A\subset \mathbb{R}$ is open if and only if $A^c$, its complement, is closed.</p>
<p><strong>Exercise 2</strong>: </p>
<p>a) Let $\{A_n\}$ be a sequence of open sets. Prove $\bigcup_{n=1}^{\infty} A_n$ is itself open.</p>
<p>b) Let $\{A_n\}$ be a sequence of closed sets. Prove $\bigcap_{n=1}^{\infty} A_n$ is itself closed. </p>
<p><strong>Exercise 3</strong>: Prove that $f:\mathbb{R} \to \mathbb{R}$ is continuous if and only if $f^{-1} (U)$ is open for every open subset $U \subset\mathbb{R}$. What does this tell you about $f^{-1} (V)$ for closed sets $V\subset\mathbb{R}$?</p>
<p><strong>Definition</strong>: An <em>open covering</em> of a set $S\subset \mathbb{R}$ is a collection of open sets $\{U_\alpha\}$ (countable or uncountable), such that $S\subset \bigcup U_\alpha$. A <em>finite subcovering</em> of $\{U_\alpha\}$ is some subcollection $\{U_1, \cdots, U_n\}$ of $\{U_\alpha\}$ such that $S \subset \bigcup_{i=1}^{n} U_i$.</p>
<p><strong>Definition</strong>: A set $S \subset\mathbb{R}$ is compact if every open covering of $S$ contains a finite subcovering of $S$.</p>
<p><strong>Exercise 4</strong>: Let $S\subset \mathbb{R}$ be compact. Prove that every infinite subset of $S$ has a limit point.</p>
<p><strong>Theorem (Heine-Borel)</strong>: A set $S\subset\mathbb{R}$ is compact if and only if it is closed and bounded. This is hard to prove, but think about why it <em>should</em> be true.</p>
<p><strong>Proof of Dini's Theorem</strong>: </p>
<p>Fix $\epsilon> 0$. Let $g_n = f_n - f$ for all $n$. Since $\{f_n\}$ is decreasing, we see $g_{n+1} \le g_n$ for all $n$, and also $g_n \ge 0$ for all $x\in [a,b]$. In addition $g_n (x) \to 0$ as $n\to\infty$ for all $x\in [a,b]$ (by pointwise convergence of the $\{f_n\}$).</p>
<p>Now, consider the sets ${g_n}^{-1} ([\epsilon, \infty))$. First, check that $[\epsilon, \infty)$ is a closed set. By Exercise 3, we then have that ${g_n}^{-1} ([\epsilon, \infty))$ is also closed, for all $n$. Thus, ${g_n}^{-1} ([\epsilon, \infty))$ is compact for every $n$ (why?). Finally, note that $${g_{n+1}}^{-1}([\epsilon, \infty)) \subset {g_n}^{-1}([\epsilon,\infty))$$ </p>
<p>The statement that the $\{f_n\}$ converge uniformly to $f$ is equivalent to the statement that ${g_n}^{-1} ([\epsilon, \infty)) = \emptyset$ for some $n$ (why?). So suppose this is not true. Then for each $n$, we can pick $x_n \in {g_n}^{-1} ([\epsilon, \infty))$. By Exercise 4, the sequence $\{x_i\}$ has a convergent subsequence $\{x_{i_k}\}$. Say that $x_{i_k} \to x$ as $k\to\infty$. We must have $x\in {g_n}^{-1} ([\epsilon, \infty))$ for all $n$ (why? Use Exercise 2). Thus, $g_n (x) \ge \epsilon$ for all $n$. This contradicts the fact that $g_n (x) \to 0$ pointwise. </p>
<p>Conclusion: The $\{f_n\}$ converge uniformly to $f$.</p>
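<p>A small numerical illustration (my own example, not part of the proof): $f_n(x)=x^n$ on $[0,0.9]$ is continuous, satisfies $f_{n+1}\le f_n$, and converges pointwise to the continuous $f\equiv 0$; Dini's theorem says the convergence is uniform, and indeed the sup-norms $\sup_x|f_n(x)-f(x)|=0.9^n$ shrink to $0$:</p>

```python
# f_n(x) = x**n on [0, 0.9]: continuous, f_{n+1} <= f_n, pointwise limit 0.
grid = [i * 0.9 / 1000 for i in range(1001)]

def sup_norm(n):
    return max(x ** n for x in grid)       # equals 0.9**n (attained at 0.9)

sups = [sup_norm(n) for n in range(1, 101)]
assert all(a >= b for a, b in zip(sups, sups[1:]))   # decreasing in n
assert sups[-1] < 1e-4                               # 0.9**100 ~ 2.7e-5
```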
|
215,983 | <p>I was expecting to get the answer to the above by doing either of the following:</p>
<pre><code>Options[Graph[{1 -> 2, 2 -> 1}], GraphLayout]
AbsoluteOptions[Graph[{1 -> 2, 2 -> 1}], GraphLayout]
</code></pre>
<p>However the result to both is</p>
<blockquote>
<p>GraphLayout->Automatic</p>
</blockquote>
<p>How do I find out the option setting for <code>GraphLayout</code> chosen by the Automatic method?</p>
| Szabolcs | 12 | <p>I believe that <code>GraphLayout -> Automatic</code> typically resolves to one of the following:</p>
<ul>
<li>For large graphs, the default is <code>"SpringElectricalEmbedding"</code>.</li>
<li><p>Small undirected trees up to 49 vertices use <code>"LayeredEmbedding"</code>. It looks like this:</p>
<p><a href="https://i.stack.imgur.com/2lmwC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2lmwC.png" alt="enter image description here"></a></p></li>
<li><p>Small directed acyclic graphs up to 49 vertices use <code>"LayeredDigraphEmbedding"</code>. It looks like this:</p>
<p><a href="https://i.stack.imgur.com/UFHAh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UFHAh.png" alt="enter image description here"></a></p></li>
</ul>
<p>This is based on experience. I have no references. You can compare with the images I included above to determine if any of the <code>Layered...</code> embeddings are being used. You can manually set a <code>"SpringElectricalEmbedding"</code> and see if anything changes to determine if that embedding was used.</p>
<p>If you have a graph which uses neither of these, let me know.</p>
|
539,363 | <p>Two horses start simultaneously towards each other and meet after $3$ h $20$ min. How much time will it take the slower horse to cover the whole distance, if the first arrived at the place of departure of the second $5$ hours later than the second arrived at the place of departure of the first?</p>
<p><strong>MY TRY</strong>:</p>
<p>Let speed of 1st be a kmph and 2nd be b kmph</p>
<p>Let the distance between A and B be d km</p>
<p>d = 10a/3 + 10b/3</p>
<p>and</p>
<p>d/a - d/b = 5 </p>
<p>Now I can't solve it. :(</p>
<p><strong>Spoiler</strong>: The answer is $10$ hours.</p>
| lab bhattacharjee | 33,337 | <p>Let the distance from the meeting place to the point of departure of the first horse $A$ be $a$ meters,
and the distance from the meeting place to the point of departure of the second horse $B$ be $b$ meters.</p>
<p>So, the total distance is $a+b$ meter</p>
<p>So, the speed of the first horse is $\displaystyle\frac a{200}$ meter/minute and that of the second is $\displaystyle\frac b{200}$ meter/minute</p>
<p>So, the first horse $A$ will need to cover $b$ meters more, which will take $\displaystyle\frac b{\frac a{200}}=\frac{200b}a$ minutes</p>
<p>So, the total time taken by $A$ will be $\displaystyle200+\frac{200b}a$ minute</p>
<p>Similarly, the total time taken by $B$ will be $\displaystyle200+\frac{200a}b$ minute</p>
<p>If $A$ is slower than $B,$ $\displaystyle200+\frac{200b}a-\left(200+\frac{200a}b\right)=300\implies 2b^2-3ab-2a^2=0\implies b=2a$ (why?)</p>
<p>The total time taken by $A$ will be $\displaystyle\frac{a+b}{\frac a{200}}$ minutes; with $b=2a$ this is $\displaystyle\frac{3a}{a/200}=600$ minutes, i.e. $10$ hours.</p>
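<p>The whole setup can also be checked numerically, with an arbitrary hypothetical speed for the slower horse:</p>

```python
# Check with a hypothetical speed: slower horse A at v_a m/min, B at 2*v_a.
v_a = 7.0
v_b = 2 * v_a          # the relation b = 2a derived above
a = 200 * v_a          # metres A covers in 3 h 20 min = 200 min
b = 200 * v_b          # metres B covers in the same time
d = a + b              # total distance between the departure points

t_a = d / v_a          # total travel time of A, minutes
t_b = d / v_b          # total travel time of B, minutes
assert abs((t_a - t_b) - 300) < 1e-9   # A arrives 5 hours after B
assert abs(t_a - 600) < 1e-9           # slower horse: 600 min = 10 hours
```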
|
3,118,462 | <p>Cars arrive according to a Poisson process with rate = 2 per hour and trucks arrive according to a Poisson process with rate = 1 per hour. They are independent.</p>
<p>What is the probability that <strong>at least</strong> 3 cars arrive before a truck arrives? </p>
<p>My thoughts:
Interarrival of cars A ~ Exp(2 per hour), Interarrival of trucks B ~ Exp(1 per hour). </p>
<p>Probability that <strong>at least</strong> 3 cars arrive before a truck arrives</p>
<p><span class="math-container">$= 1- Pr(B<A) - Pr(A<B)Pr(B<A) - Pr(A<B)Pr(A<B)Pr(B<A)
\\= 1 - (\frac{1}{3})-(\frac{2}{3}\cdot\frac{1}{3})-(\frac{2}{3}\cdot\frac{2}{3}\cdot\frac{1}{3})\\=\frac{8}{27}.$</span> </p>
<p>Is this correct?</p>
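<p>(As a sanity check of <span class="math-container">$8/27\approx 0.296$</span>, one can simulate the race: the minimum of independent exponentials with rates <span class="math-container">$2$</span> and <span class="math-container">$1$</span> is a car with probability <span class="math-container">$2/3$</span>, independently for each successive arrival.)</p>

```python
import random

random.seed(1)
TRIALS = 200_000
hits = 0
for _ in range(TRIALS):
    cars = 0
    # Race fresh Exp(2) and Exp(1) draws; a car wins with probability 2/3.
    while random.expovariate(2) < random.expovariate(1):
        cars += 1
    if cars >= 3:
        hits += 1

estimate = hits / TRIALS
assert abs(estimate - 8 / 27) < 0.01   # 8/27 ~ 0.2963
```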
| fleablood | 280,126 | <p>You have to guess. But use some tricks.</p>
<p><span class="math-container">$68 = 2^2*17$</span> so the only options are <span class="math-container">$1$</span> and <span class="math-container">$2^2*17=68$</span>, or <span class="math-container">$2$</span> and <span class="math-container">$2*17 = 34$</span>, or <span class="math-container">$4$</span> and <span class="math-container">$17$</span>. Of those only one is in the ball park.</p>
<p>If you have <span class="math-container">$ab=748$</span> and <span class="math-container">$a+b= 56$</span> we have <span class="math-container">$748 = 2^2*11*17$</span>. So there are only so many options. <span class="math-container">$11*17$</span> is way too big so we have to break up the <span class="math-container">$11$</span> and <span class="math-container">$17$</span>. One of the terms will be <span class="math-container">$11k$</span> and the other will be <span class="math-container">$17j$</span> where <span class="math-container">$kj= 4$</span>. <span class="math-container">$a + b = 56$</span> is even whereas <span class="math-container">$11$</span> and <span class="math-container">$17$</span> are odd, so <span class="math-container">$j$</span> and <span class="math-container">$k$</span> must both be even, so <span class="math-container">$2*11$</span> and <span class="math-container">$2*17$</span> is the only feasible option. And it works: <span class="math-container">$22 + 34 = 56$</span>.</p>
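<p>The case analysis can be confirmed by brute force over the divisor pairs of <span class="math-container">$748$</span>:</p>

```python
# All factor pairs of 748 = 2^2 * 11 * 17 that sum to 56:
pairs = [(a, 748 // a) for a in range(1, 749)
         if 748 % a == 0 and a + 748 // a == 56]
assert pairs == [(22, 34), (34, 22)]   # the unique solution is {22, 34}
```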
|
9,462 | <p>In my question I ask for practical tips on mathematical research practice; failing personal tips, I am looking for related articles/websites/books/guides/FAQs, or, if this was already asked on Math.SE, a link to the question.</p>
<p><a href="https://math.stackexchange.com/questions/386520/practical-tips-research-and-discoveries">Practical Tips: Mathematical research and discoveries</a></p>
<p>The stated motivation for the closure was this:</p>
<blockquote>
<p>"As it currently stands, this question is not a good fit for our Q&A
format. We expect answers to be supported by facts (1), references, or
specific expertise, but this question will likely solicit debate (2),
arguments, polling, or extended discussion(3). If you feel that this
question can be improved and possibly reopened"</p>
</blockquote>
<p>(1) Why is it not supported by facts? It is a practical problem, and I'm asking for tips.</p>
<p>(2) I'm asking for a list of links/books not for opinions.</p>
<p>(3) I don't want to discuss the answers.</p>
<p>Another reason was: "Please see the FAQ about what questions are appropriate to ask here".</p>
<p>and in the faqs I read:</p>
<blockquote>
<p>You should only ask <strong>practical</strong>, <strong>answerable</strong> questions based on <strong>actual problems that you face.</strong></p>
</blockquote>
<p>The question is about <strong>mathematical practice</strong>.
Well, if someone does mathematical research or knows some books/guides/related questions, then they can <strong>answer</strong> this question. And this is an <strong>actual problem that I face</strong>.</p>
<p>So where is the problem?</p>
<p>Thanks in advance</p>
<p><strong>QUESTION IS RECLOSED AGAIN!?</strong></p>
| Tom Oldfield | 45,760 | <p>I imagine people voted to close since the question would be seen by many to be off topic, I think that it is certainly borderline.</p>
<p>However, I think that it is valid and appropriate for the site. Most questions are related to specific mathematical problems, in the sense of proving things or answering questions, and so people forget that questions about how to undertake mathematics in a more general sense are also on topic. We have certainly had many in the past that have been well received.</p>
<p>One thing to bear in mind is that when people vote to close a question, they have to pick the "best available" reason to close. Thus the reason given may contain points that are not valid for a particular question (and at times even seem completely unrelated, although this is a different issue.)</p>
|
4,417,896 | <p>I have only found information regarding doing this by integration by parts. By differentiating under the integral sign, I let
<span class="math-container">$$I_n = \int_0^\infty x^n e^{-\lambda x} dx $$</span>
and get <span class="math-container">$\frac{dI_n}{d\lambda} = -I_{n+1} $</span> and therefore <span class="math-container">$\frac{dI_n}{d\lambda} = -\frac{n+1}{\lambda} I_n$</span>. Proceeding from here I solve the ODE to get <span class="math-container">$I_n = Ae^{-\frac{n+1}{\lambda}x}$</span>.</p>
<p>This is clearly wrong. What went wrong? I am unsure how to proceed with this differentiation of the integral approach to solve this problem.</p>
| Dr. Sundar | 1,040,807 | <p>Let us define
<span class="math-container">$$
I_n = \int\limits_{x = 0}^\infty \ x^n \, e^{-\lambda x} \ dx \tag{1}
$$</span></p>
<p><strong>Method 1: Using Gamma Functions</strong></p>
<p>Use the substitution
<span class="math-container">$$
\lambda x = t \ \ \mbox{or} \ \ x = {t \over \lambda} \tag{2}
$$</span></p>
<p>Then
<span class="math-container">$$
{dx \over dt} = {1 \over \lambda}
$$</span></p>
<p>Using the substitution (2), we can express <span class="math-container">$I_n$</span> in (1) as
<span class="math-container">$$
I_n = \int\limits_{t = 0}^\infty \ \left( {t^n \over \lambda^n} \right) \ e^{-t} \ {dt \over \lambda}
= {1 \over \lambda^{n + 1}} \ \int\limits_{t = 0}^\infty \ t^n e^{-t} \ dt
$$</span></p>
<p>It is easy to note that
<span class="math-container">$$
I_n = {1 \over \lambda^{n + 1}} \ \int\limits_{t = 0}^\infty \ t^{(n + 1) -1} \ e^{-t} \ dt = {\Gamma(n + 1) \over \lambda^{n + 1}} \
$$</span></p>
<p>Hence, the integral is evaluated as
<span class="math-container">$$
I_n = {\Gamma(n + 1) \over \lambda^{n + 1}}
$$</span>
where <span class="math-container">$\Gamma(\cdot)$</span> is the Gamma function.</p>
<p>If <span class="math-container">$n$</span> is a non-negative integer, then we deduce that
<span class="math-container">$$
I_n = {n! \over \lambda^{n + 1}}
$$</span>
because <span class="math-container">$\Gamma(n+1) = n!$</span> for <span class="math-container">$n \in \mathbf{N}$</span>.</p>
<p><strong>Method 2: Using Integration by Parts</strong></p>
<p><span class="math-container">$$
I_n = \int\limits_{x = 0}^\infty \ x^n \, e^{-\lambda x} \ dx \tag{1}
$$</span></p>
<p>Here, we express the Integral <span class="math-container">$I_n$</span> as
<span class="math-container">$$
I_n = \int\limits_{x = 0}^\infty \ x^n \ d\left[ {e^{-\lambda x} \over -\lambda} \right]
$$</span></p>
<p>Using Integration by Parts, we evaluate <span class="math-container">$I_n$</span> as
<span class="math-container">$$
I_n = \left[ x^n \left( {e^{-\lambda x} \over -\lambda} \right)
\right]_0^\infty -
\int\limits_0^\infty \left( {e^{-\lambda x} \over -\lambda} \right)
\ \left( n x^{n - 1} \right) \ dx
$$</span>
which can be simplified as
<span class="math-container">$$
I_n = \left[ 0 - 0 \right] + {n \over \lambda} \
\int\limits_0^\infty \ x^{n - 1} e^{-\lambda x} \ dx
$$</span></p>
<p>That is,
<span class="math-container">$$
I_n = {n \over \lambda} \ I_{n - 1}
$$</span>
which is a useful recurrence relation.</p>
<p>Proceeding recursively, we obtain
<span class="math-container">$$
I_n = {n \over \lambda} {(n - 1) \over \lambda} \cdots {1 \over \lambda} \ I_0
$$</span></p>
<p>That is,
<span class="math-container">$$
I_n = {n! \over \lambda^n} \ {1 \over \lambda} = {n! \over \lambda^{n + 1}}
$$</span>
because <span class="math-container">$I_0 = \frac{1}{\lambda}$</span>.</p>
<p>Note that
<span class="math-container">$$
I_0= \int\limits_{0}^\infty \ e^{-\lambda x} \ dx =
\left[ {e^{-\lambda x} \over - \lambda} \right]_0^\infty = {1 \over \lambda}.
$$</span></p>
<p>Hence, both methods yield the same value for <span class="math-container">$I_n$</span>. <span class="math-container">$\ \ \ \blacksquare$</span></p>
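Both derivations are easy to sanity-check numerically. Below is a minimal Python sketch (standard library only; the integration cutoff and step count are ad hoc choices) that approximates the integral with a crude trapezoidal rule and compares it with the closed form n!/λ^(n+1):

```python
from math import exp, factorial

def I_approx(n, lam, upper=60.0, steps=100_000):
    """Trapezoidal approximation of the integral of x^n * e^(-lam*x) over [0, upper].

    For moderate lam the integrand is negligible beyond `upper`, so the
    truncated integral is very close to the improper one.
    """
    f = lambda x: x**n * exp(-lam * x)
    h = upper / steps
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

def I_closed(n, lam):
    """The closed form n! / lam^(n+1) derived above."""
    return factorial(n) / lam ** (n + 1)
```

For example, `I_approx(3, 1.5)` agrees with `I_closed(3, 1.5)` to roughly six decimal places.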
|
3,345,329 | <p>In Bourbaki Lie Groups and Lie Algebras chapter 4-6 the term displacement is used a lot. For example groups generated by displacements. But I cannot find a definition of the term displacement given anywhere. I also looked at Humphreys Reflection Groups and Coxeter groups book but I could not find it. Can someone provide a definition of displacements in the context of reflection groups and root systems? </p>
| River Li | 584,414 | <p>WLOG, assume that <span class="math-container">$\sigma = 1$</span>.</p>
<p>We have a random sample <span class="math-container">$X_1, X_2, \cdots, X_n$</span> of size <span class="math-container">$n$</span> from <span class="math-container">$X\sim N(0, 1)$</span>.
The probability density function of <span class="math-container">$X$</span> is <span class="math-container">$f(x) = \frac{1}{\sqrt{2\pi}}\mathrm{exp}(-\frac{x^2}{2})$</span>.
The cumulative distribution function of <span class="math-container">$X$</span> is
<span class="math-container">$\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\mathrm{exp}(-\frac{t^2}{2}) dt$</span>.</p>
<p>The order statistics <span class="math-container">$X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$</span> are obtained
by ordering the sample <span class="math-container">$X_1, X_2, \cdots, X_n$</span> in ascending order.</p>
<p>The probability density function of the <span class="math-container">$k$</span>-th order statistic <span class="math-container">$X_{(k)}$</span> is
<span class="math-container">$$f_k(x) = \frac{n!}{(k-1)!(n-k)!}[\Phi(x)]^{k-1}[1-\Phi(x)]^{n-k}f(x), \quad -\infty < x < \infty.$$</span>
The joint probability density function of <span class="math-container">$k$</span>-th and <span class="math-container">$(k+1)$</span>-th order statistics <span class="math-container">$X_{(k)}$</span> and <span class="math-container">$X_{(k+1)}$</span> is
<span class="math-container">$$f_{k, k+1}(x, y) = \frac{n!}{(k-1)!(n-k-1)!}[\Phi(x)]^{k-1}[1-\Phi(y)]^{n-k-1} f(x) f(y),
\quad x \le y.$$</span></p>
<p>If <span class="math-container">$n$</span> is odd, we have <span class="math-container">$\mathrm{Med}(X_1, X_2, \cdots, X_n) = X_{(n+1)/2}$</span> and hence
<span class="math-container">\begin{align}
&\mathrm{E}[\mathrm{Med}(X_1, X_2, \cdots, X_n)^2]\\
=\ & \int_{-\infty}^\infty x^2 f_{(n+1)/2}(x) dx \\
=\ & \int_{-\infty}^\infty x^2 \frac{n!}{(\frac{n-1}{2})!^2}[\Phi(x)- \Phi(x)^2]^{(n-1)/2}\frac{1}{\sqrt{2\pi}}\mathrm{exp}(-\frac{x^2}{2}) dx.
\end{align}</span></p>
<p>If <span class="math-container">$n$</span> is even, we have <span class="math-container">$\mathrm{Med}(X_1, X_2, \cdots, X_n) = \frac{1}{2}(X_{n/2} + X_{n/2+1})$</span> and hence
<span class="math-container">\begin{align}
&\mathrm{E}[\mathrm{Med}(X_1, X_2, \cdots, X_n)^2]\\
=\ & \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{1}{4}(x+y)^2 f_{n/2, n/2+1}(x, y)\ 1_{x < y}\ dx dy\\
=\ & \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{1}{4}(x+y)^2 \frac{n!}{(n/2-1)!^2}[\Phi(x)]^{n/2-1}[1-\Phi(y)]^{n/2-1} f(x) f(y) 1_{x < y} dx dy
\end{align}</span>
where <span class="math-container">$1_{x < y}$</span> is the indicator function.</p>
<p>For both cases, since <span class="math-container">$\mathrm{E}[\mathrm{Med}(X_1, X_2, \cdots, X_n)] = 0$</span>,
we obtain <span class="math-container">$$\mathrm{Var}[\mathrm{Med}(X_1, X_2, \cdots, X_n)] = \mathrm{E}[\mathrm{Med}(X_1, X_2, \cdots, X_n)^2].$$</span></p>
<p>Numerically verified: I used Maple software to calculate the integrals. I also used Matlab to do a Monte Carlo simulation (i.e., generate many groups of normally distributed data, calculate the median of each group, and average the squared medians). </p>
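For readers without Maple or Matlab, here is a minimal Python re-creation of that Monte Carlo check, using only the standard library (the trial count and seed are arbitrary choices):

```python
import random
import statistics

def median_sq_mean(n, trials=50_000, seed=0):
    """Monte Carlo estimate of E[Med(X_1, ..., X_n)^2] for X_i ~ N(0, 1).

    Since E[Med] = 0 by symmetry, this also estimates Var[Med].
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        acc += statistics.median(sample) ** 2
    return acc / trials
```

For n = 5 the estimate lands near 0.29, noticeably larger than the variance 1/5 of the sample mean, and it can be compared against the integral formula above.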
|
3,921,847 | <p>I had the following question:</p>
<blockquote>
<p>Does there exist a nonzero polynomial <span class="math-container">$P(x)$</span> with integer coefficients satisfying both of the following conditions?</p>
<ul>
<li><span class="math-container">$P(x)$</span> has no rational root;</li>
<li>For every positive integer <span class="math-container">$n$</span>, there exist an integer <span class="math-container">$m$</span> such that <span class="math-container">$n$</span> divides <span class="math-container">$P(m)$</span>.</li>
</ul>
</blockquote>
<p>I created a proof showing that there was no polynomial satisfying both of these conditions:</p>
<blockquote>
<p>Suppose that we have a nonzero polynomial with integer coefficients <span class="math-container">$P(x)=\sum_i c_i x^i$</span> without a rational root, and for all positive integer <span class="math-container">$n$</span>, we have an integer <span class="math-container">$m_n$</span> such that <span class="math-container">$n|P(m_n)$</span>. This would imply <span class="math-container">$P(m_n)\equiv0\pmod n\Rightarrow \sum_i c_i m_n^i\equiv0$</span>. By Freshman's Dream we have <span class="math-container">$P(m_n+an)=\sum_i c_i(m_n+an)^i\equiv\sum_i c_im_n^i+c_ia^in^i\equiv\sum_ic_im_n^i=P(m_n)\equiv0\pmod n$</span> for some integer <span class="math-container">$a$</span>. Therefore if <span class="math-container">$b\equiv m_n\pmod n$</span> then <span class="math-container">$P(b)\equiv P(m_n)\equiv0\pmod n$</span>.</p>
</blockquote>
<blockquote>
<p>Now the above conditions and findings imply for all prime <span class="math-container">$p$</span>, we have a number <span class="math-container">$m_p$</span> such that <span class="math-container">$p|P(m_p)$</span>, and that if <span class="math-container">$b\equiv m_p\pmod p$</span> then <span class="math-container">$P(b)\equiv 0\pmod p$</span>. Consider the set of the smallest <span class="math-container">$n$</span> primes <span class="math-container">$\{p_1,p_2,p_3,\cdots,p_n\}$</span>. By the Chinese Remainder Theorem there exists an integer <span class="math-container">$b$</span> such that <span class="math-container">$b\equiv m_{p_1}\pmod{p_1},b\equiv m_{p_2}\pmod{p_2},b\equiv m_{p_3}\pmod{p_3},\cdots,b\equiv m_{p_n}\pmod{p_n}$</span>. Then <span class="math-container">$p_1,p_2,p_3\cdots,p_n|P(b)\Rightarrow p_1p_2p_3\cdots p_n|P(b)$</span>. As <span class="math-container">$n$</span> approaches infinity (as there are infinitely many primes), <span class="math-container">$p_1,p_2,p_3\cdots,p_n$</span> approaches infinity. Therefore either <span class="math-container">$P(b)=\infty$</span> or <span class="math-container">$P(b)=0$</span>. Since for finite <span class="math-container">$b$</span> and integer coefficients <span class="math-container">$P(b)$</span> must be finite, then <span class="math-container">$P(b)=0$</span>. However as <span class="math-container">$b$</span> is an integer, this implies <span class="math-container">$P$</span> has a rational root, a contradiction.</p>
</blockquote>
<p>I'm not sure if my proof is correct, and my main concern is that I am incorrectly using Chinese Remainder Theorem since I am not sure if I can apply it to infinitely many divisors.</p>
<p>Is this proof correct, and if not, how do I solve this question?</p>
<p><strong>EDIT:</strong> It appears not only is my proof incorrect (as<span class="math-container">$b$</span> does not converge) as Paul Sinclair has shown, but that according to Jaap Scherphuis there are examples of polynomials that satisfy the conditions. Therefore, my question now is how one can prove the <em>existence</em> of these polynomials while using elementary methods (as this is an IMO selection test problem).</p>
| JMP | 633,430 | <p>Consider the polynomial</p>
<p><span class="math-container">$$
P(x) = (x^2 - 13)(x^2 - 17)(x^2 - 221)
$$</span></p>
<p>Clearly this has no rational solutions. We wish to show that the congruence <span class="math-container">$P(x) \equiv 0 \bmod{m}$</span> is solvable for all integers <span class="math-container">$m$</span>. By the Chinese remainder theorem, this is the same as showing <span class="math-container">$P(x) \equiv 0 \bmod{p^k}$</span> is solvable for all prime powers <span class="math-container">$p^k$</span>.</p>
<p>Notice that if either <span class="math-container">$13$</span>, <span class="math-container">$17$</span>, or <span class="math-container">$221$</span> is a quadratic residue modulo <span class="math-container">$p^k$</span>, then one of the quadratic terms in <span class="math-container">$P$</span> is a difference of squares modulo <span class="math-container">$p^k$</span> and factors into linear terms and we are done. We show this is always the case.</p>
<p>For odd <span class="math-container">$p \neq 13,17$</span>, not all of <span class="math-container">$13$</span>, <span class="math-container">$17$</span>, <span class="math-container">$221$</span> can be quadratic non-residues modulo <span class="math-container">$p$</span> (and for odd <span class="math-container">$p \nmid a$</span>, a root of <span class="math-container">$x^2 \equiv a \bmod p$</span> lifts to a root modulo every <span class="math-container">$p^k$</span> by Hensel's lemma), otherwise we would have the following contradiction</p>
<p><span class="math-container">$$
-1 = \left( \frac{221}{p}\right) = \left( \frac{13}{p}\right)\left( \frac{17}{p}\right)=(-1) (-1)=1
$$</span></p>
<p>If <span class="math-container">$p = 13$</span> then <span class="math-container">$17$</span> is a quadratic residue modulo <span class="math-container">$13^k$</span> for all <span class="math-container">$k$</span>, because by quadratic reciprocity we have</p>
<p><span class="math-container">$$
\left( \frac{17}{13}\right) = \left( \frac{13}{17}\right) (-1)^{\frac{13 -1}{2} \frac{17 -1}{2}} = \left( \frac{13}{17}\right) = 1,
$$</span></p>
<p>since <span class="math-container">$8^2 \equiv 13 \pmod{17}$</span>, and the root again lifts to every power of <span class="math-container">$13$</span> by Hensel's lemma.</p>
<p>If <span class="math-container">$p=17$</span> a similar argument applies.</p>
<p>For <span class="math-container">$p=2$</span> we can use Hensel's lemma to show that for any odd <span class="math-container">$a$</span>, <a href="https://math.stackexchange.com/questions/31439/prove-that-x2-equiv-a-pmod2e-is-solvable-forall-e-iff-a-equiv-1-pm"><span class="math-container">$x^2 \equiv a \bmod{2^k}$</span> is solvable for all <span class="math-container">$k$</span> if and only if <span class="math-container">$a \equiv 1 \bmod{8}$</span>.</a> Therefore <span class="math-container">$17$</span> is a quadratic residue modulo <span class="math-container">$2^k$</span> for all <span class="math-container">$k$</span>.</p>
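The solvability claim is easy to spot-check by brute force; a quick Python sketch (the cutoff 150 is an arbitrary choice):

```python
def P(x):
    return (x * x - 13) * (x * x - 17) * (x * x - 221)

def solvable_mod(m):
    """True if P(x) = 0 (mod m) has a solution."""
    return any(P(x) % m == 0 for x in range(m))
```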
|
891,370 | <p>I got the equation $8.513 \times 1.00531^{\Large t} = 10$. The task is to solve for $t$. The correct answer is $t = 31$. How do I get there?</p>
| MPW | 113,214 | <p><strong>Hint:</strong> Use the fact that $\log(ab^c)=\log a + c\log b$</p>
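Applying the hint: take logs of both sides and isolate $t$. A quick numerical check in Python (natural log shown, but any base works; the exact root comes out near 30.4, so the quoted answer of 31 presumably rounds up to the next whole period):

```python
from math import log, ceil

# 8.513 * 1.00531**t = 10
# => log(8.513) + t * log(1.00531) = log(10)
# => t = (log(10) - log(8.513)) / log(1.00531)
t = (log(10) - log(8.513)) / log(1.00531)
```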
|
16,795 | <p>Consider a finite simple graph $G$ with $n$ vertices, presented in two different but equivalent ways:</p>
<ol>
<li>as a logical formula $\Phi= \bigwedge_{i,j\in[n]} \neg_{ij}\ Rx_ix_j$ with $\neg_{ij} = \neg$ or $ \neg\neg$ </li>
<li>as an (unordered) set $\Gamma = \lbrace [n],R \subseteq [n]^2\rbrace$ </li>
</ol>
<p>In each case the complement $G'$ of $G$ is easily presented and is of course <em>not</em> isomorphic to $G$ (in the usual sense) generally:</p>
<ol>
<li>$ \Phi' = \bigwedge_ {i,j} \neg \neg_{ij}\ R x_i x_j $ </li>
<li>$\Gamma' = \lbrace [n],[n]^2 \setminus R\rbrace$</li>
</ol>
<p>Let's state for the moment that the presentation as a logical formula is the more "flexible" one: we can easily omit single literals, leaving it open whether $Rx_ix_j$ or not. But this can be mimicked for set presentation by making it from a pair to a triple $\lbrace[n],R,\neg R \subseteq [n]^2 \setminus R\rbrace$. </p>
<p>Let's call a presentation <em>complete</em>, if it leaves nothing open, i.e. no omitted literal and $\neg R = [n]^2 \setminus R$, resp.</p>
<p>Now, let a graph be given in complete set presentation $\lbrace[n],R,\neg R = [n]^2 \setminus R\rbrace$. Since order in this set should not matter, any sensible definition of "graph isomorphism" should make any graph isomorphic to its complement.</p>
<blockquote>
<p>Where and how do I run into trouble when I
assume - following this line of
reasoning, contrary to the usual line of thinking - that every (finite) graph is
isomorphic to its complement?</p>
</blockquote>
| Joel David Hamkins | 1,946 | <p>Perhaps what Hans means is simply that any graph has exactly the same information as the complement graph, because if we know completely where there are no edges, then we also know completely where are the edges, and conversely.</p>
<p>But having the same information in this logical sense is not the same as being isomorphic in the sense of graphs, and obviously there are numerous graphs that are not graph isomorphic to their complements, the simplest example being the graph with no edges, whose complement is the complete graph.</p>
|
4,580,717 | <p>I know I can express "everyone is A" as:</p>
<p>P: is a person
<span class="math-container">$$ \forall x (Px \implies Ax) $$</span></p>
<p>And I can express "everyone who's A is B" as:</p>
<p><span class="math-container">$$ \forall x ((Px \land Ax) \implies Bx) $$</span></p>
<p>But how can I express "everyone who's A"? If I were to take only the first part of the previous statement it would mean "everything is a person and A" instead of just "everyone who's A":</p>
<p><span class="math-container">$$ \forall x (Px \land Ax) $$</span></p>
| Bram28 | 256,001 | <p>"Everyone who's A" is not a sentence, because it has no truth-value.</p>
|
3,383,687 | <p>I'm interested in ideas for improving and fixing the proof I wrote for the following theorem:</p>
<blockquote>
<p>Let <span class="math-container">$f \colon \mathbb{R}^n \to \mathbb{R} $</span> be differentiable, and <span class="math-container">$ \lim_{\| x \| \to \infty} f(x) = 0 $</span>. Then <span class="math-container">$\nabla f(x) = 0 $</span> for some <span class="math-container">$x \in \mathbb{R}^n$</span>.</p>
</blockquote>
<p>Here's the idea of the proof. First, since <span class="math-container">$f$</span> is differentiable, it is continuous.</p>
<p>As <span class="math-container">$ \lim_{\| x \| \to \infty} f(x) = 0 $</span>, <span class="math-container">$\forall \varepsilon > 0, \exists r \in \mathbb{R} : |f(x) - 0| < \varepsilon$</span> whenever <span class="math-container">$\| x \| > r$</span>.</p>
<p>If we choose <span class="math-container">$D = \{x \in \mathbb{R}^n : \| x \| \leq r \}$</span>, we can use the theorem that states that all continuous functions are bounded inside closed sets. In other words, there's a supremum of <span class="math-container">$|f(x)|$</span> in <span class="math-container">$D$</span>.</p>
<p>Then we just look at the cases: if <span class="math-container">$f(x) = 0$</span>, so its gradient is always 0 and we're done.</p>
<p>If <span class="math-container">$f$</span> varies in the set <span class="math-container">$D$</span>, there exist <span class="math-container">$a,b \in D$</span> such that <span class="math-container">$f(a) \neq f(b)$</span>, ie. <span class="math-container">$\exists \varepsilon_2 > 0$</span> so <span class="math-container">$| f(a) - f(b) | > \varepsilon_2 $</span>.</p>
<p>If we choose <span class="math-container">$\varepsilon_2 > \varepsilon$</span>, <span class="math-container">$|f(x)|$</span> attains greater values in <span class="math-container">$D$</span> than outside it, and if we choose <span class="math-container">$c$</span> to be a point such that <span class="math-container">$$f(c) = \sup{\{f(x) : x \in D\}}$$</span> Then <span class="math-container">$|f(x)| \leq |f(c)|\quad \forall x \in D$</span> and as <span class="math-container">$f$</span> is differentiable, <span class="math-container">$\nabla f(c) = 0$</span>.</p>
<p>There are more than a few issues I have with the formulation of the proof. First, "<span class="math-container">$f$</span> attains greater values in <span class="math-container">$D$</span> than outside it" seems a little ambiguous. Then the choosing of <span class="math-container">$c$</span> in a convenient way after having talked about it at such length... Additionally, I'd like to use the definition of differentiability that states that if <span class="math-container">$f$</span> is differentiable, it can be represented as </p>
<p><span class="math-container">$$f(x_0+h) = f(x_0) + Df(x_0)h + \varepsilon(h)\| h \|,\quad h \in \mathbb{R}^n $$</span> </p>
<p>where <span class="math-container">$\varepsilon(h)\| h \| \to 0$</span> as <span class="math-container">$\| h \| \to 0$</span>, and where <span class="math-container">$Df(x)$</span> is the gradient in this case, or the Jacobian in a more general case. I'm almost certain you could bound the gradient <span class="math-container">$Df(c)$</span> to <span class="math-container">$0$</span> somehow using that definition, because it gives you a semi-explicit expression, instead of the verbal hand-waving I'm facing.</p>
<p>There might've also been a method much simpler than this, but I couldn't exactly employ the mean value theorem easily here with the whole open domain. Maybe using the <span class="math-container">$D$</span> I defined there would've worked.</p>
| user284331 | 284,331 | <p>Suppose <span class="math-container">$f$</span> is not identically zero, say, <span class="math-container">$f(x_{0})\ne 0$</span>, choose a big <span class="math-container">$M>0$</span> such that <span class="math-container">$|x_{0}|<M$</span> and <span class="math-container">$|f(x)|<|f(x_{0})|$</span> for all <span class="math-container">$|x|>M$</span>.</p>
<p>Now <span class="math-container">$|f(x)|\leq\max_{|x|\leq M}|f(x)|:=|f(c)|$</span> for all <span class="math-container">$|x|\leq M$</span>.</p>
<p>Then for all <span class="math-container">$x$</span>, we have <span class="math-container">$|f(x)|\leq\max\{|f(x_{0})|,|f(c)|\}$</span>. Since <span class="math-container">$|x_{0}|<M$</span>, then the later maximum is attained at <span class="math-container">$|f(c)|$</span>. So <span class="math-container">$|f|$</span> has global maximum at <span class="math-container">$c$</span>, it is not hard to see that it is also a local extremum for <span class="math-container">$f$</span>, and hence <span class="math-container">$\nabla f(c)=0$</span>.</p>
|
3,434,242 | <p>If I need to get some variables values from a vector in Matlab, I could do, for instance, </p>
<pre><code>x = A(1); y = A(2); z = A(3);
</code></pre>
<p>or I think I remember I could do something like</p>
<blockquote>
<p>[x, y, z] = A;</p>
</blockquote>
<p>However Matlab is not recognizing this format. What was the correct syntax? Thanks!!</p>
| horchler | 80,812 | <p>@Thales is correct. If <code>A</code> happens to be a <a href="https://www.mathworks.com/help/matlab/matlab_prog/what-is-a-cell-array.html" rel="nofollow noreferrer">cell array</a> rather than a matrix you could do something like this:</p>
<pre><code>A = {1 2 3};
[x,y,z] = A{:}
</code></pre>
<p>But that is as close as it gets unless <code>A</code> is a function.</p>
|
4,510,697 | <p><a href="https://i.stack.imgur.com/HKvBy.png" rel="nofollow noreferrer">How do I find explicit solutions of x and y in this system?</a></p>
| on1921379 | 805,886 | <p>There's a simpler solution if you consider <span class="math-container">$\frac{x^6 - 1}{x - 1} = (x^2 + x + 1)(x^3 + 1)$</span>. Let <span class="math-container">$d = \gcd(x^2 + x + 1, x^3 + 1)$</span>, then <span class="math-container">$d \mid x^2 + x + 1 \mid x^3 - 1$</span> and <span class="math-container">$d \mid x^3 + 1$</span>. So <span class="math-container">$d \mid 2$</span> but the number <span class="math-container">$x^2 + x + 1 = x(x + 1) + 1$</span> is odd. Therefore <span class="math-container">$d = 1$</span> and the numbers <span class="math-container">$x^2 + x + 1$</span> and <span class="math-container">$x^3 + 1$</span> are perfect squares. But for <span class="math-container">$x > 1$</span> we have <span class="math-container">$x^2 < x^2 + x + 1 < (x + 1)^2$</span>. So there's no such <span class="math-container">$x$</span>.</p>
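A brute-force confirmation of the conclusion (a Python sketch; the upper bound 5000 is an arbitrary cutoff): for x >= 2, the quantity (x^6 - 1)/(x - 1) is never a perfect square.

```python
from math import isqrt

def q(x):
    # (x^6 - 1)/(x - 1) = 1 + x + x^2 + x^3 + x^4 + x^5
    return (x**6 - 1) // (x - 1)

def is_square(s):
    r = isqrt(s)
    return r * r == s
```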
|
958,099 | <p>I have the following question:</p>
<blockquote>
<p>For real $x$, $f(x) = \frac{x^2-k}{x-2}$ can take any real value. Find the range of values $k$ can take.</p>
</blockquote>
<p>Here is how I commenced:</p>
<p>$$
y(x-2) = x^2-k \\
-x^2 + xy - 2y + k = 0\\
$$</p>
<p>So we have $a=-1$, $b=y$, $c=(-2y+k)$. In order to find out the range of $f(x)$, i.e. the nature of its roots, we need the discriminant:</p>
<p>$$
y^2 - 4(-1)(-2y+k) \ge 0 \\
y^2 -8y + 4k \ge 0
$$</p>
<p>So $k$ must equal something to make that equation true - but the only way I understand "true" here is if I can factorise the equation after substituting a possible value of $k$. There are a few values which settle this criteria, like $k=4$, $k=3$ and $k=\frac{7}{4}$. The listed answer says that $k \ge 4$, which makes sense because the latter values produce $y$ values less than 0.</p>
<p>Is there a better way to find out the values of k in this situation? Or do I need to rely on factorising with trial and error?</p>
| najayaz | 169,139 | <p>Your last inequality was: $$y^2-8y+4k\ge0$$
For it to be true for all real values of y, the discriminant of the LHS should be non-positive (the coefficient of $y^2$ is positive, so the parabola then never dips below zero). So we have:$$64-16k\le0$$$$16k\ge64$$$$k\ge4$$</p>
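Equivalently, the minimum of $y^2-8y+4k$ over real $y$ is $4k-16$, attained at the vertex $y=4$, and it is non-negative exactly when $k\ge4$. A small Python sanity check (the sampling grid is an arbitrary choice):

```python
def vertex_min(k):
    # minimum of y^2 - 8y + 4k, attained at y = 4
    return 4 * k - 16

def nonneg_on_grid(k, lo=-100.0, hi=100.0, steps=10_000):
    # brute-force check of y^2 - 8y + 4k >= 0 on a sample grid of y values
    h = (hi - lo) / steps
    return all((lo + i * h) ** 2 - 8 * (lo + i * h) + 4 * k >= 0
               for i in range(steps + 1))
```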
|
646,183 | <p>I am not familiar with the theory of Lie groups, so I am having a hard time finding all the connected closed real Lie subgroups of $\mathrm{SL}(2, \mathbb{C})$ up to conjugation.</p>
<p>One can find the real and complex parabolic, elliptic, and hyperbolic subgroups, $\mathrm{SU}(2)$, $\mathrm{SU}(1,1)$ and $\mathrm{SL}(2,\mathbb{R})$ (the last two are isomorphic though), the subgroup of real upper triangular matrices, the subgroup of upper triangular matrices with unitary diagonal coefficients, and the subgroup of complex upper triangular matrices. </p>
<p>Are there any other ones?</p>
| Moishe Kohan | 84,907 | <p>I will start with the general strategy and then will show how to use it in the case of $SL(2,C)$. Let $G$ be a connected Lie subgroup of another Lie group $H$; in your case, $H$ is a complex Lie group, which helps. You first look at the Levi-Malcev decomposition of the Lie algebra of $G$ and ask if the solvable radical is nontrivial. If it is, then its exponential is a normal solvable Lie subgroup $S$ of $G$ and, hence, it has a fixed point for any action on a finite-dimensional complex projective space. The set of such fixed points will be invariant under $G$. </p>
<p>In your setting, $H$ and, hence, $G$, is acting on $CP^1$, hence, if $S<G$ is nontrivial, then $S$ fixes a point in $CP^1$. Dimension of the fixed-point set has to be zero (otherwise $S$ acts trivially on $CP^1$ which is impossible) and you trivially conclude that it is either one or two points. Since $G$ is connected, both points have to be fixed by $G$. Hence, up to conjugation, $G$ is contained in the group $B$ of upper-triangular matrices, a Borel subgroup of $SL(2,C)$ which is solvable (in particular, $G=S$). Now, you have to classify connected subgroups $G$ of $B$. This is not too hard, since you have to ask yourself how they intersect the commutator subgroup $U$ (consisting of unipotent elements) of $B$. If the intersection is trivial, then $G$ (up to conjugation) embeds in the diagonal subgroup, isomorphic to $C^\times$ and, hence, $G$ is either ${\mathbb C}^\times$ or ${\mathbb R}_+$. If the intersection with $U$ is nontrivial, then you conclude that it is either real 1-dimensional or complex 1-dimensional. Then you see that $G$ is one of the following subgroups sitting naturally in $B$:</p>
<ol>
<li><p>${\mathbb R}$ or ${\mathbb C}$ (contained in $U$).</p></li>
<li><p>${\mathbb R}\rtimes {\mathbb R}_+$.</p></li>
<li><p>${\mathbb C}\rtimes {\mathbb R}_+$.</p></li>
<li><p>${\mathbb C}\rtimes {\mathbb S}^1$.</p></li>
<li><p>The entire group $B$. </p></li>
</ol>
<p>Now, let's consider the more interesting case, when $G$ has semisimple Lie algebra. The thing to do is to look for <em>maximal</em> semisimple subalgebras. In general, there is a classical work by Dynkin from 1950s where he classified maximal semisimple Lie subalgebras (and subgroups) of semisimple Lie groups. </p>
<p>In your case, everything again can be done by hand. First of all, $G$ has to have (real) rank $\le 1$ (since $SL(2, C)$ has rank 1) and, hence, be simple (in the sense that its Lie algebra is simple); moreover, real dimension of $G$ is at most 6. If you think in terms of classification of simple Lie algebras, this leaves you with only three options: $su(2)$, $sl(2,R)$ and $sl(2,C)$. Now, in the $su(2)$ case, the group $G$ has to be compact (since its Lie algebra is "compact"); hence, by Cartan's theorem, it is contained (up to conjugation) in the maximal compact subgroup $SU(2)< SL(2,C)$. If the Lie algebra is $sl(2,C)$ then the corresponding Lie group is the entire $SL(2,C)$. Lastly, in the case of $sl(2,R)$, with a bit more work you see that $G$ is conjugate to the set of real points $SL(2,R)< SL(2,C)$. (First verify this on the level of Lie algebras.) </p>
|
3,095,710 | <p>Factor <span class="math-container">$x^8-x$</span> in <span class="math-container">$\Bbb Z[x]$</span> and in <span class="math-container">$\Bbb Z_2[x]$</span></p>
<p>Here is what I get: <span class="math-container">$x^8-x=x(x^7-1)=x(x-1)(1+x+x^2+\cdots+x^6)$</span>. Now what next? I'd appreciate help in both cases, in <span class="math-container">$\Bbb Z[x]$</span> and in <span class="math-container">$\Bbb Z_2[x]$</span>.</p>
<p><strong>Edit:</strong> I think <span class="math-container">$(1+x+x^2+\cdots+x^6)$</span> is cyclotomic polynomial for <span class="math-container">$p=7$</span> so it is irred over <span class="math-container">$\Bbb Z$</span>. Now the problem remains for <span class="math-container">$\Bbb Z_2[x]$</span></p>
| Lubin | 17,760 | <p>To answer a question of yours in the comments, here’s how I thought in factoring <span class="math-container">$(x^7-1)/(x-1)$</span> over <span class="math-container">$\Bbb F_2$</span>:</p>
<p>You’re still talking about the six primitive seventh roots of unity here. But what is the smallest field containing <span class="math-container">$\Bbb F_2$</span> that also has seventh roots of unity? That is <span class="math-container">$\Bbb F_8$</span>, the <em>cubic</em> extension of <span class="math-container">$\Bbb F_2$</span>. So each of those roots must belong to an irreducible <em>cubic</em> polynomial over <span class="math-container">$\Bbb F_2$</span>. I happened to know that there are only two irreducible <span class="math-container">$\Bbb F_2$</span>-cubics, namely <span class="math-container">$x^3+x^2+1$</span> and <span class="math-container">$x^3+x+1$</span>. Our sextic had to be the product of these two.</p>
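The resulting factorization over F_2 can be double-checked with a few lines of Python (coefficient lists run from lowest degree to highest; this is only a sanity check, not part of the argument):

```python
def polymul_mod2(p, q):
    """Multiply two F_2[x] polynomials given as coefficient lists (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % 2
    return r

x_poly  = [0, 1]        # x
x_plus1 = [1, 1]        # x + 1
cubic1  = [1, 1, 0, 1]  # x^3 + x + 1
cubic2  = [1, 0, 1, 1]  # x^3 + x^2 + 1

product = polymul_mod2(polymul_mod2(x_poly, x_plus1),
                       polymul_mod2(cubic1, cubic2))
```

The product of the two cubics is the sextic $1+x+\cdots+x^6$, and multiplying by $x(x+1)$ gives $x^8+x$, which equals $x^8-x$ over F_2.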
|
275,785 | <p>Let $a_{1}, a_{2}, \ldots, a_{n}$ be real numbers, $n \geq 3$. Prove that at least one of the numbers $(a_{1}+a_{2}+\ldots +a_{n})^{2}-(n^2-n+2)a_{i}a_{j}$, $1 \leq i < j \leq n$, is greater than or equal to $0$.</p>
<p>I don't know even how to start this problem.
Thanks :)</p>
| Bojan Serafimov | 56,860 | <p>Suppose it's not true, i.e. all $\frac{n(n-1)}{2}$ expressions are negative; add them all together. Write</p>
<p>$t = \frac{n(n-1)}{2}$</p>
<p>$A = \sum_i a_i^2$</p>
<p>$B = \sum_{i<j} a_ia_j$</p>
<p>Since $(a_1+\cdots+a_n)^2 = A + 2B$ and $n^2-n+2 = 2t+2$, after adding the $t$ inequalities you will get $t(A+2B) < (2t+2)B$, i.e.</p>
<p>$tA < 2B$</p>
<p>This is a contradiction: by AM-GM, $2a_ia_j \le a_i^2 + a_j^2$, so summing over pairs gives $2B \le (n-1)A \le tA$ (using $t \ge n-1$). Hence $tA < tA$, which is impossible unless $A = 0$; but if $A = 0$ then all $a_i = 0$ and every expression equals $0 \ge 0$.</p>
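A randomized sanity check of the statement (a Python sketch; the sampling ranges and trial count are arbitrary choices):

```python
import random

def holds(a):
    """True if some pair i < j gives (sum a)^2 - (n^2 - n + 2) * a_i * a_j >= 0."""
    n = len(a)
    s2 = sum(a) ** 2
    c = n * n - n + 2
    return any(s2 - c * a[i] * a[j] >= 0
               for i in range(n) for j in range(i + 1, n))

random.seed(1)
trials = [[random.uniform(-10, 10) for _ in range(random.randint(3, 8))]
          for _ in range(1000)]
```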
|
231,583 | <p>Although there already exists active research area, so-called, automated theorem proving, mostly work on logic and elementary geometry. </p>
<p>Rather than only logic and elementary geometry, are there existing research results which by using machine learning techniques(possibly generative learning models) to discover new mathematical conjectures (if not yet proved), in wider branches of mathematics such as differential geometry, harmonic analysis etc...?</p>
<p>If this type of intellectual task is too difficult to study by far, then for a boiled-down version, can a machine learning system process a compact, well-formatted notes (to largely reduce natural language processing part) about for instance real analysis/algebraic topology, and then to ask it to solve some exercises ? Note that the focus and interest here are more about "exploring" new possible conjectures via current(or possible near future) state-of-the-art machine learning techniques, instead of proof techniques and logic-based knowledge representation which actually already are concerned and many works done in classical AI and automated theorem proving. </p>
<p>So, is there some known published research out there, particularly using generative models from machine learning?</p>
<p>If there are not even known papers yet, then fruitful and interesting proposals and discussions are also highly welcomed. </p>
<p>As a probably not so good example I would like to propose, can a machine learning system "re-discover" Cauchy-Schwarz inequality if we "teach" it some basic operations, axioms and certain level of definitions, lemmas etc, with either provided or generated examples(numerical/theoretical) ? e.g. if artificial neural networks are used as training tools, what might be useful features in order eventually output a Cauchy-Schwarz inequality in final layer. </p>
| Noah Stein | 5,963 | <p>One example that is close to, if not exactly of this type, is <a href="http://arxiv.org/abs/1601.07227">Veit Elser's demonstration</a> that machine learning techniques can learn how to do fast matrix multiplication from examples of matrix products. As far as I know this method has so far just reproduced some known results as opposed to providing new ones. I say the example is not quite of the type requested because the system here benefits from the fact that this is a very structured problem in a specific domain; it is not going after a wide range of mathematics.</p>
|
231,583 | <p>Although there already exists an active research area, so-called automated theorem proving, most of its work is on logic and elementary geometry.</p>
<p>Rather than only logic and elementary geometry, are there existing research results which by using machine learning techniques(possibly generative learning models) to discover new mathematical conjectures (if not yet proved), in wider branches of mathematics such as differential geometry, harmonic analysis etc...?</p>
<p>If this type of intellectual task is too difficult to study by far, then for a boiled-down version, can a machine learning system process a compact, well-formatted notes (to largely reduce natural language processing part) about for instance real analysis/algebraic topology, and then to ask it to solve some exercises ? Note that the focus and interest here are more about "exploring" new possible conjectures via current(or possible near future) state-of-the-art machine learning techniques, instead of proof techniques and logic-based knowledge representation which actually already are concerned and many works done in classical AI and automated theorem proving. </p>
<p>So, is there some known published research out there, particularly using generative models from machine learning?</p>
<p>If there are not even known papers yet, then fruitful and interesting proposals and discussions are also highly welcomed. </p>
<p>As a probably not so good example I would like to propose, can a machine learning system "re-discover" Cauchy-Schwarz inequality if we "teach" it some basic operations, axioms and certain level of definitions, lemmas etc, with either provided or generated examples(numerical/theoretical) ? e.g. if artificial neural networks are used as training tools, what might be useful features in order eventually output a Cauchy-Schwarz inequality in final layer. </p>
| Tadashi | 22,389 | <p>A pre-print by Li-An Yang, Jui-Bin Liu, Chao-Hong Chen and Ying-ping Chen was submitted to arXiv last month (24 Feb) giving some preliminary results of an ad-hoc evolutionary algorithm used to prove some simple theorems within the Coq proof assistant:
<a href="http://arxiv.org/abs/1602.07455" rel="nofollow noreferrer"><strong>Automatically Proving Mathematical Theorems with Evolutionary Algorithms and Proof Assistants</strong></a></p>
<p>The paper also discuss some possible future work within this context and the code used was released as open-source at <a href="https://github.com/nclab/ea.prover" rel="nofollow noreferrer">GitHub</a>.</p>
<p><strong>Update (04/26/2019):</strong></p>
<p>Found today in a <a href="https://www.newscientist.com/article/2200707-google-has-created-a-maths-ai-that-has-already-proved-1200-theorems/" rel="nofollow noreferrer">NewScientist article</a> about the <a href="https://github.com/tensorflow/deepmath" rel="nofollow noreferrer">Deepmath project</a>, a Google project seeking to improve automated theorem proving using deep learning and other machine learning techniques. In particular, they made a software named DeepHOL that is a deep reinforcement learning driven automated theorem prover (1).</p>
<p>(1): Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, Stewart Wilcox. <em><a href="https://arxiv.org/abs/1904.03241" rel="nofollow noreferrer">HOList: An Environment for Machine Learning of Higher-Order Theorem Proving (extended version)</a></em></p>
|
4,501,286 | <blockquote>
<p>If a, b, c are positive real numbers such that <span class="math-container">$a^2+ b^2+ c^2 = 1$</span>
<br>Show that:
<span class="math-container">$$\frac{1}{a} +\frac{1}{b} +\frac{1}{c}+a +b +c \geq 4\sqrt{3}.$$</span></p>
</blockquote>
<p><strong>My attempt:</strong></p>
<p>First, I used Holder's inequality: $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{27}{3(a+b+c)}$$</p>
<p>similarly:</p>
<p><span class="math-container">$$ a+b+c \geq \frac{(a+b+c)^3}{3(a^2+b^2+c^2)} = \frac{(a+b+c)^3}{3}$$</span></p>
<p>Which gives us:</p>
<p><span class="math-container">$$LHS \geq \frac{27}{3(a+b+c)}+ \frac{(a+b+c)^3}{3}$$</span></p>
<p>By AM-GM:</p>
<p><span class="math-container">$$LHS \geq 2\sqrt{3} \times (a+b+c)$$</span></p>
<p>So all we need to do is prove that <span class="math-container">$$2\sqrt{3} \times (a+b+c)\geq 4\sqrt{3}$$</span></p>
<p>And this means we need to prove <span class="math-container">$$a+b+c \geq 2 $$</span></p>
<p>Which I can't.</p>
<p>I am not looking for a solution, only hints that would guide me through solving it.</p>
<p>Thanks in advance for any communicated help!</p>
| Dr. Mathva | 588,272 | <p>Alternative approach using the <em>point of incidence technique</em>.</p>
<p>We will first employ the AM-GM inequality, followed by QM-AM:
<span class="math-container">\begin{align*}
a+b+c+\frac1a+\frac1b+\frac1c&=a+b+c+\frac13\left(\frac1a+\frac1b+\frac1c\right) + \frac23\left(\frac1a+\frac1b+\frac1c\right)\\
&\geqslant \frac{6}{\sqrt{3}}\sqrt[6]{abc\cdot\frac1{abc}}+\frac{2}{\sqrt[3]{abc}}\\
&\geqslant 2\sqrt{3}+\frac2{\sqrt{\frac{a^2+b^2+c^2}3}}\\
&= 2\sqrt{3}+\frac2{\sqrt{\frac13}}=4\sqrt{3}
\end{align*}</span></p>
<p>Equality holds iff <span class="math-container">$a=b=c=\frac1{\sqrt{3}}$</span>.</p>
<hr />
<p><em>Observation.</em> This problem is a stronger version of an inequality that <a href="https://artofproblemsolving.com/community/c1068820h2006038p14040781" rel="nofollow noreferrer">appeared in Macedonia's National Olympiad back in <span class="math-container">$1999$</span></a>. Notwithstanding, a similar idea to that of Macedonia yields a solution for this problem.</p>
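<p>A numerical check of the inequality and its equality case (my own sketch in Python; the sampling scheme is arbitrary):</p>

```python
import math
import random

def lhs(a, b, c):
    return a + b + c + 1/a + 1/b + 1/c

# equality case a = b = c = 1/sqrt(3): the value is exactly 4*sqrt(3)
s = 1 / math.sqrt(3)
assert abs(lhs(s, s, s) - 4 * math.sqrt(3)) < 1e-9

random.seed(1)
for _ in range(20000):
    # random positive point on the sphere a^2 + b^2 + c^2 = 1
    v = [max(abs(random.gauss(0, 1)), 1e-9) for _ in range(3)]
    r = math.sqrt(sum(t * t for t in v))
    a, b, c = (t / r for t in v)
    assert lhs(a, b, c) >= 4 * math.sqrt(3) - 1e-9
```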
|
2,357,899 | <p>I want to prove that the cartesian product of a finite amount of countable sets is countable. I can use that a countable union of countable sets is countable. </p>
<p><strong>My attempt:</strong></p>
<p>Let $A_1,A_2, \dots, A_n$ be countable sets. </p>
<p>We prove the statement by induction on $n$</p>
<p>For $n = 1$, the statement clearly holds, as $A_1$ is countable. Now, suppose that $B := A_1 \times A_2 \times \dots A_{n-1}$ is countable.</p>
<p>We have: $$B \times A_n = \{(b,a)|b \in B, a \in A_n\}$$
$$= \bigcup_{a \in A_n} \{(b,a)|b \in B\}$$</p>
<p>and $\{(b,a)|b \in B\}$ is countable for a fixed $a \in A_n$, since the function $f_a: B \to B \times \{a\}: b \mapsto (b,a)$ is a bijection, and $B$ is countable by induction hypothesis. Because a countable union of countable sets remains countable, we have proven that $(A_1 \times \dots \times A_{n-1}) \times A_n$ is countable, and because $f: (A_1 \times \dots \times A_{n-1}) \times A_n \to A_1 \times \dots \times A_{n-1} \times A_n: ((a_1, \dots, a_{n-1}),a_n) \mapsto (a_1, \dots, a_{n-1},a_n)$
is a bijection, the result follows.</p>
<p>Questions:</p>
<blockquote>
<ul>
<li>Is this proof correct/rigorous? </li>
<li>Are there other proofs that are
easier? </li>
<li>Someone pointed out that we can prove this theorem using the
'zigzag'-argument. Can someone provide this proof? I think this
zigzag-method is too graphical, and therefore not rigorous, so if
someone can clarify why this method is completely rigorous, I would be
more than glad to award him the bonus.</li>
</ul>
</blockquote>
| Community | -1 | <p>For countable $A_1,...,A_m$ form their cartesian product $A_1 \times ... \times A_m=\{(a_1,...,a_m) ; a_1 \in A_1,...,a_m \in A_m\}$.</p>
<p>This cartesian product can be written as $\bigcup_{j_1=1}^{w_1}... \bigcup_{j_m=1}^{w_m} \{(a_{j_1 1}...a_{j_m m} )\}$ where $w_k$ is either finite or equal to $\infty$ for $k=1,...,m$.</p>
<p>Since we have a finite number of nested unions, each over an at most countable index set, the cartesian product is also countable.</p>
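<p>The "zigzag" argument mentioned in the question can be made fully rigorous with the Cantor pairing function, which traverses $\Bbb N \times \Bbb N$ along anti-diagonals; a small illustration (my own sketch):</p>

```python
def cantor_pair(i, j):
    # walk N x N along anti-diagonals: (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...
    return (i + j) * (i + j + 1) // 2 + j

# On the triangle {(i, j) : i + j < N} the map hits {0, ..., N(N+1)/2 - 1}
# exactly once each, which is the precise content of the zigzag picture.
N = 50
image = {cantor_pair(i, j) for i in range(N) for j in range(N) if i + j < N}
print(image == set(range(N * (N + 1) // 2)))  # True
```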
|
1,393,154 | <p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2$</span> equals <span class="math-container">$8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p>
<p>Written Differently for Clarity:</p>
<p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p>
<hr />
<blockquote>
<p><strong>EDIT</strong></p>
<p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p>
</blockquote>
| Community | -1 | <p>Your equation is: $(4 n)^{\frac32} = \frac12$. (I think you understood this.)</p>
<p>Now, write $4=2^2$ on the left side. Then the equation looks like</p>
<p>$(2^2)^{\frac32} \times n^{\frac32}=\frac12$</p>
<p>$\Rightarrow n^{\frac32} = \frac1{16}$</p>
<p>$\Rightarrow n=\left(\frac1{16}\right)^{\frac23}=\frac14\times 2^{-\frac23}$</p>
|
1,393,154 | <p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2$</span> equals <span class="math-container">$8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p>
<p>Written Differently for Clarity:</p>
<p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p>
<hr />
<blockquote>
<p><strong>EDIT</strong></p>
<p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p>
</blockquote>
| k170 | 161,538 | <p>$$4n^{\frac32} = 8^{-\frac{1}{3}}= \frac{1}{8^{\frac{1}{3}}}$$
$$4\sqrt{n^3}= \frac{1}{\sqrt[3]{8}}= \frac{1}{2}$$
$$\sqrt{n^3}= \frac{1}{8}$$
$$\left(\sqrt{n^3}\right)^2= \left(\frac{1}{8}\right)^2$$
$$\left|n^3\right|= \frac{1}{8^2}=\frac{1}{64}$$
Note that $n^{\frac32}$ or $\sqrt{n^3}$ implies that $n^3\geq 0$ and we can drop the absolute value bars. So now we have
$$n^3=\frac{1}{64}$$
$$\sqrt[3]{n^3}=\sqrt[3]{\frac{1}{64}}$$
$$n=\frac{1}{\sqrt[3]{64}}=\frac14$$</p>
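<p>Both readings of the original equation can be checked numerically (my own sketch; the first uses this answer's interpretation, the second the parenthesized one):</p>

```python
# interpretation in the question's edit: 4 * n^(3/2) = 8^(-1/3), solved by n = 1/4
n = 1 / 4
assert abs(4 * n ** 1.5 - 8 ** (-1 / 3)) < 1e-12

# interpretation with parentheses: (4n)^(3/2) = 8^(-1/3), solved by n = (1/16)^(2/3)
m = (1 / 16) ** (2 / 3)
assert abs((4 * m) ** 1.5 - 8 ** (-1 / 3)) < 1e-12
```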
|
2,843,560 | <p>If $\sin x +\sin 2x + \sin 3x = \sin y\:$ and $\:\cos x + \cos 2x + \cos 3x =\cos y$, then $x$ is equal to</p>
<p>(a) $y$</p>
<p>(b) $y/2$</p>
<p>(c) $2y$</p>
<p>(d) $y/6$</p>
<p>I expanded the first equation to reach $2\sin x(2+\cos x-2\sin x)= \sin y$, but I doubt it leads me anywhere. A little hint would be appreciated. Thanks!</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: $$\sin(x)+\sin(2x)+\sin(3x)=\sin(2x)(2\cos(x)+1)=\sin(y)$$</p>
<p>$$\cos(x)+\cos(2x)+\cos(3x)=\cos(2x)(2\cos(x)+1)=\cos(y)$$
From here you will get</p>
<p>$$\tan(2x)=\tan(y)$$
Can you finish?</p>
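<p>The two factorizations in the hint are easy to spot-check numerically (my own sketch; the sample points are arbitrary):</p>

```python
import math

# check sin(x) + sin(2x) + sin(3x) = sin(2x)(2cos(x) + 1), and likewise for cosine
for x in (0.3, 1.1, 2.0, -0.7):
    factor = 2 * math.cos(x) + 1
    assert abs(math.sin(x) + math.sin(2*x) + math.sin(3*x)
               - math.sin(2*x) * factor) < 1e-12
    assert abs(math.cos(x) + math.cos(2*x) + math.cos(3*x)
               - math.cos(2*x) * factor) < 1e-12
```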
|
529,708 | <p>A is a prime greater than 5, and B is A*(A-1)+1. If B is prime,</p>
<p>then the digital roots of A and B must be the same. (OEIS A065508)</p>
<p>Sample: 13*(13-1)+1 = 157;
13 and 157 are prime and have the same digital root, 4.</p>
| Empy2 | 81,790 | <p>There are two cases: $A=3k-1$ and $A=3k+1$.<br>
Expand $A*(A-1)+1$ in both these cases.<br>
$B=9k^2-9k+3$ or $B=9k^2+3k+1$</p>
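<p>One way to finish from the hint: the case $A=3k-1$ gives $B=3(3k^2-3k+1)$, which cannot be prime for $A>5$, so $B$ prime forces $A=3k+1$; and in general $B-A=A^2-2A+1=(A-1)^2$, which is divisible by $9$ exactly when $3\mid A-1$, so $A$ and $B$ then share a digital root. A quick computational check (my own sketch):</p>

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def digital_root(n):
    return 1 + (n - 1) % 9   # standard formula for n >= 1

checked = 0
for a in range(7, 5000):
    if is_prime(a):
        b = a * (a - 1) + 1
        if is_prime(b):
            assert digital_root(a) == digital_root(b)
            checked += 1
print(checked > 0)  # True: the claim holds for every such pair in range
```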
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Confutus | 40 | <p>I have come to understand this to be analogous to an order relationship among the truth values. P -> Q should be understood to mean "Q is at least as true as P", or "Q is not less true than P". </p>
<p>So, any statement at all (Q = t or f) is at least as true as a known falsehood (P=f), and a known truth (Q=t) is at least as true as any other statement (P= t or f).</p>
<p>This can be presented in connection with the most common instance of valid reasoning, modus ponens, by distinguishing two parts of sound reasoning: true premises, and valid reasoning.
What it means for P -> Q to be true is that it is logically valid: using it will not introduce a false conclusion. If we know or assume that P is true, and "if P then Q" is valid, then it is safe to conclude Q. We don't often consider the case when P is false. However, if we know or assume that "If P then Q" is valid, but P is false, that combination of facts tells us nothing about Q, and it could equally well be true or false.</p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Brendan W. Sullivan | 80 | <p>I find it helpful to introduce the negation of conditional claims simultaneously. For one, this better helps them to understand the "false implies false" case; but also, this helps them understand how to logically negate conditional claims (which is essential when they go on to learn proof techniques for conditional claims).</p>
<p>The classic "If it is raining, then I definitely have an umbrella with me" is my go-to. I say to the students: "I assert that conditional claim. How could you possibly call me out to be a liar?" They talk amongst themselves and realize that the <em>only</em> way to call me a liar is if they observe me walking around sans umbrella in the rain; all other situations do not yield a falsehood, so they must be true.</p>
<p>(Admittedly, this might be passing the buck to accepting the Law of the Excluded Middle, but I've found students are far more comfortable with "True or False?" than they are with "'false implies false' is true" :-) )</p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| Pete L. Clark | 176 | <p>Various psychological studies have been done which show that most people (including university students, who are the most common subjects of psychological tests!) are very poor at grappling with the last two entries of the truth table for <span class="math-container">$A \implies B$</span> in an abstract context, but they are much better with it in a situation in which the consequences of falsifying the implication are understood.</p>
<p>When I taught (twice) a "transitions" course for undergraduate math majors, I gave them this essay question.</p>
<blockquote>
<p>T3.1) a) You are shown a selection of cards, each of which has a single letter printed on one side and a single number printed on the other side. Then four cards are placed on the table. On the up side of these cards you can see, respectively, D, K, 3 and 7. Here is a rule: "Every card that has a D on one side has a 3 on the other." Your task is to select all those cards, but only those cards, which you would have to turn over in order to discover whether or not the rule has been violated.</p>
<p>b) You have been hired to watch, via closed-circuit camera, the bouncer at a certain 18-and-over club. In order to be allowed to drink once inside the club, a patron must display valid 21-and-over ID to the bouncer, who then gives him/her a special bracelet. In theory the bouncer should check everyone's ID, but (assume for the purposes of this problem, at least!) it is not illegal for someone who is under 18 to enter the club, so you are not concerned about who the bouncer lets in or turns away, but only who gets a bracelet. You watch four people walk into the club, but because the bouncer is so large, sometimes he obscures the camera. Here is what you can see:</p>
<p>The first person gets a bracelet.<br />
The second person does not get a bracelet.<br />
The third person displays ID indicating they are 21.<br />
The fourth person does not display any ID.</p>
<p>You realize that you need to go down to the club to check some IDs. Precisely whose ID's do you need to check to verify that the bouncer is obeying the law?</p>
<p>c) Any comments?</p>
</blockquote>
<p>Part a) is (up to psychological isomorphism) the infamous <a href="http://en.wikipedia.org/wiki/Wason_selection_task" rel="noreferrer">Wason selection task</a>. Part b) is a real-world analogue designed to be closer to the students' experience. It makes perfect sense that we do not need to recheck the IDs of anyone who did not get a bracelet: we're trying to enforce the implication "If you get served drinks, you must have ID." People can understand that no one is going to get in trouble for the people that they didn't serve drinks to.</p>
<p>One can see relations here to one of the other (good) answers. For one, yes, it's good to think in terms of when <span class="math-container">$A \implies B$</span> is false: there's just one possibility and that's what we care about, so in every other case we make it true. But yes, I do introduce implication via the truth table. It can also be helpful to define it as "(not A) or B": that somehow seems less arbitrary, and gives them good practice seeing that the negation is "A and (not B)". But then we have the burden of explaining why we call this "implies"....and I've found that if you emphasize that the one possibility you need to exclude is that A is true and B is false, then it is not in fact so terribly hard for students to swallow. I follow up with the concept of "vacuously true", namely the implication is true because the hypothesis is false. This becomes a key proof technique later in the course: sometimes you need to begin to analyze an implication <span class="math-container">$\forall x \in A$</span>, <span class="math-container">$P(x) \implies Q(x)$</span> by first figuring out for which <span class="math-container">$x \in A$</span> it is the case that <span class="math-container">$P(x)$</span> is true.</p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case by case definition, in particular, is quite disturbing for most students, that have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good point of view, methods and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kind of students.</p>
| mbork | 704 | <p>I am really impressed by the answers given by others here; I will definitely keep them in mind when teaching freshmen next semester. But I also have my 2 cents to add, since I haven't seen anything like that in them. I realize that this is only some kind of vague intuition, and it would probably confuse a lot of students, but it <em>might</em> as well help some. (It <em>did</em> help <em>me</em> at some point, at least.)</p>
<p>So let us assign a "$0$" to false statements and a "$1$" to true ones (this is a common convention at least here in Poland). Now "$p\implies q$" is true iff $p\le q$ (note that in the first formula we treat $p$ and $q$ as <em>propositional variables</em>, and in the second as <em>numeric variables</em>, which is clearly an abuse of notation!). This way one can view a material implication as a way of saying that "one sentence ($p$) <em>implying</em> another one ($q$) means that the latter one must be <em>at least as true</em> (whatever that means!) as the former one". In other words, when going from the antecedent to the consequent, we cannot "lose knowledge", only gain it. (Now this is really stretching things from philosophical point of view, and logicians would probably torture and kill me for that; but then again – this is only a (vague) intuition I'm talking about).</p>
<p>The way I present this to students (if I do!) is more or less this (with a wink): "So just like $p\land q$ is <em>somehow similar</em> to (but different from!) $p\cdot q$ – so that even we call $p$ and $q$ the "factors" of the conjunction [at least this is what we do in Polish; we also sometimes call a conjunction a "logical product", and we do similar things with <em>alternative</em> (i.e., use words "logical sum" and "summands")] – in a similar vein, $p\implies q$ is <em>somehow similar</em> to $p\le q$. But you know, better forget about it, since "truth" is not "one" and "falsehood" is not "zero" anyway, so what I've just told you is more or less a lie anyway."</p>
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| zhw. | 228,045 | <p>Every derivative you ever saw is an example. Suppose $f'(c)$ exists. Let $g(x) = (f(x) - f(c))/(x-c).$ Then $\lim_{x\to c} g(x) = f'(c).$ But $c$ is not in the domain of $g.$ This is the primordial example.</p>
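<p>A concrete illustration of this, with $f(x)=x^2$ and $c=1$ (my own sketch; here $f'(1)=2$):</p>

```python
def f(x):
    return x * x

c = 1.0

def g(x):
    # difference quotient; c itself is not in the domain of g
    return (f(x) - f(c)) / (x - c)

# g(c) would raise ZeroDivisionError, yet the limit as x -> c exists: f'(c) = 2
for h in (1e-2, 1e-4, 1e-6):
    print(g(c + h))   # tends to 2 as h shrinks
```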
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| tparker | 268,333 | <p>While this might be controversial, I'll make the claim that no physical quantity $f(x)$ can ever usefully be thought of as having a removable singularity. By definition, a physical quantity must be physically measurable, and every measurement has an associated error. The probability (not probability density, but absolute probability) of measuring the quantity exactly at the location of the removable singularity is always zero, so we might as well always redefine the function at the singularity to be continuous. (More generally, two functions representing physical quantities are always physically equivalent if they agree except on a set of Lebesgue measure zero.) The proper mathematical formalism for naturally discrete quantities embedded into continuous space is not removable singularities but the Dirac delta function.</p>
|
3,133,328 | <p>I am currently working on a question about proving the sum of eigenvalues and I have been searching for the solution <a href="https://www.youtube.com/watch?v=OLl_reBXY-g" rel="nofollow noreferrer">from YouTube</a>.</p>
<p>However, I do not understand why the teacher uses the diagonal method to show that the sum of eigenvalues equals the trace of matrix <span class="math-container">$A$</span>. Doesn't the diagonal method only apply to <span class="math-container">$3 \times 3$</span> or smaller matrices? </p>
<p>Thank you so much.</p>
| R. W. Prado | 284,531 | <p>There is a known easy proof of this. Indeed, let A be an <span class="math-container">$n \times n$</span> matrix. By the <a href="https://www.google.com/search?q=Schur%20decomposition&oq=Schur%20decomposition&aqs=chrome..69i57j69i59l2j69i60&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer">Schur decomposition</a>, there exists a unitary matrix <span class="math-container">$Q$</span> such that <span class="math-container">$ A = Q T Q^{H}$</span>, with the eigenvalues of <span class="math-container">$A$</span>, say <span class="math-container">$\lambda_1,\lambda_2,\cdots, \lambda_n$</span>, on the diagonal of <span class="math-container">$T$</span>. Since factors in the trace can be swapped without changing its value, i.e., <a href="https://math.stackexchange.com/questions/1099745/how-to-prove-operatornametrab-operatornametrba"><span class="math-container">$\text{tr}(X Y) = \text{tr}(Y X) $</span>, for all <span class="math-container">$X, Y$</span></a>, we get <span class="math-container">$\text{tr}(A)= \text{tr} ( Q T Q^{H}) = \text{tr} ( Q^{H} Q T ) = \text{tr} ( T ) = \sum^{n}_{i=1} \lambda_i$</span>, and the result follows.</p>
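<p>A quick numerical illustration of the fact for a generic $5\times 5$ matrix (my own sketch, assuming NumPy is available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))   # a generic real matrix; eigenvalues may be complex

eigenvalues = np.linalg.eigvals(A)
# complex eigenvalues come in conjugate pairs, so their sum is (numerically) real
assert abs(eigenvalues.sum().imag) < 1e-10
assert np.isclose(eigenvalues.sum().real, np.trace(A))
```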
|
570,740 | <p>Hi there I'm having some trouble with the following problem:</p>
<p>I have a $3\times3$ symmetric matrix
$$
A=\pmatrix{1+t&1&1\\ 1&1+t&1\\ 1&1&1+t}.
$$
I am trying to determine the values of $t$ for which the vector $b = (1,t,t^2)^\top$ (this is a column vector) is in the column space of $A$.</p>
<p>I think I'm fairly aware of how to go about it, forming the augmented matrix $[A|b]$ and basically using row ops to find a solution with which I could solve for the value(s) of $t$. But I've been trying this and have no luck. May I be missing something?</p>
<p>Thank you</p>
| user1337 | 62,839 | <p>In case you are familiar with determinants, you can see that the matrix is invertible, unless $t \in \{0,-3 \}$. If $A$ is invertible its column space is all of $\mathbb R^3$, and the two remaining cases $t=0,-3$ are easy to check separately.</p>
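<p>The determinant computation mentioned here can be delegated to a computer algebra system (my own sketch, assuming sympy is available):</p>

```python
from sympy import symbols, Matrix, factor, expand

t = symbols('t')
A = Matrix([[1 + t, 1, 1],
            [1, 1 + t, 1],
            [1, 1, 1 + t]])

d = factor(A.det())
print(d)  # t**2*(t + 3) -- zero exactly when t = 0 or t = -3
```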
|
570,740 | <p>Hi there I'm having some trouble with the following problem:</p>
<p>I have a $3\times3$ symmetric matrix
$$
A=\pmatrix{1+t&1&1\\ 1&1+t&1\\ 1&1&1+t}.
$$
I am trying to determine the values of $t$ for which the vector $b = (1,t,t^2)^\top$ (this is a column vector) is in the column space of $A$.</p>
<p>I think I'm fairly aware of how to go about it, forming the augmented matrix $[A|b]$ and basically using row ops to find a solution with which I could solve for the value(s) of $t$. But I've been trying this and have no luck. May I be missing something?</p>
<p>Thank you</p>
| LinAlgMan | 49,785 | <p>Here is the matrix
$$ A = \left[ \begin{matrix} 1+t & 1 & 1 \\ 1 & 1+t & 1 \\ 1 & 1 & 1+t \end{matrix} \right] $$
and denote
$$ b = \left[ \begin{matrix} 1 \\ t \\ t^2 \end{matrix} \right] $$
Then $b$ is in the column space of $A$ if and only if there is
$$ v = \left[ \begin{matrix} x \\ y \\ z \end{matrix} \right] $$
such that $Av = b$.</p>
<p>You can bring the matrix to row echelon form via Gauss-Jordan elimination. It is a tedious process in this case. Using Wolfram Alpha one obtains
$$ x = \frac{2-t^2}{t(t+3)}, \quad y = \frac{2t-1}{t(t+3)}, \quad z = \frac{t^3 + 2 t^2 - t -1}{t(t+3)} \ . $$
So for any $t \ne 0$ and $t \ne -3$ you have a solution, that is: $(1,t,t^2)^T$ is in the column space of $A$. It remains to check the cases $t=0$ and $t=-3$ to see if you have either $$ 0 \cdot x + 0 \cdot y + 0 \cdot z = 0 $$ (in that case, there may be an infinite number of solutions) or $$ 0 \cdot x + 0 \cdot y + 0 \cdot z = 1 $$ in which there is no solution.</p>
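<p>The symbolic solution can be double-checked with sympy (my own sketch; the spot check at $t=1$ uses the easily verified solution $x=y=z=1/4$):</p>

```python
from sympy import symbols, Matrix, simplify, Rational

t = symbols('t')
A = Matrix([[1 + t, 1, 1],
            [1, 1 + t, 1],
            [1, 1, 1 + t]])
b = Matrix([1, t, t**2])

v = A.LUsolve(b)   # symbolic solution, valid whenever A is invertible

# residual A v - b should simplify to the zero vector
residual = (A * v - b).applyfunc(simplify)
assert residual == Matrix([0, 0, 0])

# spot check at t = 1: then A v = b is solved by x = y = z = 1/4
v1 = v.subs(t, 1).applyfunc(simplify)
assert v1 == Matrix([Rational(1, 4), Rational(1, 4), Rational(1, 4)])
```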
|
570,740 | <p>Hi there I'm having some trouble with the following problem:</p>
<p>I have a $3\times3$ symmetric matrix
$$
A=\pmatrix{1+t&1&1\\ 1&1+t&1\\ 1&1&1+t}.
$$
I am trying to determine the values of $t$ for which the vector $b = (1,t,t^2)^\top$ (this is a column vector) is in the column space of $A$.</p>
<p>I think I'm fairly aware of how to go about it, forming the augmented matrix $[A|b]$ and basically using row ops to find a solution with which I could solve for the value(s) of $t$. But I've been trying this and have no luck. May I be missing something?</p>
<p>Thank you</p>
| egreg | 62,967 | <p>Gaussian elimination is not difficult in this case:
\begin{align}
\left[\begin{array}{ccc|c}
1+t & 1 & 1 & 1 \\
1 & 1+t & 1 & t \\
1 & 1 & 1+t & t^2
\end{array}\right]
&\to
\left[\begin{array}{ccc|c}
1 & 1 & 1+t & t^2 \\
1 & 1+t & 1 & t \\
1+t & 1 & 1 & 1
\end{array}\right]
\\
&\to
\left[\begin{array}{ccc|c}
1 & 1 & 1+t & t^2 \\
0 & t & -t & t-t^2 \\
0 & -t & -2t-t^2 & 1-t^2(1+t)
\end{array}\right]
\end{align}
If $t\ne0$ we can go on:
\begin{align}
\left[\begin{array}{ccc|c}
1 & 1 & 1+t & t^2 \\
0 & t & -t & t-t^2 \\
0 & -t & -2t-t^2 & 1-t^2-t^3
\end{array}\right]
&\to
\left[\begin{array}{ccc|c}
1 & 1 & 1+t & t^2 \\
0 & 1 & -1 & 1-t \\
0 & 0 & -3t-t^2 & 1+t-2t^2-t^3
\end{array}\right]
\end{align}
If $t=-3$, the last row becomes
$$
\begin{array}{ccc|c}
0 & 0 & 0 & 7
\end{array}
$$
so the system has no solution.</p>
<p>If $t\ne-3$, the system has a solution.</p>
<p>If $t=0$, we get, from the place we stopped at,
$$
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1
\end{array}\right]
$$
and the system has no solution.</p>
|
623,703 | <blockquote>
<p>Find the exact value of $\tan\left ( \sin^{-1} \left ( \dfrac{\sqrt{2}}{2} \right )\right )$ without using a calculator. </p>
</blockquote>
<p>I started by finding $\sin^{-1} \left ( \dfrac{\sqrt{2}}{2} \right )=\dfrac{\pi}{4}$</p>
<p>So, $\tan\left ( \sin^{-1} \left ( \dfrac{\sqrt{2}}{2} \right )\right )=\tan\left( \dfrac{\pi}{4}\right)$. </p>
<p>The answer is $1$. Can you show how to solve $\tan\left( \dfrac{\pi}{4}\right)$ to get $1$? Thank you. </p>
| Michael Albanese | 39,599 | <p><strong>Hint:</strong> $$\tan\left(\frac{\pi}{4}\right) = \frac{\sin\left(\frac{\pi}{4}\right)}{\cos\left(\frac{\pi}{4}\right)}$$</p>
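<p>Completing the hint with the standard values $\sin\left(\frac{\pi}{4}\right)=\cos\left(\frac{\pi}{4}\right)=\frac{\sqrt{2}}{2}$ (my addition, not part of the original answer):</p>

```latex
\tan\left(\frac{\pi}{4}\right)
  = \frac{\sin\left(\frac{\pi}{4}\right)}{\cos\left(\frac{\pi}{4}\right)}
  = \frac{\sqrt{2}/2}{\sqrt{2}/2}
  = 1
```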
|
10,666 | <p>My question is about <a href="http://en.wikipedia.org/wiki/Non-standard_analysis">nonstandard analysis</a>, and the diverse possibilities for the choice of the nonstandard model R*. Although one hears talk of <em>the</em> nonstandard reals R*, there are of course many non-isomorphic possibilities for R*. My question is, what kind of structure theorems are there for the isomorphism types of these models? </p>
<p><b>Background.</b> In nonstandard analysis, one considers the real numbers R, together with whatever structure on the reals is deemed relevant, and constructs a nonstandard version R*, which will have infinitesimal and infinite elements useful for many purposes. In addition, there will be a nonstandard version of whatever structure was placed on the original model. The amazing thing is that there is a <em>Transfer Principle</em>, which states that any first order property about the original structure true in the reals, is also true of the nonstandard reals R* with its structure. In ordinary model-theoretic language, the Transfer Principle is just the assertion that the structure (R,...) is an elementary substructure of the nonstandard reals (R*,...). Let us be generous here, and consider as the standard reals the structure with the reals as the underlying set, and having all possible functions and predicates on R, of every finite arity. (I guess it is also common to consider higher type analogues, where one iterates the power set ω many times, or even ORD many times, but let us leave that alone for now.) </p>
<p>The collection I am interested in is the collection of all possible nontrivial elementary extensions of this structure. Any such extension R* will have the useful infinitesimal and infinite elements that motivate nonstandard analysis. It is an exercise in elementary mathematical logic to find such models R* as ultrapowers or as a consequence of the Compactness theorem in model theory. </p>
<p>Since there will be extensions of any desired cardinality above the continuum, there are many non-isomorphic versions of R*. Even when we consider R* of size continuum, the models arising via ultrapowers will presumably exhibit some saturation properties, whereas it seems we could also construct non-saturated examples. </p>
<p>So my question is: what kind of structure theorems are there for the class of all nonstandard models R*? How many isomorphism types are there for models of size continuum? How much or little of the isomorphism type of a structure is determined by the isomorphism type of the ordered field structure of R*, or even by the order structure of R*? </p>
| John Goodrick | 93 | <p>I think that the nonstandard models of R* will be fairly wild by most reasonable metrics, since the theory is <a href="http://en.wikipedia.org/wiki/Stable_theory" rel="nofollow">unstable</a> (the universe is linearly ordered). For instance, I don't think that arbitrary models will be determined up to isomorphism by well-founded trees of countable submodels (as they are in ``classifiable'' theories).</p>
<p>EDIT: I'm not sure how many nonisomorphic models there are of cardinality c (the size of the continuum), but there are 2^{2^c} distinct nonisomorphic nonstandard models of the theory of R* of size 2^c. A crude counting argument shows that this is the maximum number of nonisomorphic models of size 2^c that <em>any</em> theory with a language of cardinality 2^c could possibly have, which can be considered as evidence that the class of models of the theory of R* is ``wild.''</p>
<p>(This result follows from the proof of Theorem VIII.3.2 of Shelah's <em>Classification Theory,</em> one of his ``many-models'' arguments about unclassifiable theories. In fact, an argument from the second chapter of my thesis applied to this theory shows that you can even build a collection of 2^{2^c} models of size 2^c which are pairwise bi-embeddable but pairwise nonisomorphic.)</p>
<p>It's a good question whether or not you can have two models of this theory which are order-isomorphic but nonisomorphic -- there must be somebody studying o-minimal structures with an answer to this.</p>
|