| qid | question | author | author_id | answer |
|---|---|---|---|---|
606,431 | <p>Can someone explain to me how to solve this using inverse trig and trig sub?</p>
<p>$$\int\frac{x^3}{\sqrt{1+x^2}}\, dx$$</p>
<p>Thank you. </p>
| Farshad Nahangi | 50,728 | <p>You can use hyperbolic substitution, i.e. let $x=\sinh t$ then
\begin{align*}
\int\frac{x^3}{\sqrt{1+x^2}}\, dx&=\int \sinh^3 t\, dt\\
&=\int (\cosh^2t -1)\sinh t\, dt\\
&=\frac{\cosh^3 t}{3}-\cosh t+C\\
&=\frac{(\sqrt{1+x^2})^3}{3}-\sqrt{1+x^2}+C
\end{align*}
You can also use the trigonometric substitution $x=\tan \theta$:
\begin{align*}
\int\frac{x^3}{\sqrt{1+x^2}}\, dx&=\int \tan^3\theta\sec\theta\, d\theta\\
&=\int (\sec^2\theta-1)\sec\theta\tan\theta\, d\theta\\
&=\frac{\sec^3\theta}{3}-\sec\theta+C\\
&=\frac{(\sqrt{1+x^2})^3}{3}-\sqrt{1+x^2}+C
\end{align*}</p>
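<p>Both substitutions land on the same antiderivative. As a quick numerical sanity check (my addition, not part of the original answer), differentiating $F(x)=\frac{(1+x^2)^{3/2}}{3}-\sqrt{1+x^2}$ should recover the integrand:</p>

```python
import math

def F(x):
    # Antiderivative found above: (1+x^2)^{3/2}/3 - sqrt(1+x^2)
    return (1 + x*x)**1.5 / 3 - math.sqrt(1 + x*x)

def integrand(x):
    return x**3 / math.sqrt(1 + x*x)

# Central-difference derivative of F should match the integrand.
h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2*h) - integrand(x))
              for x in [-2.0, -0.5, 0.3, 1.0, 4.0])
```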
|
2,839,945 | <blockquote>
<p>Let $p$ be a prime in $\mathbb{Z}$ of the form $4n + 1, n \in \mathbb{N}$. Show that $\left(\frac{-1}{p}\right) = 1$ (here $\left(\frac{\#}{p}\right)$ is the Legendre symbol). Hence prove that $p$ is not a prime in the ring $\mathbb{Z}[i]$.</p>
</blockquote>
<p>Here is my solution:</p>
<p>Since $p > 2$, we have $\left(\frac{-1}{p}\right) = 1$ if and only if $(-1)^{\frac{p - 1}{2}} \equiv_p 1$, if and only if $(-1)^{2n} \equiv_p 1$, which is true.</p>
<p>Now suppose $p$ is prime in $\mathbb{Z}[i]$, which means that there exists $x \in \mathbb{Z}$ such that $-1 \equiv_p x^2$, from which $p \mid (x^2 + 1) = (x - i)(x + i)$ and, since $p$ is prime, $p \mid (x - i)$ or $p \mid (x + i)$. In either case we have $m + ni \in \mathbb{Z}[i]$ such that $p(m + ni) = x \pm i$, which implies $pn = x$, that is $p \mid x$, and $x^2 + 1 \equiv_p 1$, which is not congruent to $0$, contradiction.</p>
<p>Is it correct? thanks in advance</p>
<p>Edit: (I've tried to write it better using Robert Soupe advice)</p>
<p>Since $p>2$ we have $(-1)^{(p-1)/2}\equiv_p (-1)^{2n} \equiv_p 1$, that is $\left(\frac{-1}{p} \right)= 1$.</p>
<p>Now suppose $p$ is prime in $\mathbb{Z}[i]$, this means that there exists $x \in \mathbb{Z}$ such that $x^2 \equiv_p -1$, hence $p \mid (x^2 + 1) = (x - i)(x + i)$ and, since $p$ is prime, $p \mid (x + i)$. Therefore there exists $m + ni \in \mathbb{Z}[i]$ such that $p(m + ni) = x + i$, but this is absurd because $p$ does not divide $1$. We can conclude that $p$ is not prime in $\mathbb{Z}[i]$.</p>
| Robert Soupe | 149,436 | <p>Yes, it's correct, though I'd like to unpack it a little, and work it out with a specific prime.</p>
<blockquote>
<p>Since $p > 2$, we have $$\left(\frac{-1}{p}\right) = 1$$ if and only if $$(-1)^{\frac{p - 1}{2}} \equiv 1 \pmod p$$ if and only if $(-1)^{2n} \equiv 1 \pmod p$, which is true.</p>
</blockquote>
<p>So far so good. The wording is a bit clunky, but I should be able to just get past that.</p>
<blockquote>
<p>Now suppose $p$ is prime in $\mathbb Z[i]$, which means that there exists $x \in \mathbb{Z}$ such that $-1 \equiv x^2 \pmod p$, </p>
</blockquote>
<p>I'm sorry, I'm not liking your style of writing congruences at all. I left it alone when I edited your question, but now that I'm quoting you in my answer I really have to change it to something I like better.</p>
<p>I'm also having a bit of a problem with $-1 \equiv x^2 \pmod p$, I'd much rather write $x^2 \equiv -1 \pmod p$ even though it means the very same thing.</p>
<p>Maybe it's because it really obscures the meaning of $x^2 \equiv -1 \pmod p$. And that meaning is that the equation $x^2 = kp - 1$ has solutions in integers.</p>
<blockquote>
<p>from which $p \mid (x^2 + 1) = (x - i)(x + i)$ and, since $p$ is prime, $p \mid (x - i)$ or $p \mid (x + i)$. In either case we have $m + ni \in \mathbb Z[i]$ such that $p(m + ni) = x \pm i$, which implies $pn = x$, that is $p \mid x$, and $x^2 + 1 \equiv 1 \pmod p$, which is not congruent to $0$, contradiction.</p>
</blockquote>
<p>This is perfectly lucid and correct, but I can't shake the feeling that it's possible to be more elegant without getting too advanced (i.e., cyclic groups). Maybe Max will elaborate his comment into an answer.</p>
<p>Okay, now to work out an example with a specific prime. I choose 41. We have $$\left(\frac{-1}{41}\right) = (-1)^{20} = 1,$$ just as we expected.</p>
<p>This tells us that the congruence $x^2 \equiv 40 \pmod{41}$ has solutions, and likewise the equation $x^2 = 41k - 1$. This should immediately lead us to find $9^2 = 2 \times 41 - 1$. Then $(9 - i)(9 + i) = 82$.</p>
<p>At this point we're saying that 41 is prime in $\mathbb Z[i]$, which means that either 41 divides $9 - i$ or it divides $9 + i$. However, before carrying out either division, notice that $9 + i$ has even norm. It must be divisible by either $1 - i$ or $1 + i$, maybe both.</p>
<p>So then we compute $$\frac{9 + i}{1 - i} = \frac{(9 + i)(1 + i)}{2} = \frac{8 + 10i}{2} = 4 + 5i$$ and we verify that $N(4 + 5i) = 41$.</p>
<p>Since 41's norm is 1681 but $N(9 \pm i)$ is just 82, we should have realized earlier that $41 \nmid (9 \pm i)$. I admit I don't know what is the most elegant way to get from $9 \pm i$ to $4 \pm 5i$.</p>
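<p>The arithmetic of this worked example is easy to check mechanically. A small sketch of my own, using Python's complex numbers with integer parts (the names are mine):</p>

```python
def norm(z):
    # Field norm on Z[i]: N(a+bi) = a^2 + b^2
    return int(round(z.real))**2 + int(round(z.imag))**2

# 9^2 = 2*41 - 1, so (9 - i)(9 + i) = 82
product = (9 - 1j) * (9 + 1j)

# Divide 9+i by 1-i by multiplying with the conjugate 1+i and halving
q = (9 + 1j) * (1 + 1j) / 2
```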
|
3,075,979 | <p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p>
<p>I tried using mathematical induction, but even with the binomial formula it becomes a little complicated.</p>
<p>Please show me your proof.</p>
<p>Sorry if this question has already been asked; I could not find it. In that case, just sharing the link will be enough.</p>
| Angina Seng | 436,618 | <p>Call the expression <span class="math-container">$f(k)$</span>. As it's a degree <span class="math-container">$7$</span> polynomial, it obeys the recurrence
<span class="math-container">$$\sum_{j=0}^8(-1)^j\binom8jf(k-j)=0.$$</span>
Thus
<span class="math-container">$$f(k)=8f(k-1)-28f(k-2)+56f(k-3)-70f(k-4)+56f(k-5)-28f(k-6)+8f(k-7)-f(k-8)$$</span>
so that if <span class="math-container">$f$</span> takes eight consecutive integer values, by induction, all subsequent
values are integers too.</p>
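<p>Both the eighth-difference recurrence and the integrality claim can be confirmed with exact rational arithmetic; a quick sketch of my own:</p>

```python
from fractions import Fraction
from math import comb

def f(k):
    k = Fraction(k)
    return k**7/7 + k**5/5 + 2*k**3/3 - k/105

# A degree-7 polynomial is annihilated by the 8th finite difference.
recurrence_ok = all(
    sum((-1)**j * comb(8, j) * f(k - j) for j in range(9)) == 0
    for k in range(8, 20)
)

# Once eight consecutive values are integers, induction carries it forward.
integral_ok = all(f(k).denominator == 1 for k in range(0, 50))
```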
|
4,301,632 | <blockquote>
<p><span class="math-container">$$X^2 = \begin{bmatrix}1&a\\0&1\\\end{bmatrix}$$</span> where <span class="math-container">$a \in \Bbb R \setminus \{0\}$</span>. Solve for matrix <span class="math-container">$X$</span>.</p>
</blockquote>
<hr />
<p>I was practicing for matrix equations and this is the first one where it has squared matrix and another number, in this case, <span class="math-container">$a$</span>. I would be grateful if you could help. If you can please suggest a book that has matrix equations to practice. Thank you!</p>
| Erdel von Mises | 981,313 | <p>HINT: Let <span class="math-container">$X = I + N$</span> where <span class="math-container">$I$</span> is the identity matrix and <span class="math-container">$N$</span> is the nilpotent matrix whose diagonal entries and lower-left corner are all zero. Squaring <span class="math-container">$X$</span> and using <span class="math-container">$N^2 = 0$</span>, <span class="math-container">$$X^2 = I + 2N + N^2 = I + 2N,$$</span> and the answer is obtained immediately from that.</p>
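<p>Following the hint: since $N^2 = 0$, matching $X^2 = I + 2N$ with $\begin{bmatrix}1&a\\0&1\end{bmatrix}$ forces $N = \begin{bmatrix}0&a/2\\0&0\end{bmatrix}$, i.e. $X = \begin{bmatrix}1&a/2\\0&1\end{bmatrix}$ (and $-X$ works too). A quick check of my own, for a sample value of $a$:</p>

```python
def matmul2(A, B):
    # Product of two 2x2 matrices stored as nested lists
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = 6.0
X = [[1.0, a/2], [0.0, 1.0]]  # candidate square root of [[1, a], [0, 1]]
X2 = matmul2(X, X)
```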
|
1,050,382 | <p>In $\mathbb{R}^5$ we are given a vector space $V$ of dimension $3$. In $\mathbb{R}^{6,5}$ consider the subset $X = \{A \in \mathbb{R}^{6,5} : V \subset \ker A\}$. I have to show that $X$ is a vector subspace of $\mathbb{R}^{6,5}$ and find its dimension. To show that $X$ is a vector space, consider $x_1, x_2 \in X$ and $v \in V$. We know that $x_1 v = 0$ and $x_2 v = 0$, so $(\alpha x_1 + \beta x_2) v = \alpha (x_1 v) + \beta (x_2 v) = 0$, so $V \subset \ker (\alpha x_1 + \beta x_2)$. But I don't know how to find $X$'s dimension. Any ideas?</p>
| Ivo Terek | 118,056 | <p>Let's take a quick look. I'll try a sketch/give a hint. It seems that so far, so good. Indeed, take $A,B \in X$, $\lambda \in \Bbb R$. To show that $A+\lambda B \in X$, we have to show that $V \subset \ker(A+\lambda B)$, assuming that $V \subset \ker A \ \cap \ker B $. But that's true, since given ${\bf v} \in V$ we have $$(A+\lambda B){\bf v} = A{\bf v}+\lambda B{\bf v} = 0+0 = 0.$$</p>
<p>Now, we have $A \in \Bbb R^{6,5}$, that is, $A: \Bbb R^5 \to \Bbb R^6$, and we know that: $$5 = \dim \ker A + \dim {\rm Im} \ A.$$
Since $\dim V = 3$ and $V \subset \ker A$, we have that $3 = \dim V \leq \dim \ker A.$ This way we can have three types of $A$: $$\begin{cases} \dim \ker A = 3, & \dim {\rm Im} \ A = 2 \\ \dim \ker A = 4, & \dim {\rm Im} \ A = 1 \\ \dim \ker A = 5, & \dim {\rm Im} \ A = 0\end{cases}.$$</p>
<p>Can you go on?</p>
|
1,050,382 | <p>In $\mathbb{R}^5$ we are given a vector space $V$ of dimension $3$. In $\mathbb{R}^{6,5}$ consider the subset $X = \{A \in \mathbb{R}^{6,5} : V \subset \ker A\}$. I have to show that $X$ is a vector subspace of $\mathbb{R}^{6,5}$ and find its dimension. To show that $X$ is a vector space, consider $x_1, x_2 \in X$ and $v \in V$. We know that $x_1 v = 0$ and $x_2 v = 0$, so $(\alpha x_1 + \beta x_2) v = \alpha (x_1 v) + \beta (x_2 v) = 0$, so $V \subset \ker (\alpha x_1 + \beta x_2)$. But I don't know how to find $X$'s dimension. Any ideas?</p>
| 2'5 9'2 | 11,123 | <p>For the dimension of $X$, if $A\in X$, $A$ has to nullify $V$ and can do anything on the $2$-dimensional orthogonal complement of $V$. So the dimension of $X$ is the same as the dimension of the space of linear transformations from a $2$-dimensional space to a $6$-dimensional space, which is $12$.</p>
<p>EDIT: A commenter is unfamiliar with the direct bijection between $6\times 5$ matrices and linear transformations from $\mathbb{R}^5$ to $\mathbb{R}^6$; we can reinvent like four or five wheels and say the same thing but longer:</p>
<p>Let $V$ have basis $\{v_1,v_2,v_3\}$. Use Gram Schmidt in $\mathbb{R}^5$ so that $\{v_4,v_5\}$ is a basis for $V^{\perp}$. Let $\{e_1,e_2,e_3,e_4,e_5,e_6\}$ be the standard basis for $\mathbb{R}^6$. </p>
<p>Consider the matrices $A_{ij}$ defined as the matrix that takes $v_j$ to $e_i$ and all other $v_k$ to the zero vector in $\mathbb{R}^6$. <em>Explicitly</em>, $A_{ij}=C_{ij}P$, where $C_{ij}$ is a $6\times5$ matrix with all zeros except a $1$ in the $i,j$ position, and $P$ is the change of basis matrix from $\{v_1,v_2,v_3,v_4,v_5\}$ to the standard basis for $\mathbb{R}^5$. Explicitly, $P=\begin{bmatrix}|&|&|&|&|\\v_1&v_2&v_3&v_4&v_5\\|&|&|&|&|\end{bmatrix}^{-1}$. Check: $A_{ij}v_j=C_{ij}Pv_j=C_{ij}\epsilon_j=e_i$ where $\epsilon_j$ are the standard basis vectors in $\mathbb{R}^5$.</p>
<p>There are $30$ of these $A_{ij}$, which are matrices in $M_{6\times 5}$. They are linearly independent because if $\sum_{i,j} c_{ij}A_{ij}=[0]$, then multiplying against each of the $v_j$ one at a time gives $\sum_i c_{ij}e_i=[0]$, and since the $e_i$ are independent, the $c_{ij}$ must equal $0$.</p>
<p>But 18 of them are not in $X$, because they have one of $Av_1,Av_2,Av_3$ nonzero. Only the twelve matrices $A_{i4}$ and $A_{i5}$, which all have $Av_1=Av_2=Av_3=0$ are in $X$.</p>
<p>$M_{6\times 5}$ is $30$-dimensional. We have established $12$ linearly independent vectors that are in the space $X$. So $\dim X\geq12$. We have established $18$ more linearly independent vectors that are not in $X$. So $\dim X\leq30-18=12$. So $\dim X=12$.</p>
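<p>The dimension count can also be verified numerically: the condition "$A v = 0$ for each basis vector $v$ of $V$" is a linear system on the $30$ entries of $A$, and its null space should be $12$-dimensional. A sketch of my own (random $V$, NumPy rank computation):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of V span a random 3-dimensional subspace of R^5.
V = rng.standard_normal((5, 3))

# For a 6x5 matrix A, "A v = 0 for each column v of V" reads, in vectorized
# form, kron(V^T, I_6) vec(A) = 0 -- an 18x30 constraint matrix.
M = np.kron(V.T, np.eye(6))
nullity = 30 - np.linalg.matrix_rank(M)
```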
|
1,425,935 | <p>How would I solve this trigonometric equation?</p>
<p>$$3\cos x \cot x + \sin 2x = 0$$</p>
<p>I got to this stage: $$3 \cos x = -2 \sin^2x$$</p>
<p>Is this a dead end, or is there an easier way to solve this equation?</p>
| mathlove | 78,967 | <p>You might have missed $\cos x=0$.</p>
<p>$$3\cos x\cot x+\sin 2x=0$$
$$3\cos x\cdot\frac{\cos x}{\sin x}+2\sin x\cos x=0$$
$$3\cos^2x+2\sin^2x\cos x=0$$
$$\cos x(3\cos x+2\sin^2x)=0$$
$$\cos x(3\cos x+2-2\cos^2x)=0$$</p>
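<p>Setting each factor to zero gives $\cos x = 0$, or $2\cos^2 x - 3\cos x - 2 = 0$, whose admissible root is $\cos x = -\tfrac12$ (the root $\cos x = 2$ is impossible); note $\sin x \neq 0$ in both families, as $\cot x$ requires. A numerical spot check of representative solutions (my addition):</p>

```python
import math

def lhs(x):
    # 3 cos(x) cot(x) + sin(2x)
    return 3*math.cos(x)*(math.cos(x)/math.sin(x)) + math.sin(2*x)

# cos x = 0 family and cos x = -1/2 family
roots = [math.pi/2, 3*math.pi/2, 2*math.pi/3, 4*math.pi/3]
residual = max(abs(lhs(x)) for x in roots)
```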
|
1,368,899 | <p>I am fairly new to statistics and just recently encountered queueing theory.</p>
<p>I have programmed a simulation for a $M/M/1$ queue in which I specify the inter-arrival times and service times. I input say, an exponential distribution with a mean of $1$ for both the inter-arrival and service time.</p>
<p>I also measure the effective arrival rate: I run for, say, $1000$ time steps, and at each time step a random value is drawn from the exponential distribution. I collect these random values in a list and then compute the effective arrival rate as the mean of the list. In theory, this value should converge to the mean, yet in practice I end up with values not so close to the mean.</p>
<p>My question is, how many random values from an exponential distribution should I draw such that the mean of these converges to the mean of the distribution?</p>
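<p>For the record, the standard answer comes from the central limit theorem: for an exponential distribution with mean $1$ (hence variance $1$), the sample mean of $n$ draws has standard deviation $1/\sqrt{n}$, so roughly $n \approx (z/\varepsilon)^2$ draws are needed for accuracy $\varepsilon$ at confidence level $z$. A quick empirical sketch of my own (names and seed are mine):</p>

```python
import random

random.seed(42)

def sample_mean(n):
    # Mean of n draws from an exponential distribution with mean 1
    return sum(random.expovariate(1.0) for _ in range(n)) / n

# By the CLT the error of the sample mean shrinks roughly like 1/sqrt(n).
err_1k   = abs(sample_mean(1_000)   - 1.0)   # std dev about 0.032
err_100k = abs(sample_mean(100_000) - 1.0)   # std dev about 0.003
```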
| lab bhattacharjee | 33,337 | <p>$$\dfrac{9\alpha^2}{2\alpha^2}=\dfrac{a^2}{a-b}$$</p>
<p>$$\implies9b=9a-2a^2=-2\left(a-\dfrac94\right)^2+2\left(\dfrac94\right)^2\le2\left(\dfrac94\right)^2$$ as $a$ is real</p>
|
2,184,776 | <p>So there's an almost exact question like this here: </p>
<p><a href="https://math.stackexchange.com/questions/576268/use-a-factorial-argument-to-show-that-c2n-n1c2n-n-frac12c2n2-n1#576280">Use a factorial argument to show that $C(2n,n+1)+C(2n,n)=\frac{1}{2}C(2n+2,n+1)$</a></p>
<p>However, I'm getting stuck in just figuring out the lcds for the factorials.</p>
<p>I end up with this after the <strong>CNR</strong>:</p>
<p>$$\frac{(2n)!}{(n-1)!(n+1)!} + \frac{(2n)!}{n!n!}$$</p>
<p>When I try to find the common denominator, I do:</p>
<p>$$\frac{(2n)!n}{(n-1)!(n+1)n!n} + \frac{(2n)!(n+1)}{n(n-1)!n!(n+1)}$$</p>
<p>Putting it together I get:</p>
<p>$$\frac{(2n)!(n) + (2n)!(n+1)}{ (n)(n+1)(n-1)!n!}$$</p>
<p>Which is wrong because according to the other answer, it should be:</p>
<p>$$\frac{(2n+1)!}{n!(n+1)!}$$</p>
<p>Not sure how they got there. I guess that's my question, how did they get that?</p>
<p>I've been googling for hours on how to find common denominators of factorials but can't seem to find anything. I mean, what happened to the $(n-1)!$ ?</p>
<p>Thanks.</p>
| Lærne | 252,762 | <p>I think your solution is correct. Here is the visualization. Take $2n+1$ marbles. You want to pick $n+1$ among them. However you notice one very specific tiny red marble among them. It captures your attention and your decision to pick that marble or not to pick it comes before picking any other marbles.</p>
<p>If you choose to pick the tiny red marble, you're left with $2n$ marbles and you still have $n$ to choose. If you leave the tiny red marble out, you're left with the other $2n$ marbles and still have to choose $n+1$.</p>
<p>Putting it together,
\begin{equation}
C(2n+1,n+1) = C(2n,n) + C(2n,n+1).
\end{equation}</p>
<p>Note that in your computation, the least common multiple of $n!\,n!$ and $(n+1)!(n-1)!$ is $n!(n+1)!$. Indeed,
\begin{align}
n!\cdot n! &= (n-1)!\cdot n! \cdot n\\
(n-1)! \cdot (n+1)! &= (n-1)! \cdot n! \cdot (n+1)
\end{align}
and $n$ and $n+1$ are coprime, hence the least common multiple is
\begin{equation}
(n-1)! \cdot n! \cdot n \cdot (n+1) = n! \cdot (n+1)!
\end{equation}</p>
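<p>Both the marble-counting identity and the LCM computation are easy to confirm numerically; a sketch of my own:</p>

```python
from math import comb, factorial, lcm

# The marble argument: C(2n+1, n+1) = C(2n, n) + C(2n, n+1)
pascal_ok = all(
    comb(2*n + 1, n + 1) == comb(2*n, n) + comb(2*n, n + 1)
    for n in range(1, 30)
)

# lcm(n! * n!, (n-1)! * (n+1)!) = n! * (n+1)!
lcm_ok = all(
    lcm(factorial(n)**2, factorial(n - 1)*factorial(n + 1))
    == factorial(n)*factorial(n + 1)
    for n in range(2, 15)
)
```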
|
2,184,776 | <p>So there's an almost exact question like this here: </p>
<p><a href="https://math.stackexchange.com/questions/576268/use-a-factorial-argument-to-show-that-c2n-n1c2n-n-frac12c2n2-n1#576280">Use a factorial argument to show that $C(2n,n+1)+C(2n,n)=\frac{1}{2}C(2n+2,n+1)$</a></p>
<p>However, I'm getting stuck in just figuring out the lcds for the factorials.</p>
<p>I end up with this after the <strong>CNR</strong>:</p>
<p>$$\frac{(2n)!}{(n-1)!(n+1)!} + \frac{(2n)!}{n!n!}$$</p>
<p>When I try to find the common denominator, I do:</p>
<p>$$\frac{(2n)!n}{(n-1)!(n+1)n!n} + \frac{(2n)!(n+1)}{n(n-1)!n!(n+1)}$$</p>
<p>Putting it together I get:</p>
<p>$$\frac{(2n)!(n) + (2n)!(n+1)}{ (n)(n+1)(n-1)!n!}$$</p>
<p>Which is wrong because according to the other answer, it should be:</p>
<p>$$\frac{(2n+1)!}{n!(n+1)!}$$</p>
<p>Not sure how they got there. I guess that's my question, how did they get that?</p>
<p>I've been googling for hours on how to find common denominators of factorials but can't seem to find anything. I mean, what happened to the $(n-1)!$ ?</p>
<p>Thanks.</p>
| Joffan | 206,402 | <p>Another route is through the recurrence relations.</p>
<p>Note that $C(2n,n+1) = C(2n,n-1)$</p>
<p>Then</p>
<p>$\begin{align}
2\cdot \left( C(2n,n+1)+C(2n,n) \right) &= C(2n,n-1)+2C(2n,n) + C(2n,n+1)\\[1ex]
&= C(2n+1,n)+C(2n+1,n+1) \\[1ex]
&= C(2n+2,n+1) \\
\end{align}$</p>
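<p>Each step of this chain is an application of the symmetry rule or Pascal's rule, which can be confirmed numerically (my check, not part of the answer):</p>

```python
from math import comb

# 2(C(2n,n+1)+C(2n,n)) = C(2n,n-1)+2C(2n,n)+C(2n,n+1)   [symmetry]
#                      = C(2n+1,n) + C(2n+1,n+1)        [Pascal, twice]
#                      = C(2n+2,n+1)                    [Pascal again]
chain_ok = all(
    2*(comb(2*n, n + 1) + comb(2*n, n))
    == comb(2*n, n - 1) + 2*comb(2*n, n) + comb(2*n, n + 1)
    == comb(2*n + 1, n) + comb(2*n + 1, n + 1)
    == comb(2*n + 2, n + 1)
    for n in range(1, 40)
)
```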
|
119,584 | <p>It is known that there are multiplicative versions of concentration inequalities for
sums of independent random variables. For example, the following
multiplicative <strong>Chernoff</strong> bound.</p>
<hr>
<p><strong>Chernoff bound:</strong></p>
<p>Let $X_1,\ldots,X_n$ be independent random variables and $X_i \in$
$[0,1]$. Let $Y=\sum_{i=1}^n X_i$. Then for any $\delta>0$,</p>
<p>$\Pr\left(Y \ge (1+\delta)EY \right) \le e^{-c\cdot(EY)\delta ^2},$</p>
<p>where $c$ is some absolute constant, e.g., c=1/3.</p>
<hr>
<p>Now consider <strong>dependent</strong> random variables. A slight variant of <strong>Azuma</strong>'s inequality states the following.</p>
<hr>
<p><strong>Azuma's Inequality:</strong></p>
<p>Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in
[0,1]$. Assume that there exists $\mu$, such that $ \Pr \left( \sum_{i=1}^n \mathbb{E}[X_i|X_{1},\ldots,X_{i-1}] \le \mu\right) = 1$. Let $Y=\sum_{i=1}^n X_i$. Then for any $\lambda > 0$,</p>
<p>$\Pr\left(Y \ge n\mu+\lambda \right) \le e^{-2 \lambda^2/n}.$</p>
<hr>
<p>Azuma's inequality is additive. My question is whether a
multiplicative version of Azuma's inequality, such as the following,
holds.</p>
<hr>
<p><strong>My question:</strong> does the following hold?</p>
<p>Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in
[0,1]$. Assume that there exists $\mu$, such that $\Pr\left( \sum_{i=1}^n \mathbb{E}[X_i|X_1,\ldots,X_{i-1}] \le \mu\right) = 1.$ Let $Y=\sum_{i=1}^n X_i$. Then for any $\delta>0$</p>
<p>$\Pr\left(Y \ge (1+\delta)n\mu \right) \le e^{-c\cdot n\mu \delta^2},$</p>
<p>where $c$ is some absolute constant.</p>
<hr>
<p><strong>Note</strong>: the standard Azuma's inequality does not imply the
multiplicative version when $n\mu \ll
\sqrt{n}$.</p>
| Iosif Pinelis | 36,721 | <p><span class="math-container">$\newcommand{\de}{\delta}$</span>
The "dependent" version of the multiplicative Chernoff bound can be proved quite similarly to the "independent" case. Indeed, let <span class="math-container">$E_{i-1}$</span> denote the conditional expectation given <span class="math-container">$X_1,\dots,X_{i-1}$</span>, so that <span class="math-container">$E_{i-1}X_i\le\mu$</span> almost surely (a.s.) for <span class="math-container">$i=1,\dots,n$</span>. Take any real <span class="math-container">$t\ge0$</span>. By the convexity of <span class="math-container">$e^{tx}$</span> in <span class="math-container">$x$</span> and the conditions that <span class="math-container">$0\le X_i\le1$</span> and <span class="math-container">$E_{i-1}X_i\le\mu$</span>, we have <span class="math-container">$e^{tX_i}\le1+(e^t-1)X_i$</span> and hence
<span class="math-container">\begin{equation*}
E_{i-1}e^{tX_i}\le1+(e^t-1)E_{i-1}X_i\le1+(e^t-1)\mu\le\exp\{(e^t-1)\mu\}
\end{equation*}</span>
for <span class="math-container">$i=1,\dots,n$</span>. So, by induction, for <span class="math-container">$j=1,\dots,n$</span> and <span class="math-container">$Y_j:=\sum_1^j X_i$</span> we have
<span class="math-container">\begin{equation*}
Ee^{tY_j}=EE_{j-1}e^{tY_j}=Ee^{tY_{j-1}}E_{j-1}e^{tX_j}\le Ee^{tY_{j-1}}\exp\{(e^t-1)\mu\},
\end{equation*}</span>
whence, by induction,
<span class="math-container">\begin{equation*}
Ee^{tY}=Ee^{tY_n}\le\exp\{n(e^t-1)\mu\}.
\end{equation*}</span>
So, using Markov's inequality and then choosing <span class="math-container">$t=\ln(1+\de)$</span>, we get
<span class="math-container">\begin{align*}
P(Y\ge(1+\de)n\mu)&\le e^{-(1+\de)n\mu t}Ee^{tY}
\le\exp\{-(1+\de)n\mu t+n(e^t-1)\mu\} \\
&=\exp\{-n\mu\psi(\de)\},
\end{align*}</span>
where <span class="math-container">$\psi(u):=u-(1+u)\ln(1+u)$</span>.
Up to notation, this bound is the same as the known <a href="https://en.wikipedia.org/wiki/Chernoff_bound#Multiplicative_form_(relative_error)" rel="nofollow noreferrer">multiplicative Chernoff bound in the "independent" case</a>.</p>
<p>Since <span class="math-container">$\psi(u)\le-u^2/3$</span> for <span class="math-container">$u\in[0,3/2]$</span>, we have
<span class="math-container">\begin{equation*}
P(Y\ge(1+\de)n\mu)\le\exp\{-n\mu\de^2/3\} \tag{1}
\end{equation*}</span>
if <span class="math-container">$\de\in[0,3/2]$</span>. </p>
<p>Note that (1) cannot hold for all <span class="math-container">$\de\ge0$</span>, even in the "independent" case. Indeed, suppose that the <span class="math-container">$X_i$</span>'s are iid with <span class="math-container">$P(X_1=1)=\mu=1-P(X_i=0)$</span>, where <span class="math-container">$\mu:=1/n$</span> and <span class="math-container">$n\to\infty$</span>. Then <span class="math-container">$Y$</span> will converge in distribution to a random variable <span class="math-container">$\Pi$</span> with the Poisson distribution with parameter <span class="math-container">$1$</span>, and (1) will yield
<span class="math-container">\begin{equation*}
P(\Pi\ge1+\de)\le\exp\{-\de^2/3\}.
\end{equation*}</span>
Since Poisson distributions are not subgaussian, the latter inequality cannot hold for all <span class="math-container">$\de\ge0$</span>. So, (1) cannot hold for all <span class="math-container">$\de\ge0$</span>. </p>
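<p>The elementary inequality $\psi(u) \le -u^2/3$ on $[0, 3/2]$ used for (1) can be checked numerically on a fine grid; a sketch of my own:</p>

```python
import math

def psi(u):
    # psi(u) = u - (1+u) ln(1+u)
    return u - (1 + u)*math.log(1 + u)

# Smallest slack of the inequality psi(u) <= -u^2/3 over [0, 3/2];
# it should be nonnegative (it is exactly 0 at u = 0).
margin = min(-u*u/3 - psi(u) for u in [1.5*k/2000 for k in range(2001)])
```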
|
18,048 | <p>In a calculus MOOC I am taking, each exercise offers 5 options to select from. I solve the question and select the option that matches my answer. Obviously only one of the options is correct, but quite a few times my solution is wrong even though it appears among the available options.</p>
<p>My question is, how does a maths teacher know how to create wrong options that are as plausible as the correct answer?</p>
<p>I've found these two good questions but I am wondering how would an educator come up with wrong answers in general.</p>
<p><a href="https://matheducators.stackexchange.com/questions/16573/what-are-some-common-ways-students-get-confused-about-finding-an-inverse-of-a-fu/16576#16576">What are some common ways students get confused about finding an inverse of a function?</a></p>
<p><a href="https://matheducators.stackexchange.com/questions/12045/how-are-students-messing-up-in-this-khan-academy-surface-area-problem-right-tr">How are students messing up in this Khan Academy surface area problem? (right triangular prism: 3-4-5 triangular base, height of 11)</a></p>
| Tom Au | 1,333 | <p>If the answer requires a complicated formula, I would present several "permutations" of the formula, only one of which is right. Example:</p>
<p>If <span class="math-container">$f(x) = \frac{u(x)}{v(x)}$</span>, what is <span class="math-container">$f'(x)$</span>? (Quotient Rule.)</p>
<blockquote>
<p>Answers:</p>
</blockquote>
<p>a) <span class="math-container">$u'(x)v(x) + v(x)u'(x)$</span>. Product rule formula, wrong. <br>
b) <span class="math-container">$u'(x)v(x) - v(x)u'(x)$</span>. Correct numerator, no denominator, wrong. <br>
c) <span class="math-container">$\frac{u'(x)v(x) + v(x)u'(x)}{(v(x))^2}$</span>. Correct denominator, wrong numerator, wrong. <br>
d) <span class="math-container">$\frac{u'(x)v(x) - v(x)u'(x)}{(v(x))^2}$</span>. Correct denominator, correct numerator, correct.</p>
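<p>The quotient rule $\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$ can itself be confirmed by numerical differentiation; a sketch of my own with sample functions $u(x) = x^2 + 1$ and $v(x) = x + 2$:</p>

```python
u  = lambda x: x*x + 1
du = lambda x: 2*x       # derivative of u
v  = lambda x: x + 2
dv = lambda x: 1.0       # derivative of v

f = lambda x: u(x) / v(x)

# Central-difference derivative of f vs. the quotient-rule formula
h = 1e-6
x0 = 1.3
numeric = (f(x0 + h) - f(x0 - h)) / (2*h)
formula = (du(x0)*v(x0) - u(x0)*dv(x0)) / v(x0)**2
```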
|
2,792,770 | <p>I found the following question in a test paper:</p>
<blockquote>
<p>Suppose $G$ is a monoid or a semigroup. $a\in G$ and $a^2=a$. What can we say
about $a$?</p>
</blockquote>
<p>Monoids are associative and have an identity element. Semigroups are just associative. </p>
<p>I'm not sure what we can say about $a$ in this case other than that $a$ could be other things apart from the identity. Any idea if there's a definitive answer to this question?</p>
| Mees de Vries | 75,429 | <p>Small note: we only call such a function a random variable when $X$ <em>is</em> measurable.</p>
<p>A very simple example would be $\Omega = \Omega' = \{0, 1\}$, with $\mathcal F = \{\emptyset, \Omega\}$ and $\mathcal F' = \mathcal P(\Omega)$, and $X = \mathrm{id}_\Omega$. Then $\{1\}$ is measurable in $(\Omega', \mathcal F')$, but $X^{-1}(\{1\}) = \{1\}$ is not measurable in $(\Omega, \mathcal F)$.</p>
|
2,459,123 | <p>My attempt:</p>
<p>Step 1 $n=4 \quad LHS = 4! = 24 \quad RHS=4^2 = 16$</p>
<p>Therefore $P(4)$ is true.</p>
<p>Step 2 Assuming $P(n)$ is true for $n=k, \quad k!>k^2, k>3$</p>
<p>Step 3 $n=k+1$</p>
<p>$LHS = (k+1)! = (k+1)k! > (k+1)k^2$ (follows from Step 2)</p>
<p>I am getting stuck at this step.</p>
<p>How do I show $(k+1)k^2 > (k+1)^2$ ?</p>
| David Bowman | 366,588 | <p>Another construction: Just look at the interval $[0,1]$ and iterate this process in each interval. Let $A_n$ be the fat cantor set of measure $\displaystyle \frac{n}{n+1}$. Then $A_n^c$ is open and dense. By Baire Category, $\displaystyle \bigcap_{n \in \mathbb{N}} A_n^c$ is dense. By construction it is $G_\delta$, and it is null, since $\mu\left(\bigcup_{n \in \mathbb{N}} A_n\right) =1.$</p>
|
3,745,159 | <p>Let <span class="math-container">$A,B,C$</span> be <span class="math-container">$n\times n$</span> matrices with real entries that pairwise commute. Also <span class="math-container">$ABC=O_{n}$</span>. If
<span class="math-container">$$k=\det\left(A^3+B^3+C^3\right).\det\left(A+B+C\right)$$</span>
then find the value or the range of values that <span class="math-container">$k$</span> may take.</p>
<p><strong>My Attempt</strong></p>
<p>I tried <span class="math-container">$k=\left(\det(A+B+C)\right)^2\left(\det(A^2+B^2+C^2-AB-BC-CA)\right)$</span>, but couldn't go further than this.</p>
| Aditya Narayan Sharma | 335,483 | <p>Assume that <span class="math-container">$d=(m,n)$</span> and <span class="math-container">$p=(2^n-1,2^m-1)$</span> so that <span class="math-container">$p | (2^n-1)$</span> & <span class="math-container">$p | (2^m-1)$</span> since <span class="math-container">$p$</span> is their GCD.
Now since p divides both of them we can write ,
<span class="math-container">$$ 2^n \equiv 1 (\text{mod }p )$$</span>
<span class="math-container">$$ 2^m \equiv 1 (\text{mod }p )$$</span>
By Bezout's lemma, since <span class="math-container">$(m,n)=d$</span> there exist integers <span class="math-container">$x,y$</span> such that <span class="math-container">$mx+ny=d$</span>.
Raising the congruences above to the powers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and multiplying (legitimate even if one of <span class="math-container">$x,y$</span> is negative, since <span class="math-container">$p$</span> divides the odd number <span class="math-container">$2^n-1$</span>, so <span class="math-container">$2$</span> is invertible modulo <span class="math-container">$p$</span>), we conclude
<span class="math-container">$$ \displaystyle 2^{mx+ny}\equiv 1 (\text{mod }p)$$</span>
This proves that <span class="math-container">$p | 2^d-1$</span> which gives <span class="math-container">$p\le (2^d-1)$</span> . Now since <span class="math-container">$d |m$</span> and <span class="math-container">$d|n$</span> we can prove that <span class="math-container">$2^d-1 | 2^m-1$</span> and <span class="math-container">$2^d-1 | 2^n-1$</span> .
But then <span class="math-container">$(2^d-1)$</span> is a common divisor of both <span class="math-container">$(2^m-1)$</span> and <span class="math-container">$(2^n-1)$</span> and it must be less than or equal to the GCD i.e. <span class="math-container">$2^d-1\le p$</span> as the GCD is the greatest common divisor.</p>
<p>Thus we conclude that <span class="math-container">$p=2^d-1$</span> from the condition <span class="math-container">$2^d-1\le p \le 2^d-1$</span> .</p>
<p>Note : If <span class="math-container">$m|n$</span> then <span class="math-container">$2^n-1=2^{mk}-1=(2^m)^k-1=(2^m-1)(...)$</span> and thus <span class="math-container">$2^m-1 | 2^n-1$</span></p>
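<p>The identity proved here, $\gcd(2^m-1,\,2^n-1) = 2^{\gcd(m,n)}-1$, is easy to confirm computationally; a quick sketch of my own:</p>

```python
from math import gcd

# Check gcd(2^m - 1, 2^n - 1) = 2^gcd(m, n) - 1 over a range of exponents.
gcd_ok = all(
    gcd(2**m - 1, 2**n - 1) == 2**gcd(m, n) - 1
    for m in range(1, 40)
    for n in range(1, 40)
)
```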
|
1,831,134 | <p><a href="https://i.stack.imgur.com/cQEeH.jpg" rel="nofollow noreferrer">Worked examples</a></p>
<p>Can somebody please explain to me how the generator matrix is obtained when we are given the codewords of the binary code in the examples attached.</p>
<p>I tried arranging the codewords in a matrix, with each row being a codeword; I then reduced to row echelon form and hence found a basis of the code. I then tried to construct the generator matrix using the basis, but it does not work out. Please help!</p>
| Ashwin Ganesan | 157,927 | <p>Given a set $\mathcal{C}$ of codewords, before we can construct a generator matrix, we need to verify that $\mathcal{C}$ is a linear subspace - ie, the sum (and also scalar multiples in the non-binary case) of any two codewords must be a codeword. In the link given, the subsets $\mathcal{C}$ given are all subspaces. The rows of the generator matrix $G$ can be taken to be any subset of $\mathcal{C}$ that forms a maximal linearly independent set. If $|\mathcal{C}|=8$, then $G$ would have 3 rows. In general, if $|C|=2^k$ for some $k$, then $\mathcal{C}$ is a $k$-dimensional subspace and has a basis consisting of $k$ vectors. So $G$ would have $k$ rows. </p>
<p>A simple algorithm that picks codewords from $\mathcal{C}$ to form the $k$ rows of $G$ is as follows. Take the first nonzero codeword in $\mathcal{C}$ and put it as the first row of $G$. For $i = 1, 2, \ldots, k-1$, after adding the $i$th row to $G$, remove from $\mathcal{C}$ all linear combinations of the first $i$ rows of $G$. This leaves a subset $\mathcal{C}_i \subseteq \mathcal{C}$ containing codewords outside the $i$-dimensional subspace spanned by the $i$ rows of $G$. Choose any vector from $\mathcal{C}_i$ for the $(i+1)$-th row of $G$.</p>
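<p>The greedy algorithm described above can be sketched for binary codes with codewords as bit-tuples; this is my own illustration (function and variable names are mine):</p>

```python
def xor(u, v):
    # Componentwise sum over GF(2)
    return tuple(a ^ b for a, b in zip(u, v))

def generator_rows(codewords):
    """Greedily pick codewords that form a basis (rows of G)."""
    rows = []
    span = {tuple(0 for _ in next(iter(codewords)))}  # combos found so far
    for c in codewords:
        if tuple(c) not in span:
            rows.append(tuple(c))
            # Adding a new row doubles the spanned subspace.
            span |= {xor(tuple(c), s) for s in span}
    return rows

# Example: the even-weight code of length 3, a 2-dimensional subspace,
# so G should have k = 2 rows.
code = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
G = generator_rows(code)
```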
|
665,596 | <p>Let $b_n$ be the number of lists of length $100$ from the set $\{0,1,2\}$ such that the sum of their entries is $n$. How does $b_{198}$ equal ${100\choose 2}+100$?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left( #1 \right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
b_{n} &=\sum_{n_{0}=0}^{\infty}\sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}
\delta_{n_{0}\ +\ n_{1}\ +\ n_{2},100}\,
\delta_{0\,\times\,n0\ +\ 1\,\times\,n_{1}\ +\ 2\,\times\,n_{2},n}\tag{1}
\\[3mm]&=\sum_{n_{0}=0}^{\infty}\sum_{n_{1}=0}^{\infty}\sum_{n_{2}=0}^{\infty}
\int_{\verts{z}=1}{1 \over z^{-n_{0} - n_{1} - n_{2} + 101}}\,{\dd z \over 2\pi \ic}
\int_{\verts{z'}=1}{1 \over z'^{-n_{1} - 2n_{2} + n + 1}}\,{\dd z' \over 2\pi \ic}
\\[3mm]&=\int_{\verts{z}=1}{\dd z \over 2\pi \ic}
\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z^{101}}
\,{1 \over z'^{n + 1}}\,\pars{\sum_{n_{0} = 0}^{\infty}z^{n_{0}}}
\bracks{\sum_{n_{1} = 0}^{\infty}\pars{zz'}^{n_{1}}}
\bracks{\sum_{n_{2} = 0}^{\infty}\pars{zz'^{2}}^{n_{2}}}
\\[3mm]&=\int_{\verts{z}=1}{\dd z \over 2\pi \ic}
\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z^{101}}\,{1 \over z'^{n + 1}}\,
{1 \over 1 - z}\,{1 \over 1 - zz'}\,{1 \over 1 - zz'^{2}}
\\[3mm]&=\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,{1 \over z^{101}}\,{1 \over 1 - z}
\color{#f00}{\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z'^{n + 1}}\,
{1 \over 1 - zz'}\,{1 \over 1 - zz'^{2}}}\tag{2}
\end{align}</p>
<blockquote>
<p>\begin{align}
&\color{#f00}{\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z'^{n + 1}}\,
{1 \over 1 - zz'}\,{1 \over 1 - zz'^{2}}}
=\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z'^{n + 1}}
\sum_{\ell = 0}^{\infty}\pars{zz'}^{\ell}
\sum_{\ell' = 0}^{\infty}\pars{zz'^{2}}^{\ell'}
\\[3mm]&=\sum_{\ell' = 0}^{\infty}z^{\ell'}\sum_{\ell = 0}^{\infty}z^{\ell}
\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{z'^{\ell + 2\ell'} \over z'^{n + 1}}
=\sum_{\ell' = 0}^{\infty}z^{\ell'}\sum_{\ell = 0}^{\infty}z^{\ell}
\delta_{\ell,n - 2\ell'}
=\sum_{\ell' = 0}^{N}z^{n - \ell'}
\\&\mbox{where}\ N \equiv \floor{n \over 2}
\end{align}
\begin{align}
&\color{#f00}{\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z'^{n + 1}}\,
{1 \over 1 - zz'}\,{1 \over 1 - zz'^{2}}}
=z^{n}\,{\pars{1/z}^{N + 1} - 1 \over 1/z - 1}
=\color{#f00}{z^{n - N}\,{1 - z^{N + 1} \over 1 - z}}
\end{align}</p>
</blockquote>
<p>Now, we replace this result in expression $\pars{2}$:
\begin{align}
b_{n}&=
\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,{1 \over z^{N - n + 101}}\,
{1 - z^{N + 1} \over \pars{1 - z}^{2}}
=\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,{1 \over z^{N - n + 101}}\,
\pars{1 - z^{N + 1}}\sum_{\ell = 1}^{\infty}\ell z^{\ell - 1}
\\[3mm]&=\sum_{\ell = 1}^{\infty}\ell\pars{%
\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,{z^{\ell - 1} \over z^{N - n + 101}}
-
\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,{z^{\ell} \over z^{-n + 101}}}
=\sum_{\ell = 1}^{\infty}\ell\pars{%
\delta_{\ell,N - n + 101} - \delta_{\ell,-n + 100}}
\end{align}</p>
<blockquote>
<p>$$
\color{#00f}{\large b_{n}}
=\color{#00f}{\large\pars{\floor{n \over 2} - n + 101}
_{n - \floor{n \over 2}\ \leq\ 100}\ -\ \pars{100 - n}_{n\ \leq\ 99}}
$$
${\tt\mbox{When the condition at the right of the}\ \pars{~~}\mbox{'s is false, the}\ \pars{~~}\mbox{'s evaluate to}\ 0}$.</p>
</blockquote>
<p>For example, when $n = 198$ we have that $198 - \floor{198 \over 2} = 99 \leq 100$ is true but
$198 \leq 99$ is false. Then,
$$
b_{198} =\pars{\floor{198 \over 2} - 198 + 101} = {\large 2}
$$</p>
<p>The maximum value $\pars{~= 51~}$ occurs at $n = 100$. $b_{n}$ is symmetrical around $n = 100$ $\pars{~b_{n} = b_{200 - n}\,,\quad n \leq 100~}$ and vanishes out for $n > 200$. That means:
$$\color{#00f}{\large%
b_{n} =
\left\lbrace%
\begin{array}{lcc}
1 + \floor{n \over 2} & \mbox{if} & n \leq 99
\\[2mm]
51 & \mbox{if} & n = 100
\\[2mm]
b_{200 - n} & \mbox{if} & 101 \leq n \leq 200
\\[2mm]
0 && \mbox{otherwise}
\end{array}\right.}\tag{3}
$$
$\tt\mbox{Result}\ \pars{3}\ \mbox{is valid when we do not take the list order into account!}$.</p>
<blockquote>
<p>$\large\tt\mbox{ADDENDUM: Hereafter we take into account the list order}$<br>
In this case, each factor in $\pars{1}$ is multiplied by $100!/\pars{n_{0}!\,n_{1}!\,n_{2}!}$ such that $\pars{2}$ is reduced to:
\begin{align}
b_{n}&=
100!\int_{\verts{z'}=1}{\dd z' \over 2\pi \ic}\,{1 \over z'^{n + 1}}
\int_{\verts{z}=1}{\dd z \over 2\pi \ic}\,
{\expo{z\pars{1 + z' + z'^{2}}} \over z^{101}}
=\int_{\verts{z'}=1}
{\pars{1 + z' + z'^{2}}^{100} \over z'^{n + 1}}\,{\dd z' \over 2\pi \ic}
\\[3mm]&=\int_{\verts{z}=1}{1 \over z^{n + 1}}
\sum_{n_{0} =0, n_{1} = 0, n_{2} = 0 \atop n_{0} + n_{1} + n_{2} = 100}
{100! \over n_{0}!\,n_{1}!\,n_{2}!}\,1^{n_{0}}z^{n_{1}}\pars{z^{2}}^{n_{2}}
\,{\dd z \over 2\pi \ic}
\\[3mm]&=\left.\sum_{n_{0} =0, n_{1} = 0, n_{2} = 0}{100! \over n_{0}!\,n_{1}!\,n_{2}!}
\right\vert_{n_{0}\ +\ n_{1}\ +\ n_{2}\ =\ 100
\atop {\vphantom{\huge A}n_{1}\ +\ 2n_{2}\ =\ n}}
\\[3mm]&=\left.\sum_{n_{0} = 100 - n}^{100 - \floor{n/2}}
{100! \over n_{0}!\pars{200 - n - 2n_{0}}!\pars{n_{0} + n - 100}!}
\right\vert_{n_{0}\ \geq\ 0}
\end{align}</p>
</blockquote>
<p>Indeed,
$$
b_{198}=\sum_{n_{0} = 0}^{1}{100! \over n_{0}!\pars{2 - 2n_{0}}!\pars{n_{0} + 98}!}
= {100! \over 2!\,98!} + 100 = \color{#00f}{{100 \choose 2} + 100} = 5050
$$</p>
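<p>Both counts can be verified by brute force (a Python sketch; the function name is mine). It enumerates the triples $(n_0,n_1,n_2)$ with $n_0+n_1+n_2=100$ and $n_1+2n_2=n$, counting each triple once for the unordered case and with multinomial weight $100!/(n_0!\,n_1!\,n_2!)$ for the ordered case:</p>

```python
from math import factorial

def counts(n, total=100):
    # enumerate triples (n0, n1, n2) with n0 + n1 + n2 = total and n1 + 2*n2 = n
    unordered, ordered = 0, 0
    for n2 in range(total + 1):
        n1 = n - 2 * n2
        n0 = total - n1 - n2
        if n1 >= 0 and n0 >= 0:
            unordered += 1  # one solution of the constraint pair
            ordered += factorial(total) // (factorial(n0) * factorial(n1) * factorial(n2))
    return unordered, ordered

print(counts(198))  # -> (2, 5050)
```

<p>The same routine also confirms the symmetry $b_n = b_{200-n}$ and the maximum $b_{100}=51$ claimed in $\pars{3}$.</p>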
|
1,134,177 | <p>Consider this primality test: Fix an initial segment of primes (e.g. 2,3,5,7), and combine a $b$-pseudoprime test for each b in that list. For several such initial segments, find the first $n$ for which the test gives an incorrect answer.</p>
<p>Hey all! I'm not quite understanding what the aforementioned question is asking... So help would be handy. :)</p>
<p>My code:</p>
<pre><code>primetest1 := proc (Numprimes, N) local i, j;
  for i to Numprimes do
    for j to N do
      # use the inert power &^ so that mod can reduce the power efficiently
      if `mod`(`&^`(ithprime(i), j-1), j) = 1 then
        if isprime(j) = false then print(j, "is a false answer")
        end if
      end if
    end do
  end do
end proc
</code></pre>
| Joffan | 206,402 | <p>My interpretation is as follows:</p>
<p>Take some group of primes $p_i =\{2,3,\ldots\}$; then, for each number $n$ in turn that is not divisible by any of these, apply the Fermat pseudoprime test with every base $p_i$:
$$ p_i^{n-1} \equiv1 \bmod n \implies n \text{ is prime} $$</p>
<p>Find the first number that gets this test wrong.</p>
<hr>
<p>In the case of $\{2,3,5\}$, the first non-prime that appears to be a prime under this assessment is $1729$, followed by $2821$ and $6601$.</p>
<hr>
<p>First five to fail for each group of primes...
$$\begin{array}{c|c|c|c}
\{2\} & \{2,3\} & \{2,3,5\} & \{2,3,5,7\} \\ \hline
341 & 1105 & 1729 & 29341 \\
561 & 1729 & 2821 & 46657 \\
645 & 2465 & 6601 & 75361 \\
1105 & 2701 & 8911 & 115921 \\
1387 & 2821 & 15841 & 162401 \\
\end{array}$$</p>
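<p>These values can be reproduced with a short brute-force search (a Python sketch; the function names are mine): for each $n$ coprime to the chosen bases, apply the Fermat test with every base and collect the composites that slip through.</p>

```python
def is_prime(n):
    # plain trial division; fine for the small range searched here
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def first_failures(bases, count=3):
    # composites n, coprime to every base, with pow(b, n-1, n) == 1 for each base b
    out, n = [], 2
    while len(out) < count:
        if all(n % b for b in bases) \
                and all(pow(b, n - 1, n) == 1 for b in bases) \
                and not is_prime(n):
            out.append(n)
        n += 1
    return out

print(first_failures([2]))        # [341, 561, 645]
print(first_failures([2, 3]))     # [1105, 1729, 2465]
print(first_failures([2, 3, 5]))  # [1729, 2821, 6601]
```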
|
2,369,081 | <blockquote>
<p>Evaluate the integral $$\int_0^1\frac{x^7-1}{\log (x)}\,dx $$</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/lcK2p.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/lcK2p.jpg</a> I've been trying to solve this definite integral for 2 hours. Please, I need help with this.</p>
| sn24 | 199,128 | <p>You can use the Feynman approach (differentiation under the integral sign). Consider
\begin{align}
f(n)=\int_0^1 \frac{x^n-1}{\log(x)}\,dx.
\end{align}
Then
\begin{align}
f'(n)=\int_0^1 \frac{x^n\log(x)}{\log(x)}\,dx=\int_0^1 x^n\,dx,
\end{align}
so
\begin{align}
f'(n)=\frac{1}{n+1}
\end{align}
and therefore
\begin{align}
f(n)=\log(n+1)+c.
\end{align}
To determine $c$, note that $f(0)=0$, so $c=0$. Hence
\begin{align}
f(7)=\int_0^1 \frac{x^7-1}{\log(x)}\,dx=\log(8).
\end{align}</p>
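<p>A quick numerical sanity check of the result (a Python sketch using a plain midpoint rule; the integrand is bounded, tending to $0$ as $x\to0^+$ and to $7$ as $x\to1^-$):</p>

```python
import math

def integrand(x):
    # (x^7 - 1)/log(x), with removable behaviour at both endpoints
    return (x**7 - 1) / math.log(x)

n = 200_000
h = 1.0 / n
# midpoint rule: sample points (k + 1/2) h never hit x = 0 or x = 1
approx = h * sum(integrand((k + 0.5) * h) for k in range(n))
print(approx, math.log(8))  # both close to 2.0794...
```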
|
3,336,311 | <p><strong>Help me factor these polynomials</strong> </p>
<ul>
<li><span class="math-container">$(x+\sqrt{2})^2 - 8$</span></li>
<li><span class="math-container">$14a - 49a^2 + 100b^2 - 1$</span></li>
</ul>
| David G. Stork | 210,401 | <p><span class="math-container">$$(x + 3 \sqrt{2})(x - \sqrt{2})$$</span></p>
<p><span class="math-container">$$7 a (2 - 7a) + (10 b + 1)(10 b - 1)$$</span></p>
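<p>The grouping in the second line can be pushed to a full factorization: since $14a-49a^2-1=-(7a-1)^2$, the expression is the difference of squares $100b^2-(7a-1)^2=(10b+7a-1)(10b-7a+1)$. A random numerical check of both identities (a Python sketch):</p>

```python
import random

random.seed(0)
r2 = 2 ** 0.5
for _ in range(200):
    x, a, b = (random.uniform(-10, 10) for _ in range(3))
    # first expression: (x + sqrt(2))^2 - 8 = (x + 3 sqrt(2))(x - sqrt(2))
    assert abs(((x + r2) ** 2 - 8) - (x + 3 * r2) * (x - r2)) < 1e-8
    # second expression as a difference of squares
    lhs = 14 * a - 49 * a ** 2 + 100 * b ** 2 - 1
    rhs = (10 * b + 7 * a - 1) * (10 * b - 7 * a + 1)
    assert abs(lhs - rhs) < 1e-8
print("both factorizations check out")
```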
|
1,823,736 | <p><a href="http://www.math.drexel.edu/~dmitryk/Teaching/MATH221-SPRING'12/Sample_Exam_solutions.pdf" rel="nofollow">Problem 10c from here</a>.</p>
<blockquote>
<p>Thirteen people on a softball team show up for a game. Of the $13$ people who show up, $3$ are women. How many ways are there to choose $10$ players to take the field if at least one of these players must be a woman?</p>
</blockquote>
<p>The given answer is calculated by summing the combination of $1$ woman + $9$ men, $2$ women + $8$ men, and $3$ women + $7$ men.</p>
<p>My question is, why can't we set this up as the sum $\binom{3}{1} + \binom{12}{9}$ - picking one of the three women first, then picking $9$ from the remaining $12$ men and women combined? The only requirement is that we have at least one woman, which is satisfied by $\binom{3}{1}$, and that leaves a pool of $12$ from which to pick the remaining $9$. The answer this way is <em>close</em> to the answer given, but it's $62$ short. I get that it's the "wrong" answer but I'm wondering why my thinking was wrong in setting it up this way. Thanks.</p>
| Charles | 306,633 | <p>This looks like a homework problem, and I think there might even be a tag for such problems.</p>
<p>Regardless, from the graph you can see that you can construct the area with two integrals (vertical <em>or</em> horizontal). I'll illustrate vertical.</p>
<p>The total area between functions $f(x)$ and $g(x)$ where $f$ is above $g$, may be written as</p>
<p>$A = \int_a^b (f(x) - g(x) )dx$</p>
<p>Since $y_1$ is the top function for both segments, and the integral must be split at $x=3$, we have</p>
<p>$A = \int_0^3 (y_1 - y_2) dx + \int_3^4 (y_1 - y_3 ) dx$</p>
<p>Then plug in your expressions for $y_1,y_2,y_3$ and integrate.</p>
<p>Substituting, we have</p>
<p>$A = \int_0^3 (\frac{25}{3} x - x^2) dx + \int_3^4 (52 - 9x - x^2 ) dx$</p>
<p>Performing this integral (which is straightforward) yields $A= \frac{110}{3}$.</p>
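<p>The arithmetic can be confirmed exactly with rational arithmetic (a Python sketch; the antiderivative names <code>G1</code>, <code>G2</code> are mine):</p>

```python
from fractions import Fraction as Fr

def G1(x):
    # antiderivative of 25/3 * x - x^2
    x = Fr(x)
    return Fr(25, 6) * x ** 2 - x ** 3 / 3

def G2(x):
    # antiderivative of 52 - 9x - x^2
    x = Fr(x)
    return 52 * x - Fr(9, 2) * x ** 2 - x ** 3 / 3

A = (G1(3) - G1(0)) + (G2(4) - G2(3))
print(A)  # 110/3
```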
|
1,703,120 | <p>So I have a vector <span class="math-container">$a = (2, 2)$</span> and a vector <span class="math-container">$b = (0, 1)$</span>.<br />
As my teacher told me, <span class="math-container">$ab = (-2, -1)$</span>.</p>
<p><span class="math-container">$ab = b-a = (0, 1) - (2, 2) = (0-2, 1-2) = (-2, -1)$</span><br />
<span class="math-container">$ab = a-b = (2, 2) - (0, 1) = (2-0, 2-1) = (2, 1)$</span></p>
<p>It seems like it's the same, but the negative signs are gone.</p>
<p>Why do I have to subtract a from b to get ab? Why not a-b or a+b?</p>
| MPW | 113,214 | <p>They are related by the fact that $$\mathbf a- \mathbf b = -(\mathbf b- \mathbf a)$$</p>
<p>The difference is the direction. Generally, the vector from a starting point to an ending point is $$(\textrm{terminal point})-(\textrm{initial point})$$</p>
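<p>In code the same rule reads off directly (a small Python sketch; the names are mine):</p>

```python
def vector(initial, terminal):
    # vector from initial point to terminal point: terminal minus initial
    return tuple(t - i for i, t in zip(initial, terminal))

a, b = (2, 2), (0, 1)
print(vector(a, b))  # AB = b - a = (-2, -1)
print(vector(b, a))  # BA = a - b = (2, 1)
```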
|
2,187,509 | <p>We're currently implementing the IBM Model 1 in my course on statistical machine translation, and I'm struggling with the following application of the chain rule.
When applying the model to the data, we need to compute the probabilities of different alignments given a sentence pair in the data. In other words, we need to compute $\Pr(a\mid e,f)$, the probability of an alignment given the English and foreign sentences.</p>
<p>Why do I end up with </p>
<p>$$
\Pr(a\mid e,f) = \frac{\Pr( e,a \mid f )}{\Pr( e \mid f )}
$$</p>
<p>applying the chain rule which would be </p>
<p>$$
\Pr(A,B,C) = \Pr(A)\Pr(B \mid A)\Pr (C \mid B,A)
$$</p>
| Barry Cipra | 86,747 | <p>Rewrite the inequality to be proved as</p>
<p>$$\left(1+u\over2\right)^2e^{(1+u)/2}-\ln\left(1+u\over2\right)-1\gt0$$</p>
<p>for $-1\lt u\lt1$. This can be rewritten as</p>
<p>$$(1+u)^2\sqrt ee^{u/2}-4\ln(1+u)+4\ln2-4\gt0$$</p>
<p>Now $e^{u/2}\ge1+{u\over2}$, so </p>
<p>$$(1+u)^2\sqrt ee^{u/2}\ge(1+2u)\sqrt e\left(1+{1\over2}u\right)\ge\left(1+{5\over2}u\right)\sqrt e$$ </p>
<p>Also, $\ln(1+u)\le u$, so </p>
<p>$$-4\ln(1+u)\ge-4u$$</p>
<p>Putting these together, we have, for $|u|\le1$,</p>
<p>$$\begin{align}
(1+u)^2\sqrt ee^{u/2}-4\ln(1+u)+4\ln2-4
&\ge\sqrt e+4\ln2-4+\left({5\over2}\sqrt e-4\right)u\\
&\ge\sqrt e+4\ln2-4-\left|{5\over2}\sqrt e-4\right|
\end{align}$$</p>
<p>It remains to compute $\sqrt e+4\ln2-4\approx0.4213$ while ${5\over2}\sqrt e-4\approx0.1218$.</p>
<p>There might be some slicker way to do this that avoids the messy numerics at the end. I'd like to see it.</p>
<p><strong>Added later</strong>: Oh, here's a way to avoid the messy numerics. Note first that $e\gt2.56=1.6^2=(8/5)^2$, so ${5\over2}\sqrt e-4$ is positive. This simplifies things to showing $4\ln2\ge{3\over2}\sqrt e$. If we now allow ourselves to know that $\ln2\gt{2\over3}$, it suffices to show that ${16\over9}\gt\sqrt e$, or ${256\over81}\gt e$, which is clear, since ${256\over81}\gt3\gt e$.</p>
<p>If you want a quick proof that $\ln2\gt{2\over3}$, note that that inequality is equivalent to $8\gt e^2$, which follows from $8\gt{196\over25}=\left(14\over5\right)^2=2.8^2\gt e^2$.</p>
<p>And just to leave no numerical stone unturned, here's why we can confidently say that ${14\over5}\gt e\gt{64\over25}$:</p>
<p>$$e\gt1+1+{1\over2}+{1\over6}=2+{2\over3}\gt2+{14\over25}={64\over25}$$</p>
<p>since $2\cdot25\gt3\cdot14$, and</p>
<p>$$e^{-1}\gt1-1+{1\over2}-{1\over6}+{1\over24}-{1\over120}={11\over30}\gt{5\over14}$$</p>
<p>since $11\cdot14\gt5\cdot30$.</p>
|
66,009 | <p>Hi, I have a very simple question, but I haven't been able to find a clear answer: how would I draw a bunch of polygons in one graphic? The following does not work:</p>
<pre><code>Graphics[{Polygon[{{989, 1080}, {568, 1080}, {834, 711}}],
Polygon[{{1184, 1080}, {989, 1080}, {834, 711}, {958, 541}}],
Polygon[{{1379, 1080}, {1184, 1080}, {958, 541}, {1082, 370}}],
Polygon[{{1470, 1080}, {1379, 1080}, {1082, 370}, {1140, 291}}],
Polygon[{{1665, 1080}, {1470, 1080}, {1140, 291}, {1263, 120}}],
Polygon[{{1756, 1080}, {1665, 1080}, {1263, 120}, {1321, 41}}],
Polygon[{{1394, 0}, {1920, 0}, {1920, 1080}, {1846, 1080}}],
Polygon[{{1352, 0}, {1394, 0}, {1846, 1080}, {1756, 1080}, {1321,
41}}], Polygon[{{931, 0}, {1352, 0}, {1084, 367}}],
Polygon[{{736, 0}, {931, 0}, {1084, 367}, {961, 537}}],
Polygon[{{540, 0}, {736, 0}, {961, 537}, {836, 708}}],
Polygon[{{450, 0}, {540, 0}, {836, 708}, {779, 788}}],
Polygon[{{255, 0}, {450, 0}, {779, 788}, {654, 958}}],
Polygon[{{164, 0}, {255, 0}, {654, 958}, {597, 1038}}],
Polygon[{{73, 0}, {164, 0}, {597, 1038}, {568, 1080}, {524, 1080}}],
Polygon[{{0, 0}, {73, 0}, {524, 1080}, {0, 1080}}]}]
</code></pre>
<p>I apologize for how basic this question is. If anyone could steer me in the right direction it would be greatly appreciated.</p>
| Szabolcs | 12 | <p>This is not a full answer, just a start towards a solution.</p>
<p>The culprit is <code>Dispatch</code>, which became <a href="http://reference.wolfram.com/mathematica/ref/AtomQ.html" rel="noreferrer">atomic</a> in version 10, and comparison wasn't implemented for it.</p>
<p>Here's a small test in version 9:</p>
<pre><code>In[1]:= a =
Dispatch[{"a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "e" -> 5,
"f" -> 6, "g" -> 7, "h" -> 8, "i" -> 9, "j" -> 10, "k" -> 11,
"l" -> 12, "m" -> 13}];
In[2]:= b =
Dispatch[{"a" -> 1, "b" -> 2, "c" -> 3, "d" -> 4, "e" -> 5,
"f" -> 6, "g" -> 7, "h" -> 8, "i" -> 9, "j" -> 10, "k" -> 11,
"l" -> 12, "m" -> 13}];
In[3]:= AtomQ[a]
Out[3]= False
In[4]:= a === b
Out[4]= True
</code></pre>
<p>(Note that there need to be a certain number of elements in the dispatch table before it actually builds a hash table from it. It won't happen when using only a few rules.)</p>
<p>In version 10 we get</p>
<pre><code>In[3]:= AtomQ[a]
Out[3]= True
In[4]:= a === b
Out[4]= False
</code></pre>
<p>Unfortunately I do not see an easy solution around this as the <code>Dispatch</code> objects are only small parts of the expressions you are comparing. A "proper" solution would be a significant reworking of the package, but who has time for that ... ?</p>
<p>Another question is: is this a bug? I suspect that many <em>programmers</em> (which most of us aren't) wouldn't consider it a bug. There was good reason to make <code>Dispatch</code> atomic, for good performance, and seamlessly integrating atomic objects into Mathematica is hard. Lots of operations need special implementation for a seamless integration: pattern matching, comparison, translation to/from some sort of FullForm, etc. Also, the type of comparison you are doing between these expressions could be considered a hack or a quick-and-dirty solution.</p>
<p>On the other hand: Mathematica has always had the implicit promise that we can rely on <a href="http://reference.wolfram.com/language/tutorial/EverythingIsAnExpression.html" rel="noreferrer">everything being an expression</a> and one of its big strengths is that it's really easy to hack together useful and working solutions. They are often not proper solutions, but most people who use Mathematica (researchers) don't <em>develop software</em>, as programmers do. We just create quick and dirty solutions for our own immediate use, targeted at the problem at hand. And Mathematica really shines at this.</p>
<p>You should definitely contact WRI about this and let them know about your use case.</p>
<hr />
<h2>Possible workaround</h2>
<p>Here's a possible workaround: instead of <code>Dispatch</code>, use <code>Association</code> (new in 10). In a quick test, <code>Association</code> seems to give similar performance benefits to <code>Dispatch</code>. But unlike <code>Dispatch</code>, associations <em>can</em> be compared using <code>===</code>. Just be sure to <code>KeySort</code> the association before including it in the expression: <code><| a -> 1, b -> 2 |></code> is different from <code><| b -> 2, a -> 1 |></code>.</p>
|
71,166 | <p>This question have been driving me crazy for months now. This comes from work on multiple integrals and convolutions but is phrased in terms of formal power series.</p>
<p>We start with a formal power series</p>
<p>$P(C) = \sum_{n=0}^\infty a_n C^{n+1}$</p>
<p>where $a_n = (-1)^n n!$</p>
<p>With these coefficients the formal power series can be expressed as a hypergeometric function</p>
<p>$P(C) = C \, _2F_0(1,1;;-C)$</p>
<p>I'm then interested in the formal power series $P_T(C)=\frac{P}{1-P}$ as well as, if possible, the series $P^n$ for arbitrary positive integer n (where this is the power series P raised to the nth power).</p>
<p>Specifically if</p>
<p>$P_T(C) = \sum_{n=0}^\infty b_n C^{n}$ </p>
<p>then want to construct the function</p>
<p>$f(x) = \sum_{n=0}^\infty \frac{b_{n+1}}{(n!)^2} x^{n}$</p>
<p>which, from other results, should converge for all x. We can think of this as the doubly-exponential generating function for the $b_n$ sequence.</p>
<p>There are rules for multiplying and dividing formal power series (see <a href="http://en.wikipedia.org/wiki/Formal_power_series#Operations_on_formal_power_series" rel="nofollow">here</a>) and I've used these to get a recurrence relation for the coefficients in $P_T(C)$ (as well as P^n(C)) but I've been unable to solve these recurrence relations explicitly. They're in a form where each $b_n$ depends on all the previous $b_n$'s and I've not been able to make progress with them.</p>
<p>Explicitly the recurrence relation for the $b_n$ is $b_0 = 1$, $b_n = \sum_{k=1}^n b_{n-k} a_k$ (for n > 1). This looks simple enough but I don't think has a nice closed-form expression.</p>
<p>Nevertheless I do know what the $b_n$ are. They are the sequence <a href="http://oeis.org/A052186" rel="nofollow">A052186</a> (up to plus and minus signs). So </p>
<p>$P_T(C) = C+C^3-3 C^4+14 C^5-77 C^6+497 C^7-3676 C^8+\ldots$</p>
<p>and</p>
<p>$f(x) = 1 + \frac{1}{4}x^2 - \frac{1}{12}x^3 + \frac{7}{288}x^4 - \frac{77}{14400} x^5 +
\frac{497}{518400}x^6 +\ldots$</p>
<p>The question is, is it possible to figure out the function $f(x)$!? Does it have a nice closed form? Perhaps in the form of a hypergeometric function? If it does, great, if it doesn't then at least I can stop searching for it!</p>
| Gerald Edgar | 454 | <p>Of course your series $P(C)$ diverges. But it is a transseries. Or an asymptotic series. In fact, one of the best known. The series
$$
\sum_{n=0}^\infty (-1)^n n! C^{n+1}
$$
is the asymptotic series (as $C \downarrow 0$) for the function
$$
p(C) = -e^{1/C} \mathrm{Ei}(-1/C) .
$$
So, of course, your series $P_T(C)$ is the asymptotic series for
$$
p_T(C) = -1 + \frac{1}{1+e^{1/C} \mathrm{Ei}(-1/C)} .
$$
Now all we need is to remember the formal relation between an ordinary generating function and an exponential generating function, and apply it twice to $p_T$. Maybe that is not so easy?</p>
|
2,263,230 | <p>Let's say I wanted to express sqrt(4i) in a + bi form. A cursory glance at WolframAlpha tells me it has not just a solution of 2e^(i*Pi/4), which I found, but also 2e^(i*(-3Pi/4)).</p>
<p>Why do roots of unity exist, and why do they exist in this case? How could I find the second solution? </p>
| symplectomorphic | 23,611 | <p>You're solving the equation $z^2=4i$. According to the Fundamental Theorem of Algebra, this equation has two complex roots. You can find them in many ways. </p>
<p>The most elementary approach: assume $z=a+bi$, where $a$ and $b$ are real. Then $(a+bi)^2=(a^2-b^2)+2abi=4i$. Equating real and imaginary parts, you need $a^2-b^2=0$ and $2ab=4$. The first equation says either $a=b$ or $a=-b$. If $a=b$, then the second equation says $a^2=2$, whence $a=\pm\sqrt{2}$. If $a=-b$, the second equation says $a^2=-2$, which has no solutions (because we assumed $a$ is real). So the solutions are $\sqrt{2}+i\sqrt{2}$ and $-\sqrt{2}-i\sqrt{2}$. </p>
<p>Alternatively, write $4i=4e^{i\pi/2}$ and $z=re^{i\theta}$, with $r$ a positive real number and $\theta\in[0,2\pi)$. Then $z^2=4i$ says $r^2e^{2i\theta}=4e^{i\pi/2}$. Conclude that $r=2$. To finish we need to find $\theta$ such that $e^{2i\theta}=e^{i\pi/2}$. Clearly $\theta=\pi/4$ works. But $e^{i\pi/2}=e^{i(\pi/2+2\pi)}$, so we can also take $2\theta=\pi/2+2\pi$, whence $\theta=\pi/4+\pi=5\pi/4$. Hence the two solutions are $2e^{i\pi/4}$ and $2e^{5i\pi/4}$.</p>
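<p>Both roots are easy to confirm numerically (a Python sketch using <code>cmath</code>):</p>

```python
import cmath
import math

w = 4j
r1 = complex(math.sqrt(2), math.sqrt(2))   # sqrt(2) + i sqrt(2) = 2 e^{i pi/4}
r2 = -r1                                   # = 2 e^{5 i pi/4}
for z in (r1, r2):
    assert abs(z * z - w) < 1e-12          # each squares to 4i
assert abs(r1 - 2 * cmath.exp(1j * math.pi / 4)) < 1e-12
assert abs(r2 - 2 * cmath.exp(5j * math.pi / 4)) < 1e-12
print("both square roots of 4i verified")
```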
|
119,904 | <p>It is possible to do simple math between TemporalSeries objects. For example</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* works fine *)
</code></pre>
<p>This returns a TemporalData object with the path <code>{{3660595200, 4}, {3660768000, 6.51}}</code>.</p>
<p>This method does not work however if the objects being manipulated have differing dimensions.</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2012, 1, 3}, 1.9}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* ka-boom *)
</code></pre>
<p>With the data I'm using, one series or the other may be missing data at various time stamps, and it would be acceptable to either interpolate for those missing points or to simply omit them from the computation and the output.</p>
<p>I've been dealing with this problem in several steps,</p>
<ol>
<li>Make a list of the dates that all the series have in common with
<code>intersectionDates=Apply[Intersection, Map[#["Path"][[All, 1]] &,
listOfTD]]</code></li>
<li>Extract the points for only those dates with <code>alignedData=Map[Select[#["Path"], MemberQ[intersectionDates, #[[1]]] &] &, listOfTD]</code></li>
<li>And finally turn the output back into TemporalData with <code>Map[TimeSeries[#] &, alignedData]</code></li>
</ol>
<p>In practice I have additional code to remove duplicate dates, if any exist, to convert EventSeries back to EventSeries, and some other bells and whistles, not shown above.</p>
<p>The problem is, this method is unusably slow for large datasets. With a few hundred points, it works fine, but with 5000+, it becomes intolerable.</p>
<p>I welcome any suggestions for approaches that would allow me to complete simple math between TemporalData objects of differing dimensions much faster.</p>
| Michael Stern | 86 | <p>Given two <code>TimeSeries</code> (<code>ts1</code> and <code>ts2</code>), I was able to speed up my results about 60x with the following:</p>
<ol>
<li><code>paths=Map[#["Path"] &, {ts1,ts2}];</code></li>
<li><code>commonDates=Intersection[paths[[1, All, 1]], paths[[2, All, 1]]];</code></li>
<li><code>fakeDatepath=Transpose[{commonDates, Table[-1, {Length[commonDates]}]}];</code></li>
<li><code>shortPaths=Map[Intersection[#, fakeDatepath,
SameTest -> (#1[[1]] == #2[[1]] &)] &, paths];</code></li>
<li><code>Map[TemporalData[#]&, shortPaths]</code></li>
</ol>
<p>My production code works with an arbitrary number of TimeSeries simultaneously, using a generalized form of this approach.</p>
|
119,904 | <p>It is possible to do simple math between TemporalSeries objects. For example</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* works fine *)
</code></pre>
<p>This returns a TemporalData object with the path <code>{{3660595200, 4}, {3660768000, 6.51}}</code>.</p>
<p>This method does not work however if the objects being manipulated have differing dimensions.</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2012, 1, 3}, 1.9}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* ka-boom *)
</code></pre>
<p>With the data I'm using, one series or the other may be missing data at various time stamps, and it would be acceptable to either interpolate for those missing points or to simply omit them from the computation and the output.</p>
<p>I've been dealing with this problem in several steps,</p>
<ol>
<li>Make a list of the dates that all the series have in common with
<code>intersectionDates=Apply[Intersection, Map[#["Path"][[All, 1]] &,
listOfTD]]</code></li>
<li>Extract the points for only those dates with <code>alignedData=Map[Select[#["Path"], MemberQ[intersectionDates, #[[1]]] &] &, listOfTD]</code></li>
<li>And finally turn the output back into TemporalData with <code>Map[TimeSeries[#] &, alignedData]</code></li>
</ol>
<p>In practice I have additional code to remove duplicate dates, if any exist, to convert EventSeries back to EventSeries, and some other bells and whistles, not shown above.</p>
<p>The problem is, this method is unusably slow for large datasets. With a few hundred points, it works fine, but with 5000+, it becomes intolerable.</p>
<p>I welcome any suggestions for approaches that would allow me to complete simple math between TemporalData objects of differing dimensions much faster.</p>
| Jonathan Kinlay | 36,994 | <p>One of the regular tasks in statistical arbitrage is to compute correlations between a large universe of stocks, such as the S&P500 index members, for example. Mathematica/WL has some very nice features for obtaining financial data and manipulating time series. And of course it offers all the commonly required statistical functions, including correlation. But the WL Correlation function is missing one vital feature - the ability to handle data series of unequal length. This arises, of course, because stock data series do not all share a common start date and (very occasionally) omit data for dates in the middle of the series. This creates an issue for the Correlation function, which can only handle series of equal length.
The usual way of handling this is to apply pairwise correlation, in which each pair of data vectors is truncated to include only the dates common to both series. Of course this can easily be done in WL, but it is very inefficient.</p>
<p>Let's take an example. We start with the last 10 symbols in the S&P 500 index membership:</p>
<pre><code> In[1]:= tickers = Take[FinancialData["^GSPC", "Members"], -10]
Out[1]= {"NASDAQ:WYNN", "NASDAQ:XEL", "NYSE:XRX", "NASDAQ:XLNX", \
"NYSE:XYL", "NYSE:YUM", "NASDAQ:ZBRA", "NYSE:ZBH", "NASDAQ:ZION", \
"NYSE:ZTS"}
</code></pre>
<p>Next we obtain the returns series for these stocks, going back 2753 business days. By default, FinancialData retrieves the data as TimeSeries objects. This is very elegant, but slows the processing of the data, as we shall see.</p>
<pre><code>tsStocks =
FinancialData[tickers, "Return",
DatePlus[Today, {-2753, "BusinessDay"}]];
</code></pre>
<p>Not all the series contain the same number of date-return pairs. So using Correlation is out of the question:</p>
<pre><code>In[282]:= Table[Length@tsStocks[[i]]["Values"], {i, 10}]
Out[282]= {2762, 2762, 2762, 2762, 2388, 2762, 2762, 2762, 2762, 2060}
</code></pre>
<p>Since Correlation doesn't offer a pairwise option, we have to create the required functionality in WL. Let's start with:</p>
<pre><code>PairsCorrelation[ts_] := Module[{td, correl},
If[ts[[1]]["PathLength"] == ts[[2]]["PathLength"],
correl = Correlation @@ ts,
td = TimeSeriesResample[ts, "Intersection"];
correl = Correlation @@ td[[All, All, 2]]]];
</code></pre>
<p>We first check to see if the two arguments are of equal length, in which case we can Apply the Correlation function directly. If not, we use the "Intersection" option of the TimeSeriesResample function to reduce the series to a set of common observation dates. The function is designed to be deployed using parallelization, as follows:</p>
<pre><code>PairsListCorrelation[tslist_] := Module[{pairs, i, td, c, correl = {}},
pairs = Subsets[Range[Length@tslist], {2}];
correl =
ParallelTable[
PairsCorrelation[tslist[[pairs[[i]]]]], {i, 1, Length@pairs}];
{correl, pairs}]
</code></pre>
<p>The Subsets function is used to generate a non-duplicative list of index pairs and then a correlation table is built in parallel using PairsCorrelation function on each pair of series.</p>
<p>When we apply the function to the ten stock time series, we get the following results:</p>
<pre><code>In[263]:= AbsoluteTiming[{correl, pairs} =
PairsListCorrelation[tsStocks];]
Out[263]= {13.4791, Null}
In[270]:= Length@correl
Out[270]= 45
In[284]:= Through[{Mean, Median, Min, Max}[correl]]
Out[284]= {0.381958, 0.396429, 0.200828, 0.536383}
</code></pre>
<p>So far, so good. But look again at the timing of the PairsListCorrelation function. It takes 13.5 seconds to calculate the 45 correlation coefficients for 10 series. To carry out an equivalent exercise for the entire S&P 500 universe would entail computing 124,750 coefficients, taking approximately 10.5 hours! This is far too slow to be practically useful in the given context.</p>
<p>Some speed improvement is achievable by retrieving the stock returns data in legacy (i.e. list rather than time series) format, but it still takes around 10 seconds to calculate the coefficients for our 10 stocks. Perhaps further speed improvements are possible through other means (e.g. compilation), but what is really required is a core language function to handle series of unequal length (or a Pairwise method for the Correlation function).</p>
<p>For comparison, I can produce the correlation coefficients for all 500 S&P member stocks in under 3 seconds using the 'Rows', 'pairwise' options of the equivalent correlation function in another scientific computing language.</p>
|
119,904 | <p>It is possible to do simple math between TemporalSeries objects. For example</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* works fine *)
</code></pre>
<p>This returns a TemporalData object with the path <code>{{3660595200, 4}, {3660768000, 6.51}}</code>.</p>
<p>This method does not work however if the objects being manipulated have differing dimensions.</p>
<pre><code>es=EventSeries[{{{2016, 1, 1}, 2}, {{2012, 1, 3}, 1.9}, {{2016, 1, 3}, 2.1}}];
td=TemporalData[{{{2016, 1, 1}, 2}, {{2016, 1, 3}, 3.1}}];
es*td (* ka-boom *)
</code></pre>
<p>With the data I'm using, one series or the other may be missing data at various time stamps, and it would be acceptable to either interpolate for those missing points or to simply omit them from the computation and the output.</p>
<p>I've been dealing with this problem in several steps,</p>
<ol>
<li>Make a list of the dates that all the series have in common with
<code>intersectionDates=Apply[Intersection, Map[#["Path"][[All, 1]] &,
listOfTD]]</code></li>
<li>Extract the points for only those dates with <code>alignedData=Map[Select[#["Path"], MemberQ[intersectionDates, #[[1]]] &] &, listOfTD]</code></li>
<li>And finally turn the output back into TemporalData with <code>Map[TimeSeries[#] &, alignedData]</code></li>
</ol>
<p>In practice I have additional code to remove duplicate dates, if any exist, to convert EventSeries back to EventSeries, and some other bells and whistles, not shown above.</p>
<p>The problem is, this method is unusably slow for large datasets. With a few hundred points, it works fine, but with 5000+, it becomes intolerable.</p>
<p>I welcome any suggestions for approaches that would allow me to complete simple math between TemporalData objects of differing dimensions much faster.</p>
| sakra | 68 | <p>For some applications, an alternative to using <code>TimeSeries</code> expressions is using associations which support both a fast <code>KeyIntersection</code> and <code>KeyUnion</code> operation.</p>
<p>The example by Jonathan Kinlay from another answer to this question can be rewritten using associations in the following way:</p>
<p>First retrieve market data as a list of associations by using the <code>"Legacy"</code> method:</p>
<pre><code>tickers = Take[FinancialData["^GSPC", "Members"], -10];
tickerData = FinancialData[tickers, "Return",
DatePlus[Today, {-2753, "BusinessDay"}], Method -> "Legacy"];
assocStocks = Apply[Rule, tickerData, {2}] // Map[Association];
</code></pre>
<p>Then compute the correlations for unique pairs of stocks with <code>KeyIntersection</code>:</p>
<pre><code>pairs = Subsets[Range@Length@assocStocks, {2}];
correl = Map[Correlation @@ Values@KeyIntersection[assocStocks[[#]]] &, pairs];
</code></pre>
<p>This reduces the required time to compute the correlations to a fraction of a second on my machine.</p>
|
85,374 | <p>I'm currently using <code>WolframLibraryData::Message</code> to generate messages from a library function, like this:</p>
<pre><code>Needs["CCompilerDriver`"]
src = "
#include \"WolframLibrary.h\"
DLLEXPORT mint WolframLibrary_getVersion() {return WolframLibraryVersion;}
DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {return 0;}
DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {}
DLLEXPORT int myFunc(WolframLibraryData libData, mint argc, MArgument* args, MArgument result)
{
libData->Message(\"Here's my message\");
MArgument_setReal(result,1.1);
return LIBRARY_NO_ERROR;
}
";
lib = CreateLibrary[src, "mylib"];
myFunc = LibraryFunctionLoad[lib, "myFunc", {}, Real];
</code></pre>
<p>Now the problem can be seen if <code>myFunc[]</code> is called. I get this result:</p>
<blockquote>
<p>LibraryFunction::Here's my message: -- Message text not found -- >></p>
<p>1.1</p>
</blockquote>
<p>The problem is this <code>-- Message text not found --</code> part. Apparently I'm generating the message in a wrong way. So how should I do instead? How do I fill this "message text" to make it look like messages from normal Mathematica functions?</p>
| Simon Woods | 862 | <p>You have not defined any message text in <em>Mathematica</em>.</p>
<p>The text you supply in the C code is the message tag, e.g</p>
<pre><code>libData->Message("myerror");
</code></pre>
<p>Then you need to define the actual message content in <em>Mathematica</em>:</p>
<pre><code>LibraryFunction::myerror = "Here's my message"
</code></pre>
<p>The relevant documentation page is <a href="http://reference.wolfram.com/language/LibraryLink/ref/callback/Message.html">here</a>.</p>
|
693,640 | <p>Assume $f(x_{1},x_{2},\cdots,x_{n})$ is a second degree real polynomial with $n(n\ge 2)$ variables.
Let $S$ be the set of minimum and maximum points of $f(x_{1},x_{2},\cdots,x_{n})$.
In other words:
$$S=\{(b_{1},b_{2},\cdots,b_{n})\in R^n| f(x_{1},x_{2},\cdots,x_{n})\le f(b_{1},b_{2},\cdots,b_{n}),\forall (x_{1},x_{2},\cdots,x_{n})\in R^n\}\bigcup \{(b_{1},b_{2},\cdots,b_{n})\in R^n| f(x_{1},x_{2},\cdots,x_{n})\ge f(b_{1},b_{2},\cdots,b_{n}),\forall (x_{1},x_{2},\cdots,x_{n})\in R^n\}$$</p>

<p>Assume $f(x_{1},x_{2},\cdots,x_{n})$ is a symmetric polynomial in $x_{1},x_{2},\cdots,x_{n}$, and $S$ is a finite non-empty set.</p>

<p>show that:</p>

<p>there exists $b\in R$ such that
$$S=\{(b,b,\cdots,b)\}$$</p>
<p>My idea: let
$$f(x_{1},x_{2},\cdots,x_{n})=x^2_{1}+x^2_{2}+\cdots+x^2_{n},$$ which is a symmetric polynomial; for it
$$S=\{(0,0,\cdots,0)\},$$ which has the required form.</p>

<p>But I can't prove the general statement. Thank you.</p>
| Einar Rødland | 37,974 | <p>There is actually a simple reason why this is true. The set $S$ of extremal points of a second degree polynomial must be an affine subspace. An affine subspace which is invariant under permutations of the coordinates must contain a point on the form $(p,\ldots,p)$.</p>
<p>To see this, assume we have two distinct points
$P=(p_1,\ldots,p_n)$ and $Q=(q_1,\ldots,q_n)$ both in $S$. Let $l$ be the line through them. The restriction of $f$ to the line $l$ is now a second degree polynomial on a line (i.e. in one variable) with two extremal points, which is only possible if it is constant.</p>
<p>To write it out more pedantically: the second degree polynomial $g(t)=f(tQ+(1-t)P)=a+bt+ct^2$ cannot have two different extrema unless $g(t)$ is constant.</p>
<p>Thus, if $P,Q\in S$, the line through $P$ and $Q$ must be contained in $S$. More generally, for any set $P_1,\ldots,P_k\in S$, $S$ must also contain the affine space spanned by these points:
$$\left\{\sum_i t_iP_i\middle| \sum_i t_i=1\right\}\subset S.$$</p>
<p>If $P=(p_1,\ldots,p_n)\in S$ where $S$ is an affine space and symmetric under permutations of the $p_i$, then the mean of all permutations of $P$ is $\bar P=(\bar p,\ldots,\bar p)$ where $\bar p=(p_1+\cdots+p_n)/n$. The mean $\bar P$ must also be in $S$ since it is in the affine space spanned by the permutations of $P$.</p>
<p>Moreover, either the point $(p,\ldots,p)\in S$ is unique, or the whole line $\{(p,\ldots,p)\mid p\in\mathbb R\}$ is in $S$.</p>
|
2,756,139 | <p>Let $I=[0,1]$ and $$X = \prod_{i \in I}^{} \mathbb{R}$$
That is, an element of $X$ is a function $f:I→\mathbb{R}$.</p>
<p>Prove that a sequence $\{f_n\}_n ⊆ X$ of real functions converges to some $f ∈ X$ in the product topology on $X$, if and only if it converges pointwise, i.e. for every $x ∈ I$, $f_n(x) → f(x)$ in the usual sense of convergence of sequences.</p>
<hr>
<p>I don't understand how a product indexed by an uncountably infinite set is like. I'm guessing a single element $f$ of $X$ is an uncountable set $(y_i)_{i \in I}$ in itself. But then how is $f(x)$ different from any other $f$? Is it the cartesian product indexed by $[0,x]$?</p>
<p>Note that I included the "prove" question just to show the context, but it isn't what I find problematic. It's probably easy enough if I get how the product works.</p>
| Henno Brandsma | 4,280 | <p>An element of $X = \prod_{i \in [0,1]} \mathbb{R}$ is just a function from $[0,1]$ to $\mathbb{R}$, i.e. a "choice" of a point in $\mathbb{R}$ for each $x \in [0,1]$. It's the same whether we write $(x_i)_{i \in [0,1]}$ or just the function $f$ that sends $i$ to $x_i$. Just writing $f$ is often easier in notations.</p>
<p>Likewise it's the same whether I write $(x,y,z) \in \mathbb{R}^3$ or the function $f: \{0,1,2\} \to \mathbb{R}$ defined by $f(0) = x, f(1) = y ,f(2) = z$.</p>
|
1,493,965 | <p>We're given the power series $$ \sum_{j=1}^{\infty} \frac{j!}{j^j}z^j$$</p>
<p>and are asked to find radius of convergence R. I know the formula $R=1/\limsup(a_n ^{1/n})$, which leads me to compute $\lim \frac{j!^{1/j}}{j}$, and then I'm stuck.</p>
<p>The solution manual calculates R by $1/\lim|\frac{a_{j+1}}{a_j}|$, but I can't figure out the motivation for that formula.</p>
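<p>As a numerical sanity check (an addition of mine, standard library only): for $a_j=j!/j^j$ the factorials telescope, $a_{j+1}/a_j=(j/(j+1))^j\to 1/e$, so the ratio formula gives $R=e$.</p>

```python
import math

# For a_j = j!/j^j the ratio a_{j+1}/a_j simplifies to (j/(j+1))^j,
# which tends to 1/e, so the radius of convergence is R = e.
def ratio(j):
    return (j / (j + 1)) ** j

R = 1 / ratio(100_000)  # numerically close to e
```

<p>The motivation for the manual's formula is just the ratio test for power series: whenever $\lim|a_{j+1}/a_j|$ exists it equals $\limsup|a_j|^{1/j}$, so both formulas give the same radius; here the ratio is simply far easier to compute. (Stirling's approximation gives $(j!)^{1/j}/j\to 1/e$ directly as well.)</p>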
| Julián Aguirre | 4,791 | <p>$Y$ is a normed space with norm defined as $\|x\|=\sup_{|s-t_0|\le\delta}|x(s)|$. From here we get a distance $d(x,y)=\|x-y\|$. What you have shown is
$$
\|\Delta T(x)\|=_{\text{def}}\|T(x+\Delta x)-T( x)\|\le q\|\Delta x\|=_{\text{def}}q\|(x+\Delta x)-x\|.
$$
Call $y=x+\Delta x$. The above inequality is now
$$
\|T(y)-T(x)\|\le q\|y-x\|.
$$
This is the definition of a contraction in $Y$.</p>
|
3,693,196 | <p>This is known as the factor formula. It is used for the addition of sin functions. I don't understand how the two are equal though. How would you get to the right side of the equation using the left?</p>
<p><span class="math-container">$$\sin(s) + \sin(t) = 2 \sin\left(\frac{s+t}{2}\right) \cos \left(\frac{s-t}{2}\right)$$</span></p>
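<p>For what it's worth: writing $s=\frac{s+t}2+\frac{s-t}2$ and $t=\frac{s+t}2-\frac{s-t}2$, expanding both with $\sin(A\pm B)=\sin A\cos B\pm\cos A\sin B$ and adding makes the $\cos A\sin B$ terms cancel, which gives the right side. The identity also checks out numerically (a quick script of mine):</p>

```python
import math
import random

# Spot-check sin(s) + sin(t) == 2 sin((s+t)/2) cos((s-t)/2) at random points.
random.seed(0)
max_err = 0.0
for _ in range(1000):
    s = random.uniform(-10.0, 10.0)
    t = random.uniform(-10.0, 10.0)
    lhs = math.sin(s) + math.sin(t)
    rhs = 2 * math.sin((s + t) / 2) * math.cos((s - t) / 2)
    max_err = max(max_err, abs(lhs - rhs))
```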
| Anas A. Ibrahim | 650,028 | <p><span class="math-container">$$P(A \text{ given that } B) = \frac{P(A \text{ and } B)}{P(B)}$$</span>
where <span class="math-container">$A,B$</span> are events; this is the definition of <strong>conditional probability</strong>, the identity that underlies Bayes' theorem. </p>
<p>Just take <span class="math-container">$A$</span> as the event that the train will arrive at stop two on time and event <span class="math-container">$B$</span> that it left stop one on time.
So, your required answer is <span class="math-container">$0.8/0.9=8/9$</span></p>
|
2,664,370 | <p>I don't know how to start, i've noticed that it can be written as $$\lim_{x\to 0} \frac{2^x+5^x-4^x-3^x}{5^x+4^x-3^x-2^x}=\lim_{x\to 0} \frac{(5^x-3^x)+(2^x-4^x)}{(5^x-3^x)-(2^x-4^x)}$$</p>
| Michael Rozenberg | 190,319 | <p>$$\lim_{x\rightarrow0}\frac{2^x+5^x-4^x-3^x}{5^x+4^x-3^x-2^x}=\lim_{x\rightarrow0}\frac{\frac{2^x-1}{x}+\frac{5^x-1}{x}-\frac{4^x-1}{x}-\frac{3^x-1}{x}}{\frac{5^x-1}{x}+\frac{4^x-1}{x}-\frac{3^x-1}{x}-\frac{2^x-1}{x}}=$$
$$=\frac{\ln2+\ln5-\ln4-\ln3}{\ln5+\ln4-\ln3-\ln2}.$$
I used $$\lim_{x\rightarrow0}\frac{a^x-1}{x}=\ln{a}\lim_{x\rightarrow0}\frac{e^{x\ln{a}}-1}{x\ln{a}}=\ln{a}$$ for $a>0$.</p>
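<p>A numerical cross-check of this closed form (my addition, not part of the original answer): evaluate the expression near $x=0$ and compare.</p>

```python
import math

def f(x):
    return (2**x + 5**x - 4**x - 3**x) / (5**x + 4**x - 3**x - 2**x)

# Closed form from the answer above: (ln2 + ln5 - ln4 - ln3)/(ln5 + ln4 - ln3 - ln2).
closed_form = (math.log(2) + math.log(5) - math.log(4) - math.log(3)) / \
              (math.log(5) + math.log(4) - math.log(3) - math.log(2))
numeric = f(1e-6)  # evaluate near x = 0
```

<p>The limit is negative, roughly $-0.151$, since it equals $\ln(5/6)/\ln(10/3)$.</p>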
|
377,354 | <p>I'm referencing this page: <a href="http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/cfINTRO.html#sqrtalgsalg" rel="nofollow">An Introduction to the Continued Fraction</a>, where they explain the algebraic method of solving the square root of $14$.</p>
<p>$$\sqrt{14} = 3 + \frac1x$$</p>
<p>So, $x_0 = 3$, Solving for $x$, we get</p>
<p>$$x = \frac{\sqrt{14} + 3}5$$</p>
<p>However, in the next step, how do we get the whole number $x_1$ = 1?</p>
<p>$$\frac{\sqrt{14} + 3}5 = 1 + \frac1x$$</p>
<p>My understanding is we would substitute for $x$ in the original equation for $\sqrt{14}$ whereas $$\sqrt{14} = 3 + \frac1{\frac{\sqrt{14} + 3}5}$$</p>
<p>Then substitute the $\sqrt{14}$ again here for $x = \frac{\sqrt{14} + 3}5$ to get the $x_1$ of the continued fraction? Am I just getting the algebra at this point wrong or am I botching steps? </p>
| TonyK | 1,508 | <p>It's really easy to compute $\left\lfloor \frac{\sqrt a + b}{c}\right\rfloor$ for integers $a,b,c$. Just use the fact that</p>
<p>$$\left\lfloor \frac{r + b}{c}\right\rfloor = \left\lfloor \frac{\lfloor r \rfloor + b}{c}\right\rfloor$$</p>
<p>for real $r$ and integers $b,c$.</p>
<p>Here, $\lfloor\sqrt{14}\rfloor = 3$, so $x = \left\lfloor\frac{6}{5}\right\rfloor = 1$.</p>
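<p>This floor identity is exactly what makes the continued-fraction algorithm for $\sqrt n$ run in pure integer arithmetic. A small sketch of mine (it assumes $n$ is not a perfect square):</p>

```python
import math

def sqrt_cf_terms(n, count):
    """First `count` continued-fraction terms of sqrt(n), using only integers.

    State is (sqrt(n) + p)/q; the key step uses the floor identity above:
    floor((sqrt(n) + p)/q) = floor((isqrt(n) + p)/q).
    """
    a0 = math.isqrt(n)
    terms = [a0]
    p, q = a0, n - a0 * a0
    while len(terms) < count:
        a = (a0 + p) // q
        terms.append(a)
        p = a * q - p
        q = (n - p * p) // q
    return terms

terms = sqrt_cf_terms(14, 9)
```

<p>For $n=14$ this produces $[3;1,2,1,6,1,2,1,6,\dots]$, matching $x_1=1$ above and the known periodic expansion $\sqrt{14}=[3;\overline{1,2,1,6}]$.</p>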
|
1,253,475 | <p>I'm trying to prove the following statement: if E is a subspace of V, then dim E + dim $E^{\perp}$ = dim V. I know this is true because when these two subspaces are added, they are equal to V, but I'm not sure how to rigorously say this, could I get a little help?</p>
| mookid | 131,738 | <p>You need to prove that $$
V = E + E^\perp
$$ and $$
E \cap E^\perp = \{0\}
$$
then it follows from the dimension formula $\dim(E+F)=\dim E+\dim F-\dim(E\cap F)$.</p>
|
302,179 | <p>The question I am working on is:</p>
<blockquote>
<p>"Use a direct proof to show that every odd integer is the difference of two squares."</p>
</blockquote>
<p>Proof:</p>
<p>Let n be an odd integer: $n = 2k + 1$, where $k \in Z$</p>
<p>Let the difference of two squares be $a^2-b^2$, where $a,b \in Z$.</p>
<p>Hence, $n=2k+1=a^2-b^2$...</p>
<p>As you can see, this a dead-end. Appealing to the answer key, I found that they let the difference of two different squares be, $(k+1)^2-k^2$. I understand their use of $k$--$k$ is one number, and $k+1$ is a different number--;however, why did they choose to add $1$? Why couldn't we have added $2$?</p>
| AndreasT | 53,739 | <p>Here you have another approach...</p>
<p>Note that
$$
\sum_{i=1}^n i = \frac{n(n+1)}2
$$
so that the sum of the first $n$ odd naturals is
$$
\sum_{i=1}^n (2i-1) = 2\left(\sum_{i=1}^n i\right) - n = n(n+1)-n = n^2
$$
We have shown that the sum of the first $n$ odd naturals is $n^2$. The $n$-th odd natural is, trivially, equal to the sum of the first $n$ minus the sum of the first $n-1$:
$$
2n-1 = \sum_{i=1}^n (2i-1) - \sum_{i=1}^{n-1} (2i-1)
$$
and from what we found, we can write it as
$$
2n-1 = n^2 - (n-1)^2
$$</p>
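<p>A brute-force check of the identity, plus the answer to "why add $1$ and not $2$": $(k+2)^2-k^2=4k+4$ is always even, so adding $2$ can never produce an odd number (a quick check of mine):</p>

```python
# Every odd integer 2k+1 equals (k+1)^2 - k^2; adding 2 instead cannot work,
# since (k+2)^2 - k^2 = 4k + 4 is always even.
ok_plus1 = all(2 * k + 1 == (k + 1) ** 2 - k ** 2 for k in range(-1000, 1001))
plus2_even = all(((k + 2) ** 2 - k ** 2) % 2 == 0 for k in range(-1000, 1001))
```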
|
2,260,051 | <p>Let $M$ and $N$ be two square matrices of same order, and $M^2 = N^4$.</p>
<p>Can any such $M,N$ exist when the following relations do not hold?</p>
<ol>
<li><p>$M = N^2$, and </p></li>
<li><p>$M = -N^2$ ?</p></li>
</ol>
| badjohn | 332,763 | <p>There are plenty of nilpotent matrices. So, pick $M$ of degree 2 and $N$ of degree 4.</p>
<p>Or look for roots of the identity matrix. $M$ could be any reflection and $N$ a rotation by 90 degrees.</p>
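<p>A concrete instance of the nilpotent suggestion (my own example, plain Python, no libraries): take $N$ the $4\times4$ upper shift matrix, so $N^4=0$, and $M=E_{1,4}$, so $M^2=0=N^4$ while $M\neq N^2$ and $M\neq -N^2$.</p>

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mpow(A, e):
    # A^e by repeated multiplication, starting from the identity
    R = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(e):
        R = matmul(R, A)
    return R

N = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]  # shift matrix
M = [[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]  # E_{1,4}

Z = [[0] * 4 for _ in range(4)]
N2 = mpow(N, 2)
negN2 = [[-x for x in row] for row in N2]
```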
|
4,326,547 | <p>I've solved linear ODEs before. This however is something completely new to me. I want to solve it without using approximations or anything.</p>
<p><span class="math-container">$s''( t) s( t) =( s'( t))^{2} +B( s( t))^{2} s'( t) -g\cdot s( t) s'( t)$</span></p>
<p>These are the equations I started with</p>
<p><span class="math-container">$ \begin{array}{l}
s'( t) =-Bs( t) i( t)\\
r'( t) =g\cdot i( t)\\
i'( t) =i( t)( Bs( t) -g)
\end{array}$</span></p>
<p>The reason I'm solving this particular equation is because I want to solve for all of the functions <span class="math-container">$(i(t),r(t),s(t))$</span> from above.</p>
<p>I just figured <span class="math-container">$s(t)$</span> might be the place to start. I really don't have any clue where to start here. Any help would be awesome! Thanks!</p>
<p>--edit--
I made a mistake above. In order to fix it I changed <span class="math-container">$i'( t) =i( t)( Bs( t) -1)$</span> to this <span class="math-container">$i'( t) =i( t)( Bs( t) -g)$</span>. My bad...</p>
<p>---Edit---</p>
<p>I might have a step towards the answer.</p>
<p><span class="math-container">$ \begin{array}{l}
v( s) =s'( t)\\
\Longrightarrow \\
s''( t) =\frac{dv( s)}{dt} =\frac{dv}{ds}\frac{ds}{dt} =v'( s) \cdot \frac{ds}{dt} =v'( s) \cdot v( s)\\
\Longrightarrow \\
v'( s) \cdot v( s) \cdot s=v( s)^{2} +Bs^{2} v( s) -g\cdot s\cdot v( s)\\
\Longrightarrow \\
v'( s) \cdot s=v( s) +Bs^{2} -g\cdot s
\end{array}$</span></p>
<p>Wolfram Alpha says that the solution to this differential equation is</p>
<p><span class="math-container">$v(s)=Bs^2+c_1s-gsln(s)$</span></p>
<p>So that means that</p>
<p><span class="math-container">$s'(t)=Bs(t)^2+c_1s(t)-gs(t)\ln(s(t))$</span></p>
<p>This is certainly better than before</p>
| Highvoltagemath | 664,767 | <p>Me and my friend got it! (This was also exactly what Sal in the comments suggested doing)
<span class="math-container">$ \begin{array}{l}
v( s) =s'( t) \Longrightarrow \\
\frac{dv( s)}{dt} =\frac{dv( s)}{ds} \cdot \frac{ds}{dt} =v'( s) \cdot v( s) \ \text{literally the chain rule}\\
\Longrightarrow \\
v'( s) \cdot v( s) \cdot s=v( s)^{2} +Bs^{2} v( s) -gv( s) s\\
\\
\therefore v'( s) s=v( s) +Bs^{2} -gs
\end{array}$</span></p>
<p>Next find an integrating factor.</p>
<p><span class="math-container">$ \begin{array}{l}
v'( s) -\frac{v( s)}{s} =Bs-g\\
integrating\ factor\ I=e^{-\int \frac{1}{s} ds} =\frac{1}{s}
\end{array}$</span></p>
<p>Multiply both sides of the equation by the integrating factor.</p>
<p><span class="math-container">$Iv'( s) -I\frac{1}{s} v( s) =I( Bs-g)$</span></p>
<p>The really interesting thing that makes this work is <span class="math-container">$\frac{d} {ds}(Iv(s))=Iv'(s)-I\frac{1}{s}v(s)$</span></p>
<p><span class="math-container">$ \begin{array}{l}
\therefore \frac{d( Iv( s))}{ds} =I( Bs-g)\\
\Longrightarrow \\
Iv( s) =\int I( Bs-g) ds=\int \left( B-\frac{g}{s}\right) ds=Bs-g\ln( s) +c_{1}
\end{array}$</span></p>
<p>So we get</p>
<p><span class="math-container">$v( s) =Bs^{2} +c_{1} s-gsln( s) \Longrightarrow
s'( t) =Bs( t)^{2} +c_{1} s( t) -gs( t) ln( s( t))$</span></p>
<p>Next just split the derivative and integrate</p>
<p><span class="math-container">$1dt=\frac{1}{Bs( t)^{2} +c_{1} s( t) -gs( t) ln( s( t))} ds \Longrightarrow t+c_{2} =\int \frac{1}{Bs( t)^{2} +c_{1} s( t) -gs( t) ln( s( t))} ds$</span></p>
<p>Solving for the constants we get</p>
<p><span class="math-container">$ \begin{array}{l}
at\ t=0\\
s'( 0) =Bs( 0)^{2} +c_{1} s( 0) -gs( 0) ln( s( 0))\\
and\ we\ know\ that\ s'( t) =-Bs( t) i( t)\\
\\
s'( 0) =-Bs( 0) i( 0)\\
\Longrightarrow \\
-Bs_{0} i_{0} =Bs_{0}^{2} +c_{1} s_{0} -gs_{0} ln( s_{0})\\
\Longrightarrow \\
c_{1} =gln( s_{0}) -B( s_{0} +i_{0})\\
c_{2} =\int _{0}^{s( 0)}\frac{1}{B\omega ^{2} +c_{1} \omega -g\omega ln( \omega )} d\omega =\int _{0}^{s_{0}}\frac{1}{B\omega ^{2} +( gln( s_{0}) -B( s_{0} +i_{0})) \omega -g\omega ln( \omega )} d\omega \\
\\
\therefore t=\\
\int\limits _{0}^{s( t)}\frac{1}{B\omega ^{2} +( gln( s_{0}) -B( s_{0} +i_{0})) \omega -g\omega ln( \omega )} d\omega -\int\limits _{0}^{s_{0}}\frac{1}{B\omega ^{2} +( gln( s_{0}) -B( s_{0} +i_{0})) \omega -g\omega ln( \omega )} d\omega
\end{array}$</span></p>
<p>We can combine the integrals because of their limits</p>
<p><span class="math-container">$t=\int _{s_{0}}^{s( t)}\frac{1}{B\omega ^{2} +( gln( s_{0}) -B[ s_{0} +i_{0}]) \omega -g\omega ln( \omega )} d\omega $</span></p>
<p>Desmos graphs this in its current form, but if you wanted <span class="math-container">$s(t)=$</span>something then the Lagrange Inversion theorem probably works here.
<a href="https://i.stack.imgur.com/VJHzF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VJHzF.png" alt="enter image description here" /></a></p>
<p>Edit... I was actually able to graph all three functions for the SIR model in desmos (green equation is s(t), red i(t) blue r(t)) <a href="https://www.desmos.com/calculator/tpqrs9kcnc" rel="nofollow noreferrer">desmos graph for SIR model</a>
<a href="https://i.stack.imgur.com/QFBwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFBwj.png" alt="enter image description here" /></a></p>
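<p>As a sanity check of this reduction (a numerical sketch of mine, with arbitrarily chosen demo parameters and a small hand-rolled RK4 integrator): differentiating $g\ln s - B(s+i)$ along the SIR flow and substituting $s'=-Bsi$, $i'=i(Bs-g)$ gives $0$, so that quantity is conserved; it is what pins down the single integration constant from the initial data.</p>

```python
import math

# Demo parameters (assumptions of this check, not from the original post).
B, g = 0.5, 0.2
s, i = 0.99, 0.01

def rhs(s, i):
    # SIR right-hand sides for s and i (r decouples).
    return -B * s * i, i * (B * s - g)

def rk4_step(s, i, h):
    k1 = rhs(s, i)
    k2 = rhs(s + h / 2 * k1[0], i + h / 2 * k1[1])
    k3 = rhs(s + h / 2 * k2[0], i + h / 2 * k2[1])
    k4 = rhs(s + h * k3[0], i + h * k3[1])
    return (s + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            i + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

Q0 = g * math.log(s) - B * (s + i)   # conserved quantity at t = 0
for _ in range(5000):                # integrate to t = 50
    s, i = rk4_step(s, i, 0.01)
Q1 = g * math.log(s) - B * (s + i)
drift = abs(Q1 - Q0)
```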
|
3,853,509 | <blockquote>
<p>Prove that <span class="math-container">$$\sum_{cyc}\frac{a^2}{a+2b^2}\ge 1$$</span> holds for all positive reals <span class="math-container">$a,b,c$</span> satisfying <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span> or <span class="math-container">$ab+bc+ca=3$</span>.</p>
</blockquote>
<hr />
<p><strong>Background</strong> Taking <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span>: this was left as an exercise to the reader in the book 'Secrets in Inequalities', under the section on the 'Cauchy reverse' technique, i.e. the sum is rewritten as <span class="math-container">$$\sum_{cyc}\left(a- \frac{2ab^2}{a+2b^2}\right)\ge \sum_{cyc}a-\frac{2}{3}\sum_{cyc}{(ab)}^{2/3},$$</span> which is true by AM-GM (<span class="math-container">$a+b^2+b^2\ge 3{(ab^4)}^{1/3}$</span>).</p>
<p>By QM-AM inequality <span class="math-container">$$\sum_{cyc}a\ge \frac{{ \left(\sum \sqrt{a} \right)}^2}{3}=3$$</span>.</p>
<p>we are left to prove that <span class="math-container">$$\sum_{cyc}{(ab)}^{2/3}\le 3.$$</span> But I am not able to prove this. Even the case when <span class="math-container">$ab+bc+ca=3$</span> seems difficult to me.</p>
<p>Please note I am looking for a solution using this Cauchy reverse technique and AM-GM only.</p>
| José Carlos Santos | 446,262 | <p>Note that<span class="math-container">\begin{align}f(x)&=\frac1{2+3x^2}\\&=\frac12\times\frac1{1+\frac32x^2}\\&=\frac12\left(1-\frac32x^2+\frac{3^2}{2^2}x^4-\frac{3^3}{2^3}x^6+\cdots\right)\text{ if }\left|\frac32x^2\right|<1\\&=\frac12-\frac3{2^2}x^2+\frac{3^2}{2^3}x^4-\frac{3^3}{2^4}x^6+\cdots\end{align}</span>It is now easy to see that the radius of convergence is <span class="math-container">$\sqrt{\frac23}$</span>.</p>
|
3,540,613 | <p>The integral to solve:</p>
<p><span class="math-container">$$
\int{5^{sin(x)}cos(x)dx}
$$</span></p>
<p>I used long computations with integration by parts, but I could not finish:</p>
<p><span class="math-container">$$
\int{5^{sin(x)}cos(x)dx} = cos(x)\frac{5^{sin(x)}}{ln(5)}+\frac{1}{ln(5)}\Bigg[ \frac{5^{sin(x)}}{ln(5)}-\frac{1}{ln(5)}\int{5^{sin(x)}cos(x)dx} \Bigg]
$$</span></p>
| user577215664 | 475,762 | <p><span class="math-container">$$\int{5^{sin(x)}cos(x)dx} = \cos(x)\frac{5^{sin(x)}}{ln(5)}+.........$$</span></p>
<p>You made a confusion with:</p>
<p><span class="math-container">$$\int a^x\,dx =\int e^{x \ln a}\,dx=\dfrac {a^x}{\ln a}+C\qquad (a>0,\ a\neq 1)$$</span>
You don't have <span class="math-container">$x$</span> here but a function of <span class="math-container">$x$</span>, namely <span class="math-container">$\sin x$</span>.
Try the substitution <span class="math-container">$u=\sin x \implies du=\cos(x)\, dx$</span>. Then you can apply the formula for <span class="math-container">$\int a^u\,du$</span>:
<span class="math-container">$$ I=\int 5^u\,du= \dfrac {5^u}{\ln 5}+C= \dfrac {5^{\sin x}}{\ln 5}+C$$</span></p>
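<p>A quick numerical check of the antiderivative (my addition): differentiating $5^{\sin x}/\ln 5$ with a central difference should reproduce the integrand $5^{\sin x}\cos x$.</p>

```python
import math

def F(x):
    # candidate antiderivative
    return 5 ** math.sin(x) / math.log(5)

def f(x):
    # integrand
    return 5 ** math.sin(x) * math.cos(x)

h = 1e-6  # central-difference step
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in [0.0, 0.5, 1.0, 2.0, -1.3])
```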
|
4,550,991 | <p>This is question is taken from an early round of a Norwegian national math competition where you have on average 5 minutes to solve each question.</p>
<p>I tried to solve the question by writing every number with four digits and with introductory zeros where it was needed. For example 0001 and 0101 would be the numbers 1 and 101. I then divided the different numbers into four groups based on how many times the digit 1 appeared in the number. I called these group 1,2,3 and 4. 0001 would then belong to group 1 and 0101 to group 2. I first found out in how many ways I could place the digit 1 in each group, then multiplied it by the number of combinations with the other possible eight digits (0,2,3,5,6,7,8,9). This would be the number of combinations for each group and I lastly multiplied it with the number of times 1 appeared in the number. This is done for all of the groups under:</p>
<p><span class="math-container">$\binom{4}{1}\cdot8^3\cdot1$</span> times in group 1</p>
<p><span class="math-container">$\binom{4}{2}\cdot8^2\cdot2$</span> times in group 2</p>
<p><span class="math-container">$\binom{4}{3}\cdot8^1\cdot3$</span> times in group 3</p>
<p><span class="math-container">$\binom{4}{4}\cdot8^0\cdot4$</span> times in group 4</p>
<p>The sum of all these calculations is 2916, which should be the correct number of times 1 appears (I think). Is this calculation/way of thinking correct? And is there a more efficient way to do it?</p>
| Lourrran | 1,104,122 | <p>Consider an alphabet with 9 letters (0,1,2,3,5,6,7,8,9)</p>
<p>Consider all words that you can create with 4 letters in this alphabet : <span class="math-container">$9^4$</span> words.</p>
<p>So <span class="math-container">$4 \times 9^4$</span> letters will be used.</p>
<p>Each letter is used equally, so each letter is used <span class="math-container">$4 \times 9^3=2916$</span> times.</p>
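<p>The counting argument is easy to confirm by brute force over all $9^4=6561$ words (a check of mine, standard library only):</p>

```python
from itertools import product

# Count occurrences of '1' over all 4-letter words on the 9-symbol alphabet
# (the digits 0-9 without 4, as in the answer above).
alphabet = "012356789"
ones = sum(word.count("1")
           for word in map("".join, product(alphabet, repeat=4)))
```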
|
302 | <p>I know that the Fibonacci numbers converge to a ratio of .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?</p>
| John Stillwell | 1,587 | <p>As previous answers have pointed out, both the golden ratio and the Fibonacci numbers go back thousands of years. However, I believe the connection
between the two was discovered around 1730. At that time, Daniel Bernoulli and
Abraham de Moivre independently came up with the generating function
for the Fibonacci numbers, and the resulting formula for the
$n$th Fibonacci number in terms of the golden ratio.</p>
|
3,060,250 | <p>Let <span class="math-container">$f : \mathbb{R} \to \mathbb{R}$</span> be differentiable. Assume that <span class="math-container">$1 \le f(x) \le 2$</span> for <span class="math-container">$x \in \mathbb{R}$</span> and <span class="math-container">$f(0) = 0$</span>. Prove that <span class="math-container">$x \le f(x) \le 2x$</span> for <span class="math-container">$x \ge 0$</span>.</p>
| EuxhenH | 281,807 | <p>I am assuming that you mean <span class="math-container">$1\leq f'(x) \leq 2$</span>, otherwise, as stated by John Omielan, it is not possible for <span class="math-container">$f(0)$</span> to be <span class="math-container">$0$</span>.</p>
<p>Since <span class="math-container">$f$</span> is differentiable on any interval, it is also differentiable on <span class="math-container">$(0, x)$</span>. By the Mean Value Theorem, we know that there exists a <span class="math-container">$c \in (0,x)$</span> such that <span class="math-container">$f'(c)=\dfrac{f(x)-f(0)}{x-0}=\dfrac{f(x)}{x}$</span>.</p>
<p>Since <span class="math-container">$1\leq f'(c) \leq 2$</span>, we have <span class="math-container">$1\leq \dfrac{f(x)}{x} \leq 2$</span>. Multiply both sides by <span class="math-container">$x$</span> to get <span class="math-container">$x \leq f(x) \leq 2x$</span>.</p>
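<p>To see the conclusion in action (a numerical illustration of mine, not part of the proof): take one concrete $f$ with $f(0)=0$ and $1\le f'(x)\le 2$, for example $f(x)=1.5x+0.4\sin x$, so $f'(x)=1.5+0.4\cos x\in[1.1,1.9]$, and check $x\le f(x)\le 2x$ on a grid of $x\ge 0$.</p>

```python
import math

def f(x):
    # f(0) = 0 and f'(x) = 1.5 + 0.4 cos(x) lies in [1.1, 1.9] ⊂ [1, 2]
    return 1.5 * x + 0.4 * math.sin(x)

ok = all(x <= f(x) <= 2 * x for x in [k * 0.01 for k in range(0, 10001)])
```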
|
58,926 | <p>Is it well known what happens if one blows-up $\mathbb{P}^2$ at points in non-general position (ie. 3 points on a line, 6 on a conic etc)? Are these objects isomorphic to something nice? </p>
| Francesco Polizzi | 7,460 | <p>In both examples you are considering, the anticanonical model is a <em>singular</em> del Pezzo surface.</p>
<p>In fact, let $X$ be the blow-up of $\mathbb{P}^2$ at three points lying on a line $L$. By Bezout's theorem, the birational map associated with the linear system of cubics through the three points contracts $L$. Since $L^2=1$, the blow-up of $L$ at three points gives a $(-2)$-curve. Therefore, the anticanonical model of $X$ is a Del Pezzo of degree $6$ in $\mathbb{P}^6$ with an ordinary double point (i.e., a node).</p>
<p>Analogously, let $Y$ be the blow-up of $\mathbb{P}^2$ at six points lying on a conic $C$. By Bezout's theorem, the birational map associated with the linear system of cubics through the six points contracts $C$. Since $C^2=4$ and we are blowing-up six points over $C$, we obtain again a $(-2)$-curve. Therefore, the anticanonical model of $Y$ is a Del Pezzo surface of degree $3$ in $\mathbb{P}^3$ with an ordinary double point, i.e. a cubic surface with a node.</p>
<p>Of course, if you blow-up more than $8$ points then the result is not a Del Pezzo anymore. For instance, the blow-up of $\mathbb{P}^2$ at nine points which are the base locus of a pencil of cubics is an elliptic fibration $X \to \mathbb{P}^1$ with nine sections; in general, such fibration has exactly $12$ nodal fibres, corresponding to the singular elements of the pencil.</p>
<p>When the number of points increases the situation becomes more and more complicated, and I guess that a satisfatory description is out of reach.</p>
|
8,567 | <p>When highlighting text using <code>Style</code> and <code>Background</code>, as in <code>Style["Test ", White, Background -> Lighter@Blue]</code> is there a way to pad (ie, enlarge) the bounding box? </p>
<p>The bottom of the background seems coincident with the base of the text: <img src="https://i.stack.imgur.com/SLimA.jpg" alt="here"></p>
| Szabolcs | 12 | <p>If I understand it correctly, the gist of your question is going from</p>
<pre><code>HoldComplete[x, y, z, up]
</code></pre>
<p>to</p>
<pre><code>HoldComplete[1, y^2, z, up]
</code></pre>
<p>assuming the following definitions:</p>
<pre><code>_[___, up, ___] ^= "UpValueEvaluated"
x = 1
f[x_] := x^2
</code></pre>
<p>That is, evaluate everything inside the <code>HoldComplete</code> except the <code>UpValue</code>. I managed to do this using the following construction:</p>
<pre><code>Internal`InheritedBlock[
{RuleCondition},
Attributes[RuleCondition] = {HoldAllComplete};
Replace[HoldComplete[x, f[y], z, up], e_ :> RuleCondition[e], {1}]
]
</code></pre>
<p>The undocumented <code>RuleCondition</code> is explained <a href="https://stackoverflow.com/a/7679152/695132">here</a>. This construction should be able to emulate the behaviour of <code>Set</code>.</p>
|
3,335,060 | <blockquote>
<p>The number of possible continuous functions <span class="math-container">$f(x)$</span> defined on <span class="math-container">$[0,1]$</span> for which <span class="math-container">$I_1=\int_0^1 f(x)dx = 1,~I_2=\int_0^1 xf(x)dx = a,~I_3=\int_0^1 x^2f(x)dx = a^2 $</span> is/are</p>
<p><span class="math-container">$(\text{A})~~1~~~(\text{B})~~2~~(\text{C})~~\infty~~(\text{D})~~0$</span></p>
<p>I have tried the following:
Applying integration by parts (the ILATE rule) - nothing useful comes up, only further complications like the primitive of the primitive of f(x). No use of the given information either.
Using the rule <span class="math-container">$$ \int_a^b g(x)dx = \int_a^b g(a+b-x)dx$$</span>
I solved all three constraints to get
<span class="math-container">$$ \int_0^1 x^2f(1-x)dx = (a-1)^2 \\
\text{or} \int_0^1 x^2[f(1-x)+f(x)]dx = (a-1)^2 +a^2 \\$$</span>
Then I did the following - if f(x) + f(1-x) is constant, solve with the constraints to find possible solutions. Basically I was looking for any solutions where the function also follows the rule that f(x) + f(1-x) is constant.
Solving with the other constraints, I obtained that f(x) will only follow all four constraints if the constant [= f(x) + f(1-x)] is 2, and a is <span class="math-container">$\frac{\sqrt3\pm1}{2}$</span>.</p>
</blockquote>
| Yuriy S | 269,624 | <p>An extended comment.</p>
<p>Not really a proof, but an interesting consequence:</p>
<p><span class="math-container">$$\log 20=4 \log 2+\log \left(1+\frac{1}{4}\right)<3$$</span></p>
<p><span class="math-container">$$\log 2< \frac34 -\frac14 \log \left(1+\frac{1}{4}\right) $$</span></p>
<p><span class="math-container">$$\log 2< \frac34 -\frac14 \left(\frac{1}{4}-\frac{1}{32}+\frac{1}{192}-\frac{1}{1024}\right) $$</span></p>
<p>(using the alternating-series lower bound for $\log(1+\frac14)$), and since <span class="math-container">$\frac{1}{4}-\frac{1}{32}+\frac{1}{192}-\frac{1}{1024}=\frac{685}{3072}>\frac{2}{9}$</span>,</p>
<p><span class="math-container">$$\log 2< \frac34 -\frac1{18} $$</span></p>
<p>The error here is approximately <span class="math-container">$0.0013$</span>.</p>
<p>That said, there's a lot of inequalities for logarithms, especially for <span class="math-container">$\log 2$</span> already known. This can be used to prove the OP.</p>
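<p>The numbers check out (a verification of mine): $\log 20\approx 2.9957<3$, and the final bound exceeds $\log 2$ by about $0.0013$, matching the error quoted above.</p>

```python
import math

# Check log(20) < 3 and measure the gap in log 2 < 3/4 - 1/18.
log2 = math.log(2)
bound = 3 / 4 - 1 / 18
gap = bound - log2  # approximately 0.0013
```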
|
1,803,416 | <p>Does the function $d: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ given by:
$$d(x,y)= \frac{\lvert x-y\rvert} {1+{\lvert x-y\rvert}}$$ define a metric on $\mathbb{R}^n?$</p>
<p>How do you go about proving this? Do I need to just show that it satisfies the three conditions to be a metric? If so how do I show them?</p>
| Math1000 | 38,584 | <p>In general, if $(E,d)$ is a metric space, then $d':=\frac d{1+d}$ is a metric. The triangle inequality is the only nontrivial property here. If $x,y,z\in E$, then $d(x,z)\leqslant d(x,y)+d(y,z)$, so so monotonicity of the map $t\mapsto\frac t{1+t}$ on $[0,\infty)$ yields
\begin{align}
d'(x,z) &= \frac{d(x,z)}{1+d(x,z)}\\
&\leqslant \frac{d(x,y)+d(y,z)}{1+d(x,y)+d(y,z)}\\
&\leqslant \frac{d(x,y)}{1+d(x,y)}+\frac{d(y,z)}{1+d(y,z)}\\
&=d'(x,y)+d'(y,z).
\end{align}</p>
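<p>A randomized spot-check of the triangle inequality for $d'(x,y)=\frac{|x-y|}{1+|x-y|}$ in $\mathbb{R}^3$ with the Euclidean norm (my own check, not part of the proof):</p>

```python
import math
import random

def dprime(x, y):
    # d'(x, y) = |x - y| / (1 + |x - y|) with the Euclidean norm
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return d / (1 + d)

random.seed(1)
ok = True
for _ in range(1000):
    x, y, z = (tuple(random.uniform(-5, 5) for _ in range(3))
               for _ in range(3))
    if dprime(x, z) > dprime(x, y) + dprime(y, z) + 1e-12:
        ok = False
```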
|
127,225 | <p>I got stuck solving the following problem:</p>
<pre><code>Table[Table[
Table[
g1Size = x; g2Size = y;
vals =
FindInstance[(a1 - a2) - (b1 - b2) == z && a1 + b1 == g1Size &&
a2 + b2 == g2Size && a1 + a2 == g1Size && b1 + b2 == g2Size &&
a1 > 0 && a2 > 0 && b1 > 0 && b2 > 0, {a1, a2, b1, b2},
Integers, 3];
aa1 = a1 /. vals; aa2 = a2 /. vals; bb1 = b1 /. vals;
bb2 = b2 /. vals;
{g1Size, g2Size, z, Flatten@{aa1, aa2, bb1, bb2}}
, {z, 0, 10}], {x, 1, 10}], {y, 1, 10}]
</code></pre>
<p>I want to loop through different values of g1Size, g2Size and z and find the first solution to the system of equations. As soon as a solution for a combination of g1Size,g2size and z was found, I want to extract the values for a1,a2,b1,b2 and continue with the next loop. In other words, only print the values when vals is not empty and then stop the z-loop and switch to the next values of x and y.</p>
<p>But my output is like this:</p>
<pre><code>{{{{1, 1, 0, {a1, a2, b1, b2}}, {1, 1, 1, {a1, a2, b1, b2}}, {1, 1,
2, {a1, a2, b1, b2}}, {1, 1, 3, {a1, a2, b1, b2}}, {1, 1,
4, {a1, a2, b1, b2}}, {1, 1, 5, {a1, a2, b1, b2}}, {1, 1,
6, {a1, a2, b1, b2}}, {1, 1, 7, {a1, a2, b1, b2}}, {1, 1,
8, {a1, a2, b1, b2}}, {1, 1, 9, {a1, a2, b1, b2}}, {1, 1,
10, {a1, a2, b1, b2}}}
</code></pre>
<p>plotting the names for a1,a2,b1,b2 when no solution was found.</p>
<p>My mathematica coding is a bit rust and this code seems far from elegant. And I hope it is clear what I mean :).</p>
| mikado | 36,788 | <p>Here is an implementation based on <code>ListConvolve</code></p>
<pre><code>lcimplementation[mat_, n_] :=
With[{c = ConstantArray[1, Length[First[mat]]]},
Last[Fold[
Take[ListConvolve[{c, #2}, #1, {1, -1}, 0, Times, Plus, 1],
UpTo[n + 1]] &, {c}, mat]]]
</code></pre>
<p>Compare this with the reference implementation</p>
<pre><code>refimplementation =
Coefficient[Times @@ (1 + #1 x) // Expand, x^#2] &;
</code></pre>
<p>For timing tests use a new random matrix (and see that the same result is given)</p>
<pre><code>matsize = 100;
subsize = 30;
testsize = 10000;
With[{mat = RandomInteger[{-10, 10}, {matsize, testsize}]},
{AbsoluteTiming[v1 = refimplementation[mat, subsize];],
AbsoluteTiming[v2 = lcimplementation[mat, subsize];]}]
v1 === v2
(* {{8.90806, Null}, {7.37183, Null}} *)
(* True *)
</code></pre>
<p>We see a small improvement in speed. However, this approach can benefit from working at machine precision. We see (in this case at least) it locates the same maximum, though there is a risk (for two nearly equal peaks) that it will get it wrong.</p>
<pre><code>matsize = 100;
subsize = 30;
testsize = 10000;
With[{mat = RandomInteger[{-10, 10}, {matsize, testsize}]},
{AbsoluteTiming[
v1 = refimplementation[mat, subsize] // Position[#, Max@#] &;],
AbsoluteTiming[
v2 = lcimplementation[N[mat], subsize] // Position[#, Max@#] &;]}]
v1 === v2
(* {{9.1465, Null}, {2.14473, Null}} *)
(* True *)
</code></pre>
<p>On my computer, we see a speed improvement by a factor of more than 4</p>
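<p>For readers outside Mathematica, the same idea, folding one truncated polynomial multiplication per row, can be sketched in NumPy (my own translation, not part of the answer; <code>coeff</code> is a made-up name):</p>

```python
import numpy as np
from math import comb

def coeff(rows, n):
    """For each column j, the coefficients of x^0..x^n in prod_i (1 + rows[i, j]*x).
    Each fold multiplies by (1 + r*x) and discards degrees above n, mirroring the
    Take[ListConvolve[...]] step in the Mathematica code."""
    m, t = rows.shape
    acc = np.zeros((n + 1, t))
    acc[0] = 1.0                        # start from the constant polynomial 1
    for r in rows:
        acc[1:] += acc[:-1] * r         # the RHS is materialized before the in-place add
    return acc

# Sanity check: all a_i = 1 gives (1 + x)^6, whose coefficients are C(6, k).
out = coeff(np.ones((6, 1)), 3)[:, 0]
```

<p>The truncation is what makes this cheaper than full expansion: each fold costs O(n·t) rather than growing with the full degree of the product.</p>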
|
2,612,416 | <p>Can you please help me with this limit? I can't use L'Hôpital's rule.</p>
<p>$$\lim_{x\to \infty} \frac{\sqrt{4x^2+5}-3}{\sqrt[3]{x^4}-1} $$</p>
| marwalix | 441 | <p>Rewrite the fraction as follows</p>
<p>$${\sqrt{4+{5\over x^2}}-{3\over x}\over \sqrt[3]{x}-{1\over x}}$$</p>
<p>And this has $0$ as limit at $+\infty$</p>
<p>At $1$ we have a $0/0$ indeterminate form. There are many ways to resolve it, the most elementary being conjugate radicals.</p>
<p>Multiply both numerator and denominator by $\sqrt{4x^2+5}+3$. This yields</p>
<p>$${4(x^2-1)\over \sqrt[3]{x^4}-1}\cdot{1\over \sqrt{4x^2+5}+3}$$</p>
<p>We still have a $0/0$ indeterminate form in the first term of the product, while the second has $1\over 6$ as its limit at $1$.</p>
<p>To resolve the remaining indeterminate form we need to use the identity $a^3-b^3=(a-b)(a^2+ab+b^2)$. So we multiply both numerator and denominator by $(\sqrt[3]{x^4})^2+\sqrt[3]{x^4}+1$ to get:</p>
<p>$${4(x^2-1)\over \sqrt[3]{x^4}-1}={4(x^2-1)\cdot((\sqrt[3]{x^4})^2+\sqrt[3]{x^4}+1)\over x^4-1}=4{(\sqrt[3]{x^4})^2+\sqrt[3]{x^4}+1\over x^2+1}$$</p>
<p>And for this term the limit is $6$ when $x\to 1$ and so the limit we’re looking for is $6/6=1$</p>
|
90,459 | <p>I want to find the degree of $\mathbb{Q}(\sqrt{3+2\sqrt{2}})$ over $\mathbb{Q}$. I observe that
$3+2\sqrt{2}=2+2\sqrt{2}+1=(\sqrt{2}+1)^2$ so
$$
\mathbb{Q}(\sqrt{3+2\sqrt{2}})=\mathbb{Q}(\sqrt{2}+1)=\mathbb{Q}(\sqrt{2})
$$
so the degree is 2.</p>
<p>Is there a more mechanical way to show this without noticing the factorization?</p>
| Bill Dubuque | 242 | <p>This reduces to checking if the radical <span class="math-container">$\:\sqrt{3+2\sqrt{2}}\:$</span> denests. While there are <a href="https://math.stackexchange.com/a/4697/242">general algorithms</a>, simple cases like this can be tackled by employing an easy formula that I discovered as a teenager.</p>
<p><strong>Simple Denesting Rule</strong> <span class="math-container">$\rm\quad \color{blue}{subtract\ out}\ \sqrt{norm}\:,\ \ then\ \ \color{brown}{divide\ out}\ \sqrt{trace} $</span></p>
<p>E.g. <span class="math-container">$\ 3 + 2\sqrt{2}\ $</span> has norm <span class="math-container">$\ (3+2\sqrt{2})\:(3-2\sqrt{2})\ =\ 9 - 4\cdot 2\ =\ 1$</span></p>
<p>So <span class="math-container">$\rm\:\color{blue}{subtracting\ out}\ \sqrt{norm}\ = \pm1\ $</span> yields <span class="math-container">$\rm\ 3 + 2\sqrt{2}\: -\: \pm1\ =\ 2 + 2\sqrt{2}\ \ or\ \ 4 + 2\sqrt{2} $</span></p>
<p><span class="math-container">$\rm\qquad \sqrt{trace(2+2\sqrt{2})}\ =\ \sqrt{4}\ =\ 2,\quad\ \ so\ \quad\ \color{brown}{dividing\ it\ out}\ \ \ (2+2\sqrt{2})/2\ =\ 1+\sqrt{2}$</span></p>
<p><span class="math-container">$\rm\qquad \sqrt{trace(4+2\sqrt{2})}\ =\ \sqrt{8}\ =\ 2\sqrt{2}\quad so\quad \color{brown}{dividing\ it\ out}\ \ \ (4+2\sqrt{2})/(2\sqrt{2})\: =\: \sqrt{2}+1$</span></p>
<p>Indeed <span class="math-container">$\rm\ (1 + \sqrt{2})^2\ =\ 3 + 2\sqrt{2}\:.\ $</span> It is <a href="https://math.stackexchange.com/a/816527/242">an easy exercise</a> to check that the formula is correct.</p>
<p>With experience from a few worked examples, one can swiftly <em>mentally</em> denest such radicals.</p>
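<p>The rule is also easy to mechanize. Here is a small numeric sketch (my own addition; <code>denest</code> is a made-up helper name) for radicands of the form $a+b\sqrt d$ with integer entries:</p>

```python
import math

def denest(a, b, d):
    """Try to denest sqrt(a + b*sqrt(d)) as p + q*sqrt(d) via the rule above:
    subtract out +-sqrt(norm), then divide out sqrt(trace)."""
    norm = a * a - d * b * b               # (a + b*sqrt(d)) * (a - b*sqrt(d))
    if norm < 0 or math.isqrt(norm) ** 2 != norm:
        return None                         # the rule needs sqrt(norm) to be an integer
    for s in (math.isqrt(norm), -math.isqrt(norm)):
        ta = a - s                          # subtract out one square root of the norm
        if ta > 0:
            p = math.sqrt(ta / 2)           # dividing (ta + b*sqrt(d)) by sqrt(trace) = sqrt(2*ta)
            return p, b / (2 * p)
    return None

p, q = denest(3, 2, 2)                      # sqrt(3 + 2*sqrt(2)) = 1 + 1*sqrt(2)
```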
|
1,831,191 | <p>I am confused about the following Theorem:</p>
<p>Let <span class="math-container">$f: I \to \mathbb{R}^n$</span>, <span class="math-container">$a \in I$</span>. Then the function <span class="math-container">$f$</span> is differentiable at <span class="math-container">$a$</span> if and only if there exists a function <span class="math-container">$\varphi: I \to \mathbb{R}^n$</span> that is continuous in <span class="math-container">$a$</span>, and such that <span class="math-container">$f(x) - f(a) = (x - a)\varphi(x)$</span>, for all <span class="math-container">$x \in I$</span>; furthermore, <span class="math-container">$\varphi(a) = f'(a)$</span>.</p>
<p>I understand the proof of this theorem, but something confuses me. Doesn't this theorem state that the derivative of a function in a point is always continuous in that point, since <span class="math-container">$f'(a) = \varphi(a)$</span> is continuous in <span class="math-container">$a$</span>? This would mean that the derivative of a function is always continuous on the domain of the function, but I have encountered counterexamples. I have probably misinterpreted something; any help would be welcome.</p>
| Eric Wofsey | 86,856 | <p>The theorem states that $\varphi(a)=f'(a)$ for this particular value of $a$. It doesn't say that $\varphi(x)=f'(x)$ for <em>all</em> $x$, or indeed for any value of $x$ besides the single value $x=a$. So the fact that $\varphi$ is continuous at $a$ doesn't tell you that $f'$ is continuous at $a$, since continuity depends on the values of the function at points near $a$, not just at $a$ itself.</p>
|
4,080,234 | <p>This may be a stupid question but I was looking over a proof and one of the steps simplifies <span class="math-container">$|x|/x^2=1/|x|$</span> and I was wondering what the rigorous justification of that is. Is it because <span class="math-container">$x^2$</span> is essentially the same as <span class="math-container">$|x|^2$</span>?</p>
| Aditya_math | 798,141 | <p>Yes!</p>
<p>For intuition, split into <span class="math-container">$2$</span> cases, <span class="math-container">$x$</span> positive and negative</p>
|
4,080,234 | <p>This may be a stupid question but I was looking over a proof and one of the steps simplifies <span class="math-container">$|x|/x^2=1/|x|$</span> and I was wondering what the rigorous justification of that is. Is it because <span class="math-container">$x^2$</span> is essentially the same as <span class="math-container">$|x|^2$</span>?</p>
| lone student | 460,967 | <p>If <span class="math-container">$x\in\mathbb R$</span>, then <strong>by definition</strong> of absolute value:</p>
<ul>
<li><p>If <span class="math-container">$x>0$</span>, then <span class="math-container">$|x|=x$</span>, which implies <span class="math-container">$$|x|^2=x^2$$</span></p>
</li>
<li><p>If <span class="math-container">$x<0$</span>, then <span class="math-container">$|x|=-x$</span>, which implies <span class="math-container">$$|x|^2=(-x)^2=x^2$$</span></p>
</li>
</ul>
<p>So, you can deduce that <span class="math-container">$$|x|^2=x^2$$</span> holds for all <span class="math-container">$x\in\mathbb R.$</span></p>
<p>Thus,</p>
<p><span class="math-container">$$\frac{|x|}{x^2}=\frac{|x|}{|x|^2}=\frac{1}{|x|}.$$</span></p>
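<p>A quick numeric spot-check of both cases (my addition):</p>

```python
for x in (-3.5, -1.0, 0.25, 7.0):             # sample nonzero reals of both signs
    assert abs(x) * abs(x) == x * x            # |x|^2 = x^2, exactly
    assert abs(abs(x) / (x * x) - 1 / abs(x)) < 1e-15
checked = True
```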
|
1,902,138 | <p>It's common to see a plus-minus ($\pm$), for example in describing error
$$
t=72 \pm 3
$$
or in the quadratic formula
$$
x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}
$$
or identities like
$$
\sin(A \pm B) = \sin(A) \cos(B) \pm \cos(A) \sin(B)
$$</p>
<p>I've never seen an analogous version combining multiplication with division, something like $\frac{\times}{\div}$</p>
<blockquote>
<p>Does this ever come up, and if not why?</p>
</blockquote>
<p>I suspect it simply isn't as naturally useful as $\pm$. </p>
| guestDiego | 338,527 | <p>Perhaps because
$$
a\frac{\times}{\div}b
$$
(typographically quite horrible) is written as
$$
a\cdot b^{\pm1}
$$</p>
|
1,902,138 | <p>It's common to see a plus-minus ($\pm$), for example in describing error
$$
t=72 \pm 3
$$
or in the quadratic formula
$$
x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}
$$
or identities like
$$
\sin(A \pm B) = \sin(A) \cos(B) \pm \cos(A) \sin(B)
$$</p>
<p>I've never seen an analogous version combining multiplication with division, something like $\frac{\times}{\div}$</p>
<blockquote>
<p>Does this ever come up, and if not why?</p>
</blockquote>
<p>I suspect it simply isn't as naturally useful as $\pm$. </p>
| gowrath | 255,605 | <p>There are many functions (squaring, for example) where $f(x) = f(-x)$, and for $f^{-1}$ the $\pm$ is useful. If there were common functions where $f(x) = f(\frac{1}{x})$, it might be a thing. Can anyone think of example functions like this? </p>
|
4,121,607 | <p>I want to find a function which satisfies certain following limits.</p>
<p>The question is:
Find a function which satisfies</p>
<p><span class="math-container">$$
\lim_{x\to5} f(x)=3, \text{ and } f(5) \text{ does not exist}
$$</span></p>
<p>I would think that because it says <span class="math-container">$f(5)$</span> doesn't exist, there must be a fraction with <span class="math-container">$(x-5)$</span> on the bottom. I would think <span class="math-container">$f(x) = \frac{15}{x-5}$</span> but that tends to infinity as <span class="math-container">$x\to5$</span></p>
| jjagmath | 571,433 | <p>Take the following function:</p>
<p><span class="math-container">$f:\Bbb R \setminus \{5\} \to \Bbb R$</span> given by <span class="math-container">$f(x) =3$</span></p>
|
4,017,554 | <p>Simple question - how to prove that:</p>
<p><span class="math-container">$\sqrt {x\times y} = \sqrt x \times \sqrt y$</span> ?</p>
<p>If I use exponentiation the answer seems easy, because</p>
<p><span class="math-container">$(x\times y)^n = x^n \times y^n$</span> because I get</p>
<p><span class="math-container">$(x\times y) \times\cdots\times (x\times y)$</span> (where <span class="math-container">$x$</span> occurs <span class="math-container">$n$</span> times and <span class="math-container">$y$</span> occurs <span class="math-container">$n$</span> times) can be rewritten as: <span class="math-container">$x \times\cdots\times x \times y \times \cdots \times y$</span>.</p>
<p>But in the case of square roots that's not so obvious, because I can't rewrite it the same way.
I can of course reason that <span class="math-container">$\sqrt x$</span> is <span class="math-container">$x^{\frac12}$</span> and <span class="math-container">$\sqrt y$</span> is <span class="math-container">$y^{\frac12}$</span> and think using induction, but that does not satisfy me.</p>
| J.G. | 56,861 | <p>If <span class="math-container">$z:=\sqrt{x}\sqrt{y}\ge0$</span> then <span class="math-container">$z^2=\sqrt{x}\sqrt{y}\sqrt{x}\sqrt{y}=\sqrt{x}\sqrt{x}\sqrt{y}\sqrt{y}=xy$</span> so <span class="math-container">$z=\sqrt{xy}$</span>.</p>
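<p>A numeric spot-check of the identity over random positive inputs (floating point, so only up to rounding):</p>

```python
import math
import random

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    # both sides are the unique nonnegative number whose square is x*y
    assert math.isclose(math.sqrt(x * y), math.sqrt(x) * math.sqrt(y))
ok = True
```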
|
97,672 | <p>Given that I have a system of equations in the variables $x_0,x_1,\cdots,x_n$, of the following form:</p>
<p>$
\left(
\begin{array}{cccccccc}
\frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{6} & \frac{2}{3} & \frac{1}{6} & 0 \\
\end{array}
\right) \left(
\begin{array}{c}
x_0 \\
x_1 \\
x_2 \\
x_3 \\
x_4 \\
\color{red}{x_0} \\
\color{red}{x_1} \\
\color{red}{x_2} \\
\end{array}
\right)=\left(
\begin{array}{c}
(1,1) \\
(2,3) \\
(3,-1) \\
(4,1) \\
(5,0) \\
\end{array}
\right)
$</p>
<p>Obviously, I <strong>cannot</strong> solve this linear system directly with <code>LinearSolve[]</code>. To solve this system of equations, I only used <code>Solve[]</code>.</p>
<pre><code>mat=
{{1/6, 2/3, 1/6, 0, 0, 0, 0, 0}, {0, 1/6, 2/3, 1/6, 0, 0, 0, 0},
{0, 0, 1/6, 2/3, 1/6, 0, 0, 0}, {0, 0, 0, 1/6, 2/3, 1/6, 0, 0},
{0, 0, 0, 0, 1/6, 2/3, 1/6, 0}};
eqns = mat.{x0, x1, x2, x3, x4, x0, x1, x2};
</code></pre>
<p>$
\begin{pmatrix}
\frac{x_0}{6}+\frac{2 x_1}{3}+\frac{x_2}{6}\\
\frac{x_1}{6}+\frac{2 x_2}{3}+\frac{x_3}{6}\\
\frac{x_2}{6}+\frac{2 x_3}{3}+\frac{x_4}{6}\\
\frac{x_0}{6}+\frac{x_3}{6}+\frac{2 x_4}{3}\\
\frac{2 x_0}{3}+\frac{x_1}{6}+\frac{x_4}{6}
\end{pmatrix}
$</p>
<pre><code>yValues = {{1, 1}, {2, 3}, {3, -1}, {4, 1}, {5, 0}};
part1 = {x0, x1, x2, x3, x4} /.
Solve[Thread[eqns == yValues[[All, 1]]], {x0, x1, x2, x3, x4}]
part2 = {x0, x1, x2, x3, x4} /.
Solve[Thread[eqns == yValues[[All, 2]]], {x0, x1, x2, x3, x4}]
res = Transpose[Join[part1, part2]]
</code></pre>
<blockquote>
<pre><code> {{75/11, -8/11}, {-9/11, 4/11}, {27/11, 58/11}, {3, -38/11}, {39/11, 28/11}}
</code></pre>
</blockquote>
<h3>Question</h3>
<p>However, the index $n$ for the variables $\{x_0,x_1,\cdots,x_n\}$ is very large ($n=100$) in my work, and my <code>Solve[]</code>-based approach is quite clumsy. So I would like to know how to handle this case efficiently with the <em>built-in</em> <code>LinearSolve[]</code>.</p>
| xzczd | 1,871 | <pre><code>l = 5; s = 3; (* l: size of the square system; s: number of wrapped variables *)
(* Solution 1 *)
# + SparseArray[#2, {l, l}] & @@ Internal`PartitionRagged[mat\[Transpose], {l, s}];
LinearSolve[%\[Transpose], yValues]
(* Solution 2 *)
Module[{m = #[[;; l]]}, m[[;; s]] += #[[-s ;;]]; m] &[mat\[Transpose]];
LinearSolve[%\[Transpose], yValues]
</code></pre>
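<p>For comparison, the same column-folding idea in NumPy (my own sketch, not part of the answer): add the wrapped columns onto the first ones, then solve the resulting square system for both right-hand sides at once.</p>

```python
import numpy as np

# Banded 5x8 matrix from the question, acting on (x0..x4, x0, x1, x2).
mat = np.zeros((5, 8))
for i in range(5):
    mat[i, i:i + 3] = [1 / 6, 2 / 3, 1 / 6]

A = mat[:, :5].copy()
A[:, :3] += mat[:, 5:]                     # fold columns 5..7 back onto columns 0..2

b = np.array([[1.0, 1.0], [2, 3], [3, -1], [4, 1], [5, 0]])
x = np.linalg.solve(A, b)                  # both right-hand sides at once
```

<p>The result matches the values obtained with <code>Solve[]</code> in the question (e.g. the first column is $75/11, -9/11, 27/11, 3, 39/11$).</p>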
|
126,553 | <p>In <a href="http://www.icpr2010.org/pdfs/icpr2010_MoAT5.1.pdf" rel="nofollow">this paper</a>, in the Formula at the beginning of 2.2, we have</p>
<p>$B=\{b_i(O_t)\}$</p>
<p>where </p>
<p>$i=0,1$ - the number of probability formula</p>
<p>$O_t$ - the state at moment $t$</p>
<p>$b_i(O_t)$ - two probabilities or estimations for the state $O_t$</p>
<p>The result ($B$) is named "likelihood". How can likelihood be obtained from 2 numbers? Is this a weighted average like</p>
<p>$b_i(O_t)= 0*b_0(O_t) + 1*b_1(O_t)$</p>
<p>or </p>
<p>$b_i(O_t)= -1*b_0(O_t) + 1*b_1(O_t)$</p>
<p>or something?</p>
| yoyostein | 28,012 | <p>My guess is that he meant it as $B_i:=b_i(O_t)$</p>
|
2,352,313 | <p>If $f_n$ is the number of permutations of the numbers $1$ to $n$ in which no number is in its place (I think this is the same as $D_n$), and $g_n$ is the number of such permutations with exactly one number in its place, prove that $\mid f_n-g_n \mid =1$.</p>
<p>I need a proof using mostly combinatorics, not mostly algebra. I think we should find something like the following:</p>
<p>$f_n-g_n=g_{n-1}-f_{n-1}$</p>
<p>But I can't do that.</p>
| Paul Aljabar | 435,819 | <p>I think this might be part of the way to a solution</p>
<p>Assuming that
$$
g_{n} = n f_{n-1}
$$</p>
<p>Use one of the recurrence relations for derangements:
$$
f_{n} = n f_{n-1} + (-1)^{n}
$$</p>
<p>$$
\begin{aligned}
f_{n} - g_{n}
&=
f_{n} - n f_{n-1}
\\
&=
f_{n} - n (n-1) f_{n-2} - n (-1)^{n-1}
\\
&=
f_{n} - n g_{n-1} - n(-1)^{n-1}
\\
&=
n f_{n-1} + (-1)^{n} - n g_{n-1} - n(-1)^{n-1}
\\
&=
n (f_{n-1} - g_{n-1} - (-1)^{n-1}) + (-1)^{n}
\end{aligned}
$$</p>
<p>So we get</p>
<p>$$
\begin{aligned}
f_{n} - g_{n} - (-1)^{n}
&=
n (f_{n-1} - g_{n-1} - (-1)^{n-1})
\end{aligned}
$$</p>
<p>So we get a recurrence relation on the expression $d_{n} := f_{n} - g_{n} - (-1)^{n}$</p>
<p>Can't figure out where to go from here.</p>
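<p>One way to finish from here: the base case is $d_2 = f_2 - g_2 - (-1)^2 = 1 - 0 - 1 = 0$, and $d_n = n\,d_{n-1}$ then forces $d_n = 0$ for all $n \ge 2$, i.e. $f_n - g_n = (-1)^n$. A quick numeric check (my own sketch, using the recurrence above together with $g_n = n f_{n-1}$):</p>

```python
f = [1, 0]                                  # derangement counts: f[0] = 1, f[1] = 0
for n in range(2, 13):
    f.append(n * f[-1] + (-1) ** n)         # f_n = n*f_{n-1} + (-1)^n

diffs = [f[n] - n * f[n - 1] for n in range(2, 13)]   # f_n - g_n with g_n = n*f_{n-1}
```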
|
2,159,915 | <p>Consider the following system of ODE:</p>
<p>$$\begin{array}{ll}\ddot y + y + \ddot x + x = 0 \\
y+\dot x - x = 0 \end{array}$$</p>
<p><strong>Question</strong>: How many initial conditions are required to determine a unique solution?</p>
<p>A naive reasoning leads to four: $y(0),\dot y(0), x(0)$ and $\dot x(0)$. However, if we write the system in a first-order form:</p>
<p>$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot x \\ \ddot x \\ \dot y \\ \ddot y\end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1\\ 1 & 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} x \\ \dot x \\ y \\ \dot y\end{bmatrix}$$</p>
<p>the left matrix is not of full rank, which means the equations are not all independent.
Indeed, by differentiating the second equation: $\dot y + \ddot x -\dot x=0$ which leads to $\ddot y + y + \dot x - \dot y + x =0$, or, in the first-order form:
$$ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \dot x \\ \dot y \\ \ddot y \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ \dot y \end{bmatrix}$$</p>
<p>With the above form, it shows only three initial conditions are required: $x(0),y(0),\dot y(0)$.</p>
<p>And, what if I had:</p>
<p>$$\begin{array}{ll}\ddot y + y + x^{(n)} + x = 0 \\
\dot x - x = 0 \end{array}$$</p>
<p>for some $n$. Then, I could solve the second equation for some $x(0)$, then differentiate $x$ $n$ times and inject in the first equation, so only $x(0), y(0), \dot y(0)$ are needed. Can this be seen directly, in a robust manner?</p>
| the_candyman | 51,370 | <p>I guess that the first answer is $4$, even if the dynamical matrix is rank deficient. Consider this example:
$$\left[\begin{array}{c}\dot{x}\\\dot{y}
\end{array}\right] = \left[\begin{array}{cc}-1 & 0\\ 1 & 0
\end{array}\right] \cdot \left[\begin{array}{c}x\\y
\end{array}\right].$$
Following your thoughts, even in this case we would need just $1$ initial condition instead of $2$. Anyway:</p>
<p>$$x(t) = A e^{-t}, y(t) = B + Ce^{-t}.$$</p>
<p>Then:</p>
<p>$$\dot{y}(t) = -Ce^{-t} = x(t) \Rightarrow A=-C,$$ and hence:</p>
<p>$$x(t) = Ae^{-t}, y(t) = B - Ae^{-t}.$$</p>
<p>This means that you still need $2$ initial conditions!!!</p>
<p>In this case, you have that $A = x(0)$ and $B = x(0)+y(0)$.</p>
<p>For the second case, you need $n+2$ initial conditions.</p>
<hr>
<p><strong>Addition</strong></p>
<p>If the dynamical matrix is rank deficient, then this means that it has at least one null eigenvalue. The presence of the null eigenvalue implies the presence of a "constant" term in the solution. This constant term still represents a degree of freedom of the system, even if the dynamical matrix lost a degree of freedom.</p>
|
182,346 | <p>Let's call a polygon $P$ <em>shrinkable</em> if any down-scaled (dilated) version of $P$ can be translated into $P$. For example, the following triangle is shrinkable (the original polygon is green, the dilated polygon is blue):</p>
<p><img src="https://i.stack.imgur.com/M0LOu.png" alt="enter image description here"></p>
<p>But the following U-shape is not shrinkable (the blue polygon cannot be translated into the green one):</p>
<p><img src="https://i.stack.imgur.com/S30bD.png" alt="enter image description here"></p>
<p>Formally, a compact $\ P\subseteq \mathbb R^n\ $ is called <em>shrinkable</em> iff:</p>
<p>$$\forall_{\mu\in [0;1)}\ \exists_{q\in \mathbb R^n}\quad \mu\!\cdot\! P\, +\, q\ \subseteq\ P$$</p>
<p>What is the largest class of shrinkable polygons?</p>
<p>Currently I have the following sufficient condition: if $P$ is <a href="https://en.wikipedia.org/wiki/Star-shaped_polygon" rel="nofollow noreferrer">star-shaped</a> then it is shrinkable. </p>
<p><em>Proof</em>: By definition of a star-shaped polygon, there exists a point $A\in P$ such that for every $B\in P$, the segment $AB$ is entirely contained in $P$. Now, for all $\mu\in [0;1)$, let $\ q := (1-\mu)\cdot A$. This effectively translates the dilated $P'$ such that $A'$ coincides with $A$. Now every point $B'\in P'$ is on a segment between $A$ and $B$, and hence contained in $P$.</p>
<p><img src="https://i.stack.imgur.com/sdPRw.png" alt="enter image description here"></p>
<p>My questions are:</p>
<p>A. Is the condition of being star-shaped also necessary for shrinkability?</p>
<p>B. Alternatively, what other condition on $P$ is necessary?</p>
| Włodzimierz Holsztyński | 8,385 | <p>Let $\ L\ $ be a Hilbert space. Let $\ P\subseteq L\ $ be a non-empty compact subset. Then $\ P\ $ is called $\ \mu$-shrinkable $\ \Leftarrow:\Rightarrow$</p>
<p>$$\exists_{q\in L}\ \ \mu\cdot P\ +\ q\ \subseteq\ P$$</p>
<p>for arbitrary $\ \mu\ge 0\ $ (thus $\ \mu \le 1\ $ when $\ |P|>1$).</p>
<p>Let $\ m(P)\ $ be the set of all $\ \mu\ge 0\ $ such that $\ P\ $ is $\ \mu$-shrinkable. Following @E.S.Halevi, let $\ P\ $ be called shrinkable $\ \Leftarrow:\Rightarrow\ \ m(P) = [0;1].\ $ Then</p>
<blockquote>
<p><strong>THEOREM</strong> The following three properties of a non-empty compact $\ P\subseteq L\ $are equivalent:</p>
<ol>
<li>P is a star set;</li>
<li>P is shrinkable;</li>
<li>$\ \sup (\ m(P)\cap[0;1)\ )\ =\ 1$</li>
</ol>
</blockquote>
<p><strong>PROOF</strong> Implications $\ 1\Rightarrow 2\Rightarrow 3\ $ are trivial. We need only $\ 3\Rightarrow 1.\ $ Thus assume condition $3$.</p>
<p>Consider map $\ f_\mu : x\mapsto \mu\cdot x + q_\mu,\ $ of $\ P\ $ into itself, for every $\ \mu\in m(P)\cap[0;1).\ $ Then by Banach's <em>fpp</em> there exists a unique $c_\mu\in P\ $ such that $\ c_\mu = \mu\cdot c_\mu + q_\mu,\ $ so that $\ q_\mu = (1-\mu)\cdot c_\mu.\ $ Thus there is a limit point $\ c_1\in P\ $ of a certain sequence of points $\ c_\mu\ $ for which $ \lim \mu = 1$.</p>
<p>Observe that for $\ \nu:=\mu^k\ $ the composition $\ g_\nu:=\bigcirc^k f_\mu\ $ has the same fixed point (I am going to choose at most one $\ k\ $ for each $\ \mu$); also</p>
<p>$$\forall_{x\in L}\ \ g_\nu(x)\ =\ \nu\cdot x\ +\ (1-\nu)\cdot c_\mu$$</p>
<p>Now let's consider an arbitrary $\ \kappa\in[0;1).\ $ I'll show that the function</p>
<p>$$\ F_\kappa\ :\ x\ \mapsto\ \kappa\cdot (x-c_1)+c_1\ \ =\ \ \kappa\cdot x\ +\ (1-\kappa)\cdot c_1$$</p>
<p>maps $\ P\ $ into itself (for every such $\kappa,\ $ so that will be the end of the proof).</p>
<p>Thus let $\ \epsilon > 0.\ $ Then there exist $\ \mu\in[0;1)\ $ and natural $\ k,\ $ such that $\ |c_\mu-c_1|<\epsilon\ $ and $\ |\nu-\kappa|<\epsilon\ $ for $\ \nu:=\mu^k,\ $ hence for arbitrary $\ x\in P$:</p>
<p>$$ |g_\nu(x)-F_\kappa(x)|\ \le\ |g_\nu(x)-F_\nu(x)|\ +\ |F_\nu(x)-F_\kappa(x)|$$</p>
<p>where</p>
<p>$$\ |F_\nu(x)-F_\kappa(x)|\ =\ |(\nu-\kappa)\cdot x + (\kappa-\nu)\cdot c_1|\ =\ |\nu-\kappa|\cdot|x-c_1|$$ </p>
<p>hence</p>
<p>$$|F_\nu(x)-F_\kappa(x)|\ \le\ \epsilon\cdot |x-c_1|$$</p>
<p>Next</p>
<p>$$|g_\nu(x)-F_\nu(x)|\ =\ |(1-\nu)\cdot c_\mu - (1-\nu)\cdot c_1|\ =\ |1-\nu|\cdot|c_\mu-c_1|\ \le\ \epsilon$$</p>
<p>These inequalities imply:</p>
<p>$$ |g_\nu(x)-F_\kappa(x)|\ \le\ (|x-c_1|+1)\cdot\epsilon$$</p>
<p>or</p>
<p>$$ |g_\nu(x)-F_\kappa(x)|\ \le\ D\cdot\epsilon$$</p>
<p>where $\ D := \operatorname{diam}(P)+1\ $ (note $\ |x-c_1|\le \operatorname{diam}(P)\ $ since $\ x, c_1\in P$)</p>
<p>Thus for every $\ \delta > 0\ $ let $\ \epsilon:=\frac\delta D\ $ such that... OK, enough of this $\delta$-$\epsilon$ business, $\ F_\kappa(x)\in P$.</p>
<p><strong>END of PROOF</strong></p>
<blockquote>
<p><strong>REMARK</strong> The theorem holds not just for Hilbert spaces but also for Banach spaces. One should also be able to replace translations by arbitrary linear isometries. I am even curious and hopeful about considering theorems of this kind for locally convex linear spaces.</p>
</blockquote>
|
1,647,327 | <p>suppose S is a metric space and $B(S)$ is the set of bounded functions and $C_b(S)$ is the set consisting of bounded continuous functions.</p>
<p>Prove that $C_b(S)$ is a closed subspace of $B(S)$.</p>
<p>I thought of looking at the complement $B(S) \backslash C_b(S) = \{f| \text{ f is bounded and not continuous}\}$ and proving this is open. </p>
<p>But how does one check that for each element $f \in B(S) \backslash C_b(S)$, there exists an $\epsilon$ such that $B_f(\epsilon) \subset B(S) \backslash C_b(S)$?</p>
| s.harp | 152,424 | <p>The norm on the function space is given by the sup norm $\|f\|:=\sup\{|f(x)|x\in S\}$. Convergence of functions in sup-norm is called uniform convergence. It is a general statement that a uniform limit of continuous functions is again continuous.</p>
<p>To see this, let $x_n \to x$, let $f_m \in C_b(S)$, and suppose $f_m \to f$ in $B(S)$. Consider $\epsilon>0$.</p>
<p>Since $f_m \to f$ there exists an $M$ such that for $m>M$, $\|f_m-f\|<\epsilon$. Now fix $m>M$; since $f_m$ is continuous and $x_n\to x$, we have an $N$ such that $|f_m(x_n)-f_m(x)|<\epsilon$ for $n>N$. So for a given $\epsilon$, choose $m, N$ as above; then for $n>N$:</p>
<p>$$|f(x_n)-f(x)|≤|f(x_n)-f_m(x_n)|+|f_m(x_n)-f_m(x)|+|f_m(x)-f(x)|≤2\|f-f_m\|+\epsilon≤3\epsilon$$</p>
<p>Which shows $f(x_n)\to f(x)$ and $f$ is continuous.</p>
|
3,062,701 | <p>I want to solve this system by Least Squares method:<span class="math-container">$$\begin{pmatrix}1 & 2 & 3\\\ 2 & 3 & 4 \\\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix} $$</span> This symmetric matrix is singular with one eigenvalue <span class="math-container">$\lambda1 = 0$</span>, so <span class="math-container">$\ A^t\cdot A$</span> is also singular and for this reason I cannot use the normal equation: <span class="math-container">$\hat x = (A^t\cdot A)^{-1}\cdot A^t\cdot b $</span>.
So I performed Gauss–Jordan elimination on the augmented matrix to arrive at <span class="math-container">$$\begin{pmatrix}1 & 2 & 3\\\ 0 & 1 & 2 \\\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\3\\-1\end{pmatrix} $$</span>
Finally I solved the <span class="math-container">$\ 2x2$</span> system: <span class="math-container">$$\begin{pmatrix}1 & 2\\\ 0 & 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}1\\3\end{pmatrix} $$</span> taking into account that the best <span class="math-container">$\ \hat b\ $</span> is <span class="math-container">$\begin{pmatrix}1\\3\\0\end{pmatrix}$</span></p>
<p>The solution is then <span class="math-container">$\ \hat x = \begin{pmatrix}-5\\3\\0\end{pmatrix}$</span></p>
<p>Is this approach correct ? </p>
<p><strong>EDIT</strong></p>
<p>Based on the book 'Linear Algebra and its Applications' by David Lay, I also include the Least Squares method he proposes: <span class="math-container">$(A^tA)\hat x=A^t b $</span></p>
<p><span class="math-container">$$A^t b =\begin{pmatrix}5\\9\\13\end{pmatrix}, A^tA = \begin{pmatrix}14 & 20 & 26 \\
20 & 29 & 38 \\
26 & 38 & 50\end{pmatrix}$$</span>
The reduced echelon from the augmented is:
<span class="math-container">$$ \begin{pmatrix}14 & 20 & 26 & 5 \\
20 & 29 & 38 & 9 \\
26 & 38 & 50 & 13 \end{pmatrix} \sim \begin{pmatrix}1 & 0 & -1 & -\frac{35}{6} \\
0 & 1 & 2 & \frac{13}{3} \\
0 & 0 & 0 & 0
\end{pmatrix} \Rightarrow \hat x = \begin{pmatrix}-\frac{35}{6} \\ \frac{13}{3} \\ 0 \end{pmatrix}$$</span> taking the free variable <span class="math-container">$z=\alpha$</span> with <span class="math-container">$\alpha=0$</span> </p>
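<p>A NumPy cross-check of this hand computation (my own addition; note that <code>np.linalg.lstsq</code> returns the minimum-norm least-squares solution, i.e. a different choice of the free variable <span class="math-container">$z$</span>, but with the same residual):</p>

```python
import numpy as np

A = np.array([[1.0, 2, 3], [2, 3, 4], [3, 4, 5]])
b = np.array([1.0, 5, -2])

x_hat = np.array([-35 / 6, 13 / 3, 0.0])        # hand solution above, with z = 0
x_mn, *_ = np.linalg.lstsq(A, b, rcond=None)    # minimum-norm solution (z != 0)

# Both satisfy the normal equations A^T A x = A^T b, hence give the same residual.
r_hat = np.linalg.norm(A @ x_hat - b)
r_mn = np.linalg.norm(A @ x_mn - b)
```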
| Mostafa Ayaz | 518,023 | <p>Note that <span class="math-container">$$\begin{pmatrix}3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=-2$$</span>and <span class="math-container">$$\Big[2\cdot\begin{pmatrix}2&3&4\end{pmatrix}-\begin{pmatrix}1&2&3\end{pmatrix}\Big]\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=2\times 5-1=9$$</span>since this leads to <span class="math-container">$-2=9$</span>, therefore <strong>the solution space is infeasible.</strong> </p>
|
3,360,694 | <p><span class="math-container">$U(n)$</span> is the collection of positive integers which are coprime to n forms a group under multiplication modulo n.</p>
<p>What is the order of the element 250 in <span class="math-container">$U(641)$</span>?</p>
<p>My attempt:
Here 641 is a prime number.
So <span class="math-container">$U(641)$</span> is a cyclic group, isomorphic to <span class="math-container">$Z/640 Z$</span> under addition modulo 640.</p>
<p>I need to find the smallest positive integer n such that <span class="math-container">$250^n$</span> congruent to 1 (mod 641).
Any easy way to find this n?</p>
<p>Also I found that inverse of 250 in U(641) is 100.</p>
<p>Kindly provide some hints to find the required n.</p>
<p>Thanks in advance. </p>
| Hagen von Eitzen | 39,174 | <p>A priori, the order of <span class="math-container">$250$</span> could be any divisor of <span class="math-container">$640$</span>.
Since you found the inverse and it is an obvious square (<span class="math-container">$100=10^2$</span>), it is clear that the order of <span class="math-container">$250$</span> is at worst a divisor of <span class="math-container">$320=2^6\cdot 5$</span>, not of <span class="math-container">$640$</span>. Seeing this was a bit of luck. We could try to find out whether we can easily take a fifth root of either <span class="math-container">$250$</span> or <span class="math-container">$10$</span> (or another square root of <span class="math-container">$10$</span>, say). Unfortunately, numbers like <span class="math-container">$866$</span>, <span class="math-container">$741$</span>, or <span class="math-container">$651$</span> or even when adding yet another <span class="math-container">$641$</span> do not look like obvious squares or fifth powers, so systematic work seems unavoidable.</p>
<p>If we wanted to prove that the order is <span class="math-container">$320$</span> (if that were true), we'd only have to check that neither <span class="math-container">$250^{320/2}$</span> nor <span class="math-container">$250^{320/5}$</span> is <span class="math-container">$1$</span> (whereas we know from the first paragraph that <span class="math-container">$250^{320}\equiv 1$</span>). If the actual order is smaller, we'd have to climb down the grid of divisors anyway. In particular, it is somewhat unavoidable that we need to compute <span class="math-container">$250^{2^k}$</span> for <span class="math-container">$k=1,2,\ldots$</span> until either <span class="math-container">$k=6$</span> or we hit <span class="math-container">$1$</span> (or one step earlier <span class="math-container">$-1$</span>), which is easily done by repeated squaring:
<span class="math-container">$$ 250^2=62500\equiv 323$$</span>
<span class="math-container">$$ 323^2=104329\equiv 487$$</span>
<span class="math-container">$$ 487^2=237169\equiv 640\equiv -1$$</span>
and voila! Apparently, the order of <span class="math-container">$250$</span> is <span class="math-container">$16$</span> (and we fortunately need not check the numbers <span class="math-container">$250^{2^k5}$</span>).</p>
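<p>These squarings, and the resulting order, are easy to verify by machine (an independent check of the computation above, my own addition):</p>

```python
def order_mod(a, p):
    """Smallest k >= 1 with a^k congruent to 1 modulo p (brute force)."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

squares = [pow(250, 2 ** k, 641) for k in range(1, 5)]   # 250^2, 250^4, 250^8, 250^16
```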
|
1,323,317 | <p>I have to compute the following quantity:</p>
<p>$$
1) \sum\limits_{k=0}^{n} \binom{n}{k}k2^{n-k}
$$</p>
<p>Moreover, I have to give an upper bound for the following quantity:</p>
<p>$$
2) \sum\limits_{k=1}^{n-2} \binom{n}{k}\frac{k}{n-k}
$$</p>
<p>As regards 1), I see that $\binom{n}{k}k2^{n-k}=\frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}$, i.e. I obtain</p>
<p>$$
\sum\limits_{k=0}^{n} \frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}=\sum\limits_{k=0}^{n} \binom{n}{k-1}(n-k+1)2^{n-k}
$$</p>
<p>But it seems strange! As regards 2), I don't know!</p>
| Brian M. Scott | 12,042 | <p>HINT for the first problem: $\binom{n}kk2^{n-k}$ is the number of ways to start with $n$ people, choose a team of size $k$, choose one of the team members to be captain, and then choose some subset of the remaining $n-k$ people to be substitutes. Thus, $\sum_{k=0}^n\binom{n}kk2^{n-k}$ is the total number of ways to do this allowing the size of the team to be arbitrary.</p>
<p>You can get the same effect by choosing one person from the $n$ to be captain, then dividing the remaining $n-1$ people into three groups: the other members of the team, the substitutes, and everyone else.</p>
<ul>
<li>How many ways are there to choose one person from the $n$ to be captain? </li>
<li>How many ways are there to divide $n-1$ people into three sets, any of which can be empty?</li>
</ul>
<p>Combine the answers to those two questions properly, and you’ll have a closed form for $\sum_{k=0}^n\binom{n}kk2^{n-k}$ as a function of $n$.</p>
<p>There are lots of possible upper bounds for the second summation. One, not especially good, can be obtained by noting that</p>
<p>$$\sum_{k=1}^{n-2}\binom{n}k\frac{k}{n-k}=\sum_{k=1}^{n-2}\binom{n-1}{k-1}\frac{n}{n-k}=n\sum_{k=0}^{n-3}\binom{n-1}k\frac1{n-1-k}$$</p>
<p>and that $\frac1{n-1-k}\le 1$ for $k=0,\ldots,n-3$.</p>
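<p>Both the resulting closed form $n\,3^{n-1}$ for the first sum and the bound $n\,2^{n-1}$ for the second are easy to sanity-check numerically (my own sketch):</p>

```python
from math import comb

for n in range(3, 12):
    s1 = sum(comb(n, k) * k * 2 ** (n - k) for k in range(n + 1))
    assert s1 == n * 3 ** (n - 1)       # captain (n ways) times a 3-way split of the rest
    s2 = sum(comb(n, k) * k / (n - k) for k in range(1, n - 1))
    assert s2 <= n * 2 ** (n - 1)       # the (not especially tight) upper bound
done = True
```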
|
1,323,317 | <p>I have to compute the following quantity:</p>
<p>$$
1) \sum\limits_{k=0}^{n} \binom{n}{k}k2^{n-k}
$$</p>
<p>Moreover, I have to give an upper bound for the following quantity:</p>
<p>$$
2) \sum\limits_{k=1}^{n-2} \binom{n}{k}\frac{k}{n-k}
$$</p>
<p>As regards 1), I see that $\binom{n}{k}k2^{n-k}=\frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}$, i.e. I obtain</p>
<p>$$
\sum\limits_{k=0}^{n} \frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}=\sum\limits_{k=0}^{n} \binom{n}{k-1}(n-k+1)2^{n-k}
$$</p>
<p>But it seems strange! As regards 2), I don't know!</p>
| Alex R. | 22,064 | <p>For the first one, $(x+2)^n=\sum_{k=0}^n \binom{n}{k}x^k2^{n-k}$. Now differentiate in $x$:</p>
<p>$n(x+2)^{n-1}=\sum_{k=1}^n \binom{n}{k}kx^{k-1}2^{n-k}.$ </p>
<p>Now plug in $x=1$. </p>
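<p>Plugging in $x=1$ gives $\sum_{k=0}^n\binom{n}{k}k2^{n-k}=n\cdot 3^{n-1}$, which a quick exact-integer check confirms (a sketch added for illustration; the helper name is mine):</p>

```python
from math import comb

def binomial_sum(n):
    # sum_{k=0}^{n} C(n,k) * k * 2^(n-k)
    return sum(comb(n, k) * k * 2 ** (n - k) for k in range(n + 1))

for n in range(1, 16):
    # n*(x+2)^(n-1) evaluated at x = 1
    assert binomial_sum(n) == n * 3 ** (n - 1)
```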
<p>For the second one, $(x+y)^n=\sum_{k=0}^n \binom{n}{k}x^ky^{n-k}$. Notice that:</p>
<p>$$\frac{(x+y)^n}{y}=\sum_{k=0}^n \binom{n}{k}x^ky^{n-k-1}.$$</p>
<p>Perform partial differentiation in $x$, <em>integrate</em> in $y$, and plug in $x,y=1$. Then correct for any boundary issues in the sum indices. </p>
|
716,859 | <p>Define the mean of order $p$ of $a$ and $b$ as $s_p(a,b)=\left(\frac{a^p+b^p}{2}\right)^{1/p}$.</p>
<p>I have to find the limit of the sequence $s_n(a,b)$. I already know this sequence is bounded above by $b$ (from a previous question) and if I assume the limit exists I can show it is $b$. What I cannot show is that the sequence is increasing. Could someone assist me or show me how to prove this? </p>
| Pi89 | 130,217 | <p>Remark that for two positive numbers $a$ and $b$, $a^p+b^p \underset{p\to+\infty}{\sim} \max(a,b)^p$.
This should help answer the question.</p>
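<p>Concretely, $s_p(a,b)=\left(\frac{a^p+b^p}{2}\right)^{1/p}$ stays below $\max(a,b)$ and climbs toward it; a small numerical illustration (the sample values are arbitrary, chosen only to avoid floating-point overflow):</p>

```python
def s(p, a, b):
    # mean of order p: ((a^p + b^p)/2)^(1/p)
    return ((a ** p + b ** p) / 2) ** (1.0 / p)

a, b = 1.2, 1.5  # so max(a, b) = 1.5
# increasing in p and bounded above by max(a, b) ...
assert s(1, a, b) < s(10, a, b) < s(100, a, b) < b
# ... and converging to max(a, b) as p grows
assert abs(s(1000, a, b) - b) < 1e-2
```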
|
4,436,210 | <p>I have been given this exercise: Calculate the double integral:</p>
<blockquote>
<p><span class="math-container">$$\iint_D\frac{\sin(y)}{y}dxdy$$</span>
Where <span class="math-container">$D$</span> is the area enclosed by the lines: <span class="math-container">$y=2$</span>, <span class="math-container">$y=1$</span>, <span class="math-container">$y=x$</span>, <span class="math-container">$2y=x$</span> (not <span class="math-container">$y = 2x$</span>).</p>
</blockquote>
<p>Visualising <span class="math-container">$D$</span> is easy. You can split D in two sub areas and get the bounds for the integrals. The problem I face is:</p>
<p>Let's split D in two sub areas, <span class="math-container">$D_1$</span> and <span class="math-container">$D_2$</span>. <span class="math-container">$D_1$</span> is the left, upright triangle of <span class="math-container">$D$</span> and <span class="math-container">$D_2$</span> is the right, upside down one.</p>
<p>Then <span class="math-container">$D_1$</span> is defined by the lines <span class="math-container">$y=1$</span>, <span class="math-container">$y=x$</span>, and <span class="math-container">$x=2$</span>.</p>
<p>You can express the area in a <span class="math-container">$y$</span>-normal form as:
<span class="math-container">$$\begin{align}
1 \le y \le 2\\
y \le x \le 2
\end{align}$$</span>
then the integral can be written as
<span class="math-container">$$ \begin{align}
&\int_1^2\int_y^2\frac{\sin(y)}{y}dxdy \\
&=\int_1^2\frac{\sin(y)}{y}\,[x]^2_y \,dy \\
&=\int_1^2\frac{2\sin(y)}{y} - \sin(y)dy \\
&=2\int_1^2\frac{\sin(y)}{y}dy -\int_1^2 \sin(y)dy \\
\end{align}$$</span></p>
<p>The second integral is trivial, but the first one is not. I have tried substituting and integrating by parts, but to no avail. What am I doing wrong?</p>
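<p>In fact $\int \frac{\sin y}{y}\,dy$ has no elementary antiderivative (it is the sine integral $\operatorname{Si}$), so the best I can do is evaluate it numerically; a rough sketch (the Simpson-rule helper below is a generic stand-in with illustrative names):</p>

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += f(a + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

part1 = 2 * simpson(lambda y: math.sin(y) / y, 1, 2)  # 2 * Int_1^2 sin(y)/y dy
part2 = math.cos(1) - math.cos(2)                     # Int_1^2 sin(y) dy
value = part1 - part2
```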
<p>Any answer is really appreciated.</p>
| Arturo Magidin | 742 | <p>You cannot just look at the final product if you did not carefully note steps in which you were assuming facts about the value of <span class="math-container">$a$</span>. So let us take a careful look at the Gaussian elimination process.</p>
<p>Starting from
<span class="math-container">$$\left(\begin{array}{rrr|r}
1 & 2 & -3 & 4\\
3 & -1 & 5 & 2\\
4 & 1 & a^2-14 & a+2
\end{array}\right)$$</span>
we first subtract three times the first row from the second, and four times the first row from the third row. We get:
<span class="math-container">$$\left(\begin{array}{rrr|r}
1 & 2 & -3 & 4\\
0 & -7 & 14 & -10\\
0 & -7 & a^2-2 & a-14
\end{array}\right)$$</span>
Then subtracting the second row from the third row, we obtain:
<span class="math-container">$$\left(\begin{array}{rrr|r}
1 & 2 & -3 & 4\\
0 & -7 & 14 & -10\\
0 & 0 & a^2-16 & a-4
\end{array}\right).$$</span>
At this point: if <span class="math-container">$a=-4$</span>, then the last row becomes
<span class="math-container">$$\left(\begin{array}{rrr|r}
0 & 0 & 0 & -8
\end{array}\right).$$</span>
So the system has no solutions.</p>
<p>If <span class="math-container">$a=4$</span>, on the other hand, the last row is
<span class="math-container">$$\left(\begin{array}{ccc|c}
0 & 0 & 0 & 0
\end{array}\right)$$</span>
and your matrix has rank <span class="math-container">$2$</span>, giving you infinitely many solutions.</p>
<p>And if <span class="math-container">$a\neq 4$</span> and <span class="math-container">$a\neq -4$</span>, then you have a matrix of rank <span class="math-container">$3$</span>, so you will get exactly one solution. This matches what the solutions say.</p>
<p>I suspect what happened is that you proceeded to divide the second row by <span class="math-container">$-7$</span> (no problem there):
<span class="math-container">$$\left(\begin{array}{rrr|r}
1 & 2 & -3 & 4\\
0 & 1 & -2 & \frac{10}{7}\\
0 & 0 & a^2-16 & a-4
\end{array}\right)$$</span>
and then divided the last row by <span class="math-container">$a^2-16$</span>. But this last step <strong>requires</strong> the assumption that <span class="math-container">$a^2-16\neq 0$</span>. Thus, you are implicitly saying "and by the way, <span class="math-container">$a\neq 4$</span> and also <span class="math-container">$a\neq -4$</span>." Nothing you get after that can be used in the case where <span class="math-container">$a=4$</span> or where <span class="math-container">$a=-4$</span>. You need to consider those cases separately.</p>
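<p>The three cases can also be checked mechanically; a small sketch in exact rational arithmetic (the row-reduction helper is mine, for illustration only):</p>

```python
from fractions import Fraction

def rank(M):
    # naive Gaussian elimination over the rationals; returns the rank
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def ranks(a):
    A = [[1, 2, -3], [3, -1, 5], [4, 1, a * a - 14]]
    b = [4, 2, a + 2]
    return rank(A), rank([row + [v] for row, v in zip(A, b)])

assert ranks(-4) == (2, 3)  # inconsistent system: no solutions
assert ranks(4) == (2, 2)   # rank 2 with 3 unknowns: infinitely many
assert ranks(0) == (3, 3)   # full rank: exactly one solution
```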
|
79,726 | <p>Let $R$ be a commutative ring with unity. Let $M$ be a free (unital) $R$-module.</p>
<p>Define a <em>basis</em> of $M$ as a generating, linearly independent set.</p>
<p>Define the <em>rank</em> of $M$ as the cardinality of a basis of $M$ (as we know commutative rings have IBN, so this is well defined).</p>
<p>A <em>minimal generating set</em> is a generating set with cardinality $\inf\{\#S:S\subset M, M=\langle S \rangle\}$.</p>
<p>Must a minimal generating set have cardinality the rank of $M$?</p>
| Arturo Magidin | 742 | <p>In <em>Some remarks on the invariant basis property</em>, Topology <strong>5</strong> (1966), pp. 215-228, MR <strong>33</strong> #5676, P.M. Cohn proved that there are rings in which the notion of rank is well defined, but which admit free modules of rank $t$ that can be generated by fewer than $t$ elements. However, he also proved that this is impossible if the ring is Noetherian, Artinian, or commutative. I don't have access to that paper, so I don't know how easy it is to prove, but in your situation (commutative rings), the answer is therefore "yes". </p>
|
152,405 | <p>This question complement a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p>
<p>I am looking for a list of</p>
<h3>Major theorems in mathematics whose proofs are very hard but was not dramatically improved over the years.</h3>
<p>(So a new, dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard and dramatic improvements were found, but the proof remained very hard, may also be included.</p>
<p>To limit the scope of the question:</p>
<p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment but for an answer wait please to 2020.)</p>
<p>b) Major results only.</p>
<p>c) Results with very hard proofs.</p>
<p>As usual, one example (or a few related ones) per post.</p>
<p>A similar question was asked before <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to 100-years old or so theorems.)</p>
<h2>Answers</h2>
<p>(Updated Oct 3 '15)</p>
<p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincaré, 19th century)</p>
<p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p>
<p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p>
<p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p>
<p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p>
<p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p>
<p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p>
<p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p>
<p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p>
<p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p>
<p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p>
<p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p>
<p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p>
<p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p>
<p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p>
<p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in $\mathbb{R}^3$, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p>
<p><strong>For the following answers some issues were raised in the comments.</strong></p>
<p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p>
<p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p>
<p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p>
<p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p>
<p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p>
<p><strong>Additional answer:</strong></p>
<p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
| Victor Protsak | 5,740 | <p><a href="http://www.math.harvard.edu/~yihang/GZSeminarnotes/talk%201.pdf"><strong>Gross-Zagier formula</strong></a> (1986) relating the heights of Heegner points on elliptic curves and the derivatives of $L$-series was a major source of progress in number theory in the last 25 years (cutting pretty close here!). Once again, it is my understanding that the original proof has not been dramatically improved. </p>
|
152,405 | <p>This question complement a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p>
<p>I am looking for a list of</p>
<h3>Major theorems in mathematics whose proofs are very hard but was not dramatically improved over the years.</h3>
<p>(So a new, dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard and dramatic improvements were found, but the proof remained very hard, may also be included.</p>
<p>To limit the scope of the question:</p>
<p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment but for an answer wait please to 2020.)</p>
<p>b) Major results only.</p>
<p>c) Results with very hard proofs.</p>
<p>As usual, one example (or a few related ones) per post.</p>
<p>A similar question was asked before <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to 100-years old or so theorems.)</p>
<h2>Answers</h2>
<p>(Updated Oct 3 '15)</p>
<p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincaré, 19th century)</p>
<p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p>
<p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p>
<p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p>
<p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p>
<p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p>
<p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p>
<p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p>
<p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p>
<p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p>
<p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p>
<p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p>
<p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p>
<p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p>
<p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p>
<p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in $\mathbb{R}^3$, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p>
<p><strong>For the following answers some issues were raised in the comments.</strong></p>
<p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p>
<p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p>
<p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p>
<p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p>
<p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p>
<p><strong>Additional answer:</strong></p>
<p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
| Wlodek Kuperberg | 36,904 | <p>The <a href="http://en.wikipedia.org/wiki/Kepler_conjecture">Sphere packing problem in $\mathbb{R}^3$, <em>a.k.a.</em> the Kepler Conjecture.</a> Although the first accepted proof was published just about 10 years ago, the conjecture is very old, and there were several unsuccessful attempts at it for quite a long time. It seems unlikely that Hales' heavily computer-aided proof will be dramatically improved in the foreseeable future.</p>
|
152,405 | <p>This question complement a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p>
<p>I am looking for a list of</p>
<h3>Major theorems in mathematics whose proofs are very hard but was not dramatically improved over the years.</h3>
<p>(So a new, dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard and dramatic improvements were found, but the proof remained very hard, may also be included.</p>
<p>To limit the scope of the question:</p>
<p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment but for an answer wait please to 2020.)</p>
<p>b) Major results only.</p>
<p>c) Results with very hard proofs.</p>
<p>As usual, one example (or a few related ones) per post.</p>
<p>A similar question was asked before <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to 100-years old or so theorems.)</p>
<h2>Answers</h2>
<p>(Updated Oct 3 '15)</p>
<p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincaré, 19th century)</p>
<p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p>
<p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p>
<p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p>
<p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p>
<p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p>
<p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p>
<p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p>
<p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p>
<p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p>
<p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p>
<p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p>
<p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p>
<p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p>
<p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p>
<p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in $\mathbb{R}^3$, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p>
<p><strong>For the following answers some issues were raised in the comments.</strong></p>
<p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p>
<p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p>
<p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p>
<p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p>
<p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p>
<p><strong>Additional answer:</strong></p>
<p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
| Sasha Patotski | 32,741 | <p>As far as I know, the Quillen equivalence between simplicial sets and topological spaces is one of such <a href="http://ncatlab.org/nlab/show/model+structure+on+simplicial+sets#classical_model_structure" rel="nofollow">theorems</a>.</p>
|
152,405 | <p>This question complement a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p>
<p>I am looking for a list of</p>
<h3>Major theorems in mathematics whose proofs are very hard but was not dramatically improved over the years.</h3>
<p>(So a new, dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard and dramatic improvements were found, but the proof remained very hard, may also be included.</p>
<p>To limit the scope of the question:</p>
<p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment but for an answer wait please to 2020.)</p>
<p>b) Major results only.</p>
<p>c) Results with very hard proofs.</p>
<p>As usual, one example (or a few related ones) per post.</p>
<p>A similar question was asked before <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to 100-years old or so theorems.)</p>
<h2>Answers</h2>
<p>(Updated Oct 3 '15)</p>
<p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincaré, 19th century)</p>
<p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p>
<p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p>
<p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p>
<p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p>
<p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p>
<p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p>
<p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p>
<p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p>
<p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p>
<p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p>
<p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p>
<p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p>
<p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p>
<p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p>
<p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in $\mathbb{R}^3$, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p>
<p><strong>For the following answers some issues were raised in the comments.</strong></p>
<p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p>
<p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p>
<p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p>
<p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p>
<p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p>
<p><strong>Additional answer:</strong></p>
<p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
| Abdelmalek Abdesselam | 7,410 | <p>The construction of the $\Phi^4_3$ quantum field theory model.
This was done in the early seventies by <a href="http://isites.harvard.edu/fs/docs/icb.topic1494462.files/Phi%5E4_3_Online_1973%20Fort%20der%20Physik.pdf" rel="noreferrer">Glimm-Jaffe</a>, <a href="http://projecteuclid.org/euclid.cmp/1103859849" rel="noreferrer">Feldman</a>, <a href="http://www.sciencedirect.com/science/article/pii/0003491676902232" rel="noreferrer">Feldman-Osterwalder</a>, <a href="http://www.numdam.org/item?id=AIHPA_1976__24_2_95_0" rel="noreferrer">Magnen-Seneor</a>.</p>
<p>This model is related to current research since it should provide the invariant measure for the stochastic quantization SPDE in three spatial dimensions (see e.g the work of <a href="http://link.springer.com/article/10.1007/s00222-014-0505-4" rel="noreferrer">Hairer</a>, <a href="http://arxiv.org/abs/1310.6869" rel="noreferrer">Catellier-Chouk</a> and <a href="http://link.springer.com/article/10.1007/s00023-015-0408-y" rel="noreferrer">Kupiainen</a>).</p>
|
28,347 | <p>I often end up with a function that contains the term $1/(1 + x^2/y^2)$, and I need to evaluate this in the limit $y\rightarrow 0$. By hand, I can rewrite this as $y^2/(y^2 + x^2)$, but how can I tell <em>Mathematica</em> to make such a simplification?</p>
<p>I have tried using <code>1/(1 + x^2/y^2) // Simplify</code> and <code>Expand</code>, but neither works as intended (actually, they don't change anything).</p>
| Alexei Boulbitch | 788 | <pre><code> expr = 1/(1 + x^2/y^2)
(* 1/(1 + x^2/y^2) *)
Simplify[expr, ComplexityFunction -> (Count[#, _Power[_, -2]] &)]
(* y^2/(x^2 + y^2) *)
</code></pre>
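<p>For readers working outside <em>Mathematica</em>, a rough SymPy analogue (an added sketch, not part of the answer above): <code>cancel</code> rewrites the expression as a single ratio of polynomials, after which the $y\to 0$ limit is immediate.</p>

```python
import sympy as sp

x, y = sp.symbols("x y")
expr = 1 / (1 + x**2 / y**2)

combined = sp.cancel(expr)  # puts the expression over a common denominator
assert sp.simplify(combined - y**2 / (x**2 + y**2)) == 0

# the y -> 0 limit the question ultimately needs (generic x != 0)
assert sp.limit(combined, y, 0) == 0
```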
|
2,069,507 | <p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p>
<p>Let's say we have a parallelogram $\text{ABCD}$.</p>
<p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between two parallel lines $\text{AB}$ and $\text{CD}$, So, $$ar\triangle \text{ADC}=ar\triangle \text{BCD}$$
Now the things those should be noticed are that:</p>
<p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p>
<p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p>
<p>Now in two different triangles, two sides are equal and their areas are also equal, so the third side is also equal, i.e. $\text{AC}=\text{BD}$, which makes this parallelogram a rectangle.</p>
<p>Isn't this a claim that every parallelogram is a rectangle, or that a parallelogram does not exist?</p>
| David K | 139,123 | <p>As already hinted in comments and answers, you appear to have
applied Heron's formula
to the areas of the triangles $\triangle ADC$ and $\triangle BCD$.
Then observing that the only variables in Heron's formula are
the three sides of the triangle and its area,
and that three of these quantities in the formula for $\triangle ADC$
are equal to the corresponding quantities in the formula
for $\triangle BCD$, you concluded that the fourth quantity
must also be the same.</p>
<p>I will try to show that Heron's formula not only allows for multiple
solutions but (in the case of a non-rectangular parallelogram)
gives exactly the two distinct lengths of the diagonals.</p>
<p>One usually sees Heron's formula in a symmetric form such as this:
\begin{align}
s &= \tfrac12(a+b+c),\\
\triangle &= \sqrt{s(s-a)(s-b)(s-c)}.
\end{align}
This format is slightly deceptive since the second line shows
only one explicit occurrence of the variable $c,$ but there are
four implicit occurrences of $c$, one in each place where $s$ occurs.
A more explicit formula is
$$
\triangle = \tfrac14 \sqrt{(a+b+c)(b+c-a)(c+a-b)(a+b-c)}.
$$</p>
<p>Algebra shows that the quantity inside the radical can be regrouped
as follows:
$$
\triangle = \tfrac14 \sqrt{(2ab)^2 - (a^2+b^2-c^2)^2}. \tag1
$$
Now, given
$\triangle = \text{Area}(\triangle ADC) = \text{Area}(\triangle BCD),$
$a = AD = BC,$ and $b = CD,$
suppose that $c$ is a solution to Equation $1.$
In order for the quantity under the radical in Equation $1$
to be positive, it must be true that $c^2\leq 2(a^2+b^2).$
Let
$$
c' = \sqrt{2(a^2+b^2) - c^2}. \tag2
$$
Squaring both sides of Equation $2$ and rearranging terms,
$$
a^2 + b^2 - c'^2 = -(a^2+b^2-c^2) \tag3
$$
and the squares of both sides of Equation $3$ are equal.
Hence we can substitute $(a^2 + b^2 - c'^2)^2$ for
$(a^2 + b^2 - c^2)^2$ in Equation $1$,
that is, $c'$ is also a solution of that equation.</p>
<p>It is still <em>possible</em> that $c'$ is the same as $c.$
This happens precisely when $c^2 = a^2 + b^2,$
that is, when $\triangle ADC$ and $\triangle BCD$ are right triangles.
In that case the parallelogram <em>is</em> a rectangle, its diagonals are equal,
and the two triangles are congruent.
But any value of $c$ such that $\lvert a - b \rvert < c < a+b$
may be a diagonal of a parallelogram with sides $a$ and $b,$
and for every such value such that $c \neq \sqrt{a^2 + b^2}$
there is a distinct value $c'$ that is the length of the other diagonal.</p>
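<p>Numerically, the relation in Equation $(2)$ is exactly the parallelogram law: the two diagonals of a parallelogram with sides $a,b$ satisfy $c^2+c'^2=2(a^2+b^2),$ and Equation $(1)$ assigns both the same triangle area. A quick illustrative check (the sample sides and angle are arbitrary):</p>

```python
from math import sqrt, cos, pi

def heron(a, b, c):
    # triangle area in the form of equation (1)
    return 0.25 * sqrt((2 * a * b) ** 2 - (a * a + b * b - c * c) ** 2)

a, b, theta = 3.0, 5.0, pi / 5                        # a non-rectangular parallelogram
c = sqrt(a * a + b * b - 2 * a * b * cos(theta))      # one diagonal (law of cosines)
c2 = sqrt(2 * (a * a + b * b) - c * c)                # the c' of equation (2)
other = sqrt(a * a + b * b + 2 * a * b * cos(theta))  # the actual second diagonal

assert abs(c2 - other) < 1e-9                         # c' is the other diagonal
assert abs(heron(a, b, c) - heron(a, b, c2)) < 1e-9   # same area for both triangles
assert abs(c - c2) > 0.1                              # yet the diagonals differ
```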
|
1,325,563 | <p>Can someone show me:</p>
<blockquote>
<p>If $x$ is a real number, then $\cos^2(x)+\sin^2(x)= 1$.</p>
<p>Is it true that $\cos^2(z)+\sin^2(z)=1$, where $z$ is a complex variable?</p>
</blockquote>
<p>Note: checking [this] in Wolfram Alpha showed that it's true.</p>
<p>Thank you for your help</p>
| Mercy King | 23,304 | <p>$$
\cos^2z+\sin^2z=(\cos z+i\sin z)(\cos z-i\sin z)=e^{iz}e^{-iz}=e^{iz-iz}=e^0=1.
$$</p>
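<p>A numerical spot-check at a few complex points (an added illustration, using Python's <code>cmath</code>):</p>

```python
import cmath

for z in [1 + 2j, -3.5j, 2 + 0j, -1 - 1j]:
    # the identity itself
    assert abs(cmath.cos(z) ** 2 + cmath.sin(z) ** 2 - 1) < 1e-6
    # and the factorisation used above
    prod = (cmath.cos(z) + 1j * cmath.sin(z)) * (cmath.cos(z) - 1j * cmath.sin(z))
    assert abs(prod - 1) < 1e-6
```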
|
1,325,563 | <p>Can someone show me:</p>
<blockquote>
<p>If $x$ is a real number, then $\cos^2(x)+\sin^2(x)= 1$.</p>
<p>Is it true that $\cos^2(z)+\sin^2(z)=1$, where $z$ is a complex variable?</p>
</blockquote>
<p>Note: checking [this] in Wolfram Alpha showed that it's true.</p>
<p>Thank you for your help</p>
| Ruvi Lecamwasam | 77,395 | <p>You can use the <a href="https://www.wikiwand.com/en/Identity_theorem#/An_improvement">identity theorem</a>. As they are just sums of exponentials, $\sin(z)$ and $\cos(z)$ are holomorphic, and on the real axis $\sin^2(x)+\cos^2(x)=1$. As $\mathbb{R}$ is a set with an accumulation point (namely any point in $\mathbb{R}$), they agree everywhere. </p>
<p>Mercy's answer is a bit simpler, but this is a good principle to keep in mind when trying to show other identities that are true for real numbers.</p>
|
3,618,791 | <p>Given <span class="math-container">$p\in[0,1]$</span>, prove or disprove that the sum
<span class="math-container">$$\sum_{n=k}^\infty\sum_{j=0}^k\left(\matrix{n\\j}\right)p^j(1-p)^{n-j}$$</span>
is bounded by a constant that does not depend on <span class="math-container">$k$</span>.</p>
<p>The terms <span class="math-container">$\left(\matrix{n\\j}\right)p^j(1-p)^{n-j}$</span> do remind me of the binomial expansion. But using that, the most naive estimate is
<span class="math-container">$$\sum_{j=0}^k\left(\matrix{n\\j}\right)p^j(1-p)^{n-j}\leq\sum_{j=0}^n\left(\matrix{n\\j}\right)p^j(1-p)^{n-j}=1$$</span>
And then summing over <span class="math-container">$n$</span> still gives <span class="math-container">$\infty$</span>. How should I make the correct estimate?</p>
<p>p.s. <strong>I am not absolutely sure that this holds</strong>. There is a proof that I was reading where such an estimate could yield the final result. I am a bit confused now because one comment says this is wrong while an answer seems to have proved this.</p>
| Anton Vrdoljak | 744,799 | <p>(a) Given sum is nothing else than: </p>
<p><span class="math-container">$\sum_{r=0}^{10}u_r=u_0+u_1+u_2+u_3+u_4+u_5+u_6+u_7++u_8+u_9+u_{10}$</span>.</p>
<p>Next, we have <span class="math-container">$u_n=0.5^n$</span>, so we have to find:</p>
<p><span class="math-container">$0.5^0 + 0.5^1 + 0.5^2 + 0.5^3 + 0.5^4 + 0.5^5 + 0.5^6 + 0.5^7 + 0.5^8 + 0.5^9 + 0.5^{10}$</span>.</p>
<p>It is easy to recognize that the given sequence <span class="math-container">$u_n$</span> is a geometric sequence, so we have to find the sum of its first <span class="math-container">$11$</span> terms. To find such a sum we can use the well-known formula, i.e.</p>
<p><span class="math-container">$S_{11} = \frac{1 - (\frac{1}{2})^{11}}{1-\frac{1}{2}} = ... = \frac{2047}{1024} = \frac{2048-1}{2^{10}} = \frac{2^{11} - 2^0}{2^{10}}$</span>.</p>
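<p>The closed form is easy to double-check with exact arithmetic (an illustrative Python sketch):</p>

```python
from fractions import Fraction

# Sum the first 11 terms of u_n = 0.5^n exactly.
s = sum(Fraction(1, 2) ** r for r in range(11))

# Geometric-series formula: S_11 = (1 - (1/2)^11) / (1 - 1/2).
assert s == (1 - Fraction(1, 2) ** 11) / (1 - Fraction(1, 2))
assert s == Fraction(2047, 1024)
```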
|
384,318 | <p>Let $X$ be a topological space and let $A,B\subseteq X$ be closed in $X$ such that $A\cap B$ and $A\cup B$ are connected (in subspace topology) show that $A,B$ are connected (in subspace topology).</p>
<p>I would appreciate a hint towards the solution :)</p>
| Abel | 71,157 | <p>The general idea of a proof could go as follows. I'll leave the details to you:</p>
<p>Let $\mathbf{2}$ be the two point space with discrete topology and let $f\colon A\to \mathbf{2}$ be a continuous function. $f|_{A\cap B}\colon A\cap B\to\mathbf{2}$ is a continuous function and $A\cap B$ is connected, thus $f|_{A\cap B}$ is constant. Let $g\colon B\to\mathbf{2}$ be the constant function such that $g|_{A\cap B}=f|_{A\cap B}$, then we have a continuous function $f\cup g\colon A\cup B\to\mathbf{2}$ and by connectedness of $A\cup B$, $f\cup g$ is constant. This shows that $f = (f\cup g)|_A$ must be constant as well.</p>
|
3,712,699 | <p>Let <span class="math-container">$K$</span> be a number field with ring of integers <span class="math-container">$\mathcal{O}_K$</span> and let <span class="math-container">$p$</span> be a rational prime. Let <span class="math-container">$(p) = \mathfrak{p}_1^{e_1}\ldots\mathfrak{p}_r^{e_r}$</span> be the prime factorisation of (p) over <span class="math-container">$\mathcal{O}_K$</span>, and suppose that <span class="math-container">$\alpha \in \mathfrak{a} = \mathfrak p_1\ldots \mathfrak{p}_r$</span>. Then show that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha) \equiv 0$</span> (mod <span class="math-container">$p$</span>). </p>
<p>I'd really appreciate any help in proving this. Thanks for reading! </p>
<hr>
<p><strong>Special Case</strong></p>
<p>I am able to prove the result in the case where <span class="math-container">$K/\mathbb{Q}$</span> is a Galois extension. In that case, each embedding <span class="math-container">$\sigma$</span> of <span class="math-container">$K$</span> is actually a <span class="math-container">$\mathbb{Q}$</span>-automorphism, and <span class="math-container">$\sigma$</span> permutes the <span class="math-container">$\mathfrak{p}_i$</span>, hence <span class="math-container">$\sigma(\alpha) \in \mathfrak{a}$</span>, so clearly <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha) = \sum_\sigma \sigma(\alpha) \in \mathfrak{a} \cap \mathbb{Z} \subseteq p\mathbb{Z}$</span>.</p>
<p>However, if <span class="math-container">$K/\mathbb{Q}$</span> is not Galois, the argument fails because the embeddings no longer permute the <span class="math-container">$\mathfrak{p}_i$</span>, so the conjugates of <span class="math-container">$\alpha$</span> are no longer in <span class="math-container">$\mathfrak{a}$</span>. </p>
<hr>
<p><strong>Other Ideas</strong></p>
<p>I'm aware that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}$</span> is the trace of the <span class="math-container">$\mathbb{Q}$</span>-linear transformation of <span class="math-container">$K$</span> given by <span class="math-container">$v \mapsto \alpha v$</span>, so I have thought about the matrix of this linear transformation with respect to an arbitrary integral basis, but haven't been able to make much headway with that.</p>
<p>We also have that <span class="math-container">$\alpha^e \in (p)$</span>, where <span class="math-container">$e = \max_i \{e_i\}$</span>, so that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha^e)\in (p) \cap \mathbb{Z} = p\mathbb{Z}$</span>, so I've thought about trying to relate the trace of <span class="math-container">$\alpha$</span> to the trace of <span class="math-container">$\alpha^e$</span>, but also to no avail. </p>
| Wojowu | 127,263 | <p>One can prove the general case by essentially reducing to the Galois case (though see a remark at the end). Let <span class="math-container">$L/\mathbb Q$</span> be a Galois extension containing <span class="math-container">$K$</span>. Any prime <span class="math-container">$\mathfrak q$</span> lying above <span class="math-container">$p$</span> in <span class="math-container">$L$</span> contains one of the primes <span class="math-container">$\mathfrak p_i$</span> from <span class="math-container">$K$</span>, and hence <span class="math-container">$\mathfrak q$</span> contains <span class="math-container">$\alpha$</span>. By your own argument, any conjugate of <span class="math-container">$\alpha$</span> is also in <span class="math-container">$\mathfrak q$</span>, and so the sum <span class="math-container">$A$</span> of conjugates of <span class="math-container">$\alpha$</span> is in <span class="math-container">$\mathfrak q\cap\mathbb Z=p\mathbb Z$</span>. We have <span class="math-container">$A=\text{Tr}_{\mathbb Q(\alpha)/\mathbb{Q}}(\alpha)$</span>, and since <span class="math-container">$K$</span> contains <span class="math-container">$\mathbb Q(\alpha)$</span>, we have that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha)=[K:\mathbb Q(\alpha)]\text{Tr}_{\mathbb Q(\alpha)/\mathbb{Q}}(\alpha)\in p\mathbb Z$</span>.</p>
<p>Now for the promised remark: my first proof attempt tried to literally reduce to the Galois case, which would tell us (in the notation above) that <span class="math-container">$\text{Tr}_{L/\mathbb{Q}}(\alpha)\in p\mathbb Z$</span>, and then to deduce that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha)\in p\mathbb Z$</span>. However, the two differ by a factor of <span class="math-container">$[L:K]$</span>, which itself might be divisible by <span class="math-container">$p$</span>, and that is a problem. My proof avoids this by directly looking at <span class="math-container">$A$</span>, which is the sum of conjugates without any unnecessary repetitions.</p>
|
4,646,715 | <blockquote>
<p>Let <span class="math-container">$A\in \operatorname{Mat}_{2\times 2}(\Bbb{R})$</span> with eigenvalues <span class="math-container">$\lambda\in (1,\infty)$</span> and <span class="math-container">$\mu\in (0,1)$</span>. Define <span class="math-container">$$T:S^1\rightarrow S^1;~~x\mapsto \frac{Ax}{\|Ax\|}$$</span>
I need to show that <span class="math-container">$T$</span> has 4 fixed points.</p>
</blockquote>
<p>My idea was the following. Since <span class="math-container">$\mu, \lambda$</span> are eigenvalues there exists, <span class="math-container">$x,y\neq 0$</span> such that <span class="math-container">$Ax=\mu x$</span> and <span class="math-container">$Ay=\lambda y$</span>.Then I claim that <span class="math-container">$x,y$</span> are fixed points:<span class="math-container">$$Tx=\frac{Ax}{\|Ax\|}=\frac{\mu x}{\|\mu x\|}=\operatorname{sign}(\mu)\frac{x}{\|x\|}=\frac{x}{\|x\|}=x$$</span> similarly one can show that <span class="math-container">$Ty=y$</span>. But then I don't see where the other two fixed points should come from?</p>
| Anne Bauval | 386,889 | <p>Your two eigenvectors <span class="math-container">$x,y$</span> are only assumed to be non-zero, so they are not in <span class="math-container">$S^1.$</span></p>
<p>A point <span class="math-container">$z\in S^1$</span> is fixed by <span class="math-container">$T$</span> iff <span class="math-container">$Az=\|Az\|z,$</span> i.e. iff <span class="math-container">$z$</span> is an eigenvector for some positive eigenvalue.</p>
<p>Therefore, the fixed points of <span class="math-container">$T$</span> are exactly the <em>unit</em> eigenvectors of <span class="math-container">$A,$</span> i.e. the <span class="math-container">$4$</span> vectors
<span class="math-container">$$\frac x{\|x\|},\;-\frac x{\|x\|},\;
\frac y{\|y\|},\;-\frac y{\|y\|}.$$</span></p>
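<p>For a concrete illustration (my own example, not from the answer): take the diagonal matrix <span class="math-container">$A=\operatorname{diag}(2,1/2)$</span>, so <span class="math-container">$\lambda=2$</span> and <span class="math-container">$\mu=1/2$</span>; the four unit eigenvectors are exactly the fixed points of <span class="math-container">$T$</span>:</p>

```python
import math

A = [[2.0, 0.0], [0.0, 0.5]]  # eigenvalues 2 and 1/2

def T(z):
    """T(z) = Az / ||Az|| for z on the unit circle."""
    w = (A[0][0] * z[0] + A[0][1] * z[1], A[1][0] * z[0] + A[1][1] * z[1])
    n = math.hypot(w[0], w[1])
    return (w[0] / n, w[1] / n)

# The four unit eigenvectors +-x/||x||, +-y/||y|| are fixed points.
for z in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    assert T(z) == z

# A generic point of S^1 is not fixed.
p = (math.cos(1.0), math.sin(1.0))
assert T(p) != p
```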
|
3,001,700 | <p>I am trying to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that solve the equation <span class="math-container">$15x - 16y = 10$</span>, usually in this type of question I would use Euclidean Algorithm to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> but it doesn't seem to work for this one. Computing the GCD just gives me <span class="math-container">$16 = 15 + 1$</span> and then <span class="math-container">$1 = 16 - 15$</span> which doesn't really help me. I can do this question with trial and error but was wondering if there was a method to it.</p>
<p>Thank you</p>
| Mohammad Riazi-Kermani | 514,496 | <p>You have <span class="math-container">$16-15=1$</span></p>
<p>What about <span class="math-container">$$ x=-10+16k, y= -10+15k ?$$</span></p>
<p>That implies</p>
<p><span class="math-container">$$ 15 x-16y=10$$</span>
which is a solution.</p>
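<p>A quick check that this family really solves the equation for every $k$ (illustration only):</p>

```python
# Check that x = -10 + 16k, y = -10 + 15k solves 15x - 16y = 10 for all k.
for k in range(-5, 6):
    x = -10 + 16 * k
    y = -10 + 15 * k
    assert 15 * x - 16 * y == 10
```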
|
188,900 | <p><strong>Bug introduced in 10.0 and persisting through 11.3 or later</strong></p>
<hr>
<p>In <code>11.3.0 for Microsoft Windows (64-bit) (March 7, 2018)</code> writing:</p>
<pre><code>f[w_, x_, y_, z_] := w*x^2*y^3 - z*(w^2 + x^2 + y^2 - 1)
eqn = {D[f[w, x, y, z], w] == 0,
D[f[w, x, y, z], x] == 0,
D[f[w, x, y, z], y] == 0,
D[f[w, x, y, z], z] == 0};
sol = Solve[eqn];
Table[eqn /. sol[[n]], {n, Length[sol]}]
</code></pre>
<p>I get:</p>
<blockquote>
<p>{{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{True, True, True, True},
{False, True, True, False},
{True, True, True, True},
{True, True, True, True},
{False, True, True, False},
{True, True, True, True},
{True, True, True, True},
{False, True, True, False},
{True, True, True, True},
{True, True, True, True},
{False, True, True, False},
{True, True, True, True},
{True, True, True, True}}</p>
</blockquote>
<p>from which there are four wrong solutions.</p>
<p>Am I wrong or is it a <code>Solve[]</code> bug?</p>
<hr>
<p><strong>EDIT</strong>: through the email address <em>support@wolfram.com</em> I contacted <em>Wolfram Technical Support</em> who in less than three working days have confirmed that it is a bug and have already proceeded to report to their developers. </p>
| Somos | 61,616 | <p>Thanks for asking! In version 9.0 only 16 solutions are returned and they are all valid. In version 10.2 there are 20 solutions, with the extra 4 all being invalid. Congratulations! I think you found a bug. You may want to click "Help", then "Give Feedback...", and then fill out the form in your browser to report it.</p>
<p>As another answer notes, you can always try <code>Reduce[]</code> instead which <strong>may</strong> give better results in some cases, but <code>Solve[]</code> is usually what you want.</p>
|
4,032,969 | <p>I have an integral that depends on two parameters <span class="math-container">$a\pm\delta a$</span> and <span class="math-container">$b\pm \delta b$</span>. I am doing this integral numerically and no python function can calculate the integral with uncertainties.</p>
<p>So I have calculated the integral for each min, max values of a and b.
As a result I have obtained 4 values, such that;</p>
<p><span class="math-container">$$(a + \delta a, b + \delta b) = 13827.450210 \pm 0.000015~~(1)$$</span>
<span class="math-container">$$(a + \delta a, b - \delta b) = 13827.354688 \pm 0.000015~~(2)$$</span>
<span class="math-container">$$(a - \delta a, b + \delta b) = 13912.521548 \pm 0.000010~~(3)$$</span>
<span class="math-container">$$(a - \delta a, b - \delta b) = 13912.425467 \pm 0.000010~~(4)$$</span></p>
<p>So it is clear that <span class="math-container">$(2)$</span> gives the min and <span class="math-container">$(3)$</span> gives the max. Let us show the result of the integral as <span class="math-container">$c \pm \delta c$</span>. So my problem is what is <span class="math-container">$c$</span> and <span class="math-container">$\delta c$</span> here?</p>
<p>The integral is something like this</p>
<p><span class="math-container">$$I(a,b,x) =C\int_0^b \frac{dx}{\sqrt{a(1+x)^3 + \eta(1+x)^4 + (\gamma^2 - a - \eta)}}$$</span></p>
<p>where <span class="math-container">$\eta$</span> and <span class="math-container">$\gamma$</span> are constant.</p>
<p>Note: You guys can also generalize it by taking <span class="math-container">$\eta \pm \delta \eta$</span> but it is not necessary for now.</p>
<p>I have to take derivatives or integrals numerically. There's no known analytical solution for the integral.</p>
<p><span class="math-container">$\eta = 4.177 \times 10^{-5}$</span>, <span class="math-container">$a = 0.1430 \pm 0.0011$</span>, <span class="math-container">$b = 1089.92 \pm 0.25$</span>, <span class="math-container">$\gamma = 0.6736 \pm 0.0054$</span>, <span class="math-container">$C = 2997.92458$</span></p>
| seVenVo1d | 551,567 | <pre><code>from numpy import sqrt
from scipy import integrate
import uncertainties as u
from uncertainties.umath import *
#Important Parameters
C = 2997.92458 # speed of light in [km/s]
eta = 4.177 * 10**(-5)
a = u.ufloat(0.1430, 0.0011)
b = u.ufloat(1089.92, 0.25)
gama = u.ufloat(0.6736, 0.0054)
@u.wrap
def D_zrec_finder(gama, a, b):
    def D_zrec(z):
        return C / sqrt(a * (1+z)**3 + eta * (1+z)**4 + (gama**2 - a - eta))
    result, error = integrate.quad(D_zrec, 0, b)
    return result
res = D_zrec_finder(gama, a, b)
print(res.n) # nominal value of the integral
print(res.s) # propagated standard deviation
</code></pre>
<p>This works</p>
|
154,893 | <p>I am having trouble figuring this out.</p>
<p>$$\sqrt {1+\left(\frac{x}{2}- \frac{1}{2x}\right)^2}$$</p>
<p>I know that $$\left(\frac{x}{2} - \frac{1}{2x}\right)^2=\frac{x^2}{4} - \frac{1}{2} + \frac{1}{4x^2}$$ but I have no idea how to factor this since I have two x terms with vastly different degrees, 2 and -2.</p>
| Américo Tavares | 752 | <p>Since
$$\begin{equation*}
\left( \frac{x}{2}-\frac{1}{2x}\right) ^{2}=\frac{x^{2}}{4}-\frac{1}{2}+
\frac{1}{4x^{2}},
\end{equation*}$$</p>
<p>we have</p>
<p>$$\begin{eqnarray*}
1+\left( \frac{x}{2}-\frac{1}{2x}\right) ^{2} &=&1+\left( \frac{x^{2}}{4}-
\frac{1}{2}+\frac{1}{4x^{2}}\right) \\
&=&1+\frac{x^{2}}{4}-\frac{1}{2}+\frac{1}{4x^{2}} \\
&=&\frac{x^{2}}{4}+\left( 1-\frac{1}{2}\right) +\frac{1}{4x^{2}} \\
&=&\frac{x^{2}}{4}+\frac{1}{2}+\frac{1}{4x^{2}} \\
&=&\left( \frac{x}{2}+\frac{1}{2x}\right) ^{2},
\end{eqnarray*}$$</p>
<p>because</p>
<p>$$\left( \frac{x}{2}+\frac{1}{2x}\right) ^{2}=\frac{x^{2}}{4}+\frac{1}{2}+
\frac{1}{4x^{2}}.$$</p>
<p>Therefore
$$\begin{equation*}
\sqrt{1+\left(\frac{x}{2}-\frac{1}{2x}\right)^2}=\sqrt{\left(\frac{x}{2}
+\frac{1}{2x}\right)^2}=\left\vert\frac{x}{2}+\frac{1}{2x}\right\vert .
\end{equation*}$$</p>
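<p>A numerical spot check of the final identity, for positive and negative $x$ alike (illustrative only):</p>

```python
import math

def lhs(x):
    return math.sqrt(1 + (x / 2 - 1 / (2 * x)) ** 2)

def rhs(x):
    return abs(x / 2 + 1 / (2 * x))

# The identity holds for any nonzero real x.
for x in [0.1, 0.5, 1.0, 2.0, 7.3, -0.5, -4.0]:
    assert math.isclose(lhs(x), rhs(x)), x
```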
|
2,669,942 | <p>I am very confused about "integration by substitution". For example:</p>
<p>We know that $\int\ x^2 dx$ = $x^3/3 +C$.</p>
<p>Just for doing it, maybe for the sake of practicing, or for testing with integration by substitution, we could have made the substitution $x^2=u$. Then $x=u^{1/2}$ and $dx= (u^{-1/2}/2) du$. Consequently:</p>
<p>$\int\ x^2 dx = \int\ u (u^{-1/2}/2) du = (1/2) \int\ u^{1/2} du = (1/3) u^{3/2} + C$</p>
<p>Since $u^{3/2} = x^{3}$, we see that we got the right answer.</p>
<p>My question is, why did we get the right answer? This method looks like magic to me, since I don't know the justification behind it. How do we prove that we can make the change of variable and change the $dx$ accordingly to obtain the right answer? "Integration by parts" is based on the product rule, for example. On what differentiation rule or rules is integration by substitution based? Is it the chain rule? By the way, I have no idea what I am doing. I just want to know the logic behind it. I have not seen an explanation in the books I have searched. It seems to be introduced by examples, without justification. I am sorry if my question is bad, I am really confused right now.</p>
| ncmathsadist | 4,154 | <p>The purpose of the $du$ is to suck up the extra factor (the derivative of the inner function) spilt out by the chain rule.</p>
|
2,669,942 | <p>I am very confused about "integration by substitution". For example:</p>
<p>We know that $\int\ x^2 dx$ = $x^3/3 +C$.</p>
<p>Just for doing it, maybe for the sake of practicing, or for testing with integration by substitution, we could have made the substitution $x^2=u$. Then $x=u^{1/2}$ and $dx= (u^{-1/2}/2) du$. Consequently:</p>
<p>$\int\ x^2 dx = \int\ u (u^{-1/2}/2) du = (1/2) \int\ u^{1/2} du = (1/3) u^{3/2} + C$</p>
<p>Since $u^{3/2} = x^{3}$, we see that we got the right answer.</p>
<p>My question is, why did we get the right answer? This method looks like magic to me, since I don't know the justification behind it. How do we prove that we can make the change of variable and change the $dx$ accordingly to obtain the right answer? "Integration by parts" is based on the product rule, for example. On what differentiation rule or rules is integration by substitution based? Is it the chain rule? By the way, I have no idea what I am doing. I just want to know the logic behind it. I have not seen an explanation in the books I have searched. It seems to be introduced by examples, without justification. I am sorry if my question is bad, I am really confused right now.</p>
| Rene Schipperus | 149,912 | <p>The chain rule is
<span class="math-container">$$\frac{d}{dx}f(g(x))=f^{\prime}(g(x))g^{\prime}(x)$$</span> now we can integrate both sides of this equation to get</p>
<p><span class="math-container">$$f(g(x))=\int f^{\prime}(g(x))g^{\prime}(x)dx$$</span></p>
<p>If we were to write <span class="math-container">$y=g(x)$</span>, then the previous equation becomes
<span class="math-container">$$f(y)=\int f^{\prime}(y)dy$$</span>
Putting the last two together we could express the rule as</p>
<p><span class="math-container">$$\int f^{\prime}(y)dy=\int f^{\prime}(g(x))g^{\prime}(x)dx$$</span> or with a change in the notation, <span class="math-container">$f$</span> instead of <span class="math-container">$f^{\prime}$</span>,
<span class="math-container">$$\int f(y)dy=\int f(g(x))g^{\prime}(x)dx$$</span> and this is exactly substitution as you are using it.</p>
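<p>To see the rule in action numerically, here is a sketch (my own illustration, using a crude midpoint rule) with <span class="math-container">$f(y)=y^2$</span> and <span class="math-container">$g(x)=\sin x$</span>, so that <span class="math-container">$\int_0^{\pi/2}\sin^2 x\cos x\,dx=\int_0^1 y^2\,dy=1/3$</span>:</p>

```python
import math

def midpoint_integral(h, a, b, n=100000):
    """Crude midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda y: y ** 2
g = math.sin
g_prime = math.cos

# Substitution rule: int f(g(x)) g'(x) dx over [0, pi/2]
# equals int f(y) dy over [g(0), g(pi/2)] = [0, 1].
left = midpoint_integral(lambda x: f(g(x)) * g_prime(x), 0, math.pi / 2)
right = midpoint_integral(f, 0, 1)
assert math.isclose(left, right, abs_tol=1e-6)
assert math.isclose(right, 1 / 3, abs_tol=1e-6)
```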
|
631,214 | <p>Two kids start to run from the same point and in the same direction on a circular running track with perimeter 400 m. The velocity of each kid is constant. The first kid runs each lap in 20 sec less than his friend. They meet for the first time 400 sec after the start. Q: Find their velocities.</p>
<p>I came up with one equation:</p>
<p>400/v1 +20 = 400/v2 </p>
<p>But what is the second equation? ("They met in the first time after 400 sec from the start.")</p>
| mathlove | 78,967 | <p>If we can use the fact that
$$\frac{\ln(u)}{u}\to 0\ \ \text{($u\to\infty$)},$$
consider $x=1/u.$</p>
|
631,214 | <p>Two kids start to run from the same point and in the same direction on a circular running track with perimeter 400 m. The velocity of each kid is constant. The first kid runs each lap in 20 sec less than his friend. They meet for the first time 400 sec after the start. Q: Find their velocities.</p>
<p>I came up with one equation:</p>
<p>400/v1 +20 = 400/v2 </p>
<p>But what is the second equation? ("They met in the first time after 400 sec from the start.")</p>
| Ulrik | 53,012 | <p>One can also use the following power series, which converges for $-1 < x \leq 1$:
$$\ln(1+x) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} x^n$$
Then $x \ln x$ becomes
$$x \ln x = (x-1) \ln x + \ln x = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(x-1)^{n+1} + \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(x-1)^n = (x-1) + \sum_{n=1}^\infty \left( \frac{(-1)^{n+1}}{n} + \frac{(-1)^{n+2}}{n+1}\right) (x-1)^{n+1} \to -1 + \sum_{n=1}^\infty \left( \frac{(-1)^{n+1+n+1}}{n} + \frac{(-1)^{n+2+n+1}}{n+1} \right) = -1 + \sum_{n=1}^\infty \left( \frac{1}{n} - \frac{1}{n+1} \right) = -1+1 = 0$$</p>
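<p>The resulting limit $x\ln x\to 0$ as $x\to 0^+$ can also be observed numerically (a small illustration):</p>

```python
import math

# x * ln(x) tends to 0 as x -> 0+, even though ln(x) -> -infinity.
values = [x * math.log(x) for x in (1e-2, 1e-4, 1e-8, 1e-16)]
assert all(abs(v) > abs(w) for v, w in zip(values, values[1:]))
assert abs(values[-1]) < 1e-13
```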
|
186,890 | <p>Working with other software called SolidWorks I was able to get a plot with a curve very close to my data points:</p>
<p><a href="https://i.stack.imgur.com/DooKo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DooKo.png" alt="enter image description here"></a></p>
<p>I tried to get a plot as accurate as this using the <code>Fit</code> function, but could not.</p>
<pre><code>Clear["Global`*"]
dados={{0,0},{1,1000},{2,-750},{3,250},{4,-1000},{5,0}};
Plot[Evaluate[Fit[dados,{1,x,x^2,x^3,x^4,x^5,x^6},x]],{x,0,5}]
</code></pre>
<p><a href="https://i.stack.imgur.com/syQGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syQGd.png" alt="enter image description here"></a></p>
<p>Is there something I need to modify or another method more effective than this?</p>
| MassDefect | 42,264 | <p>If you want to reproduce that graph exactly, add in the derivatives at each point when using the <code>Interpolation</code> function.</p>
<pre><code>Clear["Global`*"]
dados = {{{0}, 0, 0}, {{1}, 1000, 0}, {{2}, -750, 0}, {{3}, 250,
0}, {{4}, -1000, 0}, {{5}, 0, 0}};
Plot[
Interpolation[dados][x],
{x, 0, 5},
ImageSize -> 500,
Epilog -> {
Red,
PointSize[0.01],
Point@Partition[Flatten[dados], 3][[All, 1 ;; 2]]
}]
</code></pre>
<p>This should give you:</p>
<p><a href="https://i.stack.imgur.com/EML0I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EML0I.png" alt="Interpolation of data"></a></p>
<p>which looks pretty close to your original graph.</p>
<p><strong>However,</strong> I strongly caution making any assumptions based off the interpolation from 6 data points unless you have some other reason to believe that this shape is correct. As others have pointed out, your own answer attempts to use a high-order polynomial to fit the data. A 7-term polynomial will fit almost any (except something really ugly like a perfectly vertical line) 6 data points perfectly. Similarly, this interpolation will work for any 6 data points and doesn't necessarily mean the values in between are correct.</p>
|
3,671,223 | <p>First and foremost, I have already gone through the following posts:</p>
<p><a href="https://math.stackexchange.com/questions/2463561/prove-that-for-all-positive-integers-x-and-y-sqrt-xy-leq-fracx-y">Prove that, for all positive integers $x$ and $y$, $\sqrt{ xy} \leq \frac{x + y}{2}$</a></p>
<p><a href="https://math.stackexchange.com/questions/64881/proving-the-am-gm-inequality-for-2-numbers-sqrtxy-le-fracxy2">Proving the AM-GM inequality for 2 numbers $\sqrt{xy}\le\frac{x+y}2$</a></p>
<p>The reason why I open a new question is because I do not understand after reading the two posts.</p>
<p>Question: Prove that for any two positive numbers x and y, <span class="math-container">$\sqrt{ xy} \leq \frac{x + y}{2}$</span></p>
<p>According to my lecturer, the proof should begin with <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>. He also said that this comes from a "well-known" fact. Both posts above also mention this exact same thing in their helpful answers.</p>
<p>My question is this - <strong>how</strong> and <strong>why</strong> do I know that I need to use <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>? What "well-known" fact is this? Can't I simply subtract <span class="math-container">$\sqrt{xy}$</span> from both sides and conclude that <span class="math-container">$0 \leq {(x-y)}^2$</span>? I do not know where this <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> comes from and why it even appears.</p>
<p>Thanks in advance.</p>
<p>Edit: <strong>I am not looking for the direct answer to this question.</strong> I am looking for an answer on <strong>why</strong> <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> is even considered in the first place as the first step to this question. Is this from a mathematical theorem or axiom etc?</p>
| Gary | 83,800 | <p>We would like to prove
<span class="math-container">$$
\frac{{x + y}}{2} \ge \sqrt {xy}
$$</span>
for all non-negative <span class="math-container">$x$</span>, <span class="math-container">$y$</span>. If this was true, then we would also have
<span class="math-container">$$
x + y \ge 2\sqrt {xy},
$$</span>
<span class="math-container">$$
x - 2\sqrt {xy} + y \ge 0,
$$</span>
<span class="math-container">$$
\left( {\sqrt x - \sqrt y } \right)^2 \ge 0.
$$</span>
But this last inequality is certainly true for all non-negative <span class="math-container">$x$</span>, <span class="math-container">$y$</span>. This is because the square of any real number is non-negative. Now you can start with this "well-known fact" and do everything backwards to obtain <span class="math-container">$\frac{{x + y}}{2} \ge \sqrt {xy} $</span>. This shows you how one might come up with the idea of starting with <span class="math-container">$\left( {\sqrt x - \sqrt y } \right)^2 \ge 0$</span>. This is a common proof technique in mathematics when you have a series of equivalent transformations that can be performed in both directions.</p>
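<p>The inequality and its equality case can also be spot-checked numerically (an illustration, not a proof):</p>

```python
import math
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(0, 100)
    y = random.uniform(0, 100)
    # AM-GM: sqrt(xy) <= (x + y) / 2 for non-negative x, y.
    assert math.sqrt(x * y) <= (x + y) / 2 + 1e-12

# Equality holds exactly when x == y, i.e. when (sqrt(x) - sqrt(y))^2 == 0.
assert math.isclose(math.sqrt(9 * 9), (9 + 9) / 2)
```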
|
2,716,585 | <p>Evaluate $\sum_{k=1}^{n-3}\frac{(k+3)!}{(k-1)!}$. </p>
<p>My strategy is defining a generating function,
$$g(x) = \frac{1}{1-x} = 1 + x + x^2...$$
then shifting it so that we get,
$$f(x)=x^4g(x) = \frac{x^4}{1-x}= x^4+x^5+...$$
and then taking the 4th derivative of f(x).
Calculating the fourth derivative is going to be a little tedious but it won't be as bad compared to the partial fraction decomposition I will end up doing. What is a better way to evaluate the sum using generating functions?</p>
| epi163sqrt | 132,007 | <p>Here is a variation using generating functions <em>without</em> differentiation. It is convenient to use the <em>coefficient of</em> operator $[x^n]$ to denote the coefficient of $x^n$ of a series. This way we can write for instance
\begin{align*}
[x^k](1+x)^n=\binom{n}{k}\tag{1}
\end{align*}</p>
<blockquote>
<p>We obtain for $n\geq 4$
\begin{align*}
\color{blue}{\sum_{k=1}^{n-3}\frac{(k+3)!}{(k-1)!}}
&=4!\sum_{k=1}^{n-3}\binom{k+3}{4}=4!\sum_{k=0}^{n-4}\binom{k+4}{4}\tag{2}\\
&=4!\sum_{k=0}^{n-4}[x^4](1+x)^{k+4}\tag{3}\\
&=4!\,[x^4](1+x)^4\sum_{k=0}^{n-4}(1+x)^k\tag{4}\\
&=4!\,[x^4](1+x)^4\,\frac{(1+x)^{n-3}-1}{(1+x)-1}\tag{5}\\
&=4!\,[x^5](1+x)^{n+1}\tag{6}\\
&\,\,\color{blue}{=4!\binom{n+1}{5}}\tag{7}
\end{align*}</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (2) we write the fraction using binomial coefficients and shift the index by one to start with $k=0$.</p></li>
<li><p>In (3) we use the <em>coefficient of</em> operator according to (1).</p></li>
<li><p>In (4) we use the linearity of the <em>coefficient of</em> operator.</p></li>
<li><p>In (5) we apply the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">finite geometric series</a></em> formula.</p></li>
<li><p>In (6) we do some simplifications, apply the rule $[x^{p+q}]A(x)=[x^p]x^{-q}A(x)$ and ignore $(1+x)^{4}$ since it does not contribute to $[x^5]$.</p></li>
<li><p>In (7) we select the coefficient of $[x^5]$ accordingly.</p></li>
</ul>
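<p>The closed form <span class="math-container">$\sum_{k=1}^{n-3}\frac{(k+3)!}{(k-1)!}=4!\binom{n+1}{5}$</span> is easy to confirm for small <span class="math-container">$n$</span> (a quick illustrative check):</p>

```python
from math import factorial, comb

def lhs(n):
    # sum_{k=1}^{n-3} (k+3)! / (k-1)!  (exact integer arithmetic)
    return sum(factorial(k + 3) // factorial(k - 1) for k in range(1, n - 2))

def rhs(n):
    # closed form 4! * C(n+1, 5)
    return factorial(4) * comb(n + 1, 5)

for n in range(4, 30):
    assert lhs(n) == rhs(n), n
```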
|
2,397,874 | <p>I am new to modulus and inequalities , I came across this problem:</p>
<p>$ 2^{\vert x + 1 \vert} - 2^x = \vert 2^x - 1\vert + 1 $ for $ x $</p>
<p>How to find $ x $ ?</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>As $(5+2\sqrt6)(5-2\sqrt6)=1$</p>
<p><strong>Method</strong>$\#1$:</p>
<p>set $(5+2\sqrt6)^{x^2-3}=a$ to find $$a+\dfrac1a=10$$</p>
<p>So, we have $$(5+2\sqrt6)^{x^2-3}=a=(5+2\sqrt6)^{\pm1}$$</p>
<p><strong>Method</strong>$\#2$:</p>
<p>$$a+\dfrac1a=5+2\sqrt6+\dfrac1{5+2\sqrt6}$$</p>
|
2,397,874 | <p>I am new to modulus and inequalities , I came across this problem:</p>
<p>$ 2^{\vert x + 1 \vert} - 2^x = \vert 2^x - 1\vert + 1 $ for $ x $</p>
<p>How to find $ x $ ?</p>
| nonuser | 463,553 | <p>If you write $a = (5+2\sqrt6)^{x^2-3}$ then you have $a+1/a =10$ and thus $a_{1,2} = {5\pm 2\sqrt6}$. </p>
<p>So $x^2-3 = \pm 1$ and thus $x = \pm 2, \pm \sqrt{2}$.</p>
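<p>A numeric check confirms that all four roots give $a + 1/a = 10$ with $a=(5+2\sqrt6)^{x^2-3}$ (illustration only):</p>

```python
import math

s = 5 + 2 * math.sqrt(6)

def lhs(x):
    a = s ** (x * x - 3)
    return a + 1 / a  # since (5 - 2*sqrt(6)) = 1 / (5 + 2*sqrt(6))

# The four roots x = +-2, +-sqrt(2) all satisfy a + 1/a = 10.
for x in (2.0, -2.0, math.sqrt(2), -math.sqrt(2)):
    assert math.isclose(lhs(x), 10.0, abs_tol=1e-9)
```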
|
249,074 | <p>I am trying to solve the following problem:</p>
<blockquote>
<p>Show that a unit-speed curve $\gamma$ with nowhere vanishing curvature is a geodesic on the ruled surface $\sigma(u,v)=\gamma(u)+v\delta(u)$, where $\gamma$ is a smooth function of $u$, if and only if $\delta$ is perpendicular to the principal normal of $\gamma$ at $\gamma(u)$ for all values of $u$. </p>
</blockquote>
<p>Edit (rather large): My professor wrote the question down wrong. I fixed it on here. Sadly, even with it right, I can't get either direction.</p>
<p>Any help would be appreciated. Thanks!</p>
| Kevin | 51,522 | <p>are you enrolled in UofCalgary PMAT 423? I have the same question as you.</p>
<p>(=>) this is what I was thinking as well. Just remember that we're supposed to show that δ is perpendicular to the principal normal of γ, not to γ". Use γ" = kn (not N, which is the standard unit normal of the surface).</p>
<p>(<=) here we have to use the idea that the principal normal of γ is perpendicular to the tangent γ' at every point on γ. Since kn = t' = γ", where k is the curvature of γ (not the surface), this implies γ" is perpendicular to γ' at every point. A property of the cross product is that if A x B = C then C is perpendicular to A and to B, so (N x γ') is perpendicular to γ'. Since γ' is perpendicular to γ" and γ" is parallel to N (because γ lies on the surface) then (N x γ') must be perpendicular to γ" as well. From this we get γ" • (N x γ') = 0 = kg. Since the geodesic curvature equals 0, γ must be a geodesic.</p>
<p>I think this is the correct way to do this question, but both directions just seem too simple. If you have any ideas I'd love to hear them.</p>
|
2,512,424 | <p>It is an easy exercise to show that all finite groups with at least three elements have at least one non-trivial automorphism; in other words, there are - up to isomorphism - only finitely many finite groups $G$ such that $Aut(G)=1$ (to be exact, just two: $1$ and $C_2$).</p>
<p>Is an analogous statement true for all finite groups? I.e., given a finite group $A$, are there - again up to isomorphism - only finitely many groups $G$ with $Aut(G)\cong A$?</p>
<p>If yes, is there an upper bound on the number of such groups $G$ depending on a property of $A$ (e.g. its order)?</p>
<p>And if not, which groups arise as counterexamples?</p>
<p>And finally, what does the situation look like for infinite groups $G$ with a given finite automorphism group? And what if infinite automorphism groups $A$ are considered?</p>
| YCor | 35,400 | <p>Mikko's nice answer concerns finite groups $G$. Let me here answer for infinite groups $G$ (but still finite automorphism groups, as in the question).</p>
<p>The picture is indeed very different:</p>
<blockquote>
<p>For $A=C_2$ cyclic, there exist uncountably many non-isomorphic (abelian countable) groups $G$ with $\mathrm{Aut}(G)\simeq C_2$.</p>
</blockquote>
<p>Indeed, for $I$ a set of primes, let $B_I$ be the additive subgroup of $\mathbf{Q}$ generated by $\{1/p:p\in I\}$. Then $B_I$ and $B_J$ are isomorphic if and only if the symmetric difference $I\triangle J$ is finite, and $\mathrm{Aut}(B_I)=\{1,-1\}$ (easy exercise: more generally for a nonzero subgroup $B$ of $\mathbf{Q}$, its automorphism group is $\{t\in\mathbf{Q}^*:tB=B\}$ acting by multiplication).</p>
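<p>A quick computational sanity check (my own sketch). It uses the easy fact — a partial-fractions exercise, stated here as an assumption — that $q\in B_I$ exactly when the reduced denominator of $q$ is squarefree with every prime factor in $I$. Multiplication by $-1$ then preserves $B_I$, while multiplication by $1/2$ does not, since $\tfrac12\cdot\tfrac12=\tfrac14$ has a square in its denominator:</p>

```python
from fractions import Fraction

def factorize(n):
    """Prime factorization of a positive integer by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def in_B(q, I):
    """Assumed membership test: q lies in B_I iff the reduced denominator
    of q is squarefree with every prime factor in I."""
    f = factorize(Fraction(q).denominator)
    return all(p in I and e == 1 for p, e in f.items())

I = {2, 3}
gens = [Fraction(1, p) for p in I]
samples = [a * g + b * h for g in gens for h in gens
           for a in range(-3, 4) for b in range(-3, 4)]

# t = -1 maps every sample back into B_I; t = 1/2 does not (e.g. on 1/2 itself).
assert all(in_B(-q, I) for q in samples)
assert not all(in_B(Fraction(1, 2) * q, I) for q in samples)
```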
<p>One also gets the group $C_2^n$ ($n\ge 1$) in a similar fashion. Say, for $n=2$, choose $I,J$ such that both $I\smallsetminus J$ and $J\smallsetminus I$ are infinite: then $\mathrm{Aut}(B_I\times B_J)\simeq C_2\times C_2$. </p>
<p>In general, if a group $G$ has finite automorphism group $A$, then its center has finite index in $G$, because $G/Z(G)$ embeds into $A$. A well-known result (Schur's theorem) then implies that $[G,G]$ is finite.</p>
<p>[Also, it follows that if $A$ is cyclic of odd order, we deduce that $G/Z(G)$ is cyclic, and hence $G$ is abelian, and then $G$ has to be a finite elementary abelian $2$-group, and then $G=1$ or $G\simeq C_2$, whence $A=1$. In other words, for no group (finite or infinite) $G$, $\mathrm{Aut}(G)$ is cyclic of odd order $>1$.]</p>
<hr>
<p>One more example to mention that one gets non-abelian groups: let $F$ be a finite group. Then for every torsion-free abelian group $B$, $\mathrm{Aut}(B\times F)$ is a semidirect product $(\mathrm{Aut}(F)\times\mathrm{Aut}(B))\ltimes\mathrm{Hom}(B,Z(F))$. If $\mathrm{Aut}(B)=\{\pm 1\}$, then the $\mathrm{Aut}(B)$-action on $\mathrm{Hom}(B,Z(F))$ is trivial and this reduces to the product $\mathrm{Aut}(B\times F)=(\mathrm{Aut}(F))\ltimes\mathrm{Hom}(B,Z(F))\times\mathrm{Aut}(B)$. For $B=B_I$, we have $\mathrm{Hom}(B_I,Z(F))\simeq Z(F)$. For instance, for $F=C_2$ one gets $\mathrm{Aut}(B_I\times C_2)\simeq C_2^2$. The smallest non-abelian group we can get this way has order 12, namely for $F=C_3$ or $F=D_6$ (dihedral group of order 6), one gets $\mathrm{Aut}(B_I\times F)\simeq D_6\times C_2$. For $F=C_4$ one gets $\mathrm{Aut}(B_I\times C_4)\simeq D_8\times C_2$.</p>
<p>I don't know if we can obtain abelian $\mathrm{Aut}(B_I\times F)$ when $|F|\ge 3$. This holds if and only if $\mathrm{Aut}(F)$ is abelian and acts trivially on $F$. Then $F$ is non-abelian, of nilpotency class 2. Possibly some large $p$-groups satisfy this (see <a href="https://arxiv.org/abs/1304.1974" rel="nofollow noreferrer">Jain-Rai-Yadav (arXiv link)</a> for a discussion of large $p$-groups with abelian automorphism groups; however they don't indicate if they can be chosen so that automorphisms are trivial on the center).</p>
|
1,049,841 | <p>Out of interest </p>
<p>If I have the map $\phi: R \longrightarrow R/I$, where $R$ is a ring and $I$ is a nilpotent ideal,</p>

<p>then would I be right in saying that if I were to apply this map to the Jacobson radical of $R$, it would take me to the Jacobson radical of $R/I$?</p>
<p>i.e. is the following true:
$\phi(J(R)) = J(R/I)$</p>
<p>I am guessing this is right but can't be certain</p>
<p>Also, if $\phi$ is surjective with kernel $I$, then this would imply $R$ is Artinian, right? With $R/I$ semisimple?</p>
<p>Any help would be great, thank you!</p>
| rschwieb | 29,335 | <p>Firstly, any maximal right ideal contains any nilpotent ideal. We want to prove this because it matches up the maximal right ideals of the two rings exactly.</p>
<p>If $M$ were a maximal right ideal not containing $I$, then $M+I=R$, and in particular $m+i=1$ for some $m\in M$, $i\in I$. Pick $n$ such that $i^n=0$. Then $1=1^n=(m+i)^n\in M$: since $m=1-i$ commutes with $i$, this is a genuine binomial expansion, every term except $i^n$ has a factor of $m$ on the left and hence lies in the right ideal $M$, and the last term $i^n=0$. But $M$ can't contain $1$ because it is proper.</p>
<p>Therefore by the correspondence of maximal right ideals, the two rings have "the same" maximal right ideals, and $\cap\{M/I\mid M\text{ maximal right ideal in } R\}=\cap\{M\mid M\text{ maximal right ideal in } R\}/I$</p>
<p>or in other words, $\mathrm{rad}(R/I)=\mathrm{rad}(R)/I$.</p>
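<p>This can be checked by machine in a small commutative example (my own sketch). Take $R=\Bbb Z/72$ with the nilpotent ideal $I=(36)$ (so $R/I\cong\Bbb Z/36$), and use the standard characterization that $x\in\mathrm{rad}(R)$ iff $1-xy$ is a unit for every $y$:</p>

```python
from math import gcd

def jacobson(n):
    """rad(Z/n): x is in the Jacobson radical iff 1 - x*y is a unit for every y,
    i.e. iff gcd(1 - x*y, n) = 1 for all y."""
    return {x for x in range(n)
            if all(gcd((1 - x * y) % n, n) == 1 for y in range(n))}

n, m = 72, 36                 # I = (36) in Z/72; 36**2 = 1296 = 0 mod 72, so I is nilpotent
assert (m * m) % n == 0

J_R = jacobson(n)             # rad(Z/72) -- the multiples of 6
J_quot = jacobson(m)          # rad(Z/36), since (Z/72)/(36) is Z/36
image = {x % m for x in J_R}  # the image of rad(R) under the quotient map

assert image == J_quot        # rad(R/I) = rad(R)/I in this example
```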
<hr>
<p>This is not always the case, though, if $I$ isn't nilpotent (or nil). In general, if $f:R\to S$ is a surjective ring homomorphism, then only $f(\mathrm{rad}(R))\subseteq \mathrm{rad}(S)$.</p>
<hr>
<blockquote>
<p>Also, if $ϕ$ is surjective with kernel $I$, then this would imply $R$ is Artinian, right? With $R/I$ semisimple?</p>
</blockquote>
<p>No, that does not follow. Even if $R/I$ is a field, $R$ need not be Artinian.</p>
<p>For example, let $\Bbb F_2$ be the field of two elements, and $V$ be an infinite dimensional $\Bbb F_2$ vector space. Define a ring structure on $R=\Bbb F_2\times V$ by using coordinatewise addition and by making multiplication $(a,v)(a',v')=(aa',av'+a'v)$. It's easy to see the following facts:</p>
<ol>
<li><p>for any subspace $W<V$, $\{0\}\times W$ is an ideal of this ring, and since there is an infinite descending chain of subspaces, there is an infinite descending chain of ideals. So $R$ is not Artinian. (Actually, it's not even Noetherian.)</p></li>
<li><p>$R$ has a unique maximal ideal $\{0\}\times V$ which has square zero, so that is the Jacobson radical.</p></li>
<li><p>$R/\mathrm{rad}(R)\cong \Bbb F_2$ is semisimple Artinian.</p></li>
</ol>
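<p>A finite stand-in makes this concrete (my own sketch, taking $V=\Bbb F_2^3$ in place of the infinite-dimensional space — fact 1 is of course lost in a finite model, but facts 2 and 3 survive): the multiplication is associative with identity $(1,0)$, the ideal $\{0\}\times V$ squares to zero, and every element outside it is a unit, so $\{0\}\times V$ is the unique maximal ideal.</p>

```python
from itertools import product

DIM = 3  # finite stand-in for the infinite-dimensional V

def mul(x, y):
    # (a, v)(a', v') = (aa', av' + a'v), everything mod 2
    (a, v), (b, w) = x, y
    return ((a * b) % 2, tuple((a * wi + b * vi) % 2 for vi, wi in zip(v, w)))

R = [(a, v) for a in (0, 1) for v in product((0, 1), repeat=DIM)]
one = (1, (0,) * DIM)
zero = (0, (0,) * DIM)

# multiplication is associative and has identity `one`
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x in R for y in R for z in R)
assert all(mul(one, x) == x for x in R)

# M = {0} x V is an ideal with M^2 = 0 ...
M = [x for x in R if x[0] == 0]
assert all(mul(x, y) == zero for x in M for y in M)

# ... and every element outside M is a unit (in fact its own inverse),
# so M is the unique maximal ideal, i.e. rad(R).
units = [x for x in R if any(mul(x, y) == one for y in R)]
assert sorted(units + M) == sorted(R)
```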
|