qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
1,179,498 | <p>Is the following sequence convergent in <img src="https://i.stack.imgur.com/wtYTG.png" alt="enter image description here">? (The metric space is the usual Euclidean space.)
<img src="https://i.stack.imgur.com/uqGFl.png" alt="enter image description here"></p>
| Michael Hardy | 11,667 | <p>Quid's answer is in one sense the same as what I was about to post when I read it. But the way I phrased it may make it clear to some people first learning the subject, in a way that quid's might not.</p>
<p>Suppose an isomorphism $\varphi$ from the multiplicative group to the additive group exists. In a field in which $-1\ne1$, we have $(-1)^2=1$ and so $\varphi(-1)+\varphi(-1)=\varphi(1)=0$. This is a field in which $2\ne0$, so it is permissible to divide both sides of the equality $2\varphi(-1)=0$ by $2$ and get $\varphi(-1)=0$. That puts $-1$ in the kernel of the homomorphism $\varphi$, which, being an isomorphism, should have only $1$ in its kernel.</p>
<p>In a field in which $-1=1$ one uses a different argument.</p>
|
2,471,217 | <p>Contradiction or contra-positive? Or is direct easier? </p>
<p>∀n∈N, (n>3∧ n is prime)→∃q∈N,(n=6q+1∨n=6q+5)</p>
| Peter | 82,961 | <p>In plain text the expression means : Every prime $n$ greater than $3$ has the form $6q+1$ or $6q+5$, which is clearly the case because the numbers $6q,6q+2,6q+3,6q+4$ are divisible by $2$ or $3$.</p>
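A quick brute-force check of this observation (a sketch; the naive `is_prime` helper is mine, not part of the answer):

```python
def is_prime(n):
    # naive trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# every prime greater than 3 leaves remainder 1 or 5 modulo 6
assert all(p % 6 in (1, 5) for p in range(4, 10_000) if is_prime(p))
```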
|
3,164,343 | <p>The machine generates one random integer in the range <span class="math-container">${[0;40)}$</span> on every spin.<br>
You should choose 5 numbers in that range. </p>
<p>Then the machine will spit out <strong>5</strong> numbers (<em>numbers are independent of each other</em>). </p>
<p><strong>what is the probability that you will get exactly two numbers correct?</strong></p>
<hr>
<p><strong>My logic:</strong><br>
You should get two of them right. The chance of that is: <span class="math-container">$r = { \left( 1 \over 40 \right)^ 2 }$</span><br>
You should get 3 of them wrong. The chance of that is: <span class="math-container">$w = { \left( 39 \over 40 \right)^3 }$</span><br>
As order doesn't matter answer should be: <span class="math-container">$$ans = { rw \over 2!3!}$$</span>
A simulator tells me I'm wrong. Where is my logic flawed?</p>
<p><strong>P.s. Machine can spit out duplicates</strong></p>
| lulu | 252,071 | <p>This is a simple binomial problem.</p>
<p>Note: I am assuming that both you and the machine choose with replacement. That is, either (or both) of you might have duplicates. </p>
<p>For a single choice you make, the probability that the machine also makes it (as one of the <span class="math-container">$5$</span> it chooses) is <span class="math-container">$\psi = 1 - \left( \frac {39}{40}\right)^5$</span>.</p>
<p>As your choices are independent, the answer is then <span class="math-container">$$\binom 52\times \psi^2\times (1-\psi)^3\approx .0967$$</span></p>
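A quick numeric evaluation of this formula, under the same with-replacement assumption stated above:

```python
from math import comb

# probability that one fixed choice of yours appears among the machine's 5 draws
psi = 1 - (39 / 40) ** 5

# exactly two of your five choices matched (binomial, treating choices as independent)
ans = comb(5, 2) * psi**2 * (1 - psi) ** 3
# ans is roughly 0.0967, matching the value quoted in the answer
```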
|
1,652,381 | <p>Suppose $f$ is differentiable on $\mathbb R$ and that $\lim_{x\to a}f'(x)=\ell$. Show that $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\ell.$$</p>
<p>The proof of my course goes like this:</p>
<p>By mean value theorem, there is $y_h\in ]x,x+h[$ s.t. $$f(x+h)-f(x)=f'(y_h)h$$ and thus $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}f'(y_h)=\ell.$$</p>
<p>QED.</p>
<p><strong>Question 1: Didn't we only prove that $$\lim_{h\to 0^+}\frac{f(x+h)-f(x)}{h}=\ell\ \ ?$$</strong></p>
<p><strong>Question 2: Maybe by $]x,x+h[$ they mean $\{y\mid x<y<x+h\}$ if $h>0$ and $\{y\mid x+h<y<x\}$ if $h<0$, no?</strong></p>
| Enrico M. | 266,764 | <p>To me, it's $40$.</p>
<p>Indeed</p>
<p>$$16 = 2\cdot 8$$
$$27 = 3\cdot 9$$</p>
<p>thence</p>
<p>$$40 = 4\cdot 10$$</p>
<p>The even terms of the series may follow this path, so a possible series could be</p>
<p>$$7, 16, 8, 27, 9, 40, 10, 55, 11, 72, 12, \cdots$$</p>
|
1,652,381 | <p>Suppose $f$ is differentiable on $\mathbb R$ and that $\lim_{x\to a}f'(x)=\ell$. Show that $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\ell.$$</p>
<p>The proof of my course goes like this:</p>
<p>By mean value theorem, there is $y_h\in ]x,x+h[$ s.t. $$f(x+h)-f(x)=f'(y_h)h$$ and thus $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}f'(y_h)=\ell.$$</p>
<p>QED.</p>
<p><strong>Question 1: Didn't we only prove that $$\lim_{h\to 0^+}\frac{f(x+h)-f(x)}{h}=\ell\ \ ?$$</strong></p>
<p><strong>Question 2: Maybe by $]x,x+h[$ they mean $\{y\mid x<y<x+h\}$ if $h>0$ and $\{y\mid x+h<y<x\}$ if $h<0$, no?</strong></p>
| Logophobic | 299,855 | <p>Another view, also giving $40$ as the next term: $2n+2;\frac{n}{2};3n+3;\frac{n}{3};4n+4;\frac{n}{4}\cdots$
\begin{align}\text{initial value}&=7\\
7*2+2&=16\\
\frac{16}{2}&=8\\
8*3+3&=27\\
\frac{27}{3}&=9\\
9*4+4&=40\\
\frac{40}{4}&=10
\end{align}
Or: $2(n+1);\frac{n}{2};3(n+1);\frac{n}{3};4(n+1);\frac{n}{4}\cdots$
\begin{align}\text{initial value}&=7\\
2(7+1)&=16\\
\frac{16}{2}&=8\\
3(8+1)&=27\\
\frac{27}{3}&=9\\
4(9+1)&=40\\
\frac{40}{4}&=10
\end{align}</p>
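The continuation proposed in both answers can be generated programmatically; a small sketch (the function name `sequence` is mine):

```python
def sequence(terms, start=7):
    # alternately apply n -> k*(n+1) and n -> n/k for k = 2, 3, 4, ...
    seq = [start]
    k = 2
    while len(seq) < terms:
        seq.append(k * (seq[-1] + 1))
        if len(seq) < terms:
            seq.append(seq[-1] // k)
        k += 1
    return seq

# reproduces the continuation 7, 16, 8, 27, 9, 40, 10, 55, 11, 72, 12 given above
assert sequence(11) == [7, 16, 8, 27, 9, 40, 10, 55, 11, 72, 12]
```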
|
3,378,655 | <p>I have an assignment about epsilon-delta proofs and I'm having trouble with this one. I have worked it through using some of the methods I've picked up for similar proofs but it's just something about this particular expression that doesn't sit right with me. Any feedback would be very helpful. This is how far I've come:</p>
<p>Let <span class="math-container">$\varepsilon > 0$</span>. We want to find a <span class="math-container">$\delta$</span> so that <span class="math-container">$\left|\frac{1}{x - 1} - 1\right| < \varepsilon$</span> when <span class="math-container">$0 < |x - 2| < \delta$</span>. We expand the expression:
<span class="math-container">\begin{align*}
\left|\frac{1}{x - 1} - 1\right| &< \varepsilon \\
\left|\frac{1}{x - 1} - \frac{x - 1}{x - 1}\right| &< \varepsilon \\
\left|\frac{2 - x}{x - 1}\right| &< \varepsilon \\
|{x - 1}| &< \frac{|x - 2|}{\varepsilon} \\
\end{align*}</span></p>
<p>We could let <span class="math-container">$\delta = \dfrac{|x - 2|}{\varepsilon}$</span> but <span class="math-container">$|x - 2|$</span> contains an unwanted variable. Since the limit is only relevant when <span class="math-container">$x$</span> is close to <span class="math-container">$a$</span> we'll restrict <span class="math-container">$x$</span> so that it's at most <span class="math-container">$1$</span> from <span class="math-container">$a$</span> or in other words, in our case, that <span class="math-container">$|x - 1| < 1$</span>. This means <span class="math-container">$0 < x < 2$</span> and that <span class="math-container">$-2 < x - 2 < 0$</span>. Looking at our previous inequality</p>
<p><span class="math-container">\begin{align*}
|{x - 1}| &< \frac{|x - 2|}{\varepsilon}
\end{align*}</span></p>
<p>we see that the right-hand side is the smallest when <span class="math-container">$|x - 2|$</span> is the smallest which by the range above is when <span class="math-container">$x - 2 = -2$</span> and then we have that</p>
<p><span class="math-container">\begin{align*}
|{x - 1}| &< \frac{|x - 2|}{\varepsilon} < \frac{2}{\varepsilon}
\end{align*}</span></p>
<p>We now have the two inequalities <span class="math-container">$|x - 1| < 1$</span> and <span class="math-container">$|x - 1| < \frac{2}{\varepsilon}$</span>. Let <span class="math-container">$\delta = \textrm{min}(1, \frac{2}{\varepsilon})$</span> and by definition we have that for every <span class="math-container">$\varepsilon > 0$</span> there is a <span class="math-container">$\delta$</span> so that <span class="math-container">$|f(x) - A| < \varepsilon$</span> for every <span class="math-container">$x$</span> in the domain that satisfies <span class="math-container">$0 < |x - a| < \delta$</span>. <span class="math-container">$\blacksquare$</span></p>
| principal-ideal-domain | 131,887 | <p><span class="math-container">$\delta<\min(1,\varepsilon)/2$</span> should do the job.</p>
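A finite sampling check (not a proof, and the sampling scheme is my own) that this choice of $\delta$ keeps $|f(x)-1|<\varepsilon$ on the punctured interval:

```python
def f(x):
    return 1 / (x - 1)

def works(eps, samples=1000):
    delta = min(1.0, eps) / 2 * 0.999      # any delta < min(1, eps)/2
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)      # 0 < h < delta
        if abs(f(2 - h) - 1) >= eps or abs(f(2 + h) - 1) >= eps:
            return False
    return True

assert all(works(eps) for eps in (1.0, 0.5, 0.1, 0.01))
```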
|
201,994 | <p>We arbitrarily choose n lattice points in a 3-dimensional Euclidean space such that no three points are on the same line. What is the least n in order to guarantee that there must be three points x, y, and z among the n points such that the center of mass of the x-y-z triangle is a lattice point?</p>
| binn | 39,264 | <p>19, according to "A lattice point problem and additive number theory" and its references.</p>
|
3,187,628 | <p>On a keyboard with 1-9 and A-D, how many possible ways are there to make a 5-digit code with 3 different numbers and two identical letters (order isn't a problem)?</p>
<p>My try : 9C3 x 4C1 = 84 x 4 = 336</p>
| Mohammad Riazi-Kermani | 514,496 | <p>If you write down a few terms you will see that your series is indeed the Taylor series of <span class="math-container">$3\sin x$</span> evaluated at <span class="math-container">$x=\pi/3$</span></p>
<p>Thus the answer is <span class="math-container">$3\sin(\pi/3)$</span></p>
|
1,246,540 | <p>$$ x ^{x } = e^{\ln x^x } $$
$$ \lim_{x\rightarrow 0} x^x = \;? $$</p>
<p>I need to find the limit of $x$ to the power of $x$ as $x$ approaches $0$ using l'Hopital's rule. From a previous part there is a hint that I should use the first equation somehow; however, I am confused about how to rearrange the equation into a fraction where both numerator and denominator have limits of 0 or infinity.</p>
| Community | -1 | <p>For this limit note that we can write </p>
<p>\begin{equation*}
\lim_{x\to 0} x^x=\lim_{x\to 0}\exp(\log(x^x))=\lim_{x\to 0}\exp(x\log(x)).
\end{equation*}</p>
<p>Factor the exponential function out of the limit due to its continuity, ie</p>
<p>\begin{equation*}
\exp (\lim_{x\to 0}x\log(x))=\exp(\lim_{x\to 0}\frac{\log(x)}{1/x})
\end{equation*}</p>
<p>The limit gives $\frac{\infty}{\infty}$ so apply L'Hopital's rule:</p>
<p>\begin{equation*}
\lim_{x\to 0}\frac{\log(x)}{1/x}=\lim_{x\to 0}\frac{\frac{d}{dx}\log(x)}{\frac{d}{dx}(\frac{1}{x})}=\lim_{x\to 0}\frac{1/x}{-1/x^2}=\lim_{x\to 0}-x.
\end{equation*}</p>
<p>Therefore, we have </p>
<p>\begin{equation*}
\exp(\lim_{x\to 0}-x)=\exp(0)=1.
\end{equation*}</p>
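A quick numeric sanity check of the limit (sample points are my choice):

```python
import math

# x**x = exp(x*log(x)), and x*log(x) -> 0 as x -> 0+
for x in (0.1, 0.01, 1e-4, 1e-8):
    assert abs(x**x - math.exp(x * math.log(x))) < 1e-12

# so x**x approaches exp(0) = 1
assert abs(1e-8 ** 1e-8 - 1.0) < 1e-6
```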
|
164,987 | <p>Find $(x, a, b, c)$ if $$x! = a! + b! + c!$$</p>
<p>I want to know if there are more solutions to this apart from $(x, a, b, c) = (3, 2, 2, 2)$.</p>
| Old John | 32,441 | <p>If $x\ge 4$ then $a \le x-1$, and we have $a!\le x!/4$, and similarly for $b$ and $c$. Then $a!+b!+c!\le \frac{3}{4}x!$, so there are no solutions with $x\ge 4$, and the other cases can be eliminated quickly.</p>
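An exhaustive search over the small cases confirms that $(3,2,2,2)$ is the only solution with $a\le b\le c$; a sketch:

```python
from math import factorial
from itertools import combinations_with_replacement

solutions = []
for x in range(1, 10):                    # the argument above rules out x >= 4 anyway
    # a, b, c <= x suffices, since a > x would give a! > x! on its own
    for a, b, c in combinations_with_replacement(range(1, x + 1), 3):
        if factorial(a) + factorial(b) + factorial(c) == factorial(x):
            solutions.append((x, a, b, c))

assert solutions == [(3, 2, 2, 2)]
```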
|
333,265 | <p>Can $|-x^2 | < 1 $ imply that $-1<x<1$?
My steps are as follows:
$$| -x^2| < 1 $$
$$-1<(-x^2)< 1 $$
$$-1<x^2< 1 $$
$$\sqrt{-1}<x< \sqrt 1 $$</p>
<p>I'm actually looking for the radius of convergence for the power series of $\frac{1}{1-x^2}$:</p>
<p>$$\frac{1}{1-x^2}=\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
This is derived from the equation $$\frac{1}{1-x}=\sum\limits_{n=0}^\infty x^n \hspace{10mm}\text{for} \,|x|<1$$
According to my textbook, the power series $$\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
is 'for the interval (-1,1)' which means that $|-x^2 | < 1 $ implies that $-1<x<1$.
However, that implication does not make sense to me.</p>
| Ayman Hourieh | 4,583 | <p>The conclusion is correct. However, the steps you took to reach it aren't. The $\sqrt{-1}$ at the end should ring an alarm bell.</p>
<p>A better approach would be to notice that $f(x) = x^2$ is a strictly increasing function on $[0, \infty)$ and strictly decreasing on $(-\infty, 0]$. We know that $f(-1) = f(1) = 1$. Thus, if $x^2 < 1$, then $0 \le x < 1$ or $-1 < x \le 0$. Put together to get $-1 < x < 1$.</p>
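A brute-force sanity check of the conclusion on a grid of sample points (my own check, not part of the answer):

```python
# |-x^2| < 1 holds exactly when -1 < x < 1
for i in range(-300, 301):
    x = i / 100
    assert (abs(-x**2) < 1) == (-1 < x < 1)
```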
|
1,318,884 | <blockquote>
<p>Let S be a piecewise smooth oriented surface in <span class="math-container">$\mathbb{R}^3$</span> with positive oriented piecewise smooth boundary curve <span class="math-container">$\Gamma:=\partial S$</span> and <span class="math-container">$\Gamma : X=\gamma(t), t\in [a,b]$</span> a rectifiable parametrization of <span class="math-container">$\Gamma$</span>. Imagine <span class="math-container">$\Gamma$</span> is a wire in which a current I flows through. Then</p>
<p><span class="math-container">$$m:=\frac{I}{2}\int_a^b\gamma(t)\times \dot{\gamma}(t)dt$$</span></p>
<p>is the magnetic moment of the current.</p>
<p>Show that for an arbitrary <span class="math-container">$u\in \mathbb{R}^3$</span></p>
<p><span class="math-container">$$m\cdot u=I\int_Su\cdot d\Sigma$$</span> is true.</p>
</blockquote>
<p>I tried doing this with Stokes but I can't seem to get to the desired equation. The teacher gave us a hint: <span class="math-container">$k_u(x):=\frac{1}{2}u\times x$</span> is a vector field and <span class="math-container">$\operatorname{curl}k_u = u$</span>.</p>
<p>Any tips or hints? I would appreciate it.</p>
| Community | -1 | <p>Here is one way, without using the Fundamental Theorem of Arithmetic, just using the definitions.</p>
<p>The definition of lcm(a,b) is as follows:</p>
<p>t is the lowest common multiple of a and b if it satisfies the following:</p>
<p>i)a | t and b | t </p>
<p>ii)If a | c and b | c, then t | c.</p>
<p>Similarly for the gcd(a,b).</p>
<p>Here is my proof:</p>
<p>Case I: gcd(a,b) $\neq$ 1</p>
<p>Suppose gcd(a,b) = d.</p>
<p>Then $ab = dq_1b = dbq_1 = d*(dq_1q_2)$</p>
<p>Claim: $lcm(a,b) = dq_1q_2$</p>
<p>$a = dq_1$ | $dq_1q_2$ </p>
<p>$b = dq_2$ | $dq_2q_1$.</p>
<p>Suppose lcm(a,b) = c.
Hence c $\leq$ $dq_1q_2$.</p>
<p>To get the other inequality we have $dq_1$ | a and $dq_2$ | b. Hence $dq_1$ $\leq$ a $\leq$ c $\leq$ $dq_1q_2$, and similarly for $dq_2$.</p>
<p>Suppose that c is strictly less than $dq_1q_2$, so we have $dq_1q_2$ < $cq_2$ and $dq_1q_2$ < $cq_1$.</p>
<p>So $dq_1q_2$ < c < $cq_2$ < $dq_2^2q_1$ and $dq_1q_2$ < c < $cq_2$ < $dq_1^2q_2$, but $dq_1^2q_2$ > $dq_1q_2$ so c < $dq_1q_2$ and </p>
<p>c > $dq_1q_2$ contradiction. Hence c = d$q_1q_2$ </p>
<p>Notice that the case where gcd(a,b) = 1 we can just set $q_1 = a$ and $q_2$ = b, and the proof will be the same.</p>
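The claim $\operatorname{lcm}(a,b) = dq_1q_2$ amounts to the identity $\gcd(a,b)\cdot\operatorname{lcm}(a,b)=ab$. A quick exhaustive check on small values (using the library `gcd` as a convenience; this verifies the identity, not the proof):

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) = a*b / gcd(a, b), i.e. d*q1*q2 in the notation above
    return a * b // gcd(a, b)

for a in range(1, 60):
    for b in range(1, 60):
        L = lcm(a, b)
        assert L % a == 0 and L % b == 0       # property (i)
        assert gcd(a, b) * L == a * b
```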
|
724,823 | <p>I would like to know if there is some way to imagine the case when a 3D subspace intersects with a 2D plane in a 4D space.
For example, let's have a 3D space in 4D
$$A = \left(\begin{array}{cccc}1 & 2 & 3 & 4\\7 & 9 & 0 & 5\\ 3 & 3 & 4 & 1\end{array}\right)$$
and
$$B=\left(\begin{array}{cccc}1 & 1 & 0 & 4\\1 & 2 & 0 & 1\end{array}\right)$$</p>
<p>And if I am right, the basis of the space intersection is $C\approx\left(\begin{array}{cccc}1 & 1.1614 & 1.2248 & 2.1628\end{array}\right)$, which is a line.</p>
<p>My stupid question is: is it possible that an intersection in 4D of a 3D space with a 2D plane is a 1D line? I can't picture how it is possible (it's like a 2D plane is "touching" a 3D space in one line?). </p>
| Christoph | 86,801 | <p>Let $e_1, e_2, e_3, e_4$ be the standard basis vectors of $\mathbb R^4$. The $3$-dimensional subspace
$$
U = \operatorname{span}(e_1, e_2, e_4) = \left\{ (x,y,0,w) \in \mathbb R^4 \right\}
$$
and the $2$-dimensional subspace
$$
V = \operatorname{span}(e_3, e_4) = \left\{ (0,0,z,w) \in \mathbb R^4 \right\}
$$
intersect at the line
$$
U \cap V = \operatorname{span}(e_4) = \left\{ (0,0,0,w) \in \mathbb R^4 \right\}.
$$
The situation here is similar to the situation of the $z$-axis intersecting the $xy$-plane at a point in $\mathbb R^3$. Here the $zw$-plane is intersecting the $xyw$-hyperplane at the $w$-axis.</p>
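The dimension count behind this example, $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)=3+2-4=1$, can be verified numerically; a sketch with a hand-rolled rank function (the helper is mine):

```python
def rank(rows):
    # Gaussian elimination with partial pivoting on a list of rows
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

e1, e2, e3, e4 = [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]
U = [e1, e2, e4]                 # the xyw-hyperplane
V = [e3, e4]                     # the zw-plane
# dim(U ∩ V) = dim U + dim V - dim(U + V)
dim_int = rank(U) + rank(V) - rank(U + V)
assert dim_int == 1              # a line, as in the answer
```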
|
569,814 | <p>I was watching a video on YouTube on Quantum Mechanics Concepts and saw that if you wanted to convert a probability amplitude to a probability, you square it. In the video he said that this was equivalent to multiplying by its complex conjugate. So is this correct? Is squaring the same as multiplying by the complex conjugate, or is this just a thing you do in Quantum Mechanics?
Also, I'm not sure if this should be asked on the physics.se instead of math.se, if it should then I'm sorry.</p>
| dfeuer | 17,596 | <p>No, they're different, in both math and quantum mechanics. What is true, however, is that $z\bar z=|z|^2$. In QM, the probability is the square of the <em>absolute value</em> of the amplitude.</p>
|
569,814 | <p>I was watching a video on YouTube on Quantum Mechanics Concepts and saw that if you wanted to convert a probability amplitude to a probability, you square it. In the video he said that this was equivalent to multiplying by its complex conjugate. So is this correct? Is squaring the same as multiplying by the complex conjugate, or is this just a thing you do in Quantum Mechanics?
Also, I'm not sure if this should be asked on the physics.se instead of math.se, if it should then I'm sorry.</p>
| cderwin | 50,816 | <p>Not exactly. You don't square the complex number itself, you square its <em>magnitude</em>. You can see this is true by actually doing the computation for each value:
Let $z=a+bi$. First the modulus:
$$\left|z\right|^2=\left(\sqrt{a^2+b^2}\right)^2=a^2+b^2$$
Then the conjugate:
$$z\bar{z}=(a+bi)(a-bi)=a^2-abi+abi-b^2i^2=a^2-(-1)b^2=a^2+b^2$$</p>
<p>So, yes, squaring the magnitude <em>is</em> the same thing as multiplying by the conjugate.</p>
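A spot check of the identity, and of how it differs from squaring the number itself (sample values are mine):

```python
# z * conj(z) equals |z|**2 on a few complex numbers
for z in (3 + 4j, -1 + 1j, 2.5 - 0.5j):
    lhs = z * z.conjugate()
    assert abs(lhs.imag) < 1e-12
    assert abs(lhs.real - abs(z) ** 2) < 1e-9

# but squaring z itself is different whenever z is not real:
z = 1j
assert z**2 == -1 and z * z.conjugate() == 1
```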
|
1,900,365 | <p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p>
<p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p>
<p>but to no avail. Could someone point me in the right direction? </p>
| Will Jagy | 10,400 | <p>Intro: this is the business about taking the gcd of a polynomial and its derivative, in order to detect a repeated factor. In this case, the repeated factor just squares to give the original, so this is one of the simplest types. For this answer yesterday, <a href="https://math.stackexchange.com/questions/1899346/how-can-one-find-the-factorization-a4-2a3-3a2-2a-1-a2-a-12/1899391#1899391">How can one find the factorization $a^4 + 2a^3 + 3a^2 + 2a + 1 = (a^2 + a + 1)^2$ from scratch?</a> I deliberately made up a more complicated situation. In that one, it was actually the cube of something. </p>
<p>$$ f = m^4 + 6 m^3 + 11 m^2 + 6 m + 1 $$
$$ f' = 4 m^3 + 18 m^2 + 22 m + 6 $$
$$ f'/2 = 2 m^3 + 9 m^2 + 11 m + 3 $$
$$ 2 f - m (2 m^3 + 9 m^2 + 11 m + 3) = 3 m^3 + 11 m^2 + 9 m + 2 $$
$$ 3(2 m^3 + 9 m^2 + 11 m + 3) - 2 (3 m^3 + 11 m^2 + 9 m + 2) = 5 m^2 + 15 m + 5. $$
With $$ 5 m^2 + 15 m + 5 = 5 (m^2 + 3m + 1) $$ we want
$$ \gcd_{\mathbb Q}(2 m^3 + 9 m^2 + 11 m + 3,m^2 + 3m + 1 ). $$
However,, $$ 2 m^3 + 9 m^2 + 11 m + 3 = (2m+3)(m^2 + 3m + 1) $$
so the GCD is $m^2 + 3m + 1$ itself. This must be a repeat factor of the original $f,$ and we check that, in fact, $f = (m^2 + 3m+1)^2$</p>
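The final check $f=(m^2+3m+1)^2$ is a coefficient convolution; a quick sketch:

```python
# multiply (m^2 + 3m + 1) by itself via coefficient convolution
g = [1, 3, 1]                       # m^2 + 3m + 1, highest power first
prod = [0] * (2 * len(g) - 1)
for i, gi in enumerate(g):
    for j, gj in enumerate(g):
        prod[i + j] += gi * gj

# coefficients of m^4 + 6m^3 + 11m^2 + 6m + 1
assert prod == [1, 6, 11, 6, 1]
```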
|
2,542,242 | <p>When I tried solving for the inverse of $f(x)=\frac{1-x}{-x}$, I got this:</p>
<p>$f^{-1}(x)=\frac{1}{1-x}$</p>
<p>I know that the way to check my answer would be to take the inverse of the inverse I just found, but this is what I get:</p>
<p>$f^{-1}(f^{-1}(x))=1-\frac{1}{x}=\frac{x-1}{x}$</p>
<p>The last part, when multiplied by $\frac{-1}{-1}$ is indeed my original function, but am I allowed to do that? And why does that get lost when taking the inverse?</p>
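A numeric spot check (my own, at a few sample points away from $x=0,1$) that the proposed inverse is correct:

```python
f = lambda x: (1 - x) / (-x)       # the original function
g = lambda x: 1 / (1 - x)          # the proposed inverse

# g undoes f and vice versa at each sample point
for x in (-3.0, -0.5, 0.25, 0.5, 2.0, 7.0):
    assert abs(f(g(x)) - x) < 1e-9
    assert abs(g(f(x)) - x) < 1e-9
```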
| Jacky Chong | 369,395 | <p>Another way is by a change of variable, which is also essentially how one proves the Leibniz formula mentioned by @spaceisdarkgreen. Take $u = t/x$; then we see that
\begin{align}
\int^x_0 f(x-t)\ dt = \int^1_0 xf(x(1- u))\ du.
\end{align}
Taking the derivative with respect $x$ gives us
\begin{align}
\frac{d}{dx}\int^x_0 f(x-t)\ dt =&\ \int^1_0 f(x(1-u))\ du+ \int^1_0 xf'(x(1-u))(1-u)\ du.
\end{align}
Using integration by part formula, we get
\begin{align}
\int^1_0 xf'(x(1-u))(1-u)\ du =&\ - f(x(1-u))(1-u)\big|^1_{u=0} - \int^1_0 f(x(1-u))\ du\\
=&\ f(x) -\int^1_0 f(x(1-u))\ du
\end{align}
which means
\begin{align}
\frac{d}{dx}\int^x_0 f(x-t)\ dt = f(x).
\end{align}</p>
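A numeric sanity check of the identity $\frac{d}{dx}\int_0^x f(x-t)\,dt=f(x)$, using $f=\sin$ (the quadrature and step sizes below are my choices):

```python
import math

def F(x, f, n=2000):
    # trapezoidal approximation of the integral of f(x - t) over t in [0, x]
    h = x / n
    s = 0.5 * (f(x) + f(0.0))
    for k in range(1, n):
        s += f(x - k * h)
    return s * h

f = math.sin                       # any C^1 function works here
x, eps = 1.3, 1e-4
deriv = (F(x + eps, f) - F(x - eps, f)) / (2 * eps)   # central difference
assert abs(deriv - f(x)) < 1e-3    # d/dx of the integral recovers f(x)
```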
|
1,547,057 | <p>I've been working on the following problems, and I know how to integrate functions, but I do not know how to find the value of "c" in the examples below when finding the antiderivative. Any idea what to do? Cheers</p>
<ol>
<li><p>A particle travels in a straight line such that its acceleration at time <em>t</em> seconds is equal to $6t+1$ $m/s^2$. When $t=2$, the displacement is equal to $12m$ and when $t=3$ the displacement is equal to $34m$.
Find the displacement and velocity when $t=4$.</p></li>
<li><p>A particle travels in a straight line with its acceleration at time <em>t</em> equal to $3t+2$ $ m/s^2$. The particle has an initial positive velocity and travels $30m$ in the fourth second.
Find the velocity of the body when $t=5$.</p></li>
</ol>
| SchrodingersCat | 278,967 | <p><strong>HINT</strong>: Your first problem requires you to find both displacement and velocity. So you have to solve the first order as well as the second order differential equation. For the second order differential equation, you will have 2 constants, say $A$ and $B$. Use the 2 conditions for displacement and solve for the constants.Then differentiate to get the expression for velocity.</p>
|
1,547,057 | <p>I've been working on the following problems, and I know how to integrate functions, but I do not know how to find the value of "c" in the examples below when finding the antiderivative. Any idea what to do? Cheers</p>
<ol>
<li><p>A particle travels in a straight line such that its acceleration at time <em>t</em> seconds is equal to $6t+1$ $m/s^2$. When $t=2$, the displacement is equal to $12m$ and when $t=3$ the displacement is equal to $34m$.
Find the displacement and velocity when $t=4$.</p></li>
<li><p>A particle travels in a straight line with its acceleration at time <em>t</em> equal to $3t+2$ $ m/s^2$. The particle has an initial positive velocity and travels $30m$ in the fourth second.
Find the velocity of the body when $t=5$.</p></li>
</ol>
| Nizar | 227,505 | <p>For the first problem: you have that the acceleration is $$a(t)=6t+1 $$ Let $x(t)$ and $v(t)$ denote the displacement and the velocity respectively. Then $x''(t)=a(t)$. So integrating $a(t)$ twice we end up with $$ x(t)= t^3 +\frac{t^2}{2}+At+B$$ but we have $x(2)=12$ and $x(3)=34$. Then $$ 8+ 2 +2A+B= 12$$ and $$ 27+\frac{9}{2}+3A+B=34 $$ Subtracting the first equation from the second we get $ 17+\frac{9}{2} +A= 22 $, hence $A= \frac{1}{2}$. Then substitute to find $B$. </p>
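Finishing the computation for problem 1 (this follows the answer's setup with $A=\frac12$; the exact arithmetic with `fractions` is my choice):

```python
from fractions import Fraction as Fr

# x(t) = t^3 + t^2/2 + A*t + B, with x(2) = 12 and x(3) = 34
rhs1 = Fr(12) - (Fr(8) + Fr(2))          # 2A + B = rhs1
rhs2 = Fr(34) - (Fr(27) + Fr(9, 2))      # 3A + B = rhs2
A = rhs2 - rhs1                          # subtract the two equations
B = rhs1 - 2 * A
assert (A, B) == (Fr(1, 2), Fr(1))

# displacement and velocity at t = 4, with v(t) = 3t^2 + t + A
x4 = Fr(4) ** 3 + Fr(4) ** 2 / 2 + A * 4 + B
v4 = 3 * Fr(4) ** 2 + Fr(4) + A
assert x4 == 75 and v4 == Fr(105, 2)     # 75 m and 52.5 m/s
```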
|
958,267 | <p>When I was doing problems in my textbook, I came across this problem:</p>
<blockquote>
<p>The velocity of a heavy meteorite entering the earth's atmosphere is inversely proportional to $\sqrt s$ when it is s kilometers from the earth's center. Show that the meteorite's acceleration is inversely proportional to $s^2$.</p>
</blockquote>
<p>From what I have learned, I know the velocity $= k/\sqrt s$. And the acceleration is just the derivative of the velocity, which means $dv/dt$. But when I looked at this answer, it does something like $\frac{dv}{dt} = \frac{dv}{ds} \cdot \frac{ds}{dt} = \frac{dv}{ds}v$ and then gets a different answer than mine. I don't understand why we can't just take the derivative directly. I appreciate the help, thanks!</p>
| Varun Iyer | 118,690 | <p>The problem is here that we don't know the <strong>time.</strong></p>
<p>However, we do know what $s$ is, since it is in our equation.</p>
<p>Therefore,</p>
<p>we split up the derivative.</p>
<p>Taking the derivative with respect to $t$ for the equation:</p>
<p>$$\frac{dv}{dt}$$</p>
<p>Now, since we know $s$, we change the variable by using the chain rule.</p>
<p>$$\frac{dv}{dt} = \frac{dv}{ds}*\frac{ds}{dt}$$</p>
<p>Since $\frac{ds}{dt} = v$,</p>
<p>$$\frac{dv}{dt} = \frac{dv}{ds}*\frac{ds}{dt} = \frac{dv}{ds}*v$$</p>
<p>Now we have the acceleration in terms of $s$.</p>
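A numeric check of $a=v\,\frac{dv}{ds}$ for $v=k/\sqrt s$ (the constant $k$ below is an arbitrary assumption; the result is $a=-k^2/(2s^2)$, inversely proportional to $s^2$ as the problem asks):

```python
import math

k = 2.0                            # assumed proportionality constant
def v(s):
    return k / math.sqrt(s)        # v inversely proportional to sqrt(s)

def accel(s, eps=1e-6):
    dvds = (v(s + eps) - v(s - eps)) / (2 * eps)
    return v(s) * dvds             # a = v * dv/ds, by the chain rule

# acceleration matches -k^2 / (2 s^2)
for s in (1.0, 4.0, 25.0):
    assert abs(accel(s) + k * k / (2 * s * s)) < 1e-6
```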
|
4,338,558 | <p>I am not sure if my proof here is sound, please could I have some opinions on it? If you disagree, I would appreciate some advice on how to fix my proof. Thanks</p>
<p><span class="math-container">$X_1, X_2, ..., X_n$</span> are countably infinite sets.</p>
<p>Let <span class="math-container">$X_1 = \{{x_1}_1, {x_1}_2, {x_1}_3, ... \}$</span></p>
<p>Let <span class="math-container">$X_2 = \{{x_2}_1, {x_2}_2, {x_2}_3, ... \}$</span></p>
<p>...</p>
<p>Let <span class="math-container">$X_n = \{{x_n}_1, {x_n}_2, {x_n}_3, ... \}$</span></p>
<p>Let <span class="math-container">$P_n$</span> be the list of the first <span class="math-container">$n$</span> ordered primes: <span class="math-container">$P_n = (2,3,5,...,p_n) = (p_1,p_2,p_3,...,p_n)$</span></p>
<p>Define the injection: <span class="math-container">$\sigma: X_1 \times X_2 \times ... \times X_n \to \mathbb{N}$</span></p>
<p><span class="math-container">$\sigma (({x_1}_A, {x_2}_B, {x_3}_C,...,{x_n}_N)) = p_1^A\cdot p_2^B \cdot p_3^C \cdot ... \cdot p_n^N$</span></p>
<p>By the Fundamental Theorem of Arithmetic, <span class="math-container">$\sigma$</span> is an injection, because if two elements in the domain map to the same element in the codomain, they must be the same element.</p>
<p>Clearly, $\sigma$ injects the product into $\mathbb{N}$, so the product is at most countable; being infinite, the Cartesian product of $n$ sets which are all countably infinite is itself countably infinite.</p>
<p>EDIT: Is it worth noting that my <span class="math-container">$X_n$</span> sets should be ordered or does that not matter?</p>
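The injection $\sigma$ from the question is easy to test on a finite box of index tuples; a sketch (function name and box size are mine):

```python
def sigma(indices, primes=(2, 3, 5)):
    # (A, B, C) -> 2^A * 3^B * 5^C, injective by unique prime factorization
    out = 1
    for p, e in zip(primes, indices):
        out *= p ** e
    return out

# injectivity check: no two distinct index tuples share an image
box = [(a, b, c) for a in range(1, 7) for b in range(1, 7) for c in range(1, 7)]
images = [sigma(t) for t in box]
assert len(set(images)) == len(images)
```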
| H A Helfgott | 169,068 | <p>Let me now work out the harder cases, starting as @Ninad_Munshi has suggested.</p>
<p>Let <span class="math-container">$$U(R) = \{(x,y)\in [1,\infty)\times [1,\infty): x^\alpha y^\alpha \sqrt{|x\pm y|}> R\},$$</span>
where <span class="math-container">$\alpha>1$</span>. We want to estimate
<span class="math-container">$$\iint_{U(R)}\frac{\log x \log y}{x^\alpha y^\alpha \sqrt{|x\pm y|}} dx dy. $$</span></p>
<p>(I have a square root here, whereas I didn't have one in the original post. This is the question I actually need answered. To get an answer to the question originally asked, just go through what I've done below, making small changes as needed.)</p>
<p>First of all,
<span class="math-container">$$\iint_{U(R)}\frac{\log x \log y}{x^\alpha y^\alpha \sqrt{|x\pm y|}} dx dy =
\frac{\partial}{\partial a} \frac{\partial}{\partial b}
\iint_{U(R)}\frac{1}{x^a y^b \sqrt{|x\pm y|}} dx dy
\bigg|_{(a,b)=(\alpha,\alpha)}
$$</span>
Let <span class="math-container">$u = x y$</span>, <span class="math-container">$v = y/x$</span>. The Jacobian determinant of <span class="math-container">$(x,y)\mapsto (u,v)$</span> equals <span class="math-container">$2 v$</span>, and so <span class="math-container">$dx dy = du dv/2v$</span>. Thus
<span class="math-container">$$\iint_{U(R)} \frac{dy dx}{x^a y^b \sqrt{|x\pm y|}} =
2\mathop{\iint_{U(R)}}_{x<y} \frac{dy dx}{x^a y^b \sqrt{y\pm x}} =
\iint_{(u,v)\in V} \frac{du dv}{
u^{\frac{a+b}{2}+\frac{1}{4}} v^{\frac{b-a}{2}+\frac{3}{4}} \sqrt{v\pm 1}},
$$</span>
since <span class="math-container">$v \cdot \sqrt{\sqrt{u v}\pm \sqrt{u/v}}=u^{1/4} v^{3/4} \sqrt{v\pm 1}$</span>. Here
<span class="math-container">$$\begin{aligned}V &=\left\{(u,v)\in \mathbb{R}^+\times (1,\infty): u^{\alpha}\left(\sqrt{uv}\pm \sqrt{u/v}\right)>R,\; \sqrt{u/v}\geq 1, \sqrt{u v} \geq 1\right\}\\
&=\left\{(u,v)\in \mathbb{R}^+\times (1,\infty): u^{2 \alpha+1}(\sqrt{v}\pm1/\sqrt{v})^2>R^2,\; u\geq v\right\}.\end{aligned}$$</span>
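A quick numeric spot check of the algebraic identity behind this change of variables, for both signs (the sample points are arbitrary; this is only a sanity check, not part of the derivation):

```python
import math

# v * sqrt(sqrt(u*v) ± sqrt(u/v)) = u^(1/4) * v^(3/4) * sqrt(v ± 1), for v > 1
for u, v in [(2.0, 3.0), (10.0, 1.5), (0.7, 8.0)]:
    for s in (+1.0, -1.0):
        lhs = v * math.sqrt(math.sqrt(u * v) + s * math.sqrt(u / v))
        rhs = u ** 0.25 * v ** 0.75 * math.sqrt(v + s)
        assert abs(lhs - rhs) < 1e-9 * rhs
```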
For given <span class="math-container">$v$</span>,
<span class="math-container">$$\int_{u: (u,v)\in V} \frac{du}{u^{\frac{a+b}{2} + \frac{1}{4}}} =
\int_{\max((R/(\sqrt{v}\pm 1/\sqrt{v}))^{\frac{2}{2\alpha+1}},v)}^\infty \frac{du}{u^{\frac{a+b}{2} + \frac{1}{4}}} =
\frac{\max\left(R^\frac{2}{2\alpha+1}/\left(\sqrt{v}\pm 1/\sqrt{v}\right)^{\frac{2}{2\alpha+1}},v
\right)^{-\left(\frac{a+b}{2} -\frac{3}{4}\right)}}{\frac{a+b}{2} -\frac{3}{4}}
$$</span>
Let <span class="math-container">$v_0$</span> be the least <span class="math-container">$v\geq 1$</span> such that <span class="math-container">$(v^{\alpha+1}\pm v^\alpha) \geq R$</span> (or <span class="math-container">$v_0=1$</span>, if there is no such <span class="math-container">$v$</span>). Then, in the expression above, <span class="math-container">$\max(\cdot,\cdot)$</span> equals <span class="math-container">$v$</span> if and only if <span class="math-container">$v\geq v_0$</span>. Now
<span class="math-container">$$\int_{v=1}^{v_0} \frac{\left(\sqrt{v}\pm 1/\sqrt{v}\right)^{\frac{2}{2\alpha+1}\left(\frac{a+b}{2} -\frac{3}{4}\right)}}{v^{\frac{b-a}{2}+\frac{3}{4}} \sqrt{v\pm 1}} dv =
\int_{v=1}^{v_0} \frac{\left(v\pm 1\right)^{c-\frac{1}{2}}}{v^{\frac{b-a+c}{2}+\frac{3}{4}}} dv,$$</span>
where <span class="math-container">$c = \frac{1}{2\alpha+1}\left(a+b -\frac{3}{2}\right)$</span>, and
<span class="math-container">$$\int_{v=v_0}^\infty \frac{v^{-\left(\frac{a+b}{2} -\frac{3}{4}\right)}}{v^{\frac{b-a}{2}+\frac{3}{4}} \sqrt{v\pm 1}} dv =
\int_{v=v_0}^\infty \frac{1}{v^{b } \sqrt{v\pm 1}} dv
.$$</span>
(Incidentally, both of the expressions on the right can be written as values of hypergeometric functions, not that it's clear how that helps.) The latter integral results in the contribution
<span class="math-container">$$\begin{aligned}\frac{\partial}{\partial a} \frac{\partial}{\partial b}\left(
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\int_{v=v_0}^\infty \frac{1}{v^{b } \sqrt{v\pm 1}} dv\right)
&= \left(\frac{\partial}{\partial b}\frac{\partial}{\partial a}
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right)
\int_{v=v_0}^\infty \frac{1}{v^{b } \sqrt{v\pm 1}} dv\\
&+ \left(\frac{\partial}{\partial a}
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right) \left(\frac{\partial}{\partial b}
\int_{v=v_0}^\infty \frac{1}{v^{b } \sqrt{v\pm 1}} dv\right)
\end{aligned}
$$</span>
Since
<span class="math-container">$$\left(\frac{\partial}{\partial a}
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right)
\bigg|_{(a,b)=(\alpha,\alpha)} = - \frac{1/2}{\left(\alpha-\frac{3}{4}\right)^2},\;\;\;\;\;\;\left(\frac{\partial}{\partial b} \frac{\partial}{\partial a}
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right)
\bigg|_{(a,b)=(\alpha,\alpha)} =
\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^3},$$</span>
this contribution is
<span class="math-container">$$
\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^3}
\int_{v=v_0}^\infty \frac{1}{v^{\alpha } \sqrt{v\pm 1}} dv +
\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^2}
\int_{v=v_0}^\infty \frac{\log v}{v^{\alpha} \sqrt{v\pm 1}} dv
.
$$</span>
The second contribution is <span class="math-container">$R^{\frac{2}{2\alpha+1} \left(- \left(\alpha - \frac{3}{4}\right)\right)} = R^{\frac{2}{2\alpha+1}
\left(\frac{3}{4}-\alpha\right)}$</span> times
<span class="math-container">$$\frac{\partial}{\partial a} \frac{\partial}{\partial b}
\left(
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\int_{1}^{v_0} \frac{\left(v\pm 1\right)^{c-\frac{1}{2}}}{v^{\frac{b-a+c}{2}+\frac{3}{4}}} dv\right).$$</span>
Noting that <span class="math-container">$c = \frac{1}{2\alpha+1}\left(a+b -\frac{3}{2}\right) =1 - \frac{5/2}{2\alpha+1}$</span> for <span class="math-container">$a=b=\alpha$</span>, we see that
<span class="math-container">$$\frac{\partial}{\partial a} \int_{1}^{v_0} \frac{\left(v\pm 1\right)^{c-\frac{1}{2}}}{v^{\frac{b-a+c}{2}+\frac{3}{4}}} dv= \int_{1}^{v_0}
\left(\frac{\log (v\pm 1)}{2\alpha+1} + \frac{\log v}{2} \left(1 -
\frac{1}{2\alpha+1}\right)\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv
,$$</span>
<span class="math-container">$$\frac{\partial}{\partial b} \int_{1}^{v_0} \frac{\left(v\pm 1\right)^{c-\frac{1}{2}}}{v^{\frac{b-a+c}{2}+\frac{3}{4}}} dv= \int_{1}^{v_0} \left(\frac{\log (v\pm 1)}{2\alpha+1} + \frac{\log v}{2} \left(-1 -
\frac{1}{2\alpha+1}\right)\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv,$$</span>
<span class="math-container">$$\begin{aligned}\frac{\partial}{\partial a} \frac{\partial}{\partial b}
\int_{1}^{v_0} \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv &=
\int_{1}^{v_0} \left(\frac{\log (v\pm 1)}{2\alpha+1} + \frac{\log v}{2} \left(-1 -
\frac{1}{2\alpha+1}\right)\right)
\left(\frac{\log (v\pm 1)}{2\alpha+1} + \frac{\log v}{2} \left(1 -
\frac{1}{2\alpha+1}\right)\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv\\
&=
\int_{1}^{v_0} \left(\frac{\log^2 \left(\sqrt{v}\pm \frac{1}{\sqrt{v}}\right)}{(2\alpha+1)^2} - \frac{\log^2 v}{4}\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv
.\end{aligned}$$</span>
We remark that <span class="math-container">$\left(\frac{\partial}{\partial a} \frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right) \bigg|_{(a,b)=(\alpha,\alpha)} = \left(\frac{\partial}{\partial b}
\frac{1}{\frac{a+b}{2} -\frac{3}{4}}\right) \bigg|_{(a,b)=(\alpha,\alpha)},$</span> and conclude that the second contribution is <span class="math-container">$R^{\frac{2}{2\alpha+1} \left(\frac{3}{4}-\alpha\right)}$</span> times
<span class="math-container">$$\begin{aligned}
&
\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^3}
\int_{1}^{v_0} \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv\\
- &\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^2}
\int_{1}^{v_0} \frac{2 \log (\sqrt{v}\pm \frac{1}{\sqrt{v}})}{2\alpha+1} \cdot \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv \\
+ &\frac{1}{\alpha -\frac{3}{4}}
\int_{1}^{v_0} \left(\frac{\log^2 \left(\sqrt{v}\pm \frac{1}{\sqrt{v}}\right)}{(2\alpha+1)^2} - \frac{\log^2 v}{4}\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv
\end{aligned}
$$</span></p>
<p>For <span class="math-container">$\alpha<2$</span> (as in the original example, <span class="math-container">$\alpha=5/3$</span>), these integrals all converge even when completed (that is, extended up to <span class="math-container">$\infty$</span>). Then we can bound our integral
<span class="math-container">$$\iint_{U(R)}\frac{\log x \log y}{x^\alpha y^\alpha \sqrt{|x\pm y|}} dx dy$$</span> by
<span class="math-container">$$\frac{1}{2 \left(\alpha-\frac{3}{4}\right)^2}\int_{v=v_0}^\infty \frac{\left(\alpha-\frac{3}{4}\right)^{-1} +{\log v}}{v^{\alpha} \sqrt{v\pm 1}} dv
+I_\pm\cdot R^{-\frac{2}{2\alpha+1} \left(\alpha-\frac{3}{4}\right)},
$$</span>
where
<span class="math-container">$$I_\pm = \int_{1}^{\infty}
\left(\frac{1/2}{\left(\alpha-\frac{3}{4}\right)^3}
- \frac{ \log (\sqrt{v}\pm \frac{1}{\sqrt{v}})}{(2\alpha+1)\left(\alpha-\frac{3}{4}\right)^2}
+ \frac{\frac{\log^2 \left(\sqrt{v}\pm \frac{1}{\sqrt{v}}\right)}{(2\alpha+1)^2} - \frac{\log^2 v}{4}}{\alpha -\frac{3}{4}}\right) \frac{\left(v\pm 1\right)^{
\frac{1}{2} - \frac{5/2}{2\alpha+1}}}{v^{\frac{5}{4} - \frac{5/4}{2\alpha+1}}} dv
$$</span></p>
<p>In the "<span class="math-container">$-$</span>" case, since <span class="math-container">$v_0^{\alpha+1}-v_0^\alpha=R$</span>, it is clear that <span class="math-container">$v_0\geq R^{\frac{1}{\alpha+1}}$</span>. It is also clear that
<span class="math-container">$$\begin{aligned}\int_{v=v_0}^\infty \frac{\left(\alpha-\frac{3}{4}\right)^{-1} +{\log v}}{v^{\alpha} \sqrt{v- 1}} dv &\leq \frac{1}{\sqrt{1-1/v_0}}
\int_{v=v_0}^\infty \frac{\left(\alpha-\frac{3}{4}\right)^{-1} +{\log v}}{v^{\alpha+1/2}} dv\\ &=
\frac{\left(\alpha-\frac{3}{4}\right)^{-1} -
\frac{\partial}{\partial \alpha}}{\sqrt{1-1/v_0}}
\int_{v=v_0}^\infty \frac{1}{v^{\alpha+1/2}} dv\\ &=
\frac{\left(\alpha-\frac{3}{4}\right)^{-1}
+
\left(\alpha-\frac{1}{2}\right)^{-1}+\log v_0}{\sqrt{1-1/v_0}
\left(\alpha-\frac{1}{2}\right) v_0^{\alpha-1/2}}.
\end{aligned}$$</span></p>
<p>Hence, we can give a total bound of
<span class="math-container">$$\frac{\left(\alpha-\frac{3}{4}\right)^{-1}
+
\left(\alpha-\frac{1}{2}\right)^{-1}+\frac{\log R}{\alpha+1}}{2\sqrt{1-R^{-\frac{1}{\alpha+1}}}
\left(\alpha-\frac{1}{2}\right) \left(\alpha-\frac{3}{4}\right)^2} R^{-\frac{\alpha-\frac{1}{2}}{\alpha+1}} + I_-\cdot R^{-\frac{2\alpha-3/2}{2\alpha+1}}
$$</span>
For <span class="math-container">$\alpha<2$</span>, we have <span class="math-container">$\frac{\alpha-\frac{1}{2}}{\alpha+1}>\frac{2\alpha-3/2}{2\alpha+1}$</span>, and so the second term is dominant.</p>
<p>In the "<span class="math-container">$+$</span>" case, <span class="math-container">$v_0^{\alpha+1}+v_0^\alpha=(v_0+1) v_0^{\alpha}=R$</span>, and so <span class="math-container">$v_0\geq R^{1/(\alpha+1)}-1$</span>. Much as before,
<span class="math-container">$$\begin{aligned}\int_{v=v_0}^\infty \frac{\left(\alpha-\frac{3}{4}\right)^{-1} +{\log v}}{v^{\alpha} \sqrt{v+ 1}} dv &\leq \frac{1}{\sqrt{1+1/v_0}}
\int_{v=v_0}^\infty \frac{\left(\alpha-\frac{3}{4}\right)^{-1} +{\log v}}{v^{\alpha+1/2}} dv\\ &=
\sqrt{1-R^{-\frac{1}{\alpha+1}}}
\frac{\left(\alpha-\frac{3}{4}\right)^{-1}
+
\left(\alpha-\frac{1}{2}\right)^{-1}+\log v_0}{
\left(\alpha-\frac{1}{2}\right) v_0^{\alpha-1/2}}.
\end{aligned}$$</span>
Thus, the total bound now is
<span class="math-container">$$\sqrt{1-R^{-\frac{1}{\alpha+1}}}
\frac{\left(\alpha-\frac{3}{4}\right)^{-1}
+
\left(\alpha-\frac{1}{2}\right)^{-1}+\frac{\log R}{\alpha+1}}{2
\left(\alpha-\frac{1}{2}\right) \left(\alpha-\frac{3}{4}\right)^2} v_0^{-\alpha+\frac{1}{2}} + I_+\cdot R^{-\frac{2\alpha-3/2}{2\alpha+1}},
$$</span>
where, again, the latter term is dominant for <span class="math-container">$\alpha<2$</span>.</p>
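<p>As a quick numerical sanity check of the two lower bounds on <span class="math-container">$v_0$</span> used above (a sketch only; the choices <span class="math-container">$\alpha=5/3$</span> and <span class="math-container">$R=10^3$</span> are arbitrary):</p>

```python
alpha, R = 5 / 3, 1e3

def bisect(f, lo, hi, iters=200):
    # plain bisection for an increasing f with f(lo) < 0 < f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# "-" case: v0 solves v0^(a+1) - v0^a = R; the claim is v0 >= R^(1/(a+1))
v0_minus = bisect(lambda v: v ** (alpha + 1) - v ** alpha - R, 1.0, 1e6)
assert v0_minus >= R ** (1 / (alpha + 1))

# "+" case: v0 solves v0^(a+1) + v0^a = R; the claim is v0 >= R^(1/(a+1)) - 1
v0_plus = bisect(lambda v: v ** (alpha + 1) + v ** alpha - R, 0.0, 1e6)
assert v0_plus >= R ** (1 / (alpha + 1)) - 1
```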
|
4,003,019 | <p>So I've just started learning topology, and a lot of the definitions confuse me.
My main problem is that some of the definitions seem inconsistent to me. For example, in Topology Without Tears the author says that a topology <span class="math-container">$\tau$</span> is a set of subsets of <span class="math-container">$X$</span> and then proceeds with the axioms.
However, I've also seen from many other sources that a topology is a family of subsets of <span class="math-container">$X$</span>, followed by the same axioms; those sources do not refer to the topology <span class="math-container">$\tau$</span> as being a set at all, which confuses me.</p>
<p>I think my confusion comes down to what a family actually is. I've seen lots of confusing definitions of what a family is, such as that it is a surjective function, but I cannot seem to grasp the idea.</p>
<p>So why do they define them differently and what is a family?</p>
<p>Thanks in advance.</p>
| Paul Frost | 349,785 | <p>I think the word "family" is somewhat ambiguous. For example, <a href="https://en.wikipedia.org/wiki/Family_of_sets" rel="nofollow noreferrer">Wikipedia</a> says</p>
<blockquote>
<p>In set theory and related branches of mathematics, a collection <span class="math-container">$F$</span> of subsets of a given set <span class="math-container">$S$</span> is called a <em>family of subsets of <span class="math-container">$S$</span></em>, or a <em>family of sets over <span class="math-container">$S$</span></em>. More generally, a collection of any sets whatsoever is called a <em>family of sets</em> or a <em>set-family</em> or a <em>set-system</em>.<br />
The term "collection" is used here because, in some contexts, a family of sets may be allowed to contain repeated copies of any given member, and in other contexts it may form a proper class rather than a set.</p>
</blockquote>
<p>Here <em>allowed to contain repeated copies of any given member</em> means what Wikipedia denotes as an <a href="https://en.wikipedia.org/wiki/Indexed_family" rel="nofollow noreferrer">indexed family</a> :</p>
<blockquote>
<p>More formally, an indexed family is a mathematical function together with its domain <span class="math-container">$I$</span> and image <span class="math-container">$P$</span>. Often the elements of the set <span class="math-container">$P$</span> are referred to as making up the family. In this view indexed families are interpreted as collections instead of as functions. The set <span class="math-container">$I$</span> is called the <em>index (set)</em> of the family, and <span class="math-container">$P$</span> is the indexed set.</p>
</blockquote>
<p>The first quotation shows that a family of sets can be understood <strong>either</strong> as a set of sets <strong>or</strong> as function <span class="math-container">$f : I \to P$</span> where <span class="math-container">$P$</span> is a set of sets. This is indeed vague and leaves much scope for interpretation. The same vagueness of notation can be found in many textbooks.</p>
<p>For a given set <span class="math-container">$X$</span> we may consider</p>
<ul>
<li><p>sets <span class="math-container">$\tau$</span> of subsets of <span class="math-container">$X$</span>, i.e. subsets of <span class="math-container">$\tau \subset \mathfrak P(X)$</span> = power set of <span class="math-container">$X$</span>.</p>
</li>
<li><p>"indexed collections" of subsets of <span class="math-container">$X$</span>, i.e. functions <span class="math-container">$\theta : I \to \mathfrak P(X)$</span>.</p>
</li>
</ul>
<p>Each subset <span class="math-container">$\tau \subset \mathfrak P(X)$</span> can be canonically identified with the inclusion function <span class="math-container">$\iota(\tau) : \tau \hookrightarrow \mathfrak P(X)$</span>; this produces a "self-indexed collection" of subsets of <span class="math-container">$X$</span>. Conversely, each function <span class="math-container">$\theta : I \to \mathfrak P(X)$</span> determines the subset <span class="math-container">$\text{im}(\theta) = \theta(I) \subset \mathfrak P(X)$</span>. Clearly <span class="math-container">$\text{im}(\iota(\tau)) = \tau$</span>, but in general <span class="math-container">$\iota(\text{im}(\theta)) \ne \theta$</span>. In fact, for a given <span class="math-container">$\tau \subset \mathfrak P(X)$</span> there are many <span class="math-container">$\theta : I \to \mathfrak P(X)$</span> such that <span class="math-container">$\text{im}(\theta) = \tau$</span>. One can even show that the "collection" of all <span class="math-container">$\theta : I \to \mathfrak P(X)$</span> such that <span class="math-container">$\text{im}(\theta) = \tau$</span> is not even a set, but a <em>proper class</em>.</p>
<p><strong>The standard</strong> is to define a topology on a set <span class="math-container">$X$</span> as a <em>set of subsets of <span class="math-container">$X$</span> satisfying suitable axioms.</em></p>
<p>If you regard the word <em>family</em> as a synonym for <em>set</em>, then you do not get conflicting definitions. If you regard a family as a function, then a topology on <span class="math-container">$X$</span> would be some function <span class="math-container">$\theta : I \to \mathfrak P(X)$</span> from an index set <span class="math-container">$I$</span> to the power set of <span class="math-container">$X$</span>, which is often written in the form <span class="math-container">$\{U_i\}_{i \in I}$</span> (an indexed collection of subsets). You can easily modify the axioms for a topology to obtain similar axioms for an indexed collection of subsets of <span class="math-container">$X$</span>. It is fairly obvious that <span class="math-container">$\theta$</span> satisfies these modified axioms if and only if <span class="math-container">$\text{im}(\theta)$</span> is a topology in the standard sense. The essential disadvantage of the alternative definition is that indices are completely unnecessary: many formally distinct families of "indexed topologies" give the same "standard topology".</p>
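<p>The distinction can also be made concrete in code (a toy illustration only, not a formal definition): a "set of sets" carries no indices, while many formally distinct indexed families share the same image:</p>

```python
# X = {0, 1}; subsets of X modeled as frozensets
U, V = frozenset(), frozenset({0})

# a "set of sets": no indices, no repetition possible
tau = {U, V}

# two different indexed families (functions I -> P(X)) with the same image
theta1 = {1: U, 2: V}
theta2 = {1: V, 2: V, 3: U}   # repeats V, yet determines the same tau

assert set(theta1.values()) == set(theta2.values()) == tau
assert theta1 != theta2       # distinct as functions
```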
|
2,006,437 | <p>Is it possible to find an example of two matrices $A,B\in M_4(\mathbb{R})$, both having $\operatorname{rank}<2$, such that $\det(A-\lambda B)\ne 0$, i.e. it is not identically the zero polynomial? Here $\lambda$ is an indeterminate (a variable). In other words, I want $A-\lambda B$ to be a full-rank matrix, viewing $A-\lambda B$ as a matrix whose entries are linear polynomials.</p>
| RGS | 329,832 | <p>Note that $\operatorname{rank} A, \operatorname{rank} B < 2$ means their ranks are $1$ (if either of them is $0$, your question has a trivial answer). That means that the columns of $A$ are $v, \alpha_v v, \beta_v v, \delta_v v$ and the columns of $B$ are $u, \alpha_u u, \beta_u u, \delta_u u$. Now note that the columns of $(A - \lambda B)$ will be sums of those multiples of $u$ with multiples of $v$. Regardless of how you pair the $v, \alpha_v v, \beta_v v, \delta_v v$ with the $u, \alpha_u u, \beta_u u, \delta_u u$, every column of $(A - \lambda B)$ will be of the form $xu + yv$ for some scalars $x$ and $y$. Therefore all four columns are linear combinations of the two vectors $u$ and $v$, so the rank of $(A - \lambda B)$ can never be greater than $2$. In particular $\det(A-\lambda B)$ vanishes identically, so no such example exists.</p>
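<p>A quick brute-force check of this, with two arbitrarily chosen rank-one integer matrices (the determinant routine is a naive Leibniz expansion): since $\det(A-\lambda B)$ is a polynomial of degree at most $4$ in $\lambda$, vanishing at five points forces it to vanish identically.</p>

```python
from itertools import permutations

def det(M):
    # Leibniz expansion with an inversion-count sign; fine for a 4x4 integer matrix
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def outer(col, row):
    # rank-one matrix: every column is a multiple of `col`
    return [[c * r for r in row] for c in col]

A = outer([1, 2, 3, 4], [1, 5, 7, 2])
B = outer([2, 0, 1, 3], [4, 1, 1, 6])

# evaluate det(A - t*B) at five integer points
vals = [det([[A[i][j] - t * B[i][j] for j in range(4)] for i in range(4)])
        for t in range(5)]
assert vals == [0, 0, 0, 0, 0]
```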
|
215,125 | <p>Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. It's an easy theorem that there is a primitive recursive function $f:\mathbb{N} \rightarrow \mathbb{N}$ such that $Range(f) = \{n \in \mathbb{N} \mid A(n)\}$. I'm wondering if it's known whether either of the following strengthenings of this theorem are true:</p>
<p><strong>Strengthening 1:</strong> Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. Can we find a $c \in \mathbb{N}$ such that $\varphi_c$ is primitive recursive, $\mathbb{N} \models \forall x (A(x) \rightarrow \exists y \varphi_c(y) {\downarrow} = x)$, and $PA \vdash \forall y \exists x (\varphi_c(y) {\downarrow} = x \wedge A(x))$?</p>
<p><strong>Strengthening 2:</strong> Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. Can we find a $c \in \mathbb{N}$ such that $\varphi_c$ is primitive recursive, $PA \vdash \forall x (A(x) \rightarrow \exists y \varphi_c(y) {\downarrow} = x)$, and $PA \vdash \forall y \exists x (\varphi_c(y) {\downarrow} = x \wedge A(x))$?</p>
<p>(In the above, "$\varphi_c$" refers to the (partial) recursive function defined by the Turing machine whose Gödel code is c. "$\varphi_c(y) {\downarrow} = x$" is shorthand for a $\Sigma_1$ formula which says that with input y, the Turing machine with Gödel code c eventually halts and outputs x.)</p>
<p><strong>Pedantic Clarification:</strong> Typically "$\phi_c(x){\downarrow} = y$" is an abbreviation for "$\exists t M(c,x,y,t)$", where (i) $M(c,x,y,t)$ is a $\Delta_0$ formula and (ii) $\mathbb{N} \models M(c,x,y,t)$ if and only if the Turing machine with Gödel code c halts after t steps on input x and produces output y. Let's call any formula $M$ satisfying (i) and (ii) a "computation predicate". By <a href="http://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093893356" rel="nofollow">this paper by H.B. Enderton</a>, Strengthening 2 is <em>false</em> for certain choices of computation predicate.</p>
<p>It follows that any argument for Strengthening 2 which assumes that our choice of $\Delta_0$ computation predicate is arbitrary cannot suffice. We also need that PA proves certain facts about our chosen computation predicate. I suspect that any reasonable construction of a computation predicate will work (e.g., whatever construction is used in your favorite recursion theory book), so my question should be phrased more precisely as: is there a computation predicate such that Strengthenings 1 and 2 hold?</p>
| CooLee | 33,232 | <p>I am also interested in the dependency of the constant in the Harnack inequality.
<a href="https://mathoverflow.net/questions/261113/dependency-of-the-constant-in-the-harnack-inequality">Dependency of the constant in the Harnack inequality</a></p>
|
1,552,775 | <p>In how many ways can 5 blue pens and 6 black pens be distributed among 6 children?</p>
<hr>
<p>To do that I used:
$\text{Coefficient of } x^6 \text{ in } ((1+x+x^2+x^3+x^4+x^5)(1+x+x^2+x^3+x^4+x^5+x^6))$
and got answer = 6</p>
<p>but options given are:</p>
<p>a) 97020</p>
<p>b) 116424</p>
<p>c) 8008</p>
<p>d) 672</p>
<p>How does taking the coefficient give the answer for distribution problems?</p>
| Siddharth Joshi | 288,487 | <p>Instead find the coefficient of $x^6 y^5$ in $(1+x+x^2+x^3+...x^\infty)^6(1+y+y^2+y^3+...y^\infty)^6$, with one factor per child in each product ($x$ tracking the black pens and $y$ the blue pens).</p>
<p>I'll explain to you why taking the coefficient gives the answer for distribution problems.
Consider the expression $(1+x+x^2+x^3+...x^\infty)^n$. You can write this expression as follows:$$(1+x+x^2+x^3+...x^\infty)(1+x+x^2+x^3+...x^\infty)(1+x+x^2+x^3+...x^\infty)...n \ \mathbb{times}$$
Try to expand this expression. If $b$ denotes the coefficient of $x^m$ in this expression, then $$bx^m = \sum_{(i_1, i_2, i_3, \dots, i_n)}^{i_1+i_2+\dots+i_n = m}x^{i_1}x^{i_2}\cdots x^{i_n},$$ where $i_k \in \mathbb{N} \cup \{ 0 \}$ for all $1 \le k \le n$.
<p>If you reflect on this for a while, you'll find that the coefficient $b$ will give you the number of ways in which $m$ can be <strong>sequentially partitioned</strong> into non - negative integers. And hence this coefficient will give you the number of ways in which $m$ objects can be distributed among $n$ distinct sets. </p>
<p>P.S. : By sequentially partitioning, you can interpret that $(i_1, i_2, ... i_n)$ is different from $(i_2, i_1, ...i_n)$</p>
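<p>A quick computational cross-check of the stars-and-bars count (which matches option (b) in the question), computing the coefficients by repeated truncated polynomial multiplication:</p>

```python
def distributions(pens, children):
    # coefficient of x^pens in (1 + x + x^2 + ...)^children
    coeffs = [1] + [0] * pens        # the polynomial "1"
    for _ in range(children):        # multiply by one geometric factor per child
        acc = []
        running = 0
        for c in coeffs:
            running += c             # multiplying by (1 + x + ...) is a prefix sum
            acc.append(running)
        coeffs = acc
    return coeffs[pens]

blue = distributions(5, 6)           # C(10, 5) = 252 ways for the blue pens
black = distributions(6, 6)          # C(11, 6) = 462 ways for the black pens
assert blue * black == 116424        # option (b)
```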
|
222,312 | <p>How would you prove that if you have $x^*$ such that $x^*=\text{pseudoinverse}(A)\, b$, and $Ay=b$, then
$$\Vert x^* \Vert_2 \leq \Vert y \Vert_2$$</p>
| Community | -1 | <p>You essentially want to find the solution to the following optimization problem.
$$\min_{x}\Vert x \Vert_2 \text{ such that } Ax = b$$
Using Lagrange multipliers, we get that
$$\min_{x, \lambda} \dfrac{x^Tx}2 + \lambda^T (Ax - b)$$
Differentiate with respect to $x$ and $\lambda$ to get that
$$x^* = \underbrace{A^T(AA^T)^{-1}}_{\text{pseudoinverse}}b$$</p>
<p><strong>Proof</strong>:
$$\dfrac{d \left(\dfrac{x^Tx}2 + \lambda^T (Ax - b) \right)}{dx} = 0 \implies x^* + A^T \lambda = 0 \implies x^* = -A^T \lambda$$
We also have $$Ax^* = b \implies AA^T \lambda = -b \implies \lambda = - \left( AA^T\right)^{-1}b$$
Hence, $$x^* = \underbrace{A^T(AA^T)^{-1}}_{\text{pseudoinverse}}b.$$
Since the objective is strictly convex and the constraint is affine, this stationary point is the unique global minimizer, so $\Vert x^* \Vert_2 \leq \Vert y \Vert_2$ for every $y$ with $Ay=b$. (Throughout, $A$ is assumed to have full row rank, so that $AA^T$ is invertible and $A^T(AA^T)^{-1}$ is indeed the pseudoinverse.)</p>
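<p>A small exact-arithmetic illustration (the $2\times 3$ matrix here is a hypothetical example with full row rank, chosen just for the check): the pseudoinverse solution has the smallest norm among all solutions $x^* + t\,n$ with $n$ in the null space of $A$.</p>

```python
from fractions import Fraction as F

A = [[F(1), F(0), F(1)],
     [F(0), F(1), F(1)]]
b = [F(1), F(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(r) for r in zip(*M)]

# invert the 2x2 Gram matrix A A^T by the adjugate formula
G = matmul(A, transpose(A))
d = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / d, -G[0][1] / d],
        [-G[1][0] / d, G[0][0] / d]]

# x* = A^T (A A^T)^{-1} b, the minimum-norm solution
xstar = [row[0] for row in matmul(transpose(A), matmul(Ginv, [[v] for v in b]))]

# every solution is x* + t*n, where n spans the null space of A
n = [F(1), F(1), F(-1)]
for t in range(-5, 6):
    y = [x + t * v for x, v in zip(xstar, n)]
    assert [sum(a * yi for a, yi in zip(row, y)) for row in A] == b  # A y = b
    assert sum(v * v for v in y) >= sum(v * v for v in xstar)       # ||y|| >= ||x*||
```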
|
4,482,637 | <p>Is there are a term for a generalized exponential function? The series expansions of sine and cosine look very similar to the exponential function's series expansion</p>
<p><span class="math-container">\begin{align}
& \sum_{n = 0}^{\infty} \frac{x^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{(-x)^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{x^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{x^{2n + 1}}{(2n + 1)!} & \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n + 1}}{(2n + 1)!} \\
& \sum_{n = 0}^{\infty} \frac{x^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 2}}{(3n + 2)!} \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 2}}{(3n + 2)!} \\
& \vdots && \vdots && \vdots
\end{align}</span></p>
<p>and so I was wondering as to whether or not there is a name for all of these types of infinite sums and what properties they all share.</p>
| Empy2 | 81,790 | <p>An integral might run from <span class="math-container">$x=0$</span> to <span class="math-container">$x=X$</span>.</p>
<p>In <span class="math-container">$\int2xdx$</span>, the function is increasing as you go along. So the first <span class="math-container">$2$</span> is there all the way as x goes from 1 to X, but the second <span class="math-container">$2$</span> doesn't start until x is 2, and the final <span class="math-container">$2$</span> doesn't appear until x has already reached X.<br />
In <span class="math-container">$\int2dx+\int2dx+...$</span>, all the <span class="math-container">$2$</span>s are there all the way as x goes from 0 to X.</p>
|
285,975 | <p>I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.</p>
<blockquote>
<p>At a distance of <span class="math-container">$45m$</span> from a traffic light, a car traveling <span class="math-container">$15 m/sec$</span> is brought to a stop at a constant deceleration.</p>
<p>a. What is the value of deceleration?</p>
<p>b. How far has the car moved when its speed has been reduced to <span class="math-container">$3m/sec$</span>?</p>
<p>c. How many seconds would the car take to come to a full stop?</p>
</blockquote>
<p>Can somebody give me some hints as to where I should start? All I know from reading this is that <span class="math-container">$v_0=15\ \text{m/sec}$</span>, and I have no idea what to do with the <span class="math-container">$45m$</span> distance. I can't tell if it starts to slow down when it gets to <span class="math-container">$45m$</span> from the light, or stops <span class="math-container">$45m$</span> from the light.</p>
<hr />
<p>Edit:</p>
<p>I do know that since acceleration is the change in velocity over a change in time, <span class="math-container">$V(t)=\int a\ dt=at+C$</span>, where <span class="math-container">$C=v_0$</span>. Also, <span class="math-container">$S(t)=\int (v_{0}+at)\ dt=s_0+v_0t+\frac{1}{2}at^2$</span>. But I don't see a time variable to plug in to get the answers I need... or am I missing something?</p>
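<p>Edit 2: As a sanity check, the standard constant-acceleration relations pin everything down without needing a time variable in parts (a) and (b), assuming the braking begins right at the <span class="math-container">$45m$</span> mark. Skip this if you only want hints; the numbers below are the full answers.</p>

```python
v0, d_stop = 15.0, 45.0          # m/s, m

# (a) from v^2 = v0^2 + 2*a*d with final v = 0
a = -v0**2 / (2 * d_stop)        # -2.5 m/s^2

# (b) distance covered when the speed has dropped to 3 m/s
v = 3.0
d = (v**2 - v0**2) / (2 * a)     # 43.2 m

# (c) time to a full stop, from v = v0 + a*t with v = 0
t = -v0 / a                      # 6.0 s

assert a == -2.5 and d == 43.2 and t == 6.0
```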
| André Nicolas | 6,312 | <p>For (i), prove first that there is an $N$ such that $f(x)$ is <strong>increasing</strong> in the interval $(N,\infty)$, and greater than $1$. Now suppose $a\gt N$ and $f(a)=m$. Then $f(a+m)\gt m$, and $f(a+m)$ is divisible by $m$. </p>
<p>Note that we used the fact (ii) that you were asked to prove. For the proof of (ii), use the fact that if $a\equiv a'$ and $b\equiv b'$ (both modulo $m$) that $a+b\equiv a'+b'$ and $ab\equiv a'b'$. </p>
|
1,865,086 | <p>$$T_n=2^{-n}$$</p>
<p>How can I tell if this converges? With previous questions I have just let $n = \infty$; however, I'm unsure about this one.</p>
| b00n heT | 119,285 | <p>Hint: a monotone decreasing sequence bounded from below converges.</p>
<p>(Similarly, a monotone increasing sequence bounded from above also converges.)</p>
<p>Of course this is a non-constructive approach, as you only get the existence of a limit, not what the limit actually is. But in some cases this is already sufficient.</p>
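<p>For this particular sequence the hypotheses (and the limit, which is plainly $0$) are easy to check numerically:</p>

```python
T = [2.0**-n for n in range(1, 60)]

# monotone decreasing and bounded below by 0 ...
assert all(T[i] > T[i + 1] for i in range(len(T) - 1))
assert all(t > 0 for t in T)

# ... so it converges; the tail makes the limit 0 plausible
assert T[-1] < 1e-15
```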
|
1,778,037 | <p>$ 9^x-6^x=4^{x+1/2}$, solve for $x$</p>
<p>Please don't solve this problem entirely; I just want some hints. I have tried the substitutions $3^x=b$ and $2^x=a$.</p>
| Shourya Pandey | 331,004 | <p>Try dividing by $9^x$ throughout, and substitute $(\frac {2}{3})^x$ as $y$. Also, notice that $\frac {4}{9} = (\frac {2}{3})^2$.</p>
|
959,410 | <p>I need to evaluate an expression similar to the following:</p>
<p>$\frac{\partial\mathrm{log}(a+b)}{\partial a}$</p>
<p>At this point I don't know how to proceed. $b$ is a constant so there should be some way to eliminate it.
How would you proceed in this case?</p>
<p>Actually, the original expression is much more complicated, and related to multiclass logistic regression, but I wanted to spare you tedious details.</p>
| Joseph | 158,173 | <p>Clearly, the answer is
$\frac{\partial \log(a+b)}{\partial a} = \frac{1}{a+b}.$
As you said, $b$ is a constant, but here it sits inside the argument of the function being differentiated, and so it cannot simply be eliminated before taking the derivative.</p>
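<p>A quick numerical check of this, using a central finite difference (the values of $a$, $b$, and the step size are arbitrary choices):</p>

```python
import math

a, b, h = 2.0, 5.0, 1e-6

# central finite difference of log(a + b) with respect to a
fd = (math.log(a + h + b) - math.log(a - h + b)) / (2 * h)

assert abs(fd - 1 / (a + b)) < 1e-6   # matches 1/(a+b)
```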
|
2,755,091 | <p>I know that I have to prove this with induction. I have proved the first condition, the base case $7^1-1=6$, which is divisible by $6$. But I got stuck on how to proceed with the second condition, the induction step for $k$.</p>
| Peter Szilas | 408,605 | <p>Induction step: $n+1$.</p>
<p>$7^{n+1} -1 = 7\cdot 7^{n} -1=$</p>
<p>$ (6+1)(7^n) -1=$</p>
<p>$6 \cdot 7^n +(7^n -1).$</p>
<p>By hypothesis $(7^n-1)$ is divisible by $6$, hence the above sum is divisible by $6.$</p>
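<p>The identity behind the induction step, and the divisibility claim itself, are easy to spot-check:</p>

```python
for n in range(1, 50):
    # the split used above: 7^(n+1) - 1 = 6*7^n + (7^n - 1)
    assert 7**(n + 1) - 1 == 6 * 7**n + (7**n - 1)
    assert (7**n - 1) % 6 == 0
```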
|
3,621,836 | <p>Let's say I have a large determinant with scalar elements:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot j & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix}$$</span></p>
<p>Is it valid to factor out a term that's common to every element of the determinant? Is the following true:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot j & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix} = x \cdot \begin{vmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p\end{vmatrix}$$</span></p>
| Rezha Adrian Tanuharja | 751,970 | <p>Since <span class="math-container">$\omega,\omega^{2}$</span> are the roots of <span class="math-container">$f(x)=x^{2}+x+1$</span>,</p>
<p><span class="math-container">$$
(2-\omega)(2-\omega^{2})(2-\omega^{19})(2-\omega^{23}) =\left(f(2)\right)^{2}=49
$$</span></p>
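<p>A quick floating-point confirmation (the reduction of the exponents via <span class="math-container">$\omega^3=1$</span> happens implicitly in the arithmetic):</p>

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity, a root of x^2 + x + 1
prod = (2 - w) * (2 - w**2) * (2 - w**19) * (2 - w**23)

# (2 - w)(2 - w^2) = f(2) = 7, and w^19 = w, w^23 = w^2, so the product is 7^2
assert abs(prod - 49) < 1e-8
```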
|
3,238,316 | <p>I think there is a glitch in the <a href="https://en.wikipedia.org/w/index.php?title=Jensen%27s_inequality&oldid=887772941#Proof_1_(finite_form)" rel="nofollow noreferrer">proof by induction</a>. The proof is still valid, but they add an unnecessary assumption:</p>
<p>In the induction step, they choose one of the <span class="math-container">$\lambda_i$</span>'s that is strictly positive (I guess by that, they mean nonzero). Since the sum of the <span class="math-container">$\lambda_i$</span>'s is 1, there must be at least one that is nonzero, that part is valid. And the argument that follows is also perfectly valid.</p>
<p>However, why do we need to pick a nonzero <span class="math-container">$\lambda_i$</span>? Wouldn't the argument work regardless? If <span class="math-container">$\lambda_1 = 0$</span>, the inequality still holds. In other words, the inequality holds regardless of the value of <span class="math-container">$\lambda_1$</span>:
<span class="math-container">$$ \varphi\left( \lambda_1 x_1 + (1 - \lambda_1) \sum_{i=2}^{n+1}\frac{\lambda_i}{1-\lambda_1} x_i \right) ~\leqslant~ \lambda_1 \varphi(x_1) + (1-\lambda_1)\varphi\left( \sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i \right)$$</span> </p>
<p>because <span class="math-container">$\varphi$</span> is convex, period. No requirement on the coefficient being nonzero: according to <a href="https://en.wikipedia.org/wiki/Convex_function#Definition" rel="nofollow noreferrer">Wikipedia's definition of a convex function</a>, it is <span class="math-container">$\forall x_1, x_2 \in X$</span>, <span class="math-container">$\forall \lambda \in [0,1] ~ \cdots$</span></p>
<p>Since the <span class="math-container">$\lambda_i$</span>'s are all nonnegative and their sum is 1, then every <span class="math-container">$\lambda_i \in [0,1]$</span>, so the definition of convexity applies.</p>
<p>Am I missing something?</p>
| Misha Lavrov | 383,078 | <p>You are right that the case <span class="math-container">$\lambda_1 = 0$</span> is fine; the proof should be corrected to assume instead that <span class="math-container">$\lambda_1<1$</span>. (If <span class="math-container">$\lambda_1=1$</span> the inequality is trivial but should be shown differently, since we can't divide by <span class="math-container">$1-\lambda_1$</span>.)</p>
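<p>Indeed, nothing breaks numerically when a weight is zero; for instance, with the (arbitrarily chosen) convex function <span class="math-container">$\varphi(x)=x^2$</span>:</p>

```python
phi = lambda x: x * x                  # a convex function

xs = [3.0, -1.0, 4.0, 2.0]
lambdas = [0.0, 0.5, 0.25, 0.25]       # note lambda_1 = 0, and the weights sum to 1

lhs = phi(sum(l * x for l, x in zip(lambdas, xs)))
rhs = sum(l * phi(x) for l, x in zip(lambdas, xs))
assert lhs <= rhs                      # Jensen holds; no division by 1 - lambda_1 needed
```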
|
244,521 | <p>I cannot prove this statement, I tried to prove by using the definition of open sets however i feel that it is necessary prove it in two directions since it's an iff statement.</p>
<p>The question is, </p>
<p>Let $X$ be a metric space with metric $d$, and let $x_n$ be a sequence of points in $X$. Prove that $x_n\rightarrow a$ if and only if for every open set $U$ with $a\in U$, there is a number N such that whenever n > N we have $x_n\in U$</p>
<p>What I have done is, by using the definition of open sets, $B(a,r)\cap U$ is not an empty set and $r > 0$. So by choosing $r=1$ we obtain $x_1\in B(a,1)$, which intersects with $U$. Secondly I chose $r=1/2$, so $x_2\in B(a,1/2)$, which again intersects with $U$. By continuing like this, eventually I chose $r=1/n$, so $x_n\in B(a,1/n)$, which intersects with $U$. Therefore $d(x_n,a) <\frac{1}{n}$, and I conclude that as $n$ goes to infinity, $x_n\rightarrow a$. </p>
<p>Am I right up to this point? If so, how should I proceed, and what should I do to prove the other direction?</p>
<p>I'm so new, so let me know if I do or did something inconvenient.
Thanks!</p>
| Tom Oldfield | 45,760 | <p>I'm not sure why you've done what you've done, or what it proves.</p>
<p>To prove the if part of your theorem, suppose we have a sequence $x_n$. For any $\epsilon > 0$, there exists $N$ such that $x_n\in B(a,\epsilon)$ $\forall n > N$ (since $B(a,\epsilon)$ is an open set containing $a$). Hence $x_n \rightarrow a$. </p>
<p>For the only if part, suppose $x_n \rightarrow a$. Let $U$ be an open set containing $a$. Then by the definition $\exists r >0$ such that $B(a,r) \subset U$. Then $\exists N$ such that $x_n \in B(a,r)$ $\forall n >N$, since $x_n \rightarrow a$. Hence $x_n \in U$ $\forall n > N$. </p>
|
3,774,263 | <ol>
<li>Is it possible that distance (<span class="math-container">$r$</span>) or angle (<span class="math-container">$θ$</span>) contains Imaginary or Complex number?</li>
<li>If the answer is yes, how can I convert a number like that (Polar with complex argument) to Rectangular number? <br />
For example:
<strong><span class="math-container">$(r,θ) = (5+2i, 3+4i)$</span></strong> how to convert to <strong><span class="math-container">$x+yi$</span></strong> ? <br /> <br />
Thank you.</li>
</ol>
| Aaron | 9,863 | <p>Here are two perspectives that are slightly different from the answers already given.</p>
<p>Since <span class="math-container">$X$</span> is of dimension greater than <span class="math-container">$1$</span>, we can find two linearly independent vectors <span class="math-container">$v,w\in X$</span>. If <span class="math-container">$x^*(v)=0$</span> or <span class="math-container">$x^*(w)=0$</span>, then we are done. Otherwise <span class="math-container">$x^*(x^*(v)w-x^*(w)v)=x^*(v)x^*(w)-x^*(w)x^*(v)=0$</span>.</p>
<p>Alternatively, by the first isomorphism theorem, <span class="math-container">$\operatorname{im}(x^*)\cong X/ker(x^*)$</span>, and since <span class="math-container">$\dim(\operatorname{im} x^*)\leq 1$</span> and <span class="math-container">$\dim X > 1$</span>, we must have that <span class="math-container">$\ker(x^*)$</span> is nontrivial.</p>
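<p>The first construction is easy to test concretely; here is a small check on <span class="math-container">$\mathbb{R}^3$</span> (the functional <span class="math-container">$x^*$</span> below is an arbitrary sample choice):</p>

```python
def xstar(v):                       # a sample nonzero functional on R^3
    return v[0] + 2 * v[1]

v = (1.0, 0.0, 0.0)
w = (0.0, 1.0, 0.0)                 # linearly independent of v

# the combination x*(v) w - x*(w) v always lies in the kernel
z = tuple(xstar(v) * wi - xstar(w) * vi for vi, wi in zip(v, w))
assert xstar(z) == 0
assert any(zi != 0 for zi in z)     # and is nonzero, since v, w are independent
```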
|
1,726,745 | <p>I am looking for information in regards to a couple particular functions: </p>
<p>1) $P(x)=\sum_{p\in\mathbb{P}}\frac{x^p}{p!}$</p>
<p>2) $Q(x)=\sum_{p\not\in\mathbb{P}}\frac{x^p}{p!}$ (assuming $0, 1$ are included powers in the series...)</p>
<p>3) $R(x)=\frac{1}{P(x)}$</p>
<p>4) $S(x)=\frac{1}{Q(x)}$</p>
<p>I don't know if there is much literature on these, since I don't know if they are known, unknown, or what they are called.</p>
| Plutoro | 108,709 | <p>From the comment chain on the question, the OP says that some plots would be helpful, so though I cannot answer this question, this is what I can say.</p>
<p>Though finding an analytic expression for these seems a monumental task, approximating them in the first several terms or so is quite easy. I evaluated each of these sums for $0\leq p\leq 41$, and we can estimate their accuracy by evaluating $err(x)=\left|e^x-\sum_{i=0}^{41} x^i/i!\right|$, and noting that if $|x|<4$, $err(x)$ is smaller than $10^{-10}$, so the following graphs can be judged as somewhat accurate.</p>
<p><a href="https://i.stack.imgur.com/zCoV0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zCoV0.jpg" alt="P(x)"></a></p>
<p><a href="https://i.stack.imgur.com/dV18s.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dV18s.jpg" alt="Q(x)"></a></p>
<p><a href="https://i.stack.imgur.com/93WqZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/93WqZ.jpg" alt="R(x)"></a></p>
<p><a href="https://i.stack.imgur.com/fGxWK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fGxWK.jpg" alt="S(x)"></a></p>
<p>There are some interesting features. For one, $P(x)$ and $Q(x)$ both have two roots. These are at $0$ and about $-2.301751$ for $P$, and about $-2.24203$ and $-1.05319$ for $Q$. We can also look at a plot of $P$ and $Q$ together with $e^x$:</p>
<p><a href="https://i.stack.imgur.com/hXyDU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hXyDU.jpg" alt="all three"></a></p>
<p>Note that there are intersections between $P(x)$ and $Q(x)$ at about $2.06337$, $-0.789488$, and $-2.27713$. It is clear in the plot that $P(x)+Q(x)=e^x$.</p>
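<p>For anyone who wants to reproduce these numbers without Mathematica, here is a small Python sketch (not part of the original answer) using the same degree-41 truncation:</p>

```python
from math import factorial, exp

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def P(x):
    # truncated sum over prime exponents p <= 41
    return sum(x**p / factorial(p) for p in range(42) if is_prime(p))

def Q(x):
    # complementary sum over non-prime exponents (0 and 1 included)
    return sum(x**p / factorial(p) for p in range(42) if not is_prime(p))

def bisect(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

assert abs(P(1.7) + Q(1.7) - exp(1.7)) < 1e-9   # P + Q is the truncated e^x
print(bisect(P, -3.0, -1.0))                     # the nonzero root of P, close to -2.3
```

<p>The same bisection applied to $Q$ recovers its two negative roots.</p>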
|
13,861 | <p>I am currently working with a weighted adjacency matrix for a directed graph, and it contains several 0 columns and rows. With the unaltered matrix, I am able to monitor the relations between vertices with,</p>
<pre><code>TableForm[Normal @ WeightedAdjacencyMatrix[graph],
TableHeadings -> {a = VertexList[graph], a}]
</code></pre>
<p>This outputs a table with the corresponding vertex list labeling the rows and columns. I want to delete the 0 rows and columns while altering the labels to reflect the change. My matrix is currently $85\times 85$, and eliminating the necessary rows and columns reduces the size to $77\times 38$. I could theoretically go through by hand and track the eliminated entries, but that sounds way too time consuming for something that I'm sure has a simple solution. Any help is appreciated. </p>
| Dr. belisarius | 193 | <pre><code>graph= Graph[Range@5 , {1 -> 5, 5 -> 3, 3 -> 1}, EdgeWeight-> RandomInteger[100, 3]]
shAdj[graph_] :=
Grid[Transpose[
Select[Transpose[
Select[Join[{Join[{""}, VertexList[graph]]},
Transpose[Join[{VertexList[graph]}, WeightedAdjacencyMatrix[graph]]]],
Total@Rest@# != 0 &]], Total@Rest@# != 0 &]],
Alignment -> Right, Dividers -> {{2 -> Red}, {2 -> Red}}];
shAdj[graph]
</code></pre>
<p><img src="https://i.stack.imgur.com/Joahc.png" alt="Mathematica graphics">
<img src="https://i.stack.imgur.com/FRLZE.png" alt="Mathematica graphics"> </p>
|
77,290 | <p>How would you determine all integers $m$ such that the following is true? </p>
<p>$$\frac{1}{m}=\frac{1}{\lfloor 2x \rfloor}+\frac{1}{\lfloor 5x \rfloor} .$$</p>
<p>Note that $\lfloor \cdot \rfloor$ means the greatest integer function. Also, $x$ must be a positive real number.</p>
| N. S. | 9,176 | <p>Here is an idea, the computations seem too long to actually try.</p>
<p>Naive approach: forget the integer part. We need then $m=\frac{10x}{7}$ or $x = \frac{7m}{10}$.</p>
<p>Now the non-naive approach: try $x = \frac{7m+k}{10}$ where $k$ is small enough; $0 \leq k \leq 9$ (or $6$) should work, depending on what $7m$ is modulo $10$. A case-by-case analysis might work, but the computations are too long to try by hand.</p>
<p>It seems to me that this idea should work, but it could simply lead to a waste of time.</p>
<p>BTW: it is probably easy to argue that $x = \frac{7y+k}{10} + \epsilon$ for some integers $y,k$ with $k$ "small" and $0< \epsilon <\frac{1}{10}$, and then one can argue that we can ignore the $\epsilon$ part.</p>
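<p>The case analysis can be delegated to a short search (a sketch, not part of the original answer): since $\lfloor 2x\rfloor$ and $\lfloor 5x\rfloor$ are constant on each interval $[k/10,(k+1)/10)$, checking $x=k/10$ for every $k$ covers all cases.</p>

```python
# Brute-force the possible values of m = 1 / (1/floor(2x) + 1/floor(5x)).
ms = set()
for k in range(1, 5001):                  # x = k/10 up to x = 500
    a, b = k // 5, k // 2                 # floor(2x) and floor(5x) at x = k/10
    if a == 0 or b == 0:
        continue                          # reciprocal undefined
    if (a * b) % (a + b) == 0:
        ms.add(a * b // (a + b))
print(sorted(ms)[:5])                     # smallest values of m that occur
```

<p>Within this range only $m=3$ and the positive multiples of $10$ show up, which matches the modular constraints coming from the $\frac{7m+k}{10}$ substitution.</p>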
|
77,290 | <p>How would you determine all integers $m$ such that the following is true? </p>
<p>$$\frac{1}{m}=\frac{1}{\lfloor 2x \rfloor}+\frac{1}{\lfloor 5x \rfloor} .$$</p>
<p>Note that $\lfloor \cdot \rfloor$ means the greatest integer function. Also, $x$ must be a positive real number.</p>
| Steven Alexis Gregory | 75,410 | <p>Consider the reciprocal equation
$$ m = \dfrac{\lfloor 2x \rfloor \cdot \lfloor 5x \rfloor}
{\lfloor 2x \rfloor + \lfloor 5x \rfloor}$$</p>
<p>Letting $x=n+\delta$ where $0 \le \delta < 1$, we get</p>
<p>$$ m = \dfrac{(2n + \lfloor 2\delta \rfloor)(5n + \lfloor 5\delta \rfloor)}
{7n + \lfloor 2\delta \rfloor + \lfloor 5\delta \rfloor}$$</p>
<p>We list the possible values of m with respect to $\delta$.</p>
<p>\begin{array}{|c|c|c|c|c|c|}
\delta \in & \lfloor 2\delta \rfloor & \lfloor 5\delta \rfloor
& \lfloor 2\delta \rfloor + \lfloor 5\delta \rfloor
& \lfloor 2\delta \rfloor \cdot \lfloor 5\delta \rfloor
& m \\
\hline
[ 0, 0.2) & 0 & 0 & 0 & 0 & \dfrac{10n}{7}\\
[0.2, 0.4) & 0 & 1 & 1 & 0 & \dfrac{10n^2+2n}{7n+1}\\
[0.4, 0.5) & 0 & 2 & 2 & 0 & \dfrac{10n^2+4n}{7n+2}\\
[0.5, 0.6) & 1 & 2 & 3 & 2 & \dfrac{10n^2+9n+2}{7n+3}\\
[0.6, 0.8) & 1 & 3 & 4 & 3 & \dfrac{10n^2+11n+3}{7n+4}\\
[0.8, 1.0) & 1 & 4 & 5 & 4 & \dfrac{10n^2+13n+4}{7n+5}\\
\hline
\end{array}</p>
<p>Now it's just a matter of finding when those rational expressions take integer values.</p>
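<p>One way to finish that last step (a sketch, not part of the original answer): test each row's $m$-expression for integrality over a range of $n$ (the bound $n\le 200$ is arbitrary). Within this range only the first and third rows ever produce integers, giving $m=3$ (from $n=2$, $\delta\in[0.4,0.5)$) and $m=10,20,30,\dots$ (from $7\mid n$, $\delta\in[0,0.2)$).</p>

```python
# Each row is (delta-interval label, n -> (numerator, denominator) of m).
rows = [
    ("[0, 0.2)",   lambda n: (10 * n, 7)),                     # 10n/7 reduced
    ("[0.2, 0.4)", lambda n: (10 * n * n + 2 * n, 7 * n + 1)),
    ("[0.4, 0.5)", lambda n: (10 * n * n + 4 * n, 7 * n + 2)),
    ("[0.5, 0.6)", lambda n: (10 * n * n + 9 * n + 2, 7 * n + 3)),
    ("[0.6, 0.8)", lambda n: (10 * n * n + 11 * n + 3, 7 * n + 4)),
    ("[0.8, 1.0)", lambda n: (10 * n * n + 13 * n + 4, 7 * n + 5)),
]
hits = [(label, n, num // den)
        for label, row in rows
        for n in range(1, 201)
        for num, den in [row(n)]
        if num % den == 0]
print(hits[:3])
```
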
|
254,926 | <p>I was working on a problem involving perturbation methods and it asked me to sketch the graph of $\ln(x) = \epsilon x$ and explain why it must have 2 solutions. Clearly there is a solution near $x=1$ which depends on the value of $\epsilon$, but I fail to see why there must be a solution near $x \rightarrow \infty$. It was my understanding that $\ln x$ has no horizontal asymptote and continues to grow indefinitely, where for really small values of $\epsilon, \epsilon x$ should grow incredibly slowly. How can I 'see' that there are two solutions?</p>
<p>Thanks!</p>
| ncmathsadist | 4,154 | <p>Put $$f(x) = {\ln(x)\over x}.$$<br>
Then you have
$$f'(x) = {1 - \ln(x)\over x^2}.$$
When $x > e$, the function $f$ decreases; in fact it is easy to see it decreases to zero as $x\to\infty$. When $x < e$, $f$ increases. You can see that it has the $y$-axis as a vertical asymptote. </p>
<p>The graph has a global maximum at $x = e$; $f(e) = 1/e$. Note that this function is defined only when $x > 0$. </p>
<p>Now let $0 <\lambda < 1/e$. The graph will strike the horizontal line $y = \lambda$ once on $(0,e)$ and once on $(e,\infty)$. </p>
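<p>A numerical illustration (not part of the original answer): for a small $\lambda=\epsilon\in(0,1/e)$, bisection locates the root of $\ln x=\epsilon x$ on each side of $e$.</p>

```python
from math import log, e

eps = 0.01                       # sample value, well below 1/e
g = lambda x: log(x) - eps * x   # roots of g are the intersection points

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

r1 = bisect(g, 1.0, e)           # small root, just above 1
r2 = bisect(g, e, 1000.0)        # large root, where eps*x finally catches ln(x)
print(r1, r2)
```

<p>Shrinking <code>eps</code> pushes the second root far to the right, exactly as the decay of $\ln(x)/x$ to zero predicts.</p>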
|
254,926 | <p>I was working on a problem involving perturbation methods and it asked me to sketch the graph of $\ln(x) = \epsilon x$ and explain why it must have 2 solutions. Clearly there is a solution near $x=1$ which depends on the value of $\epsilon$, but I fail to see why there must be a solution near $x \rightarrow \infty$. It was my understanding that $\ln x$ has no horizontal asymptote and continues to grow indefinitely, where for really small values of $\epsilon, \epsilon x$ should grow incredibly slowly. How can I 'see' that there are two solutions?</p>
<p>Thanks!</p>
| M. Strochyk | 40,362 | <p>For all $\varepsilon>0$ using L'Hospital's rule
$$\lim\limits_{x \to +\infty} {\dfrac{\varepsilon x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{1}{\frac{1}{x}}}=+\infty.$$</p>
|
3,792,983 | <p>As far as I can tell the definition of a source and a sink respectively are given in terms of the divergence operator.</p>
<p>That is, given a vector field <span class="math-container">$\vec{D}$</span>, it has a <em>source</em> in point <span class="math-container">$P$</span> if its divergence <span class="math-container">$\text{div}\vec{D}$</span> is pozitive in <span class="math-container">$P$</span> or a <em>sink</em> if it's negative. For example, in electromagnetism, one says <span class="math-container">$\text{div}\vec{D} = \rho_v$</span> where <span class="math-container">$\rho_v$</span> is the volume charge density and <span class="math-container">$\vec{D}$</span> is the electric flux density.</p>
<p>But let's say <span class="math-container">$\vec{D}$</span> is given by a positive point charge <span class="math-container">$q$</span> located at <span class="math-container">$(0,0,0)$</span> which creates the field</p>
<p><span class="math-container">$$\vec{D} = \text{const} \frac{\vec{R}}{|\vec{R}|^3}$$</span></p>
<p>where <span class="math-container">$\vec{R}=x\vec{i}+y\vec{j}+z\vec{k}$</span>.</p>
<p>In this case, <span class="math-container">$\text{div}\vec{D}=0$</span> everywhere except at the origin (where the field is not defined), however the origin is a sort of a source as the field "emerges" from there and the net flux over each surface enclosing the charge is positive.</p>
<p>My question is: are there any other definitions of a source and sink? Possibly some that are a bit more general and encompass more particular cases such as the one I've last mentioned?</p>
| glS | 173,147 | <p>Here's a slight rewording of <a href="https://math.stackexchange.com/a/3793101/173147">the other answer</a>.</p>
<p>I want to prove that a closed, convex, nonempty <span class="math-container">$A\subset\mathbb R^n$</span> which contains no lines, always contains at least an extreme points.</p>
<p>The <span class="math-container">$\mathbb R^1$</span> case is trivial: the only possible <span class="math-container">$A$</span> are finite closed intervals or infinite segments of the form <span class="math-container">$[a,\infty)$</span> and <span class="math-container">$(-\infty,a]$</span>.
Let us, therefore, assume that the statement is true for <span class="math-container">$A\subset\mathbb R^{n-1}$</span>.</p>
<p>Let <span class="math-container">$x\in A$</span> be an arbitrary point, and let <span class="math-container">$L$</span> be some line passing through <span class="math-container">$x$</span>. Thus <span class="math-container">$x\in L$</span>, and by hypothesis <span class="math-container">$L\not\subset A$</span>. There will then be some <span class="math-container">$y\in L\setminus A$</span>. Let then <span class="math-container">$z\in L\cap \operatorname{conv}(\{x,y\})$</span> be an element on the boundary of <span class="math-container">$A$</span>, let <span class="math-container">$H$</span> be the supporting hyperplane for <span class="math-container">$A$</span> passing through <span class="math-container">$z$</span>, and consider the set <span class="math-container">$A_H\equiv A\cap H$</span>. Here's a representation of this construction in <span class="math-container">$\mathbb R^2$</span>:</p>
<img src="https://i.stack.imgur.com/uWy5C.png" width="300">
<p>In this simple case, <span class="math-container">$H$</span> must be a line and thus <span class="math-container">$A_H\subset\mathbb R^1$</span> contains an extreme point as per the induction hypothesis (in this particular case <span class="math-container">$A_H=\{z\}$</span>).
More generally, <span class="math-container">$A_H$</span> will be closed, convex, nonempty subset of <span class="math-container">$\mathbb R^{n-1}$</span>, and thus contain extreme points.</p>
<p>Now it only remains to prove that an extreme point of <span class="math-container">$A_H$</span> is also an extreme point for <span class="math-container">$A$</span>. In other words, we must prove that if <span class="math-container">$p\in A_H$</span> then <span class="math-container">$p\notin \operatorname{conv}(A\setminus A_H)$</span>.
For the purpose, we remember that <span class="math-container">$A_H$</span> is defined as the intersection between <span class="math-container">$A$</span> and a hyperplane, which means that there is some <span class="math-container">$\eta\in\mathbb R^n$</span> and <span class="math-container">$\alpha\in\mathbb R$</span> such that, defining <span class="math-container">$f(\xi)\equiv \langle \eta,\xi\rangle$</span>, we have <span class="math-container">$f(\xi)\le \alpha$</span> for all <span class="math-container">$\xi\in A$</span>, and <span class="math-container">$f(\xi)=\alpha$</span> for all <span class="math-container">$\xi\in A_H$</span>.</p>
<p>But then, if <span class="math-container">$p\in A_H$</span> were a convex combination of elements of <span class="math-container">$A$</span>, <span class="math-container">$p=\sum_k \lambda_k a_k$</span> with <span class="math-container">$a_k\in A, \sum_k\lambda_k=1, \lambda_k\ge0$</span>, then
<span class="math-container">$$\sum_k \lambda_k f(a_k) = f(p)= \alpha,$$</span>
which is only possible if <span class="math-container">$f(a_k)=\alpha$</span> for all <span class="math-container">$k$</span>, <em>i.e.</em> if <span class="math-container">$a_k\in A_H$</span>.</p>
|
3,624,732 | <p>I'm studying logical operators for school and there's a weird question that keeps bugging me even though it seems pretty basic.
<br>
I was asked to evaluate the proposition <strong>p -> q -> r</strong>, with p, r False and q True.
<br>
I tried evaluating it from left to right like this: <strong>( ( p -> q ) -> r )</strong> and got the wrong answer.
<br>
Then, I checked my result with an online tool at <a href="https://web.stanford.edu/class/cs103/tools/truth-table-tool/" rel="nofollow noreferrer">https://web.stanford.edu/class/cs103/tools/truth-table-tool/</a>, which evaluates the proposition from right to left like this: <strong>( p -> ( q -> r ) )</strong> (you can see in <a href="https://i.stack.imgur.com/X5fnm.png" rel="nofollow noreferrer">this picture</a>). I tried calculating the result again with this order and it was accepted as the right answer!
<br>
That's really odd because my lecturer said that if operators are at the same level then the proposition should be evaluated from left to right. Have I misunderstood something?</p>
| Wuestenfux | 417,848 | <p>Well, the operations <span class="math-container">$\wedge,\vee,\Leftrightarrow$</span> are left-associative while the operation <span class="math-container">$\Rightarrow$</span> is right-associative.
So</p>
<p><span class="math-container">$[p\Rightarrow q\Rightarrow r ]\Longleftrightarrow [p\Rightarrow (q\Rightarrow r)].$</span></p>
|
3,001,860 | <p>I am looking for a <span class="math-container">$u(x)$</span> and an <span class="math-container">$f$</span> that make the answer to <span class="math-container">$$u''''(x)-u''(x)+u(x)=f(x)$$</span> where <span class="math-container">$$u'(-1)=u'(1)=u(1)=u(-1)=0$$</span> reasonably short.</p>
<p>I have some numerical code I am trying to test but I can't seem to find a nice solution to this problem. </p>
| Aleksas Domarkas | 562,074 | <p>If
<span class="math-container">$$u(x)={{\cos{\left( \frac{\pi x}{2}\right) }}^{2}}$$</span>
then
<span class="math-container">$$f(x)=(\pi^4+\pi^2+1){{\cos{\left( \frac{\pi x}{2}\right) }}^{2}}-\frac{\pi^4}{2}-\frac{\pi^2}{2}.$$</span></p>
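<p>A quick numerical check of this pair (a sketch, not part of the original answer; the closed forms for $u''$ and $u''''$ below follow from $u=(1+\cos\pi x)/2$):</p>

```python
from math import cos, pi

def u(x):  return cos(pi * x / 2) ** 2        # = (1 + cos(pi*x)) / 2
def u2(x): return -pi**2 * cos(pi * x) / 2    # u''
def u4(x): return  pi**4 * cos(pi * x) / 2    # u''''

def f(x):
    return (pi**4 + pi**2 + 1) * cos(pi * x / 2) ** 2 - pi**4 / 2 - pi**2 / 2

# u'''' - u'' + u == f at sample points, and u vanishes at x = -1, 1
for x in (-1.0, -0.3, 0.0, 0.25, 0.8, 1.0):
    assert abs(u4(x) - u2(x) + u(x) - f(x)) < 1e-9
assert abs(u(-1.0)) < 1e-12 and abs(u(1.0)) < 1e-12

# u'(+-1) = 0, checked with a central difference
h = 1e-5
for x in (-1.0, 1.0):
    assert abs((u(x + h) - u(x - h)) / (2 * h)) < 1e-6
```
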
|
2,838,537 | <p>Is the function<br>
$$f(x)=\begin{cases} x+0.2x^2 \sin (1/x) & x \ne 0\\ 0 & x=0 \end{cases}$$invertible in a neighborhood of origin?</p>
<p>I just know that $f$ is continuous on $\mathbb{R}$ and
$$f'(x)=\begin{cases}
1+0.4x\sin\left(\frac{1}{x}\right)-0.2\cos\left(\frac{1}{x}\right)& x \ne 0\\
1 & x=0, \end{cases}$$ </p>
<p>$f'$ is not continuous at $0$, it's not satisfies conditions in Inverse Theorem.</p>
<p>But when I've tried to prove like this:</p>
<p>Choose $B=(-0.1,0.1)\subset \mathbb{R}$; then $f(x) \ne 0$ for all $x \in B\setminus\{0\}$. Assume $f'(x) \ne 0$; we have (with $r=0.1$) $$\|f'(x)-I\|=\left|0.4x\sin\left(\frac{1}{x}\right)-0.2\cos\left(\frac{1}{x}\right)\right|\leq 0.4r+0.2=0.24<\frac{1}{2}.$$
Let $g(x)=x-f(x)$, so $\|g'(x)\|<\frac{1}{2}$. For all $x_1,x_2\in B$ there exists $c\in(x_1,x_2)$ such that $$g(x_2)-g(x_1)=g'(c)(x_2-x_1).$$
Then $$|g(x_2)-g(x_1)|<\frac{1}{2}|x_2-x_1|.$$
For all $y \in (-0.05,0.05)$ let $g_y(x)=g(x)+y$, $x \in B$. As $g(0)=0$, $$|g_y(x)|=|g(x)+y|\leq|g(x)|+|y|<0.1,\quad\forall x \in B.$$
So $g_y(B)\subset B$. Moreover, $$|g_y(x_2)-g_y(x_1)|=|g(x_2)-g(x_1)|<\frac{1}{2}|x_2-x_1|,$$ so $g_y$ is a contraction, and there exists a unique $x_y \in B$ such that $x_y=g_y(x_y)$, i.e. $f(x_y)=y$.
Let $x_y:=f^{-1}(y)$. So $f^{-1}(y)$ exists for all $y \in (-0.05,0.05)$, and I can check that $f^{-1}(y)$ is continuous. I'm confused about this.
Help me, thank you so much.</p>
| Fred | 380,717 | <p>Hint: we have $f'(x)> 0.4>0$ for all $x$.</p>
<p>Conclusion ?</p>
|
2,838,537 | <p>Is the function<br>
$$f(x)=\begin{cases} x+0.2x^2 \sin (1/x) & x \ne 0\\ 0 & x=0 \end{cases}$$invertible in a neighborhood of origin?</p>
<p>I just know that $f$ is continuous on $\mathbb{R}$ and
$$f'(x)=\begin{cases}
1+0.4x\sin\left(\frac{1}{x}\right)-0.2\cos\left(\frac{1}{x}\right)& x \ne 0\\
1 & x=0, \end{cases}$$ </p>
<p>$f'$ is not continuous at $0$, it's not satisfies conditions in Inverse Theorem.</p>
<p>But when I've tried to prove like this:</p>
<p>Choose $B=(-0.1,0.1)\subset \mathbb{R}$; then $f(x) \ne 0$ for all $x \in B\setminus\{0\}$. Assume $f'(x) \ne 0$; we have (with $r=0.1$) $$\|f'(x)-I\|=\left|0.4x\sin\left(\frac{1}{x}\right)-0.2\cos\left(\frac{1}{x}\right)\right|\leq 0.4r+0.2=0.24<\frac{1}{2}.$$
Let $g(x)=x-f(x)$, so $\|g'(x)\|<\frac{1}{2}$. For all $x_1,x_2\in B$ there exists $c\in(x_1,x_2)$ such that $$g(x_2)-g(x_1)=g'(c)(x_2-x_1).$$
Then $$|g(x_2)-g(x_1)|<\frac{1}{2}|x_2-x_1|.$$
For all $y \in (-0.05,0.05)$ let $g_y(x)=g(x)+y$, $x \in B$. As $g(0)=0$, $$|g_y(x)|=|g(x)+y|\leq|g(x)|+|y|<0.1,\quad\forall x \in B.$$
So $g_y(B)\subset B$. Moreover, $$|g_y(x_2)-g_y(x_1)|=|g(x_2)-g(x_1)|<\frac{1}{2}|x_2-x_1|,$$ so $g_y$ is a contraction, and there exists a unique $x_y \in B$ such that $x_y=g_y(x_y)$, i.e. $f(x_y)=y$.
Let $x_y:=f^{-1}(y)$. So $f^{-1}(y)$ exists for all $y \in (-0.05,0.05)$, and I can check that $f^{-1}(y)$ is continuous. I'm confused about this.
Help me, thank you so much.</p>
| Crostul | 160,300 | <p>Your computations of $f'$ tell you that in the set $[-1;0) \cup (0;1]$ you have
$$f'(x) \ge 1-0.2-0.4=0.4$$</p>
<p>Note that
$f(x)=x(1+0.2x \sin (1/x))$
is equal to $x$ times a strictly positive quantity. Thus </p>
<ol>
<li>$f(x) <0$ if and only if $x<0$;</li>
<li>$f(x) >0$ if and only if $x>0$;</li>
<li>$f(x) =0$ if and only if $x=0$</li>
</ol>
<p>Now, suppose by contradiction that $f(x)$ is not invertible on $[-1;1]$. This means that there exist two distinct numbers $a < b$ such that $f(a)=f(b)$. The observation on the sign made above tells us that $a,b \neq 0$ and they have the same sign.</p>
<p>Now apply Rolle's theorem on the interval $[a;b]$ (which does not contain $0$!): there exists some $c$ such that
$$f'(c)=0$$
and this is a contradiction, since on $[a;b]$ you have $$f'(c) \ge 0.4$$</p>
|
4,065,655 | <p>If <span class="math-container">$A B = C$</span> where matrix <span class="math-container">$C$</span> is stochastic (all entries are positive and all rows add to <span class="math-container">$1$</span>) then is it necessary that both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be stochastic?</p>
| Peter Melech | 264,821 | <p>First note that
<span class="math-container">$$\int_0^1x^{t-1}dx$$</span>
converges and then use that <span class="math-container">$|x^{t-1}e^{-x}|\leq x^{t-1}$</span> for <span class="math-container">$x\in[0,1]$</span>.
Then see that
<span class="math-container">$$\int_1^{\infty}x^{-2}dx$$</span>
converges and use that
<span class="math-container">$$\frac{x^{t-1}e^{-x}}{x^{-2}}=\frac{x^{t+1}}{e^x}\rightarrow 0,x\rightarrow \infty.$$</span></p>
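<p>The integral being bounded here is the Gamma-function integral $\int_0^\infty x^{t-1}e^{-x}\,dx$; a quick numerical sketch (not part of the original answer, using an arbitrary sample value $t=2.5$) confirms that the convergent integral matches <code>math.gamma</code>:</p>

```python
from math import exp, gamma

def integrate(f, a, b, n=200000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

t = 2.5   # sample exponent; any t > 0 makes the integral converge
approx = integrate(lambda x: x ** (t - 1) * exp(-x), 0.0, 40.0)
assert abs(approx - gamma(t)) < 1e-4   # tail beyond 40 is negligible by the x^{-2} comparison
```
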
|
1,597,332 | <p>Get the following quadratic form:</p>
<p>$$Q(x)=x_1^{2}+x_3^2+4x_1x_2-4x_1x_3 $$</p>
<p>to obtain the canonical form I tried the following: </p>
<p>$$Q(x)=x_1^{2}+x_3^{2}+4x_1x_2-4x_1x_3=4(x_1^{2})+(x_3^{2})-4x_1x_3-3x_1x_1+4x_1x_2-(4/3)x_2x_2+(4/3)x_2x_2=(2x_1-x_3)^2-... $$</p>
<p>There I stopped because I remembered that I shouldn't change the coefficient of $x_1^2$. </p>
<p>From here I don't know how to continue. </p>
<p>Thank you in advance for your understanding; I look forward to your answer!</p>
| Rene Schipperus | 149,912 | <p>I am not at all sure this is what you are looking for, but don't you complete the square:</p>
<p>$$x_1^{2}+x_3^2+4x_1x_2-4x_1x_3=x_1^{2}+(4x_2-4x_3)x_1 +x_3^2$$
$$=x_1^{2}+(4x_2-4x_3)x_1 +(2x_2-2x_3)^2 -(2x_2-2x_3)^2 + x_3^2$$</p>
<p>$$=(x_1+2x_2-2x_3)^2 -(2x_2-2x_3)^2 + x_3^2$$</p>
<p>$$=(x_1+2x_2-2x_3)^2 -4(x_2^2-2x_2x_3)-3x_3^2$$</p>
<p>$$=(x_1+2x_2-2x_3)^2 -4(x_2-x_3)^2 +x_3^2$$</p>
<p>If I have not made a mistake this is the diagonalized form. That is $x^2-4y^2+z^2$ which is unique.</p>
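<p>A quick numerical check of the completed-square identity (note the middle square is $4(x_2-x_3)^2$; this sketch is not part of the original answer):</p>

```python
import random

def Q(x1, x2, x3):
    return x1**2 + x3**2 + 4 * x1 * x2 - 4 * x1 * x3

def diagonal_form(x1, x2, x3):
    return (x1 + 2 * x2 - 2 * x3) ** 2 - 4 * (x2 - x3) ** 2 + x3**2

random.seed(0)
for _ in range(200):
    v = [random.randint(-50, 50) for _ in range(3)]
    assert Q(*v) == diagonal_form(*v)   # identity holds at random integer points
```
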
|
1,597,332 | <p>Get the following quadratic form:</p>
<p>$$Q(x)=x_1^{2}+x_3^2+4x_1x_2-4x_1x_3 $$</p>
<p>to obtain the canonical form I tried the following: </p>
<p>$$Q(x)=x_1^{2}+x_3^{2}+4x_1x_2-4x_1x_3=4(x_1^{2})+(x_3^{2})-4x_1x_3-3x_1x_1+4x_1x_2-(4/3)x_2x_2+(4/3)x_2x_2=(2x_1-x_3)^2-... $$</p>
<p>There I stopped because I rememebered that I shouldn't change the coefficient of $x_1^2$. </p>
<p>From here I don't know how to continue. </p>
<p>I thank you in anticipation for your understanding and I wait forward your answer!</p>
| Will Jagy | 10,400 | <p>There is a method I asked about at
<a href="https://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr">reference for linear algebra books that teach reverse Hermite method for symmetric matrices</a><br>
The main advantage is that it is a recipe with matrices, no need to carry variable names. The main disadvantage is the need to invert one matrix at the end; however, the matrix has determinant $\pm 1$ and may very well be upper triangular (it is this time).</p>
<p>It leads to $$ x^2 + z^2 + 4 xy - 4 zx = (x +2y-2z)^2 - 4 (y-z)^2 + z^2 $$</p>
<p>and comes from this matrix stuff; I did it first by hand, it did work, but I thought i would check everything. The Pari code is not quite as readable as Latex but is not too bad.</p>
<pre><code>parisize = 4000000, primelimit = 500509
? m = [ 1,2,-2; 2,0,0; -2,0,1]
%2 =
[1 2 -2]
[2 0 0]
[-2 0 1]
? m - mattranspose(m)
%3 =
[0 0 0]
[0 0 0]
[0 0 0]
? p1 = [1,-2,2; 0,1,0; 0,0,1]
%4 =
[1 -2 2]
[0 1 0]
[0 0 1]
? m1 = mattranspose(p1) * m * p1
%5 =
[1 0 0]
[0 -4 4]
[0 4 -3]
? p2 = [ 1,0,0; 0,1,1; 0,0,1]
%6 =
[1 0 0]
[0 1 1]
[0 0 1]
? d = mattranspose(p2) * m1 * p2
%7 =
[1 0 0]
[0 -4 0]
[0 0 1]
? p = p1 * p2
%8 =
[1 -2 0]
[0 1 1]
[0 0 1]
? matdet(p)
%9 = 1
? q = matadjoint(p)
%10 =
[1 2 -2]
[0 1 -1]
[0 0 1]
? confirm = mattranspose(q) * d * q
%12 =
[1 2 -2]
[2 0 0]
[-2 0 1]
? m
%13 =
[1 2 -2]
[2 0 0]
[-2 0 1]
? m - confirm
%14 =
[0 0 0]
[0 0 0]
[0 0 0]
?
? ( x + 2 * y - 2 * z)^2 - 4 * (y - z)^2 + z^2
%1 = x^2 + (4*y - 4*z)*x + z^2
</code></pre>
<p>=========================================================</p>
<p>Places on this site I put this, several typeset:</p>
<p><a href="https://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr">reference for linear algebra books that teach reverse Hermite method for symmetric matrices</a></p>
<p><a href="https://math.stackexchange.com/questions/329304/bilinear-form-diagonalisation">Bilinear Form Diagonalisation</a> </p>
<p><a href="https://math.stackexchange.com/questions/395634/given-a-4-times-4-symmetric-matrix-is-there-an-efficient-way-to-find-its-eige/1170390#1170390">Given a $4\times 4$ symmetric matrix, is there an efficient way to find its eigenvalues and diagonalize it?</a> </p>
<p><a href="https://math.stackexchange.com/questions/1388281/find-the-transitional-matrix-that-would-transform-this-form-to-a-diagonal-form/1391117#1391117">Find the transitional matrix that would transform this form to a diagonal form.</a></p>
<p><a href="https://math.stackexchange.com/questions/1515495/writing-an-expression-as-a-sum-of-squares/1516684#1516684">Writing an expression as a sum of squares</a></p>
<p><a href="https://math.stackexchange.com/questions/1540594/determining-matrix-a-and-b-rectangular-matrix">Determining matrix $A$ and $B$, rectangular matrix</a></p>
<p><a href="https://math.stackexchange.com/questions/1580174/method-of-completing-squares-with-3-variables">Method of completing squares with 3 variables</a></p>
|
1,811,567 | <p>We had a high school mathematics teacher who taught us a cool technique that I've forgotten. It can be used, for example, for developing a formula for the sum of squares for the first "n" integers. You start by making a column for Sn, and then determine the differences until you get a constant. See the picture.</p>
<p><img src="https://i.stack.imgur.com/1iEYU.jpg" alt="example - sorry about the rotated picture">
(sorry about the rotated picture)</p>
<p><strong>How do you proceed from here to the formula?</strong></p>
| Jack D'Aurizio | 44,121 | <p>Every polynomial that takes integer values over the integers can be represented with respect to the binomial basis as a linear combination with integer coefficients. In our case:</p>
<p>$$ n^2 = \color{blue}{2}\binom{n}{2}+\color{blue}{1}\binom{n}{1}+\color{blue}{0}\binom{n}{0} \tag{1}$$
And that leads to:
$$ \sum_{n=1}^{N}n^2 = 2\binom{N+1}{3}+1\binom{N+1}{2}+0\binom{N+1}{1} = \frac{N(N+1)(2N+1)}{6}.\tag{2} $$
The blue coefficients appearing in $(1)$ can be computed through the forward difference operator:
$$ \begin{array}{ccccccccc} \color{blue}{0} && 1 && 4 && 9 && 16 \\ &\color{blue}{1} && 3 && 5 && 7 && \\ && \color{blue}{2} && 2 && 2 &&& \end{array}\tag{3}$$</p>
<p>Another example, for $n^3$.
$$ \begin{array}{ccccccccc} \color{blue}{0} && 1 && 8 && 27 && 64 \\ &\color{blue}{1} && 7 && 19 && 37 && \\ && \color{blue}{6} && 12 && 18 &&& \\ &&& \color{blue}{6} && 6 \end{array}\tag{3bis}$$
Gives:
$$ n^3 = \color{blue}{6}\binom{n}{3}+\color{blue}{6}\binom{n}{2}+\color{blue}{1}\binom{n}{1}\tag{1bis} $$
hence:
$$ \sum_{n=1}^{N}n^3 = 6\binom{N+1}{4}+6\binom{N+1}{3}+1\binom{N+1}{2}=\left(\frac{N(N+1)}{2}\right)^2.\tag{2bis}$$</p>
<p>You may also be interested in knowing that our "magic blue numbers" just depend on <a href="http://mathworld.wolfram.com/StirlingNumberoftheSecondKind.html" rel="nofollow">Stirling numbers of the second kind</a>.</p>
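<p>The whole recipe is easy to automate (a sketch, not part of the original answer): read off the leading entries of each row of the difference table, then sum against shifted binomial coefficients as in $(2)$.</p>

```python
from math import comb

def diff_heads(values):
    """Leading entry of each row of the forward-difference table (the 'blue' numbers)."""
    heads, row = [], list(values)
    while any(row):
        heads.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return heads

def partial_sum(heads, N):
    # sum_{n=1}^{N} f(n) = sum_k heads[k] * C(N+1, k+1)
    return sum(c * comb(N + 1, k + 1) for k, c in enumerate(heads))

squares = diff_heads([n**2 for n in range(6)])   # [0, 1, 2]
cubes   = diff_heads([n**3 for n in range(6)])   # [0, 1, 6, 6]
N = 10
assert partial_sum(squares, N) == N * (N + 1) * (2 * N + 1) // 6
assert partial_sum(cubes, N)   == (N * (N + 1) // 2) ** 2
```

<p>The same two functions handle any integer-valued polynomial once enough initial values are supplied.</p>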
|
28,348 | <p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem:
<a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p>
<p>I have an alternative proof that I know (from elsewhere) as follows.</p>
<hr />
<p><strong>Proof</strong>.</p>
<p><span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}</span></p>
<p>Then using this, I can instead prove:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}</span></p>
<hr />
<p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}</span></p>
<p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p>
<p><strong>Question:</strong></p>
<p>If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}</span>
Or are there sequences that invalidate that identity?</p>
<hr />
<p>(Edited to expand the last question)
given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)
\end{align}</span>
Or are there sequences that invalidate any of the above identities?</p>
<p>(Edited to repurpose this question).
Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
| JavaMan | 6,491 | <p>Since $x \mapsto \log x$ is a continuous function, and since continuous functions respect limits:
$$
\lim_{n \to \infty} f(g(n)) = f\left( \lim_{n \to \infty} g(n) \right),
$$
for continuous functions $f$, (given that $\displaystyle\lim_{n \to \infty} g(n)$ exists), your proof is entirely correct. Specifically,
$$
\log \left( \lim_{n \to \infty} \sqrt[n]{n} \right) = \lim_{n \to \infty} \frac{\log n}{n},
$$</p>
<p>and hence</p>
<p>$$
\lim_{n \to \infty} \sqrt[n]{n} = \exp \left[\log \left( \lim_{n \to \infty} \sqrt[n]{n} \right) \right] = \exp\left(\lim_{n \to \infty} \frac{\log n}{n} \right) = \exp(0) = 1.
$$</p>
|
28,348 | <p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem:
<a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p>
<p>I have an alternative proof that I know (from elsewhere) as follows.</p>
<hr />
<p><strong>Proof</strong>.</p>
<p><span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}</span></p>
<p>Then using this, I can instead prove:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}</span></p>
<hr />
<p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}</span></p>
<p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p>
<p><strong>Question:</strong></p>
<p>If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}</span>
Or are there sequences that invalidate that identity?</p>
<hr />
<p>(Edited to expand the last question)
given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)
\end{align}</span>
Or are there sequences that invalidate any of the above identities?</p>
<p>(Edited to repurpose this question).
Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
| marty cohen | 13,079 | <p>Here's a two-line, completely elementary proof that uses only Bernoulli's inequality:</p>
<p>$$(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2}$$
so, raising to the $2/n$ power,
$$ n^{1/n} < (1+n^{-1/2})^2 = 1 + 2 n^{-1/2} + 1/n < 1 + 3 n^{-1/2}.$$</p>
<p>I discovered this independently, and then found a very similar proof in Courant and Robbins' "What is Mathematics".</p>
|
2,010,423 | <p>Are $$(a, b, c) = (\mid 1 \mid, \mid 2 \mid, \mid 2 \mid), (\mid 2 \mid, \mid 4 \mid, \mid 4 \mid), (\mid 2 \mid, \mid 3\mid, \mid 6 \mid), (\mid 1 \mid, \mid 1 \mid, \mid 1 \mid)$$ the only integers such that
$$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} $$ is an integer ?</p>
| DLIN | 355,583 | <p>We can consider the positive case, i.e. $a,b,c>0$.</p>
<p>We can categorize into the 3 cases(from simple to copmlex):</p>
<p>1): $a=b=c$. Then $\frac1a+\frac1b+\frac1c=\frac3a\in\mathbb Z$, so $a|3$, hence $a=1,3$. </p>
<p>2): $0<a<b<c$. Then $1\leq\frac1a+\frac1b+\frac1c<\frac3a$, $1\leq a<3$, So $a=1,2$.</p>
<p>If $a=1$, then $\frac1b+\frac1c<\frac22=1$, impossible, so $a=2.$</p>
<p>Then $b\geq3$(since $b>a$), and
$$1\leq\frac1a+\frac1b+\frac1c<\frac12+\frac2b\leq\frac12+\frac23<2,$$
thus the sum equals to $1$. Moreover, if $b\geq4$, then $\frac1b+\frac1c<\frac24$, which means the sum is not an integer.
So $a=2,~b=3~,\Rightarrow c=6.$</p>
<p>3): $a=b<c$. By the method of 2), we have $1\leq\frac2a+\frac1c<\frac3a,~\Rightarrow a<3$.</p>
<p>If $a=1$, then $\frac1c<1$, impossible. </p>
<p>If $a=2$, $\frac1c<1$, still impossible. </p>
<p>Hence, in this case, there is no result.</p>
<p>4): $a=b>c$. By the above argument, $1\leq\frac2a+\frac1c<\frac3c$.
$c=1,2$.</p>
<p>If $c=1$, then $a=2$.</p>
<p>If $c=2$, $\frac2a\in \mathbb Z+\frac12$, and $a\geq3$. So $a=4$.</p>
<p><strong>PS</strong>: For the non-positive case, there are infinite solutions. </p>
<p>e.g. $a=b=2n,~c=-n$, where $n\in \mathbb N^+.$</p>
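<p>A brute-force check of the positive case, using exact rational arithmetic (a sketch; by the case analysis above every entry of a positive solution is at most $6$, so a search bound of $12$ is comfortably safe). Note that it also returns $(3,3,3)$ from case 1, which is missing from the list in the question:</p>

```python
from fractions import Fraction

# All positive a <= b <= c with 1/a + 1/b + 1/c a (positive) integer.
solutions = sorted(
    (a, b, c)
    for a in range(1, 13)
    for b in range(a, 13)
    for c in range(b, 13)
    if (Fraction(1, a) + Fraction(1, b) + Fraction(1, c)).denominator == 1
)
print(solutions)
# → [(1, 1, 1), (1, 2, 2), (2, 3, 6), (2, 4, 4), (3, 3, 3)]
```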
|
2,240,616 | <p>Let $f:[0,1] \to \mathbb{R}$ be a function such that for every $a \in [0,1)$ and $b \in (0,1]$ the one-sided limits $$f(a^+)=\lim _{x\to a^+}f(x) \in \mathbb{R}$$ $$f(b^-)=\lim _{x \to b^-} f(x) \in \mathbb {R}$$ exist. </p>
<p>A) Show that $f$ is bounded. </p>
<p>B) Does $f$ necessarily achieve its maximum at some $x \in [0,1]$?</p>
<p>C) Suppose further that $f$ is continuous at $0$ and $1$, and that $f(0) f(1)<0$. Prove that there exists some point $p \in (0,1)$ such that $f(p^-)f(p^+) \leq 0$. </p>
<p>Intuitively, I can see why part A is true, but I am not sure how to prove this formally. For part B, I think the answer is no, but I haven't yet come up with a counterexample. My initial thoughts on part C are to somehow apply the intermediate value theorem, but I am not sure if this is the correct approach or not.</p>
| DanielWainfleet | 254,665 | <p>For (C).</p>
<p>Suppose $f(0)>0$ (as the case $f(0)<0$ is handled similarly). Since $f$ is continuous at $0$, there exists $r\in (0,1)$ such that $\forall x\in [0,r)\;(f(x)>f(0)/2).$ So there exists $r\in (0,1)$ such that $\forall x\in [0,r)\; (f(x^-)\geq 0\land f(x+)\geq 0).$ </p>
<p>Let $P(r)\iff \forall x\in [0,r)\;(f(x^-)\geq 0\land f(x^+)\geq 0).$ Let $s=\sup \{r\in (0,1): P(r)\}.$</p>
<p>We have $s>0$.</p>
<p>We cannot have $s=1$: Else, for any $t\in (0,1)$ we have $f(t+)\geq 0$, implying that for any $r>0$ there exists $t'\in (t,1)$ with $f(t')\geq -r$. But the continuity of $f$ at $1$ would then imply $f(1)\geq -r$ for any $r>0$, so $f(1)\geq 0,$ contradicting $f(0)f(1)<0.$ </p>
<p>So $s<1.$ We have</p>
<p>(i).... $s>0$, and $s'\in (0,s)\implies f(s'^+)\geq 0.$ So for every $r>0$ and every $s'\in (0,s)$ there exists $s''\in (s',s)$ with $f(s'')>-r.$ So $f(s)\geq -r$ for every $r>0.$ Therefore $$f(s^-)\geq 0.$$ </p>
<p>(ii)....We now show that $f(s^+)\leq 0.$</p>
<p>Suppose by contradiction that $f(s^+)>0.$ Since $f(s^-)\geq 0$ by (i), the def'n of $s$ requires that $$\forall s'\in (s,1)\exists x\in (s,s')\; (f(x^+)<0\lor f(x^-)<0).$$ But if $x\in (s,s')$ and $f(x^+)<0$ there exists $y\in (x,s')$ with $f(y)<0,$ while if $x\in (s,s')$ and $f(x^-)<0$ there exists $y\in (s,x)$ with $f(y)<0$. </p>
<p>So for every $s'\in (s,1)$ there exists $y\in (s,s')$ with $f(y)<0,$ implying $f(s+)\leq 0,$ contradicting $f(s+)>0.$ Therefore, by contradiction, we have $$f(s+)\leq 0.$$ Since $f(s^-)\geq 0$ and $f(s^+)\leq 0$ we have $$f(s^+)f(s^-)\leq 0.$$</p>
<p>Remark: The case $f(0)<0$ can be done by applying the above argument to the function $f^*(x)=-f(x).$</p>
|
2,663,370 | <p>Consider the following integral:</p>
<p>$$
I(k):= \int_k^{+\infty} \frac{e^{-1/x} \log(x)}{x^2} dx\\
=-\int_0^{1/k} e^{-t} \log t \, dt,
$$</p>
<p>where $k>0$. It is known that $\lim_{k \to 0} I(k)=\gamma$, where $\gamma$ denotes the Euler-Mascheroni constant. Henceforth, it must be that $\lim_{k\to +\infty}I(k)=0$. At which rate does $I(k)$ decays to $0$ as $k \to +\infty$? I guess there's some known result about that. </p>
| Kelenner | 159,886 | <p>Hint: For $x>0$, put $F(x)=\int_0^x\exp(-t)\log tdt$, integrate by parts using that $1-\exp(-t)$ is a primitive of $\exp(-t)$, and show that
$$F(x)=(x\log x)(\frac{1-\exp(-x)}{x}-\frac{1}{x\log x}\int_0^x \frac{1-\exp(-t)}{t}dt)$$</p>
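<p>For the decay rate asked about: since $e^{-t}\to 1$ as $t\to 0^+$, one expects $I(k)=-\int_0^{1/k}e^{-t}\log t\,dt \approx -\int_0^{1/k}\log t\,dt=\frac{1+\log k}{k}$, i.e. decay of order $\frac{\log k}{k}$. A numeric check of this (a sketch; it expands $e^{-t}$ in its Taylor series, since each piece $\int_0^\varepsilon t^n\log t\,dt$ has a closed form):</p>

```python
import math

def I(k, terms=30):
    """I(k) = -int_0^{1/k} e^(-t) log(t) dt via the Taylor series of e^(-t);
    each piece int_0^eps t^n log(t) dt equals
    eps^(n+1) * (log(eps)/(n+1) - 1/(n+1)^2)."""
    eps = 1.0 / k
    total = 0.0
    for n in range(terms):
        piece = eps ** (n + 1) * (math.log(eps) / (n + 1) - 1.0 / (n + 1) ** 2)
        total += (-1) ** n / math.factorial(n) * piece
    return -total

ratios = [I(k) / ((1 + math.log(k)) / k) for k in (10.0, 1e3, 1e6)]
print(ratios)  # climbs toward 1, consistent with I(k) ~ log(k)/k
```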
|
718,901 | <p>I have for a very long time been thinking about certain notations in differential calculus; for instance, it is sometimes useful to carry out a variable substitution in integrals. For example, we put $t=x^2$ and then write $\frac {dt}{dx} = 2x \Leftrightarrow dt = (2x)dx$</p>
<p>I don't quite understand the last step; for me, $\frac {dt}{dx}$ is just a notation for the derivative of the function $t$. What are we actually doing when we multiply each side by $dx$? I imagine that $\frac {dt}{dx}$ is just a limit when the differences in $x$ and $y$ tend to zero. But when we separate them, then $dx=dt=0$.</p>
<p>Does anyone understand my concerns? If not, I will try to explain better.</p>
| Henry | 6,460 | <p>To take an example, suppose you wanted to find $$\int_{x=0}^{\frac12} x(1-x^2)^3 dx.$$ How would you interpret $dx$ in that expression?</p>
<p>Would you be willing to do the substitution $t=x^2$ with $\frac{dt}{dx}=2x$ so that you got something like this?</p>
<p>$$\int_{x=0}^{\frac12} x(1-x^2)^3 dx = \int_{x=0}^{\frac12} \frac12(1-t)^3 \frac{dt}{dx} dx = \int_{t=0}^{\frac14} \frac12(1-t)^3 dt $$</p>
<p>Personally I regard $dt = (2x)dx$ as shorthand for $\frac {dt}{dx} = 2x$ in a form that makes such substitutions easier to do without making errors.</p>
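<p>A quick numeric confirmation that the two integrals above agree (a sketch using a composite Simpson rule; the exact common value is $\frac{175}{2048}$):</p>

```python
def simpson(f, a, b, m=1000):
    """Composite Simpson's rule; m must be even."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

lhs = simpson(lambda x: x * (1 - x**2) ** 3, 0.0, 0.5)
rhs = simpson(lambda t: 0.5 * (1 - t) ** 3, 0.0, 0.25)
print(lhs, rhs)  # both ~ 175/2048 = 0.08544921875
```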
|
54,088 | <p>I am studying KS (Kolmogorov-Sinai) entropy of order <em>q</em> and it can be defined as</p>
<p>$$
h_q = \sup_P \left(\lim_{m\to\infty}\left(\frac 1 m H_q(m,ε)\right)\right)
$$</p>
<p>Why is it defined as supremum over all possible partitions <em>P</em> and not maximum? </p>
<p>When do people use supremum and when maximum?</p>
| Brooke Knight | 13,777 | <p>A supremum need not be attained, while a maximum always is; when the maximum exists, maximum $=$ supremum. For example, the open interval $(0,1)$ has supremum $1$ but no maximum. In the definition of $h_q$ there may be no single partition achieving the supremum, which is why $\sup$ is used.</p>
|
2,497,072 | <p>Write using logical symbols: "Every even number greater than two is the sum of two prime numbers" <br>
This is how I attempted to write this:
$$((n=2m \land n > 2) \Rightarrow n=p_1+p_2)\land(x|p_1 \Rightarrow x=1 \lor x=p_1) \land (y|p_2 \Rightarrow y=1 \lor y=p_2) \land (p_1 >1 \land p_2 >1)$$
I am not sure whether I have confused the logical operations. I also wonder whether I should have used "iff" in the first parentheses instead. What should I change in my solution?</p>
| Redsbefall | 97,835 | <p>Slightly different view : $$ P := \{ p \: \mid p \in \mathbb{N} \rightarrow \ \forall q \in \mathbb {N } : \: 1<q<p , \: mod(p,q) \ne 0 \: \}$$</p>
<p>$$ n = 2(m+1), \: m \in \mathbb{N} \implies \exists p_{1}, p_{2} \in P \rightarrow n =p_{1}+p_{2}$$</p>
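<p>The statement being formalized is Goldbach's conjecture. A small script checking it empirically for even numbers up to $10000$ (a verification of small cases only, of course — the conjecture itself is open):</p>

```python
def is_prime(n):
    """Trial division; fine for the small range used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Every even n with 4 <= n <= 10000 should split as a sum of two primes.
ok = all(
    any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
    for n in range(4, 10001, 2)
)
print(ok)  # → True
```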
|
2,417,029 | <p>I have been told that a line segment is a set of points.
How can infinitely many points, each of length zero, make a line of positive length?</p>
<p>Edit:
As an undergraduate I assumed it was due to having uncountably many points.
But the Cantor set has uncountably many elements and it has measure $0$.</p>
<p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p>
<p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p>
<p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
| Mikhail Katz | 72,694 | <p>You write: "As an undergraduate I assumed it was due to having uncountably many points. When I learned about measure theory, I realized that's not the explanation."</p>
<p>But in fact this <em>is</em> indeed the explanation. Lebesgue measure $\lambda$ is countably additive (in ZFC) so if $\mathbb R$ were countable one would indeed have $\lambda(\mathbb R)=0$. But countable additivity is not generalizable to any hypothetical notion of uncountable additivity. Therefore no paradox of the sort $\lambda(\mathbb R)=0$ arises.</p>
|
545,771 | <blockquote>
<p>Let the sequence $\{a_{n}\}$ be such that $a_{1}=0,a_{2}=2,a_{3}=5$, and for $n\in N^{+}$,
$$\begin{cases}
a_{2n}=2n+2a_{n}\\
a_{2n+1}=2n+1+a_{n}+a_{n+1}
\end{cases}$$
How can I find the closed form of $\{a_{n}\}$?</p>
</blockquote>
<p>My try:
we have</p>
<blockquote>
<p>$$a_{2n+1}-a_{2n}=a_{n+1}-a_{n}+1$$
so if $n=2k$,then
$$a_{2n+1}-a_{2n}=a_{n+1}-a_{n}+1$$
$$a_{2n}-a_{2n-1}=a_{n}-a_{n-1}+1$$
$\cdots\cdots\cdots$
$$a_{3}-a_{2}=a_{2}-a_{1}+1$$
so add all this equation,we have
$$a_{2n+1}-a_{2}=a_{n+1}-a_{1}+(2n-1)$$
then
$$a_{2n+1}=a_{n+1}+2n+1$$
<strong>Is my idea correct? And how does one solve this problem? (The problem is from a recent Chinese competition.)</strong></p>
</blockquote>
| Obinna Okechukwu | 11,283 | <p>Here is a closed form formula for $a_n$ <br>
$$a_n = (\lfloor \log_2{n}\rfloor + 2)n - 2^{\lfloor \log_2{n}\rfloor +1}$$
To get this, simply notice that for $n\ge 2$ $$a_n = n + a_{\lfloor \frac{n}{2}\rfloor}+ a_{\lceil\frac{n}{2}\rceil}$$
Then follow Greg Martin's suggestions.</p>
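<p>A quick check of the closed form against the recurrence (a sketch; <code>bit_length</code> gives $\lfloor\log_2 n\rfloor + 1$ for positive integers):</p>

```python
# Build a_n from the recurrence and compare with
# a_n = (floor(log2 n) + 2) * n - 2^(floor(log2 n) + 1).
N = 4096
a = [0] * (N + 1)
a[1] = 0
for n in range(1, N // 2):
    a[2 * n] = 2 * n + 2 * a[n]
    a[2 * n + 1] = 2 * n + 1 + a[n] + a[n + 1]

def closed(n):
    k = n.bit_length() - 1  # floor(log2 n)
    return (k + 2) * n - 2 ** (k + 1)

match = all(a[n] == closed(n) for n in range(1, N))
print(match)  # → True
```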
|
1,775,378 | <p>So this question is inspired by the following thread: <a href="https://forums.factorio.com/viewtopic.php?f=5&t=25008">https://forums.factorio.com/viewtopic.php?f=5&t=25008</a></p>
<p>In it, the poster is examining an $8$-belt balancer (more on that to come) which he shows fails to satisfy a desirable property, which he called universally throughput unlimited.</p>
<p>So what is an $n$-belt balancer? It is a configuration of belts (which move items around) and splitters (which take two belts in and balance their items on the two belts on the output side) which will balance the input of all $n$ input belts across all $n$ output belts. They are frequently used in large factories to move large amounts of items to a variety of different areas, in a manner where no one belt's worth of items getting backlogged (more items coming at it than it can use) results in other projects not getting full throughput (or at least as much as they can use).</p>
<p>The desired property called <em>universally throughput unlimited</em> is the following: Suppose only $k$ of the $n$ input belts are getting input (assume full input; aka, input belts are assumed saturated), and that all but $k$ of the output belts are backlogged and have no throughput (already full of items and nothing is moving on those belts). Then the full input on those $k$ input belts can be provided across the $k$ output belts (which have the same maximum throughput, hence no one output belt can handle more than one input belt's worth of throughput). This basically means that the $n$-belt balancer is never a bottleneck no matter the current input or output limitations (which lanes are getting input/available for output).</p>
<p>The question I have is the following: Is it always possible to create an $n$-belt balancer satisfying the universally throughput unlimited condition for any $n$? If not, for which $n$'s is it possible? (Clearly, $n=2$ works because of how splitters behave.)</p>
<p>I have some ideas on how to approach this problem, but am nowhere near having it solved. The first idea is about how to represent the problem:
We can represent the input belts and output belts as vertices of a directed graph. The inputs being sources (in-degree=0) and the outputs being sinks (out-degree=0). The balancer is the input and output vertices together with a set of <em>intermediate</em> vertices which represent splitters which have $1\leq$in-degree,out-degree$\leq$2 (one or two directed edges point to them and one or two coming from them) and the associated directed edges. Looking at the problem this way, it is easy to see that a <em>necessary</em> condition is that input on any belt can reach any output belt (it is necessary because if not, then consider the case of all input on one input belt and all but one output belt backlogged with 0 throughput; in such a case, if you can't route that input belt's input to the output belt you won't get any throughput), but this condition is not sufficient (multiple examples that satisfy this condition have been shown both theoretically and experimentally to fail to have the desired universally throughput unlimited property).</p>
<p>An important thing to note is that belts can be routed <em>under</em> other belts via underground belts, hence planarity of the above-described graph is not necessary. The fact that splitters have some very specific behaviors is important to this problem: they will always try to balance outputs provided there is no backlog; hence, in a no-backlog scenario the output on each belt leaving a splitter is half of the <em>total</em> input on both of its input belts. If, however, one of the output belts is backlogged with no throughput, then all of the throughput will be merged onto the 'free' belt <em>up to</em> its throughput limit. If more than one belt's worth of throughput is coming into a splitter in this case, then <em>both</em> input belts will start to bottleneck (each belt's effective throughput will be half of the maximum, because that's how much of the saturated output belt is coming from the given input belt). Sometimes a backlog is only a <em>reduction</em> in throughput (due to bottlenecking down the line somewhere); in such a case, a splitter will still split input equally up to the reduced throughput of the lower-throughput belt; after that, <em>all</em> remaining throughput is thrown at the belt with additional capacity until that, too, is saturated, and if there is any more input coming at the given splitter then <em>both</em> of its input belts will start to backlog.</p>
<p>This backlog phenomenon can result in some very subtle behaviors, which makes simply assigning weights to the directed edges in the above-described graph (constrained to a value in $[0,1]$, where $1$ is saturated and $0$ is no throughput) inadequate to describe the problem. For instance, a splitter causing a backlog with some throughput, but not enough to avoid backlog, can lead to a reduction in throughput for <em>another</em> splitter's output belt, shifting more of its input onto the other output belt (which might cause a splitter further down that belt to suddenly become a bottleneck and backlog, etc.)</p>
<p>My suspicion from experimenting a tad, as well as some theoretical work looking at how splitters divide inputs, leads me to conjecture that it is not possible for all $n$, and that the most likely candidates are powers of $2$. Even then, for powers higher than $1$ it still might be impossible, because of odd numbers of belts having input needing to get to the same number of output belts (and if balancing odd numbers of belts isn't possible, then the universally throughput unlimited condition might not be satisfiable because of these cases).</p>
| Bomaz | 354,628 | <p>This is not a complete answer but</p>
<p>Assumption 1: an unlimited <a href="http://visionfear.com/Pictures/upload/2016/01/10/20160110151931-a269f11e.png" rel="nofollow noreferrer">4:4 balancer is possible</a>. My belief is that this is throughput unlimited.</p>
<p>Take the design of the 4:4 balancer, replace each 2:2 balancer with a 4:4 balancer, and you <a href="https://imgur.com/HUqGOGv" rel="nofollow noreferrer">should have an 8:8 balancer</a>, which I believe to be unlimited.</p>
<p>A bit more general: If we have an n:n balancer, where n is some power of 2, we can get 2n:2n by taking the design of the 4:4 balancer and replacing each 2:2 balancer with an n:n balancer.</p>
<p>This assumes that it is always geometrically possible.</p>
<p>By induction all powers of 2 should have throughput unlimited balancers.</p>
<p>Addendum 1: Assuming the above holds, then since throughput-unlimited balancers are possible for powers of 2, a throughput-unlimited balancer for a smaller number can be achieved by blocking off the extra inputs/outputs.</p>
<p>Addendum 2: (This is just pure speculation) If we have a set of belts in a certain order all heading in the same direction we should be able to switch place of any 2 belts in the order by having all other belts use underground belts for a few squares. Thus we can permute belts arbitrarily as any permutation of 2 belts is possible. This should mean that we can always build 2n:2n balancers from n:n ones.</p>
|
2,843,337 | <blockquote>
<p>If $0 \lt x \lt \dfrac{\pi}{2}$, prove that
$$x^{3/2}\sin x + \sqrt{9-x^3}\cos x \leq 3$$</p>
</blockquote>
<p>This question must be done without calculus. First, I tried splitting it into the intervals $(0,\pi/4)$ and $(\pi/4, \pi/2)$, hoping that, $\sin x$ was bound tightly enough on the interval that it'd be less than 3 even if $\cos x = 1$ (which doesn't work -- letting $\sin x = \dfrac{1}{\sqrt{2}}$ and $\cos x = 1$ produces a result greater than 3).</p>
<p>The other thing I noticed was that inside the square root sign, we have $\sqrt{9-x^3} = \sqrt{(3-x^{3/2})(3+x^{3/2})}$, and an $x^{3/2}$ appears in the first term, but I'm not sure how useful the similarity there is.</p>
<p>Advice on how to proceed?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Note that $$(9-x^3)/9 +x^3/9 =1$$</p>
<p>Let $$(9-x^3)/9 =\sin ^2 t$$ and $$x^3/9 =\cos ^2 t$$</p>
<p>Thus, with $\cos t =\frac{x^{3/2}}{3}$ and $\sin t =\frac{\sqrt{9-x^3}}{3}$ (choosing $t\in[0,\pi/2]$), $$ \cos t \sin x +\sin t \cos x =\sin (x+t) \le 1$$</p>

<p>Multiplying through by $3$ gives $$ x^{3/2}\sin x + \sqrt{9-x^3}\cos x \leq 3$$</p>
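<p>A numeric scan supports this and suggests the bound is sharp: as $x\to 0^+$ the left side tends to $3$, while staying strictly below $3$ on $(0,\pi/2)$ (a sketch):</p>

```python
import math

# Sample f(x) = x^(3/2) sin x + sqrt(9 - x^3) cos x on (0, pi/2).
N = 20_000
best_val = 0.0
for i in range(1, N + 1):
    x = i * (math.pi / 2) / (N + 1)  # interior points only
    val = x**1.5 * math.sin(x) + math.sqrt(9 - x**3) * math.cos(x)
    best_val = max(best_val, val)
print(best_val)  # just below 3; the supremum 3 is approached as x -> 0+
```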
|
394,489 | <p>$$\int^L_{-L} x \sin\left(\frac{\pi nx}{L}\right)\,dx$$</p>
<p>I've seen something like this in Fourier theory, but I'm still not sure how to approach this integral. Wolfram Alpha gives me the answer, but no method. Integrate by parts? Substitution?</p>
| Mark McClure | 21,361 | <p>For what it's worth, WolframAlpha <em>will</em> show you the steps to evaluate the <em>indefinite</em> integral associated with this definite integral. It's then just a matter of plugging in the limits and subtracting.</p>
<p>Here are the steps returned by WolframAlpha via the Mathematica interface:</p>
<p><img src="https://i.stack.imgur.com/K1h44.png" alt="enter image description here"></p>
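<p>For reference, integrating by parts (with $u=x$, $dv=\sin\frac{\pi n x}{L}\,dx$; the boundary sine terms vanish at $x=\pm L$) gives $\int_{-L}^{L} x\sin\frac{\pi n x}{L}\,dx=\frac{2L^2(-1)^{n+1}}{n\pi}$. A numeric confirmation (a sketch; $L=2$ chosen arbitrarily):</p>

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule; m must be even."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = 2.0
max_err = 0.0
for n in range(1, 6):
    numeric = simpson(lambda x: x * math.sin(n * math.pi * x / L), -L, L)
    exact = 2 * L**2 * (-1) ** (n + 1) / (n * math.pi)
    max_err = max(max_err, abs(numeric - exact))
print(max_err)  # tiny: the quadrature matches the closed form
```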
|
162,364 | <p>Assume we have the list below, with real-number and complex-number elements. How can I quickly find whether the list has any real number whose value is less than 1.0? </p>
<pre><code> lis = {3 Sqrt[354], Sqrt[2962], Sqrt[2746], 3 Sqrt[282], Sqrt[2338],
Sqrt[2146], 3 Sqrt[218], Sqrt[1786], Sqrt[1618], 27 Sqrt[2], Sqrt[
1306], Sqrt[1162], 3 Sqrt[114], Sqrt[898], Sqrt[778], 3 Sqrt[74],
Sqrt[562], Sqrt[466], 3 Sqrt[42], Sqrt[298], Sqrt[226], 9 Sqrt[2],
Sqrt[106], Sqrt[58], 3 Sqrt[2], I Sqrt[14], I Sqrt[38], 3 I Sqrt[6],
I Sqrt[62], I Sqrt[62], 3 I Sqrt[6], I Sqrt[38], I Sqrt[14], Sqrt[
1.2], Sqrt[58], Sqrt[1.06], 9 Sqrt[2], Sqrt[226]}
</code></pre>
<p>I am also considering whether we can find methods to filter real or complex numbers with specific values out of any type of list (e.g. some string elements mixed with real and complex number elements in a given list). But this could be another question and isn't necessary for my example. Please leave some advice if you are interested! Thanks in advance!</p>
| Edmund | 19,542 | <p>You may use <a href="http://reference.wolfram.com/language/ref/Select.html" rel="nofollow noreferrer"><code>Select</code></a>and <a href="http://reference.wolfram.com/language/ref/Element.html" rel="nofollow noreferrer"><code>Element</code></a>.</p>
<pre><code>Select[# ∈ Reals && # < 1 &]@lis
</code></pre>
<blockquote>
<pre><code>{}
</code></pre>
</blockquote>
<p>There appear to be no items that are both in the reals and less than one.</p>
<p>If you do not need the items but only the result of the test then you could place in an <a href="http://reference.wolfram.com/language/ref/If.html" rel="nofollow noreferrer"><code>If</code></a> and test the <a href="http://reference.wolfram.com/language/ref/Length.html" rel="nofollow noreferrer"><code>Length</code></a>.</p>
<pre><code>If[
Length@Select[# ∈ Reals && # < 1 &]@lis > 0,
True,
False]
</code></pre>
<blockquote>
<pre><code>False
</code></pre>
</blockquote>
<p>Hope this helps.</p>
|
2,426,394 | <p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused.
It is the 1378th Problem from "Problems in Mathematical Analysis" written by Demidovich. The standard answer is
$$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>But when I followed the routine
$$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$
I got
$$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p>
<p>and finally
$$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p>
<p>Would you tell me what mistake I made. Best regards.</p>
| user577215664 | 475,762 | <p>Or write it this way; it's easier: </p>
<p>$P(x)=\frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}$</p>
<p>$P(x)=(\frac{(1+x)^{10}}{(1-4x^2)^{4}(1+2x)^{2}})^{10}$</p>
<p>$P(x)=(\frac{(1+x)^{5}}{(1-4x^2)^{2}(1+2x)})^{20}=Q(x)^{20}$</p>
<p>Then apply $\left(Q(x)^n\right)'=nQ(x)^{n-1}Q'(x)$</p>
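<p>A numeric cross-check of the book's answer against a central finite difference, at the arbitrary point $x=0.1$ (a sanity check, not a derivation):</p>

```python
def P(x):
    return (1 + x) ** 100 / ((1 - 2 * x) ** 40 * (1 + 2 * x) ** 60)

def book_answer(x):
    return 60 * (1 + x) ** 99 * (1 + 6 * x) / ((1 - 2 * x) ** 41 * (1 + 2 * x) ** 61)

x, h = 0.1, 1e-6
numeric = (P(x + h) - P(x - h)) / (2 * h)
rel_err = abs(numeric - book_answer(x)) / abs(book_answer(x))
print(rel_err)  # effectively zero: the standard answer checks out
```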
|
4,292,822 | <blockquote>
<p>Prove linear independence of <span class="math-container">$1+x^3-x^5,1-x^3,1+x^5$</span> in the Vector Space of Polynomials</p>
</blockquote>
<p>The attempts I found online all are quite easy. You just substitute something in for <span class="math-container">$x$</span> into the equation <span class="math-container">$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5)=0$</span> for example <span class="math-container">$x=1,0,-1$</span> and this will give you three equations where you can show that <span class="math-container">$a,b,c=0$</span>. But why can we substitute something in? If I define the Vector Space of Polynomials in a very abstract way that <span class="math-container">$\sum_{i} \alpha_i x^{i}+\sum_{i} \beta_{i} x^{i}:=\sum_{i} (\alpha_{i}+\beta_{i})x^{i})$</span> and <span class="math-container">$(\sum_{i}^{n} \alpha_i x^{i})(\sum_{i}^{m} \alpha_{i} x^{i} ):=\sum_{i=0}^{n+m} c_i x^i$</span> with <span class="math-container">$c_k=a_0 b_k+a_1 b_{k-1}+...+a_{k} b_0$</span> and a <span class="math-container">$x$</span> is just an abstract symbol with absolutely no meaning why should one be allowed to substitute something for <span class="math-container">$x$</span> or even worse differentiate the equation?</p>
| Servaes | 30,382 | <p>The reason that you can substitute and differentiate, is because every polynomial defines a function from <span class="math-container">$\Bbb{R}$</span> to <span class="math-container">$\Bbb{R}$</span>. The set <span class="math-container">$\operatorname{Map}(\Bbb{R},\Bbb{R})$</span> of all functions from <span class="math-container">$\Bbb{R}$</span> to <span class="math-container">$\Bbb{R}$</span> is of course an <span class="math-container">$\Bbb{R}$</span>-vector space, and the map
<span class="math-container">$$\Bbb{R}[X]\ \longrightarrow\ \operatorname{Map}(\Bbb{R},\Bbb{R}):\ P(X)\ \longmapsto\ (x\ \longmapsto\ P(x)),$$</span>
is injective. So if the functions defined by these polynomials are linearly independent in <span class="math-container">$\operatorname{Map}(\Bbb{R},\Bbb{R})$</span>, then also the polynomials are linearly independent in <span class="math-container">$\Bbb{R}[X]$</span>.</p>
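<p>Concretely, in the coordinates $(1,x^3,x^5)$ the three polynomials are the vectors $(1,1,-1)$, $(1,-1,0)$, $(1,0,1)$, and independence amounts to a nonzero determinant. Interestingly, the evaluation points $x=0,1,-1$ mentioned in the question happen to give a <em>singular</em> system (e.g. $a=2$, $b=c=-1$ satisfies all three resulting equations), so those particular substitutions alone do not settle independence. A quick check of both facts:</p>

```python
def det3(m):
    """Determinant of a 3x3 matrix, cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Coefficient vectors of 1+x^3-x^5, 1-x^3, 1+x^5 in the basis (1, x^3, x^5):
coeffs = [[1, 1, -1],
          [1, -1, 0],
          [1, 0, 1]]
print(det3(coeffs))  # → -3 (nonzero, so the polynomials are independent)

# Values of the three polynomials at x = 0, 1, -1 (one row per point):
evals = [[1, 1, 1],
         [1, 0, 2],
         [1, 2, 0]]
print(det3(evals))  # → 0 (these three points give a singular system)
```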
|
1,137 | <p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to students failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what these students need, and we reinforce the impression that not mastering high-school mathematics is OK.</p>
<blockquote>
<p>How much effort should we spend in class for studying material that is
supposed to be mastered but in practice is not?</p>
</blockquote>
| user568 | 568 | <p>This is a situation where using an online homework system is particularly useful; usually there will be an early section of review material from which you can assign problems. I would advise issuing such an assignment as early as possible in the term (preferably on the first day), making sure the students realize it is for a grade and that as it covers prerequisite material for the course, no in class time (or very little) will be spent on these problems. Note that these homework systems will also give students examples and detailed instruction on how to do the assigned problems, so the students should be able to (re)learn the necessary material on their own. </p>
|
3,421,969 | <p>The point <span class="math-container">$C = (1, 2)$</span> lies inside the circle <span class="math-container">$x^2 + y^2 = 9$</span>. What is the length of the shortest chord of the circle through <span class="math-container">$C$</span>?</p>
| Heatconomics | 531,927 | <p>Let <span class="math-container">$S=1+a+a^2+\dots+a^{n-1}$</span>. Then <span class="math-container">$aS=a+a^2+\dots+a^n$</span>. Then, <span class="math-container">$S-aS=(1-a)S=1-a^n$</span>. Thus, if <span class="math-container">$a\neq 1$</span>, <span class="math-container">$S=\frac{1-a^n}{1-a}$</span>.</p>
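<p>A quick exact-arithmetic check of the telescoping identity above, for a few integer ratios (a sketch; the identity holds for any $a\neq 1$):</p>

```python
# Verify S = (1 - a^n) / (1 - a) against the direct sum.
checks = []
for a in (2, 3, -2, 7):
    for n in (1, 5, 10):
        s = sum(a**i for i in range(n))
        checks.append(s == (1 - a**n) // (1 - a))
print(all(checks))  # → True
```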
|
3,421,969 | <p>The point <span class="math-container">$C = (1, 2)$</span> lies inside the circle <span class="math-container">$x^2 + y^2 = 9$</span>. What is the length of the shortest chord of the circle through <span class="math-container">$C$</span>?</p>
| Klaus | 635,596 | <p>You made a mistake in the first displayed line, it should be
<span class="math-container">$$a^{n-1}\left(\frac{1-(\frac{1}{a})^n}{1-\frac{1}{a}}\right) = \frac{a^n}{a}\left(\frac{1-\frac{1}{a^n}}{1-\frac{1}{a}}\right) = \left(\frac{a^n-1}{a-1}\right)$$</span>
as desired.</p>
|
1,867,695 | <blockquote>
<p>$\lim_{n\rightarrow \infty}n\left ( 1-\sqrt{1-\frac{5}{n}} \right )$</p>
</blockquote>
<p>$\lim_{n\rightarrow \infty} n *\lim_{n\rightarrow \infty}\left ( 1-\sqrt{1-\frac{5}{n}} \right ) = \infty * \left ( 1-\sqrt{1-0} \right ) = \infty * 0 = 0$</p>
<p>Did I do it correctly?</p>
<p>The problem is that when I use my calculator and put in big values for $n$, I get $2.5$ as the result (if I take very, very big values, it's $0$ :D).</p>
<p>Anyway, this made me feel a bit insecure and that's why I'm asking if I did it right.</p>
| Clement C. | 75,808 | <p>In short: <strong>no</strong>, you did not do it correctly. The reason is that $\infty\cdot 0$ is one of the <strong>indeterminate</strong> forms, you cannot conclude it's equal to zero.</p>
<p>The rationale being: <em>it can be anything.</em> For instance, $$\begin{align}
\underbrace{n}_{\to \infty}\cdot \underbrace{\frac{1}{n}}_{\to 0} &\xrightarrow[n\to\infty]{} 1\\
\underbrace{n^2}_{\to \infty}\cdot \underbrace{\frac{1}{n}}_{\to 0} &\xrightarrow[n\to\infty]{} \infty\\
\underbrace{n}_{\to \infty}\cdot \underbrace{\frac{1}{n^2}}_{\to 0} &\xrightarrow[n\to\infty]{} 0\\
\underbrace{n}_{\to \infty}\cdot \underbrace{\frac{\cos n}{n}}_{\to 0} &\xrightarrow[n\to\infty]{} \text{ nothing (no limit)}
\end{align}$$</p>
<p>To solve your indeterminate form, you have to use other techniques to "remove" that issue. Taylor expansions, multiplication by a conjugate, L'Hôpital rule... there are known and systematic methods to do so.</p>
<hr>
<p>E.g., with Taylor expansions: when $u\to 0$,
$$
\sqrt{1-5u} = 1-\frac{5}{2}u + o(u)
$$
so applying it with $u\stackrel{\rm def}{=}\frac{1}{n}\xrightarrow[n\to\infty]{}0$,
$$\begin{align}
n\left(1- \sqrt{1-\frac{5}{n}}\right)
&= \frac{1}{u}\left(1- \sqrt{1-5u}\right)
= \frac{1}{u}\left(1- 1 + \frac{5}{2}u + o(u)\right)
= \frac{1}{u}\left(\frac{5}{2}u + o(u)\right)\\
&= \frac{5}{2} + o(1) \xrightarrow[n\to\infty]{}\frac{5}{2}.
\end{align}$$</p>
<hr>
<p><strong>Addendum:</strong> why does your calculator do that? The correct answer is indeed $\frac{5}{2}=2.5$. Yet, for "very big $n$", your calculator will return $0$ instead... and that boils down to <strong>machine precision</strong>. I assume it starts by computing the term $\sqrt{1-\frac{5}{n}}$, then plugs in the result to get $1-\sqrt{1-\frac{5}{n}}$, and finally multiplies what remains by $n$ and outputs the value. <em>But</em> for very large $n$, due to machine precision and rounding errors, the first step will be so close to $1$ that the calculator will just compute $1$ instead of $\sqrt{1-\frac{5}{n}}\approx 1 - \frac{5}{2n}$. So then, $1-1=0$, and no matter what the last step <em>should</em> do, it returns $n\cdot 0=0$.</p>
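<p>This is easy to reproduce in double precision (a sketch): once $5/n$ drops below about $5.6\cdot 10^{-17}$ (half the spacing of floating-point numbers just below $1$), the quantity $1-\frac5n$ rounds to exactly $1$, and the whole expression collapses to $0$:</p>

```python
import math

def seq(n):
    return n * (1 - math.sqrt(1 - 5 / n))

moderate = seq(1e6)   # ~ 2.500003..., close to the true limit 5/2
huge = seq(1e17)      # 1 - 5/n rounds to exactly 1.0, so this is 0.0
print(moderate, huge)
```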
|
1,867,695 | <blockquote>
<p>$\lim_{n\rightarrow \infty}n\left ( 1-\sqrt{1-\frac{5}{n}} \right )$</p>
</blockquote>
<p>$\lim_{n\rightarrow \infty} n *\lim_{n\rightarrow \infty}\left ( 1-\sqrt{1-\frac{5}{n}} \right ) = \infty * \left ( 1-\sqrt{1-0} \right ) = \infty * 0 = 0$</p>
<p>Did I do it correctly?</p>
<p>The problem is that when I use my calculator and put in big values for $n$, I get $2.5$ as the result (if I take very, very big values, it's $0$ :D).</p>
<p>Anyway, this made me feel a bit insecure and that's why I'm asking if I did it right.</p>
| Hulkster | 212,102 | <p>$$n-n\sqrt{1-\frac{5}{n}}=\frac{(n-n\sqrt{1-\frac{5}{n}})(n+n\sqrt{1-\frac{5}{n}})}{n+n\sqrt{1-\frac{5}{n}}}=\frac{5}{1+\sqrt{1-\frac{5}{n}}}\rightarrow \frac{5}{2}, \space \text{as} \space \space n \rightarrow \infty.$$</p>
|
2,823,675 | <p>Let $f$ be a nonnegative, continuous, decreasing function on $[0,\infty)$.
Suppose $\frac{f(x)}{\sqrt{x}}$ is integrable; show that $\sqrt{x}f(x) \to 0$ as $x\to \infty$.</p>
<p>I tried to prove that $\sqrt{x}f(x)$ is a Cauchy sequence, but I could not. And I don't know how to use continuity.</p>
| Bill O'Haran | 551,590 | <p>Let's just shift the problem to make it easier (substituting $x=u^2$ turns the hypothesis "$f(x)/\sqrt{x}$ integrable" into "$f(u^2)$ integrable" and the goal "$\sqrt{x}f(x)\to 0$" into "$u\,f(u^2)\to 0$", so the following suffices):</p>
<blockquote>
<p>Let $f: \mathbb{R}^+ \to \mathbb{R}$ be a continuous, decreasing and integrable (on $\mathbb{R}^+$) function.
Prove that $xf(x)\to 0$ as $x\to \infty$.</p>
</blockquote>
<p><strong>First step:</strong> Notice that $f\to 0$ as $x\to \infty$ (fairly well-known fact but still)</p>
<p>Then $f$ has to be nonnegative (we did not need it as an hypothesis!) because it decreases towards $0$.</p>
<p><strong>Second step:</strong></p>
<p>$$
0\leq \frac{x}{2}f(x) \leq \int_{x/2}^x f(t) dt \underset{x\to \infty}{\to} 0
$$</p>
<p>Because $\int_{x/2}^x f(t) dt = \int_0^x f(t) dt - \int_0^{x/2} f(t) dt $ and both integrals converge to the same limit.</p>
<p>Hence the result.</p>
|
2,823,675 | <p>Let $f$ be a nonnegative, continuous, decreasing function on $[0,\infty)$.
Suppose $\frac{f(x)}{\sqrt{x}}$ is integrable; show that $\sqrt{x}f(x) \to 0$ as $x\to \infty$.</p>
<p>I tried to prove that $\sqrt{x}f(x)$ is a Cauchy sequence, but I could not. And I don't know how to use continuity.</p>
| mathcounterexamples.net | 187,663 | <p>Take $0<u<v$. You have for $x \in [u,v]$ $\frac{f(u)}{\sqrt{x}} \le \frac{f(x)}{\sqrt{x}} \le \frac{f(v)}{\sqrt{x}} $</p>
<p>Hence</p>
<p>$$0 \le 2\left(\sqrt{v}-\sqrt{u}\right) f(u)\le \int_u^v F(t) \ dt\le 2\left(\sqrt{v}-\sqrt{u}\right) f(v)$$
where $F(x) = \frac{f(x)}{\sqrt{x}}$</p>
<p>Now take $u=x$ and $v=4x$ you get
$$0 \le \sqrt{x}f(x) \le \frac{1}{2}\int_{x}^{4x} F(t) \ dt \underset{x\to \infty}{\to} 0$$</p>
<p>using Cauchy criteria for a converging integral. Hence $\lim\limits_{x \to \infty} \sqrt{x}f(x) = 0$.</p>
|
481,017 | <blockquote>
<p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p>
</blockquote>
<p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p>
<p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p>
<p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
| Michael Albanese | 39,599 | <p>The following observation comes from a comment by Sergei Ivanov in <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">this</a> MathOverflow post. It discusses the existence of $f : \mathbb{C} \to \mathbb{C}$ such that $f(f(z)) = g(z)$ where $g(z) = z^2 - 2$. I am aware that this is not precisely what the question asks for, but it may be helpful.</p>
<blockquote>
<p>Let $g : \mathbb{C} \to \mathbb{C}$ be a quadratic polynomial such that $g(z) - z$ has distinct roots, then there are four solutions of $g(g(z)) = z$: the two roots of $g(z) - z$ and two points $a$, $b$ such that $g(a) = b$ and $g(b) = a$. If $g(z) = f(f(z))$ then the point $f(a)$ must be another solution to $g(g(z)) = z$. Note, this doesn't assume $f$ is continuous.</p>
</blockquote>
<p>(<em>Note:</em> I have paraphrased the above text; the original comment can be found <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx#comment33340_17621">here</a>.)</p>
<p>Now note that if $g(z) = z^2 - 2$, then $g(z) - z = z^2 - z - 2 = (z - 2)(z + 1)$, so the above observation applies to $z^2 - 2$.</p>
<hr>
<p>Here are some details about the second paragraph.</p>
<p><strong>Why are there four solutions to $g(g(z)) = z$?</strong></p>
<p>$g(z)$ is a quadratic, so $g(g(z))$ is a quartic, as is $g(g(z)) - z$. By the fundamental theorem of algebra, there are four roots of the equation $g(g(z)) - z = 0$, though some may be repeated.</p>
<p><strong>Why are the zeroes of $g(z) - z$ solutions to $g(g(z)) = z$?</strong></p>
<p>As $g(z) - z$ has distinct zeroes, call them $w_1, w_2$. As $g(w_1) - w_1 = 0$, $g(w_1) = w_1$. Therefore $g(g(w_1)) = g(w_1) = w_1$. Likewise, $g(w_2) = w_2$ and $g(g(w_2)) = w_2$.</p>
<p><strong>Why must the other two roots of $g(g(z)) = z$ be $a, b$ such that $g(a) = b$ and $g(b) = a$?</strong></p>
<p>Let $a$ be a zero of $g(g(z)) = z$ other than $w_1$ and $w_2$. Now let $b = g(a)$. Then $a = g(g(a)) = g(b)$ and $g(g(b)) = g(a) = b$, so $b$ is the final zero.</p>
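<p>(Added sanity check, not part of the original argument.) For $g(z) = z^2-2$ the four solutions of $g(g(z))=z$ can be confirmed numerically: the fixed points are $2$ and $-1$, and the remaining pair $a,b$ are the roots of $z^2+z-1$, which $g$ swaps:</p>

```python
import math

def g(z):
    return z * z - 2

# fixed points: roots of g(z) - z = z^2 - z - 2 = (z - 2)(z + 1)
assert g(2) == 2 and g(-1) == -1

# the remaining solutions of g(g(z)) = z: roots of z^2 + z - 1
a = (-1 + math.sqrt(5)) / 2
b = (-1 - math.sqrt(5)) / 2
assert abs(g(a) - b) < 1e-12 and abs(g(b) - a) < 1e-12   # g swaps a and b

# g(g(z)) - z factors as (z^2 - z - 2)(z^2 + z - 1): spot-check one point
z = 0.37
assert abs((g(g(z)) - z) - (z*z - z - 2) * (z*z + z - 1)) < 1e-12
```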
<p><strong>Why must $f(a)$ be another solution to $g(g(z)) = z$?</strong></p>
<p>Note that $f(f(a)) = g(a) = b$ and $f(f(b)) = g(b) = a$ so </p>
<p>$$g(g(f(a))) = g(f(f(f(a)))) = g(f(g(a))) = g(f(b)) = f(f(f(b))) = f(g(b)) = f(a).$$</p>
<p>To see that this is a new solution, note that $f(a) \neq w_1$ otherwise </p>
<p>$$b = g(a) = f(f(a)) = f(w_1),$$ </p>
<p>then </p>
<p>$$f(b) = f(f(w_1)) = g(w_1) = w_1.$$ Therefore </p>
<p>\begin{align*}
f(a) &= f(b)\\
f(f(a)) &= f(f(b))\\
g(a) &= g(b)\\
b &= a,
\end{align*}</p>
<p>which is a contradiction. Likewise, $f(a) \neq w_2$. </p>
<p>If $f(a) = a$ then </p>
<p>$$f(a) = a = g(g(a)) = g(f(f(a))) = g(f(a)) = g(a) = b,$$ which is a contradiction.</p>
<p>Finally, if $f(a) = b$ then</p>
<p>$$b = g(a) = f(f(a)) = f(b)$$</p>
<p>which leads to a similar contradiction as in the $f(a) = a$ case.</p>
|
481,017 | <blockquote>
<p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p>
</blockquote>
<p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p>
<p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p>
<p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
| Steven Stadnicki | 785 | <p>Any solutions to this problem should be findable via conjugacy; the key is that the given quadratic is <em>topologically conjugate</em> to the 'critical' logistic map, which in turn is known to be topologically conjugate to the so-called <em>bit shift</em> map. Let $f(x) = x^2-2$, $g(x) = 4x(1-x)$, $p(x) = 2-4x$. Then we can confirm that $f(p(x)) = (2-4x)^2-2 = 16x^2-16x+2$, while $p(g(x)) = 2-4(4x(1-x)) = 16x^2-16x+2$.</p>
<p>But now, taking $q(x) = \sin^2(2\pi x)$, it can be shown that $g(q(x)) = q(h(x))$ on $[0,1]$, where $h(x)$ is the bit-shift map $h(x) = 2x\bmod 1$ (this works because $g(q(x))$ $= 4(\sin^2(2\pi x))(1-\sin^2(2\pi x))$ $= 4\sin^2(2\pi x)\cos^2(2\pi x)$ $= \sin^2(2\cdot 2\pi x)$, etc).</p>
<p>Now, suppose we have a 'functional square root' $H(x)$ of the bit-shift map; that is, a function $H(x)$ such that $H(H(x)) = h(x)$. Then by 'chasing the chain' of conjugacies, we can turn this into a functional square root $F(x)$ of the original $f()$: letting $r = p\circ q$ (that is, defining $r(x) = p(q(x))$) we have $f\circ r=r\circ h$, so $f = r\circ h\circ r^{-1}$.. But then setting $F=r\circ H\circ r^{-1}$ we get $F\circ F = r\circ H\circ r^{-1}\circ r\circ H\circ r^{-1} = r\circ H\circ H\circ r^{-1} = r\circ h\circ r^{-1} = f$.</p>
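<p>(Added illustration.) The semiconjugacy $f \circ r = r \circ h$ with $r = p\circ q$ is easy to spot-check numerically; the lambda names below are just the functions defined above:</p>

```python
import math

f = lambda x: x * x - 2                       # target quadratic
q = lambda x: math.sin(2 * math.pi * x) ** 2
p = lambda x: 2 - 4 * x
r = lambda x: p(q(x))                         # r(x) = 2*cos(4*pi*x)
h = lambda x: (2 * x) % 1.0                   # bit-shift map

# f(r(x)) and r(h(x)) agree at sample points, confirming f∘r = r∘h
for x in [0.1, 0.23, 0.341, 0.77, 0.912]:
    assert abs(f(r(x)) - r(h(x))) < 1e-9
```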
<p>The problem thus reduces to finding an $H$ such that $H\circ H(x)=h(x) = 2x\bmod 1$. For $x$ in a limited range (e.g., $x\lt \frac12$ — and I haven't translated back to see what interval of the original problem this represents), we can simply take $H(x) = \sqrt{2}x$ as a solution, but this piecewise-linear behavior can't be extended to the whole of $[0,1]$: requiring $H(x) = \sqrt{2}x$ on $x\lt\frac12$ implies that it must in fact be true for $x\lt\frac1{\sqrt{2}}$ so that $H(H(x)) = 2x$ for all $x\lt\frac12$; but now consider $x=\frac12+\epsilon$, which then yields $H(H(x))$ $= H(\frac1{\sqrt{2}}+\sqrt{2}\epsilon)$ $=2\epsilon$, and then $H(H(\frac1{\sqrt{2}}+\sqrt{2}\epsilon))$ $= H(2\epsilon)$ $= 2\sqrt{2}\epsilon$, contradicting the requirement that $H(H(\frac1{\sqrt{2}}+\sqrt{2}\epsilon))$ $=h(\frac1{\sqrt{2}}+\sqrt{2}\epsilon)$ $=\sqrt{2}-1+2\sqrt{2}\epsilon$. This argument might be extendible to show there are no piecewise-continuous solutions outside a specific range, but that's well beyond my ken.</p>
|
481,017 | <blockquote>
<p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p>
</blockquote>
<p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p>
<p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p>
<p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
| mercio | 17,445 | <p>Michael's answer is fine, but here is a more in-depth analysis.</p>
<p>We want to look at the connected components of the graph whose vertices are the reals and where there is an arrow from $x$ to $y$ when $y = f^2(x) = x^2-2$.</p>
<p>It is clear that $f$ has to send connected components to connected components, so that given one component, either $f$ sends it onto itself (though it is not always possible), either $f$ goes back and forth with another component. Furthermore if two components are isomorphic (there is a bijective $\pi : X \to Y$ commuting with $f^2$), then we can always do so : pick $f_X = \pi$ and $f_Y = \pi^{-1} \circ f^2$.</p>
<p>If $|x|>2$, $x$ is among a component of the shape</p>
<p>$\begin{array}
& &\cdot & & \cdot & & \cdot & & \cdot & \\
&\downarrow & & \downarrow & & \downarrow & & \downarrow & \\
\cdots \rightarrow &\cdot &\rightarrow &\cdot &\rightarrow& \cdot& \rightarrow &\cdot \rightarrow &\cdots \end{array}$</p>
<p>where the top row has negative reals, the bottom row has positive reals, the limit on the left is $\pm 2$ and the limit on the right is $\pm \infty$.
There are uncountably many such components, so we can pair them up and define $f $ appropriately ($f$ can even be chosen continuous on $(-\infty ; -2) \cup (2 ; \infty)$)</p>
<p>For $|x| \le 2$, we have $f^2(2\cos a) = 2\cos(2a)$. </p>
<p>If $a/\pi$ is irrational, $2\cos a$'s component is made of all the $2\cos(2^ka + b\pi)$ with $k \in \Bbb Z$ and $b$ a dyadic rational, and takes the form of an infinite complete binary "tree" (it's not a tree because it's infinite both ways, and there is no root). Again, there are uncountably many such components, so using the axiom of choice, we can partition them into isomorphic pairs and define a square root $f$.</p>
<p>Next, we have the components with rational $a/\pi$. Those have cycles.
If $(f^2)^n(x) = x$, then $(f^2)^n(f(x)) = f((f^2)^n(x)) = f(x)$, so $n$-cycles of $f^2$ have to be sent to $n$-cycles of $f^2$. Moreover, if $n$ is even, you can't send an $n$-cycle to itself, so you necessarily need another $n$-cycle in another component.</p>
<p>There are two (non-isomorphic) components with a $1$-cycle,</p>
<p>$ \cdots \rightrightarrows 0 \rightarrow -2 \rightarrow 2 \\
\cdots \rightrightarrows 1 \rightarrow -1 $
(with infinite complete binary trees on the left)</p>
<p>The first one can't be mapped to itself, but we can pair those up by $0 \rightarrow 1 \rightarrow -2 \rightarrow -1 \leftrightarrow 2$</p>
<p>All the other cycle components are $n$-cycles with infinite complete binary trees attached to each vertex of the cycle. And even for odd-length cycles, you can't send the component to itself. In any case, you need to pair them up in order to define $f$.</p>
<p>It is quick to compute the numbers of cycles of length $k$ : they are all neatly embedded in $(\Bbb Z / (2^k \pm 1) \Bbb Z,\times)/\{\pm 1\}$ via the map $(x \mod m) \mapsto 2\cos(2x\pi/m)$. So we pick those two semigroups, remove every smaller-length cycle they contain, get the number of elements left, and divide by $k$ to get the number of cycles.</p>
<p>We get $1,2,3,6,9,18,30,\ldots$ cycles of respective length $2,3,4,5,6,7,8,\ldots$. Unfortunately, it seems we often get an odd number of cycles of a given length. Simply knowing there is only one $2$-cycle (between $2\cos{\frac{2\pi}5}$ and $2\cos{\frac{4\pi}5}$) shows that we can't define $f$ properly. </p>
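<p>(Added check.) The unique $2$-cycle between $2\cos\frac{2\pi}5$ and $2\cos\frac{4\pi}5$ is easy to confirm numerically via $f^2(2\cos a) = 2\cos 2a$:</p>

```python
import math

f2 = lambda x: x * x - 2                 # the map f∘f = x^2 - 2
u = 2 * math.cos(2 * math.pi / 5)
v = 2 * math.cos(4 * math.pi / 5)

# f^2 swaps u and v, so they form a genuine 2-cycle of distinct points
assert abs(f2(u) - v) < 1e-12 and abs(f2(v) - u) < 1e-12
assert abs(u - v) > 1
```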
|
1,814,474 | <p>Let's consider the series
$$\sum_{n=2}^\infty\frac{\ln\left(\frac{n+1}{n-1}\right)}{\sqrt{n}}$$</p>
<p>However, I have absolutely no clue how to try to continue. I could probably use the integral criterion and integrate the problem using the residue theorem, but that is too much of a hassle. Is there an easy way to prove the convergence of this series?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$</p>
<blockquote>
<p>$\underline{\mbox{Just one of the integrals you didn't evaluate}\ }$ :</p>
</blockquote>
<p>With $\ds{0 < \mu < 1}$:
\begin{align}
&\color{#f00}{%
\int_{0}^{1 - \mu}{\ln\pars{1 + x}\ln\pars{1 - x} \over 1 + x}\,\dd x} =
\half\,\ln^{2}\pars{2 - \mu}\ln\pars{\mu} +
\half\,\int_{0}^{1 - \mu}{\ln^{2}\pars{1 + x} \over 1 - x}\,\dd x
\\[3mm] = &\
\half\,\ln^{2}\pars{2 - \mu}\ln\pars{\mu} +
\half\,\int_{1}^{2 - \mu}{\ln^{2}\pars{x} \over 2 - x}\,\dd x
\\[3mm] = &\
\half\,\ln^{2}\pars{2 - \mu}\ln\pars{\mu} +
\half\,\int_{1/2}^{1 - \mu/2}{\ln^{2}\pars{2x} \over 1 - x}\,\dd x
\\[3mm] = &\
\half\,\ln^{2}\pars{2 - \mu}\ln\pars{\mu} -
\half\ln\pars{{\mu \over 2}}\ln^{2}\pars{2 - \mu} +
\int_{1/2}^{1 - \mu/2}{\ln\pars{1 - x} \over x}\,\ln\pars{2x}\,\dd x
\end{align}
<hr>
When $\ds{\mu \to 0^{+}}$:
\begin{align}
&\color{#f00}{%
\int_{0}^{1}{\ln\pars{1 + x}\ln\pars{1 - x} \over 1 + x}\,\dd x} =
\half\,\ln^{3}\pars{2} -\int_{1/2}^{1}\mathrm{Li}_{2}'\pars{x}\ln\pars{2x}
\,\dd x
\\[3mm] = &\
\half\,\ln^{3}\pars{2} - \mathrm{Li}_{2}\pars{1}\ln\pars{2} +
\int_{1/2}^{1}\mathrm{Li}_{3}'\pars{x}\,\dd x
\\[3mm] = &\
\color{#f00}{\half\,\ln^{3}\pars{2} - \mathrm{Li}_{2}\pars{1}\ln\pars{2} +
\mathrm{Li}_{3}\pars{1} - \mathrm{Li}_{3}\pars{\half}}
\end{align}
<hr>
$$
\mbox{With}\quad
\left\lbrace\begin{array}{rcl}
\ds{\mathrm{Li}_{2}\pars{1}} & \ds{=} & \ds{{\pi^{2} \over 6}}
\\[2mm]
\ds{\mathrm{Li}_{3}\pars{1}} & \ds{=} & \ds{\zeta\pars{3}}
\\[2mm]
\ds{\mathrm{Li}_{3}\pars{\half}} & \ds{=} &
\ds{{1 \over 24}\bracks{21\zeta\pars{3} + 4\ln^{3}\pars{2} - 2\pi^{2}\ln\pars{2}}}
\end{array}\right.
$$</p>
<p>we'll get
$$
\color{#f00}{%
\int_{0}^{1}{\ln\pars{1 + x}\ln\pars{1 - x} \over 1 + x}\,\dd x} =
\color{#f00}{{1 \over 24}\bracks{3\zeta\pars{3} + 8\ln^{3}\pars{2} -2\pi^{2}\ln\pars{2}}} \approx -0.3088
$$</p>
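<p>(Added numerical sanity check, not part of the original derivation.) The polylogarithm expression and the stated closed form can be compared with plain-Python series for $\zeta\pars{3}$ and $\mathrm{Li}_{3}\pars{\half}$:</p>

```python
import math

zeta3 = sum(1 / k**3 for k in range(1, 200_000))          # zeta(3) ~ 1.2020569
li3_half = sum(1 / (2**k * k**3) for k in range(1, 200))  # Li_3(1/2) ~ 0.5372132
ln2, pi = math.log(2), math.pi

# value via the polylog expression derived above
val_polylog = 0.5 * ln2**3 - (pi**2 / 6) * ln2 + zeta3 - li3_half
# value via the final closed form
val_closed = (3 * zeta3 + 8 * ln2**3 - 2 * pi**2 * ln2) / 24

assert abs(val_polylog - val_closed) < 1e-6
assert abs(val_polylog - (-0.308825)) < 1e-4   # the integral's numeric value
```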
|
3,687,355 | <p>I have trouble with how to find eigenvectors when you have a complex eigenvalue</p>
<p>For example the matrix
<span class="math-container">$$ \begin{pmatrix}
0 & 1\\
-2 & -2
\end{pmatrix}$$</span></p>
<p>Here you get the eigenvalues $-1 \pm i$.</p>
<p>Where do i go from here to find a eigenvector. The solution says it should be the 2x1 matrix:
<span class="math-container">$$\begin{pmatrix}
1\pm i \\
-2
\end{pmatrix}$$</span></p>
| Minus One-Twelfth | 643,882 | <p><span class="math-container">$\newcommand{\x}{\mathbf{x}}$</span>You use the same procedure to find the eigenvectors as in the real case, just now you may have to use complex number arithmetic.</p>
<p>Let <span class="math-container">$A = \begin{pmatrix}
0 & 1\\
-2 & -2
\end{pmatrix}.$</span> I'll get you started on how to find the eigenvectors for the eigenvalue <span class="math-container">$-1+i$</span>. As usual, these are found by solving the equation <span class="math-container">$(A - (-1+i)I)\x = \mathbf{0}$</span>. Thus we solve the linear system </p>
<p><span class="math-container">$$\left(\begin{array}{cc|c}0-(-1+i) & 1 & 0 \\ -2 & -2-(-1+i) & 0\end{array}\right),$$</span>
that is,
<span class="math-container">$$\left(\begin{array}{cc|c}1-i& 1 & 0 \\ -2 & -1-i& 0\end{array}\right).$$</span></p>
<p>Can you solve this (if not, the first step would be to practise complex number arithmetic and row reductions with complex numbers)?</p>
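<p>(Added sanity check.) One can verify directly with complex arithmetic that $\begin{pmatrix}1+i\\-2\end{pmatrix}$ is an eigenvector of $A$ for <span class="math-container">$\lambda = -1+i$</span>:</p>

```python
# check A v = lam * v for A = [[0, 1], [-2, -2]], lam = -1+i, v = (1+i, -2)
A = [[0, 1], [-2, -2]]
lam = complex(-1, 1)
v = [complex(1, 1), complex(-2, 0)]

Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))

# lam is a root of the characteristic polynomial  lam^2 + 2*lam + 2
assert abs(lam * lam + 2 * lam + 2) < 1e-12
```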
|
872,893 | <p>How can I prove that the equation $r=16^r$ has no solution for any real value of $r$?
I have tried:</p>
<p>\begin{align}
&r=16^r &&\implies \log_r r = \log_r 16^r \\
&&& \implies 1 = r\log_r 16 \\
&&& \implies 1/r = \log_r 16 \\
&&& \implies r^{1/r} = r^{\log_r 16}\\
&&& \implies \sqrt[r]{r} = 16
\end{align}
I am stuck here.</p>
| Community | -1 | <p>I'm not too good with number theory, so I went about this in a different way and took a calculus-based approach.</p>
<p>Consider the function $f:(0,\infty) \to \mathbb{R}$ defined by $f(x) = x^{1/x}$. Then $$f'(x) = -x^{\frac{1}{x} - 2}(ln(x) - 1).$$</p>
<p>Then $f'(x) > 0$ for $x \in (0, e)$ and $f'(x) < 0$ for $x \in (e, \infty)$. Thus $f$ attains a global maximum at $x = e$. </p>
<p>So for all $x$, $$f(x) \leq f(e) = e^{1/e} \approx 1.445 < 16.$$ Thus, there exists no $r$ s.t. $r^{1/r} = 16$.</p>
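<p>(Added numerical check of the bound.) $x^{1/x}$ never exceeds its peak value $e^{1/e} \approx 1.445$, which is far below $16$:</p>

```python
import math

f = lambda x: x ** (1 / x)
peak = math.e ** (1 / math.e)          # global maximum value, ~ 1.4447

# f never exceeds the peak, and the peak is far below 16
for x in [0.5, 1.0, 2.0, math.e, 4.0, 10.0, 100.0, 1e6]:
    assert f(x) <= peak + 1e-12
assert peak < 16
```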
|
872,893 | <p>How can I prove that the equation $r=16^r$ has no solution for any real value of $r$?
I have tried:</p>
<p>\begin{align}
&r=16^r &&\implies \log_r r = \log_r 16^r \\
&&& \implies 1 = r\log_r 16 \\
&&& \implies 1/r = \log_r 16 \\
&&& \implies r^{1/r} = r^{\log_r 16}\\
&&& \implies \sqrt[r]{r} = 16
\end{align}
I am stuck here.</p>
| lemon | 89,931 | <p>$16^{r}>0$ for all r and so, for $r=16^{r}$ to hold, you must have $r>0$. </p>
<p>Then observe from the Taylor expansion:
$$ 16^r-r=1+\underbrace{r(\log (16)-1)}_{>0}+\sum_{k=2}^\infty \underbrace{\frac{\log^k(16)}{k!}r^k}_{>0} > 0 $$
for all $r>0$.</p>
|
2,145,412 | <p>If you have a vector line m with equation $r = i + j + 2k + s( 3i + j - k)$
how do you find the normal?</p>
<p>If you presume that normal is $n = ( n i + o j + p k )$ then</p>
<p>$$3n + o -p = 0 $$</p>
<p>but this leaves you with too many options $\left \lbrace (0,0,0), (1,0,3), (0,1,1), ....\right \rbrace$ is it possible to find <em>the</em> normal?</p>
<p>(This is asked in relation to when I am trying to find the equation of a plane and have a line m and a point A, if I can find the normal to line $m$ I can use $n \cdot a=n \cdot r.$)</p>
| Joffan | 206,402 | <p>Computing the factor sum $\sigma_1(N)$ is quickly done from the prime decomposition of $N$. </p>
<p>Suppose $N$ has a number of prime factors $p_1^{a_1}p_2^{a_2}\ldots$. For the purpose of illustration take $a_1=3$. Then for any divisor $d$ of $N$ that is coprime to $p_1$, $N$ is also divisible by $p_1d$, $p_1^2d$, and $p_1^3d$. Also we know that $d$ must divide $N/p_1^3$. So here the factor sum of $N$ can be decomposed as $\sigma_1(N) = \sigma_1(N/p_1^3)(1+p_1+p_1^2+p_1^3)$ and since $(1+p_1+p_1^2+p_1^3) = \sigma_1(p_1^3)$ we can write this as $\sigma_1(N) = \sigma_1(N/p_1^3)\sigma_1(p_1^3)$.</p>
<p>So we can decompose $N$ into primes and for each prime power use the sum of powers formula. See how that works for $N=72=2^33^2$; we should find that $\sigma_1(N) = \sigma_1(2^3)\sigma_1(3^2) = (1+2+4+8)(1+3+9) = 15\cdot 13 = 195$. Laying out a table of factors of $72$ we can see this multiplication effect at work:</p>
<p>\begin{array}{c}
1 & 2 & 4 & 8 \\
3 & 6 & 12 & 24 \\
9 & 18 & 36 & 72 \\
\end{array}</p>
<p>Now when we're looking for perfect numbers, we're looking for the case when $\frac{\sigma_1(N)}{N}=2$, so it makes sense to consider this ratio. This can also obviously be decomposed into distinct prime components $\frac{\sigma_1(p_i^{a_i})}{p_i^{a_i}}$, which gives many of the results concerning perfect numbers, since the size of this value is tightly constrained between $\frac{p_i+1}{p_i}$ and $\frac{p_i}{p_i-1}$.</p>
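<p>(Added illustration; <code>sigma1</code> is a hypothetical helper name.) The multiplicative computation above can be sketched in a few lines:</p>

```python
def sigma1(n):
    """Sum of divisors of n via its prime factorization:
    the product of (p^(a+1) - 1)/(p - 1) over prime powers p^a dividing n."""
    total, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            total *= (p ** (a + 1) - 1) // (p - 1)
        p += 1
    if n > 1:                    # leftover prime factor
        total *= n + 1
    return total

assert sigma1(72) == 195                       # 15 * 13, as in the table
assert sigma1(6) == 12 and sigma1(28) == 56    # perfect: sigma1(N) = 2N
```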
|
1,314,802 | <p>True.</p>
<p>Since $f$ is continuous (because every uniformly continuous function is continuous), we can consider:</p>
<p>$$ f\left(\lim_{n\to\infty} \frac{1}{n}\right) $$</p>
<p>Since $ \lim_{n\to\infty} \frac{1}{n} $ is bounded in $ (0,1] $ and $ I \subset (0,1] $, we have by hypothesis $f$ uniformly continuous then</p>
<p>$$ \lim_{n\to\infty} f\left(\frac{1}{n}\right) \text{, exists.} $$</p>
| Bernard | 202,857 | <p>$$\frac{1-\cos x}x=\frac x2+o(x)$$
hence
$$\ln\Bigl(1+\frac{1-\cos x}x\Bigr)^{\!\tfrac 1x}=\frac1x\ln\Bigl(1+\frac x2+o(x)\Bigr)=\frac1x\Bigl(\frac x2+o(x)\Bigr)=\frac12+o(1)$$
so that the limit is $\,\mathrm e^{1/2}$.</p>
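<p>(Added numerical check of the limit.) Evaluating the expression at small $x$ indeed approaches $\mathrm e^{1/2}$:</p>

```python
import math

f = lambda x: (1 + (1 - math.cos(x)) / x) ** (1 / x)

# as x -> 0+, f(x) approaches sqrt(e) ~ 1.6487
assert abs(f(1e-4) - math.sqrt(math.e)) < 1e-3
assert abs(f(1e-5) - math.sqrt(math.e)) < 1e-4
```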
|
1,301,763 | <p>The only problem I have with this is knowing when you are solving for dx or dy. For example, this question which says</p>
<p>find the volume of the solid created by rotating the region bounded by y = 2x-4, y = 0, and x = 3, about the line x = 4. This one is dy. Instead of y = 2x-4, the person solved with respect to x and got x = (y+4)/2. </p>
<p>And then there is this one, similar to the above but about the line y = -3. This one is dx. I don't understand why. Help appreciated. </p>
| Peter | 82,961 | <p>If the piece of the function rotates around the $x$-axis, you use $dx$</p>
<p>If it rotates around the $y$-axis, you use the inverse function and calculate
the values of the function to get the limits of the integral.</p>
|
1,144,488 | <p>I was assigned the following differential equation with initial value condition:</p>
<p>$$dy/dx=xy^{1/2}, y(2)=1.$$</p>
<p>Then I found out that the following functions, $y(x)=x^4/16$ and $(x^2-8)^2/16$, are solutions to this equation.</p>
<p>Why does this not contradict the uniqueness part of the Picard-Lindelöf theorem?</p>
<p>The statement of the theorem which I'm using is the following: <a href="http://www.math.byu.edu/~grant/courses/m634/f99/lec4.pdf" rel="nofollow">http://www.math.byu.edu/~grant/courses/m634/f99/lec4.pdf</a></p>
<p>Is this formulation wrong?</p>
| Thomas Andrews | 7,933 | <p>$y(x)=(x^2-8)^2/16$ does not have the property that $y^{1/2}=(x^2-8)/4$, because that is a negative value when $x$ is near $2$.</p>
<p>So $y'=x(x^2-8)/4 = -xy^{1/2}$ when $x\in (0,2\sqrt{2})$, for example.</p>
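<p>(Added check.) The sign mismatch is easy to see numerically: on $(0,2\sqrt2)$ the derivative of $(x^2-8)^2/16$ has the opposite sign to $xy^{1/2}$.</p>

```python
import math

y   = lambda x: (x**2 - 8) ** 2 / 16
dy  = lambda x: x * (x**2 - 8) / 4          # derivative of y
rhs = lambda x: x * math.sqrt(y(x))         # x * y^{1/2}

for x in [0.5, 1.0, 2.0, 2.5]:
    assert dy(x) < 0 < rhs(x)               # opposite signs: not a solution
    assert abs(dy(x) + rhs(x)) < 1e-12      # in fact dy = -x*sqrt(y) here
```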
|
1,144,488 | <p>I was assigned the following differential equation with initial value condition:</p>
<p>$$dy/dx=xy^{1/2}, y(2)=1.$$</p>
<p>Then I found out that the following functions, $y(x)=x^4/16$ and $(x^2-8)^2/16$, are solutions to this equation.</p>
<p>Why does this not contradict the uniqueness part of the Picard-Lindelöf theorem?</p>
<p>The statement of the theorem which I'm using is the following: <a href="http://www.math.byu.edu/~grant/courses/m634/f99/lec4.pdf" rel="nofollow">http://www.math.byu.edu/~grant/courses/m634/f99/lec4.pdf</a></p>
<p>Is this formulation wrong?</p>
| Gonçalo Morais | 226,681 | <p>Or in other words, $y(x)=(x^2-8)^2/16$ is not a solution of the differential equation. Indeed it would be the solution that corresponds to the initial condition $y(0)=4$. However, by the slope field you see that this solution cannot descend to pass through the point $(2,1)$.<img src="https://i.stack.imgur.com/OtB7B.png" alt="The slope field for the equation $y'=x\sqrt{y}$"> </p>
|
2,136,183 | <p>Find integers x,y such that the repeating decimal 0.712341234.... = x/y.</p>
<p>I would actually do this problem if the 7 was not there. If the 7 was not there, my proof would be as follows.</p>
<p>proof:</p>
<p>Let z = 0.12341234...</p>
<p>Then 10^4z = 1234.1234</p>
<p>10^4z - z = 1234</p>
<p>z = 1234/(10^4 - 1)</p>
<p>x = 1234, y = 10^4-1</p>
<p>So my question is, how would this change when there is a random number thrown in there that is not part of the repeating decimal?</p>
<p>Edit: Proof after hints given</p>
<p>Let Let z = 0.712341234...</p>
<p>Then 10z = 7.1234...</p>
<p>10z - 7 = 0.1234...</p>
<p>10^4*(10z - 7) = 1234.1234...</p>
<p>10^4*(10z-7) - (10z-7) = 1234</p>
<p>(10z-7) * (10^4 - 1) = 1234</p>
<p>(10z-7) = 1234/(10^4 - 1)</p>
<p>10z = 1234/(10^4-1) + 7</p>
<p>z = (1234/(10^4-1) + 7)/10</p>
<p>x = 1234/(10^4-1) + 7, y = 10</p>
<p>I mean this does give me the correct answer, but x isn't exactly an integer.</p>
| celtschk | 34,930 | <p>We have
\begin{align}
0.7\overline{1234}
&= \frac{7.\overline{1234}}{10}\\
&= \frac{1}{10}\left(7+0.\overline{1234}\right)\\
&= \frac{1}{10}\left(7+\frac{1234}{9999}\right)\\
&= \frac{1}{10}\left(\frac{7\cdot 9999}{9999} + \frac{1234}{9999}\right)\\
&= \frac{7\cdot 9999 + 1234}{10\cdot 9999}\\
&= \frac{71227}{99990}
\end{align}
Therefore $x=71227$ and $y=99990$.</p>
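<p>(Added check with exact rational arithmetic.) The fraction reproduces the repeating decimal and is already in lowest terms:</p>

```python
from fractions import Fraction

x, y = 7 * 9999 + 1234, 10 * 9999
assert (x, y) == (71227, 99990)

# the decimal expansion of 71227/99990 begins 0.7123412341234...
digits = str(x * 10**13 // y)
assert digits == "7123412341234"

# gcd(71227, 99990) = 1, so the fraction reduces no further
assert Fraction(x, y).denominator == 99990
```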
|
2,797,419 | <p>Calculating the moment of inertia is fairly simple; however, how would I proceed to calculate it about an arbitrary axis?
The question asks for the moment of inertia of $C=\{(x,y,z)|0\leq x,0\leq y, 0 \leq z,x+y+z\leq 1\}$, so, if I'm not wrong about the boundaries, the moment of inertia about the "usual" Z axis would be:
$$I_z=\int_{0}^{1}\int_{0}^{1-x}\int_{0}^{1-x-y}\left(x^2+y^2\right)\,dz\,dy\,dx$$
<p>But, what about an arbitrary axis? The question actually asks the moment about the axis $\{(t,t,t)|t\in \mathbb R\}$, but, this is more about the general concept than about the question itself.
Any directions would be very welcome.</p>
| G Cab | 317,234 | <p>a) Also the variable $z$ shall be limited, presumably $0 \le z$, otherwise the solid is infinite in that direction. </p>
<p>b) Assuming the above, then $C=\{(x,y,z)|0\leq x,0\leq y,0 \le z,x+y+z\leq 1\}$ is a right Tetrahedron,
with edges on the axes and on the diagonal plane $x+y+z=1$.</p>
<p>c) The axis $(t,t,t)$ is a symmetry axis of the Tetrahedron, passing through the vertex at the origin
and the baricenter of the equilateral triangular face on the diagonal plane.<br>
So it is easy to calculate the moment around that axis as the integral of equilateral triangular slices,
parallel to the diagonal plane.</p>
<p>d) The solution for the moment around a generic axis involves the Inertia Matrix, do you know that ?</p>
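<p>(Added illustration of point (d), under the assumption of unit density.) The squared distance from a point $\mathbf r$ to the axis $(t,t,t)$ is $|\mathbf r|^2 - (\mathbf r\cdot\mathbf n)^2$ with $\mathbf n = (1,1,1)/\sqrt 3$, and a Monte Carlo estimate agrees with the exact moment $1/60$ obtained from the simplex moment formula $\int_C x^a y^b z^c\,dV = \frac{a!\,b!\,c!}{(a+b+c+3)!}$:</p>

```python
import random

random.seed(12345)
N, acc = 400_000, 0.0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    if x + y + z <= 1:                        # inside the tetrahedron
        s = x + y + z
        acc += (x*x + y*y + z*z) - s*s / 3    # squared distance to the axis
I_mc = acc / N                                # sampling cube has volume 1

assert abs(I_mc - 1/60) < 2e-3                # exact moment is 1/60 ~ 0.01667
```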
|
1,607,108 | <p>Using the ratio test:
$$\frac{1}{3}\lim\limits_{n\to\infty}\left|\frac{(n+1)^3(\sqrt{2}+(-1)^{n+1})^{n+1}}{n^3(\sqrt{2}+(-1)^n)^n}\right|$$</p>
<p>Without evaluating the limit, the numerator is greater than the denominator and the series is divergent. Is there an easier method for checking the convergence of this series?</p>
| Bernard | 202,857 | <p>If $x$ is even, $x^2-1\not\equiv 0\mod 2$. Hence from the hypothesis you know $x$ is odd. Furthermore, the square of any odd integer is congruent to $1$ modulo $8$.</p>
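<p>(Added spot-check of both facts.)</p>

```python
# if x is even, x^2 - 1 is odd, so x^2 - 1 is not congruent to 0 (mod 2)
assert all((x * x - 1) % 2 == 1 for x in range(0, 200, 2))

# the square of any odd integer is congruent to 1 (mod 8)
assert all((x * x) % 8 == 1 for x in range(1, 200, 2))
```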
|
2,822,176 | <p>I'm solving a list of exercises on double integrals, and they normally give a range for $x$ and $y$, but in this case it says that $y = x^2$ and $y = 4$, so I thought that $x$ would be $\sqrt{y}$, but my answer was wrong. The answer should be 25.60.</p>
<p>A thin metal plate occupies the shaded region shown in the figure below.</p>
<p><a href="https://i.stack.imgur.com/VQv2u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQv2u.png" alt="graph"></a></p>
<p>The region is bounded by the graphs of $y = x^2$ and $y = 4$, where $x$ and $y$ are measured in centimeters. If the surface density of the plate, in $g/cm^2$, is $p(x,y) = y$, its mass, in grams, will be:</p>
| Tony Hellmuth | 564,837 | <p>Well there are 2 approaches.</p>
<p>Firstly see the integral you want to compute is simply a box minus the area underneath the curve $y=x^2$. That is easily done by finding the intersection points.</p>
<p>OR one may flip the curve $y=x^2$ and then simply find the area under the new curve $y=-x^2+4$ and the $x$-axis.</p>
<p>The mass is then calculated by the answer from <strong>A.Γ.</strong></p>
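<p>(Added computation sketch, using exact rational arithmetic.) The inner integral is $\int_{x^2}^{4} y\,dy = 8 - x^4/2$, and integrating term by term over $-2 \le x \le 2$ gives $128/5 = 25.6$ g:</p>

```python
from fractions import Fraction

def int_power(k, a, b):
    """Exact value of the integral of x^k from a to b."""
    return (Fraction(b) ** (k + 1) - Fraction(a) ** (k + 1)) / (k + 1)

# mass = integral over [-2, 2] of (8 - x^4/2) dx
mass = 8 * int_power(0, -2, 2) - Fraction(1, 2) * int_power(4, -2, 2)
assert mass == Fraction(128, 5)        # = 25.6 g
```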
|
1,413,955 | <p>I get confused when I put the following three notes together:</p>
<ol>
<li>Power set of any set is a $\sigma$-algebra.</li>
<li>If $X$ is a set and $\Sigma$ is a $\sigma$-algebra over $X$, then the pair $(X, \Sigma)$ is a measurable space.</li>
<li><a href="https://en.wikipedia.org/wiki/Vitali_set" rel="nofollow">Vitali</a> set is known as a counterexample that there is no measure on all the subsets of $\mathbb{R}$.</li>
</ol>
<p>By (1) and (2), one may think that $(\mathbb{R}, \mathcal{P}(\mathbb{R}))$ is a measurable space, which intuitively concludes that there must be a measure on all the subsets of $\mathbb{R}$. However, (3) says the opposite. Can anyone help me understand what is going on?</p>
| Zev Chonoles | 264 | <p>It is an unfortunate choice of terminology.</p>
<ul>
<li><p>The term <strong><em>measureable space</em></strong>, taken as a single unit, is <strong>defined</strong> to mean a set $X$ with a chosen sigma-algebra $\Sigma$. There is no specific choice of measure involved, and saying that $(X,\Sigma)$ is a measurable space is <strong>not intended</strong> to claim that it is possible to define a measure with that set and that sigma-algebra (even though, as it turns out, it is always possible: <a href="https://en.wikipedia.org/wiki/Dirac_measure">link</a>).<br><br>Therefore it is absolutely correct to say that $(\mathbb{R},\mathcal{P}(\mathbb{R}))$ is a "measurable space", but you should not take that as saying anything about a specific measure, or even whether it is possible to define a measure, despite the typical semantics of the word "measurable".</p></li>
<li><p>When we say that the Vitali set $V$ is "non-measurable", that is (implicitly) taken to mean with respect to the Lebesgue measure on the set $\mathbb{R}$ with the Lebesgue sigma-algebra $\mathscr{L}$. In other words, it is a claim that $V\notin \mathscr{L}$.<br><br> The Vitali set is <strong>not</strong> a counterexample to what you said, because it <strong>is possible</strong> to define a measure on the set $\mathbb{R}$ with the sigma-algebra $\mathcal{P}(\mathbb{R})$, as I pointed out earlier (<a href="https://en.wikipedia.org/wiki/Dirac_measure">link</a>). <br><br> The <strong>correct</strong> statement is that the Vitali set shows that one cannot define a measure $\mu$ on $(\mathbb{R},\mathcal{P}(\mathbb{R}))$ <strong>that also satsifies certain desired properties</strong> (such as $\mu((a,b))=b-a$, among others.) It is <strong>absolutely possible</strong> to define a measure on $(\mathbb{R},\mathcal{P}(\mathbb{R}))$ that does <strong>not</strong> have those nice properties.</p></li>
</ul>
|
2,118,230 | <blockquote>
<p>Let $G$ be a group and let $H,K \le G$ be subgroups of finite index, say $|G:H| = m$ and $|G:K|=n$. Prove that $\mathrm{lcm}(m,n) \le |G:H \cap K| \le mn$. </p>
</blockquote>
<p>I was able to establish the upper bound, but I am having difficulty establishing the lower bound, so I consulted <a href="https://crazyproject.wordpress.com/2010/03/25/bounds-on-the-index-of-an-intersection-of-two-subgroups/" rel="nofollow noreferrer">this</a>. However, I am having trouble following the author's reasoning. Here is the relevant part I am referring to:</p>
<blockquote>
<p>Now...we have $H \cap K \le H \le G$. Thus, $m$ divides $|G:H \cap K|$ and $n$ divides $|G:H \cap K|$, so that $\mathrm{lcm}(m,n)$ divides $|G:H \cap K|$.</p>
</blockquote>
<p>Exactly what theorem is being used to make this conclusion?</p>
| Cross Ratio | 409,170 | <p>If $ \prod p^{m_p} $ and $ \prod p^{n_p} $ are the prime factorizations of $ m $ and $ n $, then their LCM is $ w = \prod p^{max({m_p}, {n_p})} $.
(<a href="https://en.wikipedia.org/wiki/Least_common_multiple#Fundamental_theorem_of_arithmetic" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Least_common_multiple#Fundamental_theorem_of_arithmetic</a>). </p>
<p>As $ p^{max({m_p}, {n_p})} | [G: H \cap K]$ for each $p$ and as such factors of $w$ are relatively prime, $w | [G: H \cap K]$.</p>
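<p>(Added spot-check of the divisibility fact being used: if $m \mid t$ and $n \mid t$, then $\mathrm{lcm}(m,n) \mid t$.)</p>

```python
from math import gcd

def lcm(m, n):
    return m * n // gcd(m, n)

# whenever t is a common multiple of m and n, lcm(m, n) divides t
for m, n, t in [(4, 6, 24), (4, 6, 36), (10, 15, 90), (8, 12, 120)]:
    assert t % m == 0 and t % n == 0
    assert t % lcm(m, n) == 0
```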
|
588,228 | <p>A,B,C are distinct digits of a three digit number such that </p>
<pre><code> A B C
B A C
+ C A B
____________
A B B C
</code></pre>
<p>Find the value of A+B+C.</p>
<p>a) 16 b) 17 c) 18 d) 19</p>
<p>I tried it out by using the digits 16 17 18 19 by breaking them in three numbers but due to so large number of ways of breaking I cannot help my cause.</p>
| Brian M. Scott | 12,042 | <p>The largest possible carry from a sum of three digits is $2$, so from the righthand column we see that $2C+B$ is $C$, $10+C$, or $20+C$, and therefore $C+B$ is $0$, $10$, or $20$. The sum of two distinct digits cannot be $0$ or $20$, so $C+B=10$, and there is a carry of $1$ to the middle column.</p>
<p>From the middle column we then see that $B+2A+1$ is $B$, $10+B$, or $20+B$, so that $2A+1$ is $0$, $10$, or $20$. But all of these are impossible since $2A+1$ is clearly odd. Thus, the problem has no solution.</p>
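<p>(Added check.) A brute-force search over all distinct digit triples confirms that no solution exists:</p>

```python
solutions = []
for A in range(1, 10):                 # leading digit of ABC and ABBC
    for B in range(10):
        for C in range(10):
            if len({A, B, C}) != 3:    # digits must be distinct
                continue
            total = (100*A + 10*B + C) + (100*B + 10*A + C) + (100*C + 10*A + B)
            if total == 1000*A + 110*B + C:
                solutions.append((A, B, C))
assert solutions == []                 # the puzzle has no solution
```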
|
1,937,515 | <p>If a function is analytic on $B(0,1)$ and continuous as well as real valued on the boundary (i.e. $|z| = 1$),then show that the function is constant.</p>
| H. H. Rugh | 355,946 | <p>You may also use Schwarz' reflection principle and define for $|z|>1$:
$$ f(z)= \overline{f\left(\frac{1}{\overline{z}} \right)} $$
You may show that this extends $f$ to a uniformly bounded holomorphic function and apply Liouville.</p>
|
3,622,574 | <blockquote>
<p><span class="math-container">$y''-y=\sin2t+1, y(0) = 1, y'(0) = -1$</span></p>
</blockquote>
<p>Hey all, I am not sure if I am completing all the steps correctly to solve this problem via convolution. I am pretty sure I am not doing it correctly, but am not sure what is going wrong. </p>
<p>First I apply Laplace transforms to everything. <span class="math-container">$$\mathcal{L}[y''] - \mathcal{L}[y] = \mathcal{L}[\sin2t] + \mathcal{L}[1]$$</span></p>
<p>Plugging in initial values and doing algebra gives me <span class="math-container">$$Y(s) = \left(\frac{1}{s^2+4} + \frac{1}{s} \right) \left(\frac{1}{s+1} \right)$$</span></p>
<p>Now I let </p>
<p><span class="math-container">\begin{align*}
\mathcal{L}[f](s) &= \left(\frac{1}{s^2+4} + \frac{1}{s} \right) \\
\mathcal{L}[g](s) &= \left(\frac{1}{s+1} \right)
\end{align*}</span></p>
<p>I get <span class="math-container">$f(t) = \sin2t+1$</span> and <span class="math-container">$g(t) = e^{-t}$</span> via inverse laplace transform.</p>
<p>Now that I have <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, I want to do a convolution <span class="math-container">$f*g = \int_{0}^{t} f(x)g(t-x)dx$</span> , but I am unable to solve the integral so I think I have done something wrong before getting to this step.</p>
| Mnifldz | 210,719 | <p>You computed your initial Laplace transform incorrectly. Notice that </p>
<p><span class="math-container">\begin{eqnarray*}
\mathcal{L}[y''] - \mathcal{L}[y] & = & s^2\mathcal{L}[y] - sy(0) - y'(0) - \mathcal{L}[y] \\
& = & (s^2-1)\mathcal{L}[y] - s\cdot 1 - (-1) \;\; =\;\; (s^2-1)\mathcal{L}[y] - s + 1.
\end{eqnarray*}</span></p>
<p>You divided by the wrong term on the right hand side. You should've obtained</p>
<p><span class="math-container">$$
Y(s) \;\; =\;\; \left (\frac{1}{s^2+4} + \frac{1}{s}\right )\left (\frac{1}{s^2 - 1}\right )
$$</span></p>
<p>hence <span class="math-container">$f(t) = \frac{1}{2}\sin(2t) + 1$</span>. Because <span class="math-container">$\frac{1}{s^2-1} = \frac{1}{(s+1)(s-1)} = \frac{1}{2(s+1)} - \frac{1}{2(s-1)}$</span>, then we find that <span class="math-container">$g(t) = \frac{1}{2}e^{-t} - \frac{1}{2}e^t$</span>.</p>
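As an aside, the transform of the forcing term can be double-checked numerically; here is a rough sketch approximating the defining integral <span class="math-container">$\int_0^\infty e^{-st}\sin(2t)\,dt$</span> by a left Riemann sum truncated at <span class="math-container">$t=20$</span>:

```python
import math

# Approximate L[sin 2t](s) at s = 3 by a truncated left Riemann sum of
# the defining integral; the tail beyond t = 20 is negligible here.
s = 3.0
dt = 1e-4
approx = sum(math.exp(-s * i * dt) * math.sin(2 * i * dt) * dt
             for i in range(200_000))

# The sum agrees with 2/(s^2 + 4), confirming L[sin 2t] = 2/(s^2 + 4).
assert abs(approx - 2 / (s ** 2 + 4)) < 1e-3
```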
|
3,622,574 | <blockquote>
<p><span class="math-container">$y''-y=\sin2t+1, y(0) = 1, y'(0) = -1$</span></p>
</blockquote>
<p>Hey all, I am not sure if I am completing all the steps correctly to solve this problem via convolution. I am pretty sure I am not doing it correctly, but am not sure what is going wrong. </p>
<p>First I apply Laplace transforms to everything. <span class="math-container">$$\mathcal{L}[y''] - \mathcal{L}[y] = \mathcal{L}[\sin2t] + \mathcal{L}[1]$$</span></p>
<p>Plugging in initial values and doing algebra gives me <span class="math-container">$$Y(s) = \left(\frac{1}{s^2+4} + \frac{1}{s} \right) \left(\frac{1}{s+1} \right)$$</span></p>
<p>Now I let </p>
<p><span class="math-container">\begin{align*}
\mathcal{L}[f](s) &= \left(\frac{1}{s^2+4} + \frac{1}{s} \right) \\
\mathcal{L}[g](s) &= \left(\frac{1}{s+1} \right)
\end{align*}</span></p>
<p>I get <span class="math-container">$f(t) = \sin2t+1$</span> and <span class="math-container">$g(t) = e^{-t}$</span> via the inverse Laplace transform.</p>
<p>Now that I have <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, I want to do a convolution <span class="math-container">$f*g = \int_{0}^{t} f(x)g(t-x)dx$</span> , but I am unable to solve the integral so I think I have done something wrong before getting to this step.</p>
| user577215664 | 475,762 | <p><span class="math-container">$$y''-y=\sin2t+1, y(0) = 1, y'(0) = -1$$</span>
I got this:
<span class="math-container">$$Y(s) = \left(\frac{2}{s^2+4} + \frac{1}{s} \right) \left(\frac{1}{s^2-1} \right)+\dfrac 1 {s+1}$$</span>
Which is decomposed as:
<span class="math-container">$$Y(s) = -\frac{2}{5(s^2+4)} + \frac 1{5(s-1)} +\frac{s}{s^2-1} - \frac{1}{s} +\dfrac 4 {5(s+1)}$$</span>
<span class="math-container">$$y(t)=-1-\frac 15 \sin(2t)+\frac 15e^t+\cosh(t)+\frac 45e^{-t}$$</span>
Finally:
<span class="math-container">$$\boxed {y(t)=-1-\frac 15 \sin(2t)+\frac {7}{10}e^t+\frac {13}{10}e^{-t}}$$</span></p>
<hr>
<p><strong>Edit:</strong>
I didn't see that the OP needs to use the convolution integral:
<span class="math-container">$$Y(s) = \left(\frac{2}{s^2+4} + \frac{1}{s} \right) \left(\frac{1}{s^2-1} \right)+\dfrac 1 {s+1}$$</span>
<span class="math-container">$$y(t) = \int_0^t \sinh (\tau)\sin(2(t-\tau))d\tau +\int_0^t \sinh (t-\tau)d\tau +e^{-t}$$</span>
<span class="math-container">$$y(t) = \int_0^t \sinh (\tau)\sin(2(t-\tau))d\tau +\cosh(t)-1 +e^{-t}$$</span>
Integrate by parts to evaluate the first integral.
<span class="math-container">$$I=\int_0^t \sinh (\tau)\sin(2(t-\tau))d\tau$$</span>
<span class="math-container">$$I=\frac 25\sinh t -\frac 15\sin(2 t)$$</span>
Finally :
<span class="math-container">$$ y(t) = \frac 25\sinh t -\frac 15\sin (2t) +\cosh(t)-1 +e^{-t}$$</span>
<span class="math-container">$$\boxed {y(t)=-1-\frac 15 \sin(2t)+\frac {7}{10}e^t+\frac {13}{10}e^{-t}}$$</span></p>
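A quick numerical sanity check of the boxed solution (a sketch; the second derivative below is computed by hand from <span class="math-container">$y$</span>):

```python
import math

def y(t):
    # boxed solution: y = -1 - (1/5) sin 2t + (7/10) e^t + (13/10) e^{-t}
    return -1 - 0.2 * math.sin(2 * t) + 0.7 * math.exp(t) + 1.3 * math.exp(-t)

def y2(t):
    # second derivative of y, computed by hand
    return 0.8 * math.sin(2 * t) + 0.7 * math.exp(t) + 1.3 * math.exp(-t)

# y'' - y = sin 2t + 1 at several sample points
for t in (0.0, 0.3, 1.0, 2.5):
    assert abs((y2(t) - y(t)) - (math.sin(2 * t) + 1)) < 1e-12

# initial conditions: y(0) = 1, y'(0) = -1 (central difference for y')
h = 1e-6
assert abs(y(0.0) - 1) < 1e-12
assert abs((y(h) - y(-h)) / (2 * h) + 1) < 1e-6
```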
|
2,087,199 | <p>Ok, so my question is: how do you proceed when you need to find the matrix representation of a linear transformation if one of the bases belongs to a lower-dimensional space $\Bbb R^n$ than the other?</p>
<p>For example:
Find the matrix representation of the linear transformation $f(x,y,z,t)=(x+y,z-x,t+y)$ from $\Bbb R^4$ to $\Bbb R^3$ with respect to the following bases:</p>
<ul>
<li>$\{(0,0,0,1),(0,1,0,1),(0,0,1,0),(1,0,1,0)\}$ of $\Bbb R^4$</li>
<li>$\{(1,0,0),(0,1,0),(0,0,1)\}$ of $\Bbb R^3$</li>
</ul>
<p>In the $\Bbb R^4$ basis all I have to do is find $f(x,y,z,t)$ using each of the vectors and I get the matrix $\{(0,0,1),(1,0,2),(0,1,0),(1,0,0)\}$</p>
<p>But I don't know how to proceed with the $\Bbb R^3$ basis.
Any help would be nice.</p>
<p>Thanks.</p>
| spaceisdarkgreen | 397,125 | <p>I think you're misunderstanding. They want you to do it using that weird basis for R4 and the standard basis for R3, which is what you did. This isn't two separate problems. Any problem requires you specify the basis for each space.</p>
|
2,087,199 | <p>Ok, so my question is: how do you proceed when you need to find the matrix representation of a linear transformation if one of the bases belongs to a lower-dimensional space $\Bbb R^n$ than the other?</p>
<p>For example:
Find the matrix representation of the linear transformation $f(x,y,z,t)=(x+y,z-x,t+y)$ from $\Bbb R^4$ to $\Bbb R^3$ with respect to the following bases:</p>
<ul>
<li>$\{(0,0,0,1),(0,1,0,1),(0,0,1,0),(1,0,1,0)\}$ of $\Bbb R^4$</li>
<li>$\{(1,0,0),(0,1,0),(0,0,1)\}$ of $\Bbb R^3$</li>
</ul>
<p>In the $\Bbb R^4$ basis all I have to do is find $f(x,y,z,t)$ using each of the vectors and I get the matrix $\{(0,0,1),(1,0,2),(0,1,0),(1,0,0)\}$</p>
<p>But I don't know how to proceed with the $\Bbb R^3$ basis.
Any help would be nice.</p>
<p>Thanks.</p>
| Andrea Mori | 688 | <p>To a linear transformation $T:\Bbb R^n\rightarrow\Bbb R^m$ and a choice of bases $\{e_1,...,e_n\}$ of $\Bbb R^n$ and $\{f_1,...,f_m\}$ of $\Bbb R^m$ is associated a matrix $A$ with $m$ rows and $n$ columns, as follows.
You write
$$
A=\left(\begin{array}{cccc}
a_{1,1}& a_{1,2}&\cdots& a_{1,n}\\
a_{2,1}& a_{2,2}&\cdots& a_{2,n}\\
\vdots & \vdots & \ddots & \vdots \\
a_{m,1}& a_{m,2}&\cdots& a_{m,n}
$$
if
$$
T(e_k)=\sum_{j=1}^ma_{j,k}f_j.
$$
In words: the $k$-th column of $A$ lists the coordinates of $T(e_k)$ in terms of the basis $\{f_1,...,f_m\}$.</p>
<p>In your case you have $T(e_1)=f_3$, $T(e_2)=f_1+2f_3$, $T(e_3)=f_2$ and $T(e_4)=f_1$. Thus
$$
A=\left(\begin{array}{cccc}
0& 1&0& 1\\
0& 0&1& 0\\
1 & 2 &0 & 0
\end{array}\right).
$$</p>
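A quick machine check of this matrix (a sketch; since the $\Bbb R^3$ basis is the standard one, coordinates coincide with the vectors themselves):

```python
def f(x, y, z, t):
    # the linear transformation from the question
    return (x + y, z - x, t + y)

# the given basis of R^4 and the matrix A computed above
basis_R4 = [(0, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 0), (1, 0, 1, 0)]
A = [(0, 1, 0, 1),
     (0, 0, 1, 0),
     (1, 2, 0, 0)]

# the k-th column of A must list the coordinates of f(e_k)
for k, e in enumerate(basis_R4):
    assert f(*e) == tuple(row[k] for row in A)
```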
|
1,600,266 | <p>I'm trying to work out sum of this series $$1 + \frac{2}{2} + \frac{3}{2^2} + \frac{4}{2^3} + \ldots$$</p>
<p>I know one method is to do substitutions and getting the series into a form of a known series. So far I've converted the series into $$ 1 + \frac{2}{x} + \frac{3}{x^2} + \frac{4}{x^3} + \ldots $$ where $x=2$ and I'm trying to get it into the form of the $\ln(1+x)$ series somehow. I have tried differentiating, integrating and nothing is working out. The closest I got is by inverting which gave me $ 1 + \frac{x}{2} + \frac{x^2}{3} + \frac{x^3}{4} \ldots $ </p>
<p>Now I'm just lost and have no idea what to do.</p>
<p>The other idea I had was converting it into $$ \Large{\sum_{n=1}^\infty{\frac{n}{2^{n-1}}}}, $$ but I have no idea how to do anything further to it.
How would you do this? Thanks.</p>
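<p>A quick numeric check: differentiating the geometric series gives <span class="math-container">$\sum_{n\ge 1} n x^{n-1} = \frac{1}{(1-x)^2}$</span>, which at <span class="math-container">$x=\frac12$</span> equals <span class="math-container">$4$</span>, and partial sums of the series above agree:</p>

```python
# partial sum of sum_{n>=1} n / 2^(n-1); the tail past n = 59 is below 1e-15
partial = sum(n / 2 ** (n - 1) for n in range(1, 60))

# closed form 1/(1 - x)^2 evaluated at x = 1/2
assert abs(partial - 4) < 1e-12
```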
| J.-E. Pin | 89,374 | <p>There is no mistake and the meaning is clear. However, it might have been simpler to state that $0011101$ is not the binary representation of <strong>any number</strong> (prime or not) because except for $0$, the binary representation of a number always start by $1$.</p>
<p>Let me quote the whole paragraph:</p>
<blockquote>
<p>The problem of testing primality can be expressed by the language
$L_P$ consisting of all binary strings whose values as a binary number
is prime. That is, given a string of $0$'s and $1$'s, say "yes" if the
string is the binary representation of a prime and say "no" if not.
For some strings, this decision is easy. For instance, $0011101$
cannot be the representation of a prime, for the simple reason that
every integer except $0$ has a binary representation that begins with
$1$. However, it is less obvious whether the string $11101$ belongs to
$L_P$, so any solution to this problem will have to use significant
computational resources of some kind: time and/or space, for example.</p>
</blockquote>
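<p>To make the two examples concrete, here is a small sketch (the helper in_LP is hypothetical, not from the book): a string with leading zeros is not the canonical representation of any number, while $11101$ is the binary representation of the prime $29$.</p>

```python
def in_LP(s):
    # in_LP is a hypothetical helper: s belongs to L_P iff s is the canonical
    # binary representation of a prime (no leading zeros, prime value).
    if s != bin(int(s, 2))[2:]:
        return False          # has leading zeros: not a canonical representation
    n = int(s, 2)
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

assert not in_LP("0011101")   # leading zeros, as in the quoted paragraph
assert in_LP("11101")         # 11101 in binary is 29, which is prime
assert not in_LP("11100")     # 28 is not prime
```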
|
2,711,785 | <p>Having some issues with this proof. Assume we've already proven addition, etc.</p>
<p>Definition of multiplication:</p>
<p>$a \times S(b) = a \times b + a$ (the "definition of multiplication")</p>
<p>$a \times 0 = 0$ (the "zero property of multiplication")</p>
<p>First, some supporting proofs:</p>
<p><strong>Claim</strong>: $0 \times a = 0$</p>
<p><strong>Base Case</strong>: We induct on $a$. Let $a=0$. See that $0 \times 0 = 0$ by definition of zero property of multiplication. </p>
<p><strong>Inductive Step</strong>: Suppose $0 \times a = 0$. We must show that $0 \times S(a) = 0$. By definition of multiplication we have $0 \times S(a) = 0 \times a + 0 = 0 \times a$ which is just $0$ by the inductive hypothesis.</p>
<hr>
<p><strong>Claim</strong>: $a \times b = b \times a$</p>
<p><strong>Base Case</strong>: We induct on $a$. Let $a=0$. See that $0 \times b = 0 = b \times 0$ by zero property of multiplication.</p>
<p><strong>Inductive Step</strong>: Suppose $a \times b = b \times a$. We must show that $S(a) \times b = b \times S(a)$. By definition of multiplication we have $S(a) \times b = a \times b + b$ which is $b \times a + b$ by inductive hypothesis. Then by definition of multiplication $b \times a + b = b \times S(a)$ and we are done.</p>
<hr>
<p>Now technically I used a proof there I haven't derived yet. But that's where I am having trouble.</p>
<hr>
<p><strong>Claim</strong>: $S(a) \times b = a \times b + b$</p>
<p><strong>Base Case</strong>: We induct on $b$. Let $b=0$. See that $S(a) \times 0 = 0 = a \times 0 + 0$ by zero property of multiplication and additive identity.</p>
<p><strong>Inductive Step</strong>: Suppose $S(a) \times b = a \times b + b$. We must show that $S(a) \times S(b) = a \times S(b) + S(b)$. By inductive hypothesis we have $S(a) \times S(b) = a \times S(b) + S(b)$ and we are done.</p>
<p>Is this correct? I feel like I am making a mistake somewhere. I only needed to use the inductive hypothesis to get what I needed?</p>
| user525966 | 525,966 | <p>I think it's fixed now:</p>
<p><strong>Claim</strong>: $S(a) \times b = a \times b + b$</p>
<p><strong>Base Case</strong>: We induct on $b$. Let $b=0$. See that $S(a) \times 0 = 0 = a \times 0 + 0$ by zero property of multiplication and additive identity.</p>
<p><strong>Inductive Step</strong>: Suppose $S(a) \times b = a \times b + b$. We must show that $S(a) \times S(b) = a \times S(b) + S(b)$. </p>
<p>$\begin{align}S(a) \times S(b) &= S(a) \times b + S(a) \tag{definition of multiplication} \\
&= a \times b + b + S(a) \tag{inductive hypothesis} \\
&= a \times b + S(b + a) \tag{definition of addition} \\
&= a \times b + S(a + b) \tag{commutative property of addition} \\
&= a \times b + a + S(b) \tag{definition of addition} \\
&= a \times S(b) + S(b) \tag{definition of multiplication} \\
\end{align}$</p>
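<p>The chain above can also be machine-checked on small cases; a quick sketch (the function name is mine) using the recursive Peano definition of multiplication:</p>

```python
def mul(a, b):
    # Peano multiplication: a * 0 = 0 and a * S(b) = a * b + a
    return 0 if b == 0 else mul(a, b - 1) + a

for a in range(8):
    for b in range(8):
        assert mul(a + 1, b) == mul(a, b) + b   # the claim: S(a) x b = a x b + b
        assert mul(a, b) == mul(b, a)           # commutativity, which relied on it
```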
|