| qid | question | author | author_id | answer |
|---|---|---|---|---|
28,811 | <p>There are lots of statements that have been conditionally proved on the assumption that the Riemann Hypothesis is true.</p>
<p>What other conjectures have a large number of proven consequences?</p>
| Joel David Hamkins | 1,946 | <p>Set theory is of course completely saturated with this feature, since the independence phenomenon means that a huge proportion of the most interesting natural set-theoretic questions turn out to be independent of the basic ZFC axioms. Thus, most of the interesting work in set theory is about the relations between these various independent statements. They typically have the form of implications assuming the truth of a hypothesis not known to be true (and often, known in some sense not to be provably true), and therefore are instances of what you requested. The status of these various hypotheses as <em>conjectures</em>, however, to use the word you use, has given rise to vigorous philosophical debate in the foundations of mathematics and set theory, as to whether or not they have definite truth values and how we could come to know them.</p>
<p>Examples of such hypotheses used in this way would include all of the main set-theoretic hypotheses known to be independent. This list would run to several hundred natural statements, but let me list just a few:</p>
<ul>
<li>The Continuum Hypothesis (also the Generalized Continuum Hypothesis)</li>
<li>The negation of the Continuum Hypothesis</li>
<li>Martin's Axiom</li>
<li>More generally, other forcing axioms, such as PFA or MM</li>
<li>Cardinal characteristics relations, such as <strong>b</strong> < <strong>d</strong></li>
<li>The entire large cardinal hierarchy</li>
</ul>
<p>This last example is extremely important and a unifying instance of what you requested, for the large cardinal hierarchy is a tower of increasingly strong hypotheses, which we believe to be consistent, but haven't proved, and indeed, provably cannot prove, to be consistent, unless set theory itself is inconsistent. From any level of the large cardinal hierarchy, if consistent, we provably cannot prove the consistency of the higher levels. </p>
<p>So in this sense, the large cardinal hierarchy provides enormous iterated towers of your phenomenon.</p>
<p>This might seem at first to be a flaw. Why would we be interested in these large cardinals, if we cannot prove they exist, cannot prove that their existence is consistent, and indeed, can <em>prove</em> that we cannot prove they are consistent, assuming our basic axioms are consistent? The reason is that because of Goedel's incompleteness theorem, we know and expect to find such statements, that are not settled, even when we assume Con(ZFC) and more. Thus, we know there is a hierarchy of consistency strength towering above us. The remarkable thing is that this tower turns out to be describable in terms of the very natural infinite combinatorics of large cardinals. These were notions, such as inaccessible, Ramsey and measurable cardinals, that arose from natural questions about infinite combinatorics, independently of any considerations of consistency strength.</p>
<p>Some of the most interesting uses of large cardinals have been equiconsistencies between large cardinals and other natural mathematical statements. For example, the impossibility of removing AC from the Vitali construction of a non-measurable set is exactly equiconsistent with the existence of an inaccessible cardinal. And the complete determinacy of infinite integer games (without AC) is equiconsistent with the existence of infinitely many Woodin cardinals. </p>
|
28,811 | <p>There are lots of statements that have been conditionally proved on the assumption that the Riemann Hypothesis is true.</p>
<p>What other conjectures have a large number of proven consequences?</p>
| Tobias Kildetoft | 4,614 | <p>In the representation theory of a reductive algebraic group $G$ in positive characteristic $p$, there is a conjecture known as the Humphreys-Verma conjecture, which states that an indecomposable injective module for a Frobenius kernel $G_r$ of $G$ should lift uniquely to a module for $G$. There is also a refinement of the conjecture, known as Donkin's tilting conjecture, which specifies which $G$-module this lift should be (an indecomposable tilting module with a specified highest weight).</p>
<p>Both conjectures are known to be true when $p\geq 2h-2$ where $h$ is the Coxeter number, and while it is not particularly common to see statements formulated conditional on either conjecture, the condition $p\geq 2h-2$ is exceedingly common, and quite often this condition could be replaced by an assumption that one or both of the above conjectures are true.</p>
|
1,879,283 | <p>It is said that the greatest possible value of $380$ (rounded to 2 s.f.) is $385$, and the least is $375$. How is this possible? If I try to round $385$ (2 s.f.) it should be $390$,
as the least possible value of $390$ is $385$.</p>
<p>How can I resolve these issues?</p>
| marty cohen | 13,079 | <p>For A and C,
I would use
the point-slope form
of a line.
It states that
a line through
$(u, v)$
with slope $m$
has the form
$y-v = (x-u)m
$.</p>
<p>Note that this is true
when $x=u$ and $y=v$,
so it passes through the point.
Also note that
the equation can be written
$\dfrac{y-v}{x-u}
=m
$,
so its slope is $m$.</p>
<p>The vector through
$(-2, -5)$
(and, I assume,
$(0, 0)$)
has slope
$\dfrac{-5}{-2}
=\dfrac{5}{2}
$,
so the slope of the
normal to it
has a value which is
its negative reciprocal
which is
$\dfrac{-2}{5}$.</p>
<p>For C,
the slope of the line
$y=-2x-5$
is
$-2$,
and it passes through
$ (2,-4)$.</p>
<p>You should now be able to
get your answers.</p>
|
1,879,283 | <p>It is said that the greatest possible value of $380$ (rounded to 2 s.f.) is $385$, and the least is $375$. How is this possible? If I try to round $385$ (2 s.f.) it should be $390$,
as the least possible value of $390$ is $385$.</p>
<p>How can I resolve these issues?</p>
| Bernard | 202,857 | <p>For A), the most direct way is to use the vector form of the equation: if $(x_A,y_A)$ are the coordinates of the point $A$, and $\vec n$ is a normal vector, the equation is
$$\overrightarrow{AM}\cdot \vec n=0,\quad\text{i.e.}\quad 2x+5(y+2)=0.$$</p>
<p>For B), if the given points have different abscissae, there's a formula, which is a translation of the collinearity of the vectors $\overrightarrow{AM}$ and $\overrightarrow{AB}$:
$$\frac{y -y_A}{x-x_A}=\frac{y_B -y_A}{x_B-x_A}\iff y=\frac{y_B -y_A}{x_B-x_A}(x-x_A)+y_A.$$</p>
<p>For C), use the formula for the equation of a line passing through $A$, with prescribed slope $m$:
$$y=m(x-x_A)+y_A.$$</p>
|
467,609 | <blockquote>
<p>Find the value of
$$\int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx $$</p>
</blockquote>
<p>I have tried using $\int_a ^bf(x) dx=\int_a^b f(a+b-x)dx$</p>
<p>$\displaystyle \int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx=\int _0 ^ \pi \dfrac{\pi-x}{1+\sin^2(x)} dx=I$</p>
<p>I couldn't go any further with that!</p>
| Start wearing purple | 73,025 | <p>Now add the two integrals to get
$$2I=\pi\int_0^{\pi}\frac{dx}{1+\sin^2x}=\pi\int_{0}^{2\pi}\frac{dy}{3-\cos y},\tag{1}$$
where we used $\cos2x=1-2\sin^2x$ and the change of variables $y=2x$ to get the second equality.
The last integral in (1) can be easily calculated by residues (in fact the integrals of this type are the most standard and straightforward application of the residue theorem) so that finally
$$I=\frac{\pi^2}{2\sqrt{2}}.$$</p>
<hr>
<p><strong>Added</strong>: (the computation of (1) without residues) The change of variables $t=\tan x$ allows us to write
$$\sin^2x=\frac{t^2}{1+t^2},\qquad dx=\frac{dt}{1+t^2},$$
so that
\begin{align}
\int_0^{\pi}\frac{dx}{1+\sin^2x}=2\int_0^{\pi/2}\frac{dx}{1+\sin^2x}=2
\int_0^{\infty}\frac{dt}{1+2t^2}=\sqrt{2}\Bigl[\arctan(t\sqrt{2})\Bigr]_{0}^{\infty}
=\frac{\pi}{\sqrt{2}}.
\end{align}</p>
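As a numerical sanity check (an added illustration, not part of the original answer), a simple midpoint rule confirms the closed form $I=\pi^2/(2\sqrt{2})\approx 3.4894$:

```python
import math

def midpoint_integral(f, a, b, n=20000):
    # Midpoint rule: step width times the sum of f at interval midpoints
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

I = midpoint_integral(lambda x: x / (1 + math.sin(x) ** 2), 0.0, math.pi)
closed_form = math.pi ** 2 / (2 * math.sqrt(2))
print(abs(I - closed_form) < 1e-6)  # True
```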
|
2,167,855 | <p>Let $f(t)$ be a differentiable function for $t$ $\in$ $[0,1]$ satisfying the above,</p>
<p>Does $f(t)$ have any fixed points?</p>
<p>I can easily prove there always exists fixed points without the second condition using $MVT$,</p>
<p>does $0$ $\leq$ $\frac{\partial f(t)}{\partial t}$ $\leq$ $\frac 12$ change anything?</p>
<p>I am very curious to know the answer of this problem, and note fixed points are when; $f(x)=x$</p>
| Pete Caradonna | 164,325 | <p>You definitely have fixed points, by an Intermediate Value theorem argument: let $g(t):= f(t)-t$. Then $g(0) \ge 0$ and $g(1)< 0$ so using only continuity one necessarily obtains a fixed point, which solves $g(t^*)=0$. </p>
<p>But, $f'(t) \ge 0$ implies that your function is always increasing, and $f' \le \frac{1}{2}$ implies that $g'(t) < 0$ for all $t$, in particular at $t^*$. </p>
<p>So what if, for sake of contradiction, $g$ had two zeroes: $t^*$ and $t^{**}$? Without loss $t^* < t^{**}$, hence $g(t^{**})<g(t^*)$, but by hypothesis $g(t^*) = g(t^{**})$! Thus your fixed point is unique.</p>
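To illustrate the contraction argument numerically (with a hypothetical example function, not one from the question): any $f$ with $0\le f'\le\frac12$ is a contraction, so fixed-point iteration converges to the unique fixed point.

```python
f = lambda t: 0.3 + 0.4 * t   # example map on [0,1] with f'(t) = 0.4 in [0, 1/2]

t = 0.0
for _ in range(100):          # Banach fixed-point iteration: t_{k+1} = f(t_k)
    t = f(t)

print(abs(t - 0.5) < 1e-9)    # True: the unique fixed point is t* = 0.3/0.6 = 0.5
```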
|
2,223,163 | <p>I don't have any idea on how to prove it, and I need it for one of my questions which is still unanswered: <a href="https://math.stackexchange.com/questions/2192947/what-is-the-largest-number-smaller-than-100-such-that-the-sum-of-its-divisors-is?noredirect=1#comment4521040_2192947">What is the largest number smaller than 100 such that the sum of its divisors is larger than twice the number itself?</a>.</p>
| Sri-Amirthan Theivendran | 302,692 | <p>Note that $4^n\equiv 4\pmod 2$ and $4^n\equiv 4\pmod 3$, so by the Chinese Remainder Theorem $4^n\equiv 4\pmod 6$; since $10\equiv 4\pmod 6$, it follows that $10^n\equiv 4^n\equiv 4\pmod 6$. </p>
|
2,223,163 | <p>I don't have any idea on how to prove it, and I need it for one of my questions which is still unanswered: <a href="https://math.stackexchange.com/questions/2192947/what-is-the-largest-number-smaller-than-100-such-that-the-sum-of-its-divisors-is?noredirect=1#comment4521040_2192947">What is the largest number smaller than 100 such that the sum of its divisors is larger than twice the number itself?</a>.</p>
| Bernard | 202,857 | <p>Because it is even and $\;10^n-4\equiv1^n-1\equiv0\mod3$, so it's divisible by $2$ and $3$.</p>
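A brute-force check of both answers' claim (an added illustration):

```python
# 10 ≡ 4 (mod 6), so 10^n ≡ 4^n ≡ 4 (mod 6) for every n ≥ 1
print(all(pow(10, n, 6) == 4 for n in range(1, 100)))  # True
```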
|
2,868,172 | <p>The power rule states that for any real number $r$, </p>
<p>$$\frac{d}{dx}x^r=rx^{r-1}$$</p>
<p>Now one common way to prove this is to use the definition $x^r=e^{r\ln x}$, where $e^x$ is defined as the inverse function of $\ln x$, which is in turn defined as $\int_1^x\frac{dt}{t}$.</p>
<p>But this puts the cart before the horse, because students typically learn differential calculus before integral calculus. And there is a perfectly good definition of exponentiation of real numbers that does not rely on integral calculus:</p>
<p>$$x^r=\lim_{q\rightarrow r} x^q$$</p>
<p>where $q$ is a variable that ranges over the rational numbers. </p>
<p>So my question is, if we use this definition, and we take it for granted that $\frac{d}{dx}x^q=qx^{q-1}$ holds true for rational numbers (which can be easily proven without invoking $e$), then can we prove the power rule for real exponents without invoking $e$?</p>
<p>EDIT: Here’s a more precise formulation of the definition above. If $r$ is a real number, we say that $x^r = L$ if for any $\epsilon>0$ there exists a $\delta>0$ such that for any rational number $q$ such that $|q-r| < \delta$, we have $|x^q-L|<\epsilon$.</p>
| Community | -1 | <p><strong>Hint:</strong></p>
<p>$$x^r:=\lim_{q\to r,\\q\in\mathbb Q}x^q.$$</p>
<p>Then</p>
<p>$$(x^r)'=\lim_{h\to0}\frac{\lim_{q\to r,\\q\in\mathbb Q}((x+h)^q-x^q)}{h}=\lim_{q\to r,\\q\in\mathbb Q}\lim_{h\to0}\frac{((x+h)^q-x^q)}{h}=\lim_{q\to r,\\q\in\mathbb Q}qx^{q-1}=rx^{r-1}.$$</p>
<p>The hard part is to justify the swap of the limits.</p>
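A numeric illustration of the definition (hypothetical values, added for concreteness): approximating $r=\sqrt2$ by rationals $q$, the values $q\,x^{q-1}$ approach $r\,x^{r-1}$, as the limit-swap argument predicts.

```python
import math
from fractions import Fraction

x = 2.0
r = math.sqrt(2)
exact = r * x ** (r - 1)

errors = []
for k in range(1, 8):
    # a rational approximation q -> r with k decimal digits
    q = Fraction(round(r * 10 ** k), 10 ** k)
    errors.append(abs(float(q) * x ** (float(q) - 1) - exact))

print(errors[0] > errors[-1] and errors[-1] < 1e-6)  # True
```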
|
2,113,062 | <blockquote>
<p>Find $x^2+y^2$ if $x^2-\frac{2}{x}=3y^2$ and $y^2-\frac{11}{y}=3x^2$.</p>
</blockquote>
<p>My try:</p>
<p>$$\frac{y^2-\frac{11}{y}}{3}-\frac{2}{x}=3y^2$$</p>
<p>then?</p>
| dxiv | 291,201 | <p>[ <em>EDIT</em> #3 ] The previously posted answer remains copied below, under <em>Alternative Solution</em>.</p>
<p>Write the equations as:</p>
<p>$$
\begin{cases}
\begin{align}
2 &= x\,\left(x^2-3y^2\right) \\
11 &= y\,\left(y^2-3x^2\right)
\end{align}
\end{cases}
$$</p>
<p>Square the two and add together:</p>
<p>$$
\begin{align}
125 = 2^2 + 11^2 &= x^2\,\left(x^2-3y^2\right)^2 + y^2\,\left(y^2-3x^2\right)^2 \\
&= x^6 - 6 x^4 y^2 + 9 x^2y^4 \;\;+\;\;y^6 - 6 y^4x^2 + 9 y^2x^4 \\
& = x^6 + 3 x^4y^2 + 3 x^2y^4+y^6 \\
&= \left(x^2+y^2\right)^3
\end{align}
$$</p>
<p>Therefore $\;x^2+y^2 = \sqrt[3]{125} = 5\,$.
<hr>
[ <em>Alternative Solution</em> ]</p>
<p>As far as brute force goes, you had the right idea to try and eliminate one variable between the equations, but it works out better if first bringing one of the equations to contain only even powers.</p>
<p>From the the second equation:</p>
<p>$$x^2 = \frac{y^3-11}{3y} \tag{1}$$</p>
<p>Rearranging and squaring the first equation, then substituting $x^2$ from $(1)$:</p>
<p>$$
\begin{align}
\left(x^2-3y^2\right)^2 & = \frac{2^2}{x^2} \\[5px]
\frac{y^3-11}{3y} \left(\frac{y^3-11}{3y}-3y^2\right)^2 & = 4 \\[5px]
\left(y^3-11\right)\left(8y^3+11\right)^2 & = 108 y^3 \\[5px]
64 y^9 - 528 y^6 - 1923 y^3 - 1331 & = 0 \tag{2}
\end{align}
$$</p>
<p>Equation $(2)$ is a cubic in $t=y^3$ and has the "obvious" root $t=-1$ corresponding to $y=-1\,$. Dividing by $t+1$ leaves a quadratic in $t$ which gives the other two roots $t=\cfrac{37 \pm 30 \sqrt{3}}{8}\,$. Guessing that those might be of the form $(a \pm b\sqrt{3})^3$ and identifying coefficients turns out to work, which gives the corresponding $y = \cfrac{1 \pm 2\sqrt{3}}{2}$.</p>
<p>For each of the three roots $y$ we can calculate $x^2$ from $(1)$ then $x^2+y^2\,$: </p>
<p>$$
\begin{cases}
\begin{alignat}{3}
y &= -1 \quad\quad & x^2 &= 4 \quad\quad & x^2+y^2 &= 5 \\[5px]
y &= \frac{1 \pm 2\sqrt{3}}{2} \quad\quad & x^2 &= \frac{7 \mp 4 \sqrt{3}}{4} \quad\quad & x^2+y^2 &= 5
\end{alignat}
\end{cases}
$$</p>
<p>(<em>Meta-comment: I didn't see an obvious way to derive $x^2+y^2$ directly, without actually solving the system, though the nice end result strongly suggests that there would be one.</em>)</p>
<p><hr>
[ <em>EDIT</em> ] For a computer-assisted direct solution, note that the problem is equivalent to eliminating $x,y$ between the given equations and $z=x^2+y^2\,$, then solving the resulting equation in $z$.</p>
<p>Elimination can be done using <a href="https://en.wikipedia.org/wiki/Resultant" rel="nofollow noreferrer">polynomial resultants</a>, and Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=resultant%5B%20resultant%5Bx%5E3-3x%20y%5E2%20-2,%20z%20-%20x%5E2%20-%20y%5E2,%20y%5D,%20resultant%5By%5E3-3x%5E2%20y%20-%2011,%20z%20-%20x%5E2%20-%20y%5E2,%20y%5D,%20x%20%5D" rel="nofollow noreferrer">gives the resultant</a> as:</p>
<p>$$2^{24}\,\left(z^3-125\right)^6$$</p>
<p>Quite obviously, $z=5$ is the only real root so $x^2+y^2=5\,$.
<hr>
[ <em>EDIT</em> #2 ] The <a href="http://www.wolframalpha.com/input/?i=resultant%5B%20resultant%5Bx%5E3-3x%20y%5E2%20-a,%20z%20-%20x%5E2%20-%20y%5E2,%20y%5D,%20resultant%5By%5E3-3x%5E2%20y%20-%20b,%20z%20-%20x%5E2%20-%20y%5E2,%20y%5D,%20x%20%5D" rel="nofollow noreferrer">following</a> generalization explains the "<em>magic constants</em>" in the given problem:</p>
<p>$$x^2-\frac{a}{x}=3y^2\;, \quad y^2-\frac{b}{y}=3x^2 \quad \implies \quad x^2+y^2 = \sqrt[3]{a^2+b^2}$$</p>
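As a spot check of the result (added; the real solution $(x,y)=(2,-1)$ comes from the table above):

```python
x, y = 2, -1
assert x**2 - 2 / x == 3 * y**2      # x^2 - 2/x = 3y^2
assert y**2 - 11 / y == 3 * x**2     # y^2 - 11/y = 3x^2
assert x**2 + y**2 == 5              # matches cbrt(2^2 + 11^2) = cbrt(125)
print("ok")
```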
|
3,281,540 | <p>I wrote an algorithm by combining Fermat's Little Theorem and Euler's Method. However, I am experiencing a problem in Euler's method. </p>
<p>For instance, if I take <span class="math-container">$(A, B, M)$</span>, I compute <span class="math-container">$A^B \bmod M$</span>.</p>
<p>When the initial values are <span class="math-container">$(12341, 123141, 12313)$</span>, they reduce to <span class="math-container">$(12341, 7113, 12313)$</span>. However, after this the program won't go further. The reason, I believe, is that when <span class="math-container">$$B < \phi(M)$$</span> the algorithm stops working. So, simply put, for the algorithm to work properly we need a large <span class="math-container">$B$</span> but a small <span class="math-container">$M$</span>.</p>
<p><strong>At this point, my question is: is there a way to reduce M to smaller numbers by some mathematical operations?</strong></p>
<pre><code>from sympy import factorint  # needed for the prime factorization below

def eulers_theorem(A, B, M):
    # Euler's totient: phi(M) = M * prod((p - 1) / p) over the prime factors p of M.
    # Integer arithmetic (// then *) avoids floating-point rounding errors.
    totient = M
    for p in factorint(M).keys():
        totient = totient // p * (p - 1)
    new_B = B % totient
    return (A, new_B, M)
</code></pre>
<p>So I am factorizing <span class="math-container">$M$</span>, calculating its totient <span class="math-container">$\phi(M)$</span>, and then my new <span class="math-container">$B$</span> becomes </p>
<p><span class="math-container">$B_{new} = B~mod (\phi(M))$</span></p>
<p>For instance, if (A, B, M) = (123, 562, 100) </p>
<p><span class="math-container">$\phi(100) = 40$</span> so </p>
<p><span class="math-container">$$B_{new} = 562~mod (40) = 2$$</span></p>
<p>So (123, 562, 100) reduces to (123, 2, 100)</p>
<p>In the above example, <span class="math-container">$(12341, 123141, 12313)$</span> reduces to <span class="math-container">$(12341, 7113, 12313)$</span>. In this case the algorithm enters a loop, since </p>
<p><span class="math-container">$$B_{new} = 7113~mod (\phi(12313)) = 7113~mod(10548) = 7113$$</span> so <span class="math-container">$B_{new}$</span> is always the same.</p>
| Community | -1 | <p>Apply Fermat's little theorem and its extensions for each of <span class="math-container">$p_1,p_2$</span>, then apply the Chinese remainder theorem (with an LCM extension if needed) to the exponents for each to find the intersection. Apply this to <span class="math-container">$B$</span> and solve mod <span class="math-container">$M$</span>. </p>
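A concrete way to resolve the question's "loop" (an added sketch, not part of the one-line answer above): a standard generalization of Euler's theorem says that whenever $B \ge \phi(M)$, one may replace $B$ by $\phi(M) + (B \bmod \phi(M))$, with no coprimality assumption; and once $B < \phi(M)$ there is nothing left to reduce, so one simply evaluates with fast modular exponentiation.

```python
def phi(m):
    # Euler's totient via trial-division factorization (stdlib only)
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result = result // p * (p - 1)
        p += 1
    if n > 1:
        result = result // n * (n - 1)
    return result

def power_mod(a, b, m):
    # Generalized Euler: for b >= phi(m), a^b ≡ a^(phi(m) + b % phi(m)) (mod m),
    # valid even when gcd(a, m) != 1.
    if m == 1:
        return 0
    t = phi(m)
    if b >= t:
        b = t + b % t
    return pow(a, b, m)   # built-in fast modular exponentiation

print(power_mod(12341, 123141, 12313) == pow(12341, 123141, 12313))  # True
```

Here $\phi(12313)=10548$ and $123141 \bmod 10548 = 7113$, reproducing the reduction described in the question.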
|
2,374,282 | <p>I am trying to find all connected sets containing $z=i$ on which $f(z)=e^{2z}$ is one to one.
I have no idea how to approach this.
Can someone give me some hints?
Thank you</p>
| Dave | 334,366 | <p>I wrote a small program (I am a <strong>beginner</strong> programmer, so it may not be a very efficient way of doing this) to compute the number of days passed between two dates. One would enter the first date in the form D/M/Y and then the second in the same form, and this will spit out the number of days in between. I used C++.</p>
<pre><code>#include <iostream>
#include <string>
#include <math.h>
#include <cmath>
#include <utility>
#include <sstream>
#include <cmath>
#include <vector>
#include <tuple>
#include <algorithm>
#include <iterator>
using namespace std;
//Leap year check
bool leapyearcheck (long double y)
{
if ((y/4==floor(y/4)) && (y/100!=floor(y/100)))
{
return true;
}
else if (y/400==floor(y/400))
{
return true;
}
else
{
return false;
}
}
//Month counter
long double counter_month (long double d, long double m, long double y)
{
if (m==1)
{
return 0;
}
else if (m==2)
{
return 31;
}
else if ((m==3) && (leapyearcheck(y)==true))
{
return 60;
}
else if ((m==3) && (leapyearcheck(y)==false))
{
return 59;
}
else if ((m==4) && (leapyearcheck(y)==true))
{
return 91;
}
else if ((m==4) && (leapyearcheck(y)==false))
{
return 90;
}
else if ((m==5) && (leapyearcheck(y)==true))
{
return 121;
}
else if ((m==5) && (leapyearcheck(y)==false))
{
return 120;
}
else if ((m==6) && (leapyearcheck(y)==true))
{
return 152;
}
else if ((m==6) && (leapyearcheck(y)==false))
{
return 151;
}
else if ((m==7) && (leapyearcheck(y)==true))
{
return 182;
}
else if ((m==7) && (leapyearcheck(y)==false))
{
return 181;
}
else if ((m==8) && (leapyearcheck(y)==true))
{
return 213;
}
else if ((m==8) && (leapyearcheck(y)==false))
{
return 212;
}
else if ((m==9) && (leapyearcheck(y)==true))
{
return 244;
}
else if ((m==9) && (leapyearcheck(y)==false))
{
return 243;
}
else if ((m==10) && (leapyearcheck(y)==true))
{
return 274;
}
else if ((m==10) && (leapyearcheck(y)==false))
{
return 273;
}
else if ((m==11) && (leapyearcheck(y)==true))
{
return 305;
}
else if ((m==11) && (leapyearcheck(y)==false))
{
return 304;
}
else if ((m==12) && (leapyearcheck(y)==true))
{
return 335; //days before December: 335 in a leap year
}
else if ((m==12) && (leapyearcheck(y)==false))
{
return 334; //334 in a common year
}
}
//Year counter
long double counter_year (long double d, long double m, long double y)
{
long double a=0;
long double b=0;
for (long double j=0;j<y;j++)
{
if (leapyearcheck(j)==true)
{
a++;
}
else if (leapyearcheck(j)==false)
{
b++;
}
}
return 366*a+365*b;
}
//Main counter
long double counter_main (long double d, long double m, long double y)
{
return counter_month(d,m,y)+counter_year(d,m,y)+d;
}
//Days difference
long double daysdiff (long double d_1, long double m_1, long double y_1, long double d_2, long double m_2, long double y_2)
{
return counter_main(d_2,m_2,y_2)-counter_main(d_1,m_1,y_1);
}
//String to date
vector<long double> stodate (string input)
{
long double d,m,y;
istringstream ss(input);
string entry="";
int index=0;
while (getline(ss,entry,'/'))
{
if (index==0)
{
d=stold(entry);
}
else if (index==1)
{
m=stold(entry);
}
else if (index==2)
{
y=stold(entry);
}
index++;
}
vector<long double> date(3);
date[0]=d;
date[1]=m;
date[2]=y;
return date;
}
//Output
int main()
{
long double d_1,m_1,y_1,d_2,m_2,y_2;
string startdate, enddate;
cout << "Enter the start date (D/M/Y): ";
cin >> startdate;
cout <<"\nEnter the end date (D/M/Y): ";
cin >> enddate;
d_1=stodate(startdate)[0];
m_1=stodate(startdate)[1];
y_1=stodate(startdate)[2];
d_2=stodate(enddate)[0];
m_2=stodate(enddate)[1];
y_2=stodate(enddate)[2];
cout << "\nThe number of days passed is: " << daysdiff(d_1,m_1,y_1,d_2,m_2,y_2);
}
</code></pre>
<hr>
<p>I create functions to count the number of days produced by the number of years, months, and then days of each date. Then I subtract the two numbers obtained from each date to obtain the number of days passed between the two dates. The number of days for the months is $31$ if it is February, for instance (i.e. if someone inputs 2 as the month, then $31$ days have passed since the beginning of that year until this month). The only tricky part is with leap years, in which a leap year occurs if $$(4\mid y~\land~100\nmid y)\lor(400\mid y)$$where $y$ denotes the year.</p>
<p>For your birthdate: August 17, 1996, we would input 17/8/1996 as the start date, and then today's date as the end date (the day I posted this is 30/7/2017). On the day I posted this, you would be $7652$ days old.</p>
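The final figure can be cross-checked against Python's standard library (an independent check, not part of the C++ program above):

```python
from datetime import date

delta = date(2017, 7, 30) - date(1996, 8, 17)
print(delta.days)  # 7652
```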
|
134,455 | <p>I have an expression which consists of terms with undefined function calls <code>a[n]</code>:</p>
<pre><code>example = 1 - c^2 + c a[1] a[2] + 1/2 c^2 a[1]^2 a[2]^2 + c a[1] a[3]
</code></pre>
<p>Now I want to transform each term with individual <code>a</code>s to a different function <code>v[m1,m2,m3]</code>, such that <code>m_i</code> is the exponent of <code>a[i]</code>. In addition, I want the coefficient of <code>v</code> to be multiplied by (exponent of <code>a[i]</code> + 1):</p>
<pre><code>fct[example]
(* example -> (1 - c^2)*(0+1)*(0+1)*(0+1)*v[0,0,0] + c*(1+1)*(1+1)*(0+1)*v[1,1,0] +
+ 1/2 c^2*(2+1)*(2+1)*(0+1)*v[2,2,0] + c*(1+1)*(0+1)*(1+1)*v[1,0,1]
= (1 - c^2)*v[0,0,0] + c*4*v[1,1,0] +
+ 1/2 c^2*9*v[2,2,0] + c*4*v[1,0,1]
*)
</code></pre>
<hr>
<p>I found one solution, but it is annoyingly slow, and I was hoping that somebody finds faster methods. The idea is to multiply <code>example</code> with <code>a[1]^2 * a[2]^2 * a[3]^2</code> (such that every term has the form of <code>coeff * a[1]^n1*a[2]^n2*a[3]^n3</code>), then use a <code>Replace</code>-Rule:</p>
<pre><code>fct[expr_] :=
Expand[expr*a[1]^2*a[2]^2*a[3]^2] /.
{a[1]^n1_*a[2]^n2_*a[3]^n3_ -> (n1-1)*(n2-1)*(n3-1)*v[n1-2, n2-2, n3-2]}
fct[example] (* v[0, 0, 0] - c^2 v[0, 0, 0] + 4 c v[1, 0, 1] + 4 c v[1, 1, 0] +
9/2 c^2 v[2, 2, 0] *)
</code></pre>
<hr>
<p>My specific questions:</p>
<ol>
<li><p>How can I make the algorithm general for arbitrary <code>a[n]</code> with n>3?
(For the way I did it here, I don't know how to access the exponents more generally)</p></li>
<li><p>How can I make the algorithm faster? (It already takes ~5 sec for n=8, and roughly 80 terms of <code>a[1]^n1*a[2]^n2*a[3]^n3</code> in <code>example</code>.)</p></li>
</ol>
| Edmund | 19,542 | <p>You may use <a href="http://reference.wolfram.com/language/ref/CoefficientRules.html" rel="nofollow noreferrer"><code>CoefficientRules</code></a> and may find the <a href="http://reference.wolfram.com/language/guide/PolynomialAlgebra.html" rel="nofollow noreferrer">Polynomial Algebra</a> guide useful.</p>
<pre><code>ClearAll[fct];
fct[expr_, vars_] :=
Total@*KeyValueMap[#2 Times@@(#1 + 1) v@@#1 &]@*Association@@CoefficientRules[expr, vars]
</code></pre>
<p>Then the following returns immediately.</p>
<pre><code>fct[example, Array[a, 3]]
</code></pre>
<blockquote>
<pre><code>(1 - c^2) v[0, 0, 0] + 4 c v[1, 0, 1] + 4 c v[1, 1, 0] + 9/2 c^2 v[2, 2, 0]
</code></pre>
</blockquote>
<p>and</p>
<pre><code>fct[example* a[1]^2*a[2]^2*a[3]^2, Array[a, 3]]
</code></pre>
<blockquote>
<pre><code>27 (1 - c^2) v[2, 2, 2] + 48 c v[3, 2, 3] + 48 c v[3, 3, 2] + 75/2 c^2 v[4, 4, 2]
</code></pre>
</blockquote>
<p>Hope this helps.</p>
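For comparison (an added sketch, assuming SymPy is available), the same transformation can be written in Python, where <code>Poly.terms()</code> plays the role of <code>CoefficientRules</code>:

```python
from math import prod
from sympy import symbols, Function, Poly, Rational, expand

c, a1, a2, a3 = symbols('c a1 a2 a3')
v = Function('v')
example = 1 - c**2 + c*a1*a2 + Rational(1, 2)*c**2*a1**2*a2**2 + c*a1*a3

# terms() yields (exponent-tuple, coefficient) pairs, like CoefficientRules
result = sum(coeff * prod(e + 1 for e in mono) * v(*mono)
             for mono, coeff in Poly(example, a1, a2, a3).terms())

expected = (1 - c**2)*v(0, 0, 0) + 4*c*v(1, 0, 1) + 4*c*v(1, 1, 0) \
           + Rational(9, 2)*c**2*v(2, 2, 0)
print(expand(result - expected) == 0)  # True
```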
|
604,836 | <p>Prove if <span class="math-container">$a \equiv c \pmod{n}$</span> and <span class="math-container">$b \equiv d \pmod n$</span> then <span class="math-container">$ab \equiv cd \pmod{n}$</span>.</p>
<p>I tried to use <span class="math-container">$(a-c)(b-d) = ab-ad-cb+cd$</span>, but it doesn't seem to work.</p>
| jgon | 90,543 | <p>Hint:</p>
<p>The congruence relation is transitive, and if $a\equiv b \pmod n$, then $ax\equiv bx \pmod n$. Play around.</p>
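A quick brute-force check of the statement (an added illustration, not a proof): take $c=a+n$ and $d=b+2n$, which are congruent to $a$ and $b$ modulo $n$.

```python
ok = all((a * b - (a + n) * (b + 2 * n)) % n == 0   # ab ≡ cd (mod n)
         for n in range(2, 30)
         for a in range(-5, 6)
         for b in range(-5, 6))
print(ok)  # True
```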
|
2,435,505 | <p>It is a question from the permutations and combinations chapter and its answer is 48, as given in the book! Please help me to do this. I am unable to figure out the solution! Please help!</p>
| TomGrubb | 223,701 | <p>Hint: you can tell whether or not a number is even by looking solely at its last digit. Try conditioning on the last digit and counting each case individually.</p>
|
723,707 | <p>I'm trying to understand what the relation is between the direct product and the quotient group. </p>
<p>If we let $H$ be a normal subgroup of a group $G$, then it is not too difficult to show that the set of all cosets of $H$ in $G$ forms a quotient group $G/H$:
\begin{equation}
G/H = \{ g H \mid g \in G \}
\end{equation}</p>
<p>On the other hand, the Cartesian product of two groups $G$ and $H$ is defined as:
\begin{equation}
G \times H = \{ (g,h) \mid g \in G \text{ and } h \in H \}
\end{equation}
where $(g,h)$ denotes the set of ordered pairs. The direct product operation on this set is defined as:
\begin{equation}
(g_1,h_1)(g_2,h_2) = (g_1g_2,h_1h_2) \in G \times H
\end{equation}
and it is easy to see that the direct product forms a group.</p>
<p>Is the following statement true:
\begin{equation}
K = G \times H \implies G \simeq K / (\{e_G \} \times H)
\end{equation}
If so, under what conditions is it true? And how can we see it is true (or false)?</p>
<p><strong>Edit 26/03</strong>:</p>
<p>Up to this point, I believe I have found a method (see below) of showing the isomorphism relation. I would be really grateful if someone could tell me whether this proof is correct or not.</p>
<p>Let us identify the elements $h \in H$ with elements of $K$ by setting $h \equiv (e_G,h)$. The elements of $K/H$ are as usual defined by:
\begin{equation}
K/H = \{ (g,h) H \mid g \in G \text{ and } h \in H \}
\end{equation}
Since $h_1H=H$ for every $h_1 \in H$, we have:
\begin{equation}
(g,h)H = (g',h')H \iff g=g' \text{ and } h' = h h_1 \text{ for some } h_1 \in H \tag{1}
\end{equation}
and so without loss of generality we can write every element of $K/H$ in the form $(g,e_H)H$. Now, let the map:
\begin{equation}
f : G \to K/H
\end{equation}
be defined by:
\begin{equation}
f(g) = (g,e_H) H \tag{2}
\end{equation}
The map is one-to-one. This can be seen by equation $(1)$, because if:
\begin{equation}
f(g) = f(g')
\end{equation}
then:
\begin{equation}
(g,e_H) H = (g',e_H) H \implies g=g'
\end{equation}
Furthermore, the map is trivially onto:
\begin{equation}
\forall (g,h) H \in K/H \; \exists \; g \in G \; , \; f(g)=(g,h) H
\end{equation}
and thus the map is bijective. Finally, the map is also a homomorphism, because:
\begin{equation}
f(gg') = (gg',e_H) H = (g,e_H)(g',e_H) H = (g,e_H)(g',e_H) HH = (g,e_H) H (g',e_H) H = f(g) f(g')
\end{equation}
and so $f$ is a isomorphism. Thus, by definition of equation $(2)$, we have shown that $G \simeq K/H$.</p>
<p>Any input is much appreciated.</p>
| Pedro | 23,350 | <p>Suppose $G=H\times K$. We want to show $G/K\simeq H$. Consider the map $H\times K\to H\times 0\simeq H$ given by $(h,k)\to (h,0)$. What is the kernel of this? What is its image? </p>
<p>In general, the whole point of the direct product is, given groups $H$ and $K$, to form a group $G=H\times K$ with copies $\hat K=1\times K$ and $\hat H=H\times 1$ such that</p>
<p>$(\rm i)$ $\hat H,\hat K\lhd G$</p>
<p>$(\rm ii)$ $G=\hat H\hat K$</p>
<p>$(\rm iii)$ $\hat K\cap \hat H=1$</p>
<p>Moreover, $G/\hat K\simeq H$ and $G/\hat H\simeq K$.</p>
<p>Conversely, if $G$ contains subgroups $H,K$ with $H,K\lhd G$, $H\cap K=1$ and $HK=G$, then $G\simeq H\times K$.</p>
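A small computational illustration of $G/\hat K\simeq H$ (an added sketch with the hypothetical choice $H=\mathbb Z_4$, $K=\mathbb Z_5$, written additively):

```python
from itertools import product

H = range(4)                      # Z_4
K = range(5)                      # Z_5
G = list(product(H, K))           # the direct product H x K

def coset(g):
    # g + Khat, where Khat = {0} x K is the copy of K inside G
    h, k = g
    return frozenset((h, (k + k2) % 5) for k2 in K)

cosets = {coset(g) for g in G}
# Each coset is {h} x K, so the cosets correspond exactly to elements of H
print(len(cosets) == len(H))  # True
```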
|
815,661 | <p>Let $m$ be the product of the first $n$ primes ($n > 1$), as in the following expression:</p>
<p>$$m=2\cdot 3\cdots p_n$$</p>
<p>I want to prove that $(m-1)$ is not a perfect square.</p>
<p>I found two ways that might prove this. My problem is with the SECOND way.</p>
<p><strong>First solution (seems to be working) :</strong> </p>
<p>The first way that I used is this : </p>
<p>Proof by contradiction: assume that $m-1$ is a perfect square, i.e. $m-1 = x^2$; then </p>
<p>$m=x^2+1=x^2-(-1)=(x-(-1))(x+(-1))=(x+1)(x-1)$</p>
<p>So we have either : </p>
<ol>
<li><p>$(x+1)$ is even and $(x-1)$ is even </p></li>
<li><p>$(x+1)$ is even and $(x-1)$ is odd</p></li>
<li><p>$(x-1)$ is even and $(x+1)$ is odd</p></li>
</ol>
<p>First case : $(x+1)$ is even and $(x-1)$ is even , then $m$ looks like this : </p>
<p>$m=2⋅otherNumbersA⋅2⋅otherNumbersB$ </p>
<p>If we disregard $2$ then $m$ is a multiplication of $n-1$ prime numbers , then </p>
<p>$m$ is a multiplication of : $2 \cdot bigPrimeNumber$ . Contradiction . </p>
<p>The other two cases are just the same .</p>
<p><strong>Second solution (my problem) :</strong></p>
<p>What I'm interested in is the following solution (that I'm stuck in) :</p>
<p>Proof by contradiction: assume that $m-1 = x^2$. Since $m=2\cdot 3\cdots p_n$, $m$ is divisible by $3$, so we can write $m-1\equiv 2 \pmod 3$, which means that:</p>
<p>$m-1\equiv 2\pmod 3 \implies (m-1)-2=3q,\ q\in \mathbb{N} \implies m-3=3q \implies m=3(1+q)$</p>
<p>Meaning:</p>
<p>$m-1=x^2$</p>
<p>$m-1\equiv 2\pmod 3$</p>
<p>$x^2\equiv 2\pmod 3$</p>
<p>How do I continue from here? How can I use $x^2\equiv 2\pmod 3$ to reach a contradiction?</p>
<p>Thanks</p>
| lab bhattacharjee | 33,337 | <p>For any integer $\displaystyle x, x\equiv0, 1,2\pmod3$</p>
<p>$\displaystyle\implies x^2\equiv0,1^2\equiv1,2^2\equiv1\pmod3$</p>
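So $x^2 \bmod 3$ only takes the values $0$ and $1$, never $2$, which gives the desired contradiction; a one-line check (added):

```python
residues = {x * x % 3 for x in range(3)}  # every square mod 3 falls in this set
print(residues)  # {0, 1}
```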
|
900,884 | <p>If $ A = \{ m^n \mid m, n \in \mathbb{Z} \text{ and } m, n \ge 2 \} $, then how to find $\sum_{k \in A} \frac{1}{k-1} $?</p>
| barak manos | 131,263 | <p>Not a complete answer to your question, but if $A$ is a multi-set (where any element may appear more than once), then your series is larger than the sum of all the following series put together:</p>
<ul>
<li>$\sum\limits_{n=2}^{\infty}\frac{1}{2^n}=\frac{1}{2-1}-\frac{1}{2}=\frac{1}{2}$</li>
<li>$\sum\limits_{n=2}^{\infty}\frac{1}{3^n}=\frac{1}{3-1}-\frac{1}{3}=\frac{1}{6}$</li>
<li>$\sum\limits_{n=2}^{\infty}\frac{1}{4^n}=\frac{1}{4-1}-\frac{1}{4}=\frac{1}{12}$</li>
<li>$\sum\limits_{n=2}^{\infty}\frac{1}{5^n}=\frac{1}{5-1}-\frac{1}{5}=\frac{1}{20}$</li>
<li>$\dots$</li>
</ul>
<p>In other words:</p>
<p>$$\sum\limits_{k\in{A}}\frac{1}{k-1}>\sum\limits_{k\in{A}}\frac{1}{k}=\sum\limits_{m=2}^{\infty}\sum\limits_{n=2}^{\infty}\frac{1}{m^n}=\sum\limits_{m=1}^{\infty}\frac{1}{m(m+1)}=1$$</p>
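Numerically (an added check), the telescoping sum converges to $1$: the partial sum equals $1-\frac{1}{M+1}$.

```python
M = 10000
partial = sum(1 / (m * (m + 1)) for m in range(1, M + 1))
print(abs(partial - (1 - 1 / (M + 1))) < 1e-12)  # True: telescoping
```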
|
2,786,656 | <p>I know that if a group is generated by a single element then the group is abelian but does this mean that if a group is abelian then its conjugacy class is composed of a single element?</p>
| Aweygan | 234,668 | <p>Tsemo is correct, in that the fact that $T$ maps the unit sphere of $X$ bijectively onto the unit sphere of $Y$ implies that $T$ is an isometry. Furthermore, note that
\begin{align*}
\operatorname{Ran}(T^*)_\perp&=\{x\in X:\forall f\in Y^*,\langle T^*f,x\rangle=0\}\\
&=\{x\in X:\forall f\in Y^*, \langle f,Tx\rangle=0\}\\
&=\ker(T)\\
&=\{0\}
\end{align*}
Thus $\operatorname{Ran}(T^*)$ is weak$^*$-dense in $X^*$. But as $T^*$ is an isometry, its range is norm-closed, hence weak$^*$-closed, and thus equals $X^*$. Therefore $T^*$ is surjective.</p>
|
813,715 | <p>Say I am asked to find, in expanded form without brackets, the equation of a circle with radius 6 and centre 2,3 - how would I go on about doing this?</p>
<p>I know the equation of a circle is $x^2 + y^2 = r^2$, but what do I do with this information?</p>
| please delete me | 153,520 | <p>The equation of a circle with centre $(a,b)$ and radius $r$ is $(x-a)^2+(y-b)^2=r^2$.</p>
|
813,715 | <p>Say I am asked to find, in expanded form without brackets, the equation of a circle with radius 6 and centre 2,3 - how would I go on about doing this?</p>
<p>I know the equation of a circle is $x^2 + y^2 = r^2$, but what do I do with this information?</p>
| Nicky Hekster | 9,605 | <p>The equation gets like this: $$(x-2)^2+(y-3)^2=36,$$ which can be seen as translating a circle with radius 6 and center $(0,0)$ (the equation you mentioned) to the new center $(2,3)$.</p>
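<p>Since the question asks for the expanded form, note that multiplying out gives $x^2+y^2-4x-6y-23=0$. A small Python check (an illustration; the sample angles are arbitrary) that points on the circle satisfy this expansion:</p>

```python
import math

# Expanded form of (x-2)^2 + (y-3)^2 = 36, moved to one side:
#   x^2 + y^2 - 4x - 6y - 23 = 0
def expanded(x, y):
    return x**2 + y**2 - 4*x - 6*y - 23

# Sample points on the circle of radius 6 centred at (2, 3).
for t in (0.0, 0.7, 2.0, 3.9):
    x = 2 + 6 * math.cos(t)
    y = 3 + 6 * math.sin(t)
    assert abs(expanded(x, y)) < 1e-9
print("expanded form verified")
```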
|
3,469,252 | <p>At first: I am new to differential equations, so this question might seem a little bit obvious.</p>
<p>The differential equation was <span class="math-container">$y'x^3 = 2y -5$</span>.
I rearranged it to: <span class="math-container">$\frac{dx^3}{dx} = \frac{d(2y-5)}{dy}$</span>.</p>
<p>The problem is, if I differentiate it, there is no y left: <span class="math-container">$3x^2 = 2$</span>, so how do I do this?</p>
<p>Thanks a lot! </p>
| Community | -1 | <p>Your equation is</p>
<p><span class="math-container">$$\frac{dy}{dx}x^3=2y-5.$$</span></p>
<p>You cannot rearrange it to</p>
<p><span class="math-container">$$\frac{dx^3}{dx} = \frac{d(2y-5)}{dy}$$</span></p>
<p>(why would those differentials appear in the numerators?) but to</p>
<p><span class="math-container">$$\frac{dx}{x^3} = \frac{dy}{2y-5}.$$</span></p>
<p>This integrates as</p>
<p><span class="math-container">$$-\frac{1}{2x^2}=\frac12\log|2y-5|+C.$$</span></p>
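<p>Solving the implicit relation for $y$ gives the explicit family $y = \tfrac{1}{2}\left(5 + A e^{-1/x^2}\right)$, where $A$ absorbs the constant $C$ and the sign of the absolute value. A quick numerical check (illustrative; the values of $A$ and $x$ are arbitrary) that this satisfies $y'x^3 = 2y-5$:</p>

```python
import math

# Explicit solution read off from the implicit one:
#   -1/(2x^2) = (1/2) ln|2y - 5| + C   =>   y = (5 + A*exp(-1/x^2)) / 2
A = 1.3  # arbitrary constant of integration

def y(x):
    return (5 + A * math.exp(-1.0 / x**2)) / 2

# Verify y'(x) * x^3 == 2*y(x) - 5 using a central difference.
x, h = 1.5, 1e-6
dy = (y(x + h) - y(x - h)) / (2 * h)
assert abs(dy * x**3 - (2 * y(x) - 5)) < 1e-6
print("ODE satisfied at x =", x)
```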
|
2,704,102 | <p>Let $X,Y,Z$ be topological spaces. Is the following statement true?
$X \times Z \cong Y \times Z \implies X \cong Y$?
How would you prove it? </p>
<p>and I know that if $A \cong B$, and $a \in A$, then there is a $b \in B$ such that $A\setminus{\{a\}} \cong B\setminus{\{b\}}.$ How would you prove the same for removing lines from a product space, instead of points from ordinary topological spaces?</p>
| JKEG | 217,837 | <p>In addition to the counterexamples given in Henno Brandsma's answer, there is a very pathological and very interesting counterexample: In '<a href="http://www.ams.org/journals/proc/2006-134-12/S0002-9939-06-08596-0/S0002-9939-06-08596-0.pdf" rel="nofollow noreferrer">A counterexample related to topological sums – Yamamoto, Shuji and Yamashita, Atsushi</a>' a space $X$ and a space $Y$ are constructed such that $X$ and $Y$ are not homeomorphic but $X\times 2$ and $Y\times 2$ are. (By $2$ it is meant the discrete space with $2$ elements.) Notice that $X\times 2$ is just two copies of $X$, side by side, topologically unrelated to each other. Also, amazingly, $X$ and $Y$ are compact metric spaces—in fact, subspaces of $\mathbb{R}^2.$</p>
<p>This relates to your question in that $2\times X \cong 2\times Y$ but $X\not\cong Y$. </p>
<p>Also, much simpler albeit much less interesting counterexamples can be obtained with discrete spaces, e.g., $\mathbb{Z}\times 2\cong \mathbb{Z}\times 3$ but $2\not\cong 3$.</p>
|
525,885 | <p>Why is $$\frac{\sum_{i=1}^n a_i^{n+1}}{\sum_{i=1}^{n}a_i^n} \geq \frac{\sum_{i=1}^n a_i}{n}$$
where $n$ is some positive natural number, and all $a_i$s are assumed to be positive real numbers?</p>
| Noam D. Elkies | 93,983 | <p>This is the special case $(r,s) = (1,n)$ of the following inequality:</p>
<p><strong>Lemma.</strong>
<em>Suppose $r$, $s$, and $a_i$ ($1 \leq i \leq n$) are positive reals. Then</em>
$$
n \sum_{i=1}^n a_i^{r+s} \geq \sum_{i=1}^n a_i^r \sum_{i=1}^n a_i^s,
$$
<em>with equality</em> iff <em>the $a_i$ are all equal</em>.</p>
<p>The relation between the number $n$ of variables and the exponents $1,n$
is a red herring.</p>
<p><em>Proof</em> of Lemma:
Without loss of generality $r \leq s$. Let $x_i = a_i^s$,
and divide both sides by $n^2$, so the desired inequality becomes
$$
\left( \frac1n \sum_{i=1}^n x_i^{p+1} \right)
\geq
\left( \frac1n \sum_{i=1}^n x_i^p \right)
\left( \frac1n \sum_{i=1}^n x_i \right)
$$
where $p = r/s \leq 1$.
But the function $x \mapsto x^{p+1}$ is strictly convex upwards, so
$$
\left( \frac1n \sum_{i=1}^n x_i^{p+1} \right)
\geq
\left( \frac1n \sum_{i=1}^n x_i \right)^{p+1}
$$
with equality <em>iff</em> the $x_i$ are all equal; while
$x \mapsto x^p$ is convex downwards, so
$$
\left( \frac1n \sum_{i=1}^n x_i^p \right)
\leq
\left( \frac1n \sum_{i=1}^n x_i \right)^p
$$
with equality if the $x_i$ are all equal.
The desired result follows.$\diamondsuit$</p>
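<p>The Lemma is easy to spot-check numerically on random positive inputs (an illustrative script; the seed and ranges are arbitrary):</p>

```python
import random

# Lemma: n * sum(a_i^(r+s)) >= sum(a_i^r) * sum(a_i^s)
# for positive reals a_i, r, s.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    a = [random.uniform(0.1, 10.0) for _ in range(n)]
    r = random.uniform(0.1, 3.0)
    s = random.uniform(0.1, 3.0)
    lhs = n * sum(x ** (r + s) for x in a)
    rhs = sum(x ** r for x in a) * sum(x ** s for x in a)
    assert lhs >= rhs * (1 - 1e-12)  # tiny slack for floating point
print("lemma verified on 1000 random cases")
```

<p>The original question is recovered as the special case $(r,s)=(1,n)$.</p>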
|
29,670 | <p>I have $a_k=\frac1{(k+1)^\alpha}$ and $c_k=\frac1{(k+1)^\lambda}$, where $0<\alpha<1$ and $0<\lambda<1$, and we have an infinite sequence $x_k$ with the following evolution equation.
$$
x_{k+1}=\left(1-a_{k+1}\right)x_{k}+a_{k+1}c_{k+1}^{2}
$$
I have proven that $x_k$ is bounded and obviously positive. How can I know its limit?</p>
| Did | 6,179 | <p>Let us show that $x_k\to0$.</p>
<p>For every positive $u$, there exists a finite integer $k(u)$ such that $c_{k+1}^2\le u$ for every $k\ge k(u)$, hence $x_{k+1}\le(1-a_{k+1})x_k+a_{k+1}u$. </p>
<p>-- If there exists $k\ge k(u)$ such that $x_k\le u$, then $x_i\le u$ for every $i\ge k$, hence $\limsup x_i\le u$. </p>
<p>-- Otherwise, $x_k\ge u$ for every $k\ge k(u)$ and furthermore $(x_{k+1}-u)\le(1-a_{k+1})(x_k-u)$. Hence $(x_k-u)\le b(k,k(u))(x_{k(u)}-u)$, with
$$
b(k,k(u))=\prod_{i=k(u)+1}^k(1-a_i).
$$
Now, the series $\displaystyle\sum_ka_k$ diverges hence $b(k,k(u))\to0$ when $k\to+\infty$, and $\limsup x_k-u\le0$.</p>
<p>In both cases, $\limsup x_i\le u$. This holds for every positive $u$, hence $x_k\to0$.</p>
<p>The proof uses only that $c_k\to0$, $a_k\in[0,1]$ and $\displaystyle\sum_ka_k$ diverges.</p>
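<p>The convergence is easy to observe numerically (an illustrative simulation; the values $\alpha=0.6$, $\lambda=0.5$, $x_0=1$ are arbitrary choices in the allowed ranges):</p>

```python
# Iterate x_{k+1} = (1 - a_{k+1}) x_k + a_{k+1} c_{k+1}^2 with
# a_k = (k+1)^(-alpha) and c_k = (k+1)^(-lam), and watch x_k -> 0.
alpha, lam = 0.6, 0.5
x = 1.0  # arbitrary positive starting value
for k in range(1, 200_000):
    a = (k + 1) ** -alpha
    c = (k + 1) ** -lam
    x = (1 - a) * x + a * c * c
print(x)  # very small, consistent with x_k -> 0
assert 0 < x < 1e-3
```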
|
2,475,938 | <blockquote>
<p>How can I factor the polynomial <span class="math-container">$x^4-2x^3+x^2-1$</span>?</p>
</blockquote>
<p>This is an exercise in algebra. I have the solution showing that
<span class="math-container">$$
x^4-2x^3+x^2-1=(x^2-x-1)(x^2-x+1).
$$</span></p>
<p>But the solution does not show any details. Using the distributive property I can check that this is indeed true:
<span class="math-container">$$
\begin{align}
&(x^2-x-1)(x^2-x+1)\\
&=x^2(x^2-x+1)-x(x^2-x+1)-(x^2-x+1)\\
&=x^4-x^3+x^2-x^3+x^2-x-x^2+x-1\\
&=x^4-2x^3+x^2-1,
\end{align}
$$</span></p>
<p>but I can't figure out the steps to get there. Can anyone help?</p>
| Crescendo | 390,385 | <p>Let’s examine the first three terms of your quartic$$y=x^4-2x^3+x^2-1$$The coefficients are in the order of $1,-2,1$. Thus$$\begin{align*}x^4-2x^3+x^2-1 & =x^2(x^2-2x+1)-1\\ & =x^2(x-1)^2-1\\ & =\left[x(x-1)-1\right]\left[x(x-1)+1\right]\\ & =(x^2-x-1)(x^2-x+1)\end{align*}$$where the second to last step is obtained with the difference of squares factorization.</p>
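<p>Two degree-$4$ polynomials that agree at five or more points are identical, so the factorization can also be confirmed by comparing both sides at a handful of integer inputs (an illustrative check):</p>

```python
# The quartic and its claimed factorization, compared exactly
# at integer points (integer arithmetic, so no rounding issues).
def quartic(x):
    return x**4 - 2*x**3 + x**2 - 1

def factored(x):
    return (x**2 - x - 1) * (x**2 - x + 1)

for x in range(-10, 11):
    assert quartic(x) == factored(x)
print("factorization verified")
```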
|
2,233,138 | <p>Let ${x_n}$ be defined by </p>
<p>$$x_n : = \begin{cases} \frac{n+1}{n}, &\text{if } n \text{ is odd}\\
0,&\text{if } n \text{ is even}.
\end{cases}$$</p>
<p>I am pretty sure about $\lim_{n\to\infty}\inf x_n = 0$ </p>
<p>because if ${x_1} = 2$, $x_2 = 0$, $x_3 = 4/3$, $x_4 = 0$ so $\lim_{n\to\infty}\inf x_n = 0$</p>
<p>But about sup </p>
<p>$$\sup\{x_k : k \geq n\}=\begin{cases}
\frac{n+1}{n}, &\text{if } n \text{ is odd}\\
\frac{n+2}{n+1}, &\text{if } n \text{ is even}.
\end{cases}$$</p>
<p>I understand about odd but don't understand about when $n$ is even.</p>
<p>Why it is not $0$ when $n$ is even?</p>
| Chappers | 221,811 | <p>Continuing on from your first equality,
$$ \sum_{i=1}^n \sum_{j=1}^n a_i a_j \leq \sum_{i=1}^n \sum_{j=1}^n | a_i a_j | \leq \sum_{i=1}^n \sum_{j=1}^n \frac{a_i^2 + a_j^2}{2} = \frac{n}{2}\sum_{i=1}^n a_i^2 + \frac{n}{2}\sum_{j=1}^n a_j^2 = n \sum_{i=1}^n a_i^2 $$
by the AM–GM inequality.</p>
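<p>Note that $\sum_{i=1}^n \sum_{j=1}^n a_i a_j = \left(\sum_{i=1}^n a_i\right)^2$, so the chain above says $\left(\sum a_i\right)^2 \leq n \sum a_i^2$. A random spot-check (illustrative; the seed and ranges are arbitrary):</p>

```python
import random

# Check (sum a_i)^2 <= n * sum(a_i^2) on random inputs,
# negative values included.
random.seed(1)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(-5.0, 5.0) for _ in range(n)]
    assert sum(a) ** 2 <= n * sum(t * t for t in a) + 1e-9
print("inequality holds on 1000 random samples")
```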
|
4,327,537 | <p>I have a question which states that
"In a group of 23 people what is the probability that there are two people with the same birthday? Assume there are 365 days in a year. Ignore leap years and such complications. Assume there is an equal probability of a person being born on each day of the year.". I solved it using the complement. I first computed the number of ways in which we can assign the birthdays to 23 people out of 365 days (without replacement). That gave 365 * (365-1).. (365-k+1). Then I divided this by 365^k. Then I subtracted the result from 1. But, the probability which I have now got may also contain 3 people having the same birthday or 4 people having the same birthday, etc. I want to know the probability of exactly two people having the same birthday. In short, what I have computer is, " what's the probability that AT LEAST TWO PEOPLE HAVE SAME BIRTHDAY" and what I'm looking for is "WHAT IS THE PROBABILITY OF EXACTLY TWO PEOPLE HAVING THE SAME BIRTHDAY".How do I compute that probability?</p>
| Thomas | 128,832 | <p>If you think of the graph of <span class="math-container">$f$</span> as a mountain range, then the condition <span class="math-container">$\frac{\partial f}{\partial y}(x_0,y_0) \neq 0$</span> means that, along the curve <span class="math-container">$f(x,y) =0$</span>, the mountain range has a nonzero slope in the <span class="math-container">$y$</span> direction. This means that if you move in the <span class="math-container">$y$</span> direction, you will leave the <span class="math-container">$0$</span> level. If that derivative is <span class="math-container">$=0$</span>, you are on a (potentially infinitesimally small) plateau. If you move in the <span class="math-container">$y$</span> direction in this case, it may happen that you remain on the <span class="math-container">$0$</span> level (so the solution to <span class="math-container">$f=0$</span> may fail to be unique for that value of <span class="math-container">$y$</span>).</p>
<p>Drawing a picture of this is quite easy, but I think you will get the most out of it if you do it on your own.</p>
|
2,081,641 | <p>I figured out that $\lim_{n \to \infty}\frac{(3n^2−2n+1)\sqrt{5n−2}}{(\sqrt{n}−1)(1−n)(3n+2)} $ is $-\sqrt{5}$ but I don't know how to prove it.</p>
| 8hantanu | 395,310 | <p>Multiplying numerator and denominator with $\frac{1}{n^{5/2}}$, we get</p>
<p>$$\lim_{n\to\infty}\frac{(3-\frac{2}{n}+\frac{1}{n^2})(\sqrt{5-\frac{2}{n}})}{(1-\frac{1}{n^{1/2}})(\frac{1}{n}-1)(3+\frac{2}{n})}=\frac{3\times \sqrt 5}{-1 \times 3}=-\sqrt5$$</p>
<p>Since $\lim_{n\to\infty}\frac{1}{n}=0.$</p>
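<p>A numerical check at a large $n$ (illustrative; $n=10^6$ is an arbitrary choice) agrees with the limit $-\sqrt 5 \approx -2.236$:</p>

```python
# Evaluate the original expression at a large n and compare with -sqrt(5).
def term(n):
    num = (3 * n**2 - 2 * n + 1) * (5 * n - 2) ** 0.5
    den = (n**0.5 - 1) * (1 - n) * (3 * n + 2)
    return num / den

val = term(10**6)
print(val)  # close to -2.236...
assert abs(val + 5**0.5) < 0.01
```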
|
1,923,808 | <p>Suppose we have a function $g:\mathbb{R}\rightarrow\mathbb{R}$ defined
as:
$$
g(x)=x^{2}
$$
Now, we know that this function is not onto, because negative values
of $g$ are never attained. However, that is because we have <em>defined</em> the mapping from the set of real numbers to the set of real numbers.
If, in fact, we had defined it as:
$$
g:\mathbb{R}\rightarrow\mathbb{R}^{+}
$$
the function would have been onto. As such, it seems that our calling
a function onto or not depends crucially on how we define the mapping.
For instance, if we defined a function as going from its domain to
its range, it would <strong>always</strong> be onto. Why do we have to allow
for cases when it's not onto? </p>
| G Tony Jacobs | 92,129 | <p>Sometimes the codomain is specified for reasons outside of the function you're looking at. You might not know the range of your function, but you know some suitable codomain.</p>
<p>Here is a modest example from linear algebra. Let $A$ be an $m\times n$ matrix, then we can ask whether the equation $Ax=b$ has a solution in $\mathbb{R}^n$ for every $b\in\mathbb{R}^m$. The answer depends on whether the mapping $x\mapsto Ax$ is <em>onto</em> the codomain $\mathbb{R}^m$. The mapping is clearly onto its range, but what we want to know is whether its range includes all $m$-dimensional vectors or not.</p>
|
1,923,808 | <p>Suppose we have a function $g:\mathbb{R}\rightarrow\mathbb{R}$ defined
as:
$$
g(x)=x^{2}
$$
Now, we know that this function is not onto, because negative values
of $g$ are never attained. However, that is because we have <em>defined</em> the mapping from the set of real numbers to the set of real numbers.
If, in fact, we had defined it as:
$$
g:\mathbb{R}\rightarrow\mathbb{R}^{+}
$$
the function would have been onto. As such, it seems that our calling
a function onto or not depends crucially on how we define the mapping.
For instance, if we defined a function as going from its domain to
its range, it would <strong>always</strong> be onto. Why do we have to allow
for cases when it's not onto? </p>
| 5xum | 112,884 | <p>We don't <em>have</em> to allow for such cases, but it's easier. It's easier to look at real valued functions as all having the same domain, because then it's easier to look at compositions of functions.</p>
<p>If we always limited the codomain of a function to its range, then a function like</p>
<p>$$f(x)=e^{\sin x}$$</p>
<p>would be more difficult to define, since $x\mapsto e^x$ is defined on $\mathbb R$, while the range of $x\mapsto \sin x$ is $[-1,1]$.</p>
<p>So, you would then have to say that $f(x)=\exp_{[-1,1]} (\sin x)$, where $\exp_A$ is the exponential function whose domain has been reduced to $A$.</p>
|
1,923,808 | <p>Suppose we have a function $g:\mathbb{R}\rightarrow\mathbb{R}$ defined
as:
$$
g(x)=x^{2}
$$
Now, we know that this function is not onto, because negative values
of $g$ are never attained. However, that is because we have <em>defined</em> the mapping from the set of real numbers to the set of real numbers.
If, in fact, we had defined it as:
$$
g:\mathbb{R}\rightarrow\mathbb{R}^{+}
$$
the function would have been onto. As such, it seems that our calling
a function onto or not depends crucially on how we define the mapping.
For instance, if we defined a function as going from its domain to
its range, it would <strong>always</strong> be onto. Why do we have to allow
for cases when it's not onto? </p>
| E. Joseph | 288,138 | <p>I agree with the current answers, and I want to give you another important example.</p>
<p>If you consider the mapping</p>
<p>$$\exp\colon \mathbb C\to \mathbb C,$$</p>
<p>it is not trivial but it is an important theorem that $\exp(\mathbb C)=\mathbb C\setminus \{0\}$.</p>
<p>Once you have proved that, you can say that $\exp:\mathbb C\to \mathbb C \setminus\{0\}$ is onto.</p>
<p>It is only one example among many others.</p>
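<p>Concretely, for any nonzero $w$ a preimage under $\exp$ is $\log|w| + i\arg(w)$, which is what Python's <code>cmath.log</code> computes (an illustrative check; the sample points are arbitrary):</p>

```python
import cmath

# Every nonzero complex w has a logarithm, so exp maps C onto C \ {0}.
for w in (1 + 0j, -2 + 3j, -1 + 0j, 0.001j):
    z = cmath.log(w)  # principal branch: log|w| + i*arg(w)
    assert abs(cmath.exp(z) - w) <= 1e-12 * max(1.0, abs(w))
print("each sample w is hit by exp")
```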
|
3,712,256 | <p>I am trying to prove that: </p>
<blockquote>
<p>For nonempty subsets of the positive reals <span class="math-container">$A,B$</span>, both of which are bounded above, define
<span class="math-container">$$A \cdot B = \{ab \mid a \in A, \; b \in B\}.$$</span>
Prove that <span class="math-container">$\sup(A \cdot B) = \sup A \cdot \sup B$</span>.</p>
</blockquote>
<p>Here is what I have so far. </p>
<blockquote>
<p>Let <span class="math-container">$A, B \subset \mathbb{R}^+$</span> be nonempty and bounded above, so <span class="math-container">$\sup A$</span> and <span class="math-container">$\sup B$</span> exist by the least-upper-bound property of <span class="math-container">$\mathbb{R}$</span>. For any <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span>, we have
<span class="math-container">$$ab \leq \sup A \cdot b \leq \sup A \cdot \sup B.$$</span>
Hence, <span class="math-container">$A \cdot B$</span> is by bounded above by <span class="math-container">$\sup A \cdot \sup B$</span>. Since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are nonempty, <span class="math-container">$A \cdot B$</span> is nonempty by construction, so <span class="math-container">$\sup(A \cdot B)$</span> exists. Furthermore, since <span class="math-container">$\sup A \cdot \sup B$</span> is an upper bound of <span class="math-container">$A \cdot B$</span>, by the definition of the supremum, we have
<span class="math-container">$$\sup(A \cdot B) \leq \sup A \cdot \sup B.$$</span>
It suffices to prove that <span class="math-container">$\sup(A \cdot B) \geq \sup A \cdot \sup B$</span>. </p>
</blockquote>
<p>I cannot figure out the other half of this. A trick involving considering <span class="math-container">$\sup A - \epsilon$</span> and <span class="math-container">$\sup B - \epsilon$</span> for some <span class="math-container">$\epsilon > 0$</span> and establishing that <span class="math-container">$\sup(A \cdot B) < \sup A \cdot \sup B + \epsilon$</span> did not seem to work, though it did in the additive variant of this proof. I haven't anywhere used the assumption that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are contained in the <strong>positive</strong> real numbers, and it seems to me that this assumption must be important, probably as it pertains to inequality sign, so I assume that at some point I will need to multiply inequalities by some positive number. I cannot seem to get a good start on this, though. A hint on how to get started on this second half would be very much appreciated. </p>
| Calum Gilhooley | 213,690 | <p>Here's a completely different idea. (I've left only the straightforward proof of Lemma 1 to be filled in.)</p>
<p><strong>Lemma 1.</strong> If <span class="math-container">$E$</span> is a nonempty subset of <span class="math-container">$\mathbb{R}$</span> that is bounded above, and <span class="math-container">$c > 0,$</span> then
<span class="math-container">$$
\sup cE = c\sup E.
$$</span></p>
<p><strong>Lemma 2.</strong> If <span class="math-container">$(E_i)_{i \in I}$</span> is a nonempty family of nonempty subsets of <span class="math-container">$\mathbb{R}$</span> that are all bounded above, then
<span class="math-container">\begin{equation}
\label{3712256:eq:1}\tag{1}
\sup\bigcup_{i \in I}E_i = \sup\{\sup E_i : i \in I\},
\end{equation}</span>
in the sense that if either supremum exists, then so does the other, and they are equal.</p>
<p><em>Proof.</em>
If the supremum on the left of \eqref{3712256:eq:1} exists, then it is an upper bound for <span class="math-container">$\bigcup_{i \in I}E_i,$</span> therefore also an upper bound for <span class="math-container">$E_j$</span> for all <span class="math-container">$j \in I.$</span> Therefore, <span class="math-container">$\sup E_j$</span> exists for all <span class="math-container">$j \in I$</span> (we assumed this anyway), and
<span class="math-container">$$
\sup E_j \leqslant \sup\bigcup_{i \in I}E_i \, \text{ for all } j \in I,
$$</span>
therefore the set <span class="math-container">$\{\sup E_j : j \in I\}$</span> is bounded above, and
<span class="math-container">$$
\sup\{\sup E_j : j \in I\} \leqslant \sup\bigcup_{i \in I}E_i.
$$</span>
So the supremum on the right of \eqref{3712256:eq:1} also exists, and is bounded above by the supremum on the left.</p>
<p>Conversely, if the supremum on the right of \eqref{3712256:eq:1} exists, then it is an upper bound for <span class="math-container">$\sup E_i,$</span> therefore also an upper bound for <span class="math-container">$E_i,$</span> for all <span class="math-container">$i \in I.$</span> Therefore, it is an upper bound for <span class="math-container">$\bigcup_{i \in I}E_i.$</span> Therefore, the supremum on the left of \eqref{3712256:eq:1} also exists, and
<span class="math-container">$$
\sup\bigcup_{i \in I}E_i \leqslant \sup\{\sup E_i : i \in I\}.
$$</span>
We have now shown that if either side of \eqref{3712256:eq:1} is defined, then so is the other; and we have proved an inequality in both directions, therefore the two sides are equal. <span class="math-container">$\ \square$</span></p>
<p>Now, with Lemma 2 doing all the hard work for us, the proof is straightforward:
<span class="math-container">\begin{align*}
\sup AB & = \sup\bigcup_{a \in A}aB \\
& = \sup\{\sup aB : a \in A \} & \text{by Lemma 2} \\
& = \sup\{a\sup B : a \in A \} & \text{by Lemma 1} \\
& = \sup(A\sup B) \\
& = (\sup A)(\sup B). & \text{by Lemma 1}
\end{align*}</span></p>
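<p>For finite sets of positive reals the suprema are maxima, which makes the identity easy to test (an illustrative check; the seed and ranges are arbitrary):</p>

```python
import random

# For finite sets of positive reals, sup = max and
# max(A*B) = max(A) * max(B).
random.seed(2)
for _ in range(200):
    A = [random.uniform(0.1, 10.0) for _ in range(random.randint(1, 6))]
    B = [random.uniform(0.1, 10.0) for _ in range(random.randint(1, 6))]
    AB = [a * b for a in A for b in B]
    assert abs(max(AB) - max(A) * max(B)) < 1e-9
print("sup identity checked on finite sets")
```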
|
25,707 | <p>[Edit: to explain some things that were not so clear in the original post]</p>
<p>I believe in being straightforward. Without linking to the specific question, people will just treat this as another vague gripe with nothing to discuss. But to talk about the specific issues with a specific post is equivalent to pointing out the specific authors of those posts too. I give credit where it is due; I've commended and upvoted some other answers by the author in question, yet many people have judged me solely based on my blunt criticism of this answer of his/hers. Past interactions showed that this user generally refuses to admit any serious conceptual error, and this is nowhere near the first time. So please be <strong>fair</strong> if you wish to judge my statements and actions.</p>
<p>Secondly, someone has taken the initiative to edit the objectively incorrect post to fix the error. As I mentioned in the comments, I thought this was not according to the SE rules, which is why I did not do it myself even though I really really wanted to. If the community thinks this is the way to go I have no problems with doing it quietly without making a fuss. It would also in my opinion be a viable solution to the whole issue of 'undeserved' upvotes.</p>
<p>Thirdly, someone felt that I was claiming access to some kind of (absolute?) truth. No I don't. But correct mathematics with respect to modern standards is not subjective. A correct statement is one that is provable in the chosen foundations (usually assumed to be ZFC), and a correct proof is a valid sequence of deductions in that chosen system (in practice at least a description that logicians can easily see is translatable into the foundational system). That is very much objective enough to any modern logician, and that is what I mean by "correct". For example, "$1+1 = 3$" is incorrect; anyone who disagrees should provide precise descriptions of their non-mainstream foundations or notations rather than claiming it is correct from a certain point of view or saying that someone highly qualified said it.</p>
<p>(I'm lenient with missing assumptions or logical gaps <strong>if and only if</strong> the expected audience is estimated to be largely capable of filling them in correctly. But the two posts I am mainly referring to here are simply false and there is nothing for me to be lenient about. I talk about this only because it's relevant to another post mentioned at the end.)</p>
<p>Fourthly, if this feels like a complaint, it is. Sorry if it offends anyone but I want to see Math SE remain a reliable repository of mathematics, and this is what I right now feel is a necessary topic of discussion to help to achieve that outcome.</p>
<hr>
<p>[Original post]</p>
<p>The two top-voted answers to <a href="https://math.stackexchange.com/q/2099679/21820">this question</a> are of such poor quality that I don't understand why none of the many upvoters has noticed. The answer by <em>David</em> essentially states that we use the square-root in RMS speed simply because it gives the correct units for velocity, which is <strong>not even true</strong> strictly speaking because the final quantity is not any kind of <strong>velocity</strong> at all. The answer by <em>Yves Daoust</em> is worse, affirming the false claim in the question that the average speed is zero. He/she even claims that his/her answer is at the level of the OP, but the falsehood is not defensible. Why does it seem like most Math SE users are carelessly upvoting answers without reading carefully?</p>
<p>More pertinently, <strong>what can be done about it?</strong> <a href="https://math.meta.stackexchange.com/a/18921/21820">Downvoting as per this meta-post</a> clearly fails, and review queues almost always fail (The Not-An-Answer flag is specifically for posts that do not even seem to address the question, and the Low-Quality flag is nearly always rejected for posts that are long). Also, these wrong answers are not as bad as <a href="https://math.meta.stackexchange.com/q/17512/21820">fake answers</a>, but it seems we cannot even agree on flagging to get rid of fake answers.</p>
<p>Note: There are even <strong>4</strong> completely nonsensical answers at the bottom, 2 of which are deleted. One talks about exams, as if that has anything to do with valid mathematics! It also makes the rubbish claim that $x = y$ iff $\sqrt{x} = \sqrt{y}$. Another one says that RMS "rectifies" negative to become positive, which cannot even be made sense of. I'll leave you to peruse the other 2 yourself. But these do not appear to be a problem probably because they came late and do not gather upvotes fast enough.</p>
| Pedro | 23,350 | <p>If you identify incorrect answers to a post, you can <em>downvote</em> and <em>comment</em>, and you can also <em>post an answer</em> you deem better and (hopefully) correct. The site is not immune to errors, and it is the duty of all of us to make sure we preserve a database of correct and useful answers. </p>
<p>Another point to be considered is that posts like the one cited, which have more upvotes than usual and treat a less mathematical topic, tend to lure in more answers, which in turn implies more people missing the point, misreading, hurrying to post, and so on. This is a mere explanation of what is going on, and is not meant to justify the incorrect answers. If you look at other "hot" posts, you'll find a big junkyard of deleted or downvoted posts: it is like that. </p>
<blockquote>
<p>Why does it seem like most Math SE users are carelessly upvoting answers without reading carefully?</p>
</blockquote>
<p>This claim in the form of a question has little support at the moment, and I will ignore it. </p>
|
28,751 | <p>$X \sim \mathcal{N}(0,1)$, then to show that for $x > 0$,
$$
\mathbb{P}(X>x) \leq \frac{\exp(-x^2/2)}{x \sqrt{2 \pi}} \>.
$$</p>
| cardinal | 7,003 | <p>Since for $t \geq x > 0$ we have that $1 \leq \frac{t}{x}$,
$$
\mathbb{P}(X > x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty 1 \cdot e^{-t^2/2} \,\mathrm{d}t \leq \frac{1}{\sqrt{2\pi}} \int_x^\infty \frac{t}{x} e^{-t^2/2} \,\mathrm{d}t = \frac{e^{-x^2/2}}{x \sqrt{2\pi}} .
$$</p>
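<p>The bound is easy to verify numerically, using the exact tail $\mathbb{P}(X>x) = \tfrac12\operatorname{erfc}(x/\sqrt2)$ (an illustrative check; the sample points are arbitrary):</p>

```python
import math

def normal_tail(x):
    # P(X > x) for a standard normal, via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def tail_bound(x):
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

for x in (0.5, 1.0, 2.0, 3.0, 5.0):
    assert normal_tail(x) <= tail_bound(x)
print("tail bound verified")
```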
|
1,473,318 | <blockquote>
<p>How many numbers can by formed by using the digits $1,2,3,4$ and $5$ without repetition which are divisible by $6$?</p>
</blockquote>
<p><strong>My Approach:</strong></p>
<p>$3$ digit numbers formed using $1,2,3,4,5$ divisible by $6$ </p>
<p>unit digit should be $2/4$ </p>
<p>No. can be $XY2$ & $XY4$</p>
<p>$X+Y+2 = 6,9$ & $X+Y+4 = 9,12$</p>
<p>$X+Y = 4,7$ & $X+Y = 5,8$</p>
<p>$(X,Y)= (1,3),(3,1),(3,4),(4,3)$ & </p>
<p>$(X,Y)= (2,3),(3,2),(3,5),(5,3)$</p>
<p>Therefore,Total 8 numbers without repetition.</p>
<blockquote>
<p>But I am confused here: how do I find the total count of such numbers?</p>
</blockquote>
| N. F. Taussig | 173,070 | <p>For a number to be divisible by $6$, it must be divisible by both $2$ and $3$. If it is divisible by $2$, it must be even, so the units digit must be $2$ or $4$. If it is divisible by $3$, the sum of its digits must be divisible by $3$.</p>
<p>The only one-digit positive integer that is divisible by $6$ is $6$ itself, so the number must have at least two digits.</p>
<p><strong>Two-digit numbers:</strong> If the units digit is $2$, the tens digit must have remainder $1$ when divided by $3$. Hence, the tens digit must be $1$ or $4$. </p>
<p>If the units digit is $4$, the tens digit have remainder $2$ when divided by $3$. Hence, the tens digit must be $2$ or $5$. </p>
<p>Therefore, there are four two-digit numbers divisible by $6$ that can be formed using the digits $1, 2, 3, 4, 5$ without repetition. They are $12$, $24$, $42$, $54$.</p>
<p><strong>Three-digit numbers:</strong> If the units digit is $2$, the sum of the hundreds digit and tens digit must have remainder $1$ when divided by $3$. Since the sum of the hundreds digit and tens digit must be at least $1 + 3 = 4$ and at most $5 + 4 = 9$, the only possibilities are that the sum of the hundreds digit and tens digit is $4$ or $7$. Since digits cannot be repeated, the only way to obtain $4$ is to use the digits $1$ and $3$ in either order, and the only way to obtain $7$ is to use the digits $3$ and $4$ in either order. Hence, there are four three-digit numbers divisible by $6$ that can be formed with the digits $1, 2, 3, 4, 5$ that have units digit $2$. They are $132$, $312$, $342$, and $432$.</p>
<p>If the units digit is $4$, then the sum of the hundreds digit and tens digit must have remainder $2$ when divided by $3$. Since the sum of the hundreds digit and tens digit must be at least $1 + 2 = 3$ and at most $3 + 5 = 8$, the sum of the hundreds digit and tens digit must be $5$ or $8$. Since digits cannot be repeated, the only way to obtain $5$ is to use the digits $2$ and $3$ in either order, and the only way to obtain $8$ is to use the digits $3$ and $5$ in either order. Hence, there are also four three-digit numbers divisible by $6$ that can be formed with the digits $1, 2, 3, 4, 5$ that have units digit $4$. They are $234$, $324$, $354$, $534$.</p>
<p>Therefore, there are a total of eight three-digit numbers divisible by $6$ that can be formed from the digits $1, 2, 3, 4, 5$ without repetition. </p>
<p><strong>Four-digit numbers:</strong> If the units digit is $2$, then the sum of the thousands digit, hundreds digit, and tens digit must have remainder $1$ when divided by $3$. Since the sum of the thousands digit, hundreds digit, and tens digit must be at least $1 + 3 + 4 = 8$ and at most $3 + 4 + 5 = 12$, the sum of the thousands digit, hundreds digit, and tens digit must be $10$. Since digits cannot be repeated, the only way to obtain a sum of $10$ is to use the digits $1$, $4$, and $5$ in some order. There are $3! = 6$ such orders. Hence, there are six four-digit numbers divisible by $6$ with units digit $2$ that can be formed from the digits $1, 2, 3, 4, 5$ without repetition. They are $1452$, $1542$, $4152$, $4512$, $5142$, and $5412$.</p>
<p>If the units digit is $4$, the remainder of the sum of the thousands digit, hundreds digit, and tens digit must be $2$ when divided by $3$. Since the sum of the thousands digit, hundreds digit, and tens digit must be at least $1 + 2 + 3 = 6$ and at most $2 + 3 + 5 = 10$, the sum of the thousands digit, hundreds digit, and tens digit must be $8$. Since digits cannot be repeated, the only way to obtain a sum of $8$ is to use the digits $1$, $2$, and $5$ in some order. Since there are $3! = 6$ such orders, there are also six four-digit numbers divisible by $6$ with units digit $4$ that can be formed from the digits $1, 2, 3, 4, 5$ without repetition. They are $1254$, $1524$, $2154$, $2514$, $5124$, and $5214$.</p>
<p>Hence, there are a total of $12$ four-digit numbers divisible by $6$ that can be formed from the digits $1, 2, 3, 4, 5$ without repetition.</p>
<p><strong>Five-digit numbers:</strong> The sum of the five digits $1, 2, 3, 4, 5$ is $15$, which is divisible by $3$. Hence, any five-digit number formed from these digits without repetition that has units digit $2$ or $4$ is divisible by $6$. There are two ways of filling the units digit and $4!$ ways of filling the remaining digits. Hence, there are $2 \cdot 4! = 48$ five-digit numbers divisible by $6$ that can be formed with the digits $1, 2, 3, 4, 5$ without repetition.</p>
<p>In total, there are $4 + 8 + 12 + 48 = 72$ numbers divisible by $6$ that can be formed from the digits $1, 2, 3, 4, 5$ without repetition. </p>
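<p>The whole case analysis can be confirmed by brute force (an illustrative script):</p>

```python
from itertools import permutations

# Count numbers divisible by 6 formed from the digits 1..5 without
# repetition, broken down by length; the totals match the case analysis.
counts = {}
for length in range(1, 6):
    counts[length] = sum(
        1
        for p in permutations("12345", length)
        if int("".join(p)) % 6 == 0
    )

print(counts)  # → {1: 0, 2: 4, 3: 8, 4: 12, 5: 48}
assert sum(counts.values()) == 72
```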
|
3,577,249 | <p>Let <span class="math-container">$X$</span> be a topological space and let <span class="math-container">$Y\subseteq X$</span> be such that <span class="math-container">$\mathscr{der}(Y)=\varnothing$</span>: then for every <span class="math-container">$y\in Y$</span> there is a neighborhood <span class="math-container">$I_y$</span> with <span class="math-container">$I_y\cap Y=\{y\}$</span>, so <span class="math-container">$Y$</span> is a set of isolated points. Now if <span class="math-container">$Y$</span> is a set of isolated points, clearly none of its points could be an accumulation point; but if <span class="math-container">$x\notin Y$</span>, how does one prove that for every neighborhood <span class="math-container">$I_x$</span> of <span class="math-container">$x$</span> we have <span class="math-container">$I_x\cap Y=\varnothing$</span>?</p>
<p>Could someone help me, please?</p>
| drhab | 75,923 | <p>Not true in general.</p>
<p>Let <span class="math-container">$a\neq b$</span> and <span class="math-container">$\tau=\{\varnothing,\{a\},\{a,b\}\}$</span>.</p>
<p>Then <span class="math-container">$\tau$</span> is a topology on <span class="math-container">$X=\{a,b\}$</span>.</p>
<p><span class="math-container">$Y=\{a\}$</span> is a set of isolated points but evidently <span class="math-container">$b$</span> is a limit point of <span class="math-container">$Y$</span> so that <span class="math-container">$\mathsf{der}(Y)\neq\varnothing$</span>.</p>
|
3,536,135 | <blockquote>
<p>There are five multiple choice questions on a test, with four choices per question. A student was given 10 questions to study for the test and the teacher picked 5 out of 10 questions to put on the test. The student memorizes 7 of the 10 answers of the questions given. If the student encounters the three questions the student does not know the answer to, the four choices will be equally likely to be guessed upon.
<p>a. What is the probability the student gets the first question right?
<p>b. What is the probability the student gets a 100 on the test?</p>
</blockquote>
<p>I think I got the first part. Let A be the event the student gets the first question correct. For any given question, the probability that the student knows its answer is <span class="math-container">$\frac{7}{10}$</span>, since she knows 7 of the 10 questions. So we will call question one X1, question two X2, and so on..
<p>So we have <span class="math-container">$P(A)=P(A|X1=1)P(X1=1)+P(A|X1=0)P(X1=0)$</span>=<span class="math-container">$(1)$</span>(<span class="math-container">$\frac{7}{10}$</span>)+(<span class="math-container">$\frac{1}{4}$</span>)(<span class="math-container">$\frac{3}{10}$</span>)<span class="math-container">$=.7+.075=.775$</span>
<p>The second question I am more confused on. Let C1 mean question one is correct, C2 that question two is correct, etc.
<p>P(C1)=P(C2)=...=P(C5)=<span class="math-container">$\frac{31}{40}$</span>
<p>We will assume the C's are independent and compute <span class="math-container">$(\frac{31}{40})^5$</span>. After this I am not sure what to do, and I am not sure if I am on the right track. If somebody can check whether the work looks good so far and guide me in the right direction, that would be appreciated!</p>
| WW1 | 88,679 | <p>Total ways of choosing the 5 questions is <span class="math-container">$\binom {10}5 $</span></p>
<p>To get 100% split into 4 cases</p>
<p>case 1: all five memorized , no lucky guesses</p>
<p><span class="math-container">$$ P_1 = \frac {\binom {7}5}{\binom {10}5} $$</span></p>
<p>case 2: 4 memorized , 1 lucky guess</p>
<p><span class="math-container">$$ P_2 = \frac {\binom {7}4 \times \frac14 \times \binom 31}{\binom {10}5} $$</span></p>
<p>case 3: 3 memorized , 2 lucky guesses</p>
<p><span class="math-container">$$ P_3 = \frac {\binom {7}3 \times (\frac14 )^2\times \binom 32}{\binom {10}5} $$</span></p>
<p>case 4: 2 memorized , 3 lucky guess</p>
<p><span class="math-container">$$ P_4 = \frac {\binom {7}2 \times (\frac14 )^3\times \binom 33}{\binom {10}5} $$</span></p>
|
75,005 | <p>Let's imagine a guy who claims to possess a machine that can each time produce a completely random series of 0/1 digits (e.g. $1,0,0,1,1,0,1,1,1,...$). And each time after he generates one, you can keep asking him for the $n$-th digit and he will tell you accordingly.</p>
<p>Then how do you check if his series is <em>really completely random</em>?</p>
<p>If we only check whether the $n$-th digit is evenly distributed, then he can cheat using:</p>
<blockquote>
<p>$0,0,0,0,...$<br>
$1,1,1,1,...$<br>
$0,0,0,0,...$<br>
$1,1,1,1,...$<br>
$...$</p>
</blockquote>
<p>If we check whether any given sequence is distributed evenly, then he can cheat using:</p>
<blockquote>
<p>$(0,)(1,)(0,0,)(0,1,)(1,0,)(1,1,)(0,0,0,)(0,0,1,)...$<br>
$(1,)(0,)(1,1,)(1,0,)(0,1,)(0,0,)(1,1,1,)(1,1,0,)...$<br>
$...$</p>
</blockquote>
<p>I may give other possible checking processes but as far as I can list, each of them has flaws that can be cheated with a prepared regular series.</p>
<p>How do we check if a series is really random? Or is randomness a philosophical concept that can not be easily defined in Mathematics?</p>
| GM2001 | 17,808 | <p>All the sequences you mentioned have a very low Kolmogorov complexity, because you can describe them in very little space. A random sequence (as per the usual definition) has a high Kolmogorov complexity, which means there is no description shorter than the string itself that can describe or reproduce the string. Of course the length of the description depends on the formal system (language) you use to describe it, but if the string is much longer than the axioms of your formal system, then the Kolmogorov complexity of a random string becomes essentially independent of your choice of system.</p>
<p>Luckily, under the Church–Turing thesis, there is only one model of computation (unless your machine uses yet-undiscovered physical laws), so there is only one language your machine can speak that we have to check.</p>
<p>So to test whether a string is random, we would brute-force check the length of the shortest Turing program that outputs the first n bits correctly. If that length eventually grows proportionally to n, then we can be fairly confident we have a random sequence, but to be 100% certain we would have to check the whole (infinite) string (as per the definition of random).</p>
|
2,568,157 | <p>Consider the following:</p>
<p>$$(1^5+2^5)+(1^7+2^7)=2(1+2)^4$$</p>
<p>$$(1^5+2^5+3^5)+(1^7+2^7+3^7)=2(1+2+3)^4$$</p>
<p>$$(1^5+2^5+3^5+4^5)+(1^7+2^7+3^7+4^7)=2(1+2+3+4)^4$$</p>
<p>In General is it true for further increase i.e.,</p>
<p>Is</p>
<p>$$\sum_{i=1}^n i^5+i^7=2\left( \sum_{i=1}^ni\right)^4$$ true $\forall $ $n \in \mathbb{N}$</p>
| lhf | 589 | <p>Both sides are polynomials in $n$ of degree $8$. Since they coincide for $n=0,\dots,8$, they are equal.</p>
<p>Any $9$ points will do. Taking $n=-4,\dots,4$ is probably easier to do by hand.</p>
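<p>A quick machine check of the identity over a much larger range than the nine points needed (a Python sketch):</p>

```python
# Both sides are degree-8 polynomials in n, so agreement at nine points
# already forces equality; we check n = 0..49 anyway.
for n in range(50):
    lhs = sum(i**5 + i**7 for i in range(1, n + 1))
    rhs = 2 * (n * (n + 1) // 2) ** 4
    assert lhs == rhs
print("identity verified for n = 0..49")
```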
|
2,568,157 | <p>Consider the following:</p>
<p>$$(1^5+2^5)+(1^7+2^7)=2(1+2)^4$$</p>
<p>$$(1^5+2^5+3^5)+(1^7+2^7+3^7)=2(1+2+3)^4$$</p>
<p>$$(1^5+2^5+3^5+4^5)+(1^7+2^7+3^7+4^7)=2(1+2+3+4)^4$$</p>
<p>In General is it true for further increase i.e.,</p>
<p>Is</p>
<p>$$\sum_{i=1}^n i^5+i^7=2\left( \sum_{i=1}^ni\right)^4$$ true $\forall $ $n \in \mathbb{N}$</p>
| Hypergeometricx | 168,053 | <p>Here are some interesting observations, which are too long to be included as a comment. </p>
<p>A useful reference can be found <a href="https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Beardon201-213.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>Denoting $\displaystyle\sum_{r=1}^n r^m=\sigma_m$, we note that</p>
<p>$$\begin{align}\sigma_3&=\sigma_1^2\tag{1}\\
\frac{\sigma_5}{\sigma_3}&=\frac {4\sigma_1-1}{3}\tag{2}\\
\frac {\sigma_7}{\sigma_3}&=\frac {6\sigma_1^2-4\sigma_1+1}3\tag{3}\end{align}$$</p>
<p>Adding $(2),(3)$ and using $(1)$ gives
$$\begin{align}
\frac {\sigma_5+\sigma_7}{\sigma_3}&=2\sigma_1^2=2\sigma_3\\
\sigma_5+\sigma_7&=2\sigma_3^2=2\sigma_1^4\end{align}$$</p>
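<p>The relations $(1)$–$(3)$ and the final identity can all be verified exactly with rational arithmetic (a Python sketch):</p>

```python
from fractions import Fraction

def sigma(m, n):
    # sigma_m = sum of r^m for r = 1..n
    return sum(r**m for r in range(1, n + 1))

for n in range(1, 40):
    s1, s3, s5, s7 = (sigma(m, n) for m in (1, 3, 5, 7))
    assert s3 == s1 ** 2                                            # (1)
    assert Fraction(s5, s3) == Fraction(4 * s1 - 1, 3)              # (2)
    assert Fraction(s7, s3) == Fraction(6 * s1**2 - 4 * s1 + 1, 3)  # (3)
    assert s5 + s7 == 2 * s1 ** 4
print("relations (1)-(3) and the sum identity hold for n = 1..39")
```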
|
3,482,441 | <p>Very confused on how to deal with these direct sum problems.</p>
<p>Problem:Suppose <span class="math-container">$U=\{(x,y,x+y,x-y,2x) \in \mathbb{F}^{5}:x,y \in \mathbb{F}\}$</span></p>
<p>Find a subspace <span class="math-container">$W$</span> of <span class="math-container">$\mathbb{F}^{5}$</span> such that <span class="math-container">$\mathbb{F}^{5}=U \oplus W$</span></p>
<p>Trying to figure out a routine way to do these problems. I used the following link to help <a href="https://math.stackexchange.com/questions/1771961/find-a-subspace-w-of-mathbbf4-such-that-mathbbf4-u-oplus-w">Find a subspace $W$ of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U \oplus W$</a></p>
<p>Attempt:</p>
<p>Given <span class="math-container">$(a,b,c,d,e) \in \mathbb{F}^{5}$</span>,</p>
<p><span class="math-container">$(a,b,c,d,e)=(a,b,a+b+c-a-b,a-b-a+b+d,2a-2a+e)$</span></p>
<p><span class="math-container">$=(a,b,a+b,a-b,2a)+(0,0,c-a-b,-a+b+d,e-2a)$</span></p>
<p>where <span class="math-container">$(a,b,a+b,a-b,2a) \in U$</span> and <span class="math-container">$(0,0,c-a-b,-a+b+d,e-2a) \in W$</span></p>
<p>Hence <span class="math-container">$\mathbb{F}^{5}=U+W$</span></p>
<p>Next Show <span class="math-container">$U \cap W=\{0\}$</span></p>
<p>Attempt:</p>
<p>Let <span class="math-container">$(e,f,g,h,i) \in U \cap W$</span> then <span class="math-container">$e=0,f=0$</span></p>
<p>I can't seem to figure out why <span class="math-container">$g=0,h=0,i=0$</span></p>
<p>Also is this the correct way to approach this type of problem?</p>
<p>Thanks</p>
| Community | -1 | <p>I don't see your answer panning out.</p>
<p>Here's another method:</p>
<p><span class="math-container">$U$</span> is the span of <span class="math-container">$\{(1,0,1,1,2), (0,1,1,-1,0)\}$</span>. (To see this, plug in <span class="math-container">$x=1,y=0$</span>, and <span class="math-container">$x=0, y=1$</span>, to get two obviously l.i. vectors in <span class="math-container">$U$</span>. But <span class="math-container">$U$</span> is clearly two-dimensional: two free variables.)</p>
<p>So the problem can be boiled down to expanding this to a basis of <span class="math-container">$\Bbb F^5$</span>.</p>
<p>You could use the "sifting algorithm", applied to a generating set (most easily just adjoin the standard basis to the two vectors above to get a basis). See <a href="https://math.stackexchange.com/a/465913/403337">this answer</a>.</p>
|
3,482,441 | <p>Very confused on how to deal with these direct sum problems.</p>
<p>Problem:Suppose <span class="math-container">$U=\{(x,y,x+y,x-y,2x) \in \mathbb{F}^{5}:x,y \in \mathbb{F}\}$</span></p>
<p>Find a subspace <span class="math-container">$W$</span> of <span class="math-container">$\mathbb{F}^{5}$</span> such that <span class="math-container">$\mathbb{F}^{5}=U \oplus W$</span></p>
<p>Trying to figure out a routine way to do these problems. I used the following link to help <a href="https://math.stackexchange.com/questions/1771961/find-a-subspace-w-of-mathbbf4-such-that-mathbbf4-u-oplus-w">Find a subspace $W$ of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U \oplus W$</a></p>
<p>Attempt:</p>
<p>Given <span class="math-container">$(a,b,c,d,e) \in \mathbb{F}^{5}$</span>,</p>
<p><span class="math-container">$(a,b,c,d,e)=(a,b,a+b+c-a-b,a-b-a+b+d,2a-2a+e)$</span></p>
<p><span class="math-container">$=(a,b,a+b,a-b,2a)+(0,0,c-a-b,-a+b+d,e-2a)$</span></p>
<p>where <span class="math-container">$(a,b,a+b,a-b,2a) \in U$</span> and <span class="math-container">$(0,0,c-a-b,-a+b+d,e-2a) \in W$</span></p>
<p>Hence <span class="math-container">$\mathbb{F}^{5}=U+W$</span></p>
<p>Next Show <span class="math-container">$U \cap W=\{0\}$</span></p>
<p>Attempt:</p>
<p>Let <span class="math-container">$(e,f,g,h,i) \in U \cap W$</span> then <span class="math-container">$e=0,f=0$</span></p>
<p>I can't seem to figure out why <span class="math-container">$g=0,h=0,i=0$</span></p>
<p>Also is this the correct way to approach this type of problem?</p>
<p>Thanks</p>
| Anurag A | 68,092 | <p>Since <span class="math-container">$U=\text{span}\left(\left\{\right(1,0,1,1,2), \, (0,1,1,-1,0)\}\right)$</span> so to find a <span class="math-container">$W$</span>, one can choose <span class="math-container">$W=U^{\perp}$</span> (orthogonal complement of <span class="math-container">$U$</span>), i.e.
<span class="math-container">$$W=\{v \in \Bbb{F}^5 \, | \, \forall u \in U, \,\, v \cdot u = 0\}.$$</span>
In this particular problem, (using the basis vectors of <span class="math-container">$U$</span>)
<span class="math-container">$$W=\{(x,y,z,s,t) \, | \, x+z+s+2t=0 \text{ and } y+z-s=0\}.$$</span>
Thus we need the basis vectors for the solution set of
<span class="math-container">\begin{align*}
x+ z+s+2t & =0\\
y+z-s & =0.
\end{align*}</span>
The solutions are given by
<span class="math-container">$$W=\{(-z-s-2t,-z+s,z,s,t) \, | \, z,s,t \in \Bbb{F}\}.$$</span>
OR
<span class="math-container">$$\begin{bmatrix}x\\y\\z\\s\\t\end{bmatrix}=z\begin{bmatrix}-1\\-1\\1\\0\\0\end{bmatrix}+s\begin{bmatrix}-1\\1\\0\\1\\0\end{bmatrix}+t\begin{bmatrix}-2\\0\\0\\0\\1\end{bmatrix}.$$</span>
<span class="math-container">$$W=\text{Span}\left(\left\{(-1,-1,1,0,0), \, (-1,1,0,1,1),\, \, (-2,0,0,0,1)\right\}\right).$$</span></p>
<p>Since <span class="math-container">$W=U^{\perp}$</span>, so <span class="math-container">$U \cap W=\{0\}$</span> is an easy outcome of that. </p>
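<p>A stdlib Python sketch that double-checks the computation over <span class="math-container">$\Bbb{R}$</span>: each spanning vector of <span class="math-container">$W$</span> (read off from the parametrization in <span class="math-container">$z,s,t$</span>) is orthogonal to both spanning vectors of <span class="math-container">$U$</span>, and the five vectors together are linearly independent, so the sum is direct and fills <span class="math-container">$\Bbb{R}^5$</span>:</p>

```python
U = [(1, 0, 1, 1, 2), (0, 1, 1, -1, 0)]
W = [(-1, -1, 1, 0, 0), (-1, 1, 0, 1, 0), (-2, 0, 0, 0, 1)]  # z-, s-, t-vectors

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Every W-vector is orthogonal to every U-vector.
ortho = all(dot(u, w) == 0 for u in U for w in W)

def det(m):
    # Cofactor expansion along the first row (fine for a 5x5 integer matrix).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

d = det([list(v) for v in U + W])
print(ortho, d != 0)  # orthogonality holds and the determinant is nonzero
```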
|
281,735 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/202452/why-is-predicate-all-as-in-allset-true-if-the-set-is-empty">Why is predicate “all” as in all(SET) true if the SET is empty?</a> </p>
</blockquote>
<p>In don't quite understand this quantification over the empty set:</p>
<p>$\forall y \in \emptyset: Q(y)$</p>
<p>The book says that this is always TRUE regardless of the value of the predicate $Q(y)$, and it explain that this is because this quantification adds no predicate at all, and therefore can be considered the weakest predicate possible, which is TRUE.</p>
<p>I know that TRUE is the weakest predicate because $P \Rightarrow \text{TRUE}$ is TRUE for every $P$.
I don't see what is the relationship between this weakest predicate and the quantification. </p>
| André Nicolas | 6,312 | <p>A peculiar explanation. </p>
<p>Whatever $Q(y)$ may be, it is true that for all $y\in \emptyset$, the sentence $Q(y)$ is true. For the empty set is $\dots$ empty. Every unicorn likes wine. </p>
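<p>As an aside, this convention is wired into programming languages as well: Python's <code>all</code> over an empty collection returns <code>True</code>, which is exactly vacuous quantification.</p>

```python
empty_set = set()

# "For all y in the empty set, Q(y)" is True for any predicate Q,
# even for two contradictory predicates at once.
assert all(y > 0 for y in empty_set)
assert all(y <= 0 for y in empty_set)

# The existential quantifier over the empty set is False, as expected.
assert not any(y > 0 for y in empty_set)
print("vacuous truth: all() over an empty set is True")
```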
|
4,554,831 | <blockquote>
<p>Let <span class="math-container">$(X,d)$</span> be a metric space. Prove that if the point <span class="math-container">$x$</span> is on the boundary of the open ball <span class="math-container">$B(x_0,r)$</span> then <span class="math-container">$d(x_0,x)=r$</span>.</p>
</blockquote>
<p>I find this difficult because it seems intuitive yet not easy to prove. By definition, if <span class="math-container">$A\subset X$</span> then a point <span class="math-container">$x$</span> is on the boundary if for all <span class="math-container">$\epsilon>0$</span> we have <span class="math-container">$B(x,\epsilon)\cap A\ne\emptyset$</span> and also <span class="math-container">$(X\setminus A)\cap B(x,\epsilon)\ne\emptyset$</span>. However I don't know how to use this definition in any meaningful way.</p>
| Sam | 530,289 | <p>We start by defining the boundary of a set.
Let <span class="math-container">$(X, \mathcal{T})$</span> be a topological space. Let <span class="math-container">$A$</span> be a set in <span class="math-container">$X$</span>, the boundary of <span class="math-container">$A$</span> is defines as
<span class="math-container">$$\partial A=Cl(A)\setminus Int(A)$$</span>
Where <span class="math-container">$Cl(A)$</span> and <span class="math-container">$Int(A)$</span> mean closure of <span class="math-container">$A$</span> and interior of <span class="math-container">$A$</span> respectively.</p>
<p>Now consider a metric space <span class="math-container">$(X,d)$</span>, a point <span class="math-container">$a\in X$</span>, and a positive real number <span class="math-container">$r$</span> and let
<span class="math-container">$$A=B_r (a)$$</span></p>
<p><strong>Step 1:</strong> We will first prove that for any point <span class="math-container">$b\in X$</span>,
<span class="math-container">$$d(a,b)< r \Rightarrow b\notin\partial A$$</span>
we prove this by taking <span class="math-container">$\delta= r-d(a,b)>0$</span> and proving that
<span class="math-container">$$B_\delta (b)\subseteq B_r (a)$$</span></p>
<p>For any <span class="math-container">$z\in B_\delta (b)$</span> we have
<span class="math-container">$$d(b,z)<\delta$$</span>
<span class="math-container">$$\Rightarrow d(b,z)<r-d(a,b)$$</span>
Using the triangular inequality we know that
<span class="math-container">$$d(a,z)\le d(a,b)+d(b,z)$$</span>
<span class="math-container">$$< d(a,b)+ r-d(a,b)$$</span>
<span class="math-container">$$\Rightarrow d(a,z)<r$$</span>
<span class="math-container">$$\Rightarrow z\in B_r(a)$$</span>
<span class="math-container">$$\Rightarrow B_\delta (b)\subseteq A$$</span>
<span class="math-container">$$\Rightarrow b \in Int(A)$$</span>
<span class="math-container">$$\Rightarrow b\notin\partial A$$</span></p>
<p><strong>Step 2:</strong> We will now show that for any point <span class="math-container">$b\in X$</span>,
<span class="math-container">$$d(a,b)> r \Rightarrow b\notin\partial A$$</span>
we prove this by taking <span class="math-container">$\delta= d(a,b)-r>0$</span> and proving that <span class="math-container">$b\notin Cl(A)$</span> i.e.
<span class="math-container">$$B_\delta (b)\cap B_r (a)=\emptyset$$</span>
Assume that <span class="math-container">$z\in B_\delta (b)\cap B_r (a)$</span> then
<span class="math-container">$$d(a,z)<r\text{ and } d(z,b)<\delta$$</span>
now take the triangular inequality
<span class="math-container">$$d(a,b)\le d(a,z)+d(z,b)$$</span>
<span class="math-container">$$\Rightarrow d(a,b)< r+\delta$$</span>
<span class="math-container">$$\Rightarrow d(a,b)< r+d(a,b)-r$$</span>
<span class="math-container">$$\Rightarrow d(a,b)< d(a,b)\text{ a clear contradiction}$$</span>
<span class="math-container">$$\Rightarrow B_\delta (b)\cap B_r (a)=\emptyset$$</span>
<span class="math-container">$$\Rightarrow b\notin Cl(A)$$</span>
<span class="math-container">$$\Rightarrow b\notin\partial A$$</span></p>
<p><strong>Conclusion:</strong> Which means that if <span class="math-container">$b\in \partial A$</span>, then the only possibility left is to have
<span class="math-container">$$d(a,b)=r$$</span></p>
<p>Hope this helps</p>
|
2,005,555 | <p>When I was solving a DE problem I was able to reduce it to </p>
<p>$$e^x \sin(2x)=a\cdot e^{(1+2i)x}+b\cdot e^{(1−2i)x}.$$ </p>
<p>For complex $a,b$. Getting one solution is easy $(\frac{1}{2i},-\frac{1}{2i})$ but I was wondering what are all the values for complex $a,b$ that satisfy the equation. </p>
| DanielWainfleet | 254,665 | <p>When $x=0$ we have $0=a+b.$ So $b=-a.$ So $e^x\sin 2x=ae^{(1+2i)x}-ae^{(1-2i)x}=2iae^x\sin 2x.$ When $\sin 2x\ne 0$ this reduces to $1=2ia.$</p>
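<p>A quick numerical sanity check of $(a,b)=\left(\frac{1}{2i},-\frac{1}{2i}\right)$ with stdlib complex arithmetic (a sketch):</p>

```python
import cmath

a = 1 / 2j        # 1/(2i)
b = -a

for x in [0.0, 0.3, 1.0, -2.5, 4.0]:
    lhs = cmath.exp(x) * cmath.sin(2 * x)
    rhs = a * cmath.exp((1 + 2j) * x) + b * cmath.exp((1 - 2j) * x)
    assert abs(lhs - rhs) < 1e-9

print("a = 1/(2i), b = -1/(2i) reproduces e^x sin(2x)")
```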
|
4,627,821 | <p>Consider the difference equations</p>
<p><span class="math-container">$$x(k+1) = f(x(k)) \qquad (1)$$</span></p>
<p>and</p>
<p><span class="math-container">$$y(k+1) = g(y(k)) \qquad (2)$$</span></p>
<p>where <span class="math-container">$g = f \circ f$</span>.</p>
<p>In <em>An Introduction to Difference Equations (3e)</em> by Saber Elaydi, in the proof of Theorem 1.16 (on p.32), it is mentioned that for <span class="math-container">$a$</span> an equilibrium of (1), if <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (2), then it is also an asymptotically stable equilibrium for (1).</p>
<p>How can we prove this?</p>
<p>My progress so far</p>
<ul>
<li>Since <span class="math-container">$g(x) = f(f(x))$</span>,
<ul>
<li><span class="math-container">$g'(x) = f'(f(x))f'(x)$</span></li>
<li><span class="math-container">$g''(x) = f''(f(x)) [f'(x)]^2 + f'(f(x)) f''(x)$</span></li>
<li><span class="math-container">$g'''(x) = f'''(f(x)) f'(x) [f'(x)]^2 + 2f''(f(x)) f'(x) f''(x) + f''(f(x)) f'(x) f''(x) + f'(f(x))f'''(x)$</span></li>
</ul>
</li>
<li>Assume <span class="math-container">$a$</span> is an asymptotically stable equilibrium for (2). There are two cases
<ul>
<li><span class="math-container">$|g'(a)| < 1$</span></li>
<li><span class="math-container">$|g'(a)| = 1$</span>, <span class="math-container">$g''(a) = 0$</span>, and <span class="math-container">$g'''(a) < 0$</span></li>
</ul>
</li>
<li>Note that <span class="math-container">$g'(a) = f'(f(a))f'(a) = [f'(a)]^2$</span></li>
<li>Hence when <span class="math-container">$|g'(a)| < 1$</span>, <span class="math-container">$|f'(a)| < 1$</span> as well. In this case <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (1)</li>
<li>Now assume <span class="math-container">$|g'(a)| = 1$</span>, <span class="math-container">$g''(a) = 0$</span>, and <span class="math-container">$g'''(a) < 0$</span>.</li>
<li>Since <span class="math-container">$g'(a) = [f'(a)]^2 \geq 0$</span>, <span class="math-container">$g'(a) = 1$</span>. There are two cases
<ul>
<li><span class="math-container">$f'(a) = 1$</span>.</li>
<li><span class="math-container">$f'(a) = -1$</span>.</li>
</ul>
</li>
<li>Assume <span class="math-container">$f'(a) = 1$</span>. In this case
<ul>
<li><span class="math-container">$g''(a) = f''(f(a))[f'(a)]^2 + f'(f(a)) f''(a) = 2f''(a) = 0$</span>. Hence <span class="math-container">$f''(a) = 0$</span></li>
<li><span class="math-container">$g'''(a) = f'''(f(a)) f'(a) [f'(a)]^2 + 2f''(f(a)) f'(a) f''(a) + f''(f(a)) f'(a) f''(a) + f'(f(a))f'''(a) = 2f'''(a) < 0$</span>. Hence <span class="math-container">$f'''(a) < 0$</span></li>
<li>It follows that <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (1)</li>
</ul>
</li>
<li><strong>My question is how can we proceed when <span class="math-container">$f'(a) = -1$</span></strong>
<ul>
<li>Theorem 1.16 states that if <span class="math-container">$f'(a) = -1$</span> and <span class="math-container">$-f'''(a) - 3/2(f''(a))^2 < 0$</span>, then <span class="math-container">$a$</span> is asymptotically stable.</li>
<li>But since this proposition is used to prove Theorem 1.16, using Theorem 1.16 to prove this proposition would be circular.</li>
</ul>
</li>
</ul>
| Adam Rubinson | 29,156 | <p><span class="math-container">$f\ $</span> is bounded. Suppose <span class="math-container">$\ y_1\ $</span> is a lower bound and <span class="math-container">$\ y_2\ $</span> is an upper bound of <span class="math-container">$\ f.\ $</span></p>
<p>Let <span class="math-container">$\ \varepsilon > 0.\ $</span> Since <span class="math-container">$\ f(0) \in [y_1,y_2]\ $</span> and <span class="math-container">$\ f\left( \frac{2(y_2 - y_1)}{\varepsilon} \right) \in [y_1,y_2],\ $</span> it follows, by the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem" rel="nofollow noreferrer">mean value theorem</a>, that <span class="math-container">$\ \exists\ x \in \left(0,\frac{2(y_2 - y_1)}{\varepsilon} \right)\ $</span> such that</p>
<p><span class="math-container">$$ f'(x) = \frac{ f\left(\frac{2(y_2 - y_1)}{\varepsilon} \right) - f(0) }{\frac{2(y_2 - y_1)}{\varepsilon} - 0},$$</span></p>
<p><span class="math-container">$$\implies \left\lvert f'(x) \right\rvert = \left\lvert \frac{ f\left(\frac{2(y_2 - y_1)}{\varepsilon} \right) - f(0) }{\frac{2(y_2 - y_1)}{\varepsilon} - 0} \right\rvert \leq \frac{ y_2 - y_1}{ \frac{2(y_2 - y_1)}{\varepsilon}} = \frac{\varepsilon}{2} < \varepsilon. $$</span></p>
|
1,684,124 | <p>Here is my attempt:</p>
<p>$$ \frac{2x}{x^2 +2x+1}= \frac{2x}{(x+1)^2 } = \frac{2}{x+1}-\frac{2}{(x+1)^2 }$$</p>
<p>Then I tried to integrate it, and I got $2\ln(x+1)+\frac{2}{x+1}+C$ as my answer. Am I right? Please correct me if I'm wrong.</p>
| 3SAT | 203,577 | <p>$$=\int\frac{2x+2}{x^2+2x+1}dx-2\int\frac{dx}{x^2+2x+1}$$</p>
<p>set $t=x^2+2x+1$ and $dt=(2x+2)dx$</p>
<p>$$=\int\frac{dt}{t}-2\int\frac{dx}{(x+1)^2}$$</p>
<p>Set $\nu=x+1$ and $d\nu=dx$</p>
<p>$$=\int\frac{dt}{t}-2\int\frac{d\nu}{\nu ^2}=\ln|t|+\frac{2}{\nu}+\mathcal C=\color{red}{\ln|x^2+2x+1|+\frac{2}{x+1}+\mathcal C}$$</p>
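<p>Both this antiderivative and the asker's $2\ln|x+1|+\frac{2}{x+1}+C$ agree, since $\ln|x^2+2x+1|=2\ln|x+1|$. A finite-difference sketch in Python confirming the result differentiates back to the integrand:</p>

```python
import math

def F(x):
    # Antiderivative obtained above (constant of integration omitted).
    return math.log(abs(x**2 + 2 * x + 1)) + 2 / (x + 1)

def f(x):
    # The original integrand 2x / (x^2 + 2x + 1).
    return 2 * x / (x**2 + 2 * x + 1)

h = 1e-6
for x in [0.5, 1.0, 2.0, 5.0, -0.5]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - f(x)) < 1e-6

print("F'(x) matches 2x/(x^2+2x+1) at the sample points")
```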
|
1,501,940 | <p>This is related to a <a href="https://math.stackexchange.com/questions/1501852/why-does-this-statement-not-hold-when-me-0/1501925#1501925">question</a> I just asked, that I now think was based on wrong assumptions.</p>
<p>It is true that if <span class="math-container">$f=a$</span> a.e. on the interval <span class="math-container">$[a,b]$</span>, then <span class="math-container">$f = a$</span> on <span class="math-container">$[a,b]$</span>. However, apparently it is not true for a general measurable set <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span>, which confuses me greatly.</p>
<p>I just finished the following proof which I thought showed that the statement was true for a general measurable set <span class="math-container">$E$</span>. Apparently, it's not. Could somebody please tell me 1) what's wrong with it, 2) how to fix it, 3) how to use it to show that the statement is not true for <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span> (or whichever kind of set it doesn't work for):</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous on the general measurable set <span class="math-container">$E$</span>. Suppose also that <span class="math-container">$E_{0}\subseteq E$</span> is the set of all points where <span class="math-container">$f \neq g$</span> are all contained (i.e., <span class="math-container">$\forall x \in E_{0}$</span>, <span class="math-container">$f \neq g$</span>), where <span class="math-container">$m(E_{0})=0$</span>.</p>
<p>Consider <span class="math-container">$|h|=|f-g|$</span>. <span class="math-container">$|h|$</span> is the composition of a continuous function and the linear combination of two continuous functions, so <span class="math-container">$|h|$</span> is continuous.</p>
<p>Now, <span class="math-container">$E_{0} = |h|^{-1}(\mathbb{R}\backslash\{0\})=h^{-1}((-\infty,0)\cup (0,\infty))$</span>. <span class="math-container">$(-\infty,0)\cup(0,\infty)$</span> is a union of open sets and therefore open. Since <span class="math-container">$|h|$</span> is continuous, <span class="math-container">$E_{0}$</span> is also open.</p>
<p>Since <span class="math-container">$E_{0}$</span> is open and <span class="math-container">$m(E_{0})=0$</span>, it must be empty (as otherwise, it must contain a nonempty interval, whose measure would be positive).</p>
<p>Therefore, since the set of points on which <span class="math-container">$f \neq g$</span> is empty, <span class="math-container">$f=g$</span> everywhere on <span class="math-container">$E$</span>.</p>
</blockquote>
| saz | 36,150 | <blockquote>
<p>Now $E_0 = h^{-1}(\mathbb{R} \backslash \{0\})$ [...] Since $E_0$ is open and $m(E_0)=0$, it must be empty.</p>
</blockquote>
<p>No, that's not correct. The continuity of $h$ gives that</p>
<p>$$U := h^{-1}(\mathbb{R} \backslash \{0\}) = \{x \in \mathbb{R}; g(x) \neq f(x)\}$$</p>
<p>is open. This set does <strong>not</strong> equal</p>
<p>$$E_0 = \{x \in E; g(x) \neq f(x)\} = U \cap E;$$</p>
<p>in particular $E_0$ does not need to be open (at least not in $\mathbb{R}$, it <em>is</em> open in $E$).</p>
<p>Just consider the following example: Set $E := \{0\} \cup [1,2]$ and</p>
<p>$$f(x) := \begin{cases}0, & x \leq 0,\\ x, & x \in [0,1], \\1, & x>1 \end{cases}$$</p>
<p>and $g := 1$. Obviously, $f:\mathbb{R} \to \mathbb{R}$ is continuous and $f=g$ almost everywhere on $E$. However,</p>
<p>$$E_0 = \{0\}$$</p>
<p>is not empty and $f|_E$ does not equal $1$.</p>
|
1,858,095 | <p>Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a function with:</p>
<p>$$f(x) = x - \arctan{x}$$</p>
<p>We consider the sequence $(x_{n})$ with $x_{0} > 0$ and $x_{n + 1} = f(x_{n})$, for any $n \in \mathbb{N}$.</p>
<p>Prove that $(x_{n})$ is convergent and find its limit.</p>
<p>So far, I've proved that $f(x) \geq 0$ for any $x \geq 0$. From this I've concluded that $(x_{n})$ is a positive sequence. Now I need to find $(x_{n})$'s monotonicity. I've calculated both $x_{n + 1} - x_{n}$ and $\frac{x_{n + 1}}{x_{n}}$. For the first I've got $- \arctan{x_{n}}$ and the second one gave me $1 - \frac{\arctan{x_{n}}}{x_{n}}$. From this point I don't know what to do next.</p>
<p>Thank you in advance!</p>
| amcalde | 168,694 | <p>Look at the graph: (from Wolfram Alpha)</p>
<p><a href="https://i.stack.imgur.com/5krNw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5krNw.png" alt="enter image description here"></a></p>
<p>The way to answer this question is to prove that, for $x > 0$, $0 < f(x) < x$. It should be clear that iterating the function only gets you closer to zero.</p>
<p>Does this help?</p>
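<p>A short numerical sketch of the iteration: for small $x$ we have $x-\arctan x\approx x^3/3$, so once the iterates fall below $1$ they collapse to $0$ very quickly:</p>

```python
import math

x = 2.0                      # any positive starting point works
trajectory = [x]
for _ in range(5):
    x = x - math.atan(x)     # the map f(x) = x - arctan(x)
    trajectory.append(x)

# Positive, strictly decreasing, and already essentially 0 after 5 steps.
assert all(t >= 0 for t in trajectory)
assert all(b < a for a, b in zip(trajectory, trajectory[1:]))
assert trajectory[-1] < 1e-12
print(trajectory)
```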
|
218,020 | <p>I came across the following statement: Let $R$ be a complete local Noetherian commutative ring. If $A$ is a commutative $R$-algebra that is finitely generated and free as a module over $R$, then $A$ is a semi-local ring that is the direct product of local rings. (I'm unsure if completeness or the Noetherian condition is actually relevant to this; but this is the specific fact being used)</p>
<p>I can prove it is a semi-local ring: Let $m$ be the maximal ideal of $R$, then $\frac{A}{mA}$ is finite dimensional as a $\frac{R}{m}$ vector space, and thus Artinian. Therefore, it only has a finite number of maximal ideals, and its maximal ideals correspond to maximal ideals of $A$ containing $mA$. But all maximal ideals of $A$ contain $mA$: To see this, this is equivalent to the Jacobson radical containing $mA$, which is equivalent to $1-x$ being a unit in $A$ for any $x \in mA$. The inverse is just $1+x+x^2+\cdots$, which exists by completeness.</p>
<p>But why is $A$ necessarily the direct product of local rings?</p>
| Gustavo Hospital | 219,590 | <p>If you have any sum s and want to know the smallest number which digits add up to that sum, you just have to use modulo 9 arithmetic. </p>
<p>You calculate s mod 9, that will be your first cypher to the left, then you solve for q and that will be how many 9s you add to the right.</p>
<p>For example:</p>
<p>16 mod 9 is 7.
Solving for q in the equation 16 = 9q + 7 you get q=1, so 7 to the left and one 9 to the right = 79.</p>
<p>This works because every number is congruent to the sum of its digits modulo 9: a number with digit sum s needs at least s/9 digits (rounded up), and this construction attains that minimum. </p>
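<p>A small Python sketch of this rule, with one hedge the answer glosses over: when 9 divides s, the "leading digit" s mod 9 would be 0, and the answer is then just q nines (e.g. s = 18 gives 99). A brute-force search confirms the construction for small sums:</p>

```python
def smallest_with_digit_sum(s):
    # Leading digit s mod 9 followed by q = s // 9 nines;
    # when 9 | s, drop the leading 0 and use q nines alone.
    r, q = s % 9, s // 9
    return int(('' if r == 0 else str(r)) + '9' * q)

def digit_sum(n):
    return sum(int(c) for c in str(n))

for s in range(1, 31):
    n = 1
    while digit_sum(n) != s:     # brute-force the true minimum
        n += 1
    assert n == smallest_with_digit_sum(s)

assert smallest_with_digit_sum(16) == 79   # the worked example above
print("construction matches brute force for s = 1..30")
```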
|
110,896 | <p>Now I have a function, say $f(k,z)=e^{-kz}(1+kz)$</p>
<p>I want to find the $n$th $\log$ derivative with respect to z.
like $(z\partial_z)^{(n)}f(k,z)$ (or $(\partial_{\ln z})^{(n)}f(k,z)$
if you like), where the $(n)$ denotes that we take the derivative $n$
times. </p>
<p>I found the answer in <a href="https://mathematica.stackexchange.com/questions/9598">this question</a> quite helpful for me to find a general expression for $\partial_z^{(n)}f(k,z)$; however, I don't know how to generalize it to $\log$ derivative case using <em>Mathematica</em>. Any suggestions?</p>
| J. M.'s persistent exhaustion | 50 | <p>There is a nice closed form in terms of the Stirling subset numbers for these:</p>
<pre><code>lDerivative[n_Integer?Positive][f_] := Composition[Function, Evaluate] @
(StirlingS2[n, Range[n]].Table[#^\[FormalK] Derivative[\[FormalK]][f][#],
{\[FormalK], 1, n}])
</code></pre>
<p>For instance,</p>
<pre><code>lDerivative[8][f][x] == Nest[x D[#, x] &, f[x], 8] // Expand
True
lDerivative[2][Function[z, Exp[-k z] (1 + k z)]][z] // Simplify
E^(-k z) k^2 z^2 (-2 + k z)
</code></pre>
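<p>The closed form rests on the identity $(z\partial_z)^n f(z)=\sum_{k=1}^{n}\left\{{n\atop k}\right\}z^k f^{(k)}(z)$, which on monomials $f(z)=z^m$ reduces to the classical relation $\sum_{k}\left\{{n\atop k}\right\}m^{\underline{k}}=m^n$ (falling factorials). That relation is easy to confirm outside <em>Mathematica</em>; a Python sketch:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling subset numbers {n over k} via the standard recurrence.
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(m, k):
    # Falling factorial m (m-1) ... (m-k+1).
    out = 1
    for i in range(k):
        out *= m - i
    return out

# sum_k {n over k} * falling(m, k) == m^n, the identity behind lDerivative.
for n in range(1, 10):
    for m in range(10):
        assert sum(stirling2(n, k) * falling(m, k) for k in range(1, n + 1)) == m ** n

print("Stirling identity verified for 1 <= n < 10, 0 <= m < 10")
```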
|
3,409,068 | <p>I have to find (and prove) the infimum and supremum of the following set:</p>
<p><span class="math-container">$M_1:=\{x\in\mathbb{Q} \mid x^2 < 9\}$</span></p>
<p>On first glance, I would say:</p>
<p><span class="math-container">$\inf M_1=-3 $</span><br>
<span class="math-container">$\sup M_1=3$</span></p>
<p>Now I have to prove that these really are the infimum and supremum of the set, and that's the point where I'm having problems. According to the definition of <span class="math-container">$\inf$</span> and <span class="math-container">$\sup$</span>, this means, that <span class="math-container">$-3$</span> is the biggest lower bound and 3 is the lowest upper bound:</p>
<p><span class="math-container">$\forall x\in\mathbb(M_1): -3 \leq x \leq 3$</span></p>
<p>We can see, that -3 and 3 are not elements of M1, which means:</p>
<p><span class="math-container">$\forall x\in\mathbb(M_1):-3<x<3$</span></p>
<p>But how can I show that -3 and 3 are the <span class="math-container">$\textbf{biggest / smallest}$</span> bound? I mean, for example, what if there is a number bigger than -3 that acts like a lower bound to the set? Obviously there isn't a bigger lower bound, but how can I mathematically show it? Do you guys have any advice? Thanks in advance, and sorry for my English :D</p>
| Mark Viola | 218,419 | <p>In order to be analytic at <span class="math-container">$x=0$</span>, the function and all of its derivatives must exist in a neighborhood of <span class="math-container">$x=0$</span>. However, if <span class="math-container">$f(x)=x\log(|x|)$</span> and <span class="math-container">$f(0)=0$</span>, we see that <span class="math-container">$f'(0)=\lim_{h\to 0}\frac{h\log(|h|)}{h}$</span> fails to exist. Therefore, <span class="math-container">$f$</span> is not analytic at <span class="math-container">$x=0$</span>.</p>
|
69,711 | <blockquote>
<p>Find an equation of the tangent line to the graph of $y= \sqrt{x-3}$ that is perpendicular to $6x+3y-4=0$. </p>
</blockquote>
<p>I don't understand what it's asking. Is this the normal line? How do I solve this?</p>
| Altar Ego | 11,020 | <p>First, determine the slope of the line $6x + 3y - 4 = 0$. Here, $m = -2$.<br><br>
Then we calculate the perpendicular slope to $-2$ as $1/2$ (why?).<br><br></p>
<p>Then we want to find where the slope of the tangent to $y = \sqrt{x - 3}$ is equal to $1/2$.<br>
In other words, where $y' = \frac{1}{2\sqrt{x - 3}} = 1/2$. <br><br>
Can you take it from here?</p>
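<p>If you want to check the final numbers, a short SymPy sketch can carry the hint to the end (the symbol names here are mine, not part of the original answer):</p>

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sqrt(x - 3)
m = sp.Rational(1, 2)                          # slope perpendicular to -2

x0 = sp.solve(sp.Eq(sp.diff(y, x), m), x)[0]   # where the curve's slope is 1/2
y0 = y.subs(x, x0)                             # point of tangency
tangent = sp.expand(y0 + m * (x - x0))         # tangent line as a function of x
```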
|
1,811,443 | <p>Let $(a_n)$ be a sequence of rational numbers, where <strong>all rational numbers are terms</strong>. (<em>i.e. enumeration of rational numbers</em>)</p>
<p>Then, is there any convergent sub-sequence of $(a_n)$?</p>
| E. Joseph | 288,138 | <p>Yes there is. </p>
<p>Consider the interval $I:=[0,1]$. Since it contains infinitely many rationals, your sequence takes values in $I$ infinitely often.</p>
<p>So you can extract a sub-sequence such that all the values $(a_{\varphi(n)})$ of the new sequence are in $I$ :</p>
<p>$$\forall n \quad a_{\varphi(n)}\in I.$$</p>
<p>Since $I$ is compact and $(a_{\varphi(n)})$ is a sequence with values in this compact set, you can extract a convergent sub-sequence of $(a_{\varphi(n)})$.</p>
<p>Which gives you the convergent sub-sequence of $(a_n)$ you wanted.</p>
|
2,497,875 | <p>Define $\sigma: [0,1]\rightarrow [a,b]$ by $\sigma(t)=a+t(b-a)$ for $0\leq t \leq 1$. </p>
<p>Define a transformation $T_\sigma:C[a,b]\rightarrow C[0,1]$ by $(T_\sigma(f))(t)=f(\sigma(t))$ </p>
<p>Prove that $T_\sigma$ satisfies the following:</p>
<p>a) $T_\sigma(f+g)=T_\sigma(f)+T_\sigma(g)$</p>
<p>b) $T_\sigma(fg)=T_\sigma(f)*T_\sigma(g)$</p>
<p>c) $T_\sigma(f)\leq T_\sigma(g)$ iff $f\leq g$</p>
<p>d) $||T_\sigma(f)||=||f||$</p>
<p>e) $T_\sigma$ is both 1-1 and onto, moreover, $(T_\sigma)^{-1}=T_{\sigma^{-1}}$</p>
<p>This is a problem from "A Short Course on Approximation Theory."
It looks like an easy problem and $\sigma$ is clearly an affine transformation, but I cannot figure out how to work with the transformation within another function. The results from this are used to extend the Weierstrass theorem from $C[0,1]$ to $C[a,b]$</p>
| lab bhattacharjee | 33,337 | <p>Let $1-\sqrt x=y\implies x=(1-y)^2$</p>
<p>$$\lim_{x\to 1}\frac{\sin(1-\sqrt{x})}{x-1}=\lim_{y\to0}\dfrac{\sin y}y\cdot\lim_{y\to0}\dfrac1{y-2}=?$$</p>
|
129,287 | <p>Suppose $p(x_1, x_2, \cdots, x_n)$ is a symmetric polynomial. Given any univariate polynomial $u$, we can define a new polynomial $q(x_1, x_2, \cdots, x_{n+1})$ as</p>
<p>$q(x_1, x_2, \cdots, x_{n+1}) = u(x_1)p(x_2, x_3, \cdots, x_{n+1}) + u(x_2)p(x_1, x_3, \cdots, x_{n+1}) + \cdots \\ \phantom{q(x_1, x_2, \cdots, x_{n+1}) = } \qquad + u(x_{n+1})p(x_1, x_2, \cdots, x_n).$</p>
<p>It is easy to verify that $q$ is a symmetric polynomial. My question is: Is there a name already defined for such a mapping from $(p, u)$ to $q$? Thanks.</p>
| P Vanchinathan | 22,878 | <p>Don't know if there is a name. Possibly this is known to Newton; the inductive proof of Newton's theorem on elementary symmetric polynomials goes along similar lines.</p>
<p>When we start with some polynomial and take the sum over its orbit under $S_{n+1}$ it will be invariant under $S_{n+1}$. You are starting with $u(x_{n+1}) p(x_1,x_2,\ldots, x_n)$, and summing it over the generating set of $n$ transpositions of $S_{n+1}$. </p>
|
3,210,295 | <p>I wondered if anybody knew how to calculate a percentage loss/gain of a process over time?</p>
<p>Suppose for example Factory A conducted activity over 6 periods.</p>
<p>In t-5, utilisation of resources was: 80%<br>
t-4: 70%<br>
t-3: 80%<br>
t-2: 100%<br>
t-1: 90%<br>
t: 75%</p>
<p>Therefore, but for the exception of two periods ago, at 100% utilisation, there has been a utilisation loss. </p>
<p>Is it possible to calculate cumulative utilisation loss over this period?</p>
<p>Any help would be appreciated, </p>
<p>Best,</p>
<p>Andrew</p>
| Community | -1 | <p>For this example, just over 1 day's production has been lost if you think of all days having similar normal production, as Ross Millikan's answer shows. Some processes compound though. If this were the percentage left of a given stock value on each day without reset, then it works as follows:<span class="math-container">$$1\cdot0.8\cdot0.7\cdot0.8\cdot1\cdot0.9\cdot 0.75=0.3024=30.24\%$$</span> so your stock would be worth 30.24% of its original value, having lost 69.76% of its original value. If it were gains then without withdrawing any of it, it would be: <span class="math-container">$$1.8\cdot1.7\cdot1.8\cdot2\cdot1.9\cdot1.75 -1= 3562.82\%$$</span> There are more possible scenarios, but I think that is enough.</p>
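<p>The compounding case can be sketched in a few lines of Python (the list of factors simply encodes the six utilisation percentages from the question):</p>

```python
# Utilisation factors for periods t-5 ... t
factors = [0.8, 0.7, 0.8, 1.0, 0.9, 0.75]

remaining = 1.0
for f in factors:
    remaining *= f       # compound period by period

loss = 1.0 - remaining   # fraction of the original value lost
# remaining ~ 0.3024 (30.24% left), loss ~ 0.6976 (69.76% lost)
```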
|
1,977,031 | <p>How would I parameterise this curve in 3D?
I am confused since the diagrams deal with three variables in total – should I use complex numbers? I'm only used to two diagrams and haven't encountered a problem with three like this.</p>
| hamam_Abdallah | 369,188 | <p>Hint:</p>
<p>in $xy$ plane, you use polar coordinates as follows.</p>
<p>$x=r\cos(t)$ and $y=r\sin(t)$ with</p>
<p>$r=ae^{-bt}$ and you choose the right parameters $a,b.$</p>
<p>to get $z$ , you replace $x$.</p>
|
2,333,847 | <p>A function $f(x) = k$ and the domain is $\{-2,-1,\dotsc,3\}$. Would I say
$$x = \{-2,-1,\dotsc,3\}\quad\text{or}\quad x \in \{-2,-1,\dotsc,3\} \ ?$$
Thanks. </p>
| DMcMor | 155,622 | <p>Here's one way to look at it:</p>
<p>When you write $x = \lbrace -2, -1, \dotsc ,3\rbrace$ you are saying "$x$ is equal to the set consisting of the integers $-2$ through $3$". If $x$ were really a set then you'd be fine, but if you want to say that the set consists of the possible values for $x$, that is, that it is the domain of $f$, then saying $x = \lbrace -2, -1, \dotsc ,3\rbrace$ isn't true. When you write $x \in \lbrace -2, -1, \dotsc, 3\rbrace$ you are saying "$x$ is an element of the set consisting of the integers $-2$ through $3$", which is correct.</p>
|
80,966 | <p>I wonder if it there exists a topological compact group $G$ (by compact, I mean Hausdorff and quasi-compact) and a non-zero group morphism
$\phi : G \to \mathbb{Z}$ (without assuming any topological condition on this morphism).</p>
<p>For compact Lie groups, using the exponential map, the answers is no, but in general I don't know. </p>
| Sean Eberhard | 20,598 | <p>Sorry for resurrecting such an old question, but I think we can give a much simpler proof here. We'll reduce the problem from $G$ to the Bohr compactification $B\mathbf{Z}$ of $\mathbf{Z}$, then from $B\mathbf{Z}$ to the profinite completion $\hat{\mathbf{Z}}=\prod_p\mathbf{Z}_p$ of $\mathbf{Z}$, and then we'll argue directly.</p>
<p>Let $\phi:G\to\mathbf{Z}$ be a homomorphism and fix $x\in G$. The map $\mathbf{Z}\to G$ extending $1\mapsto x$ induces a map $B\mathbf{Z}\to G$ such that $1\mapsto x$, and thus we obtain a map $\phi':B\mathbf{Z}\to \mathbf{Z}$ such that $\phi'(1)=\phi(x)$.</p>
<p>Recall that to construct $B\mathbf{Z}$ one takes the dual of $\mathbf{Z}$, namely $\mathbf{R}/\mathbf{Z}$, strips the topology to get the discrete group
$\mathbf{R}_d/\mathbf{Z}\cong\mathbf{R}_d\times\mathbf{Q}/\mathbf{Z}$, then takes the dual again. The result is that $B\mathbf{Z} \cong B\mathbf{R}\times\hat{\mathbf{Z}}$. Since $B\mathbf{R}$ is divisible, $\phi'$ must vanish on $B\mathbf{R}$. Since $\prod_{p\neq 2}\mathbf{Z}_p$ is infinitely $2$-divisible and $\mathbf{Z}_2$ is infinitely $3$-divisible, $\phi'$ vanishes on $\hat{\mathbf{Z}}$. Thus $\phi'$ is identically $0$, so $\phi(x)=\phi'(1)=0$.</p>
|
1,642,029 | <p>I'm looking at my textbooks steps for calculating the complexity of bubble sort...and it jumps a step where I don't know what exactly they did. </p>
<p><a href="https://i.stack.imgur.com/XaztP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XaztP.png" alt="enter image description here"></a></p>
<p>I see everything up to that point using summation rules and all, but am unsure about that jump. Any help on explaining more how they got that?</p>
| Svetoslav | 254,733 | <p>You can take as many subsequences of $\{a_n\}$ as you want. </p>
<p>The subsequence $\{b_n\}$ is already <strong>convergent</strong>, so any further subsequence of $\{b_n\}$ will be convergent and will have the same limit as $\{b_n\}$.</p>
<p>If you have omitted only a finite number of terms from $\{a_n\}$ to get $\{b_n\}$ then by throwing away the terms of $\{b_n\}$ from $\{a_n\}$ you will end up with the same finite number of terms in $\{a_n\}$ which you have omitted first. Otherwise, if $\{a_n\}\setminus \{b_n\}$ is infinite, you can repeat the procedure.</p>
|
660,315 | <p>Let $A,B,C$ be sets. Identify a condition such that $A \cap C = B \cap C$ together with your condition implies $A=B$. Prove this implication. Show that your condition is necessary by finding an example where $A \cap C = B \cap C$, but $ A \neq B$</p>
<p>Edit: I've read the wrong proposition/definition. UGH! The question was probably about having sets being equal, not empty. </p>
<p>Now, suppose that $ A \neq B$... that would mean that $A$ isn't equal to $B$. So they are different sets, but $A \cap C = B \cap C$ are equal sets. </p>
<hr>
<p>I'm lost on this. I need to find a condition, but where do I even start? These are my thoughts about the question so far. </p>
<p>We need to find a condition for $A \cap C = B \cap C$. </p>
<p>Proposition 3.1.12 states that if $A$ and $B$ are both empty sets, then $A =B$.</p>
<p>So $A \cap C$ and $B \cap C$ are both empty sets which is why $A \cap C = B \cap C$.</p>
<p>It seems that there are empty sets everywhere because proposition 3.1.12 claims that if $A$ and $B$ are empty set, then $A =B$. It's like there aren't any elements at all. There's nothing. </p>
<p>We need to find a condition that demonstrates that $A \cap C = B \cap C$ which are empty sets, but $A \neq B$ means that there are elements in $A$ and $B$. </p>
<p>How is this even possible? </p>
<p>$A \cap C$ by definition 3.2.1 is $\{x : x \in A \land x \in C\}$</p>
<p>x belongs in A and x belongs in C. </p>
<p>$B \cap C$ by definition 3.2.1 is $\{x : x \in B \land x \in C\}$</p>
<p>x belongs in B and x belongs in C</p>
<p>The only way I could think of is a contradiction to this... but that would mean that $A = B$ ... there are no elements in A and B, but $A \cap C \neq B \cap C$ means that there are elements. There may not be elements in A and B, but there are elements in C. </p>
| Mauro ALLEGRANZA | 108,274 | <p>The condition for having </p>
<blockquote>
<p>$A \cap C = B \cap C$ but at the same time $A \neq B$</p>
</blockquote>
<p>can be met, for example, when</p>
<blockquote>
<p>$A \cap C = \emptyset = B \cap C$</p>
</blockquote>
<p>This does not imply that $A$ and $B$ are both <em>empty</em>.</p>
<p>Let $\quad C = \{ x \in \mathbb{N} : Even(x) \}$</p>
<p>and let $\quad A = \{ 1,3,5,7 \} \quad$ and let $\quad B = \{ 9,11,13,15 \}$.</p>
<p>Because both $A$ and $B$ do not contain <em>even</em> numbers, we have that :</p>
<blockquote>
<p>$A \cap C = \emptyset = B \cap C$</p>
</blockquote>
<p>but $A \neq B$.</p>
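<p>The counterexample is easy to verify mechanically; here is a small Python check (using a finite stand-in for the even numbers, which is enough for these two sets):</p>

```python
C = set(range(0, 20, 2))           # even numbers 0, 2, ..., 18
A = {1, 3, 5, 7}
B = {9, 11, 13, 15}

assert A & C == set() == B & C     # A ∩ C = ∅ = B ∩ C
assert A != B                      # yet A ≠ B
```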
|
1,385,936 | <p><em>I was wondering how to approximate $\sqrt{1+\frac{1}{n}}$ by $1+\frac{1}{2n}$ without using Laurent Series.</em></p>
<p>The reason why I ask is because using this approximation, we can show that the sequence $(\cos(\pi\sqrt{n^{2}-n}))_{n=1}^{\infty}$ converges to $0$. This is done using a mean-value theorem or Lipschitz (bounded derivative) argument, where</p>
<p>$$
|\cos(\pi{\sqrt{n^{2}-n}})-\cos(\pi{n}-\pi/2)|=|\cos(\pi{\sqrt{n^{2}-n}})|
\leq \pi\left|\sqrt{n^{2}-n}-n+\tfrac{1}{2}\right| = \pi \left|\frac{-1/4}{\sqrt{n^{2}-n}+n-1/2}\right|
$$</p>
<p>I looked up $\sqrt{1+\frac{1}{n}}$ and saw that this approximation can be obtained using Laurent series at $x=\infty$. I am not familiar with Laurent series since I have not had any complex analysis yet, but I was wondering if there was another naive way to see this?</p>
| Mark Bennet | 2,906 | <p>This is not so different from other approaches (exactly equivalent to some), but has a different perspective based on pure computation. </p>
<p>Suppose $a$ is an estimate for the square root of $b$ so that $$a^2=b+h$$where $h$ is taken to be small (see below).</p>
<p>Then $$\left(a-\frac h{2a}\right)^2=a^2-h+\left(\frac {h^2}{4a^2}\right)=b+\left(\frac {h^2}{4a^2}\right)$$</p>
<p>And this gives a better estimate provided $|h|\gt\cfrac {h^2}{4a^2}$ or $|h|\lt 4a^2$ i.e. $h$ is small (definition).</p>
<p>If $h$ is small then the process can be repeated, and is clearly seen to be convergent (the error reduces faster than by a constant ratio).</p>
<p>This does not work, obviously, for $a=0$. For small $a$ it can sometimes help computationally to compute the square root of $\frac 1a$.</p>
|
65,192 | <p>With multivariate parameters, curves drawn by ParametricPlot3D on a single argument come out jagged (using the default Runge-Kutta ODE integration intervals?). </p>
<p>Raising Mesh to 200 reduces the jagging somewhat (large-step secants appear instead of tangents), but the line color from PlotStyle still does not come through, e.g. as it does for the directly computed helical lines shown. How can I get the PlotStyle line colors with smoother lines? </p>
<pre><code>thmax = 6 Pi;
EQU = {SI'[th] == Sin[PH[th]], SI[0] == .123,
   PH'[th] == Cos[th], PH[0] == -1.234,
   Z'[th] == 2 Cos[PH[th]] Cos[SI[th]],
   R'[th] == 3 Sin[PH[th]] Cos[SI[th]],
   R[0] == 1.321, Z[0] == 0.};
NDSolve[EQU, {SI, PH, R, Z}, {th, 0, thmax}];
{ph[u_], si[u_], r[u_], z[u_]} =
  {PH[u], SI[u], R[u], Z[u]} /. First[%];
ParametricPlot[{z[th], r[th]}, {th, 0, thmax},
 AspectRatio -> Automatic, PlotLabel -> MERIDIAN_,
 PlotStyle -> {Thick, Magenta}]
ParametricPlot3D[{r[th] Cos[th + t], z[th], r[th] Sin[th + t]},
 {t, 0, 2 Pi}, {th, 0, thmax}, PlotLabel -> SURFACE_]
SpC1 = ParametricPlot3D[{r[th] Cos[th], z[th], r[th] Sin[th]},
   {t, 0, 2 Pi}, {th, 0, thmax}, PlotStyle -> {Thick, Magenta},
   PlotLabel -> SPACE_CURVE, Mesh -> 20];
SpC2 = ParametricPlot3D[{2 Sin[th], th, 1.8 Cos[th]}, {th, 0, thmax/3},
   PlotLabel -> HELIX_, PlotStyle -> {Thick, Magenta}];
Show[SpC1, SpC2]
</code></pre>
| mgamer | 19,726 | <p>I'm not quite sure whether I understood your problem right, but I increased the Mesh, set the PlotPoints higher (I just chose a higher number) and set MaxRecursion to 1, i.e.:</p>
<pre><code>SpC1 = ParametricPlot3D[{r[th] Cos[th], z[th], r[th] Sin[th]}, {t, 0,
2 Pi}, {th, 0, thmax}, PlotStyle -> {Thick, Magenta},
PlotLabel -> SPACE_CURVE, Mesh -> 100, PlotPoints -> 200,
MaxRecursion -> 1]
</code></pre>
<p>and</p>
<pre><code>SpC2 = ParametricPlot3D[{2 Sin[th], th, 1.8 Cos[th]}, {th, 0,
thmax/3}, PlotLabel -> HELIX_, PlotStyle -> {Thick, Magenta},
PlotPoints -> 200, MaxRecursion -> 1]
</code></pre>
<p>the result was than quite different</p>
<p><img src="https://i.stack.imgur.com/BwNV1.png" alt="new picture"></p>
|
4,144,083 | <p>I am currently doing an internship in a research laboratory ( I am in my third year of Bachelor ) and I'm really struggling with the things I have to do.</p>
<p>For instance, here's something I'm having trouble with.</p>
<p>Let <span class="math-container">$L$</span> be a finite Galois extension of <span class="math-container">$\mathbb{Q}$</span>, <span class="math-container">$O_L$</span> its ring of integers, and <span class="math-container">$I$</span> an ideal of <span class="math-container">$O_L$</span>. Let <span class="math-container">$K$</span> be the decomposition field of <span class="math-container">$I$</span>. Let <span class="math-container">$R=I\cap O_K$</span>. Suppose <span class="math-container">$n=[L : \mathbb{Q} ]$</span> and <span class="math-container">$g=[ K : \mathbb{Q} ]$</span>. Suppose also that we have a basis <span class="math-container">$(b_i)_{1 \leq i \leq n/g}$</span> of <span class="math-container">$O_L$</span> over <span class="math-container">$O_K$</span>.</p>
<p>In order to find an isomorphism between <span class="math-container">$(O_K/R)^{n/g}$</span> and <span class="math-container">$O_L/I$</span>, I wanted to proceed like that :</p>
<p><span class="math-container">$(O_K)^{\frac{n}{g}} \overset{f_1}{\longrightarrow} O_L \overset{f_2}{\longrightarrow} O_L /I$</span></p>
<p><span class="math-container">$f_1$</span> being : <span class="math-container">$ (x_1,\cdots , x_{n/g}) \longrightarrow \sum\limits_{i=1}^{n/g} x_i b_i$</span></p>
<p>and <span class="math-container">$f_2$</span> being the canonical surjection.</p>
<p>In order to find my isomorphism, I need to prove that the kernel of the composition of <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> is <span class="math-container">$R^{\frac{n}{g}}$</span>.</p>
<p>The Kernel of <span class="math-container">$f_2$</span> being <span class="math-container">$I$</span>, what's left to prove is that :</p>
<p><span class="math-container">$R^{\frac{n}{g}}=\{ x\in (O_K)^{\frac{n}{g}} ~:~ f_1(x)\in I \}$</span></p>
<p>The <span class="math-container">$\subset$</span> of the equality is easy, but I can't prove the <span class="math-container">$\supset$</span>.</p>
<p>Please forgive my mistakes, I'm still learning, and English isn't my first language.</p>
<p>Thank you for your help !</p>
| hm2020 | 858,083 | <p><strong>Question:</strong> "Please forgive my mistakes, I'm still learning, and English isn't my first language. Thank you for your help!"</p>
<p><strong>Answer:</strong> It seems you may reduce to the case of <span class="math-container">$I$</span> being a power of a prime ideal. Since <span class="math-container">$I \subseteq A:=\mathcal{O}_L$</span> it follows there are maximal ideals
<span class="math-container">$\mathfrak{p}_i$</span> for <span class="math-container">$i=1,..,n$</span> with</p>
<p><span class="math-container">$$I=\prod_i \mathfrak{p}_i^{l_i}.$$</span></p>
<p>You get a diagram of exact sequences <span class="math-container">$\require{AMScd}$</span></p>
<p><span class="math-container">\begin{CD}
\mathcal{O}_L @>>> \mathcal{O}_L/I @>>> \oplus_i\mathcal{O}_L/\mathfrak{p}_i^{l_i} \\
@V V V @VV V @VVV \\
\mathcal{O}_L @>>> \mathcal{O}_L/\mathfrak{p}_i^{l_i} @>>> \mathcal{O}_L/\mathfrak{p}_i^{l_i}
\end{CD}</span></p>
<p>where the middle vertical arrow is the canonical map and the rightmost vertical arrow is the projection map. In the Neukirch book on ramification theory they develop the theory of decomposition fields and decomposition groups in the case when <span class="math-container">$I:=\mathfrak{p_1}$</span> and <span class="math-container">$l_1=1$</span>. A good idea could be to try to develop this theory for powers of prime ideals.</p>
|
2,908,361 | <p>I tried to solve this inequality by taking the square outside the floor function $[y]$ (greatest integer less than $y$)but it was wrong since if $x=2.5$ then $[x]= 2$ and $x^2=4$ while $[x^2]=[6.25]=6$.</p>
| dmtri | 482,116 | <p>I think the answer should be: $x\le-\sqrt{27}$, or $-5\le{x}\le-\sqrt{22}$, or $0\le{x}$.
Here is a sketch:</p>
<p>For $$-4\le{x}\le-1$$ it is $$x^2+5x+4\le0$$ or $$x^2+5x\le-4$$, but also
$$ [x^2]+5[x]\le{x^2+5x}$$
so there is no solution in $[-4, -1]$.</p>
<p>For $-5\le{x}\lt-4$ it is $[x]=-5$ and so $5[x]=-25$, therefore $[x^2]-25\gt-4 \iff [x^2]\geqslant22 \iff x^2\geqslant22 \implies x\le{-\sqrt{22}}$.</p>
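<p>Assuming the inequality being solved is $[x^2]+5[x]>-4$ (which is what the steps in the sketch suggest; the question itself does not restate it), the three pieces of the answer can be spot-checked numerically. This is only a sanity check, not a proof:</p>

```python
import math

def holds(x):
    # assumed inequality: [x^2] + 5[x] > -4
    return math.floor(x * x) + 5 * math.floor(x) > -4

assert holds(-5.5)                 # in x <= -sqrt(27) ~ -5.196
assert holds(-4.8)                 # in -5 <= x <= -sqrt(22) ~ -4.690
assert not holds(-4.5)             # between -sqrt(22) and -4: excluded
assert not holds(-3.0)             # no solutions in [-4, -1]
assert not holds(-0.5)             # excluded just below 0
assert holds(0.0) and holds(2.5)   # 0 <= x
```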
|
336,943 | <p>Given $P(A) = 0.5$ and $P(A \cup (B^c \cap C^c)^c)=0.8$.</p>
<p>Determine $P(A^c \cap (B \cup C))$.</p>
<p>I know from DeMorgans law that: $(B^c \cap C^c)^c = (B \cup C)$.</p>
<p>Edit:</p>
<p>Also how can I "prove" that P(X)=P(Y) if and only if $P(X \cap Y^c) = P(X^c \cap Y)$? </p>
| Berci | 41,488 | <ol>
<li>Let $X:=B\cup C$; as you noted, we have $P(X\cup A)=0.8$, and $P(X\cap A^c)=P(X\setminus A)$ is the question. For this, use that $X\setminus A$ and $A$ are disjoint and that $(X\setminus A)\cup A=X\cup A$. (It all becomes clear once you roughly sketch them.)</li>
<li>As $X\cap Y$ is disjoint to $X\setminus Y=X\cap Y^c$ and their union is $X$, we have
$$P(X)=P(X\cap Y)+P(X\setminus Y)\,.$$
Also write it for $Y$. This implies the second statement.</li>
</ol>
|
1,316,008 | <p><img src="https://i.stack.imgur.com/xNGPi.png" alt="enter image description here"></p>
<p>The problem is shown in the image. I'm not able to post images yet. What are the next steps to find how tall the triangle is? So far I see that the 3 triangles are similar; however, even with these similarities and the fact that $AP^2+PB^2=100$, I'm unable to move forward. </p>
| Narasimham | 95,860 | <p>The angles in the figure are not drawn to proper angular proportion. BY is not convincing as a bisector of $XBA$. </p>
<p>Anyway, we can show the error verbally.</p>
<p>If $PAB + PBA$ is 90 degrees, then, since these are the bisected halves, the doubled angles sum</p>
<p>$YAB + XBA$ must be 180 degrees.</p>
<p>This makes AY parallel to BX by virtue of Euclid's very early theorems on parallelism.</p>
<p>AY is parallel to BX and so C is of infinite height or distance from AB.
<img src="https://i.stack.imgur.com/QEFeK.png" alt="Correct Construction"></p>
|
336,827 | <p>A covering map $p:C\to X$ is finite when for each $x\in X$ we have $|p^{-1}(x)|<\infty.$ I have to prove that such a covering map has to be closed. I'm having trouble with it. </p>
<p>When $p$ is a covering map, we can take open neighborhoods $U_x$ of every point $x\in X$ such that $p^{-1}(U_x)$ is a disjoint union of open neighborhoods $U_{x_i}$ of $x_i$, where $\{x_1,\ldots,x_n\}=p^{-1}(x)$, and $p$ maps each $U_{x_i}$ homeomorphically onto $U_x$. I see that when I have a closed subset $A\subseteq C$ such that $p(A)\subseteq U_x$ for a certain $x\in X$, then $p(A)$ is closed. That's because in this case $A$ is a disjoint union of $A\cap U_{x_i}$ and each of these sets is closed. (Well, certainly they're closed in $U_{x_i}.$ I'm not sure why it must be closed in $C$, but I think it must.) Since $p$ is a homeomorphism on $A\cap U_{x_i}$, we have that each $p(A\cap U_{x_i})$ is closed. (Again, certainly in $U_x$, and I think in $X$ too.) Since there are finitely many of them, their union, that is $p(A)$, is closed too.</p>
<p>I'm sure there are problems with this reasoning, which show how little I understand of topology, but I think the gist of it is right. But I don't see how I can make this local property global. What if $p(A)$ is large? And what if $A$ spans several components of $C$? </p>
<p>This is a homework question.</p>
| Georges Elencwajg | 3,217 | <p>Let $(U_i)_{i\in I}$ be an arbitrary open cover of a topological space $X$.<br>
The crucial remark is that a subset $F\subset X$ is closed in $X$ if and only if for each $i\in I$ the intersection $F\cap U_i$ is closed in $U_i$. </p>
<p>In your situation, if you have a closed subset $A\subset C$ you should apply the above to $F=f(A)$ and to a trivializing cover $(U_i)_{i\in I}$ of your covering $p:C\to X$.<br>
In a nutshell: you may suppose that $p$ is a trivial covering with finite fibers.</p>
<p><strong>Remark</strong><br>
The result is false if the covering has infinite fibers.<br>
A counterexample is given by the covering of the unit circle in the complex plane $$p:\mathbb R\to S^1:t\mapsto \exp (2\pi it)$$ and the closed set $A=\{n+1/n\mid n\in \mathbb N, n\geq 2\}$, whose image $p(A)\subset S^1$ is not closed [it accumulates at $1\in S^1$, which is not in $p(A)$; note that $n=1$ is excluded, since $p(1+1/1)=1$ ].</p>
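<p>One can watch the failure of closedness numerically (a rough sketch; the index starts at $2$ so that the limit point $1$ is never attained):</p>

```python
import cmath

A = [n + 1 / n for n in range(2, 500)]              # closed subset of R
images = [cmath.exp(2j * cmath.pi * t) for t in A]  # p(n + 1/n) = exp(2*pi*i/n)

closest = min(abs(w - 1) for w in images)
# closest shrinks toward 0 as the range grows, yet no image ever equals 1
```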
|
1,324,062 | <p>Evaluate: </p>
<blockquote>
<p>$$\lim_{h \rightarrow 0} \frac{e^{2h}-1}{h}$$</p>
</blockquote>
<p>Now one way would be using the Maclaurin expansion for $e^{2x}$</p>
<p>However, can we solve it using the definition of the derivative (perhaps considering $f(x)=e^x$)? Many thanks for your help!</p>
<p>EDIT: I forgot to mention: please do not use L'Hopital's Rule. Using it, the problem becomes trivial and loses any chance of a beautiful solution.</p>
| Vincenzo Oliva | 170,489 | <p>Noting $e^{2h}-1=(e^h-1)(e^h+1)$ and recalling a notable limit does the job.</p>
|
1,324,062 | <p>Evaluate: </p>
<blockquote>
<p>$$\lim_{h \rightarrow 0} \frac{e^{2h}-1}{h}$$</p>
</blockquote>
<p>Now one way would be using the Maclaurin expansion for $e^{2x}$</p>
<p>However, can we solve it using the definition of the derivative (perhaps considering $f(x)=e^x$)? Many thanks for your help!</p>
<p>EDIT: I forgot to mention: please do not use L'Hopital's Rule. Using it, the problem becomes trivial and loses any chance of a beautiful solution.</p>
| MCT | 92,774 | <p>Note that, with $f(x) = e^x$, we have $\lim \limits_{h \to 0} \dfrac{e^h - 1}{h} = f'(0) = 1$.</p>
<p>Therefore, $\lim \limits_{h \to 0} \frac{e^{2h} - 1}{h} = \lim \limits_{h \to 0} \frac{(e^h - 1)}{h}(e^h + 1) = 1 \cdot (1 + 1) = \boxed 2$.</p>
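<p>A quick numerical illustration of the same limit (for checking only, not a proof):</p>

```python
import math

for h in [1e-1, 1e-3, 1e-5]:
    ratio = (math.exp(2 * h) - 1) / h
    print(h, ratio)        # the ratio approaches 2 as h shrinks
```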
|
2,231,391 | <p>I'd like a low-discrepancy sequence of points over a 3D-hypercube <span class="math-container">$[-1,1]^3$</span>, but don't want to have to commit to a fixed number <span class="math-container">$n$</span> of points beforehand; rather, I'd just like to see how the numerical integration estimates develop with an increasing number of low-discrepancy points.</p>
<p>I'd like to avoid having to start all over again if the results with a fixed <span class="math-container">$n$</span> are unsatisfactory. Of course, one could just employ random numbers, but then the convergence behavior would be poorer.</p>
<p>"A sequence of n-tuples that fills n-space more uniformly than uncorrelated random points, sometimes also called a low-discrepancy sequence. Although the ordinary uniform random numbers and quasirandom sequences both produce uniformly distributed sequences, there is a big difference between the two." (mathworld.wolfram.com/QuasirandomSequence.html)</p>
<p>This question has also just been put on the mathematica.stack.exchange
(<a href="https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d">https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d</a>)</p>
<p>Since in his answer below, Martin Roberts advances a very interesting, appealing approach to the open-ended low-discrepancy problem, I’d like to indicate an (ongoing) implementation of his approach I’ve just reported in <a href="https://arxiv.org/abs/1809.09040" rel="nofollow noreferrer">https://arxiv.org/abs/1809.09040</a> . In sec. XI (p. 19) and Figs. 5 and 6 there, I analyze two problems—one with sampling dimension <span class="math-container">$d=36$</span> and one with <span class="math-container">$d=64$</span>—both using the parameter <span class="math-container">$\bf{\alpha}_0$</span> set to 0 and also to <span class="math-container">$\frac{1}{2}$</span>. To convert the quasi-uniformly distributed points yielded by the Roberts’ algorithm to quasi-uniformly distributed normal variates, I use the code developed by Henrik Schumacher in his answer to <a href="https://mathematica.stackexchange.com/questions/181099/can-i-use-compile-to-speed-up-inversecdf">https://mathematica.stackexchange.com/questions/181099/can-i-use-compile-to-speed-up-inversecdf</a></p>
| Martin Roberts | 575,045 | <p><em>As the OP cross-posted this <a href="https://math.stackexchange.com/questions/2231391/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d/2845473#2845473">question from Math stackexchange</a>, I have also cross-posted the answer that I wrote there.</em></p>
<h1>#</h1>
<p>The simplest traditional solution to the <span class="math-container">$d$</span>-dimensional which provides quite good results in 3-dimensions is to use the Halton sequence based on the first three primes numbers (2,3,5). The Halton sequence is a generalization of the 1-dimensional Van der Corput sequence and merely requires that the three parameters are pairwise-coprime. Further details can be found at the Wikipedia article: <a href="https://en.wikipedia.org/wiki/Halton_sequence" rel="nofollow noreferrer">"Halton Sequence"</a>. </p>
<p>An alternative sequence you could use is the generalization of the Weyl / Kronecker sequence. This sequence also typically uses the first three prime numbers, however, in this case they are chosen merely because the square root of these numbers is irrational. </p>
<p>However, I have recently written a detailed blog post, <a href="http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/" rel="nofollow noreferrer">"The Unreasonable effectiveness of Quasirandom Sequences</a>, on how to easily create an open-ended low discrepancy sequences in arbitrary dimensions, that is:</p>
<ul>
<li>algebraically simpler</li>
<li>faster to compute</li>
<li>produces more consistent outputs</li>
<li>suffers less technical issues</li>
</ul>
<p>than existing existing low discrepancy sequences, such as the Halton and Kronecker sequences. </p>
<p>The solution is an additive recurrence method (modulo 1) which generalizes the 1-Dimensional problem whose solution depends on the Golden Ratio. The solution to the <span class="math-container">$d$</span>-dimensional problem, depends on a special constant <span class="math-container">$\phi_d$</span>, where <span class="math-container">$\phi_d$</span> is the unique positive root of <span class="math-container">$x^{d+1}=x+1$</span>.</p>
<p>For <span class="math-container">$d=1$</span>, <span class="math-container">$ \phi_1 = 1.618033989... $</span>, which is the canonical golden ratio.</p>
<p>For <span class="math-container">$d=2$</span>, <span class="math-container">$ \phi_2 = 1.3247179572... $</span>, which is often called the plastic constant, and has some beautiful properties. This value was conjectured to most likely be the optimal value for a related two-dimensional problem [Hensley, 2002]. Jacob Rus has posted a beautiful visualization of this 2-dimensional low discrepancy sequence, which can be found <a href="https://beta.observablehq.com/@jrus/plastic-sequence" rel="nofollow noreferrer">here</a>.</p>
<p>And finally specifically relating to your question, for <span class="math-container">$d=3$</span>, <span class="math-container">$ \phi_3 = 1.2207440846... $</span></p>
<p>With this special constant in hand, the calculation of the <span class="math-container">$n$</span>-th term is now extremely simple and fast to calculate:</p>
<p><span class="math-container">$$ R: \mathbf{t}_n = \pmb{\alpha}_0 + n \pmb{\alpha} \; (\textrm{mod} \; 1), \quad n=1,2,3,... $$</span>
<span class="math-container">$$ \textrm{where} \quad \pmb{\alpha} =(\frac{1}{\phi_d}, \frac{1}{\phi_d^2},\frac{1}{\phi_d^3},...\frac{1}{\phi_d^d}), $$</span></p>
<p>Of course, the reason this is called a recurrence sequence is because the above definition is equivalent to
<span class="math-container">$$ R: \mathbf{t}_{n+1} = \mathbf{t}_{n} + \pmb{\alpha} \; (\textrm{mod} \; 1) $$</span></p>
<p>In nearly all instances, the choice of <span class="math-container">$\pmb{\alpha}_0 $</span> does not change the key characteristics, and so for reasons of obvious simplicity, <span class="math-container">$\pmb{\alpha}_0 =\pmb{0}$</span> is the usual choice. However, there are some arguments, relating to symmetry, that suggest that <span class="math-container">$\pmb{\alpha}_0=\pmb{1/2}$</span> is a better choice.</p>
<p>Specifically for <span class="math-container">$d=3$</span>, <span class="math-container">$\phi_3 = 1.2207440846... $</span> and so for <span class="math-container">$\pmb{\alpha}_0= (1/2,1/2,1/2) $</span>,
<span class="math-container">$$\pmb{\alpha} = (0.819173,0.671044,0.549700) $$</span> and so the first 5 terms of the canonical 3-dimensional sequence are:</p>
<ol>
<li>(0.319173, 0.171044, 0.0497005)</li>
<li>(0.138345, 0.842087, 0.599401)</li>
<li>(0.957518, 0.513131, 0.149101)</li>
<li>(0.77669, 0.184174, 0.698802)</li>
<li>(0.595863, 0.855218, 0.248502)
... </li>
</ol>
<p>Of course, this sequence ranges between [0,1], and so to convert to a range of [-1,1], simply apply the linear transformation <span class="math-container">$ x:= 2x-1 $</span>. The result is</p>
<ol>
<li>(-0.361655, -0.657913, -0.900599)</li>
<li>(-0.72331, 0.684174, 0.198802)</li>
<li>(0.915035, 0.0262616, -0.701797)</li>
<li>(0.55338, -0.631651, 0.397604)</li>
<li>(0.191725, 0.710436, -0.502995),...</li>
</ol>
<p>The Mathematica Code for creating this sequence is as follows:</p>
<pre><code>f[n_] := x /. FindRoot[x^(1 + n) == x + 1, {x, 1}];
d = 3;
n = 5
gamma = 1/f[d];
alpha = Table[gamma^k , {k, Range[d]}]
ptsPhi = Map[FractionalPart, Table[0.5 + i alpha, {i, Range[n]}], {2}]
</code></pre>
<p>Similar Python code is</p>
<pre><code>import numpy as np

# Use the Newton-Raphson method to find the positive root of x^(d+1) = x + 1
def gamma(d):
    x = 1.0
    for i in range(20):
        x = x - (pow(x, d+1) - x - 1) / ((d+1)*pow(x, d) - 1)
    return x

d = 3
n = 5
g = gamma(d)
alpha = np.zeros(d)
for j in range(d):
    alpha[j] = pow(1/g, j+1) % 1
z = np.zeros((n, d))
for i in range(n):
    z[i] = (0.5 + alpha*(i+1)) % 1
print(z)
</code></pre>
<p>Hope that helps!</p>
|
2,231,391 | <p>I'd like a low-discrepancy sequence of points over the 3D hypercube <span class="math-container">$[-1,1]^3$</span>, but I don't want to have to commit to a fixed number <span class="math-container">$n$</span> of points beforehand; that is, I'd like to just see how the numerical integration estimates develop with increasing numbers of low-discrepancy points.</p>
<p>I'd like to avoid having to start all over again if the results with a fixed <span class="math-container">$n$</span> are unsatisfactory. Of course, one could just employ random numbers, but then the convergence behavior would be poorer.</p>
<p>"A sequence of n-tuples that fills n-space more uniformly than uncorrelated random points, sometimes also called a low-discrepancy sequence. Although the ordinary uniform random numbers and quasirandom sequences both produce uniformly distributed sequences, there is a big difference between the two." (mathworld.wolfram.com/QuasirandomSequence.html)</p>
<p>This question has also just been put on the mathematica.stack.exchange
(<a href="https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d">https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d</a>)</p>
<p>Since in his answer below, Martin Roberts advances a very interesting, appealing approach to the open-ended low-discrepancy problem, I’d like to indicate an (ongoing) implementation of his approach I’ve just reported in <a href="https://arxiv.org/abs/1809.09040" rel="nofollow noreferrer">https://arxiv.org/abs/1809.09040</a> . In sec. XI (p. 19) and Figs. 5 and 6 there, I analyze two problems—one with sampling dimension <span class="math-container">$d=36$</span> and one with <span class="math-container">$d=64$</span>—both using the parameter <span class="math-container">$\bf{\alpha}_0$</span> set to 0 and also to <span class="math-container">$\frac{1}{2}$</span>. To convert the quasi-uniformly distributed points yielded by the Roberts’ algorithm to quasi-uniformly distributed normal variates, I use the code developed by Henrik Schumacher in his answer to <a href="https://mathematica.stackexchange.com/questions/181099/can-i-use-compile-to-speed-up-inversecdf">https://mathematica.stackexchange.com/questions/181099/can-i-use-compile-to-speed-up-inversecdf</a></p>
| Paul B. Slater | 217,460 | <p>The original question was posed in April, 2017. Now, a few days ago, I extended the question </p>
<p><a href="https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d">https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d</a></p>
<p>to express concern about the possible relevance of the (Mathematica) WorkingPrecision setting to the reliability--noting that the command FractionalPart is applied at each iteration--of the quasirandom results generated by the algorithm given by Martin Roberts in his answer to the original question.</p>
<p>My test example concerned the estimation--in a 3D context--of nine values,
four of which are known from prior considerations. (In the question, I stated three, but then realized that a fourth also is known.) The four values, one anticipates the quasirandom procedure will converge to are
(see <a href="https://arxiv.org/abs/2004.06745" rel="nofollow noreferrer">https://arxiv.org/abs/2004.06745</a>)
<span class="math-container">\begin{equation}
\left\{\frac{1}{36},\frac{8 \pi }{27 \sqrt{3}},\frac{1}{81} \left(27+\sqrt{3} \log \left(97+56
\sqrt{3}\right)\right),\frac{2}{81} \left(4 \sqrt{3} \pi -21\right)\right\} \approx
\end{equation}</span>
<span class="math-container">\begin{equation}
\{0.027777777777777777778,0.53742203384717565944, 0.44597718463717723667, \
0.018903515328657140917\}.
\end{equation}</span>
In the estimation, I employed three billion 3D points, recording the results at intervals of one hundred million. A plot showing the
four sets of results, along with the constant/target line 1, is </p>
<p><a href="https://i.stack.imgur.com/xs3xg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xs3xg.jpg" alt="Estimation results"></a></p>
<p>The yellow curve corresponds to the estimation of 0.44597718463717723667. The estimation of 0.02777777777777777777 is clearly the best of the four, hovering close to the constant line of 1. The blue curve corresponds to 0.53742203384717565944, while the (most highly fluctuating) green is for the smallest target value of 0.018903515328657140917.</p>
<p>These results were obtained using WorkingPrecision->20.</p>
<p>Then, due to my concerns, I undertook a repetition of the calculations, but now employing WorkingPrecision->40. After seven hundred million iterations, the results were identical to those obtained using WorkingPrecision->20. (Somewhat curiously, the computational time decreased by about <span class="math-container">$7\%$</span>.) I am continuing to three billion iterations, as before, and will update this answer if I detect any deviations from the first set of results. Also, if there are no differences after the three billion, I will also note that.</p>
<p>But, as of now, it seems that the WorkingPrecision setting to 20 was certainly adequate for the task at hand.</p>
<p>Let me also note that as each quasirandom 3D point (Q1, Q2, Q3) is generated, I test to see if it satisfies the constraint
<span class="math-container">\begin{equation}
\text{Q1}>0\land \text{Q2}>0\land \text{Q3}>0\land \text{Q1}+3 \text{Q2}+2 \text{Q3}<1.
\end{equation}</span>
If it does not, it is discarded from further consideration. Only <span class="math-container">$\frac{1}{36} \approx 0.027777777777777777778$</span> should satisfy the constraint (and as the plot shows, this is certainly the case).</p>
<p>UPDATE:</p>
<p>I now have two sets of results, both based on three billion iterations, the first having employed WorkingPrecision->20, the second, WorkingPrecision->40.</p>
<p>For each point (Q1,Q2,Q3) generated, I tested--as indicated above--whether it satisfied the constraint
<span class="math-container">\begin{equation}
\text{Q1}>0\land \text{Q2}>0\land \text{Q3}>0\land \text{Q1}+3 \text{Q2}+2 \text{Q3}<1.
\end{equation}</span>
In both cases, the SAME number--83,333,308--of points passed the test, giving a probability of 0.0277777693333333, that is, very close to <span class="math-container">$\frac{1}{36}$</span>, that a simple 3D integration yields.</p>
<p>Then, for each of these 83,333,308 points I tested whether it satisfied the further ("PPT"--"positive partial transpose") constraint
<span class="math-container">\begin{equation}
\text{Q1}^2+3 \text{Q1} \text{Q2}+(3 \text{Q2}+\text{Q3})^2<2 \text{Q1} \text{Q3}+3
\text{Q2}.
\end{equation}</span></p>
<p>Now, the two number of points passing the further test were DIFFERENT,
but almost identical. With WorkingPrecision->20, the number was 44,785,111
and with WorkingPrecision->40, it was two greater, that is, 44,785,113. (Let me note that the ratio [<span class="math-container">$R_1$</span>] of the latter number to the common number 83,333,308, gives a further ratio [<span class="math-container">$R_2$</span>] to the known value <span class="math-container">$\frac{8 \pi }{27 \sqrt{3}} \approx 0.53742203384718$</span> of 0.9999990427, closer to 1--as we would hope/expect--than the former [lesser WorkingPrecision] number of 44,785,111.)</p>
<p>I will now continue on my analyses with the higher setting of WorkingPrecision.</p>
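<p>To illustrate the constraint-testing procedure described above, here is a minimal Python sketch (my own illustration, not the author's Mathematica setup) that builds the Roberts sequence with <span class="math-container">$\pmb{\alpha}_0 = 1/2$</span> and estimates the probability of the first constraint, whose exact value is <span class="math-container">$1/36$</span>:</p>

```python
import numpy as np

# Illustrative sketch (not the author's Mathematica code): use the Roberts
# quasirandom sequence with alpha_0 = 1/2 to estimate the volume of
# {Q1, Q2, Q3 > 0, Q1 + 3*Q2 + 2*Q3 < 1}, which equals 1/36.
def phi(d, iters=40):
    x = 2.0
    for _ in range(iters):
        x = (1.0 + x) ** (1.0 / (d + 1))  # fixed point of x^(d+1) = x + 1
    return x

d = 3
g = phi(d)
alpha = np.array([(1.0 / g) ** (k + 1) % 1 for k in range(d)])
n = 1_000_000
idx = np.arange(1, n + 1)
pts = (0.5 + np.outer(idx, alpha)) % 1
frac = np.mean(pts[:, 0] + 3*pts[:, 1] + 2*pts[:, 2] < 1)
print(frac)  # close to 1/36 ~ 0.0277778
```

<p>With a million points the estimate should already agree with <span class="math-container">$1/36$</span> to roughly four decimal places.</p>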
|
994,620 | <p>I need to solve $f(2x)=(e^x+1)f(x)$. I am thinking about Frobenius type method:
$$\sum_{k=0}^{\infty}2^ka_kx^k=\left(1+\sum_{m=0}^{\infty}\frac{x^m}{m!}\right)\sum_{n=0}^{\infty}a_nx^n\\
\sum_{k=0}^{\infty}(2^k-1)a_kx^k=\left(\sum_{m=0}^{\infty}\frac{x^m}{m!}\right)\left(\sum_{n=0}^{\infty}a_nx^n\right)=\sum_{m=0}^{\infty}\left(\frac{x^m}{m!}\sum_{n=0}^{\infty}a_nx^n\right)=\sum_{m=0}^{\infty}\left(\sum_{n=0}^{\infty}\frac{a_nx^{m+n}}{m!}\right)$$
So I think
$$(2^k-1)a_k=\sum_{m=0}^k\frac{a_{k-m}}{m!}.$$
Now I think about a recurrence relation:
$$(2^k-1)r^k=\sum_{m=0}^k\frac{r^{k-m}}{m!}.$$
Can somebody help me? Firstly, is my Frobenius technique right? Useful? Is there another easier way? Recurrence relation?</p>
| RE60K | 67,609 | <p>As suggested by @Shivang in the comments, substituting $g(x)=f(x)/(e^x-1)$ gives:
$$f(2x)=(e^x+1)f(x)\implies g(2x)=g(x)$$
If $\lim_{x\to 0} f(x)/x$ exists, then $g$ extends continuously to $0$, and $g(x)=g(x/2^n)\to g(0)$ as $n\to\infty$ shows that $g$ is constant; thus $f$ is $a(e^x-1)$ where $a$ is a constant.</p>
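<p>As a quick numerical sanity check (illustrative, not part of the original answer), one can verify that any $f(x)=a(e^x-1)$ does satisfy the functional equation:</p>

```python
import math
import random

# Sanity check (illustrative): f(x) = a*(e^x - 1) satisfies f(2x) = (e^x + 1)*f(x).
# The constant a = 2.5 is an arbitrary choice for the test.
a = 2.5
def f(x):
    return a * (math.exp(x) - 1.0)

rng = random.Random(1)
max_err = max(
    abs(f(2*x) - (math.exp(x) + 1.0) * f(x))
    for x in (rng.uniform(-2.0, 2.0) for _ in range(100))
)
print(max_err)  # machine-precision level
```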
|
994,620 | <p>I need to solve $f(2x)=(e^x+1)f(x)$. I am thinking about Frobenius type method:
$$\sum_{k=0}^{\infty}2^ka_kx^k=\left(1+\sum_{m=0}^{\infty}\frac{x^m}{m!}\right)\sum_{n=0}^{\infty}a_nx^n\\
\sum_{k=0}^{\infty}(2^k-1)a_kx^k=\left(\sum_{m=0}^{\infty}\frac{x^m}{m!}\right)\left(\sum_{n=0}^{\infty}a_nx^n\right)=\sum_{m=0}^{\infty}\left(\frac{x^m}{m!}\sum_{n=0}^{\infty}a_nx^n\right)=\sum_{m=0}^{\infty}\left(\sum_{n=0}^{\infty}\frac{a_nx^{m+n}}{m!}\right)$$
So I think
$$(2^k-1)a_k=\sum_{m=0}^k\frac{a_{k-m}}{m!}.$$
Now I think about a recurrence relation:
$$(2^k-1)r^k=\sum_{m=0}^k\frac{r^{k-m}}{m!}.$$
Can somebody help me? Firstly, is my Frobenius technique right? Useful? Is there another easier way? Recurrence relation?</p>
| bot | 92,881 | <p>What about $f(x)=D(x)(e^x-1)?$</p>
<p>But the additional condition that $\lim\limits_{x\to 0} \frac{f(x)}{x}$ exists will rule out such counterexamples.</p>
|
2,323,223 | <p>I am having a hard time with this question for some reason. </p>
<p>You and a friend play a game where you each toss a balanced coin. If the upper faces on
the coins are both tails, you win \$1; if the faces are both heads, you win \$2; if the coins
do not match (one shows head and the other tail), you lose \$1.
Calculate the expected value and standard deviation for your total winnings from this
game if you play 50 times.</p>
<p>PMF Values:
$$\begin{array}{c|c}
\$ & p\\\hline
+\$1 & .25\\
+\$2 & .25\\
-\$1 & .50
\end{array}$$</p>
<p>I have calculated the expectation as $$1(.25)+2(.25)+(-1)(.5) = .25,$$ so $$E(50X) = 50\cdot.25 = \$12.5,$$ which I have confirmed is correct.</p>
<p>I know I need to get $\operatorname{Var}(50X)$, but doing a standard variance calculation and then using the formula $a^2\operatorname{Var}(X)$ is not giving me the correct value.</p>
<p>What step am I missing?</p>
| Satish Ramanathan | 99,745 | <p>$\operatorname{Var}(50X) = 50\operatorname{Var}(X) = 50\left[0.25(1-0.25)^2 + 0.25(2-0.25)^2 + 0.5(-1-0.25)^2\right] = 84.375$, so the standard deviation is $\sqrt{84.375} \approx 9.19$.</p>
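<p>A short Monte Carlo simulation (an illustrative sketch, not part of the original answer; the payoff probabilities are those in the PMF above) reproduces both moments:</p>

```python
import random

# Quick simulation to sanity-check E = 12.5 and Var = 84.375 for 50 plays.
# Payoffs per play: TT -> +$1 (p = 1/4), HH -> +$2 (p = 1/4), mismatch -> -$1 (p = 1/2).
def play_50(rng):
    total = 0
    for _ in range(50):
        a = rng.random() < 0.5
        b = rng.random() < 0.5
        if a and b:
            total += 2
        elif not a and not b:
            total += 1
        else:
            total -= 1
    return total

rng = random.Random(0)
samples = [play_50(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # roughly 12.5 and 84.375
```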
|
3,516,189 | <p>I've been struggling with the following exercise for quite some time already:</p>
<blockquote>
<p>Consider a linear space <span class="math-container">$\mathbb{V} = \mathcal{C}\left(\left[a, b\right]\right)$</span> and let <span class="math-container">$f_{1},\ldots, f_{n}$</span> be linearly independent functions in <span class="math-container">$\mathbb{V}$</span>. Prove there exist numbers <span class="math-container">$a \leq x_{1} < \cdots < x_{n} \leq b$</span> such that <span class="math-container">$$ \det \begin{bmatrix}
f_{1}(x_{1}) & f_{1}(x_{2}) & \cdots & f_{1}(x_{n})\\
f_{2}(x_{1}) & f_{2}(x_{2}) & \cdots & f_{2}(x_{n}) \\
\vdots & \vdots & \ddots & \vdots \\
f_{n}(x_{1}) & f_{n}(x_{2}) & \cdots & f_{n}(x_{n})
\end{bmatrix} \neq 0.$$</span></p>
</blockquote>
<p>The statement is extremely easy to prove by means of induction. However, I'm interested if there's another (and more elegant) proof which <em>doesn't involve induction</em>. </p>
<p>Any hints appreciated.</p>
| user1551 | 1,551 | <p><em>(I'll call this a recursive algorithm rather than mathematical induction, but one may disagree.)</em></p>
<p>Let <span class="math-container">$\mathbf f=(f_1,f_2,\ldots,f_n)^T$</span>.</p>
<ul>
<li>Pick any nonzero vector <span class="math-container">$v_1$</span>.</li>
<li>Since <span class="math-container">$f_1,\ldots,f_n$</span> are linearly independent, there exists some <span class="math-container">$x_1$</span> such that <span class="math-container">$v_1^T\mathbf f(x_1)\ne0$</span>.</li>
<li>Pick any nonzero vector <span class="math-container">$v_2\perp\mathbf f(x_1)$</span> (that is, <span class="math-container">$v_2^T\mathbf f(x_1)=0$</span>).</li>
<li>Since <span class="math-container">$f_1,\ldots,f_n$</span> are linearly independent, there exists some <span class="math-container">$x_2$</span> such that <span class="math-container">$v_2^T\mathbf f(x_2)\ne0$</span>.</li>
<li>(Continue in this manner...)</li>
<li>Pick any nonzero vector <span class="math-container">$v_n\perp\{\mathbf f(x_1),\mathbf f(x_2),\ldots,\mathbf f(x_{n-1})\}$</span>.</li>
<li>Since <span class="math-container">$f_1,\ldots,f_n$</span> are linearly independent, there exists some <span class="math-container">$x_n$</span> such that <span class="math-container">$v_n^T\mathbf f(x_n)\ne0$</span>. Now
<span class="math-container">$$
\pmatrix{v_1^T\\ v_2^T\\ \vdots\\ v_n^T}\pmatrix{\mathbf f(x_1)&\mathbf f(x_2)&\cdots&\mathbf f(x_n)}
$$</span>
is an upper triangular matrix with nonzero diagonal entries. Hence it is invertible and <span class="math-container">$\det\pmatrix{\mathbf f(x_1)&\mathbf f(x_2)&\cdots&\mathbf f(x_n)}\ne0$</span>.</li>
</ul>
<p>By the way, note that neither the continuity of <span class="math-container">$\mathbf f$</span> nor the compactness/connectedness of the domain of <span class="math-container">$\mathbf f$</span> are relevant. The above proof works as long as <span class="math-container">$f_1,\ldots,f_n$</span> are linearly independent (which implies that their common domain has at least <span class="math-container">$n$</span> elements, if that matters).</p>
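<p>The greedy procedure can also be run numerically. In the sketch below (my own illustration, not part of the original answer) the functions $1, x, x^2$ and a uniform grid on $[0,1]$ are arbitrary choices; the null-space step plays the role of picking $v_k$ orthogonal to the previously chosen $\mathbf f(x_j)$:</p>

```python
import numpy as np

def pick_points(funcs, grid):
    """Greedy point selection following the answer's recipe."""
    n = len(funcs)
    F = np.array([[f(x) for f in funcs] for x in grid])  # F[i] = f(grid[i])
    chosen = []
    V = np.empty((0, n))  # rows are f(x_j) for the chosen x_j
    for _ in range(n):
        if V.shape[0] == 0:
            v = np.ones(n)  # any nonzero v_1 works
        else:
            # v_k must be orthogonal to all previously chosen f(x_j):
            # take the last right-singular vector (a null-space direction) of V.
            v = np.linalg.svd(V)[2][-1]
        scores = F @ v
        i = int(np.argmax(np.abs(scores)))  # a point where v . f(x) != 0
        chosen.append(float(grid[i]))
        V = np.vstack([V, F[i]])
    return chosen, V

# Hypothetical example: f_k(x) = x^(k-1), linearly independent on [0, 1].
funcs = [lambda x: 1.0, lambda x: x, lambda x: x**2]
pts, V = pick_points(funcs, np.linspace(0.0, 1.0, 50))
det_V = np.linalg.det(V)
print(pts, det_V)  # the determinant should be nonzero
```

<p>The chosen points come out distinct automatically: once $x_j$ is chosen, every later $v_k$ is orthogonal to $\mathbf f(x_j)$, so that point scores zero.</p>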
|
326,094 | <p>Suppose <span class="math-container">$(f_n)_n$</span> is a countable family of entire, surjective functions, each <span class="math-container">$f_n:\mathbb{C}\to\mathbb{C}$</span>. Can one always find complex scalars <span class="math-container">$(a_n)_n$</span>, not all zero, such that <span class="math-container">$\sum_{n=1}^{\infty} a_n f_n$</span> is entire but not-surjective? In fact, I am interested in this question under the additional assumption that <span class="math-container">$(f_n)_n$</span> are not polynomials. </p>
| Noam D. Elkies | 14,830 | <p>One expects there to be no such <span class="math-container">$a_n$</span> in general, because the
"typical" entire function is surjective (those that aren't are of the
special form <span class="math-container">$z \mapsto c + \exp g(z)$</span>). An explicit example is
<span class="math-container">$f_n(z) = \cos z/n$</span>: any convergent linear combination <span class="math-container">$f = \sum_n a_n f_n$</span>
is of order <span class="math-container">$1$</span>, so if <span class="math-container">$f$</span> is not surjective then, writing <span class="math-container">$f = c + \exp g$</span>, <span class="math-container">$g$</span> is a polynomial
of degree at most <span class="math-container">$1$</span>; but <span class="math-container">$f$</span> is even, so must be constant,
from which it soon follows that <span class="math-container">$a_n=0$</span> for every <span class="math-container">$n$</span>.</p>
|
2,856,180 | <p>I've been learning some about counting and basic combinatorics. But some scenarios were not explained in my class...</p>
<p><strong>Example problem:</strong> You are given 6 tiles. 1 is labeled "1", 2 are labeled "2", and 3 are labeled "3".</p>
<p><strong>Problem 1:</strong> How many different ways can you arrange groups of three tiles (order matters)?</p>
<p><strong>Answer 1: 19</strong>, {1,2,2} {1,2,3} {1,3,2} {1,3,3} {2,1,2} {2,1,3} {2,2,1} {2,2,3} {2,3,1} {2,3,2} {2,3,3} {3,1,2} {3,1,3} {3,2,1} {3,2,2} {3,2,3} {3,3,1} {3,3,2} {3,3,3}.</p>
<p><strong>Question 1:</strong> You can't use the normal $\frac{5!}{\left(5-3\right)!}$ because you have repeated symbols. And you can't divide by the factorials of repeated symbols $\frac{6!}{\left(6-3\right)!\cdot3!\cdot2!}$ like you can when n = k and you make different arrangements of all n of symbols $\frac{n!}{\prod_{i=1}^mn_i}$. What formula would be used to find the answer to a more complex version of this problem?</p>
<p><strong>Problem 2:</strong> How many different ways can you make groups of three tiles (order doesn't matter)?</p>
<p><strong>Answer 2: 6</strong>, {1,2,2} {1,2,3} {1,3,3} {2,2,3} {2,3,3} {3,3,3}.</p>
<p><strong>Question 2:</strong> You can't use the normal $\frac{n!}{\left(n-k\right)!\cdot k!}$ because you have repeated symbols. What formula would be used to find the answer to a more complex version of this problem?</p>
| jgon | 90,543 | <p>Not sure this is the best method, but the way I would actually solve such a set of questions would be the following:</p>
<p>Use generating function methods for the second question.
The explicit collection of submultisets of the six tiles is given by the terms of $(1+x)(1+y+y^2)(1+z+z^2+z^3)$ (the $x$ terms correspond to tile 1, $y$ terms to copies of tile 2, and $z$ terms to copies of tile 3), and the numbers of submultisets of a given size are given ignoring the labels of the tiles and just multiplying the polynomials
$$(1+x)(1+x+x^2)(1+x+x^2+x^3)=(1+2x+2x^2+x^3)(1+x+x^2+x^3)$$
$$= 1+(1+2)x+(1+2+2)x^2+(1+2+2+1)x^3+(2+2+1)x^4+(2+1)x^5+x^6$$
$$=1+3x+5x^2+6x^3+5x^4+3x^5+x^6.$$
Looking at the coefficient of $x^3$ in the product tells us that there are 6 submultisets.</p>
<p>Now the question of ordered triples drawn from the multiset is a little more difficult. I think I'd work out the explicit submultisets of size 3 first. I.e. solve question 2, then compute the number of ways to order each submultiset. There are a couple algorithmic methods to compute the submultisets of size 3. The first would be to multiply out the polynomials $(1+x)(1+y+y^2)(1+z+z^2+z^3)$ and compute the degree 3 terms. The others are essentially equivalent, but drop the mention of polynomials, and I find the polynomials are a helpful computational aide.</p>
<p>Now because I only want the degree three terms, I'll only compute those.
We have $$[(1+x)(1+y+y^2)(1+z+z^2+z^3)]_3$$
$$=1[(1+y+y^2)(1+z+z^2+z^3)]_3 + x[(1+y+y^2)(1+z+z^2+z^3)]_2$$
$$=1(z^3+yz^2+y^2z) + x(z^2+yz+y^2)$$
$$=z^3+yz^2+y^2z+xz^2+xyz+xy^2,$$
corresponding to submultisets
$\newcommand{\multset}[1]{\{\{{#1}\}\}}\multset{3,3,3},\multset{2,3,3},\multset{2,2,3},\multset{1,3,3},\multset{1,2,3},$ and $\multset{1,2,2}$.</p>
<p>Now we can work out how many ways there are to order each multiset in the usual manner: the ways to order a multiset of the form $\multset{a,a,a}$ is $3!/3!=1$, $\multset{a,a,b}=3!/2!=3$, and $\multset{a,b,c}=3!=6$. Thus summing the number of ways to order each of our multisets, we get $1+3+3+3+6+3 = 19$ different ordered triples drawn from our multiset.</p>
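<p>Both counts can be confirmed by brute force (an illustrative sketch, not part of the original answer):</p>

```python
from itertools import combinations, permutations

# Brute-force check of both answers for the multiset {1, 2, 2, 3, 3, 3}.
tiles = [1, 2, 2, 3, 3, 3]
ordered = set(permutations(tiles, 3))                           # distinct ordered triples
unordered = {tuple(sorted(c)) for c in combinations(tiles, 3)}  # size-3 submultisets
print(len(ordered), len(unordered))  # 19 and 6
```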
|
271,343 | <p>I need help finding the integral of $\sin(\sqrt{x})dx$. I have the answer here but would like to know how to get there. </p>
| Hanul Jeon | 53,976 | <p><strong>Hint</strong>: $$\sin(\sqrt{x}) = \frac{\sqrt{x}\sin(\sqrt{x})}{\sqrt{x}}$$
and take $u=\sqrt{x}$, $2 du=\frac{dx}{\sqrt{x}}$.</p>
|
271,343 | <p>I need help finding the integral of $\sin(\sqrt{x})dx$. I have the answer here but would like to know how to get there. </p>
| Community | -1 | <p>$$J = \underbrace{\int \sin(\sqrt{x})dx = \int \sin(t) 2t dt }_{\sqrt{x} = t \implies x = t^2 \implies dx = 2t dt}$$
$$I = \int t \sin(t) dt = -\int t d(\cos(t)) = - \left(t \cos(t) - \int \cos(t) dt \right) = - t \cos(t) + \sin(t) + c$$
Hence, $$J = - 2t \cos(t) + 2\sin(t) + k = -2 \sqrt{x} \cos(\sqrt{x}) + 2 \sin(\sqrt{x}) + k$$</p>
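<p>A quick numerical check (illustrative) that the claimed antiderivative differentiates back to $\sin(\sqrt{x})$:</p>

```python
import math

# F is the claimed antiderivative of sin(sqrt(x)); a central difference
# approximation of F' should match sin(sqrt(x)) at a few sample points.
def F(x):
    t = math.sqrt(x)
    return -2*t*math.cos(t) + 2*math.sin(t)

max_err = 0.0
for x in [0.5, 1.0, 4.0, 9.0]:
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2*h)
    max_err = max(max_err, abs(deriv - math.sin(math.sqrt(x))))
print(max_err)  # tiny
```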
|
3,710,804 | <p><span class="math-container">$$f(x) = \int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx$$</span></p>
<p>I have been up on this problem for an hour, but without any clues. </p>
<p>Can someone please help me solving this?</p>
| trancelocation | 467,003 | <p>It is quite straight forward after rewriting the integrand:</p>
<p><span class="math-container">$$\frac{\cos x (1+4\cos 2x )}{\sin x (1+4\cos^2 x)}= \frac{\cos x}{\sin x}\left(1 - \frac{4\sin^2 x}{1+4\cos^2 x}\right)$$</span>
<span class="math-container">$$= \frac{\cos x}{\sin x} - \frac{2\sin 2x}{3+2\cos 2x}$$</span></p>
<p>Note, that <span class="math-container">$(3+2\cos 2x)' = -4\sin 2x$</span> and integrate:</p>
<p><span class="math-container">$$\int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx =\log |\sin x| + \frac 12 \log(3+2\cos 2x) + C$$</span></p>
|
2,141,182 | <p>In the case of $$\sqrt{(x_n-\ell_1)^2+(y_n-\ell_2)^2}\leq \sqrt{(x_n-\ell_1)^2} + \sqrt{(y_n-\ell_2)^2} = |x_n-\ell_1|+|y_n-\ell_2|$$</p>
<p>it is true; if we raise the two sides to the power $2$ we get:</p>
<p>\begin{align}
& (x_n-\ell_1)^2+(y_n-\ell_2)^2\leq \left( \sqrt{(x_n-\ell_1)^2}+\sqrt{(y_n-\ell_2)^2} \,\right)^2 \\[10pt]
= {} &(x_n-\ell_1)^2+(y_n-\ell_2)^2+2\sqrt{(x_n-\ell_1)^2}\sqrt{(y_n-\ell_2)^2}
\end{align}</p>
<p>Is the same true for $$\sqrt{a+b}\leq \sqrt{a}+\sqrt{b} \text{ ?}$$</p>
<p>Isn't $\sqrt{a+b}$ a sum of $3$ components, $\sqrt{a}+\sqrt{b}+\text{(something positive)}$, and therefore
$$\sqrt{a+b}\geq \sqrt{a}+\sqrt{b} \text{ ?}$$</p>
| Matija Sreckovic | 367,539 | <p>Quite simply, the answer to your question from the title is "yes", if we assume that $a$ and $b$ are non-negative real numbers.</p>
<p>Since both sides are positive, we can square the inequality, and we get a trivial one: $$ \sqrt{a+b} \leq \sqrt{a} + \sqrt{b} \iff a+b \leq a+b+2\sqrt{ab} \iff 0 \leq 2\sqrt{ab}. $$</p>
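<p>A quick numerical spot-check (illustrative, not a proof):</p>

```python
import random

# Spot-check sqrt(a+b) <= sqrt(a) + sqrt(b) for random non-negative a, b.
random.seed(0)
ok = all(
    (a + b) ** 0.5 <= a ** 0.5 + b ** 0.5 + 1e-12
    for a, b in ((random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000))
)
print(ok)  # True
```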
|
2,342,051 | <p>I am totally new to statistics. I'm learning the basics.</p>
<p>I came upon this question while solving Erwin Kreyszig's exercise on statistics.
The problem is simple. It asks to calculate standard deviation after removing outliers from the dataset.</p>
<p>The dataset is as follows: 1, 2, 3, 4, 10.
What I did is, I found out q<sub>m</sub> = 3. Then $q$<sub>l</sub> $= \frac{1+2}{2} = 1.5$ and $q$<sub>m</sub> $= \frac{4+10}{2} = 7$.</p>
<p>Now, $IQR = 7-1.5 = 5.5$ and $1.5*IQR = 8.25$</p>
<p>So, we can say numbers beyond $1.5 - 5.5 = -4$ and $7 + 5.5 = 12.5$ will be an outlier.</p>
<p>Since there is no outlier, I found out the Standard Deviation of the set which is 3.53.</p>
<p>But, the answer provided is 1.29 which is different from the standard deviation of the set.</p>
<p>Can anyone help me what I missed? </p>
<p>Also, I have another question - we can see with plain eyes 10 is an outlier. But it is not detected here - why? </p>
| Mundron Schmidt | 448,151 | <p>It is differentiable on $\mathbb{R}\setminus C$ by the same argument that shows it is continuous on $\mathbb{R}\setminus C$.</p>
<p>By construction of $C$, you know that for each point $x\in\mathbb{R}\setminus C$ there exists an open neighbourhood $U$ of $x$ such that $U\subset\mathbb{R}\setminus C$. On this neighbourhood $f$ is constantly $0$, which yields that your function is continuous and differentiable at $x$.</p>
|
9,168 | <p>I'm having a doubt about how we users should encourage the participation of new members. So far I have only presented MSE to three of my fellow colleagues in grad school. In an overall way, I feel that if MSE becomes too open and widely known, some of the high-rank researchers and top-class grad and undergrad users will be frustrated. Maybe I get this impression from seeing how some really complicated questions are well received.</p>
<p>But today I saw someone asking some really simple algebra questions like "solve this equation for x" and started thinking about what the general feeling on MSE is when this kind of question arises. I'm not judging anything or anyone; I would just like to get some opinions on this.</p>
| Ryan Budney | 642 | <p>I've always viewed MSE as an "anything goes" forum, as long as it's about actual mathematics. Grade-school homework problems to research problems. All of it is to be encouraged. I use filters to avoid looking at most of the things I don't want to see. </p>
|
4,026,149 | <p>If f is continuous on <span class="math-container">$[a,b]$</span> and <span class="math-container">$f(a)=f(b)$</span> then show that there exists <span class="math-container">$x,y \in (a,b)$</span> such that <span class="math-container">$f(x)=f(y)$</span></p>
<p>It looks obvious if I imagine the graph. But I am not able to prove it.
I am trying to employ intermediate value property, but not able to reach to the conclusion.</p>
| absolute0 | 883,281 | <p>Without using the IVP: from the statement it's clear that <span class="math-container">$f$</span> cannot be strictly monotone on <span class="math-container">$(a,b)$</span>; if, say, <span class="math-container">$f$</span> were strictly increasing, then continuity on <span class="math-container">$[a,b]$</span> would give <span class="math-container">$f(a)\le f(x)<f(y)\le f(b)=f(a)$</span> for <span class="math-container">$a<x<y<b$</span>, a contradiction. If for all <span class="math-container">$x,y \in (a,b)$</span> we had <span class="math-container">$x\neq y \Rightarrow f(x) \neq f(y)$</span> (otherwise we are done), then <span class="math-container">$f$</span> would be injective and continuous on <span class="math-container">$(a,b)$</span>. <a href="https://math.stackexchange.com/questions/752073/continuous-injective-map-is-strictly-monotonic">This link</a> says that a continuous and injective function is strictly monotone, raising a contradiction.</p>
|
863,846 | <p>Steven Strogatz has a great informal textbook on Nonlinear Dynamics and Chaos. I have found it to be incredibly helpful to get an intuitive sense of what is going on and has been a great supplement with my much more formal text from Perko.</p>
<p>Anyways I was wondering if anyone knew of any similar informal, intuitive textbooks covering Numerical Analysis? I currently study out of Atkinson's Intro to Numerical Analysis. I am looking for a more informal numerical text aimed for upper level undergrads and first/second year grad students. I am currently a first year grad student studying for my qualifying exams. Any recommendations would be appreciated. Thank you</p>
| Ian | 83,396 | <p>For linear algebra, I like Numerical Linear Algebra by Trefethen and Bau.</p>
|
1,823,556 | <p>Let $X \subset F_1 \cup F_2$, where $F_1$ and $F_2$ are closed. If the function $f\colon X \longrightarrow \mathbb{R}$ is such that $f|_{X \cap F_1}$ and $f|_{X \cap F_2}$ are continuous, prove that $f$ is continuous. </p>
<p>My attempt:</p>
<p>Suppose that $f$ is discontinuous, so there exists $x \in X$ such that $f$ is discontinuous at $x$. But $X \subset F_1 \cup F_2$; therefore, if $x \in X \cap F_1$, then $f|_{X \cap F_1}$ is discontinuous at $x$, which is absurd because it contradicts the hypothesis. Analogously, if $x \in X \cap F_2$, we have a contradiction.</p>
<p>That's my answer, but I'm not sure if it's correct, because I didn't use the hypothesis that $F_1$ and $F_2$ are closed. I would like to know if my attempt is correct. Thanks in advance!</p>
<p>EDIT: $X \subset \mathbb{R}$</p>
| Michael Bil | 347,353 | <p>The proof fails when you pick a point on an edge. If the intervals don't overlap then the proof works fine. But when you are on the edge of an interval you only know one-sided continuity. This is (I guess) where the closedness should come in. For example: $f:X \rightarrow \mathbb{R}$ such that $X = [-1,0] \cup (0,1]$, with $f(x) = \sin(1/x)$ for $x \in (0,1]$ and $0$ otherwise. Then $f$ is continuous on each interval, but at $x=0$ the right limit does not exist. In your proof, the fact that the sets are closed assures you that such a situation cannot occur. Also, I have assumed that $F_1$ and $F_2$ are intervals; I'm not sure if this is correct for an arbitrary closed set (for example the Cantor set).</p>
|
1,823,556 | <p>Let $X \subset F_1 \cup F_2$, where $F_1$ and $F_2$ are closed. If the function $f\colon X \longrightarrow \mathbb{R}$ is such that $f|_{X \cap F_1}$ and $f|_{X \cap F_2}$ are continuous, prove that $f$ is continuous. </p>
<p>My attempt:</p>
<p>Suppose that $f$ is discontinuous, so there exists $x \in X$ such that $f$ is discontinuous at $x$. But $X \subset F_1 \cup F_2$; therefore, if $x \in X \cap F_1$, then $f|_{X \cap F_1}$ is discontinuous at $x$, which is absurd because it contradicts the hypothesis. Analogously, if $x \in X \cap F_2$, we have a contradiction.</p>
<p>That's my answer, but I'm not sure if it's correct, because I didn't use the hypothesis that $F_1$ and $F_2$ are closed. I would like to know if my attempt is correct. Thanks in advance!</p>
<p>EDIT: $X \subset \mathbb{R}$</p>
| paul garrett | 12,291 | <p>Complementing @BrianMScott's answer, one could also avoid the proof by contrapositive/contradiction/whatever by claiming that any $x\in X$ has a sufficiently small neighborhood (if you like, the intersection of $X$ with a $\delta$-ball around it) lying entirely inside either $X\cap F_1$ or $X\cap F_2$. Then continuity of the respective restriction (or both) give continuity at $x$. This is where the closedness is used, since the complements of $X\cap F_1$ and of $X\cap F_2$ are open.</p>
|
1,289,994 | <p>If you fold a rectangular piece of paper in half and the resulting
rectangles have the same aspect ratio as the original rectangle,
then what is the aspect ratio of the rectangles?</p>
| Sean Henderson | 212,476 | <p>Assume the original rectangle is $x$ wide and $a \cdot x$ long, where a is the requested aspect ratio. If we fold the paper in half (lengthwise), the resulting rectangles will be $\frac{a \cdot x}{2}$ wide and $x$ long. Equating these ratios, we find that</p>
<blockquote class="spoiler">
<p>$$a = \frac{1}{a / 2} = \frac{2}{a}$$</p>
</blockquote>
<p>Solving this simple equation yields</p>
<blockquote class="spoiler">
<p>$$a=\sqrt{2}$$</p>
</blockquote>
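<p>A one-line numerical restatement of the self-similarity (illustrative):</p>

```python
import math

# A 1 x a sheet with a = sqrt(2), folded in half, gives an (a/2) x 1 sheet
# whose aspect ratio 1/(a/2) = 2/a equals a again.
a = math.sqrt(2)
folded_ratio = 1 / (a / 2)  # long/short side of the folded sheet
print(folded_ratio, a)      # equal
```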
|
3,106,574 | <p>Let <span class="math-container">$(a_n) _{n\ge 0}$</span> <span class="math-container">$a_{n+2}^3+a_{n+2}=a_{n+1}+a_n$</span>,<span class="math-container">$\forall n\ge 1$</span>, <span class="math-container">$a_0,a_1 \ge 1$</span>. Prove that <span class="math-container">$(a_n) _{n\ge 0}$</span> is convergent.<br>
I could prove that <span class="math-container">$a_n \ge 1$</span> by mathematical induction, but here I am stuck. </p>
| maxmilgram | 615,636 | <p>There might be an easier and more elegant solution, but this should work.</p>
<p>First observe that:
<span class="math-container">$$
a_{n+1}+a_{n}=a_{n+2}^3+a_{n+2}\geq2\sqrt{a_{n+2}^4}=2a_{n+2}^2\geq4a_{n+2}-2
$$</span>
Here I used the AM-GM inequality and the simple fact that <span class="math-container">$(a_{n+2}-1)^2\geq0$</span>.
Hence, <span class="math-container">$a_n\leq b_n$</span> with
<span class="math-container">$$
b_{n+2}=\frac{b_{n+1}+b_{n}+2}{4}\\
b_{0,1}=a_{0,1}
$$</span></p>
<p>Now, the general solution of this recurrence reads:
<span class="math-container">$$
b_{n}=1 + (\tfrac{1 - \sqrt{17}}{8})^n C_1 + (\tfrac{1 + \sqrt{17}}{8})^n C_2
$$</span>
Since <span class="math-container">$|\tfrac{1 \pm \sqrt{17}}{8}|<1$</span> we can deduce that <span class="math-container">$b_n\to1$</span> for <span class="math-container">$n\to\infty$</span>. So together with <span class="math-container">$1\leq a_n\leq b_n$</span> we can conclude that <span class="math-container">$a_n\to1$</span>.</p>
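<p>As a numerical illustration (my own sketch, not part of the proof; the starting values are arbitrary), one can iterate the recurrence by solving the cubic for its unique real root, since $t \mapsto t^3 + t$ is strictly increasing:</p>

```python
def next_term(s):
    # Unique real root of t^3 + t = s, found by bisection
    # (t -> t^3 + t is strictly increasing, and 0 < root < s for s > 0)
    lo, hi = 0.0, s
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid**3 + mid < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

a, b = 3.0, 1.5          # arbitrary starting values a_0, a_1 >= 1
for _ in range(100):
    a, b = b, next_term(a + b)

print(round(b, 6))  # 1.0, matching the limit derived above
```

<p>The limit $1$ is consistent with the fixed-point equation $t^3 + t = 2t$, whose only root $\ge 1$ is $t = 1$.</p>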
|
3,910,345 | <p>Recently a lecturer used this notation, which I assume is a sort of twisted form of Leibniz notation:</p>
<p><span class="math-container">$$y\,\mathrm{d}x - x\,\mathrm{d}y \equiv -x^2\,\mathrm{d}\left(\frac{y}{x}\right)$$</span></p>
<p>The logic here was that this could be used as:</p>
<p><span class="math-container">$$\begin{align}
-x^2\,\mathrm{d}\left(\frac{y}{x}\right) &\equiv -x^2\,\left(\frac{\mathrm{d}y}{x} -\frac{y}{x^2}\,\mathrm{d}x\right)\\
&\equiv y\mathrm{d}x - x\mathrm{d}y
\end{align}
$$</span></p>
<p>Why is this legal?</p>
<p>I can see some kind of differentiation going on with the second term in the above equivalence, producing the <span class="math-container">$\frac{1}{x^2}$</span>, but having the single <span class="math-container">$\mathrm{d}$</span> seems like a really weird abuse of notation, and I don't quite follow why it splits the single <span class="math-container">$\frac{y}{x}$</span> fraction into two parts.</p>
| md2perpe | 168,433 | <p>Very often it works to just think of <span class="math-container">$\mathrm{d}f$</span> as an alternative notation to <span class="math-container">$f'$</span> or the derivative of <span class="math-container">$f$</span> with respect to some unnamed variable.</p>
<p>For example, assume that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are functions of some common variable. Then we have
<span class="math-container">$$
y\,x' - x\,y' = -x^2 (y/x)'
$$</span>
Just using the notation <span class="math-container">$\mathrm{d}f$</span> instead of <span class="math-container">$f'$</span> gives
<span class="math-container">$$
y\,\mathrm{d}x - x\,\mathrm{d}y = -x^2 \mathrm{d}(y/x).
$$</span></p>
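<p>A quick numeric spot check (my own addition, not from the answer) of the identity, treating $\mathrm{d}x$ and $\mathrm{d}y$ as formal coefficients represented here by plain numbers:</p>

```python
import random

random.seed(0)
for _ in range(5):
    x = random.uniform(1, 5)     # nonzero x, to avoid division by zero
    y = random.uniform(-5, 5)
    dx = random.uniform(-1, 1)   # formal differentials, here just numbers
    dy = random.uniform(-1, 1)

    lhs = y * dx - x * dy
    # d(y/x) = dy/x - (y/x^2) dx, multiplied through by -x^2
    rhs = -x**2 * (dy / x - y / x**2 * dx)

    assert abs(lhs - rhs) < 1e-9

print("identity checked")
```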
|
508,790 | <p>I always see the term $\mathcal{F}$-measurable, but I really don't understand its meaning; I am not able to visualize it.</p>
<p>Need some guidance on this.</p>
<p>I don't really understand $\sigma(Y)$-measurable either. What is the difference?</p>
| Stefan Hansen | 25,632 | <p>Let $(\Omega,\mathcal{F},P)$ be a probability space, i.e. $\Omega$ is a non-empty set, $\mathcal{F}$ is a sigma-algebra of subsets of $\Omega$ and $P:\mathcal{F}\to [0,1]$ is a probability measure on $\mathcal{F}$. Now, suppose we have a function $X:\Omega\to\mathbb{R}$ and we want to "measure" the probability of $X$ belonging to some subset of $\mathbb{R}$. That is, we want to assign the probability to sets of the form $$\{X\in A\}:=X^{-1}(A)=\{\omega\in\Omega\mid X(\omega)\in A\}$$ for Borel sets $A\in\mathcal{B}(\mathbb{R})$. For this to make sense, we need to make sure that $\{X\in A\}\in\mathcal{F}$ for all $A\in\mathcal{B}(\mathbb{R})$, otherwise we can't assign a probability to it (recall that $P$ is <em>only</em> defined on $\mathcal{F}$). </p>
<p>Whenever $X:\Omega\to\mathbb{R}$ satisfies that $X^{-1}(A)\in\mathcal{F}$ for all $A\in\mathcal{B}(\mathbb{R})$ we say that $X$ is $(\mathcal{F},\mathcal{B}(\mathbb{R}))$-measurable or just $\mathcal{F}$-measurable when there is no chance of confusion. Thus, for a random variable $X$, it makes sense to assign the probability to any set of the form $\{X\in A\}$, and this defines the <em>distribution</em> of $X$:
$$
P_X(A):=P(\{X\in A\}),\quad A\in\mathcal{B}(\mathbb{R}).
$$
Note that a random variable is a synonym for an $\mathcal{F}$-measurable function.</p>
<p>If $Y:\Omega\to\mathbb{R}$ is a random variable, then $\sigma(Y)$ is, by definition, given as
$$
\sigma(Y)=\sigma(\{Y^{-1}(A)\mid\ A\in\mathcal{B}(\mathbb{R})\}),
$$
i.e. the smallest sigma-algebra containing all sets of the form $Y^{-1}(A)$. Another way of characterizing $\sigma(Y)$ is by saying that it is the smallest sigma-algebra we can put on $\Omega$ that makes $Y$ measurable.</p>
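<p>A tiny finite example (my own illustration; the sample space and the values of $Y$ are made up) showing that when $\Omega$ is finite, $\sigma(Y)$ consists of all unions of the level sets $Y^{-1}(\{v\})$:</p>

```python
from itertools import combinations

# Hypothetical finite sample space and random variable Y
omega = ['a', 'b', 'c', 'd']
Y = {'a': 0, 'b': 0, 'c': 1, 'd': 2}

# The level sets Y^{-1}({v}) partition omega
blocks = {}
for w, v in Y.items():
    blocks.setdefault(v, set()).add(w)
blocks = list(blocks.values())   # [{'a', 'b'}, {'c'}, {'d'}]

# sigma(Y) is generated by the level sets: here, all unions of blocks
sigma_Y = {frozenset().union(*combo)
           for r in range(len(blocks) + 1)
           for combo in combinations(blocks, r)}

print(len(sigma_Y))  # 8 = 2**3 sets: the smallest sigma-algebra making Y measurable
```

<p>Note that $\{a\}$ alone is not in $\sigma(Y)$: the sigma-algebra cannot distinguish outcomes on which $Y$ takes the same value.</p>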
|
642,324 | <p>According to <a href="http://en.wikipedia.org/wiki/Regression_analysis" rel="nofollow">wikipedia</a>, regression analysis is a statistical process for estimating the relationships among variables. Regression analysis is widely used for prediction and forecasting. So why is regression analysis also used as a statistical test? For example, in <a href="http://www.ats.ucla.edu/stat/spss/whatstat/" rel="nofollow">this page</a>, logistic regression and linear regression are listed among the t-test, ANOVA, the chi-square test, etc.</p>
| Anatoly | 90,997 | <p>Regression analysis is surely an important tool for prediction and forecasting, but it is also commonly used as a statistical test for many purposes, e.g. to investigate whether a given variable is associated with another one independently of confounders, or whether a given multivariable model significantly predicts a given variable, and so on. In all these applications, regression provides a number of parameters, including measures of the strength of these associations and corresponding p values. In this view, it is one of the most important statistical tests used in many fields of science. </p>
|
3,546,773 | <p>What are the real/complex zeros of:</p>
<p><span class="math-container">$t^9 - 1$</span></p>
<p>I also need to use the exponential form of complex numbers</p>
| J. W. Tanner | 615,567 | <p><span class="math-container">$\exp\left(\dfrac{2\pi i k}9\right)$</span> for <span class="math-container">$k=1$</span> to <span class="math-container">$9$</span>.</p>
<p>That's real only for <span class="math-container">$k=9$</span>.</p>
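<p>A short numeric check (my own addition, using Python's <code>cmath</code>) of these nine roots in exponential form:</p>

```python
import cmath

# The nine complex zeros of t^9 - 1 in exponential form
roots = [cmath.exp(2j * cmath.pi * k / 9) for k in range(1, 10)]

# Each root satisfies t^9 = 1 (up to floating-point error)
print(all(abs(r**9 - 1) < 1e-9 for r in roots))   # True

# Exactly one root (k = 9, i.e. t = 1) is real
print(sum(abs(r.imag) < 1e-9 for r in roots))     # 1
```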
|
306,461 | <p>Let $A = \{(x,y) \in\mathbb{R}^2: a \leq (x-c)^2+(y-d)^2 \leq b\}$ for given real numbers $a, b, c, d$. I want to show that $A$ is path-connected.</p>
<p>How can I do that?</p>
<p>I know that every open connected subset of $\mathbb R^2$ is path-connected. But this set is obviously not open, so I cannot use that. Then I thought of multiple cases. If we take arbitrary points $x$ and $y$ and the line segment between them does not intersect the inner circle centred at $(c,d)$, then the segment obviously stays in the set, so we can define the path directly. I am stuck on the other case. </p>
| Ben Millwood | 29,966 | <p>Consider the following set:</p>
<p>$A^\prime=\{(x,y) \in\mathbb{R}^2: a< (x-c)^2+(y-d)^2< b\}$</p>
<p>i.e. exactly your set but with $<$ instead of $\le$. This set is open and connected, so by your comments it's path connected.</p>
<p>Show that the points in your set but not in mine can easily reach mine by a path.</p>
|
29,255 | <p>Sorry! I am not clear about these questions:</p>
<ol>
<li><p>why an empty set is open as well as closed?</p></li>
<li><p>why the set of all real numbers is open as well as closed?</p></li>
</ol>
| Jens Renders | 131,972 | <p>In my opinion, the other answers to this question are quite poor, as they just cite the definition of a topology which indeed states that the whole space and the empty set are open and closed. The natural question that then follows is: why define it like that?</p>
<p>The idea of topology comes from metric spaces (of which <span class="math-container">$\mathbb{R}$</span> is an example), and one approach is the following. In metric spaces we can talk about convergence of sequences (if you know what convergence means in <span class="math-container">$\mathbb{R}$</span>, you can follow this explanation). In mathematics we often talk about subsets being closed under a certain operation; for example, a vector subspace has to be closed under addition. That means that if you take two vectors in that subspace, their sum has to be in that subspace as well. Closed in metric spaces means the same for convergence. A subset <span class="math-container">$A$</span> in a metric space is called closed if it is closed under the operation of taking limits, i.e.:
<span class="math-container">$$ (\forall n: a_n \in A) \implies \lim_n a_n \in A $$</span>
(if the limit exists). It is obvious that both the empty set and the whole space satisfy this (can you see why?), so they are both closed. Now open can be defined as being the complement of a closed set, and your question is answered.</p>
<p>The idea of general topology is to then give an abstract generalization of these closed sets (or of the open sets) that works better; for example, it allows you to take products of spaces, which causes problems for metric spaces. This abstract generalization is made by observing some <strong>properties</strong> of open and closed sets in metric spaces, and then <strong>calling them a definition</strong>. One of these properties is that the empty set and the whole space are open and closed. So to conclude: the general definition of a topology is great, but merely citing it does not give an (insightful) answer to this question.</p>
|
1,029,650 | <p>In four-dimensional space, the Levi-Civita symbol is defined as:</p>
<p>$$ \varepsilon_{ijkl} =
\begin{cases}
+1 & \text{if }(i,j,k,l) \text{ is an even permutation of } (1,2,3,4) \\
-1 & \text{if }(i,j,k,l) \text{ is an odd permutation of } (1,2,3,4) \\
\;\;\,0 & \text{otherwise}
\end{cases} $$
</p>
<p>Let's suppose that I fix the last index ($l=4$ for example). I guess that the 4-index symbol can now be replaced with a 3-index one:</p>
<p>$$ \varepsilon_{ijk} =
\begin{cases}
+1 & \text{if } (i,j,k) \text{ is } (1,2,3), (2,3,1) \text{ or } (3,1,2), \\
-1 & \text{if } (i,j,k) \text{ is } (3,2,1), (1,3,2) \text{ or } (2,1,3), \\
\;\;\,0 & \text{if }i=j \text{ or } j=k \text{ or } k=i
\end{cases} $$ </p>
<p>My doubt is the following: is $$ \varepsilon_{ijk4}A^{jk} = \varepsilon_{ij4k}A^{jk} $$ true, in the sense that both 4-index symbols can be replaced by the same 3-index symbol? Or do they give two 3-index symbols with different signs? (I'm using Einstein notation, so repeated indices are summed.)</p>
| Semiclassical | 137,524 | <p>Note that the permutations corresponding to $(i,j,k,4)$ and $(i,j,4,k)$ differ only by a transposition of the last two indices. Consequently, they have different parity and so their Levi-Civita symbols have opposite signs (assuming they do not vanish, of course). Hence the correct statement is $\epsilon_{ijk4}A^{jk}=-\epsilon_{ij4k}A^{jk}$.</p>
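<p>This sign flip is easy to verify by brute force. The sketch below (my own addition; the array <code>A</code> is arbitrary example data) computes the symbol from the inversion count of the index tuple:</p>

```python
def epsilon(*idx):
    # Levi-Civita symbol: 0 on repeated indices, else the permutation sign
    if len(set(idx)) != len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

# Transposing the last two indices flips the sign (or both sides are 0)
assert all(epsilon(i, j, k, 4) == -epsilon(i, j, 4, k)
           for i in range(1, 5) for j in range(1, 5) for k in range(1, 5))

# Hence eps_{ijk4} A^{jk} = -eps_{ij4k} A^{jk} for any 3x3 array A
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # arbitrary example values
for i in range(1, 4):
    s1 = sum(epsilon(i, j, k, 4) * A[j - 1][k - 1]
             for j in range(1, 4) for k in range(1, 4))
    s2 = sum(epsilon(i, j, 4, k) * A[j - 1][k - 1]
             for j in range(1, 4) for k in range(1, 4))
    assert s1 == -s2

print("opposite signs verified")
```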
|
1,032,535 | <p>I know $n \in \mathbb{N}$ and...</p>
<p>$$
a_n = \begin{cases}
0 & \text{ if } n = 0 \\
a_{n-1}^{2} + \frac{1}{4} & \text{ if } n > 0
\end{cases}
$$</p>
<ol>
<li><strong>Base Case:</strong></li>
</ol>
<p>$$a_1 = a^2_0 + \frac{1}{4}$$</p>
<p>$$a_1 = 0^2 + \frac{1}{4} = \frac{1}{4}$$</p>
<p>Thus, we have that $0 < a_1 < 1$. So our base case is ok.</p>
<ol start="2">
<li><strong>Inductive hypothesis</strong>:</li>
</ol>
<p>Assume $n$ is arbitrary. Suppose
$$0 < a_{n} < 1$$
$$0 < a_{n-1}^{2} + \frac{1}{4} < 1$$
is true, when $n > 1$.</p>
<ol start="3">
<li><strong>Inductive step</strong>:</li>
</ol>
<p>Let's prove
$$0 < a_{n+1} < 1$$
$$0 < a_{n}^{2} + \frac{1}{4} < 1$$</p>
<p>is also true when $n > 1$.</p>
<p>My guess is that we have to prove that $a^2_{n}$ has to be less than $\frac{3}{4}$, which otherwise would make $a_{n+1}$ equal or greater than $1$.</p>
<p>So we have $(a_{n-1}^{2} + \frac{1}{4})^2 < \frac{3}{4}$... I don't know if this is correct, and how to continue...</p>
| Community | -1 | <p><strong>Note:</strong> <em>Based on mvggz's answer, I will try to give my own complete answer with plenty of explanation. If something is wrong, please write in the comment section below ;)</em></p>
<hr>
<p>Let $$
a_n = \begin{cases}
0 & \text{ if } n = 0 \\
a_{n-1}^{2} + \frac{1}{4} & \text{ if } n > 0
\end{cases}
$$</p>
<p>For all $n \in \mathbb{N}$ and $n \geq 1$</p>
<hr>
<ol>
<li><strong>Basis:</strong></li>
</ol>
<p>Our base case is when $n = 1$, so let's verify the following statement
$$0 < a_1 < 1$$
We know that $a_1 = a^2_0 + \frac{1}{4}$, so we have:
$$0 < a^2_0 + \frac{1}{4} < 1$$
$a_0$ is 0, from the definition of $a_n$, so we have:
$$0 < 0^2 + \frac{1}{4} < 1$$
$$0 < \frac{1}{4} < 1$$
which is clearly true, so our base case is proved.</p>
<hr>
<ol start="2">
<li><strong>Inductive step</strong></li>
</ol>
<p>Let's assume that $0 < a_n < \frac{1}{2}$ (this is a stronger hypothesis, because we assume $a_n < \frac{1}{2}$ instead of $a_n < 1$).</p>
<p>We want to prove that $0 < a_{n+1} < 1$, but if we prove $0 < a_{n+1} < \frac{1}{2}$, then the first claim follows as well, because $\frac{1}{2} < 1$.</p>
<p>We know that:</p>
<p>$$a_{n+1} = a^2_n + \frac{1}{4} > a^2_n > 0$$</p>
<p>We know $a^2_n > 0$ because, by the inductive hypothesis, $a_n$ is a positive number, and the square of a positive number is positive.</p>
<p>Since we assume that $a_n < \frac{1}{2}$, replacing $a_n$ with $\frac{1}{2}$ gives a strict upper bound (note the $<$ sign):</p>
<p>$$a_{n+1} = a^2_n + \frac{1}{4} < \left( \frac{1}{2} \right)^2 + \frac{1}{4}$$
$$a_{n+1} = a^2_n + \frac{1}{4} < \frac{1}{4}+ \frac{1}{4}$$
which simplified becomes:
$$a_{n+1} = a^2_n + \frac{1}{4} < \frac{1}{2}$$</p>
<p>and we have just proved that $a_{n + 1} < \frac{1}{2}$, so it must also be less than $1$. Note that $a_{n+1} > 0$, because $a^2_n > 0$ and also $\frac{1}{4} > 0$.</p>
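<p>A quick numerical check (my own addition) of the strengthened bound: iterating from $a_0 = 0$, the sequence stays strictly below $\frac{1}{2}$ while creeping up toward the fixed point of $x = x^2 + \frac{1}{4}$, which is $x = \frac{1}{2}$:</p>

```python
a = 0.0
for n in range(1000):
    a = a * a + 0.25
    assert 0 < a < 0.5    # the strengthened induction bound holds at every step

print(a)  # close to (but strictly below) the fixed point 1/2
```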
|