4,461,327
<p>To show that they are equal, I need to show</p> <p><span class="math-container">$\bigcap_{n=1}^{\infty}[0,1+1/n) \subset [0,1]$</span> and <span class="math-container">$[0,1] \subset \bigcap_{n=1}^{\infty}[0,1+1/n)$</span></p> <p>My attempt: let <span class="math-container">$x \in [0,1] \Rightarrow 0 \leq x \leq 1$</span>; since <span class="math-container">$1 &lt; 1+1/n, \ \forall n \geq 1$</span>, it follows that <span class="math-container">$x \in \bigcap_{n=1}^{\infty}[0,1+1/n)$</span>, so <span class="math-container">$[0,1] \subset \bigcap_{n=1}^{\infty}[0,1+1/n)$</span>.</p> <p>However, I don't know how to show <span class="math-container">$\bigcap_{n=1}^{\infty}[0,1+1/n) \subset [0,1]$</span>. It seems obvious since <span class="math-container">$\lim_{n \to \infty} (1+1/n) = 1$</span>, but I am having trouble proving it. Any help or hint would be appreciated.</p>
Jordan Miller
1,062,506
<p>To simplify this, let <span class="math-container">$p$</span> be the statement <span class="math-container">$x\in [0,1]$</span> and <span class="math-container">$q$</span> be the statement <span class="math-container">$x\in \bigcap_{n=1}^{\infty}[0,1+\frac{1}{n})$</span>.</p> <p>The inclusion <span class="math-container">$[0,1]\subset \bigcap_{n=1}^{\infty}[0,1+\frac{1}{n})$</span> is the same as <span class="math-container">$p\to q$</span>,</p> <p>and <span class="math-container">$\bigcap_{n=1}^{\infty}[0,1+\frac{1}{n})\subset [0,1]$</span> is the same as <span class="math-container">$q\to p$</span>, which is equivalent to its contrapositive <span class="math-container">$\lnot p \to \lnot q$</span>.</p> <p>You have already proven <span class="math-container">$p\to q$</span> (though it may need to be more rigorous, especially for a class). For the remaining part it may help to assume that <span class="math-container">$x\notin [0,1]$</span> and try to conclude that <span class="math-container">$x\notin \bigcap_{n=1}^{\infty}[0,1+\frac{1}{n})$</span>.</p>
4,461,327
Eparoh
617,769
<p>Observe that if <span class="math-container">$x \in \bigcap_{n=1}^{\infty}[0,1+1/n)$</span> then you have that <span class="math-container">$$x \in [0,1+1/n), \forall n \in \mathbb{N}\hspace{3mm} (1)$$</span> This implies that <span class="math-container">$x \geq 0$</span>, and it remains to prove that <span class="math-container">$x \leq 1$</span>.</p> <p>Here you have two options:</p> <ul> <li>If you have studied sequence limits, then defining <span class="math-container">$x_n=1+1/n$</span> we get from (1) that <span class="math-container">$$x \leq x_n, \forall n \in \mathbb{N}$$</span> So, by the monotonicity of the limit and knowing that <span class="math-container">$x_n \to 1$</span>, we get that <span class="math-container">$x \leq 1$</span>, as wanted.</li> <li>The second option is to prove it from scratch. Suppose by way of contradiction that <span class="math-container">$x &gt;1$</span>; then <span class="math-container">$x-1&gt;0$</span> and there exists an <span class="math-container">$n_0 \in \mathbb{N}$</span> such that <span class="math-container">$x-1 &gt; \frac{1}{n_0}$</span> <span class="math-container">$\left(\text{just take } n_0 &gt; \frac{1}{x-1} \right)$</span>. Now, this implies that <span class="math-container">$x&gt;1+1/n_0$</span>, so <span class="math-container">$x \not \in [0,1+1/n_0)$</span>, in contradiction with (1).</li> </ul>
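The Archimedean choice of $n_0$ in the second bullet can be illustrated numerically; this is only a sketch, and `witness_n0` is a hypothetical helper name.

```python
import math

def witness_n0(x):
    """For x > 1, pick n0 > 1/(x - 1), so that x - 1 > 1/n0 (hypothetical helper)."""
    assert x > 1
    return math.floor(1 / (x - 1)) + 1

# Any x > 1 then fails to lie in [0, 1 + 1/n0), so x is not in the intersection.
for x in [2.5, 1.3, 1.07]:
    n0 = witness_n0(x)
    assert x > 1 + 1 / n0, (x, n0)
```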
4,186,743
<p>Is there a trigonometric identity expressing <span class="math-container">$\cos(x)+\sin(x)$</span> as a single trigonometric function? If not, what is <span class="math-container">$\cos(x)+\sin(x)$</span> as a function of <span class="math-container">$\cos(x)$</span>?</p>
Luca Ghidelli
176,416
<p><span class="math-container">$$\sqrt 2 \cos (x-\pi/4)$$</span></p> <p>By the <a href="https://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">prosthaphaeresis (Simpson) formulas</a>. If you need, you can convert sines into cosines via <span class="math-container">$\sin x =\cos(\pi/2-x)$</span>.</p> <hr /> <p>Actually, there is another fun way to see that this formula is true.</p> <p>As you may know, if you take a point on the plane located at an angle <span class="math-container">$x$</span> with respect to the reference axis, then <span class="math-container">$(\cos x, \sin x)$</span> are proportional to the Cartesian coordinates of the point.</p> <p>So your formula is proportional to the sum of the coordinates of the point. This is basically equivalent (up to a proportionality constant) to computing a coordinate of your point with respect to an orthogonal Cartesian system that is rotated by an angle of 45 degrees.</p>
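A quick numerical sweep (a sanity check, not a proof) confirming the identity:

```python
import math

# Check cos(x) + sin(x) == sqrt(2) * cos(x - pi/4) on a grid of points.
for k in range(100):
    x = -5.0 + 0.1 * k
    lhs = math.cos(x) + math.sin(x)
    rhs = math.sqrt(2) * math.cos(x - math.pi / 4)
    assert abs(lhs - rhs) < 1e-12, x
```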
3,946,591
<blockquote> <p>The sides of a triangle are on the lines <span class="math-container">$2x+3y+4=0$</span>, <span class="math-container">$ \ \ x-y+3=0$</span>, and <span class="math-container">$5x+4y-20=0$</span>. Find the equations of the altitudes of the triangle.</p> </blockquote> <p>Should I find the vertices first? Or is there a direct way? Actually, I tried finding the vertices using the substitution method, but I found it hard to turn the result into an equation.</p>
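The "vertices first" plan can be carried out mechanically: intersect the side lines pairwise (Cramer's rule), then take the line through each vertex perpendicular to the opposite side. A sketch in exact rational arithmetic; the helper names are illustrative:

```python
from fractions import Fraction as F

# The three side lines, as (a, b, c) for a*x + b*y + c = 0.
L1 = (F(2), F(3), F(4))
L2 = (F(1), F(-1), F(3))
L3 = (F(5), F(4), F(-20))

def intersect(p, q):
    """Vertex = intersection of two lines, via Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = p, q
    d = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)

def altitude(vertex, side):
    """Line through `vertex` perpendicular to `side`: b*(x-x0) - a*(y-y0) = 0."""
    a, b, _ = side
    x0, y0 = vertex
    return (b, -a, a * y0 - b * x0)

A = intersect(L1, L2)   # vertex opposite side L3
B = intersect(L1, L3)   # vertex opposite side L2
C = intersect(L2, L3)   # vertex opposite side L1
alts = [altitude(A, L3), altitude(B, L2), altitude(C, L1)]
```

Each returned triple $(a,b,c)$ is an altitude equation $ax+by+c=0$; by construction its normal is orthogonal to the opposite side's normal, so the lines are perpendicular.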
DanielV
97,045
<p>Use a transition matrix with two states: the string has a <span class="math-container">$b$</span> or it doesn't. State 0 will be no bs and state 1 will be some bs. <span class="math-container">$M_{i, j}$</span> is &quot;how many ways can we transition from a string of type <span class="math-container">$i$</span> to a string of type <span class="math-container">$j$</span>&quot; (by appending characters):</p> <p><span class="math-container">$$M = \begin{bmatrix} 3 &amp; 1 \\ 0 &amp; 3 \end{bmatrix}$$</span></p> <p>For example, <span class="math-container">$M_{1, 0} = 0$</span>: there is no way to go from a string with some bs to a string with no bs. <span class="math-container">$M_{1, 1} = 3$</span>: there are 3 ways to go from a string with some bs to a string with some bs (by appending a, c, or d).</p> <p>The initial state is <span class="math-container">$V=[1, 0]$</span>, since the empty string is one string which does not contain a <span class="math-container">$b$</span>. The final state can be either kind of string, so it is <span class="math-container">$F=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$</span>.</p> <p><span class="math-container">$$C = VM^nF$$</span> counts the number of ways to go from the initial state (empty string) to the final state (any reachable string). <span class="math-container">$M$</span> is a Jordan normal form matrix, so a closed form for its powers is well known:</p> <p><span class="math-container">$$C = [1, 0]\begin{bmatrix} 3^n &amp; n \cdot 3^{n-1} \\ 0 &amp; 3^n \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 3^n + n3^{n-1}$$</span></p>
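The original question isn't reproduced above, but the closed form $3^n + n\,3^{n-1}$ matches the count of length-$n$ strings over a four-letter alphabet containing at most one b. Under that assumed reading, a brute-force check of the matrix-power result:

```python
from itertools import product

def count_formula(n):
    # Closed form from the Jordan-form matrix power: 3^n + n * 3^(n-1)
    return 3 ** n + n * 3 ** (n - 1)

def count_brute(n):
    # Assumed problem: strings over {a, b, c, d} with at most one 'b'.
    return sum(1 for s in product("abcd", repeat=n) if s.count("b") <= 1)

for n in range(1, 8):
    assert count_formula(n) == count_brute(n), n
```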
2,966,871
<blockquote> <p>Define the unit sphere as <span class="math-container">$S^1=\{x\in \mathbb{R}^2: \|x\|=1\}$</span></p> <p>Also define the real projective line as <span class="math-container">$\mathbb{R}P^1=S^1/(x\sim-x)$</span></p> </blockquote> <p>We can consider the mapping <span class="math-container">$f:S^1\rightarrow S^1$</span>, <span class="math-container">$f(x)=x^2$</span></p> <p>If I can show that <span class="math-container">$f$</span> is a continuous quotient map, i.e. <span class="math-container">$f$</span> is a continuous surjective mapping such that <span class="math-container">$f(-x)=f(x)$</span> for all <span class="math-container">$x\in S^1$</span>, then I can apply the universal property of a quotient topology and conclude that there exists an induced homeomorphism between <span class="math-container">$\mathbb{R}P^1$</span> and <span class="math-container">$S^1$</span>.</p> <p>I am unsure how to prove, however, that <span class="math-container">$f$</span> is both surjective and continuous. It's obvious that <span class="math-container">$f(-x)=f(x)$</span> for all <span class="math-container">$x\in S^1$</span>, but how should I go about the other two claims? I think I am overthinking this. Any help would be much appreciated.</p>
Ashvin Swaminathan
259,604
<p>The map <span class="math-container">$f$</span> is surjective because in polar coordinates it is given by <span class="math-container">$e^{i\theta} \mapsto e^{2i\theta}$</span>, and every angle <span class="math-container">$\psi \in [0, 2\pi)$</span> can be uniquely represented as <span class="math-container">$2\theta$</span> for some angle <span class="math-container">$\theta \in [0, \pi)$</span>. For continuity, note that <span class="math-container">$\theta \mapsto e^{2i\theta}$</span> is a differentiable function from <span class="math-container">$\mathbb{R}$</span> to <span class="math-container">$S^1$</span>, and it is periodic with period <span class="math-container">$2\pi$</span>, so it descends to a continuous function from <span class="math-container">$\mathbb{R}/(2\pi \mathbb{Z}) \simeq S^1$</span> to <span class="math-container">$S^1$</span>.</p>
2,237,441
<p>Let $n$ be a natural number.</p> <p>I need to prove that $9 \mid 4^n-3n-1$.</p> <p>Could anyone give me some hints on how to prove it without using induction?</p>
lhf
589
<p>By the binomial theorem, $4^n = (1+3)^n = 1 + 3n + 9a$ for some nonnegative integer $a$ (namely $a=\sum_{k=2}^{n} \binom{n}{k} 3^{k-2}$), so $4^n - 3n - 1 = 9a$.</p>
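The claim is easy to spot-check numerically (an illustration, not a proof):

```python
# Check 9 | 4**n - 3n - 1 for many n; the binomial expansion supplies
# the quotient a = sum of C(n, k) * 3**(k-2) over k >= 2.
for n in range(0, 300):
    assert (4 ** n - 3 * n - 1) % 9 == 0, n
```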
2,403,201
<p>How do I solve for $x$:</p> <p>$$\log\left(\frac{1.07^x}{1050-2.5x}\right)=\log\left(\frac{1.2}{828}\right)$$</p> <p>If I exponentiate both sides with base $10$, I get: $\dfrac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p> <p>Then I'm stuck. How do I solve this?</p> <p>As suggested by @Kevin, I have decided to add my take here:</p> <p>One way I could solve this is using linear interpolation approximation.</p> <p>We have</p> <p>$\frac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p> <p>$1-690\frac{1.07^x}{1050-2.5x}=0$</p> <p>We need to get the LHS as close to $0$ as possible.</p> <p>At $x=5(A)$,</p> <p>LHS $\simeq$ 0.067219 (a)</p> <p>Since the LHS at $x=5$ is greater than $0$, we try $x=7(B)$:</p> <p>LHS $\simeq$ -0.07311 (b)</p> <p>Since the LHS at $x=7$ is less than $0$,</p> <p>$5&lt;x&lt;7$</p> <p>Thus by interpolation,</p> <p>$x=[A+\frac{a}{a-b}(B-A)]=[5+\frac{0.067219}{0.067219-(-0.07311)}(7-5)]\simeq5.958$</p>
Barry Cipra
86,747
<p>It's convenient to rewrite the equation to be solved for $x$ as</p> <p>$$690\cdot1.07^x+2.5x=1050$$</p> <p>As others have indicated in comments and answers, equations of this form cannot, in general, be solved exactly; the best that can be done is a numerical approximation. There are various ways to go about obtaining approximations, and depending on what you're trying to learn, it can be worth trying several of them to see how well they work (e.g., how quickly they get to an acceptable number of digits). If, however, you just want to get a few digits of accuracy for <em>this</em> problem, and if you're willing to play around with a bit of computation (which in this day and age costs next to nothing), then a simple, intuitive approach may be all you need.</p> <p>The key observation to make is that the left hand side of the equation defines a function, $f(x)=690\cdot1.07^x+2.5x$, that is <em>strictly increasing</em>. That's because it's the sum of an exponential function, $690\cdot1.07^x$, and a linear function, $2.5x$, each of which individually is strictly increasing. The importance of this is that if you try a value for $x$ and $f(x)\gt1050$, then you know you've tried a value that's too big, while if $f(x)\lt1050$, then you've tried a value that's too small. Moreover, if $f(x)\approx1050$, then you're close to the value you want.</p> <p>With a little playing around, you're likely to find that</p> <p>$$f(6)=1050.50394278\ldots$$</p> <p>which is just barely too big, so it's worth computing</p> <p>$$f(5.9)=1043.27151078\ldots$$</p> <p>which is too small. This tells us that $5.9\lt x\lt6$, and $x$ is probably closer to $6$ than to $5.9$. A little more playing around gives</p> <p>$$f(5.99)=1049.77857176\ldots$$</p> <p>which is still too small, but if you're happy with two digits of accuracy you might stop here and say $x\approx5.99$. If you want additional accuracy, some additional playing around reveals</p> <p>$$f(5.993)=1049.99613331\ldots$$</p> <p>which is probably as close as you need to get (for an investment analysis, at least, which is where the OP said in comments the problem comes from).</p> <p>The "playing around" can be streamlined if you know about linear interpolation. It's worth noting, though, that linear interpolation involves its own <em>separate</em> calculation requiring four inputs: the value of $x$ that's too big, the value that's too small, and the function value of each. I found it easier to simply enter the expression "690*1.07^6+2.5*6" into Google, and then change the 6's into 5.9's, then 5.99's, etc. (That's also why I reported the $f(x)$ values to $8$ digits: it was as easy to cut and paste the entire result as it was to cut and paste just a portion.) Actually, I tried 5.995 before I tried 5.993; it was easier to try splitting the difference between 5.99 and 6 first than it was to think about which one gave a function value closer to $1050$.</p>
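Since $f$ is strictly increasing, the "playing around" can also be automated by bisection on a bracket where $f$ crosses $1050$; a minimal sketch:

```python
def f(x):
    # Left-hand side of 690 * 1.07**x + 2.5 * x = 1050
    return 690 * 1.07 ** x + 2.5 * x

# f is strictly increasing with f(5.9) < 1050 < f(6), so bisect that bracket.
lo, hi = 5.9, 6.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 1050:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2  # close to 5.993, matching the hand computation
```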
3,454,514
<blockquote> <p>How to change the integration order in the given integral? <span class="math-container">$$ \int\limits_0^1dx\int\limits_0^1dy\int\limits_0^{x^2+y^2}fdz\rightarrow \int\limits_?^?dz\int\limits_?^?dy\int\limits_?^?fdx $$</span></p> </blockquote> <p>I tried to make a graphic interpretation, but it seemed rather complex and didn't clarify things. Moreover, I might not have much time if I had to solve this kind of problem on a test. So, I would be really grateful if someone could explain how to solve this problem efficiently.</p>
Andrei
331,661
<p>It's not a good idea to reverse the order of integrals in this case; it will make things more complicated. But it can be done. You need to first understand what the domain of integration is. If you just look at the integral over <span class="math-container">$z$</span>, and you think about <span class="math-container">$x^2+y^2$</span> as <span class="math-container">$r^2$</span>, you have a paraboloid of revolution around the <span class="math-container">$z$</span> axis, and you are looking at the volume between the paraboloid and the <span class="math-container">$xy$</span> plane. Note however that you cut this paraboloid with the <span class="math-container">$x=0$</span>, <span class="math-container">$x=1$</span>, <span class="math-container">$y=0$</span>, and <span class="math-container">$y=1$</span> planes.</p> <p>Now let's look at the limits in the reverse order. <span class="math-container">$z$</span> obviously goes between <span class="math-container">$0$</span> and <span class="math-container">$2$</span>. Now look at the plane perpendicular to the <span class="math-container">$z$</span> axis. If you draw this, you have a square from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(1,1)$</span>, but your integration ranges for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> must account for <span class="math-container">$x^2+y^2&gt;z$</span>. The boundary is a circle centered at the origin with radius <span class="math-container">$R=\sqrt z$</span>. When the radius of the circle is greater than <span class="math-container">$1$</span> but less than <span class="math-container">$\sqrt 2$</span>, the circle intersects the <span class="math-container">$x=1$</span> line. The minimum <span class="math-container">$y$</span> value is <span class="math-container">$\sqrt{R^2-1^2}=\sqrt{z-1}$</span>, and the maximum is <span class="math-container">$1$</span>. Then you integrate <span class="math-container">$x$</span> between <span class="math-container">$\sqrt{R^2-y^2}=\sqrt{z-y^2}$</span> and <span class="math-container">$1$</span>. For the case where <span class="math-container">$z&lt;1$</span>, the <span class="math-container">$y$</span> integral goes from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, but you need to split the integral further into the cases <span class="math-container">$y&lt;R$</span> and <span class="math-container">$y&gt;R$</span>. If <span class="math-container">$y&gt;R$</span>, then the integral on <span class="math-container">$x$</span> is from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>; otherwise it's from <span class="math-container">$\sqrt{R^2-y^2}$</span> to <span class="math-container">$1$</span>.</p>
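The domain description can be sanity-checked numerically with $f=1$: integrating in the original order gives $\int_0^1\!\int_0^1 (x^2+y^2)\,dx\,dy = 2/3$, and slicing by $z$ (area of the part of the unit square with $x^2+y^2\ge z$, integrated over $0\le z\le 2$) must give the same volume. A rough grid check, purely as an illustration:

```python
import bisect

# Volume of the region {0 <= x, y <= 1, 0 <= z <= x^2 + y^2}, two ways.
N = 300
vals = sorted(((i + 0.5) / N) ** 2 + ((j + 0.5) / N) ** 2
              for i in range(N) for j in range(N))

# Original order: integrate (x^2 + y^2) over the unit square; exact value 2/3.
v1 = sum(vals) / (N * N)

# Reversed order: for each z in (0, 2), the slice area is the fraction of the
# square where x^2 + y^2 >= z; integrate that area in z (midpoint rule).
M = 400
dz = 2.0 / M
v2 = 0.0
for k in range(M):
    z = (k + 0.5) * dz
    area = (len(vals) - bisect.bisect_left(vals, z)) / len(vals)
    v2 += area * dz

assert abs(v1 - 2 / 3) < 1e-3
assert abs(v1 - v2) < 1e-2
```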
3,917,912
<p>I am reading an article where the author seems to use a known relationship between the sum of a finite sequence of real positive numbers <span class="math-container">$a_1 +a_2 +... +a_n = m$</span> and the sum of their reciprocals. In particular, I suspect that <span class="math-container">\begin{equation} \sum_{i=1}^n \frac{1}{a_i} \geq \frac{n^2}{m} \end{equation}</span><br /> with equality when <span class="math-container">$a_i = \frac{m}{n} \forall i$</span>. Are there any references or known theorems where this inequality is proven?</p> <p><a href="https://math.stackexchange.com/a/1857918/852233">This</a> interesting answer provides a different lower bound. However, I am doing some experimental evaluations where the bound is working perfectly (varying <span class="math-container">$n$</span> and using <span class="math-container">$10^7$</span> uniformly distributed random numbers).</p>
Albus Dumbledore
769,226
<p>By the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality" rel="nofollow noreferrer">Cauchy–Schwarz inequality</a>, <span class="math-container">$$\left(\sum_{i=1}^{n}{\sqrt{a_i}}^2\right)\left(\sum_{i=1}^{n}\frac{1}{{\sqrt{a_i}}^2}\right)\ge \left(\sum_{i=1}^{n} 1\right)^2=n^2$$</span></p> <hr /> <p>Alternatively, WLOG <span class="math-container">$a_1\ge a_2\ge \cdots \ge a_n$</span>; then <span class="math-container">$\frac{1}{a_1}\le \frac{1}{a_2}\le \cdots \le \frac{1}{a_n}$</span>.</p> <p>So by <a href="https://artofproblemsolving.com/wiki/index.php/Chebyshev%27s_Inequality" rel="nofollow noreferrer">Chebyshev's inequality</a>, <span class="math-container">$$n^2=n\left(a_1\frac{1}{a_1}+a_2\frac{1}{a_2}+\cdots+a_n\frac{1}{a_n}\right)\le \left(\sum_{i=1}^{n}a_i\right)\left(\sum_{i=1}^{n}\frac{1}{a_i}\right)$$</span></p>
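A quick empirical check of the bound and its equality case (random trials; an illustration, not a proof):

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 50)
    a = [random.uniform(1e-3, 10) for _ in range(n)]
    m = sum(a)
    # sum of reciprocals >= n^2 / m (small slack for floating point)
    assert sum(1 / x for x in a) >= n * n / m - 1e-9

# Equality when all a_i equal m / n:
n, m = 7, 3.0
a = [m / n] * n
assert abs(sum(1 / x for x in a) - n * n / m) < 1e-9
```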
3,981,983
<p>I had an interesting math problem presented to me some time ago by a friend (he stated it in non-mathematical terms). At what angle would you launch a projectile from a spaceship/satellite such that it left that object and went on to hit another orbiting object? Then, as a supplemental question, he asked at what angle you would launch that projectile to hit the other orbiting object in the least amount of time.</p> <p>I assumed that the objects were only acted upon by a single spherically symmetric mass distribution, so that I could treat it as a one-body problem for each object. Further, I assumed it all took place in the plane with a polar coordinate system, so that I ended up with this simple system of nonlinear autonomous ODEs:</p> <p><span class="math-container">$$ \begin{bmatrix} \frac{d \theta}{dt} \\ \frac{dv}{dt}\\ \frac{dr}{dt} \end{bmatrix} = \begin{bmatrix} \frac{h}{r^{2}} \\ \frac{h^{2}}{r^{3}} -\frac{\mu}{r^{2}} \\ v \end{bmatrix}.$$</span></p> <p>Here the initial conditions for the projectile would be <span class="math-container">$\{ \theta_{i} , r_{i}, v_{\beta}\cos(\phi - \theta_{i}) + v_{i}\}$</span> with <span class="math-container">$h_{\phi} = r_{i} (v_{\beta}\sin(\phi - \theta_{i}) + v_{\theta i})$</span>, and the target object's initial conditions are <span class="math-container">$\{ \theta_{i}^{'} , r_{i}^{'}, v_{i}^{'}\}$</span> with <span class="math-container">$h'= r^{'}_{i} v_{\theta i}^{'}$</span>, where <span class="math-container">$\phi$</span> is the launch angle from the polar axis, <span class="math-container">$v_{\beta}$</span> is the magnitude of the projectile's velocity (which I assume does not change, only its launch direction), and <span class="math-container">$\mu$</span> is a constant relating to the gravitational field strength of the attracting object. 
Below is a picture depicting the general initial and final conditions.</p> <p><a href="https://i.stack.imgur.com/63WHz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63WHz.png" alt="The picture depicting the general initial and final conditions." /></a> <a href="https://i.stack.imgur.com/KZsRS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZsRS.png" alt="enter image description here" /></a></p> <p>After all these preliminaries, I'm basically asking whether there is a variational calculus or other simpler way of solving this problem, perhaps as a boundary value problem of sorts mixed with an initial value ODE problem. That is, aside from computationally pouring through thousands of trajectories with minutely differing <span class="math-container">$\phi$</span>'s and then numerically guessing at the appropriate approximate launch angle or angles that solve my question(s).</p> <p>Which... isn't what I exactly want to do. I'd like to know if there is an equation, or a single ODE or system of ODEs, that I could solve for the launch angle that gives the least time of travel, or for the launch angles that would lead to a hit (irrespective of the time of travel). If you can help in any way, this would be most appreciated. I'm a sophomore college student with little knowledge of robustly solving ODEs or even programming solvers for them.</p>
Henno Brandsma
4,280
<p>By definition, <span class="math-container">$x$</span> is a limit point of <span class="math-container">$A$</span> if</p> <p>for all (open) neighbourhoods <span class="math-container">$U$</span> of <span class="math-container">$x$</span>, <span class="math-container">$U \cap (A\setminus\{x\}) \neq \emptyset$</span>; equivalently, every (open) neighbourhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span> contains a point of <span class="math-container">$A$</span> that is not equal to <span class="math-container">$x$</span>.</p> <p>So, negating this:</p> <p>There is an (open) neighbourhood <span class="math-container">$U_x$</span> of <span class="math-container">$x$</span> such that the only possible point of intersection with <span class="math-container">$A$</span> is <span class="math-container">$x$</span>; that is, <span class="math-container">$A \cap U_x = \emptyset$</span> or <span class="math-container">$A \cap U_x = \{x\}$</span>, depending on whether <span class="math-container">$x \in A$</span> or not. In short, we conclude <span class="math-container">$A \cap U_x \subseteq \{x\}$</span>. There is no need to observe that <span class="math-container">$A$</span> would be closed; we can just apply compactness directly to the cover <span class="math-container">$\{U_x: x \in X\}$</span> and get a similar contradiction with the infiniteness of <span class="math-container">$A$</span>.</p> <p>In fact this allows us to generalise a bit:</p> <p>Call <span class="math-container">$x \in X$</span> a strong limit point of an infinite set <span class="math-container">$A$</span> if for all (open) neighbourhoods <span class="math-container">$U$</span> of <span class="math-container">$x$</span> we have <span class="math-container">$|U \cap A|=|A|$</span> (<span class="math-container">$|\cdot|$</span> denoting set cardinality). Then if <span class="math-container">$X$</span> is compact, every infinite <span class="math-container">$A$</span> has a &quot;strong limit point&quot; in <span class="math-container">$X$</span>. In this general formulation this property of <span class="math-container">$X$</span> is even <em>equivalent</em> to compactness, which limit point compactness is not.</p>
3,000,862
<p>I can name at least 4 different ways of representing the <span class="math-container">$\exp$</span> function:</p> <ol> <li>Taylor series: For <span class="math-container">$x \in \mathbb{R}, \exp(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!}$</span>.</li> <li>Differential equation: <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> differentiable with <span class="math-container">$f'(x) = f(x)$</span> and <span class="math-container">$f(0)=1$</span>.</li> <li>Inverse function of <span class="math-container">$\ln(x) = \int_1^x \frac{dt}{t}$</span> for <span class="math-container">$x&gt;0$</span>.</li> <li>Exponent: The number <span class="math-container">$e$</span> (defined e.g. as <span class="math-container">$\sum_{k=0}^{\infty} \frac{1}{k!}$</span>) raised to the power <span class="math-container">$x$</span>, for <span class="math-container">$x \in \mathbb{R}$</span>.</li> </ol> <p>I managed to prove equivalence among the first <span class="math-container">$3$</span>, but I am a bit puzzled by <span class="math-container">$4$</span>.</p> <p>An easy way would be to look at <span class="math-container">$a^b = \exp(b \ln(a))$</span>. But I am not sure that is meaningful.</p> <p>Is there any other way of defining <span class="math-container">$a^b$</span> without involving <span class="math-container">$\exp$</span> that would give a more meaningful answer? Or how would you approach proving that 4. is equivalent to 1-3?</p>
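Definitions 1 and 4 can at least be compared numerically (a sanity check, not a proof of equivalence; `exp_taylor` is an illustrative name):

```python
import math

def exp_taylor(x, terms=60):
    """Definition 1: partial sum of the Taylor series for exp."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= x / (k + 1)
    return s

e = exp_taylor(1.0)  # the number e, as in definition 4
for x in [-3.0, -0.5, 0.0, 1.0, 2.5]:
    assert abs(exp_taylor(x) - e ** x) < 1e-9      # definition 4
    assert abs(exp_taylor(x) - math.exp(x)) < 1e-9  # library value
```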
Aleksas Domarkas
562,074
<p>With the free computer algebra system Maxima 5.42.1: <a href="https://i.stack.imgur.com/aqC6L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aqC6L.png" alt="enter image description here"></a></p>
1,816,109
<p>Does anyone know how to deal with the integral $$\int_0^{\pi /2}\cot^n(x)dx$$</p> <p>with $n\in (-1,1)$?</p> <p>Apparently it is a well known identity: it is listed in the <a href="http://mathworld.wolfram.com/Cotangent.html" rel="nofollow">page of wolfram for the cotangent</a>, where it says that it equals $2^{-1}\pi \sec [2^{-1} (\pi n)]$</p> <p>Thanks in advance for any solution or hint!</p>
Jack D'Aurizio
44,121
<p>If $\alpha\in(-1,1)$, then $$I(\alpha)=\int_{0}^{\pi/2}\cot^{\alpha}(x)\,dx = \int_{0}^{\pi/2}\tan^{\alpha}(x)\,dx = \int_{0}^{+\infty}\frac{t^{\alpha}}{1+t^2}\,dt $$ and the last integral boils down to a value of <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow">Euler's beta function</a> through the substitution $\frac{1}{1+t^2}=u$. We get:</p> <blockquote> <p>$$ I(\alpha) = \color{red}{\frac{\pi}{2\cos\frac{\pi\alpha}{2}}} $$</p> </blockquote> <p>as a consequence of the <a href="https://en.wikipedia.org/wiki/Reflection_formula" rel="nofollow">reflection formula for the $\Gamma$ function</a>.</p>
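A numerical sanity check of the boxed value: the substitution $t\mapsto 1/t$ on $[1,\infty)$ folds the integral onto $(0,1)$, giving $\int_0^\infty \frac{t^\alpha}{1+t^2}\,dt = \int_0^1 \frac{t^\alpha + t^{-\alpha}}{1+t^2}\,dt$. The plain midpoint rule below is only accurate for moderate $|\alpha|$ (it degrades as $|\alpha|\to 1$), so this is just an illustration:

```python
import math

def I_numeric(alpha, N=100000):
    # Midpoint rule for the folded integral over (0, 1); the integrand
    # (t**a + t**-a) / (1 + t**2) is integrable for |a| < 1.
    h = 1.0 / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * h
        total += (t ** alpha + t ** (-alpha)) / (1.0 + t * t)
    return total * h

for alpha in [0.0, 0.25, 0.5]:
    exact = math.pi / (2 * math.cos(math.pi * alpha / 2))
    assert abs(I_numeric(alpha) - exact) < 1e-2, alpha
```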
3,763,744
<p>The helix is a curve <span class="math-container">$x(t) \in \mathbb{R}^3$</span> defined by:</p> <p><span class="math-container">$$ x(t) = \begin{bmatrix} \sin(t) \\ \cos(t) \\ t \end{bmatrix} $$</span></p> <p>and it takes the classic shape:</p> <p><a href="https://en.wikipedia.org/wiki/File:Rising_circular.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4W73Vm.png" alt="simple helix" /></a></p> <p>Does this have a natural extension from <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? (Or even <span class="math-container">$\mathbb{R}^n$</span>?)</p> <hr /> <hr /> <h3>What I've tried so far:</h3> <p>The classic <span class="math-container">$\mathbb{R}^3$</span> helix curve above has two nice properties:</p> <ul> <li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_3$</span>, where <span class="math-container">$\hat{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$</span></li> <li><span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the plane normal to <span class="math-container">$\hat{e}_3$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t))$</span> has polar coordinates <span class="math-container">$(r, \theta) = (1, t)$</span>, so <span class="math-container">$\dot{\theta} \equiv 1$</span>.</li> </ul> <p>The classic helix can be viewed as a parametric walk of a circle in <span class="math-container">$\mathbb{R}^2$</span>, with the parameter <span class="math-container">$t$</span> added as the third dimension. A natural extension to a helix in <span class="math-container">$\mathbb{R}^n$</span> would be a parametric walk of a curve on a hypersphere in <span class="math-container">$\mathbb{R}^{n-1}$</span>, with parameter <span class="math-container">$t$</span> added as the nth dimension. 
So for <span class="math-container">$\mathbb{R}^4$</span>, one could choose a <a href="https://en.wikipedia.org/wiki/Spiral#Spherical_spirals" rel="nofollow noreferrer">spherical spiral</a> to walk the sphere in <span class="math-container">$\mathbb{R}^3$</span>, and use parameter t as the 4th dimension:</p> <p><span class="math-container">$$ x(t) = \begin{bmatrix} \sin(t) \cos(ct) \\ \sin(t) \sin(ct) \\ \cos(t) \\ t \end{bmatrix} $$</span></p> <p>The first three components are rendered on wikipedia as:</p> <p><a href="https://en.wikipedia.org/wiki/File:Kugel-spirale-1-2.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uAlBAm.png" alt="spherical spiral" /></a></p> <p>This construction matches the two properties I listed:</p> <ul> <li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_4$</span>, where <span class="math-container">$\hat{e}_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$</span></li> <li>When <span class="math-container">$c=1$</span>, <span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the 3-plane normal to <span class="math-container">$\hat{e}_4$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t), x_3(t))$</span> has spherical coordinates <span class="math-container">$(r, \theta, \phi) = (1, t, t)$</span>, so <span class="math-container">$\dot{\theta} = \dot{\phi} \equiv 1$</span>.</li> </ul> <p>It's technically a direct extension of the <span class="math-container">$\mathbb{R}^3$</span> helix, since <span class="math-container">$c=0$</span> induces an identical curve (up to a projection.) But it still feels a little arbitrary, and the closed form will be quite ugly in higher dimensions.</p> <p>Is there a generally accepted extension of the classical circular helix in <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? 
(Or even <span class="math-container">$\mathbb{R}^n$</span>?) And do its properties or construction at all resemble the above?</p> <hr /> <p>After some research, I've learned that there are interesting generalizations of helices in <span class="math-container">$\mathbb{R}^n$</span>, defined in terms of derivative constraints, Frenet frames, etc. such that even polynomial curves can behave as helices. [<a href="https://link.springer.com/article/10.1007/s00006-018-0835-1" rel="nofollow noreferrer">Altunkaya and Kula 2018</a>]. However, that's much more general than I'm seeking, since those are aperiodic, and may have unbounded distance from the axis of propagation. But the existence of such work is promising - I just don't know how to search this space well.</p>
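One can at least verify numerically that the proposed $\mathbb{R}^4$ curve keeps the first listed property, unit distance from the $\hat{e}_4$ axis, for any $c$ (a quick sweep; `helix4` is a hypothetical name):

```python
import math

def helix4(t, c):
    # The spherical-spiral helix: (sin t cos ct, sin t sin ct, cos t, t)
    return (math.sin(t) * math.cos(c * t),
            math.sin(t) * math.sin(c * t),
            math.cos(t),
            t)

for c in [0.0, 1.0, 2.0]:
    for k in range(200):
        t = -10.0 + 0.1 * k
        x = helix4(t, c)
        # distance from the e4 axis = norm of the first three components
        r = math.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
        assert abs(r - 1.0) < 1e-12, (c, t)
```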
SZB1
1,045,629
<p>Another idea, which might generalize helices to <span class="math-container">$\mathbb{R}^n$</span>, is to take the parametrization of a sphere in <span class="math-container">$\mathbb{R}^{n-1}$</span> and translate it, as a function of the parameters generating the sphere, in a direction not contained in the (affine) linear subspace in which the sphere lies. For example, in 3D it could be a &quot;skew helix&quot; (for example, given by the equation <span class="math-container">$\textbf{r}(t)=\begin{bmatrix} \cos(t) &amp; \sin(t)+\frac{1}{4}t &amp; \frac{1}{2}t \end{bmatrix}$</span>, though this may not stay at a constant distance from the axis of rotation; I didn't check whether the distance is constant, but I think it still remains a good generalization in some sense), or a helix whose translation vector depends on an exponential function (such as the &quot;generalized helix&quot; given by <span class="math-container">$\textbf{r}(t)=\begin{bmatrix} \cos(t) &amp; \sin(t) &amp; e^{\frac{1}{2}t} \end{bmatrix}$</span>).</p> <p>Or, in 4D, an object given by the parametric equation <span class="math-container">$\textbf{s}(t;u)=\begin{bmatrix} \cos(t)\sin(u) &amp; \cos(t)\cos(u) &amp; \sin(t) &amp; t^2\end{bmatrix}$</span>.</p> <p>This is also only an idea; I didn't look it up anywhere. I study math, but I don't have a degree yet.</p>
2,853,668
<blockquote> <p>Show that $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{x^n}$$ converges for every $x&gt;1$.</p> </blockquote> <p>Let $a(x)$ be the sum of the series. Is $a$ continuous at $x=2$? Is it differentiable?</p> <p>I guess the first part follows from the Leibniz test, but I am not sure about it.</p>
Nosrati
108,128
<p>Using the <a href="https://en.wikipedia.org/wiki/Root_test" rel="nofollow noreferrer">root test</a>, $$\lim_{n\to\infty}\sqrt[n]{\left|\dfrac{(-1)^n}{x^n}\right|}=\dfrac{1}{|x|}&lt;1,$$ so the series converges for $|x|&gt;1$.</p>
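As an added numerical sanity check (not part of the original answer): for $x&gt;1$ the series is geometric with ratio $-1/x$, so it in fact sums to $1/(x+1)$, and the partial sums converge quickly.

```python
# Added sanity check: for x > 1 the series is geometric with ratio -1/x,
# so its sum is (1/x) / (1 + 1/x) = 1/(x + 1).
def partial_sum(x, N):
    """Partial sum of sum_{n=1}^{N} (-1)^(n-1) / x^n."""
    return sum((-1) ** (n - 1) / x ** n for n in range(1, N + 1))

x = 2.0
approx = partial_sum(x, 60)
exact = 1.0 / (x + 1.0)   # closed form of the geometric series
error = abs(approx - exact)
```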
2,538,305
<p>Here is the question I am struggling with:</p> <p>A box has 16 Balls, of which 8 are Green, 6 are Red, and 2 are Blue. If you draw 2 Balls with replacement, what is the probability of getting 1 Green Ball and 1 Blue Ball in no particular order?</p> <p>I see three different ways to get an answer to this problem. Please refute my wrong answers with explanations because I am confused.</p> <p>Method 1:</p> <p>Probability of getting one green: 8/16</p> <p>Probability of getting one blue: 2/16</p> <p>(8/16) * (2/16) = 1/16 final answer</p> <p>Method 2: Since the question said order does not matter, I still figured order does count into this equation so I approached it by finding the probability that the green ball is selected first, then the blue ball. Then add that probability to selecting the blue ball first, then the green ball.</p> <p>Probability of getting green first then blue: (8/16) * (2/16) = 1/16</p> <p>Probability of getting blue first then green: (2/16) * (8/16) = 1/16</p> <p>therefore, the final answer is 1/16 + 1/16 = 1/8.</p> <p>Note: This confuses me because we are double counting the answer, the problem said that order does not matter, but why doesn't the Method 1 take this into account?</p> <p>Method 3 (Combination Method):</p> <p>There are Comb(16,2) possible ways to select 2 balls out of 16</p> <p>There are Comb(8,1)*Comb(2,1) ways to select a green and a blue ball</p> <p>Probability of one green and one blue = Comb(8,1)*Comb(2,1)/Comb(16,2) = 16/120 = 2/15 final answer</p> <p>Which one of these, if any, is the correct answer? The book says it is 1/8, but can someone please explain more and explain why my other methods are wrong. Thanks!</p>
fleablood
280,126
<p>Method 1:</p> <p>This is specifying order. The $\frac 8{16}$ applies to a specific drawing, as does the $\frac 2{16}$. Blue first then green is exactly as likely as green first then blue. The probability is $\frac{8}{16}*\frac{2}{16}*2$.</p> <p>Method 2:</p> <p>This is correct.</p> <p>"Note: This confuses me because we are double counting the answer, the problem said that order does not matter, but why doesn't the Method 1 take this into account?"</p> <p>You aren't double counting. If the blue ball is first then it is impossible that the green ball is first. The two options are mutually exclusive. That is why adding the probabilities is acceptable. If it were possible for both events we'd have to take <em>conditional</em> probability into account.</p> <p>Method 3:</p> <p>If you are allowed replacement, then choosing two balls is not choosing two out of sixteen. It is choosing 1 out of sixteen twice.</p> <p>So there are $16^2$ ways to choose $2$ balls.</p> <p>There are $8*2$ ways to choose a green and then a blue ball, and $2*8$ ways to choose a blue and then a green ball. Which is the same as method 2.</p> <p>Or if you want to be different: There are $10*10$ ways to choose two balls that are either blue or green. There are $8*8$ ways to choose two green balls and $2*2$ ways of choosing two blue balls, and $10*10-8*8 - 2*2 = 100 - 64 - 4 = 32$ ways of choosing exactly one blue and one green ball.</p>
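As an added exact check (not from the original answer), one can enumerate all ordered pairs of draws with replacement and count the "one green, one blue" outcomes directly.

```python
from fractions import Fraction
from itertools import product

# Added brute-force check: enumerate all 16^2 ordered pairs of draws
# with replacement and count those with one green and one blue ball.
balls = ['G'] * 8 + ['R'] * 6 + ['B'] * 2   # 16 balls

favorable = sum(1 for a, b in product(balls, repeat=2)
                if {a, b} == {'G', 'B'})
prob = Fraction(favorable, len(balls) ** 2)  # exact probability
```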
3,134,991
<p>If nine coins are tossed, what is the probability that the number of heads is even?</p> <p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p> <p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p> <p><span class="math-container">$n = 9, k = 0$</span></p> <p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p> <p><span class="math-container">$n = 9, k = 2$</span></p> <p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p> <p><span class="math-container">$n = 9, k = 4$</span> <span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p> <p><span class="math-container">$n = 9, k = 6$</span></p> <p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p> <p><span class="math-container">$n = 9, k = 8$</span></p> <p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p> <p>Add all of these up: </p> <p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
Peter
82,961
<p>The easiest way to see this : Consider the number of heads we have in the first <span class="math-container">$8$</span> coins.</p> <ul> <li>If this number is even, we need a tail, we have probability <span class="math-container">$\frac{1}{2}$</span> </li> <li>If this number is odd, we need a head, we have probability <span class="math-container">$\frac{1}{2}$</span> </li> </ul> <p>Hence no matter what the <span class="math-container">$8$</span> coins delivered, we have probability <span class="math-container">$\frac{1}{2}$</span> , which is the answer.</p>
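As added verification (not part of the original answer), exact enumeration over all $2^9$ equally likely outcomes confirms the probability $\frac{1}{2}$.

```python
from fractions import Fraction
from itertools import product

# Added check: enumerate all 2^9 equally likely coin sequences
# (1 = heads) and count those with an even number of heads.
outcomes = list(product([0, 1], repeat=9))
even_count = sum(1 for seq in outcomes if sum(seq) % 2 == 0)
prob_even = Fraction(even_count, len(outcomes))
```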
3,134,991
<p>If nine coins are tossed, what is the probability that the number of heads is even?</p> <p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p> <p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p> <p><span class="math-container">$n = 9, k = 0$</span></p> <p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p> <p><span class="math-container">$n = 9, k = 2$</span></p> <p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p> <p><span class="math-container">$n = 9, k = 4$</span> <span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p> <p><span class="math-container">$n = 9, k = 6$</span></p> <p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p> <p><span class="math-container">$n = 9, k = 8$</span></p> <p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p> <p>Add all of these up: </p> <p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
Kyle Miller
172,988
<p>If <span class="math-container">$h$</span> denotes getting a heads and <span class="math-container">$p$</span> denotes getting tails, let's write <span class="math-container">$\frac{1}{2}h+\frac{1}{2}p$</span> for the notion of a fair coin: half the time it turns up heads, and half the time tails.</p> <p>If we expand out the following product while keeping track of multiplication order, <span class="math-container">$$\left(\frac{1}{2}h+\frac{1}{2}p\right)\left(\frac{1}{2}h+\frac{1}{2}p\right)=\frac{1}{4}hh+\frac{1}{4}hp+\frac{1}{4}ph+\frac{1}{4}pp,$$</span> we see that the sequences <span class="math-container">$hh$</span>, <span class="math-container">$hp$</span>, <span class="math-container">$ph$</span>, and <span class="math-container">$pp$</span> are equally likely. Forgetting about multiplication order, which we want to do because we only care about how many times heads or tails showed up, corresponds to just treating this like a polynomial: <span class="math-container">$$=\frac{1}{4}h^2+\frac{1}{2}hp+\frac{1}{4}p^2.$$</span> We can keep multiplying copies of a fair coin together to find the probability that a certain number of heads or tails happened, where the coefficient in front of <span class="math-container">$h^kp^\ell$</span> is the probability of <span class="math-container">$k$</span> heads and <span class="math-container">$\ell$</span> tails.</p> <p>Nine coins is the expansion <span class="math-container">$$\left(\frac{1}{2}h+\frac{1}{2}p\right)^9=\sum_{k=0}^9\binom{9}{k}\left(\frac{1}{2}h\right)^k\left(\frac{1}{2}p\right)^{9-k}=\sum_{k=0}^9\frac{1}{2^9}\binom{9}{k}h^kp^{9-k}.$$</span> So far, all this has done is explain why you were adding up <span class="math-container">$2^{-9}\binom{9}{k}$</span> for <span class="math-container">$k=0,2,4,\dots,8$</span>. Here, now, is a nice trick. 
If we formally set <span class="math-container">$h=1$</span> and <span class="math-container">$p=1$</span>, then we get <span class="math-container">$$1=\sum_{k=0}^9\frac{1}{2^9}\binom{9}{k},$$</span> and if we formally set <span class="math-container">$h=-1$</span> and <span class="math-container">$p=1$</span>, then we get <span class="math-container">$$0=\sum_{k=0}^9\frac{1}{2^9}\binom{9}{k}(-1)^k.$$</span> The average of these two equations is <span class="math-container">$$\frac{1}{2}=\sum_{k=0,k\text{ even}}^9\frac{1}{2^9}\binom{9}{k},$$</span> since <span class="math-container">$\frac{1}{2}(1+(-1)^k)$</span> is <span class="math-container">$1$</span> or <span class="math-container">$0$</span> depending on whether <span class="math-container">$k$</span> is even or odd. Thus the probability of an even number of heads is <span class="math-container">$\frac{1}{2}$</span>.</p> <p>Notice that this did not use the fact nine coins were tossed at any point! (Other than the fact that at least one coin was tossed. In the case of tossing no coins, an even number of heads happens with probability <span class="math-container">$1$</span>. What part of my argument goes wrong for the case of zero coins?)</p>
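As an added check of the $\pm 1$ substitution trick (my verification, not part of the answer): the even-$k$ binomial sum equals $\frac{1}{2}\left((1+1)^9 + (1-1)^9\right) = 2^8$.

```python
from fractions import Fraction
from math import comb

# Added check of the ±1 substitution: sum of C(9, k) over even k
# equals ((1+1)^9 + (1-1)^9) / 2 = 2^8, giving probability 1/2.
even_sum = sum(comb(9, k) for k in range(0, 10, 2))
prob = Fraction(even_sum, 2 ** 9)
```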
3,134,991
<p>If nine coins are tossed, what is the probability that the number of heads is even?</p> <p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p> <p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p> <p><span class="math-container">$n = 9, k = 0$</span></p> <p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p> <p><span class="math-container">$n = 9, k = 2$</span></p> <p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p> <p><span class="math-container">$n = 9, k = 4$</span> <span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p> <p><span class="math-container">$n = 9, k = 6$</span></p> <p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p> <p><span class="math-container">$n = 9, k = 8$</span></p> <p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p> <p>Add all of these up: </p> <p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
Brian Tung
224,454
<p>A useful way to think about this problem, especially for the case of generally unfair coins, is in terms of a recurrence. Let <span class="math-container">$p$</span> be the probability that the coin flips heads, and let <span class="math-container">$q_n$</span> be the probability, after <span class="math-container">$n$</span> flips, that the number of heads is even. So, in particular, <span class="math-container">$q_0 = 1$</span>: Before the coin has been flipped at all (after <span class="math-container">$0$</span> flips, in other words), the probability that the number of heads is even equals <span class="math-container">$1$</span>.</p> <p>We can write a recurrence for <span class="math-container">$q_{n+1}$</span> in terms of <span class="math-container">$q_n$</span> as follows:</p> <ul> <li><p>If the <em>parity</em> (the even-or-oddness of the heads) was even after <span class="math-container">$n$</span> flips, which happens with probability <span class="math-container">$q_n$</span>, then it stays even with probability <span class="math-container">$1-p$</span>.</p></li> <li><p>If the parity was odd after <span class="math-container">$n$</span> flips, which happens with probability <span class="math-container">$1-q_n$</span>, then it turns even with probability <span class="math-container">$p$</span>.</p></li> </ul> <p>(We assume, as is typical in these problems, i.i.d. flips.) 
With these two observations in mind, we get</p> <p><span class="math-container">$$ q_{n+1} = q_n(1-p) + (1-q_n)p $$</span></p> <p>which we can rewrite as</p> <p><span class="math-container">$$ q_{n+1} = p + (1-2p)q_n $$</span></p> <p><em>If</em> this recurrence has a limit <span class="math-container">$q_n \to q$</span>, then we can put</p> <p><span class="math-container">$$ q = p+(1-2p)q $$</span> <span class="math-container">$$ 2pq = p $$</span></p> <p>from which we get that either <span class="math-container">$p = 0$</span> (in which case, clearly, <span class="math-container">$q_n = 1$</span> for all <span class="math-container">$n$</span>—if you only flip tails, then the parity of heads will always be even), <em>or</em> <span class="math-container">$q = 1/2$</span>; that is, the limiting probability of even parity is <span class="math-container">$1/2$</span> (and the same for odd parity, too, obviously). If there is no limit, it will be because <span class="math-container">$p = 1$</span>, and we continually alternate between even and odd parity. 
I do not show this, but it is not difficult.</p> <p>It is also not difficult to show that the recurrence has the solution</p> <p><span class="math-container">$$ q_n = \frac12 + \frac12(1-2p)^n $$</span></p> <p>and this lays out why the symmetry arguments work out well for fair coins: <span class="math-container">$(1-2p)^n = 0$</span> for all <span class="math-container">$n &gt; 0$</span>, leaving us with just <span class="math-container">$q_n = 1/2$</span>.</p> <hr> <p>It may help to see this recurrence in the form of a Markov chain with two states:</p> <p><a href="https://i.stack.imgur.com/qLxZ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qLxZ0.png" alt="enter image description here"></a></p> <p>Since the transition probabilities from one state to the other are equal (<span class="math-container">$p = p$</span>), the state probabilities at equilibrium (if such exists) must also be equal, and therefore both equal to <span class="math-container">$1/2$</span>.</p>
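As an added numerical check of the closed form (with an arbitrarily chosen bias $p=0.3$; the value is my choice for illustration, not from the answer):

```python
# Added numerical check: iterate the recurrence q_{n+1} = p + (1 - 2p) q_n
# and compare against the closed form q_n = 1/2 + 1/2 (1 - 2p)^n.
p = 0.3          # arbitrary bias, chosen for illustration
q = 1.0          # q_0 = 1: zero heads is an even number of heads
for n in range(1, 21):
    q = p + (1 - 2 * p) * q
    closed = 0.5 + 0.5 * (1 - 2 * p) ** n
    assert abs(q - closed) < 1e-12
```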
780,895
<p>A collection of black and white balls are to be arranged on a straight line such that each ball has at least one neighbor of different color. If there are 100 black balls, then the maximum number of white balls that allows such an arrangement is? </p>
talegari
27,440
<p>$200$</p> <p>There can be at most two white balls between two consecutive black balls: if there were more than two white balls between some two black balls, one of the white balls would end up with two white neighbors. There can also be one white ball to the left of the leftmost black ball and one to the right of the rightmost black ball. With $100$ black balls this gives at most $2\cdot 99+2=200$ white balls, and the bound is attained by the arrangement $WBWWBWW\ldots WBW$.</p>
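As an added brute-force check (my verification, not part of the answer), one can confirm for small cases that $b$ black balls admit at most $2b$ white balls.

```python
from itertools import product

# Added brute-force check of the bound for small cases: with b black balls
# the maximum number of white balls is 2b ('B' = black, 'W' = white).
def valid(s):
    """Every ball has at least one neighbor of a different color."""
    n = len(s)
    return all(any(0 <= j < n and s[j] != s[i] for j in (i - 1, i + 1))
               for i in range(n))

def max_whites(blacks, search_up_to):
    best = 0
    for w in range(search_up_to + 1):
        for s in product('BW', repeat=blacks + w):
            if s.count('B') == blacks and valid(s):
                best = max(best, w)
                break
    return best
```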
1,291,107
<p>Let $X$ be a random variable and $f$ its density. How can one calculate $E(X\vert X&lt;a)$?</p> <p>From the definition we have:</p> <p>$$E(X\vert X&lt;a)=\frac{E\left(X \mathbb{1}_{\{X&lt;a\}}\right)}{P(X&lt;a)}$$</p> <p>Is this equal to:</p> <p>$$\frac{\int_{\{x&lt;a\}}xf(x)dx}{P(X&lt;a)}$$</p> <p>? If yes, how does one justify it? Thanks; I'm new to conditional expectation.</p> <p>Also, what is $E(X|X=x_0)$? In the discrete case it is $x_0$...</p>
bnosnehpets
224,658
<p>There are already answers on how to get the real solutions, so I will only show you the non-real solutions.</p> <p>You have obtained that $(x-5)^3=-1$. Expanding and simplifying we get: $$x^3-15x^2+75x-124=0$$ However, we know that $x=4$ is a solution so we can say that $(x-4)(ax^2+bx+c)=0$. You can equate coefficients or use polynomial division, but as you have already found with Wolfram Alpha, $(ax^2+bx+c)=(x^2-11x+31)$.</p> <p>To solve, complete the square: $$x^2-11x+31=0$$ $$(x-\frac{11}{2})^2-\frac{121}{4}+31=0$$ $$(x-\frac{11}{2})^2=-\frac{3}{4}$$ $$x-\frac{11}{2}=\sqrt{-\frac{3}{4}}= \pm \frac{\sqrt{3}}{2}i$$ $$x=\frac {11 \pm i \sqrt{3}}{2}$$</p>
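As an added check (not part of the answer), the three roots can be verified by direct substitution in plain floating-point arithmetic.

```python
# Added verification of the roots of x^3 - 15x^2 + 75x - 124 = 0
# by direct substitution.
def f(x):
    return x ** 3 - 15 * x ** 2 + 75 * x - 124

real_root = 4
complex_roots = [(11 + 1j * 3 ** 0.5) / 2, (11 - 1j * 3 ** 0.5) / 2]
residuals = [abs(f(r)) for r in complex_roots]
```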
1,458,405
<p>(p ∨ q) → r ≡ (p → q) ∨ (p → r) could be valid or invalid</p> <p>I need to prove it using logical equivalences (can't use truth table)</p> <p>This is how far I've gotten by working with the right side:</p> <p>&lt;-> p→(q v r)</p> <p>&lt;-> ¬p v (q v r) </p> <p>then commutative law</p> <p>&lt;-> (q v r) v ¬p </p> <p>then commutative law</p> <p>&lt;-> (r v q) v ¬p </p> <p>then associative law</p> <p>&lt;-> r v ( q v ¬p ) </p> <p>then commutative law</p> <p>&lt;-> (q v ¬p) v r</p> <p>then commutative law</p> <p>&lt;-> (¬p v q) v r</p> <p>&lt;-> ¬(¬p v q) → r</p> <p>What do I do next? Or did I do this wrong? Thanks for the assistance</p>
skyking
265,767
<p>Well, the proposition is invalid. </p> <p>The systematic way to show this is to bring it to conjunctive normal form (this can be more easily done with Karnaugh maps, but that would probably count as a truth table). We start with the statement:</p> <p>$((p\lor q)\rightarrow r) \leftrightarrow ((p\rightarrow q)\lor (p\rightarrow r))$</p> <p>My strategy here is to rewrite the equivalence as a conjunctive form with factors in disjunctive form, which means that the LHS and RHS of the equivalence will need to be transformed into disjunctive normal form.</p> <p>The LHS:</p> <p>$(p\lor q) \rightarrow r$</p> <p>$\neg(p\lor q) \lor r$</p> <p>$(\neg p \land \neg q) \lor r $</p> <p>And the RHS:</p> <p>$(p \rightarrow q) \lor (p \rightarrow r)$</p> <p>$(\neg p \lor q) \lor (\neg p \lor r)$</p> <p>$\neg p \lor q \lor r$</p> <p>Putting them back in the original, and using that $\phi\leftrightarrow\psi$ is the same as $(\neg\phi\lor\psi)\land(\phi\lor\neg\psi)$:</p> <p>$((\neg p \land \neg q) \lor r) \leftrightarrow (\neg p \lor q \lor r)$</p> <p>$(\neg((\neg p \land \neg q) \lor r) \lor (\neg p \lor q \lor r)) \land (((\neg p \land \neg q) \lor r) \lor \neg(\neg p \lor q \lor r))$</p> <p>De Morgan:</p> <p>$((\neg(\neg p \land \neg q) \land \neg r) \lor (\neg p \lor q \lor r)) \land (((\neg p \land \neg q) \lor r) \lor (\neg\neg p \land \neg q \land \neg r))$</p> <p>And again (and removing the double negation):</p> <p>$(((\neg\neg p \lor \neg\neg q) \land \neg r) \lor (\neg p \lor q \lor r)) \land (((\neg p \land \neg q) \lor r) \lor (p \land \neg q \land \neg r))$</p> <p>Some cleanup, and removing double negations:</p> <p>$(((p \lor q) \land \neg r) \lor \neg p \lor q \lor r) \land ((\neg p \land \neg q) \lor r \lor (p \land \neg q \land \neg r))$</p> <p>Now we've brought it to a form of conjunctions of disjunctions of conjunctions of disjunctions. 
We now bring it to conjunctive normal form by using distributive laws, starting at the inner levels and working our way outwards:</p> <p>$((p \land \neg r) \lor (q \land \neg r) \lor \neg p \lor q \lor r) \land ((\neg p \land \neg q) \lor r \lor (p \land \neg q \land \neg r))$</p> <p>$(p \lor q \lor \neg p \lor q \lor r) \land (p \lor \neg r \lor \neg p \lor q \lor r) \land (\neg r \lor q \lor \neg p \lor q \lor r) \land (\neg r \lor \neg r \lor \neg p \lor q \lor r) \land (\neg p \lor r \lor p) \land (\neg p \lor r \lor \neg q) \land (\neg p \lor r \lor \neg r) \land (\neg q \lor r \lor p) \land (\neg q \lor r \lor \neg q) \land (\neg q \lor r \lor \neg r)$</p> <p>Now bring some sanity to that mess by removing redundant terms and replacing $\phi\lor\neg\phi\lor...$ with $T$ (true):</p> <p>$T \land T \land T \land T \land T \land (\neg p \lor r \lor \neg q) \land T \land (\neg q \lor r \lor p) \land (\neg q \lor r) \land T$</p> <p>And removing redundant truths:</p> <p>$(\neg p \lor r \lor \neg q) \land (\neg q \lor r \lor p) \land (\neg q \lor r)$</p> <p>Already now we can see that this isn't universally true since the factors are disjunctions of independent statements, but we can tidy it up a little bit by using absorption (the factor $(\neg q \lor r)$ absorbs the two longer factors containing it):</p> <p>$\neg q \lor r$</p> <p>Since the two sides are not equivalent, I think the requirement not to utilize truth tables is partially wrong, basically because disproof by truth table only needs one line to mismatch, which in turn means that we have a concrete counter-example. Creating a truth table shows that $q$ being true and $r$ being false yields a mismatch, and then you can use any of these as a counter-example:</p> <p>$(p \lor T) \rightarrow F$ is false</p> <p>while:</p> <p>$(p \rightarrow T) \lor (p \rightarrow F)$ is true</p>
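Since the derivation above deliberately avoids truth tables, here is an added mechanical confirmation of the counterexample (verification only, not part of the proof):

```python
from itertools import product

# Added mechanical check: the two sides differ exactly when q is true
# and r is false (for either value of p).
def lhs(p, q, r):
    return (not (p or q)) or r                 # (p ∨ q) → r

def rhs(p, q, r):
    return ((not p) or q) or ((not p) or r)    # (p → q) ∨ (p → r)

mismatches = [(p, q, r) for p, q, r in product([False, True], repeat=3)
              if lhs(p, q, r) != rhs(p, q, r)]
```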
135,675
<p>Let $D$ be the Dirac operator on $\mathbb{R}^n$ or, more generally, on the Dirac spinor bundle $\mathcal{S}\to M$ of a (semi-)Riemannian spin manifold $M$. We consider $D$ as an unbounded operator on $\mathcal{H}=L^2(\mathbb{R}^n)$ with domain $C^\infty_c(\mathbb{R}^n,\mathbb{C}^N)$. It is then said that the operator $f\langle D\rangle^{-n}$ is compact, where $f\in C^\infty_c(\mathbb{R}^n,\mathbb{C})$ is considered as a multiplication operator on $\mathcal{H}$ and $\langle D\rangle:=\sqrt{D^\dagger D+ DD^\dagger}$.</p> <p>Since I am not really an expert in functional analysis, it is not even obvious to me how exactly $\langle D\rangle$ works. I suspect that the operator $D^\dagger D+DD^\dagger$ is (essentially) self-adjoint and then the spectral theorem is used to define $\langle D\rangle$ and its powers $\langle D\rangle^{-n}$.</p> <p>But what is even more mysterious to me is the claim that $f\langle D\rangle^{-n}$ is actually compact (note that $f$ has compact support, however). Why is this true?</p>
Mikhail Katz
28,128
<p>What is it that you find mysterious exactly? On a compact manifold $M$, the eigenvalues of $\langle D\rangle$ tend to infinity, hence the eigenvalues of $\langle D\rangle^{-1}$ tend to zero, so that we have a compact operator, namely the image of the unit ball is relatively compact. Multiplication by $f$ is a bounded operator, therefore composing with it will respect the compactness of the image of the unit ball, so that the composition is a compact operator, as well.</p> <p>Note 1. I see that I may have misread the OP's question as he seems to be mostly interested in the noncompact case. Paul Garrett's answer seems to suggest that the claim may be problematic in the non-compact case.</p>
991,878
<p>How can it be proven that a cycle of length k is an even permutation if and only if k is odd? I know it can be done using the fact that a permutation which exchanges two elements but leaves the rest unchanged is an odd permutation.</p>
Dr. Sonnhard Graubner
175,066
<p>We have to distinguish two cases: if $0&lt;a&lt;1$, then we have $x\geq \frac{a-1}{\log(a)}$; in the case $a&gt;1$ we have $x\le \frac{a-1}{\log(a)}$; and for $a=1$ we have $0\le 0$.</p>
203,673
<p>Let $X$ be a locally compact Hausdorff space. Does there exist a locally finite open covering consisting of relatively compact sets?</p>
Joseph Van Name
22,277
<p>The condition that you are referring to, when coupled with local compactness, is precisely paracompactness. Not every locally compact space is paracompact, as the example of Arthur Fischer illustrates and which one can look up <a href="http://topology.jdabbs.com/">here</a>.</p> <p>$\mathbf{Theorem}$</p> <p>Suppose that $X$ is a locally compact Hausdorff space. Then the following are equivalent.</p> <ol> <li><p>$X$ is paracompact.</p></li> <li><p>$X$ is the disjoint union of $\sigma$-compact open sets.</p></li> <li><p>$X$ has a locally finite open covering consisting of relatively compact open sets.</p></li> </ol> <p>$\mathbf{Proof}$</p> <p>$1\rightarrow 3$. Suppose that $X$ is paracompact. Then let $\mathcal{U}$ be the collection of all relatively compact open subsets of $X$. Then $\mathcal{U}$ has a locally finite open refinement $\mathcal{V}$, and clearly each $V\in\mathcal{V}$ is relatively compact. </p> <p>$3\rightarrow 2$. Suppose that $X$ has a locally finite open covering $\mathcal{C}$ consisting of relatively compact open sets. Let $(\mathcal{C},E)$ be the graph where we let $U,V$ be connected by an edge if $\overline{U}\cap\overline{V}\neq\emptyset.$ If $x\in\overline{U}$, then there is an open set $O_{x}$ which intersects only finitely many sets $\overline{W}$ for $W\in\mathcal{C}$, namely $W_{1,x},...,W_{n_{x},x}$. Therefore, there are $x_{1},...,x_{n}$ in $\overline{U}$ such that $\overline{U}\subseteq O_{x_{1}}\cup...\cup O_{x_{n}}$. In particular, the only sets $W$ in $\mathcal{C}$ for which $\overline{U}\cap\overline{W}\neq\emptyset$ is possible are the sets $W_{1,x_{1}},...,W_{n_{x_{1}},x_{1}},...,W_{1,x_{n}},...,W_{n_{x_{n}},x_{n}}$. Therefore, the graph $(\mathcal{C},E)$ is locally finite. Therefore let $P$ be the partition of $\mathcal{C}$ into connected components. Then $\{\bigcup\mathcal{R}|\mathcal{R}\in P\}$ is a partition of $X$ into open sets where each $\bigcup\mathcal{R}$ is $\sigma$-compact.</p> <p>$2\rightarrow 1$. 
This follows from the well-known fact that every open cover $\mathcal{U}$ of $X$ has an open refinement $\bigcup_{n}\mathcal{A}_{n}$ such that each $\mathcal{A}_{n}$ is locally finite. $\mathbf{QED}$</p>
2,275,016
<p>I am currently going through Dummit/Foote's <em>Abstract Algebra</em>, and was asked to prove the above for a specific case but was wondering if it holds in the general case.</p> <p>I have a feeling it might be false but I am bad at coming up with counterexamples so I tried to think of some simple contradictions if it were to be true but didn't get anywhere. Any help is appreciated.</p> <p>Edit: By a commutative element, I mean that for any element $y\in G$, we have that $xy=yx$.</p>
Vitor Borges
303,852
<p>Notice that $xa = ax$ for every $a\in G$. Multiplying on both the left and the right by $x^{-1}$ gives $x^{-1}(xa)x^{-1} = x^{-1}(ax)x^{-1}$, that is, $ax^{-1} = x^{-1}a$. Since $a$ was arbitrary, $x^{-1}$ commutes with every element of $G$.</p>
108,409
<p>Given a commutative ring <span class="math-container">$A$</span> with unity, Grothendieck used universal polynomials to define a <em>special</em> <span class="math-container">$\lambda$</span>-ring structure on <span class="math-container">$\Lambda(A):=1+t\:A[[t]]$</span>. Suppose <span class="math-container">$A$</span> is graded, say <span class="math-container">$A=\bigoplus_{i=0}^\infty A_i$</span>. In <a href="https://doi.org/10.1007/978-1-4757-1858-4" rel="nofollow noreferrer"><strong>Riemann–Roch Algebra</strong></a>, p. 11, Fulton and Lang define <span class="math-container">$\Lambda^{\circ}(A):=\{1+a_1t+a_2t^2\dotsb\mid a_i\in A_i\}$</span>. Then on page 15 they state that since the product and <span class="math-container">$\lambda$</span> operations of <span class="math-container">$\Lambda(A)$</span> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself, <span class="math-container">$\Lambda^\circ(A)$</span> becomes a <span class="math-container">$\lambda$</span>-ring (without unit). They use this <span class="math-container">$\lambda$</span>-ring structure of <span class="math-container">$\Lambda^\circ(A)$</span> in the proof of Theorem 3.1 on p. 16.</p> <p>However, a straightforward computation shows that the product in <span class="math-container">$\Lambda(A)$</span> does <em>not</em> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself. For example, if <span class="math-container">$1+a_1t+a_2t^2\dotsb$</span> and <span class="math-container">$1+b_1t+b_2t^2\dotsb$</span> are elements in <span class="math-container">$\Lambda^\circ(A)$</span>, then their product using the product of <span class="math-container">$\Lambda(A)$</span> is given by <span class="math-container">$1+P_1(a_1;b_1)t+P_2(a_1,a_2;b_1,b_2)t^2+\dotsb$</span>, where <span class="math-container">$P_1,P_2,\dotsc$</span> are certain universal polynomials. 
But <span class="math-container">$P_1(a_1;b_1)$</span> turns out to be <span class="math-container">$a_1b_1$</span> (<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxkYXJpamdyaW5iZXJnfGd4OjEwNzljZThlNDcwMGE5YmU" rel="nofollow noreferrer">see here</a>, p. 22) and <span class="math-container">$a_1b_1$</span> is not in <span class="math-container">$A_1$</span>, which shows the product is not in <span class="math-container">$\Lambda^\circ(A)$</span>.</p> <p><strong>Question.</strong> Is there an error in the book? If yes, can it be fixed?</p> <p><strong>Edit.</strong> If you know other errors in this book that one should be aware of, please share them here.</p>
Baptiste Calmès
4,763
<p>I'm not sure what product you are thinking of on $\Lambda^0(A)$, but the one I'm thinking of, and the one that I believe is implicitly used in Fulton-Lang is the usual product on power series. So in particular, $(1+a_1 t+\cdots)\cdot (1+b_1 t+\cdots) = 1 + (a_1 + b_1)t + \cdots$.</p> <p>There is no problem of grading.</p>
1,176,615
<p>I am asked to find the minimum of the following set:</p> <p>$\big\{ \lfloor xy + \frac{1}{xy} \rfloor \,\Big|\, (x+1)(y+1)=2 ,\, 0&lt;x,y \in \mathbb{R} \big\}$.</p> <p>Does anyone have an idea?</p> <p>(The question was changed because there is no maximum for the set (as proved in the following answers), so I assume the source made a mistake.)</p>
Lozenges
219,277
<p>There is a minimum of $6$, achieved at $x=y=\sqrt{2}-1$.</p> <p>To prove this, start with the identity</p> <p>$\left(x-\frac{1}{x}\right)\left(y-\frac{1}{y}\right)+\left(\frac{y}{x}+\frac{x}{y}\right)=\left(x y+\frac{1}{x y}\right)$</p> <p>A simple verification shows that $\left(x-\frac{1}{x}\right)\left(y-\frac{1}{y}\right)=4$: indeed, $$\left(x-\frac{1}{x}\right)\left(y-\frac{1}{y}\right)=\frac{(x+1)(y+1)(x-1)(y-1)}{xy}=\frac{2(xy-x-y+1)}{xy}=4,$$ since the constraint $(x+1)(y+1)=2$ gives $x+y=1-xy$.</p> <p>On the other hand, $\frac{y}{x}+\frac{x}{y}\geq 2$ by AM–GM.</p> <p>It follows that $x y+\frac{1}{x y}\geq 6$.</p>
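As an added numerical sanity check (my verification, not part of the argument): with $y$ determined by $(x+1)(y+1)=2$ and $0&lt;x&lt;1$ (so that $y&gt;0$), a grid scan of $g(x)=xy+\frac{1}{xy}$ stays at or above $6$, with equality at $x=\sqrt{2}-1$.

```python
import math

# Added numerical check: scan g(x) = xy + 1/(xy) over a fine grid,
# with y determined by the constraint (x+1)(y+1) = 2.
def g(x):
    y = 2.0 / (x + 1.0) - 1.0   # constraint: (x+1)(y+1) = 2
    t = x * y
    return t + 1.0 / t

grid_min = min(g(i / 10000) for i in range(1, 10000))  # x in (0, 1)
x_star = math.sqrt(2) - 1       # claimed minimizer x = y = sqrt(2) - 1
```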
75,880
<p>Say $f:X\rightarrow Y$ and $g:Y\rightarrow X$ are functions where $g\circ f:X\rightarrow X$ is the identity. Which of $f$ and $g$ is onto, and which is one-to-one?</p>
Arturo Magidin
742
<p>If it is just a matter of remembering what the right conclusion is, here's the picture I always use to remember: $$\begin{array}{rcl} &amp;\bullet &amp;\\ &amp;&amp;\searrow\\ \bullet\rightarrow&amp; \bullet &amp; \rightarrow\bullet\\ X\quad\quad&amp;Y&amp;\quad\quad Z \end{array}$$ The compositum is one-to-one and onto: the first function is one-to-one but not onto; the second function is onto, but not one-to-one.</p> <p>So: if a compositum is one-to-one, the first function applied is one-to-one. If a compositum is onto, then the <em>second</em> function applied is onto.</p> <p>If $g\circ f = \mathrm{id}$, then the first function ($f$) is one-to-one, and the second function ($g$), is onto. </p> <p>If it is a matter of <em>proving</em> that the first function is one-to-one and the second function is onto, well, you'd need a proof. An example does not suffice.</p>
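As a tiny concrete illustration of the picture above (my own example, mirroring the diagram):

```python
# Added finite example: f is one-to-one but not onto, g is onto but not
# one-to-one, and g ∘ f is the identity on X.
X = [0, 1]
Y = [0, 1, 2]
f = {0: 0, 1: 1}          # f: X → Y, injective, misses 2 in Y
g = {0: 0, 1: 1, 2: 1}    # g: Y → X, surjective, g(1) = g(2)

g_of_f = {x: g[f[x]] for x in X}
f_injective = len(set(f.values())) == len(X)
g_surjective = set(g.values()) == set(X)
is_identity = all(g_of_f[x] == x for x in X)
```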
2,362,790
<p>I'm interested in the problem linked with <a href="https://math.stackexchange.com/questions/770117/determinant-of-circulant-matrix/2362754#2362754">this answer</a>. </p> <p>Let $ f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1} $ be polynomial with distinct $a_i$ which are <strong>primes</strong>. </p> <p>(Polynomials like that for $n= 4 \ \ \ \ f(x) = 7 + 11 x + 17 x^2 + 19 x^3 $)</p> <ul> <li>Is it for some $n$ possible that $x^n-1$ and $f(x)$ have some common divisors?</li> </ul> <p>(Negative answer would mean that it is possible to generate circulant non-singular matrices with any prime numbers)</p> <p>In other words </p> <ul> <li>$x^n-1$ has roots which lie (as complex vectors) symmetrically in complex plane on the<br> unit circle, can such root be also a root of $f(x) = a_n + a_1 x + \dots + a_{n-1} x^{n-1}$ in general case where $a_i$ are constrained as above? </li> </ul>
Jyrki Lahtonen
11,619
<p>This is certainly possible. The easiest way is to use pairs of twin primes and $n=4$. Such as $$ f(x)=7+5x+11x^2+13x^3 $$ where $f(-1)=0$ and $x+1$ is a common factor of $f(x)$ and $x^4-1.$</p> <hr> <p>Extending the same idea to third roots of unity. Consider $$ f(x)=7+5x+17x^2+29x^3+31x^4+19x^5. $$ Because $7+29=5+31=17+19=36$ we easily see that the third roots of unity $\omega=e^{\pm 2\pi i/3}$ are zeros of $f(x)$ as $f(\omega)=36(1+\omega+\omega^2)=0$. Therefore $f(x)$ has a common factor $\Phi_3(x)=x^2+x+1$ with $x^3-1$ and therefore also with $x^6-1$.</p> <hr> <p>For an example of an odd $n$ I found the following. As $$53=3+13+37=5+17+31=11+19+23$$ is the sum of three disjoint triples of primes, we can, as above, show that $$ f(x)=3+5x+11x^2+13x^3+17x^4+19x^5+37x^6+31x^7+23x^8 $$ has the common factor $x^2+x+1$ with $x^9-1$.</p>
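As added numerical verification of the first two examples (not part of the answer): $f(-1)=0$ for the twin-prime example, and the degree-5 example vanishes at a primitive third root of unity.

```python
import cmath

# Added verification of the two examples in the answer.
def f1(x):                 # 7 + 5x + 11x^2 + 13x^3, twin-prime example
    return 7 + 5 * x + 11 * x ** 2 + 13 * x ** 3

def f2(x):                 # 7 + 5x + 17x^2 + 29x^3 + 31x^4 + 19x^5
    return 7 + 5 * x + 17 * x ** 2 + 29 * x ** 3 + 31 * x ** 4 + 19 * x ** 5

w = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
residual = abs(f2(w))              # should vanish: f2(w) = 36(1 + w + w^2)
```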
879,640
<p>Does a matrix have only one inverse matrix (like the inverse of an element in a field)? If so, does this mean that</p> <p>$A,B \text{ have the same inverse matrix} \iff A=B$?</p>
Pauly B
166,413
<p>Yes, it is unique. To show this, assume a matrix $A$ has two inverses $B$ and $C$, so that $AB=BA=I$ and $AC=CA=I$. Then $B=BI=B(AC)=(BA)C=IC=C$, so the inverse is indeed unique. For the second question, note that $(A^{-1})^{-1}=A$, so if $A$ and $B$ both have inverse $A^{-1}$, then both are inverses of $A^{-1}$; since the inverse of $A^{-1}$ is unique by the first part, $A=B$.</p>
3,243,503
<p>If <span class="math-container">$x + y = 2c$</span>, find minimum value of <span class="math-container">$ \sec x +\sec y $</span> if <span class="math-container">$x,y\in(0,\pi/2)$</span>, in terms of <span class="math-container">$c$</span>.</p> <p>I was able to solve by differentiating the equation and got the answer as 2secc. But i would like to know solution with trigonometry as base or without differentiating the equation.</p>
Key Flex
568,718
<p>Here is a method without differentiation, using only trigonometric identities.</p>

<p>Since <span class="math-container">$x,y\in(0,\pi/2)$</span> and <span class="math-container">$x+y=2c$</span>, write <span class="math-container">$x=c+u$</span>, <span class="math-container">$y=c-u$</span> with <span class="math-container">$u=\frac{x-y}{2}\in(-\pi/4,\pi/4)$</span>, so that <span class="math-container">$\cos u&gt;0$</span> and <span class="math-container">$\cos x\cos y&gt;0$</span>.</p> <p>By the sum-to-product and product-to-sum formulas, <span class="math-container">$$\sec x+\sec y=\frac{\cos x+\cos y}{\cos x\cos y}=\frac{2\cos c\cos u}{\cos x\cos y},\qquad \cos x\cos y=\frac{\cos 2u+\cos 2c}{2}=\cos^2u+\cos^2c-1.$$</span> Since all the denominators are positive, the inequality <span class="math-container">$\sec x+\sec y\ge 2\sec c$</span> is equivalent to <span class="math-container">$$\cos^2c\cos u\ge \cos^2u+\cos^2c-1,$$</span> that is, <span class="math-container">$$\cos^2c(\cos u-1)\ge(\cos u-1)(\cos u+1).$$</span> Since <span class="math-container">$\cos u-1\le 0$</span>, this is in turn equivalent to <span class="math-container">$\cos^2c\le \cos u+1$</span>, which holds because <span class="math-container">$\cos^2c\le1\le\cos u+1$</span>. Equality occurs exactly when <span class="math-container">$u=0$</span>, i.e. when <span class="math-container">$x=y=c$</span>, so <span class="math-container">$$\sec x+\sec y\ge 2\sec c\mbox{ with equality at }x=y=c,$$</span> and the minimum value is <span class="math-container">$2\sec c$</span>.</p>
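As a quick numerical sanity check (my own, with an arbitrary test value $c=0.6$), a brute-force grid search over admissible $x$ confirms that the minimum of $\sec x+\sec(2c-x)$ is $2\sec c$:

```python
import math

sec = lambda t: 1.0 / math.cos(t)
c = 0.6                                   # arbitrary test value in (0, pi/2)
lo = max(0.0, 2*c - math.pi/2)            # keep both x and y = 2c - x in (0, pi/2)
hi = min(2*c, math.pi/2)

n = 20000
xs = (lo + (hi - lo) * i / n for i in range(1, n))
best = min(sec(x) + sec(2*c - x) for x in xs)
print(best, 2 * sec(c))                   # the grid minimum matches 2 sec(c)
```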
885,778
<ol> <li><p>Is there any group of order 36 with no subgroup of order 6?</p></li> <li><p>Is there any group of order $p^2q^2$ with no subgroup of order $pq$?</p></li> <li><p>Is there any group of order $p^{2m}q^{2m}$ with no subgroup of order $p^mq^m$?</p></li> <li><p>Is there any group of order $p^{2m}q^{2n}$ with no subgroup of order $p^mq^n$?</p></li> </ol> <p>($p,q$ are distinct primes)</p>
kabumm
167,588
<p>This is not really a complete answer (yet). But since it's an interesting question and since I cannot post comments due to forum restrictions, I'll point you to an article I found: <a href="http://matwbn.icm.edu.pl/ksiazki/fm/fm92/fm9211.pdf" rel="nofollow noreferrer">http://matwbn.icm.edu.pl/ksiazki/fm/fm92/fm9211.pdf</a> (if your browser doesn't display anything, try downloading).</p> <p>On page 2 you'll find the following theorem: <img src="https://i.stack.imgur.com/oUFf2.png" alt="Theorem from: Baskaran, S.: CLT and non-CLT groups, 1972"></p> <p>My knowledge of algebra is too rusty to really understand this at first glance. But a CLT-group is a group for which the converse of Lagrange's Theorem holds true, so a subgroup exists for every divisor of the group order. If this paper is right, this would mean every group of order 36 is a non-CLT group (case II), so there exists at least one divisor of 36 which has no corresponding subgroup.</p> <p>Since Cauchy's Theorem and the Sylow Theorems imply the existence of subgroups of order 2, 3, 4 and 9, the divisors left are 6, 12 and 18. This doesn't directly answer your question, but the isomorphism in the theorem should help in finding an answer. I'll update as soon as I find out something more precise...</p>
1,450,476
<p>I'm in number theory and I currently have these problems assigned as homework. I've looked through the sections containing these problems and I've solved/proved most of the other problems, but I can't figure these ones out.</p> <ol> <li>For $n&gt;1$, show that every prime divisor of $n!+1$ is an odd integer that is greater than $n$.</li> <li><p>Assuming that $p_n$ is the $n$th prime number, show that the sum $\frac{1}{p_1}+\frac{1}{p_2}+...+\frac{1}{p_n}$ is never an integer.</p></li> <li><p>How many zeroes end 1,111! ?</p></li> </ol> <p>Thanks in advance!</p>
Community
-1
<p>Even though the cardinalities of $\mathbb{Z}$ and $\mathbb{Q}$ are the same, this map is not a bijection (it is an embedding but not a surjection): $\frac{1}{2}$ is not in the image set. </p>
1,507,526
<p>Let $S$ be the portion of the sphere $x^2+y^2+z^2=9$, where $1\leq x^2+y^2\leq4$ and $z\geq0$. Calculate the surface area of $S$</p> <p>Ok i'm really confused with this one. I know i have to apply the surface area formula but and possibly spherical coordinates but i can't seem how to get the integral out.</p> <p>The shape. I thought of using spherical system but after doing i ended up with a 3 coordinate system. I'm not even sure how to begin with this one.</p>
Narasimham
95,860
<p>I shall outline this symbolically at first.</p>

<p>Because of the axial symmetry ($\theta$ runs over the full range $0$ to $2\pi$), you can use either the spherical or the cylindrical coordinate system; I used the latter.</p> <p>$$ dA= 2 \pi r \, ds = 2 \pi r\, \frac {dz}{\cos \phi} =2 \pi R \, dz $$</p> <p>$$ \left(\because \cos \phi = \frac {r}{R}\right) $$</p> <p>Now integrating the above,</p> <p>$$ A = 2 \pi R ( z_1-z_2); \quad z_1= \sqrt{R^2-r_1^2},\ z_2= \sqrt{R^2-r_2^2} \tag{1}$$</p> <p>where generally we have $ r_1&lt;r_2&lt; R $, so that $z_1&gt;z_2$.</p> <p>The result shows that the area is the product of $2 \pi R$ and the spherical segment height, a well known formula. </p> <p><em>However</em>, there are two cases with <em>different</em> segment heights: the two bounding circles can lie on the same side of the equatorial plane, or on opposite sides.</p> <p>A rough sketch, z-axis horizontal.</p> <p><a href="https://i.stack.imgur.com/eRSZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eRSZm.png" alt="Spherical segments"></a></p> <p>So the areas in the two cases are </p> <p>$$ 2 \pi R (z_1-z_2), \qquad 2 \pi R (z_1+z_2) \tag{2}$$</p> <p>with the $z$ values taken from (1): the first applies when the end sections lie on the same side of the sphere's equator, the second when they lie on opposite sides. </p> <p>In this particular case we have $ R=3,\ r_1= 1,\ r_2 =2 $, so $ z_1= 2 \sqrt2,\ z_2= \sqrt 5$, and </p> <p>$$ A_{1,2}= 6 \pi ( 2 \sqrt2 \mp \sqrt 5 ) $$</p> <p>Since the problem restricts the band to $z\ge0$, both bounding circles lie on the same side of the equator, so the required area is $6 \pi ( 2 \sqrt2 - \sqrt 5 )$.</p>
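A numeric cross-check of the final value (my own sketch; it integrates the standard polar surface-area element $dA=r\sqrt{1+z_r^2}\,dr\,d\theta$ for $z=\sqrt{9-r^2}$ by the midpoint rule):

```python
import math

# z = sqrt(9 - r^2)  =>  sqrt(1 + (dz/dr)^2) = 3 / sqrt(9 - r^2)
n = 100000
s = 0.0
for i in range(n):
    r = 1 + (i + 0.5) / n                 # midpoint rule on r in [1, 2]
    s += 3 * r / math.sqrt(9 - r * r) / n
area = 2 * math.pi * s                    # theta contributes a factor 2*pi
exact = 6 * math.pi * (2 * math.sqrt(2) - math.sqrt(5))
print(area, exact)                        # both ≈ 11.17
```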
3,845,475
<p>Here's what I'm tasked with showing:</p> <p>Let <span class="math-container">$(a_n)$</span> be a convergent sequence with <span class="math-container">$a_n\rightarrow a$</span> as <span class="math-container">$n\rightarrow\infty$</span>. By the Algebraic Limit Theorem, we know that <span class="math-container">$(a_n^2)\rightarrow a^2$</span>. Now prove this using the definition of convergence.</p> <p>In doing so, I have the following:</p> <p>Let <span class="math-container">$\epsilon&gt;0$</span> be given. Choose some <span class="math-container">$N\in\mathbb{N}$</span> so that for all <span class="math-container">$n\geq N$</span>, <span class="math-container">$|a_n^2-a^2|&lt;\epsilon$</span>. By algebra, <span class="math-container">$|a_n^2-a^2|=|a_n-a||a_n+a|$</span>. Consider <span class="math-container">$|a_n+a|$</span>. By the triangle inequality, <span class="math-container">$|a_n+a|\leq|a_n|+|a|$</span>, thus <span class="math-container">$|a_n|$</span> is bounded by some <span class="math-container">$M\in\mathbb{N}$</span>.</p> <p>I know I'm trying to choose <span class="math-container">$M$</span> so that <span class="math-container">$|a_n-a|&lt;\epsilon+M+|a|$</span>. Where should I take it from here?</p>
Brian M. Scott
12,042
<p>If you order the numbers <span class="math-container">$1$</span> and <span class="math-container">$2$</span> and then insert <span class="math-container">$3$</span>, you can get all <span class="math-container">$6$</span> permutations of <span class="math-container">$1,2$</span>, and <span class="math-container">$3$</span>. From the initial ordering <span class="math-container">$1,2$</span> you get <span class="math-container">$3,1,2$</span>, <span class="math-container">$1,3,2$</span>, and <span class="math-container">$1,2,3$</span>, and from the initial ordering <span class="math-container">$2,1$</span> you get <span class="math-container">$3,2,1$</span>, <span class="math-container">$2,3,1$</span>, and <span class="math-container">$2,1,3$</span>, depending in both cases on where you insert the <span class="math-container">$3$</span>. The insertion can be done <strong>anywhere</strong> in the shorter sequence.</p> <p>I’m afraid that I can’t follow the reasoning in your solution: the factors of <span class="math-container">$3$</span> and <span class="math-container">$4$</span> really don’t make sense, because treating the spade ace together with another ace or together with a two doesn’t make sense when you’ve already counted <span class="math-container">$51!$</span> permutations of the cards other than the spade ace.</p>
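The insertion argument in the first paragraph can be checked mechanically; here is a short sketch (mine) confirming that inserting $3$ into each of the $2!$ orderings of $\{1,2\}$, in each of the $3$ gaps, produces all $3!=6$ permutations exactly once:

```python
from itertools import permutations

generated = []
for order in [(1, 2), (2, 1)]:          # the 2! orderings of {1, 2}
    for pos in range(3):                # gap before, between, after
        seq = list(order)
        seq.insert(pos, 3)
        generated.append(tuple(seq))

print(sorted(generated))
assert sorted(generated) == sorted(permutations([1, 2, 3]))
assert len(generated) == len(set(generated)) == 6   # each exactly once
```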
3,845,475
<p>Here's what I'm tasked with showing:</p> <p>Let <span class="math-container">$(a_n)$</span> be a convergent sequence with <span class="math-container">$a_n\rightarrow a$</span> as <span class="math-container">$n\rightarrow\infty$</span>. By the Algebraic Limit Theorem, we know that <span class="math-container">$(a_n^2)\rightarrow a^2$</span>. Now prove this using the definition of convergence.</p> <p>In doing so, I have the following:</p> <p>Let <span class="math-container">$\epsilon&gt;0$</span> be given. Choose some <span class="math-container">$N\in\mathbb{N}$</span> so that for all <span class="math-container">$n\geq N$</span>, <span class="math-container">$|a_n^2-a^2|&lt;\epsilon$</span>. By algebra, <span class="math-container">$|a_n^2-a^2|=|a_n-a||a_n+a|$</span>. Consider <span class="math-container">$|a_n+a|$</span>. By the triangle inequality, <span class="math-container">$|a_n+a|\leq|a_n|+|a|$</span>, thus <span class="math-container">$|a_n|$</span> is bounded by some <span class="math-container">$M\in\mathbb{N}$</span>.</p> <p>I know I'm trying to choose <span class="math-container">$M$</span> so that <span class="math-container">$|a_n-a|&lt;\epsilon+M+|a|$</span>. Where should I take it from here?</p>
user2661923
464,411
<p>In your analysis of set <span class="math-container">$S$</span>, you didn't take it far enough. As you indicated, there are <span class="math-container">$3!$</span> total orderings.</p> <p>Further, as you indicated, there are <span class="math-container">$2!$</span> total orderings of the elements besides the <span class="math-container">$3$</span>. Each of these <span class="math-container">$2!$</span> orderings in effect have gaps before the 1st element, between the two elements, and after the 3rd element. In order for the <span class="math-container">$3$</span> to immediately follow the first element in any of the <span class="math-container">$2!$</span> orderings, the <span class="math-container">$3$</span> must go in the 2nd gap. There is only 1 way that this can occur.</p> <p>Thus, with respect to your set <span class="math-container">$S$</span>, you have a fraction where the denominator is <span class="math-container">$3!$</span>, and the numerator is <span class="math-container">$2!$</span>.</p> <p>For critiquing your solution to the 52 card deck:</p> <p>In your</p> <p>&quot;Ordering in which the card following the first ace is the ace of spades&quot;</p> <p>your enumeration of the numerator is wrong, because you are overcounting.</p> <p>Suppose that you couple the Ah with As. This only pertains if the Ah happens to be the first Ace, among the Ah, Ad, Ac.</p> <p>That is, when Ah-As are coupled, your enumeration incorrectly counts Ac,Ah-As,Ad as &quot;favorable&quot;.</p> <p>&quot;Ordering in which the card following the first ace is the two of clubs&quot;</p> <p>I am unable to critique this, because I can not re-construct (i.e. reverse engineer) what you mean re &quot;by a similar argument&quot;.</p>
2,952,803
<blockquote> <p>Let <span class="math-container">$a&lt;b$</span> and <span class="math-container">$a,b\in\Bbb R$</span>. Then there is <span class="math-container">$c\in\Bbb R\setminus\Bbb Q$</span> such that <span class="math-container">$a&lt;c&lt;b$</span>.</p> </blockquote> <hr> <p><strong>My attempt:</strong></p> <ol> <li><span class="math-container">$a+b$</span> is irrational</li> </ol> <p>Let <span class="math-container">$c:=\dfrac{a+b}{2}$</span></p> <ol start="2"> <li><span class="math-container">$a+b$</span> is rational</li> </ol> <p>Let <span class="math-container">$x:=\dfrac{a+b}{\sqrt 2}$</span>. Then <span class="math-container">$x$</span> is irrational.</p> <ul> <li><span class="math-container">$a&lt;x&lt;b$</span></li> </ul> <p>Let <span class="math-container">$c:=x$</span></p> <ul> <li><span class="math-container">$b\le x$</span></li> </ul> <p>Then <span class="math-container">$x-b&lt;x-a$</span>. Take <span class="math-container">$x'\in (x-b,x-a)$</span> such that <span class="math-container">$x'$</span> is rational.</p> <p>Then <span class="math-container">$a&lt;x-x'&lt;b$</span> where <span class="math-container">$x-x'$</span> is irrational.</p> <p>Let <span class="math-container">$c:=x-x'$</span>.</p> <ul> <li><span class="math-container">$x\le a$</span></li> </ul> <p>Then <span class="math-container">$a-x&lt;b-x$</span>. Take <span class="math-container">$x'\in (a-x,b-x)$</span> such that <span class="math-container">$x'$</span> is rational.</p> <p>Then <span class="math-container">$a&lt;x+x'&lt;b$</span> where <span class="math-container">$x+x'$</span> is irrational.</p> <p>Let <span class="math-container">$c:=x+x'$</span>.</p> <hr> <p>My proof is quite short. I'm worried if it's sloppy and contains mistakes. Please help me verify it!</p>
John Doe
399,334
<p>You have<span class="math-container">$$\int(M-P)^{-1}\,dP=k\int dt$$</span> Integrating both sides gives <span class="math-container">$$-\ln(M-P)=k(t-t_0)$$</span> where <span class="math-container">$t_0$</span> is a constant of integration, and hence <span class="math-container">$$M-P=e^{-k(t-t_0)}$$</span> Then we get<span class="math-container">$$ P=M-e^{-k(t-t_0)}=M-e^{-kt}e^{kt_0}=M-\alpha e^{-kt}$$</span>where we define <span class="math-container">$\alpha=e^{kt_0}$</span>. If you assume <span class="math-container">$P(0)=0$</span>, then you get <span class="math-container">$$P(0)=0=M-\alpha\\\implies \alpha=M$$</span> Which would give <span class="math-container">$$P=M-Me^{-kt}$$</span></p>
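A quick numeric check (my own; it assumes the equation being solved is $\frac{dP}{dt}=k(M-P)$, which is what the separated integrals above imply) that $P=M-Me^{-kt}$ satisfies both the ODE and $P(0)=0$:

```python
import math

M, k = 5.0, 0.7                     # arbitrary illustrative constants
P = lambda t: M - M * math.exp(-k * t)

assert P(0.0) == 0.0                # initial condition
for t in (0.0, 0.5, 2.0):
    h = 1e-6
    dPdt = (P(t + h) - P(t - h)) / (2 * h)   # central difference
    print(dPdt, k * (M - P(t)))              # the two sides agree
```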
1,137,565
<blockquote> <p>Let <span class="math-container">$A\Delta C\subseteq A\Delta B$</span>. (<span class="math-container">$\Delta$</span> denotes symmetric difference.)</p> <p>Prove <span class="math-container">$A\cap B \subseteq C$</span>.</p> </blockquote> <p>I am getting ready for a test and I could really use proof verification and any help with this.</p> <p><em>Proof</em>: Let us look at the indicators, <span class="math-container">$x_{A\Delta C}=x_A+x_C-2x_Ax_C$</span>, <span class="math-container">$x_{A\Delta B}=x_A+x_B-2x_Ax_B$</span>, <span class="math-container">$x_{A\cap B}=x_Ax_B$</span>.</p> <p>Let <span class="math-container">$x_{A\cap B}(a)=1$</span>. Then <span class="math-container">$x_{A\Delta B}(a)=0$</span> which means <span class="math-container">$x_{A\Delta C}(a)=0$</span>. <span class="math-container">$x_A(a)=x_B(a)=1$</span> and therefore <span class="math-container">$x_C(a)$</span> must be 1. Therefore <span class="math-container">$x_{A\cap B}(a)=1\Rightarrow x_C(a)=1$</span> <span class="math-container">$\Rightarrow A\cap B \subseteq C$</span>.</p>
avz2611
142,634
<p>If the point $(x,y)$ corresponds to the parameter value $t$, then $$(x,-y)\rightarrow -t$$ $$(-x,-y)\rightarrow \frac{1}{t}$$ $$(-x,y)\rightarrow -\frac{1}{t}$$ so you can reach every point. In particular, to get $x=1$ the parameter has to tend to infinity.</p>
1,137,565
<blockquote> <p>Let <span class="math-container">$A\Delta C\subseteq A\Delta B$</span>. (<span class="math-container">$\Delta$</span> denotes symmetric difference.)</p> <p>Prove <span class="math-container">$A\cap B \subseteq C$</span>.</p> </blockquote> <p>I am getting ready for a test and I could really use proof verification and any help with this.</p> <p><em>Proof</em>: Let us look at the indicators, <span class="math-container">$x_{A\Delta C}=x_A+x_C-2x_Ax_C$</span>, <span class="math-container">$x_{A\Delta B}=x_A+x_B-2x_Ax_B$</span>, <span class="math-container">$x_{A\cap B}=x_Ax_B$</span>.</p> <p>Let <span class="math-container">$x_{A\cap B}(a)=1$</span>. Then <span class="math-container">$x_{A\Delta B}(a)=0$</span> which means <span class="math-container">$x_{A\Delta C}(a)=0$</span>. <span class="math-container">$x_A(a)=x_B(a)=1$</span> and therefore <span class="math-container">$x_C(a)$</span> must be 1. Therefore <span class="math-container">$x_{A\cap B}(a)=1\Rightarrow x_C(a)=1$</span> <span class="math-container">$\Rightarrow A\cap B \subseteq C$</span>.</p>
bubba
31,744
<p>A hyperbola has two pieces (often called "branches"). The implicit equation $x^2 - y^2=1$ describes both of these pieces -- the points on both pieces satisfy the equation. </p> <p>But the parametric equation represents only a portion of the curve, with the point $(1,0)$ missing.</p> <p>To understand the situation, let's see how the point $P(t)=(x(t),y(t))$ moves along the curve as $t$ increases from $-\infty$ to $\infty$. For convenience, I'll refer to the four quadrants as NE, NW, SW, and SE. The NE quadrant is where $x \ge 0$ and $y \ge 0$, and so on. Here's a (not very accurate) picture:</p> <p><img src="https://i.stack.imgur.com/M8AuW.png" alt="hyperbola"></p> <p>And here's the story in words:</p> <ul> <li><p>When $t$ is close to $-\infty$, $P(t)$ is in the SE quadrant, just below the point $(1,0)$</p></li> <li><p>As $t$ approaches $-1$, $P(t)$ moves rapidly to the SE, going to infinity along the asymptote $y=-x$</p></li> <li><p>When $t$ is slightly larger than $-1$, $P(t)$ is far away in the NW quadrant, again near the asymptote $y=-x$</p></li> <li><p>As $t$ increases from $-1$ to $1$, $P(t)$ moves down the left branch of the curve, passing from the NW quadrant into the SW one when $t=0$</p></li> <li><p>When $t$ is slightly larger than $1$, $P(t)$ is far away in the NE quadrant, near the asymptote $y=x$</p></li> <li><p>As $t$ increases, $P(t)$ moves down the curve in the NE quadrant. It approaches the point $(1,0)$ but never gets there for finite values of $t$.</p></li> </ul> <p>This situation often occurs -- the set of points represented by parametric equations is often just a subset of the set of points represented by an implicit equation. In particular, parametric equations (that use continuous functions) can only represent "connected" curves, not curves that have several disconnected pieces. Another example is $y^2 = 1$. This represents the two horizontal lines $y=1$ and $y=-1$. 
The parametric equations $(x,y) = (t,1)$ represent the line $y=1$, and the parametric equations $(x,y) = (t,-1)$ represent the line $y=-1$, but no parametric equation could represent both of these lines simultaneously.</p> <p>Another common example: the parametric equations $$ x = \frac{1-t^2}{1+t^2} \quad ; \quad y = \frac{2t}{1+t^2} $$ represent most of the circle $x^2 + y^2 = 1$, but the point $(-1,0)$ is missing.</p>
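A tiny check (mine) of the last example: the rational parametrization always lands on the unit circle, and $x>-1$ for every finite $t$, so $(-1,0)$ is indeed never reached:

```python
for t in (-1000.0, -2.5, 0.0, 1.0, 1000.0):
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    assert abs(x * x + y * y - 1) < 1e-12   # point lies on the circle
    assert x > -1                           # (-1, 0) is only a limit
print("all sample points on the circle, none equal to (-1, 0)")
```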
30,653
<p>I have a data file which contains thousands of lines and each line has eight elements. Here is a small sample of the data file</p> <pre><code>-4.00 -0.80 0.1886024468848907E+01 0.1467147621657460E+01 -.1217067274319363E+01 0.7206100000000000E+03 0.7693457688734395E-12 5 -4.00 -0.70 0.1430357986632780E+01 -.1404093461650013E+01 -.1742223680347601E+01 0.1824700000000000E+03 0.8439003681169850E-12 8 -4.00 -0.60 -.1324719768465547E+01 0.1740130076002850E+01 0.1497978622206873E+01 0.5479900000000000E+03 0.4264485578634903E-14 2 -4.00 -0.50 0.1358536876189560E+01 -.1580696533502541E+01 0.1621539980382560E+01 0.2881100000000000E+03 0.1319885603098060E-13 4 -4.00 -0.40 -.1588487538231399E+01 0.1275577589608218E+01 0.1707607247015512E+01 0.1337500000000000E+03 0.1057878487713421E-12 2 -4.00 -0.30 0.1755414125374284E+01 0.1332710827520201E+01 0.1477984475201826E+01 0.6426400000000000E+03 0.1459764022611444E-13 1 -4.00 -0.20 0.1245972697710741E+01 0.1540633564543885E+01 0.1777167629372046E+01 0.6112200000000000E+03 0.5718386586661092E-13 1 -4.00 -0.10 -.1311461418732105E+01 -.1594149065989313E+01 0.1661344176980193E+01 0.3507800000000000E+03 0.6765588799377521E-14 3 </code></pre> <p>I read this file using</p> <pre><code>data = ReadList["data.out", Number, RecordLists -&gt; True]; </code></pre> <p>The total length of the list is obtained, of course as</p> <pre><code>ntot = Length[data]; </code></pre> <p>The list contains eight elements per row and the last of them is an integer taking values in the interval [0,8]. What I want is the following:</p> <p>(a). Count how many rows have 0 value at the last element (let's suppose there are <code>n0</code>), how many have 1, 2, 3, ... , 8. Then calculate the corresponding percentages <code>per0 = n0/ntot</code>, <code>per1 = n1/ntot</code>, etc. It could be nice if this was inside a DO loop with <code>i = 0,8</code>.</p> <p>(b). Count again percentages but using more than one criteria this time. 
For example, count how many rows have 1 at the last element and the value of the seventh element is smaller than <code>10^{-4}</code>.</p> <p>Any suggestions?</p> <p><strong>EDIT</strong></p> <p>Using's @Kuba's solution we have</p> <pre><code>{{5, 656}, {8, 640}, {2, 673}, {4, 663}, {1, 673}, {3, 663}, {6, 656}, {7, 640}, {0, 19}} </code></pre> <p>Is it possible to divide each sum automatically with <code>ntot</code> thus obtaining the percentages? </p> <pre><code>{{5, 656/ntot}, {8, 640/ntot}, {2, 673/ntot}, {4, 663/ntot}, {1, 673/ntot}, {3, 663/ntot}, {6, 656/ntot}, {7,640/ntot}, {0, 19/ntot}} </code></pre> <p>Also it would be great if they were sorted from 0 to 8 not randomly as they are now. </p>
Mike Honeychurch
77
<p>You could write functions to generalize this but for your specific problem:</p> <pre><code>data = Import["list.txt", "Data"]; ntot = Length[data]; </code></pre> <p>(a) to get percentages of elements in the last column (you could optionally <code>Sort</code> this list too)</p> <pre><code>Inner[Times, Tally[data[[All, -1]]], {1, 100./ntot}, List] </code></pre> <p>Note that I used <code>Inner</code> here but normally I use <code>Part</code> for this sort of thing but Kuba did something similar:</p> <pre><code>tmp = Tally[data[[All, -1]]]; tmp[[All, 2]] = tmp[[All, 2]]*100./ntot; </code></pre> <p>(b) to get percentage when last column is 1 and a condition is applied on the second last column</p> <pre><code>Length[Cases[data, {__, z_ /; z &lt; 10^-4, 1}]]*100./ntot </code></pre>
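For readers outside Mathematica, the same counting logic can be sketched in a few lines of plain Python (my own illustration; the toy rows below stand in for the parsed contents of the real data file, and each keeps only three of the eight columns):

```python
from collections import Counter

# toy stand-in for the parsed file: each row keeps (first, seventh, last)
rows = [
    (0.5, 7.7e-13, 5), (0.3, 8.4e-13, 8), (0.1, 4.3e-15, 2),
    (0.2, 1.3e-14, 4), (0.9, 1.1e-13, 2), (0.4, 1.5e-14, 1),
]
ntot = len(rows)

# (a) percentage of rows per value of the last element, sorted ascending
tally = Counter(r[-1] for r in rows)
percentages = {v: 100.0 * tally[v] / ntot for v in sorted(tally)}
print(percentages)

# (b) rows whose last element is 1 AND whose seventh element is < 1e-4
n1 = sum(1 for r in rows if r[-1] == 1 and r[-2] < 1e-4)
print(100.0 * n1 / ntot)   # ≈ 16.67
```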
4,341,172
<p><span class="math-container">$(x+1)^2y''+(x+1)y'+y=x^2+2\sin(\ln(x+1)), y(0)=\frac{1}{5},y'(0)=2$</span></p> <p>My solution:</p> <p><span class="math-container">$y''+\frac{y'}{(x+1)}+\frac{y}{(x+1)^2}=\frac{x^2+2\sin(\ln(x+1))}{(x+1)^2}$</span></p> <p>First of all , I found the equation solution of <span class="math-container">$y''+\frac{y'}{(x+1)}+\frac{y}{(x+1)^2}=0$</span></p> <p><span class="math-container">$y=c_1\cos(\ln(x+1))+c_2\sin(\ln(x+1))$</span></p> <p>I try to solve this ode using the variation of parameters theorem</p> <p>Get this system equation:</p> <p>(I)<span class="math-container">$c'_1\cos(\ln(x+1))+c'_2\sin(\ln(x+1))=0$</span></p> <p>(II)<span class="math-container">$-c'_1\sin(\ln(x+1))+c'_2\cos(\ln(x+1))=x^2+2\sin(\ln(x+1))$</span></p> <p>Multiply (I) by <span class="math-container">$\sin(\ln(x+1))$</span> , (II) by <span class="math-container">$\frac{\cos(\ln(x+1))}{x+1}$</span>.</p> <p>By addtion i get:</p> <p><span class="math-container">$c'_2=\frac{\cos(\ln(x+1))[x^2+2\sin(\ln(x+1))]}{(x+1)}$</span></p> <p>I do not know how I get <span class="math-container">$c_2$</span> by an integral ?</p> <p>Help please</p> <p>Thanks !</p>
Chee Han
242,589
<p>The two equations you have using variation of parameters is incorrect. Given a second-order linear inhomogeneous DE <span class="math-container">$y'' + p(x)y' + q(x)y = f(x)$</span> with homogeneous solution <span class="math-container">$y_c(x) = c_1y_1(x) + c_2y_2(x)$</span>, a particular solution <span class="math-container">$y_p$</span> is given by <span class="math-container">$y_p(x) = u_1(x)y_1(x) + u_2(x)y_2(x)$</span>, where <span class="math-container">$u_1$</span> and <span class="math-container">$u_2$</span> satisfy <span class="math-container">\begin{align*} u_1'y_1 + u_2'y_2 &amp; = 0 \\ u_1'y_1' + u_2'y_2' &amp; = f(x). \end{align*}</span> Let <span class="math-container">$y_1 = \cos\left(\ln(x + 1)\right)$</span> and <span class="math-container">$y_2 = \sin\left(\ln(x + 1)\right)$</span>. We have that <span class="math-container">\begin{align} \tag{1} u_1'\cos\left(\ln(x + 1)\right) + u_2'\sin\left(\ln(x + 1)\right) &amp; = 0 \\ \tag{2} -u_1'\frac{\sin\left(\ln(x + 1)\right)}{x + 1} + u_2'\frac{\cos\left(\ln(x + 1)\right)}{x + 1} &amp; = \frac{x^2 + 2\sin\left(\ln(x + 1)\right)}{(x + 1)^2} \end{align}</span> Multiplying <span class="math-container">$(1)$</span> by <span class="math-container">$\sin\left(\ln(x + 1)\right)$</span>, <span class="math-container">$(2)$</span> by <span class="math-container">$(x + 1)\cos\left(\ln(x + 1)\right)$</span>, and adding the two resulting equations, we get <span class="math-container">\begin{align*} u_2' &amp; = \frac{x^2 + 2\sin\left(\ln(x + 1)\right)}{x + 1}\cdot \cos\left(\ln(x + 1)\right). \end{align*}</span> Using the substitution <span class="math-container">$z = \ln(x + 1)$</span>, we see that <span class="math-container">$$ u_2 = \int \left[\left(e^z - 1\right)^2 + 2\sin z\right]\cos z\, dz. $$</span> Computing this integral is a standard exercise now. Can you take it from here?</p>
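Before integrating, it is worth confirming that $y_1$ and $y_2$ really solve the homogeneous equation; here is a quick finite-difference check (my own sketch) of $y''+\frac{y'}{x+1}+\frac{y}{(x+1)^2}=0$:

```python
import math

def residual(y, x, h=1e-4):
    """Finite-difference residual of y'' + y'/(x+1) + y/(x+1)^2."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + d1 / (x + 1) + y(x) / (x + 1) ** 2

y1 = lambda x: math.cos(math.log(x + 1))
y2 = lambda x: math.sin(math.log(x + 1))
for x in (0.5, 1.0, 3.0):
    print(abs(residual(y1, x)), abs(residual(y2, x)))  # all ≈ 0
```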
2,547,933
<p>Consider the integral $I=\displaystyle\int_{R}\int f(x,y)dx dy$ over the region $R$, given by the triangle with vertices $(0,0),(1,1)$ and $(2,0)$. </p> <p>This is an isosceles triangle with one side lying along the $x-$axis. So, our domain is not "nice" to find the bounds for integral I assume, since even if we write $0\le x \le 2$, we can not give bounds for $y$ easily. To find this integral, the book I am reading makes the following transformation: $u = y-x$, $v=y+x$. After that, our new domain becomes a right angled triangle with the perpendicular edges lying on the $u$ and $v$ axis.</p> <p>Finally, my question is how can we conclude this transformations? In general, setting up $u = x+y, v= x-y$ works quite nice for triangular/rectangular domains but is there a rule for this?</p> <p>Thank you. </p>
Martin Argerami
22,857
<p>If the limits were the same, they would have to be $1$, since $$\left(1+\frac1n\right)^n&gt;1$$ for all $n$, while $$\left(1-\frac1n\right)^n&lt;1$$ for all $n$.</p> <p>So if you expect the first limit to be $e$ and you know that $e&gt;1$, you cannot expect the second one to be equal to it. </p> <p><strong>Edit:</strong> (to incorporate two of Paramanad Singh comments). One can obtain from <a href="https://en.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> that $$\left(1+\frac1n\right)^n\geq2,\ \ n\geq1.$$ Which implies that $e\geq2$ as soon as one knows that the limits exists. This already shows that the limits cannot be the same. More interestingly, from $$ \left(1-\frac{1}{n}\right)^{n}=\dfrac{1}{\left(1+\dfrac{1}{n-1}\right)^{n-1}}\cdot\left(1-\frac{1} {n}\right),n&gt;1 $$ one gets that $$\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^{n}=e^{-1}.$$</p>
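Numerically (my own check), the two sequences visibly approach $e$ and $e^{-1}$, and their product approaches $1$, matching the identity above:

```python
import math

n = 10**6
a = (1 + 1/n) ** n
b = (1 - 1/n) ** n
print(a, b, a * b)                 # ≈ e, ≈ 1/e, ≈ 1
assert abs(a - math.e) < 1e-4
assert abs(b - 1/math.e) < 1e-4
assert abs(a * b - 1) < 1e-4       # a*b = (1 - 1/n^2)^n -> 1
```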
2,757,562
<p>I'm work through some questions relating to connection coefficents. My question is more about the summation notation being used. Why is there no starting point (or end point) for the summation here? For the i, j, k, I take these to range from 1 to 2 for a 2 dimensional surface. Does this just imply that l does likewise? </p> <p>$$ \Gamma^i_{jk} = \sum_{l} g^{il} \left( \dfrac{\partial g_{lk}}{\partial x^j} + \dfrac{\partial g_{jl}}{\partial x^k} - \dfrac{\partial g_{jk}}{\partial x^l} \right) $$</p>
pureundergrad
517,979
<p>Another example of a similar notation, which might make more sense to you, is </p> <p>$$\sum_{n\in \mathbb{N}}n=1+2+3+\dots$$</p> <p>If the set is not specified directly under your sum, then $l$ probably runs over some set of values that makes sense in context. Try reading back; that set was probably stated either explicitly or implicitly.</p>
1,552,055
<p>Why is it true that all irrational numbers are non-terminating/non-repeating decimals?</p> <p>By definition, an irrational number is one that can't be expressed as a ratio of integers.</p>
user236182
236,182
<p>The definition: a number is irrational if and only if it's not rational, i.e. it can't be expressed as a ratio of two integers. This answers one part of your question.</p> <p>The other part: I'll prove the contrapositive. If <span class="math-container">$x$</span> has a repeating decimal expansion (this includes terminating decimal expansions), then <span class="math-container">$x$</span> is rational. </p> <p>Proof: If <span class="math-container">$x$</span> has a repeating decimal expansion, then it can always be written in the following form: </p> <p>Let <span class="math-container">$c,b$</span> be non-negative integers, let <span class="math-container">$a_i\in\{0,1,2,\ldots,9\}$</span>, let <span class="math-container">$k$</span> be the length of the repeating block, and let <span class="math-container">$t$</span> be the number of digits of <span class="math-container">$b$</span>. <span class="math-container">$$x=\overline{c.ba_1a_2\ldots a_ka_1a_2\ldots a_ka_1a_2\ldots}$$</span> <span class="math-container">$$10^tx=\overline{cb.a_1a_2\ldots a_ka_1a_2\ldots a_ka_1a_2\ldots}$$</span> <span class="math-container">$$10^{t+k}x=\overline{cba_1a_2\ldots a_k.a_1a_2\ldots a_ka_1a_2\ldots}$$</span> <span class="math-container">$$10^{t+k}x-10^{t}x=\overline{cba_1a_2\ldots a_k}-\overline{cb}$$</span> <span class="math-container">$$x=\frac{\overline{cba_1a_2\ldots a_k}-\overline{cb}}{10^{t+k}-10^t}$$</span></p>
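The telescoping trick is easy to test with exact rational arithmetic. For instance (my own check), take $x=0.1\overline{6}$, so $c=0$, $b=1$ ($t=1$) and one repeating digit $a_1=6$ ($k=1$); multiplying by $10^{t+k}$ and $10^t$ and subtracting gives:

```python
from fractions import Fraction

t, k = 1, 1          # one digit of b, one repeating digit
cb, cba = 1, 16      # the integers written as \overline{cb} and \overline{cb a_1}
x = Fraction(cba - cb, 10 ** (t + k) - 10 ** t)
print(x)             # 1/6, and indeed 1/6 = 0.1666...
assert x == Fraction(1, 6)
```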
1,527,197
<p>So in the case where data points have the same variance $\sigma^2$, the estimator (in normal equation form) can be written as </p> <p>$$\theta=(X^TX)^{-1}X^TY$$</p> <p>I'm not sure how to derive a similar formula when the data points have different variances, and thus the covariance matrix would be</p> <p>$$\Sigma = diag(\sigma_1^2, \sigma_2^2, ...,\sigma_n^2)$$</p>
Henno Brandsma
4,280
<p>As to (2), you are right that the alternating series test says it converges.</p> <p>In (1) the $n$-th term does not converge to $0$ (it doesn't even converge, period), so the series does not converge: in any series (alternating or not), if the series converges, then $a_n$ tends to $0$.</p> <p>In (3) the AST also works. </p>
3,762,624
<p>This is a pretty common question in probability and there are already a few answers on the site. HOWEVER, my Professor has put a little twist on it and I can't piece it together anymore.</p> <p><strong>QUESTION:</strong></p> <p>Let <span class="math-container">$(X_n)$</span> be a sequence of geometric random variables, each with parameter <span class="math-container">$p_n$</span> respectively. Suppose <span class="math-container">$p_n \rightarrow 0$</span>. Let <span class="math-container">$(\theta_n)$</span> be a sequence of positive real numbers with <span class="math-container">$\frac{p_n}{\theta_n} \rightarrow \lambda$</span> for some <span class="math-container">$0&lt;\lambda &lt; \infty$</span>. Let <span class="math-container">$Y_n = \theta_n X_n$</span>. Show that <span class="math-container">$Y_n$</span> converges in distribution to an exponential distribution with parameter <span class="math-container">$\lambda$</span>.</p> <p><strong>MY ATTEMPT:</strong></p> <p>Fix <span class="math-container">$x \in \mathbb{R}^+$</span>. Let <span class="math-container">$Z$</span> be an exponential random variable with parameter <span class="math-container">$\lambda$</span>. Then we have <span class="math-container">\begin{align*} P(Y_n \leq x) &amp;=P\bigg(X_n \leq \frac{x}{\theta_n}\bigg)\\ &amp;= 1-(1-p_n)^{\big\lfloor \frac{x}{\theta_n}\big\rfloor} \\ &amp; \approx 1-e^{-\frac{p_n}{\theta_n} x} \tag{large n}\\ &amp; \rightarrow 1- e^{-\lambda x} \\ &amp;= P(Z \leq x), \end{align*}</span> as desired.</p> <p><strong>PROBLEMS WITH MY SOLUTION</strong></p> <p>In my third line, I am just using my intuition. I am unable to prove this, nor can I find any theorems which might help.</p> <p>Any input is appreciated! Thanks :)</p>
Robert Israel
8,508
<p>Let <span class="math-container">$f = g + h$</span>. So your assumptions are <span class="math-container">$g(0)=h(0) = 0$</span>, <span class="math-container">$g'(0) \ge 0$</span>, <span class="math-container">$h'(0) \ge 0$</span>, <span class="math-container">$g''(s) &gt; 0$</span> and <span class="math-container">$h''(s) \ge 0$</span> for <span class="math-container">$s &gt; 0$</span>, and your desired conclusion is equivalent to <span class="math-container">$$ h(t) \int_0^t g(s)\; ds \ge g(t) \int_0^t h(s)\; ds \tag{1}$$</span> Actually let's assume <span class="math-container">$h''(s) &gt; 0$</span>, not just <span class="math-container">$\ge$</span>. Take any <span class="math-container">$g$</span> and <span class="math-container">$h$</span> satisfying the assumptions and with the two sides of (1) not equal. Then if these are not a counterexample, just interchange <span class="math-container">$g$</span> and <span class="math-container">$h$</span> and you get a counterexample.</p> <p>For example, with <span class="math-container">$g(s) = s^2$</span> and <span class="math-container">$h(s) = s^3$</span>, (1) is <span class="math-container">$t^6/3 \ge t^6/4$</span> which is true, but with <span class="math-container">$g(s)=s^3$</span> and <span class="math-container">$h(s)=s^2$</span>, (1) is <span class="math-container">$t^6/4 \ge t^6/3$</span> which is false. This corresponds to <span class="math-container">$f(s) = s^3+s^2$</span>.</p>
622,076
<p>Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true? </p> <p>It seems to me like they are equal definitions in a way. </p> <p>Can you give me a counter-example? </p> <p>Thanks</p>
hot_queen
72,316
<p>A very strong counterexample would be a function whose range is all of $\mathbb{R}$ on every interval, such as the Conway base 13 function: it satisfies the intermediate value property trivially, yet is discontinuous everywhere.</p>
442,950
<p>I would like to show <span class="math-container">$\lim\limits_{r\to\infty}\int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta=0$</span>.</p> <p>Now, of course, the integrand does not converge uniformly to <span class="math-container">$0$</span> on <span class="math-container">$\theta\in [0, \pi/2]$</span>, since it has value <span class="math-container">$1$</span> at <span class="math-container">$\theta =0$</span> for all <span class="math-container">$r\in \mathbb{R}$</span>. </p> <p>If <span class="math-container">$F(r) = \int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta$</span>, we can find the <span class="math-container">$j$</span>th derivative <span class="math-container">$F^{(j)}(r) = (-1)^j\int_{0}^{\pi/2}\sin^{j}(\theta)e^{-r\sin\theta}\text d\theta$</span>, but I don't see how this is helping.</p> <p>The function is strictly decreasing on <span class="math-container">$[0,\pi/2]$</span>, since <span class="math-container">$\partial_{\theta}(e^{-r\sin\theta})=-r\cos\theta e^{-r\sin \theta}$</span>, which is strictly negative on <span class="math-container">$(0,\pi/2)$</span>.</p> <p>Any ideas?</p>
nullUser
17,459
<p>On <span class="math-container">$[0,\pi/2]$</span> the sine is nonnegative and so <span class="math-container">$|e^{-r\sin \theta}| \leq 1$</span> for <span class="math-container">$r \geq 0$</span>. It follows by dominated convergence that <span class="math-container">$$ \lim_{r \to \infty} \int_0^{{\pi/2}} e^{-r \sin \theta}\text d\theta = \int_0^{{\pi/2}}\lim_{r \to \infty} e^{-r \sin \theta}\text d\theta = \int_0^{\pi/2}0 \text d\theta = 0. $$</span></p>
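A quick numerical sanity check of this limit (my addition, not part of the original answer): a midpoint-rule approximation of the integral visibly decays toward 0 as $r$ grows.

```python
import math

def integral(r, n=50_000):
    """Midpoint-rule approximation of the integral of exp(-r*sin(theta))
    over theta in [0, pi/2]."""
    h = (math.pi / 2) / n
    return h * sum(math.exp(-r * math.sin((k + 0.5) * h)) for k in range(n))

for r in (1, 10, 100, 1000):
    print(r, integral(r))
```

For large $r$ the value behaves roughly like $1/r$, consistent with the limit being $0$.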
442,950
<p>I would like to show <span class="math-container">$\lim\limits_{r\to\infty}\int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta=0$</span>.</p> <p>Now, of course, the integrand does not converge uniformly to <span class="math-container">$0$</span> on <span class="math-container">$\theta\in [0, \pi/2]$</span>, since it has value <span class="math-container">$1$</span> at <span class="math-container">$\theta =0$</span> for all <span class="math-container">$r\in \mathbb{R}$</span>. </p> <p>If <span class="math-container">$F(r) = \int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta$</span>, we can find the <span class="math-container">$j$</span>th derivative <span class="math-container">$F^{(j)}(r) = (-1)^j\int_{0}^{\pi/2}\sin^{j}(\theta)e^{-r\sin\theta}\text d\theta$</span>, but I don't see how this is helping.</p> <p>The function is strictly decreasing on <span class="math-container">$[0,\pi/2]$</span>, since <span class="math-container">$\partial_{\theta}(e^{-r\sin\theta})=-r\cos\theta e^{-r\sin \theta}$</span>, which is strictly negative on <span class="math-container">$(0,\pi/2)$</span>.</p> <p>Any ideas?</p>
robjohn
13,854
<p>This can also be handled by <a href="http://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow">Monotone Convergence</a>.</p> <p>The functions $f_r(\theta)=1-e^{-r\sin(\theta)}$ monotonically converge to $$ \lim_{r\to\infty}f_r(\theta)=f(\theta)=\left\{\begin{array}{} 1&amp;\text{if }0\lt\theta\le\frac\pi2\\ 0&amp;\text{if }\theta=0 \end{array}\right. $$ Thus, $$ \begin{align} \lim_{r\to\infty}\int_0^{\pi/2}e^{-r\sin(\theta)}\,\mathrm{d}\theta &amp;=\lim_{r\to\infty}\int_0^{\pi/2}(1-f_r(\theta))\,\mathrm{d}\theta\\ &amp;=\frac\pi2-\lim_{r\to\infty}\int_0^{\pi/2}f_r(\theta)\,\mathrm{d}\theta\\ &amp;=\frac\pi2-\int_0^{\pi/2}f(\theta)\,\mathrm{d}\theta\\[4pt] &amp;=\frac\pi2-\frac\pi2\\[9pt] &amp;=0 \end{align} $$</p>
3,889,175
<p>Is there a way to calculate <span class="math-container">$\sum_{i=0}^{k} {2k+1\choose k-i}$</span> using only:</p> <ul> <li>symmetry;</li> <li>Pascal's triangle;</li> <li>one of these sums: <span class="math-container">$$\sum_{i=0}^{k} {n+i\choose i}={n+k+1\choose k}$$</span> and <span class="math-container">$${p\choose p}+{p+1\choose p}+\dots+{n\choose p}={n+1\choose p+1}.$$</span> I am not sure, because I do not see a way to get the same lower index in all the binomial coefficients so that I can use the second sum. Also, I tried using symmetry, but it was not really helpful.</li> </ul>
Community
-1
<p>At worst, multiplying an equation by a common expression can introduce extraneous solutions (when you multiply by zero), but it cannot lose any: if <span class="math-container">$a=b$</span>, then <span class="math-container">$ac=bc$</span> remains true.</p> <p>Your equation has meaning only when <span class="math-container">$x\ne1$</span> and <span class="math-container">$x\ne2$</span>, so if you multiply by <span class="math-container">$(x-1)(x-2)$</span>, you will not introduce any extraneous solutions provided you keep this condition.</p>
3,889,175
<p>Is there a way to calculate <span class="math-container">$\sum_{i=0}^{k} {2k+1\choose k-i}$</span> using only:</p> <ul> <li>symmetry;</li> <li>Pascal's triangle;</li> <li>one of these sums: <span class="math-container">$$\sum_{i=0}^{k} {n+i\choose i}={n+k+1\choose k}$$</span> and <span class="math-container">$${p\choose p}+{p+1\choose p}+\dots+{n\choose p}={n+1\choose p+1}.$$</span> I am not sure, because I do not see a way to get the same lower index in all the binomial coefficients so that I can use the second sum. Also, I tried using symmetry, but it was not really helpful.</li> </ul>
Rhys Hughes
487,658
<p>Losing or gaining solutions to an equation happens when you perform an operation which is either not uniquely invertible, or an inverse of such an operation. For example, with squaring we see: <span class="math-container">$$y=x\to y^2=x^2\to y=\pm x$$</span></p> <p>The function <span class="math-container">$x^2$</span> is not injective and does not have a unique inverse, and so that step produces extra solutions.</p> <p>However, multiplying by a given polynomial is fine at any point where the polynomial is non-zero, as the inverse of that operation is just dividing by said polynomial.</p> <p>So, in your question, multiplying by <span class="math-container">$(x-1)(x-2)$</span> is fine, but you must discount <span class="math-container">$x=1$</span> and <span class="math-container">$x=2$</span> if they arise as solutions. In this case, they aren't, so you haven't gained any solutions.</p>
2,836,539
<p>Recently I run into this integral</p> <p>$$\mathcal{J} = \int_{0}^{\pi/2} \frac{x \log \left ( 1-\sin x \right )}{\sin x} \, \mathrm{d}x$$</p> <p>I don't know to what it evaluates. I tried several approaches.</p> <p><strong>1st:</strong> Differentiation under the integral sign</p> <p>Consider the function $\displaystyle f(\alpha)= \int_{0}^{\pi/2} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x$. Hence</p> <p>\begin{align*} \frac{\mathrm{d} }{\mathrm{d} \alpha} f(\alpha) &amp;= \frac{\mathrm{d} }{\mathrm{d} \alpha} \int_{0}^{\pi/2} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x \\ &amp;= \int_{0}^{\pi/2} \frac{\partial }{\partial \alpha} \frac{x \log \left ( 1-\alpha\sin x \right )}{\sin x} \, \mathrm{d}x \\ &amp;= -\int_{0}^{\pi/2} \frac{x \sin x}{\sin x \left ( 1- \alpha \sin x \right )} \, \mathrm{d}x\\ &amp;=- \int_{0}^{\pi/2} \frac{x}{1- \alpha \sin x} \, \mathrm{d}x \end{align*}</p> <p>And the last integral equals? </p> <p><strong>2nd:</strong> Taylor series expansion</p> <p><em>Lemma:</em> It holds that</p> <p>$$x \sin^n x = \left\{\begin{matrix} 2^{1-n}\displaystyle\mathop{\sum}\limits_{k=0}^{\frac{n-1}{2}}(-1)^{\frac{n-1}{2}-k}\binom{n}{k}\,x\sin\big((n-2k)x\big) &amp; , &amp; n \;\; \text{odd} \\\\ 2^{-n}\displaystyle\binom{n}{\frac{n}{2}}\,x+2^{1-n}\mathop{\sum}\limits_{k=0}^{\frac{n}{2}-1}(-1)^{\frac{n}{2}-k}\binom{n}{k}\,x\cos\big((n-2k)x\big) &amp; , &amp; n \;\; \text{even} \end{matrix}\right.$$</p> <p>Hence,</p> <p>\begin{align*} \int_{0}^{\pi/2} \frac{x \log \left ( 1-\sin x \right )}{\sin x} \, \mathrm{d}x &amp;= -\int_{0}^{\pi/2} \frac{x}{\sin x} \sum_{n=1}^{\infty} \frac{\sin^n x}{n} \, \mathrm{d}x \\ &amp;=-\sum_{n=1}^{\infty} \frac{1}{n} \int_{0}^{\pi/2} x \sin^{n-1} x \, \mathrm{d}x \end{align*}</p> <p>However the lemma does not help at all. In fact, if someone substitutes the RHS what it seems to be in there is an $\arcsin $ Taylor expansion. 
The series that remains to be evaluated is very daunting.</p> <p>To sum up, I don't know to what this integral evaluates. I don't even know if a nice closed form exists neither do I expect one. But , I still hope. </p>
Jack D'Aurizio
44,121
<p>The given problem is equivalent to the evaluation of $$ \int_{0}^{1}\frac{\arcsin(x)}{\sqrt{1-x^2}}\cdot\frac{\log(1-x)}{x}\,dx =\sum_{n\geq 1}\frac{4^n}{2n\binom{2n}{n}}\int_{0}^{1}x^{2n-2}\log(1-x)\,dx=\sum_{n\geq 1}\frac{4^n H_{2n-1}}{2n\binom{2n}{n}(1-2n)}$$ which is a <em>twisted</em> hypergeometric series. On the other hand $$ \mathcal{J}= 2\int_{0}^{\pi/4}\frac{2x \log(1-\sin(2x))}{\sin(2x)}\,dx=2\int_{0}^{1}\frac{\arctan(t)}{t}\log\left(\frac{(1-t)^2}{1+t^2}\right)\,dt $$ appears to be manageable through the polylogarithms machinery.<br> Indeed $\arctan t=\text{Im}\log(1+it)$ and the integrals $$ \int \frac{\log(1+it)\log(1\pm it)}{t}\,dt, \qquad \int \frac{\log(1+it)\log(1-t)}{t}\,dt $$ have closed forms in terms of $\text{Li}_2$ and $\text{Li}_3$. However the simplest way to recover $\mathcal{J}=-\frac{\pi^3}{8}$ might be to exploit complex analysis and contour integration, as it often happens when integrating multiples of $\frac{x}{\sin x}$.</p> <p>Through <a href="https://math.stackexchange.com/questions/292468/fourier-series-of-log-sine-and-log-cos">the Fourier series of $\log\sin$</a> we have $$ \log(1-\cos x)=-\log(2)-2\sum_{n\geq 1}\frac{\cos(nx)}{n} $$ pointwise on $(0,\pi/2)$. We have that $\int_{0}^{\pi/2}\frac{x}{\sin x}\,dx $ equals $2K$, with $K$ being Catalan's constant, and by induction</p> <p>$$ \int_{0}^{\pi/2}\frac{x}{\sin x}\cos\left[n\left(\frac{\pi}{2}-x\right)\right]\,dx $$ up to the sign, equals $\sum_{m&gt;n/2}\frac{2(-1)^m}{(2m+1)^2}$ or $\sum_{m&gt; n/2}\frac{1}{(2m+1)^2}$, according to the parity of $n$. This allows to write the original twisted sum in terms of standard Euler sums. $K$ disappears from the outcome after some simplification and $$ \sum_{n\geq 0}\frac{(-1)^n}{(2n+1)^3} = \frac{\pi^3}{32} $$ <a href="https://math.stackexchange.com/questions/850442/an-interesting-identity-involving-powers-of-pi-and-values-of-etas">is well-known</a>.</p>
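As a numerical cross-check of the claimed value $\mathcal{J}=-\frac{\pi^3}{8}\approx -3.8758$ (my addition, not part of the answer): a midpoint rule copes with the integrable logarithmic singularity at $\pi/2$, since no sample point touches the endpoint.

```python
import math

def J(n=200_000):
    """Midpoint-rule approximation of the integral of
    x * log(1 - sin x) / sin x over x in (0, pi/2)."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x * math.log(1.0 - math.sin(x)) / math.sin(x)
    return h * total

print(J(), -math.pi ** 3 / 8)
```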
1,930,743
<p>We have a map <span class="math-container">$f:P(X)\to P(X)$</span>, where <span class="math-container">$P(X)$</span> denotes the power set of <span class="math-container">$X$</span>, and the function is monotone (with respect to inclusion "<span class="math-container">$\subseteq$</span>"). So <span class="math-container">$\forall \space A\subseteq B $</span> we have <span class="math-container">$f(A)\subseteq f(B)$</span>. Show that this map has a fixed point.</p> <p>This claim is used in some proofs of <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="nofollow noreferrer">Cantor–Schröder–Bernstein theorem</a>, for example, see <a href="https://proofwiki.org/wiki/Cantor-Bernstein-Schr%C3%B6der_Theorem#Proof_3" rel="nofollow noreferrer">proof 3</a> on ProofWiki (<a href="https://proofwiki.org/w/index.php?title=Cantor-Bernstein-Schr%C3%B6der_Theorem&amp;oldid=306562#Proof_3" rel="nofollow noreferrer">current revision</a>).</p>
Brian M. Scott
12,042
<p>HINT: Consider the set $\bigcup\{A\subseteq X:A\subseteq f(A)\}$. (Be sure to show that there is at least one $A\subseteq X$ such that $A\subseteq f(A)$.)</p>
3,832,383
<p>Below is a problem I found, however, after many attemps I can not seem to get a solution.</p> <p><strong>Problem:</strong> Let <span class="math-container">$E \subset [0,1]$</span>. Show that if <span class="math-container">$m^*(E) + m^*([0,1] \setminus E) = 1$</span>, then <span class="math-container">$E$</span> is measurable.</p> <p>(my attempted) <strong>Solution:</strong> Notice, the following is true <span class="math-container">$[0,1] \setminus E = [0,1] \cap E^c$</span>. Therefore, we can rewrite the given equations as</p> <p><span class="math-container">$$m^*(E) + m^*([0,1] \cap E^c) = 1. \quad (i)$$</span></p> <p>Also, notice that <span class="math-container">$E = [0,1] \cap E$</span>, hence, we can rewrite <span class="math-container">$(i)$</span> as the following</p> <p><span class="math-container">$$m^*([0,1] \cap E) + m^*([0,1] \cap E^c) = 1. \quad (ii)$$</span></p> <p>Since <span class="math-container">$m^*([0,1]) = 1$</span> we can, again, rewrite <span class="math-container">$(ii)$</span> as follows</p> <p><span class="math-container">$$m^*([0,1] \cap E) + m^*([0,1] \cap E^c) = m^*([0,1]). \quad (iii)$$</span></p> <p>For this path, this is where I get stuck. In other words, obviously if <span class="math-container">$[0,1]$</span> could be replaced by any set <span class="math-container">$A \subseteq \mathbb{R}$</span> then, sure, <span class="math-container">$E$</span> is measurable. Therefore, I do not think this is the &quot;correct path&quot; to take.</p> <p>I feel like one way is, possibly, to show that for any <span class="math-container">$\epsilon &gt; 0$</span>, there exists a closed set F, such that, <span class="math-container">$E \subseteq F$</span> and <span class="math-container">$m^*(F \setminus E) &lt; \epsilon$</span>. Taking <span class="math-container">$F = [0,1]$</span>, we have <span class="math-container">$E \subset F$</span> and <span class="math-container">$m^*(F \setminus E) &lt; 1 - m^*(E)$</span>. 
Hence, if I take <span class="math-container">$\epsilon = 1 - m^*(E)$</span> can I conclude the proof?</p>
Sumanta
591,889
<p>Let <span class="math-container">$A$</span> be any Lebesgue measurable subset of <span class="math-container">$[0,1]$</span>. So, Lebesgue measurability of <span class="math-container">$A$</span> gives <span class="math-container">$$m^*(E)=m^*(A\cap E)+m^*(A^c\cap E)$$</span> and <span class="math-container">$$m^*(E^c)=m^*(A\cap E^c)+m^*(A^c\cap E^c).$$</span> Adding these two we have, <span class="math-container">$$m^*([0,1])=m^*(E)+m^*(E^c)$$</span><span class="math-container">$$=\bigg[m^*(A\cap E)+m^*(A\cap E^c)\bigg]+\bigg[m^*(A^c\cap E)+m^*(A^c\cap E^c)\bigg]$$</span><span class="math-container">$$\geq m^*(A)+m^*(A^c)\geq m^*([0,1]).$$</span> Both inequalities are due to the fact that <span class="math-container">$m^*$</span> is sub-additive. Hence, <span class="math-container">$$m^*(A)+m^*(A^c)=m^*(A\cap E)+m^*(A\cap E^c)+m^*(A^c\cap E)+m^*(A^c\cap E^c).$$</span> But, again by sub-additivity, <span class="math-container">$$m^*(A^c)\leq m^*(E\cap A^c)+m^*(E^c\cap A^c)$$</span><span class="math-container">$$\implies m^*(A)\geq m^*(E\cap A)+m^*(A\cap E^c).$$</span></p> <p>Hence <span class="math-container">$m^*(A)\geq m^*(E\cap A)+m^*(A\cap E^c)$</span> holds for every measurable <span class="math-container">$A\subseteq [0,1]$</span>, so <span class="math-container">$E$</span> is measurable by condition <span class="math-container">$(3)$</span> below.</p> <blockquote> <p>Note that we are using the following equivalent conditions:</p> <p><span class="math-container">$(1)$</span> <span class="math-container">$E$</span> is measurable</p> <p><span class="math-container">$(2)$</span> For every <span class="math-container">$Y\subseteq [0,1]$</span> we have <span class="math-container">$m^*(Y)\geq m^*(Y\cap E)+m^*(E^c\cap Y)$</span>.</p> <p><span class="math-container">$(3)$</span> For every measurable set <span class="math-container">$A\subseteq [0,1]$</span> we have <span class="math-container">$m^*(A)\geq m^*(A\cap E)+m^*(E^c\cap A)$</span>.</p> </blockquote> <p>Let us prove the non-trivial part, namely <span class="math-container">$(3)\implies 
(2)$</span>. Let <span class="math-container">$Y\subseteq [0,1]$</span> and for a given <span class="math-container">$\varepsilon&gt;0$</span> choose Lebesgue measurable subsets <span class="math-container">$A_1,A_2,...$</span> of <span class="math-container">$[0,1]$</span> so that <span class="math-container">$Y\subseteq \bigcup_{n=1}^\infty A_n$</span> with <span class="math-container">$m^*(Y)+\varepsilon&gt;\sum_{n=1}^\infty m^*(A_i)$</span>.</p> <p>Next, since we are assuming <span class="math-container">$(3)$</span> we have <span class="math-container">$m(A_n)\geq m^*(A_n\cap E^c)+m^*(A_n\cap E)$</span>. So, we have <span class="math-container">$$m^*(Y)+\varepsilon&gt;\sum_{n=1}^\infty m^*(A_i)$$</span><span class="math-container">$$\geq \sum_{n=1}^\infty m^*(A_n\cap E^c)+\sum_{n=1}^\infty m^*(A_n\cap E)$$</span><span class="math-container">$$\geq m^*(Y\cap E)+m^*(Y\cap E^c).$$</span> Finally, letting <span class="math-container">$\varepsilon\to 0+$</span> we are done.</p>
3,515,649
<blockquote> <p><a href="https://i.stack.imgur.com/qqIay.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qqIay.jpg" alt="enter image description here" /></a></p> </blockquote> <p><strong>My try</strong>:</p> <p>We can deal with three different cases of removing two biased coins and 1 unbiased coin.</p> <p>The individual probabilities are known: <span class="math-container">$$\frac9{10} , \frac2{25},\frac1{50}$$</span></p> <p>After that the total is 49, as one coin is removed. Now what should I do: use conditional probability or Bayes' theorem? I don't know how to implement it.</p>
Robert Shore
640,080
<p>Here's how I think about these problems. There are <span class="math-container">$50$</span> coins with <span class="math-container">$2$</span> sides each, or <span class="math-container">$100$</span> possible outcomes, each equally likely. Of those possible outcomes, <span class="math-container">$53$</span> result in heads and <span class="math-container">$47$</span> result in tails.</p> <p>Out of the <span class="math-container">$53$</span> possibilities that result in heads, <span class="math-container">$8$</span> of them come from a two-headed coin and the other <span class="math-container">$45$</span> come from fair coins. So if your result is heads, the probability of a two-headed coin is <span class="math-container">$\frac{8}{53}$</span> and the probability of a fair coin is <span class="math-container">$\frac{45}{53}.$</span> If the result is tails, the probability of a fair coin is <span class="math-container">$\frac{45}{47}$</span>.</p>
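To make the counting concrete, here is a small enumeration (my addition; the exact composition of the 50 coins is not shown in this excerpt, so the mix below, 45 fair, 4 two-headed and 1 two-tailed coin, is inferred from the counts 53 and 47 in the answer):

```python
from fractions import Fraction

# Inferred composition: 45 fair, 4 two-headed, 1 two-tailed coin
# -> 50 coins, 100 equally likely faces.
faces = []
faces += [("fair", side) for _ in range(45) for side in ("H", "T")]
faces += [("two-headed", "H")] * 8   # 4 coins x 2 head faces
faces += [("two-tailed", "T")] * 2   # 1 coin x 2 tail faces

heads = [coin for coin, side in faces if side == "H"]
p_two_headed = Fraction(heads.count("two-headed"), len(heads))
p_fair = Fraction(heads.count("fair"), len(heads))
print(p_two_headed, p_fair)  # 8/53 45/53
```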
1,254,896
<p>I am trying to solve identify the expected value of a statistic that involves a fraction. I have simplified the expression to:</p> <p>$E[\frac{1}{1+ \sum_i x_i}] = E[\frac{1}{1+ T}]$</p> <p>However, I am not sure how to proceed. Is there anyway to simplify this through algebra, i.e. simplify the expression to the point that I have $E[x_i]$ and then substitute in a known expression for the expected value of $x_i$? Or, should I attempt to find the expected value of the expression by working with the pmf as</p> <p>$\sum \frac{1}{1+ T} * f_T(t)$ ?</p>
Paolo Leonetti
45,736
<p>Notice that by the Legendre symbol properties we have $$ \left(\frac{a}{p}\right) \equiv a^{\frac{p-1}{2}} \equiv x \pmod p $$ where $x$ can only be $-1$, $0$ or $1$. That's why if you have an $n$-th power in a diophantine equation and $2n+1$ is prime, then you're really lucky, although it is not a strange guess.</p>
1,254,896
<p>I am trying to solve identify the expected value of a statistic that involves a fraction. I have simplified the expression to:</p> <p>$E[\frac{1}{1+ \sum_i x_i}] = E[\frac{1}{1+ T}]$</p> <p>However, I am not sure how to proceed. Is there anyway to simplify this through algebra, i.e. simplify the expression to the point that I have $E[x_i]$ and then substitute in a known expression for the expected value of $x_i$? Or, should I attempt to find the expected value of the expression by working with the pmf as</p> <p>$\sum \frac{1}{1+ T} * f_T(t)$ ?</p>
user26486
107,671
<p>You should check modulo $p=\text{lcm}(a_1,a_2,\ldots,a_n)+1$, if it is prime (here $a_1,a_2,\ldots,a_n$ are the degrees of the variables). The reason this may easily work is: by FLT (if $p\nmid x$) $$x^{\text{lcm}(a_1,a_2,\ldots,a_n)}\equiv 1\pmod {\!p}$$ </p> <p>and $a_ik_i=\text{lcm}(a_1,a_2,\ldots,a_n)$ for some small $k_i$ ($\forall i\in\{1,\ldots,n\}$)(I say 'small' because we're taking $\text{lcm}$ as opposed to, e.g., outright multiplying them all). </p> <p>$(x^{a_i})^{k_i}\equiv 1\pmod{\!p}$ has at most $k_i$ solutions in terms of $x^{a_i}$ (see Theorem below). $k_i$ is small, so $x^{a_i}$ can only gain a small amount ($\le k_i+1$ when $p\nmid x$ constraint is removed) of remainders $\bmod p$.</p> <p><strong>Theorem:</strong> a polynomial of degree $n$ has at most $n$ zeroes $\bmod p$. </p> <p><strong>Proof:</strong> $ax\equiv b\pmod {\!p}$ has one solution. Assume that any polynomial of degree $n-1$ has at most $n-1$ roots. Consider an arbitrary polynomial $f$ of degree $n$. If it has no zeroes, we're done. Otherwise (let the root be $a$) we can write $f$ as $f(x)=(x-a)g(x)$, where $g$ is a polynomial of degree $n-1$. $f$ then must have at most $1+(n-1)=n$ zeroes. $\ \ \ \square$ </p> <p><strong>Example:</strong> (Baltic Way 2012) Solve in integers: $2x^6+y^7=11$. </p> <p><strong>Solution:</strong> $2x^6\equiv\{2,8,22,27,32,39,42\}$ (<a href="http://www.wolframalpha.com/input/?i=Sort%5BMap%5BMod%5B2%23%5E6%2C43%5D%26%2C+Range%5B42%5D%5D%5D" rel="nofollow">WA</a>), $y^7\equiv\{1,6,7,36,37,42\} \pmod{\!43}$ (<a href="http://www.wolframalpha.com/input/?i=Sort%5BMap%5BMod%5B%23%5E7%2C43%5D%26%2C+Range%5B42%5D%5D%5D" rel="nofollow">WA</a>). </p> <p>So $2x^6+y^7\not\equiv 11\pmod {\!43}$, and equality can't be satisfied.$\ \ \ \square$</p>
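The Baltic Way example can be verified mechanically (my addition): compute the residue sets mod 43 and confirm that no combination hits 11.

```python
p = 43
r6 = sorted({(2 * x ** 6) % p for x in range(p)})
r7 = sorted({(y ** 7) % p for y in range(p)})
print(r6)  # [0, 2, 8, 22, 27, 32, 39, 42]
print(r7)  # [0, 1, 6, 7, 36, 37, 42]

# 2x^6 + y^7 never equals 11 mod 43, so the equation has no integer solutions.
sums = {(a + b) % p for a in r6 for b in r7}
print(11 in sums)  # False
```

Note the small residue sets: the sixth powers form a subgroup of order $42/\gcd(6,42)=7$ and the seventh powers one of order $42/7=6$, exactly as the answer predicts.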
2,780,731
<p>In school, I have recently been learning about simple differential equations. We know that the solution of $y'=y$ is $y=Ae^x$, where $A$ is a constant. But how can we know that it is the <strong>only</strong> solution? The only thing I can figure out is that $y$ is continuously differentiable. Help me, please.</p>
Jan
254,447
<p>Suppose $y=f(x)$ is any solution of $y'=y$. Consider $g(x)=f(x)e^{-x}$. Now working this out gives us: $$g'(x)=\frac{d}{dx}\left(f(x)e^{-x}\right)=f'(x)e^{-x}-f(x)e^{-x}=\left(f'(x)-f(x)\right)e^{-x}=0,$$ since $f'(x)=f(x)$. A function whose derivative is zero everywhere is constant, so $g(x)=A$ for some constant $A$, and hence $f(x)=Ae^x$.</p>
1,598,006
<p>(Here, $B$ is relatively compact means the closure of $B$ is compact.)</p> <blockquote> <ol> <li><p>$\hat A$ is compact.</p></li> <li><p>$\hat A=\hat {\hat A}$.</p></li> <li><p>$\hat A$ is connected.</p></li> <li><p>$\hat A=X$.</p></li> </ol> </blockquote> <p>I try to eliminate the options by using an example.</p> <p>Consider $X=\Bbb R - \{1,2,3\}$ with metric topology and let $A=(-\infty,1)$.</p> <p>Then $\hat A=(-\infty,1) \cup (1,2) \cup (2,3)$.</p> <p>Hence options 1,3,4 are false.</p> <p>So I select option 2 as an answer.</p> <p>Is my method correct?</p>
qinxs
448,972
<p>For (2), <span class="math-container">$X-\hat{A}$</span> is the union of non-relatively compact connected components of <span class="math-container">$X-A$</span> by definition. Let <span class="math-container">$O$</span> be any connected component of <span class="math-container">$X-A$</span> which is not relatively compact, then <span class="math-container">$O$</span> is a non-relatively compact connected component of <span class="math-container">$X-\hat{A}$</span> since <span class="math-container">$X-\hat{A}\subset X-A$</span>. Therefore, <span class="math-container">$X-\hat{A}$</span> has no relatively compact connected components, which shows that <span class="math-container">$\hat{\hat{A}}=\hat{A}$</span>.</p>
1,341,486
<p>Problem: Find the sum to $n$ terms of \begin{eqnarray*} \frac{1}{1\cdot 2\cdot 3} + \frac{3}{2\cdot 3\cdot 4} + \frac{5}{3\cdot 4\cdot 5} + \frac{7}{4\cdot 5\cdot 6}+\cdots \\ \end{eqnarray*} Answer: The way I see it, the problem is asking me to find this series: \begin{eqnarray*} S_n &amp;=&amp; \sum_{i=1}^{n} {a_i} \\ \text{with } a_i &amp;=&amp; \frac{2i-1}{i(i+1)(i+2)} \\ \end{eqnarray*} We have: \begin{eqnarray*} S_n &amp;=&amp; S_{n-1} + a_n \\ S_n &amp;=&amp; S_{n-1} + \frac{2n-1}{n(n+1)(n+2)} \\ \end{eqnarray*} I am tempted to apply the technique of partial fractions but I believe there is no closed formula for a series of the form:</p> <p>\begin{eqnarray*} \sum_{i=1}^{n} \frac{1}{i+k} \\ \end{eqnarray*} where $k$ is a fixed constant. Therefore I am stuck. I am hoping that somebody can help me.</p> <p>Thanks Bob</p>
A.A.
240,951
<p>By partial fractions, $$a_n = \frac{2n-1}{n(n+1)(n+2)} = -\frac{1}{2n}+\frac{3}{n+1}-\frac{5}{2(n+2)} = \frac{1}{2}\left[\left(-\frac{1}{n}+\frac{1}{n+1}\right)+5\left(\frac{1}{n+1}-\frac{1}{n+2}\right)\right],$$ so both groups telescope and $$S_n = \frac{1}{2}\left[-1+\frac{1}{n+1}+5\left(\frac{1}{2}-\frac{1}{n+2}\right)\right] = \frac{1}{2}\left(\frac{3}{2}+\frac{1}{n+1}-\frac{5}{n+2}\right) = \frac{n(3n+1)}{4(n+1)(n+2)}.$$</p>
1,341,486
<p>Problem: Find the sum to $n$ terms of \begin{eqnarray*} \frac{1}{1\cdot 2\cdot 3} + \frac{3}{2\cdot 3\cdot 4} + \frac{5}{3\cdot 4\cdot 5} + \frac{7}{4\cdot 5\cdot 6}+\cdots \\ \end{eqnarray*} Answer: The way I see it, the problem is asking me to find this series: \begin{eqnarray*} S_n &amp;=&amp; \sum_{i=1}^{n} {a_i} \\ \text{with } a_i &amp;=&amp; \frac{2i-1}{i(i+1)(i+2)} \\ \end{eqnarray*} We have: \begin{eqnarray*} S_n &amp;=&amp; S_{n-1} + a_n \\ S_n &amp;=&amp; S_{n-1} + \frac{2n-1}{n(n+1)(n+2)} \\ \end{eqnarray*} I am tempted to apply the technique of partial fractions but I believe there is no closed formula for a series of the form:</p> <p>\begin{eqnarray*} \sum_{i=1}^{n} \frac{1}{i+k} \\ \end{eqnarray*} where $k$ is a fixed constant. Therefore I am stuck. I am hoping that somebody can help me.</p> <p>Thanks Bob</p>
Community
-1
<hr> <p>The good old way: compute a few terms and find a pattern.</p> <p>$$\frac2{12},\frac7{24},\frac{15}{40},\frac{26}{60},\frac{40}{84}\cdots$$</p> <p>By finite differences, we find that the numerators are $\dfrac{n(3n+1)}2$ and the denominators $2(n+1)(n+2)$.</p> <hr> <p>Check:</p> <p>$$S_1=\frac2{12}$$and $$S_n-S_{n-1}=\frac{n(3n+1)}{4(n+1)(n+2)}-\frac{(n-1)(3n-2)}{4n(n+1)}=\frac{2n-1}{n(n+1)(n+2)}=a_n.$$</p>
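The closed form $S_n=\frac{n(3n+1)}{4(n+1)(n+2)}$ found above can be double-checked with exact rational arithmetic (my addition):

```python
from fractions import Fraction

def a(i):
    """The i-th term (2i-1)/(i(i+1)(i+2)) as an exact fraction."""
    return Fraction(2 * i - 1, i * (i + 1) * (i + 2))

def closed_form(n):
    """Conjectured partial sum S_n = n(3n+1)/(4(n+1)(n+2))."""
    return Fraction(n * (3 * n + 1), 4 * (n + 1) * (n + 2))

partial = Fraction(0)
for n in range(1, 51):
    partial += a(n)
    assert partial == closed_form(n)
print(closed_form(4))  # 13/30, i.e. the 26/60 in the table above
```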
3,752,771
<p>I wanted to get the full probability of 2 attempts made at 60% chance of success.</p> <p>I was looking at a different chain of math and found my probability to hit an enemy is 60% per each attack but I was wondering how it would look at all the outcomes and the probability of it.</p> <blockquote> <p>6/10 * 6/10 = 36%</p> </blockquote> <p>of both attempts failing and both attempts succeeding with a 64% chance of at least 1 attempt succeeding?</p> <p>I didn't understand how it would look as a failure adding up to 136%</p>
Cardinal
525,594
<p>Scenario 1: Both Hit</p> <p>6/10 * 6/10 = 36%</p> <p>Scenario 2: Hit first, miss second</p> <p>6/10 * 4/10 = 24%</p> <p>Scenario 3: Miss first, hit second</p> <p>4/10 * 6/10 = 24%</p> <p>Scenario 4: Miss Both</p> <p>4/10 * 4/10 = 16%</p> <p>Notice how this total adds to 100%. If you define success as at least one hit, then you may add scenarios 1-3 or take 1-P(Scenario 4).</p>
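The four scenarios can be reproduced with exact arithmetic (a quick sketch, my addition):

```python
from fractions import Fraction
from itertools import product

p_hit, p_miss = Fraction(6, 10), Fraction(4, 10)

# Enumerate the four outcomes of two independent attacks.
outcomes = {}
for first, second in product(("hit", "miss"), repeat=2):
    p = (p_hit if first == "hit" else p_miss) * (p_hit if second == "hit" else p_miss)
    outcomes[(first, second)] = p

print(sum(outcomes.values()))          # 1: the four scenarios cover everything
print(1 - outcomes[("miss", "miss")])  # 21/25, an 84% chance of at least one hit
```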
858,576
<p>Prove that the union of three subspaces of $V$ is a subspace iff one of the subspaces contains the other two.</p> <p>I can do this problem when I am working with only two subspaces of $V$, but I don't know how to do it with three.</p> <p>What I tried is: if one of the subspaces contains the other two, then their union is obviously a subspace, because the subspace that contains them is a subspace. (Is this sufficient??)</p> <p>If the union of three subspaces is a subspace... how do I prove that one of the subspaces must contain the other two from here?</p> <p>*When proving this for two, I said that there is an element in one of the subspaces that is not in the other and proved by contradiction that one of the subspaces must be contained in the other. How would I do this for three?</p>
pre-kidney
34,662
<p>Suppose $W=V_1\cup V_2\cup V_3$ is a subspace. Since $W$ contains $V_1,V_2,V_3$ and is closed under linear combinations, we have $V_1+V_2+V_3\subset W$ as well. The reverse inclusion also holds, since each $V_1,V_2,V_3$ lie in the sum. Thus $$W=V_1+V_2+V_3.$$</p> <p>Observe the inclusions $$ W=V_1\cup V_2\cup V_3\subset (V_1+V_2)\cup V_3\subset V_1+V_2+V_3=W. $$ Thus $(V_1+V_2)\cup V_3=W$, so by the result for two subspaces, either $V_3\subset V_1+V_2$ or $V_1+V_2\subset V_3$. In the former case, $W=V_1+V_2$ so repeating the two-subspace argument yields $W=V_1$ or $W=V_2$. In the latter case, $W=V_3$. The result follows.</p> <p>Note: The same proof extends by induction to any finite union of subspaces.</p>
4,276,974
<p>I have to prove that sentence, but I'm not sure how to do that. Help!</p>
Aaa Lol_dude
979,418
<p>Since every prime greater than 3 is odd, the even residues 0, 2 and 4 mod 6 are ruled out. The only thing left to prove is that the prime cannot be 3 mod 6.</p> <p>If it is 3 mod 6, then it is divisible by 3, which is impossible for a prime greater than 3. That only leaves you with 1 and 5 mod 6.</p> <p>Note: 1 and 5 mod 6 correspond to 6 dividing p-1 and p+1, respectively.</p>
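A brute-force confirmation of the claim (my addition):

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every prime greater than 3 leaves remainder 1 or 5 when divided by 6.
assert all(p % 6 in (1, 5) for p in range(4, 10_000) if is_prime(p))
print("verified for all primes between 5 and 10000")
```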
2,611,676
<p>Or consider the general problem- Find the value of n for which x^n is just greater than x!</p> <p>I dont know even if it is possible to find the solution or not...</p>
Ross Millikan
1,827
<p>Use Stirling's approximation and take the log. $x! \approx (\frac xe)^x \sqrt {2 \pi x}$. Take the base $x$ log and round up.</p>
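A sketch of that recipe (my addition): `math.lgamma(x + 1)` gives $\log(x!)$, so for an integer $x \ge 2$ the smallest integer $n$ with $x^n > x!$ is $\lfloor \log(x!)/\log x \rfloor + 1$, with no huge intermediate numbers.

```python
import math

def smallest_n(x):
    """Smallest integer n with x**n > x!, for an integer x >= 2."""
    log_factorial = math.lgamma(x + 1)  # log(x!)
    return math.floor(log_factorial / math.log(x)) + 1

# 10! = 3628800 lies between 10^6 and 10^7
print(smallest_n(10))  # 7
```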
446,499
<p>I have just learned the definition of connectedness and wikipedia gives an example of a disconnected set: <span class="math-container">$(0,1)\cup \left\{ 3 \right\}$</span> (<a href="https://en.wikipedia.org/wiki/Connected_space#Examples" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Connected_space#Examples</a>). Why is it disconnected? I need a bit clarification on this. Thanks for any help!</p>
Seirios
36,434
<p><strong>Hint:</strong> Show that $\{3\}$ is a <a href="https://en.wikipedia.org/wiki/Clopen_set" rel="nofollow">clopen set</a>.</p>
3,128,352
<p>I want to prove that when <span class="math-container">$F:K\rightarrow K[X]/\langle f\rangle $</span> is a map such that <span class="math-container">$F(a)=a+\langle f \rangle$</span>, then <span class="math-container">$F$</span> is an embedding from <span class="math-container">$K$</span> to <span class="math-container">$K[X]/\langle f \rangle$</span>, when <span class="math-container">$f\in K[X]\backslash K$</span>.</p> <p>To show the function is an embedding do I need to check that, for <span class="math-container">$a,b\in K$</span>:</p> <p><span class="math-container">$$F(a+b)=F(a)+F(b)$$</span></p> <p>Which seems straightforward in this case: <span class="math-container">$$F(a+b)=I+a+b=I+a+I+b=F(a)+F(b)$$</span></p> <p>And the same for multiplication... And in addition check that the map is an injection?</p>
Community
-1
<p>By long division, the expression is reworked as the sum of a polynomial and a remainder term</p> <p><span class="math-container">$$\frac c{x+7}.$$</span></p> <p>The polynomial plays no role, and the derivatives of <span class="math-container">$(x+7)^{-1}$</span> at <span class="math-container">$x=5$</span> are <span class="math-container">$(-1)^k\,k!\,12^{-k-1}$</span>, hence for <span class="math-container">$k&gt;2$</span>, the terms of the Taylor development are <span class="math-container">$$c\frac{(-1)^k}{12}\left(\frac{x-5}{12}\right)^k.$$</span></p>
2,867,404
<p>I have a problem with how to get the area from the picture. The ideas I have tried are not good enough to get the correct value for the whole element.</p> <p><a href="https://i.stack.imgur.com/PeQ5O.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PeQ5O.jpg" alt="enter image description here"></a></p>
mathcounterexamples.net
187,663
<p>You don’t have enough data to compute the area (if 13.62 denotes the length of the base, as it seems on the OP's figure). The area depends on the « balance » between the left and the right sides of the house, and this seems unknown from your picture.</p>
345,735
<p>If <strong>two planes</strong> are <strong>intersected</strong> <em>by making a straight line, like <span class="math-container">$AB$</span></em> then</p> <blockquote> <p>Does the angle between two planes (see figure) <strong>always</strong> given by the angle between normal vectors (<span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span>) ?</p> </blockquote> <p><img src="https://i.stack.imgur.com/Y054P.jpg" alt="enter image description here" /></p>
SSumner
25,240
<p>Yes. See <a href="http://www.netcomuk.co.uk/%7Ejenolive/vect14.html" rel="nofollow noreferrer">here</a></p> <blockquote> <p>It[The angle between two planes] is defined as the angle between 2 lines, one in each plane, so that they are at right angles to the line of intersection of the 2 planes (like the angle between the tops of the pages of an open book).</p> <p>To find this angle, will we first have to find the equation of the line of intersection of the 2 planes, and then find 2 vectors which are in the planes and perpendicular to this? Fortunately no! We just need to know a normal vector to each of the planes.</p> </blockquote>
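<p>A small numerical sketch of this (NumPy; the function name is my own): the angle between two planes is computed directly from their normal vectors, with the absolute value making the result the acute angle.</p>

```python
import numpy as np

def plane_angle(n1, n2):
    """Acute angle (radians) between two planes, given their normal vectors."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    c = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(c, -1.0, 1.0))

# example: the xy-plane (normal along z) and a plane tilted 45 degrees
theta = plane_angle([0, 0, 1], [0, 1, 1])
print(theta)  # pi/4
```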
3,512,521
<p>Let <span class="math-container">$x_1, x_2,\dots, x_n$</span> be positive real numbers. Let <span class="math-container">$A$</span> be the <span class="math-container">$n\times n$</span> matrix whose <span class="math-container">$i,j^\text{th}$</span> entry is <span class="math-container">$$a_{ij}=\frac{1}{x_i+x_j}.$$</span></p> <p>This is a <strong>Cauchy matrix</strong>. I am trying to show that this matrix is positive semi-definite.</p> <p>I have been given the following hint: Consider the matrix <span class="math-container">$T=(t^{x_i+x_j})$</span> where <span class="math-container">$t&gt;0$</span>. Use the fact that <span class="math-container">$T$</span> is positive semi-definite and that <span class="math-container">$$\frac{1}{x_i+x_j}=\int_0^1t^{x_i+x_j-1}dt.$$</span></p> <p>I have managed to show that <span class="math-container">$T$</span> is positive semi-definite but I don't understand where to go from there or how to use the rest of the hint.</p> <p>I would like another way to do this, preferably without involving integrals</p> <p>Thank you.</p>
Ben Grossmann
81,360
<p>Regarding the hint: just as a sum of positive semidefinite operators is positive semidefinite, so is an integral of positive semidefinite operators positive semidefinite. So, since <span class="math-container">$T(t)$</span> is positive semidefinite for all <span class="math-container">$t \geq 0$</span>, it follows that <span class="math-container">$$ A = \int_0^1 T(t)\,dt $$</span> is positive semidefinite.</p> <hr> <p><a href="https://core.ac.uk/download/pdf/82405552.pdf" rel="noreferrer">This paper</a> hints at an interesting proof (without integrals) that when the numbers <span class="math-container">$x_i$</span> are distinct, <span class="math-container">$A$</span> is necessarily positive definite. We note that the determinant of a Cauchy matrix with distinct parameters <span class="math-container">$x_i$</span> is given by <span class="math-container">$$ \det(A) = \frac{\prod_{i,k,i&gt;k}(x_i-x_k)^2}{\prod_{i,j = 1}^n(x_i+x_j)}. $$</span> Because all terms being multiplied are positive, <span class="math-container">$\det(A) &gt; 0$</span>. Since every principal submatrix of <span class="math-container">$A$</span> is itself a Cauchy matrix for some set of distinct <span class="math-container">$x_i$</span>, we can conclude that the principal submatrices of <span class="math-container">$A$</span> also have positive determinant. By <a href="https://en.wikipedia.org/wiki/Sylvester%27s_criterion" rel="noreferrer">Sylvester's criterion</a>, we can conclude that <span class="math-container">$A$</span> is positive definite.</p> <p>For the more general statement with non-distinct <span class="math-container">$x_i$</span>, it suffices to note that the limit of a sequence of positive semidefinite matrices must itself be positive semidefinite.</p>
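<p>A quick numerical sanity check of the conclusion (NumPy, my own sketch): build a Cauchy matrix from random positive numbers (distinct with probability 1) and look at its eigenvalues.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 5.0, size=6)        # positive, distinct with probability 1
A = 1.0 / (x[:, None] + x[None, :])      # Cauchy matrix a_ij = 1/(x_i + x_j)

eigvals = np.linalg.eigvalsh(A)
print(eigvals)  # all non-negative, up to floating-point error
```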
3,512,521
<p>Let <span class="math-container">$x_1, x_2,\dots, x_n$</span> be positive real numbers. Let <span class="math-container">$A$</span> be the <span class="math-container">$n\times n$</span> matrix whose <span class="math-container">$i,j^\text{th}$</span> entry is <span class="math-container">$$a_{ij}=\frac{1}{x_i+x_j}.$$</span></p> <p>This is a <strong>Cauchy matrix</strong>. I am trying to show that this matrix is positive semi-definite.</p> <p>I have been given the following hint: Consider the matrix <span class="math-container">$T=(t^{x_i+x_j})$</span> where <span class="math-container">$t&gt;0$</span>. Use the fact that <span class="math-container">$T$</span> is positive semi-definite and that <span class="math-container">$$\frac{1}{x_i+x_j}=\int_0^1t^{x_i+x_j-1}dt.$$</span></p> <p>I have managed to show that <span class="math-container">$T$</span> is positive semi-definite but I don't understand where to go from there or how to use the rest of the hint.</p> <p>I would like another way to do this, preferably without involving integrals</p> <p>Thank you.</p>
A.Γ.
253,273
<p>Hint (without integrals): Let <span class="math-container">$X=\operatorname{diag}(x_1,x_2,\ldots,x_n)$</span> and <span class="math-container">$e$</span> be the vector of all ones. </p> <ol> <li>Prove that the Cauchy matrix <span class="math-container">$C$</span> satisfies the equation <span class="math-container">$$ XC+CX=ee^T. $$</span></li> <li>For any eigenvalue <span class="math-container">$\lambda$</span> of <span class="math-container">$C$</span> with <span class="math-container">$Cv=\lambda v$</span>, pre-multiply the equation by <span class="math-container">$v^T$</span> and post-multiply by <span class="math-container">$v$</span>. Conclude that <span class="math-container">$\lambda\ge 0$</span>.</li> </ol>
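<p>Step 1 of the hint is easy to verify numerically (NumPy sketch, my own variable names): entrywise, <span class="math-container">$(XC+CX)_{ij} = (x_i + x_j)c_{ij} = 1$</span>, so the product is the all-ones matrix <span class="math-container">$ee^T$</span>.</p>

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 3.5])
C = 1.0 / (x[:, None] + x[None, :])   # Cauchy matrix c_ij = 1/(x_i + x_j)
X = np.diag(x)
e = np.ones((len(x), 1))

lhs = X @ C + C @ X
rhs = e @ e.T
print(np.allclose(lhs, rhs))  # True
```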
2,477,302
<p><a href="https://i.stack.imgur.com/AtXyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtXyT.png" alt="enter image description here"></a></p> <p>I solved the problem. I am getting the final answer as 8/15, however, the book says the answer is 7/15.Seems like they haven't subtracted 7/15 from 1 Please help.</p>
Nick Peterson
81,839
<p><strong>Hint:</strong></p> <p>If $x=0$, this is pretty straight-forward; I'll give you a hint for the $x&gt;0$ case, though.</p> <p>Let $\epsilon&gt;0$. As you said, you can write $$ \lvert\sqrt{x_n}-\sqrt{x}\rvert=\left\lvert\frac{x_n-x}{\sqrt{x_n}+\sqrt{x}}\right\rvert=\frac{\lvert x_n-x\rvert}{\sqrt{x_n}+\sqrt{x}}. $$</p> <p>And, as you said, you want to make this absolute difference less than a given $\epsilon&gt;0$ by choosing $n$ sufficiently large. Note that $$ \frac{\lvert x_n-x\rvert}{\sqrt{x_n}+\sqrt{x}}\leq\frac{\lvert x_n-x\rvert}{\sqrt{x}}, $$ since $\sqrt{x_n}+\sqrt{x}\geq\sqrt{x}$. (But, we have the benefit now that $\sqrt{x}$ is a constant for fixed $x$.) So, if you can find a way to make $\lvert x_n-x\rvert&lt;\epsilon\sqrt{x}$, you're in good shape. What assumptions have we made that will allow you to make $\lvert x_n-x\rvert$ small for $n$ large?</p>
2,477,302
<p><a href="https://i.stack.imgur.com/AtXyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtXyT.png" alt="enter image description here"></a></p> <p>I solved the problem. I am getting the final answer as 8/15, however, the book says the answer is 7/15.Seems like they haven't subtracted 7/15 from 1 Please help.</p>
MrYouMath
262,304
<p>Hint: First look at the case where the limit is $x=0$. Then assume $x\neq 0$ and find an upper bound for the fraction</p> <p>$$\frac{1}{\sqrt{x_n}+\sqrt{x}}$$</p>
1,311,367
<p>Recently, I have been considering a question. As is well known, Cauchy's inequality is a famous and useful inequality: $$\left|\int_{a}^{b}f(x)dx\right|^2\leq|b-a| \int_{a}^{b}f^2(x)dx.$$ My question is: can we obtain an inequality in the other direction? Namely, can we find something about $A$ such that $$\left|\int_{a}^{b}f(x)dx\right|^2\ge |A|\times \left|\int_{a}^{b}f(x)^2dx\right| $$ holds? Can anyone help me? Thanks!</p>
marty cohen
13,079
<p>No you can't, because you can choose $f(x)$ so that $\int_a^b f(x) dx = 0$.</p> <p>If you restrict $f$ so that it is always positive, then I think you need a bound on $f$.</p> <p>(added later)</p> <p>There <em>is</em> a reversed-inequality of the discrete C-S inequality:</p> <p>If $0 &lt; m &lt; a_i/b_i &lt; M$, then $$\left(\sum_{i=1}^n a_i^2\right)\left(\sum_{i=1}^n b_i^2\right) \le \frac{(M+m)^2}{4mM} \left(\sum_{i=1}^n a_i b_i\right)^2 $$</p> <p>You can find the proof here: <a href="http://www.artofproblemsolving.com/wiki/index.php/Cauchy-Schwarz_Inequality" rel="nofollow">http://www.artofproblemsolving.com/wiki/index.php/Cauchy-Schwarz_Inequality</a></p> <p>Setting $b_i = 1$ (to get an approximation for $\int_0^n$), this becomes</p> <p>If $0 &lt; m &lt; a_i &lt; M$, then $$n\sum_{i=1}^n a_i^2 \le \frac{(M+m)^2}{4mM} \left(\sum_{i=1}^n a_i \right)^2 $$</p> <p>Setting $a=0$, $b=n$, and $a_i=f(i)$, so that $\sum_{i=0}^n a_i \approx \int_0^n f(x) dx $, this becomes</p> <p>If $0 &lt; m &lt; f(x) &lt; M$, then $$(b-a)\int_a^b f(x)^2 dx \le \frac{(M+m)^2}{4mM} \left(\int_a^b f(x) dx \right)^2 $$</p> <p>Converting this to mean value over the interval by dividing by $(b-a)^2$, this inequality becomes</p> <p>$$\frac1{b-a}\int_a^b f(x)^2 dx \le \frac{(M+m)^2}{4mM} \left(\frac1{b-a}\int_a^b f(x) dx \right)^2 $$</p> <p>I'm sure this is well-known.</p>
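<p>A numerical spot check of the discrete reversed inequality quoted above (Python, my own setup): draw ratios $a_i/b_i$ strictly inside $(m, M)$ and compare the two sides.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
b = rng.uniform(1.0, 2.0, size=n)
ratio = rng.uniform(0.5, 3.0, size=n)   # a_i / b_i, kept strictly inside (m, M)
a = ratio * b
m, M = 0.4, 3.1                          # bounds with 0 < m < a_i/b_i < M

lhs = (a @ a) * (b @ b)
rhs = (M + m) ** 2 / (4 * m * M) * (a @ b) ** 2
print(lhs <= rhs)  # True
```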
1,419,315
<p>I have a particular scenario.</p> <p>In this scenario, we have the standard cubic equation,</p> <pre><code>ax^3 + bx^2 + cx + d = y </code></pre> <p>as well as 3 points that are graphed, <a href="https://i.imgur.com/VCZKuGW.png" rel="nofollow noreferrer">as can be seen in this graph</a>. (The line is irrelevant for now)</p> <p>Assume that </p> <pre><code>a, b, and c </code></pre> <p>will all adapt themselves so that they can fit all three dots as well as </p> <pre><code>d </code></pre> <p>and create a cubic line.</p> <p>The problem at hand, now, is to find some sort of general equation that would find the value of d that would lead to having the smallest sum of all coefficients, those being </p> <pre><code>a, b, c and d </code></pre> <p>The end result doesn't have to be a specific number, though. A range of values of reasonable width that is guaranteed to contain the number is acceptable as well.</p> <p>If any clarifications or more details are needed, then feel free to comment and I'll add anything necessary.</p> <p><strong>EDIT</strong></p> <p>I should have mentioned that by minimum, I mean the smallest sum of absolute values; in other words</p> <pre><code>|a| + |b| + |c| + |d| </code></pre> <p><a href="https://jsfiddle.net/gamea12/crkg1f6c/" rel="nofollow noreferrer">Here's a program that simulates the scenario</a></p>
m0_as
266,546
<p>To determine whether $A$ is positive (negative) (semi-)definite, you need to find the eigenvalues of $A$. Then:</p> <ol> <li>If all eigenvalues are positive, $A$ is positive definite.</li> <li>If all eigenvalues are non-negative, $A$ is positive semi-definite.</li> <li>If all eigenvalues are negative, $A$ is negative definite.</li> <li>If all eigenvalues are non-positive, $A$ is negative semi-definite.</li> <li>If some eigenvalues are positive and some are negative, $A$ is neither positive nor negative (semi-)definite.</li> </ol> <p>The eigenvalues of a matrix can be found by solving $\det(\lambda I -A)=0$. For your example, this results in $\lambda(\lambda-3)^2 =0$, which means that the eigenvalues are 0, 3, 3. So we are in the second case and $A$ is positive semi-definite.</p>
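<p>A sketch of this recipe in code (NumPy; the matrix below is my own example chosen to have the eigenvalues 0, 3, 3 mentioned above, since the question's matrix isn't reproduced here):</p>

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a real symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

# a symmetric matrix with characteristic polynomial lambda*(lambda - 3)^2
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, -1.0],
              [1.0, -1.0, 2.0]])
print(classify(A))  # positive semi-definite
```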
509,635
<p>If every chain in a lattice is complete (we take the empty set to be a chain), does that mean that the lattice is complete? If yes, why? </p> <p>My intuition says yes, and the reasoning is that we should somehow be able to define the supremum of any subset of the lattice to be the same as the supremum of some chain related to that lattice, but I've not been able to make more progress on this. Any suggestions? </p>
Brian M. Scott
12,042
<p>If $L$ is not complete, it has a subset with no join; among such subsets let $A$ be one of minimal cardinality, say $A=\{a_\xi:\xi&lt;\kappa\}$ for some cardinal $\kappa$. For each $\eta&lt;\kappa$ let $A_\eta=\{a_\xi:\xi\le\eta\}$; $|A_\eta|&lt;\kappa$, so $A_\eta$ has a join $b_\eta$. Clearly $b_\xi\le b_\eta$ whenever $\xi\le\eta&lt;\kappa$, so $\{b_\xi:\xi&lt;\kappa\}$ is a chain; indeed, with a little more argumentation we can assume that the chain is a strictly increasing $\kappa$-sequence. Now let $b=\bigvee_{\xi&lt;\kappa}b_\xi$ and show that $b=\bigvee A$ to get a contradiction.</p>
4,500,928
<p>How many ten-digit positive integers are there such that all of the following conditions are satisfied:</p> <p>(a) each of the digits 0, 1, ... , 9 appears exactly once;</p> <p>(b) the first digit is odd;</p> <p>(c) five even digits appear in five consecutive positions?</p> <p>From Combinatorics by Pavle Mladenovic</p> <p>My approach is as follows: We first choose where to place the five even digits since that is the most restrictive condition. So they can be placed in slots 1-5 all the way to 6-10 (e.g. 1st to 5th digits). There are 6 ways to do this, and then we have 120 ways (5!) to place the even digits in the slots, as well as 5! ways to place the odd digits, for a total of 6 * 120 * 120 ways. Wondering if this is accurate or if I'm miscounting something.</p> <p>Thank you in advance.</p>
MathFail
978,020
<p>The first digit must be odd; I use X to mark it. I use <span class="math-container">$[-----]$</span> to mark the five consecutive positions for even integers. I use <span class="math-container">$[~~]$</span> to mark the other seats, which are for odd integers. So we have <span class="math-container">$5$</span> ways to locate the five consecutive positions for even integers:</p> <p><span class="math-container">$$[X][-----][~~][~~][~~][~~]$$</span> <span class="math-container">$$[X][~~][-----][~~][~~][~~]$$</span> <span class="math-container">$$[X][~~][~~][-----][~~][~~]$$</span> <span class="math-container">$$[X][~~][~~][~~][-----][~~]$$</span> <span class="math-container">$$[X][~~][~~][~~][~~][-----]$$</span></p> <p>Next, we have <span class="math-container">$5!$</span> ways to arrange the odd integers, and <span class="math-container">$5!$</span> ways to arrange the even integers. The final answer is:</p> <p><span class="math-container">$$5\cdot 5!\cdot 5!=72000$$</span></p>
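<p>A brute-force confirmation of the count (Python, my own sketch): enumerate the <span class="math-container">$\binom{10}{5}=252$</span> possible position sets for the even digits, keep those satisfying the two placement conditions, and multiply by the <span class="math-container">$5!\cdot 5!$</span> arrangements.</p>

```python
from itertools import combinations
from math import factorial

valid = 0
for evens in combinations(range(10), 5):   # sorted positions of the 5 even digits
    if 0 in evens:
        continue                           # first digit must be odd
    if evens[4] - evens[0] == 4:           # the 5 positions are consecutive
        valid += 1

total = valid * factorial(5) * factorial(5)
print(valid, total)  # 5 72000
```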
393,250
<p>Let <span class="math-container">$G$</span> be a finitely generated residually finite group and let <span class="math-container">$M$</span> be a finitely generated <span class="math-container">$\mathbb{Z}[G]$</span>-module.</p> <p><strong>Question</strong>: Must <span class="math-container">$M$</span> be residually finite in the sense that for all nonzero <span class="math-container">$x \in M$</span>, there exists some submodule <span class="math-container">$N$</span> of <span class="math-container">$M$</span> such that <span class="math-container">$x \notin N$</span> and <span class="math-container">$M/N$</span> is finite?</p> <p>If this is not true in general, is it true if <span class="math-container">$G$</span> is also assumed to be nilpotent?</p>
YCor
14,094
<p>It's true, and due to Ph. Hall, when <span class="math-container">$G$</span> is virtually nilpotent, and more generally (Roseblade) when <span class="math-container">$G$</span> is virtually polycyclic.</p> <p>When <span class="math-container">$G=\mathbf{Z}\wr\mathbf{Z}$</span> there exists an infinite simple <span class="math-container">$\mathbf{Z}G$</span>-module, so it's not residually finite.</p> <hr> <p>Added: The counterexample is due to Ph. Hall. Since I already mentioned the original one for other purposes on this site, let me provide an immediate variant, which entails the result.</p> <p>Notation: <span class="math-container">$L_n=(\mathbf{Z}/n\mathbf{Z})\wr\mathbf{Z}$</span>, <span class="math-container">$\mathbf{F}_p=\mathbf{Z}/p\mathbf{Z}$</span> (viewed as field).</p> <p><b>Proposition</b> Let <span class="math-container">$p$</span>, <span class="math-container">$q$</span> be primes such that <span class="math-container">$p$</span> divides <span class="math-container">$q-1$</span>. Then there exists an infinite simple <span class="math-container">$\mathbf{F}_q L_p$</span>-module <span class="math-container">$V$</span> (which is therefore a simple <span class="math-container">$\mathbf{Z} L_p$</span>-module, thus not residually finite).</p> <p>Proof: Fix an element <span class="math-container">$x$</span> of order <span class="math-container">$p$</span> in the multiplicative group <span class="math-container">$\mathbf{F}_q^*$</span>. 
Let <span class="math-container">$(w_n)_{n\in\mathbf{Z}}$</span> be valued in <span class="math-container">$\{1,x\}$</span>, with the property that for every <span class="math-container">$n$</span> there exists <span class="math-container">$m$</span> such that <span class="math-container">$w_m=x$</span> and <span class="math-container">$w_i=1$</span> for all <span class="math-container">$i$</span> such that <span class="math-container">$0&lt;|i|\le n$</span>.</p> <p>Let <span class="math-container">$V_q=\mathbf{F}_q^{(\mathbf{Z})}$</span> be the abelian group of finitely supported sequences <span class="math-container">$\mathbf{Z}\to\mathbf{F}_q$</span>, with basis <span class="math-container">$(e_m)_{m\in\mathbf{Z}}$</span>. Let <span class="math-container">$d$</span> be the diagonal automorphism of <span class="math-container">$V$</span>: <span class="math-container">$(x_n)\mapsto (w_nx_n)$</span>. Let <span class="math-container">$\tau$</span> be the shift <span class="math-container">$(x_n)\mapsto (x_{n+1})$</span>. Hence <span class="math-container">$\tau^nf\tau^{-n}$</span> is diagonal for all <span class="math-container">$n$</span> and <span class="math-container">$f^p=\mathrm{id}$</span>, so that <span class="math-container">$\tau$</span>, <span class="math-container">$d$</span> define a representation <span class="math-container">$L_p\to\operatorname{Aut}(V)$</span>. Thus <span class="math-container">$V_q$</span> is a <span class="math-container">$\mathbf{F}_qL_p$</span>-module.</p> <p>I claim it is a simple <span class="math-container">$\mathbf{F}_qL_p$</span>-module. Indeed, start from a nonzero <span class="math-container">$v\in V_q$</span>, and let <span class="math-container">$W$</span> be the <span class="math-container">$L_p$</span>-submodule generated. Let <span class="math-container">$S$</span> be the (finite) support of <span class="math-container">$v$</span> and fix <span class="math-container">$m\in S$</span>. 
Then there exists a translate of <span class="math-container">$w$</span> that equals <span class="math-container">$1$</span> on <span class="math-container">$S\smallsetminus\{m\}$</span> and equals <span class="math-container">$x$</span> at <span class="math-container">$m$</span>. This corresponds to <span class="math-container">$\tau^n d\tau^{-n}$</span> for some <span class="math-container">$n$</span>. Hence <span class="math-container">$\tau^nd\tau^{-n}v-v$</span> is a nonzero scalar multiple of <span class="math-container">$e_m$</span>. So <span class="math-container">$e_m\in W$</span>, and using <span class="math-container">$\tau$</span> it follows that <span class="math-container">$W$</span> contains all basis elements. Hence <span class="math-container">$W=V$</span>, proving simplicity.</p>
4,363,409
<blockquote> <p>Define <span class="math-container">$X_0=\alpha\in(0,1)$</span> the initial capital and <span class="math-container">$X_n$</span> as the remaining capital after each game. A player bets <span class="math-container">$1-X_n$</span> if <span class="math-container">$X_n&gt;1/2$</span> and <span class="math-container">$X_n$</span> if <span class="math-container">$X_n\leq 1/2$</span> such that each game is a Bernoulli<span class="math-container">$(1/2)$</span>. Define <span class="math-container">$A_n=\{X_n\in(0,1)\}$</span>, the event that the player neither wins everything nor reaches ruin. Show by induction that <span class="math-container">$P(A_n)\leq 2^{-n}$</span>.</p> </blockquote> <p>If <span class="math-container">$\alpha&lt; 1/2$</span> then the player either goes broke with probability <span class="math-container">$1/2$</span> or owns <span class="math-container">$2X_n$</span> in the next game. If <span class="math-container">$\alpha &gt;1/2$</span> then the player owns every available resource with probability <span class="math-container">$1/2$</span> or owns <span class="math-container">$2X_n -1$</span> in the next game. Should I use the law of total probability and try to work with the conditionals on the last fortune, as in</p> <p><span class="math-container">$$P(A_n) = \sum_{k=1}^m P(A_n|B_k)P(B_k)$$</span> where <span class="math-container">$\bigcup B_k= \Omega$</span>?</p> <p>Also, if this is the right path, should I split into three cases, with an <span class="math-container">$X_n=1/2$</span> case where the player reaches either <span class="math-container">$0$</span> or <span class="math-container">$1$</span>?</p> <p>I'm having a little trouble working this problem out.</p>
Josiah
1,017,665
<p>The basic reasoning in Andrew's answer is what you'd get for a perfect mirror. There are a few more physical effects which may be worth thinking about to get it just right.</p> <p>One is that water is much <a href="https://en.m.wikipedia.org/wiki/Reflectance#Water_reflectance%20less%20reflective" rel="nofollow noreferrer">less reflective</a> to light coming straight down at it than to light coming at a shallow angle. If, for example, you are looking down at it at an angle of 20 degrees from the horizontal, it reflects less than 15 percent of the light. This means that your reflection should be <strong>dimmer</strong> than what it's reflecting.</p> <p>A second effect which may make a bit of a difference: things which are further away look smaller. If you really did all of Andrew's maths for each point on the tree, this would happen anyway. But as a shortcut rule of thumb, if there's a significant difference between the length directly to the tree and the length down to the water and back up to the tree, your reflected tree would be appropriately that much smaller.</p> <p>The last two minor effects I'd mention are that water is not perfectly flat (so ripples distort reflections) and that water is transparent at steep angles (so close to the viewer, where they're looking down, any reflection would be merged with whatever is in or under the water — though that last effect may just be a blurry brownness, because of silt as much as because of optics).</p> <p>For all of these, I'd suggest that you go with whatever looks right in the art, but it's worth knowing what to expect to look for.</p>
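<p>The angle dependence mentioned above can be sketched with the Fresnel equations (Python; refractive index 1.33 for water, unpolarized light — my own sketch, a simplification of real scenes):</p>

```python
import math

def water_reflectance(view_deg_above_horizontal, n1=1.0, n2=1.33):
    """Unpolarized Fresnel reflectance of a flat water surface."""
    ti = math.radians(90.0 - view_deg_above_horizontal)  # angle from the normal
    tt = math.asin(n1 * math.sin(ti) / n2)               # Snell's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)

print(water_reflectance(20))  # roughly 0.13: under 15% when looking 20 degrees above horizontal
print(water_reflectance(90))  # roughly 0.02: looking straight down
```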
4,413,093
<p>Determine the radius of convergence of the series <span class="math-container">$\sum_{n=1}^{\infty}a_nz^n$</span> where <span class="math-container">$a_n=\frac{n^2}{4^n+3n}$</span>.</p> <p>Now <span class="math-container">$\alpha=\limsup_{n\to \infty}(\vert a_n\vert)^\frac{1}{n}$</span> and so the radius of convergence is <span class="math-container">$R=\frac{1}{\alpha}$</span>.</p> <p>So now <span class="math-container">$\alpha=\limsup_{n\to \infty}(\vert a_n\vert)^\frac{1}{n}=\limsup_{n\to\infty} (\vert \frac{n^2}{4^n+3n}\vert)^\frac{1}{n}$</span></p> <p>Now I know <span class="math-container">$\lim_{n\to\infty} n^\frac{1}{n}=1$</span> but what about the denominator?</p>
Dr. Sundar
1,040,807
<p>When we define any function <span class="math-container">$f: A \rightarrow B$</span>, its domain <span class="math-container">$A$</span> must be carefully chosen so that <span class="math-container">$f(x)$</span> is a unique, well-defined, output for each input variable <span class="math-container">$x$</span> in the domain <span class="math-container">$A$</span>.</p> <p>If we take <span class="math-container">$f(x) = + \sqrt{x}$</span>, then it is a well-defined function over the non-negative real number system, <em>i.e.</em> if we take its domain as</p> <p><span class="math-container">$A = \{ x \in \mathbf{R} : x \geq 0 \}$</span>.</p> <p>Similarly, if we take <span class="math-container">$g(x) = {1 \over x}$</span>, then <span class="math-container">$g$</span> is well-defined except at the point <span class="math-container">$x = 0$</span>.</p> <p>Hence, <span class="math-container">$g(x)$</span> is a well-defined function over <span class="math-container">$\mathbf{R} \setminus \{ 0 \} $</span>, <em>i.e.</em></p> <p>Domain(g) = <span class="math-container">$\{ x \in \mathbf{R} : x \neq 0 \}$</span>.</p>
225,128
<p>I'm trying to define a function accepting only real values like this:</p> <pre><code>f[x_Real] := x^2 f[0] </code></pre> <p>But it outputs</p> <blockquote> <p><code>f[0] </code></p> </blockquote> <p>and doesn't output 0.</p> <p>Is there any reason why <code>f[x_Real]</code> doesn't work? I tested <code>f[x_Integer]</code> and <code>f[x_Complex]</code> and they both seem to work.</p>
mgamer
19,726
<p><code>0</code> without the &quot;.&quot; is an <code>Integer</code>, see</p> <pre><code>Head @ 0 (* Integer *) </code></pre> <p>but</p> <pre><code>f[0.] </code></pre> <p>delivers</p> <pre><code>0. </code></pre>
2,204,944
<p>A line is a collection of infinitely many points. By definition, a point has no dimensions. But how can infinitely many dimensionless points give rise to a line, which has a dimension? The same applies to planes, solids and higher dimensions too...</p> <p>Thanks in advance for any help!</p>
user1551
1,551
<p>$X_1$ and $X_2$ follow the same discrete uniform distribution (since $X_1$ and $X_2|X_1$ are uniform, so is $X_2$). Hence they have identical expectations.</p>
2,208,755
<p>I got stuck on this question: find all solutions $x$ for $a\in R$:</p> <p>$$\frac{(x^2-x+1)^3}{x^2(x-1)^2}=\frac{(a^2-a+1)^3}{a^2(a-1)^2}$$</p> <p>I see that if we simplify we get:</p> <p>$$\frac{(x^2-x+1)^3}{x^2(x-1)^2}=\frac{[(x-{\frac 12})^2+{\frac 34}]^3}{[(x-{\frac 12})^2-{\frac 14}]^2}$$</p> <p>From the expression $(x-{\frac 12})^2$, I see that if $x=x_1$ is a solution, then $x=1-x_1$ is also a solution. But in the solution to this exercise, it was stated that $x=\frac{1}{x_1}$ must also be a solution, and I don't see how.</p> <p>[EDIT]</p> <p>Ok, thanks for the help guys. What do you think of this solution (it doesn't involve any math above precalculus, and needs no long calculations)?</p> <p>From the above we know that if $x_1=a$ is a solution, then $x_2=1-a$ is also a solution.</p> <p>Also, from here:</p> <p>$$\require{cancel}\frac{(x^2-x+1)^3}{x^2(x-1)^2}=\frac{\cancel{x^3}(x+{\frac 1x}-1)^3}{\cancel{x^3}(x+{\frac 1x}-2)}$$</p> <p>in the expression $x+{\frac 1x}$ we see that if $x=x_1$ is a solution, then $x=\frac{1}{x_1}$ is also a solution, so $x_3=\frac{1}{a}$.</p> <p>With these two rules we can now keep generating roots until we have 6 total.</p> <p>If $x=x_2$ is a solution, then $x=\frac{1}{x_2}$ is also a solution, so $x_4=\frac{1}{1-a}$.</p> <p>If $x=x_3$ is a solution, then $x=1-x_3$ is also a solution, so $x_5=\frac{a-1}{a}$.</p> <p>Finally, if $x=x_5$ is a solution, then $x=\frac{1}{x_5}$ is also a solution, so $x_6=\frac{a}{a-1}$.</p> <p>The 6 obtained values are distinct, so they cover all the roots.</p> <p>[EDIT2]</p> <p>I guess this is answered. Not sure whose particular answer to actually select as the right one since they're all correct, so I'll just leave it like this.</p>
dxiv
291,201
<p>Hint: &nbsp;write the LHS in terms of $z = x+\cfrac{1}{x}$ as follows:</p> <p>$$ \require{cancel} \frac{(x^2-x+1)^3}{x^2(x-1)^2} = \frac{(x^2-x+1)^3}{x^2(x^2-2x+1)}= \frac{\bcancel{x^3}\left(x+\cfrac{1}{x}-1\right)^3}{\bcancel{x^3}\left(x+\cfrac{1}{x}-2\right)}=\frac{(z-1)^3}{z-2} $$</p> <p>Let $b=a+\cfrac{1}{a}$ and do the same for the RHS, then the equation becomes:</p> <p>$$ \frac{(z-1)^3}{z-2} = \frac{(b-1)^3}{b-2} $$</p> <p>The above is a cubic in $z\,$ with the obvious root $z_1=b\,$, which leaves a quadratic to solve.</p> <p>$$ \big(\,z-b\,\big) \,\big( \,(b-2)z^2 +(b-2)(b-3)z - (2b^2-6b+5)\,\big) \,=\, 0 $$</p> <p>After not too pretty calculations, the other roots turn out to be:</p> <p>$$ z_{2,3} = \frac{b^2 \pm (\sqrt{b^2 - 4} - 5) b \mp \sqrt{b^2 - 4} + 6}{4 - 2 b} $$</p> <p>Reverting to the $x$ and $a$ variables, the root $z_1=b$ gives the roots $x=a$ and $x=\cfrac{1}{a}\,$, then the roots $z_{2,3}$ give the other $4$ roots in $x$ after solving the respective equations. The calculations are again not pretty (and not included here), though noting $\sqrt{b^2-4}=a-\cfrac{1}{a}\,$ eliminates the radicals upfront. <hr> [ <em>EDIT</em> ] &nbsp;The following is a shortcut for the final calculations, using OP's observation that:</p> <blockquote> <p>I see that if $x=x_1$ is a solution, then $x=1-x_1$ is also a solution.</p> </blockquote> <p>This means that the second root in $z$ is $z_2=1-a + \cfrac{1}{1-a}\,$ corresponding to the roots $x=1-a$, $x=\cfrac{1}{1-a}\,$. The remaining root in $z$ as determined by Vieta's relations is:</p> <p>$$\require{cancel}z_3=-(b-3)-\left(1-a + \cfrac{1}{1-a}\right)= -\bcancel{a}-\frac{1}{a}+3-1+\bcancel{a}-\frac{1}{1-a}=2 - \frac{1}{a}-\frac{1}{1-a}$$</p> <p>Solving the equation $x+\cfrac{1}{x}=2 - \cfrac{1}{a}-\cfrac{1}{1-a}$ gives the last two roots $x=\cfrac{a}{a-1}\,$, $x=\cfrac{a-1}{a}$. 
<hr> [ <em>EDIT</em> #2 ] &nbsp;Even shorter: once established that if $x$ is a root then $1/x$ and $1-x$ are also roots, and given the obvious $x=a$ root, it follows that $\,1/a\,$, $\,1-a\,$, $\,1/(1-a)\,$, $\,1-1/a=(a-1)/a\,$, $\,1/(1-1/a)=a/(a-1)\,$ are also roots. For $a \ne 0,1$ and $a \not \in \{-1, \frac{1}{2},2\}$ the $6$ roots are distinct, and since the equation is of degree $\,6\,$, it follows that these are the only roots. For $a \in \{-1, \frac{1}{2},2\}$ the same can be shown to hold true by a continuity argument.</p>
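<p>The six-root structure is easy to confirm numerically (Python, my own sketch): evaluate the left-hand side at $a$ and at the five transformed values and check that all six values coincide.</p>

```python
def f(t):
    return (t * t - t + 1) ** 3 / (t * t * (t - 1) ** 2)

a = 2.3  # any value other than 0 and 1 (and not -1, 1/2, 2, so all six roots are distinct)
roots = [a, 1 / a, 1 - a, 1 / (1 - a), (a - 1) / a, a / (a - 1)]
vals = [f(r) for r in roots]
print(vals)  # all six values agree
```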
2,677,823
<p>How can I precisely prove the existence of a continuous function $\rho(x)$ with $0 \leq \rho(x) \leq 1$ for all $x \in R^d$, such that $g(x) \rho(x)$ is bounded and continuous whenever $g(x)$ is continuous? Both $g(x)$ and $\rho(x)$ are defined on $R^d$.</p> <p>My idea was that we can choose $\rho(x)$ such that $\rho(x)g(x)$ goes exponentially to zero outside a compact set in $R^d$, but I can't argue this rigorously.</p> <p>Any hints on how I could proceed?</p>
Surb
154,545
<p>If $f:\mathbb C\to \mathbb R$ it's obvious since the integral of a real function is real. Suppose $f:\mathbb C\to\mathbb C$ and write $$f(z)=u(x,y)+iv(x,y)$$ with $x,y\in\mathbb R$, $u=\Re(f)$ and $v=\Im(f)$. Since $u(x,y),v(x,y)\in\mathbb R$ for all $x,y$, $$\overline{\int f}=\overline{\int u+i\int v}=\overline{\int u}-i\overline{\int v}=\int u-i\int v=\int(u-iv)=\int \bar f.$$</p>
99,799
<p>I have a <code>Solve</code> similar to the following:</p> <pre><code>Solve[e^2 - c^2 == -15, {e, c}, Integers] (* {{e -&gt; -7, c -&gt; -8}, {e -&gt; -7, c -&gt; 8}, {e -&gt; -1, c -&gt; -4}, {e -&gt; -1, c -&gt; 4}, {e -&gt; 1, c -&gt; -4}, {e -&gt; 1, c -&gt; 4}, {e -&gt; 7, c -&gt; -8}, {e -&gt; 7, c -&gt; 8}} *) </code></pre> <p>I need to add a region constraint to get the solution I want from the unconstrained list of solutions. I tried the following:</p> <pre><code>Solve[e^2 - c^2 == -15 ∧ {e, c} ∈ Interval[{0, 4}], {e, c}, Integers] (* {{e -&gt; {1}, c -&gt; {4}}} *) </code></pre> <p>However, when I do this it wraps the variable's solutions in <code>List</code>. Is there a way to turn this off so I just get <code>{{e -&gt; 1, c -&gt; 4}}</code> or <code>{e -&gt; 1, c -&gt; 4}</code> as the result? The current result is a pain as I have to massage it for use with <code>Replace</code>. Also, can any explain why it is doing this when I constrain the variables? </p>
bbgodfrey
1,063
<p>The unwanted <code>List</code> can also be replaced after <code>Solve</code> completes.</p> <pre><code>Solve[e^2 - c^2 == -15 ∧ {e, c} ∈ Interval[{0, 4}], {e, c}, Integers] /. Rule[z1_, {z2_}] -&gt; Rule[z1, z2] </code></pre> <p>or</p> <pre><code>Replace[Solve[e^2 - c^2 == -15 ∧ {e, c} ∈ Interval[{0, 4}], {e, c}, Integers], List[z_] -&gt; z, -1] (* {{e -&gt; 1, c -&gt; 4}} *) </code></pre>
250,454
<p>Is there a <code>ReplaceOnce</code> which does only one replacement if possible by trying the rules sequentially in order. Consider the following as an example:</p> <pre><code>ReplaceOnce[{&quot;May&quot;,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;},{&quot;May&quot;-&gt;1,&quot;5&quot;-&gt;2}] </code></pre> <p>should produce:</p> <pre><code>{1,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;} </code></pre> <p>Similarly,</p> <pre><code>ReplaceOnce[{&quot;May&quot;,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;},{&quot;5&quot;-&gt;2,&quot;May&quot;-&gt;1}] </code></pre> <p>should produce:</p> <pre><code>{&quot;May&quot;,2,&quot;May&quot;,&quot;5&quot;} </code></pre>
Ben Izd
77,079
<p>I believe a better solution exists.</p> <pre><code>ClearAll[replaceLimitedBack, replaceLimited]; replaceLimitedBack[expr_, rulesRaw_, n_Integer, levelSpec_] := Block[{rules = Association[rulesRaw], one = 0}, {Replace[expr, a_?(If[one &lt; n &amp;&amp; KeyExistsQ[rules, #], one += 1; True, False] &amp;) :&gt; rules[a], levelSpec], n - one}] replaceLimited[expr_, rulesRaw_List, n_Integer : 1, levelSpec_ : Infinity] := First[Fold[ replaceLimitedBack[#1[[1]], #2, #1[[2]], levelSpec] &amp;, {expr, n}, rulesRaw]] </code></pre> <p>In the condition (<code>?</code>), we check how many times we have replaced elements using <code>one</code> + if a replacement exists with <code>KeyExistsQ</code>, then increase the <code>one</code>.</p> <p>result:</p> <pre><code>replaceLimited[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;May&quot; -&gt; 1, &quot;5&quot; -&gt; 2}] (*Out: {1, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;} *) replaceLimited[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;May&quot; -&gt; 1, &quot;5&quot; -&gt; 2}, 2] (*Out: {1, &quot;5&quot;, 1, &quot;5&quot;} *) replaceLimited[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;5&quot; -&gt; 2, &quot;May&quot; -&gt; 1}] (*Out: {&quot;May&quot;, 2, &quot;May&quot;, &quot;5&quot;} *) </code></pre> <p>Notes:</p> <ul> <li>Rules will be applied one by one, as far as permitted. For example, if the limit is 2 and the first rule occurred 2 times, only the first rule will be applied and the rest of the rules will not be touched (see the second example).</li> <li>The third argument (<code>n</code>) is for how many times you want to replace.</li> <li>The fourth argument (<code>levelSpec</code>) is for Replace <code>LevelSpec</code>.</li> </ul>
78,641
<p>I am interested in the relation between the property of countable chain condition (ccc) and the property of separable. Could someone recommend some papers or books about this to me? thanks in advance.</p>
Joel David Hamkins
1,946
<p>You will want to look at <a href="http://en.wikipedia.org/wiki/Suslin%27s_problem" rel="nofollow">Suslin lines</a>, which are examples of ccc non-separable orders. The existence of Suslin lines is independent of the axioms of ZFC, and this material is covered in any of the usual graduate set theory texts, such as Jech's book Set Theory. </p> <p>The real line $\langle\mathbb{R},\lt\rangle$ was known classically to be uniquely determined by the following properties:</p> <ul> <li>The real line is a dense linear order with no endpoints</li> <li>the real line has the LUB property</li> <li>the real line has a countable dense subset. </li> </ul> <p>Suslin inquired whether the final condition can be weakened to the ccc property, asserting that every family of pairwise disjoint intervals is countable, and still retain its characterization of $\mathbb{R}$. A counterexample to this latter property is known as a Suslin line. The existence of a Suslin line is equivalent to the existence of a Suslin tree, a well-founded tree of height $\omega_1$ with no uncountable branches or antichains. Much of the set-theoretic development is centered on the Suslin tree concept, which seems to clarify certain issues better than the Suslin lines. </p> <p>Suslin himself struggled with the question, and died before coming to learn the answer, which is:</p> <ul> <li><p>It is consistent with ZFC that Suslin lines exist. This is true in the constructable universe, and indeed, the existence of Suslin trees and hence Suslin lines is a consequence of the combinatorial principle known as $\Diamond$. Furthermore, every model of set theory has a forcing extension in which $\Diamond$ holds and hence in which Suslin trees exist.</p></li> <li><p>It is also consistent with ZFC that there are no Suslin trees. The solution of this problem was the main motivating example for the development of iterated forcing by Solovay and Tennenbaum, proved in the early 1960s. 
It is a consequence of Martin's axiom at $\omega_1$ that there are no Suslin trees. </p></li> </ul>
2,073,923
<p>It seems the number of nonnegative integer solutions to the equation $xyz=n$ is given by $$\sum\limits_{d \mid n} \tau(d)$$</p> <p>$\tau$ is the number of divisors function. I'm wondering if there is a way to simplify this sum. Really appreciate any kind of help. Thank you.</p> <hr> <p>Here is my attempt so far $$xyz = n$$</p> <p>$x$ can be any of the factors of $n$ and the product $yz$ will be $n/x$. Since $yz$ sees all the factors of $n$, the number of nonnegative integer solutions to $xyz=n$ is simply the sum of divisors of the product $yz$.</p> <p>Edit : Special thanks to @Tryss for identifying an error in the formula. I've fixed it now..</p>
ajotatxe
132,456
<p>The function $$f(n)=\sum_{d\mid n}\tau(d)$$ is multiplicative. That is, $f(mn)=f(m)f(n)$ whenever $\gcd(m,n)=1$.</p> <p>Let's try to find a formula for powers of primes:</p> <p>$$f(p^r)=\sum_{d\mid p^r}\tau(d)=\sum_{k=0}^r\tau(p^k)=\sum_{k=0}^r(k+1)=\frac{(r+1)(r+2)}2$$</p> <p>Then, if the prime factorization of $n$ is $$n=\prod_{k=1}^sp_k^{t_k}$$ we have that $$f(n)=2^{-s}\prod_{k=1}^s(t_k+1)(t_k+2)$$</p>
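A brute-force check of both the divisor-sum formula and this closed form (my addition; the solutions counted are ordered triples of positive integers):

```python
# Verify: #{(x,y,z) positive with xyz = n} = sum_{d|n} tau(d)
#       = prod over prime powers p^t || n of (t+1)(t+2)/2.
def tau(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

def f_sum(n):
    return sum(tau(d) for d in range(1, n + 1) if n % d == 0)

def triples(n):   # direct count: choose x | n, then y | n/x, z is determined
    return sum(1 for x in range(1, n + 1) if n % x == 0
                 for y in range(1, n + 1) if (n // x) % y == 0)

def f_closed(n):  # closed form from the prime factorization
    out, p = 1, 2
    while n > 1:
        t = 0
        while n % p == 0:
            n //= p
            t += 1
        if t:
            out *= (t + 1) * (t + 2) // 2
        p += 1
    return out

for n in range(1, 120):
    assert f_sum(n) == triples(n) == f_closed(n)
```

For example $n = 12 = 2^2 \cdot 3$ gives $\frac{3\cdot 4}{2}\cdot\frac{2\cdot 3}{2} = 18$.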
3,097,672
<p>I have to find the definite integral of this:</p> <p><span class="math-container">$$\int_2^3 \frac{dx}{(x^2-1)^{\frac{3}{2}}}$$</span></p> <p>So let's start with the indefinite integral:</p> <p>so <span class="math-container">$x = \sec \theta$</span> so <span class="math-container">$ dx = \sec \theta \tan \theta d \theta$</span></p> <p>So </p> <p><span class="math-container">$$ \frac{\sec{x} \tan{x}}{(\sec^2{x}-1)^{\frac{3}{2}}} $$</span></p> <p><span class="math-container">$$ = \int \frac{\sec{x} \tan{x}}{\tan x^{\frac{3}{2}}}$$</span></p> <p><span class="math-container">$$= \int \frac{\sec{x}\tan{x}}{\tan{x}^{\frac{1}{2}}}$$</span></p> <p>But now I'm stuck...</p> <p><strong>EDIT</strong></p> <p>Unstuck:</p> <p><span class="math-container">$$\int \frac{cos \theta}{sin^2 \theta} $$</span></p> <p>Let's use <span class="math-container">$u = sin \theta$</span></p> <p><span class="math-container">$$\int \frac{1}{u^2} du$$</span></p> <p><span class="math-container">$$ \frac{u^-1}{-1} + c$$</span></p> <p><span class="math-container">$$- \frac{1}{sin \theta} + c$$</span></p> <p>So given that <span class="math-container">$ x = sec \theta$</span>:</p> <p><span class="math-container">$$ - \frac{1}{\frac{\sqrt{x^2-1}}{x}}$$</span></p> <p><span class="math-container">$$- \frac{x}{\sqrt{x^2-1}}$$</span></p> <p>How does that look?</p>
J.G.
56,861
<p>You made a slight mistake: since <span class="math-container">$\sec^2\theta-1=\tan^2\theta$</span> you should have <span class="math-container">$\int\frac{\sec\theta\tan\theta d\theta}{\tan^3\theta}=\int\frac{\cos\theta d\theta}{\sin^2\theta}$</span>, now use <span class="math-container">$u=\sin\theta$</span>.</p>
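A quick numeric confirmation (my addition) that the resulting antiderivative $-x/\sqrt{x^2-1}$ reproduces the definite integral over $[2,3]$:

```python
from math import sqrt

# Claimed antiderivative of (x^2 - 1)^(-3/2).
def antideriv(x):
    return -x / sqrt(x**2 - 1)

def integrand(x):
    return (x**2 - 1) ** -1.5

# Midpoint rule on [2, 3] versus the fundamental theorem of calculus.
n, a, b = 20000, 2.0, 3.0
dx = (b - a) / n
approx = sum(integrand(a + (k + 0.5) * dx) for k in range(n)) * dx
exact = antideriv(b) - antideriv(a)   # = 2/sqrt(3) - 3/sqrt(8)
assert abs(approx - exact) < 1e-8
```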
2,624,498
<p>Evaluate $$\lim_{n \rightarrow\infty} \sqrt[n]{3^{n} +5^{n}}$$</p> <p>Attempt:</p> <p>The only sort of manipulation that has come to mind is: $$e^{\frac{1}{n}ln(e^{n\ln(3)} + e^{n\ln(5)})}$$</p> <p>So what is the trick to successfully evaluate this?</p>
Rene Schipperus
149,912
<p>Squeeze </p> <p>$$5\leq \sqrt[n]{3^n+5^n}\leq 5\sqrt[n]{2}$$</p>
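Numerically the squeeze is already visible for moderate $n$ (my addition); logarithms keep the computation inside float range for large $n$.

```python
import math

# 5 <= (3^n + 5^n)^(1/n) <= 5 * 2^(1/n); the upper bound tends to 5.
for n in (1, 10, 100, 1000):
    val = math.exp(math.log(3**n + 5**n) / n)   # log of a big int avoids overflow
    upper = 5 * 2 ** (1.0 / n)
    assert 5 - 1e-9 <= val <= upper + 1e-9
```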
2,624,498
<p>Evaluate $$\lim_{n \rightarrow\infty} \sqrt[n]{3^{n} +5^{n}}$$</p> <p>Attempt:</p> <p>The only sort of manipulation that has come to mind is: $$e^{\frac{1}{n}ln(e^{n\ln(3)} + e^{n\ln(5)})}$$</p> <p>So what is the trick to successfully evaluate this?</p>
user284331
284,331
<p>\begin{align*} \dfrac{1}{n}\cdot\log\left(\left(\dfrac{3}{5}\right)^{n}+1\right)\rightarrow 0, \end{align*} so \begin{align*} \left(\left(\dfrac{3}{5}\right)^{n}+1\right)^{1/n}\rightarrow e^{0}=1, \end{align*} and hence \begin{align*} \sqrt[n]{3^{n}+5^{n}}=5\left(\left(\dfrac{3}{5}\right)^{n}+1\right)^{1/n}\rightarrow 5. \end{align*}</p>
2,746,637
<p>We have $*$ the Hodge operator, and $d $ the exterior derivative. We define $\delta=\pm *d*$ and $\triangle=d\delta+\delta d $. <a href="https://rads.stackoverflow.com/amzn/click/0387908943" rel="nofollow noreferrer">Warner</a> (pp. 223) says that we have $$ \triangle (E^p (M))=d\delta (E^p (M))\oplus \delta d (E^p (M))=d (E^{p-1}(M))\oplus\delta (E^{p+1}(M)) $$ I understand why the first space is a subspace of the second, and why the second is of the third. My question is why are there inverse inclusions? </p>
Ricanry
451,947
<p><strong>Original answer</strong>:</p> <p>Since you know that "the first space is a subspace of the second, and why the second is of the third", it's sufficient to show that the first space is equal to the third. As in Hodge theory, we assume the manifold $M$ is compact and oriented.</p> <p>Set $\mathbb{H}^k(M)=\{\omega \in E^k(M): \Delta \omega =0\}$, which is the space of all harmonic $k$-forms on $M$. The first thing to note is that $\Delta \omega=0$ if and only if $d \omega = \delta \omega =0$: indeed $(\Delta \omega,\omega)=((d\delta+\delta d)\omega,\omega)=(d\omega,d\omega)+(\delta\omega,\delta\omega)=0$, and both of the latter terms are non-negative.</p> <p>Now for any $\omega \in \mathbb{H}^k,\ \eta \in E^{k-1}, \ \theta \in E^{k+1}$, we have $$(\omega,d\eta)=(\delta\omega,\eta)=0=(d\omega,\theta)=(\omega,\delta\theta),$$ since $d \omega = \delta \omega =0$, and $(d\eta,\delta\theta)=(d^2\eta,\theta)=0$. So these three spaces are mutually orthogonal. Take an $\omega$ orthogonal to $dE^{k-1} \oplus\delta E^{k+1}$; then $(\delta\omega,\eta)=(\omega,d\eta)=0$, which implies $\delta\omega=0$, and for the same reason $d\omega=0$, so $\omega$ lies in $\mathbb{H}^k$.</p> <p>It therefore suffices to show $E^k=\mathbb{H}^k \oplus dE^{k-1} \oplus\delta E^{k+1}$, which is the well-known Hodge decomposition. Briefly, since $\mathbb{H}^k$ is a linear subspace of the infinite-dimensional space $E^k$, there is a projection $\pi :E^k \to \mathbb{H}^k$. A theorem from PDE theory tells us that the equation $\Delta \eta =\omega_0$, which is an elliptic PDE of second order, has a solution whenever $\omega_0$ is orthogonal to the harmonic forms. (I read the details of this claim in a Chinese textbook so I cannot give you a reference, but it is easy to find in any PDE textbook.)</p> <p>We let $\eta_0$ be a solution to $\Delta\eta= \omega -\pi(\omega)$, for an arbitrary $\omega \in E^k$, and we can modify it by taking $\eta_1=\eta_0-\pi(\eta_0)$. It is easy to show $\eta_1$ is another solution to the equation, since $\Delta\pi=\pi\Delta=0$.
From the discussion above we obtain the <em>Green's operator</em> $$G:E^k \to (\mathbb{H}^k)^{\bot}=dE^{k-1} \oplus\delta E^{k+1}, \qquad \omega \mapsto \eta_1.$$ The Hodge decomposition is then given by $$\omega=\pi(\omega)+d\delta(G\omega)+\delta d(G\omega).$$</p>
3,442,862
<blockquote> <p>Let <span class="math-container">$F $</span> be a family of subsets of the set {<span class="math-container">$ 1, 2, ..., 2017 $</span>} such that for any <span class="math-container">$ A, B \in F $</span>, it holds that <span class="math-container">$A \cap B$</span> has exactly one element. Determine the maximum possible number of elements of <span class="math-container">$ F $</span>.</p> </blockquote> <p><strong>Solution:</strong> Generalization: if the ground set is <span class="math-container">$\{1, 2, ..., n\}$</span>, then the maximum of <span class="math-container">$|F|$</span> is <span class="math-container">$n$</span>. In the original problem <span class="math-container">$n=2017$</span>, so <span class="math-container">$\max|F|=\boxed{2017}$</span>.</p> <ol> <li><p>We claim that <span class="math-container">$|F| \leq n$</span>. Consider a map from a subset <span class="math-container">$A$</span> of <span class="math-container">$\{1, 2, ..., n\}$</span> to an <span class="math-container">$n$</span>-dimensional vector <span class="math-container">$V=(v_1, v_2, ..., v_n)^T$</span>. For <span class="math-container">$1 \leq i \leq n$</span>, if <span class="math-container">$i \in A$</span> then <span class="math-container">$v_i=1$</span>, else <span class="math-container">$v_i=0$</span>. Consider the vector set mapped from <span class="math-container">$F$</span>: <span class="math-container">$\{V_1, V_2, ..., V_m\}$</span>. We can prove that these vectors are linearly independent. For <span class="math-container">$i \ne j$</span>, <span class="math-container">$&lt;V_i, V_j&gt;=V_i^TV_j=1$</span>, since there is exactly one element included in any two elements of <span class="math-container">$F$</span>.
For <span class="math-container">$i = j$</span>, <span class="math-container">$&lt;V_i, V_i&gt;=|V_i| \geq 1$</span>, where <span class="math-container">$|V_i|$</span> counts the number of <span class="math-container">$1$</span>s appearing in <span class="math-container">$V_i$</span>. Consider <span class="math-container">$S=\sum_{k=1}^{m} a_kV_k$</span>: if <span class="math-container">$S=(0,0,...,0)^T$</span>, then <span class="math-container">$&lt;S, S&gt;=0$</span>. However, <span class="math-container">$&lt;S, S&gt;=\sum_{i=1}^{m} a_i^2&lt;V_i, V_i&gt;+\sum_{i &lt; j} 2a_ia_j&lt;V_i, V_j&gt;=(\sum_{i=1}^{m} a_i)^2+\sum_{i=1}^{m} a_i^2(|V_i|-1) \geq 0$</span>. So equality holds only when every <span class="math-container">$a_i=0$</span>, which means <span class="math-container">$\{V_1, V_2, ..., V_m\}$</span> are linearly independent, so that <span class="math-container">$m \leq n$</span> and hence <span class="math-container">$|F| \leq n$</span>.</p></li> <li><p>A construction achieving <span class="math-container">$|F|=n$</span>: consider <span class="math-container">$F=\{\{a,n\} \mid 1 \leq a \leq n-1\} \cup\{\{n\}\}$</span>. Then <span class="math-container">$|F|=n$</span>, and for any <span class="math-container">$ A, B \in F$</span>, <span class="math-container">$A \cap B=\{n\}$</span>.</p></li> </ol> <p>I didn't understand the logic of this solution.</p>
Community
-1
<p>We can rewrite the argument as follows.</p> <p>(1). Each set in <span class="math-container">$F$</span> can be represented as a vector. For example, <span class="math-container">$\{2,3,6\}$</span> is represented by <span class="math-container">$$(0,1,1,0,0,1,0,0,...)$$</span> where a &quot;<span class="math-container">$1$</span>&quot; in the <span class="math-container">$i$</span>th position indicates that the set contains &quot;<span class="math-container">$i$</span>&quot;.</p> <blockquote> <p>If <span class="math-container">$|F|=m$</span>, then we now have <span class="math-container">$m$</span> such vectors <span class="math-container">$$V_1,V_2,V_3,...V_m.$$</span> The crucial fact is that for <strong>any</strong> pair of these vectors, there is one and only one position where they both have a &quot;<span class="math-container">$1$</span>&quot;.</p> </blockquote> <p>Now suppose we could find numbers <span class="math-container">$a_1,a_2,a_3,...a_m$</span> such that <span class="math-container">$$W=\sum_1^m a_iV_i=0.$$</span></p> <blockquote> <p>Then the scalar product of <span class="math-container">$W$</span> with itself is, of course, zero.</p> <p>We also know that if <span class="math-container">$i\ne j$</span>, then <span class="math-container">$V_i.V_j=1,$</span> whereas <span class="math-container">$V_i.V_i$</span> is the number of &quot;<span class="math-container">$1$</span>&quot;s in <span class="math-container">$V_i$</span> which we can denote by <span class="math-container">$||V_i||$</span>. Note that each <span class="math-container">$||V_i||\ge 1$</span> and only one <span class="math-container">$||V_i||$</span> can equal <span class="math-container">$1$</span>.</p> </blockquote> <p>The scalar product of <span class="math-container">$W$</span> with itself can be considered to be the sum of lots of scalar products of the form <span class="math-container">$a_iV_i.a_jV_j$</span>. 
Summing these we obtain <span class="math-container">$$0=\sum_1^m a_i^2||V_i||+\sum_{i\ne j}a_ia_j=\sum_1^m a_i^2(||V_i||-1)+\sum_1^m a_i^2+\sum_{i\ne j}a_ia_j$$</span></p> <p><span class="math-container">$$=\sum_1^m a_i^2(||V_i||-1)+(a_1+a_2+... +a_m)^2.$$</span></p> <blockquote> <p>The only possibility is that each <span class="math-container">$a_i=0$</span>. The vectors <span class="math-container">$V_i$</span> are therefore linearly independent and therefore there can be at most <span class="math-container">$n$</span> of them i.e. <span class="math-container">$|F|\le n$</span>.</p> </blockquote> <p>(2). The upper bound of <span class="math-container">$n$</span> can be attained since these <span class="math-container">$n$</span> sets satisfy the conditions:- <span class="math-container">$$\{1,n\},\{2,n\},\{3,n\},...\{n-1,n\},\{n\}.$$</span></p>
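The construction and the independence argument can be checked directly for a small $n$ (my addition): the incidence vectors of $\{1,n\},\dots,\{n-1,n\},\{n\}$ pairwise dot to $1$ and have full rank, so $|F|=n$ is attained.

```python
from fractions import Fraction

n = 6
family = [{a, n} for a in range(1, n)] + [{n}]
vecs = [[1 if i + 1 in s else 0 for i in range(n)] for s in family]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
for i in range(n):
    for j in range(i + 1, n):
        assert dot(vecs[i], vecs[j]) == 1   # any two sets share exactly one element

# Gaussian elimination over the rationals to confirm the vectors span R^n.
M = [[Fraction(x) for x in row] for row in vecs]
rank = 0
for col in range(n):
    piv = next((r for r in range(rank, n) if M[r][col] != 0), None)
    if piv is None:
        continue
    M[rank], M[piv] = M[piv], M[rank]
    for r in range(n):
        if r != rank and M[r][col] != 0:
            factor = M[r][col] / M[rank][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
    rank += 1
assert rank == n
```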
1,028,371
<p>I have been trying to prove this, but I am having trouble understanding how to prove the following mapping I found is injective and surjective. Just as a side note, I am trying to show the complex ring is isomorphic to special $2\times2$ matrices in regard to matrix multiplication and addition. Showing these hold is simple enough.</p> <p>$$\phi:a+bi \rightarrow \begin{pmatrix} a &amp; -b \\ b &amp; a \end{pmatrix}$$</p> <p>This is what I have so far:</p> <p>Injective: I am also confused over the fact that there are two operations, and in turn two neutral elements (1 and 0). Showing that the kernel is trivial is usually the way I go about proving whether a mapping is injective, but I just can't grasp this.</p> <p>$$ \phi(z_1) = \phi(z_2) \implies \phi(z_1)\phi(z_2)^{-1} = I = \phi(z_1)\phi(z_2^{-1}) = \phi(z_2)\phi(z_2^{-1}).$$</p> <p>So if we can just show that the kernel of $\phi$ is trivial, then it also shows that $z_1 = z_2$. The only complex number that maps to the identity matrix is one where $a = 1$ and $b = 0$, $a + bi = 1 + 0i = 1$.</p> <p>Using a similar argument for addition we can just say that the only complex number $z$ such that $\phi(z) = 0\text{-matrix}$, is one where $a=0$ and $b=0$, $a+bi=0+0i=0$. </p> <p>Surjective:</p> <p>I forgot to add this before I posted, but I honestly don't really understand how to prove this because it just seems so obvious. All possible $2\times2$ matrices of that form have a complex representation because the complex number can always be identified by its real parts and since the elements of the $2\times2$ matrix are real then the mapping is obviously onto.</p> <p>I have always had trouble understanding when I can say that I have "rigorously" proved something, so any help would be appreciated! </p>
anomaly
156,999
<p>The map is bijective by the argument you give; the trick is showing that it respects addition and (especially) multiplication. Here's another way of doing so: Consider $\mathbb{C}$ as a $2$-dimensional real vector space with basis $\{1, i\}$. You then have a map $f:\mathbb{C} \to M_2(\mathbb{R})$ (i.e., $2\times 2$ real matrices) defined by $f(z)w = zw$ (with respect to this basis). It's clear that this map preserves addition and multiplication. If you write out $f$ explicitly, it's exactly the map $a + bi \to \begin{pmatrix} a &amp; b\\-b &amp; a\end{pmatrix}$ you describe (at least modulo a sign flip).</p>
1,028,371
<p>I have been trying to prove this, but I am having trouble understanding how to prove the following mapping I found is injective and surjective. Just as a side note, I am trying to show the complex ring is isomorphic to special $2\times2$ matrices in regard to matrix multiplication and addition. Showing these hold is simple enough.</p> <p>$$\phi:a+bi \rightarrow \begin{pmatrix} a &amp; -b \\ b &amp; a \end{pmatrix}$$</p> <p>This is what I have so far:</p> <p>Injective: I am also confused over the fact that there are two operations, and in turn two neutral elements (1 and 0). Showing that the kernel is trivial is usually the way I go about proving whether a mapping is injective, but I just can't grasp this.</p> <p>$$ \phi(z_1) = \phi(z_2) \implies \phi(z_1)\phi(z_2)^{-1} = I = \phi(z_1)\phi(z_2^{-1}) = \phi(z_2)\phi(z_2^{-1}).$$</p> <p>So if we can just show that the kernel of $\phi$ is trivial, then it also shows that $z_1 = z_2$. The only complex number that maps to the identity matrix is one where $a = 1$ and $b = 0$, $a + bi = 1 + 0i = 1$.</p> <p>Using a similar argument for addition we can just say that the only complex number $z$ such that $\phi(z) = 0\text{-matrix}$, is one where $a=0$ and $b=0$, $a+bi=0+0i=0$. </p> <p>Surjective:</p> <p>I forgot to add this before I posted, but I honestly don't really understand how to prove this because it just seems so obvious. All possible $2\times2$ matrices of that form have a complex representation because the complex number can always be identified by its real parts and since the elements of the $2\times2$ matrix are real then the mapping is obviously onto.</p> <p>I have always had trouble understanding when I can say that I have "rigorously" proved something, so any help would be appreciated! </p>
sunspots
110,953
<p>Consider the ordered basis <span class="math-container">$\beta=\{1,i\},$</span> as in the answer by anomaly. Then, the range <span class="math-container">$$R(\phi) = span(\phi(\beta)) = span(\{\phi(1, 0), \phi(0, 1)\}) = span(\{ \left (\begin{array}{cc} 1 &amp; 0 \\ 0 &amp; 1 \end{array}\right ), \left (\begin{array}{cc} 0 &amp; -1 \\ 1 &amp; \phantom{-} 0 \end{array}\right )\}),$$</span> is equal to the codomain <span class="math-container">$\{ \left (\begin{array}{cc} a &amp; -b \\ b &amp; a \end{array}\right ) : a,b \in \mathbb{R}\}.$</span> Hence, the linear transformation is surjective.</p> <p>Now, if we apply the dimension theorem, then we observe that the null space is zero-dimensional. This implies the linear transformation is injective. Thus, we conclude it is an isomorphism.</p> <p><strong>Alternatively</strong>, if we find ordered bases <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> for the domain and codomain, then the linear transformation <span class="math-container">$\phi$</span> is invertible if and only if the matrix representation <span class="math-container">$[\phi]_{\beta}^{\gamma}$</span> is invertible.</p> <p>Using <span class="math-container">$\beta,$</span> as above, and <span class="math-container">$\gamma = \{ \left (\begin{array}{cc} 1 &amp; 0 \\ 0 &amp; 1 \end{array}\right ), \left (\begin{array}{cc} 0 &amp; -1 \\ 1 &amp; \phantom{-} 0 \end{array}\right )\},$</span> then we have <span class="math-container">$$[\phi]_{\beta}^{\gamma} = \left (\begin{array}{cc} 1 &amp; 0 \\ 0 &amp; 1 \end{array}\right ),$$</span> since <span class="math-container">$\phi(1, 0) = 1 \cdot \left (\begin{array}{cc} 1 &amp; 0 \\ 0 &amp; 1 \end{array}\right ) + 0 \cdot \left (\begin{array}{cc} 0 &amp; -1 \\ 1 &amp; \phantom{-} 0 \end{array}\right )$</span> and <span class="math-container">$\phi(0, 1) = 0 \cdot \left (\begin{array}{cc} 1 &amp; 0 \\ 0 &amp; 1 \end{array}\right ) + 1 \cdot \left (\begin{array}{cc} 0 
&amp; -1 \\ 1 &amp; \phantom{-} 0 \end{array}\right ).$</span> Clearly, <span class="math-container">$[\phi]_{\beta}^{\gamma}$</span> is invertible.</p>
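A small numeric spot-check (my addition, using plain tuples rather than a matrix library) that $\phi$ respects both operations, $\phi(z+w)=\phi(z)+\phi(w)$ and $\phi(zw)=\phi(z)\phi(w)$:

```python
# phi sends a + bi to the 2x2 matrix ((a, -b), (b, a)), stored as nested tuples.
def phi(z):
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def madd(M, N):
    return tuple(tuple(m + n for m, n in zip(mr, nr)) for mr, nr in zip(M, N))

def mmul(M, N):
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

z, w = 2 + 3j, -1 + 4j
assert phi(z + w) == madd(phi(z), phi(w))
assert phi(z * w) == mmul(phi(z), phi(w))
```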
3,419,276
<p>I'm reading about the directional derivative:</p> <blockquote> <p>Let <span class="math-container">$(E,\|\cdot\|)$</span> and <span class="math-container">$(F,\|\cdot\|)$</span> be Banach spaces over the field <span class="math-container">$\mathbb{K}$</span>, and <span class="math-container">$X$</span> an open subset of <span class="math-container">$E$</span>. A function <span class="math-container">$f: X \rightarrow F$</span> is differentiable at <span class="math-container">$a \in X$</span> if there is <span class="math-container">$A \in \mathcal{L}(E, F)$</span> such that <span class="math-container">$$f(x)=f\left(a\right)+A\left(x-a\right)+o\left(\left\|x-a\right\|\right) \quad\left(x \rightarrow a\right)$$</span></p> </blockquote> <p>The author continues to define the directional derivative:</p> <blockquote> <p><a href="https://i.stack.imgur.com/pmF7C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmF7C.png" alt="enter image description here"></a></p> </blockquote> <p>and prove a proposition:</p> <blockquote> <p><a href="https://i.stack.imgur.com/xWPS6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWPS6.png" alt="enter image description here"></a></p> </blockquote> <p>We define <span class="math-container">$g:\mathbb K \rightarrow F, \quad t \mapsto f\left(x_{0}+t v\right)$</span>. Our goal is to find the derivative of <span class="math-container">$g$</span> at <span class="math-container">$t=0$</span>.</p> <p>Because <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>, <span class="math-container">$$ f(x)=f\left(x_{0}\right)+\partial f\left(x_{0}\right)\left(x-x_{0}\right) + o(\left\|x-x_{0}\right\|) \quad (x \to x_0)$$</span> for all <span class="math-container">$x \in X$</span>. 
It follows that <span class="math-container">$$\begin{aligned}g(t) &amp;= f(x_0+tv) \\ &amp;=f\left(x_{0}\right)+\partial f\left(x_{0}\right)\left((x_0+tv)-x_{0}\right)+o(\left\|(x_0+tv)-x_{0}\right\|) \quad ((x_0+tv) \to x_0)\\ &amp;= g\left(0 \right)+\partial f\left(x_{0}\right)\left(tv\right)+o(\left\|tv\right\|) \quad (t \to 0) \\&amp;= g\left(0\right)+ \partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) + o(\left\|t\right\|) \quad (t \to 0) \end{aligned}$$</span></p> <p>Let <span class="math-container">$\partial g(0) \in \mathcal{L}(\mathbb K, F)$</span> be the derivative of <span class="math-container">$g$</span> at <span class="math-container">$t=0$</span>. It follows that <span class="math-container">$$g(t)= g(0) + \partial g(0)(t-0) + o(\left\|t\right\|) \quad (t \to 0)$$</span></p> <p>To sum up, we have <span class="math-container">$$\begin{aligned}g(t) &amp;=g\left(0\right)+ \partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) + o(\left\|t\right\|) \quad (t \to 0)\\ &amp;= g(0) + \partial g(0)(t-0) + o(\left\|t\right\|) \quad (t \to 0)\end{aligned}$$</span></p> <p>Hence <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) = \partial g(0)(t-0)$</span> and thus <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t = \partial g(0)(t)$</span>. Because <span class="math-container">$(t)$</span> on the right-hand side is just the input of the function <span class="math-container">$\partial g(0)$</span>, we have <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t = \partial g(0)$</span>.</p> <p>In my understanding, <span class="math-container">$\partial g(0)(t)$</span> denotes the value of the function <span class="math-container">$\partial g(0)$</span> at <span class="math-container">$t$</span>.
On the contrary, <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t$</span> denotes the product of <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right)$</span> and <span class="math-container">$t$</span>.</p> <p><strong>My question:</strong></p> <p>I could not understand why the answer given in my textbook is just <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right)$</span>, which lacks the <span class="math-container">$t$</span>.</p> <p>Could you please elaborate on this point?</p>
user
505,767
<p>Recall that by <a href="https://en.wikipedia.org/wiki/Logarithm" rel="nofollow noreferrer">definition of logarithm</a> with <span class="math-container">$u&gt;0$</span> and <span class="math-container">$b&gt;0,\neq 1$</span> we have</p> <p><span class="math-container">$$v=\log_b u \iff b^v=u$$</span></p> <p>therefore by definition</p> <p><span class="math-container">$$ b^{\log_b u}=b^v=u$$</span></p> <p>As a concrete simple example, we have that</p> <p><span class="math-container">$$\log_2 8 =3$$</span></p> <p>then</p> <p><span class="math-container">$$2^{\log_2 8}=2^3=8$$</span></p>
1,984,178
<p>I have a problem with the following exercise:</p> <p>We have the operator $T: l^1 \to l^1$ given by</p> <p>$$T(x_1,x_2,x_3,\dots)=\left(\left(1-\frac11\right)x_1, \left(1-\frac12\right)x_2, \dots\right)$$ for $(x_1,x_2,x_3,\dots)$ in $l^1$. Showing that this operator is bounded is easy, but I am really struggling to show that the norm $\|T\| = 1$.</p> <p>I know that for bounded operators the norm is defined as $\|T\|=\sup{\left\{\|T(x)\|: \|x\| \le 1\right\}}$.</p> <p>I am also wondering if there exists an $x$ in $l^1$ such that $\|x\|=1$ and $\|T(x)\|= \|T\|$.</p> <p>Thank you! :)</p>
Community
-1
<p>$$\left|\left| T((x_n)_{n\in\mathbb{N}})\right|\right|_{\ell_1}=\left|\left| \left(\left(1-\frac{1}{n}\right)x_n\right)_{n\in\mathbb{N}}\right|\right|_{\ell_1}=\sum_j \left|\left(1-\frac{1}{j}\right)x_j\right|\leqslant \sum_j |x_j |=||(x_n )_{n\in\mathbb{N}} ||_{\ell_1}$$</p> <p>hence </p> <p>$$||T||\leqslant 1$$</p> <p>but $$||T||\geqslant \sup_j ||Te_j || =\sup_j \left(1-\frac{1}{j}\right) =1$$</p> <p>Thus $$||T||=1.$$</p>
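A numeric illustration of the two halves of the argument (my addition): the values $\|Te_j\|_1 = 1-1/j$ approach $1$, while a sample $x$ with $\|x\|_1=1$ gives $\|Tx\|_1<1$, suggesting the norm is not attained.

```python
# ||T x||_1 for a finitely supported sequence x = (x_1, x_2, ...),
# where T scales the j-th coordinate by 1 - 1/j.
def T_norm1(x):
    return sum(abs((1 - 1.0 / (j + 1)) * v) for j, v in enumerate(x))

# ||T e_j||_1 = 1 - 1/j tends to 1 as j grows.
for j in (1, 10, 100, 10000):
    e_j = [0.0] * j
    e_j[j - 1] = 1.0
    assert abs(T_norm1(e_j) - (1 - 1.0 / j)) < 1e-12

x = [0.25, 0.25, 0.25, 0.25]        # ||x||_1 = 1
assert T_norm1(x) < 1.0             # strictly below ||T|| = 1
```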
2,007,173
<p>I've managed to severely confuse myself in my attempts to simplify this seemingly straightforward expression: $$ \arctan(\cot(\alpha)),\quad\text{with $0&lt;\alpha\leq\pi$.} $$ It seems like maybe there are some issues with domain, as $\cot$ and $\tan$ are defined on $(0,\pi)$ and $(-\frac{\pi}{2},\frac{\pi}{2})$, respectively. However, $\arctan$ <em>is</em> able to "deal" with the fact that $\cot(\pi)=-\infty$, since its limit exists at negative infinity.</p> <p>What method should I be using to simplify this? I believe that I should be getting $\frac{\pi}{2}-\alpha$, but I'm not sure how to show it symbolically.</p>
Jed
281,031
<p>Draw a right triangle with one of the angles being $\alpha$. Label the leg adjacent to the angle $\alpha$ to be of length $x$ and the opposite leg to be of length 1. This way, $$\cot(\alpha)=x$$ Now let's investigate $\arctan(x)$. Since the third angle of this right triangle measures $\frac{\pi}{2}-\alpha$, you know that $\tan(\frac{\pi}{2}-\alpha)=x$, and it follows that $\arctan(x)=\frac{\pi}{2}-\alpha$.</p> <p>Note: The triangle argument requires $0\lt\alpha\lt\frac{\pi}{2}$, but the resulting identity $\arctan(\cot\alpha)=\frac{\pi}{2}-\alpha$ in fact holds for all $0\lt\alpha\lt\pi$ (at $\alpha=\frac{\pi}{2}$ both sides equal $0$). It fails only at $\alpha=k\pi$, where $\cot$ is undefined, and outside $(0,\pi)$, where $\frac{\pi}{2}-\alpha$ leaves the range of $\arctan$.</p>
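A numeric check over a grid (my addition) confirms the identity $\arctan(\cot\alpha)=\frac{\pi}{2}-\alpha$ on the open interval $(0,\pi)$:

```python
import math

# cot is computed as cos/sin, which stays finite on (0, pi).
for k in range(1, 200):
    alpha = k * math.pi / 200
    lhs = math.atan(math.cos(alpha) / math.sin(alpha))
    assert abs(lhs - (math.pi / 2 - alpha)) < 1e-9
```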
1,662,226
<p>Find a sufficient statistic for $σ^2$ with $μ$ known, where $X_i$ is a random sample from $N(μ,σ^2)$</p> <p>I was able to find a sufficient statistic for $μ$ with $σ^2$ known, but I'm stuck on finding one for $σ^2$ when $μ$ is known. Can anyone give me some help? </p> <p>I was using the factorization method before, is this the best way?</p>
parsiad
64,601
<p>Though squaring each candidate and checking works equally well, here's another way to figure it out: $$ \sqrt{0.016}=\frac{\sqrt{1000}}{\sqrt{1000}}\sqrt{0.016}=\frac{\sqrt{16}}{\sqrt{1000}}=\frac{4}{\sqrt{1000}}\approx\frac{4}{\sqrt{1024}}=\frac{4}{\sqrt{2^{10}}}=\frac{4}{2^{5}}=\frac{4}{32}=\frac{1}{8}=0.125 $$</p>
13,951
<p>Let $f$ and $g$ be two real-valued functions. I have asked many students what the derivative $(fg)'$ is, and they answered: it is $f' \cdot g'$. Why do most people (students) guess that?</p>
Benoît Kloeckner
187
<p>As mentioned, a probable cause is an implicit reasoning as if every operation were a homomorphism (similar to the implicit reasoning by linearity). Similar errors include $\ln(x+y) = \ln(x)+\ln(y)$, $e^{xy}=e^x e^y$, $\int f(x)g(x) dx = \int f(x) dx \int g(x) dx$, etc.</p> <p>Such reasoning by (very loose) analogy can be caused by not understanding that whenever some similar rules do hold, they do for a reason. In particular, it is important to actually explain why $e^{x+y}=e^x e^y$ (at least when $x,y\in\mathbb{N}$), why $(f+g)'=f'+g'$, etc. So the answer to give to the error you mention implies discussing <em>other</em> formulas.</p> <p>One thing that one can also do is to show how erroneous formulas lead to obviously wrong conclusions. For example, if it were true that $(fg)'=f'g'$, then taking $f=1$ would lead to $g'=0$ whatever $g$ actually is.</p>
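A concrete numerical counterexample (my addition) makes the error tangible: with $f(x)=x^2$ and $g(x)=x^3$, the false rule gives $f'g'=6x^3$ while the true derivative is $(fg)'=5x^4$; the two already disagree at $x=1$.

```python
# Central-difference numerical derivative.
def num_deriv(h, x, eps=1e-6):
    return (h(x + eps) - h(x - eps)) / (2 * eps)

f = lambda x: x**2
g = lambda x: x**3
x0 = 1.0
lhs = num_deriv(lambda x: f(x) * g(x), x0)      # (fg)'(1) = 5
rhs = num_deriv(f, x0) * num_deriv(g, x0)       # f'(1) g'(1) = 6
assert abs(lhs - 5.0) < 1e-4 and abs(rhs - 6.0) < 1e-4

# The product rule does match:
correct = num_deriv(f, x0) * g(x0) + f(x0) * num_deriv(g, x0)
assert abs(lhs - correct) < 1e-4
```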