qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
4,167,747 | <p>I need to prove the following statement: If <span class="math-container">$1+ \alpha = \alpha$</span>, <span class="math-container">$\alpha$</span> is an infinite ordinal.</p>
<p>I am trying to use the Cantor–Bernstein theorem (CBS) to show that if <span class="math-container">$1+ \alpha \leq \alpha$</span>, i.e., there is an injection from <span class="math-container">$1+ \alpha$</span> to <span class="math-container">$\alpha$</span>, then <span class="math-container">$\alpha$</span> must be an infinite ordinal.</p>
<p>Is this the right approach? I feel like this statement can be proven using a simpler approach, like induction, but I can't seem to think of one.</p>
<p>Also, does the converse always hold?</p>
<p>Thank you in advance.</p>
| Asaf Karagila | 622 | <p>Using Cantor–Bernstein is not the right tool here. You're not trying to prove that cardinalities are equal, but rather that the ordinals are equal. For this you need more than a bijection. You need an order isomorphism.</p>
<p>One way to simplify this is to remember that <span class="math-container">$1+\alpha=1+\omega+\beta$</span> for some <span class="math-container">$\beta$</span> such that <span class="math-container">$\omega+\beta=\alpha$</span>, assuming that <span class="math-container">$\alpha$</span> is infinite.</p>
<p>So it is enough to show that for <span class="math-container">$\omega$</span>, <span class="math-container">$1+\omega=\omega$</span>. But that's fairly straightforward.</p>
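<p>For completeness, a sketch of that last step (using the standard fact that ordinal addition is continuous in its right argument): <span class="math-container">$$1+\omega=\sup_{n&lt;\omega}(1+n)=\sup_{n&lt;\omega}(n+1)=\omega,$$</span> and concretely the map sending the new least element of <span class="math-container">$1+\omega$</span> to <span class="math-container">$0$</span> and each <span class="math-container">$n$</span> to <span class="math-container">$n+1$</span> is an order isomorphism onto <span class="math-container">$\omega$</span>.</p>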
|
125,610 | <p>I have question about sets. I need to prove that: $$X \cap (Y - Z) = (X \cap Y) - (X \cap Z)$$</p>
<p>Now, I tried to prove that from both sides of the equation but had no luck.</p>
<p>For example, I tried to do something like this: $$X \cap (Y - Z) = X \cap (Y \cap Z')$$ but now I don't know how to continue.</p>
<p>From the other side of the equation I tried to do something like this: $$(X \cap Y) - (X \cap Z) = (X \cap Y) \cap (X \cap Z)' = (X \cap Y) \cap (X' \cup Z')$$ and from here I don't know what to do again.</p>
<p>I will be glad to hear how should I continue from here and what I did wrong. Thanks in advance.</p>
| Rudy the Reindeer | 5,798 | <p>In general, to show equality of sets $A = B$ you show $A \subset B$ and $B \subset A$.</p>
<p>To show $X \cap (Y - Z) \subset (X \cap Y) - (X \cap Z)$ assume $a \in X \cap (Y -Z)$. Then $a \in X$ and $a \in Y - Z \subset Y$. Hence $a \in X \cap Y$. Also, $a \notin Z$, hence $a \notin X \cap Z$, and hence $a \in (X \cap Y) - (X \cap Z)$.</p>
<p>Now try to do the other direction. </p>
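<p>As a quick sanity check before writing the proof (an illustrative Python sketch on random small sets, not a proof):</p>

```python
import random

def check_identity(trials=1000):
    # Spot-check X ∩ (Y - Z) == (X ∩ Y) - (X ∩ Z) on random subsets of {0,...,9}.
    universe = range(10)
    for _ in range(trials):
        X = {e for e in universe if random.random() < 0.5}
        Y = {e for e in universe if random.random() < 0.5}
        Z = {e for e in universe if random.random() < 0.5}
        # In Python, "-" is set difference and "&" is intersection.
        if X & (Y - Z) != (X & Y) - (X & Z):
            return False
    return True

print(check_identity())  # → True
```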
|
374,105 | <p>Do there exist, on the hyperbolic plane, a convex quadrilateral $Q$ and a convex pentagon $P$ with the same angle sum? I found this question to be rather interesting.</p>
| tessellation | 71,044 | <p>If you know about ideal polygons then this is easy, because every ideal polygon has angle sum zero; and if you want proper (finite) vertices, perturbing an ideal quadrilateral and an ideal pentagon slightly yields convex examples whose angle sums both equal any common, sufficiently small positive value.</p>
|
16,802 | <p>In an attempt to squeeze more plots and controls into the limited space for a demo UI, I am trying to remove any extra white spaces I see.</p>
<p>I am not sure what options to use to reduce the amount of space between the ticks labels and the actual text that represent the labels on the axes.</p>
<p>Here is a small <code>Plot</code> example using <code>Frame->True</code> (I put an outside <code>Frame</code> as well, just for illustration, it is not part of the problem here)</p>
<pre><code>Framed[
Plot[Sin[x], {x, -Pi, Pi},
Frame -> True,
FrameLabel -> {{Sin[x], None}, {x,
Row[{"This is a plot of ", Sin[x]}]}},
ImagePadding -> {{55, 10}, {35, 20}},
ImageMargins -> 0,
FrameTicksStyle -> 10,
RotateLabel -> False
],
FrameMargins -> 0
]
</code></pre>
<p><img src="https://i.stack.imgur.com/JRzf3.png" alt="Mathematica graphics"></p>
<p>Is there an option or method to control this distance?</p>
<p>Notice that <code>ImagePadding</code> affects distance below the frame label, and not between the frame label and the ticks. Hence changing <code>ImagePadding</code> will not help here.</p>
<p>Depending on the plot and other things, this space can be more than it should be. The above is just a small example I made up. Here is a small part of a UI, and I think the space between the <code>t(sec)</code> and the ticks is too large. I'd like to reduce it by a few pixels. I also might like to push the top label down closer to the plot by a few pixels.</p>
<p><img src="https://i.stack.imgur.com/jruyf.png" alt="Mathematica graphics"></p>
<p>I am using V9 on Windows.</p>
<p><strong>update 12/22/12</strong></p>
<p>Using the <code>Labeled</code> solution by @kguler below is a good approach; one just needs to be a little careful with the typesetting of the labels. <code>Plot</code> automatically typesets labels as <code>Text</code> in <code>TraditionalForm</code>, which is a nice feature. To get the same result when using <code>Labeled</code> one must do this manually, using <code>TraditionalForm</code> and <code>Text</code> as well.</p>
<p>Here is example to show the difference</p>
<p><strong>1)</strong> Labeled used just with <code>TraditionalForm</code>. The left one uses <code>Plot</code> and the right one uses <code>Labeled</code> with <code>TraditionalForm</code>. Notice the difference in how labels look.</p>
<pre><code>Grid[{
{
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True,
FrameLabel -> {{Sin[x], None}, {x, E Tan[x] Sin[x]}},
ImageSize -> 300, FrameTicksStyle -> 10, FrameStyle -> 16,RotateLabel -> False],
Labeled[
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left, Bottom, Top},
Spacings -> {0, 0, 0}, LabelStyle -> "Panel"]
}
}, Frame -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/MXRlR.png" alt="Mathematica graphics"></p>
<p><strong>2)</strong> Now we do the same; we just need to add <code>Text</code> to get the same result as <code>Plot</code>.</p>
<pre><code>Grid[{
{
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, FrameTicksStyle -> 10,
FrameStyle -> 16,
FrameLabel -> {{Sin[x], None}, {x, E Tan[x] Sin[x]}},
ImageSize -> 300, RotateLabel -> False],
Labeled[
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {0, 0, 0}, LabelStyle -> "Panel"]
}
}, Frame -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/CHhZF.png" alt="Mathematica graphics"></p>
<p><strong>Update 12/22/12 (2)</strong></p>
<p>There is a big problem with controlling the spacing.</p>
<p><code>Labeled</code> spacing only seems to work for horizontal and vertical spacing taken together.</p>
<p>That is, one can't control the spacing on each side of the plot separately. Here is an example where I tried to move the bottom axis label up; this ended up moving the top label down as well, which is not what I want.</p>
<pre><code>Labeled[Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {-.2, -0.7}]
</code></pre>
<p><img src="https://i.stack.imgur.com/q0Wdo.png" alt="Mathematica graphics"></p>
<p>I will see if there is a way to control each side's spacing on its own. Trying <code>Spacings->{-0.2,{-0.7,0}}</code> does not work; it seems to take the zero in this case and ignore the <code>-0.7</code>.</p>
<p>This gives the same result as above:</p>
<pre><code>Labeled[Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {-.2, -0.7, .0}]
</code></pre>
<p><img src="https://i.stack.imgur.com/6JCgC.png" alt="Mathematica graphics"></p>
<p>P.S. There might be a way to specify the spacing for each side with some tricky syntax. I have not figured it out yet. Still trying things...
<a href="http://reference.wolfram.com/mathematica/ref/Spacings.html" rel="noreferrer">http://reference.wolfram.com/mathematica/ref/Spacings.html</a></p>
<p><strong>update 12/22/12 (3)</strong>
Using a combination of <code>ImagePadding</code> and <code>Spacings</code> should have worked, but for some reason the top label is now cut off. Please see the screenshot. Using V9 on Windows.</p>
<p><img src="https://i.stack.imgur.com/kbT3G.png" alt="enter image description here"></p>
<p>Note: The above seems to be related to the issue reported here:
<a href="https://mathematica.stackexchange.com/questions/16248/some-graphics-output-do-not-fully-render-on-the-screen-until-an-extra-click-is-m">some Graphics output do not fully render on the screen until an extra click is made into the notebook</a></p>
<p>An extra click inside the notebook is needed; then the label becomes un-chopped!</p>
| Emilio Pisanty | 1,000 | <p>This can also be achieved by</p>
<ul>
<li>encasing the graphic inside a <code>Show</code>,</li>
<li>setting the outer <code>Show</code>'s <code>PlotRangeClipping</code> to <code>False</code>, and</li>
<li>adding the labels as <code>Text</code> commands inside an <a href="http://reference.wolfram.com/language/ref/Epilog.html" rel="noreferrer"><code>Epilog</code></a>.</li>
</ul>
<p>Thus, for your example, you would do</p>
<pre><code>Framed[
Show[
Plot[Sin[x], {x, -Pi, Pi}
, Frame -> True
, FrameLabel -> {{None, None}, {None,
Row[{"This is a plot of ", Sin[x]}]}}
, ImagePadding -> {{55, 10}, {35, 20}}
, ImageMargins -> 0
, FrameTicksStyle -> 10
, RotateLabel -> False
]
, PlotRangeClipping -> False
, Epilog -> {
Text[x, {0, -1.4}],
Text[Sin[x], {-4.12, 0}]
}
]
, FrameMargins -> 0
]
</code></pre>
<p>which produces</p>
<p><img src="https://i.stack.imgur.com/jFV7G.png" alt="Mathematica graphics"></p>
<p>You can then freely place the label text anywhere within the image box by adjusting the <code>Text</code> coordinates.</p>
<p>This has the advantage that (as far as I can tell, on v10.1.0) the resulting styling is identical to that produced by setting the labels via <code>AxesLabel</code> or <code>FrameLabel</code>; if you want to style them directly then you can do that as well.</p>
<p>Further, this produces an actual <code>Graphics</code> object, as opposed to an object with a <code>Labeled</code> head, which can be advantageous for manipulating and exporting the resulting plot.</p>
<p>Due credit to David Park's answer on <a href="https://groups.google.com/forum/#!topic/comp.soft-sys.math.mathematica/kKII7037rso" rel="noreferrer">this comp.soft-sys.math.mathematica thread</a> for pointing out the technique.</p>
|
1,821,411 | <p>$f:[a,b]\rightarrow R$ that is integrable on [a,b]</p>
<p>So we need to prove:</p>
<p>$$\int_{-b}^{-a}f(-x)dx=\int_{a}^{b}f(x)dx$$</p>
<p>1.) So we'll use a property of definite integrals: (homogeny I think it's called?)</p>
<p>$$\int_{-b}^{-a}f(-x)dx=-1\int_{-b}^{-a}f(x)dx$$</p>
<p>2.) Great, now using the fundamental theorem of calculus:</p>
<p>$$-1\int_{-b}^{-a}f(x)dx=(-1)^2\int_{-a}^{-b}f(x)dx=\int_{-a}^{-b}f(x)dx$$</p>
<p>This is where I'm stuck. For some reason I think it might be smarter to skip step 2 and leave it as:</p>
<p>$$-1\int_{-b}^{-a}f(x)dx$$ </p>
<p>because graphically, we've "flipped" the graph about the x-axis, but we're still calculating the same area. Proving that using properties seems to have stumped me.</p>
<p>I prefer hints over solutions, thanks.</p>
| Alex M. | 164,025 | <p>Your first step is mistaken: it seems that you mistake $\int _a ^b (-f) (x) \ \Bbb d x$ for $\int _a ^b f (-x) \ \Bbb d x$; these two are completely different, and the homogeneity property applies only to the first formula, not to the second.</p>
<p>Just use the substitution $y = -x$, this will solve the problem in no time.</p>
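<p>The substitution can also be illustrated numerically (a Python sketch; the test function <code>f(x) = x**3 + 1</code> and the limits <code>a = 0.5</code>, <code>b = 2</code> are arbitrary choices of mine):</p>

```python
import math

def trapezoid(g, lo, hi, n=100000):
    # Composite trapezoidal rule for the integral of g over [lo, hi].
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n))
    return s * h

f = lambda x: x**3 + 1   # arbitrary integrable test function
a, b = 0.5, 2.0

lhs = trapezoid(lambda x: f(-x), -b, -a)   # ∫_{-b}^{-a} f(-x) dx
rhs = trapezoid(f, a, b)                   # ∫_a^b f(x) dx

print(abs(lhs - rhs) < 1e-6)  # → True
```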
|
863,561 | <p>A Lambertian surface reflects or emits radiation proportional to the cosine of the angle subtended between the exiting angle and the normal to that surface. The integral over the surface of the hemisphere which describes the exiting radiance is supposed to equal π. Is there a way I can prove that the surface of the Lambertian hemisphere is equal to π?
The following is what I have tried. I assume that this hemispherical function can be described as
$$
x = \sin(\arccos(y)) = \sqrt{1-y^2}
$$
where I would attempt to use integration of a surface of revolution to calculate the area. The normal to the Lambertian surface is defined by the y-axis. I want to rotate this curve about the y-axis. I thus use the following equation to calculate the surface of revolution
$$
A = \int_0^1 2\pi x\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy
$$
I have used matlab to solve the integral symbolically and used numerical integration to try and get to the value of π, but it doesn't work. I think I am not starting with the right function to describe the surface. The following link may explain this better than above. <a href="http://fp.optics.arizona.edu/Palmer/rpfaq/rpfaq.htm#lambertian" rel="nofollow">http://fp.optics.arizona.edu/Palmer/rpfaq/rpfaq.htm#lambertian</a></p>
| johannesvalks | 155,865 | <p>Given your question, I think you need to evaluate</p>
<p>$$
\int_0^{\pi/2} d\theta \int_0^{2\pi} d\phi \sin(\theta) \cos(\theta)
$$</p>
<blockquote class="spoiler">
<p> $$= \pi \int_0^{\pi/2} \sin(2\theta) d\theta = \pi$$</p>
</blockquote>
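<p>For what it's worth, this double integral is easy to confirm numerically (a Python sketch using a midpoint Riemann sum; the step count is arbitrary):</p>

```python
import math

def lambertian_integral(n=20000):
    # Midpoint Riemann sum for ∫_0^{π/2} ∫_0^{2π} sin(θ)cos(θ) dφ dθ.
    # The integrand does not depend on φ, so that integral is just a factor 2π.
    d_theta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * d_theta
        total += math.sin(theta) * math.cos(theta) * d_theta
    return 2 * math.pi * total

print(abs(lambertian_integral() - math.pi) < 1e-6)  # → True
```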
|
863,561 | <p>A Lambertian surface reflects or emits radiation proportional to the cosine of the angle subtended between the exiting angle and the normal to that surface. The integral over the surface of the hemisphere which describes the exiting radiance is supposed to equal π. Is there a way I can prove that the surface of the Lambertian hemisphere is equal to π?
The following is what I have tried. I assume that this hemispherical function can be described as
$$
x = \sin(\arccos(y)) = \sqrt{1-y^2}
$$
where I would attempt to use integration of a surface of revolution to calculate the area. The normal to the Lambertian surface is defined by the y-axis. I want to rotate this curve about the y-axis. I thus use the following equation to calculate the surface of revolution
$$
A = \int_0^1 2\pi x\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy
$$
I have used matlab to solve the integral symbolically and used numerical integration to try and get to the value of π, but it doesn't work. I think I am not starting with the right function to describe the surface. The following link may explain this better than above. <a href="http://fp.optics.arizona.edu/Palmer/rpfaq/rpfaq.htm#lambertian" rel="nofollow">http://fp.optics.arizona.edu/Palmer/rpfaq/rpfaq.htm#lambertian</a></p>
| Jacques MALAPRADE | 163,723 | <p>Based on pp. 566 and 571 of Stroud's 'Engineering Mathematics', I am setting out the answer below. The surface of revolution based on the parametric equations, where in our case the rotation is around the y-axis, is given by:
$$
A = \int_0^{\pi/2} 2\pi x \sqrt{(\frac{dx}{d\theta})^2 + (\frac{dy}{d\theta})^2}d\theta
$$
this is only for a hemisphere of unit radius. We need to multiply the integrand by the cosine of $\theta$, as rightly pointed out by Ron above; this is positive for $\theta$ between $0$ and $\pi/2$. As
$y = \cos\theta$ and $x = \sin\theta$, we have:
$$
A = \int_0^{\pi/2} 2\pi\cos\theta\,\sin\theta \sqrt{\sin^2\theta + \cos^2\theta}\; d\theta
$$
This reduces to:
$$
A = \pi \sin^2\theta\,\Big|_0^{\pi/2} = \pi
$$</p>
|
4,374,521 | <p>In the <span class="math-container">$(x,t)$</span>- plane, the characteristic of the initial value problem <span class="math-container">$$u_t+uu_x=0$$</span> with <span class="math-container">$$u(x,0)=x,0\leq x\leq 1$$</span> are</p>
<p><span class="math-container">$1$</span>. parallel straight lines .</p>
<p><span class="math-container">$2.$</span> straight lines which intersects at <span class="math-container">$(0,-1)$</span>.</p>
<p><span class="math-container">$3.$</span> non- intersecting parabolas.</p>
<p><span class="math-container">$4.$</span> concentric circles with center at origin.</p>
<p>I am learning partial differential equations, so I don’t have good knowledge of them. As I understand it, the characteristic equations are</p>
<p><span class="math-container">$$\frac{dt}{1}=\frac{dx}{u}=\frac{du}{0}$$</span> Now <span class="math-container">$u=c$</span> by the last fraction. So by the first two fractions I have <span class="math-container">$x-ct=k$</span>, where <span class="math-container">$c$</span> and <span class="math-container">$k$</span> are constants. Now I don’t know how to use the initial condition <span class="math-container">$u(x,0)=x$</span>, and what is the final answer? I see that <span class="math-container">$x-ct-k=0$</span> are straight lines in the <span class="math-container">$(x,t)$</span>-plane. Please help me reach the final option. Thank you.</p>
| Henry Lee | 541,220 | <p>I'm just going to expand on my comment. This appears to fit the form of the <a href="https://en.wikipedia.org/wiki/Burgers%27_equation#Inviscid_Burgers%27_equation" rel="nofollow noreferrer">Inviscid Burgers' equation</a> which is:
<span class="math-container">$$\frac{\partial u }{\partial t}+u\frac{\partial u}{\partial x}=0$$</span>
Using the method of characteristics we say:
<span class="math-container">$$u(x,t)=u(x(s),t(s))$$</span>
then assume:
<span class="math-container">$$\frac d{ds}u(x(s),t(s))=F(u,x(s),t(s))$$</span>
and using chain rule we get:
<span class="math-container">$$\frac d{ds}u(x(s),t(s))=\frac{\partial u}{\partial x}\frac{dx}{ds}+\frac{\partial u}{\partial t}\frac{dt}{ds}$$</span></p>
<hr />
<p>I am not particularly versed in this method, but I hope this information helped.</p>
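<p>For the record, a sketch of how the computation in the question finishes (my own completion, following the characteristic equations stated there): along a characteristic, <span class="math-container">$u$</span> is constant, so the characteristic through <span class="math-container">$(x_0,0)$</span> carries the value <span class="math-container">$u=x_0$</span> and is the line <span class="math-container">$$x=x_0+u\,t=x_0(1+t).$$</span> Setting <span class="math-container">$t=-1$</span> gives <span class="math-container">$x=0$</span> for every <span class="math-container">$x_0$</span>, so all characteristics pass through <span class="math-container">$(0,-1)$</span>: option <span class="math-container">$2$</span>.</p>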
|
3,996,090 | <p>I believe I have found the recurrence relation to be
<span class="math-container">$$B\left(n\right)=B\left(n-1\right)+2^{n-1}-1$$</span>
with initial condition B(0)=0 (I am a bit unsure about the initial condition, but I think it is correct)</p>
<p>Now I am trying to solve B(n) using iteration, this is what I have so far:
<span class="math-container">$$B\left(n\right)=B\left(n-1\right)+2^{n-1}-1$$</span>
<span class="math-container">$$B\left(n\right)=B\left(n-2\right)+2^{n-2}+2^{n-1}-\left(2\right)1$$</span>
<span class="math-container">$$B\left(n\right)=B\left(n-3\right)+2^{n-3}+2^{n-2}+2^{n-1}-\left(3\right)1$$</span>
<span class="math-container">$$=B\left(n-k\right)+2^{n-k}+2^{n-\left(k-1\right)}+\cdots+2^{n-1}-k$$</span>
and then I let n=k, since the initial condition is B(0), but this is where I get confused; I am not sure what to do from here:
<span class="math-container">$$=B\left(0\right)+2^0+2^1+2^2+...+2^{\left(?\right)}-n$$</span></p>
| RobPratt | 683,666 | <p>Yes, <span class="math-container">$B_0=B_1=0$</span>. Let <span class="math-container">$A_n = 2^n - B_n$</span> be the number of bit strings that do <em>not</em> contain <span class="math-container">$01$</span>, so <span class="math-container">$A_0=1$</span>. For <span class="math-container">$n>1$</span>, condition on whether the first bit is <span class="math-container">$0$</span> or <span class="math-container">$1$</span> to obtain recurrence relation
<span class="math-container">$$A_n = 1 + A_{n-1},$$</span>
which implies that
<span class="math-container">$$A_n = n+1.\tag1$$</span>
Hence
<span class="math-container">$$B_n = 2^n - A_n = 2^n - (1 + A_{n-1}) = 2^n - 1 - (2^{n-1} - B_{n-1}) = 2^{n-1} - 1 + B_{n-1},$$</span>
or just use <span class="math-container">$(1)$</span> to conclude that <span class="math-container">$B_n = 2^n - n - 1$</span>.</p>
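<p>The closed form can be brute-force checked against a direct count of bit strings containing <span class="math-container">$01$</span> (a Python sketch; exhaustive counting is feasible only for small <span class="math-container">$n$</span>):</p>

```python
from itertools import product

def brute_B(n):
    # Count length-n bit strings containing "01" as a contiguous substring.
    return sum(1 for bits in product("01", repeat=n) if "01" in "".join(bits))

# Compare against the closed form B_n = 2^n - n - 1.
assert all(brute_B(n) == 2**n - n - 1 for n in range(12))
print("closed form verified")  # → closed form verified
```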
|
3,888,146 | <p>When we give a proof that the tangent is the sine to cosine ratio of an oriented angle,</p>
<p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\tan \alpha=\frac{\sin\alpha}{\cos \alpha}}$$</span>
with <span class="math-container">$\cos \alpha \neq 0$</span>, we take the tangent <span class="math-container">$t$</span> in <span class="math-container">$A(1,0)\equiv S$</span> to the circle of center in <span class="math-container">$O(0,0)$</span> ad radius <span class="math-container">$r=1$</span>. See the image</p>
<p><a href="https://i.stack.imgur.com/LPQPV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LPQPV.png" alt="https://www.youmath.it/images/stories/funzioni-elementari/definizione-tangente.png" /></a></p>
<blockquote>
<p>The name tangent has been given because we consider the tangent to the circle of radius <span class="math-container">$1$</span> at point <span class="math-container">$A\equiv S$</span> or for another reason?</p>
</blockquote>
| heropup | 118,193 | <p>Since <span class="math-container">$\ell$</span> is a diameter, reflecting either <span class="math-container">$A$</span> or <span class="math-container">$B$</span> across <span class="math-container">$\ell$</span> will give a third point through which the circle passes. You can then use this as the construction.</p>
<p>Alternatively, you may simply construct the perpendicular bisector of <span class="math-container">$AB$</span>, which intersects <span class="math-container">$\ell$</span> at the center of the circle.</p>
|
3,752,162 | <p>I already knew that normal subgroups were important because they allow the quotient space to have a group structure.
But I was told that normal subgroups are also important in particular because they are the only subgroups that can occur as kernels of group homomorphisms. Why is this property a big deal in algebra?</p>
| Andrea Mori | 688 | <p>Let <span class="math-container">$G$</span> be a group. Since the normal subgroups of <span class="math-container">$G$</span> coincide, as a set, with the subgroups that appear as kernels of homomorphisms with domain <span class="math-container">$G$</span>, the normal subgroups are exactly the subgroups of <span class="math-container">$G$</span> that appear as the left object in short exact sequences of the form
<span class="math-container">$$
1\longrightarrow N\longrightarrow G\longrightarrow K\longrightarrow 1.\qquad(*)
$$</span>
The point is that once you have an exact sequence <span class="math-container">$(*)$</span> the group <span class="math-container">$G$</span> can be reconstructed out of <span class="math-container">$N$</span> and <span class="math-container">$K$</span> plus some extra-combinatorial data (technically, cohomological data depending only on <span class="math-container">$K$</span> and <span class="math-container">$N$</span>).</p>
<p>Now suppose that <span class="math-container">$G$</span> is <em>finite</em>. Then, the isomorphism theorem tells you that <span class="math-container">$|G|=|N|\cdot|K|$</span>, i.e. the group <span class="math-container">$G$</span> can be reconstructed out of smaller groups plus some additional data depending only on those smaller groups.</p>
<p>If you have a list of finite groups that contain no nontrivial proper normal subgroups (these groups are called <em>simple</em>), the above sets the very first step towards the goal of reconstructing <strong>all</strong> finite groups.</p>
<p>When <span class="math-container">$G$</span> is not finite the property has still some interest, for instance when studying <em>representations</em> of <span class="math-container">$G$</span>, i.e. homomorphisms of the kind
<span class="math-container">$$
G\longrightarrow{\rm GL}(V)
$$</span>
where <span class="math-container">$V$</span> is some vector space.</p>
|
571,941 | <p>I know that $\sum _{ n=1 }^{ \infty }{ { (-1) }^{ n+1 }\frac { 1 }{ n } =\ln(2) }$ .</p>
<p>How about the series $\sum _{ n=1 }^{ \infty }{ { (-1) }^{ n+1 } } \frac { 1 }{ \sqrt { n } }$ </p>
<p>To what number does it converge?</p>
| 1233dfv | 102,540 | <p>Let $a_1$ be the number of combinatorics problems solved on the first day, $a_2$ be the total number of combinatorics problems solved on the first and second days, and so on. The sequence of numbers $a_1,a_2,...,a_{365}$ is an increasing sequence since each term of the sequence is larger than the one that precedes it and at least one combinatorics problem is solved each day. Since he solves no more than $500$ combinatorics problems for the year we know that $1\leq a_1\leq a_2\leq \cdots \leq a_{365}\leq 500$. The sequence $a_1+229$, $a_2+229$, and so on is also an increasing sequence. So $230\leq a_1+229\leq a_2+229\leq \cdots \leq a_{365}+229 \leq 729$. Each of the $730$ numbers, that is $a_1,a_2,...,a_1+229+a_2+229,...$ is an integer between $1$ and $729$. It follows that two of them are equal since no two of the numbers $a_1,a_2,...$ are equal and no two of the numbers $a_1+229,a_2+229,...$ are equal. There must exist an $i$ and a $j$ such that $a_i=a_j+229$. Thus on days $j+1$, $j+2$,..., $i$ there exists consecutive days when the student solves $229$ combinatorics problems.</p>
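<p>The pigeonhole argument above can also be exercised by simulation (a Python sketch; it draws random admissible schedules and confirms that a difference of exactly <span class="math-container">$229$</span> always appears):</p>

```python
import random

def has_229_gap(days=365, cap=500, target=229):
    # a[0] < a[1] < ... < a[364] are the cumulative totals, each in [1, 500].
    a = sorted(random.sample(range(1, cap + 1), days))
    vals = set(a)
    # The pigeonhole argument guarantees some pair with a_i = a_j + target.
    return any(x + target in vals for x in a)

random.seed(1)
print(all(has_229_gap() for _ in range(200)))  # → True
```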
|
2,507,828 | <p>$$\int_C (1+\cosh(y),x\sinh(y))d\vec{s}$$
Where C is a curve that goes from $(0,0)$ to $(1,1)$</p>
<p>I am not sure how to proceed. I can find</p>
<p>$\vec{F}=(1+\cosh(y),x\sinh(y))$</p>
<p>$\vec{f}:\nabla f=\vec{F}$</p>
<p>$\vec f=(x+x\cosh(y), x\cosh(y))$</p>
| Doug M | 317,162 | <p>Fundamental theorem of line integrals: if $f = \nabla F$ and $c$ is a contour from $a$ to $b$</p>
<p>$\int_c f \cdot dr = F(b) - F(a)$</p>
<p>The line integral does not depend on the path. It only depends on the endpoints.</p>
<p>$f = \nabla (x+x \cosh y)$</p>
<p>$\int_c f\cdot dr = 1 + \cosh 1$</p>
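<p>Path independence can be seen numerically as well (a Python sketch; the two paths from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(1,1)$</span> are arbitrary choices of mine):</p>

```python
import math

def line_integral(F, path, n=20000):
    # Midpoint approximation of ∫_c F · dr for path(t), t in [0, 1].
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = path(t)
        x1, y1 = path(t + h / 2)   # central-difference increment of the path
        x0, y0 = path(t - h / 2)
        Fx, Fy = F(x, y)
        total += Fx * (x1 - x0) + Fy * (y1 - y0)
    return total

F = lambda x, y: (1 + math.cosh(y), x * math.sinh(y))
straight = lambda t: (t, t)     # straight segment
curved = lambda t: (t, t**2)    # parabolic arc, same endpoints

expected = 1 + math.cosh(1)     # f(1,1) - f(0,0) for the potential f = x + x cosh y
print(abs(line_integral(F, straight) - expected) < 1e-5)  # → True
print(abs(line_integral(F, curved) - expected) < 1e-5)    # → True
```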
|
2,507,828 | <p>$$\int_C (1+\cosh(y),x\sinh(y))d\vec{s}$$
Where C is a curve that goes from $(0,0)$ to $(1,1)$</p>
<p>I am not sure how to proceed. I can find</p>
<p>$\vec{F}=(1+\cosh(y),x\sinh(y))$</p>
<p>$\vec{f}:\nabla f=\vec{F}$</p>
<p>$\vec f=(x+x\cosh(y), x\cosh(y))$</p>
| operatorerror | 210,391 | <p>Hint: You are in search of a function $f$ with the property that
$$
\frac{\partial f}{\partial x}=1+\cosh y\implies f(x,y)=x+x\cosh y+g(y)
$$
furthermore, you would like
$$
\frac{\partial f}{\partial y}=x\sinh y+g'(y)=x\sinh y\implies g(y)=c
$$
and
$$
f(x,y)=x+x\cosh y+c
$$
try evaluating the potential $f$ at the end points.</p>
|
2,361,602 | <p>The "Heine–Cantor theorem" states: If $f : M → N$ is a continuous function between two metric spaces, and $M$ is compact, then $f$ is uniformly continuous.</p>
<p>I do not doubt its validity, of course, just trying to understand <strong>why</strong> it is valid.</p>
<p>If we, say, take the function $y = x^4$:
it rises very quickly with the rising value of the argument. How is it that, according to the Heine–Cantor theorem, just because we restrict the argument of the function to, say, $[0, 10]$, it automatically becomes "<strong>uniformly</strong> continuous" (given of course that it is "continuous")? </p>
<p>There are regions of the argument segment where the function rises more quickly than in others.</p>
<p>Does the reason have to do with the fact that we could in worst case choose $\delta=10$ (length of the segment in example) and thus cover all possible cases for $\varepsilon$?</p>
| A. Thomas Yerger | 112,357 | <p>Compactness is a finiteness property. It says that given any infinite collection of data associated to open sets in a topological space, you can in fact deal with only finitely many.</p>
<p>The great thing about a finite number of objects, is that you can compare them. Unlike with infinite sets, which only have well-defined suprema and infima, finite sets have maxima and minima (infinite sets can have these, but often don't, such as with sets like $(0,1)$, which has no maxima or minima). </p>
<p>Uniform continuity is a statement that one particular $\delta$ works for every $x$. Compactness says finitely many $\delta$ suffice to talk about the $\delta$'s needed for the whole space. So the finiteness lets us pick the one we need for the whole space.</p>
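<p>To make this concrete for the $y=x^4$ example in the question (my own worked bound, in the spirit of the answer): for $x,y\in[0,10]$, $$|x^4-y^4|=|x-y|\,|x^3+x^2y+xy^2+y^3|\le 4000\,|x-y|,$$ so the single choice $\delta=\varepsilon/4000$ works at every point of $[0,10]$. On an unbounded domain there is no uniform bound on the factor $|x^3+x^2y+xy^2+y^3|$, and this is exactly the failure that compactness prevents.</p>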
|
4,421,529 | <p><strong>Question:</strong> Let <span class="math-container">$n > 0$</span>. How can I find a function <span class="math-container">$f:\mathbb{N}\rightarrow\mathbb{R}^+$</span> such that
<span class="math-container">$$
\lim_{n\to\infty} \frac{f(n)^2}{n} \log \left(\frac{f(n)}{n}\right) = L
$$</span>
with <span class="math-container">$0<L<\infty$</span>?</p>
<p><strong>Background</strong>: The term above appears in my research on subexponential bounds for binary words containing a limited number of ones. I have been able to eliminate all other terms, but I am stuck with this one.</p>
<p><strong>What I tried so far:</strong> I applied L'Hôpital's rule to get
<span class="math-container">$$
\lim_{n\to\infty} \frac{\log\left(\frac{f(n)}{n}\right)}{\frac{-n}{f(n)^2}} = \lim_{n\to\infty} \frac{\frac{f'(n)}{f(n)}-\frac{1}{n}}{\frac{1}{f(n)^2}-\frac{2nf'(n)}{f(n)^3}}
$$</span></p>
<p>which got rid of the <span class="math-container">$\log()$</span>. Since the limit should be finite, it seems to me that <span class="math-container">$\lim_{n\to\infty} \frac{f(n)}{\sqrt{n}} < \infty$</span>, but I haven't been able to come up with an <span class="math-container">$f(n)$</span> that doesn't lead to <span class="math-container">$L=0$</span>.</p>
| ajr | 266,348 | <p>Take <span class="math-container">$f(n) = n+1$</span>. Then
<span class="math-container">\begin{align*}
\lim\limits_{n\to\infty} \frac{f(n)^2}{n}\log\bigg(\frac{f(n)}{n}\bigg) = \lim\limits_{n\to\infty} \bigg(n + 2 + \frac 1n\bigg)\log\bigg(1 + \frac 1n\bigg) = \lim\limits_{n\to\infty} \frac{n + 2 + \frac 1n}{n}\cdot n\log\bigg(1 + \frac 1n\bigg) = 1.
\end{align*}</span>
You can get any other number <span class="math-container">$L>0$</span> by taking <span class="math-container">$f(n) = n+L$</span>.</p>
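<p>The convergence is also easy to observe numerically (a Python sketch; the target value <span class="math-container">$L=2.5$</span> is an arbitrary choice of mine):</p>

```python
import math

def expr(f, n):
    # Evaluate f(n)^2 / n * log(f(n) / n).
    return f(n) ** 2 / n * math.log(f(n) / n)

L = 2.5
f = lambda n: n + L   # candidate from the answer: f(n) = n + L

# The value approaches L as n grows (the error is O(1/n)).
print(abs(expr(f, 10**7) - L) < 1e-4)  # → True
```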
|
3,084,479 | <p><span class="math-container">$h\in \mathbb{R}$</span>, because we have defined the Trigonometric Functions only on <span class="math-container">$\mathbb{R}$</span> so far.</p>
<p>I have a look at <span class="math-container">$e^{ih}=\sum_{k=0}^{\infty}\frac{(ih)^k}{k!}=1+ih-\frac{h^2}{2}+....$</span> </p>
<p><strong>How can one describe the nth term of the sum?</strong></p>
<p>Then I look at <span class="math-container">$\frac{e^{ih}-1}{h}=\frac{(1-1)}{h}+i-\frac{h}{2}+...=i-\frac{h}{2}+....$</span> </p>
<p><strong>Again how can I describe that the nth term of the sum?</strong> </p>
<p>Because <span class="math-container">$\frac{e^{ih}-1}{h}=\sum_{k=1}^{\infty}\frac{\frac{(ih)^k}{h}}{k!}<\sum_{k=0}^{\infty}\frac{\frac{(ih)^k}{h}}{k!}=\sum_{k=1}^{\infty}\frac{(ih^{-1})^k}{k!}=e^{ih^{-1}}$</span></p>
<p>and <span class="math-container">$ih^{-1}$</span> is a complex number and the exponential-series converges absolutely for all Elements in <span class="math-container">$\mathbb{C}$</span>, I have found a convergent majorant. And I can apply the properties of Limits on <span class="math-container">$\frac{e^{ih}-1}{h}\forall, h\in \mathbb{R}$</span>.</p>
<p><strong>How can I now prove formally (i.e by chosing an explicit <span class="math-container">$\delta$</span>) that</strong> </p>
<p><span class="math-container">$$\forall_{\epsilon>0}\exists_{\delta>0}\forall_{h\in\mathbb{R}}|h-0|=|h|<\delta\Longrightarrow |(\frac{e^{ih}-1}{h}=i-\frac{h}{2}+...)-i|<\epsilon$$</span></p>
<p><strong>I am also seeking advice on how to argue in such cases more intuitively (i.e., by not always giving an explicit <span class="math-container">$\delta$</span>).</strong></p>
| Jam | 161,490 | <p>Hint: Use Euler's formula and split the limit into well known trigonometric limits.</p>
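<p>Before hunting for an explicit <span class="math-container">$\delta$</span>, the limit can at least be observed numerically (a Python sketch; the observed error shrinks roughly like <span class="math-container">$|h|/2$</span>, matching the <span class="math-container">$-h/2$</span> term of the series):</p>

```python
import cmath

def difference_quotient(h):
    # (e^{ih} - 1) / h, whose limit as h -> 0 should be i.
    return (cmath.exp(1j * h) - 1) / h

# The distance to i decreases roughly like |h| / 2.
for h in (1e-2, 1e-4, 1e-6):
    print(h, abs(difference_quotient(h) - 1j))

print(abs(difference_quotient(1e-6) - 1j) < 1e-5)  # → True
```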
|
188,492 | <p>$A$ is an $n\times n$ matrix (not symmetric). If $\rho(A)$, spectral radius of $A$, is less than or equal to 1, can we say that $x^TAx\leq x^Tx$? </p>
<p>In other words,</p>
<p>if $\rho(A)\leq 1$, then $\frac{1}{2}\rho(A+A^T)\leq 1$?</p>
| Harald Hanche-Olsen | 23,290 | <p>The second part of the question is easier to answer: take the nilpotent matrix $$A=\begin{pmatrix}0&3\\0&0\end{pmatrix}.$$ Then $\rho(A)=0\leq 1$, while $\frac{1}{2}\rho(A+A^T)=\frac{3}{2}>1$; moreover, with $x=(1,1)^T$, $x^TAx=3>2=x^Tx$.</p>
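<p>A concrete check in pure Python (a sketch; the scale $c=3$ is my choice, and any $c>2$ works, since a strictly upper-triangular matrix is nilpotent, so $\rho(A)=0$ no matter how it is scaled, while its symmetric part grows with $c$):</p>

```python
# A = [[0, c], [0, 0]] is nilpotent (A @ A = 0), so its only eigenvalue is 0
# and rho(A) = 0 <= 1 for every scale c.
c = 3.0
A = [[0.0, c], [0.0, 0.0]]

# Quadratic form x^T A x with x = (1, 1) equals c, while x^T x = 2.
x = (1.0, 1.0)
xAx = sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))
xTx = sum(v * v for v in x)
print(xAx > xTx)  # → True: x^T A x <= x^T x fails

# A + A^T = [[0, c], [c, 0]] has eigenvalues ±c, so (1/2) rho(A + A^T) = c/2 > 1.
half_rho_sym = c / 2
print(half_rho_sym > 1)  # → True
```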
|
4,350,450 | <p>For me, <span class="math-container">$\Bbb N$</span> includes <span class="math-container">$0$</span>. I am referencing, yet again, <a href="https://www.math.uni-leipzig.de/%7Eeisner/book-EFHN.pdf" rel="nofollow noreferrer">this</a> text, exercise <span class="math-container">$19$</span>, page <span class="math-container">$30$</span>.</p>
<blockquote>
<p>Let <span class="math-container">$K$</span> be a compact Hausdorff space, and <span class="math-container">$\phi:K\to K$</span> continuous and surjective - i.e. <span class="math-container">$(K;\phi)$</span> is a surjective topological dynamic system.</p>
<p>Let <span class="math-container">$K^\omega=\prod_{n\in\Bbb N}K$</span>, and let <span class="math-container">$\psi:K^\omega\to K^\omega,\,(x_1,x_2,\cdots)\mapsto(\phi(x_1),x_1,x_2,\cdots)$</span>. By Tychonoff's theorem, <span class="math-container">$(K^\omega;\psi)$</span> is a topological system. Let <span class="math-container">$L=\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\subseteq K^\omega$</span>.</p>
</blockquote>
<p>It is "shown" earlier in the book (Corollary <span class="math-container">$2.27$</span>, page <span class="math-container">$20$</span>), that <span class="math-container">$L$</span> is the maximal (by set inclusion) surjective subsystem of <span class="math-container">$K^\omega$</span>.</p>
<blockquote>
<p>Show that <span class="math-container">$\pi(L)=K$</span>, where <span class="math-container">$\pi:K^\omega\to K$</span> is the projection onto the first component.</p>
</blockquote>
<p>I can do this fine, but I fear it is a bit unrigorous in equality <span class="math-container">$1$</span>:</p>
<blockquote>
<p><span class="math-container">$$\pi(L)=\pi\left(\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\right)\color{red}{\overset{1}=}\bigcap_{n\in\Bbb N}(\pi\circ\psi^n)(K^\omega)$$</span>Note that <span class="math-container">$\psi^n(x_1,x_2,\cdots)=(\phi^n(x_1),\phi^{n-1}(x_1),\cdots)$</span>, and <span class="math-container">$\pi\circ\psi^n$</span> therefore maps <span class="math-container">$(x_1,x_2,\cdots)\mapsto\phi^n(x_1)$</span>. As <span class="math-container">$\phi$</span> is a surjection on <span class="math-container">$K$</span>, <span class="math-container">$(\pi\circ\psi^n)(K^\omega)=K$</span> regardless of <span class="math-container">$n$</span>, from which it follows that <span class="math-container">$\pi(L)=K$</span>.</p>
</blockquote>
<p>However, they leave a hint suggesting more rigour is required:</p>
<blockquote>
<p>Hint: For <span class="math-container">$y\in K$</span> apply Lemma <span class="math-container">$2.26$</span> to the <span class="math-container">$\psi$</span>-invariant set <span class="math-container">$\pi^{-1}\{y\}$</span>.</p>
<p>Lemma <span class="math-container">$2.26$</span>: Suppose that <span class="math-container">$(K;\phi)$</span> is a topological system and that <span class="math-container">$\varnothing\neq A\subseteq K$</span> is closed and invariant (<span class="math-container">$\phi(A)\subseteq A$</span>). Then there is a closed set <span class="math-container">$B$</span>, <span class="math-container">$\varnothing\neq B\subseteq A$</span>, with <span class="math-container">$\phi(B)=B$</span>. Explicitly, <span class="math-container">$B=\bigcap_{n\in\Bbb N_1}\phi^n(A)$</span>.</p>
</blockquote>
<p>Assuming for the moment that <span class="math-container">$\pi^{-1}\{y\}$</span> is indeed <span class="math-container">$\psi$</span>-invariant, then this lemma can "solve" the problem similarly (I am unsure why it is needed, but I tried to indulge them nonetheless):</p>
<blockquote>
<p><span class="math-container">$$\begin{align}\pi(L)&=\pi\left(\bigcap_{n\in\Bbb N}\psi^n(K^\omega)\right)\\&=\pi\left(\bigcap_{n\in\Bbb N}\bigcup_{\mathbf{x}\in K^\omega}\psi^n(\mathbf{x})\right)\\&=\pi\left(\bigcap_{n\in\Bbb N}\bigcup_{y\in K}\psi^n(\pi^{-1}\{y\})\right)\\&\color{red}{\overset{2}{=}}\pi\left(\bigcup_{y\in K}\bigcap_{n\in\Bbb N}\psi^n(\pi^{-1}\{y\})\right)\\&=\pi\left(\bigcup_{y\in K}B_y\right)\\&=\bigcup_{y\in K}\pi(B_y)\\&=\bigcup_{y\in K}y\\&=K\end{align}$$</span></p>
</blockquote>
<p>Why we need to go down that route, I am very unsure. It seems like a strange detour to take, so I feel like I'm missing their intended solution. Moreover, this approach introduces a second dubious equality, <span class="math-container">$2$</span>, that I don't know how to justify. My proof seems much shorter and more elegant, but also uses a potentially dubious equality in <span class="math-container">$1$</span>.</p>
<p>Returning to the invariance of <span class="math-container">$\pi^{-1}\{y\}$</span> - I do not believe it is invariant:</p>
<blockquote>
<p><span class="math-container">$$\pi^{-1}\{y\}=\{(y,x_1,x_2,\cdots):x_1,x_2,\cdots\in K\}=\{y\}\times K^\omega\\\psi(\pi^{-1}\{y\})=\{(\phi(y),y,x_1,x_2,\cdots):x_1,x_2,\cdots\in K\}=\{(\phi(y),y)\}\times K^\omega\not\subset\pi^{-1}\{y\}$$</span>whenever <span class="math-container">$\phi(y)\neq y$</span>.</p>
</blockquote>
<p>What am I missing with regards to the alleged <span class="math-container">$\psi$</span>-invariance, and are the equalities <span class="math-container">$1,2$</span> correct? That is, is my proposed proof of <span class="math-container">$\pi(L)=K$</span> correct?</p>
| José Carlos Santos | 446,262 | <p>This is not true in general. Suppose that <span class="math-container">$f$</span> is constant (you always have <span class="math-container">$f(x)=c$</span>), that each <span class="math-container">$A_n$</span> is non-empty and that <span class="math-container">$\bigcap_{n\in\Bbb N}A_n=\emptyset$</span>. Then<span class="math-container">$$f\left(\bigcap_{n\in\Bbb N}A_n\right)=\emptyset\quad\text{and}\quad\bigcap_{n\in\Bbb N}f(A_n)=\{c\}.$$</span></p>
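<p>A finite analogue of this counterexample can be checked mechanically: a constant map and two disjoint nonempty sets already separate <span class="math-container">$f\left(\bigcap_n A_n\right)$</span> from <span class="math-container">$\bigcap_n f(A_n)$</span> (a sketch; the sets and the constant are illustrative):</p>

```python
# Constant map f and nonempty sets with empty intersection.
f = lambda x: 0                       # the constant c = 0
A = [{1}, {2}]                        # nonempty, but their intersection is empty

image_of_intersection = {f(x) for x in set.intersection(*A)}
intersection_of_images = set.intersection(*[{f(x) for x in S} for S in A])

assert image_of_intersection == set()     # f of the empty intersection is empty
assert intersection_of_images == {0}      # the intersection of images is {c}
```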
|
1,937,826 | <p>Ok, this seems obvious to me, but how would one prove it?</p>
<p>Let $<f(t),g(t)>$ and $<h(t),p(t)>$ be parametrized arcs in the Cartesian plane. If $f,g,h,p$ are all continuous and the arcs don't intersect, then there will be a line segment between the two that realizes the shortest distance. Prove this segment is normal to both arcs.</p>
<p>Is this proof nontrivial? It seems so obvious, but I am not sure how it would be done.</p>
| cjackal | 44,643 | <p>If you assume that the two curves are defined over an open interval, and there is a shortest segment connecting the two curves, then yes; just differentiate the squared length of the joining segment with respect to each of the two parameters.</p>
<hr>
<p>Okay, let me be more explicit.</p>
<p>I assume that the two smooth curves $\alpha:I\to \mathbb{R}^2, \beta:I\to \mathbb{R}^2$ are defined over an open interval $I\subseteq \mathbb{R}$. For the sake of convenience, I write the components as $\alpha(t)=(\alpha_1(t),\alpha_2(t))$ etc. Assume that the two curves never meet and that there exists a shortest line segment (which is not necessarily unique) joining the two curves, and let the endpoints of the shortest segment be $\alpha(t_0)$ and $\beta(s_0)$. We can think of the squared length function $f: (t,s)\mapsto |\alpha(t)-\beta(s)|^2=(\alpha_1(t)-\beta_1(s))^2+(\alpha_2(t)-\beta_2(s))^2$, which is smooth by the assumption. Then as $(t_0,s_0)$ is a minimum point of this function, the derivative of $f$ at $(t_0,s_0)$ should be zero. Thus $2(\alpha_1(t_0)-\beta_1(s_0))\alpha_1'(t_0)+2(\alpha_2(t_0)-\beta_2(s_0))\alpha_2'(t_0)=0$, and similarly for $\beta'(s)$. This is exactly the condition that the segment is normal to the two curves.</p>
<p>Note that the assumption that the curve is defined over an open interval is necessary in this proof because the derivative may not be zero if there is an end point in the domain of the curves.</p>
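<p>A numeric illustration (my own example): take the unit circle $\alpha(t)=(\cos t,\sin t)$ and the line $\beta(s)=(s,3)$. The shortest segment joins $\alpha(\pi/2)=(0,1)$ to $\beta(0)=(0,3)$; the partial derivatives of the squared distance vanish there, and the segment is orthogonal to both tangent vectors:</p>

```python
import math

t0, s0 = math.pi / 2, 0.0
alpha = (math.cos(t0), math.sin(t0))           # (0, 1) on the circle
beta = (s0, 3.0)                               # (0, 3) on the line
d = (alpha[0] - beta[0], alpha[1] - beta[1])   # joining segment

alpha_tan = (-math.sin(t0), math.cos(t0))      # tangent to the circle
beta_tan = (1.0, 0.0)                          # tangent to the line

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
assert abs(dot(alpha_tan, d)) < 1e-12          # segment normal to the circle
assert abs(dot(beta_tan, d)) < 1e-12           # segment normal to the line

# f(t, s) = |alpha(t) - beta(s)|^2 has vanishing partials at (t0, s0).
f = lambda t, s: (math.cos(t) - s) ** 2 + (math.sin(t) - 3.0) ** 2
h = 1e-6
assert abs((f(t0 + h, s0) - f(t0 - h, s0)) / (2 * h)) < 1e-5
assert abs((f(t0, s0 + h) - f(t0, s0 - h)) / (2 * h)) < 1e-5
```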
|
1,392,257 | <p><strong>The definition of a conjugate element</strong> </p>
<p>We say that $x$ is conjugate to $y$ in $G$ if $y = g^{-1}xg $ for some $g \in G$</p>
<p>Now for the group $G=Q_8$ , we have the group presentation $$Q_8 = \big<a,b: a^4 =1,b^2 = a^2, b^{-1}ab = a^{-1} \big>$$</p>
<p>Now the elements of $Q_8$ are $\{1,a,a^2,a^3,ab,a^2b,a^3b,b\}$ and after some calculation we would get $5$ different conjugacy classes, namely $a^G = \{a,a^3\}$ where $a^G$ denotes the conjugacy class of $a$ in $G = Q_8$,</p>
<p>also we have </p>
<p>$1^G = \{1 \}$, ${a^2}^G = \{ a^2 \}$, ${(a^2b)}^G = \{a^2b,b \}$ and ${(ab)}^G = \{ab,a^3b\}$</p>
<p>Of course, there is no surprise that for every element $x \in G$ we have $x \in x^G$, because $x = 1^{-1}x1$. However, we see that all the conjugacy classes for $Q_8$ contain the element and its inverse. Like $a^{-1} = a^3$, ${(a^2)}^{-1} = a^2$, ${(a^2b)}^{-1} = b$ and so on.</p>
<p>My question is: does this hold true for all groups?</p>
<p>More formally: is it true that for every element $x \in G$ we have $x,x^{-1} \in x^G$?</p>
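<p>These five classes, and the fact that each is closed under inversion, can be double-checked by brute force. A sketch using the unit-quaternion model of $Q_8$ (identifying $a$ with $i$ and $b$ with $j$ — one standard faithful representation):</p>

```python
def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(p):
    # for a unit quaternion, the inverse is the conjugate
    w, x, y, z = p
    return (w, -x, -y, -z)

# Q_8 = {±1, ±i, ±j, ±k}
G = [tuple(s * v for v in e)
     for e in ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))
     for s in (1, -1)]

classes = {frozenset(qmul(qmul(qinv(g), x), g) for g in G) for x in G}

assert len(classes) == 5
# every conjugacy class contains, with each element, its inverse:
assert all(frozenset(map(qinv, C)) == C for C in classes)
```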
| Erik Rijcken | 261,145 | <p>No, this does not hold: take any abelian group $G$, then $ab=ba$ for all $a,b\in G$, so $b^{-1}ab = a$ for all $a,b\in G$, so $a^G=\{a\}$ for all $a\in G$. So if $G$ contains an element of order different from $2$, it does not satisfy that $a,a^{-1}\in a^G$ for all $a$.</p>
<p>For a concrete example, take $G = \langle a: a^4=1\rangle$, then $a^{-1}=a^3\not\in a^G=\{a\}$.</p>
|
61,106 | <p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be Poisson random variables with means <span class="math-container">$\lambda$</span> and <span class="math-container">$1$</span>, respectively. The difference of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is a <a href="http://en.wikipedia.org/wiki/Skellam_distribution" rel="nofollow noreferrer">Skellam random variable</a>, with probability density function
<span class="math-container">$$\mathbb P(X - Y = k) = \mathrm e^{-\lambda - 1} \lambda^{k/2} I_k(2\sqrt{\lambda}) =: S(\lambda, k),$$</span>
where <span class="math-container">$I_k$</span> denotes the <a href="http://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions_:_I.CE.B1.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the first kind</a>. Let <span class="math-container">$F(\lambda)$</span> denote the probability that <span class="math-container">$X$</span> is larger than <span class="math-container">$Y$</span>: <span class="math-container">$$F(\lambda) := \mathbb P(X > Y) = \sum_{k=1}^{\infty} S(\lambda, k) = \mathrm e^{-\lambda - 1} \sum_{k=1}^\infty \lambda^{k/2} I_k(2\sqrt{\lambda}).$$</span> According to Mathematica, the graph of the function <span class="math-container">$F$</span> looks like</p>
<p><img src="https://i.stack.imgur.com/m0mPi.png"><br/><sub>(source: <a href="https://cims.nyu.edu/~lagatta/F.png" rel="nofollow noreferrer">nyu.edu</a>)</sub><br/></p>
<p>My questions:</p>
<ul>
<li>Is there a closed-form expression for the function <span class="math-container">$F$</span>?</li>
<li>If not, what are <span class="math-container">$\lim_{\lambda \to 0} F'(\lambda)$</span> and <span class="math-container">$F'(1)$</span>? What is the asymptotic behavior as <span class="math-container">$\lambda \to \infty$</span>?</li>
</ul>
| Adrien Hardy | 15,517 | <p>By simple computations: the definition of the modified Bessel function of the first kind yields
$$
I_k(\lambda)=\sum_{n\geq 0}\frac{1}{n!(n+k)!}\left(\frac{\lambda}{2}\right)^{2n+k}
$$
so that we get (interchanging the order of summation is clearly allowed)
$$F(\lambda)=e^{-\lambda-1}\sum_{k\geq 1}\sum_{n\geq 0}\frac{\lambda^{k+n}}{n!(k+n)!}=e^{-\lambda-1}\sum_{n\geq 1}a_n\lambda^n \qquad \mbox{where}\qquad a_n=\frac{1}{n!}\sum_{k=0}^{n-1}\frac{1}{k!}.$$ Thus, differentiating term by term</p>
<p>$$ F'(\lambda) = e^{-\lambda-1}\Big(1+\sum_{n\geq 1 }[(n+1)a_{n+1}-a_n]\lambda^n\Big)
= e^{-\lambda-1}\sum_{n\geq 0}\frac{1}{(n!)^2}\lambda^n
$$
we obtain the closed form
$$
F'(\lambda)=e^{-\lambda-1}I_0(2\sqrt{\lambda}).
$$
One finally gets $$F'(0)=e^{-1}, \quad F'(1)=e^{-2}I_0(2)=e^{-2}\sum_{n\geq0}\frac{1}{(n!)^2}$$
and, using the asymptotic formula when $\lambda\rightarrow+\infty$ for all $k$
$$
I_k(\lambda)=\frac{e^{\lambda}}{\sqrt{2\pi\lambda}}\Big(1+O(\lambda^{-1})\Big),
$$
that
$$
F'(\lambda)=\frac{e^{2\sqrt{\lambda}-\lambda-1}}{2\sqrt{\pi\sqrt{\lambda}}}\Big(1+O(\lambda^{-1/2})\Big)
$$
when $\lambda\rightarrow+\infty$.</p>
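<p>These formulas are easy to sanity-check numerically. The sketch below (truncation depths chosen ad hoc) recomputes $F$ both from the $a_n$ series and directly from the joint Poisson probabilities, then compares a finite difference of $F$ with the closed form $F'(\lambda)=e^{-\lambda-1}I_0(2\sqrt{\lambda})$:</p>

```python
import math

def F_series(lam, N=80):
    a = lambda n: sum(1.0 / math.factorial(k) for k in range(n)) / math.factorial(n)
    return math.exp(-lam - 1) * sum(a(n) * lam**n for n in range(1, N))

def F_direct(lam, N=40):
    # P(X > Y) summed directly over the joint Poisson distribution
    return sum(math.exp(-lam) * lam**x / math.factorial(x)
               * math.exp(-1.0) / math.factorial(y)
               for x in range(N) for y in range(x))

def Fprime_closed(lam, N=80):
    # e^{-lam-1} I_0(2 sqrt(lam)), using I_0(2u) = sum_n u^{2n}/(n!)^2
    return math.exp(-lam - 1) * sum(lam**n / math.factorial(n)**2 for n in range(N))

assert abs(F_series(1.0) - F_direct(1.0)) < 1e-12
h = 1e-5
for lam in (0.5, 1.0, 2.0):
    fd = (F_series(lam + h) - F_series(lam - h)) / (2 * h)
    assert abs(fd - Fprime_closed(lam)) < 1e-7
```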
|
2,011,003 | <p>I stumbled upon this logic question in a math class recently. </p>
<p>My teacher told us that a statement that is never tested/is vacuous is true. For example, if I stated: "if team A wins the game, I am going to buy you a coke", and then team B goes on and wins the game, the statement would be true, regardless of whether I buy you a coke. Could anybody elaborate on how this can be the case, and why?</p>
<p>It came up as an explanation of why the empty set is both an open and a closed set.</p>
| amWhy | 9,003 | <p>Do you understand that the conditional $p\rightarrow q$ is true whenever $p$ is false, or whenever $q$ is true?</p>
<p>I think the best way of representing the truth of a conditional $p\rightarrow q$ is knowing that $p\rightarrow q$ IS TRUE, UNLESS both $p$ is true and $q$ is false.</p>
<p>With two variables, $p, q,$ there are four possible assignments of truth values, each represented below. </p>
<p><a href="https://i.stack.imgur.com/Y0bmw.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0bmw.gif" alt="enter image description here"></a></p>
<p>For example, let's say you promise "if $p$ then $q$", where $p$ := "Team A wins the game" and $q$ := "I'll buy you a Coke."</p>
<p>First of all, you never made any promise about what you'll give me if Team A loses. So if $\lnot p$, I can't accuse you of making a false statement, whether you give me a Coke or not. And in the case that team A wins and you buy me a Coke, well, you've made good on your promise.</p>
<p>The <strong>only</strong> way you'd be lying is if team A wins ($p$), and you don't buy me a Coke $(\lnot q).$</p>
<hr>
<p>Another classic example.</p>
<p>The empty set is a subset of every set.</p>
<p>Let's simply look at some arbitrary set $A$: if any set $B$ is a subset of $A$, then we know by definition that $b\in B \rightarrow b \in A$. Suppose now that $B =\varnothing$. Well, $\varnothing \subseteq A$ because if $b \in \varnothing,$ then $b\in A$. There's no $b\in \varnothing$, but the definition still holds because "$b \in \varnothing$" is false, which makes "$b \in \varnothing \rightarrow b \in A$" true.</p>
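<p>The truth table and the vacuous-truth convention can literally be executed. A small sketch (the sets are illustrative; Python's <code>not</code>/<code>or</code> mirror the connectives):</p>

```python
def implies(p, q):
    return (not p) or q

# the four rows of the truth table for p -> q
assert implies(True, True) is True
assert implies(True, False) is False
assert implies(False, True) is True
assert implies(False, False) is True

# vacuous truth: the empty set is a subset of any set
A = {1, 2, 3}
empty = set()
assert all(b in A for b in empty)   # no b to test, so the claim holds
assert empty.issubset(A)
```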
|
789,458 | <p>If one day we finally prove the normality of $\pi $, would we be able to say that we have ourselves a sure-fire <em>truly random</em> number generator?</p>
| qwr | 122,489 | <p>The defining property of randomness is <em>unpredictability</em>. $\pi$'s digits have been calculated to billions of digits and they are all public information. You might be able to use a randomly chosen range of the digits, but then you would need a separate method to select that range.</p>
|
1,095,621 | <p>I am looking for a way to integrate $$\int \sqrt{x^2-4}\ dx $$ using trigonometric substitutions. </p>
<p>All my attempts so far lead to complicated solutions that were uncomputable.</p>
| APGreaves | 191,763 | <p>For a trigonometric substitution, $ x = 2\sec \theta $ will work if you can integrate certain other trig functions.</p>
|
1,095,621 | <p>I am looking for a way to integrate $$\int \sqrt{x^2-4}\ dx $$ using trigonometric substitutions. </p>
<p>All my attempts so far lead to complicated solutions that were uncomputable.</p>
| Barry Cipra | 86,747 | <p>Following up on Aaron Maroja's (second) hint, note that</p>
<p>$$\sec\theta\tan^2\theta\ d\theta={\sin^2\theta\ d\theta\over\cos^3\theta}={\sin^2\theta\cos\theta\ d\theta\over\cos^4\theta}={s^2\ ds\over(1-s^2)^2}$$</p>
<p>where $s=\sin\theta$. Some tedious partial fractions can wrap things up:</p>
<p>$${s^2\over(1-s^2)^2}={A\over1-s}+{B\over(1-s)^2}+{C\over1+s}+{D\over(1+s)^2}$$</p>
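<p>For reference, carrying either route to the end gives (up to the constant of integration) $\int\sqrt{x^2-4}\,dx=\frac{x}{2}\sqrt{x^2-4}-2\ln\left|x+\sqrt{x^2-4}\right|+C$. This closed form is my addition rather than part of the answer above, but it is easy to verify by differentiation — numerically, for instance:</p>

```python
import math

def F(x):
    # candidate antiderivative of sqrt(x^2 - 4), valid for x > 2
    r = math.sqrt(x * x - 4)
    return 0.5 * x * r - 2.0 * math.log(x + r)

h = 1e-6
for x in (2.5, 3.0, 5.0, 10.0):
    fd = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(fd - math.sqrt(x * x - 4)) < 1e-6
```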
|
2,912,152 | <p>I know there is already a question about resolving a quadrilateral from three sides and two angles, but I want to ask about a special case. Firstly, two of the sides are known to be of equal size. Secondly, I'm only interested in the area, not in the remaining angles or lengths. Can anyone suggest a simple formula?</p>
<p><a href="https://i.stack.imgur.com/Z9nCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z9nCn.png" alt="enter image description here"></a></p>
| Daniel Schepler | 337,888 | <p>One point of view which might be useful would be the <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="nofollow noreferrer">Curry-Howard correspondence</a>: this point of view interprets (formal) proofs as being programs in a certain typed lambda calculus. (And the more typical informal proofs written in mathematical papers and textbooks correspond to pseudocode...)</p>
<p>So, for this problem, the "program" you need to build is one which takes inputs of a proof of $A \subseteq B$ and a proof of $B \subseteq C$, and outputs a proof of $A \subseteq C$. Furthermore, a "proof" of $A \subseteq B$ consists of a function which takes an object $x : V$ and a proof of $x \in A$ and outputs a proof of $x \in B$ -- and similarly for $B \subseteq C$ and $A \subseteq C$. So, what the typical proof that you outlined does to construct this program is: given inputs $x : V$ and $HA : x \in A$, it first feeds $x$ and $HA$ into the input proving $A \subseteq B$ to get a proof of $HB : x \in B$; then, it feeds $x$ and $HB$ into the input proving $B \subseteq C$ to get $HC : x \in C$. In summary, the proof ends up transforming to the "program"
$$ \lambda (A, B, C : V) (HAB : A \subseteq B) (HBC : B \subseteq C) . (\lambda (x : V) (HA : x \in A) . HBC(x, HAB(x, HA))). $$</p>
<p>Now, where I have been leading up to in this discussion is: the "programming language" for first-order logic allows for some types to be empty. On the other hand, you don't necessarily have to make special allowances for the possibility that one of the types might be empty in writing programs: as long as the proof "type-checks" then the corresponding program will automatically handle these cases correctly. So for example, if $A$ is an empty set, then for any $x : V$ then the type $x \in A$, consisting of all proofs that $x \in A$, will be an empty type; therefore, in this case it will not ever be possible for this function to be called since there is no possible input for the $HA$ parameter.</p>
<p>(On the other hand, the explosion principle mentioned in other answers corresponds to a family of operators $except_A : \bot \to A$, where $\bot$ is a canonical empty type and $A$ is an arbitrary type. What this means is: if at some point your program seems to be able to construct an element of the empty type $\bot$, then that means that branch of the code must be unreachable, so whatever type you're supposed to be returning from that part of the code, you can just return a synthetic value of that type in order to satisfy the type checker.)</p>
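<p>In a proof assistant this "program" can be written down verbatim. A sketch in Lean 4, with $A \subseteq B$ unfolded into a predicate implication (the names are illustrative):</p>

```lean
-- Transitivity of inclusion as a lambda term, mirroring the term in the text.
example {V : Type} (A B C : V → Prop)
    (hAB : ∀ x, A x → B x) (hBC : ∀ x, B x → C x) :
    ∀ x, A x → C x :=
  fun x hA => hBC x (hAB x hA)
```

<p>Type-checking this term is exactly the verification that the informal proof handles the empty cases correctly: if no <code>x</code> or no proof <code>hA</code> exists, the function is simply never called.</p>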
|
264,745 | <p>When I was learning statistics I noticed that a lot of things in the textbook I was using were phrased in vague terms of "this is a function of that" e.g. a statistic is a function of a sample from a distribution. I realized that while I know the definition of a function as a relation and I have an intuitive notion of what "function of" means, it's unclear to me how you transform this into a rigorous definition of "function of". So what is the actual definition of "function of"?</p>
| Community | -1 | <p>To answer this question, we must first ask ourselves "what is a variable?" What do I mean when I say that "$x$ is a real number-valued variable"?</p>
<p>I'm going to try and describe one useful approach.</p>
<p>We might think of $x$ as being a placeholder for an unknown but specific number. Or maybe a notation for expressing functions. But it is also useful to be able to consider the variable $x$ as simply <em>being</em> a real number, and not really any different from other real numbers like 0, 1, or $\pi$.</p>
<p>"But what is its value?" you might ask. That's easy: its value is $x$. "Is it positive, zero, or negative?" That one's easy too: the answer is "yes". Or more informatively, the truth value of the statement "$x$ is positive" is a variable too.</p>
<p>To distinguish modes of thought, let's reserve the term "real number" for the way we normally think, and use the term "scalar" to refer to real numbers in this new mode of thought.</p>
<p>If you can't wrap your head around this mode of thought, there are alternative semantics for this idea*: you can imagine there is some secret collection of "states", and every real number in this generalized sense is actually a real-valued function whose domain is the collection of states. e.g. in a physics context, the states might be the points in configuration space, and the scalars things like "temperature" or "the $x$-coordinate of the 17th particle".</p>
<p>The measure-theoretic notion of a random variable, or the analytic notion of a scalar field are very much examples of this sort of thing. (Which is why I chose the term "scalar")</p>
<p>Once you can wrap your head around scalars, you can imagine relationships between them. Just as $1$ and $2$ satisfy the relationship $1 + 1 = 2$, our real numbers $x$ and $y$ might satisfy the relationship $x + x = y$, or some more general sort of relationship $f(x,y) = 0$ for an <em>ordinary</em> function $f$. In this case, we say that $x$ and $y$ are functionally related. In the special case we can write $y = f(x)$, then we can say $y$ is a function of $x$.</p>
<p>(Why did I emphasize "ordinary" function? Just like it is useful to form the idea of $x$ being a variable number in the way I've described above, it is also useful to think of variable function in the same way; I wanted to emphasize that we are <em>not</em> doing that in the above paragraph)</p>
<p>If you are stuck thinking of scalars as functions of states, the notation $f(x,y)$ really means the function that sends the state $P$ to the number $f(x(P), y(P))$. A similar sort of composition happens when our scalars are random variables.</p>
<p>*: For those who know such things, I'm describing the internal logic of the topos of sheaves on a discrete space.</p>
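<p>The "secret collection of states" semantics is concrete enough to simulate. A toy sketch (the states, values, and $f$ are all invented for illustration), in which a scalar is literally a function on states and "$y$ is a function of $x$" means one ordinary $f$ works pointwise:</p>

```python
# Each "scalar" assigns a real number to every hidden state P.
states = ["P1", "P2", "P3"]

x = {"P1": 1.0, "P2": -2.0, "P3": 0.5}      # a variable real number
y = {P: x[P] + x[P] for P in states}        # y satisfies x + x = y pointwise

# "y is a function of x": one ordinary f with y(P) = f(x(P)) for every state P.
f = lambda t: 2 * t
assert all(y[P] == f(x[P]) for P in states)

# A statement about x, e.g. "x is positive", is itself a variable truth value.
x_positive = {P: x[P] > 0 for P in states}
assert x_positive == {"P1": True, "P2": False, "P3": True}
```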
|
3,267,499 | <p>Let <span class="math-container">$k$</span> be a field.
<span class="math-container">$k[x,y]$</span> is a UFD by the following known argument taken from <a href="https://en.wikipedia.org/wiki/Unique_factorization_domain" rel="nofollow noreferrer">wikipedia</a>:
"If <span class="math-container">$R$</span> is a UFD, then so is <span class="math-container">$R[X]$</span>, the ring of polynomials with coefficients in <span class="math-container">$R$</span>. Unless <span class="math-container">$R$</span> is a field, <span class="math-container">$R[X]$</span> is not a principal ideal domain. By induction, a polynomial ring in any number of variables over any UFD (and in particular over a field) is a UFD". </p>
<p><strong>(1)</strong> It seems that <span class="math-container">$k[x,x^{-1},y]$</span> is a UFD, isn't it?
Is the proof based on the result that <span class="math-container">$k[x,y]$</span> is a UFD? (probably yes?)</p>
<p><strong>(2)</strong> Can one find all irreducible=prime elements of <span class="math-container">$k[x,x^{-1},y]$</span>?
("In UFDs, every irreducible element is prime").</p>
<p>Thank you very much! </p>
| Bernard | 202,857 | <ol>
<li>The ring of Laurent polynomials <span class="math-container">$R=k[x,x^{-1}]=\bigl(k[x]\bigr)_{x}$</span>, and a ring of fractions of a U.F.D. is a U.F.D., so <span class="math-container">$k[x,x^{-1},y]=R[y]$</span> is a U.F.D.</li>
<li>You can find the irreducible elements of <span class="math-container">$k[x,x^{-1},y]$</span> inasmuch as you can find those of <span class="math-container">$k[x,y]$</span>. They're the same as the irreducible elements in <span class="math-container">$k[x,y]$</span> except <span class="math-container">$x$</span>, which becomes a unit in <span class="math-container">$k[x,x^{-1},y]$</span>. Which elements in <span class="math-container">$k[x,y]$</span> are irreducible depends on the base field.</li>
</ol>
|
3,090,448 | <p>I have the following question to complete.</p>
<p>Let <span class="math-container">$X$</span> be an inner product space. Let <span class="math-container">$(e_{j})_{j\geq1}$</span> be an orthonormal sequence in <span class="math-container">$X$</span>. Show that,
<span class="math-container">\begin{align}
\sum_{j=1}^{\infty}|(x|e_{j})(y|e_{j})|\leq\|x\|\|y\|,
\end{align}</span>
for all <span class="math-container">$x,y\in X$</span>.</p>
<p>I have tried to use the Cauchy-Schwarz inequality.</p>
<p><span class="math-container">\begin{align}
\sum_{j=1}^{\infty}|(x|e_{j})(y|e_{j})|\leq\|x\|\|y\|\sum_{j=1}^{\infty}\|e_{j}\|^{2}.
\end{align}</span>
However, that remaining sum, as far as I know, does not converge.</p>
<p>I tried to use Parseval's Identity, but that didn't work either. Can someone offer me a hint?</p>
| jmerry | 619,637 | <p>The Cauchy-Schwarz inequality should just have <span class="math-container">$\|x\|\cdot\|y\|$</span> on the right hand side; you've got the statement of it mixed up.</p>
<p>In my post, the notation <span class="math-container">$(u|e_j)$</span> denotes the component of <span class="math-container">$u$</span> with respect to the unit vector <span class="math-container">$e_j$</span>, which is equal to the inner product <span class="math-container">$\langle u,e_j\rangle$</span>. This notation will never be used for anything other than a member of our orthonormal set <span class="math-container">$e_j$</span> in the second position.</p>
<p>Let <span class="math-container">$x_n$</span> be the projection onto the space spanned by the first <span class="math-container">$n$</span> of the <span class="math-container">$e_j$</span>, and similarly for <span class="math-container">$y_n$</span>. Then
<span class="math-container">$$\sum_{j=1}^n (x|e_j)(y|e_j) = \langle x_n,y_n\rangle \le \|x_n\|\cdot \|y_n\| \le \|x\|\cdot\|y\|$$</span>
There's no sum of <span class="math-container">$\|e_i\|^2$</span> there. And the equality is simply the inner product with respect to the standard (orthonormal) basis in <span class="math-container">$n$</span> dimensions.</p>
<p>Putting the absolute values in? Well, we could always rotate the <span class="math-container">$(x|e_i)$</span> and <span class="math-container">$(y|e_i)$</span> components to be positive; by orthogonality, that wouldn't affect <span class="math-container">$\|x_n\|$</span>, <span class="math-container">$\|y_n\|$</span>, or any of the other components.</p>
<p>To clarify about the absolute values:<br>
Define <span class="math-container">$x'_n$</span> as follows: <span class="math-container">$x'_n=|(x|e_1)|e_1+|(x|e_2)|e_2+\cdots+|(x|e_n)|e_n$</span> - the sum of the absolute values of the components times the basis vectors - and similarly for <span class="math-container">$y'_n$</span>. We claim that <span class="math-container">$\|x'_n\|=\|x_n\|$</span> and <span class="math-container">$\|y_n'\|=\|y_n\|$</span>. Why? Because
<span class="math-container">$$\|x'_n\|^2=\sum_{j=1}^n |(x'_n|e_j)|^2 = \sum_{j=1}^n |(x|e_j)|^2=\|x_n\|^2$$</span>
Then, applying this,
<span class="math-container">$$\sum_{j=1}^n |(x|e_j)|\cdot |(y|e_j)| = \langle x_n',y_n'\rangle \le \|x_n'\|\cdot \|y_n'\| = \|x_n\|\cdot \|y_n\| \le \|x\|\cdot \|y\|$$</span>
by Cauchy-Schwarz for the first inequality and Parseval for the second. Done.</p>
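<p>The chain of inequalities can be spot-checked numerically with a finite orthonormal family — a sketch in <span class="math-container">$\mathbb{R}^6$</span> with the first four standard basis vectors (the dimensions and sample count are arbitrary):</p>

```python
import math
import random

random.seed(0)
n = 6
# orthonormal sequence: the first 4 standard basis vectors of R^6
E = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(4)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(n)]
    y = [random.uniform(-1, 1) for _ in range(n)]
    lhs = sum(abs(inner(x, e) * inner(y, e)) for e in E)
    rhs = math.sqrt(inner(x, x)) * math.sqrt(inner(y, y))
    assert lhs <= rhs + 1e-12
```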
|
116,037 | <p>I would warmly appreciate it if someone could tell me whether the following question has an affirmative answer. I am new to the field of commutative algebra, so I am simply trying to fill in some (huge) gaps. Thanks!</p>
<p>Let $ (R,{\frak{m}}) $ be a Noetherian local (commutative unital) ring. Let $ I $ be an ideal of $ R $ with minimal generating set $ \lbrace x_{1},\ldots,x_{n} \rbrace $, and let $ \beta: R^{n} \rightarrow I $ be the surjective $ R $-linear map defined by $ \beta(r_{1},\ldots,r_{n}) = r_{1} x_{1} + \cdots + r_{n} x_{n} $. Viewing $ I $ as an $ R $-module, does there exist a free resolution of $ I $ of the form
$$
0 \longrightarrow R^{n-1} \stackrel{\alpha}{\longrightarrow} R^{n} \stackrel{\beta}{\longrightarrow} I \longrightarrow 0,
$$
where the map $ \alpha $ is left-multiplication by some matrix $ M \in {\text{M}_{n \times (n-1)}}(R) $?</p>
| Steven Sam | 321 | <p>The resolution that you are asking for exists for a special class of ideals, namely the perfect codimension 2 ideals. Here perfect means that a finite resolution exists, and codimension 2 means roughly (in any reasonable situation of classical algebraic geometry, at least) that $\dim R/I = \dim R - 2$.</p>
<p>The Hilbert--Burch theorem <a href="http://en.wikipedia.org/wiki/Hilbert%E2%80%93Burch_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Hilbert%E2%80%93Burch_theorem</a>
classifies all such ideals: the generators you speak of are the maximal minors of an $n \times (n+1)$ matrix. Conversely, any such ideal is perfect (and its resolution has the form you mentioned) if and only if it has codimension 2.</p>
<p>Remark: this is one of a few special cases where you can classify the structure of an ideal just by how its resolution looks. Some others of note come from Koszul complexes (ideals generated by a regular sequence) and the Buchsbaum--Eisenbud complex (codimension 3 Gorenstein ideals). Anyway I think this is a really neat subject and questions like the one you're asking (in the comment) are a nice gateway into the subject.</p>
|
222,555 | <p>I would like to find a simple equivalent of:</p>
<p>$$ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $$</p>
<p>We have:</p>
<p>$$ 0\leq u_{n}\leq \frac{1}{n!}\left(\frac{\pi}{2}\right)^n \rightarrow0$$</p>
<p>So $$ u_{n} \rightarrow 0$$</p>
<p>Clearly:</p>
<p>$$ u_{n} \sim \frac{1}{n!} \int_{\sin(1)}^1 (\arcsin x)^n \mathrm dx $$</p>
<p>But is there a simpler equivalent for $u_{n}$?</p>
<p>Using integration by part:</p>
<p>$$ \int_0^1 (\arcsin x)^n \mathrm dx = \left(\frac{\pi}{2}\right)^n - n\int_0^1 \frac{x(\arcsin x)^{n-1}}{\sqrt{1-x^2}} \mathrm dx$$</p>
<p>But the relation </p>
<p>$$ u_{n} \sim \frac{1}{n!} \left(\frac{\pi}{2}\right)^n$$</p>
<p>seems to be wrong...</p>
| Julián Aguirre | 4,791 | <p>This is not a complete answer, but an improved inequality. From
$$
\arcsin x\le \frac{\pi}{2}\,x
$$
you get
$$
u_n\le\frac{1}{(n+1)!}\Bigl(\frac{\pi}{2}\Bigr)^n.
$$</p>
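<p>The improved bound is easy to confirm numerically — a midpoint-rule sketch (the grid size is ad hoc):</p>

```python
import math

def u(n, m=20000):
    # midpoint rule for (1/n!) * integral_0^1 arcsin(x)^n dx
    h = 1.0 / m
    s = sum(math.asin((i + 0.5) * h) ** n for i in range(m)) * h
    return s / math.factorial(n)

# u_1 = pi/2 - 1 exactly; and u_n stays below (pi/2)^n / (n+1)!
assert abs(u(1) - (math.pi / 2 - 1)) < 1e-3
for n in (1, 2, 3, 5, 10):
    assert 0 < u(n) <= (math.pi / 2) ** n / math.factorial(n + 1)
```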
|
1,299,266 | <p>How many zeros are there in the number $50!$?</p>
<p>My attempt:</p>
<p>The zeros in every number come from the 10s that make up the number. The 10s are, in turn, made up of 2s and 5s.</p>
<p>So: $\frac{50}{5*2} = 5$ zeros?</p>
| Khosrotash | 104,171 | <p>The number of trailing zeros of $n!$ equals the exponent of $5$ in $n!$: $$\sum_{k=1}^{\infty}\left \lfloor \frac{n}{5^k} \right \rfloor=\left \lfloor \frac{50}{5} \right \rfloor+\left \lfloor \frac{50}{5^2} \right \rfloor+\left \lfloor \frac{50}{5^3} \right \rfloor+\dots=10+2+0=12,$$ so there are $12$ trailing zeros at the end of $50!$.</p>
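<p>Both the formula and the direct count are a few lines of code apiece (a sketch):</p>

```python
import math

def legendre_trailing_zeros(n):
    # exponent of 5 in n!: sum of floor(n / 5^k)
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

def direct_trailing_zeros(n):
    s = str(math.factorial(n))
    return len(s) - len(s.rstrip("0"))

assert legendre_trailing_zeros(50) == 12
assert direct_trailing_zeros(50) == 12
```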
|
311,153 | <p>I'm trying to evaluate the following definite integral:
$$\int_{1-x^2}^{1+x^2}{\ln(t^2)\ dt}$$
I'm not sure if I can use Barrow's rule; I think I have to use the fundamental theorem of calculus, but I'm not sure. How can I solve it?</p>
| Mikasa | 8,581 | <p>If you set $$f(x)=\int_{1-x^2}^{1+x^2}{\ln(t^2)dt}$$ then according to the Fundamental Theorem of Calculus we get $$f'(x)=4x\ln(1-x^4).$$ You can then use integration by parts to recover $f$ itself from this derivative. It takes time to evaluate, so I personally prefer @experimentX's point of view.</p>
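<p>The derivative can be verified numerically: since $\ln(t^2)$ has antiderivative $2(t\ln|t|-t)$, the function $f$ has a closed form, and its difference quotients should match $4x\ln(1-x^4)$ for $|x|<1$ (my own check, independent of the route above):</p>

```python
import math

def f(x):
    # f(x) = integral of ln(t^2) from 1-x^2 to 1+x^2,
    # using the antiderivative 2(t ln|t| - t)
    G = lambda t: 2.0 * (t * math.log(abs(t)) - t)
    return G(1 + x * x) - G(1 - x * x)

h = 1e-6
for x in (0.2, 0.5, 0.8):
    fd = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(fd - 4 * x * math.log(1 - x ** 4)) < 1e-6
```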
|
1,553,391 | <p>Let $E$ be a measurable set of finite measure and $1\leq p_1 < p_2 \leq \infty$. Then $L^{p_2} (E) \subseteq L^{p_1} (E)$. Furthermore, $||f||_{p_1} \leq c \cdot ||f||_{p_2}$ for all $f$ in $L^{p_2}(E)$, where $c =[m(E)]^{\frac{p_2-p_1}{p_1p_2}} $ if $p_2<\infty$ and $c=[m(E)]^{\frac{1}{p_1}}$ if $p_2 =\infty$. </p>
<p>I need help with the second part ($p_2= \infty$)</p>
<p>Here is the proof of the first part: $p_2 < \infty$</p>
<p>Define $p = \frac{p_2}{p_1}> 1 $ and $q$ be the conjugate of $p$.
Let $f \in L^{p_2}(E)$; then we can prove that $f^{p_1} \in L^p(E)$, and let $g=\chi_E \in L^q(E)$ (since $m(E)<\infty$).
By Hölder's inequality we have:
$\int_E |f|^{p_1} = \int_E |f|^{p_1} g \leq ||f^{p_1}||_p ||g||_q = \Big[\int_E (|f|^{p_1})^{\frac{p_2}{p_1}} \Big]^{\frac{p_1}{p_2}} \cdot \Big[\int_E |g|^q \Big]^{\frac{1}{q}}= ||f||_{p_2}^{p_1} \Big[ m(E)\Big]^{\frac{1}{q}}$</p>
<p>$\Rightarrow \text{ by power to }\frac{1}{p_1}$ we get $||f||_{p_1} \leq \Big[m(E)\Big]^{\frac{p_2-p_1}{p_1p_2}} ||f||_{p_2}$</p>
<p>Can anyone help me with the second part: $p_2= \infty$?</p>
| Robert Israel | 8,508 | <p>It's easy. If $f \in L^\infty(E)$, then $ \int_E |f|^{p_1} \le \int_E \|f\|_\infty^{p_1} = \|f\|_\infty^{p_1} m(E)$, and taking $p_1$-th roots gives $\|f\|_{p_1} \le [m(E)]^{1/p_1}\,\|f\|_\infty$.</p>
|
71,608 | <p>Consider the following question:</p>
<p>Is there a family $\mathcal{F}$ of subsets of $\aleph_\omega$ that satisfies the following properties?</p>
<p>(1) $|\mathcal{F}|=\aleph_\omega$</p>
<p>(2) For all $A\in \mathcal{F}$, $|A|<\aleph_\omega$</p>
<p>(3) For all $B\subset \aleph_\omega$, if $|B|<\aleph_\omega$, then there exists some $B'\in \mathcal{F}$ such that $B\subset B'$.</p>
<p>I am not sure if there is anything special about $\aleph_\omega$, but this was the example that came up. </p>
<p>Any help?</p>
| Santi Spadaro | 11,647 | <p>This question has been already answered thoroughly. I just wanted to address the OP's comment "I am not sure if there is anything special about $\aleph_\omega$".</p>
<p>Actually, there is nothing special about $\aleph_\omega$ other than the fact that it's a singular cardinal. Let $\kappa$ be a cardinal and let $S(\kappa)$ be the following statement:</p>
<blockquote>
<p>There is a family $\mathcal{F} \subset [\kappa]^{<\kappa}$ such that $|\mathcal{F}|=\kappa$ and for every $F \in [\kappa]^{<\kappa}$ there is $G \in \mathcal{F}$ such that $F \subset G$. </p>
</blockquote>
<p>Then $S(\kappa)$ holds if and only if $\kappa$ is a regular cardinal.</p>
<p>But things become more complicated if we just consider subsets of $\kappa$ of a fixed cardinality smaller than $\kappa$. For example, let $C(\kappa)$ be the statement:</p>
<blockquote>
<p>There is a family $\mathcal{F} \subset [\kappa]^{\aleph_0}$ such that $|\mathcal{F}|=\kappa$ and for every $F \in [\kappa]^{\aleph_0}$ there is $G \in \mathcal{F}$ such that $F \subset G$. </p>
</blockquote>
<p>Then $C(\aleph_n)$ is true for every $0< n< \omega$, $C(\aleph_\omega)$ is false for essentially the same reason that $S(\aleph_\omega)$ is false, but the truth value of $C(\aleph_{\omega+1})$ depends on your set theory. Namely, if there is an $\aleph_{\omega+1}$-sized family of countable subsets of $\aleph_\omega$ which is cofinal in $([\aleph_\omega]^\omega, \subseteq)$ then $C(\aleph_{\omega+1})$ is true, while if $cof([\aleph_\omega]^\omega, \subseteq) \geq \aleph_{\omega+2}$ (which is consistent with ZFC, modulo large cardinals) then $C(\aleph_{\omega+1})$ is clearly false... </p>
|
127,493 | <p>How many numbers less than $k$ contain the digit $3$?
For instance:</p>
<p>How many numbers contain the digit $3$ in the following list?</p>
<pre><code>Table[n, {n, 33}]
</code></pre>
<p>$\lbrace 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, \
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33\rbrace$</p>
<p>I tried: </p>
<pre><code>numbers[k_] := Count[Table[n, {n, k}], 3]
</code></pre>
<p>but it doesn't work.</p>
<p>Then I want to find the limit</p>
<pre><code>Limit[numbers[k]/k, k -> Infinity]
</code></pre>
<p>( <a href="https://www.youtube.com/watch?v=UfEiJJGv4CE" rel="nofollow">See Numberphile video here.</a>)</p>
| Nasser | 70 | <p>One way to find out why M can not do an indefinite integral, is to run Rubi integration package on it. Since Rubi shows step by step integration, most of the time, when Rubi gets stuck on a step, it will also be the same with Mathematica. This can point out which part of the larger integral where M was not able to do.</p>
<p>When doing this, here is a small integral, generated during Rubi's steps, that shows possible place where M gave up.</p>
<pre><code> Integrate[x/(x - Log[x]), x]
</code></pre>
<p><img src="https://i.stack.imgur.com/mz4fZ.png" alt="Mathematica graphics"></p>
<p>The above can't be integrated. Rubi can't do it. M can't do it. Maple can't do it. Maxima can't do it. If all these programs can't do it, then I do not think we humans have any chance, so I will not even try.</p>
<p>The above is just a small piece of what your integral breaks up into. </p>
<p>The point is, not every integral can be solved analytically.</p>
<p>Rubi input</p>
<pre><code>Int[(x^2 + 2 x + 1 + (3 x + 1) Sqrt[x + Log[x]])/(x Sqrt[
x + Log[x]] (x + Sqrt[x + Log[x]])), x]
</code></pre>
<p>Rubi output at the step it got stuck</p>
<pre><code>Log[Log[25*x]] + Dist[2, Int[x/(-x + x^2 - Log[x]), x], x] +
Dist[2, Int[1/Sqrt[x + Log[x]], x], x] -
Int[1/(-x + x^2 - Log[x]), x] -
Int[1/((-1 + x)*(-x + x^2 - Log[x])), x] +
Int[1/((-1 + x)*x*(-x + x^2 - Log[x])), x] +
Int[1/Sqrt[x + Log[x]], x] +
Int[1/(x*Sqrt[x + Log[x]]), x] +
Int[(1 + x - 2*x^2)/((-x + x^2 - Log[x])*Sqrt[x + Log[x]]), x] -
Int[1/((-1 + x)*Log[25*x]), x] +
Int[1/((-1 + x)*x*Log[25*x]), x]
</code></pre>
<p>In all the above, wherever you see <code>Int[.....]</code> remaining, it means that piece could not be integrated. Pick any one and try it. </p>
<p><img src="https://i.stack.imgur.com/xMWy0.png" alt="Mathematica graphics"></p>
<p>So your integral generated 10 or 15 smaller integrals that can't be solved. So it is better to start with trying to solve one of the smaller ones first.</p>
<p>Version 11 on windows 7.</p>
<p><img src="https://i.stack.imgur.com/HMs4x.png" alt="Mathematica graphics"></p>
|
1,156,907 | <p>I don't know anything about measure theory; I'm studying real analysis and this showed up in the book I'm reading as a way to characterize integrable functions. The author defined that a subset $X \subset \mathbb{R}$ has measure zero if for each $\epsilon > 0$ we can find countably many open intervals $I_n$ such that $X \subset \bigcup_{n=1}^{\infty}I_n$ and $\sum_{n=1}^{\infty} |I_n| < \epsilon $ where $|I|$ is the length of $I$, as in, if $I = (a,b)$, then $|I| = b - a$.</p>
<p>Now, the author gives the following proof that the countable union of measure-zero sets has measure zero:</p>
<p>"Let $Y =\bigcup_{i=1}^{\infty} X_i $, where each $X_i$ has measure zero. Now, given $\epsilon > 0 $ we can, for each $n$, write $X_n \subset \bigcup_{i=1}^{\infty} I_{n_i}$ where each $I_{n_i}$ is an open subset and $\sum_{i=1}^{\infty}|I_{n_i}| < \epsilon / 2^n$. Therefore, $Y \subset \bigcup_{n,j=1}^{\infty} I_{n_J}$ where $\sum_n \sum_j |I_{n_J}| < \sum_{i=1}^{\infty} \epsilon /2^n = \epsilon$. Therefore, $m(Y) = 0$"</p>
<p>I'm really confused about the ending. It is very intuitive, but it's not rigorous enough for me, I wanna see formally why this holds:</p>
<p>$\sum_n \sum_j |I_{n_j}| < \sum_{n=1}^{\infty} \epsilon /2^n$</p>
<p>Like maybe looking at the definition of a series, the limit of the sequence of partial sums. Any help?</p>
| Hagen von Eitzen | 39,174 | <p>That's just how measures work. We start by defining that the measure of an open interval $(a,b)$ is $b-a$. Then we can attempt to define the (outer) measure of an arbitrary set $A$ as the infimum of all $\sum_{i\in J}\mu(I_i)$ where the $I_j$ are open intervals and $A\subseteq \bigcup_{i\in J}I_i$.
A few observations:</p>
<ul>
<li>If more than countably many of the $\mu(I_i)$ are nonzero, then certainly $\sum_{i\in J}\mu(I_i)=\infty$. Therefore, and as any set $A\subseteq \mathbb R$ allows a countable cover, we may restrict to the case that $J$ is countable.</li>
<li>With the extended definition, we still have $\mu((a,b))=b-a$. One shows by induction that $\sum_{i\in J}\mu(I_i)\ge b-a$ if $A\subseteq \bigcup_{i\in J}I_i$ with finite $J$, and then extends this to the case of countable $J$ (and larger $J$ need not be considered)</li>
</ul>
<p>Now given countably many $X_k$, $k\in\mathbb N$, with $\mu(X_k)=0$, and given $\epsilon>0$, by definition as infimum we find covers $X_k\subseteq \bigcup_{j\in J_k}I_{k,j}$ with $\sum_{j\in J_k}\mu(I_{k,j})<2^{-k}\epsilon$. By the above we may assume that $J_k=\mathbb N$. Then
$$ X:=\bigcup_{k\in\mathbb N}X_k\subseteq\bigcup_{k\in\mathbb N}\bigcup_{j\in\mathbb N}I_{k,j}=\bigcup_{(k,j)\in\mathbb N\times\mathbb N}I_{k,j}$$
with
$$ \sum_{(k,j)\in\mathbb N\times\mathbb N}\mu(I_{k,j})=\sum_{k\in\mathbb N}\sum_{j\in\mathbb N}\mu(I_{k,j})<\sum_{k\in\mathbb N}2^{-k}\epsilon=\epsilon$$
so that $\mu(X)<\epsilon$. As $\epsilon$ was an arbitrary positive number, the infimum over all $\sum_{i\in J}\mu(I_i)$ for covers $X\subseteq \bigcup_{i\in J}I_i$ is certainly $0$.</p>
|
374,209 | <p>Need to show that if $f$ is a glide reflection then there is only one line $L$
such that $f(L) = L$</p>
<p>What I know is that a glide reflection is an isometry </p>
<p>$$f(z)=a\bar{z}+b,$$ such that $|a|=1$ and $a\bar{b}+b\neq0$.</p>
<p>Now assume that two lines $L_1$ and $L_2$ such that are axes for this glide reflection. Take $x_1 \in L_1$ and $x_2 \in L_2$. Since glide reflection maps given point to some new "location", then</p>
<p>$$f(x_1)=a\bar{x_1}+b=a\bar{x_2}+b=f(x_2)$$</p>
<p>But then $\bar{x_1}=\bar{x_2}$ and consequently $x_1=x_2$ and two lines are the same.</p>
<p>Could the suffice for a proof? Thanks!</p>
| rschwieb | 29,335 | <p>Here is what I arrived at.</p>
<p>We produce the axis with some vector algebra.</p>
<p>By computing $f(f(z))=z+b+a\overline{b}$, we can see the expected result that the translation that occurs along the axis is by the nonzero amount $t=(b+a\overline{b})/2$. The translation helps us by showing us the direction of the axis, but we would still like to find a point on the axis. </p>
<p>We will know when we've found such a point $x$ if $f(x)-x=t$, because that will indicate that the point was translated along the axis of reflection and experienced no reflection across the axis.</p>
<p>Working with a little vector algebra you can compute that $x=(b-a\overline{b})/4$ yields $f(x)=(a\overline{b}+3b)/4$, and $f(x)-x=t$. I found these two points by considering $0$ and $f(0)$ and drawing a parallelogram with sides the vector $t$, and I surmised the axis of reflection went through this parallelogram (parallel to $t$).</p>
<p>So, basic vector algebra says that this axis is $L=\{\lambda t+x\mid \lambda\in \Bbb R\}$. From this you can compute that $f(\lambda t+x)=(\lambda+1)t+x\in L$.</p>
<hr>
<p>Finally, I wanted to offer an argument for uniqueness that doesn't require computation. I haven't tried to translate it into a computational argument, but in principle the idea should translate.</p>
<p>Suppose there are two axes $L_1$ and $L_2$ such that $f(L_i)=L_i$. By the hypotheses, $b\neq 0$, so $f$ has no fixed points. Note that if the two axes intersect at a point $p$, then in order for $p$ to stay on both lines, it must be a fixed point of $f$. Thus we have a contradiction unless the two axes are parallel.</p>
<p>Now if they are parallel but distinct, $L_1$ must be reflected to the "other side" of $L_2$. But since both lines are fixed under $f$, this is an absurdity. So in fact, $L_1$ and $L_2$ have to be the same line.</p>
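The formulas in this answer can be verified numerically; a Python sketch (the particular $a$ and $b$ are my choices) checking that $x$ lies on the axis and that $f$ translates each point of $L$ by $t$ along the axis:

```python
import cmath

a = cmath.exp(0.7j)              # any a with |a| = 1 works
b = 2 + 1j                       # chosen so that a*conj(b) + b != 0
f = lambda z: a * z.conjugate() + b

t = (b + a * b.conjugate()) / 2  # translation along the axis
x = (b - a * b.conjugate()) / 4  # a point on the axis

assert abs(a * b.conjugate() + b) > 1e-12   # genuine glide reflection
assert abs(f(x) - x - t) < 1e-12            # x is translated by t, not reflected
for lam in (-2.0, 0.0, 3.5):
    z = lam * t + x
    assert abs(f(z) - (z + t)) < 1e-12      # the axis L maps into itself
print("axis checks pass")
```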
|
1,793,854 | <p>I am stuck on solving this question. What should I do first in order to get the answer?</p>
<p><a href="https://i.stack.imgur.com/hE4rG.png" rel="nofollow noreferrer">This is the trigonometric function</a></p>
<p>$$ \lim \limits_{x \rightarrow 0} \frac{(a+x)\sec(a+x) - a \sec(a)}{x} $$</p>
| AbstractSage | 296,761 | <p>Changing into cosines greatly eases the manipulation of terms.
$$
\begin{align}
\lim \limits_{x \rightarrow 0} \frac{(a+x)\sec(a+x) - a \sec(a)}{x}
& = \lim \limits_{x \rightarrow 0} \frac{a\sec(a+x) - a \sec(a)}{x} + \lim \limits_{x \rightarrow 0} \frac{x\sec(a+x)}{x} \\
& = A + B
\end{align}
$$
$$
\begin{align}
A & = \lim \limits_{x \rightarrow 0} \frac{a\sec(a+x) - a \sec(a)}{x} \\
& = a\lim \limits_{x \rightarrow 0} \frac{\cos(a) - \cos(a+x)}{\cos(a)\cos(a+x)x} \\
& = a\lim \limits_{x \rightarrow 0} 2\frac{\sin(a + x/2) \sin(x/2)}{\cos(a)\cos(a+x)x} \\
& = a\lim \limits_{x \rightarrow 0} \; \frac{\sin(a + x/2)}{\cos(a)\cos(a+x)} \frac {\sin(x/2)}{x/2} \\
& = a\tan(a)\sec(a)
\end{align}
$$
$$
B = \sec(a)
$$</p>
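The limit is just the derivative of $x \sec x$ at $x = a$, so $A + B = \sec(a)\,(1 + a\tan(a))$ can be checked against a difference quotient; a Python sketch (the test point $a = 0.5$ is my choice):

```python
import math

def g(z: float) -> float:
    return z / math.cos(z)  # z * sec(z)

a, h = 0.5, 1e-6
limit_numeric = (g(a + h) - g(a)) / h
limit_claimed = (1 + a * math.tan(a)) / math.cos(a)  # B + A = sec(a)(1 + a tan(a))
print(abs(limit_numeric - limit_claimed) < 1e-4)  # True
```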
|
1,738,968 | <blockquote>
<p>Let $V$ be a vector space and let $T \in \operatorname{End}(V)$. If $\operatorname{rank}(T)$ and $\operatorname{null}(T)$ are finite, prove that $\dim(V)$ is finite.</p>
</blockquote>
<p>I cannot use the Rank-Nullity Theorem as it only applies to finite-dimensional vector spaces, and I don't know whether $V$ is finite- or infinite-dimensional. </p>
| Vishnu | 693,070 | <blockquote>
<p>This answer is preserved for those who want to understand why the sign of constants do not matter. After reading this answer please check the comments for more details from @mathlove.</p>
</blockquote>
<p>@mathlove's answer really explains the question. But I would like to show that the condition "<span class="math-container">$c_1,c_2$</span> are of the same sign" <strong>is</strong> relevant to our study here, in contrast to what @mathlove's answer suggests. </p>
<p>This is my hypothesis:</p>
<p>Consider two lines represented by <span class="math-container">$a_1x+b_1y+c_1=0$</span> and <span class="math-container">$a_2x+b_2y+c_2=0$</span>. Based on the nature of signs of <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span>, we have two cases:</p>
<p><strong>Case I: Both <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span> are of same sign:</strong></p>
<p><span class="math-container">$a_1x+b_1y+c_1=0$</span> and <span class="math-container">$a_2x+b_2y+c_2=0$</span> can also be represented as (multiplying both sides by <span class="math-container">$-1$</span>) <span class="math-container">$-a_1x-b_1y-c_1=0$</span> and <span class="math-container">$-a_2x-b_2y-c_2=0$</span> respectively. In both, the original equation and the negated equation the sign of <span class="math-container">$a_1a_2+b_1b_2$</span> remains the same. So, "<span class="math-container">$c_1,c_2$</span> are of same sign" seems to be irrelevant.</p>
<p>Now consider,</p>
<p><strong>Case II: Both <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span> are of opposite signs:</strong></p>
<p>Let us consider <span class="math-container">$c_1=+p$</span> and <span class="math-container">$c_2=-q$</span> where <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are positive real numbers. </p>
<p>So, <span class="math-container">$a_1x+b_1y+p=0$</span> and <span class="math-container">$a_2x+b_2y-q=0$</span> are the equations of the lines under consideration. Let <span class="math-container">$a_1a_2+b_1b_2=r$</span> where <span class="math-container">$r$</span> is any real number, positive or negative.</p>
<p>The equation of the second line can also be represented as <span class="math-container">$-a_2x-b_2y+q=0$</span> by multiplying by <span class="math-container">$-1$</span> on both sides. Now, <span class="math-container">$-a_1a_2-b_1b_2=-r$</span> clearly of opposite sign compared to the previous form. </p>
<p><strong>Conclusion:</strong></p>
<p>"<span class="math-container">$c_1,c_2$</span> are of same sign (or of opposite sign)" is <strong>relevant</strong> to our study here. </p>
|
3,822,042 | <p>For any function <span class="math-container">$f : X \rightarrow Y$</span> and any subset A of Y, define
<span class="math-container">$$f^{-1}(A) = \{x \in X: f(x) \in A\}$$</span> Let <span class="math-container">$A^c$</span> denote the complement of A in Y. For subsets <span class="math-container">$A_1,A_2$</span> of Y, consider the following statements:</p>
<p>(i) <span class="math-container">$ f^{-1} (A^c_1 \bigcap A^c_2) = (f^{-1}(A_1))^c \bigcup (f^{-1}(A_2))^c $</span></p>
<p>(ii) If <span class="math-container">$ f^{-1} (A_1) = f^{-1} (A_2)$</span> then <span class="math-container">$A_1 = A_2 $</span></p>
<p>Then which of the above statements are always true?</p>
<p>My effort: The first statement can not be true unless <span class="math-container">$A_1 = A_2$</span>. So that's not always true. For the 2nd statement, let x = <span class="math-container">$ f^{-1}(A_1) = f^{-1}(A_2)$</span>, then <span class="math-container">$f(x) = A_1 = A_2$</span>. Since f is a function, f can not have two different values of f(x) for the same value of x. That's what I read in books. E.g. the relation <span class="math-container">$y^2 = x$</span> actually gives two functions, <span class="math-container">$y=+\sqrt x$</span> and <span class="math-container">$y=-\sqrt x$</span>, since for the same x there are two values of y. So, my answer is: (ii) is always true, (i) is not always true.</p>
<p>But the answer given is, neither (i) nor (ii) is always true. Any pointers on where my understanding is incorrect, is highly appreciated.</p>
| N. F. Taussig | 173,070 | <p>Let <span class="math-container">$b, g, r$</span> denote, respectively, the numbers on the blue, green, and red cards. Then we want to find the number of solutions of the equation
<span class="math-container">$$b + g + r = 16 \tag{1}$$</span>
subject to the restrictions <span class="math-container">$3 \leq b \leq 9$</span>, <span class="math-container">$3 \leq g \leq 7$</span>, and <span class="math-container">$4 \leq r \leq 8$</span>.</p>
<p>We can convert this to the equivalent problem in the nonnegative integers. Let <span class="math-container">$b' = b - 3$</span>, <span class="math-container">$g' = g - 3$</span>, and <span class="math-container">$r' = r - 4$</span>. Then <span class="math-container">$b'$</span>, <span class="math-container">$g'$</span>, and <span class="math-container">$r'$</span> are nonnegative integers satisfying <span class="math-container">$b' \leq 6$</span>, <span class="math-container">$g' \leq 4$</span>, <span class="math-container">$r' \leq 4$</span>. Substituting <span class="math-container">$b' + 3$</span> for <span class="math-container">$b$</span>, <span class="math-container">$g' + 3$</span> for <span class="math-container">$g$</span>, and <span class="math-container">$r' + 4$</span> for <span class="math-container">$r$</span> in equation 1 yields
<span class="math-container">\begin{align*}
b' + 3 + g' + 3 + r' + 4 & = 16\\
b' + g' + r' & = 6 \tag{2}
\end{align*}</span>
Equation 2 is an equation in the nonnegative integers. A particular solution of equation 2 corresponds to the placement of <span class="math-container">$3 - 1 = 2$</span> addition signs in a row of six ones. For instance,
<span class="math-container">$$1 1 1 + 1 1 + 1$$</span>
corresponds to the solution <span class="math-container">$b' = 3, g' = 2, r' = 1$</span> of equation 2 and <span class="math-container">$b = 6, g = 5, r = 5$</span> of equation 1, while
<span class="math-container">$$+ 1 1 + 1 1 1 1$$</span>
corresponds to the solution <span class="math-container">$b' = 0, g' = 2, r' = 4$</span> of equation 2 and <span class="math-container">$b = 3, g = 5, r = 8$</span> of equation 1. The number of solutions of equation 2 in the nonnegative integers is the number of ways we can insert <span class="math-container">$3 - 1 = 2$</span> addition signs in a row of <span class="math-container">$6$</span> ones, which is
<span class="math-container">$$\binom{6 + 3 - 1}{3 - 1} = \binom{8}{2}$$</span>
since we must choose which <span class="math-container">$2$</span> of the <span class="math-container">$8$</span> positions required for six ones and two addition signs will be filled with addition signs.</p>
<p>However, these solutions include those that violate the restrictions <span class="math-container">$g' \leq 4$</span> or <span class="math-container">$r' \leq 4$</span>. Notice that both restrictions cannot be violated simultaneously since <span class="math-container">$2 \cdot 5 > 6$</span>.</p>
<p>There are two ways to select the variable which exceeds <span class="math-container">$4$</span>. Suppose it is <span class="math-container">$g'$</span>. Then <span class="math-container">$g'' = g' - 5$</span> is a nonnegative integer. Substituting <span class="math-container">$g'' + 5$</span> for <span class="math-container">$g'$</span> in equation 2 yields
<span class="math-container">\begin{align*}
b' + g'' + 5 + r' & = 6\\
b' + g'' + r' & = 1 \tag{3}
\end{align*}</span>
Equation 3 is an equation in the nonnegative integers with
<span class="math-container">$$\binom{1 + 3 - 1}{3 - 1} = \binom{3}{2}$$</span>
solutions. Hence, there are
<span class="math-container">$$\binom{2}{1}\binom{3}{2}$$</span>
solutions of equation 2 which violate the restriction <span class="math-container">$g' \leq 4$</span> or <span class="math-container">$r' \leq 4$</span>.</p>
<p>Therefore, the number of admissible solutions of equation 2 is
<span class="math-container">$$\binom{8}{2} - \binom{2}{1}\binom{3}{2} = 22$$</span>
as you found.</p>
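The final count can be confirmed by brute force; a short Python sketch enumerating all admissible $(b, g, r)$:

```python
from itertools import product

count = sum(
    b + g + r == 16
    for b, g, r in product(range(3, 10), range(3, 8), range(4, 9))
)
print(count)  # 22, matching the inclusion-exclusion computation
```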
|
1,627,713 | <p>This is maybe math $101$ question:</p>
<p>Let $z_1=1+i$.</p>
<p>I know that $r=\sqrt 2$ and $\theta=\arctan(1/1)=\pi/4$ so $$z_1=\color{blue}{\sqrt 2e^{i\pi/4}} .$$</p>
<p>But now if I take a look at</p>
<p>$z_2=-1-i$,</p>
<p>I know that $r=\sqrt 2$ and $\theta=\arctan(-1/-1)=\pi/4$ so $$z_2=\color{blue}{\sqrt 2e^{i\pi/4}}.$$</p>
<p>But $z_2$ should be equal to $$\color{red}{\sqrt 2 e^{5i\pi/4}} .$$</p>
<p>Why, for $z_2$, should I add $\pi$ to the argument in the exponent?</p>
| Mankind | 207,432 | <p>Let $a=(x_1,y_1)$ and $b=(x_2,y_2)$ be two points of $\Bbb{R}^2\setminus\Bbb{E}$. By definition of $\Bbb{E}$, at least one coordinate of both $a$ and $b$ must be irrational, so suppose for instance that $x_1$ and $y_2$ are irrational.</p>
<p>You are right that we need to find a continuous function $f\colon [0,1]\to \Bbb{R}^2\setminus\Bbb{E}$, such that $f(0) = a$ and $f(1) = b$. You can compose this function as follows:</p>
<p>First, move along a straight, vertical line segment from $(x_1,y_1)$ to $(x_1,y_2)$. This line segment will be contained in your space, because $x_1$ is irrational.</p>
<p>Next, move along the straight, horizontal line segment from $(x_1,y_2)$ to $(x_2,y_2)$. This line segment is also contained in your space, because $y_2$ is irrational. The function $f$ is obtained by concatenating these two line segments. This is the idea, but you should define $f$ formally.</p>
<p>You can also use this idea for the other cases of which coordinates are irrational.</p>
|
827,154 | <p>I need help with the definition of "within 1":</p>
<ul>
<li><p>If $x = 8$ and $y = 7$, then $x$ is "within 1" of $y$. </p></li>
<li><p>If $x = 8$ and $y = 9$, then $x$ is "within 1" of $y$.</p></li>
<li><p>If $x = 8$ and $y = 8$, is $x$ still "within 1" of $y$?</p></li>
</ul>
<p>It's my understanding that this would still be true, but I'm being asked for something to back up my assumption, so I guess I'm looking for a second opinion.</p>
| Community | -1 | <p>In the more general case, I would say that "$x$ is within (a distance) $d$ of $y$" means that
$$|x - y| \le d.$$
(Depending on the context, I would imagine the inequality could be strict.)</p>
|
2,028,703 | <p>I have this example for a simple <a href="https://en.wikipedia.org/wiki/Binary_symmetric_channel" rel="nofollow noreferrer">binary symmetric channel</a> (BSC), bounding the mutual information of $X$ and $Y$ as</p>
<p>\begin{align*}
I(X;Y) &= H(Y) - H(Y|X)\\
&= H(Y) - \sum p(x) H(Y \mid X = x) \\
&= H(Y) - \sum p(x) H(p) \\
&= H(Y) - H(p) \\
&\leq 1 - H(p)
\end{align*}</p>
<p>However, as the title states, I don't really understand why I can write</p>
<p>\begin{align*}
\sum p(x) H(Y \mid X = x) = \sum p(x) H(p)
\end{align*}</p>
<p>I know that</p>
<p>\begin{align*}
\mathbb{P}[Y = 0 \mid X = 0 ] &= 1 - p \\
\mathbb{P}[Y = 1 \mid X = 0 ] &= p \\
\mathbb{P}[Y = 1 \mid X = 1 ] &= 1 - p \\
\mathbb{P}[Y = 0 \mid X = 1 ] &= p
\end{align*}</p>
<p>but let's assume I set $p = \frac{1}{3}$, would that mean that I have</p>
<p>\begin{align*}
I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}) \approx 0.4716 \text{ bit}
\end{align*}</p>
<p>I ask because if this is the case, why is it not</p>
<p>\begin{align*}
I(X;Y) \leq 1- H(1-p) = 1- H(\frac{2}{3}) \approx 0.61 \text{ bit}
\end{align*}</p>
<p>instead? </p>
<p>Or, and this would make the most sense to me, it's actually $p = (p_{error}, 1-p_{error})= (\frac{1}{3}, \frac{2}{3})$ and thus we have</p>
<p>\begin{align*}
I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}, \frac{2}{3}) \approx 0.0817 \text{ bit}
\end{align*}</p>
| R.G. | 78,396 | <p>The reason for the validity of the equation</p>
<p>\begin{equation}
\sum p(x) H(Y \mid X = x) = \sum p(x) H(p)
\end{equation}</p>
<p>can perhaps be better seen if we denote the right-hand side by</p>
<p>\begin{equation}
\sum p(x) H_b(p)
\end{equation}</p>
<p>where $H_b(\cdot)$ is the binary entropy function (<a href="https://en.wikipedia.org/wiki/Binary_entropy_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Binary_entropy_function</a>). To see this, note that the defining property of the BSC is precisely that <em>independent</em> of what the source symbol X is, an error, that is a bit-flip, occurs with a fixed probability $p$. In other words:</p>
<p>\begin{equation}
\forall x: H(Y \mid X = x) = H(Err \mid X = x) = H_b(p)
\end{equation}</p>
<p>where the first equality is due to the fact that for a binary input the entropy of the "error" is equal to the entropy of $Y$ and the second equality follows from the paragraph above.</p>
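The answer's point can be made concrete: since $H_b(p) = H_b(1-p)$, the worry about $p$ versus $1-p$ vanishes, and the question's last interpretation $H(\frac13, \frac23)$ is the right one. A Python sketch:

```python
import math

def H_b(p: float) -> float:
    # Binary entropy in bits: -p log2 p - (1 - p) log2 (1 - p).
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(abs(H_b(1 / 3) - H_b(2 / 3)) < 1e-12)  # True: H_b is symmetric
print(round(1 - H_b(1 / 3), 4))              # 0.0817 bit, the bound in question
```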
|
1,760,242 | <p>Can anybody tell me where I can find some REAL problems (i.e. from real life) that can be solved using a 3x3 system of linear equations? Or can anybody give me an example? A solution could be a circuit in electrical engineering, but this is not very interesting, and it doesn't seem very real.
Thank you!</p>
| GEdgar | 442 | <p>Your computer animation programs do 3D geometric transformations before they can draw their pictures. These involve $3 \times 3$ matrices.</p>
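To make the answer concrete, here is a minimal Python sketch of one such $3 \times 3$ transformation (the example rotation is my choice, not from the answer):

```python
import math

def rotate_z(theta: float):
    # 3x3 matrix rotating about the z-axis by theta radians.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    # Matrix-vector product for 3x3 matrices.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A 90 degree rotation sends the point (1, 0, 0) to (0, 1, 0).
p = apply(rotate_z(math.pi / 2), [1.0, 0.0, 0.0])
print([round(c, 10) for c in p])  # [0.0, 1.0, 0.0]
```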
|
134,574 | <p>$a^{p-1} \equiv 1 \pmod p$</p>
<p>Why do Carmichael numbers prevent Fermat's Little Theorem from being a guaranteed test of primality? Fermat' Little theorem works for any $a$ such that $1≤a\lt p$, where $p$ is a prime number. Carmichael numbers only work for $a$'s coprime to $N$ (where $N$ is the modulus). Doesn't this mean that for some non-coprime $a$ the Carmichael number will fail the test? Therefore if every $a$ is tested, a Carmichael number wouldn't pass.</p>
| joriki | 6,622 | <p>In case you looked at the <a href="http://en.wikipedia.org/wiki/Carmichael_number">Wikipedia article</a> on Carmichael numbers, your question may have resulted from the sentence "Since Carmichael numbers exist, [the Fermat] primality test cannot be relied upon to prove the primality of a number". This is a bad formulation, since the Fermat primality test isn't meant to be used as proof of the primality of a number, but as a probabilistic test that is very likely to prove the compositeness of any composite number. It's that latter use that Carmichael numbers interfere with. As the article on the <a href="http://en.wikipedia.org/wiki/Fermat_primality_test">Fermat primality test</a> shows, for numbers $n$ other than primes and Carmichael numbers, at least half of all numbers coprime to $n$ are Fermat witnesses, i.e. let the test prove compositeness. Thus the Fermat primality test serves its function well for non-Carmichael numbers, whereas for Carmichael numbers with relatively high prime factors, such as $8911=7\cdot19\cdot67$, the probability of proving compositeness with a randomly chosen number $\lt n$ is significantly reduced (roughly from $1$ in $2$ down to, in this case, $1$ in $5$ per test).</p>
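The behaviour described can be observed directly; a Python sketch for $n = 8911$, checking that exactly the bases coprime to $n$ pass the Fermat test, and measuring the witness fraction:

```python
from math import gcd

n = 8911  # = 7 * 19 * 67, a Carmichael number

passes = sum(pow(a, n - 1, n) == 1 for a in range(1, n))
units = sum(gcd(a, n) == 1 for a in range(1, n))
print(passes == units)             # True: exactly the coprime bases pass
print((n - 1 - passes) / (n - 1))  # 0.2: only about 1 base in 5 is a witness
```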
|
1,878,734 | <p>Is it true that if an isomorphism $f$ maps a cyclic group $G$ to group $H$ that $H$ must also be cyclic? It seems intuitive but until I can actually prove it I'm always a bit dubious to believe it. </p>
| Justin Benfield | 297,916 | <p>To give a direct proof: Let <span class="math-container">$y\in H$</span>, and <span class="math-container">$\phi:G\rightarrow H$</span> an isomorphism of the groups <span class="math-container">$G$</span> and <span class="math-container">$H$</span>. Consider <span class="math-container">$\phi^{-1}(y)\in G$</span>. Since <span class="math-container">$G$</span> is cyclic, we can express <span class="math-container">$\phi^{-1}(y)$</span> in terms of a generator for <span class="math-container">$G$</span> (<span class="math-container">$G$</span> can be generated by a single non-identity element because it is cyclic; why?). Hence we have a generator <span class="math-container">$g$</span> for the group <span class="math-container">$G$</span> (that is, <span class="math-container">$\langle g\rangle=G$</span>), and it follows that <span class="math-container">$\phi^{-1}(y)=g^k$</span> for some integer exponent <span class="math-container">$k$</span>. Now we claim that <span class="math-container">$\phi (g)$</span> must be a generator for <span class="math-container">$H$</span>, for every element of <span class="math-container">$G$</span> is of the form <span class="math-container">$g^k$</span> for some integer <span class="math-container">$k$</span>. Moreover, because <span class="math-container">$\phi$</span> is a bijection, it follows <span class="math-container">$y=\phi(g^k)$</span>. Now because <span class="math-container">$\phi$</span> is an isomorphism, we have that <span class="math-container">$\phi (g^k)=[\phi (g)]^k$</span>, and thus we have expressed <span class="math-container">$y$</span> in terms of a single generator and since <span class="math-container">$y$</span> was arbitrary, it follows that <span class="math-container">$H$</span> is generated by a single element, and hence must be cyclic.</p>
|
1,346,286 | <p>Why is $\int_{0}^{\pi}{1\over 1-\sin x}dx=2\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$, or to be accurate: why is $\int_{\pi\over 2}^{\pi}{1\over 1-\sin x}dx=\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$? </p>
<p>At the very best, I know that the area $\sin x$ covers between $0$ to $\pi\over 2$ has the same magnitude between $\pi\over 2$ to $\pi$ but I fail to see how it formally leads to the identity aforementioned. Can you help me with understanding this?</p>
| Brian M. Scott | 12,042 | <p>Make the substitution $u=\pi-x$; then</p>
<p>$$\int_{\pi/2}^\pi\frac{dx}{1-\sin x}=\int_{\pi/2}^0\frac{-du}{1-\sin u}=\int_0^{\pi/2}\frac{du}{1-\sin u}\;.$$</p>
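Since $\sin(\pi - u) = \sin u$, the substitution reflects the integrand about $x = \pi/2$, so it can be checked numerically on any pair of mirrored subintervals; the full integrals themselves diverge at $\pi/2$, so a Python sketch (integrator and intervals are my choices) compares $[0, \pi/4]$ with $[3\pi/4, \pi]$:

```python
import math

def simpson(g, a, b, n=2000):
    # Composite Simpson's rule with an even number n of subintervals.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

g = lambda x: 1 / (1 - math.sin(x))
left = simpson(g, 0.0, math.pi / 4)
right = simpson(g, 3 * math.pi / 4, math.pi)
print(abs(left - right) < 1e-9)  # True: u = pi - x maps one interval onto the other
```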
|
1,346,286 | <p>Why is $\int_{0}^{\pi}{1\over 1-\sin x}dx=2\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$, or to be accurate: why is $\int_{\pi\over 2}^{\pi}{1\over 1-\sin x}dx=\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$? </p>
<p>At the very best, I know that the area $\sin x$ covers between $0$ to $\pi\over 2$ has the same magnitude between $\pi\over 2$ to $\pi$ but I fail to see how it formally leads to the identity aforementioned. Can you help me with understanding this?</p>
| John Hunsberger | 240,638 | <p>It appears by symmetry: over $x = 0 \ldots \pi$, $\sin(x)$ describes an arc enclosing an area twice that of the half arc described over $x = 0 \ldots \pi/2$. So the second integral covers half the region of the first, and multiplying it by two makes the two areas equal.</p>
|
2,793,983 | <p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p>
<p>Is this the quickest way? </p>
<p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p>
<p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$,
$\mathbb{Z}_+ = \{1, 2, \dotsc \}$.</p>
| Evpok | 15,102 | <p>A common convention in French is</p>
<p>$$
x∈⟦1, 50⟧
$$</p>
<p>and I am genuinely surprised to learn that it might not be common elsewhere ! In any case, $\{1, …, 50\}$ or maybe $\{1, 2, …, 50\}$ should be universal and more readable for most people.</p>
<p>For your other question, still from the French perspective,
$$
\mathbb{N} = \{0, 1, …\}\\
\mathbb{N^*} = \{1, 2, …\}\\
$$
though the second one is sometimes frowned upon due to it being an abuse of the $A^*$ notation (where $A$ is a ring) that leads to confusion for the $\mathbb{Z}^*=\{-1, 1\}$ case.</p>
<p>I have never seen $\mathbb{Z}^+$ used, but if I had, I would probably have assumed $\mathbb{Z}^+=\mathbb{N}$, following $\mathbb{R}^+=\{x∈\mathbb{R}|x⩾0\}$.</p>
|
2,793,983 | <p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p>
<p>Is this the quickest way? </p>
<p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p>
<p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$,
$\mathbb{Z}_+ = \{1, 2, \dotsc \}$.</p>
| Hammerite | 23,931 | <p>One possibility is $\{i\}_{i = 1}^{50}$, by analogy with $\sum_{i = 1}^{50}(\cdots)$ and other similar notation.</p>
|
2,793,983 | <p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p>
<p>Is this the quickest way? </p>
<p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p>
<p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$,
$\mathbb{Z}_+ = \{1, 2, \dotsc \}$.</p>
| Newton fan 01 | 560,959 | <p>Another fancy way of writing the set is this one: </p>
<p><a href="https://i.stack.imgur.com/5Ny73.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Ny73.png" alt="enter image description here"></a></p>
<p>I got this idea while reading Hammerite's answer, though the formulas are different (or at least I hope so). I have not encountered his notation before, so if it turns out to be equivalent, please tell me and I will delete my answer. The notation I posted, however, is the standard one for a union of sets.</p>
|
1,627,619 | <p>Could anyone please check my solution to the following problem?</p>
<blockquote>
<p><strong>Problem:</strong> Let $f(x, y) = (x^2 + y^2)e^{-(x^2 + y^2)}$. Find global extrema of $f$ on $M = {\mathbf R}^2$.</p>
</blockquote>
<p><strong>Proposed solution:</strong> Taking partial derivatives of $f$, we conclude that critical points are $[0,0]$ and points of the unit circle $C = \big\{[x,y] \in {\mathbf R}^2:\ x^2 + y^2 = 1\big\}$.</p>
<p>We can reason immediately that the global minimum is attained at $[0,0]$ as the function is nonnegative. We observe that the value of $f$ on $C$ is $e^{-1}$.</p>
<p>To prove that $f$ attains its global maximum on $C$, we let $r := x^2 + y^2$. We observe that any two points $[x_1,y_1]$ and $[x_2,y_2]$ with $r_1 = r_2$ give the same value of $f$ (i.e. the function is constant on circles centered at the origin). Now let $r \to \infty$. Then
$re^{-r} \to 0$. From the definition of limit, it follows that for any $\varepsilon > 0$, we find $\delta > 0$ such that</p>
<p>$$\forall r \in P(\infty, \delta) = ({1 \over \delta}, \infty): re^{-r} < \varepsilon.$$</p>
<p>Let $\varepsilon = (2e)^{-1}$. Then there's $\delta$ from the definition above and we know that for $r \in ({1 \over \delta}, \infty)$, the value of f is less than the value of f on $C$. Restricting ourselves to the compact set</p>
<p>$$C' = \big\{[x, y] \in {\mathbf R}^2:\ x^2 + y^2 \le {1 \over \delta}\big\},$$</p>
<p>we can now argue that $f$ on $C'$ is indeed maximized on $C$, as it is a continuous function on a compact set, it's value around the boundary is at most $(2e)^{-1}$ and all critical points have been considered.</p>
<p>Therefore, the maximum value of $f$ is attained on $C$ with respect to $M$ as well.</p>
| Tryss | 216,059 | <p>Or you could just remark that the function is radial with value $f(x) = |x|^2 e^{-|x|^2}$. So if it has a maximum/minimum at $x_0$, it's attained on the whole circle of radius $|x_0|$.</p>
<p>So it suffices to study the function $h(t) = t^2 e^{-t^2}$.</p>
<p>Here you can just differentiate: $h'(t) = (2t - 2t^3) e^{-t^2}$, which vanishes for $t=0$ and $t=\pm 1$, and it's easy to check that $t = \pm 1$ gives the maximum.</p>
<p>Hence $f$ is maximal on all $x$ such that $|x| = 1$.</p>
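<p>(Editorial aside, not part of the original answer: the one-variable reduction is easy to sanity-check numerically in plain Python.)</p>

```python
import math

# h(t) = t^2 * exp(-t^2): the radial profile of f from the answer above.
def h(t):
    return t * t * math.exp(-t * t)

# Scan a grid of radii in [0, 5] and locate the maximiser numerically.
ts = [i / 10000 for i in range(0, 50001)]
t_best = max(ts, key=h)

print(t_best, h(t_best))  # maximiser near t = 1, maximum value near 1/e
```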
|
457,557 | <p>Use a triple integral to find the volume of the solid: The solid enclosed by the cylinder $$x^2+y^2=9$$ and the planes $$y+z=5$$ and $$z=1$$<br>
This is how I started solving the problem, but the way I was solving it led me to 0, which is incorrect. $$\int_{-3}^3\int_{-\sqrt{9-y^2}}^{\sqrt{9-y^2}}\int_{1}^{5-y}dzdxdy=\int_{-3}^3\int_{-\sqrt{9-y^2}}^{\sqrt{9-y^2}}\left(4-y\right)dxdy=\int_{-3}^3\left[4x-xy\right]_{-\sqrt{9-y^2}}^\sqrt{9-y^2}dy= {8\int_{-3}^3{\sqrt{9-y^2}}dy}-2\int_{-3}^3y{\sqrt{9-y^2}}dy$$<br>
If this is wrong, then that would explain why I'm stuck. If this is correct so far, that's good news, but the bad news is that I'm still stuck. If someone could help me out, that would be wonderful, thanks!</p>
| Mikasa | 8,581 | <p>You can also use cylindrical coordinates to find the volume. Take a look at the region onto which the solid projects on the $z=0$ plane: it is a circle of radius $3$.</p>
<p><img src="https://i.stack.imgur.com/aoosR.png" alt="enter image description here"></p>
<p>So we have the following triple integrals as well:</p>
<p>$$\int_{\theta=0}^{2\pi}\int_{r=0}^3\int_{z=1}^{5-r\sin\theta}~rdz~dr~d\theta=36\pi$$</p>
<p><img src="https://i.stack.imgur.com/riLxr.png" alt="enter image description here"></p>
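<p>(Editorial addition, not part of the original answer: the value $36\pi$ is easy to confirm with a brute-force midpoint Riemann sum over the disk, here in plain Python.)</p>

```python
import math

# Volume = ∫_0^{2π} ∫_0^3 (5 - r sinθ - 1) r dr dθ, approximated with a
# midpoint rule on an nr × nt polar grid.
nr, nt = 200, 200
dr, dt = 3.0 / nr, 2 * math.pi / nt

vol = 0.0
for i in range(nr):
    r = (i + 0.5) * dr                # midpoint radius
    for j in range(nt):
        th = (j + 0.5) * dt           # midpoint angle
        vol += (4.0 - r * math.sin(th)) * r * dr * dt

print(vol, 36 * math.pi)  # the two values agree closely
```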
|
3,047,686 | <p>Prove that: <span class="math-container">$$(p - 1)! \equiv p - 1 \pmod{p(p - 1)}$$</span></p>
<p>In the text it's not mentioned that <span class="math-container">$p$</span> is prime, but I checked and the statement fails for non-primes, so I assume <span class="math-container">$p$</span> is prime.
I know that <span class="math-container">$(p - 1)! \equiv -1 \equiv p - 1 \pmod p$</span>
and <span class="math-container">$(p - 1)! \equiv 0 \equiv (p - 1) \pmod{p - 1}$</span>;
the problem is that I don't know how to combine these.
Is it true that we then have <span class="math-container">$(p - 1)! \equiv (p - 1)^2 = p^2 - p - p + 1 \equiv - p + 1 = - (p - 1) \pmod{p(p - 1)}$</span>? That is not the desired result.
Or do I need to multiply both sides of the congruences?
What are the rules for congruences that allow me to do this? (I hope you get the idea of what I'm trying to ask.)</p>
| Julio Trujillo Gonzalez | 272,343 | <p>1) Theorem. If <span class="math-container">$a|c$</span>, <span class="math-container">$b|c$</span> and <span class="math-container">$(a,b)=1$</span> then <span class="math-container">$ab|c$</span></p>
<p>We know that <span class="math-container">$ (p,p-1)=1$</span> and also <span class="math-container">$p-1|(p-1)!-(p-1)$</span></p>
<p>By Wilson's theorem <span class="math-container">$p|(p-1)!+1$</span>. Then <span class="math-container">$p|[(p-1)!+1]-p$</span>, i.e. <span class="math-container">$p|(p-1)!-(p-1)$</span></p>
<p>Therefore <span class="math-container">$p(p-1)| (p-1)!-(p-1)$</span></p>
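<p>(Editorial addition, not part of the proof: a quick empirical check of the congruence in plain Python.)</p>

```python
from math import factorial

# Verify (p-1)! ≡ p-1 (mod p(p-1)) for the primes below 50.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
results = [factorial(p - 1) % (p * (p - 1)) == p - 1 for p in primes]

print(all(results))  # True
```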
|
2,019,070 | <p>I got this equality $(A \times B) = (A\cup B) \times(A\cup B)$. I've already shown the left-to-right inclusion, and I want to refute the other way by giving a counterexample. I want to show that given an ordered pair $(x,y) \in (A\cup B) \times(A\cup B)$, $(x,y)$ does not always belong to $(A \times B)$, thus giving the needed counterexample.</p>
<p>Let $A$ and $B$ be nonempty sets, and let $B$ be a proper subset of $A$. Then there must be at least one element $a \in A$ that is not an element of $B$, i.e. $a \notin B$. So, for every ordered pair $(x,y) \in (A\cup B) \times (A\cup B)$, we have that $x \in (A\cup B)$ and also $y\in (A \cup B)$.</p>
<p>If the equality was right, for all $x \in (A\cup B)$, $x \in A$. And for all $y \in (A\cup B)$, $y \in B$. We got that $(A \cup B) = A$, therefore, the following statements are equivalent:</p>
<p>$(1) \quad x \in A$</p>
<p>$(2) \quad x \in (A \cup B)$</p>
<p>But, by definition of $A$ and $B$, and knowing that $(A \cup B) = A$, we come to $B \subset A$, hence, the following statement is NOT TRUE:</p>
<p>$(1)\quad y \in (A \cup B) \rightarrow y \in B$,</p>
<p>because we can find at least one element that belongs to $(A \cup B)$ but not to $B$. Q.E.D.</p>
<p>I want to know if this proof is formally correct, and if there is any mistake.</p>
| Simply Beautiful Art | 272,831 | <p>Notice that</p>
<p>$$\frac1{n(n+1)}=\frac1n-\frac1{n+1}$$</p>
<p>This makes this a telescoping sum:</p>
<p>$$\begin{align}S&=\quad\frac1{1\times2}\ \ \ \quad+\frac1{2\times3}\ \ \ \ \ \ \ \ +\frac1{3\times4}\ \ \ +\dots+\quad\ \frac1{n(n+1)}\\&=\left(\frac11-\color{#ee8844}{\frac12}\right)+\left(\color{#ee8844}{\frac12}-\color{#559999}{\frac13}\right)+\left(\color{#559999}{\frac13}-\color{#034da3}{\frac14}\right)+\dots+\left(\color{#034da3}{\frac1n}-\frac1{n+1}\right)\\&=1-\frac1{n+1}\end{align}$$</p>
<p>Since each colored term cancels with the next.</p>
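<p>(Editor's addition: the telescoping identity $\sum_{k=1}^n \frac1{k(k+1)} = 1-\frac1{n+1}$ can be checked exactly with rational arithmetic.)</p>

```python
from fractions import Fraction

# Compare the partial sum of 1/(k(k+1)) with the closed form 1 - 1/(n+1),
# using exact rationals so there is no rounding error.
def partial_sum(n):
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

checks = [partial_sum(n) == 1 - Fraction(1, n + 1) for n in range(1, 50)]
print(all(checks))  # True
```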
|
2,019,070 | <p>I got this equality $(A \times B) = (A\cup B) \times(A\cup B)$. I've already shown the left-to-right inclusion, and I want to refute the other way by giving a counterexample. I want to show that given an ordered pair $(x,y) \in (A\cup B) \times(A\cup B)$, $(x,y)$ does not always belong to $(A \times B)$, thus giving the needed counterexample.</p>
<p>Let $A$ and $B$ be nonempty sets, and let $B$ be a proper subset of $A$. Then there must be at least one element $a \in A$ that is not an element of $B$, i.e. $a \notin B$. So, for every ordered pair $(x,y) \in (A\cup B) \times (A\cup B)$, we have that $x \in (A\cup B)$ and also $y\in (A \cup B)$.</p>
<p>If the equality was right, for all $x \in (A\cup B)$, $x \in A$. And for all $y \in (A\cup B)$, $y \in B$. We got that $(A \cup B) = A$, therefore, the following statements are equivalent:</p>
<p>$(1) \quad x \in A$</p>
<p>$(2) \quad x \in (A \cup B)$</p>
<p>But, by definition of $A$ and $B$, and knowing that $(A \cup B) = A$, we come to $B \subset A$, hence, the following statement is NOT TRUE:</p>
<p>$(1)\quad y \in (A \cup B) \rightarrow y \in B$,</p>
<p>because we can find at least one element that belongs to $(A \cup B)$ but not to $B$. Q.E.D.</p>
<p>I want to know if this proof is formally correct, and if there is any mistake.</p>
| Community | -1 | <p>By induction,</p>
<p>If $$S_n=\frac n{n+1}$$</p>
<p>then</p>
<p>$$S_{n+1}=S_n+\frac1{(n+1)(n+2)}=\frac n{n+1}+\frac1{(n+1)(n+2)}=\frac{n+1}{n+2}.$$</p>
|
1,917,790 | <p>Can anyone help me to solve this? </p>
<blockquote>
<p>Determine the value or values of $k$ such that $x + y + k = 0$ is tangent to the circle $x^2+y^2+6x+2y+6=0$.</p>
</blockquote>
<p>I don't know how to calculate the tangent.</p>
| DonAntonio | 31,254 | <p>Developed hint:</p>
<p>If the line is tangent to the circle then the line's distance to the circle's center equals the circle's radius. The circle's equation is</p>
<p>$$x^2+y^2+6x+2y+6=(x+3)^2-9+(y+1)^2-1+6\implies$$</p>
<p>$$(x+3)^2+(y+1)^2=4$$</p>
<p>Well, now use the formula for the distance of the point $\;(a,b)\;$ to the line $\;Ax+By+C=0\;$ , which is</p>
<p>$$\frac{|Aa+Bb+C|}{\sqrt{A^2+B^2}}$$</p>
<p>and this distance has to be equal to $\;2\;$ ...</p>
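<p>(Editorial sketch closing the loop, not part of the hint: setting that distance equal to $2$ gives $\frac{|-3-1+k|}{\sqrt 2}=2$, i.e. $k = 4 \pm 2\sqrt 2$, which we can double-check by substituting $y=-x-k$ into the circle and confirming the discriminant vanishes.)</p>

```python
import math

# Substituting y = -x - k into x^2 + y^2 + 6x + 2y + 6 = 0 yields
# 2x^2 + (2k + 4)x + (k^2 - 2k + 6) = 0; tangency means zero discriminant.
def discriminant(k):
    return (2 * k + 4) ** 2 - 4 * 2 * (k * k - 2 * k + 6)

for k in (4 + 2 * math.sqrt(2), 4 - 2 * math.sqrt(2)):
    print(k, discriminant(k))  # discriminant ≈ 0 for both tangent lines
```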
|
1,917,790 | <p>Can anyone help me to solve this? </p>
<blockquote>
<p>Determine the value or values of $k$ such that $x + y + k = 0$ is tangent to the circle $x^2+y^2+6x+2y+6=0$.</p>
</blockquote>
<p>I don't know how to calculate the tangent.</p>
| 5xum | 112,884 | <p><strong>Hints</strong>:</p>
<ul>
<li>A line is <em>tangent</em> to a circle if there exist <em>precisely</em> one point that is both on the straight line and on the circle</li>
<li>A point $(x,y)$ is both on the line and the circle if it satisfies <em>both</em> equations.</li>
<li>A quadratic equation has <em>exactly</em> one solution if its discriminant is $0$.</li>
</ul>
|
2,349,982 | <p>According to the <a href="http://www.fftw.org/fftw3_doc/1d-Real_002dodd-DFTs-_0028DSTs_0029.html#g_t1d-Real_002dodd-DFTs-_0028DSTs_0029" rel="nofollow noreferrer">FFTW Website</a>, the Fourier Sine Transform (FST) returns:</p>
<p>$$Y_k = 2 \sum_{j=0}^{N-1} X_j \sin [\pi (j+1)(k+1)/(N+1)]$$</p>
<p><a href="http://reference.wolfram.com/language/ref/FourierSinTransform.html" rel="nofollow noreferrer">WolframAlpha</a> defines the Fourier Sine Transform as follows:
$2\sqrt\frac{\lvert b \rvert}{(2\pi)^{1-a}} \int_0^\infty f(t)\sin(b\omega t)\mathrm{d}t$</p>
<p>Taking $a=1$ and $b=\pi$ this becomes:
$F^{W}_{sin} = 2\sqrt \pi \int_0^\infty f(t)\sin(b\omega t)\mathrm{d}t$. </p>
<p>Comparing the two definitions one can write:</p>
<p>$$Y_k = 2 \sum_{j=0}^{N-1} X_j \sin [\pi (j+1)(k+1)/(N+1)] \approx \frac{1}{\sqrt\pi} F^{W}_{s}$$</p>
<p>Setting $f(t) = t\mathrm{e}^{-t^2}$ and performing FourierSinTransform, Wolframalpha returns:</p>
<p>$$FST\{f(t)\}= \frac{1}{2}\pi^2\omega\mathrm{e}^{-(1/4)\pi^2\omega^2}$$</p>
<p>I implemented this in my code and I was puzzled by the results: the analytical and numerical solutions look similar, but I would have expected higher precision. What is the reason for this? Am I making an error in my reasoning?</p>
<p>Any help is appreciated.</p>
<pre><code>// RosenbluthFourier.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <cmath>
#include <fftw3.h>
using namespace std;
int main() {
    // Pi must be a double; "int const Pi" silently truncates it to 3.
    double const Pi = 3.14159265359;
    int N;
    double sup;
    cout << "enter N of points "; cin >> N;
    cout << "enter sup "; cin >> sup;
    cout << "Interval runs from 0 to " << sup << " and will be divided into " << N
         << " intervals." << endl;
    double T, Df;
    T = sup / (N - 1);
    Df = 1 / sup;
    cout << "Sampling interval T = " << T << endl;
    cout << "Frequency spacing df = " << Df << endl << endl;
    double *X = new double[N];
    double *Y = new double[N];
    // Loop bound must be i < N: the arrays have N elements, so the original
    // "i <= N" wrote one element past the end (undefined behavior).
    for (int i = 0; i < N; i++) {
        X[i] = T * i;
        Y[i] = X[i] * exp(-pow(X[i], 2));
        cout << "X[" << i << "] = " << X[i] << "  Y[" << i << "] = " << Y[i] << endl;
    }
    cout << endl << "Analytically transformed function" << endl << endl;
    double *f = new double[N];
    double *Yt = new double[N];
    for (int k = 0; k < N; k++) {
        // calc pi*w
        f[k] = Pi * k * Df;
        Yt[k] = (1. / 2.) * Pi * f[k] * exp(-pow(f[k] / 2., 2));
        cout << "f[" << k << "] = " << f[k] << "  Yt[" << k << "] = " << Yt[k] << endl;
    }
    cout << endl << "FFTW-transformed function" << endl << endl;
    fftw_plan p;
    p = fftw_plan_r2r_1d(N, Y, Yt, FFTW_RODFT00, FFTW_ESTIMATE);
    fftw_execute(p);
    for (int k = 0; k < N; k++) {
        Yt[k] = Yt[k] * T * sqrt(Pi);
        cout << "f[" << k << "] = " << f[k] << "  Yt[" << k << "] = " << Yt[k] << endl;
    }
    fftw_destroy_plan(p);
    delete[] X; delete[] Y; delete[] f; delete[] Yt;
    return 0;
}
</code></pre>
| spaceisdarkgreen | 397,125 | <p>You need to take the expectation value of the expression and compare to $\sigma^2$. We have $E(X_i^2) = \sigma^2+\mu^2$ and $E(X_i X_j) = \sigma^2\rho + \mu^2$ for $i\ne j.$ And then we also have $$ \left(\sum_i X_i\right)^2 = \sum_i X_i^2 + 2\sum_{i<j} X_i X_j$$ </p>
<p>Now it should be relatively straightforward to calculate $E(T)$ by linearity. (Note that the expectation values of the individual terms don't depend on the index so you can pull them out of the sums and then the sum just amounts to a factor that is the number of terms in the sum.) So do that and set $$E(T) = \sigma^2$$ and see if you can find $h$ and $k$ so that the equality holds.</p>
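<p>(Editorial addition: spelling out that linearity computation, using only the expectations stated above,</p>

```latex
\begin{align*}
E\!\left[\Big(\sum_i X_i\Big)^{2}\right]
  &= \sum_i E(X_i^2) + 2\sum_{i<j} E(X_i X_j) \\
  &= n(\sigma^2+\mu^2) + n(n-1)\,(\sigma^2\rho + \mu^2),
\end{align*}
```

<p>since there are $n$ squared terms and $\binom{n}{2}$ cross terms.)</p>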
<p>Unfortunately this problem seems to leave something ambiguous... You have one equation to solve and two degrees of freedom with which to solve it. So there are many values of $h$ and $k$ that agree. What they probably intend you to do is to solve it so that $h$ and $k$ have no explicit dependence on $\mu,$ so it ends up becoming the traditional unbiased estimator for $\sigma^2$ when you plug in $\rho=0$. </p>
<p>This leaves $\rho$ as the only "nuisance parameter" that you either have to know or estimate in order to calculate the estimator for $\sigma^2.$ And if you don't know it, estimating and plugging into the formula you derived adds additional noise, and can even potentially make the estimator biased after all.</p>
<p>Or perhaps the question included something in the fine print about $\rho$ being known and $\mu$ being unknown.</p>
|
4,348,969 | <p>Let <span class="math-container">$a<b\in\mathbb{R}$</span>. A sequence <span class="math-container">$P:=(p_0,\ldots,p_n)$</span> is a called a partition of <span class="math-container">$[a,b]$</span> if
<span class="math-container">$$a=p_0<\ldots<p_n=b.$$</span>
The size of <span class="math-container">$P$</span> is taken to be <span class="math-container">$\max_i(p_{i+1}-p_i)$</span>.</p>
<p>Now, suppose we are given <span class="math-container">$\delta>0$</span>. There exists <span class="math-container">$n\in\mathbb{N}_{\geq 1}$</span> such that <span class="math-container">$(b-a)/n<\delta$</span>. I can divide <span class="math-container">$[a,b]$</span> into <span class="math-container">$n$</span> equal sub-intervals, by writing
<span class="math-container">$$I_k:=\left[a+\frac{k}{n}(b-a),a+\frac{k+1}{n}(b-a)\right]$$</span>
for all <span class="math-container">$0\leq k\leq n-1$</span>. Clearly the length of each <span class="math-container">$I_k$</span> is <span class="math-container">$<\delta$</span>. We take <span class="math-container">$p_k=a+\frac{k}{n}(b-a)$</span> for <span class="math-container">$0\leq k\leq n$</span>.</p>
<p>The above gives us a partition. Are there other ways of constructing partitions of <span class="math-container">$[a,b]$</span> whose size is <span class="math-container">$<\delta$</span>?</p>
| Sammy Black | 6,509 | <p><strong>Geometric partition</strong> for integrating <span class="math-container">$f(x) = \frac{1}{x}$</span> on interval <span class="math-container">$[1, b]$</span> for defining the <strong>natural logarithm</strong> <span class="math-container">$\,\ln b$</span>:
<span class="math-container">$$
F(b) = \int_1^b \frac{1}{x} \, dx.
$$</span></p>
<p>Fix <span class="math-container">$r>1$</span>, and define
<span class="math-container">$$
x_k = r^k
\quad
(0 \leq k \leq n).
$$</span></p>
<p>The width of <span class="math-container">$k$</span>th subinterval is
<span class="math-container">$$
\Delta x_k := x_k - x_{k-1} = r^k - r^{k-1} = r^{k-1}(r-1),
$$</span>
which for a given <span class="math-container">$r$</span> increases as a function of <span class="math-container">$k$</span>. Thus the size of the partition is the width of the final subinterval, i.e.
<span class="math-container">$$
\Delta x_n = r^{n-1}(r-1),
$$</span>
which can be made as small as we like by shrinking the geometric ratio <span class="math-container">$r$</span> and increasing the number <span class="math-container">$n$</span>.
<a href="https://i.stack.imgur.com/dUrnZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dUrnZ.png" alt="Geometric partition Riemann sum" /></a></p>
<p>Here's the rub: <strong>the area of each rectangle in the Riemann sum is a constant!</strong> This is the calculation for the left-hand sums (sampling on each subinterval <span class="math-container">$[x_{k-1}, x_k]$</span> at the left endpoint <span class="math-container">$x_{k-1}$</span>):
<span class="math-container">$$
A_k = f(x_{k-1}) \, \Delta x_k = \frac{1}{r^{k-1}} \, r^{k-1}(r-1) = r-1
$$</span>
so the left-hand sum (an overestimate since <span class="math-container">$f$</span> is decreasing) is
<span class="math-container">$$
L_n = \sum_{k=1}^n A_k = \sum_{k=1}^n (r-1) = n(r-1).
$$</span>
But what is <span class="math-container">$n$</span>? Choose the least integer <span class="math-container">$n$</span> such that <span class="math-container">$x_n \geq b$</span> for the upper limit of the sum:
<span class="math-container">$$
r^n = b
\quad\Longrightarrow\quad
n = \biggl\lceil \frac{\ln b}{\ln r} \biggr\rceil
$$</span>
Thus the left-hand sum is
<span class="math-container">$$
\biggl\lceil \frac{\ln b}{\ln r} \biggr\rceil \, (r-1)
\approx \frac{(r-1)}{\ln r} \, \ln b
\quad\to\quad \ln b
$$</span>
as <span class="math-container">$r \to 1^+$</span> (equivalently, <span class="math-container">$n \to \infty$</span>).</p>
<p>With a little care, this can be made rigorous. A very similar calculation, using right-hand sums and choosing <span class="math-container">$n$</span> via a floor function, gives an underestimate for the definite integral. Thus, by the Squeeze Theorem, we have a direct proof that
<span class="math-container">$$
\int_1^b \frac{1}{x} \, dx = \ln b
$$</span>
that does not rely on any of the calculus of the natural exponential function. In fact, we can derive all the derivative and integral formulas for the exponential function from this!</p>
<hr />
<p><a href="https://www.desmos.com/calculator/xfvejq4c4j" rel="nofollow noreferrer">Here's an interactive graph</a> where you can drag <span class="math-container">$r \to 1^+$</span> and witness <span class="math-container">$L_n \to \ln b$</span>, visually.</p>
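<p>(Editor's addition: the convergence $L_n \to \ln b$ is also easy to see numerically in plain Python.)</p>

```python
import math

# Left-hand geometric-partition sum L_n = ceil(ln b / ln r) * (r - 1),
# which should approach ln(b) as the ratio r shrinks toward 1.
def left_sum(b, r):
    n = math.ceil(math.log(b) / math.log(r))
    return n * (r - 1)

b = 5.0
for r in (1.1, 1.01, 1.001, 1.0001):
    print(r, left_sum(b, r))  # values approach ln(5) ≈ 1.6094

err = abs(left_sum(b, 1.0001) - math.log(b))
```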
|
310,462 | <p>I am looking for an elegant proof of the fact that a countable metric space is complete iff its underlying topology is discrete.</p>
<p>It is easy to see that a discrete space is complete because its topology can be derived from the distance <span class="math-container">$d(x,y)=1$</span> iff <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are distinct, so that every Cauchy sequence must be eventually constant, and converge to this constant point that is inside the space. But the proof in the other side, that the topology underlying a countable and complete metric space must be discrete does not seem so easy. </p>
<p>By the way, I take it that:</p>
<p>(i) If the space is a singleton, <span class="math-container">$d(x,x)=0$</span> is a distance whose underlying topology is the discrete (and unique) topology on <span class="math-container">$X$</span>, so that the only (Cauchy) sequence on x is constant and convergent to x, making the space complete;</p>
<p>(ii) If the space is empty, it is a countable space whose only topology can be considered as discrete. We can also consider that the only function from the empty set into the positive reals (and similarly into the natural numbers) is the empty function, so that no Cauchy sequence exists making our empty space complete.
Gérard Lang. </p>
| Francis Adams | 6,342 | <p>Take any countable, closed subset of a Polish space and it again will Polish. There are non-discrete examples of this, like $\{\frac{1}{n}\}_{n=1}^\infty\cup\{0\}$ as a subset of $\mathbb{R}$.</p>
|
3,179,505 | <p>Help me please , I am not able to solve this problem.I have tried in many ways to figure out such as Ration test , Integral test , Comparison test , Limit Comparison Test , Root Test but i can't find the way out . This is my first question and i'm not good at English. If there is something wrong or you are not comfortable with my language usage I'm so sorry.</p>
| uniquesolution | 265,735 | <p>When considering a product of two metric spaces, say <span class="math-container">$(X_1,d_1)$</span> and <span class="math-container">$(X_2,d_2)$</span>, you want to consider metrics on the set <span class="math-container">$X_1\times X_2$</span> that are naturally related to the individual metrics <span class="math-container">$d_1,d_2$</span>. For example, you may consider
<span class="math-container">$d((x,y),(\xi,\eta)):=d_1(x,\xi)+d_2(y,\eta)$</span>. More generally, if you have a countable collection of metric spaces <span class="math-container">$(X_i,d_i)_{i=1}^{\infty}$</span> you may define a metric on the product <span class="math-container">$\prod_{i=1}^{\infty}X_i$</span> (replacing each <span class="math-container">$d_i$</span> with the equivalent bounded metric <span class="math-container">$\min(d_i,1)$</span> if necessary, so that the series converges) by
<span class="math-container">$$d((x_i),(y_i))=\sum_{i=1}^{\infty}\frac{d_i(x_i,y_i)}{2^i}$$</span>
For such metrics on the product, which use the individual metrics in some "natural" way, it is the case that <span class="math-container">$(x_{i,n})_{n=1}^{\infty}$</span> is a Cauchy sequence in the product space if and only if each sequence <span class="math-container">$x_{i,n}$</span> is a Cauchy sequence in <span class="math-container">$X_i$</span>.</p>
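<p>(Editorial note: one half of that final claim follows directly from the definition. Since each term of the series is nonnegative,</p>

```latex
\frac{d_i(x_i, y_i)}{2^i} \;\le\; d\big((x_j),(y_j)\big)
\quad\Longrightarrow\quad
d_i(x_{i,n}, x_{i,m}) \;\le\; 2^i \, d\big((x_{j,n}),(x_{j,m})\big),
```

<p>so a Cauchy sequence in the product metric is Cauchy in every coordinate.)</p>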
|
1,915,782 | <p>I'm attempting to teach myself some vector calculus before starting university next month in hope of getting my head around some of the concepts as I can foresee this being a weak topic for me.</p>
<p>I have been 'learning' from some online lecture notes related to my course. The notes talk about line integrals but as far as I understand say little on how to evaluate them and only gives one quick example in the form below that I didn't find terribly useful. As a result I'm not entirely sure how to evaluate the line integral below and so I would ask that someone answer the below question, but if possible perhaps give more detail than would usually be necessary, talking through each step with a specific emphasis on the difference between evaluating (i) and (ii), thank you.</p>
<blockquote>
<p>Evaluate explicitly the line integral $\int(y$ $dx+x$ $dy+dz)$ along
(i) the straight path from the origin to $x=y=z=1$ and (ii) the
parabolic path given parametrically by $x = t,y = t,z = t^2$ from $t=0$ to
$t=1$.</p>
</blockquote>
<p>Any help is appreciated.</p>
<p>Thank you.</p>
| gt6989b | 16,192 | <p>The trick mainly consists of parameterizing the curve $C$ by some parameter $t \in [0,1]$ and substituting $dx = x'(t)\,dt$, $dy = y'(t)\,dt$, $dz = z'(t)\,dt$, so that
$$
\int_C (P\,dx + Q\,dy + R\,dz) = \int_0^1 \big(P\,x'(t) + Q\,y'(t) + R\,z'(t)\big)\,dt,
$$
where $P,Q,R$ are evaluated at $(x(t),y(t),z(t))$. (The arc-length factor $\sqrt{|x'(t)|^2+|y'(t)|^2+|z'(t)|^2}$ appears only in integrals of the form $\int_C f\,ds$, which is not the case here.)</p>
<p>Let's do the first one together. The parameterization is obvious $x=y=z=t$ with $t \in [0,1]$, so $dx=dy=dz=dt$ and the integral becomes
$$
\int_C(ydx + xdy + dz) = \int_0^1 (tdt + tdt + dt) = \int_0^1 (2t+1)dt = \left. t^2 + t \right|_0^1 = 2.
$$
Please do the second one yourself.</p>
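<p>(Editorial addition: a numerical cross-check of part (i), using a plain midpoint quadrature rule.)</p>

```python
# Path (i): x = y = z = t, so y dx + x dy + dz becomes (t + t + 1) dt = (2t + 1) dt.
n = 100000
dt = 1.0 / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * dt          # midpoint of the i-th subinterval
    total += (2 * t + 1) * dt

print(total)  # ≈ 2, matching the value computed above
```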
|
56,847 | <p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p>
<p>I'm trying to make these:</p>
<p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p>
<p>I need to know what angles to bend the metal.</p>
| anon | 11,763 | <p>One way is to write the vertices as vectors $a,b,c,d$ with norm $\|\cdot\|=1$. Then $a+b+c+d=0$. But</p>
<p>$$ 0=\|a+b+c+d\|^2=4+2{4 \choose 2}\cos\theta,$$</p>
<p>so $\theta = \arccos(-1/3)$.</p>
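<p>(Editor's addition: for the practical bending question, here is a quick numeric confirmation using explicit tetrahedron vertices.)</p>

```python
import math

# Four vertices of a regular tetrahedron centred at the origin.
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

theta = angle_deg(verts[0], verts[1])
print(theta)  # ≈ 109.47 degrees, i.e. arccos(-1/3)
```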
|
56,847 | <p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p>
<p>I'm trying to make these:</p>
<p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p>
<p>I need to know what angles to bend the metal.</p>
| trinetr | 822,174 | <p>Consider a sphere passing through the four vertices of the regular tetrapod, with its centre at the centre of the tetrapod.
The sets of three vertices form four congruent equilateral spherical triangles on the surface of the sphere.
For a spherical triangle ABC (unlike plane triangles):</p>
<ol>
<li>The sides a,b,c are also measured as angles subtended at the centre of the sphere.</li>
<li><span class="math-container">$\pi <A+B+C<3\pi$</span></li>
</ol>
<p>Here <span class="math-container">$A=B=C=120°$</span> and <span class="math-container">$a=b=c$</span>.
The required angle between any two arms is thus the measure of any side of any of the triangles.
The cosine rule for spherical triangles,
<span class="math-container">$$\cos A= -\cos B \cos C + \sin B \sin C \cos a,$$</span>
gives
<span class="math-container">$$\cos a= -1/3$$</span>
<span class="math-container">$$a= 109.47°$$</span></p>
|
1,659,075 | <p>In linear algebra, the Rank-Nullity theorem states that given an $n$-dimensional vector space $V$ and an $n\times n$ matrix $A$,
$$\text{rank}(A) + \text{null}(A) = n$$
or that
$$\text{dim(image}(A)) + \text{dim(ker}(A)) = \text{dim}(V).$$</p>
<hr>
<p>In abstract algebra, the Orbit-Stabilizer theorem states that given a group $G$ of order $n$, and an element $x$ of the set $G$ acts on,
$$|\text{orb}(x)||\text{stab}(x)| = |G|.$$</p>
<hr>
<p>Other than the visual similarity of the expressions, is there some deeper, perhaps category-theoretic connection between these two theorems? Is there, perhaps, a functor from the category of groups $\text{Grp}$ to some category where linear transformations are morphisms? Am I even using the words functor and morphism correctly in this context?</p>
| Marc Olschok | 19,950 | <p>The orbit-stabilizer formula stems from the fact that, given an action
of $G$ on $X$, the map</p>
<p>$$ G/G_x \ni gG_x \mapsto gx \in Gx $$</p>
<p>is an isomorphism of $G$-sets between the coset space $G/G_x$ and
the orbit $Gx$ of $x \in X$. Here $G$ acts on $G/G_x$ via left
multiplication.</p>
<p>The closest analogy is the case of an $R$-module $M$ where
$R/ann(m) \cong Rm$ for any $m \in M$.</p>
|
1,659,075 | <p>In linear algebra, the Rank-Nullity theorem states that given an $n$-dimensional vector space $V$ and an $n\times n$ matrix $A$,
$$\text{rank}(A) + \text{null}(A) = n$$
or that
$$\text{dim(image}(A)) + \text{dim(ker}(A)) = \text{dim}(V).$$</p>
<hr>
<p>In abstract algebra, the Orbit-Stabilizer theorem states that given a group $G$ of order $n$, and an element $x$ of the set $G$ acts on,
$$|\text{orb}(x)||\text{stab}(x)| = |G|.$$</p>
<hr>
<p>Other than the visual similarity of the expressions, is there some deeper, perhaps category-theoretic connection between these two theorems? Is there, perhaps, a functor from the category of groups $\text{Grp}$ to some category where linear transformations are morphisms? Am I even using the words functor and morphism correctly in this context?</p>
| Simon Burton | 360,303 | <p>The intuition behind this question is spot-on. I'm going to try to fill out some of the details to make this work.</p>
<p>The first thing to note is that a linear map $A:V\to V$ also gives a genuine group action: it is the additive group of $V$ acting on the set $V$ by addition. That is, any $v\in V$ acts on $x\in V$ as $v: x \mapsto x+Av.$ </p>
<p>Now we see that given any $x$ in $V$ the stabilizer subgroup $\text{stab}(x)$ of this action is precisely the kernel of $A.$ The orbit of $x$ is $x$ plus the image of $A.$</p>
<p>If we are working with a vector space over a finite field, we can take the cardinality of these sets as in the formula $|\text{orb}(x)||\text{stab}(x)| = |G|$ and as @Ravi suggests, take the logarithm of this where the base is the size of the field and we get exactly the rank-nullity equation.</p>
<p>If we have an infinite field then this doesn't work and we need to think more along the lines of a <em>categorified</em> orbit-stabilizer theorem. In this case, for each $x\in V$ we can find a bijection:</p>
<p>$$
\text{orb}(x) \cong G / \text{stab}(x)
$$</p>
<p>and as @Nick points out, this bijection gives us the First Isomorphism Theorem:
$$
\mathrm{Im}(A) \cong V / \mathrm{Ker}(A).
$$ </p>
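<p>(Editorial sketch: the finite-field case can be checked concretely over $\mathbb{F}_2$ by brute force, confirming $|\text{orb}|\,|\text{stab}| = |G|$, i.e. rank + nullity = $n$ after taking logarithms base $2$.)</p>

```python
from itertools import product

# A linear map over F_2, given as a matrix (rows of 0/1 entries).
A = ((1, 0, 1),
     (0, 1, 1),
     (1, 1, 0))          # rank 2 over F_2: row 3 = row 1 + row 2
n = 3

def apply(A, v):
    return tuple(sum(r * x for r, x in zip(row, v)) % 2 for row in A)

vectors = list(product((0, 1), repeat=n))
image  = {apply(A, v) for v in vectors}                    # orbit of 0 under v: x ↦ x + Av
kernel = [v for v in vectors if apply(A, v) == (0,) * n]   # stabilizer of 0

# Orbit–stabilizer: |orb| * |stab| = |G| = 2^n.
print(len(image), len(kernel), len(image) * len(kernel))  # 4 2 8
```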
|
4,038,904 | <p>Your statistics teacher challenges you to write a mathematics paper, which depending on your time spent, will earn you a cash reward. You're given the paper and a candle stands on the table in front of you. The life span <span class="math-container">$X$</span>, in minutes, of the candle is a continuous random variable with a uniform distribution on <span class="math-container">$(0, 60)$</span>. You must leave after <span class="math-container">$\frac{1}{2}$</span> an hour has been spent writing the paper or as soon as the candle burns out, whichever of these happens first. The teacher gives you a cash reward <span class="math-container">$M$</span> equal to <em>half</em> the amount of time the candle was lit. Find the cumulative distribution function (cdf) of <span class="math-container">$M$</span> and determine the expected value <span class="math-container">$E[M]$</span>.</p>
<p>My attempt:</p>
<p>By inspection <span class="math-container">$M = \frac{X}{2}$</span>.</p>
<p>I think cdf of <span class="math-container">$M$</span> is of the form:</p>
<p><span class="math-container">$$F(m)= \left\{
\begin{array}{lr}
0 & \mbox{if } x < 0 \\
2x = m & \mbox{if } 0 \leq x < 60 \\
1 & \mbox{if } 60 \leq x
\end{array}
\right.$$</span></p>
<p>As for <span class="math-container">$E(M)$</span>, we have</p>
<p><span class="math-container">$$E(M) = \int_{0}^{60} m \cdot F'(m) \,dm = \int_{0}^{60} m \,dm = 1800$$</span></p>
<p>Is this correct? My work above seems lacking and I am not sure if I did it correctly. Any assistance is much appreciated.</p>
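<p>(Editorial aside: a quick Monte Carlo can stress-test the proposed answer, under the natural reading that the lit time is $\min(X, 30)$.)</p>

```python
import random

# Simulate M = min(X, 30) / 2 with X uniform on (0, 60).
random.seed(0)
N = 200_000
samples = [min(random.uniform(0, 60), 30) / 2 for _ in range(N)]
est = sum(samples) / N
print(est)  # ≈ 11.25 under this reading
```

<p>The simulation lands near $11.25$, which suggests revisiting the identification $M = X/2$ and the resulting $E(M) = 1800$ (note that $M$ can never exceed $15$ under this reading).</p>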
| Community | -1 | <p>Primes very much behave like random numbers once you erase the basic properties and you start looking at very large numbers. So your question is if probability of a large number having a property P is <span class="math-container">$\frac{1}{\ln(n)}$</span> and we erase one digit, what is the probability of a new number having the same property P.</p>
<p>The obvious question here is which digit you have chosen to delete. The probability that a number <span class="math-container">$m-k10^n$</span> has property P again directly correlates with <span class="math-container">$10^n$</span>. So you are asking: if <span class="math-container">$m$</span> is prime, what is the probability that <span class="math-container">$m-k10^n$</span> is prime?</p>
<p>Again, the average gap between primes is <span class="math-container">$\ln(n)$</span>. This means that if you delete any digit beyond the first few, that is, beyond position <span class="math-container">$k\log_{10}(\ln(n))$</span> for some constant <span class="math-container">$k$</span> that we can find heuristically, then even for a very large number the result keeps a probability of <span class="math-container">$\frac{1}{\ln(\frac{n}{10})}$</span> of being prime, regardless of which such digit you have deleted.</p>
<p>For the first few digits the situation is very complicated and very likely not to be solved any time soon regarding the precise distribution of prime gaps on such a small scale.</p>
<p>So starting from a prime number <span class="math-container">$p$</span> and deleting one of its higher-order digits does not differ much from guessing whether an arbitrary number with one digit fewer is prime. Therefore you can still expect that the new number will be prime with a probability of</p>
<p><span class="math-container">$$\frac{1}{\ln(\frac{p}{10})}$$</span></p>
<p>which is quite large.</p>
<p>Now if you want it for each digit you come up with the probability of the order of</p>
<p><span class="math-container">$$\left ( \frac{1}{\ln(\frac{p}{10})} \right)^{\log_{10}(p)-k\log_{10}(\ln(p))} $$</span></p>
<p>which is the expression of the form</p>
<p><span class="math-container">$$\frac{1}{(d-1)^d}$$</span></p>
<p>where <span class="math-container">$d$</span> is the number of digits, explaining why it will be notoriously difficult to verify the existence of the prime with such property with a very large number of digits just by listing them.</p>
<p>The option here is, then, to search for numbers in a specific form so we eliminate quite some number of possible factors when we delete a digit.</p>
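A rough empirical sketch of this heuristic (the four-digit range, the choice of deleting the leading digit, and all names below are illustrative assumptions, not part of the argument):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Among the four-digit primes, count how often deleting the leading
# digit (p -> p mod 1000) leaves a prime.
primes = [p for p in range(1000, 10000) if is_prime(p)]
hits = sum(1 for p in primes if is_prime(p % 1000))
rate = hits / len(primes)
```

The observed rate should sit noticeably above the naive density <span class="math-container">$1/\ln p$</span> of primes near <span class="math-container">$p$</span>, since the truncated number inherits the last digit of <span class="math-container">$p$</span> and so is never divisible by <span class="math-container">$2$</span> or <span class="math-container">$5$</span>.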
|
1,480,871 | <p>$1^3$ + $2^3$ + $3^3$ + ... + $n^3$ = $(1 + 2 + 3 + ... + n)^2$</p>
<p>I start with $P(1)$ and get $1 = 1$.</p>
<p>Then I do it with $P(n+1)$ and I get stuck.</p>
<p>$1^3$ + $2^3$ + $3^3$ + ... + $n^3$ + $(n+1)^3$ = $(1 + 2 + 3 + ... + n +(n+1))^2$</p>
<p>then I've tried substituting values and both ways and I cannot find anywhere to go with the problem.</p>
<p>$(1 + 2 + 3 + ... + n)^2 + (n+1)^3 = (1 + 2 + 3 + ... + n +(n+1))^2$
$OR$ </p>
<p>$1^3 + 2^3 + 3^3 + ... + n^3 + (n+1)^3 = (1^3 + 2^3 + 3^3 + ... + n^3 +(n+1))^2$</p>
<p>I know that the sum of numbers in a row is $\cfrac{n(n+1)}{2}$. I'm not sure if that's of any use. I'm pretty sure there's a mathematical proof that I can't remember that'll clean up this problem so any insight would be greatly appreciated.</p>
| marty cohen | 13,079 | <p>The product of two numbers of the form 4k+1 is also of the form 4k+1.</p>
<p>Therefore, a number of the form 4k+3 must have at least one factor of the form 4k+3.</p>
<p>Also, since the product of two numbers of the form 4k+3 is of the form 4k+1, a number of the form 4k+3 must have an odd number of factors of the form 4k+3.</p>
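Both observations are easy to confirm mechanically for small numbers; a sketch that factors each $n \equiv 3 \pmod 4$ and counts its prime factors of the form $4k+3$ with multiplicity:

```python
def prime_factors(n):
    """Prime factors of n, with multiplicity."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

# For every n ≡ 3 (mod 4) below 2000, the count of prime factors
# congruent to 3 (mod 4) should be odd (in particular, at least one).
counts = [sum(1 for p in prime_factors(n) if p % 4 == 3)
          for n in range(3, 2000, 4)]
ok = all(c >= 1 and c % 2 == 1 for c in counts)
```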
|
1,480,871 | <p>$1^3$ + $2^3$ + $3^3$ + ... + $n^3$ = $(1 + 2 + 3 + ... + n)^2$</p>
<p>I start with $P(1)$ and get $1 = 1$.</p>
<p>Then I do it with $P(n+1)$ and I get stuck.</p>
<p>$1^3$ + $2^3$ + $3^3$ + ... + $n^3$ + $(n+1)^3$ = $(1 + 2 + 3 + ... + n +(n+1))^2$</p>
<p>then I've tried substituting values and both ways and I cannot find anywhere to go with the problem.</p>
<p>$(1 + 2 + 3 + ... + n)^2 + (n+1)^3 = (1 + 2 + 3 + ... + n +(n+1))^2$
$OR$ </p>
<p>$1^3 + 2^3 + 3^3 + ... + n^3 + (n+1)^3 = (1^3 + 2^3 + 3^3 + ... + n^3 +(n+1))^2$</p>
<p>I know that the sum of numbers in a row is $\cfrac{n(n+1)}{2}$. I'm not sure if that's of any use. I'm pretty sure there's a mathematical proof that I can't remember that'll clean up this problem so any insight would be greatly appreciated.</p>
| Deepak | 151,732 | <p>Case 1: $n$ is prime, in which case, you're done.</p>
<p>Case 2: $n$ is composite. It has no even factors so all prime factors are odd.</p>
<p>An odd prime factor is either of the form: $p \equiv 1 \pmod 4$ or $p \equiv 3 \pmod 4$.</p>
<p>If <em>all</em> prime factors were of the first form, then their product would also give a residue of $1$ modulo $4$ (because $1^m = 1$). Hence at least one prime factor must have a residue of $3$ modulo $4$, and you're done.</p>
|
4,508,796 | <p>How to find the integral
<span class="math-container">$$\int_0^1 x\sqrt{\frac{1-x}{1+x}}dx$$</span></p>
<p>I tried by substituting <span class="math-container">$x=\cos a$</span>. But it's leading to a form <span class="math-container">$\sin2a\cdot\tan a/2$</span> which I can't integrate further.</p>
| David Quinn | 187,299 | <p>Hint..substitute <span class="math-container">$x=\cos2\theta$</span> and you get a simple trig integral…</p>
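Carrying the hint through (my own completion): with <span class="math-container">$x=\cos 2\theta$</span> the integrand becomes <span class="math-container">$2\cos 2\theta\,(1-\cos 2\theta)$</span> on <span class="math-container">$[0,\pi/4]$</span>, and the integral evaluates to <span class="math-container">$1-\pi/4$</span>. A numerical sanity check with composite Simpson's rule:

```python
import math

def f(x):
    # the original integrand x * sqrt((1-x)/(1+x))
    return x * math.sqrt((1 - x) / (1 + x))

def simpson(g, a, b, n=10000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

approx = simpson(f, 0.0, 1.0)
exact = 1 - math.pi / 4
```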
|
3,236,067 | <p>I am having some trouble understanding where some linear boundary conditions are derived from </p>
<p>The following is an extract from my lecture notes on boundary value problems for second-order Linear ODE's</p>
<blockquote>
<p>In this section we are going to consider the different situation when some conditions are specified at the endpoints, or boundaries, of an interval of the independent variable, that is, at <span class="math-container">$x = x_1$</span> and <span class="math-container">$x=x_2$</span> with <span class="math-container">$x_1 < x_2$</span>. This problem is known as a <span class="math-container">$ \textbf{Boundary Value Problem} $</span> and the conditions are called boundary conditions. We are then interested in finding the solution <span class="math-container">$y(x)$</span> to the ODE (which we consider to be Linear) inside the interval <span class="math-container">$x_1 \le x \le x_2$</span>. we will consider only linear boundary conditions, where the left-hand sides of the conditions are linear combinations of the function and its derivatives at the same point and the right hand sides are given by constants, for example </p>
<p><span class="math-container">$$ y(x_1) = b_1, \ y(x_2)=b_2 \ \ or \ \ y'(x_1)=b_1, \ y'(x_2) = b_2$$</span> </p>
<p>or more generally </p>
<p><span class="math-container">$$ \tag{1} \alpha y'(x_1) + \beta y(x_1)=b_1, \gamma y'(x_2)+\delta y(x_2) =b_2,$$</span></p>
<p>where <span class="math-container">$\alpha , \beta , \gamma , \delta$</span> are given real constants such that <span class="math-container">$|\alpha |+ | \beta | > 0, | \gamma | + | \delta|>0. $</span></p>
</blockquote>
<p>My question is this, where do the conditions <span class="math-container">$(1)$</span> derive from? </p>
| Mohammad Riazi-Kermani | 514,496 | <p>The boundary conditions are found by the physical condition of the problem at the end points. </p>
<p>For example if the end points are moving according to certain rule involving velocity or if the temperature at end points are controlled according to some rules dictated by the problem.</p>
|
257,121 | <p>The question is very simple and I apologize for that, but I am not an expert on this kind of problem.
Given the polynomial
$$ P(x_1,\ldots,x_{2n})=x_1^2+\ldots+x_n^2-x_{n+1}^2-\ldots-x_{2n}^2,$$
I would like to know if there are non trivial integer roots $(y_1,\ldots, y_{2n})$ such that
$$y_1+\cdots+y_{n}=y_{n+1}+\cdots+ y_{2n}.$$
With non trivial I mean the ones like
$$y_1=y_{n+1},\ldots,y_{n}=y_{2n},$$
or their permutations.</p>
| Fedor Petrov | 4,312 | <p>Fix large $N$ and consider all $n$-tuples $(x_1,\dots,x_n)\in \{1,\dots,N\}^n$. There are $N^n$ such $n$-tuples, at least $N^n/n!$ tuples modulo permutations, and for them the pairs $(x_1+\dots+x_n,x_1^2+\dots+x_n^2)$ take at most $n\cdot N\cdot n\cdot N^2=n^2N^3$ possible values. Thus by pigeonhole principle some value is obtained at least $N^{n-3}/(n^2\cdot n!)$ times. This is greater than 1 if $n>3$ and $N$ is chosen large enough.</p>
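The pigeonhole argument is non-constructive, but a brute-force search finds nontrivial collisions quickly (a sketch; it even succeeds for $n=3$, so the bound $n>3$ above is sufficient but not sharp):

```python
from itertools import combinations

# Look for two different 3-element subsets of {1,...,8} that share both
# the sum and the sum of squares.
seen = {}
found = None
for t in combinations(range(1, 9), 3):
    key = (sum(t), sum(x * x for x in t))
    if key in seen:
        found = (seen[key], t)
        break
    seen[key] = t
```

Since `combinations` yields each subset exactly once, a key collision is automatically a pair of distinct subsets with equal sums and equal sums of squares.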
|
61,047 | <p>I can add the value of a slider to the right of it using the Appearance-->Labelled option, but what if I want to add text after the automatic label. How can I do that?</p>
<p>Normally I want to do this to show the units of the value. For example, if the slider label is "4.7", I might want it to read "4.7 meters".</p>
| kglr | 125 | <p>You can use <code>Quantity</code> to specify the initial value and domain of a control:</p>
<pre><code>Manipulate[x, {{x, Quantity[1, "Meters"], "x ="},
Quantity[Range[0, 1, .1], "Meters"],
ControlType -> Manipulator ,
Appearance -> "Labeled"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/CVlZC.png" alt="enter image description here"></p>
<p>Few more alternatives:</p>
<pre><code>Manipulate[Quantity[x, "Meters"],
Row[{Control[{{x, 1, "x="}, 0, 1, .1}], Spacer[5], Dynamic@Quantity[x, "Meters"]}]]
Manipulate[Quantity[x, "Meters"],
Labeled[Control[{{x, 1, "x="}, 0, 1, .1}], Dynamic@Quantity[x, "Meters"], Right]]
</code></pre>
|
1,363,213 | <p>I am given a chessboard of size $8*8$. In this chessboard there are two holes at positions $(X1,Y1)$ and $(X2,Y2)$. Now I need to find the maximum number of rooks that can be placed on this chessboard such that no rook threatens another. </p>
<p>Also, no two rooks can threaten each other if there is a hole between them.</p>
<p>How can I tackle this problem? Please help</p>
<p><strong>NOTE : A hole can occupy only a single cell on the chess board.</strong></p>
| Vishwajeet Agrawal | 715,222 | <p>Consider the generalisation of the problem: place the maximum number of rooks on an m X n board with some given squares cut out. This problem can be reduced to bipartite matching with the following construction.</p>
<p>Take two sets of vertices, A and B. Vertices in A represent the maximal column segments between cut squares: along a column, two rooks attack each other exactly when they lie in the same segment. So if a square is cut out of a column, two vertices appear in A, one for each of the two pieces of that column. In particular, each cut square creates a new vertex in A, so with k cut squares there will be (n+k) vertices in A. Construct B similarly for rows; it will have (m+k) vertices.</p>
<p>There is an edge from u in A to v in B if the column segment corresponding to u and the row segment corresponding to v intersect. The maximum matching in this bipartite graph gives the maximum number of rooks that can be placed, and it can be computed efficiently using blocking flows in O(E*sqrt(V)) time, where E and V are the numbers of edges and vertices respectively.</p>
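A compact sketch of this reduction (using Kuhn's augmenting-path matching instead of blocking flow, for brevity; all names are illustrative):

```python
def max_rooks(n_rows, n_cols, holes):
    """Max non-attacking rooks on a board with holes; rooks do not
    attack through a hole. Bipartite matching on row/column segments."""
    holes = set(holes)
    col_id, row_id = {}, {}
    ncol = 0
    for c in range(n_cols):                 # label maximal column segments
        new_seg = True
        for r in range(n_rows):
            if (r, c) in holes:
                new_seg = True
            else:
                if new_seg:
                    ncol += 1
                    new_seg = False
                col_id[(r, c)] = ncol - 1
    nrow = 0
    for r in range(n_rows):                 # label maximal row segments
        new_seg = True
        for c in range(n_cols):
            if (r, c) in holes:
                new_seg = True
            else:
                if new_seg:
                    nrow += 1
                    new_seg = False
                row_id[(r, c)] = nrow - 1
    adj = [[] for _ in range(ncol)]         # edge per surviving cell
    for cell, u in col_id.items():
        adj[u].append(row_id[cell])
    match = [-1] * nrow

    def augment(u, seen):                   # Kuhn's augmenting path
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(ncol))
```

For example, on a 3 X 3 board a hole in the centre splits the middle row and the middle column, allowing 4 mutually non-attacking rooks instead of 3.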
|
4,059,426 | <ul>
<li>In a longer derivation I ran into the following quantity:
<span class="math-container">$$
\nabla\left[\nabla\cdot\left(%
{\bf r}_{0}\,{\rm e}^{{\rm i}{\bf k} \cdot {\bf r}}\,\right)
\right]
$$</span>
( i.e., the gradient of the divergence ) where <span class="math-container">${\bf k}$</span> is a vector of constants and <span class="math-container">${\bf r}$</span> is a position vector.</li>
<li>Can someone help explaining how to calculate this?
I am hoping it gives:
<span class="math-container">$$
\nabla\left[\nabla\cdot\left(%
{\bf r}_{0}\,{\rm e}^{{\rm i}{\bf k} \cdot {\bf r}}\,\right)
\right] =
-{\bf k}\left({\bf k}\cdot{\bf r}_{0}\right)
$$</span>
( because then the rest of my equations add up ).</li>
</ul>
| J.G. | 56,861 | <p><a href="https://en.wikipedia.org/wiki/Einstein_notation" rel="nofollow noreferrer">Summing over repeated indices</a>, the divergence is <span class="math-container">$r_{0i}\partial_ie^{\text{i}k_jr_j}=r_{0i}\text{i}k_ie^{\text{i}k_jr_j}=\text{i}(k\cdot r_0)e^{\text{i}k_jr_j}$</span>. Applying <span class="math-container">$\partial_l$</span> pulls down another <span class="math-container">$\text{i}k_l$</span> factor, so the gradient is <span class="math-container">$-k(k\cdot r_0)e^{\text{i}k\cdot r}$</span>. Your desired result drops the exponential, which I suspect is a typo.</p>
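A quick finite-difference check of this identity at an arbitrary point (the numerical values below are arbitrary choices of mine):

```python
import cmath

# Arbitrary test vectors.
k  = [0.3, -0.7, 0.5]
r0 = [1.2, 0.4, -0.9]
r  = [0.1, 0.2, 0.3]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

def divergence(point):
    # div(r0 e^{i k.r}) = i (k.r0) e^{i k.r}, as computed in the answer
    return 1j * dot(k, r0) * cmath.exp(1j * dot(k, point))

# Central-difference gradient of the divergence.
h = 1e-6
grad = []
for l in range(3):
    rp, rm = list(r), list(r)
    rp[l] += h
    rm[l] -= h
    grad.append((divergence(rp) - divergence(rm)) / (2 * h))

# Expected: -k (k.r0) e^{i k.r}  (the exponential factor retained).
expected = [-k[l] * dot(k, r0) * cmath.exp(1j * dot(k, r)) for l in range(3)]
max_err = max(abs(g - e) for g, e in zip(grad, expected))
```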
|
2,704,247 | <p>I was given this question to prepare for an exam:</p>
<p><em>Show that the set of all functions $f(x)$ such that $f''(x)$ = -3 on ($-\infty$, $\infty$) is uncountable.</em></p>
<p>I know that this gives me a set of parabolas $f(x) = -\frac{3}{2}x^{2} + ax + b$, but I'm unsure of how to show this set is uncountable. I thought I could find a correspondence between the roots of the parabola and the real numbers, but there's no guarantee there are any roots. Would a and b determine a unique parabola, or is this an insufficient number of points? </p>
<p>Could anyone give me a hint to figure this out, but not the solution?</p>
<p>edit: Thank you for your help, I have constructed the following bijection to show the set of these functions is equivalent to the real plane.</p>
<p>Let z : $\mathbb{R^2}$ $\to$ A, where A is the set of these functions and z(a,b) = $ -\frac{3}{2}x^{2} + ax + b$. Then z is clearly a bijection and thus A is uncountable.</p>
| Martin Argerami | 22,857 | <p>The parabola determined by $a,b$ as in your formula <strong>is</strong> unique. This is trivially established by the fact that a degree two polynomial is determined by its values at three points (and you have uncountably many to choose from). </p>
|
2,704,247 | <p>I was given this question to prepare for an exam:</p>
<p><em>Show that the set of all functions $f(x)$ such that $f''(x)$ = -3 on ($-\infty$, $\infty$) is uncountable.</em></p>
<p>I know that this gives me a set of parabolas $f(x) = -\frac{3}{2}x^{2} + ax + b$, but I'm unsure of how to show this set is uncountable. I thought I could find a correspondence between the roots of the parabola and the real numbers, but there's no guarantee there are any roots. Would a and b determine a unique parabola, or is this an insufficient number of points? </p>
<p>Could anyone give me a hint to figure this out, but not the solution?</p>
<p>edit: Thank you for your help, I have constructed the following bijection to show the set of these functions is equivalent to the real plane.</p>
<p>Let z : $\mathbb{R^2}$ $\to$ A, where A is the set of these functions and z(a,b) = $ -\frac{3}{2}x^{2} + ax + b$. Then z is clearly a bijection and thus A is uncountable.</p>
| Eric Towers | 123,905 | <p>You could set $a = 0$, then the $y$-intercept of the parabola depends only on $b$ and gives a bijection with choices of $b$. That two choices of $b$ give distinct parabolas is straightforward -- all of these are vertical translates of each other and all these parabolas are functions. You have uncountably many distinct options for $b$, so there at least that many parabolas in the collection.</p>
<p>This is generally true -- the set of vertical translates of a function cover the vertical strip containing the domain of the function with uncountably many translated copies. You can see this by picking a point in the domain and noticing that every pair with nonzero translation has distinct values at that point.</p>
|
3,245,945 | <p>Suppose <span class="math-container">$A$</span> is a linearly ordered set without maximum or minimum and every closed interval is a finite set. I want to show <span class="math-container">$A$</span> is isomorphic to the set of integers with the usual order.</p>
<p>I know that if <span class="math-container">$A$</span> is countable then I can use induction to construct partial isomorphisms and hence an isomorphism.</p>
<p>Any help is appreciated.</p>
| Alex Kruckman | 7,062 | <p>It follows from your hypothesis that every element of <span class="math-container">$A$</span> has a predecessor and a successor. So picking any <span class="math-container">$a\in A$</span>, there is an embedding <span class="math-container">$f\colon \mathbb{Z}\hookrightarrow A$</span> sending <span class="math-container">$0$</span> to <span class="math-container">$a$</span>, and whose image is a convex set. Suppose for contradiction that <span class="math-container">$f$</span> is not surjective. Then if <span class="math-container">$b$</span> is not in the image of <span class="math-container">$f$</span>, either <span class="math-container">$b$</span> is greater than every element in the image of <span class="math-container">$f$</span> or less than every element. So one of the closed intervals <span class="math-container">$[b,a]$</span> or <span class="math-container">$[a,b]$</span> is infinite. </p>
|
3,776,889 | <p>I'm reading: <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Almost_sure_convergence" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Convergence_of_random_variables#Almost_sure_convergence</a> and here it says that</p>
<blockquote>
<p>Given a probability space <span class="math-container">$(\Omega,\mathcal{F},P)$</span> and a random variable <span class="math-container">$X:\Omega \rightarrow \mathbb{R}$</span> almost sure convergence stands for <span class="math-container">$$P\left(\omega \in \Omega: \lim_{n \rightarrow \infty} X_n(\omega)=X\right)=1.$$</span> [...] almost sure convergence can also be defined as follows: <span class="math-container">$$P\left(\limsup_{n \rightarrow \infty} \left\{\omega \in \Omega: |X_n(\omega) - X(\omega)| > \varepsilon\right\}\right)=0, \quad \forall \; \varepsilon>0.$$</span></p>
</blockquote>
<p>My question is, what is the intuition behind this equivalence? I understand the first definition, but why do we use <span class="math-container">$\limsup$</span> in the second one to make the equivalence work? Thanks</p>
| Mark | 470,733 | <p>I don't really see intuition here, the equivalence just follows from using the definition of convergence. For a sequence of sets <span class="math-container">$(A_n)$</span> the set <span class="math-container">$\lim \sup(A_n)=\{A_n\ \ i.o\}$</span> is the set of elements which belong to infinitely many of the sets <span class="math-container">$A_n$</span>. The formal definition of this set is <span class="math-container">$\cap_{n=1}^\infty \cup_{k=n}^\infty A_k$</span>.</p>
<p>Assume <span class="math-container">$X_n\to X$</span> almost surely by the first definition and let any constant <span class="math-container">$\epsilon>0$</span>. Define the sequence <span class="math-container">$A_{n,\epsilon}:=\{\omega: |X_n(\omega)-X(\omega)|>\epsilon\}$</span>. Note that if <span class="math-container">$\omega\in\lim\sup A_{n,\epsilon}$</span> then it means that <span class="math-container">$|X_n(\omega)-X(\omega)|>\epsilon$</span> for infinitely many values of <span class="math-container">$n$</span>, and hence <span class="math-container">$X_n(\omega)$</span> obviously does not converge to <span class="math-container">$X(\omega)$</span>. So <span class="math-container">$\lim\sup A_{n,\epsilon}\subseteq \{\omega: X_n(\omega)\nrightarrow X(\omega)\}$</span>, and by monotonicity of probability:</p>
<p><span class="math-container">$\mathbb{P}(\lim\sup A_{n,\epsilon})\leq \mathbb{P}(\{\omega: X_n(\omega)\nrightarrow X(\omega)\})=0$</span></p>
<p><strong>Second direction:</strong> Now assume <span class="math-container">$X_n\to X$</span> by the second definition. For each <span class="math-container">$k\in\mathbb{N}$</span> define <span class="math-container">$B_k=\lim\sup A_{n,\frac{1}{k}}$</span> where the sets <span class="math-container">$A_{n,\epsilon}$</span> are defined like before. Then by assumption <span class="math-container">$\mathbb{P}(B_k)=0$</span> for all <span class="math-container">$k$</span>, and hence <span class="math-container">$\mathbb{P}(\cup_{k=1}^\infty B_k)=0$</span>. Now suppose we have <span class="math-container">$X_n(\omega)\nrightarrow X(\omega)$</span> for some <span class="math-container">$\omega$</span>. This implies that there must be some <span class="math-container">$m\in\mathbb{N}$</span> such that <span class="math-container">$|X_n(\omega)-X(\omega)|>\frac{1}{m}$</span> for infinitely many natural numbers <span class="math-container">$n$</span>, and thus <span class="math-container">$\omega\in B_m\subseteq\cup_{k=1}^\infty B_k$</span>.</p>
<p>In other words, we have the inclusion <span class="math-container">$\{\omega: X_n(\omega)\nrightarrow X(\omega)\}\subseteq\cup_{k=1}^\infty B_k$</span>, and so <span class="math-container">$\mathbb{P}(\{\omega: X_n(\omega)\nrightarrow X(\omega)\})=0$</span>.</p>
|
3,776,889 | <p>I'm reading: <a href="https://en.wikipedia.org/wiki/Convergence_of_random_variables#Almost_sure_convergence" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Convergence_of_random_variables#Almost_sure_convergence</a> and here it says that</p>
<blockquote>
<p>Given a probability space <span class="math-container">$(\Omega,\mathcal{F},P)$</span> and a random variable <span class="math-container">$X:\Omega \rightarrow \mathbb{R}$</span> almost sure convergence stands for <span class="math-container">$$P\left(\omega \in \Omega: \lim_{n \rightarrow \infty} X_n(\omega)=X\right)=1.$$</span> [...] almost sure convergence can also be defined as follows: <span class="math-container">$$P\left(\limsup_{n \rightarrow \infty} \left\{\omega \in \Omega: |X_n(\omega) - X(\omega)| > \varepsilon\right\}\right)=0, \quad \forall \; \varepsilon>0.$$</span></p>
</blockquote>
<p>My question is, what is the intuition behind this equivalence? I understand the first definition, but why do we use <span class="math-container">$\limsup$</span> in the second one to make the equivalence work? Thanks</p>
| angryavian | 43,949 | <p><strong>Intuition</strong></p>
<p>There is not much intuition to be gleaned here. The second definition comes from "massaging" the definition of the [non-random] limit of real numbers (since for a fixed <span class="math-container">$\omega$</span>, the limit <span class="math-container">$\lim_{n \to \infty} X_n(\omega)$</span> is just a non-random limit).</p>
<p>The utility of the second definition is that it is easier to verify because it involves relatively simple sets <span class="math-container">$\{|X_n(\omega) - X(\omega)| > \epsilon\}$</span> (fixed <span class="math-container">$\epsilon$</span>, fixed <span class="math-container">$n$</span>). You only need to deal with one <span class="math-container">$n$</span> at a time to understand this set, and under certain circumstances, bounding the probability of this set for each <span class="math-container">$n$</span> can be enough to bound probability of the <span class="math-container">$\limsup$</span>. By contrast, the set <span class="math-container">$\{\lim_{n \to \infty} X_n(\omega) = X(\omega)\}$</span> is difficult to deal with because of the limit inside the event.</p>
<hr />
<p><strong>Notation</strong></p>
<p>Let <span class="math-container">$A_{n, \epsilon} = \{|X_n(\omega) - X(\omega)| > \epsilon\}$</span>.
Note that
<span class="math-container">$$\limsup_{n \to \infty} A_{n, \epsilon} := \bigcap_n \bigcup_{k \ge n} A_{k,\epsilon}$$</span>
by definition.</p>
<hr />
<p><strong>(1) <span class="math-container">$\implies$</span> (2)</strong></p>
<p>Fix <span class="math-container">$\epsilon > 0$</span>.
If <span class="math-container">$\omega \in \bigcap_n \bigcup_{k \ge n} A_{k, \epsilon}$</span>, then <span class="math-container">$|X_n(\omega) - X(\omega)| > \epsilon$</span> for infinitely many <span class="math-container">$n$</span>, so <span class="math-container">$\lim_n X_n(\omega) \ne X(\omega)$</span>. Thus
<span class="math-container">$$P(\limsup_n A_{n, \epsilon}) \le P(\lim_n X_n(\omega) \ne X(\omega))$$</span>
for each <span class="math-container">$\epsilon$</span>.
So if almost sure convergence holds in the sense of the first definition, then it holds in the sense of the second definition.</p>
<hr />
<p><strong>(2) <span class="math-container">$\implies$</span> (1)</strong></p>
<p>Conversely, suppose <span class="math-container">$\omega$</span> is such that <span class="math-container">$\lim_n X_n(\omega) \ne X(\omega)$</span>. If you write out the definition of a limit, this means there exists some <span class="math-container">$\epsilon$</span>, which we may take of the form <span class="math-container">$\epsilon = 1/m$</span> for a positive integer <span class="math-container">$m$</span>, such that <span class="math-container">$|X_n(\omega) - X(\omega)| > \epsilon$</span> for infinitely many <span class="math-container">$n$</span>. That is, <span class="math-container">$\omega \in \bigcap_n \bigcup_{k \ge n} A_{k, 1/m}$</span> for some <span class="math-container">$m$</span>. Then
<span class="math-container">$$P(\lim_n X_n(\omega) \ne X(\omega)) \le \sum_{m=1}^\infty P(\limsup_n A_{n, 1/m}),$$</span>
and every term on the right vanishes if almost sure convergence holds in the sense of the second definition. So it also holds in the sense of the first definition.</p>
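A concrete illustration of the sets <span class="math-container">$A_{n,\epsilon}$</span> (my own example): take <span class="math-container">$\Omega=[0,1]$</span> with Lebesgue measure, <span class="math-container">$X_n(\omega)=\omega^n$</span> and <span class="math-container">$X=0$</span>. Since <span class="math-container">$\sup_{k\ge n}\omega^k=\omega^n$</span>, the tail probability <span class="math-container">$P\big(\bigcup_{k\ge n}A_{k,\epsilon}\big)=P(\omega^n>\epsilon)=1-\epsilon^{1/n}$</span> decreases to <span class="math-container">$0$</span>, so <span class="math-container">$P(\limsup_n A_{n,\epsilon})=0$</span>:

```python
eps = 0.1
ns = [1, 10, 100, 1000, 10000]
# P(union over k >= n of A_{k,eps}) = 1 - eps^(1/n), shrinking to 0
tails = [1 - eps ** (1 / n) for n in ns]
decreasing = all(x > y for x, y in zip(tails, tails[1:]))
```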
|
3,746,402 | <p>For example, if we define <span class="math-container">$F(x)=\int^x_a f(t)dt$</span>, where <span class="math-container">$f$</span> is Riemann integrable, then <span class="math-container">$F(x)$</span> is a function. Or for a 2 variables real-valued integrable function <span class="math-container">$f(x, y)$</span>, <span class="math-container">$G(x)=\int^b_a f(x,y)dy$</span>, then <span class="math-container">$G(x)$</span> is a function. But for <span class="math-container">$\int_a^b f$</span>, is it just a number rather than a function?</p>
| Fernando | 335,205 | <p>That depends on how you consider the parameters <span class="math-container">$a$</span> and <span class="math-container">$b$</span>: if they are fixed numbers, then <span class="math-container">$\int_a^bf$</span> is a number, but if you consider them as variables, you have a function of two variables</p>
<p><span class="math-container">$$F(a,b)=\int\limits_a^bf(t)dt,$$</span></p>
<p>where <span class="math-container">$F(x)=\int_a^xf(t)dt$</span> is just the particular case obtained by fixing the value of <span class="math-container">$a$</span> and changing the notation of the variable <span class="math-container">$b$</span> by <span class="math-container">$x$</span>.</p>
<p>So, it is mostly a matter of perspective.</p>
|
1,362,220 | <p>My question is regarding the validity of the following statement:</p>
<p>$$ (\forall a (\phi \implies \psi)) \equiv (\phi \implies \forall a \psi ),$$</p>
<p>provided, of course, there are no free occurrences of $a$ in $\phi$.</p>
<p>I am using the axiom system from <a href="http://rads.stackoverflow.com/amzn/click/0415126002" rel="nofollow">Hughes and Cresswell</a>, namely,</p>
<p>(US) $\forall a \phi \implies \phi [a / b]$ (N.B. $\phi[a/b]$ denotes a bound alphabet variant of $\phi$ with no bound $b$, then replacing every free $a$ with free $b$).</p>
<p>(UG) From $\phi \implies \psi$ infer $\phi \implies \forall a \psi$, provided $a$ is not free in $\phi$.</p>
<p>(MP) Modus Ponens. </p>
<p>I also have some modal axioms in play but I assume they are irrelevant. In the book they list as a theorem:</p>
<p>$$ (\forall a (\phi \implies \psi)) \implies (\phi \implies \forall a \psi ),$$</p>
<p>provided there are no free occurrences of $a$ in $\phi$. (Which is clearly a straightforward application of (1)+MP and then (2).) I believe the other direction should follow from a rather similar argument, but seeing as the book did not list such a equivalence as a theorem, but merely a one sided implication, I am second guessing myself. Anyway, a sketch:</p>
<p>$$(\phi \implies \forall a \psi ) \quad [1: Assumption]$$
$$(\forall a \psi \implies \psi[a/a] )\quad [2: US]$$
$$(\phi \implies \psi[a/a]) \quad [3: 1+2+MP]$$
$$(\phi \implies \forall a \psi ) \implies (\phi \implies \psi[a/a]) \quad [4: 1+3]$$
$$\forall a(\phi \implies \psi[a/a]) \quad [5: 4+UG]$$</p>
<p>Then since bound alphabetic variants are material equivalents, this delivers the result. Now, I'm a bit new to the whole logic thing so any errors or omissions would be very helpful.</p>
| Ramiro | 190,563 | <p>There is no error in your sketch. My only remark is that in step 5, I would also indicate that MP is used, $[5:4+UG+MP]$ and then from 1 and 5 you get the reversed implication. </p>
|
439,745 | <blockquote>
<p>Prove:$|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
</blockquote>
<p>example1: $|x-1|+|x-2|\geq 1$</p>
<p>my solution:(substitution)</p>
<p>$x-1=t,x-2=t-1,|t|+|t-1|\geq 1,|t-1|\geq 1-|t|,$</p>
<p>square,</p>
<p>$t^2-2t+1\geq 1-2|t|+t^2,\text{Since} -t\leq -|t|,$</p>
<p>so proved.</p>
<p><em>question1</em> : Is my proof right? Alternatives?</p>
<p>one reference answer: </p>
<p>$1-|x-1|\leq |1-(x-1)|=|1-x+1|=|x-2|$</p>
<p><em>question2</em> : prove:</p>
<p>$|x-1|+|x-2|+|x-3|\geq 2$</p>
<p>So I guess:( I think there is a name about this, what's that? wiki item?)</p>
<p>$|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
<p>How to prove this? This is <em>question3.</em> I doubt whether the two methods I used above may suit for this general case.</p>
<p>Of course, welcome any interesting answers and good comments.</p>
| André Nicolas | 6,312 | <p>Note that the sum $F(x)=|x-1|+|x-2| +\cdots +|x-n|$ is the <strong>sum of the distances</strong> from $x$ to the points $1,2,\dots,n$. Draw the $n$ points $1,2,3,\dots,n$ on the number line, taking $n$ to be, say, $7$ or $8$. </p>
<p>Imagine that a particle $P$ starts far to the left of $1$, and travels to the right. </p>
<p>Until it hits $x=1$, the sum of the distances of $P$ from $1,2,\dots, n$ is decreasing. At $x=1$ it becomes $1+2+\cdots+(n-1)$. </p>
<p>As we travel from $1$ to $2$, the function $F(x)$ is decreasing. Each tiny step $s$ we take to the right increases our distance from $1$ by $s$, but decreases our distance from each of $2, 3,\dots,n$ by $s$. So each tiny step $s$ we take decreases $F(x)$ by $(n-1)s-s$.</p>
<p>If $n$ is not too small, this decrease continues. Each small step $s$ we take from $2$ towards $3$ increases our distance from $1$ and $2$ by $s$, and decreases our distance from each of the other points by $s$, for a decrease of $(n-2)s-2s$.</p>
<p>If $n$ is odd, $F(x)$ keeps decreasing until $x=\frac{n+1}{2}$, and then by symmetry $F(x)$ starts to increase. If $n$ is even, then $F(x)$ reaches a minimum at all points between $\frac{n}{2}$ and $\frac{n}{2}+1$.</p>
<p>For $n$ odd, say $n=2k+1$, the minimum value of $F(x)$ is $2(1+2+3+\cdots +k)$. This is $k(k+1)$. For $n$ even, say $n=2k$, the minimum value of $F(x)$ is $(1+2+\cdots +(k-1))+(1+2+\cdots +k)$. This is $k^2$.</p>
<p>Back to the question in the post. </p>
<p>We want to show that if $n$ is odd, say $n=2k+1$, then $k(k+1) \ge 2k$, and that if $n$ is even, say $n=2k$, then $k^2\ge 2k-1$. Both of these are obvious.</p>
<p><strong>Remark:</strong> We wrote out a solution in order to emphasize the <strong>geometry</strong>
of the situation.</p>
<p><strong>Generalization:</strong> Suppose that instead of $1,2,\dots,n$ we have numbers $a_1 \le a_2\le a_3\le \cdots \le a_n$.</p>
<p>Play the same walking game. If $n=2k+1$ is odd, then $F(x)$ reaches its minimum at $x=a_{k+1}$. The number $a_{k+1}$ is the <strong>median</strong> of our $n$ numbers $a_1,\dots, a_n$. </p>
<p>If $n=2k$ is even, then $F(x)$ reaches a minimum at all points between $x=a_k$ and $x=a_{k+1}$. Any such $x$ can be viewed as a median of the $a_i$. </p>
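For the equally spaced case $a_i=i$, the minimum values derived above can be confirmed by brute force ($F$ is piecewise linear, so its minimum over the reals is attained at one of the points $1,\dots,n$):

```python
def F(x, n):
    # sum of distances from x to the points 1, 2, ..., n
    return sum(abs(x - i) for i in range(1, n + 1))

mins = {n: min(F(x, n) for x in range(1, n + 1)) for n in range(1, 41)}

odd_ok = all(mins[2 * k + 1] == k * (k + 1) for k in range(20))
even_ok = all(mins[2 * k] == k * k for k in range(1, 21))
bound_ok = all(mins[n] >= n - 1 for n in mins)   # the original inequality
```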
|
2,683,032 | <blockquote>
<p>Show the sum of the first $n$ positive even integers is $n^2 + n$ using strong induction.</p>
</blockquote>
<p>I can't solve the above problem using strong induction. It will be very helpful if I can get the solution. Thanks in advance.</p>
| owen88 | 12,981 | <p>In theory I think the two descriptions you give are the same, and that perhaps it is your understanding of conditional probability that is causing the confusion.</p>
<p>In your scenario a), whilst you consider pairs $P(w_i \,| \, w_{i-1})$ as "bigrams", i.e. pairs, once you condition on the specific choice of $w_{i-1}$ the only randomness left in the system is the value of $w_i$, which (assuming independence) can take one of $|V|$ values.</p>
<p>That is, assuming independence of $w_i,\,w_{i-1}$ and uniform distributions then</p>
<p>$$\mathbf P(w_i\,|\,w_{i-1}) = \frac{1}{|V|}.$$</p>
<p>The important heuristic here is: once you have conditioned on the value of $w_{i-1}$ it is no longer random; so the extra $|V|^{-1}$ factor does not come into play.</p>
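A tiny simulation of scenario a) under the independence and uniformity assumptions (vocabulary size, sample size and seed are arbitrary choices of mine):

```python
import random
from collections import Counter

random.seed(0)
V = list("abcde")                              # toy vocabulary, |V| = 5
words = [random.choice(V) for _ in range(200000)]

pair_counts = Counter(zip(words, words[1:]))
prev_counts = Counter(words[:-1])

# Empirical P(w_i | w_{i-1}) for each observed bigram; all should be
# close to 1/|V| = 0.2, with no extra 1/|V| factor appearing.
cond = {bg: c / prev_counts[bg[0]] for bg, c in pair_counts.items()}
max_dev = max(abs(p - 1 / len(V)) for p in cond.values())
```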
|
1,694,495 | <p>Graphically, I am searching for something like this:</p>
<p><a href="https://i.stack.imgur.com/Rskpk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rskpk.png" alt="enter image description here"></a></p>
<p>The only additional requirement would be that the elements are defined by a closed formula or "simple" recursion, i.e. no definition by cases (Fallunterscheidung) and such.</p>
| barak manos | 131,263 | <p>How about $a_n=2^{n\cdot\lfloor{n/1000000}\rfloor}$?</p>
|
3,484,136 | <blockquote>
<p>Show that <span class="math-container">$n^2+n$</span> is even for all <span class="math-container">$n\in\mathbb{N}$</span> by contradiction.</p>
</blockquote>
<p>My attempt: assume that <span class="math-container">$n^2+n$</span> is odd, then <span class="math-container">$n^2+n=2k+1$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>We have that
<span class="math-container">$$n^2+n=2k+1 \Leftrightarrow \left(n+\frac{1}{2}\right)^2-\frac{1}{4}=2k+1\Leftrightarrow n=\sqrt{2k+\frac{5}{4}}-\frac{1}{2}.$$</span></p>
<p>Choosing <span class="math-container">$k=1$</span>, we have that <span class="math-container">$n=\sqrt{2+\frac{5}{4}}-\frac{1}{2}\notin\mathbb{N}$</span>, so we have a contradiction because <span class="math-container">$n^2+n=2k+1$</span> should be verified for all <span class="math-container">$n\in\mathbb{N}$</span> and for all <span class="math-container">$k\in\mathbb{N}$</span>.</p>
<p>Is this correct or wrong? If wrong, can you tell me where and why? Thanks.</p>
| CyclotomicField | 464,974 | <p>You can't choose <span class="math-container">$k$</span> here; you have to show it's true for all of them. I would prove it this way. Assume <span class="math-container">$n^2+n$</span> is odd. This means <span class="math-container">$n(n+1)$</span> is odd, but that's impossible, because a number and its successor can't both be odd.</p>
|
2,559,560 | <blockquote>
<p>Show that there are two distinct positive integers $a, b$ such that $1394 \mid 2^a-2^b$.</p>
</blockquote>
<p>I'm sure the pigeonhole principle applies here, but I don't recognize the holes. Another statement of the problem is: show that there are two positive integers $a,b$ such that: $$2^a\equiv 2^b\pmod {1394}$$<br>
Of course we have $1394$ residues mod $1394$, but what are the pigeons?</p>
| Siong Thye Goh | 306,553 | <p>Hint:</p>
<p>Consider $2^i \pmod{1394}$ for $1 \leq i \leq \color{blue}{1395}$</p>
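<p>The holes are the $1394$ residues and the pigeons are the $1395$ exponents; a brute-force Python sketch (my own illustration, not part of the hint) makes the collision explicit:</p>

```python
# 1395 pigeons (exponents i = 1..1395) into 1394 holes (residues mod 1394):
# two powers of two must land in the same hole.
seen = {}   # residue mod 1394 -> first exponent that produced it
a = b = None
for i in range(1, 1396):
    r = pow(2, i, 1394)   # modular exponentiation, no huge integers needed
    if r in seen:
        a, b = i, seen[r]  # collision: 2^a ≡ 2^b (mod 1394) with a > b
        break
    seen[r] = i
```

<p>In fact the loop stops far earlier than $1395$, since the powers of $2$ are eventually periodic mod $1394$; but only the pigeonhole argument guarantees a collision in advance.</p>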
|
1,250,132 | <p>Below is part of a solution to a critical points question. I'm just not sure how the equation on the left becomes the equation on the right. Could someone please show me the steps in-between? Thanks.</p>
<blockquote>
<p>$$\frac{-1}{x^2}+2x=0 \implies 2x^3-1=0$$</p>
</blockquote>
| trocho | 207,836 | <p>Multiply by $x^2$ both sides of the equation.</p>
|
1,250,132 | <p>Below is part of a solution to a critical points question. I'm just not sure how the equation on the left becomes the equation on the right. Could someone please show me the steps in-between? Thanks.</p>
<blockquote>
<p>$$\frac{-1}{x^2}+2x=0 \implies 2x^3-1=0$$</p>
</blockquote>
| Daniel W. Farlow | 191,378 | <p>You need to keep one important thing in mind: what must be true of $x$ for $\frac{-1}{x^2}+2x=0$ to make any sense? We must have that $x\neq 0$. Bear this in mind before multiplying through:
\begin{array}{rcl}
\frac{-1}{x^2}+2x &=&0\\[0.5em]
x^2\cdot\left(\frac{-1}{x^2}+2x\right)&=&x^2\cdot0\\[0.5em]
-1+2x^3 &=& 0\\[0.5em]
2x^3-1 &=& 0
\end{array}
Even though $x=0$ is not a solution to $2x^3-1=0$, you still need to be aware of possibly introducing an extraneous solution when you perform such algebraic manipulations. </p>
|
1,036,636 | <p>The following statement makes sense intuitively, but is there a way to prove it mathematically? (This is something we make use of in applied optimization in calculus.)</p>
<blockquote>
<p>If $f$ is continuous on an interval $I$ and $x_0$ is the <strong>only</strong> relative (local) extremum, then $x_0$ is actually an <strong>absolute (global)</strong> extremum on $I$.</p>
</blockquote>
| SPK.z | 171,119 | <p>Not sure if this is what you mean, but I'll give it a go.</p>
<p>If you consider the extrema to be the minima, you can say that an absolute minimum is always a relative minimum (because if it's not even a relative minimum, how can it be an absolute minimum?). That means that only relative minima are candidates for the absolute minimum. We could write:
$$a = \min{(r_1,\ r_2,\ r_3,\ \ldots)},$$
where $a$ is the absolute minimum and $r_i$ is the $i$'th relative minimum. Hence, if we have only one minimum, we have:
$$a = \min{(r_1)},$$
so that the absolute minimum automatically equals the relative minimum. The case with maxima instead of minima is analogous.</p>
|
1,534,981 | <p>I am trying to verify that my procedure for finding the extrema is correct for the function
$u \left( x,y \right) ={x}^{2}-{y}^{2}$ on the set $D = \left\{ \left( x,y \right) \in \mathbb{R}^{2} \mid x^{2}+y^{2} \le 1 \right\}$.</p>
<p>By the closed interval method, to find the absolute maximum and minimum values of a continuous function u on a closed, bounded set D,</p>
<ol>
<li><p>Find the values of $u$ at the critical points of $u$ in $D$:
$u_{x}\left( x,y \right) = 2x = 0$ and
$u_{y}\left( x,y \right) = -2y = 0$,
and so $(x,y) = (0,0)$ is the only critical point on the unit disk.</p></li>
<li><p>Find the extreme values of $u$ on the boundary of the unit disk:
from ${y}^{2}=-{x}^{2}+1$, $v(x) = u\left( x,y \right) = {x}^{2}-{y}^{2} = 2x^{2}-1$
and
$v'(x) = 4x$.</p></li>
</ol>
<p>So the only critical point of $v$ is $x = 0$.</p>
<p>On the unit circle we have $-1 \le x \le 1$, and the values of $v$ at the critical point and the endpoints are:</p>
<p>$v(0) = -1$, $v(-1) = 1$, $v(1) = 1$</p>
| Dr. Sonnhard Graubner | 175,066 | <p>The critical points in the interior of $D$ are the solutions of the system
$$2x=0$$
$$2y=0$$
For the boundary we consider
$$F(x,y)=x^2-y^2+\lambda(x^2+y^2-1)$$
and you must solve
$$2x+2\lambda x=0$$
$$-2y+2\lambda y=0$$
$$x^2+y^2=1$$</p>
|
135,911 | <p>Liouville's theorem gives such a proof for antiderivatives of functions like <span class="math-container">$e^x/x$</span> or <span class="math-container">$e^{x^2}$</span>, and differential Galois theory extends that to Bessel functions, say. But what tools exist for implicit functions like Lambert's W?</p>
| IV_ | 94,085 | <p>In [Ritt 1948] p. 53 - 56, the method of J. Liouville is given for Kepler's equation. The same method can be applied to functions <span class="math-container">$f$</span> with <span class="math-container">$f(z)=A(z,e^z)$</span> (<span class="math-container">$A$</span> an algebraic function of two complex variables with complex coefficients), to functions <span class="math-container">$g$</span> with <span class="math-container">$g(z)=A(z,\ln(z))$</span>, and generally to functions <span class="math-container">$h$</span> with <span class="math-container">$h(z)=u(A(v(z),e^{v(z)}))$</span> (<span class="math-container">$u,v$</span> bijective <a href="https://en.wikipedia.org/wiki/Elementary_function" rel="nofollow noreferrer">elementary functions</a> whose inverses are elementary functions) with a complex domain that doesn't contain isolated points. An example is Lambert W.<br />
[Ritt 1948] Ritt, J. F.: Integration in finite terms. Liouville's theory of elementary methods. 1948</p>
<p>A further method is the method of <a href="https://eudml.org/doc/103891" rel="nofollow noreferrer">Rosenlicht, M.: On the explicit solvability of certain transcendental equations. Publications mathématiques de l'IHÉS 36 (1969) 15-22</a>. It is a byproduct of Liouville's theory of integration in finite terms. It is written in the language of Differential algebra, but it can be represented also without that.<br />
This method is applicable only for functions satisfying a differential equation that is simple enough.<br />
A reference for Kepler's equation is <a href="http://mat.uab.es/pubmat/volums/update_navegador/volum_id:26" rel="nofollow noreferrer">Zarzuela Armengou, S.: About some questions of differential algebra concerning to elementary functions. Pub. Mat. UAB 26 (1982) (1) 5-15</a>.<br />
A reference for Lambert W function is Bronstein/Corless/Davenport/Jeffrey 2008 from the answer of Igor Khavkine above.</p>
<p>The branches of Lambert W are the local inverses of the Elementary function <span class="math-container">$f$</span> with <span class="math-container">$f(z)=ze^z$</span>, <span class="math-container">$z \in \mathbb{C}$</span>.</p>
<p>The unfortunately hardly noticed theorem of Joseph Fels Ritt in <a href="https://doi.org/10.1090/S0002-9947-1925-1501299-9" rel="nofollow noreferrer">Ritt, J. F.: Elementary functions and their inverses. Trans. Amer. Math. Soc. 27 (1925) (1) 68-90</a> answers which kinds of Elementary functions can have an inverse that is an Elementary function.</p>
<p>And Ritt's theorem shows that no antiderivatives, no differentiation and no differential fields are needed for defining the Elementary functions.</p>
<p>Ritt's theorem is proved also in <a href="https://www.jstor.org/stable/2373917?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Risch, R. H.: Algebraic Properties of the Elementary Functions of Analysis. Amer. J. Math 101 (1979) (4) 743-759</a>.</p>
<p>By extension of Risch's structure theorem for the elementary functions, Ritt's theorem could possibly be extended to other and to larger classes of functions, as I proposed in my question here: <a href="https://mathoverflow.net/questions/320801/how-to-extend-ritts-theorem-on-elementary-invertible-bijective-elementary-funct">How to extend Ritt's theorem on elementary invertible bijective elementary functions?</a>.</p>
|
4,050,336 | <p>I am familiar with the Negative Binomial distribution <span class="math-container">$NB(p, k)$</span>, which gives the number of failures before <span class="math-container">$k$</span> successes occur in a Bernoulli process with parameter <span class="math-container">$p$</span>. I am wondering, however, if there the distribution of the number of failures before getting <span class="math-container">$k$</span> <em>distinct</em> successes is a well-known distribution.</p>
<p>More precisely, suppose we have some set of <span class="math-container">$N$</span> elements; without loss of generality, we may assume this set is <span class="math-container">$[N] = \{1, \cdots, N\}$</span>. We have a subset <span class="math-container">$S \subseteq [N]$</span> which contains the successes, while every element in <span class="math-container">$[N] \setminus S$</span> is a failure. Now in our process, we select an element uniformly at random from <span class="math-container">$[N]$</span>, and keep on doing this until we have attained some <span class="math-container">$k \leq |S|$</span> successes. Let <span class="math-container">$X$</span> be the total number of trials performed before we stop the process. <em>What is the distribution of <span class="math-container">$X$</span>, in terms of <span class="math-container">$k$</span>, <span class="math-container">$N$</span>, and <span class="math-container">$p = |S| / N$</span>?</em> If we can't concisely describe this distribution, can we at least come up with some (relatively tight) tail bounds? And how valid is the approximation <span class="math-container">$X \approx NB(p, k)$</span>?</p>
<p>I know this problem can simplify to the coupon collector's problem in the case that <span class="math-container">$S = [N]$</span> and <span class="math-container">$k = N$</span>, but I'm more interested in the case where <span class="math-container">$0 < p < 1$</span> and <span class="math-container">$k \ll |S| = pN$</span>.</p>
| Kenny Lau | 328,173 | <p>Let <span class="math-container">$a+bi\sqrt2$</span> be a root of <span class="math-container">$x^2+1$</span>, so <span class="math-container">$(a+bi\sqrt2)^2 = -1$</span>, i.e. <span class="math-container">$(a^2-2b^2)+2abi\sqrt2 = -1$</span>.</p>
<p>Equate coefficients and derive a contradiction.</p>
|
164,309 | <p>I came across this recurrence relation:</p>
<blockquote>
<p>$$F(n) = a \times F(n-1) + b$$</p>
</blockquote>
<p>where $F(0) =1$. We have to solve for $F(n) \pmod {m}$</p>
<p>But for very large $n$, computing it term by term takes too long. Is there any way to simplify this? I think the values will repeat after some $n$, depending on the values of $a,b$ and $m$, but I am unable to figure out how to solve it.</p>
| Community | -1 | <p>$F(n) = T(n) + c \implies T(n) + c = aT(n-1) + ac + b$.</p>
<p>Choosing $c = b/(1-a)$, we get that $T(n) = a T(n-1) \implies T(n) = a^n T(0)$.</p>
<p>Hence, $$F(n) = T(n) + c = a^n T(0) + b/(1-a) = a^n (F(0) - b/(1-a)) + b/(1-a)\\ \implies F(n) = a^n F(0) +b \dfrac{1-a^n}{1-a} = a^n + b \dfrac{1-a^n}{1-a}$$</p>
<p>If you are just interested in $F(n) \pmod{m}$, then make use of the recurrence. Call $G(n) = F(n) \pmod{m}$. Precompute $a' = a \pmod{m}$, $b' = b \pmod{m}$. Then we have that $$G(n) \equiv \left(a'G(n-1) + b' \right) \pmod{m}$$</p>
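<p>For very large $n$, one can avoid iterating the recurrence $n$ times altogether: the affine step $x \mapsto ax + b$ corresponds to the matrix $\begin{pmatrix} a & b \\ 0 & 1\end{pmatrix}$ acting on the vector $(x, 1)^T$, so $F(n) \bmod m$ costs only $O(\log n)$ modular multiplications via repeated squaring. A Python sketch (the function name and interface are my own):</p>

```python
def f_mod(n, a, b, m, f0=1):
    """F(n) mod m for F(k) = a*F(k-1) + b, F(0) = f0, in O(log n) steps.

    The affine step x -> a*x + b is the matrix [[a, b], [0, 1]] acting on
    the column vector (x, 1); raise it to the n-th power by squaring."""
    def mul(X, Y):  # 2x2 matrix product mod m
        return [[(X[0][0]*Y[0][0] + X[0][1]*Y[1][0]) % m,
                 (X[0][0]*Y[0][1] + X[0][1]*Y[1][1]) % m],
                [(X[1][0]*Y[0][0] + X[1][1]*Y[1][0]) % m,
                 (X[1][0]*Y[0][1] + X[1][1]*Y[1][1]) % m]]
    R = [[1, 0], [0, 1]]            # identity matrix
    M = [[a % m, b % m], [0, 1]]
    while n:
        if n & 1:
            R = mul(R, M)
        M = mul(M, M)
        n >>= 1
    return (R[0][0] * f0 + R[0][1]) % m
```

<p>This agrees with the naive iteration $G(n) \equiv a'G(n-1) + b' \pmod m$, and it also works when $1-a$ is not invertible mod $m$, where the closed form $a^n + b\,\frac{1-a^n}{1-a}$ cannot be reduced mod $m$ directly.</p>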
|
1,159,155 | <p>If we define <strong>open</strong> as: A set $O \subseteq \mathbb{R}$ is open if for all points $a \in O$ there exists an
$\epsilon$-neighborhood $V_\epsilon(a) \subseteq O$.</p>
<p>Where $V_\epsilon(a) = \{x \in \mathbb{R}: | x - a | < \epsilon\}$
Now consider some open interval: </p>
<p>$(c,d) = \{x \in \mathbb{R} : c<x<d \}$</p>
<p>To see that $(c,d)$ is open, let $x \in (c,d)$ be arbitrary.</p>
<p>let $\epsilon = \text{min}\{x-c,d-x\}$, then it follows that $V_\epsilon(x) \subseteq (c,d)$</p>
<p>I am unable to see why this definition does not hold for a set containing one or more closed end points.</p>
<p>If my understanding is correct, let's take the closed interval $[1,10]$ and lets choose $x = 10$. So clearly $x \in [1,10]$</p>
<p>$\epsilon = \text{min} \{10-1, 10-10\} = 0$</p>
<p>Then isn't it still true that
$V_\epsilon(x) \subseteq [1,10]$ </p>
<p>Also, how would you express $V_\epsilon(x)$ in this instance, like it is expressed in the third line?</p>
| Ken | 169,838 | <p>If you allowed $\epsilon$ to be $0$, then every set is open since the $0$-neighborhood of a point is just itself. For this reason, $\epsilon$ is understood to be positive.</p>
|
36,735 | <p>In Peter J. Cameron's book "Permutation Groups" I found the following quote</p>
<blockquote>
<p>It is a slogan of modern enumeration theory that the ability to count a set is closely related to the ability to pick a random element from that set (with all elements equally likely).</p>
</blockquote>
<p>Indeed, one can count and sample uniformly from labeled trees, spanning trees, spanning forests, dimer models, young tableaux, plane partitions etc. However one can't do either of these very efficiently with groups, for example. My question is if one can make this into a rigorous statement, perhaps through complexity theory. That is, if I have an algorithm to produce a uniform sample from a set of objects, can I somehow come up with an efficient way to count them or vice-versa? </p>
<p>Does this slogan have a standard name? Are there any references?</p>
| Benoît Kloeckner | 4,961 | <p>If you have an algorithm that produces uniform and independent samples from a set of objects, you can estimate the total number of objects as follows. First, construct a subset of the objects to be counted, if possible quite large, in a way such that you know the size of the subset and you can check easily if a given element is in your subset. Then start using your algorithm to produce random elements $X_1,\ldots, X_n,\ldots$. By the law of large numbers, the fraction of $X_i$ that belong to your subset will converge to its size divided by the total number of objects, so that you get your estimation. Finer convergence theorems like the CLT give you quantitative estimates on the probability that your estimate is bad.</p>
<p>Of course, if your subset is very small compared to the total number of objects, you will have to wait very long for this method to give you something.</p>
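<p>A runnable toy version of this scheme (the permutation example and all names here are my own illustration, not from the answer): sample uniformly from $S$, count how often the samples land in a known subset $B$, and return $|B|$ divided by the observed fraction.</p>

```python
import random

def estimate_set_size(sampler, in_subset, subset_size, trials):
    """Estimate |S| given: a uniform sampler for S, a membership test for a
    subset B of S of known size, and a number of trials.
    By the law of large numbers, hits/trials -> |B|/|S|."""
    hits = sum(in_subset(sampler()) for _ in range(trials))
    if hits == 0:
        raise ValueError("no sample landed in the subset; pick a larger B")
    return subset_size * trials / hits

# Toy check: S = permutations of {0,...,6} (|S| = 7! = 5040),
# B = permutations fixing 0 (|B| = 6! = 720).
random.seed(0)
def sample_perm():
    p = list(range(7))
    random.shuffle(p)   # uniform random permutation
    return p

est = estimate_set_size(sample_perm, lambda p: p[0] == 0, 720, 200_000)
```

<p>With $2\cdot 10^5$ trials the estimate lands within a few percent of $5040$; the relative error shrinks like $1/\sqrt{\text{hits}}$, which is exactly the caveat in the last paragraph: a tiny subset makes the wait long.</p>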
|
36,735 | <p>In Peter J. Cameron's book "Permutation Groups" I found the following quote</p>
<blockquote>
<p>It is a slogan of modern enumeration theory that the ability to count a set is closely related to the ability to pick a random element from that set (with all elements equally likely).</p>
</blockquote>
<p>Indeed, one can count and sample uniformly from labeled trees, spanning trees, spanning forests, dimer models, young tableaux, plane partitions etc. However one can't do either of these very efficiently with groups, for example. My question is if one can make this into a rigorous statement, perhaps through complexity theory. That is, if I have an algorithm to produce a uniform sample from a set of objects, can I somehow come up with an efficient way to count them or vice-versa? </p>
<p>Does this slogan have a standard name? Are there any references?</p>
| Brendan McKay | 9,025 | <p>Suppose there are two finite sets $A$, $B$ and you have some relation on $A\times B$. If you can randomly sample from $A$ and $B$, you can estimate the average number $a$ of elements of $A$ related to each element of $B$, and the average number $b$ of elements of $B$ related to each element of $A$. Then $a/b=|A|/|B|$, so you can estimate the relative sizes of $A$ and $B$. So if you know the size of $B$ you can estimate the size of $A$. More generally, you can construct a chain of such ratios from the set you want to estimate to one you know the size of.</p>
<p>For example, if you want to estimate the number of trees of order $n$, then consider the trees of order $n-1$ and the relation of removing one leaf from a larger tree to make a smaller tree. Extend this chain to trees of order $n-2$, $n-3$, etc down to order $1$, where you know the number of trees is $1$, and use random sampling to estimate the ratios of the sizes of adjacent classes.</p>
|