| qid | question | author | author_id | answer |
|---|---|---|---|---|
7,064 | <p>The <a href="http://en.wikipedia.org/wiki/Long_division" rel="nofollow">Wikipedia article</a> on long division explains the different notations. I still use the European notation I learned in elementary school in Colombia. I had difficulty adapting to the US/UK notation when I moved to the US. However, I did enjoy seeing my classmates' puzzled faces in college whenever we had a professor that preferred the European notation.</p>
<p>What long division notation do you use and where did you learn it?</p>
| Yuval Filmus | 1,277 | <p>In Israel I was taught (in the 90's) the US notation, with the divisor on the right (might have something to do with the fact that we write Hebrew right to left).</p>
|
99,617 | <p>How can I animate a point on a polar curve? I have used <code>Animate</code> and <code>Show</code> together before in order to get the curve and the moving point together on the same plot, but combining the polar plot and the point doesn't seem to be working, because <code>Point</code> only works with Cartesian coordinates.</p>
<p>Here is the code I used before to animate a point on a parametric curve. For higher values of <code>a</code> and <code>theta</code>, you can see the point moving along the curve better (I was required to animate all three parameters).</p>
<pre><code>Animate[
Show[
ParametricPlot[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}, {t, 0, 15}, AxesLabel -> {"x", "y"},
PlotRange -> {{0, 50}, {0, 30}}],
Graphics[{Red, PointSize[.05], Point[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}]}]
],
{t, 0, 5, Appearance -> "Labeled"},
{a, 1, 20, Appearance -> "Labeled"},
{θ, 0, Pi/2, Appearance -> "Labeled"},
AnimationRunning -> False
]
</code></pre>
<p>Here is the code I tried to use to animate a point on a polar curve, but the point does not even show up.</p>
<pre><code>Animate[
Show[
PolarPlot[2 Sin[4*θ], {θ, 0, 2 Pi}],
Graphics[Red, PointSize[Large], Point[{2 Sin[4*θ] Cos[θ], 2 Sin[4*θ] Sin[θ]}]]
],
{θ, 0, 2 Pi},
AnimationRunning -> False
]
</code></pre>
| kglr | 125 | <pre><code>Animate[PolarPlot[2 Sin[4 θ], {θ, 0, 2 Pi}, Axes -> False,
MeshFunctions -> {#3 &}, Mesh -> {{{θ, Directive[Red, PointSize[Large]]}}}],
{θ, 0, 2 Pi}, AnimationRunning -> False]
</code></pre>
<p><a href="https://i.stack.imgur.com/PI5cZ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PI5cZ.gif" alt="enter image description here"></a></p>
|
1,404,960 | <p>Say you have a function $f(x)$ and a line $g(x)=ax+b$. How do you reflect $f$ about $g$?</p>
<p>I am apparently supposed to write more text, but the line above is all I am after, hence I wrote this sentence as well.</p>
| Ben Grossmann | 81,360 | <p>We can implement this as a translation of the line to the origin, a reflection about the line through the origin, and then a translation back in the same direction.</p>
<p>In particular: suppose we want to reflect a point $(x_0,y_0)$ across this line.</p>
<ul>
<li><p>First translate it to the point $(x_1,y_1) = (x_0,y_0 - b)$.</p></li>
<li><p>Then, reflect $(x_1,y_1)$ across the line $y = ax$ to get
$$
(x_2,y_2) = \frac1{a^2 + 1} ([1 - a^2]x_1 + 2a\,y_1,2a\,x_1 + [a^2 - 1]y_1)
$$</p></li>
<li><p>Finally, translate back to get $(x_3,y_3) = (x_2,y_2 + b)$.</p></li>
</ul>
<p>So, the curve parametrized by
$$
x = t\\
y = f(t)
$$
becomes the curve parametrized by
$$
x = \frac{1 - a^2}{a^2 + 1}\,t + \frac{2a}{a^2 + 1}(f(t) - b) \\
y = \frac{2a}{a^2 + 1}\,t + \frac{a^2 - 1}{a^2 + 1}(f(t) - b)+b
$$
Note: there is no guarantee that we can write $y$ as a function of $x$.</p>
<hr>
<p><strong>Interesting cases:</strong> if we take $g(x) = 0x + b$, then we get
$$
y = - (f(t) - b) + b = 2b - f(t)\\
x = t
$$
which is simply the curve $y = 2b - f(x)$.</p>
<p>If we take $g(x) = x$, then we get
$$
y = t\\
x = f(t)
$$
This is the curve $x = f(y)$. When $f$ is invertible, we can rewrite this as $y = f^{-1}(x)$.</p>
|
1,737,835 | <p>You have decided to buy candy for the trick-or-treaters and have estimated there will be 200 children coming to your door, and you plan to give each child three pieces of candy. You have decided to offer Twix and 3 Musketeers. The cost of buying these two candies is<br>
$$C= 5T^2 + 2TM + 3M^2 + 800$$
where T is the number of Twix and M is the number of 3 Musketeers. How many of each candy should you get to minimize the cost? </p>
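<p>For what it's worth, the question appears to presuppose the constraint $T + M = 200 \times 3 = 600$ pieces (that reading is mine). Substituting $M = 600 - T$ into $C$ gives a convex quadratic in $T$, and a brute-force check over the integers confirms the minimizer:</p>

```python
def cost(t, m):
    # The stated cost function C = 5T^2 + 2TM + 3M^2 + 800.
    return 5 * t * t + 2 * t * m + 3 * m * m + 800

# Assumed constraint: 200 children x 3 pieces each = 600 candies in total.
best_t = min(range(601), key=lambda t: cost(t, 600 - t))
print(best_t, 600 - best_t, cost(best_t, 600 - best_t))  # 200 400 840800
```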
| marco trevi | 170,887 | <p>\begin{equation}
g(x,y):=\max\left(\frac{|x+y|}{\sqrt{2}},\frac{|x-y|}{\sqrt{2}}\right)+\max(|x|,|y|)=1
\end{equation}
gives an octagon in $\mathbb{R}^2$, but I'm not sure if it's a norm...</p>
|
206,636 | <p>What's the fastest way to find the local maxima of a 2D list? <em>E.g.</em></p>
<pre><code>nx = ny = 100;
dat = Table[Sin[2. \[Pi] x/nx] (0.1 + Cos[2. \[Pi] y/ny]), {y, 0, ny}, {x, 0, nx}];
ListPlot3D[dat]
</code></pre>
<p><img src="https://i.stack.imgur.com/MyGBl.png" alt="Mathematica graphics"></p>
<p>This (updated) data set has three local maxima of different heights:</p>
<pre><code>Position[MaxDetect[dat], 1]
(* {{1, 26}, {51, 76}, {101, 26}} *)
dat[[1, 26]]
dat[[51, 76]]
dat[[101, 26]]
(* 1.1, 0.9, 1.1 *)
</code></pre>
<p>My original attempt was super-slow:</p>
<pre><code>RepeatedTiming[MaxDetect[Chop@dat];][[1]]
(* 1.55 *)
</code></pre>
<p>Turns out using <code>Chop</code> is a very bad idea. Without it, the call is 100X faster:</p>
<pre><code>RepeatedTiming[MaxDetect[dat];][[1]]
(* 0.016 *)
</code></pre>
<p>Along the way I discovered another version that is 2X faster yet:</p>
<pre><code>RepeatedTiming[MaxDetect[Image[dat]];][[1]]
(* 0.0067 *)
</code></pre>
<p><strong>Questions</strong></p>
<ol>
<li>Why is <code>MaxDetect</code> so much slower when <code>Chop</code> is applied? (I should add that my actual non-example problem has lots of small values that needed <code>Chop</code>-ping)</li>
<li>Why does converting to an <code>Image</code> speed it up further?</li>
<li>Is there any faster way available?</li>
</ol>
| Alexey Popkov | 280 | <blockquote>
<p>Why is <code>MaxDetect</code> so much slower when <code>Chop</code> is applied?</p>
</blockquote>
<p>Because <code>Chop</code> returns exact zeros, which prevents the packing of the matrix attempted by <code>MaxDetect</code>. You can detect an attempt to pack the array with <code>Trace</code>:</p>
<pre><code>Trace[MaxDetect[{{1.}}], Developer`ToPackedArray] // Flatten
</code></pre>
<p><a href="https://i.stack.imgur.com/s53P7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s53P7.png" alt="screenshot"></a></p>
<p>Applying <code>N</code> after <code>Chop</code> gives almost the same speed as without <code>Chop</code> because now the matrix can be packed again:</p>
<pre><code>RepeatedTiming[MaxDetect[dat];][[1]]
RepeatedTiming[MaxDetect[N@Chop@dat];][[1]]
</code></pre>
<blockquote>
<pre><code>0.06
0.07
</code></pre>
</blockquote>
<hr>
<blockquote>
<p>Why does converting to an <code>Image</code> speed it up further?</p>
</blockquote>
<p>It seems that <code>MaxDetect</code> is simply not so well-optimized for an array input as compared to an <code>Image</code> input. A superficial analysis follows.</p>
<ol>
<li><p>Compare the complexity of the <code>Trace</code> outputs in both cases:</p>
<pre><code>Trace[MaxDetect[dat]] // LeafCount
Trace[MaxDetect[Image@dat]] // LeafCount
</code></pre>
<blockquote>
<pre><code>79670996
260610
</code></pre>
</blockquote>
<p>It indicates that in the case of array input it is evaluated many times as an (unpacked?) array, while in the case of <code>Image</code> input it is evaluated as an atomic <code>Image</code> object, which may be faster. </p></li>
<li><p>Additional evidence in support of the previous guess:</p>
<pre><code>Count[Trace[MaxDetect[dat]], _Image, Infinity]
Count[Trace[MaxDetect[Image@dat]], _Image, Infinity]
</code></pre>
<blockquote>
<pre><code>8
1913
</code></pre>
</blockquote>
<pre><code>Count[Trace[MaxDetect[dat]], _List?MatrixQ, Infinity]
Count[Trace[MaxDetect[Image@dat]], _List?MatrixQ, Infinity] // Quiet
</code></pre>
<blockquote>
<pre><code>7432
367
</code></pre>
</blockquote></li>
<li><p>And finally, let us compare how many packed and non-packed array evaluations appear in both cases:</p>
<pre><code>Count[Trace[MaxDetect[dat]], a_List /; Developer`PackedArrayQ[a], Infinity]
Count[Trace[MaxDetect[Image@dat]], a_List /; Developer`PackedArrayQ[a], Infinity] // Quiet
</code></pre>
<blockquote>
<pre><code>102
204
</code></pre>
</blockquote>
<pre><code>Count[Trace[MaxDetect[dat]], a_List /; ! Developer`PackedArrayQ[a], Infinity]
Count[Trace[MaxDetect[Image@dat]], a_List /; ! Developer`PackedArrayQ[a], Infinity] // Quiet
</code></pre>
<blockquote>
<pre><code>748562
8706
</code></pre>
</blockquote>
<p>From the last output we see that there seems to be about 100 times more evaluations of non-packed arrays in the case of array input as compared to <code>Image</code> input. I think that now the situation is clear enough.</p></li>
</ol>
|
206,636 | <p>What's the fastest way to find the local maxima of a 2D list? <em>E.g.</em></p>
<pre><code>nx = ny = 100;
dat = Table[Sin[2. \[Pi] x/nx] (0.1 + Cos[2. \[Pi] y/ny]), {y, 0, ny}, {x, 0, nx}];
ListPlot3D[dat]
</code></pre>
<p><img src="https://i.stack.imgur.com/MyGBl.png" alt="Mathematica graphics"></p>
<p>This (updated) data set has three local maxima of different heights:</p>
<pre><code>Position[MaxDetect[dat], 1]
(* {{1, 26}, {51, 76}, {101, 26}} *)
dat[[1, 26]]
dat[[51, 76]]
dat[[101, 26]]
(* 1.1, 0.9, 1.1 *)
</code></pre>
<p>My original attempt was super-slow:</p>
<pre><code>RepeatedTiming[MaxDetect[Chop@dat];][[1]]
(* 1.55 *)
</code></pre>
<p>Turns out using <code>Chop</code> is a very bad idea. Without it, the call is 100X faster:</p>
<pre><code>RepeatedTiming[MaxDetect[dat];][[1]]
(* 0.016 *)
</code></pre>
<p>Along the way I discovered another version that is 2X faster yet:</p>
<pre><code>RepeatedTiming[MaxDetect[Image[dat]];][[1]]
(* 0.0067 *)
</code></pre>
<p><strong>Questions</strong></p>
<ol>
<li>Why is <code>MaxDetect</code> so much slower when <code>Chop</code> is applied? (I should add that my actual non-example problem has lots of small values that needed <code>Chop</code>-ping)</li>
<li>Why does converting to an <code>Image</code> speed it up further?</li>
<li>Is there any faster way available?</li>
</ol>
| Henrik Schumacher | 38,178 | <p>A severe problem is that <code>Table</code> generates an unpacked array, a thing that is often very annoying. When converting to an image, it is packed automatically (one can check that, e.g., with <code>Developer`PackedArrayQ[ImageData[Image[dat]]]</code>). And because many functions work faster on packed arrays than on unpacked ones, this increases the performance.</p>
<p>A vectorized implementation that exploits the very nature of the function (and is thus not very generalizable) is the following:</p>
<pre><code>nx = ny = 1000;
dat = KroneckerProduct[
(0.1 + Cos[Subdivide[0., 2. Pi, ny]]),
Sin[Subdivide[0., 2. Pi, nx]]
];
</code></pre>
<blockquote>
<p>0.003691</p>
</blockquote>
<p>The maxima can be found by comparing each value to the maximum of its neighbors. The latter can be done efficiently with <code>MaxFilter</code>:</p>
<pre><code>getindex = {Quotient[#, Dimensions[dat][[1]]] + 1, Mod[#, Dimensions[dat][[2]], 1]} &;
idx = getindex /@
Random`Private`PositionsOf[
Flatten@UnitStep[dat - MaxFilter[dat, {1, 1}]],
1
]; // AbsoluteTiming // First
idx
MatrixPlot@SparseArray[idx -> 1, Dimensions[dat]]
</code></pre>
<blockquote>
<p>0.00108</p>
<p>{{1, 26}, {29, 1}, {30, 1}, {31, 1}, {32, 1}, {33, 1}, {34, 1}, {35,
1}, {36, 1}, {37, 1}, {38, 1}, {39, 1}, {40, 1}, {41, 1}, {42,
1}, {43, 1}, {44, 1}, {45, 1}, {46, 1}, {47, 1}, {48, 1}, {49,
1}, {50, 1}, {51, 1}, {51, 76}, {52, 1}, {53, 1}, {54, 1}, {55,
1}, {56, 1}, {57, 1}, {58, 1}, {59, 1}, {60, 1}, {61, 1}, {62,
1}, {63, 1}, {64, 1}, {65, 1}, {66, 1}, {67, 1}, {68, 1}, {69,
1}, {70, 1}, {71, 1}, {72, 1}, {73, 1}, {101, 26}}</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/0GImE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GImE.png" alt="enter image description here"></a></p>
<p>This is about 10 times faster than <code>MaxDetect</code>. But it does not distinguish between local maxima and strict local maxima. However, a second sweep over the local maxima (which is much less expensive) could filter out the strict local maxima.</p>
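<p>The neighbor-comparison idea behind <code>MaxFilter</code> is language-agnostic. As an illustration only (not part of the answer above), here is a minimal pure-Python sketch that marks every cell at least as large as all of its available 8-neighbors, i.e. non-strict local maxima, mirroring the <code>UnitStep</code> comparison:</p>

```python
def local_maxima(dat):
    """Return (row, col) indices where dat[i][j] >= every available 8-neighbor."""
    rows, cols = len(dat), len(dat[0])
    result = []
    for i in range(rows):
        for j in range(cols):
            neighbors = [dat[a][b]
                         for a in range(max(0, i - 1), min(rows, i + 2))
                         for b in range(max(0, j - 1), min(cols, j + 2))
                         if (a, b) != (i, j)]
            if all(dat[i][j] >= v for v in neighbors):
                result.append((i, j))
    return result

grid = [[0, 1, 0, 0],
        [1, 5, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 2]]
print(local_maxima(grid))  # [(1, 1), (3, 3)]
```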
|
2,946,408 | <p>Here is the question:
Find the volume of the region bounded by the <span class="math-container">$x$</span>-axis, <span class="math-container">$x=4$</span>, and <span class="math-container">$y=\sqrt{x}$</span>. (rotated about the <span class="math-container">$x$</span>-axis)</p>
<p>I understand that we can find the cross sectional area at each point with respect to x and integrate (<span class="math-container">$\int_{0}^{4}\pi x\,dx$</span>), but I want to solve this question slightly differently. Since the curve goes from 0 to 2 (y-axis), I decided to set up my integral like this: <span class="math-container">$\int_{0}^{2}\pi y^2\,dy$</span>. Unfortunately, this does not give me the correct answer. What went wrong?</p>
| G Cab | 317,234 | <p>Since the rotation axis is the <span class="math-container">$x$</span>-axis:<br>
- in the first (disks) you have to integrate <span class="math-container">$\pi\, y(x)^2\, dx$</span>,<br>
- in the second (shells) you have to integrate <span class="math-container">$2 \pi\, y\,(4-x(y))\, dy$</span></p>
<p>Have a look at <a href="https://math.stackexchange.com/questions/2941849">this other post</a></p>
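<p>As a numerical sanity check (a sketch; the crude midpoint rule and helper names are mine), both set-ups give the same volume $8\pi$ for this solid, using $x(y) = y^2$ so the shell integrand is $2\pi y\,(4 - y^2)$:</p>

```python
import math

def midpoint(f, a, b, n=100000):
    """Crude midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Disks: the cross-section at x has radius sqrt(x), hence area pi*x.
disk = midpoint(lambda x: math.pi * x, 0.0, 4.0)
# Shells: radius y, height 4 - x(y) = 4 - y^2.
shell = midpoint(lambda y: 2 * math.pi * y * (4 - y * y), 0.0, 2.0)

print(round(disk, 4), round(shell, 4), round(8 * math.pi, 4))  # 25.1327 25.1327 25.1327
```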
|
2,245,010 | <p>Does there exist a topological group which can be covered by (nontrivial and proper) open subgroups of itself? If so, what are groups of these types called and is this a nice property for a topological group to have? Or is this just impossible?</p>
| DanielWainfleet | 254,665 | <p>(1). An ordered group is a group $G$ with a linear order $<$ such that $a<b\implies ((ac<bc)\land (ca<cb))$ for all $a,b,c \in G$. </p>
<p>(2). Consider the free Abelian group $G$ on $\{x_n:n\in \mathbb N\}$ where $m\ne n\implies x_m\ne x_n,$ with identity element $1.$ For $1\ne x\in G$ there is a unique finite non-empty $S\subset \mathbb N$ and unique $\{e_s:s\in S\}\subset \mathbb Z$ \ $\{0\}$ such that $x=\prod_{s\in S}(x_s)^{e_s}.$ </p>
<p>We can linearly order $G$ by declaring that $x_1>1$ and that $x_{n+1}>x_n^j$ for all $n, j \in \mathbb N ,$ and applying the rules of (1), whereupon $G$ is an ordered group.</p>
<p>Now let $G$ have the $<$-order topology. Then $G$ is a topological group. Each $G_n=\{y\in G: x_{n+1}^{-1}<y<x_{n+1}\}$ is an open-and-closed proper subgroup of $G.$ Every $y\in G$ belongs to some $G_n.$</p>
<p>Remark. It is not necessary to require, in this example, that $G$ be Abelian. It just enables us to skip some intermediate results that would be needed to justify the claims that $G$ is an ordered group and that $G_n$ is an open proper subgroup.</p>
|
38,480 | <p>(Before reading, I apologize for my poor English ability.)</p>
<p>I have enjoyed calculating some symbolic integrals as a hobby, and this has been one of the main sources of my interest in the vast world of mathematics. For instance, the integral below
$$ \int_0^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx = \pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right). $$
is what I succeeded in calculating today.</p>
<p>But recently, as I learn advanced fields, it seems to me that symbolic integration is of no use in most fields of mathematics. For example, in analysis, where integration first stems from, people now seem to be interested only in performing numerical integration. One integrates in order to find the evolution of a compact hypersurface governed by mean curvature flow, to calculate a probabilistic outcome described by an Itô integral, or something like that. Then numerical calculation will be quite adequate for those problems. But it seems that few people are interested in finding an exact value for a symbolic integral.</p>
<p>So this is my question: Is it true that problems related to symbolic integration have lost their attraction nowadays? Is there no such field that seriously deals with symbolic calculation (including integration, summation) anymore?</p>
| Giuseppe Negro | 8,157 | <p>I don't think your point of view is the right one. To compute an integral analytically and to compute an integral numerically are different things. A numerical analysis professor of mine once said that, in applications (engineering, physics...) it is often more convenient to directly evaluate integrals by numerical means, even if they are integrable analytically! For example, suppose that you need </p>
<p>$$\int_{0}^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx$$</p>
<p>meters of conducting wire. You make a phone call to the wire factory and ask for what? For $\pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right)$ meters of wire? More realistically you will ask for something like $1.13$ meters of wire. </p>
<p>To obtain this number $1.13$ you performed an approximation over the non-rational quantity $\pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right)$. In doing so you wasted information. It would have been more convenient (and, maybe, even more accurate) to perform this approximation on the first integral directly, that is, to evaluate it numerically.</p>
<p>Of course this does not render analytical methods useless. You could have a family of integrals depending on a parameter, for example. Numerical methods tell you nothing here. You could run across an integral in the middle of a proof, and need its exact value for theoretical purposes. The possibilities are countless.</p>
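<p>To make the example concrete, one can compare the two descriptions of the same number directly (a sketch; the crude midpoint rule and the grid size are arbitrary choices of mine):</p>

```python
import math

f = lambda x: math.atan(1 - (math.sin(x) * math.cos(x)) ** 2)

# Evaluate the integral numerically with a midpoint rule over [0, pi/2].
n = 100000
h = (math.pi / 2) / n
numeric = h * sum(f((k + 0.5) * h) for k in range(n))

# The closed form claimed in the question.
closed = math.pi * (math.pi / 4 - math.atan(math.sqrt((math.sqrt(2) - 1) / 2)))

print(round(numeric, 2), round(closed, 2))  # 1.13 1.13 -- the "1.13 meters" of wire
```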
|
3,279,965 | <p>Let <span class="math-container">$A$</span> be a non-symmetric <span class="math-container">$n\times n$</span> real matrix. Assume that all eigenvalues of <span class="math-container">$A$</span> have positive real parts. Could you please show some simple conditions such that <span class="math-container">$A+A^T$</span> is positive definite? Thank you so much.</p>
| tortue | 140,475 | <p>Let <span class="math-container">$\lambda$</span> be an eigenvalue of <span class="math-container">$A$</span> with eigenvector <span class="math-container">$v$</span>; then <span class="math-container">$A v = \lambda v$</span> and <span class="math-container">$v^{\dagger}A^{\dagger} = \overline{\lambda} v^{\dagger}$</span> (<span class="math-container">$\cdot^{\dagger}$</span> denotes the conjugate transpose; notice that for the matrix <span class="math-container">$A$</span> it holds that <span class="math-container">$A^T = A^{\dagger}$</span>, because we know that the entries of <span class="math-container">$A$</span> are real). Then, by multiplying the first equation by <span class="math-container">$v^{\dagger}$</span> from the left and the second equation by <span class="math-container">$v$</span> from the right we obtain:
<span class="math-container">$$
v^{\dagger} A v = \lambda v^{\dagger} v, \\
v^{\dagger} A^{\dagger} v = \overline{\lambda} v^{\dagger} v.
$$</span>
Hence, <span class="math-container">$v^{\dagger} (A + A^{\dagger}) v = (\lambda + \overline{\lambda}) v^{\dagger} v = 2\Re(\lambda) \cdot \|v\|^2 > 0$</span>, if <span class="math-container">$v \neq 0$</span>. Notice that we worked this out for any eigenvalue-eigenvector pair <span class="math-container">$(\lambda, v)$</span> and since any other vector can be represented by a linear combination of eigenvectors, then this argument remains valid for all <span class="math-container">$x$</span>, i.e. <span class="math-container">$x^{\dagger}(A + A^T)x > 0$</span> for all <span class="math-container">$x \neq 0$</span>.</p>
<p>So, matrix <span class="math-container">$A + A^T$</span> is indeed positive definite without any conditions. </p>
<p><strong>EDIT</strong>:
As it was noticed by multiple users the conclusion of my answer isn't correct (see comments) and I don't see an obvious solution how to fix this in order to claim that <span class="math-container">$x^{\dagger}(A + A^T)x > 0$</span> for all <span class="math-container">$x \neq 0$</span>. Thanks to @user1551.</p>
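<p>To make the EDIT concrete, here is a small counterexample of my own choosing: an upper-triangular $A$ whose eigenvalues (the diagonal entries $1, 1$) have positive real parts, while $A + A^T$ is not positive definite.</p>

```python
# A is upper triangular, so its eigenvalues are the diagonal entries 1, 1
# (positive real parts), yet A + A^T fails to be positive definite.
A = [[1.0, 5.0], [0.0, 1.0]]
S = [[A[i][j] + A[j][i] for j in range(2)] for i in range(2)]  # A + A^T

x = [1.0, -1.0]
quad = sum(x[i] * S[i][j] * x[j] for i in range(2) for j in range(2))
print(quad)  # -6.0, so x^T (A + A^T) x < 0 for this x
```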
|
3,701,175 | <p>If a function <span class="math-container">$h(x)$</span> satisfies:</p>
<p>there exists a partition <span class="math-container">$ P=\left\{ a_{0},a_{1},...,a_{n}\right\} $</span></p>
<p>of the interval <span class="math-container">$[a,b]$</span> such that <span class="math-container">$ h $</span> is constant on the segment <span class="math-container">$(a_{k-1},a_{k}) $</span> for every <span class="math-container">$1\leq k\leq n $</span>, then we call <span class="math-container">$h(x) $</span> a step function.</p>
<p>Let <span class="math-container">$ f(x) $</span> be an integrable function on the interval <span class="math-container">$[a,b]$</span> and let <span class="math-container">$ \varepsilon>0 $</span>. Prove that there exists a step function <span class="math-container">$ h $</span> that satisfies </p>
<p><span class="math-container">$ \intop_{a}^{b}|f\left(x\right)-h\left(x\right)|dx<\varepsilon $</span></p>
<p>This is actually part of a bigger proof. I'm trying to prove that for any integrable function <span class="math-container">$ f $</span> on an interval <span class="math-container">$[a,b]$</span> and any <span class="math-container">$\varepsilon>0 $</span> there exists a continuous function <span class="math-container">$g(x) $</span> such that </p>
<p><span class="math-container">$ \intop_{a}^{b}|f\left(x\right)-g\left(x\right)|dx<\varepsilon $</span></p>
<p>So, part 1 of the proof is to prove the statement above, and part 2 is to prove that for any step function <span class="math-container">$h(x)$</span> on the interval <span class="math-container">$[a,b]$</span> and any <span class="math-container">$ \varepsilon>0 $</span> there exists a continuous function <span class="math-container">$g(x)$</span> on the interval <span class="math-container">$[a,b]$</span> that satisfies</p>
<p><span class="math-container">$ \intop_{a}^{b}|h\left(x\right)-g\left(x\right)|dx<\varepsilon $</span></p>
<p>I already proved that any step function on any interval <span class="math-container">$[a,b] $</span> is integrable. I'm not sure how to prove the parts I mentioned. Thanks in advance.</p>
| Divide1918 | 706,588 | <p>The upper sum or lower sum in the definition of the Riemann integral is essentially the (definite) integral of a step function. Consider a partition of <span class="math-container">$[a,b]: P=\{x_0=a,x_1,...,x_n=b\}$</span>. Now, define <span class="math-container">$ h(x)$</span> by <span class="math-container">$h(x)=\sup f[x_{i-1},x_i] \;\forall x\in \;[x_{i-1},x_i), i=1,...,n.$</span> Then <span class="math-container">$h(x)$</span> is a step function, and </p>
<p><span class="math-container">$\forall \epsilon \gt 0, \exists N$</span> positive integer such that whenever <span class="math-container">$n\ge N, |\int_a^b f(x)dx-\int_a^b h(x)dx|=|\int_a^b (f(x)-h(x))\;dx|\lt \epsilon.$</span> </p>
<p>Notice that <span class="math-container">$f(x)\le h(x) \;\forall x$</span>, and thus <span class="math-container">$\int_a^b|f(x)-h(x)| \;dx = |\int_a^b(f(x)-h(x))\; dx|\lt \epsilon$</span></p>
<p>Hence proven. </p>
<p>(edited according to discussions in comments)</p>
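<p>As an illustration of the idea (a numerical sketch, not the proof; the helper and the choice $f(x)=x^2$ are mine), the $L^1$ distance between an increasing $f$ and the step function taking the value $\sup f$ on each piece of a uniform partition shrinks as the partition refines:</p>

```python
def l1_error(f, a, b, n, m=200):
    """Estimate int_a^b |f - h| dx where h takes the value sup f on each of
    the n pieces (for increasing f, the sup is f at the right endpoint)."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x0 = a + i * width
        hi = f(x0 + width)  # sup of f on this piece
        # Midpoint rule on a finer grid to estimate the integral of hi - f.
        total += (width / m) * sum(hi - f(x0 + (k + 0.5) * width / m)
                                   for k in range(m))
    return total

f = lambda x: x * x
print([round(l1_error(f, 0.0, 1.0, n), 4) for n in (10, 100, 1000)])
# the errors shrink roughly like 1/n
```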
|
3,809,026 | <p>I'm trying to solve this problem in the implicit differentiation section of the book I'm going through:</p>
<blockquote>
<p>The equation that implicitly defines <span class="math-container">$f$</span> can be written as</p>
<p><span class="math-container">$y = \dfrac{2 \sin x + \cos y}{3}$</span></p>
<p>In this problem we will compute <span class="math-container">$f(\pi/6)$</span>. The same method could be
used to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span>.</p>
<p>Let <span class="math-container">$a_1 = \dfrac{2\sin(\pi/6)}{3} = \dfrac{1}{3}$</span></p>
<p>and for every positive integer <span class="math-container">$n$</span> let</p>
<p><span class="math-container">$a_{n+1} = \dfrac{2\sin(\pi/6) + \cos a_n}{3} = \dfrac{1 + \cos a_n}{3}$</span></p>
<p>(a) Prove that for every positive integer <span class="math-container">$n$</span>, <span class="math-container">$|a_n - f(\pi/6)| \leq 1/3^n$</span> (Hint: Use mathematical induction) (b) Prove that <span class="math-container">$\lim_{n \to \infty} a_n = f(\pi/6)$</span></p>
</blockquote>
<p>Now, I'm not sure how to solve <span class="math-container">$f(\pi/6)$</span> in the base case of the induction proof.</p>
<p>Also, does the above pattern of defining <span class="math-container">$a_{n+1}$</span> iteratively and using it to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span> have a name? I would like to read more about it.</p>
| Anderson Brasil | 388,949 | <p>An induction is meant to prove that a certain statement <span class="math-container">$A(n)$</span> is true for all integers <span class="math-container">$n$</span> equal to or greater than a certain <span class="math-container">$n_0$</span>. If <span class="math-container">$A$</span> fails at <span class="math-container">$n = n_0$</span>, the proposition is false, as you've got a counterexample.</p>
<p>Of course, what I've said is valid for "homework" questions, in which you are supposed to prove a proposition which explicitly gives you the range of the variable. In practice (and in more interesting homework) you often need to experiment with values until you find out what the range is. For example, to show that <span class="math-container">$n! > 3^{n+1}$</span> for <span class="math-container">$n$</span> large enough requires you to guess the minimal value of <span class="math-container">$n$</span> for which the inequality holds. Only after this can you prove by induction that your guess was indeed correct.</p>
<p>Well, I am not sure if that's exactly what you've asked, but I hope it helps.</p>
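<p>The $n! > 3^{n+1}$ example can be explored by brute force before attempting the induction (a quick sketch):</p>

```python
from math import factorial

# Find the smallest n with n! > 3^(n+1); the induction then starts there.
n = 1
while factorial(n) <= 3 ** (n + 1):
    n += 1
print(n)  # 8  (7! = 5040 <= 3^8 = 6561, but 8! = 40320 > 3^9 = 19683)

# The inductive step works because multiplying by n+1 >= 9 beats multiplying by 3:
for m in range(n, n + 50):
    assert factorial(m) > 3 ** (m + 1)
```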
|
449,617 | <p>I would like to know how to solve summation expressions in an easy way (one I can understand).
I am a computer science student analyzing for loops and finding their time complexity.</p>
<p>e.g</p>
<p><strong>Code</strong>:</p>
<pre><code> for i=1 to n
x++
end for
</code></pre>
<p><strong>Summation</strong>:</p>
<pre><code> n
∑ 1
i=1
</code></pre>
<p><strong>Solving</strong>:</p>
<pre><code> = ∑ [n-1+1] (topLimit - bottomLimit + 1)
= n (summation formula said ∑ 1 = 1+1+1+1+ ... + 1 = n)
</code></pre>
<p>The time complexity of the for loop is: O(n)</p>
<hr>
<p><strong>Code</strong></p>
<pre><code>for(i=0; i<=n i++)
for(j=i; j<=n; j++)
x++;
</code></pre>
<p><strong>Question</strong>: </p>
<p>How do you solve:</p>
<pre><code> n n
∑ [∑ 1]
i=1 j=i
</code></pre>
<p><strong>My Solution</strong>:</p>
<pre><code> n
= ∑ [n-i+1]
i=1
= not sure how to progress from here (should i do another topLimit - bottomLimit + [n-i+1]?)
</code></pre>
<p>The problem I am having is simplifying so I can get to, say, i, 1/i, i^2, ..., i.e. something I can use a summation formula on.
I know the answer is supposed to be n(n+1)/2.</p>
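<p>To progress from $\sum_{i=1}^{n}(n-i+1)$: substituting $k = n-i+1$ turns it into $\sum_{k=1}^{n} k = n(n+1)/2$. A brute-force check of the loop count against that formula (a sketch; the function name is mine):</p>

```python
def count_ops(n):
    """Count how many times x++ runs in: for i = 1..n: for j = i..n: x++
    (indices as in the summation above)."""
    x = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            x += 1
    return x

for n in (1, 2, 5, 10, 50):
    assert count_ops(n) == n * (n + 1) // 2
print(count_ops(10))  # 55
```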
| Brian M. Scott | 12,042 | <p>For the second problem, suppose that $G$ is a universal countable locally finite graph, and let $V=\{v_n:n\in\omega\}$ be the vertex set of $G$. For each $n\in\omega$ define a function</p>
<p>$$f_n:V\to\omega:v\mapsto|\{w\in V:d_G(v,w)\le n\}|\;,$$</p>
<p>where $d_G(v,w)$ is the length of the shortest path from $v$ to $w$ in $G$. (Verify that the definition makes sense.) Then define</p>
<p>$$f:\omega\to\omega:n\mapsto f_n(v_n)+1\;.$$</p>
<p>Now build a countable locally finite graph $H$ as follows. Let $H_0$ be a copy of $K_1$, with vertex $u$. For $n\ge 1$ let $H_n$ be a copy of $K_{f(n)}$. Form $H$ by taking the disjoint union of the graphs $H_n$, $n\in\omega$, and for each $n\in\omega$ adding an edge from each vertex of $H_n$ to each vertex of $H_{n+1}$.</p>
<p>Without loss of generality assume that $H$ is an induced subgraph of $G$; clearly $u=v_n$ for some $n\in\omega$. Find a lower bound for $f_n(u)$ by considering how many vertices $w$ of $H$ satisfy $d_H(u,w)\le n$, and use this and the definition of $f$ to get a contradiction.</p>
|
1,977,345 | <p>Question:</p>
<p>Given that 2 is a generator of cyclic group U(25), find all generators.</p>
<p>I am only conversant with computing the powers mod 25 directly, which is very long for this question. Please, can someone enlighten me on how to get it done faster?</p>
<p>I am new to the cyclic group.</p>
<p>Solution:
U(25) = {1,2,3,4,6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24}</p>
<p>2^20 = 1 (mod 25)</p>
<p>Thanks</p>
| lhf | 589 | <p>In a cyclic group of order $n$ generated by $g$, the order of $g^k$ is $\dfrac{n}{\gcd(n,k)}$.</p>
<p>In particular, the generators are $g^k$ with $\gcd(n,k)=1$.</p>
<p>In your case, $g=2$ and $n=\phi(25)=20$.</p>
<p>Therefore, the generators of $U(25)$ are $2^k$ for $k$ coprime with $20$, that is, $k$ odd not a multiple of $5$.</p>
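<p>This is easy to verify by direct computation (a quick sketch):</p>

```python
from math import gcd

n = 25
order = len([a for a in range(1, n) if gcd(a, n) == 1])  # phi(25) = 20

# Generators are 2^k mod 25 for k coprime with 20, i.e. k odd and not divisible by 5.
gens = sorted(pow(2, k, n) for k in range(1, order + 1) if gcd(k, order) == 1)
print(gens)  # [2, 3, 8, 12, 13, 17, 22, 23]

# Check: each of these really generates all 20 units mod 25.
for g in gens:
    assert len({pow(g, j, n) for j in range(1, order + 1)}) == order
```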
|
18,564 | <p>I want to get the eight points of intersection from the equations <code>2 Abs[x] + Abs[y] == 1</code> and <code>Abs[x] + 2 Abs[y] == 1</code>. To solve these equations, I tried</p>
<pre><code>Solve[{2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1}, {x, y}]
</code></pre>
<p>but could only get the four points. I then tried </p>
<pre><code>y /. Quiet@Solve[#, y] /.
Abs[x_] -> {x, -x} & /@ {2 Abs[x] + Abs[y] == 1, Abs[x] + 2 Abs[y] == 1}
p = ({x, y} /. Solve[y == #] & /@ {{1 - 2 x, (1 - x)/2}, {(1 - x)/2, (
1 + x)/2}, {(1 + x)/2, 1 + 2 x}, {1 + 2 x, -1 - 2 x}, {-1 - 2 x,
1/2 (-1 - x)}, {1/2 (-1 - x),
1/2 (-1 + x)}, {1/2 (-1 + x), -1 + 2 x}, {-1 + 2 x, 1 - 2 x}})~
Flatten~1
</code></pre>
<blockquote>
<p>{{1/3, 1/3}, {0, 1/2}, {-(1/3), 1/3}, {-(1/2), 0}, {-(1/3), -(1/3)}, {0, -(1/2)}, {1/3, -(1/3)}, {1/2, 0}}</p>
</blockquote>
<p>It works, but I don't like it. Could you recommend a better method?</p>
<p><img src="https://i.stack.imgur.com/0GPLC.jpg" alt="enter image description here"></p>
| Pinguin Dirk | 5,274 | <p>Actually, the way I interpret your <code>ContourPlot</code>, there are only 4 points of intersection (red/blue curves). So I interpret your question as also wanting the intersections with the coordinate axes. I hope this helps:</p>
<pre><code>eq1 = 2 Abs[x] + Abs[y] == 1;
eq2 = Abs[x] + 2 Abs[y] == 1;
p = {x, y} /.
(Solve[#, {x, y}] & /@
{{eq1, eq2},
{x == 0, eq2},
{eq1, y == 0}})~Flatten~1
</code></pre>
|
138,173 | <p>$f(x, y) = 0$ and $g(x, y) = 0$,
both $f$ and $g$ are cubic polynomial equations (at most 10 coefficients each).</p>
<p>Is there any general method to solve this degenerate equation system?
Thanks.</p>
| Noah Stein | 5,963 | <p>I'd suggest reading about <a href="http://en.wikipedia.org/wiki/Resultant" rel="nofollow">resultants</a>. The applications section of that article gives a method for solving your problem.</p>
<p>P.S. The level of this question seems maybe borderline for this forum (non-research-level questions can always be asked at math.stackexchange.com), but I could imagine someone going through grad school without learning about resultants so it seemed worth answering.</p>
|
2,581,361 | <p>For even $n \in \mathbb{N}$, prove $\binom{n}{i}< \binom{n}{j} $ if $0\leq i<j\leq \frac{n}{2}$.</p>
<p>So far all I have been able to come up with are a bunch of seemingly useless inequalities. </p>
<p>Any hints would be greatly appreciated.</p>
| BallBoy | 512,865 | <p>You can use induction on $n$ and the Pascal identity $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$.</p>
<p>The only tricky case is when $j = \frac{n}{2}$, since then $j \not\leq \frac{n-1}{2}$, but you can use the fact that $\binom{n-1}{j} = \binom{n-1}{n-1-j} = \binom{n-1}{j-1}$.</p>
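<p>The claim (and the induction) can be spot-checked by brute force (a quick sketch):</p>

```python
from math import comb

# Verify C(n, i) < C(n, j) for all 0 <= i < j <= n/2, for even n up to 40.
for n in range(2, 41, 2):
    for j in range(1, n // 2 + 1):
        for i in range(j):
            assert comb(n, i) < comb(n, j)
print("ok")
```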
|
1,399,008 | <p>I'm asked to prove that $$\lim_{n \to \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+4}+\frac{n}{n^2+9}+\cdots+\frac{n}{n^2+n^2}\right)=\frac{\pi}{4}$$ This looks like it can be solved with Riemann sums, so I proceed:</p>
<p>\begin{align*}
\lim_{n \to \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+4}+\frac{n}{n^2+9}+\cdots+\frac{n}{n^2+n^2}\right)&=\lim_{n \to \infty} \sum_{k=1}^{n}\frac{n}{n^2+k^2}\\
&=\lim_{n \to \infty} \sum_{k=1}^{n}(\frac{1}{n})(\frac{n^2}{n^2+k^2})\\
&=\lim_{n \to \infty} \sum_{k=1}^{n}(\frac{1}{n})(\frac{1}{1+(k/n)^2})\\
&=\lim_{n \to \infty} \sum_{k=1}^{n}f(\frac{k}{n})(\frac{k-(k-1)}{n})\\
&=\int_{0}^{1}\frac{1}{1+x^2}dx=\frac{\pi}{4}
\end{align*}</p>
<p>where $f(x)=\frac{1}{1+x^2}$. Is this correct, are there any steps where I am not clear? </p>
| Harish Chandra Rajpoot | 210,295 | <p>Notice, we have $$\lim_{n\to \infty}\left(\frac{n}{n^2+1}+\frac{n}{n^2+4}+\frac{n}{n^2+9}+\dots +\frac{n}{n^2+n^2}\right)=\lim_{n\to \infty}\sum_{r=1}^{n}\frac{n}{n^2+r^2}$$</p>
<p>$$\lim_{n\to \infty}\sum_{r=1}^{n}\frac{n}{n^2+r^2}=\lim_{n\to \infty}\sum_{r=1}^{n}\frac{\frac{1}{n}}{1+\left(\frac{r}{n}\right)^2}$$
Let, $\frac{r}{n}=x\implies \lim_{n\to \infty}\frac{1}{n}=dx\to 0$</p>
<p>$$\text{upper limit of x}=\lim_{n\to \infty }\frac{n}{n}=1$$
$$\text{lower limit of x}=\lim_{n\to \infty }\frac{1}{n}=0$$ Hence, using integration with proper limits, we get</p>
<p>$$\lim_{n\to \infty}\sum_{r=1}^{n}\frac{\frac{1}{n}}{1+\left(\frac{r}{n}\right)^2}= \int_ {0}^{1}\frac{dx}{1+x^2}$$ $$=\left[\tan^{-1}(x)\right]_{0}^{1}$$
$$=\left[\tan^{-1}(1)-\tan^{-1}(0)\right]$$ $$=\left[\frac{\pi}{4}-0\right]=\frac{\pi}{4}$$</p>
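<p>The Riemann-sum argument can also be corroborated numerically: the partial sums approach <span class="math-container">$\pi/4$</span> with error shrinking roughly like <span class="math-container">$1/n$</span>. A quick stdlib check (the function name is my own):</p>

```python
import math

def s(n):
    # the partial sum n/(n^2 + 1) + n/(n^2 + 4) + ... + n/(n^2 + n^2)
    return sum(n / (n * n + k * k) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, s(n), abs(s(n) - math.pi / 4))   # error shrinks roughly like 1/n
```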
|
47,655 | <p>Let $X_n$ be the set of all word of the length $3 n$ over the alphabet $\{A,B,C\}$ which contain each of the three letters <em>n</em> times.</p>
<p>The number of elements of $X_n$ is $\frac{(3n)!}{(n!)^3}$, but why?</p>
<p>I tried to split the problem into three smaller ones by only looking at the distribution of the single letters. For each one there should be $\binom{3n}{n} = \frac{(3n)!}{n!(2n)!}$ possibilities, but if this is correct, how do I get it combined? According to Wolfram Alpha $\binom{3n}{n}^3 \neq \frac{(3n)!}{(n!)^3}$.</p>
<p>Thanks in advance!</p>
| kuch nahi | 8,365 | <p>The number of words of length $n$ in which $n_1$ letters are of one kind, $n_2$ of another kind, …, and $n_k$ of a last kind is the same as the number of arrangements of $n$ objects of which $n_1$ are of one type, $n_2$ are of another type, etc., namely $$\frac{n!}{n_1!n_2!\cdots n_k!}$$ Can you prove this?</p>
<p>Or equivalently, as others have pointed out, there are $\binom{3n}{n}$ ways to choose positions for one letter, $\binom{2n}{n}$ ways of choosing positions for one of the other remaining letters, and $1$ way to put the last letter in the remaining positions. By the product rule $$\binom{3n}{n}\binom{2n}{n}\binom{n}{n} = \frac{(3n)!}{(2n)!\,n!}\cdot\frac{(2n)!}{n!\,n!}\cdot 1=\frac{(3n)!}{(n!)^3}$$</p>
<p>Also see <a href="http://en.wikipedia.org/wiki/Multinomial_theorem#Number_of_unique_permutations_of_words" rel="nofollow">wikipedia</a></p>
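<p>For small <span class="math-container">$n$</span> the count can be verified exhaustively; e.g. for <span class="math-container">$n=2$</span> both a brute-force count of distinct words and the formula give <span class="math-container">$90$</span> (a stdlib sketch of my own):</p>

```python
from itertools import permutations
from math import factorial

n = 2                              # words of length 3n = 6, each letter twice
words = {''.join(p) for p in permutations('AABBCC')}
formula = factorial(3 * n) // factorial(n) ** 3
print(len(words), formula)         # both are 90
```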
|
<p>I don't know where to ask, but I'm trying. I just think we cannot do hocus-pocus methods.</p>
<hr>
<p>Solve $3^x+4^x=5^x$</p>
<p>Okay, so my friend gave me this equation, and his solution. But I don't believe it holds. Here it is:</p>
<p>"Solution: </p>
<p>$3^x+4^x=5^x\Leftrightarrow \frac{3^x}{3^x}+\frac{4^x}{3^x}=\frac{5^x}{3^x}\Leftrightarrow 1+\left(\frac{4}{3}\right )^x=\left(\frac{5}{3}\right )^x\Leftrightarrow 1+\left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}=\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}\Leftrightarrow \left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}-\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}=-1\Leftrightarrow \left(\left(\frac{4}{3}\right)^{\frac{x}{2}}\right )^2-\left(\left(\frac{5}{3}\right )^{\frac{x}{2}}\right)^2=-1$</p>
<p>Let $a=\left(\frac{4}{3}\right )^{\frac{x}{2}}$ and let $b=\left(\frac{5}{3}\right )^{\frac{x}{2}}$ then</p>
<p>$a^2-b^2=\frac{-1}{3}\cdot 3 \Leftrightarrow (a-b)(a+b)=\frac{-1}{3}\cdot 3$</p>
<p>Now
$a-b=\frac{-1}{3}$ and $a+b=3$ solving the system of equations, we get $a=\frac{4}{3}$ and $b=\frac{5}{3}$ hence we put back.</p>
<p>$a=\left(\frac{4}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{4}{3}=\left(\frac{4}{3}\right )^{\frac{x}{2}} $ and $b=\left(\frac{5}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{5}{3}=\left(\frac{5}{3}\right )^{\frac{x}{2}} $ we see that $x=2$ in both cases, which satisfies the equation."</p>
| hmakholm left over Monica | 14,366 | <p>It is obvious that $x=2$ is a solution.</p>
<p>To see that there cannot be other (real) solutions: We're looking for solutions of
$$ 5^x - 4^x - 3^x = 0 $$
Divide this by $4^x$ (which is always positive) on both sides, and we get
$$ (\tfrac54)^x - 1 - (\tfrac34)^x = 0 $$
Here, both the terms $(\tfrac54)^x$ and $-(\tfrac34)^x$ are <em>strictly increasing</em> and the middle $-1$ is a constant. So the entire left-hand side is strictly increasing on all of $\mathbb R$, and therefore has at most one zero -- which must be the one we already know at $x=2$.</p>
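<p>The monotonicity argument is easy to check numerically: the left-hand side is strictly increasing on a sample grid, and bisection on its single sign change recovers the known root <span class="math-container">$x=2$</span>. A stdlib sketch (the grid, bounds, and tolerances are my choices):</p>

```python
def g(x):
    # dividing 3^x + 4^x = 5^x by 4^x gives g(x) = (5/4)^x - 1 - (3/4)^x
    return 1.25 ** x - 1 - 0.75 ** x

# strictly increasing on a sample grid ...
xs = [x / 10 for x in range(-100, 101)]
vals = [g(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))

# ... so bisection on the single sign change finds the unique root
lo, hi = 0.0, 5.0                  # g(0) = -1 < 0 < g(5)
for _ in range(80):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)   # -> 2.0 up to floating-point rounding
```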
|
142,007 | <p>Assume: $p$ is a prime that satisfies $p \equiv 3 \pmod 4$</p>
<p>Show: $x^{2} \equiv -1 \pmod p$ has no solutions $\forall x \in \mathbb{Z}$.</p>
<p>I know this problem has something to do with Fermat's Little Theorem, that $a^{p-1} \equiv 1\pmod p$. I tried to do a proof by contradiction, assuming the conclusion and showing some contradiction but just ran into a wall. Any help would be greatly appreciated.</p>
| jojobo | 864,803 | <p>It is equivalent to <span class="math-container">$\nexists n: \frac{n^2+1}{p}\in \mathbb{N}$</span>, which is a case of the sum of squares theorem.</p>
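<p>A brute-force check of both the statement and its converse for small primes (the sieve helper is mine, not part of the answer):</p>

```python
def has_sqrt_of_minus_one(p):
    # is there x with x^2 = -1 (mod p)?
    return any((x * x + 1) % p == 0 for x in range(1, p))

def primes_below(limit):
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i in range(limit) if sieve[i]]

for p in primes_below(500):
    if p % 4 == 3:
        assert not has_sqrt_of_minus_one(p), p
    elif p % 4 == 1:
        assert has_sqrt_of_minus_one(p), p   # the converse also holds
print("checked all primes below 500")
```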
|
185,900 | <p>What is the <strong>fastest way</strong> to find the smallest positive root of the following transcendental equation:</p>
<p><span class="math-container">$$a + b\cdot e^{-0.045 t} = n \sin(t) - m \cos(t)$$</span></p>
<pre><code>eq = a + b E^(-0.045 t) == n Sin[t] - m Cos[t];
</code></pre>
<p>where
<span class="math-container">$a,b,n,m$</span> are some real constants.</p>
<p>for instance I tried :</p>
<pre><code>eq = 5 E^(-0.045 t) + 0.1 == -0.3 Cos[t] + 0.009 Sin[t];
sol = FindRoot[eq, {t, 1}]
{t -> 117.349}
</code></pre>
<p>There is an answer but it doesn't mean that this is the smallest positive root ))</p>
<p>I don't like <code>FindRoot[]</code> because you need a starting point, which changes for different initial parameters <span class="math-container">$(a,b,n,m)$</span>.</p>
<p>Is there a way to find the <strong>smallest positive root</strong> of equation for any <span class="math-container">$(a,b,n,m)$</span> (if there exist the solution), without <em>starting points</em>?</p>
<p>If No. how to determine automatically starting point for a given parameters?</p>
<p>There are numerical and graphical answers in <em>Wolfram Alpha</em>:</p>
<p><a href="https://i.stack.imgur.com/Y90wg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y90wg.jpg" alt="enter image description here"></a></p>
| rmw | 57,128 | <pre><code>eq = 1 + 5 E^(-0.045 t) - (0.03 Sin[t] - 1.2 Cos[t]);
</code></pre>
<p>To find the first root, plot the equation and mark the roots.</p>
<pre><code>Plot[eq, {t, 50, 150}, Mesh -> {{0}}, MeshFunctions -> {#2 &}, MeshStyle -> Directive[Red, PointSize@Medium]]
</code></pre>
<p><a href="https://i.stack.imgur.com/NsHcT.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NsHcT.gif" alt="enter image description here"></a></p>
<p>The first root lies between 70 < t < 75</p>
<pre><code>NSolve[eq == 0 && 70 < t < 75, t]
{{t -> 72.1339}, {t -> 72.3439}}
</code></pre>
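<p>The same scan-then-refine strategy works outside Mathematica too: scan a grid for the first sign change, then refine by bisection. A hedged stdlib Python sketch for the question's parameters — the scan step just has to be small compared to the oscillation period <span class="math-container">$2\pi$</span> (step size, bounds, and helper names are my choices):</p>

```python
import math

def f(t):
    # the question's equation rewritten as f(t) = 0
    return 5 * math.exp(-0.045 * t) + 0.1 + 0.3 * math.cos(t) - 0.009 * math.sin(t)

def smallest_positive_root(f, t_max=200.0, step=0.01, tol=1e-12):
    # scan for the first sign change, then refine by bisection
    t = step
    prev = f(t)
    while t < t_max:
        t2 = t + step
        cur = f(t2)
        if prev == 0:
            return t
        if prev * cur < 0:
            a, b = t, t2
            while b - a > tol:
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            return (a + b) / 2
        t, prev = t2, cur
    return None

root = smallest_positive_root(f)
print(root, f(root))
```

<p>For these parameters no root can occur before <span class="math-container">$t\approx 71.5$</span>, since the left side <span class="math-container">$5e^{-0.045t}+0.1$</span> exceeds the maximum of the right side until the exponential has decayed enough.</p>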
|
1,911,601 | <p>Alright so I had to find the area of the common region determined by $y>=x^{0.5}$ and $x^2+y^2<2$. And I proceeded like this --
<a href="https://i.stack.imgur.com/R19kg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R19kg.jpg" alt="enter image description here"></a></p>
<p>But I'm not getting the expected answer, are there any flaws in my logic or approach? Please let me know friends. </p>
| rogerl | 27,542 | <p>The integral you wrote is the integral of $\sqrt{x}$ between the vertical lines $x=0$ and $x=\sqrt{2}$. This is <em>not</em> the same as the picture, which should integrate only between $x=0$ and the boundary of the circle. The easiest approach here is to convert to polar coordinates.</p>
<p>If you aren't familiar with polar coordinates, another approach is to divide up the computation of this area into two separate computations. By solving the equations $x^2+y^2=2$ and $y^2=x$ for $x$ you can see that the point of intersection of these two curves is the point $(1,1)$. Then the area of the region you want (the region <em>below</em> $y=\sqrt{x})$ is the sum of the area under $y=\sqrt{x}$ from $x=0$ to $x=1$ and the area under $x^2+y^2=2$ from $x=1$ to $x=\sqrt{2}$. Since $x^2+y^2=2$ is the same is $y = \sqrt{2-x^2}$ for $y\ge 0$, this gives
$$\int_0^1 \sqrt{x}\,dx + \int_1^{\sqrt{2}} \sqrt{2-x^2}\,dx.$$</p>
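<p>The split can be checked numerically; the two integrals sum to <span class="math-container">$\frac23+\left(\frac\pi4-\frac12\right)=\frac16+\frac\pi4\approx 0.952$</span>. A stdlib sketch (the Simpson helper is mine; the <code>max(..., 0.0)</code> guards against rounding pushing <span class="math-container">$2-x^2$</span> slightly negative at the endpoint):</p>

```python
import math

def simpson(f, a, b, n=100000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

part1 = simpson(lambda x: math.sqrt(x), 0.0, 1.0)
part2 = simpson(lambda x: math.sqrt(max(2 - x * x, 0.0)), 1.0, math.sqrt(2))
exact = 2 / 3 + (math.pi / 4 - 1 / 2)        # = 1/6 + pi/4, about 0.952
print(part1 + part2, exact)
```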
|
1,798,710 | <blockquote>
<p>Prove that any continuous bijection $f:X \rightarrow Y$ from a compact space $X$ to a Hausdorff space $Y$ is a homeomorphism</p>
</blockquote>
<p>Requirements for a homeomorphism $f:X \rightarrow Y$:</p>
<ol>
<li>$f$ is continuous</li>
<li>$f$ is bijective</li>
<li>$f^{-1}$ is continuous</li>
</ol>
<p>The first two properties are given in the question, so we just need to show that the inverse is continuous.</p>
<p>So $f^{-1}(Y)$ is the preimage of a Hausdorff space to a compact space. Why is this continuous?</p>
| J.E.M.S | 73,275 | <p><strong>Hint:</strong> That map is open; that means that the image of an open set is open.</p>
|
3,820,947 | <p>Now I have always been rather intrigued with factorial, at first, in high school, teachers told me that factorials are only defined for whole numbers. As I studied, I found factorials for positive reals and negative fractions. But the integral with which we define factorial falls flat on the negative integers.</p>
<p>why is that we can find the factorial of (-1/2) and root(3) but not for -1 or -2? Does this go against the definition of a factorial? If yes, what IS the definition of a factorial because children are never taught it and it clouds their reasoning and perception of the topic.</p>
| Community | -1 | <p>Children are only taught the factorial of naturals because the definition is elementary (though <span class="math-container">$0!=1$</span> deserves special comments).</p>
<p>When you want to extend it to reals, a natural way is with the integral</p>
<p><span class="math-container">$$I_n=\int_0^\infty x^ne^{-x}dx$$</span> as it verifies the recurrence</p>
<p><span class="math-container">$$I_n=\int_0^\infty x^ne^{-x}dx=-\left. x^ne^{-x}\right|_0^\infty+n\int_0^\infty x^{n-1}e^{-x}dx=nI_{n-1}$$</span> and <span class="math-container">$I_n=n!$</span>.</p>
<p>As the integral keeps a meaning with <span class="math-container">$n$</span> positive real, this is taken for the extension.</p>
<p>Now for the negatives, it makes sense to retain the recurrence</p>
<p><span class="math-container">$$(n-1)!=\frac{n!}n,$$</span> resulting in infinite values for the negative integers.</p>
<p><a href="https://i.stack.imgur.com/uQr3i.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQr3i.gif" alt="enter image description here" /></a></p>
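<p>Python's standard library exposes exactly this extension, which makes the behavior at the poles easy to see: <code>math.gamma</code> interpolates the factorial, is finite at non-integer arguments such as <span class="math-container">$\Gamma(1/2)=\sqrt\pi$</span> (i.e. <span class="math-container">$(-1/2)!$</span>), and signals a domain error at <span class="math-container">$0,-1,-2,\dots$</span>:</p>

```python
import math

# gamma(n + 1) reproduces n! on the naturals
for n in range(6):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9

# it is finite at non-integer arguments: (-1/2)! = gamma(1/2) = sqrt(pi)
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12

# but 0, -1, -2, ... are poles; CPython signals them as a domain error
for z in (0.0, -1.0, -2.0):
    try:
        math.gamma(z)
        raise AssertionError("expected a pole at %r" % z)
    except ValueError:
        pass
print("ok")
```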
|
84,255 | <p>It is just like the linear algebra over commutative ring (maybe advanced linear algebra), that is a nature extension and can make the structure of Lie algebra more algebraic, but I find little book discussing this topic.</p>
<p>I just know Weibel’s book “Homological Algebra” discuss something, but that is just a form, do not pay more attention to the Lie algebra and show some difference between field coefficient and ring coefficient. </p>
<p>Does anybody know something else?</p>
| Anatoly Kochubei | 12,205 | <p>Some basic notions and results about Lie algebras are given in the generality you want in Chapter 1 of Bourbaki's "Lie Groups and Lie Algebras", and also in the book by J.-P. Serre "Lie Algebras and Lie Groups".</p>
|
84,255 | <p>It is just like the linear algebra over commutative ring (maybe advanced linear algebra), that is a nature extension and can make the structure of Lie algebra more algebraic, but I find little book discussing this topic.</p>
<p>I just know Weibel’s book “Homological Algebra” discuss something, but that is just a form, do not pay more attention to the Lie algebra and show some difference between field coefficient and ring coefficient. </p>
<p>Does anybody know something else?</p>
| Peter May | 14,447 | <p>There is a joke definition of a Lie algebra, due to my adviser John Moore,
that is relevant. His definition of a Lie algebra over a commutative ring
$R$ is that it is a module $L$ with a bracket operation such that there
exists an associative $R$-algebra $A$ and a monomorphism $L \to A$ of $R$-modules
that takes the bracket operation to the commutator in $A$. The point is to try
to build in the PBW and dodge the question of which identities characterize
Lie algebras. It is equivalent to the usual definition when $R$ is a field,
as one sees by proving PBW using only the standard identities, but not so over
a general commutative ring.</p>
<p>Even over a field (char $\neq 2$ for simplicity) there is an interesting
contrast with the definition of a Jordan algebra. There the analogue
of the commutator is $1/2 (ab + ba)$. One writes down the identities
this satisfies and defines a Jordan algebra to be a vector space that
satisfies the identities. But Jordan algebras do not generally embed
in associative algebras (those that do are called special). </p>
|
84,255 | <p>It is just like the linear algebra over commutative ring (maybe advanced linear algebra), that is a nature extension and can make the structure of Lie algebra more algebraic, but I find little book discussing this topic.</p>
<p>I just know Weibel’s book “Homological Algebra” discuss something, but that is just a form, do not pay more attention to the Lie algebra and show some difference between field coefficient and ring coefficient. </p>
<p>Does anybody know something else?</p>
| Pasha Zusmanovich | 1,223 | <ol>
<li><p>J.F. Hurley in a series of papers studied Lie algebras obtained by taking the multiplication table (with integer coefficients, due to Chevalley) of simple Lie algebras of classical or exceptional type and considering them over a commutative ring. The results describe center, ideal structure, etc. of such algebras in terms of the underlying ring. See, for example: Ideals in Chevalley algebras, <a href="http://www.jstor.org/stable/1994801" rel="noreferrer">Trans. Amer. Math. Soc. 137 (1969), 245-258</a>; Composition series in Chevalley algebras <a href="http://projecteuclid.org/euclid.pjm/1102977369" rel="noreferrer">Pacific J. Math. 32 (1970), 429-434</a>; Centers of Chevalley algebras, <a href="http://www.journalarchive.jst.go.jp/english/jnlabstract_en.php?cdjournal=jmath1948&cdvol=34&noissue=2&startpage=219" rel="noreferrer">J. Math. Soc. Japan 34 (1982), No.2, 219-222</a>. In the joint paper with J. Morita (Affine Chevalley algebras, <a href="http://dx.doi.org/10.1016/0021-8693(81)90299-4" rel="noreferrer">J. Algebra 72 (1981), N2, 359-373</a>) he does something similar for some Kac--Moody algebras.</p></li>
<li><p>Some questions in free Lie algebras were considered over commutative rings, for example: D.Z. Djokovic, On some inner derivations of free Lie algebras over commutative rings, J. Algebra 119 (1988), 233-245, where centralizers of a member of a free generating set are studied. The latter reference is more or less random, probably more can be found in some books (Reutenauer?). </p></li>
</ol>
<p>There are more instances of considering Lie algebras over commutative rings (for example, plenty of papers about automorphisms of some triangular or close to them algebras), but, unlike in the case of Lie algebras over fields, all these are some isolated examples, rather than a coherent theory. The book(s) of Bourbaki recommended by Anatoly Kochubei are, probably, interesting in that regard. Bourbaki tend to state things in the utmost generality, and it is educational to see how quickly they have to give up considering Lie algebras over rings and have to "throw around properties of vector spaces" (quoting Darij Grinberg).</p>
<p>Perhaps the question could be augmented slightly by asking what is the reason for the absence of such a theory for Lie algebras (as opposed, for example, for associative algebras). Perhaps this is related somehow to the fact that classifying some natural (e.g., simple) classes of associative algebras is easier then that of Lie algebras (e.g., root space decomposition technique for Lie algebras which works over algebraically closed fields vs. "idempotent" technique for associative algebras which works over arbitrary fields and even over rings), but I venture into a sheer speculation here.</p>
|
1,898,839 | <p>Is the following inequality true?</p>
<p><span class="math-container">$$\mbox{Tr} \left( \mathrm P \, \mathrm M^T \mathrm M \right) \leq \lambda_{\max}(\mathrm P) \, \|\mathrm M\|_F^2$$</span></p>
<p>where <span class="math-container">$\mathrm P$</span> is a positive definite matrix with appropriate dimension. How about the following?</p>
<p><span class="math-container">$$\mbox{Tr}(\mathrm A \mathrm B)\leq \|\mathrm A\|_F \|\mathrm B\|_F$$</span></p>
| Rodrigo de Azevedo | 339,790 | <p>Since $\mathrm P$ is positive definite, $\mathrm P^{\frac 12}$ exists and is symmetric. Hence,</p>
<p>$$\mbox{tr} (\mathrm P \mathrm M^T \mathrm M) = \mbox{tr} (\mathrm P^{\frac 12} \mathrm P^{\frac 12} \mathrm M^T \mathrm M) = \mbox{tr} (\mathrm M \mathrm P^{\frac 12} \mathrm P^{\frac 12} \mathrm M^T) = \mbox{tr} ((\mathrm P^{\frac 12} \mathrm M^T)^T (\mathrm P^{\frac 12} \mathrm M^T)) = \|\mathrm P^{\frac 12} \mathrm M^T\|_F^2$$</p>
<p>We <a href="https://mathoverflow.net/q/59918/91764">have</a> $\|\mathrm A \mathrm B\|_F \leq \|\mathrm A\|_2 \|\mathrm B\|_F$. Thus,</p>
<p>$$\mbox{tr} (\mathrm P \mathrm M^T \mathrm M) = \|\mathrm P^{\frac 12} \mathrm M^T\|_F^2 \leq \|\mathrm P^{\frac 12}\|_2^2 \|\mathrm M^T\|_F^2 = \lambda_{\max} (\mathrm P) \|\mathrm M\|_F^2$$</p>
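<p>The inequality can be stress-tested on random matrices. A stdlib sketch with <span class="math-container">$2\times2$</span> matrices so that <span class="math-container">$\lambda_{\max}$</span> has a closed form (the ridge term and tolerances are my choices):</p>

```python
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

random.seed(0)
for _ in range(1000):
    R = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    P = matmul(transpose(R), R)       # symmetric positive semidefinite
    P[0][0] += 0.1
    P[1][1] += 0.1                    # small ridge makes it positive definite
    M = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]

    lhs = trace(matmul(P, matmul(transpose(M), M)))
    t = trace(P)
    d = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    lam_max = (t + max(t * t - 4 * d, 0.0) ** 0.5) / 2  # top eigenvalue, 2x2
    frob2 = sum(v * v for row in M for v in row)        # ||M||_F^2
    assert lhs <= lam_max * frob2 + 1e-12
print("ok")
```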
|
2,615,115 | <blockquote>
<p>For $S$ in the domain of a function $f$, let $f(S)= \{f(x): x\in S\}$. Let $C$ and $D$ be subsets of the domain of $f$. Give an example where equality doesn't hold in $f(C\cup D)\subseteq f(C)\cup f(D)$.</p>
</blockquote>
<p>I have proved $f(C\cup D)\subseteq f(C)\cup f(D)$ by definition, but I can't come up with an example where equality doesn't hold. </p>
<p>If $x\in C$ or $x\in D$, $f(x)\in f(C)$ or $f(D)$, thus if $x\in(C∪D)$, $f(x)\in f(C) \cup f(D)$?</p>
<p>PS: this question is from Mathematical Thinking Problem-Solving and Proofs. Second Edition. </p>
| Nick A. | 412,202 | <p>What you are trying to prove is wrong. It is always true that $f$ and $\cup$ commute, meaning that $f(\cup A_i)=\cup f(A_i)$. Take a look <a href="https://math.stackexchange.com/questions/1131956/image-of-the-union-and-intersection-of-sets">here</a>!</p>
|
2,615,115 | <blockquote>
<p>For $S$ in the domain of a function $f$, let $f(S)= \{f(x): x\in S\}$. Let $C$ and $D$ be subsets of the domain of $f$. Give an example where equality doesn't hold in $f(C\cup D)\subseteq f(C)\cup f(D)$.</p>
</blockquote>
<p>I have proved $f(C\cup D)\subseteq f(C)\cup f(D)$ by definition, but I can't come up with an example where equality doesn't hold. </p>
<p>If $x\in C$ or $x\in D$, $f(x)\in f(C)$ or $f(D)$, thus if $x\in(C∪D)$, $f(x)\in f(C) \cup f(D)$?</p>
<p>PS: this question is from Mathematical Thinking Problem-Solving and Proofs. Second Edition. </p>
| Akira | 368,425 | <p>I think you meant $f(C\cap D)\subseteq f(C)\cap f(D)$. Let me give you an example where equality does not hold.</p>
<p>$\text{Let }X = \{1, 2\} \text{ and } f(1) = f(2) = 1$.</p>
<p>$\implies f[\{1\}\cap\{2\}]=\varnothing \text{ and }f(\{1\})\cap f[\{2\}]=\{1\}$</p>
<p>$\implies f[\{1\}\cap\{2\}] \neq f[\{1\}]\cap f[\{2\}]$</p>
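<p>The example is small enough to run directly; in Python's set notation (my own phrasing of the same example):</p>

```python
def image(f, S):
    return {f(x) for x in S}

f = {1: 1, 2: 1}.get       # the answer's map: f(1) = f(2) = 1
C, D = {1}, {2}

lhs = image(f, C & D)            # image of the intersection: set()
rhs = image(f, C) & image(f, D)  # intersection of the images: {1}
print(lhs, rhs)
assert lhs != rhs and lhs <= rhs  # the containment is strict here
```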
|
735,338 | <p>In evaluating an integral in path integrals in QFT, I am stuck with this integral (that came up from evaluating a functional integral),</p>
<p>$$I = \bigg( \frac{m}{2\pi i\tau}\bigg) \int dx_1e^{\frac{im\tau}{2} \bigg((x_{2} - x_1)^2+(x_{1} - x_0)^2\bigg)}$$</p>
<p>After some manipulation, I end up with an integral of the form (apart from some constants)</p>
<p>$$\int_{0}^{\infty} e^{iax^2}\;dx$$
And on further evaluation using the substitution $s= -iax^2 $, I get</p>
<p>$$ \int_{0}^{i\infty} \frac{ds}{\sqrt{-ias}} e^{-s} $$</p>
<p>what kind of contour can we choose for evaluating this one. A diagram would be really helpful.</p>
| Jeff Faraci | 115,030 | <p>I will calculate your integral
$$
I=\int_{-\infty}^\infty e^{i\alpha x^2}\,dx=2\int_0^\infty e^{i\alpha x^2}\,dx
$$
by using some residue methods. This will be a general proof for you, you can put in the constant factors at the end where you need them.
\begin{equation}
F(\alpha)\equiv C(\alpha)+iS(\alpha)=\int_{-\infty}^{\infty} e^{i\alpha x^2}dx
\end{equation}
for real $\alpha$. These are the Fresnel integrals, S($\alpha$) and C($\alpha$) which are two transcendental functions. The integral is even so we can write
$$
F(\alpha)= 2\int_{0}^\infty {e^{i\alpha x^2}} dx.
$$
Consider the complex function $f(z)= e^{i\alpha z^2}$, $z=re^{i\theta}$, $f(z=re^{i\theta})=e^{i\alpha r^2 e^{2i\theta}}.$ If we stare at
$$
f(z=re^{i\theta})=e^{i\alpha r^2 e^{2i\theta}}
$$
we notice an amazing result, for $\theta=\pi/4$, this is just a real Gaussian integral ($\alpha >0$)!! If $\theta =0$ we have $f(re^{i\theta})=e^{i\alpha r^2}$.
Thus our contour is split up into three contours making an angle between imaginary and real axis of $45^o$. The contour I am using is shown here <a href="http://en.wikipedia.org/wiki/File:Fresnel_Integral_Contour.svg" rel="nofollow">http://en.wikipedia.org/wiki/File:Fresnel_Integral_Contour.svg</a>. We know there are no poles inside and $f(z)$ is holomorphic, thus by the Cauchy-Goursat theorem we know
\begin{equation}
0=\oint f(z) dz=\int_{0}^{\infty} e^{i\alpha x^2} dx +\int_{0}^{\pi/4} ire^{i\theta} d\theta e^{i\alpha r^2(\cos(2\theta)+i\sin(2\theta))}+\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr
\end{equation}
where the integral is broken up into three contours shown in the illustration and I used $z=re^{i\theta}, \ dz= e^{i\theta }dr$. The difficult integral to evaluate is
$$
\int_{0}^{\pi/4} ire^{i\theta} d\theta e^{i\alpha r^2(\cos(2\theta)+i\sin(2\theta))},
$$
however it vanishes as $r \to \infty$. We need to show that it vanishes, it is similar to Jordan's inequality, but that just places an upper bound on the integral,
$$
\int_{0}^{\pi} e^{-r\sin\theta}d\theta \ < \frac{\pi}{r} \ (r >0).
$$
We will prove this now by showing that
$$
\lim_{r\to \infty} \bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg|=0.
$$<br>
We know that
$$
\bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg| \leq \int_{0}^{\pi/4} \big|e^{i\alpha r^2 e^{2i\theta}}\big| \big|ir e^{i\theta}d\theta\big|=\int_{0}^{\pi/4} rd\theta \big| e^{i\alpha r^2\cos(2\theta)-\alpha r^2\sin(2\theta)} \big|=\int_{0}^{\pi/4}rd\theta \big|e^{i \alpha r^2\cos(2\theta)}\big|\big|e^{-\alpha r^2 \sin(2\theta)}\big|
$$
where I used $e^{2i\theta}=\cos 2\theta +i\sin 2\theta$. We can simplify this to obtain
$$
\int_{0}^{\pi/4} r d\theta \big|e^{i \alpha r^2\cos(2\theta)}\big|\big|e^{-\alpha r^2\sin(2\theta)}\big|=\int_{0}^{\pi/4} rd\theta e^{-\alpha r^2\sin(2\theta)}.
$$
Thus we need to show that this vanishes as $r \to \infty.$ If we make the substitution $\xi=2\theta , d\theta=d\xi/2$ and changing the bounds of integration we obtain
$$
\int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\alpha r^2\sin \xi}.
$$
We can see that for $\xi \in [0,\pi/2]$, $\sin\xi \geq 2\xi/\pi$. Since $e^{-\alpha r^2\cdot 2\xi/\pi} \geq e^{-\alpha r^2 \sin \xi} $ (since exponential is bigger for a smaller exponent), we can write
$$
\int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\alpha r^2\sin \xi} \leq \int_{0}^{\pi/2} \frac{r}{2}d\xi e^{-\alpha r^2 \cdot 2\xi/\pi}=\frac{\pi}{4 \alpha r}(1-e^{-\alpha r^2}).
$$
Thus it is clear that
$$
\lim_{r\to \infty}\frac{\pi}{4 \alpha r}(1-e^{-\alpha r^2})=0,
$$
thus we have shown that
$$
\lim_{r\to \infty} \bigg| \int_{0}^{\pi/4} e^{i\alpha r^2 e^{2i\theta}} ir e^{i\theta }d\theta \bigg|=0.
$$
We are left with
$$
0=\int_{0}^{\infty} e^{i\alpha x^2} dx +\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr.
$$
Re-arranging this expression we obtain
$$
\int_{0}^{\infty} e^{i\alpha x^2} dx=-\int_{\infty}^{0}e^{-\alpha r^2}e^{i\pi/4}dr=\int_{0}^{\infty}e^{-\alpha r^2}e^{i\pi/4}dr
$$
where I switched the bounds of integration to remove the minus sign. This is a fabulous result, we have reduced the Fresnel integral to a real Gaussian integral which is trivial, the result is
$$
\int_{0}^{\infty} e^{i\alpha x^2} dx=\int_{0}^{\infty}e^{-\alpha r^2}e^{i\pi/4}dr=\frac{1}{2}e^{i\pi/4} \sqrt{\frac{\pi}{\alpha}}
$$
for $\alpha> 0.$(proof at end of this part.)
Thus using the property that the integrand is even, we revert back to the original integral in to obtain the desired result given by
\begin{equation}
\int_{-\infty}^{\infty} e^{i\alpha x^2}dx=e^{i\pi/4} \sqrt{\frac{\pi}{\alpha}} , \ \ \alpha>0.
\end{equation}
Now we notice that $F(-\alpha)={F}^*(\alpha)$, thus we can write
$$
F(\alpha)=e^{-i\pi/4} \sqrt{\frac{\pi}{|\alpha|}}, \ \ \alpha<0.
$$
For $\alpha=0$, the integral is divergent since
$$
\int_{-\infty}^{\infty} dx=\infty.
$$
We can now calculate $C(\alpha)$ and $S(\alpha)$ by writing
$$
F(\alpha)=C(\alpha)+iS(\alpha)=e^{\pm i\pi/4}\sqrt{\frac{\pi}{|\alpha|}}=\sqrt{\frac{\pi}{|\alpha|}}\bigg( \frac{1}{\sqrt{2}}\pm \frac{i}{\sqrt{2}}\bigg).
$$
Thus we conclude that
\begin{equation}
{\boxed{
F(\alpha)=\sqrt{\frac{\pi}{|\alpha|}}\cdot
\left\{
\begin{array}{ll}
e^{i\pi/4} \ ,\ \alpha > 0 \\
e^{-i\pi/4} \ , \ \alpha < 0.\\
\end{array}
\right., \
C(\alpha)=\sqrt{\frac{\pi}{2|\alpha|}}, \ \alpha \neq 0, \ S(\alpha)=\sqrt{\frac{\pi}{|\alpha|}}\cdot
\left\{
\begin{array}{ll}
\frac{1}{\sqrt{2}} \ ,\ \alpha > 0 \\
-\frac{1}{\sqrt{2}} \ , \ \alpha < 0.\\
\end{array}
\right.
}}
\end{equation}</p>
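<p>The closed form can be sanity-checked numerically. Naive quadrature fails because the integrand does not decay, but integrating lobe by lobe between consecutive zeros gives an alternating series whose partial sums can be accelerated by repeated averaging. A stdlib sketch for <span class="math-container">$\alpha=1$</span>, where <span class="math-container">$C(1)=S(1)=\sqrt{\pi/8}$</span> (the helper names, lobe counts, and tolerances are my choices):</p>

```python
import math

def simpson(f, a, b, n=200):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def lobe_sum(f, zeros):
    # partial integrals up to each zero form an alternating sequence
    # around the limit; repeated pairwise averaging accelerates it
    partials = [simpson(f, 0.0, zeros[0])]
    for a, b in zip(zeros, zeros[1:]):
        partials.append(partials[-1] + simpson(f, a, b))
    seq = partials
    for _ in range(10):
        seq = [(u + v) / 2 for u, v in zip(seq, seq[1:])]
    return seq[-1]

# zeros of cos(x^2): x = sqrt(pi (k + 1/2)); zeros of sin(x^2): x = sqrt(pi k)
zc = [math.sqrt(math.pi * (k + 0.5)) for k in range(61)]
zs = [math.sqrt(math.pi * k) for k in range(1, 62)]

C = lobe_sum(lambda x: math.cos(x * x), zc)
S = lobe_sum(lambda x: math.sin(x * x), zs)
exact = math.sqrt(math.pi / 8)      # (1/2) sqrt(pi/alpha) / sqrt(2) at alpha = 1

print(C, S, exact)
```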
|
735,338 | <p>In evaluating an integral in path integrals in QFT, I am stuck with this integral (that came up from evaluating a functional integral),</p>
<p>$$I = \bigg( \frac{m}{2\pi i\tau}\bigg) \int dx_1e^{\frac{im\tau}{2} \bigg((x_{2} - x_1)^2+(x_{1} - x_0)^2\bigg)}$$</p>
<p>After some manipulation, I end up with an integral of the form (apart from some constants)</p>
<p>$$\int_{0}^{\infty} e^{iax^2}\;dx$$
And on further evaluation using the substitution $s= -iax^2 $, I get</p>
<p>$$ \int_{0}^{i\infty} \frac{ds}{\sqrt{-ias}} e^{-s} $$</p>
<p>what kind of contour can we choose for evaluating this one. A diagram would be really helpful.</p>
| robjohn | 13,854 | <p>To compute the integral
$$
\int_0^\infty e^{i\alpha x^2}\,\mathrm{d}x
$$
we will consider the contour integral
$$
\int_\gamma e^{i\alpha z^2}\,\mathrm{d}z=0
$$
where $\gamma=[0,R]\cup Re^{i[0,\pi/4]}\cup e^{i\pi/4}[R,0]$ as $R\to\infty$. </p>
<p>$\hspace{3cm}$<img src="https://i.stack.imgur.com/p6Mjx.png" alt="enter image description here"></p>
<p>There are no singularities of the integrand inside this contour, so the integral is $0$. Since the integral along the arc of the contour tends to $0$
$$
\begin{align}
\left|\int_{Re^{i[0,\pi/4]}}e^{i\alpha z^2}\,\mathrm{d}z\right|
&\le\int_0^{\pi/4}e^{-\alpha R^2\sin(2x)}R\,\mathrm{d}x\\
&\le\int_0^\infty e^{-\alpha R^24x/\pi}R\,\mathrm{d}x\\
&=\frac{\pi}{4\alpha R}
\end{align}
$$
the integrals along the two line segments must cancel. Therefore,
$$
\begin{align}
\overbrace{\int_0^\infty e^{i\alpha x^2}\,\mathrm{d}x}^{[0,R]}
&=\overbrace{\int_0^\infty e^{i\alpha(xe^{i\pi/4})^2}\,\mathrm{d}xe^{i\pi/4}}^{e^{i\pi/4}[0,R]}\\
&=e^{i\pi/4}\int_0^\infty e^{-\alpha x^2}\,\mathrm{d}x\\
&=\color{#C00000}{\frac{1+i}{\sqrt2}\frac12\sqrt{\frac\pi\alpha}}
\end{align}
$$</p>
<hr>
<p><strong>Correcting and Continuing the Answer in the Question</strong></p>
<p>Substituting $s=-i\alpha x^2$ gives
$$
\frac12\int_0^{-i\infty}\frac{e^{-s}}{\sqrt{-i\alpha s}}\,\mathrm{d}s
$$
note the factor of $\frac12$ and the upper limit of integration.</p>
<p>To compute the integral above, consider the integral
$$
\frac12\int_\gamma\frac{e^{-z}}{\sqrt{-i\alpha z}}\,\mathrm{d}z=0
$$
where $\gamma=-i[0,R]\cup Re^{-i[\pi/2,0]}\cup[R,0]$.</p>
<p>$\hspace{3.8cm}$<img src="https://i.stack.imgur.com/p5Wka.png" alt="enter image description here"></p>
<p>There are no singularities of the integrand inside this contour, so the integral is $0$. Again, the integral along the arc vanishes as $R\to\infty$
$$
\begin{align}
\left|\frac12\int_{Re^{-i[\pi/2,0]}}\frac{e^{-z}}{\sqrt{-i\alpha z}}\,\mathrm{d}z\right|
&\le\left|\frac12\int_{-\pi/2}^0\frac{e^{-R\cos(x)}}{\sqrt{\alpha R}}R\,\mathrm{d}x\right|\\
&=\left|\frac12\int_0^{\pi/2}\frac{e^{-R\sin(x)}}{\sqrt{\alpha R}}R\,\mathrm{d}x\right|\\
&\le\frac12\int_0^\infty\frac{e^{-2Rx/\pi}}{\sqrt{\alpha R}}R\,\mathrm{d}x\\[5pt]
&=\frac\pi{4\sqrt{\alpha R}}
\end{align}
$$
the integrals along the two line segments must cancel. Therefore,
$$
\begin{align}
\overbrace{\frac12\int_0^{-i\infty}\frac{e^{-s}}{\sqrt{-i\alpha s}}\,\mathrm{d}s}^{-i[0,R]}
&=\overbrace{\frac{1+i}{\sqrt{2\alpha}}\int_0^\infty\frac{e^{-x}}{2\sqrt{x}}\,\mathrm{d}x}^{[0,R]}\\
&=\frac{1+i}{\sqrt{2\alpha}}\int_0^\infty e^{-x^2}\,\mathrm{d}x\\
&=\color{#C00000}{\frac{1+i}{\sqrt{2\alpha}}\frac{\sqrt\pi}2}
\end{align}
$$</p>
|
2,943,637 | <p>I want to prove that if <span class="math-container">$|z|=1 $</span> then <span class="math-container">$z^8-3z^2+1 \neq 0$</span>. I tried to prove the contrapositive by taking norms in <span class="math-container">$z^8-3z^2+1= 0$</span> and then solving for <span class="math-container">$ |z|$</span>, but it does not work. I also assumed <span class="math-container">$| z|=1 $</span> and tried to see that <span class="math-container">$| z^8-3z^2+1 |> 0 $</span>, but that did not work either.</p>
<p>Any ideas on this?</p>
| Martin R | 42,969 | <p>If <span class="math-container">$z^8-3z^2+1 = 0$</span> then
<span class="math-container">$$
3 |z|^2 = |3 z^2| = |z^8 + 1| \le |z|^8 + 1
$$</span>
and that is not possible if <span class="math-container">$|z| = 1$</span>, since it would require <span class="math-container">$3 \le 2$</span>.</p>
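<p>The estimate even gives a quantitative bound, <span class="math-container">$|z^8-3z^2+1|\ge 3|z^2|-|z^8+1|\ge 1$</span> on the unit circle, which a quick sampling confirms (a sketch of mine):</p>

```python
import cmath
import math

samples = 100000
min_abs = float("inf")
for k in range(samples):
    z = cmath.exp(2j * math.pi * k / samples)   # z on the unit circle
    w = z ** 8 - 3 * z ** 2 + 1
    min_abs = min(min_abs, abs(w))

print(min_abs)   # the minimum is exactly 1, attained at z = 1
```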
|
2,828,023 | <p><strong>Four cards are dealt off the top of a well-shuffled deck.
Find the chance that:</strong></p>
<p><strong>You get a queen or a king.</strong> </p>
<p>The solution shows $ 1- \frac { \binom {44}{4}}{ \binom {52}{4}}$ </p>
<p>I get that in this case it's easier to use the complement rule; however, I'm trying another method and I got the following:</p>
<p>$ \binom {4}{1} \cdot\ \frac {4}{52} \cdot\ \frac {48}{51} \cdot\ \frac {47}{50} \cdot\ \frac {46}{49} $ + $ \binom {4}{1} \cdot\ \frac {4}{52} \cdot\ \frac {48}{51} \cdot\ \frac {47}{50} \cdot\ \frac {46}{49} $ - [ ($ \binom {4}{1} \cdot\ \frac {4}{52} \cdot\ \frac {48}{51} \cdot\ \frac {47}{50} \cdot\ \frac {46}{49} $)^2]</p>
<p>My reasoning is P(1 Queen or 1 King) = P(1 Queen) + P(1 King) - P[P(1 queen) x P (1 king)]</p>
<p>The answer that I got is about 0.2499, which is different from the solution obtained from the complement method. What did I miss here?</p>
<p>Thank you!</p>
| Graham Kemp | 135,106 | <p>In addition to @InterstellarProbe 's answer, your method of Inclusion and Exclusion should be</p>
<p>$$\begin{align}\mathsf P(Q\geq 1\cup K\geq 1) &= \mathsf P(Q\geq 1)+\mathsf P(K\geq 1)-\mathsf P(Q\geq 1\cap K\geq 1)
\\&=\left(\tfrac{\tbinom 41\tbinom {48}3+\tbinom 42\tbinom {48}2+\tbinom 43\tbinom {48}1+\tbinom 44\tbinom {48}0}{\tbinom{52}4}\right)+\mathsf P(K\geq 1)-\mathsf P(Q\geq 1\cap K\geq 1)
\\ &= \left(1-\tfrac{\tbinom 40\tbinom {48}4}{\tbinom{52}4}\right)+\left(1-\tfrac{\tbinom 40\tbinom {48}4}{\tbinom{52}4}\right)-\left(1-\tfrac{\tbinom 40\tbinom {48}4+\tbinom 40\tbinom {48}4-\tbinom 40\tbinom 40\tbinom {44}4}{\tbinom{52}4}\right) \\ &= 1-\tfrac{\tbinom {44}{4}}{\tbinom{52}4}\end{align}$$</p>
<p>Noting:</p>
<ul>
<li>The probability for obtaining particular counts for kings and queens in the same hand are <em>not independent</em>, so you cannot multiply the individual probabilities.</li>
<li>You need to evaluate the probabilities for having <em>more</em> than none of the card type, not exactly 1.</li>
<li>The evaluation is thus made <em>much harder</em> by considering kings and queens as separate categories, rather than considering "the 8 cards which are king-or-queen" as one category. </li>
</ul>
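<p>As a quick numerical cross-check of both routes (a standalone sketch; the variable names are mine), inclusion-exclusion on the events "at least one queen" and "at least one king" agrees with the complement method:</p>

```python
from math import comb

# Complement route: P(at least one king-or-queen) = 1 - C(44,4)/C(52,4)
p_complement = 1 - comb(44, 4) / comb(52, 4)

# Inclusion-exclusion route with "at least one" events, as in the answer
p_q = 1 - comb(48, 4) / comb(52, 4)                       # P(Q >= 1)
p_k = 1 - comb(48, 4) / comb(52, 4)                       # P(K >= 1)
p_qk = 1 - (2 * comb(48, 4) - comb(44, 4)) / comb(52, 4)  # P(Q >= 1 and K >= 1)
p_incl_excl = p_q + p_k - p_qk

print(p_complement)  # approx 0.4986, not 0.2499
```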
|
119,492 | <p>First I define a function, just a sum of a few sin waves at different angular frequencies:</p>
<pre><code>ubdat = 50;
ws = 10*{2, 5, 10, 20, 40}
fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, .001}];
pts = Length@fn
ListPlot[fn, Joined -> True, PlotRange -> {{0, 1000}, All}]
{20, 50, 100, 200, 400}
</code></pre>
<p><a href="https://i.stack.imgur.com/ZCcaO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZCcaO.png" alt="enter image description here"></a></p>
<p>If I take the Fourier transform and scale it correctly, you can see the correct peaks:</p>
<pre><code>fnft = Abs@Fourier@fn;
fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}];
ListPlot[fnftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><a href="https://i.stack.imgur.com/adDJS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/adDJS.png" alt="enter image description here"></a></p>
<p>Now, I want to do a low pass filter on it, for, say, $\omega_c=140$. This should get rid of the peaks at 200 and 400, ideally. Doing it this way returns the same plots as above:</p>
<pre><code>fnfilt = LowpassFilter[fn, 140];
ListPlot[fnfilt, Joined -> True, PlotRange -> {{0, 1000}, All}]
fnfiltft = Abs@Fourier@fnfilt;
fnfiltftnormed =
Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}];
ListPlot[fnfiltftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p>I assume the problem is something to do with defining SampleRate, but the documentation explaining how it's defined or how to use it on the <a href="https://reference.wolfram.com/language/ref/LowpassFilter.html" rel="noreferrer">LowpassFilter page</a> is <em>very</em> sparse:</p>
<blockquote>
<p>By default, SampleRate->1 is assumed for images as well as data. For a
sampled sound object of sample rate of r, SampleRate->r is used. With
SampleRate->r, the cutoff frequency should be between 0 and $r*\pi$.</p>
</blockquote>
<p>It appears to have a broken link at the bottom, so maybe that had something helpful. The page for SampleRate itself has even less info.</p>
<p>My naive attempt at choosing a sample rate would be dividing the number of samples by the total range, so in this case, <code>Floor[pts/ubdat]=1000</code>. Using this <em>does</em> affect the FT, but not a whole lot:</p>
<pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -> 1000];
ListPlot[fnfilt, Joined -> True, PlotRange -> {{0, 1000}, All}]
fnfiltft = Abs@Fourier@fnfilt;
fnfiltftnormed =
Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}];
ListPlot[fnfiltftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><a href="https://i.stack.imgur.com/XUqiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XUqiC.png" alt="enter image description here"></a></p>
<p>So what am I missing? I've tried googling for some sort of guide on using filters in Mathematica, but I can't find anything and it's very frustrating.</p>
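<p>As a sanity check on the bin-to-frequency bookkeeping (a short pure-Python sketch with made-up numbers, using the direct DFT convention <span class="math-container">$X_k=\sum_n x_n e^{-2\pi i kn/N}$</span>): a tone at <span class="math-container">$f$</span> Hz observed for <span class="math-container">$T$</span> seconds peaks at bin <span class="math-container">$k=fT$</span>, i.e. at angular frequency <span class="math-container">$2\pi k/T$</span>.</p>

```python
import cmath
import math

f, T, n = 4.0, 2.0, 200  # a 4 Hz tone, 2 s of data, i.e. sample rate 100 Hz
x = [math.sin(2 * math.pi * f * i * T / n) for i in range(n)]

# Direct DFT: X[k] = sum_i x[i] * exp(-2*pi*1j*k*i/n)
X = [sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
     for k in range(n)]

# The peak over the positive-frequency half should sit at bin k = f*T
peak_bin = max(range(1, n // 2), key=lambda k: abs(X[k]))
omega = 2 * math.pi * peak_bin / T  # recovered angular frequency
```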
| Hugh | 12,558 | <p>As MarcoB remembered, I have looked at this <a href="https://mathematica.stackexchange.com/a/91725/12558">before</a> and concluded that LowpassFilter is for image processing, not for signal processing. Let's go through your example with random noise and see what we get. I start by making some random noise with zeros before and after.</p>
<pre><code>sr = Round[1/0.001];
fn = Join[ConstantArray[0, 500], RandomReal[{-1, 1}, 49000],
ConstantArray[0, 500]];
pts = Length@fn
ListPlot[fn, Joined -> True, PlotRange -> {{0, 1000}, All}]
</code></pre>
<p><img src="https://i.stack.imgur.com/wXi4m.png" alt="Mathematica graphics"></p>
<p>Now do the filtering. Plot the start of the original and the filtered time history against point number.</p>
<pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -> sr];
ListLinePlot[{fn, fnfilt}, PlotRange -> {{475, 525}, All}]
</code></pre>
<p><img src="https://i.stack.imgur.com/MMw2S.png" alt="Mathematica graphics"></p>
<p>The filtered signal begins before the original signal. This is a non-causal response which is usually avoided in signal processing. At the end there is also a signal after the original signal has stopped (not plotted). This is acceptable in signal processing and is due to ringing of the filter. </p>
<p>Take the Fourier transforms and then plot the filtered spectrum and the ratio of the filtered spectrum to the original spectrum. This is the filter transfer function. Also make the frequency axis. Note that the first point is zero frequency.</p>
<pre><code>fnft = Abs@Fourier@fn;
fnfiltft = Abs@Fourier@fnfilt;
freqs = Table[2.*Pi*(i - 1) sr/(pts - 1), {i, pts}];
ListLinePlot[Transpose[{freqs, fnfiltft}],
PlotRange -> {{0, 500}, All}]
ListLinePlot[Transpose[{freqs, fnfiltft/fnft}],
PlotRange -> {{0, 500}, All},
Epilog -> {Pink, Line[{{0, 1/Sqrt[2]}, {140, 1/Sqrt[2]}, {140, 0}}]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/4BM7b.png" alt="Mathematica graphics"></p>
<p><img src="https://i.stack.imgur.com/14tRJ.png" alt="Mathematica graphics"></p>
<p>I have added pink lines to show the standard cut-off frequency. As filter transfer functions go, this is not very good. It is not flat in the pass band and does not drop to the usual <code>1/Sqrt[2]</code> at the cut-off frequency. </p>
<p>I really recommend that you use the signal processing software I described <a href="https://mathematica.stackexchange.com/a/119533/12558">here</a></p>
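<p>For comparison, the ideal "brick-wall" behaviour (flat pass band, hard cutoff) can be sketched in a few lines of pure Python by zeroing DFT bins above the cutoff. This is a toy illustration with made-up numbers, not a practical filter design:</p>

```python
import cmath
import math

n, sr = 200, 200  # 1 second of data sampled at 200 Hz
t = [i / sr for i in range(n)]
x = [math.sin(2 * math.pi * 2 * ti) + math.sin(2 * math.pi * 40 * ti) for ti in t]

def dft(v):
    m = len(v)
    return [sum(v[i] * cmath.exp(-2j * math.pi * k * i / m) for i in range(m))
            for k in range(m)]

X = dft(x)
cutoff = 10  # Hz; keep bins 0..10 and their negative-frequency mirrors
Y = [Xk if min(k, n - k) <= cutoff else 0 for k, Xk in enumerate(X)]

# Inverse DFT: the 2 Hz component survives, the 40 Hz one is removed exactly
y = [sum(Y[k] * cmath.exp(2j * math.pi * k * i / n) for k in range(n)).real / n
     for i in range(n)]
err = max(abs(yi - math.sin(2 * math.pi * 2 * ti)) for yi, ti in zip(y, t))
```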
|
2,966,959 | <blockquote>
<p>Consider the simultaneous system of differential equations:
<span class="math-container">$$ \begin{equation}
x'(t)=y(t) -x(t)/2\\
y'(t)=x(t)/4-y(t)/2
\end{equation} $$</span>
If <span class="math-container">$ x(0)=2 $</span> and <span class="math-container">$ y(0)=3 $</span>, then what is <span class="math-container">$ \lim_{t\to\infty}(x(t)+y(t)) $</span>?</p>
</blockquote>
<p>Here is what I do:</p>
<p><span class="math-container">$$ \frac{dy}{dx}=\frac{\frac{1}{4}x-\frac{1}{2}y}{-\frac{1}{2}x+y}=-\frac{1}{2} $$</span>
So <span class="math-container">$$ y=-\frac{1}{2}x+4 $$</span> and <span class="math-container">$$ x(t)+y(t)=\frac{1}{2}x(t)+4 .$$</span>
Now solve for <span class="math-container">$ x(t) $</span>, we have <span class="math-container">$$ x(t)=4-\frac{2}{e^t} .$$</span>
Hence <span class="math-container">$ \lim_{t\to\infty}((x(t)+y(t))=2+4=6 $</span>.</p>
<p>However, there should be another method involving using matrices in the standard way. How to do it via matrices?</p>
<p>The question is from:(14) of <a href="https://math.uchicago.edu/~min/GRE/files/week2.pdf" rel="nofollow noreferrer">https://math.uchicago.edu/~min/GRE/files/week2.pdf</a></p>
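<p>As a numerical sanity check of the limit (a sketch with a hand-rolled RK4 integrator; the step size is arbitrary):</p>

```python
def deriv(x, y):
    # x' = y - x/2,  y' = x/4 - y/2
    return y - x / 2, x / 4 - y / 2

x, y, h = 2.0, 3.0, 0.01
for _ in range(2000):  # integrate out to t = 20
    k1 = deriv(x, y)
    k2 = deriv(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = deriv(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = deriv(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
# By t = 20 the trajectory should be near (4, 2), so x + y is near 6
```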
| Vasili | 469,083 | <p>Everything was fine until you've made a silly mistake. It should be <br> <span class="math-container">$$\large{v+x\frac{dv}{dx}=-\frac{v}{2+3v^2}}$$</span> <span class="math-container">$$\large{x\frac{dv}{dx}=-\frac{3v+3v^3}{2+3v^2}}$$</span>
<span class="math-container">$$\large{\frac{2+3v^2}{3v+3v^3}}dv=-\frac{dx}{x}$$</span></p>
|
2,493,802 | <p>If $A \subset \mathbb R$, we have the distance between a point in $\mathbb R$ and $A$ as</p>
<p>$$d(x,A)=\inf\{|x-a| \mid a \in A\}.$$</p>
<p><strong>For $\varepsilon>0$, if we define $A(\varepsilon) = \{x \in \mathbb R \mid d(x,A) < \varepsilon \}$, how do we show this set is open?</strong></p>
<hr>
<p>I know that $A \subset A(\varepsilon)$ since for any point in $a \in A$ we will have $d(a, A)=0 < \varepsilon$ for any $\varepsilon >0$.</p>
<p>I can see that $A(\varepsilon)$ is sort of like a neighborhood of the set $A$ but I am having trouble formalizing this.</p>
| Cameron Buie | 28,900 | <p>Well, I'd start by taking an arbitrary $x\in A(\varepsilon).$ Since $d(x,A)<\varepsilon,$ then there is some $a\in A$ such that $|x-a|<\varepsilon.$ (Can you see why?) Thus, for any $x\in A(\varepsilon),$ we have that $$x\in\bigcup_{a\in A}(a-\varepsilon,a+\varepsilon).\tag{$\heartsuit$}$$ On the other hand, given an arbitrary $x$ as in $(\heartsuit),$ there is some $a\in A$ such that $|x-a|<\varepsilon,$ so $d(x,A)<\varepsilon.$ (Do you see why?) Thus, $x\in A(\varepsilon).$</p>
<p>By double inclusion, $$A(\varepsilon)=\bigcup_{a\in A}(a-\varepsilon,a+\varepsilon),$$ so as a union of open intervals, $A(\varepsilon)$ is open. (Can you justify this?)</p>
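<p>For a finite <span class="math-container">$A$</span> the identity can be checked mechanically (a small sketch; the set <span class="math-container">$A$</span> and the value of <span class="math-container">$\varepsilon$</span> below are arbitrary choices):</p>

```python
A = [0.0, 1.0, 2.5]
eps = 0.4

def d(x):
    # d(x, A) for a finite A: the infimum is just a minimum
    return min(abs(x - a) for a in A)

def in_A_eps(x):
    return d(x) < eps  # x in A(eps)

def in_union(x):
    return any(abs(x - a) < eps for a in A)  # x in union of (a-eps, a+eps)

samples = [i / 100 for i in range(-100, 400)]
agree = all(in_A_eps(x) == in_union(x) for x in samples)
```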
|
2,493,802 | <p>If $A \subset \mathbb R$, we have the distance between a point in $\mathbb R$ and $A$ as</p>
<p>$$d(x,A)=\inf\{|x-a| \mid a \in A\}.$$</p>
<p><strong>For $\varepsilon>0$, if we define $A(\varepsilon) = \{x \in \mathbb R \mid d(x,A) < \varepsilon \}$, how do we show this set is open?</strong></p>
<hr>
<p>I know that $A \subset A(\varepsilon)$ since for any point in $a \in A$ we will have $d(a, A)=0 < \varepsilon$ for any $\varepsilon >0$.</p>
<p>I can see that $A(\varepsilon)$ is sort of like a neighborhood of the set $A$ but I am having trouble formalizing this.</p>
| 7697 | 480,457 | <p>A set S in metric space is open if for any $x \in S$, there is $\delta>0$(this $\delta$ depends of $x$) such that $B_{\delta}(x) \subset S$.</p>
<p>Take $x \in A(\epsilon)$. In this case we choose $\delta = \epsilon - d(x,A)$.</p>
<p>Take $y \in B_{\delta}(x)$. Then we have $d(y,A) \leq d(y,x) + d(x,A)< \epsilon - d(x,A) + d(x,A) = \epsilon$, so $y \in A(\epsilon)$, i.e. $B_{\delta}(x) \subset A(\epsilon)$.</p>
<p>Then $A(\epsilon)$ is open.</p>
|
973,101 | <p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p>
<p>If it was a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p>
<p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p>
<p>and calculate the point </p>
<p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p>
<p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No prefered direction.</p>
<p>Suppose now we have an ellipsoid</p>
<p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p>
<p>How about generating three $N(0,1)$ variables as above, calculate</p>
<p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p>
<p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid but will they be uniformly distributed? How can we check that?</p>
<p>Any help greatly appreciated, thanks.</p>
<p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p>
<p>EDIT:</p>
<p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as
$$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos
^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin
^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi
)\right)}$$</p>
| mercio | 17,445 | <p>One way to proceed is to generate a point uniformly on the sphere, apply the mapping $f : (x,y,z) \mapsto (x'=ax,y'=by,z'=cz)$ and then correct the distortion created by the map by discarding the point randomly with some probability $p(x,y,z)$ (after discarding you restart the whole thing).</p>
<p>When we apply $f$, a small area $dS$ around some point $P(x,y,z)$ will become a small area $dS'$ around $P'(x',y',z')$, and we need to compute the multiplicative factor $\mu_P = dS'/dS$.</p>
<p>I need two tangent vectors around $P(x,y,z)$, so I will pick $v_1 = (dx = y, dy = -x, dz = 0)$ and $v_2 = (dx = z,dy = 0, dz=-x)$</p>
<p>We have $dx' = adx, dy'=bdy, dz'=cdz$ ;
$Tf(v_1) = (dx' = adx = ay = ay'/b, dy' = bdy = -bx = -bx'/a,dz' = 0)$, and similarly $Tf(v_2) = (dx' = az'/c,dy' = 0,dz' = -cx'/a)$</p>
<p>(we can do a sanity check and compute $x'dx'/a^2+ y'dy'/b^2+z'dz'/c^2 = 0$ in both cases)</p>
<p>Now, $dS = v_1 \wedge v_2 = (y e_x - xe_y) \wedge (ze_x-xe_z) = x(y e_z \wedge e_x + ze_x \wedge e_y + x e_y \wedge e_z)$ so $|| dS || = |x|\sqrt{x^2+y^2+z^2} = |x|$</p>
<p>And $dS' = (Tf \wedge Tf)(dS) = ((ay'/b) e_x - (bx'/a) e_y) \wedge ((az'/c) e_x-(cx'/a) e_z) = (x'/a)((acy'/b) e_z \wedge e_x + (abz'/c) e_x \wedge e_y + (bcx'/a) e_y \wedge e_z)$</p>
<p>And finally $\mu_{(x,y,z)} = ||dS'||/||dS|| = \sqrt{(acy)^2 + (abz)^2 + (bcx)^2}$.</p>
<p>It's quick to check that when $(x,y,z)$ is on the sphere the extrema of this expression can only happen at one of the six "poles" ($(0,0,\pm 1), \ldots$). If we suppose $0 < a < b < c$, its minimum is at $(0,0,\pm 1)$ (where the area is multiplied by $ab$) and the maximum is at $(\pm 1,0,0)$ (where the area is multiplied by $\mu_{\max} = bc$)</p>
<p>The smaller the multiplication factor is, the more we have to remove points, so after choosing a point $(x,y,z)$ uniformly on the sphere and applying $f$, we have to keep the point $(x',y',z')$ with probability $\mu_{(x,y,z)}/\mu_{\max}$.</p>
<p>Doing so should give you points uniformly distributed on the ellipsoid. </p>
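<p>A direct transcription of this recipe might look as follows (a sketch in Python; the semi-axes are arbitrary and chosen with $0 < a < b < c$, so that $\mu_{\max}=bc$):</p>

```python
import math
import random

random.seed(0)
a, b, c = 1.0, 2.0, 3.0  # 0 < a < b < c, so mu_max = b*c
points = []
while len(points) < 500:
    # Uniform point on the unit sphere from three standard normals
    u, v, w = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(u * u + v * v + w * w)
    x, y, z = u / r, v / r, w / r
    mu = math.sqrt((a * c * y) ** 2 + (a * b * z) ** 2 + (b * c * x) ** 2)
    if random.random() < mu / (b * c):  # keep with probability mu / mu_max
        points.append((a * x, b * y, c * z))

# Every kept point lies on the ellipsoid
resid = max(abs((x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 - 1)
            for x, y, z in points)
```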
|
913,239 | <p>Given a circle with known center $c$, known radius $r$ and perimeter point $x$:
$$
(x - c_x)^2 + (y - c_y)^2 = r^2
$$
with a tangent line that also goes through a point $p$ lying outside the circle. How do I find the point $x$ at which the line touches the circle?</p>
<p>Given that the tangent line is orthogonal to the vector $(x-c)$ and also that the vector $(x-p)$ lies on the tangent line we have $(x-c) \cdot (x-p) = 0$ which can be expanded to:</p>
<p>$$
(x - c_x) (x - p_x) + (y - c_y) (y - p_y) = 0
$$</p>
<p>Thus my question is:
How do I find the point $x$?</p>
| mathlove | 78,967 | <p>What you got is correct. You can solve them in the following way.</p>
<p>We have to solve the following two :
$$(x-a)^2+(y-b)^2=r^2\tag1$$
$$(x-a)(x-s)+(y-b)(y-t)=0\tag2$$
where $x=p_x,y=p_y,a=c_x,b=c_y,s=pp_x,t=pp_y$.</p>
<p>Note that
$$(1)\iff \color{red}{x^2}-2ax+a^2+\color{blue}{y^2}-2by+b^2=r^2$$
$$(2)\iff \color{red}{x^2}-(a+s)x+as+\color{blue}{y^2}-(b+t)y+bt=0$$
So, substracting the latter from the former gives you the form of $y=Ax+B$. So you can plug it in $(1)$ to get $x$. Note that you'll get two $x$s. Then plug them in $y=Ax+B$ to get $y$s.</p>
<p>P.S. If $t\not =b$, then we get
$$y=\frac{a-s}{t-b}x+\frac{r^2-a^2+as-b^2+bt}{t-b}=Ax+B.$$
Now plugging this in $(1)$ gives us
$$x^2-2ax+a^2+(Ax+B)^2-2b(Ax+B)+b^2=r^2.$$
Now, you can solve this for $x$.</p>
|
913,239 | <p>Given a circle with known center $c$, known radius $r$ and perimeter point $x$:
$$
(x - c_x)^2 + (y - c_y)^2 = r^2
$$
with a tangent line that also goes through a point $p$ lying outside the circle. How do I find the point $x$ at which the line touches the circle?</p>
<p>Given that the tangent line is orthogonal to the vector $(x-c)$ and also that the vector $(x-p)$ lies on the tangent line we have $(x-c) \cdot (x-p) = 0$ which can be expanded to:</p>
<p>$$
(x - c_x) (x - p_x) + (y - c_y) (y - p_y) = 0
$$</p>
<p>Thus my question is:
How do I find the point $x$?</p>
| georg | 144,937 | <p>Equation of circle center (m,n) radius r: $(x-m)^2+(y-n)^2=r^2$, the point $P(x_p,y_p)$ lying outside the circle. Solve using polar.</p>
<p>The coordinates of the point of contact of tangents from P - solve equations:</p>
<p>$(x-m)^2+(y-n)^2=r^2$ </p>
<p>and </p>
<p>$(x-m)(x_p-m)+(y-n)(y_p-n)=r^2$ </p>
<p>Edit - addet:</p>
<p>The equation of the tangent line through the point P and the point of contact $T(x_t,y_t)$ has the same shape:</p>
<p>$(x-m)(x_t-m)+(y-n)(y_t-n)=r^2$</p>
<p>Edit - example:</p>
<p>$P(3,5)$</p>
<p>$(x-3)^2+(y+5)^2=20,\,\,(3-3)(x-3)+(5+5)(y+5)=20\Rightarrow T_1(7,-3),T_2(-1,-3)$</p>
<p>So the equations of tangents:</p>
<ol>
<li><p>$(7-3)(x-3)+(-3+5)(y+5)=20\Rightarrow 2x+y-11=0$</p></li>
<li><p>$(-1-3)(x-3)+(-3+5)(y+5)=20\Rightarrow 2x-y-1=0$</p></li>
</ol>
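<p>The example can be verified numerically: each tangent point must satisfy the circle equation and the orthogonality condition $(T-c)\cdot(T-p)=0$ (a standalone check of the numbers above):</p>

```python
m, n, r2 = 3.0, -5.0, 20.0  # circle (x-3)^2 + (y+5)^2 = 20
px, py = 3.0, 5.0           # external point P
tangent_points = [(7.0, -3.0), (-1.0, -3.0)]

# Value of the circle's left-hand side at each tangent point (should be r^2)
on_circle = [(tx - m) ** 2 + (ty - n) ** 2 for tx, ty in tangent_points]

# Dot product (T - center) . (T - P) (should be 0)
orthogonal = [(tx - m) * (tx - px) + (ty - n) * (ty - py)
              for tx, ty in tangent_points]
```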
|
3,259,910 | <p>My question is: let <span class="math-container">$G$</span> be the group of polynomials under addition, with coefficients from <span class="math-container">$\{0,1,2,3,4,5,6,7,8,9\}$</span> (addition mod <span class="math-container">$10$</span>).
If <span class="math-container">$f(x)=7x^2+5x+4$</span> and <span class="math-container">$g(x)=4x^2+8x+6$</span>, then <span class="math-container">$f(x)+g(x)=x^2+3x$</span>.
What are the orders of <span class="math-container">$f(x)$</span>, <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)+g(x)$</span>?
I find that the answers are <span class="math-container">$10$</span>, <span class="math-container">$5$</span> and <span class="math-container">$10$</span>. Am I correct?
(I noticed that the zero polynomial is the identity here.)
Then if <span class="math-container">$h(x)=a_nx^n+\dots+a_0$</span> belongs to <span class="math-container">$G$</span>, what is its order, given that</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=1$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=2$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=5$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=10$</span></p>
<p>Here I do not understand how to proceed.</p>
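<p>For the first part, the claimed orders can be checked computationally (a sketch: since addition is coefficient-wise mod <span class="math-container">$10$</span>, the order of a polynomial is the lcm of the additive orders <span class="math-container">$10/\gcd(c,10)$</span> of its coefficients):</p>

```python
from math import gcd

def order_mod10(coeffs):
    # Additive order of c in Z_10 is 10 // gcd(c, 10);
    # the order of the polynomial is the lcm over its coefficients.
    result = 1
    for c in coeffs:
        o = 10 // gcd(c, 10)
        result = result * o // gcd(result, o)
    return result

f_order = order_mod10([7, 5, 4])  # 7x^2 + 5x + 4
g_order = order_mod10([4, 8, 6])  # 4x^2 + 8x + 6
s_order = order_mod10([1, 3, 0])  # x^2 + 3x
```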
| egreg | 62,967 | <p>Note that
<span class="math-container">$$
2\sin\alpha\sec^2\alpha=\frac{2\sin\alpha}{\cos^2\alpha}=\frac{2\sin\alpha}{1-\sin^2\alpha}
$$</span>
If
<span class="math-container">$$
f(x)=\arctan\frac{2x}{1-x^2}
$$</span>
then
<span class="math-container">$$
f'(x)=\frac{2}{1+x^2}
$$</span>
so <span class="math-container">$f(x)$</span> differs from <span class="math-container">$2\arctan x$</span> by a constant over <span class="math-container">$(-1,1)$</span>. Since <span class="math-container">$f(0)=0=2\arctan0$</span>, we can say that
<span class="math-container">$$
\arctan\frac{2\sin\alpha}{1-\sin^2\alpha}=2\arctan\sin\alpha
$$</span>
For <span class="math-container">$x>0$</span>, <span class="math-container">$\arctan(1/x)=\pi/2-\arctan x$</span>, so your expression evaluates to
<span class="math-container">$$
2\left(\frac{\pi}{2}-\arctan\sin\alpha\right)+2\arctan\sin\alpha=\pi
$$</span>
if <span class="math-container">$\sin\alpha>0$</span>.</p>
<p>If <span class="math-container">$\sin\alpha<0$</span>, the expression evaluates to <span class="math-container">$-\pi$</span>, because for <span class="math-container">$x<0$</span> one has <span class="math-container">$\arctan(1/x)=-\pi/2-\arctan x$</span>.</p>
|
3,259,910 | <p>My question is: let <span class="math-container">$G$</span> be the group of polynomials under addition, with coefficients from <span class="math-container">$\{0,1,2,3,4,5,6,7,8,9\}$</span> (addition mod <span class="math-container">$10$</span>).
If <span class="math-container">$f(x)=7x^2+5x+4$</span> and <span class="math-container">$g(x)=4x^2+8x+6$</span>, then <span class="math-container">$f(x)+g(x)=x^2+3x$</span>.
What are the orders of <span class="math-container">$f(x)$</span>, <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)+g(x)$</span>?
I find that the answers are <span class="math-container">$10$</span>, <span class="math-container">$5$</span> and <span class="math-container">$10$</span>. Am I correct?
(I noticed that the zero polynomial is the identity here.)
Then if <span class="math-container">$h(x)=a_nx^n+\dots+a_0$</span> belongs to <span class="math-container">$G$</span>, what is its order, given that</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=1$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=2$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=5$</span>,</p>
<p><span class="math-container">$\gcd(a_1,a_2,\ldots,a_n)=10$</span></p>
<p>Here I do not understand how to proceed.</p>
| lab bhattacharjee | 33,337 | <p>Using <a href="https://math.stackexchange.com/questions/1837410/inverse-trigonometric-function-identity-doubt-tan-1x-tan-1y-pi-tan/1837799#1837799">Inverse trigonometric function identity doubt: $\tan^{-1}x+\tan^{-1}y =-\pi+\tan^{-1}\left(\frac{x+y}{1-xy}\right)$, when $x<0$, $y<0$, and $xy>1$</a>,</p>
<p><span class="math-container">$$2\arctan p=\begin{cases} \arctan\dfrac{2p}{1-p^2} &\mbox{if } p^2<1 \\
\pi+ \arctan\dfrac{2p}{1-p^2} & \mbox{if } p>1\\-\pi+ \arctan\dfrac{2p}{1-p^2} & \mbox{if } p<-1\end{cases} $$</span></p>
<p>So, if <span class="math-container">$2m\pi>\alpha>(2m-1)\pi$</span>, then <span class="math-container">$\sin\alpha<0\implies\csc\alpha<-1$</span></p>
<p>Consequently, <span class="math-container">$$2\arctan(\csc\alpha)=-\pi+\arctan\dfrac{2\csc\alpha}{1-\csc^2\alpha}$$</span>
<span class="math-container">$$=-\pi+\arctan\left(-\dfrac{2\csc\alpha}{\cot^2\alpha}\right)$$</span></p>
<p><span class="math-container">$$=-\pi-\arctan\left(\dfrac{2\csc\alpha}{\cot^2\alpha}\right)$$</span></p>
<p><span class="math-container">$$=-\pi-\arctan\left(\dfrac{2\sin\alpha}{\cos^2\alpha}\right)$$</span></p>
<p>Here <span class="math-container">$-1<\alpha<0\implies\csc\alpha<0$</span></p>
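<p>The identity in this case can be spot-checked numerically (a sketch; the sample values of <span class="math-container">$\alpha$</span> are arbitrary points with <span class="math-container">$\sin\alpha<0$</span>):</p>

```python
import math

def lhs(alpha):
    return 2 * math.atan(1 / math.sin(alpha))  # 2 arctan(csc alpha)

def rhs(alpha):
    return -math.pi - math.atan(2 * math.sin(alpha) / math.cos(alpha) ** 2)

alphas = [-0.3, -0.7, -1.2]  # all satisfy sin(alpha) < 0
max_gap = max(abs(lhs(a) - rhs(a)) for a in alphas)
```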
|
4,361,063 | <blockquote>
<p>Suppose that <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_1 \subset \tau_2$</span>. Is the space <span class="math-container">$(X, \tau_2)$</span> compact? Does the converse hold i.e if <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>?</p>
</blockquote>
<p>The first one shouldn't hold since if <span class="math-container">$X= [0,1]$</span> and <span class="math-container">$\tau_1$</span> is the usual topology of <span class="math-container">$\Bbb R$</span>, then I think that if <span class="math-container">$\tau_2$</span> is the lower limit topology we have <span class="math-container">$\tau_1 \subset \tau_2$</span> and <span class="math-container">$X$</span> wouldn't be compact?</p>
<p>The second one also doesn't seem true. If <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>, then every open cover of <span class="math-container">$X$</span> has a finite subcover, but I don't see why this would hold for the coarser topology <span class="math-container">$\tau_2$</span>. I think it could be that <span class="math-container">$\tau_2$</span> doesn't have "enough" elements to satisfy this.</p>
| Arctic Char | 629,362 | <p>The first one holds if <span class="math-container">$X$</span> is finite, since every topology on a finite set is compact. If <span class="math-container">$X$</span> is infinite, for any topology <span class="math-container">$\tau_1$</span> on <span class="math-container">$X$</span> one can take <span class="math-container">$\tau_2$</span> to be the discrete topology, which is non-compact.</p>
<p>On the other hand, if <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2\subset \tau_1$</span>, then <span class="math-container">$(X, \tau_2)$</span> is also compact: take any covering <span class="math-container">$\mathscr U \subset \tau_2$</span> of <span class="math-container">$X$</span>. Then it is also a covering of <span class="math-container">$(X, \tau_1)$</span> and hence has a finite subcover <span class="math-container">$\mathscr U_1 \subset \mathscr U$</span>, which is also a finite subcover in <span class="math-container">$(X, \tau_2)$</span>.</p>
|
4,494,489 | <p>These are questions I’m stuck on:</p>
<p>(i) Prove for <span class="math-container">$k\in \mathbb{R}^+:\frac{2}{\sqrt{k+1}+\sqrt{k}}\lt\frac{1}{\sqrt{k}}$</span></p>
<p>(ii) Prove <span class="math-container">$16\lt\sum_{k=1}^{80}\frac{1}{\sqrt{k}}\lt17$</span></p>
<p>I did the first one, but I just manipulated the inequality until I got <span class="math-container">$\sqrt{k+1}\gt\sqrt{k}$</span>, which makes me think I did something wrong, because I assumed I was supposed to use induction.</p>
<p>For the second one I am just completely stuck, I tried to convert 16 and 17 into summations with the same limits and got</p>
<p><span class="math-container">$\sum_{k=1}^{80}\frac{16k}{3240}=16$</span></p>
<p><span class="math-container">$\sum_{k=1}^{80}\frac{17k}{3240}=17$</span></p>
<p>But after that I just didn’t know what to do.</p>
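<p>A quick numerical check of the claimed bounds (illustrative only; part (i) telescopes, since <span class="math-container">$\frac{2}{\sqrt{k+1}+\sqrt{k}}=2(\sqrt{k+1}-\sqrt{k})$</span>, which is where the bounds below come from):</p>

```python
import math

s = sum(1 / math.sqrt(k) for k in range(1, 81))

# Telescoping bounds suggested by part (i):
# summing 2*(sqrt(k+1)-sqrt(k)) over k=1..80 gives the lower bound,
# and 1 + the sum of 2*(sqrt(k)-sqrt(k-1)) over k=2..80 gives the upper bound.
lower = 2 * (math.sqrt(81) - 1)      # exactly 16
upper = 1 + 2 * (math.sqrt(80) - 1)  # about 16.89 < 17
```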
| Li Kwok Keung | 1,072,805 | <p>Sorry that I still stick to counting <span class="math-container">$i$</span> from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>. This sounds more natural to me.</p>
<p>I think that the calculation is incorrect because</p>
<p>First, in your formula, <span class="math-container">$\sum_{i=1}^n P(A_i)=\sum_{i=1}^n \frac {1}{2^i}=1-\frac{1}{2^n} \neq 1 $</span></p>
<p>It means that something is wrong. Actually <span class="math-container">$P(A_n)$</span> should be <span class="math-container">$\frac {1}{2^{n-1}}$</span> (See Leander Tilsted Kristensen's comment).</p>
<p>Second, the formula for <span class="math-container">$P(E|A_i)$</span> should be <span class="math-container">$$P(E|A_i)=\frac {\binom{2^n-1-i}{1}}{\binom{2^n-1}{1}}=\frac {2^n-1-i}{2^n-1}$$</span></p>
<p>Accordingly</p>
<p><span class="math-container">\begin{align}
P(E) &= \sum_{i=1}^nP(E|A_i)P(A_i) \\
&= \sum_{i=1}^{n-1}P(E|A_i)P(A_i)+P(E|A_n)P(A_n) \\ &= \sum_{i=1}^{n-1} \frac {2^n-1-i}{2^n-1}\times \frac{1}{2^i}+ \frac{2^n-1-n}{2^n-1}\times\frac{1}{2^{n-1}} \\
&= \sum_{i=1}^{n-1}\frac{1}{2^i}-\frac{1}{2^n-1}\sum_{i=1}^{n-1}\frac{i}{2^i}+ \frac{2^n-1-n}{2^n-1}\times\frac{1}{2^{n-1}} \\
&= 1- \frac{1}{2^{n-1}}
\end{align}</span></p>
<p>Note:</p>
<p>We can prove that <span class="math-container">$$-\frac{1}{2^n-1}\sum_{i=1}^{n-1}\frac{i}{2^i}+ \frac{2^n-1-n}{2^n-1}\times\frac{1}{2^{n-1}}=0$$</span>
or equivalently</p>
<p><span class="math-container">$$\sum_{i=1}^{n-1}\frac{i}{2^i}=\frac {2^n-1-n}{2^{n-1}}$$</span>
as follows:</p>
<p>Let <span class="math-container">$$S=\sum_{i=1}^{n-1}\frac{i}{2^i}$$</span> we have</p>
<p><span class="math-container">$$S = \frac{1}{2}+2\left(\frac{1}{2} \right)^2+3\left(\frac{1}{2} \right)^3+ \dots +(n-1)\left(\frac{1}{2} \right)^{n-1} $$</span></p>
<p><span class="math-container">$$\frac{S}{2}= \left(\frac{1}{2} \right)^2+2\left(\frac{1}{2} \right)^3+ \dots +(n-2)\left(\frac{1}{2} \right)^{n-1}+ (n-1)\left(\frac{1}{2} \right)^{n} $$</span></p>
<p>Hence
<span class="math-container">$$S-\frac{S}{2}=\frac{1}{2}+\left(\frac{1}{2} \right)^2+ \dots +\left(\frac{1}{2} \right)^{n-1} -
(n-1)\left(\frac{1}{2} \right)^n$$</span></p>
<p><span class="math-container">$$\frac{S}{2}=1-\left(\frac{1}{2} \right)^{n-1}-(n-1)\left(\frac{1}{2} \right)^n$$</span></p>
<p><span class="math-container">$$S=\frac {2^n-1-n}{2^{n-1}}$$</span></p>
|
728,503 | <p>\begin{align*}
f(x) = \left\{\begin{array}{ll}
0 & \text{ if } x=0\\
x^\alpha \sin(x^{-\beta}) & \text{ otherwise }
\end{array}\right.
\end{align*}</p>
<p>Determine the values of $\alpha$ and $\beta$ for which this function is differentiable at $x=0$.</p>
<p>I found the derivative, but I don't know what to do after...</p>
| Cameron Williams | 22,551 | <p>Think of it like this: </p>
<p>$$\int_{-\infty} ^{\infty} \delta(x-a) f(x) dx = f(a). $$</p>
<p>(Assuming $f$ is sufficiently nice.) Here your $f$ is given by $(e^{-s})^t\cos(t)H(t)$, where $H$ is the Heaviside step function. Does this help? </p>
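<p>The sifting property can be illustrated numerically by replacing <span class="math-container">$\delta$</span> with a narrow Gaussian (a toy sketch; the width, window and test point are arbitrary):</p>

```python
import math

def delta_approx(x, sigma=1e-3):
    # Narrow Gaussian of unit mass, approximating delta(x)
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

a = 0.7
f = math.cos

# Riemann sum of f(x) * delta_sigma(x - a) over a window around a
h = 1e-5
steps = int(0.02 / h) + 1
integral = sum(f(a - 0.01 + i * h) * delta_approx(-0.01 + i * h)
               for i in range(steps)) * h
```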
|
2,369,133 | <p>I just found out that
if you want to get 1 with the fraction: $$\frac{5}{2}$$
then you multiply it by: $$ \frac{2}{5} $$
Does anyone have a good way to think about this? </p>
| CopyPasteIt | 432,081 | <p>Even before defining the rational numbers, one usually learns the 'prime factor cancellation' game, or how to divide without really trying.</p>
<p>For example, if you want to apply Euclid's algorithm to $28$ and $16$, you can put $28$ 'on top' and $16$ on the bottom, writing<br>
$\frac{28}{16} = \frac{2^2\,7}{2^4} = \frac{7}{2^2}$<br>
and then you can say that $16$ 'goes into' $28$ one and three-quarters times.</p>
<p>So, we have something that we can call the 'numerator/denominator' game. When you create the rational numbers, you hope that it would be helpful to keep this fractional notation. And indeed, it is very useful. If $n$ is a nonzero number, the multiplicative inverse $n^{-1}$ can be expressed as $\frac{1}{n}$ and you can continue playing the 'numerator/denominator' game in new ways.</p>
<p>The multiplication of fractional expressions is a blast (you can stick the prime factorizations together), but to add them you need a common denominator (no big deal).</p>
<p>For your question, just remember how much fun it is to cancel common terms in the numerator and denominator:<br></p>
<p>$\frac{5}{2}*\frac{2}{5} = \frac{5\;2}{2 \;5} = \frac{2 \; 5}{2 \; 5} \text { (cancel numerator factors with denominator factors) } = 1$</p>
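<p>Python's <code>fractions</code> module plays exactly the same cancellation game with exact arithmetic (a trivial check):</p>

```python
from fractions import Fraction

product = Fraction(5, 2) * Fraction(2, 5)  # the factors cancel completely
reciprocal = 1 / Fraction(5, 2)            # the multiplicative inverse
```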
|
2,922,494 | <p>I came across the following problem today.</p>
<blockquote>
<p>Flip four coins. For every head, you get <span class="math-container">$\$1$</span>. You may reflip one coin after the four flips. Calculate the expected returns.</p>
</blockquote>
<p>I know that the expected value without the extra flip is <span class="math-container">$\$2$</span>. However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds <span class="math-container">$\$\frac{1}{2}$</span> to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is <span class="math-container">$\$\frac{79}{32}$</span> and I have no idea where this comes from.</p>
| joriki | 6,622 | <p>Your temptation is right and your gut is wrong. You do get an extra $\frac12$ if you got tails at least once. The probability that you don't have a tail to reflip is $\frac1{16}$, so you get an extra $\frac12\left(1-\frac1{16}\right)=\frac{15}{32}$. This added to the base expectation of $2 = \frac{64}{32}$ gives $\frac{79}{32}$.</p>
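<p>The exact value can also be confirmed by brute-force enumeration of all <span class="math-container">$2^4$</span> initial outcomes (a sketch using exact rationals; the strategy is to reflip one tail whenever a tail exists):</p>

```python
from fractions import Fraction
from itertools import product

total = Fraction(0)
for flips in product((0, 1), repeat=4):  # 1 = head, 0 = tail
    heads = sum(flips)
    value = Fraction(heads)
    if heads < 4:                        # a tail exists: reflip it
        value += Fraction(1, 2)          # expected gain from the reflip
    total += value
expected = total / 16
```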
|
2,922,494 | <p>I came across the following problem today.</p>
<blockquote>
<p>Flip four coins. For every head, you get <span class="math-container">$\$1$</span>. You may reflip one coin after the four flips. Calculate the expected returns.</p>
</blockquote>
<p>I know that the expected value without the extra flip is <span class="math-container">$\$2$</span>. However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds <span class="math-container">$\$\frac{1}{2}$</span> to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is <span class="math-container">$\$\frac{79}{32}$</span> and I have no idea where this comes from.</p>
| AlanDouglas | 595,237 | <p>Your gut is wrong, as pointed out already.
The possible outcomes from the initial flip are:</p>
<p>"4 heads" x 1</p>
<p>"3 heads" x 4</p>
<p>"2 heads" x 6</p>
<p>"1 heads" x 4</p>
<p>"0 heads" x 1</p>
<p>This gives an expected return of (4 + 12 + 12 + 4 + 0)/16 = 2</p>
<p>If you add 0.5 to each case except 4 heads, you get (4 + 14 + 15 + 6 + 0.5)/16 = 79/32</p>
|
3,879,009 | <p>How to prove this lemma?</p>
<blockquote>
<p><span class="math-container">$\min(x,y,z) \leq ax+by+cz \leq \max(x,y,z)$</span>, with <span class="math-container">$a+b+c = 1$</span> for any real numbers <span class="math-container">$x,y,z$</span> and <span class="math-container">$a,b,c$</span> positive.</p>
</blockquote>
| Farouk Deutsch | 455,437 | <p><span class="math-container">$a \min(x,y,z)+b \min(x,y,z)+c \min(x,y,z) \leq ax+by+cz \leq a \max(x,y,z)+b \max(x,y,z)+c \max(x,y,z)$</span></p>
<p>The two inequalities hold because <span class="math-container">$a,b,c>0$</span>, and since <span class="math-container">$a+b+c=1$</span> the left-hand side equals <span class="math-container">$\min(x,y,z)$</span> and the right-hand side equals <span class="math-container">$\max(x,y,z)$</span>.</p>
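<p>A randomized spot-check of the lemma (illustrative only; the weights are drawn from an arbitrary positive distribution and normalized so that <span class="math-container">$a+b+c=1$</span>):</p>

```python
import random

random.seed(1)
ok = True
for _ in range(1000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    raw = [random.uniform(0.01, 1) for _ in range(3)]
    a, b, c = (w / sum(raw) for w in raw)
    combo = a * x + b * y + c * z
    ok = ok and (min(x, y, z) - 1e-12 <= combo <= max(x, y, z) + 1e-12)
```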
|
732,132 | <blockquote>
<p>Suppose $P(x)$ is a polynomial of degree $n \geq 1$ such that $\int_{0}^{1}x^{k}P(x)\,dx = 0$ for $k = 1, 2, \ldots, n$. Show that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = (n + 1)^{2}\left(\int_{0}^{1}P(x)\,dx\right)^{2}$$</p>
</blockquote>
<p>If we assume that $P(x) = a_{0}x^{n} + \cdots + a_{n - 1}x + a_{n}$ then we can easily see that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = a_{n}\int_{0}^{1}P(x)\,dx$$ and therefore to solve the given problem we need to show that $$\int_{0}^{1}P(x)\,dx = \frac{a_{n}}{(n + 1)^{2}}$$ Direct integration of the polynomial gives the expression $$\frac{a_{0}}{n + 1} + \frac{a_{1}}{n} + \cdots + \frac{a_{n - 1}}{2} + a_{n}$$ and simplifying this to $a_{n}/(n + 1)^{2}$ does not seem possible. I think there is some nice "integration by parts" trick which will give away the solution, but I am not able to think of it.</p>
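<p>For what it's worth, the identity can be verified with exact arithmetic for small $n$ (a sketch; the sample polynomials below were found by hand to satisfy the moment conditions, and are not part of the problem statement):</p>

```python
from fractions import Fraction as F

def integrate01(coeffs):
    # coeffs[k] is the coefficient of x^k; int_0^1 x^k dx = 1/(k+1)
    return sum(c * F(1, k + 1) for k, c in enumerate(coeffs))

def shift(coeffs, j):
    # Multiply the polynomial by x^j
    return [F(0)] * j + list(coeffs)

def square(coeffs):
    out = [F(0)] * (2 * len(coeffs) - 1)
    for i, ci in enumerate(coeffs):
        for j, cj in enumerate(coeffs):
            out[i + j] += ci * cj
    return out

# Hand-found examples with int_0^1 x^k P(x) dx = 0 for k = 1..n
examples = {1: [F(-2), F(3)],           # P(x) = 3x - 2
            2: [F(3), F(-12), F(10)]}   # P(x) = 10x^2 - 12x + 3

checks = {}
for n, p in examples.items():
    moments = [integrate01(shift(p, k)) for k in range(1, n + 1)]
    lhs = integrate01(square(p))
    rhs = (n + 1) ** 2 * integrate01(p) ** 2
    checks[n] = (all(m == 0 for m in moments), lhs == rhs)
```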
| W-t-P | 181,098 | <p>Here is yet another solution; it is a little long, but is in fact very simple and non-technical.</p>
<p><strong>Outline of the proof</strong></p>
<p>The crucial observation is that the polynomial <span class="math-container">$Q(x):=((1-x)P(1-x))'$</span> also has the property in question: integrating by parts, changing the variable, and expanding the binomial, for any <span class="math-container">$1\le k\le n$</span> we get
<span class="math-container">$$ \int_0^1 x^kQ(x)\,dx = -k\int_0^1 (1-x)P(1-x)x^{k-1}\,dx = -k\int_0^1 P(y) (1-y)^{k-1} y\,dy = 0. $$</span>
We will see that this forces <span class="math-container">$Q(x)=cP(x)$</span> with an appropriate coefficient <span class="math-container">$c$</span>. This is a very strong relation; the result will follow by integrating it against <span class="math-container">$x^{n+1}$</span>.</p>
<p><strong>Detailed argument</strong></p>
<p>With <span class="math-container">$Q(x)$</span> defined above, observing that
<span class="math-container">$$ \int_0^1 Q(x)\,dx = (1-x)P(1-x)\Big\vert_0^1 = -P(1), $$</span>
and letting
<span class="math-container">$$ I:=\int_0^1 P(x)\,dx, $$</span>
we conclude that
<span class="math-container">$$ \int_0^1 (IQ(x)+P(1)P(x))x^k\,dx = 0,\qquad 0\le k\le n. $$</span>
As a result, for any polynomial <span class="math-container">$T$</span> of degree at most <span class="math-container">$n$</span>, we have
<span class="math-container">$$ \int_0^1 (IQ(x)+P(1)P(x))T(x)\,dx = 0. $$</span>
Applying this with <span class="math-container">$T(x)=IQ(x)+P(1)P(x)$</span>, we conclude that
<span class="math-container">$$ \int_0^1 (IQ(x)+P(1)P(x))^2\,dx = 0. $$</span>
Therefore, <span class="math-container">$IQ(x)+P(1)P(x)$</span> is the zero polynomial:
<span class="math-container">$$ IQ(x) = -P(1)P(x). $$</span>
Recalling the definition of <span class="math-container">$Q(x)$</span> and switching from <span class="math-container">$x$</span> to <span class="math-container">$1-x$</span>, we get
<span class="math-container">$$ I(xP(x))' = P(1)P(1-x). \tag{1} $$</span>
Expanding, <span class="math-container">$I(xP'(x)+P(x))=P(1)P(1-x)$</span>, and substituting <span class="math-container">$x=0$</span> gives
<span class="math-container">$$ IP(0)=(P(1))^2. \tag{2} $$</span>
Write
<span class="math-container">$$ J := \int_0^1 x^{n+1} P(x)\, dx. $$</span>
Notice that <span class="math-container">$J\ne 0$</span>, as otherwise we would have
<span class="math-container">$$ \int_0^1 x(P(x))^2\,dx = 0. $$</span></p>
<p>To complete the proof, we integrate (1) against <span class="math-container">$x^{n+1}$</span>. In the LHS we get
<span class="math-container">$$ I\int_0^1 (xP(x))'x^{n+1}\,dx = I x^{n+2} P(x)\Big\vert_0^1 - (n+1)I \int_0^1 x^{n+1}P(x)\,dx = IP(1)-(n+1)IJ, $$</span>
in the RHS
<span class="math-container">$$ P(1) \int_0^1 x^{n+1}P(1-x)\,dx = P(1) \int_0^1 (1-x)^{n+1}P(x)\,dx
= P(1) (I+(-1)^{n+1}J) $$</span>
(for the last step, we expand the binomial and integrate termwise, observing that all intermediate terms vanish). Hence,
<span class="math-container">$$ -(n+1)IJ = (-1)^{n+1} P(1)J; $$</span>
therefore,
<span class="math-container">$$ P(1) = (-1)^n(n+1)I. $$</span>
Squaring and using (2),
<span class="math-container">$$ IP(0) = (n+1)^2 I^2, $$</span>
and the result follows readily.</p>
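<p>The identity can be spot-checked with exact rational arithmetic (an illustrative sketch; the two example polynomials below were solved by hand from the constraints <span class="math-container">$\int_0^1 x^kP(x)\,dx=0$</span>, <span class="math-container">$k=1,\dots,n$</span>):</p>

```python
from fractions import Fraction

def int_xk_P(k, coeffs):
    # Exact value of the integral of x^k * P(x) over [0, 1],
    # where P(x) = sum_j coeffs[j] * x^j (ascending powers).
    return sum(a / Fraction(k + j + 1) for j, a in enumerate(coeffs))

def int_P_squared(coeffs):
    # Exact value of the integral of P(x)^2 over [0, 1].
    return sum(ai * aj / Fraction(i + j + 1)
               for i, ai in enumerate(coeffs) for j, aj in enumerate(coeffs))

examples = {
    1: [Fraction(-2), Fraction(3)],                      # P(x) = 3x - 2
    2: [Fraction(3, 10), Fraction(-6, 5), Fraction(1)],  # P(x) = x^2 - (6/5)x + 3/10
}
for n, coeffs in examples.items():
    for k in range(1, n + 1):
        assert int_xk_P(k, coeffs) == 0                  # hypothesis holds
    lhs = int_P_squared(coeffs)
    rhs = (n + 1) ** 2 * int_xk_P(0, coeffs) ** 2
    assert lhs == rhs
print("identity checked for n = 1, 2")
```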
|
261,605 | <p>Applying the <a href="https://en.wikipedia.org/wiki/M%C3%BCntz%E2%80%93Sz%C3%A1sz_theorem" rel="nofollow noreferrer">Müntz–Szász theorem</a> on $[0,1]$ repeatedly, we can represent
$$
x= \sum_{n\geq 2} c_n x^n
$$
as a uniformly convergent series (<strong>edit:</strong> only over some subsequence, see edits below) on $[0,1]$ of higher powers $x^n$ for $n\geq 2$. What can one say about the coefficients? Is there an explicit choice of $c_n$?</p>
<p><strong>Edit:</strong> Comments below suggest that this is not possible. What is wrong with the following argument? Take $\epsilon>0$ and approximate $x$ by a finite combination of higher powers and a constant uniformly with an error $\epsilon/2.$ Plugging $x=0$ we see that the constant is smaller than $\epsilon/2$ so dropping it we get an approximation up to $\epsilon$ by a finite sum $\sum_{n=2}^{N_1} c_n x^n.$ Next, consider $x-\sum_{n=2}^{N_1} c_n x^n$ and approximate it by a linear combination $\sum_{n=N_1+1}^{N_2} c_n x^n$ up to an error $\epsilon/2.$ This gives us an $\epsilon/2$-approximation $\sum_{n=1}^{N_2}c_n x^n.$ Continue this construction repeatedly.</p>
<p><strong>Edit II:</strong> Theorem holds for $a=0$ if we include constants, but Robert Israel's comment below contained the main point: the series only converges over some subsequence $(N_k)_{k\geq 1}$ as in the above construction. Let me rephrase the question accordingly:</p>
<p>Is there anything interesting one can say about $c_n$? Can one choose the subsequence and $c_n$ in a way that $(c_n)\in\ell^p$ for some $p$, or uniformly bounded?</p>
| Robert Israel | 13,650 | <p>It can't be in $\ell^p$ or bounded, in fact you can't have $|c_n| = O(t^{-n})$ for any $t > a$ where the subsequence converges on $[a,1]$. This is because if $|c_n| = O(t^{-n})$, $\sum_n c_n z^n$ is analytic in $|z|<t$, and by uniqueness...</p>
|
2,058,560 | <p>Can anyone please simplify this boolean expression? My answer always reduces to a single variable, i.e. $x$, but my instructor reduced it to three literals.
$$x'y'z+x'yz'+xy'z'+xyz$$ </p>
| DanielV | 97,045 | <p>$$x'y'z+x'yz'+xy'z'+xyz = x \text{ xor } y \text{ xor } z$$</p>
<p>There's an even number of negations in each term.</p>
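<p>A truth-table check of the equality (an illustrative sketch):</p>

```python
from itertools import product

# Verify that x'y'z + x'yz' + xy'z' + xyz equals x XOR y XOR z
# on all 8 rows of the truth table.
for x, y, z in product([0, 1], repeat=3):
    nx, ny, nz = 1 - x, 1 - y, 1 - z
    sop = (nx & ny & z) | (nx & y & nz) | (x & ny & nz) | (x & y & z)
    assert sop == (x ^ y ^ z)
print("equal on all 8 rows")
```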
|
100,957 | <p>I have this picture of small particles in a polymer film. I want to count how many particles are in the figure, so that I can get a rough estimate of the particle density. But the image quality is poor, and I had a hard time doing it.</p>
<p><a href="https://i.stack.imgur.com/ez50R.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ez50R.jpg" alt="particle in film - grayscale"></a>, <a href="https://i.stack.imgur.com/0OJd7.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/0OJd7.jpg" alt="particle in film - color"></a></p>
<p>I have tried several ways to do it, but failed. Below is the code.
The first method I tried is:</p>
<pre><code>SetDirectory["C:\\Users\\mayao\\documents"]
image = Import["Picture3.jpg"];
imag2 = Binarize[image, {0.0, 0.8}];
cells = SelectComponents[DeleteBorderComponents[imag2], "Count", -400];
circles = ComponentMeasurements[ImageMultiply[image, cells], {"Centroid", "EquivalentDiskRadius"}][[All, 2]];
Show[image, Graphics[{Red, Thick, Circle @@ # & /@ circles}]]
</code></pre>
<p>Here is what I got:
<a href="https://i.stack.imgur.com/f7u8C.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/f7u8C.jpg" alt="enter image description here"></a></p>
<p>So it does not count all the particles. Plus, it sometimes takes several particles as one.</p>
<p>I read another method from a thread here, the code is:</p>
<pre><code>obl[transit_Image] := (SelectComponents[
MorphologicalComponents[
DeleteSmallComponents@
ChanVeseBinarize[#, "TargetColor" -> Black],
Method -> "ConvexHull"], {"Count", "SemiAxes"},
Abs[Times @@ #2 Pi - #1] < #1/100 &]) &@
transit; GraphicsGrid[{#, obl@# // Colorize,
ImageMultiply[#,
Image@Unitize@
obl@#]} & /@ (Import /@ ("C:\\Users\\mayao\\documents\\" <> # & /@
      {"Picture1.jpg", "Picture2.jpg", "Picture3.jpg",
"Picture1.jpg"}))]
</code></pre>
<p>But it does not recognize the single particles:</p>
<p><a href="https://i.stack.imgur.com/zav3Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zav3Y.png" alt="enter image description here"></a></p>
<p>Is there any other method to do this task?
Thanks a lot for any suggestions.</p>
| bill s | 1,783 | <p>The local binarization can help with the uneven lighting. <code>Dilation</code> helps disconnect some particles that remain connected, and <code>DeleteSmallComponents</code> removes small portions caused by noise.</p>
<pre><code>img = Import["http://i.stack.imgur.com/0OJd7.jpg"];
comps = MorphologicalComponents[DeleteSmallComponents[
ColorNegate[Dilation[LocalAdaptiveBinarize[img, 10], 1]]]];
comps // Colorize
Max[comps]
984
</code></pre>
<p><a href="https://i.stack.imgur.com/koh4A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/koh4A.png" alt="enter image description here"></a></p>
|
3,954,010 | <p>The case <span class="math-container">$n=3$</span> is from <a href="https://usamts.org/Tests/Problems_31_1.pdf" rel="nofollow noreferrer">here</a>. It's straightforward to prove it's true:</p>
<p>First we notice that if any two of <span class="math-container">$x_1, x_2, x_3$</span> are equal then all must be equal.</p>
<p>Denote <span class="math-container">$a=x_1^{x_2}=x_2^{x_3}=x_3^{x_1}$</span> then
<span class="math-container">$$x_1 = a^{1/x_2}, x_2=a^{1/x_3}, x_3=a^{1/x_1}$$</span>
WLOG assume <span class="math-container">$x_1 \ge x_2, x_3$</span>. There are two cases:</p>
<ul>
<li><p><span class="math-container">$x_1 \ge x_2 \ge x_3 \implies \frac{1}{x_2} \ge \frac{1}{x_3} \ge \frac{1}{x_1} \implies x_1 \ge x_3 \ge x_2 \implies x_2=x_3 \implies x_1=x_2=x_3$</span>.</p>
</li>
<li><p><span class="math-container">$x_1 \ge x_3 \ge x_2 \implies \frac{1}{x_2} \ge \frac{1}{x_1} \ge \frac{1}{x_3} \implies x_3 \ge x_1 \ge x_2 \implies x_1=x_3 \implies x_1=x_2=x_3$</span>.</p>
</li>
</ul>
<hr />
<p>If <span class="math-container">$n$</span> is even, then <span class="math-container">$x_{2i-1} = 2, x_{2i}=4$</span> is a counterexample.</p>
<p>When <span class="math-container">$n=5$</span> the above method will need to examine <span class="math-container">$4!=24$</span> cases. Basically we map <span class="math-container">$x_i$</span> to <span class="math-container">$x_{(i \mod 5) +1}$</span> and reverse the order. In many of cases we can deduce <span class="math-container">$4$</span> or all <span class="math-container">$5$</span> of the <span class="math-container">$x_i$</span>'s are equal. For example, if <span class="math-container">$$x_1 \ge x_3 \ge x_2 \ge x_5 \ge x_4 \tag 1$$</span> then
<span class="math-container">$$
x_5 \ge x_1 \ge x_3 \ge x_4 \ge x_2 \tag 2
$$</span>
Since the order of <span class="math-container">$x_2$</span> and <span class="math-container">$x_5$</span> reversed from <span class="math-container">$(1)$</span> to <span class="math-container">$(2)$</span>, they must be equal and so are everything in between them from both <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> hence all <span class="math-container">$x_i$</span>'s are equal.</p>
<p>There are other cases that are different.</p>
<p><strong>Example 2:</strong> <span class="math-container">$$x_1 \ge x_3 \ge x_4 \ge x_2 \ge x_5 \tag 3$$</span> then
<span class="math-container">$$x_1 \ge x_3 \ge x_5 \ge x_4 \ge x_2 \implies x_1 \ge x_3 \ge x_2=x_4=x_5 \tag 4$$</span></p>
<p><strong>Example 3:</strong> <span class="math-container">$$x_1 \ge x_3 \ge x_4 \ge x_5 \ge x_2 \tag 5$$</span> then
<span class="math-container">$$x_3 \ge x_1 \ge x_5 \ge x_4 \ge x_2 \implies x_1=x_3 \ge x_2=x_4=x_5 \tag 6$$</span></p>
<p>But they all lead to the conclusion that <span class="math-container">$x_1=x_2=x_3=x_4=x_5$</span>.</p>
<hr />
<p>Now my questions:</p>
<p><strong>Question #1:</strong> Is it always true if <span class="math-container">$n>1$</span> and <span class="math-container">$n$</span> is odd?</p>
<p><strong>Question #2:</strong> What if we allow <span class="math-container">$x_i>0$</span> instead of <span class="math-container">$x_i>1$</span>?</p>
| void_117 | 843,560 | <p>It can be proved to be false for <span class="math-container">$n = 2$</span>.</p>
<p>Suppose <span class="math-container">$x_1^{x_2} = x_2^{x_1}$</span>. Raising both sides to the power <span class="math-container">$\frac{1}{x_1 x_2}$</span> gives <span class="math-container">$x_1^{x_2/(x_1 x_2)} = x_2^{x_1/(x_1 x_2)}$</span>, that is, <span class="math-container">$x_1^{1/x_1} = x_2^{1/x_2}$</span>.</p>
<p>So we have to check whether the function <span class="math-container">$f(x) = x^{1/x}$</span> is one-to-one. It turns out it is not.</p>
<p>A differentiable function whose derivative changes sign has a local extremum and hence is not one-to-one, and <span class="math-container">$f(x) = x^{1/x}$</span> has a maximum at <span class="math-container">$x=e$</span>.</p>
<p>So this proves the existence of distinct values <span class="math-container">$x_1, x_2$</span> for <span class="math-container">$n=2$</span>.</p>
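<p>The classical witness pair here is <span class="math-container">$x_1=2,\ x_2=4$</span>, since <span class="math-container">$2^4=16=4^2$</span>; equivalently <span class="math-container">$f(2)=f(4)$</span>. A quick check:</p>

```python
# f(x) = x^(1/x) is not one-to-one: f(2) = f(4), equivalently 2^4 = 4^2,
# which gives distinct x1, x2 with x1^x2 = x2^x1 for the case n = 2.
assert 2 ** 4 == 4 ** 2 == 16

f = lambda x: x ** (1 / x)
assert abs(f(2) - f(4)) < 1e-12
print(f(2), f(4))   # both equal sqrt(2) = 1.41421...
```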
|
1,227,330 | <p>I have the following hypotheses: $\alpha_n \to 0, \beta_n \to 0, \alpha_n < 0 < \beta_n$. I need to show that the following two limits exist <strong>so that I may add them</strong>:</p>
<p>$$\lim \limits_{n \to \infty} \frac{-\alpha_n}{\beta_n -\alpha_n}, \lim \limits_{n \to \infty} \frac{\beta_n}{\beta_n - \alpha_n}$$</p>
<p>Well, I know that each limit is between $0$ and $1$. But I'm not sure how to show that these limits actually converge and don't fluctuate? (Or, it's also a possibility that I don't need the fact that the limits exist to add them, but I'm not sure if that's true)</p>
<p>This question is motivated by Exercise 5.19 in Baby Rudin.</p>
<hr>
<p>Edit: So either I'm doing something wrong by wanting to add the limits, or <strong>we don't need existence of the limits</strong> because I found a counter example: take $\beta_n = \frac{1}{n} \cos^2 n, \alpha_n = - \frac{1}{n} \sin^2 n$. Then $ \alpha_n < 0 < \beta_n$ (strict because $\pi$ is irrational), and the quotients do not converge (they are equal to $\sin^2 n$ and $\cos^2 n$, respectively).</p>
<p>Thus, here is my whole solution so someone can point out where I'm going wrong. I need to prove that for $D_n = \frac{f(\beta_n) - f(\alpha_n)}{\beta_n - \alpha_n}$, $f$ defined in $(-1,1)$, $f'$ exists at $0$, and $-1 < \alpha_n < 0 < \beta_n < 1$, $\alpha_n, \beta_n \to 0$, we have $\lim D_n = f'(0)$.</p>
<p>\begin{align}
\lim D_n &= \lim \frac{f(\beta_n) - f(0) + f(0) - f(\alpha_n)}{\beta_n - \alpha_n}\\
&= \lim \frac{f(\beta_n) - f(0)}{\beta_n - \alpha_n} + \lim \frac{f(0) - f(\alpha_n)}{\beta_n - \alpha_n}\\
&= \lim \frac{f(\beta_n) - f(0)}{\beta_n - 0} \lim \frac{\beta_n}{\beta_n - \alpha_n} + \lim \frac{f(0) - f(\alpha_n)}{0 - \alpha_n} \lim \frac{-\alpha_n}{\beta_n - \alpha_n}\\
&= \lim f'(0) \lim \frac{\beta_n - \alpha_n}{\beta_n - \alpha_n} = f'(0)
\end{align}</p>
| Chappers | 221,811 | <p>Your issue is in evaluating $f(0)$ as $(\sin{0})/0$: you haven't used the definition you have given, which has $f(0)=1$.</p>
<p>Then you have
$$ \lim_{h \to 0} \frac{\frac{\sin{h}}{h}-1}{h} = \lim_{h \to 0} \frac{\sin{h}-h}{h^2} = \lim_{h\to 0} \frac{\cos{h}-1}{2h} = \lim_{h \to 0} \frac{-\sin{h}}{2} = 0, $$
using L'Hôpital's rule twice.</p>
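<p>A numerical sanity check of this limit (an illustrative sketch; for small $h$ the quotient behaves like $-h/6$, so it tends to $0$):</p>

```python
from math import sin

# (sin h - h) / h^2 -> 0 as h -> 0; the Taylor expansion gives
# (sin h - h) / h^2 = -h/6 + O(h^3).
for h in (0.1, 0.01, 0.001):
    print(h, (sin(h) - h) / h ** 2)

assert abs((sin(0.001) - 0.001) / 0.001 ** 2) < 1e-3
```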
<hr>
<p>To avoid using L'Hôpital, the easiest way is to use the squeeze theorem: for $0<x<\pi/2$,
$$ \sin{x}<x<\tan{x}, \tag{1} $$
which can be shown by drawing triangles inside and tangent to the unit circle. Dividing by $\sin{x}$ gives
$$ 1<\frac{x}{\sin{x}}<\frac{1}{\cos{x}}, $$
which shows, taking $x$ to $0$, that the limit of $x/\sin{x}$ must be $1$. The difficult bit is now to show that the limit of $\frac{\sin{h}-h}{h^2}$ is actually zero.</p>
<p>The inequality (1) also shows that
$$ 0<1-\frac{\sin{h}}{h}<\frac{\tan{h}-\sin{h}}{h} = \frac{\sin{h}}{h}\frac{1-\cos{h}}{\cos{h}} $$
Using the identity $1-\cos{h}=2\sin^2{(h/2)}$, and dividing by $h$, we have
$$ 0<\frac{h-\sin{h}}{h^2} < \frac{\sin{h}}{h} \frac{2\sin{(h/2)}}{h} \frac{1}{\cos{h}} (2\sin{(h/2)}) $$
We know enough limits to compute the right-hand side: the first 3 factors all tend to $1$, and the last one tends to zero, so another sandwich theorem application gives
$$ \lim_{h \to 0} \frac{\sin{h}-h}{h^2} =0. $$</p>
<p>I'm quite surprised that this can actually be pushed this far.</p>
|
990,467 | <p>I am having some trouble understanding these two questions. Any help is appreciated. Scanned questions are included at the end.</p>
<p>6) We are given the function $ f(x) =\frac{1 - 2x} {2x^2 - 3x - 2} $</p>
<p>6 a) Find the equation of the vertical asymptotes. Explain how.</p>
<p>For the above question, how did they first get the equation $x = \frac{3 \pm \sqrt{25}}{4},$</p>
<p>and then get $x = 2$ and $x = -\frac{1}{2}$ out of it?</p>
<p>6 b) Find the equation of the horizontal asymptotes. Use a limit.</p>
<p>For this question I understand that when the degree of the numerator is less than the degree of the denominator it results in a horizontal asymptote. Thus here we get y = 0. Right? But I would still like to know if it is the same procedure they used in the answer sheet to get the answer 0/2.</p>
<p><img src="https://i.stack.imgur.com/hKhb4.jpg" alt="enter image description here"></p>
| Antony | 180,827 | <p>Graphs of $f(x)=|\sin x|$ and $f(x)=2^{\cos(x)}$. For the point $A$, $\cos x=0.56424...$
<img src="https://i.stack.imgur.com/OuwBt.jpg" alt="enter image description here"></p>
|
3,251,928 | <p><span class="math-container">$$\frac{1}{a(a-b)(a-c)} + \frac{1}{b(b-a)(b-c)} + \frac{1}{c(c-a)(c-b)} $$</span></p>
<p>I tried to get everything to the same denominator, and then simplify numerators first but it is very complicated and long if I just use brute force, to multiply all the expressions given from the previous unification of denominator.</p>
| Michael Rozenberg | 190,319 | <p><span class="math-container">$$\sum_{cyc}\frac{1}{a(a-b)(a-c)}=\sum_{cyc}\frac{bc(c-b)}{abc(a-b)(b-c)(c-a)}=\frac{(a-b)(b-c)(c-a)}{abc(a-b)(b-c)(c-a)}=\frac{1}{abc}.$$</span>
I used the following well-known factorization identity:
<span class="math-container">$$\sum_{cyc}(a^2c-a^2b)=(a-b)(b-c)(c-a).$$</span></p>
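<p>The identity can be spot-checked exactly with rational arithmetic (an illustrative sketch):</p>

```python
from fractions import Fraction

# Exact check of 1/(a(a-b)(a-c)) + 1/(b(b-a)(b-c)) + 1/(c(c-a)(c-b)) = 1/(abc)
# for a few triples of distinct nonzero integers.
def lhs(a, b, c):
    return (Fraction(1, a * (a - b) * (a - c))
            + Fraction(1, b * (b - a) * (b - c))
            + Fraction(1, c * (c - a) * (c - b)))

for a, b, c in [(1, 2, 3), (2, 5, 7), (-1, 3, 4)]:
    assert lhs(a, b, c) == Fraction(1, a * b * c)
print("identity holds on all samples")
```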
|
2,385,949 | <p>While playing around with random binomial coefficients , I observed that the following <em>identity</em> seems to hold for all positive integers $n$:</p>
<p>$$ \sum_{k=0}^{2n} (-1)^k \binom{4n}{2k}\cdot\binom{2n}{k}^{-1}=\frac{1}{1-2n}.$$</p>
<p>However, I am unable to furnish a proof for it ( though this result is just a conjecture ).<br> Any ideas/suggestions/solutions are welcome.</p>
| Jack D'Aurizio | 44,121 | <p>$$\begin{eqnarray*} S(n) &=& (2n+1)\sum_{k=0}^{2n}(-1)^k\binom{4n}{2k}\int_{0}^{1}(1-x)^k x^{2n-k}\,dx\tag{Euler Beta}\\&=& (2n+1)\sum_{k=0}^{2n}(-1)^k\binom{4n}{2k}\int_{0}^{1}2z(1-z^2)^k z^{4n-2k}\,dz\tag{$x\mapsto z^2$}\\&=&(2n+1)\int_{0}^{\pi/2}\sin(\theta)\cos(\theta)\left[e^{4ni\theta}+e^{-4ni\theta}\right]\,d\theta\tag{$z\to\cos\theta$}\\&=&(2n+1)\int_{0}^{\pi/2}\sin(2\theta)\cos(4n\theta)\,d\theta\tag{De Moivre} \\ &=&-\frac{2n+1}{4n^2-1} = \color{blue}{-\frac{1}{2n-1}}.\tag{Simplify}
\end{eqnarray*}$$</p>
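<p>The closed form can be verified exactly for small $n$ (an illustrative sketch):</p>

```python
from fractions import Fraction
from math import comb

# Exact check of sum_{k=0}^{2n} (-1)^k * C(4n, 2k) / C(2n, k) = 1/(1 - 2n).
for n in range(1, 8):
    s = sum(Fraction((-1) ** k * comb(4 * n, 2 * k), comb(2 * n, k))
            for k in range(2 * n + 1))
    assert s == Fraction(1, 1 - 2 * n)
print("verified for n = 1..7")
```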
|
14,893 | <p>What is the best way to draw a path (red color, dashed) along the surface of the smoothHistogram3D. Assume the paths desired to be the X=0 and Y=0 Axes.</p>
<pre><code>data = {{0.798333, 1.21167}, {-0.415, 0.915}, {-0.675, 0.715}, {0.785,
0.675}, {-0.55, 0.645}, {-0.125, 0.57}, {0.15, 0.27}, {-0.3,
0.115}, {0.925, -0.685}, {0.748333, 1.27167}, {-0.465,
0.975}, {-0.725, 0.775}, {0.735, 0.735}, {-0.6, 0.705}, {-0.175,
0.63}, {0.1, 0.33}, {-0.35, 0.175}, {0.875, -0.625}, {0.628333,
1.18167}, {-0.585, 0.885}, {-0.845, 0.685}, {0.615, 0.645}, {-0.72,
0.615}, {-0.295, 0.54}, {-0.02, 0.24}, {-0.47,
0.085}, {0.755, -0.715}, {0.718333, 1.23167}, {-0.495,
0.935}, {-0.755, 0.735}, {0.705, 0.695}, {-0.63, 0.665}, {-0.205,
0.59}, {0.07, 0.29}, {-0.38, 0.135}, {0.845, -0.665}, {0.738333,
1.23167}, {-0.475, 0.935}, {-0.735, 0.735}, {0.725, 0.695}, {-0.61,
0.665}, {-0.185, 0.59}, {0.09, 0.29}, {-0.36,
0.135}, {0.865, -0.665}, {1.07833, 1.43167}, {-0.135,
1.135}, {-0.395, 0.935}, {1.065, 0.895}, {-0.27, 0.865}, {0.155,
0.79}, {0.43, 0.49}, {-0.02, 0.335}, {1.205, -0.465}};
SmoothHistogram3D[data]
</code></pre>
<p><img src="https://i.stack.imgur.com/A4pdw.png" alt="enter image description here"></p>
| Rojo | 109 | <p>Perhaps</p>
<pre><code>typingEffect[str_String, secPerLetter_: 0.3] :=
Dynamic[StringTake[str,
Clock[{0, StringLength@str, 1}, StringLength@str secPerLetter]]]
</code></pre>
<p>(thanks @SjoerdC.deVries)</p>
<p>So this gives an animation</p>
<pre><code>typingEffect["hello my dear"]
</code></pre>
<p>Other effects can be implemented with the same idea, such as</p>
<pre><code>typingEffect2[str_String] :=
Row@MapIndexed[
Style[#,
FontColor->Dynamic@ColorData["Rainbow"][
Mod[(First@#2 - 0.5)/StringLength@str + Clock[], 1]]] &,
Characters@str]
</code></pre>
|
14,893 | <p>What is the best way to draw a path (red color, dashed) along the surface of the smoothHistogram3D. Assume the paths desired to be the X=0 and Y=0 Axes.</p>
<pre><code>data = {{0.798333, 1.21167}, {-0.415, 0.915}, {-0.675, 0.715}, {0.785,
0.675}, {-0.55, 0.645}, {-0.125, 0.57}, {0.15, 0.27}, {-0.3,
0.115}, {0.925, -0.685}, {0.748333, 1.27167}, {-0.465,
0.975}, {-0.725, 0.775}, {0.735, 0.735}, {-0.6, 0.705}, {-0.175,
0.63}, {0.1, 0.33}, {-0.35, 0.175}, {0.875, -0.625}, {0.628333,
1.18167}, {-0.585, 0.885}, {-0.845, 0.685}, {0.615, 0.645}, {-0.72,
0.615}, {-0.295, 0.54}, {-0.02, 0.24}, {-0.47,
0.085}, {0.755, -0.715}, {0.718333, 1.23167}, {-0.495,
0.935}, {-0.755, 0.735}, {0.705, 0.695}, {-0.63, 0.665}, {-0.205,
0.59}, {0.07, 0.29}, {-0.38, 0.135}, {0.845, -0.665}, {0.738333,
1.23167}, {-0.475, 0.935}, {-0.735, 0.735}, {0.725, 0.695}, {-0.61,
0.665}, {-0.185, 0.59}, {0.09, 0.29}, {-0.36,
0.135}, {0.865, -0.665}, {1.07833, 1.43167}, {-0.135,
1.135}, {-0.395, 0.935}, {1.065, 0.895}, {-0.27, 0.865}, {0.155,
0.79}, {0.43, 0.49}, {-0.02, 0.335}, {1.205, -0.465}};
SmoothHistogram3D[data]
</code></pre>
<p><img src="https://i.stack.imgur.com/A4pdw.png" alt="enter image description here"></p>
| István Zachar | 89 | <p>Implementation with <code>ScheduledTask</code>. When the <code>Type</code> button is pushed, a scheduled task is started that increases the display length of the temporary string <code>temp</code>. It can be paused (<code>Stop</code>) and continued (<code>Type</code> again).</p>
<pre><code>text = StringTake[ExampleData[{"Text", "LoremIpsum"}], 120]
temp = ""; task = None; n = 0;
Row@{Button["Type", If[task =!= None, RemoveScheduledTask@task];
task = RunScheduledTask[
If[n > StringLength@text, RemoveScheduledTask@task; n = 0,
temp = StringTake[text, n++]], 0.01]],
Button["Stop", If[task =!= None, RemoveScheduledTask@task]]}
Dynamic@temp
</code></pre>
<p><img src="https://i.stack.imgur.com/yaNUy.gif" alt="enter image description here"></p>
|
26,986 | <p>Here are the differential equations that set up the 11 coupled oscillators.</p>
<pre><code>new = Join[
Table[x[i]''[t] == - x[i][t] +
0.1*(x[i + 1][t] - 2*x[i][t] + x[i - 1][t]), {i, 1,
9}], {x[0]''[t] == -x[0][t], x[10]''[t] == x[9][t], x[0][0] == 1,
x[0]'[0] == 1, x[1]'[0] == 0, x[1][0] == 0},
Table[x[i][0] == 0, {i, 2, 10}], Table[x[i]'[0] == 0, {i, 2, 10}]]
</code></pre>
<p>Here are the solutions.</p>
<pre><code>Solt = NDSolve[new, Table[x[i], {i, 0, 10}], {t, 25}]
</code></pre>
<p>Here are the individual plots.</p>
<pre><code>Table[Plot[Evaluate[x[i][t] /. Solt], {t, 0, 25},
PlotRange -> All], {i, 0, 10}]
</code></pre>
<p>I am trying to figure out how to make a graph so that along the x-axis are my i's from 0 to 10, and I can watch the wave move along each oscillator as time moves on. I keep getting errors that flood my notebook and don't stop unless I close the kernel.</p>
<p>This is what I have so far, and I'm not sure how to incorporate time into this.</p>
<pre><code>Plot[Evaluate[x[i][t] /. Solt], {i, 0, 10}]
</code></pre>
<p>EDIT Coupled in a circle</p>
<pre><code>Stew = Join[
Table[x[i]''[t] == - x[i][t] +
0.1*(x[i + 1][t] - 2*x[i][t] + x[i - 1][t]), {i, 1,
9}], {x[10]''[t] == - x[10][t] +
0.1*(x[0][t] - 2*x[10][t] + x[9][t]),
x[0]''[t] == - x[0][t] +
0.1*(x[1][t] - 2*x[0][t] + x[10][t])}, {x[0][0] == 0,
x[0]'[0] == 0, x[1][0] == 1, x[1]'[0] == 0.5},
Table[x[i][0] == 0, {i, 2, 10}], Table[x[i]'[0] == 0, {i, 2, 10}]];
</code></pre>
<p>The <code>NDSolve</code>:</p>
<pre><code>Loin = NDSolve[Stew, Table[x[i], {i, 0, 10}], {t, 6.28}]
</code></pre>
<p>The individual graphs</p>
<pre><code>Table[Plot[Evaluate[x[i][t] /. Loin], {t, 0, 6.28},
PlotRange -> All], {i, 0, 10}]
</code></pre>
<p>How would I go about putting the i=0 to 10 around in a circle?</p>
| Kuba | 5,478 | <p><strong>After edit</strong></p>
<p>I think oscillation directions should be parallel. </p>
<pre><code>g[t_] = Table[{Cos[i*2 Pi/11], Sin[i*2 Pi/11], x[i][t]} /. Loin[[1]], {i, 0, 10}];
Animate[
Show[
ListPointPlot3D[g[t], PlotRange -> 1.5, BoxRatios -> 1, Filling -> Axis,
PlotStyle -> Directive@AbsolutePointSize@7, Boxed -> False],
ParametricPlot3D[{Cos@t, Sin@t, 0}, {t, 0, 2 Pi}, PlotStyle -> {Dashed, Black}]
,
ImageSize -> 500, ViewVector -> {{Cos[t/15], Sin[t/15], 1} 11, {0, 0, 0}},
AxesOrigin -> {0, 0, 0}, Ticks -> None, Axes -> True, AxesStyle -> {Red, Green, Blue},
SphericalRegion -> True],
{t, 0, 50}]
</code></pre>
<p><img src="https://i.stack.imgur.com/yAZ9H.gif" alt="enter image description here"></p>
<p><strong>Before edit</strong></p>
<pre><code>f[t_] = Table[{i, x[i][t]} /. Solt[[1]], {i, 0, 10}];
Animate[
ListPlot[f[t], PlotRange -> {{0, 11}, {-1.5, 1.5}},
Joined -> True, PlotMarkers -> Automatic]
, {t, 0, 25}
]
</code></pre>
<p>Good to notice: in the <code>f[t]</code> definition, <code>:=</code> is intentionally replaced by <code>=</code>.</p>
<p><img src="https://i.stack.imgur.com/ALBo9.gif" alt="enter image description here"></p>
|
2,657,112 | <p>I am currently reading the paper "PRIMES is in P" and have come across some notation that I don't quite understand in the following sentence:</p>
<blockquote>
<p>Consider a prime $q$ that is a factor of $n$ and let $q^k || n$. Then...</p>
</blockquote>
<p>What does the notation $q^k || n$ mean here?</p>
<p>The full paper can be found <a href="https://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf" rel="noreferrer">here</a> and the notation described above is used in the proof on page 2</p>
| user | 505,767 | <p>It means "<strong>divides exactly</strong>": $q^k$ divides $n$, but $q^{k+1}$ does not divide $n$.</p>
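<p>For illustration (a hypothetical helper, not from the paper), the exponent $k$ with $q^k \| n$ is just the number of times $q$ divides $n$:</p>

```python
def exact_power(q, n):
    """Return the largest k with q**k dividing n, i.e. the k with q^k || n."""
    k = 0
    while n % q == 0:
        n //= q
        k += 1
    return k

# 24 = 2^3 * 3, so 2^3 || 24 and 3^1 || 24
assert exact_power(2, 24) == 3
assert exact_power(3, 24) == 1
print(exact_power(2, 24))   # 3
```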
|
1,755,107 | <p>Let $(X,d)$ be a metric space and let $A \subseteq X$. We define the distance from a point $x \in X$ to $A$ by $d(x,A)= \inf \{ d(x,a) : a \in A \} $.</p>
<p>What will be the value of $d(x, \emptyset )$? I am confused between $+ \infty $ and $- \infty$. Also, is it possible to find $ \min \{ d(x,a) : a \in \emptyset \} $? (I know $\emptyset$ is empty, just asking symbolically whether we can take $\min$ instead of $\inf$)</p>
<p>Thanks.</p>
| Wes | 320,529 | <p>Fix $x \in X$. If $\{d(x,a) \mid a \in \emptyset\} \neq \emptyset$, then there exists $a \in \emptyset$, a contradiction; so the set of distances is empty.</p>
<p>It is vacuously true that for each $M \in \mathbb{R}$ we have $a \geq M$ for all $a \in \emptyset$, i.e. every real number is a lower bound of the empty set, so it's reasonable to define $\inf \emptyset = +\infty$, and hence $d(x,\emptyset)=+\infty$.</p>
<p>The $\min$ function returns a least element contained in a set for which an order relation is defined, if such an element exists; since $\emptyset$ contains no elements, $\min \emptyset$ does not exist.</p>
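<p>The convention can be mirrored in code (an illustrative sketch: the <code>default</code> value plays the role of $\inf\emptyset=+\infty$, while a minimum of an empty set simply fails to exist):</p>

```python
import math

def dist(x, A):
    # Distance from a real x to a finite set A of reals; the default
    # mirrors the convention d(x, empty set) = inf(empty set) = +infinity.
    return min((abs(x - a) for a in A), default=math.inf)

assert dist(0.0, [3.0, -1.0]) == 1.0
assert dist(0.0, []) == math.inf

# By contrast, min of an empty collection is undefined:
try:
    min([])
except ValueError:
    print("min([]) is undefined")
```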
|
1,262,444 | <p>I study hard throughout the year and I am able to solve most problems in the text assigned to us and I am frequently the only one who can solve the hardest problems in the assignments or the problem sets. However, I can rarely do exceptionally well at math tests in my University and I frequently run out of time while sitting for a test. My grades have suffered and I am unable to understand why.[I end up with 65% or something like that in my exams while others score like 80%.]</p>
<p>Since, I am able to understand every theorem in the book and then solve most of the problems in the book(and solve the hardest problems in any test), it seems to me that I should also have good grades on most tests I take.
I will appreciate it if people give me some advice on how to do well in the tests.
Thank you.</p>
| Kushashwa Ravi Shrimali | 232,558 | <p>Sometimes you solve questions and practice a lot but don't get a matching output. Don't worry too much about the output unless your input, your effort, is genuinely in doubt. Give it your best and the results will show, whatever the conditions. </p>
<p>Mathematics is a subject, which requires loads of practice and concentration while solving questions. You might have been solving questions, but in a very comfortable environment. It is not essentially the same during an Exam or a test, when you may face any kind of disturbances in your surroundings. And that's when you lose concentration resulting in lack of ideas for solving the questions. </p>
<p>Revision, remembering tricks, formulas, are some of the key facts to get good marks in a test. But, what's more important is that you should get your concepts right. You should be aware of what you're doing and once you're content in that, then the result is not too far. Just keep doing your work and the result will automatically show up.</p>
<p>As you mentioned, you usually run out of time while sitting a test. I suggest you practice doing calculations mentally and then check them against the answers; only then can you speed yourself up accordingly. I'm assuming you're good with concepts and have the potential to solve questions or calculations mentally. </p>
<p>The way you attempt a test is also important.The idea for solving a particular question must strike at the first glance, else you'll end up consuming 5-10 minutes in just thinking about how to kick off the question. Try to concentrate yourself completely over the question, ignoring whatever is going in your environment and once you're successful in that, go through all the conditions you've read. The one which applies in the question will be your first step to think about. Keep your eyes to a farther distance, aim for the final step while thinking about the first step. It may be that you start solving the question but you end up getting the initial equation only and thus, wasting your precious 5-10 minutes. So, accuracy and precision with your ideas and work is a lot important. </p>
<p>Good Luck for your future! </p>
|
1,410,185 | <p>I observe that if we claim that $\sqrt[3]{-1}=-1$, we reach a contradiction.</p>
<p>Let's, indeed, suppose that $\sqrt[3]{-1}=-1$. Then, since the properties of powers are preserved, we have: $$\sqrt[3]{-1}=(-1)^{\frac{1}{3}}=(-1)^{\frac{2}{6}}=\sqrt[6]{(-1)^2}=\sqrt[6]{1}=1$$ which is a clear contradiction to what we assumed...</p>
| Daniel | 150,142 | <p>The notation $\sqrt[3]{-1}$ is a little bit ambiguous, since there are exactly three third roots of $-1$ over the complex numbers (in general, there are exactly $n$ $n$-th roots of any nonzero complex number $z$, so the notation $\sqrt[n]{z}$ is ambiguous too).</p>
<p>Since</p>
<p>$$(-1)^3 = -1$$</p>
<p>$-1$ is one of those roots, but there are two others, namely the roots of the equation</p>
<p>$$x^2-x+1=0$$</p>
<p>Which arise from the factorization</p>
<p>$$x^3+1=(x+1)(x^2-x+1)$$</p>
|
3,929,893 | <p>Let <span class="math-container">$G$</span> be a finite group and <span class="math-container">$H$</span>, <span class="math-container">$K$</span> normal subgroups of <span class="math-container">$G$</span>. Prove that <span class="math-container">$G=HK$</span> if and only if <span class="math-container">$G/(H\cap K)$</span> is isomorphic to <span class="math-container">$G/H\times G/K$</span>.</p>
<p>For the first part I think I have to take the function <span class="math-container">$\phi\colon G\rightarrow G/H\times G/K$</span> defined by <span class="math-container">$\phi(g)=(gH,gK)$</span>; then the kernel would be <span class="math-container">$H\cap K$</span>, but I am confused about how to show this function is onto and how to use the fact that <span class="math-container">$G=HK$</span>.</p>
<p>For the converse part: since <span class="math-container">$H$</span> and <span class="math-container">$K$</span> are normal, <span class="math-container">$HK$</span> is a subgroup of <span class="math-container">$G$</span>, so clearly <span class="math-container">$HK\subset G$</span>; but how to show that <span class="math-container">$G\subset HK$</span>?</p>
<p>It will be enough if I get a proper hint for both parts. Thank you.</p>
| Berci | 41,488 | <p>Working with <span class="math-container">$\phi$</span> and the first isomorphism theorem is indeed a proper way.</p>
<p><strong>Hints:</strong><br>
If <span class="math-container">$x,y\in KH=HK$</span> with <span class="math-container">$x=kh,\,y=h'k'$</span>, then <span class="math-container">$\phi(kh')=(xH,yK)$</span>.<br>
If <span class="math-container">$\phi$</span> is surjective, then <span class="math-container">$\forall x\in G\,\exists g\in G:\phi(g)=(xH,K)$</span>.</p>
|
578,968 | <p>I learned that when $H$ is a separable Hilbert space every seminorm is a norm, but that this is not correct when $H$ is not separable. Is this true? Please help me.</p>
| Rasmus | 367 | <p>What you say is false. For instance, on every Hilbert space we have the constant zero seminorm, which is not a norm unless the Hilbert space is zero-dimensional.</p>
|
578,968 | <p>I learned that when $H$ is a separable Hilbert space every seminorm is a norm, but that this is not correct when $H$ is not separable. Is this true? Please help me.</p>
| Martin Argerami | 22,857 | <p>It is not true that every seminorm on a Hilbert space is a norm. To see a very explicit example, let $H=\mathbb C^2$, and consider the seminorm
$$
\|(a,b)\|=|a|.
$$</p>
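<p>A quick numerical illustration (Python sketch): $p(a,b)=|a|$ satisfies absolute homogeneity and the triangle inequality, yet vanishes on the nonzero vector $(0,1)$, so it is a seminorm but not a norm.</p>

```python
import random

def p(v):
    """Seminorm on R^2: p(a, b) = |a|."""
    return abs(v[0])

random.seed(0)
vecs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]

# Absolute homogeneity: p(t*v) = |t| p(v).
homogeneous = all(abs(p((t * a, t * b)) - abs(t) * p((a, b))) < 1e-12
                  for (a, b) in vecs for t in (-2.0, 0.5, 3.0))
# Triangle inequality: p(v + w) <= p(v) + p(w).
triangle = all(p((a1 + a2, b1 + b2)) <= p((a1, b1)) + p((a2, b2)) + 1e-12
               for (a1, b1) in vecs for (a2, b2) in vecs)
```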
|
2,557,608 | <p>Let $X$ be integer valued with characteristic function $\phi$. How to show that </p>
<p>$P(X = k)= \frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{-ikt} \phi(t)\, dt$</p>
<p>$P(S_n =k) =\frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{-ikt} (\phi(t))^n\, dt $</p>
| user | 505,767 | <p>Applying <strong>sum to product rule</strong> on LHS and <strong>sine of the sum of 2 angles</strong> on RHS</p>
<p>$$2\sin(\alpha)+2\sin(\beta)=3\sin(\alpha+\beta)$$</p>
<p>$$4\sin \left(\frac{\alpha+\beta}{2}\right)\cos \left(\frac{\alpha-\beta}{2}\right)=6\sin \left(\frac{\alpha+\beta}{2}\right)\cos \left(\frac{\alpha+\beta}{2}\right)$$</p>
<p>$$2\cos \left(\frac{\alpha}{2}-\frac{\beta}{2}\right)=3\cos \left(\frac{\alpha}{2}+\frac{\beta}{2}\right)$$</p>
<p>$$2\cos\left(\frac{\alpha}{2}\right)\cos \left(\frac{\beta}{2}\right)+2\sin \left(\frac{\alpha}{2}\right)\sin \left(\frac{\beta}{2}\right)=3\cos\left(\frac{\alpha}{2}\right)\cos \left(\frac{\beta}{2}\right)-3\sin \left(\frac{\alpha}{2}\right)\sin \left(\frac{\beta}{2}\right)$$</p>
<p>$$5\sin \left(\frac{\alpha}{2}\right)\sin \left(\frac{\beta}{2}\right)=\cos\left(\frac{\alpha}{2}\right)\cos \left(\frac{\beta}{2}\right)$$</p>
<p>$$\tan\left(\frac{\alpha}{2}\right)\tan\left(\frac{\beta}{2}\right)=\frac{1}{5} \quad \square$$</p>
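<p>A numerical spot check (Python sketch): sample <span class="math-container">$\tan(\alpha/2)$</span>, force <span class="math-container">$\tan(\alpha/2)\tan(\beta/2)=1/5$</span>, and confirm the original identity holds. (The derivation above divides by <span class="math-container">$\sin\frac{\alpha+\beta}{2}$</span> and <span class="math-container">$\cos\frac{\alpha}{2}\cos\frac{\beta}{2}$</span>, which are nonzero for the sampled angles.)</p>

```python
import math
import random

def identity_error(t1):
    """Given tan(a/2) = t1, force tan(b/2) = 1/(5*t1) and
    return |2 sin(a) + 2 sin(b) - 3 sin(a+b)|."""
    t2 = 1.0 / (5.0 * t1)
    a, b = 2 * math.atan(t1), 2 * math.atan(t2)
    return abs(2 * math.sin(a) + 2 * math.sin(b) - 3 * math.sin(a + b))

random.seed(1)
errs = [identity_error(random.uniform(0.05, 1.5)) for _ in range(50)]
```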
|
4,139,003 | <p>For this I used the method of making the determinant nonzero. First I create the array</p>
<p><span class="math-container">\begin{equation}
\begin{pmatrix}
a^2 & 0 & 1\\
0 & a & 2\\
1 & 0 & 1
\end{pmatrix}
\end{equation}</span></p>
<p>Then <span class="math-container">$\det(A)=a^3-a=a(a^2-1)$</span></p>
<p>Where we can see that for the determinant to be different from zero <span class="math-container">$ a $</span> cannot take the values of
<span class="math-container">$0,1,-1$</span>.</p>
<p>Therefore the values that <span class="math-container">$ a $</span> can take so that <span class="math-container">$ A $</span> is a base of <span class="math-container">$\mathbb{R} ^ 3 $</span> are <span class="math-container">$\mathbb{R}\backslash \{0,1, -1\}$</span>.</p>
<p>Is this correct?</p>
| Ritam_Dasgupta | 925,091 | <p>There could be a better approach. You have been given the lines <span class="math-container">$y=x$</span> and <span class="math-container">$y=2x$</span>. You should just consider coordinates of centre of circle to be <span class="math-container">$(h,k)$</span>, and then simply write the equation of perpendicular distance from a point on a line, for both given lines. This will give you two relations to solve, thus you get centre's coordinates. While solving be careful to note that centre must lie above <span class="math-container">$y=x$</span> but below <span class="math-container">$y=2x$</span>.</p>
|
4,139,003 | <p>For this I used the method of making the determinant nonzero. First I create the array</p>
<p><span class="math-container">\begin{equation}
\begin{pmatrix}
a^2 & 0 & 1\\
0 & a & 2\\
1 & 0 & 1
\end{pmatrix}
\end{equation}</span></p>
<p>Then <span class="math-container">$\det(A)=a^3-a=a(a^2-1)$</span></p>
<p>Where we can see that for the determinant to be different from zero <span class="math-container">$ a $</span> cannot take the values of
<span class="math-container">$0,1,-1$</span>.</p>
<p>Therefore the values that <span class="math-container">$ a $</span> can take so that <span class="math-container">$ A $</span> is a base of <span class="math-container">$\mathbb{R} ^ 3 $</span> are <span class="math-container">$\mathbb{R}\backslash \{0,1, -1\}$</span>.</p>
<p>Is this correct?</p>
| Math Lover | 801,574 | <p>For two given lines that are not parallel to each other, the angle bisector is given by</p>
<p><span class="math-container">$ \displaystyle \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}$</span></p>
<p>where the given lines are <span class="math-container">$Ax + By + C =0$</span> and <span class="math-container">$ax + by + c =0$</span></p>
<p>We have lines <span class="math-container">$y-2x = 0$</span> and <span class="math-container">$y-x = 0$</span> that intersect at the origin.</p>
<p><span class="math-container">$ \displaystyle \frac{y-x}{\sqrt{2}} = \pm \frac{y-2x}{\sqrt{5}}$</span></p>
<p>For internal bisector,</p>
<p><span class="math-container">$ \displaystyle \frac{y-x}{\sqrt{2}} = - \frac{y-2x}{\sqrt{5}}$</span></p>
<p><span class="math-container">$\implies \displaystyle y = \frac{2\sqrt2+\sqrt5}{\sqrt2+\sqrt5} x$</span></p>
<p><span class="math-container">$\displaystyle y = \frac{1+\sqrt{10}}{3} x$</span></p>
<p>As the center of the circle is on this line, its coordinates can be written as <span class="math-container">$(x_0, \frac{1+\sqrt{10}}{3} x_0)$</span>.</p>
<p>Now perpendicular distance to <span class="math-container">$y - x = 0$</span> from the center is <span class="math-container">$3$</span>.</p>
<p><span class="math-container">$ \displaystyle \frac{|\frac{1+\sqrt{10}}{3} x_0 - x_0|}{\sqrt2} = 3$</span></p>
<p>Solving <span class="math-container">$ \displaystyle x_0 = \frac{9\sqrt2}{\sqrt{10}-2} = 3 (\sqrt5+\sqrt2), y_0 = 3 (\sqrt5+2\sqrt2)$</span></p>
<p>And equation of circle is <span class="math-container">$(x-x_0)^2 + (y-y_0)^2 = 9$</span></p>
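<p>The proposed centre can be checked numerically (Python sketch): both perpendicular distances should equal <span class="math-container">$3$</span>, and the centre should lie above <span class="math-container">$y=x$</span> but below <span class="math-container">$y=2x$</span>.</p>

```python
import math

r5, r2 = math.sqrt(5), math.sqrt(2)
x0, y0 = 3 * (r5 + r2), 3 * (r5 + 2 * r2)   # proposed centre

# Perpendicular distances to y - x = 0 and y - 2x = 0.
d1 = abs(y0 - x0) / math.sqrt(2)
d2 = abs(y0 - 2 * x0) / math.sqrt(5)
```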
|
872,275 | <p>here's the question, how can I solve this:</p>
<p>$$\lim_{x \rightarrow \infty} x\sin (1/x) $$</p>
<p>Now, from textbooks I know it is possible to use the substitution $x=1/t$; then the expression is transformed into</p>
<p>$$\frac{\sin t}{t}$$</p>
<p>and, this is what I really can't understand, the textbook suggests finding the limit as $t\to0^+$ (which gives $1$ as the result).</p>
<p>OK, I can't figure out WHY finding that limit as $t$ approaches $0$ from the right gives me the answer to the original limit at infinity. I think I don't understand what the substitution means.</p>
<p>Better than an answer, I need an explanation.</p>
<p>(Sorry If I wrote something incorrectly, the english is not my original language)
Really thanks!!</p>
| Joe | 107,639 | <p>You can simply observe that
$$
\lim_{x\rightarrow+\infty}x\sin(1/x)=
\lim_{x\rightarrow+\infty}x\frac{1}{x}\frac{\sin(1/x)}{\frac{1}{x}}=
\lim_{x\rightarrow+\infty}\frac{\sin(1/x)}{1/x}
$$</p>
<p>Now it should be clear that $1/x$ tends to $0$ as $x$ approaches $+\infty$. Hence it is equivalent to write $1/x=t$ and let $t$ tend to $0$.
Thus you have
$$
\lim_{x\rightarrow+\infty}\frac{\sin(1/x)}{1/x}=
\lim_{t\rightarrow0}\frac{\sin t}{t}=1\;,
$$
as wanted.</p>
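<p>A quick numerical check (Python sketch) that $x\sin(1/x)\to 1$:</p>

```python
import math

xs = [10.0 ** k for k in range(1, 8)]
vals = [x * math.sin(1.0 / x) for x in xs]   # equals sin(t)/t with t = 1/x
```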
|
2,675,954 | <p>Fractals, when viewed as functions, are everywhere continuous and nowhere differentiable. Can this also be used as a definition for fractals?</p>
<p>i.e. Are all fractals everywhere continuous and nowhere differentiable? And also: Are all functions that are everywhere continuous and nowhere differentiable fractals?</p>
| SAHEB PAL | 309,736 | <p><strong>By elementary operations:</strong> add the second and third columns to the first; every row then sums to $xy+yz+zx-x^2-y^2-z^2$, so
$$\Delta=\begin{vmatrix}
yz-x^2&zx-y^2&xy-z^2\\
zx-y^2&xy-z^2&yz-x^2\\
xy-z^2&yz-x^2&zx-y^2
\end{vmatrix}=-(x^2+y^2+z^2-xy-yz-zx)\begin{vmatrix}
1&zx-y^2&xy-z^2\\
1&xy-z^2&yz-x^2\\
1&yz-x^2&zx-y^2
\end{vmatrix}.$$
Now apply $R_1\to R_1-R_3$ and $R_2\to R_2-R_3$, using identities such as $zx-y^2-(yz-x^2)=(x+y+z)(x-y)$:
$$\Delta=-(x^2+y^2+z^2-xy-yz-zx)\begin{vmatrix}
0&(x+y+z)(x-y)&(x+y+z)(y-z)\\
0&(x+y+z)(x-z)&(x+y+z)(y-x)\\
1&yz-x^2&zx-y^2
\end{vmatrix}.$$
Expanding along the first column and pulling $(x+y+z)$ out of each of the two rows,
$$\Delta=-(x^2+y^2+z^2-xy-yz-zx)(x+y+z)^2\begin{vmatrix}
x-y&y-z\\
x-z&y-x
\end{vmatrix}=(x^2+y^2+z^2-xy-yz-zx)(x+y+z)^2\big[(x-y)^2+(y-z)(x-z)\big].$$
Since $(x-y)^2+(y-z)(x-z)=x^2+y^2+z^2-xy-yz-zx=\frac{1}{2}\Big[(x-y)^2+(y-z)^2+(z-x)^2\Big]$, this gives
$$\Delta=\frac{1}{4}(x+y+z)^2\Big[(x-y)^2+(y-z)^2+(z-x)^2\Big]^2=(x^3+y^3+z^3-3xyz)^2.$$</p>
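<p>The identity can be spot-checked exactly over small integers (Python sketch); the closed form is written below as $(x^3+y^3+z^3-3xyz)^2$, which equals $\frac14(x+y+z)^2\big[(x-y)^2+(y-z)^2+(z-x)^2\big]^2$:</p>

```python
from itertools import product

def delta(x, y, z):
    a, b, c = y*z - x*x, z*x - y*y, x*y - z*z
    m = [[a, b, c], [b, c, a], [c, a, b]]        # the matrix in the answer
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def closed_form(x, y, z):
    # (x^3+y^3+z^3-3xyz)^2 = (1/4)(x+y+z)^2 [(x-y)^2+(y-z)^2+(z-x)^2]^2
    return (x**3 + y**3 + z**3 - 3*x*y*z) ** 2

ok = all(delta(x, y, z) == closed_form(x, y, z)
         for x, y, z in product(range(-3, 4), repeat=3))
```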
|
3,988,595 | <p>Suppose <span class="math-container">$A \subset \mathbb{R}^n$</span>. Define
<span class="math-container">$$D^m=\{x \in \mathbb{R}^{m+1} : ||x|| \leq 1 \}$$</span></p>
<p>Suppose <span class="math-container">$A$</span> and <span class="math-container">$D^m$</span> are homeomorphic , then is it necessary that <span class="math-container">$A$</span> is closed subset of <span class="math-container">$\mathbb{R}^n$</span></p>
<p>I thought that homeomorphism sends open sets to open sets, but it is not the case. Hence I cannot use that. I wonder if this statement is true. Can I get some hint?</p>
| TravorLZH | 748,964 | <p>Let <span class="math-container">$(a,b)$</span> denote the gcd of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, then</p>
<p><span class="math-container">$$
\varphi(n)\triangleq\sum_{k\le n,(k,n)=1}1=\sum_{k\le n}\left\lfloor1\over(k,n)\right\rfloor
$$</span></p>
<p>where <span class="math-container">$k$</span> runs from 1 to <span class="math-container">$n$</span>. By the property that</p>
<p><span class="math-container">$$
\sum_{d|n}\mu(d)=\left\lfloor\frac1n\right\rfloor
$$</span></p>
<p>we obtain</p>
<p><span class="math-container">$$
\begin{aligned}
\varphi(n)
&=\sum_{k\le n}\sum_{d|k,d|n}\mu(d)=\sum_{d|n}\mu(d)\sum_{k=jd\le n}1 \\
&=\sum_{d|n}\mu(d)\sum_{j\le n/d}1=\sum_{d|n}\mu(d)\cdot\frac nd \\
\end{aligned}
$$</span></p>
<p>Because the identity function <span class="math-container">$n\mapsto n$</span> and the Möbius function <span class="math-container">$\mu(n)$</span> are both multiplicative, their Dirichlet convolution <span class="math-container">$\varphi=\mu*\operatorname{Id}$</span> is multiplicative as well.</p>
<p>For prime power <span class="math-container">$p^n$</span> (<span class="math-container">$n\ge1$</span>), we have</p>
<p><span class="math-container">$$
\varphi(p^n)=\sum_{k=0}^n\mu(p^k)p^{n-k}=\mu(1)p^n+\mu(p)p^{n-1}=p^n-p^{n-1}
$$</span></p>
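<p>Both formulas are easy to check numerically (a Python sketch with naive helper functions, whose names are chosen just for illustration):</p>

```python
from math import gcd

def mobius(n):
    """Naive Möbius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor => mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def phi_mobius(n):
    """phi(n) = sum over d | n of mu(d) * n/d."""
    return sum(mobius(d) * (n // d) for d in range(1, n + 1) if n % d == 0)

def phi_direct(n):
    """phi(n) by counting integers coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
```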
|
3,988,595 | <p>Suppose <span class="math-container">$A \subset \mathbb{R}^n$</span>. Define
<span class="math-container">$$D^m=\{x \in \mathbb{R}^{m+1} : ||x|| \leq 1 \}$$</span></p>
<p>Suppose <span class="math-container">$A$</span> and <span class="math-container">$D^m$</span> are homeomorphic , then is it necessary that <span class="math-container">$A$</span> is closed subset of <span class="math-container">$\mathbb{R}^n$</span></p>
<p>I thought that homeomorphism sends open sets to open sets, but it is not the case. Hence I cannot use that. I wonder if this statement is true. Can I get some hint?</p>
| Quade | 790,417 | <p>Why (3) is true intuitively:</p>
<p>For each <span class="math-container">$m_{i}$</span>, <span class="math-container">$n_{j}$</span> respectively coprime to and less than <span class="math-container">$m$</span>, <span class="math-container">$n$</span>, we can build <strong>one</strong> and <strong>only one</strong> <span class="math-container">$x_{ij}$</span> less than <span class="math-container">$mn$</span> such that <span class="math-container">$x_{ij}$</span> has remainder <span class="math-container">$m_{i}$</span> modulo <span class="math-container">$m$</span> and <span class="math-container">$n_{j}$</span> modulo <span class="math-container">$n$</span> (this is essentially the Chinese remainder theorem). Such a number <span class="math-container">$x_{ij}$</span> is coprime to <span class="math-container">$m$</span> and to <span class="math-container">$n$</span>, hence coprime to <span class="math-container">$mn$</span>. The <span class="math-container">$x_{ij}$</span> are mutually distinct, as they produce different pairs of remainders modulo <span class="math-container">$m$</span> and <span class="math-container">$n$</span>, and their number is the number of pairs <span class="math-container">$(m_i, n_j)$</span>, namely <span class="math-container">$\phi(m)\phi(n)$</span>.</p>
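<p>The bijection behind this intuition can be verified directly (Python sketch; the moduli <span class="math-container">$m=8$</span>, <span class="math-container">$n=15$</span> are an arbitrary coprime choice):</p>

```python
from math import gcd

def units(n):
    """Integers in 1..n coprime to n (the reduced residues)."""
    return [k for k in range(1, n + 1) if gcd(k, n) == 1]

m, n = 8, 15                     # an arbitrary coprime pair
# Map each unit mod m*n to its pair of remainders (x mod m, x mod n).
pairs = {(x % m, x % n) for x in units(m * n)}
expected = {(a, b) for a in units(m) for b in units(n)}
```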
|
2,391,812 | <p>The question is to find the largest integer that divides all $p^4-1$, where $p$ is a prime greater than 5. Being asked this question, I just assume this number exists. Set $p = 7$; then $p^4-1=2400$. I don't have any background in number theory and am not sure what to do next. Thank you for your help! </p>
| Aaron | 9,863 | <p>Since $p^4-1$ is relatively prime to $p$, we know that the common divisor will not have any large prime factors. Indeed, from your calculation, it will have to be a divisor of $2400=2^5*3*5^2$. However, you could make an even better guess by trying out a few more primes and then taking the greatest common divisor. However, that will get you a proof.</p>
<p>A reasonable thing to try is to factor the polynomial $p^4-1=(p^2+1)(p^2-1)=(p^2+1)(p+1)(p-1)$. There are various things you can see from this. For example, since $p$ is odd, $p\pm 1$ are both even, as is $p^2+1$, giving you at least three factors of $2$. Actually, one of $p\pm 1$ has to be multiple of $4$ (being two consecutive even numbers), and therefore you know that $16$ will divide $p^4-1$ for all odd $p$ (even if $p$ isn't prime). But you should gather more data to figure out exactly <em>what</em> you want to prove.</p>
|
2,391,812 | <p>The question is to find the largest integer that divides all $p^4-1$, where $p$ is a prime greater than 5. Being asked this question, I just assume this number exists. Set $p = 7$; then $p^4-1=2400$. I don't have any background in number theory and am not sure what to do next. Thank you for your help! </p>
| DanielWainfleet | 254,665 | <p>Let $n$ be the largest integer that divides $p^4-1$ for all prime $p\geq 7.$ </p>
<p>We have $11^4-1=14640$ and $7^4-1=2400.$ The $gcd$ of $14640$ and $2400$ is $240.$ So $$n\leq 240.$$ If $p$ is odd then modulo $16$ we have $p^4\in \{(\pm 1)^4, (\pm 3)^4,(\pm 5)^4,(\pm 7)^4\}=\{1^2,9^2, 25^2, 49^2\}=\{1^2,9^2,9^2,1^2\}=$
$=\{1,81,81,1\}=\{1\}.$ </p>
<p>If $p$ is not divisible by $3$ then modulo $3$ we have $p^4\in \{(\pm 1)^4\}=\{1\}.$</p>
<p>If $p$ is not divisible by $5$ then modulo $5$ we have $p^4\in \{(\pm 1)^4,(\pm 2)^4\}=\{1,16\}=\{1\}.$</p>
<p>So for any integer $p$ that is not divisible by $2,3,$ or $5$ we have $p^4\equiv 1 \pmod {16}$ and $p^4 \equiv 1 \pmod 3$ and $p^4 \equiv 1 \pmod 5;$ and since $16,3,$ and $5$ are pairwise coprime, therefore $p^4\equiv 1 \pmod {16\cdot 3\cdot 5},$ that is, $p^4\equiv 1\pmod{240},$ so $$n\geq 240.$$ Together with $n\leq 240$ above, this gives $n=240.$</p>
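<p>A quick computational confirmation (Python sketch) that the gcd of $p^4-1$ over many primes $p\ge 7$ is exactly $240$:</p>

```python
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [p for p in range(7, 200) if is_prime(p)]
g = 0
for p in primes:
    g = gcd(g, p**4 - 1)          # gcd(0, x) == x starts the chain
```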
|
2,387,501 | <p>I'm struggling with the following integral:</p>
<p>$$
I = \int_{a}^{b}
{\frac{\mathrm{Erf}\left(\,{x/c}\,\right)}{\,\sqrt {\,{1 - {x^2}}\,}\,}\,\mathrm{d}x}
$$</p>
<p>Honestly, I do not know any approaches to solve it, except trying with Mathematica and searching for a possible solution in tables of integrals involving the Erf function, but all failed.</p>
<p>Can somebody give me a hint?</p>
<p>Thank you very much.</p>
<p>Best regards.</p>
| Claude Leibovici | 82,404 | <p>As Robert Israel answered, it does not seem that the antiderivative could be computed even using special functions.</p>
<p>However, considering $$I=\int_0^a\frac{\text{erf}\left(\frac{x}{c}\right)}{\sqrt{1-x^2}}\,dx$$ hoping that $a$ is not too large, you could expand the integrand as a truncated Taylor series and integrate termwise. Otherwise, numerical integration would be required.</p>
<p>You would get something like
$$\frac{\text{erf}\left(\frac{x}{c}\right)}{\sqrt{1-x^2}}=\frac{2 x}{\sqrt{\pi } c}+\frac{\left(3 c^2-2\right) x^3}{3 \sqrt{\pi }
c^3}+\frac{\left(45 c^4-20 c^2+12\right) x^5}{60 \sqrt{\pi }
c^5}+\frac{\left(525 c^6-210 c^4+84 c^2-40\right) x^7}{840 \sqrt{\pi }
c^7}+O\left(x^9\right)$$</p>
<p>Let us try using $a=\frac 12$ for various values of $c$
$$\left(
\begin{array}{ccc}
c & \text{exact} & \text{approximation} \\
1 & 0.1450370 & 0.14500970 \\
2 & 0.0747909 & 0.07477398 \\
3 & 0.0501538 & 0.05014195 \\
4 & 0.0376931 & 0.03768400 \\
5 & 0.0301833 & 0.03017601 \\
6 & 0.0251659 & 0.02515974 \\
7 & 0.0215775 & 0.02157225 \\
8 & 0.0188842 & 0.01887956 \\
9 & 0.0167883 & 0.01678417
\end{array}
\right)$$</p>
<p>Another solution would be to use, as Robert Israel answered, integration by parts
$$J=\int\frac{\text{erf}\left(\frac{x}{c}\right)}{\sqrt{1-x^2}}\,dx=\sin ^{-1}(x) \text{erf}\left(\frac{x}{c}\right)-\frac{2}{\sqrt{\pi } c}\int { e^{-\frac{x^2}{c^2}} \sin ^{-1}(x)}\,dx$$ $$K=\int { e^{-\frac{x^2}{c^2}} \sin ^{-1}(x)}\,dx=c\int e^{-t^2} \sin ^{-1}(c t)\,dt$$ Using the Taylor expansion
$$\sin ^{-1}(c t)=\sum^{\infty}_{n=0} \frac{(2n)!\,c^{2n+1}}{4^n (n!)^2 (2n+1)} t^{2n+1}\qquad \text{for}\qquad |ct|\leq 1$$ and use $$\int t^{2n+1}e^{-t^2}\,dt=-\frac{1}{2} \Gamma \left(n+1,t^2\right)$$ </p>
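<p>The tabulated values can be reproduced with straightforward numerical quadrature (a Python sketch using composite Simpson's rule and <code>math.erf</code>):</p>

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i*h) for i in range(1, n // 2))
    return s * h / 3

# The c = 1 and c = 2 rows of the table, with a = 1/2.
val_c1 = simpson(lambda x: math.erf(x) / math.sqrt(1 - x*x), 0.0, 0.5)
val_c2 = simpson(lambda x: math.erf(x / 2) / math.sqrt(1 - x*x), 0.0, 0.5)
```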
|
2,062,638 | <blockquote>
<p>Consider a curve $\gamma(t): [a,b] \to \mathbb{R}^n$. The curve
$$-\gamma(t)=\gamma(a+b-t) \,\,\,\,\,\,\,\,\,\,\,\,\,\, t \in [a,b]$$</p>
<p>is called the "reverse" curve (or path) of $\gamma(t)$.</p>
</blockquote>
<p>This definition is clear, but how is the "reverse" path defined in the following case?</p>
<p>Take two regular curves $\gamma_1:[a,b] \to \mathbb{R}^n$ and $\gamma_2:[c,d] \to \mathbb{R}^n$ with $\gamma_1(b)=\gamma_2(c)$ and define $\gamma:[a,d] \to \mathbb{R}^n$ as
$$\gamma(t)=\begin{cases} \gamma_1(t) & t\in [a,b] \\ \gamma_2(t) &t \in [c,d] \end{cases}$$</p>
<p>Now, what is $-\gamma(t)$?</p>
<p>I think there are two possibilities:</p>
<ol>
<li>$$-\gamma(t)=\begin{cases} \gamma_1(a+b-t) & t\in [a,b] \\ \gamma_2(a+b-t) &t \in [c,d] \end{cases}$$</li>
<li>$$-\gamma(t)=\begin{cases} \gamma_1(a+b-t) & t\in [a,b] \\ \gamma_2(c+d-t) &t \in [c,d] \end{cases}$$</li>
</ol>
| πr8 | 302,863 | <p>A picture would really help here - the intuitive idea behind this is a bit like how, when looking at compositions of functions (or even group elements), $(fg)^{-1}=g^{-1}f^{-1}$. </p>
<p>Here, you're dealing with a <em>concatenation</em> of paths, which is an analog of composition (well, almost) in this setting. You're travelling along $\gamma_1$ first, and then along $\gamma_2$. With this in mind, to reverse your path, you should retrace your steps along $\gamma_2$, and then along $\gamma_1$. </p>
<p>One such way of doing this would be:</p>
<p>$$-\gamma (t) = \begin{cases} \gamma_2 (a+d-t), & t \in [a,a+d-c] \\
\gamma_1 (a+d-t), & t \in [d-b+a,d] \end{cases} .$$</p>
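<p>A concrete check (Python sketch with a hypothetical pair of curves, taking $b=c=1$ so that the concatenation is defined on $[0,2]$): the proposed reverse path traces $\gamma$ backwards and swaps the endpoints.</p>

```python
a, b, c, d = 0.0, 1.0, 1.0, 2.0
gamma1 = lambda t: (t, t * t)        # on [0, 1]
gamma2 = lambda t: (t, t)            # on [1, 2]; gamma1(1) == gamma2(1)

def gamma(t):
    return gamma1(t) if t <= b else gamma2(t)

def rev(t):
    # the proposed reverse path: gamma2 backwards, then gamma1 backwards
    return gamma2(a + d - t) if t <= a + d - c else gamma1(a + d - t)

endpoints_swap = rev(a) == gamma(d) and rev(d) == gamma(a)
same_trace = all(rev(t) == gamma(a + d - t) for t in [k / 10 for k in range(21)])
```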
|
1,545,258 | <p>In my abstract algebra book one of the first facts stated is the Well Ordering Principle:</p>
<p>(*) Every non-empty set of positive integers has a smallest member.</p>
<p>In real analysis on the other hand one of the first things introduced are the real numbers and their Completeness Axiom:</p>
<p>Every nonempty set of real numbers having an upper bound must have a least upper bound.</p>
<p>Which is equivalent to:</p>
<p>(**) Every nonempty set of real numbers having a lower bound must have a biggest lower bound (infimum). </p>
<p>It has never been mentioned in any book I've read and I don't know if they have anything to do with each other but (*) and (**) seem to me to be such that (**) implies (*). </p>
<blockquote>
<p>Is the Well Ordering Principle a consequence of the Completeness of
the real numbers? Or do they have nothing to do with each other? How
should I think of them in terms of how they relate to each other?</p>
</blockquote>
<p>Is it okay to see one as a consequence of the other?</p>
| nombre | 246,859 | <p>If you define $\Bbb{R}$ using Dedekind cuts over $\mathbb{Q}$, then $(**)$ can be proven without using $(*)$. (the Dedekind completion of a dense linear order without endpoints always has the least upper bound property). </p>
<p>It is tricky to say that $(*)$ follows from $(**)$ because $(*)$ is a defining characteristic of $\mathbb{N}$, so in a way you summon $(*)$ as soon as you talk about $\mathbb{N}$. I am not sure you can define $\mathbb{N}$ knowing only that $\mathbb{R}$ is an ordered field with property $(**)$ in a way that would make the proof $(**) \rightarrow (*)$ possible. </p>
|
183,536 | <p>What is the best way to take a part of an expression and put it somewhere else in the same expression? Essentially I need to combine a <code>Extract</code>, <code>Delete</code> and <code>Insert</code> atomically, that is, handle corner cases where the deletion might cause a shift of the position where I want to insert to, or similar problems.</p>
| kglr | 125 | <pre><code>ClearAll[changePos]
changePos = Insert[Delete[#, #2], #[[#2]], {#3}] &;
</code></pre>
<p><strong>Examples:</strong></p>
<pre><code>changePos[CharacterRange["a", "k"], 7, 3]
</code></pre>
<blockquote>
<p>{"a", "b", "g", "c", "d", "e", "f", "h", "i", "j", "k"}</p>
</blockquote>
<pre><code>changePos[CharacterRange["a", "k"], 3, 7]
</code></pre>
<blockquote>
<p>{"a", "b", "d", "e", "f", "g", "c", "h", "i", "j", "k"}</p>
</blockquote>
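<p>The index-shift corner case here is language-agnostic; a Python sketch of the same semantics (the helper name <code>move</code> is just for illustration) shows that interpreting the target as a position in the <em>final</em> list is what makes delete-then-insert behave atomically:</p>

```python
def move(lst, src, dst):
    """Move lst[src] (1-based) so it ends up at position dst of the result;
    dst indexes the *final* list, which absorbs the shift caused by deletion."""
    item = lst[src - 1]
    rest = lst[:src - 1] + lst[src:]                  # Delete
    return rest[:dst - 1] + [item] + rest[dst - 1:]   # Insert

letters = list("abcdefghijk")
```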
|
183,536 | <p>What is the best way to take a part of an expression and put it somewhere else in the same expression? Essentially I need to combine a <code>Extract</code>, <code>Delete</code> and <code>Insert</code> atomically, that is, handle corner cases where the deletion might cause a shift of the position where I want to insert to, or similar problems.</p>
| Carl Woll | 45,431 | <p>If the parts you want to move are at the same level, you can use <a href="http://reference.wolfram.com/language/ref/Part" rel="nofollow noreferrer"><code>Part</code></a>:</p>
<pre><code>list = Range[10];
list[[{3, 7}]] = list[[{7, 3}]];
list
</code></pre>
<blockquote>
<p>{1, 2, 7, 4, 5, 6, 3, 8, 9, 10}</p>
</blockquote>
|
3,676,003 | <p>I would like to find the parametric equation for a curve starting at radius of curvature 10 at angle 0 degrees and ending at radius of curvature 100 at 90 degrees. The equation for change in radius of curvature along the path will be specified. </p>
<p>One can imagine this as a 90 degree arc of a circle, except the radius of the circle is changing along the path. This is similar to the concept of Euler curves, except the curvature only changes linearly for Euler curves. </p>
<p>Any suggestions on how to approach this problem? </p>
<p>Edit: Thanks for the many replies! Actually my question is that the rate of change of radius of curvature can be any function. In my case, I'd like it to be a tanh function going from radius r1 to radius r2. The radii I mentioned above are also arbitrary values. I am looking for an analytical or numerical way to approach the generalized problem. </p>
| Jean Marie | 305,862 | <p>There is a simple curve, the ellipse </p>
<p><span class="math-container">$$\begin{cases}x&=&a \cos(t)\\ y&=&b\sin(t)\end{cases} \ \ \ \iff \ \ \ \dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1 \ \ \ \ $$</span></p>
<p>providing an answer to your issue. Indeed, its curvature is</p>
<p><span class="math-container">$$K=\dfrac{ab}{(a^2 \sin^2 t + b^2 \cos^2 t)^{3/2}}\tag{1}$$</span></p>
<p>(see example 1 in <a href="https://www.math24.net/curvature-radius/" rel="nofollow noreferrer">this reference</a>), therefore with max. and min. <strong>radii of curvature</strong> :</p>
<p><span class="math-container">$$a^2/b \ \ \text{ in} \ \ A(a,0) \ \ \ \ \text{ and } \ \ \ \ b^2/a \ \ \text{ in} \ \ B(0,b).$$</span> </p>
<p>It remains to solve : <span class="math-container">$$a^2/b=10 \ \ \ \text{ and } \ \ \ \ b^2/a=100...$$</span></p>
<p>giving</p>
<p><span class="math-container">$$a=10^{4/3} \approx 21.54 \ \ \ \text{ and } \ \ \ \ b=10^{5/3}\approx 46.42.$$</span></p>
<p><strong>Edit :</strong> the locus of centers of curvature, called the evolute, of an ellipse is an elongated astroid : see the very nice figures <a href="https://en.wikipedia.org/wiki/Evolute#Evolute_of_an_ellipse" rel="nofollow noreferrer">there</a></p>
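<p>A numerical cross-check (Python sketch, curvature via central finite differences): at <span class="math-container">$A(a,0)$</span> the radius of curvature is <span class="math-container">$b^2/a$</span> and at <span class="math-container">$B(0,b)$</span> it is <span class="math-container">$a^2/b$</span>, so the stated problem needs <span class="math-container">$b^2/a=10$</span> and <span class="math-container">$a^2/b=100$</span>:</p>

```python
import math

def curvature(a, b, t, h=1e-5):
    """Numerical curvature of t -> (a cos t, b sin t) via central differences."""
    x = lambda s: a * math.cos(s)
    y = lambda s: b * math.sin(s)
    x1 = (x(t + h) - x(t - h)) / (2 * h)
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

a, b = 10 ** (5 / 3), 10 ** (4 / 3)       # chosen so b^2/a = 10, a^2/b = 100
R_A = 1 / curvature(a, b, 0.0)            # radius of curvature at (a, 0)
R_B = 1 / curvature(a, b, math.pi / 2)    # radius of curvature at (0, b)
```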
|
286,312 | <p>I'm starting to understand how induction works (with the whole $k \to k+1$ thing), but I'm not exactly sure how summations play a role. I'm a bit confused by this question specifically:</p>
<p>$$
\sum_{i=1}^n 3i-2 = \frac{n(3n-1)}{2}
$$</p>
<p>Any hints would be greatly appreciate</p>
| Math Gems | 75,092 | <p>If we mimic amWhy's proof for a general summation we obtain a powerful result.</p>
<p>$$ \sum_{i=1}^n f(i) = g(n)$$</p>
<p><strong>Base case:</strong> </p>
<ul>
<li><p>Let $n=1$ and test: $$\sum_{i=1}^1 f(i) = f(1) \color{#c00}{=?}\ g(1)$$</p></li>
<li><p>The Base Case $\,n = 1\,$ holds true $\iff \color{#C00}{f(1) = g(1)}$</p></li>
</ul>
<p><strong>Induction Hypothesis</strong>: </p>
<ul>
<li>Assume that it is true for $\, n = k$: assume that $$\sum_{i=1}^k f(i) = g(k).$$ </li>
</ul>
<p><strong>Inductive Step:</strong> </p>
<ul>
<li><p>Prove, using the Induction Hypothesis as a premise, that $$\sum_{i=1}^{k+1}f(i)=\left(\sum_{i=1}^k f(i)\right) + f(k\!+\!1) = g(k) + f(k\!+\!1) \color{#0a0}{=?}\ g(k\!+\!1).$$</p></li>
<li><p>The Inductive Step from $k$ to $k+1$ is true $\iff \color{#0a0}{ g(k\!+\!1) - g(k) = f(k\!+\!1)}$</p></li>
</ul>
<p>Therefore we have proved by induction the following generic summation criterion</p>
<p>$\displaystyle\quad\sum_{i=1}^n f(i) = g(n)\iff \color{#c00}{g(1) = f(1)}\,\ {\rm and}\,\ \color{#0a0}{g(n\!+\!1)-g(n) = f(n\!+\!1)}\ $ for $\,n \ge 1$</p>
<p>This theorem reduces the inductive proof to simply verifying the $\rm\color{#c00}{RHS}$ $\rm\color{#0a0}{equalities}$, which is trivial polynomial arithmetic when $f(n),g(n)$ are polynomials in $n,\,$ so trivial it can be done purely <em>mechanically</em> by a high-school student (or computer). No <em>insight</em> (magic) is required - no rabbits need be pulled from a hat.</p>
<p>The above theorem is an example of <em>telescopy</em>, also known as the <em>Fundamental Theorem of Difference Calculus</em>, depending on context. You can find <a href="https://math.stackexchange.com/search?q=user%3A242+telescopy">many more examples</a> of telescopy and related results in other answers here.</p>
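<p>For the original exercise, $f(i)=3i-2$ and $g(n)=n(3n-1)/2$, the two conditions of the criterion are trivial to verify mechanically (Python sketch):</p>

```python
def f(i):
    return 3 * i - 2

def g(n):
    return n * (3 * n - 1) // 2   # n(3n-1) is always even, so // is exact

base_case = (g(1) == f(1))
inductive_step = all(g(k + 1) - g(k) == f(k + 1) for k in range(1, 500))
direct = all(sum(f(i) for i in range(1, n + 1)) == g(n) for n in range(1, 100))
```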
|
2,076,984 | <p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p>
<blockquote>
<p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p>
</blockquote>
<p>Here are a couple of figures -</p>
<ol>
<li>Two parabolas -
<a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li>
</ol>
<p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p>
<p>Why do we subtract to find the area? Why not add?</p>
<ol start="2">
<li>Similarly in parabola and line.</li>
</ol>
<p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p>
<p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
| pseudoeuclidean | 325,914 | <p>Look at the second image that you provided. Let $A$ represent the region under the line; let $B$ represent the region under the parabola; and let $C$ represent the region in between the line and the parabola. Note that $B$ and $C$ do not overlap. We want to find the area of region $C$.</p>
<p>Notice that the combination of regions $B$ and $C$ completely covers region $A$. Restated:
$$B\cup C=A\tag{1}$$
Equation $(1)$ means that all points in $A$ lie in $B$ or in $C$. We can solve for the region $C$.
$$C=A-B\tag{2}$$
Equation $(2)$ means that all points in $C$ lie in $A$ but do not lie in $B$.</p>
<p>The area of a region is just the measure of the set of points in that region, so areas add and subtract exactly as the (non-overlapping) regions do: $\operatorname{area}(C)=\operatorname{area}(A)-\operatorname{area}(B)$.</p>
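<p>The same bookkeeping can be checked numerically (Python sketch): integrating the line, the parabola, and their difference over $[-1,2]$, where the two curves intersect, gives $\operatorname{area}(A)-\operatorname{area}(B)=\operatorname{area}(C)$.</p>

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; exact for polynomials of degree <= 3."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i*h) for i in range(1, n // 2))
    return s * h / 3

# y = x + 2 and y = x^2 intersect at x = -1 and x = 2.
area_A = simpson(lambda x: x + 2, -1.0, 2.0)          # under the line
area_B = simpson(lambda x: x * x, -1.0, 2.0)          # under the parabola
area_C = simpson(lambda x: x + 2 - x * x, -1.0, 2.0)  # between the curves
```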
|
2,076,984 | <p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p>
<blockquote>
<p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p>
</blockquote>
<p>Here are a couple of figures -</p>
<ol>
<li>Two parabolas -
<a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li>
</ol>
<p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p>
<p>Why do we subtract to find the area? Why not add?</p>
<ol start="2">
<li>Similarly in parabola and line.</li>
</ol>
<p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p>
<p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
| Han de Bruijn | 96,057 | <p>Come on! Some of us use integrals, or even the
<A HREF="http://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola/2076079#2076079"><I>fundamental idiom of integral calculus</I></A>
to explain the idea that areas can be subtracted. However,
the concept of subtracting areas is much more general, and much older, than this.
An area is an area; integrals are just a means to calculate them; more often they aren't even used that way.
If I'm not mistaken, subtraction of areas goes all the way back to Euclid.<P>
As an example. Starting with a rectangle (with even more rectangles in it), we can do the following to deduce
the area of an arbitrary <B>triangle</B>. Herewith it is <I>only</I> assumed that the formula for the <I>area of a rectangle</I>
is well understood.<BR>To begin with, a parallelogram $\overline{ABDC}$ is constructed inside a rectangle $\overline{AEDF}$:
<BR><a href="https://i.stack.imgur.com/nK0Zl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nK0Zl.jpg" alt="enter image description here"></a><BR>
Label the vertices and assign a few coordinates (addition of vectors giving $D$):
$$ A = (0,0) \quad , \quad B = (x_1,y_1) \quad , \quad C = (x_2,y_2) \quad \Longrightarrow \quad D = (x_1+x_2,y_1+y_2)$$
Then calculate areas according to the following, leaving it to the user to prove things where they feel the need. Though,
with Euclidean geometry, <I>What you see is what you get</I>, most of the time:
$$
\operatorname{Area}(\overline{CDf}) = \operatorname{Area}(\overline{AbB}) =
\frac{1}{2} \operatorname{Area}(\overline{AbBh}) = \frac{1}{2} \overline{Ab}\times\overline{Ah} = \frac{1}{2} x_1y_1 \\
\operatorname{Area}(\overline{BcD}) = \operatorname{Area}(\overline{ACg}) =
\frac{1}{2} \operatorname{Area}(\overline{AaCg}) = \frac{1}{2} \overline{Aa}\times\overline{Ag} = \frac{1}{2} x_2y_2 \\
\operatorname{Area}(\overline{bEcB}) = \operatorname{Area}(\overline{gCfF}) = \operatorname{Area}(\overline{Aakh}) =
\overline{Aa}\times\overline{Ah} = x_2y_1
$$
Now <B>subtract</B> the areas of $\overline{AbB} , \overline{bEcB} , \overline{BcD} , \overline{CDf} , \overline{gCfF} ,
\overline{ACg}$ from the area of the big rectangle,
which is $\operatorname{Area}(\overline{AEDF}) = (x_1+x_2)(y_1+y_2)$. Then what we get is the area of the parallelogram:
$$
\operatorname{Area}(\overline{ABDC}) = (x_1+x_2)(y_1+y_2) - 2 x_2y_1 - 2\cdot\frac{1}{2} x_2y_2 - 2\cdot\frac{1}{2} x_1y_1 =\\
x_1y_1+x_1y_2+x_2y_1+x_2y_2 - x_2y_2 - x_1y_1 - 2x_2y_1 = x_1y_2-x_2y_1
$$
Do we observe a <I>determinant</I> here? Anyway, the area of the <B>triangle</B> is half of this:
$$
\operatorname{Area}(\overline{ABC}) = \frac{1}{2}(x_1y_2-x_2y_1) = \frac{1}{2} \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix}
$$
There are many interesting applications. For example, the area of an arbitrary polygon can be calculated with addition and
subtraction of nothing else but triangles, as exemplified <A HREF="http://math.stackexchange.com/questions/2074672/show-that-the-area-of-image-area-of-object-cdot-dett-where-t-is-a-l/2077741#2077741">here</A>.</p>
<p>However, there is a tag (definite-integrals) associated with the question, so let's kill the mosquito with a gun and employ some of that in the end:
$$
\operatorname{Area}(\overline{ABC}) = \operatorname{Area}(\overline{AaC}) + \operatorname{Area}(\overline{abBC})
- \operatorname{Area}(\overline{AbB}) =\\ \int_0^{x_2} \frac{y_2}{x_2}x\,dx \;
+ \; \int_{x_2}^{x_1} \left[y_2 + (y_1-y_2)\frac{x-x_2}{x_1-x_2} \right] dx \; - \; \int_0^{x_1} \frac{y_1}{x_1}x\,dx = \\
\frac{1}{2}x_2y_2 + \frac{1}{2}(x_1-x_2)(y_2+y_1)-\frac{1}{2}x_1y_1 = \frac{1}{2}(x_1y_2-x_2y_1)
$$
Always nice to see the consistency of mathematics showing up again.</p>
|
2,444,376 | <p>Let $u:\mathbb{R}^n\rightarrow(-\infty,+\infty]$ be a convex function, and suppose that $u$ admits a point of minimum.
I define:</p>
<p>$$(\varphi_\epsilon*u)(x)=\int_{\mathbb{R}^n}\varphi_\epsilon(y)u(x-y)dy, $$
where $\varphi_{\epsilon}$ is the standard mollifier.
Let's introduce the notation:
$$\tilde{u}_i=\varphi_{1/i}*u,\quad\forall i\in\mathbb{N}. $$</p>
<p>I know that the function $\tilde{u}_i$ is convex, that it converges pointwise to $u$ in $\mathbb{R}^n$ and uniformly on compact sets of $\mathbb{R}^n$.</p>
<blockquote>
<p>If I denote by $y_i:=\min_{\mathbb{R}^n}\tilde{u}_i$, is it true that the sequence $(y_i)$ converges to $y=\min u$?</p>
</blockquote>
<p>I think yes, because I have uniform convergence on compact sets.
However, I cannot prove it. How can I do it?</p>
<p>Thanks for the help!</p>
| Community | -1 | <p>Neither the convergence of derivative almost everywhere, nor uniform convergence on compact sets imply the convergence of minima on their own. Both modes of convergence leave the possibility of $u_i$ having some small values near infinity (if we don't know anything else about $u_i$). </p>
<p>Here we know that $u_i\ge u$ by Jensen's inequality. So it remains to prove that for every $\delta>0$ we have $\min u_i\le \min u+\delta$ for large $i$. To do this, let $x_0$ be a point of minimum of $u$, and take a neighborhood $V$ of $x_0$ in which $u\le u(x_0)+\delta$. When $i$ is large enough, the support of $\varphi_{1/i}$ is smaller than $V$, which implies that $u_i(x_0)$ only involves the values of $u$ within $V$. Hence $u_i(x_0)\le u(x_0)+\delta$. </p>
|
2,121,895 | <blockquote>
<p>How many three digit sequences using the numbers 0 ... 9 with conditions<br>
(a) Repetition of digits is allowed and the sequence cannot start with 0?<br>
(b) Repetition of digits is not allowed and the sequence cannot start with 0? </p>
</blockquote>
<p>For part (a) I did the following:<br>
(9)(10)(10) = 900 </p>
<p>I am stuck on part (b). I know that when you sample without replacement you want to use $n!/(n-r)!$, but in this problem $n$ isn't constant. The first time a digit is picked there are 9 possible digits, because the sequence cannot start with 0. The second time a digit is picked there are still 9 possible digits, because 0 is now allowed but we are sampling without replacement, and the third time a digit is picked there are 8 possible digits left. I am not sure how to combine this information to solve part (b). Any suggestions? </p>
| Kiran | 82,744 | <p>For part (a), it is $9 \times 10 \times 10 = 900$, as you did.</p>
<p>For part (b), it is $9 \times 9 \times 8 = 648$.</p>
|
2,121,895 | <blockquote>
<p>How many three digit sequences using the numbers 0 ... 9 with conditions<br>
(a) Repetition of digits is allowed and the sequence cannot start with 0?<br>
(b) Repetition of digits is not allowed and the sequence cannot start with 0? </p>
</blockquote>
<p>For part (a) I did the following:<br>
(9)(10)(10) = 900 </p>
<p>I am stuck on part (b). I know that when you sample without replacement you want to use $n!/(n-r)!$, but in this problem $n$ isn't constant. The first time a digit is picked there are 9 possible digits, because the sequence cannot start with 0. The second time a digit is picked there are still 9 possible digits, because 0 is now allowed but we are sampling without replacement, and the third time a digit is picked there are 8 possible digits left. I am not sure how to combine this information to solve part (b). Any suggestions? </p>
| Joffan | 206,402 | <p>For part (b), using $\dfrac{n!}{(n-r)!}$ will get you $\dfrac{10!}{(10-3)!}=\dfrac{10!}{7!} = 10\cdot 9\cdot 8$ options, ignoring the leading zero restriction. Then $\frac{1}{10}$ of these options will have a leading zero, so we can multiply by $\frac {9}{10}$ to get back to the valid solutions, $ \frac {9}{10}\cdot 10\cdot 9\cdot 8 = 9\cdot 9\cdot 8 = 648$ options.</p>
<p>Of course you can get to the same answer by just picking available digits in succession also, as you did, just needing to multiply together the digit-by-digit results for the final answer. You can perhaps see how the two processes are just the same in different formats.</p>
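<p>Both counts are small enough to confirm by brute force; a throwaway Python check:</p>

```python
from itertools import product

# all three-digit sequences over 0..9 that do not start with 0
seqs = [s for s in product(range(10), repeat=3) if s[0] != 0]
part_a = len(seqs)                                   # repetition allowed
part_b = len([s for s in seqs if len(set(s)) == 3])  # all digits distinct
print(part_a, part_b)  # 900 648
```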
|
873,755 | <p>Let $f(x) = x^{10}+5x^9-8x^8+7x^7-x^6-12x^5+4x^4-8x^3+12x^2-5x-5. $</p>
<p>Without using long division (which would be horribly nasty!), find the remainder when $f(x)$ is divided by $x^2-1$.</p>
<p>I'm not sure how to do this, as the only way I know of dividing polynomials other than long division is synthetic division, which only works with linear divisors. I thought about doing $f(x)=g(x)(x+1)(x-1)+r(x)$, but I'm not sure how to continue. Thanks for the help in advance.</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> <span class="math-container">$\ {\rm mod\ }x^{\large 2}\!-1\!:\,\ x^{\large 2}\equiv 1\,\Rightarrow\,\color{#0a0}{x^{\large 2n}\equiv 1}\,\Rightarrow\,\color{#c00}{x^{\large 2n+1}\equiv x},\ $</span> hence</p>
<p><span class="math-container">$ f(x) =\, \overbrace{(c_0 + c_2\color{#0a0}{ x^2} + c_4\color{#0a0}{ x^4}+\cdots)}^{\large \color{#0a0}{f_0(x)}} \ +\ \overbrace{(c_1\color{#c00} x + c_3\color{#c00}{x^3} + c_5\color{#c00}{x^5} + \cdots)}^{\large \color{#c00}{f_1(x)}}$</span></p>
<p><span class="math-container">$\qquad \equiv \ (c_0 \ +\ c_2\ +\ c_4\ \ + \ \cdots)\,\color{#0a0}1 + (c_1\ +\,\ c_3\ \ +\ \ c_5 \ +\ \cdots)\,\color{#c00}x $</span></p>
<p><span class="math-container">$\qquad \equiv\ f_0(1)\,\color{#0a0}1 + f_1(1)\, \color{#c00}x,\ $</span> where <span class="math-container">$\,f_0(x),\ f_1(x)\,$</span> are <a href="https://math.stackexchange.com/a/3514/242">the <span class="math-container">$\rm\color{#0a0}{even}$</span> and <span class="math-container">$\rm\color{#c00}{odd}$</span> parts</a> of <span class="math-container">$\,f(x).$</span></p>
<p>e.g. a familiar numerical instance when <span class="math-container">$\,x=10\,$</span> in radix <span class="math-container">$10$</span> (decimal) arithmetic</p>
<p><span class="math-container">$\!\! \bmod 99\!:\ \color{#c00}5\color{#0a0}4\color{#c00}3\color{#0a0}2\color{#c00}1\color{#0a0}0\equiv
(\color{#c00}{5\!+\!3\!+\!1}),(\color{#0a0}{4\!+\!2\!+\!0})\equiv \color{#c00}9\color{#0a0}6\equiv \color{#c00}5\color{#0a0}4+\color{#c00}3\color{#0a0}2+\color{#c00}1\color{#0a0}0\ $</span> by <span class="math-container">$\,10^2\equiv 1$</span></p>
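<p>A quick numerical check of this recipe on the $f$ from the question (plain Python; since $x^2-1$ vanishes at $x=\pm1$, the remainder $r(x)=f_1(1)\,x+f_0(1)$ must satisfy $r(\pm1)=f(\pm1)$):</p>

```python
# coefficients of f by descending degree, copied from the question
coeffs = [1, 5, -8, 7, -1, -12, 4, -8, 12, -5, -5]

def f(x):
    acc = 0
    for c in coeffs:
        acc = acc * x + c  # Horner evaluation
    return acc

# the degree of coeffs[i] is 10 - i, so even index <=> even degree here
f0_at_1 = sum(c for i, c in enumerate(coeffs) if i % 2 == 0)  # even part at 1
f1_at_1 = sum(c for i, c in enumerate(coeffs) if i % 2 == 1)  # odd part at 1
print(f1_at_1, f0_at_1)  # -13 3, i.e. the remainder is r(x) = -13x + 3
assert f(1) == f0_at_1 + f1_at_1 and f(-1) == f0_at_1 - f1_at_1
```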
|
2,174 | <p>As a teenager I was given this problem, which took me a few years to solve. I'd like to know if it has ever been published. When I presented my solution I was told that it was similar to one of several he had seen.</p>
<p>The problem:</p>
<p>For an <span class="math-container">$n$</span> dimensional space, develop a formula that evaluates the maximum number of <span class="math-container">$n$</span> dimensional regions when divided by <span class="math-container">$k$</span> <span class="math-container">$n-1$</span> dimensional (hyper)planes.</p>
<p>Example: A line is partitioned by points: <span class="math-container">$1$</span> point, <span class="math-container">$2$</span> line segments; <span class="math-container">$10$</span> points, <span class="math-container">$11$</span> line segments; and so on.</p>
| Isaac | 72 | <p>Though not discussed in the full generality you describe, the problem is discussed at length for n = 1, 2, and 3 in <em>Mathematics for High School Teachers: An Advanced Perspective</em> by Usiskin, Peressini, Marchisotto, and Stanley (section 5.1.4; © 2003; published by Pearson Education). Specifically, the text uses that problem as an example for discussing how induction can be applied to prove the formulae using the geometry of the problem in the inductive step (when you add the next (n-1)-dimensional boundary object, how many additional regions are created?).</p>
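<p>The inductive step described there translates directly into a recurrence: each new hyperplane is cut by the previous ones into pieces, one per region it crosses, giving $R(n,k)=R(n,k-1)+R(n-1,k-1)$. A Python sketch, cross-checked against the standard closed form $\sum_{i=0}^{n}\binom{k}{i}$ (the closed form is a well-known fact, not something stated in the book excerpt):</p>

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def regions(n, k):
    # maximum number of regions made by k hyperplanes in n dimensions
    if k == 0 or n == 0:
        return 1
    return regions(n, k - 1) + regions(n - 1, k - 1)

print(regions(1, 10), regions(2, 3))  # 11 segments, 7 plane regions
assert all(regions(n, k) == sum(comb(k, i) for i in range(n + 1))
           for n in range(6) for k in range(12))
```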
|
2,610,048 | <p>Ok, so I'm trying to prove statement in the header. I have read the following discussion on it, but I can't seem to follow it all the way through:</p>
<p><a href="https://math.stackexchange.com/questions/1198735/proving-sum-k-1n-k-k-n1-1/1198743#1198743?newreg=abbcf872d9904cbfa76cd161c4fecdd0">Proving $\sum_{k=1}^n k k!=(n+1)!-1$</a></p>
<p>I like mfl's answer, but I get hung up on the last step. They say: </p>
<p>and we need to show</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=(n+2)!−1.$$</p>
</blockquote>
<p>Just write</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=\sum_{k=1}^n kk! + (n+1)(n+1)!$$</p>
</blockquote>
<p>How do they get from the first step stated above, to the following step? I'm stuck.</p>
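<p>(For what it's worth, I convinced myself the identity is true numerically before worrying about the induction; a throwaway Python check:)</p>

```python
from math import factorial

for n in range(1, 12):
    lhs = sum(k * factorial(k) for k in range(1, n + 1))
    assert lhs == factorial(n + 1) - 1  # the identity to be proved
print("checked for n = 1..11")
```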
| vadim123 | 73,324 | <p>The eleven options listed are not equally likely. There are $5^2=25$ ways to roll $[1,1,x,x]$, but only one way to roll $[1,1,1,1]$. Hence, what you need is $$\frac{25\cdot 6+5\cdot4+1}{6^4}=\frac{171}{1296}\approx 0.132$$
To deal with three rolls, the simplest way to compute it is to compute the probability that you FAIL to roll snake-eyes (three times), then subtract this from $1$. Your answer is $$1-\left(\frac{1125}{1296}\right)^3=\frac{1032859}{2985984}\approx 0.346$$</p>
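<p>The count $171$ (and hence both probabilities) can be verified by exhaustive enumeration; a Python sketch, reading the event as "at least two 1s among the four dice", which is what the case count $25\cdot6+5\cdot4+1$ corresponds to:</p>

```python
from itertools import product

# count outcomes of four dice containing at least two 1s
hits = sum(1 for roll in product(range(1, 7), repeat=4) if roll.count(1) >= 2)
print(hits, 6 ** 4)  # 171 1296
# probability of succeeding at least once in three independent tries
p_three = 1 - ((6 ** 4 - hits) / 6 ** 4) ** 3
print(round(p_three, 3))  # 0.346
```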
|
83,648 | <p><a href="http://reference.wolfram.com/language/ref/FullGraphics.html" rel="nofollow noreferrer"><code>FullGraphics</code></a> hasn't worked entirely for a long time and the situation appears to be getting worse instead of better. In <em>Mathematica</em> 10.0, 10.1, 11.3, 12.3 up to 13.1 a simple usage throws numerous errors and returns a graphic without ticks and with the wrong aspect ratio:</p>
<pre><code>Plot[Sin[x], {x, 0, 10}] // FullGraphics
</code></pre>
<blockquote>
<p>Axes::axes: {{False,False},{False,False}} is not a valid axis
specification. >></p>
<p>Ticks::ticks: {Automatic,Automatic} is not a valid tick specification. >></p>
<p>(* etc. etc. *)</p>
</blockquote>
<p>This may be caused by or related to <a href="https://mathematica.stackexchange.com/questions/68937/">More Ticks::ticks errors in AbsoluteOptions in v10</a>.</p>
<p>It seems that I must go back to version 5 functionality if I want this function to work right:</p>
<pre><code><< Version5`Graphics` (* load old graphics subsystem *)
Plot[Sin[x], {x, 0, 10}] // FullGraphics
</code></pre>
<p><img src="https://i.stack.imgur.com/2ldzm.png" alt="enter image description here" /></p>
<p>I wonder at this point if there is any indication that <code>FullGraphics</code> and perhaps also <code>AbsoluteOptions</code> are still supported? Or has something to the contrary has been written (Wolfram blog, a developer's comment, etc.) that indicates these should be removed from the documentation now?</p>
<p>With <code>FullGraphics</code> broken is there a method that can take its place for producing proper <code>Graphics</code> directives that may be further manipulated and combined, not merely vectorized outlines?</p>
| akpc | 61,134 | <p>It seems that the problem appeared in the version 5 → 6 transition.</p>
<pre><code><< Version6`Graphics`
InputForm@ListPlot[{{0, 0}}]
(* Graphics[{...
Frame -> {{False, False}, {False, False}},...] *)
</code></pre>
<p>which has the same form in later versions.
This subtle difference is causing the trouble; cf.</p>
<pre><code> << Version5`Graphics`
InputForm@ListPlot[{{0, 0}}]
(* Graphics[{...
Frame -> False,...] *)
</code></pre>
<p>This bug can thus be circumvented by manually changing this value.</p>
<pre><code>FullGraphics @@ (InputForm[Plot[Sin[x], {x, 0, 4}]] /.
Rule[Frame, _ ] ->
Rule[Frame, False])
</code></pre>
|
710,146 | <p>Conversely, is it true that if every sequence of pointwise equicontinuous functions from $M$ to $\mathbb{R}$ is uniformly equicontinuous, then $M$ is compact?</p>
| Daniel Fischer | 83,702 | <p>Such a space need not be compact. Take any metric space with the discrete metric, $d(x,y) = 1$ for $x\neq y$. Then every family of functions $M \to \mathbb{R}$ is uniformly equicontinuous. But if the space contains infinitely many points, it is not compact.</p>
<p>A connected metric space on which every pointwise equicontinuous sequence of functions is uniformly equicontinuous, however, must be compact.</p>
<p>More, a connected metric space on which every continuous function is uniformly continuous must be compact.</p>
<p>First, if $(X,d)$ is an incomplete metric space, let $(\tilde{X},\tilde{d})$ be its completion. For every $p \in \tilde{X}\setminus X$, the function</p>
<p>$$f\colon X\to \mathbb{R}; \quad f\colon x \mapsto \frac{1}{\tilde{d}(x,p)}$$</p>
<p>is continuous but not uniformly continuous. It is continuous because $x\mapsto \tilde{d}(x,p)$ is continuous and nonzero on $X$, and for a sequence $(x_n)$ in $X$ converging to $p$, the sequence $\bigl(f(x_n)\bigr)$ is not a Cauchy sequence, hence $f$ is not uniformly continuous.</p>
<p>If a metric space $(X,d)$ is not totally bounded, it contains a sequence $(x_n)$ of points with $d(x_k,x_n) \geqslant 4\varepsilon$ for $k\neq n$ and some $\varepsilon > 0$. For $n\in\mathbb{Z}^+$, let</p>
<p>$$\varphi_n(x) = \max \left\{0, 1- \frac{d(x_n,x)}{\varepsilon} \right\}.$$</p>
<p>Then $\varphi_n$ is continuous, and since $B_\varepsilon(x)$ intersects the support of at most one $\varphi_n$, the function</p>
<p>$$f(x) = \sum_{n=1}^\infty n\cdot \varphi_n(x)$$</p>
<p>is well-defined and continuous. If $X$ is connected, $f$ is not uniformly continuous: then for every $n$ there is an $y_n \in X$ with $d(x_n,y_n) = \frac{\varepsilon}{2n}$ and hence $\lvert f(y_n) - f(x_n)\rvert = \frac{1}{2}$, but $\lim\limits_{n\to\infty} d(x_n,y_n) = 0$.</p>
<p>The connectedness is a stronger assumption than is needed, it suffices that for infinitely many of the $x_n$ there is a point $y_n$ with $\frac{c}{n} \leqslant d(x_n,y_n) \leqslant \delta_n$ where $c$ is some positive constant, and $\delta_n \to 0$. But an exact criterion is not obvious.</p>
|
312,696 | <p>I have three original points $pt_1, pt_2, pt_3$ which if transformed by an unknown matrix $M$ turn into points $gd_1, gd_2, gd_3$ respectively. How can I find the matrix $M$ (all points are in 3-dimensional space)?</p>
<p>I understand that for original points holds $M\cdot pt_i = gd_i$, so combining all $pt_i$ into matrix $PT$ and all $gd_i$ into $GD$ I'd get a matrix equation $M\cdot PT=GD$ with unknown $M$.</p>
<p>However, many math packages solve matrix equations in form of $A\cdot x=B$, where $x$ is unknown.</p>
<p>Is my idea of combining points into matrices correct and if so how can I solve my matrix equation?</p>
| robjohn | 13,854 | <p>Form matrices whose rows are $pt_k$ and $gd_k$. Then
$$
\begin{bmatrix}pt_1\\pt_2\\pt_3\end{bmatrix}M=\begin{bmatrix}gd_1\\gd_2\\gd_3\end{bmatrix}
$$
Then we have
$$
M=\begin{bmatrix}pt_1\\pt_2\\pt_3\end{bmatrix}^{-1}\begin{bmatrix}gd_1\\gd_2\\gd_3\end{bmatrix}
$$
So your idea is correct.</p>
|
147,441 | <p>I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something:</p>
<p>"Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ "</p>
<p>Now, I've attempted to prove the above, is my intuition right?</p>
<p>Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates.</p>
<p>a) $Fy$ turns our y co-ordinates into $x$ co-ordinates.</p>
<p>b) pre multiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates</p>
<p>c) Now, to convert back into $y$ co-ordinates, pre multiply by $F^{-1}$, resulting in $F^{-1}AFy$</p>
<p>d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $A$ by $x$ to obtain $Ax$, thus proved.</p>
<p>Also, just to check, are the entries <em>in</em> the matrix $F^{-1}AF$ still written in terms of the standard basis?</p>
<p>Thanks.</p>
| Keivan | 10,020 | <p>Without saying much, here is how I usually remember the statement and also the proof in one big picture:</p>
<p>\begin{array}{ccc}
x_{1},\dots,x_{n} & \underrightarrow{\;\;\; A\;\;\;} & Ax_{1},\dots,Ax_{n}\\
\\
\uparrow F & & \downarrow F^{-1}\\
\\
y_{1},\dots,y_{n} & \underrightarrow{\;\;\; B\;\;\;} & By_{1},\dots,By_{n}
\end{array}</p>
<p>And
$$By=F^{-1}AFy$$</p>
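<p>The picture can also be checked numerically; a NumPy sketch with random matrices ($F$ is assumed invertible, which holds almost surely for a random matrix):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # the map, written in x-coordinates
F = rng.standard_normal((3, 3))      # change of basis: x = F y
B = np.linalg.solve(F, A @ F)        # B = F^{-1} A F, the same map in y-coordinates

y = rng.standard_normal(3)
x = F @ y
# going up with F, applying A, and coming back down agrees with applying B directly
assert np.allclose(np.linalg.solve(F, A @ x), B @ y)
```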
|
4,004,157 | <p>A firm wants to know how many of its employees have drug problems. Realizing the sensitivity of this issue, the personnel director decides to use a randomized response survey.</p>
<p>Each employee is asked to flip a fair coin,</p>
<p>If head (H), answer the question “Do you carpool to work?”</p>
<p>If tail (T), answer the question “Have you used illegal drugs within the last month?”</p>
<p>Out of 8000 responses, 1420 answered “YES” (assuming honesty)</p>
<p>The company knows that 35% of its employees carpool to work. What is the probability that an employee (chosen at random) used illegal drugs within the last month?</p>
<p>I think the probability that I am trying to figure out is $\mathbb{P}(yes|T)$. From the problem, I was able to figure out that $\mathbb{P}(yes)=1420/8000$, $\mathbb{P}(T)=50\%$ (because it's a fair coin) and that $\mathbb{P}(yes|H)=35\%$. But for Bayes' theorem, I need to find $\mathbb{P}(T|yes)$, and that is where I am stuck.</p>
<p>I realized that I did not need Bayes' theorem as that would have made it more difficult.</p>
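<p>For the record, the total-probability computation (no Bayes needed) can be sketched in a few lines of Python, assuming honest answers and a fair coin as stated:</p>

```python
p_yes = 1420 / 8000                  # observed proportion of YES answers
p_carpool = 0.35
# P(yes) = 0.5 * P(carpool) + 0.5 * P(drugs), solved for P(drugs)
p_drugs = (p_yes - 0.5 * p_carpool) / 0.5
print(p_drugs)  # ~0.005
```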
| Community | -1 | <p>Consider <span class="math-container">$\Bbb F_5$</span> and consider <span class="math-container">$a=2$</span>, <span class="math-container">$b=3$</span> and <span class="math-container">$\sqrt b=2\sqrt a$</span>. Then, <span class="math-container">$x^2+2$</span> has roots <span class="math-container">$\pm(\sqrt a+\sqrt b)$</span>, and neither <span class="math-container">$\sqrt a-\sqrt b$</span> nor <span class="math-container">$\sqrt b-\sqrt a$</span> are among them.</p>
|
4,179,720 | <p>I'm starting to study triple integrals. In general, I have been doing problems which require me to sketch the projection on the <span class="math-container">$xy$</span> plane so I can figure out the boundaries for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. For example, I had an exercise where I had to calculate the volume bound between the planes <span class="math-container">$x=0$</span>, <span class="math-container">$y=0$</span>, <span class="math-container">$z=0$</span>, <span class="math-container">$x+y+z=1$</span> which was easy. For the projection on the <span class="math-container">$xy$</span> plane, I set that <span class="math-container">$z=0$</span>, then I got <span class="math-container">$x+y=1$</span> which is a line.</p>
<p>However, now I have the following problem:</p>
<p>Calculate the volume bound between:</p>
<p><span class="math-container">$$z=xy$$</span></p>
<p><span class="math-container">$$x+y+z=1$$</span></p>
<p><span class="math-container">$$z=0$$</span></p>
<p>now I know that if I put <span class="math-container">$z=0$</span> into the second equation I get the equation <span class="math-container">$y=1-x$</span> which is a line, but I also know that <span class="math-container">$z=xy$</span> has to play a role in the projection. If I put <span class="math-container">$xy=0$</span> I don't get anything useful. Can someone help me understand how these projections work and how I can apply it here?</p>
| N. F. Taussig | 173,070 | <p>By convention, in a circular permutation, only the relative order of the people matters. Therefore, we can consider seating arrangements relative to one of the people at the table.</p>
<p>Suppose Julia is one of the eight people. Seat her. We will use her as our reference point. There are <span class="math-container">$7!$</span> ways to seat the remaining seven people as we proceed clockwise around the table from Julia.</p>
<p>For the favorable cases, we again begin by seating Julia. Her mate can be seated opposite to her in one way. That leaves six people who could be seated to Julia's immediate left. That person's mate must be seated to the immediate left of Julia's mate. That leaves four people who could be seated two seats to Julia's left. That person's mate must be seated two seats to the left of Julia's mate. That leaves two people who could be seated to Julia's immediate right. That person's mate must be seated to the immediate right of Julia's mate. Hence, there are
<span class="math-container">$$6 \cdot 4 \cdot 2$$</span>
favorable seating arrangements.</p>
<p>Hence, the probability that if the members of four couples are seated randomly at a round table that each person sits opposite to his or her mate is
<span class="math-container">$$\frac{6 \cdot 4 \cdot 2}{7!} = \frac{6 \cdot 4 \cdot 2}{7 \cdot 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1} = \frac{1}{7 \cdot 5 \cdot 3 \cdot 1} = \frac{1}{105}$$</span></p>
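<p>The result is small enough to confirm by brute force over all seatings relative to one fixed person; a Python sketch (couples encoded so that the mate of person $p$ is $p\oplus 1$):</p>

```python
from itertools import permutations
from fractions import Fraction

n_fav = n_tot = 0
for rest in permutations(range(1, 8)):
    seats = (0,) + rest                 # person 0 fixed; seats listed clockwise
    n_tot += 1
    # seat i is opposite seat i+4; mates are p and p^1
    if all(seats[i] ^ 1 == seats[(i + 4) % 8] for i in range(4)):
        n_fav += 1
print(n_fav, n_tot, Fraction(n_fav, n_tot))  # 48 5040 1/105
```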
|