| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,995,663 | <p>My brother in law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and if there were a way to disprove it.</p>
<p>After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem?</p>
<p><a href="https://i.stack.imgur.com/rlVrW.png"><img src="https://i.stack.imgur.com/rlVrW.png" alt="&quot;five color&quot; graph"></a></p>
| dtldarek | 26,306 | <p>Starting at the top, going clockwise:</p>
<ul>
<li>center: 1, 2, 3</li>
<li>middle: 2,4,3,4,2,4</li>
<li>outside: 1</li>
</ul>
<p><a href="https://i.stack.imgur.com/0gjxt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0gjxt.png" alt="4colors"></a></p>
<p>I hope this helps $\ddot\smile$</p>
|
<p>I'm trying to solve this recurrence equation <span class="math-container">$T(n)=4T(\frac{n}{2})+cn$</span>.</p>
<p>The solution I found online is <span class="math-container">$T(n)=\Theta(n^2)$</span>.</p>
<p>The steps I followed were these: <br/>
a) Create the recursion tree, like this:</p>
<pre><code> cn cost cn
/ / \ \
c(n/2) c(n/2) c(n/2) c(n/2) cost 2cn
//\\ //\\ //\\ //\\
c(n/2^2) c(n/2^2) c(n/2^2) ... c(n/2^2) cost 2^2cn
... ... ... ... ...
T(1) T(1) ... T(1) T(1) cost 2^icn
</code></pre>
<p>With <span class="math-container">$\log_2 n$</span> expected levels. So the longest-path calculation should be <span class="math-container">$2^i\log_2 cn$</span>; now I think I can ignore the <span class="math-container">$2^i$</span> and get <span class="math-container">$O(n\log n)$</span>. That differs from the solution.</p>
<p>I tried to continue with the substitution method: <br/>
b) let's assume <span class="math-container">$T(n) = O(n\log n)$</span>, i.e. <span class="math-container">$f(n) = dn\log_2 n$</span> with <span class="math-container">$d$</span> a constant greater than zero, taking the constant <span class="math-container">$c$</span> to have the same value as <span class="math-container">$d$</span>.</p>
<p><span class="math-container">$T(n) \le 4d\frac{n}{2}\log_2n+cn$</span> <br/>
<span class="math-container">$T(n) \le 2dn\log_2n+cn$</span> <br/>
<span class="math-container">$dn\log_2n \le 2dn\log_2n+cn$</span> <br/>
<span class="math-container">$-dn\log_2n \le cn$</span> <br/>
true for all <span class="math-container">$d \ge \frac{c}{\log_2n}$</span></p>
<p>Where am I wrong?</p>
<p>Source of the solutions:
a) This one says the result should be <span class="math-container">$n^2+2cn$</span>: <a href="https://github.com/gzc/CLRS/blob/master/C04-Recurrences/4.2.md" rel="nofollow noreferrer">https://github.com/gzc/CLRS/blob/master/C04-Recurrences/4.2.md</a>, point 4.2-3
b) WolframAlpha, which says it is <span class="math-container">$\Theta(n^2)$</span>: <a href="https://www.wolframalpha.com/input?i=g%28n%29%3D4g%28n%2F2%29" rel="nofollow noreferrer">https://www.wolframalpha.com/input?i=g%28n%29%3D4g%28n%2F2%29</a></p>
| Abezhiko | 1,133,926 | <p>I'm not that familiar with the recursion-tree method, but it seems that you only consider the longest path, when you should have calculated the total cost, which takes the form of a partial geometric series:<br />
<span class="math-container">$$
\sum_{k=0}^{\log_2(n)}2^kcn = cn\frac{2^{\log_2(n)+1}-1}{2-1} = cn(2n-1) \sim n^2
$$</span>
whence the desired result.</p>
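<p>As a sanity check on the sum (a quick sketch; the values <code>c = 2</code> and <code>n = 1024</code> are arbitrary, with <code>n</code> a power of two):</p>

```python
import math

c, n = 2.0, 1024  # arbitrary constant and problem size (n a power of two)
levels = int(math.log2(n)) + 1  # k runs from 0 to log2(n) inclusive

total = sum(2**k * c * n for k in range(levels))
assert total == c * n * (2 * n - 1)  # cn(2n - 1), i.e. on the order of n^2
```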
<hr />
<p><strong>Addendum.</strong> Here is an exact derivation of the solution.</p>
<p>The recurrence relation can be rewritten and simplified thanks to re-indexation, such that <span class="math-container">$a_n := T(2^n) = 4T(2^{n-1}) + 2^nc = 4a_{n-1} + 2^nc$</span>. Now it is a first-order inhomogeneous linear recurrence relation.</p>
<p>The homogeneous solution is derived straightforwardly from <span class="math-container">$b_n = 4b_{n-1}$</span>, which gives <span class="math-container">$b_n = 4^nb_0$</span>, whereas the particular solution can be determined from the ansatz <span class="math-container">$2^n\alpha$</span>, which leads to <span class="math-container">$\alpha = -c$</span> and, <em>in fine</em>, to the solution <span class="math-container">$a_n = 4^nb_0-2^nc$</span>, with <span class="math-container">$b_0$</span> a constant.</p>
<p>Coming back to the initial problem, we find <span class="math-container">$T(n) = a_{\log_2(n)} = 4^{\log_2(n)}b_0-2^{\log_2(n)}c = b_0n^2-cn$</span>, with <span class="math-container">$b_0 = T(1)+c$</span> due to the initial condition <span class="math-container">$a_0 = T(1) = b_0 - c$</span>.</p>
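<p>The closed form <span class="math-container">$a_n = 4^nb_0 - 2^nc$</span> is easy to check numerically against the recurrence (a quick sketch; the choices <code>c = 3</code> and <code>T(1) = 1</code> are arbitrary):</p>

```python
def T(n, c=3.0, T1=1.0):
    # the recurrence T(n) = 4 T(n/2) + c n, for n a power of two
    if n == 1:
        return T1
    return 4 * T(n // 2) + c * n

def closed_form(n, c=3.0, T1=1.0):
    # a_k = 4^k b0 - 2^k c with b0 = a_0 + c = T(1) + c, i.e. T(n) = b0 n^2 - c n
    b0 = T1 + c
    return b0 * n**2 - c * n

assert all(T(2**k) == closed_form(2**k) for k in range(10))
```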
|
3,251,851 | <p>I need help solving these simultaneous equations:</p>
<p><span class="math-container">$$a^2 - b^2 = -16$$</span>
<span class="math-container">$$2ab = 30$$</span></p>
| John Hughes | 114,036 | <p>Let's concentrate on the reals, in fact on <span class="math-container">$\Bbb R^3$</span>. The vectors are column vectors containing three real numbers. Because <span class="math-container">$0$</span> and <span class="math-container">$1$</span> are special real numbers, it turns out to be really nice to work with vectors that are mostly zeroes. So
<span class="math-container">$$
e_1 = \pmatrix{1\\0\\0}, e_2 = \pmatrix{0\\1\\0}, e_3 = \pmatrix{0\\0\\1},
$$</span>
which turn out to be a basis for 3-space, are a really nice set. They come up a lot, so they get a name: "the standard basis". This generalizes to <span class="math-container">$\Bbb R^n$</span>, and I'm pretty sure you understand the pattern.</p>
<p>As it happens, when we use the "standard inner product" on <span class="math-container">$\Bbb R^n$</span>, these vectors turn out to all have length one, and be mutually perpendicular. Those two properties <em>also</em> come up a lot, so we give them a name: we say the basis is an "orthonormal" basis.</p>
<p>So at this point, you see that the standard basis, with respect to the standard inner product, is in fact an orthonormal basis.</p>
<p>But not every orthonormal basis is the standard basis (even using the standard inner product). For instance, in <span class="math-container">$\Bbb R^2$</span>, for any value you pick for <span class="math-container">$t$</span>, the vectors
<span class="math-container">$$
v_1 = \pmatrix{\cos t \\ \sin t} , v_2 = \pmatrix{-\sin t\\ \cos t}
$$</span>
are an orthonormal basis as well. (They're just the standard basis rotated counterclockwise by an angle <span class="math-container">$t$</span>.)</p>
<p>The phrase "orthonormal basis" is always qualified with "with respect to ..." or "under ...", and then an inner product gets named. Well ... not <em>always</em>. Sometimes we're in the middle of talking about some inner product, and it's implicit. But a basis that's orthonormal with respect to one inner product may not be orthonormal with respect to another. Consider, on <span class="math-container">$\Bbb R^2$</span>, the inner product defined by
<span class="math-container">$$
\langle \pmatrix{a\\b} , \pmatrix{c\\d} \rangle = ac + 2bd.
$$</span>
Under this inner product, the vector <span class="math-container">$e_2$</span> has length <span class="math-container">$\sqrt 2$</span>, so <span class="math-container">$\{e_1, e_2 \}$</span> is not an orthonormal basis with respect to this (peculiar) inner product.</p>
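<p>These claims are easy to confirm numerically (a sketch; the angle <code>t = 0.7</code> is arbitrary):</p>

```python
import math

def standard_ip(u, v):
    return u[0]*v[0] + u[1]*v[1]

def peculiar_ip(u, v):
    # the inner product <(a,b),(c,d)> = ac + 2bd from the text
    return u[0]*v[0] + 2*u[1]*v[1]

# the rotated basis is orthonormal under the standard inner product
t = 0.7
v1 = (math.cos(t), math.sin(t))
v2 = (-math.sin(t), math.cos(t))
assert math.isclose(standard_ip(v1, v1), 1)
assert math.isclose(standard_ip(v2, v2), 1)
assert math.isclose(standard_ip(v1, v2), 0, abs_tol=1e-12)

# under the peculiar inner product, e2 is orthogonal to e1 but not unit length
e1, e2 = (1.0, 0.0), (0.0, 1.0)
assert peculiar_ip(e1, e2) == 0
assert math.sqrt(peculiar_ip(e2, e2)) == math.sqrt(2.0)
```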
|
<p>What is linear about a linear combination of things? In linear algebra, the "things" we are dealing with are usually vectors, and the linear combination gives the span of the vectors. Or it could be a linear combination of variables and functions. But why not just call it a combination? Why is the term "linear" included? What is so "linear" about it?</p>
| mweiss | 124,095 | <p>The word "linear" has two distinct senses, one <em>geometric</em> and one <em>algebraic</em>. Linear combinations are linear in <em>both</em> senses, which is why the phrase is so apt.</p>
<p><strong>Geometric Linearity</strong></p>
<p>Let's take the geometric sense first, because that is the one that has not yet been explicitly mentioned by the other answers. If you studied Geometry in high school, you may (depending on the curriculum you followed) have learned some axioms for lines and planes. One of those axioms is:</p>
<blockquote>
<p>If a plane $\textbf{P}$ contains two points $A$ and $B$, then it also contains the line $AB$.</p>
</blockquote>
<p>This axiom expresses the intuitive notion of "flatness". Non-planar surfaces do not satisfy this property: for example, if you take two points on the surface of a sphere, the line joining those points does not lie on the surface; rather, it cuts through the interior of the sphere and exits it. Inspired by this example, we might define the following property:</p>
<p><strong>Definition</strong>. A subset $S$ of $\mathbb{R}^n$ is <em>geometrically linear</em> if $S$ contains all of the lines through the points of $S$.</p>
<p>Now let's take a few vectors and form their <em>span</em>, which is the set of combinations you can form by adding together scalar multiples of the vectors. If the vectors live in $\mathbb{R}^3$ (or more generally in $\mathbb R^n$) then that span is guaranteed to be "geometrically linear" in the sense described above. Whether that span is a plane, or a line, or some higher-dimensional analogue of those things depends on how many vectors you begin with and whether or not they are linearly independent, and that's a whole separate question; regardless, though, the span of any set of vectors is a subspace of the ambient vector space, and is "flat" geometrically. That's why we call them <em>linear combinations</em>.</p>
<p><strong>Algebraic Linearity</strong></p>
<p>In high school algebra, you study polynomial functions of a single variable, and the way the formulas for those functions relate to their graphs: $f(x)=ax+b$ determines a line, $g(x)=ax^2+bx+c$ determines a parabola, etc. More generally one can consider functions of more than one variable, like $f(x,y) = ax^2 + bxy + cy^2 + d$, or $g(x,y) = ax^3 + bx + cy^5$. By analogy with the single-variable case, we can make the following definition:</p>
<p><strong>Definition.</strong> A multivariable polynomial function is called <em>algebraically linear</em> if every term has degree $1$.</p>
<p>Note: This definition is actually stricter than the high-school level use of the word "linear", in the sense that a function like $f(x) = 3x +2$ would not be considered "algebraically linear", despite the fact that its graph is obviously a line, because the constant term has degree $0$. Some people are bothered by this mismatch of language; see <a href="https://matheducators.stackexchange.com/q/9835/29">https://matheducators.stackexchange.com/q/9835/29</a> for example.</p>
<p>In any case, with this definition established, an expression like
$$ax + by + cz$$
would determine an algebraically linear function, whereas an expression like
$$ax^2 + bxy + cz$$
would not.</p>
<p>Now it is a remarkable fact that these two notions of linearity -- the geometric and the algebraic -- coincide, at least in settings in which they are both meaningful. If the vectors in your vector space $V$ have a natural geometric interpretation -- for example if you think of $V=\mathbb R^2$ as a plane, or $V=\mathbb R^3$ as modeling the 3-dimensional world we live in -- then if you take any set of vectors, and form from it the set of all "algebraically linear" combinations, the span of the set is "geometrically linear".</p>
<p>What's nice about the algebraic formulation is that it also works in settings that are not easily interpreted geometrically. For example, if $f(x)$, $g(x)$, and $h(x)$ are any three functions on $\mathbb R$, you can form the set of "linear combinations" -- functions of the form
$$af(x) + bg(x) +ch(x)$$
This describes a set of functions that can be "built from" $f, g, h$, in an algebraic sense, using only addition and scalar multiplication. A function built by multiplication, like $f(x)g(x)$, is not a linear combination; nor is a function built by composition, like $f(g(x))$. If the functions $f,g,h$ happen to be linearly independent, then you can think of their span as a 3-dimensional vector space -- and that vector space is <em>geometrically linear</em> as well, even though it may be difficult to visualize exactly what a "line" is. In fact, this is true even if the functions $f,g,h$ themselves are "nonlinear". (Yes, you can build a linear combination of nonlinear functions; the result will in general also be a nonlinear function, but the <em>set of all such combinations</em> is a linear subspace.)</p>
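<p>The "addition and scalar multiplication only" idea is easy to express in code (an illustrative sketch; the coefficients and the functions chosen are arbitrary):</p>

```python
import math

def linear_combination(coeffs, funcs):
    # pointwise a*f(x) + b*g(x) + c*h(x): only scalar multiples and sums
    return lambda x: sum(a * f(x) for a, f in zip(coeffs, funcs))

# a linear combination of nonlinear functions -- itself nonlinear, but the
# set of all such combinations forms a linear subspace of function space
h = linear_combination([2.0, -1.0, 0.5], [math.sin, math.cos, math.exp])
x = 0.3
assert math.isclose(h(x), 2*math.sin(x) - math.cos(x) + 0.5*math.exp(x))
```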
<p>The notions of geometric and algebraic linearity go together so tightly that it is easy for people to forget that they are, to a novice, intrinsically different concepts: once you get used to this stuff, it's hard to remember that the notion of "line through the origin" has a geometric, intuitive meaning that precedes the formal notion of "$1$-dimensional vector subspace". This is an example of a phenomenon called the "expert blind spot"; I think it's particularly common when teaching linear algebra to people for whom the subject is new.</p>
|
2,520,044 | <p>$$\lim_{x\to2}{\frac{\sqrt{3x-2}-\sqrt{5x-6}}{\sqrt{2x-1}-\sqrt{x+1}}}$$</p>
<p>Evaluate the limit.</p>
<p>Thanks for any help</p>
| Paolo Intuito | 395,372 | <p>That smells a lot like homework. A general hint for solving such a problem is to "rationalize" both numerator and denominator, i.e. multiply the whole thing by</p>
<p>$$\frac{\sqrt{3x-2}+\sqrt{5x-6}}{\sqrt{3x-2}+\sqrt{5x-6}}\cdot \frac{\sqrt{2x-1}+\sqrt{x+1}}{\sqrt{2x-1}+\sqrt{x+1}}$$</p>
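<p>As a hedged numerical sanity check of where the hint leads (a sketch; carrying out the rationalization suggests the limit is <span class="math-container">$-\sqrt 3$</span>):</p>

```python
import math

def f(x):
    # the original expression, defined near x = 2 (but not at x = 2)
    return (math.sqrt(3*x - 2) - math.sqrt(5*x - 6)) / \
           (math.sqrt(2*x - 1) - math.sqrt(x + 1))

# approaching 2 from both sides agrees with -sqrt(3)
for h in (1e-4, -1e-4, 1e-6, -1e-6):
    assert math.isclose(f(2 + h), -math.sqrt(3), rel_tol=1e-3)
```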
|
<p>Let there be a graph $G$ and its complement $G'$. If the degree of a vertex in $G$ is added to the degree of the corresponding vertex of $G'$, the sum will be $(n-1)$, where $n$ is the number of vertices. How can one prove this?</p>
| Rebecca J. Stones | 91,818 | <p>Given a vertex $u$, there are three types of vertices:</p>
<ul>
<li>the vertices which are adjacent to $u$,</li>
<li>the vertices which are <strong>not</strong> adjacent to $u$ (excluding $u$ itself), and</li>
<li>$u$.</li>
</ul>
<p>I depict this below:</p>
<p><a href="https://i.stack.imgur.com/Ta8N1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ta8N1.png" alt="enter image description here"></a></p>
<p>Every vertex has exactly one of these forms, and there are $n$ vertices in total. Thus, if $u$ is adjacent to $k$ vertices, then it is not adjacent to $n-1-k$ vertices (excluding $u$ itself). Thus, in the complement graph, $u$ is adjacent to $n-1-k$ vertices.</p>
<p>Thus, if we sum the degree of $u$ in the original graph and in the complement graph, we get $k+(n-1-k)=n-1$.</p>
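<p>The counting argument can be checked on a random graph (a small sketch; the size and edge probability are arbitrary):</p>

```python
import itertools
import random

random.seed(0)
n = 8
pairs = list(itertools.combinations(range(n), 2))
edges = {p for p in pairs if random.random() < 0.5}  # a random graph G
co_edges = set(pairs) - edges                        # its complement G'

def degree(v, edge_set):
    return sum(1 for e in edge_set if v in e)

# deg_G(u) + deg_G'(u) = n - 1 for every vertex u
assert all(degree(v, edges) + degree(v, co_edges) == n - 1 for v in range(n))
```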
|
3,047,670 | <blockquote>
<p>Prove or disprove that <span class="math-container">$$\lim_{n\to \infty} \left(\frac{x_{n+1}-l}{x_n-l}\right)=\lim_{n\to\infty}\left(\frac{x_{n+1}}{x_n}\right)$$</span> where <span class="math-container">$l=\lim_{n\to \infty} x_n$</span></p>
</blockquote>
<p>I think that the above result is true, but I am not really sure how to prove it. If anyone has a counterexample, I am looking forward to it.<br>
EDIT1: <span class="math-container">$x_n$</span> is any real sequence which is not constant.<br>
EDIT2: What if we add the additional constraint that <span class="math-container">$l \in \mathbb{R}$</span>?</p>
| Math-fun | 195,344 | <p>Let <span class="math-container">$x_n=\frac{2^{n+1}-1}{2^n}$</span>, for which <span class="math-container">$l=2$</span> and <span class="math-container">$$\lim\frac{x_{n+1}}{x_n}=1$$</span> and <span class="math-container">$$\lim\frac{x_{n+1}-2}{x_n-2}=\frac12.$$</span></p>
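<p>Numerically, the counterexample behaves as claimed (a quick sketch; note <span class="math-container">$x_n = 2 - 2^{-n}$</span>):</p>

```python
x = lambda n: (2**(n + 1) - 1) / 2**n  # x_n = 2 - 2^(-n), so l = 2

n = 40
ratio = x(n + 1) / x(n)                # tends to 1
shifted = (x(n + 1) - 2) / (x(n) - 2)  # tends to 1/2
assert abs(ratio - 1) < 1e-9
assert shifted == 0.5
```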
|
<p>One of the most annoying "features" of <em>Mathematica</em> is that the <code>Plot</code> family does extrapolation on <code>InterpolatingFunction</code>s without any warning. I'm sure this has been discussed to death before, but I cannot seem to find any reference. While I know how to overcome the problem by simply defining a global variable for the domain of the interpolation, from time to time I forget to do this, and then I spend days figuring out where the numerical error originates. This could be avoided if <code>Plot</code> were to give a warning.</p>
<p>Consider the following example. An ODE system is defined and integrated for two different time ranges:</p>
<pre><code>odes = {
a'[t] == -a[t] - .2 a[t]^2 + 2. b[t],
b'[t] == a[t] + .1 a[t]^2 - 1.1 b[t], a[0] == 1, b[0] == 1
};
sol100 = First@NDSolve[odes, {a, b}, {t, 0, 100}];
sol500 = First@NDSolve[odes, {a, b}, {t, 0, 500}];
</code></pre>
<p>Now querying the function value for a point outside of the range correctly gives a warning:</p>
<pre><code>(a /. sol100)[500]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {500} lies outside
the range of data in the interpolating function. Extrapolation will be used. >>
651.034
</code></pre>
</blockquote>
<p>The same is not done when we use the function in <code>Plot</code>:</p>
<pre><code>Show[
Plot[{a[t], b[t]} /. sol100, {t, 0, 400}, PlotStyle -> {Thick, Red}],
Plot[{a[t], b[t]} /. sol500, {t, 0, 400}, PlotStyle -> {Thick, Blue}]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/X7xaL.png" alt="Mathematica graphics"></p>
<p>I've tried to force a warning, to no avail. The following example won't give a warning.</p>
<pre><code>On[InterpolatingFunction::dmval]
Check[Plot[{a[t], b[t]} /. sol100, {t, 0, 500}], "Error",
InterpolatingFunction::dmval]
</code></pre>
<p>Interestingly, one can be sure that <code>InterpolatingFunction::dmval</code> is NOT turned off at all inside the <code>Plot</code> family. In the following example, <code>LogLinearPlot</code> is able to emit a warning about sampling from below the domain (which can be ignored as unrelated, see <a href="https://mathematica.stackexchange.com/q/5986/89">this post</a>; it also seems to be fixed in v9), but it does not give the same warning when sampling from <strong>above</strong> (> 100)!</p>
<pre><code>LogLinearPlot[{a[t], b[t]} /. sol100, {t, 0.1, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {-2.30241} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<p>It is even more disturbing to see that <code>Plot</code> checks the lower boundary but not the upper (thanks to <a href="https://mathematica.stackexchange.com/users/50/j-m">J.M.</a> for the comment):</p>
<pre><code>Plot[{a[t], b[t]} /. sol100, {t, -1, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {-0.989765} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<p>As <a href="https://mathematica.stackexchange.com/users/312/oleksandr-r">Oleksandr</a> has pointed out, it is not about lower vs. upper boundaries but first point vs. the rest. </p>
<pre><code>Plot[{a[t], b[t]} /. sol100, {t, 101, 500}]
</code></pre>
<blockquote>
<pre><code>InterpolatingFunction::dmval: Input value {101.008} lies outside the
range of data in the interpolating function. Extrapolation will be used. >>
</code></pre>
</blockquote>
<h2><strong>Questions</strong></h2>
<ol>
<li>Why <code>Plot</code> does not give a warning when extrapolating an <code>InterpolatingFunction</code>? Is there some higher-level consideration that justifies this behaviour, or is it a bug?</li>
<li>How can one force <code>Plot</code> to give a warning? Is there any workaround that forces <code>InterpolatingFunction::dmval</code> not to be attenuated inside <code>Plot</code>?</li>
</ol>
| Simon Woods | 862 | <p>As Rojo showed, the symbol <code>$Messages</code> appears to become unset somewhere inside the <code>Plot</code> internals. I wondered if it was possible to prevent this by setting the <code>Protected</code> and <code>Locked</code> attributes for <code>$Messages</code>, but it would seem that <code>Plot</code> has magic powers:</p>
<pre><code>SetAttributes[$Messages, {Protected, Locked}];
f = Interpolation[Range[5]];
g[x_] := (Print[Attributes[$Messages], " ", $Messages]; f[x]);
Plot[g[x], {x, 1, 6}]
</code></pre>
<p><img src="https://i.stack.imgur.com/eObYd.gif" alt="enter image description here"></p>
<p>You can see that after the first evaluation <code>$Messages</code> loses the <code>Protected</code> attribute and its value, though curiously it remains <code>Locked</code> throughout.</p>
<p>Since it seems impossible to stop <code>$Messages</code> losing its value, an imperfect workaround might be to reset it whenever the <code>InterpolatingFunction::dmval</code> message is generated.</p>
<pre><code>Quit[];
Unprotect[Message];
With[{mess = $Messages},
m : Message[InterpolatingFunction::dmval, __] :=
Block[{$mIFdmval = True}, $Messages = mess; m] /; !TrueQ[$mIFdmval]]
</code></pre>
<p>With this approach the automatic limit of 3 messages kicks in if the messages are generated outside of <code>Plot</code>:</p>
<pre><code>f = Interpolation[Range[5]];
Table[f[x], {x, 1, 10}]
</code></pre>
<p><img src="https://i.stack.imgur.com/eZRKZ.gif" alt="enter image description here"></p>
<p>Unfortunately, the automatic limit doesn't work with <code>Plot</code> (perhaps because <code>Plot</code> messes around with <code>$MessageList</code> as well), so you still get a long list of messages.</p>
|
188,158 | <p>I am interested in a function such that <code>f[m, i] = n</code> where <code>m, n</code> are positive integers and <code>n</code> is the <code>i</code>-th number relatively prime with <code>m</code>.</p>
<p>Getting a sample of the possible outputs of <code>f</code> is straightforward. For example, let <code>m = 30</code>. Now we can use</p>
<pre><code>list = 2 Range[0,29] + 1;
list = Pick[list, GCD[30, list], 1]
(*{1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59}*)
</code></pre>
<p>where I'm picking from odd numbers since <code>m</code> happens to be even. There should be a pattern in these numbers given by <code>EulerPhi[30]</code> (this is <code>8</code>) and indeed, <code>list[[;;8]] + 30</code> coincides with <code>list[[9;;16]]</code>. How to continue from here?</p>
| Αλέξανδρος Ζεγγ | 12,924 | <p>I give a naive implementation</p>
<pre><code>ithCoprime[m_, i_] := Module[{coprimes, j = 1},
coprimes = {1};
While[Length[coprimes] < i,
j++;
If[CoprimeQ[m, j], AppendTo[coprimes, j]]
];
Last[coprimes]
]
ithCoprime[30, #] & /@ Range[16]
</code></pre>
<blockquote>
<pre><code>{1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59}
</code></pre>
</blockquote>
<hr>
<p><strong>Update</strong></p>
<p>Here is a better version:</p>
<pre><code>ithCoprime2[m_, i_] := Module[{j = 1, k = 1},
While[k < i, j++;
If[CoprimeQ[m, j], k++]
];
j
]
</code></pre>
<hr>
<p><strong>Update 2</strong></p>
<p>Another version</p>
<pre><code>ithCoprime3[m_, i_] := Module[{iterate, predicate, initial},
iterate = # + {1, Boole[CoprimeQ[m, First[#]]]} &;
predicate = Last[#] <= i &;
initial = {1, 1};
NestWhile[iterate, initial, predicate, 1, \[Infinity], -1][[1]]
]
</code></pre>
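<p>For reference, the same naive scan can be sketched outside <em>Mathematica</em>, e.g. in Python (using <code>math.gcd</code> in place of <code>CoprimeQ</code>):</p>

```python
from math import gcd

def ith_coprime_naive(m, i):
    # walk upward, counting integers coprime to m (j = 1 always counts)
    j, k = 1, 1
    while k < i:
        j += 1
        if gcd(m, j) == 1:
            k += 1
    return j

assert [ith_coprime_naive(30, i) for i in range(1, 17)] == \
    [1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59]
```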
|
188,158 | <p>I am interested in a function such that <code>f[m, i] = n</code> where <code>m, n</code> are positive integers and <code>n</code> is the <code>i</code>-th number relatively prime with <code>m</code>.</p>
<p>Getting a sample of the possible outputs of <code>f</code> is straightforward. For example, let <code>m = 30</code>. Now we can use</p>
<pre><code>list = 2 Range[0,29] + 1;
list = Pick[list, GCD[30, list], 1]
(*{1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59}*)
</code></pre>
<p>where I'm picking from odd numbers since <code>m</code> happens to be even. There should be a pattern in these numbers given by <code>EulerPhi[30]</code> (this is <code>8</code>) and indeed, <code>list[[;;8]] + 30</code> coincides with <code>list[[9;;16]]</code>. How to continue from here?</p>
| KennyColnago | 3,246 | <p>To find relative primes, I've found <code>Complement</code> to be generally faster than <code>GCD</code> or <code>CoprimeQ</code>.</p>
<pre><code>RelativePrimes[m_Integer] :=
Complement[
Range[m - 1],
Apply[Sequence, Map[Range[#, m - 1, #] &, FactorInteger[m][[All, 1]]]]]
</code></pre>
<p>Your function <code>f</code> becomes the following.</p>
<pre><code>f[m_, i_] :=
Block[{n = RelativePrimes[m], e = EulerPhi[m]},
n[[Mod[i, e, 1]]] + m * Quotient[i - 1, e]
]
SetAttributes[f,Listable]
</code></pre>
<p>Thus,</p>
<pre><code>f[30,Range[20]]
</code></pre>
<blockquote>
<p>{1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53, 59, 61,
67, 71, 73}</p>
</blockquote>
<pre><code>f[902,555]
</code></pre>
<blockquote>
<p>1251</p>
</blockquote>
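<p>The periodicity trick behind <code>f</code> translates directly to other languages; a Python sketch of the same idea:</p>

```python
from math import gcd

def ith_coprime(m, i):
    # the residues coprime to m repeat with period m, phi(m) per period
    residues = [r for r in range(1, m + 1) if gcd(r, m) == 1]
    q, r = divmod(i - 1, len(residues))
    return residues[r] + m * q

assert [ith_coprime(30, i) for i in range(1, 21)] == \
    [1, 7, 11, 13, 17, 19, 23, 29, 31, 37,
     41, 43, 47, 49, 53, 59, 61, 67, 71, 73]
assert ith_coprime(902, 555) == 1251
```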
|
2,534,999 | <p>I tried to solve $z^3=(iz+1)^3$. I noticed that $(iz+1)^3=i(z-1)^3$ so $(\frac{z-1}{z})^3=i$. How to finish it?</p>
| Community | -1 | <p><strong>Hint:</strong></p>
<p>Factor as</p>
<p>$$z^3-(iz+1)^3=(z-(iz+1))(z^2+z(iz+1)+(iz+1)^2)$$ and you have a linear and a quadratic equation.</p>
<p>Alternatively, considering the three cubic roots of unity, solve</p>
<p>$$z=\sqrt[3]1(iz+1).$$</p>
|
865,598 | <p>How can I calculate this value?</p>
<p>$$\cot\left(\sin^{-1}\left(-\frac12\right)\right)$$</p>
| Thoth19 | 76,241 | <p>You should probably have memorized values like the sine of $30$ degrees; in particular, $\sin(30^\circ) = 1/2$, so $\arcsin(-1/2) = -30^\circ$.<br>
Now we want the cotangent of that. Cotangent is cosine over sine:<br>
$\cos(-30^\circ) = \cos(30^\circ) = \sqrt{3}/2$<br>
$\sin(-30^\circ) = -\sin(30^\circ) = -1/2$<br>
Thus, the final answer is $-\sqrt{3}$.</p>
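<p>A quick numerical check (sketch):</p>

```python
import math

theta = math.asin(-0.5)                  # arcsin(-1/2) = -pi/6, i.e. -30 degrees
assert math.isclose(math.degrees(theta), -30.0)

cot = math.cos(theta) / math.sin(theta)  # cotangent = cosine / sine
assert math.isclose(cot, -math.sqrt(3))
```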
|
865,598 | <p>How can I calculate this value?</p>
<p>$$\cot\left(\sin^{-1}\left(-\frac12\right)\right)$$</p>
| Steven Alexis Gregory | 75,410 | <p><a href="https://i.stack.imgur.com/JAVKR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JAVKR.jpg" alt="enter image description here"></a></p>
<p>The values of arcsine lie in the first and the fourth quadrants. If
<span class="math-container">$\theta = \arcsin\left(-\dfrac 12 \right)$</span>, then <span class="math-container">$\theta$</span> corresponds to the point
<span class="math-container">$(x,y)=(\sqrt 3, -1)$</span> with length <span class="math-container">$r=2$</span>, because
<span class="math-container">$\sin \theta = \dfrac yr = \dfrac{-1}{2}$</span>, and the corresponding reference triangle shown above. Then <span class="math-container">$\cot\left(\arcsin\left(-\dfrac12\right)\right) = \dfrac xy = -\sqrt 3$</span>.</p>
|
3,773,856 | <p>I'm having trouble with part of a question on Cardano's method for solving cubic polynomial equations. This is a multi-part question, and I have been able to answer most of it. But I am having trouble with the last part. I think I'll just post here the part of the question that I'm having trouble with.</p>
<p>We have the depressed cubic equation :
<span class="math-container">\begin{equation}
f(t) = t^{3} + pt + q = 0
\end{equation}</span>
We also have what I believe is the negative of the discriminant :
<span class="math-container">\begin{equation}
D = 27 q^{2} + 4p^{3}
\end{equation}</span>
We assume <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are both real and <span class="math-container">$D < 0$</span>. We also have the following polynomial in two variables (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) that results from a variable transformation <span class="math-container">$t = u+v$</span> :
<span class="math-container">\begin{equation}
u^{3} + v^{3} + (3uv + p)(u+v) + q = 0
\end{equation}</span>
You also have the quadratic polynomial equation :
<span class="math-container">\begin{equation}
x^{2} + qx - \frac{p^{3}}{27} = 0
\end{equation}</span>
The solutions to the 2-variable polynomial equation satisfy the following constraints :
<span class="math-container">\begin{equation}
u^{3} + v^{3} = -q
\end{equation}</span>
<span class="math-container">\begin{equation}
uv = -\frac{p}{3}
\end{equation}</span>
The first section of this part of the larger question asks to prove that the solutions of the quadratic equation are non-real complex conjugates. Here the solutions to the quadratic are equal to <span class="math-container">$u^{3}$</span> and <span class="math-container">$v^{3}$</span> (this relationship between the quadratic polynomial and the polynomial in two variables was proven in an earlier part of the question). I was able to do this part. The second part of this sub-question is what I'm having trouble with.</p>
<p>The question says, let :
<span class="math-container">\begin{equation}
u = r\cos(\theta) + ir\sin(\theta)
\end{equation}</span>
<span class="math-container">\begin{equation}
v = r\cos(\theta) - ir\sin(\theta)
\end{equation}</span>
The question then asks the reader to prove that the depressed cubic equation has three real roots :
<span class="math-container">\begin{equation}
2r\cos(\theta) \text{ , } 2r\cos\left( \theta + \frac{2\pi}{3} \right) \text{ , } 2r\cos\left( \theta + \frac{4\pi}{3} \right)
\end{equation}</span>
In an earlier part of the question they had the reader prove that given :
<span class="math-container">\begin{equation}
\omega = \frac{-1 + i\sqrt{3}}{2}
\end{equation}</span>
s.t. :
<span class="math-container">\begin{equation}
\omega^{2} = \frac{-1 - i\sqrt{3}}{2}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
\omega^{3} = 1
\end{equation}</span>
that if <span class="math-container">$(u,v)$</span> is a root of the polynomial in two variables then so are :
<span class="math-container">$(u\omega,v\omega^{2})$</span> and <span class="math-container">$(u\omega^{2},v\omega)$</span>. I think that the part of the question I'm having trouble with is similar. I suspect that :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{2\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{1}
\end{equation}</span>
and :
<span class="math-container">\begin{equation}
2r \cos\left( \theta + \frac{4\pi}{3} \right) = u\omega + v\omega^{2} \text{ or } u\omega^{2} + v\omega \tag{2}
\end{equation}</span>
I have derived that :
<span class="math-container">\begin{equation}
\omega = \cos(\phi) + i\sin(\phi)
\end{equation}</span>
where <span class="math-container">$\phi = \frac{2\pi}{3}$</span>. Also :
<span class="math-container">\begin{equation}
\omega^{2} = \cos(2\phi) + i \sin(2\phi)
\end{equation}</span>
So that the goal of the question may be to prove equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>. I have tried to do this but haven't been able to.</p>
<p>Am I approaching this question in the correct way? If so, can someone show me how to use trigonometric identities to prove equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>?</p>
| Paul Frost | 349,785 | <p>Let <span class="math-container">$w(\alpha) = \cos \alpha + i\sin \alpha$</span>. Then
<span class="math-container">$$w(\alpha) w(\beta) = (\cos\alpha + i \sin \alpha)(\cos \beta + i\sin \beta) \\ =\cos\alpha \cos \beta - \sin \alpha \sin \beta +i(\cos\alpha \sin \beta + \sin \alpha \cos \beta) = \cos(\alpha + \beta) + i \sin(\alpha + \beta) \\= w(\alpha + \beta) .$$</span>
An easier way to see this is to write <span class="math-container">$w(\alpha) = e^{i\alpha}$</span>. Then
<span class="math-container">$$w(\alpha) w(\beta) = e^{i\alpha}e^{i\beta} = e^{i(\alpha + \beta)} = w(\alpha + \beta) .$$</span></p>
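<p>This identity is easy to confirm numerically (a sketch; the angles are arbitrary):</p>

```python
import cmath
import math

def w(alpha):
    # w(alpha) = cos(alpha) + i sin(alpha) = e^{i alpha}
    return complex(math.cos(alpha), math.sin(alpha))

a, b = 0.4, 1.1
assert cmath.isclose(w(a) * w(b), w(a + b))
assert cmath.isclose(w(a), cmath.exp(1j * a))
```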
<p>We have
<span class="math-container">$$u\omega = rw(\theta)w(\phi) = rw(\theta+\phi) ,$$</span>
<span class="math-container">$$u\omega^2 = rw(\theta)w(2\phi) = rw(\theta+2\phi) .$$</span>
Moreover, since <span class="math-container">$v = \overline u$</span> and <span class="math-container">$\omega^2 = \overline \omega$</span>, we get
<span class="math-container">$$v\omega^2 = \overline u \cdot \overline \omega = \overline{u\omega} ,$$</span>
thus
<span class="math-container">$$u\omega + v\omega^2 = 2\Re (u\omega) = 2r\cos(\theta + \phi) = 2r\cos(\theta + 2\pi/3) .$$</span>
Similarly
<span class="math-container">$$v\omega = \overline u \cdot \overline {\omega^2} = \overline{u\omega^2},$$</span>
thus
<span class="math-container">$$u\omega^2 + v\omega = 2\Re (u\omega^2) = 2r\cos(\theta + 2\phi) = 2r\cos(\theta + 4\pi/3) .$$</span></p>
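<p>As a quick numeric sanity check of the two identities just derived (the values of <span class="math-container">$r$</span> and <span class="math-container">$\theta$</span> below are arbitrary sample choices):</p>

```python
import cmath, math

r, theta = 1.7, 0.4                      # arbitrary sample values
u = r * cmath.exp(1j * theta)            # u = r(cos θ + i sin θ)
v = u.conjugate()                        # v is the conjugate of u
w = cmath.exp(2j * math.pi / 3)          # ω = cos φ + i sin φ, φ = 2π/3

# uω + vω² = 2 Re(uω) = 2r cos(θ + 2π/3)
assert abs(u*w + v*w**2 - 2*r*math.cos(theta + 2*math.pi/3)) < 1e-12
# uω² + vω = 2 Re(uω²) = 2r cos(θ + 4π/3)
assert abs(u*w**2 + v*w - 2*r*math.cos(theta + 4*math.pi/3)) < 1e-12
```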
<p><strong>Edited:</strong></p>
<p>In my opinion it is an odd approach to apply Cardano's formula and then translate the result into a trigonometric form. A direct approach is via <em>angle trisection</em>. By de Moivre's formula we have
<span class="math-container">$$\cos\phi + i\sin\phi = (\cos(\phi/3) + i\sin(\phi/3))^3$$</span>
which gives
<span class="math-container">$$\cos \phi = \cos^3(\phi/3) -3\cos(\phi/3)\sin^2(\phi/3)\\ = \cos^3(\phi/3) -3\cos(\phi/3)(1- \cos^2(\phi/3)) = 4 \cos^3(\phi/3) - 3 \cos(\phi/3) .$$</span>
Writing <span class="math-container">$\theta = \phi/3$</span> and <span class="math-container">$x = 2\cos \theta$</span> gives us the cubic <em>angle trisection equation</em>
<span class="math-container">$$x^3 - 3x = 2\cos \phi \tag{1}.$$</span>
By construction it has the obvious solution <span class="math-container">$x_0 = 2\cos \theta$</span>. But since <span class="math-container">$\cos \phi = \cos (\phi + 2\pi) = \cos (\phi + 4 \pi)$</span>, it also has the solutions <span class="math-container">$x_1 = 2 \cos((\phi + 2\pi)/3) = 2\cos (\theta + 2\pi/3)$</span>, <span class="math-container">$x_2 = 2 \cos((\phi + 4\pi)/3) = 2\cos (\theta + 4\pi/3)$</span>.</p>
<p>Under the assumption that <span class="math-container">$p, q$</span> are real and <span class="math-container">$D = 27q^2 + 4 p^3<0$</span> it is possible to reduce the general equation
<span class="math-container">$$t^3 + pt + q = 0 \tag{2}$$</span>
to the angle trisection equation (1). Since <span class="math-container">$D < 0$</span>, we must have <span class="math-container">$p < 0$</span>. Note that therefore <span class="math-container">$D < 0$</span> is equivalent to <span class="math-container">$27q^2/(-4p^3) < 1$</span>.</p>
<p>Let us write <span class="math-container">$t = cx$</span>. Then
<span class="math-container">$$x^3 + (p/c^2)x = -q/c^3 .$$</span>
With <span class="math-container">$c = \sqrt{-p/3} > 0$</span> we get
<span class="math-container">$$x^3 -3x = 2(-q/2c^3) .$$</span>
But
<span class="math-container">$$(-q/2c^3)^2 = q^2 /4(-p/3)^3 = 27q^2/(-4p^3) < 1$$</span>
which means that
<span class="math-container">$$-q/2c^3 \in (-1,1) .$$</span>
Therefore <span class="math-container">$\phi = \arccos(-q/2c^3)$</span> is a well-defined number in <span class="math-container">$(0,2\pi)$</span> and we get the cubic equation (1) with solutions <span class="math-container">$x_k$</span> as above. Therefore the solutions of (2) are
<span class="math-container">$$t_k = 2\sqrt{-p/3}\cos(\phi/3 + 2k\pi/3) , k = 0,1,2 .$$</span></p>
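<p>A numeric sketch of the resulting formula, with sample values of <span class="math-container">$p$</span> and <span class="math-container">$q$</span> chosen so that <span class="math-container">$D < 0$</span>:</p>

```python
import math

p, q = -6.0, 2.0                         # sample cubic t³ + pt + q with D < 0
assert 27*q**2 + 4*p**3 < 0              # D = 27q² + 4p³ (three real roots)

c = math.sqrt(-p/3)
phi = math.acos(-q / (2*c**3))
roots = [2*c*math.cos(phi/3 + 2*k*math.pi/3) for k in range(3)]

for t in roots:
    assert abs(t**3 + p*t + q) < 1e-9    # each t_k solves the cubic
```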
|
2,972,355 | <p>How do I convert a sentence that contains “no more than 3” into a predicate logic sentence?</p>
<p>For example: "No more than three <span class="math-container">$x$</span> satisfy <span class="math-container">$R(x)$</span>"
using predicate logic. </p>
<p>This is what I have for "exactly one <span class="math-container">$x$</span> satisfies <span class="math-container">$R(x)$</span>":
<span class="math-container">$\exists x(R(x) \land \forall y(R(y) \rightarrow (x = y)))$</span></p>
| Anguepa | 196,351 | <p><span class="math-container">$$
\forall x \forall y \forall z \forall u ((R(x)\wedge R(y) \wedge R(z) \wedge R(u)) \rightarrow (x=y \vee x=z \vee x=u \vee y=z \vee y=u \vee z=u))
$$</span></p>
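<p>The formula can be sanity-checked by brute force over a small finite domain: it should hold exactly when at most three elements satisfy <span class="math-container">$R$</span>. A sketch in Python (the predicate body is a direct transcription of the formula above):</p>

```python
from itertools import product

def no_more_than_three(domain, R):
    # ∀x∀y∀z∀u ((R(x) ∧ R(y) ∧ R(z) ∧ R(u)) →
    #           (x=y ∨ x=z ∨ x=u ∨ y=z ∨ y=u ∨ z=u))
    return all(
        not (R(x) and R(y) and R(z) and R(u))
        or x == y or x == z or x == u or y == z or y == u or z == u
        for x, y, z, u in product(domain, repeat=4)
    )

domain = range(5)
# check against every possible interpretation of R on a 5-element domain
for bits in product([False, True], repeat=len(domain)):
    R = lambda i, bits=bits: bits[i]
    assert no_more_than_three(domain, R) == (sum(bits) <= 3)
```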
|
756,236 | <p>The question is to write the general solution for this recurrence relation:</p>
<p>$y_{k+2} - 4y_{k+1} + 3y_{k} = -4k$.</p>
<p>I first solved the homogeneous equation $y_{k+2} - 4y_{k+1} + 3y_{k} = 0$, by writing the auxiliary equation $r^2 - 4r + 3 = (r-3)(r-1) = 0$. Thus $y_k^{h} = c_1(1)^k + c_2 (3)^k$. The general solution is just $y_k^{gen} = y_k^{h} + y_k^{p}$. My trouble is coming up with a particular solution. I keep coming up with $y_k^{p} = 2k^2$, which doesn't work, but $k^2$ works, so my answer is close. I've gone through the arithmetic several times and cannot spot the mistake; here's the work:</p>
<p>The particular solution is of the form $y_k^{p} = a + bk$. Plugging into the recurrence relation: $a + b(k+2) - 4(a + b(k+1)) + 3(a + bk) = (a - 4a + 3a) + (bk - 4bk + 3bk) + (2b - 4b) = 0 + 0 - 2b = -4k$.</p>
<p>Thus $b = 2k$, and since our $y_k^p = a + bk$, it doesn't matter what we pick $a$ to be, so choose $a = 0$, which gives us $y_k^p = 2k^2$.</p>
<p>However, $2k^2$ doesn't satisfy the recurrence relation:
$2(k+2)^2 - 8(k+1)^2 + 6k^2 = (2k^2 - 8k^2 + 6k^2) + (8k - 16k) + (8 - 8) = -8k \ne -4k$.</p>
<p>Where is the error in my reasoning? I know $y_k^p = k^2$ works, but why do I keep coming up with $2k^2$.</p>
| Mark Bennet | 2,906 | <p>Your error is to put $b=2k$, when $b$ is a constant. Notice that one of the solutions you have to the homogeneous equation is $1^k$. This means that you have to increase the degree of your particular solution and try $a+bk+ck^2$.</p>
<p>Because $1$ is a root of the auxiliary equation, the term in $a$ will simply go to zero (try it). The term in $b$ goes to a constant (as you have computed, in fact) and would help if you had a constant term on the right-hand side, and the term in $c$ will be what you need for the $k$ term. It works like this (you will see that all the leading coefficients cancel):</p>
<p>$$a+b(k+2)+c(k+2)^2-4a-4b(k+1)-4c(k+1)^2+3a+3bk+3ck^2 =$$$$2b+4ck+4c-4b-8ck-4c=-2b-4ck$$ From which you get $b=0, c=1$</p>
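<p>A quick numeric check that $y_k^p = k^2$, together with any homogeneous part $c_1 + c_2\,3^k$, satisfies the recurrence:</p>

```python
def lhs(y, k):
    # left-hand side of the recurrence y_{k+2} - 4 y_{k+1} + 3 y_k
    return y(k + 2) - 4*y(k + 1) + 3*y(k)

c1, c2 = 5, -2                              # arbitrary homogeneous coefficients
general = lambda k: c1 + c2*3**k + k**2     # y_k = c1·1^k + c2·3^k + k²

for k in range(20):
    assert lhs(lambda k: k**2, k) == -4*k   # particular solution y_k = k²
    assert lhs(general, k) == -4*k          # general solution also works
```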
|
756,236 | <p>The question is to write the general solution for this recurrence relation:</p>
<p>$y_{k+2} - 4y_{k+1} + 3y_{k} = -4k$.</p>
<p>I first solved the homogeneous equation $y_{k+2} - 4y_{k+1} + 3y_{k} = 0$, by writing the auxiliary equation $r^2 - 4r + 3 = (r-3)(r-1) = 0$. Thus $y_k^{h} = c_1(1)^k + c_2 (3)^k$. The general solution is just $y_k^{gen} = y_k^{h} + y_k^{p}$. My trouble is coming up with a particular solution. I keep coming up with $y_k^{p} = 2k^2$, which doesn't work, but $k^2$ works, so my answer is close. I've gone through the arithmetic several times and cannot spot the mistake; here's the work:</p>
<p>The particular solution is of the form $y_k^{p} = a + bk$. Plugging into the recurrence relation: $a + b(k+2) - 4(a + b(k+1)) + 3(a + bk) = (a - 4a + 3a) + (bk - 4bk + 3bk) + (2b - 4b) = 0 + 0 - 2b = -4k$.</p>
<p>Thus $b = 2k$, and since our $y_k^p = a + bk$, it doesn't matter what we pick $a$ to be, so choose $a = 0$, which gives us $y_k^p = 2k^2$.</p>
<p>However, $2k^2$ doesn't satisfy the recurrence relation:
$2(k+2)^2 - 8(k+1)^2 + 6k^2 = (2k^2 - 8k^2 + 6k^2) + (8k - 16k) + (8 - 8) = -8k \ne -4k$.</p>
<p>Where is the error in my reasoning? I know $y_k^p = k^2$ works, but why do I keep coming up with $2k^2$.</p>
| vonbrand | 43,946 | <p>As a side remark, it is best to use generating functions and solve the equation in one go. Define $g(z) = \sum_{k \ge 0} y_k z^k$, multiply the recurrence by $z^k$, sum over $k \ge 0$, and recognize a few sums:
\begin{align}
\sum_{k \ge 0} y_{k + r} z^k
&= \frac{g(z) - y_0 - y_1 z - \ldots - y_{r - 1} z^{r - 1}}{z^r} \\
\sum_{k \ge 0} k z^k
&= z \frac{\mathrm{d}}{\mathrm{d} z} \frac{1}{1 - z} \\
&= \frac{z}{(1 - z)^2}
\end{align}
and so get:
$$
\frac{g(z) - y_0 -y_1 z}{z^2}
- 4 \frac{g(z) - y_0}{z}
+ 3 g(z)
= - 4 \frac{z}{(1 - z)^2}
$$
Written as partial fractions:
$$
g(z) = \frac{y_1 - y_0 - 1}{2 (1 - 3 z)}
+ \frac{3 + 3 y_0 - y_1}{2 (1 - z)}
- \frac{3}{(1 - z)^2}
+ \frac{2}{(1 - z)^3}
$$
Your particular solution comes from the terms that don't include the initial values.
The generalized binomial theorem for negative integer powers gives for them:
\begin{align}
- 3 \binom{-2}{k} (-1)^k + 2 \binom{-3}{k} (-1)^k
&= -3 \binom{k + 2 - 1}{2 - 1} + 2 \binom{k + 3 - 1}{3 - 1} \\
&= -3 (k + 1) + 2 \frac{(k + 2) (k + 1)}{2} \\
&= k^2 - 1
\end{align}</p>
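<p>The closed form can be checked numerically: iterate the recurrence from arbitrary initial values and fit the two homogeneous coefficients $A$ and $B$ from $y_0$ and $y_1$ (the $k^2 - 1$ term is the particular part extracted above):</p>

```python
def iterate(y0, y1, N=12):
    ys = [y0, y1]
    for k in range(N - 2):
        ys.append(4*ys[-1] - 3*ys[-2] - 4*k)   # y_{k+2} = 4 y_{k+1} - 3 y_k - 4k
    return ys

for y0, y1 in [(0, 0), (1, 2), (-3, 5)]:
    ys = iterate(y0, y1)
    # y_k = A·3^k + B + (k² - 1); solve A + B - 1 = y0 and 3A + B = y1
    A = (y1 - y0 - 1) / 2
    B = y0 + 1 - A
    for k, y in enumerate(ys):
        assert abs(y - (A*3**k + B + k**2 - 1)) < 1e-9
```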
|
4,252,431 | <p>Let the constant <span class="math-container">$\alpha > 0$</span> be the problem
<span class="math-container">$$\left\{\begin{array}{cll}
u_t + \alpha u_x & = & f(x,t); \ \ 0 < x < L; \ t > 0\\
u(0,t) & = & 0; \ \ t > 0;\\
u(x,0) & = & 0; \ \ 0 < x < L.
\end{array}\right.$$</span>
Prove that, for every <span class="math-container">$t > 0$</span>, the following applies:
<span class="math-container">$$\int_{0}^L |u(x,t)|^2dx \leq \int_{0}^L \int_{0}^t |f(x,s)|^2 dsdx.$$</span></p>
<p><strong>TIP:</strong> Uses Gronwall Inequality.</p>
<p><strong>Outline:</strong> I tried to use the fact that
<span class="math-container">$$u(x,t) = \int_{0}^t f(x + (s-t)\alpha,s)ds$$</span>
is the solution to the above problem when <span class="math-container">$u(x,0) = 0$</span>. Then I used Holder inequality to get to
<span class="math-container">$$|u(x,t)|^2 \leq t\int_{0}^t |f(x + (s-t)\alpha,s)|^2ds.$$</span>
Then I got stuck because I couldn't apply the Gronwall inequality satisfactorily...</p>
| José Carlos Santos | 446,262 | <p>The approach is fine, but since you did not show us your computations, I cannot tell you whether or not the full solution is correct.</p>
<p>Here's how I would do it. Note that<span class="math-container">\begin{align}x+xy^2=y+yx^2&\iff x-y=yx^2-xy^2\\&\iff x-y=xy(x-y)\end{align}</span>and so if <span class="math-container">$x\ne y$</span>, <span class="math-container">$xy=1$</span>. But (still assuming that <span class="math-container">$x\ne y$</span>)<span class="math-container">\begin{align}x+xy^4=y+yx^4&\iff x-y=xy(x^3-y^3)=xy(x-y)(x^2+xy+y^2)\\&\iff1=x^2+1+y^2\text{ (since $xy=1$ and $x-y\ne0$)}\\&\iff x^2+y^2=0\\&\iff x=y=0.\end{align}</span>But we were assuming that <span class="math-container">$x\ne y$</span>. So, there is no solution with <span class="math-container">$x\ne y$</span>.</p>
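<p>Both factorizations used above can be checked numerically at random points (a quick sketch):</p>

```python
from random import random

for _ in range(100):
    x, y = random()*4 - 2, random()*4 - 2
    # x + xy² - (y + yx²) = (x - y)(1 - xy)
    lhs1 = (x + x*y**2) - (y + y*x**2)
    assert abs(lhs1 - (x - y)*(1 - x*y)) < 1e-9
    # x + xy⁴ - (y + yx⁴) = (x - y)(1 - xy(x² + xy + y²))
    lhs2 = (x + x*y**4) - (y + y*x**4)
    assert abs(lhs2 - (x - y)*(1 - x*y*(x**2 + x*y + y**2))) < 1e-9
```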
|
4,252,431 | <p>Let the constant <span class="math-container">$\alpha > 0$</span> be the problem
<span class="math-container">$$\left\{\begin{array}{cll}
u_t + \alpha u_x & = & f(x,t); \ \ 0 < x < L; \ t > 0\\
u(0,t) & = & 0; \ \ t > 0;\\
u(x,0) & = & 0; \ \ 0 < x < L.
\end{array}\right.$$</span>
Prove that, for every <span class="math-container">$t > 0$</span>, the following applies:
<span class="math-container">$$\int_{0}^L |u(x,t)|^2dx \leq \int_{0}^L \int_{0}^t |f(x,s)|^2 dsdx.$$</span></p>
<p><strong>TIP:</strong> Uses Gronwall Inequality.</p>
<p><strong>Outline:</strong> I tried to use the fact that
<span class="math-container">$$u(x,t) = \int_{0}^t f(x + (s-t)\alpha,s)ds$$</span>
is the solution to the above problem when <span class="math-container">$u(x,0) = 0$</span>. Then I used Holder inequality to get to
<span class="math-container">$$|u(x,t)|^2 \leq t\int_{0}^t |f(x + (s-t)\alpha,s)|^2ds.$$</span>
Then I got stuck because I couldn't apply the Gronwall inequality satisfactorily...</p>
| Donald Splutterwit | 404,247 | <p>Square the equation <span class="math-container">$x(1+y^2)=y(1+x^2)$</span> and we have (Note that <span class="math-container">$2x^2y^2$</span> will cancel)
<span class="math-container">\begin{eqnarray*}
x^2(1+y^4)=y^2(1+x^4).
\end{eqnarray*}</span>
Now divide by the first equation and we have <span class="math-container">$x=y$</span>.</p>
|
1,767,682 | <p>I was thinking about sequences, and my mind came to one defined like this:</p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1, 1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next n terms of the sequence are 1, followed by -1, and so on. Which led me to perhaps a stronger example, </p>
<p>-1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, 1, -1, ...</p>
<p>Where the first term is -1, and after the nth occurrence of -1 in the sequence, the next $2^n$ terms of the sequence are 1, followed by -1, and so on.</p>
<p>By the definition of convergence or by Cauchy's criterion, the sequence does not converge, as any N one may choose to define will have an occurrence of -1 after it, which must occur within the next N terms (and certainly this bound could be decreased)</p>
<p>However, due to the decreasing frequency of -1 in the sequence, I would be tempted to say that there is some intuitive way in which this sequence converges to 1. Is there a different notion of convergence that captures the way in which this sequence behaves?</p>
| André Nicolas | 6,312 | <p>The <a href="https://en.wikipedia.org/wiki/Ces%C3%A0ro_mean" rel="nofollow">Cesaro mean</a> accomplishes something that is close to your intuition.</p>
<p>For the sequence $a_1,a_2,a_3,\cdots$, the Cesaro mean is the limit, if it exists, of the sequence $(b_n)$, where $b_n=\frac{a_1+\cdots+a_n}{n}$.</p>
<p>If the limit of $(a_n)$ is $a$, then the Cesaro mean of the sequence $(a_n)$ is $a$. But the Cesaro mean of the sequence $(a_n)$ may exist when the limit does not. The Cesaro mean of the sequence in your post is $1$, as desired.</p>
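<p>A numerical illustration with the doubling-run sequence from the question (the exact run lengths only affect the rate of convergence):</p>

```python
def sequence(n_terms):
    out, run = [], 1
    while len(out) < n_terms:
        out.append(-1)
        out.extend([1] * run)   # the run of 1s doubles after each -1
        run *= 2
    return out[:n_terms]

a = sequence(100_000)
partial, means = 0, []
for n, x in enumerate(a, 1):
    partial += x
    means.append(partial / n)   # b_n = (a_1 + ... + a_n) / n

assert means[-1] > 0.99         # the Cesaro means approach 1
```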
|
3,356,951 | <p>On SAT,scores range from 2000 to 2400, with two thirds of the scores falling in the range of 2200 to 2300. If we further assume that test scores are normally distributed in this range from 2000 to 2400, determine the mean and standard deviation.</p>
| BruceET | 221,800 | <p>I believe there is a mistake in the statement of your problem.
By symmetry, and for things to work out as intended, '2200' has to be '2100'.</p>
<p>The Empirical Rule says that 68% of observations fall within one standard deviation of the mean. What you request can be done
only approximately, so we will use 68% instead of <span class="math-container">$2/3.$</span>
Then the population mean needs to be halfway between 2100 and 2300; <span class="math-container">$\mu = 2200.$</span> Also, that implies that <span class="math-container">$\sigma = 100.$</span></p>
<p>Then using the CDF function <code>pnorm</code> in R, we can find the probability of a score less than 2000 and greater than 2400
for a test with scores distributed <span class="math-container">$\mathsf{Norm}(\mu=2200, \sigma=100).$</span> About 2.27% of students will have scores in each
'tail' of the distribution.</p>
<pre><code>pnorm(2000, 2200, 100)
[1] 0.02275013
1 - pnorm(2400, 2200, 100)
[1] 0.02275013
</code></pre>
<p>The figure below shows the normal distribution of SAT scores. Vertical red lines mark values in the discussion above.</p>
<p><a href="https://i.stack.imgur.com/2SOlJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2SOlJ.png" alt="enter image description here"></a></p>
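<p>For readers without R, the same numbers can be reproduced with Python's standard library:</p>

```python
from statistics import NormalDist

scores = NormalDist(mu=2200, sigma=100)
lower_tail = scores.cdf(2000)            # P(X < 2000)
upper_tail = 1 - scores.cdf(2400)        # P(X > 2400)

assert abs(lower_tail - 0.02275) < 1e-4
assert abs(upper_tail - 0.02275) < 1e-4
# about 68.27% of scores land within one standard deviation, [2100, 2300]
assert abs(scores.cdf(2300) - scores.cdf(2100) - 0.6827) < 1e-3
```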
|
2,810,008 | <p>Can I investigate this limit and if yes, how? $${i^∞}$$</p>
<p>I am at a loss for ideas; maybe it is undefined?</p>
| Rhys Hughes | 487,658 | <p>Some sequences cannot converge because their values are confined to a finite cycle. In this case $i^n$ is equal to either $\pm1$ or $\pm i$, with no other values. Since the sequence cycles through these four values forever, $\lim_{n\to\infty}{i^n}$ does not exist: there is no way to single out one of the four values as the limit.</p>
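<p>The four-value cycle is easy to see computationally:</p>

```python
z, powers = 1, []
for n in range(12):
    powers.append(z)                  # z equals i**n at this point
    z *= 1j

assert powers[:4] == [1, 1j, -1, -1j]
assert powers[4:8] == powers[:4]      # period 4: i^n never settles on a limit
```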
<p>Other such expressions are $(-1)^n$, which cycles between $1$ and $-1$ in a similar fashion, or $\sin(x)$ and $\cos(x)$, which oscillate between $-1$ and $1$ as $x\to\infty$ without converging.</p>
|
2,375,529 | <p>Let $H$ be a Hilbert space and let $T\in \mathcal{B}(H)$ such that $T$ is self-adjoint. I want to show that if $T$ is non-zero, then $T^n\neq 0$ for all $n\in \mathbb{N}$.</p>
<p>Suppose $n$ is the least positive integer such that $T^n=0$. Then for all $x,y\in H$, we have $\langle T^nx,y\rangle=0\implies \langle T^{n-2}x,T^2y\rangle=0$. From here, can I show that $T^{n-1}=0$? If that is possible, then I am done. Please suggest.</p>
| Michael L. | 153,693 | <p>This may be a bit heavy-handed, but we can consider the spectral theorem for bounded self-adjoint operators on Hilbert spaces:</p>
<p>For self-adjoint $T\in \mathcal{L}(H, H)$, there is an $L^2(X, \mathcal{M}, \mu)$, a unitary $U : L^2(X, \mathcal{M}, \mu)\to H$, and an essentially bounded measurable function $f : X\to \mathbb{R}$ such that $U^*MU = T$, where $[M\varphi](x) = f(x)\varphi(x)$. As $T$ and $M$ are unitarily equivalent, we will have $$\|T\| = \|M\| = \|f\|_{\infty}$$ Assuming that $T$ is nonzero, we will therefore have $\|f\|_{\infty} > 0$, i.e. $\lvert f\rvert > 0$ on some set $S\subseteq X$ of positive measure.</p>
<p>As $U^*U = UU^* = I$, we have that $U^*M^nU = T^n$, i.e. $T^n$ and $M^n$ are unitarily equivalent. Therefore, $\|T^n\| = \|M^n\| = \|f^n\|_{\infty} > 0$, as $\lvert f^n\rvert > 0$ on $S$.</p>
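<p>A concrete illustration with a random real symmetric matrix: by the unitary equivalence above, $\|T^n\| = \|T\|^n$ for self-adjoint $T$, so $T \ne 0$ forces $T^n \ne 0$:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2                        # self-adjoint (real symmetric), nonzero

norm = lambda M: np.linalg.norm(M, 2)    # operator (spectral) norm
for n in range(1, 8):
    Tn = np.linalg.matrix_power(T, n)
    assert np.isclose(norm(Tn), norm(T) ** n)   # ||T^n|| = ||T||^n
    assert norm(Tn) > 0                         # hence T^n is nonzero
```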
|
4,631,618 | <p>Consider this absolute value quadratic inequality</p>
<p><span class="math-container">$$ |x^2-4| < |x^2+2| $$</span></p>
<p>The right side is always positive for all real numbers, so the absolute value is not needed.</p>
<p>Now consider the cases for the left absolute value</p>
<ol>
<li><span class="math-container">$$ x^2-4 \geq 0 $$</span></li>
</ol>
<p>We get <span class="math-container">$$ x \geq \pm 2 $$</span></p>
<p>Solving the first case <span class="math-container">$$ x^2 - 4 < x^2+2 $$</span> gives <span class="math-container">$$ 0 < 6.$$</span> This is true for all real numbers; taking into consideration the boundary that <span class="math-container">$x$</span> has to be greater than or equal to <span class="math-container">$+2$</span> and <span class="math-container">$-2$</span>, the first part of the solution I think should be <span class="math-container">$$ L_1 = [2, \infty) $$</span></p>
<ol start="2">
<li><span class="math-container">$$ x^2-4 < 0$$</span> <span class="math-container">$$ x < \pm 2 $$</span></li>
</ol>
<p>Solving the second case I get <span class="math-container">$$ x > \pm 1 $$</span> The solution for the second case, considering the boundary from case 2, should be <span class="math-container">$$ L_2 = (-2,-1) \cup (1,2) $$</span> Final solution:</p>
<p><span class="math-container">$$ L = (-2,-1) \cup (2,\infty) $$</span></p>
<p>According to the solutions, this is wrong; it should be <span class="math-container">$$ L = (-\infty,-1) \cup (1,\infty) $$</span> Rechecking their <span class="math-container">$L_1$</span> and <span class="math-container">$L_2$</span>: <span class="math-container">$L_2$</span> should be correct, but for <span class="math-container">$L_1$</span> they have <span class="math-container">$$ L_1 = R \backslash (-2,2) $$</span></p>
<p>So all real numbers except <span class="math-container">$-2$</span> and <span class="math-container">$2$</span>? Can anyone explain how this is the solution. First, the sign is greater than or equal to <span class="math-container">$+2$</span> and <span class="math-container">$-2$</span>; shouldn't those numbers be included? Also, we are looking for numbers GREATER than <span class="math-container">$-2$</span> and <span class="math-container">$+2$</span>; since <span class="math-container">$2$</span> is greater than <span class="math-container">$-2$</span>, I assumed we only need to take numbers from <span class="math-container">$2$</span> to infinity, so how does negative infinity come into consideration here?</p>
<p>Thanks in advance!</p>
| Stas Volkov | 1,101,398 | <p>Let</p>
<p><span class="math-container">$$f(x)=\sum_{n=1}^\infty \frac{x^n}{(2n-2)!}=\frac{x(e^{\sqrt x}+ e^{-\sqrt x })}2$$</span></p>
<p>Then</p>
<p><span class="math-container">$$x f'(x)=x \sum_{n=1}^\infty \frac{n x^{n-1}}{(2n-2)!}=\sum_{n=1}^\infty \frac{n x^n}{(2n-2)!}$$</span></p>
<p>and</p>
<p><span class="math-container">$$(x f'(x))'=\sum_{n=1}^\infty \frac{n^2 x^{n-1}}{(2n-2)!}$$</span></p>
<p>Now plug in <span class="math-container">$x=1$</span> into <span class="math-container">$(x f'(x))'=f'(x)+xf''(x)$</span>. This gives us the answer <span class="math-container">$\frac{5e}4$</span>.</p>
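<p>A quick numeric confirmation of the value <span class="math-container">$\frac{5e}{4}$</span>:</p>

```python
import math

# partial sum of Σ n² / (2n-2)! ; the terms decay factorially fast
s = sum(n**2 / math.factorial(2*n - 2) for n in range(1, 40))
assert abs(s - 5 * math.e / 4) < 1e-12
```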
|
2,938,424 | <p>Explain why the equation </p>
<p><span class="math-container">$$x^3 - 15x +1 = 0$$</span></p>
<p>has at least three solutions in the interval [-4,4].</p>
<p>My thoughts:</p>
<p><span class="math-container">$$f(x) = x^3 - 15x + 1$$</span>
<span class="math-container">$$f(-4) = -3 $$</span>
<span class="math-container">$$f(4) = 5 $$</span>
<span class="math-container">$$f(-4) < 0 < f(4)$$</span>
Therefore, by IVT, there exists some <span class="math-container">$c \in [-4,4]$</span> such that <span class="math-container">$f(c)=0$</span>.</p>
<p>However, I am unsure how to prove that there are at least three solutions...</p>
| Will Jagy | 10,400 | <p>You may have taken a transpose where not appropriate. In both cases, four columns, the vector created by summing <span class="math-container">$w c_1 + x c_2 + y c_3 + z c_4$</span> is exactly the result of multiplying the matrix on the left times the column vector
<span class="math-container">$$
V =
\left(
\begin{array}{c}
w \\
x \\
y \\
z
\end{array}
\right)
$$</span>
Finding the reduced row echelon form of the rectangular matrix leads to a way to find a null vector. For the first matrix, 3 by 4, it is guaranteed there is at least one nonzero vector <span class="math-container">$V$</span> that is a null vector. Write out <span class="math-container">$V,$</span> any nonzero entry is a column in the rectangular matrix that can be written in terms of the others. </p>
<p>In the 4 by 4 case, it is possible that the only <span class="math-container">$V$</span> that works is the zero vector. Not sure. If there is a nonzero <span class="math-container">$V,$</span> same as before </p>
<p>For the first problem, I got<br>
<span class="math-container">$$
V =
\left(
\begin{array}{c}
4 \\
-4 \\
7 \\
-1
\end{array}
\right)
$$</span>
or any nonzero multiple of that, so that any one of the four columns can be written in terms of the other three. Put the other way, with the four columns in the 3 by 4 matrix, we have
<span class="math-container">$$ 4 c_1 - 4 c_2 + 7 c_3 - c_4 = 0 $$</span></p>
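<p>A quick check of this linear relation among the four columns:</p>

```python
cols = [(2, -1, 2), (1, 1, 3), (0, 1, 1), (4, -1, 3)]   # c1, c2, c3, c4
coeffs = (4, -4, 7, -1)                                  # the null vector V

# 4 c1 - 4 c2 + 7 c3 - c4 = 0, componentwise
for row in range(3):
    assert sum(a * c[row] for a, c in zip(coeffs, cols)) == 0
```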
|
2,938,424 | <p>Explain why the equation </p>
<p><span class="math-container">$$x^3 - 15x +1 = 0$$</span></p>
<p>has at least three solutions in the interval [-4,4].</p>
<p>My thoughts:</p>
<p><span class="math-container">$$f(x) = x^3 - 15x + 1$$</span>
<span class="math-container">$$f(-4) = -3 $$</span>
<span class="math-container">$$f(4) = 5 $$</span>
<span class="math-container">$$f(-4) < 0 < f(4)$$</span>
Therefore, by IVT, there exists some <span class="math-container">$c \in [-4,4]$</span> such that <span class="math-container">$f(c)=0$</span>.</p>
<p>However, I am unsure how to prove that there are at least three solutions...</p>
| egreg | 62,967 | <p>If you perform Gaussian elimination on the matrix with the given vectors as <em>columns</em>, then the columns with no leading <span class="math-container">$1$</span> can be removed.</p>
<p><span class="math-container">\begin{align}
\begin{bmatrix}
2 & 1 & 0 & 4 \\
-1 & 1 & 1 & -1 \\
2 & 3 & 1 & 3
\end{bmatrix}
&\to
\begin{bmatrix}
1 & 1/2 & 0 & 2 \\
0 & 3/2 & 1 & 1 \\
0 & 2 & 1 & -1
\end{bmatrix}
&& \begin{aligned} R_1&\gets\tfrac{1}{2}R_1 \\ R_2&\gets R_2+R_1 \\ R_3&\gets R_3-2R_1\end{aligned}
\\&\to
\begin{bmatrix}
1 & 1/2 & 0 & 2 \\
0 & 1 & 2/3 & 2/3 \\
0 & 0 & -1/3 & -7/3
\end{bmatrix}
&& \begin{aligned} R_2&\gets\tfrac{2}{3}R_2 \\ R_3&\gets R_3-2R_2\end{aligned}
\\&\to
\begin{bmatrix}
1 & 1/2 & 0 & 2 \\
0 & 1 & 2/3 & 2/3 \\
0 & 0 & 1 & 7
\end{bmatrix}
&& R_3\gets -3R_3
\end{align}</span>
At this point we see that the fourth vector can be removed. If we also compute the RREF
<span class="math-container">\begin{align}
\begin{bmatrix}
1 & 1/2 & 0 & 2 \\
0 & 1 & 2/3 & 2/3 \\
0 & 0 & 1 & 7
\end{bmatrix}
&\to
\begin{bmatrix}
1 & 1/2 & 0 & 2 \\
0 & 1 & 0 & -4 \\
0 & 0 & 1 & 7
\end{bmatrix}
&& R_2\gets R_2-\frac{2}{3}R_3
\\&\to
\begin{bmatrix}
1 & 0 & 0 & 4 \\
0 & 1 & 0 & -4 \\
0 & 0 & 1 & 7
\end{bmatrix}
&& R_1\gets R_1-\frac{1}{2}R_2
\end{align}</span>
We see that
<span class="math-container">$$
\begin{bmatrix}
4 \\
-1 \\
3
\end{bmatrix}=
4\,
\begin{bmatrix}
2 \\
-1 \\
2
\end{bmatrix}\,
-4\,
\begin{bmatrix}
1 \\
1 \\
3
\end{bmatrix}\,
+7\,
\begin{bmatrix}
0 \\
1 \\
1
\end{bmatrix}
$$</span></p>
<p>The reason is that performing elementary row operations doesn't modify the linear relations between columns. On the contrary, row operations generally <em>do</em> modify linear relations between rows, which should be clear because if the matrix has lower rank than the number of rows, the last rows in the reduced form are zero by construction.</p>
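<p>The same elimination can be reproduced with exact rational arithmetic; a self-contained sketch of the standard RREF algorithm confirms that the fourth column is non-pivot and carries the coefficients <span class="math-container">$4, -4, 7$</span>:</p>

```python
from fractions import Fraction

def rref(rows):
    M = [[Fraction(x) for x in r] for r in rows]
    pivot_cols, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]        # swap pivot row into place
        M[r] = [x / M[r][c] for x in M[r]] # normalize pivot to 1
        for i in range(len(M)):            # eliminate above and below
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivot_cols.append(c)
        r += 1
    return M, pivot_cols

R, pivots = rref([[2, 1, 0, 4], [-1, 1, 1, -1], [2, 3, 1, 3]])
assert pivots == [0, 1, 2]                    # fourth column is non-pivot
assert [row[3] for row in R] == [4, -4, 7]    # c4 = 4 c1 - 4 c2 + 7 c3
```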
|
4,451,894 | <p><strong>Problem</strong><br />
There is a knight on an infinite chessboard. After moving one step, there are <span class="math-container">$8$</span> possible positions, and after moving two steps, there are <span class="math-container">$33$</span> possible positions. Let the number of possible positions after moving <span class="math-container">$n$</span> steps be <span class="math-container">$a_n$</span>; find a formula for <span class="math-container">$a_n$</span>.</p>
<hr />
<p>I found this sequence is <a href="http://oeis.org/A118312" rel="nofollow noreferrer">http://oeis.org/A118312</a></p>
<p>But I can't understand this Recurrence Relation</p>
<p><span class="math-container">$$a_n = 3a_{n-1} - 3a_{n-2} + a_{n-3}, \quad\quad n\geq3$$</span></p>
<p>Can someone give the intuition for this relationship?</p>
| Community | -1 | <p>Mordechai Katzman demonstrates in section <span class="math-container">$3$</span> of his paper <a href="https://arxiv.org/abs/math/0504113" rel="nofollow noreferrer">Counting monomials</a> (pages <span class="math-container">$5$</span> - <span class="math-container">$8$</span>) that</p>
<p><span class="math-container">$$a_n = \begin{cases}
1 \quad \quad \quad \quad \quad \; \, n = 0 \\
8 \quad \quad \quad \quad \quad \; \, n = 1 \\
33 \quad \quad \quad \quad \quad n = 2 \\
1 + 4n + 7n^2 \quad \; \, n \ge 3
\end{cases}$$</span></p>
<p>We can now prove by induction that
<span class="math-container">$$a_n = 3a_{n-1} - 3a_{n-2} + a_{n-3} = 1 + 4n + 7n^2, \quad\quad n\geq3 \tag{1}$$</span></p>
<p>To test whether <span class="math-container">$(1)$</span> holds for <span class="math-container">$n \ge 3$</span>, we need to define</p>
<p><span class="math-container">$$a_0 = 1 + 4(0) + 7(0)^2 = 1$$</span>
<span class="math-container">$$a_1 = 1 + 4(1) + 7(1)^2 = 12$$</span>
<span class="math-container">$$a_2 = 1 + 4(2) + 7(2)^2 = 37$$</span></p>
<p>For the base cases, we have</p>
<p><span class="math-container">$$a_3 = 3a_2 - 3a_1 + a_0 = 3\cdot37 - 3\cdot12 + 1 = 1 + 4(3) + 7(3)^2 = 76$$</span>
<span class="math-container">$$a_4 = 3a_3 - 3a_2 + a_1 = 3\cdot76 - 3\cdot37 + 12 = 1 + 4(4) + 7(4)^2 = 129$$</span>
<span class="math-container">$$a_5 = 3a_4 - 3a_3 + a_2 = 3\cdot129 -3\cdot76 + 37 = 1 + 4(5) + 7(5)^2 =196$$</span></p>
<p>Now, we must prove using <span class="math-container">$(1)$</span> that <span class="math-container">$$a_{n+1} = 3a_n - 3a_{n-1} + a_{n-2} = 1 + 4(n+1) + 7(n+1)^2 = 7n^2 + 18n + 12$$</span></p>
<p>Substituting for <span class="math-container">$a_n, a_{n-1}$</span> and <span class="math-container">$a_{n-2}$</span>, we get
<span class="math-container">\begin{align}
a_{n+1} &= 3\left(1 + 4n + 7n^2\right) -3\left(1 + 4(n-1) + 7(n-1)^2\right) + \left(1 + 4(n-2) + 7(n-2)^2\right)\\
& = 7n^2 + 18n + 12
\end{align}</span></p>
<p><span class="math-container">$\blacksquare$</span></p>
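<p>The counts can be verified by brute force, expanding the set of reachable squares one move at a time; the values below for <span class="math-container">$a_3, a_4, a_5$</span> are the ones computed in the induction above:</p>

```python
def knight_counts(n_max):
    moves = [(dx, dy) for dx in (-2, -1, 1, 2) for dy in (-2, -1, 1, 2)
             if abs(dx) != abs(dy)]                      # the 8 knight moves
    S, counts = {(0, 0)}, [1]
    for _ in range(n_max):
        # positions reachable after exactly one more move
        S = {(x + dx, y + dy) for (x, y) in S for (dx, dy) in moves}
        counts.append(len(S))
    return counts

c = knight_counts(5)
assert c == [1, 8, 33, 76, 129, 196]                     # a_0 .. a_5
assert all(c[n] == 1 + 4*n + 7*n**2 for n in range(3, 6))
```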
|
4,250,845 | <p>Consider a quadrilateral <span class="math-container">$ABCD$</span> where <span class="math-container">$A(0,0,0), B(2,0,2), C(2,2\sqrt 2,2), D(0,2\sqrt2,0)$</span>. Basically I have to find the <strong>Area</strong> of the <strong>projection</strong> of quadrilateral <span class="math-container">$ABCD$</span> on the plane <span class="math-container">$x+y-z=3$</span>.</p>
<p>I have tried to first find the projection of the points <span class="math-container">$A,B,C,D$</span> <em><strong>individually</strong></em> on the plane, then use the <strong>projected points</strong> to find the vectors <span class="math-container">$\vec{AB}$</span> and <span class="math-container">$\vec{BC}$</span>, and then use the cross product <span class="math-container">$|\vec{AB}\times \vec{BC}|$</span>, but I was unable to find the projected points.</p>
<p>Is it the correct approach? If it is not I would highly appreciate a correct approach for the problem.</p>
| sirous | 346,566 | <p>You can also first find the projection of points and then calculate the area.</p>
<p>Hints for finding the projection of a point:</p>
<p><span class="math-container">$x+y-z=3$</span></p>
<p>So the normal vector is <span class="math-container">$N(1, 1, -1)$</span></p>
<p>The projection of any point on this plane is the intersection of the plane with the line passing through that point and perpendicular to the plane; that is, the direction ratios of the line are:</p>
<p><span class="math-container">$(m, n, l)=(1, 1,-1)$</span></p>
<p>For example we find projection of point A:</p>
<p>equation of line passing A and perpendicular on plane is:</p>
<p><span class="math-container">$\frac{x-0}m=\frac{y-0}n=\frac{z-0}l$</span></p>
<p><span class="math-container">$\frac x1=\frac y1=\frac z{-1}=t$</span></p>
<p><span class="math-container">$\Rightarrow x=t, y=t, z=-t$</span></p>
<p>Now plug in these in equation of plane:</p>
<p><span class="math-container">$t+t-(-t)=3\Rightarrow t=1$</span></p>
<p>So the coordinates of A' (projection of A) is:</p>
<p><span class="math-container">$x=t=1$</span>, <span class="math-container">$y=t=1$</span> and <span class="math-container">$z=-t=-1$</span> or <span class="math-container">$A'(1, 1, -1)$</span></p>
<p>Now find B', C' and D' and use similar method given in other answer to find the area of A'B'C'D'.</p>
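<p>Carrying the method through numerically (the final value <span class="math-container">$\frac{8\sqrt6}{3}$</span> is my own computation for this data, offered as a check rather than part of the original answer):</p>

```python
import math

n = (1.0, 1.0, -1.0)                      # normal of x + y - z = 3
pts = {"A": (0, 0, 0), "B": (2, 0, 2),
       "C": (2, 2*math.sqrt(2), 2), "D": (0, 2*math.sqrt(2), 0)}

def dot(u, v): return sum(a*b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def project(p):                            # foot of the perpendicular on the plane
    t = (dot(n, p) - 3) / dot(n, n)
    return tuple(pi - t*ni for pi, ni in zip(p, n))

A, B, C, D = (project(pts[k]) for k in "ABCD")
assert A == (1.0, 1.0, -1.0)               # matches the projection A' found above

# split the (planar) projected quadrilateral into triangles ABC and ACD
area = 0.5*math.dist(cross(sub(B, A), sub(C, A)), (0, 0, 0)) \
     + 0.5*math.dist(cross(sub(C, A), sub(D, A)), (0, 0, 0))
assert abs(area - 8*math.sqrt(6)/3) < 1e-9
```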
|
4,328,630 | <p>While preparing for a midterm, I came across this question</p>
<blockquote>
<p>Suppose a restaurant is visited by 10 clients per hour on average, and clients follow a homogeneous Poisson process. Independently of the other clients, each client has a 20% chance to eat here and 80% to take away. On average, how many clients should be expected before one eats here?</p>
</blockquote>
<p>Proposed answer :</p>
<ul>
<li>8</li>
<li>4</li>
<li>2</li>
</ul>
<p>For me the correct answer is 4, but many of my friends have answered "2" because they've decomposed the Poisson process into two Poisson processes, one of parameter 0.2·10 and another one of parameter 0.8·10.</p>
<p>Who is right? The question is really tricky, isn't it?</p>
<p>Thanks for your help!</p>
| greg | 357,854 | <p><span class="math-container">$
\def\bbR#1{{\mathbb R}^{#1}}
\def\d{\delta}
\def\k{\sum_k}
\def\l{\sum_l}
\def\e{\varepsilon}
\def\n{\nabla}\def\o{{\tt1}}\def\p{\partial}
\def\E{{\cal E}}\def\F{{\cal F}}\def\G{{\cal G}}
\def\B{\Big}\def\L{\left}\def\R{\right}
\def\LR#1{\L(#1\R)}
\def\BR#1{\B(#1\B)}
\def\vecc#1{\operatorname{vec}\LR{#1}}
\def\Diag#1{\operatorname{Diag}\LR{#1}}
\def\trace#1{\operatorname{Tr}\LR{#1}}
\def\qiq{\quad\implies\quad}
\def\grad#1#2{\frac{\p #1}{\p #2}}
\def\hess#1#2#3{\frac{\p^2 #1}{\p #2\,\p #3}}
\def\c#1{\color{red}{#1}}
$</span>The differential of a matrix is easy to work with, since it obeys
all of the rules of matrix algebra. So let's start by calculating
the differential of your function.
<span class="math-container">$$\eqalign{
F &= EJE^T \\
dF &= dE\;JE^T + EJ\;dE^T \\
}$$</span>
Vectorizing this expression yields<br />
<span class="math-container">$$\eqalign{
f &= \vecc{F},\qquad e=\vecc{E} \\
df &= \LR{EJ^T\otimes I}\,de + \LR{I\otimes EJ}K\;de \\
\grad{f}{e} &= \LR{EJ^T\otimes I} + \LR{I\otimes EJ}K \\
}$$</span>
where <span class="math-container">$K$</span> is the <a href="https://en.wikipedia.org/wiki/Commutation_matrix" rel="nofollow noreferrer">Commutation Matrix</a> associated with
the <code>vec()</code> operation.</p>
<p>Another approach to the problem is to use the self-gradient of a matrix, i.e.
<span class="math-container">$$\eqalign{
\grad{E}{E_{ij}} = S_{ij} \\
}$$</span>
where <span class="math-container">$S_{ij}$</span> is the matrix whose components are all zero, except for the
<span class="math-container">$(i,j)^{th}$</span> component which is equal to one. This is sometimes called the
<em>single-entry</em> matrix, and it can be used to write the
component-wise gradient of the function as
<span class="math-container">$$\eqalign{
\grad{F}{E_{ij}} &= S_{ij}\,JE^T + EJ\,S_{ij} \\
}$$</span>
Yet another approach is to use <a href="https://mathworld.wolfram.com/EinsteinSummation.html" rel="nofollow noreferrer">Index Notation</a> to write the self-gradient (which is a fourth-order tensor) in terms of <a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker delta</a> symbols as
<span class="math-container">$$\eqalign{
\grad{E_{mn}}{E_{ij}} = \d_{im}\d_{jn} \\
}$$</span>
Then calculate the gradient of the function
(also a fourth-order tensor) as<br />
<span class="math-container">$$\eqalign{
F_{mn} &= \k\l E_{mk}J_{kl}E_{ln}^T \\
\grad{F_{mn}}{E_{ij}}
&= \k\l \BR{ \c{\d_{im}\d_{jk}}\;J_{kl}E_{nl}
+ E_{mk}J_{kl}\;\c{\d_{in}\d_{jl}} } \\
&= \l \d_{im}J_{jl}E_{ln}^T + \k E_{mk}J_{kj}\d_{in} \\
&= \d_{mi}\LR{JE^T}_{jn} + \LR{EJ}_{mj}\d_{in} \\
}$$</span>
Once you are comfortable with the Einstein summation convention,
you can drop the <span class="math-container">$\Sigma$</span> symbols to write the intermediate steps more concisely.</p>
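<p>A handy way to gain confidence in vectorized gradients like the one above is a finite-difference check. The sketch below (Python/NumPy; the column-stacking <code>vec</code> and the loop-built commutation matrix are my own implementation choices, not part of the answer) compares the analytic gradient with a numerical Jacobian:</p>

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
E = rng.standard_normal((n, n))
J = rng.standard_normal((n, n))

vec = lambda M: M.reshape(-1, order="F")      # column-stacking vec()

# commutation matrix K with K @ vec(A) == vec(A.T)
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[j + i * n, i + j * n] = 1.0

# analytic gradient: (E J^T kron I) + (I kron E J) K
grad = np.kron(E @ J.T, np.eye(n)) + np.kron(np.eye(n), E @ J) @ K

# finite-difference Jacobian of f(e) = vec(E J E^T)
eps, num = 1e-6, np.zeros((n * n, n * n))
f0 = vec(E @ J @ E.T)
for k in range(n * n):
    e = vec(E).copy()
    e[k] += eps
    Ep = e.reshape(n, n, order="F")
    num[:, k] = (vec(Ep @ J @ Ep.T) - f0) / eps

assert np.allclose(grad, num, atol=1e-4)
```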
|
270,410 | <p>I have simplified the equations and reduced the number of variables to 5, and changed the parameter values, as I think the failure in <a href="https://mathematica.stackexchange.com/questions/270375/findrootjsing-encountered-a-singular-jacobian-at-the-point-when-solving-nonli">my previous question</a> was caused by improper parameter values.</p>
<p>The new code is as follows:</p>
<pre><code>equa={(AmI[1] - BmI[1]/R3^2) CmI[1] == k1 u0,
R3 AmI[1] DmI[1] == (BmI[1] DmI[1])/R3,
(AmI[1] - BmI[1]/R2^2) CmI[1] == (R1^(-((2 π)/β))
R2^(-((π + β)/β)) (R1^((2 π)/β) - R2^((2 π)/β)) β Aki[1]*
(Sin[thetai] + Sin[thetai + β]))/(-π^2 + β^2),
(AmI[1] - BmI[1]/R2^2) DmI[1] == (R1^(-((2 π)/β))
R2^(-((π + β)/β)) (R1^((2 π)/β) - R2^((2 π)/β)) β Aki[1]*
(Cos[thetai] + Cos[thetai + β]))/(π^2 - β^2),
Aki[1] == -((2 β (R2^2 AmI[1] + BmI[1]) ((Cos[thetai] + Cos[thetai + β]) DmI[1] -
CmI[1] (Sin[thetai] + Sin[thetai + β])))/(R2 (π - β) (π + β)))}
system = equa;
vars = {AmI[1], BmI[1], CmI[1], DmI[1], Aki[1]};
parameters = {u0 -> 4*π*10^(-7), R1 -> 4/100, R2 -> 7/100,
R3 -> 8/100, β -> π/4, k1 -> (11/10)^5, L -> 0.1,
N1 -> 50, K -> 50, thetai -> π/6};
givenPoint = {{AmI[1], 0.1}, {BmI[1], 0.1}, {CmI[1], 0.1},
{DmI[1], 0.1}, {Aki[1], 0.1 + I}};
NMinimize[# . # &[equa /. Equal -> Subtract /. parameters], vars]
</code></pre>
<p><code>{1.53176*10^-12, {AmI[1] -> -1.75396*10^-6, BmI[1] -> 8.53681*10^-9, CmI[1] -> -0.410309, DmI[1] -> 0.317123, Aki[1] -> 2.23402*10^-13}}</code></p>
<p>It seems that the objective is approximately 0; however, it is not exactly 0, so I cannot obtain the solution by <code>Solve</code>.</p>
<p><strong>The most important question is how to analyze these nonlinear equations mathematically in 5 variables. For example, can <code>MatrixRank</code> or other functions be used to determine under which conditions the equations do and do not have a solution?</strong></p>
<p><strong>I do not know whether it is effective to use <code>MatrixRank</code> for nonlinear equations.</strong></p>
<p><strong>By the way, I do not know which of the variables should be real and which complex, and perhaps the initial values are improper.</strong></p>
| Ulrich Neumann | 53,677 | <p>Try <code>Reduce</code> to analyze the nonlinear equations:</p>
<pre><code>Reduce[equa /. parameters, vars]
(* False*)
</code></pre>
<p>The result confirms @Bill's helpful comment!</p>
|
2,629,408 | <p>How do I evaluate the following integral?
$$\int\frac{du}{\sqrt{9e^{-2u}-1}}$$
I have tried many times, but I'm not sure of my answer because somebody said it was wrong; they told me that I applied the wrong formula!
That's why I'm asking for help here: I would like a correct explanation and answer.</p>
<p>Thanks!</p>
| D F | 501,035 | <p>Try the substitution $\frac{1}{3}e^{u} = \cos t$, so that $du = -\tan t\,dt$. The integral becomes $\int \frac{-\tan t\,dt}{\sqrt{\frac{1}{\cos^2 t} - 1}}$; then use $\cos^2 t + \sin^2 t = 1$ to reduce the square root to $\tan t$.</p>
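<p>Carrying the substitution to the end gives the antiderivative $-\arccos\left(\frac{e^u}{3}\right)+C$ on the domain $u < \ln 3$; note that this completion is mine, not part of the original hint. A quick numerical sanity check in Python:</p>

```python
import numpy as np

# candidate antiderivative from the substitution (valid for e^u < 3)
F = lambda u: -np.arccos(np.exp(u) / 3)
integrand = lambda u: 1 / np.sqrt(9 * np.exp(-2 * u) - 1)

# F'(u) should reproduce the integrand; check by central differences
us = np.linspace(-2.0, 0.9, 50)
h = 1e-6
deriv = (F(us + h) - F(us - h)) / (2 * h)
assert np.allclose(deriv, integrand(us), rtol=1e-5)
```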
|
3,454,095 | <p>Minimize <span class="math-container">$\;\;\displaystyle \frac{(x^2+1)(y^2+1)(z^2+1)}{ (x+y+z)^2}$</span>, if <span class="math-container">$x,y,z>0$</span>.
By setting the gradient to zero I found <span class="math-container">$x=y=z=\frac{1}{\displaystyle\sqrt{2}}$</span>, which should minimize the function.</p>
<blockquote>
<p>Question from Jalil Hajimir</p>
</blockquote>
| dezdichado | 152,744 | <p>If you want some calculus/analysis argument:</p>
<p>After establishing there must exist a global minimum, let <span class="math-container">$p$</span> be the global minimum. Then we must have that
<span class="math-container">$$f(x) = x^2\left((y^2+1)(z^2+1) - p\right) - 2xp(y+z) + (y^2+1)(z^2+1) - p(y+z)^2\geq 0$$</span>
as a quadratic in <span class="math-container">$x.$</span> So the discriminant is non-positive:
<span class="math-container">$$D =4\left[p^2(y+z)^2 - (y^2+1)^2(z^2+1)^2 - p^2(y+z)^2+(y^2+1)(z^2+1)p(1+(y+z)^2)\right]\leq 0\iff $$</span>
<span class="math-container">$$p\leq\min\dfrac{(y^2+1)(z^2+1)}{1+(y+z)^2}.$$</span>
But
<span class="math-container">$$4(y^2+1)(z^2+1) - 3 - 3(y+z)^2 = 4y^2z^2+y^2+z^2-6yz+1 = (y-z)^2+(2yz-1)^2\geq 0.$$</span> So <span class="math-container">$p = \dfrac{3}{4}$</span> by a continuity argument, and it is achieved at <span class="math-container">$y = z = \dfrac{1}{\sqrt{2}},$</span> which in turn easily tells us that <span class="math-container">$x$</span> is also <span class="math-container">$\dfrac{1}{\sqrt{2}}$</span> for the minimum to be attained. </p>
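<p>As a quick sanity check on the minimum value 3/4 at x = y = z = 1/√2, one can evaluate the function at the claimed minimizer and probe the positive octant at random; a rough sketch in Python (the sampling bounds are arbitrary):</p>

```python
import numpy as np

f = lambda x, y, z: (x**2 + 1) * (y**2 + 1) * (z**2 + 1) / (x + y + z) ** 2

v = 1 / np.sqrt(2)
assert abs(f(v, v, v) - 0.75) < 1e-9          # value at the claimed minimizer

# random probing of the positive octant never beats 3/4
rng = np.random.default_rng(1)
x, y, z = rng.uniform(0.01, 10, size=(3, 100000))
assert (f(x, y, z) >= 0.75 - 1e-9).all()
```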
|
2,078,535 | <p>I'm fairly new to this and find it hard to solve problems related to linear algebra, although I can visualize and conceptualize things easily. Please help me find $\{(v_1, v_2, v_3) \in \Bbb R^3 \mid 5v_1 - 3v_2 + 2v_3 = 0\}$.</p>
| Noble Mushtak | 307,483 | <p>We want to solve the system represented by the following augmented matrix:
$$[5 \ -3 \ 2 \ | \ 0]$$
To get this into RREF form, divide the row by $5$:
$$[1 \ -\frac{3}{5} \ \frac{2}{5} \ | \ 0]$$
Now, the pivot variable is the first coordinate and the free variables are the second and third coordinate. From the second coordinate we get the solution $(\frac 3 5, 1, 0)$. From the third coordinate, we get the solution $(-\frac 2 5, 0, 1)$. Thus, we get that the answer is the span of these two vectors.</p>
|
2,247,522 | <p>Suppose $X$ and $Y$ are discrete random variables. Show that $$E(X \mid Y)=E(X \mid Y^3).$$</p>
<p>The conditional expected value of a discrete random variable is expressed as
$$E(X \mid Y)=\sum xp_{X \mid Y}(x \mid y),$$
where$$p_{X \mid Y}(x \mid y)=\frac{p_{X,Y}(x,y)}{p_Y(y)}.$$ </p>
<p>Similarly, you can say that
$$E(X \mid Y^3)=\sum x p_{X \mid Y^3}(x \mid y^3),$$
where$$p_{X \mid Y^3}(x \mid y^3)=\frac{p_{X,Y^3}(x,y^3)}{p_{Y^3}(y^3)}.$$ </p>
<p>The goal is to show that </p>
<p>$$\sum xp_{X \mid Y}(x \mid y)=\sum xp_{X \mid Y^3}(x \mid y^3).$$</p>
<p>From here I don't really know how to show that the two are equal, some help would be appreciated. </p>
| Amit | 378,131 | <p>Let $S$ denote the sample space. Notice that both $\mathbb{E}(X|Y):S\rightarrow \mathbb{R}$ and $\mathbb{E}(X|Y^3):S\rightarrow \mathbb{R}$ are random variables, and the following holds:</p>
<p>$\mathbb{E}(X|Y)(s) = \mathbb{E}(X|Y = Y(s)) = \mathbb{E}(X|Y^3 = (Y(s))^3) = \mathbb{E}(X|Y^3 = Y^3(s))= \mathbb{E}(X|Y^3)(s)$</p>
<p>for all $s\in S$.</p>
<p>Therefore, $\mathbb{E}(X|Y)= \mathbb{E}(X|Y^3)$.</p>
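<p>The equality can also be checked on a concrete example. In the Python sketch below, the joint pmf is invented purely for illustration; the key point is that $y \mapsto y^3$ is injective, so conditioning on $Y$ and on $Y^3$ groups the outcomes identically:</p>

```python
from fractions import Fraction as F
from collections import defaultdict

# a small joint pmf p(x, y) on {0,1,2} x {-1,1,2}; values are made up but sum to 1
p = {(0, -1): F(1, 8), (1, -1): F(1, 8), (2, 1): F(1, 4),
     (0, 1): F(1, 8), (1, 2): F(1, 8), (2, 2): F(1, 4)}

def cond_exp(transform):
    """E(X | transform(Y) = t) for each attainable value t."""
    num, den = defaultdict(F), defaultdict(F)
    for (x, y), mass in p.items():
        t = transform(y)
        num[t] += x * mass
        den[t] += mass
    return {t: num[t] / den[t] for t in den}

e1 = cond_exp(lambda y: y)
e2 = cond_exp(lambda y: y ** 3)
# the conditional expectations agree event by event
assert all(e1[y] == e2[y ** 3] for y in {-1, 1, 2})
```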
|
23,268 | <p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p>
<p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p>
<p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow.</p>
<p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p>
<p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
| Nicolas Ford | 5,281 | <p>The way I think about limits and colimits is in terms of the most elementary examples of each: (co)products and (co)equalizers. Since any (co)limit can be built out of these, this is technically enough, and anyway I think it does give something of a feel for what the object should be. (I'm not being very picky about details here, since I'm just trying to describe my intuition. Feel free to correct me in comments.)</p>
<p>(Finite) coproducts of sets and spaces are disjoint unions, of groups are free products, of vector spaces are direct sums, and so on. The common thread is that a coproduct is the object you get by (as you say) gluing together the objects you start with and (if necessary) "closing it up" to get it to still be a group/vector space/whatever. This is reflected in the universal property: the coproduct of X and Y is the object with the property that maps out of it look like pairs of maps out of X and out of Y.</p>
<p>If f and g are set maps from A to B, then the coequalizer is the quotient of B obtained by setting f(x) equal to g(x) for each x in A. The cokernel of a map of R-modules, for example, is the coequalizer of the map with the zero map. So coequalizers are, in general, what you get by taking the target of the map and forcing the two maps given to you to be equal by making the appropriate identifications, and if you trace through the universal property you can see that this is what it entails: maps out of the coequalizer are the same as maps h out of B for which hf=hg.</p>
<p>The same sort of logic can be applied to products and equalizers. Products are very familiar in the categories I've mentioned, and they're usually even called "products", so I won't belabor that point. Equalizers are a little less familiar. If f and g are maps of sets from A to B, then the equalizer of f and g is just the set of elements x in A with the property that f(x) = g(x). For example, the kernel of a map of R-modules is the equalizer of the map with 0. In other words, whereas a coequalizer takes the target and forces the maps to be equal "after" they've been applied, an equalizer takes the source and forces the maps to be equal by throwing out everything on which they aren't. This is once again reflected by the universal property: maps into the equalizer are the same as maps h into A so that fh=gh.</p>
<p>A general limit is an equalizer of products, so you can use this to get some intuition for what it looks like: a limit of some diagram of sets can be thought of as the subset of the product of all the sets in the diagram consisting of elements which are consistent with the arrows in the diagram, that is, whenever there's an arrow f between A and B in the diagram, applying f to the A component of your element should give you the B component. (Notice that in my description of the equalizer above, I only took elements of the source. This is because specifying the element of the target is redundant, since it's forced to be both f(x) and g(x).)</p>
<p>A general colimit is a coequalizer of coproducts, so you take (in the case of spaces, say) a disjoint union of all the spaces involved, and then identify them along the maps in the diagram, so this picture of limits is kind of the dual: you take a product instead of a coproduct, and you use the other way to make maps equal, that is, taking subsets rather than quotients.</p>
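<p>The set-level descriptions above can be made concrete in a few lines of code. This Python sketch (the function names and the union-find construction are my own choices) computes equalizers and coequalizers of maps between finite sets:</p>

```python
def equalizer(A, f, g):
    """Limit of the parallel pair f, g: the subset of A on which they agree."""
    return {x for x in A if f(x) == g(x)}

def coequalizer(B, A, f, g):
    """Colimit of the parallel pair: quotient of B identifying f(x) ~ g(x),
    built with a simple union-find over the elements of B."""
    parent = {b: b for b in B}
    def find(b):
        while parent[b] != b:
            parent[b] = parent[parent[b]]   # path halving
            b = parent[b]
        return b
    for x in A:
        parent[find(f(x))] = find(g(x))     # identify f(x) with g(x)
    classes = {}
    for b in B:
        classes.setdefault(find(b), set()).add(b)
    return [sorted(c) for c in classes.values()]

# the equalizer of x % 2 and the zero map on {0,...,5} is the even elements
assert equalizer(set(range(6)), lambda x: x % 2, lambda x: 0) == {0, 2, 4}
# identifying x ~ x+1 for x in {0,...,3} collapses {0,...,4} to a single class
assert len(coequalizer(set(range(5)), set(range(4)), lambda x: x, lambda x: x + 1)) == 1
```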
|
67,460 | <p>Denote the system in $GF(2)$ as $Ax=b$, where:
$$
\begin{align}
A=&(A_{ij})_{m\times m}\\
A_{ij}=&
\begin{cases}
(1)_{n\times n}&\text{if }i=j\quad\text{(a matrix where entries are all 1's)}\\
I_n&\text{if }i\ne j\quad\text{(the identity matrix)}
\end{cases}
\end{align}
$$
that is, $A$ is an $m\times m$ block matrix of $n\times n$ blocks, hence a square matrix of order $mn$. And $b$ is a 0-1 vector of length $mn$. Now what is the solution of this system, if any, for a general pair of $m$ and $n$?</p>
<p>Example: For $m=2,n=3$ and $b=(0, 1, 0, 0, 1, 0)^T$, we have
$$
A=
\begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}
$$
then one solution is $x=(1, 0, 1, 0, 1, 0)^T$</p>
<p>I know Gaussian elimination. I have been trying it, but I find it not very easy when dealing with the general case.</p>
| user1551 | 1,551 | <p>Edit: I posted a wrong answer earlier. Hope I can get things fixed this time.</p>
<p>For convenience, write $x^T = (x_1^T, \ldots, x_m^T)$ where each $x_i$ is a vector of length $n$. Similarly, write $b^T = (b_1^T, \ldots, b_m^T)$. Let $J$ and $u$ be respectively the $n$-by-$n$ matrix and $n$-vector with all entries equal to 1 and let $Jb_i=\beta_iu$. Your system of equations $Ax=b$ is equivalent to
$$
(\dagger): (J-I)x_i+\sum_{j=1}^m x_j= b_i\quad \forall i.
$$
Let $\ Jx_i=\alpha_iu$ where $\alpha_i\in GF(2)$ is the parity of $x_i$. So $(\dagger)$ gives
$$
(*): x_i = \sum x_j + b_i + \alpha_iu\quad \forall i.
$$</p>
<p><strong>Case 1: $m$ is even.</strong> By summing up $(*)$ from $i=1,2,\ldots,m$, we get
$$\sum x_j = \sum b_j + \sum\alpha_ju.$$
Substitute this back into $(*)$, we see that the general solution to $(\ast)$ is of the form
$$
(**): x_i = \sum b_j + \sum\alpha_ju + b_i + \alpha_iu.
$$
Such $\{x_i\}$ form a solution of $(\dagger)$ if and only if $Jx_i=\alpha_iu$ for all $i$, which means
$$
\sum \beta_j + n\sum\alpha_j + \beta_i + n\alpha_i = \alpha_i
$$
or equivalently,
$$
n\sum\alpha_j + (n-1)\alpha_i = \sum \beta_j + \beta_i.
$$
When $n$ is even, the above system has a unique solution $\alpha_i = \sum \beta_j + \beta_i$.</p>
<p>When $n$ is odd, the above system reduces to $\sum\alpha_j = \sum \beta_j + \beta_i$. Hence solution exists if and only if $\beta_1=\ldots=\beta_m=\beta$ and the solutions are given by $(**)$ with $\sum\alpha_j = \beta$.</p>
<p><strong>Case 2: $m$ is odd.</strong> By summing up $(*)$ from $i=1,2,\ldots,m$, we get
$$\sum b_j = \sum\alpha_ju.$$
Thus a necessary condition for a solution to exist is that $\sum b_j$ is a multiple of $u$ (say, $\sum b_j=\lambda u$). If this is the case, then the general solution to $(\ast)$ is given by $x_i = y + b_i + \alpha_iu$ where $\sum \alpha_j=\lambda$ and $y$ is any $n$-vector. Such $\{x_i\}$ is a feasible solution to $(\dagger)$ if and only if $Jx_i=\alpha_iu$ for all $i$, that is, iff $Jy + \beta_iu + n\alpha_iu=\alpha_iu$. Therefore, we need $(n-1)\alpha_i+\beta_i$ to be constant and equal to the parity of $y$.</p>
<p>So, when $n$ is odd, solution exists only if $\beta_1=\ldots=\beta_m$. Since
$$
\sum b_j=\lambda u\ \Rightarrow\ \sum Jb_j=\lambda Ju\ \Rightarrow\ \sum\beta_j=\lambda,
$$
the previous requirement that $\sum \alpha_j=\lambda$ can be rewritten as $\sum \alpha_j=\sum \beta_j$.</p>
<p>When $n$ is even, that $(n-1)\alpha_i+\beta_i$ is constant means $\alpha_i+\beta_i=c$ for some constant $c$. Recall that we need $\sum \alpha_ju=\lambda u=\sum b_j$. Multiply both sides by $J$, we get $0=\sum \beta_ju$, i.e. $\sum \beta_j=0$. This is another necessary condition for the existence of solution. Suppose this is also satisfied. To make $\sum \alpha_ju=\lambda u$, we may take $\alpha_i=\beta_i$ for all $i$ if $\lambda=0$, or $\alpha_i=1-\beta_i$ for all $i$ if $\lambda=1$.</p>
|
3,998,098 | <p>I was asked to determine the locus of the equation
<span class="math-container">$$b^2-2x^2=2xy+y^2$$</span></p>
<p>This is my work:</p>
<blockquote>
<p>Add <span class="math-container">$x^2$</span> to both sides:
<span class="math-container">$$\begin{align}
b^2-x^2 &=2xy+y^2+x^2\\
b^2-x^2 &=\left(x+y\right)^2
\end{align}$$</span></p>
</blockquote>
<p>I see that this is similar to the equation of a circle. How can I find the locus of this expression?</p>
| Quanto | 686,284 | <p>With the orthonormal variable changes
<span class="math-container">$$x= \frac{1}{\sqrt5}\left(\sqrt{ \frac{5+\sqrt5}2}\,u + \sqrt{ \frac{5-\sqrt5}2}\,v\right),\qquad
y= \frac{1}{\sqrt5}\left(\sqrt{ \frac{5-\sqrt5}2}\,u - \sqrt{ \frac{5+\sqrt5}2}\,v\right)
$$</span>
the curve equation <span class="math-container">$b^2-2x^2=2xy+y^2$</span> can be recast as</p>
<p><span class="math-container">$$ \frac{u^2}{\left(\frac{\sqrt5-1}2b\right)^2} +\frac{v^2}{\left(\frac{\sqrt5+1}2b\right)^2} =1
$$</span>
which reveals an ellipse with semi-axes <span class="math-container">$\frac{\sqrt5-1}2b$</span> and <span class="math-container">$\frac{\sqrt5+1}2b$</span>.</p>
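<p>A quick numerical check in Python that points of this ellipse, mapped back through the change of variables (applied here with a 1/√5 factor so that the transformation is orthonormal), satisfy the original locus equation:</p>

```python
import numpy as np

b = 2.0
a1, a2 = (np.sqrt(5) - 1) / 2 * b, (np.sqrt(5) + 1) / 2 * b   # semi-axes
t = np.linspace(0, 2 * np.pi, 200)
u, v = a1 * np.cos(t), a2 * np.sin(t)                         # points on the ellipse

c1, c2 = np.sqrt((5 + np.sqrt(5)) / 2), np.sqrt((5 - np.sqrt(5)) / 2)
x = (c1 * u + c2 * v) / np.sqrt(5)
y = (c2 * u - c1 * v) / np.sqrt(5)

# every point satisfies the original locus equation b^2 - 2x^2 = 2xy + y^2
assert np.allclose(b ** 2 - 2 * x ** 2, 2 * x * y + y ** 2)
```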
|
257,821 | <p>The Kullback–Leibler divergence between two distributions with pdfs $f(x)$ and $g(x)$ is defined
by
$$\mathrm{KL}(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$$</p>
<p>Compute the Kullback–Leibler divergence when $F$ is the standard normal distribution and $G$
is the normal distribution with mean $\mu$ and variance $1$. For what value of $\mu$ is the divergence
minimized?</p>
<p>I was never taught this kind of divergence, so I am a bit lost about how to evaluate this kind of integral. I understand that I can simplify the ratio of the two normal densities inside the natural log, but my guess is that I should wait until after I take the integral. Any help is appreciated.</p>
| Timmmm | 60,289 | <p>Ok, I've been searching for ages for the Kullback–Leibler divergence between two Normal distributions and didn't find it, but RS's answer enabled me to calculate it quite simply. Here's my derivation. I've also derived it for Beta distributions.</p>
<h1>Kullback–Leibler divergence of Normal Distributions</h1>
<p>Suppose we have two Normal distributions $F\sim N(\mu_{f},\sigma_{f})$;
$G\sim N(\mu_{g},\sigma_{g})$. The Kullback–Leibler divergence is defined as:</p>
<p>$$
\mathrm{KL}(F||G)=\int f(x)\ln\left(\frac{f(x)}{g(x)}\right)dx\quad\mathrm{nats}
$$</p>
<p>Divide by $\ln(2)$ to get the answer in bits. The Gaussian PDF is:</p>
<p>$$
f(x)=\frac{1}{\sigma_{f}\sqrt{2\pi}}\, e^{\dfrac{-(x-\mu_{f})^{2}}{2\sigma_{f}^{2}}}
$$</p>
<p>Substituting we get</p>
<p>\begin{eqnarray*}
\mathrm{KL}(F||G) & = & \int f(x)\ln\left(\frac{e^{\frac{-(x-\mu_{f})^{2}}{2\sigma_{f}^{2}}}}{\sigma_{f}\sqrt{2\pi}}\frac{\sigma_{g}\sqrt{2\pi}}{e^{\frac{-(x-\mu_{g})^{2}}{2\sigma_{g}^{2}}}}\right)dx\\
& = & \int f(x)\ln\left(\frac{e^{\frac{-(x-\mu_{f})^{2}}{2\sigma_{f}^{2}}}}{\sigma_{f}}\frac{\sigma_{g}}{e^{\frac{-(x-\mu_{g})^{2}}{2\sigma_{g}^{2}}}}\right)dx\\
& = & \int f(x)\left[\ln\left(\frac{e^{\frac{-(x-\mu_{f})^{2}}{2\sigma_{f}^{2}}}}{e^{\frac{-(x-\mu_{g})^{2}}{2\sigma_{g}^{2}}}}\right)+\ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)\right]dx\\
& = & \int f(x)\left[\frac{-(x-\mu_{f})^{2}}{2\sigma_{f}^{2}}-\frac{-(x-\mu_{g})^{2}}{2\sigma_{g}^{2}}+\ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)\right]dx
\end{eqnarray*}</p>
<p>Then via a tedious and error-prone but straightforward expansion we
get</p>
<p>$$
=\left[\ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)+\frac{-\mu_{f}^{2}}{2\sigma_{f}^{2}}-\frac{-\mu_{g}^{2}}{2\sigma_{g}^{2}}\right]\int f(x)dx+\left[\frac{2\mu_{f}}{2\sigma_{f}^{2}}-\frac{2\mu_{g}}{2\sigma_{g}^{2}}\right]\int x\, f(x)dx+\left[\frac{-1}{2\sigma_{f}^{2}}-\frac{-1}{2\sigma_{g}^{2}}\right]\int x^{2}f(x)dx
$$</p>
<p>Then we have the following properties:</p>
<p>\begin{eqnarray*}
\int f(x)dx & = & 1\\
\int x\, f(x)dx & = & \mu_{f}\\
\int x^{2}f(x)dx & = & \mu_{f}^{2}+\sigma_{f}^{2}
\end{eqnarray*}</p>
<p>Which gives:</p>
<p>\begin{eqnarray*}
\mathrm{KL}(F||G) & = & \left[\ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)+\frac{-\mu_{f}^{2}}{2\sigma_{f}^{2}}-\frac{-\mu_{g}^{2}}{2\sigma_{g}^{2}}\right]+\left[\frac{2\mu_{f}}{2\sigma_{f}^{2}}-\frac{2\mu_{g}}{2\sigma_{g}^{2}}\right]\mu_{f}+\left[\frac{-1}{2\sigma_{f}^{2}}-\frac{-1}{2\sigma_{g}^{2}}\right]\left(\mu_{f}^{2}+\sigma_{f}^{2}\right)\\
& = & \ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)+\frac{-\mu_{f}^{2}}{2\sigma_{f}^{2}}+\frac{\mu_{g}^{2}}{2\sigma_{g}^{2}}+\frac{2\mu_{f}^{2}}{2\sigma_{f}^{2}}+\frac{-2\mu_{g}\mu_{f}}{2\sigma_{g}^{2}}+\frac{-\mu_{f}^{2}-\sigma_{f}^{2}}{2\sigma_{f}^{2}}+\frac{\mu_{f}^{2}+\sigma_{f}^{2}}{2\sigma_{g}^{2}}\\
& = & \ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)+\frac{\mu_{g}^{2}-2\mu_{g}\mu_{f}+\mu_{f}^{2}+\sigma_{f}^{2}}{2\sigma_{g}^{2}}+\frac{-\mu_{f}^{2}+2\mu_{f}^{2}-\mu_{f}^{2}-\sigma_{f}^{2}}{2\sigma_{f}^{2}}\\
& = & \ln\left(\frac{\sigma_{g}}{\sigma_{f}}\right)+\frac{(\mu_{f}-\mu_{g})^{2}+\sigma_{f}^{2}-\sigma_{g}^{2}}{2\sigma_{g}^{2}}
\end{eqnarray*}</p>
<p>I verified this numerically in Matlab, after fixing many sign errors! The Beta distribution case is worked out below.</p>
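<p>As an independent check of the closed form (a Python sketch; the grid and parameter values are arbitrary), one can compare against direct numerical trapezoidal integration of $f\ln(f/g)$:</p>

```python
import numpy as np

def kl_normal(mu_f, sf, mu_g, sg):
    # closed form derived above
    return np.log(sg / sf) + ((mu_f - mu_g) ** 2 + sf ** 2 - sg ** 2) / (2 * sg ** 2)

# direct numerical integration of f(x) * ln(f(x)/g(x)) on a wide grid
mu_f, sf, mu_g, sg = 0.3, 1.2, -0.7, 0.8
x = np.linspace(-15, 15, 400001)
f = np.exp(-((x - mu_f) ** 2) / (2 * sf ** 2)) / (sf * np.sqrt(2 * np.pi))
g = np.exp(-((x - mu_g) ** 2) / (2 * sg ** 2)) / (sg * np.sqrt(2 * np.pi))
y = f * np.log(f / g)
numeric = ((y[1:] + y[:-1]) / 2 * np.diff(x)).sum()   # trapezoid rule

assert abs(kl_normal(mu_f, sf, mu_g, sg) - numeric) < 1e-6
```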
<h1>Kullback–Leibler divergence of Beta Distributions</h1>
<p>Suppose we have two Beta distributions $F\sim\mathrm{Beta}(\alpha_{f},\beta_{f})$;
$G\sim\mathrm{Beta}(\alpha_{g},\beta_{g})$. The Kullback–Leibler divergence is defined as:</p>
<p>$$\mathrm{KL}(F||G)=\int f(x)\ln\left(\frac{f(x)}{g(x)}\right)dx\quad\mathrm{nats}$$</p>
<p>Divide by $\ln(2)$ to get the answer in bits. The Beta PDF is:</p>
<p>$$f(x)=\frac{\Gamma(\alpha_{f}+\beta_{f})}{\Gamma(\alpha_{f})\Gamma(\beta_{f})}x^{\alpha_{f}-1}(1-x)^{\beta_{f}-1}$$</p>
<p>Where $\Gamma()$ is the Gamma function. Substituting gives:</p>
<p>\begin{eqnarray*}
\mathrm{KL}(F||G) & = & \int_{0}^{1}f(x)\ln\left(\frac{\frac{\Gamma(\alpha_{f}+\beta_{f})}{\Gamma(\alpha_{f})\Gamma(\beta_{f})}x^{\alpha_{f}-1}(1-x)^{\beta_{f}-1}}{\frac{\Gamma(\alpha_{g}+\beta_{g})}{\Gamma(\alpha_{g})\Gamma(\beta_{g})}x^{\alpha_{g}-1}(1-x)^{\beta_{g}-1}}\right)dx\\
& = & \int_{0}^{1}f(x)\ln\left(\frac{\frac{\Gamma(\alpha_{f}+\beta_{f})}{\Gamma(\alpha_{f})\Gamma(\beta_{f})}x^{\alpha_{f}-\alpha_{g}}(1-x)^{\beta_{f}-\beta_{g}}}{\frac{\Gamma(\alpha_{g}+\beta_{g})}{\Gamma(\alpha_{g})\Gamma(\beta_{g})}}\right)dx\\
& = & \int_{0}^{1}f(x)\ln\left(\frac{\Gamma(\alpha_{f}+\beta_{f})\Gamma(\alpha_{g})\Gamma(\beta_{g})}{\Gamma(\alpha_{g}+\beta_{g})\Gamma(\alpha_{f})\Gamma(\beta_{f})}x^{\alpha_{f}-\alpha_{g}}(1-x)^{\beta_{f}-\beta_{g}}\right)dx\\
& = & \int_{0}^{1}f(x)\left[\ln\frac{\Gamma(\alpha_{f}+\beta_{f})\Gamma(\alpha_{g})\Gamma(\beta_{g})}{\Gamma(\alpha_{g}+\beta_{g})\Gamma(\alpha_{f})\Gamma(\beta_{f})}+\ln\left(x^{\alpha_{f}-\alpha_{g}}\right)+\ln\left((1-x)^{\beta_{f}-\beta_{g}}\right)\right]dx\\
& = & \ln\frac{\Gamma(\alpha_{f}+\beta_{f})\Gamma(\alpha_{g})\Gamma(\beta_{g})}{\Gamma(\alpha_{g}+\beta_{g})\Gamma(\alpha_{f})\Gamma(\beta_{f})}+(\alpha_{f}-\alpha_{g})\int_{0}^{1}f(x)\ln x\, dx+(\beta_{f}-\beta_{g})\int_{0}^{1}f(x)\ln(1-x)dx
\end{eqnarray*}</p>
<p>In terms of expectations this is:</p>
<p>$$\ln\frac{\Gamma(\alpha_{f}+\beta_{f})\Gamma(\alpha_{g})\Gamma(\beta_{g})}{\Gamma(\alpha_{g}+\beta_{g})\Gamma(\alpha_{f})\Gamma(\beta_{f})}+(\alpha_{f}-\alpha_{g})\mathrm{E}\left(\ln F\right)+(\beta_{f}-\beta_{g})\mathrm{E}\left(\ln(1-F)\right)$$</p>
<p>From Wikipedia we have:</p>
<p>$$\mathrm{E}(\ln F)=\psi(\alpha_{f})-\psi(\alpha_{f}+\beta_{f})$$</p>
<p>Where $\psi(x)=\frac{d}{dx}\ln\Gamma(x)=\frac{\Gamma'(x)}{\Gamma(x)}$
is the digamma function (the polygamma function of order zero; it
is <code>psi</code> in Matlab). By swapping variables it is easy to show
that</p>
<p>$$\mathrm{E}(\ln(1-F))=\psi(\beta_{f})-\psi(\alpha_{f}+\beta_{f})$$</p>
<p>Therefore the final solution is</p>
<p>$$\mathrm{KL}(F||G)=\ln\frac{\Gamma(\alpha_{f}+\beta_{f})\Gamma(\alpha_{g})\Gamma(\beta_{g})}{\Gamma(\alpha_{g}+\beta_{g})\Gamma(\alpha_{f})\Gamma(\beta_{f})}+(\alpha_{f}-\alpha_{g})\left(\psi(\alpha_{f})-\psi(\alpha_{f}+\beta_{f})\right)+(\beta_{f}-\beta_{g})\left(\psi(\beta_{f})-\psi(\alpha_{f}+\beta_{f})\right)$$</p>
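<p>The Beta formula can be checked numerically the same way. In the Python sketch below, the digamma function is approximated by numerically differentiating <code>lgamma</code>, which is accurate enough for a test; the parameter values are arbitrary (both shape parameters are kept above 1 so the integrand vanishes at the endpoints):</p>

```python
import numpy as np
from math import lgamma

def digamma(x, h=1e-5):
    # numerical digamma: central difference of log-gamma
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def kl_beta(af, bf, ag, bg):
    # closed form derived above
    return (lgamma(af + bf) + lgamma(ag) + lgamma(bg)
            - lgamma(ag + bg) - lgamma(af) - lgamma(bf)
            + (af - ag) * (digamma(af) - digamma(af + bf))
            + (bf - bg) * (digamma(bf) - digamma(af + bf)))

af, bf, ag, bg = 2.0, 3.0, 4.0, 2.5
x = np.linspace(1e-9, 1 - 1e-9, 200001)
logB = lambda a, b: lgamma(a) + lgamma(b) - lgamma(a + b)
logf = (af - 1) * np.log(x) + (bf - 1) * np.log(1 - x) - logB(af, bf)
logg = (ag - 1) * np.log(x) + (bg - 1) * np.log(1 - x) - logB(ag, bg)
y = np.exp(logf) * (logf - logg)
numeric = ((y[1:] + y[:-1]) / 2 * np.diff(x)).sum()   # trapezoid rule

assert abs(kl_beta(af, bf, ag, bg) - numeric) < 1e-5
```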
|
204,842 | <p>A probability measure defined on a sample space $\Omega$ has the following properties:</p>
<ol>
<li>For each $E \subset \Omega$, $0 \le P(E) \le 1$</li>
<li>$P(\Omega) = 1$</li>
<li>If $E_1$ and $E_2$ are disjoint subsets $P(E_1 \cup E_2) = P(E_1) + P(E_2)$</li>
</ol>
<p>The above definition defines a measure that is finitely additive (by induction) but not necessarily countably additive.</p>
<p>What is a probability measure that would be finitely additive but not countably additive (for a countable sample space $\Omega$)?</p>
<p>The example that I have seen most commonly on forums (this and elsewhere) is to set $P(E) = 0$ if $E$ is finite and $P(E) = 1$ if $E$ is co-finite. But that is <strong>not</strong> a probability measure as defined above since it is not defined on every subset of $\Omega$. </p>
<p>So an example of such a probability measure, or what is the reasoning that a finitely additive probability measure is not always countably additive?</p>
| Michael | 155,065 | <p>This question was from years ago, but I was just about to ask a similar question (I found this page from the stackexchange list of similar questions). My own question is whether it is possible to have an <em>explicit</em> example. The answers above are all non-explicit. Here is another (non-explicit) answer in a different form that I found to be helpful. It uses a <em>Banach limit</em> from functional analysis. </p>
<p>Define the natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}$ and define $2^{\mathbb{N}}$ as the set of all subsets of $\mathbb{N}$. Define $P:2^{\mathbb{N}}\rightarrow\mathbb{R}$ as follows: For each set $A \subseteq \mathbb{N}$, define $P(A)$ as a <em>Banach limit</em> of the sequence $\left\{\frac{|A \cap \{1, 2, ..., k\}|}{k}\right\}_{k=1}^{\infty}$. </p>
<h3>Banach limit properties:</h3>
<p>A Banach limit can be proven to exist and to have the following properties: </p>
<p>1) It is defined for all bounded real-valued sequences $\{x_k\}_{k=1}^{\infty}$, regardless of whether or not $x_k$ has a limit. In fact, the Banach limit is always a real number between $\liminf_{k\rightarrow\infty} x_k$ and $\limsup_{k\rightarrow\infty} x_k$.</p>
<p>2) The Banach limit is the same as the regular limit whenever the regular limit exists.</p>
<p>3) The Banach limit of a sum of two bounded sequences is the sum of the Banach limits of the individual sequences. </p>
<p>4) The Banach limit is nonnegative whenever $x_k \geq 0$ for all $k$. </p>
<p>There are many ways to define a real-valued function on bounded sequences that satisfies the above properties, so it is implicitly assumed that we consistently use one such function. The value of that function on a given bounded sequence is what we shall call the "Banach limit" of that sequence. Proofs of existence of such functions use nonexplicit things like axiom of choice or ultrafilters. </p>
<h3>Using these properties:</h3>
<p>Now if $A$ is a finite subset of $\mathbb{N}$ then $\frac{|A \cap \{1, ..., k\}|}{k} \leq \frac{|A|}{k}\rightarrow 0$, so the limit exists and $P(A)=0$. In particular, $P(\{n\})=0$ for all $n \in \mathbb{N}$. So:<br>
$$ 1=P[\mathbb{N}] = P[\cup_{n=1}^{\infty} \{n\}] \neq \sum_{n=1}^{\infty}P[\{n\}]=0$$</p>
<p>Furthermore, $P(A)$ is nonnegative for all $A\subseteq \mathbb{N}$ (by the 4th property of Banach limits above). It also satisfies $P(A \cup B)=P(A)+P(B)$ whenever $A$ and $B$ are disjoint (which can be shown by the 3rd property above). So this $P(A)$ function is indeed a finitely-additive measure on all subsets of $\mathbb{N}$, but not a countably-additive one. </p>
<h3>Remaining question:</h3>
<p>The above pushes a bit more towards an explicit answer, but still uses Banach limits and hence is not explicit. Can a more explicit answer can be given? Now I'm not sure if I should formally ask this question on stackexchange or not, I suspect I would just get pointers back to your question.</p>
|
419,625 | <p>Please help me, I have two functions:</p>
<blockquote>
<pre><code>a := x^3+x^2-x;
b := 20*sin(x^2)-5;
</code></pre>
</blockquote>
<p>and I would like to change the background color and fill the areas between the two curves. I have filled the areas, but I don't know how I can change the background. Any idea?</p>
<blockquote>
<pre><code>plots[display]([plottools[transform]((x, y)->
[x, y+x^3+x^2+x])(plot(20*sin(x^2)-5-
x^3-x^2-x, x = o .. s, filled = true, color = red)),
plot([x^3+x^2+x, 20*sin(x^2)-5],
x = -3 .. 3, y = -30 .. 30, color = black)]);
</code></pre>
</blockquote>
| Sujaan Kunalan | 77,862 | <p>Hmm, the closest thing I could find after a quick search on Maple Help was the ability to choose a colour between the curve and the x-axis. I don't know how helpful that would be to you though.</p>
<p>Alternatively, maybe you could try to make two plots- one your actual plot and the other a plot of solid colour for the background and use the display command to plot them on the same set of axes.</p>
|
2,363,733 | <p>I saw the notation $V= V_1\otimes V_2$ in a survey on universal algebra, where $V$ was a variety, but the survey in question didn't define it. Could anyone explain what it means?</p>
| Eran | 70,857 | <p>I would have to read the survey in question to be sure, but I have most commonly seen this notation used when talking about decidable varieties.</p>
<p>You can find a definition here: <a href="https://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/DecidVar.pdf" rel="nofollow noreferrer">https://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/DecidVar.pdf</a></p>
|
2,329,730 | <p>Let $G$, an algebraic group, act morphically on the affine variety $X$.</p>
<p>Then we can also have $G$ act on the affine algebra $K[X]$ as follows:
$$\tau_x(f(y))=f(x^{-1}\cdot y),\qquad (x\in G, y\in X)$$</p>
<p>Then $\tau:G\to GL(K[X]),\quad \tau:x\mapsto \tau_x$.</p>
<p>Humphreys says that the reason that the inverse appears, is so that $\tau$ is a group homomorphism. But to me it seems that without the inverse it would be a group homomorphism, and with it, it isn't even a group homomorphism.</p>
<p>$\tau_{xy}(f(z))=f((xy)^{-1}\cdot z)=f(y^{-1}\cdot x^{-1}\cdot z)$ and $\tau_x\tau_yf(z)=\tau_xf(y^{-1}\cdot z)=f(x^{-1}\cdot y^{-1}\cdot z),$
so these seem to fail to be a homomorphism, where it is clear to see that without the inverse, it would be a group homomorphism.</p>
<p>What's the deal?</p>
| Björn Friedrich | 203,412 | <p>From $\color{blue}{27x} = \color{blue}{b}$ and $\color{blue}{b}x = 1024$ it follows that
$$
\color{blue}{(27x)}x = 1024 \;.
$$
If you now divide both sides by $27$, you will see that this expression is equivalent to
$$
x^2 = \dfrac{1024}{27} \;.
$$
Now you take the positive and the negative root, and you will obtain the two solutions
$$
x_1 = +\sqrt{\dfrac{1024}{27}} = +\dfrac{32}{\sqrt{27}} \quad\text{and}\quad x_2 = -\sqrt{\dfrac{1024}{27}} = -\dfrac{32}{\sqrt{27}} \;.
$$</p>
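<p>A quick numerical check of the two roots (Python):</p>

```python
import math

x1 = 32 / math.sqrt(27)
x2 = -x1
for x in (x1, x2):
    assert abs(27 * x * x - 1024) < 1e-9   # b = 27x, and b*x = 27x^2 = 1024
```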
|
4,201,477 | <blockquote>
<p>Integrate <span class="math-container">$$\int \frac{\cos 2x}{(\sin x+\cos x)^2}\mathrm dx$$</span></p>
</blockquote>
<p>I tried integrating it in my own way.</p>
<p><span class="math-container">$$\int \frac{\cos 2x}{\sin^2x+2\sin x\cos x+cos^2}\mathrm dx$$</span>
<span class="math-container">$$\int \cot 2x \mathrm dx$$</span>
<span class="math-container">$$\frac{1}{2}\ln |\sin2x|+c$$</span></p>
<p>I don't think I made any mistake. But my book derived something else.</p>
<p><span class="math-container">$$\int \frac{\cos 2x \mathrm dx}{1+\sin 2x}$$</span> Taking <span class="math-container">$1+\sin 2x=z$</span> and differentiating,
<span class="math-container">$\cos 2x \mathrm dx=\frac{1}{2}\mathrm dz$</span>.
Continuing with the main equation,
<span class="math-container">$$\frac{1}{2}\int \frac{1}{z}\mathrm dz$$</span>
<span class="math-container">$$\frac{1}{2}\ln|z|+c=\frac{1}{2}\ln |1+\sin 2x|+c$$</span></p>
<p>Why are the two answers different? I don't think it's possible to derive one from the other.</p>
| Henry Lee | 541,220 | <p>notice that:
<span class="math-container">$$\frac{\cos 2x}{\cos^2x+2\cos x\sin x+\sin^2x}=\frac{\cos 2x}{\color{red}{1}+2\cos x\sin x}=\frac{\cos 2x}{1+\sin2x}$$</span>
this is because:
<span class="math-container">$$\sin^2x+\cos^2x\equiv1$$</span>
and:
<span class="math-container">$$\sin2x=2\cos x\sin x$$</span></p>
|
3,936,545 | <p>What happens if you have some error in your expression and you wish to take the limit? Does the big-oh go away or are you unable to take a limit at all?</p>
<p>This may seem very obvious as it is likely I am misunderstanding the definition of big-oh?</p>
| vonbrand | 43,946 | <p>Check your definitions. Big-Oh and limits should commute:</p>
<p><span class="math-container">$\begin{align*}
\lim_{n \to \infty} (f(n) + O(g(n)))
= \lim_{n \to \infty} f(n) + O(\lim_{n \to \infty} g(n))
\end{align*}$</span></p>
|
3,936,545 | <p>What happens if you have some error in your expression and you wish to take the limit? Does the big-oh go away or are you unable to take a limit at all?</p>
<p>This may seem very obvious as it is likely I am misunderstanding the definition of big-oh?</p>
| zkutch | 775,801 | <p>What do you understand by <span class="math-container">$\lim_\limits{n \to \infty} O(g(n))$</span>? The set of limits of all its members?</p>
<p>If <span class="math-container">$\lim_\limits{n \to \infty} g(n)$</span> exists, then <span class="math-container">$O(\lim_\limits{n \to \infty} g(n))=O(1)$</span>, which is a class of bounded functions. On the other hand, even if we assume that all members of <span class="math-container">$O(g(n))$</span> have limits, we obtain a set of numbers, not bounded functions.</p>
<p><span class="math-container">$$\lim_\limits{n \to \infty} O(g(n)) \ne O(\lim_\limits{n \to \infty} g(n))$$</span></p>
|
2,903,359 | <p>I am trying to prove the following:</p>
<p>Given $1 \le d \le n$ and a matrix $P \in \mathbb R^{n \times n}$, prove that $P$ is a rank-$d$ orthogonal projection matrix iff there exists an $n \times d$ matrix $U$ such that $P =UU^T$ and $U^TU = I$.</p>
<p>I know that this is an obvious fact about projection matrices but I am not sure how to get started on proving it.</p>
<p>Once I can do that, I am looking to prove that,</p>
<p>for all $v \in R^n$,
$Pv = \arg\min_{w \in \operatorname{range}(P)} \lVert {v - w} \rVert^2$</p>
| Dean Alderucci | 682,002 | <p>We can prove the equivalence by proving both ways separately.</p>
<hr>
<h2>1. <span class="math-container">$ P =UU^T,U^T U = I \Rightarrow $</span> P is a projection matrix</h2>
<p>Let's solve an objective similar to the one you stated:
<span class="math-container">$$
\arg \min_{b \in R^n} \lVert {v - Pb} \rVert^2$$</span>
In other words, given any vector <span class="math-container">$v$</span>, what is the corresponding vector <span class="math-container">$b$</span> such that some vector in the column space of <span class="math-container">$P$</span> is 'closest' to <span class="math-container">$v$</span> (closest in the 2-norm sense).</p>
<p>
Next, since <span class="math-container">$$ Pb = U U^T b$$</span>
the objective is
<span class="math-container">$$
\arg \min_{b \in R^n} \lVert {v - U U^T b} \rVert^2 $$</span>
which can be rewritten, using <span class="math-container">$U^T U = I$</span> as:
<span class="math-container">$$
\arg \min_{b \in R^n} (v - U U^T b)^T (v - U U^T b) = \arg \min_{b \in R^n} v^Tv - 2 v^T U U^T b + b^T U U^T b $$</span>
The minimum is found by setting the derivative with respect to the vector <span class="math-container">$b$</span> equal to zero.
<span class="math-container">$$
0 = - 2 U U^T v + 2 U U^T b $$</span>
which means that for every <span class="math-container">$v$</span>, <span class="math-container">$b=v$</span> is a minimizer, as is any <span class="math-container">$b$</span> with
<span class="math-container">$$
U U^T b = U U^T v \Rightarrow P b = P v$$</span>
So the minimum value of the original problem is, for every <span class="math-container">$v$</span>:
<span class="math-container">$$
\min_{b \in R^n} \lVert {v - Pb} \rVert^2 = \lVert {v - Pv} \rVert^2$$</span>
Since for every <span class="math-container">$v$</span>, <span class="math-container">$P v$</span> is the vector in the column space of <span class="math-container">$P$</span> that is 'closest' to <span class="math-container">$v$</span>, <span class="math-container">$P$</span> must be a projection matrix.</p>
<hr>
<h2>2. <span class="math-container">$P$</span> is a projection matrix <span class="math-container">$ \Rightarrow P =UU^T,U^T U = I $</span></h2>
<p>Since <span class="math-container">$P$</span> is a projection matrix, <span class="math-container">$Pb$</span> is the projection of the vector <span class="math-container">$b$</span> onto some rank-<span class="math-container">$d$</span> column space, represented by the columns of a matrix <span class="math-container">$X \in R^{n\times d}$</span>. Since <span class="math-container">$X$</span> has rank <span class="math-container">$d$</span>, it has <span class="math-container">$d$</span> linearly independent columns.
Make the columns of <span class="math-container">$X$</span> orthonormal using Gram–Schmidt.</p>
<p>We can use the well-known formula for projections:
<span class="math-container">$$
P = X (X^T X)^{-1} X^T
$$</span>
Since the columns of <span class="math-container">$X$</span> are orthonormal, <span class="math-container">$(X^T X)^{-1} =I$</span>, so:
<span class="math-container">$$
P = X X^T
$$</span>
Let <span class="math-container">$U=X$</span> and we have
<span class="math-container">$$
P = U U^T, (U^T U)^{-1} =I = U^T U
$$</span></p>
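<p>A small numerical illustration of the equivalence (my addition, not part of the proof): build <span class="math-container">$U$</span> with orthonormal columns via a QR factorization, form <span class="math-container">$P=UU^T$</span>, and check that <span class="math-container">$P$</span> is idempotent and symmetric, and that <span class="math-container">$Pv$</span> is at least as close to <span class="math-container">$v$</span> as other points of the column space:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3

# U: n x d with orthonormal columns (Q-factor of a reduced QR factorization)
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
P = U @ U.T

# orthogonal projection: idempotent and symmetric
assert np.allclose(P @ P, P)
assert np.allclose(P, P.T)

# Pv is at least as close to v as any other sampled point of range(P)
v = rng.standard_normal(n)
best = np.linalg.norm(v - P @ v)
for _ in range(1000):
    w = P @ rng.standard_normal(n)   # a random point of range(P)
    assert best <= np.linalg.norm(v - w) + 1e-12
```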
|
405,783 | <p>I saw the following in my lecture notes, and I am having difficulties
verifying the steps taken.</p>
<p>The question is:</p>
<blockquote>
<p>Assuming $0<\epsilon\ll1$ find all the roots of the polynomial
$$\epsilon^{2}x^{3}+x+1$$ which are $O(1)$ up to a precision of
$O(\epsilon^{2})$</p>
</blockquote>
<p>and the solution given was </p>
<blockquote>
<p>Assume that $x=O(1)$ and that $$x(\epsilon)=x_{0}+\epsilon
x_{1}+O(\epsilon^{2})$$ Then by setting it in the equation and letting
$\epsilon\to0$ we get $$x_{0}=-1,x_{1}=0$$</p>
<p>Hence $x(\epsilon)=-1+O(\epsilon^{2})$</p>
</blockquote>
<p>I have two questions: </p>
<ol>
<li><p>Where did we use the assumption that $x=O(1)$?</p></li>
<li><p>How did they get $$x_{0}=-1,x_{1}=0 ?$$ </p></li>
</ol>
<p>When I did the step of substituting
it into the equation and letting $\epsilon\to0$, I got $$x_{0}+1+O(\epsilon^{2})=0$$
and so I don't know anything about $x_{1}$.</p>
<p>Should I ignore the $O(\epsilon^{2})$ term,
and from that get $x_{0}=-1$?</p>
| TMM | 11,176 | <p>This is not exactly as it is written above, but my approach would be as follows. </p>
<p>First, assume that $x(\epsilon) = O(1)$. Then we can say $x(\epsilon) = x_0 + f(\epsilon)$, where $f(\epsilon) = o(1)$ is some function in $\epsilon$, and $x_0$ is constant in $\epsilon$. Filling this in, and using that $x(\epsilon)$ is a root of the equation, we get
$$\epsilon^2 (x_0 + f(\epsilon))^3 + (x_0 + f(\epsilon)) + 1 = 0.$$
Expanding the cubic, we get
$$\left(\epsilon^2 x_0^3 + 3 \epsilon^2 x_0^2 f(\epsilon) + 3 \epsilon^2 x_0 f(\epsilon)^2 + \epsilon^2 f(\epsilon)^3\right) + \left(x_0 + f(\epsilon)\right) + 1 = 0.$$
Now, note that $f(\epsilon) = o(1)$, so also $f(\epsilon)^2 = o(1)$ and $f(\epsilon)^3 = o(1)$. Further note that $x_0 = O(1)$, so also $x_0^2 = O(1)$ and $x_0^3 = O(1)$. So the second, third and fourth terms above are all $o(\epsilon^2)$. So if we combine all these $o(\epsilon^2)$-terms into one $o(\epsilon^2)$-term, we get
$$\epsilon^2 x_0^3 + o(\epsilon^2) + x_0 + f(\epsilon) + 1 = 0.$$
Rewriting this slightly, to group the terms of the same order, we get
$$(x_0 + 1) + \left(f(\epsilon) + x_0^3 \epsilon^2 + o(\epsilon^2)\right) = 0.$$
Equating the proper order terms on both sides, we must therefore have $x_0 = -1$ and $f(\epsilon) = -x_0^3 \epsilon^2 + o(\epsilon^2) = \epsilon^2 + o(\epsilon^2)$, from which we may conclude that $x(\epsilon) = -1 + \epsilon^2 + o(\epsilon^2)$.</p>
<p>If we do not assume $x(\epsilon) = O(1)$, then we get more solutions. In particular, the two other solutions are of the form
$$\begin{align} x(\epsilon) &= \frac{\pm i}{\epsilon} + O(1) \end{align}$$
For these solutions, we cannot say that $x(\epsilon) = x_0 + o(1)$ for some constant $x_0$, so we did not find these solutions with the above method. </p>
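<p>A numerical check (my addition) agrees with both expansions: for small $\epsilon$ the cubic has one real root near $-1+\epsilon^2$ and two complex roots of magnitude roughly $1/\epsilon$:</p>

```python
import numpy as np

eps = 1e-2
# roots of eps^2 x^3 + 0 x^2 + x + 1
roots = np.roots([eps**2, 0.0, 1.0, 1.0])

real = [r.real for r in roots if abs(r.imag) < 1e-6]
complex_pair = [r for r in roots if abs(r.imag) >= 1e-6]

# the O(1) root matches x(eps) = -1 + eps^2 up to o(eps^2)
assert len(real) == 1
assert abs(real[0] - (-1 + eps**2)) < 1e-6

# the two remaining roots are approximately ±i/eps + O(1)
assert len(complex_pair) == 2
assert all(abs(abs(r.imag) - 1 / eps) < 1.0 for r in complex_pair)
```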
|
2,825,522 | <p>I have this problem:</p>
<p><a href="https://i.stack.imgur.com/blD6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blD6N.png" alt="enter image description here"></a></p>
<p>I have not managed to solve the exercise, but this is my breakthrough:</p>
<p><a href="https://i.stack.imgur.com/0dTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dTdO.jpg" alt="enter image description here"></a></p>
<p>How can I continue to find it?</p>
| dxiv | 291,201 | <p>Hint: draw the segments to the center of the circle, and recognize a rhombus there formed by two equilateral triangles with a common base.</p>
<p><a href="https://i.stack.imgur.com/IeOem.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IeOem.jpg" alt="enter image description here"></a></p>
|
2,825,522 | <p>I have this problem:</p>
<p><a href="https://i.stack.imgur.com/blD6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blD6N.png" alt="enter image description here"></a></p>
<p>I have not managed to solve the exercise, but this is my breakthrough:</p>
<p><a href="https://i.stack.imgur.com/0dTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dTdO.jpg" alt="enter image description here"></a></p>
<p>How can I continue to find it?</p>
| random | 513,275 | <p>By using the symmetry of the figure.</p>
|
2,268,345 | <p>Find the value of $$S=\sum_{n=1}^{\infty}\left(\frac{2}{n}-\frac{4}{2n+1}\right)$$ </p>
<p>My Try:we have</p>
<p>$$S=2\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{2}{2n+1}\right)$$ </p>
<p>$$S=2\left(1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+\cdots\right)$$ so</p>
<p>$$S=2\left(1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\cdots\right)$$ But we know</p>
<p>$$\ln2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$$ So</p>
<p>$$S=2(2-\ln 2)$$</p>
<p>Is this correct?</p>
| Simply Beautiful Art | 272,831 | <p>Note that this is not valid by the <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem#Examples" rel="noreferrer">Riemann series theorem</a>, which shows you cannot group terms like that. In particular, the terms you are grouping tend to be farther and farther from each other, meaning you are "pulling" terms faster than others, and this results in the value of the series changing, since $\sum\frac1n$ does not converge. Indeed, note that:</p>
<p>$$S=4\sum_{n=1}^\infty\left(\frac1{2n}-\frac1{2n+1}\right)=4\sum_{n=2}^\infty\frac{(-1)^n}{n}=4(1-\ln(2))$$</p>
<p>Which is different from your result. Indeed, if we consider the partial sums correctly, this is the correct result.</p>
<hr>
<p>A more explicit example of the Riemann series theorem:</p>
<p>$$0=\sum_{n=1}^\infty\left(\frac1n-\frac1n\right)$$</p>
<p>Note that $\frac1n$ is either $\frac1{2n}$ or $\frac1{2n-1}$, hence, we group more positive terms together:</p>
<p>$$\begin{align}0&\stackrel?=\sum_{n=1}^\infty\left(\frac1{2n-1}+\frac1{2n}-\frac1n\right)\\&=\sum_{n=1}^\infty\left(\frac1{2n-1}-\frac1{2n}\right)\end{align}$$</p>
<p>But now notice that $\frac1{2n-1}-\frac1{2n}>0$ for all $n$, hence,</p>
<p>$$0\stackrel?>0$$</p>
<p>Which should be intuitive. Since we added up the positive terms faster, the resulting series became larger.</p>
<hr>
<p>If you can't see the manipulation step, here it is written out:</p>
<p>$$\begin{align}0&=\color{#4488ee}{\frac11}-\frac11+\color{#4488ee}{\frac12}-\frac12+\color{#44ee88}{\frac13}-\frac13+\color{#44ee88}{\frac14}-\frac14+\color{orange}{\frac15}-\frac15+\color{orange}{\frac16}-\frac16+\dots\\&\stackrel?=\color{#4488ee}{\frac11+\frac12}-\frac11+\color{#44ee88}{\frac13+\frac14}-\frac12+\color{orange}{\frac15+\frac16}-\frac13+\frac17+\frac18-\frac14+\dots\\&\stackrel?=\frac11+\left(\frac12-\frac11\right)+\frac13+\left(\frac14-\frac12\right)+\frac15+\left(\frac16-\frac13\right)+\frac17+\left(\frac18-\frac14\right)+\dots\\&\stackrel?=\frac11-\frac12+\frac13-\frac14+\frac16-\frac14+\dots\\&\stackrel?=\ln(2)\end{align}$$</p>
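<p>One can also check numerically that the partial sums of the original series, with the terms taken in their given order, approach $4(1-\ln 2)$ rather than $2(2-\ln 2)$ (a small script I have added for illustration):</p>

```python
import math

# partial sum of S = sum over n of (2/n - 4/(2n+1)), terms in their given order
s, N = 0.0, 10**6
for n in range(1, N + 1):
    s += 2.0 / n - 4.0 / (2 * n + 1)

correct = 4 * (1 - math.log(2))   # the value derived in this answer
claimed = 2 * (2 - math.log(2))   # the value obtained in the question

assert abs(s - correct) < 1e-5    # each term equals 2/(n(2n+1)) ~ 1/n^2
assert abs(s - claimed) > 1.0     # nowhere near the regrouped value
```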
|
1,393,869 | <p>Given a cubic polynomial with real coefficients of the form $f(x) = Ax^3 + Bx^2 + Cx + D$ $(A \neq 0)$ I am trying to determine what the necessary conditions of the coefficients are so that $f(x)$ has exactly three distinct real roots. I am wondering if there is a way to change variables to simplify this problem and am looking for some clever ideas on this matter or on other ways to obtain these conditions.</p>
| P Vanchinathan | 28,915 | <p>For simplicity assume $A>0$ (you can easily get analogous conditions for $A<0$). Then the first turning point will be a maximum and the second one will be a minimum. Then, for the 3 distinct roots:</p>
<p>(i) the least root should be earlier than the point where maximum is attained</p>
<p>(ii) the middle root should be between the maximum and the minimum</p>
<p>(iii) the largest root would be bigger than the minimum.</p>
<p>This translates to: the value of $f$ at the local maximum should be positive, and its value at the local minimum should be negative.
You can now turn this into a condition on the coefficients of $f$, using the critical points obtained from $f'(x)=0$.</p>
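<p>For comparison, here is an equivalent algebraic route that I am adding (it is not the argument above, but the standard discriminant criterion): $f$ has three distinct real roots exactly when the discriminant $\Delta = 18ABCD - 4B^3D + B^2C^2 - 4AC^3 - 27A^2D^2$ is positive. A quick check:</p>

```python
import numpy as np

def cubic_discriminant(A, B, C, D):
    # standard discriminant of A x^3 + B x^2 + C x + D
    return 18*A*B*C*D - 4*B**3*D + B**2*C**2 - 4*A*C**3 - 27*A**2*D**2

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6 has three distinct real roots
assert cubic_discriminant(1, -6, 11, -6) > 0

# x^3 + x + 1 has only one real root, so its discriminant is negative
assert cubic_discriminant(1, 0, 1, 1) < 0

# cross-check the second example against the actual roots
roots = np.roots([1, 0, 1, 1])
assert sum(abs(r.imag) < 1e-6 for r in roots) == 1
```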
|
3,497,679 | <p>I was given this function:
<span class="math-container">$$
f(x)=
\begin{cases}
x+x^2, & x\in\Bbb Q\\
x, & x\notin \Bbb Q
\end{cases}
$$</span>
I first proved that it is continuous at <span class="math-container">$x=0$</span>.</p>
<p>Now I need to prove that that for every <span class="math-container">$x_0 \in \mathbb R\setminus\{0\}$</span> the limit <span class="math-container">$\lim \limits_{x \to x_0}f(x)$</span> does not exist.</p>
<p>I know that I need to start by assuming that the limit does exist but I don't know how to reach a contradiction.</p>
| Michael Hardy | 11,667 | <p>For <span class="math-container">$x\ne0,$</span> first note that <span class="math-container">$x\ne x^2 +x,$</span> and then let <span class="math-container">$\varepsilon$</span> be half the distance between <span class="math-container">$x$</span> and <span class="math-container">$x^2+x.$</span></p>
<p>Suppose there is a limit <span class="math-container">$L.$</span> What value of <span class="math-container">$\delta$</span> is small enough to assure you that if <span class="math-container">$x-\delta <w<x+\delta$</span> and <span class="math-container">$w\ne x$</span> then <span class="math-container">$L-\varepsilon < f(w) < L+\varepsilon\text{?}$</span> Some such values of <span class="math-container">$w$</span> are rational so that <span class="math-container">$f(w)$</span> will differ from <span class="math-container">$x+x^2$</span> by less than <span class="math-container">$\varepsilon;$</span> others are irrational so that <span class="math-container">$f(w)$</span> will differ from <span class="math-container">$x$</span> by less than <span class="math-container">$\varepsilon.$</span> Show that that implies that no matter what number <span class="math-container">$L$</span> is, they cannot both differ from <span class="math-container">$L$</span> by less than <span class="math-container">$\varepsilon.$</span></p>
|
2,005,604 | <p>Solving $\sqrt a + \sqrt {\cos(\sin a)} = 2$</p>
<p>I've attempted various manipulations (multiplying by one, squaring, etc.) but cannot find a way to solve for a. Anyone have an idea how I can approach this problem? Thanks. </p>
| Robert Israel | 8,508 | <p>The solution is approximately $a = 1.5994958620742425268$, but is very unlikely to be expressible in closed form. Numerical methods (e.g. Newton's method) work well.</p>
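<p>For instance, a Newton iteration on $g(a)=\sqrt a+\sqrt{\cos(\sin a)}-2$ reaches this value in a handful of steps (a sketch I have added; the starting point $a_0=1.6$ is an arbitrary choice near the root):</p>

```python
import math

def g(a):
    return math.sqrt(a) + math.sqrt(math.cos(math.sin(a))) - 2.0

def dg(a):
    # d/da sqrt(cos(sin a)) = -sin(sin a) * cos(a) / (2 * sqrt(cos(sin a)))
    u = math.cos(math.sin(a))
    return 1.0 / (2.0 * math.sqrt(a)) \
        - math.sin(math.sin(a)) * math.cos(a) / (2.0 * math.sqrt(u))

a = 1.6                    # starting guess near the root
for _ in range(8):
    a -= g(a) / dg(a)      # Newton step

assert abs(g(a)) < 1e-10
assert abs(a - 1.5994958620742425) < 1e-9
```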
|
605,277 | <p>I have an electronics project where I sample two sine waves. I would like to know what the amplitude (peak) and difference in phase is. Actually I just need to know the average product of the two waves.</p>
<p>A caveat I have is that the two sine waves have been rectified. (negatives cut off) Here is what I expect the samples to look like:</p>
<p><img src="https://i.stack.imgur.com/oUnLb.png" alt="Samples of two rectified sine waves out of phase"></p>
<p>I don't have much experience with signal processing. Can you recommend any reading or topics to research?</p>
| Semjon Mössinger | 136,192 | <p>You could try to use a least squares estimator (LS estimator). It can fit a curve with unknown amplitude and phase to a signal. A least squares estimator is always built around an observation matrix, in which you can "design" your rectified sine wave like this (if one period of your sine wave consists of 8 samples and your signal contains only one period):</p>
<pre><code>x1 = [0 0.707 1 0.707 0 0 0 0]^T
</code></pre>
<p>To estimate the phase (and the true amplitude) you must fit a cosine wave, too:</p>
<pre><code>x2 = [0 0 0 0 0 0.707 1 0.707]^T
</code></pre>
<p>So your observation matrix (to estimate the amplitude and phase of one sine wave with known frequency) is:</p>
<pre><code>X = [x1, x2]
</code></pre>
<p>The formula of the least squares estimator is:</p>
<pre><code>b = (X^T * X)^(-1) * X^T * y
</code></pre>
<p>with b containing the amplitudes of the sine and the cosine components.
By the way, the LS estimator is quite powerful and can be applied to many problems; it's worth learning. Good luck.</p>
<p><a href="https://fenix.tecnico.ulisboa.pt/downloadFile/2589868537967/bias%20of%20amplitude%20estimation%20using%20three%20parameter%20sine%20fitting%20in%20the%20presence%20of%20additive%20noise.pdf" rel="nofollow">This paper</a> shows the main ideas within the first two pages.</p>
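<p>As a concrete, runnable version of the same idea (my addition, using NumPy rather than the schematic matrices above; the frequency, noise level, and ground-truth values are made-up example numbers): fit sine and cosine columns at the known frequency and recover amplitude and phase from the two coefficients.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n = 1000.0, 50.0, 512          # sample rate, known frequency, samples
t = np.arange(n) / fs

amp_true, phase_true = 2.0, 0.7        # ground truth to recover
y = amp_true * np.sin(2 * np.pi * f0 * t + phase_true) \
    + 0.05 * rng.standard_normal(n)    # a little additive noise

# observation matrix: one sine column, one cosine column
X = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # b = (X^T X)^{-1} X^T y

# A sin(wt + phi) = (A cos phi) sin(wt) + (A sin phi) cos(wt)
amp_est = np.hypot(b[0], b[1])
phase_est = np.arctan2(b[1], b[0])

assert abs(amp_est - amp_true) < 0.02
assert abs(phase_est - phase_true) < 0.02
```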
|
876,763 | <p>Let $R$ be a commutative ring with identity. Assume that for any two principal ideals $Ra$ and $Rb$ we have either $Ra\subseteq Rb$ or $Rb\subseteq Ra$. Show that for any two ideals $I$ and $J$ in $R$, we have either $I\subseteq J$ or $J\subseteq I$.</p>
<p>Initially I thought that if I could show that every ideal in the ring is principal then I would be done, but I could not prove that. Is my approach to the problem correct? How can I proceed? Any hints would be highly appreciated. Thank you.</p>
| Alex J Best | 31,917 | <p>I'm not sure trying to show every ideal is principal will work (though I can't verify a counterexample off the top of my head!) however I'll start you off a different way:</p>
<p>First assume $I \not\subseteq J$, then we can take some $x\in I\smallsetminus J$ and now consider the principal ideal $Rx$, what can we say about this ideal?</p>
|
3,020,365 | <p>Let <span class="math-container">$A=\{t\sin(\frac{1}{t})\ |\ t\in (0,\frac{2}{\pi})\}$</span>.</p>
<p>Then </p>
<ol>
<li><p><span class="math-container">$\sup (A)<\frac{2}{\pi}+\frac{1}{n\pi}$</span> for all <span class="math-container">$n\ge 1$</span>.</p></li>
<li><p><span class="math-container">$\inf (A)> \frac{-2}{3\pi}-\frac{1}{n\pi}$</span> for all <span class="math-container">$n\ge 1$</span>.</p></li>
<li><span class="math-container">$\sup (A)=1$</span></li>
<li><span class="math-container">$\inf (A)=-1$</span></li>
</ol>
<p>My answer was options <span class="math-container">$1$</span> and <span class="math-container">$2$</span> which matched with prelim answer key provided by organization, but the final answer key changed the answer to option <span class="math-container">$1$</span> only. Why is option <span class="math-container">$2$</span> not correct?</p>
<p><strong>My attempt:</strong></p>
<p>As <span class="math-container">$-t\le t\sin(1/t)\le t$</span>, <span class="math-container">$t\sin(1/t)$</span> will always be less than <span class="math-container">$\frac{2}{\pi}$</span>, which is strictly less than <span class="math-container">$1$</span>. So option <span class="math-container">$3$</span> is false and option <span class="math-container">$1$</span> is true. Also, the value of <span class="math-container">$t\sin(1/t)$</span> is always greater than <span class="math-container">$-2/\pi\approx-0.64$</span>, so the infimum cannot be <span class="math-container">$-1$</span>.</p>
<p>Now at <span class="math-container">$t=\frac{2}{3\pi}$</span>, the function has value <span class="math-container">$\frac{-2}{3\pi}$</span>. So if option <span class="math-container">$2$</span> is false, there must be some <span class="math-container">$t$</span> for which the function attains a value strictly less than <span class="math-container">$\frac{-2}{3\pi}$</span>. Right?</p>
<p>Can you please help me with option <span class="math-container">$2$</span> now. Thanks in advance. </p>
| Shubham Johri | 551,962 | <p>Let <span class="math-container">$x=1/t\implies A=\{\frac{\sin x}x:x>\pi/2\}$</span></p>
<p>Let <span class="math-container">$f(x)=\frac{\sin x}x, x>\pi/2$</span></p>
<p><span class="math-container">$f(3\pi/2)=-\frac2{3\pi}<0, f'(3\pi/2)>0\implies f$</span> is strictly increasing at <span class="math-container">$3\pi/2$</span></p>
<p>This means you can find <span class="math-container">$a$</span> in an <span class="math-container">$\varepsilon$</span>-neighbourhood of <span class="math-container">$3\pi/2$</span> (just to its left) such that <span class="math-container">$f(a)<f(3\pi/2)\implies f(a)<-\frac2{3\pi}$</span></p>
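<p>A quick numerical scan (my addition) confirms the conclusion: near the first local minimum of <span class="math-container">$\frac{\sin x}x$</span>, around <span class="math-container">$x\approx4.4934$</span>, the function dips below <span class="math-container">$-\frac2{3\pi}\approx-0.2122$</span>, so option 2 fails:</p>

```python
import numpy as np

# sample f(x) = sin(x)/x on a dense grid to the right of pi/2
x = np.linspace(np.pi / 2 + 1e-9, 20.0, 200001)
f = np.sin(x) / x

# the sampled minimum lies strictly below -2/(3*pi) ...
assert f.min() < -2 / (3 * np.pi)

# ... and is attained near x ≈ 4.4934, the first positive solution of tan x = x
x_min = x[f.argmin()]
assert abs(x_min - 4.4934) < 0.01
```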
|
1,796,156 | <p>Let $F(n)$ denote the $n^{\text{th}}$ Fibonacci number<a href="http://mathworld.wolfram.com/FibonacciNumber.html" rel="noreferrer">$^{[1]}$</a><a href="http://en.wikipedia.org/wiki/Fibonacci_number" rel="noreferrer">$\!^{[2]}$</a><a href="http://oeis.org/A000045" rel="noreferrer">$\!^{[3]}$</a>. The Fibonacci numbers have a natural generalization to an analytic function of a complex argument:
$$F(z)=\left(\phi^z - \cos(\pi z)\,\phi^{-z}\right)/\sqrt5,\quad\text{where}\,\phi=\left(1+\sqrt5\right)/2.\tag1$$
This definition is used, for example, in <em>Mathematica</em>.<a href="http://reference.wolfram.com/language/ref/Fibonacci.html" rel="noreferrer">$^{[4]}$</a> It produces real values for $z\in\mathbb R$, and preserves the usual functional equation for Fibonacci numbers for all $z\in\mathbb C$: $$F(z)=F(z-1) + F(z-2).\tag2$$</p>
<hr>
<p>The fibonorial<a href="http://mathworld.wolfram.com/Fibonorial.html" rel="noreferrer">$^{[5]}$</a><a href="http://en.wikipedia.org/wiki/Fibonorial" rel="noreferrer">$\!^{[6]}$</a><a href="http://oeis.org/A003266" rel="noreferrer">$\!^{[7]}$</a> is usually denoted as $n!_F$, but here we prefer a different notation $\mathfrak F(n)$. It is defined for non-negative integer $n$ inductively as
$$\mathfrak F(0)=1,\quad \mathfrak F(n+1)=\mathfrak F(n)\times F(n+1).\tag3$$
In other words, the fibonorial $\mathfrak F(n)$ gives the product of the Fibonacci numbers from $F(1)$ to $F(n)$, inclusive. For example, $$\mathfrak F(5)=\prod_{m=1}^5F(m)=1\times1\times2\times3\times5=30.\tag4$$</p>
<blockquote>
<p><em>Questions:</em> Can the fibonorial be generalized in a natural way to an analytic function $\mathfrak F(z)$ of a complex (or, at least, positive real) variable, such that it preserves the functional equation $(3)$ for all arguments?</p>
<p>Is there an integral, series or continued fraction representation of $\mathfrak F(z)$, or a representation in a closed form using known special functions?</p>
<p>Is there an efficient algorithm to calculate values of $\mathfrak F(z)$ at non-integer arguments to an arbitrary precision?</p>
</blockquote>
<p>So, we can see that the fibonorial is to the Fibonacci numbers as the factorial is to natural numbers, and the analytic function $\mathfrak F(z)$ that I'm looking for is to the fibonorial as the analytic function $\Gamma(z+1)$ is to the factorial.</p>
<hr>
<p><em>Update:</em> While thinking on <a href="https://math.stackexchange.com/q/1914821/19661">this question</a> it occurred to me that perhaps we can use <a href="http://mathworld.wolfram.com/GammaFunction.html#eqn30" rel="noreferrer">the same trick</a> that is used to define the $\Gamma$-function using a limit involving factorials of integers:
$$\large\mathfrak F(z)=\phi^{\frac{z\,(z+1)}2}\cdot\lim_{n\to\infty}\left[F(n)^z\cdot\prod_{k=1}^n\frac{F(k)}{F(z+k)}\right]\tag5$$
or, equivalently,
$$\large\mathfrak F(z)=\frac{\phi^{\frac{z\,(z+1)}2}}{F(z+1)}\cdot\prod_{k=1}^\infty\frac{F(k+1)^{z+1}}{F(k)^z\,F(z+k+1)}\tag{$5'$}$$
This would give
$$\mathfrak F(1/2)\approx0.982609825013264311223774805605749109465380972489969443...\tag6$$
that appears to have a closed form in terms of the <a href="http://mathworld.wolfram.com/q-PochhammerSymbol.html" rel="noreferrer">q-Pochhammer symbol</a>:
$$\mathfrak F(1/2)=\frac{\phi^{3/8}}{\sqrt[4]{5}}\,\left(-\phi^{-2};-\phi^{-2}\right)_\infty\tag7$$
and is related to the <a href="http://mathworld.wolfram.com/FibonacciFactorialConstant.html" rel="noreferrer">Fibonacci factorial constant</a>.</p>
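<p>A quick numerical evaluation of formula $(5)$, added here as an illustration (truncating the limit at $n=200$; the truncation point is an arbitrary choice), reproduces both the integer fibonorials and the value in $(6)$:</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2

def F(z):
    # the analytic continuation (1) of the Fibonacci numbers
    return (phi**z - math.cos(math.pi * z) * phi**(-z)) / math.sqrt(5)

def fibonorial(z, n=200):
    # formula (5): phi^{z(z+1)/2} * lim_n F(n)^z * prod_{k=1}^n F(k)/F(z+k)
    prod = 1.0
    for k in range(1, n + 1):
        prod *= F(k) / F(z + k)
    return phi**(z * (z + 1) / 2) * F(n)**z * prod

# integer arguments reproduce the ordinary fibonorial, e.g. the value in (4)
assert abs(fibonorial(5) - 30) < 1e-8

# and at z = 1/2 the value quoted in (6) appears
assert abs(fibonorial(0.5) - 0.9826098250132643) < 1e-10
```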
| giobrach | 332,594 | <p>I think I've found a <strike>suitable</strike> [see <strong>EDIT</strong> below] way to generalize the fibonorial in the reals! (I've taken inspiration from an answer to <a href="https://math.stackexchange.com/questions/1842543/write-an-expression-in-terms-of-n-for-the-nth-term-in-the-following-sequence">this question</a>.)</p>
<p>The first terms of the fibonorial sequence $\mathfrak{F}(n)$ are:
$$\mathfrak{F}_7(n)=\color{red}{1},1,2,6,30,240,3120,65620$$
where $\mathfrak{F}_7(n)$ is just $\mathfrak{F}(n)$ truncated after the term of index $7$, for no particular reason. If we were to take the differences between consecutive terms in this partial sequence, we'd obtain
$$\Delta\mathfrak{F}_7(n)=\color{red}{0},1,4,24,210,2880,62500$$
and if we were to go on, we'd arrive at the point where we'd obtain a single number, in this case $$\Delta^7\mathfrak{F}_7(n)=\color{red}{47844}$$
and if we were to go even further, we'd find an infinite sequence of $0$s. (Indeed this process is akin to differentiation of a polynomial.)</p>
<p>As you can notice, I've highlighted the first terms of each sequence of differences in red. The sequence of these red terms I define to be $c(n)=\Delta^n\mathfrak{F}(0)$, and by inspection and induction one obtains that
$$c(n)=(-1)^n\sum_{k=0}^n(-1)^k\binom{n}{k}\mathfrak{F}(k)$$
Now the fibonorial sequence can be approximated as follows:
$$\begin{align}
T_q(n)&=\frac{c(0)}{0!}+\frac{c(1)}{1!}n+\frac{c(2)}{2!}n(n-1)+\cdots+\frac{c(q)}{q!}n(n-1)\cdots(n-q+1)\\
&=\sum_{j=0}^q \frac{c(j)}{j!}\frac{n!}{(n-j)!}
\end{align}$$
This can of course be generalized to real $x$ by using the $\Gamma$ function:
$$T_q(x)=\sum_{j=0}^q \frac{c(j)}{j!}\frac{\Gamma(x+1)}{\Gamma(x-j+1)}$$
which means that the limit of $T_q(x)$ as $q\to\infty$ should give us the Maclaurin series for a generalization of $\mathfrak{F}(n)$, which I shall call $\Phi(x)$:
$$\Phi(x)=\sum_{j=0}^\infty \frac{c(j)}{j!}\frac{\Gamma(x+1)}{\Gamma(x-j+1)}=\sum_{j=0}^\infty \frac{(-1)^j}{j!}\sum_{k=0}^j\binom{j}{k}\frac{\Gamma(x+1)(-1)^k}{\Gamma(x-j+1)}\mathfrak{F}(k)$$</p>
<p><strong>EDIT</strong>: unfortunately, I have just realized that this generalization is an oscillating function which happens to be equal to $\mathfrak{F}(n)$ when $x$ is natural, but is awfully off otherwise. I won't delete the answer because I want to know what has gone wrong!</p>
|
425,460 | <p>While browsing through several pages of nLab (mainly on n-categories), I encountered the notion "foo" several times. However, there seems to be no article on nLab about this notion. Is this some kind of category-theorist slang? Please explain to me what this term means.</p>
| Clive Newstead | 19,542 | <p>It's slang, which I've mostly seen used in the context of computing rather than category theory; <em>foo</em> is just a placeholder for something else, as is <em>bar</em>. A logician I know likes talking about <em>widgets</em> and <em>wombats</em> $-$ it all serves the same purpose.</p>
<p>For example, you might say "an irreducible foo is a foo with no proper sub-foos".</p>
|
1,490,219 | <blockquote>
<p>Given $S=\displaystyle \bigcap^{\infty}_{k=1}\left(1-\frac{1}{k}, 1+\frac{1}{k}\right)$, what is $\sup(S)$ and $\max(S)$?</p>
</blockquote>
<p>I reasoned that this is empty, since as $k$ goes to infinity, $\frac{1}{k}$ goes to $0$. So ultimately, the intersection of all the intervals in $S$ is $(1,1)$, which should be empty. But the solution says the supremum and maximum do exist and both equal $1$. Why is this?</p>
| Nitrogen | 189,200 | <p>Since $1-\frac{1}{k} < 1 < 1+\frac{1}{k}$ for all $k>0$, $1$ is indeed in the intersection. Now, suppose there is some $1\ne a \in S$ and suppose $a<1$ (the case $1<a$ is similar.) Then, there exists $k\in \Bbb{N}$ such that $\frac{1}{k}<1-a$. Thus $a<1-\frac{1}{k}$ and $a\notin S$.</p>
<p>Therefore, $S=\{1\}$ and $\sup(S)=\inf(S)=1$.</p>
|
3,877,652 | <p>Can anyone help me how to do this, like there are some examples in my book, but this exercise problem seems to be alittle difficult for me to approach:</p>
<p>Given a set <span class="math-container">$\{A_k|k\in\mathbb{N}\}$</span>:
<span class="math-container">$$A_k=\bigg\{x\in\mathbb{R}\bigg|\space\space 1-\frac{1}{k}<x<1+\frac{1}{k}\bigg\}$$</span></p>
<p>Find:
<span class="math-container">$$\bigcup_{k\in \mathbb{N}} A_k,\text{and} \bigcap_{k\in \mathbb{N}} A_k$$</span></p>
<p><strong>My thoughts:</strong></p>
<p>I think that <span class="math-container">$\bigcup_{k\in \mathbb{N}} A_k=(0,2)$</span>, since the intervals shrink from <span class="math-container">$(0,2)$</span> to <span class="math-container">$(0.5,1.5), (0.666, 1.333)$</span>, etc. However, I have trouble showing this in a mathematical way. Can anyone provide me with some assistance, as I am having trouble reasoning with set logic?</p>
<p>Additionally, for <span class="math-container">$\bigcap_{k\in \mathbb{N}} A_k$</span>: as <span class="math-container">$k\rightarrow \infty$</span> the inequality becomes <span class="math-container">$1<x<1$</span>, which cannot hold, so must the intersection be <span class="math-container">$\bigcap_{k\in \mathbb{N}} A_k =\emptyset$</span>?</p>
| José Carlos Santos | 446,262 | <p>Since <span class="math-container">$A_1=(0,2)$</span> and since <span class="math-container">$k\geqslant1\implies A_k\subset(0,2)$</span>, the union <span class="math-container">$\bigcup_{k\in\Bbb N}A_k$</span> is indeed <span class="math-container">$(0,2)$</span>.</p>
<p>And the intersection <span class="math-container">$\bigcap_{n\in\Bbb N}A_k$</span> is <span class="math-container">$\{x\}$</span>, because:</p>
<ul>
<li><span class="math-container">$x\in A_k$</span> for every <span class="math-container">$k\in\Bbb N$</span>;</li>
<li>if <span class="math-container">$y\ne x$</span>, then <span class="math-container">$y>x+\frac1k$</span> for some <span class="math-container">$k\in\Bbb N$</span> or <span class="math-container">$y<x-\frac1k$</span> for some <span class="math-container">$k\in\Bbb N$</span>. In both case, <span class="math-container">$y\notin A_k$</span> and therefore <span class="math-container">$y\notin\bigcap_{n\in\Bbb N}A_k$</span>.</li>
</ul>
|
3,452,249 | <p>I'd like to ask to check for a solution to a homework problem, below.</p>
<p>Define <span class="math-container">$\forall n \in \mathbb{N}, g_n(x):=\frac{x}{(1-x)^n}$</span>. Let <span class="math-container">$F_n$</span> be the associated Newton function, i.e. <span class="math-container">$F_n(x):= x - \frac{g_n(x)}{g_n'(x)}$</span>. </p>
<p>How do we show that <span class="math-container">$\infty$</span> is an attracting fixed point of <span class="math-container">$F_n$</span>? I can see that <span class="math-container">$F_n(\infty)=\infty$</span>, since <span class="math-container">$F_n(x)= x [1 - \frac{x-1}{x(1-n)-1}] = x [1 - \frac{1-1/x}{(1-n)-1/x}]$</span>. But how would one define <span class="math-container">$F_n'(\infty)$</span>: as a limit of <span class="math-container">$F_n'(x)$</span> as <span class="math-container">$x \to \infty$</span>? If yes, how do we show that <span class="math-container">$|F_n'(\infty)|< 1$</span>, in order to prove that <span class="math-container">$\infty$</span> is attracting? Thanks in advance!</p>
<p>EDIT: if I'm not mistaken, my calculations show that:</p>
<p><span class="math-container">$F_n'(x)= x(x-1)^n[ \frac{ (1-n)(x-1) - (n+1)(x-1-nx) }{(x-1-nx)^2} ]$</span>.
This seems to go to infinity as <span class="math-container">$x \to \infty$</span>, because it's of the form <span class="math-container">$(x-1)^n \frac{O(x^2)}{O(x^2)}$</span> as <span class="math-container">$x\to \infty$</span>, if my calculations are right; so <span class="math-container">$\infty$</span> seems to be a repelling fixed point. Could someone please check it? </p>
| marty cohen | 13,079 | <p>Since</p>
<p><span class="math-container">$\begin{array}\\
F_n(x)
&=x-\dfrac{g_n(x)}{g_n'(x)}\\
&=x-\dfrac{\frac{x}{(1-x)^n}}{\frac{(n - 1) x + 1}{(1 - x)^{n + 1}}}\\
&=x-\dfrac{x(1-x)}{(n - 1) x + 1}\\
&=x+\dfrac{x(x-1)}{(n - 1) x + 1}\\
\end{array}
$</span></p>
<p>and the last fraction is positive whenever <span class="math-container">$x>1$</span>, so
<span class="math-container">$F_n(x) > x$</span> for all <span class="math-container">$x>1$</span>.</p>
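As a sanity check, one can evaluate the Newton map <span class="math-container">$F_n(x)=x-g_n(x)/g_n'(x)$</span> numerically; the closed form of the derivative used below, <span class="math-container">$g_n'(x)=((n-1)x+1)/(1-x)^{n+1}$</span>, is hand-computed from <span class="math-container">$g_n(x)=x/(1-x)^n$</span> and is an assumption of this sketch:

```python
# Numerical check that F_n(x) > x for x > 1, using
#   g_n(x)  = x / (1 - x)^n
#   g_n'(x) = ((n - 1) x + 1) / (1 - x)^(n + 1)   (hand-computed derivative)

def F(n, x):
    g = x / (1.0 - x) ** n
    gp = ((n - 1) * x + 1.0) / (1.0 - x) ** (n + 1)
    return x - g / gp

for n in (1, 2, 3, 5):
    for x in (1.5, 2.0, 10.0, 100.0):
        assert F(n, x) > x
```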
|
3,452,249 | <p>I'd like to ask to check for a solution to a homework problem, below.</p>
<p>Define <span class="math-container">$\forall n \in \mathbb{N}, g_n(x):=\frac{x}{(1-x)^n}$</span>. Let <span class="math-container">$F_n$</span> be the associated Newton function, i.e. <span class="math-container">$F_n(x):= x - \frac{g_n(x)}{g_n'(x)}$</span>. </p>
<p>How do we show that <span class="math-container">$\infty$</span> is an attracting fixed point of <span class="math-container">$F_n$</span>? I can see that <span class="math-container">$F_n(\infty)=\infty$</span>, since <span class="math-container">$F_n(x)= x [1 - \frac{x-1}{x(1-n)-1}] = x [1 - \frac{1-1/x}{(1-n)-1/x}]$</span>. But how would one define <span class="math-container">$F_n'(\infty)$</span>: as a limit of <span class="math-container">$F_n'(x)$</span> as <span class="math-container">$x \to \infty$</span>? If yes, how do we show that <span class="math-container">$|F_n'(\infty)|< 1$</span>, in order to prove that <span class="math-container">$\infty$</span> is attracting? Thanks in advance!</p>
<p>EDIT: if I'm not mistaken, my calculations show that:</p>
<p><span class="math-container">$F_n'(x)= x(x-1)^n[ \frac{ (1-n)(x-1) - (n+1)(x-1-nx) }{(x-1-nx)^2} ]$</span>.
This seems to go to infinity as <span class="math-container">$x \to \infty$</span>, because it's of the form <span class="math-container">$(x-1)^n \frac{O(x^2)}{O(x^2)}$</span> as <span class="math-container">$x\to \infty$</span>, if my calculations are right; so <span class="math-container">$\infty$</span> seems to be a repelling fixed point. Could someone please check it? </p>
| Lee Mosher | 26,501 | <p><span class="math-container">$F'(\infty)$</span> does not make any sense, because <span class="math-container">$\infty$</span> is not a point in the domain of <span class="math-container">$F'$</span>. </p>
<p>For similar reasons, the behavior of <span class="math-container">$F'(x)$</span> as <span class="math-container">$x \to \infty$</span> cannot help you to determine whether <span class="math-container">$\infty$</span> is an attracting fixed point. In particular, even if you did successfully prove that <span class="math-container">$F'(x) \to \infty$</span> as <span class="math-container">$x \to \infty$</span>, it <em>would not follow</em> that <span class="math-container">$\infty$</span> is a repelling fixed point.</p>
<p>So you've got to do something else.</p>
<p>This does raise the important question of what it even means for <span class="math-container">$\infty$</span> to be an attracting fixed point, as brought up by @Ian. The guess in your comment works: you could work in the 1-point compactification <span class="math-container">$\mathbb R^* = \mathbb R \cup \{\infty\}$</span> (which is indeed homeomorphic to <span class="math-container">$\mathbb S^1$</span>). But one still has to refine that guess into a working mathematical strategy.</p>
<p>Here's a strategy that works: instead of thinking about <span class="math-container">$\mathbb S^1$</span> directly, let's use the "reciprocal" coordinate <span class="math-container">$y = \frac{1}{x}$</span> defined on the set <span class="math-container">$\mathbb R^* - \{0\}$</span>, which actually <em>does</em> define a valid coordinate system around <span class="math-container">$\infty$</span> in the 1-point compactification <span class="math-container">$\mathbb R^*$</span>. </p>
<p>Working in the reciprocal coordinate, let's rewrite the formula for <span class="math-container">$F_n(x)$</span>:
<span class="math-container">$$G_n(y) = \frac{1}{F_n(1/y)} = \frac{1}{\frac{1}{y}[1 - \frac{1-y}{1-n-y}]} = \frac{y(1-n-y)}{(1-n-y)-(1-y)} = \frac{y(1-n-y)}{-n} = \frac{1}{n}(y^2+(n-1)y)
$$</span>
Now compute <span class="math-container">$G'_n(y) = \frac{1}{n}(2y + n - 1)$</span>, and so <span class="math-container">$G'_n(0) = \frac{n-1}{n}$</span> and hence <span class="math-container">$|G'_n(0)| < 1$</span>. This proves that <span class="math-container">$0$</span> is an attracting fixed point of <span class="math-container">$G_n(y)$</span>, hence <span class="math-container">$\infty$</span> is an attracting fixed point of <span class="math-container">$F_n(x)$</span>.</p>
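The contraction can also be watched numerically: iterating <span class="math-container">$G_n(y)=\frac1n(y^2+(n-1)y)$</span> from a small positive <span class="math-container">$y_0$</span>, the orbit shrinks toward <span class="math-container">$0$</span> at rate roughly <span class="math-container">$\frac{n-1}{n}$</span>, i.e. the corresponding <span class="math-container">$F_n$</span>-orbit runs off to <span class="math-container">$\infty$</span>:

```python
# Iterate G_n(y) = (y^2 + (n-1)y)/n from y0 = 0.4: the orbit should collapse
# onto the attracting fixed point 0 (equivalently, infinity attracts for F_n).

def G(n, y):
    return (y * y + (n - 1) * y) / n

for n in (2, 3, 10):
    y = 0.4
    for _ in range(200):
        y = G(n, y)
    assert 0 <= y < 1e-6   # orbit has collapsed onto 0
```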
|
3,802,269 | <p>Evaluate <span class="math-container">$\lim_{x\rightarrow \infty} x\int_{0}^{x}e^{t^2-x^2}dt$</span></p>
<p>My approach:</p>
<p><span class="math-container">$$
\lim_{x\rightarrow \infty} x\int_{0}^{x}e^{t^2-x^2}dt = \lim_{x\rightarrow \infty} \frac{\int_{0}^{x}e^{t^2}dt}{x^{-1}e^{x^2}}
$$</span></p>
<p>Both the numerator and denominator <span class="math-container">$\rightarrow \infty$</span> as <span class="math-container">$x\rightarrow \infty$</span>. Apply L'Hopital's rule and FTC:</p>
<p><span class="math-container">$$
\lim_{x\rightarrow \infty} x\int_{0}^{x}e^{t^2-x^2}dt = \lim_{x\rightarrow \infty} \dfrac{e^{x^2}}{2e^{x^2}-x^{-2}e^{x^2}}=\frac{1}{2}
$$</span></p>
<p>I am looking for a verification of my result. Thank you!</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">$\ds{\Large\left. a\right)}$</span>
<span class="math-container">\begin{align}
&\bbox[5px,#ffd]{\lim_{x \to \infty}\pars{x\int_{0}^{x}\expo{t^{2} - x^{2}}\dd t}} =
\lim_{x \to \infty}\bracks{x\int_{0}^{x}\expo{\pars{x - t}^{2} - x^{2}}\dd t}
\\[5mm] = &
\lim_{x \to \infty}\bracks{x\int_{0}^{x}\expo{-2tx}\expo{t^{2}}
\dd t}
=
\lim_{x \to \infty}\pars{x\int_{0}^{\infty}\expo{-2tx}\dd t}
\\[5mm] = &\
\lim_{x \to \infty}\pars{x\,{1 \over 2x}} = \bbx{1 \over 2} \\ &
\end{align}</span>
See <a href="https://en.wikipedia.org/wiki/Laplace%27s_method" rel="nofollow noreferrer">Laplace Method</a>.</p>
<hr>
<span class="math-container">$\ds{\Large\left. b\right)}$</span>
<span class="math-container">\begin{align}
&\bbox[5px,#ffd]{\lim_{x \to \infty}
\pars{x\int_{0}^{x}\expo{t^{2} - x^{2}}\dd t}} =
\lim_{x \to \infty}
\braces{x\bracks{{1 \over 2}\,\root{\pi}\expo{-x^{2}}\,{\mrm{erf}\pars{\ic x} \over \ic}}}
\end{align}</span>
<p>where <span class="math-container">$\ds{\mrm{erf}}$</span> is an
<a href="https://dlmf.nist.gov/7.2.E1" rel="nofollow noreferrer">Error Function</a> which has the <a href="https://mathworld.wolfram.com/Erf.html" rel="nofollow noreferrer">asymptotic behavior</a> <span class="math-container">$\ds{\mrm{erf}\pars{\ic x} \sim
1 - {\expo{x^{2}} \over \root{\pi}\ic x}}$</span>. Then,
<span class="math-container">\begin{align}
&\bbox[5px,#ffd]{\lim_{x \to \infty}
\pars{x\int_{0}^{x}\expo{t^{2} - x^{2}}\dd t}} =
\bbx{1 \over 2} \\ &
\end{align}</span></p>
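The limit can also be checked by brute-force quadrature (a simple trapezoidal rule; the step count below is an ad hoc choice). The first correction to <span class="math-container">$\frac12$</span> is of order <span class="math-container">$1/x^2$</span>, so the error should shrink as <span class="math-container">$x$</span> grows:

```python
# Trapezoidal-rule check that x * \int_0^x e^{t^2 - x^2} dt -> 1/2.
import math

def value(x, steps=200000):
    h = x / steps
    total = 0.5 * (math.exp(-x * x) + 1.0)   # endpoint terms t = 0 and t = x
    for i in range(1, steps):
        t = i * h
        total += math.exp(t * t - x * x)
    return x * total * h

v5, v10 = value(5.0), value(10.0)
assert abs(v5 - 0.5) < 0.02
assert abs(v10 - 0.5) < 0.005
assert abs(v10 - 0.5) < abs(v5 - 0.5)   # error shrinks as x grows
```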
|
1,476,946 | <p>So, I'm just starting to peruse "Categories for the Working Mathematician", and there's one thing I'm uncertain about. Let's say I have three objects, $X,Y,Z$ and two arrows $f,g$ such that $X\overset {f} {\to}Y\overset {g} {\to}Z$. Does this necessitate that the composition arrow exists so the diagram commutes, i.e. must I have an $X\overset {h} {\to} Z$ such that $h=g\circ f$, or is it just that IF such an arrow $h$ exists, then the diagram commutes? </p>
<p>The question came up when the book defined preorders, saying that they were transitive since we could associate arrows...I just wanted to make sure association of arrows actually mandates the creation of the direct arrow.</p>
| Community | -1 | <p>The definition of category ensures that you can <em>always</em> "compose" two arrows if one's target is the other's source: that is, every diagram</p>
<p>$$ \begin{matrix} X & \xrightarrow{f} & Y
\\ & & \downarrow g\!\!\!\!\!
\\ & & Z
\end{matrix}$$</p>
<p>can be (uniquely!) completed to a commutative diagram</p>
<p>$$ \begin{matrix} X & \xrightarrow{f} & Y
\\ &\!\!{}_{gf\!\!\!}\searrow & \downarrow g\!\!\!\!\!
\\ & & Z
\end{matrix}$$</p>
<p>However, it is <strong>not</strong> true that every diagram</p>
<p>$$ \begin{matrix} X & \xrightarrow{f} & Y
\\ &\!\!{}_{h\!\!\!}\searrow & \downarrow g\!\!\!\!\!
\\ & & Z
\end{matrix}$$</p>
<p>commutes; in many categories, you can have $h \neq gf$ in such a diagram.</p>
<hr>
<p>Preorders are a special case; they can be defined as categories with the weird property that <em>every</em> diagram is commutative; e.g. in a preorder, whenever you have the third triangle above, you have to have $h = gf$.</p>
|
1,707,929 | <p>How do I solve for the object distance to each receiver for three radar receivers on the ground, each the same distance from the other, and each receiving echoes, reflected from an object overhead, of a signal pulse from a single transmitter located on the ground at the exact center of the receivers? </p>
<p>The transmitter is located at the exact center of an equilateral triangle bounded by the three receivers.
The transmitter is t. The three receivers are a, b, c. The overhead object is o.</p>
<p>The physical distance between each pair of receivers is known (a—b, b—c, c—a).
The travel times of the signal from the transmitter to the object and echoed back to each receiver are known (t—o—a, t—o—b, t—o—c). These three times will be equal if the object is directly above the transmitter. The three times will differ if the object is at a different distance from each receiver.</p>
<p>The signal from transmitter, reflected from object, and received by receivers, always travels at a constant speed 's'.</p>
<p>Ground is a plane. Any environmental factor (such as air density) that affects speed of transmission pulse is a constant.</p>
<p>===================================================
UPDATE:</p>
<p>I implemented the solution in the answer below into 'C' to test it. The x,y,z coordinate solution for case 1 & 2 looks correct. But the x coordinate is incorrectly 0 for case 3 & 4.
It should be < 0 for case 3, and > 0 for case 4.</p>
<pre><code>int main()
{
    printf( "\n\n CASE 1: object directly obove TX:");
    generate_3D_vector( 10, 10, 10 );

    printf( "\n\n CASE 2: object closer to RX-a:");
    generate_3D_vector( 9, 10, 10 );
    printf( "\n\n CASE 3: object closer to RX-b:");
    generate_3D_vector( 10, 9, 10 );
    printf( "\n\n CASE 4: object closer to RX-c:");
    generate_3D_vector( 10, 10, 9 );
}

void generate_3D_vector( float ra, float rb, float rc )
{
    float x, y, z;

    x = sqrt( 3.0 ) * ( rb - rc )*( 15 / 24 - 1 / 3 * ra*( ra - rb - rc ) + 2 / 3 * rb*rc ) / ( ra + rb + rc );
    y = ( 1.0/2 *ra*( 1 / 8 - ra*( rb + rc ) + rb*rb + rc*rc ) - 1 / 2 * ( ra - rb - rc ) ) / ( ra + rb + rc );
    z = sqrt( ra*ra*( ra*ra - 2 - 4 * ( x*x + y*y - y ) ) + ( pow( 2 * y - 1, 2 ))) / ( 2 * ra );
    printf( "\n ra rb rc = %.f %.f %.f ==> x y z1 = %f %f %f ", ra, rb, rc, x, y, z );
}
</code></pre>
<p>===================================================
UPDATE 4/18/2016:</p>
<p>I implemented the latest x,y,z solution from the formulae, but it didn't work.
Next, I copied and pasted, as is, the three AWK equations for x,y,z and got the results below, with problems: x looks ok. y looks ok except for the 9,10,10 case, in which y at -8 seems incorrect. z crashes in every case.</p>
<p>Here is my 'C' source code:</p>
<pre><code>int main()
{
    printf( "\n\n CASE 1: object directly obove TX:" );
    generate_3D_vector( 10, 10, 10 );
    printf( "\n\n CASE 2: object closer to RX-a:" );
    generate_3D_vector( 9, 10, 10 );
    printf( "\n\n CASE 3: object closer to RX-b:" );
    generate_3D_vector( 10, 9, 10 );
    printf( "\n\n CASE 4: object closer to RX-c:" );
    generate_3D_vector( 10, 10, 9 );
}

void generate_3D_vector( float ra, float rb, float rc )
{
    float x, y, z;
    float ra2 = ra*ra;
    float ra4 = pow( ra, 4 );

    float C1 = -sqrt( 3.0 ) / 6.0;
    float C2 = -0.5;
    float r_a = ra;
    float r_b = rb;
    float r_c = rc;

    x = C1 * ( r_b - r_c ) * ( r_a*( r_a - r_b - r_c ) + 2.0 * r_b*r_c + 1 ) / ( r_a - r_b - r_c );
    y = C2 * ( r_a*( r_a*( r_b + r_c ) - r_b*r_b - r_c*r_c + 2.0 ) - r_b - r_c ) / ( r_a - r_b - r_c );
    z = sqrt( r_a*r_a * ( r_a*r_a - 4.0 * ( x*x + y*y + y ) - 2 ) + 4 * ( y*y + y ) + 1 ) / ( 2 * r_a );
    printf( "\n ra rb rc = %.f %.f %.f ==> x y z1 = %f %f %f ", ra, rb, rc, x, y, z );
}
</code></pre>
<p>AND HERE IS MY 'C' OUTPUT........................</p>
<pre><code>CASE 1: object directly obove TX:
 ra rb rc = 10 10 10 ==> x y z1 = 0.000000 0.000000 4.950000

CASE 2: object closer to RX-a:
 ra rb rc = 9 10 10 ==> x y z1 = 0.000000 -8.272727 -nan(ind)

CASE 3: object closer to RX-b:
 ra rb rc = 10 9 10 ==> x y z1 = -2.918826 5.055555 -nan(ind)

CASE 4: object closer to RX-c:
 ra rb rc = 10 10 9 ==> x y z1 = 2.918826 5.055555 -nan(ind)
</code></pre>
<p>(For case 2, I would expect y to be greater, i.e. closer to 0.)</p>
| amd | 265,466 | <p>For each receiver, the possible locations of the target are on the surface of an ellipsoid of revolution that has the transmitter as one focus and the receiver as the other. The target will be at the intersection of these three surfaces.</p>
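A pure-Python sketch of this intersection idea: each measured total path length <span class="math-container">$R_i=|o-t|+|o-r_i|$</span> defines an ellipsoid with foci at the transmitter <span class="math-container">$t$</span> and receiver <span class="math-container">$r_i$</span>, and the object solves the three equations simultaneously. All names, coordinates, and the Newton-iteration setup below are illustrative assumptions, not part of the question's code:

```python
# Solve the three ellipsoid equations |o - t| + |o - r_i| = R_i by Newton's
# method with a finite-difference Jacobian (hypothetical helper names).
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def residuals(o, tx, rxs, ranges):
    return [dist(o, tx) + dist(o, r) - R for r, R in zip(rxs, ranges)]

def solve3(J, b):
    # Cramer's rule for the 3x3 Newton system J * step = b
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(J)
    out = []
    for c in range(3):
        M = [row[:] for row in J]
        for r in range(3):
            M[r][c] = b[r]
        out.append(det(M) / d)
    return out

def locate(tx, rxs, ranges, guess, iters=40, h=1e-7):
    o = list(guess)
    for _ in range(iters):
        F = residuals(o, tx, rxs, ranges)
        J = []
        for i in range(3):
            row = []
            for j in range(3):
                op = o[:]
                op[j] += h
                row.append((residuals(op, tx, rxs, ranges)[i] - F[i]) / h)
            J.append(row)
        step = solve3(J, F)
        o = [oi - si for oi, si in zip(o, step)]
    return o

# Synthetic check: receivers on an equilateral triangle around the transmitter.
tx = (0.0, 0.0, 0.0)
rxs = [(0.0, 1.0, 0.0), (-0.8660254, -0.5, 0.0), (0.8660254, -0.5, 0.0)]
truth = (1.0, 2.0, 8.0)
ranges = [dist(truth, tx) + dist(truth, r) for r in rxs]
est = locate(tx, rxs, ranges, guess=(0.5, 1.0, 6.0))
assert all(abs(e - v) < 1e-5 for e, v in zip(est, truth))
```

The starting guess must sit on the correct side of the ground plane, since the mirror image of the object below the plane satisfies the same three equations.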
|
3,430,066 | <p><strong>Question:</strong></p>
<p>Calculate the integral </p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2}$$</span></p>
<p><strong>Attempted solution:</strong></p>
<p>I initially had two approaches. First was recognizing that the denominator looks like a quadratic equation. Perhaps we can factor it.</p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \int_0^1 \frac{dx}{e^{-2x}(e^x+1)(e^{2x}+e^x-1)}$$</span></p>
<p>To me, this does not appear productive. I also tried factoring out <span class="math-container">$e^x$</span> with a similar unproductive result.</p>
<p>The second was trying to make it into a partial fraction. To get to a place where this can efficiently be done, I need to do a variable substitution:</p>
<p><span class="math-container">$$\int_0^1 \frac{dx}{e^x-e^{-2x}+2} = \Big[ u = e^x; du = e^x \, dx\Big] = \int_1^e \frac{u}{u^3+2u^2 - 1} \, du$$</span></p>
<p>This looks like partial fractions might work. However, the question is from a single variable calculus book and the only partial fraction cases that are covered are denominators of the types <span class="math-container">$(x+a), (x+a)^n, (ax^2+bx +c), (ax^2+bx +c)^n$</span>, but polynomials with a power of 3 is not covered at all. Thus, it appears to be a "too difficult" approach.</p>
<p>A third approach might be to factor the new denominator before doing partial fractions:</p>
<p><span class="math-container">$$\int_1^e \frac{u}{u^3+2u^2 - 1} \, du = \int_1^e \frac{u}{u(u^2+2u - \frac{1}{u})} \, du$$</span></p>
<p>However, even this third approach does not have a denominator that is suitable for partial fractions, since it lacks a u-free term.</p>
<p>What are some productive approaches that can get me to the end without resorting to partial fractions from variables with a power higher than <span class="math-container">$2$</span>?</p>
| José Carlos Santos | 446,262 | <p>Since <span class="math-container">$-1$</span> is a root of <span class="math-container">$u^3+2u^2-1$</span>, you can write it as <span class="math-container">$u+1$</span> times a quadratic monic polynomial. It turns out that that polynomial is <span class="math-container">$u^2+u-1$</span>. Moreover,<span class="math-container">$$\frac u{u^3+2u^2-1}=\frac1{u+1}+\frac{-u+1}{u^2+u-1}.$$</span>Can you take it from here?<hr /><strong>Note:</strong> There is an error in your computations: the integral that you should be computing is<span class="math-container">$$\int_1^e\frac u{u^3+2u^2-1}\,\mathrm du.$$</span></p>
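The factorization and the partial-fraction decomposition can be spot-checked numerically at a few sample points:

```python
# Spot-check: (u+1)(u^2+u-1) = u^3+2u^2-1, and the partial-fraction identity.
for u in (1.3, 2.0, 2.718281828, 5.0):
    assert abs((u + 1) * (u ** 2 + u - 1) - (u ** 3 + 2 * u ** 2 - 1)) < 1e-9
    lhs = u / (u ** 3 + 2 * u ** 2 - 1)
    rhs = 1 / (u + 1) + (1 - u) / (u ** 2 + u - 1)
    assert abs(lhs - rhs) < 1e-12
```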
|
251,466 | <p>Let $A$, $B$ and $C$ be three points in a disk,
does $f\left(A,B,C\right)=\mbox{Area}\left(\mbox{triangle}\,ABC\right)/\mbox{Perimeter}\left(\mbox{triangle}\,ABC\right)$
have maximum on
the boundary? </p>
| chloe_shi | 45,070 | <p>$\triangle ABC=\dfrac{r(a+b+c)}{2}=rs$ $~$ $\Longrightarrow$ $~$ $\dfrac{\triangle ABC}{s}=r$ , so the ratio area/perimeter equals $\dfrac{r}{2}$.<br>
Euler's theorem : $~$ $\underline{OI^{2}=R^{2}-2Rr}\geq 0$ $~$ $\Longrightarrow$ $~$ $r\leq\dfrac{R}{2}$ , with equality exactly when $OI=0$ , i.e. $R=2r$.<br>
That is, the triangle is equilateral. </p>
<p>Now, for the circumradius $R$ of $\triangle ABC$ to be maximal, the circumcircle of $\triangle ABC$ must be the boundary of the disk. Hence the maximum of area/perimeter is attained on the boundary, by an inscribed equilateral triangle.</p>
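A Monte-Carlo sanity check in the unit disk (so <span class="math-container">$R=1$</span>): no random triangle beats the ratio <span class="math-container">$\frac14$</span> attained by an inscribed equilateral triangle (<span class="math-container">$r=R/2$</span> gives ratio <span class="math-container">$r/2=\frac14$</span>):

```python
# Random triangles in the unit disk: area/perimeter never exceeds 1/4,
# the value attained by an inscribed equilateral triangle.
import math, random

def ratio(A, B, C):
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    s = (a + b + c) / 2
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return area / (2 * s)          # perimeter = 2s

def rand_pt(rng):
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return (x, y)

rng = random.Random(0)
best = max(ratio(rand_pt(rng), rand_pt(rng), rand_pt(rng)) for _ in range(20000))
eq = [(math.cos(t), math.sin(t)) for t in (0, 2 * math.pi / 3, 4 * math.pi / 3)]
assert abs(ratio(*eq) - 0.25) < 1e-9   # equilateral achieves exactly 1/4
assert best <= 0.25 + 1e-9             # nothing random does better
```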
|
68,386 | <p>I'm looking for a theorem of the form </p>
<blockquote>
<p>If $R$ is a nice ring and $v$ is a reasonable element in $R$ then Kr.Dim$(R[\frac{1}{v}])$ must be either Kr.Dim$(R)$ or Kr.Dim$(R)-1$.</p>
</blockquote>
<p>My attempts to do this purely algebraically are not working, so I started looking into methods from algebraic geometry. I thought that Grothendieck's Vanishing Theorem might help (i.e. if dim$(X)=n$ then $H^i(X,\mathcal{F})=0$ for any sheaf of abelian groups $\mathcal{F}$ and any $i>n$) but the problem is that the converse for this theorem fails, so I can't conclude anything about dimension. Perhaps this theorem could give some sort of test for when dimension drops, but I'm hoping for a better answer.</p>
<p>We'll definitely need some hypotheses. For the application I have in mind we can assume $R$ is commutative and is finitely generated over some base ring (e.g. $\mathbb{Z}_{(2)}$), but we should not assume it's an integral domain. If necessary we can assume it's Noetherian and local, but I'd rather avoid this. As for $v$, it's not in the base ring and it has only a few relations with other elements in $R$, none of which are in the base ring. If we can't get the theorem above, perhaps we can figure out something to help me get closer:</p>
<blockquote>
<p>Are there any conditions on $v$ such that the dimension would drop by more than 1 after inverting $v$?</p>
</blockquote>
<p>One thing I know: to have any hope of dimension dropping by $1$ I need to be inverting a maximal irreducible component. I'm curious as to the algebraic condition this puts on $v$. </p>
| Sándor Kovács | 10,076 | <p>$R[\frac 1v]$ corresponds to the open subset of $\mathrm{Spec}R$ where $v\neq 0$. So all kinds of things can happen. Here is an example: Let $R_1$ be a "nice" ring of dimension $n$ with a unit $v\in R_1$, and let $R_2$ be a "nice" ring of dimension $m$. Let $R=R_1\oplus R_2$ and view $v$ as the element $(v,0)\in R$, so inverting it kills the second factor. Then $\dim R=\max \{n,m\}$ and $\dim R[\frac 1v]=\dim R_1=n$, so the dimension drops by $\max \{n,m\}-n$.</p>
|
2,083,347 | <p>Let's consider a linear operator
$$
Lu = -\frac{1}{w(x)}\Big(\frac{d}{dx}\Big[p(x)\frac{du}{dx}\Big] + q(x)u\Big)
$$
So the Sturm-Liouville equation can be written as
$$
Lu = \lambda u
$$
Why the proper setting for this problem is the weighted Hilbert space $L^2([a,b], w(x)dx)$?</p>
| Jimmy R. | 128,037 | <p>Despite the change of its form at $k=4$, the function $k(x)$ is monotone decreasing on $\mathbb R$ and continuous at $x=4$. Hence $$k^{-1}(x)=\begin{cases}-\dfrac{x-2}{4}, & x\le -14 \\ -\dfrac{x-6}{5}, & x>-14\end{cases}$$ where $-14=k(4)$.</p>
|
705,945 | <p>I have this expression:
$$\sum_{\{\vec{S}\}}\prod_{i=1}^{N}e^{\beta HS_{i}}=\prod_{i=1}^{N}\sum_{S_{i}\in\{-1,1\}}e^{\beta HS_{i}} \qquad (1)$$
Where $\sum_{\{\vec{S}\}}$ means a sum over all possible vectors $\vec{S}=(S_1,...,S_N)$ with the restriction that $S_i$ can only take the values $\{-1,+1\}$, i.e. the sum is over $2^N$ different vectors: $\{\vec{S}\}$.</p>
<p>My <strong>question</strong> is: How can I be sure that (1) is right? Is there a criteria to interchange sums and products or it's always valid?</p>
| Mx Glitter | 134,212 | <p>Try to go from the right formula to the left and use <a href="http://en.wikipedia.org/wiki/Distributivity" rel="nofollow">distributivity</a>. If you're not sure, try with N=2 to convince yourself.</p>
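The interchange can also be verified by brute force for small <span class="math-container">$N$</span> (here with an arbitrary fixed value of <span class="math-container">$\beta H$</span>):

```python
# Brute-force check: summing the product over all 2^N spin configurations
# equals the product of single-spin sums, (e^{bH} + e^{-bH})^N.
import itertools, math

beta_H = 0.7   # any fixed value of beta*H
for N in (1, 2, 3, 6):
    lhs = sum(math.prod(math.exp(beta_H * s) for s in spins)
              for spins in itertools.product([-1, 1], repeat=N))
    rhs = (math.exp(beta_H) + math.exp(-beta_H)) ** N
    assert abs(lhs - rhs) < 1e-9 * rhs
```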
|
<p>I want to prove that the degree sequence <span class="math-container">$(5,5,5,2,2,2,1)$</span> cannot be realized by a simple graph. I am looking for a theorem, or some other way to contradict the assumption that we can build a graph from it.</p>
<p>My solution was the following, for the given nodes:degrees => <span class="math-container">$(A:5; B:5; C:3; D:2; E:2; F:2; G:1)$</span></p>
<p><a href="https://i.stack.imgur.com/Z5HZ2.png" rel="nofollow noreferrer">Graph</a></p>
<p>Note that the vertex <span class="math-container">$C$</span> is the one that makes the contradiction, since we should have another 2 extra edges, but we can't add them to the previous nodes.</p>
<p>So my question is: Is there any theorem which I can use to prove this contradiction? Because I feel like my solution isn't enough.</p>
| MachineLearner | 647,466 | <p>As <span class="math-container">$A$</span> is invertible, we know that it is a square matrix and that its determinant is not equal to zero.</p>
<p>For square matrices we have</p>
<p><span class="math-container">$$\det(ABA)=\det A \det B \det A=0$$</span>
We know that <span class="math-container">$\det A \neq 0$</span>, hence we can conclude that <span class="math-container">$\det B = 0$</span>. The last statement implies that <span class="math-container">$B$</span> is not invertible.</p>
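A concrete 2x2 illustration of the determinant argument (the matrices below are chosen arbitrarily for the check):

```python
# A invertible, B singular: det(ABA) = det(A) det(B) det(A) = 0.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]      # det A = 1 != 0
B = [[1.0, 2.0], [2.0, 4.0]]      # det B = 0
ABA = mul2(mul2(A, B), A)
assert abs(det2(A)) > 0
assert abs(det2(ABA)) < 1e-9
assert abs(det2(ABA) - det2(A) * det2(B) * det2(A)) < 1e-9
```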
|
3,290,514 | <p>I need to implement the navigation and guidance of a vehicle (a quadcopter) over a platform. This platform can be seen like this:</p>
<p><a href="https://i.stack.imgur.com/jeJ34.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jeJ34.png" alt="enter image description here"></a></p>
<p>where the blue dots are the center of each square, and the <span class="math-container">$x$</span> distances are all the same, and the <span class="math-container">$y$</span> distances are all the same.</p>
<p>I need the distance between each blue dot to the center (the blue dot of the <span class="math-container">$(2;2)$</span>), but that distance depends on the <span class="math-container">$yaw$</span> angle. For example, if <span class="math-container">$yaw=0^\circ$</span>, the situation is like this:</p>
<p><a href="https://i.stack.imgur.com/QGwlE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QGwlE.png" alt="enter image description here"></a></p>
<p>and the distances are:</p>
<p><span class="math-container">$$d_{1;1} = (-d_x; -d_y)$$</span>
<span class="math-container">$$d_{1;2} = (-d_x; 0)$$</span>
<span class="math-container">$$d_{1;3} = (-d_x; d_y)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; -d_y)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; d_y)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (d_x; -d_y)$$</span>
<span class="math-container">$$d_{3;2} = (d_x; 0)$$</span>
<span class="math-container">$$d_{3;3} = (d_x; d_y)$$</span></p>
<p>If the situation is with <span class="math-container">$yaw=180^\circ$</span>:</p>
<p><a href="https://i.stack.imgur.com/Y0A6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0A6P.png" alt="enter image description here"></a></p>
<p>the distances are the same but with the opposite sign, i.e,</p>
<p><span class="math-container">$$d_{1;1} = (d_x; d_y)$$</span>
<span class="math-container">$$d_{1;2} = (d_x; 0)$$</span>
<span class="math-container">$$d_{1;3} = (d_x; -d_y)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; d_y)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; -d_y)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (-d_x; d_y)$$</span>
<span class="math-container">$$d_{3;2} = (-d_x; 0)$$</span>
<span class="math-container">$$d_{3;3} = (-d_x; -d_y)$$</span></p>
<p>If <span class="math-container">$yaw=90^\circ$</span>, the situation is like this:</p>
<p><a href="https://i.stack.imgur.com/B6a8b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B6a8b.png" alt="enter image description here"></a></p>
<p>and the distances (see the difference between <span class="math-container">$d_x$</span> and <span class="math-container">$d_y$</span>) would be:</p>
<p><span class="math-container">$$d_{1;1} = (-d_y; d_x)$$</span>
<span class="math-container">$$d_{1;2} = (-d_y; 0)$$</span>
<span class="math-container">$$d_{1;3} = (-d_y; d_x)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; -d_x)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; d_x)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (d_y; -d_x)$$</span>
<span class="math-container">$$d_{3;2} = (d_y; 0)$$</span>
<span class="math-container">$$d_{3;3} = (d_y; d_x)$$</span></p>
<p>If <span class="math-container">$yaw = -90^\circ$</span>:</p>
<p><a href="https://i.stack.imgur.com/6Zk2f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Zk2f.png" alt="enter image description here"></a></p>
<p>the distances would be:</p>
<p><span class="math-container">$$d_{1;1} = (d_y; d_x)$$</span>
<span class="math-container">$$d_{1;2} = (d_y; 0)$$</span>
<span class="math-container">$$d_{1;3} = (d_y; -d_x)$$</span></p>
<p><span class="math-container">$$d_{2;1} = (0; d_x)$$</span>
<span class="math-container">$$d_{2;2} = (0; 0)$$</span>
<span class="math-container">$$d_{2;3} = (0; -d_x)$$</span></p>
<p><span class="math-container">$$d_{3;1} = (-d_y; d_x)$$</span>
<span class="math-container">$$d_{3;2} = (-d_y; 0)$$</span>
<span class="math-container">$$d_{3;3} = (-d_y; -d_x)$$</span></p>
<p>I need to write a matrix that uses the information of the <span class="math-container">$yaw$</span> angle and returns the distances from each angle (not just 0, 90, -90 and 180, but also 1, 2, 3, ...)</p>
<p>I tried to write it but I couldn't find the solution.</p>
<p>Thank you very much. I really need this help</p>
<p>Edit: please note that the coordinate frame moves with the quadcopter, like in this image:</p>
<p><a href="https://i.stack.imgur.com/Vge0h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vge0h.png" alt="enter image description here"></a></p>
<p>Edit 2: for example, if <span class="math-container">$yaw=45^\circ$</span>, then the distance from <span class="math-container">$(3;3)$</span> to <span class="math-container">$(2;2)$</span> is <span class="math-container">$\sqrt{d_x^2+d_y^2}$</span> in <span class="math-container">$x$</span> and <span class="math-container">$0$</span> in <span class="math-container">$y$</span>.</p>
| Intelligenti pauca | 255,730 | <p>If I understand correctly, you have the coordinates of some points, all of them (or nearly so) lying on a circle, and you want to find the center of that circle.</p>
<p>A possible approach is the following: </p>
<ol>
<li><p>choose at random a triplet of points and find their circumcenter using <a href="https://en.wikipedia.org/wiki/Circumscribed_circle#Cartesian_coordinates_2" rel="nofollow noreferrer">this formula</a>;</p></li>
<li><p>repeat step 1. a certain number of times (10 times, for instance); if all points lie on the circle then you should always obtain the same result;</p></li>
<li><p>choose as center of the circle the most obtained result in step 2.</p></li>
</ol>
<p>Of course you must take into account numerical accuracy issues to decide if two results are the same or not.</p>
|
2,863,533 | <blockquote>
<p>Let $f(x)=x^3+ax^2+bx+c$ be a cubic polynomial with real coefficients and all real roots, also $|f(i)|=1$ where $i=\sqrt{-1}$. Prove that all three roots of $f(x)=0$ are zero. Also prove that $a+b+c=0$.</p>
</blockquote>
<hr>
<p>As $f(i)=i^3+ai^2+bi+c=(c-a)+(b-1)i$, the condition $|f(i)|=1$ gives $(c-a)^2+(b-1)^2=1$.<br><br>
I don't know how to solve further.</p>
| vadim123 | 73,324 | <p>Hint: Let the three real roots of $f(x)$ be $r,s,t$ (not necessarily distinct). Then, we may write $$f(x)=(x-r)(x-s)(x-t)$$
Expand this, and set equal to $x^3+ax^2+bx+c$, and continue...</p>
|
2,863,533 | <blockquote>
<p>Let $f(x)=x^3+ax^2+bx+c$ be a cubic polynomial with real coefficients and all real roots, also $|f(i)|=1$ where $i=\sqrt{-1}$. Prove that all three roots of $f(x)=0$ are zero. Also prove that $a+b+c=0$.</p>
</blockquote>
<hr>
<p>As $f(i)=i^3+ai^2+bi+c=(c-a)+(b-1)i$, the condition $|f(i)|=1$ gives $(c-a)^2+(b-1)^2=1$.<br><br>
I don't know how to solve further.</p>
| mechanodroid | 144,766 | <p>Let $x_1, x_2, x_3$ be the roots. We have $f(x) = (x - x_1)(x - x_2)(x-x_3)$.</p>
<p>Hence</p>
<p>\begin{align}
1 &= |f(i)|^2 \\
&= f(i)\overline{f(i)} \\
&= (i - x_1)(i - x_2)(i - x_3)(-i - x_1)(-i - x_2)(-i - x_3) \\
&= (x_1^2 + 1)(x_2^2 + 1)(x_3^2 + 1)
\end{align}</p>
<p>so $x_1 = x_2 = x_3 = 0$.</p>
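A small numerical illustration of the key identity <span class="math-container">$|f(i)|^2=\prod_k(x_k^2+1)$</span> (the sample roots below are chosen arbitrarily): the product is <span class="math-container">$\geq 1$</span> for real roots, with equality only when every root is zero.

```python
# |f(i)|^2 = prod (x_k^2 + 1) for f(x) = (x - x1)(x - x2)(x - x3).
def abs_f_at_i_squared(roots):
    f_i = 1.0 + 0.0j
    for r in roots:
        f_i *= (1j - r)
    return abs(f_i) ** 2

def product_form(roots):
    p = 1.0
    for r in roots:
        p *= r * r + 1.0
    return p

for roots in ([0.0, 0.0, 0.0], [0.5, -1.2, 2.0], [1.0, 1.0, -3.0]):
    assert abs(abs_f_at_i_squared(roots) - product_form(roots)) < 1e-9 * product_form(roots)

assert product_form([0.0, 0.0, 0.0]) == 1.0   # only all-zero roots give |f(i)| = 1
assert product_form([0.5, -1.2, 2.0]) > 1.0   # any nonzero root pushes it above 1
```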
|
4,050,855 | <p>On <a href="https://math.stackexchange.com/a/4050373/878105">this answer</a>, the function <span class="math-container">$f_n(x)=x^n$</span> in the interval <span class="math-container">$[0,1]$</span> is given as a pathologic example with pointwise convergence.</p>
<p>Can I say that this Cauchy sequence does not (pointwise) converge because the limit of the sequence is a function like this (not continuous):</p>
<p><a href="https://i.stack.imgur.com/1RR8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1RR8t.png" alt="enter image description here" /></a></p>
<p>without specifying any particular norm? I read that pointwise convergence doesn't imply <span class="math-container">$d_\infty$</span> (uniform) convergence, and that uniform convergence implies pointwise convergence. But does lack of pointwise convergence negate uniform convergence?</p>
<p>Does this contradict in any way (or under certain norms) the fact that <span class="math-container">$C[a,b]$</span> with respect to <span class="math-container">$\Vert f \Vert_{\infty}$</span> is a Banach space? In other words, why is not an example of a Cauchy sequence that does not converge to some <span class="math-container">$f\in C[0,1]$</span>?</p>
| Elchanan Solomon | 647 | <p>The notion of a sequence being "Cauchy" is inherently a metric concept. When you talk about a Cauchy sequence, you need to be talking about convergence in a metric. Pointwise convergence does not correspond to any metric (this can be proven). If you use the <span class="math-container">$\|f\|_{\infty}$</span> metric, the sequence isn't Cauchy. For any finite <span class="math-container">$m$</span>, there is some point <span class="math-container">$x_m$</span> sufficiently close to <span class="math-container">$1$</span> for which <span class="math-container">$x_{m}^{m} \approx 1$</span>. However, if you take <span class="math-container">$n$</span> sufficiently large, <span class="math-container">$x_{m}^{n} \approx 0$</span>, so that <span class="math-container">$\|x^{n} - x^{m}\|_{\infty} \approx 1$</span>. Since the sequence isn't Cauchy, it won't converge in our metric.</p>
<p>As for the question: if the pointwise limit is not continuous, can the sequence converge in the <span class="math-container">$\|f\|_{\infty}$</span> metric? The answer here is no: if it did converge in <span class="math-container">$\|f\|_{\infty}$</span>, the limit would be continuous (because our space is Banach, hence complete), and since the pointwise limit has to be the same as the uniform limit, we would arrive at a contradiction.</p>
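A small numerical sketch (an editorial addition, not part of the original answer; the grid size is an arbitrary choice) illustrating the first paragraph: the sup-norm distance between <span class="math-container">$x^m$</span> and <span class="math-container">$x^n$</span> stays close to <span class="math-container">$1$</span> when <span class="math-container">$n \gg m$</span>, so the sequence cannot be Cauchy in <span class="math-container">$\|\cdot\|_\infty$</span>.

```python
# Editorial sketch: estimate ||x^m - x^n||_inf on [0, 1] with a fine grid and
# observe that it stays near 1 when n >> m, so (x^k) is not sup-norm Cauchy.
N = 10_000
grid = [k / N for k in range(N + 1)]

def sup_dist(m, n):
    """Grid approximation of the sup-norm distance between x^m and x^n on [0, 1]."""
    return max(abs(x**m - x**n) for x in grid)

# However large m gets, taking n = 50*m keeps the gap near 0.9.
gaps = [sup_dist(m, 50 * m) for m in (5, 20, 80)]
```

The analytic value of each gap is <span class="math-container">$\max_{u\in[0,1]}(u-u^{50})\approx 0.905$</span>, matching the answer's claim that <span class="math-container">$\|x^n-x^m\|_\infty\approx 1$</span> does not shrink.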
|
433,639 | <p>(What follows is motivated by an answer to <a href="https://mathoverflow.net/questions/433612/fourier-optimization-problem-related-to-the-prime-number-theorem?noredirect=1#comment1116702_433612">Fourier optimization problem related to the Prime Number Theorem</a>)</p>
<p>Let <span class="math-container">$f:\mathbb{R}\to [0,\infty)$</span> be such that
<br>
(a) <span class="math-container">$\int_{\mathbb{R}} f(x) dx = 1$</span>,<br>
(b) <span class="math-container">$\widehat{f}(t)=0$</span> for all real <span class="math-container">$t$</span> with <span class="math-container">$|t|>1$</span>. <br></p>
<p>What is the choice of <span class="math-container">$f$</span> such that <span class="math-container">$$\int_{\mathbb{R}} |x| f(x) dx$$</span>is minimal? What is that minimum?</p>
<p>Remarks:</p>
<ol>
<li>It is easy to see that we can assume <span class="math-container">$f$</span> to be an even function.</li>
<li>Yes, this seems to be yet another incarnation of the uncertainty principle.</li>
</ol>
| H A Helfgott | 398 | <p>Warning: the following answer is (a) maybe a bit careless in a nineteenth-century sort of way, (b) missing its final step (which may be obvious to others, and/or amount to looking things up in the right table)</p>
<p>We are looking for an even function <span class="math-container">$g:\mathbb{R}\to \mathbb{R}$</span> with support on <span class="math-container">$[-1,1]$</span>; we will define <span class="math-container">$f$</span> to be its Fourier transform. The condition <span class="math-container">$\int_{\mathbb{R}} f(x) dx=1$</span> becomes <span class="math-container">$g(0)=1$</span>. What we need to minimize is the quantity
<span class="math-container">$$I = \int_{\mathbb{R}} |x| \widehat{g}(x) dx.$$</span>
Here are two attempts to express <span class="math-container">$I$</span> more directly in terms of <span class="math-container">$g$</span>.</p>
<ol>
<li><p>The Fourier transform of <span class="math-container">$\frac{g'(t)}{2\pi i}$</span> equals <span class="math-container">$x \widehat{g}(x)$</span>. We can write <span class="math-container">$|x| \widehat{g}(x) = \mathrm{sgn}(x) x \widehat{g}(x)$</span>. The Fourier transform of <span class="math-container">$\mathrm{sgn}(x)$</span> is <span class="math-container">$\frac{1}{i \pi t}$</span> (in some sense). Hence, the Fourier transform of <span class="math-container">$|x| \widehat{g}(x)$</span> should be the convolution of <span class="math-container">$\frac{1}{i \pi t}$</span> and <span class="math-container">$\frac{g'(-t)}{2\pi i} = \frac{g'(t)}{2\pi i}$</span>. In particular, <span class="math-container">$I$</span> should equal the value of the Fourier transform of <span class="math-container">$|x| \widehat{g}(x)$</span> at <span class="math-container">$0$</span>, i.e.,
<span class="math-container">$$I = \int_{\mathbb{R}} \frac{1}{i \pi t} \frac{g'(t)}{2\pi i} dt =
- \frac{1}{2 \pi^2} \int_{\mathbb{R}} \frac{g'(t)}{t} dt.$$</span>
We can assume <span class="math-container">$g'(0)=0$</span>, so the integral above should make sense.</p>
</li>
<li><p>Since <span class="math-container">$I = - 2 \int_{-\infty}^0 x \widehat{g}(x) dx$</span>, we can write
<span class="math-container">$I = - 2\int_{-\infty}^0 \int_{-\infty}^y \widehat{g}(x) dx dy$</span>.
Now, the second antiderivative of <span class="math-container">$\widehat{g}(x)$</span> should have Fourier transform <span class="math-container">$\frac{g(t)}{(2\pi i)^2 t^2}$</span>. Its value at <span class="math-container">$0$</span> equals both <span class="math-container">$-I/2$</span> (by definition) and
<span class="math-container">$$\int_\mathbb{R} \frac{g(t)}{(2\pi i)^2 t^2} dt =
-\frac{1}{4\pi^2} \int_{\mathbb{R}} \frac{g(t)}{t^2} dt;$$</span>
hence,
<span class="math-container">$$I = \frac{1}{2\pi^2} \int_{\mathbb{R}} \frac{g(t)}{t^2} dt.$$</span>
Of course this diverges.</p>
</li>
</ol>
<p>At the same time, by integration by parts, <span class="math-container">$-\int_\mathbb{R} \frac{g'(t)}{t} dt$</span> equals <span class="math-container">$\int_\mathbb{R} \frac{g(t)-1}{t^2} dt$</span>, which converges. So, it looks like
<span class="math-container">$$I =\frac{1}{2\pi^2} \int_\mathbb{R} \frac{g(t)-1}{t^2} dt.$$</span></p>
<p>We recall we must also fulfill the constraint that <span class="math-container">$f$</span> take non-negative values. This will certainly be true if <span class="math-container">$g$</span> is defined as a convolution <span class="math-container">$h\ast h$</span>, with <span class="math-container">$h$</span> symmetric and real-valued, as then <span class="math-container">$\widehat{h}$</span> will be real valued, and <span class="math-container">$\widehat{h\ast h} = \widehat{h}^2$</span>. We can require that <span class="math-container">$h$</span> also be an even function, and that the support of <span class="math-container">$h$</span> be contained in <span class="math-container">$[-1/2,1/2]$</span>.
I <em>think</em> these are all necessary conditions, so I am not making the search space for my optimum any smaller, but I'd be delighted if others can double-check and confirm.</p>
<p>So, we've reduced our problem to: find a symmetric function <span class="math-container">$h:[-1/2,1/2]\to \mathbb{R}$</span> with <span class="math-container">$(h\ast h)(0) = |h|_2^2 = 1$</span> such that
<span class="math-container">$$\int_{\mathbb{R}} \frac{(h\ast h)(t)-1}{t^2} dt$$</span>
is minimal.</p>
<p>A bit of calculus of variations seems to show that the optimal <span class="math-container">$h(t)$</span> has to have
<span class="math-container">$$\frac{1}{h(t_0)} \int \frac{\frac{1}{2} (h(t+t_0)+h(t-t_0)) - h(t_0)}{t^2} dt$$</span> equal to a constant independent of <span class="math-container">$t_0$</span>, for <span class="math-container">$t_0\in (-1/2,1/2)$</span>. Again by integration by parts, this is just
<span class="math-container">$$\frac{1}{h(t_0)} \int \frac{\frac{1}{2} (h'(t+t_0)+h'(t-t_0))}{t} dt,$$</span> which equals <span class="math-container">$\frac{1}{h(t_0)} H(h')(t_0)$</span>, where <span class="math-container">$H$</span> is our old friend the Hilbert transform.</p>
<p>In other words: we are to find a (continuous) function <span class="math-container">$h:\mathbb{R}\to \mathbb{R}$</span>, supported on <span class="math-container">$[-1/2,1/2]$</span>, with <span class="math-container">$|h|_2=1$</span>, such that <span class="math-container">$H(h')(t) = \lambda h(t)$</span> for all <span class="math-container">$t\in (-1/2,1/2)$</span> and some <span class="math-container">$\lambda$</span>.</p>
<p>Surely such a function must be known (if it exists)?</p>
|
877,646 | <p>Friends, I have a set of matrices of dimension $3\times3$ called $A_i$.</p>
<p>Following are the given conditions</p>
<p>a) each $A_i$ is non-invertible <strong>except $A_0$</strong>, because its determinant is zero.</p>
<p>b) $\sum_{n=0}^\infty A_n$ is invertible and its determinant is not zero</p>
<p>c) </p>
<ol>
<li><p>This is the recursion available for $A_i$,
$ A_{n}=\frac{1}{n} \{C_1* A_{n-1} +C_2 * A_{n-2}\} \tag 1$, where $A_0$ = Constant matrix ,$A_1$ =Constant matrix </p></li>
<li><p>$C_1,C_2 $ are constant matrices. $A_1$ and $A_0$ are initial values.
$A_0,A_1,C_1,C_2,A_n $ have dimension $3\times 3$</p></li>
<li><p>$C_1,C_2,C_1+C_2 $ etc. are skew-symmetric matrices (hence with zero diagonals) and do not commute with each other </p></li>
<li><p>The series $\sum_n A_n$ converges, i.e. the terms $A_n$ approach zero (or become very small) as $n\to\infty$</p></li>
<li><p>Determinant of $C_1*A_{n-1}$ and $C_2*A_{n-2}$ both are zero {Logic : det($C_1A_{n-1}$)=det($C_1$)det($A_{n-1}$),=0*det($A_{n-1}$),$=0 $ }</p></li>
<li><p>Given that SUM= $ \sum_{n=0}^{n= \infty} A_n \ne 0 $.</p></li>
<li><p>Let $S(x) = \sum_{n=0}^\infty A_nx^n$, so that $SUM=S(1)$. <strong><em>Given that $S(1)$ is invertible</em></strong>. Remember that we still have not proved that $S(x)$ is invertible; all we know from the given conditions is that $S(1)$ is invertible </p></li>
</ol>
<p><strong>Question</strong>
From the given conditions, can we say that $S(x)=\sum_{n=0}^\infty A_nx^n$ is invertible? If so, how do we prove it? ($x$ is not a matrix; it is just a scalar variable)</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,{\rm Li}_{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$</p>
<p>\begin{align}
\color{#f00}{\int_{-\infty}^{\infty}{\sin\pars{x} \over x - \ic}\,\dd x} & =
\int_{-\infty}^{\infty}{x\sin\pars{x} \over x^{2} + 1}\,\dd x =
\Im\int_{-\infty}^{\infty}{x\expo{\ic x} \over x^{2} + 1}\,\dd x =
\Im\pars{2\pi\ic\,{\ic\expo{\ic\ic} \over \ic + \ic}} =
\color{#f00}{{\pi \over \expo{}}}
\end{align}</p>
<p>The integration was performed along a semi-circle in the upper half complex plane by using the Residues Theorem.</p>
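A quick arithmetic cross-check of the residue computation (an editorial addition, not part of the original answer): the residue of <span class="math-container">$\frac{z e^{iz}}{z^2+1}$</span> at the simple pole <span class="math-container">$z=i$</span> is <span class="math-container">$\frac{1}{2e}$</span>, and the imaginary part of <span class="math-container">$2\pi i$</span> times it is <span class="math-container">$\pi/e$</span>, as claimed.

```python
import math
import cmath

# Residue of z*e^{iz}/(z^2 + 1) at the simple pole z = i is z*e^{iz}/(z + i)
# evaluated there; the semicircle argument then gives Im(2*pi*i * residue).
z0 = 1j
residue = z0 * cmath.exp(1j * z0) / (z0 + 1j)   # = i e^{-1} / (2i) = 1/(2e)
value = (2j * cmath.pi * residue).imag          # Im(2*pi*i * residue)
expected = math.pi / math.e                     # pi/e, the answer's value
```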
|
241,871 | <p>A Lévy measure $\nu$ on $\mathbb R^{d}$ is a measure satisfying
$$\nu\{0\} = 0, \ \int_{\mathbb R^{d}} (|y|^{2}\wedge 1) \nu(dy) <\infty.$$</p>
<p>A Lévy process can be characterized by triples $(b, A, \nu)$ by
Lévy-Itô decomposition, then
$$X_{t} = bt + W_{A}(t) + \int_{B_{1}} x \tilde N(t, dx) + \int_{B_{1}^{c}} x N(t, dx)$$
where $N(t, B)$ is a Poisson measure with $\mathbb E N(1, B) = \nu(B)$ for a set $B$ bounded below,
and $\tilde N(t, dx) = N(t, dx) - t \nu(dx)$ is its compensated one.</p>
<p>[Q.] If $(0, 0, \nu)$ is a triplet of a Lévy process $X$ whose first moment is finite, is the following always true?
$$ \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx) < \infty.$$
Moreover, if $\nu(B_1^c) = 0$, then
$$\mathbb E X_1 = \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx).$$
END.</p>
<p>Remark: If $\nu(dx) = x^{-2} dx$, then it corresponds to 1-stable process, and
$ \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx) = 0$, while
$ \int_{B_{1}} x \nu(dx) $ is not well-defined.
[Q1.] Is there always a Lévy process corresponding to $(0, 0, \nu)$ for an
arbitrary Lévy measure $\nu$? </p>
<p>Remark: Consider $\nu(dx) = x^{-2} I(x>0) dx$, it is
a Lévy measure. But if there was an associated process $X_{t}$, then $\mathbb E[X_{1}] = \int_{0}^{\infty} x \nu(dx) = \infty$.</p>
| Joachim | 29,657 | <p>Using Jason Starr's comment I was able (I think) to figure out the case of $\pi_1(X)^0$. For anyone who stumbles across this with the same question in mind I add a sketch of the proof in an answer. For the experts, if you feel like leaving a comment whether my reasoning is correct, that would be great.</p>
<p>Let $\phi: X \rightarrow Y$ be a rational map, i.e. defined on some open $U \subset X$. Because $X$ is normal, $Z := X \backslash U$ has codimension at least two. This means the dimenson of the ring $\mathcal{O}_{X,z}$ is at least two for each point $z \in Z$. But then this local ring is pure (SGA 2, Exp. X, Thm 3.4), which in turn implies the couple $(X,Z)$ is pure (SGA 2, Exp. X, prop. 3.3), so that the categories $\operatorname{FEt}(X)$ and $\operatorname{FEt}(U)$ are equivalent.</p>
<p>Since $X$ and $Y$ are birational, we can find an open $V \subset Y$ such that $U$ is isomorphic to $V$ and in the same way we show $V$ and $Y$ have the same étale coverings. This shows $\pi_1(X) = \pi_1(U) = \pi_1(V) = \pi_1(Y)$ as desired, which proves in particular what I wanted to show.</p>
|
1,057,819 | <p>The number $128$ can be written as $2^n$ with integer $n$, and so can each of its individual digits. Is this the only number with this property, apart from the one-digit numbers $1$, $2$, $4$ and $8$? </p>
<p>I have checked a lot, but I don't know how to prove or disprove it. </p>
| Robert Israel | 8,508 | <p>This seems to be an open question. See <a href="http://oeis.org/A130693">OEIS sequence A130693</a> and references there.</p>
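A brute-force companion check (an editorial addition in Python; the exponent bound 10000 is an arbitrary choice): search for powers of <span class="math-container">$2$</span> whose decimal digits all lie in <span class="math-container">$\{1,2,4,8\}$</span>. This cannot settle the open question, but it reproduces the known list from the OEIS entry.

```python
# Keep exponents n for which every decimal digit of 2^n is itself a power of 2.
POWER_DIGITS = set("1248")

hits = [n for n in range(14) if set(str(2**n)) <= POWER_DIGITS]
# n = 0, 1, 2, 3 give the one-digit powers 1, 2, 4, 8; n = 7 gives 128.
wider = [n for n in range(14, 10_001) if set(str(2**n)) <= POWER_DIGITS]
# No further example up to 2^10000.
```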
|
3,931,831 | <p>For the scenario given below, I am confused about if the samples are dependent or independent since the scenario does not mention anything about the samples being paired/related or vice versa.</p>
<p>I am aware that if terms such as paired, repeated measurements, within-subject effects, matched pairs, and pretest/posttest appear in a scenario, then the samples are dependent, and the opposite applies to independent samples, but I am clueless about the given scenario. Any help would be appreciated.</p>
<p><em>Alice and Bob work evening shifts in a supermarket. Alice has complained to
the manager that she works, on average, much more than Bob. The manager claims that on
average they both work the same amount of time, i.e. the competing claim is that the average
working hours are different. After a short discussion between the manager and Alice, the manager
randomly selected 50 evenings when Alice and Bob both worked.</em></p>
| Théophile | 26,091 | <p>As others have said, you don't necessarily need to use Lagrange multipliers. But since you've set up the system, we can see what happens:</p>
<p><span class="math-container">$$
3x^2-3=\lambda\\
3y^2-3=2\lambda\\
x+2y-3=0
$$</span></p>
<p>From the first two equations, we have <span class="math-container">$3y^2-3=2(3x^2-3)$</span>, which simplifies to <span class="math-container">$y^2-1=2(x^2-1)$</span>. Rearranging the linear constraint, we have <span class="math-container">$x=3-2y$</span>. Putting this information together leads to
<span class="math-container">$$7y^2-24y+17=0.$$</span></p>
<p>You could solve this using the quadratic formula, but it is quicker to observe that <span class="math-container">$-24 = -7-17$</span>:</p>
<p><span class="math-container">$$7y^2-7y-17y+17=0$$</span></p>
<p>and so <span class="math-container">$(7y-17)(y-1)=0$</span>.</p>
<p>This will give you the two local extrema.</p>
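A quick exact-arithmetic check of the two critical points (an editorial addition, not part of the original answer): substitute <span class="math-container">$y=1$</span> and <span class="math-container">$y=17/7$</span> back into the quadratic, recover <span class="math-container">$x=3-2y$</span>, and verify the multiplier equations.

```python
from fractions import Fraction

# Roots of 7y^2 - 24y + 17 = 0, from the factorization (7y - 17)(y - 1) = 0.
ys = [Fraction(1), Fraction(17, 7)]
residuals = [7 * y**2 - 24 * y + 17 for y in ys]          # both zero
xs = [3 - 2 * y for y in ys]                              # linear constraint x = 3 - 2y
# Lagrange conditions: 3x^2 - 3 = lambda and 3y^2 - 3 = 2*lambda.
lams = [3 * x**2 - 3 for x in xs]
consistency = [3 * y**2 - 3 == 2 * lam for y, lam in zip(ys, lams)]
```

The two extrema are at <span class="math-container">$(x,y)=(1,1)$</span> and <span class="math-container">$(x,y)=(-13/7,17/7)$</span>.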
|
2,362,477 | <p>Solve the equation $f(x) = 2$, where $f(x) = 4 - 3\sin x$ on $[0, 2\pi]$.
I reached the stage $\sin(x) = {2\over 3}$, but then (as I remember it being solved) using $x = \sin^{-1}(2/3)$ (inverse sine) I get the answer $x = 41.81$, but the correct answer is $x = 0.730$ or $2.41$. Why is this so? Sorry, it might be a silly question, but it has been long since I studied mathematics, so I have kind of forgotten everything. Thanks in advance!</p>
| N. F. Taussig | 173,070 | <p>We are given the function $f(x) = 4 - 3\sin x$ defined on the interval $[0, 2\pi]$. Notice that $x$ is measured in radians. </p>
<p>Solving the equation $f(x) = 2$ yields
\begin{align*}
f(x) & = 2\\
4 - 3\sin x & = 2\\
-3\sin x & = -2\\
\sin x & = \frac{2}{3}
\end{align*}
Thus, we must find all angles in the interval $[0, 2\pi]$ such that $\sin x = \dfrac{2}{3}$.</p>
<p>A particular solution is
$$x = \arcsin\left(\frac{2}{3}\right) \approx 0.729727656227$$
Since the sine of angle in standard position (vertex at the origin, initial side on the positive $x$-axis) is the $y$-coordinate of a point where the terminal side of the angle intersects the unit circle, we must find all angles in the interval $[0, 2\pi]$ such that the terminal side of the angle intersects the unit circle at a point with $y$-coordinate $2/3$. There are two such angles, one in the first quadrant, which we have found, and one in the second quadrant. </p>
<p><a href="https://i.stack.imgur.com/SVESp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SVESp.jpg" alt="symmery_diagram_for_sine_and_cosine"></a> </p>
<p>By symmetry, $\sin(\pi - x) = \sin x$. Hence, the other solution in the interval $[0, 2\pi]$ is
$$x = \pi - \arcsin\left(\frac{2}{3}\right) \approx 2.41186499736$$</p>
|
4,622,956 | <p>I think <span class="math-container">$\,9\!\cdot\!10^n+4\,$</span> can be a perfect square, since it is <span class="math-container">$0 \pmod 4$</span> (a quadratic residue modulo <span class="math-container">$4$</span>), and <span class="math-container">$1 \pmod 3$</span> (also a quadratic residue modulo <span class="math-container">$3$</span>).<br />
But when I tried to find if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> is a perfect square, I didn’t succeed. Can someone help me see if <span class="math-container">$\;9\!\cdot\!10^n+4\,$</span> can be a perfect square ?</p>
| B. Goddard | 362,009 | <p>If you reduce mod <span class="math-container">$11$</span> you get <span class="math-container">$(-2)(-1)^n+4 \equiv 2$</span> or <span class="math-container">$6 \pmod{11}$</span>. Neither <span class="math-container">$2$</span> nor <span class="math-container">$6$</span> is a quadratic residue mod <span class="math-container">$11$</span>.</p>
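A small computational companion to this argument (an editorial addition; the search ranges are arbitrary): the residues of <span class="math-container">$9\cdot10^n+4$</span> mod <span class="math-container">$11$</span>, the quadratic residues mod <span class="math-container">$11$</span>, and a direct perfect-square test all agree.

```python
import math

residues = {(9 * 10**n + 4) % 11 for n in range(1, 200)}   # always 2 or 6
qr_mod_11 = {(k * k) % 11 for k in range(11)}              # {0, 1, 3, 4, 5, 9}
squares_found = [n for n in range(1, 60)
                 if math.isqrt(9 * 10**n + 4) ** 2 == 9 * 10**n + 4]
```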
|
1,038,076 | <p>Solve the equation $7\times 13\times 19=a^2-ab+b^2$ for integers $a>b>0$. How many are there such solutions $(a,b)$?</p>
<p>I know that $a^2-ab+b^2$ is the norm of the Eisenstein integer $z=a+b\omega$, but how can I make use of this? Thank you so much.</p>
| Dietrich Burde | 83,966 | <p>Note that $N(a+b\omega)=a^2-ab+b^2$ can be expressed via a sum of squares, because
$$
a^2-ab+b^2=\frac{1}{4}((2a-b)^2+3b^2).
$$
Hence we have to solve the equation $(2a-b)^2+3b^2=4\cdot 7\cdot 13\cdot 19=6916$, which is
straightforward, since we only have to test a few integers $a,b \in \mathbb{N}$. In particular, $3b^2\le 6916$, so that $b<49$. Similarly, $(2a-b)^2\le 6916$ then gives $a< 66$.
We find that the integer solutions with $a>b>0$ are given by
$$(a,b) = (43, 3), (43,40), (45, 8), (45, 37), (47, 15), (47,32), (48,23), (48,25)$$</p>
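A brute-force confirmation of this list (an editorial addition; the search bound <span class="math-container">$a<100$</span> pads the estimate <span class="math-container">$a<66$</span> above for safety):

```python
N = 7 * 13 * 19   # 1729
solutions = sorted((a, b)
                   for a in range(1, 100)
                   for b in range(1, a)          # enforces a > b > 0
                   if a * a - a * b + b * b == N)
```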
|
1,038,076 | <p>Solve the equation $7\times 13\times 19=a^2-ab+b^2$ for integers $a>b>0$. How many are there such solutions $(a,b)$?</p>
<p>I know that $a^2-ab+b^2$ is the norm of the Eisenstein integer $z=a+b\omega$, but how can I make use of this? Thank you so much.</p>
| achille hui | 59,379 | <p>It is known that the <a href="http://en.wikipedia.org/wiki/Eisenstein_integer" rel="nofollow">Eisenstein integers</a> $\mathbb{Z}[\omega]$ is an <a href="http://en.wikipedia.org/wiki/Unique_factorization_domain" rel="nofollow">unique factorization domain</a> and it has six <a href="http://en.wikipedia.org/wiki/Unit_%28ring_theory%29#Group_of_units" rel="nofollow">units</a>
$$\pm 1, \pm \omega, \pm \omega^2$$
Over $\mathbb{Z}[\omega]$, the numbers $7, 13, 19$ factorize into its prime factors as
$$\begin{cases}
7 &= (3 + \omega)(3 + \omega^2)\\
13 &= (4 + \omega)(4 + \omega^2)\\
19 &= (5 + 2\omega)(5 + 2\omega^2)
\end{cases}$$
This mean if we want to factorize $1729 = 7 \times 13 \times 19$ over $\mathbb{Z}[\omega]$ as
$$1729 = ( x + y\omega )(x + y\omega^2) = x^2 - xy + y^2
\quad x, y \in \mathbb{Z}
$$
the corresponding factor $x + y\omega$ must have the form</p>
<p>$$x + y\omega = u A B C\quad\text{ with }\quad
\begin{cases}
A &= 3 + \omega &\text{or}& 3 + \omega^2\\
B &= 4 + \omega &\text{or}& 4 + \omega^2\\
C &= 5 + 2\omega &\text{or}& 5 + 2\omega^2
\end{cases}
$$
and $u$ is one of above six units. </p>
<p>There are 8 possible choices of $A,B,C$. For each choice of $A,B,C$,
multiplying by one of the six units allows one to obtain a pair $x,y$
that satisfies $x \ge y \ge 0$:</p>
<ul>
<li>$ABC = (3+\omega)(4+\omega)(5+2\omega) = 43+40\omega$.</li>
<li>$ABC = (3+\omega)(4+\omega)(5+2\omega^2) = 45+8\omega$.</li>
<li>$ABC = (3+\omega)(4+\omega^2)(5+2\omega) = 48+23\omega$.</li>
<li>$ABC = (3+\omega)(4+\omega^2)(5+2\omega^2) = 32-15\omega \implies -\omega^2 ABC = (47+32\omega)$</li>
<li>$ABC = (3+\omega^2)(4+\omega)(5+2\omega) = 47+15\omega$.</li>
<li>$ABC = (3+\omega^2)(4+\omega)(5+2\omega^2) = 25-23\omega \implies -\omega^2 ABC = 48+25\omega$</li>
<li>$ABC = (3+\omega^2)(4+\omega^2)(5+2\omega) = 37-8\omega \implies -\omega^2 ABC = 45+37\omega$</li>
<li>$ABC = (3+\omega^2)(4+\omega^2)(5+2\omega^2) = 3-40\omega \implies -\omega^2 ABC =
43+3\omega$</li>
</ul>
<p>As a result, there are $8$ pairs of $(a,b)$ that solves the original problem:</p>
<p>$$(a,b) = (43, 3), (43,40), (45, 8), (45, 37), (47, 15), (47,32), (48,23), (48,25)$$</p>
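A floating-point cross-check of the eight products (an editorial addition, not part of the original answer): with <span class="math-container">$\omega=e^{2\pi i/3}$</span>, the norm <span class="math-container">$|x+y\omega|^2=x^2-xy+y^2$</span> is multiplicative, so each product <span class="math-container">$ABC$</span> must have norm <span class="math-container">$7\cdot13\cdot19=1729$</span>.

```python
import cmath
import itertools

w = cmath.exp(2j * cmath.pi / 3)          # primitive cube root of unity
factors = [(3 + w, 3 + w**2),             # the two prime factors of norm 7
           (4 + w, 4 + w**2),             # norm 13
           (5 + 2 * w, 5 + 2 * w**2)]     # norm 19
norms = [abs(a * b * c) ** 2 for a, b, c in itertools.product(*factors)]
```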
|
831,763 | <p>The following equation $$e^{i+z}e^{iz}=1$$ is to be solved for $z$. I have tried
$$
\begin{eqnarray}
e^{i+z+iz} = 1\\
i+z+iz=0\\
z= -{i \over 1+i} = -{i(1-i)\over 2} = -\frac12-i\frac12
\end{eqnarray}
$$
However, I am absolutely unsure that this is correct. Somehow I suspect trigonometry should creep into the answer.</p>
| Community | -1 | <p>Let $z=a+ib$ then the given equality becomes</p>
<p>$$e^{a+(b+1)i}e^{ai-b}=1\iff e^{a-b+(b+a+1)i}=1$$
hence we find
$$a-b=0\quad\text{and}\quad b+a+1\equiv 0\mod 2\pi$$
so
$$a=b\quad\text{and}\quad a\equiv-\frac12\mod \pi$$</p>
<p><strong>Added</strong> i.e.
$$a=b\quad\text{and}\quad a=-\frac12+k\pi,\quad k\in\Bbb Z$$</p>
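A numerical verification (an editorial addition, not part of the original answer): for <span class="math-container">$z=a+bi$</span> with <span class="math-container">$a=b=-\frac12+k\pi$</span>, the product <span class="math-container">$e^{i+z}e^{iz}$</span> is indeed <span class="math-container">$1$</span> for every integer <span class="math-container">$k$</span>.

```python
import math
import cmath

def product(z):
    """Left-hand side e^{i+z} * e^{iz} of the equation."""
    return cmath.exp(1j + z) * cmath.exp(1j * z)

zs = [complex(-0.5 + k * math.pi, -0.5 + k * math.pi) for k in range(-2, 3)]
errors = [abs(product(z) - 1) for z in zs]   # all essentially zero
```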
|
3,545,548 | <p><span class="math-container">$\def\LIM{\operatorname{LIM}}$</span>
Let <span class="math-container">$(X,d)$</span> be a metric space and given any cauchy sequence <span class="math-container">$(x_n)_{n=1}^{\infty}$</span> in <span class="math-container">$X$</span> we introduce the formal limit <span class="math-container">$\LIM_{n\to \infty}x_n$</span>. We say that two formal limits <span class="math-container">$\LIM_{n\to \infty}x_n$</span> and <span class="math-container">$\LIM_{n\to \infty}y_n$</span> are equal iff <span class="math-container">$\lim_{n \to \infty}d(x_n,y_n)=0$</span>. We then define <span class="math-container">$\bar{X}$</span> to be set of all the formal limits of Cauchy sequences in <span class="math-container">$X$</span>. We define the metric <span class="math-container">$d_{\bar{X}}$</span> as follows: <span class="math-container">$$d_{\bar{X}}(\LIM_{n\to \infty}x_n,\LIM_{n\to \infty}y_n)= \lim_{n \to \infty} d(x_n,y_n)$$</span>
I have proved that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is indeed a metric space and that the metric is well defined. But I am stuck trying to prove that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is a complete metric space. This problem should be resolved without taking topological spaces into account, as that concept comes later in the book. Any suggestion on how to go about this problem without using the machinery of topology would be highly invaluable. Thanks in advance.</p>
| Eric Towers | 123,905 | <p><span class="math-container">$x$</span>, <span class="math-container">$y$</span>, and <span class="math-container">$z$</span> are successive terms of an arithmetic progression. Consequently, there is an <span class="math-container">$s$</span> such that <span class="math-container">$y = x + s$</span> and <span class="math-container">$z = x+2s$</span>.</p>
<p>We are to show <span class="math-container">$3^x$</span>, <span class="math-container">$3^y = 3^{x+s} = 3^x 3^s$</span>, <span class="math-container">$3^z = 3^{x+2s} = 3^{x+s+s} = 3^x 3^s 3^s$</span> is a geometric progression. Do you see how to finish?</p>
|
3,989,878 | <p>I can't solve this problem. I tried to find <span class="math-container">$\tan x$</span> directly by solving cubic equations but I failed.</p>
<p>The problem is to find <span class="math-container">$\tan x\cot 2x$</span> given that
<span class="math-container">$$\tan x+ \tan 2x=\frac{2}{\sqrt{3}}, \>\>\>\>\>0<x<\pi/4$$</span></p>
<p>How am I supposed to solve this problem?</p>
| J.G. | 56,861 | <p>If <span class="math-container">$t:=\tan x$</span> then <span class="math-container">$\frac{t(3-t^2)}{1-t^2}=\tfrac{2}{\sqrt{3}}$</span>, so <span class="math-container">$t^3-\tfrac{2}{\sqrt{3}}t^2-3t+\tfrac{2}{\sqrt{3}}=0$</span>. This cubic has three real roots, but only one lies in <span class="math-container">$(0,1)$</span>, as required by <span class="math-container">$0<x<\pi/4$</span>; the cubic falls in the casus irreducibilis, so that root is best written trigonometrically or computed numerically, giving <span class="math-container">$t\approx0.3518$</span>. Now just calculate <span class="math-container">$\tan x\cot 2x=\frac{1-t^2}{2}\approx0.438$</span>.</p>
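A numeric sketch (an editorial addition; bisection with an arbitrary iteration count) that solves the original equation <span class="math-container">$\tan x+\tan 2x=\tfrac{2}{\sqrt3}$</span> directly on <span class="math-container">$(0,\pi/4)$</span> and evaluates <span class="math-container">$\tan x\cot 2x=\frac{1-t^2}{2}$</span>:

```python
import math

def g(x):
    """tan(x) + tan(2x) - 2/sqrt(3); strictly increasing on (0, pi/4)."""
    return math.tan(x) + math.tan(2 * x) - 2 / math.sqrt(3)

lo, hi = 1e-9, math.pi / 4 - 1e-9
for _ in range(200):                      # bisection on the sign change
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_root = (lo + hi) / 2
t = math.tan(x_root)                      # ~0.3518
answer = (1 - t * t) / 2                  # tan(x)*cot(2x), ~0.438
```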
|
2,904,603 | <p>I'm working on the following question:</p>
<blockquote>
<p>Show that $G$ is a group if and only if, for every $a, b \in G$,
the equations $xa = b$ and $ay = b$ have solutions $x, y \in G$.</p>
</blockquote>
<p>I'm having trouble getting started because I'm not understanding what it means for "the equations $xa = b$ and $ay = b$ to have solutions $x, y \in G$". Do they mean there's exactly one left multiplier that takes $a$ to $b$ and one right multiplier that takes $a$ to $b$?</p>
| stressed out | 436,477 | <p>What it means is as follows:
$$\forall a,b \in G, \exists x \in G: xa=b$$
$$\forall a,b \in G, \exists y \in G: ay=b$$</p>
<p>Assuming that $(G,\star)$ is a semi-group, i.e. $G \not= \emptyset$ is closed and associative under $\star$, you must prove the existence of the identity element and the existence of the inverse element for each element in $G$.</p>
<p>Hint:</p>
<p>First show that for each element $a \in G$, one can find $e(a)_r$ and $e(a)_l$ such that $e(a)_l\star a = a \star e(a)_r = a$. Then show that $e(a)_r=e(a)_l:=e(a)$ is independent of $a$, i.e. for any element $a$, one can take $e(a)$ to be a fixed unique element $e \in G$.</p>
<p>Now using the same kind of argument, show that for each element $a \in G$, you can find an inverse for $a$. Again, this means that you should prove the existence and equality of the following elements: $a^{-1}_l=a^{-1}_r := a^{-1}$</p>
|
2,641,073 | <p>I want to define the scalar product by $\vec a \cdot \vec b = | \vec a | |\vec b | \cos(\varphi)$ and derive $\vec a \cdot \vec b = a_1 b_1 + a_2 b_2$. Algebraically, this is no problem (we can just expand the vectors in the standard basis).</p>
<p>How do I show this by using only elementary geometry? I've drawn the areas of the rectangles $a_1 b_1$, $a_2 b_2$ and $| \vec a | \cos(\varphi) \cdot |\vec b | $but I don't see why they should be equal.</p>
<p>/edit: I'm really interested in the general case and in using elementary geometry, not in getting an elegant proof which involves rotations.</p>
| msm | 340,064 | <p>Yeah, I don't think the thing to do is to look at rectangles. First, prove that the dot product is invariant under rotations (this is a little circular, since some people would say rotations are defined by being linear maps preserving the dot product and orientation). </p>
<p>Then, line one of those vectors up with the $x$-axis. Without loss of generality, say $\vec{b}$ is along the $x$-axis. Then it has components $(b_1,0)$ in the standard basis. The projection of $\vec{a}$ onto the $x$-axis is $|\vec{a}|\cos(\varphi)$. That is $a_1= |\vec{a}|\cos(\varphi)$. At the same time, $|\vec{b}| = b_1$, since it is lined up on the $x$-axis. So, then we obviously have $a_1b_1= |\vec{a}||\vec{b}|\cos(\varphi)$. But, since it is rotationally invariant, this holds in general. </p>
|
2,641,073 | <p>I want to define the scalar product by $\vec a \cdot \vec b = | \vec a | |\vec b | \cos(\varphi)$ and derive $\vec a \cdot \vec b = a_1 b_1 + a_2 b_2$. Algebraically, this is no problem (we can just expand the vectors in the standard basis).</p>
<p>How do I show this by using only elementary geometry? I've drawn the areas of the rectangles $a_1 b_1$, $a_2 b_2$ and $| \vec a | \cos(\varphi) \cdot |\vec b | $but I don't see why they should be equal.</p>
<p>/edit: I'm really interested in the general case and in using elementary geometry, not in getting an elegant proof which involves rotations.</p>
| pjs36 | 120,540 | <p>One way is to compute the (square of the) red length two ways, and equate the two.</p>
<p><a href="https://i.stack.imgur.com/vvIko.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vvIko.png" alt="enter image description here"></a></p>
<p>On the one hand, Pythagoras tells us that the red length is the hypotenuse of a right triangle, whose legs have length $|a_1 - b_1|$ and $|a_2 - b_2|$, so the square of its length must be $$|a_1 - b_1|^2 + |a_2 - b_2|^2 = (a_1 - b_1)^2 + (a_2 - b_2)^2$$</p>
<p>On the other hand, the Law of Cosines tells us that the square of the red length is $$|a|^2 + |b|^2 - 2|a||b|\cos(\varphi) = (a_1^2 + a_2^2) + (b_1^2 + b_2^2) - 2|a||b|\cos( \varphi )$$</p>
<p>Since these are equal, we have $(a_1 - b_1)^2 + (a_2 - b_2)^2 = (a_1^2 + a_2^2) + (b_1^2 + b_2^2) - 2|a||b|\cos( \varphi )$</p>
<p>It looks like a mess, but if you multiply out the squares on the left side, you get a bunch of cancellation (with the squares on the right), and before long, $a_1b_1 + a_2b_2 = |a||b|\cos( \varphi )$ pops out.</p>
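A numeric sanity check of the resulting identity (an editorial addition, not part of the original answer; the seed and ranges are arbitrary): <span class="math-container">$a_1b_1+a_2b_2 = |a||b|\cos(\varphi)$</span> for random plane vectors.

```python
import math
import random

random.seed(0)

def two_ways(a, b):
    """Return the algebraic and the geometric form of the dot product."""
    algebraic = a[0] * b[0] + a[1] * b[1]
    phi = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])   # angle from a to b
    geometric = math.hypot(*a) * math.hypot(*b) * math.cos(phi)
    return algebraic, geometric

pairs = [tuple((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(2))
         for _ in range(100)]
max_gap = max(abs(p - q) for p, q in (two_ways(a, b) for a, b in pairs))
```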
|
2,533,960 | <p>So, the given expression is
$$\binom{2n}{2} = 2\binom{n}{2}+n^2$$</p>
<p>The task is to give a combinatorical proof for it.</p>
<p>Left side of the identity is obviously equal to the number of options for choosing 2 elements out of the set with cardinality $2n$.</p>
<p>What puzzles me is that I can't think of any way to separate that into two disjoint cases which would have $2\binom{n}{2}$ and $n^2$ options respectively (which is, I believe, what is meant to happen).</p>
<p>Any hints would be helpful.</p>
| Peter Szilas | 408,605 | <p>Consider 2 sets:</p>
<p>$A=${$a_1,a_2,....a_n$}, and </p>
<p>$B=${$b_1,b_2,..., b_n$}, all distinct elements.</p>
<p>LHS:</p>
<p>The number of ways to choose $ 2$ elements from $A\cup B$ :</p>
<p>$\binom{2n}{2}.$</p>
<p>RHS: </p>
<p>Choose $2$ elements from $A$, or from $B$ :</p>
<p>In $2\binom{n}{2}$ ways</p>
<p>The mix: </p>
<p>$S_{i,k} =\{a_i,b_k\}$, $1\le i,k \le n$.</p>
<p>How many different sets $S_{ik}$?</p>
<p>$n$ ways to chose from $A$, and $n$ ways to choose from $B$:</p>
<p>Altogether: $n^2$ ways.</p>
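The identity behind this counting argument can also be checked mechanically (an editorial addition): choose both elements from $A$, both from $B$, or one from each.

```python
from math import comb

# C(2n, 2) = 2*C(n, 2) + n^2 for a range of n.
checks = [comb(2 * n, 2) == 2 * comb(n, 2) + n * n for n in range(0, 200)]
```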
|
1,731,364 | <p>So the question asks:</p>
<blockquote>
<p>Let $X_1,X_2,X_3\sim \operatorname{Exp}(\lambda)$ be independent (exponential) random variables (with $\lambda> 0$).<br>
(a) Find the probability density function of the random variable $Z = \max \{X_1,X_2,X_3\}$.<br>
(b) Let $T = X_1+X_2/2+X_3/3$, use moment generating functions to prove $Z\sim T$ (same distribution). Find $E[Z]$ and $\operatorname{Var}[Z]$.</p>
</blockquote>
<p>So far I got: </p>
<p>(a)$F(x) = 1-e^{-\lambda x}$</p>
<p>$F_Z(z) = P (Z \leq z) = P(\max(X_1,X_2,X_3) ≤ z) = P(X_1\leq z, X_2 \leq z, X_3 \leq z)= P(X_1\leq z)P(X_2\leq z) P(X_3\leq z) = (1-e^{-\lambda z})^3$</p>
<p>$f_Z(z) = F_Z'(z) = \frac{d}{dz}(1-e^{-\lambda z})^3 =3\lambda e^{-3\lambda z}(e^{\lambda z}-1)^2$</p>
<p>(b) for this part, I did not quite understand what it wanted me to prove actually...</p>
<p>I got: $M_X(t) = λ/(λ-t )$</p>
<p>$M_T(t) = M_{X_1}(t)M_{X_2/2}(t) M_{X_3/3}(t) = \frac{\lambda}{\lambda-t}\cdot\frac{\lambda}{\lambda-t/2}\cdot\frac{\lambda}{\lambda-t/3}= \frac{\lambda}{\lambda-t}\cdot\frac{2\lambda}{2\lambda-t}\cdot\frac{3\lambda}{3\lambda-t}$, using $M_{cX}(t)=M_X(ct)$</p>
<p>So what does it mean by proving $Z\sim T$ (same distribution) ?</p>
<p>And for the $E[Z]$ and $\text{Var} [Z]$, I actually tried to do it using the standard method which is $$
E[Z]=\int z\cdot3\lambda e^{-3\lambda z}(e^{\lambda z}-1)^2dz
$$</p>
<p>which becomes super complicated...</p>
<p>So is there a simple way to calculate the $E[Z]$ and $\text{Var} [Z]$ without literally solving the integration? </p>
| Em. | 290,196 | <p>(b) Just means use MGFs to show that $Z$ and $T$ have the same distribution.</p>
<blockquote>
<p>So is there a simple way to calculate the $E[Z]$ and $\text{Var} [Z]$ without literally solving the integration?</p>
</blockquote>
<p>Yes. Intuitively, if you have $3$ light bulbs with lifetimes that are iid exponential $\lambda$, then $Z$ is the time until the last light bulb burns out. In other words, we wait for the first of three to burn out ($Y_3$), then we wait for the first of two to burn out $(Y_2)$, then we finally wait for the last to burn out $(Y_1)$.</p>
<p>Notice (prove for yourself) that the minimum of $n$ iid exponential random variables with rate $\lambda$ follows an exponential distribution with mean $1/(n\lambda)$. Hence, we have</p>
<p>$$E[Z] =E[Y_3+Y_2+Y_1] = E[Y_3]+E[Y_2]+E[Y_1] = \frac{1}{3\lambda}+\frac{1}{2\lambda}+\frac{1}{\lambda} = \frac{1}{\lambda}\sum_{k=1}^3\frac{1}{k}.$$</p>
<p>Use independence and argue similarly for the variance.</p>
<p>In fact, this is the exact same thing as the $T$ they describe:
$$X_1+X_2/2+X_3/3 \overset{d}{=} Y_3+Y_2+Y_1.$$</p>
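As a quick cross-check of $E[Z]$ and $\operatorname{Var}[Z]$ (not part of the original answer): the tail integral $E[Z]=\int_0^\infty(1-F_Z(z))\,dz$ must agree with the harmonic-sum decomposition above. A Python sketch in exact rational arithmetic, with the common $1/\lambda$ (respectively $1/\lambda^2$) factor pulled out:

```python
from fractions import Fraction as F

# Tail-integral route: E[Z] = ∫₀^∞ (1 - F_Z(z)) dz with F_Z(z) = (1 - e^{-λz})³.
# Expanding 1 - (1 - u)³ = 3u - 3u² + u³ with u = e^{-λz} and integrating term
# by term gives E[Z] = (1/λ)(3 - 3/2 + 1/3); the 1/λ factor is pulled out here.
tail_mean = F(3, 1) - F(3, 2) + F(1, 3)

# Light-bulb route from the answer: E[Z] = (1/λ) Σ_{k=1}^{3} 1/k.
harmonic_mean = sum(F(1, k) for k in range(1, 4))

# By independence of the stage waits Y_k ~ Exp(kλ):
# Var[Z] = (1/λ²) Σ_{k=1}^{3} 1/k².
variance = sum(F(1, k * k) for k in range(1, 4))

print(tail_mean, harmonic_mean, variance)  # 11/6 11/6 49/36
```

Both routes give $E[Z]=\frac{11}{6\lambda}$, and the same argument yields $\operatorname{Var}[Z]=\frac{49}{36\lambda^2}$.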
|
66,068 | <p>I have a list like this. </p>
<pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175, 1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., Failed, 50., 25., "serial"}}
</code></pre>
<p>At the end, I want to compile a new list by dropping any lines that have <code>Failed</code> in the third column, keeping only the rows whose third entry is numeric (and truncating each kept row to its first three entries). </p>
<pre><code>datalistfunc[input_] :=
Module[{cell, cell2, celltable, celllist},
i = 1;
celllist = {};
While[i < Length@cdatalist + 1,
cell =
Select[cdatalist[[i]][[1 ;; 3]],
     Head[cdatalist[[i]][[3]]] === Real &];
i = If[i < Length@cdatalist + 1, i + 1, Length@cdatalist + 1];
   celllist = AppendTo[celllist, cell];
   Print[cell]
]
]
datalist = datalistfunc[cdatalist];
</code></pre>
<p>My list looks like this after filtering. </p>
<pre><code>{{},{}}
{{1.,1.31175,1.},{}}
{{},{}}
{{1.,19.6628,0.990079},{}}
{{},{}}
{{},{}}
{{},{}}
{{1.,40.6208,0.980588},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{15.,1.31344,1.},{}}
</code></pre>
<p>Instead, I want my list to look like this. </p>
<pre><code>{{1.,1.31175,1.},
{1.,19.6628,0.990079},
{1.,40.6208,0.980588},
{15.,1.31344,1.}}
</code></pre>
| Aisamu | 8,238 | <p>This matches your example, though note that it also keeps only the first three elements of each line (your expected output implies this, but the text doesn't mention it).</p>
<pre><code>Select[cdatalist, #[[3]] =!= Failed &][[All, ;; 3]]
</code></pre>
<p>Or, as per @belisarius suggestion (roughly twice as fast!)</p>
<pre><code>Cases[cdatalist, Except[{_, _, Failed, ___}]][[All, 1 ;; 3]]
</code></pre>
<p>Or, "inspired" by @Gerli:</p>
<pre><code>Cases[cdatalist , {a_, b_, c : Except[Failed], ___} :> {a, b, c}]
</code></pre>
<p>Silly benchmark:</p>
<pre><code>Do[Select[cdatalist, #[[3]] =!= Failed &][[All, ;; 3]], {100000}] // AbsoluteTiming // First
(* 1.981000 *)
(* belisarius *)
Do[Cases[cdatalist, Except[{_, _, Failed, ___}]][[All, 1 ;; 3]], {100000}] // AbsoluteTiming // First
(* 0.747398 *)
(* Gerli-inspired *)
Do[Cases[cdatalist , {a_, b_, c : Except[Failed], ___} :> {a, b, c}], {100000}] // AbsoluteTiming // First
(* 1.172714 *)
(* kguler's *)
Do[Pick[#, (#[[-1]] =!= Failed) & /@ #] &@ cdatalist[[All, ;; 3]], {100000}] // AbsoluteTiming // First
(* 2.415208 *)
</code></pre>
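For comparison outside Mathematica, the same keep-rows-whose-third-entry-is-numeric filter is a one-liner in most languages; a Python sketch (with a truncated copy of the data, and <code>None</code> standing in for <code>Failed</code>):

```python
# Python sketch of the same filter: keep rows whose third entry is not the
# failure marker, then truncate each surviving row to its first three entries.
# None stands in for Mathematica's Failed symbol here.
FAILED = None

cdatalist = [
    [1.0, 0.898785, FAILED, FAILED, 50.0, 25.0, "serial"],
    [1.0, 1.31175, 1.0, FAILED, 50.0, 25.0, "serial"],
    [1.0, 19.6628, 0.990079, FAILED, 50.0, 25.0, "serial"],
    [1.0, 40.6208, 0.980588, FAILED, 50.0, 25.0, "serial"],
    [1.0, 102.588, FAILED, FAILED, 50.0, 25.0, "serial"],
    [15.0, 1.31344, 1.0, FAILED, 50.0, 25.0, "serial"],
]

filtered = [row[:3] for row in cdatalist if row[2] is not FAILED]
print(filtered)
```

This mirrors the `Select`/`Cases` variants above: test the third entry, then slice.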
|
66,068 | <p>I have a list like this. </p>
<pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175, 1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., Failed, 50., 25., "serial"}}
</code></pre>
<p>At the end, I want to compile a new list by dropping any lines that have <code>Failed</code> in the third column, keeping only the rows whose third entry is numeric (and truncating each kept row to its first three entries). </p>
<pre><code>datalistfunc[input_] :=
Module[{cell, cell2, celltable, celllist},
i = 1;
celllist = {};
While[i < Length@cdatalist + 1,
cell =
Select[cdatalist[[i]][[1 ;; 3]],
     Head[cdatalist[[i]][[3]]] === Real &];
i = If[i < Length@cdatalist + 1, i + 1, Length@cdatalist + 1];
   celllist = AppendTo[celllist, cell];
   Print[cell]
]
]
datalist = datalistfunc[cdatalist];
</code></pre>
<p>My list looks like this after filtering. </p>
<pre><code>{{},{}}
{{1.,1.31175,1.},{}}
{{},{}}
{{1.,19.6628,0.990079},{}}
{{},{}}
{{},{}}
{{},{}}
{{1.,40.6208,0.980588},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{15.,1.31344,1.},{}}
</code></pre>
<p>Instead, I want my list to look like this. </p>
<pre><code>{{1.,1.31175,1.},
{1.,19.6628,0.990079},
{1.,40.6208,0.980588},
{15.,1.31344,1.}}
</code></pre>
| kglr | 125 | <pre><code>Pick[#, (#[[-1]] =!= Failed) & /@ #] &@cdatalist[[All, ;; 3]]
DeleteCases[cdatalist[[All, ;; 3]], {_, _, Failed}]
DeleteCases[cdatalist, {_, _, Failed, ___}][[All, ;; 3]]
</code></pre>
<p>all give</p>
<pre><code>(* {{1., 1.31175, 1.},
{1., 19.6628, 0.990079},
{1., 40.6208, 0.980588},
{15., 1.31344, 1.}} *)
</code></pre>
|
2,252,090 | <p>Someone posed this question to me on a forum, and I have yet to figure it out. If $a,b,c,d$ are the zeroes of:</p>
<p>$$x^4-7x^3+2x^2+5x-1=0$$
Then what is the value of $$ \frac1a +\frac1b +\frac1c +\frac1d $$</p>
<p>I can figure out the zeroes, but they are wildly complex. I'm sure there must be an easier way. </p>
| Robert Israel | 8,508 | <p>$$(x-a)(x-b)(x-c)(x-d) = x^4 + \ldots - (abc+abd+acd+bcd)x + abcd$$
So $$abcd = -1$$ and
$$abc + abd + acd + bcd = abcd \left(\frac{1}{a}+\frac{1}{b}+\frac1c + \frac1d\right) = -5$$
making
$$\frac{1}{a}+\frac{1}{b}+\frac1c + \frac1d = 5$$</p>
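The shortcut generalizes: for a monic quartic $x^4+c_3x^3+c_2x^2+c_1x+c_0$, the sum of the reciprocals of the roots is $-c_1/c_0$. A small Python sketch checks this both via the coefficients and numerically (the Durand–Kerner root finder is added here purely as an independent cross-check, not part of the answer):

```python
from fractions import Fraction as F

# Vieta shortcut: abcd = c0 and abc+abd+acd+bcd = -c1,
# hence 1/a + 1/b + 1/c + 1/d = -c1/c0.
c1, c0 = F(5), F(-1)
recip_sum = -c1 / c0
print(recip_sum)  # 5

# Independent numeric check: find all four roots by Durand-Kerner iteration.
coeffs = [1, -7, 2, 5, -1]

def p(x):
    # Horner evaluation of the quartic at x (works for complex x too).
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

roots = [complex(0.4, 0.9) ** k for k in range(4)]  # standard starting points
for _ in range(200):
    new_roots = []
    for i, r in enumerate(roots):
        denom = 1.0
        for j, s in enumerate(roots):
            if i != j:
                denom *= r - s
        new_roots.append(r - p(r) / denom)
    roots = new_roots

numeric = sum(1 / r for r in roots)
print(abs(numeric - recip_sum) < 1e-9)  # True
```

No root ever needs to be computed explicitly for the exact answer; the coefficients alone suffice.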
|
3,290,095 | <p>Now, first, something that I already know:
<span class="math-container">\begin{eqnarray}
\infty/\infty &=& \text{undetermined } (\neq 1) \\
\infty-\infty &=& \text{undetermined } (\neq 0)
\end{eqnarray}</span></p>
<p>So basically, one reason for this is that the <span class="math-container">$∞$</span> I assume is not the same as the <span class="math-container">$∞$</span> someone else will assume, since <span class="math-container">$∞$</span> is a very large quantity with no definite value. But what if I assign <span class="math-container">$∞$</span> to a certain variable? That way the infinity is always the same.</p>
<p>For example, what if I assign <span class="math-container">$a=∞$</span>?</p>
<p>Now the infinity is always the same if I use <span class="math-container">$a$</span> instead of using <span class="math-container">$∞$</span> directly. So my question is: do the same laws mentioned above still apply here, or can I solve it like any other equation?
<span class="math-container">\begin{eqnarray}
a/a &=& 1 \\
a-a &=& 0
\end{eqnarray}</span>
Or are these still undetermined?</p>
| G Cab | 317,234 | <p>As the other answers explained, <span class="math-container">$\infty$</span> is not a number. Already Galileo had to admit that.</p>

<p>However, it is true that <span class="math-container">$a/a=1$</span> holds for all <span class="math-container">$a \ne 0$</span>, including in the limit <span class="math-container">$a \to \pm \infty$</span>.<br>
The same holds for <span class="math-container">$a-a=0$</span>.</p>
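As an aside, IEEE-754 floating point makes the same distinction concrete: <code>inf</code> is a special value rather than an ordinary number, so the indeterminate forms come out as NaN, while any finite $a$ obeys the usual identities. A Python illustration:

```python
import math

inf = math.inf
print(inf / inf)  # nan  (undetermined, not 1)
print(inf - inf)  # nan  (undetermined, not 0)

# For any finite nonzero a, however large, the identities do hold:
a = 1e300
print(a / a, a - a)  # 1.0 0.0
```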
|
385,537 | <p>How would you go about proving the following?</p>
<p>$${1- \cos A \over \sin A } + { \sin A \over 1- \cos A} = 2 \operatorname{cosec} A $$</p>
<p>This is what I've done so far:</p>
<p>$$LHS = {1+\cos^2 A -2\cos A + 1 - \cos^2A \over \sin A(1-\cos A)}$$</p>
<p>....no idea how to proceed .... X_X</p>
| André Nicolas | 6,312 | <p>When we "cross-multiply," on top we get $(1-\cos A)^2+\sin^2 A$. Expand the square.</p>
<p>We get $1-2\cos A+\cos^2 A+\sin^2 A$. Replace $\cos^2 A+\sin^2 A$ by $1$. We get $2-2\cos A = 2(1-\cos A)$ on top; cancel the factor $1-\cos A$ against the denominator $\sin A(1-\cos A)$ to finish with $\dfrac{2}{\sin A} = 2\operatorname{cosec} A$. </p>
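A quick numeric spot-check of the identity (purely illustrative), sampling a few angles where both denominators are nonzero:

```python
import math

# Check (1 - cos A)/sin A + sin A/(1 - cos A) == 2/sin A at sample angles.
# A must avoid multiples of pi, where sin A or 1 - cos A vanishes.
for A in (0.3, 1.0, 2.0, 2.9, -1.2):
    lhs = (1 - math.cos(A)) / math.sin(A) + math.sin(A) / (1 - math.cos(A))
    rhs = 2 / math.sin(A)
    assert math.isclose(lhs, rhs, rel_tol=1e-12), (A, lhs, rhs)
print("identity verified at all sample angles")
```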
|
2,276,907 | <p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p>
<p>A. $\sin{2x}$<br>
B. $\cos{2x}$<br>
C. $\tan{\frac{x}{2}}$</p>
<p>Okay, so I am going through my old exam reviews for the final exam I have this evening, choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step-by-step? I don't need all of A, B, and C; maybe just one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p>
<p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonights final. :)</p>
| turkeyhundt | 115,823 | <p>I would draw it on the unit circle. Put a point at $(0.6,-0.8)$. From there you can get the values for $\sin{x}$ and $\tan{x}$, and use some basic trig identities to get the rest.</p>
<p><a href="https://i.stack.imgur.com/3f9Om.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3f9Om.jpg" alt="enter image description here"></a></p>
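Following the picture, here is a small exact-arithmetic sketch (illustrative) computing all three values from $\cos x = 3/5$, $\sin x = -4/5$:

```python
from fractions import Fraction as F

# Quadrant IV: cosine positive, sine negative, so sin x = -4/5 by Pythagoras.
cos_x = F(3, 5)
sin_x = -F(4, 5)

sin_2x = 2 * sin_x * cos_x       # sin 2x = 2 sin x cos x
cos_2x = 2 * cos_x ** 2 - 1      # cos 2x = 2 cos^2 x - 1
tan_half = (1 - cos_x) / sin_x   # tan(x/2) = (1 - cos x)/sin x

print(sin_2x, cos_2x, tan_half)  # -24/25 -7/25 -1/2
```

Note the signs match the picture: $2x$ lands in quadrant III and $x/2$ in quadrant II.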
|
128,221 | <p>Let $v_1=[-3;-1]$ and $v_2= [-2;-1]$</p>
<p>Let $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be the linear transformation satisfying:</p>
<p>$T(v_1)=[15;-6]$ and $T(v_2)=[11;-3]$</p>
<p>Find the image of an arbitrary vector $[x;y]$</p>
| Community | -1 | <p>Not sure if this is <code>(homework)</code> yet. So <strong>hint:</strong></p>
<p>Let
$$
T =
\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
$$</p>
<p>We can re-interpret the given $T(v_1)$ and $T(v_2)$ as:</p>
<p>$$
\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
\begin{pmatrix}
-3 \\ -1
\end{pmatrix}
=
\begin{pmatrix}
15 \\ -6
\end{pmatrix} ,
\\
\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
\begin{pmatrix}
-2 \\ -1
\end{pmatrix}
=
\begin{pmatrix}
11 \\ -3
\end{pmatrix}
$$
Or more succinctly as,
$$
\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
\begin{pmatrix}
-3 & -2 \\ -1 & -1
\end{pmatrix}
=
\begin{pmatrix}
15 & 11 \\ -6 & -3
\end{pmatrix}
\tag{1}
$$
Can you take it from here?</p>
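For checking one's work afterwards (spoiler: this completes the hint), equation $(1)$ can be solved by right-multiplying with the inverse of the matrix of columns $v_1, v_2$, whose determinant is $1$. A small exact-arithmetic Python sketch:

```python
from fractions import Fraction as F

# Solve T·V = B for T by computing T = B·V^(-1), where V's columns are v1, v2.
V = [[F(-3), F(-2)],
     [F(-1), F(-1)]]
B = [[F(15), F(11)],
     [F(-6), F(-3)]]

det = V[0][0] * V[1][1] - V[0][1] * V[1][0]          # here det = 1
V_inv = [[ V[1][1] / det, -V[0][1] / det],
         [-V[1][0] / det,  V[0][0] / det]]

T = [[sum(B[i][k] * V_inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print([[int(e) for e in row] for row in T])  # [[-4, -3], [3, -3]]

# So T([x, y]) = (-4x - 3y, 3x - 3y).  Sanity check against the given data:
def apply(M, v):
    return [int(M[0][0] * v[0] + M[0][1] * v[1]),
            int(M[1][0] * v[0] + M[1][1] * v[1])]

print(apply(T, [-3, -1]), apply(T, [-2, -1]))  # [15, -6] [11, -3]
```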
|
2,870,729 | <blockquote>
<p>Why does $|e^{ix}|^2 = 1$?</p>
</blockquote>
<p>The book said $e^{ix} = \cos x + i\sin x$, and that taking the modulus and squaring gives $|e^{ix}|^2 = \cos^2x + \sin^2x = 1$.</p>
<p>But, when I calculated it, $ |e^{ix}|^2 = \left|\cos x + i\sin x\right|^2 = \cos^2x - \sin^2x + 2i\sin x\cos x$.</p>
<p>I can't make it to be equal $1.$ How can I do it?</p>
| Nebo Alex | 218,007 | <p>You can first apply the modulus $$|e^{ix}|=\sqrt{\cos^2 x + \sin^2 x}$$ and then square the whole thing to get $$|e^{ix}|^2={\cos^2 x + \sin^2 x}=1.$$</p>
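As a numeric illustration (not part of the proof), Python's <code>cmath</code> confirms both $e^{ix}=\cos x+i\sin x$ and $|e^{ix}|^2=1$ at sample points:

```python
import cmath
import math

# e^{ix} via cmath; its squared modulus should be 1 for any real x.
for x in (0.0, 1.0, math.pi, -2.5, 123.456):
    z = cmath.exp(1j * x)
    assert math.isclose(abs(z) ** 2, 1.0, rel_tol=1e-12)
    # the rectangular form matches cos x + i sin x:
    assert cmath.isclose(z, complex(math.cos(x), math.sin(x)), rel_tol=1e-12)
print("|e^{ix}|^2 == 1 at all sample points")
```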
|