qid | question | author | author_id | answer |
|---|---|---|---|---|
3,600,139 | <blockquote>
<p>Solve <span class="math-container">$y''-y'-y=\cos x$</span>.</p>
</blockquote>
<p>After first solving the homogeneous equation we know that the solution to it is <span class="math-container">$$y(x)=a\sin(x)+b\cos(x).$$</span></p>
| user577215664 | 475,762 | <p><span class="math-container">$$y''-y'-y=\cos x$$</span>
The characteristic polynomial is:
<span class="math-container">$$\implies r^2-r-1=0$$</span>
<span class="math-container">$$r=\dfrac {1\pm \sqrt 5}2$$</span>
Therefore the solution to the homogeneous equation is:
<span class="math-container">$$y=e^{x/2}(c_1e^{\sqrt 5 x/2}+c_2e^{-\sqrt 5 x/2})$$</span>
The guess for the particular solution is
<span class="math-container">$$y_p=a \cos x+ b\sin x$$</span></p>
|
503,808 | <p>Let $\mathbb{K} = \mathbb{Q}[\sqrt[3]{5}] \ $, and let $\mathbb{L}$ be the normal closure of $\mathbb{K}$. </p>
<p>Let $\mathbb{O}_{\mathbb{K}} \ $ be the integral closure of $\mathbb{Z}$ in $\mathbb{K}$ and $\mathbb{O}_{\mathbb{L}} \ $ be the integral closure of $\mathbb{Z}$ in $\mathbb{L}$. I want to find the factorization of the primes $7, 11 \ $ and $13$ as ideals in $\mathbb{O}_{\mathbb{K}} \ $ and $\mathbb{O}_{\mathbb{L}} \ $. My question is: how can I do that without knowing an integral basis for $\mathbb{K}$ and $\mathbb{L}$?</p>
| paul garrett | 12,291 | <p>For a prime $p$ in $\mathbb Z$ unramified in an extension $K=\mathbb Q(\beta)$, if we are merely <em>close</em> to the true ring of algebraic integers, meaning that $p$ does not divide the discriminant of $\mathbb Z[\beta]$, then the <em>localization</em> at $p$ of the true ring of integers is the same as the localization of this good approximation. Thus, looking at how the minimal polynomial for $\beta$ factors mod $p$ will show how $p$ splits.</p>
|
1,960,793 | <p>Considering the (not most common) definition:</p>
<blockquote>
<p>A set is <em>infinite</em> if it is equipotent to a proper subset of itself. A set is <em>finite</em> if it is not infinite.</p>
</blockquote>
<p>How can I prove that a set $A$ is finite iif it is equipotent to $J_n=\{1,…,n\}$ for some $n{\in}\mathbb{N}$ (assuming that I already proved that $J_n$ is a finite set for every $n{\in}\mathbb{N}$)?</p>
| G. Snapsmath | 376,415 | <p>Suppose $A$ is not equipotent to any of the $J_n$. Then we can produce a surjection from $A$ to $\mathbb{N}$: if we could not, there would be some maximum $n\in \mathbb{N}$ in the image $f(A)$, and from there you could whittle the map down to a bijection with some $J_n$. By contradiction, you can then conclude that $A$ is equipotent to some $J_n$. </p>
<p>For the reverse direction, if $A$ is equipotent to $J_n$ for some $n$ then there is a bijection $f:A\to J_n$. Pick any $k\in J_n$ then there is no bijection from $J_n$ to $J_n\setminus \{k\}$. Since $f$ is a bijection, argue that there is no bijection from $A$ to $A\setminus\{f^{-1}(k)\}.$</p>
|
2,471,217 | <p>Contradiction or contra-positive? Or is direct easier? </p>
<p>∀n∈N, (n>3∧ n is prime)→∃q∈N,(n=6q+1∨n=6q+5)</p>
| DeepSea | 101,504 | <p>Consider $n \bmod 6$: any $n > 3$ can be written as $n = 6q + k$ with $0 \le k < 6$. If $k = 0,2,3,4$, then $n$ is divisible by $2$, $3$, or $6$, and so is not prime. Thus for $n$ to be prime, $k = 1$ or $k = 5$.</p>
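<p>As a sanity check of the residue argument (nothing here beyond the statement itself), every prime above $3$ should leave remainder $1$ or $5$ on division by $6$:</p>

```python
# Check: every prime n > 3 satisfies n % 6 in {1, 5}.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

residues = {n % 6 for n in range(4, 10_000) if is_prime(n)}
print(residues)  # only the residues 1 and 5 appear
```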
|
3,745,452 | <p>I'm evaluating the area of a parametric function and I'm stuck at this point.
<span class="math-container">$$S=2\pi \int_{0}^{2\pi} x\cos(x)\sqrt{x^2+1} \mathrm{d}x.$$</span>
Is there any way to work this problem further without a calculator?</p>
| Claude Leibovici | 82,404 | <p>There is probably no antiderivative. So, basically, you are asking to perform a numerical integration without calculator.</p>
<p>The only thing I am able to think about is to write <span class="math-container">$\sqrt{1+x^2} \sim x$</span> (a reasonable approximation for large <span class="math-container">$x$</span>) and then
<span class="math-container">$$S=2\pi \int_{0}^{2\pi} x\cos(x)\sqrt{x^2+1} \,dx \sim 2\pi \int_{0}^{2\pi} x^2\cos(x) \,dx$$</span>
Integration by parts gives
<span class="math-container">$$\int x^2\cos(x) \,dx=\left(x^2-2\right) \sin (x)+2 x \cos (x)$$</span> and then
<span class="math-container">$$S \sim 8\pi^2\approx 78.9568$$</span> while numerical integration leads to <span class="math-container">$77.7680$</span>.</p>
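<p>Both numbers quoted above can be reproduced without a CAS. The sketch below (plain Python, composite Simpson's rule — an illustration, not part of the original answer) compares the exact-by-parts value <span class="math-container">$8\pi^2$</span> of the approximating integral with a numerical value of the original one:</p>

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

f = lambda x: x * math.cos(x) * math.sqrt(x * x + 1)
S_exact_approx = 8 * math.pi ** 2                     # 2*pi times integral of x^2 cos x
S_numeric = 2 * math.pi * simpson(f, 0, 2 * math.pi)  # close to 77.768, per the answer
print(round(S_exact_approx, 4), round(S_numeric, 4))
```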
|
3,762,306 | <p>I have the equation <span class="math-container">$$(e^x-1)-k\arctan(x) = 0$$</span> where <span class="math-container">$0<k \leq \frac 2\pi$</span> and I was wondering how I would go about starting to determine the number of real roots of this equation. So far I have just manipulated the equation to get different equations for <span class="math-container">$x$</span>; however, I'm unsure what to do with them.</p>
<p>Current equations for <span class="math-container">$x$</span> are <span class="math-container">$x = \ln(k\arctan(x)+1)$</span> and <span class="math-container">$x = \tan\left(\frac{e^x-1}{k}\right)$</span></p>
| Robert Israel | 8,508 | <p>Of course <span class="math-container">$x=0$</span> is always a solution. Otherwise, write the equation as <span class="math-container">$$k = \frac{e^x-1}{\arctan(x)}$$</span>
Call the right side <span class="math-container">$f(x)$</span>. The singularity at <span class="math-container">$x=0$</span> is removable, with <span class="math-container">$\lim_{x \to 0} f(x) = 1$</span>. We also have <span class="math-container">$\lim_{x \to -\infty} f(x) = 2/\pi$</span> and <span class="math-container">$\lim_{x \to +\infty} f(x) = +\infty$</span>.<br />
It appears that <span class="math-container">$f(x)$</span> is increasing. If so, for <span class="math-container">$2/\pi < k < 1$</span> and <span class="math-container">$1 < k < \infty$</span> there are two real roots (<span class="math-container">$x=0$</span> and the root of <span class="math-container">$f(x)=k$</span>), otherwise there is only <span class="math-container">$x=0$</span>.</p>
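<p>The "it appears increasing" step can at least be sanity-checked numerically. The sketch below samples <span class="math-container">$f$</span> on a grid (skipping the removable singularity at <span class="math-container">$0$</span>) and confirms the sampled values are strictly increasing — evidence, not a proof:</p>

```python
import math

def f(x):
    # f(x) = (e^x - 1)/arctan(x); the singularity at 0 is removable with limit 1
    return 1.0 if x == 0 else (math.exp(x) - 1) / math.atan(x)

xs = [x / 10 for x in range(-300, 51) if x != 0]  # grid on [-30, 5], excluding 0
values = [f(x) for x in xs]
increasing = all(a < b for a, b in zip(values, values[1:]))
print(increasing, round(f(-30), 4))  # f(-30) is already near the limit 2/pi
```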
|
7,354 | <p>Some of my students refer to there being an invisible $-1$ in front of the expression $-(x + 4)$ or in the exponent of $x$. While it is not phrased mathematically, I am OK with them saying this because it reminds them to distribute fully before simplifying, etc. It got me thinking, though: is there a reason not to teach students to always write in the $1$ wherever there is a single variable/unknown/etc., as in $1(x^1+40)-1(3x^1 - 2)$? I know that it is not done in general because it is incredibly repetitive and annoying, but is there any reason not to teach students to do this? Eventually they will be comfortable enough to leave the $1$ implied, but I feel like a lot of my students would benefit from this, especially when simplifying exponential expressions and distributing negative signs. Are there any downsides to this, or is it just de facto mathematics education? </p>
| Frank Newman | 5,104 | <p>The invisible $1$ makes sense when students are just getting started in using certain formulas.</p>
<p>In the equation $y = x + 4$, if students are new to the idea of $y = mx + b$, then it can help to ask the class, "if you had to put some number to the left of the $x$, what would it be?" Someone will probably say "$1$", and then you can sneak in the "$1$" to the left of the $x$ in the equation that you have written on the board: $y = 1x + 4$. Then you can ask what the slope of the line is, etc.</p>
<p>The same idea applies for the quadratic formula. Beginners can be encouraged to "sneak in the $1$" to the left of $x^2$ in $x^2 - 7x + 10 = 0$ or to the left of $x$ in $2x^2 - x - 15 = 0$.</p>
<p>I do make clear that the invisible $1$ should be removed from any final answer. I teach that there is value in formatting an answer in a standard way so that it is likely to match precisely the answer given by a fellow student or by an answer key. In many cases, that means losing the $1$.</p>
|
1,615,201 | <p>I just solved a long problem for my physics w/calculus homework that required a simplification using a quadratic formula. The "textbook" (flipItPhysics) came up with a different simplification than mine but it turns out they are equivalent. I can't, for the life of me, figure out how to simplify mine into theirs.</p>
<h2>Mine:</h2>
<p>$$
x = \frac{d\left(q\pm\sqrt{Qq}\right)}{q - Q}
$$</p>
<h2>Theirs:</h2>
<p>$$
x = d \cdot \left(\frac{q}{q- Q}\right) \left(1 \pm\sqrt{\frac{Q}{q}}\right)
$$</p>
<p>Can someone show me how to get from mine to theirs? I'm specifically confused about how to make the inside of the square root a division instead of a multiplication.</p>
| Brian M. Scott | 12,042 | <p>It’s a little easier to go the other way: assuming that $q>0$, we have</p>
<p>$$q\sqrt{\frac{Q}q}=\sqrt{q^2}\cdot\sqrt{\frac{Q}q}=\sqrt{q^2\cdot\frac{Q}q}=\sqrt{qQ}\;.$$</p>
<p>If $q<0$, $q=-\sqrt{q^2}$, so you get $-\sqrt{qQ}$; since you have a $\pm$ sign, it doesn’t matter: as long as $q\ne 0$, you get both signs.</p>
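<p>Since for $q>0$ the two book-keeping styles should agree, a quick numeric sketch (arbitrary sample values, not from the original physics problem) can confirm the identity:</p>

```python
import math

def mine(d, q, Q, sign):
    # x = d(q +/- sqrt(Qq)) / (q - Q)
    return d * (q + sign * math.sqrt(Q * q)) / (q - Q)

def theirs(d, q, Q, sign):
    # x = d * (q/(q-Q)) * (1 +/- sqrt(Q/q))
    return d * (q / (q - Q)) * (1 + sign * math.sqrt(Q / q))

# arbitrary positive test values with q != Q
for d, q, Q in [(1.0, 4.0, 9.0), (2.5, 7.0, 3.0), (0.3, 10.0, 2.0)]:
    for sign in (+1, -1):
        assert math.isclose(mine(d, q, Q, sign), theirs(d, q, Q, sign))
print("both forms agree for q > 0")
```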
|
1,815,918 | <p>Let $G$ be a group and $|G|=p^n$ for some prime $p$. If $f:G\to H$ is a surjective homomorphism, how do I know $H=f(G)$ also has cardinality a power of $p$?</p>
| Tsemo Aristide | 280,301 | <p>Hint: $f(x)={1\over2}(f(x)+f(-x))+{1\over 2}(f(x)-f(-x))$.</p>
<p>$g(x)={1\over2}(f(x)+f(-x))$</p>
<p>$h(x)={1\over2}(f(x)-f(-x))$</p>
|
24,412 | <p>Trying to plot the following two functions to show points of intersection.</p>
<pre><code>2 x + y - 1 == 0,
x - y + 2 == 0
ContourPlot[{2 x + y - 1 == 0, x - y + 2 == 0}, {x, -5, 5}, {y, -5, 5}]
</code></pre>
<p>The above shows the plots, but I find it difficult to see the point of intersection. I suspect that there is a better method than this. </p>
<p>Please suggest a good method to plot such equations.</p>
| ubpdqn | 1,997 | <p>You could put these linear equations in matrix form:</p>
<pre><code>i = LinearSolve[a = {{2, 1}, {1, -1}}, b = {1, -2}]
ContourPlot[Evaluate@Thread[a.{x, y} == b], {x, -3, 3}, {y, -3, 3},
Epilog -> {Red, PointSize[0.02], Point[i]},
PlotLegends -> "Expressions"]
</code></pre>
<p><a href="https://i.stack.imgur.com/cfgbx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cfgbx.png" alt="enter image description here"></a></p>
|
333,265 | <p>Does $|-x^2 | < 1 $ imply that $-1<x<1$?
My steps are as follows:
$$| -x^2| < 1 $$
$$-1<(-x^2)< 1 $$
$$-1<(-x^2)< 1 $$
$$-1<x^2< 1 $$
$$\sqrt{-1}<x< \sqrt 1 $$</p>
<p>I'm actually looking for the radius of convergence for the power series of $\frac{1}{1+x^2}$:</p>
<p>$$\frac{1}{1+x^2}=\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
This is derived from the equation $$\frac{1}{1-x}=\sum\limits_{n=0}^\infty x^n \hspace{10mm}\text{for} \,|x|<1$$
According to my textbook, the power series $$\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
is 'for the interval (-1,1)' which means that $|-x^2 | < 1 $ implies that $-1<x<1$.
However, that implication does not make sense to me.</p>
| Dan Rust | 29,059 | <p>Be careful with the last line. $\sqrt{-1}$ is not a real number, so it doesn't make much sense to say $x$ is greater than it. You can, however, deduce from $x^2\geq 0$ for all real numbers that $0\leq x^2<1$, which implies that $-1<x<1$ (you can see this from the graph of $y=x^2$; for a more rigorous proof, consider the cases $x<0$, $x=0$ and $x>0$ separately).</p>
|
1,318,884 | <blockquote>
<p>Let S be a piecewise smooth oriented surface in <span class="math-container">$\mathbb{R}^3$</span> with positive oriented piecewise smooth boundary curve <span class="math-container">$\Gamma:=\partial S$</span> and <span class="math-container">$\Gamma : X=\gamma(t), t\in [a,b]$</span> a rectifiable parametrization of <span class="math-container">$\Gamma$</span>. Imagine <span class="math-container">$\Gamma$</span> is a wire in which a current I flows through. Then</p>
<p><span class="math-container">$$m:=\frac{I}{2}\int_a^b\gamma(t)\times \dot{\gamma}(t)dt$$</span></p>
<p>is the magnetic moment of the current.</p>
<p>Show that for an arbitrary <span class="math-container">$u\in \mathbb{R}^3$</span></p>
<p><span class="math-container">$$m\cdot u=I\int_Su\cdot d\Sigma$$</span> is true.</p>
</blockquote>
<p>I tried doing this with Stokes but I can't seem to get to the desired equation. The teacher gave us a hint: <span class="math-container">$k_u(x):=\frac{1}{2}u\times x$</span> is a vector field and <span class="math-container">$\operatorname{curl}k_u = u$</span>.</p>
<p>Any tips or hints? I would appreciate it.</p>
| Atvin | 215,617 | <p>Hint: For any real numbers $a,b$: $\min(a,b)+\max(a,b)=a+b$.</p>
<p>Now, if we write the prime factorizations $a=p_1^{a_1} p_2^{a_2}\cdots$ and similarly $b=p_1^{b_1} p_2^{b_2}\cdots$, and use the equation I just mentioned for each pair of exponents $(a_i,b_i)$, you will get that $\gcd(a,b)\cdot\operatorname{lcm}(a,b)=ab$.</p>
|
970,882 | <p>This is homework so no answers please</p>
<p>We have the multiplication map $F:O(n)\times O(n)\to O(n)$ defined as $F(A,B)=AB$, where $O(n)=\{A\in M(n\times n):AA^{t}=\mathrm{id}\}$.
The smooth structure of $O(n)$ comes from the implicit function theorem: consider the smooth map $f:M(n\times n)\to Sym(n\times n)$ defined as $f(A)=A^{t}A$; then $O(n)=f^{-1}(\mathrm{Id})$, where $\mathrm{Id}$ is a regular value.</p>
<p>The problem is to show that F is smooth.</p>
<p>Smoothness of $F$ means smoothness of the coordinate representation of $F$. So I am trying to unearth the coordinate charts of $O(n)$ from the IFT. Any suggestions on how to do that? Any links on doing that for general level-set manifolds?</p>
<p>Thanks</p>
| Daniel Valenzuela | 156,302 | <p>$GL_n$ is a Lie group; in particular, multiplication is smooth. $O(n)$ is just a submanifold which is closed under multiplication, hence $F$ is smooth.</p>
<p>So to prove your homework it would be easier to prove that multiplication on $GL_n$ is smooth, or even easier (almost trivial) that multiplication on $M(n\times n)$ is smooth. Note that there are no inverses in the latter, hence no group structure. To show that $M(n\times n)$ has a smooth multiplication map, note that the charts are very easy! Write down the map as $\mathbb R^{n^2} \times \mathbb R^{n^2} \to M \times M \to M \to \mathbb R^{n^2}$. You will see smoothness.</p>
|
100,433 | <p>I am trying to export some largish integer arrays to HDF5 and know that every entry in them would fit into an unsigned 8-bit integer. As the default "DataFormat" that Mathematica uses for export is a 32-bit integer array, the resulting files are unnecessarily large. Does anyone know the correct syntax to export such an integer array as an 8-bit integer dataset?</p>
<p>I found the following suggestions in this <a href="https://mathematica.stackexchange.com/a/92064/169">answer</a> and a comment to this <a href="https://mathematica.stackexchange.com/a/47861/169">answer</a>:</p>
<pre><code>Export["int8.h5",
RandomInteger[{2^8 - 1}, {100, 100}],
"DataFormat" -> {"UnsignedInteger8"}
]
Export["int8.h5",
RandomInteger[{2^8 - 1}, {100, 100}],
{"UnsignedInteger8", "DataFormat"}
]
</code></pre>
<p>but they both seem either not to work at all or not to create the expected data representation in the files, which can be checked by either HDFView (which I consider authoritative) or Mathematica itself with:</p>
<pre><code>Import["int8.h5","DataFormat"]
</code></pre>
| Albert Retey | 169 | <p>The following seems to work, but I'd also be interested in other, possibly simpler or clearer versions...</p>
<pre><code>Export["int8.h5",
{
"Datasets" -> {"/data" -> RandomInteger[{2^8 - 1}, {100, 100}]},
"DataFormat" -> {"UnsignedInteger8"}
},
"Rules"
]
</code></pre>
<p>Two notes: </p>
<ol>
<li>Unfortunately I could not create a generalization of this which makes the compression ("DataEncoding" element) or attribute ("Attributes" element) features (both documented, but apparently broken) work. So if someone has a solution for those, I'd appreciate and accept answers which solve them.</li>
<li>I now found that in a comment to this <a href="https://mathematica.stackexchange.com/a/31583/169">answer</a> Szabolcs has the working syntax (for the not working "DataEncoding" element), but unfortunately that is quite hard to find so I hope this question (and answers) is acceptable anyway...</li>
</ol>
|
3,398,164 | <p>I am working through the math behind homographies, but my math skills are a bit rusty.</p>
<p>A homography can be calculated with 8 corresponding points (4-4) because the homography matrix has 8 degrees of freedom. This is because, <a href="https://docs.opencv.org/master/d9/dab/tutorial_homography.html#lecture_16" rel="nofollow noreferrer">even though the 3x3 matrix has 9 variables, one of them can be "normalized to one"</a>.
The cited explanation says:</p>
<blockquote>
<p>Note that we can multiply all <span class="math-container">$h_{ij}$</span> by nonzero k without changing the
equations</p>
</blockquote>
<p>and the following explanation is given in <em>Multiple View Geometry in Computer Vision</em>:</p>
<blockquote>
<p>Note that the matrix H occurring in this equation may be changed by
multiplication by an arbitrary non-zero scale factor without altering
the projective transformation. Consequently we say that H is a
homogeneous matrix, since as in the homogeneous representation of a
point, only the ratio of the matrix elements is significant. There are
eight independent ratios amongst the nine elements of H, and it
follows that a projective transformation has eight degrees of freedom.</p>
</blockquote>
<p>I have also read this post: <a href="https://math.stackexchange.com/questions/508668/degree-of-freedom-of-homography-matrix">degree of freedom of Homography matrix</a>, but I'm afraid I still don't understand.</p>
<p>Can someone explain in detail how this normalization works? Or why the scaling multiplication normalizes the matrix?</p>
<p>thanks a ton!</p>
| amd | 265,466 | <p>Suppose that <span class="math-container">$\mathbf y=H\mathbf x$</span>. Then by elementary properties of matrix multiplication, <span class="math-container">$(\lambda H)\mathbf x = \lambda(H\mathbf x)=\lambda\mathbf y$</span>, so when <span class="math-container">$\lambda\ne0$</span>, <span class="math-container">$H\mathbf x$</span> and <span class="math-container">$(\lambda H)\mathbf x$</span> represent the same point. To put it another way, any homogeneous transformation matrix, not only one that represents a homography, is uniquely determined up to an irrelevant scalar factor. </p>
<p>Note, by the way, that the text only claims that <em>some</em> element of the matrix can be normalized to <span class="math-container">$1$</span>, not that any specific element can. In general, zeros can appear anywhere within a transformation matrix, but if the matrix is nonzero, there must be at least one nonzero element.</p>
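<p>The scale-invariance argument is easy to verify numerically; in the sketch below (an arbitrary example matrix and point, not taken from the cited texts), <span class="math-container">$H\mathbf x$</span> and <span class="math-container">$(\lambda H)\mathbf x$</span> dehomogenize to the same 2D point:</p>

```python
def apply_homography(H, x):
    """Apply 3x3 H to a homogeneous 3-vector x and dehomogenize."""
    y = [sum(H[i][j] * x[j] for j in range(3)) for i in range(3)]
    return (y[0] / y[2], y[1] / y[2])

H = [[2.0, 0.5, 1.0],
     [0.0, 1.5, -2.0],
     [0.3, 0.1, 1.0]]
x = [1.0, 2.0, 1.0]

p1 = apply_homography(H, x)
lam = 7.0
Hs = [[lam * v for v in row] for row in H]  # lambda * H
p2 = apply_homography(Hs, x)
print(all(abs(a - b) < 1e-12 for a, b in zip(p1, p2)))  # True
```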
|
2,961,592 | <p>We know $4$ is a real number, but how can we prove that it is a complex number? How can we describe it in the form $a+ib$?</p>
| Michael Rozenberg | 190,319 | <p>Because <span class="math-container">$$\sqrt{y}-\sqrt{x}=\frac{y-x}{\sqrt{y}+\sqrt{x}}>0,$$</span>
<span class="math-container">$$y-\sqrt{xy}=\sqrt{y}(\sqrt{y}-\sqrt{x})>0$$</span> and
<span class="math-container">$$\sqrt{xy}-x=\sqrt{x}(\sqrt{y}-\sqrt{x})>0.$$</span></p>
|
2,459,263 | <p>If $p$ and $q$ are two points on a $n$-dimensional manifold $M$, then their tangent spaces are two $n$-dimensional vector spaces. So algebraically they have the same structure, but they are not the same because $p$ and $q$ are different points. I can understand they are not the same by visualization. How is it stated algebraically? </p>
<p>I mean if M is 2 dimensional I can picture two different planes, one on $p$ and one on $q$. How is that difference stated mathematically?</p>
<p>Thanks.</p>
| Amitai Yuval | 166,201 | <p>The algebraic statement is very simple: </p>
<p>Even though the spaces $T_pM$ and $T_qM$ are isomorphic, there are many different isomorphisms between these two spaces, none of which is "special" in any way or can be distinguished from the others. This is why one cannot state that the two spaces are equal to one another.</p>
<p>This is a serious matter which leads to deep theories. One simple example for this difference between isomorphism and equality is the following: let $\gamma:[0,1]\to M$ be a path, and let $X$ be a vector field along $\gamma$. That is, $X(t)$ is a tangent vector at $\gamma(t)$ for every $t\in[0,1]$. Now, one can wonder if $X$ is <em>constant</em>. However, this depends on <em>how</em> one identifies the different tangent spaces $T_{\gamma(t)}M$ to one another. If $M$ doesn't have some extra structure (such as a connection on the tangent bundle), then the statement "$X$ is constant" is just meaningless.</p>
|
1,900,365 | <p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p>
<p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p>
<p>but to no avail. Could someone point me in the right direction? </p>
| David Quinn | 187,299 | <p>You can take the product of the four consecutive integers as $$(m-1)m(m+1)(m+2)$$</p>
<p>Adding $1$ and simplifying, we get $$m^4+2m^3-m^2-2m+1$$
$$=m^4+m^2+1+2m^3-2m^2-2m$$
$$=(m^2+m-1)^2$$</p>
|
1,900,365 | <p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p>
<p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p>
<p>but to no avail. Could someone point me in the right direction? </p>
| gnasher729 | 137,175 | <p>You can solve this without much thinking. </p>
<p>By expanding the product, we find $m(m+1)(m+2)(m+3) = m^4 + 6m^3 + 11m^2 + 6m$. That's a bit bigger than $(m^2)^2 = m^4$. $(m^2+c)^2 = (m^4 + 2m^2c + c^2)$ is still too small because there is no term $m^3$. $(m^2+cm)^2 = m^4 + 2cm^3+c^2m^2$ looks better, especially if we let c = 3. </p>
<p>$(m^2+3m)^2 = m^4 + 6m^3 + 9m^2$ is still just a little bit too small, by about $2m^2$. So we add another c: $(m^2+3m+c)^2 = m^4 + 6m^3 + 2cm^2 + 9m^2 + 6cm + c^2$. We let c = 1 to match the term $11m^2$ and get $(m^2+3m+1)^2 = m^4 + 6m^3 + 11m^2 + 6m + 1$. Exactly 1 more than the product, just what we wanted. So </p>
<p>$m(m+1)(m+2)(m+3) = (m^2+3m+1)^2 - 1$.</p>
<p>That's the answer, without much thought at all.</p>
|
2,542,242 | <p>When I tried solving for the inverse of $f(x)=\frac{1-x}{-x}$, I got this:</p>
<p>$f^{-1}(x)=\frac{1}{1-x}$</p>
<p>I know that the way to check my answer would be to take the inverse of the inverse I just found, but this is what I get:</p>
<p>$f^{-1}(f^{-1}(x))=1-\frac{1}{x}=\frac{x-1}{x}$</p>
<p>The last part, when multiplied by $\frac{-1}{-1}$ is indeed my original function, but am I allowed to do that? And why does that get lost when taking the inverse?</p>
| spaceisdarkgreen | 397,125 | <p>You use the <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule" rel="nofollow noreferrer">Leibniz formula</a> $$ \frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t)\;dt = f(x,b(x))b'(x) - f(x,a(x))a'(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x}f(x,t)\;dt$$ and get $$ \frac{d}{dx}\int_0^xf(x-t)\;dt = f(0)+\int_0^xf'(x-t)\; dt.$$</p>
|
3,779,996 | <p>Let <span class="math-container">$(X_1,...,X_n)$</span> be a random sample with PDF <span class="math-container">$f(x;\theta) = \frac{x}{\theta}\exp(-x^2/(2\theta)), \theta > 0$</span></p>
<p>I want to show that the likelihood ratio test of <span class="math-container">$H_0 : \theta \le \theta_0$</span> against <span class="math-container">$H_1 : \theta > \theta_0$</span>, where <span class="math-container">$\theta_0>0$</span> is given, is a chi-square test.</p>
<p>This gives the likelihood function <span class="math-container">$\displaystyle L(\theta) = \frac{\prod x_i}{\theta^n}\exp(-\sum x_i^2/(2\theta))$</span></p>
<p>I am going to set <span class="math-container">$t = \prod X_i$</span> and <span class="math-container">$s = \sum X_i^2$</span>. So we get <span class="math-container">$\displaystyle L(\theta) = \frac{t}{\theta^n}\exp(-s/2\theta)$</span>. And <span class="math-container">$\max_{\theta \ge 0 }L(\theta)$</span> occurs when <span class="math-container">$\theta = \frac{s}{2n}$</span></p>
<p>And <span class="math-container">$\max_{0 \le \theta \le \theta_0} L(\theta) = \begin{cases}
L(\frac{s}{2n})&\text{if }\theta_0 \ge \frac{s}{2n}\\
L(\theta_0)&\text{else}
\end{cases}$</span></p>
<p>Now we have</p>
<p><span class="math-container">$$
\Lambda_{H_0} = \frac{\max_{0 \le \theta \le \theta_0} L(\theta)}{\max_{0 \le \theta } L(\theta)} = \begin{cases} 1 &\text{if } \theta_0 \ge \frac{s}{2n}\\ \bigg (\frac{s}{2n\theta_0}\bigg)^n\exp(n - s/(2\theta_0))&\text{else}
\end{cases}
$$</span></p>
<p>Hopefully I have calculated both of those correctly; now is where I run into my issue: I don't quite see how this is a chi-square test.</p>
| Michael Hardy | 11,667 | <p><span class="math-container">\begin{align}
L(\theta) & = \frac{t}{\theta^n}\exp\left(\frac{-s}{2\theta} \right) \\[8pt]
\ell(\theta) = \log L(\theta) & = -n\log\theta - \frac s {2\theta} + (\text{something not depending on } \theta) \\[8pt]
\ell\,'(\theta) & = \frac{-n}\theta + \frac s {2\theta^2} = \frac{s-2n\theta}{2\theta^2}\quad \begin{cases} >0 & \text{if } \theta<s/(2n), \\ =0 & \text{if } \theta=s/(2n), \\ <0 & \text{if } \theta > s/(2n). \end{cases} \\[8pt]
\end{align}</span>
So <span class="math-container">$\widehat{\theta\,} = s/(2n).$</span></p>
<p>So the likelihood ratio is
<span class="math-container">$$
\begin{cases} 1 &\text{if } \theta_0 \ge \frac s {2n}, \\[8pt] \bigg (\dfrac{s}{2n\theta_0}\bigg)^n\exp\left(n - \dfrac s {2\theta_0}\right)&\text{else}.
\end{cases}
$$</span>
You reject <span class="math-container">$\text{H}_0$</span> if this piecewise expression is improbably small.</p>
<p>Now <b>here is the crucial fact:</b> The expression above is a decreasing function of <span class="math-container">$s.$</span> Therefore you reject <span class="math-container">$\text{H}_0$</span> if <span class="math-container">$s$</span> is improbably big. And that makes this a chi-square test: each <span class="math-container">$X_i^2/\theta$</span> has a <span class="math-container">$\chi^2_2$</span> distribution, so under <span class="math-container">$\theta=\theta_0$</span> the statistic <span class="math-container">$s/\theta_0$</span> is distributed as <span class="math-container">$\chi^2_{2n}.$</span></p>
|
354,512 | <p>Find the minimal polynomial of the matrix M:
\begin{pmatrix}
0 & 0 & 0 & \dots & 0 & a_{1}\\
1 & 0 & 0 & \dots & 0 & a_{2}\\
0 & 1 & 0 & \dots & 0 & a_{3} \\
\dots & \dots \\
0 & 0 & 0 & \dots & 1 & a_{n}
\end{pmatrix}</p>
<p>Let's take vector $e_{1}$:
\begin{pmatrix}
1 \\
0 \\
\vdots\\
0
\end{pmatrix}</p>
<p>$M e_{1} = e_{2}$, $M e_{2} = e_{3}$, $M e_{3}= e_{4}\dots$.
$M^{n} e_{1}=(a_{1}\dots a_{n})$.
Why does $x^{n}-(a_{1}+a_{2}x+\dots+a_{n}x^{n-1})=0$ be the minimal polynomial of this matrix? How does it connect with the dimension of image? </p>
| Marc van Leeuwen | 18,880 | <p>Your computation shows that $M^n(e_1)-a_nM^{n-1}(e_1)-\cdots-a_2M(e_1)-a_{1}e_1=0$, and since $e_1,M(e_1),\ldots,M^{n-1}(e_1)$ are linearly independent, no nonzero polynomial of lower degree than$~n$ has this property of the polynomial $P=x^n-a_nx^{n-1}-\cdots-a_2x-a_1$. Then the minimal polynomial $\mu$ of$~M$, which has $\mu[M](v)=0$ for <em>every</em> $v$, must be a polynomial multiple of$~P$.</p>
<p>By definition $\mu$ has leading coefficient$~1$, and you may know that $\deg\mu\leq n$ always holds (for instance from the Cayley-Hamilton theorem), and this implies that $\mu=P$ is the only possibility. One does not <em>need</em> to reason like that though, as one can directly show $P[M](v)=0$ to hold for every$~v$, by showing it for $v=M^k(e_1)$ with $0\leq k<n$, which form a basis of the vector space: $P[M](M^k(e_1))=M^k(P[M](e_1))=M^k(0)=0$, since $P[M]$ and $M^k$ commute.</p>
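<p>The argument can be checked concretely. The sketch below (plain Python, an arbitrary integer example for $n=4$ — an illustration, not part of the answer) builds the matrix and verifies that $P[M]$ is the zero matrix:</p>

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, p):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(p):
        R = mat_mul(R, A)
    return R

a = [3, -1, 4, 2]  # a_1..a_4, arbitrary integer example
n = len(a)
# subdiagonal companion matrix: M e_k = e_{k+1} for k < n, last column = (a_1,...,a_n)^T
M = [[a[i] if j == n - 1 else int(i == j + 1) for j in range(n)]
     for i in range(n)]

# P[M] = M^n - (a_1 I + a_2 M + ... + a_n M^{n-1})
P = mat_pow(M, n)
for k in range(n):
    Mk = mat_pow(M, k)
    P = [[P[i][j] - a[k] * Mk[i][j] for j in range(n)] for i in range(n)]
assert all(v == 0 for row in P for v in row)
print("P[M] = 0 for this example")
```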
|
1,357,405 | <p>Under what conditions will the cubic equation
$ax^3 + bx^2 + cx + d$ where $a,b,c,d \in \mathbb R$ yield roots which have negative real parts? (All roots must have negative real parts)</p>
<p><strong><em>Motivation</strong>:
I am studying a dynamical system i.e, Chua circuit, in $ \mathbb R^3 $and wish to analyze it's stability. For stability analysis, one needs to find out eigen values of the Jacobian matrix. If the eigen values have negative real parts, then the system will have a stable fixed point.
I wish to synchronize the system and vary certain parameters so as to ensure that the system always has negative eigen values.
For equations, visit</em> <a href="http://www.chuacircuits.com/diagram.php" rel="nofollow">http://www.chuacircuits.com/diagram.php</a></p>
| i. m. soloveichik | 32,940 | <p>You can separate $x$ into the real and imaginary parts, $x=u+Iv$, to get two equations that must be satisfied. For one of them we can solve for $v$ and substitute to get the equation $$u^3+(b/a)u^2+((2ac+2b^2)/8a^2)u+(1/8)(bc-da)/a^2$$ for $u$. So you want this to have three negative roots $u$. To answer that you can try to use Descartes' rule of signs to give some information. </p>
|
1,357,405 | <p>Under what conditions will the cubic equation
$ax^3 + bx^2 + cx + d$ where $a,b,c,d \in \mathbb R$ yield roots which have negative real parts? (All roots must have negative real parts)</p>
<p><strong><em>Motivation</strong>:
I am studying a dynamical system i.e, Chua circuit, in $ \mathbb R^3 $and wish to analyze it's stability. For stability analysis, one needs to find out eigen values of the Jacobian matrix. If the eigen values have negative real parts, then the system will have a stable fixed point.
I wish to synchronize the system and vary certain parameters so as to ensure that the system always has negative eigen values.
For equations, visit</em> <a href="http://www.chuacircuits.com/diagram.php" rel="nofollow">http://www.chuacircuits.com/diagram.php</a></p>
| Piquito | 219,998 | <p>There is always a real root and you have three possibilities: </p>
<p>Let $F(x)= ax^3+bx^2+cx+d$ ; the discriminant $\Delta$ is defined by
$$\Delta=(bc)^2+18abcd-4ac^3-4db^3-27(ad)^2$$
(1) $\Delta>0$: $F(x)=a(x-x_1)(x-x_2)(x-x_3)$; the three roots, $x_1, x_2, x_3,$ are real.</p>
<p>(2) $\Delta=0$: $F(x)=a(x-x_1)(x-x_2)^2$; at least two roots coincide (eventually the three coincide); the three roots are real.</p>
<p>(3) $\Delta<0$: $F(x)=a(x-x_1)[(x-\alpha)^2+\beta^2]$ where $x_2=\alpha+i\beta$ with $\beta\neq0$; one root is (necessarily) real and the other two are complex conjugates.</p>
<p>From this, I think you can deduce a condition ensuring that "all roots must have negative real parts", as you desire.</p>
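<p>Neither answer names it, but the standard closed-form condition here is the Routh–Hurwitz criterion: for $a>0$, all roots of $ax^3+bx^2+cx+d$ have negative real parts iff $b>0$, $d>0$ and $bc>ad$ (for $a<0$, multiply the polynomial by $-1$ first). The sketch below is an illustration added to the thread, not part of either answer: it tests that predicate against cubics built from known roots, so no root-finding is needed:</p>

```python
def hurwitz_stable(a, b, c, d):
    """All roots of a x^3 + b x^2 + c x + d have negative real parts (assumes a > 0)."""
    return b > 0 and d > 0 and b * c > a * d

def coeffs_from_roots(r1, r2, r3):
    """Monic cubic (x - r1)(x - r2)(x - r3); complex roots must come in conjugate pairs."""
    b = -(r1 + r2 + r3)
    c = r1 * r2 + r1 * r3 + r2 * r3
    d = -(r1 * r2 * r3)
    return 1.0, b.real, c.real, d.real

cases = [
    ((-1 + 0j, -2 + 0j, -3 + 0j), True),     # three negative real roots
    ((-1 + 2j, -1 - 2j, -0.5 + 0j), True),   # stable complex-conjugate pair
    ((1 + 0j, -2 + 0j, -3 + 0j), False),     # one positive real root
    ((0.1 + 1j, 0.1 - 1j, -4 + 0j), False),  # pair with positive real part
]
for roots, expect in cases:
    a, b, c, d = coeffs_from_roots(*roots)
    assert hurwitz_stable(a, b, c, d) == expect
print("Routh-Hurwitz predicate matches the known root locations")
```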
|
4,114,962 | <blockquote>
<p>Prove that the product of four consecutive natural numbers is not the square of an integer.</p>
</blockquote>
<p>Would appreciate any thoughts and feedback on my suggested proof, which is as follows:</p>
<p>Let <span class="math-container">$f(n) = n(n+1)(n+2)(n+3) $</span>.
Multiplying out the expression and refactoring it in a slightly different way gives
<span class="math-container">$$f(n) = n^4 + 6n^3+11n^2+6n \\= n^4 + 6n^3 + 9n^2 + 2n^2 + 6n = (n^2 + 3n)^2 + 2n(n+3). \tag{1}\label{1} $$</span></p>
<p>We want to show that the only possible way for <span class="math-container">$ f(n) $</span> to be the square of an integer is if <span class="math-container">$ f(n) = (n^2 + 3n +1 ) ^2. $</span> We show this by proving that <span class="math-container">$ (n^2+3n)^2 < f(n) < (n^2+3n+2)^2 $</span>. The left-hand side follows immediately from <span class="math-container">$(1)$</span>, since <span class="math-container">$ 2n(n+3) > 0 $</span> for all <span class="math-container">$ n \geq 1 $</span>, and the right-hand side can be verified by multiplying out both sides:
<span class="math-container">$$
\begin{align}
(n^2+3n)^2 + 2n(n+3) &< (n^2+3n+2)^2 \\ \iff
n^4 + 6n^3 + 11n^2 + 6n &< n^4 + 9n^2 + 4 + 6n^3+4n^2+12n \\ \iff
0 &< 2n^2 + 6n + 4
\end{align}
$$</span>
which is true for all <span class="math-container">$n \geq 1 $</span>. Now we note that <span class="math-container">$n^2+3n = n(n+3)$</span> is even since one of the factors <span class="math-container">$n$</span> or <span class="math-container">$n+3$</span> is even for all <span class="math-container">$n$</span>. It follows that <span class="math-container">$ n^2+3n+1$</span> must be odd, and so <span class="math-container">$ (n^2+3n+1)^2 $</span> must be odd. But <span class="math-container">$ f(n) $</span> must be even, since either <span class="math-container">$n$</span> and <span class="math-container">$(n+2)$</span> are even, or <span class="math-container">$(n+1)$</span> and <span class="math-container">$(n+3)$</span> are even, and an even number multiplied by an odd number is an even number. So <span class="math-container">$f(n) \neq (n^2 + 3n +1)^2$</span> and therefore <span class="math-container">$f(n)$</span> cannot be the square of an integer for any <span class="math-container">$n \geq 1 $</span>.</p>
| Community | -1 | <p>Your solution is perfect and detailed.</p>
<p>Another, shorter approach based on observation: consider <span class="math-container">$k=n(n+3)=n^2+3n$</span>, so that <span class="math-container">$f(n)=k(k+2)$</span>, and rewrite this as:
<span class="math-container">$$\begin{align*}f(n) &= k(k+2)\\ &= (k+1)^2-1\end{align*}$$</span></p>

<p>Now, we need to prove that <span class="math-container">$k(k+2)=(k+1)^2-1$</span> is not the square of an integer.
Well, no two squares of nonzero integers differ by <span class="math-container">$1$</span>, and here <span class="math-container">$k+1\ge 4$</span>.
Hope that helps!!</p>
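<p>A quick numeric confirmation of this observation (note that the substitution making it work is <span class="math-container">$k=n(n+3)=n^2+3n$</span>, so that <span class="math-container">$n(n+1)(n+2)(n+3)=k(k+2)$</span>):</p>

```python
# With k = n(n+3), the four-term product collapses to k(k+2) = (k+1)^2 - 1.
for n in range(1, 200):
    k = n * (n + 3)
    assert n * (n + 1) * (n + 2) * (n + 3) == k * (k + 2) == (k + 1) ** 2 - 1
```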
|
958,267 | <p>When I was doing problems in my textbook, I came across this problem:</p>
<blockquote>
<p>The velocity of a heavy meteorite entering the earth's atmosphere is inversely proportional to $\sqrt s$ when it is s kilometers from the earth's center. Show that the meteorite's acceleration is inversely proportional to $s^2$.</p>
</blockquote>
<p>From what I have learned, I know the velocity $= k/(\sqrt s)$. And the acceleration is just the derivative of the velocity, which means $dv/dt$. But when I looked at this answer, it does something like $\frac{dv}{dt} = \frac{dv}{ds} * \frac{ds}{dt} = \frac{dv}{ds}v$ and then got a different answer than mine. I don't understand why we can't just take the derivative directly. I appreciate the help, thanks!</p>
| Eweler | 179,156 | <p>Because $v$ is given as a function of $s$, and $s$ is itself a function of time, you can't differentiate the velocity with respect to time without using the chain rule. Perhaps putting this in more familiar notation, $v = v(s(t)) $</p>
<p>Just using the chain rule, and writing $\frac{dv}{ds} = \frac{b}{s^{3/2}}$ (where $b = -k/2$, since $v = k/\sqrt{s}$), $$ \frac{dv}{dt} = \frac{dv}{ds}\frac{ds}{dt} = v\frac{b}{s^{3/2}} = \frac{k'}{s^2}$$</p>
<p>So, we have shown that $a = \frac{dv}{dt} \propto \dfrac{1}{s^2} $</p>
<p>Where I absorbed all the multipliers into the new constant $k'$ </p>
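<p>A hedged numeric sketch of this computation, with an arbitrary illustrative constant $k$: the closed form $a = v\,\frac{dv}{ds} = -\frac{k^2}{2s^2}$ can be checked against a central-difference derivative.</p>

```python
import math

# v(s) = k / sqrt(s); then a = v * dv/ds should equal -k^2 / (2 s^2),
# i.e. the acceleration is inversely proportional to s^2.
k = 3.0                                      # illustrative constant
v = lambda s: k / math.sqrt(s)
h = 1e-6
for s in (1.0, 2.0, 5.0, 10.0):
    dvds = (v(s + h) - v(s - h)) / (2 * h)   # numerical dv/ds
    a = v(s) * dvds                          # chain rule: a = v dv/ds
    assert abs(a - (-k**2 / (2 * s**2))) < 1e-6
```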
|
3,490,454 | <p>Definition from Munkres, the textbook we are following: If <span class="math-container">$Y$</span> is a compact Hausdorff space and <span class="math-container">$X$</span> is a proper subspace of <span class="math-container">$Y$</span> whose closure equals <span class="math-container">$Y$</span>, then <span class="math-container">$Y$</span> is said to be a compactification of <span class="math-container">$X$</span>. If <span class="math-container">$Y \setminus X$</span> equals a single point, then <span class="math-container">$Y$</span> is called the one-point compactification of <span class="math-container">$X$</span>. From this definition, I understood it this way: I just need to add one single point to the given space and it would be compact and Hausdorff. But don’t I need to add 4 points? Namely 0, 1, 2 and 3. Because these are the limit points that the given space lacks in order to be closed, and in turn, compact. What am I missing?</p>
| Henno Brandsma | 4,280 | <p>Look at <span class="math-container">$$X=\{(x,y)\in \Bbb R^2\mid (x+1)^2 + y^2= 1\} \cup \{(x,y)\in \Bbb R^2\mid (x-1)^2 + y^2= 1\}$$</span></p>
<p>which is the subspace of the plane of two circles intersecting exactly at <span class="math-container">$(0,0)$</span>.</p>
<p><span class="math-container">$X$</span> is compact as the union of two compact unit circles. (or closed and bounded whatever you like).</p>
<p><span class="math-container">$X\setminus \{(0,0)\}$</span> consists of two disjoint open intervals, topologically. Any circle minus a point is homeomorphic to <span class="math-container">$(0,1)$</span> (e.g. for the unit circle around the origin minus <span class="math-container">$(1,0)$</span>, just use the homeomorphism <span class="math-container">$(0,1) \ni t \to (\cos(2\pi t), \sin(2 \pi t))$</span>; all points behave the same on all circles, so this holds for our two circles of <span class="math-container">$X$</span> minus <span class="math-container">$(0,0)$</span> too).</p>

<p>And a union of two disjoint open intervals is just our original space (the gap between them in <span class="math-container">$\Bbb R$</span> is irrelevant; all that matters is that they're two disjoint open intervals).</p>

<p>So <span class="math-container">$X$</span> is the one-point compactification of <span class="math-container">$(0,1) \cup (2,3)$</span>: it's compact and there is <em>one point</em> (the compactifying point, as it were) that we can remove to get our original space back (two disjoint open intervals, up to homeomorphism).
This is how you can tell that <span class="math-container">$X$</span> is the (unique up to homeomorphism!) one-point compactification. You know it exists because <span class="math-container">$(0,1) \cup (2,3)$</span> is locally compact and Hausdorff.</p>
|
4,003,019 | <p>So I've just started learning topology and a lot of the definitions confuse me.
My main problem is that some of the definitions seem inconsistent to me. For example, in <em>Topology Without Tears</em> the author says that a topology <span class="math-container">$\tau$</span> is a set of subsets of <span class="math-container">$X$</span> and then proceeds with the axioms.
However, I have also seen many other sources say that a topology is a family of subsets of <span class="math-container">$X$</span>, followed by the same axioms, without ever referring to the topology <span class="math-container">$\tau$</span> as a set, which confuses me.</p>
<p>I think my confusion comes from the definition of what a family actually is; I have seen several confusing definitions of a family (for instance, that it is a surjective function from an index set), but I cannot seem to grasp the idea.</p>
<p>So why do they define them differently and what is a family?</p>
<p>Thanks in advance.</p>
| Arthur | 15,500 | <p>"A set of sets" is a somewhat abstract notion. It is, formally, what a topology is, but people also need to know how to think about them. Having an informal "hierarchy" of container names gives newcomers some mental intuitive help (or at least, that's the idea).</p>
<p>Thus we get "a family of sets".</p>
|
110,867 | <p>I want to prove that the function $f(x) := x^3$ for all real $x$ defines a homeomorphism from $\mathbb{R}$ to $\mathbb{R}$. But I am finding it difficult to prove that the inverse map is continuous!</p>
| Alex Becker | 8,173 | <p>Suppose $x^{1/3}$ is not continuous at some $x_0\in\mathbb R$, so for some $\epsilon>0$ we have a sequence $(x_n)$ such that $\lim\limits_{n\to\infty} x_n = x_0$ yet $|x_n^{1/3}-x_0^{1/3}|>\epsilon$ for all $n$. We must have either infinitely many $n$ such that $x_n^{1/3}-x_0^{1/3}$ is positive, or infinitely many $n$ such that it is negative. Similarly, either infinitely many of these $x_n$ are $\geq 0$ or infinitely many are $\leq 0$. By relabeling our $x_n$ we can assume that neither of these signs changes for any $n$. I will deal with the case $x_n\geq 0$ for all $n$ and leave the other to you. If $x_n^{1/3}-x_0^{1/3}$ is positive for all $n$, then $x_n^{1/3}>x_0^{1/3}+\epsilon$, so $x_n>x_0+3x_0^{2/3}\epsilon+3x_0^{1/3}\epsilon^2+\epsilon^3\geq x_0+\epsilon^3$ for all $n$, contradicting the fact that $\lim\limits_{n\to\infty} x_n=x_0$. If it is negative for all $n$, then $x_0^{1/3}>x_n^{1/3}+\epsilon$, so $x_0>x_n+3x_n^{2/3}\epsilon+3x_n^{1/3}\epsilon^2+\epsilon^3\geq x_n+\epsilon^3$ for all $n$, again contradicting the fact that $\lim\limits_{n\to\infty} x_n=x_0$. Thus $x^{1/3}$ is continuous at every $x_0\in\mathbb R$, so $x^{1/3}$ is continuous.</p>
|
867,207 | <p>The Efron-Stein inequality says that if $X_1,\ldots,X_n$ are independent random variables and $f:R^n \rightarrow R$ is such that $Z:=f(X_1,\ldots,X_n)$ has finite variance, then</p>

<p>$$\operatorname{Var}(Z)\le \sum_{i=1}^n E\left[\left(Z-E^{(i)}[Z]\right)^2\right]$$</p>
<p>where $E^{(i)}$ denotes conditional expectation taken w.r.t. $X_i$ by keeping the rest of the variables fixed.</p>
<p>On going through the proof, it is not clear to me why we need the variables to be independent, and where that is used in the proof. </p>
| hexaflexagonal | 162,141 | <p>A relation on $A$ is simply a subset of the Cartesian product $A \times A$. For $A=\{a,b,c,d\}$, both $(b,c)$ and $(b,d)$ are contained in $A \times A$; therefore, $\{(b,c),(b,d)\}$ is a relation on $A$. </p>
|
3,143,532 | <blockquote>
<p>Find <span class="math-container">$\det A$</span> and <span class="math-container">$\text{Tr} A$</span> for the matrix <span class="math-container">$A\in M_n(\mathbb{Q})$</span> such that <span class="math-container">$\sqrt[n]{p}$</span> is an eigenvalue of <span class="math-container">$A$</span>, where <span class="math-container">$p$</span> is a prime number or a positive integer such that <span class="math-container">$\sqrt[n]{p}$</span> is irrational.</p>
</blockquote>
<p>My attempt is described below. From hypothesis we know that
<span class="math-container">$$ \det(A-\sqrt[n]{p}I)=0,
$$</span> hence <span class="math-container">$ \det(A+\sqrt[n]{p}I)=0$</span> because the characteristic polynomial of matrix <span class="math-container">$A$</span> has rational coefficients. Multiplying these two relation we get <span class="math-container">$ \det(A^2-\sqrt[n]{p^2}I)=0$</span> and so on. </p>
<p>From here, I can't make further progress. How should I proceed? Thanks.</p>
| FDP | 186,817 | <p>For <span class="math-container">$\theta \in [0;\pi]$</span>,
<span class="math-container">\begin{align}
J(\theta)&=\int_0^\infty \ln\left(1-\frac{2\cos(2\theta)}{x^2}+\frac{1}{x^4}\right) \,dx
\end{align}</span>
Perform the change of variable <span class="math-container">$y=\dfrac{1}{x}$</span>,</p>
<p><span class="math-container">\begin{align}
J(\theta)&=\int_0^\infty \frac{\ln\left(1-2\cos(2\theta)x^2+x^4\right)}{x^2} \,dx
\end{align}</span></p>
<p>For <span class="math-container">$a\geq -1$</span>, define the function <span class="math-container">$F$</span> by,
<span class="math-container">\begin{align}F(a)&=\int_0^\infty \frac{\ln\left(1+2ax^2+x^4\right)}{x^2} \,dx\\
&=\left[-\frac{\ln\left(1+2ax^2+x^4\right)}{x}\right]_0^\infty+\int_0^\infty \frac{4\left(x^2+a\right)}{1+2ax^2+x^4}\,dx\\
&=\int_0^\infty \frac{4\left(x^2+a\right)}{1+2ax^2+x^4}\,dx\\
\end{align}</span>
Perform the change of variable <span class="math-container">$y=\dfrac{1}{x}$</span>,
<span class="math-container">\begin{align}F(a)&=\int_0^\infty \frac{4\left( \frac{1}{x^2}+a\right) }{x^2\left(1+\frac{2a}{x^2}+\frac{1}{x^4}\right) } \,dx\\
&=\int_0^\infty \frac{4\left( 1+ax^2\right) }{x^4+2ax^2+1 } \,dx\\
\end{align}</span>
Averaging the two expressions obtained for <span class="math-container">$F(a)$</span>,
<span class="math-container">\begin{align}F(a)&=\int_0^\infty \frac{2(a+1)\left( 1+x^2\right) }{x^4+2ax^2+1 } \,dx\\
&=2(a+1)\int_0^\infty \frac{\left(1+\frac{1}{x^2}\right)}{x^2+\frac{1}{x^2}+2a } \,dx\\
&=2(a+1)\int_0^\infty \frac{\left(1+\frac{1}{x^2}\right)}{\left(x-\frac{1}{x}\right)^2+2(a+1) } \,dx\\
\end{align}</span>
Perform the change of variable <span class="math-container">$y=x-\dfrac{1}{x}$</span>,
<span class="math-container">\begin{align}F(a)&= 2(a+1)\int_{-\infty}^{+\infty}\frac{1}{x^2+2(a+1)}\,dx\\
&=4(a+1)\int_{0}^{+\infty}\frac{1}{x^2+2(a+1)}\,dx\\
&=\left[2\sqrt{2(a+1)}\arctan\left( \frac{x}{\sqrt{2(a+1)}} \right)\right]_0^\infty\\
&=\boxed{\pi\sqrt{2(1+a)}}
\end{align}</span></p>
<p>Observe that, <span class="math-container">$J(\theta)=F\big(-\cos(2\theta)\big)$</span>.</p>
<p><span class="math-container">\begin{align} 2(1-\cos(2\theta))&=2(1-\cos^2(\theta)+\sin^2 (\theta))\\
&=2\times 2\sin^2 (\theta)\\
&=4\times \sin^2 (\theta)\\
\end{align}</span>
Since, for <span class="math-container">$\theta \in [0;\pi],\sin(\theta)\geq 0$</span> then <span class="math-container">$\sqrt{2(1-\cos(2\theta))}=2\sin(\theta)$</span></p>
<p>Therefore,
<span class="math-container">\begin{align}\boxed{J(\theta)=2\pi \sin(\theta)}\end{align}</span></p>
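<p>As a numerical sanity check (a sketch, not part of the derivation): at <span class="math-container">$a=0$</span>, i.e. <span class="math-container">$\theta=\pi/4$</span>, the formula predicts <span class="math-container">$F(0)=\pi\sqrt2$</span>. Splitting the integral at <span class="math-container">$x=1$</span> and substituting <span class="math-container">$y=1/x$</span> on <span class="math-container">$[1,\infty)$</span> turns it into smooth integrals over <span class="math-container">$[0,1]$</span> plus the exact constant <span class="math-container">$\int_0^1(-4\ln y)\,dy=4$</span>:</p>

```python
import math

# F(0) = ∫_0^∞ ln(1+x^4)/x^2 dx
#      = ∫_0^1 ln(1+x^4)/x^2 dx + ∫_0^1 ln(1+y^4) dy + 4,
# and this should equal pi * sqrt(2).

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

g1 = lambda x: math.log(1 + x**4) / x**2 if x > 0 else 0.0  # limit is 0 at x=0
g2 = lambda y: math.log(1 + y**4)

F0 = simpson(g1, 0.0, 1.0) + simpson(g2, 0.0, 1.0) + 4.0
assert abs(F0 - math.pi * math.sqrt(2)) < 1e-6
```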
|
1,552,775 | <p>In how many ways 5 blue pens and 6 black pens can be distributed to 6 children?</p>
<hr>
<p>To do that I used:
$\text{Coefficient of } x^6 \text{ in } ((1+x+x^2+x^3+x^4+x^5)(1+x+x^2+x^3+x^4+x^5+x^6))$
and got answer = 6</p>
<p>but options given are:</p>
<p>a) 97020</p>
<p>b) 116424</p>
<p>c) 8008</p>
<p>d) 672</p>
<p>How does taking coefficient gives answer for distribution problems?</p>
| Brian M. Scott | 12,042 | <p>It doesn’t. The coefficient of $x^6$ in your polynomial is the number of ways to choose a total of $6$ pens, split between the two colors; it is not the number of ways to distribute pens amongst children. For instance, the term $x^2\cdot x^4$ (before you collect terms) corresponds to choosing $2$ blue and $4$ black pens.</p>
<p>HINT: This problem is quite different. Let’s forget about the black pens for a moment and ask in how many ways $5$ blue pens can be distributed amongst $6$ children. This is a standard <a href="https://en.wikipedia.org/wiki/Stars_and_bars_%28combinatorics%29" rel="nofollow">stars and bars problem</a>, and the answer is</p>
<p>$$\binom{5+6-1}{6-1}=\binom{10}5=252\;.$$</p>
<p>(The reasoning behind the formula is explained fairly well at the linked article.)</p>
<p>In similar fashion you can calculate the number of ways to distribute the $6$ black pens amongst the $6$ children. These two distributions are completely independent of each other: any distribution of the blue pens can be combined with any distribution of the black pens. This means that in order to get the total number of possible distributions of the $11$ pens, you should combine the number of distributions of the blue pens and the number of distributions of the black pens . . . how?</p>
<p>Note that all of this is based on the assumption that pens of the same color are indistinguishable, while the children are distinguishable. Thus giving George all $11$ pens is different from giving Tripta all $11$ pens, but giving Ivan $3$ blue pens and all the other pens to Nina is one arrangement no matter which $3$ blue pens Ivan gets.</p>
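<p>The two counts (and their product) can be brute-forced to confirm option (b); a small sketch:</p>

```python
from itertools import product

# Solutions of a1+...+a6 = 5 (blue pens) and b1+...+b6 = 6 (black pens),
# children distinguishable, pens of one color indistinguishable.
blue = sum(1 for a in product(range(6), repeat=6) if sum(a) == 5)
black = sum(1 for b in product(range(7), repeat=6) if sum(b) == 6)
assert blue == 252              # C(10,5), by stars and bars
assert black == 462             # C(11,5)
assert blue * black == 116424   # option (b)
```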
|
3,676,719 | <p>I've recently started to learn about differential equations and I am having a hard time solving any of them.</p>
<p>I feel like I'm missing some steps.</p>
<p>Therefore, could I please know how to solve:</p>
<p><span class="math-container">$(ye^x +y)dx+ye^{(x+y)}dy=0$</span></p>
<p>So far, I've looked around the book and some websites that could give the final answer, to at least know what way should I go, but I feel like I'm going nowhere. All I was able to find is that the equation above is something called "an equation with separable variables".</p>
<p>Equation:
<a href="https://www.symbolab.com/solver/ordinary-differential-equation-calculator/%5Cleft(ye%5E%7Bx%7D%2By%5Cright)dx%2Bye%5E%7Bx%2By%7Ddy%3D0" rel="nofollow noreferrer">here</a></p>
<p>Thank you very much.</p>
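<p>For what it's worth, here is a hedged sketch of the separation (assuming <span class="math-container">$y\neq 0$</span>): dividing by <span class="math-container">$y e^x$</span> gives <span class="math-container">$(1+e^{-x})\,dx + e^y\,dy = 0$</span>, i.e. <span class="math-container">$\frac{dy}{dx} = -(1+e^{-x})e^{-y}$</span>. The check below substitutes this slope back into the original equation:</p>

```python
import math, random

# Original equation: (y e^x + y) dx + y e^{x+y} dy = 0.
# With dy/dx = -(1 + e^{-x}) e^{-y}, the residual should vanish identically.
random.seed(0)
for _ in range(100):
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(0.1, 2.0)                      # keep y != 0
    dydx = -(1 + math.exp(-x)) * math.exp(-y)
    residual = (y * math.exp(x) + y) + y * math.exp(x + y) * dydx
    assert abs(residual) < 1e-9
```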
| Noob mathematician | 779,382 | <p>Your statement is right, but the following is <em>not</em> true for <span class="math-container">$n≥2$</span>: "for a matrix <span class="math-container">$A$</span> of size <span class="math-container">$n$</span>, if <span class="math-container">$trace(A)=0$</span> then <span class="math-container">$A^2=0$</span>."</p>
<p>If <span class="math-container">$A=0$</span> then it is easy to see that <span class="math-container">$trace(A)=0$</span></p>
<p>Suppose <span class="math-container">$A\ne 0$</span> but <span class="math-container">$A^2=0$</span> , then <span class="math-container">$\exists x\in F^2$</span> such that <span class="math-container">$Ax\ne 0$</span></p>
<p>You see <span class="math-container">$\{x,Ax\}$</span> forms a basis of <span class="math-container">$F^2$</span>.(why?)</p>
<blockquote>
<p>Let <span class="math-container">$c_1x+c_2Ax=0$</span> where <span class="math-container">$c_1,c_2\in F$</span>. Applying <span class="math-container">$A$</span> gives <span class="math-container">$A(c_1x+c_2Ax)=0\implies c_1Ax+c_2A^2x=c_1Ax=0$</span>, so <span class="math-container">$c_1=0$</span> because <span class="math-container">$Ax\ne 0$</span>; then <span class="math-container">$c_2Ax=0$</span> forces <span class="math-container">$c_2=0$</span>.</p>
<p>So <span class="math-container">$\{x,Ax\}$</span> is linearly independent, hence a basis of <span class="math-container">$F^2$</span>.</p>
<p>Look at the matrix of <span class="math-container">$A$</span> with respect to this basis: it is
<span class="math-container">$\begin{pmatrix}
0&0\\1&0
\end {pmatrix}$</span> as <span class="math-container">$A(x)=0\cdot x +1\cdot Ax$</span> and <span class="math-container">$A(Ax)=A^2x=0\cdot x +0\cdot Ax.$</span></p>
<p>So the <span class="math-container">$trace(A)=0$</span>.</p>
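<p>A small check of both claims above (the specific matrices are just illustrations):</p>

```python
# A = [[0,0],[1,0]] is the matrix from the answer: trace 0 and A^2 = 0.
# B shows that trace 0 alone does NOT force the square to vanish.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 0], [1, 0]]
assert A[0][0] + A[1][1] == 0                  # trace(A) = 0
assert matmul(A, A) == [[0, 0], [0, 0]]        # A^2 = 0

B = [[1, 0], [0, -1]]
assert B[0][0] + B[1][1] == 0                  # trace(B) = 0
assert matmul(B, B) == [[1, 0], [0, 1]]        # but B^2 = I != 0
```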
|
439,750 | <p>I have to check if $\int_{0}^\infty \mathrm 1/(x\ln(x)^2)\,\mathrm dx $ is convergent or divergent.</p>
<p>My approach was to integrate the function, hence: $\int_{0}^\infty \mathrm 1/(x\ln(x)^2)\,\mathrm dx=-\lim_{x \to \infty} 1/\ln(x)+ \lim_{x \to 0} 1/\ln(x)=0 $</p>

<p>Still my book says that it is divergent. Maybe the $\infty$ sign of the integral means to check for $+\infty$ and $-\infty$, or I just overlooked something. Any help would be appreciated.</p>
| Mikasa | 8,581 | <p>Look at the following improper integral, where $f(x)=\frac{1}{x\ln^2 x}$ is the integrand (which has a singularity at the interior point $x=1$):
$$
\int_1^2f(x)dx
$$
Certainly, $\lim_{x\to 1^+}(x-1)f(x)=+\infty$, so the limit comparison test (against $\int_1^2\frac{dx}{x-1}$) shows that the integral is divergent.</p>
|
285,975 | <p>I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.</p>
<blockquote>
<p>At a distance of <span class="math-container">$45m$</span> from a traffic light, a car traveling <span class="math-container">$15 m/sec$</span> is brought to a stop at a constant deceleration.</p>
<p>a. What is the value of deceleration?</p>
<p>b. How far has the car moved when its speed has been reduced to <span class="math-container">$3m/sec$</span>?</p>
<p>c. How many seconds would the car take to come to a full stop?</p>
</blockquote>
<p>Can somebody give me some hints as to where I should start? All I know from reading this is that <span class="math-container">$v_0=15m$</span>, and I have no idea what to do with the <span class="math-container">$45m$</span> distance. I can't tell if it starts to slow down when it gets to <span class="math-container">$45m$</span> from the light, or stops <span class="math-container">$45m$</span> from the light.</p>
<hr />
<p>Edit:</p>
<p>I do know that since accelleration is the change in velocity over a change in time, <span class="math-container">$V(t)=\int a\ dt=at+C$</span>, where <span class="math-container">$C=v_0$</span>. Also, <span class="math-container">$S(t)=\int v_{0}+at\ dt=s_0+v_0t+\frac{1}{2}at^2$</span>. But I don't see a time variable to plug in to get the answers I need... or am I missing something?</p>
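<p>For reference, here is a hedged sketch of the standard reading of the problem (assuming the car decelerates uniformly over the full <span class="math-container">$45\,m$</span> and stops exactly at the light), using <span class="math-container">$v^2 = v_0^2 + 2as$</span>:</p>

```python
# Constant-acceleration kinematics, assuming the 45 m is the stopping distance.
v0, s_stop = 15.0, 45.0
a = -v0**2 / (2 * s_stop)            # (a) from 0 = v0^2 + 2*a*s_stop
assert a == -2.5                     # m/s^2 (a deceleration of 2.5 m/s^2)

v = 3.0
d = (v0**2 - v**2) / (2 * (-a))      # (b) distance travelled when v = 3 m/s
assert abs(d - 43.2) < 1e-12         # m

t = v0 / (-a)                        # (c) from 0 = v0 + a*t
assert t == 6.0                      # s
```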
| dinoboy | 43,912 | <p>OK, so we have the useful lemma that for integers $a,b$ we have $(a-b)|(f(a) - f(b))$. This follows essentially from the fact that $(a-b)|(a^n - b^n)$ for any $n$. This is why $f(x) \equiv 0 \pmod{m}$ whenever $x \equiv x_0 \pmod{m}$ and $m \mid f(x_0)$. </p>

<p>Now, how does this help with part i)? Suppose the conclusion is false, so for some $M$ we have $x > M \implies f(x)$ is prime. Remark that for all $x \equiv M+1 \pmod{f(M+1)}$ we have $f(M+1)|f(x)$, so for $x > M$ with $x \equiv M+1 \pmod{f(M+1)}$ the prime $f(x)$ is divisible by the prime $f(M+1)$, forcing $f(x) = f(M+1)$. But then $f$ takes the same value at infinitely many points, which is a contradiction if $f$ is nonconstant, so we are done.</p>
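<p>The divisibility step can be illustrated with a concrete polynomial (the choice <span class="math-container">$f(x)=x^2+x+41$</span> and the base point are hypothetical, purely for illustration):</p>

```python
# If m = f(x0), then f(x0 + k*m) ≡ f(x0) ≡ 0 (mod m) for every k,
# so all of these values share the factor m.
f = lambda x: x * x + x + 41   # hypothetical sample polynomial
x0 = 3
m = f(x0)                      # f(3) = 53
for k in range(1, 6):
    assert f(x0 + k * m) % m == 0
```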
|
285,975 | <p>I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.</p>
<blockquote>
<p>At a distance of <span class="math-container">$45m$</span> from a traffic light, a car traveling <span class="math-container">$15 m/sec$</span> is brought to a stop at a constant deceleration.</p>
<p>a. What is the value of deceleration?</p>
<p>b. How far has the car moved when its speed has been reduced to <span class="math-container">$3m/sec$</span>?</p>
<p>c. How many seconds would the car take to come to a full stop?</p>
</blockquote>
<p>Can somebody give me some hints as to where I should start? All I know from reading this is that <span class="math-container">$v_0=15m$</span>, and I have no idea what to do with the <span class="math-container">$45m$</span> distance. I can't tell if it starts to slow down when it gets to <span class="math-container">$45m$</span> from the light, or stops <span class="math-container">$45m$</span> from the light.</p>
<hr />
<p>Edit:</p>
<p>I do know that since accelleration is the change in velocity over a change in time, <span class="math-container">$V(t)=\int a\ dt=at+C$</span>, where <span class="math-container">$C=v_0$</span>. Also, <span class="math-container">$S(t)=\int v_{0}+at\ dt=s_0+v_0t+\frac{1}{2}at^2$</span>. But I don't see a time variable to plug in to get the answers I need... or am I missing something?</p>
| Hagen von Eitzen | 39,174 | <p>For (ii) note that $(x_0+km)^n=x_0^n+{n\choose 1}x_0^{n-1}m+{n\choose 2}x_0^{n-2}m^2+\ldots +m^n = x_0^n+m\cdot(\ldots)$, therefore in general $f(x_0+km)\equiv f(x_0)\pmod m$.</p>
<p>There are at most finitely many $x$ with $f(x)=0$, so we can pick $x_0$ such that $m:=f(x_0)\ne 0$.
Among the infinitely many $x=x_0+km$, for which we already know $f(x+km)\equiv 0\pmod m$, there are at most finitely many with $f(x)=0$, finitely many with $f(x)=m$, $f(x)=-m$.
Therefore, we find $x_1$ with $m-1:=f(x_1)$ a multiple of $m$ $\notin\{-m,0,m\}$. Especially, $|m_1|\ge 2$.
From there, we similarly find a number $x_2=x_1+km_1$ for some $k$, such that $f(x_2)\equiv 0\pmod {m_1}$ and $m_2:=f(x_2)\notin\{-m_1,0,m_1\}$.
Therefore $m_2$ is composite ($m_1$ is a nontrivial factor - nontrivial because neither $|m_1|=1$ nor $|m_1|=|m_2|$).
Now $f(x_2+km_2)$ is a multiple of $m_2$ (and hence not prime) for all $k$.
In summary: I repeatedly used facts about infinitely many $x$ to find a function value that is nonzero, then not a unit, then not prime, ...</p>
|
345,652 | <p>I try to understand why by definition </p>
<ol>
<li>$[c_0,c_1,\ldots,c_n]=[c_0,[c_1,\ldots,c_n]]$ and also </li>
<li>$[c_0,c_1,\ldots,c_n]=[c_0,c_1,\ldots,c_{n-2},[c_{n-1},c_n]]$ .</li>
</ol>
<p>Those are continued fractions, and $1$ and $2$ are notes I have in the lecture summary.</p>
<p>But we can add brackets wherever we want; for example:<br>
$[c_0,c_1,\ldots,c_n]=[c_0,c_1,[c_2,\ldots,c_n]]$ </p>
<p>Thanks!</p>
| Yoni Rozenshein | 36,650 | <p>Usually one wouldn't write such a thing as $[c_0; [c_1, \ldots, c_n]]$, because elements appearing in a continued fraction representation are supposed to be positive integers (except the one before the semicolon, if there is one, which may be a negative integer).</p>
<p>But it doesn't contradict anything to write this. Well, from the definition of a continued fraction, you have</p>
<p>$$[c_0; c_1, \ldots, c_n] = c_0 + \frac 1 {[c_1, \ldots, c_n]}$$</p>
<p>And also</p>
<p>$$[c_0; c_1] = c_0 + \frac 1 {c_1}$$</p>
<p>Can you see how claim 1 follows from this?
For claim 2, try using induction with claim 1.</p>
<p><strong>Note:</strong> I added a semicolon after $c_0$ because the index "$0$" seems to indicate that is what you were looking for.</p>
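<p>Both identities are easy to check with exact rational arithmetic; a sketch (the sample coefficients are arbitrary):</p>

```python
from fractions import Fraction

# [c0, c1, ..., cn] = c0 + 1/[c1, ..., cn], evaluated exactly.
def cf(cs):
    if len(cs) == 1:
        return Fraction(cs[0])
    return cs[0] + 1 / cf(cs[1:])

cs = [3, 7, 15, 1, 292]                      # arbitrary sample
# identity 1: [c0,...,cn] = [c0, [c1,...,cn]]
assert cf(cs) == cs[0] + 1 / cf(cs[1:])
# identity 2: [c0,...,cn] = [c0,...,c_{n-2}, [c_{n-1}, c_n]]
assert cf(cs) == cf(cs[:-2] + [cf(cs[-2:])])
```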
|
3,163,525 | <p>How would I expand the following function as a power series, around <span class="math-container">$\eta=0$</span>?</p>
<p><span class="math-container">$$g_0(1,\eta)=\frac{\left(\frac{PV}{NkT}\right)_0-1}{4\eta}$$</span></p>
<p>Note that:</p>
<p><span class="math-container">$$\left(\frac{PV}{NkT}\right)_0=1+\frac{3\eta}{\eta_c-\eta}+\sum_{k=1}^4kA_k\left(\frac{\eta}{\eta_c}\right)^k$$</span></p>
<p>Then we have:</p>
<p><span class="math-container">$$g_0(1,\eta)=\frac{\frac{3\eta}{\eta_c-\eta}+\sum_{k=1}^4kA_k\left(\frac{\eta}{\eta_c}\right)^k}{4\eta}$$</span></p>
| Community | -1 | <p><strong>Hint:</strong></p>
<p>The sum is a polynomial with no constant term and its handling is not a problem.</p>
<p>Then</p>
<p><span class="math-container">$$\frac{\eta}{\eta_c-\eta}=\frac1{1-\dfrac\eta{\eta_c}}-1=\frac\eta{\eta_c}+\frac{\eta^2}{\eta_c^2}+\frac{\eta^3}{\eta_c^3}+\cdots$$</span></p>
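<p>The geometric series can be verified numerically (the value of <span class="math-container">$\eta_c$</span> below is an arbitrary illustrative choice; any <span class="math-container">$0<\eta<\eta_c$</span> works):</p>

```python
# eta/(eta_c - eta) = sum_{k>=1} (eta/eta_c)^k, a geometric series.
eta_c = 0.74                       # illustrative value only
for eta in (0.1, 0.3, 0.5):
    u = eta / eta_c
    series = sum(u**k for k in range(1, 200))
    assert abs(series - eta / (eta_c - eta)) < 1e-10
```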
|
3,187,793 | <p>We know that <span class="math-container">$221 = 17*13$</span>. So we can check if the system has roots to both of those equations separately, which it does:</p>
<p><span class="math-container">$x^{5} \equiv 2$</span> mod <span class="math-container">$13$</span> has the solution <span class="math-container">$6 + 13n$</span> and <span class="math-container">$x^{5} \equiv 2$</span> mod <span class="math-container">$17$</span> has the solution <span class="math-container">$15 + 17n$</span>. </p>
<p>I got these numbers from wolfram, I have no idea how to solve this problem WITHOUT a calculator. And even after finding these numbers. How would one obtain a solution modulo <span class="math-container">$221$</span>? I was thinking the Chinese Remainder Theorem but I am under the assumption that CRT only applies to problems with powers of <span class="math-container">$x$</span> which are <span class="math-container">$1$</span>.</p>
<p>Thanks.</p>
| dan_fulea | 550,003 | <p>We work all the time modulo <span class="math-container">$221$</span> from now on tacitly. Since <span class="math-container">$2$</span> is relatively prime to <span class="math-container">$221$</span>, a/the solution <span class="math-container">$x$</span> of <span class="math-container">$x^5=2$</span> (modulo <span class="math-container">$221$</span> - i write equivalences modulo <span class="math-container">$221$</span> as equalities from now on) is also relatively prime to <span class="math-container">$221$</span>. The Euler indicator function of <span class="math-container">$221$</span> is <span class="math-container">$\varphi(221)=\varphi(17\cdot13)=\varphi(17)\cdot\varphi(13)= 16\cdot 12=192$</span>.</p>
<p>The inverse of <span class="math-container">$5$</span> modulo <span class="math-container">$192$</span> is <span class="math-container">$77$</span>, explicitly
<span class="math-container">$5\cdot 77=385=2\cdot192+1$</span>. From <span class="math-container">$x^5=2$</span> we get then (implications in one direction)
<span class="math-container">$$
\begin{aligned}
x &= (x^{192})^2\cdot x
&&\text{ Euler, since $x^{192}=x^{\varphi(221)}=1$ modulo $221$}
\\
&=x^{385}=(x^5)^{77}=2^{77}=(2^8)^9\cdot 2^5
&&\text{ ($2^8$ is "close" to $221$)}
\\
&=256^9\cdot 32 = 35^9\cdot 32=(35^3)^3\cdot 32
\\
&=42875^3\cdot 32
=1^3\cdot 32=\color{blue}{\boxed{32}}\ .
\end{aligned}
$$</span>
<em>Check</em> (for the other direction):
<span class="math-container">$$
32^5
=
(2^5)^5=2^{25}
=2^{24}\cdot 2
=(2^8)^3\cdot 2=35^3\cdot 2=42875\cdot 2=1\cdot2=2\ .
$$</span></p>
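<p>Every step of the computation can be confirmed with three-argument <code>pow</code>:</p>

```python
# The building blocks of the argument, all modulo 221:
assert pow(2, 8, 221) == 35        # 2^8 = 256 ≡ 35
assert pow(35, 3, 221) == 1        # 35^3 = 42875 = 194*221 + 1
assert (5 * 77) % 192 == 1         # 77 inverts 5 modulo phi(221) = 192
assert pow(2, 77, 221) == 32       # the claimed fifth root
assert pow(32, 5, 221) == 2        # and indeed 32^5 ≡ 2 (mod 221)
```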
|
2,408,141 | <p>Let $X\to Y$ so that $f(x)=x^2+4$ </p>
<p>$X=\{6,9,2,8,5\}$ and $Y=\{27,85,40,8,12,29,63,68,17\}$</p>
<p>a) state the domain of $f$</p>
<p>b) state the range of $f$</p>
<p>I have calculated that the domain of $f(x)$ is all real numbers and that the range of $f(x)$ is $f\ge4$.</p>
<p>Therefore I thought that the correct answers were:</p>
<p>a) $\{6,9,8,5\}$</p>
<p>b) $\{27,85,40,8,12,29,63,68,17\}$</p>
<p>But this was wrong. Could anyone help me? Many thanks in advance!</p>
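<p>(For a quick sanity check of the image of the finite set $X$ under $f$, a sketch:)</p>

```python
# Apply f(x) = x^2 + 4 to each element of the listed domain X.
X = {6, 9, 2, 8, 5}
f = lambda x: x**2 + 4
image = {f(x) for x in X}
assert image == {40, 85, 8, 68, 29}

Y = {27, 85, 40, 8, 12, 29, 63, 68, 17}
assert image <= Y   # every value lands in Y, so f : X -> Y is well defined
```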
| Callus - Reinstate Monica | 94,624 | <p>I think I understand what you're missing. And I think this is what Eric explained, too, in some very general way, but I'm going to be more hand-wavy. To be more precise, I'd need to know what definition you're working with for a field extension generated by an element $e$, or more generally by a set $S$. </p>
<p>I think what's bothering you though is the following. let $E$ be an extension of $F$ and let $S\subset E$ be a subset, then there is an extension field $F(S)$ which is defined to be the smallest subfield of $E$ that contains $S$. To me, that's what $F(S)$ means. </p>
<p>Let $F\left< S\right>$ be the set of rational functions using symbols from $F$ and $S$. Clearly this must be contained in $F(S)$. Therefore if it is a field, it must be equal to $F(S)$, since it contains $F$ and $S$. But showing it is a field is just a rote exercise in verifying the axioms of a field: It has inverses, it has a $0$ and $1$, it is closed under addition and multiplication, etc. </p>
<p>Therefore $F(S)$ is precisely the set of rational expressions using symbols from $F$ and $S$. </p>
|
3,621,836 | <p>Let's say I have a large determinant with scalar elements:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix}$$</span></p>
<p>Is it valid to factor out a term that's common to every element of the determinant? Is the following true:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix} = x \cdot \begin{vmatrix} a & b & c & d \\ e & f & g & h \\ i & g & k & l \\ m & n & o & p\end{vmatrix}$$</span></p>
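<p>A quick numeric experiment (a sketch with an arbitrary <span class="math-container">$3\times 3$</span> example) suggests the factor comes out once per <em>row</em>, i.e. <span class="math-container">$\det(xA)=x^n\det A$</span> for an <span class="math-container">$n\times n$</span> matrix, rather than a single factor of <span class="math-container">$x$</span>:</p>

```python
from fractions import Fraction

# Determinant by Laplace expansion along the first row (fine for tiny matrices).
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]     # arbitrary nonsingular example
x = Fraction(3)
xA = [[x * e for e in row] for row in A]
assert det(A) == -3
assert det(xA) == x ** len(A) * det(A)     # x^3 * det(A), not x * det(A)
```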
| s1mple | 697,936 | <p>Clearly <span class="math-container">$\omega^3=1$</span> and <span class="math-container">$1+\omega+\omega^2=0$</span>. So, the problem reduces to:</p>
<p><span class="math-container">$$(2-\omega)(2-\omega^2)(2-\omega)(2-\omega^{2})=(2-\omega)^2(2-\omega^2)^2=(4+\omega^2-4\omega)(4+\omega^4-4\omega^2)$$</span>
<span class="math-container">$$=(4+\omega^2-4\omega)(4+\omega-4\omega^2)=16+4\omega-16\omega^2+4\omega^2+1-4\omega-16\omega-4\omega^2+16$$</span>
<span class="math-container">$$=33-16(\omega+\omega^2)=\boxed{49}$$</span></p>
<p>since <span class="math-container">$\omega+\omega^2=-1$</span>.</p>
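<p>A numeric confirmation with <span class="math-container">$\omega = e^{2\pi i/3}$</span>:</p>

```python
import cmath

# omega is a primitive cube root of unity; the product should equal 49.
w = cmath.exp(2j * cmath.pi / 3)
prod = (2 - w) * (2 - w**2) * (2 - w**19) * (2 - w**23)
assert abs(prod - 49) < 1e-9
```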
|
3,621,836 | <p>Let's say I have a large determinant with scalar elements:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix}$$</span></p>
<p>Is it valid to factor out a term that's common to every element of the determinant? Is the following true:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix} = x \cdot \begin{vmatrix} a & b & c & d \\ e & f & g & h \\ i & g & k & l \\ m & n & o & p\end{vmatrix}$$</span></p>
| Matteo | 686,644 | <p>Using modular arithmetic and the fact that <span class="math-container">$\omega^3=1$</span>, we can write:
<span class="math-container">$$(2-\omega)
\left(2-\omega^2\right)
\left(2-\omega^{19}\right)
\left(2-\omega^{23}\right) =(5-2\omega^2-2\omega)\cdot(5-2\omega^2-2\omega)$$</span>
Now, rearranging this expression and using the fact that <span class="math-container">$1+\omega+\omega^2=0$</span>, we arrive at:
<span class="math-container">$$(2-\omega)
\left(2-\omega^2\right)
\left(2-\omega^{19}\right)
\left(2-\omega^{23}\right) = 49$$</span></p>
|
3,238,316 | <p>I think there is a glitch in the <a href="https://en.wikipedia.org/w/index.php?title=Jensen%27s_inequality&oldid=887772941#Proof_1_(finite_form)" rel="nofollow noreferrer">proof by induction</a>. The proof is still valid, but they add an unnecessary assumption:</p>
<p>In the induction step, they choose one of the <span class="math-container">$\lambda_i$</span>'s that is strictly positive (I guess by that, they mean nonzero). Since the sum of the <span class="math-container">$\lambda_i$</span>'s is 1, there must be at least one that is nonzero, that part is valid. And the argument that follows is also perfectly valid.</p>
<p>However, why do we need to pick a nonzero <span class="math-container">$\lambda_i$</span>? Wouldn't the argument work regardless? If <span class="math-container">$\lambda_1 = 0$</span>, the inequality still holds. In other words, the inequality holds regardless of the value of <span class="math-container">$\lambda_1$</span>:
<span class="math-container">$$ \varphi\left( \lambda_1 x_1 + (1 - \lambda_1) \sum_{i=2}^{n+1}\frac{\lambda_i}{1-\lambda_1} x_i \right) ~\leqslant~ \lambda_1 \varphi(x_1) + (1-\lambda_1)\varphi\left( \sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i \right)$$</span> </p>
<p>because <span class="math-container">$\varphi$</span> is convex, period. No requirement on the coefficient being nonzero: according to <a href="https://en.wikipedia.org/wiki/Convex_function#Definition" rel="nofollow noreferrer">Wikipedia's definition of a convex function</a>, it is <span class="math-container">$\forall x_1, x_2 \in X$</span>, <span class="math-container">$\forall \lambda \in [0,1] ~ \cdots$</span></p>
<p>Since the <span class="math-container">$\lambda_i$</span>'s are all nonnegative and their sum is 1, then every <span class="math-container">$\lambda_i \in [0,1]$</span>, so the definition of convexity applies.</p>
<p>Am I missing something?</p>
| Botond | 281,471 | <p>Of course, <span class="math-container">$\lambda_1=0$</span> is valid, but not so interesting:
<span class="math-container">$$ \varphi\left(\sum_{i=2}^{n+1}\lambda_i x_i \right) ~\leqslant~ \varphi\left( \sum_{i=2}^{n+1} \lambda_i x_i \right)$$</span> </p>
|
3,420,960 | <p>The problem is as follows:</p>
<blockquote>
<p>A protein sample spins in the counterclockwise direction in a
centrifuge seen from the top as shown in the diagram from below. The
radius of the centrifuge is <span class="math-container">$R=2\,m$</span>. The magnitude of its speed
changes. At a certain instant the acceleration vector is as shown in
the figure. Find the speed in <span class="math-container">$\frac{m}{s}$</span> and state the type of its
motion in the given instant. A for acceleration if the speed increases
or D for deceleration if the speed decreases.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/7ENKn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7ENKn.png" alt="Sketch of the problem"></a></p>
<p>The alternatives given in my book are:</p>
<p><span class="math-container">$\begin{array}{ll}
1.~10\frac{m}{s};\,A\\
2.~10\frac{m}{s};\,D\\
3.~5\frac{m}{s};\,A\\
4.~5\frac{m}{s};\,D\\
5.~10\frac{m}{s};\,\textrm{uniform motion}\\
\end{array}$</span></p>
<p>This problem I'm particularly lost at. The acceleration shown in the graph: what is it? Is it perhaps the total acceleration, in other words the resultant of the centripetal and the tangential accelerations?</p>
<p>If so, then that would mean that:</p>
<p><span class="math-container">$a_c=50\frac{m}{s^2}$</span></p>
<p>Then the tangential acceleration will be:</p>
<p><span class="math-container">$a_t=50\frac{m}{s^2}$</span></p>
<p>And because the angular acceleration is related to the tangential acceleration through the radius as:</p>
<p><span class="math-container">$a_t=\alpha\times r$</span></p>
<p>Then:</p>
<p><span class="math-container">$\alpha=\frac{a_t}{r}=\frac{50}{2}=25\frac{rad}{s^2}$</span></p>
<p>But that's how far I went in my analysis. What else can I relate to find the asked speed?</p>
<p>The only equations which I recall are:</p>
<p><span class="math-container">$\omega_{f}=\omega_{0}+\alpha t$</span></p>
<p>Can somebody help me here? What exactly should be the right path to get the answer?</p>
| IamWill | 389,232 | <p>You have to decompose the acceleration vector into two components, one which is tangential to the orbit and the other which points towards the center of the circle. Note that the tangential component is parallel to the velocity <span class="math-container">$\vec{v}$</span> but points in the opposite direction, so the speed is decreasing (deceleration). The component pointing towards the center is the centripetal acceleration given by <span class="math-container">$$a_{cp} = \frac{v^{2}}{R} $$</span>. Remember that each component of the acceleration can be obtained from the acceleration vector in the figure using Pythagoras' theorem.</p>
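<p>Assuming, as the figure seems to indicate, that the centripetal component of the acceleration is <span class="math-container">$50\frac{m}{s^2}$</span> (this number is read off the diagram and is an assumption here), the speed follows from <span class="math-container">$a_{cp}=v^2/R$</span>. A minimal sketch:</p>

```python
import math

R = 2.0      # radius of the centrifuge, m
a_cp = 50.0  # centripetal component assumed from the figure, m/s^2

# a_cp = v^2 / R  =>  v = sqrt(a_cp * R)
v = math.sqrt(a_cp * R)
print(v)  # 10.0, so the speed is 10 m/s
```

<p>Since the tangential component opposes the velocity, the motion is decelerating, which points to the alternative "10 m/s; D".</p>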
|
2,738,138 | <p>I know it's probably a stupid question, but I'm confused. I have a set $\{x\in\mathbb R : \frac{1}{x} \le 1\}$ that I want to represent as interval/s.</p>
<p>Thinking about it logically, I know that the set is $x\in\left]-\infty, 0\right[ \cup \left[1, +\infty\right[$.</p>
<p>However, when trying to solve the inequality, I can't seem to get the answer. What am I doing wrong?</p>
<p>I take $\frac{1}x \le 1$, and I split it into 2 cases:</p>
<ol>
<li>if $x > 0$, then $x \ge 1$,</li>
<li>if $x < 0$, then $x \le 1$,
which is every element of $\mathbb R$. Where am I going wrong? Thanks.</li>
</ol>
| Community | -1 | <p>In your second analysis you must intersect the conditions within each case.</p>
<p>In 1. you got $x>0$ and $x\geq 1$. The conjunction of these two is $x\geq 1$.</p>
<p>In 2. you got $x<0$ and $x\leq 1$. The conjunction of these two is $x<0$. The idea is that the solution $x\leq 1$ must be taken into account together with the assumptions that were made to reach it, $x<0$.</p>
<hr>
<p>It is more common to forget the assumptions when applying nonequivalent transformations, like multiplying by $x$ in this inequality. When applying equivalent transformations, the need to intersect stays with you until the end.</p>
<p>$$\frac{1}{x}\leq1\Leftrightarrow 0\leq 1-\frac{1}{x}=\frac{x-1}{x}$$</p>
<p>You see that $\frac{x-1}{x}$ is non-negative, when either both factors are non-negative, or both are non-positive.</p>
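<p>The resulting solution set can be sanity-checked numerically (a quick sketch, not part of the original argument):</p>

```python
def in_claimed_set(x):
    # claimed solution set: ]-inf, 0[ union [1, +inf[
    return x < 0 or x >= 1

# compare with the original inequality 1/x <= 1 on a sample grid (x = 0 excluded)
samples = [k / 100 for k in range(-400, 401) if k != 0]
assert all((1 / x <= 1) == in_claimed_set(x) for x in samples)
print("claimed set matches the inequality on all samples")
```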
|
77,290 | <p>How would you determine all integers $m$ such that the following is true? </p>
<p>$$\frac{1}{m}=\frac{1}{\lfloor 2x \rfloor}+\frac{1}{\lfloor 5x \rfloor} .$$</p>
<p>Note that $\lfloor \cdot \rfloor$ means the greatest integer function. Also, $x$ must be a positive real number.</p>
| Brian M. Scott | 12,042 | <p>You can solve it by cases. Let $x=n+r$, where $n=\lfloor x\rfloor$ is an integer and $r=x-\lfloor x\rfloor$ is the fractional part. Then $\lfloor 2x\rfloor = \lfloor 2n+2r\rfloor = 2n+\lfloor 2r\rfloor$ and $\lfloor 5x\rfloor = \lfloor 5n+5r\rfloor = 5n+\lfloor 5r\rfloor$, and the original equation can be written as $$\frac1m = \frac1{2n+\lfloor 2r\rfloor}+\frac1{5n+\lfloor 5r\rfloor}.$$</p>
<p>You know that $0\le r<1$, and you can get exact values for $\lfloor 2r\rfloor$ and $\lfloor 5r\rfloor$ for $r$ in different subintervals of $[0,1)$.</p>
<ul>
<li>If $0\le r<\frac15$, $\lfloor 2r\rfloor=\lfloor 5r\rfloor=0$. </li>
<li>If $\frac15\le r <\frac25$, $\lfloor 5r\rfloor = 1$ and $\lfloor 2r\rfloor = 0$. </li>
<li>If $\frac25\le r<\frac12$, $\lfloor 5r\rfloor = 2$ and $\lfloor 2r\rfloor = 0$. </li>
<li>If $\frac12\le r<\frac35$, $\lfloor 5r\rfloor = 2$ and $\lfloor 2r\rfloor = 1$.</li>
</ul>
<p>And there are two more intervals, which I’ll leave to you.</p>
<p>If $0\le r<\frac15$, the equation becomes simply $$\frac1m=\frac1{2n}+\frac1{5n}=\frac{7n}{10n^2}=\frac7{10n};$$ in order for $m$ to be an integer, $n$ must be a multiple of $7$, say $n=7k$, we get $m=10k$. In other words, this case gives us every positive multiple of $10$ as a solution.</p>
<p>If $\frac15\le r <\frac25$, the equation becomes $$\frac1m=\frac1{2n}+\frac1{5n+1}=\frac{7n+1}{10n^2+2n},$$ or $$m=\frac{10n^2+2n}{7n+1}.$$ Divide this out to get $$m = \frac{10}7n+\frac4{49}-\frac4{49(7n+1)} = \frac{70n+4}{49}-\frac4{49(7n+1)},$$ and multiply through by $49$ to get $$49m=70n+4-\frac4{7n+1}$$ or, with a little rearrangement, $$49m-70n-4=\frac4{7n+1}.$$ But this is impossible if $n$ and $m$ are integers, because the lefthand side is an integer, and the righthand side isn’t. Thus, there are no solutions in this case.</p>
<p>If $\frac25\le r<\frac12$, the equation becomes $$\frac1m=\frac1{2n}+\frac1{5n+2}=\frac{7n+2}{10n^2+4n},$$ so $$m=\frac{10n^2+4n}{7n+2}=\frac{10n}7+\frac8{49}-\frac{16}{49(7n+2)},$$ and $$49m=70n+8-\frac{16}{7n+2}.$$ This time there is one value of $n$ that makes the righthand side an integer, namely, $n=2$. Substituting $n=2$ into the last equation, we get $49m=140+8-1=147$, and $m=3$; this is the only solution in this case. (Note that had the righthand side not been a multiple of $49$ when $n=2$, there would have been no solutions in this case.)</p>
<p>You can continue in this fashion through the remaining three cases. There may be an easier approach, but this one is at least systematic and workable.</p>
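<p>The case analysis can be cross-checked by brute force with exact rational arithmetic. A sketch (scanning a grid of tenths suffices, because $\lfloor 2x\rfloor$ and $\lfloor 5x\rfloor$ can only change value at multiples of $\frac1{10}$):</p>

```python
import math
from fractions import Fraction

found = set()
for tenths in range(1, 300):                  # x = 0.1, 0.2, ..., 29.9
    x = Fraction(tenths, 10)
    a, b = math.floor(2 * x), math.floor(5 * x)
    if a == 0 or b == 0:
        continue                              # 1/m undefined if a floor is zero
    s = Fraction(1, a) + Fraction(1, b)
    if s.numerator == 1:                      # s == 1/m for a positive integer m
        found.add(s.denominator)

print(sorted(found))  # [3, 10, 20, 30, 40]: m = 3 plus the multiples of 10 with 7C < 30
```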
|
77,290 | <p>How would you determine all integers $m$ such that the following is true? </p>
<p>$$\frac{1}{m}=\frac{1}{\lfloor 2x \rfloor}+\frac{1}{\lfloor 5x \rfloor} .$$</p>
<p>Note that $\lfloor \cdot \rfloor$ means the greatest integer function. Also, $x$ must be a positive real number.</p>
| Yuval Filmus | 1,277 | <p>We will assume that $x,m > 0$. Our results show (unless there is some mistake in the case analysis) that the only solutions (up to a value of $x$) are given by
$$ \frac{1}{10C} = \frac{1}{14C} + \frac{1}{35C} $$
and the exceptional solution
$$ \frac{1}{3} = \frac{1}{\lfloor 2\cdot 2.4\rfloor} + \frac{1}{\lfloor 5\cdot 2.4\rfloor} = \frac{1}{4} + \frac{1}{12}. $$</p>
<p>As user9176 mentions, intuitively it is obvious that we can assume that $x$ is of the form $A + B/10$, where $B \in \{0,\cdots,9\}$. We then have
$$\lfloor 2x \rfloor = 2A + \alpha, \; \lfloor 5x \rfloor = 5A + \beta, $$
where $\alpha,\beta$ depend on $B$ and take on the six values
$$(0,0);(0,1);(0,2);(1,2);(1,3);(1,4). $$
We can thus rewrite the equation as
$$ (2A + \alpha) (5A + \beta) = m(7A + \alpha + \beta), $$
from which we can form a quadratic
$$ 10A^2 - (7m - 5\alpha - 2\beta)A - ((\alpha+\beta)m - \alpha\beta). $$
The discriminant of the quadratic is
$$ (7m - 5\alpha - 2\beta)^2 + 40 ((\alpha+\beta)m - \alpha\beta). $$
For the equation to have integer solutions, we need the discriminant to be a perfect square. Let's see what that implies for each of the six pairs $(\alpha,\beta)$.</p>
<p>The first case, $(0,0)$, doesn't actually require the theory. The equation simply reads $10A^2 = 7mA$, or $10A = 7m$, which implies that $7|A$. On the other hand, if $A = 7C$, then we can recover $m = 10C$. We get the parametric solution $$ m = 10C, x \in [7C,7C+0.2). $$</p>
<p>Next, consider $(0,1)$. The discriminant is $$ (7m-2)^2 + 40m. $$ The squares following $(7m-2)^2$ are $$(7m-2)^2 + \{14m - 3, 28m - 4, 42m - 3, 56m\}.$$ Comparing coefficients, we see that the discriminant cannot be a perfect square.</p>
<p>Next, consider $(0,2)$. The discriminant is $$ (7m-4)^2 + 80m. $$ The squares following $(7m-4)^2$ are $$(7m-4)^2 + \{14m-7, 28m-12, 42m-15, 56m-16, 70m-15, 84m-12, 98m-7\}.$$ Comparing coefficients, the only solution is $m = 3$. The quadratic now reads $$ 10A^2 - 17A - 6 = 0 = (10A + 3)(A - 2). $$ The only integral solution is $A = 2$, and we get the solution $$ m = 3, x \in [2.4,2.5).$$</p>
<p>Next, consider $(1,2)$. The discriminant is $$ (7m-9)^2 + 120m - 80. $$ The squares following $(7m-9)^2$ are $$\begin{align*} (7m-9)^2 + \{&14m-17,28m-32,42m-45,56m-56,70m-65,\\&84m-72,98m-77,112m-80,126m-81,140m-80,154m-77\}. \end{align*}$$ This time there are no integral solutions.</p>
<p>Next, consider $(1,3)$. The discriminant is $$ (7m-11)^2 + 160m - 120. $$ The squares following $(7m-11)^2$ are $$\begin{align*} (7m-11)^2 + \{&14m-21, 28m-40, 42m-57, 56m-72, 70m-85, \\ &84m-96, 98m-105, 112m-112, 126m-117, 140m-120, \\ &154m-121, 168m-120, 182m-117\}. \end{align*}$$ Again there are no integral solutions.</p>
<p>Finally, consider $(1,4)$. The discriminant is $$ (7m-13)^2 + 200m - 160. $$ The squares following $(7m-13)^2$ are $$\begin{align*} (7m-13)^2 + \{& 14m-25, 28m-48, 42m-69, 56m-88, 70m-105, \\& 84m-120, 98m-133, 112m-144, 126m-153, 140m-160, \\& 154m-165, 168m-168, 182m-169, 196m-168, 210m-165, \\& 224m-160, 238m-153\}. \end{align*}$$ Once again there are no integral solutions.</p>
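<p>The "comparing coefficients" steps can be supported mechanically by scanning the discriminant for perfect squares (a finite scan, so a sanity check rather than a proof):</p>

```python
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

pairs = [(0, 0), (0, 1), (0, 2), (1, 2), (1, 3), (1, 4)]
square_ms = {
    (a, b): [m for m in range(1, 2000)
             if is_square((7 * m - 5 * a - 2 * b) ** 2 + 40 * ((a + b) * m - a * b))]
    for a, b in pairs
}

# (0,0): discriminant is (7m)^2, a square for every m -- the parametric family
# (0,2): only m = 3 -- the exceptional solution; the other four cases give nothing
```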
|
2,406,049 | <p>I would appreciate help understanding what $\aleph_1$ is according to this definition:</p>
<blockquote>
<blockquote>
<p>If $\alpha$ is an ordinal, then $\aleph_{\alpha}$ is the unique infinite cardinal such that:
$\{\kappa:\kappa\text{ is an infinite cardinal and }\kappa\lt\aleph_{\alpha}\}$
is isomorphic to $\alpha$ as a well-ordered set.</p>
</blockquote>
</blockquote>
<p>My question specifically is:</p>
<p>With $\alpha=1$ an ordinal ($=\{0\}$ according to von Neumann), what would the set $\{\kappa:\kappa\text{ is an infinite cardinal and }\kappa\lt\aleph_1\}$ look like?</p>
<p><strong>EDIT</strong> I think I should emphasize that the aspect that especially confuses me is "isomorphic to $\alpha=1$."</p>
<p>Thanks </p>
| Asaf Karagila | 622 | <p>Note that $1$ is just $\{0\}$. So a well-ordered set is isomorphic to $1$ if and only if it has exactly one element.</p>
<p>In the case of $\aleph_1$, it is the unique transfinite cardinal $\kappa$ which has exactly one transfinite cardinal smaller than it. In other words, it would be exactly the smallest cardinal which is larger than $\aleph_0$.</p>
|
87,740 | <p>There appear to be questions perhaps tangentially related to this that have been asked already. If so a reference and a close would be heartily appreciated.</p>
<p>Given some category $\mathcal{C}$ with the modifiers listed in the title of the question, and an object $A$ in that category, how does one define the "free monoid" on that object? Is this related to being an algebra over some monad? </p>
<p>Thanks!</p>
| Yemon Choi | 763 | <p>The <strong>2nd edition</strong> of Mac Lane's <em>Categories for the Working Mathematician</em> has a description/construction of the free monoid on a given object in a monoidal category satisfying some side conditions, see Section 7.3. I am not sure off the top of my head if those conditions are met for every symmetric closed monoidal category, but they certainly hold for Set, K-Mod, Ban${}_1$ and other familiar examples.</p>
<p>(I don't think you can get free monoidal objects in Ban, but I have never written down exact details, and suspect that's not really an example of direct interest to you.)</p>
|
87,740 | <p>There appear to be questions perhaps tangentially related to this that have been asked already. If so a reference and a close would be heartily appreciated.</p>
<p>Given some category $\mathcal{C}$ with the modifiers listed in the title of the question, and an object $A$ in that category, how does one define the "free monoid" on that object? Is this related to being an algebra over some monad? </p>
<p>Thanks!</p>
| Todd Trimble | 2,926 | <p>Very simply, if a closed symmetric monoidal category has countable coproducts, and if monoid means monoid with respect to the monoidal product, then the free monoid on an object $A$ can be constructed as the "geometric series" </p>
<p>$$F(A) = \sum_{n \geq 0} A^{\otimes n}.$$ </p>
<p>The key fact needed to prove this is that $\otimes$ distributes over countable coproducts, and this is guaranteed by the closedness (indeed, $X \otimes -$ preserves arbitrary colimits). The proof that this is correct must be in thousands of places; see for instance Categories for the Working Mathematician (2nd edition), page 172. </p>
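<p>In the category of sets (cartesian-monoidal, with coproduct the disjoint union), this geometric series is exactly the set of finite words on $A$, and the universal property says any map $A \to M$ into a monoid extends uniquely to a monoid homomorphism by folding. A small Python illustration (the names are ad hoc, not a library API):</p>

```python
from functools import reduce

# Free monoid on A: tuples over A, with () as unit and + (concatenation) as product.

def extend(f, unit, op):
    """The unique monoid homomorphism F(A) -> (M, unit, op) extending f : A -> M."""
    return lambda word: reduce(op, (f(a) for a in word), unit)

# Example: extend a |-> 10**a into the multiplicative monoid of integers.
h = extend(lambda a: 10 ** a, 1, lambda x, y: x * y)

assert h(()) == 1                               # unit maps to unit
assert h((1, 2) + (3,)) == h((1, 2)) * h((3,))  # multiplicativity
```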
|
1,393,955 | <p>I have the following evolution equations related to mean curvature flow, with the induced metric $g=\{g_{ij}\}$, measure $d\mu$ and second fundamental form $A=\{h_{ij}\}$:</p>
<p>1)$\frac{\partial}{\partial t}g_{ij}=-2Hh_{ij}$</p>
<p>2)$\frac{\partial}{\partial t}d\mu=-H^2d\mu$</p>
<p>3)$\frac{\partial}{\partial t}h_{ij}=\Delta h_{ij}-2Hh_{il}h^l_j+|A|^2h_{ij}$</p>
<p>Now we consider the Weingarten map $W:T_pM\to T_pM$ associated with $A$ and $g$, given by the matrix $\{h^i_j\}=\{g^{il}h_{lj}\}$, and let $P$ be any invariant symmetric homogeneous polynomial.</p>
<p>Then I wish to prove the result.</p>
<p>If $W=\{h^i_j\}$ is the Weingarten map and $P(W)$ is an invariant polynomial of degree $\alpha$, i.e. $P(\rho W)=\rho^{\alpha}P(W)$, then</p>
<p>1) $\frac{\partial}{\partial t}h^i_j=\Delta h_j^i+|A|^2h_j^i$</p>
<p>2)$\frac{\partial}{\partial t}P=\Delta P- \frac{\partial^2 P}{\partial h_{ij}\partial h_{pq}}\nabla_lh_{ij}\nabla_lh_{pq}+\alpha|A|^2P$</p>
<p>I'm able to prove the first part as follows,</p>
<p>$\frac{\partial}{\partial t}h^i_j=\frac{\partial}{\partial t}(g^{ik}h_{jk})
=\frac{\partial}{\partial t}(g^{ik})h_{jk}+\frac{\partial}{\partial t}(h_{jk})g^{ik}$</p>
<p>$=2Hg^{is}h_{sl}g^{lk}h_{jk}+(\Delta h_{jk}-2Hh_{jl}h^l_k+|A|^2h_{jk})g^{ik}$</p>
<p>$=\Delta h_j^i+|A|^2h_j^i$.</p>
<p>To work out $\frac{\partial}{\partial t}g^{ij}$ you just need to differentiate the identity $g^{ik}g_{kj}=\delta^i_j$</p>
<p>I think I'm missing a simple useful result to get part 2) out?</p>
| Alex | 78,161 | <p>I've found this to be the best way to do it. I won't post a full solution but a rough sketch,</p>
<p>$\partial_tP=\sum(h\cdots h)_t$</p>
<p>$=\sum\sum(h_t h\cdots h)$</p>
<p>$=\sum\sum((\Delta h) h\cdots h)+\sum\sum(|A|^2h\cdots h)$</p>
<p>$=\sum\sum((\Delta h)h\cdots h)+\alpha|A|^2P$</p>
<p>Now $\Delta P=g^{ij}(\sum h\cdots h)_{ij}$</p>
<p>$=g^{ij}(\sum\sum h_{ij} h\cdots h +\sum\sum h_ih_jh\cdots h)$</p>
<p>$=\sum\sum(\Delta h)h\cdots h+\frac{\partial^2 P}{\partial h \partial h}\nabla h\nabla h$.</p>
<p>Putting this all together gives the required result.</p>
|
2,536,206 | <p>Let $M$ be a smooth manifold of dimension $n$, and let $T(M)$ be the tangent bundle of $M$. Let $F$ be a smooth vector field, which is to say a smooth section of the canonical map $T(M) \rightarrow M$. If I understand the manifold structure on $T(M)$ correctly, smoothness means that for any chart $(U,\phi)$ of $M$, we can use $\phi$ to identify $T_p(M)$ with $\mathbb{R}^n$ for all $p \in U$, and under these simultaneous identifications, $F$ becomes a map $\phi(U) \rightarrow \mathbb{R}^n$ which is smooth.</p>
<p>Let $p_0 \in M$ and $t_0 \in \mathbb{R}$. Wikipedia defines an <em>integral curve</em> for the vector field $F$, passing through $p_0$ at time $t_0$, to be an open neighborhood $J$ of $t_0$, together with a smooth morphism $\alpha: J \rightarrow M$, such that $\alpha(t_0) = p_0$ and</p>
<p>$$\alpha'(t) = F(\alpha(t))$$</p>
<p>for all $t \in J$. I am confused on what this equality is saying. First, I do not understand what $\alpha'$ means as a map from $J$ to the manifold $M$. Maybe a chart $(U,\phi)$ containing the image of $J$ must be chosen, and then we can talk about the derivative of the composition $\phi \circ \alpha: J \rightarrow \mathbb{R}^n$. Even so, that derivative is a priori a collection of linear maps $\mathbb{R} \rightarrow \mathbb{R}^n$, so for each $t \in J$, $\alpha'(t)$ can be thought of as a linear map $\mathbb{R} \rightarrow \mathbb{R}^n$. On the other hand, $F(\alpha(t))$ identifies as an element of $\mathbb{R}^n$. I don't see in what sense these things are supposed to be equal. Are we also using the identification $\textrm{Hom}_{\mathbb{R}}(\mathbb{R},V) = V$ for any vector space $V$? </p>
| Mastrem | 253,433 | <p>Let $m,n\ge 3$ be given and suppose that $m\neq n$. Without loss of generality, we can say that $n>m$.</p>
<p>Assume there are primes between $m$ and $n$. Let $p$ be the largest prime number with $n>p\ge m$. It is clear that $p\not\in P_m$. If $p\not\in P_n$, we must have $p\mid n$, so $n\ge 2p$. However, Bertrand's postulate tells us there is at least one prime between $p$ and $2p$, contradicting the definition of $p$. We conclude that $p\in P_n$ and $P_n\neq P_m$.</p>
<p>Now, say there are no primes between $n$ and $m$. Now, we have $P_n=P_m$ if and only if $rad(n)=rad(m)$. Say $rad(n)=r$. So now we have $k,l\in\mathbb{N}$ with $n=kr$ and $m=lr$. This is the hard part. (In fact it's very hard, as shown in an answer to the repost on Mathoverflow)</p>
|
673,946 | <p>I have the function $f(x)=\frac {1}{2} \mathbf x^T Q \mathbf x$. </p>
<p>I want to use the steepest descent algorithm where $Q$ is the diagonal matrix $\begin{bmatrix}1 & 0\\0 & 20\end{bmatrix}$ and $\mathbf x = \begin{bmatrix}0.7\\-0.2\end{bmatrix}$.</p>
<p>I want to implement the ideal line search algorithm: for a starting $\mathbf x$ and direction $\mathbf d$ choose $\alpha > 0$ so that $\mathbf d ^T\nabla f (\mathbf x + \alpha \mathbf d)=0$. </p>
<p>I have the hint that I can find $\alpha$ by substituting the formula for $\nabla f(\mathbf z)$ and then solving for $\alpha$. </p>
<p>I am to carry out 50 steps of the steepest descent method. </p>
<p>Is this something that I just need to Matlab for? I would appreciate any guidance of what I should do! </p>
| Royi | 33 | <p>You can solve it by noting that the steepest descent direction is $ d = -\nabla f \left( x \right) = -Q x $.</p>
<p>Solving your equation $ {d}^{T} \nabla f \left( x + \alpha d \right) = 0 $ for $ \alpha $ yields:</p>
<p>$$ \alpha = \frac{ {x}^{T} {Q}^{T} Q x }{ {x}^{T} {Q}^{T} Q Q x } $$</p>
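<p>A minimal sketch of the requested experiment in Python (plain lists instead of Matlab; with $d=-Qx$ the exact line-search step simplifies to $\alpha = \frac{d^{T} d}{d^{T} Q d}$):</p>

```python
Q = [[1.0, 0.0], [0.0, 20.0]]
x = [0.7, -0.2]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def f(v):
    return 0.5 * dot(v, matvec(Q, v))

for _ in range(50):                 # 50 steps of steepest descent
    d = [-g for g in matvec(Q, x)]  # d = -grad f(x) = -Qx
    if dot(d, d) == 0.0:
        break
    alpha = dot(d, d) / dot(d, matvec(Q, d))  # ideal (exact) line search
    x = [x[0] + alpha * d[0], x[1] + alpha * d[1]]

print(f(x))  # tiny; the per-step contraction of f is at most ((20-1)/(20+1))^2 here
```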
|
3,001,860 | <p>I am looking for a <span class="math-container">$u(x)$</span> and an <span class="math-container">$f$</span> that make the answer to <span class="math-container">$$u''''(x)-u''(x)+u(x)=f(x)$$</span> where <span class="math-container">$$u'(-1)=u'(1)=u(1)=u(-1)=0$$</span> reasonably short.</p>
<p>I have some numerical code I am trying to test but I can't seem to find a nice solution to this problem. </p>
| David G. Stork | 210,401 | <p>If <span class="math-container">$u(t) = (t-1)^2 (t+1)^2$</span>, then</p>
<p><span class="math-container">$f(t) = 29 - 14 t^2 + t^4$</span></p>
<p>Seems "simple enough."</p>
<p>Here's the solution for <span class="math-container">$f(t) = 0$</span>:</p>
<p><span class="math-container">$$-\frac{c_1 e^{-\frac{\sqrt{3} t}{2}} \sec \left(\frac{1}{2}\right) \left(\left(e^{2
\sqrt{3}}-1\right) \left(e^{\sqrt{3} (t+1)}-1\right) \sin
\left(\frac{1-t}{2}\right)-\sqrt{3} \left(e^{2 \sqrt{3}}-e^{\sqrt{3} (t+1)}\right)
\cos \left(\frac{1-t}{2}\right)+\sqrt{3} \left(e^{2 \sqrt{3}}-e^{\sqrt{3}
(t+1)}\right) \cos \left(\frac{t+3}{2}\right)\right)}{1+e^{2 \sqrt{3}} \left(2
\sqrt{3} \sin (1)-1\right)}$$</span></p>
<p>This is merely <em>one</em> example of a <span class="math-container">$u(t)$</span> and <span class="math-container">$f(t)$</span> that solves the equation. I suspect that even this is not "reasonably short" according to the OP. Given this solution, I suspect that finding other pairs will be rather difficult. </p>
<p>Of course other "solutions" are <span class="math-container">$u(t) = 0 = f(t)$</span>.</p>
|
28,348 | <p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem:
<a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p>
<p>I have an alternative proof that I know (from elsewhere) as follows.</p>
<hr />
<p><strong>Proof</strong>.</p>
<p><span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}</span></p>
<p>Then using this, I can instead prove:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}</span></p>
<hr />
<p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}</span></p>
<p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p>
<p><strong>Question:</strong></p>
<p>If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}</span>
Or are there sequences that invalidate that identity?</p>
<hr />
<p>(Edited to expand the last question)
given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)
\end{align}</span>
Or are there sequences that invalidate any of the above identities?</p>
<p>(Edited to repurpose this question).
Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
| Community | -1 | <p>Take <span class="math-container">$n=2^m$</span></p>
<p><span class="math-container">$$\lim\limits_{n \to \infty} \sqrt[n]{n} = \lim\limits_{m \to \infty} \sqrt[2^m]{2^m}= \lim\limits_{m \to \infty} 2^{\frac{m}{2^m}}=2^{\lim\limits_{m \to \infty} \frac{m}{2^m}}=2^0=1$$</span></p>
<p>This inverts the exponential approach and may be more intuitive. Strictly speaking it computes the limit along the subsequence <span class="math-container">$n=2^m$</span>; since <span class="math-container">$\sqrt[n]{n}$</span> is decreasing for <span class="math-container">$n\ge 3$</span> and bounded below by <span class="math-container">$1$</span>, the limit along this subsequence equals the full limit.</p>
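<p>The convergence itself is easy to watch numerically (this only illustrates the limit, it does not prove it):</p>

```python
import math

for n in [10, 10**3, 10**6, 10**9]:
    root = n ** (1 / n)                  # n-th root of n
    via_exp = math.exp(math.log(n) / n)  # the exp/log form from the question
    print(n, root, via_exp)
```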
|
3,477,532 | <p>In the cyclic quadrilateral <span class="math-container">$ABCD$</span>, <span class="math-container">$AB:BC:CD:DA=1:9:9:8$</span>, <span class="math-container">$AC$</span> intersects <span class="math-container">$BD$</span> at <span class="math-container">$P$</span>, what is <span class="math-container">$S_{\triangle PAB}:S_{\triangle PBC}:S_{\triangle PCD}:S_{\triangle PDA}$</span>?</p>
<p><a href="https://i.stack.imgur.com/RBUH0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RBUH0.png" alt="Diagram"></a></p>
<p>I have no idea how to start this question; how do I get the areas of the triangles when only given side lengths? Please help me out.</p>
| peterwhy | 89,922 | <p>Consider <span class="math-container">$\triangle PAB$</span> and <span class="math-container">$\triangle PDC$</span>. These pairs of inscribed angles are equal:</p>
<p><span class="math-container">$$\begin{align*}
\angle CAB &=\angle BDC\\
\angle ABD &= \angle DCA
\end{align*}$$</span></p>
<p>So <span class="math-container">$\triangle PAB \sim\triangle PDC$</span>. The ratio of their areas are square of the ratio of their corresponding sides:</p>
<p><span class="math-container">$$\frac{S_{\triangle PAB}}{S_{\triangle PDC}} = \left(\frac{AB}{DC}\right)^2 = \frac{1}{81}$$</span></p>
<hr>
<p>Similarly, for <span class="math-container">$\triangle PBC$</span> and <span class="math-container">$\triangle PAD$</span>, consider these pairs of inscribed angles:</p>
<p><span class="math-container">$$\begin{align*}
\angle DBC &= \angle CAD\\
\angle BCA &= \angle ADB\\
\triangle PBC &\sim \triangle PAD\\
\frac{S_{\triangle PBC}}{S_{\triangle PAD}} &= \left(\frac{BC}{AD}\right)^2\\
& = \frac{81}{64}
\end{align*}$$</span></p>
<hr>
<p>Between <span class="math-container">$\triangle PAB$</span> and <span class="math-container">$\triangle PAD$</span>, these inscribed angles are equal because <span class="math-container">$BC = CD$</span>:</p>
<p><span class="math-container">$$\angle CAB = \angle CAD$$</span></p>
<p>Then the ratio of their areas is</p>
<p><span class="math-container">$$\begin{align*}
\frac{S_{\triangle PAB}}{S_{\triangle PAD}}
&= \frac{\frac12 PA\cdot AB\sin\angle PAB}{\frac12PA\cdot AD\sin\angle PAD}\\
&= \frac{AB}{AD}\\
&= \frac 18\\
\end{align*}$$</span></p>
<hr>
<p><span class="math-container">$$S_{\triangle PAB}:S_{\triangle PBC}:S_{\triangle PCD}:S_{\triangle PDA}
= 8:81:648:64
$$</span></p>
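<p>The derived ratio can be confirmed numerically by actually building a cyclic quadrilateral with sides $1:9:9:8$ (a sketch: the circumradius is found by bisection on the total central angle):</p>

```python
import math

sides = [1.0, 9.0, 9.0, 8.0]                       # AB, BC, CD, DA

def angle_sum(R):
    # a chord of length s subtends a central angle of 2*asin(s/(2R))
    return sum(2 * math.asin(s / (2 * R)) for s in sides)

lo, hi = max(sides) / 2, 100.0                     # angle_sum is decreasing in R
for _ in range(200):                               # bisect angle_sum(R) = 2*pi
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if angle_sum(mid) > 2 * math.pi else (lo, mid)
R = (lo + hi) / 2

ang, pts = 0.0, []                                 # place A, B, C, D on the circle
for s in [0.0] + sides[:3]:
    ang += 2 * math.asin(s / (2 * R))
    pts.append((R * math.cos(ang), R * math.sin(ang)))
A, B, C, D = pts

def intersect(p1, p2, p3, p4):
    # intersection of lines p1p2 and p3p4 (standard determinant formula)
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

P = intersect(A, C, B, D)                          # diagonals AC and BD

def area(p, q, r):
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2

areas = [area(P, A, B), area(P, B, C), area(P, C, D), area(P, D, A)]
ratios = [a / areas[0] for a in areas]
print(ratios)  # approaches [1, 81/8, 81, 8], i.e. 8 : 81 : 648 : 64
```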
|
3,536,120 | <p>Textbook example says that:</p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = \frac{(1-p)^k p}{1-(1-p)}=(1-p)^k~~~~~k=1,2,3,\cdots$$</span></p>
<p>I'm told that a geometric series identity used to obtain the above result is:</p>
<p><span class="math-container">$$\sum \limits_{n=1}^{\infty} a r^{n-1} = \frac{a}{1-r}~~~~|r|<1$$</span></p>
<p>Now, I'm wondering how they used this identity to get the above result...</p>
<p>So I pull out the p term because it doesn't depend on i in the summation, which gives:</p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = p\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1} $$</span></p>
<p>which looks similar to <span class="math-container">$\sum \limits_{n=1}^{\infty} a r^{n-1} = \frac{a}{1-r}$</span> where <span class="math-container">$r=(1-p)$</span> and <span class="math-container">$a=1$</span>... but then i'm not really sure how to handle the lower limit of the sum...maybe:</p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = -p\sum \limits_{i=1}^{k}(1-p)^{i-1} +p\sum \limits_{i=1}^{\infty}(1-p)^{i-1} $$</span></p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = -p\sum \limits_{i=1}^{k}(1-p)^{i-1} +p\frac{1}{1-(1-p)}$$</span></p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = 1 -p\sum \limits_{i=1}^{k}(1-p)^{i-1} $$</span></p>
<p>???</p>
<p>not sure how they came up with:</p>
<p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = \frac{(1-p)^k p}{1-(1-p)}$$</span></p>
<p>Second option: is this geometric summation with a variable lower bound just something I can look up in a table somewhere?</p>
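<p>As a numerical aside (the concrete values $p=0.3$, $k=4$ are arbitrary): the missing step is the index shift $i = j + k$, which factors $(1-p)^k$ out of the tail; both closed forms agree with a truncated sum:</p>

```python
p, k = 0.3, 4

tail = sum((1 - p) ** (i - 1) * p for i in range(k + 1, 2000))      # truncated tail sum
closed = (1 - p) ** k                                               # textbook answer
partial = 1 - p * sum((1 - p) ** (i - 1) for i in range(1, k + 1))  # the form derived above

print(tail, closed, partial)  # all three agree to machine precision
```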
<hr>
<p>the above was taken out of the exercise:</p>
<p>Let X be a geometric r.v. with parameter p.</p>
<p>(a) show that <span class="math-container">$p_{X}(x) = P(X=x) = (1-p)^{x-1}p~~~~x=1,2,3,\cdots$</span></p>
<p>satisfies the equation: <span class="math-container">$\sum \limits_{k} P_{X}(x_k) = 1$</span></p>
<p>just in case you need the context of the question...</p>
| Henno Brandsma | 4,280 | <p>Of course the smallest <span class="math-container">$\sigma$</span>-algebra containing <span class="math-container">$\Bbb K$</span> contains <span class="math-container">$\tau$</span> (as all open sets are (countable) unions of base elements). This holds already in hereditarily Lindelöf general spaces and certainly in second countable ones. </p>
<p>So <span class="math-container">$\tau \subseteq \sigma(\Bbb K)$</span> and so by minimality <span class="math-container">$\sigma(\tau) \subseteq \sigma(\Bbb K)$</span>.</p>
|
2,240,616 | <p>Let $f:[0,1] \to \mathbb{R}$ be a function such that for every $a \in [0,1)$ and $b \in (0,1]$ the one-sided limits $$f(a^+)=\lim _{x\to a^+}f(x) \in \mathbb{R}$$ $$f(b^-)=\lim _{x \to b^-} f(x) \in \mathbb {R}$$ exist. </p>
<p>A) Show that $f$ is bounded. </p>
<p>B) Does $f$ necessarily achieve its maximum at some $x \in [0,1]$?</p>
<p>C) Suppose further that $f$ is continuous at $0$ and $1$, and that $f(0) f(1)<0$. Prove that there exists some point $p \in (0,1)$ such that $f(p^-)f(p^+) \leq 0$. </p>
<p>Intuitively, I can see why part A is true, but I am not sure how to prove this formally. For part B, I think the answer is no, but I haven't yet come up with a counterexample. My initial thoughts on part C are to somehow apply the intermediate value theorem, but I am not sure if this is the correct approach or not.</p>
| Hagen von Eitzen | 39,174 | <p>A) Assume $f$ is not bounded from above. Then there exists a sequence $(x_n)_{n\in\Bbb N}$ such that $f(x_n)\to+\infty$. This sequence has a limit point $x\in[0,1]$. Then a subsequence of $(x_n)$ converges to $x$, either from the right or from the left. But then $f(x_n)\to \infty$ contradicts the existence of $f(x+)$ or $f(x-)$.</p>
<p>B) Consider $f(x)=\begin{cases}x&x<1\\0&x=1\end{cases}$. The maximum is not achieved.</p>
|
2,240,616 | <p>Let $f:[0,1] \to \mathbb{R}$ be a function such that for every $a \in [0,1)$ and $b \in (0,1]$ the one-sided limits $$f(a^+)=\lim _{x\to a^+}f(x) \in \mathbb{R}$$ $$f(b^-)=\lim _{x \to b^-} f(x) \in \mathbb {R}$$ exist. </p>
<p>A) Show that $f$ is bounded. </p>
<p>B) Does $f$ necessarily achieve its maximum at some $x \in [0,1]$?</p>
<p>C) Suppose further that $f$ is continuous at $0$ and $1$, and that $f(0) f(1)<0$. Prove that there exists some point $p \in (0,1)$ such that $f(p^-)f(p^+) \leq 0$. </p>
<p>Intuitively, I can see why part A is true, but I am not sure how to prove this formally. For part B, I think the answer is no, but I haven't yet come up with a counterexample. My initial thoughts on part C are to somehow apply the intermediate value theorem, but I am not sure if this is the correct approach or not.</p>
| Paramanand Singh | 72,031 | <p>Let's extend $f$ to whole of $\mathbb{R}$ by letting $f(x) = f(0)$ for $x < 0$ and $f(x) = f(1)$ for $x > 1$. Then $f$ has left and right hand limits at each point of $[0, 1]$. Therefore for each point $x \in [0, 1]$ there is an open interval $I_{x}$ containing $x$ such that $f$ is bounded on $I_{x}$. By Heine Borel Principle a finite number of these intervals $I_{x}$ cover $[0, 1]$ and $f$ is bounded on each of these selected intervals and hence bounded on $[0, 1]$.</p>
<p>The counterexample by Hagen von Eitzen shows that $f$ is not guaranteed to attain a maximum value on $[0, 1]$.</p>
<p>For part (C) let us assume that no such $p$ exists. Then for every $p \in (0, 1)$ we have $f(p-)$ and $f(p+)$ with the same sign. Bisect the interval $[0, 1]$ into two equal parts and note that one of the intervals will be such that the limits of $f$ at its end points have different signs. Choose that interval and bisect it again and repeat the procedure to obtain a sequence of nested closed intervals such that the limits of $f$ at the end points of these intervals have different signs. By the nested interval principle there is a unique point $p$ which lies in all the intervals. Now the left and right limits of $f$ at $p$ are of the same sign and hence $f$ maintains a constant sign in some neighborhood of $p$. This neighborhood of $p$ also contains some intervals from the above sequence and hence $f$ must change sign somewhere in this neighborhood. This contradiction shows that there must be a $p$ with the desired properties. Note that the continuity of $f$ at the end points is not needed. What is needed is that $f(0+)f(1-) < 0$.</p>
<hr>
<p>The question shows how far we can carry the three properties of continuous functions (<a href="http://paramanands.blogspot.com/2011/06/continuous-functions-on-closed-interval-boundedness-property.html" rel="nofollow noreferrer">boundedness</a>, <a href="http://paramanands.blogspot.com/2011/06/continuous-functions-on-closed-interval-intermediate-value-theorem.html" rel="nofollow noreferrer">Intermediate Value Theorem</a>, <a href="http://paramanands.blogspot.com/2011/06/continuous-functions-on-closed-interval-boundedness-property.html" rel="nofollow noreferrer">Extreme Value Theorem</a>) on closed intervals if the condition for continuity is relaxed to existence of one sided limit at each point. The linked blog posts show that these properties of continuous functions are proved using <a href="https://math.stackexchange.com/a/1787254/72031">completeness of real numbers</a> and the same holds for the parts (A) and (C) of this question. Thus for example (A) can also be proved via Nested Interval Principle and (C) can be proved via Heine Borel Principle (you should give it a try).</p>
|
2,663,370 | <p>Consider the following integral:</p>
<p>$$
I(k):= \int_k^{+\infty} \frac{e^{-1/x} \log(x)}{x^2} dx\\
=-\int_0^{1/k} e^{-t} \log t \, dt,
$$</p>
<p>where $k>0$. It is known that $\lim_{k \to 0} I(k)=\gamma$, where $\gamma$ denotes the Euler-Mascheroni constant. Henceforth, it must be that $\lim_{k\to +\infty}I(k)=0$. At which rate does $I(k)$ decays to $0$ as $k \to +\infty$? I guess there's some known result about that. </p>
| Micah | 30,836 | <p>If $k$ is large, $t$ is near zero, so $I(k)$ is roughly (to $0$th order) equal to
$$-\int_0^{1/k} \log t \,dt = -\frac{1}{k}\left(\log\left(\frac{1}{k}\right) - 1\right)=\frac{1}{k}(\log k + 1)$$</p>
<p>That is, $I(k)$ is $\Theta\left(\frac{\log k}{k}\right)$.</p>
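<p>A quick numerical sanity check (a stdlib-only Python sketch of my own; the midpoint rule and the choice $k=100$ are not part of the answer) shows the integral agrees with the leading-order estimate $(\log k+1)/k$ to within about half a percent:</p>

```python
import math

def I(k, n=200_000):
    # I(k) = -integral_0^{1/k} e^{-t} log(t) dt, approximated with a
    # composite midpoint rule; the midpoints never touch the integrable
    # logarithmic singularity at t = 0.
    h = (1.0 / k) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += -math.exp(-t) * math.log(t)
    return total * h

k = 100.0
approx = (math.log(k) + 1.0) / k   # leading-order estimate from the answer
print(I(k), approx)                # ~0.0558 vs ~0.0561
```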
|
405,830 | <p>Are there any cases of finite subgroups of <span class="math-container">$O_n(\mathbb{Q})$</span> <s>not contained in</s> <i>not isomorphic to any subgroup of</i> <span class="math-container">$O_n(\mathbb{Z})$</span>?</p>
| YCor | 14,094 | <p>Let <span class="math-container">$n=m^2$</span> be a square. Then the vector <span class="math-container">$(1,\dots,1)\in\mathbf{Q}^n$</span> has norm <span class="math-container">$m$</span>, as does <span class="math-container">$(0,\dots,0,m)$</span>. Hence by Witt's theorem there exists an element <span class="math-container">$u$</span> of <span class="math-container">$\mathrm{O}_n(\mathbf{Q})$</span> mapping <span class="math-container">$(0,\dots,0,m)$</span> to <span class="math-container">$(1,\dots,1)$</span>. Hence <span class="math-container">$u$</span> maps orthogonals to orthogonals: it maps the hyperplane <span class="math-container">$\mathbf{Q}^{n-1}\times\{0\}$</span> of equation <span class="math-container">$x_n=0$</span> onto the hyperplane of equation <span class="math-container">$\sum x_i=0$</span>. Let <span class="math-container">$S_n$</span> be the subgroup of <span class="math-container">$\mathrm{O}_n(\mathbf{Q})$</span> permuting coordinates. Then <span class="math-container">$S_n$</span> fixes <span class="math-container">$(1,\dots,1)$</span> and acts faithfully on its orthogonal. Therefore <span class="math-container">$u^{-1}S_nu$</span> acts faithfully on the orthogonal <span class="math-container">$\mathbf{Q}^{n-1}$</span>.</p>
<p>Hence, for every square <span class="math-container">$n$</span>, the group <span class="math-container">$\mathrm{O}_{n-1}(\mathbf{Q})$</span> contains a copy of the symmetric group <span class="math-container">$S_n$</span>. But for every <span class="math-container">$n\ge 5$</span> there is no injective homomorphism from <span class="math-container">$S_n$</span> to the Weyl group <span class="math-container">$S_{n-1}\ltimes C_2^{n-1}$</span> of type <span class="math-container">$B_{n-1}$</span>, which is isomorphic to <span class="math-container">$\mathrm{O}_{n-1}(\mathbf{Z})$</span>.</p>
<p>This proves that for every square <span class="math-container">$n\ge 9$</span>, there is a finite subgroup of <span class="math-container">$\mathrm{O}_{n-1}(\mathbf{Q})$</span>, isomorphic to <span class="math-container">$S_n$</span>, not embedding into <span class="math-container">$\mathrm{O}_{n-1}(\mathbf{Z})$</span>.</p>
|
405,830 | <p>Are there any cases of finite subgroups of <span class="math-container">$O_n(\mathbb{Q})$</span> <s>not contained in</s> <i>not isomorphic to any subgroup of</i> <span class="math-container">$O_n(\mathbb{Z})$</span>?</p>
| David E Speyer | 297 | <p>A theorem which you might be looking for is that, if <span class="math-container">$G$</span> is any finite subgroup of <span class="math-container">$O_n(\mathbb{Q})$</span>, then there is a lattice <span class="math-container">$\Lambda$</span> in <span class="math-container">$\mathbb{Q}^n$</span> which is preserved by <span class="math-container">$G$</span>. However, that lattice does not have to be <span class="math-container">$\mathbb{Z}^n$</span>.</p>
<p>Proof: Let <span class="math-container">$L$</span> be any lattice in <span class="math-container">$\mathbb{Q}^n$</span>. Take <span class="math-container">$\Lambda = \sum_{g \in G} g L$</span>.</p>
|
4,517,063 | <p>Prove that there are exactly <span class="math-container">$8100$</span> different ways of distributing <span class="math-container">$4$</span> indistinguishable black marbles and <span class="math-container">$6$</span> distinguishable coloured marbles (none of them black) into <span class="math-container">$5$</span> distinguishable boxes in such a way that each box contains exactly <span class="math-container">$2$</span> marbles.</p>
<hr />
<p>I have done problems involving indistinguishable balls and indistinguishable/distinguishable boxes, distinguishable balls and indistinguishable/distinguishable boxes.</p>
<p>I am confused about how to handle the situation when both indistinguishable and distinguishable balls are given at the same time.</p>
<hr />
<p>Any hints will be helpful.</p>
| Graham Kemp | 135,106 | <p>Count ways to assign black balls to the boxes – partitioned by how many boxes contain two black balls – then count ways to arrange distinguishable balls among the remaining spaces — remembering that ball placement inside a box is not ordered.</p>
<p>Eg: The count of ways to assign two boxes with double-black, and arrange the distinguishable balls into pairs among the remaining three boxes, is: <span class="math-container">$$\dfrac{5!}{2!\,3!}\cdotp\dfrac{6!}{2!^3}$$</span></p>
<p>Do similarly for the remaining cases.</p>
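<p>A brute-force enumeration (a quick Python check of my own, not part of the counting argument) confirms the total of $8100$: since each box holds exactly two marbles and the black marbles are indistinguishable, an arrangement is determined by the boxes of the six coloured marbles, with no box receiving more than two of them; the black marbles then fill the remaining slots in exactly one way.</p>

```python
from itertools import product

count = 0
# assign each of the 6 distinguishable coloured marbles to one of 5 boxes;
# the 4 indistinguishable black marbles then fill the remaining slots
for assignment in product(range(5), repeat=6):
    if all(assignment.count(box) <= 2 for box in range(5)):
        count += 1
print(count)  # 8100
```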
|
2,392,841 | <p>Assuming the sequence of functions $\{\phi_{n}(x)\}$ and the function $\phi(x)$ are in the Schwartz space $S(\mathbb{R})$,
my question is this:</p>
<p>Does $\phi_{n}\to \phi$ as $n\to\infty$ (in the Hilbert space $L^2[-\infty,\infty]$)
imply $\phi_{n}\to \phi$ (in the semi-norm topology of the Schwartz space)?</p>
| D_S | 28,556 | <p>Let $R$ be a ring whose prime spectrum $X$ is finite and discrete. Then $R$ must be a semilocal ring with maximal ideals $\mathfrak m_1, ... , \mathfrak m_n$, and no other prime ideals. </p>
<p>I think the geometric argument you suggested will work. I don't know how much algebraic geometry you know, so I'll state the relevant results . Let $A$ be a commutative ring with identity, and let $Y$ be the space of prime ideals of $A$. To each open set $U$ of $Y$ is associated a certain ring $\mathcal O_Y(U)$. In particular, $\mathcal O_Y(\emptyset)$ is the zero ring, and $\mathcal O_Y(Y)$ can be canonically identified with $A$ itself. To each inclusion of open sets $U \subseteq V$ is associated a certain ring homomorphism $\mathcal O_Y(V) \rightarrow \mathcal O_Y(U)$ denoted by $s \mapsto s|_U$. If $U_i$ is an open cover of $Y$, then the sequence of abelian groups</p>
<p>$$0 \rightarrow A \rightarrow \prod\limits_i \mathcal O_Y(U_i) \rightarrow \prod\limits_{i,j} \mathcal O_Y(U_i \cap U_j)$$</p>
<p>is exact, where the last map is $(s_i ) \mapsto (s_i|_{U_i \cap U_j} - s_j|_{U_i \cap U_j})$. In particular, if you have a disjoint open cover $U_i$ of $Y$, then you get an isomorphism of $A$ with the product ring $\prod\limits_i \mathcal O_Y(U_i)$. Finally, if $\mathfrak p \in Y$, then as you run $U$ over smaller and smaller open neighborhoods of $\mathfrak p$, you can pass to the direct limit of the corresponding rings $\mathcal O_Y(U)$ to obtain a local ring $\mathcal O_{Y,\mathfrak p}$, which is nothing more than $A_{\mathfrak p}$, the localization of $A$ at $\mathfrak p$.</p>
<p>You can apply all this to your situation: $X$ is the disjoint union of $n$ open sets $\{\mathfrak m_i\}$, so the sheaf condition tells you that the restriction maps $R= \mathcal O_X(X) \rightarrow \mathcal O_X(\{\mathfrak m_i\}) = R_{\mathfrak m_i}$ induce an isomorphism of $R$ with $\prod\limits_{i=1}^n R_{\mathfrak m_i}$. Each $R_{\mathfrak m_i}$ is local with nilpotent nonunits.</p>
<p>Without algebraic geometry, you can still probably argue directly that the map $r \mapsto (r/1, ... , r/1)$ defines an isomorphism</p>
<p>$$R \rightarrow \prod\limits_{i=1}^n R_{\mathfrak m_i}$$</p>
<p>but I can't immediately see a clean proof. I'll try it again tomorrow.</p>
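<p>A toy instance of this decomposition (my own illustration, not part of the original answer): $R=\mathbb{Z}/12$ has finite discrete prime spectrum $\{(2),(3)\}$, its localizations at those primes are $\mathbb{Z}/4$ and $\mathbb{Z}/3$, and the canonical map is the Chinese Remainder isomorphism:</p>

```python
# r -> (r mod 4, r mod 3) from Z/12 into Z/4 x Z/3
images = [(r % 4, r % 3) for r in range(12)]

# the map is a bijection (CRT), hence a ring isomorphism, since it
# respects addition and multiplication componentwise
print(len(set(images)))  # 12

# spot-check multiplicativity: a*b maps to the componentwise product
a, b = 7, 10
assert ((a * b) % 4, (a * b) % 3) == ((a % 4) * (b % 4) % 4, (a % 3) * (b % 3) % 3)
```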
|
1,775,378 | <p>So this question is inspired by the following thread: <a href="https://forums.factorio.com/viewtopic.php?f=5&t=25008">https://forums.factorio.com/viewtopic.php?f=5&t=25008</a></p>
<p>In it, the poster is examining an $8$-belt balancer (more on that to come) which he shows fails to satisfy a desirable property, which he called universally throughput unlimited.</p>
<p>So what is an $n$-belt balancer? It is a configuration of belts (which move items around), and splitters (which take two belts in and balance their items on the two belts on the output side), which will balance the input of all $n$ input belts across all $n$ output belts. They are frequently used in large factories to move large amounts of items to a variety of different areas in a manner where no one belt's worth of items getting backlogged (more items coming at it than it can use) results in other projects not getting full throughput (or at least as much as they can use).</p>
<p>The desired property called <em>universally throughput unlimited</em> is the following: Suppose only $k$ of the $n$ input belts are getting input (assume full input; aka, input belts are assumed saturated), and that all but $k$ of the output belts are backlogged and have no throughput (already full of items and nothing is moving on those belts). Then the full input on those $k$ input belts can be provided across the $k$ output belts (which have the same maximum throughput, hence no one output belt can handle more than one input belt's worth of throughput). This basically means that the $n$-belt balancer is never a bottleneck no matter the current input or output limitations (which lanes are getting input/available for output).</p>
<p>The question I have is the following: Is it always possible to create an $n$-belt balancer satisfying the universally throughput unlimited condition for any $n$? If not, for which $n$'s is it possible? (clearly, $n=2$ works because of how splitters behave)</p>
<p>I have some ideas on how to approach this problem, but am nowhere near having it solved. The first idea is about how to represent the problem:
We can represent the input belts and output belts as vertices of a directed graph, the inputs being sources (in-degree $=0$) and the outputs being sinks (out-degree $=0$). The balancer is the input and output vertices together with a set of <em>intermediate</em> vertices, which represent splitters and have $1\leq$ in-degree, out-degree $\leq 2$ (one or two directed edges pointing to them and one or two coming from them), and the associated directed edges. Looking at the problem this way, it is easy to see that a <em>necessary</em> condition is that input on any belt can reach any output belt (it is necessary because if not, then consider the case of all input on one input belt and all but one output belt backlogged with 0 throughput; in such a case, if you can't route that input belt's input to the output belt you won't get any throughput), but this condition is not sufficient (multiple examples that satisfy this condition have been shown both theoretically and experimentally to fail to have the desired universally throughput unlimited property).</p>
<p>An important thing to note is that belts can be routed <em>under</em> other belts via underground belts, hence planarity of the above-described graph is not necessary. The fact that splitters have some very specific behaviors is important to this problem: They will always try to balance outputs provided there is no backlog, hence, in a no-backlog scenario the output on each belt leaving a splitter is half of the <em>total</em> input on both of its input belts. If, however, one of the output belts is backlogged with no throughput, then all of the throughput will be merged onto the 'free' belt <em>up to</em> its throughput limit. If more than one belt's worth of throughput is coming into a splitter in this case, then <em>both</em> input belts will start to bottleneck (each belt's effective throughput will be half of the maximum, because that's how much of the saturated output belt is coming from the given input belt). Sometimes a backlog is only a <em>reduction</em> in throughput (due to bottlenecking down the line somewhere); in such a case, a splitter will still split input equally up to the reduced throughput of the lowest-throughput belt; after that, <em>all</em> remaining throughput is thrown at the belt with additional capacity until that too is saturated, and if there is any more input coming at the given splitter then <em>both</em> of its input belts will start to backlog.</p>
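<p>The splitter behaviour described above can be sketched as a small steady-state model (a simplification of my own, ignoring lanes and time dynamics): given the combined input flux and the remaining capacity of each output belt, split evenly up to the tighter capacity, push the excess onto the freer belt, and report whatever cannot pass as backlog on the inputs.</p>

```python
def split(total_in, cap_a, cap_b):
    """Steady-state splitter model. total_in is the combined input flux
    (each belt carries at most 1.0); cap_a and cap_b are the remaining
    capacities of the two output belts. Returns (out_a, out_b, backlog)."""
    half = total_in / 2.0
    if half <= cap_a and half <= cap_b:
        return half, half, 0.0               # balanced split, no bottleneck
    low = min(cap_a, cap_b)                  # the more constrained output
    spill = total_in - 2.0 * low             # flux beyond the balanced share
    high = min(max(cap_a, cap_b), low + spill)
    backlog = total_in - low - high          # what neither output can take
    if cap_a <= cap_b:
        return low, high, backlog
    return high, low, backlog

print(split(1.0, 1.0, 1.0))  # (0.5, 0.5, 0.0): even split
print(split(1.5, 1.0, 0.0))  # (1.0, 0.0, 0.5): one output blocked, inputs back up
```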
<p>This backlog phenomenon can result in some very subtle behaviors, which makes simply assigning weights to the directed edges in the above-described graph (constrained to a value of $[0,1]$ where $1$ is saturated and $0$ is no throughput) inadequate to describe the problem. For instance, a splitter causing a backlog with some throughput but not enough to avoid backlog can lead to a reduction in throughput for <em>another</em> splitter's output belt, shifting more of its input onto the other output belt (which might cause a splitter further down that belt to suddenly become a bottleneck and backlog, etc.)</p>
<p>My suspicion from experimenting a tad as well as some theoretical work looking at how splitters divide inputs leads me to conjecture that it is not possible for all $n$, and that the most likely candidates are powers of $2$. Even then, for powers higher than $1$ it still might be impossible, because of odd numbers of belts having input needing to get to the same number of output belts (and if balancing odd numbers of belts isn't possible, then the universally throughput unlimited condition might not be satisfiable because of these cases).</p>
| David Ketcheson | 20,368 | <p>UTU = Universally throughput unlimited</p>
<p><strong>There exist UTU balancers for 3, 4, and 5 belts, and likely for any number of belts</strong>. Examples are given in <a href="https://gist.github.com/ketch/b0e30aae8452716233969479d6b6ca5d" rel="noreferrer">this Jupyter notebook</a>, along with a Python implementation of <strong>an iterative algorithm for computing the flow for any set of belts and splitters</strong>.</p>
<p>I will quote some of the notebook here for those who don't wish to follow the link. I refer to splitters as junctions, since that's how they really function.</p>
<blockquote>
<p>Each belt is composed of unit length segments, referred to in the game as tiles. In the model, we imagine each belt flowing from left to right along coordinate direction <span class="math-container">$x$</span> and each junction uniting two belts at the left edge of a given tile. Note that some balancers used in practice contain loops and could not be represented by our model without an infinite length belt.</p>
<p>The state of a belt tile is represented by a density <span class="math-container">$0 \le \rho \le 1$</span> and a velocity <span class="math-container">$0 \le v \le 1$</span>. The value <span class="math-container">$v=1$</span> is the normal belt speed, and <span class="math-container">$\rho=1$</span> is the maximum capacity of a belt. Clearly, we can only have <span class="math-container">$v<1$</span> if <span class="math-container">$\rho=1$</span>; this is a state in which the belt is fully loaded and there is not enough outflow downstream.</p>
<p>The inflow upstream is specified by the density <span class="math-container">$\rho_{up}$</span> at the left edge of the domain (<span class="math-container">$x=0$</span>) and the outflow downstream is specified by <span class="math-container">$v_{down}$</span> at the right edge of the domain. For the purposes of testing UTU, we are only interested in setting these values to zero or one, but more general values will work with the algorithm below.</p>
<p>The main condition to be satisfied is conservation at each junction. The flux of items along a belt is given by the product <span class="math-container">$\rho v$</span>, and we require that the flux going into a junction be equal to the flux coming out. It turns out to be much more complicated than I expected to satisfy this condition at each junction in a way consistent with how splitters work, and that is reflected in the code below. When I have time, I will hopefully add a more precise mathematical description of the conditions that are applied in the algorithm in order to achieve conservation.</p>
<p>Briefly: we initially set <span class="math-container">$\rho=0$</span> and <span class="math-container">$v=1$</span> everywhere except at the boundaries. At each iteration, we only increase <span class="math-container">$\rho$</span> and decrease <span class="math-container">$v$</span>; the iterative values are always lower (respectively upper) bounds on the correct values. At each iteration, we check for junctions where inflow is greater than outflow (due to the conditions in the previous sentence, the opposite situation cannot happen) and adjust them in order to achieve conservation and equal outflow. If possible, outflow densities are increased to achieve this. If that's not possible (because one of the outflow densities reaches 1) then all excess outflow is assigned to the other belt. If that's not possible (because both outflow densities reach 1) then one or both of the inflow belts will fill up and slow down.</p>
</blockquote>
<p>Here is a simplified view of one flow through a UTU 3-belt balancer:
<a href="https://i.stack.imgur.com/mIZgE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mIZgE.png" alt="3-belt UTU balancer" /></a>
The numbers are densities and the colors are velocities. Again, flow is left to right, and each black line represents a junction ("splitter"). Here's what it looks like in the game:</p>
<p><a href="https://i.stack.imgur.com/KK3nS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KK3nS.png" alt="3-belt UTU in game" /></a></p>
<p>This seems pretty simple and obvious; I'm surprised that it hasn't been posted in the Factorio forums already.</p>
<p>Here's a 4-belt UTU balancer:</p>
<p><a href="https://i.stack.imgur.com/rSe6Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rSe6Y.png" alt="4-belt UTU" /></a></p>
<p>and one way to construct it in-game:</p>
<p><a href="https://i.stack.imgur.com/zHwaH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zHwaH.png" alt="enter image description here" /></a></p>
<p>I think this has already been invented and posted on the Factorio forums, although I came to it independently.</p>
<p>The notebook has a 5-belt balancer too, but I haven't taken the time to build it in-game.</p>
<p>The notebook includes code to generate and test all the required sets of inputs and outputs for the UTU property.</p>
<p>You'll notice that these balancers are built by naively putting as many junctions in as one can fit, and staggering them. I conjecture that if you do enough of this, you'll always get a UTU balancer, but I have no idea how the length of the balancer grows with the number of belts. Interestingly, my 4-belt balancer is shorter (in the model representation, where belts can be combined willy-nilly without worrying about the actual geometry) than the 3-belt balancer. But the 5-belt balancer is much longer.</p>
<p>The algorithm in the notebook could also be used to investigate other scenarios, such as <span class="math-container">$m$</span>-to-<span class="math-container">$n$</span> balancers with <span class="math-container">$m$</span> not equal to <span class="math-container">$n$</span>, or to test the idea put forward in other answers that a <span class="math-container">$2n$</span> UTU balancer can be built up from an <span class="math-container">$n$</span> UTU balancer. It would be straightforward to attach it to some brute-force or more intelligent search that tries to find the shortest possible balancer with some given properties.</p>
<p>I must say that I am surprised by the complexity of this problem, which I initially thought would be much easier to solve. In particular, I haven't yet been able to come up with an explicit approach to the solution, only the iterative algorithm in the notebook. Thanks for the very interesting problem.</p>
<p><strong>Update</strong>: In the comments it is claimed that the 3-belt balancer fails if the top input and bottom output are disabled. Here is the picture for that situation, showing full throughput:</p>
<p><a href="https://i.stack.imgur.com/0TZTM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0TZTM.png" alt="enter image description here" /></a></p>
<p>Notice that the splitters are doing exactly what @Rhamphoryncus claims they should. For instance, the splitter connecting the lower two belts on tile 2 (the third tile) pulls exactly 1/2 of a full belt from each of the two upstream belts. The same happens with the splitter joining those two belts on tile 4.</p>
|
83,921 | <p>Consider $J = \sum_{i=0}^{N}y_{i-1}x_{i}y_{i+1}$ where $+$ and $-$ in the indices are mod $N+1$. Let $x_{i} = 1 - y_{i} \in \{0,1\}$. What are some useful tools and relaxation techniques available to maximize $J$, or any other symmetric multivariate polynomial?</p>
| bnaul | 19,008 | <p>In case anyone else comes across this: the model I ended up using is by León, Massé, and Rivest in the Journal of Multivariate Analysis (see <a href="http://www.sciencedirect.com/science/article/pii/S0047259X05000382" rel="nofollow">here</a>). They give a distribution on the space of skew-symmetric matrices that yields an arbitrarily concentrated distribution on the orthogonal group: that is, for dispersion parameter $\kappa=0$ the resulting distribution is uniform, and as $\kappa\rightarrow\infty$ the distribution approaches a constant distribution.</p>
<p>EDIT: The limit is a Gaussian distribution (analogous to the degrees of freedom of a t distribution approaching $\infty$), not constant.</p>
<p>Thanks again to those who commented/answered as these suggestions put me on the path towards the method I ultimately found.</p>
|
3,426,316 | <blockquote>
<p>If <span class="math-container">$f(x)\leq a$</span> then can we say <span class="math-container">$$\int f(x) dx\leq a\int dx?$$</span></p>
</blockquote>
<p>I want to know whether this is true or not, and how to prove the answer. If it's right, can you provide a good, simple explanation?
Thanks in advance.</p>
| Dietrich Burde | 83,966 | <p>An <span class="math-container">$L$</span>-module <span class="math-container">$V$</span> is just a Lie algebra representation <span class="math-container">$\phi\colon L\rightarrow \mathfrak{gl}(V)$</span>, where <span class="math-container">$x.v=\phi(x)(v)$</span>. By the universal property of <span class="math-container">$U(L)$</span>, the universal enveloping algebra of <span class="math-container">$L$</span>, <span class="math-container">$\phi$</span> extends to a representation of <span class="math-container">$U(L)$</span> on <span class="math-container">$V$</span>. Conversely, every representation of <span class="math-container">$U(L)$</span> on <span class="math-container">$V$</span> restricts to a representation of <span class="math-container">$L$</span> on <span class="math-container">$V$</span>. In this sense, <span class="math-container">$L$</span>-modules correspond to <span class="math-container">$U(L)$</span>-modules.</p>
|
2,843,337 | <blockquote>
<p>If $0 \lt x \lt \dfrac{\pi}{2}$, prove that
$$x^{3/2}\sin x + \sqrt{9-x^3}\cos x \leq 3$$</p>
</blockquote>
<p>This question must be done without calculus. First, I tried splitting it into the intervals $(0,\pi/4)$ and $(\pi/4, \pi/2)$, hoping that, $\sin x$ was bound tightly enough on the interval that it'd be less than 3 even if $\cos x = 1$ (which doesn't work -- letting $\sin x = \dfrac{1}{\sqrt{2}}$ and $\cos x = 1$ produces a result greater than 3).</p>
<p>The other thing I noticed was that inside the square root sign, we have $\sqrt{9-x^3} = \sqrt{(3-x^{3/2})(3+x^{3/2})}$, and an $x^{3/2}$ appears in the first term, but I'm not sure how useful the similarity there is.</p>
<p>Advice on how to proceed?</p>
| Aaron Meyerowitz | 84,560 | <p>It is just trig: any expression $A\sin(\theta)+B\cos(\theta)=C\sin(\theta+\omega)$ for a phase shift $\omega$ with $\tan(\omega)=\frac{B}{A}$ and $C^2=A^2+B^2.$ Here $A=x^{3/2}$ and $B=\sqrt{9-x^3}$, so $C^2=x^3+(9-x^3)=9$, i.e. $C=3$, and $3\sin(x+\omega)\leq 3$.</p>
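<p>A quick numerical spot-check (a stdlib Python sketch of my own, not part of the argument) samples the left-hand side on a fine grid and confirms it stays below $3$, approaching $3$ as $x\to 0^+$:</p>

```python
import math

def lhs(x):
    # x^(3/2) sin x + sqrt(9 - x^3) cos x
    return x**1.5 * math.sin(x) + math.sqrt(9 - x**3) * math.cos(x)

n = 10_000
vals = [lhs(i * (math.pi / 2) / n) for i in range(1, n)]
print(max(vals))  # just under 3, attained near x = 0
```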
|
1,387,216 | <p>Let $X$ be a metric space, and let $E$ and $K$ be two sets such that $E\subset K$.</p>
<p>I want to prove:</p>
<p>If $p$ is a limit point of $E$, then $p$ is a limit point of $K$.</p>
<p>Proof: if every neighborhood of $p\in E$ contains a point $q\neq p$ such that $q\in E$, then this $q$ is also in $K$, then $p$ is a limit point of $K$. Is that correct?</p>
| OKPALA MMADUABUCHI | 98,218 | <p>I don't think so. To proceed, let $p$ be a limit point of $E$. Then for any neighbourhood $N$ of $p$ we have $(N\setminus\{p\})\cap E\neq \emptyset.$ Now let $N$ be a neighbourhood of $p$ in $K$. Then $N\cap E$ is a neighbourhood of $p$ in $E$. So $((N\cap E)\setminus \{p\})\cap E\neq \emptyset$, and since $E\subset K$ it follows that $(N\setminus\{p\})\cap K\neq \emptyset$. So $p$ is a limit point of $K$. </p>
|
2,426,394 | <p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused.
It is the 1378th Problem from "Problems in Mathematical Analysis" written by Demidovich. The standard answer is
$$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>But when I followed the routine
$$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$
I got
$$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p>
<p>and finally
$$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p>
<p>Would you tell me what mistake I made. Best regards.</p>
| neonpokharkar | 477,567 | <p>You can write that expression as
$$\Biggr(\frac{1+x}{1-2x}\Biggr)^{40}\Biggr(\frac{1+x}{1+2x}\Biggr)^{60}$$
and use the product rule.</p>
<p>It is easier this way</p>
<blockquote class="spoiler">
<p>$$\color{green}{\frac{d}{dx} }\Biggr(\color{red}{\Biggr(\frac{1+x}{1-2x}\Biggr)^{40}}\color{blue}{\Biggr(\frac{1+x}{1+2x}\Biggr)^{60}}\Biggr)$$$$$$$$\color{red}{\Biggr(\frac{1+x}{1-2x}\Biggr)^{40}}\color{green}{\frac{d}{dx}}\color{blue}{\Biggr(\frac{1+x}{1+2x}\Biggr)^{60}}+\color{green}{\frac{d}{dx}}\color{red}{\Biggr(\frac{1+x}{1-2x}\Biggr)^{40}}\color{blue}{\Biggr(\frac{1+x}{1+2x}\Biggr)^{60}}$$$$$$$$\color{blue}{60 \Biggr(\frac{1+x}{1+2x}\Biggr)^{59}\Biggr(\frac{-1}{(1+2x)^2}\Biggr)}\color{red}{\Biggr(\frac{1+x}{1-2x}\Biggr)^{40}}+\color{red}{40\Biggr(\frac{1+x}{1-2x}\Biggr)^{39}\Biggr(\frac{3}{(1-2x)^2}\Biggr)}\color{blue}{\Biggr(\frac{1+x}{1+2x}\Biggr)^{60}}$$$$$$$$-60\frac{(1+x)^{99}}{(1+2x)^{\color{blue}{61}}(1-2x)^{\color{red}{40}}}+120\frac{(1+x)^{99}}{(1-2x)^{\color{red}{41}}(1+2x)^{\color{blue}{60}}}$$$$$$$$\frac{60(1+x)^{99}(\color{red}{-1+2x+2+4x})}{(1+2x)^{61}(1-2x)^{41}}$$$$$$$$\frac{60(1+x)^{99}(6x+1)}{(1+2x)^{61}(1-2x)^{41}}$$</p>
</blockquote>
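<p>The book's answer can also be checked numerically (a Python sketch of my own using complex-step differentiation, which is essentially exact for rational functions like this one; it is not part of the solution):</p>

```python
def f(x):
    # the function being differentiated
    return (1 + x)**100 / ((1 - 2*x)**40 * (1 + 2*x)**60)

def target(x):
    # the book's answer for f'(x)
    return 60 * (1 + x)**99 * (1 + 6*x) / ((1 - 2*x)**41 * (1 + 2*x)**61)

h = 1e-20
for x in (0.1, 0.2, 0.3):
    deriv = f(x + 1j * h).imag / h        # complex-step derivative
    print(x, abs(deriv / target(x) - 1))  # tiny relative error (machine precision)
```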
|
2,903,259 | <p>I am working on one example from Munkres book "Topology"and I would like to clarify one question.</p>
<p><strong>Example:</strong> Consider the set $[0,1)$ of real numbers and the set $\mathbb{Z}_+$ of positive integers, both in their usual orders; give $\mathbb{Z}_+\times [0,1)$ the dictionary order. This set has the same order type as the set of nonnegative reals; the function $$f(n\times t)=n+t-1$$ is the required bijective order-preserving correspondence. On the other hand, the set $[0,1)\times \mathbb{Z}_+$ in the dictionary order has quite a different order type; for example, every element of this ordered set has an immediate successor. </p>
<p><strong>My questions:</strong> </p>
<p>1) I've checked that $\mathbb{Z}_+\times [0,1)$ has the same order type as the set of nonnegative reals, right?</p>
<p>2) Any element $(t,n)$ from $[0,1)\times \mathbb{Z}_+$ has immediate successor, namely $(t,n+1)$. Right?</p>
<p>3) But elements in $\mathbb{Z}_+\times [0,1)$ do not have immediate successors, right?</p>
| Chessanator | 363,017 | <p>The order type of the dictionary order depends on which way round the two factors are, because one of the factors has a much greater effect on the ordering than the other. Going by the convention used in your example (so that $\mathbb{Z}_+\times [0,1)$ is isomorphic to the non-negative reals) we see that $[0,1) \times \mathbb{Z}_+$ is quite different because increasing the value of the component in $[0,1)$ a little bit now ensures that many copies of $\mathbb{Z}$ will be included between the two elements, while increasing the component in $\mathbb{Z}$ by one only steps up by one element rather than an entire copy of $[0,1)$.</p>
|
4,292,822 | <blockquote>
<p>Prove linear independence of <span class="math-container">$1+x^3-x^5,1-x^3,1+x^5$</span> in the Vector Space of Polynomials</p>
</blockquote>
<p>The attempts I found online are all quite easy. You just substitute something in for <span class="math-container">$x$</span> into the equation <span class="math-container">$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5)=0$</span>, for example <span class="math-container">$x=1,0,-1$</span>, and this will give you three equations from which you can show that <span class="math-container">$a,b,c=0$</span>. But why can we substitute something in? If I define the Vector Space of Polynomials in a very abstract way, with <span class="math-container">$\sum_{i} \alpha_i x^{i}+\sum_{i} \beta_{i} x^{i}:=\sum_{i} (\alpha_{i}+\beta_{i})x^{i}$</span> and <span class="math-container">$(\sum_{i}^{n} \alpha_i x^{i})(\sum_{i}^{m} \beta_{i} x^{i} ):=\sum_{i=0}^{n+m} c_i x^i$</span> with <span class="math-container">$c_k=\alpha_0 \beta_k+\alpha_1 \beta_{k-1}+...+\alpha_{k} \beta_0$</span>, and <span class="math-container">$x$</span> is just an abstract symbol with absolutely no meaning, why should one be allowed to substitute something for <span class="math-container">$x$</span>, or even worse, differentiate the equation?</p>
| José Carlos Santos | 446,262 | <p>In order to deal with this problem, you deal with expressions of the type<span class="math-container">$$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5).\tag1$$</span>Now, take <span class="math-container">$\alpha\in\Bbb R$</span>. Then the map<span class="math-container">$$\begin{array}{rccc}\operatorname{ev}_\alpha\colon&\Bbb R[x]&\longrightarrow&\Bbb R\\&P(x)&\mapsto&P(\alpha)\end{array}$$</span>is a linear map and therefore it maps <span class="math-container">$(1)$</span> into <span class="math-container">$a(1+\alpha^3-\alpha^5)+b(1-\alpha^3)+c(1+\alpha^5)$</span>. But this is a real number now, and so you can do anything that you would do with ordinary numbers.</p>
<p>Besides, if, when <span class="math-container">$P(x)=a_0+a_1x+a_2x^2+\cdots+a_nx^n$</span>, you define <span class="math-container">$P'(x)=a_1+2a_2x+\cdots+na_nx^{n-1}$</span>, then<span class="math-container">$$\begin{array}{ccc}\Bbb R[x]&\longrightarrow&\Bbb R[x]\\P(x)&\mapsto&P'(x)\end{array}$$</span>is also a linear map. So, again, you can see what it maps <span class="math-container">$(1)$</span> into.</p>
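<p>For a concrete check, one can compare coordinate vectors in the basis <span class="math-container">$\{1, x^3, x^5\}$</span>: the three polynomials are independent iff the matrix of their coordinates has nonzero determinant. A minimal sketch (not part of the original argument, just a numerical confirmation):</p>

```python
# Coordinate vectors of 1+x^3-x^5, 1-x^3, 1+x^5 in the basis {1, x^3, x^5}.
rows = [
    (1, 1, -1),   # 1 + x^3 - x^5
    (1, -1, 0),   # 1 - x^3
    (1, 0, 1),    # 1 + x^5
]

def det3(m):
    """Determinant of a 3x3 matrix given as a list of row tuples."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3(rows))  # -3, nonzero => linearly independent
```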
|
68,147 | <p>While I was investigating some specific types of prime numbers I have faced with the following infinite sequence :</p>
<p>$1,2,8,9,15,20,26,38,45,65,112,244,303,393,560,....$</p>
<p>I tried to find a recursive formula using Maple and its listtorec command; up to $393$ I got the following output:</p>
<p>$ f(n+3) = ((-10604990407411886564453040+8614360900967683126093782*n$ $-1437788330056801496567841*n^2-20019334790519891406942*n^3$ $+10676199651161684501481*n^4)*f(n+1)$ $+(-1637719982644311036922320-2457276199701830407970234*n$ $-480059310080505210547097*n^2+383671472063948372228234*n^3$ $-33849767081583104776903*n^4)*f(n+2))$ $/(-936042047504931985146406*n -3812415630664251269364960$ $+337414858035611215686569*n^2+50641450188283496191324*n^3$ $-8211420729473965803551*n^4) $</p>
<p>but when I added $560$ to the list, Maple returned FAIL.</p>
<p>So, my question is : how can I find pattern for this sequence if it exists ?</p>
| Juan S | 2,219 | <p>Almost certainly the first thing you should do is try <a href="http://oeis.org/search?q=1%2C2%2C8%2C9%2C15%2C20&language=english&go=Search" rel="nofollow">OEIS</a></p>
<p>Doing so shows that in your case the sequence consists of the numbers $n$ such that $6\times10^n+1$ is prime.</p>
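<p>The OEIS description is easy to verify for the first few terms; a quick sketch, using a deterministic Miller-Rabin test (the chosen base set is known to be exact far beyond this range):</p>

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < 3.3e24 (first twelve prime bases)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# n such that 6*10^n + 1 is prime, for small n
hits = [n for n in range(1, 10) if is_prime(6 * 10**n + 1)]
print(hits)
```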
|
1,991,600 | <p><a href="https://i.stack.imgur.com/2L9j9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2L9j9.jpg" alt="enter image description here" /></a></p>
<p>I have worked out the areas as <span class="math-container">$\pi/3$</span> for the circle and <span class="math-container">$2/\sqrt3$</span> for the triangle but don't know how to convert into a percentage without a calculator.</p>
| David Quinn | 187,299 | <p>If the radius of the circle is $1$ then the area of the circle is $\pi$ and the area of the triangle is $3\sqrt{3}$</p>
<p>Therefore the percentage is $$\frac{\pi}{3\sqrt{3}}\times100=\frac{100\pi\sqrt{3}}{9}\simeq\frac{100\times 3.14\times 1.73}{9}\simeq\frac{543}{9}\simeq60\%$$</p>
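<p>As a sanity check on the mental arithmetic, the exact ratio is easy to evaluate:</p>

```python
import math

# area(circle) / area(circumscribed equilateral triangle), radius 1
percentage = 100 * math.pi / (3 * math.sqrt(3))
print(round(percentage, 2))  # 60.46
```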
|
1,137 | <p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to student failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what is needed to these student, and we enforce the impression that not mastering high-school mathematics is ok.</p>
<blockquote>
<p>How much effort should we spend in class for studying material that is
supposed to be mastered but in practice is not?</p>
</blockquote>
| paul garrett | 63 | <p>In addition to other useful answers... this sort of problem will recur forever, since students (even with good attitudes) chronically misjudge the importance of background courses, and chronically misjudge the degree of facility and competence really desirable for later work. E.g., sometimes students see a passing grade (C+?) as showing they're ready to move forward... while, in reality, that's terrible from the viewpoint of competence and reliability.</p>
<p>But there'll be no room in your course to rant about the failings of the ambient cultural attitudes, blah-blah-blah. Nor will many students be willing to agree with the premise that they in fact do not know things they thought they know... with "know" as certified by a grade in a course in the past.</p>
<p>Thus, for me both in undergrad courses at all levels, and in grad courses, I "recall" things ... sometimes in gruesome detail ... pretending to be apologetic for boring people who know it all too well... to at least passively drill everyone in the absolutely critical riffs.</p>
<p>Yes, I am implicitly doubting that formal reviews are as effective as one would want, exactly because of the inaccurate self-perceptions of the students. It is quicker, and not toooo burdensome to "in-line" the review/recapitulation of things that... yes, we'd like to be able to assume the kids know, but they just don't.</p>
<p>The meta-comment is that this is a huge issue, and we can't expect any sort of "solution", but only a strategy to fight back on an on-going basis...</p>
|
1,137 | <p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to student failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what is needed to these student, and we enforce the impression that not mastering high-school mathematics is ok.</p>
<blockquote>
<p>How much effort should we spend in class for studying material that is
supposed to be mastered but in practice is not?</p>
</blockquote>
| William | 581 | <p>You should give your students an assessment test to see what they have not mastered.
Then you may wish to give them some basic material for practice on their own time.</p>
<p>Once a professor asked me "Didn't you take a course in differential equations"?</p>
<p>I said "Of course, twice".</p>
|
183,237 | <blockquote>
<p>Does</p>
<p><span class="math-container">$$\int_{-\infty}^\infty \text{e}^{\ a\ (x+b)^2}\ \text dx=\int_{-\infty}^\infty \text{e}^{\ a\ x^2}\ \text dx\ \ \ \ \ ?$$</span></p>
<p>hold, even if the imaginary part of <span class="math-container">$b$</span> is nonzero?</p>
</blockquote>
<p>What I really want to understand is what the phrase "<a href="http://en.wikipedia.org/wiki/Common_integrals_in_quantum_field_theory#Integrals_with_a_complex_argument_of_the_exponent" rel="nofollow noreferrer">By analogy with the previous integrals</a>" means in that link. There, the expression <span class="math-container">$\frac{J}{a}$</span> is complex but they seem to imply the integral can be solved like above anyway.</p>
<p>The result tells us that the integral is really independent of <span class="math-container">$J$</span>, which is assumed to be real here. I wonder if we can also generalize this integral to include complex <span class="math-container">$J$</span>. In case that the shift above is possible, this should work out.</p>
<p>But even if the idea is here to perform that substitution, how to get rid of the complex <span class="math-container">$a$</span> to obtain the result. If everything is purely real or imaginary, then <a href="https://math.stackexchange.com/questions/163946/are-complex-substitutions-legal-in-integration/166359#166359">this</a> solves the rest of the problem.</p>
| DonAntonio | 31,254 | <p>Put
$$(x+b)^2=u^2\Longrightarrow 2(x+b)dx=2udu\Longrightarrow dx= du$$ so we get</p>
<p>$$\int_{-\infty}^\infty e^{a(x+b)^2}dx=\int_{-\infty}^\infty e^{au^2}du$$
and your question's answered in the affirmative.</p>
<p><strong>Added:</strong> You may want to divide the integral in two rays $\,(-\infty,0)\,,\,(0,\infty)\,$ and then do the above, to avoid problems with the signs after taking the square root.</p>
|
481,017 | <blockquote>
<p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p>
</blockquote>
<p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p>
<p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p>
<p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
| Gottfried Helms | 1,714 | <p>There is another way to arrive by a meaningful generalization to a result. <em>(Btw., this is essentially the same result which I got using the Carlemanmatrix; if we recenter the function $w(x)$ of my previous posting we get $f(x)$ with all displayed digits correct.)</em> </p>
<p>The resulting function has the following power series:
$$ f(x) = -1.21139973416 + 1.12528011714 x + 0.302849933539 x^2 - 0.0468866715477 x^3 + 0.0126187472308 x^4 - 0.00410258376042 x^5 + 0.00147218717693 x^6 - 0.000561663252915 x^7 + O(x^8) \\
f(f(x)) = -2 + x^2
$$</p>
<p>The key here is to observe, that the basic function $g_2(x)=-2 + x^2$ is a rescaled <a href="https://en.wikipedia.org/wiki/Chebyshev_polynomials#Examples" rel="nofollow">Chebychev-polynomial</a> $T_2(x)$ such that $g_2(x) = 2 \cdot T_2(x/2)$<br>
Now the iterates of $g_2^{\circ h} (x)$ are in a similar way as the Chebychev-polynomials expressible by the composition<br>
$$ g_2(x) = 2 \cdot \cosh(2 \cdot \mathrm{arccosh}(x/2)) $$
and the h'th iterate
$$ g_2^{\circ h}(x) = 2 \cdot \cosh(2^h \cdot \mathrm{arccosh}(x/2)) $$
This composition allows a natural extension to the fractional iteration heights h; using $h=1/2$ we get
$$ \begin{array} {llll} g_2^{\circ 1/2}(x) &=& f(x) \\ &=& -1.21139973416 + 1.12528011714 x + 0.302849933539 x^2 \\ & & - 0.0468866715477 x^3 + 0.0126187472308 x^4 - 0.00410258376042 x^5 \\ & & + 0.00147218717693 x^6
- 0.000561663252915 x^7 + O(x^8) \\ & & \text{ and }\\
f(f(x)) &=& -2 + x^2 \end{array} $$</p>
<p>Of course we have to reflect restrictions for the domain. If we work only over the reals then, because of the inner <em>arccosh</em>-function, we need $ x \in (2,\infty) $. However, the domain can be extended to complex numbers. (Multivaluedness might then occur - I don't have the exact description at hand yet; Pari/GP, which has complex arithmetic built in, works with the complete disk around the complex origin.)</p>
<p><hr>
The function $w(x)$ in the other post can be reproduced by putting the cosh/arccosh-formula into Wolframalpha. Using <a href="http://www.wolframalpha.com/input/?i=2%2acosh%28sqrt%282%29%2aarccosh%28x/2%29%29" rel="nofollow">this</a> we get, in the paragraph "series around 2", the representation of $f(x)$ using the coefficients of $w(x-2)+2$</p>
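<p>The closed form is easy to test numerically for $x>2$; a minimal check (assuming real arithmetic on $(2,\infty)$):</p>

```python
import math

def g_iter(x, h):
    """h-th (possibly fractional) iterate of g(x) = x^2 - 2, valid for x > 2."""
    return 2 * math.cosh(2**h * math.acosh(x / 2))

for x in (2.5, 3.0, 5.0):
    f = g_iter(x, 0.5)          # the half-iterate f
    ff = g_iter(f, 0.5)         # f(f(x))
    print(x, ff, x * x - 2)     # ff should match x^2 - 2
```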
|
3,489,376 | <p>Let <span class="math-container">$C$</span> be a category. Then by definition, for very ordered triple <span class="math-container">$A,B,C$</span> of objects, there is a law of composition of morphisms, i.e., a map <span class="math-container">$$Hom_C(A,B)\times Hom_C(B,C)\longrightarrow Hom_C(A,C)$$</span> where <span class="math-container">$(f,g)\mapsto gf$</span>.</p>
<p>I was wondering if it is possible in the definition that <span class="math-container">$Hom_C(A,C)=\emptyset$</span> when the other two are not empty.</p>
| Berci | 41,488 | <p>No, that is not possible:</p>
<p>If we have a function <span class="math-container">$X\to\emptyset$</span>, then <span class="math-container">$X=\emptyset$</span>, because for every <span class="math-container">$x\in X$</span> there should be an assigned element in the codomain.</p>
<p>If, however, <span class="math-container">$\hom(A,B)\ne\emptyset$</span> and <span class="math-container">$\hom(B,C)\ne\emptyset$</span>, then their Cartesian product is nonempty as well. </p>
|
2,393,130 | <blockquote>
<p><strong>Problem:</strong> <span class="math-container">$ABCD$</span> is a rectangle. A point <span class="math-container">$P$</span> is <span class="math-container">$11$</span> from <span class="math-container">$A$</span>, <span class="math-container">$13$</span> from <span class="math-container">$B$</span> and <span class="math-container">$7$</span> from <span class="math-container">$C$</span>. What is the length of <span class="math-container">$DP=x?$</span> (Note: <span class="math-container">$P$</span> can be inside the rectangle or outside of it.)</p>
<p><a href="https://i.stack.imgur.com/wzBoW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzBoW.jpg" alt="enter image description here" /></a></p>
</blockquote>
<p>I drew this scenario as best as I could, but I only have triangles with two sides and no angles. How do I begin? Any weird and relatively unknown theorems I should use?</p>
| Michael Rozenberg | 190,319 | <p>Just $x^2+13^2=7^2+11^2$ by Pythagoras theorem.</p>
<p>Let $PK$, $PL$, $PM$ and $PN$ be altitudes of $\Delta DPA$, $\Delta DPC$, $\Delta CPB$ and $\Delta APB$ respectively.
Thus,
$$x^2+13^2=DK^2+KP^2+PM^2+MB^2=PN^2+KP^2+PM^2+PL^2=11^2+7^2$$ </p>
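<p>The identity at work here is the British flag theorem: for any point $P$ and rectangle $ABCD$, $PA^2+PC^2=PB^2+PD^2$. A quick numerical check of both the theorem and the answer (the coordinates are chosen arbitrarily for illustration):</p>

```python
import math, random

def sq(p, q):
    """Squared distance between two points."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

# British flag theorem on an axis-aligned rectangle and a random point
A, B, C, D = (0, 0), (4, 0), (4, 3), (0, 3)
P = (random.uniform(-5, 5), random.uniform(-5, 5))
assert abs(sq(P, A) + sq(P, C) - sq(P, B) - sq(P, D)) < 1e-9

# With PA = 11, PB = 13, PC = 7, the theorem gives DP
x = math.sqrt(11**2 + 7**2 - 13**2)
print(x)  # 1.0
```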
|
<p>What is meant when one says that one has chosen a basis of fields on the manifold with ``anholonomy"?</p>
<p>I get the feeling that it is a choice of basis with non-trivial structure constants say $C^{k}_{ij}$ </p>
<p>Now in some papers people seem to state that once such a basis is chosen one can write the Christoffel symbols in terms of the structure constants by the following equation,</p>
<p>I am writing the equation with all indices lowered (as written in the original paper) </p>
<p>$\Gamma_{ijk} = \frac{1}{2}(C_{ijk} - C_{ikj} - C_{jki})$</p>
<p>I suppose they are assuming that the connection is chosen to be torsion free since the above equation satisfies torsion free-ness condition. </p>
<p>But just being torsion free doesn't seem to be enough to derive the above relationship.</p>
<p>This relationship is seen in the papers in the context of choosing a vierbein on homogeneous spaces equipped with a Riemannian metric. </p>
<p>I would like to know from where and how does the above equation come.</p>
<p>In general it is surely impossible that the structure constants determine the connection! </p>
| jvkersch | 3,909 | <p>You are correct, a nonholonomic (or anholonomic) basis of the tangent bundle is a set of vector fields $X_i$, $i = 1, \ldots, n$, with nonvanishing structure constants: $[X_i, X_j] = C_{ij}^k X_k$. I'm not sure where this terminology comes from, since in classical mechanics nonholonomicity means exactly the opposite (i.e. the Lie bracket is not closed).</p>
<p>I haven't seen the formula you wrote down. The indices seem to be in the wrong places in the second and third term, could you check this? </p>
<p>Not sure if this helps, but on a Lie group at least there is a connection $\nabla$ defined in terms of the Lie bracket of left invariant vector fields as </p>
<p>$\nabla_X Y = \frac{1}{2} [X, Y]$</p>
<p>So the Christoffel symbols for this connection are essentially the structure constants of the Lie group. This is due to Milnor.</p>
<p>I guess this could be extended to the case of homogeneous manifolds, but at some stage something has got to give since the Lie bracket is not a connection (not tensorial in the first argument). So in that case, maybe your term 2 and 3 are the necessary correction terms. </p>
|
<p>What is meant when one says that one has chosen a basis of fields on the manifold with ``anholonomy"?</p>
<p>I get the feeling that it is a choice of basis with non-trivial structure constants say $C^{k}_{ij}$ </p>
<p>Now in some papers people seem to state that once such a basis is chosen one can write the Christoffel symbols in terms of the structure constants by the following equation,</p>
<p>I am writing the equation with all indices lowered (as written in the original paper) </p>
<p>$\Gamma_{ijk} = \frac{1}{2}(C_{ijk} - C_{ikj} - C_{jki})$</p>
<p>I suppose they are assuming that the connection is chosen to be torsion free since the above equation satisfies torsion free-ness condition. </p>
<p>But just being torsion free doesn't seem to be enough to derive the above relationship.</p>
<p>This relationship is seen in the papers in the context of choosing a vierbein on homogeneous spaces equipped with a Riemannian metric. </p>
<p>I would like to know from where and how does the above equation come.</p>
<p>In general it is surely impossible that the structure constants determine the connection! </p>
| José Figueroa-O'Farrill | 394 | <p>Maybe I should elaborate on my comment. On a riemannian manifold $(M,g)$ there exists a unique metric-compatible torsion-free affine connection. It's the Levi-Civita connection and one can prove its existence and uniqueness constructively by giving a formula, known as the Koszul formula.
This formula is given by
$$2 g(\nabla_XY,Z) = X g(Y,Z) + Y g(Z,X) - Z g(X,Y) - g(Y,[X,Z]) - g([Y,Z],X) + g([X,Y],Z) $$
where $X,Y,Z$ are vector fields on $M$.
Now take a local orthonormal frame $(e_i)$ for the tangent bundle. Orthonormality says that $g(e_i,e_j)$ is constant, whereas because they are a frame $[e_i,e_j] = C_{ij}^k e_k$ for some <em>functions</em> $C_{ij}^k$. (They will not be constant in general.) If you now apply the Koszul formula to the elements in this frame you get your expression, where
$$C_{ijk} = g([e_i,e_j],e_k).$$</p>
|
54,815 | <p>A notorious question with prime numbers is estimating the gaps between consecutive primes. That is, if $(p_n)_{n \geq 1}$ is the canonical enumeration of the primes, then set $g_n = p_{n+1} - p_n$. It is shown that $g_n > \frac{c \log(n) \log \log(n) \log \log \log \log(n)}{(\log \log \log(n))^2}$ infinitely often, but a precise estimate is not known.</p>
<p>My question is, is there a 'natural' superset of the primes that are of interest (say, the set of numbers that are either primes or product of two primes) such that the gap between consecutive members is well known or well estimated?</p>
| Micah Milinovich | 3,659 | <p>Let $q_n$ denote the $n^{\text{th}}$ number that is a product of exactly two distinct primes. It is known that
$$\liminf_{n\to \infty} \ (q_{n+1}-q_n) \le 6.$$
This is a result of Goldston, Graham, Pintz, and Yildirim.</p>
<p><a href="http://arxiv.org/abs/math/0609615" rel="noreferrer">http://arxiv.org/abs/math/0609615</a></p>
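<p>Small gaps between such numbers are easy to observe empirically; a sketch listing the first few $E_2$ numbers (products of exactly two distinct primes) and their gaps:</p>

```python
def factor_pattern(n):
    """Return the sorted list of prime exponents of n."""
    exps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return sorted(exps)

# products of exactly two distinct primes, below 60
e2 = [n for n in range(2, 60) if factor_pattern(n) == [1, 1]]
gaps = [b - a for a, b in zip(e2, e2[1:])]
print(e2)    # note 33 = 3*11, 34 = 2*17, 35 = 5*7 are consecutive integers
print(gaps)
```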
|
120,281 | <p>Does this outline of a proof work?</p>
<p>Consider the ball and the bidisc in $\mathbb{C}^2$. Give each space its Bergman metric. To show that the ball and the bidisc are not biholomorphically equivalent, it is enough to show that they are not isometric. </p>
<p>One way to distinguish the two spaces is their sectional curvature. I think I have shown that the sectional curvature of the Bergman metric of the ball is constant and negative, whereas the sectional curvature of the bidisc is nonpositive but not constant. For example, in the plane generated by the vectors $\langle 1,0 \rangle$ and $\langle 0,1 \rangle$ the sectional curvature is 0, but in the plane generated by $\langle 1,0\rangle$ and $\langle i,0 \rangle $ the sectional curvature is negative.</p>
<p>Is this true? Is there anything subtle I might have missed? I have seen a lot of pretty convoluted proofs of this fact, and I would think that this basic outline would be recorded in print somewhere if it is true, but I cannot seem to find it.</p>
| ahmed sulejmani | 31,537 | <p>You can look at the holomorphic sectional curvature. For the ball it is constant. Actually the constancy of the holomorphic sectional curvature of the Bergman metric distinguishes the ball (and domains biholomorphic to it) by an old theorem of Lu Qi Keng. </p>
|
1,405,486 | <p>My question concerns division by zero. Let's say there are two functions, $g(x)$ and $f(x)$ that approach $0$, as $x\to t$, also assume their derivative w.r.t. $x$ is finite as $x\to t$. Using L'Hopital's Rule: $\frac{f(x)}{g(x)} = \frac{f'(x)}{g'(x)}$ $x\to t$, Can you say $\frac{f'(x)}{f(x)} = \frac{g'(x)}{g(x)}$? Or $\frac{f'(x)}{f(x)} - \frac{g'(x)}{g(x)} = 0$? To be more specific is $\frac{1}{0} = \frac{1}{0}$? Or is it undefined?</p>
| Dodo | 263,266 | <p>Arguably, $\frac10$ can take on any or no value. Therefore it is incorrect, though this may seem strange, to say that $\frac10=\frac10$ in any (real) case. Of course, any value <em>approaching</em> zero but not zero can be compared.</p>
<p><strong>Edit:</strong> In any other real case, $\frac1x=\frac1x$, because values only have a single reciprocal.</p>
|
<p>Show that <span class="math-container">$\lnot(p\lor(\lnot p\land q))$</span> is logically equivalent to <span class="math-container">$\lnot p \land \lnot q$</span>.</p>
<p>I am wondering whether what I did is correct. Very new to learning simple logic.</p>
<p><span class="math-container">$$\lnot(p\lor(\lnot p\land q)) \equiv \lnot((p\lor\lnot p)\land(p\lor q)) \\ \equiv \lnot(\text{T} \land (p\lor q)) \\ \equiv\lnot(p \lor q) \\ \equiv \lnot p \land
\lnot q \\ \square$$</span></p>
| SagarM | 142,677 | <p>Only your last step is wrong:
<span class="math-container">$$\neg(p \lor q) \equiv (\neg p \land \neg q)$$</span></p>
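<p>Whatever the derivation, the equivalence itself can be checked exhaustively; a two-line truth-table sketch:</p>

```python
from itertools import product

# check ¬(p ∨ (¬p ∧ q)) ≡ ¬p ∧ ¬q for all truth assignments
for p, q in product((False, True), repeat=2):
    lhs = not (p or ((not p) and q))
    rhs = (not p) and (not q)
    print(p, q, lhs, rhs)
    assert lhs == rhs
```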
|
2,136,183 | <p>Find integers x,y such that the repeating decimal 0.712341234.... = x/y.</p>
<p>I would actually do this problem if the 7 was not there. If the 7 was not there, my proof would be as follows.</p>
<p>proof:</p>
<p>Let z = 0.12341234...</p>
<p>Then 10^4z = 1234.1234...</p>
<p>10^4z - z = 1234</p>
<p>z = 1234/(10^4 - 1)</p>
<p>x = 1234, y = 10^4-1</p>
<p>So my question is, how would this change when there is a random number thrown in there that is not part of the repeating decimal?</p>
<p>Edit: Proof after hints given</p>
<p>Let z = 0.712341234...</p>
<p>Then 10z = 7.1234...</p>
<p>10z - 7 = 0.1234...</p>
<p>10^4*(10z - 7) = 1234.1234...</p>
<p>10^4*(10z-7) - (10z-7) = 1234</p>
<p>(10z-7) * (10^4 - 1) = 1234</p>
<p>(10z-7) = 1234/(10^4 - 1)</p>
<p>10z = 1234/(10^4-1) + 7</p>
<p>z = (1234/(10^4-1) + 7)/10</p>
<p>x = 1234/(10^4-1) + 7, y = 10</p>
<p>I mean this does give me the correct answer, but x isn't exactly an integer. (Clearing denominators fixes this: z = (1234 + 7(10^4 - 1))/(10(10^4 - 1)) = 71227/99990, so x = 71227 and y = 99990.)</p>
| Henricus V. | 239,207 | <p>$$0.7\overline{1234} = 0.7 + 0.0\overline{1234}
$$</p>
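<p>Carrying the hint through gives $0.7\overline{1234} = \frac{7}{10} + \frac{1}{10}\cdot\frac{1234}{9999} = \frac{71227}{99990}$; a quick sanity check with Python's exact rationals:</p>

```python
from fractions import Fraction

z = Fraction(7, 10) + Fraction(1234, 9999) / 10   # 0.7 + 0.0(1234 repeating)
print(z)                                          # 71227/99990
digits = z.numerator * 10**10 // z.denominator    # first ten decimal digits, as an int
print(digits)                                     # 7123412341
```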
|
90,006 | <p>I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both contain many problems on basic differential equations.</p>
<p>Unfortunately, I never had a course in differential equations. Otherwise, my background is reasonably good, and I have knowledge of real analysis (at the level of baby Rudin), basic abstract algebra, topology, and complex analysis. I feel I could handle a more concise and mathematically mature approach to differential equations than the "cookbook" style that is normally given to first and second year students. I was wondering if someone to access to the above books that I am working through could suggest a concise reference that would cover what I need to know to solve the problems in them. In particular, it seems I need to know basic solution methods and basic existence and uniqueness theorem. On the other hand, I have no desire to specialize in differential equations, so a reference work like V.I Arnold's book on ordinary differential equations would not suit my needs, and I certainly don't have any need for knowledge of, say, the Laplace transform or PDEs. </p>
<p>To reiterate, I just need a concise, high level overview of the basic (by hand) solution techniques for differential equations, along with some discussion of the basic uniqueness and existence theorems. I realize this is rather vague, but looking through the two problem books I listed above should give a more precise idea of what I mean. Worked examples would be a plus. I am very unfamiliar with the subject matter, so thanks in advance for putting up with my very nebulous request.</p>
<p>EDIT: I found Coddington's "Intoduction to Ordinary Differential Equations" to be what I needed. Thanks guys.</p>
| David Mitra | 18,986 | <p>Let $\epsilon=1/4$. Let $N$ be given. Then
$${n+1\over 2n+3}\le {n \over 2n}\le{1\over2}\quad\Rightarrow|{n+1\over 2n+3} -1|\ge 1/2.$$
This is true for all $n\ge N$. </p>
<p>So, there is no $N$ such that $|{n+1\over 2n+3} - 1|<\epsilon$
for all $n\ge N$. This shows the sequence does not converge to 1.</p>
|
2,831,469 | <p>How can we solve $$y'''(t)+a(y''(t))^2+b(y'(t))^3=0$$</p>
<p>Could one make some kind of least common denominator argument to decide possible substitutions? Since the chain rule will come into play, I suppose a substitution both for the variable $t$ and the function $y$ could be useful. Possibly some powers of them?</p>
| Julián Aguirre | 4,791 | <p>As Adrian suggests, let $y'=z$, to get the second order equation
$$
z''+a\,(z')^2+b\,z^3=0.
$$
Since the independent variable $t$ does not appear explicitly in the equation (I am assuming $a$ and $b$ are constants), we let
$$
z'=p,\quad z''=\frac{dp}{dt}=\frac{dp}{dz}\,\frac{dz}{dt}=p\,\frac{dp}{dz}.
$$
This gives the first order equation
$$
p\,\frac{dp}{dz}+a\,p^2+b\,z^3=0,
$$
which written as
$$
\frac{dp}{dz}=-a\,p-b\,z^3\,p^{-1}
$$
is a Bernoulli equation. To solve it, let $u=p^2$, so that $\frac{du}{dz}=2p\,\frac{dp}{dz}$. This gives the linear equation
$$
\frac{du}{dz}=-2a\,u-2b\,z^3.
$$
I have not done the calculations, but my impression is that you will not be able to get an explicit solution in terms of elementary functions.</p>
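<p>The chain of substitutions is easy to validate numerically: along any trajectory of $z''+a(z')^2+bz^3=0$, the quantity $u=(z')^2$, viewed as a function of $z$, must satisfy $\frac{du}{dz}=2p\frac{dp}{dz}=-2a\,u-2b\,z^3$ (note the factor of 2 from the chain rule). A rough finite-difference check, with arbitrarily chosen parameters:</p>

```python
# Integrate z'' = -a*(z')**2 - b*z**3 with RK4, then check du/dz = -2a*u - 2b*z^3
a, b, dt, steps = 0.2, 0.1, 1e-3, 500
z, p = 1.0, 1.0                      # initial z(0), z'(0); z' stays positive here

def acc(z, p):
    return -a * p * p - b * z**3

traj = []
for _ in range(steps):
    traj.append((z, p))
    # classical RK4 step for the first-order system (z, p)
    k1z, k1p = p, acc(z, p)
    k2z, k2p = p + 0.5 * dt * k1p, acc(z + 0.5 * dt * k1z, p + 0.5 * dt * k1p)
    k3z, k3p = p + 0.5 * dt * k2p, acc(z + 0.5 * dt * k2z, p + 0.5 * dt * k2p)
    k4z, k4p = p + dt * k3p, acc(z + dt * k3z, p + dt * k3p)
    z += dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
    p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6

errs = []
for (z0, p0), (z1, p1), (z2, p2) in zip(traj, traj[1:], traj[2:]):
    dudz = (p2 * p2 - p0 * p0) / (z2 - z0)   # central difference of u = p^2 in z
    errs.append(abs(dudz - (-2 * a * p1 * p1 - 2 * b * z1**3)))
print(max(errs))   # small: the reduced linear equation holds along the trajectory
```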
|
2,831,469 | <p>How can we solve $$y'''(t)+a(y''(t))^2+b(y'(t))^3=0$$</p>
<p>Could one make some kind of least common denominator argument to decide possible substitutions? Since the chain rule will come into play, I suppose a substitution both for the variable $t$ and the function $y$ could be useful. Possibly some powers of them?</p>
| user577215664 | 475,762 | <p>$$y'''(t)+a(y''(t))^2+b(y'(t))^3=0$$
Substitute $z=y'$
$$z''(t)+a(z'(t))^2+bz^3=0$$
Substitute $p=z'$
$$\frac {dp}{dz}p+ap^2+bz^3=0$$
$$\frac 12(p^2)'+ap^2+bz^3=0$$
Finally substitute $w=p^2$
$$\frac 12w'+aw+bz^3=0$$
a first-order linear equation in $w$, solvable with an integrating factor.</p>
|
477,250 | <p>Let $u=\{u_\alpha\}$ be an open cover of $[a,b], a <b $. Let $S=\{r \in [a,b]$ such that $[a,r]$ is covered by some finite collection of open sets belonging to $u\}$. Is $S$ non-empty ? How ?</p>
| bradhd | 5,116 | <p>Let $a_{ij}$ be the $(i,j)^{th}$ entry of $A$ and $b_{ij}$ the $(i,j)^{th}$ entry of $A^{-1}$. Then from $AA^{-1}=I$ we have</p>
<p>$$\sum_{k=1}^5 a_{ik}b_{kj} = \delta_{ij},$$ and the given condition on the rows of $A$ is that
$$\sum_{i=1}^5 a_{ij}=1$$ for each $j$. We wish to deduce that
$$\sum_{i,j}b_{ij}=5.$$</p>
<p>We have $$\sum_{i,j}\delta_{ij}=5$$ since $\delta_{ij}=0$ unless $i=j=1,\ldots,5$, for which $\delta_{ij}=1$. The desired equation can be obtained by summing the first equation over all $i,j$ and using the other two known equations.</p>
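<p>The conclusion (if each column of $A$ sums to 1, the entries of $A^{-1}$ sum to $n$) is easy to test on a small example; a hand-rolled $2\times2$ sketch:</p>

```python
# A 2x2 matrix whose columns each sum to 1
a, b, c, d = 0.3, 0.6, 0.7, 0.4                # columns: 0.3+0.7 = 1, 0.6+0.4 = 1
det = a * d - b * c
inv = [d / det, -b / det, -c / det, a / det]   # entries of A^{-1}
print(sum(inv))  # should equal n = 2
```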
|
107,653 | <p>Suppose you have the set of all possible $n$ x $n$ square adjacency matrices where $n$={1,2,3,4...}. For each matrix, compute the logarithm of the largest eigenvalue. Is it true that the set of logarithms you obtain is dense in $\mathbb{R}$? How do you begin to prove/disprove this?</p>
| Douglas Lind | 8,112 | <p>I think you mean dense in $[0,\infty)$, since the spectral radius of a nonnegative integer matrix must be at least 1 (the product of all nonzero eigenvalues must be a nonzero integer). You are effectively asking whether Perron numbers are dense in $[1,\infty)$, and this is easy to see. For example, let $A_n$ be the companion matrix of $x^n-x-1$ and $\lambda_n$ be its spectral radius. It's easy to check that $\log \lambda_n\to 0$, and that $k \log \lambda_n =\log \lambda_n^k$ is the spectral radius of $A_n^k$, so these numbers, as $n, k=1,2,3,\dots$ are dense. Finally, one can recode the nonnegative integer matrix $A_n^k$ to an larger adjacency matrix with the same spectral radius using the standard idea called "higher block presentation" from symbolic dynamics (this is described in my book with Marcus called "An Introduction to Symbolic Dynamics and Coding").</p>
|
3,036,489 | <blockquote>
<p>For <span class="math-container">$k, l \in \mathbb N$</span>
<span class="math-container">$$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i=\binom{k+l+2}{k+1}-1$$</span>
How can I prove this?</p>
</blockquote>
<p>I thought some ideas with Pascal's triangle, counting paths on the grid and simple deformation of the formula.</p>
<p>It can be checked <a href="https://www.wolframalpha.com/input/?i=sum(i+%3D+0,+k,+sum(j+%3D+0,+l,+C(i+%2B+j,+i)))" rel="nofollow noreferrer">here (wolframalpha)</a>.</p>
<p>If the proof is difficult, please let me know the main idea.</p>
<p>Sorry for my poor English.</p>
<p>Thank you.</p>
<p>EDIT:
I got <a href="https://math.stackexchange.com/a/3036497/625590">a great and short proof</a> using the Hockey-stick identity from Anubhab Ghosal, and because of this form, I could also get <a href="https://math.stackexchange.com/a/3036508/625590">Robert Z's specialized answer</a>.
So I don't think this is fully a duplicate.</p>
| Anubhab | 602,341 | <p><span class="math-container">$$\displaystyle\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i=\sum_{i=0}^k\sum_{j=i}^{i+l}\binom{j}i=\sum_{i=0}^k\binom{i+l+1}{i+1} \ ^{[1]}$$</span></p>
<p><span class="math-container">$$=\sum_{i=0}^k\binom{i+l+1}{l}=\sum_{i=l}^{k+l+1}\binom{i}{l}−1=\binom{k+l+2}{k+1}-1\ ^{[1]}$$</span></p>
<p><a href="https://en.wikipedia.org/wiki/Hockey-stick_identity" rel="noreferrer">1. Hockey-Stick Identity</a></p>
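<p>The identity is quick to verify by brute force with exact integer arithmetic:</p>

```python
from math import comb

# check sum_{i<=k} sum_{j<=l} C(i+j, i) == C(k+l+2, k+1) - 1
for k in range(8):
    for l in range(8):
        lhs = sum(comb(i + j, i) for i in range(k + 1) for j in range(l + 1))
        assert lhs == comb(k + l + 2, k + 1) - 1
print("ok")
```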
|
3,036,489 | <blockquote>
<p>For <span class="math-container">$k, l \in \mathbb N$</span>
<span class="math-container">$$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i=\binom{k+l+2}{k+1}-1$$</span>
How can I prove this?</p>
</blockquote>
<p>I thought some ideas with Pascal's triangle, counting paths on the grid and simple deformation of the formula.</p>
<p>It can be checked <a href="https://www.wolframalpha.com/input/?i=sum(i+%3D+0,+k,+sum(j+%3D+0,+l,+C(i+%2B+j,+i)))" rel="nofollow noreferrer">here (wolframalpha)</a>.</p>
<p>If the proof is difficult, please let me know the main idea.</p>
<p>Sorry for my poor English.</p>
<p>Thank you.</p>
<p>EDIT:
I got <a href="https://math.stackexchange.com/a/3036497/625590">a great and short proof</a> using the Hockey-stick identity from Anubhab Ghosal, and because of this form, I could also get <a href="https://math.stackexchange.com/a/3036508/625590">Robert Z's specialized answer</a>.
So I don't think this is fully a duplicate.</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span></p>
<blockquote>
<p><span class="math-container">$\ds{\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}
{i + j \choose i} = {k + \ell + 2 \choose k + 1} - 1:\ {\LARGE ?}.\qquad k, \ell \in \mathbb{N}}$</span>.</p>
</blockquote>
<p><span class="math-container">\begin{align}
&\bbox[10px,#ffd]{\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}
{i + j \choose i}} =
\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}{i + j \choose j} =
\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}{-i - 1 \choose j}
\pars{-1}^{\,j}
\\[5mm] = &\
\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}\pars{-1}^{\,j}
\bracks{z^{\, j}}\pars{1 + z}^{-i - 1} =
\sum_{i = 0}^{k}\sum_{j = 0}^{\ell}\pars{-1}^{\,j}
\bracks{z^{0}}{1 \over z^{\, j}}\,\pars{1 + z}^{-i - 1}
\\[5mm] = &\
\bracks{z^{0}}\sum_{i = 0}^{k}\pars{1 \over 1 + z}^{i + 1}
\sum_{j = 0}^{\ell}\pars{-\,{1 \over z}}^{\,j}
\\[5mm] = &\
\bracks{z^{0}}\braces{{1 \over 1 + z}\,
{\bracks{1/\pars{1 + z}}^{k + 1} - 1 \over 1/\pars{1 + z} - 1}}
\braces{{\pars{-1/z}^{\ell + 1} - 1 \over -1/z - 1}}
\\[5mm] = &\
\bracks{z^{0}}\braces{%
{1 - \pars{1 + z}^{k + 1} \over -z}
\,{1 \over \pars{1 + z}^{k + 1}}}
\braces{{\pars{-1}^{\ell + 1} - z^{\ell + 1} \over -1 - z}\,{z \over z^{\ell + 1}}}
\\[5mm] = &\
\bracks{z^{\ell + 1}}\braces{1 - {1 \over \pars{1 + z}^{k + 1}}}
\braces{z^{\ell + 1} + \pars{-1}^{\ell} \over 1 + z}
\\[5mm] = &\
\pars{-1}^{\ell}\bracks{z^{\ell + 1}}
\bracks{\pars{1 + z}^{-1} - \pars{1 + z}^{-k - 2}}
\\[5mm] = &\
\pars{-1}^{\ell}\bracks{\pars{-1}^{\ell + 1} - {-k - 2 \choose \ell + 1}}
\\[5mm] = &\
-1 - \pars{-1}^{\ell}\,{-\bracks{-k - 2} + \bracks{\ell + 1} - 1 \choose \ell + 1}\pars{-1}^{\ell + 1}
\\[5mm] = &\
-1 + { k + \ell + 2 \choose \ell + 1} =
\bbx{{k + \ell + 2 \choose k + 1} - 1}
\end{align}</span></p>
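For readers who want to sanity-check the identity before working through the derivation, here is a small Python sketch (my addition, not part of the original answer) that compares both sides for small $k$ and $\ell$:

```python
from math import comb

def lhs(k, l):
    # double sum: sum_{i=0}^{k} sum_{j=0}^{l} C(i+j, i)
    return sum(comb(i + j, i) for i in range(k + 1) for j in range(l + 1))

def rhs(k, l):
    # closed form: C(k+l+2, k+1) - 1
    return comb(k + l + 2, k + 1) - 1

assert all(lhs(k, l) == rhs(k, l) for k in range(8) for l in range(8))
print("identity verified for 0 <= k, l <= 7")
```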
|
1,607,108 | <p>Using the ratio test:
$$\frac{1}{3}\lim\limits_{n\to\infty}\left|\frac{(n+1)^3(\sqrt{2}+(-1)^{n+1})^{n+1}}{n^3(\sqrt{2}+(-1)^n)^n}\right|$$</p>
<p>Without evaluating the limit, the numerator is greater than the denominator, so the series is divergent. Is there an easier method for checking the convergence of this series?</p>
| πr8 | 302,863 | <p>If $2\vert x^2-1$, then $x$ must be odd, so write $x=2y+1, y\in\mathbb{Z}$. </p>
<p>Then $x^2-1=4y^2+4y=8\frac{y(y+1)}{2}$. </p>
<p>Note that at least one of $y,y+1$ is even, so the fraction on the right is always an integer. </p>
<p>Thus we have that $8\vert x^2-1$.</p>
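A quick numerical sketch (my addition, not part of the original answer) confirming the claim over a range of odd integers:

```python
# every odd integer is of the form x = 2y + 1; check that 8 | x^2 - 1
for y in range(-1000, 1001):
    x = 2 * y + 1
    assert (x * x - 1) % 8 == 0
print("8 divides x^2 - 1 for every odd x tested")
```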
|
2,127,110 | <p>I am working on this differetiation problem:</p>
<p>$ \frac{d}{dx}x(1-\frac{2}{x})$</p>
<p>and I am currently stuck at this point:</p>
<p>$1\cdot \left(1-\frac{2}{x}\right)+\frac{2}{x^2}x$</p>
<p>Symbolab tells me this simplifies to $1$, but I do not understand how. I am under the impression that:</p>
<p>$1\cdot \left(1-\frac{2}{x}\right)+\frac{2}{x^2}x \equiv 1- 2x^{-x^2}-2^{-x}$</p>
| Community | -1 | <p><strong>Do not get confused by fractions and exponents.</strong>
You should remember that $\frac {k}{n} = kn^{-1}$ and not $k^{-n} $ and that $\frac {k}{n} l = kln^{-1}$ and not $kl^{-n} $.</p>
<p>We have $$1\times (1-\frac {2}{x}) = 1-\frac {2}{x} \tag {1}$$ and then $$\frac {2}{x^2}x = \frac {2}{x^2} \times x = \frac {2}{x} \tag {2} $$ </p>
<p>What do we get by adding $(1)$ and $(2)$? The result is $1-\frac {2}{x} + \frac {2}{x} = 1$. Hope it helps. </p>
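As a numerical sanity check (my addition, plain Python, not part of the original answer), a central-difference approximation confirms that the derivative is $1$ everywhere the function is defined:

```python
def f(x):
    # the function being differentiated: x * (1 - 2/x) = x - 2 for x != 0
    return x * (1 - 2 / x)

def numerical_derivative(g, x, h=1e-6):
    # central-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x0 in (0.5, 1.0, 3.0, -2.0):
    assert abs(numerical_derivative(f, x0) - 1) < 1e-6
print("f'(x) = 1 at all sampled points")
```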
|
<p>The cost per hour of running a cruiser is $\$ \left(\frac {V^2}{40} + 10\right)$, where $V$ is the speed in knots. I’ve answered the first question by showing that the cost of covering a distance $D$ would be $\$\frac DV \left(\frac {V^2}{40} + 10\right)$. I was then asked to find the most economical speed for running the cruiser, and I have no idea how to get it.</p>
| Nash J. | 439,920 | <p>The <em>most economical speed</em> will be the one at which the cost ($C$) is minimum. So the problem boils down to
\begin{align}
& \dfrac{dC}{dV} = 0 \\
\implies & \dfrac{d}{dV} \left[ \dfrac{D}{V} \left( \dfrac{V^{2}}{40} + 10\right)\right] = 0 \\
\implies & D \dfrac{d}{dV} \left( \dfrac{V}{40} + \dfrac{10}{V}\right) = 0 \\
\implies & \dfrac{1}{40} - \dfrac{10}{V^{2}} = 0 \tag{assuming $D > 0$} \\
\implies & V = \sqrt{400} = 20 \tag{since $V \geq 0$}
\end{align}</p>
<p>Hence the speed at which the cost will be minimized will be 20 knots. </p>
<p>Here is a plot of cost vs speed for $D = 20$ (you can choose any positive value). It is clear from the plot that cost attains its minimum value at $V = 20$ knots.<a href="https://i.stack.imgur.com/CCnYu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CCnYu.jpg" alt="enter image description here"></a></p>
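A brute-force numerical check (my addition, not part of the original answer; $D$ is set to an arbitrary positive value, since the optimum does not depend on it):

```python
def cost(V, D=20.0):
    # total cost of covering distance D at speed V knots
    return (D / V) * (V ** 2 / 40 + 10)

# evaluate the cost on a fine grid of speeds and pick the cheapest
speeds = [v / 10 for v in range(10, 401)]  # 1.0 .. 40.0 knots
best = min(speeds, key=cost)
print(best)  # -> 20.0
```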
|
1,937,515 | <p>If a function is analytic on $B(0,1)$ and continuous as well as real valued on the boundary (i.e. $|z| = 1$),then show that the function is constant.</p>
| TZakrevskiy | 77,314 | <p>Hint: apply the maximum principle for harmonic functions to imaginary part of your function.</p>
<p><strong>Edit</strong></p>
<p>As usual, we write the complex variable $z$ as $$z=x+iy$$
and the function $f$ has real and imaginary parts $u$ and $v$ respectively:
$$f(x+iy) = u(x,y)+iv(x,y).$$
It is a standard result that $u$ and $v$ are harmonic:
$$\Delta u = \Delta v =0 .$$
We can therefore apply the maximum principle (<a href="https://en.wikipedia.org/wiki/Maximum_principle" rel="nofollow">https://en.wikipedia.org/wiki/Maximum_principle</a>) to the function $v$; it is identically zero on the boundary $x^2+y^2=1$, hence $v$ is identically zero on the whole unit ball $B(0,1)$.</p>
<p>After that, the Cauchy-Riemann condition $$\partial_x u = \partial_y v,\quad \partial_y u =-\partial_x v$$
assures us that $\nabla u =0$ and therefore $u$ is a constant function.</p>
<p>Thus, $f$ is constant.</p>
|
259,431 | <p>In the book of Richard Hammack, I come accross with the following question:</p>
<blockquote>
<p>There are two different equivalence relations on the set $A = \{a,b\}$.
Describe them.</p>
</blockquote>
<p>OK, I found that the solution is,</p>
<p>$$R_1 = \{(a,a),(b,b),(a,b),(b,a)$$
and
$$R_2 = \{(a,a),(b,b)\}$$</p>
<p>Then I thought of two more equivalence relations, $R_3 = \{(a,a)\}$ and $R_4 = \{(b,b)\}$. But when I looked at the answer, I saw that $R_1$ and $R_2$ are correct while the others are not. Why is that?</p>
| joriki | 6,622 | <p>Because they're not reflexive. An equivalence relation is reflexive, i.e. it contains all pairs of the form $(x,x)$.</p>
|
653,887 | <p>Why is the following set linearly independent for all x on ($-\infty$, $\infty$)?</p>
<p>$$\{1+x, 1-x, 1-3x\}$$</p>
<p>The Wronskian is $0$, but Wolfram Alpha says the set is still linearly independent. Why is this?</p>
<p>Thanks!</p>
<p><a href="http://www.wolframalpha.com/input/?i=Is+%7B1-x%2C+1%2Bx%2C+1-3x%7D+linearly+independent%3F" rel="nofollow">http://www.wolframalpha.com/input/?i=Is+%7B1-x%2C+1%2Bx%2C+1-3x%7D+linearly+independent%3F</a></p>
| user127.0.0.1 | 50,800 | <p>You have <strong>three</strong> polynomials of degree $1$ hence they cannot be linear independent.</p>
|