qid | question | author | author_id | answer |
|---|---|---|---|---|
2,220 | <p>Some of you (myself included) might remember how, as a new user, you struggled to find questions to answer, hoping to have those answers upvoted and accepted...</p>
<p>You really want that. At the very least you want to write comments, but that requires 50 reputation.</p>
<p>As a result you look for old questions, possibly with very good answers that were already accepted, and write your own answer.</p>
<p>Now there is nothing wrong with adding more answers to threads like that, but it turns out that the answers added by some new users are of... "low quality content".</p>
<p>In his answer to my previous question Qiaochu said that the community should discuss and decide what is low quality and how to treat it. Since I'm starting to feel flooded with old questions, I would like to discuss this right now, so that in the future we can judge what we should do about it.</p>
<p>So, what is "low quality" content, and how should we deal with it?</p>
| Qiaochu Yuan | 232 | <p>Just to have an option for people to respond to: I have been deleting answers that are not answers (that is, that are questions or otherwise not attempts to answer the question in the OP), and otherwise I have just been downvoting new bad answers. </p>
|
35,281 | <p>I am looking for applications of category theory and homotopy theory in set theory, and particularly in cardinal arithmetic. "Applications" in the broad sense of the word --- this would include theorems, definitions, questions, points of view (and papers) in set theory that could be motivated or understood with the help of category theory and homotopy theory. I am aware of some applications of set theory in category theory, e.g. large cardinal axioms (the Vopenka principle) are used to construct localisations in homotopy theory, but this is not what I am asking for. However, I would be interested to hear whether the Vopenka principle is equivalent to a statement in category or homotopy theory.</p>
<p>The reason for the question is that I am trying to better understand <a href="https://arxiv.org/abs/1006.4647" rel="nofollow noreferrer">this sketch</a> of an attempt to understand an invariant in PCF theory in terms of homotopy theory. I am most interested in applications to cardinal arithmetic.</p>
| Andreas Blass | 6,794 | <p>Peter Freyd wrote a paper, "The Axiom of Choice," in which he used topos-theoretic methods to prove that the axiom of choice is independent of (classical) Zermelo-Fraenkel set theory. Unlike earlier topos-versions of set-theoretic independence proofs, Freyd's construction does not merely provide a category-theoretic view of a model that had already been considered by set theorists. His models can be obtained by set-theoretic forcing methods (by another result of Freyd, in "All topoi are localic"), but those particular forcing constructions had not been considered until Andre Scedrov and I analyzed them (in "Freyd's models for the independence of the axiom of choice"). So I think it's fair to say that these models were a contribution from category theory to set theory. </p>
|
1,072,669 | <ul>
<li><p>Let <span class="math-container">$C_1,C_2,C_3,C_4$</span> be compact convex subsets of <span class="math-container">$\mathbb{R}^2$</span> such that <span class="math-container">$C_1\cap C_2\cap C_3\neq\emptyset,C_1\cap C_2\cap C_4\neq\emptyset,C_1\cap C_3\cap C_4\neq\emptyset,C_2\cap C_3\cap C_4\neq\emptyset$</span>.</p>
<p>Show that <span class="math-container">$C_1\cap C_2\cap C_3\cap C_4\neq\emptyset$</span></p>
</li>
<li><p>Let <span class="math-container">$C_1,\dots,C_n$</span> be compact convex subsets of <span class="math-container">$\mathbb{R}^m,(m,n)\in\mathbb{N}^2$</span>. We suppose that <span class="math-container">$\displaystyle\forall 1\le j\le n,\bigcap_{1\le i\le n,i\neq j} C_i\neq\emptyset$</span>.</p>
<p>Do we have <span class="math-container">$\displaystyle\bigcap_{1\le i\le n} C_i\neq\emptyset$</span> ?</p>
</li>
</ul>
<hr />
<p>The first one seems quite plausible after drawing a picture, but I can't find a properly rigorous mathematical proof. I don't know what to do for the second.</p>
| Hugh Thomas | 94,551 | <p>I like Valeriy's answer, but let me also point out that the first question is a special case of Helly's theorem, which says that in $\mathbb R^d$, if you have at least $d+1$ sets such that any $d+1$ of them have a non-empty intersection, then the intersection of all the sets is non-empty. </p>
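To make the hypothesis concrete, here is a small numeric illustration (a sketch; the rectangles below are made-up example data, and axis-aligned rectangles are a special case in which intersections are easy to compute): every three of the four rectangles intersect, and so does the whole family, as Helly's theorem predicts.

```python
from itertools import combinations

# Hypothetical example: four axis-aligned rectangles (xmin, xmax, ymin, ymax).
rects = [(0, 4, 0, 4), (1, 5, 1, 5), (2, 6, 0, 3), (0, 3, 2, 6)]

def intersect(rs):
    # A family of boxes meets iff the intervals overlap in each coordinate.
    xmin = max(r[0] for r in rs); xmax = min(r[1] for r in rs)
    ymin = max(r[2] for r in rs); ymax = min(r[3] for r in rs)
    return xmin <= xmax and ymin <= ymax

triples_ok = all(intersect(t) for t in combinations(rects, 3))
all_ok = intersect(rects)
```

For convex sets in $\mathbb R^2$ this is exactly the $d=2$, "any $d+1$ intersect" hypothesis of Helly's theorem.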
|
4,040,301 | <p>If <span class="math-container">$\lim_{|x| \to \infty} g(x)/x = \infty$</span>, prove that <span class="math-container">$\{g(x)\mid x \in \mathbb{R}\} = \mathbb{R}.$</span></p>
| Wynne Liu | 714,963 | <p><span class="math-container">$\lim_{x \to -\infty}g(x)/x = \infty$</span> means <span class="math-container">$\lim_{x \to -\infty}g(x) = -\infty$</span>.</p>
<p><span class="math-container">$\lim_{x \to \infty}g(x)/x = \infty$</span> means <span class="math-container">$\lim_{x \to \infty}g(x) = \infty$</span>.</p>
<p>Since <span class="math-container">$g(x)$</span> is continuous, the Intermediate Value Theorem then gives every real value.</p>
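A numeric sketch of why continuity plus these two limits force surjectivity: for any target $y$, the growth condition supplies points where $g$ is below and above $y$, and bisection (the constructive face of the IVT) finds a preimage. Here $g(x)=x^3$ is just an assumed stand-in for a function satisfying the hypothesis.

```python
def g(x):
    # assumed example satisfying g(x)/x -> infinity as |x| -> infinity
    return x ** 3

def preimage(y, lo=-100.0, hi=100.0, tol=1e-9):
    # g(lo) < y < g(hi); bisection narrows down a root of g(x) = y
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```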
|
3,278 | <h3>What are Community Promotion Ads?</h3>
<p>Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.</p>
<h3>Why do we have Community Promotion Ads?</h3>
<p>This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:</p>
<ul>
<li>the site's twitter account</li>
<li>useful tools or resources for the mathematically inclined</li>
<li>interesting articles or findings for the curious</li>
<li>cool events or conferences</li>
<li>anything else your community would genuinely be interested in</li>
</ul>
<p>The goal is for future visitors to find out about <em>the stuff your community deems important</em>. This also serves as a way to promote information and resources that are <em>relevant to your own community's interests</em>, both for those already in the community and those yet to join. </p>
<h3>How does it work?</h3>
<p>The answers you post to this question <em>must</em> conform to the following rules, or they will be ignored. </p>
<ol>
<li><p>All answers should be in the exact form of:</p>
<pre><code>[![Tagline to show on mouseover][1]][2]
[1]: http://image-url
[2]: http://clickthrough-url
</code></pre>
<p>Please <strong>do not add anything else to the body of the post</strong>. If you want to discuss something, do it in the comments.</p></li>
<li><p>The question must always be tagged with the magic <a href="/questions/tagged/community-ads" class="post-tag moderator-tag" title="show questions tagged 'community-ads'" rel="tag">community-ads</a> tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.</p></li>
</ol>
<h3>Image requirements</h3>
<ul>
<li>The image that you create must be <strong>220 x 250 pixels</strong></li>
<li>Must be hosted through our standard image uploader (imgur)</li>
<li>Must be GIF or PNG</li>
<li>No animated GIFs</li>
<li>Absolute limit on file size of 150 KB</li>
</ul>
<h3>Score Threshold</h3>
<p>There is a <strong>minimum score threshold</strong> an answer must meet (currently <strong>6</strong>) before it will be shown on the main site.</p>
<p>You can check out the ads that have met the threshold with basic click stats <a href="http://meta.math.stackexchange.com/ads/display/3278">here</a>.</p>
| John | 20,946 | <p><a href="http://www.wolframalpha.com/input/?i=airspeed+velocity+of+an+unladen+swallow" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NWAyK.png" alt="Alt Text"></a></p>
|
3,278 | <p><em>(Question body identical to the first "Community Promotion Ads" occurrence above.)</em></p>
| Ilmari Karonen | 9,602 | <p><a href="http://www.geogebra.org/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9fRBB.png" alt="GeoGebra - Free mathematics software for learning and teaching"></a></p>
|
3,278 | <p><em>(Question body identical to the first "Community Promotion Ads" occurrence above.)</em></p>
| dls | 1,761 | <p><a href="http://blogs.ams.org/mathgradblog/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lcbn6.png" alt="AMS Graduate Student Blog - Read and write about topics that matter to you."></a></p>
|
2,536,866 | <p>In tensor notation the change of the electromagnetic field tensor by change of inertial reference frames can be done by the following formula :</p>
<p>$$F^{\alpha\beta} = \varLambda^{\alpha}_{\mu}\varLambda^{\beta}_{\nu}F^{\mu\nu}$$</p>
<p>But when this is represented by matrix multiplications it becomes:</p>
<p>$$F'=\varLambda F \varLambda^T$$
Where $F'$ is the matrix representation of the tensor $F^{\alpha\beta}$, $F$ of $F^{\mu\nu}$, and $\varLambda$ of either of the two tensors $\varLambda^{\alpha}_{\mu}$ or $\varLambda^{\beta}_{\nu}$, which have the same components.</p>
<p>I guess that in the end I am asking how matrix multiplication is expressed in tensor notation, or rather, when and how I can take an expression written in tensor notation and represent it by a matrix multiplication. </p>
| Xander Henderson | 468,350 | <p>I've never seen the notation $\mathbb{R}^{3\times 2}$ before (which is, perhaps, more a comment on my lack of exposure than anything else—a quick Googling reveals that it is actually pretty damn common), and can't say that I really like it—it looks too much like $\mathbb{R}^6$, which is something else entirely. That being said, it is entirely reasonable notation, as long as you are clear about it, i.e. when you use it, make a statement to the effect of "$\mathbb{R}^{m\times n}$ denotes the space (or set) of $m\times n$ matrices with entries in $\mathbb{R}$."</p>
<p>Other notations exist:
$$
M_{m\times n}(\mathbb{R}),
\qquad \mathcal{M}_{m\times n}(\mathbb{R}),
\qquad M_{m,n}(\mathbb{R}),
\qquad\text{or}\qquad
\mathbb{R}^{(m,n)},
$$
for example. Personally, I prefer something like $M_{m,n}(\mathbb{R})$, as this remains consistent with, say, $SO_{n}(\mathbb{R})$ for the special orthogonal group, or $GL_{n}(\mathbb{R})$ for the general linear group.</p>
<p>In any event, the moral of the story is to <em>be clear about your notation.</em> When you use a notation that could be potentially ambiguous, or for which alternatives exist, define your terms first and use your definitions consistently.</p>
|
2,042,257 | <p>I'm looking at the following differential equation:</p>
<p>$\frac{dx}{dt} = \frac{\sin^2 x - t^2}{t \cdot \sin(2x)}$</p>
<p>I rewrote it as</p>
<p>$(t \cdot \sin(2x))dx = (\sin^2x - t^2)dt$</p>
<p>$\Leftrightarrow (\underbrace{\sin^2 x - t^2}_{J(x,t)})dt + (\underbrace{-t \cdot \sin(2x)}_{I(x,t)}) dx = 0$</p>
<p>where I put the minus sign inside the function because I think it is important that a plus sign stands between the two terms when checking whether a differential equation is exact. But then:</p>
<p>$\partial_x J(x,t) = 2 \sin x \cos x = \sin(2x)$</p>
<p>$\partial_t I(x,t) = -\sin(2x)$</p>
<p>Those functions don't seem to satisfy the condition for a differential equation to be exact. However my teacher wrote in his answer the following:</p>
<p>$\frac{-\partial_x (\sin^2 x - t^2) - \partial_t (t \cdot \sin (2x))}{t \sin(2x)} = \frac{-2}{t}$</p>
<p>$\implies c' = - \frac{2}{t} c$</p>
<p>$\implies x' = \frac{\frac{\sin^2 x}{t^2} - 1}{\frac{\sin (2x)}{t}}$</p>
<p>and he states that this equation is now exact. What happened there? I've never seen anything like that before. I understand (more or less) what he is doing after $c' = -\frac{2}{t}c$ (separation of variables and integration), but I have no idea how he came up with the first line. Is that a known technique to transform a differential equation that is $\textit{almost}$ exact?</p>
<p>Thanks a lot in advance for your answers.</p>
<p>Julien.</p>
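A quick numeric cross-check (illustrative only) that the teacher's integrating factor works: multiplying by $\mu(t)=t^{-2}$, the solution of $c' = -\frac{2}{t}c$, makes the form exact, i.e. $\partial_x(\mu J) = \partial_t(\mu I)$ in the notation of the question.

```python
import math

# J dt + I dx with J = sin(x)^2 - t^2 and I = -t*sin(2x), as in the question;
# mu(t) = t**-2 solves c' = -(2/t) c.  Exactness needs d(mu*J)/dx == d(mu*I)/dt.
J = lambda x, t: math.sin(x) ** 2 - t ** 2
I = lambda x, t: -t * math.sin(2 * x)
mu = lambda t: t ** -2.0

def exactness_gap(x, t, h=1e-6):
    # central finite differences for the two mixed partials
    dJdx = (mu(t) * J(x + h, t) - mu(t) * J(x - h, t)) / (2 * h)
    dIdt = (mu(t + h) * I(x, t + h) - mu(t - h) * I(x, t - h)) / (2 * h)
    return abs(dJdx - dIdt)

gaps = [exactness_gap(x, t) for x, t in [(0.7, 1.3), (1.1, 0.5), (2.0, 2.0)]]
```

Symbolically, both sides equal $\sin(2x)/t^2$, which is why the gaps vanish up to discretization error.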
| Ahmed S. Attaalla | 229,023 | <p>The positive real solution to $x^x=2$ is irrational.</p>
<p>Proof:</p>
<p>Assume $x=\frac{p}{q}$ where $p$ and $q$ have no common factor other than $1$; raising $x^x=2$ to the $q$-th power then gives:</p>
<p>$$\left(\frac{p}{q}\right)^{p}=2^{q}$$</p>
<p>We must have $p^p=q^p2^q$</p>
<p>Note $p>q$ because $x>1$ (this can be shown).</p>
<p>Then $p$ must be even. So we may set $p=2k$.</p>
<p>$$(2k)^{2k}=q^p2^q$$</p>
<p>$$2^{2k}k^{2k}=q^{2k}2^q$$</p>
<p>$$2^{2k-q}k^{2k}=q^{2k}$$</p>
<p>Then $q$ must be even. Contradiction. </p>
<p>Note:</p>
<p>We have $x^x=2$, then $x \ln x=\ln 2$ and $\ln x\, e^{\ln x}=\ln 2$, so using the Lambert W function:</p>
<p>$$\ln x=W(\ln 2)$$</p>
<p>$$x=e^{W(\ln 2)}=\frac{\ln (2)}{W(\ln 2)}$$</p>
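As a numeric sanity check of the closed form (a sketch; tolerances and iteration counts are arbitrary choices), one can solve $x^x=2$ by bisection and evaluate $W(\ln 2)$ by Newton's method:

```python
import math

# Bisection for x**x = 2 on [1, 2], where x*ln(x) is increasing.
lo, hi = 1.0, 2.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if mid ** mid < 2:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2

# Lambert W at ln 2: Newton's method on F(w) = w*exp(w) - ln 2.
w = 0.5
for _ in range(50):
    w -= (w * math.exp(w) - math.log(2)) / (math.exp(w) * (1 + w))
```

Both routes agree: $x \approx 1.5596$ and $x = \ln 2 / W(\ln 2)$ match to high precision, consistent with the irrationality proof above.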
|
371,318 | <p>The original problem was to consider how many ways there are to make a wiring diagram out of $n$ resistors. When I thought about this I realized that if you can only connect in series and in parallel (shunt), then this is the same as dividing an area with $n-1$ horizontal and vertical lines, where each line only divides one of the current area sections into two smaller ones.</p>
<p>This is also the same as the number of ways to assemble a set of $n$ (and only $n$) rectangles into a bigger rectangle, where the rectangles can be drawn by dividing the big rectangle, line by line, into the set of rectangles without loose endpoints of the lines. Can anyone think of an expression in $n$ which equals this count, independent of the order or position of the rectangles?</p>
<p>(It is only the relations between the area sections that matters and not left or right, up or down. However dividing an area with a horizontal line is not the same as dividing it with a vertical line.)</p>
| PPP | 55,643 | <p>Nothing happens, forget infinitesimals and infinity as numbers. Math was built with limits in the last centuries so we don't need to try to define these things.</p>
|
498,694 | <p>So, I'm learning limits right now in calculus class.</p>
<p>When $x$ approaches infinity, what does this expression approach?</p>
<p>$$\frac{(x^x)}{(x!)}$$</p>
<p>Why? Since the bottom is $x!$, doesn't that mean the bottom grows faster, and therefore the whole thing approaches 0?</p>
| Neal | 20,569 | <p>You may be thinking of the fact that $e^x/x!\to 0$. The difference is that there the base of the exponent is fixed, while in this problem, the base of the exponent grows.</p>
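A quick numeric comparison (illustrative only) of the two growth rates contrasted in this answer: with a fixed base, $e^n/n!$ dies off, while $n^n/n!$ blows up.

```python
import math

ns = [5, 10, 20]
ratio_var_base = [n ** n / math.factorial(n) for n in ns]         # n^n / n!
ratio_fixed_base = [math.e ** n / math.factorial(n) for n in ns]  # e^n / n!
```

By Stirling's formula $n^n/n! \sim e^n/\sqrt{2\pi n}$, so the limit in the question is $\infty$, not $0$.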
|
2,259,145 | <p>Let $f\colon (0,1]\to [-1,1]$ be a continuous function. Let us define a function $h$ by $h(x)=xf(x)$ for all $x$ belongs to $(0,1]$.
Prove that $h$ is uniformly continuous.</p>
<p>We know $f$ is uniformly continuous on $I$ if $f'(x)$ is bounded on $I$. Here $h'(x)= xf'(x) + f(x)$, and $f(x)$ is bounded here. How can I prove that $xf'(x)$ is bounded?
Please help me to solve this.
Thanks in advance.</p>
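A cautionary numeric sketch: the bounded-derivative criterion cannot work here, because $h$ can be uniformly continuous even when $h'$ is unbounded. Take the assumed example $f(x)=\sin(1/x)$ (continuous on $(0,1]$, values in $[-1,1]$): the oscillation of $h(x)=xf(x)$ between neighbouring grid points stays small and shrinks with the step, while $f$ itself keeps jumping by order $2$ near $0$. (A cleaner proof route: $h(x)\to 0$ as $x\to 0^+$, so $h$ extends continuously to the compact interval $[0,1]$.)

```python
import math

f = lambda x: math.sin(1 / x)   # continuous on (0,1], |f| <= 1, f' unbounded
h = lambda x: x * f(x)

def max_osc(fn, step):
    # largest jump of fn between neighbouring points of a uniform grid on (0, 1]
    pts = [step * i for i in range(1, int(1 / step) + 1)]
    return max(abs(fn(pts[i + 1]) - fn(pts[i])) for i in range(len(pts) - 1))

osc_h_coarse = max_osc(h, 1e-3)
osc_h_fine = max_osc(h, 1e-4)
osc_f_fine = max_osc(f, 1e-4)
```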
| Community | -1 | <p>Let $\psi(z) = \frac{\Gamma'(z+1)}{\Gamma(z+1)}$, then we can resolve your ambiguous notation with the following.</p>
<p>$$\psi(z) = -\gamma + \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{z+k}\right)$$</p>
<p>$$\psi(n+1) - \psi(1) = \sum_{j=1}^n \frac{1}{j}$$</p>
<p>so that naturally</p>
<p>$$\psi(x+1) - \psi(1) = \frac{1}{x} + \frac{1}{x-1} + ... + 1$$</p>
<p>Therefore the integral you want to evaluate is</p>
<p>$$\int_a^z \Gamma(x+1)(\psi(x+1) - \psi(1))\,dx$$</p>
<p>which becomes at best</p>
<p>$$\Gamma(z+1) - \Gamma(a+1) - \psi(1)\int_a^z \Gamma(x+1)\,dx$$</p>
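A numeric check (illustrative; the truncation point is an arbitrary choice) of the partial-fraction identity used here: truncating $\sum_{k\ge1}\left(\frac1k-\frac1{n+k}\right)$ reproduces the harmonic number $H_n=\sum_{j=1}^n \frac1j$.

```python
def psi_diff(n, K=200000):
    # truncated sum_{k=1}^{K} (1/k - 1/(n+k)); the tail is O(n/K)
    return sum(1.0 / k - 1.0 / (n + k) for k in range(1, K + 1))

def harmonic(n):
    # H_n as a plain partial sum
    return sum(1.0 / j for j in range(1, n + 1))
```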
|
237,197 | <p>I'm new here. If anything is not appropriate, please let me know.</p>
<p>I am currently working on a differential equation in which one of the terms is an integral of the unknown function.</p>
<p><span class="math-container">$$
\frac{d^2u(x)}{dx^2}=\cosh(G(x))+\frac{1}{C_1}\int_{0}^{1}{u(x)\sinh(G(x))\,dx }+C_2
$$</span>
with the boundary condition, <span class="math-container">$ u(x=0)=0$</span> and <span class="math-container">$u'(x=1)=0$</span>,</p>
<p>where <span class="math-container">$ G(x)= 2\ln\left(\frac{1+C_3 e^{-x}}{1-C_3 e^{-x}}\right)$</span> and
<span class="math-container">$C_1$</span>, <span class="math-container">$C_2$</span> and <span class="math-container">$C_3$</span> are system constants which can be predefined.</p>
<p>To show it more simply, I let <span class="math-container">$C_1=C_2=C_3=1$</span> in the code below,</p>
<pre><code>G[x_] = 2 Log[(1 + Exp[-x])/(1 - Exp[-x])];
</code></pre>
<pre><code>Sol =
NDSolveValue[
{
u''[x] ==
Cosh[ G[x] ] + NIntegrate[ u[x] *Sinh[ G[x] ],{x,0,1}] + 1
,u'[1] == 0., u[0] == 0.
}
, u, {x, 0, 1}, PrecisionGoal -> 10] ;
</code></pre>
<p>However, since u(x) in the integral have yet been solved, the numerical integration would fail with the error message show:</p>
<p><code>"The integrand {} has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1}}"*</code></p>
<p>Some have suggested breaking the NDSolveValue procedure into parts, with the NIntegrate step inserted separately. However, I am not sure how to do this correctly in Mathematica.</p>
<p>Thanks for your kind help; I really appreciate it!</p>
<p><strong>EDIT 1</strong></p>
<p>Special thanks to Tugrul Temel and Alex Trounev, who pointed out the singular point at x = 0 for the G[x] function.
Following Alex Trounev, I made the adjustment shown below, which makes the problem solvable:
<span class="math-container">$ G(x)= 2\ln\left(\frac{1+e^{-x}}{1-C_3 e^{-x}}\right)$</span>
where <span class="math-container">$ C_3 = 0.99 $</span></p>
| Alex Trounev | 58,388 | <p>In the case <code>C3=1</code> we have a singular solution with an <span class="math-container">$x^{-2}$</span> singularity as <span class="math-container">$x \rightarrow 0$</span>. Therefore we require <span class="math-container">$C3<1$</span>, and in that case we have a regular solution of this problem. For instance, for <code>C3=0.99</code> we can get a solution using the collocation method and Haar wavelets as follows</p>
<pre><code>C3 = 0.99; G[x_] := 2 Log[(1 + Exp[-x])/(1 - C3 Exp[-x])];
Get["NumericalDifferentialEquationAnalysis`"];
L = 1; J = 6; M = 2^J; dx = 1/2/M; xl = Table[l*dx, {l, 0, 2*M}];
xcol = Table[(xl[[l - 1]] + xl[[l]])/2, {l, 2, 2*M + 1}];
pp = GaussianQuadratureWeights[2*M, 0, 1]; points = pp[[All,1]]; weights = pp[[All,2]];
h1[x_] := Piecewise[{{1, 0 <= x <= 1}, {0, True}}]; p1[x_, n_] := (1/n!)*x^n;
h[x_, k_, m_] := Piecewise[{{1, Inequality[k/m, LessEqual, x, Less, (1 + 2*k)/(2*m)]},
{-1, Inequality[(1 + 2*k)/(2*m), LessEqual, x, Less, (1 + k)/m]}}, 0]
p[x_, k_, m_, n_] := Piecewise[{{0, x < k/m}, {(-(k/m) + x)^n/n!, Inequality[k/m, LessEqual, x,
Less, (1 + 2*k)/(2*m)]}, {((-(k/m) + x)^n - 2*(-((1 + 2*k)/(2*m)) + x)^n)/n!,
(1 + 2*k)/(2*m) <= x <= (1 + k)/m},
{((-(k/m) + x)^n + (-((1 + k)/m) + x)^n - 2*(-((1 + 2*k)/(2*m)) + x)^n)/n!, x > (1 + k)/m}}, 0]
f2[x_] := Sum[af[i, j]*h[x, i, 2^j], {j, 0, J, 1}, {i, 0, 2^j - 1, 1}] + a0*h1[x];
f1[x_] := Sum[af[i, j]*p[x, i, 2^j, 1], {j, 0, J, 1}, {i, 0, 2^j - 1, 1}] + a0*p1[x, 1] + f10;
f0[x_] := Sum[af[i, j]*p[x, i, 2^j, 2], {j, 0, J, 1}, {i, 0, 2^j - 1, 1}] + a0*p1[x, 2] + f10*x +
f00;
bc0 = {f0[0] == 0};
bc1 = {f1[L] == 0};
var = Flatten[Table[af[i, j], {j, 0, J, 1}, {i, 0, 2^j - 1, 1}]];
varM = Join[{a0, f10, f00}, var]; int = Sum[weights[[i]]*(f0[z]*Sinh[G[z]] /. z -> points[[i]]),
{i, Length[points]}];
eqf[x_] := -f2[x] + Cosh[G[x]] + int + 1; eq = Flatten[Table[eqf[x] == 0, {x, xcol}]];
eqM = Join[eq, bc0, bc1];
</code></pre>
<p>Note that $f2=u''$, $f1=u'$, $f0=u$; <code>eqf[x]</code> is the equation we are trying to solve. Since this equation is linear, we use</p>
<pre><code>{b, m} = CoefficientArrays[eqM, varM]
sol = -Inverse[m].b;
</code></pre>
<p>Finally we can plot the numerical solution for 128 (solid line) and for 64 (red points) collocation points to verify how the numerical solution converges as the number of points increases. The list <code>lst64</code> can be prepared with the code above for <code>J=5</code></p>
<pre><code>lst = Table[{x,
f0[x] /. Table[varM[[s]] -> sol[[s]], {s, Length[sol]}]}, {x,
xcol}];
Show[ListLinePlot[Join[{{0, 0}}, lst], AxesLabel -> {"x", "u"},
PlotLabel -> Row[{"C3 = ", C3}]], ListPlot[lst64, PlotStyle -> Red]]
</code></pre>
<p><a href="https://i.stack.imgur.com/6tClo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tClo.png" alt="Figure 1" /></a></p>
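As a language-agnostic cross-check of the same problem (a sketch in Python, not a Mathematica translation; the grid size, quadrature, and tolerances are arbitrary choices, with $C_1=C_2=1$, $C_3=0.99$ as in the question): because the integral term $I=\int_0^1 u\,\sinh(G)\,dx$ is a single scalar, the equation is linear in $I$. Solve $u_0''=\cosh(G)+1$ and $u_1''=1$ with the boundary conditions $u(0)=0$, $u'(1)=0$; then $u=u_0+I\,u_1$ with $I=A/(1-B)$, where $A=\int u_0\sinh(G)$ and $B=\int u_1\sinh(G)$.

```python
import math

C3 = 0.99
def G(x):
    return 2 * math.log((1 + math.exp(-x)) / (1 - C3 * math.exp(-x)))

N = 400
h = 1.0 / N
xs = [i * h for i in range(N + 1)]

def solve_bvp(rhs):
    # Finite differences: u(0) = 0; a ghost point enforces u'(1) = 0.
    # Tridiagonal system solved with the Thomas algorithm.
    a = [0.0] * (N + 1); b = [0.0] * (N + 1)
    c = [0.0] * (N + 1); d = [0.0] * (N + 1)
    b[0] = 1.0                                        # row 0: u_0 = 0
    for i in range(1, N):
        a[i], b[i], c[i], d[i] = 1.0, -2.0, 1.0, h * h * rhs(xs[i])
    a[N], b[N], d[N] = 2.0, -2.0, h * h * rhs(xs[N])  # Neumann row at x = 1
    for i in range(1, N + 1):                         # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (N + 1)
    u[N] = d[N] / b[N]
    for i in range(N - 1, -1, -1):                    # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

def integral(u, w):
    # composite trapezoid rule for the integral of u(x)*w(x) on [0, 1]
    vals = [u[i] * w(xs[i]) for i in range(N + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:N]) + 0.5 * vals[N])

sinhG = lambda x: math.sinh(G(x))
u0 = solve_bvp(lambda x: math.cosh(G(x)) + 1.0)
u1 = solve_bvp(lambda x: 1.0)             # exact solution: x**2/2 - x
A = integral(u0, sinhG)
B = integral(u1, sinhG)                   # B < 0, so 1 - B > 1
I = A / (1.0 - B)
u = [u0[i] + I * u1[i] for i in range(N + 1)]
```

Self-consistency requires $\int u\sinh(G)\,dx=I$; and since $u_1=x^2/2-x$ is quadratic, the centred scheme reproduces it exactly, which makes the sketch easy to validate.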
|
1,132,063 | <p>For $x=(x_j)_{j\in\mathbb N}\in \ell^1$ let</p>
<p>$$\|x\|=\sup_{n\in \mathbb N}\left \Vert \sum_{j=1}^{n}x_j\right\Vert$$</p>
<p>Show that $(\ell^1,\|\cdot\|)$ is a normed space, but it is not complete.</p>
<p>The first part was easy.</p>
<p>Now I try to find a sequence in $\ell^1$ which is a Cauchy sequence but not convergent.</p>
<p>Let me choose (try) $x_n=\left(\frac{n}{j^2}\right)_{j\in\mathbb N}$. For a fixed $n$ it is in $\ell^1$ because $\sum_{j=1}^{\infty} \frac{1}{j^2}$ converges.</p>
<p>WLOG $n>m$:</p>
<p>$$\|x_n-x_m\|=\sup_{k\in \mathbb N} \left \Vert \sum_{j=1}^{k}(x_n-x_m)_j \right\Vert = \sup_{k\in \mathbb N} \sum_{j=1}^{k}\frac{n-m}{j^2}$$</p>
<p>Okay, to me it seems that this is not even a Cauchy sequence...</p>
<p>Can someone help me? What kinds of sequences should I consider when I am facing problems like this?</p>
| Kevin Arlin | 31,228 | <p>Observe that your norm is identical to the usual $\ell^1$ norm on sequences with all entries positive. So you'll have to think about alternating signs. The sequence with infinitely many terms of each sign which is closest to $\ell^1$ without being in it (this is an imprecise statement-just indicating why it's my guess) is $(1,-1/2,1/3,-1/4,...)$. The sequence of sequences $(1,0,0,...),(1,-1/2,0,0,...,),(1,-1/2,1/3,0,0,...),...$ is Cauchy in the new norm because the alternating harmonic sum converges. But it doesn't converge in $\ell^1$.</p>
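A small numeric illustration of this answer (a sketch; the truncation points are arbitrary): in the sup-of-partial-sums norm the truncations of $(1,-1/2,1/3,\dots)$ get close together, while their $\ell^1$ norms are the partial harmonic sums, which grow without bound.

```python
def newnorm(seq):
    # sup over n of |x_1 + ... + x_n|, the norm defined in the question
    s, best = 0.0, 0.0
    for t in seq:
        s += t
        best = max(best, abs(s))
    return best

def x(n):
    # first n terms of the alternating harmonic sequence, then zeros
    return [(-1) ** (j + 1) / j for j in range(1, n + 1)]

def dist(n, m):
    # ||x_n - x_m|| in the new norm, for n > m
    return newnorm([0.0] * m + x(n)[m:])

d_coarse = dist(200, 100)       # about 1/101
d_fine = dist(2000, 1000)       # about 1/1001
l1_coarse = sum(abs(t) for t in x(200))    # harmonic partial sum H_200
l1_fine = sum(abs(t) for t in x(2000))     # H_2000, keeps growing
```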
|
1,132,063 | <p><em>(Question body identical to the previous occurrence above.)</em></p>
| user66081 | 66,081 | <p>This is a slightly overkill answer.</p>
<p>Consider the following <a href="http://en.wikipedia.org/wiki/Banach_space#Banach.27s_theorems" rel="nofollow">theorem</a>: Every one-to-one bounded linear operator from a Banach space onto a Banach space is an isomorphism.</p>
<p>Let $\ell^*$ denote the collection of $\ell^1$ sequences and endow it with the norm $\|\cdot\|$, for which it is a normed vector space. Suppose it is indeed complete, ie a Banach space (and $\ell^1$ is a Banach space already). Consider the canonical map $I : \ell^1 \to \ell^*$, which is one-to-one and onto. It is linear and bounded because obviously $\|\cdot\| \leq \|\cdot\|_1$. By the theorem, it is an isomorphism, ie its inverse $I^{-1}$ is also bounded (and linear anyway). This means $\|\cdot\|_1 \leq C \|\cdot\|$ for some constant $C \geq 0$. But this is nonsense: take the sequence $x^{(n)} = (1, -1, 1, \ldots, -1, 0, 0, \ldots)$ with $2 n$ nonzeros, half of them $+1$ and half of them $-1$, $n \in \mathbb{N}$. Then $\|x^{(n)}\|_1 = 2n$ but $\|x^{(n)}\| = 1$. Thus the assumption of completeness of $\ell^*$ was inappropriate.</p>
|
1,023,193 | <p>Prove this formula
$$
\pi^{2}
=\sum_{n\ =\ 0}^{\infty}\left[\,{1 \over \left(\,2n + 1 + a/3\,\right)^{2}}
+{1 \over \left(\, 2n + 1 - a/3\,\right)^{2}}\,\right]
$$
if $a$ is an even integer such that
$$
a \geq 4\quad\mbox{and}\quad{\rm gcd}\left(\,a,3\,\right) = 1
$$</p>
| achille hui | 59,379 | <p>Start with the well-known expansion of $\cot z$ and differentiate:</p>
<p>$$\cot z = \sum_{n=-\infty}^\infty \frac{1}{z - n\pi}
\implies \frac{1}{\sin(z)^2} = \sum_{n=-\infty}^\infty \frac{1}{(z - n\pi)^2}
\implies \frac{\pi^2}{\sin(\pi z)^2} = \sum_{n=-\infty}^\infty \frac{1}{(z - n)^2}
$$
Substitute $z$ by $-\frac{1+\alpha}{2}$, we get</p>
<p>$$\frac{\pi^2}{4\cos(\frac{\pi\alpha}{2})^2} = \sum_{n=-\infty}^\infty\frac{1}{(2n+1+\alpha)^2}\tag{*1}$$
In the RHS of above sum, if we index those negative $n$ as $-(m+1)$, we have</p>
<p>$$\frac{1}{(2n+1 + \alpha)^2} = \frac{1}{(2m+1 - \alpha)^2}$$
This means one can rewrite $(*1)$ as</p>
<p>$$\frac{\pi^2}{4\cos(\frac{\pi\alpha}{2})^2} = \sum_{n=0}^\infty \left(\frac{1}{(2n+1 + \alpha)^2} + \frac{1}{(2n+1 - \alpha)^2}\right)\tag{*2}$$</p>
<p>When $\alpha = \frac{a}{3}$ where $a$ is an even integer with $\gcd(a,3) = 1$,
$\cos(\frac{\pi\alpha}{2}) = \pm \frac12$.<br>
RHS$(*2)$ reduces to $\frac{\pi^2}{4\left(\frac12\right)^2} = \pi^2$ as desired.</p>
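A numeric check of $(*2)$ at one admissible value (illustrative; $a=4$ is just one choice with $a$ even and $\gcd(a,3)=1$, and the truncation point is arbitrary):

```python
import math

a = 4
alpha = a / 3
# truncated right-hand side of (*2); the tail is O(1/N)
N = 200000
s = sum(1 / (2 * n + 1 + alpha) ** 2 + 1 / (2 * n + 1 - alpha) ** 2
        for n in range(N))
```

The partial sum lands on $\pi^2$, matching $\frac{\pi^2}{4\cos(\pi\alpha/2)^2}$ with $\cos(\pi\alpha/2)=\pm\frac12$.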
|
348,748 | <p>Find the solution for $Ax=0$ for the following $3 \times 3$ matrix:</p>
<p>$$\begin{pmatrix}3 & 2& -3\\ 2& -1&1 \\ 1& 1& 1\end{pmatrix}$$</p>
<p>I found the row reduced form of that matrix, which was </p>
<p>$$\begin{pmatrix}1 & 2/3& -1\\ 0& 1&-9/7 \\ 0& 0& 1\end{pmatrix}$$</p>
<p>I'm not sure what I'm supposed to do next to find the "unique" solution besides $x=0$? Do I further reduce that matrix to the identity matrix?</p>
| dato datuashvili | 3,196 | <p>Now, if you imagine that our solution vector is
$x=(x_1,x_2,x_3)$,</p>
<p>then we get</p>
<p>$x_3=0$</p>
<p>$x_1+\frac{2x_2}{3}-x_3=0$</p>
<p>$x_2=0$</p>
<p>so after inserting $x_3,x_2$ into the first equation you get $x_1=0$.</p>
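A quick cross-check in code (illustrative): the determinant of the original matrix is nonzero, so $x=0$ is indeed the unique solution of $Ax=0$.

```python
A = [[3, 2, -3], [2, -1, 1], [1, 1, 1]]

# 3x3 determinant by cofactor expansion along the first row
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
```

A nonzero determinant means the homogeneous system has only the trivial solution.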
|
2,252,671 | <p>$P(x=k) = \frac{1}{5}$ for $k=1,\cdots,5$. Find $E(X), E(X^2)$ and use these results to obtain $E[(X+3)^2]$ and $Var(3X-2)$</p>
<p>I know how to calculate all of these individually, but how can I use $E(X^2)$ and $E(X)$ to calculate the more complex forms $E[(X+3)^2]$ and $Var(3X-2)$?</p>
| PSPACEhard | 140,280 | <p><strong>Hint:</strong></p>
<ul>
<li><p>$\mathsf{E}[(X + 3)^2] = \mathsf{E}[X^2 + 6X + 9]$</p></li>
<li><p>$\mathsf{Var}(3X - 2) = \mathsf{Var}(3X) = 9\mathsf{Var}(X)$</p></li>
</ul>
|
2,252,671 | <p><em>(Question body identical to the previous occurrence above.)</em></p>
| Ziad Fakhoury | 295,839 | <p>$$E((X+3)^2) = E(X^2) + E(6X) + E(9)$$
$$ = E(X^2) + 6E(X) + 9$$</p>
<p>As for $Var(3X-2) = Var(3X) = 9Var(X) = 9(E(X^2) - (E(X))^2)$</p>
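The whole exercise can be verified with exact arithmetic (a quick illustrative script using Python's `fractions`):

```python
from fractions import Fraction

ks = range(1, 6)
p = Fraction(1, 5)                    # P(X = k) = 1/5 for k = 1..5
E = sum(p * k for k in ks)            # E(X) = 3
E2 = sum(p * k * k for k in ks)       # E(X^2) = 11

shifted = sum(p * (k + 3) ** 2 for k in ks)          # E[(X+3)^2] directly
var = E2 - E ** 2                                    # Var(X) = 2
scaled = sum(p * (3 * k - 2) ** 2 for k in ks) - (3 * E - 2) ** 2  # Var(3X-2)
```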
|
92,670 | <p>We're learning about domains and setbuilder notation in school at the moment, and I want to make sure what I did was right.</p>
<p>My thought process:
\begin{align*}
-\frac12|4x - 8| - 1 &< -1 \\
-\frac12|4x - 8| &< 0 \\
|4x - 8| &> 0
\end{align*}
$x =$ all real numbers.</p>
<p>{real numbers} :</p>
<p><||||||||||[0]|||||||||></p>
<p>{x| x is any real number}</p>
<p>{whole numbers}</p>
<p>... <----[-2]---[-1]---[0]---[1]---[2]---> ...</p>
<p>{x|...-2,-1,0,1,2...}</p>
| Dylan Moreland | 3,701 | <p>You get $|4x - 8| > 0$, which I agree with; now you want to find all $x$ satisfying this inequality. It's true that for any number $y$ we have $|y| \geq 0$, but equality can hold: $|y| = 0$ if and only if $y = 0$. Use this fact to find the single value $a$ of $x$ for which $|4x - 8| = 0$. In <a href="http://en.wikipedia.org/wiki/Set-builder_notation" rel="nofollow">set-builder</a> notation, I would write this as
\[
\{x \mid x \neq a\} \qquad \text{or, more carefully,} \qquad \{x \in \mathbb R \mid x \neq a\},
\]
replacing $a$ by the number you find.</p>
<p>Your representations of the real numbers look fine to me. There is always <a href="http://en.wikipedia.org/wiki/Whole_number" rel="nofollow">controversy</a> over what "whole numbers" should mean, and I would call
\[
\{x \mid x = \ldots, -2, -1, 0, 1, 2, \ldots\}
\]
the set of integers. Note the slight difference between your expression and mine.</p>
|
1,242,001 | <p>The following is the notation for Fermat's Last Theorem </p>
<p>$\neg\exists_{\{a,b,c,n\},(a,b,c,n)\in(\mathbb{Z}^+)\color{blue}{^4}\land n>2\land abc\neq 0}a^n+b^n=c^n$ </p>
<p>I understand everything in the notation besides the 4 highlighted in blue. Can someone explain to me what this means?</p>
| Mark Viola | 218,419 | <p>If $g(x) =\frac{1}{\sqrt{2\pi}}e^{-(x-2)^2}$, then</p>
<p>$$g'(x)=\frac{1}{\sqrt{2\pi}}e^{-(x-2)^2} \times (-2(x-2))$$</p>
<p>for which $g'$ is zero only at $x=2$.</p>
<p>At $x=2$, we have $g(2)=\frac{1}{\sqrt{2\pi}}$. </p>
<p>This value is the maximum of $g$.</p>
|
1,242,001 | <p>The following is the notation for Fermat's Last Theorem </p>
<p>$\neg\exists_{\{a,b,c,n\},(a,b,c,n)\in(\mathbb{Z}^+)\color{blue}{^4}\land n>2\land abc\neq 0}a^n+b^n=c^n$ </p>
<p>I understand everything in the notation besides the 4 highlighted in blue. Can someone explain to me what this means?</p>
| Jeffrey L. | 227,579 | <p>Apparently WolframAlpha isn't enough to show you that your derivative is incorrect, so let me help you see for yourself.</p>
<p>We're starting out with this (edited for clarity)</p>
<p>$$g(x)=(2\pi)^{-\frac{1}{2}} e^\frac{-(x-2)^2}{2}$$</p>
<p>I'm going to move the constant $(2\pi)^{-\frac{1}{2}}$ over to the left-hand side, so now we have</p>
<p>$$(2\pi)^{\frac{1}{2}}g(x)=e^\frac{-(x-2)^2}{2}$$</p>
<p>Using the chain rule, we get</p>
<p>$$(2\pi)^{\frac{1}{2}}g'(x)=\left(\frac{-(x-2)^2}{2}\right)'e^\frac{-(x-2)^2}{2}$$</p>
<p>Simplifying the derivative:</p>
<p>$$\left(\frac{-(x-2)^2}{2}\right)' = \frac{-2(x-2)}{2} = -(x-2) = 2-x$$</p>
<p>Therefore</p>
<p>$$(2\pi)^{\frac{1}{2}}g'(x)=(2-x)e^\frac{-(x-2)^2}{2}$$</p>
<p>And the only time that $g'(x)=0$ is when $x = 2$, so the maximum value is</p>
<p>$$g(2) = (2\pi)^{-\frac{1}{2}}$$</p>
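The relation $g'(x)=(2-x)\,g(x)$ derived above can be sanity-checked numerically with plain Python (a sketch; the finite-difference step `h` is my choice):

```python
import math

c = (2 * math.pi) ** -0.5

def g(x):
    # g(x) = (2*pi)^(-1/2) * exp(-(x-2)^2 / 2)
    return c * math.exp(-(x - 2) ** 2 / 2)

def gprime(x):
    # derivative from the chain-rule computation: g'(x) = (2 - x) * g(x)
    return (2 - x) * g(x)

# g' vanishes at x = 2 and the value there is the constant (2*pi)^(-1/2)
assert abs(gprime(2.0)) < 1e-15
assert abs(g(2.0) - c) < 1e-15

# central finite difference agrees with the formula at a sample point
h = 1e-6
fd = (g(1.3 + h) - g(1.3 - h)) / (2 * h)
assert abs(fd - gprime(1.3)) < 1e-8
```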
|
654,617 | <p>$v$ being a vector.
I never understood what they mean and haven't found online resources. Just a quick question.</p>
<p>Thought it was absolute and magnitude respectively when regarding vectors. need confirmation</p>
| Cameron Buie | 28,900 | <p>In general $\lvert\cdot\rvert$ and $\lVert\cdot\rVert$ are both used to signify <a href="http://en.wikipedia.org/wiki/Norm_%28mathematics%29"><em>norms</em></a> of some sort. Different texts use different notation conventions, and sometimes the precise definition (if there <em>is</em> one) will vary from context to context.</p>
|
1,181,123 | <blockquote>
<ol>
<li>Find the smallest positive integer such that $80-n$ and $80+n$ are prime numbers. </li>
<li>Find the smallest positive prime number such that $2002-n$ and $2002+n$ are prime numbers.</li>
</ol>
</blockquote>
<p>I cannot think of any way other than trying the prime numbers one by one,
like trying from $2, 3, 5, 7,\ldots...$ but it will probably take forever in case the answer is a big number, any clue please? </p>
<p>Thanks in advance!</p>
| Fermat | 83,272 | <p>For the first.</p>
<p>1) $n$ is not of the form $3k+1$ otherwise $80+n$ is divisible by $3$.</p>
<p>2) $n$ is not of the form $3k+2$ otherwise $80-n$ is divisible by $3$.</p>
<p>3) <strong>So n is a multiple of</strong> $3$.</p>
<p>4) $n$ cannot have the factors $2$ or $5$, since $80$ is divisible by both and $80\pm n$ must remain prime. In particular $n$ is odd. <strong>So n is an odd multiple of</strong> $3$ that is not divisible by $5$.</p>
<p>Examining odd multiples (not divisible by $5$ ) of $3$ is easy. $n=9$ is the answer.</p>
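Both parts are small enough to brute-force; a quick Python sketch (names are mine). The first search returns $n=9$, matching the argument above. For the second part the search runs all the way out to $n=1999$: since $2002\equiv 1 \pmod 3$, a similar mod-$3$ argument shows that for any prime $n$ other than $3$, one of $2002\pm n$ is divisible by $3$, so the only candidates are $n=3$ (which fails, as $2005=5\cdot 401$) and $n=1999$ (with $2002-1999=3$).

```python
def is_prime(m):
    """Trial-division primality test; fine for the small sizes here."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

# Part 1: smallest positive integer n with 80 - n and 80 + n both prime.
part1 = next(n for n in range(1, 80) if is_prime(80 - n) and is_prime(80 + n))

# Part 2: smallest positive prime n with 2002 - n and 2002 + n both prime.
part2 = next(n for n in range(2, 2002)
             if is_prime(n) and is_prime(2002 - n) and is_prime(2002 + n))
```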
|
242,097 | <p>I need to show that the function $f(n) = n^2$ is not of $\mathcal{O}(n)$. If I am correct I should prove that there is no number $c,n \geq 0$ where $n^2\lt cn$. How to do that?</p>
| JavaMan | 6,491 | <p><strong>Hint:</strong></p>
<p>Suppose that $n^2 = O(n)$. Then, there exist constants $C, N_0$ such that
$n^2 \leq C n$ for all $n \geq N_0$. However...</p>
|
242,097 | <p>I need to show that the function $f(n) = n^2$ is not of $\mathcal{O}(n)$. If I am correct I should prove that there is no number $c,n \geq 0$ where $n^2\lt cn$. How to do that?</p>
| DonAntonio | 31,254 | <p>$$n^2\leq cn\,\,\,\forall\,n\geq N_0\Longleftrightarrow n\leq c\,\,\,\forall n\geq N_0$$
and thus we'd get the natural numbers are bounded, say by</p>
<p>$$\max\{c\,,\,1,2,...,N_0-1\}$$</p>
|
242,097 | <p>I need to show that the function $f(n) = n^2$ is not of $\mathcal{O}(n)$. If I am correct I should prove that there is no number $c,n \geq 0$ where $n^2\lt cn$. How to do that?</p>
| Austin Mohr | 11,245 | <p>The result is more clear if proven directly, I think.</p>
<p>Let $c > 0$ be given. To conclude $n^2 \neq O(n)$, we need to produce a particular $n_0$, such that $n_0^2 > cn_0$.</p>
<p>To that end, choose $n_0 = c+1$. We have
$$
n_0^2 = (c+1)^2 = c^2 + 2c + 1 > c^2 + c = c(c+1) = cn_0.
$$</p>
<p>Since $c$ was arbitrary, there can be no choice of $c$ that allows $n^2 = O(n)$.</p>
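The witness $n_0 = c+1$ can be spot-checked for a few values of $c$ (a trivial Python sketch):

```python
# For any proposed constant c, the witness n0 = c + 1 breaks n^2 <= c*n,
# since n0^2 = c^2 + 2c + 1 > c^2 + c = c * n0.
for c in [0.5, 1, 7, 100, 10 ** 6]:
    n0 = c + 1
    assert n0 ** 2 > c * n0
```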
|
4,298,951 | <p>Let us define a sequence <span class="math-container">$(a_n)$</span> as follows:</p>
<p><span class="math-container">$$a_1 = 1, a_2 = 2 \text{ and } a_{n} = \frac14 a_{n-2} + \frac34 a_{n-1}$$</span></p>
<p>Prove that the sequence <span class="math-container">$(a_n)$</span> is Cauchy and find the limit.</p>
<hr />
<p>I have proved that the sequence <span class="math-container">$(a_n)$</span> is Cauchy. But unable to find the limit. I have observed that the sequence <span class="math-container">$(a_n)$</span> is decreasing for <span class="math-container">$n \ge 2$</span>.</p>
| acreativename | 347,666 | <p>Note that the recurrence is linear with characteristic equation <span class="math-container">$x^2 = \frac34 x + \frac14$</span>, whose roots are <span class="math-container">$1$</span> and <span class="math-container">$-\frac14$</span>. Fitting the general solution <span class="math-container">$A + B\left(-\frac14\right)^n$</span> to <span class="math-container">$a_1 = 1$</span>, <span class="math-container">$a_2 = 2$</span> gives</p>
<p><span class="math-container">$a_{n} = \frac{9}{5}+\frac{16}{5}\left(\frac{-1}{4}\right)^{n}$</span></p>
<p>Thus the limit is <span class="math-container">$\frac{9}{5}$</span>.</p>
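A quick numerical check of the closed form against the recurrence, and of the limit (a Python sketch; variable names are mine):

```python
# iterate the recurrence a_n = (1/4) a_{n-2} + (3/4) a_{n-1}
a = [1.0, 2.0]
for n in range(2, 60):
    a.append(0.25 * a[-2] + 0.75 * a[-1])

def closed(n):
    # closed form, with n 1-based as in the problem
    return 9 / 5 + (16 / 5) * (-0.25) ** n

assert all(abs(a[n - 1] - closed(n)) < 1e-12 for n in range(1, 61))
assert abs(a[-1] - 9 / 5) < 1e-12  # the tail is already at the limit 9/5
```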
|
1,676,848 | <blockquote>
<p>Given the series </p>
<p>$$ \sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)x^n}{n!} \quad \quad k \geq 1 $$
Find the interval of convergence.</p>
</blockquote>
<p>I started by applying the Ratio test</p>
<p>$$
\lim_{n\to \infty}\left|\frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)(k+n)x^{n+1}}{(n+1)!}\cdot \frac{n!}{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)x^n}\right|$$</p>
<p>$$\lim_{n\to \infty}\left|\frac{(k+n)x}{(n+1)}\right|$$</p>
<p>to show that the series converges when $|x| \lt 1$.</p>
<p>However, when I test the end points of $(-1,1)$ for convergence, I end up with two series whose convergence I am unable to show. Namely,
$$
\sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)}{n!}
$$</p>
<p>and
$$
\sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)(-1)^n}{n!}
$$</p>
<p>How can I show that these two series converge or diverge?</p>
| Lutz Lehmann | 115,115 | <p>Your series is
$$
\sum_{n=0}^\infty\binom{-k}{n}(-x)^n=(1-x)^{-k}
$$
This alone should show that there is no convergence at $x=1$ for positive $k$.</p>
<p>For the series at $x=-1$ consider that
$$
\binom{-k}{n}(-1)^n=\binom{n+k-1}{n}=\binom{n+k-1}{k-1}=\frac{(n+1)(n+2)···(n+k-1)}{(k-1)!}
$$
is a polynomial in $n$ of degree $k-1$.</p>
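A quick numerical sanity check of the identity for $|x|<1$, and of the polynomial growth of the terms at $x=-1$ (a Python sketch, using $k=3$ as a sample value):

```python
import math

k, x = 3, 0.5
# partial sum of binom(n+k-1, k-1) x^n should approach (1 - x)^(-k)
s = sum(math.comb(n + k - 1, k - 1) * x ** n for n in range(200))
assert abs(s - (1 - x) ** (-k)) < 1e-12

# at x = -1 the n-th term has absolute value binom(n+k-1, k-1),
# a degree-(k-1) polynomial in n, so the terms cannot tend to 0
terms = [math.comb(n + k - 1, k - 1) * (-1) ** n for n in range(10)]
assert abs(terms[-1]) == math.comb(11, 2)  # = 55 for k = 3, n = 9
```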
|
216,171 | <p>Basically, I have a set of differential equations that I need to solve for exactly 100 different initial conditions (given as lists for each initial condition), and then plot each solution.</p>
<p>Here is some sample code where I have set vrad, vtan, and deltaR (arrays of initial conditions) to an array of length two. So, given the arrays vrad, vtan, deltaR (our initial conditions) I want to be able to essentially do what this code does but for the array of solutions. Cheers!</p>
<p>Edit: I think I've nearly done it, I just need Table to not iterate through every tuple, but instead by index, anyone know how to do this?</p>
<pre><code>(* Scaling Quantities *)
V = 200;
R = 10^4;
(* Random Quantities *)
vrad = {0, 5};
vtan = {0, 5};
deltaR = {0, 5};
(* Converting to dimensionless quantities *)
vRadial = (V + vrad)/V;
vTangential = (V + vtan)/V;
r0 = (10^4 + deltaR)/R;
L = r0*vTangential;
(* numerical solution *)
s = Partition[
  Flatten@Table[
    NDSolve[{r''[t] == r[t]*ϕ'[t]^2 - 1/r[t], ϕ'[t] == d/r[t]^2,
       ϕ[0] == a, r[0] == b, r'[0] == c}, {r, ϕ}, {t, 0, 200}],
    {a, vTangential/r0}, {b, r0}, {c, vRadial}, {d, L}], 2]
(* Plotting the solution *)
ParametricPlot[
 Evaluate[{r[t]*Cos[ϕ[t]], r[t]*Sin[ϕ[t]]} /. s], {t, 0, 2*Pi},
 GridLines -> Automatic, Frame -> True]
</code></pre>
| kglr | 125 | <p>An alternative approach: </p>
<ol>
<li>If you use <a href="https://reference.wolfram.com/language/ref/ParametricNDSolveValue.html" rel="nofollow noreferrer"><code>ParametricNDSolveValue</code></a>
you don't have to run <code>NDSolve</code> for each 4-tuple of input parameters. </li>
<li>Using the function you want to plot as the second argument of <code>ParametricNDSolveValue</code> you can use the output directly in plot functions without additional processing.</li>
</ol>
<pre><code>ClearAll[pndsv]
pndsv = ParametricNDSolveValue[{r''[t] == r[t]*ϕ'[t]^2 - 1/r[t], ϕ'[t] == d/r[t]^2,
ϕ[0] == a, r[0] == b, r'[0] == c},
{r[#] Cos[ϕ[#]], r[#] Sin[ϕ[#]]} &,
{t, 0, 200},
{a, b, c, d}];
params = Transpose[{vTangential/r0, r0, vRadial, L}];
ParametricPlot[Evaluate[pndsv[##][t] & @@@ params], {t, 0, 2 Pi},
GridLines -> Automatic, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/FEJvK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FEJvK.png" alt="enter image description here"></a></p>
<p>Interactively set up to 10 sets of parameters using control label styles as legend for the curves shown:</p>
<pre><code>k = 10;
Manipulate[ParametricPlot[Evaluate[pndsv[##][t] & @@@ Take[psets, n, 4]], {t, 0, 2 Pi},
GridLines -> Automatic, Frame -> True, ImageSize -> 400, AspectRatio -> 1],
{{psets, ConstantArray[1., {k, 4}]}, None},
{{n, 3}, 1, 10, 1},
Dynamic[Panel[Grid[Prepend[#, {"params", "a", "b", "c", "d"}] &@
MapIndexed[Prepend[#, #2[[1]]] &, Outer[Manipulator[Dynamic[psets[[#1, #2]]], {0, 3},
Appearance -> "Labeled", ImageSize -> Tiny] &, Range[n], Range[4]]],
FrameStyle -> LightGray,
Background -> {None, None, {# + 1, 1} -> ColorData[97]@# & /@ Range[n]},
Dividers -> {{False, True}, {False, True}}]]],
Alignment -> Center]
</code></pre>
<p><a href="https://i.stack.imgur.com/1Q0rJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Q0rJ.png" alt="enter image description here"></a></p>
|
65,912 | <p>How do I show that $s=\sum\limits_{-\infty}^{\infty} {1\over (x-n)^2}$ on $x\not\in \mathbb Z$ is differentiable without using its compact form? I realize that the sequence of sums $s_a=\sum\limits_{-a}^{a} {1\over (x-n)^2}$ is not uniformly convergent. </p>
<p>I also tried to prove that it is continuous by using the usual $\varepsilon \over 3$ method. And it seems to apply because each $s_a$ is continuous and they converge pointwise to $s$. But then I realized that this should not be the right proof because I didn't use any special property of the given functions and the general case only works for uniform convergence. I am very confused. Please help!</p>
<p>Actually I do realize that if I could prove differentiability, continuity follows.</p>
<p>Thanks.</p>
| Michael Hardy | 11,667 | <p>OK, just to be exotic (?), let's see if we can get this from Morera's theorem. Let $C$ be a simple closed curve that neither winds around any integer nor passes through any integer. Then
$$
\int\limits_C \sum_{n=-\infty}^\infty \frac{1}{(x-n)^2}\;dx = \sum_{n=-\infty}^\infty\ \int\limits_C \frac{1}{(x-n)^2}\;dx = \sum_{n=-\infty}^\infty 0 = 0.
$$
The first equality follows from Fubini's theorem.*</p>
<p>The second equality follows from the fact that $C$ does not wind around any point where the holomorphic function $x\mapsto 1/(x-n)^2$ behaves badly (fails to be holomorphic).</p>
<p>Morera's theorem says that if the integral of a function along every simple closed curve that does not wind around any point not in the domain is $0$, then the function is holomorphic.</p>
<p><b>* Later note:</b> Are the hypotheses of Fubini's theorem satisfied? The terms of the sum are nonnegative and the sum converges to a finite number. If one can show that it depends continuously on $x$, then the integral is that of a continuous function on a compact set, so that is also finite.</p>
<p>However, it now occurs to me that we don't need to go into that, because nonnegativity means we can cite <b>Tonelli's theorem</b> instead. That says that one can interchange the order of two Lebesgue integrations for functions that are everywhere nonnegative, regardless of whether the value of the integral is finite or infinite.</p>
|
1,378,633 | <p>It seems that some, especially in electrical engineering and musical signal processing, describe that every signal can be represented as a Fourier series.</p>
<p>So this got me thinking about the mathematical proof for such argument.</p>
<p>But even after going through some resources about the Fourier series (which I don't have too much background in, but grasp the concept), I cannot find a mathematical proof for whether every function can be represented by a Fourier series. There was a hint about the function having to be periodic.</p>
<p>So that means that the "every function can be represented as a Fourier series" is a myth and it doesn't apply on signals either, unless they're periodic?</p>
<p>But then I can also find references like these:
<a href="http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html" rel="noreferrer">http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html</a>
that say/imply that every signal can be made periodic? So does that change the notion about whether Fourier series can represent every function, with the new condition of first making it periodic, if necessary?</p>
| Alex Pavellas | 255,545 | <p>Since you're referring to signals here, it seems appropriate to consider this question from the viewpoint of an electrical engineer.</p>
<p>If we impose some restrictions on what kind of functions can be considered a "signal," then all periodic signals have a Fourier series.</p>
<ul>
<li>The function should be piecewise continuous.</li>
<li>The function should be bounded.</li>
</ul>
<p>These are reasonable physical restrictions that all real signals should meet. These are also more than enough for a function to have a Fourier series.</p>
<p>Now, for a function that isn't periodic, we can find a Fourier series for a piece of it through a process called "windowing." Basically you isolate a part of the signal on some interval, and pretend that piece is one period of a periodic signal. The Fourier coefficients for each "window" tell you the power spectrum of the signal as time progresses.</p>
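For a concrete periodic signal, the convergence of the partial Fourier sums can be checked numerically. This sketch is my own example, not from the linked text: the square wave $\operatorname{sign}(\sin x)$ has the well-known series $\sum_{n\text{ odd}} \frac{4}{\pi n}\sin(nx)$:

```python
import math

def square_partial_sum(x, N):
    # partial Fourier sum of the square wave sign(sin x):
    # sum over odd n <= N of (4 / (pi n)) sin(n x)
    return sum(4 / (math.pi * n) * math.sin(n * x) for n in range(1, N + 1, 2))

x = math.pi / 2              # the square wave equals 1 here
approx = square_partial_sum(x, 2001)
assert abs(approx - 1.0) < 1e-2   # slow (O(1/N)) but visible convergence
```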
|
1,378,633 | <p>It seems that some, especially in electrical engineering and musical signal processing, describe that every signal can be represented as a Fourier series.</p>
<p>So this got me thinking about the mathematical proof for such argument.</p>
<p>But even after going through some resources about the Fourier series (which I don't have too much background in, but grasp the concept), I cannot find a mathematical proof for whether every function can be represented by a Fourier series. There was a hint about the function having to be periodic.</p>
<p>So that means that the "every function can be represented as a Fourier series" is a myth and it doesn't apply on signals either, unless they're periodic?</p>
<p>But then I can also find references like these:
<a href="http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html" rel="noreferrer">http://msp.ucsd.edu/techniques/v0.11/book-html/node171.html</a>
that say/imply that every signal can be made periodic? So does that change the notion about whether Fourier series can represent every function, with the new condition of first making it periodic, if necessary?</p>
| foobar | 11,413 | <p>I came across this question because I wanted to ask the same thing. In Gilbert Strang's Linear Algebra LEC 24, towards the end: <a href="https://youtu.be/8MF3pz-oYHo?t=41m8s" rel="noreferrer">https://youtu.be/8MF3pz-oYHo?t=41m8s</a> he mentions that the parts of a fourier series is like an orthogonal basis, and that you can project a function onto the fourier series like any other projection onto orthogonal basis.</p>
<p>And the intuitive way to think about its restriction to be periodic is because the parts that make up the fourier series are periodic.</p>
<p>(I'd love to see something more rigourous though)</p>
|
1,923,034 | <p>A bagel store sells six different kinds of bagels. Suppose you choose 15 bagels at random. What is the probability that your choice contains at least one bagel of each kind? If one of the bagels is Sesame, what is the probability that your choice contains at least three Sesame bagels?</p>
<p>My approach to the first problem was the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_i \geq 1$ which is the same as $y_1+y_2+y_3+y_4+y_5+y_6=9, y_i \geq 0$ which has $ 14 \choose 9$ solutions. So that is 2002 solutions. And there are in total $20 \choose 15$ solutions to the equation without the restriction. So in a percentage, there is a $\frac{2002}{15504}$ or 12.9% we will get one of each kind.</p>
<p>For the second problem, I used the equation $x_1+x_2+x_3+x_4+x_5+x_6=15, x_1 \geq 3, x_i \geq 0, i \neq 1$. This gives $17 \choose 12$ solutions, which gives a $\frac{6188}{15504}$ or a 39% chance of getting a sesame bagel. </p>
<p>Is my approach for both of these right? (The percentages of these happening seem really high)</p>
| Community | -1 | <p>A Markov-matrix solution. I like this formalism because it gives more control over the algorithm and leads to fewer faulty solutions in general.</p>
<p>To get one of each kind, we define $7$ states, from $0$ to $6$, tracking the number of kinds found so far. $k$ is the size of the sample, $k=15$, and $n=6$ is the number of kinds. We start with $0$ kinds. The 1st "throw" always gives a transition to the 1st kind. At the next steps, the state stays unchanged or advances with the pair of probabilities $(\frac{s}{n} , \frac{n-s}{n})$. When it reaches $6$, it stays unchanged.</p>
<p>$\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \end {pmatrix} \times {\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & {\frac{1}{n}} & {\frac{n-1}{n}} & 0 & 0 & 0 & 0 \\ 0 & 0 & {\frac{2}{n}} & {\frac{n-2}{n}} & 0 & 0 & 0 \\ 0 & 0 & 0 & {\frac{3}{n}} & {\frac{n-3}{n}} & 0 & 0 \\ 0 & 0 & 0 & 0 & {\frac{4}{n}} & {\frac{n-4}{n}} & 0 \\ 0 & 0 & 0 & 0 & 0 & {\frac{5}{n}} & {\frac{n-5}{n}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end {pmatrix}}^{k} =$
$\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \end {pmatrix} \times {\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & {\frac{1}{6}} & {\frac{5}{6}} & 0 & 0 & 0 & 0 \\ 0 & 0 & {\frac{2}{6}} & {\frac{4}{6}} & 0 & 0 & 0 \\ 0 & 0 & 0 & {\frac{3}{6}} & {\frac{3}{6}} & 0 & 0 \\ 0 & 0 & 0 & 0 & {\frac{4}{6}} & {\frac{2}{6}} & 0 \\ 0 & 0 & 0 & 0 & 0 & {\frac{5}{6}} & {\frac{1}{6}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end {pmatrix}}^{15} = $</p>
<p>$\begin{pmatrix} 0 & \frac{1}{78364164096} & \frac{27305}{26121388032} & \frac{11875505}{19591041024} & \frac{35296625}{1088391168} & \frac{43909775}{136048896} & \frac{233718485}{362797056} \end{pmatrix} =$</p>
<p>$\begin{pmatrix} 0. & 1.27609×10^{-11} & 1.04531×10^{-6} & 0.00060617 & 0.0324301 & 0.32275 & \color{blue}{0.644213} \end{pmatrix}$</p>
<p><a href="http://www.wolframalpha.com/input/?i=%7B1,0,0,0,0,0,0%7D.%7B%7B0,1,0,0,0,0,0%7D,%7B0,1%2F6,5%2F6,0,0,0,0%7D,%7B0,0,2%2F6,4%2F6,0,0,0%7D,%7B0,0,0,3%2F6,3%2F6,0,0%7D,%7B0,0,0,0,4%2F6,2%2F6,0%7D,%7B0,0,0,0,0,5%2F6,1%2F6%7D,%7B0,0,0,0,0,0,1%7D%7D%5E15" rel="nofollow">computed here</a></p>
<p>Similarly to get 3 Sesames, we must build another matrix with 4 states, 0, 1 , 2 or at least 3 sesames. Now the probability of transition is always $\frac16$ and then the probability to stay is the complement to 1. We start with 0 Sesame.</p>
<p>$\begin{pmatrix} 1 & 0 & 0 & 0 \end {pmatrix} \times {\begin{pmatrix} {\frac{n-1}{n}} & {\frac{1}{n}} & 0 & 0 \\ 0 & {\frac{n-1}{n}} & {\frac{1}{n}} & 0 \\ 0 & 0 & {\frac{n-1}{n}} & {\frac{1}{n}} \\ 0 & 0 & 0 & 1 \end {pmatrix}}^{k} = $
$\begin{pmatrix} 1 & 0 & 0 & 0 \end {pmatrix} \times {\begin{pmatrix} {\frac{5}{6}} & {\frac{1}{6}} & 0 & 0 \\ 0 & {\frac{5}{6}} & {\frac{1}{6}} & 0 \\ 0 & 0 & {\frac{5}{6}} & {\frac{1}{6}} \\ 0 & 0 & 0 & 1 \end {pmatrix}}^{15} = $
$\begin{pmatrix}\frac{30517578125}{470184984576} & \frac{30517578125}{156728328192} & \frac{42724609375}{156728328192} & \frac{219940843951}{470184984576} \end{pmatrix} = $
$\begin{pmatrix}0.0649055 & 0.194716 & 0.272603 & \color{blue}{0.467775} \end{pmatrix}$</p>
<p><a href="http://www.wolframalpha.com/input/?i=%7B1,+0,+0,+0%7D+.+%7B%7B5%2F6,1%2F6,0,0%7D,%7B0,5%2F6,1%2F6,0%7D,%7B0,0,5%2F6,1%2F6%7D,%7B0,0,0,1%7D%7D%5E15" rel="nofollow">computed here</a></p>
|
25,137 | <p>I want to find an intuitive analogy to explain how binary addition (more precise: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p>
<p>In principle: An everyday object or an action that is structured like or functionally resembles an adder.</p>
<p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p>
<p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p>
<p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy i create with increasing amount of categories the way these categories transform becomes increasingly harder to explain, and the metaphor becomes overly specific and unhandy.</p>
<p>Hence the question:</p>
<p><strong>What's an everyday object that resembles an adder in it's basic functionality?</strong></p>
| Joseph O'Rourke | 511 | <p>Not quite what you want, but there are mechanical binary counters, e.g.,
this one, video <a href="https://www.reddit.com/r/oddlysatisfying/comments/8ke0yr/mechanical_binary_counter/" rel="nofollow noreferrer">here</a>:</p>
<p><a href="https://i.stack.imgur.com/MB4Xp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MB4Xp.jpg" alt="Bits" /></a></p>
<p><a href="https://www.pinterest.com/pin/310607705544531238/" rel="nofollow noreferrer">Another wooden version here</a>.
You could emphasize the analogy with an odometer.</p>
<p>And here is a Minecraft version, not quite as compelling (to me):
<a href="https://www.youtube.com/watch?v=MKi9cvdrisI&ab_channel=SgtGodswordBerserker" rel="nofollow noreferrer">YouTube link</a></p>
<p>Here's a clever marble binary adder:</p>
<p><a href="https://i.stack.imgur.com/94XjF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/94XjF.jpg" alt="Adder" /></a></p>
<p><a href="https://hackaday.com/2009/10/13/binary-adder-will-give-you-slivers/" rel="nofollow noreferrer">Video link</a></p>
|
25,137 | <p>I want to find an intuitive analogy to explain how binary addition (more precise: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p>
<p>In principle: An everyday object or an action that is structured like or functionally resembles an adder.</p>
<p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p>
<p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p>
<p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy i create with increasing amount of categories the way these categories transform becomes increasingly harder to explain, and the metaphor becomes overly specific and unhandy.</p>
<p>Hence the question:</p>
<p><strong>What's an everyday object that resembles an adder in it's basic functionality?</strong></p>
| Matthew Daly | 12,619 | <p>If you're looking for a metaphor that would instantly click with a modern audience, you might consider mobile merge games. The gameplay is that you have a board that is filled with pieces, and you can combine two identical pieces to yield a single "evolved" piece. For instance, maybe you combine two pieces of thread to get a spool of thread, two spools of thread make a piece of yarn, two pieces of yarn makes a ball of yarn, two balls of yarn makes a piece of rope, and so on. For illustration, <a href="https://youtu.be/RWGchSWy-H4" rel="nofollow noreferrer">here</a> is a video of someone making a notoriously complex item in Merge Mayor from a large number of base units.</p>
|
116,537 | <p>Let's say that I have</p>
<pre><code>x^2+x
</code></pre>
<p>Is there a way to map $x$ to the first derivative of a function and $x^2$ to the second derivative of the same function? According to <a href="http://reference.wolfram.com/language/ref/Slot.html" rel="nofollow">http://reference.wolfram.com/language/ref/Slot.html</a>, I know that I can change </p>
<pre><code>x /. x -> D[#, {y, 1}] &[a[y, z]]
x^2 /. x^2 -> D[#, {y, 2}] &[a[y, z]]
</code></pre>
<p>to yield the first and second derivatives of $a(y,z)$, respectively. I also know that I can use</p>
<pre><code>x^2+x/. x^2 -> D[#, {y, 2}] &[a[y, z]] /. x -> D[#, {y, 1}] &[a[y, z]]
</code></pre>
<p>but if I have say, a polynomial of degree 70, manually telling Mathematica to do this is highly inefficient. Is there a method, such that, for $x^n$, I can tell Mathematica to map $x^n$ to the $n^{th}$ derivative of a function?</p>
| Bruno Le Floch | 39,260 | <p>Rule-replacement with <code>x^n_. :> Derivative[n,0][a][y,z]</code> (as done in Kuba's answer) has two drawbacks: if your polynomial has a constant term, then it will not be replaced by the zero-th derivative <code>a[y,z]</code>, and if your polynomial is not expanded the result is incorrect. Namely, <code>(1+x)(2+x)</code> becomes <code>(1+a'[y,z])(2+a'[y,z])</code> rather than <code>2a[y,z]+3a'[y,z]+a''[y,z]</code> (here I use <code>'</code> to denote derivative with respect to the first variable).</p>
<p>One option is to multiply by <code>x</code> and <code>Expand</code>:</p>
<pre><code>{x, x^2, x^2 + x, (1 + x) (2 + x)} //
(Expand[x #] /. x^n_. :> Derivative[n-1, 0][a][y, z] &)
(*{Derivative[1, 0][a][y, z],
Derivative[2, 0][a][y, z],
Derivative[1, 0][a][y, z] + Derivative[2, 0][a][y, z],
2*a[y, z] + 3*Derivative[1, 0][a][y, z] + Derivative[2, 0][a][y, z]}*)
</code></pre>
<p>Another option which I find more natural is to get the <code>CoefficientList</code>, then rebuild the expression. But this does not thread nicely over lists of polynomials (here I used <code>x^2+x</code>).</p>
<pre><code>Total@MapIndexed[#1 Derivative[#2[[1]] - 1, 0][a][y, z] &,
CoefficientList[x + x^2, x]]
</code></pre>
|
1,562,503 | <p>Can anyone help me here?</p>
<p>Question: "$X$ is a normed space and $A$ is a dense subset of the dual of $X$.
$x$ belongs to $X$ and the sequence $(x_n)$ in $X$ is bounded and such that $f(x_n)$ converges to $f(x)$ for all $f$ in $A$. Show that $x_n$ converges to $x$ weakly."</p>
<p>My try: I think that if I show that $A=\operatorname{cl}(A)$ then I prove what is required. So I have to prove that $\operatorname{cl}(A)$ is contained in $A$, i.e., for all $y$ in $\operatorname{cl}(A)$: $y$ is in $A$.
Let $y$ belong to $\operatorname{cl}(A)$, i.e., there exists a sequence $(y_n)$ in $A$ such that $y_n$ converges to $y$.
Let $y_n := f(x_n)$ and $y:=f(x)$ (my doubt is: may I do this? because $y$ is in $A$ and $f(x_n)$ is the image of $x_n$, not a function)</p>
<p>Then I tried to prove that $f(x)$ is in $A$ but I think it's impossible :(</p>
| Justpassingby | 293,332 | <p>You're on the wrong track because $A$ is never closed except in the trivial case where $A=X^*.$</p>
<p>You need to prove that for arbitrary $f\in X^*$</p>
<p>$$\lim_{n\to\infty}f(x_n)=f(x).$$</p>
<p>You can do this using the $\epsilon$-definition of the limit of a sequence, first choosing a $g\in A$ sufficiently close to $f$ in the sense of the norm on $X^*$, then an $n$ sufficiently large, and use the triangle inequality (twice).</p>
|
3,088,766 | <p>I need to prove that the premise <span class="math-container">$A \to (B \vee C)$</span> leads to the conclusion <span class="math-container">$(A \to B) \vee (A \to C)$</span>. Here's what I have so far.</p>
<p><a href="https://i.stack.imgur.com/1AgTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AgTZ.png" alt="enter image description here" /></a></p>
<p>From here I'm stuck (and I'm not even sure if this is correct). My idea is to use negation intro by assuming the opposite and coming up with a contradiction. I assumed <span class="math-container">$A$</span> which led to <span class="math-container">$B \vee C$</span> and, as you can see, I'm trying or elim but the only way I can think of doing this is to use conditional intro and then or intro but that seems to only work for a single subproof. In other words, I can't use the assumption of <span class="math-container">$B$</span> to say <span class="math-container">$A \to B$</span>. This is called an indirect proof.</p>
| Rob Arthan | 23,171 | <p>Hint: if you assume <span class="math-container">$A \to (B \lor C)$</span>, <span class="math-container">$\lnot(A \to B)$</span> and <span class="math-container">$A$</span>, then you can conclude <span class="math-container">$B \lor C$</span> and <span class="math-container">$\lnot B$</span>. Can you take it from there?</p>
|
3,088,766 | <p>I need to prove that the premise <span class="math-container">$A \to (B \vee C)$</span> leads to the conclusion <span class="math-container">$(A \to B) \vee (A \to C)$</span>. Here's what I have so far.</p>
<p><a href="https://i.stack.imgur.com/1AgTZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AgTZ.png" alt="enter image description here" /></a></p>
<p>From here I'm stuck (and I'm not even sure if this is correct). My idea is to use negation intro by assuming the opposite and coming up with a contradiction. I assumed <span class="math-container">$A$</span> which led to <span class="math-container">$B \vee C$</span> and, as you can see, I'm trying or elim but the only way I can think of doing this is to use conditional intro and then or intro but that seems to only work for a single subproof. In other words, I can't use the assumption of <span class="math-container">$B$</span> to say <span class="math-container">$A \to B$</span>. This is called an indirect proof.</p>
| Frank Hubeny | 312,852 | <p>I assume you are using the <a href="http://proofs.openlogicproject.org/" rel="nofollow noreferrer">proof checker</a> associated with the <em>forallx</em> text. This proof checker provides no inference rules for <a href="https://en.wikipedia.org/wiki/Material_implication_(rule_of_inference)" rel="nofollow noreferrer">material implication</a>. So it may not be easy to proceed by negating the goal if it contains conditional statements. </p>
<p>The following attempt to complete the proof shows what might happen:</p>
<p><a href="https://i.stack.imgur.com/vqHH0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vqHH0.png" alt="enter image description here"></a></p>
<p>To discharge the assumption <span class="math-container">$A$</span> on line 3, I will likely have to use the law of the excluded middle on <span class="math-container">$A \lor \lnot A$</span> which would involve another subproof.</p>
<p>An alternate way to proceed is to negate something equivalent to either the premise or the goal to remove the conditional statements. For example note that the premise is equivalent to <span class="math-container">$\lnot A \lor (B \lor C)$</span>. If I negate that I can rewrite it using the following equivalences:</p>
<p><span class="math-container">$$\begin{align}
\lnot (\lnot A \lor (B \lor C)) &\equiv \lnot \lnot A \land \lnot (B\lor C)\\
&\equiv A \land \lnot (B\lor C)\\
\end{align}$$</span></p>
<p>Rather than using the negation of the conclusion, I will use that statement as the assumption and derive a contradiction. Here is the proof:</p>
<p><a href="https://i.stack.imgur.com/GGodq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GGodq.png" alt="enter image description here"></a></p>
<p>The contradiction was derived on line 6. Now I have to continue from this point and derive the goal itself. This detour of negating a statement equivalent to the premise (or goal) may make it easier to derive the goal.</p>
<hr>
<p>Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker <a href="http://proofs.openlogicproject.org/" rel="nofollow noreferrer">http://proofs.openlogicproject.org/</a></p>
|
3,325,340 | <p>Show that <span class="math-container">$$ \lim\limits_{(x,y)\to(0,0)}\dfrac{x^2y^2}{x^2+y^2}=0$$</span>
My try:
We know that, <span class="math-container">$$ x^2\leq x^2+y^2 \implies x^2y^2\leq (x^2+y^2)y^2 \implies x^2y^2\leq (x^2+y^2)^2$$</span>
Then, <span class="math-container">$$\dfrac{x^2y^2}{x^2+y^2}\leq x^2+y^2 $$</span>
So we choose <span class="math-container">$\delta=\sqrt{\epsilon}$</span>.</p>
| Axion004 | 258,202 | <p>In two variables the epsilon-delta definition for <span class="math-container">$\lim_{\substack{x\to a\\ y\to b}}f(x,y)=L$</span> means that for every <span class="math-container">$\epsilon >0$</span> there exists a <span class="math-container">$\delta>0$</span> such that <span class="math-container">$\big|f(x,y)-L\big|<\epsilon$</span> whenever <span class="math-container">$0<\sqrt{(x-a)^2+(y-b)^2}<\delta$</span>. </p>
<p>In your case, you want to show that <span class="math-container">$\big|f(x,y)-0\big|<\epsilon$</span> whenever <span class="math-container">$0<\sqrt{x^2+y^2}<\delta$</span>. You did this by showing that</p>
<p><span class="math-container">\begin{align}x^2y^2\leq (x^2+y^2)^2\implies\bigg|\frac{x^2y^2}{x^2+y^2}-0\bigg|\le\bigg|\frac{(x^2+y^2)(x^2+y^2)}{x^2+y^2}\bigg|=\bigg|x^2+y^2\bigg|=x^2+y^2\end{align}</span></p>
<p>so that you could choose <span class="math-container">$\delta=\sqrt{\epsilon}$</span> and then get the required form of <span class="math-container">$\big|f(x,y)-0\big|<\epsilon$</span>.</p>
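<p>The bound that drives the choice of <span class="math-container">$\delta$</span> can also be probed numerically (a sanity check, not part of the proof; the sample points are arbitrary):</p>

```python
import math

def f(x, y):
    return x * x * y * y / (x * x + y * y)

# |f(x, y) - 0| <= x^2 + y^2, so 0 < sqrt(x^2 + y^2) < delta = sqrt(eps)
# forces |f(x, y)| < eps
eps = 1e-4
for t in [1.0, 0.1, 0.01, 1e-3]:
    for x, y in [(t, t), (t, -2 * t), (3 * t, t)]:
        assert abs(f(x, y)) <= x * x + y * y
        if math.hypot(x, y) < math.sqrt(eps):
            assert abs(f(x, y)) < eps
```
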
|
524,870 | <p>This is adapted from 1.7.7 in Friedman's "Foundations of Modern Analysis":</p>
<blockquote>
<p>Let $\mathscr{B}$ be the $\sigma$-ring generated by the class of open subsets of $X$ [a fixed set], and $\mathscr{D}$ the $\sigma$-ring generated by the class of closed subsets of $X$. Show that $\mathscr{D} = \mathscr{B}$.</p>
</blockquote>
<p>I would appreciate a hint on how to begin doing this exercise, because I haven't a clue. I'm not really sure what the problem entails. I know what a $\sigma$-ring is, as well as open and closed sets. That $\mathscr{B},\mathscr{D}$ are generated means that they each are, in a sense, the smallest and unique ring containing its respective "underlying" class of sets. But none of this gives me any idea on where to start. The problem is from a section on metric spaces (prior to metric outer measures), but again not even the context gives me any ideas.</p>
<p>(Couldn't come up with a good title, change it if you like...)</p>
| Robert Israel | 8,508 | <p>Hint: show that every closed set is in $\mathscr B$ and every open set is in $\mathscr D$.</p>
|
617,747 | <p>In mathematics, how does something like complex numbers apply to the real world? Why do complex numbers exist? How can we comprehend addition of complex numbers? For example, addition of natural numbers can be understood as putting together two apples and two oranges makes four fruits. How can we apply this thinking to complex numbers?</p>
| Gerry Myerson | 8,269 | <p>"how does something like complex numbers apply to the real world?" Type $$\rm circuits\ and\ complex\ numbers$$ into Google, and you will find that computations of currents in electrical circuits are done using complex numbers. </p>
<p>"Why do complex numbers exist?" The equation $x^3-4x+1=0$ has three real solutions, but it is impossible to express them (in terms of arithmetical operations and square roots and cube roots) without using complex numbers. So complex numbers come up naturally in finding real solutions of real equations. Key search phrase: casus irreducibilis. </p>
<p>"How can we comprehend addition of complex numbers?" What is there to comprehend? If we want to add $2+3\sqrt{-1}$ to $4+5\sqrt{-1}$, we do $2+4=6$ and $3+5=8$ to get the answer $6+8\sqrt{-1}$. Just like adding 2 apples and 3 unicorns to 4 apples and 5 unicorns, getting 6 apples and 8 unicorns. </p>
|
617,747 | <p>In mathematics, how does something like complex numbers apply to the real world? Why do complex numbers exist? How can we comprehend addition of complex numbers? For example, addition of natural numbers can be understood as putting together two apples and two oranges makes four fruits. How can we apply this thinking to complex numbers?</p>
| Lucian | 93,448 | <p>Numbers count $(\mathbb{N})$ and measure $(\mathbb{R})$. Yet complex $(\mathbb{C})$ or imaginary $(i\,\mathbb{R})$ numbers do neither. So what good are they anyway ? $($Is this what you're asking ?$)$ Well, let's just say that <a href="http://en.wikipedia.org/wiki/Complex_number#Applications" rel="noreferrer">engineering</a> as we know it would be a whole lot more difficult to either understand or apply without their help, as would <a href="http://en.wikipedia.org/wiki/Quaternion" rel="noreferrer">mechanics and computer graphics</a>, or even <a href="http://en.wikipedia.org/wiki/Octonion" rel="noreferrer">modern physics</a>, for that matter. Many of the practical problems that arise in these various fields often require solving contour integrals, which in many instances simply cannot be done without the use of <a href="http://en.wikipedia.org/wiki/Methods_of_contour_integration" rel="noreferrer">complex integration</a>. Their <a href="http://en.wikipedia.org/wiki/Euler%27s_formula" rel="noreferrer">trigonometric</a> applications range from <a href="http://en.wikipedia.org/wiki/Cartesian_coordinate_system" rel="noreferrer">geographic location</a> and <a href="http://en.wikipedia.org/wiki/Polar_coordinate_system" rel="noreferrer">cartographic projections</a> to <a href="http://en.wikipedia.org/wiki/Fourier_transform" rel="noreferrer">signal processing</a> and <a href="http://en.wikipedia.org/wiki/Z-transform#Inverse_Z-transform" rel="noreferrer">other</a> branches of <a href="http://en.wikipedia.org/wiki/Alternating_current" rel="noreferrer">electrical engineering</a>. 
Basically, all radio or acoustic signals, as well as electricity itself, are nothing but <a href="http://en.wikipedia.org/wiki/Sine_wave" rel="noreferrer">sinusoidal waveforms</a> (and those that aren't can easily be <a href="http://en.wikipedia.org/wiki/Fourier_analysis" rel="noreferrer">decomposed</a> into such), whose study would become very tedious really fast, were it not for Euler's relationship, $e^{ix}=\cos x+i\cdot\sin x$.</p>
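<p>Euler's relationship $e^{ix}=\cos x+i\sin x$ is easy to confirm numerically in a couple of lines:</p>

```python
import cmath, math

for x in [0.0, 0.5, math.pi / 3, 2.0, -1.2]:
    assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
```
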
|
4,572,804 | <p>I'm working my way through Murphy's <em>C*-algebras and Operator Theory</em> and I have a question concerning the proof that every C*-algebra admits an approximate identity.</p>
<p>Let A be an arbitrary C*-algebra. We denote by <span class="math-container">$\Lambda$</span> the set of all positive elements a in A such that <span class="math-container">$||a||<1$</span>. It can be shown that <span class="math-container">$\Lambda$</span> is a poset under the partial order of <span class="math-container">$A_{sa}$</span> (the set of all hermitian elements in A). The partial order is defined by <span class="math-container">$a\leq b \iff b-a\geq 0$</span>, where <span class="math-container">$a\geq 0$</span> means a is positive, in other words a hermitian element such that <span class="math-container">$\sigma(a)\subseteq [0,\infty \rangle$</span>. It is also shown that <span class="math-container">$\Lambda$</span> is an upwards-directed set, so we can define a net with <span class="math-container">$\Lambda$</span> as its index set. The proof goes on as follows:<a href="https://i.stack.imgur.com/6zUlm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6zUlm.jpg" alt="proof from Murphy's book" /></a></p>
<p>How does <span class="math-container">$||f-\delta gf||\leq \epsilon$</span> follow? Also, the choice of <span class="math-container">$\lambda_0$</span> is confusing to me; basically, I can't wrap my head around the rest of the proof.</p>
| belkacem abderrahmane | 660,639 | <p>For <span class="math-container">$w\in K$</span>, <span class="math-container">$g(w)=1\implies \lvert f-\delta g f\rvert =(1-\delta)\lvert f\rvert <\epsilon$</span> (since the Gelfand transformation is an isometry); for <span class="math-container">$w\in K^{c}$</span>, <span class="math-container">$\lvert f(w)\rvert < \epsilon$</span> and <span class="math-container">$\lvert g(w)\rvert \leq 1$</span>.</p>
|
4,572,804 | <p>I'm working my way through Murphy's <em>C*-algebras and Operator Theory</em> and I have a question concerning the proof that every C*-algebra admits an approximate identity.</p>
<p>Let A be an arbitrary C*-algebra. We denote by <span class="math-container">$\Lambda$</span> the set of all positive elements a in A such that <span class="math-container">$||a||<1$</span>. It can be shown that <span class="math-container">$\Lambda$</span> is a poset under the partial order of <span class="math-container">$A_{sa}$</span> (the set of all hermitian elements in A). The partial order is defined by <span class="math-container">$a\leq b \iff b-a\geq 0$</span>, where <span class="math-container">$a\geq 0$</span> means a is positive, in other words a hermitian element such that <span class="math-container">$\sigma(a)\subseteq [0,\infty \rangle$</span>. It is also shown that <span class="math-container">$\Lambda$</span> is an upwards-directed set, so we can define a net with <span class="math-container">$\Lambda$</span> as its index set. The proof goes on as follows:<a href="https://i.stack.imgur.com/6zUlm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6zUlm.jpg" alt="proof from Murphy's book" /></a></p>
<p>How does <span class="math-container">$||f-\delta gf||\leq \epsilon$</span> follow? Also, the choice of <span class="math-container">$\lambda_0$</span> is confusing to me; basically, I can't wrap my head around the rest of the proof.</p>
| Danny Pak-Keung Chan | 374,270 | <p><span class="math-container">$f\in C_{0}(\Omega)$</span> with <span class="math-container">$||f||_{\sup}=||a||<1$</span>. Let <span class="math-container">$\omega\in\Omega$</span>
be arbitrary. If <span class="math-container">$\omega\in K$</span>, then <span class="math-container">$g(\omega)=1$</span>. Therefore
<span class="math-container">\begin{eqnarray*}
\left|f(\omega)-\delta g(\omega)f(\omega)\right| & = & (1-\delta)|f(\omega)|\\
& \leq & 1-\delta\\
& < & \varepsilon.
\end{eqnarray*}</span>
If <span class="math-container">$\omega\notin K$</span>, then <span class="math-container">$|f(\omega)|<\varepsilon$</span> and <span class="math-container">$0\leq g(\omega)\leq1$</span>.
Therefore</p>
<p><span class="math-container">\begin{eqnarray*}
\left|f(\omega)-\delta g(\omega)f(\omega)\right| & \leq & |f(\omega)|\cdot|1-\delta g(\omega)|\\
& \leq & |f(\omega)|\\
& < & \varepsilon.
\end{eqnarray*}</span></p>
<p>It follows that <span class="math-container">$||f-\delta gf||_{\sup}\leq\varepsilon$</span>.</p>
|
137,755 | <p>Suppose that $X$ is a scheme and $x\in X$ is a point. The stalk of $X$ at $x$ is a (local) ring and we can form its spectrum $Y_x=\rm{Spec}(\mathcal{O}_{X,x})$.</p>
<p>There is a canonical map $Y_x\to X$. We can define it by fixing an affine neighborhood $x\in U\cong \rm{Spec}(R)$, viewing $x$ as a prime ideal in $R$, so that $\mathcal{O}_{X,x}\cong R_x$ is a localization. The localization map $R\to R_x$ then induces the map of schemes $Y_x\to U \subseteq X$.</p>
<p>My question is this: is there a name for this construction? Are there familiar methods or theorems where it arises?</p>
| Georges Elencwajg | 450 | <p>Topologically the scheme $\rm{Spec}(\mathcal{O}_{X,x})$ is exactly the intersection of all neighbourhoods of $x$ and algebraically it contains every infinitesimal neighbourhood of $X$.<br>
Although technically it is not the germ of $X$ at $x$, it seems to me that it contains so much information about that germ that it could be considered a materialization of that germ.<br>
Also: technically it is not a subscheme of $X$ (since it is not locally closed) but I can imagine a world where a broader notion of subscheme would allow the monomorphism $j: S=\rm{Spec}(\mathcal{O}_{X,x})\hookrightarrow X$ to be called a subscheme, an almost open subscheme if you will.<br>
One argument for that broader point of view is that the induced canonical morphism $j^{-1}\mathcal O_X=\mathcal O_X\mid S \stackrel {\cong}{\to} \mathcal O_S$ of sheaves over $S$ is an isomorphism, just as if $S\subset X$ were open.</p>
<p>In particular, given a morphism of rings $A\to B$ , the corresponding morphism of affine schemes $\phi:\rm{Spec} (B)\to \rm{Spec} (A)$ and a prime ideal $\mathfrak p\subset A$, the morphism $\rm{Spec} (B_{\mathfrak p}) \to \rm{Spec}(A_{\mathfrak p} )$ is a pleasant thickening of the genuine fiber $\phi^{-1}(\mathfrak p)=\rm{Spec}(B\otimes_A \kappa(\mathfrak p))$ of $\mathfrak p$.<br>
Considerations of such almost germs $\rm{Spec}(A_{\mathfrak p} )$ of $\rm{Spec}(A)$ at $\mathfrak p$ may be of help when following reasonings in commutative algebra. </p>
<p>My first contact with one of these almost germs (or almost open subschemes) was in Mumford's Red Book, Chapter Two, §1, Example F, where he describes an example as "... a startling way to make a scheme out of the non closed points in the plane" and draws one of his celebrated pictures to illustrate the notion. </p>
|
209,761 | <p>I've the following code:</p>
<pre><code>Table[b = (-12 +Sqrt[3] Sqrt[3 (-4 + r)^2 + 12 a^2 (-2 + r) - 4 a (-5 + r) (-2+ r) +4 a^3 (-2 + r)^2] + 3 r)/(6 (-2 + r)) /. r -> 5;
If[IntegerQ[b], {b, a}, Nothing], {a, 1+10^(11), 10^(12)}]
</code></pre>
<p>But it gives me the following warning: 'SystemException["MemoryAllocationFailure"]'.</p>
<blockquote>
<p>Is there a way to avoid this warning? Maybe I have to edit my code?</p>
</blockquote>
| yarchik | 9,469 | <p>You can replace <code>Table</code> with <code>Do</code></p>
<pre><code>Do[If[IntegerQ[1/6 (1 + Sqrt[1 + 12 a^2 + 12 a^3])], Print[a]], {a, 1 + 10^(11), 10^4 + 10^(11)}]
</code></pre>
<p>but it is still slow. Try to bring your diophantine equation to some known type.</p>
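<p>A pure-Python analogue of the loop above, using exact integer arithmetic (<code>math.isqrt</code>) in place of <code>IntegerQ</code>, with the bound shrunk so it runs quickly:</p>

```python
from math import isqrt

def solutions(amax):
    """Return all a in [1, amax] with (1 + sqrt(1 + 12a^2 + 12a^3))/6 an integer."""
    found = []
    for a in range(1, amax + 1):
        d = 1 + 12 * a * a + 12 * a**3
        r = isqrt(d)                      # exact integer square root
        if r * r == d and (1 + r) % 6 == 0:
            found.append(a)
    return found

print(solutions(10**4))   # the exhaustive runs reported for this question found only a = 1
```
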
|
209,761 | <p>I've the following code:</p>
<pre><code>Table[b = (-12 +Sqrt[3] Sqrt[3 (-4 + r)^2 + 12 a^2 (-2 + r) - 4 a (-5 + r) (-2+ r) +4 a^3 (-2 + r)^2] + 3 r)/(6 (-2 + r)) /. r -> 5;
If[IntegerQ[b], {b, a}, Nothing], {a, 1+10^(11), 10^(12)}]
</code></pre>
<p>But it gives me the following warning: 'SystemException["MemoryAllocationFailure"]'.</p>
<blockquote>
<p>Is there a way to avoid this warning? Maybe I have to edit my code?</p>
</blockquote>
| bbgodfrey | 1,063 | <p>The approach suggested by yarchik can be accelerated by two orders of magnitude by performing the computations with machine precision numbers instead of exact numbers and then rounding, and by using <code>ParallelDo</code>:</p>
<pre><code>SetSharedVariable[s]
s = {};
ParallelDo[If[IntegerQ[(1 + a*Round[Sqrt[1./(a*a) + 12. (a + 1)], 10^-10])/6],
AppendTo[s, a]], {a, 1, 10^9}] // AbsoluteTiming
s
(* {1795.35, Null} *)
(* 1 *)
</code></pre>
<p>The computation for <code>a</code> as large as <code>10^9</code> yields the single answer <code>a -> 1</code> in about 30 minutes. Consequently, the computation for <code>a</code> as large as <code>10^12</code> should take about 21 days, a long but not impossibly long time. It seems likely that there are no additional solutions.</p>
<p>Note that there is a tradeoff in increasing the second argument of <code>Round</code>: it reduces runtime but may also yield some false positives. Eliminating the false positives is, of course, straightforward and fast, provided there are not too many.</p>
<p><strong>Addendum: Solution using Reduce</strong></p>
<p>The corresponding Diophantine equation is <code>a^2 + a^3 + b - 3 b^2 == 0</code>, as can be seen from <code>Simplify[-(6 b - 1)^2 + 1 + 12 a^2 + 12 a^3]</code>. It can be solved by means of <code>Reduce</code> for modest maximum values, <code>amx</code>, of <code>a</code> by</p>
<pre><code>amx = 10^6;
SetSystemOptions["ReduceOptions" -> {"ExhaustiveSearchMaxPoints" -> {100, amx}}];
Reduce[{a^2 + a^3 + b - 3 b^2 == 0, amx > a > 0, b > 0}, {a, b}, Integers]
// AbsoluteTiming
(* {230.319, a == 1 && b == 1} *)
</code></pre>
<p>The corresponding run time for the solution by yarchik is 143.7 seconds. For the code given earlier in this answer (but with <code>ParallelDo</code> replaced by <code>Do</code> for consistency) the run time is 4.00122 seconds. Of course, the solution using <code>Reduce</code> and that by yarchik both can be parallelized, reducing run times proportionately. </p>
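<p>The reduction to this Diophantine equation can be double-checked with exact integer arithmetic (a small sketch, in Python rather than Mathematica):</p>

```python
# -(6b - 1)^2 + 1 + 12a^2 + 12a^3 == 12*(a^2 + a^3 + b - 3b^2), so
# (6b - 1)^2 == 1 + 12a^2 + 12a^3 exactly when a^2 + a^3 + b - 3b^2 == 0
for a in range(-10, 11):
    for b in range(-10, 11):
        assert -(6 * b - 1)**2 + 1 + 12 * a * a + 12 * a**3 == \
               12 * (a * a + a**3 + b - 3 * b * b)

assert 1**2 + 1**3 + 1 - 3 * 1**2 == 0   # (a, b) = (1, 1), the solution found above
```
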
|
3,183,617 | <p>I have an equation that looks like <span class="math-container">$$X' = a \sin(X) + b \cos(X) + c$$</span> where <span class="math-container">$a,b$</span> and <span class="math-container">$c$</span> are constants. For given values of <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> how can I calculate X? I have set of values for <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> and I am looking for an equation that could solve for <span class="math-container">$X$</span> or other approaches like numerical methods are also ok. Thanks in advance.</p>
<p>My approach is as below:
<span class="math-container">$$X' = a \sin(X) + b \cos(X) + c \ (Integrating \ this \ eqn)$$</span>
<span class="math-container">$$X = d \cos(X) + b \sin(X) + cX \\where(d = -a)$$</span>
<span class="math-container">$$X = p\sin(X + q) + cX \\where \ p= sqrt(d^2 + b^2) \ and \ cosq = d/p ,\ sinq=b/p$$</span>
<span class="math-container">$$X(1-c)/p = sin(X+q) \\ where \ p/(1-c) = r$$</span>
<span class="math-container">$$X - rsin(X+q) = 0$$</span>
<span class="math-container">$$f(X) = X - r \ sin(X+q) \\ f'(X) = 1 - rcos(X+q)$$</span>
To the above equations I applied Newton's method:
<span class="math-container">$$X(n+1) = X(n) - f(X(n))/f'(X(n)) \\ X(0) = 1$$</span>
I ran this for 2000 iterations and found that the solution is not converging to the expected result. Is there something wrong with the mathematical derivation, or is it not possible to get results from this approach?</p>
| Lutz Lehmann | 115,115 | <h3>direct transformation using half-angle formulas</h3>
<p>The probably best systematic method (from integrals of quotients of trigonometric expressions) is the half-angle tangent substitution. Set <span class="math-container">$U=\tan(X/2)$</span>, then <span class="math-container">$\sin(X)=\frac{2U}{1+U^2}$</span>, <span class="math-container">$\cos(X)=\frac{1-U^2}{1+U^2}$</span> and
<span class="math-container">$$
U'=(1+U^2)X'=2aU+b(1-U^2)+c(1+U^2)
$$</span>
which now has a quadratic right side and can be solved via partial fraction decomposition. Or one can see it as a Riccati equation, and get with the further substitution <span class="math-container">$U=\frac{V'}{(b-c)V}$</span> a second order linear ODE with constant coefficients.</p>
<hr />
<h3>correct integration using separation of variables</h3>
<p>As to your edit: separation of variables leads to
<span class="math-container">$$
\int\frac{dX}{a\sin X+b\cos X+c}=t+k.
$$</span>
Here then you proceed with the integral of a rational trigonometric expression, naturally inviting to apply the half angle formula.</p>
<hr />
<h3>direct approach continued, (implicit) partial fraction decomposition</h3>
<p>You can solve the transformed equation or the integral by finding the roots of the quadratic, so that
<span class="math-container">$$
U' = (c-b)(U-r_1)(U-r_2)
$$</span>
and the appropriate transformation is <span class="math-container">$V=\frac{U-r_1}{U-r_2}$</span> transforming it into a linear first order ODE.
<span class="math-container">$$
V'=\frac{r_1-r_2}{(U-r_2)^2}U'=(r_1-r_2)(c-b)V\implies V=Ce^{(r_1-r_2)(c-b)t}
$$</span>
where the roots <span class="math-container">$r_k$</span> are those of the quadratic equation <span class="math-container">$[(c-b)r]^2+2a[(c-b)r]+(c^2-b^2)=0$</span>.</p>
<hr />
<h3>using the approach via a second order linear ODE</h3>
<p>A more direct solution comes from the Riccati transformation into a second order ODE, <span class="math-container">$V=\exp((b-c)\int U\,dt)$</span>, so that
<span class="math-container">$$
\frac{V''}{(b-c)V}-\frac{V'^2}{(b-c)V^2}=(b+c)+2a\frac{V'}{(b-c)V}-(b-c)\frac{V'^2}{(b-c)^2V^2}
\\\implies
V''-2aV'+(c^2-b^2)V=0
$$</span>
which again is easy to solve, and the back-substitutions here are IMO easiest to perform.</p>
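<p>The half-angle identities that start the first method can be verified numerically (a sketch; the sample angles are arbitrary):</p>

```python
import math

# sin X = 2U/(1+U^2) and cos X = (1-U^2)/(1+U^2) with U = tan(X/2)
for X in [0.3, 1.0, 2.0, -0.7, 2.9]:
    U = math.tan(X / 2)
    assert abs(math.sin(X) - 2 * U / (1 + U * U)) < 1e-12
    assert abs(math.cos(X) - (1 - U * U) / (1 + U * U)) < 1e-12
```
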
|
2,296,724 | <p>I need to calculate $(A+B)^{-1}$, where $A$ and $B$ are two square, very sparse and very large. $A$ is block diagonal, real symmetric and positive definite, and I have access to $A^{-1}$ (which in this case is also sparse, and block diagonal). $B$ is diagonal and real positive. In my application, I need to calculate the inverse of the sum of these two matrices where the inverse of the non-diagonal one (e.g. $A^{-1}$) is updated frequently, and readily available.</p>
<p>Since $B$ is full rank, the Woodbury lemma is of no use here (well, it is, but it's too slow). Other methods described in <a href="https://math.stackexchange.com/questions/17776/inverse-of-the-sum-of-matrices">this nice question</a> are of no use in my case as the spectral radius of $A^{-1}B$ is much larger than one. Methods based on a diagonalisation assume that it is the diagonal matrix that is being updated frequently, whereas that's not my case (i.e., diagonalising $A$ is expensive, and I'd have to do that very often).</p>
<p>I'm quite happy to live with an approximate solution.</p>
| Oussama Boussif | 258,472 | <p>Let $A_1,A_2,\cdots,A_q$ be the diagonal blocks of $A$, and $a_{1,1},a_{1,2},\cdots,a_{1,n_1},a_{2,1},a_{2,2},\cdots,a_{2,n_2},\cdots,a_{q,1},a_{q,2},\cdots,a_{q,n_q}$ the diagonal elements of $B$; then the inverse of the sum is a block diagonal matrix with blocks ${(A_i+diag(a_{i,1},\cdots,a_{i,n_i}))}^{-1}$ for $i\in(1,2,\cdots,q)$.</p>
<p>So the problem is reduced to finding the inverse of the sum of a matrix and a diagonal matrix. Fortunately, $A$ is symmetric positive definite, so each $A_i$ is orthogonally diagonalizable, and we can write it as follows (note that the next step, which moves $diag(a_{i,1},\cdots,a_{i,n_i})$ inside the conjugation by $P_i$, is only exact when that diagonal block is a scalar multiple of the identity):</p>
<p>$$
A_i=P_iD_i{P_i}^{T}=P_i\begin{bmatrix}
\lambda_{i,1} & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & \lambda_{i,n_{i}}
\end{bmatrix}{P_i}^{T}
$$</p>
<p>Where $\lambda_{i,1},\cdots,\lambda_{i,n_{i}}$ are the eigenvalues of $A_i$, hence:</p>
<p>$$
A_i+diag(a_{i,1},\cdots,a_{i,n_i})=P_i\begin{bmatrix}
\lambda_{i,1}+a_{i,1} & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & \lambda_{i,n_{i}}+a_{i,n_i}
\end{bmatrix}{P_i}^{T}
$$</p>
<p>Since $A_i$ and $B$ are symmetric positive definite, $\lambda_{i,j}+a_{i,j}\neq 0$, so:</p>
<p>$$
{(A_i+diag(a_{i,1},\cdots,a_{i,n_i}))}^{-1}=P_i\begin{bmatrix}
\frac{1}{\lambda_{i,1}+a_{i,1}} & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & \frac{1}{\lambda_{i,n_{i}}+a_{i,n_i}}
\end{bmatrix}{P_i}^{T}=P_iD_i{P_i}^{T}
$$</p>
<p>Hence ${(A+B)}^{-1}$ is a block diagonal matrix whose diagonal blocks are the matrices above.</p>
<p>You can rewrite that as </p>
<p>$$
{(A+B)}^{-1}=\begin{bmatrix}
P_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & P_q
\end{bmatrix}\times\begin{bmatrix}
D_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & D_q
\end{bmatrix}\times\begin{bmatrix}
{P_1}^{T} & 0 & 0 & \cdots & 0 & 0 \\
0 & \ddots & \ddots & \ddots & & 0 \\\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 0 & {P_q}^{T}
\end{bmatrix}
$$</p>
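<p>The blockwise reduction in the first paragraph can be checked numerically on small made-up blocks (a NumPy sketch; here each block is inverted directly):</p>

```python
import numpy as np

A1 = np.array([[2.0, 0.5], [0.5, 1.0]])   # SPD blocks (made-up data)
A2 = np.array([[3.0, 1.0], [1.0, 2.0]])
Z = np.zeros((2, 2))
A = np.block([[A1, Z], [Z, A2]])
b = np.array([0.7, 0.3, 1.1, 0.4])        # diagonal of B, positive

direct = np.linalg.inv(A + np.diag(b))
blockwise = np.block([
    [np.linalg.inv(A1 + np.diag(b[:2])), Z],
    [Z, np.linalg.inv(A2 + np.diag(b[2:]))],
])
assert np.allclose(direct, blockwise)     # blockwise inversion matches the full inverse
```
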
|
2,614,920 | <p>Not understanding the concept well, I am trying to determint the pointwise and uniform convergence of the following sequence of function:</p>
<p>$$f_n(x) = \frac{\sin{nx}}{n^3}, x \in \mathbb{R}$$</p>
<p>The only part I understand so far is that I need $\lim_{n\to\infty}{f_n(x)}$, for which I have determined (if I am correct) that:</p>
<p>$$\lim_{n\to\infty}{f_n(x)} = \lim_{n\to\infty}{\frac{\sin{nx}}{n^3}} = 0$$</p>
<p>Where do I go from here?</p>
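<p>(An added numerical sanity check: since $|\sin(nx)|\le 1$, with equality attained at $x=\pi/(2n)$, the sup-norm of $f_n$ over $\mathbb{R}$ is exactly $1/n^3$, and $1/n^3 \to 0$ is what uniform convergence to $0$ requires.)</p>

```python
import math

for n in [1, 2, 5, 10]:
    grid = [0.01 * k for k in range(-700, 701)]            # sample points in [-7, 7]
    sup = max(abs(math.sin(n * x)) / n**3 for x in grid)
    assert sup <= 1 / n**3 + 1e-15                          # ||f_n|| <= 1/n^3 ...
    peak = math.sin(n * (math.pi / (2 * n))) / n**3
    assert abs(peak - 1 / n**3) < 1e-15                     # ... with the bound attained
```
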
| Paul Frost | 349,785 | <p>It seems that you argue that the restriction of a (strong) deformation retraction <span class="math-container">$r : X \to A$</span> to any <span class="math-container">$Y$</span> with <span class="math-container">$A \subset Y \subset X$</span> is again a (strong) deformation retraction. It is of course a retraction, but in general no longer a homotopy equivalence. A trivial example is <span class="math-container">$X = [0,1], A = \{ 0 \}, Y= \{0,1\}$</span>.</p>
|
2,880,384 | <p>Look at the following definition.</p>
<p><strong>Definition.</strong> Let $\kappa$ be an infinite cardinal. A theory $T$ is called $\kappa$-stable if for all model $M\models T$ and all $A\subset M$ with $|A|\leq \kappa$ we have $|S_n^M(A)|\leq \kappa$.
A theory $T$ is called stable if it is $\kappa$-stable for some infinite cardinal $\kappa$.</p>
<p>I am a beginner in model theory, so my questions might be stupid. When you read a basic textbook in math you see that the algebraic, analytic, geometric, topological, and other definitions/concepts are natural. For example, algebraic concepts such as group, ring, field, module, and Galois theory have clear natural roots. Also, the notion of continuity in analysis is a natural (in my sense!) concept. The idea behind topology is also natural. But the model-theoretic notions are usually not concrete for me at all! For example, one of the main branches of model theory is stability theory (which is part of Shelah's classification), in which you need to count the number of types. I would like to know: where does the idea of stability (counting the number of types) come from?</p>
<p>Any reference would be appreciated.</p>
| user584026 | 584,026 | <p>Just to add one point on why the classification program is natural:</p>
<p>One can interpret the model theorist's approach to studying a structure $M$ as assigning to $M$ its "logical invariances", which is just a fancy way to refer to the theory $T$ of $M$.</p>
<p>It is therefore natural to try to understand how powerful these logical invariances are, that is, how far $T$ is from completely characterizing its models up to isomorphism.</p>
<p>It turns out, by the Löwenheim–Skolem theorem, that the only case where $T$ has absolute power is absolutely boring: $T$ has a single finite model up to isomorphism. So the first non-boring situation where $T$ is powerful is when $T$ has few models in a certain infinite cardinality. Stability is a slightly different expression of the idea that $T$ is powerful: here $T$ has few types over small parameter sets instead of few models. As Noah pointed out, these two notions of power are closely related.</p>
<p>The primary insight of Morley and later Shelah, in my opinion, is the following: when $T$ is powerful in a certain way, this is due to the fact that models of $T$ are equipped with some kind of special algebraic features (dimension, independence relation, ...) and do not encode complicated combinatorial patterns.</p>
<p>From this point of view, one can see the various definitions of stability as equating being powerful (having few types) with having algebraic features (a local notion of dimension, an independence relation with special properties) and with not encoding complicated combinatorial patterns (in this case an ordering). One can also think of Morley's Theorem as a corollary of this phenomenon: these features persist through other cardinalities, which leads to the theory also having few models in other cardinalities.</p>
|
1,187,376 | <p>Let $c(n,k)$ be the unsigned Stirling numbers of the first kind, i.e., the number of
$n$-permutations with exactly $k$ cycles.
Apparently, $$\sum_{k=1}^n c(n,k)2^k = (n+1)!$$</p>
<p>I want to prove the equality. </p>
<p>I am most interested in a combinatorial explanation. </p>
<p>The exponential generating function for the RHS is $\frac1{(1-x)^2}$. Is there a way to derive the e.g.f. for the LHS symbolically? </p>
| Marko Riedel | 44,883 | <p>By way of enrichment here is a proof using generating functions.
Suppose we seek to evaluate
$$\sum_{k=1}^n \left[n\atop k\right] 2^k.$$</p>
<p>The species of decompositions of permutations into cycles marked by
the number of cycles is
$$\mathfrak{P}(\mathcal{U}\mathfrak{C}(\mathcal{Z})).$$
This gives the generating function
$$G(z, u) = \exp\left(u\left(\log\frac{1}{1-z}\right)\right)$$
which immediately yields
$$\left[n\atop k\right] =
n! [z^n] \frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k.$$</p>
<p>This gives for the sum
$$n! [z^n] \sum_{k=1}^n 2^k \times
\frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k.$$</p>
<p>The term at $k=0$ does not contribute so we may include it to get
$$n! [z^n] \sum_{k=0}^n 2^k \times
\frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k.$$</p>
<p>Furthermore $\log\frac{1}{1-z}$ starts at $z$
and $\left(\log\frac{1}{1-z}\right)^k$ starts at $z^k$
so we may extend the sum
to infinity, getting
$$n! [z^n] \sum_{k=0}^\infty 2^k \times
\frac{1}{k!} \left(\log\frac{1}{1-z}\right)^k.$$</p>
<p>This is
$$n! [z^n] \exp\left(2\log\frac{1}{1-z}\right)
= n! [z^n] \frac{1}{(1-z)^2}.$$
This finally yields
$$n! {n+1\choose n} = (n+1)!$$</p>
<p><strong>Addendum.</strong> The proof by @QuiaochuYuan is remarkably elegant.
Suppose you are distributing $Q$ colors into $n$ slots (initially
creating $Q^n$ possible configurations) and want to
count orbits under the action of the symmetric group $S_n$ (all $n!$
permutations) on the slots. By Burnside you need to average the number
of colorings fixed by a given permutation $\sigma$ over all $n!$
permutations. But a permutation with $k$ cycles fixes $Q^k$ colorings
(color must be constant on a cycle). Therefore the number of colorings
is given by</p>
<p>$$\frac{1}{n!} \sum_{k=1}^n \left[n\atop k\right] Q^k.$$</p>
<p>On the other hand lining up the colors according to some order
we have by stars-and-bars that there are
$${n+Q-1\choose Q-1}$$
colorings, thus completing the proof.</p>
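<p>As an external sanity check (not part of either argument above), both the $Q=2$ identity and the Burnside count can be verified with a short script; the helper name <code>stirling1</code> and the small ranges are my own choices, using the standard recurrence $\left[{n\atop k}\right]=(n-1)\left[{n-1\atop k}\right]+\left[{n-1\atop k-1}\right]$:</p>

```python
from math import factorial, comb

def stirling1(n, k):
    """Unsigned Stirling number of the first kind: number of
    permutations of n elements with exactly k cycles."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    # Recurrence: [n, k] = (n-1)*[n-1, k] + [n-1, k-1]
    return (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

# Main identity: sum_k [n, k] * 2^k = (n + 1)!
for n in range(1, 9):
    assert sum(stirling1(n, k) * 2 ** k for k in range(1, n + 1)) == factorial(n + 1)

# Burnside count from the addendum: (1/n!) * sum_k [n, k] * Q^k = C(n + Q - 1, Q - 1)
for n in range(1, 7):
    for Q in range(1, 6):
        lhs = sum(stirling1(n, k) * Q ** k for k in range(1, n + 1))
        assert lhs == factorial(n) * comb(n + Q - 1, Q - 1)

print("both identities verified for small n")
```

<p>The recursive helper is exponential-time, which is fine for these small values.</p>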
|
2,293,147 | <p>I was trying to solve this ODE $\frac{dy}{dx} = c_{1} + c_{2}y + \frac{c_{3}}{y} , y(0) = c , c >0$.</p>
<p>where $c_{1},c_{2},c_{3}$ are three real numbers say $c_{1} < 0,c_{2},c_{3} > 0$.</p>
<p>I thought of using separation of variables giving me $x = \int(\frac{y}{c_{1}y+c_{2}y^2+c_{3}})dy + c$.</p>
<p>Next I am trying to reduce the denominator into a perfect-square form $(a + by)^2 + c$, so matching $(a + by)^2$ with the quadratic and linear terms of $c_{1}y + c_{2}y^2 + c_{3}$
we get,</p>

<p>$(c_{1}y + c_{2}y^2 + c_{3}) = (\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y)^2 + (c_{3} - \frac{c_{1}^2}{4c_{2}})$</p>

<p>thus $x = \int\frac{y}{(\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y)^2 + (c_{3} - \frac{c_{1}^2}{4c_{2}})}\, dy + c$.</p>
<p>Now I am stuck at this point.
Also it makes me think whether there exists an analytic solution to this ODE?</p>
| Community | -1 | <p>By a linear transform $y=ax+b$, you can establish</p>
<p>$$\frac y{c_1y+c_2y^2+c_3}=\frac{ax+b}{c(x^2\pm1)}$$ where the sign is dictated by an expression below.</p>
<p>For this, write</p>
<p>$$c_1(ax+b)+c_2(a^2x^2+2abx+b^2)+c_3$$ and identify</p>
<p>$$\begin{cases}c_2a^2=c,\\c_1a+2c_2ab=0,\\c_1b+c_2b^2+c_3=\pm c.\end{cases}$$ (You draw $b$ from the second equation, then $c$ from the third and $a$ from the first. $c$ must have the sign of $c_2$.)</p>
<p>Now,</p>
<p>$$\int\frac x{x^2\pm1}\,dx=\tfrac12\ln\left|x^2\pm1\right|$$ and</p>

<p>$$\int\frac{dx}{x^2+1}=\arctan x \qquad\text{or}\qquad \int\frac{dx}{x^2-1}=-\operatorname{artanh}x.$$</p>
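<p>As a numerical sanity check of these standard antiderivatives (purely illustrative; note the factor $\tfrac12$ on the logarithm and the minus sign on $\operatorname{artanh}$):</p>

```python
import math

def num_deriv(g, x, h=1e-6):
    # Central-difference approximation to g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

# d/dx [ (1/2) ln(x^2 + 1) ] = x / (x^2 + 1)
for x in [0.3, 1.0, 2.5]:
    assert abs(num_deriv(lambda t: 0.5 * math.log(t * t + 1), x) - x / (x * x + 1)) < 1e-8

# d/dx [ arctan x ] = 1 / (x^2 + 1)
for x in [0.3, 1.0, 2.5]:
    assert abs(num_deriv(math.atan, x) - 1 / (x * x + 1)) < 1e-8

# d/dx [ -artanh x ] = 1 / (x^2 - 1) for |x| < 1
for x in [0.2, 0.5, 0.8]:
    assert abs(num_deriv(lambda t: -math.atanh(t), x) - 1 / (x * x - 1)) < 1e-6

print("antiderivatives check out")
```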
|
223,582 | <p>Maps $g$ maps $\left\{1,2,3,4,5\right\}$ onto $\left\{11,12,13,14\right\}$ and $g(1)\neq g(2)$. How many g are there.</p>
<p><strong>My answer</strong>:
I rephrased the question in an easier-to-understand way and tried to solve it.
Consider five children and four seats: exactly one pair of children must share a seat, but children 1 and 2 are never allowed to sit together.</p>
<p>$$\left(\begin{pmatrix}
5 \\
2
\end{pmatrix}-1\right)*4!=456$$</p>
<p>However the answer is 216. I don't know what's wrong.</p>
<p>Could you please help me find out what's wrong or give a right way to solve the problem?</p>
<p>Thanks!</p>
| ccorn | 75,794 | <p>You can do it with a straightedge alone, though the lines tend to clutter the scene.</p>
<p><a href="https://i.stack.imgur.com/2epTR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2epTR.png" alt="Straightedge-only construction of polar and tangents"></a></p>
<p>Start with the two solid black lines, with directions at your free disposal as long as you get four intersection points with the circle. (It helps to keep one line closer to the center of the circle and the other farther away.)</p>
<p>Then draw the dashed lines, then the blue line $p$. That blue line is called the <a href="https://en.wikipedia.org/wiki/Pole_and_polar" rel="nofollow noreferrer"><em>polar</em></a> of the point $P$. Interestingly, it does not depend on the particulars of the black lines you have begun with.</p>
<p>Now, if $P$ is <em>outside</em> the circle, then its polar $p$ crosses the circle, and the points of intersection are the points of tangency for tangents through $P$.</p>
<p>Bonus: This approach even works for a conic instead of a circle, as long as you are given at least five points of that conic. Takes even more lines though, unless the conic is drawn already, in which case it works the same way as for the circle. I have hinted at that <a href="https://math.stackexchange.com/a/2298607/75794">elsewhere</a>.</p>
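<p>For the skeptical, the key fact — the polar of an exterior point meets the circle exactly at the points of tangency — can be verified numerically; the unit circle and the point $P=(2,\,0.5)$ below are arbitrary illustrative choices:</p>

```python
import math

# Unit circle x^2 + y^2 = 1, exterior point P.
px, py = 2.0, 0.5

# Polar line of P w.r.t. the unit circle: px*x + py*y = 1.
# Substituting x = (1 - py*y)/px into the circle equation gives
# (px^2 + py^2) y^2 - 2 py y + (1 - px^2) = 0.
a = px * px + py * py
b = -2 * py
c = 1 - px * px
disc = b * b - 4 * a * c
assert disc > 0  # P is outside the circle, so the polar crosses it

tangency = []
for sign in (+1, -1):
    y = (-b + sign * math.sqrt(disc)) / (2 * a)
    x = (1 - py * y) / px
    tangency.append((x, y))

for (tx, ty) in tangency:
    # T lies on the circle...
    assert abs(tx * tx + ty * ty - 1) < 1e-9
    # ...and PT is tangent there: PT is perpendicular to the radius OT.
    dot = (px - tx) * tx + (py - ty) * ty
    assert abs(dot) < 1e-9

print("polar meets the circle at the points of tangency")
```

<p>The last assertion uses $P\cdot T=1$ (since $T$ lies on the polar) and $|T|^2=1$, so $(P-T)\cdot T=0$ exactly.</p>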
|
2,788,276 | <p>Let $f_n (x)=n^2x(1-x)^n$. I need to prove that $f_n\to0$ on the interval $[0,1]$.</p>
<hr>
<p>Let $f_n(x) = nx^n$. Prove that $f_n\to0$ on the interval $[0,1)$.</p>
<p>For both of these sequences I tried the following:</p>
<p>By taking the function$\ f(x)=0$ we can see that</p>
<p>$$\lim_{n\rightarrow\infty}f_n(x) = 0$$</p>
<p>for both of the sequences, but I don't know whether this is the correct way to solve these problems. I also doubt whether this even shows that $f_n\to0$; I'm thinking that what I did above actually shows the pointwise convergence $f_n\to f$.</p>
<p>Help would be greatly appreciated.</p>
| quasi | 400,434 | <p>Unless I've made a mistake, here is a partial generalization . . .
<p>
Let $R$ be a commutative ring with $1\ne 0$ such that</p>
<ul>
<li>$2$ is a unit of $R$.
<li>For some monic $f\in R[x]$, we have $f(r)=0$, for all $r\in R$.
</ul>
<p><strong>Claim:</strong>$\;R$ has Krull dimension $0$.
<p>
<strong>Proof:</strong>
<p>
Suppose $P$ is a prime ideal of $R$ which is not maximal.
<p>
Our goal is to derive a contradiction.
<p>
Let $M$ be a maximal ideal of $R$ such that $P\subset M$.
<p>
Write $f(x) = x^d +{\displaystyle{\sum_{i=0}^{d-1}a_ix^i}}$, where $a_i\in R$, for $0 \le i \le d-1$.
<p>
From $f(0)=0$, we get $a_0=0$.
<p>
From $f(1)=0$, it follows that $d > 1$, and at least one of $a_1,...,a_{d-1}$ is not in $P$.
<p>
Let $k$ be the least positive integer such that $a_k\notin P$.
<p>
Let $w$ be an arbitrary element of $M{\setminus}P$.
\begin{align*}
\text{Then}\;\;&f(w)=0\\[4pt]
\implies\;&w^d +\sum_{i=0}^{d-1}a_iw^i=0\\[4pt]
\implies\;&w^d +\sum_{i=0}^{d-1}a_iw^i\equiv 0\;(\text{mod}\;P)\\[4pt]
\implies\;&w^d +\sum_{i=k}^{d-1}a_iw^i\equiv 0\;(\text{mod}\;P)\\[4pt]
\implies\;&w^k\left(w^{d-k} +\sum_{i=k}^{d-1}a_iw^{i-k}\right)\equiv 0\;(\text{mod}\;P)\\[4pt]
\implies\;&w^{d-k} +\sum_{i=k}^{d-1}a_iw^{i-k}\equiv 0\;(\text{mod}\;P)&&\text{[since $w\notin P$]}\tag{1}\\[4pt]
\implies\;&w^{d-k} +\sum_{i=k}^{d-1}a_iw^{i-k}\equiv 0\;(\text{mod}\;M)\\[4pt]
\implies\;&a_k\equiv 0\;(\text{mod}\;M)\\[4pt]
\implies\;&a_k\in M{\setminus}P\\[4pt]
\implies\;&a_k^{d-k} +\sum_{i=k}^{d-1}a_ia_k^{i-k}\equiv 0\;(\text{mod}\;P)&&\text{[using $w=a_k$ in $(1)$]}\\[4pt]
\implies\;&a_k^{d-k} +a_k\left(\sum_{i=k+1}^{d-1}a_ia_k^{i-k-1}+1\right)\equiv 0\;(\text{mod}\;P)\\[4pt]
\implies\;&a_k^{d-k-1} +\sum_{i=k+1}^{d-1}a_ia_k^{i-k-1}+1\equiv 0\;(\text{mod}\;P)&&\text{[since $a_k\notin P$]}\\[4pt]
\implies\;&a_k^{d-k-1} +\sum_{i=k+1}^{d-1}a_ia_k^{i-k-1}+1\equiv 0\;(\text{mod}\;M)\\[4pt]
\implies\;&
\begin{cases}
1\equiv 0\;(\text{mod}\;M)\qquad\text{if $k < d-1$}\\[4pt]
2\equiv 0\;(\text{mod}\;M)\qquad\text{if $k = d-1$}\\
\end{cases}
\\[4pt]
\end{align*}
contradiction.</p>
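<p>A minimal concrete instance of the hypotheses (chosen only for illustration, not part of the proof) is $R=\mathbb{Z}/3\mathbb{Z}$: the monic polynomial $x^3-x$ vanishes at every element, $2$ is a unit, and $R$, being a field, indeed has Krull dimension $0$:</p>

```python
# Verify the hypotheses of the Claim for R = Z/3Z.
m = 3

# f(x) = x^3 - x is monic and vanishes at every element of R.
assert all((r ** 3 - r) % m == 0 for r in range(m))

# 2 is a unit: 2 * 2 = 4 = 1 (mod 3).
assert (2 * 2) % m == 1

print("Z/3Z satisfies both hypotheses")
```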
|
33,622 | <p>I am looking for differentiable functions $f$ from the unit interval to itself that satisfy the following equation $\forall\:p \in \left( 0,1 \right)$:</p>
<p>$$1-p-f(f(p))-f(p)f'(f(p))=0$$</p>
<p>Is there a way to use <em>Mathematica</em> to solve such equations?<br>
<code>DSolve</code> is of course unable to handle this -- unless there are tricks I don't know about.</p>
| gpap | 1,079 | <p>Assuming <code>f</code> is analytic in the unit interval (differentiability alone does not guarantee a power series expansion), it can be represented by a power series there. Use an assumed polynomial <code>trial</code> of order <code>order</code> in variable <code>var</code> to denote a truncation of that expansion</p>
<pre><code>ClearAll[trial, equation, solutions];
trial[order_, var_] :=
Block[{vars = Table[ToExpression["a" <> ToString[i]], {i, 0, order}]},
vars.(var^Range[0, order])
]
</code></pre>
<p>then you can plug this into the differential equation:</p>
<pre><code>equation[order_, var_] := Module[{f},
f[var2_] := trial[order, var2];
1 - var - f[f[var]] - f[var] f'[f[var]]
]
</code></pre>
<p>and for every order you get an equation in the coefficients of the trial polynomial which you can use <code>Solve</code> on:</p>
<pre><code>solutions[order_, var_] :=
Block[{vars = Table[ToExpression["a" <> ToString[i]], {i, 0, order}]},
Solve[Thread[CoefficientList[equation[order, var], var] == 0], vars]
]
</code></pre>
<p>You see that the equation is solved by a first order polynomial (and its CC) and higher order coefficients are zero (up to order 5 this is reasonably fast to calculate):</p>
<pre><code>solutions[3, p]
{{a0 -> 1/3 (1 - I Sqrt[2]), a1 -> I/Sqrt[2], a2 -> 0,
a3 -> 0}, {a0 -> 1/3 (1 + I Sqrt[2]), a1 -> -(I/Sqrt[2]), a2 -> 0,
a3 -> 0}}
</code></pre>
<p>so your equation is solved by:</p>
<pre><code>{f1[p_],f2[p_]} = trial[1, p] /. solutions[1, p]
{1/3 (1 - I Sqrt[2]) + (I p)/Sqrt[2],
1/3 (1 + I Sqrt[2]) - (I p)/Sqrt[2]}
</code></pre>
<p>Check that </p>
<pre><code>1 - p - f[f[p]] - f[p] f'[f[p]]/.f->f1// Simplify
(* 0 *)
</code></pre>
<p>and </p>
<pre><code>1 - p - f[f[p]] - f[p] f'[f[p]]/.f->f2// Simplify
(* 0 *)
</code></pre>
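<p>Independently of <em>Mathematica</em>, one can confirm with plain complex arithmetic that both linear functions satisfy the original functional-differential equation (an external check, not part of the method above):</p>

```python
import cmath

sqrt2 = cmath.sqrt(2)

# The two solutions found above: f(p) = a0 + a1*p, with constant f' = a1.
solutions = [
    ((1 - 1j * sqrt2) / 3,  1j / sqrt2),
    ((1 + 1j * sqrt2) / 3, -1j / sqrt2),
]

for a0, a1 in solutions:
    f = lambda p: a0 + a1 * p
    for p in [0.1, 0.5, 0.9]:
        # The functional equation: 1 - p - f(f(p)) - f(p)*f'(f(p)) = 0
        residual = 1 - p - f(f(p)) - f(p) * a1
        assert abs(residual) < 1e-12

print("both linear solutions satisfy the equation")
```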
|
481,421 | <p>Find the limit of:
$$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
| NightRa | 90,049 | <p>Let's define:
$$h=\frac{1}{x}$$
Then:
$$h\to0$$
So we will rewrite the limit as:
$$\lim_{h\to0}{\frac{\cos(h)-1}{\cos(2h)-1}}=\lim_{h\to0}{\frac{-\sin(h)}{-2\sin(2h)}}=\lim_{h\to0}{\frac{h}{2\cdot2h}}=\frac{1}{4}$$
(applying L'Hôpital's rule to the $\frac00$ indeterminate form, and then $\sin t\sim t$ as $t\to0$).</p>
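<p>A quick numerical check of the result (purely illustrative):</p>

```python
import math

def ratio(x):
    return (math.cos(1 / x) - 1) / (math.cos(2 / x) - 1)

# As x grows, the ratio approaches 1/4.
for x in [1e2, 1e3, 1e4]:
    assert abs(ratio(x) - 0.25) < 1e-4

print(ratio(1e4))
```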
|
2,195,739 | <p>For
$$
f(x) = \begin{cases}
x^2 & \text{if $x\in\mathbb{Q}$,} \\[4px]
x^3 & \text{if $x\notin\mathbb{Q}$}
\end{cases}
$$</p>
<p>What I did was examine the limit of the difference quotient
$\displaystyle\lim_{x\to0} \frac{f(x)-f(0)}{x-0}$ in each case (rational and irrational $x$), but I am not sure. </p>
| Jonathan Barkey | 414,649 | <p>By the Sequential Criterion for Limits, we know that $$\lim_{x\to c}f(x)=L$$ if and only if for every sequence $(x_n)$ in the domain of $f$ that converges to $c$ such that $x_n\ne c$ for all $n$, then the sequence $(f(x_n))$ converges to $L$.</p>
<p>Let $(x_n)\in\mathbb{Q}$ such that $x_n\ne0$ for all $n$ and $\lim_{n\to\infty}(x_n)=0$. Then, $$\lim_{n\to\infty}(f(x_n))=\lim_{n\to\infty}x_n^2=0$$</p>

<p>Let $(y_n)\notin\mathbb{Q}$ such that $y_n\ne0$ for all $n$ and $\lim_{n\to\infty}(y_n)=0$. Then, $$\lim_{n\to\infty}(f(y_n))=\lim_{n\to\infty}y_n^3=0$$</p>
<p>For an arbitrary sequence tending to $0$ (possibly mixing rationals and irrationals), note that $|f(x)|\le x^2$ for $|x|\le 1$, so $f(x_n)\to0$ by the squeeze theorem as well. Thus, by the Sequential Criterion, $$\lim_{x\to 0}f(x)=0$$</p>
|
82,716 | <p>There seem to be two competing(?) formalisms for specifying theories: <a href="http://ncatlab.org/nlab/show/sketch" rel="noreferrer">sketches</a> (as developed by Ehresmann and students, and expanded upon by Barr and Wells in, for example, <a href="http://www.tac.mta.ca/tac/reprints/articles/12/tr12.pdf" rel="noreferrer">Toposes, Triples and Theories</a>), and the setting of <a href="http://cseweb.ucsd.edu/~goguen/pps/nel05.pdf" rel="noreferrer">institutions</a>. </p>
<p>But I sometimes get a glimpse that sketches are really a very nice way to specify a good category of <em>signatures</em>, while institutions are much more model-theoretic. But in works on institutions, the category of signatures is usually highly under-specified (which is quite ironic, really).</p>
<p>So my question really is: what is the relation between Sketches and Institutions? </p>
<p>A subsidiary question is, why do I find a lot of work relying on institutions, but comparatively less on sketches? [I am talking volume here, not quality.] Did sketches somehow not prove to be effective?</p>
| Zinovy Diskin | 19,786 | <p>@Jacques: On relations between sketches and institutions. The former is a particular instance of the latter: sketches of a given type and their models form an institution. In more detail, signatures are graphs, sentences are diagrams of a given type, and models are sketch morphisms of a given type, say, into Set. </p>
|
1,382,087 | <p>Problem:</p>
<p>A bag contains $4$ red and $5$ white balls. Balls are drawn from the bag without replacement.</p>
<p>Let $A$ be the event that first ball drawn is white and let $B$ denote the event that the second ball drawn is red. Find </p>
<p>(i) $P(B\mid A)$</p>
<p>(ii) $P(A\mid B)$</p>
<p>My confusion is: should $P(A\mid B)=P(A)$?</p>
<p>Can we say that in general if $P(A\mid B)$ exists then $P(B\mid A)$ should also exist?</p>
| callculus42 | 144,421 | <blockquote>
<p>Can we say that in general if $P(A\mid B)$ exists then $P(B\mid A)$
should also exist?</p>
</blockquote>
<p>Not necessarily. I modify your exercise.</p>
<blockquote>
<p>A bag contains $4$ red and $5$ white balls. Balls are drawn from the
bag without replacement.</p>
<p>Let $A$ be the event that first ball drawn is white and let $B$ denote the event that the second ball drawn is <strong>black</strong>. Find </p>
<p>(i) $P(B \mid A)$</p>
<p>(ii) $P(A\mid B)$</p>
</blockquote>
<p>$P(B)=0$, therefore $P(A\mid B)=\frac{P(A \cap B)}{P(B)}$ is not defined.</p>
|
749,473 | <p>I am trying to model the time it takes until a malfunction appears, for example how long a light-bulb will last. I would like the probability that the light-bulb burns out at a given moment (given that it hasn't burnt out yet) to increase as a function of time ($P(x \mid X \geq x)$ should be monotonically increasing). That is, an old light-bulb is more likely to burn out now than a new one. (Obviously, I can't use a memoryless probability distribution.) Any suggestions?</p>
| Henry | 6,460 | <p>If you want a distribution with a maximum lifetime (say $c$) then you might consider $$F(x)=1-\left(1-\frac{x}{c}\right)^\beta$$ $$f(x)= \frac{\beta}{c}\left(1-\frac{x}{c}\right)^{\beta-1}$$ for some positive $\beta$: for $\beta=1$ this gives a uniform distribution on $[0,c]$. It is a kind of scaled Beta distribution with $\alpha=1$. Its expectation is $\frac{c}{1+\beta}$. </p>
<p>Its hazard function or failure rate, which you want to be monotonically increasing, is $$\lambda(x)=\frac{f(x)}{1-F(x)}=\frac{\beta}{c}\left(1-\frac{x}{c}\right)^{-1}.$$ </p>
<p>While not memoryless, it does have shape-memorylessness: conditioned on not having failed yet, the remaining distribution has the same shape as the original distribution but has been scaled; scale-free statistics such as the coefficient of variation, skewness or kurtosis do not change. </p>
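<p>A small numerical illustration of this family — the values $c=10$, $\beta=2$ are arbitrary — confirming that $\lambda=f/(1-F)$ and that the failure rate is increasing:</p>

```python
c, beta = 10.0, 2.0

def F(x): return 1 - (1 - x / c) ** beta          # CDF on [0, c]
def f(x): return (beta / c) * (1 - x / c) ** (beta - 1)  # density
def hazard(x): return (beta / c) / (1 - x / c)    # failure rate

xs = [i * c / 100 for i in range(99)]  # grid on [0, c)

# hazard = f / (1 - F) on the grid
for x in xs:
    assert abs(f(x) / (1 - F(x)) - hazard(x)) < 1e-9

# the failure rate is monotonically increasing
assert all(hazard(a) < hazard(b) for a, b in zip(xs, xs[1:]))

# boundary behaviour of the CDF
assert F(0) == 0 and abs(F(c) - 1) < 1e-12

print("hazard is increasing on [0, c)")
```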
|
3,634,416 | <p>First of all, English is not my native language; Chinese is. I tried to split the integration interval into 2 pieces: <span class="math-container">$ [0, 1-1/n] $</span> and <span class="math-container">$ [1-1/n, 1] $</span>. In both intervals I use the mean value theorem:
<span class="math-container">$$
\int_{0}^{1-1/n}\frac{1}{1+x^{n}}\,dx=\frac{1}{1+\xi_{n}^{n}}\left( 1-\frac{1}{n} \right), \qquad \text{and} \qquad \int_{1-1/n}^{1}\frac{1}{1+x^{n}}\,dx=\frac{1}{1+\eta_{n}^{n}}\frac{1}{n},
$$</span>
where <span class="math-container">$ \xi_{n}\in(0, 1-1/n), \eta_{n}\in(1-1/n, 1) $</span>. I found that the latter expression has limit <span class="math-container">$ 0 $</span> as <span class="math-container">$ n\to\infty $</span>. However, I can't handle the former one. Does anyone have some thoughts? </p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$\int_0^{1}\frac 1 {1+x^{n}}dx =1-\int_0^{1}\frac {x^{n}} {1+x^{n}}dx$</span>. Note that <span class="math-container">$0 \leq \frac {x^{n}} {1+x^{n}} \leq x^{n}$</span> and <span class="math-container">$\int_0^{1} x^{n}dx=\frac 1 {n+1} \to 0$</span>. Put these together to see that the required limit is <span class="math-container">$1$</span>. </p>
|
1,146,050 | <p>given $f(x)=\frac{x^4+x^2+1}{x^2+x+1}$.</p>
<p>Need to find the min value of $f(x)$.</p>
<p>I know it can be easily done by polynomial division but my question is if there's another way</p>
<p>(more elegant maybe) to find the min? </p>
<p><strong>About my way</strong>: $f(x)=\frac{x^4+x^2+1}{x^2+x+1}=x^2-x+1$. (long division)</p>
<p>$x_{min}=\frac{-b}{2a}=\frac{1}{2}$ (the vertex of the parabola $ax^2+bx+c$).</p>
<p>So $f(0.5)=0.5^2-0.5+1=\frac{3}{4}$</p>
<p>Thanks. </p>
| lab bhattacharjee | 33,337 | <p>$$x^2-x+1=\frac{4x^2-4x+4}4=\frac{(2x-1)^2+3}4\ge\frac34$$</p>
<p>The equality occurs if $2x-1=0\iff x=\dfrac12$</p>
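<p>Both the factorization behind the long division and the bound can be double-checked mechanically with exact rational arithmetic (purely illustrative):</p>

```python
from fractions import Fraction

# x^4 + x^2 + 1 = (x^2 + x + 1)(x^2 - x + 1), so f(x) = x^2 - x + 1.
for k in range(-5, 6):
    x = Fraction(k, 2)
    assert x ** 4 + x ** 2 + 1 == (x ** 2 + x + 1) * (x ** 2 - x + 1)

# Completed square: x^2 - x + 1 = ((2x - 1)^2 + 3)/4 >= 3/4, equality at x = 1/2.
f = lambda x: x * x - x + 1
assert f(Fraction(1, 2)) == Fraction(3, 4)
assert all(f(Fraction(k, 10)) >= Fraction(3, 4) for k in range(-50, 51))

print("minimum value is 3/4 at x = 1/2")
```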
|
2,573,458 | <p>Given $n$ prime numbers, $p_1, p_2, p_3,\ldots,p_n$, then $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i, i=1,2,3,\ldots,n.$ I dont understand why. Can somebody give me a hint or an Explanation ? Thanks.</p>
| user | 505,767 | <p>Let $$P = p_1p_2...p_n+1$$ and let $p$ be a prime such that $p\mid P$.</p>
<p>Then $p$ can not be any of $p_1,p_2,p_3,\cdots ,p_n$ otherwise $p$ would divide the difference $P-p_1p_2...p_n=1$ which is not possible.</p>
|
2,573,458 | <p>Given $n$ prime numbers, $p_1, p_2, p_3,\ldots,p_n$, then $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i, i=1,2,3,\ldots,n.$ I dont understand why. Can somebody give me a hint or an Explanation ? Thanks.</p>
| Bram28 | 256,001 | <p>Simple example. Suppose I consider $2 \cdot 3 \cdot 5 \cdot 7 = 210$</p>
<p>Now, $2$ divides $210$, and so do $3$, $5$, and $7$ ... of course! </p>
<p>But what happens if you divide $210+1=211$ by $2$? You get a remainder of $1$ ... exactly because you got a remainder of $0$ when dividing $210$. And the exact same thing happens for $3$, $5$, and $7$ </p>
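<p>A short computational illustration of the argument; note, as an aside, that $p_1\cdots p_n+1$ need not itself be prime (e.g. $2\cdot3\cdot5\cdot7\cdot11\cdot13+1=30031=59\cdot509$) — it just has no prime factor among the $p_i$:</p>

```python
primes = [2, 3, 5, 7, 11, 13]

for n in range(1, len(primes) + 1):
    P = 1
    for p in primes[:n]:
        P *= p
    P += 1
    # dividing P by any of the first n primes leaves remainder 1
    assert all(P % p == 1 for p in primes[:n])

# the example above: 2*3*5*7 + 1 = 211
assert 2 * 3 * 5 * 7 + 1 == 211
assert all(211 % p == 1 for p in [2, 3, 5, 7])

# P need not be prime: 30031 = 59 * 509, both factors outside {2, ..., 13}
assert 2 * 3 * 5 * 7 * 11 * 13 + 1 == 30031 == 59 * 509

print("no prime among p_1..p_n divides p_1*...*p_n + 1")
```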
|
3,143,084 | <p>If <span class="math-container">$f : \mathbb{R} \to \mathbb{R}$</span>, we can think of the derivative of <span class="math-container">$f$</span> at a point <span class="math-container">$x$</span>, denoted <span class="math-container">$f'(x)$</span>, as giving the slope of a line tangent to the graph of <span class="math-container">$f$</span> at the point <span class="math-container">$(x, f(x))$</span>. One way to obtain the derivative is to consider a secant line through a second point <span class="math-container">$(x+h, f(x+h))$</span> on the graph of <span class="math-container">$f$</span>. The slope of the secant line is given by
<span class="math-container">$$ \frac{f(x+h) - f(x)}{(x+h)-x} = \frac{f(x+h) - f(x)}{h}. $$</span>
The tangent line results by taking <span class="math-container">$h$</span> to be arbitrarily small, so the derivative is given by
<span class="math-container">$$ \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}, $$</span>
presuming that this limit exists.</p>
<blockquote>
<p><strong>Question:</strong> Suppose that <span class="math-container">$f$</span> is given by
<span class="math-container">$$ f(x) = x^n. $$</span>
What is <span class="math-container">$f'(x)$</span>?</p>
</blockquote>
<p>For small values of <span class="math-container">$n$</span>, this can be computed by hand fairly easily. For example, if <span class="math-container">$n=3$</span>, then
<span class="math-container">$$ f'(x)
= \lim_{h\to 0} \frac{(x+h)^3 - x^3}{h}
= \lim_{h\to 0} \frac{x^3 + 3hx^2 + 3h^2x + h^3}{h}
= \lim_{h\to 0} 3x^2 + 3hx + h^2
= 3x^2. $$</span>
On the other hand, if <span class="math-container">$n$</span> is very large, then this becomes impractical. For example, if <span class="math-container">$n = 123$</span>, then how do we determine
<span class="math-container">$$ f'(x) = \lim_{h\to 0} \frac{(x+h)^{123} - x^{123}}{h}? $$</span></p>
| JoseSquare | 643,097 | <p>By Newton's Binomial formula <span class="math-container">$(x+h)^{123} =x^{123} +123x^{122}h+\ldots$</span> then you get </p>
<p><span class="math-container">$$\lim_{h \to 0} \frac{(x^{123} +123x^{122}h+\ldots)-x^{123}}{h}=
\lim_{h \to 0} \frac{123x^{122}h +
\binom{123}{2}x^{121}h^2 + \ldots +h^{123}}{h} = 123x^{122}$$</span> because the rest of the terms in numerator have an <span class="math-container">$h$</span> that goes to <span class="math-container">$0$</span></p>
<p><strong>Assuming that you meant <span class="math-container">$h^{123}$</span> to be <span class="math-container">$x^{123}$</span></strong></p>
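<p>One can confirm the pattern $\frac{d}{dx}x^{123}=123x^{122}$ without expanding anything by hand, by evaluating the difference quotient in exact rational arithmetic (an illustrative check; the sample points are arbitrary):</p>

```python
from fractions import Fraction

n = 123
x = Fraction(1)

for k in [5, 6, 7]:
    h = Fraction(1, 10 ** k)
    quotient = ((x + h) ** n - x ** n) / h
    # quotient = n*x^(n-1) + C(n,2)*x^(n-2)*h + ...  ->  n*x^(n-1) as h -> 0
    assert abs(quotient - n * x ** (n - 1)) < Fraction(1, 10 ** (k - 4))

print("difference quotient tends to 123 * x^122")
```

<p>At $x=1$ the leading error term is $\binom{123}{2}h\approx 7503\,h$, which the shrinking bound above tracks.</p>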
|
240,741 | <p>I'm trying to include the legends inside the frame of the plot like this</p>
<p><a href="https://i.stack.imgur.com/7K5aa.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7K5aa.jpg" alt="hehe" /></a></p>
<p>Here is my Attempt:</p>
<pre><code>ListPlot[{{2, 5, 2, 8, 6, 8, 3}, {1, 2, 5, 2, 3, 4, 3}},
PlotMarkers -> {"\[SixPointedStar]", 15}, Joined -> True,
PlotStyle -> {Orange, Green},
PlotLegends ->
Placed["line1", "line2",
LegendFunction -> (Framed[#, FrameMargins -> 0] &)], Frame -> True]
</code></pre>
<p>My references:</p>
<ol>
<li><a href="https://mathematica.stackexchange.com/questions/141737/specify-legend-position-in-a-plot">specify-legend-position-in-a-plot</a></li>
<li><a href="https://mathematica.stackexchange.com/questions/173911/plotting-legends-matching-with-plots-inside-the-show-graph">plotting-legends-matching-with-plots-inside-the-show-graph</a></li>
<li><a href="https://mathematica.stackexchange.com/questions/212046/placing-plot-legends-inside-a-plot">placing-plot-legends-inside-a-plot</a></li>
<li><a href="https://www.wolfram.com/mathematica/new-in-9/legends/place-a-legend-inside-a-plot.html" rel="noreferrer">place-a-legend-inside-a-plot.</a></li>
</ol>
| Bob Hanlon | 9,362 | <p>Space for the legend is available at the upper left</p>
<pre><code>ListPlot[{
{2, 5, 2, 8, 6, 8, 3},
{1, 2, 5, 2, 3, 4, 3}},
PlotMarkers -> {"✶", 15},
Joined -> True,
PlotStyle -> {Orange, Green},
PlotLegends ->
Placed[
LineLegend[{"line1", "line2"},
LegendFunction -> "Frame"],
{.15, .8}],
Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/uuASC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uuASC.png" alt="enter image description here" /></a></p>
<p>To locate legend at upper right you must change the <a href="https://reference.wolfram.com/language/ref/PlotRange.html" rel="noreferrer"><code>PlotRange</code></a></p>
<pre><code>ListPlot[{
{2, 5, 2, 8, 6, 8, 3},
{1, 2, 5, 2, 3, 4, 3}},
PlotRange -> {{0, 9}, {0, 9}},
PlotMarkers -> {"✶", 15},
Joined -> True,
PlotStyle -> {Orange, Green},
PlotLegends ->
Placed[
LineLegend[{"line1", "line2"},
LegendFunction -> "Frame"],
{.85, .8}],
Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/JqRLJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JqRLJ.png" alt="enter image description here" /></a></p>
|
2,293,746 | <p>A function $f$ has a derivative for all $x\in \mathbb R$, and the limits of $f$ at $+\infty $ and $-\infty$ are both equal to $+\infty$. Is it true that $\lim_{x\to a} \frac {1}{f'(x)} = + \infty $ or $-\infty$ for some $a\in\mathbb R$?</p>
<p>Of course the function $f' $ has roots, by Fermat's theorem ($f$ attains a global minimum), but how could I find an example showing that the statement is wrong, if it really is wrong? </p>
<p>Thank you in advance!</p>
<p>Babis</p>
| Paramanand Singh | 72,031 | <p>Your statement is false. In fact you can take $f$ to be constant in some interval and let $f$ be decreasing before that interval and increasing after that interval. Thus let $f(x) =(x+1)^{2},x<-1,f(x)=0,|x|\leq 1,f(x)=(x-1)^{2},x>1$. Then we can see that $f$ is differentiable everywhere, but there is no point where $1/f'(x) \to \pm\infty$: on $[-1,1]$ we have $f'\equiv 0$, so $1/f'$ is not even defined on any deleted neighbourhood of a point of $[-1,1]$, while away from that interval $f'$ is continuous and nonzero, so $1/f'$ stays bounded near every other point. </p>
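<p>A quick numerical confirmation of the counterexample's properties (one-sided derivatives at the junctions and growth at infinity; illustrative only):</p>

```python
def f(x):
    if x < -1:
        return (x + 1) ** 2
    if x <= 1:
        return 0.0
    return (x - 1) ** 2

h = 1e-6
# both one-sided difference quotients at x = 1 tend to 0, so f'(1) = 0
assert abs((f(1 + h) - f(1)) / h) < 1e-5
assert abs((f(1 - h) - f(1)) / h) < 1e-5
# likewise at x = -1
assert abs((f(-1 + h) - f(-1)) / h) < 1e-5
assert abs((f(-1 - h) - f(-1)) / h) < 1e-5
# f tends to +infinity on both sides
assert f(1e6) > 1e11 and f(-1e6) > 1e11

print("f is differentiable at the junctions with derivative 0")
```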
|
381,177 | <p>I have a problem in which I have to compute the following integral: <span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^k y_i=x} e^{-N^2r(\sum_{i=1}^k y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_k,$$</span>
where this notation means that I want to integrate over <span class="math-container">$\mathbb{R}^k$</span> restricted to the plane where <span class="math-container">$\sum_{i=1}^k y_i=x$</span> (a convolution of gaussians) and <span class="math-container">$N$</span> and <span class="math-container">$r$</span> are positive real constants. I have tried two different methods for computing this integral, but they are yielding different results. I would appreciate it very much if someone could take a look and tell me what I'm doing wrong.</p>
<p><strong>Method 1</strong></p>
<p>In method 1 I just wrote it as
<span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\sum_{i=1}^{k}y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_k =\int_{-\infty}^{\infty}\dots\int_{-\infty}^\infty e^{-N^2r((x-y_1)^2+\sum_{i=1}^{k-2}(y_i-y_{i+1})^2+y_{k-1}^2-\frac{1}{k}x^2)} \, dy_1\dots dy_{k-1}=\sqrt{\frac{1} {\pi r^{k-1}k}} \frac{\pi^k}{N^{k-1}}$$</span></p>
<p>I deduced this formula by induction, first integrating in <span class="math-container">$y_{k-1}$</span>, then <span class="math-container">$y_{k-2}$</span> and so on.</p>
<p><strong>Method 2</strong></p>
<p>In method 2 I tried writting the function in a matrix form <span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\sum_{i=1}^{k}y_i^2-\frac{1}{k}x^2)} dy_1\dots dy_{k}=\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\vec{y},Q\vec{y})} dy_1\dots dy_{k}$$</span>
where <span class="math-container">\begin{equation}
Q:=\left(\begin{array}{cccccccc}
(1-\frac{1}{k})& -\frac{1}{k} & -\frac{1}{k} & \cdots & -\frac{1}{k} \\
-\frac{1}{k} & (1-\frac{1}{k}) & -\frac{1}{k} & \cdots & -\frac{1}{k} \\
\vdots & \ddots & & &\vdots \\
-\frac{1}{k} & \dots & &-\frac{1}{k} &(1-\frac{1}{k})
\end{array}\right).
\end{equation}</span></p>
<p>This matrix <span class="math-container">$Q$</span> has eigenvalues <span class="math-container">$\lambda_0=0$</span>, <span class="math-container">$\lambda_l=1$</span> and corresponding normalized eigenvetors <span class="math-container">\begin{equation}
\vec{\lambda}_l=\frac{1}{\sqrt{k}}\left(\begin{array}{c}
1 \\
e^{\frac{2\pi i}{k}1l} \\
\vdots \\
e^{\frac{2\pi i}{k}(k-1)l}
\end{array}\right)
\end{equation}</span> for <span class="math-container">$0\le l\le k-1$</span>.</p>
<p>As I understand it, the restriction in the integral means that I shouldn't integrate in the <span class="math-container">$\lambda_0$</span> direction, since in this direction I must have all components equal, and the only place where the components are equal and the bound is satisfied is <span class="math-container">$(\frac{x}{k},\dots,\frac{x}{k})$</span>. So my integration should occour in the orthogonal complement of this vector, which is a hyperplane of dimension <span class="math-container">$k-1$</span>. Everything seems to check to this point, so I diagonalized the matrix <span class="math-container">$Q=U\Lambda U^{-1}$</span> and so</p>
<p><span class="math-container">$$(\vec{y},Q\vec{y})=(\vec{\xi},\Lambda\vec{\xi})=\sum_{i=1}^{k-1}\xi_i^2.$$</span></p>
<p>The change of variables <span class="math-container">$\vec{\xi}=U^{-1}\vec{y}$</span> has a Jacobian <span class="math-container">$\frac{1}{\sqrt{k^{k-1}}}$</span>, since <span class="math-container">$U^{-1}$</span> is the DFT matrix times <span class="math-container">$\frac{1}{\sqrt{k^{k-1}}}$</span> and the DFT matrix is known to be unitary. So</p>
<p><span class="math-container">$$\mathop{\idotsint\limits_{\mathbb{R}^k}}_{\sum_{i=1}^{k}y_i=x} e^{-N^2r(\vec{y},Q\vec{y})} dy_1\dots dy_{k}=\idotsint\limits_{\mathbb{R}^k} e^{-N^2r\sum_{i=1}^{k-1}\xi_i^2} \frac{1}{\sqrt{k^{k-1}}}d\xi_1\dots d\xi_{k-1}= \sqrt{\frac{\pi^{k-1}}{k^{k-1}r^{k-1}}}\frac{1}{N^{k-1}}.$$</span></p>
<p>These two results are different and I cannot figure out why.</p>
<p>Thank you all in advance for your help!</p>
| Iosif Pinelis | 36,721 | <p><span class="math-container">$\newcommand\1{\mathbf1}\newcommand{\R}{\mathbb R}\newcommand{\la}{\lambda}$</span>Here is a more explicit way to define the disintegration of the Lebesgue measure over <span class="math-container">$\R^k$</span> into the measures <span class="math-container">$\mu_t$</span> over the planes <span class="math-container">$\Pi_t$</span> introduced in my other answer on this page, of course with the same result.</p>
<p>Let us recall notations introduced in that answer:
<span class="math-container">$$c:=N^2r\in(0,\infty),\quad t:=x/\sqrt k,$$</span>
<span class="math-container">$$\Pi_t:=\{y\in\R^k\colon u\cdot y=t\}=\{y\in\R^k\colon \1\cdot y=x\},$$</span>
where <span class="math-container">$\cdot$</span> denotes the dot product, <span class="math-container">$\1:=(1,\dots,1)\in\R^k$</span>,<br />
<span class="math-container">$$u:=\1/\sqrt k;$$</span></p>
<p>The integral in question still is
<span class="math-container">\begin{equation*}
I_t:=e^{ct^2}J_t,\quad\text{where}\quad J_t:=\int_{\Pi_t}\mu_t(dy)e^{-c|y|^2}, \tag{1}
\end{equation*}</span>
<span class="math-container">$|y|$</span> is the Euclidean norm of <span class="math-container">$y$</span>, and (for each real <span class="math-container">$t$</span>) <span class="math-container">$\mu_t$</span> is the measure over the plane <span class="math-container">$\Pi_t$</span> now explicitly defined as follows.</p>
<p>Let <span class="math-container">$T\colon\R^{k-1}\to\Pi_0$</span> be any linear isometry of <span class="math-container">$\R^{k-1}$</span> onto the <span class="math-container">$(k-1)$</span>-dimensional linear subspace <span class="math-container">$\Pi_0$</span> of <span class="math-container">$\R^k$</span>. Any such isometry <span class="math-container">$T$</span> is given by the formula <span class="math-container">$Tz=\sum_{j=1}^{k-1}z_jb_j$</span> for all <span class="math-container">$z=(z_1,\dots,z_{k-1})\in\R^{k-1}$</span>, where <span class="math-container">$(b_1,\dots,b_{k-1})$</span> is any orthonormal basis of <span class="math-container">$\Pi_0$</span>.
For each real <span class="math-container">$t$</span>, we have <span class="math-container">$\Pi_t=\Pi_0+tu$</span>, and so, we can define the affine isometry <span class="math-container">$U_t\colon\R^{k-1}\to\Pi_t$</span> of <span class="math-container">$\R^{k-1}$</span> onto the <span class="math-container">$(k-1)$</span>-dimensional affine subspace <span class="math-container">$\Pi_t$</span> by the formula
<span class="math-container">\begin{equation*}
U_tz:=Tz+tu
\end{equation*}</span>
for all <span class="math-container">$z\in\R^{k-1}$</span>. Let now
<span class="math-container">\begin{equation*}
\mu_t:=\mu^T_t:=\la_{k-1}U_t^{-1}, \tag{2}
\end{equation*}</span>
the pushforward measure for the Lebesgue measure <span class="math-container">$\la_{k-1}$</span> over <span class="math-container">$\R^{k-1}$</span> under the isometry <span class="math-container">$U_t$</span>, so that <span class="math-container">$\mu_t(B)=\la_{k-1}(U_t^{-1}(B))=\la_{k-1}(T^{-1}(B-tu))$</span> for all Borel subsets <span class="math-container">$B$</span> of <span class="math-container">$\Pi_t$</span>.</p>
<p><strong>Remark:</strong> The measures <span class="math-container">$\mu_t=\mu^T_t$</span> do not depend on the choice of a linear isometry <span class="math-container">$T$</span> of <span class="math-container">$\R^{k-1}$</span> onto <span class="math-container">$\Pi_0$</span>. Indeed, if <span class="math-container">$T$</span> is any such isometry, then any other such isometry is of the form <span class="math-container">$S:=TQ$</span>, where <span class="math-container">$Q$</span> is any linear isometry of <span class="math-container">$\R^{k-1}$</span> (onto <span class="math-container">$\R^{k-1}$</span>). Hence,
<span class="math-container">\begin{equation*}
\mu^S_t(B)=\la_{k-1}(S^{-1}(B-tu))=\la_{k-1}(Q^{-1}(T^{-1}(B-tu)))
=\la_{k-1}(T^{-1}(B-tu))=\mu^T_t(B)
\end{equation*}</span>
for all Borel subsets <span class="math-container">$B$</span> of <span class="math-container">$\Pi_t$</span>; the penultimate equality in the above display holds because the Lebesgue measure is invariant with respect to linear isometries. <span class="math-container">$\Box$</span></p>
<p>It follows from (1) and (2) that
<span class="math-container">\begin{equation}
J_t=\int_{\R^{k-1}}dz\,\exp\{-c|U_tz|^2\},
\end{equation}</span>
by the <a href="https://en.wikipedia.org/wiki/Pushforward_measure#Main_property:_change-of-variables_formula" rel="nofollow noreferrer">change-of-variables formula for the pushforward measures</a>.
Since <span class="math-container">$U_tz=Tz+tu$</span>, <span class="math-container">$Tz\perp u$</span>, <span class="math-container">$T$</span> is an isometry, and <span class="math-container">$|u|=1$</span>, we have <span class="math-container">$|U_tz|^2=|Tz|^2+t^2|u|^2=|z|^2+t^2$</span>. So,
<span class="math-container">\begin{equation}
J_t=e^{-ct^2}\int_{\R^{k-1}}dz\,\exp\{-c|z|^2\}
=e^{-ct^2}(\pi/c)^{(k-1)/2}.
\end{equation}</span>
Thus, the integral in question is
<span class="math-container">$$I_t=e^{ct^2}J_t=(\pi/c)^{(k-1)/2}=\Big(\frac\pi{N^2r}\Big)^{(k-1)/2},$$</span>
which is what was obtained a bit differently in my other answer.</p>
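<p>For what it's worth, the formula can also be confirmed numerically in the smallest nontrivial case <span class="math-container">$k=2$</span>, where the answer should be <span class="math-container">$\sqrt{\pi/c}$</span>; the values of <span class="math-container">$c$</span> and <span class="math-container">$x$</span> below are arbitrary, and the plane <span class="math-container">$y_1+y_2=x$</span> is parametrized isometrically as in the answer:</p>

```python
import math

c = 1.3   # c = N^2 r
x = 0.7   # the constrained sum
k = 2

# Orthonormal frame: u = (1,1)/sqrt(2) normal to the plane, b = (1,-1)/sqrt(2) within it.
t = x / math.sqrt(k)

def integrand(z):
    # point on the plane Pi_t: y = z*b + t*u, so y1 + y2 = x exactly
    y1 = z / math.sqrt(2) + t / math.sqrt(2)
    y2 = -z / math.sqrt(2) + t / math.sqrt(2)
    return math.exp(-c * (y1 * y1 + y2 * y2 - x * x / k))

# Midpoint Riemann sum over the intrinsic (arclength) coordinate z in [-10, 10].
h = 1e-3
total = sum(integrand(-10 + (i + 0.5) * h) for i in range(20000)) * h

assert abs(total - math.sqrt(math.pi / c)) < 1e-5
print(total, math.sqrt(math.pi / c))
```

<p>Since <span class="math-container">$y_1^2+y_2^2=z^2+t^2$</span> on the plane, the exponent reduces to <span class="math-container">$-cz^2$</span> and the sum converges to <span class="math-container">$\sqrt{\pi/c}=(\pi/c)^{(k-1)/2}$</span>, as claimed.</p>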
|
1,513,373 | <p>Let M be a cardinal with the following properties:<br>
- M is regular<br>
- $\kappa < M \implies 2^\kappa < M$<br>
- $\kappa < M \implies s(\kappa) < M$ where $s(\kappa)$ is the smallest strongly inaccessible cardinal strictly greater than $\kappa$ </p>
<p>My question is: Is M a Mahlo cardinal ? If so, how does the definition above connect to the usual definition in terms of stationary sets ?</p>
<p>Motivation: My intuition about a Mahlo cardinal is that you cannot reach it by taking unions, power sets, or "the next inaccessible cardinal", which is what my definition above tries to capture.
My worry is that I might have arrived at something much smaller than Mahlo.</p>
| Andreas Blass | 48,510 | <p>Wojowu has answered the question, but it might be useful to record here why the first $M$ that satisfies your conditions is not a Mahlo cardinal. Consider the set $C$ of those cardinals $\lambda<M$ that satisfy $(\forall\kappa<\lambda)\,2^\kappa<\lambda$ and $(\forall\kappa<\lambda)\,s(\kappa)<\lambda$, i.e., $\lambda$ satisfies the second and third of your hypotheses about $M$. (Actually, the third hypothesis subsumes the second, because $s(\kappa)>2^\kappa$.) Note that none of these $\lambda$'s is regular, because I assumed that $M$ is the first cardinal satisfying all three of your conditions. Now suppose, toward a contradiction, that $M$ were Mahlo. Then $C$ could not be closed and unbounded in $M$, because the definition of Mahlo says that every closed unbounded subset of $M$ contains a regular cardinal. </p>
<p>It's easy to check that $C$ is closed, so it would have to be bounded, i.e., the supremum $\sigma$ of $C$ would be $<M$ (and would therefore be an element of $C$ as $C$ is closed). Now define a countable increasing sequence of cardinals $\sigma_n<M$ by $\sigma_0=\sigma$ and $\sigma_{n+1}=s(\sigma_n)$. The supremum $\tau$ of this sequence satisfies the second and third of your hypotheses, so, if it were $<M$, then it would be in $C$, contrary to $\sigma$ being the supremum of $C$. So $\tau=M$. But this is absurd, as $\tau$ has countable cofinality whereas $M$ is uncountable and regular.</p>
<p>That completes the proof that the first $M$ satisfying your conditions isn't Mahlo, but let me add a remark that might help clarify what Mahlo cardinals are "about". Note that the proof above didn't use very much about the functions $\kappa\mapsto 2^\kappa$ and $s$ that occur in your second and third hypotheses. You could add more conditions of the form "$(\forall\kappa<\lambda)\,f(\kappa)<\lambda$" for other functions $f$, and the same argument will still work. The first regular cardinal with any specified closure properties of this sort is not Mahlo. (You could even take $f(\kappa)$ to be the first Mahlo cardinal above $\kappa$; then the first $M$ would be an inaccessible limit of Mahlo cardinals, but would not itself be Mahlo.) The key idea that the definition of Mahlo cardinals tries to capture is the idea of being unobtainable (from below) by any sort of closure properties.</p>
|
4,294,860 | <p>Let's say I have a group of n people. Some are left handed and some are right handed. I need to guess a random person's identity, knowing whether he is right- or left-handed.</p>
<p>As conditional probabilty:</p>
<p>Being <span class="math-container">$P(X)$</span> the probability of correctly guessing a person identity.</p>
<p><span class="math-container">$P(X | left)$</span> the probability of guessing the person's identity knowing he is left-handed</p>
<p><span class="math-container">$P(left)$</span> the probability for the person to be left-handed</p>
<p><span class="math-container">$ P(X) = P(X| left) P(left) + P(X|right)P(right)$</span></p>
<p>Then
<span class="math-container">$ P(X) = \dfrac{1}{num\_left} \dfrac{num\_left}{num\_total}+ \dfrac{1}{num\_right} \dfrac{num\_right}{num\_total} $</span></p>
<p><span class="math-container">$ P(X) = \dfrac{2}{num\_total} $</span></p>
<p>So, you are twice as likely to guess the person if you know whether he's right/left handed, and that doesn't depend on how frequent that characteristic is??</p>
<p>This is mind blowing! Am I doing something wrong?</p>
| user2661923 | 464,411 | <p>The following analysis <strong>assumes</strong> that the chance of someone being left handed (for example) is less than <span class="math-container">$1$</span> and greater than <span class="math-container">$0$</span>.</p>
<p>Let's try it with actual numbers. Suppose that <span class="math-container">$1$</span> out of every <span class="math-container">$n$</span> people is left handed.</p>
<p>You have a group of <span class="math-container">$T$</span> people. If you try to guess someone's identity, without knowing whether or not they are right handed, your chances are <span class="math-container">$\displaystyle \frac{1}{T}.$</span></p>
<hr />
<p>When told whether the person is right handed, <span class="math-container">$\displaystyle \frac{n-1}{n}$</span> of the time, the person will be right handed, and <span class="math-container">$\displaystyle \frac{1}{n}$</span> of the time the person will be left handed.</p>
<p><span class="math-container">$\underline{\text{Case 1: Person is right handed}}$</span></p>
<p>Probability of this case occurring is <span class="math-container">$\displaystyle \frac{n-1}{n}$</span>. <br>
Then, you will be guessing at random from a group of <br>
<span class="math-container">$\displaystyle \frac{n-1}{n} \times T$</span> people.</p>
<p>Your chance of guessing correctly here will be</p>
<p><span class="math-container">$$\frac{1}{\frac{n-1}{n} \times T} = \frac{n}{(n-1) \times T}.$$</span></p>
<p>So, the overall chance of Case 1 occurring and leading to success is :</p>
<p><span class="math-container">$$ \frac{n-1}{n} \times \frac{n}{(n-1) \times T} = \frac{1}{T}.$$</span></p>
<p><span class="math-container">$\underline{\text{Case 2: Person is left handed}}$</span></p>
<p>Probability of this case occurring is <span class="math-container">$\displaystyle \frac{1}{n}$</span>. <br>
Then, you will be guessing at random from a group of <br>
<span class="math-container">$\displaystyle \frac{1}{n} \times T$</span> people.</p>
<p>Your chance of guessing correctly here will be</p>
<p><span class="math-container">$$\frac{1}{\frac{1}{n} \times T} = \frac{n}{T}.$$</span></p>
<p>So, the overall chance of Case 2 occurring and leading to success is :</p>
<p><span class="math-container">$$ \frac{1}{n} \times \frac{n}{T} = \frac{1}{T}.$$</span></p>
<hr />
<p>Therefore, your overall chance of success has in fact doubled from
<span class="math-container">$$\frac{1}{T} ~\text{to} ~\left[\frac{1}{T} + \frac{1}{T}\right] = \frac{2}{T}.$$</span></p>
<p>Therefore, the new information doubles your chances, regardless of how often someone is right handed.</p>
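<p>The case analysis can be replayed in exact arithmetic: whichever split you pick, each case contributes <span class="math-container">$\frac 1T$</span>, so the total is always <span class="math-container">$\frac 2T$</span>. A small sketch (illustrative group size <span class="math-container">$T=100$</span>):</p>

```python
from fractions import Fraction

T = 100                                  # illustrative group size
results = []
for n_left in (1, 13, 50, 99):           # any split with both groups nonempty
    n_right = T - n_left
    # P(success) = P(left) * 1/n_left + P(right) * 1/n_right
    p = (Fraction(n_left, T) * Fraction(1, n_left)
         + Fraction(n_right, T) * Fraction(1, n_right))
    results.append(p)
```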
|
4,029,249 | <blockquote>
<p>Prove that the function <span class="math-container">$f(x)=e^x-(ax^2+bx+c)$</span> has at most 3 solutions.</p>
<p><span class="math-container">$a$</span>,<span class="math-container">$b$</span> and <span class="math-container">$c$</span> are constants.</p>
</blockquote>
<p>This is the information given about the function, I tried a couple of things and I am not sure if what I did is right.</p>
<p>First, the function is continuous and differentiable, since <span class="math-container">$e^x$</span> is continuous and differentiable and <span class="math-container">$ax^2+bx+c$</span> is a polynomial, so it is also continuous and differentiable; therefore we can use Rolle's theorem.</p>
<p>I tried doing a couple of derivatives such as <span class="math-container">$f'(x)=e^x-2ax-b$</span> and <span class="math-container">$f''(x)=e^x-2a$</span> and lastly <span class="math-container">$f'''(x)=e^x$</span></p>
<p>so the third derivative has no roots, and the second derivative has the single root <span class="math-container">$x=\ln(2a)$</span>; since the second derivative has at most 1 root, the first derivative has at most 2 roots and the original function has at most 3.</p>
<p>Is this the right way to solve it? Am I missing something? Thank you for the help!
By a solution I mean a point where <span class="math-container">$f(x)=0$</span>.</p>
| Guillemus Callelus | 361,108 | <p>The arguments you make need to be a little more precise, but the idea is correct!</p>
<p>Let the function <span class="math-container">$f(x)=e^x-(ax^2+bx+c)$</span>, which is continuous and differentiable on <span class="math-container">$\mathbb{R}$</span>, and let its derivative <span class="math-container">$f'(x)=e^x-2ax-b$</span>, which is also continuous and differentiable on <span class="math-container">$\mathbb{R}$</span>. The function <span class="math-container">$f(x)$</span> satisfies the conditions of Rolle's Theorem on any closed and bounded interval we want.</p>
<p>Suppose <span class="math-container">$f(x)$</span> has <span class="math-container">$n$</span> distinct real roots with <span class="math-container">$n>3$</span>. Let the distinct real roots, ordered from least to greatest, be <span class="math-container">$x_1,x_2,x_3,\ldots ,x_n\in \mathbb{R}$</span>. In such a case,</p>
<p><span class="math-container">$$f(x_1)=f(x_2)=f(x_3)=\cdots =f(x_n)=0$$</span></p>
<p>By Rolle's Theorem, there exists at least one value in the interval <span class="math-container">$(x_1,x_2)$</span> where <span class="math-container">$f'(x)=0$</span>; at least one value in the interval <span class="math-container">$(x_2,x_3)$</span> where <span class="math-container">$f'(x)=0$</span>; ... at least one value in the interval <span class="math-container">$(x_{n-1},x_n)$</span> where <span class="math-container">$f'(x)=0$</span>.</p>
<p>We have just proved the existence of at least <span class="math-container">$n-1$</span> roots of <span class="math-container">$f'(x)$</span> with <span class="math-container">$n>3$</span>. That is, there exist at least <span class="math-container">$3$</span> roots of the equation <span class="math-container">$e^x-2ax-b=0$</span>. Let the distinct real roots be <span class="math-container">$x_1',x_2',x_3',\ldots, x_{n-1}'\in \mathbb{R}$</span>. In such a case,</p>
<p><span class="math-container">$$f'(x_1')=f'(x_2')=\cdots =f'(x_{n-1}')=0$$</span></p>
<p>By Rolle's Theorem (applied to <span class="math-container">$f'$</span>), there exists at least one value in the interval <span class="math-container">$(x_1',x_2')$</span> where <span class="math-container">$f''(x)=0$</span>; at least one value in the interval <span class="math-container">$(x_2',x_3')$</span> where <span class="math-container">$f''(x)=0$</span>; ... at least one value in the interval <span class="math-container">$(x_{n-2}',x_{n-1}')$</span> where <span class="math-container">$f''(x)=0$</span>.</p>
<p>We have just proved the existence of at least <span class="math-container">$n-2$</span> roots of <span class="math-container">$f''(x)$</span> with <span class="math-container">$n>3$</span>. That is, there exist at least <span class="math-container">$2$</span> roots of the equation <span class="math-container">$f''(x)=e^x-2a=0$</span>.</p>
<p>On the other hand, we know how to solve this equation algebraically; as it turns out, it has exactly one real solution when <span class="math-container">$a>0$</span>:</p>
<p><span class="math-container">$$e^x-2a=0\Leftrightarrow e^x=2a \Leftrightarrow x=\ln (2a)$$</span></p>
<p>And if <span class="math-container">$a\leq 0$</span>, there does not exist any real solution to the equation <span class="math-container">$e^x-2a=0$</span>, because <span class="math-container">$e^x>0, \,\, \forall x\in \mathbb{R}$</span>.</p>
<p>This is an absurd thing according to what we have found above.</p>
<p>Therefore, the absurdity comes from assuming that <span class="math-container">$f(x)$</span> has more than three distinct real roots. We conclude that the number of roots of <span class="math-container">$f(x)$</span> cannot exceed three.</p>
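<p>As a numerical sanity check (a sketch, separate from the proof): counting sign changes of <span class="math-container">$f$</span> on a grid gives a lower bound on its number of real roots, and over many random coefficient choices that count indeed never exceeds <span class="math-container">$3$</span>.</p>

```python
import math
import random

def sign_changes(f, lo, hi, steps=4000):
    """Count sign changes of f on a uniform grid.

    Each sign change forces a root in between (intermediate value
    theorem), so this is a lower bound on the number of real roots."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for v, w in zip(vals, vals[1:]) if v * w < 0)

random.seed(0)
max_changes = 0
for _ in range(100):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    f = lambda x, a=a, b=b, c=c: math.exp(x) - (a * x * x + b * x + c)
    max_changes = max(max_changes, sign_changes(f, -30.0, 10.0))
```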
|
1,286,306 | <p>Suppose that $a_1,...,a_n,b_1,...,b_n ∈ F $ are such that $\sum a_ib_i = 1_F$. </p>
<p>Let $J : F^n → F^n $ be the linear transformation whose standard matrix has $ij^{th}$ entry $a_ib_j$. </p>
<p>Prove that $J^2 = J$.</p>
<p>So I think I've figured out that the $ij^{th}$ entry of the matrix of $J^2$ is given by </p>
<p>$u_{ij} = a_ib_j = \sum_{c=1}^{n}\sum_{d=1}^{n} (a_ib_c)(b_ja_d)$</p>
<p>given that</p>
<p>$ \sum (a_ib_i) = 1 $</p>
<p>I think I got the above right, not 100% sure, but I still don't know where to go from this point onwards.</p>
<p>Could someone help me out with that.</p>
<p>Thanks.</p>
| Rikimaru | 80,284 | <p>You should be careful with blindly applying Tychonoff's theorem. After all, that theorem also says that the "unit cube" $C = \{(x_1,...) : x \in [0,1] \}$ is compact, even though the sequence $(x_n)_i = \delta_{in}$ lacks a converging subsequence. This kind of nonsense cannot happen with your cube because you've imposed a condition as you go further down. So, what this says is that the topology induced by your choice of metric is certainly not the one induced by Tychonoff!</p>
<p>You can actually work "backwards" and try to see why such sequences won't happen in your cube and use that to argue that it is compact. Lost in a Maze's argument will also work. </p>
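<p>A quick numerical check of the question's identity <span class="math-container">$J^2=J$</span> over <span class="math-container">$F=\mathbb R$</span> (illustrative vectors, rescaled so that <span class="math-container">$\sum_i a_ib_i=1$</span>). The key point is <span class="math-container">$(J^2)_{ij}=\sum_k (a_ib_k)(a_kb_j)=a_ib_j\sum_k a_kb_k=a_ib_j$</span>:</p>

```python
n = 5
a = [1.0, 2.0, 3.0, 4.0, 5.0]                 # illustrative choice
b = [5.0, 4.0, 3.0, 2.0, 1.0]
s = sum(x * y for x, y in zip(a, b))          # = 35 here
b = [y / s for y in b]                        # now sum_i a_i * b_i == 1

J = [[a[i] * b[j] for j in range(n)] for i in range(n)]
J2 = [[sum(J[i][k] * J[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
max_err = max(abs(J2[i][j] - J[i][j]) for i in range(n) for j in range(n))
```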
|
2,978,988 | <p>I'm stuck at a question. </p>
<p>The question states that <span class="math-container">$K$</span> is a field like <span class="math-container">$\mathbb Q, \mathbb R, \mathbb C$</span> or <span class="math-container">$\mathbb Z/p\mathbb Z$</span> with <span class="math-container">$p$</span> a prime. <span class="math-container">$R$</span> is used to denote the ring <span class="math-container">$K[X]$</span>. A subset <span class="math-container">$I$</span> of <span class="math-container">$R$</span> is called an ideal if:</p>
<p>• <span class="math-container">$0 \in I$</span>; </p>
<p>• <span class="math-container">$a,b \in I \to a−b \in I$</span>; </p>
<p>• <span class="math-container">$a \in I$</span> and <span class="math-container">$r \in R \to ra \in I$</span>. </p>
<p>Suppose <span class="math-container">$a_1,...,a_n \in R$</span>. The ideal <span class="math-container">$⟨a_1,...,a_n⟩$</span> generated by <span class="math-container">$a_1,...,a_n$</span> is defined as the intersection of all ideals which contain <span class="math-container">$a_1,...,a_n$</span>. Prove that <span class="math-container">$⟨a_1,...,a_n⟩ = \{r_1a_1 +\cdots+ r_na_n \mid r_1,...,r_n \in R\}$</span>. </p>
<p>I proved this, but I got stuck on the one below:</p>
<p>Prove that <span class="math-container">$⟨a_1,...,a_n⟩ = ⟨\gcd(a_1,...,a_n)⟩$</span></p>
<p>I know how to calculate the gcd, but how do I use it in this context? There are now more than two elements, so I don't know how to work with this.</p>
| Oleg567 | 47,993 | <p>Yes, of course:
for any <span class="math-container">$n \in \mathbb{N}$</span>:
<span class="math-container">$$
\sum_{k=n^2}^{n^2+n}k = \dfrac{(n+1)(2n^2+n)}{2} = \dfrac{(n+1)n(2n+1)}{2};\tag{1}
$$</span>
and
<span class="math-container">$$
\sum_{k=n^2+n+1}^{n^2+2n}k = \dfrac{n(2n^2+3n+1)}{2} = \dfrac{n(2n+1)(n+1)}{2}.\tag{2}
$$</span></p>
<p>The right-hand sides of <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> are equal.</p>
<hr>
<p>Therefore one can write more generally:
<span class="math-container">$$
\sum_{k=n^2}^{n^2+n}k = \sum_{k=n^2+n+1}^{n^2+2n}k,
$$</span>
or
<span class="math-container">$$
n^2+(n^2+1)+\ldots+(n^2+n) = (n^2+n+1)+(n^2+n+2)+\ldots + (n^2+2n).
$$</span></p>
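<p>The identity is easy to verify exactly in integer arithmetic:</p>

```python
# n^2 + (n^2+1) + ... + (n^2+n) == (n^2+n+1) + ... + (n^2+2n),
# and both sides equal n*(n+1)*(2n+1)/2, for a range of n.
ok = all(
    sum(range(n * n, n * n + n + 1))
    == sum(range(n * n + n + 1, n * n + 2 * n + 1))
    == n * (n + 1) * (2 * n + 1) // 2
    for n in range(1, 500)
)
```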
|
863,860 | <p>I am not particularly well-versed in topology, so I wanted to check with you whether there exists a much simpler argument to prove the following statement or whether there are problems with my proof. The statement also seems to be a very standard result but I could not find a reference in e.g. a book on basic topology (references would also be appreciated). The statement is as follows:</p>
<p>Consider $\mathbb{R}^d$ with its usual topology, where $d \geq 1$. Let $A \subseteq \mathbb{R}^d$ be bounded. Then, for any $x\in A$ and $y\in A^c$, there exists a point in the line segment joining $x$ and $y$ ($x$ and $y$ included) that also belongs to the boundary $\partial A$ of $A$. </p>
<p>My argument goes like this: Consider a bijection $T$ from $[0,1]$ to such a line segment so that $T(0) = x$ and $T(1) = y$ (Actually this step is not very necessary but makes the argument a little more visual). For any $a\in[0,1]$, let $f(a) = 0$ if $T(a) \in A$ and otherwise let $f(a) = 1$ if $T(a) \notin A$ so that $f(0) = 0$ (because $x$ is a member of $A$ and $T(0) = x$) and $f(1) = 1$. It is now sufficient to find some $b\in[0,1]$ such that for every $\epsilon > 0$, $f((b-\epsilon,b+\epsilon)\cap[0,1]) = \{0,1\}$. "Topologically," this would mean that every open neighborhood of $b$ contains points from both $A$ and $A^c$, which would mean $b\in\partial A$.</p>
<p>We can find such a $b$ constructively as follows: Let $I_0 = [0,1]$ (We will have a recursion $I_1,I_2,\ldots,$ which will all be intervals). Recall $f(0) = 0$ and $f(1) = 1$. Consider $f(\frac{1}{2})$. If $f(\frac{1}{2}) = 0$, we set $I_1 = [\frac{1}{2},1]$, otherwise if $f(\frac{1}{2}) = 1$ we set $I_1 = [0,\frac{1}{2}]$. In either case, $f$ takes the values $0$ and $1$, respectively at the lower and upper end points of $I_1$. We continue this process by dividing $I_1$ on its middle, and so on, while at each iteration we make sure that $f(\min I_n) = 0$ and $f(\max I_n) = 1$. Let $b = \lim \min I_n =\lim \max I_n$ (It is not difficult to see the limits exist) and we are done.</p>
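<p>The constructive bisection above translates directly into an algorithm. A sketch (illustrative choice: $A$ the open unit disk in $\mathbb{R}^2$, so the point found should lie on the unit circle):</p>

```python
import math

def boundary_point(x, y, in_A, iters=60):
    """Bisect the segment from x (in A) to y (not in A), mirroring the
    nested intervals I_n; returns a point essentially on the boundary."""
    lo, hi = 0.0, 1.0                      # parameters along T : [0,1] -> segment
    point = lambda t: tuple(xi + t * (yi - xi) for xi, yi in zip(x, y))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if in_A(point(mid)):
            lo = mid                       # f(mid) = 0: keep the right half
        else:
            hi = mid                       # f(mid) = 1: keep the left half
    return point((lo + hi) / 2)

in_disk = lambda p: p[0] ** 2 + p[1] ** 2 < 1       # open unit disk
b = boundary_point((0.2, 0.1), (3.0, 2.0), in_disk)
dist_to_boundary = abs(math.hypot(*b) - 1.0)
```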
| user138999 | 162,288 | <p>$$\begin{align}
f'(x)&=\frac{d}{dx}\frac{x^2+4x+3}{\sqrt{x}} \\
&= \frac{d}{dx}\left(\frac{x^2}{\sqrt{x}}+\frac{4x}{\sqrt{x}}+\frac{3}{\sqrt{x}}\right) \\
&=\frac{d}{dx}\left(x^{\frac{3}{2}}+4\sqrt{x}+3(x)^{-\frac{1}{2}}\right) \\
&=\frac{3}{2}x^{\frac{1}{2}}+\frac{4}{2\sqrt{x}}-\frac{3}{2x^{\frac{3}{2}}}.
\end{align}$$ This is simplified as
$$f'(x)=\frac{3}{2}\sqrt{x}+\frac{2}{\sqrt{x}}-\frac{3}{2x^{\frac{3}{2}}}.$$</p>
<p><strong>Note.</strong> The term $x^{\frac{3}{2}}$ can be expressed as $x^1\cdot x^{1/2}=x\sqrt{x}.$</p>
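<p>A quick finite-difference check of the result at a sample point (a sketch):</p>

```python
import math

f = lambda x: (x * x + 4 * x + 3) / math.sqrt(x)
fp = lambda x: 1.5 * math.sqrt(x) + 2 / math.sqrt(x) - 1.5 * x ** -1.5

x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
gap = abs(numeric - fp(x))
```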
|
1,504,483 | <p>Where did the angle convention (in mathematics) come from?</p>
<p>One would imagine that a clockwise direction would be more 'natural' (given
sundials & the like, also a magnetic compass dial).</p>
<p>Also, given time and direction conventions, one would imagine that the
zero degree line would be vertical.</p>
<p>There are two parts to this
question: (1) Why do we measure angles anticlockwise?
(2) Why do we take the zero degree line to be along the $x$-axis.</p>
<p>(This was inspired by <a href="https://matheducators.stackexchange.com/questions/9874/why-do-we-conventionally-treat-trig-functions-as-going-anti-clockwise-from-the-r">https://matheducators.stackexchange.com/questions/9874/why-do-we-conventionally-treat-trig-functions-as-going-anti-clockwise-from-the-r</a>.)</p>
| Singh | 121,735 | <p>We measure angles from the $x$-axis. So one arm of the angle lies along the $x$-axis, and the other arm also lies on the $x$-axis when the angle is zero. This is why we take the zero-degree line along the $x$-axis.</p>
<p>In the rectangular coordinate system we have four quadrants. Now we move the second arm, which is fixed at the origin. When we move the second arm in the counterclockwise direction, we pass through the quadrants in the order I-II-III-IV.</p>
<p>Historically, mathematicians worked on height-and-distance problems in which they had to find the height of a tower (say) without measuring it directly. In those problems they needed the angle between the line of sight (toward the highest point of the tower) and the surface of the Earth; that angle opens upward from the horizontal, which suggests measuring counterclockwise.</p>
<p>Today we have coordinate transformations, so we can always define everything in a new coordinate system according to our convenience.</p>
<p>I came across this question while doing trigonometry from my class mathematics textbook, and the above is what I found while searching for the solution.</p>
|
1,215,537 | <p>I need to prove that
$ \int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx = \frac{\pi}{2}$.
I have proved that $\sum_1^\infty \frac {\sin^2(n \delta)}{n^2 \delta}=\frac{\pi-\delta}{2}$ for $0<\delta<\pi$ and I'm supposed to use this identity.</p>
| Seyed Mohsen Ayyoubzadeh | 165,227 | <p>Use your identity for $\delta \to 0$. In this case, by assuming $x = n\delta $, and noting that $dx = (n + 1)\delta - n\delta = \delta $ and ${x_{\min }} = 1\delta = 0$ and ${x_{\max }} = +\infty \delta = +\infty$, your identity will give $$\mathop {\lim }\limits_{\delta \to 0} \sum\limits_{n = 1\atop x = n\delta }^\infty {{{\left( {\frac{{\sin (x)}}{x}} \right)}^2}dx} = \int\limits_0^{ + \infty } {{{\left( {\frac{{\sin (x)}}{x}} \right)}^2}dx} = \frac{\pi }{2}$$which is (hopefully) your desired result ;)</p>
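<p>The identity you proved, $\sum_{n\ge 1} \frac{\sin^2(n\delta)}{n^2\delta}=\frac{\pi-\delta}{2}$, can also be checked numerically for a fixed $\delta$ (a sketch; the tail beyond $N$ terms is of order $\frac{1}{2N\delta}$):</p>

```python
import math

delta = 0.1
N = 1_000_000                  # tail beyond N is about 1/(2*N*delta) = 5e-6
partial = sum(math.sin(n * delta) ** 2 / (n * n * delta)
              for n in range(1, N + 1))
target = (math.pi - delta) / 2
err = abs(partial - target)
```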
|
31,480 | <p>I'm having difficulty with my math, fractions and up. I used to understand it all, but it's been so long since I've touched the book (I finished it a couple of months ago, picked it up to review everything), I seem to have forgotten it. </p>
<p>The explanations inside of the individual chapters do no good. They never helped me, and I always resorted to having my older brother helping (who is now away at college), and I can't find any resources online that help at all.</p>
<p>Are there any online tutorials / guides that can help me relearn all this fully, all the way from the basics, up to college level? </p>
| GeoffDS | 8,671 | <p>I upvoted picakhu's answer and it's probably better than mine. But, this might be helpful still. Not online, but maybe the <a href="http://www.artofproblemsolving.com/index.php?mode=books">Art of Problem Solving</a> would be helpful. It is what I am going to use for my children, possibly. There are 8 books starting at algebra and going through calculus. And, the emphasis is on learning problem solving. The books are not that expensive and you can buy a full solutions manual for each one, which is also not expensive.</p>
|
2,541,709 | <p>For example: calculate the probability of getting exactly 50 heads and 50 tails after flipping a fair coin $100$ times. The answer is ${100 \choose 50}\left(\frac 12\right)^{50}\left(\frac 12\right)^{50}$.
The first factor $\left(\frac 12\right)^{50}$ I understand: each of the $50$ heads has probability $\frac 12$, and multiplying $\frac 12$ by itself $50$ times gives $\left(\frac 12\right)^{50}$.
I know we need to multiply by the second $\left(\frac 12\right)^{50}$ term as well, although it is the "failure" of the $50$ heads (or the other way around, when we are talking about the $50$ tails).
My question is:
Why do we need to multiply by the probability of the failure events? (I do notice that "exactly" always seems to appear in such questions.)</p>
| zhw. | 228,045 | <p>It's actually true for any continuous $f$ bounded on $\mathbb R.$ Let $M= \sup_{\mathbb R} |f|.$ Rewrite the convolution as</p>
<p>$$\int_{\mathbb R} f(x-t)g(t)\,dt.$$</p>
<p>Fix any $x$ and let $x_n \to x.$ By the continuity of $f,$ $f(x_n-t) \to f(x-t)$ pointwise on $\mathbb R.$ Since $|f(x_n-t)g(t)| \le M|g(t)|$ on $\mathbb R$ for all $n$ and $t,$ the dominated convergence theorem shows</p>
<p>$$\int_{\mathbb R} f(x_n-t)g(t)\,dt \to \int_{\mathbb R} f(x-t)g(t)\,dt.$$</p>
<p>Thus the convolution is continuous at $x$ as desired, and we're done.</p>
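<p>For the coin example in the question, the value itself is easy to compute, and the factored form ${100 \choose 50}\left(\frac 12\right)^{50}\left(\frac 12\right)^{50}$ agrees with the single exact ratio (Python 3.8+ for <code>math.comb</code>):</p>

```python
import math

# P(exactly 50 heads in 100 fair flips), as written in the question ...
p_factored = math.comb(100, 50) * (1 / 2) ** 50 * (1 / 2) ** 50
# ... and as a single exact ratio; both are about 0.0796
p_exact = math.comb(100, 50) / 2 ** 100
```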
|
64,780 | <p>I need to sum values that belong to the same week. For example, I have a list x with one column and n rows. Format: </p>
<pre><code>{{{2007,1,3},0.2},{{2007,1,4},0.1},{{2007,1,5},0.14},{{2007,1,8},0.}, ... ,{{2014,10,17},-0.2},{{2014,10,18},0.2},{{2014,10,19},0.2}}
</code></pre>
<p>Dates in the list are sorted from oldest to newest.
Said differently: is there any function like the "WEEKNUM" function in MS Excel that returns the number of the week in the year, so I could use "GatherBy" on that week number and "Accumulate" the values?</p>
| Kuba | 5,478 | <p>Let's convert dates:</p>
<pre><code>data ={
{{2007, 1, 3}, 0.2}, {{2007, 1, 4}, 0.1}, {{2007, 1, 5}, 0.14}, {{2007, 1, 8}, 0.},
{{2014, 10, 17}, -0.2}, {{2014, 10, 18}, 0.2}, {{2014, 10, 19}, 0.2}};
data = MapAt[DateList, data, {;; , 1}];
</code></pre>
<p>I don't know if this can be done automatically, but unless I put in the very first day of the week with a "neutral" value, Mathematica starts counting "Weeks" from the first data point. Kind of expected.</p>
<p>So let's add that day:</p>
<pre><code> PrependTo[data, {DateList[{2007, 1, 1}], 0}]
</code></pre>
<blockquote>
<pre><code>{{{2007, 1, 1, 0, 0, 0.}, 0}, {{2007, 1, 3, 0, 0, 0.}, 0.2},
{{2007, 1, 4, 0, 0, 0.}, 0.1}, {{2007, 1, 5, 0, 0, 0.}, 0.14},
{{2007, 1, 8, 0, 0, 0.}, 0.}, {{2014, 10, 17, 0, 0, 0.}, -0.2},
{{2014, 10, 18, 0, 0, 0.}, 0.2}, {{2014, 10, 19, 0, 0, 0.}, 0.2}}
</code></pre>
</blockquote>
<p>Now, in V10 you can do this automatically. We will get only 3 points because I use European week.</p>
<pre><code>TimeSeriesAggregate[data, "Week", Total]
DateListPlot[{data, %}, Joined -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/e9s4F.png" alt="enter image description here"></p>
|
3,296,596 | <p>I've been asked the following question and I'm not sure how to approach it.</p>
<p>Solve the system</p>
<p><span class="math-container">\begin{cases}
x_1+x_2-5x_3=2 \\
6x_1+7x_2+4x_3=7
\end{cases}</span></p>
<p>The answer is required to be in the form of</p>
<p><span class="math-container">$\begin{bmatrix}x_1\\ x_2\\x_3\end{bmatrix}$</span>=<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span>+s<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span></p>
<p>I know how to solve systems using REF and RREF, or by converting linear equations to matrix equations and solving using inverses. But I'm not sure how to produce the answer format above. Any tips? Don't give the answer outright if at all possible, but some hints would be nice. Thanks.</p>
| Community | -1 | <p><strong>Hint:</strong> Evidently, the solution set is <span class="math-container">$1$</span>-dimensional. </p>
<p>Now use row-reduction to find it.</p>
|
3,296,596 | <p>I've been asked the following question and I'm not sure how to approach it.</p>
<p>Solve the system</p>
<p><span class="math-container">\begin{cases}
x_1+x_2-5x_3=2 \\
6x_1+7x_2+4x_3=7
\end{cases}</span></p>
<p>The answer is required to be in the form of</p>
<p><span class="math-container">$\begin{bmatrix}x_1\\ x_2\\x_3\end{bmatrix}$</span>=<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span>+s<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span></p>
<p>I know how to solve systems using REF and RREF, or by converting linear equations to matrix equations and solving using inverses. But I'm not sure how to produce the answer format above. Any tips? Don't give the answer outright if at all possible, but some hints would be nice. Thanks.</p>
| Fred | 380,717 | <p>If you use the RREF, you will get</p>
<p><span class="math-container">$x_1=7+39x_3$</span></p>
<p>and</p>
<p><span class="math-container">$x_2=-5-34x_3.$</span></p>
<p>Now put <span class="math-container">$s=x_3$</span> an we derive</p>
<p><span class="math-container">$$\begin{bmatrix}x_1\\ x_2\\x_3\end{bmatrix}=\begin{bmatrix}7\\ -5\\0\end{bmatrix}+s\begin{bmatrix}39\\ -34\\1\end{bmatrix}.$$</span></p>
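<p>As a check, substitute the particular solution, the direction vector (into the homogeneous system), and a point further along the line back into the equations:</p>

```python
def residuals(x1, x2, x3):
    """Left-hand sides minus right-hand sides of the two equations."""
    return (x1 + x2 - 5 * x3 - 2, 6 * x1 + 7 * x2 + 4 * x3 - 7)

def homogeneous(x1, x2, x3):
    """The direction vector must solve the homogeneous system."""
    return (x1 + x2 - 5 * x3, 6 * x1 + 7 * x2 + 4 * x3)

part = residuals(7, -5, 0)                       # particular solution (s = 0)
direc = homogeneous(39, -34, 1)                  # direction vector
full = residuals(7 + 3 * 39, -5 + 3 * (-34), 3)  # the point at s = 3
```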
|
3,296,596 | <p>I've been asked the following question and I'm not sure how to approach it.</p>
<p>Solve the system</p>
<p><span class="math-container">\begin{cases}
x_1+x_2-5x_3=2 \\
6x_1+7x_2+4x_3=7
\end{cases}</span></p>
<p>The answer is required to be in the form of</p>
<p><span class="math-container">$\begin{bmatrix}x_1\\ x_2\\x_3\end{bmatrix}$</span>=<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span>+s<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span></p>
<p>I know how to solve systems using REF and RREF, or by converting linear equations to matrix equations and solving using inverses. But I'm not sure how to produce the answer format above. Any tips? Don't give the answer outright if at all possible, but some hints would be nice. Thanks.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Multiplying the first equation by <span class="math-container">$4$</span> and the second by <span class="math-container">$5$</span>, then adding so the <span class="math-container">$x_3$</span> terms cancel, we get
<span class="math-container">$$x_1=\frac{43}{34}-\frac{39}{34}x_2.$$</span>
Substituting back into the first equation and taking <span class="math-container">$x_3$</span> as the free parameter gives <span class="math-container">$x_2=-5-34x_3$</span>, and we get
<span class="math-container">$$[x_1,x_2,x_3]=[7,-5,0]+t[39,-34,1]$$</span> where <span class="math-container">$t\in \mathbb{R}$</span></p>
|
3,574,460 | <p>Suppose <span class="math-container">$\{X_0, X_1, \ldots\}$</span> forms a Markov chain with state space <span class="math-container">$S$</span>. For any <span class="math-container">$n \ge 1$</span>
and <span class="math-container">$i_0, i_1, \ldots \in S$</span>, which conditional probability, <span class="math-container">$P(X_0 = i_0\mid X_1 = i_1)$</span> or <span class="math-container">$P(X_0 = i_0\mid X_n = i_n)$</span>, is equal to
<span class="math-container">$P(X_0 = i_0\mid X_1 = i_1, \ldots , X_n = i_n)$</span>?</p>
<p>I think it is the second one?? I do know the Markov property, but I am not sure how it applies to the initial state.</p>
| kiyomi | 527,262 | <p>If you are working with a stationary ergodic Markov chain, then what you mention is a <a href="http://www.columbia.edu/~ks20/stochastic-I/stochastic-I-Time-Reversibility.pdf" rel="nofollow noreferrer">time-reversible Markov chain</a>, which must satisfy certain criteria to be classified as one; make sure those apply to the one you are working with. But the reversed process satisfies
<span class="math-container">$$\mathbb P\left(X_n=i_{n}\vert X_{n+1}=i_{n+1},X_{n+2},X_{n+3},...\right)=\mathbb P\left(X_n=i_{n}\vert X_{n+1}=i_{n+1}\right)$$</span>
in your case, just take <span class="math-container">$n=0$</span>.</p>
<p>The other option mentioned would be the <span class="math-container">$n$</span>-th step transition probability of going from <span class="math-container">$i_n$</span> to <span class="math-container">$i_0$</span>.</p>
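<p>The point can be made concrete with a two-state chain (an assumed, illustrative transition matrix): computing the joint law of <span class="math-container">$(X_0,X_1,X_2)$</span> exactly shows <span class="math-container">$P(X_0=i_0\mid X_1=i_1,X_2=i_2)=P(X_0=i_0\mid X_1=i_1)$</span>, because the factor <span class="math-container">$P_{i_1 i_2}$</span> cancels from numerator and denominator.</p>

```python
from itertools import product

P = [[0.9, 0.1],        # illustrative transition matrix (rows sum to 1)
     [0.4, 0.6]]
pi0 = [0.3, 0.7]        # illustrative initial distribution

def joint(i0, i1, i2):
    """P(X0 = i0, X1 = i1, X2 = i2)."""
    return pi0[i0] * P[i0][i1] * P[i1][i2]

def cond_given_1(i0, i1):
    """P(X0 = i0 | X1 = i1)."""
    num = sum(joint(i0, i1, i2) for i2 in (0, 1))
    den = sum(joint(j, i1, i2) for j, i2 in product((0, 1), repeat=2))
    return num / den

def cond_given_12(i0, i1, i2):
    """P(X0 = i0 | X1 = i1, X2 = i2)."""
    return joint(i0, i1, i2) / sum(joint(j, i1, i2) for j in (0, 1))

max_gap = max(abs(cond_given_1(i0, i1) - cond_given_12(i0, i1, i2))
              for i0, i1, i2 in product((0, 1), repeat=3))
```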
|
98,361 | <p>I have been reading Rudin (Principles of Mathematical Analysis) on my own now for around a month or so. While I was able to complete the first chapter without any difficulty, I am having problems trying to get the second chapter right. I have been able to get the definitions and work out some problems, but I am still not sure if I understand the thing and it is certainly not internalized. </p>
<p>I am wondering whether I should take this shaky structure with me to the next chapters, hoping that the application there improves my understanding, or to stop and complete this chapter really well? </p>
<p>What do you think? </p>
<p>As for my background, I am quite close to completing Linear Algebra by Lang (having done a course in Linear Algebra from Strang). I have completed Spivak's Calculus. I come from an engineering background, and so I have done multivariable calculus, Fourier analysis, numerical analysis, and basic probability and random variables as required for engineering. One of the professors advised that I may be better off studying Part I from Topology and Modern Analysis by GF Simmons, but I am finding that completing that book itself may take a semester and I would prefer not to wait that long to start with analysis. </p>
<p>Thank You</p>
<p><strong>EDIT</strong>: If it makes any difference, I am studying on my own. </p>
<p><strong>EDIT</strong>:
So, I have accepted the answer by Samuel Reid. I too found the limit-point definition as presented by Rudin, and the large set of definitions listed there, somewhat dry and without motivation or examples. This is one of the places that makes the book a little difficult for self-study. What worked for me in this case was taking some drill problems from other books and working through them. I would advise anyone to go very slowly over sections 2.18 to 2.32. There are too many definitions and new concepts in those sections, and to miss even one means you cannot move forward. To tell the truth, I found Simmons's 50 pages (from chapter 2 section 10 to the end of chapter 3) to be more useful than the corresponding 4.5 pages in Rudin. </p>
| Samuel Reid | 19,723 | <p>As I found out while working through that chapter, a lot of misunderstanding can arise from not understanding the idea of a limit point thoroughly. To remedy this, I recommend you visit a question I asked a little while ago: <a href="https://math.stackexchange.com/questions/93288/understanding-the-idea-of-a-limit-point-topology">Understanding the idea of a Limit Point (Topology)</a>.</p>
<p>Another tip would be to remember that in most situations you do not need to worry (conceptually) about the full definition of compact as the whole "open cover containing a finite subcover" which is a loaded statement as the definition of open cover trickles down back to the limit point. Just remember that compact can sometimes be visualized as "closed and bounded". When I was working through this chapter it helped me to try and draw out pictures for the concepts (and then I would make them up in Adobe Illustrator as you can see in the link above). Once you have some sort of solid mental imagery for a particular concept it will be easier to build on the previous terminology when a new concept is introduced. A first exposure to Topology is extremely difficult in this respect as there is so much new terminology that you are not familiar with, and then they build on it immediately!</p>
<p>A few seemingly unimportant things I would suggest that you should NOT gloss over.</p>
<ul>
<li>"Open relative to..." (Theorem 2.30, Theorem 2.33)</li>
<li>Sequences of sets, intervals, k-cells, etc. (Theorem 2.38, Theorem 2.39)</li>
<li>Specifically, make sure you understand Theorem 2.41 as it really wraps up a lot of the concepts in this chapter and tests that you actually do understand the notions of closure, boundary, compactness, limit points, and how they relate to each other.</li>
</ul>
<p>Of specific importance if you plan on studying Convex Geometry (Convex Sets, Convex Polytopes, etc.) it is very important that you have a great understanding of anything "Open relative to..." or "Compact relative to..." as they lead to an understanding of the style used in the basic foundations of convex geometry in relative interior, relative boundary, etc.</p>
<hr>
<p>I would highly recommend that you spend more time going over the material in the actual book and do as many exercises as you can. I found that with this book in particular, you think you understand the meaning of a particular theorem or think you understand why some result is important, only to be blown away during an exercise when you realize that the theorem means something different than you thought it did. Make sure that you can get through some (if not most) of the exercises before moving on to the next chapter, and if you are struggling with one, CONTINUE TO STRUGGLE WITH IT! Only post on here as a sort of last resort if you have spent maybe 3-4+ hours on a single question and can't make any progress.</p>
<p>Remember to hop around on the questions for a bit. If you've solved maybe 50% of them and the remaining questions all seem very hard, try one for 15-20 minutes, go to another one and try it for 15-20 minutes, and keep switching around on the questions (pretend it's like the Putnam!); I find that things click faster for me that way. If I'm hopping around between 5 or 6 questions and spend 4 hours working on them, I'll likely be able to solve 2 or 3, whereas if I just ram my head against a wall on one of them for 4 hours I might not even solve that one.</p>
<p>Keep in mind that there are certain exercises (a few each chapter) that are VERY hard, so don't get discouraged! Stay passionate about the concepts and don't worry if things aren't obvious... because they aren't. Remember it took some of the greatest geniuses of the past few generations to figure out this stuff in the first place!</p>
<p>Good luck!</p>
|
98,361 | <p>I have been reading Rudin (Principles of Mathematical Analysis) on my own now for around a month or so. While I was able to complete the first chapter without any difficulty, I am having problems trying to get the second chapter right. I have been able to get the definitions and work out some problems, but I am still not sure if I understand the thing and it is certainly not internalized. </p>
<p>I am wondering whether I should take this shaky structure with me to the next chapters, hoping that the application there improves my understanding, or to stop and complete this chapter really well? </p>
<p>What do you think? </p>
<p>As for my background, I am quite close to completing Linear Algebra by Lang (having done a course in Linear Algebra from Strang). I have completed Spivak's Calculus. I come from an engineering background, so I have done multivariable calculus, Fourier analysis, numerical analysis, and basic probability and random variables as required for engineering. One of the professors advised that I may be better off studying Part I of Topology and Modern Analysis by GF Simmons, but I find that completing that book itself may take a semester, and I would prefer not to wait that long to start with analysis. </p>
<p>Thank You</p>
<p><strong>EDIT</strong>: If it makes any difference, I am studying on my own. </p>
<p><strong>EDIT</strong>:
So, I have accepted the answer by Samuel Reid. I too have found the limit point definition as illustrated by Rudin, and the large set of definitions listed there, somewhat dry and without any motivation or examples. This is one of the places in the book which makes it a little difficult for self-study. What I found working in this case is taking some drill problems from other books and working through them. I will advise anyone to go really slowly over sections 2.18 to 2.32. There are too many definitions and new concepts in those sections, and to miss even one means you cannot move forward. To tell the truth, I found Simmons's 50 pages (from chapter 2 section 10 to the end of chapter 3) to be more useful than the corresponding 4.5 pages in Rudin. </p>
| yep | 22,795 | <p>I was in precisely your situation several years ago. In hindsight, Rudin was a poor text for self-study. Perhaps if you're the kind of person who grew up with mathematical culture (parents mathematical, friends interested in mathematics,etc), you'll have the broad cultural background necessary to appreciate the overall approach, or a network of people to bounce ideas off of.</p>
<p>For me, working to learn formal mathematics with my background in electrical engineering was an uphill fight. Rather than struggling through Rudin in a linear fashion, I'd recommend supplementing the exercises in Rudin with other texts that provide more examples, and a more complete link to the history and context in which the subject developed. </p>
<p>I found the exposition in Thomas Korner's book, <a href="http://books.google.com/books/about/A_companion_to_analysis.html?id=H3zGTvmtp74C" rel="nofollow">A Companion to Analysis</a> especially helpful. I agree with the other poster that the Munkres book is excellent. Also, for whatever reason, I found the <a href="http://rads.stackoverflow.com/amzn/click/0024041513" rel="nofollow">Royden</a> book better for self-study than the Rudin text. There are also numerous expository texts on introductory analysis (the subject is more-or-less standard, modulo pedagogical preferences) that attempt friendly introductions; I won't link to any here since I'm not familiar with any in particular, but a quick google search would surely bring up a few. </p>
<p>Rather than trying to tease understanding out of unchanging paragraphs, I recommend you supplement Rudin with many other texts. Because the subject is relatively standard, other texts may provide a different perspective on a topic that provides you with that "ah-ha" moment -- why have one teacher when you can have many? Think of those Rudin paragraphs as research problems unto themselves, and go hunting for the context you need to place them in perspective. </p>
<p>Finally, I don't think it's at all a bad idea to skip ahead in Rudin to the next chapter, or to whatever part you're interested in. Working on a problem in the next chapter may provide the context you need to understand the need for the topological background in chapter 2. </p>
<p>Good luck!</p>
|
10,949 | <p>Is it known whether every finite abelian group is isomorphic to the ideal class group of the ring of integers in some number field? If so, is it still true if we consider only imaginary quadratic fields?</p>
| Pete L. Clark | 299 | <p>Virtually nothing is known about the question of which abelian groups can be the ideal class group of (the full ring of integers of) some number field. So far as I know, it is a plausible conjecture that all finite abelian groups (up to isomorphism, of course) occur in this way. Conjectures and heuristics in this vein have been made, but unfortunately for me I'm not so familiar with them.</p>
<p>The situation for imaginary quadratic fields is different. Here there is an absolute bound on the size of an integer $k$ such that the class group of an imaginary quadratic field can be isomorphic to $(\mathbb{Z}/2\mathbb{Z})^k$. Conditionally on the Generalized Riemann Hypothesis, the largest such $k$ is $4$. This has do to with <strong>idoneal numbers</strong>, of which the following paper provides a very fine survey:</p>
<p><a href="http://www.mast.queensu.ca/~kani/papers/idoneal-f.pdf">http://www.mast.queensu.ca/~kani/papers/idoneal-f.pdf</a></p>
<p>Actually the truth is slightly stronger: let $H_D$ be the class group of the imaginary quadratic field $\mathbb{Q}(\sqrt{-D})$. Then, as $D$ tends to negative infinity through squarefree numbers, the size of $2H_D$ (the image of multiplication by $2$) tends to infinity. See for instance </p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0358v2.pdf">http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0358v2.pdf</a></p>
<p>for some recent explicit bounds on this. </p>
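<p>As a concrete aside (an editor's sketch, not part of the original answer): the class groups $H_D$ discussed above can be made tangible for small $|D|$, since the class number of an imaginary quadratic discriminant $D &lt; 0$ equals the number of reduced primitive binary quadratic forms $(a,b,c)$ with $b^2-4ac=D$, $-a &lt; b \le a \le c$, and $b \ge 0$ when $a=c$.</p>

```python
# Editor's sketch: count reduced primitive binary quadratic forms of an
# imaginary quadratic discriminant D < 0; this count is the class number.
from math import gcd, isqrt

def class_number(D):
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-D // 3) + 1):   # reduced forms have 3a^2 <= |D|
        for b in range(-a + 1, a + 1):       # reduced: -a < b <= a <= c
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or (a == c and b < 0):  # b >= 0 required when a == c
                continue
            if gcd(gcd(a, abs(b)), c) == 1:  # count primitive forms only
                h += 1
    return h

print(class_number(-4), class_number(-23), class_number(-47))  # 1 3 5
```

<p>For instance $h(-23)=3$ comes from the forms $(1,1,6)$, $(2,1,3)$, $(2,-1,3)$.</p>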
|
956,110 | <p>I am struggling with thinking about this. Any help would be great!!</p>
<p>A medical research survey categorizes adults as follows:</p>
<ul>
<li>by gender (male or female)</li>
<li>by age group (age groups are 18-25, 26-35, 36-50, 51+)</li>
<li>by income (less than 30k/year, 30k-60k/year, more than 60k/year)</li>
<li>for women only: by whether they have been pregnant (yes/no)</li>
<li>for men only: by frequency of undergoing prostate exams (frequently, rarely, never).</li>
</ul>
<p>What minimum size of a set of adults will guarantee that there are two people in it with matching characteristics in all categories? You do not need to explain your answer.</p>
| monsterx | 180,720 | <p>By the Pigeonhole Principle it would be one more than the number of categories, C(5,2)*4*3 = 10*4*3 = 120 => 120 +1 = 121.</p>
|
2,609,252 | <p>like the title said i'm looking for the best way for me(a 15 year old) to go about learning calculus, thank you :)</p>
| Botond | 281,471 | <p>3Blue1Brown's Essence of calculus is a good starting point: <a href="https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr" rel="nofollow noreferrer">https://www.youtube.com/playlist?list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr</a><br>
And for a lot of excercises, you can check blackpenredpen: <a href="https://www.youtube.com/user/blackpenredpen" rel="nofollow noreferrer">https://www.youtube.com/user/blackpenredpen</a> </p>
|
<p>Let <span class="math-container">$M_{n\times n}$</span> be the set of invertible matrices with real entries. Find two matrices <span class="math-container">$A,B\in M_{n \times n}$</span> with the property that there does not exist a continuous function</p>
<p><span class="math-container">$$f:[0,1]\to M, \quad f(0)=A, f(1)=B $$</span></p>
<p>The only way I was thinking of was the inverse function, such as <span class="math-container">$f^{-1}(A)=0, \quad f^{-1}(B)=1,$</span>
but this doesn't seem to get me anywhere.</p>
| Community | -1 | <p>Hint.</p>
<p>The existence of the function <span class="math-container">$f$</span> means that there exists a <em><a href="https://en.wikipedia.org/wiki/Path_(topology)" rel="nofollow noreferrer">path</a></em> between the two matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, which implies that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are in the same connected component of <span class="math-container">$M:=M_{n\times n}$</span>.</p>
<p>On the contrary, the continuous map <span class="math-container">$\det:M\to\mathbb{R}$</span> tells you that <span class="math-container">$M$</span> is not connected (since <span class="math-container">$\det(M)$</span> is not).</p>
<p>Now, find two elements of <span class="math-container">$M$</span> that are not in the same component of <span class="math-container">$M$</span>.</p>
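<p>An editor's numerical illustration of the hint for <span class="math-container">$n=2$</span>: along the straight-line path from <span class="math-container">$I$</span> (determinant <span class="math-container">$+1$</span>) to <span class="math-container">$\operatorname{diag}(-1,1)$</span> (determinant <span class="math-container">$-1$</span>), the continuous determinant must vanish somewhere, so the path leaves the invertible matrices:</p>

```python
# Editor's sketch: sample det along the segment (1-s)A + sB and observe
# that it passes through 0 (here exactly at s = 1/2).
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 1]]     # det = +1
B = [[-1, 0], [0, 1]]    # det = -1

def segment(s):          # f(s) = (1 - s) A + s B
    return [[(1 - s) * A[i][j] + s * B[i][j] for j in range(2)]
            for i in range(2)]

dets = [det2(segment(s / 10)) for s in range(11)]
print(dets[0], dets[-1], min(abs(d) for d in dets))  # 1.0 -1.0 0.0
```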
|
4,616,048 | <p>How can I solve this system of coupled differential equations?</p>
<p><span class="math-container">$\frac{d^2\rho}{d\lambda^2}=\frac{5\rho}{(5\rho^2+4t^2)^2}$</span> <span class="math-container">$\frac{d^2t}{d\lambda^2}=\frac{4t}{(5\rho^2+4t^2)^2}$</span></p>
<p>Is it something I could input in the Wolfram calculator?</p>
| user577215664 | 475,762 | <p><span class="math-container">$$\frac{d^2\rho}{d\lambda^2}=\frac{5\rho}{(5\rho^2+4t^2)^2},\tag{1}$$</span>
<span class="math-container">$$\frac{d^2t}{d\lambda^2}=\frac{4t}{(5\rho^2+4t^2)^2}.\tag{2}$$</span>
<span class="math-container">$$2\rho' \frac{d^2\rho}{d\lambda^2}=\frac{10\rho \rho '}{(5\rho^2+4t^2)^2},$$</span>
<span class="math-container">$$2t'\frac{d^2t}{d\lambda^2}=\frac{8t't}{(5\rho^2+4t^2)^2}.$$</span>
Add both DE's and integrate:
<span class="math-container">$$2t'\frac{d^2t}{d\lambda^2}+2\rho' \frac{d^2\rho}{d\lambda^2}=\frac{8t't+10 \rho \rho'}{(5\rho^2+4t^2)^2}$$</span></p>
<p><span class="math-container">$$t'^2+\rho'^2 =-\frac{1}{5\rho^2+4t^2}+C_1$$</span>
<span class="math-container">$$t'^2+\rho'^2 =-\rho\rho''-tt''+C_1$$</span>
<span class="math-container">$$(t't)'+(\rho \rho ')' =C_1$$</span>
Integrate again:
<span class="math-container">$$t't+\rho \rho ' =C_1\lambda +C_2$$</span>
Integrate again:
<span class="math-container">$$t^2+\rho ^2 =C_1\lambda^2 +2C_2\lambda+C_3$$</span>
<span class="math-container">$$t^2+\rho ^2 =C_1\lambda^2 +C\lambda+C_3$$</span>
You can use this result to eliminate <span class="math-container">$t$</span> or <span class="math-container">$\rho$</span> in the original DE. Unfortunately the DE won't be linear and therefore it's hard to solve.</p>
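<p>An editor's aside: the first integral obtained in the derivation above, <span class="math-container">$t'^2+\rho'^2+\frac{1}{5\rho^2+4t^2}=C_1$</span>, can be sanity-checked numerically. The sketch below (initial conditions and step size are arbitrary illustrative choices) integrates the system with a hand-rolled RK4 step and confirms that the quantity stays constant to within the integrator's error:</p>

```python
# Editor's sketch: verify the first integral of the coupled system
#   rho'' = 5 rho / (5 rho^2 + 4 t^2)^2,  t'' = 4 t / (5 rho^2 + 4 t^2)^2
# namely  t'^2 + rho'^2 + 1/(5 rho^2 + 4 t^2) = const  along solutions.

def rhs(state):
    rho, t, drho, dt = state
    denom = (5 * rho * rho + 4 * t * t) ** 2
    return (drho, dt, 5 * rho / denom, 4 * t / denom)

def rk4_step(state, h):
    def nudge(s, k, f):
        return tuple(x + f * y for x, y in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(nudge(state, k1, h / 2))
    k3 = rhs(nudge(state, k2, h / 2))
    k4 = rhs(nudge(state, k3, h))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

def first_integral(state):
    rho, t, drho, dt = state
    return drho ** 2 + dt ** 2 + 1.0 / (5 * rho * rho + 4 * t * t)

state = (1.0, 0.5, 0.1, -0.2)   # (rho, t, rho', t') at lambda = 0
c0 = first_integral(state)
for _ in range(1000):           # integrate out to lambda = 10
    state = rk4_step(state, 0.01)
drift = abs(first_integral(state) - c0)
print(drift < 1e-6)
```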
|
1,948,730 | <blockquote>
<p>For all odd integers $n$, there exists an integer $k$ such that $n=2k+1$.</p>
</blockquote>
<p>I negated using De Morgan's laws. Let $O(n)$ be "$n$ is odd" and $N(n, k)$ "$2k + 1 = n$", then
$$\neg(\forall n \exists k [O(n) \to N(n,k)])\\
\exists n \neg\exists k [O(n) \to N(n,k)]\\
\exists n \forall k \neg [O(n) \to N(n,k)]\\
\exists n \forall k \neg [\neg O(n) \lor N(n,k)]\\
\exists n \forall k [O(n) \land \neg N(n,k)]\\
$$
Therefore the negation is</p>
<blockquote>
<p>There is at least one $n$ that is odd, and for all $k$ such that $n\neq2k+1$</p>
</blockquote>
<p>Is that the correct result?</p>
| Community | -1 | <p>Your sentences do not reflect the logical expressions. You should have started from </p>
<p>$$\forall n: O(n),\exists k: N(n,k),$$</p>
<p>turning to</p>
<p>$$\exists n: O(n),\forall k: \lnot N(n,k).$$</p>
|
725,602 | <p>I am trying to prove the 'second' triangle inequality:
$$||x|-|y|| \leq |x-y|$$</p>
<p>My attempt:
$$----------------$$
Proof:
$|x-y|^2 = (x-y)^2 = x^2 - 2xy + y^2 \geq |x|^2 - 2|x||y| + |y|^2 = (||x|-|y||)^2$</p>
<p>Therefore $\rightarrow |x-y| \geq ||x|-|y||$</p>
<p>$$----------------$$</p>
<p>My questions are: Is this an acceptable proof, and are there alternative proofs that are more efficient?</p>
| DeepSea | 101,504 | <p><strong>Hint:</strong> Use formula: $$F(n) = \frac{a^n - b^n}{\sqrt{5}}$$
With $a = \frac{1 + \sqrt{5}}{2}$, and $b=\frac{1 - \sqrt{5}}{2}$</p>
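<p>An editor's check of the hinted formula (Binet's formula): the closed form matches the iterative Fibonacci numbers once floating-point error is rounded away.</p>

```python
# Editor's sketch: Binet's formula F(n) = (a^n - b^n) / sqrt(5) agrees
# with the iterative Fibonacci sequence for small n.
from math import sqrt

def fib_binet(n):
    a = (1 + sqrt(5)) / 2
    b = (1 - sqrt(5)) / 2
    return round((a ** n - b ** n) / sqrt(5))

def fib_iter(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

print(all(fib_binet(n) == fib_iter(n) for n in range(20)))  # True
```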
|
3,014,085 | <p>I am trying to isolate y in this equation:
<span class="math-container">$$-4/3·\ln(|y-60|)=x+c$$</span></p>
<p>If I use a cas-tool to isolate <span class="math-container">$y$</span>, I get:</p>
<p><span class="math-container">$$60.-(2.71828182846)^{−0.75*x-0.75*c}=y$$</span></p>
<p>If I try isolating <span class="math-container">$y$</span> by hand I get:</p>
<p><a href="https://i.stack.imgur.com/vEPyV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vEPyV.png" alt="enter image description here" /></a></p>
<hr />
<p>These two are not the same, is the cas-tool right or am I right? What are the rules to isolate something when the absolute value is taken of it as in this case.</p>
<hr />
<p>Proof they are not equal: (black is my result, red is cas-tool's result)</p>
<p><a href="https://i.stack.imgur.com/oqD35.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oqD35.png" alt="enter image description here" /></a></p>
| user376343 | 376,343 | <p>From <span class="math-container">$$\arg (z+2) + \arg (z-2) = \arg (z^2-4) = \pi $$</span></p>
<p>we deduce that <span class="math-container">$\;z^2-4\;$</span> is a negative real number. This occurs when</p>
<ol>
<li><p><span class="math-container">$z\in \mathbb{R},\; -2<z<2,$</span> or</p></li>
<li><p><span class="math-container">$z=i\alpha,\; \alpha \in \mathbb{R},$</span> is pure imaginary.</p></li>
</ol>
|
<p>If we start with $n$ elements and at each step split them into $2$ parts at a random place (not necessarily in half), repeating with both sub-parts until only parts of $1$ element are left, in how many different ways can these elements be separated?</p>
| drhab | 75,923 | <p>Let's say there are $x_n$ ways if the number of elements is $n$. Then $x_1=1$ and:$$x_n=\sum_{i=1}^{n-1}x_i\times x_{n-i}$$</p>
<p>where term $x_i\times x_{n-i}$ corresponds with a <em>first</em> split: $1,\dots,i\mid i+1,\dots,n$.</p>
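<p>An editor's sketch computing the recurrence directly: the values $1, 1, 2, 5, 14, 42, \dots$ are the Catalan numbers shifted by one index ($x_n = C_{n-1}$), which is expected for counts of binary splittings.</p>

```python
# Editor's check: evaluate x_n = sum_{i=1}^{n-1} x_i * x_{n-i}, x_1 = 1,
# with memoization; the output is the shifted Catalan sequence.
from functools import lru_cache

@lru_cache(maxsize=None)
def x(n):
    if n == 1:
        return 1
    return sum(x(i) * x(n - i) for i in range(1, n))

print([x(n) for n in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]
```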
|
1,556,747 | <p>$$\text{a)} \ \ \sum_{k=0}^{\infty} \frac{5^{k+1}+(-3)^k}{7^{k+2}}\qquad\qquad\qquad\text{b)} \ \ \sum_{k=1}^{\infty}\log\bigg(\frac{k(k+2)}{(k+1)^2}\bigg)$$</p>
<p>I am trying to determine the convergence values. I tried with partial sums and got stuck...so I am thinking the comparison test...Help</p>
| lab bhattacharjee | 33,337 | <p>For the first one, use summation <a href="https://en.wikipedia.org/wiki/Geometric_progression#Infinite_geometric_series" rel="nofollow">formula</a> of Infinite geometric series</p>
<p>For the second, $$\log\dfrac{k(k+2)}{(k+1)^2}=\log\dfrac k{k+1}-\log\dfrac{k+1}{k+2}=u(k)-u(k+1)$$</p>
<p>where $u(m)=\log\dfrac m{m+1}$</p>
<p>See <a href="https://en.wikipedia.org/wiki/Telescoping_series" rel="nofollow">Telescoping series</a></p>
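<p>An editor's numeric check (not part of the original answer): carrying out the geometric-series formula by hand gives $\frac{5}{49}\cdot\frac{7}{2}+\frac{1}{49}\cdot\frac{7}{10}=\frac{13}{35}$ for the first sum, and the telescoping sum converges to $\log\frac12 = -\log 2$. Both values agree with partial sums:</p>

```python
# Editor's check: compare large partial sums against the closed-form
# values 13/35 (geometric) and -log 2 (telescoping).
from math import log

s1 = sum((5 ** (k + 1) + (-3) ** k) / 7 ** (k + 2) for k in range(200))
s2 = sum(log(k * (k + 2) / (k + 1) ** 2) for k in range(1, 200000))
print(abs(s1 - 13 / 35) < 1e-12, abs(s2 + log(2)) < 1e-4)  # True True
```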
|
365,986 | <p>If $A$ is an $n \times n$ matrix with $\DeclareMathOperator{\rank}{rank}$ $\rank(A) < n$, then I need to show that $\det(A) = 0$.</p>
<p>Now I understand why this is - if $\rank(A) < n$ then when converted to reduced row echelon form, there will be a row/column of zeroes, thus $\det(A) = 0$</p>
<p>However, I have been told to use the fact that the determinant is multilinear and alternating and subsequently deduce that if $\det(A)$ is non-zero, $A$ is invertible. </p>
<p>How do I use the properties of the determinant to prove these claims? </p>
| Marc van Leeuwen | 18,880 | <p>Without bothering too much about the mechanics of finding echelon forms, you may reason as follows. I will suppose you know the determinant is multilinear and alternating <em>in the columns</em>. (You didn't specify rows or columns; in fact the determinant is multilinear and alternating both as function of the rows and as function of the columns, but the two properties are not the same.)</p>
<p>If you consider successive columns, and determine the rank of the matrix up to each one, then there must be some $j$ where adding column number $j$ to the matrix does not increase the rank. Then that column is a linear combination of the previous columns. Now write out that column as that linear combination, apply the operation $\det$ to the matrix, and use its multilinear property with respect to column $j$, giving a linear combination of determinants, in each of which column $j$ is a copy of some previous column $j'$. By the alternating property, all these determinants are $0$, and therefore the original determinant is too.</p>
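<p>An editor's illustration of this argument on a concrete matrix, using the permutation-sum (Leibniz) definition of the determinant: a matrix whose third column is a combination of the first two has determinant $0$.</p>

```python
# Editor's sketch: Leibniz-formula determinant (sign via inversion count);
# a column that is a linear combination of earlier columns forces det = 0.
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):            # count inversions to get the sign
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

A = [[1, 2, 1 * 2 + 2 * 3],
     [4, 5, 4 * 2 + 5 * 3],
     [7, 8, 7 * 2 + 8 * 3]]          # column 3 = 2*col1 + 3*col2
print(det(A))  # 0
```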
|
3,227,788 | <p>Let <span class="math-container">$f: D(0,1)\to \mathbb C$</span> be a holomorphic function. How to show that there exists a sequence <span class="math-container">$\{z_n\}$</span> in <span class="math-container">$D(0,1)$</span> such that <span class="math-container">$|z_n| \to 1$</span> and <span class="math-container">$\exists M>0$</span> such that <span class="math-container">$|f(z_n)|<M,\forall n \ge 1$</span> ? </p>
<p>My try: If not, then <span class="math-container">$\lim_{|z|\to 1} |f(z)|=\infty$</span>. So in particular, <span class="math-container">$f$</span> has finitely many zeroes in <span class="math-container">$D(0,1)$</span>. Also, <span class="math-container">$1/f$</span> is meromorphic in <span class="math-container">$D(0,1)$</span> with <span class="math-container">$\lim _{|z|\to 1}\dfrac {1}{|f(z)|}=0$</span>. I am not sure where to go from here. Please help. </p>
| N. S. | 9,176 | <p><strong>Edit:</strong> You are on the right track/ Start by removing the zeroes. </p>
<p>Let <span class="math-container">$w_1,.., w_m$</span> be the zeroes of <span class="math-container">$f$</span> with multiplicity <span class="math-container">$k_1,..,k_m$</span>. Let <span class="math-container">$P(z):= (z-w_1)^{k_1}....(z-w_m)^{k_m}$</span>. Then <span class="math-container">$g(z):= \frac{f(z)}{(P(z)}$</span> is holomorphic on <span class="math-container">$D(0,1)$</span> and non-vanishing on <span class="math-container">$D(0,1)$</span>.</p>
<p>Then <span class="math-container">$\frac{1}{g(z)}$</span> admits a continuous extension to <span class="math-container">$\{ z : |z| \leq 1 \}$</span>, namely the function which is <span class="math-container">$0$</span> on the boundary.</p>
<p>By compactness, <span class="math-container">$|\frac{1}{g(z)}|$</span> has a local maximum on this domain. Show that this local maximum appears at some point in <span class="math-container">$D(0,1)$</span> and use the maximum modulus principle.</p>
|