| qid | question | author | author_id | answer |
|---|---|---|---|---|
783 | <p>In a category I have two objects $a$ and $b$ and a morphism $m$ from $a$ to $b$ and one $n$ from $b$ to $a$. Is this always an isomorphism? Why is it emphasized that this has to be true, too: $m \circ n = \mathrm{id}_b$ and $n \circ m = \mathrm{id}_a$?</p>
<p>I am looking for an example in which the id-part is not true and therefore $m$ and $n$ are not isomorphisms.</p>
| Eric O. Korman | 9 | <p>If you have no restrictions on $m$ and $n$, then clearly they need not be isomorphisms in general. For instance, take any two groups $G$ and $H$ and let $m: G \to H$, $n: H \to G$ be the trivial (zero) homomorphisms.</p>
<p>Even if you say that n and m are monomorphisms, then it is still not true in general that they are isomorphisms. I believe it is true however if your category is one whose objects are sets with additional structure. See this question: <a href="https://math.stackexchange.com/questions/309/if-a-is-a-subobject-of-b-and-b-a-subobject-of-a-are-they-isomorphic/327#327">If $A$ is a subobject of $B$, and $B$ a subobject of $A$, are they isomorphic?</a>.</p>
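To make the first paragraph concrete, here is a small Python sketch (my own illustration, not from the answer) that models the trivial homomorphisms between $\mathbb{Z}/2$ and $\mathbb{Z}/3$ as plain dictionaries, ignoring the group operations themselves, and checks that the composite is not the identity:

```python
# Model the trivial (zero) homomorphisms between Z/2 and Z/3
# as dictionaries mapping every element to the identity element.
G = [0, 1]        # elements of Z/2
H = [0, 1, 2]     # elements of Z/3

m = {g: 0 for g in G}   # m : G -> H, trivial homomorphism
n = {h: 0 for h in H}   # n : H -> G, trivial homomorphism

# The composite n . m collapses G to a point, so it is not id_G,
# and m, n are not a mutually inverse pair of isomorphisms.
n_after_m = {g: n[m[g]] for g in G}
identity_G = {g: g for g in G}
assert n_after_m != identity_G
```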
|
61,406 | <p>Can I say this is bad performance from the new V10 function <code>SubsetQ</code>?</p>
<p>Here are some tests comparing it to <code>Complement[list, #] === {}</code>:</p>
<pre><code>count1[data_, list_] := Module[{r},
  r = SubsetQ[#, list] & /@ data;
  Counts[r]
]
count2[data_, list_] := Module[{r},
  r = Complement[list, #] === {} & /@ data;
  Counts[r]
]
</code></pre>
<p>Small columns test:</p>
<pre><code>$HistoryLength = 0;
data = RandomInteger[100, {100000, 10}];
list = {4, 3, 2, 1};
count1[data, list] // AbsoluteTiming
count2[data, list] // AbsoluteTiming
</code></pre>
<blockquote>
<p>{2.760775, <|False -> 99995, True -> 5|>}
{0.450933, <|False -> 99995, True -> 5|>}</p>
</blockquote>
<p>Large columns test:</p>
<pre><code>$HistoryLength = 0;
data = RandomInteger[100, {100000, 100}];
list = {4, 3, 2, 1};
count1[data, list] // AbsoluteTiming
count2[data, list] // AbsoluteTiming
</code></pre>
<blockquote>
<p>{3.345720, <|False -> 97745, True -> 2255|>}
{0.910420, <|False -> 97745, True -> 2255|>}</p>
</blockquote>
<p><strong>Update:</strong></p>
<p>Still slow in V10.1</p>
| Taliesin Beynon | 7,140 | <p><code>SubsetQ</code> <em>is</em> implemented at top level using <code>Complement[a, b] === {}</code>. It has some overhead because it has to treat associations specially, plus it has to go through the requisite error-handling rigmarole. But it has the same time complexity in the length of the first argument:</p>
<p><img src="https://i.stack.imgur.com/Bl42M.png" alt="enter image description here"></p>
<p>But this is on the shortlist of functions to reimplement in C when we have time. There are other patients that are more worthy of the "Vitamin C" treatment, however :).</p>
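For readers without Mathematica, here is a rough Python analogue of the equivalence described above (the function names are mine): <code>SubsetQ[a, list]</code> tests whether <code>list</code> is a subset of <code>a</code>, which is exactly what the <code>Complement[list, a] === {}</code> idiom checks.

```python
def subset_q(a, sub):
    # analogue of SubsetQ[a, sub]: is every element of sub contained in a?
    return set(sub) <= set(a)

def subset_via_complement(a, sub):
    # analogue of the Complement[sub, a] === {} idiom from the question
    return set(sub) - set(a) == set()
```

The two agree on every input, mirroring the claim that the top-level implementations coincide.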
|
2,942,244 | <p>I need to find an unmeasurable dense subset of the circle. I think that I have found an unmeasurable set, but I can't show that it is dense. Here is my construction. Take <span class="math-container">$\alpha\in\mathbb{R}\setminus\mathbb{Q}$</span> and consider the irrational rotation of the circle by the angle <span class="math-container">$\alpha$</span>. Take the orbits of all points on the circle under the rotation, then choose exactly one point from each orbit. This is the set <span class="math-container">$X_0$</span>. Then set <span class="math-container">$X_j=X_0+j\alpha$</span>. So for every <span class="math-container">$j$</span> the set <span class="math-container">$X_j$</span> is unmeasurable. I need to find a point of <span class="math-container">$X_j$</span> in any neighbourhood of any point on the circle to show that <span class="math-container">$X_j$</span> is dense, but I don't know how to do it.</p>
| R.C.Cowsik | 293,582 | <p>Take any unmeasurable subset <span class="math-container">$X$</span> and add a countable dense subset <span class="math-container">$Y$</span>, that is, take <span class="math-container">$X \cup Y$</span>. Since <span class="math-container">$Y$</span> is countable it is a null set, so <span class="math-container">$X \cup Y$</span> is measurable if and only if <span class="math-container">$X$</span> is; hence <span class="math-container">$X \cup Y$</span> is unmeasurable, and it is dense because it contains <span class="math-container">$Y$</span>.</p>
|
51,732 | <p>A <em>Perron number</em> is a real algebraic integer $\lambda$ that is larger than the absolute value of any of its Galois conjugates. The Perron-Frobenius theorem says that any
non-negative integer matrix $M$ such that some power of $M$ is strictly positive has a
unique positive eigenvector whose eigenvalue is a Perron number. Doug Lind proved the converse: given a Perron number $\lambda$, there exists such a matrix, perhaps in dimension
much higher than the degree of $\lambda$. Perron numbers come up frequently in many places, especially in dynamical systems.</p>
<p>My question:</p>
<blockquote>
<p>What is the limiting distribution of Galois conjugates of Perron numbers $\lambda$ in
some bounded interval, as the degree goes to infinity?</p>
</blockquote>
<p>I'm particularly interested in looking at the limit as the length of the interval goes to
0. One way to normalize this is to look at the ratio $\lambda^g/\lambda$, as $\lambda^g$
ranges over the Galois conjugates. Let's call these numbers <em>Perron ratios</em>.</p>
<p>Note that for any fixed $C > 1$ and integer $d > 0$, there are only
finitely many Perron numbers $\lambda < C$ of degree $< d$, since there is obviously a bound on the discriminant of the minimal polynomial for $\lambda$, so the question is only interesting when a bound goes to infinity. </p>
<p>In any particular field, the set of algebraic
numbers that are Perron lie in a convex cone in the product of Archimedean places of the field. For any lattice, among lattice points with $x_1 < C$ that are within this cone, the projection along lines through the origin to the plane $x_1 = 1$ tends toward the uniform
distribution, so as $C \rightarrow \infty$, the distribution of Perron
ratios converges to a uniform distribution in the unit disk (with a contribution for each complex place of the field) plus a uniform distribution
in the interval $[-1,1]$ (with a contribution for each real place of the field).</p>
<p>But what happens when $C$ is held bounded and the degree goes to infinity? This question seems
related to the theory of random matrices, but I don't see any direct translation from
things I've heard. Choosing a random Perron number seems very different from choosing
a random nonnegative integer matrix.</p>
<p>I tried some crude experiments, by looking at randomly-chosen polynomials of a fixed degree whose coefficients are integers in some fixed range except for the coefficient of $x^d$
which is $1$, selecting from those the irreducible polynomials whose largest real root is Perron. This is not the same as selecting a random Perron number of the given degree
in an interval. I don't know any reasonable way to do the latter except for small enough $d$ and $C$ that one could presumably find them by exhaustive search.
Anyway, here are some samples from what I actually tried.
First, from among the 16,807 fifth-degree polynomials with coefficients in the range $-3$ to $3$, there are 3,361 that define a Perron number. Here is the plot of the Perron ratios:</p>
<p><a href="http://dl.dropbox.com/u/5390048/PerronPoints5%2C3.jpg" rel="noreferrer">plot of Perron ratios (degree 5, coefficients in the range $-3$ to $3$)</a></p>
<p>Here are the results of a sample of 20,000 degree 21 polynomials with coefficients
between -5 and 5. Of this sample, 5,932 defined Perron numbers:</p>
<p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21.jpg" rel="noreferrer">plot of Perron ratios (degree 21, coefficients in the range $-5$ to $5$)</a></p>
<p>The distribution decidedly does not appear that it will converge toward a uniform distribution on the disk plus a uniform distribution on the interval. Maybe the artificial bounds on the coefficients cause the higher density ring.</p>
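The experiment sketched above can be reconstructed along the following lines (this is my own sketch, not the author's actual code; it skips the irreducibility check and uses a numerical tolerance, so it only approximates the stated selection):

```python
import numpy as np

def perron_ratios(coeffs, tol=1e-9):
    """For a monic polynomial (coefficients listed highest degree first),
    return the ratios root/lambda over the non-dominant roots, provided
    the root of largest modulus is real, positive, and strictly dominant;
    otherwise return None."""
    roots = np.roots(coeffs)
    i = int(np.argmax(np.abs(roots)))
    lam = roots[i]
    if abs(lam.imag) > tol or lam.real <= 0:
        return None                      # dominant root must be real and positive
    others = np.delete(roots, i)
    if others.size and np.max(np.abs(others)) >= lam.real - tol:
        return None                      # dominance must be strict
    return others / lam.real
```

Feeding random monic integer polynomials through this filter and scattering the returned ratios in the complex plane reproduces the kind of pictures shown here.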
<blockquote>
<p>Are there good, natural distributions for selecting random integer polynomials? Is there a
way to do it without unduly prejudicing the distribution of roots?</p>
</blockquote>
<p>To see if it would help separate what's happening,
I tried plotting the Perron ratios restricted to $\lambda$ in subintervals. For
the degree 21 sample, here is the plot of $\lambda$ by rank order:</p>
<p><a href="http://dl.dropbox.com/u/5390048/CDF21.jpg" rel="noreferrer">plot of $\lambda$ by rank order (degree 21 sample)</a></p>
<p>(If you rescale the $x$ axis to range from $0$ to $1$ and interchange $x$ and $y$ axes,
this becomes the plot of the sample cumulative distribution function of $\lambda$.)
Here are the plots of the Perron ratios restricted to the intervals $1.5 < \lambda < 2$
and $3 < \lambda < 4$:</p>
<p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21%281.5%2C2%29.jpg" rel="noreferrer">plot of Perron ratios for $\lambda$ between 1.5 and 2</a></p>
<p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21%283%2C4%29.jpg" rel="noreferrer">plot of Perron ratios for $\lambda$ between 3 and 4</a></p>
<p>The restriction to an interval seems to concentrate the absolute values of Perron ratios even more. The angular distribution looks like it converges to the uniform
distribution on a circle plus point masses at $0$ and $\pi$. </p>
<p>Is there an explanation for the distribution of radii? Any guesses for what it is?</p>
| Community | -1 | <p>(This is more of an extended comment than an answer.)</p>
<p>You speculate whether imposing artificial bounds on the coefficients imposes a bias on the pictures you are producing. There is reason to think that this is possible. Edelman and Kostlan have some nice results on "random" polynomials, where a possible candidate for "random" is given by taking the coefficients $a_n$ of a polynomial of degree $d$ to be normally distributed with variance
$\binom{d}{n}$. In this case, they show that the expected number of real
roots is $\sqrt{d}$ (see *1, *2), in contrast to the result of Kac mentioned in the
comments.</p>
<p>Suppose one takes "random" polynomials of large degree $d$, all of whose
coefficients are integers in the fixed interval $[-m,m]$. One guess is that as
$d \rightarrow \infty$, the distribution of roots of this polynomial approaches
the uniform measure on the unit circle. This may even be relatively easy to prove; I haven't thought much about it beyond some postage-stamp heuristics.
(Certainly the distribution of $|z|$ approaches a point measure at $1$;
this follows (essentially) from Proposition 2.1 of *3.)
Suppose one now restricts to irreducible polynomials which have a unique
largest root of size $|\lambda|$. It might not be too much of a stretch
to imagine that the distribution of the other roots is otherwise unchanged,
and so lie (roughly) on the circle $|z| = 1/\lambda$. I can't tell to what
extent this is in accordance with your diagrams.</p>
<p>It's also not clear to me to what extent your diagrams depend at all on the
Perron property. What happens if one
considers random degree $21$ polynomials with coefficients in $[-5,5]$
and simply normalizes by the absolute value of the largest root - does one obtain
substantially different pictures?</p>
<p>*1: <a href="http://www-math.mit.edu/~edelman/homepage/papers/kac.pdf">http://www-math.mit.edu/~edelman/homepage/papers/kac.pdf</a></p>
<p>*2: <a href="http://www-math.mit.edu/~edelman/homepage/papers/roots.pdf">http://www-math.mit.edu/~edelman/homepage/papers/roots.pdf</a></p>
<p>*3: <a href="http://www.dtc.umn.edu/~odlyzko/doc/arch/polynomial.zeros.pdf">http://www.dtc.umn.edu/~odlyzko/doc/arch/polynomial.zeros.pdf</a>
(warning, PDF file is backwards).</p>
|
4,102,443 | <p>Let <span class="math-container">$a,b \in [0,\infty)$</span> with <span class="math-container">$a \leq b$</span>.</p>
<p><span class="math-container">$D(x,y)=\left| \frac{x}{2+x}-\frac{y}{2+y} \right|$</span>.</p>
<p>There exist constants <span class="math-container">$c_{1}, c_{2} \in [0,\infty)$</span> such that <span class="math-container">$$c_{1}\left| x - y \right| \leq D(x,y) \leq c_{2}\left| x - y \right| \quad \forall x,y\in [a,b]$$</span></p>
<p>Show that <span class="math-container">$([a,b],D)$</span> is a complete metric space.</p>
<p>I am very confused about how to prove that a metric space is complete. There are multiple theorems involving Cauchy sequences and closed subsets. I have seen solutions involving the standard metric but I am not sure how to use it in this proof. Any help would be appreciated.</p>
| paul blart math cop | 571,438 | <p>A group action on an object <span class="math-container">$X$</span> can be thought of as a group homomorphism <span class="math-container">$G \longrightarrow Aut(X)$</span>. This is a special case of a monoid action, which would be a monoid homomorphism <span class="math-container">$M \longrightarrow End(X)$</span>, the set of endomorphisms of <span class="math-container">$X$</span> under composition. It may sometimes be the case that <span class="math-container">$End(X)$</span> carries an additive structure as well. For instance, if <span class="math-container">$X$</span> is an abelian group then <span class="math-container">$End(X)$</span> has the addition <span class="math-container">$(f+g)(x) := f(x) + g(x)$</span>. This makes <span class="math-container">$End(X)$</span> into a ring under composition and this pointwise addition. By analogy to the above definitions of group and ring actions, we can say that a ring <span class="math-container">$R$</span> acts linearly on <span class="math-container">$X$</span> by giving a ring homomorphism <span class="math-container">$R \longrightarrow End(X)$</span>. More generally, this can be done for <span class="math-container">$X$</span> in any abelian (or even additive) category, such as chain complexes or sheaves.</p>
<p>This is also referred to as making <span class="math-container">$X$</span> into an <span class="math-container">$R$</span>-module, when <span class="math-container">$X$</span> is an abelian group. You may have seen <span class="math-container">$R$</span>-module structures defined as a map <span class="math-container">$R \times X \longrightarrow X$</span> satisfying some axioms. These two notions are compatible, in the same way thay a group action on <span class="math-container">$X$</span> can be thought of as a map <span class="math-container">$G \longrightarrow Aut(X)$</span> or as a map <span class="math-container">$G \times X \longrightarrow X$</span> satisfying some axioms. That is, a map <span class="math-container">$f: R \times X \longrightarrow X$</span> defines a map <span class="math-container">$R \longrightarrow End(X)$</span> via <span class="math-container">$r \mapsto (x \mapsto f(r, x))$</span>. On the other hand, a map <span class="math-container">$f: R \longrightarrow End(X)$</span> corresponds to a map <span class="math-container">$(r, x) \mapsto f(r)(x)$</span>.</p>
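As a toy illustration of the homomorphism $R \to End(X)$ point of view (my own example, not from the answer), take $R = \mathbb{Z}$ acting on the abelian group $X = \mathbb{Z}/6$ by multiplication:

```python
N = 6  # X = Z/6 as an additive abelian group

def act(r):
    # the endomorphism of Z/6 given by multiplication by the integer r
    return lambda x: (r * x) % N

# r -> act(r) is a ring homomorphism Z -> End(Z/6):
# it sends sums to pointwise sums and products to composites.
for r in range(-5, 6):
    for s in range(-5, 6):
        for x in range(N):
            assert act(r + s)(x) == (act(r)(x) + act(s)(x)) % N
            assert act(r * s)(x) == act(r)(act(s)(x))
```

Note that distinct ring elements can act identically (here $r$ and $r+6$ do), which is exactly the kernel of the homomorphism into $End(X)$.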
|
556,150 | <blockquote>
<p>Prove that a metric space is totally bounded if and only if every sequence has a Cauchy subsequence. </p>
</blockquote>
<p>I think I proved the Cauchy subsequence part:</p>
<p>Let $a_{0},a_{1}, a_{2}, a_{3}, a_{4},...\in X$ be a sequence.</p>
<p>For each $k$, let $F_k \subseteq X$ be a finite $\frac1k$-net.</p>
<p>Given $I \subseteq \Bbb N_{\ge0}$ and $k>1$ you find an infinite $J \subseteq I$ such that:
$$\exists p\in F_k:\forall n\in J: d(x_n,p)<\frac1k$$</p>
| Moss | 23,468 | <p>You have proved that totally bounded implies every sequence has a Cauchy subsequence, so I will prove the other implication. This is a proof by contrapositive: not totally bounded implies that there is a sequence with no Cauchy subsequence.</p>
<p>Suppose that $X$ is not totally bounded. Then there is an $\epsilon>0$ such that for all finite sets of points $\{x_1,\ldots,x_n\}$
$$X\neq \bigcup_{k=1}^n B(x_k;\epsilon).$$
Now we construct a sequence that has no Cauchy subsequence. Start with a finite collection of points $\{x_1,\ldots,x_n\}$, as above. Then since $X\neq \bigcup_{k=1}^n B(x_k;\epsilon)$, there is a point $x_{n+1}\in X$ such that $x_{n+1}\notin \bigcup_{k=1}^n B(x_k;\epsilon)$. Moreover,
$$X\neq \bigcup_{k=1}^{n+1} B(x_k;\epsilon)$$
because if it were equal, we would have a contradiction to our assumption. Wash, rinse, repeat this process to get a sequence $(x_k)_{k=1}^\infty$.</p>
<p>To check that this has no Cauchy subsequence, notice that for any two terms $x_n$ and $x_m$ in this sequence, if $m>n$ then
$$x_m\notin \bigcup_{k=1}^{m-1} B(x_k;\epsilon).$$
In particular, $x_m\notin B(x_n;\epsilon)$, hence $d(x_m,x_n)\geq \epsilon$. Similarly if $n>m$. This shows that the terms of this sequence are at least $\epsilon$ in distance apart, hence no Cauchy subsequence can exist.</p>
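The "wash, rinse, repeat" step can be mimicked on a finite sample with a greedy sweep (an illustration of the construction, not part of the proof; the function name is mine):

```python
def eps_separated_greedy(points, eps):
    # keep a point only if it is at distance >= eps from all points kept
    # so far, mirroring the choice of x_{n+1} outside the union of eps-balls
    chosen = []
    for p in points:
        if all(abs(p - q) >= eps for q in chosen):
            chosen.append(p)
    return chosen
```

On an unbounded (hence not totally bounded) set the sweep never stalls, which is exactly why the resulting sequence has no Cauchy subsequence.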
|
664,347 | <p>For each of the following values of ($a,b$), find the largest number that is not of the form $ax+by$ with $x\geq 0$ and $y \geq 0$.</p>
<p>$(i) (a,b) = (3,7)$</p>
<p>$(ii) (a,b) = (5,7)$</p>
<p>$(iii) (a,b) = (4,11)$</p>
<p>I know how to compute numbers of these forms (clear), but have no idea how to generate one that is not of that form. Then, of those, which is the largest? More importantly, how do I prove its the largest?</p>
| Tom-Tom | 116,182 | <p>To help you, let us have a look at how it works with the first example.
With $x\geq0$ and $y\geq0$, what are the smallest numbers $3x+7y$ that you can generate?</p>
<ul>
<li>Clearly 0</li>
<li>The next smallest one is 3, using $x=1$ and $y=0$</li>
<li>you can't generate 4 or 5 so the next one is $6=3\times2+7\times0$</li>
<li>the next one is obviously 7</li>
<li>8 is not possible (check it out) so the next one is $9=3\times3+7\times0$</li>
<li>$10=3\times1+7\times1$</li>
<li>11 is not possible (you should now see why)</li>
<li>$12=3\times4+7\times0$, $13=3\times2+7\times1$ and $14=3\times0+7\times2$ are <strong>3</strong> consecutive numbers, therefore every number from this point can be generated (do you see why? and in particular do you understand why it is important to find <strong>3</strong> consecutive numbers one can generate?).</li>
</ul>
<p>The answer for point <em>(i)</em> is thus 11. You should manage with <em>(ii)</em> and <em>(iii)</em> easily.</p>
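This pattern generalizes: for coprime $a,b$ the largest non-representable number is $ab-a-b$ (the Chicken McNugget / Frobenius result for two coprime values). A brute-force check of all three cases (function names are mine, and the code assumes $a,b$ coprime):

```python
def representable(n, a, b):
    # is n = a*x + b*y for some integers x, y >= 0?
    return any((n - a * x) >= 0 and (n - a * x) % b == 0
               for x in range(n // a + 1))

def largest_unrepresentable(a, b):
    # scan up to the a*b - a - b bound from the theorem and take the max
    bound = a * b - a - b
    return max(n for n in range(bound + 1) if not representable(n, a, b))
```

For the three pairs in the question this returns 11, 23, and 29, matching $ab-a-b$ each time.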
|
479,851 | <p>I toss a coin many times, each time noting down the result of the toss. If at any time I have tossed more heads than tails, I stop.
I.e. if I get heads on the first toss I stop.
Or if I toss T-T-H-H-T-H-H I will stop.
If I decide to only toss the coin at most 2n+1 times, what is the probability that I will get more heads than tails before I have to give up?</p>
| user84413 | 84,413 | <p>As in some previous answers, assume that you continue tossing the coin even if you get more heads than tails at some point, so you have a total of $2n+1$ tosses and $2^{2n+1}$ possible outcomes of the tosses.</p>
<p>To count the number of sequences of tosses in which we never get more heads than tails, we can use Andre's reflection principle: </p>
<p>The number of tails $t$ must be larger than the number of heads $h$, so $n+1\le t\le 2n+1$, and the number of toss sequences in which tails never falls behind heads is
given by </p>
<p>$\displaystyle\sum_{t=n+1}^{2n+1} \bigg[\binom{2n+1}{t}-\binom{2n+1}{t+1}\bigg]=$
$\big[\binom{2n+1}{n+1}-\binom{2n+1}{n+2}\big]+\big[\binom{2n+1}{n+2}-\binom{2n+1}{n+3}\big]+\big[\binom{2n+1}{n+3}-\binom{2n+1}{n+4}\big]+\cdots+\big[\binom{2n+1}{2n+1}-\binom{2n+1}{2n+2}\big]$
$=\binom{2n+1}{n+1}$ (the sum telescopes, and $\binom{2n+1}{2n+2}=0$), so the probability of getting more heads than tails at some point in the tosses is equal to </p>
<p>$1-\binom{2n+1}{n+1}/2^{2n+1}$.</p>
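As a sanity check on the count above, one can enumerate all $2^{2n+1}$ toss sequences for small $n$ and compare against $1-\binom{2n+1}{n+1}/2^{2n+1}$ (a verification sketch of mine, not part of the derivation):

```python
from math import comb
from itertools import product

def p_formula(n):
    return 1 - comb(2 * n + 1, n + 1) / 2 ** (2 * n + 1)

def p_bruteforce(n):
    # count sequences whose running heads-minus-tails count ever goes positive
    hits = 0
    for seq in product((1, -1), repeat=2 * n + 1):
        s = 0
        for step in seq:
            s += step
            if s > 0:
                hits += 1
                break
    return hits / 2 ** (2 * n + 1)
```

For $n=1$ both give $5/8$, and the two agree for every small $n$ one cares to test.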
|
22,415 | <p>Very often here, a post that does not show any work gets downvoted to hell, yet <a href="https://math.stackexchange.com/questions/1611839/how-to-solve-problem-2014-iran-tst">this question</a> is getting upvoted +5 without having shown any work.</p>
<p>I have seen other people's posts with more work shown than the mentioned question get downvoted severely; why the double standard?</p>
| Jyrki Lahtonen | 11,619 | <p>I have other reasons not to upvote that question, but I would not downvote or vote to close it either.</p>
<p>The official reason for closing many a question is "off-topic -> missing context". Some users mistakenly equate "missing context" with "missing work/effort shown".</p>
<p>The linked question gives context on the first line - it is from an Iranian math contest. I didn't read further, but if the contest question is worth its salt, it follows that 90%+ of our users will be clueless about how to solve it. Therefore it is pointless to demand that the asker would show their own work. This is in sharp contrast to questions about calculus/elementary number theory (or below), when 90%+ of our users can solve the question without breaking a sweat. In those cases the demand for other kind of context is essential for the purposes of gauging what kind of an answer would be helpful - and also to enforce the community norm against outsourced homework assignments.</p>
<p>[taking off the moderator hat]</p>
<ol>
<li>I am somewhat in favor of various subcommunities, say, those forming around selected tags, within Math.SE developing their own norms. Enforcing such norms will mostly be up to the subcommunities themselves. It is good to have some common standards (enforced for example via our common review queues), but IMO the keen followers of a tag are best placed to judge many cases. A good example of such a subcommunity is the one built around those tough definite integrals <a href="https://math.meta.stackexchange.com/q/22330/11619">discussed recently</a>.</li>
<li>I am biased in the sense that IMO the higher level questions should be given some slack (in terms of how much effort needs to be shown). Partly because such askers may more often be in self study mode, and usually already know the basics anyway. Against that is (as pointed out to me by a fellow moderator, I think it was Arthur, but I'm not 100%) that such users really should know better than to copy/paste a homework problem - possible foreign language obstacles notwithstanding. So, let's be reasonable :-)</li>
<li>It makes me angry, when a user who has earned their rep doing trigonometry and (pre)calculus suddenly feels qualified to judge that this question on, say, elliptic curves, "does not show any effort". IMO <a href="https://math.meta.stackexchange.com/a/10962/11619">ideally</a> anyone casting a "no effort" -close vote should be able to solve the problem themself. I am aware that the policy I suggested may place too high a burden to the first close-voter. That's where that <em>ideally</em> came from.</li>
<li>This would lead to a certain kind of expertocracy. Call me an elitist pig, if you want to.</li>
<li>I do practice this myself as much as I can. For example, I will skip all the questions in review queues when I don't feel qualified to judge the merits of a post for the above reasons. This applies to questions about for example stochastic processes, set theory, logic, functional analysis,..., you name it (after taking a peek at my profile). Of course, I could just copy, say, Did's close vote about a post in probability theory, but such a vote would not carry the weight of my INFORMED opinion, so it is surely best that I abstain from voting on such a question. Now that my votes are immediately binding this point has additional weight.</li>
</ol>
|
22,415 | <p>Very often here, a post that does not show any work gets downvoted to hell, yet <a href="https://math.stackexchange.com/questions/1611839/how-to-solve-problem-2014-iran-tst">this question</a> is getting upvoted +5 without having shown any work.</p>
<p>I have seen other people's posts with more work shown than the mentioned question get downvoted severely; why the double standard?</p>
| zyx | 14,120 | <p>Whatever its advantages or disadvantages in other categories, the "work and effort" question closure movement never made any sense for postings in the (contest-math) tag. Where there are downvotes or close votes on tagged contest questions it is mainly the result of believers in the work-effort-context philosophy voting to enforce that philosophy where it does not fit. If you see an <em>absence</em> of down/close voting on a bare contest problem, the system is working properly. </p>
<p>Some of the reasons the closures never made sense on contest problems:</p>
<ul>
<li><p>There is no connection between contest problems, and the homework or (supposedly) "low quality" postings that were given as <em>the</em> justification for closing questions. </p></li>
<li><p>The rate of appearance of contest problems is modest, and the argument that the site will be flooded if those posts are not limited does not apply.</p></li>
<li><p>It has never been the tradition in the contest problem community to provide more than the question, and a source for the problem/solution where known. The latter are the only form of <em>context</em> that is considered relevant, and are not always available. One effect of posting without that context is that people who recognize the source can add it.</p></li>
<li><p>Contest problems help to draw and retain high ability users who can also participate in other areas.</p></li>
</ul>
<p>Compared to other sites, MSE has a large number of problem specialists, and it is of interest to give them free rein to share problems. If it is not possible to post bare contest problems without hassle then the (contest-math) tag should be split off as its own site where problem solvers can operate freely by the norms that prevail in that community.</p>
|
1,252,857 | <p>Let $f\colon \mathbb R \to \mathbb R$ be defined by $f(x)= 5x^3+3$. Is it onto?</p>
<p>According to me, if $y=5x^3+3$, then $x = \sqrt[3]{(y-3)/5}$ is not an element of $\mathbb R$ for all $y \in (-\infty,3)$, so all numbers in the codomain $(-\infty,3)$ won't have pre-images.</p>
<p>But many say that $5x^3+3=y$, being an odd-degree equation, will have at least one real root. Is it onto?</p>
| Bat Dejean | 196,331 | <p>$x=\sqrt[3]{\frac{y-3}5}$ is an inverse as you claimed; keep in mind that odd radicals ($\sqrt[3]{}$, $\sqrt[5]{}$, etc.) are defined for all real numbers, whereas even ones are not (unless we introduce extensions such as $\mathbb{C}$.) Therefore all numbers in the codomain $\mathbb{R}$ have preimages given by the rule you stated, not just the ones in $\left[3,\infty\right)$.</p>
<p>Perhaps this example will help clarify: consider the equation $x^3=k$. For positive $k$, this has a solution at $\sqrt[3]{k}$. For negative $k$, we have $x^3=k\leftrightarrow-x^3=-k\leftrightarrow\left(-x\right)^3=-k$ (as $\left(-1\right)^3=-1$.) $-k$ is positive, so some value of $-x$ satisfies this equation as just demonstrated; just take the negative of this to get $x$. Thus, for all nonzero $k$, a real solution to $x^3=k$ exists. The case $k=0$ is easy: let $x=0$.</p>
<p>After showing that $x$ is unique for all $k$, $x$ is the cube root of $k$ by <em>definition</em>, so all real numbers have a real cube root.</p>
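A numerical aside of my own (with the standard caveat that Python's `**` operator with a fractional exponent returns a complex value for a negative base, so the real cube root needs a sign split): the inverse rule works on all of $\mathbb{R}$.

```python
def f(x):
    return 5 * x ** 3 + 3

def real_cbrt(t):
    # real cube root, defined for every real t
    return -((-t) ** (1 / 3)) if t < 0 else t ** (1 / 3)

def f_inverse(y):
    return real_cbrt((y - 3) / 5)
```

Composing `f` with `f_inverse` returns the input (up to floating-point error) for $y$ both above and below 3, illustrating surjectivity.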
|
2,751,300 | <blockquote>
<p>Figure out with distinct real numbers the system of equations.</p>
<p><span class="math-container">$$\frac{xy}{x-y} = \frac{1}{30}$$</span>
<span class="math-container">$$\frac{x^2y^2}{x^2+y^2} = \frac{1}{2018}$$</span></p>
</blockquote>
<p>I multiplied both sides of the first equation by $x-y$ and squared both sides, but then I got stuck.</p>
<p>Help me...</p>
| Dr. Sonnhard Graubner | 175,066 | <p>From the first equation we get:
$$30xy=x-y$$ or $$y(30x+1)=x$$ so $$y=\frac{x}{30x+1}$$ substituting this in the second equation we get
$$\frac{1}{2018}\,\frac{(43x+1)(13x-1)}{450x^{2}+30x+1}=0$$ Can you finish?</p>
|
2,751,300 | <blockquote>
<p>Figure out with distinct real numbers the system of equations.</p>
<p><span class="math-container">$$\frac{xy}{x-y} = \frac{1}{30}$$</span>
<span class="math-container">$$\frac{x^2y^2}{x^2+y^2} = \frac{1}{2018}$$</span></p>
</blockquote>
<p>I multiplied x-y both side on the first equation and square on both side, and I stucked.</p>
<p>Help me...</p>
| Cesareo | 397,348 | <p>$$
\left\{
\begin{array}{rcl}
\frac{xy}{x-y} & = & \frac{1}{30}\\
\frac{x^2y^2}{x^2+y^2} & = & \frac{1}{2018}
\end{array}
\right.
\Rightarrow
\left\{
\begin{array}{rcl}
900 x^2y^2 & = & x^2+y^2-2 xy\\
2018x^2y^2 & = & x^2+y^2
\end{array}
\right.
\Rightarrow x y = \frac{1}{559}
$$</p>
<p>etc.</p>
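One consistent solution is $x=\tfrac1{13}$, $y=\tfrac1{43}$ (note $13\cdot 43=559$, matching $xy=\tfrac1{559}$ above). An exact-arithmetic check of this candidate (a verification sketch of mine, not part of the derivation):

```python
from fractions import Fraction

# candidate solution consistent with xy = 1/559
x, y = Fraction(1, 13), Fraction(1, 43)

assert x * y == Fraction(1, 559)
assert x * y / (x - y) == Fraction(1, 30)            # first equation
assert x**2 * y**2 / (x**2 + y**2) == Fraction(1, 2018)  # second equation
```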
|
18,136 | <p>I introduced the hypercube (to undergraduate students in the U.S.) in the
context of generalizations of the Platonic solids, explained its
structure, showed it rotating.
I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who independently discovered the <span class="math-container">$6$</span> regular
polytopes in <span class="math-container">$\mathbb{R}^4$</span> (after Schläfli).
I sense they largely did not grasp what the hypercube is, let alone
the other regular polytopes.</p>
<p>I'd appreciate hearing of techniques for getting students to
"grok" the fourth dimension.</p>
| Owen Reynolds | 13,766 | <p>That reminds me of my first college programming course where they drew a square picture of a 2D array, a cube for a 3D array, and then said 4D arrays were very hard to understand. But I'd already made 4D arrays that were fine, since they weren't representing points in 4d space. I'd played a computer dungeon game where you had continent, province, dungeon, and floor. 4D. They're not orthogonal to each other, exactly, but the general concept of N dimensions is simple enough, that way.</p>
<p>A questionnaire on a bad dating site could be: True/False, you enjoy: Hiking, Cooking, Dancing, Travel. That has the properties of a 4D cube, right? Each set of answers is like a corner, with Hamming distance of at most 4. Adding more questions increases the dimension. The number of corners and edges blows up, but conceptually, a dozen dimensions is simple.</p>
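The questionnaire picture can be made literal: with $d$ yes/no questions, the answer vectors are the $2^d$ vertices of a $d$-cube, and two profiles differing in exactly one answer span an edge. A small sketch (names are mine):

```python
from itertools import product

def hypercube(d):
    # vertices: all 0/1 vectors of length d; edges: pairs at Hamming distance 1
    verts = list(product((0, 1), repeat=d))
    edges = [(u, v) for i, u in enumerate(verts)
             for v in verts[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return verts, edges
```

For $d=4$ this produces the 16 vertices and 32 edges of the hypercube, and the count $d \cdot 2^{d-1}$ of edges grows just as the answer describes.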
|
4,038,685 | <p>Suppose that <span class="math-container">$u(x)$</span> is continuous and satisfies the integral equation</p>
<p><span class="math-container">\begin{equation}
u(x) = \int_0^x \sin(u(t))u(t)^p \, dt
\end{equation}</span></p>
<p>on the interval <span class="math-container">$0 \leq x \leq 1$</span>. Show that <span class="math-container">$u(x) = 0$</span> on this interval if <span class="math-container">$p \geq 0$</span>.</p>
<p>This is what I have:</p>
<p>Since <span class="math-container">$\sin(u(t))u(t)^p$</span> is continuous, it follows from the integral definition that <span class="math-container">$u(x)$</span> is differentiable. Let us differentiate both sides of equation above with respect to <span class="math-container">$x$</span>. This yields:</p>
<p><span class="math-container">\begin{equation}
u'(x) = \sin(u(x))u(x)^p
\end{equation}</span></p>
<p>This ODE is separable and becomes:</p>
<p><span class="math-container">\begin{equation}
\frac{1}{\sin(u(x))u(x)^p}du(x) = dx
\end{equation}</span></p>
<p>However, this doesn't seem easily solvable so I'm not sure how to show that <span class="math-container">$u(x) = 0$</span> from this.</p>
| doobdood | 800,483 | <p>WARNING: As pointed out by Matthew in the comments, this isn't as sound as I wanted it to be and has a few holes - namely continuity for <span class="math-container">$p<1$</span> and the interval for which uniqueness is guaranteed. Check out his answer, if he posts one!</p>
<p>I'm going to write the differential equation
<span class="math-container">$$u'=\sin(u)u^p$$</span>
with the initial value condition <span class="math-container">$u(0)=0$</span>. Notice that <span class="math-container">$\frac{\partial}{\partial u}\left(\sin(u)u^p\right)$</span> and <span class="math-container">$\sin(u)u^p$</span> are both continuous on the rectangle <span class="math-container">$[0,1]\times[a,b]$</span> for any arbitrary <span class="math-container">$a,b$</span>. Thus, we can guarantee the uniqueness and existence of our solutions.</p>
<p><span class="math-container">$u(x)=0$</span> is a solution to the differential equation <span class="math-container">$u'=\sin(u)u^p$</span>. So we can argue that if <span class="math-container">$u(x)=\int_0^x\sin(u(t))u(t)^pdt$</span>, it must satisfy the differential equation <span class="math-container">$u'=\sin(u)u^p$</span> with the initial condition <span class="math-container">$u(0)=0$</span>. Therefore, <span class="math-container">$u(x)=0$</span>.</p>
<p>(I hope this is correct; it's been a while since I touched a differential equation)</p>
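<p>As a numerical sanity check (not part of the proof), one can integrate the ODE with a crude forward-Euler scheme; starting exactly at <span class="math-container">$u(0)=0$</span> the iterates never leave zero. A minimal Python sketch, with <span class="math-container">$p=2$</span> chosen arbitrarily:</p>

```python
import math

def euler_solve(p, u0, h=1e-3, steps=1000):
    """Forward-Euler integration of u' = sin(u) * u**p (a crude sketch, not a proof)."""
    u = u0
    for _ in range(steps):
        u += h * math.sin(u) * u**p
    return u

# Starting exactly at u(0) = 0 the iterates never move, consistent with the
# uniqueness argument above; p = 2 is an arbitrary illustrative choice.
print(euler_solve(p=2, u0=0.0))  # 0.0
```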
|
1,328,082 | <p>I've been trying to prove the following inequality, but until now I've had problems coming up with a solution:</p>
<p>$$
2^{mn} \ge m^n
$$</p>
<p>$m$ and $n$ can assume any natural number.</p>
<p>I wasn't able to find any counterexample that would invalidate this inequality, so I am assuming that this statement is generally true, but of course this still has to be proven.</p>
| gt6989b | 16,192 | <p><strong>Hint</strong> Suffices to show $2^m \ge m$ for all natural $m$. How about using Mathematical induction?</p>
|
1,328,082 | <p>I've been trying to prove the following inequality, but until now I've had problems coming up with a solution:</p>
<p>$$
2^{mn} \ge m^n
$$</p>
<p>$m$ and $n$ can assume any natural number.</p>
<p>I wasn't able to find any counterexample that would invalidate this inequality, so I am assuming that this statement is generally true, but of course this still has to be proven.</p>
| Guest | 102,415 | <p>This is true for $m,n\in\mathbb{Z}_{\geq1}$.</p>
<p>Since $2^{mn}=(2^m)^n$ and $x^n$ is an increasing function on $x>0$, all that has to be shown is $2^m\geq m$ when $m\geq1$. This can be done via induction.</p>
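<p>A quick brute-force check over small naturals (illustrative only; the induction argument covers all cases):</p>

```python
# Brute-force check of 2**(m*n) >= m**n for small natural m and n
# (illustrative only; the induction argument above covers all cases).
def holds(m, n):
    return 2**(m * n) >= m**n

assert all(holds(m, n) for m in range(1, 30) for n in range(1, 30))
print("verified for 1 <= m, n < 30")
```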
|
2,076,831 | <blockquote>
<p>Is the following set path-connected?</p>
<p><span class="math-container">$A=\{(x,y):y=x\sin \frac{1}{x},x>0\}$</span></p>
</blockquote>
<p>I am unable to understand whether I should prove it or disprove it.</p>
<p>Will someone please give me some hints.</p>
| Fabio Somenzi | 123,852 | <p>The product of two integers is the best-known candidate for one-way function. It's fast to compute the product of two integers. However, it is not known whether integer factorization can be solved in polynomial time (even though primality testing is in P).</p>
<p>In complexity theory it is common to call <em>easy</em> those computations that can be carried out by a deterministic algorithm in time bounded by a polynomial in the size of the input. Here, the size of the input is the sum of the numbers of digits of the two numbers to multiply, or the number of digits of the number to be factored.</p>
<p>In practice, an algorithm whose runtime grows with the cube of the size of the input may already be impractical, whereas an algorithm whose worst-case run time is exponential in the size of the input may find extensive application. </p>
<p>In discussing one-way functions, however, one usually sticks to the simple criterion that identifies easy with polynomial-time. There is one additional detail, though: computing the input from the output of a one-way function should be hard on average---not just in the worst case.</p>
<p>If we knew a polynomial-time algorithm that returns the factorization of an integer with some constant positive probability, then we would rule out integer multiplication as one-way function.</p>
<p>Another well-known possibly-one-way function is modular exponentiation, whose inverse, the discrete logarithm, computes $x$ such that $b^x \equiv y \pmod n$, given $y$.</p>
<p>The fact that $x^x$, as a function from $\mathbb{Z}^+$ to $\mathbb{Z}^+$, is monotonically increasing makes finding $x$ from $x^x$ relatively easy.</p>
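<p>To illustrate the asymmetry, here is a toy Python sketch (the parameters are made-up small values, nothing of cryptographic size): the forward direction uses the fast built-in modular exponentiation, while the only inverse shown is a brute-force scan over exponents, which takes time exponential in the bit-length of the modulus.</p>

```python
# Toy illustration: modular exponentiation forward, brute force backward.
def discrete_log_bruteforce(b, y, n):
    """Find x with b**x % n == y by trying every exponent (slow on purpose)."""
    t = 1
    for x in range(n):
        if t == y:
            return x
        t = (t * b) % n
    return None

b, n = 5, 1019          # small toy values; real systems use huge moduli
y = pow(b, 347, n)      # forward direction: fast built-in modular exponentiation
x = discrete_log_bruteforce(b, y, n)
assert pow(b, x, n) == y
print("recovered exponent:", x)
```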
|
2,076,831 | <blockquote>
<p>Is the following set path-connected?</p>
<p><span class="math-container">$A=\{(x,y):y=x\sin \frac{1}{x},x>0\}$</span></p>
</blockquote>
<p>I am unable to understand whether I should prove it or disprove it.</p>
<p>Will someone please give me some hints.</p>
| ameed | 119,300 | <p>"Easy", roughly speaking, means a computer can do it efficiently, and "hard" means a computer can't do it efficiently. Usually the threshold for "efficiently" is that the problem is solvable in polynomial time; <em>i.e.</em> there exists some algorithm that, given an input of size $n$, solves the problem in $O(n^p)$ steps for some $p$. (Essentially, that means for all $n$, the algorithm takes up to $kn^p$ steps for some $k$.)</p>
<p>For instance, as Fabio explained, we think integer multiplication is a one-way function. It's easy to multiply two numbers (the grade-school algorithm is $O(n^2)$ for an $n$-digit number), but hard to find the factors of an arbitrary integer.</p>
<p><em>Edit:</em> As @KCd points out, we actually don't know if multiplication is a one-way function. It could be, and we don't know an easy way to reverse it, but we haven't proved that one doesn't exist. In fact, we haven't proved that one-way functions exist at all. We just have things that we think are one-way functions, and so we use them as such. It might seem scary that all of cryptography rests on such a shaky mathematical foundation, but it's the best we have at this point.</p>
<p>As Fabio points out, $f : \mathbb{Z}^+ \to \mathbb{Z}^+, x \mapsto x^x$ is easy enough to invert because it is monotonically increasing; binary search could be an approach, for instance, plus some initial computations to set an upper bound for the search. However, taken as a function of the real numbers, $f$ actually doesn't have an inverse, since $(1/4)^{1/4} = (1/2)^{1/2}$.</p>
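<p>The binary-search idea can be sketched as follows (a hypothetical helper, assuming $y$ really is of the form $x^x$ for a positive integer $x$):</p>

```python
def invert_xx(y):
    """Given y = x**x for some positive integer x, recover x by binary search.
    Works because x**x is strictly increasing for integer x >= 1."""
    lo, hi = 1, 2
    while hi**hi < y:          # exponential search for an upper bound
        hi *= 2
    while lo < hi:             # standard binary search on the monotone map
        mid = (lo + hi) // 2
        if mid**mid < y:
            lo = mid + 1
        else:
            hi = mid
    return lo

assert invert_xx(27**27) == 27
print(invert_xx(10**10))  # 10
```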
|
14,746 | <p>Is there a case to be made that the topic of line integrals should only involve vector fields?</p>
<p>My colleagues and our textbook take the position that line integrals should only be taught from a vector field perspective (specifically for calculating "work"). [In fact, our textbook <em>defines</em> a line integral as <span class="math-container">$\int _C \mathbf{F} \cdot d\mathbf{r}$</span>, where <span class="math-container">$\mathbf{F}$</span> is a vector field and <span class="math-container">$C$</span> is some parametrized curve.]</p>
<p>I think it makes more pedagogical sense to introduce line integrals as a way to generalize what students should have just done in their integral calculus class: integrate a function along some 1-dimensional direction. Now, that "direction" can be a path in 2-space or 3-space, so we can see an area as the result of the line integral. Then, after introducing vector fields, we can consider other, meaningful things to integrate along a path, such as a dot product of the field with the path.</p>
<p>My motivation here is that I would like new calculus topics to be easily connected to old topics, if possible. If we jump right into calculating work without any tie back to the "calculate the area" problem students are used to, I fear they may come away thinking that line integrals are just these weird things with their own rules.</p>
<p>In short: Is there a prevailing setting for introducing line integrals? If so, has there been a movement to minimize the teaching of line integrals over scalar fields, focusing primarily on work calculations?</p>
| guest | 10,126 | <p>Caveat: I'm just a random Internet poster, not an instructor, so take this with a grain of salt. And get some advice from blooded veterans.</p>
<p>A. It's too much. 1 and 2 are plenty for 12 minutes.</p>
<p>B. Write out the entire 50 minute lecture, practice it at once, and write up a synthesis of the last 35 minutes. I suggest something like topic, time, key points, perhaps purpose (you can draw a 4-column table and fill it out in 3 minutes). Make it simple enough so that you can actually explain it in 2 minutes. Test yourself to see that you can (i.e., that the summary is simple enough).</p>
<p>C. Your beginning should have some intro that is a little more motivational, less dry (1 is too theoretical). Maybe something like "sines and cosines are complicated and polynomials are easy so we like to change them into something easier to work with" Or "you will need this in physics" or whatever the practical rationale is for why this is in the curriculum (not too real analysis-y, please). [I do quite like 2, though.]</p>
<p>D. Remember that your target audience at a juco is not math superstars. They are a lower skill set than you or even than an AP calculus class. Good people who want to get through this to support their chem/physics class or nursing degree or what have you. They are looking to progress and get jobs or to transfer to lower level state schools (and then get jobs). Have some sympathy for this and for them (without being obvious or patronizing about it either). </p>
<p>E. I suspect the interview committee wants to see that you can manage time, organize your thoughts, are practiced, command the room, have some energy, etc. And can get through the topic without getting too tied up into every nuance. I think you have plenty of math chops and that will not be their main worry (that you know the topic well). Sure review the standard lesson and be absolutely up to speed on it (especially if they probe...but if they don't, don't feel a need to flaunt.). But the objective is probably 20% math skill (and mostly about being above a skill threshold rather than how high above it). 80% is instructional ability...which is very, very strongly correlated to planning the lecture and practicing it at least 3 times. [Don't bother with that level of time investment when doing the job, except for first lecture, but definitely for the interview.]</p>
<p>F. Do a little reconnaissance and figure out what text they use. See how it addresses this topic and work the homework problems in that text.</p>
<p>G. Really this whole topic is a little bit of a pain for the student and not the most important material. Maybe even why they picked it. Show you can get the kids through the damned thing.</p>
|
14,746 | <p>Is there a case to be made that the topic of line integrals should only involve vector fields?</p>
<p>My colleagues and our textbook take the position that line integrals should only be taught from a vector field perspective (specifically for calculating "work"). [In fact, our textbook <em>defines</em> a line integral as <span class="math-container">$\int _C \mathbf{F} \cdot d\mathbf{r}$</span>, where <span class="math-container">$\mathbf{F}$</span> is a vector field and <span class="math-container">$C$</span> is some parametrized curve.]</p>
<p>I think it makes more pedagogical sense to introduce line integrals as a way to generalize what students should have just done in their integral calculus class: integrate a function along some 1-dimensional direction. Now, that "direction" can be a path in 2-space or 3-space, so we can see an area as the result of the line integral. Then, after introducing vector fields, we can consider other, meaningful things to integrate along a path, such as a dot product of the field with the path.</p>
<p>My motivation here is that I would like new calculus topics to be easily connected to old topics, if possible. If we jump right into calculating work without any tie back to the "calculate the area" problem students are used to, I fear they may come away thinking that line integrals are just these weird things with their own rules.</p>
<p>In short: Is there a prevailing setting for introducing line integrals? If so, has there been a movement to minimize the teaching of line integrals over scalar fields, focusing primarily on work calculations?</p>
| WeCanLearnAnything | 7,151 | <p>I'd take an entirely different approach, though the faculty that is assessing you may not view it favorably.</p>
<p>Mathematicians tend to view the logic of math as intrinsically beautiful.</p>
<p>The vast majority of people do not. The vast majority considers math to be hoop jumping and weirdly arbitrary rules that must be followed to get check marks. They have no idea why anyone would care about formulae.</p>
<p>Simply telling students "Taylor Series are important!" does not motivate them. Glossing over it or lecturing them for a few minutes on it will also fail to motivate them.</p>
<p>To begin addressing this, I would use intellectual need to motivate the students.</p>
<p>For example, in lower levels of math, one might teach fractions as part of a whole or something like that, or sections of circles, or naming amounts of food. Or, you could make them feel intellectual need by having them figure out how they'd share 5 pieces of licorice among 3 kids and name the amount each kid got.</p>
<p>One could teach exponents as repeated multiplication then do a bunch of notational drills. Or, you could make students feel the need for exponents by teaching them about the spread of infectious diseases. "If 3 people are infected initially and each sick person infects one more person each week, what will happen in 6 months?" Then, you make a table, and let them struggle for a while to figure out that in week, say, 10, they calculate the number of infections by going <span class="math-container">$3\times2\times2\times2\times2\times2\times2\times2\times2\times2\times2$</span>, which is just a pain to read and write.</p>
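<p>The repeated product in week 10 is exactly what exponent notation compresses; a tiny sketch of the toy model above (assuming, as stated, that the count doubles each week):</p>

```python
# Toy model from the paragraph above: 3 people infected initially and the
# count doubles each week, so week w has 3 * 2**w infections.
infected = 3
for week in range(1, 11):
    infected *= 2
print(infected)  # 3 * 2**10 = 3072
```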
<p>In the eyes of most mathematicians, the average person, including the average college student, has <em>shockingly</em> little interest in the logic of math; they just want to get answers and check marks. So, what will you do to make them feel the <em>need</em> for a Taylor Series?</p>
<p>Check out the work on <a href="https://www.math.ucsd.edu/~jrabin/publications/ProblemFreeActivity.pdf" rel="nofollow noreferrer">"Intellectual Need" in math education</a>.</p>
<p>The risk: The "<a href="https://en.wikipedia.org/wiki/Curse_of_knowledge" rel="nofollow noreferrer">Curse of Knowledge</a>" unfortunately, implies a very high probability that the people assessing you assume that students will intuitively understand that Taylor Series are interesting and important, and thus just need to be told so. Experts, very frequently, are shocked to find out that basic concepts, such as why anyone cares about Taylor Series, completely fly over the head of students who ace quizzes and tests on Taylor Series.</p>
|
2,585,828 | <p>I saw in an article that for every real number, there exists a Cauchy sequence of rational numbers that converges to that real number. This was stated without proof, so I'm guessing it is a well-known theorem in analysis, but I have never seen this proof. So could someone give a proof (preferably with a source) for this fact? It can use other theorems from analysis, as long as they aren't too obscure.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Note that the set of real numbers is a complete set that is every Cauchy sequence of real numbers is convergent also every convergent sequence is a Cauchy sequence. Therefore once you have a rational sequence converging to a real number the same sequence is a rational Cauchy sequence converging to the same real number. </p>
|
22,269 | <p>How to verify, which version of MathJax is used here? It seems that v. 2.5 is not implemented yet. It has very useful features, as <code>\delimiterfactor</code>, which is needed here: <a href="https://tex.stackexchange.com/questions/284809/getting-large-braces-in-mathjax">https://tex.stackexchange.com/questions/284809/getting-large-braces-in-mathjax</a></p>
| Community | -1 | <ol>
<li>Right-click any formula, like this one: $\int_0^1 e^x\,dx$</li>
<li>Choose "About MathJax"</li>
<li>You will see the version. Currently MathJax v2.6.0-beta.2</li>
</ol>
|
3,138,450 | <p>Let <span class="math-container">$A \subseteq \mathbb{R}$</span> be bounded above and let <span class="math-container">$c \in \mathbb{R}$</span>. Define the set <span class="math-container">$c + A = \{c + a : a \in A\} $</span></p>
<p>Now, since <span class="math-container">$a \leq \sup A , \forall a\in A$</span>, we have <span class="math-container">$a + c \leq \sup A + c $</span>. So <span class="math-container">$c+A$</span> has an upper bound. Now the claim is that <span class="math-container">$\sup(c+A) = c + \sup A.$</span></p>
<p>So now I have to prove that <span class="math-container">$\forall \varepsilon> 0, c+a > c + \sup A - \varepsilon.$</span></p>
<p>How can I proceed? Thanks.</p>
| Theo Bendit | 248,286 | <p>You have shown that <span class="math-container">$c + \sup A$</span> is an upper bound to <span class="math-container">$c + A$</span>, and hence <span class="math-container">$\sup(c + A) \le c + \sup A$</span>. Replacing <span class="math-container">$A$</span> with <span class="math-container">$c + A$</span> and <span class="math-container">$c$</span> with <span class="math-container">$-c$</span>, we obtain
<span class="math-container">$$\sup(-c + (c + A)) \le -c + \sup(c + A) \le -c + c + \sup A = \sup A.$$</span>
Now, note that <span class="math-container">$A \subseteq -c + (c + A)$</span>, as if <span class="math-container">$x \in A$</span>, then <span class="math-container">$c + x \in c + A$</span>, and so <span class="math-container">$x = -c + (c + x) \in -c + (c + A)$</span>. Therefore,
<span class="math-container">$$\sup A \le \sup(-c + (c + A)).$$</span>
Putting this together with the above inequalities, we see that all the inequalities must be equality, by the antisymmetry of <span class="math-container">$\le$</span>. Hence,
<span class="math-container">$$-c + \sup(c + A) = \sup A \implies \sup(c + A) = c + \sup A.$$</span></p>
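<p>For finite sets, where the supremum is just the maximum, the identity can be spot-checked numerically (illustrative only; it is not a proof for general bounded sets):</p>

```python
import random

# For finite sets the supremum is just max, so the identity
# sup(c + A) = c + sup A can be spot-checked numerically.
random.seed(0)
for _ in range(100):
    A = [random.uniform(-10, 10) for _ in range(20)]
    c = random.uniform(-10, 10)
    assert max(c + a for a in A) == c + max(A)
print("ok")
```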
|
391,509 | <p>We have $$\dfrac{1+2+3+...+ \space n}{n^2}$$</p>
<p>What is the limit of this function as $n \rightarrow \infty$?</p>
<p>My idea:</p>
<p>$$\dfrac{1+2+3+...+ \space n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + ... + \dfrac{n}{n^2} = 0$$</p>
<p>Is this correct?</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Such application of limit (esp. to $\infty$) to individual summand is applicable only when the number of summand is finite.</p>
<p>$$\dfrac{1+2+3+...+ \space n}{n^2} =\frac{\frac{n(n+1)}2}{n^2}=\frac12\cdot\left(1+\frac1n\right)$$</p>
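<p>A quick numerical check of the closed form (the exact value $\frac12\left(1+\frac1n\right)$ tends to $\frac12$, not $0$):</p>

```python
# Numerical check that (1 + 2 + ... + n) / n**2 approaches 1/2,
# matching the closed form (1/2)(1 + 1/n) from the hint above.
def ratio(n):
    return sum(range(1, n + 1)) / n**2

for n in (10, 1000, 100000):
    print(n, ratio(n))  # tends to 0.5, not 0
assert abs(ratio(100000) - 0.5) < 1e-4
```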
|
4,311,731 | <p>I was thinking about solutions of the equation <span class="math-container">$x^2=i$</span>. The first thought that came to my mind was <span class="math-container">$x=\pm \sqrt{i}$</span> (I know it's wrong). Then I thought that if we solve this equation like a real problem, then <span class="math-container">$|x|=\sqrt i$</span> (again wrong).</p>
<p>But it got me thinking: we define <span class="math-container">$|z|$</span> as the distance of the complex number <span class="math-container">$z$</span> from the origin. But what if this distance were not real? So let us consider complex numbers <span class="math-container">$z$</span> with <span class="math-container">$|z|=i$</span>.</p>
<blockquote>
<p>Does this question make sense ? Does such numbers exist on complex x - y plane ? Do we need to include another dimensions for such numbers ?</p>
</blockquote>
<p>Sorry if this question seems silly and thanks for help !</p>
| Community | -1 | <blockquote>
<p>But what if this distance is not real so let consider complex numbers <span class="math-container">$z$</span> whose <span class="math-container">$|z|=i$</span>.</p>
</blockquote>
<p>No, the "distance (from a complex number to another)" is a nonnegative real number by definition. It does not make sense to write <span class="math-container">$|z|=i$</span>.</p>
<blockquote>
<p>I was thinking about solutions of equation <span class="math-container">$x^2=i$</span>.</p>
</blockquote>
<p>There are two solutions to the complex equation <span class="math-container">$z^2=i$</span>. (One usually uses <span class="math-container">$z$</span> as the unknown variable for complex equations.)</p>
<p>One way to solve this equation is using the polar form of complex numbers. Write <span class="math-container">$z=re^{i\theta}$</span> and <span class="math-container">$i=e^{i\pi/2}$</span>. Then
<span class="math-container">$$
r^2e^{i2\theta} = e^{i(\pi/2+2k\pi)},\quad (r\ge 0)
$$</span>
It follows that <span class="math-container">$r=1$</span> and <span class="math-container">$\theta=\pi/4+k\pi$</span>. Thus the two solutions are
<span class="math-container">$$
z_1=e^{i\pi/4},\quad z_2=e^{i5\pi/4}
$$</span></p>
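<p>A quick floating-point check that both roots square to <span class="math-container">$i$</span>:</p>

```python
import cmath

# Both roots found above, z1 = e^{i*pi/4} and z2 = e^{i*5*pi/4},
# square to i (up to floating-point error).
z1 = cmath.exp(1j * cmath.pi / 4)
z2 = cmath.exp(1j * 5 * cmath.pi / 4)
assert abs(z1**2 - 1j) < 1e-12
assert abs(z2**2 - 1j) < 1e-12
print(z1, z2)
```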
|
3,042,308 | <p>Suppose a matrix <span class="math-container">$A$</span> has eigenvalues 0, 3, 7 and eigenvectors <span class="math-container">$\mathbf{u, v, w,}$</span> respectively. Find the least square minimum length solution for <span class="math-container">$A\mathbf{x} = \mathbf{u+v+w}$</span>.</p>
<p>This was on our engineering math final exam last year and we've tried some techniques about Moore-Penrose pseudoinverse, which didn't seem to work. Can someone help?</p>
| Hermione | 593,163 | <p>You can consider the function <span class="math-container">$f$</span> defined as
<span class="math-container">$$ f (x, y) = 1 \quad \text{if } x=0 \text{ for all } y \in \mathbb{R}$$</span>
<span class="math-container">$$ f (x, y) = 1 \quad \text{if } x>0 \text{ and } y \geq 0$$</span>
<span class="math-container">$$ f (x, y) = -1 \quad \text{if } x>0 \text{ and } y < 0$$</span></p>
|
2,699,536 | <blockquote>
<p>Given a continuous function $f:[0,1]\to\mathbb R$, prove that $$\forall t>0, \frac{1}{t}\cdot\ln\left(\int_{0}^{1}e^{-tf(x)}dx\right)\leq-\min f(x).$$</p>
</blockquote>
<p>I have no idea where to begin. Thought I could use the FTOC to come up with some form of antiderivative but I don't know if this is even the right intuition to approach this.</p>
| Community | -1 | <p>A good first approach might be exploring what happens for constant functions $f$, or for something simplie like $f(x) = ax$.</p>
<hr>
<p>Note that</p>
<p>$$\int_0^1 e^{-t f(x)} dx \le \int_0^1 e^{-t \min f} dx = e^{-t \min f}$$</p>
<p>so that the left hand side of your inequality is</p>
<p>$$\frac 1 t \ln \left(\int_0^1 e^{-t f(x)} dx\right) \le \frac 1 t \ln e^{-t \min f} = - \min f. $$</p>
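<p>A crude midpoint-rule check of the inequality for the arbitrary choice $f(x)=x$ (so $\min f = 0$); this is only a spot check, not a proof:</p>

```python
import math

# Riemann-sum spot check of (1/t) * ln( int_0^1 e^{-t f(x)} dx ) <= -min f
# for the arbitrary choice f(x) = x (so min f = 0) and a few values of t.
def lhs(f, t, n=10000):
    # midpoint rule on [0, 1]
    integral = sum(math.exp(-t * f((k + 0.5) / n)) for k in range(n)) / n
    return math.log(integral) / t

f = lambda x: x
for t in (0.5, 1.0, 5.0):
    assert lhs(f, t) <= 0.0   # -min f = 0 here
print("inequality holds for the sampled t")
```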
|
3,048,381 | <p>We found the orthonormal basis for the eigen spaces.</p>
<p>We got <span class="math-container">$C$</span> to be the matrix</p>
<pre><code>[ 1/sqrt(2)   1/sqrt(6)   1/sqrt(3)
 -1/sqrt(2)   1/sqrt(6)   1/sqrt(3)
  0          -2/sqrt(6)   1/sqrt(3) ]
</code></pre>
<p>And the original matrix <span class="math-container">$A $</span> is</p>
<pre><code>[4 2 2
2 4 2
2 2 4]
</code></pre>
<p>After finding <span class="math-container">$C$</span>, my notes jump to:</p>
<pre><code>therefore <span class="math-container">$C^-1 A C = $</span>
[2 0 0
0 2 0
0 0 8]
</code></pre>
<p>They do not show any steps on how to calculate the inverse of <span class="math-container">$C$</span>. Is there an easy way of calculating it? How would I start off reducing it to RREF? How would I get rid of the square roots? (normally, I'm used to just dealing with regular integers).</p>
<p>Thanks in advance!</p>
| nicomezi | 316,579 | <p>The matrix <span class="math-container">$C$</span> is orthogonal. Hence <span class="math-container">$C^{-1}=C^T$</span>.</p>
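<p>Since <span class="math-container">$C$</span> is orthogonal, the "inverse" is just the transpose, so no row reduction is needed. A pure-Python spot check (no libraries assumed) that <span class="math-container">$C^T A C$</span> is the stated diagonal matrix:</p>

```python
import math

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
C = [[ 1/s2, 1/s6, 1/s3],
     [-1/s2, 1/s6, 1/s3],
     [ 0.0, -2/s6, 1/s3]]
A = [[4, 2, 2],
     [2, 4, 2],
     [2, 2, 4]]

def matmul(X, Y):
    # plain 3x3 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Ct = [list(row) for row in zip(*C)]  # transpose; equals C^-1 because C is orthogonal
D = matmul(matmul(Ct, A), C)
print([round(D[i][i], 6) for i in range(3)])  # [2.0, 2.0, 8.0]
```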
|
324,557 | <p>Isbell gave, in <a href="https://eudml.org/doc/213746" rel="noreferrer"><em>Two set-theoretic theorems in categories</em> (1964)</a>, a necessary criterion for categories to be concretisable (i.e. to admit some faithful functor into sets). Freyd, in <a href="https://www.sciencedirect.com/science/article/pii/0022404973900315" rel="noreferrer"><em>Concreteness</em> (1973)</a>, showed that Isbell’s criterion is also sufficient.</p>
<p>My question is: <strong>Has anyone ever used Isbell’s criterion to check that a category is concretisable?</strong></p>
<p>I’m interested not only in seeing the theorem is formally invoked in print, to show some category is concretisable — though of course that would be a perfect answer, if it’s happened. What I’m also interested in, and suspect is more likely to have occurred, is if anyone’s found the criterion useful as a heuristic for checking whether a category is concretisable, in a situation where one wants it to be concrete but finding a suitable functor is not totally trivial. (I’m imagining a situation similar to the adjoint functor theorems: they give very useful quick heuristics for guessing whether adjoints exist, but if they suggest an adjoint does exist, usually there’s an explicit construction as well, so they’re used as heuristics much more often than they’re formally invoked in print.)</p>
<p>What I’m not so interested in is uses of the criterion to confirm that an expected non-concretisable category is indeed non-concretisable — I’m after cases where it’s used in expectation of a <em>positive</em> answer.</p>
| Tim Campion | 2,362 | <p>I did this once with the category of schemes in response to <a href="https://mathoverflow.net/questions/2015/can-the-category-of-schemes-be-concretized/2050">this question</a>, with help from Laurent Moret-Bailly. But then Zhen Lin Low pointed out there's an obvious concretizing functor. Maybe it wasn't so obvious until we were sure it was there, though. So I suppose this falls under the "useful heuristic" category. In practice, the Isbell-Freyd criterion translated the problem into something more concrete (pardon the pun!) which an algebraic geometer had a sense for how to answer. At the time, I didn't know enough algebraic geometry to answer this question on my own, so translating it into more geometric language which I could ask somebody else was an essential step for me.</p>
<p>It helped that, as Ivan Di Liberti points out in the comments, the criterion is especially simple in a finitely-complete category.</p>
|
324,557 | <p>Isbell gave, in <a href="https://eudml.org/doc/213746" rel="noreferrer"><em>Two set-theoretic theorems in categories</em> (1964)</a>, a necessary criterion for categories to be concretisable (i.e. to admit some faithful functor into sets). Freyd, in <a href="https://www.sciencedirect.com/science/article/pii/0022404973900315" rel="noreferrer"><em>Concreteness</em> (1973)</a>, showed that Isbell’s criterion is also sufficient.</p>
<p>My question is: <strong>Has anyone ever used Isbell’s criterion to check that a category is concretisable?</strong></p>
<p>I’m interested not only in seeing the theorem is formally invoked in print, to show some category is concretisable — though of course that would be a perfect answer, if it’s happened. What I’m also interested in, and suspect is more likely to have occurred, is if anyone’s found the criterion useful as a heuristic for checking whether a category is concretisable, in a situation where one wants it to be concrete but finding a suitable functor is not totally trivial. (I’m imagining a situation similar to the adjoint functor theorems: they give very useful quick heuristics for guessing whether adjoints exist, but if they suggest an adjoint does exist, usually there’s an explicit construction as well, so they’re used as heuristics much more often than they’re formally invoked in print.)</p>
<p>What I’m not so interested in is uses of the criterion to confirm that an expected non-concretisable category is indeed non-concretisable — I’m after cases where it’s used in expectation of a <em>positive</em> answer.</p>
| Martti Karvonen | 136,562 | <p>An inverse category can be defined as a category where every <span class="math-container">$f$</span> admits a unique regular inverse, i.e. a map <span class="math-container">$g$</span> such that <span class="math-container">$fgf=f$</span> and <span class="math-container">$gfg=g$</span>. In [1], Kastl proves that any locally small inverse category admits a faithful functor into <span class="math-container">$PInj$</span>, the category of sets and partial injections. The proof first verifies Isbell's criterion, obtaining a faithful functor to <span class="math-container">$Set$</span> and then one proves a general result giving rise to a faithful functor to <span class="math-container">$PInj$</span>.</p>
<p>[1] J. Kastl. Inverse categories. Studien zur Algebra und ihre Anwendungen, 7:51–60, 1979.</p>
|
4,401,028 | <p>Let <span class="math-container">$p$</span> be an odd prime and <span class="math-container">$ω=e^{2\pi i /p}$</span>. Determine if <span class="math-container">$1-ω$</span> is prime in <span class="math-container">$\mathbb{Z}[ω]=\mathbb{Z}+\mathbb{Z}ω+\mathbb{Z}ω^2+...+\mathbb{Z}ω^{p-2}$</span></p>
<p><strong>My attempt</strong></p>
<p>I have tried using the definition of prime and also tried to show <span class="math-container">$\langle 1-\omega \rangle$</span> is maximal but I end up with more unknowns than equations.</p>
<p>I know the norm of the ideal is <span class="math-container">$N(\langle 1-\omega \rangle)=|1-\omega|^{p-2}$</span>.</p>
| Zaz | 3,157 | <p>Well it's not true in general that <span class="math-container">$\sin(x) = 0$</span>. You should have been taught that <span class="math-container">$\sin(0) = 0$</span>.</p>
<p><span class="math-container">$\sin(x)$</span> is the value of the sine function at <span class="math-container">$x$</span> - it is an unknown <strong>number</strong>. <span class="math-container">$\sin$</span> is a <strong>function</strong>. These are two different mathematical objects.</p>
<p>You can also use <span class="math-container">$\sin 0 = 0$</span>, which is just shorthand for <span class="math-container">$\sin(0) = 0$</span>.</p>
|
1,796,008 | <p>I am using DFT with windows. The way I understand how a window makes the DFT "look" better, is that multiplication in time domain is convolution in frequency domain. Therefore a window with following FT (Hann window), will suppress the side lobes found in a signal FT (second picture) : </p>
<p><a href="https://i.stack.imgur.com/1nFsx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1nFsx.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/rVZ9U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rVZ9U.png" alt="enter image description here"></a></p>
<p>But I don't understand how the values |F($\omega$)| are related to suppressing the signal's side lobes ... e.g. the Tukey window plotted as |F($\omega$)|</p>
<p><a href="https://i.stack.imgur.com/IsL6q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsL6q.jpg" alt="enter image description here"></a></p>
<p>How are the width and the rate of decrease of the side lobes (in the plot above) related to getting rid of the side lobes in the <strong>signal's</strong> FT? Is there an intuitive way to explain this?</p>
| Chappers | 221,811 |
$
\newcommand{\sech}{\mathop{\rm sech}\nolimits}
\newcommand{\csch}{\mathop{\rm csch}\nolimits}
$
<p>Your calculation of the curvature of the tractrix is fine. The problem comes from the other part: the plane in which the circle $(\cos{u}\sech{v},\sin{u}\sech{v},v-\tanh{v})$ (with constant $v$) lies is <em>not</em> one of the planes normal to the surface at the point where you want to measure the curvature (this is easy to see in the case of a surface whose radius decreases rapidly: the plane must therefore depend on the gradient of the radial component).</p>
<p>Instead, we need to look at the curvature of a curve $\gamma(t)$ in the plane parallel to the surface normal, which is easily found to be
$$ n=(\cos{u}\tanh{v},\sin{u}\tanh{v},\sech{v}). $$</p>
<p>If the curve lies in both the surface and a plane parallel to this vector, it is easy to see that the normal vector to the curve must be parallel to the surface normal. (The curve's tangent vector is perpendicular to the surface normal by definition of the surface normal; the binormal is constant, and perpendicular to the plane and thus to the surface normal, since the curve is planar; so the curve's normal vector, being perpendicular to both of these, lies in the plane and is perpendicular to the tangent there, hence parallel to the surface normal.)</p>
<p>Therefore, all we have to do is find $ (\gamma(t)-\gamma(0)) \cdot n$, where $\dot{\gamma}(0)$ is parallel to the circle $v= \text{const}$., and expand to get to the first nonzero term near t=0, which is easily seen to be:
$$ \begin{align}
(\gamma(t)-\gamma(0)) \cdot n &= \gamma'(0) \cdot n \, t + \gamma''(0) \cdot n \frac{t^2}{2} +O(t^3) \\
&= s'(0)T(0) \cdot n t + (s''(0)T(0)+ s'(0)^2\kappa(0) N(0) ) \cdot n \frac{t^2}{2} + O(t^3) \\
&= s'(0)^2 \kappa \frac{t^2}{2} + O(t^3),
\end{align} $$
by the definition of curvature, where $s$ is the arclength parameter. The symmetry shows that we only need to do the calculation for $u=0$, so
$$ n=(\tanh{v},0,\sech{v}), $$
and some boring calculation later, we find that
$$ s'(0)^2 \kappa(0) = \sech{v}\tanh{v} \, (-U'(0)^2+V'(0)^2). $$
Of course, $s'(0)^2$ is the square of the length of $\gamma'(0)$, which is also easy to calculate, as
$$ \lVert \gamma'(0) \rVert^2 = \sech^2{v} \, (U'(0)^2+\sinh^2{v} \, V'(0)^2) $$
(you can easily fill this in yourself, with enough differentiation and application of trigonometrical and hyperbolic identities). Therefore,
$$ \kappa(0) = \sinh{v} \, \frac{-U'(0)^2+V'(0)^2}{U'(0)^2+\sinh^2{v} \, V'(0)^2}. $$
To find the principal curvatures, one has to maximise and minimise this homogeneous function, but in this case, it's easy, with the minimum obviously when $V'(0)=0$ (i.e., parallel to the circle), the maximum when $U'(0)=0$ (parallel to the tractrix), with values $-\sinh{v}$ and $1/\sinh{v}$ respectively. The latter you have already, and the product is $-1$ as it should be.</p>
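<p>The second-order expansion above is easy to check numerically. Below is a small sketch (the sample values $v_0=1$, $U'=0.7$, $V'=0.3$ and the step $t=10^{-3}$ are arbitrary choices of mine) comparing $(\gamma(t)-\gamma(0))\cdot n$ with the predicted $\sech{v}\tanh{v}\,(-U'(0)^2+V'(0)^2)\,t^2/2$:</p>

```python
import math

def gamma(t, v0, up, vp):
    # curve on the pseudosphere through (u, v) = (0, v0), with
    # tangent direction (U'(0), V'(0)) = (up, vp) in the (u, v) plane
    u, v = up * t, v0 + vp * t
    sech = 1.0 / math.cosh(v)
    return (math.cos(u) * sech, math.sin(u) * sech, v - math.tanh(v))

def normal_deviation(t, v0, up, vp):
    # (gamma(t) - gamma(0)) . n, with n the surface normal at t = 0
    n = (math.tanh(v0), 0.0, 1.0 / math.cosh(v0))
    g0, gt = gamma(0.0, v0, up, vp), gamma(t, v0, up, vp)
    return sum((a - b) * c for a, b, c in zip(gt, g0, n))

v0, up, vp, t = 1.0, 0.7, 0.3, 1e-3
sech0, tanh0 = 1.0 / math.cosh(v0), math.tanh(v0)
predicted = sech0 * tanh0 * (-up**2 + vp**2) * t**2 / 2
measured = normal_deviation(t, v0, up, vp)
```

<p>The two numbers agree to within the $O(t^3)$ error of the expansion.</p>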
|
1,796,008 | <p>I am using the DFT with windows. The way I understand how a window makes the DFT "look" better is that multiplication in the time domain is convolution in the frequency domain. Therefore a window with the following FT (Hann window) will suppress the side lobes found in a signal's FT (second picture): </p>
<p><a href="https://i.stack.imgur.com/1nFsx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1nFsx.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/rVZ9U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rVZ9U.png" alt="enter image description here"></a></p>
<p>But I don't understand how the values $|F(\omega)|$ are related to suppressing the signal's side lobes ... e.g. the Tukey window plotted as $|F(\omega)|$</p>
<p><a href="https://i.stack.imgur.com/IsL6q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsL6q.jpg" alt="enter image description here"></a></p>
<p>How are the width and the pace of decrease of the sidelobes (of the above plot) related to getting rid of the sidelobes of the <strong>signal's</strong> FT? Is there an intuitive way to explain it?</p>
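<p>To make the comparison concrete, here is a small plain-Python sketch (the window length $N=64$ and the $16\times$ zero-padding are arbitrary choices) measuring the peak sidelobe level of a rectangular window against a Hann window:</p>

```python
import cmath
import math

def dft_mag(x, nfft):
    # magnitude of the zero-padded DFT of a real window (naive O(N * nfft))
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / nfft)
                    for n in range(len(x)))) for k in range(nfft)]

def peak_sidelobe_db(w, nfft=1024):
    mag = dft_mag(w, nfft)
    main = mag[0]                       # peak of the main lobe (DC bin)
    k = 1
    while k + 1 < nfft // 2 and mag[k + 1] < mag[k]:
        k += 1                          # walk down to the edge of the main lobe
    side = max(mag[k:nfft // 2])        # highest sidelobe beyond it
    return 20 * math.log10(side / main)

N = 64
rect = [1.0] * N
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
```

<p>The rectangular window comes out near $-13$ dB and the Hann window near $-31$ dB: the slower decay of $|F(\omega)|$ away from the main lobe is exactly what leaks into the convolved spectrum.</p>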
| Narasimham | 95,860 | <p>Gauss curvature is the product of the principal <em>curvatures of the meridian and of its perpendicular line</em>. It is not the curvature of the radius of the pseudosphere in a cylindrical coordinate system, as you incorrectly found.</p>
<p><span class="math-container">$v$</span> is the angle of rotation of a point on the asymptotic line (zero normal curvature) around the symmetry axis. </p>
<p>Parametrization of an asymptotic line in space is:</p>
<p><span class="math-container">$$ [\text{sech } v \cos v, \text{sech } v \sin v , ( v- \text{tanh } v)]$$</span></p>
<p>You have correctly included <span class="math-container">$u$</span> to describe the pseudospherical surface obtained by rotating above asymptotic line about z-axis.</p>
<p>Euler's theorem:</p>
<p><span class="math-container">$$ k_n= k_1\cos^2 \psi+ k_2\sin^2 \psi $$</span></p>
<p>Principal radii of curvature</p>
<p><span class="math-container">$$ (R_1,R_2)= ( -\cot \phi ,\ \tan \phi) $$</span></p>
<p>where <span class="math-container">$\phi $</span> is the angle of the slope to the axis of symmetry. Also, some properties:</p>
<p><span class="math-container">$$ \phi =\psi ; v = s /a ;$$</span></p>
<p>when the asymptotic line starts from the x-axis in projection. Here <span class="math-container">$a$</span> is the radius of the cuspidal equator, sometimes referred to as the radius of torsion.</p>
|
2,430,529 | <p>How should I define a vector, that has equal angles to vectors $\vec{i}, \vec{i} + \vec{j}$ and $\vec{i} + \vec{j} + \vec{k}$?</p>
<p>After looking at the problem in a graphical way, I tried taking the average of $\vec{i}$ and $\vec{i} + \vec{j} + \vec{k}$, rescaling the $\vec{i}$ vector to be of the same length as $\vec{i} + \vec{j} + \vec{k}$. Unfortunately the answer does not seem to be correct. </p>
| achille hui | 59,379 | <p>In general, if you have $3$ vectors $\vec{v}_1, \vec{v}_2, \vec{v}_3$ and you want
to find a vector $\vec{v}$ making equal angles to them, you first construct the unit vectors for those vectors, $\hat{v}_1, \hat{v}_2, \hat{v}_3$, and then solve the equation:</p>
<p>$$\hat{v}_1 \cdot \vec{v} = \hat{v}_2 \cdot \vec{v} = \hat{v}_3 \cdot \vec{v} = \cos\theta\tag{*1}$$
where $\theta$ is the common angle.</p>
<p>This is easy if you have heard of <a href="https://en.wikipedia.org/wiki/Dual_basis" rel="noreferrer">dual basis</a>. Given any $3$ linearly independent vectors $\vec{u}_1, \vec{u}_2, \vec{u}_3$, a dual basis of them
is another $3$ vectors $\vec{w}_1, \vec{w}_2, \vec{w}_3$ such that</p>
<p>$$\vec{u}_i \cdot \vec{w}_j = \begin{cases}1, & i = j\\ 0, & i \ne j\end{cases}$$</p>
<p>It is not hard to work out the explicit form of $\vec{w}_i$:</p>
<p>$$\vec{w}_1 = \frac{\vec{u}_2 \times \vec{u}_3}{\vec{u}_1\cdot (\vec{u}_2 \times \vec{u}_3)},\quad
\vec{w}_2 = \frac{\vec{u}_3 \times \vec{u}_1}{\vec{u}_1\cdot (\vec{u}_2 \times \vec{u}_3)}\quad\text{ and }\quad
\vec{w}_3 = \frac{\vec{u}_1 \times \vec{u}_2}{\vec{u}_1\cdot (\vec{u}_2 \times \vec{u}_3)}
$$
The nice thing about the dual basis is that if you know the dot products of a vector $\vec{u}$ with the $\vec{u}_i$, you immediately obtain a decomposition of $\vec{u}$ in terms of the $\vec{w}_i$:</p>
<p>$$\left\{\begin{align}
\vec{u}_1 \cdot \vec{u} &= \lambda_1\\
\vec{u}_2 \cdot \vec{u} &= \lambda_2\\
\vec{u}_3 \cdot \vec{u} &= \lambda_3\\
\end{align}\right.
\quad\implies\quad
\vec{u} = \lambda_1 \vec{w}_1 + \lambda_2\vec{w}_2 + \lambda_3\vec{w}_3
$$
Applying this to $(*1)$, and noticing that we don't care about the overall length of $\vec{v}$, we find</p>
<p>$$\begin{align}\vec{v}
&\propto \hat{v}_1 \times \hat{v}_2 + \hat{v}_2 \times \hat{v}_3 + \hat{v}_3 \times \hat{v}_1\\
&= \frac{1}{\sqrt{2}}(1,0,0)\times(1,1,0)
+ \frac{1}{\sqrt{6}}(1,1,0)\times(1,1,1)
+ \frac{1}{\sqrt{3}}(1,1,1)\times(1,0,0)\\
&= (0,0,\frac{1}{\sqrt{2}}) + (\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{6}},0)
+ (0, \frac{1}{\sqrt{3}},-\frac{1}{\sqrt{3}})\\
&\propto (1,\sqrt{2}-1,\sqrt{3}-\sqrt{2})
\end{align}
$$</p>
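<p>The formula $\vec v \propto \hat v_1\times\hat v_2+\hat v_2\times\hat v_3+\hat v_3\times\hat v_1$ is easy to check numerically; here is a small sketch (plain Python, no libraries) doing so for the three vectors of the question:</p>

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def equal_angle_vector(v1, v2, v3):
    # v proportional to u1 x u2 + u2 x u3 + u3 x u1 (unit vectors)
    u1, u2, u3 = unit(v1), unit(v2), unit(v3)
    terms = (cross(u1, u2), cross(u2, u3), cross(u3, u1))
    return tuple(sum(t[i] for t in terms) for i in range(3))

v = equal_angle_vector((1, 0, 0), (1, 1, 0), (1, 1, 1))
```

<p>The three direction cosines $\hat v_i\cdot\hat v$ come out equal, and $v$ is proportional to $(1,\sqrt2-1,\sqrt3-\sqrt2)$ as derived above.</p>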
|
4,022,269 | <p>I'm refreshing some Probability Theory (it's been a while) and came across the following example:</p>
<p>Let <span class="math-container">$X$</span> be a random variable with distribution function <span class="math-container">$$F_X(x) = \left\{\begin{array}{l r}0 & \text{if }x<0\\\frac{1}{2}+\frac{x}{2}&\text{if }0\le x<1\\1&\text{if }x\ge 1\end{array}\right.$$</span>
Then this random variable is neither discrete nor continuous, since <span class="math-container">$$\mathbb{P}(X=0)=\frac{1}{2},\quad\text{and}\quad\mathbb{P}(X=\frac{1}{2})=0.$$</span></p>
<p>Is this first statement true since <span class="math-container">$X$</span> takes these values on all <span class="math-container">$x<0$</span>? And the second statement true since <span class="math-container">$X$</span> takes that value for only 1 value of <span class="math-container">$x$</span>, out of infinitely many?</p>
<p>It's probably easy, but I'm not sure if I understand it...</p>
| John L | 852,522 | <p>A random variable is continuous if its cdf is a continuous function; this cdf is not continuous at <span class="math-container">$x=0$</span>.
A random variable is discrete if it can take only a countable number of values; this one can take any value in <span class="math-container">$(0.5,1)$</span>, which is uncountably infinite.</p>
<p>So it is neither discrete nor continuous.</p>
<p>To find <span class="math-container">$P[X=x]$</span>, find <span class="math-container">$P[X \le x]=F(x)$</span> and subtract <span class="math-container">$P[X<x]$</span>, which is the supremum of <span class="math-container">$F(y)$</span> for all <span class="math-container">$y<x$</span>. Since <span class="math-container">$F$</span> is a non-decreasing function, the supremum is the limit as <span class="math-container">$y$</span> approaches <span class="math-container">$x$</span> from the left.</p>
<p>Since <span class="math-container">$P[X\le x]=0$</span> for all <span class="math-container">$x<0$</span>,
<span class="math-container">$$P[X=0]=P[X\le 0]-\lim_{x\rightarrow 0^-}P[X \le x]=\frac{1}2$$</span></p>
<p>On the other hand,<br />
<span class="math-container">$$P\left[X=\frac{1}2 \right]=
P\left[X \le \frac{1}2 \right]-\lim_{x\rightarrow {\frac{1}2}^-}P\left[X \le x \right]=
\frac{1}2 +\frac{\frac{1}2}2-\lim_{x\rightarrow {\frac{1}2}^-}\left(\frac{1}2 +\frac{x}2\right)=0$$</span></p>
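<p>The jump recipe above can be sketched in a few lines (the left limit is approximated with a small $\varepsilon$, an assumption that is harmless for this piecewise-linear $F$):</p>

```python
def F(x):
    # the cdf from the question: 0 for x < 0, 1/2 + x/2 on [0, 1), 1 for x >= 1
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5 + x / 2
    return 1.0

def point_mass(x, eps=1e-9):
    # P[X = x] = F(x) - lim_{y -> x-} F(y), approximated by F(x - eps)
    return F(x) - F(x - eps)
```

<p>This gives $P[X=0]\approx\tfrac12$ from the jump at $0$ and $P[X=\tfrac12]\approx 0$, matching the calculation above.</p>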
|
4,022,269 | <p>I'm refreshing some Probability Theory (it's been a while) and came across the following example:</p>
<p>Let <span class="math-container">$X$</span> be a random variable with distribution function <span class="math-container">$$F_X(x) = \left\{\begin{array}{l r}0 & \text{if }x<0\\\frac{1}{2}+\frac{x}{2}&\text{if }0\le x<1\\1&\text{if }x\ge 1\end{array}\right.$$</span>
Then this random variable is neither discrete nor continuous, since <span class="math-container">$$\mathbb{P}(X=0)=\frac{1}{2},\quad\text{and}\quad\mathbb{P}(X=\frac{1}{2})=0.$$</span></p>
<p>Is this first statement true since <span class="math-container">$X$</span> takes these values on all <span class="math-container">$x<0$</span>? And the second statement true since <span class="math-container">$X$</span> takes that value for only 1 value of <span class="math-container">$x$</span>, out of infinitely many?</p>
<p>It's probably easy, but I'm not sure if I understand it...</p>
| HallsofIvy | 879,895 | <p>The <strong>distribution</strong> function, F, is not continuous because the limit at <span class="math-container">$x = 0$</span> "from the left" is 0 while the limit "from the right" is <span class="math-container">$\frac{1}{2}+ \frac{0}{2}= \frac{1}{2}$</span>.</p>
<p>But the statement that "P(x= 0)= 1/2" is simply WRONG. <span class="math-container">$P(x= a)= \int_{-\infty}^a F(x)dx$</span>. Since F(x)= 0 for x< 0, P(x)= 0 for x< 0 and, if <span class="math-container">$x\ge 0$</span> <span class="math-container">$P(x)= \int_0^x F(x)dx$</span>.</p>
<p>In particular, while F(0)= 1/2, P(0)= 0 since <span class="math-container">$\int_0^0 F(x)\,dx= 0$</span> for any F.
And <span class="math-container">$P(1/2)= \int_0^{1/2} \frac{1}{2}+ \frac{x}{2}dx= \left[\frac{x}{2}+ \frac{x^2}{4}\right]_0^{1/2}= \frac{1}{4}+ \frac{1}{16}= \frac{5}{16}$</span>, not "0".</p>
|
1,591,197 | <p>Let $ A \trianglelefteq G $ and $ B \trianglelefteq A $ a Sylow normal subgroup of $ A $. My textbook says then that $ B \trianglelefteq G $.</p>
<p>I don’t understand why that is.</p>
| Lauren | 293,855 | <p>Let $x\in G$; we'll show that $xBx^{-1}=B$. We have:</p>
<p>$B$ is a subgroup of $A$, so $xBx^{-1}\subset xAx^{-1}=A$ (because $A \trianglelefteq G$). Thus $xBx^{-1}\subset A$, which means that $B$ and $xBx^{-1}$ are two Sylow $p$-subgroups of $A$. Since $B \trianglelefteq A$, $B$ is the unique Sylow $p$-subgroup of $A$, and therefore $B=xBx^{-1}$.</p>
<p>Finally, $B \trianglelefteq G$.</p>
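<p>For a concrete sanity check (the example is my own choice), take $G=S_4$, $A=A_4\trianglelefteq G$, and $B=V_4$, the normal (hence unique) Sylow $2$-subgroup of $A_4$; a short sketch verifies $xBx^{-1}=B$ for every $x\in G$:</p>

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations as 0-indexed value tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# B = V_4 = {e, (01)(23), (02)(13), (03)(12)}, written as value tuples
B = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}
G = list(permutations(range(4)))  # all of S_4

normal_in_G = all(
    {compose(compose(x, b), inverse(x)) for b in B} == B for x in G
)
```

<p>Here <code>normal_in_G</code> comes out <code>True</code>, as the argument above predicts.</p>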
|
1,107,932 | <p>Is the map $f:S_n \to A_{n+2}$ given by </p>
<p>$$f(s)= \begin{cases}
s & s\ \text{is even}\\
s \circ (n+1,\ n+2) & s\ \text{is odd}
\end{cases}$$ </p>
<p>an injective homomorphism? I can show that if it is a homomorphism then it is injective, but I am having difficulty showing that $f$ is a homomorphism. Please help. </p>
| Timbuc | 118,527 | <p>As disjoint cycles commute and $\;(n+1\;n+2)^2=1\;$ , for any two <strong>cycles</strong> $\;\sigma,\pi\in S_n\;$ we get</p>
<p>$$f(\sigma\pi):=\begin{cases}\sigma\pi&,\text{both cycles have same parity}\\{}\\\sigma\pi(n+1\;n+2)=\sigma(n+1\;n+2)\pi=f(\sigma)f(\pi)&,\text{otherwise}\end{cases}$$</p>
<p>where in the second case $\sigma$ is odd and $\pi$ is even; the case with $\sigma$ even and $\pi$ odd is handled the same way.</p>
<p>Now generalize using that any permutation is the product of disjoint cycles.</p>
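<p>For small $n$ the homomorphism property can also be checked exhaustively. A sketch (permutations as $0$-indexed value tuples; parity via an inversion count) for $n=3$:</p>

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def parity(p):
    # 0 for even, 1 for odd, via inversion count
    return sum(p[i] > p[j] for i in range(len(p))
               for j in range(i + 1, len(p))) % 2

def f(s, n):
    # embed s in S_{n+2} fixing the last two points; if s is odd,
    # also swap those two points (the transposition (n+1, n+2))
    ext = tuple(s) + (n, n + 1)
    if parity(s) == 0:
        return ext
    swap = tuple(range(n)) + (n + 1, n)
    return compose(ext, swap)

n = 3
S = list(permutations(range(n)))
```

<p>All images land in $A_{n+2}$, the map is injective, and $f(st)=f(s)f(t)$ for every pair, as the disjoint-cycle argument above predicts.</p>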
|
416,407 | <blockquote>
<p>What examples are there of habitual but unnecessary uses of the axiom of
choice, in any area of mathematics except topology?</p>
</blockquote>
<p>I'm interested in standard proofs that use the axiom of choice, but where
choice can be eliminated via some judicious and maybe not quite obvious
rephrasing. I'm less interested in proofs that were originally proved
using choice and where it took some significant new idea to remove the
dependence on choice.</p>
<p>I exclude topology because I already know lots of topological examples. For
instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive
mathematics</a>
gives choicey and choice-free proofs of a standard result (Theorem 1.4):
every open cover of a compact metric space has a Lebesgue number. Todd
Trimble told me about some other topological examples, e.g. a compact
subspace of a Hausdorff space is closed, or the product of two compact
spaces is compact. There are more besides.</p>
<p>One example per answer, please. And please sketch both the habitual proof
using choice and the alternative proof that doesn't use choice.</p>
<p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. It would qualify as an answer except that it comes from
topology.</p>
<p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space
<span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the
ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p>
<p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some
<span class="math-container">$\varepsilon_x > 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some
member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a
cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}),
\ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
<p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span>
such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon > 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is
contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has
a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n,
\varepsilon_n)\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
| Michael Hardy | 6,316 | <p>Sometimes people prove the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="noreferrer">Schröder–Bernstein theorem</a> by saying it follows easily from the well-ordering theorem, which is equivalent to the axiom of choice. But it can be proved without the axiom of choice. The theorem states that if there is a one-to-one mapping from each of two sets into the other, then there is also a bijection.</p>
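<p>The choice-free proof is even effective: the bijection can be computed pointwise by tracing each element's chain backwards. A sketch with the assumed injections $f(n)=n+1$ and $g(n)=n+1$ on $\mathbb{N}$ (every chain here terminates, so no guard for doubly infinite chains is needed):</p>

```python
def f(a):          # injection A -> B (misses 0)
    return a + 1

def g(b):          # injection B -> A (misses 0)
    return b + 1

def f_inv(b):
    return b - 1 if b >= 1 else None

def g_inv(a):
    return a - 1 if a >= 1 else None

def h(a):
    # Trace a's chain backwards: a <- g <- f <- ... . If it stops on
    # the A side, map a by f; if it stops on the B side, map a by g^{-1}.
    x, side = a, "A"
    while True:
        pre = g_inv(x) if side == "A" else f_inv(x)
        if pre is None:
            return f(a) if side == "A" else g_inv(a)
        x, side = pre, ("B" if side == "A" else "A")
```

<p>On $\{0,\dots,99\}$ this produces the bijection $0\mapsto1,\ 1\mapsto0,\ 2\mapsto3,\ 3\mapsto2,\dots$</p>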
|
416,407 | <blockquote>
<p>What examples are there of habitual but unnecessary uses of the axiom of
choice, in any area of mathematics except topology?</p>
</blockquote>
<p>I'm interested in standard proofs that use the axiom of choice, but where
choice can be eliminated via some judicious and maybe not quite obvious
rephrasing. I'm less interested in proofs that were originally proved
using choice and where it took some significant new idea to remove the
dependence on choice.</p>
<p>I exclude topology because I already know lots of topological examples. For
instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive
mathematics</a>
gives choicey and choice-free proofs of a standard result (Theorem 1.4):
every open cover of a compact metric space has a Lebesgue number. Todd
Trimble told me about some other topological examples, e.g. a compact
subspace of a Hausdorff space is closed, or the product of two compact
spaces is compact. There are more besides.</p>
<p>One example per answer, please. And please sketch both the habitual proof
using choice and the alternative proof that doesn't use choice.</p>
<p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. It would qualify as an answer except that it comes from
topology.</p>
<p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space
<span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the
ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p>
<p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some
<span class="math-container">$\varepsilon_x > 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some
member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a
cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}),
\ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
<p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span>
such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon > 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is
contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has
a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n,
\varepsilon_n)\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
| Willie Wong | 3,948 | <p>One of my papers, <a href="https://arxiv.org/abs/1310.1318v3" rel="nofollow noreferrer">A comment on the construction of the maximal globally hyperbolic Cauchy development</a>, did this for the existence of the maximal globally hyperbolic Cauchy development for the initial value problem in general relativity.</p>
<p>The TL;DR is that the original proof had a gratuitous use of Zorn's lemma. The fix is similar, but also somewhat different from, the fix <a href="https://mathoverflow.net/a/416448/3948">removing the use of Zorn from maximal atlases</a>.</p>
|
416,407 | <blockquote>
<p>What examples are there of habitual but unnecessary uses of the axiom of
choice, in any area of mathematics except topology?</p>
</blockquote>
<p>I'm interested in standard proofs that use the axiom of choice, but where
choice can be eliminated via some judicious and maybe not quite obvious
rephrasing. I'm less interested in proofs that were originally proved
using choice and where it took some significant new idea to remove the
dependence on choice.</p>
<p>I exclude topology because I already know lots of topological examples. For
instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive
mathematics</a>
gives choicey and choice-free proofs of a standard result (Theorem 1.4):
every open cover of a compact metric space has a Lebesgue number. Todd
Trimble told me about some other topological examples, e.g. a compact
subspace of a Hausdorff space is closed, or the product of two compact
spaces is compact. There are more besides.</p>
<p>One example per answer, please. And please sketch both the habitual proof
using choice and the alternative proof that doesn't use choice.</p>
<p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. It would qualify as an answer except that it comes from
topology.</p>
<p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space
<span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the
ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p>
<p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some
<span class="math-container">$\varepsilon_x > 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some
member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a
cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}),
\ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
<p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span>
such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon > 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is
contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has
a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n,
\varepsilon_n)\}$</span>. Put <span class="math-container">$\varepsilon = \min_i
\varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
| Sam Sanders | 33,505 | <p>My favourite example is from Reverse Mathematics, namely <em>Pincherle's theorem</em> stating that</p>
<p><em>a locally bounded function on Cantor space is bounded there</em>.</p>
<p>The obvious proof proceeds by contradiction and uses AC:</p>
<ol>
<li><p>Suppose <span class="math-container">$F:2^\mathbb{N}\rightarrow \mathbb{N}$</span> is unbounded, i.e. <span class="math-container">$(\forall n\in \mathbb{N})(\exists f \in 2^{\mathbb{N}})(F(f)>n)$</span>.</p>
</li>
<li><p>Apply (countable) choice to obtain a sequence <span class="math-container">$(f_n)_{n\in \mathbb{N}}$</span> such that <span class="math-container">$F(f_n)>n$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>.</p>
</li>
<li><p>Use the sequential compactness of Cantor space to show that this sequence has a subsequence <span class="math-container">$(g_n)_{n\in \mathbb{N}}$</span> which converges to <span class="math-container">$g\in 2^\mathbb{N}$</span>.</p>
</li>
<li><p>Since <span class="math-container">$F$</span> is locally bounded, <span class="math-container">$F$</span> is bounded in a neighbourhood of <span class="math-container">$g$</span>. However, as <span class="math-container">$n$</span> increases, <span class="math-container">$g_n$</span> approaches <span class="math-container">$g$</span> and <span class="math-container">$F(g_n)$</span> becomes arbitrarily large. Contradiction.</p>
</li>
</ol>
<p>There is a proof in ZF (and weaker systems) that is more delicate:</p>
<p>in step 2., one considers:</p>
<p><span class="math-container">$(\forall n\in \mathbb{N})(\exists \sigma\in 2^{<\mathbb{N}})[(\exists f \in 2^{\mathbb{N}})(F(f)>n) \wedge \sigma = (f(0),\ldots, f(|\sigma|)) ]$</span>.</p>
<p>One can apply `numerical choice' to obtain a sequence <span class="math-container">$(\sigma_n)_{n\in \mathbb{N}}$</span> such that:</p>
<p><span class="math-container">$(\forall n\in \mathbb{N})[(\exists f \in 2^{\mathbb{N}})(F(f)>n) \wedge \sigma_n = (f(0),\ldots, f(|\sigma_n|)) ]$</span>.</p>
<p>This `numerical' choice principle is provable in ZF. Now use the sequence <span class="math-container">$(\sigma_n)_{n\in \mathbb{N}}$</span> instead of the sequence <span class="math-container">$(f_n)_{n\in \mathbb{N}}$</span>; the rest of the proof can then be modified to obtain a contradiction in the same way.</p>
|
2,645,406 | <p>I am reading Rudin's real and complex analysis. On page 14 we see $\underset{n} {\text{sup }} f_n$, and on page 15 we see max$\{f,g\}$. These two are obviously different. I searched some answers and found an example: </p>
<p>$(\sup \{f,g\})(x) = f(x)$ when $f(x)\geq g(x)$, and $g(x)$ when $f(x) < g(x)$ </p>
<p>But then I saw max$\{f,g\}$ which makes me confused. Are they the same? What are the differences?</p>
<p>Can someone also give some explanation of $(\underset{n\to \infty}{\text{lim sup }} f_n)(x)$? How to visualize/understand it?</p>
| LucasSilva | 154,194 | <p><strong>Supremum and Maximum</strong></p>
<p>Let $A$ be a set of real numbers. </p>
<p>If $x$ is a real number such that $a \leq x$ for all $a \in A$, then $x$ is called an <strong>upper bound</strong> for $A$. Example: $1$ is an upper bound for the interval $(0,1)$. So is $2$. </p>
<p>If $x$ is an upper bound for $A$ and $x$ belongs to $A$, then $x$ is called the <strong>maximum</strong> of $A$ and we write $x = \max A$. In other words, the maximum of $A$ is the element of $A$ that is larger than every other element of $A$. Not every set has a maximum. Examples: The interval $(0,1)$ has no maximum; The finite set $\{1,2,3,4\}$ has $4$ as its maximum. If the maximum of $A$ exists, then it is unique. This justifies the use of "the maximum" rather than "a maximum."</p>
<p>If $x$ is an upper bound for $A$ and $x$ is less than or equal to every other upper bound for $A$, then $x$ is called the <strong>supremum</strong> of $A$ (or the least upper bound for $A$) and we write $x = \sup A$. In symbols, $\sup A$ is the number such that (1) $a \leq \sup A$ for all $a \in A$ and (2) if $x'$ is a real number such that $a \leq x'$ for all $a \in A$, then $\sup A \leq x'$. </p>
<p>The supremum of $A$ is unique if it exists. The Dedekind Completeness Property of the Real Numbers is: If $A$ has at least one upper bound, then the supremum of $A$ exists.</p>
<p>If $\max A$ exists, then $\sup A$ exists and $\max A = \sup A$. Example: $A=(0,1]$ has $\max A = \sup A = 1$. </p>
<p>The supremum may exist even when the maximum does not. Example: $A=(0,1)$ has no maximum, but $\sup A = 1$. </p>
<p>Sometimes neither the maximum nor the supremum exist.
Example: The interval $(0,+\infty)$ has no maximum and no supremum. </p>
<p>If $\sup A$ exists and belongs to $A$, then $\max A = \sup A$. Example: $A=(0,1]$ has $\max A = \sup A = 1$.</p>
<p>To put things another way, a real number is called...</p>
<p>(i) an <strong>upper bound</strong> for $A$ if it is greater than or equal to every element of $A$. </p>
<p>(ii) the <strong>supremum</strong> of $A$ if it is an upper bound for $A$ and it is less than or equal to every upper bound of $A$.</p>
<p>(iii) the <strong>maximum</strong> if $A$ if it is the supremum of $A$ and it belongs to $A$.</p>
<p><strong>Limit Superior ($\limsup$)</strong></p>
<p>If $(a_n)$ is a sequence of real numbers that is bounded, we can define a new sequence $b_n = \sup \{ a_k : k \geq n\}$. Note that $(b_n)$ is a decreasing sequence. We define
$$\limsup a_n = \lim_{n \to \infty} b_n = \lim_{n \to \infty} \sup \{ a_k : k \geq n\}.$$
A useful fact to help understand the meaning of $\limsup a_n$ is that<br>
$$
\limsup a_n = \sup S
$$
where $S$ is the set of all real numbers $x$ such that $\lim_{k \to \infty} a_{n_k} = x$ for some convergent subsequence $(a_{n_k})$ of the sequence $(a_n)$. In fact, $\sup S$ belongs to $S$, so
$$
\limsup a_n = \max S
$$
Thus, you can think of $\limsup a_n$ as the largest limit for all convergent subsequences of $(a_n)$.</p>
<p>Example: The sequence $(1,\frac{1}{2},1,\frac{1}{3},1,\frac{1}{4},1,\ldots)$ does not converge. It has a subsequence $(1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots)$ that converges to $0$. It has another subsequence $(1,1,1,1,\ldots)$ that converges to $1$. It has infinitely many other convergent subsequences, but all of these either converge to $0$ or converge to $1$. Therefore $S = \{0,1\}$. Hence $\limsup a_n = \sup S = \max S = 1$.</p>
<p>For a sequence of real numbers that is bounded, the $\limsup$ always exists as a finite real number, even if the limit of the sequence does not. </p>
<p>If $(a_n)$ is a sequence of real numbers that is not bounded above, then we define $\limsup a_n = +\infty$. If $(a_n)$ is a sequence of real numbers such that $\lim_n a_n = -\infty$, then we define $\limsup a_n = -\infty$.</p>
<p>Given a sequence of functions $f_n:E \to \mathbb{R}$, for each <em>fixed</em> $x \in E$, the sequence $(f_n(x))$ is a sequence of real numbers, so we can talk about $\limsup f_n(x)$. We define the function $\limsup f_n: E \to [-\infty,\infty]$ by setting
$$
(\limsup f_n)(x) = \limsup f_n(x).
$$
for each fixed $x \in E$. </p>
<p>Note that since we have not specified that the sequence of functions $f_n$ is bounded, it could be that $\limsup f_n(x) = \pm \infty$. That is why the range of the function $\limsup f_n$ is given as $[-\infty,\infty]$. </p>
|
248,325 | <p>Is there some connection between a curve in the algebraic geometry sense, e.g.</p>
<blockquote>
<p>Separated scheme of finite type over spec($k$)</p>
</blockquote>
<p>for a field $k$</p>
<p>and a curve in the sense of a smooth map from an interval in $\mathbb R$ to $\mathbb R^n$?</p>
| Simon Rose | 1,703 | <p>Well, first of all, a separated scheme of finite type over $Spec(k)$ is not necessarily a curve. A <em>one dimensional</em> separated scheme of finite type etc. etc. may be a curve, but this is also not quite a curve in the sense that you describe.</p>
<p>A curve in the algebro-geometric sense is a one dimensional variety. However, depending on your base field $k$ this may look very different than a real curve. For example, we could consider the scheme
$$
Spec\ \mathbb{C}[x, y]/(y^2 - x^3 - 1)
$$
which produces an affine elliptic curve. Topologically though[1], this is a real 2-dimensional topological space. It is however, one <em>complex</em> dimensional, and so we call it a curve.</p>
<p>However, if you were to look at the real points of
$$
Spec\ \mathbb{R}[x, y]/(y^2 - x^3 - 1)
$$
(i.e. corresponding to maps $Spec\ \mathbb{R} \to Spec\ \mathbb{R}[x, y]/(y^2 - x^3 - 1)$), then you do obtain something that is a one (real) dimensional topological space, which is exactly the curve you think it is, at least once you choose the correct topology again.</p>
<p>As for curves viewed as maps $\gamma : [0, 1] \to \mathbb{R}^N$: these are not necessarily (in fact, not likely!) algebro-geometric curves, mostly because the image of such a curve may not be the zero locus of a collection of polynomial (or even analytic) functions.</p>
<p>Really, I think that viewing a curve as a "one dimensional thing", whatever that means in your context, is how you should think of it. So in the algebraic setting, it's going to be something that is locally the spectrum of a ring of Krull-dimension one. In the topological setting, it may be a topological space of Hausdorff dimension one.</p>
<p>[1] There is maybe a point that one should make about the Zariski versus the analytic topology here...</p>
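<p>The real locus can be poked at numerically; a small sketch sampling the real points of $y^2=x^3+1$ (the sampling grid is an arbitrary choice) confirms it is cut out by the defining polynomial and lives over $x\ge -1$:</p>

```python
import math

def real_points(xs):
    # real solutions of y^2 = x^3 + 1: for each x with x^3 + 1 >= 0
    # (i.e. x >= -1) there are at most two, so the real locus is a
    # one-real-dimensional curve
    pts = []
    for x in xs:
        t = x ** 3 + 1
        if t >= 0:
            y = math.sqrt(t)
            pts.append((x, y))
            if y != 0:
                pts.append((x, -y))
    return pts

pts = real_points([i / 10 for i in range(-20, 21)])
```

<p>Every sampled point satisfies the defining equation, and none has $x<-1$: over $\mathbb{R}$ the variety really is the familiar plane curve.</p>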
|
2,414,721 | <p>For all perfect numbers $N$, $\sigma (N) = 2N$, where $\sigma$ is the divisor sigma function.</p>
<p>Let $s$ be a perfect number of the form $3^m 5^n 7^k$, where $m,n,k \geq 1$ are integers.</p>
<p>Then $\sigma (s)= \sigma (3^m 5^n 7^k)$</p>
<p>$ =\sigma (3^m) \sigma (5^n) \sigma (7^k)$ since $3, 5,$ and $7$ are coprime to each other.</p>
<p>$ =\left(\frac{3^{m+1}-1}{2}\right)\left(\frac{5^{n+1}-1}{4}\right)\left(\frac{7^{k+1}-1}{6}\right)$</p>
<p>$ =2(3^m 5^n 7^k)$ since $s$ is a perfect number.</p>
<p>$\implies 9 (3^m 5^n 7^k) = 3^{m+1} 5^{n+1}+3^{m+1} 7^{k+1} + 5^{n+1} 7^{k+1} - 3^{m+1}-5^{n+1} - 7^{k+1}+1$ after some algebra.</p>
<p>This is as far as I got using this method. Any and all help would be appreciated.</p>
| hardmath | 3,111 | <p>Note that the Question stipulates the exponents considered are $m,n,k\ge 1$. As @lulu points out, the cases where one of the exponents is zero can (if desired) be ruled out by <a href="https://math.stackexchange.com/questions/1148588/prove-that-pj-qi-cannot-be-a-perfect-number-for-p-q-odd-distinct-primes">this previous Question</a>.</p>
<p>The following is a simplification of the proof that an odd perfect number cannot be divisible by $105$ found <a href="http://www.fen.bilkent.edu.tr/~cvmath/Problem/0610a.pdf" rel="nofollow noreferrer">here</a>, as previously linked under the <a href="https://math.stackexchange.com/questions/236891/can-an-odd-perfect-number-be-divisible-by-105">older Question</a> to that effect.</p>
<p>$N= 3^m 5^n 7^k$ is a perfect number if and only $S(N)$, the sum of all divisors of $N$ (including itself and $1$), equals $2N$.</p>
<p>Since $N$ is odd, it must be that $S(N)=2N$ is <em>not</em> divisible by $4$. Now:</p>
<p>$$ \frac{S(N)}{N} = \left(1+\frac{1}{3}+\ldots+\frac{1}{3^m}\right)
\left(1+\frac{1}{5}+\ldots+\frac{1}{5^n}\right)
\left(1+\frac{1}{7}+\ldots+\frac{1}{7^k}\right) $$</p>
<p>Since $m=1$ would give $\left(1+\frac{1}{3}+\ldots+\frac{1}{3^m}\right)=\frac{4}{3}$ and $k=1$ would give $\left(1+\frac{1}{7}+\ldots+\frac{1}{7^k}\right)=\frac{8}{7}$, either would imply $S(N)$ is divisible by $4$, contradicting our observation above.</p>
<p>Knowing thus $m,k\ge 2$, we get a contradiction:</p>
<p>$$ \begin{align*} 2 = \frac{S(N)}{N}
&\ge \left(1+\frac{1}{3}+\frac{1} {3^2} \right)
\left(1+\frac{1}{5}\right)
\left(1+\frac{1}{7}+\frac{1}{7^2}\right) \\
&= \frac{13}{9} \frac{6}{5} \frac{57}{49} = \frac{4446}{2205} \gt 2 \end{align*} $$</p>
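<p>The divisor-sum ratio is also easy to check by machine. Below is a small Python sketch (not part of the argument; the function name <code>sigma_ratio</code> is mine) that computes $S(N)/N$ for $N = 3^m 5^n 7^k$ via multiplicativity of the divisor sum, confirms the final fraction, and verifies by brute force over small exponents that the ratio never equals $2$:</p>

```python
from fractions import Fraction

def sigma_ratio(m, n, k):
    """S(N)/N for N = 3^m 5^n 7^k, using multiplicativity of the divisor sum."""
    r = Fraction(1)
    for p, e in ((3, m), (5, n), (7, k)):
        # sum of the geometric series 1 + 1/p + ... + 1/p^e
        r *= Fraction(p**(e + 1) - 1, (p - 1) * p**e)
    return r

# The bound used in the proof: m, k >= 2 and n >= 1 already push S(N)/N past 2.
assert sigma_ratio(2, 1, 2) == Fraction(4446, 2205)
assert sigma_ratio(2, 1, 2) > 2

# Brute force over small exponents: the ratio is never exactly 2,
# i.e. no number of this form is perfect.
assert all(sigma_ratio(m, n, k) != 2
           for m in range(1, 8) for n in range(1, 8) for k in range(1, 8))
```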
|
1,394,141 | <blockquote>
<p>Compute the tangent space $T_pM$ of the unit matrix $p=I$ when $$(i)\,M=SO(n)\\ (ii)\,M=GL(n)\\ (iii)\,M=SL(n).$$</p>
</blockquote>
<p><strong>My attempt:</strong>
I think I have computed the tangent space in the case that $M=SL(n)$. We can write $SL(n)=\det^{-1}(1)$ and I've proved in an earlier exercise that $1$ is a regular value and that $D\det(I)\cdot H= \text{trace } H$. So the tangent space consists of precisely those matrices $H$ which have vanishing trace.</p>
<p>I don't know how to proceed in the other two cases. Can we write $SO(n)$ or $GL(n)$ as the pre-image of a regular value?</p>
| ASCII Advocate | 260,903 | <p>These are foundational calculations in what used to be called the theory of "continuous groups", now called Lie groups. The group structures are in fact smooth in today's terminology, not only continuous. The beginning material of any text on <strong>Lie groups and Lie algebras</strong> will have what you are looking for. The respective answers for $SO(n), GL(n)$ and $SL(n)$ are </p>
<blockquote class="spoiler">
<p> (i) skew-symmetric matrices, (ii) all matrices, (iii) trace 0 matrices. </p>
</blockquote>
|
464,586 | <blockquote>
<p>Find a basis for $U=\{A\in\mathbb{M}_{22}\mid A^T=-A\}$.</p>
</blockquote>
<p>$\mathbb{M}_{22}$ denotes the set of all $2 \times 2$ matrices. This question appeared on an examination I wrote yesterday. Does a basis even exist? I can't think of any matrices in which $A^T=-A$ except for the zero matrix. If this is the case, I was taught (if I recall correctly), that $0$ can not be in a basis because it is linearly dependent.</p>
<p>$\begin{bmatrix}a&b\\b&c\end{bmatrix}\to \begin{bmatrix}-a&-b\\-b&-c \end{bmatrix}\implies a=b=c=0?$ </p>
| Branimir Ćaćić | 49,610 | <p>Well, let
$$
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \in \mathbb{M}_{22}.
$$
Then
$$
A + A^T = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} + \begin{pmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{pmatrix} = \begin{pmatrix} 2 A_{11} & A_{12}+A_{21} \\ A_{21}+A_{12} & 2A_{22}\end{pmatrix},
$$
so as long as you're working over a field of characteristic $\neq 2$, $A^T = -A$ if and only if
$$
A_{11} = 0, \quad A_{22} = 0, \quad A_{21} = -A_{12},
$$
so that
$$
U = \left\{\left. \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} \; \right| \;a \in F \right\} = \operatorname{span}\left\{ \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \right\}.
$$
So, what can you conclude?</p>
|
3,841,535 | <p>I'm having a lot of trouble solving this question via the differentiating with respect to a parameter method. I can get the correct result for the integral containing sine, but I'm totally lost when it comes to evaluating the integral containing cosine. Here's the problem statement:</p>
<p>Given:</p>
<p><span class="math-container">$$\int_{0}^{\infty} e^{-ax} \sin(kx) \ dx = \frac{k}{a^2+k^2}$$</span></p>
<p>evaluate <span class="math-container">$\int_{0}^{\infty} xe^{-ax}\sin(kx) \ dx$</span> and <span class="math-container">$\int_{0}^{\infty} xe^{-ax} \cos(kx) \ dx$</span>.</p>
<p>This is the last question in the 2nd chapter of 'Basic Training in Mathematics' by Shankar.
Any help would be appreciated, I've been tearing my hair out all day with this.</p>
| Abhijeet Vats | 426,261 | <p><strong>Hint:</strong></p>
<p>Have you considered differentiating:</p>
<p><span class="math-container">$$\int_{0}^{\infty} e^{-ax} \sin(kx) \ dx = \frac{k}{a^2+k^2}$$</span></p>
<p>with respect to <span class="math-container">$k$</span> instead? :-)</p>
|
674,621 | <p>I am trying to figure out what the three possibilities of $z$ are such that </p>
<p>$$
z^3=i
$$</p>
<p>but I am stuck on how to proceed. I tried algebraically but ran into rather tedious polynomials. Could you solve this geometrically? Any help would be greatly appreciated.</p>
| colormegone | 71,645 | <p>I believe your "polynomial" approach would also have worked, if this is what you meant :</p>
<p>[In this, we are supposing that we knew nothing of the "Euler Identity", DeMoivre's Theorem, or roots of unity, all of which provide quite efficient devices]</p>
<p>If we (probably safely) assume that the solution(s) are complex numbers, and call $ \ z \ = \ a + bi \ , $ with $ \ a \ $ and $ \ b \ $ real, we can write the equation as</p>
<p>$$ (a + bi)^3 \ = \ a^3 \ + \ 3a^2 b \cdot i \ + \ 3a b^2 \cdot i^2 \ + \ b^3 i^3 \ = \ (a^3 \ - \ 3ab^2) \ + \ (3a^2b \ - \ b^3) \cdot i \ \ = \ \ i \ , $$</p>
<p>by applying the binomial theorem and "powers of $ \ i \ $ ". Since the right-hand side of the equation is a pure-imaginary number, this requires that</p>
<p>$$ a^3 \ - \ 3ab^2 \ = \ a \ ( a^2 \ - \ 3b^2 ) \ = \ 0 \ \ \text{and} \ \ 3a^2b \ - \ b^3 \ = \ b \ (3a^2 \ - \ b^2) \ = \ 1 \ \ . $$</p>
<p>The first equation presents us with two cases:</p>
<p><strong>I --</strong> $ \ a \ = \ 0 \ $ :</p>
<p>$$ a \ = \ 0 \ \ \Rightarrow \ \ b \ ( \ 0 \ - \ b^2 ) \ = \ -b^3 \ = \ 1 \ \ \Rightarrow \ \ b \ = \ -1 \ \ \Rightarrow \ \ z \ = \ 0 - i \ \ ; $$</p>
<p><strong>II --</strong> $ \ a^2 \ - \ 3b^2 \ = \ 0 $ :</p>
<p>$$ a^2 \ = \ 3b^2 \ \ \Rightarrow \ \ b \ ( \ 3 \cdot [3b^2] \ - \ b^2 \ ) \ = \ 8b^3 \ = \ 1 \ \ \Rightarrow \ \ b \ = \ \frac{1}{2} $$</p>
<p>$$ \Rightarrow \ \ a^2 \ = \ 3 \ \left( \frac{1}{2} \right)^2 \ = \ \frac{3}{4} \ \ \Rightarrow \ \ a \ = \ \pm \frac{\sqrt{3}}{2} \ \ \Rightarrow \ \ z \ = \ \frac{\sqrt{3}}{2} + \frac{1}{2}i \ , \ -\frac{\sqrt{3}}{2} + \frac{1}{2}i \ \ . $$</p>
<p>We have found three complex-number solutions to the equation. As <strong>Dan</strong> says, (one form of) the Fundamental Theorem of Algebra states that this third-degree polynomial with complex coefficients has, in all, three roots (counting multiplicities, which are each 1 here).</p>
<p>We probably wouldn't want to use this method for degrees higher than this, as the algebra would become more difficult to resolve. The techniques described by the other posters are far more generally used.</p>
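<p>For the skeptical, a short numerical check (Python, purely illustrative) that the three values found above do cube to $i$:</p>

```python
import math

# The three solutions found above: z = -i and z = ±(sqrt(3)/2) + i/2
roots = [complex(0, -1),
         complex(math.sqrt(3) / 2, 0.5),
         complex(-math.sqrt(3) / 2, 0.5)]
for z in roots:
    assert abs(z**3 - 1j) < 1e-12  # each one cubes to i, up to rounding
```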
|
530,605 | <blockquote>
<p>Let <span class="math-container">$A$</span> be open in <span class="math-container">$\mathbb{R}^m$</span>; let <span class="math-container">$g:A\rightarrow\mathbb{R}^n$</span>. If <span class="math-container">$S\subseteq A$</span>, we say that <span class="math-container">$g$</span> satisfies the <strong>Lipschitz condition</strong> on <span class="math-container">$S$</span> if the function <span class="math-container">$\lambda(x,y)=|g(x)-g(y)|/|x-y|$</span> is bounded for <span class="math-container">$x\neq y\in S$</span>. We say that <span class="math-container">$g$</span> is <strong>locally Lipschitz</strong> if each point of <span class="math-container">$A$</span> has a neighborhood on which <span class="math-container">$g$</span> satisfies the Lipschitz condition.</p>
<p>Show that if <span class="math-container">$g$</span> is locally Lipschitz, then <span class="math-container">$g$</span> is continuous. Does the converse hold?</p>
</blockquote>
<p>For the first part, suppose <span class="math-container">$g$</span> is locally Lipschitz. So for each point <span class="math-container">$r\in A$</span>, there exists a neighborhood for which <span class="math-container">$|g(x)-g(y)|/|x-y|$</span> is bounded. Suppose <span class="math-container">$|g(x)-g(y)|/|x-y|<M$</span> in that neighborhood. Then <span class="math-container">$|g(x)-g(r)|<M|x-r|$</span> in that neighborhood of <span class="math-container">$r$</span>. Therefore <span class="math-container">$g(x)\rightarrow g(r)$</span> as <span class="math-container">$x\rightarrow r$</span>, and so <span class="math-container">$g$</span> is continuous at <span class="math-container">$r$</span>. This means <span class="math-container">$g$</span> is continuous everywhere in <span class="math-container">$A$</span>.</p>
<p>What about the converse? I don't think it holds, but can't come up with a counterexample.</p>
| Community | -1 | <p>Intuitively, a counterexample must be a function which is very steep without having a jump or other sort of discontinuity. Consider, for example, $$g(x) = x^{1/3}$$ at $0$. Then</p>
<p>$$\frac{|g(x) - g(0)|}{|x - 0|} = x^{-2/3}$$</p>
<p>This cannot be bounded in a neighborhood of $0$.</p>
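<p>A quick numerical illustration (mine, not part of the argument): the difference quotient of $g(x)=x^{1/3}$ at $0$ equals $x^{-2/3}$, which grows without bound as $x\to 0^+$, even though $g$ itself is continuous there:</p>

```python
def g(x):
    return x ** (1.0 / 3.0)  # real cube root; x >= 0 suffices here

# |g(x) - g(0)| / |x - 0| = x^(-2/3) blows up as x -> 0+
quotients = [abs(g(x) - g(0.0)) / x for x in (1e-2, 1e-4, 1e-6)]
assert quotients[0] < quotients[1] < quotients[2]
assert quotients[2] > 1e3  # already exceeds 1000 at x = 1e-6
```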
|
1,601,427 | <blockquote>
<p>Let $a,b,c$ be three nonnegative real numbers. Prove that $$a^2+b^2+c^2+3\sqrt[3]{a^2b^2c^2} \geq 2(ab+bc+ca).$$</p>
</blockquote>
<p>It seems that the inequality $a^2+b^2+c^2 \geq ab+bc+ca$ will be of use here. If I use that then I will get $a^2+b^2+c^2+3\sqrt[3]{a^2b^2c^2} \geq ab+bc+ca+3\sqrt[3]{a^2b^2c^2}$. Then do I use the rearrangement inequality similarly on $3\sqrt[3]{a^2b^2c^2}$?</p>
| chenbai | 59,487 | <p>$x^3=a^2,y^3=b^2,z^3=c^2 \implies x^3+y^3+z^3 +3xyz \ge 2(\sqrt{(xy)^3}+\sqrt{(yz)^3}+\sqrt{(xz)^3})$</p>
<p>we have $x^3+y^3+z^3 +3xyz \ge xy(x+y)+yz(y+z)+xz(x+z)$</p>
<p>$xy(x+y)\ge 2xy\sqrt{xy}=2\sqrt{(xy)^3} \implies xy(x+y)+yz(y+z)+xz(x+z)\ge 2(\sqrt{(xy)^3}+\sqrt{(yz)^3}+\sqrt{(xz)^3})$</p>
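<p>A random numerical check of the original inequality (Python, illustrative only; equality occurs at $a=b=c$, hence the small tolerance in the helper, which is my own name):</p>

```python
import random

def holds(a, b, c, tol=1e-9):
    """Check a^2+b^2+c^2+3(a^2 b^2 c^2)^(1/3) >= 2(ab+bc+ca) numerically."""
    lhs = a*a + b*b + c*c + 3 * (a*a * b*b * c*c) ** (1.0 / 3.0)
    return lhs >= 2 * (a*b + b*c + c*a) - tol

random.seed(1)
assert all(holds(random.uniform(0, 10), random.uniform(0, 10),
                 random.uniform(0, 10)) for _ in range(2000))
assert holds(1, 1, 1)  # the equality case a = b = c
```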
|
3,828,205 | <blockquote>
<p>Find all integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that satisfy <span class="math-container">$$x^4-12x^2+x^2y^2+30 < 0$$</span></p>
</blockquote>
<p>Letting <span class="math-container">$a = x^2$</span> and <span class="math-container">$b = y^2$</span> I got <span class="math-container">$$a^2-12a+ab+30 < 0$$</span></p>
<p>from which I managed to get <span class="math-container">$$a^2-12a+30+ab <0 \Rightarrow (a-6)^2+ab < 6.$$</span></p>
<p>However I'm not sure how to proceed from here. What should I do?</p>
| lab bhattacharjee | 33,337 | <p>Clearly, <span class="math-container">$x\ne0$</span>,</p>
<p><span class="math-container">$$\dfrac{x^4-12x^2+30}{x^2}<-y^2$$</span></p>
<p><span class="math-container">$$\iff x^2+\dfrac{30}{x^2}<12-y^2\le12$$</span></p>
<p>Now <span class="math-container">$x^2+\dfrac{30}{x^2}<12\iff0>x^4-12x^2+30=(x^2-6)^2-6$</span></p>
<p><span class="math-container">$(x^2-6)^2<6\iff-\sqrt6<x^2-6<\sqrt6\iff6-\sqrt6<x^2<6+\sqrt6$</span></p>
<p>But <span class="math-container">$6+\sqrt6<9$</span> and <span class="math-container">$6-\sqrt6>3\implies 3<x^2<9$</span></p>
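<p>The conclusion can be finished and double-checked by brute force: $3<x^2<9$ forces $x=\pm2$, and then $y^2<12-(4+30/4)=\tfrac12$ forces $y=0$. A short Python enumeration over a generous window (illustrative):</p>

```python
# Enumerate all integer pairs in a window comfortably containing the
# region 3 < x^2 < 9 forced by the analysis above.
solutions = sorted((x, y)
                   for x in range(-50, 51) for y in range(-50, 51)
                   if x**4 - 12 * x**2 + x**2 * y**2 + 30 < 0)
assert solutions == [(-2, 0), (2, 0)]
```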
|
1,165,828 | <p>I run into a problem when I'm trying to prove how $\tan^2x+1 = \sec^2x$, and $1+\cot^2x=\csc^2x$</p>
<p>I understand that $\sin^2x+\cos^2x = 1$. (To my understanding 1 is the Hypotenuse, please correct me if I'm wrong). If referring to a Pythagorean triangle, let's say a triangle where $a=3$, $b=4$, and $c=5$, or $a=\cos$ $b=\sin$ and $c=\text{hypotenuse}$.</p>
<p>$3^2 + 4^2 = 5^2$. Which is true and proves that this identity work. To my understanding, that is how this identity work.</p>
<p>However, when I try to make sense of the $\tan^2x+1 = \sec^2x$ and $1+\cot^2x=\csc^2x$ identity, using the triangle example from above, it doesn't work. For example, here's how I did it on paper,</p>
<p>$\tan^2x + 1 = {\sin^2x\over \cos^2x} + 1$, so I would get using the triangle above, ${4^2\over 3^2} + 1 = {16\over 9} + 1 = 2.777777778$</p>
<p>Now on the right hand side, $\sec^2x = {1\over \cos^2x} = {1\over 3^2} = .1111111111$. The answer I get from $\tan^2x+1$ DOES NOT EQUAL the answer I get from $\sec^2x$. I think I may be misunderstanding a critical part here that I can't really pinpoint.</p>
<p>However, I do know that if I prove $\tan^2x+1 = \sec^2x$ using just the identity itself it does work, for example,</p>
<p>$\tan^2x+1 = {\sin^2x\over \cos^2x} + 1 = (\text{after some simplification}) = {\sin^2x + \cos^2x \over \cos^2x} = {1\over \cos^2x} = \sec^2x$.</p>
<p>The same issue happens with $1+\cot^2x=\csc^2x$.</p>
<p>To my understanding, $\sin^2+\cos^2=1$ is the same as $a^2+b^2=c^2$. Am I right or wrong? I think there is a huge concept that I am missing between the UNIT CIRCLE and just TRIANGLES. </p>
| Felipe Faria | 219,178 | <p>Your issue is your understanding of the relationship between your basic trigonometry values and the sides of a triangle. You must keep in mind that the values of trigonometric functions are utilising the sides of your triangles in the following manner: $\sin(x) = \frac{opp}{hyp}$, $\cos(x) = \frac{adj}{hyp}$, and $\tan(x) = \frac{\sin(x)}{\cos(x)}$.</p>
<p>Let us use your example of a $3$, $4$, $5$ triangle to prove our identities. </p>
<p><img src="https://i.stack.imgur.com/tm41G.jpg" alt="triangle, much wow">
Let us go ahead and figure out our values of $\cos(x)$, $\sin(x)$, and $\tan(x)$:</p>
<p>$$\sin(x) = \frac{4}{5}\\
\cos(x) = \frac{3}{5}\\
\tan(x) = \frac{\sin(x)}{\cos(x)} = \frac{\frac{4}{5}}{\frac{3}{5}} = \frac{4}{3}.$$</p>
<p>Now that we have figured out our values, we may plug them into our identities: </p>
<p>$$\sin^2(x) + \cos^2(x) = 1\\
\left(\frac{4}{5}\right)^2 + \left(\frac{3}{5}\right)^2 = 1\\
\frac{16}{25} + \frac{9}{25} = 1\\
\frac{25}{25} = 1$$</p>
<p>$\,$</p>
<p>$$1 + \cot^2(x) = \csc^2(x)\\
1 + \left(\frac{1}{\frac{4}{3}}\right)^2 = \left(\frac{1}{\frac{4}{5}}\right)^2\\
1 + \left(\frac{3}{4}\right)^2 = \left(\frac{5}{4}\right)^2\\
1 + \frac{9}{16} = \frac{25}{16}\\
\frac{16}{16} + \frac{9}{16} = \frac{25}{16}\\
\frac{25}{16} = \frac{25}{16}$$</p>
<p>Attempt to prove $\tan^2(x) + 1 = \sec^2(x)$ by yourself to see if you fully understand. </p>
<hr>
<p>A quick re-cap on the relationship of the three identities you proposed. Assuming you understand where $\sin^2(x) + \cos^2(x) = 1$ originated from, you should have no problems understanding where the next two identities originated from.</p>
<p>$1 + \cot^2(x) = \csc^2(x)$:</p>
<p>$$\sin^2(x) + \cos^2(x) = 1\\
\frac{\sin^2(x)}{\sin^2(x)} + \frac{\cos^2(x)}{\sin^2(x)} = \frac{1}{\sin^2(x)}\\
1 + \cot^2(x) = \csc^2(x)$$</p>
<p>$\tan^2(x) + 1 = \sec^2(x)$:</p>
<p>$$\sin^2(x) + \cos^2(x) = 1\\
\frac{\sin^2(x)}{\cos^2(x)} + \frac{\cos^2(x)}{\cos^2(x)} = \frac{1}{\cos^2(x)}\\
\tan^2(x) + 1 = \sec^2(x)$$</p>
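<p>A numerical spot-check of all three identities, both at the 3-4-5 angle ($\sin x = 4/5$, $\cos x = 3/5$) and at a few generic angles (Python, illustrative; the helper name is mine):</p>

```python
import math

def check_identities(t, tol=1e-9):
    s, c = math.sin(t), math.cos(t)
    assert abs(s*s + c*c - 1) < tol                 # sin^2 + cos^2 = 1
    assert abs(1 + (c/s)**2 - (1/s)**2) < tol       # 1 + cot^2 = csc^2
    assert abs((s/c)**2 + 1 - (1/c)**2) < tol       # tan^2 + 1 = sec^2
    return True

# atan2(4, 3) is the angle of the 3-4-5 triangle: sin = 4/5, cos = 3/5
assert all(check_identities(t) for t in (math.atan2(4, 3), 0.3, 1.0, 2.5))
```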
|
1,756,567 | <p>Let $A$ be a $3\times 3$ matrix and $A^{2014}=0$. Must $A^3$ be the zero matrix? I can work out that $I-A$ is invertible, but I don't know how to proceed further.</p>
| Harald Hanche-Olsen | 23,290 | <p><strong>Hint:</strong> Note that $\operatorname{rank}(A^{n+1})\le\operatorname{rank}(A^n)$. Then show that if $\operatorname{rank}(A^{n+1})=\operatorname{rank}(A^n)$, then all powers $A^k$ with $k\ge n$ have the same rank.</p>
|
1,756,567 | <p>Let $A$ be a $3\times 3$ matrix and $A^{2014}=0$. Must $A^3$ be the zero matrix? I can work out that $I-A$ is invertible, but I don't know how to proceed further.</p>
| Ben Grossmann | 81,360 | <p><strong>Hint:</strong> Let $R(A)$ denote the range (AKA image or column space) of $A$. If $A^k=0$ for some $k$ (and if $A^2 \neq 0$), we <em>must</em> have
$$
R(A^3) \subsetneq R(A^2) \subsetneq R(A) \subsetneq \Bbb R^3
$$</p>
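<p>A concrete illustration of the hint (Python, with a hand-rolled $3\times 3$ multiply): for a single nilpotent Jordan block the chain of ranges is strictly decreasing, $A^2\neq 0$ but $A^3=0$, which is the extreme case the dimension count allows:</p>

```python
def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]  # a single 3x3 nilpotent Jordan block

ZERO = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
assert A2 != ZERO  # A^2 is not yet zero...
assert A3 == ZERO  # ...but A^3 is, so every higher power (e.g. A^2014) vanishes
```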
|
198,049 | <p>Let $S$ be a smooth projective surface over $\mathbb{C}$. (I guess this can be more general—higher dimension, other ground fields, non-projective, maybe even singular?—and I'd like to hear that.) Let $s \in S$ be a point. Let $\beta \colon X \to S$ be the blowup of $s \in S$. Suppose that $H^{i}(S, T_{S})$ is known for $i \in \{0,1,2\}$, as well as $\mathrm{Def}(S)$.</p>
<p>If I am not mistaken, the exceptional divisor $E$, which is a $(-1)$-curve, is rigid, in the sense that every deformation of $X$ also has a $(-1)$-curve. Therefore
$$\mathrm{Def}(X) \cong \mathrm{Def}(X,E) \cong \mathrm{Def}(S,s) \cong \mathrm{Def}(S) \times T_{S,s},$$
where the last isomorphism is not canonical. (Rather $T_{S,s}$ is the kernel of the forgetful map $\mathrm{Def}(S,s) \to \mathrm{Def}(S)$.)</p>
<p>This question is about the cohomological side of the picture, i.e. $H^{i}(X,T_{X})$ for $i \in \{0,1,2\}$. My intuition says that $H^{1}(X,T_{X})$ should also increase with dimension two, whereas the obstruction space $H^{2}(X,T_{X})$ should stay the same. I've tried to fiddle around with the spectral sequence
$$ H^{p}(S, R^{q}\beta_{*}T_{X}) \Longrightarrow H^{p+q}(X, T_{X}) $$
but I could not really come to the desired conclusions.</p>
<p>For $H^{0}(X, T_{X})$ we get the term $H^{0}(S, \beta_{*}T_{X})$.<br>
For $H^{1}(X, T_{X})$ we get the terms $H^{1}(S, \beta_{*}T_{X})$ and $H^{0}(S, R^{1}\beta_{*}T_{X})$. Now $R^{1}\beta_{*}T_{X}$ is a skyscraper sheaf supported on $s$, and vague geometric intuition makes me think that it is the tangent space $T_{S,s}$.<br>
Finally, $R^{2}\beta_{*}T_{X} = 0$, so for $H^{2}(X, T_{X})$ we get the terms $H^{2}(S, \beta_{*}T_{X})$ and $H^{1}(S, R^{1}\beta_{*}T_{X})$.<br>
But maybe this isn't the right way to approach the question…</p>
<p>So the main question is:</p>
<blockquote>
<p>What are the $H^{i}(X,T_{X})$ for $i \in \{0,1,2\}$?</p>
</blockquote>
<p>I've not been able to find this via google, though I guess this is pretty basic knowledge in deformation theory. But I'm pretty new to this field, so please bear with me.</p>
| BlaCa | 14,514 | <p>Let $S$ be a surface and $Z=\{p_1,...,p_n\}\subset S$ be a reduced subscheme of dimension zero. Let $\epsilon:\widetilde{S}\rightarrow S$ be the blow-up of $S$ at $Z$. Consider the exact sequence
$$0\mapsto \epsilon^{*}\Omega_{S}\rightarrow \Omega_{\widetilde{S}}\rightarrow i_{*}\Omega_{E/Z}\mapsto 0$$
where $i:E\hookrightarrow\widetilde{S}$ is the exceptional divisor. Note that $\mathcal{H}om(i_{*}\Omega_{E/Z},\mathcal{O}_{\widetilde{S}}) = 0$ and by Grothendieck duality $\mathcal{E}xt^{1}(i_{*}\Omega_{E/Z},\mathcal{O}_{\widetilde{S}})\cong i_{*}T_{E/Z}(E)$. So dualizing the above exact sequence we get
$$0\mapsto T_{\widetilde{S}}\rightarrow \epsilon^{*}T_{S}\rightarrow i_{*}T_{E/Z}(E)\mapsto 0.$$
Since $R^{1}\epsilon_{*}T_{\widetilde{S}} = 0$ we have
$$0\mapsto\epsilon_{*}T_{\widetilde{S}}\rightarrow T_{S}\rightarrow T_{S|Z}\mapsto 0.$$
Now, $R^{i}\epsilon_{*}T_{\widetilde{S}} = 0$ for any $i > 0$. So $H^{i}(\widetilde{S},T_{\widetilde{S}})\cong H^{i}(S,\epsilon_{*}T_{\widetilde{S}})$ for any $i\geq 0$ and we get the following exact sequence in cohomology
$$
\begin{array}{l}
0\mapsto H^{0}(\widetilde{S},T_{\widetilde{S}})\rightarrow H^{0}(S,T_S)\rightarrow K^{2n}\rightarrow H^{1}(\widetilde{S},T_{\widetilde{S}})\rightarrow H^{1}(S,T_S)\rightarrow 0 \rightarrow \\
\rightarrow H^{2}(\widetilde{S},T_{\widetilde{S}})\rightarrow H^{2}(S,T_S)\rightarrow 0\\
\end{array}
$$
Since the map between the tangent spaces $H^{1}(\widetilde{S},T_{\widetilde{S}})\rightarrow H^{1}(S,T_S)$ is surjective and the map between the obstruction spaces $H^{2}(S,\epsilon_{*}T_{\widetilde{S}})\rightarrow H^{2}(S,T_S)$ is injective the map $Def_{\widetilde{S}}\rightarrow Def_S$ is smooth of relative dimension $2n-\dim H^{0}(S,T_S) + \dim H^{0}(\widetilde{S},T_{\widetilde{S}})$. This means that the obstructions to deforming $\widetilde{S}$ are exactly the obstructions to deforming $S$. The vector space $K^{2n}$ parametrizes the deformations of $Z$ inside $S$ and the spaces $H^{0}(\widetilde{S},T_{\widetilde{S}})$, $H^{0}(S,T_S)$ parametrize the infinitesimal automorphisms of $\widetilde{S}$ and $S$ respectively. If the map
$$H^{0}(S,T_S)\rightarrow K^{2n}$$
is surjective then $H^{1}(\widetilde{S},T_{\widetilde{S}})\cong H^{1}(S,T_S)$ and the deformations of $\widetilde{S}$ are induced by deformations of $S$. Otherwise the deformations of $Z$ inside $S$ induce non-trivial deformations of $\widetilde{S}$.</p>
<p>For instance, take $S = \mathbb{P}^{2}$. Then $H^{0}(S,T_S)\cong T_{Id}Aut(\mathbb{P}^{2})$ has dimension $8$. If $n\leq 4$ the map
$$T_{Id}Aut(\mathbb{P}^{2})\rightarrow K^{2n}$$
is surjective and $H^{1}(\widetilde{S},T_{\widetilde{S}})\cong H^{1}(\mathbb{P}^{2},T_{\mathbb{P}^{2}})$. Indeed if $n\leq 4$ there is an automorphism mapping $Z$ to any other set of $n$ points in general position and moving $Z$ inside $\mathbb{P}^{2}$ just induces trivial deformations of $\widetilde{S}$. Furthermore, since $\mathbb{P}^{2}$ itself is rigid we have $H^{1}(\mathbb{P}^{2},T_{\mathbb{P}^{2}})=0$. </p>
<p>More generally, let $X$ be a smooth variety and $Z\subseteq X$ be a smooth subvariety. Then
$$T^{1}Def_{(X,Z)} = H^{1}(X,T_{X}(-\log Z)) = H^{1}(Bl_{Z}X,T_{Bl_{Z}X}) = T^{1}Def_{Bl_{Z}X}.$$</p>
|
238,809 | <p><a href="https://i.stack.imgur.com/W5ILn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5ILn.jpg" alt="enter image description here" /></a>
How to construct a tree like <a href="https://i.stack.imgur.com/XKYl9.png" rel="nofollow noreferrer">this</a>? I looked at <code>CompleteKaryTree</code> initially; there are some similarities overall, but it's still different.</p>
<pre><code>CompleteKaryTree[5, 2, GraphLayout -> "LayeredEmbedding", AspectRatio -> 1/4]
</code></pre>
<p><a href="https://i.stack.imgur.com/xrW63.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xrW63.png" alt="enter image description here" /></a></p>
<p>Another way, I've generated the coordinates of all the points, but I don't know how to connect them</p>
<pre><code>n=4;
pts=Join @@ Table[{1/2 (1+(n-j)!)+(i-1) (n-j)!,n-j-1},{j,0,n},{i,FactorialPower[n,j]}];
Graphics[{Point@pts}, ImageSize->Large]
</code></pre>
<p><a href="https://i.stack.imgur.com/L3hGw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L3hGw.png" alt="enter image description here" /></a></p>
| Ian Ford | 76,874 | <p>In 12.3 or later, you can use NestTree:</p>
<pre><code>choice[{_, list_List}]:=
MapIndexed[{e, pos} |-> {e, Delete[list, pos]}, list]
permutationTree[list_List] :=
NestTree[choice, {Null, list}, Infinity, First]
permutationTree[n_Integer] :=
permutationTree[Range[n]]
</code></pre>
<p><a href="https://i.stack.imgur.com/7gPCh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7gPCh.png" alt="enter image description here" /></a></p>
|
1,565,406 | <p>Which interpolation method should I use for complicated "smooth" curves such as </p>
<p>$\frac{\sin(x)}{x}$ for $x>0$.</p>
| Rorimac | 296,921 | <p>There are many different interpolation methods, and depending on the data you want to interpolate, some might work better than others. If you know that the dataset comes from a trigonometric function, say $\frac{\sin(x)}{x}$, then using <a href="https://en.wikipedia.org/wiki/Trigonometric_interpolation" rel="nofollow">trigonometric interpolation</a> might be right for you.
If you don't know if the dataset comes from a specific class of functions then using splines for interpolation could work well. Here is one such method for curve interpolation.</p>
<p>Lets say that you have a dataset $C = (c_0, \dots, c_m)$ with $c_i \in \mathbb{R}^d$. We wish to find a B-spline
$$
f(t) = \sum\limits_{i = 0}^m p_i N_i^n(t)
$$
that interpolates $C$. This means that for a parameter vector $T = (t_0,\dots,t_m)$ we have $f(t_i) = c_i$ for all $0\leq i \leq m$. For this we need to decide on the degree $n$ of the B-spline, create the basis functions $N_i^n$ from a knot vector $U = (u_0,\dots,u_k)$ and find the control points $P = (p_0,\dots,p_m)$, $p_i \in \mathbb{R}^d$. A common choice for degree is $n = 3$ which we will use here.</p>
<ol>
<li><p>To create the parameter vector $T$ we will use the <a href="http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/PARA-chord-length.html" rel="nofollow">chord length method</a> defined by
\begin{align*}
t_0 &= 0\\
t_i &= t_{i - 1} + \frac{|c_{i} - c_{i - 1}|}{L}\\
t_m &= 1
\end{align*}
where $L = \sum_i |c_i - c_{i - 1}|$ is the total length of the chord. The uniform and centripetal methods are other parameter selection options; a good discussion of their pros and cons is given in T. Foley and G. Nielson, <em>Knot selection for parametric spline interpolation</em>.</p></li>
<li><p>Use knot averaging suggested by de Boor (de Boor, A practical guide to splines p. 219) defined by
\begin{align*}
u_0 &= \dots = u_3 = 0\\
u_{3 + j} &= \frac{1}{3} \sum\limits_{i = j}^{3 + j - 1} t_i\\
u_{k - 3} &= \dots = u_k = 1.
\end{align*}
Here, $k$ has to satisfy $k = n + m + 1$ which means that for $n = 3$ we have $k = m + 4$. This will create a clamped B-spline.</p></li>
<li><p>Create the basis functions $N_i^3$ from the knot vector $U$ and use them to calculate the matrix
\begin{align*}
A_T = \begin{pmatrix}
{N_0^3}(t_0) & {N_1^3}(t_0) & \dots & {N_m^3}(t_0) \\
{N_0^3}(t_1) & {N_1^3}(t_1) & \dots & {N_m^3}(t_1) \\
\vdots & \vdots & \ddots & \vdots \\
{N_0^3}(t_m) & {N_1^3}(t_m) & \dots & {N_m^3}(t_m)
\end{pmatrix}.
\end{align*}
Solve the linear equation system $A_T P^\top = C^\top$ (where $P^\top$ is the transpose of $P$) to get the control points $P$.</p></li>
<li><p>Create the B-spline $f$ from $P$ and the basis functions $N_i^3$. This is a B-spline curve that will interpolate the dataset $C = (c_0,\dots,c_m)$.</p></li>
</ol>
<p>For this method to work well it is important to use a good parameter selection method. The chord length method is good for datasets that behave fairly well but others might work better for you. The paper by T. Foley and G. Nielson has a deeper discussion on the behaviour on different parameter selection methods. </p>
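<p>Steps 1 and 2 above are easy to prototype. The sketch below (Python; the function names are mine) computes the chord-length parameters and the averaged knot vector for a clamped cubic ($n=3$) B-spline on planar data; solving for the control points in step 3 would additionally require evaluating the basis functions:</p>

```python
import math

def chord_length_params(points):
    """Step 1: chord-length parameters with t_0 = 0 and t_m = 1."""
    dists = [math.hypot(q[0] - p[0], q[1] - p[1])
             for p, q in zip(points, points[1:])]
    L = sum(dists)
    ts, t = [0.0], 0.0
    for d in dists:
        t += d / L
        ts.append(t)
    ts[-1] = 1.0  # guard against rounding
    return ts

def averaged_knots(ts, n=3):
    """Step 2: de Boor's knot averaging for a clamped degree-n spline."""
    m = len(ts) - 1
    interior = [sum(ts[j:j + n]) / n for j in range(1, m - n + 1)]
    return [0.0] * (n + 1) + interior + [1.0] * (n + 1)

pts = [(0, 0), (1, 0), (1, 1), (2, 1), (3, 0)]
ts = chord_length_params(pts)
U = averaged_knots(ts)
assert ts == sorted(ts) and ts[0] == 0.0 and ts[-1] == 1.0
assert U == sorted(U)              # a valid knot vector is nondecreasing
assert len(U) == len(pts) + 4      # k + 1 = m + 5 knots when n = 3
```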
|
1,360,658 | <p>prove: $$\frac{1-\cos 2\theta}{1-\cos\theta}=2\cos\theta-2$$</p>
<p>I'm thinking that there will be something to square in this? Because I notice that the $LHS$ looks like the half-angle identity....</p>
<p>Edit: I am so sorry guys, my grave mistake, the expression should have been equal to 2 instead like,</p>
<p>$$\frac{1-\cos 2\theta}{1-\cos\theta}-2\cos\theta=2$$</p>
<p>BUT THANKS A LOT!</p>
| Mary Star | 80,708 | <p><strong>Hint:</strong> </p>
<p>Use the property:
$$\cos 2 \theta=2 \cos^2 \theta-1$$ </p>
|
2,619,344 | <p>I just struck with a doubt today</p>
<blockquote>
<p>Why do most of the standard inequalities require the variables to be positive.</p>
</blockquote>
<p>For example if we want to find minimum value of a certain expression say <span class="math-container">$a+b+c$</span> the very first thought that comes in our mind is the AM GM inequality but the question must satisfy a condition <span class="math-container">$\mathbf {a, b, c\ge 0}$</span>.</p>
<p>So I want to ask why is that so.</p>
<blockquote>
<p>Even in some very useful inequalities like the Muirhead inequality, Hölder's inequality, Minkowski's inequality, etc. we need the condition that the variables to be used must be non negative or positive.</p>
<p>While there are also some inequalities like Chebyshev's inequality and Rearrangement inequality, Cauchy Schwartz inequality which do not have restrictions of the variables or terms to be positive.</p>
</blockquote>
<p>I want to know why such condition is needed to make the inequalities true. Is there a mathematical sense and a reason to do so? Does this have to do anything with geometry( I saw proofs of AM GM inequality using geometry and as the variables used were the lengths of some segments they were confined to be non negative)</p>
<p>If someone has an idea please share.</p>
| carmichael561 | 314,708 | <p>Since $f$ is continuous on $[0,1]$ there is some constant $C$ such that $|f(x)|\leq C$ for all $x\in[0,1]$, hence
$$\Big|\int_0^1f(x)e^{-nx}\;dx\Big|\leq C\int_{0}^1e^{-nx}\;dx=C\frac{1-e^{-n}}{n}\leq \frac{C}{n}$$
Therefore the sum converges absolutely.</p>
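<p>A numerical illustration of the bound (not part of the proof): taking $f(x)=\cos(10x)$, so that $C=1$ works, each term is indeed at most $C/n$. A Python sketch with a simple trapezoid rule:</p>

```python
import math

def term(n, f=lambda x: math.cos(10 * x), steps=20000):
    """Trapezoid approximation of the n-th term, the integral of f(x)e^(-nx) over [0,1]."""
    h = 1.0 / steps
    g = lambda x: f(x) * math.exp(-n * x)
    return h * (0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, steps)))

# Here |f| <= C = 1 on [0,1], so each term is bounded by C/n as in the proof.
assert all(abs(term(n)) <= 1.0 / n for n in (1, 2, 5, 20))
```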
|
1,036,907 | <p>For <span class="math-container">$n$</span> in the natural numbers let</p>
<p><span class="math-container">$$a_n = \int_{1}^{n} \frac{\cos(x)}{x^2} dx$$</span></p>
<p>Prove, for <span class="math-container">$m ≥ n ≥ 1$</span> that <span class="math-container">$|a_m - a_n| ≤ \frac{1}{n}$</span> and deduce <span class="math-container">$a_n$</span> converges.</p>
<p>I am totally stuck on how to even go about approaching this. All help would be very gratefully received!</p>
| DeepSea | 101,504 | <p>$|a_m-a_n| = \left|\displaystyle \int_{n}^m \dfrac{\cos x}{x^2}dx\right| \leq \displaystyle \int_{n}^m \dfrac{|\cos x|}{x^2}dx \leq \displaystyle \int_{n}^m \dfrac{1}{x^2}dx = \dfrac{1}{n} - \dfrac{1}{m} < \dfrac{1}{n}$. Thus $\{a_n\}$ is a Cauchy sequence, hence converges.</p>
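<p>A numerical illustration of the Cauchy estimate (Python, trapezoid rule; not part of the proof):</p>

```python
import math

def a(n, steps_per_unit=2000):
    """Trapezoid approximation of a_n, the integral of cos(x)/x^2 over [1, n]."""
    N = (n - 1) * steps_per_unit
    h = (n - 1) / N
    f = lambda x: math.cos(x) / (x * x)
    return h * (0.5 * (f(1.0) + f(float(n)))
                + sum(f(1.0 + i * h) for i in range(1, N)))

# |a_m - a_n| <= 1/n - 1/m < 1/n for every m >= n >= 1
assert all(abs(a(m) - a(n)) <= 1.0 / n for n, m in [(2, 5), (3, 10), (5, 40)])
```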
|
223,718 | <p>Suppose I have two lists to which I apply the <code>Line</code> command for example</p>
<pre><code>A = {{-4,0},{4,0}}
B = {{0,4},{0,-4}}
</code></pre>
<p>and I take <code>Line[A]</code> and <code>Line[B]</code>. Is there a way to get Mathematica to tell me the intersection points of the line? Of course this is a very simple example, in practice the lines would have many defining points to approximate a curve.</p>
<p>further questions: How about if I had <span class="math-container">$n$</span> lists? Could I ask to find the intersection points in some bounded region only?</p>
| flinty | 72,682 | <p>Join each of your collections of lines using a <code>RegionUnion</code> into a single region, then intersect both regions with <code>RegionIntersection</code> like so:</p>
<pre><code>redLines = {
Line[{{-2.`, 4.`}, {-1.5`, 2.25`}, {-1.`, 1.`}, {-0.5`, 0.25`}, {0.`, 0.`}, {0.5`, 0.25`}, {1.`, 1.`}, {1.5`, 2.25`}, {2.`, 4.`}}]
, Line[{{2, -1}, {2.5, 2}}]
};
blueLines = {
Line[{{22.`, -3.`}, {15.49`, -2.3`}, {9.96`, -1.6`}, {5.41`, -0.9`}, {1.84`, -0.2`}, {-0.75`, 0.5`}, {-2.36`, 1.2`}, {-2.99`, 1.9`}, {-2.64`, 2.6`}, {-1.31`, 3.3`}, {1.`, 4.`}, {4.29`, 4.7`}, {8.56`, 5.4`}, {13.81`, 6.1`}, {20.04`, 6.8`}}]
, Line[{{-3, 1}, {2, 2}}]
};
intersections = RegionIntersection[RegionUnion@redLines, RegionUnion@blueLines];
isectCoordinates = Flatten[MeshPrimitives[intersections, 0] /. Point -> List, 1];
Graphics[{Red, redLines, Blue, blueLines, Black, PointSize[Large],intersections},
PlotRange -> {{-3, 3}, {-5, 5}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/mBxTJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBxTJ.png" alt="line intersections"></a></p>
<p>And if you want to extract the lines from a shape <em>(e.g square, triangle etc.)</em> then use <code>MeshPrimitives[shape, 1]</code>.</p>
|
2,262,001 | <p>Prove this inequality where $a$, $b$ and $c$ are sides of a triangle and $S$ its area.
$$\frac{ab + bc + ca}{4S}\ge \operatorname{ctg} \frac{\pi}{6}$$</p>
| Jack D'Aurizio | 44,121 | <p>By the sine theorem, the given inequality is equivalent to
$$ \frac{1}{\sin A}+\frac{1}{\sin B}+\frac{1}{\sin C} \geq 2\sqrt{3} \tag{1}$$
and since $\frac{1}{\sin(x)}$ is a convex function on the interval $(0,\pi)$, $(1)$ is a straightforward consequence of <a href="https://en.wikipedia.org/wiki/Jensen%27s_inequality" rel="nofollow noreferrer">Jensen's inequality</a>.</p>
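<p>A numerical sanity check of $(1)$, sketched in Python (not part of the proof; angles of a triangle sum to $\pi$, and equality holds at the equilateral triangle):</p>

```python
import math, random

def f(A, B):
    # sum of reciprocal sines over the three angles A, B, C = pi - A - B
    C = math.pi - A - B
    return 1 / math.sin(A) + 1 / math.sin(B) + 1 / math.sin(C)

equilateral = f(math.pi / 3, math.pi / 3)  # should equal 2*sqrt(3)

random.seed(0)
ok = all(f(A, (math.pi - A) * random.uniform(0.05, 0.95))
         >= 2 * math.sqrt(3) - 1e-12
         for A in [random.uniform(0.05, math.pi - 0.1) for _ in range(1000)])
```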
|
2,598 | <p>Should Wolfram Alpha Notebook questions be considered on-topic?</p>
<p>Here's an example: <a href="https://mathematica.stackexchange.com/questions/240780/calculating-double-integral-bounded-by-domain-in-wolfram-alpha-notebook">Calculating double integral bounded by domain in Wolfram Alpha Notebook</a></p>
<p>Here are some related meta Q&A:</p>
<ul>
<li><p><a href="https://mathematica.meta.stackexchange.com/questions/68/other-wri-product-discussion">Other WRI product discussion?</a></p>
</li>
<li><p><a href="https://mathematica.meta.stackexchange.com/questions/265/are-questions-about-doing-symbolic-math-in-wolfram-alpha-on-topic-here">Are questions about doing symbolic math in Wolfram Alpha on topic here?</a></p>
</li>
</ul>
<p><a href="https://www.wolfram.com/wolfram-alpha-notebook-edition/" rel="nofollow noreferrer">Wolfram Alpha Notebooks</a> are a new WRI product that hybridizes W|A and Mathematica. I looked only briefly, but it resembles a <em>Mathematica</em> notebook in which the only valid input starts with single-equals (probably without having to type <code>=</code>), though in the examples shown, some inputs are interpreted differently than in my <em>Mathematica</em>.</p>
<p>(For those who may not know, it was decided to consider questions about <a href="https://mathematica.stackexchange.com/help/on-topic">W|A off-topic</a>.)</p>
| Szabolcs | 12 | <p><strong>No, Wolfram|Alpha questions should be explicitly off-topic, regardless of whether W|A is being used through the web interface or Wolfram|Alpha Notebook</strong>.</p>
<p>Vote here if you agree (and post the opposite answer for voting, with appropriate arguments, if you disagree).</p>
<p>Arguments:</p>
<ol>
<li><p><strong>The interface is a minor detail that should not decide whether W|A is off-topic or not.</strong> It would be quite ridiculous to tell people: "Your questions about this W|A input is off-topic because you used the website. Had you downloaded the GUI and typed the input there, it would be on-topic."</p>
<p>As for asking about the GUI vs W|A input: as @MichaelE2 says, <em>"I think the distinction @b3m2a1 makes might be hard to make in practice."</em> I do not think most people who might post questions would be able to understand the distinction, unless they are also very familiar with Mathematica.</p>
</li>
<li><p>The reasons from <a href="https://mathematica.meta.stackexchange.com/a/267/12">https://mathematica.meta.stackexchange.com/a/267/12</a> still apply:</p>
<blockquote>
<p>I know that W|A runs on mma and it kinda sorta understands mma syntax. However, opening the door to such questions will only lower the bar and result in hit-and-run questions from folks who just want a quick result from W|A.</p>
</blockquote>
</li>
<li><p><strong>Having expertise in Mathematica / Wolfram Language (which is what users of this site have in common) does not translate to Wolfram|Alpha.</strong> While W|A understands some Mathematica-like input, it often interprets it differently from Mathematica. W|A does not have a documented syntax. Its understanding of natural language is constantly evolving and often unpredictable.</p>
</li>
<li><p><strong>W|A (not W|A Notebook) has a much larger userbase than Mathematica, most of whom have no familiarity with Mathematica whatsoever,</strong> and would not fit in the current community. Additionally, there is good reason to suspect that many W|A questions would be of the hit-and-run type, coming from students looking for a quick answer or homework solution.</p>
</li>
<li><p>It is difficult to maintain a community like the one organized around Mathematica.SE. <a href="https://mathematica.meta.stackexchange.com/q/2599/12">There are even concerns that things are not going as well as they used to.</a> In my opinion, inviting W|A questions (which are orthogonal to Wolfram Language questions) will bring nothing of value to the community. In fact, I worry that it might even be the last straw that breaks the camel's back.</p>
</li>
</ol>
<p>Note that there are already places for W|A questions: <a href="https://webapps.stackexchange.com/questions/tagged/wolfram-alpha">WebApps.SE</a> and Wolfram Community. There is no pressing need to create another one.</p>
|
1,139,789 | <p>Is $f(z)=z^n$ holomorphic?</p>
<p>I have tested a number of other functions using the Cauchy Riemann equations $u_x=v_y$, $v_x=-u_y$. However in the case of $f(z)=z^n$ I cannot think of a way to find the functions $u(x,y)$ and $v(x,y)$ without using a binomial expansion of $(x+iy)^n$. </p>
<p>Any help or pointers is appreciated.</p>
<p>edit - the problem requires the use of the Cauchy - Riemann equations and not the formal definition of complex differentiation.</p>
| egreg | 62,967 | <p>Consider the function $f(x,y)=(x+iy)^n$. Then the real and imaginary parts of this function are
$$
u(x,y)=\frac{1}{2}((x+iy)^n+(x-iy)^n)\\
v(x,y)=\frac{1}{2i}((x+iy)^n-(x-iy)^n)$$
so
$$
\frac{\partial u}{\partial x}=\frac{n}{2}((x+iy)^{n-1}+(x-iy)^{n-1})
$$
whereas
$$
\frac{\partial v}{\partial y}=
\frac{n}{2i}(i(x+iy)^{n-1}+i(x-iy)^{n-1})
$$</p>
<p>Verify also the other Cauchy-Riemann equation.</p>
<p>Is this legitimate? Yes, of course. We're just considering functions $\mathbb{R}\to\mathbb{C}$ and derivatives are perfectly defined as usual, with the usual properties.</p>
<hr>
<p>If you don't trust this (but you should), you can do a proof by induction. Denote by $u_n(x,y)$ and $v_n(x,y)$ the real and imaginary parts of $f(x,y)=(x+iy)^n$. Then
\begin{align}
u_n(x,y)+iv_n(x,y)&=(u_{n-1}(x,y)+iv_{n-1}(x,y))(x+iy)\\
&=
(xu_{n-1}(x,y)-yv_{n-1}(x,y))+i(xv_{n-1}(x,y)+yu_{n-1}(x,y))
\end{align}
So
\begin{align}
u_n(x,y)&=xu_{n-1}(x,y)-yv_{n-1}(x,y)\\
v_n(x,y)&=xv_{n-1}(x,y)+yu_{n-1}(x,y)
\end{align}
Compute the partial derivatives and apply the induction hypothesis.</p>
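<p>One can also sanity-check the Cauchy–Riemann equations numerically, say for $n=5$ (a Python sketch using central finite differences; this illustrates the claim, it is not a proof):</p>

```python
def u(x, y, n=5):
    return (complex(x, y) ** n).real  # real part of (x + iy)^n

def v(x, y, n=5):
    return (complex(x, y) ** n).imag  # imaginary part of (x + iy)^n

def partial(g, x, y, wrt, h=1e-6):
    # central finite difference of g with respect to 'x' or 'y'
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 0.7, -0.3
ux, uy = partial(u, x0, y0, 'x'), partial(u, x0, y0, 'y')
vx, vy = partial(v, x0, y0, 'x'), partial(v, x0, y0, 'y')
# Cauchy-Riemann: ux == vy and uy == -vx, up to discretization error
```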
|
1,139,789 | <p>Is $f(z)=z^n$ holomorphic?</p>
<p>I have tested a number of other functions using the Cauchy Riemann equations $u_x=v_y$, $v_x=-u_y$. However in the case of $f(z)=z^n$ I cannot think of a way to find the functions $u(x,y)$ and $v(x,y)$ without using a binomial expansion of $(x+iy)^n$. </p>
<p>Any help or pointers is appreciated.</p>
<p>edit - the problem requires the use of the Cauchy - Riemann equations and not the formal definition of complex differentiation.</p>
| Mårten W | 58,780 | <p>HINT: Show that if $f=u+iv$ and $g=\hat{u}+i\hat{v}$ satisfy the Cauchy-Riemann equations, then so does $fg$. Use this together with the fact that $z^n$ is a finite product of holomorphic functions (since $z$ is holomorphic).</p>
|
2,783,224 | <p>$$\lim_{x \rightarrow 0}\frac{x^2 \sin(Kx)}{x - \sin x} =1$$</p>
<p>Here $K$ is a constant whose value I want to find.</p>
<p>I got it by writing the series expansion of $\sin(\theta)$, but couldn't by L'Hospital's rule or standard limits.</p>
<p>This is what I tried:</p>
<p>$$\lim_{x \rightarrow 0}\frac{x^2 \sin(Kx)}{x - \sin x} =1$$</p>
<p>$$\implies \lim_{x \rightarrow 0}\frac{x \sin(Kx)}{1 - \frac{\sin x} x} =1$$</p>
<p>$$\implies \lim_{x \rightarrow 0}\frac{Kx^2 \frac{\sin(Kx)}{Kx}}{1 - \frac{\sin x} x} =1$$
So this gives a $0$ in the denominator, which doesn't help; L'Hospital's rule gives a huge mess, and applying it once more didn't help either.</p>
<p>The answer given is K =1/6.</p>
| Jiabin Du | 442,428 | <p>$$\lim_{x \rightarrow 0}\frac{x^2 \sin(Kx)}{x - \sin x} =\lim_{x\to 0}\frac{Kx^3}{x-\sin x}=\lim_{x\to 0}\frac{3Kx^2}{1-\cos x}=\lim_{x\to 0}\frac{6Kx}{\sin x}=6K$$</p>
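<p>A quick numerical check in Python (a sketch; with $K=1/6$ the ratio should approach $1$, though floating-point cancellation in $x-\sin x$ limits how small $x$ can usefully be taken):</p>

```python
import math

K = 1 / 6
def ratio(x):
    return x ** 2 * math.sin(K * x) / (x - math.sin(x))

vals = [ratio(10.0 ** -p) for p in (1, 2, 3)]  # x = 0.1, 0.01, 0.001
```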
|
<p>So, I am an absolute beginner in mathematics; only being knowledgeable in some basic ideas of the subject. My interest in math started only recently, while reading about set theory and cardinality (particularly the concept of higher infinities) in some other forums. Can you guys recommend me any fairly accessible books or any other material which I could use to understand those topics? Or do I need to study some other areas in mathematics before I am able to comprehend set theory or cardinals? </p>
| Mark S. | 26,369 | <p>If you are an "absolute beginner", then I would recommend starting by working through <a href="https://www.people.vcu.edu/~rhammack/BookOfProof/" rel="nofollow noreferrer">Book of Proof</a> by Richard Hammack, which builds up basic naive set theory, tours through a variety of good foundations for any math subject, and ends with an introduction to cardinality. There are other similar books you could start with like <a href="https://www.google.com/books/edition/How_to_Prove_It/IhkWcGRV1scC" rel="nofollow noreferrer">How to Prove It: A Structured Approach</a> by Velleman and <a href="https://www.google.com/books/edition/An_Introduction_to_Abstract_Mathematics/-asQAAAAQBAJ" rel="nofollow noreferrer">An Introduction to Abstract Mathematics</a> by Bond and Keane, but "Book of Proof" is free.</p>
<p>After gaining a foundation and exposure to mathematical proof in a variety of contexts like that, many introductions to set theory will become accessible. </p>
|
<p>So, I am an absolute beginner in mathematics; only being knowledgeable in some basic ideas of the subject. My interest in math started only recently, while reading about set theory and cardinality (particularly the concept of higher infinities) in some other forums. Can you guys recommend me any fairly accessible books or any other material which I could use to understand those topics? Or do I need to study some other areas in mathematics before I am able to comprehend set theory or cardinals? </p>
| Anonymous | 794,282 | <p>For a real beginner in mathematics who is particularly interested in set theory and cardinalities, I might recommend <em>Stories about Sets</em> by Vilenkin, which is aimed at a high school audience.</p>
<p>I don't recommend studying an axiomatic presentation of set theory until you have significant experience with proofs in one or two other areas of mathematics, such as abstract algebra, analysis, topology or number theory. By an axiomatic presentation, I mean one in which axioms are given for the behavior of sets, such as the "axiom of extensionality" or the "axiom of the power set." This includes the references by Weiss, Halmos and Cunningham mentioned in the comments above. (Strictly speaking, results from other areas of mathematics are mostly not necessary. But there are serious pedagogical and psychological obstacles for a student without any other math background.)</p>
<p>Once you have a sufficient general background in mathematics, <em>Introduction to Set Theory</em> by Hrbacek and Jech is a good choice.</p>
<p>In the meantime, in the course of studying other areas of math, you are exposed to aspects of set theory gradually, with sets presented on an intuitive level. In studying calculus with a modern textbook, you become accustomed to the basic use of set notation. In studying analysis, you learn about countable sets and get practice manipulating sets in more sophisticated ways. For example, <em>Mathematical Analysis</em> by Tom Apostol is an excellent introduction to analysis and has a good (non-axiomatic) chapter on sets.</p>
|
4,492,250 | <p>Say I have a function <span class="math-container">$f(x) = x^2 + 2$</span></p>
<p>This function never touches the x-axis, but it could be easily transformed to touch it by cancelling the constant as in <span class="math-container">$g(x) = (x^2 + 2) - 2$</span></p>
<p>Is there any way to generalize this, so that I can make any function "magnet" to the x-axis?</p>
| Pearson | 992,512 | <p>Simply letting <span class="math-container">$g(x)=f(x)-f(x_0)$</span> for any <span class="math-container">$x_0\in\mathbb R$</span> at which <span class="math-container">$f$</span> is well-defined works, since then <span class="math-container">$g(x_0)=0$</span>, so the graph of <span class="math-container">$g$</span> meets the x-axis at <span class="math-container">$x_0$</span>.</p>
|
253,208 | <p>Here's a problem I'm working on: </p>
<p>Find the matrix of T with respect to the standard bases of $P_{3}$ and $\mathbb{R}^{3}$: </p>
<p>$T(p(x)) = \left( \begin{array}{cc} p'(0) \\ p(0) \\ p(0) - p'(0)\end{array} \right)$</p>
<p>So I'll list the steps that I've been taking and hopefully someone will be able to tell me what I'm doing wrong. So the first thing I did was prove that the transformation was linear, which wasn't too bad. Now since I know that the transformation is linear I can make use of the theorem that says that every linear transformation can be written in the form $T(x) = Ax$ where $A$ is the coefficient matrix and $x$ is a vector. </p>
<p>Now I believe the standard basis for the polynomials in my example is $1, x, x^{2}, x^{3}$, so I assumed I could do the following: $A = ( T(1) \space \space T(x) \space \space T(x^{2}) \space \space T(x^{3}) )$. But here's where I start to get confused. The problem definition says that each function is evaluated at zero, so knowing what the basis elements are doesn't matter; you only need to know that there are four (correct me if I'm wrong about there being 4 standard basis vectors). </p>
<p>Next, if I take the derivative of $p(x)$ I'd get some polynomial, but if I input zero I'd just get some constant. Does this mean that each of the four columns of my coefficient matrix is just going to be a vector of constants? Thanks in advance for any help offered. </p>
| lab bhattacharjee | 33,337 | <p>2.</p>
<p>So, $103x=444+999a$ for some integer $a$</p>
<p>or, $103x=111(4+9a)$ or $\frac{103x}{111}=4+9a$ an integer. </p>
<p>$\implies 111\mid x$</p>
<p>Let $x=111y\implies 103y=4+9a$
$\implies 103y\equiv4\pmod 9$
$\implies 4y\equiv4\pmod 9$
$\implies y\equiv1\pmod 9$ as $(4,9)=1,y=9b+1$ for some integer $b$</p>
<p>So, $x=111y=111(9b+1)\equiv 111\pmod{999}$</p>
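<p>A brute-force Python check of the conclusion (verification only, not part of the argument): the residues $x$ with $103x\equiv 444\pmod{999}$ are exactly those with $x\equiv 111\pmod{999}$.</p>

```python
# all residues x modulo 999 with 103*x congruent to 444 (mod 999)
solutions = [x for x in range(999) if (103 * x) % 999 == 444]
```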
|
263,718 | <p>I have 2d data in the form <code>{x,y,f(x,y)}</code> which is randomly stored. the random <code>{x,y}</code> for specific shape (square for simplification here) can be created as</p>
<pre><code>Regn = {{-\[Pi], -\[Pi]}, {-\[Pi], \[Pi]}, {\[Pi], \[Pi]}, {\[Pi], -\
\[Pi]}, {-\[Pi], -\[Pi]}};
xylist=Select[RegionMember[Polygon@(Regn)]][
Join @@ CoordinateBoundsArray[CoordinateBounds@(Regn), Into[50]]]
</code></pre>
<p>then these points are passed to a function <code>f</code> here it is just <code>sin</code> and we get</p>
<pre><code>datxy = Table[{xylist[[i, 1]], xylist[[i, 2]],
Sin[xylist[[i, 1]] xylist[[i, 2]]]}, {i, 1, Length[xylist]}];
ListDensityPlot[datxy, ColorFunction -> "TemperatureMap"]
</code></pre>
<p><a href="https://i.stack.imgur.com/jR7l8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jR7l8.png" alt="enter image description here" /></a></p>
<p>my question is how can I plot the x or y derivative of such data?</p>
<p><strong>Update</strong></p>
<p>here I will compare the solution provided by the answers which indicate that the built-in interpolation gives bad results compared to the method provided by @Shin Kim</p>
<p><a href="https://i.stack.imgur.com/k8xlG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k8xlG.png" alt="enter image description here" /></a></p>
<p>I used <code>PlotPoints->50</code> in <code>DensityPlot</code> for the interpolated data</p>
| Shin Kim | 85,037 | <p>If you want to produce the derivatives directly using only the raw data,</p>
<pre><code>(*w.r.t y*)
datxy = Sort[Sort[datxy, #1[[1]] < #2[[1]] &],
Which[ #1[[1]] == #2[[1]] && #1[[2]] < #2[[2]], True,
#1[[1]] == #2[[1]] && #1[[2]] > #2[[2]], False ] &];
Dy = {};
Do[If[datxy[[i, 1]] == datxy[[i + 1, 1]],
AppendTo[Dy, Flatten@{datxy[[i, 1 ;; 2]], (datxy[[i + 1, {2, 3}]] - datxy[[i, {2, 3}]]) /. {dy_, df_} -> df/dy}]
], {i, Length@datxy - 1}]
(*w.r.t x*)
datxy = Sort[Sort[datxy, #1[[2]] < #2[[2]] &],
Which[ #1[[2]] == #2[[2]] && #1[[1]] < #2[[1]], True,
#1[[2]] == #2[[2]] && #1[[1]] > #2[[1]], False] &];
Dx = {};
Do[If[datxy[[i, 2]] == datxy[[i + 1, 2]],
AppendTo[Dx, Flatten@{datxy[[i, 1 ;; 2]], (datxy[[i + 1, {1, 3}]] - datxy[[i, {1, 3}]]) /. {dx_, df_} -> df/dx}]
], {i, Length@datxy - 1}]
</code></pre>
<p>then form the gradient:</p>
<pre><code>grad = {};
i = 1;
Do[
Do[If[Dx[[i, 1 ;; 2]] == Dy[[j, 1 ;; 2]],
AppendTo[grad, {Dx[[i, 1 ;; 2]], {Dx[[i, 3]], Dy[[j, 3]]}}]],
{j, Length@Dy}];
i++;
, Length@Dx]
ClearAll[i]
</code></pre>
<p>Following is a comparison with the actual gradient (right):</p>
<p><a href="https://i.stack.imgur.com/QOsqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QOsqX.png" alt="enter image description here" /></a></p>
<p>I coded this without any concern for performance, so there's a lot of room for optimization, if anyone's interested.</p>
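<p>The same forward-difference idea can be sketched in plain Python and checked against the analytic gradient of $\sin(xy)$ (a generic illustration, independent of the <em>Mathematica</em> code above):</p>

```python
import math

def f(x, y):
    return math.sin(x * y)

h = 0.01
x0, y0 = 0.8, -1.3
# forward differences, df/dx ~ (f(x+h,y) - f(x,y)) / h, likewise for y
dfdx = (f(x0 + h, y0) - f(x0, y0)) / h
dfdy = (f(x0, y0 + h) - f(x0, y0)) / h
# exact gradient of sin(xy): (y*cos(xy), x*cos(xy))
exact = (y0 * math.cos(x0 * y0), x0 * math.cos(x0 * y0))
```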
|
97,329 | <p>This is inspired by <a href="https://mathoverflow.net/questions/97307/polynomials-all-of-whose-roots-are-rational">this </a> question. Let $f(x)=a_nx^n+...+a_0$ be a polynomial with rational coefficients. The standard procedure of finding a rational root $p/q$ involves checking all $p$ that divide $a_0$ and all $q$ that divide $a_n$. This is not very complicated but involves factoring $a_0$ and $a_n$. The factoring problem is not known to be in P. If $n\le 4$, then the fact that the group $S_4$ is solvable and the well known formulas for roots of polynomials of degree $\le 4$ give an easy polynomial-time algorithm for finding rational roots. </p>
<p><b> Question</b> Is the problem of finding a rational root of $f(x)$ in P for every $n$ (say, for $n=5$)?</p>
<p><b> Update 1</b> After I posted the question, I noticed an answer by Robert Israel to the previous <a href="https://mathoverflow.net/questions/97307/polynomials-all-of-whose-roots-are-rational"> question</a> (of Joseph O'Rourke). That could give an answer to my question but I am still not sure how one can avoid factoring numbers $a_0,a_n$. </p>
<p><b> Update 2 </b> Robert Israel's explanations (see his comment <a href="https://mathoverflow.net/questions/97307/polynomials-all-of-whose-roots-are-rational">here </a>) convince me that his algorithm of checking whether a polynomial has a rational root (all roots rational) runs in polynomial time. </p>
<p>I removed Question 2 so that I can accept Michael Stoll's answer. I will post Question 2 as a separate question. </p>
| Michael Stoll | 21,146 | <p>Didn't Lenstra, Lenstra and Lovász in their famous LLL paper prove that factorization of polynomials over $\mathbb Q$ can be done in polynomial time? You get a rational root if and only if there is a factor of degree 1, and the polynomial has only rational roots if and only if all factors have degree 1.</p>
<p>Lenstra, A.K.; Lenstra, H.W.jun.; Lovász, László:
<em>Factoring polynomials with rational coefficients.</em> (English)
Math. Ann. <strong>261</strong>, 515-534 (1982).</p>
|
3,471,585 | <blockquote>
<p>All roots of polynomial <span class="math-container">$x^3+ax^2+17x+3b$</span>, <span class="math-container">$a,b\in \Bbb Z$</span> are integers. Prove that this polynomial doesn't have repeated roots.</p>
</blockquote>
<p>My plan was to find all three solutions and then compare them.</p>
<p>I don't know how to find the first root; I tried to use the cubic formula, but I got a large expression and only a few terms canceled. Any advice? Thanks.</p>
| nonuser | 463,553 | <p>Suppose it does, say <span class="math-container">$x_1=x_2 = m$</span> and <span class="math-container">$x_3=k$</span>. Then the second and third Vieta formulas give <span class="math-container">$$m^2+2mk =17$$</span> and <span class="math-container">$$m^2k =-3b$$</span></p>
<p>Since <span class="math-container">$3\mid m^2k$</span>, <span class="math-container">$m$</span> or <span class="math-container">$k$</span> is divisible by <span class="math-container">$3$</span>. Clearly <span class="math-container">$m$</span> is not (else <span class="math-container">$3\mid m^2+2mk=17$</span>), so <span class="math-container">$3\mid k$</span>. But then we have <span class="math-container">$m^2\equiv 17\equiv 2\pmod 3$</span>, a contradiction. </p>
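<p>The finitely many integer solutions of $m^2+2mk=17$ can be listed by brute force (a Python check of the contradiction; none of them makes $m^2k$ divisible by $3$):</p>

```python
# all integer pairs (m, k) in a generous window with m^2 + 2mk = 17
# (since m divides 17, the window certainly contains every solution)
solutions = [(m, k) for m in range(-100, 101) for k in range(-100, 101)
             if m * m + 2 * m * k == 17]
# the constant term would force 3 | m^2 * k, but no solution satisfies that
bad = [(m, k) for (m, k) in solutions if (m * m * k) % 3 == 0]
```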
|
3,471,585 | <blockquote>
<p>All roots of polynomial <span class="math-container">$x^3+ax^2+17x+3b$</span>, <span class="math-container">$a,b\in \Bbb Z$</span> are integers. Prove that this polynomial doesn't have repeated roots.</p>
</blockquote>
<p>My plan was to find all three solutions and then compare them.</p>
<p>I don't know how to find the first root; I tried to use the cubic formula, but I got a large expression and only a few terms canceled. Any advice? Thanks.</p>
| Community | -1 | <p><strong>Hint:</strong> There is a theorem that <span class="math-container">$f$</span> and <span class="math-container">$f'$</span> are relatively prime iff the polynomial <span class="math-container">$f$</span> has no repeated roots in a splitting field.</p>
|
3,471,585 | <blockquote>
<p>All roots of polynomial <span class="math-container">$x^3+ax^2+17x+3b$</span>, <span class="math-container">$a,b\in \Bbb Z$</span> are integers. Prove that this polynomial doesn't have repeated roots.</p>
</blockquote>
<p>My plan was to find all three solutions and then compare them.</p>
<p>I don't know how to find the first root; I tried to use the cubic formula, but I got a large expression and only a few terms canceled. Any advice? Thanks.</p>
| Maverick | 171,392 | <p>Let <span class="math-container">$f(x)=x^3+ax^2+17x+3b$</span></p>
<p><span class="math-container">$f'(x)=3x^2+2ax+17$</span></p>
<p>Putting <span class="math-container">$f'(x)=0$</span> we have <span class="math-container">$$x=\frac{-a\pm\sqrt{a^2-51}}{3}$$</span></p>
<p>If <span class="math-container">$f(x)=0$</span> has integer roots and a repeated root, then that repeated root is also a root of <span class="math-container">$f'(x)=0$</span>, so <span class="math-container">$f'$</span> has a rational root and hence <span class="math-container">$a^2-51$</span> must be the perfect square of an integer, which is possible only for <span class="math-container">$a=\pm 10$</span> or <span class="math-container">$a=\pm26$</span>. Correspondingly, the integer roots obtained are <span class="math-container">$\pm1,\pm17$</span> (neglecting the non-integer roots).</p>
<p>The third root can be obtained using Vieta's theorem</p>
<p>As <span class="math-container">$\alpha\beta+\beta\gamma+\gamma\alpha=17$</span></p>
<p>Setting <span class="math-container">$\alpha=\beta=1$</span> we obtain <span class="math-container">$\gamma=8$</span></p>
<p>Similarly,setting <span class="math-container">$\alpha=\beta=-1$</span> we obtain <span class="math-container">$\gamma=-8$</span></p>
<p>Similarly,setting <span class="math-container">$\alpha=\beta=17$</span> we obtain <span class="math-container">$\gamma=-8$</span></p>
<p>Similarly,setting <span class="math-container">$\alpha=\beta=-17$</span> we obtain <span class="math-container">$\gamma=8$</span></p>
<p>So the product of the roots gives <span class="math-container">$-3b\in\{8,-8,-2312,2312\}$</span>, none of which is divisible by <span class="math-container">$3$</span>.</p>
<p>Thus the value of <span class="math-container">$b$</span> obtained is not an integer, which is obviously a CONTRADICTION</p>
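<p>The step that $a^2-51$ is a perfect square only for $a=\pm 10,\pm 26$ can be confirmed with a quick Python scan (only small $|a|$ matter, since for $|a|>26$ consecutive squares differ by more than $51$):</p>

```python
import math

def is_square(n):
    # True iff n is a perfect square of an integer
    return n >= 0 and math.isqrt(n) ** 2 == n

hits = [a for a in range(-200, 201) if is_square(a * a - 51)]
```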
|
3,471,585 | <blockquote>
<p>All roots of polynomial <span class="math-container">$x^3+ax^2+17x+3b$</span>, <span class="math-container">$a,b\in \Bbb Z$</span> are integers. Prove that this polynomial doesn't have repeated roots.</p>
</blockquote>
<p>My plan was to find all three solutions and then compare them.</p>
<p>I don't know how to find the first root; I tried to use the cubic formula, but I got a large expression and only a few terms canceled. Any advice? Thanks.</p>
| lhf | 589 | <p>A polynomial has a multiple root iff its discriminant is zero.</p>
<p>The discriminant of <span class="math-container">$x^3+ax^2+17x+3b$</span> is <span class="math-container">$-12 a^3 b + 289 a^2 + 918 a b - 243 b^2 - 19652$</span>.</p>
<p>Mod <span class="math-container">$3$</span>, this discriminant reduces to <span class="math-container">$a^2 + 1$</span>, which is never zero.</p>
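<p>The mod-$3$ reduction can be verified mechanically (a Python sketch, using the discriminant polynomial exactly as stated above):</p>

```python
def disc(a, b):
    # discriminant of x^3 + a x^2 + 17 x + 3b
    return -12 * a**3 * b + 289 * a**2 + 918 * a * b - 243 * b**2 - 19652

pairs = [(a, b) for a in range(-20, 21) for b in range(-20, 21)]
reduces = all(disc(a, b) % 3 == (a * a + 1) % 3 for a, b in pairs)
never_zero = all(disc(a, b) % 3 != 0 for a, b in pairs)
```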
|
4,337,320 | <p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$F:G^n \to G$</span> with the following property: If <span class="math-container">$x_1,…,x_n,h \in G$</span>, then <span class="math-container">$F(hx_1,…,hx_n)=hF(x_1,…,x_n)$</span>. Is there a name for this type of function property? It is something I’ve been investigating lately. For instance, if <span class="math-container">$G$</span> is a vector space and <span class="math-container">$F$</span> outputs the average vector, then <span class="math-container">$F$</span> has this property.</p>
| Milten | 620,957 | <p>I would like to generalise Thomas Andrews's answer a bit more. Let <span class="math-container">$G$</span> act on two sets <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, and consider functions <span class="math-container">$F:X\to Y$</span> such that <span class="math-container">$F(gx) = gF(x)$</span>. In the original setting, we have <span class="math-container">$X=G^n$</span>, <span class="math-container">$Y=G$</span>. As Thomas Andrews has pointed out, <span class="math-container">$F$</span> is determined by its value on a representative of each orbit in <span class="math-container">$X/G$</span>. This is of course because if <span class="math-container">$F(x_0)$</span> is known, then <span class="math-container">$F(gx_0)=gF(x_0)$</span> is forced for all <span class="math-container">$gx_0$</span> in the orbit of <span class="math-container">$x_0$</span>.</p>
<p>In the original setting, Thomas Andrews showed (in slightly different words) that the functions <span class="math-container">$F$</span> are in fact in a one-to-one correspondence with the set of all functions <span class="math-container">$G^n/G\to G$</span> (i.e. functions <span class="math-container">$X/G\to Y$</span>). When do we have this nice property in general?</p>
<p>For each orbit in <span class="math-container">$a\in X/G$</span>, choose a canonical representative <span class="math-container">$\psi(a)\in a$</span>. Let's write <span class="math-container">$\hat x = \psi([x])$</span> for the representative of <span class="math-container">$x$</span>'s orbit. Now we want to define <span class="math-container">$F$</span> by choosing a value at each orbit representative, and then setting
<span class="math-container">$$
F(x) = F(g_x \hat x) := g_x F(\hat x).
$$</span>
This isn't well-defined in general, because there might be several choices for <span class="math-container">$g_x$</span>. But if the action on <span class="math-container">$X$</span> is <strong>free</strong>, then <span class="math-container">$g_x$</span> is unique by assumption, and we're golden. Thus,</p>
<blockquote>
<p>If <span class="math-container">$G$</span> acts on <span class="math-container">$Y$</span> and acts <strong>freely</strong> on <span class="math-container">$X$</span>, then the functions <span class="math-container">$F:X\to Y$</span> such that <span class="math-container">$F(gx)=gF(x)$</span> are in a one-to-one correspondence with the set of all functions <span class="math-container">$X/G \to Y$</span>.</p>
</blockquote>
<p>We recover the corresponding function <span class="math-container">$E:X/G \to Y$</span> by simply
<span class="math-container">$$
E(a) = F(\psi(a)).
$$</span>
Note however that the concrete correspondence is not unique or natural, since it depends on our arbitrary choices of representatives when defining <span class="math-container">$\psi$</span>.</p>
<hr />
<p>Lastly, take the special case <span class="math-container">$G\le H$</span>, where <span class="math-container">$G$</span> is a subgroup of <span class="math-container">$X=H$</span> and acts by left mulitplication. Then <span class="math-container">$h=g_h\hat h \implies g_h = h(\hat h)^{-1}$</span>, so the action is free (<span class="math-container">$g_h$</span> is unique), and our result applies. This was the case in the original setting, if we identify <span class="math-container">$G$</span> with the diagonal subgroup of <span class="math-container">$G^n$</span>. In this case
<span class="math-container">$$
\widehat {(g_1, \ldots, g_n)} := (g_n^{-1}g_1, \ldots, g_n^{-1}g_{n-1}, 1).
$$</span></p>
|
1,315,922 | <blockquote>
<p>Show that
$$\sum_{k=2}^{n}\left(\dfrac{2}{k}+\dfrac{H_{k}-\frac{2}{k}}{2^{k-1}}\right)\le 1+2\ln{n},$$ where $n\ge 2$ and $H_{k}=1+\dfrac{1}{2}+\cdots+\dfrac{1}{k}$.</p>
</blockquote>
<p>Maybe use $\ln{k}<H_{k}<1+\ln{k}$?</p>
| Macavity | 58,320 | <p>WLOG, let $a_n$ be positive. Now there are only $n-1$ coefficients left! hence the maximum number of sign changes possible is also exactly $n-1$, hence by Descartes rule of signs, that is the maximum possible number of positive roots.</p>
|
108,890 | <p>Consider $V_{(n-1, 1)}$, the $n-1$ dimensional irreducible representation of $S_n$, i.e. the "standard" or "defining" representation. Is there a nice formula for how the $k$-th tensor power of $V_{(n-1, 1)}$ decomposes into irreps?</p>
| Richard Stanley | 2,807 | <p>For convenience consider the representation $Y=V_n\oplus V_{n-1,1}$ instead of $V_{n-1,1}$. Then the multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $Y$ equals the scalar product of the symmetric function $s_1^k$ (where $s_1=x_1+x_2+\cdots$ denotes a Schur function) with the plethysm $s_\lambda[1+h_1+h_2+h_3+\cdots]$, where $h_i$ is the complete symmetric function of degree $i$. This follows from the theory of inner plethysm; see Exercise 7.74 of <em>Enumerative Combinatorics</em>, volume 2. Since plethysm is in general intractable, I don't expect anything much simpler. This result does allow, however, these decompositions to be computed using Stembridge's Maple package SF for small values of $n$ and $k$.</p>
<p><strong>Addendum.</strong> I used the method of Exercise 7.74 to get the analogous result for $V_{n-1,1}$.
Namely, the multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_{n-1,1}$ equals the scalar product of $s_1^k$ with the symmetric function $(1-e_1+e_2-e_3+\cdots)\cdot s_\lambda[1+h_1+h_2+h_3+\cdots]$, where $e_i$ is an elementary symmetric function.</p>
<p><strong>Addendum #2.</strong> An alternative formulation is the following. The multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_{n-1,1}$ equals the scalar product of $(s_1-1)^k$ with the symmetric function $s_\lambda[1+h_1+h_2+h_3+\cdots]$.</p>
<p><strong>News flash!</strong> I said above that plethysm in in general intractable. Indeed, the Schur function expansion of $s_\lambda[1+h_1+h_2+\cdots]$ looks hopeless to me. However, taking the scalar product with $s_1^k$ results in a lot of simplification. I can show the following. The multiplicity of the representation of $S_n$ indexed by the partition $\lambda$ of $n$ in the $k$th tensor power of $V_n\oplus V_{n-1,1}$ equals the coefficient of $s_\lambda$ in the Schur function expansion of $(1+h_1+h_2+\cdots)\cdot \sum_{j=1}^k
S(k,j)s_1^j$, where $S(k,j)$ is a Stirling number of the second kind. (After obtaining this result, I noticed that it is essentially the same as Corollary 2 of the Goupil-Chauve paper mentioned in Vasu Vineet's comment.) Since for fixed $j$ we have $S(k,j)=\frac{1}{j!}\sum_{i=1}^j (-1)^{j-i}{j\choose i}i^k$, we can get explicit formulas for the multiplicities for fixed $\lambda$ that don't involve Stirling numbers. For instance, when $\lambda=(3)$ the multiplicity is $\frac{1}{6}(3^k+3)$, for $\lambda=(2,1)$ it is $3^{k-1}$, and for $\lambda=(1,1,1)$ it is $\frac{1}{6}(3^k-3)$. In particular, the multiplicity for $\lambda = (1^n)$ (i.e., $n$ parts equal to 1) is $S(k,n)+S(k,n-1)$.</p>
|
108,890 | <p>Consider $V_{(n-1, 1)}$, the $n-1$ dimensional irreducible representation of $S_n$, i.e. the "standard" or "defining" representation. Is there a nice formula for how the $k$-th tensor power of $V_{(n-1, 1)}$ decomposes into irreps?</p>
| Abdelmalek Abdesselam | 7,410 | <p>The problem has been solved in the reference indicated in the comment
by Vasu Vineet, namely:
<a href="http://www.mat.univie.ac.at/~slc/wpapers/s54goupchau.html" rel="noreferrer">"Combinatorial Operators for Kronecker Powers of Representations of Sn"</a> by
Alain Goupil and Cedric Chauve. However, one cannot say that the formulas in Propositions 1 and 2 of this paper are "nice".</p>
|
2,933,577 | <p>Eliminate the parameters to find a Cartesian equation of the curve and sketch the curve.</p>
<p><span class="math-container">$x = e^t - 1, y = e^{2t}$</span>.</p>
<p>My attempt:</p>
<p><span class="math-container">$x = e^{t} - 1$</span></p>
<p><span class="math-container">$x + 1 = e^{t}$</span></p>
<p><span class="math-container">$\ln(x+1) = t$</span></p>
<p>so</p>
<p><span class="math-container">$y = e^{2t} = e^{2\ln(x+1)} = (x+1)^2$</span> [fixed mistake]</p>
<p>I think I eliminated the parameters now how would I sketch this? </p>
<p>I made a table</p>
<p><span class="math-container">\begin{array}{|c|c|c|c|}
\hline
t& 0 & 1 & 2 & 3 & 4 \\ \hline
x & 0& e^{1}-1 & e^{2} - 1 & e^{3}-1 & e^{4}-1\\ \hline
y & 1 & e^{2} & e^{4} & e^{6} & e^{8}\\ \hline
\end{array}</span></p>
<p>If I graph above with respect to x and y, then would this be correct?</p>
| José Carlos Santos | 446,262 | <p>As you were told in the comments, there is an error in your computations and the answer is <span class="math-container">$y=e^{2\ln(x+1)}=\left(e^{\ln(x+1)}\right)^2=(x+1)^2$</span>. But there is an even shorter path to that conclusion: since <span class="math-container">$x+1=e^t$</span>, then <span class="math-container">$y=e^{2t}=(e^t)^2=(x+1)^2$</span>.</p>
|
4,380,559 | <p><span class="math-container">$P$</span> is a <span class="math-container">$\mathbb{R}^{2\times2}$</span> positive semidefinite matrix satisfying <span class="math-container">$P=P^2$</span>, <span class="math-container">$P \neq 0$</span>, and <span class="math-container">$P \neq I$</span>. Show that
<span class="math-container">$$P=\begin{bmatrix}
\cos t \\
\sin t
\end{bmatrix}
\begin{bmatrix}
\cos t & \sin t
\end{bmatrix}$$</span>
for some <span class="math-container">$t$</span>.</p>
| M. Winter | 415,941 | <p>As a positive semi-definite matrix, <span class="math-container">$P$</span> has positive eigenvalues, and the matrix equation <span class="math-container">$P=P^2$</span> implies <span class="math-container">$\lambda=\lambda^2$</span> for each eigenvalue <span class="math-container">$\lambda$</span>. That is <span class="math-container">$\lambda\in\{0,1\}$</span>. Since <span class="math-container">$P$</span> is neither zero nor the identity, we find that one eigenvalue must be <span class="math-container">$0$</span>, and the other one must be <span class="math-container">$1$</span>. In particular, <span class="math-container">$P$</span> will be the projection onto the span of the eigenvector to eigenvalue <span class="math-container">$1$</span>. You have free choice of this eigenvector, and after normalization and we can write it as <span class="math-container">$(\cos t,\sin t)$</span>. You can check that the projection onto this vector is exactly the given matrix.</p>
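<p>A small numeric sketch (Python, assuming nothing beyond the statement) confirms that this rank-one matrix is idempotent with eigenvalues $0$ and $1$:</p>

```python
import math

def P(t):
    c, s = math.cos(t), math.sin(t)
    # outer product (cos t, sin t)^T (cos t, sin t)
    return [[c * c, c * s], [s * c, s * s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.7
M = P(t)
M2 = matmul(M, M)
# P^2 = P (idempotent), trace = 1, det = 0  =>  eigenvalues are 0 and 1
assert all(abs(M2[i][j] - M[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert abs((M[0][0] + M[1][1]) - 1) < 1e-12            # trace = lam1 + lam2 = 1
assert abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) < 1e-12  # det = lam1 * lam2 = 0
print("P is an idempotent rank-1 projection")
```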
|
95,616 | <p>I have tried to adapt <a href="https://mathematica.stackexchange.com/a/16290/29734">this answer</a> to my problem of calculating some bosonic commutation relations, but there are still some issues.</p>
<p>The way I'm implementing the commutator is straightforward:</p>
<pre><code>commutator[x_, y_] := x**y - y**x
</code></pre>
<p>Example: if I want to compute $[b^\dagger b,ab^\dagger]$ I write</p>
<pre><code>commutator[BosonC["b"]**BosonA["b"], BosonA["a"]**BosonC["b"]]
</code></pre>
<p>and the output is $ab^\dagger$ as it should be.
However, this fails when I compute $[a^\dagger a,ab^\dagger]$ (which should be $-ab^\dagger$):</p>
<pre><code>commutator[BosonC["a"]**BosonA["a"], BosonA["a"]**BosonC["b"]]
Out: a† ** a^2 ** b† - a ** b† ** a† ** a
</code></pre>
<p>How can I modify the code <a href="https://mathematica.stackexchange.com/a/16290/29734">in this answer</a> to have it work properly?</p>
<p><strong>EDIT</strong>
Building on the answers of @QuantumDot and @evanb, I came up with this solution. First I implement the commutator, with <code>Distribute</code>.</p>
<pre><code>NCM[x___] = NonCommutativeMultiply[x];
SetAttributes[commutator, HoldAll]
NCM[] := 1
commutator[NCM[x___], NCM[y___]] := Distribute[NCM[x, y] - NCM[y, x]]
commutator[x_, y_] := Distribute[NCM[x, y] - NCM[y, x]]
</code></pre>
<p>Then I implement two tools, one for teaching Mathematica how to swap creation and annihilation operators and one is for operator ordering:</p>
<pre><code>dag[x_] := ToExpression[ToString[x] ~~ "†"]
mode[x_] := With[{x†= dag[x]},
NCM[left___, x, x†, right___] := NCM[left, x†, x, right] + NCM[left, right]]
leftright[L_, R_] := With[{R† = dag[R], L† = dag[L]},
NCM[left___, pr : R | R†, pl : L | L†, right___] := NCM[left, pl, pr, right]]
</code></pre>
<p>Now I can use it like this: after evaluating the definitions I input (for instance)</p>
<pre><code>mode[a]
mode[b]
leftright[a,b]
</code></pre>
<p>And finally I can evaluate commutators, for instance</p>
<pre><code>commutator[NCM[a†,a] + NCM[b†,b], NCM[a,b†]]
(* 0 *)
</code></pre>
| evanb | 7,936 | <p>You can get far by just implementing the linearity rules for the commutator (and the fact that 0 and 1 "escape" the noncommutative product). The first two rules aren't strictly needed, but I include them for completeness:</p>
<pre><code>commutator[Plus[a_,A__],B_]:=commutator[a,B]+commutator[Plus[A],B]
commutator[A_,Plus[b_,B__]]:=commutator[A,b]+commutator[A,Plus[B]]
commutator[A_**B_,C_]:=A**commutator[B,C]+commutator[A,C]**B
commutator[A_,B_**C_]:=B**commutator[A,C]+commutator[A,B]**C
commutator[A_,A_]:=0
commutator[___,0,___]:=0
Unprotect[NonCommutativeMultiply];
NonCommutativeMultiply[___,0,___]:=0
NonCommutativeMultiply[H___, 1, T___] := NonCommutativeMultiply[H, T]
</code></pre>
<p>Now evaluating <code>commutator[adag**a,a**bdag]</code> gives</p>
<pre><code>(a**commutator[adag,bdag]+commutator[adag,a]**bdag)**a+adag**a**commutator[a,bdag]
</code></pre>
<p>You can get further by implementing the fact that different oscillators commute. For example:</p>
<pre><code>commutator[adag, bdag] = 0
commutator[a, bdag] = 0
</code></pre>
<p>Reevaluating <code>commutator[adag**a,a**bdag]</code> gives</p>
<pre><code>commutator[adag, a] ** bdag ** a
</code></pre>
<p>Finally, you can implement the canonical relation:</p>
<pre><code>commutator[adag, a] = 1;
</code></pre>
<p>To get what you want.</p>
<p>You can of course decorate / generalize what's above so that you don't have to write every relation out by hand.</p>
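<p>If you want an independent sanity check of these relations outside Mathematica, you can truncate the Fock space to finitely many levels and verify the commutator as a matrix identity. The Python sketch below is only an illustration (the truncation size <code>N</code> is arbitrary); since operators on different modes commute, $[a^\dagger a,\, a b^\dagger] = [a^\dagger a, a]\, b^\dagger = -a b^\dagger$ reduces the two-mode commutator in the question to a single-mode check:</p>

```python
# Truncate the bosonic Fock space to N levels and verify [a†a, a] = -a
# away from the truncation edge.
import math

N = 8

def annihilator(n):
    # a |k> = sqrt(k) |k-1>, i.e. matrix entries a[i][i+1] = sqrt(i+1)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        a[i][i + 1] = math.sqrt(i + 1)
    return a

def T(m):  # transpose = adjoint for this real matrix
    return [list(r) for r in zip(*m)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

a = annihilator(N)
ad = T(a)
num = mul(ad, a)                      # number operator a†a
comm = sub(mul(num, a), mul(a, num))  # [a†a, a]
# should equal -a on all matrix elements below the truncation edge
for i in range(N - 1):
    for j in range(N - 1):
        assert abs(comm[i][j] + a[i][j]) < 1e-12
print("[a†a, a] = -a verified on the truncated Fock space")
```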
|
3,155,241 | <p>Let <span class="math-container">$\mathbb{A}^n$</span> denote the set of n-tuples of elements from field <span class="math-container">$k$</span> and <span class="math-container">$I(X)$</span> the ideal of polynomials in <span class="math-container">$k[x_1,...,x_n]$</span> that vanish every point in <span class="math-container">$X$</span>. The note I’m reading, in showing that <span class="math-container">$\mathbb{A}^n$</span> is affine variety, says <span class="math-container">$I(\mathbb{A}^n)=(0)$</span>.</p>
<p>Why is that? I know it’s probably extremely trivial but I have very little background so please bear with me... Suppose that <span class="math-container">$k$</span> is <span class="math-container">$\mathbb{Z}_p$</span> then something like <span class="math-container">$x^p-x$</span> also belongs to <span class="math-container">$I(\mathbb{Z}_p)$</span> so I guess k must be infinite too (I don’t see this mentioned in the note)? Or am I confusing something?</p>
| Maxime Ramzi | 408,637 | <p>Let me just write up a proof without going through a contradiction (it's almost literally the same proof but it doesn't use contradiction so is somewhat cleaner and clearer)</p>
<p>It proceeds by induction, like dcolazin's proof . The proof for <span class="math-container">$n=1$</span> is the same. </p>
<p>Now let <span class="math-container">$p\in I(\mathbb{A}^{n+1})$</span>, write <span class="math-container">$p(x_1,...,x_n, y) = \displaystyle\sum_{k=0}^da_k(x_1,...,x_n)y^k$</span>. </p>
<p>Fix <span class="math-container">$(x_1,...,x_n)\in \mathbb{A}^n$</span>. </p>
<p>Then for all <span class="math-container">$y$</span>, <span class="math-container">$p(x_1,...,x_n, y)=0$</span>, therefore <span class="math-container">$\displaystyle\sum_{k=0}^da_k(x_1,...,x_n)X^k$</span> is the zero polynomial : <span class="math-container">$a_k(x_1,...,x_n) = 0$</span> for all <span class="math-container">$k$</span>. </p>
<p>But this is for any <span class="math-container">$(x_1,...,x_n)$</span>, hence for all <span class="math-container">$k$</span>, <span class="math-container">$a_k\in I(\mathbb{A}^n)$</span>, hence for all <span class="math-container">$k, a_k=0$</span> by induction. Therefore <span class="math-container">$p=0$</span>.</p>
|
2,243,900 | <p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p>
<blockquote>
<p>What exactly is calculus? </p>
</blockquote>
<p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
| Ben Grossmann | 81,360 | <p>In a nutshell: calculus is about derivatives and integrals. A <em>derivative</em> generalizes the idea of slope to graphs that are not lines. For instance, you might look at the graph of $y = x^2$ and notice that it gets steeper as $x$ increases, but how can we make this observation precise? You might ask what the <em>slope</em> of the graph is at $x = 1$, for instance. </p>
<p>But what could the "slope at a point" mean? Rise-over-run gives the formula for the slope of a <em>secant line</em>, but what I really want is a formula for the slope of a <em>tangent line</em>, which would have a "rise" and "run" of zero. </p>
<p>Traditionally, we resolve this paradox using limits (though it can also be done with "infinitesimals"). You'll see how limits work when you take calculus.</p>
<p>As it turns out, the answer we get to our question is that when $y = x^2$, $\frac{dy}{dx}|_{x = a} = 2a$. So: at $(0,0)$, the graph has a slope of $2(0) = 0$. At $(-1,1)$, the graph has a slope of $2(-1) = -2$. At $(2,4)$, the graph has a slope of $2(2) = 4$. This function $2x$ is called the <em>derivative</em> of the function $x^2$.</p>
<p>Integration, as you said, is about computing the area, usually between a graph and the $x$-axis, from $x = a$ to $x = b$. For instance,
$$
\int_1^4 2x\,dx
$$
means "the area underneath the graph $y = 2x$, between the values $x = 1$ and $x = 4$". The <em>fundamental theorem of calculus</em> relates integrals to derivatives. In this particular case: because we know a function whose derivative is $2x$ (in this case, $x^2$), we can find this area by calculating
$$
\int_1^4 2x\,dx = \left. x^2\right|_1^4 = (4)^2 - (1)^2 = 15
$$</p>
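<p>Both computations are easy to approximate numerically, which makes the limit idea concrete (a rough Python sketch, not a definition):</p>

```python
# Secant slopes of y = x^2 approach the derivative 2x, and midpoint
# Riemann sums of y = 2x over [1, 4] approach the exact area 15.
def secant_slope(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x * x
for a in (0.0, -1.0, 2.0):
    slope = secant_slope(f, a, 1e-6)
    assert abs(slope - 2 * a) < 1e-4      # matches dy/dx = 2a

def riemann(g, a, b, n):
    h = (b - a) / n
    # midpoint rule: sample g at the middle of each subinterval
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

area = riemann(lambda x: 2 * x, 1.0, 4.0, 1000)
assert abs(area - 15) < 1e-9   # the midpoint rule is exact for linear g
print(area)
```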
|
2,243,900 | <p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p>
<blockquote>
<p>What exactly is calculus? </p>
</blockquote>
<p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
| Mauro ALLEGRANZA | 108,274 | <p>Maybe useful:</p>
<ul>
<li>George Exner, <a href="https://books.google.it/books?id=IjP0BwAAQBAJ&printsec=frontcover" rel="nofollow noreferrer">Inside Calculus</a>, Springer (2000), <strong>Introduction</strong> - <em>Propaganda: For Students</em>, page vii:</li>
</ul>
<blockquote>
<p>Calculus is really two things: a <em>tool</em> to be used for solving problems for many other disciplines, and a <em>field of study</em> all its own. </p>
<p><em>Calculus as a tool</em> cares deeply about ways to find the largest value of
a function, or obtain relationships between rates of change of some related
variables, or obtain graphs of motion of physical objects.</p>
<p>The <em>study of calculus itself</em> is really the internal, supporting structure,
for all of the above tools and techniques.</p>
</blockquote>
<p>These "structure" is made of three basic concepts: <em>number, function</em>, and <em>limit</em>.</p>
|
2,243,900 | <p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p>
<blockquote>
<p>What exactly is calculus? </p>
</blockquote>
<p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
| Kyle Strand | 52,057 | <p>This is not a "standard" definition, and I am going to gloss over many details, but I think it provides some insight.</p>
<p>I consider the <strong>fundamental insight</strong> of calculus as a mathematical discovery (i.e., the basis for the branches of mathematics that you will eventually study) to be the idea that in some cases, an "infinitely close approximation" of a value that can't be directly computed (using pre-calculus methods) <em>actually is</em> equal to the value itself. So the study of calculus is essentially the study of:</p>
<ul>
<li><em>methodology</em> for calculating these "infinitely close approximations", and</li>
<li><em>analysis</em> for determining when such approximations are appropriate, in the sense that:
<ul>
<li>these approximations can be calculated, and</li>
<li>the result of this calculation will be equal to the desired value.</li>
</ul></li>
</ul>
<p>(Caveat: the actual branch of mathematics called "analysis" is closely related to my second bullet point, but the bullet point is not actually intended to be a true definition of that branch of mathematics.)</p>
<p>Thus, the first concept taught in calculus is that of a <em>limit</em>, which is the fundamental building block for the "infinitely close approximation" methodology. The next concept is typically that of <em>convergence</em> of limits, and examples of functions for which the limit at a particular point <em>does not equal</em> the value of the function itself at that point. This corresponds, broadly speaking, to my second bullet point, of analyzing when limits can be computed and when they are equal to the originally-desired value.</p>
<p>As mentioned elsewhere, calculus is typically divided into "differential" calculus and "integral" calculus. Each branch uses the concepts above to find values that can be represented in standard coordinate geometry but can't typically be computed directly using algebraic or geometric methods:</p>
<ul>
<li>Differential calculus concerns the problem of finding the local "slope" (called the "derivative") of a function at a given point. "Slope" is an algebraic/geometric concept, but cannot typically be computed directly except in simple cases (e.g. the slope of $y = x^2$ at $x = 0$ is $0$). The slope <em>can</em>, however, be approximated by taking two points near the point of interest and calculating the slope of the line connecting them; this approximation can be improved by bringing the points closer to the point of interest. The "infinitely close approximation" is the <em>limit</em> of this sequence of approximate slopes.</li>
<li>Integral calculus concerns the problem of finding the area under the curve (called the "integral") of a function. (It appears that this is the branch of calculus that you've heard of, since you mention the area under a graph.) Again, for most functions, this value cannot be calculated directly. But it can be approximated, for instance by picking some number of points along the graph and creating a sequence of line segments connecting them, then calculating the sum of the areas of the trapezoids formed by these line segments, the $x$ axis, and the vertical lines connecting the selected points to the $x$ axis. As the number of points selected is increased, this approximation improves; the "infinitely close approximation" is, again, the <em>limit</em> of this sequence of approximate areas.</li>
</ul>
<p>In both cases, <em>methodology</em> for calculating these limits is developed; it is often possible (and in some cases, surprisingly easy!) to take a function definition and describe a new pair of functions representing the <em>derivative</em> and <em>integral</em> of the original function at every point. And in both cases there are degenerate functions that make it impossible to calculate either the derivative or the integral (or both) at a certain point or set of points (or even the entire domain of the function), which is why the <em>analysis</em> of when such methods are appropriate is important.</p>
<p>The connection between these two branches is known as the "fundamental theorem of calculus"; it is a pair of theorems that essentially state that integration and differentiation are <em>inverses</em> in the sense that, given the integral of a function, one can find the original function using differentiation, and (modulo a constant) vice-versa.</p>
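<p>The trapezoid approximation described above can be sketched in a few lines (Python; the function and interval are arbitrary examples):</p>

```python
# Trapezoid-rule estimates of the area under y = x^2 on [0, 1] converge
# to the exact value 1/3 as the number of sample points grows.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

errors = [abs(trapezoid(lambda x: x * x, 0.0, 1.0, n) - 1 / 3)
          for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]   # finer partitions do better
assert errors[2] < 1e-6
print(errors)
```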
|
2,243,900 | <p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p>
<blockquote>
<p>What exactly is calculus? </p>
</blockquote>
<p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
| Tom Au | 12,506 | <p>I consider calculus to be the study of "infinitesimals."</p>
<p>As in, what happens to a secant line if you take smaller and smaller distances from the starting point. (It approaches the tangent line, which is its "derivative.")</p>
<p>Or what happens to your calculation of the area under a curve as you approximate it using "thinner" and thinner rectangles until they become infinite thin? (The approximation becomes progressively more accurate until you have a useful calculation.)</p>
|
31,782 | <p>This question is edited following the comment of Joseph. He pointed out that the main object of the first version of this question is the cut locus.</p>
<p>Recall that the cut locus of a set <span class="math-container">$S$</span> in a geodesic space <span class="math-container">$X$</span>
is the closure of the set of all points <span class="math-container">$p \in X$</span> that have two or more distinct shortest paths in <span class="math-container">$X$</span> from <span class="math-container">$S$</span> to <span class="math-container">$p$</span>.
<a href="http://en.wikipedia.org/wiki/Cut_locus" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Cut_locus</a></p>
<p>A simple lemma shows that, for a disk <span class="math-container">$D^2$</span> with a Riemannian metric and piecewise smooth <em>generic</em> boundary, the cut locus of <span class="math-container">$D^2$</span> with respect to its boundary is a tree.
A picture of such tree can be found on page 542, figure 17 of the article of Thurston "Shapes of polyhedra". The tree is white.
<a href="http://arxiv.org/PS_cache/math/pdf/9801/9801088v2.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/math/pdf/9801/9801088v2.pdf</a>
For an ellipse on the 2-plane, the tree is the segment that joins its focal points.</p>
<p>More generically for a Riemannian manifold <span class="math-container">$M^n$</span> with boundary, the cut locus of <span class="math-container">$\partial M$</span> should be a deformation retract of <span class="math-container">$M$</span>. (I guess it is a <span class="math-container">$CW$</span> complex of dimension less than <span class="math-container">$n$</span>.) To prove this lemma, notice that <span class="math-container">$M^n\setminus\operatorname{cut-locus}(\partial M^n)$</span> is canonically foliated by geodesic segments that join <span class="math-container">$X$</span> with <span class="math-container">$\partial M$</span>.</p>
<p>I wonder if this lemma has a name or maybe is contained in some textbook on Riemannian geometry?</p>
| Joseph O'Rourke | 6,094 | <p>Let me continue the comments above here so I can include a figure.
Here are examples of the <em>medial axis</em> of two different convex polygons
(from my own work):
<hr>
<img src="https://i.stack.imgur.com/v7kba.jpg" alt="medaxis">
<hr>
The term <em><a href="https://en.wikipedia.org/wiki/Medial_axis" rel="noreferrer">medial axis</a></em> is used in computer science to denote the same concept
as the <em><a href="https://en.wikipedia.org/wiki/Cut_locus" rel="noreferrer">cut locus</a></em>.</p>
<p>Franz-Erich Wolter wrote his Ph.D. dissertation on "<a href="https://books.google.com/books/about/Cut_loci_in_bordered_and_unbordered_Riem.html?id=GLrIcQAACAAJ" rel="noreferrer">Cut loci in bordered and unbordered Riemannian manifolds</a>"
(Technische Universität Berlin, 1985).
That might contain some useful information.</p>
|
162,147 | <p>I have</p>
<pre><code>J = Table[{x10, y10, x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}]
L = Table[{x10, y10, 2.0*x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}]
</code></pre>
<p>I want the third elements of J and L to be added, while the first and second elements stay as they are (they are the same in both tables).</p>
| Picaud Vincent | 42,847 | <p>Something like: </p>
<pre><code>MapThread[{#1[[1]], #1[[2]], #1[[3]] + #2[[3]]} &, {J, L}, 2]
</code></pre>
<p>does the job.</p>
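<p>For comparison, here is the same pairwise combination written out in Python (hypothetical data mirroring the tables in the question, not the Mathematica code itself):</p>

```python
# Keep the shared x, y of each pair of triples and add the third entries.
def combine(j, l):
    return [[(x1, y1, r1 + r2)
             for (x1, y1, r1), (_, _, r2) in zip(rowj, rowl)]
            for rowj, rowl in zip(j, l)]

steps = [0.0, 0.5, 1.0]
J = [[(x, y, x * y) for y in steps] for x in steps]
L = [[(x, y, 2.0 * x * y) for y in steps] for x in steps]
out = combine(J, L)
assert out[1][1] == (0.5, 0.5, 0.75)   # 0.25 + 0.5
assert out[2][2] == (1.0, 1.0, 3.0)    # 1.0 + 2.0
print(out[2])
```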
|
162,147 | <p>I have</p>
<pre><code>J = Table[{x10, y10, x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}]
L = Table[{x10, y10, 2.0*x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}]
</code></pre>
<p>I want the third elements of J and L to be added and the first and second elements are as they are (as they are the same in both cases) for an example.</p>
| mrz | 19,892 | <pre><code>j = Table[{x10, y10, x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}];
l = Table[{x10, y10, 2.0*x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}];
Partition[
Transpose[{Partition[Flatten[j], 3], Partition[Flatten[l], 3]}] /.
{{x1_, y1_, r1_}, {x2_, y2_, r2_}} :> {x1, y1, r1 + r2}
, 3]
{{{0., 0., 0.}, {0., 0.5, 0.}, {0., 1., 0.}},
{{0.5, 0., 0.}, {0.5, 0.5, 0.75}, {0.5, 1., 1.5}},
{{1., 0., 0.}, {1., 0.5, 1.5}, {1., 1., 3.}}}
</code></pre>
|
177,091 | <p>What would $\int\limits_{-\infty}^\infty e^{ikx}dx$ be equal to where $i$ refers to imaginary unit? What steps should I go over to solve this integral? </p>
<p>I saw this in the Fourier transform, and am unsure how to solve this.</p>
| BlueTrin | 36,894 | <p>Isn't it </p>
<p>$$
-{i\over k} e^{i k x} + C
$$</p>
<p>Just start by differentiating</p>
<p>$$
e^{i k x}
$$</p>
<p>And work out what constants are needed to adjust the result or alternatively use the formula by noticing that you are integrating</p>
<p>$$
e^{c x}
$$</p>
<p>with:</p>
<p>$$
c=ik
$$</p>
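<p>As a quick sanity check of the antiderivative (note this only concerns the indefinite integral; the original improper integral over $\mathbb{R}$ does not converge in the ordinary sense), one can differentiate numerically with complex arithmetic:</p>

```python
# Verify numerically that d/dx [ -i/k * e^{ikx} ] = e^{ikx}
# using a small central difference; k is an arbitrary nonzero example.
import cmath

k = 2.5
F = lambda x: (-1j / k) * cmath.exp(1j * k * x)   # candidate antiderivative
f = lambda x: cmath.exp(1j * k * x)               # integrand

h = 1e-6
for x in (-1.0, 0.3, 2.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-8
print("antiderivative verified")
```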
|
3,986,785 | <p>On page 153 of <em>Linear Algebra Done Right</em> the second edition, it says:</p>
<blockquote>
<p>Define a linear map <span class="math-container">$S_1: \text{range}(\sqrt{T^*T} ) \to \text{range}(T)$</span> by:</p>
</blockquote>
<blockquote>
<p><strong>7.43:</strong> <span class="math-container">$S_1 (\sqrt{T^* T}v)=Tv$</span></p>
</blockquote>
<blockquote>
<p>First we must check that <span class="math-container">$S_1$</span> is <strong>well defined</strong>. To do this, suppose <span class="math-container">$v_1, v_2 \in V$</span> are such that <span class="math-container">$\sqrt {T^*T}v_1 = \sqrt{T^*T}v_2$</span>. For the definition given by 7.43 to make sense, we must show that <span class="math-container">$Tv_1=T v_2$</span>.</p>
</blockquote>
<p>It is not entirely clear to me what the term 'well-defined' means here. Can someone clarify?</p>
<p>Thanks</p>
| Community | -1 | <p>I'll describe well-definedness in terms of a simpler function. Let's say that I wanted to describe a function on non-zero rational numbers by <span class="math-container">$$f(p/q)=q/p$$</span> In this case, it falls on me to show that the value of <span class="math-container">$f$</span> at a certain rational number is independent of the way that the ratio is formed, for instance that <span class="math-container">$f(1/2)=f(2/4)$</span>. If I can show that, then I can get away with defining a function whose "input" is not a single independent variable.</p>
|
3,986,785 | <p>On page 153 of <em>Linear Algebra Done Right</em> the second edition, it says:</p>
<blockquote>
<p>Define a linear map <span class="math-container">$S_1: \text{range}(\sqrt{T^*T} ) \to \text{range}(T)$</span> by:</p>
</blockquote>
<blockquote>
<p><strong>7.43:</strong> <span class="math-container">$S_1 (\sqrt{T^* T}v)=Tv$</span></p>
</blockquote>
<blockquote>
<p>First we must check that <span class="math-container">$S_1$</span> is <strong>well defined</strong>. To do this, suppose <span class="math-container">$v_1, v_2 \in V$</span> are such that <span class="math-container">$\sqrt {T^*T}v_1 = \sqrt{T^*T}v_2$</span>. For the definition given by 7.43 to make sense, we must show that <span class="math-container">$Tv_1=T v_2$</span>.</p>
</blockquote>
<p>It is not entirely clear to me what the term 'well-defined' means here. Can someone clarify?</p>
<p>Thanks</p>
| Hagen von Eitzen | 39,174 | <p>Ultimately, well-defined simply means: defined. It comes up, however, in a special way of defining functions.
Namely, we want to define a map <span class="math-container">$f\colon A\to B$</span> in terms of given maps <span class="math-container">$ g\colon C\to A$</span> and <span class="math-container">$h\colon C\to B$</span>. We <em>attempt</em> to do so by saying that for given <span class="math-container">$x\in A$</span>, we pick <span class="math-container">$z\in C$</span> with <span class="math-container">$g(z)=x$</span> and then set <span class="math-container">$f(x):=h(z)$</span>. But is that really a definition of a function? We need two properties:</p>
<ol>
<li>For every <span class="math-container">$x\in A$</span>, there exists <span class="math-container">$z\in C$</span> with <span class="math-container">$g(z)=x$</span>.</li>
<li>If there are multiple choices for <span class="math-container">$z$</span>, say <span class="math-container">$g(z_1)=g(z_2)=x$</span>, then <span class="math-container">$h(z_1)=h(z_2)$</span></li>
</ol>
<p>By showing these two properties, we prove that our attempted definition is indeed a definition. In this case we say that <span class="math-container">$f$</span> is well-defined.</p>
|
3,319,903 | <p>What exactly is the generating function of ordered partitions, and how can I get the number of ordered partitions from it?</p>
<p>Example:</p>
<p><span class="math-container">$$ 4 = 1+1+1+1 \\ = 2+2 \\ = 1+1+2 \\ = 1+2+1 \\ = 2+1+1 \\ = 1+3 \\ = 3+1 \\ = 4 $$</span>
so we have <span class="math-container">$8$</span> ordered partitions. I was thinking about an exponential generating function:</p>
<p><span class="math-container">$$(1+x+\frac{x^2}{2!} + ...)(1+\frac{x^2}{2!} +\frac{x^4}{4!} +...)...(1+\frac{x^n}{n!} + ... ) = e^x e^{2x} e^{3x} \cdots e^{nx} = e^{n(n+1)x/2} = \sum_{k \ge 0}\frac{\left(\frac{n(n+1)}{2}x\right)^k}{k!} $$</span>
so the number of ordered partitions of <span class="math-container">$n$</span> is
<span class="math-container">$$\left(\frac{n(n+1)}{2}\right)^n $$</span>
but for <span class="math-container">$n=4$</span> it is
<span class="math-container">$$10^4$$</span> It seems to be completely wrong.</p>
| mathsdiscussion.com | 694,428 | <p>Let us say we want to partition <span class="math-container">$n$</span>.</p>
<p>Write it as <span class="math-container">$\underbrace{1.1.1.\ldots.1}_{n~\text{times}}$</span>.</p>
<p>Between these <span class="math-container">$n$</span> ones there are <span class="math-container">$(n-1)$</span> gaps, and each gap can be treated in two ways: either the adjacent numbers are merged into a single larger part, or they are separated into different parts.</p>
<p>E.g., for <span class="math-container">$n = 3$</span>, starting from <span class="math-container">$1.1.1$</span>:
<span class="math-container">$$ 1+1+1 $$</span>
<span class="math-container">$$ 1+1(1), \;\text{i.e.}\; 1+2 $$</span>
<span class="math-container">$$ 1(1)+1, \;\text{i.e.}\; 2+1 $$</span>
<span class="math-container">$$ 1(1)(1), \;\text{i.e.}\; 3 $$</span>
Similarly, for <span class="math-container">$n$</span> there will be <span class="math-container">$2^{n-1}$</span> ordered partitions.</p>
18,090 | <p>I am wondering if anyone could prove the following equivalent definition of recurrent/persistent state for Markov chains:</p>
<p>1) $P(X_n=i,n\ge1|X_0=i)=1$</p>
<p>2) Let $T_{ij}=\min\{n: X_n=j|X_0=i\}$; a state $i$ is recurrent if, for every $j$ such that $P(T_{ij}<\infty)>0$, one has $P(T_{ij}<\infty)=1$</p>
| leonbloy | 312 | <p>I think the first property is wrong. That would be a strictly periodic state. Perhaps you missed something?</p>
|
1,361,517 | <p>$K_1 K_2 \dotsb K_{11}$ is a regular $11$-gon inscribed in a circle, which has a radius of $2$. Let $L$ be a point, where the distance from $L$ to the circle's center is $3$. Find
$LK_1^2 + LK_2^2 + \dots + LK_{11}^2$.</p>
<p>Any suggestions as to how to solve this problem? I'm unsure what method to use. </p>
| Michael Hardy | 11,667 | <p>Let $O$ be the center of the circle. One would like to say $LO^2 + OK_j^2 = LK_j^2\quad$ --- a Pythagorean theorem of sorts, except that $LO$ is not at a right angle with $OK_j$. So
\begin{align}
LO^2 + OK_1^2 & \ne LK_1^2 \\
LO^2 + OK_2^2 & \ne LK_2^2 \\
LO^2 + OK_3^2 & \ne LK_3^2 \\
& \,\,\,\vdots \\
LO^2 + OK_{11}^2 & \ne LK_{11}^2
\end{align}
So I ask myself whether perhaps the <b>sum</b> of the left sides equals the <b>sum</b> of the right sides. Notice that every left side is $3^2+2^2 = 13$, so their sum is $11\times13 = 143$. We will need the fact that the <b>average</b> of $K_1,\ldots,K_{11}$ is $O$. Let $x_L$, $x_O$, $x_{K_j}$ be the respective $x$-coordinates. One has
\begin{align}
(x_L-x_{K_j})^2 & = \Big((x_L - x_O) + (x_O - x_{K_j})\Big)^2 \\[10pt]
& = (x_L - x_O)^2 + 2(x_L - x_O)(x_O - x_{K_j}) + (x_O - x_{K_j})^2
\end{align}
The middle term is not zero, but if we sum this over all $11$ values of $j$ then the sum of the middle terms is zero because the sum of the deviations $x_{K_j} - x_O$ from the average is zero. (Here we are using the fact that the other factor in the middle term, $2(x_L-x_O)$, does not change as $j$ runs through the list $1,\ldots,11$.)</p>
<p>The same thing works with the $y$-coordinates, and we have $LK_j^2 = (x_L-x_{K_j})^2+(y_L - y_{K_j})^2$.</p>
<p><b>Another approach:</b></p>
<p>In an $11$-dimensional space, show that the two vectors $(OK_1,\ldots,OK_{11})$ and $(LO,\ldots,LO)$ are at right angles to each other and then apply the Pythagorean theorem.</p>
|
133,109 | <p>How can I integrate,</p>
<p>$$
\int_n^{+\infty} x \exp\{-ax^2+bx+c\}dx
$$</p>
<p>and what's the result w.r.t the Gaussian function's p.d.f $p(x)$ and c.d.f $\phi(x)$?</p>
<p>Thanks!</p>
| David Mitra | 18,986 | <p>Completing the square in the quadratic:
$$\eqalign{
\int_n^\infty x \exp(-ax^2+bx+c)\,dx
&= \int_n^\infty x \exp\Bigl( -a\bigl(x-\tfrac{b}{2a}\bigr)^2 + c + \tfrac{b^2}{4a} \Bigr)\,dx\cr
&= \alpha \int_n^\infty x \exp\Bigl( -a\bigl(x-\tfrac{b}{2a}\bigr)^2 \Bigr)\,dx\cr
&= \alpha \int_n^\infty \Bigl(x+\tfrac{b}{2a}-\tfrac{b}{2a}\Bigr) \exp\Bigl( -a\bigl(x-\tfrac{b}{2a}\bigr)^2 \Bigr)\,dx\cr
&= \alpha \int_n^\infty \Bigl(x-\tfrac{b}{2a}\Bigr) \exp\Bigl( -a\bigl(x-\tfrac{b}{2a}\bigr)^2 \Bigr)\,dx
+ \alpha\,\tfrac{b}{2a}\int_n^\infty \exp\Bigl( -a\bigl(x-\tfrac{b}{2a}\bigr)^2 \Bigr)\,dx,\cr
}
$$
where $\alpha=\exp(c+{b^2\over4a})$.</p>
<p>On the right hand side of the last equality above, the first integral can be evaluated using the substitution $u=x-{b\over 2a}$ and the second integral can be expressed in terms of the cumulative distribution function of an appropriate normal random variable.</p>
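<p>As a sanity check, the resulting closed form (written here with <code>erfc</code>, which differs from the normal cdf only by rescaling) can be compared against brute-force quadrature; the parameter values below are arbitrary, and $a&gt;0$ is assumed:</p>

```python
# With m = n - b/(2a) and alpha = e^{c + b^2/(4a)}:
#   int_n^inf x e^{-ax^2+bx+c} dx
#     = alpha * [ e^{-a m^2}/(2a) + (b/(2a)) (sqrt(pi)/(2 sqrt(a))) erfc(sqrt(a) m) ]
import math

a, b, c, n = 1.3, 0.7, -0.2, 0.5
alpha = math.exp(c + b * b / (4 * a))
m = n - b / (2 * a)
closed = alpha * (math.exp(-a * m * m) / (2 * a)
                  + (b / (2 * a)) * math.sqrt(math.pi) / (2 * math.sqrt(a))
                  * math.erfc(math.sqrt(a) * m))

# brute-force midpoint integration over a generous finite range
def integrand(x):
    return x * math.exp(-a * x * x + b * x + c)

N, hi = 200000, 15.0
h = (hi - n) / N
numeric = sum(integrand(n + (i + 0.5) * h) for i in range(N)) * h

assert abs(numeric - closed) < 1e-6
print(closed)
```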
|
3,912,722 | <blockquote>
<p>Circle of radius <span class="math-container">$r$</span> touches the parabola <span class="math-container">$y^2+12x=0$</span> at its vertex. Centre of circle lies left of the vertex and circle lies entirely within the parabola. What is the largest possible value of <span class="math-container">$r$</span>?</p>
</blockquote>
<p>So my book has given the solution as follows:</p>
<blockquote>
<p>The equation of the circle can be taken as: <span class="math-container">$(x+r)^2+y^2=r^2$</span><br />
and when we solve the equation of the circle and the parabola, we get <span class="math-container">$x=0$</span> or <span class="math-container">$x=12-2r$</span>.</p>
</blockquote>
<blockquote>
<p>Then, <span class="math-container">$12-2r≥0$</span> and finally, the largest possible value of <span class="math-container">$r$</span> is <span class="math-container">$6$</span>.</p>
</blockquote>
<p>This is where I got stuck as I'm not able to understand why that condition must be true. I get that the circle must lie within the parabola...</p>
<p>Can someone please explain this condition to me?</p>
Math Lover | 801,574 | <p>The circle's equation is <span class="math-container">$(x+r)^2+y^2=r^2$</span>, since its centre <span class="math-container">$(-r,0)$</span> lies a distance <span class="math-container">$r$</span> to the left of the vertex.</p>
<p><span class="math-container">$y^2 + 12x = 0$</span></p>
<p>Substituting the second equation into the first and simplifying,</p>
<p><span class="math-container">$x^2 -12x + 2xr = 0$</span></p>
<p><span class="math-container">$x^2 + (2r-12)x = 0$</span></p>
<p>So there are two solutions in <span class="math-container">$x$</span>: <span class="math-container">$x = 0$</span> and <span class="math-container">$x = 12-2r$</span>.</p>
<p>For any value of <span class="math-container">$r \gt 6$</span>, <span class="math-container">$x = 12 - 2r$</span> is negative, so there is a second value of <span class="math-container">$x$</span> besides <span class="math-container">$0$</span> satisfying the parabola equation <span class="math-container">$y^2 = -12x$</span>, which means the circle intersects the parabola for <span class="math-container">$r \gt 6$</span>.</p>
<p>Note that for <span class="math-container">$r \lt 6$</span>, <span class="math-container">$x = 12-2r$</span> is positive, so it cannot correspond to a point on the parabola <span class="math-container">$y^2 = -12x$</span>.</p>
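<p>A numeric sketch (not part of the original answer): one can also confirm that <span class="math-container">$r=6$</span> is extremal by computing the minimum distance from the centre <span class="math-container">$(-r,0)$</span> to the parabola, using the parametrization <span class="math-container">$x=-3t^2$</span>, <span class="math-container">$y=6t$</span> of <span class="math-container">$y^2=-12x$</span>. The circle fits inside exactly when that minimum is at least <span class="math-container">$r$</span>:</p>

```python
import math

def min_dist_to_parabola(r, steps=100001, tmax=10.0):
    """Minimum distance from the circle's centre (-r, 0) to points
    (-3t^2, 6t) of the parabola y^2 = -12x, by a fine grid search in t."""
    best = float("inf")
    for i in range(steps):
        t = -tmax + 2 * tmax * i / (steps - 1)
        d = math.hypot(-3 * t * t + r, 6 * t)  # distance from (-r, 0)
        best = min(best, d)
    return best

print(min_dist_to_parabola(6.0))  # 6.0: tangent only at the vertex, so r = 6 fits
print(min_dist_to_parabola(7.0))  # about 6.9282 < 7: the circle would cross the parabola
```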
|