qid (int64) | question | author | author_id (int64) | answer |
|---|---|---|---|---|
3,520,269 | <p><img src="https://i.stack.imgur.com/mdM8B.png" alt="enter image description here"></p>
<p>I also know that given the length of 2 sides in a kite and the angle of one of the other angles (which aren't included angles), you can find the area by multiplying these two sides and the sine of the angle. I am unable to find a relationship between this situation and the situation posted in the title question.</p>
<p>Also I can't visualize why the situation in the title question is true. Thanks!</p>
| Community | -1 | <p>In fact, <a href="https://en.wikipedia.org/wiki/Quadrilateral#Trigonometric_formulas" rel="nofollow noreferrer">the area of a quadrilateral is <strong>one-half</strong> the product of its diagonals times the sine of their included angle</a>.</p>
<p><a href="https://i.stack.imgur.com/cJd7G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cJd7G.png" alt="enter image description here"></a></p>
<p>The area of this triangle is <span class="math-container">$\frac12bh$</span>. <span class="math-container">$\sin\alpha=\frac ha$</span>, so <span class="math-container">$h=a\sin\alpha$</span> and thus the area of the triangle can also be expressed as <span class="math-container">$\frac12ab\sin\alpha$</span>.</p>
<p>In your quadrilateral, let the two segments of <span class="math-container">$d_1$</span> have lengths <span class="math-container">$w$</span> and <span class="math-container">$x$</span> and the two segments of <span class="math-container">$d_2$</span> have lengths <span class="math-container">$y$</span> and <span class="math-container">$z$</span>. We will use the formula above to find the sum of the areas of the four triangles in the quadrilateral. Note that two of those four triangles have an included angle of measure <span class="math-container">$\alpha$</span> and the other two have the supplement of <span class="math-container">$\alpha$</span>, but we may ignore the distinction because both of those angles have the same sine. Therefore, the area of the quadrilateral is
<span class="math-container">$$\frac12wy\sin\alpha+\frac12wz\sin\alpha+\frac12xy\sin\alpha+\frac12xz\sin\alpha\\=\frac12(wy+wz+xy+xz)\sin\alpha\\=\frac12(w+x)(y+z)\sin\alpha=\frac12d_1d_2\sin\alpha$$</span></p>
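<p>As a sanity check on the derivation above, here is a short numerical sketch (my addition, not part of the original answer): it builds a quadrilateral from hypothetical diagonal segments $w,x,y,z$ meeting at angle $\alpha$, then compares $\frac12 d_1 d_2 \sin\alpha$ with the shoelace area of the resulting vertices.</p>

```python
import math

def quad_area_diagonals(w, x, y, z, alpha):
    """Area via (1/2) d1 d2 sin(alpha), with d1 = w + x and d2 = y + z."""
    return 0.5 * (w + x) * (y + z) * math.sin(alpha)

def quad_area_shoelace(w, x, y, z, alpha):
    """Shoelace area of the quadrilateral built from the same diagonal data.
    The diagonals cross at the origin; d1 lies on the x-axis, d2 at angle alpha."""
    pts = [(w, 0.0),
           (y * math.cos(alpha), y * math.sin(alpha)),
           (-x, 0.0),
           (-z * math.cos(alpha), -z * math.sin(alpha))]
    s = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

w, x, y, z, alpha = 3.0, 2.0, 1.5, 2.5, 1.0
print(quad_area_diagonals(w, x, y, z, alpha))  # both ≈ 8.4147 (= 10 sin 1)
print(quad_area_shoelace(w, x, y, z, alpha))
```

The two values agree for any choice of segment lengths and angle, which is exactly the content of the identity.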
|
<p>I am studying linear representation theory for finite groups and came across the claim in the title: when $n\geq 5$, $S_n$ does not have an irreducible $2$-dimensional representation. But I am not sure where to begin. </p>
<p>Although it seems that this result will follow from <a href="https://math.stackexchange.com/questions/69384/low-dimensional-irreducible-representations-of-s-n">this</a> as a special case, I am interested in a solution that is specific to this problem. </p>
<p>The condition $n\geq 5$ seems to suggest that we need to use the fact that $A_n$ is simple for $n\geq 5$. </p>
<p>I would appreciate any hint. </p>
| Jeremy Rickard | 88,262 | <p>Here's a hint for one fairly elementary proof.</p>
<p>Suppose $\rho:S_n\to\textrm{GL}(2,\mathbb{C})$ is a representation, where $n\geq5$.</p>
<p>Consider the eigenvalues of $\rho(\sigma)$ for $5$-cycles $\sigma$.</p>
|
1,799,366 | <p>I'm trying to solve the following exercise:</p>
<blockquote>
<p>Let $\mu$ be a probability distribution on $\mathbb{R}$ having second moment $\sigma^2<\infty$ such that if $X$ and $Y$ are independent with law $\mu$ then the law of $(X+Y)/\sqrt{2}$ is also $\mu$. Show that $\mu =\mathcal{N}(0,1)$.
Hint: apply the central limit theorem to packs of $2^n$ variables.</p>
</blockquote>
<p>My attempt:</p>
<p>So let $Z_n=(X_1,Y_1)+\cdots+(X_n,Y_n)$, then $\mathbb{E}(Z_n)=n\,\mathbb{E}(Z_1)$, so for $n\to \infty$
$$T_n=\frac{Z_n-n\mathbb{E}(Z_n)}{\sqrt{\sigma^2n}}\xrightarrow{\mathcal{D}}\mathcal{N}(0,1)$$
converges in distribution to the normal distribution.</p>
<p>Now I don't see the connection how to proof $\mu=\mathcal{N}(0,1)$. I also do not understand what "packs" of $2^n$ variables are. Is it $Z_n=(X_n,Y_n)+\dots$?</p>
| drhab | 75,923 | <p>I think it must be proved that $\mu=\mathcal N(0,\sigma^2)$, but for convenience I will also assume that $\sigma=1$.</p>
<p>If $\phi$ denotes the characteristic function then:$$\phi(t)=\phi\left(\frac{t}{\sqrt2}\right)^2$$</p>
<p>Note that this can be repeated to arrive at $\phi(t)=\phi(\frac{t}2)^4$ and can be repeated again.</p>
<p>Actually with this it can be shown that $X$ and $2^{-\frac12n}(X_1+\cdots+X_{2^n})$ have equal distribution if the $X_i$ are independent and all mentioned random variables have distribution $\mu$.</p>
<p>It is not really necessary to use the characteristic function to come to this conclusion. You could just observe that $([X_1+Y_1]/\sqrt2+[X_2+Y_2]/\sqrt2)/\sqrt2$ again has $\mu$ as distribution, and so on - if the $X_i$ and $Y_i$ are independent and have $\mu$ as distribution. </p>
<p>Also we have $0$ as expectation, since $\nu=(\nu+\nu)/\sqrt2$ implies $\nu=0$.</p>
<p>Applying the CLT to the $X_i$ you will find that the <strong>constant</strong> sequence of distributions $\mu$ must converge in distribution to $\mathcal N(0,1)$. </p>
<p>This can only be the case if $\mu=\mathcal N(0,1)$. </p>
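<p>The fixed-point property is easy to probe numerically. A minimal simulation (my own sketch, standard library only) samples independent standard normals and checks that $(X+Y)/\sqrt2$ again looks like $\mathcal N(0,1)$:</p>

```python
import random
import statistics

random.seed(0)
n = 200_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]
zs = [(a + b) / 2**0.5 for a, b in zip(xs, ys)]

# (X+Y)/sqrt(2) should again be N(0,1): sample mean near 0, stdev near 1
print(statistics.fmean(zs), statistics.stdev(zs))
```

Of course a simulation only illustrates that $\mathcal N(0,1)$ <em>is</em> a fixed point; the argument above is what shows it is the <em>only</em> one.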
|
25,337 | <p>If you want to compute crystalline cohomology of a smooth proper variety $X$ over a perfect field $k$ of characteristic $p$, the first thing you might want to try is to lift $X$ to the Witt ring $W_k$ of $k$. If that succeeds, compute de Rham cohomology of the lift over $W_k$ instead, which in general will be much easier to do. Neglecting torsion, this de Rham cohomology is the same as the crystalline cohomology of $X$.</p>
<p>I would like to have an example at hand where this approach fails: Can you give an example for</p>
<blockquote>
<p>A smooth proper variety $X$ over the finite field with $p$ elements, such that there is no smooth proper scheme of finite type over $\mathbb Z_p$ whose special fibre is $X$.</p>
</blockquote>
<p>The reason why such examples <em>have</em> to exist is metamathematical: If there weren't any, the pain one undergoes constructing crystalline cohomology would be unnecessary.</p>
| user39938 | 39,938 | <p>A general method for constructing schemes with "arbitrarily bad" deformation spaces (including the non-existence of liftings from char. p to char. 0) is in the following paper:</p>
<p>R. Vakil "Murphy's Law in algebraic geometry: Badly-behaved deformation spaces", Invent. Math. 164 (2006), 569--590. </p>
|
113,446 | <p>Suppose a simple equation in Cartesian coordinate:
$$
(x^2+ y^2)^{3/2} = x y
$$
In polar coordinate the equation becomes $r = \cos(\theta) \sin(\theta)$. When I plot both, the one in polar coordinate has two extra lobes (I plot the polar figure with $\theta \in [0.05 \pi, 1.25 \pi]$ so the "flow" of the curve is clearer).</p>
<pre><code>figurePolar = PolarPlot[Sin[θ] Cos[θ], {θ, 0.05 π, 1.25 π},
PlotStyle -> {Blue, Thick}];
figureCartesian = ContourPlot[(Sqrt[x^2 + y^2])^3 == x y, {x, -0.4, 0.4}, {y, -0.4, 0.4}, ContourStyle -> {Green, Dashed}];
GraphicsGrid[{{figurePolar, figureCartesian}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ez5CK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ez5CK.png" alt="same function in polar and Cartesian coordinate"></a>
The right one is in the Cartesian coordinate; it is correct since the curve exists only where $x y \geq 0$. The extra lobes in the polar (left) figure seem to be caused by Mathematica's use of negative $r$, which is against the mathematical definition. Any thoughts?</p>
| kglr | 125 | <p>A simpler way to restrict to positive radii:</p>
<pre><code>PolarPlot[Max[Sin[θ] Cos[θ], 0], {θ, 0, 2 π}]
</code></pre>
<p><img src="https://i.stack.imgur.com/w4gge.png" alt="Mathematica graphics"></p>
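<p>The negative-$r$ explanation can also be checked without plotting (a small Python sketch, my addition; the original uses Mathematica): for $\theta$ where $r=\sin\theta\cos\theta<0$, the reflected point Mathematica draws does not satisfy the Cartesian equation, because there $(x^2+y^2)^{3/2}=|r|^3>0$ while $xy=r^3<0$.</p>

```python
import math

def on_cartesian_curve(x, y, tol=1e-9):
    """Does (x, y) satisfy (x^2 + y^2)^(3/2) = x y?"""
    return abs((x * x + y * y) ** 1.5 - x * y) < tol

# theta in (pi/2, pi): r = sin(t)cos(t) < 0, so PolarPlot reflects the point
t = 0.75 * math.pi
r = math.sin(t) * math.cos(t)          # r = -1/2 here
x, y = r * math.cos(t), r * math.sin(t)

print(r)                               # negative radius
print(on_cartesian_curve(x, y))        # False: the extra lobe is spurious

# a theta with positive r does land on the Cartesian curve
t2 = 0.25 * math.pi
r2 = math.sin(t2) * math.cos(t2)       # r = +1/2
print(on_cartesian_curve(r2 * math.cos(t2), r2 * math.sin(t2)))  # True
```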
|
1,109,443 | <p>I'm currently learning for an algebra exam and I have some examples of questions from few last years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| Hagen von Eitzen | 39,174 | <p>This depends on the model. Instead of arguing that we have only $46$ chromosomes and cross-overs or whatever the mechanism is called are not <em>that</em> common, let us assume a continuous model.
That is, a priori, everybody can be $\alpha$ Cherokee for any $\alpha\in[0,1]$ and the rules are as follows</p>
<ul>
<li>Everybody has exactly two parents. </li>
<li>If the parents have Cherokee coefficients $\alpha_m, \alpha_f$, then the child has $\alpha=\frac{\alpha_m+\alpha_f}2$</li>
<li>In a sufficiently large but finite number of generations ago, people had $\alpha\in\{0,1\}$</li>
</ul>
<p>It follows by induction, that $\alpha$ can always be expressed as $\alpha=\frac{k}{2^n}$ with $k,n\in\mathbb N_0$ and $0\le k\le 2^n$. Consequently $\alpha=\frac1{12}$ is not possible exactly (though for example $\frac{85}{1024}\approx\frac1{12}$ would be possible).
It doesn't matter if there is any type of inbreeding taking place anywhere in the tree (or then not-tree) of ancestors.
The only way to obtain $\alpha$ not of this form would involve time-travel and genealogical paradoxes: If you travel back in time and paradoxically become your own grandparent and one of the other three grandparents is $\frac14$-Cherokee and the others are $0$-Cherokee, you end up as a solution to $\alpha=\frac{\frac14+\alpha}4$, i.e. $\alpha=\frac1{12}$.</p>
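<p>The dyadic-rational argument is easy to verify mechanically; a small sketch (my addition) checks that $\tfrac1{12}$ is not of the form $k/2^n$, that $\tfrac{85}{1024}$ is, and that averaging two dyadic coefficients stays dyadic, which is the induction step above.</p>

```python
from fractions import Fraction

def is_dyadic(q: Fraction) -> bool:
    """k / 2**n in lowest terms iff the reduced denominator is a power of two."""
    d = q.denominator
    return d & (d - 1) == 0

print(is_dyadic(Fraction(1, 12)))      # False: exactly 1/12 Cherokee is unreachable
print(is_dyadic(Fraction(85, 1024)))   # True: a possible coefficient close to 1/12

# the child's coefficient is the average of the parents', and dyadic stays dyadic
a, b = Fraction(3, 8), Fraction(85, 1024)
print(is_dyadic((a + b) / 2))          # True
```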
|
1,053,065 | <p>I have a function called $P(t)$ that is the number of the population at time $t$. $t$ being in days.</p>
<p>We know the growth rate is $P'(t) = 2t + 6$</p>
<p>We also know that $P(0) = 100$. How many days till the population doubles?</p>
<p>edit: $P(t) = t^2 + 6t + 100$
edit: $t^2 + 6t + 100 = 200$
edit: $t^2 + 6t - 100 = 0$</p>
| asomog | 183,714 | <p>Let $n=\prod_{i=1}^k p_i^{\alpha_i}$; then we want
$$\frac{d(n)}{n}=\frac{\prod_{i=1}^k(1+\alpha_i)}{\prod_{i=1}^k p_i^{\alpha_i}}\leq\frac{1}{2}$$
Look at just one term, and add 1 to the exponent of $p_i$ to observe that:
$$\frac{\alpha+1}{p^\alpha}/\frac{\alpha+2}{p^{\alpha+1}}=\frac{1}{p}\left(1+\frac{1}{\alpha+1}\right)$$
So adding factors to $n$ makes $\displaystyle\frac{d(n)}{n}$ smaller, since
$$\frac{1}{p}\left(1+\frac{1}{\alpha+1}\right)\leq \frac{1}{2}\left(1+\frac{1}{0+1}\right)=1$$
And in all the other cases we have a strict inequality.
If we have a prime divisor $p\geq 5$, then $\displaystyle\frac{d(n)}{n}<\frac{1}{2}$, as the term of the product expressing $\displaystyle\frac{d(n)}{n}$ that belongs to $p_i$ has a "starting factor" smaller than $\frac{2}{5}<1/2$, and as we enlarge $\alpha_i$, $\frac{\alpha_i+1}{p_i^{\alpha_i}}$ get smaller, and all factors are at most $1$.
For small $2^k3^n$ we have:
$\frac{3}{3^2}=\frac{1}{3}$ and $\frac{4}{2^3}=\frac{1}{2}$.
Since for $12$ the fraction is $1/2$ and for $18$ it is $1/3$, we are done (adding factors makes the fraction smaller, and we checked the limit for $2^k3^n$), and we get that only $(0,1),(1,1),(2,0),(2,1),(3,0)$ are good.</p>
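<p>A brute-force check (my own sketch, plain Python) confirms the conclusion: among all $n$ up to $2000$, the only ones with $d(n)/n\ge 1/2$ are $1, 2, 3, 4, 6, 8, 12$; these are the exponent pairs listed above, plus the trivial cases $n=1,2$.</p>

```python
def d(n):
    """Number of divisors of n (trial division; fine for small n)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

big = [n for n in range(1, 2001) if 2 * d(n) >= n]
print(big)  # [1, 2, 3, 4, 6, 8, 12]
```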
|
1,053,065 | <p>I have a function called $P(t)$ that is the number of the population at time $t$. $t$ being in days.</p>
<p>We know the growth rate is $P'(t) = 2t + 6$</p>
<p>We also know that $P(0) = 100$. How many days till the population doubles?</p>
<p>edit: $P(t) = t^2 + 6t + 100$
edit: $t^2 + 6t + 100 = 200$
edit: $t^2 + 6t - 100 = 0$</p>
| Esteban Crespi | 3,274 | <p>Let $n>12$. If $a$ is a divisor of $n$ then $a$ is not in the range $n/2 < a < n$, otherwise $n/a$ would be an integer in the range $ 1 < n/a < 2$; in the same way $a$ can't be in the range $n/3 < a < n/2$. So the number of non-divisors of $n$ is at least the number of integers in the interval $[n/3,n-1]$, with the exception possibly of $n/2$, so it is at least
$$ \left\lfloor n - 1 - \frac{n}{3} \right\rfloor -1 \ge n - \frac{n}{3} -2 = \frac{2}{3}n-2 $$
This means that the number of divisors of $n$ is at most
$$d(n) \le n-\left( \frac{2n}{3}-2\right) = \frac{n}{3} + 2 $$
and this is smaller than $n/2$ if $n > 12$. </p>
|
1,210,285 | <p>Let there be a given function $f \in C([0,1])$, $f(x)>0$; $x\in [0,1]$. Prove </p>
<p>$$\lim_{n\to\infty} \sqrt[n]{f\left({1\over n}\right)f\left({2\over n}\right)\cdots f\left({n\over n}\right)}=e^{\int_0^1 \log f(x) \, dx} $$</p>
<p>All the questions before this one required solving a definite integral without the Newton-Leibniz formula; then this came up. Can anyone provide help?</p>
| Chappers | 221,811 | <p>$f$ is positive, and the logarithm is continuous on $(0,\infty)$, so we can take logarithms of both sides and swap the limit and logarithm to find
$$ \log{\left( \lim_{n\to\infty} \sqrt[n]{ f(1/n) f(2/n) \dotsm f(n/n) }\right)} = \lim_{n\to\infty} \log{\sqrt[n]{f(1/n)f(2/n)\dotsm f(n/n)}} $$
Now applying properties of the logarithm, the expression inside the limit on the right hand side is equal to
$$ \frac{1}{n}\log{\left( f(1/n)f(2/n)\dotsm f(n/n) \right)} = \frac{1}{n} \left( \log{f(1/n)}+\log{f(2/n)}+\dotsb+\log{f(n/n)} \right) \\
= \sum_{k=1}^{n} \frac{1}{n}\log{f(k/n)}, $$
which is a Riemann sum for $\log{f}$ on the interval $[0,1]$. Because $f$ is continuous, it is Riemann-integrable, and hence this sum must converge to the integral
$$ \int_0^1 \log{f(x)} \, dx, $$
which is the logarithm of the right-hand side of your original expression.</p>
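<p>The Riemann-sum argument is easy to see numerically; here is a small sketch (my addition) with the hypothetical choice $f(x)=1+x$, for which $\int_0^1 \log(1+x)\,dx = 2\log 2 - 1$:</p>

```python
import math

f = lambda x: 1.0 + x                       # a sample positive continuous f
n = 100_000
# log of the geometric mean = right Riemann sum of log f on [0, 1]
log_gm = sum(math.log(f(k / n)) for k in range(1, n + 1)) / n
exact = 2 * math.log(2) - 1                 # ∫₀¹ log(1+x) dx
print(log_gm, exact)                        # agree to ~1e-5
```

Exponentiating both values recovers the statement of the problem: the $n$th root of the product converges to $e^{\int_0^1 \log f}$.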
|
1,384,752 | <p>I ran across a problem which has stumped me involving existential quantifiers.
Let U, our universe, be the set of all people. Let S(x) be the predicate "x is a student" and I(x) be the predicate "x is intelligent".
I want to write the statement "Some students are intelligent" in the correct logical form. I can see 2 possible ways to write it</p>
<p>1) There exists an x in U such that ( S(x) AND I(x) )</p>
<p>2) There exists an x in U such that ( S(x) implies I(x) )</p>
<p>If I draw a Venn diagram, it seems like option 1 must be true, but from this same diagram (where the sets where S(x) is true and I(x) is true intersect), it is also true that there is an x such that if x is in the set where S(x) is true, then x is in the set where I(x) is true. This makes me wonder whether these two statements are logically equivalent, but I have a feeling they are not.</p>
<p>Thanks,
Matt</p>
| Graham Kemp | 135,106 | <p>The confusion lies in choice of logical connective to represent a restriction on the domain of discussion. We use conjunction to restrict an existential quantifier, and implication to restrict a universal quantifier.</p>
<p>Here we are restricting the domain, from discussions of all people in the given universe, to students in that universe.</p>
<hr>
<ul>
<li>$\exists x \in U \big(S(x)\wedge I(x)\big)$</li>
</ul>
<p>"There exists $x$ in $U$ such that $S(x)$ and $I(x)$" is "someone in our universe is both a student and intelligent" ie "some students are intelligent".</p>
<p>We use conjunction as the connective for the restricted existential because to be true there merely needs be an example of a person who is a student and intelligent.</p>
<p>If our universe consists of teachers Bob and Jane, and students Tom, Dick, and Harriet, then our statement "some students are intelligent" will only be true if <em>at least one</em> of Tom, Dick, or Harriet is intelligent.</p>
<hr>
<ul>
<li>$\forall x\in U\big(S(x)\to I(x)\big)$</li>
</ul>
<p>"For all $x$ in $U$ it is such that $S(x)$ implies $I(x)$" is "everyone in our universe, is intelligent whenever they are a student" ie: "every student is intelligent."</p>
<p>We use implication as the connective for the restricted universal because to be true then every person must either be intelligent or be not a student. That is, $\;\forall x\in U\big(\neg S(x)\vee I(x)\big)\;$.</p>
<p>If our universe consists of teachers Bob and Jane, and students Tom, Dick, and Harriet, then our statement "all students are intelligent" will only be true if <em>all of</em> of Tom, Dick, and Harriet are intelligent; that is if being a student implies being intelligent.</p>
<hr>
<p>We further note that $\;\exists x\in U\big(S(x)\to I(x)\big)\;$ can be true only if there is someone in that universe who is either intelligent <em>or</em> is not a student.</p>
<p>Likewise $\;\forall x\in U\big(S(x)\wedge I(x)\big)\;$ can be true only if everyone in that universe is both a student and intelligent.</p>
<p>Don't use the wrong connective to represent a restrictive quantification.</p>
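<p>The four quantifier/connective combinations can be checked directly on the five-person universe from the answer (a small Python sketch, my addition; the student/intelligence flags below are one hypothetical assignment):</p>

```python
# Universe: (is_student, is_intelligent) for Bob, Jane, Tom, Dick, Harriet.
# Teachers Bob and Jane are not students; only Tom is intelligent here.
U = [(False, False), (False, False), (True, True), (True, False), (True, False)]

some_students_intelligent = any(s and i for s, i in U)       # ∃x (S(x) ∧ I(x))
exists_implication        = any((not s) or i for s, i in U)  # ∃x (S(x) → I(x))
all_implication           = all((not s) or i for s, i in U)  # ∀x (S(x) → I(x))

print(some_students_intelligent)  # True: Tom is a student and intelligent
print(exists_implication)         # True, but only because Bob is not a student
print(all_implication)            # False: Dick is a student but not intelligent
```

Note how the existential with implication is satisfied vacuously by any non-student, which is exactly why it is the wrong rendering of "some students are intelligent".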
|
<p>In integer-base positional numeral systems, the notation of a number in base $n$ uses $n$ numerals. Base 2 uses the symbols 0 and 1, base 10 uses 0123456789, base 16 uses base 10 + ABCDEF. Although the choice of symbols for the numerals is arbitrary, the number of numerals (unique glyphs) is identical to the base. That still holds when the base is negative or even complex. However, <a href="https://en.wikipedia.org/wiki/Non-integer_representation" rel="nofollow noreferrer">non-integer bases</a> exist. I understand how the definition of base $n$ can extend to any real base $b$ (I suppose we need $|b|>1$ for the positional property to hold?), but what determines the number of numerals used in a number system with a non-integer base? Is it $\lfloor b \rfloor$, or can we choose any number of numerals?</p>
<p>For example, imagine I decide I want to use base τ. How many numerals do I use?</p>
| fleablood | 280,126 | <p>==== number of numerals that can be digits ====</p>
<p>In the article cited, the $k$th digit can be any integer from $0$ to $\frac x{\beta^k}$, which, as $x < \beta^{k+1}$, would be any digit greater than or equal to $0$ and less than $\beta$. If $\beta$ is not an integer that is from $0$ to $\lfloor \beta \rfloor < \beta$.</p>
<p>Or $\lfloor \beta \rfloor + 1= \lceil \beta \rceil$.</p>
<p>[Note; if $\beta$ is an integer then $\lfloor b \rfloor \not < \beta$. But $\beta = \lceil \beta \rceil$.]</p>
<p>==== number of digits ====</p>
<p>An $n + 1$ digit number $N$ will be such:</p>
<p>$\beta^n \le N < \beta^{n+1}$</p>
<p>so $n \le \log_{\beta} N < n+ 1$</p>
<p>So $n = \lfloor \log_{\beta} N \rfloor$</p>
<p>so then number of digits is $n+1 = \lfloor \log_{\beta} N \rfloor + 1$.</p>
<p>It doesn't matter if $\beta$ is an integer or not. It only matters that $\beta > 1$.</p>
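<p>The greedy expansion makes the digit count concrete; here is a sketch (my addition, assuming "base τ" means $\tau = 2\pi \approx 6.28$, so $\lceil\tau\rceil = 7$ numerals $0$ through $6$) that computes fractional digits of a number in a non-integer base:</p>

```python
import math

def beta_digits(x, beta, ndigits=10):
    """Greedy fractional expansion of x in [0, 1) in non-integer base beta:
    x ≈ sum(d_k * beta**(-k-1)); each digit satisfies 0 <= d_k < ceil(beta)."""
    digits = []
    for _ in range(ndigits):
        x *= beta
        d = math.floor(x)
        digits.append(d)
        x -= d
    return digits

tau = 2 * math.pi                     # assumption: "base τ" = 2π
ds = beta_digits(0.5, tau)
print(ds)                                          # digits drawn from 0..6
print(all(0 <= dk < math.ceil(tau) for dk in ds))  # True: ceil(τ) = 7 numerals
# reconstruct to check the expansion
approx = sum(dk * tau ** -(k + 1) for k, dk in enumerate(ds))
print(abs(approx - 0.5) < tau ** -9)               # True
```

The `ndigits` cap and the starting value $0.5$ are arbitrary choices for illustration; the point is that the greedy digits never exceed $\lfloor\beta\rfloor$, matching the count derived above.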
|
2,515,939 | <p>So, I just need a hint for proving
$$\lim_{n\to \infty} \int_0^1 e^{-nx^2}\, dx = 0$$ </p>
<p>I think maybe the easiest way is to pass the limit inside, because $e^{-nx^2}$ is uniformly convergent on $[0,1]$, but I'm new to that theorem, and have very limited experience with uniform convergence. Furthermore, I don't want to integrate the Taylor expansion, because I'm not familiar with that. So, I want to prove it in a way I'm more familiar with, if possible. So far I've tried: </p>
<ol>
<li><p>Show that $e^{-nx^2}$ is a monotone decreasing sequence with limit $0$. Then use the monotone property of integrals but I think this argument would just end circularly with passing the limit out of the integration operator. </p></li>
<li><p>Bound $e^{-nx^2}$ by 0 and some other $f(x)$ like $\cos^n x$ or $(1-\frac{x^2}{2})^n$ and then use the squeeze theorem. But the integrals of those functions seem to be a little bit out of my math range to analyze. </p></li>
</ol>
<p>But I have a feeling that there's something much simpler here that I'm missing.</p>
| zhw. | 228,045 | <p>Letting $y=x/\sqrt n$ shows the integral equals</p>
<p>$$\frac{1}{\sqrt n} \int_0^{\sqrt n} e^{-y^2}\, dy <\frac{1}{\sqrt n} \int_0^{\infty} e^{-y^2}\, dy .$$</p>
<p>Since the last integral converges, the desired limit is $0.$</p>
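<p>The substitution can be checked numerically (a quick sketch, my addition): in closed form $\int_0^1 e^{-nx^2}\,dx = \frac{\sqrt\pi}{2\sqrt n}\,\mathrm{erf}(\sqrt n)$, which visibly shrinks like $1/\sqrt n$.</p>

```python
import math

def integral(n):
    # ∫₀¹ e^{-n x²} dx = (1/√n) ∫₀^{√n} e^{-y²} dy = (√π / (2√n)) · erf(√n)
    return math.sqrt(math.pi) / (2 * math.sqrt(n)) * math.erf(math.sqrt(n))

for n in (1, 100, 10_000, 1_000_000):
    print(n, integral(n))   # decreases toward 0 like 1/√n
```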
|
<p>I looked up Wikipedia but honestly I could not make much sense of what I will basically study in Abstract Algebra or what it is all about.</p>
<p>I also looked up a question here :
<a href="https://math.stackexchange.com/questions/855828/what-is-abstract-algebra-essentially">What is Abstract Algebra essentially?</a></p>
<p>But there are so many definitions and terms that I always get bogged down by them. </p>
<p>It would be helpful to me and maybe others if someone could explain what Abstract Algebra is all about in <em>simple words</em> that one can understand intuitively. </p>
| EHH | 133,303 | <p>This is a fairly tricky question, as Abstract Algebra is one of those things that makes a lot more sense once you've spent some time studying some of its subject areas.</p>
<p>I will however have a stab at it...</p>
<p>Abstract algebra normally follows the same pattern of taking a set, $Z$ say, and attributing some properties to the elements of those sets. We then seek to prove certain things about those sets. </p>
<p>We do in fact use sets of these sorts all the time, since number systems are formed in exactly this way. For example, the set of integers $\mathbb{Z}$ is simply a set of numbers with the properties of the numbers being that they can be added and subtracted from each other, with an identity element (an element such that adding it to other elements just gives you what you started with) zero etc.</p>
<p>The goal of the 'abstract' part of abstract algebra is that rather than just studying very specific cases, we instead look for common properties across many types of sets and mathematical objects and study the properties that they all have in common. This is similar to the fact that while there are many hundreds of breeds of dogs, they all have very similar physiologies, and so a vet can study the common aspects of all dogs; in doing so she can then perform operations on any of them, even though there are other aspects in which they all differ.</p>
<p>So this is why in abstract algebra we don't specifically study each individual set such as $\mathbb{C}$ and $\mathbb{R}$, but instead we notice that they all satisfy certain properties in common. We then define a Field (as one example) as being a set $\mathbb{F}$ with these properties in common (you'll study Fields when you start your course). Then anything we prove about Fields can be applied to all of the sets which also share the properties of a Field. </p>
<p>There are many other types of objects such as Groups and Rings which are also studied because they have properties that we know exist for many specific cases.</p>
<p>Therefore, in conclusion, Abstract Algebra is the process of noticing similar properties in different mathematical objects and then seeing what we can prove about objects with these properties, in order to be able to say things about every object with those given properties.</p>
|
4,083,697 | <p>I'm thinking about the example <span class="math-container">$f(x)=(x-1)^2$</span> which is clearly symmetric about the line <span class="math-container">$x=1$</span>. The question is really how do you show that it is symmetric about <span class="math-container">$x=1$</span> algebraically? I notice that if you plug in <span class="math-container">$-x+2$</span> you end up with <span class="math-container">$y=(-x+1)^2$</span> which is equivalent to the original function. I'm looking for this answer because I'm curious how one would show a similar property for a function that is more tricky to graph.</p>
| user2661923 | 464,411 | <p>Alternative approach:</p>
<p><strong>To Prove</strong>: <span class="math-container">$~|z| \times |w| = |z \times w| ~: z,w \in \Bbb{C}.$</span></p>
<p>Sufficient to show that <span class="math-container">$|z|^2 \times |w|^2 = |z \times w|^2.$</span></p>
<p>Let <span class="math-container">$z = (x + iy), w = (u + iv).$</span></p>
<p><span class="math-container">$|z|^2 \times |w|^2 = (x^2 + y^2)(u^2 + v^2).$</span></p>
<p><span class="math-container">$|z \times w|^2$</span> <br>
<span class="math-container">$= |(x + iy)(u + iv)|^2 $</span> <br>
<span class="math-container">$= |(xu - yv) + i(xv + yu)|^2 $</span> <br>
<span class="math-container">$= (xu - yv)^2 + (xv + yu)^2 $</span> <br>
<span class="math-container">$= x^2u^2 + y^2v^2 - 2xuyv + x^2v^2 + y^2u^2 + 2xvyu $</span> <br>
<span class="math-container">$= x^2u^2 + y^2v^2 + x^2v^2 + y^2u^2 $</span> <br>
<span class="math-container">$= (x^2 + y^2)(u^2 + v^2)$</span> <br>
<span class="math-container">$= |z|^2 \times |w|^2.$</span></p>
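<p>The identity $|z|\,|w| = |zw|$ can be spot-checked numerically across random complex pairs (a small sketch, my addition):</p>

```python
import random

random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # multiplicativity of the modulus, up to floating-point error
    assert abs(abs(z) * abs(w) - abs(z * w)) < 1e-9

print("multiplicativity of |.| checked on 1000 random pairs")
```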
|
232,672 | <p>Yesterday, I posted a question that was received in a different way than I intended it. I would like to ask it again by adding some context. </p>
<p>In ZF one can prove $\not\exists x (\forall y (y\in x)).$ This statement can be read in many ways, such as (1) "there is no set of all sets" (2) "the class of all sets is proper (i.e. is not a set)" etc. and I believe that there is a substantial philosophical difference between (1) and (2). The former suggests that the existential quantifier refers to the actual existence of something intended in a platonic way, while the latter interprets $\exists$ as meaning "it is a set". So, in the second case, I would say that the existential quantifier is a way of singling out things that are sets from things that are not sets, rather than a way to claim actual existence of something. </p>
<p>I am a set theorist and I always intended the statement above as (2) because I don't think existential quantification in set theory refers to actual existence. I suspect that Zermelo also intended existential quantifications as a way of singling out sets from things that are not sets, because in his original formulation he introduced "urelements", i.e. objects that are not sets but could be elements of a set. But I am interested in what is the most common interpretation among contemporary set theorists and I have the impression that my colleagues in set theory use (1) more often. </p>
<p>So my question is: from the point of view of someone who believes that existential quantifiers in set theory refer to actual existence, does the statement above mean "the class of all sets does not exist"? Does this interpretation appear anywhere in the literature? </p>
<p>Thank you in advance. </p>
| Thomas Benjamin | 20,597 | <p>Considering what you wrote in your slide presentation "On the definitional character of axioms.", you might be interested in the following preprint by John L. Bell (found on his Homepage) titled "SETS AND CLASSES AS MANY". In it, he initially gives the following naive definition of set (following Cantor in his book, <em>Contributions to the Founding of the Theory of Transfinite Numbers</em>):</p>
<blockquote>
<p>Set theory is sometimes formulated by starting with two sorts of entities called <em>individuals</em> and <em>classes</em>, and then defining a <em>set</em> to be a <em>class as one</em>, that is, a <em>class which is at the same time an individual</em>...If on the other hand we insist--as we shall here--that classes are to be taken in the sense of <em>multitudes</em>, <em>pluralities</em>, of <em>classes as many</em>, then no class can be an individual and so, in particular, the concept of set will need to be redefined.</p>
</blockquote>
<p>What does "<em>class which is at the same time an individual</em>" mean? Well, following Cantor, since a set ("aggregate") is</p>
<blockquote>
<p>...any collection into a whole $M$ of definite and separate objects <em>m</em> of our intuition and our thought. These objects are called the "elements" of $M$...In signs we express this thus: $M$={<em>m</em>}.</p>
</blockquote>
<p>it seems reasonable to infer that a set is then a class which is a object, i.e. that which can be an element of another set (or even, possibly, of itself). This, however, produces an interesting variant of the Russell paradox, defined entirely in terms of <em>class as object</em>:</p>
<p>Is {x| x$\notin$x} both a class and object, that is, can {x| x$\notin$x} be an element of another set (including itself)? If {x| x$\notin$x} is both a class and an object, a contradiction follows because then {x| x$\notin$x} can be an element of itself. On the other hand, since {x| x$\notin$x} is, in some sense, a 'name' (label) of its elements, this 'name' can be an 'element' of some other collection, it must be deemed an 'object' and therefore can again be deemed an element of itself, and the contradiction again follows (perhaps the paradox ensues through a confusion between 'label' and that which is labelled, but then, can a class that is not itself an 'object' be labelled?).</p>
<p>Bell solves this problem in the following manner (though he does not explicitly mention the above version of the paradox):</p>
<blockquote>
<p>Now while we shall require a set to be a class of <em>some</em> kind, construing the class concept as "class as many" entails that sets can no longer <em>literally</em> be taken as individuals. So instead we shall take sets to be classes that are represented, or <em>labelled</em>, by individuals in an appropriate way. For simplicity we shall suppose that labels are attached, not just to sets, but to all classes: thus each class $X$ will be assigned an individual $\lambda$$X$ called its <em>label</em>. Now in view of Cantor's theorem that the number of classes of individuals exceeds the number of individuals, it is not possible for different classes always to be assigned distinct labels [consider now the argument put forth in Stanford Encyclopedia of Philosophy's (SEP's) entry "Frege's Theorem and Foundations for Arithmetic" (Edward Zalta's entry) that Russell's paradox is engendered because Second-order Logic + Basic Law V requires the impossible situation in which the domain of concepts (labels) has to be strictly larger than the domain of extensions (classes) while at the same time the domain of extensions has to be as large as the domain of concepts--my comment]. This being the case, we single out a subdomain $S$ of the domain of classes on which the labelling map $\lambda$ is one-to-one. The classes falling under $S$ will be identified as <em>sets</em>; and an individual which is the label of a set will be called an <em>identifier</em>.</p>
</blockquote>
<p>Bell now defines the dual notion of <em>colabelling</em>:</p>
<blockquote>
<p>For reasons of symmetry, it will be convenient (although not strictly necessary) to assume that, in addition to the operation of labelling each class by an individual, there is a reverse process--<em>colabelling</em>--which assigns a class [note here that classes then, of necessity, must exist as extensions--my comment] to each individual. Thus we shall suppose that to each individual <em>x</em> there corresponds a unique class <em>x*</em> called its <em>colabel</em>. Again, because of Cantor's theorem, not every class can be the colabel of an individual (although every individual can be the label of a class). However, it seems natural enough to stipulate that each <em>set</em> be the colabel of some individual, and indeed that this individual may be taken to be the label of the set in question. Thus we shall require that $X=\lambda(X)^{*}$ for every set $X$. In that event, for any identifier <em>x</em> in the above sense, we shall have <em>x</em> = $\lambda$(<em>x*</em>); that is, the colabel of an identifier is the set of which it is the label, or the set <em>labelled</em> by the identifier. Another way of putting this is to say that the restriction of the colabelling map to identifiers acts as an inverse to the restriction of the labelling map to sets.</p>
</blockquote>
<p>After dealing with singletons and the empty set in the following fashion</p>
<blockquote>
<p><em>Singletons</em> and the <em>empty class</em>--"multitudes" with just one, or no members respectively--are here regarded, like the "numbers" $\mathbf 1$ and $\mathbf 0$, as "ideal" entities introduced to enable the theory to be developed smoothly.</p>
</blockquote>
<p>he considers the problem of adequately defining the $\in$ relation:</p>
<blockquote>
<p>The <em>membership relation</em> $\in$ between individuals and classes is a primitive of our system. It will be taken as an <em>objective</em> relation in the sense suggested, for example, by the assertion that Lazare Carnot was a member of the Committee of Public Safety, or Polaris is a member of the constellation Ursa Minor. The fact that $\in$ is not iterable--there are no "$\in$-chains"--means that it can have very few intrinsic properties. This is to be contrasted with the relation $\epsilon$ of "membership" between <em>individuals</em>, defined by <em>x</em> $\epsilon$ <em>y</em> $\leftrightarrow$ <em>x</em> $\in$ <em>y*</em>: <em>x</em> is a member of the class <em>labelled by y</em>. This relation links the entities of the same sort and is, accordingly, iterable. It should be noted, however, that the presence of the colabelling map * in the definition of $\epsilon$ gives the latter a purely formal, arbitrary character</p>
</blockquote>
<p>Next, Bell uses the $\epsilon$-relation to present the notion of <em>nonwellfounded set</em>. However,</p>
<blockquote>
<p>In the usual set theories it is difficult to grasp the nature of a set which is, for example, identical with its own singleton since a set cannot be "formed" by assembling individuals. In the present scheme, on the other hand, the assertion $\alpha$={$\alpha$}--which is, as remarked above, not well-formed--is replaced by the assertion $\forall$$x$($x$$\epsilon$$\alpha$ $\leftrightarrow$ $x$=$\alpha$), that is, $\alpha^*$={$\alpha$}, which asserts that {$\alpha$} is identical, not with $\alpha$ itself, but rather with its colabel. Similarly, the self-membership assertion $\alpha$$\in$$\alpha$ is transformed into the statement $\alpha$ $\epsilon$ $\alpha$, that is, $\alpha$$\in$$\alpha^*$, which asserts that $\alpha$ belongs, not to itself, but merely to its colabel. And an assertion of cyclic membership $\alpha$$\in$$\mathop b$$\in$$\alpha$ is transformed into the assertion $\alpha$ $\epsilon$ $\mathop b$ $\in$ $\alpha$, or $\alpha$$\in$$\mathop b^*$& $\mathop b$$\in$$\alpha^*$, that is, "$\alpha$ (respectively $\mathop b$) is a member of the colabel of $\mathop b$(respectively $\alpha$)." These rephrasings appear much more natural in that they only impute the possession of curious properties to the colabelling map, rather than to the objective membership relation $\in$ itself.</p>
</blockquote>
<p>Considering the "naturalness" of the rephrasings, and the version of the Russell paradox stated above, one might reasonably infer that the confusion between labels and colabels is at the heart of the derivation of Russell's paradox, and by making the distinction between the two, Bell has found the 'best' way of ridding systems of set theory from it.</p>
<p>I say this because of what Prof. Bell says in the concluding paragraph of the "Introduction" to his paper:</p>
<blockquote>
<p>We shall also see that, in addition to nonwellfounded set theories, a number of other theories familiar from the literature can be provided with natural formulations within the system to be presented here [the theory $\mathbf M$ of <em>multitudes or classes as many</em>--my comment from the beginning sentence of Bell's Section 1]. These include second-order arithmetic, the set theories of Zermelo-Fraenkel, Morse-Kelley, and Ackermann, as well as a system in which Frege's construction of the natural numbers can be carried out. Each of these theories can therefore be seen as the result of imposing a particular condition on a common apparatus of labelling classes by individuals [these conditions therefore define what sets 'exist' within each individual theory, a result similar to yours--my comment].</p>
</blockquote>
|
353,658 | <p>Let $g : [0, 1] \rightarrow \mathbb{R}$ be twice differentiable with $g^{\prime \prime}(x) > 0$ for all $x \in [0,1]$. Suppose that $g(0) > 0$ and $g(1) = 1$. Prove if $g$ has a fixed point in $(0,1)$, then $g^{\prime}(1) > 1$.</p>
<p>My attempt: Define a function $h(x)=g(x)-x$. Since $g$ has a fixed point, say $c \in (0,1)$, we have $h(c)=g(c)-c=0$. </p>
<p>Notice that we have $h(c)=h(1)=0$; by Rolle's Theorem, there exists $d \in (c,1)$ such that $h^{\prime}(d)=0$</p>
<p>Applying the Mean Value Theorem to $h^{\prime}$ on $[d,1]$, there exists $e \in (d,1)$ such that $h^{\prime \prime}(e)=\frac{h^{\prime}(1)-h^{\prime}(d)}{1-d}$. Notice that we have $h^{\prime \prime}(x)=g^{\prime \prime}(x) >0 $ for all $x \in [0,1]$ and $1-d>0$. Hence, $h^{\prime}(1)>h^{\prime}(d)=0 \implies g^{\prime}(1) > 1$</p>
<p>Can anyone explain to me why we need to use Rolle's theorem here?</p>
| Sugata Adhya | 36,242 | <p>Rolle's theorem gives us a root of $h'$ at some point to the left of $1$, which lets us conclude the result using the strict monotonicity of $h'$ (since $h''>0$ on $[0,1]$ implies that $h'$ is strictly increasing on $[0,1]$).</p>
|
2,895,655 | <blockquote>
<p>Four coins of different colour are thrown. If three out of these show heads then find the probability that the remaining one shows tails. </p>
</blockquote>
<p>My approach:</p>
<p>$A$: The event in which 3 heads appear in 3 coins out of 4</p>
<p>$B$: The event in which the 4th coin shows tails</p>
<p>thus we need to find $P(\frac{B}{A})$</p>
<p>and we know that $P(\frac{B}{A})= \frac{P(A \cap B)}{P(A)}$</p>
<p>The ways in which 3 out of 4 coins can be chosen= $^4C_3$</p>
<p>$P(A)= ^4C_3 (\frac{1}{2})^3$</p>
<p>and</p>
<p>$ P(A \cap B)= ^4C_3 (\frac{1}{2})^4 $</p>
<p>so</p>
<p>$ P(\frac{B}{A})= \frac{1}{2} $</p>
<p>However the answer given is $\frac {4}{5}$. What am I doing wrong?</p>
| N. F. Taussig | 173,070 | <p>You have correctly calculated the probability $\Pr(A \cap B)$. Your error was in the calculation of $\Pr(A)$.</p>
<p>The sample space consists of those events in which at least three of the four coins display heads. The probability that at least three coins display heads is
$$\Pr(A) = \binom{4}{3}\left(\frac{1}{2}\right)^3\left(\frac{1}{2}\right)^1 + \binom{4}{4}\left(\frac{1}{2}\right)^4\left(\frac{1}{2}\right)^0 = \left[\binom{4}{3} + \binom{4}{4}\right]\left(\frac{1}{2}\right)^4 = 5\left(\frac{1}{2}\right)^4$$</p>
<p>Thus,
$$\Pr(B \mid A) = \frac{\Pr(A \cap B)}{\Pr(A)} = \frac{\dbinom{4}{3}\left(\dfrac{1}{2}\right)^4}{\left[\dbinom{4}{3} + \dbinom{4}{4}\right]\left(\dfrac{1}{2}\right)^4} = \frac{4}{5}$$</p>
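<p>The conditional probability can also be verified by brute-force enumeration of the 16 equally likely outcomes (a quick Python sketch, added for illustration; "three show heads" is read as "at least three heads", as in the calculation above):</p>

```python
from itertools import product

outcomes = list(product("HT", repeat=4))  # 16 equally likely coin outcomes

# Condition on the event A: at least three of the four coins show heads.
at_least_three_heads = [o for o in outcomes if o.count("H") >= 3]

# Within A, the favourable outcomes have exactly three heads (one tail).
exactly_three_heads = [o for o in outcomes if o.count("H") == 3]

p = len(exactly_three_heads) / len(at_least_three_heads)  # 4/5
```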
|
3,459,106 | <p>I have a function
<span class="math-container">$$
\frac{\ln x}{x}
$$</span> and I wonder: is <span class="math-container">$y=0$</span> an asymptote? It seems strange that at some point the graph actually crosses that asymptote. I know it meets the definition of an asymptote, but it still feels odd, if you understand me. :D</p>
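<p>A quick numerical sketch (Python, added for illustration) makes the picture concrete: the graph crosses the line $y=0$ exactly once, at $x=1$, and then approaches it from above as $x$ grows, which is perfectly compatible with $y=0$ being a horizontal asymptote:</p>

```python
import math

def f(x):
    # f(x) = ln(x) / x, defined for x > 0
    return math.log(x) / x

crossing = f(1.0)      # the graph meets y = 0 at x = 1
far_out  = f(1000.0)   # small and positive: approaching the asymptote
```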
| Marios Gretsas | 359,315 | <p>If <span class="math-container">$f_n$</span> are not continuous then it is not true.</p>
<p>Take <span class="math-container">$f_n=1_{[-\frac{1}{2},\frac{1}{2}]}+\frac{1}{n}$</span> on <span class="math-container">$[-1,1]$</span></p>
<p>where <span class="math-container">$1_A$</span> is the indicator function of the set <span class="math-container">$A$</span></p>
<p>You can see in many textbooks and notes that if the functions <span class="math-container">$f_n$</span> are continuous on a compact interval, then their uniform limit is continuous on that interval.</p>
|
42,957 | <p>I am an "old" programmer used to <em>Fortran</em> and <em>Pascal</em>. I can't get rid of <code>For</code>, <code>Do</code> and <code>While</code> loops, but I know <em>Mathematica</em> can do things much faster!</p>
<p>I am using the following code</p>
<pre><code>SeedRandom[3]
n = 10;
v1 = Range[n];
v2 = RandomReal[250., n];
a = {};
Do[
Do[
AppendTo[a, (v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]])
],
{j, i - 1, 1, -1}], {i, n, 2, -1}
]; // Timing
</code></pre>
<p>If <code>n</code> is small, it runs fast enough, but for bigger <code>n</code> it slows down. I usually deal with <code>n > 600</code>.</p>
<p>How can the code be made faster?</p>
| WalkingRandomly | 4,786 | <p>Your version:</p>
<pre><code>SeedRandom[3]
n = 250;
v1 = Range[n];
v2 = RandomReal[250., n];
a = {};
Do[
Do[
AppendTo[a,
(v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]])],
{j, i - 1, 1, -1}], {i, n, 2, -1}]; // AbsoluteTiming
</code></pre>
<p>on my machine this takes 1.85 seconds</p>
<p>This version</p>
<pre><code>AbsoluteTiming[A = Flatten[
Table[(v2[[i]] - v2[[j]])/(i-j), {i, n, 2, -1}, {j,
i - 1, 1, -1}]]
]
</code></pre>
<p>Takes 0.112 seconds</p>
<p>They give the same result. </p>
<pre><code>In[75]:= a == A
Out[75]= True
</code></pre>
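<p>The same lesson carries over to other languages: build the whole collection at once instead of growing it element by element. A hypothetical Python rendering of the <code>Table</code> version (not part of the original Mathematica answer) might look like:</p>

```python
import random

def pairwise_quotients(v2):
    """All difference quotients (v2[i]-v2[j])/(i-j) for i > j.

    Built in one comprehension -- the analogue of Table -- rather than
    growing a list inside nested loops, the analogue of AppendTo
    (which copies the whole list on every call in Mathematica)."""
    n = len(v2)
    return [(v2[i] - v2[j]) / (i - j)
            for i in range(n - 1, 0, -1)
            for j in range(i - 1, -1, -1)]

random.seed(3)
v2 = [random.uniform(0.0, 250.0) for _ in range(250)]
q = pairwise_quotients(v2)
```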
|
42,957 | <p>I am an "old" programmer used to <em>Fortran</em> and <em>Pascal</em>. I can't get rid of <code>For</code>, <code>Do</code> and <code>While</code> loops, but I know <em>Mathematica</em> can do things much faster!</p>
<p>I am using the following code</p>
<pre><code>SeedRandom[3]
n = 10;
v1 = Range[n];
v2 = RandomReal[250., n];
a = {};
Do[
Do[
AppendTo[a, (v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]])
],
{j, i - 1, 1, -1}], {i, n, 2, -1}
]; // Timing
</code></pre>
<p>If <code>n</code> is small, it runs fast enough, but for bigger <code>n</code> it slows down. I usually deal with <code>n > 600</code>.</p>
<p>How can the code be made faster?</p>
| ciao | 11,467 | <p>Just changing it to something like:</p>
<pre><code>Table[(v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]]), {i, n, 2, -1}, {j, i - 1, 1, -1}] // Flatten
</code></pre>
<p>Should net you a nice boost. Edit - oops, ninja'd</p>
<p>About twice as fast as any so far on large N:</p>
<pre><code>s = Subsets[Range[n, 1, -1], {2}];
{i1, i2} = {s[[All, 1]], s[[All, 2]]};
result = Divide[Subtract[v2[[i1]], v2[[i2]]], i1 - i2];
</code></pre>
<p>Edit - Simon beat me to the faster way to create indexes. His can be improved by removing the whole latter subtraction, as in mine, netting his a 25% boost in my tests.</p>
|
4,219,614 | <p>In proving a change of basis theorem in linear algebra, our professor drew this diagram and simply stated that because all the outer squares in this diagram commute, the inner square (green) must also commute (I didn't write the exact mappings, because I think this question is more about diagram chasing and that it isn't really relevant).</p>
<p>I can't, however, figure out why this is true. This class is also my first exposure to commutative diagrams, so please explain with as much detail as possible.</p>
<p><a href="https://i.stack.imgur.com/k5jsY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k5jsY.png" alt="enter image description here" /></a></p>
<hr />
<p>Edit: it turned out the question wasn't fine as stated, since "in general there is no implication either way", but "if the diagonal arrows are isomorphisms then the inner square commutes if and only if the (big) outer square commutes".</p>
<p>In my case, the diagonal arrows actually are isomorphisms, which is why I am posting the extended diagram.</p>
<p><a href="https://i.stack.imgur.com/FNhGI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FNhGI.png" alt="enter image description here" /></a></p>
<p>To clear some notation:</p>
<ol>
<li><span class="math-container">$A: U \to V$</span> is a linear map between vector spaces <span class="math-container">$U$</span> and <span class="math-container">$V$</span>. First bases for <span class="math-container">$U, V$</span> are <span class="math-container">$B, C$</span>. Another possible bases for <span class="math-container">$U, V$</span> are <span class="math-container">$B', C'$</span>.</li>
<li><span class="math-container">$\phi$</span>'s are isomorphisms</li>
<li><span class="math-container">$P_{XY}$</span> is a change-of-basis matrix from <span class="math-container">$Y$</span> to <span class="math-container">$X$</span></li>
</ol>
<p>I would still appreciate if someone would help me understand why the inner square commutes, because I am lost.</p>
| Troposphere | 907,303 | <blockquote>
<p>because all the outer squares in this diagram commute, the inner square (green) must also commute</p>
</blockquote>
<p>That is not true in general. You can make a counterexample in the category of vector spaces by letting the green square be your favorite <em>non-commuting</em> square and then declare that the four outer objects in the <em>black</em> square are all the trivial space <span class="math-container">$\{0\}$</span>. Then the outer squares will commute automatically, because linear transformations always take <span class="math-container">$0$</span> to <span class="math-container">$0$</span>.</p>
<p>So you need to know more about the maps before you can conclude the green square commutes.</p>
<hr />
<p><em>After question was updated:</em> We now know that the blue arrows are isomorphisms -- then it's quite different.</p>
<p>A useful principle is: <em>if you have a commuting square (or other diagram) and replace an isomorphism with its inverse, the resulting diagram still commutes</em>.</p>
<p>In your case, you can flip just <span class="math-container">$\Phi_B$</span> in your diagram, and you should now be able to show step by step that the green square commutes.</p>
<p>(In algebraic notation, what's going on is just that, for example, <span class="math-container">$A_{CB}\circ \Phi_B = \Phi_C\circ A$</span> implies <span class="math-container">$A_{CB} = \Phi_C \circ A \circ \Phi_B^{-1}$</span> when we compose with <span class="math-container">$\Phi_B^{-1}$</span> on the right.)</p>
|
<p>I should clarify that I'm asking for intuition or informal explanations. I'm just starting out in math and have never taken set theory, so I'm not asking about formal set theory or a hard abstract answer. </p>
<p>From Gary Chartrand page 216 Mathematical Proofs - </p>
<p>$\begin{align} \text{ range of } f & = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\
& =
\{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p>
<p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} & = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\
& = \{2n + 1 :n \in \mathbb{Z}\} \end{align}$</p>
<p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p>
<p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p>
<p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br>
But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br>
$ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p>
<p>Question 2. Axiom of replacement --- why $\{ f(x) \mid x \in S \}$? NOT $\color{green}{\{ \; x \in S \mid f(x) \; \}}$?</p>
<p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what ZF, extensionality, Fraenkel's, many-one, class functions, Cantor's arithmetic of infinities, and the like are. </p>
| Community | -1 | <p>There are two basic ways to "build" sets, aside from listing their elements.</p>
<p>The first is the axiom of subsets: if you already have a set $S$ and you want to create the subset of $S$ of things satisfying some property $P$, then the usual notation is</p>
<p>$$ \{ x \in S \mid P(x) \} $$</p>
<p>For example, the set of even numbers is <code>{ x in Z | x is divisible by 2}</code>. This is sometimes written in the form of "unrestricted comprehension":</p>
<p>$$ \{ x \mid x \in S \wedge P(x) \} $$</p>
<p>e.g. <code>{ x | x is an integer divisible by 2}</code>. You have to be a little careful with this variation: sometimes it doesn't actually define a set. (relevant keywords: "proper class" and "class builder notation")</p>
<p>The second is the axiom of replacement: if you have a set $S$, and you have a function $f$ that you can apply to elements of $S$, then you can create a new set by using $f$ to transform each element of $S$. This is usually written</p>
<p>$$ \{ f(x) \mid x \in S \} $$</p>
<p>For example, we could again define the set of even integers as <code>{ 2x | x in Z }</code>.</p>
<p>Some other variations exist, but these are the two principal ways to use set builder notation to define sets.</p>
<hr>
<p>Edit: to elaborate on some other versions, one could use an 'all-in-one' version. The computer algebra system <code>magma</code> does this, for instance. We use the notation</p>
<p>$$ \{ f(x) : x \in S \mid P(x) \} $$</p>
<p>to represent the set (or class) of all of the values $f(x)$ where $x$ is an element of $S$ that satisfies the predicate $P$. (the choice of <code>:</code> and <code>|</code> in the notation is based on what magma uses)</p>
<p>In this form, the axiom-of-subsets version of class version becomes</p>
<p>$$ \{ x : x \in S \mid P(x) \} $$</p>
<p>and one could view the usual notation as being a shortened form of this. In mathematical writing, $x \in S$ usually gets folded into the predicate $P$ (or it is implicit from context that $x$ is a variable of 'type' $S$). That is, people often write</p>
<p>$$ \{ f(x) \mid P(x) \} $$</p>
<p>for the class of all values $f(x)$ such that $x$ satisfies the predicate $P$. This may be a set; sometimes it is obviously so: e.g. the set of integers divisible by 6 could be written as</p>
<blockquote>
<p>{ 3x | (x in Z) and (x is divisible by 2) }</p>
</blockquote>
<p>which is obviously a set, since class of elements satisfying the predicate is clearly a subclass of the set of integers.</p>
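<p>As an aside, the two construction schemes map directly onto Python set comprehensions (an informal analogy, not part of the set-theoretic story):</p>

```python
S = range(20)

# Axiom of subsets:  { x in S | P(x) }
evens_subset = {x for x in S if x % 2 == 0}

# Axiom of replacement:  { f(x) | x in S }  with f(x) = 2x
evens_replacement = {2 * x for x in range(10)}
```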
|
1,272,124 | <p>We know that $1+2+3+4+5+\cdots+n=n(n+1)/2$</p>
<p>I spent a lot of time trying to get a formula for this sum, but I could not get it:</p>
<p>$(2 + 4 + 6 + \cdots + 2n)$</p>
<p>I tried writing out the sums of the first few terms. Of course I saw some pattern among the sums, but the formula I got still didn't give the correct sum for other terms.</p>
<p>Is there another way of solving this question?</p>
| Adhvaitha | 228,265 | <p>Note that
$$2+4+6+\cdots+2n = 2\left(1+2+3+\cdots+n\right) = 2 \cdot \dfrac{n(n+1)}2 = n(n+1)$$</p>
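<p>The identity is easy to sanity-check numerically (a small Python sketch):</p>

```python
def sum_of_evens(n):
    # 2 + 4 + 6 + ... + 2n, summed directly
    return sum(range(2, 2 * n + 1, 2))

# compare against the closed form n * (n + 1)
checks = [sum_of_evens(n) == n * (n + 1) for n in range(1, 101)]
```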
|
2,593,361 | <p>I’m trying to solve what I’ll call the p-Laplace Equation which is</p>
<p>$$\Delta_p u = 0$$</p>
<p>where $\Delta_p u$ is the p-Laplacian. It is defined as </p>
<p>$$\Delta_p u = \nabla \cdot (|\nabla u|^{p-2} \nabla u).$$</p>
<p>Any ideas? I haven’t seen this in a book or anything. I just thought that by analogy, there should be a solution to this equation too. Are there any properties of p-Harmonic functions like there are for Harmonic functions? </p>
| fleablood | 280,126 | <p>By the Archimedean principle there is a unique integer $m$ so that</p>
<p>$mn \le x < (m+1) n$</p>
<p>And, likewise, there is a unique integer $a$ so that $a \le x < a + 1$.</p>
<p>So $mn\le a \le x < a+1 \le (m+1)n$</p>
<p>And $m \le \frac an \le \frac xn < \frac an + \frac 1 n \le m+1$</p>
<p>From the above it is clear $[x] = a$, $[\frac xn] = m$ and $[\frac an]=[\frac {[x]} n] = m$.</p>
<p>The only real issue I elided over is assuming $mn\le a$ and that $a+1 \le (m+1)n$. Which... should be obvious. $a = \max \{z\in \mathbb Z|z \le x\}$ by definition, and $mn \in \{z\in \mathbb Z|z \le x\}$ so $mn \le a$. And $a + 1 = \min\{z\in \mathbb Z|z > x\}$ by definition and $(m+1)n \in \{z\in \mathbb Z|z > x\}$ so $a+1 \le (m+1)n$.</p>
<p>==== old and hard to read =====</p>
<p>Let $x = a + f$ where $a \in \mathbb Z$ and $0 \le f < 1$. </p>
<p>ANd let $a = m*n + r$ where $r, m\in \mathbb Z$ and $0 \le r < n$.</p>
<p>By the Archimedean principle we <em>can</em> make such statements for unique $m,r,a,f$.</p>
<p>$\frac xn = \frac an + \frac fn < \frac an + \frac 1n$.</p>
<p>And since $m\le \frac an = m + \frac rn < m+1$ we have $[\frac an] = m$ and</p>
<p>$m \le \frac an < m + 1$ then $\frac {a+1}n \le m+ 1$ and $\frac an + \frac fn < m+1$. And $m\le \frac an + \frac fn = \frac xn < m+1$</p>
<p>So $[\frac xn ] = m$.</p>
<p>And $[\frac an] = m$ </p>
<p>And $[x] = a$.</p>
<p>So that's it.</p>
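<p>The identity $\left[\frac xn\right] = \left[\frac {[x]}n\right]$ proved above is easy to spot-check numerically (a small Python sketch; <code>math.floor</code> plays the role of $[\cdot]$):</p>

```python
import math
import random

def floors_agree(x, n):
    # [x/n] == [[x]/n] for real x and positive integer n
    return math.floor(x / n) == math.floor(math.floor(x) / n)

random.seed(0)
trials = [(random.uniform(-100.0, 100.0), random.randint(1, 20))
          for _ in range(1000)]
```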
|
3,820,929 | <p>When I looked at my notes, I realized something I had not noticed before. It concerns a modular arithmetic question.</p>
<p>The question is <span class="math-container">${\sqrt 2} \pmod7$</span></p>
<p>It is a very trivial question. The solution is: if <span class="math-container">$x \equiv {\sqrt 2} \pmod7$</span>, then <span class="math-container">$x^{2} \equiv ({\sqrt 2})^{2} \pmod7$</span></p>
<p><span class="math-container">$\therefore x^{2} \equiv 2 \pmod7$</span> and <span class="math-container">$x=+3,-3,+4,-4$</span></p>
<p>However, there is something I am stuck on. How can we work with <span class="math-container">${\sqrt 2}$</span>, given the definition of modular arithmetic? It says that</p>
<p><span class="math-container">$a \equiv b \pmod m$</span> where <span class="math-container">$a,b$</span> are integers and <span class="math-container">$m$</span> is positive integer.I think that <span class="math-container">${\sqrt 2}$</span> contradicts with the definiton of modular arithmetic because it is not an integer.</p>
<p>Can you enlighten me? What am i missing ?</p>
<p>NOTE: Someone might suggest that when you square both sides, <span class="math-container">${\sqrt 2}$</span> turns out to be an integer.</p>
<p>My answer to this question: Yes, it turns out to be an integer, but in order to raise <span class="math-container">${\sqrt 2}$</span> to a power, it must already be an integer, because the definition says that <span class="math-container">$a^{e} \equiv b^{e} \pmod m$</span> where <span class="math-container">$a,b$</span> are integers and <span class="math-container">$m,e$</span> are positive integers.</p>
| Crostul | 160,300 | <p>You are working in the field of numbers modulo <span class="math-container">$7$</span>, namely <span class="math-container">$\Bbb Z / (7)$</span> which is often denoted by <span class="math-container">$\Bbb F_7$</span>.</p>
<p>Now, <span class="math-container">$2$</span> is an element of <span class="math-container">$\Bbb F_7$</span>.
Another element <span class="math-container">$x \in \Bbb F_7$</span> is said to be a square root of <span class="math-container">$2$</span> if <span class="math-container">$x^2 = 2 \pmod 7$</span>.</p>
<p>It happens that theoretically <span class="math-container">$2$</span> may have two square roots or none. This means that it it does not make sense to talk about "the square root of <span class="math-container">$2$</span>", since it may not exist, or there may be more than one.</p>
<p>The two square roots of <span class="math-container">$2$</span> in <span class="math-container">$\Bbb F_7$</span> are <span class="math-container">$3$</span> and <span class="math-container">$4$</span> (which is <span class="math-container">$-3$</span>), since their square is <span class="math-container">$2 \pmod 7$</span>.
This can be written using the notation <span class="math-container">$$\sqrt 2 = \pm 3 \pmod 7$$</span> which is equivalent to say that
<span class="math-container">$$2 = ( \pm 3)^2 \pmod 7$$</span></p>
<p>However this is just notation, it has nothing to deal with the real number <span class="math-container">$\sqrt 2 \in \Bbb R$</span>.</p>
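<p>The square roots of $2$ in $\Bbb F_7$ can be found by brute force (a quick Python sketch, added for illustration):</p>

```python
def square_roots(a, m):
    # all x in {0, ..., m-1} with x^2 = a (mod m)
    return [x for x in range(m) if (x * x - a) % m == 0]

roots = square_roots(2, 7)     # [3, 4], i.e. +-3 mod 7
no_roots = square_roots(3, 7)  # 3 is not a square mod 7
```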
|
24,927 | <p>Does this mean that the first homotopy group in some sense contains more information than the higher homotopy groups? Is there another generalization of the fundamental group that can give rise to non-commutative groups in such a way that these groups contain more information than the higher homotopy groups? </p>
| Ronnie Brown | 28,586 | <p>These problems puzzled the early topologists: in fact Cech's paper on higher homotopy groups was rejected for the 1932 Int. Cong. Math. at Zurich by Hopf and Alexandroff, who quickly proved they were abelian. We now know this is because group objects in groups are abelian groups. However group objects in the category of groupoids are NOT just abelian groups, but are equivalent to crossed modules, which occurred in the 1940s in relation to second relative homotopy groups, $\pi_2(X,A,x)$. It turns out that there is a nice <em>double groupoid</em> $\rho_2(X,A,x)$ consisting of homotopy classes of maps of a square $I^2$ to $X$ which map the edges to $A$ and the vertices to $x$. (The proof that the compositions are well defined is not quite trivial!). Using this Philip Higgins and I proved a 2-d van Kampen Theorem, published in Proc. LMS 1978, i.e. 34 years ago, from which one can deduce new results on the nonabelian second relative homotopy groups, as crossed modules over the fundamental group. </p>
<p>This is the start of using <strong>strict</strong> higher homotopy groupoids for obtaining nonabelian calculations in higher homotopy theory -- see the web page in my comment, and references there. </p>
<p>This idea came from examining in 1965 a proof of the 1-dim van Kampen theorem for the fundamental groupoid, and observing that it ought to generalise to higher dimensions if one had the <em>right</em> homotopical gadgets. It took years to get the idea that this could be done for pairs of spaces, filtered spaces, or $n$-cubes of spaces, but apparently not easily just for spaces, or spaces with base point. </p>
|
1,843,274 | <p>Good evening to everyone. So I have this inequality: $$\frac{\left(1-x\right)}{x^2+x} <0 $$ It becomes $$ \frac{\left(1-x\right)}{x^2+x} <0 \rightarrow \left(1-x\right)\left(x^2+x\right)<0 \rightarrow x^3-x>0 \rightarrow x\left(x^2-1\right)>0 $$ Therefore from the first $ x>0 $, from the second $ x_1 = 1 $ and $x_2=-1$ therefore $ x $ belongs to $(-\infty,-1)$ and $(1,\infty)$ therefore $x$ belongs to $(1,\infty)$. But on the answer sheet it shows that it's defined on $(-1,0)$ and $(1,\infty)$. Where I am wrong? Thanks for any response.</p>
| Aman Rajput | 307,098 | <p>In inequality questions, first you need to make all the coefficients of $x$ positive.</p>
<p>We can write it as
$$\frac{x-1}{x(x+1)}>0$$</p>
<p>Now, multiply both sides by $(x(x+1))^2$, which is always greater than $0$.
Hence we are left with
$$x(x-1)(x+1)>0$$</p>
<p>Solving this, the critical points are at $x=-1,0,1$.
Plot a number line and mark these points. Mark the interval on the extreme right-hand side with $+$, and then, going from right to left, mark alternating $-$, then $+$, and so on.</p>
<p>Now read off the $+$ intervals on the number line; the solution is
$$ x \in (-1,0) \cup (1,\infty)$$</p>
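<p>The sign analysis can be spot-checked numerically (a quick Python sketch; the original inequality $\frac{1-x}{x^2+x}<0$ should hold exactly on $(-1,0)\cup(1,\infty)$):</p>

```python
def f(x):
    # the left-hand side of the original inequality
    return (1 - x) / (x ** 2 + x)

inside  = [-0.9, -0.5, -0.1, 1.5, 10.0, 100.0]  # points in (-1,0) U (1,oo)
outside = [-3.0, -2.0, 0.3, 0.5, 0.9]           # points outside it
```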
|
4,489,675 | <p>When saying that in a small time interval <span class="math-container">$dt$</span> the velocity changes by <span class="math-container">$d\vec v$</span>, so that the acceleration <span class="math-container">$\vec a$</span> is <span class="math-container">$d\vec v/dt$</span>, are we not assuming that <span class="math-container">$\vec a$</span> is constant in that small interval <span class="math-container">$dt$</span>? Otherwise, accounting for a change in acceleration <span class="math-container">$d\vec a$</span>, the expression should have been <span class="math-container">$\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$</span> (again assuming the rate of change of acceleration is constant). By that argument, I could also say that <span class="math-container">$\vec v$</span> is constant in that time interval and so <span class="math-container">$\vec a = \vec 0$</span>.</p>
<p>Can someone point out where exactly I have gone wrong? Also, this was just an example; my question is general.</p>
| Community | -1 | <p>It is important to be careful when working with infinitesimals. The answer by @mmesser314 is a good answer (+1 from me) which is described in terms of limits and the so-called standard analysis. In that analysis an infinitesimal is not a number. More specifically, an infinitesimal is not a real number.</p>
<p>However, it is not the only possible rigorous approach. If we use the <a href="https://en.wikipedia.org/wiki/Hyperreal_number" rel="noreferrer">hyperreal numbers</a> then we can indeed treat infinitesimals as actual numbers. In the hyperreal numbers an infinitesimal is a positive number that is smaller than any positive real number. (That statement can be made precise, but I am going for the concept rather than for mathematical rigor). A finite hyperreal is then a real number, <span class="math-container">$x$</span>, plus an infinitesimal, <span class="math-container">$\epsilon$</span>. If you take the "standard part", denoted <span class="math-container">$\mathrm{st}$</span>, of a hyperreal <span class="math-container">$x+\epsilon$</span> then you get the real number without the infinitesimal, <span class="math-container">$\mathrm{st}(x+\epsilon)=x$</span>.</p>
<p>Now, with that you can think of <span class="math-container">$dv$</span> and <span class="math-container">$dt$</span> as being legitimate infinitesimal numbers. The derivative of a function, <span class="math-container">$f$</span>, is then defined as: <span class="math-container">$$\dot f(x) = \mathrm{st}\left( \frac{f(x+dx)-f(x)}{dx} \right)$$</span></p>
<p>So, let's see how this applies for your example of a non-constant acceleration. Let's say that we have <span class="math-container">$v(t)=b t^2 + c t$</span> and <span class="math-container">$a(t)=\dot v(t)$</span>. Now, we will not assume that <span class="math-container">$a$</span> is constant but we will apply the definition above: <span class="math-container">$$\dot v(t) = \mathrm{st}\left( \frac{v(t+dt)-v(t)}{dt} \right)=\mathrm{st}\left( \frac{2 \ b \ t \ dt +c \ dt + b \ dt^2}{dt} \right)= \mathrm{st}\left(2bt+c+b \ dt \right)$$</span> Now, notice that the last term inside the <span class="math-container">$\mathrm{st}$</span> is infinitesimal, so it is dropped and we are left with <span class="math-container">$$\dot v(t)=2bt+c$$</span></p>
<p>So even treating infinitesimals as valid numbers and not treating the acceleration as constant we are able to get the correct result. This is because of the way that the <span class="math-container">$\mathrm{st}$</span> function chops off any remaining infinitesimals. Roughly using your original terminology if we are treating infinitesimals as valid hyperreal numbers then <span class="math-container">$$\vec a = \mathrm{st}\left(\frac{d\vec v}{dt} - \frac{d\vec a}{2}\right) = \frac{d\vec v}{dt}$$</span></p>
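<p>The standard-part computation above has a familiar floating-point shadow: for a small but finite $dt$ the difference quotient equals $2bt + c + b\,dt$ exactly, and shrinking $dt$ "chops off" the infinitesimal-like remainder (a Python sketch for illustration, with arbitrarily chosen $b$ and $c$):</p>

```python
b, c = 2.0, 3.0

def v(t):
    # v(t) = b t^2 + c t, as in the example above
    return b * t * t + c * t

def difference_quotient(t, dt):
    # equals 2*b*t + c + b*dt exactly (up to rounding)
    return (v(t + dt) - v(t)) / dt
```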
|
299,452 | <p>According to wiki: <a href="https://en.wikipedia.org/wiki/Dedekind_eta_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Dedekind_eta_function</a>, the Dedekind eta function is defined in many equivalent forms, but none of them is an explicit description (say, in algorithmic form) of how to compute it. Where can I find such a description? Thanks!</p>
| André Henriques | 5,690 | <p>Euler's formula</p>
<p>$$
\sum\limits_{n \in \mathbb{Z}} {( - 1)^n q^{\frac{{(3n^2 - n)}}
{2}} } = \prod\limits_{n = 1}^\infty {(1 - q^n ),}
$$</p>
<p>(which can be proven from Jacobi’s triple product identity by using the fact that $\prod\limits_{n = 1}^\infty {(1 - q^{3n} )(1 - q^{3n - 2} )} (1 - q^{3n - 1} ) = \prod\limits_{n = 1}^\infty {(1 - q^n )}
$)
provides a good way of numerically computing </p>
<p>$$
\eta (\tau )=e^{\frac {\pi {\rm {{i}\tau }}}{12}}\prod _{n=1}^{\infty }(1-e^{2n\pi {\rm {{i}\tau }}})=q^{\frac {1}{24}}\prod _{n=1}^{\infty }(1-q^{n}).
$$</p>
<p>I hope this answers your question.</p>
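<p>Put together, the formulas above give a straightforward algorithm: truncate either the pentagonal-number series or the product at some depth $N$, and multiply the product by the prefactor $q^{1/24} = e^{\pi i \tau/12}$. A Python sketch (the truncation depths are arbitrary choices, adequate when $|q|$ is well below $1$):</p>

```python
import cmath

def euler_sum(q, N=60):
    # truncated pentagonal-number series: sum_n (-1)^n q^{(3n^2 - n)/2}
    return sum((-1) ** n * q ** ((3 * n * n - n) // 2) for n in range(-N, N + 1))

def euler_product(q, N=200):
    # truncated product: prod_{n=1}^{N} (1 - q^n)
    p = 1.0 + 0j
    for n in range(1, N + 1):
        p *= 1.0 - q ** n
    return p

def dedekind_eta(tau, N=200):
    # eta(tau) = e^{pi i tau / 12} * prod_{n>=1} (1 - e^{2 pi i n tau})
    q = cmath.exp(2j * cmath.pi * tau)
    return cmath.exp(1j * cmath.pi * tau / 12) * euler_product(q, N)
```

For instance, <code>dedekind_eta(1j)</code> should come out close to the known value $\eta(i)=\Gamma(1/4)/(2\pi^{3/4})\approx 0.7682$.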
|
3,197,331 | <p><span class="math-container">$X, Y$</span> are sets and <span class="math-container">$f : X → Y$</span> is a function. Show the equivalence of the following statements:</p>
<p>(i) <span class="math-container">$f$</span> is injective</p>
<p>(ii) <span class="math-container">$f^{-1}\!\bigl(f(A) \bigr)=A \quad \text{for all}~ A \subset X$</span></p>
| Ross Millikan | 1,827 | <p>If you compute the expected value, each outcome after an initial tails contributes <span class="math-container">$\frac 14$</span> to the sum, so the sum diverges. This would say I should pay any price to play the game, which is counterintuitive as the chance of winning very much money is very small. The usual resolution is that you cannot pay me more than some amount of money, so we should cut off the sum at that point. When we do that, the expected value is <span class="math-container">$\frac14$</span> times the base <span class="math-container">$2$</span> log of the amount of money you can pay me. If you can pay me <span class="math-container">$\$1,000,000$</span> I should pay <span class="math-container">$\$5$</span> or so to play.</p>
|
3,717,144 | <p>Suppose M is a finitely generated non-zero R-module, where R is a commutative unital ring. Show that the tensor product of M with itself is non-zero.</p>
<p>I know one way to show this is to find an R-bilinear map which is nonzero, but am not sure how to find it.</p>
| o.h. | 630,261 | <p>Not sure this is the best proof, but here goes. Note that <span class="math-container">$M\otimes_R M = 0$</span> if and only if <span class="math-container">$(M\otimes_R M)_{\mathfrak p} = 0$</span> for all primes <span class="math-container">$\mathfrak p\subset R$</span> (Atiyah-MacDonald 3.8). But
<span class="math-container">$$
(M\otimes_R M)_{\mathfrak p}\cong M_\mathfrak p\otimes_{R_{\mathfrak p}}M_{\mathfrak p}
$$</span>
by Atiyah-MacDonald 3.7.</p>
<p>Hence we have reduced to the case where <span class="math-container">$R$</span> is a local ring. The statement now follows from the fact that tensor product is faithful for finitely-generated modules over local rings, e.g. exercise 3 in the second chapter of Atiyah-MacDonald. (This is a consequence of Nakayama's lemma, as far as I remember.)</p>
<p><strong>Remark.</strong> So why doesn't the argument given here show that tensor products are faithful over any ring via reduction to the local case? Simply because a pair of nonzero modules <span class="math-container">$M$</span> and <span class="math-container">$N$</span> might be supported on disjoint sets of primes. For instance, this is the case when <span class="math-container">$R = \mathbb Z$</span>, <span class="math-container">$M = \mathbb Z/p$</span> and <span class="math-container">$N = \mathbb Z/q$</span> for distinct primes <span class="math-container">$p,q$</span>. Then <span class="math-container">$\mathbb Z /p\otimes\mathbb Z/q = 0$</span> because, with <span class="math-container">$r$</span> ranging over primes, <span class="math-container">$(\mathbb Z/p)_{(r)} = 0$</span> for <span class="math-container">$r\neq p$</span> and similarly for <span class="math-container">$q$</span>.</p>
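<p>The closing example can be checked concretely: over <span class="math-container">$\mathbb Z$</span> one has <span class="math-container">$\mathbb Z/m\otimes_{\mathbb Z}\mathbb Z/n\cong \mathbb Z/\gcd(m,n)$</span>, so the tensor product vanishes precisely when <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are coprime, for instance distinct primes as above. A tiny sketch (the helper name is mine):</p>

```python
from math import gcd

def order_of_tensor_of_cyclic(m, n):
    """Z/m (x)_Z Z/n is cyclic of order gcd(m, n); order 1 means the
    tensor product is the zero module."""
    return gcd(m, n)

print(order_of_tensor_of_cyclic(5, 7))  # 1: Z/5 (x) Z/7 = 0
print(order_of_tensor_of_cyclic(4, 6))  # 2: Z/4 (x) Z/6 = Z/2, nonzero
```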
|
855,227 | <p>I'm trying to understand Taylor's Theorem for functions of $n$ variables, but all this higher dimensionality is causing me trouble. One of my problems is understanding the higher order differentials. For example, if I have a function $f(x, y)$, then its first differential is: </p>
<p>$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$</p>
<p>To me this quantity is saying that: </p>
<blockquote>
<p>A differential change in the value of function $f(x,y)$ is equal to how fast function $f(x,y)$ is changing
with respect to $x$ multiplied by a differential change in the
$x$-coordinate plus how fast function $f(x,y)$ is changing with
respect to $y$ multiplied by a differential change in the
$y$-coordinate.</p>
</blockquote>
<p>This seems intuitive. But when we get into higher order differentials I get confused: </p>
<p>$$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2$$</p>
<p>How would one interpret this quantity? What about even higher order differentials, say $d^3f$ or $d^{1500}f$? =) </p>
<p>Thank you for any help! =) </p>
| Community | -1 | <p>In $$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2,$$ the term $\frac{\partial^2 f}{\partial y ^2}dy^2$ measures how $\frac{\partial f}{\partial y}$ changes with respect to $y$, and likewise $\frac{\partial^2 f}{\partial x ^2}dx^2$ measures how $\frac{\partial f}{\partial x}$ changes with respect to $x$. The cross term $2\frac{\partial^2 f}{\partial y \partial x}dy\:dx$ accounts both for how $\frac{\partial f}{\partial y}$ changes with respect to $x$ and for how $\frac{\partial f}{\partial x}$ changes with respect to $y$.</p>
<p>You can similarly interpret the terms appearing in higher order differentials.</p>
|
855,227 | <p>I'm trying to understand Taylor's Theorem for functions of $n$ variables, but all this higher dimensionality is causing me trouble. One of my problems is understanding the higher order differentials. For example, if I have a function $f(x, y)$, then its first differential is: </p>
<p>$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$</p>
<p>To me this quantity is saying that: </p>
<blockquote>
<p>A differential change in the value of function $f(x,y)$ is equal to how fast function $f(x,y)$ is changing
with respect to $x$ multiplied by a differential change in the
$x$-coordinate plus how fast function $f(x,y)$ is changing with
respect to $y$ multiplied by a differential change in the
$y$-coordinate.</p>
</blockquote>
<p>This seems intuitive. But when we get into higher order differentials I get confused: </p>
<p>$$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2$$</p>
<p>How would one interpret this quantity? What about even higher order differentials, say $d^3f$ or $d^{1500}f$? =) </p>
<p>Thank you for any help! =) </p>
| Steven Gubkin | 34,287 | <p>I think all of this makes a lot more sense when you approach it from a multilinear setup.</p>
<p>If $f: \mathbb{R}^2 \to \mathbb{R}$ is a function, then its differential $df$ gives a different linear map at each point. Fundamentally we have </p>
<p>$$
f(\mathbf{p}+\vec{v})\approx f(\mathbf{p})+df(\mathbf{p}, \vec{v})
$$</p>
<p>Now the second differential ($d^2f$ in your notation), should be something which records how $df$ changes from one point to the next. In other words, we should like </p>
<p>$$
df(\mathbf{p}+\vec{w}, \vec{v}) \approx df(\mathbf{p},\vec{v})+d^2f(\mathbf{p},\vec{w},\vec{v})
$$</p>
<p><strong>Slogan</strong>: "$d^2 f$ is the gadget which takes two vectors $\vec{v}$ and $\vec{w}$ and spits out the approximate change in $df$ from $p$ to $p+\vec{w}$ when it is evaluated in the direction $\vec{v}$ " </p>
<p>At a given point $p$, this gadget $d^2f$ should be linear in both $\vec{w}$ and $\vec{v}$. So it is a multilinear function which varies from point to point, aka a $2$-<a href="http://en.wikipedia.org/wiki/Tensor" rel="nofollow">tensor</a> field! </p>
<p>We can figure out an expression for $d^2f$ as follows:</p>
<p>$$
\frac{\partial ^2 f}{\partial x^2} dx \otimes dx + \frac{\partial ^2 f}{\partial x \partial y} dx \otimes dy + \frac{\partial ^2 f}{\partial y \partial x} dy \otimes dx + \frac{\partial ^2 f}{\partial y^2} dy \otimes dy
$$</p>
<p>Taylor's theorem comes about when you try to approximate changes not just in $df$, but carry those through to changes in $f$. You do that, basically, by starting from one point and restricting your method of approximation to a line segment. So you only ever plug the same vector into both arguments of the second differential, which means you are really working with the associated quadratic form.</p>
<p>If you would like to come to an understanding of Taylor's theorem along the lines suggested here, I recommend you check out my online course here:</p>
<p><a href="http://ximera.osu.edu/course/kisonecat/m2o2c2/course/" rel="nofollow">http://ximera.osu.edu/course/kisonecat/m2o2c2/course/</a></p>
<p>It will guide you through a selection of exercises which gradually build in difficulty, with you developing most of the mathematics yourself. A copious hint system (which more often than not prompts you with a simpler or related question) should ensure that you can get through it.</p>
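<p>The slogan can also be tested symbolically: evaluating the $2$-tensor $d^2f$ on the pair $(\vec w,\vec w)$ is the same as taking the second derivative of $f$ along the straight line $t\mapsto p+t\vec w$. A sketch with a hypothetical test function (all names below are mine):</p>

```python
import sympy as sp

x, y, wx, wy, t = sp.symbols('x y w_x w_y t')
f = x**2 * y + sp.sin(y)   # a hypothetical test function

# the 2-tensor d^2 f evaluated on the pair (w, w): the quadratic form w^T H w
H = sp.hessian(f, (x, y))
w = sp.Matrix([wx, wy])
quad_form = sp.expand((w.T * H * w)[0])

# the slogan, specialised to the line p + t*w: second derivative in t at t = 0
f_on_line = f.subs({x: x + t * wx, y: y + t * wy}, simultaneous=True)
second_deriv = sp.expand(sp.diff(f_on_line, t, 2).subs(t, 0))

print(sp.simplify(quad_form - second_deriv))  # 0
```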
|
3,479,144 | <p>Let <span class="math-container">$(X,M,\mu)$</span> be a measure space and <span class="math-container">$f \in L^{1}(X,\mu)$</span>. Then show that for <span class="math-container">$E \in M$</span>, <span class="math-container">$\lim_{k \rightarrow \infty} \int_{E} |f|^{1/k} = \mu(E)$</span>. I am able to show this in the case that <span class="math-container">$\mu(E) < \infty$</span>, but I do not know how to proceed when <span class="math-container">$\mu(E) = \infty$</span>. For sets of finite measure, we can use <span class="math-container">$ ||f||_{1} \cdot \chi_{E} \in L^{1}$</span> as a bound and use dominated convergence. </p>
| Lubin | 17,760 | <p>If you’re merely asking whether there are infinitely many <span class="math-container">$n$</span> for which the expansion of <span class="math-container">$\sqrt n$</span> is of that form, that’s easy, since the expansion of <span class="math-container">$\sqrt{n^2+2n}$</span> is <span class="math-container">$[n,\overline{1,2n}]$</span>. Notice that <span class="math-container">$n^2+2n$</span> is just one less than the square <span class="math-container">$(n+1)^2$</span>.</p>
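<p>The claimed expansion is easy to check numerically with the standard recurrence for the continued fraction of <span class="math-container">$\sqrt d$</span> (the helper below is a sketch of mine; <span class="math-container">$d$</span> must not be a perfect square):</p>

```python
import math

def sqrt_cf(d, terms=6):
    """First few continued-fraction digits of sqrt(d) via the standard
    recurrence (d must not be a perfect square)."""
    a0 = math.isqrt(d)
    cf, m, q, a = [a0], 0, 1, a0
    for _ in range(terms):
        m = a * q - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        cf.append(a)
    return cf

# sqrt(n^2 + 2n) should give [n; 1, 2n, 1, 2n, ...]
print(sqrt_cf(15))  # n = 3: [3, 1, 6, 1, 6, 1, 6]
print(sqrt_cf(24))  # n = 4: [4, 1, 8, 1, 8, 1, 8]
```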
|
184,699 | <p>First, we make the following observation: let $X: M \rightarrow TM $ be a vector
field on a smooth manifold. Taking the contraction with respect to $X$ twice gives zero, i.e.
$$ i_X \circ i_{X} =0.$$
Is there any "name" for the corresponding "homology" group that one can define
(Kernel mod image)? Has this "homology" group been studied by others (there are plenty of questions that one can ask........is it isomorphic to anything more familiar etc etc). </p>
<p>Similarly, a dual observation is as follows: Let $\alpha$ be a one form; taking
the wedge product with $\alpha$ twice gives us zero. One can again define kernel
mod image. Does that give anything "interesting"? </p>
<p>If people have investigated these questions, I would like to know a few references. </p>
<p>My purpose for asking the "name" of the (co)homology group is so that I can make a google search using the name. I was unable to do that, since I do not know of any key words under this topic (or if at all it is a topic).</p>
| Qiaochu Yuan | 290 | <p>You can get something "interesting" if you couple the $1$-form version with the de Rham differential. Namely, if $\alpha \in \Omega^1(X)$ is a closed $1$-form on a smooth manifold $X$, then the de Rham complex can be equipped with a twisted de Rham differential $d + \alpha$, by which I mean</p>
<p>$$(d + \alpha) \beta = d \beta + \alpha \wedge \beta.$$</p>
<p>This is a differential since</p>
<p>$$(d + \alpha)^2 \beta = d (\alpha \wedge \beta) + \alpha \wedge (d \beta) = 0$$</p>
<p>(since we assumed $d \alpha = 0$), and its cohomology is a twisted version of de Rham cohomology; in fact it's precisely de Rham cohomology with local coefficients given by the local system associated to the (exponential of the?) image of $\alpha$ in $H^1(X, \mathbb{R}) \cong \text{Hom}(\pi_1(X), \mathbb{R})$. One way to think about this construction is to think of $d + \alpha$ as a flat connection on the trivial line bundle over $X$. </p>
<p>But actually the argument above proceeds just fine if we generalize $\alpha$ to an odd form $\alpha \in \Omega^{2k+1}(X)$, although this collapses the $\mathbb{Z}$-grading on de Rham cohomology to a $\mathbb{Z}_2$-grading. When $\alpha \in \Omega^3(X)$ the corresponding twisted de Rham cohomology groups are the recipients of twisted Chern characters for twisted K-theory; see, for example, Atiyah and Segal's <a href="http://arxiv.org/abs/math/0510674">Twisted K-theory and cohomology</a>. </p>
|
507,062 | <p>I need your help with a lifelong problem I have always had. There are two things in life I hate most: 1) weddings and 2) long division. For the life of me I hate division. I am horrible at it. I can multiply large numbers in my head but I can’t divide if my life depended on it. When the division operation is 2 digits over 1 or 2 digits, it’s not a major issue. But when I divide anything 3+ digits over a 2-digit number, it becomes an issue.</p>
<p>I am trying to find a way to divide that appeals to my ability to multiply. I am trying to find another way to approach a division problem than the typical long division approach.</p>
<p>For example: 876/2. Simple enough; Ans: 438. This isn’t a problem; finding the new approach is.
I am trying to break this division number into single digits.
$$8=2(4)+0$$
$$7=2(3)+1$$
$$6=2(3)+0$$</p>
<p>My problem is how does this approach get me to 438? The numbers in the parenthesis get me to 433 with a summed remainder of 1. If I carry the one and put it on the first digit of my number, 3, I get 434.</p>
<p>In typical long division, the remainder is carried over. So
$$8=2(4)+0$$
$$7=2(3)+1 $$(the remainder 1 is carried down)
$$16=2(8)+0$$</p>
<p>Is there a way to do long division by doing the division individually on each number and then summing the remainder?</p>
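<p>For what it is worth, the carrying scheme in the last display is exactly "short division", and it can be written out as a small program (the function name is mine):</p>

```python
def divide_digitwise(n, d):
    """Short division: process the dividend digit by digit, carrying the
    remainder into the next digit as described above."""
    quotient_digits = []
    rem = 0
    for ch in str(n):
        cur = rem * 10 + int(ch)          # the carried remainder becomes tens
        quotient_digits.append(cur // d)  # one quotient digit per step
        rem = cur % d
    return int(''.join(map(str, quotient_digits))), rem

print(divide_digitwise(876, 2))  # (438, 0)
```

<p>So the remainders cannot simply be summed at the end; each one has to be carried into the very next digit, because a remainder of $1$ in the hundreds place is worth $10$ in the tens place.</p>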
| Patrick Da Silva | 10,704 | <p>Assuming $x \neq 0$, then yes. Otherwise you're dividing by zero.</p>
<p>Hope that helps,</p>
|
507,062 | <p>I need your help with a lifelong problem I have always had. There are two things in life I hate most: 1) weddings and 2) long division. For the life of me I hate division. I am horrible at it. I can multiply large numbers in my head but I can’t divide if my life depended on it. When the division operation is 2 digits over 1 or 2 digits, it’s not a major issue. But when I divide anything 3+ digits over a 2-digit number, it becomes an issue.</p>
<p>I am trying to find a way to divide that appeals to my ability to multiply. I am trying to find another way to approach a division problem than the typical long division approach.</p>
<p>For example: 876/2. Simple enough; Ans: 438. This isn’t a problem; finding the new approach is.
I am trying to break this division number into single digits.
$$8=2(4)+0$$
$$7=2(3)+1$$
$$6=2(3)+0$$</p>
<p>My problem is how does this approach get me to 438? The numbers in the parenthesis get me to 433 with a summed remainder of 1. If I carry the one and put it on the first digit of my number, 3, I get 434.</p>
<p>In typical long division, the remainder is carried over. So
$$8=2(4)+0$$
$$7=2(3)+1 $$(the remainder 1 is carried down)
$$16=2(8)+0$$</p>
<p>Is there a way to do long division by doing the division individually on each number and then summing the remainder?</p>
| amWhy | 9,003 | <p>Notice that the denominators are equal, save for a factor of $-1$, which has no impact since the fraction is squared: $$\; (4 - x^2)^2 = (-(x^2 - 4))^2 = (x^2 - 4)^2$$</p>
<p>So multiply both sides by $(x^2 - 4)^2$, and you'll cancel both denominators. Then there's no need to divide by $5x$.
After multiplying both sides by $(x^2 - 4)^2$, we get
$$\left (x^{2}+6\right )^{2}= \left ( 5x \right )^{2}\iff (x^4 + 12x^2 + 36) - 25x^2 = 0 \iff x^4 - 13x^2 + 36 = (x^2 - 9)(x^2 - 4) = (x+3)(x-3)(x+2)(x-2) = 0 $$</p>
<p>But we need to throw out the solutions $x = 2, -2$ because the equation is not defined there.</p>
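<p>The computation can be confirmed with a CAS; a sympy sketch (variable names are mine):</p>

```python
import sympy as sp

x = sp.symbols('x')
# after multiplying both sides by (x^2 - 4)^2
roots = sp.solve(sp.Eq((x**2 + 6)**2, (5 * x)**2), x)
# throw out x = 2, -2, where the original equation is undefined
valid = [r for r in roots if r**2 != 4]
print(sorted(roots), sorted(valid))  # [-3, -2, 2, 3] [-3, 3]
```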
|
2,970,787 | <blockquote>
<p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p>
</blockquote>
<p>If I divide the whole expression by the maximum power, i.e. <span class="math-container">$x^2$</span>, I get <span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span>
The numerator tends to <span class="math-container">$1$</span>, the denominator tends to <span class="math-container">$0$</span>.</p>
<p>So I get the answer as <span class="math-container">$+\infty$</span></p>
<p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span></p>
<p>What am I missing here? "Can someone give me the precise steps that I should write in such a case." Thank you very much!</p>
<p>NOTE: I cannot use L'hopital for finding this limit.</p>
| lisyarus | 135,314 | <p>The point is that the denominator does not just tend to zero, it tends to zero from the left, i.e. through negative values.</p>
<p>Alternatively, rewrite like this:</p>
<p><span class="math-container">$$\frac{(x-1)^2}{x+1} = \frac{(x+1)^2-4x}{x+1}=x+1-\frac{4}{1+\frac{1}{x}}$$</span></p>
<p>which clearly tends to <span class="math-container">$-\infty$</span> as <span class="math-container">$x \rightarrow -\infty$</span>.</p>
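<p>A quick sanity check of the sign with a CAS (sympy here):</p>

```python
import sympy as sp

x = sp.symbols('x')
expr = (x - 1)**2 / (x + 1)

print(sp.limit(expr, x, -sp.oo))  # -oo
print(sp.limit(expr, x, sp.oo))   # oo
```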
|
3,078,707 | <p>The question above is about equation <span class="math-container">$(2.4)$</span> of the following paper:</p>
<p><a href="http://www.jmlr.org/papers/volume6/tsuda05a/tsuda05a.pdf" rel="nofollow noreferrer">MATRIX EXPONENTIATED GRADIENT UPDATES</a>.</p>
<p>Let <span class="math-container">$M$</span> and <span class="math-container">$N$</span> be two <span class="math-container">$n \times n$</span> positive definite matrices where <span class="math-container">$M=U\Lambda U^{\top}$</span>, <span class="math-container">$N=\tilde{U}\tilde{\Lambda} \tilde{U}^{\top}
$</span> and <span class="math-container">$(\lambda_i,v_i)$</span> are eigenpairs of <span class="math-container">$M$</span>, likewise for <span class="math-container">$N$</span>.</p>
<p>How to show the following
<span class="math-container">$$\text{Tr}(M\log N)=\sum_{i,j}\lambda_i\log(\tilde{\lambda_j})(u_i^{\top}\tilde{u}_j)^2$$</span></p>
<p>First, I do not know what <span class="math-container">$i,j$</span> mean in the summation, and why it is written with two summation indices. Second, how does one derive it?</p>
<p>My try:</p>
<p><span class="math-container">\begin{align}
\text{Tr}(M\log N) &=
\text{Tr}(U\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}) \\
& = \text{Tr}(\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}U)
\end{align}</span></p>
<p>How can I proceed using matrix calculus to get the result not by expanding? what is the hidden trick?</p>
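<p>For what it is worth, the identity can be verified numerically: <span class="math-container">$i$</span> runs over the eigenpairs of <span class="math-container">$M$</span> and <span class="math-container">$j$</span> over those of <span class="math-container">$N$</span>, which is why there are two summation indices. A sketch with random positive definite matrices (all names are mine):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# random symmetric positive definite M and N (the shift guarantees positivity)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)
N = B @ B.T + n * np.eye(n)

lam, U = np.linalg.eigh(M)     # columns of U are the u_i
tlam, Ut = np.linalg.eigh(N)   # columns of Ut are the tilde-u_j

# left-hand side: Tr(M log N) with log N = Ut diag(log tlam) Ut^T
logN = Ut @ np.diag(np.log(tlam)) @ Ut.T
lhs = np.trace(M @ logN)

# right-hand side: double sum over eigenpairs of M (index i) and N (index j)
rhs = sum(lam[i] * np.log(tlam[j]) * (U[:, i] @ Ut[:, j]) ** 2
          for i in range(n) for j in range(n))

print(np.isclose(lhs, rhs))  # True
```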
| Jack D'Aurizio | 44,121 | <p>Just exploit the principle that an ellipse is a stretched circle.<br>
Let <span class="math-container">$P$</span> be our point, <span class="math-container">$\tau$</span> the known tangent through <span class="math-container">$P$</span>, and <span class="math-container">$O$</span> the center of the ellipse.</p>
<p><a href="https://i.stack.imgur.com/UFMQh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UFMQh.png" alt="enter image description here"></a></p>
<p>We find the <span class="math-container">$x$</span>-compression factor sending the ellipse into a circle in the following way:</p>
<ul>
<li>we define <span class="math-container">$R$</span> as the intersection between <span class="math-container">$\tau$</span> and the <span class="math-container">$y$</span>-axis and we draw the circle having <span class="math-container">$OR$</span> as a diameter;</li>
<li>we define <span class="math-container">$Q$</span> as the point with a positive abscissa that lies on the previous circle and also on the parallel to the <span class="math-container">$x$</span>-axis through <span class="math-container">$P$</span>;</li>
<li>we draw the circle centered at <span class="math-container">$O$</span> through <span class="math-container">$Q$</span>: this is the ellipse <em>after</em> the compression, and the point <span class="math-container">$S$</span> in the picture is one of the vertices of the ellipse;</li>
<li>we define <span class="math-container">$T$</span> as the intersection between the compressed ellipse (i.e. the previous circle) and the positive <span class="math-container">$x$</span>-axis, <span class="math-container">$U$</span> as the intersection between <span class="math-container">$TQ$</span> and the <span class="math-container">$y$</span>-axis (we use this point to scale back the ellipse);</li>
<li>the intersection <span class="math-container">$V$</span> between <span class="math-container">$UP$</span> and the <span class="math-container">$x$</span>-axis is another vertex of the ellipse;</li>
<li>we draw a circle centered at <span class="math-container">$S$</span> with radius <span class="math-container">$OV$</span>: its intersections with the <span class="math-container">$x$</span>-axis are the foci of the ellipse.</li>
</ul>
<p>I leave to you to convert this geometric approach into a formula for the coordinates of <span class="math-container">$S,V,F_1,F_2$</span> given the coordinates of <span class="math-container">$P$</span> and the slope of <span class="math-container">$\tau$</span>.</p>
|
3,078,707 | <p>The question above is about equation <span class="math-container">$(2.4)$</span> of the following paper:</p>
<p><a href="http://www.jmlr.org/papers/volume6/tsuda05a/tsuda05a.pdf" rel="nofollow noreferrer">MATRIX EXPONENTIATED GRADIENT UPDATES</a>.</p>
<p>Let <span class="math-container">$M$</span> and <span class="math-container">$N$</span> be two <span class="math-container">$n \times n$</span> positive definite matrices where <span class="math-container">$M=U\Lambda U^{\top}$</span>, <span class="math-container">$N=\tilde{U}\tilde{\Lambda} \tilde{U}^{\top}
$</span> and <span class="math-container">$(\lambda_i,v_i)$</span> are eigenpairs of <span class="math-container">$M$</span>, likewise for <span class="math-container">$N$</span>.</p>
<p>How to show the following
<span class="math-container">$$\text{Tr}(M\log N)=\sum_{i,j}\lambda_i\log(\tilde{\lambda_j})(u_i^{\top}\tilde{u}_j)^2$$</span></p>
<p>First, I do not know what <span class="math-container">$i,j$</span> mean in the summation, and why it is written with two summation indices. Second, how does one derive it?</p>
<p>My try:</p>
<p><span class="math-container">\begin{align}
\text{Tr}(M\log N) &=
\text{Tr}(U\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}) \\
& = \text{Tr}(\Lambda U^{\top}
\tilde{U}\log(\tilde{\Lambda})\tilde{U}^{\top}U)
\end{align}</span></p>
<p>How can I proceed using matrix calculus to get the result not by expanding? what is the hidden trick?</p>
| Servaes | 30,382 | <p>A general ellipse, with axes parallel to the coordinate axes, is given by an equation of the form
<span class="math-container">$$u(x-x_0)^2+v(y-y_0)^2=1.$$</span>
The ellipse is symmetric w.r.t. the <span class="math-container">$x$</span>-axis so <span class="math-container">$y_0=0$</span>, and because <span class="math-container">$(-a,0)$</span> is the center of the ellipse we have <span class="math-container">$(0,0)$</span> and <span class="math-container">$(-2a,0)$</span> on the ellipse, so <span class="math-container">$x_0=-a$</span> and <span class="math-container">$u=\frac{1}{a^2}$</span> (I will assume that <span class="math-container">$a\neq0$</span> throughout). Hence the ellipse is given by
<span class="math-container">$$\frac{1}{a^2}(x+a)^2+vy^2=1,$$</span>
for some parameters <span class="math-container">$a$</span> and <span class="math-container">$v$</span> that are yet to be determined. </p>
<p>Given two points <span class="math-container">$(p_x,p_y)$</span> and <span class="math-container">$(q_x,q_y)$</span> that define the wall, we want the ellipse to pass through their midpoint
<span class="math-container">$$(r_x,r_y):=\left(\frac{p_x+q_x}{2},\frac{p_y+q_y}{2}\right),$$</span>
which means that
<span class="math-container">$$\frac{1}{a^2}(r_x+a)^2+vr_y^2=1.$$</span>
This already allows us to express <span class="math-container">$v$</span> in terms of <span class="math-container">$a$</span> as
<span class="math-container">$$v=\frac{1-\frac{1}{a^2}(r_x+a)^2}{r_y^2}=-\frac{r_x^2+2ar_x}{a^2r_y^2},\tag{1}$$</span>
where we assume that <span class="math-container">$r_y\neq0$</span>. Then it remains to determine <span class="math-container">$a$</span>.</p>
<p>The slope of the tangent line to the ellipse at the point <span class="math-container">$(r_x,r_y)$</span> can be found by implicit differentiation; we find
<span class="math-container">$$\frac{dy}{dx}(r_x)=-\frac{r_x+a}{a^2vr_y},$$</span>
where we assume that <span class="math-container">$v\neq0$</span>. The normal has the negative reciprocal slope <span class="math-container">$\frac{a^2vr_y}{r_x+a}$</span>, and plugging in expression (<span class="math-container">$1$</span>) for <span class="math-container">$v$</span> this simplifies to
<span class="math-container">$$-\frac{r_x^2+2ar_x}{(r_x+a)r_y}.$$</span>
The normal must coincide with the line through <span class="math-container">$(p_x,p_y)$</span> and <span class="math-container">$(q_x,q_y)$</span>; since both lines already pass through the midpoint <span class="math-container">$(r_x,r_y)$</span>, it suffices to equate their slopes:
<span class="math-container">$$-\frac{r_x^2+2ar_x}{(r_x+a)r_y}=\frac{q_y-p_y}{q_x-p_x}.$$</span>
Clearing denominators turns this into a linear equation in <span class="math-container">$a$</span>. Solving yields
<span class="math-container">$$a=-\frac{r_x\bigl((q_x-p_x)r_x+(q_y-p_y)r_y\bigr)}{2(q_x-p_x)r_x+(q_y-p_y)r_y}.$$</span>
With this we can also compute <span class="math-container">$v$</span> by means of equation (<span class="math-container">$1$</span>), and hence the equation for the ellipse.</p>
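<p>The recipe can be carried out with a CAS: impose that the ellipse passes through the midpoint (which fixes <span class="math-container">$v$</span> via equation (<span class="math-container">$1$</span>)) and that the wall is normal to the ellipse there, then solve for <span class="math-container">$a$</span>. A sketch with hypothetical wall endpoints:</p>

```python
import sympy as sp

a = sp.symbols('a', real=True, nonzero=True)

# hypothetical wall endpoints; r is their midpoint (here r_y != 0)
px, py, qx, qy = map(sp.Integer, (0, 2, 2, 4))
rx, ry = (px + qx) / 2, (py + qy) / 2

# v from the pass-through condition, equation (1) in the answer
v = (1 - (rx + a)**2 / a**2) / ry**2

# implicit differentiation of (x+a)^2/a^2 + v*y^2 = 1 gives
# dy/dx = -(x+a)/(a^2*v*y); evaluate at the midpoint r
slope_tangent = -(rx + a) / (a**2 * v * ry)
m_pq = (qy - py) / (qx - px)

# the wall must be normal to the ellipse at r: tangent slope * m_pq = -1
sols = sp.solve(sp.Eq(slope_tangent * m_pq, -1), a)
print(sols, [v.subs(a, s) for s in sols])  # [-4/5] [5/48]
```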
|
192,020 | <p>I suspect this is a duplicate, but I can't seem to find what I'm looking for.</p>
<p>A routine problem I have is the following.</p>
<p>I have a set of data in three (or two, or more) lists:</p>
<pre><code>l1={a1, a2, a3}
l2={b1, b2, b3, b4}
l3={{c1, c2, c3, c4}, {d1, d2, d3, d4}, {e1, e2, e3, e4}}
</code></pre>
<p>where <code>c1</code> is a result under condition <code>{a1, b1}</code>, <code>c2</code> is a result under condition <code>{a1, b2}</code>, etc.</p>
<p>I want to create the list:</p>
<pre><code>{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3},{a1, b4, c4}, {a2, b1, d1}, ...}
</code></pre>
<p>in preparation for creating a string to export to a text file. </p>
<p>My current solution:</p>
<pre><code>Map[Transpose[{l2, #}] &, l3]
MapIndexed[Prepend[#1, l1[[#2[[1]]]]] &, %, {2}]
Flatten[%, 1]
</code></pre>
<p>This works, but the solution isn't intuitive to me, which makes me think there's a better way. </p>
<p>Is there a preferred approach for this task?</p>
| Coolwater | 9,754 | <pre><code>Join[Tuples[{l1, l2}], ArrayReshape[l3, {Times @@ Dimensions[l3], 1}], 2]
</code></pre>
<blockquote>
<p>{{a1, b1, c1}, {a1, b2, c2}, {a1, b3, c3}, ...}</p>
</blockquote>
<p>Though if the elements of <code>l3</code> are lists of unequal length, then <code>ArrayReshape</code>/<code>Dimensions</code> won't work.<br>
To avoid that problem you could write</p>
<pre><code>ArrayReshape[Riffle[Tuples[{l1, l2}], #], {Length[#], 3}] &[Catenate[l3]]
</code></pre>
|
1,728,097 | <p>So I have this integral: $$ \int_0^\infty e^{-xy} dy = -\frac{1}{x} \Big[ e^{-xy} \Big]_0^\infty$$
The integration part is fine, but I'm not sure what I get with the limits. Can someone explain this?</p>
<p>Thanks </p>
| Mnifldz | 210,719 | <p>You need to take the limit as $y$ ranges from $0$ to $\infty$. For $x > 0$ we have $e^{-xy} \to 0$ as $y \to \infty$, so this is simply</p>
<p>$$
\left . - \frac{1}{x} e^{-xy} \right |_0^\infty \;\; =\;\; 0 - \left(-\frac{1}{x}\right) \;\; =\;\; \frac{1}{x}.
$$</p>
|
1,728,097 | <p>So I have this integral: $$ \int_0^\infty e^{-xy} dy = -\frac{1}{x} \Big[ e^{-xy} \Big]_0^\infty$$
The integration part is fine, but I'm not sure what I get with the limits. Can someone explain this?</p>
<p>Thanks </p>
| Edward Evans | 312,721 | <p>Taking a limit is how many integrals of this form can be evaluated. Let $c$ be a real number, assume $x > 0$, and let $I = \int_0^\infty e^{-xy} dy.$ Then,</p>
<p>$$I = \lim_{c \to \infty} \int_0^c e^{-xy} dy = \lim_{c \to \infty} \left[ -\frac{1}{x}e^{-xy}\right]_0^c = 0 -\left(-\frac{1}{x} \right) = \frac{1}{x}.$$</p>
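<p>A symbolic check (sympy, with $x$ declared positive, since the integral only converges for $x>0$):</p>

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
val = sp.integrate(sp.exp(-x * y), (y, 0, sp.oo))
print(val)  # 1/x
```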
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Math Gems | 75,092 | <p>$\begin{eqnarray} \rm{\bf Hint}\ \ &&\rm3\ \ divides\ \ a\! +\! 10\,b\! +\! 100\, c\! +\! 1000\,d\! + \cdots\\
\iff &&\rm 3\ \ divides\ \ a\! +\! b\! +\! c\! +\! d\! +\! \cdots +\color{#c00}9\,b\! +\! \color{#c00}{99}\,c\! +\! \color{#c00}{999}\,d\! + \cdots\\
\iff &&\rm3\ \ divides\ \ a\! +\! b\! +\! c\! +\! d + \cdots\ \ by\ \ 3\ \ divides\ \ \color{#c00}{9,\ 99,\ 999,\,\ldots}\end{eqnarray}$</p>
<p>Above we used that $\rm\ n + 3m\ $ is divisible by $\rm\,3\iff n\:$ is divisible by $\,3.$</p>
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| LinAlgMan | 49,785 | <p>Yes, it is true. Let
$$n = a_k a_{k-1} \ldots a_1 a_0 $$
be the decimal representation of the integer, where $0 \le a_i \le 9$ are its digits, that is,
$$ n = \sum_{i=0}^k a_i \cdot 10^i \ . $$
Since $10^i \equiv 1 \pmod 3$ ($10=3 \cdot 3 + 1$, $100 = 3 \cdot 33 + 1$, $1000 = 3 \cdot 333 + 1$ and so on), we can write
$$ n = \sum_{i=0}^k a_i \cdot 10^i \equiv \sum_{i=0}^k a_i \pmod{3} \ , $$
so $n$ is divisible by $3$ if and only if the sum of its digits is divisible by $3$:
$$ n \equiv 0 \pmod{3} \iff \sum_{i=0}^k a_i \equiv 0 \pmod{3} \ . $$
Q.E.D.</p>
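<p>The recursive digit-sum test from the question is easy to implement and to check exhaustively against direct divisibility (the function names are mine):</p>

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def divisible_by_3(n):
    """Apply the digit-sum rule recursively until one digit remains."""
    while n >= 10:
        n = digit_sum(n)
    return n in (0, 3, 6, 9)

print(divisible_by_3(1212582439), divisible_by_3(124524))  # False True
```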
|
3,159,199 | <p>I worked a problem on finding the relative extrema of the following function:
<span class="math-container">$f(x)=x^2$</span> on <span class="math-container">$[−2,2].$</span></p>
<p>The answer said that there is no relative maximum for this function because relative extrema cannot occur at the endpoints of a domain.
Why is this so?
Thank you.</p>
| Allawonder | 145,126 | <p>The answer is correct. Why it is so follows immediately from the definition.</p>
<p>There's a difference between an extreme value and a <em>relative</em> extreme value of a function at a point. While your function has a maximum value at <span class="math-container">$\pm2,$</span> it does not have a <em>relative</em> maximum at all. The reason is that a relative extreme value at a point <span class="math-container">$c$</span> is, by definition, only defined when the function is defined over an interval of the form <span class="math-container">$[c-\delta,c+\delta],$</span> where <span class="math-container">$\delta>0.$</span> Another example is <span class="math-container">$\sqrt x,$</span> which has no relative minimum at <span class="math-container">$x=0,$</span> though it has a minimum there.</p>
<p>In mathematics, the words are very important and never superfluous. Pay attention to such innocuous qualifiers as <em>relative.</em></p>
<p>Good luck in your studies.</p>
|
1,162,315 | <blockquote>
<p>(b) An electrical circuit comprises three closed loops giving the following equations for the currents $i_1, i_2$ and $i_3$</p>
<p>\begin{align*}
i_1 + 8i_2 + 3i_3 &= -31\\
3i_1 - 2i_2 + i_3 &= -5\\
2i_1 - 3i_2 + 2i_3 &= 6
\end{align*}</p>
</blockquote>
<p>This is the system I need to solve. How do I solve for all three?</p>
<p>Any help would be of great help. But I need step by step instructions for each unknown. Thanks</p>
| Peter | 82,961 | <p>Hint: Multiply the first equation by $3$ and subtract the second to get one equation
containing only $i_2$ and $i_3$. Multiply the first equation by $2$ and subtract
the third to get another equation containing only $i_2$ and $i_3$.
The result is</p>
<p>$$26i_2+8i_3=-88$$
$$19i_2+4i_3=-68$$</p>
<p>Now multiply the second of these equations by two and subtract the first.</p>
<p>You get $12i_2=-48$ , so $i_2=-4$. Use one of the two intermediate equations
to get $i_3$ and finally calculate $i_1$ using one of the original equations.</p>
<p>The final result is $i_1=-5$ , $i_2=-4$ , $i_3=2$.</p>
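<p>The same elimination can be delegated to a linear solver; a sketch:</p>

```python
import numpy as np

# coefficient matrix and right-hand side of the three loop equations
A = np.array([[1.0, 8.0, 3.0],
              [3.0, -2.0, 1.0],
              [2.0, -3.0, 2.0]])
b = np.array([-31.0, -5.0, 6.0])

i1, i2, i3 = np.linalg.solve(A, b)
print(i1, i2, i3)  # approximately -5, -4, 2
```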
|
3,415,331 | <p>It is easy to show that (for continuous functions <span class="math-container">$f$</span>)
<span class="math-container">$$\exists c>0,\exists\alpha>1, \forall x\in \mathbb{R}: |x|^\alpha |f(x)| <c \implies \int |f|dx <\infty$$</span></p>
<p>The question is whether this is also a necessary condition. I could not come up with any counterexamples (<span class="math-container">$1/(x \log x)$</span>, for instance, is still not integrable). But I am also not able to prove it. And it is difficult to search for, even though someone has probably already asked this question.</p>
<hr>
<p>UPDATE: It appears this is not true in general, even for smooth functions <span class="math-container">$f\in C^\infty$</span>, due to the ability to create smooth bumps of fixed height at regular intervals with a decreasing base, making them integrable (see comments and answers). So in order to avoid that: what about monotone functions?</p>
<p>I am trying to understand whether or not there is a function with a decrease "between" <span class="math-container">$1/x$</span> and <span class="math-container">$1/x^\alpha$</span>. Sorry for moving the goalposts.</p>
<hr>
<p>Proof, that it is sufficient:
<span class="math-container">$$\int |f(x)| dx \le \int_{|x|<1} |f(x)| dx + \int_{|x|>1} |x|^{-\alpha}|x|^\alpha |f(x)|dx<2\|f\|_{\infty,[-1,1]} + c\int_{|x|>1} |x|^{-\alpha}dx<\infty$$</span></p>
<p>Fix due to the helpful question whether or not <span class="math-container">$|x|^{-\alpha}$</span> is integrable.</p>
| user284331 | 284,331 | <p>If it were, then <span class="math-container">$|f(x)|<c/|x|^{\alpha}$</span> for <span class="math-container">$|x|>1$</span> which entails that <span class="math-container">$f(x)\rightarrow 0$</span> as <span class="math-container">$x\rightarrow\infty$</span>. So we are asking whether integrable functions will eventually converge to zero, there are plenty of counterexamples.</p>
<p>One counterexample would be <span class="math-container">$f(x)=1$</span> for <span class="math-container">$n\leq x\leq n+1/2^{n}$</span> and zero otherwise.</p>
<p>For the updated question, take <span class="math-container">$f(x)=1/(2(\log 2)^{2})$</span> for <span class="math-container">$0\leq x\leq 2$</span> and <span class="math-container">$f(x)=1/(x(\log x)^{2})$</span> for <span class="math-container">$x\geq 2$</span>, and for <span class="math-container">$x<0$</span>, make it as an even function.</p>
<p>So <span class="math-container">$f\in L^{1}({\mathbb{R}})$</span>. If there were some <span class="math-container">$C>0$</span> and <span class="math-container">$\alpha>1$</span> such that <span class="math-container">$x^{\alpha}f(x)\leq C$</span> for all <span class="math-container">$x\geq 2$</span>, then <span class="math-container">$x^{\alpha-1}/(\log x)^{2}\leq C$</span>. But we can have <span class="math-container">$\log x\leq C'x^{(\alpha -1)/4}$</span>, then <span class="math-container">$x^{\alpha -1}/(\log x)^{2}\geq C''x^{(\alpha-1)/2}$</span>. Taking limit as <span class="math-container">$x\rightarrow\infty$</span> gives you a contradiction.</p>
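<p>As a numeric sanity check of this counterexample (a Python sketch; the exponent <span class="math-container">$\alpha=3/2$</span> and the sample points are illustrative):</p>

```python
import math

# Sanity checks for f(x) = 1/(x (log x)^2) on [2, oo).
# (1) The tail integral is finite: substituting t = log x turns
#     the integral of f over [2, B] into the integral of t^(-2)
#     over [log 2, log B], whose exact value is 1/log 2 - 1/log B.
a, b, n = math.log(2), math.log(1e9), 100_000
h = (b - a) / n
midpoint = sum(h / (a + (i + 0.5) * h) ** 2 for i in range(n))
exact = 1 / math.log(2) - 1 / math.log(1e9)
print(abs(midpoint - exact) < 1e-6)        # True

# (2) But x^alpha f(x) is unbounded for every alpha > 1: e.g. alpha = 3/2
#     gives x^(1/2) / (log x)^2, which blows up along the points x = e^k.
growth = [math.exp(k / 2) / k ** 2 for k in (4, 16, 64)]
print(growth[0] < growth[1] < growth[2])   # True
```

<p>So the integral converges while <span class="math-container">$x^{3/2}f(x)$</span> runs off to infinity, exactly as the contradiction above requires.</p>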
|
4,382,786 | <p>I would like to know what is the following process on the real line called.</p>
<p>Let us fix some <span class="math-container">$X_0$</span> and let <span class="math-container">$X_{i+1} = (1-\gamma)X_i + Y_i$</span> where <span class="math-container">$\gamma$</span> is a fixed real number and <span class="math-container">$Y_i$</span>'s are i.i.d. random variables.</p>
<p>I found a reference to this in the book "Stochastic Population Dynamics in Ecology and
Conservation". Specifically, it is supposed to model that the population of species is limited by the amount of resources of the environment. I would like to know if someone has studied this mathematically.</p>
| manifolded | 621,937 | <p>This probably doesn't answer your question but I know about ergodic Markov chains that have this form where <span class="math-container">$Y_i$</span> take values on a certain state space and <span class="math-container">$f(x)=(1-\gamma)x$</span> is a bijection on that state space.</p>
<p>Example: Let <span class="math-container">$Y_{i}\in\{-1,0,1\}$</span>, <span class="math-container">$1-\gamma=2$</span> (so the state space of the Markov chain is <span class="math-container">$\mathbb{Z}_n$</span>). Notice that <span class="math-container">$f(x):=2x\mod n$</span> is a bijection on <span class="math-container">$\mathbb{Z}_n$</span> when <span class="math-container">$n$</span> is odd. The most recent result on mixing time of this chain is <a href="https://arxiv.org/pdf/2003.08117.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>In general, if <span class="math-container">$\Pi$</span> is a permutation matrix, <span class="math-container">$P$</span> a probability transition matrix, then it is shown <a href="https://arxiv.org/pdf/2004.11491.pdf" rel="nofollow noreferrer">here</a> that the Markov chain <span class="math-container">$\Pi P$</span> mixes in <span class="math-container">$O(\log n)$</span> steps where <span class="math-container">$n$</span> is the size of the state space if <span class="math-container">$\Pi$</span> satisfies a certain set expansion condition. This paper also has several references about random walks of this form. They are commonly referred to as Markov chains with deterministic jumps.</p>
<p>More recently, this Markov chain of the form <span class="math-container">$\Pi P$</span> was applied to speedup mixing of a random walk on a hypercube <a href="https://arxiv.org/pdf/2109.05387.pdf" rel="nofollow noreferrer">here</a>. In fact, this paper shows that there is cutoff/phase transition at time <span class="math-container">$n$</span> which is quite interesting.</p>
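<p>One quick way to see such a chain in action is to push the exact probability vector forward instead of simulating; a Python sketch for the doubling-with-noise chain on <span class="math-container">$\mathbb{Z}_n$</span> (the modulus <span class="math-container">$n=101$</span> and the step count are illustrative choices):</p>

```python
# Evolve the exact distribution of x_{i+1} = 2 x_i + Y_i (mod n) with
# Y_i uniform on {-1, 0, 1}, starting from x_0 = 0.  Pushing the full
# probability vector forward is deterministic, and shows the chain
# flattening out toward the uniform distribution on Z_n.
n = 101                          # any odd modulus works; 101 is illustrative
p = [0.0] * n
p[0] = 1.0
for _ in range(60):              # comfortably more than the O(log n) mixing time
    q = [0.0] * n
    for x, mass in enumerate(p):
        if mass:
            for y in (-1, 0, 1):
                q[(2 * x + y) % n] += mass / 3
    p = q
dev = max(abs(mass - 1 / n) for mass in p)
print(dev < 1e-3)                # True: essentially uniform after 60 steps
```

<p>The deterministic jump <span class="math-container">$x\mapsto 2x$</span> spreads the small noise very quickly, which is the speedup phenomenon the cited papers quantify.</p>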
|
474,568 | <p>In some books I've seen this symbol $\dagger$, next to some theorem's name, and I don't know what it means. I've googled it with no results which makes me suspect it's not standard.</p>
<p>Does anybody know what it means? One example I'm looking at right now is in a probability book, next to a section about Stirling's approximation to factorials:</p>
<blockquote>
<p><strong>Stirling's formula ($\dagger$)</strong></p>
</blockquote>
<p>FOUND IT: The preamble says they're historic notes; it actually gives a historical introduction in the section about Stirling's formula, in case anyone's wondering.</p>
| Don Larynx | 91,377 | <p>From Wikipedia: "While the asterisk (asteriscus) was used for corrective additions, the obelus was used for corrective deletions of invalid reconstructions". </p>
<p>The obelus, which is the "cross", is similar to the asterisk but is used for making corrections instead of additions.</p>
<p>Edit: I don't know if this would necessarily be the correct context without further information (of what follows the cross).</p>
|
572,137 | <p>If $\psi:G \to H$ is a surjective homomorphism, then $|\{g \in G: \psi(g)=h_1\}| = |\{g \in G: \psi(g)=h_2\}|, \forall h_1,h_2 \in H.$ </p>
<p>Could anyone advise on the proof? If $\psi$ is injective, then the result follows. So, what happens if $\psi$ is not injective? By the first isomorphism theorem, $G/\ker(\psi) \cong H.$ Is this the correct start? Thank you. </p>
| DonAntonio | 31,254 | <p>An idea following egreg's answer: </p>
<p>$$\psi(g)=h_1\in H\;\implies\;\psi(gn)=h_1\;\;\forall\,n\in N:=\ker\psi\implies \psi(gN)=h_1$$</p>
<p>and the other way around:</p>
<p>$$\psi(x)=h_1\implies \psi(g^{-1}x)=\psi(g)^{-1}\psi(x)=h_1^{-1}h_1=1\implies g^{-1}x\in N\iff xN=gN$$</p>
<p>and thus we see that </p>
<p>$$\left|\{g\in G\;;\;\psi(g)=h_1\in H\}\right|=|gN|=|N|$$</p>
<p>and we're done.</p>
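<p>A tiny sanity check of the statement on a concrete example (Python; the homomorphism $\psi:\mathbb{Z}_{12}\to\mathbb{Z}_4$, $g\mapsto g \bmod 4$, is chosen purely for illustration):</p>

```python
# psi : Z_12 -> Z_4, psi(g) = g mod 4, is a surjective homomorphism.
# Every fiber should have size |ker(psi)| = 3.
G = range(12)
psi = lambda g: g % 4
fibers = {h: [g for g in G if psi(g) == h] for h in range(4)}
sizes = {h: len(f) for h, f in fibers.items()}
kernel = fibers[0]               # the fiber over the identity is the kernel
print(sizes)                     # {0: 3, 1: 3, 2: 3, 3: 3}
```

<p>Every fiber is a coset of the kernel, so all fibers have the same size, as proved above.</p>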
|
49,068 | <p>Given lists $a$ and $b$, which represent multisets, how can I compute the complement $a\setminus b$?</p>
<p>I'd like to construct a function <code>xunion</code> that returns the symmetric difference of multisets.
For example, if $a=\{1, 1, 2, 1, 1, 3\}$ and $b=\{1, 5, 5, 1\}$, then their symmetric difference is $\big((a\cup b)\setminus(a\cap b)\big)\setminus(a\cap b)=(a\setminus b)\cup(b\setminus a)=\{1,1,2,3,5,5\}$.</p>
| ciao | 11,467 | <p>I don't pretend this is the most efficient or pretty, but here's a go at what I think you're after (see latter part of post for faster and simpler realizations):</p>
<pre><code>a = {1, 1, 2, 1, 1, 3};
b = {1, 5, 5, 1};
result = Join[
Flatten[ConstantArray @@@
Flatten[Replace[Cases[GatherBy[Join[Tally[#1], Tally[#2]], First],
{{Alternatives @@ a, _} ..}], {{a_, b_}, {c_, d_}} :>
{{a, Max[0, b - d]}}, 1], 1]],
Flatten[
ConstantArray @@@
Flatten[Replace[Cases[GatherBy[Join[Tally[#2], Tally[#1]], First],
{{Alternatives @@ b, _} ..}], {{a_, b_}, {c_, d_}} :>
{{a, Max[0, b - d]}}, 1], 1]]] &[a, b]
(* {1, 1, 2, 3, 5, 5} *)
</code></pre>
<p>Here's a <em>much</em> faster alternative for big lists:</p>
<pre><code>Module[{ta = Tally[#1], tb = Tally[#2], tab, tba, j},
tab = Tally[Join[#1, #2]][[;; Length@ta]];
ta[[All, 2]] = ta[[All, 2]] - (tab - ta)[[All, 2]];
tba = Tally[Join[#2, #1]][[;; Length@tb]];
tb[[All, 2]] = tb[[All, 2]] - (tba - tb)[[All, 2]];
j = Join[ta, tb];
Flatten[ConstantArray @@@ Pick[j, Sign[j[[All, 2]]], 1]]] &[a, b]
</code></pre>
<p>And after partitioning a steak and doing a gatherby on dessert, this simpler and even faster idea popped into the cranium:</p>
<pre><code>With[{du = DeleteDuplicates@Join[#1, #2]},
Join @@ ConstantArray @@@
Transpose[{du, Abs[Subtract[Tally[Join[du, #1]], Tally[Join[du, #2]]][[All, 2]]]}]] &[a, b]
</code></pre>
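<p>For comparison, the intended multiset semantics can be pinned down outside Mathematica too; a rough Python analogue using <code>collections.Counter</code>:</p>

```python
from collections import Counter

def xunion(a, b):
    # Multiset symmetric difference: (a minus b) plus (b minus a),
    # where Counter subtraction already discards non-positive counts.
    ca, cb = Counter(a), Counter(b)
    return sorted(((ca - cb) + (cb - ca)).elements())

print(xunion([1, 1, 2, 1, 1, 3], [1, 5, 5, 1]))   # [1, 1, 2, 3, 5, 5]
```

<p>This reproduces the example output from the question, which is a handy cross-check for the Mathematica versions above.</p>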
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Noah Schweber | 8,133 | <p><em>Caveat: it's become clear from comments and revisions that the original portion of this answer - leading up to the horizontal line below - is not really addressing the heart of the OP. I'm leaving it up since I think it is still at least somewhat relevant and potentially useful to readers. See below the horizontal line for an answer I think is ultimately more on-topic.</em></p>
<p>There is no circularity here.</p>
<p>A model of ZFC is simply a set $X$ together with a binary relation $E$ on $X$, satisfying some properties. We intuitively think of elements of $X$ as sets, but this is an intuition we impose on models of the theory from outside; a priori, a model of ZFC is just a special kind of (directed) graph.</p>
<p>For example, thinking of models of ZFC as graphs, the extensionality axiom just says </p>
<blockquote>
<p>If two vertices are connected "from the left" to the same vertices, then they are in fact the same vertex. (More precisely: if $u, v$ are vertices such that for every vertex $w$ we have $wEu\iff wEv$, then in fact $u=v$.)</p>
</blockquote>
<p>So for example, the discrete graph (= no edges at all) on two vertices is not a model of ZFC: the two vertices are each connected "from the left" to the same vertices (namely, none), but they are distinct.</p>
<p>Note that this demonstrates a fundamental point about ZFC (which is an instance of a more general fact about first-order theories in general):</p>
<blockquote>
<p>The ZFC axioms <strong>describe</strong>, but do not <strong>define</strong>, sets.</p>
</blockquote>
<hr>
<p>EDIT: OK, the following is a bit long. The tl;dr is the following: </p>
<blockquote>
<p>If we're skeptical of philosophical commitments such as Platonism (which I think we should be), then the right response to the circularity involved in defining mathematical objects in terms of sets while recognizing sets as mathematical objects is this: that all <em>semantic</em> reasoning, such as the development of model theory, is really <em>syntactic</em> reasoning taking place in a formal theory which we're choosing to interpret as being "about" objects whose existence is dubious, false, or meaningless. These syntactic claims (such as "ZFC proves that no set contains itself") are just statements about finite strings, and we can make sense of them even in a purely empirical way.</p>
</blockquote>
<p>OK, now the long version:</p>
<p>Based on your edit (as far as I can tell, your "implementations" are just models), I think you're asking:</p>
<blockquote>
<p>To what extent do we need to make set-theoretic commitments to do model theory?</p>
</blockquote>
<p>(Note that I said "model theory," not "logic;" I'll say more about that in a moment.)</p>
<p>The answer is that we do in fact need to presuppose a notion of set. If one is a Platonist, this isn't necessarily problematic, and a formalist will dispense with the entire apparatus altogether and simply look at the formal system it takes place in (again, more on that in a moment). </p>
<p><em>There is also the option that what we really have here is a way of taking any "notion-of-set" and producing a corresponding model theory; this is exemplified by <strong>topos theory</strong>, where each topos can be understood as a universe of sets and model theory can be developed inside the topos. Based on your most recent comment to me, I think this might be interesting to you, but ultimately it runs into the same problem: we wind up having to talk about some sort of mathematical objects to develop semantics for mathematical statements, and this is ultimately no less circular or demanding of Platonism.</em></p>
<p>Now, what if we are unwilling to make any set-theoretic commitments at all? One approach is to argue that the whole semantic apparatus of model theory, and indeed all of mathematics, is not describing anything but rather is simply taking place inside a formal theory. That is, we don't view the statement "If there is a countable transitive model of ZFC, then there is a countable transitive model of ZFC + CH" as really referring to "countable transitive models," but rather is simply a string of symbols which has been produced by a certain formal system. The fundamental question of formalism, to my mind, is <strong>why the formal systems we do math in are valuable and interesting,</strong> but there's no doubt that formalism provides a vehicle for doing mathematics with the minimal philosophical commitment.</p>
<p>Now, after all, <a href="http://www.ditext.com/carroll/tortoise.html" rel="noreferrer">we <em>do</em> need <em>some</em> commitments to get off the ground</a>. For "naive" formalism, this amounts to a commitment to the "existence" of the natural numbers in some sense; further examining this notion, we can try to reduce the philosophical commitment involved even further. For example, "truly empirical" mathematics is extremely ultrafinitist: the only things one is allowed to assert is "the string $\sigma$ is deducible from the strings $\sigma_1, ...,\sigma_n$," and only in the case when one <em>actually has</em> a formal deduction of $\sigma$ from $\sigma_1,...,\sigma_n$.</p>
<p>Why am I bringing this up? Well, the point I want to make is that <strong>formalism helps us not worry (as much) about circularity without invoking some kind of Platonism.</strong> Specifically, while one can be suspicious of set-theoretic foundations of mathematics because of the circularity involved in defining mathematical objects via sets while sets themselves are mathematical objects, a claim like "ZFC proves $\sigma$" is universally intelligible. Essentially, what this means to me is that we can do mathematics <em>as if</em> we were Platonists without actually making the philosophical commitments involved in any serious way, and still be doing "honest mathematics" - the point being that the formalist perspective gives us a bulwark to "fall back to."</p>
<p>This "optional Platonism," I think, is why mathematicians tend not to care about these issues; we tend to recognize that we could reduce all our reasoning to concrete statements about finite strings, and therefore that our Platonist statements can be translated into obviously meaningful ones.</p>
<p>Of course, this translates (one of) the Platonist challenge(s) - "In what sense can mathematical objects be said to exist, and <em>why are we justified</em> in claiming that they do?" - into the "formalist challenge:"</p>
<blockquote>
<p>What criteria determine whether a formal theory is "mathematically valuable"? </p>
</blockquote>
<p>I have strong and wrong opinions on this matter, but I think that's off-topic for this specific question.</p>
|
2,763,974 | <blockquote>
<p>Find $\displaystyle \lim_{(x,y)\to(0,0)} x^2\sin(\frac{1}{xy}) $ if exists, and find $\displaystyle\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) ), \displaystyle\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ if they exist.</p>
</blockquote>
<p>Hey everyone. I've tried using the squeeze theorem and found $0 \le |x^2\sin(\frac{1}{xy})| \le |x^2|\cdot 1 \xrightarrow{x\to0} 0 $ and so the "double" limit exists and equals zero.
Now, I know $\lim_{x\to 0}\sin(\frac{1}{ax})$ diverges, so both $\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ and $\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) )$ do not exist(?) </p>
<p>I don't think I understand multi-variable limits, I would love your help on this basic one so I can understand better. Thanks in advance :) </p>
| user284331 | 284,331 | <p>The moral of this question: one might think that if the double limit exists, then the iterated limits exist as well; this question gives a counterexample. What <em>is</em> true is the following:</p>
<blockquote>
<p>Given that $\lim_{(x,y)\rightarrow(a,b)}f(x,y)=L$ exists and that for each $x\in B_{\delta}(a)-\{a\}$, $\lim_{y\rightarrow b}f(x,y)=M_{x}$ exists, then $\lim_{x\rightarrow a}M_{x}=L$, in other words, $\lim_{x\rightarrow a}\lim_{y\rightarrow b}f(x,y)=\lim_{(x,y)\rightarrow(a,b)}f(x,y)$.</p>
</blockquote>
<p>The assumption that for a deleted neigborhood of $a$ that the existence of $\lim_{y\rightarrow b}f(x,y)$ cannot be relaxed.</p>
|
354,213 | <p>This is a similar question to the one I have posted <a href="https://math.stackexchange.com/questions/354124/given-u-2-5-3-how-to-find-unit-vectorsu-w-s-t-uv-is-maximal-and-u">before</a>. The problem
is as in the title:</p>
<blockquote>
<p>Given $u=(-2,5,3)$ find a unit vector $v$ s.t $|u\times v|$ is
maximal, and then a unit vector $w$ s.t $|(u\times v)\cdot w|$ is
minimal</p>
</blockquote>
<p>Again, I can write out the equations, but there should be an easier
way to find the max/min than solving the equations (since, as I stated
in my previous question, they don't know how to solve for the max/min).</p>
<p>Any help on the problem is greatly appreciated!</p>
| user54358 | 54,358 | <p>Let $O$ be an open set in $X\times Y$. If $x\in O$ then there is an open ball $O_x\subseteq O$ containing $x$. You can construct an open rectangle, $R_x$, with rational endpoints that contains $x$ and is contained in $O_x$. Now
\begin{eqnarray*}
O=\bigcup_{x\in O} R_x.
\end{eqnarray*}
This union is countable because each such rectangle is determined by a point of $\mathbb{Q}^4$, so there are at most countably many rectangles with rational endpoints.
Thus,
\begin{eqnarray*}
O=\bigcup_{x\in O} R_x=\bigcup_{n=1}^\infty R_n.
\end{eqnarray*}</p>
<p>The Borel sets are generated from the open sets by countable unions and complements, and hence are measurable as well.</p>
<p>That $f$ is continuous means that the inverse image of each open set is open.
But open sets are measurable, so the inverse image of each open set is measurable, which is equivalent to $f$ being measurable.</p>
|
105,723 | <p>I am using MMA do do some algebra and I need to do some simplification. For example, I have the following expression
$$ x^2 \left( \frac{6 \left(2 c_1 x^6+c_2\right)}{x^4} \right)-x \left( 4 c_1 x^3-\frac{2 c_2}{x^3}\right)-8\left(c_1 x^4+\frac{c_2}{x^2}\right ) $$
Can I get intermediate steps to reach the final answer? For example, I want to factor out $c_1$ and $c_2$ and write the expression as $c_1 (x\dots) + c_2 (x\dots)$ but the terms $x\dots $ need not be evaluated. I can to the calculation as it is simple, but if I could do this from MMA, it would greatly save my time.</p>
<p>I always have some terms that could be factored out. It could be constant $c_k$ or polynomial $x^n$ or trig functions and there could be multiple terms but they are always unique and do not conflict with each other.</p>
| Jason B. | 9,490 | <p>You have your expression</p>
<pre><code>expression =
x^2 (6 (2 c1 x^6 + c2)/x^4) - x (4 c1 x^3 - 2 c2/x^3) -
8 (c1 x^4 + c2/x^2)
(* -x (-((2 c2)/x^3) + 4 c1 x^3) - 8 (c2/x^2 + c1 x^4) + (
6 (c2 + 2 c1 x^6))/x^2 *)
</code></pre>
<p>for now it hasn't been evaluated to zero yet, but if you try most anything it will do so:</p>
<pre><code>Expand@expression
Series[expression, {c1, 0, 2}]
Coefficient[expression, c2]
(* 0 *)
(* O[c1]^3 *)
(* 0 *)
</code></pre>
<p>One way to get what you are looking for is to change the <code>Head</code> of your expression from <code>Plus</code> to <code>List</code>,</p>
<pre><code>List @@ expression
(* {-x (-((2 c2)/x^3) + 4 c1 x^3), -8 (c2/x^2 + c1 x^4), (
6 (c2 + 2 c1 x^6))/x^2} *)
</code></pre>
<p>Now you can get the coefficients for <code>c1</code> and <code>c2</code> for each of the terms in the sum,</p>
<pre><code>Coefficient[#, c1] & /@ (List @@ expression)
(* {-4 x^4, -8 x^4, 12 x^4} *)
Coefficient[#, c2] & /@ (List @@ expression)
(* {2/x^2, -(8/x^2), 6/x^2} *)
</code></pre>
<p>I don't know of a general way to do what you want, but this may work with some input from you.</p>
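<p>One way to sanity-check the extracted coefficients numerically: each summand is linear in <code>c1</code> and <code>c2</code>, so evaluating a term at <code>(c1, c2) = (1, 0)</code> and <code>(0, 1)</code> reads off the two coefficients. A Python sketch of that idea (the term order differs from the Mathematica output above):</p>

```python
# Each summand of the expression, as a function of (c1, c2, x).  The terms
# are linear in c1 and c2, so term(1, 0, x) and term(0, 1, x) evaluate the
# two coefficients at a sample point x.
terms = [
    lambda c1, c2, x: x**2 * (6 * (2 * c1 * x**6 + c2) / x**4),
    lambda c1, c2, x: -x * (4 * c1 * x**3 - 2 * c2 / x**3),
    lambda c1, c2, x: -8 * (c1 * x**4 + c2 / x**2),
]
x = 2.0
c1_coeffs = [t(1, 0, x) for t in terms]   # values of 12 x^4, -4 x^4, -8 x^4
c2_coeffs = [t(0, 1, x) for t in terms]   # values of 6/x^2, 2/x^2, -8/x^2
print(sum(c1_coeffs), sum(c2_coeffs))     # 0.0 0.0 — the whole expression vanishes
```

<p>The per-term values agree with the <code>Coefficient</code> results above (up to the ordering of the summands), and both coefficient sums vanish, confirming that the expression is identically zero.</p>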
|
985,103 | <p>The set $\{u_{1},u_{2}\cdots,u_{6}\}$
is a basis for a subspace $\mathcal{M}$ of $\mathbb{F}^{m}$ if and
only if $\{u_{1}+u_{2},u_{2}+u_{3}\cdots,u_{6}+u_{1}\}$
is also a basis for $\mathcal{M}$.
So far I have that the two bases are just rearranged sums of each other, but I don't know where else to go with it.</p>
| TheSilverDoe | 594,484 | <p>No need to write any <span class="math-container">$\varepsilon$</span> here. Let <span class="math-container">$L > 0$</span> such that <span class="math-container">$0 \leq b_n \leq L$</span> for every <span class="math-container">$n$</span>. Then
<span class="math-container">$$0 \leq a_n \leq L \times \frac{a_n}{b_n} $$</span></p>
<p>By assumption, the right part tends to <span class="math-container">$0$</span>, so by comparison, you get that <span class="math-container">$$\boxed{\lim_{n \rightarrow +\infty} a_n = 0}$$</span></p>
|
4,528,059 | <p>The graph of <span class="math-container">$y = f(x)$</span> is as follows:</p>
<p><a href="https://i.stack.imgur.com/vprAu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vprAu.png" alt="enter image description here" /></a></p>
<p>Find <span class="math-container">$$\int_{-1}^{1} f(1-x^2) dx$$</span></p>
<p>I tried to solve this through substitution of <span class="math-container">$u = 1-x^2$</span>, found <span class="math-container">$dx = \frac{du}{-2x}$</span> and attempted to adjust the upper and lower bounds of the substitution to attempt to use the graphical area under the function to find my answer. However, when trying to find the upper and lower bounds they both equal <span class="math-container">$0$</span>, so this cannot work.</p>
| Mike Earnest | 177,399 | <p>Let <span class="math-container">$A$</span> be the matrix whose diagonal entries are <span class="math-container">$q_i-1$</span>, and whose off-diagonal entries are <span class="math-container">$-1$</span>. If you bet <span class="math-container">$x$</span>, then your possible winnings are described by the vector <span class="math-container">$Ax$</span>. Call a vector "positive" if all of its entries are positive. In order to ensure a positive gain, you need to find a positive vector <span class="math-container">$x$</span> for which <span class="math-container">$Ax$</span> is positive.</p>
<blockquote>
<p><strong>Theorem</strong>: You can ensure a positive gain if and only if <span class="math-container">$q_1^{-1}+q_2^{-1}+\dots+q_n^{-1}<1$</span>.</p>
</blockquote>
<p><em>Proof:</em> We may as well normalize our vector <span class="math-container">$x$</span> so that <span class="math-container">$\sum_{i=1}^n x_i=1$</span>. For any normalized vector,
<span class="math-container">$$
Ax=\begin{bmatrix}q_1x_1-1\\q_2x_2-1\\\vdots \\q_nx_n-1\end{bmatrix}
$$</span>
This is positive if and only if <span class="math-container">$x_i> q_i^{-1}$</span> for all <span class="math-container">$i$</span>. Since <span class="math-container">$\sum_i x_i=1$</span>, this is only possible if <span class="math-container">$1>\sum_i q_i^{-1}$</span>.</p>
<hr />
<p>For the second part, you are putting a certain probability distribution on the set of outcomes, and asking how to find the maximum expected gain. Let <span class="math-container">$p=\begin{bmatrix}p_1&p_2&\cdots & p_n \end{bmatrix}$</span> be the vector describing the probability distribution. Then your expected gain is
<span class="math-container">$$
p_1q_1x_1+p_2q_2x_2+\dots+p_nq_nx_n - 1
$$</span>
If we were only optimizing for expected value, the optimal strategy is to find the index <span class="math-container">$j$</span> for which <span class="math-container">$p_jq_j$</span> is maximized, and to bet your entire stake on outcome <span class="math-container">$j$</span>. This gives an expected gain of <span class="math-container">$p_jq_j-1$</span>.</p>
<p>However, I think you are talking about finding the maximum expected gain, subject to the constraint that your gain is always positive. We saw before that you need to bet <span class="math-container">$x_i>q_i^{-1}$</span> for all <span class="math-container">$i$</span> to ensure positive gain. If you bet <span class="math-container">$x_i=q_i^{-1}+\varepsilon$</span> on all outcomes for some small <span class="math-container">$\varepsilon>0$</span>, then you have <span class="math-container">$1-(q_1^{-1}+\dots+q_n^{-1}+n\varepsilon)$</span> leftover. It is then optimal to bet all of the leftovers on the <span class="math-container">$j$</span> for which <span class="math-container">$p_jq_j$</span> is maximized. However, there is no maximum expected value; you can always increase your expected gain by decreasing <span class="math-container">$\varepsilon$</span> further, giving you more leftovers to place on the most efficient bet.</p>
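<p>A small numeric illustration of the positive-gain construction in the proof (Python; the odds vector and <span class="math-container">$\varepsilon$</span> are made up for the example, and the leftover simply goes on outcome 0 rather than the optimal <span class="math-container">$j$</span>):</p>

```python
# Odds q with sum(1/q) < 1, stake normalized to 1; bet slightly more than
# 1/q_i on each outcome and put the leftover stake on one outcome.
q = [4.0, 4.0, 3.0]                 # sum of 1/q_i = 5/6 < 1
eps = 0.01
x = [1 / qi + eps for qi in q]
x[0] += 1 - sum(x)                  # spend the leftover stake
gains = [qi * xi - 1 for qi, xi in zip(q, x)]
print(all(g > 0 for g in gains))    # True: a guaranteed positive gain
```

<p>Every entry of <code>gains</code> is positive, so whichever outcome occurs, the bettor comes out ahead, exactly as the theorem predicts when <span class="math-container">$\sum_i q_i^{-1}<1$</span>.</p>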
|
1,142,624 | <p>Find a generating function for $\{a_n\}$ where $a_0=1$ and $a_n=a_{n-1} + n$</p>
| Community | -1 | <p>By definition of the generating function,</p>
<p>$$G(z)=\sum_{n=0}^\infty a_nz^n=a_0+\sum_{n=1}^\infty a_nz^n.$$
Applying the recurrence relation,
$$G(z)=a_0+\sum_{n=1}^\infty (a_{n-1}+n)z^n
=a_0+z\sum_{n=1}^{\infty}a_{n-1}z^{n-1}+\sum_{n=1}^\infty nz^n=a_0+zG(z)+\frac z{(1-z)^2}.$$</p>
<p>Solve for $G(z)$.</p>
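<p>Solving for $G(z)$ gives $G(z)=\dfrac{1}{1-z}+\dfrac{z}{(1-z)^3}$ (using $a_0=1$), and reading off coefficients yields $a_n=1+\binom{n+1}{2}$; this closed form is worked out here as a check, which the recurrence confirms (Python):</p>

```python
from math import comb

# a_n from the recurrence a_0 = 1, a_n = a_{n-1} + n ...
a = [1]
for n in range(1, 10):
    a.append(a[-1] + n)

# ... against the coefficients of G(z) = 1/(1-z) + z/(1-z)^3:
# the z^n coefficient of 1/(1-z) is 1, and of z/(1-z)^3 is C(n+1, 2).
closed = [1 + comb(n + 1, 2) for n in range(10)]
print(a == closed)    # True
```
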
|
2,431,027 | <p>I am asking about changing the limits of integration. </p>
<p>I have the following integral to evaluate - </p>
<p>$$\int_2^{3}\frac{1}{(x^2-1)^{\frac{3}{2}}}dx$$ using the substitution $x = sec \theta$. </p>
<p>The problem states</p>
<p><strong>Use the substitution to change the limits into the form $\int_a^b$ where $a$ and $b$ are multiples of $\pi$.</strong></p>
<p>Now, this is what I did. </p>
<p>$$ x= \sec \theta$$
$$\frac{dx}{d\theta} = \sec\theta \tan\theta$$
$$dx = \sec\theta \tan\theta \, d\theta$$</p>
<p>$$\begin{align}\int_2^{3}\frac{1}{(x^2-1)^{\frac{3}{2}}}\,dx \\
&= \int\frac{1}{(\sec^2\theta-1)^{\frac{3}{2}}}\sec\theta \tan\theta \,d\theta \\
&= \int\frac{\sec\theta \tan\theta}{(\tan^2\theta)^{\frac{3}{2}}} \,d\theta \\
&= \int\frac{\sec\theta \tan\theta}{\tan^3\theta} \, d\theta \\
&= \int\frac{\sec\theta}{\tan^2\theta} \, d\theta \\
&= \int\frac{\cos\theta}{\sin^2\theta} \, d\theta \\
&= \int \csc\theta \cot\theta \, d\theta
\end{align}$$</p>
<p>But here is my problem.
I know that when $x = 2$,
$$2 = \sec \theta$$
$$\frac{1}{2} = \cos \theta$$
$$\frac{\pi}{3} = \theta$$</p>
<p>but when $x = 3$
$$3 = \sec \theta$$
$$\frac{1}{3} = \cos \theta$$
$$\arccos\left(\frac{1}{3}\right) = \theta$$
but this does not give me a definite result as a multiple of $\pi$.
The book says the following - </p>
<p><a href="https://i.stack.imgur.com/SPvci.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SPvci.png" alt="enter image description here"></a></p>
<p>where the book takes $\arccos\left(\frac{1}{3}\right) = \frac{\pi}{3}$.
Am I the only one or is the book wrong in this instance?</p>
| vik1245 | 265,333 | <p>The book is indeed incorrect. </p>
<p>$$\arccos\left(\frac{1}{3}\right) \neq \frac{\pi}{3}$$</p>
<p>Note that if it were the case that $\arccos\left(\frac{1}{3}\right) = \frac{\pi}{3}$, as stated in the solutions, </p>
<p>then the integral would total 0, since the limits of integration would coincide. </p>
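<p>A quick numeric check backs this up (Python; Simpson's rule on the original integral, compared against the antiderivative $-\csc\theta$ of $\csc\theta\cot\theta$ evaluated between the two angles):</p>

```python
import math

# The book's angle vs the true angle:
print(math.acos(1 / 3), math.pi / 3)          # 1.2309... vs 1.0471...

# Composite Simpson's rule on the original integral over [2, 3]:
n = 1000
h = 1.0 / n
f = lambda t: (t * t - 1) ** -1.5
s = f(2) + f(3) + sum((4 if i % 2 else 2) * f(2 + i * h) for i in range(1, n))
integral = s * h / 3

# Antiderivative of csc(theta) cot(theta) is -csc(theta), so the substituted
# integral from pi/3 to arccos(1/3) equals csc(pi/3) - csc(arccos(1/3)):
exact = 1 / math.sin(math.pi / 3) - 1 / math.sin(math.acos(1 / 3))
print(abs(integral - exact) < 1e-9)           # True
```

<p>Both routes give a nonzero value (about $0.0940$), so the correct upper limit really is $\arccos(1/3)$, not $\pi/3$.</p>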
|
1,424,072 | <p>I'm trying to answer this question using the answer given in </p>
<p><a href="https://math.stackexchange.com/questions/795564/let-c-be-a-cube-and-let-g-be-its-rotational-symmetry-group-outline-a-proof-that">Let C be a cube and let G be its rotational symmetry group. Outline a proof that G is isomorphic to Sym(4)</a></p>
<p>What I did is - I numbered the vertexes and I have the group of long diagonals </p>
<p>$S = \{(1,7),\ (3,5),\ (4,6),\ (2,8)\}$ </p>
<p>where $1,2,3,4$ form the bottom face of the cube and $5,6,7,8$ the top (with $5$ above $1$).</p>
<p>Now I know that all elements of $S_4$ can be built from 2-cycles, </p>
<p>so I need to find a $g \in G$ that corresponds to every possible 2-cycle in </p>
<p>$S_4$. I can't find this $g$. Is it one $g$ for every 2-cycle, or one for all? </p>
<p>please help me solve that. </p>
<p>Also, I would like to know how to solve it (again with $G$ acting on the diagonals) using the orbit-stabilizer theorem.</p>
<p>thanks a lot</p>
| Hagen von Eitzen | 39,174 | <p>The rotational group of the cube certainly turns a (here always: main) diagonal into a diagonal, hence acts on the set of the four diagonals. This action gives us a homomorphism $\phi\colon G\to \operatorname{Sym}(4)$. On the other hand, $G$ certainly has order $24$: We can pick one of six faces as "ground" face and rotate it in steps of $90^\circ$, thus giving us at least $6\times 4=24$ group elements. Once the ground face is fixed, the whole cube is fixed, thus indeed $|G|=24$. To establish that $\phi$ is an isomorphism it suffices to show that it is one-to-one, or alternatively to show that it is onto. </p>
<p>One quickly verifies that $G$ acts <em>transitively</em> on the diagonals (just rotate in steps of $90^\circ$ around the "vertical" axis). In fact the stabilizer of one diagonal still acts transitively on the remaining three diagonals (per rotation by $120^\circ$ around the fixed diagonal, and finally the stabilizer of two diagonals still acts transitively on the remaining two diagonals (rotation by $180^\circ$ around the axis perpendicular to the plane spanned by the two fixed diagonals). We conclude that $\phi[G]$ is all of $\operatorname{Sym}(4)$, as desired.</p>
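<p>The whole argument can also be verified by brute force; a Python sketch that generates the rotation group from one face rotation and one diagonal rotation (as vertex permutations) and reads off its action on the four main diagonals:</p>

```python
from itertools import product

# Generate the rotation group of the cube as permutations of its 8 vertices,
# starting from two generators: a 90 degree turn about the z-axis and a
# 120 degree turn about the main diagonal through (1,1,1).
verts = sorted(product((-1, 1), repeat=3))
idx = {v: i for i, v in enumerate(verts)}

def perm_of(f):                          # encode a linear map as a vertex permutation
    return tuple(idx[f(v)] for v in verts)

r_face = perm_of(lambda v: (-v[1], v[0], v[2]))   # 90 deg about z
r_vert = perm_of(lambda v: (v[2], v[0], v[1]))    # 120 deg about (1,1,1)

identity = tuple(range(8))
group, frontier = {identity}, [identity]
while frontier:                          # close under the two generators
    p = frontier.pop()
    for g in (r_face, r_vert):
        q = tuple(g[p[i]] for i in range(8))
        if q not in group:
            group.add(q)
            frontier.append(q)

# The 4 main diagonals are the antipodal vertex pairs {v, -v}.
diagonals = sorted({frozenset({i, idx[tuple(-c for c in verts[i])]})
                    for i in range(8)}, key=min)

def diag_action(p):                      # induced permutation of the diagonals
    return tuple(diagonals.index(frozenset(p[i] for i in d)) for d in diagonals)

images = {diag_action(p) for p in group}
print(len(group), len(images))           # 24 24: the map onto Sym(4) is surjective
```

<p>The group has 24 elements and induces 24 distinct permutations of the diagonals, so the homomorphism to $\operatorname{Sym}(4)$ is onto (hence an isomorphism), exactly as argued above.</p>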
|
1,424,072 | <p>I'm trying to answer this question using the answer given in </p>
<p><a href="https://math.stackexchange.com/questions/795564/let-c-be-a-cube-and-let-g-be-its-rotational-symmetry-group-outline-a-proof-that">Let C be a cube and let G be its rotational symmetry group. Outline a proof that G is isomorphic to Sym(4)</a></p>
<p>What I did is - I numbered the vertexes and I have the group of long diagonals </p>
<p>$S = \{(1,7),\ (3,5),\ (4,6),\ (2,8)\}$ </p>
<p>where $1,2,3,4$ form the bottom face of the cube and $5,6,7,8$ the top (with $5$ above $1$).</p>
<p>now i know that all elements of $S_4$ can be made from 2-cycles. </p>
<p>so I need to find a $ g \in G$ that is correspond to every possible 2-cycle in </p>
<p>$S_4$ . I can't find this g. is it one g for every 2-cycle ? or one for all? </p>
<p>please help me solve that. </p>
<p>also I would like to know how to solve it (also with G operating on the diagonals) with the Orbit-stabilizer theore.</p>
<p>thanks a lot</p>
| David Wheeler | 23,285 | <p>Using the orbit-stabilizer theorem: first of all, we can see that we have "some" homomorphism:</p>
<p>$G \to S_4$ by considering the fact that any such rotation of $G$ permutes the main diagonals.</p>
<p>Now let's consider the action of $G$ on the set of faces of the cube; we have $6$ of these. $G$ clearly always takes a face to a face. Since it is possible, by just using rotations about the center of two opposite faces (which is certainly a subset of $G$) to put any face in any desired position (a six-sided die can be rolled so that "any number is up"), we conclude that $G$ is <em>transitive</em> on this set of $6$ faces, that is, the <em>orbit</em> of any face under $G$ is the entire set of faces.</p>
<p>Hence if $x$ is any particular face, its orbit $G.x$ has cardinality $6$.</p>
<p>Now the orbit-stabilizer theorem says:</p>
<p>$|G| = |G.x|\ast|\text{Stab}(x)|$.</p>
<p>If a rotation of the cube fixes a face, it must be a rotation about the center of that face (the face is a square, and this is the full rotational symmetry group of a square; reflections of the square don't work, because changing the orientation of any (single) plane in $\Bbb R^3$ changes the orientation of $\Bbb R^3$ itself, and rotations in $\Bbb R^3$ preserve orientation).</p>
<p>So the stabilizer of the face $x$ has order $4$, and thus $|G| = 24$.</p>
<p>Similar arguments can be made by using the set of edges, or the set of vertices, but the elements of the stabilizers are harder to visualize. Basically, the $120^{\circ}$ rotations stabilize a vertex, and the $180^{\circ}$ rotations stabilize an edge, leading to:</p>
<p>$|G| = 8\cdot 3$ or $|G| = 12 \cdot 2$.</p>
|
3,682,900 | <p>I have a hard time solving this one.
I'm sure there is a trick that should be used, but if so, I can't spot it.</p>
<p><span class="math-container">$$(3\cdot4^{-x+2}-48)\cdot(2^x-16)\leqslant0$$</span></p>
<p>Here is what I get but I'm anything but confident about this:</p>
<p><span class="math-container">$$3\cdot(2^{-2x+4}-16)\cdot(2^x-16)\leqslant0$$</span>
<span class="math-container">$$(2^{-2x+4}-2^4)\cdot(2^x-2^4)\leqslant0$$</span>
<span class="math-container">$$2^{-x+4}-2^{x+4}-2^{-2x+8}+2^8\leqslant0$$</span>
<span class="math-container">$$2^{-x+4}+2^8\leqslant2^{x+4}+2^{-2x+8}$$</span></p>
<p>So far, I'm already not 100% sure but then, I'm not sure at all:</p>
<p><span class="math-container">$$(-x+4)\cdot \ln(2)+8\cdot \ln(2)\leqslant(x+4)\cdot \ln(2)+(-2x+8)\cdot \ln(2)$$</span></p>
<p>This is nonsense, can someone correct me please?
Thanks.</p>
| Mando | 734,443 | <p>To expound on the previous problem (I can't comment), the interval should be</p>
<p><span class="math-container">$$(-\infty,0]\cup[4,\infty)$$</span></p>
<p>For if </p>
<p><span class="math-container">$$(1-t)(t-16)\le0$$</span></p>
<p>then</p>
<p><span class="math-container">$$(t-1)(t-16)\ge0$$</span></p>
<p>So</p>
<p><span class="math-container">$$(2^x-1)(2^x-16)\ge 0$$</span></p>
<p>Note that if <span class="math-container">$x=2$</span>, we have</p>
<p><span class="math-container">$$(4-1)(4-16)=-36$$</span></p>
<p>so <span class="math-container">$2$</span> cannot be a solution.</p>
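<p>As a numerical sanity check of this interval (my addition, not part of the original answer), evaluate the left-hand side of the original inequality at a few sample points:</p>

```python
def lhs(x):
    # left-hand side of (3 * 4^(2-x) - 48)(2^x - 16)
    return (3 * 4 ** (2 - x) - 48) * (2 ** x - 16)

# points in the claimed solution set (-inf, 0] U [4, inf)
for x in (-3.0, -1.0, 0.0, 4.0, 5.0, 10.0):
    assert lhs(x) <= 0
# points strictly between 0 and 4 violate the inequality
for x in (0.5, 2.0, 3.5):
    assert lhs(x) > 0
print("spot checks are consistent with (-inf, 0] U [4, inf)")
```
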
|
1,349 | <p>In the question linked below, the OP asks for hints for a problem rather than a full proof.</p>
<p><a href="https://math.stackexchange.com/questions/14477">Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$</a></p>
<p>Now, while I would like to respect that request, I also feel that questions on this site are not intended just for the OP's benefit. This leads me to the question...</p>
<blockquote>
<p><strong>Question</strong>: Is there any way to use some form of <em>spoiler space</em>, so that it's possible to post the answer for the other readers' benefit, but at the same time hiding it from those who do not want it?</p>
</blockquote>
<p>My attempted "look at the previous version of this post" turned out to be a disaster. I've seen people use rot13, but that seems like a lot of fuss (and clashes with the mathematics).</p>
<p>On some sites they use white text on white background for spoily material, which, when you select with the mouse, reveals the text. Is that possible?</p>
<hr />
<p>Testing:</p>
<blockquote>
<p>! Spoiler Space</p>
<p>! More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Spoiler Space
More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> What happens if I write a really long sentence. Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Maybe it'll involve some maths like <span class="math-container">$E=mc^2$</span> or exclamation marks <span class="math-container">$n!=n \times (n-1)!$</span>.</p>
</blockquote>
| Zev Chonoles | 264 | <p>Here is a sample bit of LaTeX contained in a spoiler block:</p>
<blockquote class="spoiler">
<p> $\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$<br>
$$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$$ </p>
</blockquote>
<p>This is the code I used to produce it:</p>
<pre><code> >! $\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$
>! $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$$
</code></pre>
<p>However, this is what I saw when I moused over the spoiler block in the preview area while composing this post:</p>
<p><img src="https://i.stack.imgur.com/WAuj4.png" alt="enter image description here"></p>
<p>So, displayed equations don't appear displayed in the preview area, although they come out just fine in the final product. </p>
|
540,135 | <p>$\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?</p>
<p>I have managed to show so far that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove that $\lcm(\lcm(a,b),c)$ is the smallest such number... </p>
| Community | -1 | <p>The "universal property" of the <span class="math-container">$\newcommand{\lcm}{\operatorname{lcm}}\lcm$</span> is</p>
<blockquote>
<p>if <span class="math-container">$\lcm(a,b) \mid x$</span>, and <span class="math-container">$c \mid x$</span>, then <span class="math-container">$\lcm(\lcm(a,b),c) \mid x$</span>.</p>
</blockquote>
<p>Make a good choice for <span class="math-container">$x$</span>, for which you can prove the hypothesis and/or the conclusion is useful.</p>
|
540,135 | <p>$\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?</p>
<p>I have managed to show so far that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove that $\lcm(\lcm(a,b),c)$ is the smallest such number... </p>
| Lord_Farin | 43,351 | <p>With this type of problem, it's often useful to be aware of a particular order on <span class="math-container">$\Bbb N_{>0}$</span> -- at least if one is familiar with posets (partially ordered sets).</p>
<p>Namely, the poset <span class="math-container">$(\Bbb N_{>0}, \mid)$</span>, where <span class="math-container">$a \mid b$</span>, as usual, denotes that <span class="math-container">$a$</span> divides <span class="math-container">$b$</span>. (One may readily see/prove that this indeed forms a poset.)</p>
<p>This order has the following properties:</p>
<ul>
<li><span class="math-container">$1$</span> is the least element;</li>
<li><span class="math-container">$\gcd(a,b)$</span> is the <em>greatest lower bound</em> or <em>infimum</em> of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>: If <span class="math-container">$m \mid a$</span> and <span class="math-container">$m \mid b$</span>, then <span class="math-container">$m \mid \gcd(a,b)$</span>;</li>
<li><span class="math-container">${\rm lcm}(a,b)$</span> is the <em>least upper bound</em> or <em>supremum</em> of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>: If <span class="math-container">$a \mid n$</span> and <span class="math-container">$b \mid n$</span>, then <span class="math-container">${\rm lcm}(a,b) \mid n$</span>.</li>
</ul>
<p>which together make it a so-called <em>lower-bounded lattice</em>.</p>
<p>By the general theorem that in any lattice, for a collection of sets <span class="math-container">$(A_i)_i$</span> such that <span class="math-container">$\bigcup_i A_i$</span> is finite, we have <span class="math-container">$\sup_i \sup A_i = \sup \bigcup_i A_i$</span>, we are done immediately:</p>
<p>Taking <span class="math-container">$A_1 = \{a,b\}$</span> and <span class="math-container">$A_2 = \{c\}$</span> (note that <span class="math-container">$\sup A_2 = c$</span>), we find that: <span class="math-container">$${\rm lcm}({\rm lcm}(a,b),c) = \sup\{\sup\{a,b\},\sup\{c\}\} = \sup \{a,b,c\} = {\rm lcm}(a,b,c)$$</span></p>
<hr>
<p>Of course, to be able to use this argument, one requires some knowledge about lattices and posets. If you don't understand it at this point, don't worry: when you progress in mathematics, you're bound to encounter these concepts, after which you hopefully can appreciate this argument.</p>
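<p>For the concretely minded, the identity that this lattice argument establishes can also be spot-checked by brute force; the sketch below (mine, using only the Euclidean gcd from Python's standard library) confirms both the minimality of $\operatorname{lcm}(\operatorname{lcm}(a,b),c)$ and the fact that the grouping does not matter.</p>

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for a in range(1, 11):
    for b in range(1, 11):
        for c in range(1, 11):
            l = lcm(lcm(a, b), c)
            # smallest positive common multiple of a, b, c, found by scanning
            m = next(m for m in range(1, l + 1)
                     if m % a == 0 and m % b == 0 and m % c == 0)
            assert m == l                  # lcm(lcm(a,b),c) is the least one
            assert l == lcm(a, lcm(b, c))  # grouping does not matter
print("lcm(lcm(a,b),c) = lcm(a,b,c) verified for a, b, c in 1..10")
```
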
|
572,125 | <p>How to show this function's discontinuity?<br></p>
<p>$ f(x,y) = \left\{
\begin{array}{l l}
\frac{xy}{x^2+y^2} & \quad , \quad(x,y)\neq(0,0)\\
0 & \quad , \quad(x,y)=(0,0)
\end{array} \right.$</p>
| Clive Newstead | 19,542 | <p>One of the definitions of a cardinal $\kappa$ being regular is that, whenever $\alpha < \kappa$, every function $f : \alpha \to \kappa$ is bounded.</p>
<p>In any case, you can prove this directly, using the fact that a countable union of sets of cardinality $\aleph_1$ has cardinality $\aleph_1$: consider $$\bigcup_{n < \omega} \{ \alpha < \omega_2 : \alpha \le f(n) \}$$</p>
|
185,478 | <blockquote>
<p>How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$</p>
</blockquote>
<p>I hate when I find something that looks simple, that I should know how to do, but it holds me up. </p>
<p>I could come up with an approximate answer using Taylor's, but how do I solve this? </p>
<p>(btw, WolframAlpha tells me the answer, but I want to know how it's solved.)</p>
| Ross Millikan | 1,827 | <p>Polynomials and trig functions don't play nice together, so you are usually stuck with numeric solutions. You can start by noting that $x=0$ is a double root, one from the outer $x$ and one from the $x/\sin x$ terms. </p>
|
185,478 | <blockquote>
<p>How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$</p>
</blockquote>
<p>I hate when I find something that looks simple, that I should know how to do, but it holds me up. </p>
<p>I could come up with an approximate answer using Taylor's, but how do I solve this? </p>
<p>(btw, WolframAlpha tells me the answer, but I want to know how it's solved.)</p>
| N. S. | 9,176 | <p>We prove that $x=0$ is the only solution.</p>
<p>Let </p>
<p>$$f(x)= x^4+x \sin(x) \cos(x)- \sin^2(x) \,.$$</p>
<p>Then $f$ is even, so it is enough to look for roots on $[0, \infty)$.</p>
<p>You can also observe that $f(x)\geq x^4 -x-1$, and an easy calculation shows that for all
$x> \sqrt[3]{2}$ we have $x^4-x-1 >0$.</p>
<p>Thus the only possible positive roots are in the interval $[0, \sqrt[3]{2}]$, which is inside the first quadrant.</p>
<p>Then for all $x \neq 0$, by using $x >\sin(x)$ we have</p>
<p>$$ x^4+x\sin(x)\cos(x)-\sin^2(x) >x^2 \sin^2(x)+\sin^2(x)\cos(x)-\sin^2(x)$$
$$=\sin^2(x)(x^2+\cos(x)-1)>\sin^2(x)(x^2+\cos^2(x)-1)=\sin^2(x)(x^2-\sin^2(x))>0$$</p>
<p><strong>P.S.</strong> Just to clarify, we need to make first the reduction to the first quadrant to make sure than $x> \sin(x)$ implies inequalities of the type $x\sin(x)\cos(x) > \sin^2(x)\cos(x) $</p>
<p><strong>P.P.S</strong> I also suspect that $f'(x) >0$ for $x>0$, which would lead to a second proof of the problem. Note that
$$f'(x)=4x^3+x \cos(2x)-\frac{1}{2}\sin(2x)$$
which can easily be proven to be positive on $(0,\frac{\pi}{2}]$ and $(\frac{\pi}2, \infty)$.</p>
<hr>
<p><strong>ADDED: Second solution</strong></p>
<p>$$f''(x)= 12x^2 -2x \sin(2x)$$</p>
<p>If $x>0$ then since $\sin(2x)<2x$ we have</p>
<p>$$12x^2 -2x \sin(2x)> 12x^2-2x\cdot(2x)>0$$</p>
<p>Thus $f'(x)$ is strictly increasing on $[0, \infty)$. Since $f'(0)=0$ we get that $f'(x)>0$ for all $x>0$. Thus $f(x)$ is strictly increasing on $[0, \infty)$, and since $f(0)=0$, it follows that $f(x)=0$ has no solution on $(0, \infty)$. Since $f(x)$ is even, it follows that $x=0$ is the unique solution.</p>
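<p>A numerical spot check of this conclusion (my own sketch, not part of the proof): sample $f$ on a grid and confirm that it vanishes at $0$, is positive elsewhere, and is even.</p>

```python
import math

def f(x):
    return x ** 4 + x * math.sin(x) * math.cos(x) - math.sin(x) ** 2

assert f(0.0) == 0.0
xs = [k / 100 for k in range(1, 500)]  # 0.01 .. 4.99
assert all(f(x) > 0 for x in xs)                  # positive away from 0
assert all(abs(f(-x) - f(x)) < 1e-9 for x in xs)  # f is even
print("on the sampled grid, f vanishes only at x = 0")
```
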
|
1,829,342 | <p>So I know that $\sum_{i\geq 0}{n \choose 2i}=2^{n-1}=\sum_{i\geq 0}{n \choose 2i-1}$. However, I need formulas for $\sum_{i\geq 0}i{n \choose 2i}$ and $\sum_{i\geq 0}i{n \choose 2i-1}$. Can anyone point me to a formula with proof for these two sums? My searches thus far have only turned up those first two sums without the $i$ coefficient in the summand. Thanks!</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>Just one !!!. The other ones are similar to the present one.</p>
</blockquote>
<p>\begin{align}
\color{#f00}{\sum_{i\ \geq\ 0}i{n \choose 2i}} & =
\half\sum_{i\ \geq\ 1}2i{n \choose 2i} =
\half\sum_{i\ \geq\ 1}i{n \choose i}{1 + \pars{-1}^{i} \over 2}
\\[3mm] & =
\left.{1 \over 4}\,\partiald{}{x}\sum_{i\ \geq\ 1}{n \choose i}x^{i}
\right\vert_{\ x\ =\ 1} -
\left.{1 \over 4}\,\partiald{}{x}\sum_{i\ \geq\ 1}{n \choose i}x^{i}
\right\vert_{\ x\ =\ -1}
\\[3mm] & =
\left.{1 \over 4}\,\partiald{}{x}\bracks{\pars{1 + x}^{n} - 1}
\right\vert_{\ x\ =\ 1} -
\left.{1 \over 4}\,\partiald{}{x}\bracks{\pars{1 + x}^{n} - 1}
\right\vert_{\ x\ =\ -1}
\\[3mm] & =
{1 \over 4}\,n\,2^{n - 1} - {1 \over 4}\,n\,\delta_{n1} =
\color{#f00}{{1 \over 4}\pars{2^{n - 1}n - \delta_{n1}}}
\end{align}</p>
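<p>A quick numerical confirmation of the final closed form (added by me, not part of the derivation):</p>

```python
from math import comb

def lhs(n):
    return sum(i * comb(n, 2 * i) for i in range(n // 2 + 1))

def rhs(n):
    # (1/4) * (2^(n-1) n - delta_{n,1}); exact integer division works here
    return (2 ** (n - 1) * n - (1 if n == 1 else 0)) // 4

for n in range(1, 31):
    assert lhs(n) == rhs(n)
print("identity holds for n = 1..30")
```
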
|
2,609,283 | <p>$u_1 = (2, -1, 3)$ and $u_2 = (0, 0, 0)$</p>
<p>I tried using the cross product of the two but that just gave me the zero vector. I don't know any other methods to get a vector that is orthogonal to two vectors. </p>
<p>The answer is $v = s(1, 2, 0) + t(0, 3, 1)$ , where $s$ and $t$ are scalar values. </p>
| Devendra Singh Rana | 406,845 | <p>Consider any $n \times n$ rank-$1$ matrix $A$: one of its eigenvalues is its trace and the remaining eigenvalues are zero. Hence, the characteristic polynomial is $$x^{n-1}(x-\mbox{Tr}(A))$$ and the spectrum is $\{0,4\}$.</p>
|
285,548 | <p>I asked the following question on math.SE (<a href="https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d">https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d</a>) just over two months ago, and it only received one, rather unsatisfactory to me, answer there. I'm wondering if people here can have a look. Some related questions have been posed <a href="https://mathoverflow.net/questions/188815/greens-operator-of-elliptic-differential-operator">here</a> and <a href="https://mathoverflow.net/questions/69521/estimates-on-the-green-function-of-an-elliptic-second-order-differential-operato">here</a> (and I noted in particular the literature link provided in the answer to the latter question). However, neither of those discussions cover the possibility of boundary conditions being present, or possible lack of compactness.</p>
<h1>Repost of the original question</h1>
<p>Let me illustrate my question by starting with the simplest possible example: Let us consider $P := - \mathrm{d}^2/\mathrm{d}x^2$, an elliptic partial differential operator on $\mathbb{R}$; let us also consider the following boundary-value problem on the interval $\overline{\Omega} = [0,1]$:
\begin{equation}
P u = f, \qquad u(0)=u(1)=0.
\end{equation}
As is (I think) well-known, when seen as an operator $L^2(\Omega) \to L^2(\Omega)$, $P$ is unbounded. However, it is closed on the dense domain $D(P) := H^2(\Omega) \cap H_0^1(\Omega)$ where $H_0^1(\Omega)$ is the closure of $C_{\mathrm{c}}^\infty(\Omega)$ in the $H^1$ norm (so that any element of this space has vanishing trace on $\partial \Omega = \{0,1\}$, i.e. it satisfies the Dirichlet boundary condition above in a weak sense). Furthermore, $0$ is in the resolvent of $(P,D(P))$, i.e. there exists a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$. In fact, in this example the inverse is easily computed: it is the integral operator defined by the (continuous, as it happens) kernel
\begin{equation}
G(x,y) = \begin{cases} x(1-y) & x \leq y \\ y(1-x) &x > y \end{cases}, \quad (x,y) \in \Omega \times \Omega.
\end{equation}
Of course, when viewed as a distribution in $\mathscr{D}'(\Omega \times \Omega)$, $G$ is the Schwartz kernel of $P^{-1}$ which we know on abstract grounds must exist since $P^{-1} : C_{\mathrm{c}}^{\infty}(\Omega) \to \mathscr{D}'(\Omega)$ is continuous.</p>
<p>My question is the following: in this example and in more general examples where $P$ is a second-order elliptic differential operator on, say, an open (and not necessarily compact) region $\Omega$ with smooth boundary in $\mathbb{R}^n$, and assuming that we can find a suitable dense domain $D(P)$ for $P$ as above so that $(P,D(P))$ has a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$, <strong>does the Schwartz kernel $G$ of $P^{-1}$ always define a pseudodifferential operator on $\Omega$?</strong></p>
<h1>Addendum for MO</h1>
<p>User mcd on math.SE points out that the Boutet de Monvel calculus ought to be relevant here. Aside from wishing to see exactly how this is, I wonder whether the possible lack of compactness (of $\Omega$) might cause problems in such an approach.</p>
<h1>UPDATE</h1>
<p>I have reduced my question to the following subproblem: <a href="https://mathoverflow.net/questions/287167/composition-of-a-smoothing-operator-with-an-l2-bounded-operator-non-compact">Composition of a smoothing operator with an $L^2$-bounded operator, non-compact Riemannian manifold</a>, as explained in a comment there.</p>
| Deane Yang | 613 | <p>Assuming that $P^{-1}$ is a right inverse and $\Omega$ an open subset of $\mathbb{R}^n$ or an open manifold, then you can proceed as follows:</p>
<p>1) An operator $Q: L^2(\Omega) \rightarrow L^2(\Omega)$ is a pseudodifferential operator if and only if, for any open domain $\Omega'\subset\Omega$, $Q$ restricted to $C^\infty_0(\Omega')$ is a pseudodifferential operator. We can therefore assume that $\Omega'$ is an open subset of $\mathbb{R}^n$.</p>
<p>2) For any elliptic operator with smooth coefficients $P: C^\infty_0(\Omega')\rightarrow C^\infty_0(\Omega')$, there exists a pseudodifferential operator $Q$ such that $PQ = I + S$, where $S$ is a smoothing operator.</p>
<p>3) If $\Omega'$ is a sufficiently small neighborhood of $x \in \Omega$, then the operator $P$ is injective. It follows that $P^{-1}$ is also a left inverse when restricted to $C^\infty_0(\Omega')$. Therefore,
$$P^{-1} = Q - P^{-1}S.$$ Since any smoothing operator is also a pseudodifferential operator, it follows that $P^{-1}$ is a pseudodifferential operator on $\Omega'$.</p>
<p>I learned much of this from Introduction to the Theory of Linear Partial Differential Equations by Chazarain and Piriou. Other books include the one by Taylor and one by Treves (you only need the first volume).</p>
|
2,198,454 | <p>My professor's solution to this is as follows: "Create a 2 x 2 matrix with the first row (corresponding to $X=0$) summing to $P(X=0)=1-p$, the second row summing to $P(X=1)=p$, the first column ($Y=0$) summing to $P(Y=0)=1-q$ and the second column summing to $P(Y=1)=q$. We want to maximize the sum of the diagonal which is $P(X=Y)$. Since $p > q$, the first diagonal entry can be at most $1 − p$; the second can be at
most $q$. If we write these in we can fill out the rest of the table (below) to get the desired coupling."</p>
<p><a href="https://i.stack.imgur.com/8jqJp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8jqJp.png" alt="enter image description here"></a></p>
<p>The bit I'm confused about is why $p>q$ implies the first diagonal entry can be at most $1-p$? Can someone explain this please? If we want to maximise the sum of the main diagonal then isn't $1-q+p$ better than $1-p+q$ because $p>q \implies 1-p+q<1$ but $1-q+p>1$?</p>
| MR_BD | 195,683 | <p>Here is a simple solution:</p>
<p>Pick a random number <span class="math-container">$z$</span> uniformly in <span class="math-container">$[0,1]$</span></p>
<p>for <span class="math-container">$0 \le z<q$</span> set <span class="math-container">$X=Y=1$</span>;</p>
<p>for <span class="math-container">$q \le z<p$</span> set <span class="math-container">$X=1,Y=0$</span>;</p>
<p>for <span class="math-container">$ p \le z \le 1$</span> set <span class="math-container">$X=Y=0$</span>.</p>
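<p>The three cases above can be simulated directly; the following sketch (my illustration, with arbitrary example values $p=0.7$, $q=0.3$, assuming $p > q$) checks the marginals and the agreement probability $P(X=Y)=1-(p-q)$:</p>

```python
import random

def coupled_pair(p, q, rng):
    # One draw of (X, Y): X ~ Bernoulli(p), Y ~ Bernoulli(q), assuming p > q
    z = rng.random()
    if z < q:
        return 1, 1
    elif z < p:
        return 1, 0
    return 0, 0

rng = random.Random(0)
p, q, n = 0.7, 0.3, 200_000
pairs = [coupled_pair(p, q, rng) for _ in range(n)]
x_mean = sum(x for x, _ in pairs) / n
y_mean = sum(y for _, y in pairs) / n
agree = sum(x == y for x, y in pairs) / n

# Marginals are Bernoulli(p) and Bernoulli(q), and P(X = Y) = 1 - (p - q),
# the maximum possible agreement.
assert abs(x_mean - p) < 0.01
assert abs(y_mean - q) < 0.01
assert abs(agree - (1 - (p - q))) < 0.01
print("marginals and P(X = Y) match the construction")
```
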
<hr />
<p>But if you prefer to do it using table you have to complete the following table:</p>
<p><a href="https://i.stack.imgur.com/dubVJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dubVJ.jpg" alt="enter image description here" /></a></p>
<p>The first row has to have a sum equal to <span class="math-container">$1-q$</span> and the second one equal to <span class="math-container">$q$</span>.</p>
<p>[Analogously for the columns.]</p>
<p>The diagonal entries have to be equal to the minimum of the corresponding row and column probabilities, since you want to maximize <span class="math-container">$P(X=Y)$</span>.</p>
<p>[If you set a value greater than this minimum you have to set a negative value for the other entry!]</p>
<p>So you can complete the table as your professor said.</p>
<blockquote>
<p>P.S. My table is the transpose of yours.</p>
</blockquote>
|
2,560,556 | <p>Let $X,Y,Z$ be topological spaces.
Let $p:X\rightarrow Y$ be a continuous surjection, and suppose that a map $f:Y\rightarrow Z$ is continuous if and only if $f\circ p:X\rightarrow Z$ is continuous.</p>
<p>I want to prove that this makes $p$ a quotient map. </p>
<p>My thoughts:</p>
<p>Since $p$ is a continuous surjection, all I need is for $p$ to also be open.</p>
<p>If I can show that $p^{-1}$ exists and is continuous, then $p$ must be open, and therefore a quotient map.
Since $p$ is surjective, I know that $p$ at least has a right inverse, so some function $g$ exists such that $p\circ g = Id_Y$.</p>
<p>I don't know how to proceed, however. Am I on the right track?</p>
| D_S | 28,556 | <p>This follows from univeral property arguments. If an object satisfies the same universal property as a quotient, product, coproduct etc. then it is that quotient, product, coproduct etc. You can give a much shorter proof of this without universal property arguments, but if you become familiar with such arguments, then problems like this can be solved in a mechanical way without doing much thinking.</p>
<p><strong>Lemma:</strong> Let $\pi: X \rightarrow S$ be a quotient map. That is, $\pi$ is surjective, and $U$ is open in $S$ if and only if $\pi^{-1}U$ is open in $X$. Then for any space $Z$, and any continuous map $g: X \rightarrow Z$ such that $\pi(x_1) = \pi(x_2)$ implies $g(x_1) = g(x_2)$, there is a unique continuous map $\bar{g}: S \rightarrow Z$ such that $g = \bar{g} \circ \pi$.</p>
<p>Proof: If we forget continuity and just worry about $\pi$ and $f$ as maps of <em>sets</em>, it's clear that there is a unique function $\bar{g}: S \rightarrow Z$ such that $g = \bar{g} \circ \pi$, namely for any $s \in S$, we find an $x \in X$ such that $\pi(x) = s$, and then define $\bar{g}(s) = g(x)$. This is a well defined function which doesn't depend on the choice of $x$. We just need to show that $\overline{g}$ is continuous. </p>
<p>If $V \subseteq Z$ is open, we want to show that $\bar{g}^{-1}V$ is open in $S$. This is true if and only if $\pi^{-1}\bar{g}^{-1}V$ is open in $X$. But $\pi^{-1}\bar{g}^{-1}V = (\bar{g} \circ \pi)^{-1}V = g^{-1}V$, and $g$ is continuous, so it is open in $X$. $\blacksquare$</p>
<p>Now let $p: X \rightarrow Y$ be your surjective continuous map. We are supposing that for any continuous map $f: X \rightarrow Z$ such that $p(x_1) = p(x_2)$ implies $f(x_1) = f(x_2)$, there is a unique continuous map $\bar{f}: Y \rightarrow Z$ such that $\bar{f} \circ p = f$, and we want to show that $Y$ has the quotient topology.</p>
<p>Temporarily forget the existing topology on the set $Y$, and give this set the quotient topology with respect to the surjective function $p: X \rightarrow Y$. Denote the <em>set</em> $Y$, together with the quotient topology, by the letter $S$. Then $p: X \rightarrow S$ is a quotient map. What you want to show then is that $S = Y$ (already they are equal as sets, but you want to show that they are the same topological space). This amounts to showing that the identity function $S \rightarrow Y$ is a homeomorphism.</p>
<p>Since the quotient map $p: X \rightarrow S$ is by definition continuous, your hypothesis on $Y$ tells you that there must be a unique continuous map $j: Y \rightarrow S$ such that $p = j \circ p$ as maps $X \rightarrow S$. As a map of sets, $j$ is obviously the identity map. Thus the identity map $Y \rightarrow S$ is continuous.</p>
<p>On the other hand, the lemma tells you that $S$ satisfies the same property as $Y$: since $p: X \rightarrow Y$ is continuous, there must be a unique continuous map $i: S \rightarrow Y$ such that $p = i \circ p$ as maps $X \rightarrow Y$. As a map of sets, $i$ is obviously the identity map. Thus the identity map $S \rightarrow Y$ is continuous.</p>
<p>Since the identity function $S \rightarrow Y$ and its inverse are both continuous, this function is a homeomorphism, meaning $Y$ has the same topology as $S$, so $p: X \rightarrow Y$ is actually a quotient map.</p>
|
1,579,811 | <p>Find general solution for the differential equation $x^3y^{'''}+x^2y^{''}+3xy^{'}-8y=0$</p>
<p>This is the Euler differential equation which can be solved by substitution $x=e^t$. I don't understand the following differential relations:</p>
<p>$$xy^{'}=x\frac{dy}{dx}=\frac{dy}{dt}
$$
$$x^2y^{''}=x^2\frac{d^2y}{dx^2}=\frac{d}{dt}\left(\frac{d}{dt}-1\right)y
$$
$$x^3y^{'''}=x^3\frac{d^3y}{dx^3}=\frac{d}{dt}\left(\frac{d}{dt}-1\right)\left(\frac{d}{dt}-2\right)y
$$</p>
<p>How to evaluate these relations?</p>
<p>From here, it is easy to solve the equation, which is homogeneous with constant coefficients.</p>
<p>General solution is $y=c_1e^{2\ln x}+c_2e^{-i2\ln x}+c_3e^{i2\ln x}$</p>
| Jan Eerland | 226,665 | <p>HINT:</p>
<p>$$x^3y'''(x)+x^2y''(x)+3xy'(x)-8y(x)=0\Longleftrightarrow$$
$$x^3\cdot\frac{\text{d}^3y(x)}{\text{d}x^3}+x^2\cdot\frac{\text{d}^2y(x)}{\text{d}x^2}+3x\cdot\frac{\text{d}y(x)}{\text{d}x}-8y(x)=0\Longleftrightarrow$$</p>
<hr>
<p>Assume the solution will be proportional to $x^{\lambda}$ for some constant $\lambda$.
Substitute $y(x)=x^{\lambda}$:</p>
<hr>
<p>$$x^3\cdot\frac{\text{d}^3x^{\lambda}}{\text(d)x^3}+x^2\cdot\frac{\text{d}^2x^{\lambda}}{\text(d)x^2}+3x\cdot\frac{\text{d}x^{\lambda}}{\text(d)x}-8x^{\lambda}=0\Longleftrightarrow$$</p>
<hr>
<p>Substitute $\frac{\text{d}x^{\lambda}}{\text{d}x}=\lambda x^{\lambda-1}$:</p>
<hr>
<p>$$\lambda^3x^{\lambda}-2\lambda^2x^{\lambda}+4\lambda x^{\lambda}-8x^{\lambda}=0\Longleftrightarrow$$
$$x^{\lambda}\left(\lambda^3-2\lambda^2+4\lambda-8\right)=0\Longleftrightarrow$$</p>
<hr>
<p>Assuming $x\ne 0$, the zeros must come from the polynomial:</p>
<hr>
<p>$$\lambda^3-2\lambda^2+4\lambda-8=0\Longleftrightarrow$$
$$\left(\lambda-2\right)\left(\lambda^2+4\right)=0$$</p>
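<p>The factorization and the resulting roots can be confirmed numerically (my sketch, not part of the hint):</p>

```python
def p(l):
    return l ** 3 - 2 * l ** 2 + 4 * l - 8

# the three roots of the indicial polynomial: 2 and +/- 2i
for l in (2, 2j, -2j):
    assert abs(p(l)) < 1e-12

# factored and expanded forms agree at sample points
for l in (-2.0, 0.5, 1.0, 3.0):
    assert abs(p(l) - (l - 2) * (l ** 2 + 4)) < 1e-9

# the root lambda = 2 gives y = x^2; spot-check it in the original ODE
for x in (0.5, 1.0, 2.0, 7.0):
    y, y1, y2, y3 = x ** 2, 2 * x, 2.0, 0.0
    assert abs(x ** 3 * y3 + x ** 2 * y2 + 3 * x * y1 - 8 * y) < 1e-9
print("roots 2, 2i, -2i verified")
```
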
|
1,579,811 | <p>Find general solution for the differential equation $x^3y^{'''}+x^2y^{''}+3xy^{'}-8y=0$</p>
<p>This is the Euler differential equation which can be solved by substitution $x=e^t$. I don't understand the following differential relations:</p>
<p>$$xy^{'}=x\frac{dy}{dx}=\frac{dy}{dt}
$$
$$x^2y^{''}=x^2\frac{d^2y}{dx^2}=\frac{d}{dt}\left(\frac{d}{dt}-1\right)y
$$
$$x^3y^{'''}=x^3\frac{d^3y}{dx^3}=\frac{d}{dt}\left(\frac{d}{dt}-1\right)\left(\frac{d}{dt}-2\right)y
$$</p>
<p>How to evaluate these relations?</p>
<p>From here, it is easy to solve the equation, which is homogeneous with constant coefficients.</p>
<p>General solution is $y=c_1e^{2\ln x}+c_2e^{-i2\ln x}+c_3e^{i2\ln x}$</p>
| user69468 | 225,040 | <p>It is called the Cauchy–Euler equation; what you did was correct. Set the operator $D := \frac{d}{dt}$. Then form the auxiliary equation and solve it. The equation will be of the form $$(D^3-2D^2+4D-8)y=0,$$ where $x = e^t$. This gives $D = 2, \pm 2i$.</p>
|
2,539,888 | <p>I have a polynomial $x^4+x+1 \in \mathbb{Z}_2[x]$ and I want to construct an extension field of $\mathbb{Z}_2$ that includes the roots of that polynomial. So is this the right approach?</p>
<p>Let $E$ be the extension field:
$$E = \mathbb{Z}_2[x] / \langle x^4+x+1\rangle$$?</p>
<p>If so, how do I find the root of this polynomial? And what is the range of the extension field?</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>$$2x^2-xy-y^2=2x(x-y)+y(x-y)=?$$</p>
|
2,924,380 | <p><span class="math-container">$\sum_{k=0}^{n}{k\binom{n}{k}}=n2^{n-1}$</span></p>
<p><span class="math-container">$n2^{n-1} = \frac{n}{2}2^{n} = \frac{n}{2}(1+1)^n = \frac{n}{2}\sum_{k=0}^{n}{\binom{n}{k}}$</span></p>
<p>That's all I have so far; I don't know how to proceed.</p>
| Mark | 470,733 | <p>There is also a combinatorial proof. Imagine there is a set of <span class="math-container">$n$</span> elements and you want to choose a subset with one special element in it. (it can also be the only element of the subset). How can you do that? You can first choose the special element (<span class="math-container">$n$</span> options for that) and then choose a subset from the remaining <span class="math-container">$n-1$</span> elements. So the number of options is <span class="math-container">$n2^{n-1}$</span>. </p>
<p>Another way is to find the number of options to choose a subset of a specific size <span class="math-container">$k$</span> and then sum over all possible values of <span class="math-container">$k$</span>. So to choose a subset of size <span class="math-container">$k$</span> you need to choose <span class="math-container">$k$</span> elements out of the <span class="math-container">$n$</span>, which is <span class="math-container">$\binom nk$</span> options, and then you have <span class="math-container">$k$</span> options to choose the special element. So there are <span class="math-container">$k\binom nk$</span> options to choose a subset of size <span class="math-container">$k$</span>. Sum over all possible values of <span class="math-container">$k$</span> and you will get the required identity. </p>
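<p>The identity itself is easy to confirm numerically (a check I am adding):</p>

```python
from math import comb

# sum_k k*C(n,k) = n*2^(n-1)
for n in range(1, 25):
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
print("identity verified for n = 1..24")
```
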
|
2,425,337 | <p>What would be an example of a real valued sequence $\{a_{n}\}_{n=1}^{\infty}$ such that $$\frac{a_{n}}{a_{n+1}} = 1 + \frac{1}{n} + \frac{p}{n \ln n} + O\left(\frac{1}{n \ln^{2}n}\right)\ ?$$</p>
| AlgorithmsX | 355,874 | <p>If you scale and flip almost any <a href="https://en.m.wikipedia.org/wiki/Cumulative_distribution_function" rel="nofollow noreferrer">Cumulative Distribution Function</a>, you should get exactly what you're looking for. The simplest CDF is from the <a href="https://en.m.wikipedia.org/wiki/Kumaraswamy_distribution" rel="nofollow noreferrer">Kumaraswamy Distribution</a>, which has CDF $1-(1-x^a)^b$.</p>
|
673,334 | <p>I used the pseudocode below to generate a discrete normal distribution over 101 points.</p>
<pre><code>mean = 0;
stddev = 1;
lowerLimit = mean - 4*stddev;
upperLimit = mean + 4*stddev;
interval = (upperLimit-lowerLimit)/101;
for ( x = lowerLimit + 0.5*interval ; x < upperLimit; x = x + interval) {
y = exp(-sqr(x)/2)/sqrt(2*PI);
print ("%f %f", x, y);
}
</code></pre>
<p>When I plot y vs. x I get a normal distribution curve, as expected. But when I calculate the standard deviation I use the following algorithm (according to <a href="http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable" rel="nofollow">http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable</a>)</p>
<pre><code>for i = 1:101
sumsq += y[i]*(x[i]^2)
end
stddev = sqrt(sumsq)
</code></pre>
<p>I get $stddev = 3.55$ instead of $1$. Where is the problem?</p>
| Community | -1 | <p>You need to re-think how you are "discretizing" the normal distribution. Either: (1) partition the real line and set the probability of each discrete value to the probability of the corresponding interval, as computed from the continuous normal distribution; or (2) divide the "probability" assigned to each value by the sum of the "probabilities" assigned to all values. This "renormalizes" your values so they sum to 1, and hence represent a genuine probability distribution.</p>
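<p>Here is a minimal Python sketch of option (2) (my illustration, not the asker's code), mirroring the 101-point discretization: the unnormalized sum reproduces the asker's value of about 3.55, while the renormalized weights give a standard deviation close to 1.</p>

```python
import math

mean, stddev = 0.0, 1.0
lower, upper = mean - 4 * stddev, mean + 4 * stddev
n = 101
interval = (upper - lower) / n

# Midpoints of the 101 cells and the density values at them
xs = [lower + (i + 0.5) * interval for i in range(n)]
ys = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]

# The asker's computation: treats raw density values as probabilities
raw = math.sqrt(sum(y * x * x for x, y in zip(xs, ys)))  # ≈ 3.55

# Renormalize so the weights sum to 1, then recompute
total = sum(ys)
sd = math.sqrt(sum((y / total) * x * x for x, y in zip(xs, ys)))  # ≈ 1.0
print(raw, sd)
```

<p>(The renormalized value comes out slightly below 1 because the distribution is truncated at ±4σ.)</p>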
|
3,956,467 | <p>Show that if <span class="math-container">$\alpha\in[0^\circ;45^\circ]:\sin(45^\circ+\alpha)=\cos(45^\circ-\alpha)$</span> and <span class="math-container">$\cos(45^\circ+\alpha)=\sin(45^\circ-\alpha).$</span></p>
<p>I tried to use the unit circle, but I am not sure how to draw the angles <span class="math-container">$\alpha, 45^\circ+\alpha$</span> and <span class="math-container">$45^\circ-\alpha.$</span> I have noticed <span class="math-container">$45^\circ+\alpha\le90^\circ$</span> and <span class="math-container">$45^\circ-\alpha\ge0^\circ.$</span>
Can I use any other approaches? Why do we have restrictions on <span class="math-container">$\alpha$</span>? Thank you in advance!</p>
| Vishu | 751,311 | <p>Hint: <span class="math-container">$$\sin \theta = \cos(90^\circ -\theta) \\ \cos\theta =\sin(90^\circ -\theta)$$</span> The range of <span class="math-container">$\alpha$</span> is irrelevant, as this identity holds for all <span class="math-container">$\theta\in\mathbb R$</span>.</p>
|
3,956,467 | <p>Show that if <span class="math-container">$\alpha\in[0^\circ;45^\circ]:\sin(45^\circ+\alpha)=\cos(45^\circ-\alpha)$</span> and <span class="math-container">$\cos(45^\circ+\alpha)=\sin(45^\circ-\alpha).$</span></p>
<p>I tried to use the unit circle, but I am not sure how to draw the angles <span class="math-container">$\alpha, 45^\circ+\alpha$</span> and <span class="math-container">$45^\circ-\alpha.$</span> I have noticed <span class="math-container">$45^\circ+\alpha\le90^\circ$</span> and <span class="math-container">$45^\circ-\alpha\ge0^\circ.$</span>
Can I use any other approaches? Why do we have restrictions on <span class="math-container">$\alpha$</span>? Thank you in advance!</p>
| Salmon Fish | 955,791 | <p>Hint: use the product-to-sum identities <span class="math-container">$\sin(x)\sin(y)=\frac{\cos(x+y)-\cos(x-y)}{-2}$</span> and <span class="math-container">$\sin(x)\cos(y)=\frac{\sin(x+y)+\sin(x-y)}{2}$</span>.</p>
<p>For the first claim, multiply each side of the desired equality by <span class="math-container">$\sin(45^\circ+\alpha)$</span> and expand both products:</p>
<p><span class="math-container">$\sin(45^\circ+\alpha)\sin(45^\circ+\alpha)=\frac{\cos(90^\circ+2\alpha)-\cos(0^\circ)}{-2}=\frac{1+\sin(2\alpha)}{2}$</span></p>
<p><span class="math-container">$\sin(45^\circ+\alpha)\cos(45^\circ-\alpha)=\frac{\sin(90^\circ)+\sin(2\alpha)}{2}=\frac{1+\sin(2\alpha)}{2}$</span></p>
<p>The two products agree, so <span class="math-container">$\sin(45^\circ+\alpha)=\cos(45^\circ-\alpha)$</span>. The second identity follows in the same way, multiplying by <span class="math-container">$\cos(45^\circ+\alpha)$</span>.</p>
|
8,568 | <p>I'm going to be starting teaching a course called algebra COE, which is for students who didn't pass the required state algebra exam to graduate and are now seniors, to do spaced-out exam-like extended problems after extensive support. </p>
<p>I don't want to start the class out with "getting down to business" because I want the students to feel comfortable in the class, with me and with each other. The "getting down to business" will happen during the second week of class. Therefore, I'd like to start out with a class collaboration to solve a "fun" problem. (There are 5 students in the class)</p>
<p>At the same time, I don't want to start out with a problem that feels too contrived, or too much like "school math" problems. They have clearly been turned off from "school math." I want one or some that feel more like they are doing a puzzle, yet still engage algebra-related skills and open up a discussion about problem-solving as a process and skill that can be honed. </p>
<p>Some problems I have considered, yet I believe are too "math-feeling":</p>
<ul>
<li>The exponential chessboard and rice problem </li>
<li>How many squares are there on the chessboard? (note: more than 64)</li>
<li>The "lockers" problem</li>
<li>The <a href="https://itunes.apple.com/us/app/ooops/id467564672?mt=8">ooops game</a></li>
</ul>
<p>Any suggestions?</p>
| MissC | 5,544 | <p>What about the Monty Hall Problem?</p>
<p><a href="https://www.youtube.com/watch?v=mhlc7peGlGg" rel="nofollow">https://www.youtube.com/watch?v=mhlc7peGlGg</a></p>
<p>Not sure if you wanted an algebra-related one, though. </p>
|
180,839 | <p>Is there any software which can be used for computing Thurston's unit ball (for second homology of 3-manifolds) of link complements? In particular can I do that with SnapPy?</p>
<p>PS: even a table for Thurston's ball of two component links would be helpful for me.</p>
| Sam Nead | 1,650 | <p>"Better late than never." Stephan Tillmann and William Worden have produced the software package <strong>tnorm</strong>. This can be found here:</p>
<p><a href="https://pypi.org/project/tnorm/" rel="noreferrer">https://pypi.org/project/tnorm/</a></p>
<p>The software should be able to deal with hyperbolic two-component links easily. William replies to emails, as well. :)</p>
|
673,385 | <p>Question:</p>
<blockquote>
<p>Determine the multiplicative inverse of $x^2 + 1$ in $GF(2^4)$ with $$m(x) = x^4 + x + 1.$$ </p>
</blockquote>
<p>My confusion is over the $GF (2^4)$.</p>
| robjohn | 13,854 | <p>As shown in <a href="https://math.stackexchange.com/a/86553">this answer</a>,
$$
\int_0^1t^{\alpha-1}\,(1-t)^{\beta-1}\,\mathrm{d}t
=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}
$$
and
$$
\int_0^\infty t^{\alpha-1}\,(1+t)^{-\beta}\,\mathrm{d}t
=\frac{\Gamma(\alpha)\Gamma(\beta-\alpha)}{\Gamma(\beta)}
$$
Substituting $t\mapsto1/t$ and then $t\mapsto t^{1/4}$, we get
$$
\begin{align}
\int_0^1\frac{\mathrm{d}t}{\sqrt{1+t^4}}
&=\int_1^\infty\frac{\mathrm{d}t}{\sqrt{1+t^4}}\\
&=\frac12\int_0^\infty\frac{\mathrm{d}t}{\sqrt{1+t^4}}\\
&=\frac18\int_0^\infty t^{-3/4}(1+t)^{-1/2}\,\mathrm{d}t\\
&=\frac18\frac{\Gamma(1/4)^2}{\Gamma(1/2)}
\end{align}
$$
Furthermore,
$$
\begin{align}
\int_0^1\frac{\mathrm{d}t}{\sqrt{1-t^4}}
&=\frac14\int_0^1t^{-3/4}(1-t)^{-1/2}\,\mathrm{d}t\\
&=\frac14\frac{\Gamma(1/4)\Gamma(1/2)}{\Gamma(3/4)}
\end{align}
$$
As shown in <a href="https://math.stackexchange.com/a/176216">this answer</a>, $\Gamma(x)\Gamma(1-x)=\pi\csc(\pi x)$. Therefore,
$$
\begin{align}
\frac{\displaystyle\int_0^1\frac{\mathrm{d}t}{\sqrt{1-t^4}}}{\displaystyle\int_0^1\frac{\mathrm{d}t}{\sqrt{1+t^4}}}
&=2\frac{\Gamma(1/2)^2}{\Gamma(1/4)\Gamma(3/4)}\\
&=2\frac{\pi\csc(\pi/2)}{\pi\csc(\pi/4)}\\[12pt]
&=\sqrt2
\end{align}
$$</p>
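<p>As a numerical sanity check (my addition, not part of the proof), both the closed form and the final ratio can be verified with the Python standard library:</p>

```python
import math

# Closed form for ∫₀¹ dt/√(1+t⁴) from the Beta-function computation
closed = math.gamma(0.25)**2 / (8 * math.gamma(0.5))

# Midpoint-rule quadrature of the smooth integrand on [0, 1]
N = 100_000
h = 1.0 / N
quad = sum(h / math.sqrt(1 + ((i + 0.5) * h)**4) for i in range(N))
print(abs(quad - closed) < 1e-8)  # True

# The ratio of the two integrals via Γ(x)Γ(1-x) = π csc(πx)
ratio = 2 * math.gamma(0.5)**2 / (math.gamma(0.25) * math.gamma(0.75))
print(abs(ratio - math.sqrt(2)) < 1e-12)  # True
```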
|
9,930 | <p>One of the standard parts of homological algebra is "diagram chasing", or equivalent arguments with universal properties in abelian categories. Is there a rigorous theory of diagram chasing, and ideally also an algorithm?</p>
<p>To be precise about what I mean, a diagram is a directed graph $D$ whose vertices are labeled by objects in an abelian category, and whose arrows are labeled by morphisms. The diagram might have various triangles, and we can require that certain triangles commute or anticommute. We can require that certain arrows vanish, which can be used to ask that certain compositions vanish. We can require that certain compositions are exact. Maybe some of the arrows are sums or direct sums of other arrows, and maybe some of the vertices are projective or injective objects. Then a diagram "lemma" is a construction of another diagram $D'$, with some new objects and arrows constructed from those of $D$, or at least some new restrictions.</p>
<p>As described so far, the diagram $D$ can express a functor from any category $\mathcal{C}$ to the abelian category $\mathcal{A}$. This looks too general for a reasonable algorithm. So let's take the case that $D$ is acyclic and finite. This is still too general to yield a complete classification of diagram structures, since acyclic diagrams include all acyclic quivers, and some of these have a "wild" representation theory. (For example, three arrows from $A$ to $B$ are a wild quiver. The representations of this quiver are not tractable, even working over a field.) In this case, I'm not asking for a full classification, only in a restricted algebraic theory that captures what is taught as diagram chasing.</p>
<p>Maybe the properties of a diagram that I listed in the second paragraph already yield a wild theory. It's fine to ditch some of them as necessary to have a tractable answer. Or to restrict to the category $\textbf{Vect}(k)$ if necessary, although I am interested in greater generality than that.</p>
<p>To make an analogy, there is a theory of Lie bracket words. There is an algorithm related to <a href="http://en.wikipedia.org/wiki/Lyndon_word" rel="noreferrer">Lyndon words</a> that tells you when two sums of Lie bracket words are formally equal via the Jacobi identity. This is a satisfactory answer, even though it is not a classification of actual Lie algebras. In the case of commutative diagrams, I don't know a reasonable set of axioms — maybe they are related to triangulated categories — much less an algorithm to characterize their formal implications.</p>
<p>(This question was inspired by a mathoverflow question about <a href="https://mathoverflow.net/questions/6749/">George Bergman's salamander lemma</a>.)</p>
<hr>
<p>David's reference is interesting and it could be a part of what I had in mind with my question, but it is not the main part. My thinking is that diagram chasing is boring, and that ideally there would be an algorithm to obtain all finite diagram chasing arguments, at least in the acyclic case. Here is a simplification of the question that is entirely rigorous.</p>
<p>Suppose that the diagram $D$ is finite and acyclic and that all pairs of paths commute, so that it is equivalent to a functor from a finite <a href="http://en.wikipedia.org/wiki/Partially_ordered_set#In_category_theory" rel="noreferrer">poset category</a> $\mathcal{P}$ to the abelian category $\mathcal{A}$. Suppose that the only other decorations of $D$ are that: (1) certain arrows are the zero morphism, (2) certain vertices are the zero object, and (3) certain composable pairs of arrows are exact. (Actually condition 2 can be forced by conditions 1 and 3.) Then is there an algorithm to determine all pairs of arrows that are forced to be exact? Can it be done in polynomial time?</p>
<p>This rigorous simplification does not consider many of the possible features of lemmas in homological algebra. Nothing is said about projective or injective objects, taking kernels and cokernels, taking direct sums of objects and morphisms (or more generally finite limits and colimits), or making connecting morphisms. For example, it does not include the <a href="http://en.wikipedia.org/wiki/Snake_lemma" rel="noreferrer">snake lemma</a>. It also does not include diagrams in which only some pairs of paths commute. But it is enough to express the monomorphism and epimorphism conditions, so it includes for instance the <a href="http://en.wikipedia.org/wiki/Five_lemma" rel="noreferrer">five lemma</a>.</p>
| Daniel Litt | 6,950 | <p>Here's an attempt, assuming the target category is finite dimensional vector spaces and the diagram is finite acyclic.</p>
<p>1) Fill in the diagram so it's triangulated (that is, add the compositions of all arrows). Add kernels and cokernels to every arrow in the diagram. Add the natural arrows between these new objects (e.g. the kernel of $f$ maps into the kernel of $g\circ f$). Iterate this process until it terminates (which it does, as there are finitely many arrows, and thus only finitely many subquotients of objects they induce).</p>
<p>2) In this new diagram, tabulate all exact paths. (This step seems to me to have high time and space complexity, unfortunately.) By exact path, I mean an infinite long exact sequence, for which all but finitely many objects are zero.</p>
<p>3) Given an exact path $$0\to A_1\to \cdots \to A_i\to \cdots \to A_n\to 0$$ write the equation $$\sum_{i=1}^n (-1)^i \dim~ A_i=0.$$</p>
<p>4) Solve the resulting linear system for the $\dim~ A_i$'s.</p>
<p>Since we've added kernels and cokernels, solving the system tells us about injectivity and surjectivity of all the arrows. Furthermore, as far as I can tell any (non-negative) solution to this linear system is realizable as a diagram, so this algorithm gives us any information we could get by diagram chasing.</p>
<hr>
<p>Greg has given a good summary of this method in his answer below--he takes me to task about my assertion that any non-negative integer solution to this linear system is realizable, so I will sketch an argument (again requiring the diagram to be finite acyclic). We also assume the diagram is "complete" in the sense that step 1) has been completed.</p>
<p>Partially order the objects in the diagram by saying $A\leq B$ iff $A$ is a subquotient of $B$; inductively, this is the same as saying that $B\leq B$ and $A\leq B$ if there exists $C\leq B$ such that $A$ injects into $C$ or $C$ surjects onto $A$ in this diagram. We identify isomorphic objects; this is a poset by acyclicity. We proceed by induction on the number of objects in the diagram. The one-object diagram is easy, so we do the induction step.</p>
<p>By finiteness the diagram contains a longest chain; choose such a chain and consider its minimal element $X$. All maps into or out of $X$ are surjections or injections, respectively, so split $X$ off of every object that maps onto it or that it maps into as a direct summand. The diagram with $X$ removed gives the induction step.</p>
|
9,930 | <p>One of the standard parts of homological algebra is "diagram chasing", or equivalent arguments with universal properties in abelian categories. Is there a rigorous theory of diagram chasing, and ideally also an algorithm?</p>
<p>To be precise about what I mean, a diagram is a directed graph $D$ whose vertices are labeled by objects in an abelian category, and whose arrows are labeled by morphisms. The diagram might have various triangles, and we can require that certain triangles commute or anticommute. We can require that certain arrows vanish, which can be used to ask that certain compositions vanish. We can require that certain compositions are exact. Maybe some of the arrows are sums or direct sums of other arrows, and maybe some of the vertices are projective or injective objects. Then a diagram "lemma" is a construction of another diagram $D'$, with some new objects and arrows constructed from those of $D$, or at least some new restrictions.</p>
<p>As described so far, the diagram $D$ can express a functor from any category $\mathcal{C}$ to the abelian category $\mathcal{A}$. This looks too general for a reasonable algorithm. So let's take the case that $D$ is acyclic and finite. This is still too general to yield a complete classification of diagram structures, since acyclic diagrams include all acyclic quivers, and some of these have a "wild" representation theory. (For example, three arrows from $A$ to $B$ are a wild quiver. The representations of this quiver are not tractable, even working over a field.) In this case, I'm not asking for a full classification, only in a restricted algebraic theory that captures what is taught as diagram chasing.</p>
<p>Maybe the properties of a diagram that I listed in the second paragraph already yield a wild theory. It's fine to ditch some of them as necessary to have a tractable answer. Or to restrict to the category $\textbf{Vect}(k)$ if necessary, although I am interested in greater generality than that.</p>
<p>To make an analogy, there is a theory of Lie bracket words. There is an algorithm related to <a href="http://en.wikipedia.org/wiki/Lyndon_word" rel="noreferrer">Lyndon words</a> that tells you when two sums of Lie bracket words are formally equal via the Jacobi identity. This is a satisfactory answer, even though it is not a classification of actual Lie algebras. In the case of commutative diagrams, I don't know a reasonable set of axioms — maybe they are related to triangulated categories — much less an algorithm to characterize their formal implications.</p>
<p>(This question was inspired by a mathoverflow question about <a href="https://mathoverflow.net/questions/6749/">George Bergman's salamander lemma</a>.)</p>
<hr>
<p>David's reference is interesting and it could be a part of what I had in mind with my question, but it is not the main part. My thinking is that diagram chasing is boring, and that ideally there would be an algorithm to obtain all finite diagram chasing arguments, at least in the acyclic case. Here is a simplification of the question that is entirely rigorous.</p>
<p>Suppose that the diagram $D$ is finite and acyclic and that all pairs of paths commute, so that it is equivalent to a functor from a finite <a href="http://en.wikipedia.org/wiki/Partially_ordered_set#In_category_theory" rel="noreferrer">poset category</a> $\mathcal{P}$ to the abelian category $\mathcal{A}$. Suppose that the only other decorations of $D$ are that: (1) certain arrows are the zero morphism, (2) certain vertices are the zero object, and (3) certain composable pairs of arrows are exact. (Actually condition 2 can be forced by conditions 1 and 3.) Then is there an algorithm to determine all pairs of arrows that are forced to be exact? Can it be done in polynomial time?</p>
<p>This rigorous simplification does not consider many of the possible features of lemmas in homological algebra. Nothing is said about projective or injective objects, taking kernels and cokernels, taking direct sums of objects and morphisms (or more generally finite limits and colimits), or making connecting morphisms. For example, it does not include the <a href="http://en.wikipedia.org/wiki/Snake_lemma" rel="noreferrer">snake lemma</a>. It also does not include diagrams in which only some pairs of paths commute. But it is enough to express the monomorphism and epimorphism conditions, so it includes for instance the <a href="http://en.wikipedia.org/wiki/Five_lemma" rel="noreferrer">five lemma</a>.</p>
| Greg Kuperberg | 1,450 | <p>This answer is a response to Daniel Litt's answer above. First, let me distill his point. Given a diagram of finite-dimensional vector spaces, every term $A$ has a dimension which is a non-negative integer. In addition, every morphism $f:A \to B$ has a non-negative rank which is at most $\min(\dim A,\dim B)$. If a pair of morphisms
$$A \stackrel{f}{\longrightarrow} B \stackrel{g}{\longrightarrow} C,$$
is exact, then you get a linear relation
$$\mathrm{rank}\ f + \mathrm{rank}\ g = \dim B.$$
Thus, there is an integer programming problem to express whether the dimensions and ranks are feasible. Since such an integer programming problem is homogeneous, you can instead treat it as a rational linear programming problem and later clear denominators. There is an algorithm to determine feasibility, even a polynomial time algorithm. As Daniel points out, this is enough to establish the <a href="http://en.wikipedia.org/wiki/Nine_lemma" rel="nofollow">nine lemma</a> in the special case of finite-dimensional vector spaces. The argument/algorithm even works in many categories other than finite-dimensional vector spaces. For instance it works for finite abelian groups. On the other hand, the algorithm doesn't use the fact that any polygons in the diagram commute (see below), other than perhaps a preprocessing stage in which commutative polygons are filled in by commutative triangles.</p>
<p>Like me, Daniel asked to restrict to acyclic diagrams. However, I realized that this restriction is ineffectual for my entire question. (Thus, his algorithm can't need it.) You can convert any diagram into an acyclic one using the fact that
$$0 \longrightarrow A \stackrel{\phi}{\longrightarrow} A' \longrightarrow 0$$
makes $\phi$ into an isomorphism. If $\mathcal{D}$ is a diagram, you should first triangulate all commutative polygons into commutative triangles. Then make three copies $A, A', A''$ of each object $A \in \mathcal{D}$ together with isomorphisms
$$A \stackrel{\phi}{\longrightarrow} A' \stackrel{\phi'}{\longrightarrow} A''.$$
Then a homomorphism $f:A \to B$ in $\mathcal{D}$ can be expressed acyclically as this commutative square:
$$\begin{matrix} A & \stackrel{f}{\longrightarrow} & B' \\
\downarrow && \downarrow \\
A' & \stackrel{f'}{\longrightarrow} & B'' \end{matrix}$$
Finally, a commutative triangle $h = f \circ g$ can be expressed as a commutative square $h' \circ \phi = f' \circ g$. Or if $f$ and $g$ are an exact pair, you can require that $f'$ and $g$ make an exact pair.</p>
<p>Daniel also says without explanation that if there is a solution to the dimension and rank equations, then there are ways to fill in all of the maps. But without a lot more work, I don't think that this inference is reasonable. The hard part is satisfying commutative triangles. It is certainly not true that there is simply a canonical choice for each map using a skeleton $\{k^n\}$ of the category of finite-dimensional vector spaces. Because, if $f$ and $g$ each have some rank, then their composition $f \circ g$ might also have some desired rank, and that rank depends on the choices of $f$ and $g$. One interesting remark here is that
$$\mathrm{rank}\ f \circ g \le \min(\mathrm{rank}\ f,\mathrm{rank}\ g),$$
and there is a similar inequality on the other side. However, I think that there is more going on than that.</p>
<p>Finally, it is worth giving a simple example to show that feasibility in finite-dimensional vector spaces is not the same as feasibility in vector spaces. Given the exact sequence
$$0 \longrightarrow A \longrightarrow A \longrightarrow A \longrightarrow 0,$$
you can conclude (using the dimension equations that Daniel suggests) that $A = 0$ if it is finite-dimensional. But if it is infinite-dimensional, then there are non-trivial solutions.</p>
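<p>To make the dimension-counting concrete, here is a toy check (my illustration, not code from either answer): encoding the exactness relations for the sequence <span class="math-container">$0 \to A \to A \to A \to 0$</span> as a homogeneous linear system shows its only solution is the trivial one.</p>

```python
import numpy as np

# Unknowns x = (dim A, rank f, rank g) for 0 -> A -f-> A -g-> A -> 0.
# Exactness at the three copies of A gives, in turn:
#   rank f = dim A            (f is injective)
#   rank f + rank g = dim A   (im f = ker g)
#   rank g = dim A            (g is surjective)
M = np.array([[-1, 1, 0],
              [-1, 1, 1],
              [-1, 0, 1]], dtype=float)

nullity = 3 - np.linalg.matrix_rank(M)
print(nullity)  # 0: the only solution is dim A = 0, as claimed
```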
|
114,438 | <p>I am interested in knowing whether there is a definition for the symbol of a PDO which is NOT linear.
In Wikipedia and in the book I am reading (An Introduction to Partial Differential Equations by Renardy-Rogers) I only found the definition for linear PDOs.</p>
<p>Here is the Wikipedia link:</p>
<p><a href="http://en.wikipedia.org/wiki/Symbol_of_a_differential_operator" rel="nofollow">http://en.wikipedia.org/wiki/Symbol_of_a_differential_operator</a></p>
| mdg | 64,184 | <p>See <a href="https://math.stackexchange.com/questions/589522/definition-of-the-principal-symbol-of-a-differential-operator-on-a-real-vector-b/883032#883032">Definition of the principal symbol of a differential operator on a real vector bundle.</a>.</p>
<p>For an example, consider the Ricci curvature operator:
\begin{align}
\mathsf{Ricc}:\Gamma(S^2_+M)&\rightarrow\Gamma(S^2M)\\
g&\mapsto\mathsf{Ricc}(g).
\end{align}
The linearisation of the Ricci operator at a given metric $g\in\Gamma(S^2_+M)$ is just the directional derivative of the operator at $g$, and is the map
\begin{align}
D\mathsf{Ricc}|_g:S^2M&\rightarrow S^2M\\
h&\mapsto D\mathsf{Ricc}|_gh=\frac{\text{d}}{\text{d}t}\Big|_{t=0}\mathsf{Ricc}(g+th).
\end{align}
The hard part is calculating it. Let $(U,\mathsf{x})$ be a chart on $M$ and $\omega\in\Gamma(T^*M)$ be a covector field. Books on the Ricci flow (Topping, Chapter 2 or Chow & Knopf, Chapter 3) show that locally the principal symbol of the Ricci operator is
\begin{align}
[\hat{\sigma}_\mathsf{Ricc}(\omega)h]_{ij}=\frac{1}{2}g^{st}(\omega_s\omega_ih_{jt}+\omega_s\omega_jh_{it}-\omega_s\omega_th_{ij}-\omega_i\omega_jh_{st}).
\end{align}
It is then easy to show that the Ricci operator is not elliptic since if we set $h_{ij}=\omega_i\omega_j\neq0$, then $\hat{\sigma}_\mathsf{Ricc}(\omega)h=0$.</p>
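<p>In the flat case <span class="math-container">$g_{ij}=\delta_{ij}$</span>, this last claim is easy to verify numerically (a quick sketch of mine using NumPy; not part of the original derivation):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(3)   # a covector; flat metric g = identity
h = np.outer(w, w)           # the claimed null direction h_ij = w_i w_j

# [sigma(w)h]_ij = (1/2) g^{st} (w_s w_i h_jt + w_s w_j h_it
#                                - w_s w_t h_ij - w_i w_j h_st)
sym = 0.5 * (np.einsum('s,i,js->ij', w, w, h)
             + np.einsum('s,j,is->ij', w, w, h)
             - np.dot(w, w) * h
             - np.trace(h) * np.outer(w, w))
print(np.allclose(sym, 0))  # True: the symbol annihilates this h
```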
|
3,255,644 | <p>I need help understanding the definitions and context for a homework question:</p>
<blockquote>
<p>Consider a 3 by 7 matrix A over GF(2) containing distinct columns. The row space C of A is the
subspace over GF(2) generated by the 3 rows. (Extra note: This is a “simplex” code [7,3] with generator
matrix A. It is closely related to a certain “Hamming” code [7,4].)</p>
</blockquote>
<p>Would the above mean that if, for instance, I have a matrix with distinct columns and entries in GF(2):</p>
<p><span class="math-container">$A=
\begin{matrix}
0 & 0 & 0& 0& 1 & 1 &1\\
0 & 0 & 1& 1& 0 & 0 &1\\
0 & 1 & 0& 1& 0 & 1 &0
\end{matrix}
$</span></p>
<p>the row space would then be:</p>
<p><span class="math-container">$C=[0000111, 0011001, 0101010]$</span></p>
<p>The next parts of the questions needs me to know about the</p>
<blockquote>
<p>weight distribution of C, weight of a vector, distance between words</p>
</blockquote>
<p>Could anyone explain what those words mean in this context?</p>
| xxxxxxxxx | 252,194 | <p>When it says "having distinct columns", it most likely means "distinct nonzero columns" (this is the matrix commonly used in relation to the Hamming code). There are exactly 7 possible nonzero columns in <span class="math-container">$\mathbb{F}_{2}^{3}$</span>, so the matrix should be
<span class="math-container">$$\begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix}$$</span>
(in some order; your matrix is fine if you get rid of the all 0 column and replace it with the all 1 column).</p>
<p>Then the row space is the set of all possible linear combinations of these three rows; the row space is <span class="math-container">$3$</span>-dimensional, so contains a total of <span class="math-container">$8$</span> vectors.</p>
<p>The weight of a codeword is the number of nonzero entries, so <span class="math-container">$01001011$</span> would have weight 4. </p>
<p>The distance between two codewords is the weight of their difference, or equivalently, the number of places where they are not equal; so <span class="math-container">$00001111$</span> and <span class="math-container">$11001100$</span> would be at distance 4.</p>
<p>The weight distribution is the number of codewords of each weight. It is commonly represented as a polynomial, where for example the term <span class="math-container">$3x^4$</span> would mean there are 3 codewords of weight <span class="math-container">$4$</span>.</p>
|
2,251,964 | <p><strong>question(s):</strong></p>
<p>Choose any real or complex clifford algebra $\mathcal{Cl}_{p,q}$. <a href="https://en.wikipedia.org/wiki/Classification_of_Clifford_algebras" rel="nofollow noreferrer">It's known</a> that there is some $A \simeq \mathcal{Cl}_{p,q}$, where $A$ is either a matrix ring $M(n,R)$ or a direct sum of matrix rings $M(n,R)\oplus M(n,R)$, for $n \geq 1$ and $R \in \{\mathbb R, \mathbb C, \mathbb H\}$ such that $\dim(\mathcal{Cl}_{p,q}) = \dim(A)$ (as k-algebras). </p>
<ol>
<li><p>Given an element $X\in \mathcal{Cl}_{p,q}$, defined on a standard basis, how can I find an explicit isomorphism $f:\mathcal{Cl}(p,q)\to A$, that preserves algebraic properties such as grade interaction? What is the "character" (qualitatively or otherwise) of such an isomorphism? Is it part of some group like the general linear or orthogonal groups?</p></li>
<li><p>Conversely, say $y^{i,j} \in R$ is an entry of some matrix $Y\in A$, and $y^{i,j}_k \in \mathbb R$ is a component of $y^{i,j}$. Let's call it a component of $A$ as well. What is the relationship of $y^{i,j}_k$ to $\mathcal{Cl}_{p,q}$? Does it have a single grade that can be determined by its grade in $R$? Or is there something more nuanced going on? How is it related to other components of $A$?</p></li>
<li><p>There are cases where a given $A$ is isomorphic to multiple Clifford algebras. Then these Clifford algebras must be isomorphic to one another. For example: $\mathcal{Cl}_{7,0} \simeq \mathcal{Cl}_{5,2} \simeq \mathcal{Cl}_{3,4} \simeq \mathcal{Cl}_{1,6} \simeq M(8,\mathbb C)$. What's going on here? Is there a mathematical term for these kinds of special isomorphisms between Clifford algebras in general?</p></li>
</ol>
<p>BONUS QUESTION: how is this problem referred to in the academic math literature? I have scoured, with my amateur math education, the "matrix representations of Clifford algebras" stuff, and mostly found stuff about real matrix representations of real Clifford algebras with real matrix generators, etc, but that's not what I'm looking for. How to distinguish?</p>
<p><strong>context:</strong></p>
<p>I have been using sage and sympy (computer algebra systems) to compute symbolic monomial representations of products for various Clifford algebras. These are then used to generate GPU code for geometric algebra usage.</p>
<p>It works nice for Clifford algebras with up to 7 generating dimensions, and for low grade computations like planar rotation, I can get up to 8 or 9 generating dimensions. I've been successful in pumping out reasonably fast 8 dimensional planar rotation functions in glsl.</p>
<p>Recently, I was reading about Bott periodicity and the classification of Clifford algebras. Using the equations given on the wikipedia page "classification of Clifford algebras", I tried generating some of these isomorphic algebras.</p>
<p>For whatever reason, they are much, much faster to generate, likely due to the ubiquity of matrix multiplication. But I have absolutely no idea how, in general, I would construct a generic multivector in them. For example, generating a symbolic representation of $M(4,\mathbb H)$ is very quick in Sage. Presumably there is some isomorphism between this and the 64 dimensional $\mathcal{Cl}_{2,4}$. But how do I use it?</p>
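<p>For what it's worth, the classification table from that Wikipedia page is straightforward to encode. Below is a Python sketch (the function name and return convention are my own, and it assumes the convention that the <span class="math-container">$p$</span> generators square to <span class="math-container">$+1$</span>) that recovers the matrix-algebra type from <span class="math-container">$p-q \bmod 8$</span>:</p>

```python
import math

def clifford_classification(p, q):
    """Cl_{p,q}(R) ≅ (summands copies of) M_size(ring), by p-q mod 8."""
    # (ring, number of direct summands) indexed by (p - q) mod 8
    table = {0: ('R', 1), 1: ('R', 2), 2: ('R', 1), 3: ('C', 1),
             4: ('H', 1), 5: ('H', 2), 6: ('H', 1), 7: ('C', 1)}
    ring, summands = table[(p - q) % 8]
    real_dim = {'R': 1, 'C': 2, 'H': 4}[ring]
    # Total real dimension: 2^(p+q) = summands * size^2 * real_dim
    size = math.isqrt(2**(p + q) // (summands * real_dim))
    return ring, size, summands

print(clifford_classification(2, 4))  # ('H', 4, 1): Cl_{2,4} ≅ M(4, H)
print(clifford_classification(7, 0))  # ('C', 8, 1): Cl_{7,0} ≅ M(8, C)
```

<p>This confirms, e.g., that <span class="math-container">$\mathcal{Cl}_{2,4}$</span> corresponds to <span class="math-container">$M(4,\mathbb H)$</span> (real dimension <span class="math-container">$16\cdot4=64$</span>).</p>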
| benrg | 234,743 | <p>Here is the simplest way I know of getting matrix representations of even or full Clifford algebras of vector spaces over <span class="math-container">$\def\|#1{\mathbb#1}\|R$</span> or <span class="math-container">$\|C$</span> with arbitrary nondegenerate signature. There are just two recursive rules and a trivial base case. It may also work over other fields but some details would have to be changed.</p>
<p>Summary: you can always factor out a copy of <span class="math-container">$\|C$</span>, <span class="math-container">$\|H$</span>, <span class="math-container">$\|D$</span> (the <a href="https://en.wikipedia.org/wiki/Split-complex_number" rel="nofollow noreferrer">split-complex numbers</a>) or <span class="math-container">$\|P$</span> (the <a href="https://en.wikipedia.org/wiki/Split-quaternion" rel="nofollow noreferrer">split quaternions</a>) until you get to a trivial algebra that is isomorphic to the base field. You can convert the resulting tensor product to a matrix representation using the isomorphisms <span class="math-container">$$\|D\cong\|R\oplus\|R, \quad \|P\cong M_2(\|R), \quad \|C\otimes\|C\cong\|C\otimes\|D, \quad \|C\otimes\|H\cong \|C\otimes\|P, \quad \|H\otimes\|H\cong \|P\otimes\|P$$</span> (and the fact that <span class="math-container">$\|R$</span> is the identity and <span class="math-container">$\otimes$</span> distributes over <span class="math-container">$\oplus$</span> and <span class="math-container">$M$</span>).</p>
<p>Details:</p>
<ul>
<li><p>The full algebra in <span class="math-container">$0$</span> dimensions and the even algebra in <span class="math-container">${\le}1$</span> dimension are isomorphic to the base field.</p></li>
<li><p>If there is a unit pseudoscalar <span class="math-container">$ω$</span> that is not a scalar, then any element can be uniquely written as <span class="math-container">$A+Bω$</span> where <span class="math-container">$A$</span> and <span class="math-container">$B$</span> belong to the subalgebra of a subspace of codimension one. If additionally <span class="math-container">$ω$</span> commutes with all elements of the subalgebra, then the algebra is isomorphic to <span class="math-container">$\|C$</span> (if <span class="math-container">$ω^2=-1$</span>) or <span class="math-container">$\|D$</span> (if <span class="math-container">$ω^2=1$</span>) tensored with the subalgebra. This works in dimension at least 1, for full algebras in odd dimensions and even algebras in even dimensions. It doesn't work for full algebras in even dimensions because the pseudoscalar anticommutes with odd elements, and it doesn't work for even algebras in odd dimensions because there is no pseudoscalar.</p></li>
<li><p>Let <span class="math-container">$\{e_1, \ldots, e_n\}$</span> be an orthonormal basis for the vector space, and define (if possible) <span class="math-container">$i=e_1e_2,\;j=e_2\cdots e_n,\;k=ij$</span>. Note that regardless of signature, <span class="math-container">$i^2j^2k^2=-i^4j^4=-1$</span>. If <span class="math-container">$i$</span>, <span class="math-container">$j$</span>, <span class="math-container">$k$</span> commute with all elements of the subalgebra of the subspace spanned by <span class="math-container">$\{e_3, \ldots, e_n\}$</span>, then the algebra is isomorphic to <span class="math-container">$\|H$</span> (if <span class="math-container">$i^2,j^2,k^2$</span> are all negative) or <span class="math-container">$\|P$</span> (if two of them are positive) tensored with the subalgebra. (You may have to permute <span class="math-container">$i,j,k$</span> to get a standard basis where <span class="math-container">$j^2=k^2=ijk$</span>.) This works in dimension at least 2, for full algebras in even dimensions and for even algebras in odd dimensions. It doesn't work for full algebras in odd dimensions because <span class="math-container">$j$</span> and <span class="math-container">$k$</span> anticommute with odd elements, and it doesn't work for even algebras in even dimensions because <span class="math-container">$j$</span> and <span class="math-container">$k$</span> don't exist.</p></li>
</ul>
<p>Note that precisely one of these rules applies in any given situation. However, you have a choice of subspaces when applying the recursive rules, so there are many possible factorizations.</p>
<p>As for the isomorphisms:</p>
<ul>
<li><p><span class="math-container">$\|D\cong\|R\oplus\|R{:}\;1\leftrightarrow(1,1),\;i\leftrightarrow(1,-1)$</span></p></li>
<li><p><span class="math-container">$\|P\cong M_2(\|R){:}\; 1,i,j,k \leftrightarrow \def\M#1{\bigl(\begin{smallmatrix}#1\end{smallmatrix}\bigr)} \M{1&0\\0&1}, \M{0&1\\-1&0}, \M{0&1\\1&0}, \M{1&0\\0&-1}$</span></p></li>
<li><p><span class="math-container">$\|C\otimes\|C\cong\|C\otimes\|D{:}\; i' \leftrightarrow ii'$</span></p></li>
<li><p><span class="math-container">$\|C\otimes\|H\cong \|C\otimes\|P{:}\; j', k' \leftrightarrow ij', ik'$</span></p></li>
<li><p><span class="math-container">$\|H\otimes\|H\cong \|P\otimes\|P{:}\; j, k, j', k' \leftrightarrow ji', ki', ij', ik'$</span></p></li>
</ul>
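<p>As a quick sanity check, here is a small NumPy sketch (mine, not from any library) verifying the <span class="math-container">$\mathbb P\cong M_2(\mathbb R)$</span> correspondence above: the listed matrices satisfy the split-quaternion relations <span class="math-container">$i^2=-1$</span>, <span class="math-container">$j^2=k^2=1$</span>, <span class="math-container">$ij=k=-ji$</span>.</p>

```python
import numpy as np

# matrix images of the split-quaternion basis 1, i, j, k under P = M_2(R)
one = np.array([[1, 0], [0, 1]])
i = np.array([[0, 1], [-1, 0]])
j = np.array([[0, 1], [1, 0]])
k = np.array([[1, 0], [0, -1]])

# split-quaternion relations: i^2 = -1, j^2 = k^2 = +1, ij = k = -ji
assert np.array_equal(i @ i, -one)
assert np.array_equal(j @ j, one)
assert np.array_equal(k @ k, one)
assert np.array_equal(i @ j, k)
assert np.array_equal(j @ i, -k)
```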
<hr>
<p>Here's a worked example: the even algebra of <span class="math-container">$\|R^{1,3}$</span>.</p>
<p>The pseudoscalar squares to <span class="math-container">$-1$</span> so we factor out <span class="math-container">$\|C$</span>, leaving us with either <span class="math-container">$\|R^{0,3}$</span> or <span class="math-container">$\|R^{1,2}$</span>. I'll pick the latter.</p>
<p>Let <span class="math-container">$\{\hat t,\hat x,\hat y\}$</span> be an orthonormal basis for this subspace with <span class="math-container">$\hat t^2=1$</span>. Let <span class="math-container">$i=\hat x\hat y,\;j=\hat y\hat t,\;k=\hat t\hat x$</span>. We factor out <span class="math-container">$\|P$</span>, leaving us with <span class="math-container">$\|R^{0,1}$</span> or <span class="math-container">$\|R^{1,0}$</span>.</p>
<p>The even Clifford algebra of <span class="math-container">$\|R^{0,1}$</span> or <span class="math-container">$\|R^{1,0}$</span> is just <span class="math-container">$\|R$</span>, so our original algebra was isomorphic to <span class="math-container">$\|C\otimes\|P \cong M_2(\|C)$</span>.</p>
<p>Explicitly, we can take <span class="math-container">$\hat t\hat x\hat y\hat z = \M{i&0\\0&i},\ \hat x\hat y = \M{0&1\\-1&0},\ \hat y\hat t = \M{0&1\\1&0}$</span>, and this generates the rest of the algebra.</p>
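<p>A short NumPy check (my own sketch) that these explicit matrices satisfy the expected relations for signature <span class="math-container">$(+,-,-,-)$</span>: <span class="math-container">$(\hat x\hat y)^2=-1$</span>, <span class="math-container">$(\hat y\hat t)^2=+1$</span>, the two anticommute, and the pseudoscalar is central with square <span class="math-container">$-1$</span>.</p>

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
omega = np.array([[1j, 0], [0, 1j]])               # t x y z (unit pseudoscalar)
xy = np.array([[0, 1], [-1, 0]], dtype=complex)    # x y
yt = np.array([[0, 1], [1, 0]], dtype=complex)     # y t

assert np.allclose(xy @ xy, -I2)                   # (xy)^2 = -x^2 y^2 = -1
assert np.allclose(yt @ yt, I2)                    # (yt)^2 = -y^2 t^2 = +1
assert np.allclose(xy @ yt, -(yt @ xy))            # xy and yt anticommute
assert np.allclose(omega @ omega, -I2)             # omega^2 = -1
for m in (xy, yt):
    assert np.allclose(omega @ m, m @ omega)       # omega is central
```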
<hr>
<p>A weakness of this method is that it doesn't give you a chiral basis for the full algebra in even dimensions. You can work around that by using a third recursive rule:</p>
<ul>
<li>Let <span class="math-container">$\{e_1, \ldots, e_n\}$</span> be an orthonormal basis for the vector space, and define (if possible) <span class="math-container">$i=e_1,\;j=e_2\cdots e_n,\;k=ij$</span>. If <span class="math-container">$i^2j^2k^2=-1$</span>, then the algebra is isomorphic to <span class="math-container">$\|H$</span> or <span class="math-container">$\|P$</span> tensored with the even subalgebra of the subspace spanned by <span class="math-container">$\{e_2, \ldots, e_n\}$</span>. This works in nonzero even dimension for full algebras only. It doesn't work in odd dimensions because <span class="math-container">$i^2j^2k^2=1$</span>.</li>
</ul>
<hr>
<blockquote>
<p><span class="math-container">$\mathcal{Cl}_{7,0} \simeq \mathcal{Cl}_{5,2} \simeq \mathcal{Cl}_{3,4} \simeq \mathcal{Cl}_{1,6} \simeq M(8,\mathbb C)$</span>. What's going on here?</p>
</blockquote>
<p><span class="math-container">$M(8,\mathbb C)$</span> only makes sense as a Clifford algebra if you also supply an embedding of the underlying vector space, and these embeddings will necessarily be different for nonisomorphic vector spaces, so I wouldn't say that these algebras are isomorphic in any particularly interesting sense.</p>
<p>For what it's worth, the factorization in this answer can get you explicit isomorphisms fairly easily. You will get for each of these algebras a factor of <span class="math-container">$\|C$</span> and three factors of <span class="math-container">$\|H$</span> or <span class="math-container">$\|P$</span>, and after converting the <span class="math-container">$\|H$</span>s to <span class="math-container">$\|P$</span>s (or vice versa), the map taking generators to their counterparts in another algebra extends to an isomorphism of the algebras.</p>
|
181,940 | <p>I've been unable to find an answer to the following question in the literature
on generalized descriptive set theory. Consider Baire space $\kappa^{\kappa}$
where $\kappa$ is inaccessible. The basic open sets are the $U_f$ where
$f\in\kappa^{<\kappa}$. A perfect set is a nonempty closed set with no
isolated points. Does a perfect set have cardinality $2^{\kappa}$?
Does it have to have cardinality $>\kappa$?</p>
<p>A variation of an observation in
"Generalized Descriptive Set Theory and Classification Theory"
(arxiv.org/abs/1207.4311)
states that if $T$ is a slim Kurepa tree (as defined in Devlin's
"Constructibility") then there is no continuous (in fact Borel)
injection of $2^{\kappa}$ into $[T]$, but this doesn't answer my question.</p>
<p>Of course, if the existence of a slim Kurepa tree was consistent with
$2^{\kappa}>\kappa^+$ then it would be consistent that my first question
had a "no" answer; I have not been able to find any references on this
question, but I wouldn't be surprised if it were so.</p>
| Yair Hayut | 41,953 | <p>For every $\kappa$ of uncountable cofinality there is a tree $T\subseteq 2^{<\kappa}$ such that $|[T]|=\kappa$. The tree $T$ is the tree of all binary sequences $f\colon \alpha \to 2$, $\alpha <\kappa$, such that $f^{-1} (1)$ is finite. It is clear that for every $f \in T$, both $f\frown (1)$ and $f\frown (0)$ are in $T$, so this tree is perfect. </p>
<p>Any branch $b$ of $T$ contains finitely many $1$-s: since $\text{cf }\kappa > \omega$, if there were infinitely many $1$-s, there would be some $\alpha < \kappa$ such that $b\restriction \alpha$ already contained infinitely many $1$-s, contradicting $b\restriction \alpha \in T$. On the other hand, it is clear that for every $b\colon \kappa \to 2$ with $b^{-1}(1)$ finite, $\{ b\restriction \alpha \mid \alpha < \kappa \}$ is a branch in $T$. </p>
<p>There are $\kappa^{<\omega} = \kappa$ such branches, as wanted. </p>
<p><strong>Edit:</strong> I argue that $ZFC$ doesn't prove that there are perfect sets in $^\kappa\kappa$ of size $\kappa^{+}$. The proof is similar to the proof of the consistency of "there are no Kurepa trees". </p>
<p><strong>Theorem:</strong> Assume $GCH$. Let $\kappa < \eta < \mu$ be regular cardinals, $\eta$ inaccessible. Then after forcing with $\mathbb{Q} = Add(\kappa,\mu)\times Col(\kappa,<\eta)$, for every $\kappa$-tree $T$, $|[T]| \in \kappa \cup \{\kappa, \mu\}$. In this generic extension $\eta = \kappa^+$, $\mu = 2^\kappa$ and every cardinal $\geq \eta$ is preserved. </p>
<p><strong>Proof:</strong> We need the following well known fact:</p>
<p><strong>Fact:</strong> Let $T$ be a tree of height $\kappa$. If there is a $\kappa$-closed forcing that adds a branch to $T$ then $|[T]| = 2^\kappa$. </p>
<p><strong>Sketch of proof</strong>: Let $\mathbb{P}$ be a $\kappa$-closed forcing that adds a branch to $T$ and let $\dot{b}$ be the name of this new branch. Define an embedding of $2^{<\kappa}$ into $T$ by building a tree of conditions in $\mathbb{P}$, $\langle p_\eta \mid \eta \in 2^{<\kappa}\rangle$, such that for every $\eta$, $p_{\eta \frown (0)}, p_{\eta \frown (1)} \leq p_\eta$ give contradictory information about the branch. Then for every $f\in 2^\kappa$, $b_f = \{ t\in T \mid \exists \alpha < \kappa,\,p_{f\restriction \alpha}\Vdash t\in \dot{b}\}$ is a cofinal branch, and $f\neq g\implies b_f \neq b_g$. <strong>Q.E.D.</strong> </p>
<p>Let's return to the proof of the theorem. Let $G$ be a $\mathbb{Q}$-generic filter and let $\dot{T}$ be a $\mathbb{Q}$-name for a tree in $V[G]$. Note that $(\kappa^{<\kappa})^{V[G]} = (\kappa^{<\kappa})^{V}$, so we may assume that $\Vdash \dot{T}\subseteq \check{\kappa^{<\kappa}}$. </p>
<p>By the $\eta$-c.c. of $\mathbb{Q}$, we can find a model $M\prec H_\chi$ such that $|M|<\eta$, $\dot{T}, \mathbb{Q} \in M$, $^{<\kappa}M\subseteq M$ and for every $t\in \kappa^{<\kappa}$ there is a maximal antichain $\mathcal{A} \subseteq M$ that decides whether $t\in \dot{T}$ or not (so $T\in V[M\cap G]$). </p>
<p>Note that $M\cap G$ is the restriction of the generic filter to the coordinates that are ordinals of $M$, so it is a generic filter for the forcing $\mathbb{Q}_M := Add(\kappa, \mu\cap M)\times Col(\kappa, <(\eta\cap M))$. Let $\mathbb{P}$ be the restriction of $Add(\kappa,\mu)\times Col(\kappa,<\eta)$ to the ordinals that don't appear in $M$, so $\mathbb{Q}= \mathbb{Q}_M \times \mathbb{P}$. </p>
<p>In $V[G\cap M]$, $2^\kappa$ is less than $\eta$ (since $|\mu \cap M| < \eta$), so if all the branches of $T$ in $V[G]$ are already in $V[G\cap M]$, we have that $V[G]\models |[T]|< \eta = \kappa^+$, and we're done. </p>
<p>Otherwise, let $\dot{b}$ be a name for a new branch. Since $\mathbb{P} \cong \mathbb{P} \times \mathbb{P}$, $\mathbb{Q} \cong \mathbb{Q}_M \times \mathbb{P} \times \mathbb{P}$. Therefore also in $V[G]$, $\dot{b}$ is a $\mathbb{P}$-name for a new branch (by the mutual genericity of the two copies of $\mathbb{P}$). Moreover, $\mathbb{P}$ is $\kappa$-closed in $V[G]$, so by the fact above, $V[G] \models |[T]|=2^\kappa$, as wanted.</p>
|
3,776,217 | <p>Prove that</p>
<p><span class="math-container">\begin{equation}
y(x) = \sqrt{\dfrac{3x}{2x + 3c}}
\end{equation}</span></p>
<p>is a solution of</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + \dfrac{y}{2x} = -\frac{y^3}{3x}
\end{equation}</span></p>
<p>All the math to resolve this differential equation is already done. The exercise simply asks to prove the solution.</p>
<p>I start by pointing out that it has the form</p>
<p><span class="math-container">\begin{equation}
\dfrac{dy}{dx} + P(x)y = Q(x)y^3
\end{equation}</span></p>
<p>where</p>
<p><span class="math-container">\begin{equation}
P(x) = \dfrac{1}{2x}, \qquad Q(x) = -\frac{1}{3x}
\end{equation}</span></p>
<p>Rewriting y(x) as</p>
<p><span class="math-container">\begin{equation}
y(x) = (3x)^{\frac{1}{2}} (2x + 3c)^{-\frac{1}{2}}
\end{equation}</span></p>
<p>Getting rid of that square root, I'll need it later on to simplify things</p>
<p><span class="math-container">\begin{equation}
[y(x)]^2 = 3x(2x + 3c)^{-1}
\end{equation}</span></p>
<p>Calculating dy/dx</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} &= \frac{1}{2}(3x)^{-\frac{1}{2}}(3)(2x + 3c)^{-\frac{1}{2}} + \left(-\dfrac{1}{2}\right)(2x + 3c)^{-\frac{3}{2}}(2)(3x)^{\frac{1}{2}} \\
&= \frac{3}{2}(3x)^{-\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} - (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{3}{2}} \\
&= (3x)^{\frac{1}{2}}(2x + 3c)^{-\frac{1}{2}} \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= y \left[\dfrac{3}{2}(3x)^{-1} - (2x + 3c)^{-1}\right] \\
&= \dfrac{y}{2x} - y(2x + 3c)^{-1} \\
&= \dfrac{y}{2x} - y\left(\dfrac{y^2}{3x}\right) \\
&= \dfrac{y}{2x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>Finally</p>
<p><span class="math-container">\begin{align}
\dfrac{dy}{dx} + P(x)y &= \dfrac{y}{2x} - \dfrac{y^3}{3x} + \dfrac{y}{2x} \\
&= \dfrac{y}{x} - \dfrac{y^3}{3x}
\end{align}</span></p>
<p>which obviously isn't the same as equation 2. I don't know where I screwed up.</p>
| Hari krishna Goli | 787,202 | <p>@kira1985 you should not add the <span class="math-container">$P(x)\,y$</span> term to both sides; here you have to read off the coefficients.</p>
<p>You are using the answer itself to solve the problem. Since the answer is what is being verified, you just have to identify <span class="math-container">$P(x)$</span> and <span class="math-container">$Q(x)$</span> from the equation, namely</p>
<p><span class="math-container">$$\dfrac{dy}{dx} = \dfrac{y}{2x} - \dfrac{y^3}{3x},$$</span></p>
<p>which gives</p>
<p><span class="math-container">$P(x) = -\dfrac{1}{2x}$</span></p>
<p><span class="math-container">$Q(x) = -\dfrac{1}{3x}$</span></p>
<p>This is the right reading of the given question; please check your question once more.</p>
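<p>For what it's worth, here is a quick numeric check (my own sketch, not part of the original exercise): the proposed <span class="math-container">$y(x)=\sqrt{3x/(2x+3c)}$</span> satisfies the equation with the <em>minus</em> sign, <span class="math-container">$dy/dx - y/(2x) = -y^3/(3x)$</span>, and fails the version with the plus sign.</p>

```python
import math

def y(x, c):
    return math.sqrt(3 * x / (2 * x + 3 * c))

def dydx(x, c, h=1e-6):
    # central finite-difference approximation of y'
    return (y(x + h, c) - y(x - h, c)) / (2 * h)

for x in (0.5, 1.0, 2.0):
    for c in (0.1, 1.0):
        rhs = -y(x, c) ** 3 / (3 * x)
        lhs_minus = dydx(x, c) - y(x, c) / (2 * x)   # dy/dx - y/(2x)
        assert abs(lhs_minus - rhs) < 1e-6           # satisfied with the minus sign
        lhs_plus = dydx(x, c) + y(x, c) / (2 * x)    # dy/dx + y/(2x)
        assert abs(lhs_plus - rhs) > 1e-3            # fails with the plus sign
```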
|
2,021,217 | <p>I tried to solve this limit
$$\lim_{x\to 1} \left(\frac{x}{x-1}-\frac{1}{\ln x}\right)$$
and, without thinking, I thought the result was 1. But, using wolfram to verify, I noticed that the limit is $1/2$. </p>
<p>How can I solve it without l'Hôpital/series/integration, just with known limits (link in comments)/squeeze/basic theorems?</p>
<p>I don't have a clue about what I can do, because known limits can't be used ($\ln x = x-1$) without considering the "error" that we don't exactly know!</p>
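<p>A numeric sanity check (my own, not a proof) does suggest the value $1/2$; writing $x = 1 + h$:</p>

```python
import math

def f(h):
    # x/(x-1) - 1/ln(x) with x = 1 + h; log1p keeps precision near h = 0
    return (1 + h) / h - 1 / math.log1p(h)

# the values approach 1/2 as h -> 0, from either side
for h in (1e-2, 1e-3, 1e-4, -1e-3):
    assert abs(f(h) - 0.5) < 1e-2
```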
| kotomord | 382,886 | <p>So, the overkill proof;</p>
<p>One of the equidistribution theorems:
<a href="https://en.wikipedia.org/wiki/Equidistribution_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Equidistribution_theorem</a></p>
<p>If $a$ is irrational, then the sequence $a\,p_n \bmod 1$ (where $p_n$ is the $n$-th prime) is uniformly distributed.</p>
<p>Lemma:
If a > 0 is irrational, then sequence { $\lfloor a*n\rfloor$ } contains infinitely many prime numbers.</p>
<p>Proof of lemma: </p>
<p>Case (a>1):</p>
<p>$\lfloor a n\rfloor = p_m \iff p_m < a n < p_m + 1 \iff \frac{p_m}{a} < n < \frac{p_m+1}{a} \iff n = \lfloor \frac{p_m+1}{a}\rfloor$ and $\{\frac{p_m}{a}\} \in (1-\frac{1}{a}, 1)$ (and there are infinitely many such $m$ by the theorem).</p>
<p>Case (a<1) is trivial.</p>
<p>Proof of task:</p>
<p>Without loss of generality $a \ge b \ge c, d \ge e \ge f, a\ge d$</p>
<p>Suppose $a > d$.
Fix $n$ such that $n a > n d + 2$, $n c > 1$, and $\lfloor n a \rfloor = p_m$.</p>
<p>Then $\lfloor n d \rfloor \lfloor n e \rfloor \lfloor n f \rfloor > 0$ and is not divisible by $p_m$, which is absurd.</p>
<p>So $a = d$, and we need to prove that if $\lfloor n e \rfloor \lfloor n f \rfloor = \lfloor n b \rfloor \lfloor n c \rfloor$, then the sets $\{b, c\}$ and $\{e, f\}$ are equal.</p>
<p>We can do this similarly, or by the method of @marco2013.</p>
<p>P.S. Maybe, we can generalize lemma to rational not-integer numbers (with the Dirichlet thm.)</p>
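<p>P.P.S. An empirical illustration of the lemma (my own sketch, not a proof): for $a = \sqrt 2$ the sequence $\lfloor a n\rfloor$ keeps hitting primes.</p>

```python
import math

def is_prime(m):
    # naive trial division, fine for small m
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

a = math.sqrt(2)
primes_hit = sorted({math.floor(a * n) for n in range(1, 1001)
                     if is_prime(math.floor(a * n))})
assert len(primes_hit) > 50   # plenty of primes show up among floor(sqrt(2)*n)
assert primes_hit[0] == 2     # floor(sqrt(2)*2) = 2 already
```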
|
1,164,037 | <p>The question I propose is this: For an indexing set $I = \mathbb{N}$, or $I = \mathbb{Z}$, and some alphabet $A$, we can define a left shift $\sigma : A^{I} \to A^{I}$ by $\sigma(a_{k})_{k \in I} = (a_{k + 1})_{k \in I}$, because there exists a unique successor $\min \{ i > i_{0} \}$ for all $i \in I$. But one could not do the same if we made, say, $I = \mathbb{Q}$. Is there a way to characterize this in the language of orderings, i.e. a way to characterize the existence of a unique successor of a directed set such that one could sensibly define a "left shift" on a net $s: I \to A$? Does this place restrictions on, say, the cardinality of $I$?</p>
| Asaf Karagila | 622 | <p>There's really no issue here, given any <em>partial order</em> $P$, you can take the lexicographic product of $P$ with $\Bbb N$ or $\Bbb Z$, and obtain a partial order where each element has a unique immediate successor.</p>
<p>So there is no limit on the cardinality of such set. If, however, you want every two points to be within a finite distance of successive steps, you have to have it countable, and a subset of $\Bbb Z$.</p>
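<p>To see this concretely (an illustrative sketch of mine): take $P = \Bbb Q$ and order $\Bbb Q\times\Bbb Z$ lexicographically. Python tuples compare lexicographically out of the box, so we can spot-check on a finite sample that $(q, k+1)$ is an immediate successor of $(q, k)$: nothing lies strictly between.</p>

```python
from fractions import Fraction
from itertools import product

# a finite sample of Q x Z; tuples compare lexicographically in Python
rationals = [Fraction(p, q) for p in range(-4, 5) for q in range(1, 5)]
sample = set(product(rationals, range(-3, 4)))

def successor(elem):
    q, k = elem
    return (q, k + 1)

for elem in sample:
    succ = successor(elem)
    # no sampled element lies strictly between elem and its successor
    assert not any(elem < other < succ for other in sample)
```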
|
3,246,240 | <p>I have the following problem</p>
<p><span class="math-container">$\frac{d^2y}{dx^2} + \lambda y = 0 , y'(0)=0$</span> and <span class="math-container">$y(3)=0$</span></p>
<p>I'm trying to solve for the eigenvalues <span class="math-container">$\lambda_n$</span> for <span class="math-container">$n=1,2,3...$</span>
and eigenfunctions <span class="math-container">$y_n$</span> for <span class="math-container">$n=1,2,3...$</span></p>
<p>I'm considering all cases for the values of <span class="math-container">$\lambda$</span>:</p>
<p><span class="math-container">$\lambda = 0$</span>: <span class="math-container">$y= Ax+B$</span> and <span class="math-container">$y' = A$</span> - applying conditions yields <span class="math-container">$A=0=B$</span></p>
<p>Now i get stuck on the cases for <span class="math-container">$\lambda < 0$</span> and <span class="math-container">$\lambda > 0$</span>.</p>
<p>I have tried a similar approach to the <span class="math-container">$\lambda = 0$</span> case by setting <span class="math-container">$\lambda = p^2 >0$</span> for example but I'm unsure where to go from here</p>
<p>Any help or guidance will be <strong>greatly</strong> appreciated!</p>
<p>edit: my latest attempt</p>
<p><img src="https://i.stack.imgur.com/smiUu.jpg" alt="enter image description here"></p>
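<p>edit 2: for the <span class="math-container">$\lambda = p^2 > 0$</span> case I tried the candidates <span class="math-container">$y=\cos(px)$</span> with <span class="math-container">$\cos(3p)=0$</span>, i.e. <span class="math-container">$p_n=(2n-1)\pi/6$</span>; a quick numeric check (just a sketch, these candidate values are my own guess) confirms they satisfy both boundary conditions:</p>

```python
import math

def p_n(n):
    # candidate p for lambda = p^2, coming from cos(3p) = 0
    return (2 * n - 1) * math.pi / 6

def y(x, n):
    return math.cos(p_n(n) * x)

def dy(x, n, h=1e-7):
    # central finite-difference approximation of y'
    return (y(x + h, n) - y(x - h, n)) / (2 * h)

for n in range(1, 5):
    assert abs(dy(0.0, n)) < 1e-6   # y'(0) = 0
    assert abs(y(3.0, n)) < 1e-9    # y(3) = 0
```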
| Kavi Rama Murthy | 142,385 | <p>The Riemann Lebesgue Lemma tells you that <span class="math-container">$\hat {f}(x) \to 0$</span> as <span class="math-container">$x \to \pm \infty$</span>. Any continuous function with this property is uniformly continuous. </p>
|
2,596,700 | <p>We can see, for example, that the years 2009 and 2015 have identical calendars. Similarly, 2000 and 2028.</p>
<p>I read once that given any year X at most 28 years later there will be another year Y, with the calendar identical to that of X.</p>
<p>Here I am referring to our usual calendar, the Gregorian.</p>
<p>I have not yet been able to prove such an assertion.
I ask for help.</p>
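<p>For experimenting: two Gregorian years have identical calendars exactly when they share their leap status and the weekday of January 1. A small sketch using Python's standard library (my own) reproduces the examples above, and checks the 28-year bound over a stretch away from the skipped century leap years:</p>

```python
from calendar import isleap
from datetime import date

def signature(year):
    # a year's calendar is determined by its leap status and Jan 1's weekday
    return (isleap(year), date(year, 1, 1).weekday())

def next_same(year):
    y = year + 1
    while signature(y) != signature(year):
        y += 1
    return y

assert next_same(2009) == 2015
assert next_same(2000) == 2028
# within 1901-2069 (no century exceptions involved) the gap never exceeds 28:
assert max(next_same(y) - y for y in range(1901, 2070)) == 28
```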
| Ѕᴀᴀᴅ | 302,797 | <p>Since $y = \mathrm{e}^x$ is a continuous function and$$
\left(1 - \frac{1}{a_n}\right)^n = \exp\left(n \ln\left(1 - \frac{1}{a_n}\right)\right),
$$
then$$
\lim_{n \to \infty} \left(1 - \frac{1}{a_n}\right)^n \ \text{exists} \Longleftrightarrow \lim_{n \to \infty} n \ln\left(1 - \frac{1}{a_n}\right) \ \text{exists}.
$$
Given that $a_n \to \infty \ (n \to \infty)$, thus$$
\lim_{n \to \infty} \frac{1}{a_n} = 0 \Longrightarrow \ln\left(1 - \frac{1}{a_n}\right) \sim -\frac{1}{a_n} \ (n \to \infty).
$$
Therefore,$$
\lim_{n \to \infty} n \ln\left(1 - \frac{1}{a_n}\right) \ \text{exists} \Longleftrightarrow \lim_{n \to \infty} \frac{n}{a_n} \ \text{exists}.
$$</p>
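<p>A quick numeric illustration (my own sketch): with $a_n = 2n$ we have $n/a_n \to 1/2$, so the sequence should approach $\mathrm{e}^{-1/2}$, while $a_n = n^2$ gives limit $\mathrm{e}^0 = 1$:</p>

```python
import math

def term(n, a):
    return (1 - 1 / a) ** n

# a_n = 2n: n/a_n -> 1/2, so the terms approach e^(-1/2)
for n in (10**3, 10**5):
    assert abs(term(n, 2 * n) - math.exp(-0.5)) < 1e-2

# a_n = n^2: n/a_n -> 0, so the terms approach 1
assert abs(term(10**5, 10**10) - 1.0) < 1e-3
```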
|
2,750,790 | <p>A real-valued function has to output a real number or a vector that is composed only of real numbers. </p>
<p>But can a scalar-valued function or vector-valued function output an imaginary number if it is not also labeled as real-valued?</p>
<p>For example, is this a scalar valued function?</p>
<p>$$
f:x \mapsto x\sqrt{-1}
$$</p>
| Arnaud Mortier | 480,423 | <p><em>Scalar</em> means <em>with values in the base field</em>, whatever that field is. It depends on the context. The terminology is derived from the fact that when you multiply a set of vectors by a number, you <em>scale</em> the picture (you change the scale but not the overall shape). Hence numbers are "scalers".</p>
<p>When geometry and linear algebra get axiomatised, and you start working over an arbitrary vector/affine space over some base field, scalars are elements of the field. In particular, it could very well be the field of complex numbers.</p>
|
2,750,790 | <p>A real-valued function has to output a real number or a vector that is composed only of real numbers. </p>
<p>But can a scalar-valued function or vector-valued function output an imaginary number if it is not also labeled as real-valued?</p>
<p>For example, is this a scalar valued function?</p>
<p>$$
f:x \mapsto x\sqrt{-1}
$$</p>
| giobrach | 332,594 | <p>A general truth in mathematics is that functions do <em><strong>not</strong></em> exist independently from their domain and codomain. This is a misconception generated by those pre-calc problems where they asked you to "find the domain" of a certain formula. So <span class="math-container">$f$</span> by itself (the "mapping rule") is <em>not</em> a function; <span class="math-container">$f : A \to B$</span>, instead, <em>is</em>. Therefore, you <em>need</em> to specify what <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are when defining a function.</p>
<p>Real numbers live in <span class="math-container">$\mathbb R$</span>. Real euclidean vectors live in <span class="math-container">$\mathbb R^n$</span> for some <span class="math-container">$n$</span>. A function from <span class="math-container">$f : \mathbb R \to \mathbb R$</span> is called <em>real(-valued) function of one real variable</em>. A function <span class="math-container">$f : \mathbb R \to \mathbb R^n$</span> is called <em>vector(-valued) function of one real variable</em>. A function <span class="math-container">$f : \mathbb R^k \to \mathbb R$</span> is called a <em>real(-valued) function of <span class="math-container">$k$</span> real variables</em>. Just guess what functions like <span class="math-container">$f : \mathbb R^k \to \mathbb R^n$</span> are.</p>
<p>In the context of real vector spaces, and in particular <span class="math-container">$\mathbb R^n$</span>, <em>scalars</em> are the elements of the base field of the vector space. In the case of <span class="math-container">$\mathbb R^n$</span>, that field is most naturally <span class="math-container">$\mathbb R$</span>. So any time you find "<em>real(-valued)</em>" in the above nomenclatures, you may substitute <em>scalar(-valued)</em>.</p>
<p>As you know, the imaginary unit <span class="math-container">$i$</span> is not in <span class="math-container">$\mathbb R$</span>, so the function you proposed (wherever your <span class="math-container">$x$</span> comes from) may <em>never</em> be of the kinds listed above. However, if <span class="math-container">$x$</span> came from <span class="math-container">$\mathbb C$</span> or <span class="math-container">$\mathbb C^n$</span>, the map <span class="math-container">$x \mapsto ix$</span> would not be such a strange sight.</p>
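<p>To make this concrete: over <span class="math-container">$\mathbb C$</span> the map <span class="math-container">$x \mapsto ix$</span> is a perfectly ordinary scalar-valued function, and it is even <span class="math-container">$\mathbb C$</span>-linear. A tiny sketch (my own, just illustrative):</p>

```python
def f(x):
    # scalar-valued over the base field C
    return 1j * x

a, b, c = 2 + 3j, -1 + 0.5j, 4 - 2j
assert f(a + b) == f(a) + f(b)   # additivity
assert f(c * a) == c * f(a)      # homogeneity over C
assert f(1) == 1j                # a real input maps outside R; fine over C
```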
|
<p>Hi: I'm reading some introductory notes on Hilbert spaces and there is a step in a proof that I don't follow. I will put the exact statement below. If someone could explain how it is obtained, it's appreciated. Note that a comma between two terms enclosed in angle brackets denotes the inner product. Also, $e_{n}$ for $n = 1,2,3,\ldots$ is a complete orthonormal sequence in a Hilbert space $H$ and $x$ is in $H$.</p>
<p>Proof: Observe that</p>
<p>\begin{eqnarray*}
0 \le \left\| x - \sum_{n=1}^{m} <x,e_{n}>e_{n} \right\|^2
& = & \left< x - \sum_{n=1}^{m} <x,e_{n}>e_{n}, x - \sum_{n=1}^{m} <x,e_{n}>e_{n} \right> \\
& = & \left< x, x - \sum_{n=1}^{m} <x,e_{n}>e_{n} \right>
- \sum_{n=1}^{m} <x, e_{n}> \left < e_{n}, x - \sum_{n=1}^{m} <x,e_{n}>e_{n} \right > \\
& = & ||x||^2 - \sum_{n=1}^{m} |<x, e_{n}>|^2
\end{eqnarray*}</p>
<p>I understand the first two lines above.
My question is how one goes from the second-to-last line to the last line. Thanks for your help.</p>
| ptrsinclair | 247,301 | <p>Note that the first inner product in the second-last line is:
$$ \left\langle x, x - \sum_{n=1}^m \langle x,e_n\rangle e_n\right\rangle = \langle x,x\rangle - \sum_{n=1}^m \overline{\langle x,e_n\rangle} \langle x,e_n\rangle = \|x\|^2 - \sum_{n=1}^m |\langle x,e_n\rangle|^2 $$
so we would hope that the second is equal to zero. We have
$$ \sum_{n=1}^m \langle x,e_n\rangle\left\langle e_n, x - \sum_{k=1}^m \langle x,e_k\rangle e_k\right\rangle = \sum_{n=1}^m \langle x,e_n\rangle\langle e_n,x\rangle - \sum_{n=1}^m \langle x,e_n\rangle \sum_{k=1}^m \overline{\langle x,e_k\rangle} \langle e_n,e_k\rangle $$
Luckily, $\langle e_n,e_k\rangle = 1$ if $n=k$ and $0$ otherwise, and $\langle e_n,x\rangle = \overline{\langle x,e_n\rangle}$, so the two terms in the difference are equal.</p>
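<p>A quick numerical illustration (my sketch) of the resulting identity $\|x - \sum_{n=1}^m \langle x,e_n\rangle e_n\|^2 = \|x\|^2 - \sum_{n=1}^m |\langle x,e_n\rangle|^2$, using the first $m$ standard basis vectors of $\mathbb{C}^5$:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
m = 3
E = np.eye(5, dtype=complex)[:m]      # e_1, ..., e_m (orthonormal rows)

coeffs = E.conj() @ x                 # <x, e_n>, conjugate-linear in the second slot
residual = x - coeffs @ E             # x minus its projection
lhs = np.linalg.norm(residual) ** 2
rhs = np.linalg.norm(x) ** 2 - np.sum(np.abs(coeffs) ** 2)
assert np.isclose(lhs, rhs)
assert lhs >= 0                       # Bessel's inequality in disguise
```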
|
4,642,566 | <p>For <span class="math-container">$x, y ∈$</span> <span class="math-container">$\mathbb{R}$</span>, let <span class="math-container">$x△y = 2(x + y)$</span>. Then <span class="math-container">$△$</span> is a binary operation on <span class="math-container">$\mathbb{R}$</span>.</p>
<p>Show that there is no identity element for <span class="math-container">$△$</span> on <span class="math-container">$\mathbb{R}$</span>.</p>
<p>I have tried <span class="math-container">$x△e = e△x=x$</span></p>
<p>I don't know what else to do.</p>
| HeroZhang001 | 1,123,708 | <p>Suppose we have an identity <span class="math-container">$e$</span>. Then</p>
<p><span class="math-container">$$0 \triangle e =0\implies2(0+e)=0\implies e=0.$$</span></p>
<p>But</p>
<p><span class="math-container">$$2 \triangle e =2\implies2(2+e)=2\implies e=-1.$$</span></p>
<p>Thus <span class="math-container">$-1=e=0$</span>. This is contradictory. So there is no such an identity <span class="math-container">$e$</span>.</p>
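<p>The same contradiction can be spot-checked mechanically; this is just an illustrative sketch of the argument above:</p>

```python
def op(x, y):
    return 2 * (x + y)

# if e were an identity, op(x, e) = x for every x;
# x = 0 forces e = 0, but e = 0 then fails at x = 2:
assert op(0, 0) == 0
assert op(2, 0) == 4 != 2
# no single e works for all sampled x:
assert not any(all(op(x, e) == x for x in range(-5, 6)) for e in range(-10, 11))
```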
|