qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
4,259,561 | <p>I am looking for the derivation of the closed form along any given diagonal <span class="math-container">$a$</span> of Pascal's triangle,<br />
<span class="math-container">$$\sum_{k=a}^n {k\choose a}\frac{1}{2^k}=?$$</span>
Numbered observations follow. As for the limit proposed in the title, it is given by:</p>
<p><strong>Observation 1</strong></p>
<p><span class="math-container">$$\sum_{k=a}^\infty {k\choose a}\frac{1}{2^k}=2,$$</span>
when I calculate the sums numerically using MS Excel for any <span class="math-container">$a$</span> within the domain (<span class="math-container">$0\le a \le100$</span>) the sum approaches 2.000000 in all cases within total steps <span class="math-container">$n\le285$</span>. The first series with <span class="math-container">$a=0$</span> is a familiar geometric series, and perhaps others look familiar to you as well:</p>
<p><span class="math-container">$$\sum_{k=0}^\infty {k\choose 0}\frac{1}{2^k}=1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+... =2,$$</span>
<span class="math-container">$$\sum_{k=1}^\infty {k\choose 1}\frac{1}{2^k}=\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{1}{4}+\frac{5}{32}... =2,$$</span>
<span class="math-container">$$\sum_{k=2}^\infty {k\choose 2}\frac{1}{2^k}=\frac{1}{4}+\frac{3}{8}+\frac{3}{8}+\frac{5}{16}+\frac{15}{64}... =2,$$</span>
but it is both surprising and elegantly beautiful that these sums across all diagonals appear to approach the same value. Some additional observations from the numerically determined sums:</p>
<p><strong>Observation 2</strong></p>
<p>The maximum value of any term <span class="math-container">${k\choose a}\frac{1}{2^k}$</span> within a diagonal <span class="math-container">$a$</span> for the domain <span class="math-container">$(a>0)$</span> is attained at <span class="math-container">$k=2a-1$</span> and repeated for the term immediately following (<span class="math-container">$k=2a$</span>).</p>
<p><strong>Observation 3</strong>
<span class="math-container">$$\sum_{k=a}^{2a} {k\choose a}\frac{1}{2^k}=1$$</span>
<strong>Observation 4</strong>
<span class="math-container">$$\sum_{k=a}^{n} {k\choose a}\frac{1}{2^k} + \sum_{k=n-a}^{n} {k\choose n-a}\frac{1}{2^k}=2$$</span>
It's very likely that the general closed form has been derived before, but searching for the past several days has produced no results. It appears that setting up the appropriate generating function may play a role, but I am at a loss as to how to proceed. Looking forward to the responses.</p>
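As a sanity check (not the requested derivation), Observations 1 and 3 can be verified numerically; this is an illustrative Python sketch standing in for the spreadsheet computation mentioned above, using exact fractions so Observation 3 can be tested for equality.

```python
# Illustrative check of Observations 1 and 3 using exact fractions.
from fractions import Fraction
from math import comb

def diagonal_sum(a, n):
    """Exact partial sum of C(k, a) / 2^k for k = a .. n."""
    return sum(Fraction(comb(k, a), 2**k) for k in range(a, n + 1))

# Observation 1: for every diagonal a, the infinite sum approaches 2.
approx = [float(diagonal_sum(a, a + 400)) for a in (0, 1, 2, 10, 50)]

# Observation 3: the sum from k = a to k = 2a is exactly 1.
first_half = [diagonal_sum(a, 2 * a) for a in range(30)]
```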
| Robert Shore | 640,080 | <p>You have <span class="math-container">$25$</span> possible choices for the clockwise member of the pair that's sitting together. Once you've selected that pair (and assuming the question asks for <em>exactly</em> two members to be adjacent to one another), you can choose any of the <span class="math-container">$21$</span> remaining people who are not adjacent to the pair as the third member of your team. So the answer is <span class="math-container">$\frac{25 \cdot 21 \cdot 6}{25 \cdot 24 \cdot 23}=\frac{21}{92}.$</span></p>
|
1,218,582 | <p>I was presented with the function $\max (|x|,|y|)$, which should output the maximum of the two given values. I can only suppose it creates some body in $\mathbb{R}^3$, but how do you sketch it, and what does it mean in $\mathbb{R}^3$? For that matter, I can't really picture it in $\mathbb{R}^2$ either.</p>
| MPW | 113,214 | <p>The most revealing approach would probably be to draw the set of level curves in the plane. This projects the slices through the graph by horizontal planes back down onto the domain plane, like a topographic map.</p>
<p>The equation of the level curve corresponding to the planar section at height $c$ is
$$\max(|x|,|y|)=c$$
This curve is a square centered at the origin with side $2c$ (for $c\geq0$). Note that the level curves are empty for $c<0$.</p>
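The square-level-curve description can be spot-checked numerically; this is an illustrative sketch (the `height` helper is mine, not from the answer):

```python
# Sample the boundary of the square [-c, c] x [-c, c]: every point there
# satisfies max(|x|, |y|) = c, while interior points give a smaller value.
def height(x, y):
    return max(abs(x), abs(y))

c = 2.0
ts = [-c + 2 * c * i / 100 for i in range(101)]   # parameter along one side
boundary = ([(t, c) for t in ts] + [(t, -c) for t in ts] +
            [(c, t) for t in ts] + [(-c, t) for t in ts])
on_level = all(abs(height(x, y) - c) < 1e-12 for x, y in boundary)
inside = height(0.5, -1.0) < c                    # an interior point
```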
<p><strong>Comment:</strong> This is essentially what was suggested by @David's answer.</p>
|
3,105,664 | <p><span class="math-container">$$
I_n=\int_{0}^{1}\frac{(1-x)^n}{n!}e^x\,dx
$$</span></p>
<blockquote>
<p>Prove that
<span class="math-container">$$
I_n=\frac{1}{(n+1)!}+I_{n+1}
$$</span></p>
</blockquote>
<p>I tried integration by parts and still can't prove it; I'd appreciate any hint or answer.</p>
| Zacky | 515,527 | <p>I used <span class="math-container">$n!$</span> as in the original image.<span class="math-container">$$I_n=\int_{0}^{1}\frac{(1-x)^n}{n!}e^x\,dx=\frac{1}{n!}\int_0^1 \left(-\frac{(1-x)^{n+1}}{n+1}\right)'e^x\,dx$$</span>
<span class="math-container">$$=\underbrace{-\frac{1}{n!}\frac{(1-x)^{n+1}}{n+1} e^x\bigg|_0^1}_{=0-\left(-\large\frac{1}{(n+1)!}\right)} +\int_0^1 \frac{(1-x)^{n+1}}{(n+1)!}e^xdx $$</span>
<span class="math-container">$$\Rightarrow I_n=\frac{1}{(n+1)!} +I_{n+1}$$</span></p>
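The recursion can be confirmed numerically; the following sketch (not part of the proof) approximates each <span class="math-container">$I_n$</span> with composite Simpson's rule.

```python
# Approximate I_n = int_0^1 (1 - x)^n e^x / n! dx with Simpson's rule
# and check the recursion I_n = 1/(n+1)! + I_{n+1} numerically.
from math import exp, factorial

def I(n, steps=2000):                    # steps must be even for Simpson
    f = lambda x: (1 - x)**n * exp(x) / factorial(n)
    h = 1.0 / steps
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

residuals = [abs(I(n) - (1 / factorial(n + 1) + I(n + 1))) for n in range(6)]
```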
|
2,267,165 | <p>Find the distance between the two parallel planes?</p>
<p>$$a: x-2y+3z-2=0$$
$$b: 2x-4y+6z-1=0$$</p>
<p>The given answer is: $\dfrac{3}{\sqrt{56}}$</p>
| hamam_Abdallah | 369,188 | <p>Take a point in the plane $(a)$.</p>
<p>For example, $A=(2,0,0)$.</p>
<p>The distance from it to the plane $(b)$ is</p>
<p>$$\frac {|2\cdot 2- 4\cdot 0 +6\cdot 0-1|}{ \sqrt {2^2+4^2+6^2 } }=$$
$$\frac {3}{\sqrt {56}} $$</p>
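A quick numeric check of this computation (the point and plane data are from the answer; the helper code is a sketch):

```python
# A = (2, 0, 0) lies on plane (a); its distance to plane (b) is 3/sqrt(56).
from math import sqrt, isclose

A = (2.0, 0.0, 0.0)
on_a = A[0] - 2 * A[1] + 3 * A[2] - 2      # plug A into x - 2y + 3z - 2
n, d = (2.0, -4.0, 6.0), -1.0              # plane (b): 2x - 4y + 6z - 1 = 0
dist = abs(sum(ni * xi for ni, xi in zip(n, A)) + d) / sqrt(sum(ni * ni for ni in n))
```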
|
1,832,080 | <p>The converse is pretty obvious. If $G$ is a cycle, then it is isomorphic to its line graph.
How does one prove that if $L(G)$ is isomorphic to $G$, then $G$ is a cycle?</p>
<p><strong>P.S.</strong>- Assume G is connected</p>
| Ashwin Ganesan | 157,927 | <p>Let $G$ be a connected graph on $n$ vertices and $m$ edges and suppose $G$ is isomorphic to its line graph $L(G)$. Then, $m=n$ and so $G$ is a connected graph having $n$ edges. This implies $G$ is of the form $T+e$, where $T$ is a tree and $e$ is an edge not in $T$. If $G=T+e$ is the $n$-cycle graph, then we are done. </p>
<p>So suppose $T+e$ is not a cycle. Then, $T+e$ has a vertex of degree 3 or more. Three of the edges incident to this vertex in $G$ form a cycle of length 3 in $L(G)$. Also, the unique cycle in $G$ obtained by adding edge $e$ to the tree $T$ induces a cycle in $L(G)$. Hence, $L(G)$ has at least two cycles. We showed that $G$ has only one cycle and that $L(G)$ has two or more cycles, which is a contradiction because $G \cong L(G)$. So this case is impossible, and $T+e$ must be a cycle.</p>
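The two key facts used above — the line graph of a cycle is again a cycle, and a vertex of degree 3 forces a triangle in $L(G)$ — can be illustrated on small examples; the `line_graph` helper below is mine, not part of the proof.

```python
# Vertices of L(G) are the edges of G; two are adjacent iff they share
# an endpoint.
from itertools import combinations

def line_graph(edges):
    es = [frozenset(e) for e in edges]
    return [(i, j) for i, j in combinations(range(len(es)), 2) if es[i] & es[j]]

# C_5: its line graph has 5 vertices, 5 edges, all degrees 2 -- a 5-cycle.
c5 = [(i, (i + 1) % 5) for i in range(5)]
lg = line_graph(c5)
degrees = [sum(1 for e in lg if i in e) for i in range(5)]

# K_{1,3} (a degree-3 vertex): its three edges pairwise intersect,
# so the line graph is a triangle.
lg_star = line_graph([(0, 1), (0, 2), (0, 3)])
```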
|
948,329 | <p>I have come across this trig identity and I want to understand how it was derived. I have never seen it before, nor have I seen it in any of the online resources including the many trig identity cheat sheets that can be found on the internet.</p>
<p>$A\cdot\sin(\theta) + B\cdot\cos(\theta) = C\cdot\sin(\theta + \Phi)$</p>
<p>Where $C = \pm \sqrt{A^2+B^2}$</p>
<p>$\Phi = \arctan(\frac{B}{A})$</p>
<p>I can see that Pythagorean theorem is somehow used here because of the C equivalency, but I do not understand how the equation was derived. </p>
<p>I tried applying the sum-of-two-angles identity for sine, i.e. $\sin(a \pm b) = \sin(a)\cdot\cos(b) \pm \cos(a)\cdot\sin(b)$</p>
<p>But I am unsure what the next step is, in order to properly understand this identity.</p>
<p>Where does it come from? Is it a normal identity that mathematicians should have memorized?</p>
| Jasser | 170,011 | <p>Hint: just divide and multiply the LHS by $C$, set $\frac AC =\cos \phi$ and similarly $\frac BC =\sin \phi$, and then try to simplify.</p>
<p>To get the proof consider</p>
<p>\begin{align}
\sin (a-b) &= \cos (\pi/2-(a-b)) \\
&=\cos ((\pi/2-a)+b)
\end{align}
following on from this, one can then apply the appropriate formula for $\cos(a-b)$.</p>
<p>To find a proof of even this, try searching on Google or follow this Wikipedia link: <a href="http://en.m.wikipedia.org/wiki/Proofs_of_trigonometric_identities" rel="nofollow">http://en.m.wikipedia.org/wiki/Proofs_of_trigonometric_identities</a> </p>
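A numeric spot-check of the identity (sketch only; it uses `atan2` so the arctangent lands on the branch matching the sign of $A$, which is what the $\pm$ on $C$ accounts for):

```python
# Check A sin(t) + B cos(t) == C sin(t + phi) on a grid of angles.
from math import sin, cos, atan2, sqrt, isclose

A, B = 3.0, -2.0
C = sqrt(A**2 + B**2)
phi = atan2(B, A)   # arctan(B/A), on the branch matching the sign of A

ok = all(isclose(A * sin(t) + B * cos(t), C * sin(t + phi), abs_tol=1e-9)
         for t in (k * 0.1 for k in range(-60, 61)))
```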
|
3,933,296 | <p>What I already have,</p>
<ol>
<li>Palindrome in form XYZYX, where X can’t be 0.</li>
<li>Divisibility rule of 9: sum of digits is divisible by 9. So, we have 2(X+Y)+Z = 9M.</li>
<li>The first part is divisible by 9 if and only if X+Y is divisible by 9. So, we have 10 pairs out of 90. And for each such pair the total sum is divisible by 9 when Z is also divisible by 9. There are 2 such Zs: 0, 9. So, there are 20 divisible palindromes.</li>
<li>If (X+Y) mod 9 = 1, then 2(X+Y) mod 9 = 2; and in order for the total sum to be divisible by 9, Z must have the remainder of 1 when divided by 9. There is 1 such Z: 1. And again, we have 10 xy pairs with the given remainder. So, this case yields 10*1 = 30 more palindromes.</li>
<li>Same logic as on previous step applies to the case when 2(X+Y) mod 9 = 2.</li>
<li>So, total number of divisible palindromes = 80?</li>
</ol>
<p>When using this method, I only get 80 five-digit palindromes divisible by 9(?). I don't think I'm doing this method correctly; can someone show me what's going on here?</p>
| Measure me | 854,564 | <p>The answer is <span class="math-container">$100$</span>; my argument is as follows.</p>
<p><span class="math-container">$X$</span> is choosen as a digit from <span class="math-container">$1$</span> to <span class="math-container">$9$</span>, and <span class="math-container">$Y$</span> from <span class="math-container">$0$</span> to <span class="math-container">$9$</span>,
now you want that <span class="math-container">$9|(2(X+Y)+Z)$</span>.</p>
<p>Two cases:</p>
<p>(1) if <span class="math-container">$9|2(X+Y)$</span>, then <span class="math-container">$2(X+Y)=9k$</span> for some natural number <span class="math-container">$k>0$</span>, and since <span class="math-container">$Z$</span> is an integer from <span class="math-container">$0$</span> to <span class="math-container">$9$</span> you would have that both <span class="math-container">$Z=0,9$</span> work.</p>
<p>(2) if <span class="math-container">$9\not|2(X+Y)$</span>, then there exists a natural number <span class="math-container">$k$</span> such that <span class="math-container">$9(k-1)<2(X+Y)<9k$</span>, and again because <span class="math-container">$0\le Z\le 9$</span> you obtain this time that there exists only one value of <span class="math-container">$Z$</span> that satisfies the problem, namely <span class="math-container">$9k-2(X+Y)$</span>.</p>
<p>So we want to find when <span class="math-container">$9|2(X+Y) \Rightarrow 9|(X+Y)$</span>: so if <span class="math-container">$1\le X\le8$</span>, then <span class="math-container">$Y=9-X$</span>; if <span class="math-container">$X=9$</span>, then <span class="math-container">$Y=0,9$</span> work. So <span class="math-container">$10$</span> cases where (1) happens</p>
<p>To conclude: there are <span class="math-container">$9\cdot 10=90$</span> possibilities between <span class="math-container">$X,Y$</span> of which <span class="math-container">$10$</span> cases are (1) and <span class="math-container">$80$</span> are (2), so the final number is <span class="math-container">$10\cdot 2 + 80\cdot 1=100$</span></p>
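The count of <span class="math-container">$100$</span> can be confirmed by brute force, independently of the argument above (illustrative sketch):

```python
# Count all 5-digit palindromes divisible by 9 directly.
pals = [n for n in range(10000, 100000)
        if (s := str(n)) == s[::-1] and n % 9 == 0]
count = len(pals)
```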
|
3,506,091 | <p>Solve <span class="math-container">$2^m=7n^2+1$</span> with <span class="math-container">$(m,n)\in \mathbb{N}^2$</span></p>
<p>Here is what I did:
First try: I noticed that the obvious solutions are <span class="math-container">$n=1$</span>, <span class="math-container">$m=3$</span> and <span class="math-container">$n=3$</span>, <span class="math-container">$m=6$</span>. Then I proved by simple congruences that <span class="math-container">$m$</span> must be divisible by <span class="math-container">$3$</span>, so <span class="math-container">$m=3k$</span>. If we add <span class="math-container">$27$</span> to the equation we get <span class="math-container">$2^{3k}+3^3=7(n^2+2^2)$</span>, but I tried to do something with the Legendre symbol or the multiplicative order and found nothing interesting.</p>
<p>Second try: I let <span class="math-container">$n=2k+1$</span>, then worked in <span class="math-container">$\mathbb{Z}\left[ \frac{-1+\sqrt{-7}}{2} \right] $</span>; the equation becomes <span class="math-container">$7\times 2^{m-2}=\left( 7k+4+\frac{-1+\sqrt{-7}}{2} \right) \left( 7k+3-\frac{-1+\sqrt{-7}}{2} \right) $</span>, but I didn't find anything useful because the two factors are not coprime.</p>
| Barry Cipra | 86,747 | <p>If <span class="math-container">$2^m=7n^2+1$</span> with <span class="math-container">$m=3k$</span>, as the OP found must be the case (since <span class="math-container">$2^m\equiv1$</span> mod <span class="math-container">$7$</span>), we have</p>
<p><span class="math-container">$$7n^2=2^{3k}-1=(2^k-1)(2^{2k}+2^k+1)$$</span></p>
<p>Now for <span class="math-container">$k\ge1$</span> we have</p>
<p><span class="math-container">$$\begin{align}
\gcd(2^k-1,2^{2k}+2^k+1)
&=\gcd(2^k-1,2^{2k}+2^{k+1})\\
&=\gcd(2^k-1,2^{k+1}(2^{k-1}+1))\\
&=\gcd(2^k-1,2^{k-1}+1)\\
&=\gcd(2^k+2^{k-1},2^{k-1}+1)\\
&=\gcd(3\cdot2^{k-1},2^{k-1}+1)\\
&=\gcd(3,2^{k-1}+1)\\
&=\begin{cases}
3\quad\text{if $k$ is even}\\
1\quad\text{if $k$ is odd}
\end{cases}
\end{align}$$</span></p>
<p>If <span class="math-container">$k$</span> is even, we proceed as in Mastrem's answer: <span class="math-container">$k=2h$</span> implies <span class="math-container">$7n^2=2^{6h}-1=(2^{3h}-1)(2^{3h}+1)$</span> with <span class="math-container">$\gcd(2^{3h}-1,2^{3h}+1)=1$</span> and <span class="math-container">$7\mid(2^{3h}-1)$</span>, so <span class="math-container">$2^{3h}+1$</span> must be a square, but <span class="math-container">$2^{3h}+1=N^2$</span> implies <span class="math-container">$2^{3h}=(N-1)(N+1)$</span>, which holds only for <span class="math-container">$N=3$</span>, corresponding to the known solution with <span class="math-container">$m=6$</span>.</p>
<p>If <span class="math-container">$k$</span> is odd, then we must have <span class="math-container">$7$</span> divide one of the factors in <span class="math-container">$(2^k-1)(2^{2k}+2^k+1)$</span> and the other factor be a square. If <span class="math-container">$k\ge3$</span>, <span class="math-container">$2^k-1\equiv-1$</span> mod <span class="math-container">$8$</span>, which is not a square, so we must have <span class="math-container">$7\mid2^k-1$</span>, which implies <span class="math-container">$k=3h$</span> (with <span class="math-container">$h$</span> odd, but that's no longer important), from which it follows that <span class="math-container">$2^{6h}+2^{3h}+1$</span> is a square. But <span class="math-container">$2^{6h}+2^{3h}+1\equiv1+1+1\equiv3$</span> mod <span class="math-container">$7$</span>, which is not a square. So the case of odd <span class="math-container">$k$</span> leaves only <span class="math-container">$k=1$</span>, corresponding to the other known solution, <span class="math-container">$m=3$</span>.</p>
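A brute-force search over small exponents is consistent with this conclusion (a check, not a proof; the cutoff of 200 is an arbitrary choice of mine):

```python
# Search 2^m = 7 n^2 + 1 for 1 <= m < 200.
from math import isqrt

solutions = []
for m in range(1, 200):
    rhs = 2**m - 1
    if rhs % 7 == 0:
        n2 = rhs // 7
        n = isqrt(n2)
        if n * n == n2:
            solutions.append((m, n))
```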
|
1,059,939 | <blockquote>
<p>Can a function exist which is both $o(g(n))$ and $\omega(g(n))$?</p>
</blockquote>
<p>Wouldn't this imply
$$m |g(n)| \le |f(n)| \le k |g(n)| $$</p>
<p>If $f(n) = g(n)$ then wouldn't an arbitrary integer $m$ be greater than $f(n)$?</p>
<p>If $f(n) \ne g(n)$ wouldn't for $n$ sufficiently large the equality fail?</p>
<p>I want to make sure I'm thinking about this correctly. </p>
| alexjo | 103,399 | <p>The distribution of $V$ is $\chi^2(n)$. The square root of a $\chi^2(n)$ random variable is a <a href="http://mathworld.wolfram.com/ChiDistribution.html" rel="noreferrer">$\chi(n)$ random variable</a>.</p>
<p>Let the random variable $V$ have the chi-square distribution with $n$ degrees of freedom
with probability density function
$$
f_V(v) =
\begin{cases}
\frac{v^{n/2-1} e^{-v/2}}{2^{n/2} \Gamma\left(\frac{n}{2}\right)}, & v \geq 0; \\ 0, & \text{otherwise}.
\end{cases}
$$
By the transformation technique, using the transformation $Y=g(V)=\sqrt{V}$, we have
$$
f_Y(y)=f_V(g^{-1}(y))\left|\frac{dv}{dy}\right|=\frac{(y^2)^{n/2-1} e^{-y^2/2}}{2^{n/2} \Gamma\left(\frac{n}{2}\right)}|2y|=\frac{y^{n-1} e^{-y^2/2}}{2^{n/2-1} \Gamma\left(\frac{n}{2}\right)}\qquad y>0
$$
which is the probability density function of the chi distribution with $n$ degrees of freedom.</p>
<p>The expectation is
$$
\Bbb{E}(Y)=\int_{-\infty}^{\infty}yf_Y(y)\textrm{d}y=\int_{0}^{\infty}\frac{y^{n} e^{-y^2/2}}{2^{n/2-1} \Gamma\left(\frac{n}{2}\right)}\textrm{d}y=\sqrt{2}\frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}
$$</p>
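The closed form for the expectation can be cross-checked against a direct numerical integration of $y\,f_Y(y)$ (stdlib-only sketch; the integration cutoff and step count are arbitrary choices of mine):

```python
# Midpoint-rule integral of y * f_Y(y) over (0, upper), compared with
# sqrt(2) * Gamma((n+1)/2) / Gamma(n/2).
from math import gamma, exp, sqrt

def chi_mean_numeric(n, upper=30.0, steps=200000):
    h = upper / steps
    total = 0.0
    norm = 2**(n / 2 - 1) * gamma(n / 2)
    for i in range(steps):
        y = (i + 0.5) * h
        total += y * y**(n - 1) * exp(-y * y / 2) / norm
    return total * h

ns = (1, 2, 5)
closed = [sqrt(2) * gamma((n + 1) / 2) / gamma(n / 2) for n in ns]
numeric = [chi_mean_numeric(n) for n in ns]
```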
|
1,059,939 | <blockquote>
<p>Can a function exist which is both $o(g(n))$ and $\omega(g(n))$?</p>
</blockquote>
<p>Wouldn't this imply
$$m |g(n)| \le |f(n)| \le k |g(n)| $$</p>
<p>If $f(n) = g(n)$ then wouldn't an arbitrary integer $m$ be greater than $f(n)$?</p>
<p>If $f(n) \ne g(n)$ wouldn't for $n$ sufficiently large the equality fail?</p>
<p>I want to make sure I'm thinking about this correctly. </p>
| Kamster | 159,813 | <p>First note that $X^{2}_i\sim Gamma(\frac{1}{2},\frac{1}{2})$ for each $i$ (i.e. chi-square with df $=1$). Now, to find the distribution of $\sum_{i=1}^{n}X^2_{i}$, using moment generating functions and the fact that the $X_i$ are independent of each other, we see that
$$M_{\sum_{i=1}^{n}X^2_{i}}=M(t)^{n}=\left(\frac{\frac{1}{2}}{\frac{1}{2}-t}\right)^{\frac{n}{2}}$$
thus $\sum_{i=1}^{n}X^2_{i}\sim Gamma(\frac{n}{2},\frac{1}{2})$. With $V=\sum_{i=1}^{n}X^2_{i}$, we get
$$E(\sqrt{V})=\int_0^\infty \frac{\frac{1}{2}^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}v^{\frac{1}{2}}v^{\frac{n}{2}-1}e^{-\frac{1}{2}v}dv=\frac{\frac{1}{2}^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\frac{\Gamma(\frac{n+1}{2})}{\frac{1}{2}^{\frac{n+1}{2}}}\int^\infty_0\frac{\frac{1}{2}^{\frac{n+1}{2}}}{\Gamma(\frac{n+1}{2})}v^{\frac{n+1}{2}-1}e^{-\frac{1}{2}v}dv=\frac{\sqrt{2}\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})}$$</p>
|
1,189,216 | <p>Wikipedia and other sources claim that </p>
<p>$PA +\neg G_{PA}$</p>
<p>can be consistent, where $\neg G_{PA}$ is the Gödel statement for PA.</p>
<p>So what is the error in my reasoning?</p>
<p>$G_{PA}$ = "$G_{PA}$ is unprovable in PA"</p>
<p>$\neg G_{PA} $</p>
<p>$\implies$ $\neg$ "$G_{PA}$ is unprovable in PA"</p>
<p>$\implies$ "$G_{PA}$ is provable in PA" </p>
<p>$\implies$ $G_{PA}$</p>
<p>I would also appreciate it if someone could provide a somewhat intuitive explanation.</p>
<p>Sources:</p>
<ol>
<li><a href="http://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic#Arithmetic_unsoundness_for_models_with_.7EG_true" rel="nofollow noreferrer">Non-standard model of arithmetic on Wikipedia</a></li>
<li><a href="http://en.wikipedia.org/wiki/Rosser%27s_trick#Background" rel="nofollow noreferrer">Rosser's trick on Wikipedia</a></li>
<li><a href="https://math.stackexchange.com/a/16383/155096">Understanding Gödel's Incompleteness Theorem</a></li>
<li><a href="https://math.stackexchange.com/a/183106/155096">Is the negation of the Gödel sentence always unprovable too?</a></li>
</ol>
| Asaf Karagila | 622 | <p>The point is that $G_{PA}$ is neither provable nor refutable in $\sf PA$. But it is a concrete sentence, and in a given model of $\sf PA$ it has a concrete truth value.</p>
<p>But if it had the same truth value in all models of $\sf PA$, then the completeness theorem would tell us that either it or its negation is provable from $\sf PA$. Therefore there have to be some models where $G_{PA}$ is true and others where it is false; in particular both options are consistent with $\sf PA$.</p>
<hr>
<p>In your suggested reasoning you forget that, for example, $\sf PA+\lnot\operatorname{Con}(PA)$ is also consistent. There are models of $\sf PA$ that "think" that $\sf PA$ is not consistent, therefore can prove anything.</p>
<p>Those models are non-standard models, and the non-standard numbers introduce new lengths of formulas and proofs, new rules of inference, and new axioms to $\sf PA$. Because internally, $\sf PA$ (and rules of logic, etc.) are all just recursive predicates to be interpreted in some way.</p>
<p>The point here is that all the things we do with Godel numbering (represent logic and theories using integers) are just "definable predicates", and the sentence $G_{PA}$ is <strong><em>really</em></strong> just saying the following thing:</p>
<blockquote>
<p>Using the way described in $\varphi_1$ to understand numbers as formulas, and with the inference rules described by the formula $\varphi_2$ and the formula $\varphi_3$ describing the theory $\sf PA$, then if this theory as understood with all these rules is consistent, then there is no proof of this very sentence.</p>
</blockquote>
<p>But in non-standard models all these formulas will necessarily define sets which include non-standard integers (if only because they define unbounded sets in the standard model). So a non-standard model can have more inference rules, more proofs, more cowbell, and definitely more axioms to $\sf PA$. And there lies the contradiction from which we can show that every statement has a "code for a proof" (which may be of non-standard length, and it might be using non-standard formulas, and non-standard inference rules).</p>
|
1,520,028 | <p>I'm struggling to figure out how to find a bound on my error for this problem:</p>
<p>Let $T_{6}(x)$ be the Taylor polynomial of degree 6 based at $a = 0$ for the function $f(x)=\cos(x)$. Suppose you approximate $f(x)$ by $T_{6}(x)$. If $|x|\leq 1$, find a bound on the error in your approximation by using the alternating series estimate.</p>
<p>So far, I have</p>
<p>$f(x) = \cos(x)$, $f(0) = 1$</p>
<p>$f^{(1)}(x) = -\sin(x)$, $f^{(1)}(0) = 0$</p>
<p>$f^{(2)}(x) = -\cos(x)$, $f^{(2)}(0) = -1$</p>
<p>$f^{(3)}(x) = \sin(x)$, $f^{(3)}(0) = 0$</p>
<p>$f^{(4)}(x) = \cos(x)$, $f^{(4)}(0) = 1$</p>
<p>and</p>
<p>$T_6(x) = 1 - \frac{x^2}{4!} + \frac{x^4}{8!} - \frac{x^6}{12!}$</p>
<p>and that $b_n = \frac{x^8}{16!}$</p>
<p>I'll be honest, this section is way over my head and I'm struggling to make any sense out of it, so I'm not sure that I'm taking the correct approach here in general.</p>
<p>Any help is appreciated!</p>
| John Molokach | 90,422 | <p>Your denominators are incorrect. The factorials should be the same as the exponents of $x$. Also the alternating series error bound says that your $b_n$ term is an upper bound for the error of the 6th degree polynomial. </p>
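Following this correction, the polynomial with denominators $2!$, $4!$, $6!$ and the alternating-series bound (the first omitted term, $1/8!$ for $|x|\le 1$) can be checked numerically (illustrative sketch):

```python
# T6 with the corrected denominators 2!, 4!, 6!; the alternating-series
# bound for |x| <= 1 is the first omitted term, 1/8!.
from math import cos, factorial

def T6(x):
    return 1 - x**2 / factorial(2) + x**4 / factorial(4) - x**6 / factorial(6)

bound = 1 / factorial(8)
worst = max(abs(cos(x) - T6(x)) for x in (k / 100 for k in range(-100, 101)))
```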
|
1,248,331 | <p>Here's the question:</p>
<p>Let $a$ be a cardinality such that the following statement is true:</p>
<p>For every $A, C$, if $ A \subseteq C$, $|A| = a$ and $|C| > a$, then $|C \setminus A| > |A|$.</p>
<p>Without using cardinal arithmetic, prove that $a + a = a$.</p>
<p>This is how the question is written; maybe I'm not reading it well, because I can find a counterexample. Let $C$ be $\{1,2,3,4,5\}$, and $A$ be $\{3,4\}$.
Then $A \subseteq C$, the cardinality of $C$ is bigger than the cardinality of $A$, and $|C \setminus A| = 3 > 2$.</p>
<p>So the statement is indeed true, yet $a + a = 4 \neq 2$.</p>
<p>What am I missing here?
Thanks in advance.</p>
<p>Edit: Hagen von Eitzen and marini helped me see what I've missed, thanks. I am still trying to solve this now that I read the question right, so a hint or help from here will be much appreciated.</p>
<p>I think that by saying show that $a + a = a$, it means showing that $a$ has to be infinite, because when $a$ is finite then $a + a = 2a$; maybe I'm getting this all wrong...</p>
| Hagen von Eitzen | 39,174 | <p>You are missing the "for every". With $A=\{42,666\}$, $C=\{13,42,666\}$ we have $A\subseteq C$, $|A|=a$ and $|C|>a$, but not $|C\setminus A|>|A|$. Hence your $a=2$ does not have the required property.</p>
|
2,146,929 | <p>Let $f:S^n \to S^n$ be a homeomorphism. I know the result that a rigid motion in $\mathbb R^{n+1}$ is always <a href="https://math.stackexchange.com/a/866471/185631">linear</a>, but can we get more information from the assumption that $f:S^n \to S^n$ is a homeomorphism?</p>
| Andreas Caranti | 58,401 | <p>$\newcommand{\C}{\mathbb{C}}$$\renewcommand{\theta}{\vartheta}$$\newcommand{\R}{\mathbb{R}}$Consider $S^{1} \subseteq \R^{2}$. The map
$$
f(e^{i \theta}) = e^{i \theta^{2}/2 \pi}
$$
is a homeomorphism of $S^{1}$, as $\theta \mapsto \dfrac{\theta^{2}}{2 \pi}$ is a homeomorphism on the interval $[0, 2 \pi]$. </p>
<p>However, it maps $(1, 0) = e^{i 0}$ to itself, but its multiple $(-1, 0) = e^{i \pi}$ to $e^{i \pi/2} = (0, 1)$.</p>
<p>Here I have identified $\R^{2}$ with $\C$.</p>
|
1,785,633 | <p><em>(First, I am very aware of the fact that Brownian motion is actually probably more difficult to understand than at least basic complex analysis, so the pedagogical merits of such an approach would be questionable for anyone besides a probabilist wanting to refresh or reshape their already existing complex analysis knowledge.)</em></p>
<hr>
<p>During my stochastic processes lecture, my professor said something to the effect that:</p>
<blockquote>
<p>"Every statement in complex analysis can be proven using Brownian motion, in particular using the fact that the image of a Brownian path under a conformal map is Brownian motion up to a time change."</p>
</blockquote>
<p><strong>To what extent is this true?</strong></p>
<p>Even after he gave a proof of Liouville's Theorem using Brownian motion and promised to give a proof of the Riemann Mapping Theorem during the next lecture, I was still unconvinced.</p>
<p>However, now the more that I think about it, it becomes more plausible -- Brownian motion is very closely related to the theory of harmonic functions, and analytic functions are just 2-dimensional harmonic functions satisfying the Cauchy-Riemann equations (is this correct?).</p>
<p>Brownian motion has already been shown to have numerous applications to potential theory and PDEs (to the best of my knowledge), so is a similar formulation of complex analysis in terms of Brownian motion also theoretically possible? (even if not desirable?)</p>
| Redundant Aunt | 109,899 | <p>You could consider two linearly independent vectors $a,b$ and then pose $c=a+b$.</p>
|
1,250,020 | <blockquote>
<p>Find $\int \frac{1+\sin x \cos x}{1-5\sin^2 x}dx$</p>
</blockquote>
<p>I used a few trig identities to get: $\int \frac {2+\sin (2x)}{5\cos(2x)-3}dx$ and, using the substitution $t= \tan (2x)$, I got to a long partial-fractions calculation which doesn't seem right.</p>
<p>Any hints on how to do it please?</p>
| Nicolas | 213,738 | <p>We have
$$\int\frac{1+\sin x\cos x}{1-5\sin^2x}\mathrm{d}x
=\int\frac{\mathrm{d}x}{1-5\sin^2x}+\int\frac{\sin x\cos x}{1-5\sin^2x}\mathrm{d}x$$
and
$$\int\frac{\sin x\cos x}{1-5\sin^2x}\mathrm{d}x
=-\frac{1}{10}\int\frac{\mathrm{d}\left(1-5\sin^2x\right)}{1-5\sin^2x}
=-\frac{1}{10}\ln\left(1-5\sin^2x\right).$$
Then,
$$\int\frac{\mathrm{d}x}{1-5\sin^2x}
=\int\frac{\mathrm{d}x}{\cos^2x-4\sin^2x}
=\int\frac{1}{1-4\tan^2x}\frac{\mathrm{d}x}{\cos^2x}
=\frac{1}{2}\int\frac{\mathrm{d}\left(2\tan x\right)}{1-\left(2\tan x\right)^2}
=\frac{1}{2}\tanh^{-1}\left(2\tan x\right).$$</p>
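The antiderivative can be verified by numerical differentiation near $x=0$, where both $|2\tan x|<1$ and $1-5\sin^2 x>0$ hold (sketch; the sample points and tolerances are mine):

```python
# F(x) is the antiderivative found above; its central-difference
# derivative should match the integrand where everything is defined.
from math import sin, cos, tan, log, atanh, isclose

def F(x):
    return 0.5 * atanh(2 * tan(x)) - 0.1 * log(1 - 5 * sin(x)**2)

def integrand(x):
    return (1 + sin(x) * cos(x)) / (1 - 5 * sin(x)**2)

h = 1e-6
ok = all(isclose((F(x + h) - F(x - h)) / (2 * h), integrand(x), rel_tol=1e-6)
         for x in (-0.3, -0.1, 0.0, 0.2, 0.4))
```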
|
3,006,022 | <p>I have a radioactive decay system to solve for <span class="math-container">$x(t)$</span> and <span class="math-container">$y(t)$</span> (no need for <span class="math-container">$z(t)$</span>):
<span class="math-container">$$\begin{cases}x'=-\lambda x\\
y'=\lambda x-\mu y\\
z'=\mu y\end{cases}$$</span>
with the initial conditions <span class="math-container">$x(0)=1,y(0)=0,z(0)=0$</span>.</p>
<p>I found <span class="math-container">$x(t) = e^{-\lambda t}$</span>, but <span class="math-container">$y(t)$</span> is proving to be difficult. I have tried subbing in <span class="math-container">$x(t) = e^{-\lambda t}$</span> so that we have a linear DE in <span class="math-container">$y$</span>:
<span class="math-container">$$y'+\mu y=\lambda e^{-\lambda t}$$</span>
but ended up getting a giant solution which seems very wrong. </p>
<p>Is this the right approach or no?</p>
| Offlaw | 571,888 | <p><span class="math-container">$$y'+\mu y=\lambda e^{-\lambda t}$$</span></p>
<p><span class="math-container">$$d(e^{\mu t}y)=\lambda e^{(\mu - \lambda)t} dt \text{ }[\text{I.F.} = e^{\mu t}]$$</span></p>
<p><span class="math-container">$$\text{Integrating, } e^{\mu t}y=\frac{\lambda e^{(\mu - \lambda)t}}{\mu - \lambda} + C$$</span></p>
<p><span class="math-container">$$\text{Putting } y(0)=0, 0=\frac{\lambda }{\mu - \lambda} + C$$</span></p>
<p><span class="math-container">$$\text{Hence, } y(t)=\frac{\lambda }{\mu - \lambda}(e^{(\mu - \lambda)t}-1)e^{-\mu t}$$</span></p>
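Substituting the solution back into the ODE numerically confirms it (sketch with arbitrary sample values for <span class="math-container">$\lambda$</span> and <span class="math-container">$\mu$</span>):

```python
# Residual of y' + mu*y - lambda*exp(-lambda*t) should vanish.
from math import exp

lam, mu = 0.7, 1.3        # arbitrary sample rates with mu != lam

def y(t):
    return lam / (mu - lam) * (exp((mu - lam) * t) - 1) * exp(-mu * t)

h = 1e-6
def residual(t):
    yp = (y(t + h) - y(t - h)) / (2 * h)   # central difference
    return yp + mu * y(t) - lam * exp(-lam * t)

ok = y(0.0) == 0.0 and all(abs(residual(t)) < 1e-8 for t in (0.1, 1.0, 3.0))
```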
|
3,012,416 | <p>I know the answer is obvious: In <span class="math-container">$\mathbb{Z}$</span> the only solutions of <span class="math-container">$xy=-1$</span> are <span class="math-container">$x=-y=1$</span> and <span class="math-container">$x=-y=-1$</span>.
My problem is that I want to formally prove it and I don't know how to write it. Where do you even begin for such a trivial statement?</p>
<p><strong>Edit: I would like to prove it viewing <span class="math-container">$\mathbb{Z}$</span> as a ring. This is, just using the sum and product of integers. No order, no absolute value, etc...</strong></p>
| Siong Thye Goh | 306,553 | <p>If <span class="math-container">$xy=-1$</span>, then we have <span class="math-container">$|x||y|=1$</span>; that is, we must have <span class="math-container">$|x|=1$</span> and <span class="math-container">$|y|=1$</span>.</p>
<p>Also, determining <span class="math-container">$x$</span> would completely determine <span class="math-container">$y$</span>.</p>
<p>Hence we only need to examine what happens when <span class="math-container">$x=1$</span> and <span class="math-container">$x=-1$</span>.</p>
|
3,012,416 | <p>I know the answer is obvious: In <span class="math-container">$\mathbb{Z}$</span> the only solutions of <span class="math-container">$xy=-1$</span> are <span class="math-container">$x=-y=1$</span> and <span class="math-container">$x=-y=-1$</span>.
My problem is that I want to formally prove it and I don't know how to write it. Where do you even begin for such a trivial statement?</p>
<p><strong>Edit: I would like to prove it viewing <span class="math-container">$\mathbb{Z}$</span> as a ring. This is, just using the sum and product of integers. No order, no absolute value, etc...</strong></p>
| fleablood | 280,126 | <p>Alternatively.</p>
<p>If <span class="math-container">$x $</span> or <span class="math-container">$y $</span> is <span class="math-container">$0$</span>, <span class="math-container">$xy=0$</span>.</p>
<p>Otherwise, <span class="math-container">$|x|\ge 1$</span> and <span class="math-container">$|y|\ge 1$</span>. If <span class="math-container">$|x|>1$</span> then <span class="math-container">$|xy|=|y||x|>|y|\ge 1$</span>. So we must have <span class="math-container">$|x|=1$</span>. And <span class="math-container">$1=|xy|=|x||y|=|y|$</span> so <span class="math-container">$|x|=|y|=1$</span>.</p>
<p>Of the four options only <span class="math-container">$x=y=1;x=y=-1$</span> work.</p>
|
3,471,292 | <p>I need to find the value of the series <span class="math-container">$\sum_{n=0}^{\infty}\frac{(n+1)x^n}{n!}$</span>. I've computed its radius of convergence, which comes out to be infinite.</p>
<p>I can't see how to manipulate the general term of the series to get the desired result...</p>
| E.H.E | 187,799 | <p><span class="math-container">$$y=\sum_{n=0}^{\infty}\frac{nx^n}{n!}+\sum_{n=0}^{\infty}\frac{x^n}{n!}$$</span>
<span class="math-container">$$y=e^x+\sum_{n=0}^{\infty}\frac{nx^n}{n!}$$</span>
<span class="math-container">$$\frac{y}{x}=\frac{e^x}{x}+\sum_{n=0}^{\infty}\frac{nx^{n-1}}{n!}$$</span>
<span class="math-container">$$\frac{y}{x}=\frac{e^x}{x}+e^x$$</span>
so
<span class="math-container">$$y=e^x+xe^x$$</span></p>
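Comparing partial sums against the closed form <span class="math-container">$(1+x)e^x$</span> (illustrative sketch; note the series converges for every <span class="math-container">$x$</span>, since <span class="math-container">$n!$</span> dominates):

```python
# Partial sums of sum (n+1) x^n / n! against the closed form (1 + x) e^x.
from math import exp, factorial, isclose

def partial(x, N=60):
    return sum((n + 1) * x**n / factorial(n) for n in range(N))

ok = all(isclose(partial(x), (1 + x) * exp(x), rel_tol=1e-12, abs_tol=1e-9)
         for x in (-3.0, -1.0, 0.0, 0.5, 2.0, 5.0))
```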
|
2,355,579 | <blockquote>
<p><strong>Problem:</strong> James has a pile of n stones for some positive integer n ≥ 2. At each step, he
chooses one pile of stones and splits it into two smaller piles and writes the product
of the new pile sizes on the board. He repeats this process until every pile is exactly
one stone.</p>
<p>For example, if James has n = 12 stones, he could split that pile into a pile of size
4 and another pile of size 8. James would then write the number 4 · 8 = 32 on the
board. He then decides to split the pile of 4 stones into a pile with 1 stone and a
pile with 3 stones and writes 1 · 3 = 3 on the board. He continues this way until he
has 12 piles with one stone each.</p>
<p>Prove that no matter how James splits the piles (starting with a single pile of n
stones), the sum of the numbers on the blackboard at the end of the procedure is
always the same.</p>
</blockquote>
<p><em>Hint: First figure out what the formula for the final sum will be. Then prove it
using strong induction.</em></p>
<p>The formula I came up with is $n(n-1)/2$. It could be totally wrong...</p>
<p><strong>My (Partial) Solution:</strong></p>
<p>1) <strong>Base Case:</strong> (n=2) James has one pile of 2 stones. Suppose he takes the top-most stone and puts in one pile A, which now has size 1. He takes the remaining stone and places it into another pile B, which now has size 1. Then the product of the sizes is 1. The sum of all the products on his board is 1. Now, if he were to start again, and take the bottom stone first and put it in pile A, he has a pile of size 1, and then takes the top stone and puts it in pile B, he has a pile of size 1, and the product is 1, with sum of all products on the board is 1. </p>
<p>2) <strong>Inductive Hypothesis (Strong Induction):</strong> Suppose for some $k \geq 2$, $n$ stones can be split in $ 2 \leq n \leq k $ stones and $k-n$ stones.</p>
<p>3) <strong>Inductive Step</strong>: Consider $n=k+1$. What do I do now?</p>
| user64742 | 289,789 | <p>The answer is <a href="https://chat.stackexchange.com/transcript/message/38721939#38721939">4 8 15 16 23 42</a></p>
<hr>
<p>Joke aside what you actually need is iterated induction (sorta).</p>
<p>Some setup:</p>
<ul>
<li>The ordering of two piles is irrelevant. A pile of 2 and a pile of 3 is the same as a pile of 3 and a pile of 2. This is important, and we need not replicate these cases when resolving the base case.</li>
</ul>
<hr>
<p>Proof:</p>
<p>We will use complete induction. For the base case suppose that $n=2$. It can only be split in one manner (into two piles of one stone each), and the resulting sum is $1 \cdot 1 = 1$.</p>
<p>For the inductive step suppose that the statement is true for all piles of size $k$, where $1 < k \leq m$. We can therefore define a function on the range of $k$ called $p(k)$, which is defined to return the unique pile sums. We seek to prove the case that $n = m+1$. Let us consider the case where we split it into piles of size $1$ and $m$. The resulting sum for $m$ is $p(m)$ by induction. Therefore the sum for that case is $m + p(m)$.</p>
<p>Now suppose that we split the pile into two piles $s$ and $n-s$, where $1 \leq s < n$. By induction we then have that the pile sum is $ns - s^2 + p(s) + p(n-s)$. However $n = m+1$ and so the sum is $ms + s - s^2 + p(s) + p(m+1-s)$. We can identify $p(s) = \sum_{i=2}^{s} (i-1) = \frac{s(s-1)}{2}$. Therefore, $p(s) + s = s + \sum_{i=2}^{s} (i-1) = \sum_{i=2}^{s+1} (i-1) = p(s+1)$. This means that the current pile sum is now $ms - s^2 + p(s+1) + p(m+1-s) = m + s(m - s) + p(s) + p(m-s) = m + p(m)$. Therefore, we have proven that every pile sum equals the one obtained by splitting into piles of size $1$ and $m$. Therefore, by transitivity all the pile sums for the case $n = m+1$ are equal. This completes the inductive step.</p>
<p>Therefore, by induction all piles of integer size $n \geq 2$ have a unique pile sum.</p>
<hr>
<p>Remarks:</p>
<p>You can prove that the pile function is $p(n) = n(n-1)/2$, the triangular numbers. I won't do that for you. It is just a matter of pulling out nothing but 1's once you have uniqueness.</p>
<p>Note that this could be written as induction upon the pile splitting and the pile size. I worked around it, but feel free to do that if it is easier for you.</p>
<p>In fact, just answer this how you feel it is best answered by you. This is just an example proof.</p>
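<p>The invariance is easy to see concretely with a short simulation (an illustrative sketch; the uniform random split and the name <code>split_sum</code> are my choices, not part of the problem): every run, no matter how the splits fall, lands on $n(n-1)/2$.</p>

```python
import random

def split_sum(n, rng):
    """Play one full run of the splitting game on a pile of n stones,
    splitting each pile at a uniformly random point, and return the
    sum of the products written on the board."""
    total = 0
    piles = [n]
    while piles:
        pile = piles.pop()
        if pile == 1:
            continue
        s = rng.randint(1, pile - 1)   # split pile into s and pile - s
        total += s * (pile - s)
        piles.extend([s, pile - s])
    return total

rng = random.Random(0)
for n in range(2, 12):
    results = {split_sum(n, rng) for _ in range(200)}
    assert results == {n * (n - 1) // 2}   # every run gives the same sum
```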
|
1,212,262 | <p>The statement goes as follow: </p>
<p>$ B ∩ C ⊆ A ⇒ (C − A) ∩ (B − A) = ∅. $</p>
<p>First, the sign "=>" represents a tautology, no? ( apparently I get it confuse with the 3 bar sign, if you know what I mean).</p>
<p>Second, the fact that it equals to no solution, how do I prove that? Seems to contradict itself, at least to me.</p>
<p>EDIT: apparently necessary edit... Couldn't start the problem because I did not understand the notation. No need to down vote for that, the more people understand a concept properly, the better....</p>
| Christian Blatter | 1,303 | <p>From your handling of cases I have got the impression that the boxes are labeled, as are the colors, but balls of the same color are undistinguishable.</p>
<p>If there were enough balls of all colors we could just assign a color to each of the boxes. This can be done in $4^5=1024$ ways.</p>
<p>But assignments where some color is chosen $\geq4$ times are forbidden. There are $4$ assignments where the same color is chosen for all five boxes. There are $60$ assignments where the same color is chosen for four boxes: we can choose this color in four ways, then we can choose the second used color in $3$ ways, and we can choose the deviant box in $5$ ways.</p>
<p>In all there are $1024-4-60=960$ admissible assignments.</p>
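<p>The count is small enough to brute-force as a sanity check (a sketch of the same model: labeled boxes, one color per box, and no color used four or more times):</p>

```python
from itertools import product

colors = range(4)
admissible = sum(
    1 for assignment in product(colors, repeat=5)       # 4^5 = 1024 assignments
    if max(assignment.count(c) for c in colors) <= 3    # no color used >= 4 times
)
# forbidden: 4 all-same-color assignments plus 4 * 3 * 5 = 60 with four boxes alike
assert admissible == 1024 - 4 - 60 == 960
```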
|
2,806,432 | <p>Let $(\mathbb{R}^N,\tau)$ be a topological space, where $\tau$ is the usual topology.
Let $A\subset\mathbb{R}^N$ be a compact set. If $(A_n)_n$ is a family of open sets such that
\begin{equation}
\bigcup_nA_n\supset A,
\end{equation}
then, by the definition of compactness, for some finite $k$,
\begin{equation}
\bigcup_{i=1}^{k}A_i\supset A
\end{equation}
Now, if I find a family of open sets $(B_n)_n$ such that
\begin{equation}
cl\bigg(\bigcup_n B_n\bigg)\supset A,
\end{equation}
can I say that
\begin{equation}
cl\bigg(\bigcup_{i=1}^kB_i\bigg)\supset A\quad?
\end{equation}
Thanks for the attention!</p>
| lulu | 252,071 | <p>Your method works fine, but you have to remember the continuity correction. You are approximating a discrete distribution with a continuous one. Thus, for instance, your method gives a non-zero chance of getting $224$ points (say) though that is in fact impossible. </p>
<p>As you are working with the scores, then for $(24,25,26)$ tails you get $(218,225,232)$ as scores...so for your continuous variant you need to look between $225-3.5$ and $225+3.5$. If you do that you get $0.112462916$ as opposed to the exact value of $0.112275173$ ... not bad at all!</p>
|
1,736,098 | <p>Wrote some Python code to verify if my Vectors are parallel and/or orthogonal. Parallel seems to be alright, orthogonal however misses out in one case. I thought that if the dotproduct of two vectors == 0, the vectors are orthogonal? Can someone tell me what's wrong with my code?</p>
<pre><code>def isParallel(self,v):
    try:
        print self.findTheta(v,1)
        if (self.findTheta(v,1) == 0 or self.findTheta(v,1) == 180):
            return True
    except ZeroDivisionError:
        return True
    return False

def isOrthogonal(self,v):
    print self.dotProduct(v)
    if self.dotProduct(v) == 0:
        return True
    return False

def dotProduct(self,v):
    dotproduct = sum([self.coordinates[i]*v.coordinates[i] for i in range(len(self.coordinates))])
    return dotproduct
</code></pre>
| bluppfisk | 330,237 | <pre><code>def isOrthogonal(self,v,tolerance=1e-10):
    if abs(self.dotProduct(v)) < tolerance:
        return True
    return False
</code></pre>
<p>The above code runs fine, since it allows for rounding errors that my original code did not. Hans Lundmark got it straight: floating point rounding errors can cause the code to falter and produce false negatives.</p>
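<p>For reference, the same tolerance idea in a standalone Python 3 sketch; the vectors below are made up specifically to exhibit the rounding problem:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_orthogonal(u, v, tolerance=1e-10):
    return abs(dot(u, v)) < tolerance

# Two vectors that are orthogonal on paper but not in floating point:
# 0.1 + 0.2 is not exactly 0.3 in binary floating point.
u = (0.1 + 0.2, 1.0)
v = (1.0, -0.3)
assert dot(u, v) != 0        # an exact == 0 test would report "not orthogonal"
assert is_orthogonal(u, v)   # the tolerance-based test gets it right
```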
|
251,818 | <p>In other words if a graph is $3$-regular does it need to have $4$ vertices? I ask because I have been asked to prove that if $n$ is an odd number and $G$ is an $n$-regular graph then $G$ must have an even number of vertices.</p>
| joriki | 6,622 | <p>It seems from your last sentence that you're asking whether an $n$-regular graph must have <em>exactly</em> $n+1$ vertices (rather than <em>at least</em> $n+1$ vertices). If so, as Gregor commented, the answer is no.</p>
<p>For the proof you're trying to find, try counting the number of incidences in two different ways.</p>
|
200,278 | <p>Say I have two TimeSeries:</p>
<pre><code>x = TimeSeries[{2, 4, 1, 10}, {{1, 2, 4, 5}}]
y = TimeSeries[{6, 2, 6, 3, 9}, {{1, 2, 3, 4, 5}}]
</code></pre>
<p>x has a value at times: 1,2,4,5</p>
<p>y has a value at times: 1,2,3,4,5</p>
<p>I would like to build a list of pairs {<span class="math-container">$x_i$</span>, <span class="math-container">$y_i$</span>} which would not include missing elements (in this case the x element at time 3 is missing)</p>
<p>The desired result would be: </p>
<pre><code>{{2,6}, {4,2}, {1,3}, {10,9}}
</code></pre>
<p>I have a feeling that this should be simple and perhaps I'm not using right tools.</p>
| user42582 | 42,582 | <p>One possible solution is to use <a href="https://reference.wolfram.com/language/ref/TimeSeriesResample.html?q=TimeSeriesResample" rel="noreferrer"><code>TimeSeriesResample</code></a>:</p>
<pre><code>td = TimeSeriesResample[TemporalData[{x, y}], "Intersection"];
</code></pre>
<p>Using <code>"Intersection"</code> instructs <code>TimeSeriesResample</code> to use <em>only</em> common timestamps for all paths.</p>
<p>Then </p>
<pre><code>td["Paths"] // (Part[#, All, All, -1] &) /* Transpose
</code></pre>
<p>evaluates to</p>
<blockquote>
<pre><code>{{2, 6}, {4, 2}, {1, 3}, {10, 9}}
</code></pre>
</blockquote>
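<p>The same "intersect on timestamps" idea, sketched outside Mathematica in Python on the data above (illustrative only; the dicts stand in for the two <code>TimeSeries</code>):</p>

```python
x = {1: 2, 2: 4, 4: 1, 5: 10}        # time -> value, as in TimeSeries x
y = {1: 6, 2: 2, 3: 6, 4: 3, 5: 9}   # time -> value, as in TimeSeries y

common = sorted(x.keys() & y.keys())  # timestamps present in both series
pairs = [[x[t], y[t]] for t in common]
assert pairs == [[2, 6], [4, 2], [1, 3], [10, 9]]
```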
|
3,525,488 | <p>So I have the polar curve </p>
<p><span class="math-container">$r=\sqrt{|\sin(n\theta)|}$</span></p>
<p>Which I am trying to evaluate between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span>. By smashing it into wolfram it returns a constant value 4 for any <span class="math-container">$n$</span>.</p>
<p>I tried calculating it manually (I suspect my calculation might be wrong), but I arrived at </p>
<p><span class="math-container">$$\textrm{I}=\frac{1}{2}\int_{0}^{2\pi}|\sin(n\theta)|\,d\theta= \left[\text{sgn}(\sin(nx))\frac{\cos(nx)}{n}\right]_{0}^{2\pi} $$</span> which, on inspection, can't be evaluated at <span class="math-container">$0$</span> or <span class="math-container">$2\pi$</span> without taking limits. I suspect that whenever <span class="math-container">$n$</span> increases, the curve becomes "tighter", which could explain why the integral stays at 4, but I can't come up with a sound argument for it, so if someone could give me a pointer, that would be greatly appreciated. </p>
| Quanto | 686,284 | <p>Use the variable change <span class="math-container">$t=n\theta$</span> to rewrite the integral as</p>
<p><span class="math-container">$$\textrm{I}=\frac{1}{2}\int_{0}^{2\pi}|\sin(n\theta)|d\theta
= \frac1{2n} \int_{0}^{2\pi n}|\sin t|dt$$</span></p>
<p>Due to the periodicity of the sine function </p>
<p><span class="math-container">$$\textrm{I}
= \frac1{2n} \cdot n\int_{0}^{2\pi}|\sin t|dt
=\frac1{2}\int_{0}^{2\pi}|\sin t|dt =\frac1{2}\cdot 4 \int_{0}^{\pi/2}\sin tdt = 2$$</span></p>
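<p>A numerical check of this (a sketch; note that the constant 4 the question quotes from WolframAlpha is the integral of |sin(nθ)| alone, i.e. without the 1/2 factor, while the area integral above is 2 for every n):</p>

```python
import math

def area(n, steps=100_000):
    # midpoint rule for (1/2) * integral_0^{2pi} |sin(n*theta)| d(theta)
    h = 2 * math.pi / steps
    return 0.5 * h * sum(abs(math.sin(n * (i + 0.5) * h)) for i in range(steps))

for n in (1, 2, 3, 7, 20):
    assert abs(area(n) - 2) < 1e-3   # the value is 2, independent of n
```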
|
9,484 | <p>Let <span class="math-container">$F(k,n)$</span> be the number of permutations of an n-element set that fix exactly <span class="math-container">$k$</span> elements.</p>
<p>We know:</p>
<ol>
<li><p><span class="math-container">$F(n,n) = 1$</span></p>
</li>
<li><p><span class="math-container">$F(n-1,n) = 0$</span></p>
</li>
<li><p><span class="math-container">$F(n-2,n) = \binom {n} {2}$</span></p>
<p>...</p>
</li>
<li><p><span class="math-container">$F(0,n) = n! \cdot \sum_{k=0}^n \frac {(-1)^k}{k!}$</span> (the subfactorial)</p>
</li>
</ol>
<p>The summation formula is obviously</p>
<p><span class="math-container">$\displaystyle\sum_{k=0}^n F(k,n) = n!$</span></p>
<p>A recursive definition of <span class="math-container">$F(k,n)$</span> is (my claim):</p>
<p><span class="math-container">$$F(k,n) = \binom {n} {k} \cdot \Big( k! - \displaystyle\sum_{i=0}^{k-1} F(i,k) \Big)$$</span></p>
<p>Question 1: Is there a common name for the "generalized factorial" <span class="math-container">$F(k,n)$</span>?</p>
<p>Question 2: Does anyone know a closed form for <span class="math-container">$F(k,n)$</span> or have an idea how to get it from the recursive definition? (generating function?)</p>
| Michael Lugo | 143 | <p>The "semi-exponential" generating function for these is</p>
<p><span class="math-container">$\sum_{n=0}^\infty \sum_{k=0}^n {F(k,n) z^n u^k \over n!} = {\exp((u-1)z) \over 1-z}$</span></p>
<p>which follows from the exponential formula.</p>
<p>These numbers are apparently called the <a href="https://oeis.org/A008290" rel="nofollow noreferrer">rencontres numbers</a> although I'm not sure how standard that name is.</p>
<p>Now, how do we get a formula for these numbers out of this? First note that</p>
<p><span class="math-container">$$\exp((u-1)z) = 1 + (u-1)z + {(u-1)^2 \over 2!} z^2 + {(u-1)^3 \over 3!} z^3 + \cdots $$</span></p>
<p>and therefore the "coefficient" (actually a polynomial in <span class="math-container">$u$</span>) of <span class="math-container">$z^n$</span> in <span class="math-container">$\exp((u-1)z)/(1-z)$</span> is</p>
<p><span class="math-container">$$ P_n(u) = 1 + (u-1) + {(u-1)^2 \over 2!} + \cdots + {(u-1)^n \over n!} = \sum_{j=0}^n {{(u-1)^j } \over j!} $$</span></p>
<p>since division of a generating function by <span class="math-container">$1-z$</span> has the effect of taking partial sums of the coefficients.</p>
<p>The coefficient of <span class="math-container">$u^k$</span> in <span class="math-container">$P_n(u)$</span> (which I'll denote <span class="math-container">$[u^k] P_n(u)$</span>, where <span class="math-container">$[u^k]$</span> denotes taking the <span class="math-container">$u^k$</span>-coefficient) is then</p>
<p><span class="math-container">$$ [u^k] P_n(u) = \sum_{j=0}^n [u^k] {(u-1)^j \over j!} $$</span></p>
<p>But we only need to do the sum for <span class="math-container">$j = k, \ldots, n$</span>; the lower terms are zero, since they are the <span class="math-container">$u^k$</span>-coefficient of a polynomial of degree less than <span class="math-container">$k$</span>. So</p>
<p><span class="math-container">$$ [u^k] P_n(u) = \sum_{j=k}^n [u^k] {(u-1)^j \over j!} $$</span></p>
<p>and by the binomial theorem,</p>
<p><span class="math-container">$$ [u^k] P_n(u) = \sum_{j=k}^n {(-1)^{j-k} \over k! (j-k)!} $$</span></p>
<p>Finally, <span class="math-container">$F(k,n) = n! [u^k] P_n(u)$</span>, and so we have</p>
<p><span class="math-container">$$ F(k,n) = n! \sum_{j=k}^n {(-1)^{j-k} \over k!(j-k)!} $$</span></p>
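<p>The closed form checks out against a brute-force count over small n (a sketch; each summand below is an exact integer because k!(j-k)! divides n!):</p>

```python
from itertools import permutations
from math import factorial

def F(k, n):
    # n! * sum_{j=k}^n (-1)^(j-k) / (k! (j-k)!), with exact integer terms
    return sum((-1) ** (j - k) * factorial(n) // (factorial(k) * factorial(j - k))
               for j in range(k, n + 1))

def brute(k, n):
    # count permutations of {0,...,n-1} with exactly k fixed points
    return sum(1 for p in permutations(range(n))
               if sum(p[i] == i for i in range(n)) == k)

for n in range(1, 8):
    for k in range(n + 1):
        assert F(k, n) == brute(k, n)
```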
|
9,484 | <p>Let <span class="math-container">$F(k,n)$</span> be the number of permutations of an n-element set that fix exactly <span class="math-container">$k$</span> elements.</p>
<p>We know:</p>
<ol>
<li><p><span class="math-container">$F(n,n) = 1$</span></p>
</li>
<li><p><span class="math-container">$F(n-1,n) = 0$</span></p>
</li>
<li><p><span class="math-container">$F(n-2,n) = \binom {n} {2}$</span></p>
<p>...</p>
</li>
<li><p><span class="math-container">$F(0,n) = n! \cdot \sum_{k=0}^n \frac {(-1)^k}{k!}$</span> (the subfactorial)</p>
</li>
</ol>
<p>The summation formula is obviously</p>
<p><span class="math-container">$\displaystyle\sum_{k=0}^n F(k,n) = n!$</span></p>
<p>A recursive definition of <span class="math-container">$F(k,n)$</span> is (my claim):</p>
<p><span class="math-container">$$F(k,n) = \binom {n} {k} \cdot \Big( k! - \displaystyle\sum_{i=0}^{k-1} F(i,k) \Big)$$</span></p>
<p>Question 1: Is there a common name for the "generalized factorial" <span class="math-container">$F(k,n)$</span>?</p>
<p>Question 2: Does anyone know a closed form for <span class="math-container">$F(k,n)$</span> or have an idea how to get it from the recursive definition? (generating function?)</p>
| Joshua Baehring | 490,064 | <p>Let <span class="math-container">$S_n$</span> be the set of all permutations of <span class="math-container">$X$</span> (i.e. <span class="math-container">$S_n = \{ f \mid f : X \rightarrow X \text{ is a bijection}\}$</span>). Now consider the set of permutations, <span class="math-container">$A$</span>, that have exactly k fixed points. More formally, <span class="math-container">$A$</span> is the union of all sets</p>
<p><span class="math-container">$$A_i = \{f \in S_n : N' \in {\{1,...,n\} \choose k}, f(N') = N', \text{ and } \forall j \in \{1,...,n\} \setminus N', f(x_j) \neq x_j\}$$</span> where <span class="math-container">$i \in \{1,...,{n \choose k}\}$</span>.</p>
<p>(Note that each <span class="math-container">$A_i$</span> must uniquely correspond to some <span class="math-container">$N' \in {\{1,...,n\} \choose k}$</span>, which is why <span class="math-container">$i \in \{1,...,{n \choose k}\}$</span>. In other words, the definition of <span class="math-container">$A_i$</span> implicitly defines a bijection between all <span class="math-container">$A_i$</span> and <span class="math-container">${\{1,..., n\} \choose k}$</span>).</p>
<p>Since there are <span class="math-container">$k$</span> fixed elements for all <span class="math-container">$f \in A$</span>, we consider how to permute the remaining <span class="math-container">$n - k$</span> elements. These elements cannot be fixed. Thus, by letting arbitrary <span class="math-container">$N' \in {\{1,...,n\} \choose k}$</span>, the number of permutations of <span class="math-container">$X$</span> where the <span class="math-container">$k$</span> points in <span class="math-container">$N'$</span> are fixed and the rest are not is equivalent to the number of derangements (see side note for further explanation) of the set <span class="math-container">$X \setminus N'$</span>. Additionally, because there are <span class="math-container">${n \choose k}$</span> ways to fix <span class="math-container">$k$</span> elements, we have</p>
<p><span class="math-container">$${n \choose k} \cdot D(|X \setminus N'|) = {n \choose k} \cdot D(n - k) = {n \choose k} \cdot ((n - k)! - \displaystyle\sum_{i = 1}^{n - k} (-1)^{i - 1} {(n - k) \choose i} (n - k - i)!)$$</span></p>
<p>as the total number of permutations of <span class="math-container">$X$</span> with exactly <span class="math-container">$k$</span> fixed
points. More concisely, we have <span class="math-container">$${n \choose k} \cdot D(|X \setminus N'|) = {n \choose k} ((n - k)! - \displaystyle\sum_{i = 1}^{n - k} (-1)^{i - 1} \dfrac{(n - k)!}{i!})$$</span></p>
<p><em><strong>Side note:</strong></em> A derangement of a set is a permutation of the set such that no element is mapped to itself. The total number of derangements of an n-element set is <span class="math-container">$$D(n) = n! - \displaystyle\sum_{i = 1}^n (-1)^{i - 1} {n \choose i}(n - i)! = n! - \displaystyle\sum_{i = 1}^n (-1)^{i - 1} \dfrac{n!}{i!}$$</span>
The proof for the total number of derangements of a set utilizes the principle of inclusion-exclusion, but I will not include it here as it does not directly answer your question.</p>
|
346,950 | <p>The equation $$3\sin^2 x - 3\cos x -6\sin x + 2\sin 2x + 3=0$$ has a solution $x = 0$. That means it has a factor $\cos x - 1$. I tried to write the given equation in the form
$$(\cos x - 1)P(x)=0.$$ I am looking for the factor $P(x)$. How do I do that?</p>
| chenbai | 59,487 | <p>LHS$=\sin x(3\sin x-2)+4\sin x(\cos x-1)-3(\cos x-1)=\sqrt{1-\cos^2 x}\,(3\sin x-2)+4\sin x(\cos x-1)-3(\cos x-1)=\sqrt{|\cos x-1||\cos x+1|}\,(3\sin x-2)+4\sin x(\cos x-1)-3(\cos x-1)$</p>
<p>So if you insist on $\cos x-1$ as a factor: the expression contains $\sqrt{|\cos x-1|}$, but not $\cos x-1$.</p>
<p>If you simply want to solve it, $1-\cos x=2\sin^2 \dfrac{x}{2}$ and $\sin x=2\sin\dfrac{x}{2}\cos\dfrac{x}{2}$, which is more straightforward; $\sin\dfrac{x}{2}$ may be a better choice of factor.</p>
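<p>chenbai's point can be checked numerically (a sketch): near $x=0$ the expression behaves like $-2x$, so dividing by $\cos x-1\sim -x^2/2$ blows up, meaning $\cos x-1$ is not a factor, while dividing by $\sin\frac{x}{2}\sim \frac{x}{2}$ tends to the finite limit $-4$:</p>

```python
import math

def f(x):
    # the LHS of the equation in the question
    return (3 * math.sin(x) ** 2 - 3 * math.cos(x) - 6 * math.sin(x)
            + 2 * math.sin(2 * x) + 3)

# near x = 0:  f(x)/sin(x/2) -> -4 (finite), so sin(x/2) divides f,
# while f(x)/(cos x - 1) grows like 4/x, so cos x - 1 does not.
for x in (1e-3, 1e-4, 1e-5):
    assert abs(f(x) / math.sin(x / 2) + 4) < 0.05
    assert abs(f(x) / (math.cos(x) - 1)) > 1 / x
```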
|
138,658 | <p>Suppose $X$ is a topological space, and $\mu$ is a Borel measure on $X$. Also suppose we have an $n$-dimensional vector bundle $E \to X$, with an inner product $\langle \cdot,\cdot \rangle_x$ on the fibre $E_x$ for all $x \in X$, in such a way that each $E_x$ is complete and such that there exists a vector bundle trivialisation which is compatible with the fibrewise inner products. The inner product $\langle \cdot, \cdot \rangle_x$ determines a norm $||\cdot||_x$ on $E_x$.</p>
<p>Say that a (not necessarily continuous) section $\sigma \colon X \to E$ is <em>measurable</em> if its restriction to each trivialising open set $U \subset X$ is given by a measurable function $U \to \mathbb{F}^n$ (here $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C}$, depending on how you feel). Denote the set of measurable sections (understood as being defined up to measure zero) by $\Gamma(E)$.</p>
<p>Given a section $\sigma \in \Gamma(E)$ and a number $p \in (0,\infty)$, we can define
$$||\sigma||_p := \left( \int_X ||\sigma(x)||_x^p \; d\mu(x) \right)^{1/p},$$
and we can then define the corresponding $E$-valued Lebesgue space $L^p(X;E)$ in the obvious way.</p>
<blockquote>
<p><strong>Question:</strong> do we have the usual duality relations for Lebesgue spaces, i.e. $(L^p(X;E))^* \cong L^{p^\prime}(X;E)$ where $1 = \frac{1}{p} + \frac{1}{p^\prime}$?</p>
</blockquote>
<p>As one would expect, there is a kind of Hölder inequality: if $\sigma \in L^p(X;E)$ and $\tau \in L^{p^\prime}(X;E)$, then the function $\langle \sigma,\tau \rangle$ on $X$, given by
$$\langle \sigma,\tau \rangle(x) := \int_X \langle \sigma(x), \tau(x) \rangle_x \; d\mu(x),$$
satisfies $|| \langle \sigma,\tau \rangle||_{L^1(X)} \leq ||\sigma||_p ||\tau||_{p^\prime}$. It follows that the pairing $\langle \cdot,\cdot \rangle$ can be used to isometrically embed $L^{p^\prime}(X;E)$ into $(L^p(X;E))^*$.</p>
<p>However, I haven't been able to prove the reverse containment - that each functional $\phi \in (L^p(X;E))^*$ is given by pairing with an element of $L^{p^\prime}(X;E)$ - without additional assumptions, such as the existence of a finite trivialising cover for $E$ with uniform norm control (for example, when $X$ is compact). In this case, one can recover the result from the corresponding result for trivial bundles - which is essentially the case of vector-valued Lebesgue spaces - but constants appear which depend on the cardinality of a trivialising cover, which is somewhat unexpected.</p>
<p>Has this been explicitly proven anywhere? Is it even true in general? (I'll be very surprised if it isn't)</p>
| johndoe | 36,502 | <p>The existence of a finite trivialising cover is a less stringent condition than one would expect: see <a href="https://mathoverflow.net/questions/94479/does-every-vector-bundle-allow-a-finite-trivialization-cover">Does every vector bundle allow a finite trivialization cover?</a></p>
<p>(Sorry for the comment-like answer, but it seems that by migrating to SE I lost reputation points, so I do not have enough of them to properly comment.)</p>
<p>Edit: however, the uniform norm control over the cover might be an issue when $X$ is not compact, so my comment is really not that helpful I guess.</p>
|
399,934 | <p>How can one find explicit elements realizing the generators of a group given by a presentation? For example, when
$$G=\langle a,b,c|a^3=b^3=c^2=1,ab=ba,ca=a^2c,cb=b^2c\rangle$$
we can let $G\subset S_3\times S_3$, and let
$$a=((123),1),b=(1,(123)),c=((12),(12))$$
to get the desired result. However, when $3$ is replaced by general $p$, where $p$ is an odd prime,say,
$$G=\langle a,b,c|a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c\rangle$$
I cannot give a specific example of $a,b$ and $c$ satisfying "$a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c$". Is there any quick way to do so?</p>
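<p>For what it's worth, the $p=3$ pattern does generalize: inside $S_p\times S_p$ take $a,b$ to be the $p$-cycle in each factor, and let $c$ act by negation $x\mapsto -x$ on both copies of $\{0,\dots,p-1\}$; negation is an involution that conjugates the $p$-cycle to its inverse, and $aca=c$ is equivalent to $ca=a^{p-1}c$ since $c^2=1$. A small sanity-check sketch (labels $0,\dots,p-1$, so the $p=3$ case is the construction above up to relabeling):</p>

```python
def compose(f, g):
    # permutations as tuples: (f o g)(i) = f[g[i]]
    return tuple(f[g[i]] for i in range(len(f)))

def pair_mul(u, v):
    # componentwise product in S_p x S_p
    return (compose(u[0], v[0]), compose(u[1], v[1]))

def power(u, k, e):
    r = e
    for _ in range(k):
        r = pair_mul(r, u)
    return r

for p in (3, 5, 7):
    ident = tuple(range(p))
    shift = tuple((i + 1) % p for i in range(p))   # the p-cycle, playing (12...p)
    neg = tuple((-i) % p for i in range(p))        # x -> -x, an involution
    e = (ident, ident)
    a = (shift, ident)
    b = (ident, shift)
    c = (neg, neg)
    assert power(a, p, e) == power(b, p, e) == power(c, 2, e) == e
    assert pair_mul(a, b) == pair_mul(b, a)        # ab = ba
    assert pair_mul(pair_mul(a, c), a) == c        # aca = c
    assert pair_mul(pair_mul(b, c), b) == c        # bcb = c
```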
| Mariano Suárez-Álvarez | 274 | <p>It is not true that if a linear map $f:V\to W$ has a one sided inverse, then $\dim V=\dim W$ nor that $f$ is an isomorphism.</p>
|
399,934 | <p>How can one find explicit elements realizing the generators of a group given by a presentation? For example, when
$$G=\langle a,b,c|a^3=b^3=c^2=1,ab=ba,ca=a^2c,cb=b^2c\rangle$$
we can let $G\subset S_3\times S_3$, and let
$$a=((123),1),b=(1,(123)),c=((12),(12))$$
to get the desired result. However, when $3$ is replaced by general $p$, where $p$ is an odd prime,say,
$$G=\langle a,b,c|a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c\rangle$$
I cannot give a specific example of $a,b$ and $c$ satisfying "$a^p=b^p=c^2=1,ab=ba,aca=c,bcb=c$". Is there any quick way to do so?</p>
| Ink | 34,881 | <p>It is a standard fact from elementary set theory that a map $f$ is a bijection $\iff$ $f$ has a two-sided inverse. Remember, $df_x$ is a linear map, and therefore, an isomorphism. This shows $k = l$ since isomorphic vector spaces have equal dimension.</p>
|
2,618,804 | <p>Let $V$ be a vector space of dimension $m\geq 2$ and $ T: V\to V$ be a linear transformation such that $T^{n+1}=0$ and $T^{n}\neq 0$ for some $n\geq1$ .Then choose the correct statement(s):</p>
<p>$(1)$ $rank(T^n)\leq nullity(T^n)$</p>
<p>$(2)$ $rank(T^n)\leq nullity(T^{n+1})$</p>
<p><strong>Try:</strong></p>
<p>I found that this case is possible only if $n<m$, and I tried some examples for $(2)$ and found it true, but I have no idea how to prove it.
For (1) I'm not getting anywhere. </p>
| Andrea Marino | 177,070 | <p>Note that $f=T^n$ satisfies $f^2=T^{2n}=0$, since $2n\ge n+1$. Thus $Im f \subseteq \ker f$, which implies $rank(f) \le nullity(f)$; this is (1). As for (2): since $T^{n+1}=0$, every vector lies in $\ker T^{n+1}$, so $nullity(T^{n+1})=m\ge rank(T^n)$ and (2) holds as well.</p>
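<p>A concrete check (a sketch with the standard $4\times4$ nilpotent shift matrix, so $m=4$ and $n=3$; the rank shortcut in the comment is valid only for this particular matrix, whose nonzero rows are distinct standard basis vectors):</p>

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

m = 4                                                               # dim V
N = [[1 if j == i + 1 else 0 for j in range(m)] for i in range(m)]  # shift matrix

P = [[int(i == j) for j in range(m)] for i in range(m)]   # identity = N^0
ranks = []
for _ in range(m + 1):
    # each nonzero row of a power of N is a distinct standard basis vector,
    # so rank = number of nonzero rows (valid for this particular matrix)
    ranks.append(sum(1 for row in P if any(row)))
    P = matmul(P, N)

assert ranks == [4, 3, 2, 1, 0]   # ranks of N^0, ..., N^4: here n = 3
n = 3
rank_Tn = ranks[n]                # = 1
nullity_Tn = m - ranks[n]         # = 3, by rank-nullity
assert rank_Tn <= nullity_Tn              # statement (1)
assert rank_Tn <= m - ranks[n + 1]        # statement (2): nullity(T^{n+1}) = 4
```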
|
1,784,679 | <p>if $p,q,r$ are three positive integers prove that</p>
<p>$$LCM(p,q,r)=\frac{pqr \times HCF(p,q,r)}{HCF(p,q) \times HCF(q,r) \times HCF(r,p)}$$</p>
<p>I tried in this way;</p>
<p>Let $HCF(p,q)=x$ hence $p=xm$ and $q=xn$ where $m$ and $n$ are relatively prime.</p>
<p>similarly let $HCF(q,r)=y$ hence $q=ym_1$ and $r=yn_1$ where $m_1$ and $n_1$ are Relatively prime.</p>
<p>Alo let $HCF(r,p)=z$ hence $r=zm_2$ and $p=zn_2$</p>
<p>we have $$p=xm=zn_2$$</p>
<p>$$q=xn=ym_1$$and</p>
<p>$$r=yn_1=zm_2$$</p>
<p>can i have any hint to proceed?</p>
| Mayank Bomb | 256,329 | <p>To get a start on the problem first let us try to understand it intuitively. We need to show that for three numbers <span class="math-container">$a,b,c$</span>, <span class="math-container">$$abc=\frac{LCM(a,b,c) \times HCF(a,b) \times HCF(b,c) \times HCF(c,a)}{HCF(a,b,c)}$$</span>
On the LHS- the product of three numbers <span class="math-container">$a, b, c$</span> consists of product of all their factors.
On the RHS- The LCM contains all the factors of <span class="math-container">$a,b,c$</span> with common factors only taken once. The pairwise HCF's get us the pair-wise common factors not accounted for by the LCM. However, now the factors common to all the three numbers have been counted 4 times (with the LCM as well as with each pair-wise HCF's), instead of being counted 3 times (once for each number). To fix this we divide the entire product by <span class="math-container">$HCF(a,b,c)$</span> which gets us the factors common to all the three numbers.
We can construct a general proof of the theorem based on above argument as well.</p>
<p>Alternatively, we can see that each of the three numbers <span class="math-container">$a,b,c$</span> (for example, <span class="math-container">$a$</span>) is made of four kinds of factors:</p>
<p>-Factors only in <span class="math-container">$a$</span> (say product of these factors is <span class="math-container">$\alpha$</span>)</p>
<p>-Factors common between <span class="math-container">$a,b$</span> (say product of these factors is <span class="math-container">$x$</span>)</p>
<p>-Factors common between <span class="math-container">$a,c$</span> (say product of these factors is <span class="math-container">$y$</span>)</p>
<p>-Factors common between all three numbers (say product of these factors is <span class="math-container">$w = HFC(a,b,c)$</span>).</p>
<p>Thus</p>
<p><span class="math-container">$a = \alpha.x.y.w$</span></p>
<p>Similarly,</p>
<p><span class="math-container">$b = \beta.x.z.w$</span> and</p>
<p><span class="math-container">$c = \gamma.y.z.w$</span></p>
<p>Therefore, by multiplying the equations for three numbers we get,</p>
<p><span class="math-container">$abc = \alpha.\beta.\gamma.x^2.y^2.z^2.w^3.$</span></p>
<p>or, <span class="math-container">$abc = \frac{\alpha.\beta.\gamma.x^2y^2.z^2.w^4}{w}$</span></p>
<p>but, <span class="math-container">$x.w=HCF(a,b)$</span> and <span class="math-container">$y.w=HCF(b,c)$</span> and <span class="math-container">$z.w=HCF(c,a)$</span> and <span class="math-container">$w=HCF(a,b,c)$</span></p>
<p>therefore,</p>
<p><span class="math-container">$abc = \frac{\alpha.\beta.\gamma.x.y.z.w.HCF(a,b).HCF(b,c).HCF(c,a)}{HCF(a,b,c)}$</span></p>
<p>finally, we also have that <span class="math-container">$LCM(a,b,c)=\alpha.\beta.\gamma.x.y.z.w$</span></p>
<p>thus, <span class="math-container">$$abc = \frac{LCM(a,b,c).HCF(a,b).HCF(b,c).HCF(c,a)}{HCF(a,b,c)}$$</span></p>
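<p>The identity itself is easy to spot-check for random triples (a sketch; <code>lcm3</code> is a helper defined here, not a library function):</p>

```python
from math import gcd
from random import Random

def lcm3(a, b, c):
    l = a * b // gcd(a, b)
    return l * c // gcd(l, c)

rng = Random(1)
for _ in range(1000):
    a, b, c = (rng.randint(1, 500) for _ in range(3))
    g = gcd(gcd(a, b), c)
    rhs = a * b * c * g // (gcd(a, b) * gcd(b, c) * gcd(c, a))
    assert lcm3(a, b, c) == rhs
```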
|
3,522,867 | <p>Consider the sequence <span class="math-container">$\{x_n\}_{n\ge1}$</span> defined by <span class="math-container">$$x_n=\sum_{k=1}^n\frac{1}{\sqrt{k+1}+\sqrt{k}}, \forall n\in\mathbb{N}.$$</span> Is <span class="math-container">$\{x_n\}_{n\ge 1}$</span> bounded or unbounded?</p>
<p>I solved the problem as stated in the answer posted by me. Is it possible to solve the problem in a better and more rigorous manner?</p>
| Eduline | 743,749 | <p>Here goes the solution I found out. Let <span class="math-container">$a_k$</span> be defined as <span class="math-container">$$a_k=\frac{1}{\sqrt{k}+\sqrt{k+1}}, \forall k\in\mathbb{N}.$$</span> Then, <span class="math-container">$$x_n=\sum_{k=1}^n a_k, \forall n\in\mathbb{N}.$$</span> Now <span class="math-container">$$a_k=\frac{1}{\sqrt{k}+\sqrt{k+1}}=\frac{\sqrt{k+1}-\sqrt{k}}{(k+1)-k}=\sqrt{k+1}-\sqrt{k}, \forall k\in\mathbb{N}.$$</span></p>
<p>This implies that <span class="math-container">$$x_n=\sqrt{n+1}-1, \forall n\in\mathbb{N}.$$</span> Now intuition says that <span class="math-container">$\{x_n\}_{n\ge 1}$</span> is an unbounded sequence. But let us try to prove it rigorously. </p>
<p>Let us assume that <span class="math-container">$\{x_n\}_{n\ge 1}$</span> is bounded above. This implies that, we can find <span class="math-container">$M\in\mathbb{R}$</span> such that <span class="math-container">$x_n\le M, \forall n\in\mathbb{N}$</span>. Now let <span class="math-container">$\lceil M\rceil=n_1\implies M\le n_1.$</span> </p>
<p>Now <span class="math-container">$x_{n_1^2+4n_1+3}=\sqrt{n_1^2+4n_1+3+1}-1=\sqrt{(n_1+2)^2}-1=n_1+1.$</span> </p>
<p>This implies that <span class="math-container">$x_{n_1^2+4n_1+3}=n_1+1>n_1\ge M\implies x_{n_1^2+4n_1+3}>M.$</span> But we have assumed that <span class="math-container">$x_n\le M, \forall n\in\mathbb{N}.$</span> Contradiction. </p>
<p>This implies that <span class="math-container">$\{x_n\}_{n\ge 1}$</span> is not bounded above, which in turn implies that <span class="math-container">$\{x_n\}_{n\ge 1}$</span> is not bounded. </p>
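<p>A quick numerical confirmation of the telescoping identity used above (a sketch):</p>

```python
import math

def x(n):
    # the partial sum x_n = sum_{k=1}^n 1 / (sqrt(k+1) + sqrt(k))
    return sum(1 / (math.sqrt(k + 1) + math.sqrt(k)) for k in range(1, n + 1))

for n in (1, 10, 100, 10_000):
    assert abs(x(n) - (math.sqrt(n + 1) - 1)) < 1e-9   # x_n = sqrt(n+1) - 1
```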
|
3,313,603 | <p>I am assigned a question which states that microbial growth is exponential at a rate of (15/100) per hour. Where y(0)=500, how many microbes will there be in 15 hours?</p>
<p>I know this question is generally modelled as: </p>
<p><span class="math-container">$y=y_0*e^{kt}$</span></p>
<p>However, my solution ended up being modelled as :</p>
<p><span class="math-container">$y=e^{kt}*e^{c}$</span> via <span class="math-container">$y'=ky$</span></p>
<p>The resulting equation was:</p>
<p><span class="math-container">$y=e^{kt}*e^{\ln 500}$</span></p>
<p>I ended up getting <span class="math-container">$y(15)=4743.86$</span> which is the same answer for both methods.</p>
<p>I'm wondering how the general equation was modelled, and if someone could explain how I could tidy up my equations.</p>
<p>Thanks. </p>
| Amy Ngo | 692,535 | <p><span class="math-container">$e^{\ln 500}$</span> is equal to 500. Notice that if <span class="math-container">$y = e^x$</span>, taking the natural log of both sides gives you <span class="math-container">$\ln y = \ln (e^x) = x$</span>. Thus, to undo this operation, take each side as the power of <span class="math-container">$e$</span> to get <span class="math-container">$e^{\ln y} = e^x$</span>, which must be <span class="math-container">$y = e^x$</span>. </p>
|
4,086,995 | <p>Let <span class="math-container">$ABCD$</span> be a rectangle.</p>
<p>Given:</p>
<p><span class="math-container">$A(2;1)$</span></p>
<p><span class="math-container">$C(5;7)$</span></p>
<p><span class="math-container">$\overline{BC}=2\overline{AB}$</span>.</p>
<p>I tried to solve it, but after using the Pythagoras theorem I got that <span class="math-container">$\overline{AB}=\overline{DC}=3$</span> and <span class="math-container">$\overline{AD}=\overline{BC}=6$</span> but I don't know what I do from here.</p>
<p>How can I get points <span class="math-container">$B$</span> and <span class="math-container">$D$</span>? (There are two answers)</p>
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$f$</span> is an automorphism of <span class="math-container">$\Bbb Z$</span>, then there is some <span class="math-container">$m\in\Bbb Z$</span> such that <span class="math-container">$f(m)=1$</span>, and therefore <span class="math-container">$mf(1)=1$</span>. But this is possibly only if <span class="math-container">$m=\pm1$</span>. So, either you have <span class="math-container">$f(1)=1$</span> or <span class="math-container">$f(-1)=1(\iff f(1)=-1)$</span>.</p>
<p>But if you are working with <span class="math-container">$\Bbb Z_n$</span>, then there is no such restriction that <span class="math-container">$mf(1)=1\iff m=\pm1$</span>.</p>
|
3,251,337 | <p>Let <span class="math-container">$E,F,K,L$</span> be points on the sides <span class="math-container">$AB,BC,CD,DA$</span> of a square <span class="math-container">$ABCD$</span>, respectively. Show that if <span class="math-container">$EK \perp FL$</span>, then <span class="math-container">$EK=FL$</span>.</p>
<p>I need help proving something like this:</p>
<p><a href="https://i.stack.imgur.com/WRNge.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRNge.png" alt="enter image description here"></a></p>
<p>Any hints?</p>
<p>Edited: I wrote the principal statement wrong, now it's correct.</p>
<p>I saw a question in this forum about a similar problem, the problem was like this:
<a href="https://i.stack.imgur.com/sdxAI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sdxAI.png" alt="enter image description here"></a></p>
<p>And I thought that I could modify the problem like the first image, and after drawing a lot of squares in GeoGebra, I think it's true, but I don't know how to prove it.</p>
| Community | -1 | <p>After William Elliot's feedback on your proof and <a href="https://math.stackexchange.com/questions/3251331/boundary-points-and-metric-space#comment6686712_3251331">this comment</a> of yours, I don't think there is much that needs to be clarified. Still if you have anything specific regarding your proof to ask me, I welcome you to come <a href="https://chat.stackexchange.com/rooms/72901/general-chat-with-user-170039">here</a>. </p>
<p>In any case, let me try to write a proof that I believe is in line with your attempt.</p>
<blockquote>
<p><span class="math-container">\begin{align*}E\cap \partial{E}=\emptyset&\implies E\cap(\overline{E}\cap \overline{X\setminus E})=\emptyset\\&\implies (E\cap\overline{E})\cap \overline{X\setminus E}=\emptyset\\&\implies E\cap \overline{X\setminus E}=\emptyset\\&\implies \overline{X\setminus E}\subseteq X\setminus E\\&\implies \overline{X\setminus E}=X\setminus E\end{align*}</span>This shows that <span class="math-container">$X\setminus E$</span> is closed and hence <span class="math-container">$E$</span> is open.</p>
</blockquote>
|
1,925,245 | <p>Find the eigenvalues of
$$
\left(\begin{matrix}
C_1 & C_1 & C_1&\cdots&C_1 \\
C_2 & C_2 & C_2&\cdots &C_2 \\
C_3 & C_3 & C_3&\cdots&C_3 \\
\vdots&\ & \ & \ & \vdots\\
C_n & C_n & C_n&\cdots&C_n \\
\end{matrix}\right)
$$</p>
<p>My approach:
$$\left|
\begin{matrix}
C_1-λ & C_1 & C_1&...&C_1 \\
C_2 & C_2-λ & C_2&...&C_2 \\
C_3 & C_3 & C_3-λ&...&C_3 \\
\vdots & \ & \ & \ & \vdots\\
C_n & C_n & C_n&...&C_n-λ
\end{matrix}\right|
$$
and I eventually have
\begin{align}
\ -λ^3 +(C_1+C_2+C_3)λ^2-C_1C_3λ-C_1C_2=0
\end{align} </p>
<p>and then I'm lost at this stage...</p>
| Community | -1 | <p>Since $\operatorname{rank}(A)=1$, $A$ has the eigenvalue $0$ with multiplicity $n-1$. The remaining eigenvalue is $\operatorname{trace}(A)=C_1+\cdots+C_n$.</p>
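<p>A quick numeric sanity check (a sketch in Python with a made-up vector C; no linear-algebra library is needed, since both eigenvalue claims can be verified directly):</p>

```python
# Sketch: for the rank-1 matrix A with A[i][j] = C[i], check the two
# eigenvalue claims directly.
C = [2.0, 3.0, 5.0]
n = len(C)
A = [[C[i] for _ in range(n)] for i in range(n)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Eigenvalue trace(A) = sum(C), with eigenvector v = C:  A v = (sum C) v
trace = sum(C)
Av = matvec(A, C)
assert all(abs(Av[i] - trace * C[i]) < 1e-9 for i in range(n))

# Eigenvalue 0 with multiplicity n-1: any v with sum(v) = 0 is in the kernel
for v in ([1, -1, 0], [1, 0, -1]):
    assert all(abs(x) < 1e-9 for x in matvec(A, v))
```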
|
3,995,046 | <p>Refer to <a href="https://oeis.org/A340800" rel="nofollow noreferrer">https://oeis.org/A340800</a> to notice that the number of primes between two primes having the same last digit is increasing as the primes themselves increase. Is there an explanation for this? How can the size of primes have any influence on the last digit of the following primes?</p>
| Empy2 | 81,790 | <p>Supposing the last digits of primes form a random sequence, from the set 1,3,7,9.<br />
Let the prime we want end in a 1. The next <span class="math-container">$n-1$</span> primes must end with something else, and the <span class="math-container">$n$</span>th end with a 1. This happens with probability <span class="math-container">$$\left(\frac34\right)^{n-1}\frac14$$</span><br />
So it is rarer for a prime to fall in a later slot of A340800, and hence the first prime in each slot tends to be larger.</p>
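<p>A small simulation of this heuristic (a Python sketch; the bound 100000 is arbitrary, and actual primes are known to deviate somewhat from the independence model, so only rough agreement with the geometric probabilities is expected):</p>

```python
# Model: the slot n of the next prime with the same last digit is
# geometric, P(n) = (3/4)^(n-1) * (1/4).
N = 100_000
sieve = [True] * N
sieve[0] = sieve[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = [False] * len(sieve[i * i::i])
primes = [p for p in range(11, N) if sieve[p]]  # skip 2, 3, 5, 7

counts = {}
for i, p in enumerate(primes):
    for n in range(1, len(primes) - i):
        if primes[i + n] % 10 == p % 10:
            counts[n] = counts.get(n, 0) + 1
            break

total = sum(counts.values())
for n in sorted(counts)[:5]:
    empirical = counts[n] / total
    model = (3 / 4) ** (n - 1) / 4
    print(n, round(empirical, 4), round(model, 4))
```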
|
2,811,825 | <p>I'm trying to solve the following problem (under which is my attempt at it)</p>
<p><a href="https://i.stack.imgur.com/qxHGI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qxHGI.png" alt="enter image description here"></a></p>
<p>I'm confused on how to solve for the expectation here in a non-conditional manner (summing over all values of n) without doing it manually. Any help would be great! :)</p>
| Sungjin Kim | 67,070 | <p>Consider the following very bad scenario: </p>
<p>$$A_0=p^{-1}(A)=[0,1/2], \ \ B_0=p^{-1}(B)=[3/4,1].$$
Then your process may give
$$
I_0=[0,1],
$$
$$
I_1=[1/2,1],
$$
$$
I_2=[1/2,3/4], \ \ I_3=[1/2,5/8], \ \ I_4=[1/2, 9/16], \cdots
$$</p>
<p>Then the intersection of the nested intervals is $\{1/2\}$, and it belongs to $A_0$. </p>
<p>Because of this, the nested intervals method is not a good choice. </p>
|
2,811,825 | <p>I'm trying to solve the following problem (under which is my attempt at it)</p>
<p><a href="https://i.stack.imgur.com/qxHGI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qxHGI.png" alt="enter image description here"></a></p>
<p>I'm confused on how to solve for the expectation here in a non-conditional manner (summing over all values of n) without doing it manually. Any help would be great! :)</p>
| William Elliot | 426,203 | <p>P = p([0,1]) is connected.<br>
P $\cap$ A and P $\cap$ B are separated and nonempty.<br>
So P $\cap$ (A $\cup$ B) is disconnected.<br>
Thus P $\ne$ P $\cap$ (A $\cup$ B), since P is connected.<br>
Hence there is the sought t in [0,1] with p(t) $\notin$ A $\cup$ B.</p>
|
784,673 | <p>Consider stationary autoregression AR(1):
$$u_t=\beta u_{t-1}+ \varepsilon_t, \quad t \in \mathbb{Z}.$$ </p>
<p>$\{\varepsilon_t\}$ - i.i.d. $N(0,\sigma^2)$ random variables.</p>
<p>I know that $\mathbf{E}u_t =0$ and $\mathbf{E}u_t^2=\sigma^2/(1-\beta^2).$</p>
<p>The question is: what is the distribution of $u_t$? What is its CDF or PDF?</p>
<p>Thanks in advance.</p>
| square_one | 148,176 | <p>That is one of the objectives of the whole modeling exercise. </p>
<p>We desire to obtain the distribution of the solution to a time series. Thus, we need to obtain the distribution of the error terms since the AR(1) equation is determined simply by past observations of the random variable X and the error terms. So, if we knew the distribution of the error terms, we would have the distribution of the AR(1) equation. In addition, we would need to know the value of ρ , which we find an estimator for using the method of least squares.</p>
<p>First, we observe a plot of the data to get an idea of what trends there might be. We also create a histogram of the observed data to begin to get an idea of what distribution we could use to test for goodness-of-fit. It may be necessary to “play around” with the parameters of the histogram by decreasing or increasing the interval length for the data to be binned. This is done in order to try to fit the shape of the histogram as closely as possible to the curve of a specific probability distribution. Now we calculate the estimators for ρ and our distribution function and use the chi-squared goodness-of-fit test to determine whether the observed data could be of the hypothesized distribution.</p>
<p>For full details, please read from the source: <a href="http://www.math.utah.edu/~zhorvath/ar1.pdf" rel="nofollow">http://www.math.utah.edu/~zhorvath/ar1.pdf</a></p>
|
784,673 | <p>Consider stationary autoregression AR(1):
$$u_t=\beta u_{t-1}+ \varepsilon_t, \quad t \in \mathbb{Z}.$$ </p>
<p>$\{\varepsilon_t\}$ - i.i.d. $N(0,\sigma^2)$ random variables.</p>
<p>I know that $\mathbf{E}u_t =0$ and $\mathbf{E}u_t^2=\sigma^2/(1-\beta^2).$</p>
<p>The question is: what is the distribution of $u_t$? What is its CDF or PDF?</p>
<p>Thanks in advance.</p>
| Did | 6,179 | <p>Iterating the recursion, one sees that, for every $t$,
$$u_t=\sum_{s\geqslant0}\beta^s\varepsilon_{t-s},
$$
where the family $(\varepsilon_s)$ is i.i.d. normal $(0,\sigma^2)$, hence each $u_t$ is normal $(0,\tau^2)$, where $\tau^2$ is indeed
$$
\tau^2=\sum_{s\geqslant0}\beta^{2s}\sigma^2=\frac{\sigma^2}{1-\beta^2}.
$$</p>
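<p>This can also be checked numerically (a Python sketch; the parameter values are arbitrary):</p>

```python
import random

# Simulate the AR(1) recursion u_t = beta * u_{t-1} + eps_t and compare
# the sample variance with the stationary value sigma^2 / (1 - beta^2).
random.seed(0)
beta, sigma = 0.5, 1.0
u, xs = 0.0, []
for _ in range(200_000):
    u = beta * u + random.gauss(0.0, sigma)
    xs.append(u)
xs = xs[1_000:]                          # discard burn-in toward stationarity
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
theory = sigma ** 2 / (1 - beta ** 2)    # = 4/3 here
assert abs(mean) < 0.05
assert abs(var - theory) < 0.05
```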
|
3,124,285 | <p>I have written a proof, and I would appreciate verification. The problem is picked from "Set Theory and Matrices" by I. Kaplansky.</p>
<hr>
<p><em>Proof</em>. Let <span class="math-container">$a_1=f(x)$</span> and <span class="math-container">$a_2=f(y)$</span>, then</p>
<p><span class="math-container">$g(a_1)=g(a_2) \Longrightarrow g(f(x))=g(f(y)) \overset{def}{\Longrightarrow} x=y \Longrightarrow f(x)=f(y) \Longrightarrow a_1=a_2$</span></p>
<p>Therefore, according to transitivity, we get that <span class="math-container">$\,g(a_1)=g(a_2) \Longrightarrow a_1=a_2$</span>, which is (one of) the definition(s) of an injective function. Thus, <span class="math-container">$g$</span> is injective if <span class="math-container">$gf$</span> is injective. QED</p>
<hr>
<p>My insecurity is foremost the first step as well as some unclarity about explicitly highlight that <span class="math-container">$f$</span> is surjective. </p>
| Hector Blandin | 170,571 | <p>Let <span class="math-container">$a_1,a_2\in\mathrm{dom}(g)$</span>. Since <span class="math-container">$f$</span> is surjective, for each pair <span class="math-container">$a_1,a_2\in\mathrm{dom}(g)$</span> there exist <span class="math-container">$x,y\in\mathrm{dom}(f)$</span> such that <span class="math-container">$a_1=f(x)$</span> and <span class="math-container">$a_2=f(y)$</span>. We want to see that <span class="math-container">$g$</span> is injective whenever <span class="math-container">$gf$</span> is injective. For that, let's suppose that <span class="math-container">$g(a_1)=g(a_2)$</span>; we have <span class="math-container">$g(f(x))=g(a_1)=g(a_2)=g(f(y))$</span>. Since <span class="math-container">$gf$</span> is injective we get</p>
<p><span class="math-container">$$ g(f(x))=g(f(y)) \ {\Longrightarrow} \ x=y $$</span> </p>
<p>this implies that <span class="math-container">$ f(x)=f(y) \Longrightarrow a_1=a_2$</span></p>
<p>Therefore, <span class="math-container">$\,g(a_1)=g(a_2) \Longrightarrow a_1=a_2$</span>. Thus, <span class="math-container">$g$</span> is injective if <span class="math-container">$gf$</span> is injective.</p>
|
2,804,716 | <p>Given this Maclaurin series:</p>
<p>$$f(x)=\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}$$</p>
<p>And the following Catenary curve, assuming that $a=1$:</p>
<p>$$g(x)=\frac{a(e^\frac{x}{a}+e^{-\frac{x}{a}})}{2}$$</p>
<p>Why does $f(x)=g(x)$ seem to hold true (at least when graphed)?</p>
<p>I'm looking for a purely algebraic reason here as to why these two are equal, ideally in terms that are at or around a high-school calculus level (where I'm at currently).</p>
<p>If I am mistaken, and these two are not equal to each other, an explanation of why that is would be great too. </p>
| user | 505,767 | <p>Let consider the function</p>
<p>$$C(x)=\frac{e^x+e^{-x}}{2}$$</p>
<p>and let define also</p>
<p>$$S(x)=C'(x)=\frac{e^x-e^{-x}}{2} \implies S'(x)=C(x)$$</p>
<p>thus since</p>
<ul>
<li>$C(0)=1$</li>
<li>$C'(0)=S(0)=0$</li>
<li>$C''(0)=S'(0)=C(0)=1$</li>
<li>...</li>
</ul>
<p>by Taylor's expansion at $x=0$, that is Maclaurin's expansion, we have that</p>
<p>$$C(x)=\frac{e^x+e^{-x}}{2}=1+\frac{x^2}{2!}+\frac{x^4}{4!}+\frac{x^6}{6!}+\dots+\frac{x^{2k}}{{2k}!}+\dots=\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}$$</p>
<p>which turns out to converge for any $x\in \mathbb{R}$.</p>
|
2,804,716 | <p>Given this Maclaurin series:</p>
<p>$$f(x)=\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}$$</p>
<p>And the following Catenary curve, assuming that $a=1$:</p>
<p>$$g(x)=\frac{a(e^\frac{x}{a}+e^{-\frac{x}{a}})}{2}$$</p>
<p>Why does $f(x)=g(x)$ seem to hold true (at least when graphed)?</p>
<p>I'm looking for a purely algebraic reason here as to why these two are equal, ideally in terms that are at or around a high-school calculus level (where I'm at currently).</p>
<p>If I am mistaken, and these two are not equal to each other, an explanation of why that is would be great too. </p>
| Emilio Novati | 187,568 | <p>The starting point is the series expansion of the exponential function:
$$
e^x=\sum_{k=0}^\infty \frac{x^k}{k!}
$$
substituting in $g(x)$ with $a=1$ we have:
$$
g(x)=\frac{e^x+e^{-x}}{2} = \frac{1}{2}\left(\sum_{k=0}^\infty \frac{x^k}{k!}+\sum_{k=0}^\infty \frac{(-x)^k}{k!} \right)=\frac{1}{2}\left[\sum_{k=0}^\infty \left(\frac{x^k}{k!}+ \frac{(-x)^k}{k!} \right)\right]
$$
now note that for $k$ odd the terms in the series are null, and for $k$ even the terms become $2\frac{x^k}{k!}$, so in the series we have only the even terms $k=2n$ and the function becomes:
$$
g(x)=\frac{1}{2}\sum_{n=0}^\infty\frac{2x^{2n}}{(2n)!}=\sum_{n=0}^\infty\frac{x^{2n}}{(2n)!}
$$</p>
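<p>A quick numerical confirmation of the expansion (a Python sketch comparing a partial sum with the closed form):</p>

```python
import math

# Partial sums of sum_{n} x^(2n) / (2n)! versus (e^x + e^-x) / 2.
def cosh_series(x, terms=20):
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    g = (math.exp(x) + math.exp(-x)) / 2
    assert abs(cosh_series(x) - g) < 1e-12
```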
|
3,950,808 | <p><em>(note: this is very similar to <a href="https://math.stackexchange.com/questions/188252/spivaks-calculus-exercise-4-a-of-2nd-chapter">a related question</a> but as I'm trying to solve it without looking at the answer yet, I hope the gods may humor me anyways)</em></p>
<p>I'm self-learning math, and an <a href="https://www.reddit.com/r/math/comments/kcb1cd/how_do_i_gain_proficiency_in_mathematics_through/gfqn6y2/" rel="nofollow noreferrer">answer</a> to an /r/math post about self-learning was finally enough to motivate me to try getting feedback on this site :) . After reading throughout the internet, I've decided to start with Spivak's <em>Calculus</em>. I'm loving the book thankfully, but I'm stuck on this problem and I don't want to look at the answer quite yet.</p>
<blockquote>
<p>4 . (a) Prove that
<span class="math-container">$$\sum_{k=0}^l \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l}$$</span>.
Hint: Apply the binomal theorem to <span class="math-container">$(1+x)^n(1+x)^m$</span> .</p>
</blockquote>
<p>I've done all the prior problems, including Problem 3 (proving the Binomial Theorem) which is obviously closely tied to this, but even with the hint it feels like there's too much I don't know. I applied the hint and found:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k$$</span></p>
<p>And, setting <span class="math-container">$x=1$</span> got something even more interesting:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i}\right)\left(\sum_{j=0}^m \binom{m}{j}\right) = \sum_{k=0}^{n+m} \binom{n+m}{k}$$</span></p>
<p>However, I don't know where to go after that...is there some property of multiplying sums that I need to prove first, to relate the multiplication of two sums of binomials with the sum of the multiplication of two binomials?</p>
<p>Thank you!</p>
| Christoph | 86,801 | <p>Instead of setting <span class="math-container">$x=1$</span> in
<span class="math-container">$$
\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k,
$$</span>
expand the LHS and collect terms of degree <span class="math-container">$k$</span> together to obtain
<span class="math-container">$$
\sum_{k=0}^{n+m} \Bigg( \sum_{\substack{0\le i\le n\\0\le j\le m\\i+j=k}} \binom{n}{i} \binom{m}{j}\Bigg) x^k = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k.
$$</span>
Since <span class="math-container">$i+j=k$</span> determines <span class="math-container">$j=k-i$</span> this yields
<span class="math-container">$$
\sum_{k=0}^{n+m} \Bigg( \sum_{i=0}^k \binom{n}{i} \binom{m}{k-i}\Bigg) x^k = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k.
$$</span>
Comparing coefficients yields the desired identity.</p>
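<p>The identity is also easy to spot-check numerically (a Python sketch; note that <code>math.comb</code> returns 0 when the lower index exceeds the upper one, which matches the convention used in the sum):</p>

```python
from math import comb

# Check sum_k C(n,k) C(m,l-k) = C(n+m,l) for a range of small parameters.
for n in range(8):
    for m in range(8):
        for l in range(n + m + 1):
            lhs = sum(comb(n, k) * comb(m, l - k) for k in range(l + 1))
            assert lhs == comb(n + m, l)
```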
|
1,945,116 | <p>I need a simple definition of disjoint cycles in symmetric groups. I already understand what cycles and transpositions are. I need a simple definition and, if possible, a clear example. Thanks in advance.</p>
| P Vanchinathan | 28,915 | <p>Consider a permutation of a set with the property that the elements of two disjoint subsets are each permuted among themselves, as a cycle within each subset.</p>
<p>$S=\{1,2,3,4,5,6,7,8,9\};\ A=\{1,4,9\};\ B=\{2,3,5,6,7,8\}$</p>
<p>The permutation of $S$ sending 1 to 4, 4 to 9, 9 to 1, 2 to 3, 3 to 5, 5 to 6, 6 to 7, 7 to 8 and 8 to 2 is not a cycle, but consists of two disjoint cycles of length 3 and 6 respectively. (You can think of them going around in parallel in two different circles.) The concept is not limited to just two cycles. Also, any fixed point of the permutation is regarded as a cycle of length 1.</p>
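<p>A short Python sketch that recovers the disjoint cycles of this permutation:</p>

```python
# The example permutation, as a mapping element -> image.
perm = {1: 4, 4: 9, 9: 1, 2: 3, 3: 5, 5: 6, 6: 7, 7: 8, 8: 2}

def cycles(perm):
    """Decompose a permutation (given as a dict) into disjoint cycles."""
    seen, out = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        out.append(cyc)
    return out

assert cycles(perm) == [[1, 4, 9], [2, 3, 5, 6, 7, 8]]
```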
|
3,308,291 | <p>I have an array of numbers (a column in Excel). I calculated half of the set's total, and now I need the minimum number of the set's values whose sum is greater than or equal to half of the total. </p>
<p>Example:</p>
<pre><code>The set: 5, 5, 3, 3, 2, 1, 1, 1, 1
Half of the total is: 11
The least amount of set values that need to be added to get 11 is 3
</code></pre>
<p>What is the formula to get '3'?</p>
<p>It's probably something basic, but I have not used this in a bit, hence I may have just forgotten it.</p>
<p>Normally I would use a simple while loop with a sort, but I am in Excel, so I was wondering if there is a more elegant solution. </p>
<p>P.S. I have the values sorted in descending order to make things easier.</p>
<p>EDIT: Example</p>
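<p>For reference, the while-loop idea mentioned above can be sketched in Python (a greedy approach: since the values are taken in descending order, each step adds the largest available value, so the half-total is reached with the fewest values):</p>

```python
# Count how many of the largest values are needed to reach half the total.
def min_count_to_half(values):
    half = sum(values) / 2
    running, count = 0, 0
    for v in sorted(values, reverse=True):
        running += v
        count += 1
        if running >= half:
            return count

# The example from the question: total 22, half 11, answer 3 (5 + 5 + 3).
assert min_count_to_half([5, 5, 3, 3, 2, 1, 1, 1, 1]) == 3
```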
| Cameron Buie | 28,900 | <p>It is not closed. <span class="math-container">$1+1=2,$</span> which is not in <span class="math-container">$\{0,1,4\}.$</span></p>
|
2,582,046 | <p>My professor showed the following false proof, which showed that complex numbers do not exist. We were told to find the point where an incorrect step was taken, but I could not find it. Here is the proof: (Complex numbers are of the form <span class="math-container">$\rho e^{i\theta}$</span>, so the proof begins there) <span class="math-container">$$\large\rho e^{i\theta} = \rho e^{\frac{2 \pi i \theta}{2\pi}} = \rho (e^{2\pi i})^{\frac{\theta}{2\pi}} = \rho (1)^{\frac{\theta}{2\pi}} = \rho$$</span>
<span class="math-container">$$Note: e^{i\pi} = -1, e^{2\pi i} = (-1)^2 = 1$$</span>
Since we started with the general form of a complex number and simplified it to a real number (namely, <span class="math-container">$\rho$</span>), the proof can claim that only real numbers exist and complex numbers do not. My suspicion is that the error occurs in step <span class="math-container">$4$</span> to <span class="math-container">$5$</span> , but I am not sure if that really is the case.</p>
| José Carlos Santos | 446,262 | <p>The error lies in assuming that $(\forall a,b\in\mathbb{C}):e^{ab}=(e^a)^b$. </p>
<p>It's worse than wrong; it doesn't make sense. The reason why it doesn't make sense is because $e^a$ can be an arbitrary complex number (except that it can't be $0$). And what is $z^w$, where $z,w\in\mathbb C$? A reasonable definition is that it means $e^{w\log z}$, where $\log z$ is <em>a</em> logarithm of $z$. Problem: every non-zero complex number has infinitely many logarithms: if a number $\omega$ is a logarithm, then every number of the form $\omega+2k\pi i$ ($k\in\mathbb Z$) is also a logarithm.</p>
|
386,073 | <p>For which values of a do the following vectors form a <strong><em>linearly dependent</em></strong> set in $R^3$?</p>
<p>$$V_1= \left(a,\, \frac{-1}{2}, \,\frac{-1}{2}\right),\;\; V_2= \left(\frac{-1}{2},\, a, \,\frac{-1}{2}\right),\; \;V_3= \left(\frac{-1}{2}, \,\frac{-1}{2},\, a\right)$$</p>
<p>Please would it be possible to <strong><em>advise</em></strong> me how I would go about solving this?<br>
I'm not sure if I should be using only row reduction because I think it may relate to eigenvalues and eigenvectors but we haven't covered those concepts in class as yet.<br>
Am I supposed to find the determinant?</p>
| amWhy | 9,003 | <p>We want to find all (only) those value(s) that will make the vectors linearly <em>dependent</em>. </p>
<p>Can you see, for example, why $\,a = -\frac 12\,$ <em>is a problem</em>? Why would $\bf \,a = -\frac 12\,$ make the vectors linearly dependent? And why would $\bf\, a = 1\,$ make the vectors linearly dependent?</p>
<p>A set of vectors, in your case, in $\mathbb R^3$, is linearly dependent if any one of them can be written as a linear combination of the others. In either of the above cases, $\,a = -\frac 12, \,\text{ or}\; a = 1,\,$ one or more of the vectors can be expressed as a linear combination of the others.</p>
<hr>
<p>Recall: The determinant of an $n\times n$ matrix equals zero $\iff$ (when and only when) its column vectors are linearly dependent. </p>
<p>So you can also solve for $a$ by setting up the matrix using your vectors as columns in a $3 \times 3$ matrix; <strong><em>find the determinant</em></strong>, which will be a <em>function</em> of $a$, set it equal to zero, and solve for the $a$ values that make the determinant equal to zero (find the zeros of the determinant). <strong><em>Those and only those values</em></strong> are values for which the vectors are linearly <em>dependent</em>. </p>
|
386,073 | <p>For which values of a do the following vectors form a <strong><em>linearly dependent</em></strong> set in $R^3$?</p>
<p>$$V_1= \left(a,\, \frac{-1}{2}, \,\frac{-1}{2}\right),\;\; V_2= \left(\frac{-1}{2},\, a, \,\frac{-1}{2}\right),\; \;V_3= \left(\frac{-1}{2}, \,\frac{-1}{2},\, a\right)$$</p>
<p>Please would it be possible to <strong><em>advise</em></strong> me how I would go about solving this?<br>
I'm not sure if I should be using only row reduction because I think it may relate to eigenvalues and eigenvectors but we haven't covered those concepts in class as yet.<br>
Am I supposed to find the determinant?</p>
| egreg | 62,967 | <p>Row reduction has little to do with eigenvalues, but it has <em>much</em> to do with linear dependence. Actually it's the method of choice.</p>
<p>It's better to change the order into $v_3,v_2,v_1$</p>
<p>$$
\begin{bmatrix}
-\frac{1}{2} & -\frac{1}{2} & a \\
-\frac{1}{2} & a & -\frac{1}{2} \\
a & -\frac{1}{2} & -\frac{1}{2}
\end{bmatrix}\to
\begin{bmatrix}
1 & 1 & -2a \\
1 & -2a & 1 \\
-2a & 1 & 1
\end{bmatrix}\to
\begin{bmatrix}
1 & 1 & -2a \\
0 & -2a-1 & 1+2a \\
0 & 1+2a & 1-4a^2
\end{bmatrix}
$$
If $a=-1/2$ the matrix becomes
$$
\begin{bmatrix}
1 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0
\end{bmatrix}
$$
Otherwise we can continue the reduction
$$
\to
\begin{bmatrix}
1 & 1 & -2a \\
0 & 1 & -1 \\
0 & 0 & 2 + 2a - 4a^2
\end{bmatrix}
$$
The last row is zero (and the vectors are linearly dependent) if and only if
$$
2a^2 - a - 1 = 0
$$
that is
$$
a=\frac{1\pm\sqrt{1+8}}{4}=\frac{1\pm3}{4}
$$
which gives $a=-1/2$ (not good now as we were assuming $a\ne-1/2$) or $a=1$.</p>
<p>Conclusion: the three vectors are linearly dependent if and only if $a=-1/2$ or $a=1$.</p>
<hr>
<p>For $a=-1/2$, the reduced form says that the general form for getting</p>
<p>$$
\alpha_3v_3 + \alpha_2v_2 +\alpha_1 v_1=0
$$
is to set arbitrary values for $\alpha_2$ and $\alpha_1$ and taking $\alpha_3=-\alpha_2-\alpha_1$. For instance, with $\alpha_1=1$ and $\alpha_2=0$ we get
$$
v_1-v_3=0.
$$</p>
<p>For $a=1$, the reduced form is</p>
<p>$$
\begin{bmatrix}
1 & 1 & -2 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}
$$
that can be further reduced to row echelon form</p>
<p>$$
\begin{bmatrix}
1 & 0 & -1 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}
$$
which says that the general form for getting</p>
<p>$$
\alpha_3v_3 + \alpha_2v_2 +\alpha_1 v_1=0
$$
is to set an arbitrary value for $\alpha_1$ and setting $\alpha_3=\alpha_1$, $\alpha_2=\alpha_1$; for instance, with $\alpha_1=1$, we have
$$
v_1+v_2+v_3=0.
$$</p>
<p>The order of the vectors is irrelevant, because addition of vectors is commutative.</p>
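<p>A quick numerical cross-check of the conclusion (a Python sketch expanding the 3×3 determinant directly):</p>

```python
# Determinant of a 3x3 matrix by cofactor expansion along the first row.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# The matrix with a on the diagonal and -1/2 elsewhere.
def M(a):
    return [[a, -0.5, -0.5], [-0.5, a, -0.5], [-0.5, -0.5, a]]

assert abs(det3(M(1.0))) < 1e-12      # dependent at a = 1
assert abs(det3(M(-0.5))) < 1e-12     # dependent at a = -1/2
assert abs(det3(M(0.0))) > 0.1        # independent at a = 0 (det = -1/4)
```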
|
357,101 | <p>Does there exist a minimal subshift <span class="math-container">$X$</span> with a point <span class="math-container">$x \in X$</span> such that <span class="math-container">$x_{(-\infty,0)}.x_0x_0x_{(0,\infty)} \in X$</span>?</p>
| Ilkka Törmä | 66,104 | <p>We can produce such a subshift by a standard hierarchical construction.
Let <span class="math-container">$w_{0,0} = 01$</span> and <span class="math-container">$w_{0,1} = 011$</span>.
For each <span class="math-container">$k \geq 0$</span>, define <span class="math-container">$w_{k+1,0} = w_{k,0} w_{k,0} w_{k,1}$</span> and <span class="math-container">$w_{k+1,1} = w_{k,0} w_{k,1} w_{k,1}$</span>.
Define <span class="math-container">$X$</span> by forbidding each word that doesn't occur in any <span class="math-container">$w_{k, b}$</span>.
Since each <span class="math-container">$w_{k+1,b}$</span> contains both <span class="math-container">$w_{k,0}$</span> and <span class="math-container">$w_{k,1}$</span>, it's easy to show that <span class="math-container">$X$</span> is minimal.</p>
<p>For <span class="math-container">$b \in \{0,1\}$</span>, define <span class="math-container">$x^b \in X$</span> as the "limit" of <span class="math-container">$(w_{k,b})_{k \geq 0}$</span> such that the central <span class="math-container">$01$</span> or <span class="math-container">$011$</span> is at the origin.
Because of the way we defined <span class="math-container">$w_{k,0}$</span> and <span class="math-container">$w_{k,1}$</span>, the only difference between the words, and hence the limiting configurations, is that single extra <span class="math-container">$1$</span>.
Concretely, <span class="math-container">$x^0$</span> and <span class="math-container">$x^1$</span> look like
<span class="math-container">$$
\cdots 0101011\;0101011\;01011011\;0101011\;010.1011\;01011011\;0101011\;01011011\;01011011 \cdots
$$</span>
and
<span class="math-container">$$
\cdots 0101011\;0101011\;01011011\;0101011\;010.11011\;01011011\;0101011\;01011011\;01011011 \cdots
$$</span>
(Spaces added for clarity.)</p>
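<p>A small Python sketch of the recursion, checking the containment property used for minimality:</p>

```python
# Build the words w_{k,b} from the recursion w_{k+1,0} = w_{k,0} w_{k,0} w_{k,1}
# and w_{k+1,1} = w_{k,0} w_{k,1} w_{k,1}, starting from w_{0,0} = "01",
# w_{0,1} = "011".
w = {(0, 0): "01", (0, 1): "011"}
for k in range(6):
    w[(k + 1, 0)] = w[(k, 0)] + w[(k, 0)] + w[(k, 1)]
    w[(k + 1, 1)] = w[(k, 0)] + w[(k, 1)] + w[(k, 1)]

# Each w_{k+1,b} contains both w_{k,0} and w_{k,1} -- the key fact
# behind minimality of X.
for k in range(6):
    for b in (0, 1):
        assert w[(k, 0)] in w[(k + 1, b)] and w[(k, 1)] in w[(k + 1, b)]
```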
|
4,537,685 | <p>Let <span class="math-container">$H$</span> be a Hilbert space, and let
<span class="math-container">$$H_n = \otimes_n H = \Big\{\sum_{i_1,\ldots, i_n} \alpha_{i_1, \ldots, i_n} \big(e_{i_1} \otimes \cdots \otimes e_{i_n}\big) : \sum_{i_1,\ldots, i_n} |\alpha_{i_1, \ldots, i_n}|^2<\infty \Big\}$$</span>
denote the <span class="math-container">$n$</span>-fold tensor product of <span class="math-container">$H$</span>. Here <span class="math-container">$\{e_i\}$</span> denotes a basis element for <span class="math-container">$H_i$</span> and <span class="math-container">$\alpha_{i_1, \ldots, i_n} \in \mathbb{C}$</span>. Please correct me if this definition is incorrect.</p>
<p>I am trying to understand a particular subset of <span class="math-container">$H_n$</span>, namely the symmetric tensor product. This is defined as the space
<span class="math-container">$$H_n^s = \Big\{\sum_{i_1,\ldots, i_n} \alpha_{i_1, \ldots, i_n} \big(e_{i_1} \otimes \cdots \otimes e_{i_n}\big) \in H_n: ~\alpha_{i_1, \ldots, i_n} = \alpha_{i_{\sigma(1)}, \ldots, i_{\sigma(n)}}~ \forall \sigma \in S_n\Big\}$$</span>
where <span class="math-container">$S_n$</span> is of course the permutation group of <span class="math-container">$n$</span> objects.</p>
<p>To me, this says that for every vector in <span class="math-container">$H_n^s$</span> if we swap the order of the coefficient's components then the result is the same coefficient we started with. However this is clearly wrong, as this would imply that the coefficients must all be the same. If anyone can provide a detailed breakdown of this definition that would be much appreciated.</p>
| Sammy Black | 6,509 | <p>A (small) example to get the feel of these objects:
<span class="math-container">$$
2 \, e_3 \otimes e_{17} - 5 \, e_4 \otimes e_4
$$</span>
is an arbitrary element of the two-fold tensor product space <span class="math-container">$H_2$</span>. Here the multi-index <span class="math-container">$(i_1, i_2) = (3, 17)$</span> for the first term and <span class="math-container">$(i_1, i_2) = (4, 4)$</span> for the second. The coefficients are <span class="math-container">$\alpha_{3, 17} = 2$</span>, <span class="math-container">$\alpha_{4, 4} = -5$</span>, and <span class="math-container">$\alpha_{i_1, i_2} = 0$</span> for all others. <strong>But this tensor is not symmetric.</strong> If you swap the indices (the only non-trivial permutation of <span class="math-container">$2$</span> indices), you get
<span class="math-container">$$
2 \, e_{17} \otimes e_3 - 5 \, e_4 \otimes e_4
$$</span>
which is a different tensor.</p>
<p>An example of a symmetric tensor in <span class="math-container">$H_2$</span> would be
<span class="math-container">$$
2 \, e_3 \otimes e_{17} - 5 \, e_4 \otimes e_4 + 2 \, e_{17} \otimes e_3.
$$</span>
You can easily see that the non-trivial permutation in <span class="math-container">$S_2$</span> swaps the first and third terms, <em>but their coefficients are the same,</em> so the tensor is fixed.</p>
<p>Here's how to define the tensor spaces:
<span class="math-container">$$
H_n = H^{\otimes n}
= \biggl\{\, \sum_{i_1, \ldots, i_n}
\alpha_{i_1, \ldots, i_n}
\bigl(e_{i_1} \otimes \cdots \otimes e_{i_n} \bigr): \;
\sum_{i_1, \ldots, i_n} \lvert \alpha_{i_1, \ldots, i_n}\rvert^2
< \infty \bigg\}
$$</span>
and
<span class="math-container">$$
\operatorname{Sym}^n H
= \biggl\{\, \sum_{i_1, \ldots, i_n}
\alpha_{i_1, \ldots, i_n}
\bigl(e_{i_1} \otimes \cdots \otimes e_{i_n} \bigr) \in H_n: \;
\alpha_{i_1, \ldots, i_n}
= \alpha_{i_{\sigma(1)}, \ldots, i_{\sigma(n)}} \,
\forall \sigma \in S_n \bigg\}.
$$</span>
Almost by definition, these tensors are invariant under any permutation <span class="math-container">$\sigma \in S_n$</span>. All that happens when you act on the sub-indices is that the order of the sum of term is permuted.</p>
<p>Does this make sense?</p>
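<p>If it helps, here is a small Python sketch of the symmetry condition, representing a tensor by its (sparse) coefficient dictionary and testing the two examples above:</p>

```python
from itertools import permutations

# A tensor is stored as {multi-index tuple: coefficient}, coefficients
# not listed being zero. Symmetry means
# alpha_{i_1..i_n} = alpha_{i_sigma(1)..i_sigma(n)} for all sigma.
def is_symmetric(alpha, n):
    return all(alpha.get(tuple(idx[p] for p in perm), 0) == c
               for idx, c in alpha.items()
               for perm in permutations(range(n)))

# Non-symmetric example: 2 e_3 (x) e_17 - 5 e_4 (x) e_4
assert not is_symmetric({(3, 17): 2, (4, 4): -5}, 2)
# Symmetric example: add the swapped term 2 e_17 (x) e_3
assert is_symmetric({(3, 17): 2, (17, 3): 2, (4, 4): -5}, 2)
```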
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Joel Dodge | 493 | <p>I never think about groupoids in any technical sense, but my favorite easy example of one can be built out of a separable field extension K/k. It is the category whose points are the subfields of the algebraic closure of k which are k-isomorphic to K. The morphisms between two objects are just the k-isomorphisms between the respective subfields. It's some kind of "Galois groupoid" and it's a group if and only if K/k is a Galois field extension. </p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Alfonso Gracia-Saz | 3,065 | <p>Contra dance (or square dance) gives us a nice example of a groupoid. The objects are the formations (i.e. the positions of the dancers) and the morphisms are the calls up to homotopy. A choreography (or a dance, if you wish) is a set of composable calls whose product is a morphism between two specific objects.</p>
|
3,223,732 | <p>Let <span class="math-container">$X=U\cup V$</span> where <span class="math-container">$U,V$</span> are simply-connected open sets and <span class="math-container">$U\cap V$</span> is the disjoint union of two simply connected sets. We also have the condition that any subspace <span class="math-container">$S$</span> of <span class="math-container">$X$</span> homeomorphic to <span class="math-container">$[0,1]$</span> has an open neighborhood that deformation retracts onto <span class="math-container">$S$</span>.</p>
<p>We can choose points <span class="math-container">$p$</span> and <span class="math-container">$q$</span>, one from each of the two disjoint components of <span class="math-container">$U\cap V$</span>, that are not connected by a path. Then <span class="math-container">$U\cup V$</span> should deformation retract onto the union of two paths (one path <span class="math-container">$\alpha$</span> in <span class="math-container">$U$</span>, another path <span class="math-container">$\beta$</span> in <span class="math-container">$V$</span>) that connects <span class="math-container">$p$</span> and <span class="math-container">$q$</span>, hence the fundamental group must be <span class="math-container">$\mathbb{Z}$</span>.</p>
<p>But I don't know how to rigorously show this part. We don't know if <span class="math-container">$U$</span> deformation retracts onto <span class="math-container">$\alpha$</span> and <span class="math-container">$V$</span> deformation retracts onto <span class="math-container">$\beta$</span>.. and even if we show that, I don't know how do we deal with the intersection <span class="math-container">$U\cap V$</span>.</p>
| Ariel Serranoni | 253,958 | <p>By definition, if <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are vector spaces and <span class="math-container">$T\colon V\to W$</span> is a linear transformation, then </p>
<p><span class="math-container">$$\text{Null}(T)=\{x\in V\,\colon T(x)=0\}.$$</span></p>
<p>Therefore, the nullspace of a linear transformation is a subset of its domain. In your case, <span class="math-container">$\text{Null}(T)\subseteq\mathbb{R}^5$</span> and hence <span class="math-container">$\text{dim}(\text{Null}(T))\leq 5$</span>. On the other hand, <span class="math-container">$\text{Null}(T)$</span> contains at least one point, which is <span class="math-container">$0$</span> and thus <span class="math-container">$\text{dim}(\text{Null}(T))\geq 0$</span>. Therefore, we concluded that you have <span class="math-container">$0\leq \text{dim}(\text{Null}(T))\leq 5$</span>. Using the same reasoning, we conclude the more general statement that <span class="math-container">$0\leq \text{dim}(\text{Null}(T))\leq \text{dim}(V)$</span>.</p>
<p><strong>EDIT</strong>: As noted in the comments above, we can refine what I said above by using the Rank-Nullity Theorem. Using this result, we obtain that <span class="math-container">$\text{dim}(\text{Null}(T))= 5-\text{Rank}(T)$</span>. Since the possible values for <span class="math-container">$\text{Rank}(T)$</span> are <span class="math-container">$0,1,2,$</span> and <span class="math-container">$3$</span>, we obtain that <span class="math-container">$2\leq\text{dim}(\text{Null}(T))\leq 5$</span>.</p>
|
2,002,385 | <blockquote>
<p>Prove $|e^{i\theta_1}-e^{i\theta_2}|\geq|e^{i\theta_1/2}-e^{i\theta_2/2}|$
where $ \theta_1, \theta_2 \in (0,\pi]$.</p>
</blockquote>
<p>Even though geometrically it is an obvious fact, somehow I couldn't prove it in an elegant way (it's really frustrating), and I'm sure some of you guys know how to prove it in two or three lines. </p>
<p>That's pretty much it.</p>
| Ewan Delanoy | 15,381 | <p>Hint :
$\bigg|
\frac
{e^{i\theta_1}-e^{i\theta_2}}
{e^{\frac{i\theta_1}{2}}-e^{\frac{i\theta_2}{2}}}\bigg|=
\bigg|e^{\frac{i\theta_1}{2}}+e^{\frac{i\theta_2}{2}}\bigg|=\bigg|e^{i\frac{\theta_1-\theta_2}{4}}+e^{i\frac{\theta_2-\theta_1}{4}}\bigg|=2|\cos(\frac{\theta_2-\theta_1}{4})|$.</p>
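Not part of the original hint: a quick numeric sanity check of the identity (my own code, standard library only), which also confirms that the ratio stays at least <span class="math-container">$1$</span> for <span class="math-container">$\theta_1, \theta_2 \in (0,\pi]$</span>, which is exactly the claimed inequality.

```python
import cmath
import math
import random

# Sample theta_1, theta_2 in (0, pi] and compare both sides of the hint.
random.seed(0)
max_err = 0.0
min_ratio = float("inf")
for _ in range(200):
    t1 = random.uniform(1e-6, math.pi)
    t2 = random.uniform(1e-6, math.pi)
    if abs(t1 - t2) < 1e-9:
        continue  # the ratio is 0/0 when the angles coincide
    ratio = abs((cmath.exp(1j * t1) - cmath.exp(1j * t2))
                / (cmath.exp(1j * t1 / 2) - cmath.exp(1j * t2 / 2)))
    rhs = 2 * abs(math.cos((t2 - t1) / 4))
    max_err = max(max_err, abs(ratio - rhs))
    min_ratio = min(min_ratio, ratio)
```

Since <span class="math-container">$|\theta_1-\theta_2| < \pi$</span>, the cosine factor exceeds <span class="math-container">$\cos(\pi/4)$</span>, so the ratio is at least <span class="math-container">$\sqrt 2 > 1$</span>.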
|
3,238,914 | <p>When is the <a href="https://en.wikipedia.org/wiki/Euler_line" rel="nofollow noreferrer">Euler line</a> parallel with a triangle's side?</p>
<p>I have found that a triangle with angles <span class="math-container">$45^\circ$</span> and <span class="math-container">$\arctan2$</span> is a case.</p>
<p>Is there any other case?
<a href="https://i.stack.imgur.com/1KjZe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KjZe.png" alt="Image"></a></p>
| Blue | 409 | <p>Here's an approach that's more satisfying than <a href="https://math.stackexchange.com/a/3239223/409">my previous attempt</a>.</p>
<hr>
<p>Let <span class="math-container">$\triangle ABC$</span> have midpoints <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, <span class="math-container">$F$</span> opposite <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>. Let it have centroid <span class="math-container">$K$</span>, circumcenter <span class="math-container">$P$</span>, and orthocenter <span class="math-container">$Q$</span>. Let <span class="math-container">$G$</span> be the foot of the perpendicular from <span class="math-container">$C$</span>, and let <span class="math-container">$M$</span> be the midpoint of <span class="math-container">$\overline{CG}$</span>.</p>
<p><a href="https://i.stack.imgur.com/AYnTg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYnTg.png" alt="enter image description here"></a></p>
<p>Suppose the Euler line (through <span class="math-container">$K$</span>, <span class="math-container">$P$</span>, <span class="math-container">$Q$</span>) is parallel to <span class="math-container">$\overline{AB}$</span>. Then, since we "know" <span class="math-container">$K$</span> trisects median <span class="math-container">$\overline{CF}$</span>, we can get the following chain of ratios:</p>
<p><span class="math-container">$$\begin{align}
3 &= \frac{|CF|}{|KF|}=\frac{|CM|}{|QM|}=\frac{|\triangle CDE|}{|\triangle PDE|}=\frac{\frac12|CD||CE|\sin DCE}{\frac12|PD||PE|\sin DPE}=\frac{|CD|}{|PD|}\cdot\frac{|CE|}{|PE|} \\[4pt]
&=\tan A\tan B
\end{align}$$</span></p>
<p>Here, we used the fact that <span class="math-container">$\angle DPE=180^\circ-\angle DCE$</span> (so that the sines cancel). Also, by the <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed Angle Theorem</a>, <span class="math-container">$\angle BPC=2\angle A$</span>, so that <span class="math-container">$\angle DPC =\frac12 \angle BPC = \angle A$</span> (and likewise, <span class="math-container">$\angle EPC=\angle B$</span>). <span class="math-container">$\square$</span></p>
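To corroborate the condition <span class="math-container">$\tan A\tan B=3$</span> numerically (my own check, not part of the derivation; the coordinates and helper names are mine):

```python
import math

def circumcenter(A, B, C):
    # Standard 2x2-solve formula for the point equidistant from A, B, C.
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Put side AB on the x-axis and pick base angles with tan A * tan B = 3.
tanA, tanB = 1.0, 3.0                      # 45 degrees and arctan 3
A, B = (0.0, 0.0), (1.0, 0.0)
x = tanB / (tanA + tanB)                   # apex from y = x tanA and y = (1-x) tanB
C = (x, x * tanA)

K = ((A[0] + B[0] + C[0]) / 3.0, (A[1] + B[1] + C[1]) / 3.0)   # centroid
P = circumcenter(A, B, C)

# The Euler line through K and P is parallel to AB exactly when K and P
# sit at the same height above AB.
gap = abs(K[1] - P[1])
```

Changing the base angles so that <span class="math-container">$\tan A\tan B\ne 3$</span> makes `gap` nonzero, matching the derivation.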
|
2,845,085 | <p>Find $f(5)$, if the graph of the quadratic function $f(x)=ax^2+bx+c$ intersects the ordinate axis at point $(0;3)$ and its vertex is at point $(2;0)$</p>
<p>So I used the vertex form, $y=(x-2)^2+3$, got the quadratic equation and then put $5$ instead of $x$ to get the answer, but it's wrong. I think I shouldn't have added $3$ in the vertex form but I don't know how else I can solve this</p>
| Community | -1 | <p>As we know:
$2ax + b=0$ when $x=2$</p>
<p>This means $4a + b=0$ or $b=-4a$</p>
<p>Now you can substitute </p>
<p>$a\cdot 2^2 - 4a\cdot 2 +3=0$</p>
<p>Or $4a - 8a +3=0$</p>
<p>Or $a=\frac{3}{4}$</p>
<p>Now you can get $b$ and you can get $f(5)$</p>
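Carrying out these steps with exact rational arithmetic (my own addition, not part of the answer) gives <span class="math-container">$b=-3$</span>, <span class="math-container">$c=3$</span>, and <span class="math-container">$f(5)=27/4$</span>:

```python
from fractions import Fraction

a = Fraction(3, 4)       # from 4a - 8a + 3 = 0
b = -4 * a               # from the vertex condition 4a + b = 0
c = Fraction(3)          # from the intercept f(0) = 3

def f(x):
    return a * x**2 + b * x + c

# Sanity checks against the given data: vertex at (2, 0), intercept (0, 3).
vertex_ok = (f(2) == 0 and -b / (2 * a) == 2)
answer = f(5)
```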
|
265,949 | <p>Consider the set of all $n \times n$ matrices with real entries as the space $ \mathbb{R^{n^2}}$ .
Which of the following sets are compact?</p>
<ol>
<li>The set of all orthogonal matrices.</li>
<li>The set of all matrices with determinant equal to unity.</li>
<li><p>The set of all invertible matrices.</p>
<p>I am stuck on this problem. Can anyone help me, please?</p>
<p>How can we check the closedness and boundedness of a set of matrices?</p></li>
</ol>
| anonymous | 50,284 | <p>Let's think about $1)$. A matrix is orthogonal if it consists of rows and columns of orthogonal unit vectors (in this case, of $\mathbb{R}^n$). Let $A \in \mathbb{R}^{n^2}$ be such a matrix. Pick a row or column from $A$, say $(a_1, ..., a_n)$. By assumption, $|(a_1, ..., a_n)| = 1$, from which we have that each $|a_j| \le 1$. This establishes the boundedness.</p>
<p>Next, let $\{A_n\}_{n=1}^{\infty}$ be a convergent sequence of orthogonal matrices. We need to show that its limit, say $A$, is orthogonal. Use the continuity of the norm and inner product to conclude that $A$ is orthogonal.</p>
<p>For $2)$, I'll work on the case of $n =2$ and you may generalize. Fix any $n$ and consider the matrix
$ \left( \begin{array}{ccc}
n & n-1 \\
1 & 1 \\
\end{array} \right)$</p>
<p>This matrix has determinant $1$ but arbitrarily large $\mathbb{R}^{n^2}$ norm. Is the set of such matrices compact then?</p>
<p>For the third one, what do you know about invertible matrices? Apply part $2)$.</p>
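A quick numeric illustration of point $2)$ above (my own addition): the matrices $\left(\begin{smallmatrix} n & n-1 \\ 1 & 1 \end{smallmatrix}\right)$ all have determinant $1$ while their norm grows without bound, so the set cannot be compact.

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def frob(M):
    # Frobenius norm, i.e. the Euclidean norm of the matrix viewed in R^4.
    return math.sqrt(sum(x * x for row in M for x in row))

norms = []
for n in (1, 10, 100, 1000):
    M = [[n, n - 1], [1, 1]]
    assert det2(M) == 1        # determinant stays 1 ...
    norms.append(frob(M))      # ... while the norm blows up
```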
|
879,886 | <p>If one number is thrice the other and their sum is $16$, find the numbers.</p>
<p>I tried the following.
Let the first number be $x$ and the second number be $y$.
According to the question, </p>
<p>$$
\begin{align}
x&=3y &\iff x-3y=0 &&(1)\\
x&=16-3y&&&(2)
\end{align}
$$</p>
| evinda | 75,843 | <p>$$x=3y$$
$$y+x=16 \Rightarrow y+3y=16 \Rightarrow 4y=16 \Rightarrow y=4$$</p>
<p>So, $x=12$.</p>
|
2,918,091 | <p>Suppose I want to find the locus of the point $z$ satisfying $|z+1| = |z-1|$</p>
<p>Let $z = x+iy$</p>
<p>$\Rightarrow \sqrt{(x+1)^2 + y^2} = \sqrt{(x-1)^2 + y^2}$ <br/>
$\Rightarrow (x+1)^2 = (x-1)^2$ <br/>
$\Rightarrow x+1 = x-1$ <br/>
$\Rightarrow 1= -1$ <br/>
$\Rightarrow$ Locus does not exist</p>
<p>Is my approach incorrect? The answer I was given was that the y-axis describes the locus.</p>
<p>Any help would be appreciated.</p>
| Siong Thye Goh | 306,553 | <p>As you remove the square root sign, there is another possible solution</p>
<p>$$x+1 = -(x-1)$$</p>
<p>Hence $x=0$ which is the $y$-axis.</p>
<p>A faster way is to recognize that this means the distance from $1$ and $-1$ are equal and hence the perpendicular bisector is the locus.</p>
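A small numeric confirmation (my own addition) that the equality holds exactly on the $y$-axis and fails off it:

```python
import random

# Points on the y-axis are equidistant from 1 and -1, so |z+1| = |z-1|.
random.seed(1)
on_axis_ok = all(
    abs(abs(complex(0, y) + 1) - abs(complex(0, y) - 1)) < 1e-12
    for y in (random.uniform(-10.0, 10.0) for _ in range(100))
)

# A point off the axis breaks the equality.
z = complex(0.5, 2.0)
off_axis_gap = abs(abs(z + 1) - abs(z - 1))
```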
|
3,014,670 | <p>I don't have any experience working with radicals, but I'm working on a function that requires products of nth roots to be positive or negative, depending on the number of negative factors. </p>
<p><em>I've done some initial research, and reviews these Stack questions: <a href="https://math.stackexchange.com/questions/26363/square-roots-positive-and-negative">Square roots — positive and negative</a> and <a href="https://math.stackexchange.com/questions/437908/the-product-rule-of-square-roots-with-negative-numbers">The Product Rule of Square Roots with Negative Numbers</a> but I couldn't find the information I was seeking (or am not fully understanding the answers.)</em></p>
<p>Are the following expressions true? If not, how can I produce the those results?</p>
<p><span class="math-container">$\sqrt[2]{1*-1} = -1$</span></p>
<p><span class="math-container">$\sqrt[3]{1*1*-1} = -1$</span></p>
<p><span class="math-container">$\sqrt[3]{1*-1*-1} = 1$</span></p>
<hr>
<p>[update] This is what the function does:</p>
<p><span class="math-container">$\sqrt[n]{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}} \text{ }*\text{ }
\frac{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}}{\Delta_1*\Delta_2*...*\Delta_n}$</span></p>
<p>such that if there are an odd number of negative factors, the product is negative, otherwise positive.</p>
<ul>
<li>Is there a more compact way to express this?</li>
</ul>
<p>also, any tips on notation are appreciated. </p>
| Adrian Keister | 30,813 | <p>Your initial translation is correct, though in standard form I would write </p>
<p>All non-rational people are lakers.</p>
<p>This is an Aristotelian A type statement, and A type statements have valid obverses and valid contrapositives. They do not have valid converses.</p>
<p>Obverse: No non-rational people are non-lakers.</p>
<p>Contrapositive: All non-lakers are rational people. </p>
<p>So the ultimate answer to your question is no, because you are trying to get the converse of an A statement to be true, which does not happen in general.</p>
<p>If you had an E or I statement, the converse would be valid.</p>
|
1,885,068 | <p>Prove $$\int_0^1 \frac{x-1}{(x+1)\log{x}} \text{d}x = \log{\frac{\pi}{2}}$$</p>
<p>Tried contouring but couldn't get anywhere with a keyhole contour.</p>
<p>Geometric Series Expansion does not look very promising either.</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. One may set
$$
f(s):=\int_0^1 \frac{x^s-1}{(x+1)\log{x}}\: \text{d}x, \quad s>-1, \tag1
$$ then one is allowed to differentiate under the integral sign, getting
$$
f'(s)=\int_{0}^{1}\frac{x^s}{x+1}\:dx=\frac12\psi\left(\frac{s}2+\frac12\right)-\frac12\psi\left(\frac{s}2+1\right), \quad s>-1, \tag2
$$where we have used a standard <a href="http://dlmf.nist.gov/5.9#E16" rel="noreferrer">integral representation</a> of the digamma function.</p>
<p>One may recall that $\psi:=\Gamma'/\Gamma$, then integrating $(2)$, observing that $f(0)=0$, one gets</p>
<blockquote>
<p>$$
f(s)=\int_0^1 \frac{x^s-1}{(x+1)\log{x}}\: \text{d}x=\log \left(\frac{\sqrt{\pi}\cdot\Gamma\left(\frac{s}2+1\right)}{\Gamma\left(\frac{s}2+\frac12\right)}\right), \quad s>-1, \tag3
$$ </p>
</blockquote>
<p>from which one deduces the value of the initial integral by putting $s:=1$, recalling that
$$
\Gamma\left(\frac12+1\right)=\frac12\Gamma\left(\frac12\right)=\frac{\sqrt{\pi}}2.
$$</p>
<p><em>Edit. The result $(3)$ is more general than the given one.</em></p>
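One can also confirm the $s=1$ value numerically (my own addition, standard library only). The integrand extends continuously to both endpoints, with limit $0$ at $x\to 0$ and limit $\tfrac12$ at $x\to 1$, so a plain midpoint rule suffices:

```python
import math

def integrand(x):
    return (x - 1) / ((x + 1) * math.log(x))

# Composite midpoint rule on (0, 1); midpoints never touch 0 or 1,
# so the removable singularities cause no trouble.
N = 200_000
h = 1.0 / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))
target = math.log(math.pi / 2)
```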
|
3,873,433 | <p>Is it true that for all square, complex matrices A, B
<span class="math-container">$$
\left\|AB\right\|_p\leq\left\|A\right\|\left\|B\right\|_p$$</span></p>
<p>where <span class="math-container">$\left\| .\right\|_p$</span> refers to the Schatten p-norm and <span class="math-container">$\left\| .\right\|$</span> refers to the spectral norm?
How would I prove this?</p>
| user8675309 | 735,806 | <p>You can also prove this (and most Schatten norm results) via the theory of majorization</p>
<p>It's worth noting that p norms for real non-negative vectors and Schatten p norms for diagonal positive (semi)definite matrices are essentially the same thing. In both cases the norms are homogeneous with respect to re-scaling by positive numbers and they are sub-additive (i.e. as norms they must obey the triangle inequality). Thus they are <em>convex</em>. They are also <em>increasing functions</em> in the sense that (when we restrict to real non-negative values) if we have the following <em>component-wise</em> inequalities<br />
<span class="math-container">$\mathbf 0 \leq \mathbf x_1 \leq \mathbf x_2\implies \Big\Vert \mathbf 0\Big\Vert_p \leq \Big\Vert\mathbf x_1\Big\Vert_p \leq \Big\Vert\mathbf x_2\Big\Vert_p$</span></p>
<p>Letting <span class="math-container">$\Sigma_Z$</span> be the <span class="math-container">$n\times n$</span> diagonal matrix containing the singular values of <span class="math-container">$Z$</span> <em>in the usual ordering of largest to smallest</em></p>
<p><span class="math-container">$\Sigma_{AB} \preceq_w \Sigma_{A}\Sigma_{B}$</span><br />
where <span class="math-container">$\preceq_w$</span> denotes weak majorization. (This takes some work to prove and e.g. one may find a proof in the book <em>Inequalities: The Theory of Majorization</em> by Olkin et. al)</p>
<p>Putting this all together<br />
<span class="math-container">$\Big \Vert AB\Big\Vert_{S_p} $</span><br />
<span class="math-container">$= \Big \Vert \Sigma_{AB}\Big\Vert_{S_p}$</span><br />
<span class="math-container">$\leq \Big \Vert \Sigma_{A}\Sigma_{B}\Big\Vert_{S_p}$</span><br />
<span class="math-container">$=\Big(\sum_{k=1}^n \big(\sigma_{k}^{(A)}\big)^p\cdot \big(\sigma_{k}^{(B)}\big)^p\Big)^\frac{1}{p}$</span><br />
<span class="math-container">$\leq \Big(\sum_{k=1}^n \big(\sigma_{k}^{(A)}\big)^p\cdot \big(\sigma_{1}^{(B)}\big)^p\Big)^\frac{1}{p}$</span><br />
<span class="math-container">$= \sigma_{1}^{(B)} \cdot \Big(\sum_{k=1}^n \big(\sigma_{k}^{(A)}\big)^p\Big)^\frac{1}{p}$</span><br />
<span class="math-container">$= \Big \Vert\Sigma_{B}\Big\Vert_{S_\infty}\cdot \Big \Vert \Sigma_{A}\Big\Vert_{S_p}$</span></p>
<p>where the first inequality comes from applying a function that is symmetric, convex and increasing to a weak majorization relation, and the second inequality comes from the point-wise bound <span class="math-container">$\big(\sigma_{k}^{(A)}\big)^p\cdot \big(\sigma_{k}^{(B)}\big)^p \leq \big(\sigma_{k}^{(A)}\big)^p\cdot \big(\sigma_{1}^{(B)}\big)^p$</span>. Finally note that when dealing with Schatten norms, that the "spectral norm" is more commonly known as the Schatten <span class="math-container">$\infty$</span> norm.</p>
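A randomized numerical check of the inequality (my own addition; it assumes NumPy and picks <span class="math-container">$p=3$</span> arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0

def schatten(M, p):
    s = np.linalg.svd(M, compute_uv=False)      # singular values
    return float((s ** p).sum() ** (1.0 / p))

violations = 0
for _ in range(200):
    n = int(rng.integers(2, 6))
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lhs = schatten(A @ B, p)
    # np.linalg.norm(A, 2) is the spectral norm, i.e. the Schatten-infinity norm.
    rhs = np.linalg.norm(A, 2) * schatten(B, p)
    if lhs > rhs * (1 + 1e-10):
        violations += 1
```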
|
3,720,272 | <p>My textbook employs a brute force method: add the number of committees that could be formed with one woman, two women, and three women in them. Then, the total number of such committees will be:
<span class="math-container">$$\left(\begin{smallmatrix} 8 \\ 1 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 2 \end{smallmatrix}\right) + \left(\begin{smallmatrix} 8 \\ 2 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 1 \end{smallmatrix}\right) + \left(\begin{smallmatrix} 8 \\ 3 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 10 \\ 0 \end{smallmatrix}\right) = 360 + 280 + 56 = 696$$</span>
(The number of ways of choosing one woman from eight times that of choosing two men from ten + ...)</p>
<p>I had solved the question with this reasoning: You can choose one woman from eight as a member of the committee, and for the remaining two positions, you could choose either a man or a woman. This is equivalent to choosing two people from <span class="math-container">$(10 + 8) - 1 = 17$</span>. Thus, the number of possible ways to form such a committee is:
<span class="math-container">$$\left(\begin{smallmatrix} 8 \\ 1 \end{smallmatrix}\right)\cdot\left(\begin{smallmatrix} 17 \\ 2 \end{smallmatrix}\right) = 8 \cdot 136 = 1088$$</span></p>
<p>What mistake have I made?</p>
| Shiv Tavker | 687,825 | <p>You are double-counting! For simplicity, suppose there are only three people: <span class="math-container">$W_1, W_2, M_1$</span>. Counting the same way, you would get <span class="math-container">$\left(\begin{smallmatrix} 2 \\ 1 \end{smallmatrix}\right) \times \left(\begin{smallmatrix} 2 \\ 2 \end{smallmatrix}\right)=2$</span> ways. But are there two ways of forming the committee? No! You counted <span class="math-container">$W_1 (W_2 M_1)$</span> and <span class="math-container">$W_2 (W_1 M_1)$</span> separately, yet they are the same committee. The easier way could be:
<span class="math-container">$$
\text{Possible Ways} = \text{Total Ways} - \text{Ways with only men}\\
\text{Possible Ways} = \left(\begin{smallmatrix} 18 \\ 3 \end{smallmatrix}\right) - \left(\begin{smallmatrix} 10 \\ 3 \end{smallmatrix}\right)=696
$$</span></p>
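A brute-force check (my own addition) that both correct counting methods agree for 8 women and 10 men:

```python
from itertools import combinations
from math import comb

people = [("W", i) for i in range(8)] + [("M", i) for i in range(10)]

# Enumerate every 3-person committee and keep those with at least one woman.
brute = sum(
    1
    for committee in combinations(people, 3)
    if any(tag == "W" for tag, _ in committee)
)

by_cases = comb(8, 1) * comb(10, 2) + comb(8, 2) * comb(10, 1) + comb(8, 3) * comb(10, 0)
by_complement = comb(18, 3) - comb(10, 3)
```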
|
2,377,946 | <blockquote>
<p>The integral is:
$$\int_0^a \frac{x^4}{(x^2+a^2)^4}dx$$</p>
</blockquote>
<p>I used an approach that involved substitution of x by $a\tan\theta$. No luck :\ . Help?</p>
| Doug M | 317,162 | <p>$\displaystyle\int_0^a \frac{x^4}{(x^2+a^2)^4}dx$</p>
<p>Where do we get with the substitution you have suggested?</p>
<p>$x = a\tan\theta\\
dx = a\sec^2\theta \, d\theta\\
\displaystyle\int_0^{\frac \pi 4} \frac{(a^4\tan^4\theta)(a\sec^2\theta)}{(a^2\tan^2\theta+a^2)^4}d\theta\\
$</p>
<p>Looks promising:
Keep simplifying</p>
<p>$\displaystyle\int_0^{\frac \pi 4} \frac{a^5\tan^4\theta\sec^2\theta}{a^8\sec^8\theta}d\theta\\
\displaystyle\int_0^{\frac \pi 4} \frac{\tan^4\theta}{a^3\sec^6\theta}d\theta$</p>
<p>Let's restate this in terms of $\sin\theta$ and $\cos\theta$:</p>
<p>$\displaystyle\frac 1{a^3}\int_0^{\frac \pi 4} \sin^4\theta\cos^2\theta\ d\theta$</p>
<p>You need to apply your half angle identities, perhaps repeatedly.</p>
<p>$\displaystyle\sin^2\theta = \frac 12 (1-\cos 2\theta), \cos^2\theta = \frac 12 (1+\cos 2\theta) $</p>
<p>$\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} (1-\cos 2\theta)^2(1+\cos 2\theta)\ d\theta\\
\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} [1-\cos 2\theta -\cos^2 2\theta + \cos^3 2\theta] \ d\theta\\
\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} [1-\cos 2\theta -\frac 12 (1+\cos 4\theta) + \cos 2\theta (1-\sin^2 2\theta)]\ d\theta\\$</p>
<p>And that looks pretty straight-forward.</p>
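As a cross-check (my own addition, not part of the answer), one can verify numerically that the substitution preserved the value of the integral. The `closed_form` value below is my own evaluation of the final elementary integral, namely $\frac1{a^3}\left(\frac\pi{64}-\frac1{48}\right)$:

```python
import math

def midpoint(f, lo, hi, n):
    # Composite midpoint rule.
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

a = 2.0
N = 50_000

original = midpoint(lambda x: x**4 / (x**2 + a**2)**4, 0.0, a, N)
reduced = midpoint(lambda t: math.sin(t)**4 * math.cos(t)**2,
                   0.0, math.pi / 4, N) / a**3
closed_form = (math.pi / 64 - 1.0 / 48) / a**3
```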
|
1,419,483 | <p>Can anyone please help me in solving this integration problem $\int \frac{e^x}{1+ x^2}dx \, $?</p>
<p>Actually, I am getting stuck at one point while solving this problem via integration by parts.</p>
| Claude Leibovici | 82,404 | <p>Since $x^2+1=(x+i)(x-i)$, partial fraction decomposition leads to $$\frac 1{x^2+1}=\frac 1{2i}\Big(\frac{1}{x-i}-\frac{1}{x+i}\Big)=-\frac i{2}\Big(\frac{1}{x-i}-\frac{1}{x+i}\Big)$$ So $$I=\int \frac{e^x}{1+ x^2}\,dx =-\frac i{2}\int\Big(\frac{e^x}{x-i}-\frac{e^x}{x+i}\Big)\,dx=-\frac i{2}\int\Big(\frac{e^i\,e^{x-i}}{x-i}-\frac{e^{-i}\,e^{x+i}}{x+i}\Big)\,dx$$ Now make the change of variable $y=x-i$ for the first and $z=x+i$ for the second. So $$I=-\frac {i e^i}{2}\int \frac{e^y}y dy+\frac {i e^{-i}}{2}\int \frac{e^z}z dz$$ and remember that $$\int\frac {e^t} t dt=\text{Ei}(t)$$ which finally makes $$I=\frac{1}{2}\, i \,e^{-i}\, \text{Ei}(x+i)-\frac{1}{2}\, i\, e^i\, \text{Ei}(x-i)$$ with $i\,e^{i}=-\sin (1)+i \cos (1)$ and $i\,e^{-i}=\sin (1)+i \cos (1)$.</p>
|
4,411,247 | <blockquote>
<p>If <span class="math-container">$G$</span> is finite group, how to prove that <span class="math-container">$f(g)=ag$</span>, <span class="math-container">$a \in G$</span>, is a bijection for all <span class="math-container">$g \in G$</span>? Here <span class="math-container">$ag$</span> is <span class="math-container">$a \cdot g$</span>, where <span class="math-container">$\cdot$</span> is the operator from the group <span class="math-container">$G$</span>.</p>
</blockquote>
<p>This is what I've tried so far:</p>
<p><span class="math-container">$f(g)=ag
\\
a^{-1}f(g)=g.
$</span></p>
<p>Since multiplying by the inverse recovers <span class="math-container">$g$</span>, I concluded that <span class="math-container">$f$</span> is a bijection; I would like to know if there's a misstep in that reasoning.</p>
| Community | -1 | <p>Though the claim holds true for any group, you can take advantage of the assumed finiteness of <span class="math-container">$G$</span> to get immediately:</p>
<ol>
<li>the injectivity holds by the left cancellation law;</li>
<li>since <span class="math-container">$G$</span> is finite, the surjectivity follows from 1.</li>
</ol>
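A tiny computational illustration of the claim (my own addition), using the non-abelian finite group <span class="math-container">$S_3$</span> with elements represented as permutation tuples:

```python
from itertools import permutations

# The finite group S_3, elements as permutation tuples of (0, 1, 2).
S3 = list(permutations(range(3)))

def compose(p, q):
    # Group operation: (p * q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(3))

# For every a, the left translation g -> a*g hits each element exactly once.
all_bijections = all(
    sorted(compose(a, g) for g in S3) == sorted(S3)
    for a in S3
)
```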
|
4,547,480 | <p>I am working with some data for which I am interested in calculating some physical parameters. I have a system of linear equations, which I can write in matrix form as:</p>
<p><span class="math-container">$$
\textbf{A} \textbf{x} = \textbf{b}
$$</span></p>
<p>where <span class="math-container">$\textbf{A}$</span> is a square matrix containing the coefficients of the linear equations, <span class="math-container">$\textbf{x}$</span> is a column vector of length <span class="math-container">$n$</span> containing the unknown parameters that I want to calculate, and <span class="math-container">$\textbf{b}$</span> is a known vector of length <span class="math-container">$n$</span>.</p>
<p>Given no other constraints, the solution is to simply calculate <span class="math-container">$\textbf{x}$</span> by inverting <span class="math-container">$\textbf{A}$</span>. However, I have hard inequality constraints on <span class="math-container">$\textbf{x}$</span> because of physical reasons of the data in question:</p>
<p><span class="math-container">$$
0 \leq x_i \leq c_i \ \forall \ i=1,2,...n
$$</span></p>
<p>where <span class="math-container">$x_i$</span> are the values in <span class="math-container">$\textbf{x}$</span> and all <span class="math-container">$c_i$</span> are known.</p>
<p>Now, these extra inequality constraints make the problem overdetermined. However, I have the choice to remove data (e.g., remove rows from <span class="math-container">$\textbf{A}$</span>) because I know a priori that some data are more reliable. Thus, I am hoping that I can make the unconstrained problem underdetermined, but <em>given the hard inequality constraints, make the constrained problem exactly determined</em> and calculate a single unique solution of <span class="math-container">$\textbf{x}$</span>.</p>
<p>To summarize, my question is: how can I solve a determined system of linear equations subject to inequality constraints on the unknown parameters?</p>
<p>I looked into a few potential techniques like linear programming and bounded variable least squares. However, the goal of those methods is to maximize or minimize some objective function, whereas I want an exact solution to my equations. My gut feeling is that a solution should exist but I don't have the linear algebra background to find it, so I appreciate any help!</p>
<hr />
<p>These are details on my specific problem that might help with a solution:</p>
<ul>
<li>All values in <span class="math-container">$\textbf{A}$</span> are between 0 and 1.</li>
<li>The value of the <span class="math-container">$c_i$</span> is at most 1.</li>
<li><span class="math-container">$n=36$</span></li>
</ul>
| llorente | 810,541 | <p>I figured out the solution to my question, so I am posting it here in case others are interested.</p>
<p>The main problem is how to apply bounds on the unknown parameters <span class="math-container">$\textbf{x}$</span>. The key is to substitute the <span class="math-container">$x_i$</span> using a transformation that accepts all real numbers and maps the input onto the bounded range of interest (e.g., <span class="math-container">$0 \leq x_i \leq c_i$</span>). One way to do this is with a <a href="https://en.wikipedia.org/wiki/Sigmoid_function" rel="nofollow noreferrer">sigmoid function</a>. Sigmoid functions have two horizontal asymptotes, which you can scale linearly to match any lower and upper bound. There are a variety of sigmoid functions you can choose from. Using <span class="math-container">$\arctan$</span> as an example:</p>
<p><span class="math-container">$$
x_i=c_i \left( \frac{1}{\pi} \arctan{y_i} +\frac{1}{2} \right)
$$</span></p>
<p>You can see in the above equation that <span class="math-container">$y_i$</span> can be any real number, but <span class="math-container">$x_i$</span> is now bounded between 0 and <span class="math-container">$c_i$</span>.</p>
<p>This transformation gives us a new problem because now we have a system of nonlinear equations. However, we can solve this using Newton's method to numerically iterate for the <span class="math-container">$y_i$</span> on the system of equations (see <a href="https://math.stackexchange.com/questions/466809/solving-a-set-of-equations-with-newton-raphson">this post</a> for an example). After solving for the <span class="math-container">$y_i$</span> we can apply our transformation to convert them to <span class="math-container">$x_i$</span>, which will automatically be bounded in the specified range.</p>
<p>I tested this <span class="math-container">$\arctan$</span> method with synthetic data and it works well. There is probably an art to choosing an appropriate transformation because some sigmoid functions saturate faster as you move to more extreme arguments.</p>
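Here is a minimal sketch of the method described above on a <span class="math-container">$2\times 2$</span> system (standard library only; the matrix, bounds, and right-hand side are my own test data, chosen so the true solution <span class="math-container">$(0.3, 0.4)$</span> lies inside the box):

```python
import math

# Solve A x = b with 0 <= x_i <= c_i via x_i = c_i (arctan(y_i)/pi + 1/2).
A = [[2.0, 1.0], [1.0, 3.0]]
c = [1.0, 1.0]
b = [1.0, 1.5]                  # chosen so the true solution is x = (0.3, 0.4)

def x_of_y(y):
    return [c[i] * (math.atan(y[i]) / math.pi + 0.5) for i in range(2)]

def residual(y):
    x = x_of_y(y)
    return [A[i][0] * x[0] + A[i][1] * x[1] - b[i] for i in range(2)]

y = [0.0, 0.0]                  # start at the middle of the box
for _ in range(50):
    r = residual(y)
    # Chain rule: J = A * diag(dx_i/dy_i), dx_i/dy_i = c_i / (pi (1 + y_i^2)).
    d = [c[i] / (math.pi * (1 + y[i] ** 2)) for i in range(2)]
    J = [[A[i][j] * d[j] for j in range(2)] for i in range(2)]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    # Newton step y <- y - J^{-1} r, with the 2x2 inverse written out by hand.
    y[0] -= ( J[1][1] * r[0] - J[0][1] * r[1]) / det
    y[1] -= (-J[1][0] * r[0] + J[0][0] * r[1]) / det

x = x_of_y(y)
```

The recovered `x` satisfies the bounds automatically because of the transformation.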
|
92,967 | <p>Let <span class="math-container">$d(n)$</span> be the number of divisors function, i.e., <span class="math-container">$d(n)=\sum_{k\mid n} 1$</span> of the positive integer <span class="math-container">$n$</span>. The following estimate is well known
<span class="math-container">$$
\sum_{n\leq x} d(n)=x \log x + (2 \gamma -1) x +{\cal O}(\sqrt{x})
$$</span>
as well as its variability, e.g., the lim sup of the fraction
<span class="math-container">$$
\frac{\log d(n)}{\log n/\log \log n}
$$</span>
is <span class="math-container">$\log 2$</span> while the lim inf of <span class="math-container">$d(n)$</span> is <span class="math-container">$2,$</span> achieved whenever <span class="math-container">$n$</span> is prime.</p>
<p>I am interested in estimating, instead, the following sum
<span class="math-container">$$
A(x)=\sum_{n\leq x} \min[ d(n), f(x)]
$$</span>
for functions of <span class="math-container">$x$</span> where <span class="math-container">$f(x) = c (x / \log x)$</span> or <span class="math-container">$f(x) = c x^{\alpha}$</span> for some <span class="math-container">$\alpha \in (0,1)$</span> are possible candidates. Intuitively, the sum should not change much, but large
infrequent values contribute a lot to the sum, so I am not so sure. The lim-sup mentioned above would seem to imply that <span class="math-container">$d(n)$</span> can achieve a value as large as
<span class="math-container">$$n^{c/\log \log n}$$</span> while I seem to recall that it is also known that for any fixed exponent <span class="math-container">$\varepsilon,$</span> we have <span class="math-container">$d(n) < n^{\varepsilon}$</span> for <span class="math-container">$n$</span> large enough.</p>
<p>Any pointers, comments appreciated.</p>
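A small numeric experiment I ran (the code is my own; it takes <span class="math-container">$f(x)=\sqrt x$</span> as one candidate cap): at this modest scale the cap never binds, because <span class="math-container">$\max_{n\le x} d(n)$</span> stays well below <span class="math-container">$\sqrt x$</span>, so the capped and uncapped sums coincide.

```python
# Sieve-style computation of d(n) for all n <= x.
def divisor_counts(x):
    d = [0] * (x + 1)
    for k in range(1, x + 1):
        for n in range(k, x + 1, k):   # k divides k, 2k, 3k, ...
            d[n] += 1
    return d

x = 10_000
d = divisor_counts(x)
cap = round(x ** 0.5)                  # f(x) = sqrt(x)
largest = max(d[1:])
uncapped = sum(d[1:])
capped = sum(min(dn, cap) for dn in d[1:])
```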
| Dr. Pi | 9,232 | <p>The $n=ab$ trick is very effective. There is a similar trick of writing $n=ab$ in a unique way with $a$ squarefree and $b$ a square, and I was wondering whether this could also work here.</p>
|
2,401,281 | <blockquote>
<p>Show that for $\{a,b,c\}\subset\Bbb Z$ if $a+b+c=0$ then $2(a^4 + b^4+ c^4)$ is a perfect square. </p>
</blockquote>
<p>This question is from a math olympiad contest. </p>
<p>I started developing the expression $(a^2+b^2+c^2)^2=a^4+b^4+c^4+2a^2b^2+2a^2c^2+2b^2c^2$ but was not able to find any useful direction after that.</p>
<p><strong>Note</strong>: After getting 6 answers here, another user pointed out other question in the site with similar but not identical content (see above), but the 7 answers presented include more comprehensive approaches to similar problems (e.g. newton identities and other methods) that I found more useful, as compared with the 3 answers provided to the other question. </p>
| farruhota | 425,072 | <p>Denote:
$$a+b+c=0; ab+ac+bc=k; abc=t$$
Then $a,b,c$ are the roots of:
$$x^3+kx+t=0$$
Note:
$$a^3+ka+t=0 \Rightarrow a^4+ka^2+ta=0,$$
$$b^3+kb+t=0 \Rightarrow b^4+kb^2+tb=0,$$
$$c^3+kc+t=0 \Rightarrow c^4+kc^2+tc=0.$$
Add and multiply by $2$:
$$2(a^4+b^4+c^4)=-2k(a^2+b^2+c^2)-2t(a+b+c)=-2k((a+b+c)^2-2k)-0=(2k)^2.$$</p>
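An exhaustive check over a small box (my own addition), confirming the stronger identity <span class="math-container">$2(a^4+b^4+c^4)=(2k)^2$</span> with <span class="math-container">$k=ab+ac+bc$</span>:

```python
from math import isqrt

# With c = -a - b, verify 2(a^4 + b^4 + c^4) = (2(ab + ac + bc))^2,
# which in particular makes it a perfect square.
ok = True
for a in range(-20, 21):
    for b in range(-20, 21):
        c = -a - b
        s = 2 * (a**4 + b**4 + c**4)
        k = a * b + a * c + b * c
        if s != (2 * k) ** 2 or isqrt(s) ** 2 != s:
            ok = False
```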
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Scott Carter | 36,108 | <p><a href="http://www.youtube.com/user/ProfessorElvisZap" rel="nofollow noreferrer">Elvis's youtube link </a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Ferran V. | 892 | <p>My personal favorite is Dimensions, which was mentioned before by Gerald Edgar. For a neat and clear exposition of the geometry of 3-manifolds, the Poincaré conjecture, etc., I recommend <a href="http://athome.harvard.edu/threemanifolds/index.html">this</a> lecture by C. McMullen. Or Das Schöne denken (hosted at the HIM in Bonn), for a good "glimpse into the world of the mathematician". Jos Leys' mathematical imagery contains some (interesting) videos and (a lot of beautiful) images. </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| JoeG | 3,457 | <p>This <a href="https://www.youtube.com/watch?v=SccDUpIPXM0" rel="nofollow">video</a> about Andrew Wiles and the proof of Fermat's Last Theorem is the only time I've seen the real excitement of mathematics presented accurately. </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Paolo | 1,831 | <p>GRASP is a new lecture series at the University of Texas at Austin, which is aimed at bringing some of the fundamental concepts and big picture of the GRASP areas (Geometry, Representation, and Some Physics) to a wider audience (the intended target audience are beginning graduate students).</p>
<p><a href="http://www.ma.utexas.edu/users/benzvi/GRASP.html">http://www.ma.utexas.edu/users/benzvi/GRASP.html</a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Valerio Talamanca | 11,018 | <p><a href="http://www.youtube.com/watch?v=VdPrCWr9Ruk&feature=player_embedded#" rel="nofollow">http://www.youtube.com/watch?v=VdPrCWr9Ruk&feature=player_embedded#</a>!</p>
<p>is a video made by a student in the school of architecture using POV-Ray;
it is about algebraic surfaces and how they "deform"</p>
<p>there are a few more animations at the following url</p>
<p><a href="http://www.formulas.it/animazioni.php" rel="nofollow">http://www.formulas.it/animazioni.php</a> </p>
<p>they are part of an on-going project about the visualization
of mathematics (being developed by a group of mathematicians and architects) </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Justin Hilburn | 333 | <p>I am surprised no one has mentioned that the <a href="http://online.kitp.ucsb.edu/online/" rel="nofollow">Kavli Institute for Theoretical Physics</a>, the <a href="http://www.scgp.stonybrook.edu/" rel="nofollow">Simons Center for Geometry and Physics</a>, and the <a href="http://www.perimeterinstitute.ca/" rel="nofollow">Perimeter Institute</a> often tape conferences.</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| rnegrinho | 29,818 | <p>I know of some youtube channels with good content. The last two links are not strictly pure math, but still worth a look.</p>
<p>Institut Henri Poincaré:
<a href="https://www.youtube.com/channel/UCrKGv5WY5ryaIXEmnxKVxOQ" rel="nofollow">https://www.youtube.com/channel/UCrKGv5WY5ryaIXEmnxKVxOQ</a></p>
<p>princetonmathematics
<a href="https://www.youtube.com/channel/UCVKtRsfK1QPyHRP2QupyddQ" rel="nofollow">https://www.youtube.com/channel/UCVKtRsfK1QPyHRP2QupyddQ</a></p>
<p>Institut des Hautes Études Scientifiques (IHÉS)
<a href="https://www.youtube.com/channel/UC4R1IsRVKs_qlWKTm9pT82Q" rel="nofollow">https://www.youtube.com/channel/UC4R1IsRVKs_qlWKTm9pT82Q</a></p>
<p>StanfordCSTheory:
<a href="https://www.youtube.com/channel/UCdZlxxfpEzQWwMvjVQ7gOJw" rel="nofollow">https://www.youtube.com/channel/UCdZlxxfpEzQWwMvjVQ7gOJw</a></p>
<p>Simons Institute:
<a href="https://www.youtube.com/channel/UCW1C2xOfXsIzPgjXyuhkw9g" rel="nofollow">https://www.youtube.com/channel/UCW1C2xOfXsIzPgjXyuhkw9g</a></p>
|
361,755 | <p>Let $S$ be a multiplicatively closed subset of a commutative noetherian ring $A$. Let $M$ and $N$ be finitely generated $A$-modules. If $M_S$ is isomorphic to $N_S$, show that $M_t$ is isomorphic to $N_t$ for some $t \in S.$</p>
| Martin Brandenburg | 1,650 | <p>There is a more general geometric statement whose proof I find a little bit clearer.</p>
<p>Let $F,G$ sheaves of modules over some ringed space $X$. If $F$ is of finite presentation, then $\underline{\hom}_{\mathcal{O}_X}(F,G)_x \to \hom_{\mathcal{O}_{X,x}}(F_x,G_x)$ is an isomorphism (one easily reduces to the case $F=\mathcal{O}_X$). Thus, if $F,G$ are of finite presentation, and $F_x \cong G_x$, then there are local sections of $\underline{\hom}(F,G)$ and $\underline{\hom}(G,F)$ at $x$ which compose to the identity in $\underline{\hom}(F,F)_x$ and $\underline{\hom}(G,G)_x$. Both equalities hold actually as local sections of $\underline{\hom}(F,F)$ and $\underline{\hom}(G,G)$ for some small open neighborhood $U$ of $x$. This shows $F|_U \cong G|_U$.</p>
<p>If we apply this to quasi-coherent sheaves on an affine scheme, we get that if $M_{\mathfrak{p}} \cong N_{\mathfrak{p}}$ for finitely presented modules over some commutative ring $A$ (not assumed to be noetherian) for some prime ideal $\mathfrak{p}$, then there is some $f \notin \mathfrak{p}$ with $M_f \cong N_f$. If more generally $M_S \cong N_S$ for some multiplicative subset $S$, then we may assume $0 \notin S$ and localize at a prime ideal $\mathfrak{p}$ disjoint from $S$. Then we are in the situation before.</p>
|
480,828 | <p>We can suppose that we will create a new number system with essentially two imaginaries that do not interact. (Besides this, all quantities are taken to be integers) For example, we have an $i_1$ and an $i_2$. Then we could say</p>
<p>$$(a+b i_1)(c+d i_1) = ac + (ad + bc)i_1-bd$$</p>
<p>and, similarly for $i_2$:</p>
<p>$$(a+b i_2)(c+d i_2) = ac + (ad + bc)i_2-bd$$</p>
<p><strong>However</strong>, for a system with $i_1$ AND $i_2$:</p>
<p>$$(a+b i_1+c i_2)(d + e i_1 + f i_2)=$$<br>
$$(ad + ae i_1 + af i_2) + (bd i_1 - be + 0i_1i_2) + (cd i_2 + 0i_1i_2 -cf)$$</p>
<p>Above, the key thing to note is that $i_1\cdot i_2 = 0$.</p>
<p><strong>QUESTIONS</strong></p>
<p>I'm wondering if there is any idea or statement in math that says that I simply cannot do this. Without having worked in general systems of numbers, I'm wondering what ideas I should know about when I try to create a system like this.</p>
<p>Can I create this system if my main purposes are to carry out addition, subtraction, and multiplication with these numbers? Also, if I pose the additional constraint that all of these calculations are carried out modulo a prime, will this affect the system?</p>
| Anixx | 2,513 | <p>If you drop the rule that <span class="math-container">$i_1 i_2=0$</span> but maintain commutativity, you will get <a href="https://en.wikipedia.org/wiki/Bicomplex_number" rel="nofollow noreferrer">bicomplex numbers</a>.</p>
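For contrast, keeping the rule $i_1 i_2 = 0$ from the question gives a commutative ring with zero divisors; a quick sketch (Python, with a class name of my own invention) confirms the multiplication is consistent and that the system supports addition, subtraction and multiplication, but cannot be a field, with or without reduction modulo a prime:

```python
from dataclasses import dataclass

# Elements a + b*i1 + c*i2 with i1^2 = i2^2 = -1 and i1*i2 = i2*i1 = 0.
@dataclass(frozen=True)
class TwoImag:
    a: int  # real part
    b: int  # coefficient of i1
    c: int  # coefficient of i2

    def __add__(self, o):
        return TwoImag(self.a + o.a, self.b + o.b, self.c + o.c)

    def __mul__(self, o):
        # (a + b i1 + c i2)(d + e i1 + f i2)
        #   = (ad - be - cf) + (ae + bd) i1 + (af + cd) i2
        return TwoImag(self.a*o.a - self.b*o.b - self.c*o.c,
                       self.a*o.b + self.b*o.a,
                       self.a*o.c + self.c*o.a)

i1, i2 = TwoImag(0, 1, 0), TwoImag(0, 0, 1)
assert i1 * i1 == TwoImag(-1, 0, 0)   # i1^2 = -1, as with ordinary i
assert i1 * i2 == TwoImag(0, 0, 0)    # i1*i2 = 0: a zero divisor
print("multiplication table checks out")
```

Since $i_1 i_2 = 0$ with $i_1, i_2 \ne 0$, the ring has zero divisors, so cancellation and division fail even though $+$, $-$, $\times$ all behave as the question intends.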
|
4,099,649 | <p>I’m trying to solve two (in my opinion, tough) integrals which appear in part of my problem. I tried different ways but in the end I failed. See them below, please.</p>
<p><span class="math-container">$${\rm{integral}}\,1 = \int {{{\left( {\frac{A}{{{x^\alpha }}}\, + \sqrt {B + \frac{C}{{{x^{2\alpha }}}}\,} } \right)}^{\frac{1}{3}}}} dx ,$$</span>
and
<span class="math-container">$${\rm{integral}}\,2 = \int {\frac{1}{{{{\left( {\frac{A}{{{x^\alpha }}}\, + \sqrt {B + \frac{{C\,}}{{{x^{2\alpha }}}}} } \right)}^{\frac{1}{3}}}}}} dx,$$</span></p>
<p>where <span class="math-container">$\alpha$</span> is a positive integer (<span class="math-container">$\alpha \ge 2$</span>). How can I solve them? I was wondering if someone could help me integrate these functions. Any help is appreciated. Much thanks.</p>
<p><strong>Edit</strong>: Don't you think that if we set <span class="math-container">$\alpha =2 $</span>, the integral might be easier to solve? Having this, I think I can solve the general case with <span class="math-container">$\alpha \ge 2$</span>.</p>
| Tyma Gaidash | 905,886 | <p>Here is an evaluation of:</p>
<blockquote>
<p><span class="math-container">$${ \rm{integral}}\,1,2 = \int {{{\left( {\frac{A}{{{x^\alpha }}}\, + \sqrt {B + \frac{C}{{{x^{2\alpha }}}}\,} } \right)}^{\pm\frac{1}{3}}}} dx ,$$</span></p>
</blockquote>
<p>Note that if <span class="math-container">$$(a+b)^v=a^v\left(1+\frac ba\right)^v$$</span> then the radius of convergence for a <a href="https://tutorial.math.lamar.edu/classes/calcii/binomialseries.aspx" rel="nofollow noreferrer">Binomial Series</a> would be <span class="math-container">$\left|\frac ba\right|<1$</span></p>
<p>There is also a slightly generalized version for <span class="math-container">$(a+b)^v$</span> instead of <span class="math-container">$(1+b)^v$</span>. Here it is applied to your problem and simplified into an <a href="https://functions.wolfram.com/GammaBetaErf/Beta3/" rel="nofollow noreferrer">Incomplete Beta function</a>. I will use <span class="math-container">$\alpha=a$</span> for easier typing:</p>
<p><span class="math-container">$$\int {{{\left( {\frac{A}{{{x^a}}}\, + \sqrt {B + \frac{C}{{{x^{2a}}}}\,} } \right)}^{\pm\frac{1}{3}}}} dx =\int\sum_{n=0}^\infty \binom{\pm\frac13}n A^n x^{-2an} \sqrt{B+Cx^{-2a}}^{\pm\frac13-n}dx= \sum_{n=0}^\infty \binom{\pm\frac13}n A^n B^{\pm\frac16} \int x^{-2an}\sqrt B^{-n}\left(1+\frac CB x^{-2a}\right)^{\pm\frac16-\frac n2}dx $$</span></p>
<p>so <span class="math-container">$B\ne0$</span> which almost looks like an Incomplete Beta function:</p>
<p><span class="math-container">$$\text B_z(a,b)=\int_0^z t^{a-1} (1-t)^{b-1}dt$$</span></p>
<p><a href="https://www.wolframalpha.com/input/?i=integral+x%5EN+sqrt%28B%2BC+x%5Ek%29%5Ep" rel="nofollow noreferrer"><strong>It can be shown</strong></a> that you can use the <a href="https://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/" rel="nofollow noreferrer">Gauss Hypergeometric function</a> to integrate:</p>
<p><span class="math-container">$$\int\sum_{n=0}^\infty \binom{\pm\frac13}n A^n x^{-2an} \sqrt{B+Cx^{-2a}}^{\pm\frac13-n}dx =c+\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{A^n}{(1-2an)x^{2an-1}} \big(B+C x^{-2a}\big)^{\pm\frac16-\frac n2}\left(\frac{C x^{-2a}}B+1\right)^{\frac n2\mp\frac16}\,_2\text F_1\left(n-\frac 1{2a},\frac n2\mp\frac16,n-\frac1{2a}+1,-\frac{C x^{-2a}}B\right)$$</span></p>
<p>The Gauss Hypergeometric function also can be written as an <a href="https://mathworld.wolfram.com/IncompleteBetaFunction.html" rel="nofollow noreferrer">Incomplete Beta function</a> assuming an analytic continuation of the Incomplete Beta function to make the domain larger.</p>
<p><span class="math-container">$$\frac k{z^k}\text B_z(k,b)=\,_2\text F_1(k,1-b,k+1,z), k= n-\frac 1{2a},1-b= \frac n2\mp\frac16 ,z= -\frac{C x^{-2a}}B\ne0$$</span></p>
<p>Therefore:</p>
<p><span class="math-container">$$c+\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{A^n}{(1-2an)x^{2an-1}} \big(B+C x^{-2a}\big)^{\pm\frac16-\frac n2}\left(\frac{C x^{-2a}}B+1\right)^{\frac n2\mp\frac16}\,_2\text F_1\left(n-\frac 1{2a},\frac n2\mp\frac16,n-\frac1{2a}+1,-\frac{C x^{-2a}}B\right)=c+\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{A^n}{(1-2an)x^{2an-1}} \big(B+C x^{-2a}\big)^{\pm\frac16-\frac n2}\left(\frac{C x^{-2a}}B+1\right)^{\frac n2\mp\frac16}\frac{n-\frac 1{2a}}{\left(-\frac{C x^{-2a}}B\right)^{n-\frac 1{2a}}}\text B_{-\frac{C x^{-2a}}B }\left(n-\frac 1{2a},1-\left(\frac n2\mp\frac16\right)\right)= \boxed{c+\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{A^n}{(1-2an)x^{2an-1}} \big(B+C x^{-2a}\big)^{\pm\frac16-\frac n2}\left(\frac{C x^{-2a}}B+1\right)^{\frac n2\mp\frac16}\left(n-\frac 1{2a}\right){\left(-\frac{C x^{-2a}}B\right)^{\frac 1{2a}-n}}\text B_{-\frac{C x^{-2a}}B }\left(n-\frac 1{2a},\left\{\frac76\text{ or }\frac56\right\} -\frac n2\right) }$$</span></p>
<p>The <span class="math-container">$\{\text{or}\}$</span> is due to the <span class="math-container">$\pm$</span> in the argument. The above result gives the largest domain whereas the following simplifications restrict the domain from canceling and more:</p>
<p><span class="math-container">$$c+\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{A^n}{(1-2an)x^{2an-1}} \big(B+C x^{-2a}\big)^{\pm\frac16-\frac n2}\left(\frac{C x^{-2a}}B+1\right)^{\frac n2\mp\frac16}\left(n-\frac 1{2a}\right){\left(-\frac{C x^{-2a}}B\right)^{\frac 1{2a}-n}}\text B_{-\frac{C x^{-2a}}B }\left(n-\frac 1{2a},\left\{\frac76\text{ or }\frac56\right\} -\frac n2\right) = \boxed{c+\sqrt[a]iB^{\pm\frac16-\frac1{2a}}C^\frac1{2a}\sum_{n=0}^\infty \binom{\pm\frac13} n\frac{(-A)^n B^{\frac n2}}{C^n(1-2an)}\text B_{-{C x^{-2a}}}\left(n-\frac 1{2a},\left\{\frac76\text{ or }\frac56\right\} -\frac n2\right)} $$</span></p>
<p>If I think of some other important simplifications, then I will add them in. If you have any, then please tell me.</p>
<p>Note that various research papers have defined Incomplete Beta function series like:</p>
<blockquote>
<p><a href="https://www.hindawi.com/journals/amp/2020/5792853/" rel="nofollow noreferrer">An Extension of the Mittag-Leffler Function and Its Associated Properties</a></p>
</blockquote>
<p>and</p>
<blockquote>
<p><a href="https://www.researchgate.net/profile/Junesang-Choi/publication/315096127_A_New_Extension_of_Mittag-Leffler_function/links/5ad92509aca272fdaf82022f/A-New-Extension-of-Mittag-Leffler-function.pdf?origin=publication_detail" rel="nofollow noreferrer">A New Extension of Mittag-Leffler function</a></p>
</blockquote>
<p>but these are only for an Incomplete Beta function with the sum index in one argument, not both as in this answer. Please correct me and give me feedback!</p>
|
51,341 | <p>I have a function that is a summation of several Gaussians. Working with a 1D Gaussian here, there are 3 variables for each Gaussian: <code>A</code>, <code>mx</code>, and <code>sigma</code>:</p>
<p>$A \exp \left ( - \frac{\left ( x - mx \right )^{2}}{2 \times sigma^{2}} \right )$</p>
<pre><code>A*Exp[-((x - mx)^2/(2 sigma^2))]
</code></pre>
<p>The number of Gaussians in the final function will be vary each time the function is called, so my question is: <strong>What is the best way to define a function in Mathematica that can handle this variation, rather than hard-coding each Gaussian?</strong></p>
<p>I was thinking along the lines of providing a list of <code>{A,mx,sigma}</code> to the function, so that if I want one Gaussian, I provide:</p>
<pre><code>f[{{A,mx,sigma}}]
</code></pre>
<p>And if I want two Gaussians, I provide</p>
<pre><code>f[{{A,mx,sigma},{A2,mx2,sigma2}}]
</code></pre>
<p>which would give:</p>
<pre><code>A*Exp[-((x-mx)^2/(2sigma^2))] + A2*Exp[-((x-mx2)^2/(2sigma2^2))]
</code></pre>
<p>and so on.</p>
<p>But I'm not at all sure how to design the function <code>f[]</code> to do this efficiently (for example, can it be done without a <code>For[]</code> loop? Can it be compiled in future if necessary?). </p>
<p>Any help much appreciated - I did several searches on here and couldn't find anything, but I realise that might be because I'm not sure how to define my problem succinctly, so apologies if it has been asked before and I've missed it.</p>
| Apple | 10,193 | <pre><code>f[data_] := Total[#1*Exp[-((x - #2)^2/(2 #3^2))] & @@@ data];
f[{{A, mx, sigma}}]
f[{{A, mx, sigma}, {A2, mx2, sigma2}}]
</code></pre>
<p>Use Function, Apply and Total.</p>
<p><img src="https://i.stack.imgur.com/rfPbd.jpg" alt="enter image description here"></p>
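For comparison, the same idea of mapping over a variable-length list of parameter triples can be sketched outside Mathematica; this Python version (function names are mine) mirrors the <code>f</code> above:

```python
import math

# Build f(x) = sum_k A_k * exp(-(x - mx_k)^2 / (2 sigma_k^2))
# from a variable-length list of (A, mx, sigma) triples.
def gaussian_sum(params):
    def f(x):
        return sum(A * math.exp(-(x - mx)**2 / (2 * sigma**2))
                   for A, mx, sigma in params)
    return f

f = gaussian_sum([(1.0, 0.0, 1.0), (0.5, 2.0, 0.3)])
assert abs(f(0.0) - 1.0) < 1e-6   # second Gaussian is negligible at x = 0
print(f(2.0))                     # near 0.5 plus the tail of the first term
```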
|
1,452,425 | <p>From what I have been told, everything in mathematics has a definition and everything is based on the rules of logic. For example, whether or not <a href="https://math.stackexchange.com/a/11155/171192">$0^0$ is $1$ is a simple matter of definition</a>.</p>
<p><strong>My question is: what is the definition of a set?</strong> </p>
<p>I have noticed that many other definitions start with a set and then something. A <a href="https://en.wikipedia.org/wiki/Group_%28mathematics%29#Definition" rel="noreferrer">group is a set</a> with an operation, an equivalence relation is a set, a <a href="https://en.wikipedia.org/wiki/Function_%28mathematics%29#Definition" rel="noreferrer">function can be considered a set</a>, even the <a href="https://en.wikipedia.org/wiki/Natural_number#Constructions_based_on_set_theory" rel="noreferrer">natural numbers can be defined as sets</a> of other sets containing the empty set.</p>
<p>I understand that there is a whole area of mathematics (and philosophy?) that deals with <a href="https://en.wikipedia.org/wiki/Set_theory#Axiomatic_set_theory" rel="noreferrer">set theory</a>. I have looked at a book about this and I understand next to nothing.</p>
<p>From what little I can get, it seems a sets are "anything" that satisfies the axioms of set theory. It isn't enough to just say that a set is any collection of elements because of various paradoxes. <strong>So is it, for example, a right definition to say that a set is anything that satisfies the ZFC list of axioms?</strong></p>
| Michael Hardy | 11,667 | <p>Sets have members, and two sets are the same set if, and only if, they have the same members.</p>
<p>That is not quite enough to characterize what sets are.</p>
<p>For example, is the set of all sets that are not members of themselves a member of itself? If so, you get a contradiction, and if not, you get a contradiction. The "class" of all sets is "too big to be a set", and that simply means you cannot apply to it all the operations you can with sets. The same thing forbids the class of all groups from being a set: if it were a set, then the set of permutations of its members would be a group, and would therefore be a member of the set of all groups, and that leads to problems like those of the set of all sets that are not members of themselves. Likewise, the class of all vector spaces is not a set, etc. These "proper classes" differ from "sets" only in that they are not members of any other classes.</p>
<p>E. Kamke's <em>Theory of Sets</em> and Paul Halmos' <em>Naive Set Theory</em> are fairly gentle, if moderately onerous, introductions to "naive" set theory. In "naive" set theory, sets are collections of things. In "axiomatic" set theory, sets are whatever satisfies the axioms. Halmos inadvertently coined the term "naive set theory" by naming his book that, when he mistakenly thought the term was already in standard use.</p>
|
1,186,517 | <p>I am working through some exercises to prepare for a discrete mathematics exam, and I am stuck on one problem. Could anyone help me?</p>
<blockquote>
<p>How many ways we can partition set {1,2,...,9} into subsets of size 2
and 5?</p>
</blockquote>
<p>Also, are there any tutorials for solving this kind of question?</p>
<p>Edit: As always, Scott is right...</p>
| Brian M. Scott | 12,042 | <p>A partition of $\{1,\ldots,9\}$ into sets of sizes $2$ and $5$ must contain two sets of size $2$ and one of size $5$: no other combination of $2$’s and $5$’s adds up to $9$. Thus, the question boils down to determining how many ways there are to pick a $5$-element subset of $\{1,\ldots,9\}$ and then split the remaining $4$ elements into two $2$-element subsets.</p>
<p>The first part of that is easy: a $9$-element set has $\binom95$ $5$-element subsets, so there are $\binom95=126$ ways to choose the $5$-element piece of our partition. The second step is just a little trickier. You might be tempted to think that there are $\binom42=6$ ways to choose $2$ of the remaining $4$ elements for one of the $2$-element sets, leaving the other $2$ to be the other $2$-element set, but that’s not quite right. Suppose that your $5$-element set is $\{1,3,5,7,9\}$, so you’re making up two $2$-element sets from $\{2,4,6,8\}$. You might pick $\{2,6\}$ for one of your $2$-element sets, leaving $\{4,8\}$ for the other. Unfortunately, you might equally well pick $\{4,8\}$, leaving $\{2,6\}$ for the other. That figure of $\binom42$ counts these two choices separately, even though they produce the same two $2$-element sets.</p>
<p>You can avoid this in at least two ways. One is to notice that each pair of $2$-element sets is being counted twice, once for each of the two sets, so that instead of getting $\binom42$ pairs of $2$-element sets, we’re getting only half that many, i.e., $3$. The other is to employ a trick that’s useful quite often. No matter what $4$ elements remain after you’ve picked the $5$-element set, you can list them in numerical order; call them $n_1<n_2<n_3<n_4$. When we split them into two pairs, we have to pair one of the numbers $n_2,n_3$, and $n_4$ with the smallest one, $n_1$; obviously we can do that in $3$ ways. And once we’ve done that, there’s no more choosing to be done: the two numbers that we didn’t pair with $n_1$ form the last $2$-element set.</p>
<p>One way or another, we reach the final result: there are</p>
<p>$$\binom95\cdot3=126\cdot3=378$$</p>
<p>such partitions.</p>
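The count can be confirmed by brute force; the sketch below (my own) stores each partition as a set of frozensets so the two $2$-element blocks are not distinguished from each other:

```python
from itertools import combinations

# Enumerate all partitions of {1,...,9} into blocks of sizes 5, 2, 2.
elems = frozenset(range(1, 10))
partitions = set()
for five in combinations(sorted(elems), 5):
    rest = elems - set(five)
    for pair in combinations(sorted(rest), 2):
        other = rest - set(pair)
        # frozensets make block order irrelevant, so each partition
        # is recorded exactly once even though each is reached twice
        partitions.add(frozenset({frozenset(five), frozenset(pair), frozenset(other)}))
print(len(partitions))  # 378
```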
|
1,722,948 | <blockquote>
<p>$$\frac{1}{x}-1>0$$</p>
</blockquote>
<p>$$\therefore \frac{1}{x} > 1$$</p>
<p>$$\therefore 1 > x$$</p>
<p>However, as evident from the graph (as well as common sense), the right answer should be $1>x>0$. Typically, I wouldn't multiply both sides by $x$, as I don't know its sign, but since I was unable to factorise the LHS, I did so. How can I get the correct result algebraically?</p>
| Michael Hoppe | 93,935 | <p>Consider the function $f(x)=1/x-1=(1-x)/x$ defined for $x\neq0$. As $ f$ is continuous it can only change sign where it is zero or undefined, i.e., at $x=1$ or $ x=0$. Hence the sign of $f$ is constant on $]-\infty, 0[$, $]0,1[$, and $]1,\infty[$.</p>
<p>Finally compute the sign of say $f(-1)$, $f(1/2)$, and $f(2)$.</p>
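A quick numeric scan (pure Python, my own sketch) confirms that the solution set is exactly the open interval $0<x<1$:

```python
# Numeric scan confirming the solution set of 1/x - 1 > 0 is (0, 1).
def f(x):
    return 1 / x - 1

samples = [k / 1000 for k in range(-3000, 3001) if k != 0]   # skip x = 0
solution = [x for x in samples if f(x) > 0]
assert all(0 < x < 1 for x in solution)
assert all(f(x) <= 0 for x in samples if x < 0 or x >= 1)
print(min(solution), max(solution))  # 0.001 0.999
```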
|
1,893,280 | <p>How to show $\frac{c}{n} \leq \log(1+\frac{c}{n-c})$ for any positive constant $c$ such that $0 < c < n$?</p>
<p>I'm considering the Taylor expansion, but it does not work...</p>
| Mark Viola | 218,419 | <blockquote>
<p><strong>I thought it would be instructive to present a way forward that does not rely on calculus, but rather an elementary inequality. To that end, we proceed.</strong></p>
<p>I showed in <a href="https://math.stackexchange.com/questions/1589429/how-to-prove-that-logxx-when-x1/1590263#1590263">THIS ANSWER</a> using only the limit definition of the exponential function and Bernoulli's Inequality that the logarithm function satisfies the inequalities</p>
<p><span class="math-container">$$\frac{x-1}{x}\le\log(x)\le x-1 \tag 1$$</span></p>
<p>for <span class="math-container">$x>0$</span>.</p>
</blockquote>
<p>Using <span class="math-container">$(1)$</span> with <span class="math-container">$x=1+\frac{c}{n-c}=\frac{n}{n-c}$</span>, and thus <span class="math-container">$\frac{x-1}{x}=\frac{c}{n}$</span>, it is easy to see that for <span class="math-container">$n>c>0$</span></p>
<p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{\frac{c}{n}\le \log\left(1+\frac{c}{n-c}\right)\le \frac{c}{n-c}}$$</span></p>
<p>And we are done!</p>
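The boxed chain of inequalities is easy to spot-check numerically; a sketch over random admissible pairs $0<c<n$:

```python
import math, random

# Verify c/n <= log(1 + c/(n - c)) <= c/(n - c) for random 0 < c < n.
random.seed(7)
for _ in range(10000):
    n = random.uniform(0.5, 100.0)
    c = n * random.uniform(1e-6, 0.999999)   # keeps 0 < c < n strictly
    mid = math.log(1 + c / (n - c))
    assert c / n <= mid + 1e-12
    assert mid <= c / (n - c) + 1e-12
print("both bounds hold on all samples")
```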
|
3,029,708 | <p>Suppose there is a vector <span class="math-container">$U \in \mathbb{R}^n$</span>. How would you find the derivative of:</p>
<p><span class="math-container">$$
F(U)=trace\left(diag(U) A\ diag(U) \right)
$$</span>
where <span class="math-container">$A \in \mathbb{R}^{n \times n} \succ 0 $</span> and where <span class="math-container">$diag(\cdot)$</span> creates a diagonal matrix with <span class="math-container">$(\cdot)$</span> on the leading diagonal. Where the derivative is taken with respect to the vector <span class="math-container">$U$</span>, i.e.</p>
<p><span class="math-container">$$
{\partial F(U) \over \partial U } \\
$$</span></p>
<p>I am more interested in the method used. Thanks in advance.</p>
| p32fr4 | 586,851 | <p>Posting the solution I identified.</p>
<p>Because of the trace operator, evaluating the above is equivalent to evaluating:</p>
<p><span class="math-container">$$
{\partial \left(\sum\limits_{i=1}^{n} u_i\,A_{(i,i)}u_i\right)\over \partial U }=
\begin{pmatrix}
{\partial \left(\sum\limits_{i=1}^{n} u_iA_{(i,i)}u_i\right)\over \partial u_1}\\
{\partial \left(\sum\limits_{i=1}^{n} u_iA_{(i,i)}u_i\right)\over \partial u_2}\\
\vdots\\
{\partial \left(\sum\limits_{i=1}^{n} u_iA_{(i,i)}u_i\right)\over \partial u_n}
\end{pmatrix}
$$</span></p>
<p>which becomes:</p>
<p><span class="math-container">$$
{\partial \left( trace(diag(U)\ A\ diag(U)) \right)\over \partial U} = 2\ diag( A) U
$$</span></p>
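Since the trace reduces $F$ to $\sum_i u_i A_{(i,i)} u_i$, the boxed gradient can be checked against central finite differences; the sketch below (pure Python, names mine) does so for a random instance:

```python
import random

# Finite-difference check of the boxed result:
# with F(U) = trace(diag(U) A diag(U)) = sum_i u_i A_(i,i) u_i,
# the gradient should be 2 * A_(i,i) * u_i in each component.
random.seed(0)
n = 5
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
U = [random.uniform(-1, 1) for _ in range(n)]

def F(u):
    # the trace picks out only the diagonal entries of A
    return sum(u[i] * A[i][i] * u[i] for i in range(n))

h = 1e-6
for i in range(n):
    up, dn = U[:], U[:]
    up[i] += h
    dn[i] -= h
    numeric = (F(up) - F(dn)) / (2 * h)
    assert abs(numeric - 2 * A[i][i] * U[i]) < 1e-6
print("finite differences agree with 2 diag(A) U")
```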
|
4,252,428 | <p>I encountered this system of nonlinear equations:
<span class="math-container">$$\begin{cases}
x+xy^4=y+x^4y\\
x+xy^2=y+x^2y
\end{cases} $$</span></p>
<p>My ultimate goal is to show that this has solutions only when <span class="math-container">$x=y$</span>. I didn't find any straightforward method of solving this, but then I came up with the following solution.</p>
<blockquote>
<p>First, if <span class="math-container">$x=0$</span>, then clearly <span class="math-container">$y=0$</span> and for the solution we need to
have <span class="math-container">$x=y$</span>.</p>
<p>Then, assume that <span class="math-container">$x\ne 0$</span>. Therefore there exists a real number <span class="math-container">$t$</span>
s.t. <span class="math-container">$y=t x$</span>. By substituting this into the equations, we find (by
comparing the coefficients) that <span class="math-container">$t=1$</span> and therefore <span class="math-container">$x=y$</span>.</p>
<p>Therefore the system has only solutions of form <span class="math-container">$x=y$</span>, and every pair
(x, y=x) is a solution.</p>
</blockquote>
<p>So is this kind of method OK? If I checked the case <span class="math-container">$x=0$</span> separately?</p>
| herb steinberg | 501,262 | <p>Proof by contradiction:</p>
<p>Take the difference of the two equations and divide out the common factor <span class="math-container">$xy$</span> to get <span class="math-container">$y^3-y=x^3-x$</span>. This is a cubic in either variable in terms of the other, giving three solutions in each case, possibly with duplicates (<span class="math-container">$x=y$</span> will appear in both sets). Use synthetic division by <span class="math-container">$x-y$</span> in both cases to get quadratics whose roots are the remaining solutions.</p>
<p>Remaining solutions: <span class="math-container">$x=\frac{-y\pm \sqrt{4-3y^2}}{2}$</span> and <span class="math-container">$y=\frac{-x\pm \sqrt{4-3x^2}}{2}$</span></p>
<p>However these possible solutions do not in general satisfy the original equations, leaving <span class="math-container">$x=y$</span> as the only possible. An example: <span class="math-container">$y=1$</span> leads to <span class="math-container">$x=0$</span> and <span class="math-container">$x=-1$</span>, which fail.</p>
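A quick numeric spot-check (my own sketch) of both claims, that $x=y$ always works and that sample roots of the leftover quadratic $x^2+xy+y^2=1$ fail:

```python
# Pairs with x = y always solve the system, while the extra roots
# coming from x^2 + xy + y^2 = 1 (e.g. y = 1 gives x = 0 or x = -1)
# fail the original equations.
def eqs(x, y):
    return (x + x*y**4 - y - x**4*y, x + x*y**2 - y - x**2*y)

for t in (-2.0, -0.5, 0.0, 1.3):
    assert max(abs(v) for v in eqs(t, t)) < 1e-12   # x = y works

assert max(abs(v) for v in eqs(-1.0, 1.0)) > 1      # fails: eqs = (-4, -4)
assert max(abs(v) for v in eqs(0.0, 1.0)) > 0.5     # fails: eqs = (-1, -1)
print("only x = y survives")
```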
|
1,464,747 | <p>I am trying to solve this question:</p>
<blockquote>
<p>How many ways are there to pack eight identical DVDs into five indistinguishable boxes so that each box contains at least one DVD?</p>
</blockquote>
<p>I am very lost at trying to solve this one. My attempt to start this problem involved drawing 5 boxes, and placing one DVD each, meaning 3 DVDs were left to be dropped, but I am quite stuck at this point.</p>
<p>Any help you can provide would be great. Thank you.</p>
| soctiggs | 276,328 | <p>With 5 boxes and 8 DVDs, first put one DVD in each box. It remains to count the ways of placing the 3 leftover DVDs into the 5 boxes. If the boxes were distinguishable, this would be the number of non-negative integer solutions of <br>
$b_1 + b_2 + b_3 + b_4 + b_5 = 3$, <br> i.e. $\binom{5+3-1}{3} = \binom{7}{3} = 35$. (In general, the equation $a_1+a_2+\dots+a_n = r$ has $\binom{n+r-1}{r}$ non-negative integer solutions, which is easily proved by stars and bars.) <br> Since the boxes here are indistinguishable, we must instead count the partitions of $3$ into at most $5$ parts, namely $3$, $2+1$ and $1+1+1$, so there are 3 ways.</p>
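Both readings can be confirmed by brute force (a quick Python sketch; note that the question specifies indistinguishable boxes):

```python
from itertools import product

# Compositions of 8 into 5 positive parts = distinguishable boxes;
# the distinct sorted multisets = indistinguishable boxes.
comps = [c for c in product(range(1, 9), repeat=5) if sum(c) == 8]
parts = {tuple(sorted(c)) for c in comps}
print(len(comps), len(parts))  # 35 3
```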
|
194,954 | <p>Is there a reason to use <code>Hold*</code> attributes for functional code (e.g. no intention to mutate input)? I'd expect performance gains as in pass by value vs pass by reference. </p>
<p>E.g. </p>
<pre><code>data = RandomReal[1, 10^8];
data // Function[x, x[[1]]] // RepeatedTiming
</code></pre>
<blockquote>
<pre><code>{8.*10^-7, 0.0372378}
</code></pre>
</blockquote>
<pre><code>data // Function[x, x[[1]], HoldAllComplete] // RepeatedTiming
</code></pre>
<blockquote>
<pre><code>{8.0*10^-7, 0.0372378}
</code></pre>
</blockquote>
<p>The question is, why doesn't it matter? Or are there cases where it matters, performance tuning wise.</p>
<p>Due to my limited understanding of internals of WL implementation I am puzzled by this and I often have a case where I need to traverse a big expression by keeping a 'reference' to the expression as a whole. So I can do:</p>
<pre><code>parse[bigExpression]:=parse[bigExpression, #] & /@ bigExpression
</code></pre>
<p>or I can give <code>parse</code> the <code>HoldFirst</code> attribute. Or I could even <code>MapIndexed</code> and keep track of the index I could use to access a global <code>bigExpression</code>.</p>
<h2>Edit:</h2>
<p>As noted in comments, the initial example wasn't the best, here I hope is a better one:</p>
<pre><code>data = RandomReal[1, 10^8];
g1[x_] := x[[1]];
f1[x_] := g1[x];
SetAttributes[{g2, f2}, HoldAll];
g2[x_] := x[[1]];
f2[x_] := g2[x];
f1[data] // RepeatedTiming
f2[data] // RepeatedTiming
%%/%
</code></pre>
<blockquote>
<p>{8.38*10^-7, 0.487565}</p>
<p>{7.840*10^-7, 0.487565}</p>
<p>{1.07, 1.}</p>
</blockquote>
| b3m2a1 | 38,205 | <p>In general, as Henrik and I note in the comments, Mathematica makes sure only to copy data when it has undergone some sort of change internally. An easy way to see this is to set a flag like <code>Valid</code> and see when it disappears:</p>
<pre><code>myBigData = RandomReal[{}, {500, 800}];
myBigData // System`Private`SetValid;
amIDifferentNowHeld[Hold[data_]] :=
! System`Private`ValidQ[data]
amIDifferentNow[data_] :=
! System`Private`ValidQ[data]
amIDifferentNow[myBigData]
False
amIDifferentNowHeld[Hold[myBigData]]
False
trueCopy[data_] :=
BinaryDeserialize[BinarySerialize[data]];
amIDifferentNow[trueCopy[myBigData]]
True
</code></pre>
<p>So it seems pretty clear to me that we were working with the same object in both the unheld and held cases and this is why you can pass big arrays and stuff through a program and not need to worry about pass-by-value or pass-by-reference. It's really all pass-by-reference anyway with a pointer to some kind of internal <code>Expression</code> object.</p>
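As a rough analogy only (Python's semantics differ from Mathematica's immutable expressions), passing a value around without copying, with a fresh object appearing only once the value actually changes, looks like:

```python
# Names are references to a shared object; a new object only appears
# when an operation produces a different value.
big = list(range(1_000_000))
alias = big                 # no copy made: both names reference one list
assert alias is big
changed = big + [0]         # producing a new value allocates a new object
assert changed is not big
print("alias shares the object; the changed value is a new one")
```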
<p>Now, where <code>Hold</code> <em>does</em> buy you something is in data-safety. Consider this kind of naive data cleaning you might think to do (basically copied from some old code I wrote). I wanted to canonicalize some options and merge with some defaults so that they were a nice single <code>Association</code> I could store and query later and feed through my program. Here's how that looked:</p>
<pre><code>$defaults = {
{
"GridOptions" -> {"Domain" -> {{-5, 5}, {-5, 5}}, "Points" -> {60, 60}},
"PotentialEnergyOptions" -> {"PotentialFunction" -> (1/2 #^2 &)},
"KineticEnergyOptions" -> {"Masses" -> {1, 1}, "HBars" -> {1, 1}}
}
};
Options[doADVR] = {
"KineticEnergyMatrix" -> None,
"PotentialEnergyMatrix" -> None
};
getWfs[ops : OptionsPattern[]] :=
Module[{dom, pts, potF, m, hb, ke, pe, opts},
opts = Merge[
{
Normal@{ops},
$defaults
},
First
];
(* do the real calculation, usually, but that's not the point here *)
opts
]
</code></pre>
<p>And here's what happens when you feed in a <code>SparseArray</code> for one of the <code>"*Matrix"</code> arguments:</p>
<pre><code>ke = $H4DDVR["KineticEnergy", "Points" -> {30, 30}];
ke // Head
ke // Dimensions
SparseArray
{900, 900}
dvrOpts = getWfs["KineticEnergyMatrix" -> ke];
dvrOpts["KineticEnergyMatrix"] // Head
dvrOpts["KineticEnergyMatrix"] // Dimensions
List
{900, 900}
</code></pre>
<p>We can see that <code>Normal</code> converted it to a <code>List</code>! As these things grow in size the penalty for that gets more and more dramatic. </p>
<p>And Mathematica provides lots of ways for you to shoot yourself in the foot like this: <code>Normal</code> recurses into data structures, <code>ReplaceAll</code> digs into even <code>PackedArray</code> structures, etc. and those are the <em>well designed</em> functions. Too much of the system tries to be "clever" or "helpful" and in doing so makes it really easy to wreck your data by accident.</p>
<p>Obviously <code>Hold</code> has its core usage in clever destructuring-based meta-programming, but in terms of <em>performance</em> a big place <code>Hold</code> can help you out is in making sure your data flows through a program unadulterated, e.g.:</p>
<pre><code>dvrOpts["KineticEnergyMatrix"] // Head
ReleaseHold@dvrOpts["KineticEnergyMatrix"] // Head
ReleaseHold@dvrOpts["KineticEnergyMatrix"] // Dimensions
Hold
SparseArray
{900, 900}
</code></pre>
<p>That said, the place where I actually use <code>Hold</code> the most is when doing crazy things with <code>Block</code> to ensure no evaluation, which often looks like:</p>
<pre><code>doASymbolicThing~SetAttributes~HoldAll
$defaultSymbolList = Thread@Hold[{x, y, z, a, b, c}];
doASymbolicThing[symbolicExpr_,
symbols1 : {__Symbol},
symbol2_Symbol
] :=
Replace[
Thread[
DeleteDuplicates@
Join[Thread[Hold[symbols1]], $defaultSymbolList, {Hold[symbol2]}],
Hold
],
Hold[symList : {__Symbol}] :>
Block[symList,
processSymbolicExprSafely[symbolicExpr, symList]
]
]
</code></pre>
|
3,399,195 | <p>So I've seen various questions with the limit 'equal' to <span class="math-container">$\infty$</span> or that the limit doesn't exist in a case where the function tends to <span class="math-container">$\infty$</span>.</p>
<p>For example, the limit of <span class="math-container">$\sqrt{x}$</span> as <span class="math-container">$x$</span> tends to <span class="math-container">$\infty$</span>. Is the answer <span class="math-container">$\infty$</span> or that the limit doesn't exist?</p>
<p>Obviously the function tends to <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> tends to <span class="math-container">$\infty$</span> but I don't know what to give as an answer.</p>
<p>I've seen similar questions where the function tends to <span class="math-container">$\infty$</span> as <span class="math-container">$x$</span> tends to a certain value where the answer has been that the limit doesn't exist. I've also seen where, in a similar situation, the limit has been 'equal' to <span class="math-container">$\infty$</span>.</p>
<p>So which is the one to use? What's the difference? Thanks!</p>
| user | 505,767 | <p>Usually we say that a limit exists when it is finite or infinite. In the first case we say that the function converges to <span class="math-container">$L$</span>; in the second case we say that the function diverges to plus or minus infinity.</p>
<p>We say that the limit doesn't exist in all remaining cases: for example, periodic functions, or other cases that are neither convergent nor divergent.</p>
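<p>To make the convention concrete, the statement <span class="math-container">$\lim_{x\to\infty}\sqrt{x}=\infty$</span> can be read as a definition in its own right (a standard textbook formulation, not part of the original answer):</p>

```latex
% "f(x) tends to infinity as x tends to infinity" means:
% every bound M is eventually exceeded.
\lim_{x\to\infty} f(x)=\infty
\iff
\forall M>0 \;\exists N>0 : \; x>N \implies f(x)>M.
% For f(x)=\sqrt{x}, the choice N=M^2 works, since x>M^2 \implies \sqrt{x}>M.
```

<p>Under this reading, "the limit equals <span class="math-container">$\infty$</span>" is a precise statement about eventual growth, while the limit still fails to exist as a real number; both phrasings are in common use.</p>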
|
3,009,362 | <p>I need to find
<span class="math-container">$$\lim_{x\rightarrow -5} \frac{2x^2-50}{2x^2+3x-35}$$</span></p>
<p>Looking at the graph, I know the answer should be <span class="math-container">$\frac{20}{17}$</span>, but when I tried solving it, I reached <span class="math-container">$0$</span>.</p>
<p>Here are the two ways I approached this:</p>
<p>WAY I:</p>
<p><span class="math-container">$$\lim_{x\rightarrow -5} \frac{2x^2-50}{2x^2+3x-35} =
\lim_{x\rightarrow -5} \frac{\require{cancel} \cancel{x^2}(2- \frac{50}{x^2})} {\require{cancel} \cancel{x^2}(2+ \frac{3}{x}-\frac{35}{x^2})}
=\frac{2-2}{\frac {42}{5}}=0
$$</span></p>
<p>WAY II:
<span class="math-container">$$\lim_{x\rightarrow -5} \frac{2x^2-50}{2x^2+3x-35} =
\lim_{x\rightarrow -5} \frac{\require{cancel} \cancel{2}(x^2- 25)} {\require{cancel} \cancel{2}(x^2+ \frac{3}{2}x-\frac{35}{2})}
=\lim_{x\rightarrow -5} \frac{{\require{cancel} \cancel{(x-5)}}(x+5)}{{\require{cancel} \cancel{(x-5)}}(x+3.5)}= \frac{-5+5}{-5+3.5}=0
$$</span></p>
<p>What am I doing wrong here?</p>
<p>Thanks!</p>
| Mefitico | 534,516 | <p><strong>Hint:</strong> Try factorization! The common factor is <span class="math-container">$(x+5)$</span>, not <span class="math-container">$(x-5)$</span>:</p>
<p><span class="math-container">$$
\frac{2x^2-50}{2x^2+3x-35}=\frac{2(x^2-25)}{(1/2)(4x^2+6x-70)}=\frac{4(x-5)(x+5)}{(2x+10)(2x-7)}
$$</span></p>
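<p>A quick numerical check of the claimed value (a sketch in Python, not part of the original answer; the function name <code>f</code> is mine):</p>

```python
# Numerically confirm lim_{x -> -5} (2x^2 - 50) / (2x^2 + 3x - 35) = 20/17.
def f(x):
    return (2 * x**2 - 50) / (2 * x**2 + 3 * x - 35)

# Approach -5 from both sides; both one-sided values settle near
# 20/17 = 1.17647..., matching the graph mentioned in the question.
for h in (1e-2, 1e-4, 1e-6):
    print(f(-5 + h), f(-5 - h))
```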
|
2,358,385 | <p>I had a test and I couldn't solve this problem:<br></p>
<p>Given $f: \mathbb R^2 \rightarrow \mathbb R$.<br>For every constant $y_0$, $f(x,y_0)$ is known to be continuous.<Br>Also, $\frac{\partial f}{\partial y}(x,y)$ is defined and bounded for all $(x,y)$.
<br><br>I needed to prove that $f$ is continuous for all $\mathbb R^2$. How do you do that?</p>
| StuartMN | 439,545 | <p>$f(x,y)-f(x_0,y_0) = [f(x,y)-f(x,y_0)] + [f(x,y_0) - f(x_0,y_0)]$.</p>
<p>On the first bracketed term use the mean value theorem together with the boundedness of the partial derivative; on the second, use the given continuity of $f(x,y_0)$ in $x$.</p>
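<p>Spelling out the first bracket (a sketch; $M$ denotes the given bound on $|\partial f/\partial y|$, and $\xi$ is the intermediate point produced by the mean value theorem in the $y$-variable):</p>

```latex
% Mean value theorem applied in y at fixed x:
|f(x,y)-f(x,y_0)|
  = \left|\frac{\partial f}{\partial y}(x,\xi)\right|\,|y-y_0|
  \le M\,|y-y_0|,
\qquad \xi \text{ between } y_0 \text{ and } y.
```

<p>This bound is uniform in $x$, which is what lets the two-variable continuity estimate go through.</p>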
|
2,622,092 | <p>I want to study the convergence of the improper integral $$ \int_0^{\infty} \frac{e^{-x^2}-e^{-3x^2}}{x^a}$$To do so I used the comparison test with $\frac{1}{x^a}$ separating $\int_0^{\infty}$ into $\int_0^{1} + \int_1^{\infty}$.</p>
<p>For the first part, $\int_0^{1}$, I did $$\lim_{x\to0} \frac{\frac{e^{-x^2}-e^{-3x^2}}{x^a}}{\frac{1}{x^a}}=0$$
Therefore $\int_0^{1}\frac{e^{-x^2}-e^{-3x^2}}{x^a}$ converges for $a<1$, since $\int_0^{1}\frac{1}{x^a}$ converges for $a<1$</p>
<p>For the second part I did the same, and got that $\int_1^{\infty}\frac{e^{-x^2}-e^{-3x^2}}{x^a}$ converges for $a>1$. This means that the initial improper integral does not converge for any $a$, is this correct?</p>
| Jack D'Aurizio | 44,121 | <p>Your problem boils down to computing
$$ I(\alpha)=\int_{0}^{+\infty}\frac{1-e^{-z^2}}{z^\alpha}\,dz = \frac{1}{2}\int_{0}^{+\infty}\frac{1-e^{-z}}{z^{\frac{\alpha+1}{2}}}\,dz. $$
In a right neighbourhood of the origin $\frac{1-e^{-z}}{z^{\frac{\alpha+1}{2}}}$ behaves like $z^{\frac{1-\alpha}{2}}$ and in a left neighbourhood of $+\infty$ it behaves like $z^{-\frac{1+\alpha}{2}}$, hence $I(\alpha)$ is convergent as soon as $\alpha\in(1,3)$, and in such a case
$$I(\alpha)=-\frac{1}{2}\Gamma\left(\frac{1-\alpha}{2}\right)=-\frac{\pi}{2\cos\left(\frac{\pi\alpha}{2}\right)\,\Gamma\left(\frac{1+\alpha}{2}\right)}.$$
Similarly, the convergence of
$$ J(\alpha)=\int_{0}^{+\infty}\frac{e^{-z^2}-e^{-3z^2}}{z^\alpha}\,dz $$
only depends on the integrability of the integrand function in a right neighborhood of the origin, and for any $\alpha < 3$ we have
$$ J(\alpha) = \color{red}{\frac{\pi\left(1-\sqrt{3^{\alpha-1}}\right)}{2\cos\left(\frac{\pi\alpha}{2}\right)\,\Gamma\left(\frac{\alpha+1}{2}\right)}}.$$</p>
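<p>The closed form can be sanity-checked numerically, e.g. at $\alpha=2$, where it reduces to $\sqrt{\pi}(\sqrt{3}-1)\approx 1.2975$ (a sketch in Python, not part of the original answer; the crude midpoint quadrature and the cutoff at $z=10$ are my choices):</p>

```python
import math

def integrand(z):
    # (e^{-z^2} - e^{-3 z^2}) / z^2, with the z -> 0 limit equal to 2
    if z == 0:
        return 2.0
    return (math.exp(-z ** 2) - math.exp(-3 * z ** 2)) / z ** 2

# Composite midpoint rule on [0, 10]; the tail beyond z = 10 is
# smaller than e^{-100} and can be ignored at this accuracy.
n = 200_000
h = 10 / n
J2_numeric = h * sum(integrand((k + 0.5) * h) for k in range(n))

# Closed form from the answer, evaluated at alpha = 2.
alpha = 2
J2_formula = (math.pi * (1 - math.sqrt(3 ** (alpha - 1)))
              / (2 * math.cos(math.pi * alpha / 2)
                 * math.gamma((alpha + 1) / 2)))

print(J2_numeric, J2_formula)  # both ≈ 1.2975
```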
|
2,972,957 | <p><strong>Artin's Theorem-</strong> Let <span class="math-container">$E$</span> be a field and <span class="math-container">$G$</span> be a group of automorphisms of <span class="math-container">$E$</span> and <span class="math-container">$k$</span> be the set of elements of <span class="math-container">$E$</span> fixed by <span class="math-container">$G$</span>. Then <span class="math-container">$k$</span> is a subfield of <span class="math-container">$E$</span> and <span class="math-container">$E$</span> has finite degree over <span class="math-container">$k$</span> iff <span class="math-container">$G$</span> is finite. In that case, <span class="math-container">$[E:k]=|G|.$</span></p>
<p><strong>Question-</strong> Use Artin's Theorem to show that for every field <span class="math-container">$E$</span> with <span class="math-container">$n$</span> distinct automorphisms, if <span class="math-container">$k$</span> is the fixed field of this set of automorphisms, then <span class="math-container">$[E:k] \ge n$</span>.</p>
<p><strong>Attempt-</strong> If <span class="math-container">$[E:k]$</span> is infinite we are done. </p>
<p>So we take the case where <span class="math-container">$[E:k]$</span> is finite. Let <span class="math-container">$A= \{\sigma_{1}, \ldots\ ,\sigma_{n}\}$</span> be the set of distinct automorphisms of <span class="math-container">$E$</span>. Then <span class="math-container">$k=\{a\in E\ | \sigma_{i}(a)=a\ , 1\le i \le n \}$</span>. </p>
<p>Now the <em>hint given in class</em> says "consider the group <span class="math-container">$G$</span> generated by <span class="math-container">$\sigma_{i}'s$</span> and <span class="math-container">$F=\{a\in E \ |\ \sigma(a)=a\ \forall\ \sigma \in G\}$</span>. (Here <span class="math-container">$G$</span> is the subgroup of all automorphisms of <span class="math-container">$E$</span>)" </p>
<p>But isn't <span class="math-container">$A$</span> itself a group so <span class="math-container">$G$</span> must be equal to <span class="math-container">$A$</span> and thus <span class="math-container">$F=k.$</span> And thus <span class="math-container">$[E:k]=[E:F]$</span>. What did we achieve this way?</p>
| Tsemo Aristide | 280,301 | <p><span class="math-container">$A$</span> is not itself a group in general: think of the set of transpositions, which generates <span class="math-container">$S_n$</span> (a permutation of even signature is not a transposition). However, any element fixed by every <span class="math-container">$\sigma_i$</span> is also fixed by all their products and inverses, so the fixed field <span class="math-container">$F$</span> of <span class="math-container">$G$</span> is the same as <span class="math-container">$k$</span>. Artin's theorem then gives <span class="math-container">$[E:k]=[E:F]=|G|\geq |A|=n$</span> when <span class="math-container">$G$</span> is finite, and <span class="math-container">$[E:k]$</span> is infinite otherwise.</p>
|
2,252,579 | <p>$$ \lim_{n\to\infty}\left (\frac n {n+1} \right )^{2n} = \lim_{n\to\infty}\left (\frac{n+1}{n} \right )^{-2n} =\lim_{n\to\infty} \left (1 + \frac 1n \right )^{-2n}= \left (\lim_{n\to\infty}\left (1 + \frac 1n \right )^{n} \right )^{-2} = e^{-2}$$</p>
<p>What I don't understand is why is it a -2 and not +2? Also, is there a better way to solve this?</p>
<p>Thank you</p>
| Alex Jones | 350,433 | <p>$\lim (\frac{n}{n+1})^{2n} = \lim e^{\log(\frac{n}{n+1})^{2n}} = \lim e^{2n\log(\frac{n}{n+1})} = e^{\lim 2n\log(\frac{n}{n+1})}$.</p>
<p>Applying L'Hôpital's rule to the resulting $\frac{0}{0}$ form: $\lim 2n \log (\frac{n}{n+1}) = \lim \frac{\log (\frac{n}{n+1})}{\frac{1}{2n}} = \lim \frac{\frac{1}{\frac{n}{n+1}(n+1)^2}}{\frac{-1}{2n^2}} = \lim -2 \frac{n}{n+1} = -2(1) = -2.$</p>
<p>So, $\lim (\frac{n}{n+1})^{2n} = e^{-2}$.</p>
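<p>A quick numerical check of the sign of the exponent (a sketch in Python, not part of the original answer): the values settle toward $e^{-2}\approx 0.1353$, not $e^{2}\approx 7.389$, because $\frac{n}{n+1}&lt;1$ raised to a growing power shrinks.</p>

```python
import math

# (n/(n+1))^(2n) for increasing n, versus the claimed limit.
for n in (10, 1000, 1_000_000):
    print(n, (n / (n + 1)) ** (2 * n))

print("e^-2 =", math.exp(-2))
```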
|