2,877,174 | <p>Imagine that we roll two fair six-sided dice (i.e., all six sides have equal probability). Let X<sub>1</sub> and
X<sub>2</sub> be the random variables representing these outcomes.
Now, imagine we take one of the dice rolls, say X<sub>1</sub>, and add a (possibly negative) constant c to
the result. If this becomes less than zero, then we set it to zero; denote this by <br></p>
<p><em>(X + c)<sub>+</sub> = max(X + c, 0)</em></p>
<p>What is the expected value of <em>(X<sub>1</sub> - 2)<sub>+</sub></em> * <em>(X<sub>2</sub> + 1)<sub>+</sub></em>?</p>
<p>My answer <br>
E{<em>(X<sub>1</sub> - 2)<sub>+</sub></em> * <em>(X<sub>2</sub> + 1)<sub>+</sub></em>} = $\frac{1+2+3+4}{6}$ * $\frac{2+3+4+5+6+7}{6}$ = 7.5
<br></p>
<p>I don't know if my answer is right. For X<sub>2</sub>, am I supposed to divide by 6 or by 36? Can anyone correct me if I'm wrong?</p>
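<p>A brute-force enumeration over all 36 equally likely outcomes should settle it (a quick script I used to check myself):</p>

```python
# Enumerate all 36 equally likely (X1, X2) outcomes and average the product.
total = 0
for x1 in range(1, 7):
    for x2 in range(1, 7):
        total += max(x1 - 2, 0) * max(x2 + 1, 0)

expected = total / 36
print(expected)  # 7.5 = (10/6) * (27/6), by independence
```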
| Will Jagy | 10,400 | <p>I like to find the actual Jordan form, especially including the matrices that give $R^{-1}A R = J.$ It is not bad when the eigenvalues are integers, and makes the concepts concrete. This begins with choosing a column vector I called $w$ such that $A^2 w \neq 0.$ That becomes the far right column.</p>
<p>$$
\frac{1}{8} \;
\left(
\begin{array}{cccc}
8&-4&-4&0\\
0&1&3&0\\
0&2&-2&0\\
0&-4&4&8\\
\end{array}
\right)
\left(
\begin{array}{cccc}
0&0&0&1\\
0&-1&1&3\\
0&1&-1&-1\\
0&-1&1&2\\
\end{array}
\right)
\left(
\begin{array}{cccc}
1&2&1&0\\
0&2&3&0\\
0&2&-1&0\\
0&0&2&1\\
\end{array}
\right) =
\left(
\begin{array}{cccc}
0&0&0&0\\
0&0&1&0\\
0&0&0&1\\
0&0&0&0\\
\end{array}
\right)
$$</p>
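<p>One can check the displayed product numerically (a quick NumPy sketch; the three matrices are copied from the display, with the left factor carrying the $1/8$):</p>

```python
import numpy as np

# Left factor (including the 1/8) is R^{-1}; middle is A; right is R.
R_inv = np.array([[8, -4, -4, 0],
                  [0,  1,  3, 0],
                  [0,  2, -2, 0],
                  [0, -4,  4, 8]]) / 8
A = np.array([[0,  0,  0,  1],
              [0, -1,  1,  3],
              [0,  1, -1, -1],
              [0, -1,  1,  2]])
R = np.array([[1, 2,  1, 0],
              [0, 2,  3, 0],
              [0, 2, -1, 0],
              [0, 0,  2, 1]])

J = R_inv @ A @ R
print(J)  # all zeros except J[1, 2] = J[2, 3] = 1 (the Jordan pattern above)
```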
|
1,050,141 | <p>Find the roots of the equation $3^{x+2}+3^{-x}=10$. By inspection the roots are $x=0$ and $x=-2$. But how can I solve this equation otherwise? </p>
| Balbichi | 24,690 | <p>Suppose $y=3^x$. Then $3^{x+2}+3^{-x}=10$ is transformed into $$9y+{1\over y}=10\Rightarrow 9y^2-10y+1=0\Leftrightarrow(y-1)(9y-1)=0,$$ so $y=1$ or $y=\frac19$, giving $x=0$ or $x=-2$.</p>
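<p>A quick check of the two resulting roots in plain Python:</p>

```python
# The factorisation gives y = 1 or y = 1/9, i.e. x = 0 or x = -2.
for x in (0, -2):
    value = 3 ** (x + 2) + 3 ** (-x)
    print(x, value)  # both equal 10
```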
|
1,450,669 | <p>Find the value of $$\lim\limits_{x\to 0^+}[1+[x]]^{\frac2x}$$ where $[x]$ denotes the greatest integer less than or equal to $x$. <br></p>
<hr>
<p><strong>My attempt:</strong> <br>
I calculated $[1+[x]]$ to be $1$ as $x\to 0^+$. <br> Now I am stuck. <br> Please help me.</p>
| Aditya Agarwal | 217,555 | <p>Substitute $y=\frac1x$, so as $x\to0^{+}, y\to\infty$. <br>
For $y>1$ we have $\left[\frac1y\right]=0$, so the base is exactly $1$ and $$\lim\limits_{y\to\infty}\left[1+\left[\frac1y\right]\right]^{2y}=\lim\limits_{y\to\infty}1^{2y}=1.$$ <br></p>
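<p>A quick numerical confirmation: for every $x\in(0,1)$ we have $[x]=0$, so the expression equals $1$ exactly before any limit is taken:</p>

```python
import math

# For 0 < x < 1, floor(x) = 0, so (1 + floor(x))**(2/x) = 1**(2/x) = 1 exactly.
for x in (0.5, 1e-3, 1e-9):
    value = (1 + math.floor(x)) ** (2 / x)
    print(x, value)  # always 1.0
```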
|
85,622 | <p>I'm trying to understand how this works in terms of Galois theory and local class field theory. Assume we have an extension of local fields $E/L/K$ s.t. $E/L$ and $L/K$ are abelian. I'm interested in recognizing when $E/K$ is Galois. Clearly, $E/K$ is Galois if and only if $E$ is always fixed by an extension of an $L/K$ automorphism to $E$, but this is tricky to compute.</p>
<p>I overheard a brief conversation that this can be done through Galois groups and some group actions that occur in the tower, but I haven't found anything explicit through Google. We should be able to see from how $Gal(L/K)$ acts on something whether or not the extension is Galois. I'm having trouble seeing what the action should be. I hope someone who knows what I'm talking about could write it down explicitly. Since the Galois groups should correspond through local class field theory to very concrete objects which are quotients of $E^\times$, $L^\times$ and $K^\times$, I was wondering how this action on the Galois side is expressed on the local field side.</p>
<p>I'm interested in this since it clearly would provide a tool for constructing some solvable extensions of e.g. $\mathbb{Q}_p$. I apologize for being fuzzy, but I don't know how to be more explicit.</p>
| SGP | 11,786 | <p>Not an answer, but perhaps something useful from an apprentice in CFT:</p>
<p>Considerations very closely related to your question led Andre Weil to discover the "Weil group"; see <a href="http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jmsj/1261734944" rel="nofollow">his paper on Class Field theory (1955)</a>. </p>
<p>Let $G$ be the Galois group of $L/K$ and $H$ the Galois group of $E/L$. </p>
<p>The multiplicative group $L^*$ is naturally a $G$-module and so one can consider the cohomology group $H^2(G, {L^*})$. Now $E^*$ is not a $G$-module in any natural way. But it does give rise to a $G$-module which should be useful in answering your question.</p>
<p>Namely, the subgroup $NE^*$ of norms from $E$ to $L$ is a $G$-module. So one can consider the quotient $G$-module $M = {L^*}/{NE^*}$ and the cohomology group $H^2(G, M)$. </p>
<p>If $E/K$ is Galois, then its Galois group $\Gamma$ would sit in an exact sequence $$0 \to H \to \Gamma \to G \to 0.$$ The conjugation action of $\Gamma$ on its normal subgroup $H$ (identified with its image in $\Gamma$) factorises via the quotient ${\Gamma}/{H} = G$. So this gives $H$ a $G$-module structure. </p>
<p>The group $\Gamma$ gives rise to an element $\gamma$ of $H^2(G, H)$ (via a standard construction). Local class field theory provides us with </p>
<ul>
<li>a fundamental class $u_{L/K}$ in $H^2(G, {L^*})$, and</li>
<li>an isomorphism $H \cong M$.</li>
</ul>
<p>The latter yields a map $L^* \to M \cong H$ of $G$-modules which provides a map $$t: H^2(G, L^*) \to H^2(G, H).$$</p>
<p>If $E/K$ were Galois, then the Galois group $\Gamma$ would have a constraint, namely, $$\gamma = t(u_{L/K}).$$</p>
<p>This discussion presupposes that $E/K$ is Galois and gives a constraint; it does not answer your question on how to check whether $E/K$ is Galois!!</p>
<p>The fundamental class is what gives rise to the local Weil group.</p>
<p>By the way, every finite Galois extension of $\mathbb{Q}_p$ (or of a $p$-adic local field) is solvable.</p>
<p>All of this can be found in Cassels-Frohlich or Serre's Local Fields or Milne's notes on Class Field Theory.</p>
<p>Let us wait for a master in LCFT for an answer!!</p>
|
3,563,238 | <p>I am trying to understand the relation of likelihood to cross-entropy by reading <a href="https://en.wikipedia.org/wiki/Cross_entropy" rel="nofollow noreferrer">cross-entropy</a>.</p>
<p>The problem is I cannot understand the formula for the likelihood in the article. The likelihood is defined as follows</p>
<p><span class="math-container">$$\prod_{i}^{}q_i^{Np_i}$$</span></p>
<p>where</p>
<p><span class="math-container">$q_i$</span> is the estimated probability of outcome <span class="math-container">$i$</span>, <span class="math-container">$p_i$</span> is the empirical probability of outcome <span class="math-container">$i$</span> and <span class="math-container">$N$</span> is the size of the training set. </p>
<p>I haven't seen a formulation of the likelihood like this before, one that combines estimated and empirical probabilities. Why does <span class="math-container">$p_i$</span> appear in the formula? What's the motivation behind this formulation?</p>
| WoolierThanThou | 686,397 | <p>It just looks to me like the person who wrote the wiki article didn't phrase it all that well.</p>
<p>Your model is that <span class="math-container">$(X_n)_{1\leq n\leq N}$</span> are iid and <span class="math-container">$\mathbb{P}(X_n=i)=q_i$</span>. So given a realisation <span class="math-container">$(x_n)_{1\leq n\leq N}$</span>, we have that the likelihood function is exactly</p>
<p><span class="math-container">$$
\ell_x(q)=\mathbb{P}(X=x|q)=\prod_{n=1}^N q_{x_n}=\prod_{i} q_i^{\#\{n\,\mid\, x_n=i\}},
$$</span>
and the exponent here is exactly the empirical probability distribution multiplied by <span class="math-container">$N$</span>.</p>
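<p>Concretely, with made-up numbers (a hypothetical model $q$ and sample): grouping equal factors turns the product over observations into the product over outcomes, since outcome $i$ occurs exactly $Np_i$ times:</p>

```python
from collections import Counter

q = {0: 0.5, 1: 0.3, 2: 0.2}                 # hypothetical model probabilities
sample = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0]      # hypothetical realisation, N = 10
N = len(sample)

# Likelihood as a product over observations...
lik_obs = 1.0
for x in sample:
    lik_obs *= q[x]

# ...and as a product over outcomes with exponent N * p_i (the count of i).
lik_outcomes = 1.0
for i, c in Counter(sample).items():
    p_i = c / N                              # empirical probability of outcome i
    lik_outcomes *= q[i] ** (N * p_i)

print(lik_obs, lik_outcomes)  # identical up to floating-point rounding
```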
|
306,178 | <p>Given
$$
y_n=\left(1+\frac{1}{n}\right)^{n+1}\hspace{-6mm},\qquad n \in \mathbb{N}, \quad n \geq 1.
$$
Show that $\lbrace y_n \rbrace$ is a decreasing sequence. Can anyone help? I considered the ratio $\frac{y_{n+1}}{y_n}$ but got stuck.</p>
| Sugata Adhya | 36,242 | <p><strong>Hint:</strong> Try to use the A.M.- G.M. inequality for positive numbers.</p>
|
306,178 | <p>Given
$$
y_n=\left(1+\frac{1}{n}\right)^{n+1}\hspace{-6mm},\qquad n \in \mathbb{N}, \quad n \geq 1.
$$
Show that $\lbrace y_n \rbrace$ is a decreasing sequence. Can anyone help? I considered the ratio $\frac{y_{n+1}}{y_n}$ but got stuck.</p>
| Micheal Brain Hurts | 324,389 | <p><strong>Lemma(1):</strong></p>
<p>Let $a,b\in\mathbb R^+$.</p>
<p>$$\sqrt[n+1]{ab^n}\le \dfrac{a+bn}{n+1}$$</p>
<p><strong>Proof:</strong></p>
<p>$$\sqrt[n+1]{a\underbrace{bbb...b}_{n\;times}}\le \dfrac{a+b+b+b+...+b}{n+1}=\dfrac{a+bn}{n+1}\\ \Box.$$</p>
<p><strong>Lemma(2):</strong></p>
<p>Let $x_n=\left(1+\dfrac1n \right)^{n}$ and $z_n=\left(1-\dfrac1n\right)^{n}$. Then for all $n\in\mathbb N$, $n\ge1$: $x_n<x_{n+1}$ and $z_n<z_{n+1}$.</p>
<p><strong>Proof:</strong>
Use "Lemma(1)" with $a=1$ and $b=\left(1\pm\dfrac1n\right)$
$$\Longrightarrow$$
$$\sqrt[n+1]{\left(1\pm\dfrac1n\right)^{n}}<\dfrac{1+n\left(1\pm\frac1n\right)}{n+1}=\dfrac{n+1\pm1}{n+1}=1\pm\dfrac1{n+1}$$
$$\Longrightarrow$$
$$\left(1\pm\dfrac1n\right)^{n}<\left(1\pm\dfrac1{n+1}\right)^{n+1},$$
that is, $x_n<x_{n+1}$ and $z_n<z_{n+1}$ (the inequality is strict since $a\neq b$).$$\Box.$$</p>
<p><strong>Theorem:</strong></p>
<p>$y_n=\left(1+\dfrac1n\right)^{n+1}$ satisfies $y_{n+1}<y_n$ for all $n\in\mathbb N$, $n\ge1$.</p>
<p><strong>Proof:</strong></p>
<p>We know that $z_{n+1}<z_{n+2}\Longleftrightarrow \dfrac1{z_{n+1}}>\dfrac1{z_{n+2}}$ from "Lemma(2)"</p>
<p>$$y_n=\left(1+\dfrac1n\right)^{n+1}=\left(\dfrac{n+1}{n}\right)^{n+1}=\dfrac{1}{\left(\dfrac{n}{n+1}\right)^{n+1}}=\dfrac{1}{\left(1-\dfrac1{n+1}\right)^{n+1}}=\dfrac1{z_{n+1}}$$</p>
<p>$$y_n>y_{n+1}\Box.$$</p>
|
306,178 | <p>Given
$$
y_n=\left(1+\frac{1}{n}\right)^{n+1}\hspace{-6mm},\qquad n \in \mathbb{N}, \quad n \geq 1.
$$
Show that $\lbrace y_n \rbrace$ is a decreasing sequence. Can anyone help? I considered the ratio $\frac{y_{n+1}}{y_n}$ but got stuck.</p>
| Ekadh Singh - Reinstate Monica | 675,447 | <p>The derivative of <span class="math-container">$(1+1/n)^{(n+1)}$</span> with respect to <span class="math-container">$n$</span> is <span class="math-container">$(\ln(1+1/n)-1/n)\cdot(1+1/n)^{(n+1)}$</span>, so if that is less than <span class="math-container">$0$</span> then <span class="math-container">$(1+1/n)^{(n+1)}$</span> is decreasing. Since <span class="math-container">$(1+1/n)^{(n+1)}>0$</span> (a positive base raised to any real power is positive), it suffices to show <span class="math-container">$\ln(1+1/n)-1/n<0$</span>, i.e. <span class="math-container">$\ln(1+1/n)<1/n$</span>. Let <span class="math-container">$x=1/n$</span>; then I am proving <span class="math-container">$\ln(1+x)<x$</span>, i.e. <span class="math-container">$1+x<e^x$</span>, which is true for <span class="math-container">$x\neq 0$</span> by Bernoulli's inequality; the original function is undefined at <span class="math-container">$x=0$</span>, so that case doesn't matter. This completes the argument.</p>
|
2,213,047 | <p>Let $F$ be the set of all functions $f : \mathbb{R} \to \mathbb{R}$. A relation $c$ is defined on $F$ by
$f c g$ if and only if $f(x) ≤ g(x)$ for all $x ∈ \mathbb{R} $.
Prove that '$c$' is a partial order.</p>
<p>I have proved that the relation is <em>reflexive</em>. Now I must prove that it is <em>antisymmetric</em> (and later <em>transitive</em>). </p>
<p>My working thus far: </p>
<p>Suppose $a, b ∈ F$. We must prove that if $a c b$ and $b c a$ then $a = b$. Now, if $a c b$ and $b c a$ then $f(a) ≤ f(b)$ and $f(b) ≤ f(a)$. Therefore, $f(a) = f(b)$. </p>
<p><em>Now, how to prove that $a = b$? There is no indication that $f$ is an injective function.</em> </p>
| GoodDeeds | 307,825 | <p>The relation is over the set of functions, $F$, and not on the set of real numbers, $\mathbb R$.</p>
<p>So, here, your $a$s and $b$s are elements of $F$, i.e., functions that map from $\mathbb R$ to $\mathbb R$.</p>
<p>In your case, $a c b$ means $a \le b$, where $a$ and $b$ are functions in $F$. It does not mean $f(a) \le f(b)$.</p>
<p>Hence, showing $a(x)=b(x)$ for all $x\in\mathbb R$, i.e. $a=b$ as functions, is sufficient.</p>
|
993,385 | <p>Salam,</p>
<p>I would appreciate it if anyone could help me solve this integral:</p>
<p>$$ \int \frac{e^{ax^2+bx}}{\sqrt{1-x^2}}dx
$$</p>
<p>Many thanks.</p>
| k170 | 161,538 | <p>The antiderivative of this integrand cannot be expressed in terms of elementary functions. </p>
|
993,385 | <p>Salam,</p>
<p>I would appreciate it if anyone could help me solve this integral:</p>
<p>$$ \int \frac{e^{ax^2+bx}}{\sqrt{1-x^2}}dx
$$</p>
<p>Many thanks.</p>
| Lucian | 93,448 | <p>The indefinite integral <a href="http://en.wikipedia.org/wiki/Liouville's_theorem_(differential_algebra)" rel="nofollow">cannot</a> be <a href="http://en.wikipedia.org/wiki/Risch_algorithm" rel="nofollow">expressed</a> in terms of elementary functions. However, the definite integral can, for appropriate limits, and $b=0$, be expressed in terms of special <a href="http://en.wikipedia.org/wiki/Bessel_function" rel="nofollow">Bessel <em>I</em> functions</a>:</p>
<p>$$\int_{-1}^1\dfrac{e^{-ax^2}}{\sqrt{1-x^2}}dx~=~2\int_0^1\dfrac{e^{-ax^2}}{\sqrt{1-x^2}}dx~=~\pi~\exp\bigg(-\dfrac a2\bigg)~I_0\bigg(\dfrac a2\bigg).$$</p>
<p>More information can be found <a href="http://dlmf.nist.gov/10" rel="nofollow">here</a>.</p>
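<p>A numerical check of the closed form $\pi\,e^{-a/2}I_0(a/2)$ (a sketch, assuming SciPy is available; the substitution $x=\sin t$ removes the endpoint singularity):</p>

```python
import math
from scipy.integrate import quad
from scipy.special import i0

a = 1.3  # arbitrary test value

# Substituting x = sin(t):
#   integral_{-1}^{1} e^{-a x^2}/sqrt(1 - x^2) dx
#     = integral_{-pi/2}^{pi/2} e^{-a sin^2 t} dt
lhs, _ = quad(lambda t: math.exp(-a * math.sin(t) ** 2), -math.pi / 2, math.pi / 2)
rhs = math.pi * math.exp(-a / 2) * i0(a / 2)

print(lhs, rhs)  # agree to quadrature tolerance
```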
|
2,916,168 | <p>I am a beginner in proofs and, unfortunately, I cannot wrap my mind around how to prove the simplest things, so I need a bit of help getting started. This is the proof that I am dealing with:</p>
<p>$\text{If }x< y< 0\text{, then }x^{2}> y^{2}\text{.}$</p>
<p>Thank you in advance.</p>
| Paramanand Singh | 72,031 | <p>Multiplying an inequality by a negative number reverses the inequality.</p>
<hr>
<p>We have $x<y$ and multiplying it with negative number $x$ we get $x^2>xy$. Again multiplying the same starting inequality by negative number $y$ we get $xy>y^2$. Now using $x^2>xy$ and $xy>y^2$ we arrive at $x^2>y^2$.</p>
|
1,214,042 | <p>"A particle of mass m is attracted towards a fixed point 0 with a force inversely proportional to its instantaneous distance from 0. If the particle is released from rest, at a distance L from 0, find the time for it to reach 0"</p>
<p>My attempt at this question:
Noting that $x(0)=L$ and $x'(0)=0$, I tried to set up the equation for this problem, $m\frac {d^2x}{dt^2}=\frac {-k}{x}$. Since we are covering the Laplace transform and gamma functions in my class right now, my attempt was to apply the Laplace transform to the equation above; however, I am having trouble finding the answer. Please help me out. Thank you very much in advance for your help.</p>
| Lutz Lehmann | 115,115 | <p>The ratio of $k$ to $m$ is not given, so set $k/m=1/2$. Solve $\ddot x=-1/(2x)$. Multiply by $\dot x$ and integrate once:
$$
\frac12(\dot x(t)^2-\dot x(0)^2)= -\frac12\,\ln|x(t)|+\frac12\ln|x(0)|
\implies \dot x(t)=\pm\sqrt{-\ln(|x(t)|/L)}
$$
This is not an easy integral.</p>
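<p>The antiderivative is indeed not elementary, but the elapsed time itself is a definite integral that does close up, fittingly via a Gamma integral: separation of variables gives $T=\int_0^L\frac{dx}{\sqrt{\ln(L/x)}}$ (with the normalisation $k/m=1/2$ above), and the substitution $x=Le^{-s}$ turns this into $L\int_0^\infty s^{-1/2}e^{-s}\,ds=L\,\Gamma(\tfrac12)=L\sqrt{\pi}$. A numerical check:</p>

```python
import math

L = 1.0  # fall distance (arbitrary units); assumes k/m = 1/2 as above

# T = integral_0^L dx / sqrt(ln(L/x)).  A further substitution x = L e^{-t^2}
# gives T = 2L * integral_0^inf e^{-t^2} dt = L * Gamma(1/2) = L * sqrt(pi).
# Midpoint rule on the smooth form (the tail beyond t = 8 is negligible):
a, b, n = 0.0, 8.0, 200_000
h = (b - a) / n
T_numeric = 2 * L * h * sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(n))

print(T_numeric, L * math.sqrt(math.pi))  # both approx 1.7724538509
```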
|
566,255 | <p>How to evaluate the following integral? </p>
<p>$\int_{0}^{\infty} \sin(kx)dx=\frac 1 k$ </p>
<p>The book Mathematical Physics by Butkov reads "The sequence $f_N(k)=\int_{0}^{N} \sin(kx) dx=\frac{(1-\cos kN)}k$ diverges as $N$ approaches $\infty$, but it is weakly convergent for a suitably chosen set of test functions $g(k)$ defined on $0<k<\infty$."
The problem is: how should one choose the test functions?</p>
| daulomb | 98,075 | <p>We should calculate the limit $\displaystyle\lim_{k\rightarrow\infty}\int_{0}^\infty\sin(kx)\phi(x)dx$ for every $\phi\in C_0^\infty(0, \infty)$, i.e., for every test function $\phi$. Integration by parts (the boundary terms vanish since $\phi$ has compact support) implies that $$\displaystyle\lim_{k\rightarrow\infty}\int_{0}^\infty\sin(kx)\phi(x)dx$$
$$=\displaystyle\lim_{k\rightarrow\infty}\int_{0}^\infty\frac{\cos(kx)}{k}\phi'(x)dx$$
and since $$\bigg|\int_{0}^\infty\frac{\cos(kx)}{k}\phi'(x)dx\bigg|\leq\frac{1}{k}\int_0^\infty|\cos(kx)|\,|\phi'(x)|dx\leq\frac{1}{k}\int_0^\infty|\phi'(x)|dx,$$
where the latter integral is finite (because $\phi(x)$ has compact support in $(0, \infty)$), the integral tends to zero as $k\rightarrow\infty$.</p>
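<p>As an illustration (with a hypothetical smooth bump $\phi$ supported on $(1,2)$, assuming SciPy):</p>

```python
import math
from scipy.integrate import quad

def phi(x):
    """A smooth bump test function supported on (1, 2) (hypothetical choice)."""
    return math.exp(-1.0 / ((x - 1.0) * (2.0 - x))) if 1.0 < x < 2.0 else 0.0

for k in (10, 100, 1000):
    val, _ = quad(lambda x: math.sin(k * x) * phi(x), 1, 2, limit=2000)
    print(k, val)  # tends to 0, at least as fast as 1/k
```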
|
3,664,272 | <h2><strong>MOTIVATION</strong></h2>
<p>I am considering investing a significant amount of money into a raffle. In order to decide the number of entries I purchase, I would like to find probability distributions for the number of prizes I will win with respect to the number of entries I purchase.</p>
<h2><strong>HOW THE RAFFLE WORKS</strong></h2>
<p>Total entries: 1000</p>
<p>Winning entries (# of prizes): 20</p>
<p>How it actually works is in 20 rounds of 50 entries each. </p>
<ul>
<li>Entries 1-50 have a 1/50 chance to win prize 1</li>
<li>Entries 51-100 have a 1/50 chance to win prize 2</li>
</ul>
<p>... </p>
<ul>
<li>Entries 951-1000 have a 1/50 chance to win prize 20</li>
</ul>
<p>The entry numbers are purchased in order, so technically if I can get entries 1-50 then I have a 100% chance to win prize 1. However, I don't expect I will be able to do this since many people will be trying to buy entries at the same time. For simplicity, perhaps we can just assume that my entries will be evenly distributed across all 20 rounds (see <strong>BONUS</strong> below for my thoughts on how this change impacts the solution and please correct me if I am wrong).</p>
<h2><strong>INITIAL THOUGHTS</strong></h2>
<p>From some quick research I think the estimate for my odds of winning ONE prize is approximately like this:</p>
<p>1 - [ (1000-n) / 1000 ]^20</p>
<p>where n = number of entries I purchase</p>
<h2><strong>WHAT I WANT TO KNOW</strong></h2>
<p>What I actually want is how to calculate the probability distribution of the number of prizes I win. So not just whether I win 1 prize or not.</p>
<p>Given n where n is the number of entries I purchase, I want to know the average (mean) number of prizes I should expect to win and the surrounding distribution. This way I can decide my risk tolerance and choose how many entries (n) it is worth it for me to buy.</p>
<h2><strong>BONUS</strong></h2>
<p>I mentioned we can simplify the problem by assuming my entries will be evenly distributed across all 20 rounds, but I am curious what the optimal strategy would be if I could choose my entry numbers.</p>
<p>For example, if n = 100 entries, is it best to buy entries 1-100 and have a 100% chance to win 2 prizes? Or would having a more even distribution be better. For example, having 5 entries in each of the 20 rounds ?</p>
<p>In other words, I could have:</p>
<ul>
<li>100% chance to win in 2 rounds (win 2 prizes) and 0% chance to win in
the other 18 rounds</li>
<li>10% chance to win in all 20 rounds</li>
</ul>
<p>My understanding is that in both cases my expected number of wins is 2. The difference is that in the first case it is guaranteed whereas in the second case I could get lucky and win more or unlucky and win less. Correct?</p>
<p>Extrapolating from that, it seems like the more evenly distributed the entry numbers are across rounds, the more uncertainty in the number of prizes I will actually win. However, the expected number (mean) of the distribution should always be the same. Is this true?</p>
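<p>To make this concrete, here is a quick script (treating the 20 draws as independent) comparing the two strategies for n = 100:</p>

```python
per_round = 50

def mean_var(tickets):
    """Mean and variance of the number of prizes won, one draw per round."""
    p = [t / per_round for t in tickets]          # win probability per round
    mean = sum(p)
    var = sum(pi * (1 - pi) for pi in p)          # sum of Bernoulli variances
    return mean, var

concentrated = [50, 50] + [0] * 18   # buy entries 1-100
spread = [5] * 20                    # 5 entries in each of the 20 rounds

print(mean_var(concentrated))  # (2.0, 0.0): guaranteed 2 prizes
print(mean_var(spread))        # mean 2.0, variance 1.8: same mean, more risk
```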
| lonza leggiera | 632,373 | <ul>
<li>If you buy a total of <span class="math-container">$\ n\ $</span> tickets, the expected number of prizes you win is <span class="math-container">$\ \frac{n}{50}\ $</span>, regardless of which rounds the tickets are in. You're therefore correct that your expected number of wins is always the same for the same number of tickets purchased.</li>
<li>If you buy <span class="math-container">$\ t_i\ $</span> tickets in round <span class="math-container">$\ i\ $</span> for <span class="math-container">$\ i=1,2,\dots,20\ $</span>, and the winning ticket for each round is drawn randomly, and independently of the draws of all the other rounds, then the <em>variance</em> of the total number of prizes you win is the sum of the variances of the numbers of prizes you win in all rounds. You can only win no prize or <span class="math-container">$1$</span> prize in any single round, <span class="math-container">$\ i\ $</span>, say, which you will do with probabilities <span class="math-container">$\ 1-\frac{t_i}{50}\ $</span> and <span class="math-container">$\ \frac{t_i}{50}\ $</span>, respectively. The expected number of prizes you win in that round is therefore <span class="math-container">$\ \frac{t_i}{50}\ $</span>, and the variance of that number is
<span class="math-container">$$
\left(0-\frac{t_i}{50}\right)^2\left(1-\frac{t_i}{50}\right) +\left(1-\frac{t_i}{50}\right)^2\left(\frac{t_i}{50}\right)= \left(\frac{t_i}{50}\right) \left(1-\frac{t_i}{50}\right)\ .
$$</span>
Therefore the total variance in the number of prizes you will win is
<span class="math-container">$$
\sum_{i=1}^{20} \left(\frac{t_i}{50}\right) \left(1-\frac{t_i}{50}\right)\ .
$$</span>
You're also correct that this is minimised by concentrating all your tickets as much as possible in the same rounds. If you have <span class="math-container">$\ s_1\le s_2\le \dots \le s_j<50\ $</span> tickets in rounds <span class="math-container">$\ r_1, r_2, \dots, r_j\ $</span>, respectively, for instance, then those tickets contribute a total of
<span class="math-container">$$
\sum_{i=1}^j \left(\frac{s_i}{50}\right) \left(1-\frac{s_i}{50}\right)
$$</span>
to the variance. If you were to transfer <span class="math-container">$\ x\ $</span> of the tickets you have in round <span class="math-container">$\ r_1\ $</span> to round <span class="math-container">$\ r_j\ $</span>, however <span class="math-container">$\large($</span>with <span class="math-container">$\ 0<$$x\le$$\min\left(s_1,50-s_j\right)\ \large)$</span>, the variance would then decrease by
<span class="math-container">$$
\left(\frac{s_1}{50}\right)\left(1-\frac{s_1}{50}\right)+ \left(\frac{s_j}{50}\right)\left(1-\frac{s_j}{50}\right)-\left(\frac{s_1-x}{50}\right)\left(1-\frac{s_1-x}{50}\right) -\left(\frac{s_j+x}{50}\right)\left(1-\frac{s_j+x}{50}\right)=\frac{x\left(s_j+x-s_1\right)}{25}>0\ .
$$</span>
It follows from this that you will minimise the variance by concentrating all your tickets as much as possible in the same rounds—that is, by having <span class="math-container">$\ t_i<50\ $</span> for <em>at most one</em> value of <span class="math-container">$\ i\ $</span>. If you do that, then you're certain to win at least <span class="math-container">$\ \left\lfloor\frac{n}{50}\right\rfloor\ $</span> prizes, and at most <span class="math-container">$\ \left\lfloor\frac{n}{50}\right\rfloor+1\ $</span>. You will win the former number with probability <span class="math-container">$\ 1+\left\lfloor\frac{n}{50}\right\rfloor-\frac{n}{50}\ $</span>, and the latter number with probability <span class="math-container">$\ \frac{n}{50}-\left\lfloor\frac{n}{50}\right\rfloor\ $</span>.</li>
<li><p>What your "optimal" strategy is depends on your own personal preferences. Typically, the "best" strategies are considered to be the ones which maximise your expected gain. If that's what you want to do, you should buy all the tickets in every round for which the value of the prize exceeds <span class="math-container">$50$</span> times the cost of a ticket. This would seem to me to be reasonable if the prizes are all cash, but might be problematic if they're not, because the nominal value of a prize might be much more than <em>you</em> would ever be willing to pay for it.</p>
<p>If one of the prizes, for instance, were <span class="math-container">$\ \$250\ $</span> worth of cricket lessons from <a href="https://en.wikipedia.org/wiki/Sachin_Tendulkar" rel="nofollow noreferrer">Sachin Tendulkar</a>, for which you'd have to come up with your own travel expenses to India to take advantage of, and the cost of each raffle ticket were <span class="math-container">$\$3$</span>, you'd have to ask yourself whether you'd be willing to buy such a set of cricket lessons for only <span class="math-container">$\$150$</span> and then travel to India to receive them. If not, then my advice would be to refrain from buying any tickets in the round for which that was the prize.</p></li>
<li><p>Just knowing your expected gain and its variance should be sufficient for you to determine what your optimum strategy is, and I don't think you'll gain much more by knowing the complete distribution of the number of prizes you will win. It is nevertheless possible to calculate that distribution for the two scenarios you mention, which I therefore do below.</p></li>
<li><p>If you have <span class="math-container">$\ t_i\ $</span> tickets in round <span class="math-container">$\ i\ $</span> for <span class="math-container">$\ i=1,2,\dots,20\ $</span>, and <span class="math-container">$\ W\ $</span> is the number of prizes you win, then
<span class="math-container">$$
P(W=w)=\sum_{S\subseteq\{1,2,\dots,20\}\\
\hspace{1em} |S|=w}\prod_{i\in S}\frac{t_i}{50} \prod_{j\not\in S}\left(1-\frac{t_j}{50}\right)\ .
$$</span>
I doubt if this expression can be simplified much for general <span class="math-container">$\ t_i\ $</span>, and the sum in it has <span class="math-container">$\ 2^{20}\ $</span> terms. The sum would thus be infeasible to calculate by hand, although it would be no problem for a modern computer. If you have the same number <span class="math-container">$\ t\ $</span> of tickets in every round, however, the distribution simplifies to the binomial:
<span class="math-container">$$
P(W=w)={20\choose w}\left(\frac{t}{50}\right)^w \left(1-\frac{t}{50}\right)^{20-w}
$$</span></p></li>
<li><p>If you have a total of <span class="math-container">$\ n\ $</span> tickets, randomly distributed over all rounds, then <span class="math-container">$\ t_1,t_2,\dots,t_{20}\ $</span> will be random variables with the following distribution:
<span class="math-container">\begin{align}
P\left(t_1=\tau_1,t_2=\tau_2,\dots,t_{20}=\tau_{20}\right)&= \frac{\prod\limits_{k=1}^{20}{50\choose\tau_k}}{1000\choose n},
\end{align}</span>
for <span class="math-container">$\ 0\le\tau_1,\tau_2,\dots,\tau_{20}\le50\ $</span> and <span class="math-container">$\ \sum\limits_{i=1}^{20}\tau_i=n\ $</span>. These random variables are not independent, however, so your approximation, <span class="math-container">$\ 1-\left(\frac{1000-n}{1000}\right)^{20}\ $</span>, for the probability of winning at least one prize is certainly not exact. If <span class="math-container">$\ n=1\ $</span>, for instance, it gives the probability as approximately <span class="math-container">$\ 0.0198\ $</span>, whereas the true probability is <span class="math-container">$\ \frac{1}{50}=0.02\ $</span>.</p>
<p>Given that the value of <span class="math-container">$\ t_i\ $</span> is equal to <span class="math-container">$\ \tau_i\ $</span> for all <span class="math-container">$\ i\ $</span>, the probability of your <em>not</em> winning the prize for round <span class="math-container">$\ i\ $</span> is <span class="math-container">$\ 1-\frac{\tau_i}{50}\ $</span>, and the probability of winning a least one prize is therefore
<span class="math-container">$$
1-\prod_{i=1}^{20}\left(1-\frac{\tau_i}{50}\right)\ ,
$$</span>
i.e. one minus the probability that you don't win the prize for any round. Your exact probability of winning at least one prize is obtained by multiplying this by the probability that <span class="math-container">$\ t_i=\tau_i\ $</span> for all <span class="math-container">$\ i\ $</span> and summing over all possible values of the quantities <span class="math-container">$\ \tau_i\ $</span>:
<span class="math-container">\begin{align}
1-\frac{1}{1000\choose n}&\sum_{\tau:\sum_{k=1}^{20}\tau_k=n\\
0\le\tau_k\le50}\prod\limits_{k=1}^{20}{50\choose\tau_k}\prod_{j=1}^{20}\left(1-\frac{\tau_j}{50}\right)\\
&=1-\frac{1}{1000\choose n}\sum_{\tau:\sum_{k=1}^{20}\tau_k=n\\
0\le\tau_k\le49} \prod_{j=1}^{20} {49\choose\tau_j}\ .
\end{align}</span>
While the sum in this expression might look daunting for <span class="math-container">$\ n\ $</span> far away from the extremes of the range <span class="math-container">$0$</span>-<span class="math-container">$1000$</span>, there is nevertheless a recursive procedure for evaluating it quite efficiently over the whole of that range.</p>
<p>The following table gives the approximate probabilities of winning at least one prize for various values of <span class="math-container">$\ n\ $</span> using both the exact formula and the approximation <span class="math-container">$\ 1-\left(\frac{1000-n}{1000}\,\right)^{20}\ $</span>. The vulgar fractions in the first three columns of the first row are exact probabilities.
<span class="math-container">\begin{array}{c|c|c|}
n& 1&2&3\\
\hline
\text{exact}&\frac{1}{50}=0.02&\frac{1,979}{49,950}\approx0.0396&\frac{489,077}{8,308,350}\approx0.0589\\
\hline
\text{approximate}&0.0198&0.0392&0.0583\\
\hline
\end{array}</span>
<span class="math-container">\begin{array}{c|c|c|}n&4&5&6&7&8\\
\hline\text{exact} &0.0777&0.0963&0.1144&0.1322&0.1500\\
\hline\text{approximate}& 0.0770&0.0954&0.1134&0.1311&0.1484\\
\hline
\end{array}</span>
<span class="math-container">\begin{array}{c|c|c|}
\hspace{-0.5em} n&9&10&50&100&500\\
\hline
\hspace{-0.5em}\text{exact}&0.1669&0.1837&0.6451& 0.8810&0.99999921\\
\hline
\hspace{-0.5em}\text{approximate}&0.1654&0.1821&0.6415&0.8784&0.99999905\\
\hline
\end{array}</span></p>
<p>Thus, the approximate formula gives a reasonably good estimate over this range. For <span class="math-container">$\ n=500\ $</span>, the exact and approximate probabilities are <span class="math-container">$\ 1-7.86\times10^{-7}\ $</span> and <span class="math-container">$\ 1-9.54\times 10^{-7}\ $</span>, respectively. Although the approximate probability of <em>not</em> winning a prize, <span class="math-container">$\ 9.54\times 10^{-7}\ $</span>, is thus in error here by more than <span class="math-container">$20\%$</span>, that error is of little consequence because the true probability itself is so small.</p>
<p>More generally, the distribution of the number of prizes you win in this case is given by
<span class="math-container">\begin{align}
P(W&=w)=\\
&\frac{1}{1000\choose n}\sum_{\tau:\sum_{k=1}^{20}\tau_k=n\\
0\le\tau_k\le50}\prod\limits_{k=1}^{20}{50\choose\tau_k}\sum_{S\subseteq\{1,2,\dots,20\}\\
\hspace{1em} |S|=w}\prod_{i\in S}\frac{\tau_i}{50} \prod_{j\not\in S}\left(1-\frac{\tau_j}{50}\right)\\
=&\frac{1}{1000\choose n}\sum_{S\subseteq\{1,2,\dots,20\}\\
\hspace{1em} |S|=w} \sum_{\tau:\sum_{k=1}^{20}\tau_k=n\\
0\le\tau_k\le50}\prod\limits_{k=1}^{20}{50\choose\tau_k}\prod_{i\in S}\frac{\tau_i}{50} \prod_{j\not\in S}\left(1-\frac{\tau_j}{50}\right)\\
=&\frac{{20\choose w}}{1000\choose n}\sum_{\tau:\sum_{k=1}^{20}\tau_k=n\\
0\le\tau_k\le50}\prod\limits_{k=1}^{20}{50\choose\tau_k}\prod_{i=1}^w\frac{\tau_i}{50} \prod_{j=w+1}^{20}\left(1-\frac{\tau_j}{50}\right)\\
=& \frac{{20\choose w}}{1000\choose n}\sum_{\sigma:\sum_{i=1}^{20}\sigma_i=n-w\\
0\le\sigma_i\le49}\prod_{i=1}^{20}{49\choose\sigma_i}\
\end{align}</span>
where the last step comes from the identities <span class="math-container">$\ \displaystyle \prod\limits_{i=1}^w{50\choose\tau_i}\frac{\tau_i}{50}=$$\displaystyle\prod\limits_{i=1}^w{49\choose\tau_i-1}\ $</span> and <span class="math-container">$\ \displaystyle\prod_{j=w+1}^{20} {50\choose\tau_j}\left(1-\frac{\tau_j}{50}\right)=$$\displaystyle \prod\limits_{j=w+1}^{20}{49\choose\tau_j}\ $</span>, and setting <span class="math-container">$\ \sigma_i=\tau_i-1\ $</span> for <span class="math-container">$\ 1\le i\le w\ $</span> and <span class="math-container">$\ \sigma_i=\tau_i\ $</span> for <span class="math-container">$\ w+1\le i\le 20\ $</span>.</p>
<p>Note that the probability of your winning <span class="math-container">$\ w\ $</span> prizes when you buy <span class="math-container">$\ n\ $</span> tickets is just <span class="math-container">$\ 20\choose w\ $</span> times the probability of your winning <em>no</em> prizes when you buy <span class="math-container">$\ n-w\ $</span> tickets.</p></li>
</ul>
|
688,711 | <p>Assume the density of air <span class="math-container">$\rho$</span> is given by</p>
<p><span class="math-container">$\rho(r)=\rho_0 e^{-(r-R_0)/h_0}$</span> for <span class="math-container">$r\ge R_0$</span></p>
<p>where <span class="math-container">$r$</span> is the distance from the centre of the earth, <span class="math-container">$R_0$</span> is the radius of the earth in meters, <span class="math-container">$\rho_0=1.2\,\mathrm{kg/m^3}$</span> and <span class="math-container">$h_0=10^4\,\mathrm{m}$</span></p>
<p>Assuming the atmosphere extends to infinity, calculate the mass of the portion of the earth's atmosphere north of the equator and south of <span class="math-container">$30^\circ$</span>N latitude.</p>
<p>How do I even start this problem? Do I need to convert it into spherical coordinates? But then what limits do I use for the integration?</p>
| copper.hat | 27,978 | <p>The problem is simplified by the fact that the density is independent of the latitude and longitude. So we can compute the total mass and the multiply this by the fraction of area in the specified region over the total area.</p>
<p>The total enclosed volume at radius $r$ is $V(r) = {4 \over 3} \pi r^3$, hence we have
${d V(r) \over dr} = 4 \pi r^2$ (the surface area).</p>
<p>The mass of air between radius $r$ and $r+\delta$ is approximately $m(r+\delta)-m(r) \approx\rho(r) {d V(r) \over dr} \delta$, and so we see that ${d m(r) \over dr} = \rho(r) {d V(r) \over dr}$, from which we obtain
$m(\infty)-m(R_0) = \int_{R_0}^\infty \rho(r) {d V(r) \over dr} dr = 4 \pi \int_{R_0}^\infty \rho(r) r^2 dr$.</p>
<p>This gives the total mass of air. To find the portion above the specified area, we need to find the fraction of the Earth's surface area represented by the area between $0^\circ$ and $30^\circ$N.</p>
<p>The area between $0$ and $\phi$ is given by $\int_0^\phi (2 \pi R_0 \cos \alpha) R_0 \, d \alpha = 2 \pi R_0^2 \sin \phi$, from which we get the fraction between $0^\circ$ and $30^\circ$N to be ${ \sin {\pi \over 6} \over 2} = {1 \over 4}$.</p>
<p>Hence the mass of air above the specified region is
$\pi \int_{R_0}^\infty \rho(r) r^2 dr$.</p>
<p>Assuming I haven't made a mistake, this gives:</p>
<blockquote class="spoiler">
<p> $\pi \int_{R_0}^\infty \rho(r) r^2 dr = \rho_0 \pi h_0(R_0^2+2 R_0 h_0 + 2 h_0^2)$.</p>
</blockquote>
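<p>The closed form in the spoiler can be checked numerically. Here is a quick sketch; the numerical value used for $R_0$ is an illustrative assumption (any positive radius works for the check):</p>

```python
import math

rho0, h0, R0 = 1.2, 1.0e4, 6.371e6   # R0 chosen for illustration only

# Closed form from the answer: pi * int_{R0}^inf rho(r) r^2 dr
closed = rho0 * math.pi * h0 * (R0**2 + 2*R0*h0 + 2*h0**2)

# Composite Simpson's rule after substituting s = r - R0, truncated at
# 20 scale heights (the integrand decays like e^{-s/h0})
def f(s):
    return rho0 * math.exp(-s / h0) * (s + R0)**2

n, b = 40000, 20 * h0
h = b / n
numeric = math.pi * (h / 3) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * f(i * h) for i in range(n + 1)
)

print(abs(numeric - closed) / closed)   # tiny; dominated by the e^{-20} truncation tail
```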
|
2,904,359 | <p><strong>Definition.</strong> A formula <span class="math-container">$\phi(x,a)$</span> <em>divides</em> over a set <span class="math-container">$B$</span> if there are <span class="math-container">$k\in\mathbb{N}$</span> and a sequence <span class="math-container">$(a_i)_{i<\omega}$</span> such that</p>
<p>(1) <span class="math-container">$\text{tp}(a/B)=\text{tp}(a_i/B)$</span>, for all <span class="math-container">$i<\omega$</span>;</p>
<p>(2) <span class="math-container">$\{\phi(x,a_i)\}_{i<\omega}$</span> is <span class="math-container">$k$</span>-inconsistent.</p>
<hr />
<p><strong>Definition.</strong> A formula <span class="math-container">$\phi(x,a)$</span> <em>forks</em> over a set <span class="math-container">$B$</span> if there are <span class="math-container">$n\in\mathbb{N}$</span> and formulas <span class="math-container">$\psi_1(x,b_1),\dots, \psi_n(x,b_n)$</span> such that</p>
<p>(1) for each <span class="math-container">$i=1,\dots, n$</span>, the formula <span class="math-container">$\psi_i(x,b_i)$</span> divides over <span class="math-container">$B$</span>;</p>
<p>(2) <span class="math-container">$\phi(x,a)\models \bigvee_{i=1}^{n} \psi_i(x,b_i)$</span>.</p>
<hr />
<p>It is clear that dividing implies forking.</p>
<p><strong>Question.</strong> Does forking always imply dividing? If no, is there a formula that forks but does not divide?</p>
| Alex Kruckman | 7,062 | <p>I hope you accept Mostafa's answer, since he described the canonical example of a formula which forks but does not divide (<span class="math-container">$x=x$</span> in the circular order). I believe this example is originally due to Byunghan Kim, in his thesis on simple theories. </p>
<p>But since you asked for more examples in the comments, I'll just add some more examples and references. </p>
<ul>
<li><p>Consider the two-sorted structure <span class="math-container">$(X,\mathcal{P}(X); \in)$</span> where <span class="math-container">$X$</span> is an infinite set, <span class="math-container">$\mathcal{P}(X)$</span> is its powerset, and the only symbol in the language is the membership relation <span class="math-container">$\in$</span> between the two sorts. Let <span class="math-container">$A$</span> be an infinite and coinfinite subset of <span class="math-container">$X$</span>, and let <span class="math-container">$B$</span> be its complement. Then <span class="math-container">$x = x$</span> implies <span class="math-container">$(x\in A)\lor (x\in B)$</span>, and both <span class="math-container">$(x\in A)$</span> and <span class="math-container">$(x\in B)$</span> divide over <span class="math-container">$\emptyset$</span>. Why? Any two infinite and coinfinite subsets of <span class="math-container">$X$</span> have the same type over the empty set (they are even conjugate by an automorphism), and <span class="math-container">$X$</span> can be partitioned into infinitely many pairwise disjoint infinite sets. So <span class="math-container">$x = x$</span> forks but does not divide over <span class="math-container">$\emptyset$</span>. </p></li>
<li><p>In both the circular order example and the powerset example, the formula which forks but doesn't divide is <span class="math-container">$x=x$</span>, considered over <span class="math-container">$\emptyset$</span>. But many variations on these examples can be cooked up to satisfy different properties, e.g. to show that forking <span class="math-container">$\neq$</span> dividing in general even over models. See Section 5 of <em><a href="https://arxiv.org/abs/0906.2806" rel="nofollow noreferrer">Forking in NTP<span class="math-container">$_2$</span> theories</a></em> by Artem Chernikov and Itay Kaplan (and note that Example 5.1, which they partially credit to Martin Ziegler, is really a combination of the circular order and the powerset examples). This paper is also the reference for the theorem mentioned by tomasz that forking <span class="math-container">$=$</span> dividing over models in NTP<span class="math-container">$_2$</span> theories. </p></li>
</ul>
<p>When thinking about the difference between forking and dividing, we should be careful about whether we're talking about formulas or complete types. That is: </p>
<ul>
<li>"forking <span class="math-container">$=$</span> dividing for formulas" is the statement that for all formulas <span class="math-container">$\varphi(x,y)$</span>, all sets <span class="math-container">$A$</span>, and all parameters <span class="math-container">$b$</span>, if <span class="math-container">$\varphi(x,b)$</span> forks over <span class="math-container">$A$</span>, then <span class="math-container">$\varphi(x,b)$</span> divides over <span class="math-container">$A$</span>. </li>
<li>"forking <span class="math-container">$=$</span> dividing for complete types" is the statement that for every complete type <span class="math-container">$p(x)\in S(B)$</span> and every <span class="math-container">$A\subseteq B$</span>, if <span class="math-container">$p(x)$</span> forks over <span class="math-container">$A$</span>, then <span class="math-container">$p(x)$</span> divides over <span class="math-container">$A$</span>. </li>
</ul>
<p>Forking <span class="math-container">$=$</span> dividing for formulas implies forking <span class="math-container">$=$</span> dividing for complete types. Why? Suppose <span class="math-container">$p(x)\in S(B)$</span> forks over <span class="math-container">$A\subseteq B$</span>. Then <span class="math-container">$p(x)$</span> contains a formula <span class="math-container">$\varphi(x,b)$</span> which forks over <span class="math-container">$A$</span>. By forking <span class="math-container">$=$</span> dividing for formulas, <span class="math-container">$\varphi(x,b)$</span> divides over <span class="math-container">$A$</span>, so <span class="math-container">$p(x)$</span> divides over <span class="math-container">$A$</span>. </p>
<p>But the converse does not hold. Specifically, you might have a formula which forks but does not divide, but nevertheless any complete type containing it divides. </p>
<p>The circular order example and the powerset example above both give examples of forking <span class="math-container">$\neq$</span> dividing for complete types. In the circular order example, there is a unique <span class="math-container">$1$</span>-type <span class="math-container">$p(x)$</span> over <span class="math-container">$\emptyset$</span>, and in the powerset example, there is a unique <span class="math-container">$1$</span>-type <span class="math-container">$p(x)$</span> in the "<span class="math-container">$X$</span> sort" over <span class="math-container">$\emptyset$</span>, and in both examples <span class="math-container">$p(x)$</span> forks but does not divide over <span class="math-container">$\emptyset$</span>. Every example I know in which forking <span class="math-container">$\neq$</span> dividing for complete types is closely related to one of these two examples. (That's not to say there aren't other quite different examples - I just don't know them.)</p>
<p>But I do know lots of other of examples of theories in which forking <span class="math-container">$=$</span> dividing for complete types, but forking <span class="math-container">$\neq$</span> dividing for formulas. This sort of behavior seems to be very common in theories without the strict order property which are not simple. </p>
<ul>
<li><p>In <em><a href="https://arxiv.org/abs/1401.1570" rel="nofollow noreferrer">Forking and dividing in Henson graphs</a></em>, Gabe Conant showed that forking <span class="math-container">$=$</span> dividing for complete types in the theory <span class="math-container">$T_n$</span> of the generic <span class="math-container">$K_n$</span>-free graph, for <span class="math-container">$n\geq 3$</span>, but that every such theory has a <em>formula</em> which forks but does not divide. (These theories are all SOP<span class="math-container">$_3$</span> but NSOP<span class="math-container">$_4$</span>.) The example in <span class="math-container">$T_3$</span> is <span class="math-container">$\bigvee_{1\leq i<j\leq 4} (xEb_i\land xEb_j)$</span>, where <span class="math-container">$b_1,b_2,b_3,b_4$</span> is any <span class="math-container">$4$</span>-tuple with no edges between any of the <span class="math-container">$b_i$</span>. Each formula <span class="math-container">$(xEb_i\land xEb_j)$</span> divides over <span class="math-container">$\emptyset$</span>, but the disjunction of all <span class="math-container">$6$</span> of these formulas does not divide over <span class="math-container">$\emptyset$</span>. </p></li>
<li><p>In the generic binary function (the model companion of the empty theory in the language with a single binary function symbol <span class="math-container">$f$</span>), the formula <span class="math-container">$\varphi(x;b_1,b_2)$</span> given by <span class="math-container">$(f(x,b_1) = b_2)\lor x = b_1$</span> forks but does not divide over <span class="math-container">$\emptyset$</span> whenever <span class="math-container">$b_2$</span> is not in the substructure <span class="math-container">$\langle b_1\rangle$</span> generated by <span class="math-container">$b_1$</span>. This is because for every indiscernible sequence <span class="math-container">$(b_1^ib_2^i)_{i\in \omega}$</span>, either the sequence <span class="math-container">$(b_1^i)$</span> is constant, in which case <span class="math-container">$\{x = b_1^i\mid i\in \omega\}$</span> is consistent, or the <span class="math-container">$b_1^i$</span> are pairwise distinct, in which case <span class="math-container">$\{f(x,b_1^i) = b_2^i\mid i\in \omega\}$</span> is consistent. Nick Ramsey and I showed that forking <span class="math-container">$=$</span> dividing for complete types in this theory (and that the theory is NSOP<span class="math-container">$_1$</span>) in our paper <em><a href="https://arxiv.org/abs/1706.06616" rel="nofollow noreferrer">Generic expansion and Skolemization in NSOP<span class="math-container">$_1$</span> theories</a></em>.</p></li>
<li><p>As far as I know, it could be that for all NSOP<span class="math-container">$_1$</span> unsimple theories, forking <span class="math-container">$=$</span> dividing for complete types, but there is a formula which forks but does not divide. ("As far as I know" doesn't mean "I conjecture"! I haven't even considered this question in every known example.) Certainly examples which are very similar to the previous one occur in other NSOP<span class="math-container">$_1$</span> unsimple theories. For another example with a similar flavor, see Proposition 4.23 in my paper <em><a href="https://arxiv.org/abs/1709.09626" rel="nofollow noreferrer">Independence in generic incidence structures</a></em> with Gabe Conant. </p></li>
</ul>
|
1,997,341 | <p>We know that the binomial theorem and expansion extends to powers which are non-integers. </p>
<p>For integer powers the expansion can be proven easily as the expansion is finite. However what is the proof that the expansion also holds for fractional powers? </p>
<p>A simple an intuitive approach would be appreciated.</p>
| SirXYZ | 375,099 | <p>Here's a proof:</p>
<p>Let <span class="math-container">$f(a)$</span> denote the binomial expansion of <span class="math-container">$(1+x)^a$</span> and <span class="math-container">$f(b)$</span> denote the same for <span class="math-container">$(1+x)^b$</span>. Where</p>
<p><span class="math-container">$(1+x)^a = 1 + ax.....$</span></p>
<p><span class="math-container">$(1+x)^b = 1 + bx....$</span></p>
<p>On multiplying the two binomial series together (for <span class="math-container">$|x|<1$</span>, where both converge absolutely, so the term-by-term multiplication is justified), the product is another series in ascending powers of <span class="math-container">$x$</span>, and each coefficient of the product is a polynomial in <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
<p>Since a polynomial identity that holds for all positive integers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> must hold for all values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, to determine this invariable form of the product we may give <span class="math-container">$a$</span> and <span class="math-container">$b$</span> positive integral values for convenience.</p>
<p>Then</p>
<p><span class="math-container">$f(a)×f(b)=(1+x)^{(a+b)}$</span></p>
<p>But when a and b are positive integers.... the expansion is</p>
<p><span class="math-container">$1 + (a+b)x +.....$</span></p>
<p>This then is the form of the product of <span class="math-container">$f(a)×f(b)$</span> <em>in all cases</em>, whatever the values of a and b be; and in agreement with our previous notation it may be denoted by <span class="math-container">$f(a+b)$</span>; therefore for all values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
<p><span class="math-container">$f(a)×f(b)=f(a+b)$</span></p>
<p>Also</p>
<p><span class="math-container">$f(a)×f(b)×f(p) = f(a+b+p)...$</span></p>
<p>Therefore</p>
<p><span class="math-container">$f(a)×f(b)×f(p)... \text{to k factors} = f(a+b+p.... \text{to k terms}).$</span></p>
<p>Let each of these quantities a,b,p,.... be equal to <span class="math-container">$(c/k)$</span>, where c and k are positive integers.</p>
<p>Hence</p>
<p><span class="math-container">$f(c/k)^k = f(c)$</span></p>
<p>But since c is a positive integer, </p>
<p><span class="math-container">$f(c)=(1+x)^c$</span></p>
<p><span class="math-container">$(1+x)^c = f(c/k)^k$</span></p>
<p>Therefore</p>
<p><span class="math-container">$(1+x)^{c/k} = f(c/k)$</span></p>
<p>And </p>
<p><span class="math-container">$f(c/k) = 1 + (c/k)x ....$</span></p>
<p>Hence we get</p>
<p><span class="math-container">$(1+x)^{c/k} = 1 + (c/k)x .....$</span></p>
<p>This proves the binomial theorem for any positive fractional index.</p>
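<p>The result can also be checked numerically for a particular fractional index; a small sketch (the values of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> are arbitrary, with <span class="math-container">$|x|<1$</span> for convergence):</p>

```python
def binom_series(a, x, terms=80):
    """Partial sum of the binomial series (1+x)^a = sum_n C(a, n) x^n."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff * x**n
        coeff *= (a - n) / (n + 1)   # C(a, n+1) = C(a, n) * (a - n) / (n + 1)
    return total

a, x = 0.5, 0.3
print(abs(binom_series(a, x) - (1 + x)**a))   # agrees to machine precision
```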
|
1,883,765 | <p>In the following situation I know how to find the <code>red</code> distance as <code>(diagonal - diamater) / 2</code> however I'm not sure how to find the <code>yellow</code> and <code>green</code> distances.</p>
<p><a href="https://i.stack.imgur.com/RMNLh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RMNLh.png" alt="enter image description here"></a></p>
| Daniel | 221,735 | <p>they are equal, if red is $r$ then green\yellow is $r/\sqrt{2}$</p>
|
2,939,605 | <p>The given task goes as follows:</p>
<blockquote>
<p>Show that <span class="math-container">$ f: \mathbb{R} \longrightarrow \mathbb{R}$</span> defined by <span class="math-container">$f(x) = \sqrt{1 + x^2} $</span> is not a polynomial function.</p>
</blockquote>
<p>I tried this approach - if <span class="math-container">$f(x)$</span> is a <span class="math-container">$n$</span>-degree polynomial function, then the <span class="math-container">$(n+1)$</span>-st derivative equals to 0 and I was trying to determine the <span class="math-container">$k$</span>-th derivative of <span class="math-container">$f(x)$</span> (and show it differs from 0 for any <span class="math-container">$k$</span>) but without success. Since <span class="math-container">$f(x)$</span> is continuous and defined over whole R domain, I have no idea how to carry on. Any ideas? </p>
| Peter Szilas | 408,605 | <p>Assume <span class="math-container">$\sqrt{x^2+1}= a_nx^n+...a_1x +a_0$</span> , with an <span class="math-container">$a_n \not =0$</span>.</p>
<p>Since <span class="math-container">$\sqrt{x^2+1}$</span> is an even function, the polynomial has to be even, i.e. only even powers occur.</p>
<p>Set <span class="math-container">$y^2:=x^2+1$</span>, <span class="math-container">$y \ge 1$</span>, then</p>
<p><span class="math-container">$y = a_{2k}(y^2-1)^{k} + \dots +a_0.$</span></p>
<p>RHS: A polynomial in <span class="math-container">$y$</span> with even powers.</p>
<p>LHS: <span class="math-container">$y$</span>.</p>
<p>A contradiction.</p>
|
2,634,657 | <p>My question is: for <span class="math-container">$R>0$</span>, can we choose a family of functions <span class="math-container">$\eta_R\in C_c^1(\mathbf{R}^N)$</span> satisfying
<span class="math-container">$0\leq\eta_R\leq 1$</span> in <span class="math-container">$\mathbf{R}^N$</span>, <span class="math-container">$\eta_R=1$</span> in <span class="math-container">$B_R(0)$</span> and
<span class="math-container">$\eta_R=0$</span> in <span class="math-container">$\mathbf{R}^N \setminus B_{2R}(0)$</span> with <span class="math-container">$|\nabla\eta_R|\leq\frac{C}{R}$</span> for some constant <span class="math-container">$C>0$</span> independent of <span class="math-container">$R$</span>.</p>
<p>I know these type of functions can be chosen for any <span class="math-container">$R$</span>, but don't know whether there is a choice of such functions for which the constant is independent of <span class="math-container">$R$</span>, as asked in the question.</p>
<p>Can anyone give a proper explanation to this question?</p>
<p>Thanks...</p>
| username | 948,485 | <p>The usual construction for such functions is to define the cut-off as a radial function, that is, <span class="math-container">$$\eta_R(x) = \eta\left(\frac{|x|}R\right)$$</span> where <span class="math-container">$\eta\in C^\infty(\mathbb R)$</span> is monotone and given by
<span class="math-container">$$
\eta(x) = \begin{cases}1 &\text{ for }x\leq1 \\ 0&\text{ for } x\geq2
\end{cases}
$$</span>
The dependence on <span class="math-container">$R$</span> of the derivative comes from the chain rule: <span class="math-container">$|\nabla \eta_R(x)| = \frac1R \left|\eta'\!\left(\frac{|x|}{R}\right)\right| \le \frac{C}{R}$</span> with <span class="math-container">$C=\sup|\eta'|$</span>. The standard construction for <span class="math-container">$\eta$</span> is to mollify an indicator function. You can also do it <a href="https://math.stackexchange.com/a/4365567/948485">this way</a>, to minimise the constant <span class="math-container">$C$</span>.</p>
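<p>A concrete choice of <span class="math-container">$\eta$</span> makes the uniform bound visible numerically. The smooth step below (built from <span class="math-container">$e^{-1/t}$</span>) is just one standard possibility; the point is that <span class="math-container">$R\,\sup|\nabla\eta_R|$</span> comes out the same for every <span class="math-container">$R$</span>:</p>

```python
import math

def g(t):                      # smooth on R, vanishes for t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def eta(s):                    # monotone, eta = 1 for s <= 1, eta = 0 for s >= 2
    return g(2.0 - s) / (g(2.0 - s) + g(s - 1.0))

def sup_grad(R, samples=2000):
    """Estimate sup |eta_R'| on [R, 2R] by central differences, eta_R(x) = eta(x/R)."""
    h = 1e-6 * R               # finite-difference step scaled with R
    return max(
        abs(eta((x + h) / R) - eta((x - h) / R)) / (2 * h)
        for x in (R + R * i / samples for i in range(samples + 1))
    )

# R * sup|eta_R'| is (up to sampling error) the same constant C for every R
vals = [R * sup_grad(R) for R in (1.0, 10.0, 100.0)]
print(vals)   # three nearly identical numbers
```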
|
873,224 | <p>I'm not mathematically inclined, so please be patient with my question.</p>
<p>Given </p>
<ul>
<li><p>$(x_0, y_0)$ and $(x_1, y_1)$ as the endpoints of a cubic Bezier curve.</p></li>
<li><p>$(c_x, c_y)$ and r as the centerpoint and the radius of a circle.</p></li>
<li><p>$(x_0, y_0)$ and $(x_1, y_1)$ are on the circle.</p></li>
<li><p>if it makes the calculation simpler, it's safe to assume the arc is less than or equal to $\frac{\pi}{2}$.</p></li>
</ul>
<p>How do I calculate the two control points of the Bezier curve that best fits the arc of the circle from $(x_0, y_0)$ to $(x_1, y_1)$?</p>
| robjohn | 13,854 | <p>Let $(x,y)^R=(-y,x)$ represent rotation by $\pi/2$ counterclockwise and
$$
\gamma(t)=(1-t)^3p_0+3t(1-t)^2p_1+3t^2(1-t)p_2+t^3p_3
$$
define a cubic bezier with control points $\{p_0,p_1,p_2,p_3\}$.</p>
<p>Suppose $p_0=(x_0,y_0)$, $p_3=(x_1,y_1)$, and $c=(c_x,c_y)$ are given so that $|p_0-c|=|p_3-c|=r$ ($p_3$ is counterclockwise from $p_0$). Then $p_1=p_0+\alpha(p_0-c)^R$ and $p_2=p_3-\alpha(p_3-c)^R$ where
$$
\alpha=\frac43\tan\left(\frac14\cos^{-1}\left(\frac{(p_0-c)\cdot(p_3-c)}{r^2}\right)\right)
$$
For a quarter of a circle, $\alpha=\frac43(\sqrt2-1)$, and $\gamma$ is no more than $0.00027$ of the radius of the circle off.</p>
<p>Here is a plot of $\gamma$ in red over the quarter circle in black. We really don't see the circle since it is no more than $0.1$ pixels off from $\gamma$ when the radius is $400$ pixels.</p>
<p>$\hspace{3.5cm}$<img src="https://i.stack.imgur.com/WCjOv.png" alt="enter image description here"></p>
<hr>
<p><strong>Computation of $\boldsymbol{\alpha}$</strong></p>
<p>Looking at an arc with an angle of $\theta=\cos^{-1}\left(\frac{(p_0-c)\cdot(p_3-c)}{r^2}\right)$</p>
<p>$\hspace{1.5cm}$<img src="https://i.stack.imgur.com/kUJCA.png" alt="enter image description here"></p>
<p>we see that the distance from $c$ to the middle of the arc is
$$
r\cos(\theta/2)+\frac34\alpha r\sin(\theta/2)
$$
we wish to choose $\alpha$ so that this is equal to $r$. Solving for $\alpha$ gives
$$
\begin{align}
\alpha
&=\frac43\frac{1-\cos(\theta/2)}{\sin(\theta/2)}\\
&=\frac43\tan(\theta/4)
\end{align}
$$</p>
<hr>
<p><strong>A Slight Improvement</strong></p>
<p>Using a circle of radius $1$, the maximum error in radius produced using $\alpha=\frac43\tan(\theta/4)$ is approximately
$$
0.0741\cos^4(\theta/4)\tan^6(\theta/4)
$$
and the error is always positive; that is, the cubic spline never passes inside the circle. Reducing $\alpha$ reduces the midpoint distance by $\frac34\sin(\theta/2)=\frac32\tan(\theta/4)\cos^2(\theta/4)$ times as much, so to distribute the error evenly between the positive and negative, a first guess, assuming that the amplitude of the radius is unchanged, would be to reduce $\alpha$ by $0.0247\cos^2(\theta/4)\tan^5(\theta/4)$.</p>
<p>A bit of investigation shows that, when equalizing the positive and negative swings of the radius, the amplitude increases and that
$$
\alpha=\frac43\tan(\theta/4)-0.03552442\cos^2(\theta/4)\tan^5(\theta/4)
$$
gives pretty even distribution of the error between positive and negative for $\theta\le\pi/2$. The maximum error, both positive and negative, is approximately
$$
0.0533\cos^4(\theta/4)\tan^6(\theta/4)
$$
When $\theta=\pi/2$, this agrees with <a href="http://spencermortensen.com/articles/bezier-circle/" rel="noreferrer">the article</a> mentioned by bubba in comments.</p>
<p>Note however, that in minimizing the radial error from the circle, the actual variation in radius is increased. Using the simple formula for $\alpha$, which puts the cubic bezier outside the circle, the radius varies by $0.0741\cos^4(\theta/4)\tan^6(\theta/4)$. However, when we minimize the error, the radial variation increases to $0.1066\cos^4(\theta/4)\tan^6(\theta/4)$.</p>
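<p>For the quarter circle, the stated $0.00027$ bound is easy to confirm by sampling. A sketch for the unit circle with $p_0=(1,0)$, $p_3=(0,1)$ and $c=(0,0)$:</p>

```python
import math

theta = math.pi / 2
alpha = (4.0 / 3.0) * math.tan(theta / 4)      # = 4/3 (sqrt(2) - 1)

p0, p3 = (1.0, 0.0), (0.0, 1.0)
# p1 = p0 + alpha*(p0 - c)^R and p2 = p3 - alpha*(p3 - c)^R, with (x, y)^R = (-y, x)
p1 = (p0[0] - alpha * p0[1], p0[1] + alpha * p0[0])    # (1, alpha)
p2 = (p3[0] + alpha * p3[1], p3[1] - alpha * p3[0])    # (alpha, 1)

def gamma(t):
    a, b = 1 - t, t
    ws = (a**3, 3*b*a**2, 3*a*b**2, b**3)    # Bernstein weights
    pts = (p0, p1, p2, p3)
    return (sum(w * p[0] for w, p in zip(ws, pts)),
            sum(w * p[1] for w, p in zip(ws, pts)))

errs = [math.hypot(*gamma(i / 10000)) - 1.0 for i in range(10001)]
print(max(errs), min(errs))   # max about 0.00027; never negative, so the curve stays outside
```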
|
2,655,178 | <p>I am asked to find the equation of a cubic function that passes through the origin. It also passes through the points $(1, 3), (2, 6),$ and $(-1, 10)$. </p>
<p>I have walked through many answers for similar questions that suggest to use a substitution method by subbing in all the points and writing in terms of variables. I have tried that but I don't really know where to take it from there or what variables to write it as. </p>
<p>If anyone could provide their working out for this problem it would be extremely enlightening. </p>
| Mark Bennet | 2,906 | <p>Given four points $(x_i,y_i)$ consider the functions $$f_1(x)=\frac {(x-x_2)(x-x_3)(x-x_4)}{(x_1-x_2)(x_1-x_3)(x_1-x_4)}$$ so that $f_1(x_1)=1$ and $f_1(x_i)=0, i\neq 1$, and similarly $f_2, f_3, f_4$. Note that the $f_i$ are cubic in $x$.</p>
<p>Then $p(x)=y_1f_1(x)+y_2f_2(x)+y_3f_3(x)+y_4f_4(x)$ is at most a cubic polynomial and passes through the four given points.</p>
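<p>For the four points in the question this can be carried out exactly; a sketch using rational arithmetic:</p>

```python
from fractions import Fraction as F

pts = [(F(0), F(0)), (F(1), F(3)), (F(2), F(6)), (F(-1), F(10))]

def p(x):
    """Evaluate sum_i y_i * f_i(x) with the Lagrange basis f_i defined above."""
    x = F(x)
    total = F(0)
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Passes through all four points, and by uniqueness of the interpolating cubic
# it equals p(x) = -13/6 x^3 + 13/2 x^2 - 4/3 x (compared here at x = 3):
print(all(p(xi) == yi for xi, yi in pts))                # True
print(p(3) == F(-13, 6)*27 + F(13, 2)*9 - F(4, 3)*3)     # True
```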
|
2,329,003 | <p>Parametrise the circle centered at $ \ (1,1,-1) \ $ with radius equal to $ 3 $ in the plane $ x+y+z=1 $ with positive orientation . $$ $$ I have thought the parametriation: </p>
<p>\begin{align} x(t)=1+ 3 \cos (t) \hat j +3 \sin (t) \hat k \\ y(t)=1+3 \cos (t) \hat i+3 \sin (t) \hat k \\ z(t)=-1+3 \cos (t) \hat i +3 \sin (t) \hat j , \ \ 0 \leq t \leq 2 \pi \end{align} But I am not sure . <strong>Any help is there ?</strong></p>
| hamam_Abdallah | 369,188 | <p>The centre $(1,1,-1)$ satisfies $1+1-1=1$, so it lies in the plane $x+y+z=1$, and the circle is the intersection of the sphere
$$(x-1)^2+(y-1)^2+(z+1)^2=9$$
with that plane. Pick two orthonormal vectors spanning the plane, for instance
$$u=\tfrac{1}{\sqrt{2}}(1,-1,0),\qquad v=\tfrac{1}{\sqrt{6}}(1,1,-2),$$
and set $r(t)=(1,1,-1)+3\cos(t)\,u+3\sin(t)\,v$, i.e.
$$x(t)=1+\tfrac{3}{\sqrt{2}}\cos(t)+\tfrac{3}{\sqrt{6}}\sin(t)$$
$$y(t)=1-\tfrac{3}{\sqrt{2}}\cos(t)+\tfrac{3}{\sqrt{6}}\sin(t)$$
$$z(t)=-1-\sqrt{6}\sin(t)$$</p>
<p>with $0\le t \le 2\pi$. Since $u$ and $v$ are orthonormal and orthogonal to the normal $(1,1,1)$, every point satisfies $x(t)+y(t)+z(t)=1$ and lies at distance $3$ from the centre. Taking $v=n\times u$ with $n=\tfrac{1}{\sqrt{3}}(1,1,1)$ makes the circle traversed counterclockwise as seen from the side $n$ points to, i.e. with positive orientation.</p>
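<p>Any correct parametrisation must keep three things invariant: distance $3$ from the centre, the plane equation, and period $2\pi$. A sketch checking these for the orthonormal-frame parametrisation $r(t)=(1,1,-1)+3\cos(t)\,u+3\sin(t)\,v$ with $u=\frac{1}{\sqrt2}(1,-1,0)$ and $v=\frac{1}{\sqrt6}(1,1,-2)$:</p>

```python
import math

c = (1.0, 1.0, -1.0)
u = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
v = (1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6))

def r(t):
    return tuple(ci + 3*math.cos(t)*ui + 3*math.sin(t)*vi
                 for ci, ui, vi in zip(c, u, v))

for k in range(100):
    t = 2 * math.pi * k / 100
    x, y, z = r(t)
    assert abs(x + y + z - 1) < 1e-12                  # lies in the plane
    assert abs(math.dist((x, y, z), c) - 3) < 1e-12    # distance 3 from the centre
print("ok")
```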
|
4,585,589 | <p>In the 4th edition of "Matrix Computations", Golub and Van Loan present "Algorithm 5.1.1 (Householder Vector)". The first couple of lines (translated into MATLAB-syntax) read:</p>
<pre><code>m = length(x); sigma = x(2:m)'*x(2:m); v = [1; x(2:m)];
if sigma == 0 && x(1) >= 0
beta = 0;
elseif sigma == 0 && x(1) < 0
beta = -2;
else
...
</code></pre>
<p>The <code>else</code> clause handles the case where <code>sigma</code> is nonzero and no code after the provided snippet modifies <code>v</code> if <code>sigma</code> is zero. The matrix form of the resulting Householder transformation is <span class="math-container">$I-\beta v v^T$</span>.</p>
<p>The <code>elseif</code> clause is a little strange. It doesn't appear in the algorithm listing in the
3rd edition, so it was added for the 4th, presumably for numerical stability. However, it seems to me to generate a <span class="math-container">$(\beta, v)$</span> pair that doesn't map to an orthogonal matrix. For example, if <code>x = [-1; 0; 0]</code> then <code>sigma == 0</code> and <code>x(1) < 0</code> so we get <code>beta = -2</code> and <code>v = [1; 0; 0]</code> and a Householder transformation of <code>[3, 0, 0; 0, 1, 0; 0, 0, 1]</code> which is not an orthogonal matrix.</p>
<p>So my questions are:</p>
<ul>
<li>What benefit is there to handling that case separately, rather than just setting <code>beta = 0</code> if <code>sigma = 0</code>?</li>
<li>Does the resulting Householder transformation need to be applied differently?</li>
</ul>
| Jamie Ballingall | 559,730 | <p>In this answer I'll try to provide further context and summarize the consensus I think we achieved in the comments to the accepted answer.</p>
<p>The full algorithm (5.1.1) reads:</p>
<pre><code>m = length(x); sigma = x(2:m)'*x(2:m); v = [1; x(2:m)];
if sigma == 0 && x(1) >= 0
beta = 0;
elseif sigma == 0 && x(1) < 0
beta = -2;
else
mu = sqrt(x(1)*x(1)+sigma);
if (x(1) <= 0)
v(1) = x(1) - mu;
else
v(1) = -sigma/(x(1)+mu);
end
beta = 2*v(1)*v(1)/(sigma+v(1)*v(1));
v = v/v(1);
end
</code></pre>
<p>and produces a <code>v</code> (<span class="math-container">$v$</span>) and a <code>beta</code> (<span class="math-container">$\beta$</span>) intended to be converted to a matrix via <span class="math-container">$P=I-\beta v v^T$</span> or applied directly as <span class="math-container">$x_r = x - \beta v (v^T x)$</span>. That these are subtractions rather than additions is confirmed by the main <code>else</code> case.</p>
<p>The line <code>beta = -2;</code> is incorrect and should read <code>beta = 2;</code>.</p>
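<p>The sign matters: with <span class="math-container">$v=e_1$</span>, the matrix <span class="math-container">$I-\beta vv^T$</span> is <span class="math-container">$\operatorname{diag}(1-\beta,1,1)$</span>, so <span class="math-container">$\beta=2$</span> gives a reflection while <span class="math-container">$\beta=-2$</span> does not. A quick sketch in plain Python (not part of the book's algorithm) applying the update <span class="math-container">$x \mapsto x-\beta v(v^Tx)$</span>:</p>

```python
def apply_householder(beta, v, x):
    """Apply x -> x - beta * v * (v^T x)."""
    w = sum(vi * xi for vi, xi in zip(v, x))
    return [xi - beta * vi * w for vi, xi in zip(v, x)]

v = [1.0, 0.0, 0.0]      # the v produced by the algorithm when sigma == 0
x = [-1.0, 0.0, 0.0]     # the example from the question

good = apply_householder(2.0, v, x)     # corrected sign
bad = apply_householder(-2.0, v, x)     # sign as printed in the 4th edition

print(good)   # [1.0, 0.0, 0.0]: norm preserved, x mapped to +||x|| e1
print(bad)    # [-3.0, 0.0, 0.0]: norm tripled, so the transform is not orthogonal
```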
<p>Any Householder algorithm will need to handle the case when <span class="math-container">$\sigma=0$</span> and <span class="math-container">$x_1=0$</span>, because this implies that the norm of the supplied vector <span class="math-container">$x$</span> is exactly zero and so a reflection is not defined. We wish to output <span class="math-container">$\beta=0$</span> in this case. While not strictly a Householder transformation, when converted to a matrix it simply yields the identity matrix. In this algorithm that is handled as part of the <code>if sigma == 0 && x(1) >= 0</code> branch.</p>
<p>Note that if <span class="math-container">$\sigma = 0$</span> then all the elements of the supplied vector below the first are zero and no transformation need be applied to zero them out. However, there are valid Householder transformations that affect the first element, leaving the rest as zeros, which could be applied if desirable.</p>
<p>The main <code>else</code> clause is designed to avoid severe cancellation when <span class="math-container">$x$</span> is close to a positive multiple of <span class="math-container">$e_1$</span> but would have division by zero error if <span class="math-container">$\sigma=0$</span> and <span class="math-container">$x_1 > 0$</span>. So the case <span class="math-container">$\sigma=0$</span> and <span class="math-container">$x_1 = 0$</span> is extended to <span class="math-container">$\sigma=0$</span> and <span class="math-container">$x_1 >= 0$</span> and <span class="math-container">$\beta = 0$</span> returned in that case.</p>
<p>In previous editions, the case <span class="math-container">$\sigma=0$</span> and <span class="math-container">$x_1 < 0$</span> was handled the same way, but in the 4th edition the authors chose to handle it as a special case. In that case the matrix form of the Householder transformation is the identity matrix but with a -1 in the top left position. This leaves the zeros unchanged but flips the sign of the first component.</p>
<p>This does not appear to be for reasons of numerical stability but rather to create more positive entries on the diagonal of the resulting <span class="math-container">$R$</span> matrix. Nor does it appear to introduce any numerical instability.</p>
|
2,044,362 | <p>(<em>This summarizes scattered results from <a href="https://math.stackexchange.com/questions/879089/prove-2f-1-left-frac13-frac13-frac56-27-right-stackrel-color808080">here</a>, <a href="https://math.stackexchange.com/questions/879854/prove-large-int-11-fracdx-sqrt394-sqrt5-x-left1-x2-right">here</a>, <a href="https://math.stackexchange.com/questions/1326557/integral-large-int-0-infty-fracdx-sqrt47-cosh-x">here</a> and elsewhere. See also this <a href="https://math.stackexchange.com/questions/2043030/closed-forms-for-int-0-infty-fracdx-sqrt355-cosh-x-and-int-0-inf">older post</a></em>.)</p>
<blockquote>
<p><strong>I. Cubic</strong></p>
</blockquote>
<p>Define $\beta= \tfrac{\Gamma\big(\tfrac56\big)}{\Gamma\big(\tfrac13\big)\sqrt{\pi}}= \frac{1}{48^{1/4}\,K(k_3)}$. Then we have the nice evaluations,</p>
<p>$$\begin{aligned}\frac{3}{5^{5/6}} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-4\big)\\
&=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+4x^3}}\\[1.7mm]
&=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{9+4\sqrt{5}}\,x}}\\[1.7mm]
&=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{9+\cosh x}}
\end{aligned}\tag1$$
and,
$$\begin{aligned}\frac{4}{7} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-27\big)\\
&=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+27x^3}}\\[1.7mm]
&=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{55+12\sqrt{21}}\,x}}\\[1.7mm]
&=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{55+\cosh x}}
\end{aligned}\tag2$$
Note the powers of <em><a href="http://mathworld.wolfram.com/FundamentalUnit.html" rel="nofollow noreferrer">fundamental units</a></em>,
$$U_{5}^6 = \big(\tfrac{1+\sqrt{5}}{2}\big)^6=\color{blue}{9+4\sqrt{5}}$$
$$U_{21}^3 = \big(\tfrac{5+\sqrt{21}}{2}\big)^3=\color{blue}{55+12\sqrt{21}}$$
<em>Those two instances can't be coincidence.</em></p>
<blockquote>
<p><strong>II. Quartic</strong></p>
</blockquote>
<p>Define $\gamma= \tfrac{\sqrt{2\pi}}{\Gamma^2\big(\tfrac14\big)}= \frac{1}{2\sqrt2\,K(k_1)}=\frac1{2L}$ with <em><a href="http://mathworld.wolfram.com/LemniscateConstant.html" rel="nofollow noreferrer">lemniscate constant</a></em> $L$. Then we have the nice,</p>
<p>$$\begin{aligned}\frac{2}{3^{3/4}} &=\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-3\big)\\
&=\gamma\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+3x^4}}\\[1.7mm]
&\overset{\color{red}?}=\gamma\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{\color{blue}{7+4\sqrt{3}}\,x}}\\[1.7mm]
&=2^{1/4}\,\gamma\,\int_0^\infty\frac{dx}{\sqrt[4]{7+\cosh x}}
\end{aligned}\tag3$$
and,
$$\begin{aligned}\frac{3}{5}&=\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-80\big)\\
&=\gamma\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+80x^4}}\\[1.7mm]
&\overset{\color{red}?}=\gamma\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{\color{blue}{161+72\sqrt{5}}\,x}}\\[1.7mm]
&=2^{1/4}\,\gamma\,\int_0^\infty\frac{dx}{\sqrt[4]{161+\cosh x}}
\end{aligned}\tag4$$</p>
<p>with $a=161$ given by Noam Elkies in this <a href="https://math.stackexchange.com/questions/1326557/integral-large-int-0-infty-fracdx-sqrt47-cosh-x?noredirect=1&lq=1#comment2706992_1326557">comment</a>. (For $4$th roots, I just assumed the equality using the blue radicals based on the ones for cube roots.) Note again the powers of fundamental units,
$$U_{3}^2 = \big(2+\sqrt3\big)^2=\color{blue}{7+4\sqrt{3}}$$
$$U_{5}^{12} = \big(\tfrac{1+\sqrt{5}}{2}\big)^{12}=\color{blue}{161+72\sqrt{5}}$$
<em>Just like for the cube roots version, these can't be coincidence.</em></p>
<blockquote>
<p><strong>Questions:</strong></p>
</blockquote>
<p>Is it true these observations can be explained by, let $b=2a+1$, then,</p>
<p>$$\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+ax^3}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{b+\sqrt{b^2-1}\,x}}=2^{1/3}\int_0^\infty\frac{dx}{\sqrt[3]{b+\cosh x}}$$</p>
<p>$$\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+ax^4}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{b+\sqrt{b^2-1}\,x}}=2^{1/4}\int_0^\infty\frac{dx}{\sqrt[4]{b+\cosh x}}$$</p>
| Nemo | 285,751 | <p>Starting from
$$
\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-a\big)=\gamma\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+ax^4}},
$$
$$
(b+\sqrt{b^2-1})^{-1/4}\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{1}{2};\tfrac{2\sqrt{b^2-1}}{b+\sqrt{b^2-1}}\big)={\gamma}\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{{b+\sqrt{b^2-1}}\,x}},
$$
(with $\gamma$ defined above) and applying transformations 2.11(4), 2.10(6), 2.11(2) from Erdelyi, Higher transcendental functions, vol. I, to the second hypergeometric function one gets
\begin{align}
(b+\sqrt{b^2-1})^{-1/4}\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{1}{2};\tfrac{2\sqrt{b^2-1}}{b+\sqrt{b^2-1}}\big)&=b^{-1/4}\,_2F_1\big(\tfrac{1}{8},\tfrac{5}{8};\tfrac{3}{4};\tfrac{{b^2-1}}{b^2}\big)\\
&=\,_2F_1\big(\tfrac{1}{8},\tfrac{1}{8};\tfrac{3}{4};1-b^2\big)\\
&=\,_2F_1\big(\tfrac{1}{8},\tfrac{1}{8};\tfrac{3}{4};-4a(1+a)\big)\\
&=\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-a\big),
\end{align}
where $b=2a+1$, thus proving that
$$
\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+ax^4}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{b+\sqrt{b^2-1}\,x}}.
$$</p>
<p>More generally application of the same series of transformations gives
$$
{(b+\sqrt{b^2-1})^{-\alpha } \, _2F_1\left(\alpha ,\alpha ;2 \alpha ;\tfrac{2 \sqrt{b^2-1}}{b+\sqrt{b^2-1}}\right)}={\, _2F_1\left(\alpha ,\alpha ;\alpha +\tfrac{1}{2};-a\right)},
$$
i.e.
$$
\int_0^1 \frac{dx}{\sqrt{1-x}\,x^{1-\alpha}(1+ax)^{\alpha}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{1-\alpha} (b+\sqrt{b^2-1}\,x)^{\alpha}}.
$$
When $\alpha=1/3$ this answers the related <a href="https://math.stackexchange.com/questions/2043030/closed-forms-for-int-0-infty-fracdx-sqrt355-cosh-x-and-int-0-inf?rq=1">question</a>.</p>
<p>Formula 2.12(10) from Erdelyi, Higher transcendental functions, vol. I answers the second equality, namely
$$
{\, _2F_1\left(\alpha ,\alpha ;\alpha +\tfrac{1}{2};-a\right)}=2^{\alpha}\frac{\Gamma(\alpha+1/2)}{\sqrt{\pi}\Gamma(\alpha)}\int_0^\infty\frac{dx}{(b+\cosh x)^\alpha}.
$$</p>
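<p>These evaluations are easy to confirm numerically. Here is a quick check of my own (not part of the proof): the $\cosh$-integral forms of $(1)$, $(2)$ and $(3)$ are computed with a plain composite Simpson rule (the truncation point $120$ and the step count are ad hoc choices) and compared with the closed forms $3/5^{5/6}$, $4/7$ and $2/3^{3/4}$.</p>

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

beta = math.gamma(5 / 6) / (math.gamma(1 / 3) * math.sqrt(math.pi))
gam = math.sqrt(2 * math.pi) / math.gamma(1 / 4) ** 2

# The integrands decay like e^{-x/3} resp. e^{-x/4}, so [0, 120] is plenty.
cubic_9 = 2 ** (1 / 3) * beta * simpson(lambda x: (9 + math.cosh(x)) ** (-1 / 3), 0, 120)
cubic_55 = 2 ** (1 / 3) * beta * simpson(lambda x: (55 + math.cosh(x)) ** (-1 / 3), 0, 120)
quart_7 = 2 ** (1 / 4) * gam * simpson(lambda x: (7 + math.cosh(x)) ** (-1 / 4), 0, 120)

print(cubic_9, 3 / 5 ** (5 / 6))    # both ≈ 0.78461
print(cubic_55, 4 / 7)              # both ≈ 0.57143
print(quart_7, 2 / 3 ** (3 / 4))    # both ≈ 0.87738
```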
|
2,607,733 | <p>Prove the inclusion-exclusion principle using the fact that for three pairwise disjoint sets $X$, $Y$, $Z$:</p>
<p>$$|X\cup Y\cup Z|=|X|+|Y|+|Z|$$</p>
<p>I tried setting $X\cup Y=A$, but arrive at $|X\cup Y|=|X|+|Y|$ due to the sets being disjoint.</p>
| 5xum | 112,884 | <p>The mapping $$d: (x,y)\mapsto \|x+y\|$$</p>
<p>is <strong>not</strong> a metric, because it does not satisfy the axiom</p>
<p>$$\forall x\in X:d(x,x)=0$$</p>
<p>(except if $X = \{0\}$, but then you only have one metric on $X$ anyway).</p>
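<p>A one-line illustration of the failure (my own, taking $X=\mathbb R$ with the absolute value as the norm): $d(x,x)=\|2x\|$, which is nonzero whenever $x\neq 0$.</p>

```python
d = lambda x, y: abs(x + y)   # the candidate "metric", with |.| playing the norm

x = 1.5
print(d(x, x))   # 3.0, not 0, so the axiom d(x, x) = 0 fails for x != 0
print(d(0.0, 0.0))   # 0.0 -- only the zero vector satisfies it
```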
|
2,100,793 | <p>Let us say that I have a set $A=\{1,2, 3\}$. Now I need to access, say, the element $3$ of $A$. How do I achieve this?</p>
<p>I know that sets are unordered list of elements but I need to access the elements of a set. Can I achieve this with a tuple? Like $A=(1, 2, 3)$, should I write $A(i)$ to access the i-th element of $A$? Or is there any other notation? </p>
<blockquote>
<p>If I have a list of elements, what is the best mathematical object to represent it so that I can freely access its elements and how? In programming, I would use <code>arrays</code>.</p>
</blockquote>
| Robert Israel | 8,508 | <p>The fact that a set is "unordered" by definition doesn't prevent you from defining an ordering on it. Indeed, by the Axiom of Choice, any set can be
indexed by an ordinal number. </p>
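<p>In programming terms (a Python aside of my own, not part of the mathematical point above): a tuple is the ordered, indexable object the question is after, while a set answers membership queries but has no positions.</p>

```python
A_set = {1, 2, 3}     # unordered: there is no "i-th element" of A_set
A_tuple = (1, 2, 3)   # ordered: indexable like an array

print(A_tuple[2])     # 3  (0-based indexing)
print(2 in A_set)     # True: sets support membership tests

try:
    A_set[0]          # indexing a set is a type error
except TypeError as e:
    print("sets are not subscriptable:", e)
```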
|
17,819 | <p>$\mathfrak{sl}_2(\mathbb{C})$ is usually given a basis $H, X, Y$ satisfying $[H, X] = 2X, [H, Y] = -2Y, [X, Y] = H$. What is the origin of the use of the letter $H$? (It certainly doesn't stand for "Cartan.") My guess, based on similarities between these commutator relations and ones I have seen mentioned when people talk about physics, is that $H$ stands for "Hamiltonian." Is this true? Even if it's not, is there a connection? </p>
| Francois Ziegler | 19,276 | <p>The letters <span class="math-container">$\mathrm X$</span> and <span class="math-container">$\mathrm Y$</span> are already used by <strong>Cayley</strong> in what Dieudonné (in <a href="//ams.org/mathscinet-getitem?mr=88c:01020" rel="nofollow noreferrer">MR</a>) calls the first description of all finite-dimensional irreducible <span class="math-container">$\mathfrak{sl}_2$</span>-modules: <em>A Second Memoir upon Quantics</em> (<a href="//doi.org/10.1098/rstl.1856.0008" rel="nofollow noreferrer">1856</a>, §§29–31). He apparently has no name for <span class="math-container">$\mathrm{XY-YX}$</span>. Same in e.g. Faà di Bruno (<a href="//zbmath.org/?q=an:08.0056.02" rel="nofollow noreferrer">1876</a>, <a href="//archive.org/details/thoriedesformes00brungoog/page/n218" rel="nofollow noreferrer">§§113–114</a>).</p>
<p><span class="math-container">$\mathrm H$</span> seems to stand for <em>Hauptmatrix</em>, as introduced by <strong>Weyl</strong> (<a href="//zbmath.org/?q=an:48.0637.04" rel="nofollow noreferrer">1922</a>, p. 125; see also <a href="//zbmath.org/?q=an:0001.17501" rel="nofollow noreferrer">1931</a>, pp. <a href="//books.google.com/books?id=jBDOAAAAMAAJ&&q=%22welche+die+beiden+Variablen+einzeln+transformieren%22" rel="nofollow noreferrer">114</a>, <a href="//books.google.com/books?id=jBDOAAAAMAAJ&q=Haupttransformationen" rel="nofollow noreferrer">122</a>) to describe Cartan subalgebras, roots (<em>Multiplikatoren</em>) and root spaces (<em>Länder</em>): </p>
<blockquote>
<p>Kommt in <span class="math-container">$\mathfrak g$</span> eine „Hauptmatrix“ <span class="math-container">$H$</span> vor, in der alle Elemente außerhalb der Hauptdiagonale verschwinden, während in der Hauptdiagonale die Zahlen <span class="math-container">$\alpha_1,\alpha_2,\dots,\alpha_n$</span> stehen, so bilde man die Differenzen <span class="math-container">$\alpha_i - \alpha_k$</span> und teile mit Bezug auf <span class="math-container">$H$</span> das Schema einer beliebigen Matrix in „Länder“ ein, indem man jedem Feld <span class="math-container">$(ik)$</span> des Schemas (<span class="math-container">$i$</span> der Zeilen-, <span class="math-container">$k$</span> der Kolonnenindex) die Zahl <span class="math-container">$\alpha_i - \alpha_k$</span> als „Multiplikator“ zuordnet (...)</p>
</blockquote>
<p><span class="math-container">$\mathfrak{sl}_2$</span>-triples with your bracket relations appear in Killing (<a href="//zbmath.org/?q=an:20.0368.03" rel="nofollow noreferrer">1888</a>, p. 281), denoted <span class="math-container">$(X_{r-1}, X_r, X_{r-2})$</span>; Cartan (<a href="//zbmath.org/?q=an:25.0638.02" rel="nofollow noreferrer">1894</a>, <a href="//archive.org/details/surlastructured00bourgoog/page/n125" rel="nofollow noreferrer">p. 116</a>), denoted <span class="math-container">$\mathrm{(Y, X,X')}$</span>; Weyl (<a href="//zbmath.org/?q=an:51.0319.01" rel="nofollow noreferrer">1925</a>, p. 276), denoted <span class="math-container">$(h_\alpha,e_\alpha,e_{-\alpha})$</span>, with the <span class="math-container">$h_\alpha$</span> again called “Diagonal- oder Hauptmatrizen”; Dynkin (<a href="http://mi.mathnet.ru/eng/msb/v72/i2/p349" rel="nofollow noreferrer">1952</a>, §8.1), denoted <span class="math-container">$(f,e_+,e_-)$</span>; <strong>Chevalley</strong> (<a href="//zbmath.org/?q=an:0066.01503" rel="nofollow noreferrer">1955</a>, p. 28; <a href="//ams.org/mathscinet-getitem?mr=68552" rel="nofollow noreferrer">1955</a>, <a href="//books.google.com/books?id=YjfvAAAAMAAJ&q=%22%5BH,+X%5D+%3D+aX%22" rel="nofollow noreferrer">p. 96</a>), denoted <span class="math-container">$(D,N,N')$</span>, <span class="math-container">$(H_r,X_r,X_{-r})$</span>, and finally the desired <span class="math-container">$\mathrm{(H,X,Y)}$</span>. </p>
<p>(Standardization was slow: Lie (<a href="//zbmath.org/?q=an:08.0212.01" rel="nofollow noreferrer">1876</a>, <a href="//archive.org/details/archivformathema11876oslo/page/53" rel="nofollow noreferrer">p. 53</a>; <a href="//zbmath.org/?q=an:23.0364.01" rel="nofollow noreferrer">1890</a>, <a href="//archive.org/details/theotransformation02liesrich/page/n366" rel="nofollow noreferrer">p. 353</a>) used <span class="math-container">$(X_1,X_2,X_3)=(\mathrm{-Y,\smash{\frac12}H,X})$</span>, and similarly rescaled bases and bracket relations still appear in Pauli (<a href="//doi.org/10.1007/BF01397326" rel="nofollow noreferrer">1927</a>, p. 614), Born-Jordan (<a href="//zbmath.org/?q=an:56.1291.01" rel="nofollow noreferrer">1930</a>, p. 135), Casimir-van der Waerden (<a href="https://zbmath.org/?q=an:57.1579.02" rel="nofollow noreferrer">1931</a>, <a href="//www.lorentz.leidenuniv.nl/IL-publications/dissertations/ehrenfest.html" rel="nofollow noreferrer">p. 46</a>; <a href="//zbmath.org/?q=an:61.0475.02" rel="nofollow noreferrer">1935</a>, p. 4), Bauer (<a href="//zbmath.org/?q=an:59.1521.02" rel="nofollow noreferrer">1933</a>, p. 126), Harish-Chandra (<a href="//ams.org/mathscinet-getitem?mr=33811" rel="nofollow noreferrer">1950</a>, p. 301; <a href="//ams.org/mathscinet-getitem?mr=47055" rel="nofollow noreferrer">1952</a>, p. 337), Jacobson (<a href="//ams.org/mathscinet-getitem?mr=49882" rel="nofollow noreferrer">1951</a>, p. 107; <a href="//ams.org/mathscinet-getitem?mr=20:3901" rel="nofollow noreferrer">1958</a>, p. 825), Séminaire “Sophus Lie” (<a href="//ams.org/mathscinet-getitem?mr=73107" rel="nofollow noreferrer">1955</a>, <a href="http://www.numdam.org/item/SSL_1954-1955__1__A14_0" rel="nofollow noreferrer">p. 10-01</a>), Kostant (<a href="//ams.org/mathscinet-getitem?mr=22:5693" rel="nofollow noreferrer">1959</a>, p. 977), etc. 
Settling on the “Chevalley” basis <span class="math-container">$\mathrm{(H,X,Y)}$</span> over Lie’s seems ultimately motivated by the smaller <span class="math-container">$\mathbf Z$</span>-form <span class="math-container">$\mathfrak g_\mathrm{sc}\subset\mathfrak g_\mathrm{ad}$</span> it spans, cf. Borel (<a href="//ams.org/mathscinet-getitem?mr=258838" rel="nofollow noreferrer">1970</a>, §2.7).) </p>
|
651,027 | <p><img src="https://i.stack.imgur.com/SIdAQ.jpg" alt="enter image description here"></p>
<p>I'm doing this Solving Abstract Problem but I'm not sure which one it is. I mean from the Series I can see there's a pattern but in the Options I don't see images that link with the Series. Do you have any idea - unless it's B but it's note very clear.</p>
| Dmoreno | 121,008 | <p>Imagine there are two pendulums, one with an X end and the other with a circle. They are spinning in different "directions". So the answer should be E.</p>
<p>Cheers!</p>
|
2,146,571 | <p>I solved this problem in my textbook but noticed their solution was different than mine. <br/></p>
<p>$1. \ 9e^{-2x}=1$ </p>
<p>$2. \ e^{-2x}=\frac{1}{9}$</p>
<p>$3. -2x=\ln(\frac{1}{9})$</p>
<p>$4. \ x=-\ \frac{1}{2}\ln(\frac{1}{9})$</p>
<p>However, the answer that my textbook gives is $\frac{\ln(9)}{2}$ </p>
<p>I plugged these expressions into my calculator and they are indeed equivalent, however I don't see what properties I could use to get from my messy answer to the textbook's much cleaner one. Any help would be greatly appreciated. Thank you.</p>
| S.C.B. | 310,930 | <p>Note that $$\ln x +\ln y =\ln xy, \; \ln 1=\ln e^{0}=0$$
If $x, y$ are positive reals, as seen <a href="https://en.wikipedia.org/wiki/List_of_logarithmic_identities#Using_simpler_operations" rel="nofollow noreferrer">here</a>. From this, $$\ln x +\ln \frac{1}{x}=0 \iff \ln x =-\ln \frac{1}{x}$$
So $$\ln \frac{1}{9}=-\ln 9$$
So $-\frac{1}{2}\ln(\frac{1}{9})=\frac{\ln(9)}{2}$</p>
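<p>A quick numerical check of my own that the two forms agree and actually solve the original equation:</p>

```python
import math

x_messy = -0.5 * math.log(1 / 9)   # -1/2 * ln(1/9)
x_clean = math.log(9) / 2          # ln(9) / 2, the textbook form

print(x_messy, x_clean)            # both ≈ 1.0986
print(9 * math.exp(-2 * x_clean))  # ≈ 1.0, so it solves 9 e^{-2x} = 1
```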
|
2,146,571 | <p>I solved this problem in my textbook but noticed their solution was different than mine. <br/></p>
<p>$1. \ 9e^{-2x}=1$ </p>
<p>$2. \ e^{-2x}=\frac{1}{9}$</p>
<p>$3. -2x=\ln(\frac{1}{9})$</p>
<p>$4. \ x=-\ \frac{1}{2}\ln(\frac{1}{9})$</p>
<p>However, the answer that my textbook gives is $\frac{\ln(9)}{2}$ </p>
<p>I plugged these expressions into my calculator and they are indeed equivalent, however I don't see what properties I could use to get from my messy answer to the textbook's much cleaner one. Any help would be greatly appreciated. Thank you.</p>
| BobaFret | 43,760 | <p>$\ln (1/9) = \ln (9^{-1})=-1 \cdot \ln (9)$</p>
|
2,146,571 | <p>I solved this problem in my textbook but noticed their solution was different than mine. <br/></p>
<p>$1. \ 9e^{-2x}=1$ </p>
<p>$2. \ e^{-2x}=\frac{1}{9}$</p>
<p>$3. -2x=\ln(\frac{1}{9})$</p>
<p>$4. \ x=-\ \frac{1}{2}\ln(\frac{1}{9})$</p>
<p>However, the answer that my textbook gives is $\frac{\ln(9)}{2}$ </p>
<p>I plugged these expressions into my calculator and they are indeed equivalent, however I don't see what properties I could use to get from my messy answer to the textbook's much cleaner one. Any help would be greatly appreciated. Thank you.</p>
| Fede Poncio | 269,098 | <p>$\ln(\frac{1}{9})=\ln(9^{-1})=(-1)\ln(9)$ ;)</p>
|
718,609 | <p>This theorem is the converse of Wilson's theorem:</p>
<blockquote>
<p>If $n$ is composite and $n>4$, then $(n-1)! \equiv 0 \pmod n$</p>
</blockquote>
<p>The question holds up for all the composites I have tried but I'm struggling to form a proof for all composites greater than $4$.</p>
| Michael Hardy | 11,667 | <p>Notice first that </p>
<p>Suppose $p$ is prime and $p\mid n$. Since $n$ is composite, we have $p<n$, so $p\le n-1$.</p>
<p>The multiplicity of the prime factor $p$ in the factorization of $n$ is the largest integer $k$ such that $p^k\mid n$. Thus $p,p^2,p^3,\ldots,p^{k-1}$ divide $n$ and are all less than $n$. We have
$$
(n-1)! = (n-1)(n-2)(n-3) \cdots 3\cdot2\cdot1.
$$
The numbers $p$ and $p^{k-1}$ are both in this string of numbers from $1$ to $n-1$, so the product is divisible by $p\cdot p^{k-1} = p^k$. And $p^k$ itself is in that string of numbers unless $p^k=n$. Either way, $p^k\mid (n-1)!.$
<p>This is true of <em>all</em> prime numbers that divide $n$, and thus it is true of the product of those powers of primes, and the product is $n$. So $n$ is a divisor of $(n-1)!.$</p>
<p><b>Revised postscript:</b> As Bill Dubuque points out in comments, $4$ is not a divisor of $(4-1)! = 6$. The problem is this: although $p=2$ and $p^{k-1}=2^{2-1} =2$ are both in this string of numbers, in this case $p$ and $p^{k-1}$ are the same number, namely $p$. This happens precisely when $p^{k-1} = p^1$, i.e. when $k=2.$ So we can't conclude that $p\times p^{k-1}$ divides $(n-1)!$ by that same method in that case.</p>
<p>However, the result can be proved by another method when $k=2$ and $p>2$: observe that $p$ and $2p$ both appear in the string of numbers from $1$ to $n-1$, so $p^k=p^2\mid (n-1)!.$ That doesn't work when $p=2$, because then $2p$ can be $n$ itself (as happens for $n=4$) rather than some number less than $n$.</p>
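<p>For what it's worth, the statement (and the lone exception $n=4$) is easy to check by direct computation:</p>

```python
import math

def fact_mod(n):
    """(n-1)! mod n."""
    return math.factorial(n - 1) % n

composites = [6, 8, 9, 10, 12, 15, 16, 25, 49]
print([fact_mod(n) for n in composites])   # all 0: n divides (n-1)!

print(fact_mod(4))   # 2: the single composite exception, since 3! = 6 ≡ 2 (mod 4)
```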
|
47,753 | <p>What Method options does the Solve command accept? Solve has the Method option; however, the documentation contains no methods that it accepts...</p>
| Alexey Popkov | 280 | <p>Undocumented <a href="http://forums.wolfram.com/mathgroup/archive/2010/Nov/msg00787.html" rel="nofollow"><code>Method->"Legacy"</code> option forces <code>Solve</code> in version 8 to use algorithm from versions <=7</a>. </p>
<pre><code>exprs = Together[{(b + d + f)/x - (a + b)/(1 + x) -
2*(c + d + e)/(1 + 2*x + y) - (f + g)/(x + y), (e + g)/
y - (c + d + e)/(1 + 2*x + y) - (f + g)/(x + y)}];
Timing[solns1 = Solve[exprs == 0, {x, y}, Method -> "Legacy"];]
</code></pre>
<blockquote>
<p>{39.515, Null}</p>
</blockquote>
<p><code>Solve[exprs == 0, {x, y}]</code> never returns in version 8. Additional examples see <a href="http://community.wolfram.com/groups/-/m/t/17961" rel="nofollow">here</a>.</p>
<p>Some information on the change in the algorithm between versions 7 and 8 is <a href="http://community.wolfram.com/groups/-/m/t/158746" rel="nofollow">published</a> by Bruce Miller (Wolfram Technical Support Group):</p>
<blockquote>
<p>Boilerplate I had:</p>
<p>The difference between 7.0 and 8.0 output is that 7.0 Solve was
treating equations that involved only variables as assumptions. This
functionality was not precisely defined or consistently implemented
and has been removed in 8.0. Instead there is a new option
MaxExtraConditions which provides a well-defined and extended version
of the functionality.</p>
</blockquote>
|
2,553,470 | <p>The equations of the two chords of the parabola y^2=4ax are to be found such that the pass through the point (-6a,0) and subtends an angle of 45 degree at the vertex of the parabolas. Tried it in many ways including using parametric equations but could not get the two equations</p>
<p>(note. Let the chord intersect the parabola at points p and q. And let the vertex be point r then prq..=45 degree) </p>
| Christian Blatter | 1,303 | <p>If a slot may receive at most one letter then we can just choose the four slots receiving a letter in ${8\choose 4}=70$ ways. It is then determined by the rules which of the chosen slots shall receive which letter.</p>
<p>If the slots may receive any number $\geq0$ of letters then it is a stars and bars problem: We have to separate $4$ stars by $7$ separators into $8$ compartments. This can be done in ${11\choose 7}=330$ ways. Again for each chosen way it is then determined which letter goes into which compartment.</p>
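<p>Both counts can be sanity-checked by direct computation (a quick check of my own):</p>

```python
from itertools import product
from math import comb

print(comb(8, 4))    # 70: choose the 4 slots that get a letter

print(comb(11, 7))   # 330: stars and bars, C(4 + 8 - 1, 8 - 1)

# Brute-force the stars-and-bars count: tuples (n1, ..., n8) of
# nonnegative letter-counts per slot summing to 4.
count = sum(1 for t in product(range(5), repeat=8) if sum(t) == 4)
print(count)         # 330 again
```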
|
2,996,775 | <p><span class="math-container">$$ \frac {1}{\log_2(x-2)^2} + \frac{1}{\log_2(x+2) ^2} =\frac5{12}.$$</span></p>
<p>I made the graph using Wolfram Alpha; it gives the answer as
$6$. But how can this be solved algebraically?
The base of the logarithm is $2$.</p>
<p>I tried taking the LCM, but then two different log terms
are formed.</p>
| KM101 | 596,598 | <p><span class="math-container">$$\frac{1}{\log_2(x-2)^2}+\frac{1}{\log_2(x+2)^2} = \frac{5}{12}$$</span></p>
<p>Rewrite the logs using <span class="math-container">$$\log_a b^c = c\log_a b$$</span></p>
<p>and factor. Then, simplify both sides.</p>
<p><span class="math-container">$$\frac{1}{2}\cdot\frac{1}{\log_2\vert x-2\vert}+\frac{1}{2}\cdot\frac{1}{\log_2\vert x+2\vert} = \frac{5}{12}$$</span></p>
<p><span class="math-container">$$\frac{1}{2}\bigg(\frac{1}{\log_2\vert x-2\vert}+\frac{1}{\log_2\vert x+2\vert}\bigg) = \frac{5}{12}$$</span></p>
<p><span class="math-container">$$\frac{1}{\log_2\vert x-2\vert}+\frac{1}{\log_2\vert x+2\vert} = \frac{5}{6}$$</span></p>
<p><span class="math-container">$$\frac{\log_2\vert x-2\vert+\log_2\vert x+2\vert}{\log_2\vert x-2\vert\cdot\log_2\vert x+2\vert} = \frac{5}{6}$$</span></p>
<p>Set <span class="math-container">$\color{blue}{a = \log_2\vert x-2\vert}$</span> and <span class="math-container">$\color{purple}{b = \log_2\vert x+2\vert}$</span>.</p>
<p><span class="math-container">$$\frac{\color{blue}{a}+\color{purple}{b}}{\color{blue}{a}\color{purple}{b}} = \frac{5}{6}$$</span></p>
<p>Can you take it on from here? (<strong>Hint:</strong> Solve for possible values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Then, plug in <span class="math-container">$\color{blue}{\log_2\vert x-2\vert = a}$</span> and <span class="math-container">$\color{purple}{\log_2\vert x+2\vert = b}$</span> and check for any extraneous solutions, in case there are any.)</p>
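<p>As a sanity check (my own, not part of the hint): the value $x=6$ found graphically does work, since $\log_2 16 = 4$ and $\log_2 64 = 6$ give $\tfrac14+\tfrac16=\tfrac5{12}$.</p>

```python
import math

log2 = lambda t: math.log(t, 2)

x = 6
lhs = 1 / log2((x - 2) ** 2) + 1 / log2((x + 2) ** 2)
print(lhs, 5 / 12)   # 1/4 + 1/6 = 5/12 on both sides
```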
|
3,133,831 | <p>I know that supremum means least upper bound. If I have a sequence of events, <span class="math-container">$\{A_n\}_{n=1}^\infty$</span></p>
<p>then <span class="math-container">$$\limsup_{n\rightarrow \infty} A_n = \lim_{n\rightarrow \infty} \sup_{j\geq n} A_j$$</span></p>
<p>I'm having trouble understanding this statement: </p>
<p>"The supremum of a collection of elements in a partially ordered set is its least upper bound, so <span class="math-container">$\sup_{j\geq n} A_j$</span> should be a set and it should hold that <span class="math-container">$A_j \subset \sup_{j\geq n} A_j$</span> for all <span class="math-container">$j \geq n $</span>. Because the supremum should also be the smallest upper bound it is not hard to see that <span class="math-container">$$\sup_{j\geq n} A_j= \bigcup_{j=n}^\infty A_j$$</span>"</p>
<p>Why does <span class="math-container">$j \geq n $</span> matter? Also, I don't understand what set is partially ordered, and also why the supremum of <span class="math-container">$A_j$</span> is itself a set. Shouldn't it just be an element? And I also don't see how the supremum is the same thing as the union.</p>
| angryavian | 43,949 | <p>I think you are conflating two similarly-notated notions.
Given a set <span class="math-container">$A$</span> with an ordering, one might write
<span class="math-container">$$\sup A$$</span>
to denote the supremum of the set <span class="math-container">$A$</span>.</p>
<p>Above, when one writes
<span class="math-container">$$\sup_{j \ge n} A_j$$</span>
it denotes something more like
<span class="math-container">$$\sup \{A_n, A_{n+1}, A_{n+2}, \ldots\}$$</span>
which is the supremum of the collection of events <span class="math-container">$A_n, A_{n+1}, \ldots$</span>,
<em>not</em> something like "the supremum of the set <span class="math-container">$\bigcup_{j \ge n} A_j$</span>."
Of course, in order to talk about the supremum of something like
<span class="math-container">$\{A_n, A_{n+1}, A_{n+2}, \ldots\}$</span> (whose <em>elements</em> are themselves events/sets), you need an ordering of the elements. Here, we consider the partial order of set containment. So the least upper bound should be the smallest [in the partial order] set larger than any <span class="math-container">$A_n, A_{n+1}, \ldots$</span>, which in terms of the set containment partial order, is
<span class="math-container">$$\sup_{j \ge n} A_j \text{ is the smallest set such that } A_j \subset \sup_{j \ge n} A_j \text{ for all } j \ge n.$$</span></p>
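<p>A concrete finite illustration (my own, with a made-up sequence of events): under the containment order the supremum of a collection of sets is just their union, and $\limsup_n A_n=\bigcap_n\bigcup_{j\ge n}A_j$ keeps exactly the outcomes that lie in infinitely many $A_j$.</p>

```python
def A(j):
    """A hypothetical sequence of events."""
    if j == 0:
        return {7}                            # the outcome 7 occurs only once
    return {0, 1} if j % 2 == 0 else {1, 2}   # 0, 1, 2 occur infinitely often

N = 50  # truncation; enough for this eventually-periodic example

def sup_from(n):
    """sup_{j >= n} A_j under set containment = the union of the tail."""
    out = set()
    for j in range(n, N):
        out |= A(j)
    return out

print(sup_from(0))   # {0, 1, 2, 7}: the one-off outcome 7 is still in the first tail

limsup = set.intersection(*(sup_from(n) for n in range(N - 1)))
print(limsup)        # {0, 1, 2}: only outcomes occurring infinitely often survive
```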
|
1,248,517 | <p>I am trying to solve the following problem: <br>
Given a stream of symmetric matrices $A_0, A_1, \ldots, A_n$ such that $A_i$ differs from $A_{i-1}$ in only one place, I want to compute the eigenvalues of $A_i$. <br>
Since the matrices are very large, computing the eigenvalues from scratch isn't efficient (and since the matrices differ in only one place, it is also not very smart), so I am trying to find how to calculate the eigenvalues of $A_i$ using the eigenvalues of $A_{i-1}$. <br></p>
<p>Any help will be welcomed, <br></p>
<p>Thanks</p>
| Bill Dubuque | 242 | <p>Note that $\,\underbrace{\gcd(f(n),n\!+\!2) = \gcd(f(-2),n\!+\!2)}\ $ by the Euclidean algorithm, since<br>
$\ {\rm mod}\ n\!+\!2\!:\,\ f(\color{#c00}n)\equiv f(\color{#c00}{-2})\ $ by $\ \color{#c00}{n\equiv -2},\ $ by the $ $ <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>, </p>
<p>where above $\ f(x)\ $ is <em>any</em> polynomial with <em>integer</em> coefficients. $ $ </p>
<p>In the OP: $\, f(-2) = 2,\,$ so $\,\gcd(f(n),n\!+\!2)=\gcd(2,n\!+\!2)=\gcd(2,n)$</p>
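<p>A numerical spot-check (my own; the cubic $f$ below is a made-up example with $f(-2)=2$, since only that value matters here):</p>

```python
from math import gcd

def f(x):
    # Hypothetical integer polynomial chosen so that f(-2) = 2.
    return x**3 + 3 * x**2 + 2 * x + 2

print(f(-2))   # 2

# gcd(f(n), n+2) = gcd(f(-2), n+2) = gcd(2, n) for every positive n.
for n in range(1, 200):
    assert gcd(f(n), n + 2) == gcd(f(-2), n + 2) == gcd(2, n)
print("verified for n = 1..199")
```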
|
1,463,258 | <p>Given a cumulative distribution function of the form $P(X\leq x) = 1- e^{-\lambda x^3}$, is there any way to represent it in terms of an exponential or any other distribution? I've thought about exponentials, but don't know how to deal with the cubed term. Thanks!</p>
| André Nicolas | 6,312 | <p>Let $W$ have exponential distribution with parameter $\lambda$. Let $X=W^{1/3}$. Then if $x\gt 0$, we have $\Pr(X\le x)=\Pr(W^{1/3}\le x)=\Pr(W\le x^3)=1-e^{-\lambda x^3}$.</p>
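<p>A Monte Carlo sanity check of my own (this is in fact a Weibull distribution with shape parameter $3$): sampling $W\sim\text{Exp}(\lambda)$ and taking cube roots should reproduce the CDF $1-e^{-\lambda x^3}$.</p>

```python
import math
import random

random.seed(0)
lam = 2.0
n = 200_000

# X = W^(1/3) with W exponential of rate lam.
samples = [random.expovariate(lam) ** (1 / 3) for _ in range(n)]

x = 0.7
empirical = sum(s <= x for s in samples) / n
theoretical = 1 - math.exp(-lam * x**3)
print(empirical, theoretical)   # should agree to a couple of decimals
```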
|
3,584,927 | <p>If I have two isomorphic groups, can I write <span class="math-container">$A \xrightarrow{\sim} B$</span> rather than <span class="math-container">$A \cong B$</span> to mean "A is isomorphic to B", or is the arrow notation only used if I have a map <span class="math-container">$\varphi : A \xrightarrow{\sim} B$</span> ?</p>
| diracdeltafunk | 19,006 | <p>Upgraded to an answer from a comment by request:</p>
<p>I think it's better practice to use <span class="math-container">$A \cong B$</span> when you mean "there exists an isomorphism from A to B" and <span class="math-container">$A \xrightarrow{\sim} B$</span> when you mean "I have a specific isomorphism from A to B in mind". It's fine to use <span class="math-container">$A \cong B$</span> even in the latter case, but it would be strange to read <span class="math-container">$A \xrightarrow{\sim} B$</span> when there's no specific map being discussed.</p>
|
3,760,115 | <p>I have these constraints on a cost function</p>
<p><span class="math-container">$$
c = A+Bx=A+B\text{vec}\ (q^*q^\top),
$$</span>
where <span class="math-container">$(c,A)\in\mathbb{R}^{100}$</span>, <span class="math-container">$B\in\mathbb{C}^{100\times 81}$</span>, <span class="math-container">$x\in\mathbb{C}^{81}$</span> and <span class="math-container">$q\in\mathbb{C}^9$</span>. So <span class="math-container">$x=\text{vec}\ (q^*q^\top)$</span>, which is the vectorization operator. I want to speed up my optimizer and therefore I require the gradient of the constraints (with respect to <span class="math-container">$q$</span>). This is how far I have come:</p>
<p><span class="math-container">$$
\begin{aligned}
dc = Bdx &= Bd\text{vec}\ (q^*q^\top)\\
&=B\text{vec}\ (q^*dq^\top+dq^*q^\top) \\
&=B\text{vec}\ (q^H:dq)+B\text{vec}\ (q^\top:dq^*)
\end{aligned}
$$</span></p>
<p>However, I cannot seem to get rid of the <span class="math-container">$\text{vec}$</span> operator. If I "matricize" the left side to remove the vectorization at the right side, I cannot get to <span class="math-container">$\frac{\partial c}{\partial q}$</span> anymore. Anyone got some brilliance for me?</p>
<p><strong>Update</strong>: The last line of my derivation is incorrect, I think. <span class="math-container">$q^H\in\mathbb{C}^{1\times 9}$</span> while <span class="math-container">$dq\in\mathbb{C}^{9\times 1}$</span>, so you cannot use the Frobenius product here.</p>
| greg | 357,854 | <p>The outer product of two vectors can be vectorized in several equivalent ways
<span class="math-container">$$\eqalign{
{\rm vec}(q^*q^T) &= {\rm vec}(q^*q^TI) = {\rm vec}(Iq^*q^T) \\
=q\otimes q^* &= (I\otimes q^*)\,q = (q\otimes I)\,q^* \\
}$$</span>
Use this to rewrite the constraint vector and calculate its gradient(s).
<span class="math-container">$$\eqalign{
(c-A) &= B(I\otimes q^*)\,q \;=\; B(q\otimes I)\,q^* \\
dc &= B(I\otimes q^*)\,dq + B(q\otimes I)\,dq^* \\
\frac{\partial c}{\partial q} &= B(I\otimes q^*), \quad
\frac{\partial c}{\partial q^*} = B(q\otimes I) \\
}$$</span></p>
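<p>The vectorization identities in the first display are easy to confirm numerically (a check of my own; note $\operatorname{vec}$ is column-major, so $\operatorname{vec}(uv^\top)=v\otimes u$):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 9
q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
qc = q.conj()

# Column-major (Fortran-order) vec of the outer product q* q^T.
vec = np.outer(qc, q).flatten(order="F")

I = np.eye(n)
form1 = np.kron(q, qc)                        # q ⊗ q*
form2 = np.kron(I, qc.reshape(-1, 1)) @ q     # (I ⊗ q*) q
form3 = np.kron(q.reshape(-1, 1), I) @ qc     # (q ⊗ I) q*

print(np.allclose(vec, form1),
      np.allclose(vec, form2),
      np.allclose(vec, form3))   # True True True
```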
|
2,202,048 | <p>There's a box with $N$ balls in it. One of them is red and the others white. What's the probability of getting the red ball at the $k$th try (if you're not putting them back in), where $k = 1,2,3,\ldots,N$?</p>
| neofoxmulder | 121,687 | <p>I understand this question a bit differently. Suppose $N = 3$; then there is $1$ red ball and $2$ white. Since $k = 1, 2, \ldots, N$, you must make $3$ picks.</p>
<p>On your first pick the probability of drawing the red ball is $\frac{1}{3}$, but on your second pick the probability changes because a ball has been eliminated, and it also depends on what color ball you drew on the first pick. If you drew a white ball on your first pick then there is only one red and one white remaining for your second pick, so the probability becomes $\frac{1}{2}$. If you drew the red ball on your first pick then the probability becomes $0$; there are no more red balls.</p>
<p>When you get to your third pick, the probability of drawing the red ball is $1$ or $0$ because there will be only $1$ ball left.</p>
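<p>Multiplying the conditional probabilities along the way (a small computation of my own) shows that, before any draws are made, the red ball is equally likely to turn up at every position: $P(\text{red at draw }k)=1/N$ for each $k$.</p>

```python
from fractions import Fraction

def p_red_at(k, N):
    """P(first k-1 draws are white and draw k is red), without replacement."""
    p = Fraction(1)
    for i in range(k - 1):                  # draws 1 .. k-1 are white
        p *= Fraction(N - 1 - i, N - i)
    return p * Fraction(1, N - (k - 1))     # draw k is the red ball

N = 5
probs = [p_red_at(k, N) for k in range(1, N + 1)]
print(probs)        # five copies of 1/5
print(sum(probs))   # 1
```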
|
2,198,293 | <p>Having a bit of trouble with how I am supposed to go about solving this problem. Any guidance would be great.</p>
<p>Show that there is at least one real solution to:$$x^5 - x^2 - 4 = 0 $$</p>
<p>Thanks in advance.</p>
| A-B-izi | 416,892 | <p>If $z$ is a non-real complex root of a polynomial with real coefficients, then the conjugate of $z$ is also a root (that is an easy thing to prove).
Therefore the non-real roots of a polynomial with real coefficients come in conjugate pairs, so their number is even.
Now, this polynomial has $5$ roots by the Fundamental Theorem of Algebra, so at least one of them must be real.</p>
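<p>One can also exhibit the real root directly via the intermediate value theorem (a numerical aside of my own): $f(1)=-4<0$ and $f(2)=24>0$, so bisection on $[1,2]$ homes in on it.</p>

```python
def f(x):
    return x**5 - x**2 - 4

lo, hi = 1.0, 2.0          # f(1) = -4 < 0 < 24 = f(2)
for _ in range(60):        # plain bisection
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(root, f(root))       # root ≈ 1.4336, f(root) ≈ 0
```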
|
2,198,293 | <p>Having a bit of trouble with how I am supposed to go about solving this problem. Any guidance would be great.</p>
<p>Show that there is at least one real solution to:$$x^5 - x^2 - 4 = 0 $$</p>
<p>Thanks in advance.</p>
| Emilio Novati | 187,568 | <p>The polynomial has degree $5$, so it has $5$ roots in $\mathbb{C}$. We know that the non-real roots always come in conjugate pairs when the coefficients are real, so the given polynomial can have at most $4$ non-real roots, and at least one root must be real.</p>
|
3,178,648 | <blockquote>
<p>We assign to every element <span class="math-container">$i$</span> from <span class="math-container">$N=\{1,2,...,n\}$</span> a positive integer <span class="math-container">$a_i$</span>. Suppose <span class="math-container">$$a_1+a_2+...+a_n = 2n-2$$</span> then prove that map <span class="math-container">$T: \mathcal{P}(N) \to \{1,2,...,2n-2\}$</span> defined with <span class="math-container">$$T(X) = \sum _{i\in X}a_i$$</span> is surjective. </p>
</blockquote>
<hr>
<p>We can assume that <span class="math-container">$a_1\leq a_2\leq ...\leq a_n$</span>. </p>
<p>Clearly, <span class="math-container">$a_1 = a_2 = 1$</span> and thus <span class="math-container">$1,2,2n-3,2n-4$</span> are in the range. </p>
<p>Also, if <span class="math-container">$a_i=2$</span> for some <span class="math-container">$i$</span> then we could easily apply induction. </p>
<p>Say <span class="math-container">$b_1< b_2<...<b_k$</span> are all different values that appear among <span class="math-container">$a_i$</span>. </p>
<p>Then we have <span class="math-container">$n _1\cdot b_1+n_2\cdot b_2+...+n_k \cdot b_k = 2n-2$</span> and <span class="math-container">$n_1+n_2+..+n_k = n$</span>. We have to prove that for each <span class="math-container">$l\leq 2n-2$</span> we have <span class="math-container">$$n' _1\cdot b_1+n'_2\cdot b_2+...+n'_k \cdot b_k = l$$</span></p>
<p>for some <span class="math-container">$n'_i\leq n_i$</span>. And here it stops. I have no idea how to find all those <span class="math-container">$n_i'$</span>. Any ideas?</p>
| String | 94,971 | <p>Assume that <span class="math-container">$\{a_i\}$</span> contains <span class="math-container">$k$</span> copies of <span class="math-container">$1$</span>. Assume in addition to this that <span class="math-container">$a$</span> is the second lowest number occurring somewhere in the sequence. Then we have:
<span class="math-container">$$
k+a(n-k)\leq a_1+...+a_n=2n-2
$$</span>
which can be rearranged to see that:
<span class="math-container">$$
k\geq\frac{(a-2)n+2}{a-1}> a-2
$$</span>
The last inequality stems from the fact that <span class="math-container">$a\leq n-1$</span>. Hence the sequence contains at least <span class="math-container">$a-1$</span> copies of <span class="math-container">$1$</span>.</p>
<p>Finally, note that if we remove <span class="math-container">$a$</span> and <span class="math-container">$a-2$</span> copies of <span class="math-container">$1$</span> from the sequence we have removed <span class="math-container">$a-1$</span> terms and reduced the sum by <span class="math-container">$2(a-1)$</span>:
<span class="math-container">$$
a+(a-2)\cdot 1=2(a-1)
$$</span>
Hence we can use induction and we are done.</p>
<hr>
<p><strong>Regarding the induction</strong></p>
<p>We have the base case <span class="math-container">$n=2,a_1=a_2=1$</span> which is easily checked. Note that <span class="math-container">$n=1$</span> is impossible.</p>
<p>Now assume we have shown all cases <span class="math-container">$n<m$</span>. Consider <span class="math-container">$n=m$</span>.</p>
<hr>
<p>To construct the <strong>higher numbers</strong>, simply consider the total of <span class="math-container">$2m-2$</span> and subtract subset sums:
<span class="math-container">$$
0,1,...,2a-1
$$</span>
by removing subsets of <span class="math-container">$a$</span> and the <span class="math-container">$a-1$</span> copies of <span class="math-container">$1$</span>. This gets us as far down as:
<span class="math-container">$$
2m-2-(2a-1)=2(m-a)-1
$$</span></p>
<hr>
<p>To account for the <strong>lower numbers</strong>, remove <span class="math-container">$a-1$</span> terms that sum to <span class="math-container">$2(a-1)$</span> by using the arguments above. Note that for <span class="math-container">$m>2$</span> we have <span class="math-container">$a\geq 2$</span>. Then we are left with <span class="math-container">$m-(a-1)$</span> terms that satisfy:
<span class="math-container">$$
\begin{align}
\sum a_i &=2m-2-2(a-1)\\
&=2\left\{m-(a-1)\right\}-2
\end{align}
$$</span>
which is one of the cases covered by the induction hypothesis. Note that <span class="math-container">$2\leq m-(a-1)<m$</span> because <span class="math-container">$2\leq a\leq m-1$</span> so induction holds. Thus we have also covered the sums from <span class="math-container">$1$</span> up to:
<span class="math-container">$$
2\left\{m-(a-1)\right\}-2=2(m-a)
$$</span>
and so all sums from <span class="math-container">$1$</span> through <span class="math-container">$2m-2$</span> have been accounted for.</p>
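<p>(A brute-force check I added, not part of the original answer; the helper names are my own.) For small <span class="math-container">$n$</span> one can enumerate every multiset of <span class="math-container">$n$</span> positive integers summing to <span class="math-container">$2n-2$</span> and confirm that the subset sums cover all of <span class="math-container">$\{1,\dots,2n-2\}$</span>:</p>

```python
def subset_sums(a):
    """All values T(X) = sum of a subset X of the multiset a."""
    sums = {0}
    for v in a:
        sums |= {s + v for s in sums}
    return sums

def partitions(total, parts, smallest=1):
    """Non-decreasing tuples of `parts` positive integers summing to `total`."""
    if parts == 1:
        if total >= smallest:
            yield (total,)
        return
    for first in range(smallest, total // parts + 1):
        for rest in partitions(total - first, parts - 1, first):
            yield (first,) + rest

# T is surjective onto {1, ..., 2n-2} for every admissible sequence.
for n in range(2, 9):
    targets = set(range(1, 2 * n - 1))
    for a in partitions(2 * n - 2, n):
        assert targets <= subset_sums(a)
```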
|
2,566,546 | <p>Show that for any $x_1 < x_2$ and $y_1 < y_2$ one has
$P(x_1 < X ≤ x_2, y_1 < Y ≤ y_2) = F(x_2, y_2) + F(x_1, y_1) − F(x_1, y_2) − F(x_2, y_1)$.</p>
<p>Would I just need to split the LHS into something that gives me the right-hand side?</p>
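<p>(A sketch I added; it is the standard inclusion-exclusion argument, not part of the original post.) With $F(x,y)=P(X\le x,\,Y\le y)$,
$$P(x_1 < X \le x_2,\, y_1 < Y \le y_2) = P(X\le x_2, Y\le y_2) - P(X\le x_1, Y\le y_2) - P(X\le x_2, Y\le y_1) + P(X\le x_1, Y\le y_1),$$
since subtracting the two overlapping corner regions removes their intersection $\{X\le x_1, Y\le y_1\}$ twice, and the last term adds it back once. Each probability on the right is a value of $F$, which gives the stated formula.</p>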
| TZakrevskiy | 77,314 | <p>The most trivial example would be the identity matrix - it is a projection on the whole space. The matrices with $\pm 1$ on the main diagonal and $0$ outside this diagonal are square roots of the identity. In other words, you have at least $2^n$ different square roots.</p>
|
4,081,029 | <p>I've been reading <a href="https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-07-dynamics-fall-2009/lecture-notes/MIT16_07F09_Lec03.pdf" rel="nofollow noreferrer">this pdf</a> about vector transformations and I don't quite understand how to implement it in a computer program. On page 10, it shows you how to transform a vector from one coordinate system to another, exactly how would this look on paper? Say I had two 3d vectors that represented directions and I wanted to get the relative direction of <span class="math-container">$vector A$</span> to <span class="math-container">$vector B$</span>. So if <span class="math-container">$A = (0,0,0)$</span> and <span class="math-container">$B = (0,1,0)$</span>, <span class="math-container">$C$</span> would also be <span class="math-container">$(0,1,0)$</span>. If <span class="math-container">$A = (0,1,0)$</span>, <span class="math-container">$C$</span> would then be <span class="math-container">$(0,0,-1)$</span>. Would I make matrices of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and just multiply them together? Other places have talked about getting the dot product? What would the <span class="math-container">$i$</span> and <span class="math-container">$i'$</span> vectors be in this case? Thank you.</p>
<p>I'm pretty sure this is all the info I need but in case there is an obviously better way of doing it I will explain my project. It's a simple voxel 3D raycaster. I'm actually using Unity and rendering to a texture using a compute shader.</p>
<p>The way I am calculating the rays is I am centering the pixel coordinates as if they were 3D space coordinates, moving them a little forward and getting the directions from 0,0,0 to each of those points (and shrinking the dimensions for field-of-view). Then, I transform those directions relative to the player's transform every frame to get the ray directions for the GPU. Obviously, this also needs to be calculated on the GPU so I can't use the handy Unity method I used just to test if it would work (Transform.TransformDirection). So I guess I could also use a relative point transform function too and just send those points and the player's transform to the GPU.</p>
| Tristan367 | 901,849 | <p>So <a href="https://computergraphics.stackexchange.com/questions/8562/how-to-convert-from-object-space-into-world-space-exercise-from-3d-math-primer">this page</a> ended up helping me. I simply send the positions of the pixels as if they were a screen slightly in front of the player at 0,0,0 and center them and whatnot, then in the compute shader I transform those points to world space relative to the player's rotation for each ray like this:</p>
<pre><code>float3 transformDirectionFromPoint(float3 p) {
float3 u1 = p.x * playerWorldRight;
float3 u2 = p.y * playerWorldUp;
float3 u3 = p.z * playerWorldForward;
return u1 + u2 + u3; // the direction to that point
}
</code></pre>
<p>And then I use that direction to cast a ray out from the player's position. I found with higher resolutions I needed to decrease the field of view.</p>
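<p>(My own illustration, not from the original post; the frames below are hypothetical.) In any language the shader function is just a change of basis: multiply the local direction's components by the frame's world-space basis vectors and sum. A plain-Python sketch of the same arithmetic:</p>

```python
def transform_direction(p, right, up, forward):
    """Map a direction given in the player's local frame to world
    space: a linear combination of the player's world-space basis
    vectors (the same arithmetic as the HLSL snippet above)."""
    return tuple(
        p[0] * right[i] + p[1] * up[i] + p[2] * forward[i]
        for i in range(3)
    )

# Identity frame: local and world directions coincide.
assert transform_direction((0, 1, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) == (0, 1, 0)

# Player pitched 90 degrees up (forward now +y, up now -z): a locally
# "forward" ray (0, 0, 1) becomes +y in world space.
assert transform_direction((0, 0, 1), (1, 0, 0), (0, 0, -1), (0, 1, 0)) == (0, 1, 0)
```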
|
1,160,699 | <p>In my discrete math book, I was tasked with finding a counterexample for this:</p>
<blockquote>
<p>If $n$ is prime, then $2^n-1$ is prime.</p>
</blockquote>
<p>Does there exist a counterexample for such a statement? Also, am I wrong in thinking that when something asks for a counterexample, it is looking for some logic that proves the original statement to be false?</p>
<p>Any help is appreciated, as I've got a test on subjects like this tomorrow.</p>
| user26486 | 107,671 | <p>$2^n-1\in\mathbb P\implies n\in\mathbb P$</p>
<p>This is true because if $n\not\in\mathbb P$, then $n=kl$ for some $k,l\in\mathbb Z_{\ge 2}$ and so </p>
<p>$$2^n-1=2^{kl}-1=(2^k-1)(2^{k(l-1)}+2^{k(l-2)}+\cdots +2^k+1)$$</p>
<p>And this is composite, since both $2^k-1$ and $2^{k(l-1)}+2^{k(l-2)}+\cdots +2^k+1$ are integers larger than $1$.</p>
<p>Primes of the form $2^n-1$ are called <a href="http://en.wikipedia.org/wiki/Mersenne_prime">Mersenne primes</a>. It is not known whether there are infinitely many of them.</p>
<p>The implication is not true the other way around.<br>
I.e., $n\in\mathbb P\not\implies 2^n-1\in\mathbb P$</p>
<p>The smallest counterexample is $n=11$.</p>
<p>If $n=11$, then $n\in\mathbb P$, but $2^{11}-1=23\cdot 89\not\in\mathbb P$.</p>
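<p>(A quick computational check I added, not part of the answer.) Trial division is enough to list, among small primes $n$, those for which $2^n-1$ is composite:</p>

```python
def is_prime(m):
    """Naive trial division, fine for numbers of this size."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Primes n below 30 for which 2^n - 1 fails to be prime:
misses = [n for n in range(2, 30) if is_prime(n) and not is_prime(2**n - 1)]
print(misses)        # [11, 23, 29] -- 11 is the smallest counterexample
assert 2**11 - 1 == 23 * 89
```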
|
1,160,699 | <p>In my discrete math book, I was tasked with finding a counterexample for this:</p>
<blockquote>
<p>If $n$ is prime, then $2^n-1$ is prime.</p>
</blockquote>
<p>Does there exist a counterexample for such a statement? Also, am I wrong in thinking that when something asks for a counterexample, it is looking for some logic that proves the original statement to be false?</p>
<p>Any help is appreciated, as I've got a test on subjects like this tomorrow.</p>
| David R. | 158,279 | <p>That's false. But if it makes you feel any better, you're making a mistake very similar to Fermat's. In 1650, Fermat conjectured that all numbers of the form $2^{2^n} + 1$ are prime. They didn't have computers back then, so it was very difficult to find that $2^{2^5} + 1 = 641 \times 6700417$.</p>
<p>But back then it was easier to find that $2^{11} - 1 = 23 \times 89$. That's just one counterexample to the assertion that $2^n - 1$ is prime whenever $n$ is prime; in fact, most prime $n$ lead to composite $2^n - 1$. A few more counterexamples: $n = 23, 29, 37, 41, 43, 47, 53, 59, 67, 71, 73, 79, 83, 97$. These are getting a little too large to test by hand. In fact, Marin Mersenne also made some mistakes: for example, he said $2^{67} - 1$ is prime when it is in fact composite, and he skipped over $2^{61} - 1$, which is prime (or maybe his "1" looked like a "7"?--he made other mistakes, though).</p>
<p>When "something asks for a counterexample," you just need to produce one example to show the assertion is wrong. But "some logic that proves the original statement to be false" can be very helpful, especially when the first counterexample is quite large and not readily accessible to manual computation, or for some reason not immediately obvious.</p>
|
1,025,588 | <p>I just started doing AM-GM inequalities for the first time about two hours ago. In those two hours, I have completed exactly two problems. I am stuck on this third one! Here is the problem:</p>
<p>If $a, b, c \gt 0$ prove that $$ a^3 +b^3 +c^3 \ge a^2b +b^2c+c^2a.$$</p>
<p>I am going crazy over this! A hint or proof would be much appreciated. Also any general advice for proving AM-GM inequalities would bring me happiness to my heart. Thank you!</p>
| Macavity | 58,320 | <p>You already have a great answer from @Adriano. To use AM-GM here, in general the idea would be to observe exponents on both sides and try finding a convex combination of $(3, 0, 0), (0, 3, 0)$ and $(0, 0, 3)$ which gives you a term like $(2, 1, 0)$. </p>
<p><em>Muirhead's inequality - if you're familiar with it - assures us this will work as $[3, 0, 0] \succ [2, 1, 0]$</em></p>
<p>So you may consider the following generic equation with non-negative $\alpha,\beta,\gamma$ satisfying $\alpha+\beta+\gamma=1$:
$$\alpha (3, 0, 0)+\beta (0, 3, 0) +\gamma(0, 0, 3)= (2, 1, 0)$$</p>
<p>Obviously $\alpha = \frac23, \beta = \frac13, \gamma=0$ comes to mind. Thus the basic inequality to use would be the AM-GM:
$$\tfrac23a^3 + \tfrac13b^3 \ge a^2b$$</p>
<p>Summing the three similar cyclic inequalities gets you the result.</p>
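<p>(A numerical sanity check I added, not part of the answer.) Random positive triples confirm both the single weighted AM-GM term and the cyclic sum:</p>

```python
import random

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    # Weighted AM-GM term by term: (2/3)a^3 + (1/3)b^3 >= a^2 b.
    assert (2/3) * a**3 + (1/3) * b**3 >= a**2 * b - 1e-9
    # Summing the three cyclic versions gives the full inequality.
    assert a**3 + b**3 + c**3 >= a**2 * b + b**2 * c + c**2 * a - 1e-9
```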
|
1,696,713 | <p>I am solving exact differential equation, but I am stuck on the step on how to simplify this term or how to rewrite it. </p>
<p>$e^{-2\ln{\sin{x}}}$</p>
| pjs36 | 120,540 | <p>I'd like to expand upon my comment because this is a very interesting question.</p>
<hr>
<p>First, I think the word "causation" is really throwing people off. Causation <em>does</em> have a specific meaning that's been modeled in various ways (I'm thinking statistics and mathematical logic), but none of the ways I know about capture what (I think) you're after.</p>
<p>The <a href="https://johncarlosbaez.wordpress.com/2015/04/07/resource-convertibility-part-1/" rel="nofollow">link</a> I posted in my comment is to a post about "Resource Convertibility," which really perfectly captures what you've been thinking about, or at least the example you gave. I don't know much about it, but it's fun to think about, so I'll try to say a little bit.</p>
<hr>
<p>You've got a very interesting idea that surprisingly intersects with some current research! </p>
<p>Unfortunately, it may turn out that <em>equality</em> has connotations that are inappropriate, as you've already seen:</p>
<blockquote>
<p>$$\rm flour\ + milk\ +eggs\ + (other\ ingredients) = bread$$</p>
</blockquote>
<p>is indeed something of an unfortunate notation, because equality is <em>symmetric</em> and as you've pointed out, our conversion here is really one way.</p>
<p>Tobias Fritz has evidently been thinking about this, and decided that an <em>inequality</em> was really the way to go: It makes more sense to write</p>
<p>$$\rm flour\ + milk\ +eggs\ + (other\ ingredients) \ge bread$$</p>
<p>and think of this statement as something like "having flour, milk, eggs, and other stuff is <em>at least as good</em> as having bread" (highly paraphrased from link above). It's also probably best to avoid <em>causation</em> and speak strictly about the ability to <em>convert</em> the things on the left to the things on the right (as the act of bringing milk and eggs together certainly doesn't <em>cause</em> bread to form of its own volition!).</p>
<p>The key features of his formulation are that</p>
<ul>
<li>You have some means of "comparing" objects, using $\ge$.
<ul>
<li>Everything is comparable to itself (i.e., $\rm bread \ge bread$). </li>
<li>Comparisons are <em>transitive</em>. For example if $\rm bread\ ingredients \ge bread$, and $\rm bread \ge toast$, then $\rm bread\ ingredients \ge toast$ (just bake the ingredients into bread, then slice and toast!).</li>
<li>Finally, antisymmetry (think $x \ge 3$ and $3 \ge x$ means $x = 3$) is used as a means to decide what resources are equivalent. We have, for example (by visiting one's favorite financial institution and trading cash for different cash),</li>
</ul></li>
</ul>
<p>$$\rm five\ \$1\ bills \ge one\ $5\ bill \qquad and \qquad one\ $5\ bill \ge five\ \$1\ bills.$$</p>
<ul>
<li>We also have the ability to add objects, as in the $\rm flour + milk + \ldots$ example (but the $+$ really just lets us build a shopping cart from what I can tell; adding doesn't convert anything, that's all encapsulated in $\ge$).</li>
</ul>
<p>With these abilities, you get to say you're studying fancy things called ordered commutative monoids, and you can read much much more in the series of posts linked above, as well as <a href="http://arxiv.org/abs/1504.03661" rel="nofollow">the paper</a> that ensued (there's a lot of fancy notation and theorems, but there's some value to skimming at least portions of the paper, if you find the blog series interesting enough).</p>
<hr>
<p>Has any of this turned out to be useful, from a practical standpoint? I have no idea! But John Baez (the person running the blog) has been involved in, and popularized, efforts to build a framework to talk and think about network theory: Chemical reactions (think $H + O \ge H_2O$), birth-death processes, resource conversion, etc. It turns out that classical mathematics can only say so much about these subjects.</p>
<p>This perspective on Resource Convertibility is just one of many efforts to find a good framework.</p>
|
484,550 | <p>The problem is as follows: let $n_1, n_2,..., n_t$ be positive integers. Prove that if $n_1+n_2+...+n_t-t+1$ objects are placed into $t$ boxes, then for some $i, i=1, 2, ..., t$, the $i$th box contains at least $n_i$ objects. </p>
<p>I'm having difficulty getting started in developing a proof because I have no intuition as to why this should be true or whether or not it actually is. Could someone help get me started?</p>
| Rebecca J. Stones | 91,818 | <p>A counterexample would consist of $t$ boxes with $b_i$ balls, $1 \leq i \leq t$, such that $$b_i \leq n_i-1$$ for all $1 \leq i \leq t$. Now find an upper bound on $\sum_{i=1}^t b_i$ using the above inequality.</p>
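<p>(A small check I added to make the hint concrete; the box counts are my own example.) If every box $i$ stays below $n_i$, the grand total caps at $\sum_i (n_i - 1) = \sum_i n_i - t$, so $\sum_i n_i - t + 1$ objects are one too many:</p>

```python
def max_without_hit(ns):
    """Most objects possible if box i never reaches n_i objects."""
    return sum(n - 1 for n in ns)

ns = [3, 1, 4, 2]                      # hypothetical thresholds n_1..n_t
t = len(ns)
assert max_without_hit(ns) == sum(ns) - t          # here: 10 - 4 = 6
# Placing sum(ns) - t + 1 = 7 objects therefore forces some box i
# to contain at least n_i objects.
assert sum(ns) - t + 1 > max_without_hit(ns)
```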
|
2,174,454 | <p>My teacher replaces $\sum_{x=0}^{y} {{y!} \over {x! (y-x)!}}$ with $2^y$, so I tried to prove this by mathematical induction (my teacher did not ask me to do that; I just want to convince myself that the statement is true).</p>
<p>My attempt:</p>
<ul>
<li>when $y=1$ :</li>
</ul>
<p>L.H.S.
$2^y=2$</p>
<p>R.H.S.
$\sum_{x=0}^{1} {{y!} \over {x! (y-x)!} } =2$</p>
<p>So this is true when $y=1$.</p>
<ul>
<li>Assume the statement is true when $y=k$, so </li>
</ul>
<p>$\sum_{x=0}^{k} {{k!} \over {x! (k-x)!} } =2^k$</p>
<ul>
<li>When $y=k+1$</li>
</ul>
<p>L.H.S.</p>
<p>$2^y=2^{k+1}=2*2^k=2 * \sum_{x=0}^{k} {{k!} \over {x! (k-x)!} }$</p>
<p>R.H.S</p>
<p>$\sum_{x=0}^{k+1} {{(k+1)!} \over {x! (k-x+1)!} }=\sum_{x=0}^{k+1} {{(k+1)k!} \over {x! (k-x+1)(k-x)!} }$</p>
<p>But now I don't know how can I complete it ? </p>
| Ángel Mario Gallegos | 67,622 | <p>Observe that
$$\sum_{x=0}^y\frac{y!}{x!(y-x)!}=\sum_{x=0}^y{y\choose x}$$
This is precisely the expansion of $2^y=(1+1)^y$ given by the Binomial Theorem.</p>
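<p>(A quick computational check I added.) The identity the answer invokes is easy to confirm:</p>

```python
from math import comb

# Binomial theorem at x = y = 1: sum_{x=0}^{y} C(y, x) = (1 + 1)^y = 2^y.
for y in range(20):
    assert sum(comb(y, x) for x in range(y + 1)) == 2**y
```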
|
3,748,879 | <p><span class="math-container">$g : \mathbb R \to [0,1]$</span> is a non-decreasing and right continuous step function such that <span class="math-container">$g(x)=0$</span> for all <span class="math-container">$x \leq 0$</span> and <span class="math-container">$g(x)=1$</span> for all <span class="math-container">$x \geq 1$</span>. Let us define <span class="math-container">$g^{-1}(y) = \inf { \{x : x \geq 0, \ g(x) \geq y\} }$</span></p>
<p>Then, is this function continuous, right- or left-continuous, or neither?</p>
<p>Where I'm specifically having a problem is the [<span class="math-container">$g(x) \geq y$</span>] part. I do not understand what this means in this context.</p>
<p>Edit: it has been pointed out to me that the function <span class="math-container">$g$</span> is not defined in <span class="math-container">$(0,1)$</span> so the question is incorrect. So please just assume that function is well defined but however many steps the question says exists, exist between <span class="math-container">$(0,1)$</span>.</p>
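<p>(My own concrete example, not from the post; the particular step function and the grid search below are hypothetical choices.) Numerics on one right-continuous step function suggest the usual answer: this <span class="math-container">$g^{-1}$</span> is left-continuous but in general not right-continuous:</p>

```python
def g(x):
    """Right-continuous non-decreasing step function (hypothetical example)."""
    if x < 0.5:
        return 0.0
    if x < 0.8:
        return 0.4
    return 1.0

def g_inv(y, step=1e-4):
    """inf{x >= 0 : g(x) >= y}, approximated on a fine grid."""
    x = 0.0
    while g(x) < y:
        x += step
    return x

# At the jump value y = 0.4 the left limit matches g_inv(0.4) = 0.5,
# while the right limit jumps to 0.8: left- but not right-continuous.
assert abs(g_inv(0.4) - 0.5) < 1e-3
assert abs(g_inv(0.4 - 1e-9) - 0.5) < 1e-3
assert abs(g_inv(0.4 + 1e-9) - 0.8) < 1e-3
```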
| Jake Mirra | 278,017 | <p>Convergence of the integral on the interval <span class="math-container">$ [0,1] $</span> always occurs by continuity (thanks to the <span class="math-container">$ 2020 $</span> term). So we can restrict our attention to the interval <span class="math-container">$ [1, \infty) $</span>. We see that the integrand <span class="math-container">$ I_1 = (x^p + 2020)^{-q} $</span> can be directly compared to <span class="math-container">$ I_2 = x^{-pq} $</span>. Indeed we have <span class="math-container">$ I_1 \leq I_2 $</span>; but also it's not hard to find a constant <span class="math-container">$ C $</span> such that <span class="math-container">$ I_2 \leq C I_1 $</span> for <span class="math-container">$ x \in [1, \infty) $</span>. Thus, the original integral converges on <span class="math-container">$ [1,\infty) $</span> precisely if <span class="math-container">$ pq > 1 $</span>.</p>
|
434,685 | <p>Suppose that $X^*$ is the dual space of a normed space $X$. If we renorm the space $X^*$ with a new norm equivalent to the first one, is this new normed space the dual of $X$ as well?
(I think it suffices to prove that a functional $f$ is continuous with respect to norm 1 if and only if it is continuous with respect to norm 2, where norms 1 and 2 are equivalent. This seems obvious!)</p>
<p>Thanks for the help.</p>
| Julien | 38,053 | <p>No. Since you don't change the vector space $X^*$ by renorming it, it remains the dual as the set of continuous linear functionals on $X$. But it is no longer the dual of $X$ as a normed vector space, since that means precisely $\|f\|=\sup_{\|x\|\leq 1} |f(x)|$, which is completely determined by the norm on $X$.</p>
<p>Along these lines, what is true is: if you put an equivalent norm on $X$, then $X$ has the same bounded (= continuous) linear functionals as before, so the vector space $X^*$ remains the same. And both induced norms on $X^*$ are equivalent as well. Maybe that's what you meant to ask.</p>
<p>To prove the statement of that last paragraph, assume first that $\|x\|_1\leq C\|x\|_2$ for some $C>0$. Then $\|x\|_1\leq 1$ whenever $\|Cx\|_2\leq 1$, whence for every linear functional on $X$
$$
C\|f\|_1=C\sup_{\|x\|_1\leq 1}|f(x)|\geq \sup_{\|Cx\|_2\leq 1}|f(Cx)|= \sup_{\|y\|_2\leq 1}|f(y)|=\|f\|_2.
$$
In particular, $f$ is $\|\cdot\|_2$ bounded whenever it is $\|\cdot\|_1$ bounded. The result follows by symmetry.</p>
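<p>(A finite-dimensional illustration I added; the answer's statement is about general normed spaces, but the constants transfer the same way.) On $\mathbb R^2$ we have $\|x\|_2\le\|x\|_1\le\sqrt2\,\|x\|_2$, and the induced dual norms $\|f\|_{1^*}=\max(|f_1|,|f_2|)$ and $\|f\|_{2^*}=\sqrt{f_1^2+f_2^2}$ inherit equivalence with the same constants:</p>

```python
import math
import random

random.seed(1)
for _ in range(1000):
    f1, f2 = random.uniform(-5, 5), random.uniform(-5, 5)
    dual1 = max(abs(f1), abs(f2))    # dual norm of || . ||_1 (the sup norm)
    dual2 = math.hypot(f1, f2)       # dual norm of || . ||_2 (Euclidean)
    # Equivalent primal norms give equivalent dual norms:
    assert dual1 <= dual2 <= math.sqrt(2) * dual1 + 1e-12
```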
|
4,322,897 | <p>Let <span class="math-container">$\{y_n\}_{n=1}^\infty$</span> be a sequence in <span class="math-container">$\Bbb{R}^n$</span> and <span class="math-container">$z\in\Bbb{R}^n$</span> be given. If <span class="math-container">$\{\|z-y_n\|\}_{n=1}^\infty$</span> is a convergent sequence, what can we say about <span class="math-container">$\{y_n\}_{n=1}^\infty$</span>? More precisely, can we conclude that <span class="math-container">$\{y_n\}_{n=1}^\infty$</span> is bounded?</p>
<p>For some reason, I'd like to extract a convergent subsequence from <span class="math-container">$\{y_n\}_{n=1}^\infty$</span>, and this can be done by showing that <span class="math-container">$\{y_n\}_{n=1}^\infty$</span> is bounded. Here is my attempt. Since <span class="math-container">$\{\|z-y_n\|\}_{n=1}^\infty$</span> converges, it must be bounded. Then <span class="math-container">$\exists r>0$</span> s.t. <span class="math-container">$\forall n\in\Bbb N$</span>, <span class="math-container">$\left|\|z-y_n\|\right|<r$</span>, but this amounts to saying that each term of <span class="math-container">$\{y_n\}_{n=1}^\infty$</span> falls into an open ball centered at <span class="math-container">$z$</span>. Thus, <span class="math-container">$\{y_n\}_{n=1}^\infty$</span> must be a bounded sequence. Is my attempt correct, please? Thank you.</p>
| ncmathsadist | 4,154 | <p>Translating a bounded set preserves its boundedness.</p>
<p>Proof.</p>
<p>Suppose <span class="math-container">$B$</span> is bounded and <span class="math-container">$y\in\mathbb{R}^d$</span>. Put <span class="math-container">$b = \sup\{\|x\|: x\in B\}$</span>. Then for <span class="math-container">$x\in B$</span>, <span class="math-container">$$\|x + y\| \le \|x\| + \|y\|
\le \|y\| + b.$$</span></p>
<p>We conclude that <span class="math-container">$\sup_{v\in y+B}\|v\| \le \|y\| + \sup_{x\in B}\|x\| = \|y\| + b.$</span></p>
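<p>(A concrete instance I added; the set and the translation vector are arbitrary examples.) The triangle-inequality bound in the proof can be checked directly:</p>

```python
def norm(v):
    """Euclidean norm of a tuple."""
    return sum(c * c for c in v) ** 0.5

B = [(0.5, -1.0), (2.0, 2.0), (-3.0, 0.25)]   # a bounded set (example)
y = (10.0, -4.0)                               # translation vector

b = max(norm(x) for x in B)                    # sup of ||x|| over B
translated = [tuple(xi + yi for xi, yi in zip(x, y)) for x in B]

# ||x + y|| <= ||x|| + ||y|| <= b + ||y|| for every x in B.
assert max(norm(v) for v in translated) <= norm(y) + b + 1e-12
```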
|
3,438,043 | <p><strong>Definition:</strong></p>
<p>Let <span class="math-container">$X$</span> be a set.</p>
<p>A set <span class="math-container">$\tau \subset P(X)$</span> is called a topology on <span class="math-container">$X$</span> if:</p>
<p>(a) <span class="math-container">$\emptyset , X\in \tau$</span></p>
<p>(b) <span class="math-container">$A,B\in \tau$</span> implies <span class="math-container">$A\cap B\in \tau$</span></p>
<p>(c) If <span class="math-container">$\alpha \in \tau$</span> then <span class="math-container">$\underset{A\in \alpha}\bigcup A\in \tau$</span>.</p>
<p>I have been given <span class="math-container">$$\tau :=\left \{ U\subset \mathbb{R} : \text{ For every } x\in U \text{ exists } \varepsilon >0 \text{ with } (x-\varepsilon, x+\varepsilon)\subset U\right \}$$</span>
Show that <span class="math-container">$(\mathbb{R}, \tau)$</span> is a topological space.</p>
<p><strong>My attempt:</strong></p>
<p>In order to show that <span class="math-container">$\emptyset, \mathbb{R}\in \tau$</span> , I'd say that if <span class="math-container">$\varepsilon \to 0$</span>, we have <span class="math-container">$(x,x)=\emptyset$</span> and if <span class="math-container">$\varepsilon \to \infty$</span> we have <span class="math-container">$U\subset \mathbb{R}\in \tau$</span>.</p>
<p>In order to show (b) I'd take two arbitrary intervals and intersect them - but how do I do it formally?</p>
<p>Sadly, I don't really know how to show (c).</p>
| Vijayakumar Muni | 400,662 | <p>First of all in your definition of a topology <span class="math-container">$\tau$</span> on <span class="math-container">$X,$</span> there is a typo in condition (c).</p>
<p>It is supposed to be <span class="math-container">$``$</span>for <span class="math-container">$\alpha\in J,$</span> where <span class="math-container">$J$</span> is an index set <span class="math-container">$\Big($</span>this index set <span class="math-container">$J$</span> is any of the following three types: (i) <span class="math-container">$J$</span> is either finite set, e.g. <span class="math-container">$\{1,2,\ldots,n\},$</span> where <span class="math-container">$n$</span> is a finite positive integer, (ii) <span class="math-container">$J$</span> is a countably infinite set, e.g. <span class="math-container">$\mathbb{N},$</span> the set of all positive integers, (iii) <span class="math-container">$J$</span> is an uncountable set, e.g. <span class="math-container">$\mathbb{Q}^c,$</span> the set of all irrational numbers<span class="math-container">$\Big),$</span> if <span class="math-container">$A_\alpha\in \tau,$</span> then <span class="math-container">$\bigcup_{\alpha\in J}A_\alpha\in \tau."$</span> </p>
<p>The meaning of condition (a) is <span class="math-container">$``$</span>the empty set and the whole <span class="math-container">$X$</span> are the members of <span class="math-container">$\tau."$</span></p>
<p>The meaning of condition (b) is <span class="math-container">$``$</span>the finite-intersection of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau."$</span></p>
<p>The meaning of condition (c) is <span class="math-container">$``$</span>the arbitrary union of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau."$</span> <span class="math-container">$\big($</span>Here the word arbitrary union refers to- the union of finite number of members, or the union of countably infinite number of members, or the union of uncountable number of members.<span class="math-container">$\big)$</span></p>
<p>If all these three conditions satisfies, then <span class="math-container">$\tau$</span> <span class="math-container">$\big($</span>viz. a subset of <span class="math-container">$\mathcal{P}(X)\big)$</span> is called a topology on <span class="math-container">$X,$</span> and <span class="math-container">$X$</span> is called a topological space endowed with a topology <span class="math-container">$\tau.$</span></p>
<p>Now, let us come to your example: </p>
<p>Given <span class="math-container">$X=\mathbb{R},$</span> the set of all real numbers, viz. <span class="math-container">$(-\infty, \, \infty).$</span></p>
<p>Given <span class="math-container">$\tau:=\Big\{U\,\,\Big| \, U\subseteq\mathbb{R},\,\,$</span>if<span class="math-container">$\,\,x\in U,\,\,$</span>then <span class="math-container">$\exists$</span> a finite<span class="math-container">$\,\,\epsilon_x \in (0, \infty)\,\,$</span>s.t.<span class="math-container">$\,\,(x-\epsilon_x, \, x+\epsilon_x)\subseteq U\Big\}.$</span></p>
<p>Now let us see why <span class="math-container">$\tau$</span> is a topology on <span class="math-container">$\mathbb{R}.$</span></p>
<p>For this, we need to check the conditions (a), (b), and (c). Remember, even if at least one of these conditions fails, then this <span class="math-container">$\tau$</span> will not be called as a topology on <span class="math-container">$\mathbb{R}.$</span> </p>
<p><span class="math-container">$\underline{\text{Condition (a).}}$</span> Note that <span class="math-container">$\varnothing\subset \mathbb{R}.$</span> But there is no <span class="math-container">$x\in \varnothing.$</span> Hence it is not required to check the condition: <span class="math-container">$``$</span>Does there exists any finite <span class="math-container">$\,\epsilon_x\in (0, \infty)\,\,$</span>s.t.<span class="math-container">$\,(x-\epsilon_x, \, x+\epsilon_x)\subseteq \varnothing,"\,$</span> as this condition is trivially true. <span class="math-container">$\big($</span>To understand more about this logic, please read any textbook on conditional propositions: Let <span class="math-container">$p$</span> and <span class="math-container">$q$</span> be two propositions. Then the compound proposition <span class="math-container">$p\Longrightarrow q$</span> is a true proposition under three cases: (i) both <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are true, (ii) both <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are false, (iii) <span class="math-container">$p$</span> is false and <span class="math-container">$q$</span> is true. The truthness of above conditional statement follows from (iii).<span class="math-container">$\big)$</span> Therefore <span class="math-container">$\varnothing\in \tau.$</span></p>
<p>Now <span class="math-container">$\mathbb{R}\subseteq \mathbb{R}.$</span> Then for every real number <span class="math-container">$x\in \mathbb{R},$</span> one can choose, for instance, <span class="math-container">$\epsilon_x=1.$</span> Then note that <span class="math-container">$(x-1, \, x+1)\subset \mathbb{R}.$</span> Therefore <span class="math-container">$\mathbb{R}\in \tau.$</span></p>
<p><span class="math-container">$\underline{\text{Condition (b).}}$</span> Let <span class="math-container">$U_1\in \tau$</span> and <span class="math-container">$U_2\in \tau.$</span> Then <span class="math-container">$U_1\cap U_2\subseteq \mathbb{R},$</span> as <span class="math-container">$U_1\subseteq \mathbb{R}$</span> and <span class="math-container">$U_2\subseteq \mathbb{R}.$</span> </p>
<p>If <span class="math-container">$U_1\cap U_2=\varnothing,$</span> then <span class="math-container">$U_1\cap U_2\in \tau,\,$</span> by condition (a).</p>
<p>If <span class="math-container">$U_1\cap U_2\neq\varnothing,$</span> then <span class="math-container">$\exists$</span> some <span class="math-container">$x\in U_1\cap U_2.\,\,$</span> Since <span class="math-container">$x\in U_1,$</span> therefore <span class="math-container">$\exists$</span> some finite number, say <span class="math-container">$\epsilon_{1x}\in (0, \infty)$</span> s.t. <span class="math-container">$(x-\epsilon_{1x}, \, x+\epsilon_{1x})\subseteq U_1.\,\,$</span> Similarly, as <span class="math-container">$x\in U_2,$</span> therefore <span class="math-container">$\exists$</span> some finite number, say <span class="math-container">$\epsilon_{2x}\in (0, \infty)$</span> s.t. <span class="math-container">$(x-\epsilon_{2x}, \, x+\epsilon_{2x})\subseteq U_2.$</span> Let us define <span class="math-container">$\epsilon_x:=$</span> min<span class="math-container">$\{\epsilon_{1x},\, \epsilon_{2x}\}.$</span> Then <span class="math-container">$\epsilon_x\in (0, \infty)$</span> is also a finite number s.t. <span class="math-container">$(x-\epsilon_x, \, x+\epsilon_x)\subseteq U_1\cap U_2.\,\,$</span> As <span class="math-container">$\,x\,$</span> was arbitrary element in <span class="math-container">$U_1\cap U_2,$</span> so for every <span class="math-container">$x\in U_1\cap U_2,$</span> we can find some finite number <span class="math-container">$\epsilon_x\in (0, \infty),$</span> s.t. <span class="math-container">$(x-\epsilon_x, \, x+\epsilon_x)\subseteq U_1\cap U_2.$</span> Hence <span class="math-container">$U_1\cap U_2\in \tau.$</span> </p>
<p>Now we prove that <span class="math-container">$\,U_1\cap\cdots \cap U_n\in \tau,\,\,$</span> whenever <span class="math-container">$\,U_1,\cdots, U_n\in \tau,$</span> where <span class="math-container">$n\in \mathbb{N}$</span> is finite.</p>
<p>For this let us assume <span class="math-container">$\,\bigcap_{i=1}^{n-1}U_i\in \tau,\,$</span> <span class="math-container">$($</span>where<span class="math-container">$\,\,n\geq 2).$</span> </p>
<p>Then <span class="math-container">$\,\bigcap_{i=1}^{n}U_i=\Big(\bigcap_{i=1}^{n-1}U_i\Big)\bigcap U_n.\,$</span> Denoting <span class="math-container">$\,\bigcap_{i=1}^{n-1}U_i=V,\,\,$</span>then by using the previous argument, we have <span class="math-container">$V\cap U_n\in \tau,\,\,$</span> as <span class="math-container">$V\in \tau,\, U_n\in \tau.\,\,$</span>Therefore <span class="math-container">$\bigcap_{i=1}^{n}U_i\in \tau.\,\,$</span>Hence by induction, any finite-intersection of members of <span class="math-container">$\tau$</span> is a member of <span class="math-container">$\tau.$</span> </p>
<p>*Note: If we take the intersection of an infinite collection of elements of <span class="math-container">$\tau,$</span> then it need not be a member of <span class="math-container">$\tau.$</span> For instance, let <span class="math-container">$U_n:=\big(-\frac{1}{n},\, \frac{1}{n}\big)\in \tau,$</span> for each <span class="math-container">$n\in \mathbb{N}.$</span> Now if we take their intersection:</p>
<p><span class="math-container">$$\bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big)=\{0\}\notin \tau,$$</span> as for the element <span class="math-container">$x=0\in \bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big),$</span> <span class="math-container">$\nexists\,$</span> any finite <span class="math-container">$\epsilon_0\in (0, \infty)\,$</span> s.t. <span class="math-container">$\,(0-\epsilon_0,\, 0+\epsilon_0)=(-\epsilon_0, \epsilon_0)\subseteq \bigcap_{n\in \mathbb{N}}\Big(-\frac{1}{n},\,\frac{1}{n}\Big).$</span> </p>
<p><span class="math-container">$\underline{\text{Condition (c).}}\,$</span> Let <span class="math-container">$U_\alpha\in \tau,\,$</span> for each <span class="math-container">$\alpha\in J.\,$</span> We need to check whether <span class="math-container">$\,\bigcup_{\alpha\in J}U_\alpha\in \tau?$</span> </p>
<p>Let <span class="math-container">$\,x\in \bigcup_{\alpha\in J}U_\alpha.\,$</span> <span class="math-container">$\big($</span>If there is no <span class="math-container">$\,x\in \bigcup_{\alpha\in J}U_\alpha,\,$</span> then <span class="math-container">$\bigcup_{\alpha\in J}U_\alpha=\varnothing\in \tau,\,$</span> by condition (a).<span class="math-container">$\big)\,$</span> Then <span class="math-container">$\,x\in U_\alpha,\,$</span> for some <span class="math-container">$\alpha\in J.\,$</span> Since <span class="math-container">$U_\alpha\in \tau,\,$</span> therefore <span class="math-container">$\exists$</span> some finite <span class="math-container">$\,\epsilon_{\alpha x}\in (0, \infty)\,$</span> s.t.<span class="math-container">$\,(x-\epsilon_{\alpha x},\,\, x+\epsilon_{\alpha x})\subseteq U_\alpha.\,$</span> But again as <span class="math-container">$U_\alpha\subseteq \bigcup_{\alpha\in J}U_\alpha,\,\,$</span> so by transitivity of <span class="math-container">$\,\subseteq,\,$</span> we have
<span class="math-container">$$(x-\epsilon_{\alpha x},\,\, x+\epsilon_{\alpha x})\subseteq \bigcup_{\alpha\in J}U_\alpha.$$</span> This condition is true for every <span class="math-container">$x\in \bigcup_{\alpha\in J}U_\alpha,\,$</span> as our chosen <span class="math-container">$x$</span> was arbitrary element of <span class="math-container">$\bigcup_{\alpha\in J}U_\alpha.\,$</span> Hence <span class="math-container">$\,\bigcup_{\alpha\in J}U_\alpha\in \tau.$</span></p>
<p>All the above verifications force us to conclude that <span class="math-container">$\mathbb{R}$</span> is a topological space endowed with the topology <span class="math-container">$\tau.$</span></p>
|
2,772,190 | <p>The question:</p>
<blockquote>
<p>Determine the solution set (in $\mathbb R$) for the equation $|x^2+2x+2| = |x^2-3x-4|$</p>
</blockquote>
<p>So far, I have determined that for this to be true, $|x^2-3x-4|$ must be greater or equal to $0$, giving $(x-4)(x+1)\ge0$. To find the solution set, $|x^2+2x+2|\ge-1$ as indicated by the roots of the RHS equation, but this is where I get stuck. </p>
<p>Where am I going wrong?</p>
| Francesco Carzaniga | 382,649 | <p>Since $|x| = x$ or $-x$, $$|x^2+2x+2| = |x^2-3x-4|$$ if and only if
$$x^2+2x+2 = x^2-3x-4$$ or
$$-(x^2+2x+2) = x^2-3x-4$$ or
$$x^2+2x+2 = -(x^2-3x-4)$$ or
$$-(x^2+2x+2) = -(x^2-3x-4)$$</p>
<p>Some cases are equivalent (can you guess which and why?), so the computations are easier than they look.</p>
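<p>As a quick sanity check (a sketch of mine, not from the original answer): the first and fourth cases coincide, as do the second and third, leaving $5x=-6$ and $2x^2-x-2=0$. The following Python snippet verifies the resulting roots:</p>

```python
import math

# Case 1: x^2 + 2x + 2 = x^2 - 3x - 4   reduces to  5x = -6.
# Case 2: x^2 + 2x + 2 = -(x^2 - 3x - 4) reduces to  2x^2 - x - 2 = 0.
roots = [-6 / 5,
         (1 + math.sqrt(17)) / 4,     # quadratic formula, discriminant = 1 + 16
         (1 - math.sqrt(17)) / 4]

for x in roots:
    lhs = abs(x * x + 2 * x + 2)
    rhs = abs(x * x - 3 * x - 4)
    print(f"x = {x:+.4f}  ->  |LHS - RHS| = {abs(lhs - rhs):.1e}")
```

<p>All three differences come out at machine precision, so the solution set is $\left\{-\frac65,\ \frac{1\pm\sqrt{17}}{4}\right\}$.</p>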
|
2,772,190 | <p>The question:</p>
<blockquote>
<p>Determine the solution set (in $\mathbb R$) for the equation $|x^2+2x+2| = |x^2-3x-4|$</p>
</blockquote>
<p>So far, I have determined that for this to be true, $|x^2-3x-4|$ must be greater or equal to $0$, giving $(x-4)(x+1)\ge0$. To find the solution set, $|x^2+2x+2|\ge-1$ as indicated by the roots of the RHS equation, but this is where I get stuck. </p>
<p>Where am I going wrong?</p>
| zipirovich | 127,842 | <p>There's a simply fact about absolute values (of real numbers $a$ and $b$) that you should know and understand:
$$|a|=|b| \quad \text{if and only if} \quad a=\pm b.$$
Think about it for a moment, and it should become clear to you.</p>
<p>Then apply this observation to the given equation: you will get two equations, as two possible cases, without absolute values, which will be easy enough to solve.</p>
|
357,493 | <p>I am searching for a book in commutative algebra which will develop the theory from an undergraduate level and lead to areas of current research. Any help is welcome.</p>
| Robert Cardona | 29,193 | <p>You can find '<a href="https://mathoverflow.net/a/385313/175094">A Term of Commutative Algebra</a>' by Allen Altman and Steven Kleiman (introduces Category Theory and has full solutions) which is a more in depth and updated version of 'Introduction to Commutative Algebra' by Atiyah, Macdonald. After that you could try Commutative Algebra I/II by Zariski, Samuel. Whilst simultaneously getting acquainted with Algebraic Geometry and Homological Algebra. If you think you can work through a lot by yourself and don't need solutions, skip the first book I mentioned and just start with Atiyah's book.</p>
<p>These books are just a basic introduction though; but they should keep you out of trouble for a while!</p>
|
357,493 | <p>I am searching for a book in commutative algebra which will develop the theory from an undergraduate level and lead to areas of current research. Any help is welcome.</p>
| Community | -1 | <p>Pinter's <a href="http://www.amazon.ca/Book-Abstract-Algebra-Second-Edition/dp/0486474178/ref=pd_rhf_dp_s_cp_3_1475?ie=UTF8&refRID=1QFKGETYTZ9A7RCCYWQ1" rel="nofollow"><em>A Book of Abstract Algebra (Second Edition)</em></a> is quite good. </p>
|
1,012,985 | <p>Let $A = \left[ \begin{matrix} 3 & 2 & 1\\ 5 & 0 &1\end{matrix}\right]$, </p>
<p>how can I know if there is a matrix $N$, s.t. $AN=0$ ($N$ not being a zero matrix)?</p>
| user187373 | 187,373 | <p>In this case there is a theorem that guarantees the existence of such an $N$ (of size $3 \times 1)$, because the matrix $A$ has more columns than rows.</p>
|
1,012,985 | <p>Let $A = \left[ \begin{matrix} 3 & 2 & 1\\ 5 & 0 &1\end{matrix}\right]$, </p>
<p>how can I know if there is a matrix $N$, s.t. $AN=0$ ($N$ not being a zero matrix)?</p>
| peterwhy | 89,922 | <p>Try to solve
$$A\pmatrix{x_1\\x_2\\x_3} = \pmatrix{0\\0}$$</p>
<blockquote>
<p>in the question they demand $N$ be at least two columns.</p>
</blockquote>
<p>And it doesn't matter how many columns $N$ should have. Say, from the above equation, you found a 3-element column vector $\mathbf x \ne \pmatrix{0&0&0}^T$ such that</p>
<p>$$A\mathbf x = \pmatrix{0\\0}$$</p>
<p>Then there would exist a $3\times j$ matrix $N = \mathbf x\pmatrix{c_1&c_2&\cdots&c_j}$, $c_1, \ldots, c_j\in\mathbb R$, such that</p>
<p>$$\begin{align*}
AN &= A\mathbf x\pmatrix{c_1&c_2&\cdots&c_j}\\
&= \pmatrix{0\\0}\pmatrix{c_1&c_2&\cdots&c_j}\\
&= \pmatrix{0&0&\cdots&0\\0&0&\cdots&0}
\end{align*}$$</p>
<blockquote>
<p>Okay, another question. Is it possible to find matrix $N$ such that $AN$ and $NA$ are zero matrices of their respective dimensions, without $N$ being the zero matrix?</p>
</blockquote>
<p>Similarly, try to solve
$$\begin{align*}
\pmatrix{y_1&y_2} A &= \pmatrix{0&0&0}\\
A^T \pmatrix{y_1\\y_2} &= \pmatrix{0\\0\\0}
\end{align*}$$</p>
<p>If there were one solution of non-zero row vector $\mathbf y^T$ that satisfies $\mathbf y^T A = \pmatrix{0&0&0}$, then can you construct an $N$ from $\mathbf y^T$ and $\mathbf x$ that satisfies your condition? And if you were not able to find such non-zero $\mathbf y^T$, can you reason that no non-zero $N$ would satisfy $NA = 0$?</p>
<p>If you notice, trying to find a non-trivial solution for $\pmatrix{y_1&y_2}A = \pmatrix{0&0&0}$ amounts to determining whether the rows of $A$ are linearly dependent. Can you see from $A$ whether that is the case?</p>
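<p>For completeness, a small plain-Python sketch of this computation for the concrete $A$ above (my own addition; the null vector is found by hand from the two equations):</p>

```python
# Row 2 of A x = 0 gives 5*x1 + x3 = 0, so x3 = -5*x1;
# row 1 then gives 3*x1 + 2*x2 - 5*x1 = 0, so x2 = x1.
x = [1, 1, -5]                                # a non-trivial null vector

A = [[3, 2, 1], [5, 0, 1]]
Ax = [sum(a * b for a, b in zip(row, x)) for row in A]
print(Ax)                                     # -> [0, 0]

# A 3x2 matrix N whose columns are multiples of x also satisfies A N = 0.
N = [[xi * c for c in (1, 2)] for xi in x]
AN = [[sum(A[i][k] * N[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
print(AN)                                     # -> [[0, 0], [0, 0]]
```
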
|
4,363,384 | <p>Let <span class="math-container">$A_i$</span>, <span class="math-container">$i \in I$</span> subsets of a space X given topology <span class="math-container">$\tau$</span>. Show that <span class="math-container">$\overline{\bigcup_i A_i }\subset \bigcup_i\overline{ A_i }$</span> does not hold necessarily :</p>
<p>My counterexample is in <span class="math-container">$\Bbb N$</span> with the co-finite topology and <span class="math-container">$A_k=\{2k\}=\overline{ A_k }$</span> because singletons are closed. Hence, <span class="math-container">$\bigcup_i\overline{ A_i } = 2\Bbb N$</span>. However, <span class="math-container">$\overline{\bigcup_i A_i }$</span> is the smallest closed set containing <span class="math-container">$2\Bbb N$</span>, since closed sets are finite or equal to <span class="math-container">$\Bbb N$</span>, then <span class="math-container">$\overline{\bigcup_i A_i }=\Bbb N$</span> so <span class="math-container">$\overline{\bigcup_i A_i }\nsubseteq \bigcup_i\overline{ A_i }$</span>.</p>
<p>But here, since <span class="math-container">$I$</span> is countable, it seems to disagree with the fact that <span class="math-container">$\overline{A \cup B}=\overline{A} \cup \overline{B}$</span>. Is my counterexample wrong ? Should I find a case with <span class="math-container">$I$</span> uncountable ?</p>
| Sourav Ghosh | 977,780 | <p>Consider, <span class="math-container">$(\Bbb{R}, \tau_{std}) $</span></p>
<p>Then, <span class="math-container">$\overline{\cup_{r\in \Bbb{Q}}\{r\}}=\overline{\Bbb{Q}}=\Bbb{R}$</span></p>
<p>And, <span class="math-container">$\cup_{r\in \Bbb{Q}}\
\overline{\{r\}}= \cup_{r\in \Bbb{Q}}\
\{r\} =\Bbb{Q}$</span></p>
|
1,675,411 | <p>So far I have this:</p>
<p>First consider $n = 5$. In this case $(5)^2 < 2^5$, or $25 < 32$. So the inequality holds for $n = 5$.</p>
<p>Next, suppose that $n^2 < 2^n$ and $n \geq 5$. Now I have to prove that $(n+1)^2 < 2^{(n+1)}$.</p>
<p>So I started with $(n+1)^2 = n^2 + 2n + 1$. Because $n^2 < 2^n$ by the hypothesis, $n^2 + 2n + 1$ < $2^n + 2n + 1$. As far as I know, the only way I can get $2^{n+1}$ on the right side is to multiply it by $2$, but then I get $2^{n+1} + 4n + 2$ on the right side and don't know how to get rid of the $4n + 2$. Am I on the right track, or should I have gone a different route?</p>
| Siminore | 29,672 | <p>Remark that
$$2^{n+1} = 2 \cdot 2^n > 2 n^2 = n^2 +n^2> n^2+2n+1 = (n+1)^2.$$
Indeed $n^2-2n-1=(n-1)^2-2>0$ for $n \geq 5$.</p>
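<p>A throwaway numeric check of mine (not part of the proof): both the statement and the auxiliary inequality used above can be verified over a range of $n$:</p>

```python
# n^2 < 2^n for all 5 <= n <= 100, and n^2 - 2n - 1 = (n-1)^2 - 2 > 0 there.
ok = all(n * n < 2 ** n for n in range(5, 101))
aux = all((n - 1) ** 2 - 2 > 0 for n in range(5, 101))
print(ok, aux)   # -> True True
```
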
|
1,351,458 | <p>Can someone please show me the steps (all of them… yeah, even the obvious ones)
to go from</p>
<p>$$\begin{align}\frac{y+1}{y-1} = 10^{x^2}\end{align}$$</p>
<p>to</p>
<p>$$\begin{align}y=\frac{10^{x^2}+1}{10^{x^2}-1}\end{align}$$</p>
| 3SAT | 203,577 | <p>Solve for $y$:</p>
<blockquote>
<p>$$\frac{y+1}{y-1} = 10^{x^2}$$</p>
</blockquote>
<p>Multiply both sides by $y-1$:
$$y+1=10^{x^2}(y-1)$$
Expand out terms of the right hand side:
$$y+1=10^{x^2}y-10^{x^2}$$
Subtract $1+10^{x^2}y$ from both sides:
$$y(1-10^{x^2})=-1-10^{x^2}$$
Divide both sides by $1-10^{x^2}$
$$\boxed{\color{blue}{y=\frac{10^{x^2}+1}{10^{x^2}-1}}}$$</p>
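<p>One can confirm the algebra numerically for a few sample values of $x$ (a sketch of my own; $x=0$ is excluded since it makes the denominator $10^{x^2}-1$ vanish):</p>

```python
# Plug the closed form back into the original equation (y + 1)/(y - 1) = 10^(x^2).
for x in (0.5, 1.0, 1.3):
    t = 10 ** (x * x)
    y = (t + 1) / (t - 1)
    residual = (y + 1) / (y - 1) - t
    print(f"x = {x}: residual = {residual:.1e}")   # residuals at machine precision
```
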
|
915,542 | <p>After working on an ODE I find I am needing to solve the integral </p>
<p>$$\int \frac{u}{b - au - u^2}\mathrm{d}u$$</p>
<p>Trig subs, banging heads against walls, and sobbing have not yielded a solution. Yet. </p>
<p>Could use a hand, thanks.</p>
| Mhenni Benghorbal | 35,472 | <p>Another approach is to use partial fractions:</p>
<blockquote>
<p>$$ \frac{u}{b-au-u^2} = \frac{A}{u-\alpha}+\frac{B}{u-\beta} $$</p>
</blockquote>
<p>where $\alpha, \beta$ are the roots of $ b-au-u^2 $. You need to determine $A$ and $B$. The answer will have the form</p>
<blockquote>
<p>$$ I = A\ln|u-\alpha|+B\ln|u-\beta|+C. $$</p>
</blockquote>
<p><strong>Note:</strong> Here are the roots</p>
<blockquote>
<p>$$ \alpha = -\frac{a}{2}+\frac{\sqrt {{a}^{2}+4\,b}}{2},\quad \beta = -\frac{a}{2}-\frac{\sqrt {{a}^{2}+4\,b}}{2} $$</p>
</blockquote>
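<p>For concreteness (my own illustration, keeping the answer's notation): writing $b-au-u^2=-(u-\alpha)(u-\beta)$ and matching numerators gives $A=\dfrac{-\alpha}{\alpha-\beta}$ and $B=\dfrac{\beta}{\alpha-\beta}$. A numeric check with the sample values $a=1$, $b=2$:</p>

```python
import math
import random

a, b = 1.0, 2.0                            # sample coefficients of b - a*u - u^2
s = math.sqrt(a * a + 4 * b)               # sqrt(a^2 + 4b) = 3 here
alpha, beta = (-a + s) / 2, (-a - s) / 2   # roots: 1 and -2

A = -alpha / (alpha - beta)
B = beta / (alpha - beta)

# Check the decomposition at random points away from the poles.
for _ in range(5):
    u = random.uniform(2.5, 10.0)
    lhs = u / (b - a * u - u * u)
    rhs = A / (u - alpha) + B / (u - beta)
    assert abs(lhs - rhs) < 1e-9

print(A, B)   # -> -1/3 and -2/3 for this choice of a, b
```

<p>So for $a=1$, $b=2$ one gets $I=-\frac13\ln|u-1|-\frac23\ln|u+2|+C$, which can be confirmed by differentiating.</p>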
|
2,509,230 | <p>I was studying the Chevalley-Serre relations, which can be summed up as follows:</p>
<p><span class="math-container">$$\tag{S1}\left[h_{i},\,h_{j}\right]=0$$</span></p>
<p><span class="math-container">$$\tag{S2}\left[e_{i},\,f_{i}\right]=h_{i} \quad \left[e_{i},\,f_{j}\right]=0 \quad\text{for } i\neq j$$</span></p>
<p><span class="math-container">$$\tag{S3} \left[h_{i},\,e_{j}\right]=A_{ij}e_{j}
\quad \left[h_{i},\,f_{j}\right]=-A_{ij}f_{j}$$</span></p>
<p><span class="math-container">$$\tag{S4} \text{ad}\left(e_{i}\right)^{1-A_{ij}}\left(e_{j}\right)=0
\quad\;\; \text{ad}\left(f_{i}\right)^{1-A_{ij}}\left(f_{j}\right)=0 \quad\text{for } i\neq j$$</span> </p>
<p>where <span class="math-container">$A_{ij}$</span> are the coefficients of the Cartan matrix. Now it seems to me that relations (S1),(S2), and (S3) are really quite natural, but I don't fully understand relations in (S4). Does anybody has an insight on what does those relations mean?</p>
| Hanno | 81,567 | <p>The relations prescribe how the Lie algebra is supposed to decompose when considered as a module over the copy ${\mathfrak s}{\mathfrak l}_2(i)$ of ${\mathfrak s}{\mathfrak l}_2({\mathbb k})$ spanned by $\{e_i,f_i,h_i\}$. Namely, if you know that $\text{ad}(e_i)^{a+1}(e_j)=0$ but $\text{ad}(e_i)^{a}(e_j)\neq 0$, then the ${\mathfrak s}{\mathfrak l}_2(i)$-submodule of ${\mathfrak g}$ spanned by $e_j$ has dimension $a+1$ (note that $\text{ad}(f_i)(e_j)=0$, so $e_j$ is a lowest weight vector for the generated ${\mathfrak s}{\mathfrak l}_2(i)$ submodule).</p>
<p>If you look at the A2 root system of ${\mathfrak s}{\mathfrak l}_3({\mathbb C})$ for example, you see that if $\{\alpha,\beta\}$ is a basis of the root system, then the root string $\alpha, \alpha + \beta, ...$ has only length $2$, in accordance with the fact that the Cartan matrix is $\tiny\begin{pmatrix} 2 & -1 \\ -1 & 2\end{pmatrix}$. If, in contrast, you look at the G2 root system, you'll see one chain of length $4$ and one of length $2$, in accordance with the Cartan matrix $\tiny\begin{pmatrix} 2 & -3 \\ -1 & 2\end{pmatrix}$. The last Serre-Chevallley relation reflects these chain lengths (even the $2$'s on the diagonal make sense, because the ${\mathfrak s}{\mathfrak l}_2(i)$ submodule spanned by $e_i$ is just ${\mathfrak s}{\mathfrak l}_2(i)$ itself, so has dimension $3$; the sign is different because $e_i$ is a <em>highest</em> weight vector, though).</p>
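<p>For ${\mathfrak s}{\mathfrak l}_3(\mathbb{C})$ this is easy to see concretely (a check of my own, using the standard generators $e_1=E_{12}$, $e_2=E_{23}$): since $A_{12}=-1$, relation (S4) asserts $\text{ad}(e_1)^2(e_2)=0$, while $[e_1,e_2]=E_{13}\neq 0$, matching a root string of length $2$:</p>

```python
# Plain-Python 3x3 matrix helpers for the sl_3 check.
def mat(i, j):                         # elementary matrix E_ij (1-based indices)
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)]
            for r in range(3)]

def mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def bracket(X, Y):                     # ad(X)(Y) = XY - YX
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[r][c] - YX[r][c] for c in range(3)] for r in range(3)]

e1, e2 = mat(1, 2), mat(2, 3)
step1 = bracket(e1, e2)                # [e1, e2] = E_13, non-zero
step2 = bracket(e1, step1)             # ad(e1)^2 (e2) = 0
print(any(any(row) for row in step1), any(any(row) for row in step2))
# -> True False
```
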
|
2,056,979 | <p>I am attempting:
<a href="https://i.stack.imgur.com/FM80Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FM80Z.png" alt="enter image description here"></a></p>
<p>My solution is: But I am not sure where I am going wrong. The answer I get is not divisible by 7.
<a href="https://i.stack.imgur.com/9onWu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9onWu.jpg" alt="enter image description here"></a></p>
| Vidyanshu Mishra | 363,566 | <p>HINT:</p>
<p>You have to prove the truth of $p(k+1)$ using $p(k)$, so you have to extract something from $p(k)$ and then apply it to $p(k+1)$ to establish its truth.</p>
<p>As you have assumed that $p(k)$ is true, $4^{k+1}+5^{2k-1}$ must be divisible by $7$; say it is $7m$, where $m$ is an integer. So you get $4^{k+1}+5^{2k-1}=7m$. A little rearranging gives $5^{2k-1}=7m-4^{k+1}$. Now, what does $p(k+1)$ look like?</p>
<p>It will look like $4^{k+2}+5^{2k+1}$. If we prove that $4^{k+2}+5^{2k+1}$ is divisible by $7$ then we are done. Try using $5^{2k-1}=7m-4^{k+1}$ to proceed further.</p>
|
2,056,979 | <p>I am attempting:
<a href="https://i.stack.imgur.com/FM80Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FM80Z.png" alt="enter image description here"></a></p>
<p>My solution is: But I am not sure where I am going wrong. The answer I get is not divisible by 7.
<a href="https://i.stack.imgur.com/9onWu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9onWu.jpg" alt="enter image description here"></a></p>
| Bernard | 202,857 | <p>Note $5^2\equiv 4$ and $5^{-1}\equiv 3\mod 7$. Using this, we have:
$$4^{n+1}+5^{2n-1}\equiv 4\cdot 4^n+(5^2)^n\cdot5^{-1}\equiv4\cdot 4^n+3\cdot 4^n=7\cdot4^n\equiv 0\mod 7.$$</p>
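<p>A brute-force check of the claim (my addition) over the first couple hundred values of $n$:</p>

```python
# 4^(n+1) + 5^(2n-1) should be divisible by 7 for every n >= 1.
ok = all((4 ** (n + 1) + 5 ** (2 * n - 1)) % 7 == 0 for n in range(1, 201))
print(ok)   # -> True
```
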
|
2,056,979 | <p>I am attempting:
<a href="https://i.stack.imgur.com/FM80Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FM80Z.png" alt="enter image description here"></a></p>
<p>My solution is: But I am not sure where I am going wrong. The answer I get is not divisible by 7.
<a href="https://i.stack.imgur.com/9onWu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9onWu.jpg" alt="enter image description here"></a></p>
| barak manos | 131,263 | <p><strong>First, show that this is true for $n=1$:</strong></p>
<p>$4^{1+1}+5^{2-1}=21$</p>
<p><strong>Second, assume that this is true for $n$:</strong></p>
<p>$4^{n+1}+5^{2n-1}=7k$</p>
<p><strong>Third, prove that this is true for $n+1$:</strong></p>
<p>$4^{n+2}+5^{2n+1}=$</p>
<p>$4(4^{n+1})+25(5^{2n-1})=$</p>
<p>$(25-21)(4^{n+1})+25(5^{2n-1})=$</p>
<p>$25(4^{n+1})-21(4^{n+1})+25(5^{2n-1})=$</p>
<p>$25(4^{n+1})+25(5^{2n-1})-21(4^{n+1})=$</p>
<p>$25(\color\red{4^{n+1}+5^{2n-1}})-21(4^{n+1})=$</p>
<p>$25(\color\red{7k})-21(4^{n+1})=$</p>
<p>$7(25k)-21(4^{n+1})=$</p>
<p>$7(25k-3(4^{n+1}))$</p>
<hr>
<p>Please note that the assumption is used only in the part marked red.</p>
|
393,430 | <p><strong>Question:</strong> Is there a simple method for calculating the Fourier transform of a holomorphic complex function <span class="math-container">$f(z):\Omega\to\mathbb{C}$</span>?</p>
<p>In order for my question to be well-posed, I define a holomorphic function <span class="math-container">$f:\Omega\to\mathbb{C}$</span> to possess continuous first partial derivatives and satisfy the Cauchy-Riemann equations in a simply connected domain <span class="math-container">$\Omega\subseteq\mathbb{C}$</span> without any singularities. I am quite familiar with the Fourier transform for a real, periodic function <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> that uses complex exponentials as a basis of eigenfunctions to generate an expansion <span class="math-container">$f(x)=\sum_{n=-\infty}^{\infty}A_{n}e^{inx}$</span>.</p>
<p>Given that all functions satisfying the Cauchy-Riemann equations are harmonic, I wondered if the Laplace PDE with homogeneous Dirichlet boundary conditions <span class="math-container">$\Delta f(z)=0\ \forall z\in\Omega$</span> and <span class="math-container">$f(z)\mid_{\partial\Omega}=0$</span> could be used to generate a class of harmonic functions in <span class="math-container">$\mathbb{R}^{2}$</span>. Admittedly, none would be guaranteed to correspond to analytic functions, let alone approximate a desired function <span class="math-container">$f(z)$</span> within a sufficiently small error bound.</p>
<p>Next, I considered the viability of taking a Fourier decomposition of the real and imaginary components separately, which could be superposed to recover the original function. While this approach merits consideration for sufficiently simple functions, I noticed that it would fail for cases where separability is more enigmatic. For an example, I turn to the Schwarz-Christoffel transform.</p>
<p><span class="math-container">$$f(z)=\int_{z_{0}}^{z}\frac{A\,dz}{\prod_{j=1}^{n}(z-x_{j})^{k_{j}}}+B$$</span></p>
<p>In the above, <span class="math-container">$A,B\in\mathbb{C}$</span> are both taken to be constants. Given the integral representation of the formula, I find that it would present a particular challenge to separate the components for an arbitrary choice of <span class="math-container">$x_{j}$</span>.</p>
| rpotrie | 5,753 | <p>For certain 3-manifolds (irreducible), if you are willing to take finite index subgroups, the description is relatively easy, and has to do with dehn-twists too.</p>
<p>Let me add the following paper by McCullough which was useful to me when considering this question: <a href="https://projecteuclid.org/journals/journal-of-differential-geometry/volume-33/issue-1/Virtually-geometrically-finite-mapping-class-groups-of-3-manifolds/10.4310/jdg/1214446029.full" rel="nofollow noreferrer">https://projecteuclid.org/journals/journal-of-differential-geometry/volume-33/issue-1/Virtually-geometrically-finite-mapping-class-groups-of-3-manifolds/10.4310/jdg/1214446029.full</a></p>
<p>I think this goes in line with the comments by HJRW which are more detailed, but I thought this might be useful.</p>
|
3,935,443 | <p>Why is <span class="math-container">$\sqrt {xy} = \sqrt x \sqrt y$</span>, and does the same hold for division?
I found questions like these on this website, but I don't know anything about precalculus, or even all of algebra, so I would like a proof that can be explained with basic concepts.</p>
| Hagen von Eitzen | 39,174 | <p>To begin with, let us assume that <span class="math-container">$f,g$</span> are injective so that we have no problems with defining the inverses. Then by continuity, they are monotonic. To have no problems with <span class="math-container">$x\to 0^+$</span>, we had better assume they are positive (and hence decreasing).
Even then, we cannot conclude a lot about <span class="math-container">$L_2$</span> from <span class="math-container">$L_1$</span>:</p>
<ul>
<li>By swapping <span class="math-container">$f\leftrightarrow g$</span>, we find that if <span class="math-container">$(L_1,L_2)$</span> is possible, then so is <span class="math-container">$(1/L_1,1/L_2)$</span>.</li>
<li>Of course, if <span class="math-container">$1<L_1\le \infty$</span>, then the graph of <span class="math-container">$f$</span> is eventually above that of <span class="math-container">$g$</span>, hence <span class="math-container">$f^{-1}$</span> must also be above <span class="math-container">$g^{-1}$</span>, i.e., <span class="math-container">$1\le L_2\le\infty$</span>.</li>
<li>In other words, <span class="math-container">$L_1<1<L_2$</span> and <span class="math-container">$L_1>1>L_2$</span> are impossible.</li>
</ul>
<p>We can construct <span class="math-container">$f,g$</span> with limits to our liking as follows:</p>
<p>Let <span class="math-container">$\{a_n\}_{n\in\Bbb N}$</span> and <span class="math-container">$\{b_n\}_{n\in\Bbb N}$</span> be convergent (possibly to <span class="math-container">$\infty$</span>) sequences of numbers <span class="math-container">$>1$</span>. Let <span class="math-container">$x_n=\prod_{k=1}^n a_k$</span>, <span class="math-container">$y_n=\prod_{k=1}^n b_k^{-1}$</span>.
Assume <span class="math-container">$x_n\to \infty$</span> and <span class="math-container">$y_n\to 0$</span>.
Let <span class="math-container">$f$</span> be the piecewise linear interpolation through the points <span class="math-container">$(x_n,y_n)$</span> and <span class="math-container">$g$</span> the piecewise linear interpolation through the points <span class="math-container">$(x_n,y_{n+1})$</span>. Then one directly checks that <span class="math-container">$\frac{f(x)}{g(x)}=\frac{y_n}{y_{n+1}}=b_{n+1}$</span> at <span class="math-container">$x=x_n$</span>.<br />
One verifies that on the interval <span class="math-container">$[x_n,x_{n+1}]$</span>, the quotient <span class="math-container">$\frac{f(x)}{g(x)}$</span> only varies between <span class="math-container">$b_{n+1}$</span> and <span class="math-container">$b_{n+2}$</span> (the quotient is essentially piecewise a hyperbola). We conclude that <span class="math-container">$$L_1=\lim_{x\to\infty}\frac{f(x)}{g(x)}=\lim_{n\to\infty} b_n.$$</span> By the same argument,
<span class="math-container">$$ L_2=\lim_{y\to0^+}\frac{f^{-1}(y)}{g^{-1}(y)}=\lim_{n\to\infty} a_n.$$</span></p>
<p>By picking <span class="math-container">$a_n=a^n$</span> with <span class="math-container">$1<a<\infty$</span> we can thus achieve <span class="math-container">$L_1=a$</span>.
By picking <span class="math-container">$a_n=n!$</span>, we achieve <span class="math-container">$L_1=\infty$</span>, and by <span class="math-container">$a_n=1+\frac1n$</span> we achieve <span class="math-container">$L_1=1$</span>. In summary, we can achieve any <span class="math-container">$L_1\in[1,\infty]$</span> by a suitable choice of <span class="math-container">$\{a_n\}_{n\in\Bbb N}$</span>.
Likewise, we can achieve any <span class="math-container">$L_2\in[1,\infty]$</span> by a suitable choice of <span class="math-container">$\{b_n\}_{n\in\Bbb N}$</span>.</p>
<p>As the choices are independent, any combination <span class="math-container">$(L_1,L_2)\in[1,\infty]\times [1,\infty]$</span> can be achieved (and likewise any <span class="math-container">$(L_1,L_2)\in [0,1]\times[0,1]$</span>).</p>
|
3,935,443 | <p>Why is <span class="math-container">$\sqrt {xy} = \sqrt x \sqrt y$</span>, and does the same hold for division?
I found questions like these on this website, but I don't know anything about precalculus, or even all of algebra, so I would like a proof that can be explained with basic concepts.</p>
| Edwin Franks | 853,882 | <p>If <span class="math-container">$\lim_{x \to \infty}f(x) = \lim_{x \to \infty}g(x) = 0$</span> and
<span class="math-container">\begin{equation}
\lim_{x \to \infty} \frac{f(x)}{g(x)}=L_1 \text{ and }
\lim_{y \to 0+} \frac{f^{-1}(y)}{g^{-1}(y)}=L_2\tag{1}
\end{equation}</span>
then if <span class="math-container">$L_1>1$</span>, <span class="math-container">$L_2\geq 1$</span> and if <span class="math-container">$L_1<1$</span>, <span class="math-container">$L_2\leq 1$</span>. Since
<span class="math-container">$L_1>1$</span> implies that for sufficiently large <span class="math-container">$x$</span>, <span class="math-container">$f(x)>g(x)$</span>, for
sufficiently small <span class="math-container">$y$</span>, <span class="math-container">$f^{-1}(y)>g^{-1}(y)$</span> (<span class="math-container">$x$</span> must be bigger for
<span class="math-container">$f$</span> to shrink to <span class="math-container">$y$</span>). Thus, <span class="math-container">$L_2\geq 1$</span>. The second conclusion is
similarly shown. Too many examples:</p>
<ol>
<li>Let <span class="math-container">$L_1$</span> and <span class="math-container">$L_2$</span> be arbitrary positive real numbers either
both greater than one or both less than one. Set
<span class="math-container">$\alpha=\log{L_2}/\log{L_1}$</span> and note that <span class="math-container">$\alpha>0$</span>. Let
<span class="math-container">$f(x)=L_1x^{-1/\alpha}$</span> and <span class="math-container">$g(x)=x^{-1/\alpha}$</span> so that
<span class="math-container">$f^{-1}(y)=L_2y^{-\alpha}$</span> and <span class="math-container">$g^{-1}(y)=y^{-\alpha}$</span>, and hence (1) holds.</li>
<li>Let <span class="math-container">$L_1=1$</span> and <span class="math-container">$L_2$</span> be an arbitrary positive real number. Let
<span class="math-container">$a=-\log{L_2}$</span> and set <span class="math-container">$f(x)=(\log x+a)^{-1}$</span> and <span class="math-container">$g(x)=(\log x)^{-1}$</span>.
Since <span class="math-container">$f^{-1}(y)=L_2e^{1/y}$</span> and <span class="math-container">$g^{-1}(y)=e^{1/y}$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=1$</span> and <span class="math-container">$L_2=\infty$</span>. Set
<span class="math-container">$f(x)=(\log\log x-\log 2)^{-1}$</span> and <span class="math-container">$g(x)=(\log\log x)^{-1}$</span>. Since
<span class="math-container">$f^{-1}(y)=e^{2e^{1/y}}$</span> and <span class="math-container">$g^{-1}(y)=e^{e^{1/y}}$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=1$</span> and <span class="math-container">$L_2=0$</span>. Set
<span class="math-container">$f(x)=(\log\log x+\log 2)^{-1}$</span> and <span class="math-container">$g(x)=(\log\log x)^{-1}$</span>. Since
<span class="math-container">$f^{-1}(y)=e^{\frac12 e^{1/y}}$</span> and <span class="math-container">$g^{-1}(y)=e^{e^{1/y}}$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=\infty$</span> and <span class="math-container">$1<L_2<\infty$</span>. Set
<span class="math-container">$f(x)=e^{-x/L_2}$</span> and <span class="math-container">$g(x)=e^{-x}$</span>. Since
<span class="math-container">$f^{-1}(y)=-L_2\log y$</span> and <span class="math-container">$g^{-1}(y)=-\log y$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=0$</span> and <span class="math-container">$0<L_2<1$</span>. Set
<span class="math-container">$f(x)=e^{-x/L_2}$</span> and <span class="math-container">$g(x)=e^{-x}$</span>. Since
<span class="math-container">$f^{-1}(y)=-L_2\log y$</span> and <span class="math-container">$g^{-1}(y)=-\log y$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=0$</span> and <span class="math-container">$L_2=0$</span>. Set
<span class="math-container">$f(x)=x^{-2}$</span> and <span class="math-container">$g(x)=x^{-1}$</span>. Since
<span class="math-container">$f^{-1}(y)=y^{-1/2}$</span> and <span class="math-container">$g^{-1}(y)=y^{-1}$</span>, (1) holds.</li>
<li>Let <span class="math-container">$L_1=\infty$</span> and <span class="math-container">$L_2=\infty$</span>. Set
<span class="math-container">$f(x)=x^{-1}$</span> and <span class="math-container">$g(x)=x^{-2}$</span>. Since
<span class="math-container">$f^{-1}(y)=y^{-1}$</span> and <span class="math-container">$g^{-1}(y)=y^{-1/2}$</span>, (1) holds.</li>
</ol>
|
1,928,892 | <p>I'm solving a graduate entrance examination problem.
We are required to establish the inequality using the following result:</p>
<p>for $x,y > 0$, $\frac{x}{y} + \frac{y}{x} > 2$ (1), which is easy to prove as it is equivalent to $(x - y)^2 > 0$.</p>
<p>But when it comes to the inequality combining $x, y, z$, namely $\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y}\geq \frac{3}{2}$, I got stuck: I've tried to combine the expression into one single fraction and obtained something irreducible.</p>
<p>Any hints? My intuition tells me that for $x,y,z >0$, any fraction of the form $\frac{x}{y+z}$ is greater than $1/2$. As there are three fractions of this kind, with the variables playing symmetric roles, we get $1/2 + 1/2 + 1/2 = 3/2$.</p>
<p>I just can't figure out how to play with the result (1).</p>
| timon92 | 210,525 | <p>Hint: put $a=x+y, b=y+z, c=z+x$. Then $2x=a-b+c, 2y=a+b-c, 2z=-a+b+c$. Rewrite the inequality in terms of $a,b,c$ and try to apply the lemma you are supposed to use.</p>
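<p>The target inequality here is Nesbitt's, $\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y}\ge\frac32$. A randomized sanity check (my own sketch, not part of the hint):</p>

```python
import random

# Sample random positive triples and track the minimum of the symmetric sum.
worst = float("inf")
for _ in range(10_000):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    worst = min(worst, x / (y + z) + y / (z + x) + z / (x + y))

print(worst)   # always >= 1.5, with equality approached near x = y = z
```
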
|
2,062,671 | <p>How many sequences of positive integers $\{a_n\}$ are there such that $$a_0 =1, a_1 = 2,|a_{n+2}a_n - a_{n+1}^2| = 1 ?$$</p>
| Hermetically Sealed Halibut | 399,430 | <p>By using the rule given above, we can compute the first few terms of such sequences. It is easily seen that there are at most four sequences of integers which satisfy the conditions.</p>
<p>From looking at the first five terms, my guess is that these can be given in the following ways:
$a_n=n+1$, </p>
<p>$b_{n+2}=b_{n+1}+b_n$ (Fibonacci sequence),</p>
<p>$c_{n+2}=2c_{n+1}+c_n$,</p>
<p>$d_{n+2}=2d_{n+1}+\sum_{i=0}^n d_i$</p>
<p>It remains to be verified that these sequences satisfy the given conditions.</p>
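<p>That verification can be automated; the branching search below (my own addition) also confirms that exactly four sequences survive, since $a_{n+2}$ must equal $(a_{n+1}^2\pm 1)/a_n$ and must be a positive integer:</p>

```python
# Grow every valid continuation of a_0 = 1, a_1 = 2 under
# |a_{n+2} a_n - a_{n+1}^2| = 1, keeping only positive-integer branches.
def extensions(seq):
    x, y = seq[-2], seq[-1]
    return [seq + [num // x]
            for num in (y * y - 1, y * y + 1)
            if num > 0 and num % x == 0]

branches = [[1, 2]]
for _ in range(10):                     # extend each branch by ten terms
    branches = [t for s in branches for t in extensions(s)]

print(len(branches))                    # -> 4
for b in sorted(branches):
    print(b[:6])
```

<p>The four survivors begin $1,2,3,4,5,6$; $1,2,3,5,8,13$; $1,2,5,12,29,70$; and $1,2,5,13,34,89$, matching the four guessed recurrences. After the two early branch points ($a_2$ and $a_3$) each sequence continues uniquely, because the two candidate numerators differ by $2$ and so can both be divisible by $a_n$ only when $a_n\mid 2$.</p>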
|
2,418,274 | <p>AP = DS = CR = BQ = 2a, and ABCD and PQRS are squares with side a.
Find the angle between AR and BC.
I got $$AR=\sqrt{6}a$$ Can I use triangle ARQ in order to find the corresponding angle?</p>
<p><a href="https://i.stack.imgur.com/QtRaT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QtRaT.jpg" alt="enter image description here"></a></p>
| Community | -1 | <p>Take $R$ as the origin of vectors. Then $\vec{AR}$ is along $a\mathbf{i} + a\mathbf{j} +2a\mathbf{k}$ and $\vec{CB}$ is along $a\mathbf{i}$. The angle between them is hence
$$\cos^{-1}\left( \frac{(\mathbf{i} + \mathbf{j} +2\mathbf{k})\cdot \mathbf{i}}{\sqrt{6}}\right) = \cos^{-1}\frac{1}{\sqrt{6}}$$</p>
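<p>Numerically (an addition of mine), this angle comes out to about $65.9^\circ$:</p>

```python
import math

# cos(theta) = (v . w) / (|v| |w|) with v along AR = (1, 1, 2), w = (1, 0, 0).
def norm(u):
    return math.sqrt(sum(a * a for a in u))

v, w = (1, 1, 2), (1, 0, 0)
dot = sum(a * b for a, b in zip(v, w))
theta = math.acos(dot / (norm(v) * norm(w)))
print(round(math.degrees(theta), 1))   # -> 65.9
```
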
|
2,782,338 | <blockquote>
<p>Let $X_1, X_2, \cdots$ be independent and identically distributed random variables with expectation $\mu$. Let $N$ be a positive integer-valued random variable such that $E[N] < \infty$ and such that $I_{N≥n}$ is independent of $X_n$ for all $n$. Prove that $$E\!\left[\sum_{i=1}^NX_i\right]=\mu\, E[N]$$</p>
</blockquote>
<p>This is a question from an exam a few years ago. I don’t even know where to start here. What is meant by $I_{N≥n}$?</p>
| me47 | 353,121 | <p>Here $I_{N \ge n}$ denotes the indicator random variable of the event $\{N \ge n\}$: it equals $1$ when $N \ge n$ and $0$ otherwise. It is exactly the tool you need, because $\sum_{i=1}^N X_i = \sum_{n=1}^{+\infty} X_n I_{N \ge n}$. Taking expectations and interchanging sum and expectation (justified since $E[N]<\infty$ and $E|X_1|<\infty$),
$$
E \left[ \sum_{i=1}^N X_i \right] = \sum_{n=1}^{+\infty} E \left[ X_n I_{N \ge n} \right] \\
= \sum_{n=1}^{+\infty} E\left[ X_n \right] P(N \ge n), \quad \text{by the assumed independence of } X_n \text{ and } I_{N \ge n} \\
= \mu \sum_{n=1}^{+\infty} P(N \ge n), \quad \text{since } E[X_n] = \mu \\
= \mu E[N],
$$
using the tail-sum formula $E[N] = \sum_{n=1}^{+\infty} P(N \ge n)$ in the last step.</p>
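<p>A quick simulation of mine illustrating the identity, and why the indicator condition (rather than full independence of $N$ and the $X_n$) is what matters: take the $X_n$ to be fair die rolls and $N$ the number of rolls up to and including the first six, so $\mu = 3.5$ and $E[N] = 6$.</p>

```python
import random

def sum_until_six(rng):
    """Roll a fair die until the first 6 (inclusive); return the sum of the rolls."""
    total = 0
    while True:
        x = rng.randint(1, 6)
        total += x
        if x == 6:
            return total

rng = random.Random(12345)
trials = 100_000
mean_sum = sum(sum_until_six(rng) for _ in range(trials)) / trials
# Here N is NOT independent of the X_n, but the event {N >= n} depends only
# on X_1, ..., X_{n-1}, so the hypothesis holds and E[sum] = 3.5 * 6 = 21.
```

<p>The empirical mean lands close to $21$, as the identity predicts.</p>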
|
1,348,763 | <p>Using the comparison test, I am supposed to figure out whether the integral of $1/\sqrt{e^x+1}$ converges or diverges. What other function should I use? Also, the inequality stating that $1/\sqrt{e^x+1}$ is larger or smaller than the comparison function must be proven.</p>
| JimmyK4542 | 155,509 | <p><strong>Hint</strong>: Clearly, $1+e^x \ge e^x$, so $\sqrt{1+e^x} \ge \sqrt{e^x} = e^{x/2}$, and thus, $\dfrac{1}{\sqrt{1+e^{x}}} \le \dfrac{1}{e^{x/2}} = e^{-x/2}$. </p>
<p>Using the comparison test should be easy now.</p>
|
27,865 | <p>Both the Laplace transform and the Fourier transform in some sense decode the "spectrum" of a function. The Laplace transform gives a power-series decomposition whereas the Fourier transform gives a harmonic (or loop-based) decomposition.</p>
<p>Are there deep connections between these two transforms? <a href="https://math.stackexchange.com/questions/7301/connection-between-fourier-transform-and-taylor-series">The formulaic connection</a> is clear, but is there something deeper?</p>
<p>(Maybe the answer will involve spectral theory?)</p>
| xen | 8,229 | <p>I don't know what answer you are looking for but for example both Laplace and Fourier transform are a so called <a href="http://en.wikipedia.org/wiki/Gelfand_representation">Gelfand Transform</a>.</p>
<p>You can find good introduction to Gelfand Transform in nice book <a href="http://books.google.pl/books?id=q7dR3d5nqaUC&printsec=frontcover&dq=bobrowski+functional+analysis&source=bl&ots=ilzVPTF_zs&sig=2RpAscLsc2Ug75hjqUAg4OTU134&hl=pl&ei=9N6DTYXpBczysgb4tZSJAw&sa=X&oi=book_result&ct=result&resnum=6&ved=0CEMQ6AEwBQ#v=onepage&q&f=false">Functional analysis for probability and stochastic processes: an introduction</a>, A. Bobrowski. Look into Chapter 6.</p>
|
1,248,668 | <p>I do have a system of n equations with m variables where m > n with integer coefficients. I wish to find a set of integer solutions to this system (In my case n = 2 and m = 4). Could somebody tell me how I can do it? I already solved this system with Mathematica but I would like to redo these calculations by hand to understand how their were obtained.</p>
<p>The system is:
$\left\{
\begin{array}{l l}
4u - 3v + 4w + 3z = 1\\
-4v - 3u - 4z + 3w = 0
\end{array} \right.$</p>
| mvw | 86,776 | <p>A linear equation with integer coefficients, where one looks for integer solutions is called a linear Diophantine equation.</p>
<p>The simplest case
$$
a x + b y = c
$$
can be solved systematically and has either no solutions or infinitely many.</p>
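<p>For concreteness, here is how this simplest case is solved with the extended Euclidean algorithm (a minimal Python sketch of mine; it assumes $a, b \ge 0$, not both zero):</p>

```python
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def solve_linear(a, b, c):
    """One integer solution (x, y) of a*x + b*y == c, or None if none exists."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None  # c must be a multiple of gcd(a, b)
    k = c // g
    # general solution: (x*k + (b//g)*n, y*k - (a//g)*n) for any integer n
    return x * k, y * k
```

<p>The commented formula generates the infinite family of solutions from the particular one.</p>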
<p>From here one can move to more variables or more equations. </p>
<p>See <a href="http://en.wikipedia.org/wiki/Diophantine_equation#System_of_linear_Diophantine_equations" rel="nofollow">System of linear Diophantine equations</a> on how you might proceed. It recommends calculating the Smith normal form.</p>
|
4,556,193 | <p>I meet a series of the form</p>
<p><span class="math-container">$$\sum_{n=0}^{\infty} \frac{x^n}{(2n-1)!!}$$</span>
where <span class="math-container">$(-1)!! = 1$</span>.</p>
<p>I guess it is a Taylor expansion of a function but I don't know what it is. Could anyone here help me?</p>
<p>Remark: The problem comes from calculating a renewal process. Assume <span class="math-container">$N(t)$</span> is a renewal process with interarrival time <span class="math-container">$X_i$</span> where <span class="math-container">$X_i$</span> i.i.d. follow <span class="math-container">$\chi^2_1$</span>. Then the arrival time of the <span class="math-container">$k$</span>th event is <span class="math-container">$S_k \sim \chi^2_k$</span>. Then the renewal function is</p>
<p><span class="math-container">$$m(t) = \mathbb{E}N(t) =\sum_{k=1}^\infty Pr(S_k \leq t)$$</span></p>
<p>which is</p>
<p><span class="math-container">$$\sum_{k=1}^\infty \int_0^t \frac{x^{k/2-1}e^{-x/2}}{2^{k/2}\Gamma(k/2)}dx.$$</span></p>
<p>We can exchange the summation and the integral and divide the summation into two parts according to <span class="math-container">$k$</span> is even or odd.</p>
<p>The part for <span class="math-container">$k$</span> is even is easy. But for <span class="math-container">$k$</span> is odd, I think we need to deal the series in the beginning of the problem.</p>
| C-RAM | 833,331 | <p>Your series is certainly not a trivial one. We shall prove that your series is given by <span class="math-container">$1+f(\sqrt{x})$</span>, where
<span class="math-container">\begin{equation}
f(x)=xe^{x^2/2}\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{x}{\sqrt{2}}\right)
\end{equation}</span>
and <span class="math-container">$\text{erf}(z)$</span> is the <a href="https://en.wikipedia.org/wiki/Error_function" rel="noreferrer">error function</a>. We will be making use of the Taylor series for <span class="math-container">$\text{erf}(z)$</span> which is as follows:
<span class="math-container">\begin{equation}
\text{erf}(z)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^\infty\frac{(-1)^nz^{2n+1}}{n!(2n+1)}
\end{equation}</span>
Now, we expand Taylor series of both <span class="math-container">$e^{x^2/2}$</span> and <span class="math-container">$\text{erf}\left(\frac{x}{\sqrt{2}}\right)$</span>, and take the Cauchy product:
<span class="math-container">\begin{equation}
\begin{split}
f(x)&=xe^{x^2/2}\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{x}{\sqrt{2}}\right)\\
&=x\sqrt{\frac{\pi}{2}}\left[\sum_{k=0}^\infty \frac{x^{2k}}{2^kk!}\right]\left[\sqrt{\frac{2}{\pi}}\sum_{\ell=0}^\infty \frac{(-1)^\ell x^{2\ell+1}}{2^\ell \ell ! (2\ell+1)}\right]\\
&=x^2\left[\sum_{k=0}^\infty \frac{x^{2k}}{2^kk!}\right]\left[\sum_{\ell=0}^\infty \frac{(-1)^\ell x^{2\ell}}{2^\ell \ell ! (2\ell+1)}\right]\\
&=x^2\sum_{n=0}^\infty\sum_{k=0}^n\frac{x^{2n-2k}}{2^{n-k}(n-k)!}\cdot\frac{(-1)^kx^{2k}}{2^kk!(2k+1)}\\
&=x^2\sum_{n=0}^\infty\frac{x^{2n}}{2^nn!}\sum_{k=0}^n{n\choose k}\frac{(-1)^k}{2k+1}\\
\end{split}
\end{equation}</span>
We can evaluate this last sub-sum using the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="noreferrer">Beta function</a>:
<span class="math-container">\begin{equation}
\begin{split}
\sum_{k=0}^n{n\choose k}\frac{(-1)^k}{2k+1}&=\sum_{k=0}^n{n\choose k}(-1)^k\int_0^1t^{2k}dt\\
&=\int_0^1\sum_{k=0}^n{n\choose k}(-1)^kt^{2k}dt\\
&=\int_0^1(1-t^2)^ndt\\
&=\int_0^1\frac{(1-u)^n}{2u^{1/2}}du\\
&=\frac{1}{2}B(1/2,n+1)\\
&=\frac{\Gamma(1/2)\Gamma(n+1)}{2\Gamma(n+3/2)}\\
&=\frac{2^nn!}{(2n+1)!!}
\end{split}
\end{equation}</span>
We may therefore simplify
<span class="math-container">\begin{equation}
f(x)=x^2\sum_{n=0}^\infty\frac{x^{2n}}{(2n+1)!!}=\sum_{n=1}^\infty\frac{x^{2n}}{(2n-1)!!}
\end{equation}</span>
which means that
<span class="math-container">\begin{equation}
\sum_{n=0}^\infty\frac{x^n}{(2n-1)!!}=1+f(\sqrt{x})=1+e^{x/2}\sqrt{\frac{\pi x}{2}}\text{erf}\left(\sqrt{\frac{x}{2}}\right)
\end{equation}</span>
as desired.</p>
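<p>The closed form is easy to sanity-check against partial sums of the series, since Python's standard library exposes $\operatorname{erf}$ (a small verification sketch of mine):</p>

```python
import math

def series(x, terms=60):
    """Partial sum of sum_{n>=0} x^n / (2n-1)!!, with (-1)!! = 1."""
    total, dfact = 0.0, 1.0
    for n in range(terms):
        if n > 0:
            dfact *= 2 * n - 1  # (2n-1)!! = 1, 1, 3, 15, 105, ...
        total += x ** n / dfact
    return total

def closed_form(x):
    return 1 + math.exp(x / 2) * math.sqrt(math.pi * x / 2) * math.erf(math.sqrt(x / 2))
```

<p>The two agree to high precision for moderate $x > 0$, since the ratio of consecutive terms is $x/(2n+1)$.</p>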
|
4,556,193 | <p>I meet a series of the form</p>
<p><span class="math-container">$$\sum_{n=0}^{\infty} \frac{x^n}{(2n-1)!!}$$</span>
where <span class="math-container">$(-1)!! = 1$</span>.</p>
<p>I guess it is a Taylor expansion of a function but I don't know what it is. Could anyone here help me?</p>
<p>Remark: The problem comes from calculating a renewal process. Assume <span class="math-container">$N(t)$</span> is a renewal process with interarrival time <span class="math-container">$X_i$</span> where <span class="math-container">$X_i$</span> i.i.d. follow <span class="math-container">$\chi^2_1$</span>. Then the arrival time of the <span class="math-container">$k$</span>th event is <span class="math-container">$S_k \sim \chi^2_k$</span>. Then the renewal function is</p>
<p><span class="math-container">$$m(t) = \mathbb{E}N(t) =\sum_{k=1}^\infty Pr(S_k \leq t)$$</span></p>
<p>which is</p>
<p><span class="math-container">$$\sum_{k=1}^\infty \int_0^t \frac{x^{k/2-1}e^{-x/2}}{2^{k/2}\Gamma(k/2)}dx.$$</span></p>
<p>We can exchange the summation and the integral and divide the summation into two parts according to <span class="math-container">$k$</span> is even or odd.</p>
<p>The part for <span class="math-container">$k$</span> is even is easy. But for <span class="math-container">$k$</span> is odd, I think we need to deal the series in the beginning of the problem.</p>
| Leucippus | 148,155 | <p>Using
<span class="math-container">$$(2 n -1)!! = \frac{(2 n)!}{2^n \, n!} = 2^n \, \left(\frac{1}{2}\right)_{n}$$</span>
then
<span class="math-container">$$ \sum_{n=0}^{\infty} \frac{x^n}{(2 n - 1)!!} = \sum_{n=0}^{\infty} \frac{\left(\frac{x}{2}\right)^n}{(1/2)_{n}} = {}_{1}F_{1}\left(1 ; \frac{1}{2} ; \frac{x}{2} \right),$$</span>
where <span class="math-container">$(a)_{n}$</span> is the Pochhammer symbol, <span class="math-container">${}_{1}F_{1}(a; b; x)$</span> is the confluent hypergeometric function. Now, by using
<span class="math-container">$$ {}_{1}F_{1}\left(1 ; \frac{1}{2} ; x \right) = 1 + \sqrt{\pi \, x} \, e^{x} \, \text{erf}(\sqrt{x}) $$</span>
then
<span class="math-container">$$ \sum_{n=0}^{\infty} \frac{x^n}{(2 n - 1)!!} = 1 + \sqrt{\frac{\pi \, x}{2}} \, e^{x/2} \, \text{erf}\left(\sqrt{\frac{x}{2}}\right). $$</span></p>
|
999,000 | <p>Theorem 3.29 If $p >1$ then $\sum_{n=2}^{\infty} \frac{1}{n(\log n)^p}$ converges; if $p \leq 1$, the series diverges. </p>
<p>Proof:</p>
<p>The monotonicity of the logarithmic function implies that $\{\log n\}$ increases. Hence $\frac{1}{n\log n}$ decreases, and we can apply theorem 3.27; this leads us to the series </p>
<p>$\sum_{k=1}^{\infty} 2^k \frac{1}{2^k(\log 2^k)^p}= \sum_{k=1}^{\infty} \frac{1}{(k\log 2)^p}= \frac{1}{(\log 2)^p} \sum_{k=1}^{\infty} \frac{1}{k^p}$ and the conclusion follows from Theorem 3.28. </p>
<p>I understand the proof thus far, but Rudin goes on to say that this procedure may evidently be continued. For instance,</p>
<p>$\sum_{n=3}^{\infty} \frac{1}{n\log n(\log\log n)}$ diverges, whereas</p>
<p>$\sum_{n=3}^{\infty} \frac{1}{n\log n(\log\log n)^2}$ converges. </p>
<p>I want to continue the procedure to show for which $p$ the series $\sum_{n=3}^{\infty} \frac{1}{n\log n(\log\log n)^p}$ converges. </p>
<p>Here are the relevant theorems:</p>
<p>Theorem 3.27 Suppose $a_1 \geq a_2 \geq ... \geq 0$. Then the series $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{n=1}^{\infty} 2^n a_{2^n}$ converges. </p>
<p>Theorem 3.28 $\sum \frac{1}{n^p}$ converges if $p>1$ and diverges if $p \leq 1$. </p>
<p>Here is what I have so far:</p>
<p>By Theorem 3.27, $\sum_{n=3}^{\infty} \frac{1}{n\log n(\log\log n)^p}$ converges if and only if $\sum_{n=2}^{\infty}2^n \frac{1}{2^n\log 2^n(\log\log 2^n)^p} = \frac{1}{\log 2} \sum_{n=2}^{\infty} \frac{1}{n(\log\log 2^n)^p}$ converges. </p>
<p>Now I'm stuck. How may the procedure be continued? Please note it is my goal to show a result similar to Theorem 3.29. It is not my goal to simply show that $\sum_{n=3}^{\infty} \frac{1}{n\log n(\log\log n)^p}$ converges for some $p$. Any hint would be greatly appreciated. Thank you. </p>
| Joonas Ilmavirta | 166,535 | <p>Let $r:(0,\infty)\to(0,\infty)$ and $\omega:(0,\infty)\to S^{n-1}$ be smooth functions that define a curve $\gamma$ by $\gamma(s)=r(s)\omega(s)$ in spherical coordinates.
I want to choose $S=\gamma((0,\infty))$, so I need to assume $r(s)\to\infty$ as $s\to\infty$.
I will also assume $r$ to be increasing and $t$ to be positive.</p>
<p>Let $f(x)=\|x\|^t$.
To ensure uniform continuity, I want the local Lipschitz constant of $f|_S$ at $\gamma(s)$,
$$
L(s)=\frac{\frac{d}{ds}f(\gamma(s))}{\|\dot\gamma(s)\|},
$$
to be uniformly bounded.
We have
$$
\frac{d}{ds}f(\gamma(s))
=
tr^{t-1}\dot r
$$
and
$$
\|\dot\gamma(s)\|^2
=
\|\dot r\omega+r\dot\omega\|^2
=
\dot r^2+r^2\|\dot\omega\|^2.
$$
For the last equation, note that $2\omega\cdot\dot\omega=\frac{d}{ds}\|\omega\|^2=0$.
Thus
$$
L(s)^2
=
t^2\frac{r^{2(t-1)}\dot r^2}{\dot r^2+r^2\|\dot\omega\|^2}
=
t^2\frac{r^{2(t-1)}}{1+(\|\dot\omega\|/\dot\ell)^2},
$$
where $\ell=\log(r)$.</p>
<p>For the Archimedean spiral $r=\omega=s$ we get $L(s)^2=t^2s^{2(t-1)}/(1+s^2)$, which stays bounded if and only if $t\leq2$.
This was expected.</p>
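<p>A quick numerical look at this dichotomy, just evaluating the displayed formula (my own sketch):</p>

```python
def L_squared(t, s):
    """L(s)^2 = t^2 * s^(2(t-1)) / (1 + s^2) for the Archimedean spiral r = s."""
    return t * t * s ** (2 * (t - 1)) / (1 + s * s)
```

<p>For $t=2$ the values increase toward the finite supremum $t^2=4$, while for any $t>2$ they grow without bound.</p>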
<p>A uniform bound on $L(s)$ does not suffice for uniform continuity; if the "spiral" $S$ is too tight, $f|_S$ is not uniformly continuous.
To make this issue easier to handle, let me assume that $\omega$ is periodic with some period $p>0$.
I'm not sure if a periodic choice is optimal, but I have a vague feeling that an "optimal spiral" is periodic enough for the argument to work.</p>
<p>Note that if you want uniform continuity with respect to the path metric, bounding $L(s)$ is enough.
If you want it w.r.t. the induced Euclidean metric, it is not.</p>
<p>Suppose we want Hölder continuity with exponent $\alpha\in(0,1]$.
Then we get the requirement that
$$
r(s+p)^t-r(s)^t\lesssim (r(s+p)-r(s))^\alpha.
$$
(I don't want to keep track of multiplicative constants anymore, and I will assume $r$ to be so nice that I can make some approximations.)
The function $r$ cannot grow too fast if we want $L(s)$ to remain bounded, so
$$
r(s+p)^t-r(s)^t\approx t(r(s+p)-r(s))r(s)^{t-1}
$$
should be a reasonable approximation.
This combined with the above estimate gives
$$
(r(s+p)-r(s))^{1-\alpha}r(s)^{t-1}\lesssim 1.
$$
Approximating $r(s+p)-r(s)\approx p\dot r(s)$, we get
$$
\dot r(s)^{1-\alpha}r(s)^{t-1}\lesssim 1.
$$
This condition is not necessary for uniform continuity if $r(s+p)-r(s)$ has a uniform lower bound.
</p>
<p>To make $L(s)$ bounded we should have
$$
r^{2(t-1)}
\lesssim
(\|\dot\omega\|/\dot\ell)^2.
$$
If we choose the parametrization so that $\|\dot\omega\|$ is constant, we end up with two requirements (if $r(s+p)-r(s)\to0$ as $r\to\infty$):</p>
<ol>
<li>$\dot r^{1-\alpha}r^{t-1}\lesssim1$,</li>
<li>$\dot r r^{t-2}\lesssim1$.</li>
</ol>
<p>Assuming $\alpha<1$ (which is not very restrictive), the first condition can be rewritten as $\dot r r^{(t-1)/(1-\alpha)}\lesssim1$.
The condition then becomes
$$
\dot r r^{\max\{(t-1)/(1-\alpha),t-2\}}\lesssim1.
$$
If the spiral tightens up so that $r(s+p)-r(s)\to0$ as $r\to\infty$, the modulus of continuity of $N_t|_S$ should be as bad as that of $N_t$ in all of $\mathbb R^n$ (although this does not somehow show up in the calculation above).</p>
<p>It seems that the most promising way to go is to demand that $r(s+p)-r(s)\gtrsim1$ and $\dot r r^{t-2}\lesssim1$.</p>
<p>This answer is not conclusive, though...</p>
|
2,341,552 | <p>I am pretty sure that
$$\bigg|\int_{A}e^{it}\,dt\bigg|\leq2$$
for every measurable set $A\subseteq[-\pi,\pi]$,
but I cannot prove this...</p>
| crankk | 202,579 | <p>Hint: $e^{it}=\cos t + i \sin t$ and split $[-\pi,\pi]$ into the regions $[-\pi,-\pi/2],[-\pi/2,0],[0,\pi/2],[\pi/2,\pi]$. How do $\cos$ and $\sin$ behave on these regions? </p>
|
2,341,552 | <p>I am pretty sure that
$$\bigg|\int_{A}e^{it}\,dt\bigg|\leq2$$
for every measurable set $A\subseteq[-\pi,\pi]$,
but I cannot prove this...</p>
| Weaam | 1,746 | <p>We avoid arguments that use the modulus (i.e. triangle inequality) since for some measurable sets, for instance $A = [-\pi, \pi]$, we have an overestimate $$\Big|\int_A e^{it}dt\Big| \leq 2 < \int_A|e^{it}|dt = 2\pi$$</p>
<p>Simply by rotating the value of the integral (which is a complex number) back to the real numbers. </p>
<p>Let $f(t) = e^{it}$. Since $\int_A f dt$ is a complex number, it has magnitude and phase $$\int_A f dt = \Big|\int_A f dt\Big| e^{i\theta}$$ for some $\theta \in [-\pi, \pi]$. Note that $\theta$ is independent of time, hence
$$\Big|\int_A f dt\Big| = \int_A f e^{-i\theta} dt$$
The integrals are real valued, hence we consider the real component only $$\int_A f e^{-i\theta} dt = \int_A \mathfrak{Re}(f e^{-i\theta}) dt = \int_A \cos(t - \theta) dt$$</p>
<p>It remains to prove that the real integral $ \int_A \cos(t - \theta) dt \leq 2$.</p>
<p><em>Hint.</em> $\cos(t-\theta)$ is non-negative for $t-\theta \in [-\pi/2, \pi/2]$, so we are interested in such elements of $A$, denote them by $A^+$, so that $$\int_A \cos(t-\theta) dt \leq \int_{A^+} \cos(t-\theta) dt \leq \int_{[-\pi/2, \pi/2]} \cos(t)dt = 2$$</p>
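<p>The bound can also be tested numerically on finite unions of intervals, where the integral is exact: $\int_a^b e^{it}\,dt = (e^{ib} - e^{ia})/i$. A Python sketch of mine:</p>

```python
import cmath
import random

def integral_over_intervals(intervals):
    """Exact integral of e^{it} over a disjoint union of intervals (a, b)."""
    return sum((cmath.exp(1j * b) - cmath.exp(1j * a)) / 1j for a, b in intervals)

def random_disjoint_intervals(rng, k):
    """k disjoint subintervals of [-pi, pi]."""
    pts = sorted(rng.uniform(-cmath.pi, cmath.pi) for _ in range(2 * k))
    return list(zip(pts[0::2], pts[1::2]))
```

<p>The modulus stays below $2$ for every union tried, and the bound is attained for $A = [-\pi/2, \pi/2]$.</p>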
|
554,003 | <p>How can I find a closed form for the following sum?
$$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$
($H_n=\sum_{k=1}^n\frac{1}{k}$).</p>
| Lucian | 93,448 | <p>I believe the answer you're looking for is in <a href="http://en.wikipedia.org/wiki/Experimental_mathematics#Applications_and_examples">this Wikipedia article</a> :</p>
<blockquote>
<p>The following identity was first conjectured by <strong>Enrico Au-Yeung</strong> , a student of <a href="http://en.wikipedia.org/wiki/Jonathan_Borwein">Jonathan Borwein</a>, using computer search and the <a href="http://en.wikipedia.org/wiki/Integer_relation_algorithm">PSLQ algorithm</a>, in <strong>1993</strong> :
$$\sum_{k=1}^\infty \frac{1}{k^2}\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{k}\right)^2 = \frac{17\pi^4}{360}.$$</p>
</blockquote>
<p>A simple <a href="https://www.google.com/search?q=Enrico+Au+Yeung">Google search</a> will return several papers in PDF format containing this and other curious and interesting mathematical identities. Or you could simply visit <a href="http://en.wikipedia.org/wiki/David_H._Bailey">David H. Bailey</a>'s own <a href="http://www.davidhbailey.com/dhbpapers">page</a>, and search for papers containing the string <em>experiment</em> in their title, most of which also contain this and many other similar results. The proofs are based on a combination of one or more of the following: the <strong>PSLQ algorithm</strong> I've already mentioned, <a href="http://en.wikipedia.org/wiki/Computer-assisted_proof">computer-assisted proofs</a>, and-or <a href="http://en.wikipedia.org/wiki/Inverse_Symbolic_Calculator">inverse symbolic computation</a>.</p>
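<p>For the skeptical reader, the constant is also easy to check numerically. The terms decay like $(\log n)^2/n^2$, so the error after $N$ terms is roughly $(\log N)^2/N$; a short Python sketch of mine:</p>

```python
import math

def partial_sum(n_terms):
    """Partial sum of sum_{n>=1} (H_n / n)^2, with H_n updated incrementally."""
    total, h = 0.0, 0.0
    for n in range(1, n_terms + 1):
        h += 1.0 / n
        total += (h / n) ** 2
    return total
```

<p>With $200{,}000$ terms the partial sum agrees with $17\pi^4/360 \approx 4.5999$ to about three decimal places.</p>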
|
554,003 | <p>How can I find a closed form for the following sum?
$$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$
($H_n=\sum_{k=1}^n\frac{1}{k}$).</p>
| Ali Shadhar | 432,085 | <p><strong>Different approach:</strong></p>
<p>Start with <a href="https://math.stackexchange.com/questions/3366039/a-group-of-important-generating-functions-involving-harmonic-number">the identity</a></p>
<p><span class="math-container">$$\sum_{n=1}^\infty (H_n^{(2)}-H_n^2)x^{n}=-\frac{\ln^2(1-x)}{1-x}$$</span></p>
<p>Multiply both sides by <span class="math-container">$-\frac{\ln x}{x}$</span> and integrate between <span class="math-container">$0$</span> and <span class="math-container">$1$</span> and use <span class="math-container">$\int_0^1-x^{n-1}\ln x\ dx=\frac1{n^2}$</span> we get</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{H_n^{(2)}-H_n^2}{n^2}=\int_0^1\frac{\ln x\ln^2(1-x)}{x(1-x)}dx=\int_0^1\frac{\ln(1-x)\ln^2x}{(1-x)x}dx$$</span></p>
<p><span class="math-container">$$=-\sum_{n=1}^\infty H_n\int_0^1 x^{n-1}\ln^2x\ dx=-2\sum_{n=1}^\infty\frac{H_n}{n^3}=-\frac52\zeta(4)$$</span></p>
<p><span class="math-container">$$\Longrightarrow\sum_{n=1}^\infty\frac{H_n^2}{n^2}=\frac52\zeta(4)+\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^2}=\frac52\zeta(4)+\frac74\zeta(4)=\frac{17}4\zeta(4)$$</span></p>
|
554,003 | <p>How can I find a closed form for the following sum?
$$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$
($H_n=\sum_{k=1}^n\frac{1}{k}$).</p>
| FDP | 186,817 | <p>This result was a problem in American mathematical monthly in 50s.</p>
<p>Source:</p>
<p>H. F. Sandham and Martin Kneser, <em>The American mathematical monthly</em>, Advanced problem 4305, Vol. 57, No. 4 (Apr., 1950), pp. 267-268</p>
<p><span class="math-container">\begin{align}S&=\sum_{n=1}^\infty \left(\frac{\text{H}_n}{n}\right)^2\\
&=\sum_{n=1}^\infty \frac{1}{n^2}\left(\int_0^1 \frac{1-t^n}{1-t}dt\right)\left(\int_0^1 \frac{1-u^n}{1-u}du\right)\\
&=\int_0^1 \int_0^1 \frac{\text{Li}_2(1)+\text{Li}_2( tu)-\text{Li}_2(t)-\text{Li}_2(u)}{(1-t)(1-u)}dtdu\\
&\overset{\text{IBP}}=\int_0^1 \int_0^1 \frac{\ln(1-t)\big(\ln(1-t)-\ln(1-tu)\big)}{t(1-u)}dtdu\\
&\overset{x=1-tu,y=\frac{1-t}{1-tu}}=\int_0^1\int_0^1 \frac{\ln y\ln(xy)}{(1-y)(1-xy)}dxdy\\
&\overset{w(x)=yx}=\int_0^1 \frac{\ln y}{y(1-y)}\left(\int_0^y \frac{\ln w}{1-w}dw\right)dy\\
&=\int_0^1 \frac{\ln y}{1-y}\left(\int_0^y \frac{\ln w}{1-w}dw\right)dy+\underbrace{\int_0^1 \frac{\ln y}{y}\left(\int_0^y \frac{\ln w}{1-w}dw\right)dy}_{\text{IBP}}\\
&=\frac{1}{2}\left(\int_0^1 \frac{\ln w}{1-w}dw\right)^2-\frac{1}{2}\int_0^1 \frac{\ln^3 w}{1-w}dw\\
&=\frac{1}{2}\times \left(\frac{\pi^2}{6}\right)^2-\frac{1}{2}\times -\frac{\pi^4}{15}\\
&=\boxed{\dfrac{17\pi^4}{360}}.
\end{align}</span> NB:
I assume that:</p>
<p><span class="math-container">\begin{align}\zeta(2)=\frac{\pi^2}{6},&&\zeta(4)=\frac{\pi^4}{90},&&\int_0^1 \frac{\ln w}{1-w}dw=-\zeta(2)=-\frac{\pi^2}{6},&&\int_0^1 \frac{\ln^3 w}{1-w}dw=-6\zeta(4)=-\frac{\pi^4}{15}\end{align}</span></p>
|
3,556,334 | <p>Given a rectangle with dimensions <span class="math-container">$10$</span>cm and <span class="math-container">$6$</span>cm, show that for every <span class="math-container">$3$</span> points in the interior of the rectangle, the area of the triangle is less than <span class="math-container">$30$</span> cm<span class="math-container">$^2$</span>.</p>
<p>I draw the diagonals but now I am stuck.</p>
| Acccumulation | 476,070 | <p>Let <span class="math-container">$x_{min}$</span> be the smallest x coordinate among the three points. Similarly for <span class="math-container">$x_{max}$</span>, <span class="math-container">$y_{min}$</span>, and <span class="math-container">$y_{max}$</span>. Clearly <span class="math-container">$x_{max}-x_{min} < 10$</span> and <span class="math-container">$y_{max}-y_{min} < 6$</span>, so all you have to do is show that the area of the triangle is at most <span class="math-container">$(x_{max}-x_{min})(y_{max}-y_{min})/2$</span>, half the area of the bounding box.</p>
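<p>A randomized sanity check of this bounding-box argument, using the shoelace formula for the area (a Python sketch of mine):</p>

```python
import random

def triangle_area(p, q, r):
    """Shoelace formula for the area of triangle pqr."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

rng = random.Random(0)
for _ in range(10_000):
    pts = [(rng.uniform(0, 10), rng.uniform(0, 6)) for _ in range(3)]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    box = (max(xs) - min(xs)) * (max(ys) - min(ys))
    area = triangle_area(*pts)
    assert area <= box / 2 + 1e-9  # triangle fills at most half its bounding box
    assert area < 30               # and the box fits strictly inside 10 x 6
```

<p>No random triangle inside the rectangle ever exceeds the claimed bound.</p>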
|
3,081,552 | <p>Is <code>∅ ⊈ { ∅, 1, 2 }</code> true or false?</p>
<p>Also, I am confused: since <code>{ ∅, 1, 2 }</code> already contains a <code>∅</code>, does it still contain another <code>∅</code>, as in <code>{ ∅, ∅, 1, 2 }</code>?</p>
<p>Is <code>∅ ∈ { ∅, 1, 2 }</code> true? And is <code>{∅} ∈ { ∅, 1, 2 }</code> false?</p>
| DanielWainfleet | 254,665 | <p><span class="math-container">$\neg (A\subset B)$</span> iff there exists a member of <span class="math-container">$A$</span> that is not a member of <span class="math-container">$B$</span>.</p>
<p>Consider the case <span class="math-container">$A=\phi.$</span> </p>
<p>Since <span class="math-container">$\phi$</span> has no members, there cannot exist a member of <span class="math-container">$\phi$</span> that fails to belong to <span class="math-container">$B$</span>. Therefore <span class="math-container">$\neg (\neg (\phi\subset B))$</span>, equivalently <span class="math-container">$\phi\subset B.$</span> For <span class="math-container">$any $</span> <span class="math-container">$B$</span>.</p>
|
63,534 | <p>I found in an article <a href="http://dx.doi.org/10.1103/PhysRev.105.776" rel="nofollow">"Imperfect Bose Gas with Hard-Sphere Interaction"</a>, <em>Phys. Rev.</em> 105, 776–784 (1957) the following integral, but I don't know how to solve it. Any hints?</p>
<p>$$\int_0^\infty {\int_0^\infty {\mathrm dp\mathrm dq\frac{\sinh(upq)}{q^2 - p^2}pq} } e^{-vq^2 - wp^2} = \frac{\pi}{4}\frac{u(w - v)}{\left[(w + v)^2-u^2 \right]\left(4wv-u^2\right)^{1/2}}$$</p>
<p>for $u,v,w > 0$.</p>
| Sangchul Lee | 9,340 | <p>I will assume that $u, v, w > 0$ and $4vw > u^2$. Let $$
I := \int_{0}^{\infty} \int_{0}^{\infty} \frac{\sinh (upq)}{p^2 - q^2} \, pq \, e^{-vp^2} e^{-wq^2} \; dpdq.
$$ By polar coordinate transform, we obtain $$
\begin{eqnarray*}
I & = & \int_{0}^{\frac{\pi}{2}} \int_{0}^{\infty} \frac{\sinh (r^2 u \cos \theta \sin \theta)}{r^2 \cos^2 \theta - r^2 \sin^2 \theta} \, r^2 \cos\theta \sin \theta \, e^{-vr^2 \cos^2 \theta} e^{-w r^2 \sin^2 \theta} \; r dr d\theta \\
& = & \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \frac{\tan \theta}{1 - \tan^2 \theta} \int_{0}^{\infty} \sinh(r^2 u \sin \theta \cos\theta) \, e^{-r^2 (v \cos^2 \theta + w \sin^2 \theta)} \; d(r^2) d\theta \\
& = & \frac{1}{2} \int_{0}^{\frac{\pi}{2}} \frac{\tan \theta}{1 - \tan^2 \theta} \left( \frac{u \cos\theta \sin\theta}{(v \cos^2 \theta + w \sin^2 \theta)^2 - u^2 \cos^2 \theta \sin^2 \theta} \right) d\theta \\
& = & \frac{1}{4} \int_{-\infty}^{\infty} \frac{u t^2}{(1-t^2)\left( (v + w t^2)^2 - u^2 t^2 \right)}\; dt. \qquad (\text{where} \ t = \tan \theta)
\end{eqnarray*}
$$ Now the last integral can be attacked by the standard contour integration technique. In particular, let $$ f(z) = \frac{u}{4} \frac{z^2}{(1-z^2)\left( (v + w z^2)^2 - u^2 z^2 \right)} $$ be the integrand. Then considering an appropriate upper-semicircular contour with vanishing dents at $\pm 1$, we obtain $$ \begin{align*} I = & \pi i \Bigg[ \mathrm{Res} \left\{ f, 1 \right\} + \mathrm{Res} \left\{ f, -1 \right\} \Bigg] \\ & + 2\pi i \Bigg[ \mathrm{Res} \left\{ f, \frac{u+i\sqrt{4vw-u^2}}{2w} \right\} + \mathrm{Res} \left\{ f, \frac{-u+i\sqrt{4vw-u^2}}{2w} \right\} \Bigg], \end{align*}$$
which yields the desired formula. (A tip : $\mathrm{Res} \{ f, 1 \} + \mathrm{Res} \{ f, -1 \} = 0$ because $\pm 1$ are simple poles of an even function $f$.)</p>
<p>p.s. While posting my solution, Andrew gave a nice solution.</p>
|
2,993,958 | <blockquote>
<p>Do there exist three non null vectors <span class="math-container">$a,b,c$</span> with <span class="math-container">$$a.b=a.c$$</span> such that <span class="math-container">$b\ne c$</span>?</p>
</blockquote>
<p>My attempt:</p>
<p>a.(b-c)=0</p>
<p>But is it possible to say b-c=0 since a is non null?</p>
| Peter Szilas | 408,605 | <p><span class="math-container">$\vec a\cdot (\vec b-\vec c)=0.$</span></p>
<p><span class="math-container">$\vec a$</span> is perpendicular to <span class="math-container">$(\vec b-\vec c)$</span>.</p>
<p>In 2D: </p>
<p>Let <span class="math-container">$\vec a =(1,0)$</span>, then choose <span class="math-container">$\vec b-\vec c =(0,1)$</span> (why?).</p>
<p><span class="math-container">$\vec a=(1,0)$</span>; <span class="math-container">$\vec b=\vec c +(0,1)$</span>;</p>
|
3,728,707 | <p>I would like to solve this linear diophantine equation:
<span class="math-container">$$
40x_1+296x_2+945x_3+2048x_4+4500x_5+8640x_6=616103
$$</span>
All the unknowns have to be integers in the set <span class="math-container">$\{10\} \cup [29,95]$</span>.</p>
<p>As a first step, I started by finding a particular solution of the equation without taking the constraints into account. I used the following procedure:</p>
<ul>
<li>Find a particular solution for <span class="math-container">$x_6$</span>:
<span class="math-container">$$
gcd(40,296,945,2048,4500)w_6+8640x_6=616103
$$</span>
I found <span class="math-container">$x_6=71$</span> and <span class="math-container">$w_6=2663$</span>. I can also determine a general solution: <span class="math-container">$X_6=71-n_6$</span> and <span class="math-container">$W_6=2663+8640n_6$</span>.</li>
<li>Use <span class="math-container">$w_6$</span> to find the other solutions by repeating the same procedure:
<span class="math-container">$$
40x_1+296x_2+945x_3+2048x_4+4500x_5=gcd(40,296,945,2048,4500)w_6 = 2663
$$</span>
<span class="math-container">$$
gcd(40,296,945,2048)w_5+4500x_5=2663
$$</span>
<span class="math-container">$$
...
$$</span></li>
</ul>
<p>That way, I could determine one particular solution of this equation:
<span class="math-container">$$
x_1=6876450,
x_2=-916860,
x_3=-3885,
x_4=1,
x_5=1,
x_6=71
$$</span>
I also introduce some intermediate variables to compute a general solution:
<span class="math-container">$$X_6=71-n_6, W_6=2663+8640n_6$$</span>
<span class="math-container">$$X_5=1+n_5,W_5=-1837-4500n_5$$</span>
<span class="math-container">$$X_4=1+n_4,W_4=-3885-2048n_4$$</span>
<span class="math-container">$$X_3=-3885+8n_3,W_3=458430-945n_3$$</span>
<span class="math-container">$$X_2=-916860-5n_2$$</span>
<span class="math-container">$$X_1=6876450+37n_2$$</span></p>
<p>Now, I'm stuck with my general solutions and I don't know what I can do to fit my particular solutions with the constraints. Here are the thoughts I got to solve this problem and the associated issue:</p>
<ul>
<li>I wanted to find what values of <span class="math-container">$n_2$</span> make <span class="math-container">$x_1$</span> in the interval defined above. I found <span class="math-container">$n_2=\{-185849,-185848\}$</span> that gives <span class="math-container">$x_1=\{37,74\}$</span> but <span class="math-container">$x_2$</span> is out of the interval with these values.</li>
<li>I wanted to rewrite the equation that way:
<span class="math-container">$$40\cdot(6876450+37n_2)+296\cdot(-916860-5n_2)+945\cdot(-3885+8n_3)+2048\cdot(1+n_4)+4500\cdot(1+n_5)+8640\cdot(71-n_6)=616103$$</span>
However, I can't because <span class="math-container">$n_2$</span>, <span class="math-container">$n_3$</span>, <span class="math-container">$n_4$</span>, <span class="math-container">$n_5$</span> and <span class="math-container">$n_6$</span> are not independent and the choice of the value of <span class="math-container">$n_6$</span>, for instance, has an impact on <span class="math-container">$w_6$</span> that has an impact on all general solutions.</li>
</ul>
<p>What can I do to find a solution that corresponds to the constraints?</p>
| Pierre | 401,549 | <p>To find all the answers, I coded my own solver following this algorithm:</p>
<ol>
<li>Find all possible <span class="math-container">$x_6$</span> values that fit the constraints, together with the corresponding <span class="math-container">$w_6$</span></li>
<li>For each possible <span class="math-container">$x_6$</span>, rewrite the equation this way:</li>
</ol>
<p><span class="math-container">$$
40x_1+296x_2+945x_3+2048x_4+4500x_5=gcd(40,296,945,2048,4500)w_6
$$</span></p>
<ol start="3">
<li><p>Find all possible <span class="math-container">$x_5$</span> values that fit the constraints, together with the corresponding <span class="math-container">$w_5$</span></p>
</li>
<li><p>Continue the same steps for all remaining unknowns</p>
</li>
</ol>
<p>I found 20926 solutions that satisfy the constraints. My code is available on <a href="https://gist.github.com/pierretallotte/31d59b860bd84e24621f6a9c0bf85008" rel="nofollow noreferrer">Gist</a>.</p>
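<p>The search described above can be sketched as a depth-first enumeration with interval pruning. The following is a hedged re-implementation of that idea (my own code, not the linked Gist; it assumes non-negative coefficients and domain values so the min/max bounds below are valid):</p>

```python
def solve(coeffs, target, domain):
    """Yield every tuple x with sum(coeffs[i] * x[i]) == target and each x[i] in domain."""
    dom = sorted(set(domain))
    n = len(coeffs)
    suf_min = [0] * (n + 1)  # minimal contribution of variables i..n-1
    suf_max = [0] * (n + 1)  # maximal contribution of variables i..n-1
    for i in range(n - 1, -1, -1):
        suf_min[i] = suf_min[i + 1] + coeffs[i] * dom[0]
        suf_max[i] = suf_max[i + 1] + coeffs[i] * dom[-1]

    def rec(i, rem, acc):
        if i == n:
            if rem == 0:
                yield tuple(acc)
            return
        if not (suf_min[i] <= rem <= suf_max[i]):
            return  # no completion possible: prune this branch
        for x in dom:
            yield from rec(i + 1, rem - coeffs[i] * x, acc + [x])

    yield from rec(0, target, [])
```

<p>On small instances it agrees with exhaustive search; applied to the question's equation with domain $\{10\}\cup\{29,\dots,95\}$ it should enumerate the same solution set.</p>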
|
375,025 | <p>How do I solve this simultaneous equation? I know it has multiple solutions, but would anyone be able to show me the exact steps in working these out for future reference? </p>
<p>$$
\begin{cases}
2x +y = 4\\
-6x - 3y = -12
\end{cases}
$$</p>
<p>Thank you...</p>
| Tazwar Sikder | 347,302 | <p>We have: $2x+y=4$ and $-6x-3y=-12$</p>
<p>Let's begin by solving each equation for $y$ and labelling them:</p>
<p>$\Rightarrow y=4-2x \hspace{80 mm}$ (i)</p>
<p>$\Rightarrow y=\dfrac{12-6x}{3}=4-2x \hspace{58.5 mm}$ (ii)</p>
<p>Graphically, these two lines overlap.</p>
<p>Therefore, the system of equations has an infinite number of solutions.</p>
|
1,784,469 | <p>I roll a biased dice 9000 times. the probability of seeing 1,2,3,4,5,6 are $\frac{1}{3}, \frac{1}{12}, \frac{1}{12}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}$ respectively. I need to use the Central limit theorem to estimate the probability that the total number of 1s that I see is within [2970,3040].</p>
<p>So far, I only know that the random variables $X_i$ of the CLT are the individual rolls. I'm not sure how I'm supposed to apply the information given. Please help this hopeless soul :(( </p>
| André Nicolas | 6,312 | <p>For $i=1$ to $n$, where $n=9000$, define random variable $X_i$ by $X_i=1$ if we roll a $1$ on the $i$-th roll, and by $X_i=0$ otherwise.</p>
<p>Let $Y=X_1+X_2+\cdots+X_n$. Then $Y$ is the sum of a large number of independent identically distributed "nice" random variables. So $Y$ has a reasonably close to normal distribution. (We have used an informal version of the Central Limit Theorem. The actual distribution of $Y$ is binomial, $n=9000$, $p=1/3$.)</p>
<p>I will assume that you know that the mean of $Y$ is $(1/3)(9000)$, and that the variance of $Y$ is $(1/3)(2/3)(9000)$. So $Y$ has standard deviation $\sqrt{2000}$. </p>
<p>Finally, we need to calculate the probability that
$$2970\le W\le 3040,$$
where $W$ is normal with mean $3000$ and standard deviation $\sqrt{2000}$. Software will do the job. But to do it the old-fashioned way, we find
$$\Pr\left(\frac{2970-3000}{\sqrt{2000}}\le Z\le \frac{3040-3000}{\sqrt{2000}}\right),$$
where $Z$ is standard normal. Then the calculation can be completed using tables of the standard normal.</p>
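<p>For completeness, the calculation can be finished numerically with a short sketch. This writes the standard-normal CDF $\Phi$ in terms of the error function, $\Phi(z) = \tfrac12\bigl(1+\operatorname{erf}(z/\sqrt{2})\bigr)$, so only the standard library is needed:</p>

```python
import math

def phi(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu = 9000 / 3                              # mean of Y
sigma = math.sqrt(9000 * (1/3) * (2/3))    # standard deviation, sqrt(2000)

p = phi((3040 - mu) / sigma) - phi((2970 - mu) / sigma)
print(round(p, 4))  # roughly 0.56
```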
|
192,391 | <p>I know that this is sometimes the case, but that some matrices are not tensors.
So what are the intuitive and specific requirements for a matrix to also be a tensor?
Does it need to be square, singular, or something else?</p>
<p>Some sources I read seem to suggest that all matrices are rank-2 tensors, while others claim only that "some" matrices are rank-2 tensors.</p>
<p>What's the connection between tensors and matrices?</p>
| rschwieb | 29,335 | <p>The connection is this: a matrix consists of the coefficients of a (1,1) tensor, but it is not a tensor itself.</p>
<p>Suppose we are talking about a linear transformation $T$ on an $n$ dimensional vector space $V$.</p>
<p>Now $T$ is certainly a tensor (tensors are, after all, multilinear maps on copies of $V$ and $V^\ast$, and a linear transformation can be interpreted as a multilinear function from $V\times V^\ast$ to $\mathbb{F}$.)</p>
<p>Once a basis for $V$ is fixed, then you can talk about the matrix $A$ for $T$ which is written in terms of the basis. The same can be said for general multilinear functions on copies of $V$ and $V^\ast$, that after you have fixed a basis, you have a big array holding its coefficients. </p>
<p>It's important to remember not to confuse the array for the tensor. The tensor is a basis independent entity: it's a kind of function. The components are just one particular representation of that function, and the components depend upon a choice of basis.</p>
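<p>A small sketch makes this concrete (pure Python with hand-rolled $2\times2$ arithmetic; the particular matrices are made up for illustration): the same linear map gets a different matrix in a different basis, while basis-independent data of the tensor, such as the trace, is unchanged.</p>

```python
# The same linear map T, written in two bases.
def matmul(A, B):
    """2x2 matrix product (kept dependency-free on purpose)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]       # matrix of T in the standard basis
P = [[1, 1], [0, 1]]       # change-of-basis matrix (columns = new basis vectors)
P_inv = [[1, -1], [0, 1]]  # its inverse

A_new = matmul(P_inv, matmul(A, P))  # matrix of the SAME map in the new basis
print(A_new)                         # [[2, 0], [0, 3]] -- a different array...

# ...but the basis-independent content of the tensor is unchanged:
assert A[0][0] + A[1][1] == A_new[0][0] + A_new[1][1]  # trace preserved
```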
|
192,391 | <p>I know that this is sometimes the case, but that some matrices are not tensors.
So what is the intuitive and specific demands of a matrix to also be a tensor?
Does it need to be quadratic, singular or something else?</p>
<p>Some sources I read seem to suggest that all rank 2 matrices are tensors while other just claims that "some" matrices are rank 2 tensors.</p>
<p>What's the connection between tensors and matrices?</p>
| Farshad Ashkbous | 347,365 | <p>Not all matrices are tensors, although every rank-2 tensor can be written as a matrix.</p>

<p>Example: consider the matrix $T$ with entries $T_{11}=x$, $T_{12}=-y$, $T_{21}=x^2$, $T_{22}=-y^2$. This matrix is not a rank-2 tensor. Test $T$ against the rotation matrix $A$ with $a_{11}=\cos\theta$, $a_{12}=\sin\theta$, $a_{21}=-\sin\theta$, $a_{22}=\cos\theta$, and expand the rank-2 tensor transformation law, for example $$T'_{11}=\sum_{i,j} a_{1i}\,a_{1j}\,T_{ij}. \tag{1}$$</p>

<p>If $T$ were a tensor whose $11$-component is the coordinate $x$, then in the rotated frame we would have $$T'_{11}=x'=x\cos\theta+y\sin\theta. \tag{2}$$</p>

<p>You can check that (1) is not equal to (2), so the matrix $T$ is not a rank-2 tensor!</p>

<p>A tensor must follow the transformation rules, but a general matrix does not.</p>
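<p>The failure of the transformation law can also be checked numerically. A sketch (with an arbitrary angle and point; `outer` below is a hypothetical comparison case, $T_{ij}=x_ix_j$, which genuinely does transform as a rank-2 tensor, $T' = A\,T\,A^{\mathsf T}$):</p>

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

theta = 0.7
A = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]   # rotation matrix

def T(x, y):                                # the matrix from the answer
    return [[x, -y], [x * x, -y * y]]

def outer(x, y):                            # T_ij = x_i x_j: a genuine tensor
    return [[x * x, x * y], [x * y, y * y]]

x, y = 1.0, 2.0
xp = math.cos(theta) * x + math.sin(theta) * y   # rotated coordinates
yp = -math.sin(theta) * x + math.cos(theta) * y

def close(M, N):
    return all(abs(M[i][j] - N[i][j]) < 1e-9 for i in range(2) for j in range(2))

# Tensor law: T' = A T A^t should reproduce the same formula at (x', y').
not_tensor_ok = close(matmul(A, matmul(T(x, y), transpose(A))), T(xp, yp))
tensor_ok = close(matmul(A, matmul(outer(x, y), transpose(A))), outer(xp, yp))
print(not_tensor_ok, tensor_ok)  # False True
```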
|
2,781,702 | <p>Honestly, I have no idea if I put the correct tag on this question, and I don't even know where to begin to solve an equation like this:
$$
f(d,n)=\sum_{i=1}^n\binom{d}{i}.
$$</p>
<p>Could someone explain what the "d" over the "i" inside the parentheses means? I'm attempting to solve for when d equals 20 and n equals 3, but I can't work out what I'm supposed to do here.</p>
<p>Thanks all!</p>
| fleablood | 280,126 | <p>${d \choose i}$ = "d choose i" is the number of different ways to choose $i$ objects from $d$, total.</p>
<p>So for example if you are given a bag with $a,b,c,d$ in it and you are told to pick two items you can do one of the following: pick $a,b$ ; pick $a,c$; pick $a,d$; pick $b,c$; pick $b,d$ or pick $c,d$. Those are six possible ways. So ${4 \choose 2} = $ "$4$ choose $2$" $= 6$.</p>
<p>So is there an algebraic formula for ${d\choose i}$? Why, yes there is. </p>
<p>There are $d$ options for the first item. Once you choose the first, there are $d-1$ for the second, all the way down to $d-i+1$ choices for the last item.</p>
<p>So for picking two items from $a,b,c,d$ you have $4$ choices for the first item and $3$ for the second.</p>
<p>So there are $d*(d-1)*(d-2)*.....*(d-i + 1)$ ways to pick out a list of $i$ items.</p>
<p>So there are $4*3$ ways to choose a list of two items from $a,b,c,d$.</p>
<p>They are: $a,b; a,c; a,d; b,a; b,c; b,d; c,a; c,b; c,d;$ and $d,a; d,b; d,c$.</p>
<p>But wait! That is treating $a,b$ as though it were different from $b,a$, and $d,a$ as though it were different from $a,d$.</p>
<p>For every list of $i$ items there are several ways to order them. But we consider them the same no matter how we order them. So the ways to pick them are $\frac {d*(d-1)*....*(d-i+1)}{\text{ number of ways to order a list of }i\text{ items}}$.</p>
<p>So what is $\text{ number of ways to order a list of }i\text{ items}$? Well, there are $i$ ways to choose the first item, $i-1$ ways to choose the second and so on, all the way down to $1$ choice for the last item. $\text{ number of ways to order a list of }i\text{ items} = i*(i-1)*....*2*1$.</p>
<p>So ${d\choose i} = \frac {d*(d-1)*....*(d-i+1)}{i*(i-1) *.....* 2*1}$.</p>
<p>Jeez, that's a lot to type out. Is there any shorthand notation for that? Why, yes there is.</p>
<p>If $k$ is a positive integer we refer to $k!$ = "$k$ factorial" as $k*(k-1)*(k-2)*...2*1$. This is number of ways you can order $k$ items. Example: $2! = 2*1 = 2$ and we can order $a,b$ in two ways: either $a$ comes first or $b$ does. Example: $3! = 3*2*1 = 6$ and we can order $a,b,c$ six ways: either $a$ or $b$ or $c$ comes first. If $a$ comes first the either $b$ or $c$ can come second whereas if $b$ comes first than either $a$ or $c$ can come second, and if $c$ comes first, $a$ or $b$ can come second. In other words; $abc; acb; bac;bca;cab;cba$. And if you want to figure out how many ways you can arrange the $26$ letters of the alphabet it is: $26*25*24*....*3*2*1 = 26! = 403291461126605635584000000$.</p>
<p>So ${d\choose i}= \frac {d*(d-1)*....*(d-i+1)}{i*(i-1)*....*1} = \frac {d*(d-1)*....*(d-i+1)}{i*(i-1)*....*1}*\frac {(d-i)*(d-i-1)*....*2*1}{(d-i)*(d-i-1)*....*2*1} = \frac {d*(d-1)*....*(d-i+1)*(d-i)*(d-i-1)*....*2*1}{(i*(i-1)*....*1)*((d-i)*(d-i-1)*....*2*1)}=\frac {d!}{(d-i)!*i!}$.</p>
<p>Okay.... that is what ${d\choose i}$ is.</p>
<p>So what is ${d \choose 1} + {d\choose 2} + {d\choose 3} + .... + {d\choose n}$ equal to? Well that is an entirely other question. I'll let you play with it.</p>
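<p>For the asker's specific numbers, though, a quick sketch with Python's built-in binomial function settles it:</p>

```python
import math

def f(d, n):
    """Sum of binomial coefficients C(d, 1) + C(d, 2) + ... + C(d, n)."""
    return sum(math.comb(d, i) for i in range(1, n + 1))

# The asker's case: d = 20, n = 3.
print(f(20, 3))  # 20 + 190 + 1140 = 1350
```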
|