qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
4,365,859 | <p>My professor had given us a problem. It goes like -</p>
<blockquote>
<p>Find equation of circle passing through intersection of circle <span class="math-container">$S=x^2+y^2-12x-4y-10=0$</span> and <span class="math-container">$L:3x+y=10$</span> and having radius equal to that of circle <span class="math-container">$S$</span>.</p>
</blockquote>
<p>So when he was telling the solution, he setup an equation like -</p>
<p><span class="math-container">$x^2+y^2-12x-4y-10+2t(3x+y-10)=0$</span> where <span class="math-container">$t \in \mathbb{R}$</span></p>
<p><span class="math-container">$\implies x^2+y^2+2(3t-6)x+2(t-2)y-(10+20t)=0$</span></p>
<p>Now we can find that the radius of <span class="math-container">$S$</span> is <span class="math-container">$\sqrt{50}$</span>.</p>
<p>Now, using the formula <span class="math-container">$r=\sqrt{g^2+f^2-c}$</span> for a general circle, we get</p>
<p><span class="math-container">$(6-3t)^2+(t-2)^2+10+20t=50\implies t=0,2$</span></p>
<p>Now we can remove <span class="math-container">$t=0$</span> as we will get <span class="math-container">$S$</span></p>
<p>So putting <span class="math-container">$t=2 \implies \boxed{x^2+y^2=50}$</span></p>
<p>Now this solution is OK, but I didn't get why putting in <span class="math-container">$t$</span> worked. I mean, why does <span class="math-container">$x^2+y^2-12x-4y-10+2t(3x+y-10)=0$</span> work? Can someone explain this?</p>
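<p>As a quick numerical sanity check (an editor's sketch, not part of the original post), one can confirm that <span class="math-container">$t=2$</span> really gives <span class="math-container">$x^2+y^2=50$</span>, which passes through both intersection points of <span class="math-container">$S$</span> and <span class="math-container">$L$</span> and shares the radius of <span class="math-container">$S$</span>:</p>

```python
# Sanity check (editor's sketch): the t = 2 member of the family
# S + 2t*L = 0 is x^2 + y^2 = 50, which passes through both
# intersection points of S and L and has the same radius as S.
def S(x, y):            # the given circle
    return x**2 + y**2 - 12*x - 4*y - 10

def L(x, y):            # the given line
    return 3*x + y - 10

# Substituting y = 10 - 3x into S = 0 gives x^2 - 6x + 5 = 0,
# so the intersection points are (1, 7) and (5, -5).
for (x, y) in [(1, 7), (5, -5)]:
    assert S(x, y) == 0 and L(x, y) == 0   # on both S and L
    assert x**2 + y**2 == 50               # on the t = 2 circle

# Radius of S: center (6, 2), so r^2 = 6^2 + 2^2 + 10 = 50.
assert 6**2 + 2**2 + 10 == 50
```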
| Math Lover | 801,574 | <p>After changing the coordinates, in effect you are rotating <span class="math-container">$x^2 + (y-a)^2 = a^2$</span> around x-axis.</p>
<p>The circle is <span class="math-container">$x^2 + y^2 = 2 ay$</span></p>
<p><span class="math-container">$ \displaystyle y' = \frac{x}{a-y}$</span></p>
<p><span class="math-container">$ \displaystyle ds = \sqrt{1 + (y')^2} ~dx = \frac{a}{|y-a|} ~ dx$</span></p>
<p>For lower half -</p>
<p><span class="math-container">$y = a - \sqrt{a^2-x^2}$</span></p>
<p>So, <span class="math-container">$ \displaystyle S_1 = 2 \pi a \int_{-a}^a \frac{a - \sqrt{a^2-x^2}}{\sqrt{a^2-x^2}} ~ dx$</span><br />
<span class="math-container">$ = 2 \pi a^2 (\pi - 2)$</span></p>
<p>For upper half -</p>
<p><span class="math-container">$y = a + \sqrt{a^2-x^2}$</span></p>
<p>So, <span class="math-container">$ \displaystyle S_2 = 2 \pi a \int_{-a}^a \frac{a + \sqrt{a^2-x^2}}{\sqrt{a^2-x^2}} ~ dx$</span><br />
<span class="math-container">$ = 2 \pi a^2 (\pi + 2)$</span></p>
<p>Adding both, <span class="math-container">$S = 4 \pi^2 a^2$</span></p>
<p>But it is easier in polar coordinates as I mentioned in comments. The circle is,</p>
<p><span class="math-container">$r = 2a \sin\theta, \quad 0 \leq \theta \leq \pi$</span></p>
<p><span class="math-container">$\dfrac{dr}{d\theta} = 2a \cos\theta$</span></p>
<p><span class="math-container">$ \displaystyle ds = \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2} ~ d\theta = 2a ~ d\theta$</span></p>
<p><span class="math-container">$y = 2a\sin^2\theta$</span></p>
<p>So the integral is,</p>
<p><span class="math-container">$ \displaystyle S = 8 \pi a^2 \int_0^{\pi} \sin^2\theta ~ d\theta = 4 \pi^2 a^2$</span></p>
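<p>(Editor's check: since the rotated circle touches the axis, the surface is a horn torus of area <span class="math-container">$4\pi^2 a^2$</span>; a midpoint-rule evaluation of the polar-form integral confirms this.)</p>

```python
# Editor's check: midpoint rule for S = 8*pi*a^2 * ∫_0^pi sin^2(θ) dθ
# with a = 1, compared against the closed form 4*pi^2.
import math

a, N = 1.0, 10_000
h = math.pi / N
integral = sum(math.sin((k + 0.5) * h) ** 2 for k in range(N)) * h
S = 8 * math.pi * a**2 * integral

assert abs(S - 4 * math.pi**2) < 1e-6
```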
|
403,977 | <blockquote>
<p>Let $u_n>u_{n+1}>0$ for all $n \in \Bbb N$, and suppose that $u_2+u_4+u_8+u_{16}+\dots$ diverges. Prove that $\sum_{n=1}^{\infty}\frac{u_n}{n}$ diverges.</p>
</blockquote>
<p>Please provide me a hint or a full solution.</p>
| Bill Kleinhans | 73,675 | <p>Suppose $ u_n = (\log n)^{-1}$</p>
|
224,474 | <p>I want to generate the following matrix for any n:</p>
<pre><code> Table[A[i, j], {i, 0, n}, {j, 0, n}];
</code></pre>
<p>Where,</p>
<pre><code> A ={{0,1,0},{0,0,2},{0,0,0}} when n=2;
A={{0,1,0,0},{0,0,2,0},{0,0,0,3},{0,0,0,0}} when n=3;
A={{0,1,0,0,0},{0,0,2,0,0},{0,0,0,3,0},{0,0,0,0,4},{0,0,0,0,0}} when n=4, and so on for any n.
</code></pre>
<p>Thanks</p>
| kglr | 125 | <p>You can also use <a href="https://reference.wolfram.com/language/ref/SparseArray.html" rel="noreferrer"><code>SparseArray</code></a> + <a href="https://reference.wolfram.com/language/ref/Band.html" rel="noreferrer"><code>Band</code></a>:</p>
<pre><code>ClearAll[sA]
sA[n_] := SparseArray[Band[{1, 2}, Automatic] -> Range@n, n + {1, 1}]
</code></pre>
<p><em><strong>Examples:</strong></em></p>
<pre><code>sA /@ Range[2, 4] // Map[MatrixForm] // Row
</code></pre>
<p><a href="https://i.stack.imgur.com/K5Ovp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/K5Ovp.png" alt="enter image description here" /></a></p>
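<p>For readers outside Mathematica, the same superdiagonal matrix is easy to build in plain Python (editor's sketch, not part of the original answer):</p>

```python
# Editor's sketch: build the (n+1) x (n+1) matrix with 1..n on the
# superdiagonal, mirroring SparseArray[Band[{1, 2}] -> Range@n].
def super_diag(n):
    A = [[0] * (n + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):
        A[k - 1][k] = k        # entry k sits just above the diagonal
    return A

assert super_diag(2) == [[0, 1, 0], [0, 0, 2], [0, 0, 0]]
assert super_diag(3) == [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]
```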
|
3,926,580 | <p>I'm trying to prove with squeeze theorem that the limit of the following series equals 1:</p>
<p><span class="math-container">$$\frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n}$$</span></p>
<p>For the left side of the inequality I did:</p>
<p><span class="math-container">$$\frac{1+\sqrt{1}+\sqrt[3]{1}+...+\sqrt[n]{1}}{n} < \frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n}$$</span></p>
<p>For the right side, at first I did the following:</p>
<p><span class="math-container">$$\frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n} < \frac{n\sqrt[n]{n}}{n}$$</span></p>
<p>But then I realized it wasn't true and that the direction of this inequality is the opposite.</p>
<p>Do you have any idea which series with limit 1 is bigger from the original series?</p>
<p>Thanks!</p>
| robjohn | 13,854 | <p>As shown in <a href="https://math.stackexchange.com/a/3926707">this answer</a>, the Binomial Theorem says that for <span class="math-container">$n\ge1$</span>,
<span class="math-container">$$
\begin{align}
1\le n^{1/n}
&\le1+\sqrt{\frac2n}\tag{1a}\\
&\le1+\frac{2\sqrt2}{\sqrt{n}+\sqrt{n-1}}\tag{1b}\\[3pt]
&=1+2\sqrt2\left(\sqrt{n}-\sqrt{n-1}\right)\tag{1c}
\end{align}
$$</span>
Thus,
<span class="math-container">$$
\frac nn\le\frac1n\sum_{k=1}^nk^{1/k}\le\frac1n\left[n+2\sqrt2\sum_{k=1}^n\left(\sqrt{k}-\sqrt{k-1}\right)\right]\tag2
$$</span>
and, because the sum on the right side of <span class="math-container">$(2)$</span> <a href="https://en.wikipedia.org/wiki/Telescoping_series" rel="nofollow noreferrer">telescopes</a>, we have
<span class="math-container">$$
1\le\frac1n\sum_{k=1}^nk^{1/k}\le1+\frac{2\sqrt2}{\sqrt{n}}\tag3
$$</span>
to which we can apply the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a>.</p>
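<p>(Editor's check: inequality <span class="math-container">$(3)$</span> is easy to confirm numerically.)</p>

```python
# Editor's check of (3): 1 <= (1/n) * Σ_{k=1}^n k^(1/k) <= 1 + 2*sqrt(2)/sqrt(n).
import math

for n in (10, 100, 10_000):
    mean = sum(k ** (1 / k) for k in range(1, n + 1)) / n
    assert 1 <= mean <= 1 + 2 * math.sqrt(2) / math.sqrt(n)
```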
|
2,211,461 | <p>Given that $f$ is an integrable function on $X$ and $\{E_k\}_{k=1}^\infty$ where each $E_k$ is a measurable set such that $\lim_{k\rightarrow \infty} \mu(E_k) = 0$</p>
<p>Can we show that $$\lim_{k\rightarrow \infty} \int_{E_k} fd\mu = 0$$ </p>
<p>I want to prove it like this:
$$\left|\int_{E_k} f\,d\mu\right| \leq \sup|f|\cdot \mu(E_k) \rightarrow 0 $$
The problem is when $|f| \rightarrow \infty$; I'm not sure if this is valid.</p>
<p>And if we remove the condition $f$ integrable and instead make f positive measurable, does the result still hold?</p>
| mathworker21 | 366,088 | <p>Let $f_k = f\chi_{E_k}$. Note $f_k \to 0$ almost everywhere since $\mu(E_k) \to 0$. Also, $|f_k| \le |f|$ which is integrable, so Dominated Convergence Theorem gives you the result.</p>
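<p>(Editor's illustration of why the <span class="math-container">$\sup|f|$</span> bound from the question fails while the conclusion still holds: take <span class="math-container">$f(x)=x^{-1/2}$</span> on <span class="math-container">$(0,1]$</span>, which is integrable but unbounded, and <span class="math-container">$E_k=(0,1/k)$</span>.)</p>

```python
# Editor's illustration: f(x) = x^(-1/2) is integrable on (0,1] but
# unbounded, yet with E_k = (0, 1/k) the integrals ∫_{E_k} f = 2/sqrt(k)
# still tend to 0 -- so sup|f| * mu(E_k) is the wrong bound, not the result.
import math

def tail_integral(k):          # closed form of ∫_0^{1/k} x^(-1/2) dx
    return 2 / math.sqrt(k)

vals = [tail_integral(k) for k in (1, 100, 10_000, 1_000_000)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing
assert vals[-1] < 0.01                              # tends to 0
```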
|
3,826,152 | <p>How can I verify this inequality by using induction?</p>
<p><span class="math-container">$$\frac1{2n}\le \frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}$$</span></p>
<p>I understand the principle of induction, but I'm struggling with how my teacher is solving this problem. I highlighted the area I'm stuck at in the solution - why are we able to take <span class="math-container">$1/2k$</span> and see if it's less than the left side of our induction hypothesis? (view the highlighted part of the photo)</p>
<p>Any suggestions would be great - I really don't understand how to solve this problem. If I could get a step by step walk through of this proof with side notes, I'd be very grateful</p>
<p><a href="https://i.stack.imgur.com/LU2yX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LU2yX.png" alt="enter image description here" /></a></p>
| Fawkes4494d3 | 260,674 | <blockquote>
<p>In this screenshot from your image call the LHS and RHS of the first inequality (which is a consequence of the induction hypothesis) <span class="math-container">$B$</span> and <span class="math-container">$C$</span> respectively. The first inequality is stating that <span class="math-container">$\boxed{B\le C}$</span>.</p>
</blockquote>
<blockquote>
<p>Call the LHS of the second inequality, marked in yellow by you, to be <span class="math-container">$A$</span>.</p>
</blockquote>
<p>You want to prove <span class="math-container">$A\le C$</span>, and you have already proved <span class="math-container">$B\le C$</span> (mentioned in the first block above), so <br> if you can prove <span class="math-container">$A\le B$</span>, then you can combine the consequence of the first inequality, i.e <span class="math-container">$\boxed{B\le C}$</span> with <span class="math-container">$A\le B$</span> to get <span class="math-container">$$A\le B \le C \implies A \le C$$</span> which would prove the claim. So we <em>want to prove</em> (as written by your teacher) that <span class="math-container">$A\le B$</span>.</p>
<p><a href="https://i.stack.imgur.com/mKFr6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mKFr6.png" alt="enter image description here" /></a></p>
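<p>(Editor's check: the claimed inequality itself is easy to verify with exact arithmetic for small <span class="math-container">$n$</span>.)</p>

```python
# Editor's check with exact rational arithmetic:
# 1/(2n) <= (1*3*...*(2n-1)) / (2*4*...*(2n)) for n = 1..50.
from fractions import Fraction

ratio = Fraction(1)
for n in range(1, 51):
    ratio *= Fraction(2 * n - 1, 2 * n)   # multiply in the next odd/even pair
    assert Fraction(1, 2 * n) <= ratio
```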
|
144,550 | <p>Cross post <a href="http://community.wolfram.com/groups/-/m/t/1120587?p_p_auth=9d8Myocf" rel="nofollow noreferrer">here</a></p>
<hr>
<p>A word path means the last character of each word is the same as the first character of the next word, and the path has no duplicated word. If I have $55$ words, I can find it with this method.</p>
<h3>Build a graph</h3>
<pre><code>SeedRandom[2]
string = Select[ToLowerCase[RandomSample[DictionaryLookup[]]],
3 < StringLength[#] < 5 &];
g = RelationGraph[StringTake[#, -1] == StringTake[#2, 1] &,
string[[;; 55]], VertexLabels -> "Name"]
</code></pre>
<p><img src="https://i.stack.imgur.com/FdJXm.png" alt="Mathematica graphics"></p>
<h3>Find a longest word path by <a href="https://mathematica.stackexchange.com/users/9490/jason-b">Jason</a>'s answer <a href="https://mathematica.stackexchange.com/a/136278/21532">here</a></h3>
<pre><code>allPaths =
FindPath[g, #2, #1, Infinity, All] & @@@
Subsets[VertexList[g], {2}] // Apply[Join];
First[TakeLargestBy[allPaths, Length@Union@# &, 1]]
</code></pre>
<blockquote>
<p><code>{calm,muir,reef,fray,yaws,seed,deaf,fnma,axis,stow,waft,tint,trig,good,duns,sill,loge,etch,hill,lath,howl}</code></p>
</blockquote>
<p>Well, I have to say this is a very, very slow solution. I cannot even find more than $60$ words. Actually, I want to find a path through all <code>Length[DictionaryLookup[]]=92518</code> words. It seems I still have a long way to go. Does anyone have any suggestions?</p>
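<p>(Editor's sketch, not from the original post: for a handful of words, a brute-force depth-first search over simple paths is the baseline to beat; it is hopeless for the full dictionary, which is exactly why something smarter is needed.)</p>

```python
# Editor's sketch: brute-force longest word path (no repeated words) by DFS.
# Only feasible for tiny word lists; the full dictionary needs heuristics.
def longest_word_path(words):
    best = []

    def dfs(path, used):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for w in words:
            if w not in used and (not path or path[-1][-1] == w[0]):
                used.add(w)
                path.append(w)
                dfs(path, used)
                path.pop()
                used.remove(w)

    dfs([], set())
    return best

path = longest_word_path(["calm", "muir", "reef", "fray", "yaws", "seed"])
assert len(path) == 6                                        # chains all six
assert all(a[-1] == b[0] for a, b in zip(path, path[1:]))    # valid word path
```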
| Gustavo Delfino | 251 | <p>You are clearly a beginner. Experienced Mathematica users run away in panic when presented with so many <code>For</code> loops. <code>Do</code> is better than nested <code>For</code> loops, and <code>Table</code> is better still. In general, you never <code>Print</code> something you want to export.</p>
<p>These steps may help you:</p>
<pre><code>Table[
If[
NMaxValue[{B[n, b, l, m, d, p], n >= 1}, n ∈ Integers] <= 0
, ToString /@ {b, l, m, d, p, 0, 0}
, ToString /@ {b, l, m, d, p
, NArgMax[{B[n, b, l, m, d, p], n >= 1}, n ∈ Integers]
, NMaxValue[{B[n, b, l, m, d, p], n ∈ Integers, n >= 1}, n]
}
]
, {b, 2, 3}, {l, 6, 7}, {m, 4, 5}, {d, 4, 5}, {p, 3, 3}
]
</code></pre>
<blockquote>
<pre><code>{{{{{{2,6,4,4,3,9,12.}},{{2,6,4,5,3,9,11.}}},{{{2,6,5,4,3,9,13.}},{{2,6,5,5,3,9,12.}}}},{{{{2,7,4,4,3,9,13.}},{{2,7,4,5,3,9,12.}}},{{{2,7,5,4,3,9,14.}},{{2,7,5,5,3,9,13.}}}}},{{{{{3,6,4,4,3,9,12.}},{{3,6,4,5,3,9,11.}}},{{{3,6,5,4,3,9,13.}},{{3,6,5,5,3,9,12.}}}},{{{{3,7,4,4,3,9,13.}},{{3,7,4,5,3,9,12.}}},{{{3,7,5,4,3,9,14.}},{{3,7,5,5,3,9,13.}}}}}}
</code></pre>
</blockquote>
<pre><code>Flatten[%, 4]
</code></pre>
<blockquote>
<pre><code>{{2,6,4,4,3,9,12.},{2,6,4,5,3,9,11.},{2,6,5,4,3,9,13.},{2,6,5,5,3,9,12.},{2,7,4,4,3,9,13.},{2,7,4,5,3,9,12.},{2,7,5,4,3,9,14.},{2,7,5,5,3,9,13.},{3,6,4,4,3,9,12.},{3,6,4,5,3,9,11.},{3,6,5,4,3,9,13.},{3,6,5,5,3,9,12.},{3,7,4,4,3,9,13.},{3,7,4,5,3,9,12.},{3,7,5,4,3,9,14.},{3,7,5,5,3,9,13.}}
</code></pre>
</blockquote>
<pre><code>toExcel = StringJoin /@ %
</code></pre>
<blockquote>
<pre><code>{"26443912.", "26453911.", "26543913.", "26553912.", "27443913.", "27453912.", "27543914.", "27553913.", "36443912.", "36453911.", "36543913.", "36553912.", "37443913.", "37453912.", "37543914.", "37553913."}
</code></pre>
</blockquote>
<pre><code>Export["file.xls", toExcel]
</code></pre>
|
144,550 | <p>Cross post <a href="http://community.wolfram.com/groups/-/m/t/1120587?p_p_auth=9d8Myocf" rel="nofollow noreferrer">here</a></p>
<hr>
<p>A word path means the last character of each word is the same as the first character of the next word, and the path has no duplicated word. If I have $55$ words, I can find it with this method.</p>
<h3>Build a graph</h3>
<pre><code>SeedRandom[2]
string = Select[ToLowerCase[RandomSample[DictionaryLookup[]]],
3 < StringLength[#] < 5 &];
g = RelationGraph[StringTake[#, -1] == StringTake[#2, 1] &,
string[[;; 55]], VertexLabels -> "Name"]
</code></pre>
<p><img src="https://i.stack.imgur.com/FdJXm.png" alt="Mathematica graphics"></p>
<h3>Find a longest word path by <a href="https://mathematica.stackexchange.com/users/9490/jason-b">Jason</a>'s answer <a href="https://mathematica.stackexchange.com/a/136278/21532">here</a></h3>
<pre><code>allPaths =
FindPath[g, #2, #1, Infinity, All] & @@@
Subsets[VertexList[g], {2}] // Apply[Join];
First[TakeLargestBy[allPaths, Length@Union@# &, 1]]
</code></pre>
<blockquote>
<p><code>{calm,muir,reef,fray,yaws,seed,deaf,fnma,axis,stow,waft,tint,trig,good,duns,sill,loge,etch,hill,lath,howl}</code></p>
</blockquote>
<p>Well, I have to say this is a very, very slow solution. I cannot even find more than $60$ words. Actually, I want to find a path through all <code>Length[DictionaryLookup[]]=92518</code> words. It seems I still have a long way to go. Does anyone have any suggestions?</p>
| george2079 | 2,079 | <p>another approach:</p>
<pre><code>B[n_, b_, l_, m_, d_, p_] := l + m - d - p + n
{##, If[(amax =
NArgMax[{B[n, Sequence@##], n >= 1}, Element[n, Integers]]) >
0, Sequence @@ {amax, B[amax, Sequence@##]},
Sequence @@ {0, 0}]} & @@@
Tuples[{
Range[2, 3], Range[6, 7], Range[4, 5],
Range[4, 5], Range[3, 3]}]
Export["test.xlsx", %]
</code></pre>
|
4,551,543 | <p>You have ten coins, each with a bias for head drawn from a uniform distribution [0,1]. For example, a coin with a 0.7 bias means that the probability of flipping a head is 0.7. You are allowed 100 flips and you are free to choose any coins to flip. Each time you flip a head, you get $1. How much money would you pay to play this game and what is your strategy?</p>
<p>Here's what I have in mind so far: The most basic strategy is to each time pick a coin at random and flip it. The expected payoff is $50. To do better than that, I need to know which coins have biases bigger than 0.5. It seems that I could use a Bayesian approach to update my belief about each coin, but doing so would require multiple rounds, and I'm not sure how to actually quantify this.</p>
| Bafs | 846,919 | <p>If you are asking for the optimum strategy, I would also be curious to hear from others. My first instinct would be to identify the coins by number, and flip Coin #1 until it came up tails. Then I would move on to each subsequent coin, flipping until I get the first tails. This would establish a very shaky estimate of their biases. Then it seems smart to "flip them until tails", in order from best to worst performer so far. There should be a cutoff point, based on how many available flips remain, and on how well the best performers did. The cutoff would exclude the worst performers completely, and could also exclude the so-so performers, if the best ones are much better than the rest.
Mathematically, the first tails shown will tend to line up with the expected number of flips until tails, over more and more trials. If you estimate the bias as
P = 1/E, where P is probability of tails (or heads), and E is expected tries until the first tails (or heads), you can use the first occurrence as a rough estimate of E, giving you a rough estimate of P. After every new instance of tails, reevaluate which coins are better to flip.
Concerning the exact worth of playing this game... well, I'm sure it's based on the actual best strategy, maybe not mine, and it may involve binomial coefficients.
My best guess for the value of the game is $79.17.</p>
<p>Here's my rationale. If you imagine that the coin biases are occupying all ten deciles of unity (0% to 10%, 10% to 20%, etc.), then it's likely that one coin has an approximately 95% chance of showing heads. The chance of stumbling across that one as the first coin is 10%, but it should yield an average of 19 "heads" before showing tails at 20 flips.
The 85% coin only yields around 5.67 "heads" in 6.67 flips.
The 75% coin yields 3 "heads" before showing tails at 4 flips.
The 65% coin yields 1.86 "heads", out of 2.86 flips.
The average value for these games is $80, and multiplied by (.40) gives 32 dollars toward the average value of all possible games.
So far, these are 4 out of 10 starting scenarios, where you would not switch coins at all. The reason you would stay with any coin that averages at least 1.582 heads for every 2.582 flips, is that you are above the 63.2% bias level. (This is simply 1 - 1/e.) It doesn't make sense to switch coins when the chance of doing worse is greater by switching.
If you start out with a poorly performing coin, you should switch to another coin. This is the case about 6 out of 10 times, starting out. Then, switching until you get the 4 out of 10 better coins gives the rest of the game's value. Averaging the yield for the poor performers gives 52 cents in 1.52 flips, on average, before finding a high performing coin. There's a 5/9 chance of choosing a bad coin again, and then 4/8, 3/7, 2/6, and 1/5.</p>
<p>(6/10) + (6/10)(5/9) + (6/10)(5/9)(4/8) + (6/10)(5/9)(4/8)(3/7) + (6/10)(5/9)(4/8)(3/7)(2/6) + (6/10)(5/9)(4/8)(3/7)(2/6)(1/5) = 1.2</p>
<p>This is 2 times the starting chance of flipping bad coins.
This 2 is multiplied with the flips and the yield.
(1.52)(2) = 3.04 flips
(.52)(2) = 1.04 dollars.
In the remaining 96.96 flips, the value is $77.57.</p>
<p>Add that to 1.04 dollars to get $78.61.</p>
<p>Now put the chances with the values, to get a weighted average.
(.60)(78.61) + (.40)(80)
= 47.17 + 32
= $79.17</p>
<p>Edit: This is below the value of the game, because it costs very few flips to try for the 85% coin or the 95% coin. I would still put it between 80 and 85 dollars.</p>
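<p>(Editor's sketch, not part of the original answer: a quick Monte Carlo run of a simple "flip each coin until its first tail, then exploit the best" strategy lands in the same ballpark, though the true optimum is a bandit problem.)</p>

```python
# Editor's sketch: Monte Carlo estimate under a naive explore-then-exploit
# strategy.  This only lower-bounds the value of the game.
import random

def play(rng, n_coins=10, flips=100):
    biases = [rng.random() for _ in range(n_coins)]
    counts = [[0, 0] for _ in range(n_coins)]   # [heads, flips] per coin
    heads, explore = 0, 0                        # explore = coin being probed
    for _ in range(flips):
        if explore < n_coins:                    # exploration: current coin
            i = explore
        else:                                    # exploitation: best smoothed mean
            i = max(range(n_coins),
                    key=lambda j: (counts[j][0] + 1) / (counts[j][1] + 2))
        h = rng.random() < biases[i]
        counts[i][0] += h
        counts[i][1] += 1
        heads += h
        if explore < n_coins and not h:          # first tail: move to next coin
            explore += 1
    return heads

rng = random.Random(0)
estimate = sum(play(rng) for _ in range(2000)) / 2000
assert 55 < estimate < 95   # clearly beats the $50 random baseline
```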
|
957,604 | <p>Let <strong>X</strong> and <strong>Y</strong> have the joint probability density function given by:</p>
<p>f(x,y)=$\frac{1}{4}\exp\!\left(\frac{-(x+y)}{2}\right)$, x>0 and y>0</p>
<p>(a) Find Pr(x<1,y>1)</p>
<p>(b) Find Pr(y<$x^2$)</p>
<p>This is how I tackled (a):</p>
<p>Pr(x<1,y>1)=$\frac{1}{4}\int\int \exp\!\left(\frac{-(x+y)}{2}\right)$ dydx where $0<x<1$ and $1<y<\infty$. My problem is with the interdependency of the two variables. Are they dependent or independent?</p>
<p>For (b), I had no clue at all. Someone should please help me out.</p>
| Jared | 138,018 | <p>a) What are all of the possibilities for transferring to bowl II?</p>
<ol>
<li> Both chips are white (what is the probability)?
<li> Both chips are red (what is the probability)?
<li> One chip is red and one is white (what is the probability)?
</ol>
<p>From each of those possibilities you will end up (in bowl II) with either </p>
<ol>
<li> 8 white and 4 red (if you transfer two whites form bowl I to bowl II)
<li> 6 white and 6 red (if you transfer two red from bowl I to bowl II)
<li> 7 white and 5 red (if you transfer 1 white and 1 red from bowl I to bowl II).
</ol>
<p>Calculate the probabilities of selecting three white from each of those cases, then multiply each of those by the probability of that case:</p>
<ul>
<li>p(draw 3 white from II given transfer two whites from I to II)
<li>p(draw 3 white from II given transfer two red from I to II)
<li>p(draw 3 white from II given transfer one white and one red from I to II))
</ul>
<p>Then add up the probability of each case (you can add because you cannot both pick two white chips and pick one white chip and one red, etc.).</p>
<p>b) You already have the probability of three white chips being selected (from part a)). Using <a href="http://en.wikipedia.org/wiki/Bayes'_theorem#Statement_and_interpretation" rel="nofollow">Bayes' theorem</a>, you can compute the conditional probability:</p>
<p>$$
P(A|B) = \frac{P(B|A)\cdot P(A)}{P(B)}
$$</p>
<p>Here 'A' is "two white chips were transferred from I to II" (we know this probability: $P(A) = \frac{7\cdot6}{14\cdot 13} = \frac{\binom{7}{2}}{\binom{14}{2}}$. $B$ in this case is that "three white chips are selected from II". We certainly know $P(B)$ from part a) (we already calculated that). In addition we only need $P(B | A)$ which should be part of your calculation for part a). That is, if we assume two white chips were transferred to II, then we have 8 white chips and 4 red chips thus the probability of choosing two white chips is $P(B|A) = \frac{8*7}{12*11} = \frac{\binom{8}{2}}{\binom{12}{2}}$</p>
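<p>(Editor's check: the two binomial identities quoted above are easy to confirm with exact arithmetic.)</p>

```python
# Editor's check of the two probability identities quoted in the answer.
from math import comb
from fractions import Fraction

assert Fraction(7 * 6, 14 * 13) == Fraction(comb(7, 2), comb(14, 2))
assert Fraction(8 * 7, 12 * 11) == Fraction(comb(8, 2), comb(12, 2))
```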
|
95,168 | <p>I am trying to prove the following</p>
<p>$$\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$$</p>
<p>with $r \geq 3$ and $r \in \mathbb{P}$. Do I have to do induction over $r$, or are there better ideas?</p>
<p>Any help is appreciated.</p>
| Did | 6,179 | <p>Let $x_n=2^{-n}{2n\choose n+1}$. Then $x_{n+1}=x_n\frac{(2n+1)(n+1)}{n(n+2)}\gt x_n$ for every $n\geqslant1$ and $x_2=1$, hence $x_n\geqslant1$ for every $n\geqslant2$. </p>
<p>Since $\varphi(r)\geqslant2$ for every integer $r\geqslant3$, the estimate above implies that the desired inequality holds for every (not necessarily prime) integer $r\geqslant3$.</p>
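<p>(Editor's check: both the recurrence for <span class="math-container">$x_n$</span> and the resulting inequality are easy to confirm with exact arithmetic.)</p>

```python
# Editor's check: x_n = 2^(-n) * C(2n, n+1) satisfies x_2 = 1, the stated
# recurrence, and x_n >= 1 for n >= 2 -- i.e. C(2n, n+1) >= 2^n.
from fractions import Fraction
from math import comb

def x(n):
    return Fraction(comb(2 * n, n + 1), 2 ** n)

assert x(2) == 1
for n in range(2, 40):
    assert x(n + 1) == x(n) * Fraction((2 * n + 1) * (n + 1), n * (n + 2))
    assert x(n) >= 1
```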
|
3,995,954 | <p>Today during a Calc I exam we (me and my classmates) were asked to prove <span class="math-container">$$-a\cdot e\cdot \ln(x)\le x^{-a}$$</span>, <span class="math-container">$\forall x>0$</span> and <span class="math-container">$\forall a\in \mathbb{R}$</span>. But no one in the room knew how to do it. Does anyone know how?</p>
<p>Thanks in advance</p>
| Tito Eliatron | 84,972 | <p><strong>HINT</strong></p>
<p>Try to find the maximum of <span class="math-container">$f(t):=\frac{\ln(t)}{t}$</span> and recall that your inequality is equivalent to
<span class="math-container">$$ \frac{\ln(x^{-a})}{x^{-a}}\le\frac{1}{e}.$$</span></p>
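<p>(Editor's check: the maximum of <span class="math-container">$\ln(t)/t$</span> is <span class="math-container">$1/e$</span>, attained at <span class="math-container">$t=e$</span>.)</p>

```python
# Editor's check: ln(t)/t <= 1/e for t > 0, with equality at t = e.
import math

assert abs(math.log(math.e) / math.e - 1 / math.e) < 1e-12
for t in (0.1, 0.5, 1.0, 2.0, 10.0, 1000.0):
    assert math.log(t) / t <= 1 / math.e
```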
|
3,787,576 | <p><span class="math-container">$$
\mbox{Prove}\quad
\int_{0}^{1}{\mathrm{d}x \over
\left(\,{x - 2}\,\right)\,
\sqrt[\Large 5]{\,x^{2}\,\left(\,{1 - x}\,\right)^{3}\,}\,}
=
-\,{2^{11/10}\,\pi \over \,\sqrt{\,{5 + \,\sqrt{\,{5}\,}}\,}\,}
$$</span></p>
<ul>
<li>Being honest, I haven't got a clue where to start. I don't think any obvious substitutions will help (<span class="math-container">$x \to 1-x, \frac{1}{x}, \sqrt{x},$</span> and more).</li>
<li>The indefinite integral involves a hypergeometric function, so I suspect some miracle substitution has to work with these bounds.</li>
<li>Maybe the gamma function is involved somehow?</li>
</ul>
<p>If anyone has an idea and can provide help I would appreciate it.</p>
| Zenix | 711,043 | <h3>Hint:</h3>
<p>Substitute <span class="math-container">$x \rightarrow\frac{1}{x-1}$</span>. We'll get: <span class="math-container">$$-\int_0^\infty \dfrac{x^{-3/5}dx}{(2x+1)}$$</span>
Can you continue from here?</p>
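<p>(Editor's check: the further substitution <span class="math-container">$x=u^5$</span> makes the hinted integral smooth, and its value does match the claimed <span class="math-container">$2^{11/10}\pi/\sqrt{5+\sqrt5}$</span> up to sign.)</p>

```python
# Editor's check: with x = u^5, ∫_0^∞ x^(-3/5)/(2x+1) dx becomes
# ∫_0^∞ 5u/(2u^5 + 1) du, which is smooth; a midpoint rule on [0, 200]
# (tail < 1e-6) matches 2^(11/10)*pi/sqrt(5 + sqrt(5)).
import math

U, N = 200.0, 400_000
h = U / N
integral = sum(5 * u / (2 * u ** 5 + 1)
               for u in ((k + 0.5) * h for k in range(N))) * h

target = 2 ** 1.1 * math.pi / math.sqrt(5 + math.sqrt(5))
assert abs(integral - target) < 1e-4
```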
|
114,831 | <p>$$A_n=\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n}$$
Try to prove $$\lim_{n \to \infty}n(\ln 2-A_n) = \frac{1}{4}$$</p>
<p>I tried to decompose $\ln 2$ as $$\ln(2n)-\ln(n)=\ln\left(1+\frac{1}{2n-1}\right)+\dots+\ln\left(1+\frac{1}{n}\right)\;,$$ but I can't continue; is that the right approach?</p>
| Michael Hardy | 11,667 | <p>Letting $f(x)=1/x$ we have
$$
\int_1^2 f(x) \; dx = \log_e 2
$$
and
$$
\frac 1 n \left( f\left(1+\frac 1 n\right) + f\left(1+\frac 2 n\right) + f\left(1+\frac 3 n\right) + \cdots + f\left(1+\frac n n\right) \right) \to \int_1^2 f(x) \; dx\text{ as }n\to\infty,
$$
so
$$
\frac 1 n \left( \frac{n}{n+1} + \frac{n}{n+2}+\frac{n}{n+3} + \cdots + \frac{n}{n+n} \right) \to \log_e 2 \text{ as } n\to \infty. \tag{1}
$$
Since $f$ is a decreasing function, this is a <b>lower</b> Riemann sum, so it's approaching the integral from below. The difference
$$
\log_e 2 - \{\text{the sum in (1)}\}
$$
is positive and approaches $0$. That difference is the sum of the areas of $n$ regions below the curve and above the tops of the rectangles that you draw when you illustrate the Riemann sum. Each such region is <b>almost a triangle</b>. Its base has length $1/n$. It has a vertical side that is a straight line. Its hypotenuse is a curve that is nearly a straight line. <b>The sum of the heights of those almost-triangles is $1/2$.</b> So the sum of $1/2\times\text{base}\times\text{height}$ is $1/2\times1/n\times1/2$. Multiply it by $n$ to get $1/4$.</p>
<p>But they're not <em>exactly</em> triangles, since the hypotenuse is a curve. It approaches a straight line as $n\to\infty$. The remaining problem is to deal with this present paragraph.</p>
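<p>(Editor's check: the limit is easy to confirm numerically.)</p>

```python
# Editor's check that n * (ln 2 - A_n) -> 1/4, using exact float summation.
import math

def gap(n):
    A = math.fsum(1 / k for k in range(n + 1, 2 * n + 1))
    return n * (math.log(2) - A)

assert abs(gap(100_000) - 0.25) < 1e-4
assert abs(gap(1_000_000) - 0.25) < 1e-6
```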
|
217,711 | <p>Set
$$
g(x)=\sum_{k=0}^{\infty}\frac{1}{x^{2k+1}+1} \quad \text{for} \quad x>1.
$$</p>
<p>Is it true that
$$
\frac{x^{2}+1}{x(x^{2}-1)}+\frac{g'(x)}{g(x)}>0 \quad \text{for}\quad x>1?
$$
The answer seems to be positive. I spent several hours trying to prove this statement, but I did not come up with anything reasonable. Maybe somebody else has (or will have) a bright idea? </p>
<p><strong>Motivation?</strong> Nothing important, I was just playing around this question:
<a href="https://mathoverflow.net/questions/217530/a-problem-of-potential-theory-arising-in-biology">A problem of potential theory arising in biology</a></p>
| Fedor Petrov | 4,312 | <p>I am not sure at all, please double check.</p>
<p>Denote $u(t)=(1+t)^{-1}$, it is a decreasing convex function on $[0,1]$, but $u(e^s)$ is concave on $(-\infty,0]$. As Neil Strickland notes, what we have to prove is that $(x-1/x)g(x)$ increases, or, if we denote $y=1/x$, that $(1/y-y)g(1/y)$ decreases. We have $$(1/y-y)g(1/y)=\sum_{k=0}^{\infty} \frac{y^{2k}-y^{2k+2}}{1+y^{2k+1}},$$
this is a Riemann sum of the function $u(t)$ corresponding to nodes $t_k=y^{2k}$, $t_0>t_1>\dots$, and intermediate points $s_k=y^{2k+1}=\sqrt{t_kt_{k+1}}\in [t_k,t_{k+1}]$. The Riemann sums converge to the integral, and they are always more than integral since $\int_a^b u(t)dt\leq (b-a)(u(a)+u(b))/2\leq (b-a)u(\sqrt{ab})$ as $u$ is convex and $u(e^s)$ is concave on $(-\infty,0]$. Next, we see that when $y$ increases, all nodes become closer to 1. It suggests to move nodes and see how the Riemann sum $R(t_0,t_1,\dots):=\sum (t_i-t_{i+1})u(\sqrt{t_it_{i+1}})$ behaves. Consider three consecutive nodes $a^2<b^2<c^2$ and increase $b$. What happens to $(b^2-a^2)u(ab)+(c^2-b^2)u(bc)$? Its derivative in $b$ equals
$$
-\frac{(c-a)((a-c)^2+3(ac-b^2)+2(abc-b^3)(a+c)+ab^2c(ac-b^2))}
{(ab+1)^2 (bc+1)^2}.
$$
This is strictly negative if $b^2=ac$ (that holds in our case). It follows that $R(1,y^2,y^4,\dots)$ decreases when $y$ increases, as desired, by
$$
\frac{d}{dy} R(1,y^2,y^4,\dots)=\sum_{k=1}^{\infty} 2ky^{2k-1}\frac\partial{\partial t_k} R(1,y^2,\dots)\leq 0.
$$</p>
|
1,933,410 | <p>I'm reading a book about differential geometry and there's a part where he talks about the standard spherical coordinates on $S^2$, which are given by:</p>
<p>$$x:(0,2\pi)\times (0,\pi)\to S^2, x(\theta, \phi) = (\cos\theta\sin\phi,\sin\theta\sin\phi, \cos\phi)$$</p>
<p>Shouldn't it be in $S^3$?</p>
| Alexis Olson | 11,246 | <p>No. <a href="https://en.wikipedia.org/wiki/Sphere" rel="nofollow">$S^2$</a> is the ordinary sphere embedded in three dimensions. $S^3$ would have $4$ coordinates in Cartesian form.</p>
<p>It is called $S^2$ since it is a $2D$ surface in the sense that any point on it can be described by two coordinates ($\theta, \phi$) as your mapping $x$ shows.</p>
|
2,866,768 | <p>Question: Use the trigonometric identity $\cos(2A)=1-2\sin^2(A)$ to show that $$\sin\left(\frac{\pi}{12}\right)=\sqrt{\frac{1}{2} - \frac{\sqrt{3}}{4}}$$</p>
<p>What are good strategies to figure out this question in particular?</p>
| LJFan | 580,348 | <p>Using the trigonometric identity, we get $\sin{A}=\pm\sqrt{\frac{1-\cos{2A}}{2}}$.</p>
<p>Substituting $A=\frac{\pi}{12}$, we can easily see that $\sin A>0$. Therefore $\sin{\frac{\pi}{12}}=\sqrt{\frac{1-\cos{\frac{\pi}{6}}}{2}}=\sqrt{\frac{1-\frac{\sqrt{3}}{2}}{2}}=\sqrt{\frac{1}{2}-\frac{\sqrt{3}}{4}}$.</p>
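<p>A quick numerical check (editor's addition) confirms the identity:</p>

```python
# Editor's check: sin(pi/12) equals sqrt(1/2 - sqrt(3)/4).
import math

lhs = math.sin(math.pi / 12)
rhs = math.sqrt(0.5 - math.sqrt(3) / 4)
assert abs(lhs - rhs) < 1e-12
```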
|
3,005,591 | <p>I wish to find the integers of <span class="math-container">$a,b,c$</span> and <span class="math-container">$d$</span> such that:
<span class="math-container">$$225a + 360b +432c +480d = 3$$</span>
which is equal to:
<span class="math-container">$$75a + 120b +144c+ 160d =1$$</span></p>
<p>I know I have to use the Euclidean algorithm, and I managed to do it for two integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, but I can't figure out how to do it with <span class="math-container">$4$</span> integers.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>First divide both sides by <span class="math-container">$3$</span> to get <span class="math-container">$$ 75a+120b+144c+160d=1$$</span></p>
<p>Now let <span class="math-container">$$a=b=c=x$$</span></p>
<p>Your equation changes to <span class="math-container">$$339x+160d=1$$</span></p>
<p>You can solve this one for <span class="math-container">$x$</span> and <span class="math-container">$d$</span> because <span class="math-container">$339$</span> and <span class="math-container">$160$</span> are relatively prime.
Back substitute and you have your solution. </p>
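<p>(Editor's sketch: the reduced equation <span class="math-container">$339x+160d=1$</span> can be solved with the extended Euclidean algorithm and then back-substituted, for example:)</p>

```python
# Editor's sketch: solve 339x + 160d = 1 by extended Euclid, then set
# a = b = c = x as the answer suggests (since 75 + 120 + 144 = 339).
def ext_gcd(p, q):
    if q == 0:
        return p, 1, 0
    g, u, v = ext_gcd(q, p % q)
    return g, v, u - (p // q) * v

g, x, d = ext_gcd(339, 160)
assert g == 1 and 339 * x + 160 * d == 1

a = b = c = x
assert 75 * a + 120 * b + 144 * c + 160 * d == 1
```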
|
3,005,591 | <p>I wish to find the integers of <span class="math-container">$a,b,c$</span> and <span class="math-container">$d$</span> such that:
<span class="math-container">$$225a + 360b +432c +480d = 3$$</span>
which is equal to:
<span class="math-container">$$75a + 120b +144c+ 160d =1$$</span></p>
<p>I know I have to use the Euclidean algorithm, and I managed to do it for two integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, but I can't figure out how to do it with <span class="math-container">$4$</span> integers.</p>
| fleablood | 280,126 | <p>To solve <span class="math-container">$75a+120b+144c+160d=1$</span></p>
<p>You can always use the Euclidean Algorithm to solve <span class="math-container">$75A + 120B = \gcd(75,120)=15$</span></p>
<p>And to solve <span class="math-container">$120\beta + 144\gamma = \gcd(120,144) = 24$</span></p>
<p>And to solve <span class="math-container">$144C+160D = \gcd(144,160)=16$</span>.</p>
<p>Then in an attempt to solve <span class="math-container">$15e + 24f + 16g=1$</span> and to</p>
<p>Solve <span class="math-container">$15E + 24F= \gcd(15,24) = 3$</span> and <span class="math-container">$24\phi + 16\rho = \gcd(24,16)=8$</span>.</p>
<p>Then solve <span class="math-container">$3j + 8k = 1$</span>.</p>
<p>Then <span class="math-container">$j(15E + 24F) + (24\phi + 16\rho)k = 15(jE) + 24(jF+\phi k) + 16(\rho k)=1$</span></p>
<p>So <span class="math-container">$e=jE; f=jF+\phi k; g=\rho k$</span> and </p>
<p>So <span class="math-container">$(75A + 120B)e + (120\beta + 144\gamma)f + (144C+160D)g = 1$</span></p>
<p>And <span class="math-container">$a = Ae; b=Be+\beta f; c=\gamma f + Cg; d = Dg$</span>.</p>
<p>Of course, there are probably insights and ways to make it simpler along the way.</p>
<p>But that's the general idea, just break it into smaller and smaller pieces.</p>
<p>===</p>
<p>To actually do this:</p>
<p><span class="math-container">$75A + 120B = 15$</span> means <span class="math-container">$5A + 8B =1$</span> so <span class="math-container">$A=-3; B=2$</span> and <span class="math-container">$75(-3) + 120(2) = 15$</span>.</p>
<p><span class="math-container">$120B + 144C =24$</span> means <span class="math-container">$5B + 6C =1$</span> so <span class="math-container">$B=-1;C=1$</span> and <span class="math-container">$120(-1)+144(1) = 24$</span>. (Don't let the recycling of variable names scare you; we won't combine them.)</p>
<p><span class="math-container">$144C + 160D=16$</span> means <span class="math-container">$9C + 10D =1$</span> so <span class="math-container">$144(-1) + 160(1) = 16$</span>.</p>
<p>Then solve <span class="math-container">$15e + 24f + 16g = 1$</span> .... well, I can just see <span class="math-container">$f=0$</span> and <span class="math-container">$e = -1; g = 1$</span> so </p>
<p><span class="math-container">$-(75(-3) + 120(2)) + (144(-1) + 160(1)) = -15 + 16 = 1$</span> So</p>
<p><span class="math-container">$75*3 + 120*(-2) + 144(-1) + 160(1) = 1$</span></p>
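<p>The break-it-into-pieces idea above can be automated: fold the two-variable extended Euclidean algorithm over the list, so the technique works for any number of integers. A minimal sketch (my own code and function names, not part of the answer):</p>

```python
def ext_gcd(x, y):
    """Return (g, s, t) with s*x + t*y == g == gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, s, t = ext_gcd(y, x % y)
    return g, t, s - (x // y) * t

def bezout_chain(nums):
    """Fold ext_gcd over the list: returns (g, coeffs) with
    sum(c * n for c, n in zip(coeffs, nums)) == g == gcd(nums)."""
    g, coeffs = nums[0], [1]
    for n in nums[1:]:
        g, s, t = ext_gcd(g, n)
        coeffs = [s * c for c in coeffs] + [t]
    return g, coeffs

g, coeffs = bezout_chain([75, 120, 144, 160])
```

The coefficients it produces are generally larger than the hand-found ones above, but they satisfy the same identity.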
|
3,005,591 | <p>I wish to find the integers of <span class="math-container">$a,b,c$</span> and <span class="math-container">$d$</span> such that:
<span class="math-container">$$225a + 360b +432c +480d = 3$$</span>
which is equal to:
<span class="math-container">$$75a + 120b +144c+ 160d =1$$</span></p>
<p>I know I have to use the Euclidean algorithm. And I managed to do it for two integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. But can't figure out, how to do it with <span class="math-container">$4$</span> integers.</p>
| Steven Alexis Gregory | 75,410 | <p>Here is a description of a program that I wrote to solve such problems. It is definitely not optimized.</p>
<p>I start by considering the equalities</p>
<p><span class="math-container">\begin{array}{r}
160 &= 160(1) &+& 144(0) &+& 120(0) &+& 75(0) \\
144 &= 160(0) &+& 144(1) &+& 120(0) &+& 75(0) \\
120 &= 160(0) &+& 144(0) &+& 120(1) &+& 75(0) \\
75 &= 160(0) &+& 144(0) &+& 120(0) &+& 75(1) \\
\end{array}</span></p>
<p>This can be abstracted into the following partitioned array
<span class="math-container">\begin{array}{r|rrrr}
160 & 1 & 0 & 0 & 0 \\
144 & 0 & 1 & 0 & 0 \\
120 & 0 & 0 & 1 & 0 \\
75 & 0 & 0 & 0 & 1 \\
\end{array}</span></p>
<p>The important thing to remember is that, at any time, a row <span class="math-container">$\fbox{n | d c b a}$</span> represents the equality <span class="math-container">$n = 160d + 144c + 120b + 75a$</span>.</p>
<p>The "outer loop" of this algorithm assumes that the left column is in descending order.</p>
<p>The first step is to reduce the three upper rows so that the entries in the first column are all less than the bottom left number, <span class="math-container">$75$</span>. For the first row, we know that
<span class="math-container">$160 = 2 \times 75 + 10$</span> so we replace row <span class="math-container">$1$</span> (<span class="math-container">$R1$</span>) with <span class="math-container">$R1 - 2R4$</span>, getting <span class="math-container">$\fbox{10 | 1 0 0 -2}$</span>. Negative remainders are allowed if their absolute value is less than the positive remainder. So since <span class="math-container">$144 = 75(2)-6$</span>, we replace the second row with <span class="math-container">$R2 - 2R4$</span>, getting <span class="math-container">$\fbox{-6 | 0 1 0 -2}$</span>. Similarly the third row becomes <span class="math-container">$R3 - 2R4$</span>, which is <span class="math-container">$\fbox{-30 | 0 0 1 -2}$</span>. So we now have</p>
<p><span class="math-container">\begin{array}{r|rrrr}
10 & 1 & 0 & 0 & -2 \\
-6 & 0 & 1 & 0 & -2 \\
-30 & 0 & 0 & 1 & -2 \\
75 & 0 & 0 & 0 & 1 \\
\end{array}</span></p>
<p>Next, we make the substitution <span class="math-container">$Rk \to -Rk$</span> for any row with a negative first element and we then sort the array in decreasing order of the first element. We end up with</p>
<p><span class="math-container">\begin{array}{r|rrrr}
75 & 0 & 0 & 0 & 1 \\
30 & 0 & 0 & -1 & 2 \\
10 & 1 & 0 & 0 & -2 \\
6 & 0 & -1 & 0 & 2 \\
\end{array}</span></p>
<p>After the next pass through the loop, we get</p>
<p><span class="math-container">\begin{array}{r|rrrr}
3 & 0 & 12 & 0 & -23 \\
0 & 0 & 5 & -1 & -8 \\
-2 & 1 & 2 & 0 & -6 \\
6 & 0 & -1 & 0 & 2 \\
\end{array}</span></p>
<p>which "sorts" to</p>
<p><span class="math-container">\begin{array}{r|rrrr}
6 & 0 & -1 & 0 & 2 \\
3 & 0 & 12 & 0 & -23 \\
2 & -1 & -2 & 0 & 6 \\
0 & 0 & 5 & -1 & -8 \\
\end{array}</span></p>
<p>The next outer loop will proceed as before except that we will make our adjustments with respect to the third row instead of the fourth row. We finally end up with</p>
<p><span class="math-container">\begin{array}{r|rrrr}
1 & 1 & 14 & 0 & -29 \\
0 & -3 & -30 & 0 & 64 \\
0 & 3 & 5 & 0 & -16 \\
0 & 0 & 5 & -1 & -8 \\
\end{array}</span></p>
<p>This tells us that the general solution is (if I haven't messed up my math)</p>
<p><span class="math-container">$(d,c,b,a) = (1, 14, 0, -29) + u(-3, -30, 0, 64) + v(3, 5, 0, -16) + w(0, 5, -1, -8)$</span></p>
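<p>The final array can be verified mechanically. A short sketch (my own verification code, not part of the program described above) checking that the particular row combines to <span class="math-container">$1$</span> and each homogeneous row to <span class="math-container">$0$</span> — note the <span class="math-container">$-16$</span> entry in the third homogeneous row, which is required for that row to vanish:</p>

```python
nums = (160, 144, 120, 75)                      # column order (d, c, b, a)

def combine(row):
    """Value of 160*d + 144*c + 120*b + 75*a for a coefficient row."""
    return sum(n * c for n, c in zip(nums, row))

particular = (1, 14, 0, -29)
homogeneous = [(-3, -30, 0, 64), (3, 5, 0, -16), (0, 5, -1, -8)]
```

Any integer combination particular + u·h1 + v·h2 + w·h3 then solves the original equation.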
|
2,640,143 | <h2>Problem</h2>
<p>Compute following:</p>
<p>$$ \int_{1}^{2} \int_{0}^{z} \int_{0}^{y+z}\frac{1}{(z+y+x)^3}dxdydz $$</p>
<h2> Attempt to solve </h2>
<p>Now I could probably solve this by expanding $(z+y+x)^3$ and then solving 3 separate integrals involving the variables $x,y,z$. Expanding obviously wouldn't look nice, and it would be a lot of work for sure. I think a better way would be to substitute $u=(z+y+x)$, and then we would need to adjust the integration limits by working the chain rule backwards?</p>
<p>$$ \frac{\delta u}{\delta x}=1, \quad \delta u = \delta x $$
$$ \frac{\delta u}{\delta y}=1, \quad \delta u = \delta y $$
$$ \frac{\delta u}{\delta z}=1, \quad \delta u = \delta z $$</p>
<p>$$ \int_{1}^{2} \int_{0}^{z} \int_{0}^{y+z}\frac{1}{(z+y+x)^3}\delta x\delta y\delta z $$
$$ \int_{1}^{2} \int_{0}^{z} \int_{0}^{y+z}\frac{1}{u^3}\delta u^3$$
$$\int_{1}^{2} \int_{0}^{z} -\frac{1}{2(y+x)^2}\delta u^2$$</p>
<p>Now it is visible at this point that this can't possibly work. I don't think I have a very good understanding of how I am supposed to accomplish what I am trying to do.</p>
| Mauro ALLEGRANZA | 108,274 | <p><a href="https://en.wikipedia.org/wiki/Logarithm#History" rel="nofollow noreferrer">Logarithm</a> was invented around 1600 by <a href="https://en.wikipedia.org/wiki/John_Napier" rel="nofollow noreferrer">John Napier</a> to simplify difficult calculations: a multiplication can be reduced to addition. </p>
<p>It was very useful in the computation of astronomical tables.</p>
<p>See the quote from Laplace, who called logarithms</p>
<blockquote>
<p>"[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations."</p>
</blockquote>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Jose Brox | 1,234 | <p>The Laplace transform to systematically solve ODE's with constant coefficients by transforming them into polynomial equations (and then transforming the solution back).</p>
<p><a href="http://en.wikipedia.org/wiki/Laplace_transform#Example_.231%3A_Solving_a_differential_equation" rel="nofollow">Solving a differential equation by the Laplace transform</a></p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Qiaochu Yuan | 290 | <p>If you're a combinatorialist and you want to know the asymptotics of a sequence $a_n$ with a nice generating function $A(z) = \sum_{n \ge 0} a_n z^n$, the very first thing you should do is find out if $A$ is meromorphic, since then one can analyze the asymptotics of $a_n$ using its poles. Even if $A$ isn't meromorphic, if one has sufficiently good information about its singularities then there are transfer theorems that translate information about the behavior of $A$ near its poles to the behavior of $a_n$ for large $n$. In other words, combinatorialists (and by extension computer scientists) should learn complex analysis.</p>
<p>For example, let $E_n$ be the number of alternating permutations on $n$ letters. Then $E(z) = \sum_{n \ge 0} E_n \frac{z^n}{n!} = \sec z + \tan z$ is meromorphic with poles $z = \frac{\pi}{2} + 2k \pi, k \in \mathbb{Z}$. The dominant singularity is at $z = \frac{\pi}{2}$ and one now knows without doing any other computations that $E_n$ grows like $n! \left( \frac{2}{\pi} \right)^n$ (the residue at the dominant pole supplies the constant $\frac{4}{\pi}$ in front). Even better one can write down an exact series converging to $E_n$ with one term for each pole. The corresponding expansion of the Bernoulli numbers $B_n$ gives the classical evaluation of the zeta function at even integers.</p>
|
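<p>The zigzag numbers $E_n$ can be generated from the standard convolution recurrence $2E_{n+1} = \sum_k \binom{n}{k} E_k E_{n-k}$, and one can watch the ratio $E_n / \big(n!\,(2/\pi)^n\big)$ settle at the residue constant $\frac{4}{\pi}$, confirming the growth rate read off from the dominant pole. A sketch (my own code, recurrence as stated):</p>

```python
import math

# zigzag numbers: E[0] = E[1] = 1 and 2*E[n+1] = sum_k C(n,k)*E[k]*E[n-k]
N = 25
E = [0] * (N + 1)
E[0] = E[1] = 1
for n in range(1, N):
    E[n + 1] = sum(math.comb(n, k) * E[k] * E[n - k] for k in range(n + 1)) // 2

# should be very close to 4/pi already for N = 25
ratio = E[N] / (math.factorial(N) * (2 / math.pi) ** N)
```

The convergence is geometric with ratio $1/3$, since the next poles sit at distance $\frac{3\pi}{2}$ from the origin.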
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Harrison Brown | 382 | <p>Set theory provides by far the easiest proof of the existence of transcendental numbers -- just show that the algebraic numbers and the integers can be put into 1-1 correspondence, but the real numbers and the integers can not. Liouville's proof isn't too hard, but it's nowhere near as elegant.</p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| gowers | 1,459 | <p>For me personally, the first time I "got" Fourier analysis was when I understood how it could be used to prove Roth's theorem on arithmetic progressions (that any dense set of integers contains an arithmetic progression of length 3). It can be proved in several ways, but when you see this way you immediately realize that the technique is likely to be useful for many other problems. And in fact, Roth's theorem can also be used to justify Szemerédi's regularity lemma (which is not quite the same as justifying a whole theory, but it is a very useful technique) in a similar way.</p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Christopher Creutzig | 2,933 | <p>Proving the termination of <a href="http://en.wikipedia.org/wiki/Goodstein%27s_theorem" rel="nofollow">Goodstein sequences</a> (a problem in natural numbers) via arithmetic on infinite ordinals.</p>
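<p>The hereditary base-change step is short to implement. A sketch (my own function names, not from the linked article):</p>

```python
def bump(n, b):
    """Rewrite n from hereditary base b to base b + 1 (exponents too)."""
    if n == 0:
        return 0
    total, exp = 0, 0
    while n:
        digit = n % b
        if digit:
            total += digit * (b + 1) ** bump(exp, b)
        n //= b
        exp += 1
    return total

def goodstein(m):
    """Goodstein sequence from m down to 0 (terminates by the theorem)."""
    seq, base = [m], 2
    while m > 0:
        m = bump(m, base) - 1
        base += 1
        seq.append(m)
    return seq
```

Only tiny seeds are practical: `goodstein(3)` terminates in a handful of steps, while the sequence starting at 4 already runs for astronomically many steps — which is exactly why the termination proof needs the ordinal argument.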
|
2,159,092 | <p>$\sum_{k=1}^{n}\sin kz=\frac{\sin\frac{(n+1)z}{2}\cdot \sin\frac{nz}{2}}{\sin\frac{z}{2}}$
Proof for $z\neq 0$</p>
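<p>A quick numerical sanity check of the identity, with the numerator $\sin\frac{(n+1)z}{2}$ (my own code, not a proof):</p>

```python
import math

def lhs(n, z):
    return sum(math.sin(k * z) for k in range(1, n + 1))

def rhs(n, z):
    return math.sin((n + 1) * z / 2) * math.sin(n * z / 2) / math.sin(z / 2)
```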
| Community | -1 | <p>The main idea connecting logic to set theory is the idea that <em>propositions are subsets</em>.</p>
<p>On the one hand, if you are interpreting propositional logic in some (set-theoretic) domain $D$ of discourse, then each proposition gets interpreted as a subset of $D$, and connectives are operations on subsets.</p>
<p>On the other hand, given any set $D$, we can make $\mathcal{P}(D)$ into a Boolean lattice in which we can compute with propositional logic.</p>
<p>And this all extends fairly naturally to predicates and quantifiers and such.</p>
<hr>
<p>When connecting logic to <em>category theory</em> we use basically the same idea, except instead that propositions are subobjects rather than subsets.</p>
<p>The posets $\operatorname{Sub}(X)$ of subobjects of $X$ are of crucial importance to doing propositional logic, and more generally we are interested in the maps back and forth between $\operatorname{Sub}(X)$ and $\operatorname{Sub}(Y)$ that may be induced by a map $X \to Y$.</p>
<p>So, for internal first-order logic, classifying <em>subobjects</em> really is the thing we want. Even better, in a topos, $\operatorname{Sub}$ <em>is</em> a representable functor</p>
<p>$$ \operatorname{Sub}(-) \cong \hom(-, \Omega) $$</p>
<p>So, while $\operatorname{Sub}(-)$ is a relatively complicated object, in a topos the whole thing effectively collapses down to being a single object $\Omega$, thus allowing the internal logic to be treated in a more elementary way.</p>
|
3,104,058 | <p>I need to make a part of a program in Java that calculates a circle center. It has to be a circle through a given point that touches another circle, and the center of the variable circle can move along a given line.</p>
<p><a href="https://i.stack.imgur.com/RQgBC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RQgBC.png" alt="Example of the problem"></a></p>
<p>Here the coordinates of A, B and C and the radius of the circle around A are given. I need to know how to get the coordinates of P and P' when they touch the blue circle around A.</p>
| amd | 265,466 | <p>You haven’t mentioned this explicitly, but from the illustration it appears that you’re only interested in <em>externally</em> tangent circles. If the radius of the circle is <span class="math-container">$r$</span>, then the points you are looking for satisfy <span class="math-container">$|PA|-|PB|=r$</span>. Set <span class="math-container">$P = (1-t)B+tC$</span> and use the distance formula to get a somewhat messy-looking equation in <span class="math-container">$t$</span>. You can find some useful suggestions for how to manipulate equations involving sums of radicals into a more manageable form in the answers to <a href="https://math.stackexchange.com/questions/3097351/solving-equations-involving-square-roots/3097931#3097931">this question</a>.</p>
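<p>A numeric sketch of this approach on hypothetical sample data (the question gives no coordinates): centre <span class="math-container">$A=(0,0)$</span> with radius <span class="math-container">$r=2$</span>, and the line through <span class="math-container">$B=(5,0)$</span> (the point the new circle must pass through) and <span class="math-container">$C=(5,5)$</span>. Since <span class="math-container">$|PA|-|PB|-r$</span> changes sign along the line, bisection in <span class="math-container">$t$</span> locates the tangency:</p>

```python
import math

# Hypothetical sample configuration, not taken from the question
A, r = (0.0, 0.0), 2.0
B, C = (5.0, 0.0), (5.0, 5.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point(t):
    return ((1 - t) * B[0] + t * C[0], (1 - t) * B[1] + t * C[1])

def f(t):
    # external tangency condition: |PA| = r + |PB|
    P = point(t)
    return dist(P, A) - dist(P, B) - r

lo, hi = 0.0, 10.0            # f(lo) > 0 > f(hi): a sign change to bisect
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
t = 0.5 * (lo + hi)
P = point(t)
```

For this configuration the answer is exact: squaring <span class="math-container">$\sqrt{25+25t^2} = 2 + 5t$</span> gives <span class="math-container">$25 = 4 + 20t$</span>, i.e. <span class="math-container">$t = 21/20$</span> and <span class="math-container">$P = (5, 5.25)$</span>.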
|
1,562,907 | <p>I want to know whether $$\mathbb{Q}[x]/(x^3-1)$$ is a field or not. Is it as simple as determining if $x^3-1$ is irreducible in $\Bbb Q$?</p>
<p>But since it has roots , $x=1$ wouldn't this imply it is reducible and hence not a field? Or is there much more to it I am not considering?</p>
| George V. Williams | 54,806 | <p><strong>Hint</strong>: let $y = g(x)$, so $x = g^{-1}(y)$. Can you use your equation to solve for $g^{-1}(y)$?</p>
|
1,562,907 | <p>I want to know whether $$\mathbb{Q}[x]/(x^3-1)$$ is a field or not. Is it as simple as determining if $x^3-1$ is irreducible in $\Bbb Q$?</p>
<p>But since it has roots , $x=1$ wouldn't this imply it is reducible and hence not a field? Or is there much more to it I am not considering?</p>
| marwalix | 441 | <p>Start with $f(x)=g(x)-1$ and apply $f^{-1}$ to get</p>
<p>$$x=f^{-1}(g(x)-1)$$</p>
<p>Now set $y=g(x)$ this gives $x=g^{-1}(y)$ and replace in the above to get</p>
<p>$$g^{-1}(y)=f^{-1}(y-1)$$</p>
|
607,324 | <blockquote>
<p><span class="math-container">$d \in \mathbb{Z}$</span> is a square-free integer (<span class="math-container">$d \ne 1$</span>, and <span class="math-container">$d$</span> has no factors of the form <span class="math-container">$c^2$</span> except <span class="math-container">$c = \pm 1$</span>), and let <span class="math-container">$R=\mathbb{Z}[\sqrt{d}]= \{ a+b\sqrt{d} \mid a,b \in \mathbb{Z} \}$</span>. Prove that every nonzero prime ideal <span class="math-container">$P \subset R$</span> is a maximal ideal.</p>
</blockquote>
<p>I have a possible outline which I think is good enough to follow.</p>
<p>I think that we need to first prove that every ideal <span class="math-container">$I \subset R$</span> is finitely generated. </p>
<p>So if <span class="math-container">$I$</span> is non-zero, then <span class="math-container">$I \cap \mathbb{Z}$</span> is a non-zero ideal in <span class="math-container">$\mathbb{Z}$</span>. </p>
<p>Then I need to find <span class="math-container">$I \cap \mathbb{Z} = \{ xa \mid a \in \mathbb{Z} \}$</span> for some <span class="math-container">$x \in \mathbb{Z}$</span>. That way if I let <span class="math-container">$J$</span> be the set of all integers <span class="math-container">$b$</span> such that <span class="math-container">$a+b\sqrt{d} \in I$</span> for some <span class="math-container">$a\in \mathbb{Z}$</span>, then if there exists a integer <span class="math-container">$y$</span> such that <span class="math-container">$J=\{ yt \mid t\in \mathbb{Z} \}$</span>, then there must exist <span class="math-container">$s \in \mathbb{Z}$</span> such that <span class="math-container">$s+y\sqrt{d} \in I$</span>. </p>
<p>Then all I need to show is that <span class="math-container">$I = ( x,s+y\sqrt{d} )$</span>. </p>
<p>Now I need to derive that the factor ring <span class="math-container">$R / P$</span> is a finite ring without zero divisors; then, since every finite integral domain is a field, every prime ideal <span class="math-container">$P \subset R$</span> is a maximal ideal, and I'll be done.</p>
| Stephen Montgomery-Smith | 22,016 | <p>I assume that $R[X]$ means the polynomials over a ring $R$.</p>
<p>So using Taylor's series:
$$ f(X+t) = \sum_{k=0}^\infty \frac{t^k}{k!} f^{(k)}(X) = \sum_{k=0}^\infty \frac{t^k}{k!} D^k f(X) = \exp(tD) f(X) .$$
Note that the sum is actually finite since $D^k f(X) = 0$ for $k$ large enough.</p>
<p>The proof of Taylor's series: by linearity we need only prove it in the case $f(X) = X^n$. Then this reduces to the binomial expansion. And this can be proved by induction, when $X$ is a variable and $t$ is a real number.</p>
<p>Or you can prove it by induction on the degree of $f$. When the degree is zero, it is trivial. To go from degree $n$ to degree $n+1$, formally integrate both sides. Then substitute $t=0$ to get the constant term.</p>
|
853,735 | <p>Can anyone show, step-by-step, how the expression on the LHS can be turned into the expression on the RHS?</p>
<p>$x^ay^b=a^ab^b(a+b)^{-(a+b)}(x+y)^{a+b}$</p>
| Community | -1 | <p>Assuming all the numbers are positive, we in fact have
$$ x^a y^b \le a^a b^b (a+b)^{-(a+b)} (x+y)^{a+b} \tag{$\ast$} $$
with equality if and only if $x/y = a/b$.</p>
<p>This is a standard application of the AM/GM inequality, as follows:
\begin{align*}
&\left(\frac{x}{x+y}\cdot\frac{a+b}{a}\right)^{a/(a+b)}
\left(\frac{y}{x+y}\cdot\frac{a+b}{b}\right)^{b/(a+b)} \\
&\le \frac{a}{a+b}\left(\frac{x}{x+y}\cdot\frac{a+b}{a}\right)
+ \frac{b}{a+b} \left(\frac{y}{x+y}\cdot\frac{a+b}{b}\right) \\
&= 1
\end{align*}
Raising both sides to the power $a+b$ and rearranging yields ($\ast$), with equality iff
$$ \frac{x}{x+y}\cdot\frac{a+b}{a} = \frac{y}{x+y}\cdot\frac{a+b}{b} $$
which is equivalent to $\frac xy = \frac ab$.</p>
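<p>A quick numeric check of ($\ast$) over random positive inputs, together with the equality case $x/y = a/b$ (my own code, not part of the proof):</p>

```python
import random

def lhs(x, y, a, b):
    return x ** a * y ** b

def rhs(x, y, a, b):
    return a ** a * b ** b * (a + b) ** (-(a + b)) * (x + y) ** (a + b)

random.seed(0)
worst_ratio = 0.0
for _ in range(1000):
    x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    a, b = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    worst_ratio = max(worst_ratio, lhs(x, y, a, b) / rhs(x, y, a, b))
```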
|
3,634,490 | <p>Let <span class="math-container">$N \geq 1$</span> be an integer.</p>
<p>Let <span class="math-container">$X$</span> be a standard <span class="math-container">$\mathbb{R}^N$</span> Gaussian vector (all components are <span class="math-container">$\mathcal{N}(0, 1)$</span> and i. i. d.).</p>
<p>Let <span class="math-container">$A \in \mathcal{M}_N(\mathbb{R})$</span> be a deterministic matrix.</p>
<p>Thus, <span class="math-container">$AX$</span> is a Gaussian vector.</p>
<p>Let <span class="math-container">$\epsilon \in (0, 1)$</span>.</p>
<p>I would like to simulate Gaussian <em>vectors</em> of the form <span class="math-container">$AX$</span> that <em>mostly have a smaller infinity norm</em> than <span class="math-container">$\epsilon$</span>, with only controlling the <em>infinity norm of the matrix</em> <span class="math-container">$A$</span>.</p>
<p>So the question is: how to limit <span class="math-container">$||A||_{\infty} = \underset{i, j}{\max} |A_{i, j}|$</span> so that with high confidence / probability (let's say 95%), I get <span class="math-container">$||AX||_{\infty} \leq \epsilon$</span> ?</p>
<p>I am looking for an answer like: <strong>take <span class="math-container">$A$</span> such that <span class="math-container">$||A||_{\infty} \leq f(\epsilon)$</span></strong>. The answer is quite easy in the <span class="math-container">$1$</span>-dimensional case, I would like to generalize it, but haven't found any clear theorem addressing that.</p>
<p>Thanks a lot for your help!</p>
| charmd | 332,790 | <p>For some <span class="math-container">$n$</span> (<span class="math-container">$1$</span>, <span class="math-container">$2$</span> and all multiples of <span class="math-container">$4$</span> if you admit a certain conjecture; all powers of <span class="math-container">$2$</span> if you don't), we can find explicitly the best bound.</p>
<p>Even without this conjecture, we find for <span class="math-container">$f(\varepsilon)$</span> a decay rate of <span class="math-container">$\frac{1}{\sqrt{n\log (n)}}$</span>, less conservative than <span class="math-container">$\frac{1}{n}$</span>.</p>
<hr />
<p>We will denote by <span class="math-container">$X_0$</span> a standardized normal law <span class="math-container">$\mathcal{N}(0,1)$</span> independent (jointly) from our variables.</p>
<blockquote>
<p><strong>Lemma 1:</strong> for all <span class="math-container">$\varepsilon>0, r>0$</span>, <span class="math-container">$\min \limits_{A \in \mathcal{M}_n(\mathbb{R}),\ ||A||_{\infty} \le r} \mathbb{P}\big(||AX||_{\infty} \le \varepsilon\big) \ge \mathbb{P}\big(|X_0| \le \frac{\varepsilon}{\sqrt{n}r}\big)^n$</span></p>
</blockquote>
<p>The proof is a direct application of the <a href="https://en.wikipedia.org/wiki/Gaussian_correlation_inequality" rel="nofollow noreferrer">Gaussian correlation inequality</a>.</p>
<p><em>Proof:</em> let <span class="math-container">$A \in \mathcal{M}_n(\mathbb{R})$</span> with <span class="math-container">$||A||_{\infty} \le r$</span>. Then, for <span class="math-container">$i \in [\![1,n]\!]$</span>, the region defined by <span class="math-container">$\big|(Ax)_i\big| \le \varepsilon$</span> is convex and symmetric about the origin. Thus <span class="math-container">$\mathbb{P}\big(||AX||_{\infty} \le \varepsilon \big) \ge \prod \limits_{i=1}^n \mathbb{P}\big(|(AX)_i| \le \varepsilon\big)$</span>. Also, <span class="math-container">$(AX)_i \sim \mathcal{N}\big(0, \sum \limits_{j=1}^n a_{i,j}^2\big)$</span>, so <span class="math-container">$\mathbb{P}\big(|(AX)_i| \le \varepsilon\big) = \mathbb{P}\Big(|X_0| \le \varepsilon / \sqrt{\sum_{j=1}^n a_{i,j}^2}\Big) \ge \mathbb{P}\big(|X_0| \le \frac{\varepsilon}{\sqrt{n}r}\big)$</span> since <span class="math-container">$||A||_{\infty} \le r$</span>.</p>
<p><span class="math-container">$ $</span></p>
<p>What this lemma says is that the matrices <span class="math-container">$A$</span> for which we have the lowest confidence are the ones with</p>
<ul>
<li><p>uncorrelated outputs for <span class="math-container">$AX$</span>, so orthogonal lines for <span class="math-container">$A$</span></p>
</li>
<li><p>coefficients which are close (in absolute value) to <span class="math-container">$||A||_{\infty}$</span></p>
</li>
</ul>
<p>These two properties bring us to the next section: we can find matrices satisfying these conditions, and reaching the bound of lemma <span class="math-container">$1$</span>.</p>
<hr />
<p>A <a href="https://en.wikipedia.org/wiki/Hadamard_matrix" rel="nofollow noreferrer">Hadamard matrix</a> of order <span class="math-container">$n$</span> is a matrix <span class="math-container">$H_n \in \mathcal{M}_n(\mathbb{R})$</span> with coefficients <span class="math-container">$+1$</span> or <span class="math-container">$-1$</span>, such that <span class="math-container">$HH^T = nI_n$</span>. While the construction of Hadamard matrices of order <span class="math-container">$2^k$</span> is obvious, a still-standing conjecture is:</p>
<blockquote>
<p><strong>Conjecture:</strong> for <span class="math-container">$n=1,2$</span> or any multiple of <span class="math-container">$4$</span>, there exists a Hadamard matrix of order <span class="math-container">$n$</span>.</p>
</blockquote>
<p>Now why were we talking about Hadamard matrices again? That is because they are the worst case matrices for your problem:</p>
<blockquote>
<p><strong>Lemma 2:</strong> given a Hadamard matrix <span class="math-container">$H_n$</span>, for <span class="math-container">$A = rH_n$</span> the bound of lemma <span class="math-container">$1$</span> is reached: <span class="math-container">$$\min \limits_{A \in \mathcal{M}_n(\mathbb{R}),\ ||A||_{\infty} \le r} \mathbb{P}\big(||AX||_{\infty} \le \varepsilon\big) = \mathbb{P}\big(||rH_nX||_{\infty} \le \varepsilon\big) = \mathbb{P}\big(|X_0| \le \frac{\varepsilon}{\sqrt{n}r}\big)^n$$</span></p>
</blockquote>
<p><em>Proof:</em> let us denote <span class="math-container">$A = rH_n$</span>. Since the rows of <span class="math-container">$H_n$</span> are orthogonal, the <span class="math-container">$(AX)_i$</span> are uncorrelated, and as <span class="math-container">$AX$</span> is a gaussian vector, the <span class="math-container">$(AX)_i$</span> are independent. Thus <span class="math-container">$\mathbb{P}\big(||AX||_{\infty} \le \varepsilon\big) = \prod \limits_{i=1}^n \mathbb{P}\big(|(AX)_i| \le \varepsilon\big)$</span>. Last, the coefficients of <span class="math-container">$A$</span> are either <span class="math-container">$r$</span> or <span class="math-container">$-r$</span>, so <span class="math-container">$(AX)_i \sim \mathcal{N}(0, nr^2)$</span>, and thus
<span class="math-container">$\mathbb{P}\big(||AX||_{\infty} \le \varepsilon\big) = \mathbb{P}\big(|X_0| \le \frac{\varepsilon}{\sqrt{n}r}\big)^n$</span>.</p>
<hr />
<p>Conclusion: for all <span class="math-container">$n$</span>, you can take <span class="math-container">$$f(\varepsilon) = \frac{\varepsilon}{\sqrt{n} \cdot F^{-1}\big(\frac{1+\alpha^{1/n}}{2}\big)}$$</span></p>
<p>with <span class="math-container">$F$</span> the cdf of <span class="math-container">$X_0$</span>, to get <span class="math-container">$||AX||_{\infty} \le \varepsilon$</span> for all <span class="math-container">$A$</span> with <span class="math-container">$||A||_{\infty} \le f(\varepsilon)$</span>, with confidence level at least <span class="math-container">$\alpha$</span>, and exactly <span class="math-container">$\alpha$</span> for those <span class="math-container">$n$</span> for which a Hadamard matrix exists.</p>
<p>Using <a href="https://math.stackexchange.com/questions/2964944/asymptotics-of-inverse-of-normal-cdf">the asymptotics of <span class="math-container">$F^{-1}$</span></a>, as <span class="math-container">$n \to +\infty$</span> you can take <span class="math-container">$f(\varepsilon) = \frac{\varepsilon}{\sqrt{n \log\big(\frac{4n^2}{\log(\alpha)^2}\big)}}$</span>.</p>
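<p>A sketch checking the conclusion for <span class="math-container">$n = 2$</span> (my own code; Python's <code>statistics.NormalDist</code> supplies <span class="math-container">$F^{-1}$</span>): with <span class="math-container">$A = f(\varepsilon) H_2$</span>, the worst case by Lemma 2, the simulated coverage should sit right at <span class="math-container">$\alpha$</span>.</p>

```python
import math
import random
from statistics import NormalDist

def f_eps(eps, n, alpha):
    """Largest ||A||_inf guaranteeing P(||AX||_inf <= eps) >= alpha (Lemma 1)."""
    q = NormalDist().inv_cdf((1 + alpha ** (1 / n)) / 2)
    return eps / (math.sqrt(n) * q)

n, eps, alpha = 2, 1.0, 0.95
r = f_eps(eps, n, alpha)

# Worst case for n = 2: A = r * H_2 with H_2 = [[1, 1], [1, -1]] (Lemma 2)
random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    x1, x2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    y1, y2 = r * (x1 + x2), r * (x1 - x2)
    hits += max(abs(y1), abs(y2)) <= eps
coverage = hits / trials
```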
<hr />
|
2,683,731 | <p>If a relation is not symmetric, shall we say that it is antisymmetric?
I had this question in a quiz show's preliminary round; it looks easy, but I couldn't answer. Can anyone answer?</p>
| Bram28 | 256,001 | <p>No. Anti-symmetry means that if $R(x,y)$ and $R(y,x)$ then $x=y$. In other words, for anti-symmetry you cannot have both $R(a,b)$ and $R(b,a)$ for different $a$ and $b$. But that is not the negation of symmetry. </p>
<p>For example, for $R$ to be not anti-symmetric, we could have both $R(a,b)$ and $R(b,a)$, but to then also make it not symmetric, we simply add $R(a,c)$, which is exactly what @Levent did in their answer.</p>
<p>Notice that this example is also not asymmetric (indeed, asymmetry implies anti-symmetry, and so if it is not anti-symmetric, then it is automatically not asymmetric either) so asymmetry is also not the negation of symmetry.</p>
<p>I suppose you could use a phrase like 'non-symmetry', but it's best just to say: it's 'not symmetric'. Likewise the example given is also 'not anti-symmetric' and 'not asymmetric'.</p>
<p>Maybe this helps: 'symmetry', 'asymmetry', and 'anti-symmetry' are all properties of 'nicely behaving' relations, so you can have relations that don't have any of these 'nice' features.</p>
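<p>These definitions are easy to test mechanically on finite relations given as sets of ordered pairs; a small sketch (the helper names are mine):</p>

```python
def is_symmetric(R):
    # R(x, y) implies R(y, x)
    return all((b, a) in R for (a, b) in R)

def is_antisymmetric(R):
    # R(x, y) and R(y, x) imply x == y
    return all(a == b for (a, b) in R if (b, a) in R)

# The example from the answer: R(a,b) and R(b,a) break antisymmetry,
# while R(a,c) without R(c,a) breaks symmetry.
R = {("a", "b"), ("b", "a"), ("a", "c")}
```

<p>So <code>R</code> is neither symmetric nor antisymmetric, confirming that the two are not negations of each other.</p>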
|
2,683,731 | <p>If a relation is not symmetric, shall we say that it is antisymmetric?
In a quiz show's preliminary round I had this question; it looks easy, but I couldn't answer it. Can anyone answer?</p>
| user061703 | 515,578 | <p>You need to pay attention to the logic here (the "logic" tag should be added).</p>
<p>A relation $R$ over the set $X$ is <strong>symmetric</strong> if and only if this statement is true: "For all $a$ and $b$ in $X$, $a$ is related to $b$ $\Leftrightarrow$ $b$ is related to $a$."</p>
<p>A relation $R$ over the set $X$ is <strong>antisymmetric</strong> if and only if no pair of distinct elements $a$ and $b$ satisfies both "$a$ is related to $b$" and "$b$ is related to $a$".</p>
<p>If a relation $R$ over the set $X$ involves four elements $a_{1},a_{2},b_{1},b_2$ such that $a_1$ and $b_1$ are related to each other in both directions, while $a_{2}$ is related to $b_2$ but $b_2$ is not related to $a_2$, we can conclude that $R$ is not symmetric; the pair $a_1,b_1$ also violates the condition for antisymmetry, so $R$ is neither.</p>
|
4,415,254 | <p>When is <span class="math-container">$n!>x^n$</span>, assuming that <span class="math-container">$x$</span> is a fixed positive number? I know we can take the log of both sides and use the following formula:</p>
<p><span class="math-container">$$n\log(x) = \log(x^n) < \log(n!) = \sum_{i = 1}^n\log(i)$$</span>.
But this still gives me difficulty trying to find a specific N that makes the right side larger. Is there another formula I should be using?</p>
| Paolo Leonetti | 45,736 | <p>Use that (see <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow noreferrer">here</a>) for <span class="math-container">$n\ge 1$</span>:
<span class="math-container">$$n!>\sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{\frac{1}{12n+1}}.$$</span>
Hence it is enough to check that
<span class="math-container">$$\sqrt{2\pi n}\left(\frac{n}{e}\right)^n e^{\frac{1}{12n+1}}\ge x^n,$$</span>
where <span class="math-container">$x$</span> is fixed and positive. Equivalently
<span class="math-container">$$2\pi n\left(\frac{n}{ex}\right)^{2n} e^{\frac{2}{12n+1}}\ge 1.$$</span>
This holds, of course, for all <span class="math-container">$n\ge n_0(x):=ex$</span>.</p>
<p>Edit: As pointed out in the comments below by SomeCallMeTime, this estimate is asymptotically optimal.</p>
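<p>A quick brute-force check of where $n!$ first overtakes $x^n$, against the threshold $n_0(x)=ex$ above (the helper name is mine):</p>

```python
import math

def smallest_n(x):
    """Smallest n >= 1 with n! > x**n, for a fixed positive x."""
    n = 1
    while math.factorial(n) <= x ** n:
        n += 1
    return n

# e.g. for x = 3: 6! = 720 <= 3**6 = 729, but 7! = 5040 > 3**7 = 2187,
# and indeed 7 <= ceil(e * 3) = 9.
```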
|
3,631,714 | <p>If <span class="math-container">$f: \mathbb{R}^n \to \mathbb{R} $</span> is a function on the Schwartz space <span class="math-container">$\mathcal{S} (\mathbb{R}^n) $</span> we define the Fourier transform of <span class="math-container">$f$</span> as the function
<span class="math-container">$$ \hat f(\xi)= \int_{\mathbb{R}^n} f(x)e^{2 \pi ix\xi} dx $$</span>
How can I prove that if <span class="math-container">$f \in L^1(\mathbb{R}^n)$</span>, then the same formula for <span class="math-container">$\hat f$</span> holds? I'd like to prove that that formula extends the Fourier transform on <span class="math-container">$L^1$</span>.</p>
| Community | -1 | <p>I think that there is a misunderstanding of how math is taught in our country; it's not as intensive as you think. JEE is an exam which is taken by a million students, but only <span class="math-container">$11{,}000$</span> get selected. So if you require books of <strong>JEE Advanced</strong> standard, then go for books which are written exclusively for it, and not basic theory books such as NCERT, as mentioned by others. NCERT is a common book for all high school students, which may reach about <span class="math-container">$6$</span> million (not so sure).</p>
<p>Some books which JEE aspirants follow are given in this <a href="https://www.quora.com/What-are-the-best-books-that-can-be-referred-to-for-JEE-Mains-and-Advanced-for-preparation-in-Physics-Chemistry-and-Mathematics" rel="nofollow noreferrer">link</a>; it includes some old books, and I think all the PDFs are available online. I would recommend the Cengage series in math for JEE Advanced.</p>
|
1,515,817 | <p>I conjecture that in a consecutive sequence of $n$ natural numbers all greater than $n$, there exists at least one number which is not divisible by any prime number less than or equal to $n/2$.</p>
<p>Can any one prove or disprove this?</p>
| Eric Naslund | 6,075 | <p>The conjecture is false. This problem is directly related to finding large gaps between primes, and the methods of Erdos, Rankin and others.</p>
<p>Define the <em>Jacobsthal function</em> $j(q)$ to be the largest gap between consecutive reduced residues modulo $q$, that is the largest gap between elements that are relatively prime to $q$. Note that your conjecture is equivalent to asking if $$j\left(\prod_{p\leq n/2} p\right)\leq n$$ holds for all $n$. To see why, consider any sequence of $n$ consecutive numbers modulo $M=\prod_{p\leq n/2}p$. Then each of them will be divisible by some $p\leq n/2$ if and only if $j\left(\prod_{p\leq n/2}p\right)\geq n$. </p>
<p>This function $j$ is directly related to best lower bounds for prime gaps. Indeed, if $$j\left(\prod_{p\leq X}p\right)\geq f(X)$$ infinitely often, (where $f$ is a nice function, strictly increasing etc.) then $$\max_{p_{n+1}\leq x} p_{n+1}-p_n \geq f(\log x).$$ In a recent paper of <a href="http://arxiv.org/pdf/1412.5029v2.pdf" rel="nofollow">Kevin Ford, Ben Green, Sergei Konyagin, James Maynard, Terence Tao</a>, they proved that $$j\left(\prod_{p\leq x} p\right)\gg \frac{x\log x \log \log \log x}{\log \log x},$$ and hence</p>
<p>$$\max_{p_n\leq X} p_{n+1}-p_n\gg \frac{\log X\log \log X \log \log \log \log X}{\log \log \log X}.$$
We remark that Erdős had put a $\$10000$ prize on this result, the largest amount he set for any problem.</p>
<p>While this result does disprove your conjecture, we do not need use such powerful theorems. We need only use Lemma 7.13 of Montgomery and Vaughn which states that $$\lim_{n\rightarrow \infty} \frac{j\left(\prod_{p\leq n} p\right)}{n}=\infty.$$ This is proven using an elementary sieving argument, and the result was originally given by Westzynthius.</p>
|
2,229,532 | <p>I have a circle which has a triangle inscribed in it.</p>
<p>The circle radius R = 4</p>
<p>The triangle ABC vertices divide circle into 3 arcs in 1:2:3 ratio</p>
<p>Find the perimeter and area of triangle.</p>
<p>Can you guys help me with this one? </p>
| ThoughtOfGod | 418,784 | <p>Observe that the arcs are in ratio $1:2:3$, so they are $\frac16$, $\frac26$ and $\frac36$ of the whole circle, and the corresponding central angles are $60^\circ$, $120^\circ$ and $180^\circ$. In particular the triangle has a side which is a diameter, and another which equals a radius.</p>
<p>It may be easier to think of it as a hexagon divided into $6$ equal parts.</p>
<p>Because an angle on the circumference is equal to half of the angle at the center subtended by the same arc, the angle opposite the diameter is $90^\circ$, so it is a right triangle; the Pythagorean theorem then gives an area of $8\sqrt{3}$.</p>
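<p>A numeric check of this, using the chord-length formula $2R\sin(\theta/2)$ for a central angle $\theta$ (the variable names are mine):</p>

```python
import math

R, ratios = 4, [1, 2, 3]
central = [2 * math.pi * r / sum(ratios) for r in ratios]  # 60, 120, 180 degrees
a, b, c = (2 * R * math.sin(t / 2) for t in central)       # chords: 4, 4*sqrt(3), 8

perimeter = a + b + c                                      # 12 + 4*sqrt(3)
s = perimeter / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))          # Heron: 8*sqrt(3)
```

<p>The longest side subtends half the circle, so it is the diameter $2R=8$, and the right angle opposite it makes the area $\frac12\cdot 4\cdot 4\sqrt3=8\sqrt3$.</p>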
|
1,401,661 | <blockquote>
<p>Dice are cubes with pips (small dots) on their sides, representing
numbers 1 through 6. Two dice are considered the same if they can be
rotated and placed in such a way that they present matching numbers on
the top, bottom, left, right, front, and back sides.</p>
<p>Below is an example of two dice that can be rotated to show that they
are the same if the 2-pip and 4-pip sides are opposite and the 3-pip
and 5-pip sides are also opposite.
<a href="https://www.dropbox.com/s/6q56njm11hu3f36/Screenshot%202015-08-18%2012.02.11.png?dl=0" rel="nofollow">https://www.dropbox.com/s/6q56njm11hu3f36/Screenshot%202015-08-18%2012.02.11.png?dl=0</a></p>
<p>How many different dice exist? That is, how many ways can you make
distinct dice that cannot be rotated to show they are the same? Note:
This problem does not involve rolling the dice or the probability of
roll outcomes.</p>
</blockquote>
<p>I'm having trouble understanding exactly what is being asked in this question. I understand that I have to find how many different ways the dice can be placed to show that they are the same, but saying they cannot be rotated confuses me.</p>
<p>Could somebody make an attempt at rewording this? Or walking me through how to solve this?</p>
| André Nicolas | 6,312 | <p>Let us sit down at a table. Whatever die we have, it can be put on the table with the $1$ face down. </p>
<p>There are then $5$ possibilities for the up face (only one is legal, the face opposite $1$ is always $6$, but we will not worry about that). We make a choice for the up face, like $3$, count the dice that have up face $3$ and then multiply the result by $5$.</p>
<p>So now we concentrate on counting the dice with down face $1$ and up face $3$. Take the smallest face that has not been mentioned yet, in this case $2$. Rotate the die, keeping it with the $1$ face on the table, until you are looking at the $2$ face. Now there remain $3$ faces. Any two distinct orderings of the numbers on these $3$ faces give different dice, giving a total of $3!$ dice with down face $1$ and up face $3$. </p>
<p>Finally, multiply by $5$. </p>
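<p>This count ($5\times 3!=30$) can be confirmed by brute force: act on all $6!$ labelings of the faces with the $24$ rotations of the cube and count orbits. A sketch (the face indexing and the choice of generating rotations are mine):</p>

```python
from itertools import permutations

# Faces: 0=up, 1=down, 2=north, 3=south, 4=east, 5=west.
# A rotation g sends the face at position i to position g[i].
r_z = (0, 1, 4, 5, 3, 2)  # quarter turn about the up-down axis: N->E->S->W->N
r_x = (4, 5, 2, 3, 1, 0)  # quarter turn about the north-south axis: U->E->D->W->U

def compose(g, h):
    return tuple(g[h[i]] for i in range(6))

# Close the two generators into the full rotation group (24 elements).
group, frontier = {tuple(range(6))}, [r_z, r_x]
while frontier:
    g = frontier.pop()
    if g not in group:
        group.add(g)
        frontier += [compose(g, r_z), compose(g, r_x)]

def rotate(g, die):
    out = [0] * 6
    for i in range(6):
        out[g[i]] = die[i]
    return tuple(out)

# One canonical representative per rotation class of labelings.
dice = {min(rotate(g, d) for g in group) for d in permutations(range(1, 7))}
```

<p>Since all six labels are distinct, only the identity rotation fixes a labeling, so the orbit count is $6!/24=30$, matching the answer.</p>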
|
553,103 | <p>consider the series of function $\sum f_n$ with $f_n(x)=\frac{x}{x^2+n^2}$.
It is easy to see that there is pointwise convergence on $\mathbb{R}$ (to a function that we'll call $f$) but not normal convergence. I want to know whether there is uniform convergence towards $f$ or not. </p>
<p>I tried to disprove uniform convergence by looking at $|\sum_{k=0}^n f_k(n)-f(n)|$ : the sum is unbounded and equivalent to $\ln n$. The problem is that I don't know how to estimate $f(n)$. </p>
<p>I'm sure I'm missing something not too difficult here, so help would be appreciated.</p>
| Khosrotash | 104,171 | <p><img src="https://i.stack.imgur.com/8jDVf.jpg" alt="enter image description here">
If you factor $4\cos^2 x$ out of $4\cos^2 x+\sin^2 x$, you will have an $\arctan$-form integral at the end. Note that the period of the function is $\pi$, so your problem reduces to $4\int_{0}^{\pi/2} f(x)\,dx$.</p>
|
510,672 | <p>My prof taught us that during Gaussian Elimination, we can perform three elementary operations to transform the matrix:</p>
<p>1) Multiply a row by a non-zero constant
2) Add or subtract rows
3) Interchange rows</p>
<p>In addition to those, why isn't removing zero rows an elementary operation? It doesn't affect the system in any way. Define a zero row to be a row containing only zeros (so it has no leading entry).</p>
<p>For example isn't $\begin{bmatrix}a & b & k\\c & d & m\end{bmatrix} \rightarrow
\begin{bmatrix}a & b & k\\c & d & m\\0 & 0 & 0\end{bmatrix}$</p>
| Community | -1 | <p>Aside: which row operations are called elementary is purely a matter of convention: we can pick any operations we want to be called the elementary ones. Those three are the ones we chosen.</p>
<hr>
<p>The problem is that row operations are formal things. e.g. there is a row operation called "multiply the first row by 5", but there is not a row operation called "multiply the first row by 5 if its first nonzero entry is 0.2".</p>
<p>Along those lines, "remove a row" is a row operation, but "remove a zero row" is not.</p>
<p>"<em>Add</em> a zero row" is a row operation, though.</p>
<hr>
<p>Those zero rows <em>are</em> useful -- or, more accurately, you need to remember what row operations you performed to make that zero row. If $R$ is the matrix that accumulates all of your row operations, then when trying to solve</p>
<p>$$Ax = b$$</p>
<p>you can inspect</p>
<p>$$ RAx = Rb$$</p>
<p>and know that there are no solutions if $Rb$ has a nonzero entry in the position where $RA$ has a zero row.</p>
<p>It's also useful for finding left null vectors to your matrix: if $RA$ is in row echelon form, then the rows of $R$ that correspond to zero rows of $RA$ are a basis for the left nullspace of $A$.</p>
<p>If you do the row operation to eliminate the zero row, you've forgotten all of that useful information.</p>
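<p>A tiny sketch of this bookkeeping with exact arithmetic (the matrices are illustrative): apply the same row operation to $A$ and to an identity matrix, so the accumulated $R$ records how the zero row was produced:</p>

```python
from fractions import Fraction as F

A = [[F(1), F(2), F(3)],
     [F(2), F(4), F(6)]]          # second row = 2 * first row
R = [[F(1), F(0)],
     [F(0), F(1)]]                # accumulates the row operations

# Row operation "row2 <- row2 - 2*row1", applied to A and to R alike.
m = A[1][0] / A[0][0]
A[1] = [x - m * y for x, y in zip(A[1], A[0])]
R[1] = [x - m * y for x, y in zip(R[1], R[0])]

# A[1] is now a zero row, and R[1] = (-2, 1) is a left null vector of
# the original matrix: -2*(1,2,3) + 1*(2,4,6) = (0,0,0).
```

<p>Deleting the zero row would discard exactly the row of $R$ that certifies the dependence.</p>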
|
3,131,791 | <p>I read this:</p>
<blockquote>
<p><strong>Definition 1.1</strong>. <em>The</em> complex projective line <span class="math-container">$\mathbb{CP}^1$</span> <em>(or just</em> <span class="math-container">$\mathbb{P}^1$</span><em>) is the set of all ordered pairs of complex numbers</em> <span class="math-container">$\{(x, y)\in\mathbb{C}^2\mid(x, y)\neq(0, 0)\}$</span> <em>where we identify pairs</em> <span class="math-container">$(x, y)$</span> <em>and</em> <span class="math-container">$(x', y')$</span> <em>if one is a scalar multiple of the other</em>: <span class="math-container">$(x, y)=\lambda(x', y')$</span> <em>for some</em> <span class="math-container">$\lambda\in\mathbb{C}^*$</span><em>, where</em> <span class="math-container">$\mathbb{C}^*$</span> <em>is the set of nonzero complex numbers</em>.</p>
<p><span class="math-container">$\quad$</span> So for example <span class="math-container">$(1, 2)$</span>, <span class="math-container">$(3, 6)$</span>, and <span class="math-container">$(2+3i, 4+6i)$</span> all represent the same point of <span class="math-container">$\mathbb{P}^1$</span>.</p>
<p><span class="math-container">$\quad$</span> This construction is an example of the quotient of a set by an equivalence relation. See Exercise 2.</p>
<p><span class="math-container">$\quad$</span> The idea is that <span class="math-container">$\mathbb{P}^1$</span> can be thought of as the union of the set of complex numbers <span class="math-container">$\mathbb{C}$</span> and a single point "at infinity". To see this, consider the following subset <span class="math-container">$U_0 \subset \mathbb{P}^1$</span>:</p>
<p><span class="math-container">$$ U_0 = \{(x_0, x_1) \in\mathbb{P}^1\mid x_0\neq 0 \}.$$</span></p>
<p>Then <span class="math-container">$U_0$</span> is in one-to-one correspondence with <span class="math-container">$\mathbb{C}$</span> via the map</p>
<p><span class="math-container">$$\tag{1} \phi_0 : U_0 \to \mathbb{C} : \qquad (x_0, x_1) \mapsto \frac{x_1}{x_0}. $$</span></p>
<p>Note that <span class="math-container">$\phi_0$</span> is well defined on <span class="math-container">$U_0$</span>. First of all, <span class="math-container">$x_0$</span> is not <span class="math-container">$0$</span> so the division makes sense. Secondly, if <span class="math-container">$(x_0, x_1)$</span> represent the same point of <span class="math-container">$\mathbb{P}^1$</span> as <span class="math-container">$(x_0', x_1')$</span> then <span class="math-container">$x_0 = \lambda x_0'$</span> and <span class="math-container">$x_1 = \lambda x_1'$</span> for some nonzero <span class="math-container">$\lambda \in \mathbb{C}$</span>. Thus, <span class="math-container">$\phi_0((x_0, x_1)) = x_1 / x_0 = (\lambda x_1')/(\lambda x_0')$</span> <span class="math-container">$= x_1' / x_0' = \phi_0((x_0', x_1')) $</span> and <span class="math-container">$\phi_0$</span> is well defined as claimed. The inverse map is given by</p>
<p><span class="math-container">$$ \psi_0 : \mathbb{C}\to U_0 : \qquad z \mapsto (1, z). $$</span></p>
<p><span class="math-container">$\quad$</span> The complement of <span class="math-container">$U_0$</span> is the set of all points of <span class="math-container">$\mathbb{P}^1$</span> of the form <span class="math-container">$(0, x_1)$</span>. But since <span class="math-container">$(0, x_1) = x_1(0, 1)$</span>, all of these points coincide with <span class="math-container">$(0,1)$</span> as a point of <span class="math-container">$\mathbb{P}^1$</span>. So <span class="math-container">$\mathbb{P}^1$</span> is obtained from a copy of <span class="math-container">$\mathbb{C}$</span> by adding a single point.</p>
<p><span class="math-container">$\quad$</span> This point can be thought of as the point of infinity. To see this, consider a complex number <span class="math-container">$t$</span>, and identify it with a point of <span class="math-container">$U_0$</span> using <span class="math-container">$\psi_0$</span>; i.e., we identify it with <span class="math-container">$\psi_0(t) = (1, t)$</span>. Now let <span class="math-container">$t \to \infty$</span>. The beautiful feature is that the limit now exists in <span class="math-container">$\mathbb{P}^1$</span>! To see this, rewrite <span class="math-container">$(1, t)$</span> as <span class="math-container">$(1/t, 1)$</span> using scalar multiplication by <span class="math-container">$1/t$</span>. This clearly approaches <span class="math-container">$(0,1)$</span> as <span class="math-container">$t \to \infty$</span>, so <span class="math-container">$(0, 1)$</span> really should be thought of as the point at infinity!</p>
<p><span class="math-container">$\quad$</span> We have been deliberately vague about the precise meaning of limits in <span class="math-container">$\mathbb{P}^1$</span>. This is a notion from topology, which we will deal with later in Chapter 4. The property that limits exist in a topological space is a consequence of the <em>compactness</em> of the space, and the process of enlarging <span class="math-container">$\mathbb{C}$</span> to the compact space <span class="math-container">$\mathbb{P}^1$</span> is our first example of the important process of <em>compactification</em>. This makes the solutions to enumerative problems well-defined, by preventing solutions from going off to infinity. A precise definition of compactness will be given in Chapter 4. </p>
<p><span class="math-container">$\quad$</span> We now have to modify our description of complex polynomials by associating to them polynomials <span class="math-container">$F(x_0, x_1)$</span> on <span class="math-container">$\mathbb{P}^1$</span>. Before turning to their definition, note that the equation <span class="math-container">$F(x_0, x_1) = 0$</span> need not make sense as a well-defined equation of <span class="math-container">$\mathbb{P}^1$</span>, since it is conceivable that a point could have different representations <span class="math-container">$(x_0, x_1)$</span> and <span class="math-container">$(x_0', x_1')$</span> such that <span class="math-container">$F(x_0, x_1) = 0$</span> while <span class="math-container">$F(x_0', x_1') \neq 0$</span>. We avoid this problem by requiring that <span class="math-container">$F(x_0, x_1)$</span> be a <em>homogeneous polynomial</em>; i.e., all terms in <span class="math-container">$F$</span> have the same total degree, which is called the degree of <span class="math-container">$F$</span>. So</p>
<p><span class="math-container">$$\tag{2} F(x_0, x_1) = \sum_{i=0}^d a_i x_0^i x_1^{d-i} $$</span></p>
<hr>
</blockquote>
<p>In the last paragraph, I tried to take one non-homogeneous polynomial and checked that indeed, that problem happens but when I took an homogeneous polynomial, the problem vanishes! I am baffled: it looks like sorcery! I tried to explain myself why that happens but couldn't so, why does that happen?</p>
| Matt Samuel | 187,867 | <p>What about <span class="math-container">$F(x) =x+1$</span>? Is it mystifying that <span class="math-container">$F(-1)=0$</span> but <span class="math-container">$F(2\cdot - 1)\neq 0$</span>? The property homogeneous polynomials have is that scaling the coordinates scales all terms of the polynomial uniformly, which is what you need. That doesn't happen for any polynomial that isn't homogeneous. </p>
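<p>A concrete check of that scaling property (the polynomials are illustrative): for a homogeneous $G$ of degree $d$ we have $G(\lambda x_0,\lambda x_1)=\lambda^d G(x_0,x_1)$, so whether $G$ vanishes is a property of the point of $\mathbb{P}^1$, not of the representative, while a non-homogeneous $F$ fails this:</p>

```python
def F(x0, x1):
    return x0 - x1 ** 2        # not homogeneous: degrees 1 and 2 mixed

def G(x0, x1):
    return x0 ** 2 - x1 ** 2   # homogeneous of degree 2

# (1,1) and (2,2) are the same point of P^1, but F vanishes at only one
# of these representatives, while G vanishes at every representative.
```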
|
2,501,450 | <p>I was trying for a while to prove non-existence of the following limit:</p>
<p>$$\lim_{n\to\infty}(-1)^n\frac{2^n+4n+6}{2^n(\sqrt[n]{5}-1)}$$</p>
<p>Unfortunately, with no results.</p>
<p>My hope was to show, that:</p>
<p>$$\lim_{n\to\infty}\frac{2^n+4n+6}{2^n(\sqrt[n]{5}-1)}\not=0$$</p>
<p>But showing that was harder than I thought.</p>
<p>Can anyone show me how to solve this problem?</p>
| avz2611 | 142,634 | <p>let $\sqrt[n]{5}=x$
$$\lim_{n\to\infty}\frac{2^n+4n+6}{2^n(\sqrt[n]{5}-1)}=\lim_{n\to\infty}\frac{2^n+4n+6}{2^n(x-1)}=\lim_{n\to\infty}\frac{(2^n+4n+6)\cdot(1+x+x^2+\cdots+x^{n-1})}{2^n(x^n-1)}$$
now substitute $x^n=5$: the denominator becomes $2^n(x^n-1)=4\cdot 2^n$, while the numerator is at least $2^n\cdot n$ (the sum has $n$ terms, each $\ge 1$ since $x>1$), so the expression is at least $\frac n4\to\infty$; in particular it is non-zero.</p>
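<p>Numerically, the quotient indeed grows without bound (since $5^{1/n}-1\sim \frac{\ln 5}{n}$, it behaves like $\frac{n}{\ln 5}$), which is easy to check:</p>

```python
import math

def a(n):
    return (2 ** n + 4 * n + 6) / (2 ** n * (5 ** (1 / n) - 1))

# a(n) ~ n / ln(5), so it diverges; with the (-1)^n sign no limit can exist.
```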
|
594,482 | <p>If a,b and c are elements of a ring, does the equation ax+b=c always have a solution x? If it does, must the solution be unique? </p>
| Dietrich Burde | 83,966 | <p>About the existence of a solution: for the ring $\mathbb{Z}$ there exists a solution to $ax=b$ if and only if $a\mid b$. For the ring $M_n(K)$ the matrix equation $AX=B$ is a special case of the Sylvester equation, i.e., $AX+XC=B$. See the discussion here: <a href="https://math.stackexchange.com/questions/179413/existence-of-non-trivial-solution-of-sylvester-equation">Existence of non-trivial solution of Sylvester equation</a>.
In general, the existence is a difficult issue. Concerning uniqueness, the matrix equation $AX=0$ also shows that there may be many solutions in general. </p>
|
1,216,576 | <p><img src="https://i.stack.imgur.com/p4xbH.png" alt="enter image description here"></p>
<blockquote>
<p>$ABC$ and $BDE$ are two equilateral triangles such that $D$ is the midpoint of
$BC$. If $AE$ intersects $BC$ at $F$, show that $Area(\triangle BFE)=2Area(\triangle FED)$</p>
</blockquote>
<p><strong>Solution given</strong><img src="https://i.stack.imgur.com/fBsFE.png" alt="enter image description here"> :</p>
<p><img src="https://i.stack.imgur.com/wsWIe.png" alt="enter image description here"></p>
<p>I didn't understand why, in the solution, $FB=2FD$.</p>
<p>Any hint is appreciated.</p>
| Tae Hyung Kim | 94,401 | <p>This is the well known Vandermonde's identity. There's a rather nice proof of a generalization of it. Consider the generating functions $(1 + x)^n\cdot(1 + x)^m$ and $(1 + x)^{n + m}$. Then obviously the $r$th degree term must be the same for both generating functions. Expand the first one to get
$$ \left(1 + \binom n 1 x + \cdots + \binom n n x^n\right)\cdot\left(1 + \binom{m}{1} x + \cdots + \binom {m}{m}x^m\right).$$
Then the term of degree $r$ has coefficient
$$ \binom n r + \binom n {r-1}\binom{m}1 + \binom n {r-2}\binom{m}2 +\cdots+\binom{m}{r}$$
This is equal to the term of degree $r$ of the second generating function, which is $\binom{m+n}{r}$. </p>
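<p>The identity is easy to verify numerically for small parameters (a quick sketch):</p>

```python
from math import comb

def vandermonde_lhs(m, n, r):
    # sum_k C(n, r-k) * C(m, k); math.comb returns 0 when k exceeds m.
    return sum(comb(n, r - k) * comb(m, k) for k in range(r + 1))

ok = all(vandermonde_lhs(m, n, r) == comb(m + n, r)
         for m in range(7) for n in range(7) for r in range(m + n + 1))
```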
|
118,985 | <p>Here's a question I came across:</p>
<blockquote>
<p>count the number of permutations of 10 men, 10 women and one child,
with the limitation that a man will not sit next to another man, and a
woman will not sit next to another woman</p>
</blockquote>
<p>What I tried was using the inclusion-exclusion principle, but it got way too complicated. I don't have the right answer, but I know it should be simple (it was a 5-point question in an exam I was going through).</p>
<p>Any ideas? Thanks!</p>
| Community | -1 | <p>Note that $\frac{x^2}{e^{x}} < \frac1{x^2}$ for $x>9$. Hence, split the integral from $2$ to $9$ and then from $9$ to $\infty$ and argue why both are finite.</p>
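<p>The claimed bound amounts to $x^4<e^x$, which indeed holds from $x=9$ on ($9^4=6561<e^9\approx 8103$); a quick check:</p>

```python
import math

# x^2 / e^x < 1 / x^2  is equivalent to  x^4 < e^x.
holds = all(x ** 4 < math.exp(x) for x in [9, 9.5, 10, 20, 50, 100])
# It fails just below the threshold: 8^4 = 4096 > e^8 ~ 2981.
```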
|
2,797,039 | <p><strong>1)</strong> Let $\{f_n\}$ be a sequence of nonnegative measurable functions of $\mathbb R$ that converges pointwise on $\mathbb R$ to $f$ integrable. Show that</p>
<p>$$\int_{\mathbb R} f = \lim_{n\to \infty}\int_{\mathbb R}f_n \Rightarrow \int_{E} f = \lim_{n\to \infty}\int_{E}f_n $$</p>
<p>for any measurable set $E$</p>
<p>I know that $\int_{\mathbb R} f = \int_{\mathbb R \setminus E} f + \int_{E} f$ and $\int_{\mathbb R \setminus E} f \le \liminf_{n\to {\infty}}\biggl(\int_{\mathbb R \setminus E} f_n \biggr)$ from Fatou's Lemma.</p>
<p>I couldn't obtain $\int_{E} f = \liminf_{n\to \infty}\int_{E}f_n = \limsup_{n\to \infty}\int_{E}f_n$, and I have seen the inequality below used to obtain it, but I couldn't understand it. Could someone explain it to me, please?</p>
<blockquote>
<p>$$\liminf_{n\to \infty}\int_{\mathbb R \setminus E}f_n = \int_{\mathbb R}f-\limsup_{n\to \infty}\int_{E}f_n$$</p>
</blockquote>
<p><strong>2)</strong> It has been written "since $\int_Ef_n \le \int_Ef$ (this inequality from monotonicity I have understood) thus </p>
<blockquote>
<p>$$\limsup\int_Ef_n \le \int_Ef$$</p>
</blockquote>
<p>in proof of The Monotone Convergence Theorem in Royden's Real Analysis. I couldn't see why that inequality obtains.</p>
<p>Thanks for any help</p>
<p>Regards</p>
| fleablood | 280,126 | <p>Interesting question.</p>
<p>Assume the die roll is $m$. There are ${m \choose 2}$ positions that the two jacks can occupy. And there are $4*3$ possible (ordered) jacks they can be. (I'm distinguishing order; I suspect the math will be easier-- I may be wrong, but so long as I am consistent it doesn't matter). There are $\frac {48!}{(48-(m-2))!}=\frac {48!}{(50-m)!}$ ways the rest of the cards could be. So given that the die roll is $m$, the probability of exactly $2$ jacks is $\frac {12{m\choose 2}\frac {48!}{(50-m)!}}{\frac {52!}{(52-m)!}}=12{m\choose 2}\frac {48!}{52!}\frac {(52-m)!}{(50-m)!}$</p>
<p>And the probability that the die is $m$ is $\frac 16$.</p>
<p>So the probability of exactly 2 jacks is $\sum\limits_{m=1}^6 \frac 1612{m\choose 2}\frac {48!}{52!}\frac {(52-m)!}{(50-m)!}=2\frac {48!}{52!}\sum\limits_{m=1}^6{m\choose 2}\frac {(52-m)!}{(50-m)!}$</p>
<p>$\sum\limits_{m=1}^6{m\choose 2}\frac {(52-m)!}{(50-m)!}=$</p>
<p>$0*\frac {51!}{49!}+ 1*\frac {50!}{48!} + 3*\frac {49!}{47!}+ 6*\frac {48!}{46!}+ 10\frac {47!}{45!}+ 15\frac {46!}{44!}=$</p>
<p>$50*49 + 3*49*48 + 6*48*47+10*47*46 + 15*46*45=75712=2^6*7*13^2$</p>
<p>And so $2\frac {48!}{52!}\sum\limits_{m=1}^6{m\choose 2}\frac {(52-m)!}{(50-m)!}=$</p>
<p>$2\frac 1{52*51*50*49}2^6*7*13^2=$</p>
<p>$\frac {2^4*13}{3*5^2*7*17}=\frac {208}{8925}$</p>
<p>I don't know if there is a way to simplify the sum.</p>
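<p>An independent exact cross-check, using the hypergeometric form $P(\text{2 jacks}\mid m)=\binom42\binom{48}{m-2}/\binom{52}{m}$ averaged over the die roll (a sketch; exact rational arithmetic avoids slips in the factorial bookkeeping):</p>

```python
from fractions import Fraction
from math import comb

p = Fraction(0)
for m in range(2, 7):  # with m < 2 cards, two jacks are impossible
    p += Fraction(comb(4, 2) * comb(48, m - 2), comb(52, m))
p /= 6                 # the die roll m is uniform on 1..6

# p = 208/8925, about 0.0233
```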
|
2,804,086 | <p>A distribution is an element of the continuous dual space of some function space. Let us take the Schwartz space $\mathcal{S} := \mathcal{S}(\mathbb{R}^n)$ just as an example. A distribution $\phi \in \mathcal{S}'$ is then a map
$$ \phi: \mathcal{S} \rightarrow \mathbb{C}.$$</p>
<p>My question is this: how do I interpret $\phi(x)$? I see this written a lot, but I don't understand how to work with it. What does for example $\phi(x) = \phi(-x)$ mean? The only thing I can think of is that $\phi(f) = \phi(\hat{f})$, where $\hat{f}(x) = f(-x)$.</p>
<p>And more specifically for the problem I'm working on: I have a distribution
$$ \mathcal{W} : \mathcal{S}(\underbrace{\mathbb{R}^4 \times \dots \times \mathbb{R}^4}_{n \text{ times}}) \rightarrow \mathbb{C} $$</p>
<p>and then they say that $\mathcal{W}$ is translation invariant, i.e.
$$\mathcal{W}(x_1 +a,\dots, x_n + a) = \mathcal{W}(x_1,\dots,x_n)$$
so it can be written as a distribution $\mathfrak{W}$ that only depends on the differences $x_1-x_2,\dots,x_{n-1} - x_n$:
$$\mathcal{W}(x_1,\dots,x_n) = \mathfrak{W}(x_1-x_2,\dots,x_{n-1}-x_n).$$</p>
<p>How do I interpret this last line?</p>
| C. Dubussy | 310,801 | <p>When you have a $C^{\infty}$ map $f : \mathbb{R}^n \to \mathbb{R}^m$ which is proper (it is automatically the case with a $C^{\infty}$ bijection like translations), you can define the pushforward of a distribution by $f$. If $u$ is a distribution on $\mathbb{R}^n$ then $f_{!}u$ is a distribution on $\mathbb{R}^m$ defined by $f_!u(\varphi)=u(\varphi \circ f).$ </p>
<p>So for example the (dangerous) notation $u(x) = u(-x)$ means that $f_!u = u$ with $f : \mathbb{R}^n \to \mathbb{R}^n, x \mapsto -x.$ </p>
<p>The same occurs when considering your translation by $a$. Let us call it $\mu_a.$ The condition $$\mathcal{W}(x_1 +a,\dots, x_n + a) = \mathcal{W}(x_1,\dots,x_n)$$ means that $\mathcal{W}=\mu_{a!}\mathcal{W}$</p>
<p>Now define the map $g : (\mathbb{R}^4)^n \to (\mathbb{R}^4)^{n-1}, (x_1, ... ,x_n) \mapsto (x_1-x_2, ... , x_{n-1}-x_n).$ The last equality means that the condition $\mathcal{W}=\mu_{a!}\mathcal{W}$ implies that $\mathcal{W}=g^*\mathfrak{W}$ for a certain distribution $\mathfrak{W}$ on $(\mathbb{R}^4)^{n-1}$.</p>
|
2,804,086 | <p>A distribution is an element of the continuous dual space of some function space. Let us take the Schwartz space $\mathcal{S} := \mathcal{S}(\mathbb{R}^n)$ just as an example. A distribution $\phi \in \mathcal{S}'$ is then a map
$$ \phi: \mathcal{S} \rightarrow \mathbb{C}.$$</p>
<p>My question is this: how do I interpret $\phi(x)$? I see this written a lot, but I don't understand how to work with it. What does for example $\phi(x) = \phi(-x)$ mean? The only thing I can think of is that $\phi(f) = \phi(\hat{f})$, where $\hat{f}(x) = f(-x)$.</p>
<p>And more specifically for the problem I'm working on: I have a distribution
$$ \mathcal{W} : \mathcal{S}(\underbrace{\mathbb{R}^4 \times \dots \times \mathbb{R}^4}_{n \text{ times}}) \rightarrow \mathbb{C} $$</p>
<p>and then they say that $\mathcal{W}$ is translation invariant, i.e.
$$\mathcal{W}(x_1 +a,\dots, x_n + a) = \mathcal{W}(x_1,\dots,x_n)$$
so it can be written as a distribution $\mathfrak{W}$ that only depends on the differences $x_1-x_2,\dots,x_{n-1} - x_n$:
$$\mathcal{W}(x_1,\dots,x_n) = \mathfrak{W}(x_1-x_2,\dots,x_{n-1}-x_n).$$</p>
<p>How do I interpret this last line?</p>
| goblin GONE | 42,339 | <p>The trouble is that given a map $f : X \rightarrow Y$, the notation $\phi(f(x))$ should really mean "$\phi$ pulled back across $f$." But of course, distributions don't pull back in a natural way. Therefore I think the notation $\phi(f(x))$ only makes sense if $f$ is an invertible function, in which case it really means $(f^{-1})_*(\phi)$, that is, the pushforward of $\phi$ across the inverse of $f$.</p>
|
2,365,120 | <p>I'm doing some revision on indices and surds. How do you simplify </p>
<blockquote>
<p>$(xy^2)^p\sqrt{x^q} $</p>
</blockquote>
<p>Bit confused because my textbook says the answer is </p>
<blockquote>
<p>$x^{p+q/2}y^{2p}$</p>
</blockquote>
<p>I can understand simplifying but only when it's the same base. I'm confused with this specific question - step-by-step help would be much appreciated!</p>
<p>[[Edit: changed $x^{p+1/2q}$ to $x^{p + q/2}$.]]</p>
| Siong Thye Goh | 306,553 | <p>Hint:</p>
<p>Great you can handle when things are in the same base.</p>
<p>Simplify $$x^p \sqrt{x^q}$$</p>
<p>and simplify $$(y^2)^p$$</p>
<p>Remark:
Note that </p>
<p>$$(1/2)q = q/2$$</p>
<p>but $$(1/2)q \neq \frac{1}{2q}$$</p>
|
903,831 | <p>One may be curious why one wishes to convert a polynomial ring to a numerical ring. But as one of the most natural number systems is the integers, and many properties of rings can be easily understood in parallel to the ring of integers, I think converting a polynomial ring to a numerical (i.e. integer) ring is useful.</p>
<p>What I mean by converting to a numerical ring is: in the standard ring of integers, $+$ and $\cdot$ are defined as in usual arithmetic. But is there universal way of converting any polynomial/monomial rings such that each object in the ring gets converted to an integer, and $+$ and $\cdot$ can be defined differently from standard integer $+$ and $\cdot$? This definition would be based on integer arithmetic, though. </p>
| rschwieb | 29,335 | <p>The requirements are a little vague, but it sounds like you'd like to find a function from a polynomial ring $R[X]$ into the set $\Bbb Z$, and then equip $\Bbb Z$ with a potentially unusual addition and multiplication to make the map a ring isomorphism, thereby "representing the ring $R[X]$ with integers." </p>
<p>Further, the addition and multiplication are supposed to be "based on" the operations in $\Bbb Z$, which I'm taking to mean that addition and multiplication between elements $a,b$ will somehow be a polynomial in the indeterminates $a,b$.</p>
<p>The first thing to notice is that the usefulness of this idea is immediately curtailed by the size of $R[X]$. If $R$ is uncountable, say it's $\Bbb R$ or $\Bbb C$, then you're never going to get an injective map of $R[X]$ into $\Bbb Z$, even as a set.</p>
<p>Secondly, one has to ask why it would be easier to work with $\Bbb Z$-with-a-bizarre-multiplication-and-addition rather than just $R[X]$. Take $\Bbb Z[X]$ for example: it would seem a lot simpler just to work in $\Bbb Z[X]$ directly.</p>
<p>Finally, the spirit of ring theory is "let's just take some main features of addition and multiplication in $\Bbb Z$ and explore operations like that on other sets." Trying to cram rings back into $\Bbb Z$ is a bit of a step backwards :)</p>
|
2,661,718 | <p>My basic understanding is that each time
I have a 30% chance of winning the prize,
so between 3 and 4 tries I should win it,</p>
<p>because .3 + .3 + .3 = 90%, as I need to win only once within 3 tries, and
.3 + .3 + .3 + .3 = 120% with 4 tries.</p>
<p>I do remember a formula saying 1 - (0.7)^3 = 65.7% chance, but I don't remember what that number means exactly.</p>
<p>Thanks for answering this noob question; my math is way behind me now.</p>
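<p>(Editor's note, added for clarity: the quoted formula 1 - (0.7)^3 is the probability of winning at least once in 3 independent tries, i.e. the complement of losing all three. A minimal Python check:)</p>

```python
def at_least_once(p, n):
    # chance of winning at least once in n independent tries with win
    # probability p: the complement of losing all n tries
    return 1 - (1 - p)**n
```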
| egreg | 62,967 | <p>It can be easier if you set $1+x=t$, so the inequality becomes
$$
(t-1)^2\ge t(\log t)^2
$$
If we consider the function
$$
f(t)=(t-1)^2-t(\log t)^2
$$
defined for $t>0$, we have
$$
\lim_{t\to0}f(t)=1,\qquad \lim_{t\to\infty}f(t)=\infty
$$
Moreover
$$
f'(t)=2(t-1)-(\log t)^2-2\log t
$$
and
$$
f''(t)=2-2\frac{\log t}{t}-\frac{2}{t}=\frac{2}{t}(t-\log t-1)
$$
It's easy to see that $f''(t)>0$ except for $t=1$, where $f''(1)=0$. Thus $f'$ is strictly increasing, hence only vanishes once.</p>
<p>As $f'(1)=0$, the function $f$ has an absolute minimum at $t=1$.</p>
<hr>
<p>Alternative solution: the inequality is the same as
$$
\lvert\log t\rvert\le\frac{\lvert t-1\rvert}{\sqrt{t}}
$$
and we can study it for $t\ge1$; indeed, if
$$
f(t)=\frac{t-1}{\sqrt{t}}-\log t=\sqrt{t}-\frac{1}{\sqrt{t}}-\log t
$$
we have
$$
f(1/t)=-\frac{t-1}{\sqrt{t}}+\log t=-f(t)
$$
and so $|f(1/t)|=|f(t)|$.</p>
<p>Well, now it's easier to study $g(t)=f(t^2)$:
$$
g(t)=t-\frac{1}{t}-2\log t
$$
Since
$$
g'(t)=1+\frac{1}{t^2}-\frac{2}{t}=\frac{t^2-2t+1}{t}\ge0
$$
the function $g$, and so also $f$, is increasing for $t\ge1$.</p>
<p>Hence $f$ has a minimum at $1$.</p>
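<p>(Editor's addition, not from the original answer: a brute-force numerical check of the claim that $f(t)=(t-1)^2-t(\log t)^2$ is nonnegative with its minimum at $t=1$; the sampling grid is an arbitrary choice.)</p>

```python
import math

def f(t):
    # f(t) = (t - 1)^2 - t*(log t)^2, claimed nonnegative for all t > 0,
    # with minimum value 0 attained at t = 1
    return (t - 1)**2 - t * math.log(t)**2

# log-spaced sample of t values from 1e-3 to 1e3 (includes t = 1 exactly)
grid = [10**(k / 50) for k in range(-150, 151)]
values = [f(t) for t in grid]
```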
|
294,383 | <p>Evaluate :
$$\int_{0}^{+\infty }{\left( \frac{x}{{{\text{e}}^{x}}-{{\text{e}}^{-x}}}-\frac{1}{2} \right)\frac{1}{{{x}^{2}}}\text{d}x}$$</p>
| Felix Marin | 85,343 | <p>\begin{align}
?
&\equiv
{1 \over 4}\int_{-\infty}^{\infty}
{x - \sinh\left(x\right) \over x^{2}\sinh\left(x\right)}\,{\rm d}x
=
{1 \over 4}\sum_{n = 1}^{\infty}2\pi{\rm i}
\lim_{x \to {\rm i}\,n\,\pi}
{\left\lbrack x - \sinh\left(x\right)\right\rbrack\left(x - {\rm i}n\pi\right)
\over
x^{2}\sinh\left(x\right)}
\\[3mm]&=
{\rm i}\,{\pi \over 2}\sum_{n = 1}^{\infty}
{{\rm i}n\pi \over \left({\rm i}n\pi\right)^{2}}\,\lim_{x \to {\rm i}\,n\,\pi}
{x - {\rm i}n\pi \over \sinh\left(x\right)}
=
{1 \over 2}\sum_{n = 1}^{\infty}
{1 \over n}\,{1 \over \cosh\left({\rm i}n\pi\right)}
=
{1 \over 2}\sum_{n = 1}^{\infty}
{1 \over n}\,{1 \over \cos\left(\pi n\right)}
\\[3mm]&=
{1 \over 2}\sum_{n = 1}^{\infty}
{\left(-1\right)^{n} \over n}
=
{1 \over 2}\int_{0}^{1}{\rm d}x
\left\lbrack
{{\rm d} \over {\rm d}x}\sum_{n = 1}^{\infty}
{\left(-1\right)^{n}x^{n} \over n}
\right\rbrack
=
{1 \over 2}\int_{0}^{1}{\rm d}x\,
\sum_{n = 1}^{\infty}\left(-1\right)^{n}x^{n - 1}
\\[3mm]&=
{1 \over 2}\int_{0}^{1}{-1 \over 1 - \left(-x\right)}\,{\rm d}x
=
\color{#ff0000}{\large -\,{1 \over 2}\,\ln\left(2\right)}
\end{align}</p>
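<p>(Editor's addition, not part of the original answer: the final value can be cross-checked numerically in pure Python; the cutoff and the tail estimate below are my own choices.)</p>

```python
import math

def g(x):
    # integrand (x/(e^x - e^{-x}) - 1/2)/x^2; its limit as x -> 0 is -1/12
    if x == 0.0:
        return -1.0 / 12.0
    return (x / (2.0 * math.sinh(x)) - 0.5) / (x * x)

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

L = 40.0
# beyond L the integrand is essentially -1/(2x^2), whose tail integral is -1/(2L)
value = simpson(g, 0.0, L, 8000) - 1.0 / (2.0 * L)
```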
|
686,167 | <p>I'm working on an assignment where part of it is showing that $S_k=0$ for even $k$ and $S_k=1$ for odd $k$, where</p>
<blockquote>
<p>$$S_k:=\sum_{j=0}^{n}\cos(k\pi x_j)= \frac{1}{2}\sum_{j=0}^{n}(e^{ik\pi x_{j}}+e^{-ik\pi x_{j}}) $$</p>
<p>Here $x_j=j/(n+1)$.</p>
</blockquote>
<p>So, working through the algebra:</p>
<blockquote>
<p>$$\frac{1}{2}\sum_{j=0}^{n}(e^{ik\pi x_{j}} +e^{-ik\pi x_{j}}) =\dots
=\frac{1}{2}\cdot\frac{1-e^{ik\pi}}{1-e^{\frac{ik\pi}{n+1}}}+\frac{1}{2}\cdot\frac{1-e^{-ik\pi}}{1-e^{-\frac{ik\pi}{n+1}}}
$$</p>
</blockquote>
<p>Obviously $S_k=0$ for even $k$'s, since $e^{i\pi\cdot\text{even integer}}=1$. But when $k$ is odd we get $$\frac{1}{1-e^{\frac{ik\pi}{n+1}}}+\frac{1}{1-e^{-\frac{ik\pi}{n+1}}}$$
which isn't obviously one to me, at least. Wolfram alpha confirms it equals 1.</p>
<p>My question: How does one see that it equals 1?</p>
| Gabriel Romon | 66,096 | <p>Reminders first:</p>
<p>$$\forall w,z \in \mathbb C, \overline{\left(\frac{w}{z}\right)}=\frac{\overline w}{\overline z}$$
$$\forall w,z \in \mathbb C, \overline{w+z}=\overline w +\overline z$$
$$\forall a\in \mathbb R, \overline{e^{ia}}=e^{-ia},$$
$$\forall z \in \mathbb C, z+ \overline z=2 \Re( z)$$
Apply these rules to $$\frac{1}{1-e^{\frac{ik\pi}{n+1}}}$$</p>
<p>$$\overline {\left(\frac{1}{1-e^{\frac{ik\pi}{n+1}}}\right)}=\frac{\overline1}{\overline1- \overline {e^{\frac{ik\pi}{n+1}}}}=\frac{1}{1-e^{-\frac{ik\pi}{n+1}}}$$</p>
<p>Then let $\alpha:=\Re\left(\frac{1}{1-e^{\frac{ik\pi}{n+1}}} \right)$
The following identity holds: $$\frac{1}{1-e^{\frac{ik\pi}{n+1}}}+\frac{1}{1-e^{-\frac{ik\pi}{n+1}}}=2 \alpha$$</p>
<p>Now let $a:=\frac{k\pi}{n+1}$, so that $e^{\frac{ik\pi}{n+1}}=e^{ia}=\cos(a)+i\sin(a)$. Then</p>
<p>$$\alpha=\Re(\frac{1}{1-\cos(a)-i\sin(a)})= \frac{1-\cos(a)}{(1-\cos(a))^2+\sin^2(a)} = 1/2 $$</p>
<p>Hence $$\frac{1}{1-e^{\frac{ik\pi}{n+1}}}+\frac{1}{1-e^{-\frac{ik\pi}{n+1}}}=1$$</p>
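<p>(Editor's addition: a quick numerical confirmation of this identity in Python; not part of the original answer.)</p>

```python
import cmath

def pair_sum(k, n):
    # 1/(1 - e^{ik*pi/(n+1)}) + 1/(1 - e^{-ik*pi/(n+1)});
    # for odd k the denominators never vanish, and the sum should equal 1
    w = cmath.exp(1j * cmath.pi * k / (n + 1))
    return 1 / (1 - w) + 1 / (1 - w.conjugate())
```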
|
2,148,979 | <p>I want to find $A$ such that $$A\sum \limits_{n=m}^{\infty}1/n^{3}=1$$</p>
<p>for any natural value of $m$.</p>
| Simply Beautiful Art | 272,831 | <p>In terms of the <a href="https://en.wikipedia.org/wiki/Hurwitz_zeta_function" rel="nofollow noreferrer">Hurwitz zeta function</a>:</p>
<p>$$A=\frac1{\zeta(3,m)}$$</p>
<p>There is no further closed form, unless you allow</p>
<p>$$A=\frac1{\zeta(3)-\sum_{n=1}^{m-1}\frac1{n^3}}$$</p>
<p>Approximations may be done with the Euler-Maclaurin formula:</p>
<p>$$\sum_{n=1}^m\frac1{n^3}\approx\zeta(3)-\frac1{2m^2}+\frac1{2m^3}-\dots$$</p>
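<p>(Editor's addition, not in the original answer: a direct numerical check that $A\sum_{n=m}^{\infty}1/n^3=1$ when $A$ is the reciprocal of $\zeta(3)-\sum_{n=1}^{m-1}1/n^3$, plus a check of the Euler-Maclaurin asymptotic; the truncation point is an arbitrary choice.)</p>

```python
ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)

def head(m):
    # sum_{n=1}^{m-1} 1/n^3
    return sum(1.0 / n**3 for n in range(1, m))

def tail(m, N=200000):
    # sum_{n=m}^{infinity} 1/n^3, truncated at N with a midpoint-rule
    # estimate of the remainder (error O(1/N^4))
    return sum(1.0 / n**3 for n in range(m, N + 1)) + 1.0 / (2 * (N + 0.5)**2)

results = [(1.0 / (ZETA3 - head(m))) * tail(m) for m in (1, 2, 5, 10)]

# Euler-Maclaurin: sum_{n=1}^{m} 1/n^3 ~ zeta(3) - 1/(2m^2) + 1/(2m^3) - ...
m = 100
asym = ZETA3 - 1 / (2 * m**2) + 1 / (2 * m**3)
```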
|
2,752,421 | <p>a) Consider the equation $x_1 + x_2 + x_3 + x_4 = 35$. How many different solutions does this equation have
if all the variables must be positive integers? Enter the exact numeric answer.</p>
<p>b) Suppose that a license plate consists of three letters followed by three digits. How many different license plates start with the letter A if letters and digits cannot be repeated? Enter the exact numeric answer.</p>
<p><strong>My work</strong></p>
<p>a) $C(38,35)$</p>
<p>b)$1*P(26,2)*P(10,3)$</p>
<p>A_ _ _ _ _ . The last three number should not be repeated, so $P(10,3)$ and then the letters can be chosen randomly so $C(26,2)$ </p>
| Brian Tung | 224,454 | <p>(a) Almost. The restriction is that all variables must be <em>positive</em> integers, so we can rewrite this as</p>
<p>$$
y_1+y_2+y_3+y_4 = 31
$$</p>
<p>with $y_i = x_i-1$ being a non-negative integer for all $i$. Then stars-and-bars gives us $C(34, 31) = C(34, 3)$.</p>
<p>(b) Almost. As Tony Jacobs points out in the comments, the two letters should be represented by $P(25, 2)$, since you have to avoid $A$ (leaving only $25$ letters left to choose from), and order matters, as it does with the digits.</p>
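<p>(Editor's addition: both counts can be confirmed by brute force in Python; not part of the original answer.)</p>

```python
from math import comb

# (a) count positive-integer solutions of x1 + x2 + x3 + x4 = 35 directly
solutions = sum(
    1
    for x1 in range(1, 35)
    for x2 in range(1, 35 - x1)
    for x3 in range(1, 35 - x1 - x2)  # then x4 = 35 - x1 - x2 - x3 >= 1
)

# (b) plates: first letter fixed as A, then 25*24 ordered letter choices
# and 10*9*8 ordered digit choices
plates = 25 * 24 * 10 * 9 * 8
```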
|
3,592,368 | <p>I managed to solve this question using trigonometry. But I wondered if there'd be any way of doing it using only synthetic geometry. Here it is.</p>
<blockquote>
<p>Let <span class="math-container">$ABC$</span> be a right isosceles triangle of hypotenuse <span class="math-container">$AB$</span>. Let also <span class="math-container">$\Gamma$</span> be the semicircle whose diameter is the line segment <span class="math-container">$AC$</span> such that <span class="math-container">$\Gamma\cap\overline{AB} = \{A\}$</span>. Consider <span class="math-container">$P\in\Gamma$</span> with <span class="math-container">$PC = k$</span>, with <span class="math-container">$k \leq AC$</span>. Find the area of triangle <span class="math-container">$PBC$</span>.</p>
</blockquote>
<p>Here is my interpretation of the picture:
<a href="https://i.stack.imgur.com/Vz0ck.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vz0ck.png" alt="enter image description here" /></a></p>
<p>I managed to get the solution via trigonometry as below.</p>
<p><a href="https://i.stack.imgur.com/dQ8hU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dQ8hU.png" alt="enter image description here" /></a></p>
<p>Then, the area <span class="math-container">$S$</span> requested is:</p>
<p><span class="math-container">$$\begin{align} S &= \displaystyle\frac{PC\cdot BC\cdot \sin(90^\circ + \beta)}{2}\\
&= \displaystyle\frac{k\cdot d\cdot \cos\beta}{2}\\
&= \displaystyle\frac{k\cdot d\cdot \frac{k}{d}}{2}\\
&= \displaystyle\frac{k^2}{2}.\\
\end{align}$$</span></p>
| Blue | 409 | <p><a href="https://i.stack.imgur.com/Tzett.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tzett.png" alt="enter image description here"></a></p>
<p><span class="math-container">$$|\triangle PBC| = \frac12|CP||BQ| = \frac12k^2$$</span></p>
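<p>(Editor's addition, not part of the original answer: a coordinate sanity check of <span class="math-container">$|\triangle PBC| = \frac12 k^2$</span>; the placement of the triangle and the parametrization of <span class="math-container">$\Gamma$</span> are my own choices.)</p>

```python
import math

def area_matches(leg, t):
    # C = (0,0), A = (leg,0), B = (0,leg): right isosceles, hypotenuse AB.
    # Gamma is the semicircle on diameter AC below the x-axis (the side
    # meeting segment AB only at A), parametrized by pi < t < 2*pi.
    px = leg / 2 + (leg / 2) * math.cos(t)
    py = (leg / 2) * math.sin(t)
    k = math.hypot(px, py)            # k = PC, since C is at the origin
    area = abs(leg * px) / 2          # |CB x CP| / 2 with CB = (0, leg)
    return math.isclose(area, k * k / 2)
```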
|
1,369,428 | <p>I am trying to evaluate the following integral:
$$\int_{-1}^{1}e^{i(x+a\cos x)} \, \mathrm{d}(\cos x)$$ or $$\int_{0}^{\pi}e^{i(x+a\cos x)} \sin x \, \mathrm{d}x$$</p>
<p>I tried this in <em>Wolfram Alpha</em>, but it says that answer cannot be obtained.</p>
| Josh Broadhurst | 249,961 | <p>If you leave out the (what I assume is a) constant $a$ in your original problem and type it into WolframAlpha, it tells you that the solution involves the Bessel function of the first kind ($J_1$) and gives the following answer
$$ (\pi J_1(1)-2 \cos(1)+2 \sin(1))i \approx 1.9848i$$ </p>
<p>If you set $a=2$ as the input to WolframAlpha this gives
$$ \frac{1}{2} (\pi J_1(2)-2 \cos(2)+\sin(2))i \approx 1.7767i$$</p>
<p>Again with $a=3$ as the input gives
$$ \frac{1}{9} (3 \pi J_1(3)-6 \cos(3)+2 \sin(3))i \approx 1.0464i$$</p>
<p>Perhaps you can use a few more values for $a$ and generalize a pattern.
The input I'm using looks like:</p>
<pre><code>integrate from 0 to pi: e^(i (x + 3 cosx)) sinx dx
</code></pre>
<p>Hopefully this is sufficient for your needs.</p>
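<p>(Editor's addition, not part of the original answer: the $a=2$ value can be cross-checked in pure Python; the series truncation and the quadrature step are my own choices.)</p>

```python
import math

def j1(x, terms=30):
    # Bessel J_1 via its power series: sum_m (-1)^m/(m!(m+1)!) (x/2)^(2m+1)
    return sum((-1)**m * (x / 2)**(2 * m + 1)
               / (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))

def integral(a, n=2000):
    # composite Simpson's rule for int_0^pi e^{i(x + a cos x)} sin x dx
    h = math.pi / n
    total = 0j
    for i in range(n + 1):
        x = i * h
        phase = x + a * math.cos(x)
        fx = complex(math.cos(phase), math.sin(phase)) * math.sin(x)
        total += (1 if i in (0, n) else 4 if i % 2 else 2) * fx
    return total * h / 3

val = integral(2)
closed = 0.5 * (math.pi * j1(2) - 2 * math.cos(2) + math.sin(2))
```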
|
2,073,794 | <p>Let's consider the following variant of Collatz (3n+1) : </p>
<p>if $n$ is odd then $n \to 3n+1$</p>
<p>if $n$ is even then you can choose : $n \to n/2$ or $n \to 3n+1$</p>
<p>With this definition, is it possible to construct a cycle other than the trivial one, i.e., $1\to 4 \to 2 \to 1$?</p>
<p>Best regards</p>
| florence | 343,842 | <p>$$7\to 22$$
$$22\to11$$
$$11\to34$$
$$34\to17$$
$$17\to52$$
$$52\to26\to13$$
$$13\to40$$
$$40\to20\to10\to5$$
$$5\to16$$
$$16\to8\to4\to2$$
$$2\to 3\cdot2+1=7$$</p>
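<p>(Editor's addition: a short Python check, not in the original answer, that each step above is a legal move and that the cycle avoids $1$.)</p>

```python
def moves(n):
    # odd n must go to 3n + 1; even n may go to n // 2 or to 3n + 1
    return [3 * n + 1] if n % 2 else [n // 2, 3 * n + 1]

cycle = [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2]
legal = all(nxt in moves(cur)
            for cur, nxt in zip(cycle, cycle[1:] + cycle[:1]))
```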
|
2,073,794 | <p>Let's consider the following variant of Collatz (3n+1) : </p>
<p>if $n$ is odd then $n \to 3n+1$</p>
<p>if $n$ is even then you can choose : $n \to n/2$ or $n \to 3n+1$</p>
<p>With this definition, is it possible to construct a cycle other than the trivial one, i.e., $1\to 4 \to 2 \to 1$?</p>
<p>Best regards</p>
| meriton | 71,993 | <p>There are quite a few such cycles. Here's the list of all cycles of length $\le 30$ starting from any number $< 10^5$ that never exceed $2^{63}$:</p>
<pre><code>[2, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4]
[2, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 13, 40, 20, 10, 5, 16, 8, 4]
[2, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32, 16, 8, 4]
[2, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4]
[2, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 61, 184, 92, 277, 832, 416, 208, 104, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4]
[2, 7, 22, 67, 202, 101, 304, 152, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4]
[4, 13, 40, 20, 10, 5, 16, 8]
[4, 13, 40, 20, 10, 5, 16, 8, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8]
[4, 13, 40, 20, 10, 5, 16, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8]
[4, 13, 40, 20, 10, 5, 16, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32, 16, 8]
[4, 13, 40, 20, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8]
[4, 13, 40, 20, 61, 184, 92, 277, 832, 416, 208, 104, 52, 26, 13, 40, 20, 10, 5, 16, 8]
[5, 16, 8, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10]
[5, 16, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10]
[5, 16, 49, 148, 445, 1336, 668, 334, 167, 502, 1507, 4522, 2261, 6784, 3392, 1696, 848, 424, 212, 106, 53, 160, 80, 40, 20, 10]
[5, 16, 49, 148, 445, 1336, 668, 2005, 6016, 3008, 1504, 752, 376, 188, 565, 1696, 848, 424, 212, 106, 53, 160, 80, 40, 20, 10]
[7, 22, 11, 34, 103, 310, 155, 466, 233, 700, 2101, 6304, 3152, 1576, 788, 394, 197, 592, 296, 148, 74, 37, 112, 56, 28, 14]
[7, 22, 11, 34, 103, 310, 931, 2794, 1397, 4192, 2096, 1048, 524, 262, 131, 394, 197, 592, 296, 148, 74, 37, 112, 56, 28, 14]
[7, 22, 67, 202, 101, 304, 152, 76, 38, 19, 58, 29, 88, 265, 796, 2389, 7168, 3584, 1792, 896, 448, 224, 112, 56, 28, 14]
[7, 22, 67, 202, 101, 304, 152, 76, 38, 115, 346, 173, 520, 260, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14]
[7, 22, 67, 202, 101, 304, 152, 76, 229, 688, 344, 172, 86, 43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14]
[8, 25, 76, 38, 19, 58, 29, 88, 44, 133, 400, 200, 100, 50, 151, 454, 227, 682, 341, 1024, 512, 256, 128, 64, 32, 16]
[8, 25, 76, 38, 19, 58, 29, 88, 44, 133, 400, 200, 100, 301, 904, 452, 226, 113, 340, 170, 85, 256, 128, 64, 32, 16]
[8, 25, 76, 38, 19, 58, 29, 88, 265, 796, 2389, 7168, 3584, 1792, 896, 448, 224, 112, 56, 28, 85, 256, 128, 64, 32, 16]
[8, 25, 76, 38, 115, 346, 173, 520, 260, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32, 16]
[8, 25, 76, 229, 688, 344, 172, 86, 43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32, 16]
[10, 31, 94, 47, 142, 71, 214, 643, 1930, 965, 2896, 1448, 724, 362, 181, 544, 272, 136, 68, 34, 17, 52, 26, 13, 40, 20]
[11, 34, 17, 52, 26, 13, 40, 20, 61, 184, 92, 277, 832, 416, 208, 625, 1876, 938, 469, 1408, 704, 352, 176, 88, 44, 22]
[11, 34, 17, 52, 26, 13, 40, 121, 364, 182, 91, 274, 137, 412, 1237, 3712, 1856, 928, 464, 232, 116, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 26, 13, 40, 121, 364, 182, 547, 1642, 821, 2464, 1232, 616, 308, 154, 77, 232, 116, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 26, 13, 40, 121, 364, 1093, 3280, 1640, 820, 410, 205, 616, 308, 154, 77, 232, 116, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 26, 79, 238, 119, 358, 179, 538, 269, 808, 404, 202, 101, 304, 152, 76, 38, 19, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 157, 472, 236, 118, 59, 178, 89, 268, 134, 67, 202, 101, 304, 152, 76, 38, 19, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 157, 472, 236, 118, 355, 1066, 533, 1600, 800, 400, 200, 100, 50, 25, 76, 38, 19, 58, 29, 88, 44, 22]
[11, 34, 17, 52, 157, 472, 236, 709, 2128, 1064, 532, 266, 133, 400, 200, 100, 50, 25, 76, 38, 19, 58, 29, 88, 44, 22]
[13, 40, 20, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 61, 184, 92, 277, 832, 416, 208, 104, 52, 26]
[13, 40, 20, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 241, 724, 362, 181, 544, 272, 136, 68, 34, 17, 52, 26]
[13, 40, 20, 61, 184, 92, 277, 832, 416, 208, 104, 52, 26]
[14, 43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28]
[14, 43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32, 16, 49, 148, 74, 37, 112, 56, 28]
[16, 49, 148, 74, 37, 112, 56, 28, 85, 256, 128, 64, 32]
[19, 58, 29, 88, 44, 22, 67, 202, 101, 304, 152, 76, 38]
[19, 58, 29, 88, 44, 133, 400, 200, 100, 50, 25, 76, 38]
[20, 61, 184, 92, 46, 23, 70, 35, 106, 53, 160, 80, 40]
</code></pre>
<p>It was generated by the following Java program:</p>
<pre><code>import java.util.Arrays;

public class ModifiedCollatz {
    int maxLength = 30;
    long[] currentPath = new long[maxLength];

    void find() {
        for (int i = 2; i < 100_000; i++) {
            find(i, i, 0);
        }
    }

    void find(long a, long goal, int depth) {
        if (depth >= maxLength || a < goal) {
            return;
        }
        if (depth > 0 && a == goal) {
            System.out.println(Arrays.toString(Arrays.copyOf(currentPath, depth)));
            return;
        }
        currentPath[depth] = a;
        if (a % 2 == 0) {
            find(a / 2, goal, depth + 1);
        }
        find(3 * a + 1, goal, depth + 1);
    }

    public static void main(String[] args) {
        new ModifiedCollatz().find();
    }
}
</code></pre>
|
78,461 | <p>I'm trying to solve the following problem:</p>
<p>For a vector $v$ of length $c$, $\min \frac{\sum v[i]^4}{(\sum v[i]^2)^2}$ subject to $\sum v[i] = N$.</p>
<p>I can solve this numerically for a given $c$ using the following command: </p>
<pre><code>Minimize[{Total[z^4]/Total[z^2]^2, Total[z] == N}, z]
</code></pre>
<p>Is it possible to solve this problem symbolically leaving $c$ as a free variable? </p>
| 2012rcampion | 21,750 | <p>You're like 99% of the way there. You just need to tell Mathematica that <code>z</code> is a vector by feeding it the components:</p>
<pre><code>With[{z = Array[v, 3]},
Minimize[{Total[z^4]/Total[z^2]^2, Total[z] == n}, z]]
</code></pre>
<p>For dimensions <code>1</code> and <code>2</code> the answers are as expected. For <code>3</code> (the code above) <code>Minimize</code> outputs a bunch of ugly <code>Root</code>s. So although there is a symbolic solution, there may not be an analytic one.</p>
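<p>(Editor's addition, not from the original answer: readers without Mathematica can sanity-check the fixed-<code>c</code> problem in Python. By the Cauchy-Schwarz inequality the objective is at least <code>1/c</code>, and that value is attained at equal components, so random feasible points should never beat it; the sampling scheme below is my own.)</p>

```python
import random

def objective(v):
    s2 = sum(x * x for x in v)
    return sum(x**4 for x in v) / (s2 * s2)

c, total = 3, 5.0
best = objective([total / c] * c)        # equal components give 1/c

random.seed(0)
feasible_vals = []
for _ in range(1000):
    v = [random.uniform(-10.0, 10.0) for _ in range(c)]
    shift = (total - sum(v)) / c         # project onto the constraint sum(v) == total
    feasible_vals.append(objective([x + shift for x in v]))
```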
|
1,671,933 | <p>In the question <a href="https://math.stackexchange.com/questions/1502484/the-application-of-nimbers-to-nim-strategy">on nimbers</a>, the original poster asks for the meaning of Nimber <a href="https://en.wikipedia.org/wiki/Nimber#Multiplication" rel="nofollow noreferrer">multiplication</a> in the context of impartial games.</p>
<hr>
<p><strong>Edit: As noted by Mark Fischler in the comments below, the following is wrong</strong></p>
<p>My gut instinct is $*a \times *b$ means that if $*a$ is a game equivalent to $a$ stones, and $*b$ is a game equivalent to $b$ stones, then if you replace every stone in the $*a$ game with a copy of the $*b$ game, you get a game with the Nimber $*a \times *b$, but I haven't been able to prove it.</p>
| E. Z. L. | 811,580 | <p><em>Winning Ways</em> Volume 3, Chapter 14, considers coin-turning games. One such game is <em>Turning Corners</em>, where a move is to turn over the four corners of a rectangle with horizontal and vertical sides, but the North-East most coin needs to be turned from heads to tails. A sample move shows that it defines nimber multiplication:</p>
<pre>
a X H
.
.
.
a' X X
.
.
.
0 . . . b' . . . b
</pre>
<p>A move will turn the <code>H</code> to tails and <code>X</code>s over, so from the position <span class="math-container">$(a,b)$</span>, a move moves to the sum <span class="math-container">$(a',b)+(a,b')+(a',b')$</span>, so the value of the position <span class="math-container">$(a,b)$</span> is the nimber product <span class="math-container">$ab$</span> inductively.</p>
|
3,916,261 | <p>In the book Optimal Transport for Applied Mathematicians, Theorem 1.37, the authors states an inequality, but I'm having trouble seeing how one shows that the inequality is indeed true.</p>
<p><strong>Definition</strong>: Given a function <span class="math-container">$c:X\times Y \to \mathbb R$</span>, we say that <span class="math-container">$\Gamma \subset X \times Y$</span> is <span class="math-container">$c$</span>-cyclically monotone (<span class="math-container">$c$</span>-CM) if for every <span class="math-container">$k \in \mathbb N$</span>, every permutation <span class="math-container">$\sigma$</span> and every finite family of points <span class="math-container">$(x_1,y_1),...,(x_k,y_k) \in \Gamma$</span> we have
<span class="math-container">$$
\sum^k_{i=1} c(x_i,y_i) \leq
\sum^k_{i=1} c(x_i,y_{\sigma(i)})
$$</span></p>
<p>Now, assume that <span class="math-container">$\Gamma$</span> is <span class="math-container">$c$</span>-CM. Then, fix <span class="math-container">$(x_0, y_0) \in \Gamma$</span>, and define:
<span class="math-container">$$
-\psi(y) = \inf\{
-c(x_n,y) +c(x_n,y_{n-1})-c(x_{n-1},y_{n-1})+...+ c(x_1,y_0) - c(x_0,y_0) : n \in \mathbb N,
(x_i,y_i) \in \Gamma \quad \forall i=1,...,n
\}
$$</span></p>
<p>How does one then show that for <span class="math-container">$(x,y) \in \Gamma$</span> and <span class="math-container">$\bar y \in \text{Proj}_y \circ\Gamma$</span>, we can write:
<span class="math-container">$$
-\psi(y) \leq - c(x,y) + c(x,\bar y) - \psi(\bar y)
$$</span></p>
<p>In the proof, the author claims that this inequality follows from the very definition of the function <span class="math-container">$\psi$</span>. What I found odd was that since <span class="math-container">$y$</span> and <span class="math-container">$\bar y$</span> are arbitrary, then I could just swap them, obtaining the opposite inequality, and hence, I actually have an equality. Which would be kind of odd. Anyways, how can you prove this inequality? And is it actually an equality?</p>
| Sidharth Ghoshal | 58,294 | <p>It is NOT wrong to say <span class="math-container">$\sin(x): \mathbb{R} \rightarrow \mathbb{R}$</span> since [-1,1] is in <span class="math-container">$\mathbb{R}$</span> but perhaps the you could say "it is not honest" after we prove that <span class="math-container">$\sin(x)$</span> only takes on values in <span class="math-container">$[-1,1]$</span></p>
<p>Math generally only cares about correctness, and once you're correct you generally look for secondary goals of elegance and convenience so hence your professor doesn't bother restricting that set every time he/she/ze writes it. (Another example being <span class="math-container">$y = x^2: \mathbb{R} \rightarrow \mathbb{R}^+$</span> but usually they drop the <span class="math-container">$+$</span>.)</p>
<p>Now while that's frustrating, we could see it as motivation to create some new math. Given a TRUE mathematical sentence $S(s_1 ... s_n)$ which accepts sets $s_1 ... s_n$ as an input: we can say it is $\text{Minimal}$ if there doesn't exist an indexed set $s_i$ and another $u$ such that $u \subset s_i$ but $S(s_1 ... u ... s_n)$ is also true.</p>
<p>To make this concrete, we let <span class="math-container">$s_1 = \mathbb{R}, s_2 = \mathbb{R}$</span>. Then <span class="math-container">$S(s_1, s_2)$</span> is the statement <span class="math-container">$\sin(x): s_1 \rightarrow s_2$</span></p>
<p>The triple <span class="math-container">$\left( s_1 = \mathbb{R}, s_2 = \mathbb{R}, S(s_1, s_2) \right) $</span> is NOT minimal since as you noted <span class="math-container">$[-1,1] \subset \mathbb{R}$</span> that still lets <span class="math-container">$S$</span> be true so we can reduce the triple to: <span class="math-container">$\left( s_1 = \mathbb{R}, s_2 = [-1,1] , S(s_1, s_2) \right) $</span>.</p>
<p>Now this is minimal.</p>
<p>The general problem then of "minimizing" sentences can probably lead to some interesting and complex math. For example, what would it take to build an automated minimization program? Even for elementary sets this starts to be an interesting exercise in theorem proving and a hard software engineering challenge.</p>
|
966,267 | <p>Original PDE
$$T_t=\alpha T_{xx}$$
I need to solve this equation numerically and analytically and compared them. I've already done the numerical part. But I need to solve it analytically now. </p>
<p>Given the initial condition
$$T(x,0)=\sin\left(\frac{\pi x}{L}\right)$$
where $$L=1$$</p>
<p>I would like to find the exact solution of the heat equation.</p>
<p>I know that $$T(x,t)=\sum_{n=1}^{\infty}B_n \sin(n\pi x)e^{-n^2\pi^2\alpha t},\quad\text{where}\quad B_n=2\int_0^1 T(x,0)\sin(n\pi x)\,dx$$</p>
<p>After evaluating this integral, I get the solution as</p>
<p>$$T(x,t)=\sum_{n=1}^{\infty}\frac{2\sin(n\pi)}{(1-n^2)\pi} \sin(n\pi x)e^{-n^2\pi^2\alpha t}$$</p>
<p>I think I've done something wrong here because the $$n=1$$ term is not defined. Can someone point out my mistake if there is one? Thank you!</p>
<p>Corrected</p>
<p>Bn is nonzero only at n=1. Evaluating the case for n=1, Bn=1</p>
<p>So the solution is </p>
<p>$$T(x,t)=\sin(\pi x)e^{-\pi^2\alpha t}$$</p>
<p>Thanks to Leucippus and AlexZorn for the correction.</p>
| Alex Zorn | 73,104 | <p>Two comments.</p>
<p>First, your integral computation looks something like this:</p>
<p>$$2\int_0^1\sin(\pi x)\sin(n\pi x)\, dx = \int_0^1 \cos((n - 1)\pi x) - \cos((n+1)\pi x)\, dx$$</p>
<p>Now, it's tempting to write:</p>
<p>$$\int \cos((n-1)\pi x)\, dx = \frac{\sin((n-1)\pi x)}{\pi(n-1)} + C$$</p>
<p>But of course this is not true when $n = 1$. Hence your mistake.</p>
<p>The second comment is that you can actually solve for the coefficients "by inspection", without having to compute any integrals. Specifically, we have:</p>
<p>$$T(x,t) = \sum_{n = 1}^{\infty} B_n \sin(n \pi x)e^{-n^2\pi^2 \alpha t}$$</p>
<p>So:</p>
<p>$$T(x,0) = \sum_{n = 1}^{\infty} B_n \sin(n \pi x) = B_1\sin(\pi x) + B_2 \sin(2\pi x) + \cdots$$</p>
<p>And also $T(x,0) = \sin(\pi x)$. It should be clear from this that $B_1 = 1$ and the rest of the $B_n$ are zero.</p>
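<p>(Editor's addition, not part of the original answer: both observations are easy to confirm numerically; the quadrature, the sample point, and the test value $\alpha=0.5$ are my own arbitrary choices.)</p>

```python
import math

def bn(n, N=20000):
    # trapezoid rule for B_n = 2 * int_0^1 sin(pi x) sin(n pi x) dx
    h = 1.0 / N
    f = lambda x: 2 * math.sin(math.pi * x) * math.sin(n * math.pi * x)
    return h * ((f(0) + f(1)) / 2 + sum(f(i * h) for i in range(1, N)))

# the solution with B_1 = 1 and all other B_n = 0:
alpha = 0.5
T = lambda x, t: math.sin(math.pi * x) * math.exp(-math.pi**2 * alpha * t)

# finite-difference residual of T_t = alpha * T_xx at a sample point
h, x0, t0 = 1e-5, 0.3, 0.2
Tt = (T(x0, t0 + h) - T(x0, t0 - h)) / (2 * h)
Txx = (T(x0 + h, t0) - 2 * T(x0, t0) + T(x0 - h, t0)) / h**2
```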
|
2,668,779 | <p>The exercise consists of showing that the function $f(x,y)=x^4 + y^4$ has a global minimum and maximum under the constraint $x^4 + y^4 - 2xy = 2$.</p>
<p>In the solution to the exercise, it is argued that the constraint set is compact if we can show that $\lim_{x^2 + y^2 \rightarrow \infty} x^4 + y^4 - 3xy - 2 \rightarrow \infty$. Why is this the case?</p>
<p>My intuition tells me that this is because the $x^4$ and $y^4$ terms dominates the other two terms when $x$ and $y$ gets large. This would then imply that $x$ and $y$ cannot get arbitrarily big without violating the constraint. Does this imply that if the limit of the constraint was $0$, that the domain would not be compact? Is my reasoning valid?</p>
<p>Many thanks,</p>
| StackTD | 159,845 | <blockquote>
<p>How else can a set of vectors that matches this condition not be a vector space?</p>
</blockquote>
<p>Recall that there is another condition: besides being non-empty (or containing the zero vector) and being closed under scalar multiplication, it also needs to be closed under <em>addition</em>.</p>
<p><strong>Hint</strong>: think of the union of the coordinate axes; i.e. the $x$- and $y$-axis.</p>
<hr>
<p>Consider the set $S = X \cup Y$ with:
$$\color{blue}{X = \left\{ (a,0) \in \mathbb{R}^2 \;\vert\; a \in \mathbb{R} \right\}}\quad,\quad \color{red}{Y = \left\{ (0,b) \in \mathbb{R}^2 \;\vert\; b \in \mathbb{R} \right\}}$$
Now verify that taking any $\vec s \in S$, you have for all $\lambda \in \mathbb{R}$ that $\lambda\vec s \in S$.</p>
<p>However, take $\color{blue}{\vec x} \in X$ and $\color{red}{\vec y} \in Y$ with $\vec x \ne \vec 0$ and $\vec y \ne \vec 0$; then consider $\color{purple}{\vec x + \vec y}$.</p>
<p><a href="https://i.stack.imgur.com/7Mohn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Mohn.png" alt="enter image description here"></a></p>
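<p>(Editor's addition: the same counterexample, machine-checked in Python; not part of the original answer.)</p>

```python
def in_S(v):
    # S = X union Y, the union of the two coordinate axes of R^2
    a, b = v
    return a == 0 or b == 0

# S is closed under scalar multiplication...
scalar_closed = all(in_S((lam * a, lam * b))
                    for (a, b) in [(3, 0), (0, -2), (0, 0)]
                    for lam in (-2, 0, 0.5, 7))

# ...but not under addition: (1,0) + (0,1) = (1,1) is not in S
x, y = (1, 0), (0, 1)
sum_in_S = in_S((x[0] + y[0], x[1] + y[1]))
```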
|
828,668 | <p>Let $G$ be a finite group. If there exists an $a\in G$ not equal to the identity such that for all $x\in G$,$\phi(x) = axa^{-1}=x^{p+1} $ is an automorphism of $G$ then $G$ is a $p$-group.</p>
<p>This is what I have so far.</p>
<p>The order of $a$ is $p$ since $\phi(a) = a= a^pa\rightarrow a^p=e$ therefore the $order(\phi)|p$</p>
<p>If $order(\phi) = 1$ then for all $ x\in G$ $\phi(x) = x=x^{p+1}\rightarrow x^p=e$ . Thus every element has order $p$ therefore $G$ is a $p-group$</p>
<p>For $order(\phi) = p$ I get stuck:</p>
<p>$\phi^p(x) = x = x^{(p+1)^p}$ using the expansion formula and simplifying I reach that the order of each element in $G$ divides $\displaystyle\sum_{k=1}^p \binom{p}{k}p^k=(p+1)^p-1$. But other primes divide this so I can't easily conclude $G$ is a $p$-group. </p>
<p>I proceeded by contradiction:</p>
<p>Suppose for contradiction that $G$ is not a $p$-group. Let $|G| = kp^n$ where $k$ is not a multiple of $p$ ($p\nmid k$). If $k>1$ then take a $q$ in $k$'s prime factorization. So we have $q|k$ and by Cauchy's Theorem $\exists y \in G$ with $y^q = e$ i.e $order(y) = q$.</p>
<p>If $q<p$, applying $\phi$ to $y$ I get that $\langle y\rangle$ has more than $q$ elements since $\phi(y)=y^{p+1}\in \langle y \rangle$ and there are $p$ distinct elements achieved by $\phi$</p>
<p>If $p>q$ then....... I cannot reach a contradiction :'(</p>
<p>Question #2: Show each element of $G$ has order $p$</p>
<p>When $order(\phi) = 1$ i get what I want but I'm also stuck when $order(\phi) =p$</p>
<p>I believe $a\in Z(G)$, is there a way I can show this?</p>
<p>Thank you....This is a pretty hard problem. :'(</p>
| Jack Schmidt | 583 | <p>Here is a different proof from Dan Shved's:</p>
<p><strong>Lemma:</strong> In any such group $G$, $a$ normalizes every subgroup, and $a$ lies in every Sylow $p$-subgroup.</p>
<blockquote class="spoiler">
<p> <em>Proof:</em> Let $H$ be a subgroup. Then for $x \in H$, $axa^{-1}=x^{p+1} \in H$, so $aHa^{-1} \leq H$; a similar statement holds for $a^{-1}$ and so $H$ is normal. Let $P$ be a Sylow $p$-subgroup. Since $a$ has order $p$ and $a$ normalizes $P$, $\langle a, P \rangle$ is a $p$-group, and so by maximality of $P$, $\langle a, P \rangle = P$ and $a \in P$. $\square$</p>
</blockquote>
<p><em>Main proof:</em> Reduce to a minimal counterexample:</p>
<blockquote class="spoiler">
<p> Let $G$ be a group of smallest order such that $G$ has an element $a$ of order $p$ such that for every $x \in G$, $ax=x^{p+1}a$ and yet $G$ is not a $p$-group. If $H$ is a subgroup of $G$ containing $a$, then $H$ satisfies the same hypothesis. If $H$ is also a proper subgroup, then $H$ is a $p$-group by definition of $G$.</p>
</blockquote>
<p>Now consider what this means for an element of prime order $q\neq p$.</p>
<blockquote class="spoiler">
<p> Let $g$ be an element of order $q$ for $q \neq p$. By hypothesis, $a$ normalizes $\langle g \rangle$ so $H=\langle a,g\rangle$ has order $pq$ and satisfies the hypothesis of the problem. If $H <G$ then we have a contradiction, so we must $H=G$ and so $G$ is a non-abelian group of order $pq$.</p>
</blockquote>
<p>And show that this structure doesn't satisfy the hypothesis:</p>
<blockquote class="spoiler">
 <p> Let $P=\langle a \rangle$ be a Sylow $p$-subgroup of $G$ and let $Q=\langle g \rangle$ be a Sylow $q$-subgroup. Since $Q$ is normalized by both $P$ and $Q$, $Q$ is normal in $G$. Since $a$ is contained in every Sylow $p$-subgroup by the lemma, but the Sylow $p$-subgroups have order $p$, we have $P$ is also normal. Hence $G = P \times Q$ is abelian. This is a contradiction, since then $aga^{-1} = aa^{-1}g = g \neq g^{p+1}$ as $g$ has order $q$ not $p$.</p>
</blockquote>
<p>$\square$</p>
|
1,726,540 | <p>How should $x^{\frac{1}{x}}$ be differentiated? I know the answer is
$$\frac{1-\ln(x)}{x^{2-\frac{1}{x}}}$$</p>
<p>but I do not understand how to get there.</p>
<h2>Attempt at solution.</h2>
<p>I believe the following is true:</p>
<p>$$
\begin{aligned}\frac{d}{dx}u^n&=nu^{n-1}\cdot u^\prime\\
\frac{d}{dx}a^x&=a^x\cdot\ln(a)
\end{aligned}$$
but I don't know what to do when both the base and the exponent are functions of $x$.</p>
| Plutoro | 108,709 | <p><strong>Hint:</strong>
Let $y=x^{1/x}$. Now take the natural log of both sides.
$$\ln(y)=\ln(x^{1/x})=\frac{1}{x}\ln(x).$$ Now you can differentiate both sides and solve to find $y'$. I'll even do the left side for you:
$$\frac{d}{dx}\ln(y)=\frac{y'}{y}=\frac{y'}{x^{1/x}}.$$</p>
|
3,998,484 | <p>For the purpose of clarity, I do not want anybody to prove this statement, I am just looking to get some help translating it into the contrapositive using <em>symbolic logic</em>. Right now I have the following translation:</p>
<hr />
<p>Original Statement:</p>
<p><span class="math-container">$$(\exists k \in \mathbb{Z} \quad n = 2k) \quad \land \quad (\exists m \in \mathbb{Z} \quad n+1 =m^2) \quad \Rightarrow \quad \text{n is divisible by 8}$$</span></p>
<p>Contrapositive Statement:</p>
<p><span class="math-container">$$ \text{n is not divisible by 8} \quad \Rightarrow \quad (\exists k \in \mathbb{Z} \quad n = 2k+1) \quad \lor \quad (\forall m \in \mathbb{Z} \quad n+1 \neq m^2) $$</span></p>
<hr />
<p>A couple of points of clarification:</p>
<ul>
<li>I wasn't sure how to write <span class="math-container">$n$</span> is divisible by 8. Perhaps it could have been stated as <span class="math-container">$\exists p \in \mathbb{Z} \quad n = 8p$</span>?</li>
<li>For the negation of <span class="math-container">$\exists k \in \mathbb{Z} \quad n = 2k$</span>, I know that typically you might want to reverse the <span class="math-container">$\exists$</span> to an <span class="math-container">$\forall$</span>, but I think what I wrote makes sense since the negation of even numbers is odd numbers.</li>
</ul>
<p>Eventually I will try to prove this statement by contrapositive, but I want to get some practice in with translating mathematical statements into a more formal structure.</p>
<p>My questions are the following:</p>
<ol>
<li>Is the original translation correct? How might it be improved?</li>
<li>How could I more formally translate '<span class="math-container">$n$</span> is divisible by 8'?</li>
<li>Is the contrapositive stated correctly?</li>
</ol>
<p>I also am not able to use <span class="math-container">$mod$</span> notation, <span class="math-container">$a|b$</span> notation, or anything that is unique to Abstract Algebra.</p>
| jlammy | 304,635 | <p>Your answer for the kernel is wrong -- <span class="math-container">$(a,9)$</span> and <span class="math-container">$(a,18)$</span> are also in the kernel, so <span class="math-container">$$\ker\alpha=\mathbb Z_9\times\{0,9,18\}\cong\mathbb Z_9\times\mathbb Z_3.$$</span>
So by the isomorphism theorem, <span class="math-container">$$\operatorname{im}\alpha\cong(\mathbb Z_9\times\mathbb Z_{27})/(\mathbb Z_9\times\mathbb Z_3),$$</span>
which (looking at orders) cannot possibly be <span class="math-container">$\mathbb Z_{27}$</span>.</p>
<p>Your last claim doesn't really make sense -- how to interpret <span class="math-container">$\frac{1}{3}$</span> in <span class="math-container">$\mathbb Z_{27}$</span>? You can't really, because <span class="math-container">$3$</span> and <span class="math-container">$27$</span> aren't coprime.</p>
|
2,636,679 | <p>For which values of x do the vectors $(x,0,x + 1),(1,x,1),(0,x,−x)$ form a basis for R3?</p>
<p>Is $1$ a correct answer to this? The vectors would be linearly independent; however, this seems too simple and I'm guessing there's an error somewhere.</p>
<p>If 1 works then it would seem any real number would work since the location of the 0 in the first and last vector will always mean the three vectors are linearly independent, right?</p>
| TryAgain | 520,267 | <p>How did you find this value?</p>
<p>You can also "glue" the vectors together to form a $3\times3$ matrix and compute its determinant. You will obtain a polynomial in $x$, and the zeros of this polynomial are the values for which the three vectors are linearly dependent.</p>
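<p>Carrying this hint out symbolically, assuming Python with sympy is available:</p>

```python
import sympy as sp

x = sp.symbols('x')
# columns are the vectors (x,0,x+1), (1,x,1), (0,x,-x)
A = sp.Matrix([[x,     1, 0],
               [0,     x, x],
               [x + 1, 1, -x]])

det = sp.expand(A.det())              # x - x**3
roots = sp.solve(sp.Eq(det, 0), x)    # the vectors are dependent exactly at these x

assert sp.expand(det - (x - x**3)) == 0
assert set(roots) == {-1, 0, 1}
```

<p>So the three vectors form a basis for every real $x$ except $-1$, $0$ and $1$.</p>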
|
2,636,679 | <p>For which values of x do the vectors $(x,0,x + 1),(1,x,1),(0,x,−x)$ form a basis for R3?</p>
<p>Is $1$ a correct answer to this? The vectors would be linearly independent; however, this seems too simple and I'm guessing there's an error somewhere.</p>
<p>If 1 works then it would seem any real number would work since the location of the 0 in the first and last vector will always mean the three vectors are linearly independent, right?</p>
| mvw | 86,776 | <p>A basis of $\mathbb{R}^3$ must consist of three linearly independent vectors, so the equation
$$
\lambda_1 (x,0,x+1)^t + \lambda_2 (1,x,1)^t + \lambda_3 (0,x,−x)^t = (0,0,0)^t
$$
is allowed to have only the solution $(\lambda_1, \lambda_2, \lambda_3) = (0,0,0)$. In matrix form
$$
A \lambda = 0 \iff \\
\begin{pmatrix}
x & 1 & 0 \\
0 & x & x \\
x+1 & 1 & -x
\end{pmatrix}
\begin{pmatrix}
\lambda_1 \\
\lambda_2 \\
\lambda_3
\end{pmatrix}
=
\begin{pmatrix}
0 \\
0 \\
0
\end{pmatrix}
$$
We can now go for the determinant of the matrix, which must not vanish for the solution to be unique, or we solve the homogeneous linear system and check what influence $x$ has on the number of solutions. </p>
<p><strong>Method A: Solving the linear system</strong></p>
<p>Still open are the possibilities of one solution or infinitely many solutions; since we know the null vector is a solution, the case of no solution does not apply here.</p>
<p>In augmented matrix form we have:
$$
A \lambda = 0 \iff [A|0] \iff \\
\left[
\begin{array}{ccc|c}
x & 1 & 0 & 0 \\
0 & x & x & 0 \\
x+1 & 1 & -x & 0
\end{array}
\right]
\to
\left[
\begin{array}{ccc|c}
x & 1 & 0 & 0 \\
0 & x & x & 0 \\
1 & 0 & -x & 0
\end{array}
\right]
$$
<strong>Case 1:</strong> For $x\ne 0$ we can divide by $x$ and have
$$
\left[
\begin{array}{ccc|c}
1 & 1/x & 0 & 0 \\
0 & 1 & 1 & 0 \\
1 & 0 & -x & 0
\end{array}
\right]
\to
\left[
\begin{array}{ccc|c}
1 & 1/x & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & -1/x & -x & 0
\end{array}
\right]
\to \\
\left[
\begin{array}{ccc|c}
1 & 0 & -1/x & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1/x-x & 0
\end{array}
\right]
$$
<strong>Case 1.1:</strong> For $1/x-x \ne 0$ we can divide by $1/x-x$ and get
$$
\left[
\begin{array}{ccc|c}
1 & 0 & -1/x & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1 & 0
\end{array}
\right]
\to
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{array}
\right]
$$
so for
$$
x\ne 0 \wedge 1/x-x \ne 0 \iff \\
x\ne 0 \wedge 1/x \ne x \iff \\
x\ne 0 \wedge 1 \ne x^2 \iff \\
x \not\in \{-1, 0, 1 \}
$$
the vectors are linearly independent. As three linearly independent vectors form a basis of $\mathbb{R}^3$, we are done with this case.</p>
<p><strong>Case 1.2:</strong>
For $x \ne 0$ and $x \in \{ -1, 1 \}$, which means $x=\pm 1$, we have
$$
\left[
\begin{array}{ccc|c}
1 & 0 & \mp 1 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right]
$$
which gives the infinitely many solutions $\lambda = (\pm\lambda_3, -\lambda_3, \lambda_3)$ with $\lambda_3$ free. So in this case the three vectors are linearly dependent, and we have no basis for $\mathbb{R}^3$.</p>
<p><strong>Case 2:</strong> For $x=0$ we have
$$
\left[
\begin{array}{ccc|c}
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0
\end{array}
\right]
\to
\left[
\begin{array}{ccc|c}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right]
$$
which again gives the infinitely many solutions $\lambda = (0, 0, \lambda_3)$ (for $x=0$ the third vector is the zero vector). So in this case we have no basis for $\mathbb{R}^3$.</p>
<p><strong>Answer:</strong></p>
<p>For $x \in \mathbb{R} \setminus \{-1, 0, 1\}$ the three vectors form a basis of $\mathbb{R}^3$.</p>
<p><strong>Method B: Checking the determinant</strong></p>
<p>$$
\det A = \\
\left\vert
\begin{array}{ccc}
x & 1 & 0 \\
0 & x & x \\
x+1 & 1 & -x
\end{array}
\right\vert
=
\left\vert
\begin{array}{ccc}
x & 1 & 0 \\
0 & x & x \\
1 & 0 & -x
\end{array}
\right\vert
=
x
\left\vert
\begin{array}{cc}
x & x \\
0 & -x
\end{array}
\right\vert
-
\left\vert
\begin{array}{ccc}
0 & x \\
1 & -x
\end{array}
\right\vert
= -x^3 + x
= -x(x^2 - 1)
$$
which vanishes for $x \in \{ -1, 0 , 1 \}$. For these $x$ values there is no unique solution to $A\lambda = 0$, which means the vectors are linearly dependent and form no basis of $\mathbb{R}^3$. Any other real value leads to a basis.</p>
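<p>Both methods can be cross-checked numerically with a rank computation, assuming Python with numpy is available (the sample values of $x$ are arbitrary):</p>

```python
import numpy as np

def A(x):
    # columns are the vectors (x,0,x+1), (1,x,1), (0,x,-x)
    return np.array([[x,       1.0, 0.0],
                     [0.0,     x,   x],
                     [x + 1.0, 1.0, -x]])

for x in (-1.0, 0.0, 1.0):
    assert np.linalg.matrix_rank(A(x)) < 3   # linearly dependent: no basis
for x in (2.0, 0.5, -3.0):
    assert np.linalg.matrix_rank(A(x)) == 3  # basis of R^3
```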
|
<p>What is<span class="math-container">$$\lim_{n→∞}n^3(\sqrt{n^2+\sqrt{n^4+1}}-n\sqrt2)?$$</span>So it is<span class="math-container">$$\lim_{n→∞}\frac{n^3\left((\sqrt{n^2+\sqrt{n^4+1}})^2-(n\sqrt{2})^2\right)}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}=\lim_{n→∞}\frac{n^3(n^2+\sqrt{n^4+1}-2n^2)}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}.$$</span>
I do not know what to do next, because my result is <span class="math-container">$∞$</span> but the answer from the book is <span class="math-container">$\dfrac{1}{4\sqrt{2}}$</span>.</p>
| Community | -1 | <p><strong>The expeditious way:</strong></p>
<p><span class="math-container">$$\sqrt{1+\sqrt{1+n^{-4}}}=\sqrt{1+1+\dfrac12n^{-4}+o(n^{-4})}=\sqrt2\sqrt{1+\dfrac14n^{-4}+o(n^{-4})}=\sqrt2\left(1+\dfrac18n^{-4}+o(n^{-4})\right)$$</span></p>
<p>so <span class="math-container">$n^3\left(\sqrt{n^2+\sqrt{n^4+1}}-n\sqrt{2}\right)=n^4\left(\sqrt{1+\sqrt{1+n^{-4}}}-\sqrt2\right)=n^4\sqrt2\left(\dfrac18n^{-4}+o(n^{-4})\right)$</span>, and the limit is</p>
<p><span class="math-container">$$\frac{\sqrt2}8.$$</span></p>
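<p>A numerical check of this value, assuming Python is available; high-precision decimal arithmetic is used because ordinary floating point loses the difference to cancellation at large $n$:</p>

```python
from decimal import Decimal, getcontext

getcontext().prec = 60                       # enough digits to survive the cancellation
n = Decimal(10) ** 6
expr = n**3 * ((n**2 + (n**4 + 1).sqrt()).sqrt() - n * Decimal(2).sqrt())
limit = Decimal(2).sqrt() / 8                # sqrt(2)/8 = 1/(4*sqrt(2))

assert abs(expr - limit) < Decimal("1e-20")
```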
|
3,699,640 | <p>How to show <span class="math-container">$x^2=20$</span> has no solution in <span class="math-container">$2$</span>-adic ring of integers <span class="math-container">$\mathbb{Z}_2$</span> ?</p>
<p>What is the general criterion for a solution of <span class="math-container">$x^2=a$</span> in <span class="math-container">$\mathbb{Z}_2$</span>?</p>
<p>I know that for an odd prime <span class="math-container">$p$</span>, <span class="math-container">$x^2=a$</span> has a solution in <span class="math-container">$\mathbb{Z}_p$</span>, in other words <span class="math-container">$a$</span> is a quadratic residue modulo <span class="math-container">$p$</span>, if <span class="math-container">$a_0$</span> is a quadratic residue modulo <span class="math-container">$p$</span>, where <span class="math-container">$a=a_0+a_1p+a_2p^2+\cdots \in \mathbb{Z}_p$</span>.</p>
<p>But what about for <span class="math-container">$p$</span> even i.e., <span class="math-container">$p=2$</span> ?</p>
<p>We have two following results as well.</p>
<blockquote>
<p>Result <span class="math-container">$1$</span>: For <span class="math-container">$p \neq 2$</span>, an <span class="math-container">$\epsilon \in \mathbb{Z}_p^{\times}$</span> is square in <span class="math-container">$\mathbb{Z}_p$</span> iff it is square in the residue field of <span class="math-container">$\mathbb{Z}_p$</span>.</p>
<p>Result <span class="math-container">$2$</span>: An unit <span class="math-container">$\epsilon \in \mathbb{Z}_2^{\times}$</span> is square iff <span class="math-container">$\epsilon \equiv 1 (\mod 8)$</span>.</p>
</blockquote>
<p>But as <span class="math-container">$20$</span> is not a unit in <span class="math-container">$\mathbb{Z}_2$</span>, the above Result <span class="math-container">$2$</span> is not applicable here.</p>
<p>So we have to use other way.</p>
<p>I am trying as follows:</p>
<p>For a general <span class="math-container">$2$</span>-adic integer <span class="math-container">$a$</span>, we have the following form <span class="math-container">$$ a=2^r(1+a_1 \cdot 2+a_2 \cdot 2^2+\cdots).$$</span>
Thus one necessary criterion for <span class="math-container">$a \in \mathbb{Q}_2$</span> to be square in <span class="math-container">$\mathbb{Q}_2$</span> is that <span class="math-container">$r$</span> must be <strong>even integer</strong>.</p>
<p>Are there any condition on <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span> ?</p>
<p>For, let <span class="math-container">$\sqrt{20}=a_0+a_1 2+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Then squaring and taking modulo <span class="math-container">$2$</span>, we get <span class="math-container">$$ a_0^2 \equiv 0 (\mod 2) \Rightarrow a_0 \equiv 0 (\mod 2).$$</span> Thus, <span class="math-container">$\sqrt {20}=a_12+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Again squaring and taking modulo (<span class="math-container">$2^2)$</span>, we get
<span class="math-container">$$a_1 \equiv 1 (\mod 2^2).$$</span> Thus we have <span class="math-container">$\sqrt{20}=2+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Again squaring and taking modulo <span class="math-container">$(2^3)$</span>, we get no solution for <span class="math-container">$a_i,\ i \geq 2$</span>.</p>
<p>What do I conclude from here?</p>
<p>How do I conclude that <span class="math-container">$x^2=20$</span> has no solution in <span class="math-container">$\mathbb{Z}_2$</span> ?</p>
<p>Help me</p>
| Community | -1 | <p>It seems like your attempt probably works.</p>
<p>A slightly more general version of Result 2 that could help you is that squares in the <span class="math-container">$2$</span>-adic integers are of the form <span class="math-container">$2^{2k}u$</span> where <span class="math-container">$u\in \mathbb Z_2^*$</span> is congruent to <span class="math-container">$1$</span> mod <span class="math-container">$8$</span>. Since <span class="math-container">$20$</span> is not of this form, it is not a square.</p>
<p>One can see this follows immediately from Result 2 and your observation that the valuation of a square is even: if <span class="math-container">$x$</span> is a square then it has valuation <span class="math-container">$2k$</span> for some integer <span class="math-container">$k$</span>, and so <span class="math-container">$x2^{-2k}=u$</span> is a square (a product of squares is a square) which is a unit, and so by Result 2, <span class="math-container">$u\equiv 1$</span> mod <span class="math-container">$8$</span>. Conversely anything of that form is clearly a square by applying Result 2 again (and observing that a product of squares is a square).</p>
|
3,699,640 | <p>How to show <span class="math-container">$x^2=20$</span> has no solution in <span class="math-container">$2$</span>-adic ring of integers <span class="math-container">$\mathbb{Z}_2$</span> ?</p>
<p>What is the general criterion for a solution of <span class="math-container">$x^2=a$</span> in <span class="math-container">$\mathbb{Z}_2$</span>?</p>
<p>I know that for an odd prime <span class="math-container">$p$</span>, <span class="math-container">$x^2=a$</span> has a solution in <span class="math-container">$\mathbb{Z}_p$</span>, in other words <span class="math-container">$a$</span> is a quadratic residue modulo <span class="math-container">$p$</span>, if <span class="math-container">$a_0$</span> is a quadratic residue modulo <span class="math-container">$p$</span>, where <span class="math-container">$a=a_0+a_1p+a_2p^2+\cdots \in \mathbb{Z}_p$</span>.</p>
<p>But what about for <span class="math-container">$p$</span> even i.e., <span class="math-container">$p=2$</span> ?</p>
<p>We have two following results as well.</p>
<blockquote>
<p>Result <span class="math-container">$1$</span>: For <span class="math-container">$p \neq 2$</span>, an <span class="math-container">$\epsilon \in \mathbb{Z}_p^{\times}$</span> is square in <span class="math-container">$\mathbb{Z}_p$</span> iff it is square in the residue field of <span class="math-container">$\mathbb{Z}_p$</span>.</p>
<p>Result <span class="math-container">$2$</span>: An unit <span class="math-container">$\epsilon \in \mathbb{Z}_2^{\times}$</span> is square iff <span class="math-container">$\epsilon \equiv 1 (\mod 8)$</span>.</p>
</blockquote>
<p>But as <span class="math-container">$20$</span> is not a unit in <span class="math-container">$\mathbb{Z}_2$</span>, the above Result <span class="math-container">$2$</span> is not applicable here.</p>
<p>So we have to use other way.</p>
<p>I am trying as follows:</p>
<p>For a general <span class="math-container">$2$</span>-adic integer <span class="math-container">$a$</span>, we have the following form <span class="math-container">$$ a=2^r(1+a_1 \cdot 2+a_2 \cdot 2^2+\cdots).$$</span>
Thus one necessary criterion for <span class="math-container">$a \in \mathbb{Q}_2$</span> to be square in <span class="math-container">$\mathbb{Q}_2$</span> is that <span class="math-container">$r$</span> must be <strong>even integer</strong>.</p>
<p>Are there any condition on <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span> ?</p>
<p>For, let <span class="math-container">$\sqrt{20}=a_0+a_1 2+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Then squaring and taking modulo <span class="math-container">$2$</span>, we get <span class="math-container">$$ a_0^2 \equiv 0 (\mod 2) \Rightarrow a_0 \equiv 0 (\mod 2).$$</span> Thus, <span class="math-container">$\sqrt {20}=a_12+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Again squaring and taking modulo (<span class="math-container">$2^2)$</span>, we get
<span class="math-container">$$a_1 \equiv 1 (\mod 2^2).$$</span> Thus we have <span class="math-container">$\sqrt{20}=2+a_22^2+a_32^3+\cdots$</span>.</p>
<p>Again squaring and taking modulo <span class="math-container">$(2^3)$</span>, we get no solution for <span class="math-container">$a_i,\ i \geq 2$</span>.</p>
<p>What do I conclude from here?</p>
<p>How do I conclude that <span class="math-container">$x^2=20$</span> has no solution in <span class="math-container">$\mathbb{Z}_2$</span> ?</p>
<p>Help me</p>
| Mathmo123 | 154,802 | <p>The other answers do a good job of answering your more general question, but I want to address the specific case of <span class="math-container">$x^2 = 20$</span> in the simplest way possible.</p>
<p>Remember how Hensel's lemma works: we perform an iterative procedure to find a solution to <span class="math-container">$f(x) \equiv 0\pmod {2^n}$</span> for all <span class="math-container">$n$</span>. We then apply completeness to show that <span class="math-container">$f$</span> has a solution in <span class="math-container">$\mathbb Z_2$</span>.</p>
<p>The problem here is that, since <span class="math-container">$x^2 -20$</span> has a repeated root modulo <span class="math-container">$2$</span>, the iterative procedure fails.</p>
<p>The simplest way to prove that <span class="math-container">$x^2 -20$</span> has no solution in <span class="math-container">$\mathbb Z_2$</span> is, therefore, to pinpoint at which point there is no solution modulo <span class="math-container">$2^n$</span>.</p>
<hr>
<p>Here's the one line proof: if <span class="math-container">$x^2 = 20$</span> has a solution in <span class="math-container">$\mathbb Z_2$</span>, it has a solution modulo <span class="math-container">$2^n$</span> for all <span class="math-container">$n$</span>. But it has no solution modulo <span class="math-container">$32$</span>.</p>
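<p>The brute-force check behind that last sentence, assuming Python is available:</p>

```python
# if x^2 = 20 had a solution in Z_2, then 20 would be a square mod 2^n for every n
squares_mod_32 = {x * x % 32 for x in range(32)}

assert squares_mod_32 == {0, 1, 4, 9, 16, 17, 25}
assert 20 not in squares_mod_32   # no solution mod 32, hence none in Z_2
```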
|
1,234,320 | <p>It is given that $4 x^4 + 9 y^4 = 64$.</p>
<p>Then what will be the maximum value of $x^2 + y^2$?</p>
<p>I have done it by taking the legs of a right-angled triangle to be $2x^2$ and $3y^2$ and the hypotenuse as $8$.</p>
| Jean-Claude Arbaut | 43,608 | <p>Without using Lagrange multipliers</p>
<p>It's equivalent to find $x$ that maximizes</p>
<p>$$f(x)=x^2+\frac23\sqrt{16-x^4}$$</p>
<p>You have</p>
<p>$$f'(x)=2x-\frac13\frac{4x^3}{\sqrt{16-x^4}}$$</p>
<p>Hence $f'(x)=0$ iff $x=0$ or</p>
<p>$$\frac{2x^2}{\sqrt{16-x^4}}=3$$
$$2x^2=3\sqrt{16-x^4}$$
$$4x^4=9(16-x^4)$$
$$13x^4=12^2$$
$$x=\pm2\frac{\sqrt{3}}{\sqrt[4]{13}}$$</p>
<p>Let $x_0=2\dfrac{\sqrt{3}}{\sqrt[4]{13}}$. Since $x$ appears only with even powers, sign is not important. And you have</p>
<p>$$f(0)=\frac83$$
$$f(x_0)=\frac{12}{\sqrt{13}}+\frac{2}{3}\sqrt{16-\frac{16\times 9}{13}}$$
$$=\frac{12}{\sqrt{13}}+\frac{2}{3}\sqrt{\frac{64}{13}}=\frac{12+\frac{16}{3}}{\sqrt{13}}=\frac{4\times13}{3\sqrt{13}}=\frac{4}{3}\sqrt{13}$$</p>
<p>And this is larger than $f(0)$ since</p>
<p>$$f(0)^2-f(x_0)^2=\frac{16\times4}{9}-\frac{16\times13}{9}<0$$</p>
<p>Thus the max is found for $x=\pm x_0$ and</p>
<p>$$y^4=\frac{1}{9}\left(64-4x_0^4\right)=\frac{1}{9}\left(64-4\frac{16\times 9}{13}\right)=\frac{64}{9}\left(1-\frac{9}{13}\right)=\frac{64\times4}{9\times13}$$</p>
<p>Hence</p>
<p>$$y=\pm \frac{4}{\sqrt{3}\sqrt[4]{13}}$$</p>
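<p>A symbolic check of the critical point and of the maximum value $\frac43\sqrt{13}$, assuming Python with sympy is available:</p>

```python
import sympy as sp

x0 = 2 * sp.sqrt(3) / sp.root(13, 4)      # critical point found above
y0 = 4 / (sp.sqrt(3) * sp.root(13, 4))    # corresponding y

# (x0, y0) satisfies the constraint 4x^4 + 9y^4 = 64
assert sp.simplify(4 * x0**4 + 9 * y0**4 - 64) == 0
# the maximum of x^2 + y^2 is 4*sqrt(13)/3
assert sp.simplify(x0**2 + y0**2 - 4 * sp.sqrt(13) / 3) == 0
```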
|
3,707,132 | <p>I believe my proof of this simple fact is fine, but after a few false starts, I was hoping that someone could look this over. In particular, I am interested in whether there is an alternate proof.</p>
<blockquote>
<p>For a real number <span class="math-container">$a$</span> and non-empty subset of reals <span class="math-container">$B$</span>, define <span class="math-container">$a + B = \{a + b : b \in B\}$</span>. Show that if <span class="math-container">$B$</span> is bounded above, then <span class="math-container">$\sup(a + B) = a + \sup B$</span>.</p>
</blockquote>
<p>My attempt: </p>
<blockquote>
<p>Fix <span class="math-container">$a \in \mathbb{R}$</span>, take <span class="math-container">$B \subset \mathbb{R}$</span> to be nonempty and bounded above, and define
<span class="math-container">$$a + B = \{a + b : b \in B\}.$$</span>
Since <span class="math-container">$B$</span> is nonempty and bounded above, the least-upper-bound axiom guarantees the existence of <span class="math-container">$\sup B$</span>. For any <span class="math-container">$b \in B$</span>, we have
<span class="math-container">$$b \leq \sup B,$$</span>
which implies
<span class="math-container">$$a + b \leq a + \sup B.$$</span>
As this is true for any <span class="math-container">$b \in B$</span>, it follows that <span class="math-container">$a + \sup B$</span> is an upper bound of <span class="math-container">$a + B$</span>, and hence <span class="math-container">$\sup(a + B)$</span> exists, by the completeness axiom, since <span class="math-container">$B \neq \emptyset$</span> implies immediately that <span class="math-container">$a + B \neq \emptyset$</span>. I claim that <span class="math-container">$a + \sup B$</span> is in fact the least upper bound of <span class="math-container">$a + B$</span>. As we have already shown it to be an upper bound, it suffices to demonstrate that <span class="math-container">$a + \sup B$</span> is the least of the upper bounds. Let <span class="math-container">$\gamma$</span> be an upper bound of <span class="math-container">$a + B$</span>. Hence, for any <span class="math-container">$b \in B$</span>,
<span class="math-container">$$a + b \leq \gamma,$$</span>
which implies that
<span class="math-container">$$b \leq \gamma - a.$$</span>
As this holds for all <span class="math-container">$b \in B$</span>, <span class="math-container">$\gamma - a$</span> is an upper bound of <span class="math-container">$B$</span>. Hence, by the definition of supremum,
<span class="math-container">$$\gamma - a \geq \sup B,$$</span>
which implies that
<span class="math-container">$$\gamma \geq a + \sup B,$$</span>
as desired. </p>
</blockquote>
<p>I tried to write the proof initially by showing that <span class="math-container">$\sup(a + B) \leq a + \sup B$</span> and <span class="math-container">$\sup(a + B) \geq a + \sup B$</span>, but didn't have any luck. If there is a trick to it, I would be interested in hearing it.</p>
| zkutch | 775,801 | <p><span class="math-container">$$\sup\left\lbrace x + y\right\rbrace = \sup \left\lbrace x \right\rbrace + \sup \left\lbrace y \right\rbrace $$</span>
<span class="math-container">$$\inf\left\lbrace x + y\right\rbrace = \inf \left\lbrace x \right\rbrace + \inf \left\lbrace y \right\rbrace $$</span>
For non-negative numbers the same properties hold for multiplication.</p>
<p>Proof of the second:</p>
<p><span class="math-container">$ \exists \inf \left\lbrace x \right\rbrace $</span> and <span class="math-container">$\exists \inf \left\lbrace y \right\rbrace \Rightarrow \exists \inf\left\lbrace x + y\right\rbrace $</span> and <span class="math-container">$\inf \left\lbrace x \right\rbrace + \inf \left\lbrace y \right\rbrace \leqslant x + y$</span>.</p>
<p>For <span class="math-container">$ \forall \epsilon > 0$</span> <span class="math-container">$ \space \exists (x_1 + y_1) \in \left\lbrace x + y\right\rbrace $</span> for which <span class="math-container">$$\inf \left\lbrace x \right\rbrace + \inf \left\lbrace y \right\rbrace \leqslant (x_1 + y_1) < \inf \left\lbrace x \right\rbrace + \inf \left\lbrace y \right\rbrace + \epsilon $$</span>
This gives
<span class="math-container">$$\inf\left\lbrace x + y\right\rbrace = \inf \left\lbrace x \right\rbrace + \inf \left\lbrace y \right\rbrace $$</span></p>
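<p>For finite sets, where the supremum is just the maximum, these identities are easy to sanity-check, assuming Python is available (the sample data is random):</p>

```python
import random

random.seed(0)
B = [random.uniform(-10.0, 10.0) for _ in range(50)]
C = [random.uniform(-10.0, 10.0) for _ in range(50)]
a = 3.7

# sup(a + B) = a + sup B
assert max(a + b for b in B) == a + max(B)

# sumset versions: sup(B + C) = sup B + sup C, inf(B + C) = inf B + inf C
sumset = [b + c for b in B for c in C]
assert max(sumset) == max(B) + max(C)
assert min(sumset) == min(B) + min(C)
```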
|
760,216 | <p>The sewing pattern on a basketball is composed of two great circles and a single curve that intersects each great circle twice. Does this curve have a name? Are there any parametric descriptions of the curve?</p>
<p>The same question can apply to the sewing pattern on a tennis ball.</p>
<p>Answers about why this sewing pattern came to be used are also welcome.</p>
| Will Jagy | 10,400 | <p>I don't have a basketball here to compare. However, I can say that the seams on a baseball and on a tennis ball are similar to the intersection of <a href="http://www.indiana.edu/~minimal/maze/enneper.html" rel="nofollow noreferrer">Enneper's minimal surface</a> with a sphere centered at the, well, the origin of all the symmetries of the surface. See also <a href="http://en.wikipedia.org/wiki/Enneper_surface" rel="nofollow noreferrer">HERE</a>.</p>
<p><em>Edit by Mario:</em> Here is an animation of the basketball curve based on the idea given above. The degree to which the two bells of the curve approach each other depends on the radius of the sphere chosen to intersect the surface; this animation uses $r=\frac32$, which produces a reasonable approximation.</p>
<p> <img src="https://i.stack.imgur.com/dtUNs.gif" alt=""></p>
|
4,183,734 | <p>I just started trigonometry, and I came across this problem. I know that this problem would be very simple with a calculator, but without one, I'm lost. How would you determine if <span class="math-container">$\sin 4$</span> is bigger than <span class="math-container">$\sin 3$</span> in negativity or positivity? How would you even determine if they are positive or negative in the first place, and how would you know which one is bigger?</p>
| fleablood | 280,126 | <p>For values close to <span class="math-container">$\pi$</span> (or close to <span class="math-container">$0$</span>) we have <span class="math-container">$|\sin (\pi \pm k)| = |\sin k|$</span>.</p>
<p>And if <span class="math-container">$|k|$</span> is closer to <span class="math-container">$0$</span> (but less than <span class="math-container">$\frac \pi 2$</span>) than <span class="math-container">$|j|$</span> then <span class="math-container">$|\sin(\pi \pm k)| < |\sin (\pi \pm j)|$</span>.</p>
<p>.......</p>
<p>So bearing in mind that <span class="math-container">$4-\pi \approx 4- 3.14 = 0.86$</span> and <span class="math-container">$\pi -3 \approx 3.14 - 3 = 0.14$</span> we have</p>
<p>So for <span class="math-container">$\frac \pi 2 < 3 < \pi$</span> we have <span class="math-container">$\sin 3 > 0$</span> and <span class="math-container">$\sin 3=\sin (\pi -3) < \sin (4 - \pi)$</span></p>
<p>And for <span class="math-container">$\pi < 4 < \frac {3\pi}2$</span> we have <span class="math-container">$\sin 4 < 0$</span> and <span class="math-container">$|\sin 4| = \sin (4-\pi)> \sin (\pi -3)$</span>.</p>
<p>so <span class="math-container">$\sin 3 + \sin 4 = |\sin 3| - |\sin 4|= \sin(\pi -3) - \sin (4-\pi) < 0$</span>.</p>
<p>====</p>
<p>Alternatively (but this is more coincidental than general.</p>
<p><span class="math-container">$\frac {3\pi}4 < 3 < \pi$</span> so <span class="math-container">$\frac {\sqrt2}2 > \sin 3 > 0$</span>.</p>
<p>And <span class="math-container">$\frac{5\pi} 4 < 4 < \frac {3\pi}2$</span> so <span class="math-container">$-\frac {\sqrt2}2 > \sin 4 > -1$</span>.</p>
<p>So <span class="math-container">$0 =\frac {\sqrt2}2 -\frac {\sqrt2}2 > \sin 3 + \sin 4 > 0 -1=-1$</span></p>
<p>But that would be harder to generalize for something like <span class="math-container">$\sin 2.8 + \sin 3.7$</span> something like that.</p>
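<p>The bounds above are easy to confirm numerically, assuming Python is available:</p>

```python
import math

# sin 3 is small and positive; sin 4 is negative with larger magnitude
assert 0 < math.sin(3) < math.sqrt(2) / 2
assert -1 < math.sin(4) < -math.sqrt(2) / 2
assert math.sin(3) + math.sin(4) < 0

# the reflection identities used above
assert math.isclose(math.sin(3), math.sin(math.pi - 3))
assert math.isclose(abs(math.sin(4)), math.sin(4 - math.pi))
```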
|
457,654 | <p>This question is a continuation of: <a href="https://math.stackexchange.com/questions/456118/finding-a-set-of-values-with-the-binomial-theorem">Finding a set of values with the binomial theorem</a></p>
<p>For $n \in \mathbb N$ and the function $p(x) = \left(x + \frac 1x\right)^{n}$.</p>
<p>By the binomial theorem:<br />
$$\left(x + \frac 1x\right)^{n} = \sum_{k=0}^n {n \choose k} x^{n-2k}$$</p>
<p>1) For $p(x) = a_ox^{b_o}+a_1x^{b_1}+...+a_nx^{b_n}$ and $D=$ {$b_o, b_1, ..., b_n$}.<br />
Determine D and its maximum and minimum values.<br />
<strong>Answer:</strong> $D = \{n, n-2, n-4, \ldots, -n\}$ with a minimum of $-n$ and a maximum of $n$.</p>
<p>2) Determine the coefficient $a_k$ of $x^{b_k}$ where $k=0,1,...,n$ and give a formula.</p>
<p><strong>Can someone give me a hint of what to do? I don't know where to start for the second part...</strong></p>
| Alex Youcis | 16,497 | <p>Hint: Euler's theorem comes from the fact that $\#(\mathbb{F}_p^\times)=p-1$. The minimality comes from the fact that $\mathbb{F}_p^\times$ is cyclic--as is the unit group of any finite field. Now, assume that $m$ is irreducible. Then, $k=\mathbb{F}_p[x]/(m(x))$ is a field. Moreover, it's easy to see that $\dim_{\mathbb{F}_p}k=\deg m$. </p>
|
39,184 | <p>Hi everybody,</p>
<p>Reading about Geometry Processing, I have realized that people in this area are very interested in regular vertices (degree = 6) rather than irregular ones.</p>
<p>Can anybody give me reasons why this is an important property?
Suppose that we can have a mesh with all regular vertices (genus 1) but with very poor geometric positioning of the vertices.</p>
<p>So, is there any specific reason why connectivity is so important, regardless of the geometry?</p>
<p>Thanks</p>
| Darsh Ranjan | 302 | <p>Some geometric algorithms work better around regular vertices. For example, Loop subdivision is a procedure to turn a triangular mesh into a smooth surface (i. e., a "subdivision surface"). The amount of smoothness varies, however: IIRC, the normal vector to the surface is continuous everywhere, but the curvature is only continuous away from vertices of degree other than 6. </p>
|
2,572,428 | <p>$$z^2=3+4i ,(z=x+iy)$$
Seems easy? That's what I thought!</p>
<p>I get a system of equations that I can't solve:
$x^2-y^2=3$ and $2xyi=4i$, from which I get $x=\frac{2}{y}$.</p>
<p>I can't solve the system of equations?</p>
| Alex Zorn | 62,875 | <p>Taking norms, you have: $|z|^2 = |3 + 4i|$, so $x^2 + y^2 = 5$. Combining this with $x^2 - y^2 = 3$ yields $2x^2 = 8$, hence $x = \pm 2$. </p>
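<p>Following the hint to the end, $x=\pm2$ forces $y=\pm1$ (same sign, from $xy=2$), and the result is a one-line check, assuming Python is available:</p>

```python
# the two square roots of 3 + 4i
for z in (2 + 1j, -2 - 1j):
    assert z * z == 3 + 4j
```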
|
2,738,023 | <p>We know we can check if two vectors are 'orthogonal' by doing an inner product.</p>
<p>$a*b=0$</p>
<p>tells us that these two vectors are orthogonal</p>
<p>here comes the question:</p>
<p>Is there a way to compute if they are 'parallel', i.e., pointing in the same direction?</p>
| user | 505,767 | <p>Note that two vectors $\vec v_1,\vec v_2\neq \vec 0$ are parallel $$\iff \vec v_1=k\cdot \vec v_2$$</p>
<p>for some $k\in \mathbb{R}$ (with $k>0$ if they must point in the same direction), and this condition is easy to check component by component.</p>
<p>For vectors in $\mathbb{R^2}$ or $\mathbb{R^3}$ we could check the condition by cross product.</p>
<p>More generally, the matrix formed by two parallel vectors has rank $1$.</p>
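<p>Each of these tests is a one-liner, assuming Python with numpy is available (the sample vectors are arbitrary):</p>

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])   # v2 = 2 * v1, so parallel and same direction

# rank test (works in any dimension): parallel iff the stacked matrix has rank 1
assert np.linalg.matrix_rank(np.vstack([v1, v2])) == 1
# in R^3, equivalently, the cross product vanishes
assert np.allclose(np.cross(v1, v2), 0.0)
# "same direction" additionally requires k > 0, i.e. a positive dot product
assert v1 @ v2 > 0
```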
|
1,004,837 | <p>The Warsaw circle is defined as a subset of $\mathbb{R}^2$:
$$\left\{\left(x,\sin\frac{1}{x}\right): x\in\left(0,\frac{1}{2\pi}\right]\right\}\cup\left\{(0,y):-1\leq y\leq1\right\}\cup C\;,$$
where $C$ is the image of a curve connecting the other two pieces. </p>
<p>A map from Warsaw circle to a single point space seems to be a well-known example showing weak homotopy equivalence is indeed weaker than homotopy equivalence. <strong>I am trying to see why the Warsaw circle is non-contractible.</strong> It seems intuitively reasonable since two 'ends' of it are connected in some sense, but I failed to give a proof. Any hint would be appreciated. Thank you very much. </p>
| Jason DeVito | 331 | <p>Let $W$ denote the Warsaw circle. By collapsing the interval piece down to a point, the quotient space is homeomorphic to $S^1$. This gives a map $f:W\rightarrow W/{\sim} \cong S^1$.</p>
<p>I claim this map is not null homotopic. Believing this for a second, note that for any contractible space $X$ any continuous $f:X\rightarrow Y$ is null homotopic, so this will show that $W$ is not contractible.</p>
<p>So, assume $f$ is null homotopic. Then we can lift it to get a map $\hat{f}:W\rightarrow \mathbb{R}$. Now, $f$ is $1-1$, except on the interval piece. This implies the lift is $1-1$, except, perhaps, on the interval piece. But since the interval piece is connected and it must map into the fiber $\mathbb{Z}$ of the map $\mathbb{R}\rightarrow S^1$, this implies that $\hat{f}$ is 1-1, except that it collapses the interval to a point.</p>
<p>Said another way, $\hat{f}$ descends to an injective map on $W/{\sim}$. Since $W/{\sim}$ is homeomorphic to $S^1$, $\hat{f}$ gives an injective map from $S^1$ to $\mathbb{R}$. But using the intermediate value theorem twice, it's easy to see that there is no injective continuous map from $S^1$ to $\mathbb{R}$.</p>
|
1,004,837 | <p>The Warsaw circle is defined as a subset of $\mathbb{R}^2$:
$$\left\{\left(x,\sin\frac{1}{x}\right): x\in\left(0,\frac{1}{2\pi}\right]\right\}\cup\left\{(0,y):-1\leq y\leq1\right\}\cup C\;,$$
where $C$ is the image of a curve connecting the other two pieces. </p>
<p>A map from Warsaw circle to a single point space seems to be a well-known example showing weak homotopy equivalence is indeed weaker than homotopy equivalence. <strong>I am trying to see why the Warsaw circle is non-contractible.</strong> It seems intuitively reasonable since two 'ends' of it are connected in some sense, but I failed to give a proof. Any hint would be appreciated. Thank you very much. </p>
| Matteo Doni | 557,505 | <p>Let <span class="math-container">$W$</span> denote the Warsaw circle and let <span class="math-container">$C$</span> be the arc described in the question. By collapsing <span class="math-container">$C$</span> down to a point we obtain the map <span class="math-container">$f:W\rightarrow W/C \cong S^1$</span>.</p>
<p><span class="math-container">$\underline{Claim}$</span>: f is not null homotopic.</p>
<p>If the <span class="math-container">$\underline{Claim}$</span> holds, the result follows. Indeed, any continuous <span class="math-container">$f:X\rightarrow Y$</span> is null homotopic if its domain is contractible; so this will show that <span class="math-container">$W$</span> is not contractible.</p>
<p>So we are done once we prove the <span class="math-container">$\underline{Claim}$</span>.</p>
<p><span class="math-container">$\textit{Proof}(\underline{Claim})$</span> Assume, for contradiction, that <span class="math-container">$f$</span> is null homotopic. Let <span class="math-container">$e:\mathbb{R}\to S^1:r\mapsto (\cos 2\pi r,\sin 2\pi r)$</span> be the usual helix covering space of <span class="math-container">$S^1$</span> and let <span class="math-container">$H:W\times I\to S^1$</span> (with <span class="math-container">$I$</span> the interval <span class="math-container">$[0,1]$</span>) be a homotopy between <span class="math-container">$f$</span> and the constant map <span class="math-container">$c:W\to S^1:w\mapsto (0,1)$</span>. Since <span class="math-container">$e$</span> is surjective there exists a lifting <span class="math-container">$c_0 :W\to \mathbb{R}$</span> of <span class="math-container">$c$</span> along <span class="math-container">$e$</span>.
Now since every covering space is a Hurewicz fibration (see Proposition 1.30 of Hatcher, Algebraic Topology, <a href="https://pi.math.cornell.edu/%7Ehatcher/AT/AT.pdf" rel="nofollow noreferrer">https://pi.math.cornell.edu/~hatcher/AT/AT.pdf</a>), there is a lifting <span class="math-container">$H_0:W\times I\to \mathbb{R}$</span> of <span class="math-container">$H$</span> along <span class="math-container">$e$</span> such that <span class="math-container">$e\circ H_0=H$</span> and <span class="math-container">$c_0=H_0\circ i$</span>, where <span class="math-container">$i:W\cong W\times\{0\}\to W\times I$</span> is the obvious inclusion. In particular <span class="math-container">$f_0(-):=H_0(-,1):W\to \mathbb{R} $</span> is a lifting of <span class="math-container">$f$</span> along <span class="math-container">$e$</span>.</p>
<p>Let <span class="math-container">$C$</span> be the space as in the question. The map <span class="math-container">$f$</span> is injective away from <span class="math-container">$C$</span>, and the equation <span class="math-container">$f = e \circ f_{0}$</span> implies that <span class="math-container">$f_{0}$</span> is injective away from <span class="math-container">$C$</span> as well. Moreover <span class="math-container">$f_{0|C}:C\to \mathbb{R}$</span> has image in a fiber of <span class="math-container">$e$</span>, which is a discrete subset of <span class="math-container">$\mathbb{R}$</span>, so it is a constant map. Hence, by the universal property of the quotient, <span class="math-container">$f_0$</span> descends to a unique, continuous and injective map <span class="math-container">$\hat{f}_0:S^1\cong W/C\to \mathbb{R}$</span>. Here we reach a contradiction, since no such map exists. Indeed, if such a map <span class="math-container">$\hat{f}_0$</span> existed, then <span class="math-container">$Im(\hat{f}_0)$</span> would be a compact, path-connected subspace of <span class="math-container">$\mathbb{R}$</span>, because <span class="math-container">$S^1$</span> is a path-connected, compact space (with the subspace topology from <span class="math-container">$\mathbb{R}^2$</span>). But a compact, connected subspace of <span class="math-container">$\mathbb{R}$</span> is simply a closed interval <span class="math-container">$[a,b]$</span>. This yields a second contradiction, since it implies that <span class="math-container">$S^1$</span> is homeomorphic to a closed interval of <span class="math-container">$\mathbb{R}$</span> (i.e. <span class="math-container">$S^1\cong Im(\hat{f}_0)\cong [a,b]$</span>), which is impossible since <span class="math-container">$\pi_1(S^1)\cong\mathbb{Z}\neq 0\cong\pi_{1}([a,b])$</span>.</p>
|
3,287,067 | <p>I'm reading a solution to the following exercise:</p>
<p>"Assume that <span class="math-container">$\lim_{x\to c}f\left(x\right)=L$</span>, where <span class="math-container">$L\ne0$</span>, and assume <span class="math-container">$\lim_{x\to c}g\left(x\right)=0.$</span> Show that <span class="math-container">$\lim_{x\to c}\left|\frac{f\left(x\right)}{g\left(x\right)}\right|=\infty.$</span>" </p>
<p>And at some point in the proof the following step appears:</p>
<p>"Choose <span class="math-container">$\delta_1$</span> so that <span class="math-container">$0<\left|x-c\right|<\delta _1$</span> implies <span class="math-container">$\left|f\left(x\right)-L\right|<\frac{|L|}{2}$</span>. <strong>Then we have <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span></strong>."</p>
<p>It's precisely the implication in bold that I'm struggling to understand. How does the writer go from <span class="math-container">$\left|f\left(x\right)-L\right|<\frac{\left|L\right|}{2}$</span> to <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span>? </p>
<p>I'm probably failing to see something that may be very clear, but I've been attempting unsuccessfully to reach the conclusion algebraically long enough, and can't quite see why it is true either! </p>
<p>Here is the rest of the solution, if necessary. </p>
<p>"Let <span class="math-container">$M>0\ $</span> be arbitrary. [...]. Because <span class="math-container">$\lim_{x\to c}g\left(x\right)=0$</span>, we can choose <span class="math-container">$\delta_2$</span> such that <span class="math-container">$\left|g\left(x\right)\right|<\frac{\left|L\right|}{2M}\ $</span>provided <span class="math-container">$0<\left|x-c\right|<\delta_2$</span>. </p>
<p>Let <span class="math-container">$\delta=\min\left\{\delta_1,\delta_2\right\}.\ $</span> Then we have </p>
<p><span class="math-container">$\left|\frac{f\left(x\right)}{g\left(x\right)}\right|\ge\left|\frac{\frac{\left|L\right|}{2}}{\frac{\left|L\right|}{2M}}\right|=M$</span> provided <span class="math-container">$0<\left|x-c\right|<\delta$</span>, as desired." </p>
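<p>One way to see the bolded step is the reverse triangle inequality: $\left|f(x)\right|\ge\left|L\right|-\left|f(x)-L\right|>\left|L\right|-\frac{\left|L\right|}{2}=\frac{\left|L\right|}{2}$. A quick numerical sanity check of this (my own experiment, with randomly sampled values):</p>

```python
import random

random.seed(0)
for _ in range(10_000):
    L = random.choice([-1, 1]) * random.uniform(0.1, 10)  # nonzero "limit"
    d = random.uniform(-1, 1) * abs(L) / 2                # perturbation, |d| <= |L|/2
    f = L + d                                             # so |f - L| <= |L|/2
    # reverse triangle inequality: |f| >= |L| - |f - L| >= |L|/2
    assert abs(f) >= abs(L) / 2 - 1e-12
print("ok")
```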
| J. W. Tanner | 615,567 | <p><span class="math-container">$\cos$</span> is an even function, but <span class="math-container">$$\int_{-\pi}^{\pi} \cos x dx =\sin x |_{-\pi}^{\pi}=0$$</span></p>
|
3,495,779 | <p>The question is:
Prove that if graph <span class="math-container">$G$</span> is a <strong>Connected Planar Graph</strong> where each region is made up of at least <span class="math-container">$k\,(k\ge3)$</span> edges, then it must satisfy:</p>
<p><span class="math-container">$e\ge\frac{k(v-2)}{k-2}$</span></p>
<p><span class="math-container">$e$</span> means <span class="math-container">$G$</span>'s number of edges, <span class="math-container">$v$</span> means <span class="math-container">$G$</span>'s number of vertices</p>
<p>I was then preparing to use induction to prove this, but I don't know which variable I should use: the number of edges or the number of vertices? I also don't know what <span class="math-container">$k$</span> means here. Is <span class="math-container">$k$</span> any number no less than <span class="math-container">$3$</span>, or does it equal the minimum number of edges among <span class="math-container">$G$</span>'s regions? Because of these doubts, I am unsure how to start my induction correctly.</p>
| Acccumulation | 476,070 | <p>Generally, when you're using induction to prove a fact about the relationship between a dependent variable and an independent variable, the induction is on the independent variable. Another generalization is that if you're proving an inequality, the induction is likely on the "less than" side. The expression <span class="math-container">$\frac {k(v-2)}{k-2}$</span> is written as if it's a function of <span class="math-container">$v$</span>, making <span class="math-container">$v$</span> the independent variable. Furthermore, it is more common for the number of edges to be considered a dependent variable with respect to the number of vertices, than the reverse. None of these are ironclad rules, and having not worked out the proof myself, I can't say for certain, but I would strongly expect the induction to not be on <span class="math-container">$e$</span>; I would expect it to be on either <span class="math-container">$v$</span> or on the number of regions.</p>
<p>As for <span class="math-container">$k$</span>, it is an integer that is greater than <span class="math-container">$2$</span>, and it is a lower bound for the number of edges for each region. It is not necessarily the minimum; there is a difference between "minimum" and "lower bound". Lower bound for a set of values means that the values are never lower, while minimum means they are never lower, and at least one value is equal. So, for instance, I'm confident that 10 is a lower bound for the age of Fortune 500 CEOs, but it's not the minimum age, because no such CEO is actually 10.</p>
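<p>Independent of the induction route, it may help to sanity-check the bound itself: it follows from Euler's formula <span class="math-container">$v-e+f=2$</span> combined with <span class="math-container">$2e\ge kf$</span> (each region is bounded by at least <span class="math-container">$k$</span> edges, and each edge borders at most two regions). A small check on hand-picked planar graphs (my own addition):</p>

```python
# Check e >= k(v-2)/(k-2) via Euler's formula v - e + f = 2 and 2e >= k*f.
examples = [
    ("tetrahedron",   4,  6, 3),   # (name, v, e, k): triangular faces
    ("cube",          8, 12, 4),   # square faces
    ("dodecahedron", 20, 30, 5),   # pentagonal faces
    ("5-cycle",       5,  5, 5),   # two faces, each bounded by 5 edges
]
for name, v, e, k in examples:
    f = 2 - v + e                              # number of faces, by Euler's formula
    assert 2 * e >= k * f, name                # each face has at least k edges
    assert e >= k * (v - 2) / (k - 2), name    # the bound to be proved
print("all examples satisfy the bound")
```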
|
2,376,315 | <p>So I'm trying to solve the problem irrational ^ irrational = rational. Here is my proof:
Let $i_{1},i_{2}$ be two irrational numbers and r be a rational number such that $$i_{1}^{i_{2}} = r$$
So we can rewrite this as $$i_{1}^{i_{2}} = \frac{p}{q}$$
Then by applying ln() to both sides we get $$i_2\ln(i_1) = \ln(p)-\ln(q)$$
which can be rewritten using the difference of squares as $$ i_2\ln(i_1) = \left(\sqrt{\ln(p)}-\sqrt{\ln(q)}\right)\left(\sqrt{\ln(p)}+\sqrt{\ln(q)}\right)$$
so now we have $$i_1 = e^{\sqrt{\ln(p)}+\sqrt{\ln(q)}}$$
$$i_2 = \sqrt{\ln(p)}-\sqrt{\ln(q)}$$
because I've found an explicit formula for $i_1$ and $i_2$ we are done.</p>
<p>So I'm new to proofs and I'm not sure if this is a valid argument. Can someone help me out?</p>
| Reese Johnston | 351,805 | <p>When writing a proof - any proof - the first and most important step is to decide <em>what you are proving</em>. If the thing you end up with is not exactly the thing you were proving, you're not done with the proof.</p>
<p>In this case, it's not clear to me what you're trying to prove - are you trying to prove that $i_1^{i_2}$ is always rational for every irrational $i_1,i_2$? Hopefully not, because that isn't true - but even if it were, your proof only supplies one example. Are you trying to show that for every rational $r$ there exists irrationals $i_1,i_2$ so that $i_1^{i_2} = r$? If so, then you have a problem - in your first sentence you assumed the thing you were trying to prove!</p>
<p>Your reasoning itself is good, assuming you're shooting for the second conclusion, but this looks more like the work you should use to <em>get</em> the proof, not the proof itself. If I were you, I'd have the proof look sort of like this:</p>
<p>Let $r$ be any rational. Then $r = \frac{p}{q}$ for some integers $p,q$. Let $i_1 = e^{\sqrt{ln(p)} + \sqrt{\ln(q)}}$ and $i_2 = \sqrt{\ln(p)} - \sqrt{\ln(q)}$. Then... (insert calculations here)... so $i_1^{i_2} = r$. Therefore, for any $r$, there exist $i_1$ and $i_2$ so that $i_1^{i_2} = r$.</p>
<p>Now, there's still a problem - how do you know that your $i_1$ and $i_2$ are irrational? For example, if $r = 1$, then you end up with $i_1 = 1$ and $i_2 = 0$ no matter which $p$ and $q$ you choose. You may be able to do some sort of clever argument here, but I'd recommend trying a different technique.</p>
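<p>For what it's worth, the algebra of the construction does check out numerically. This verifies only the identity $\left(e^{\sqrt{\ln p}+\sqrt{\ln q}}\right)^{\sqrt{\ln p}-\sqrt{\ln q}}=p/q$, not the irrationality of $i_1$ and $i_2$; here with the arbitrary choice $p=7$, $q=3$:</p>

```python
import math

p, q = 7, 3            # arbitrary choice with p, q > 1, so the square roots are real
a, b = math.sqrt(math.log(p)), math.sqrt(math.log(q))
i1 = math.exp(a + b)
i2 = a - b
# (e^{a+b})^{a-b} = e^{a^2 - b^2} = e^{ln p - ln q} = p/q
assert abs(i1 ** i2 - p / q) < 1e-12
print(i1 ** i2)        # approximately 7/3
```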
|
310,651 | <p>$\mathbb{Z}_{30}$* $= \{1,7,11,13,17,19,23,29\}$</p>
<p>The number of elements is $8$ and $8$ is not prime, therefore $\mathbb{Z}_{30}$* is not cyclic.</p>
<p>and the generators are $7,11,13,17,19,23,29$.</p>
<p>can anyone correct me please?</p>
| Julien | 38,053 | <p>Note that $30=2\cdot 3\cdot 5$.</p>
<p>By the Chinese remainder theorem,</p>
<p>$$
\mathbb{Z}/30\mathbb{Z}^*=\mathbb{Z}/2\mathbb{Z}^*\times \mathbb{Z}/3\mathbb{Z}^*\times \mathbb{Z}/5\mathbb{Z}^*=\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/4\mathbb{Z}.
$$</p>
<p>So you're right, this is not cyclic, but not for the reason you mention.</p>
<p>So no element generates the group.</p>
<p>Edit: I see the confusion. The elements you mention generate $\mathbb{Z}/30\mathbb{Z}$, not $\mathbb{Z}/30\mathbb{Z}^*$.</p>
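<p>A quick computational check (my own addition) confirms this: no unit mod $30$ has order $8$, so no element generates the group; the maximum order is $4$, consistent with $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/4\mathbb{Z}$.</p>

```python
from math import gcd

units = [a for a in range(1, 30) if gcd(a, 30) == 1]
assert units == [1, 7, 11, 13, 17, 19, 23, 29]

def order(a, n=30):
    """Multiplicative order of a modulo n."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

orders = {a: order(a) for a in units}
assert max(orders.values()) == 4       # no element of order 8, hence not cyclic
print(orders)
```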
|
2,919,096 | <p>I am trying to solve a system of two second-order ODEs. After separating them, I obtained a fourth-order independent ODE as illustrated below. I wonder if there is a specific technique to solve it.</p>
<p>$$y^{(4)}+\frac{a_1}{x} y^{(3)}+\frac{a_2}{x^2}y^{(2)}+a_3y^{(2)}+\frac{a_4}{x}y^{(1)}+a_5y=0$$</p>
| Ernie060 | 592,621 | <p>A power series method might do the trick. Substitute
$$ y = \sum_{k=0}^\infty c_k x^k$$
in the differential equation $$x^2 y^{(4)} + a_1 x y^{(3)} + \cdots + a_5 x^2 y = 0.$$
You will obtain a series whose coefficients yield difference equations in the unknowns $c_k$. For concrete constants $a_n$, these difference equations can perhaps be solved. </p>
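<p>To make this concrete, here is a small Python sketch (my own working, not part of the original answer). Collecting the coefficient of $x^m$ after the substitution gives the recurrence $(m+2)(m+1)\left[m(m-1)+a_1 m+a_2\right]c_{m+2} = -\left[m\left(a_3(m-1)+a_4\right)c_m + a_5 c_{m-2}\right]$; the code builds a truncated series from it and checks that the residual of the ODE is small near $x=0$:</p>

```python
def series_coeffs(a, c0, c1, N):
    """Series coefficients c_0..c_{N+1} from the recurrence; c0 and c1 are free."""
    a1, a2, a3, a4, a5 = a
    c = [0.0] * (N + 2)
    c[0], c[1] = c0, c1
    for m in range(N):
        lead = (m + 2) * (m + 1) * (m * (m - 1) + a1 * m + a2)
        rhs = m * (a3 * (m - 1) + a4) * c[m] + (a5 * c[m - 2] if m >= 2 else 0.0)
        c[m + 2] = -rhs / lead            # assumes lead != 0 for the chosen a's
    return c

def polyval(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def deriv(c):
    return [k * ck for k, ck in enumerate(c)][1:]

a = (1.0, 2.0, 0.5, 1.5, -0.7)            # arbitrary test coefficients
c = series_coeffs(a, 1.0, 1.0, 12)
d1 = deriv(c); d2 = deriv(d1); d3 = deriv(d2); d4 = deriv(d3)

x = 0.1
a1, a2, a3, a4, a5 = a
residual = (x**2 * polyval(d4, x) + a1 * x * polyval(d3, x)
            + (a2 + a3 * x**2) * polyval(d2, x)
            + a4 * x * polyval(d1, x) + a5 * x**2 * polyval(c, x))
assert abs(residual) < 1e-8               # vanishes to the truncation order
print(residual)
```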
|
2,919,096 | <p>I am trying to solve a system of two second-order ODEs. After separating them, I obtained a fourth-order independent ODE as illustrated below. I wonder if there is a specific technique to solve it.</p>
<p>$$y^{(4)}+\frac{a_1}{x} y^{(3)}+\frac{a_2}{x^2}y^{(2)}+a_3y^{(2)}+\frac{a_4}{x}y^{(1)}+a_5y=0$$</p>
| Przemo | 99,778 | <p>One technique for solving linear higher order ODEs is to seek solutions in the form <span class="math-container">$y(x) = y_1(x) \cdot y_2(x)$</span>, where <span class="math-container">$y_{1,2}$</span> are solutions of some lower order ODE (the input ODE) whose solutions are known. In order to guarantee the success of this method, one has to make sure that both the input and the target ODE (i.e. the ODE in question) have the same singular points. The target ODE has singular points at zero and at infinity, and therefore we choose the Whittaker differential equation (which also has singularities at zero and infinity) as the input ODE.</p>
<p>Define <span class="math-container">$Q(x) := -A + k/x + (1/4-\mu^2)/x^2$</span>. Then the equation (to be called the input ODE) reads:
<span class="math-container">\begin{equation}
\frac{d^2 y_{1,2}(x)}{d x^2} + Q(x) y_{1,2}(x) = 0
\end{equation}</span>
is solved by
<span class="math-container">\begin{eqnarray}
y_{1}(x) &=& M_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x) \\
y_{2}(x) &=& W_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x)
\end{eqnarray}</span>
where <span class="math-container">$M_{\cdot,\cdot}(x)$</span>and <span class="math-container">$W_{\cdot,\cdot}(x)$</span> are Whittaker functions <a href="https://en.wikipedia.org/wiki/Whittaker_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Whittaker_function</a> .
Now we differentiate our function <span class="math-container">$y(x)$</span>. We have:
<span class="math-container">\begin{eqnarray}
&&y(x) = y_1(x) y_2(x) \\
&&y^{'}(x) = y_1^{'}(x) y_2(x) + y_1(x) y_2^{'}(x)\\
&&y^{''}(x) = -2 Q(x) y_1(x) y_2(x) + 2 y_1^{'}(x) y_2^{'}(x)\\
&&y^{'''}(x)=-2 Q^{'}(x) y_1(x) y_2(x) - 4 Q(x) (y_2(x) y_1^{'}(x) +y_1(x) y_2^{'}(x)) \\
&&y^{''''}(x)= (8 Q^2(x)-2 Q^{''}(x)) y_1(x) y_2(x)-6 Q^{'}(x)(y_2(x) y_1^{'}(x) +y_1(x) y_2^{'}(x)) -8 Q(x) y_1^{'}(x) y_2^{'}(x)
\end{eqnarray}</span>
Now we insert the expressions above into our target ODE and recognize that there will be only three types of terms, proportional respectively to <span class="math-container">$y_1(x) y_2(x)$</span>, to <span class="math-container">$(y_1(x) y_2^{'}(x) + y_2(x) y_1^{'}(x))$</span>, and to <span class="math-container">$y_1^{'}(x) y_2^{'}(x)$</span>, with the proportionality constants being polynomials of order at most four in the variable <span class="math-container">$1/x$</span>. Now all we need to do is choose the parameters <span class="math-container">$(A,\kappa,\mu)$</span> of our input ODE so that the polynomials in question are all identically zero. Even though this seems to be a very strong requirement, it turns out that solutions do exist for a certain choice of the parameters <span class="math-container">$\left(a_i \right)_{i=1}^5$</span> of the target ODE.
<p>As a matter of fact the following is true.
Let <span class="math-container">$a_2$</span> and <span class="math-container">$a_3$</span> be arbitrary real numbers. Then let:
<span class="math-container">\begin{eqnarray}
a_1&=& 3\\
a_4&=& a_1 \cdot a_3\\
a_5&=&0
\end{eqnarray}</span>
Then the set of functions <span class="math-container">$(y_1(x) y_2(x), y_1(x)^2,y_2(x)^2)$</span> where
<span class="math-container">\begin{eqnarray}
y_{1}(x) &=& M_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x) \\
y_{2}(x) &=& W_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x)
\end{eqnarray}</span>
and <span class="math-container">$(k,\mu,A)= (0,\sqrt{1-a_2}/2,-a_3/4)$</span> solves the target ODE.
The following Mathematica code verifies that:</p>
<pre><code>In[125]:= {a1, a2, a3, a4, a5} = RandomInteger[{1, 10}, 5];
a1 = 3;
a4 = a1 a3;
a5 = 0;
{k, mu, A} = {0, Sqrt[1 - a2]/2, -a3/4};
y1[x_] = WhittakerM[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
y2[x_] = WhittakerW[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
FullSimplify[(D[#, {x, 4}] +
a1/x D[#, {x, 3}] + (a3 + a2/x^2) D[#, {x, 2}] +
a4/x D[#, {x, 1}] + a5 #) & /@ {y1[x] y2[x], y1[x]^2, y2[x]^2}]
Out[132]= {0, 0, 0}
</code></pre>
<p>Update 0:</p>
<p>It is more likely for us to obtain new solutions if we introduce additional parameters into the system. This can be done by replacing <span class="math-container">$y_{1,2}(x) \rightarrow r_0 y_{1,2}(x) + r_1 y_{1,2}^{'}(x)$</span> where <span class="math-container">$r_0$</span> and <span class="math-container">$r_1$</span> are new parameters that come into play. Having done that we just repeat the procedure above and interestingly enough we find a new solution.
Let <span class="math-container">$a_3$</span> and <span class="math-container">$a_5$</span> be arbitrary real numbers. Now let:
<span class="math-container">\begin{eqnarray}
a_1&=&3\\
a_2&=&0\\
a_4&=& \frac{3}{2} \left(a_3+\sqrt{a_3^2-4 a_5}\right)
\end{eqnarray}</span>
Then the set of functions:
<span class="math-container">\begin{eqnarray}
\left(
\begin{array}{r}
(r_0 y_1(x) + r_1 y_1^{'}(x))\cdot (r_0 y_2(x) + r_1 y_2{'}(x))\\
(r_0 y_1(x) + r_1 y_1^{'}(x))\cdot (r_0 y_1(x) + r_1 y_1{'}(x))\\
(r_0 y_2(x) + r_1 y_2^{'}(x))\cdot (r_0 y_2(x) + r_1 y_2{'}(x))
\end{array}
\right)
\end{eqnarray}</span>
where
<span class="math-container">\begin{eqnarray}
y_{1}(x) &=& M_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x) \\
y_{2}(x) &=& W_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x)
\end{eqnarray}</span>
and <span class="math-container">$(k,\mu,A)= (0,1/2,1/8(-a_3-\sqrt{a_3^2-4 a_5}))$</span> and
<span class="math-container">\begin{equation}
r_0=\frac{\imath \sqrt{a_5} r_1}{\sqrt{2} \sqrt{a_3-\sqrt{a_3^2-4 a_5}}}
\end{equation}</span>
solves the target ODE.
Again, we used Mathematica to verify our result.</p>
<pre><code>In[292]:= a3 =.; a5 =.; r1 =.; r0 =.; Clear[y1]; Clear[y2];
{a1, a2, a4} = {3, 0, 3/2 (a3 + Sqrt[a3^2 - 4 a5])};
r0 = (I Sqrt[a5] r1)/(Sqrt[2] Sqrt[a3 - Sqrt[a3^2 - 4 a5]]);
{A, mu, k} = {1/8 (-a3 - Sqrt[a3^2 - 4 a5]), 1/2, 0};
y1[x_] = WhittakerM[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
y2[x_] = WhittakerW[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
FullSimplify[(D[#, {x, 4}] +
a1/x D[#, {x, 3}] + (a3 + a2/x^2) D[#, {x, 2}] + a4/x D[#, x] +
a5 #) & /@ {(r0 y1[x] + r1 D[y1[x], x]) (r0 y2[x] +
r1 D[y2[x], x]), (r0 y1[x] + r1 D[y1[x], x]) (r0 y1[x] +
r1 D[y1[x], x]), (r0 y2[x] + r1 D[y2[x], x]) (r0 y2[x] +
r1 D[y2[x], x])}]
Out[298]= {0, 0, 0}
</code></pre>
<p>Update 1:</p>
<p>Here is a third possible way of tackling the problem in question.
We know that we can always annihilate the coefficient at the third order derivative by writing <span class="math-container">$y(x)=\exp(-1/4 \int(a_1/x dx)) \cdot v(x)$</span> and then applying the procedure described above to the equation satisfied by the function <span class="math-container">$v(x)$</span>. If we do this, we obtain further solutions to the ODE, as shown below.</p>
<p>Solution 1:</p>
<p>Let <span class="math-container">$a_3$</span> be real and then let the following:
<span class="math-container">\begin{eqnarray}
a_1&=&4\\
a_2&=&0\\
a_4&=&\frac{a1 a3}{2}\\
a_5&=&0
\end{eqnarray}</span>
Then define
<span class="math-container">\begin{eqnarray}
y_{1}(x) &=& M_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x) \\
y_{2}(x) &=& W_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x)
\end{eqnarray}</span>
where <span class="math-container">$(k,\mu,A)= (0,1/2,-a_3/4))$</span>.</p>
<p>Then the set of functions <span class="math-container">$(y_1(x) y_2(x)/x, y_1(x)^2/x,y_2(x)^2/x)$</span> satisfy the ODE in question. The Mathematica code snippet verifies it:</p>
<pre><code>In[13]:= a3 =.; a1 =.; k =.; A =.; mu =.; Clear[y1]; Clear[y2];
{a1, a2, a4, a5} = {4, 0, (a1 a3)/2, 0};
{k, A, mu} = {0, -a3/4, 1/2};
y1[x_] = WhittakerM[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
y2[x_] = WhittakerW[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
FullSimplify[(D[#, {x, 4}] +
a1/x D[#, {x, 3}] + (a3 + a2/x^2) D[#, {x, 2}] + a4/x D[#, x] +
a5 #) & /@ {1/x y1[x] y2[x], 1/x y1[x]^2, 1/x y2[x]^2}]
Out[18]= {0, 0, 0}
</code></pre>
<p>Solution 2:</p>
<p>Let <span class="math-container">$a_5$</span> be real and then let the following:
<span class="math-container">\begin{eqnarray}
a_1&=&8\\
a_2&=&12\\
a_3&=&0\\
a_4&=&0
\end{eqnarray}</span>
and
<span class="math-container">\begin{equation}
r_0=\frac{\sqrt{a_5}}{2(-a_5)^{1/4}} r_1
\end{equation}</span>
Then define
<span class="math-container">\begin{eqnarray}
y_{1}(x) &=& M_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x) \\
y_{2}(x) &=& W_{\frac{\kappa}{2\sqrt{A}},\mu}(2\sqrt{A} x)
\end{eqnarray}</span>
where <span class="math-container">$(k,\mu,A)= (0,1/2,-1/4 \sqrt{-a_5})$</span>.</p>
<p>Then the set of functions:
<span class="math-container">\begin{eqnarray}
\left(
\begin{array}{r}
(r_0 y_1(x) + r_1 y_1^{'}(x))\cdot (r_0 y_2(x) + r_1 y_2{'}(x))\cdot x^{-2}\\
(r_0 y_1(x) + r_1 y_1^{'}(x))\cdot (r_0 y_1(x) + r_1 y_1{'}(x))\cdot x^{-2}\\
(r_0 y_2(x) + r_1 y_2^{'}(x))\cdot (r_0 y_2(x) + r_1 y_2{'}(x))\cdot x^{-2}
\end{array}
\right)
\end{eqnarray}</span>
satisfy the ODE in question. The Mathematica code snippet verifies it:</p>
<pre><code>In[1]:= a1 =.; a2 =.; a3 =.; a4 =.; a5 =.; r1 =.; r0 =.; Clear[y1]; \
Clear[y2];
{a1, a2, a3, a4} = {8, 12, 0, 0};
r0 = Sqrt[a5]/(2 (-a5)^(1/4)) r1;
mu = 1/2; k = 0; A = 1/4 (-Sqrt[- a5]);
y1[x_] = WhittakerM[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
y2[x_] = WhittakerW[k/(2 Sqrt[A]), mu, 2 Sqrt[A] x];
FullSimplify[(D[#, {x, 4}] +
a1/x D[#, {x, 3}] + (a3 + a2/x^2) D[#, {x, 2}] + a4/x D[#, x] +
a5 #) & /@ {(r0 y1[x] +
r1 D[y1[x], x]) (r0 y2[x] + r1 D[y2[x], x])/
x^2, (r0 y1[x] + r1 D[y1[x], x]) (r0 y1[x] + r1 D[y1[x], x])/
x^2, (r0 y2[x] + r1 D[y2[x], x]) (r0 y2[x] + r1 D[y2[x], x])/x^2}]
Out[7]= {0, 0, 0}
</code></pre>
<p>Of course, what we have found are only very particular solutions to the target ODE; however, one can readily see that by introducing additional transformations, and in turn additional parameters, we might be able to find more solutions. One possibility would be to try a change of variables <span class="math-container">$x\rightarrow f(x)$</span> and <span class="math-container">$d/d x \rightarrow 1/f^{'}(x) d/d x$</span> in the input equation and then choose <span class="math-container">$f(x)$</span> in such a way that the target equation is solved.</p>
|
2,652,675 | <p>Given that A = $\begin{bmatrix} 2 & 1 \\ -5 & -4 \end{bmatrix} $ and B = $\begin{bmatrix} 3 & -1 \\ -1 & 0 \end{bmatrix} $ </p>
<p>Find a 2 X 2 matrix C such that $CA= B$</p>
<p>I multiply both sides on the right by $A^{-1}$. </p>
<p>Since $AA^{-1} = I$, </p>
<p>$ CI = BA^{-1}$ </p>
<p>Since $CI = IC = C$ </p>
<p>$ C = BA^{-1} $ </p>
<p>However, when I carry on and find out the answer to matrix C, I can’t get the answer. Where have I gone wrong ? </p>
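<p>For reference, the approach itself is sound (right-multiplying by $A^{-1}$), so the problem is most likely in the fraction arithmetic. An exact-arithmetic check (my own addition):</p>

```python
from fractions import Fraction

A = [[Fraction(2), Fraction(1)], [Fraction(-5), Fraction(-4)]]
B = [[Fraction(3), Fraction(-1)], [Fraction(-1), Fraction(0)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # det(A) = -3, so A is invertible
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

C = matmul(B, A_inv)          # CA = B  =>  C = B A^{-1} (multiply on the right)
assert matmul(C, A) == B
print(C)                      # entries 17/3, 5/3, -4/3, -1/3
```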
| confused_wallet | 390,931 | <p>Note that the definition for the topology generated by $\mathscr{B}$ is the set of <em>all</em> subsets $U$ of $X$ with the stated property. It follows that the topology generated is unique.</p>
|
11,435 | <p>I know what they are and how to define them. They have to serve some purpose - else why include them? But I have never used them, and the examples on this site and in the docs never show a practical use of <code>UpValues</code>.</p>
<p>Can someone give me an example to see when they are actually needed?</p>
| rm -rf | 5 | <p>Yes, <code>UpValues</code> are certainly useful in that you can bind definitions and custom behaviours to the symbol rather than the operator. For instance, I can define (a simple, silly example):</p>
<pre><code>g /: g[x_] + g[y_] := g[x] g[y]
</code></pre>
<p>to actually multiply the two when I add them. This definition is now stored in:</p>
<pre><code>UpValues[g]
(* {HoldPattern[g[x_] + g[y_]] :> g[x] g[y]} *)
</code></pre>
<p>The alternative would be to unprotect <code>Plus</code> and then overload it with this definition as:</p>
<pre><code>Unprotect@Plus;
g[x_] + g[y_] := g[x] g[y]
Protect@Plus;
</code></pre>
<p>The advantages of <code>Upvalues</code> over overloading built-in operators are:</p>
<ul>
<li>It is safer. Modifying built-ins is risky because you don't know what might break internally with your custom definitions</li>
<li><em>Mathematica</em> reads custom definitions first (except for perhaps <code>Times</code>) before built-in ones. As a result, overloading operators and functions with more and more additional definitions could slow things down because it has to consider the custom definitions even in situations where they aren't necessary.</li>
<li>All the custom definitions for a symbol/object are collected in its <code>UpValues</code>. With the alternate approach, you don't know, at a glance, which functions have been modified to treat this symbol/object differently.</li>
</ul>
<hr>
<p>For a more informative (and relevant) example, I'll turn to Sal Mangano's <a href="http://proquest.safaribooksonline.com/book/mathematica/9781449382001/2dot0-introduction/id3364358">"Mathematica Cookbook", Chapter 2: Functional Programming</a>, which illustrates the idea and reinforces the points I made above:</p>
<blockquote>
<p>There are some situations in which you would like to give new meaning to functions native to <em>Mathematica</em>. These situations arise when you introduce new types of objects. For example, imagine <em>Mathematica</em> did not already have a package that supported quaternions (a kind of noncommutative generalization of complex numbers) and you wanted to develop your own. Clearly you would want to use standard mathematical notation, but this would amount to defining new downvalues for the built-in <em>Mathematica</em> functions <code>Plus</code>, <code>Times</code>, etc.</p>
<pre><code>Unprotect[Plus,Times]
Plus[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] := ...
Times[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] := ...
Protect[Plus,Times]
</code></pre>
<p>If quaternion math were very common, this might be a valid approach. However, <em>Mathematica</em> provides a convenient way to associate the definitions of these operations with the quaternion rather than with the operations. These associations are called UpValues, and there are two syntax variations for defining them. The first uses operations called <code>UpSet</code> (<code>^=</code>) and <code>UpSetDelayed</code> (<code>^:=</code>), which are analogous to <code>Set</code> (<code>=</code>) and <code>SetDelayed</code> (<code>:=</code>) but create upvalues rather than downvalues.</p>
<pre><code>Plus[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] ^:= ...
Times[quaternion[a1_,b1_,c1_,d1_], quaternion[a2_,b2_,c2_,d2_]] ^:= ...
</code></pre>
<p>The alternate syntax is a bit more verbose but is useful in situations in which the symbol the upvalue should be associated with is ambiguous. For example, imagine you want to define addition of a complex number and a quaternion. You can use <code>TagSet</code> or <code>TagSetDelayed</code> to indicate that the operation is an upvalue for quaternion rather than <code>Complex</code>.</p>
<pre><code>quaternion /: Plus[Complex[r_, im_], quaternion[a1_,b1_,c1_,d1_]] := ...
quaternion /: Times[Complex[r_, im_], quaternion[a1_,b1_,c1_,d1_]] := ...
</code></pre>
<p><code>Upvalues</code> solve two problems. First, they eliminate the need to unprotect native <em>Mathematica</em> symbols. Second, they avoid bogging down <em>Mathematica</em> by forcing it to consider custom definitions every time it encounters common functions like <code>Plus</code> and <code>Times</code>. (<em>Mathematica</em> always uses custom definitions before built-in ones.) By associating the operations with the new types (in this case quaternion), <em>Mathematica</em> need only consider these operations in expressions where quaternion appears. If both upvalues and downvalues are present, upvalues have precedence, but this is something you should avoid.</p>
</blockquote>
|
4,159,337 | <p>How would you show
<span class="math-container">$$\log_2 3 + \log_3 4 + \log_4 5 + \log_5 6 > 5?$$</span></p>
<p>After trying to rewrite the expression in a single base, I only obtained a mess.
The hint mentioned that the proof can make use of quadratic equations.</p>
<p>Could you provide some input?
Would a graphical approach be a good idea?</p>
| Barry Cipra | 86,747 | <p>Converting to logarithms in a single base and using the AM-GM inequality, we have</p>
<p><span class="math-container">$$\begin{align}
\log_23+\log_34+\log_45+\log_56
&={\log3\over\log2}+{\log4\over\log3}+{\log5\over\log4}+{\log6\over\log5}\\
&\ge4\sqrt[4]{{\log3\over\log2}\cdot{\log4\over\log3}\cdot{\log5\over\log4}\cdot{\log6\over\log5}}\\
&=4\sqrt[4]{\log6\over\log2}\\
&=4\sqrt[4]{1+{\log3\over\log2}}
\end{align}$$</span></p>
<p>So to show the desired inequality, it suffices to show that <span class="math-container">$256(1+\log3/\log2)\gt625$</span>, or <span class="math-container">$\log3/\log2\gt369/256$</span>. But</p>
<p><span class="math-container">$$2\log3=\log9\gt\log8=3\log2$$</span></p>
<p>and</p>
<p><span class="math-container">$$3\cdot256=768\gt738=2\cdot369$$</span></p>
<p>together tell us
<span class="math-container">$${\log3\over\log2}\gt{3\over2}\gt{369\over256}$$</span> as desired.</p>
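<p>As a numerical sanity check (my own addition): the sum is about $5.12$ and the AM-GM lower bound is about $5.07$, so the inequality holds comfortably:</p>

```python
import math

terms = [math.log(n + 1, n) for n in range(2, 6)]    # log_2 3, ..., log_5 6
total = sum(terms)
lower = 4 * (math.log(6) / math.log(2)) ** 0.25      # the AM-GM bound
assert 5 < lower <= total
print(total, lower)   # about 5.121 and 5.072
```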
|
1,606,023 | <blockquote>
<p>Given that $U_{n+1} = U_n-U_n^2$ and $0 < U_0 < 1$, show that $0 < U_n \leq 1/4$ for all $n \in \mathbb{N}^*$.</p>
</blockquote>
<p>Given that $S_n=\sum_{k=0}^{n}\frac{1}{1-U_k}$,</p>
<p>calculate $S_n$ as a function of $n$.</p>
<p>I see that $\frac{1}{1-U_{n}}=\frac{1}{U_{n+1}}-\frac{1}{U_n}$</p>
<p>How can I calculate the sum?</p>
| zu7 | 303,898 | <ol>
<li>Show that $0<U_{n+1}<U_n$ for all $n$</li>
<li>Show that $U_1\leq 1/4$ for any choice of $0<U_0<1$ (the maximum of $x-x^2$ on $(0,1)$ is $1/4$, attained at $x=1/2$)</li>
</ol>
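<p>For the second part, the identity noted in the question telescopes directly: $S_n=\sum_{k=0}^{n}\frac{1}{1-U_k}=\sum_{k=0}^{n}\left(\frac{1}{U_{k+1}}-\frac{1}{U_k}\right)=\frac{1}{U_{n+1}}-\frac{1}{U_0}$. A quick numerical check (my own addition, with the arbitrary choice $U_0=0.6$):</p>

```python
U = [0.6]                                  # any U_0 in (0, 1)
for _ in range(50):
    U.append(U[-1] - U[-1] ** 2)           # U_{n+1} = U_n - U_n^2

n = 49
S = sum(1 / (1 - U[k]) for k in range(n + 1))
assert abs(S - (1 / U[n + 1] - 1 / U[0])) < 1e-9   # telescoped form
assert all(0 < u <= 0.25 for u in U[1:])           # the bound from the first part
print(S)
```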
|
2,121,611 | <p>Compute a quadrature of</p>
<p>$\int_c^d\int_a^b f(x,y)dxdy$</p>
<p>using the Simpson rule and estimate the error.</p>
<p>So the Simpson rule says </p>
<p>$S(f) = (b-a)/6(f(a)+4f((a+b)/2) +f(b))$</p>
<p>So I get </p>
<p>$\int_c^d(b-a)/6(f(a)+4f((a+b)/2) +f(b))dy$</p>
<p>Is that even correct? How do I go on?</p>
| Alexander | 806,961 | <p>A C implementation for applying Simpson's Rule towards solving double integrals can be found here if you are interested.
<a href="https://github.com/adel314/Small-C-Plus--for-TMS9900" rel="nofollow noreferrer">Simpson integration technique for
evaluating double integrals</a></p>
<p>It can be also represented in the following form:</p>
<p><span class="math-container">$$
S_x(y_j) = f(x_0, y_j) + f(x_{N_x},y_j) + 4\sum_{i = 1}^{N_x/2} f(x_{2i-1},y_j) + 2\sum_{i = 1}^{(N_x-2)/2} f(x_{2i},y_j)
$$</span></p>
<p><span class="math-container">$$
S = \frac{h_xh_y}{9}\left[S_x(y_0) + S_x(y_{N_y}) + 4\sum_{j = 1}^{N_y/2} S_x(y_{2j-1}) + 2\sum_{j = 1}^{(N_y-2)/2} S_x(y_{2j})\right]
$$</span></p>
<p>Where
<span class="math-container">$$
h_x = \frac{UpperLimit_x - LowerLimit_x}{N_x}
$$</span></p>
<p>and
<span class="math-container">$$
h_y = \frac{UpperLimit_y - LowerLimit_y}{N_y}
$$</span></p>
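<p>For concreteness, here is a short Python sketch of the same iterated approach (the function names are mine, not from the linked C code): apply the composite 1D rule along $x$ for each fixed $y$, then apply it again along $y$.</p>

```python
def simpson_1d(g, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))            # even interior nodes
    return s * h / 3

def simpson_2d(f, a, b, c, d, nx=10, ny=10):
    """Iterated Simpson: integrate f(x, y) over [a,b] x [c,d]."""
    return simpson_1d(lambda y: simpson_1d(lambda x: f(x, y), a, b, nx), c, d, ny)

# Example: the integral of x*y over [0,1] x [0,2] is exactly 1
# (Simpson's rule is exact for polynomials of degree <= 3).
print(simpson_2d(lambda x, y: x * y, 0, 1, 0, 2))
```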
|
764,437 | <p>I am trying to prove that $\mathbb{Z}[i]\cong \mathbb{Z}[x]/(x^2+1)$.</p>
<p>My initial plan was to use the first isomorphism theorem. I showed that there is a map $\phi: \mathbb{Z}[x] \rightarrow \mathbb{Z}[i]$, given by $\phi(f)=f(i).$ This map is onto and a ring homomorphism. The part I have a question on is showing that $\ker(\phi) = (x^{2}+1)$. </p>
<p>One containment is trivial: $(x^2+1)\subset \ker(\phi)$. To show $\ker(\phi)\subset (x^2+1)$, let $f \in \ker(\phi)$; then $f$ has $i$ (and, since its coefficients are real, also $-i$) as a root, so $f=g(x-i)(x+i)=g(x^2+1).$
How can I prove that $f \in \mathbb{Z}[x]$ implies $g \in \mathbb{Z}[x]$? </p>
| Ellya | 135,305 | <p>When quotient out an ideal, we consider what happens to the ring when all the elements in the ideal are considered as identity elements.</p>
<p>Now if $x^2+1=0\Rightarrow x=\pm i$ let us take the "+" root.</p>
<p>$\mathbb{Z}[x]/(x^2+1)=\{f\in\mathbb{Z}[x]\,|x^2+1=0\}=\{f\in\mathbb{Z}[x]\,|x=i\}=\{a+bi|a,b\in\mathbb{Z}\}=\mathbb{Z}[i]$</p>
<p>I.e they are isomorphic.</p>
<p>I have answered a similar question here <a href="https://math.stackexchange.com/questions/732874/is-this-quotient-ring-isomorphic-to-the-complex-numbers/732892#732892">Is this quotient Ring Isomorphic to the Complex Numbers</a></p>
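<p>A concrete check (my own illustration, not part of the proof): reducing an integer polynomial modulo $x^2+1$ does not change its value at $x=i$, so the evaluation map $\phi$ factors through $\mathbb{Z}[x]/(x^2+1)$, and each class is represented by some $a+bx \leftrightarrow a+bi$.</p>

```python
def reduce_mod_x2_plus_1(coeffs):
    """coeffs[k] is the coefficient of x^k; return (a, b) with f = a + b*x mod x^2+1."""
    a = b = 0
    for k, c in enumerate(coeffs):
        sign = (-1) ** (k // 2)      # x^(2m) = (-1)^m and x^(2m+1) = (-1)^m * x
        if k % 2 == 0:
            a += sign * c
        else:
            b += sign * c
    return a, b

f = [7, 1, 0, -2, 0, 3]              # f(x) = 3x^5 - 2x^3 + x + 7
a, b = reduce_mod_x2_plus_1(f)
f_at_i = sum(c * 1j**k for k, c in enumerate(f))
assert f_at_i == a + b * 1j          # evaluating f at i agrees with the residue a + b*i
print(a, b)
```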
|
1,191,740 | <p>$a_{n+1} - a_n = 3n^2 - n$ ;$a_0=3$</p>
<p>I need help with finding the particular solution.
Based on a chart in my textbook, a forcing term of $n^2$ gives a particular solution of the form
$A_2n^2 + A_1n + A_0$, and a term of $n$ gives the particular solution $A_1n+A_0$.<br>
So given $3n^2 - n$, my first thought was that if the equation were $n^2-n$, you could have something like $An^2 + Bn+C - (Bn + C) = An^2$. </p>
<p>Is this process correct if I simply had $n^2-n$? If so, how would the $3$ in $3n^2$ affect this step?</p>
| DeepSea | 101,504 | <p><strong>hint</strong>: $a_n = (a_n-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots +(a_2-a_1)+(a_1-a_0)+a_0$</p>
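<p>Following this hint, summing the differences gives $a_n = a_0 + \sum_{k=0}^{n-1}(3k^2-k)$; a quick check in Python (the closed form $a_n = 3 + n(n-1)^2$ is my own simplification of that sum, not from the question's textbook):</p>

```python
# a_{n+1} - a_n = 3n^2 - n with a_0 = 3, computed by telescoping the differences.
def a_recursive(n):
    a = 3
    for k in range(n):
        a += 3 * k**2 - k
    return a

# Compare against the simplified closed form a_n = 3 + n*(n-1)^2.
for n in range(20):
    assert a_recursive(n) == 3 + n * (n - 1)**2
print(a_recursive(5))
```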
|
7,064 | <p>The <a href="http://en.wikipedia.org/wiki/Long_division" rel="nofollow">Wikipedia article</a> on long division explains the different notations. I still use the European notation I learned in elementary school in Colombia. I had difficulty adapting to the US/UK notation when I moved to the US. However, I did enjoy seeing my classmates' puzzled faces in college whenever we had a professor that preferred the European notation.</p>
<p>What long division notation do you use and where did you learn it?</p>
| Arturo Magidin | 742 | <p>I use the same one that is used in the U.S.; it is the one that is (or at least, was up until the 80s) taught in Mexico. I had some classmates in high school who had emigrated from Argentina: they used the one described in the Wikipedia article as "European", not the one described as "Latin America". (I'll also note that beginners in Mexico would write the full computation as described in "US notation"; doing the computation mentally was more an 'advanced shortcut' than part of the notation, as I remember).</p>
|
2,421,421 | <blockquote>
<p>Evaluate
$$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$ </p>
</blockquote>
<p>I tried to solve this by L'Hôpital's rule, but that doesn't give a solution. I'd appreciate a clue.</p>
| farruhota | 425,072 | <p>Alternatively:
$$\lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}=\lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}\cdot \frac{\sqrt{1-\cos(2x)}}{\sqrt{1-\cos(2x)}}\cdot \frac{\sqrt{\pi}+\sqrt{2x}}{\sqrt{\pi}+\sqrt{2x}}=$$
$$\lim_{x\to \pi/2} \frac{|\sin(2x)|\cdot(\sqrt{\pi}+\sqrt{2x})}{(\pi-2x)\cdot\sqrt{1-\cos(2x)}}=\lim_{x\to \pi/2} \frac{|\sin(\pi-2x)|\cdot\sqrt{2\pi}}{\pi-2x}$$
Note:
$$\lim_{x\to \pi/2^+} \frac{|\sin(\pi-2x)|\cdot\sqrt{2\pi}}{\pi-2x}=\lim_{x\to \pi/2^+} \frac{-\sin(\pi-2x)\cdot\sqrt{2\pi}}{\pi-2x}=-\sqrt{2\pi}.$$
$$\lim_{x\to \pi/2^-} \frac{|\sin(\pi-2x)|\cdot\sqrt{2\pi}}{\pi-2x}=\lim_{x\to \pi/2^-} \frac{\sin(\pi-2x)\cdot\sqrt{2\pi}}{\pi-2x}=\sqrt{2\pi}.$$</p>
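<p>A quick numeric check of the two one-sided values (an illustration, not a proof):</p>

```python
import math

def g(x):
    # the original expression: sqrt(1 + cos(2x)) / (sqrt(pi) - sqrt(2x))
    return math.sqrt(1 + math.cos(2 * x)) / (math.sqrt(math.pi) - math.sqrt(2 * x))

eps = 1e-6
left = g(math.pi / 2 - eps)    # should be near  sqrt(2*pi)
right = g(math.pi / 2 + eps)   # should be near -sqrt(2*pi)
print(left, right, math.sqrt(2 * math.pi))
```

Since the two one-sided values differ, the two-sided limit does not exist.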
|
954,281 | <p>Given a complex measure: $\mu:\Sigma\to\mathbb{C}$.</p>
<p><img src="https://i.stack.imgur.com/YS26y.jpg" alt="Variation Measure"></p>
<p>Consider its decomposition into positive measures:
$$\mu=\Re_+\mu-\Re_-\mu+i\Im_+\mu-i\Im_-\mu=:\sum_{\alpha=0\ldots3}i^\alpha\mu_\alpha$$</p>
<blockquote>
<p>Does it split into disjoint regions: $\mu_\alpha(E)=|\mu|(E\cap A_\alpha)\quad(A_\alpha\cap A_\beta=\varnothing)$</p>
</blockquote>
| C-star-W-star | 79,762 | <p>Consider the Radon-Nikodym derivatives:
$$\mu_\alpha(E)=\int_Eu_\alpha\mathrm{d}|\mu|$$
They give rise to the constraints:
$$u_\alpha\geq0:\quad|u|=1\implies u_\alpha=\chi_{A_\alpha}$$</p>
<blockquote>
<p>So the above can only hold for: $\mathrm{im}u\subseteq\{1,i,-1,-i\}$</p>
</blockquote>
<p>As an example when this fails consider:
$$\Omega:=[0,1]:\quad\mu(A):=\int_Ae^{it}\mathrm{d}t$$
<em>(This makes the failure clear, since $u$ takes values other than $1,i,-1,-i$.)</em></p>
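<p>To make the failure concrete (my own numerical illustration): for this $\mu$ we have $|\mu|(A)=\int_A 1\,\mathrm{d}t$, so the density of $\Re_+\mu$ is $u_0(t)=\max(\cos t,0)$, which takes values strictly between $0$ and $1$ on $(0,1]$ and hence cannot be an indicator $\chi_{A_0}$:</p>

```python
import math

ts = [k / 10 for k in range(11)]                 # sample points in [0, 1]
u0 = [max(math.cos(t), 0.0) for t in ts]         # density of Re_+ mu w.r.t. |mu|
# on (0, 1], cos t lies in (cos 1, 1), so u0 is never 0 or 1 there:
assert all(0 < v < 1 for v in u0[1:])
print(min(u0), max(u0))
```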
|