| qid | question | author | author_id | answer |
|---|---|---|---|---|
674,310 | <p>I am having trouble with a proof for linear algebra. Could somebody explain how to prove that if $A$ and $B$ are both $n\times n$ non-singular matrices, then their product $AB$ is also non-singular? </p>
<p>A place to start would be helpful. Thank you for your time. </p>
| Ben Grossmann | 81,360 | <p>Note that a matrix is non-singular if and only if it has an inverse.</p>
<p>Suppose $A$ and $B$ have inverses $A^{-1}$ and $B^{-1}$. What do you get when you multiply
$$
(AB)(B^{-1}A^{-1})
$$
and why can we now conclude that $AB$ is non-singular?</p>
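<p>(A quick numerical sanity check of this hint; the matrices below are arbitrary examples, and the $2\times 2$ inverse formula is spelled out so the snippet is self-contained.)</p>

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix (exists iff det != 0, i.e. M is non-singular)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 3.0]]   # det = 5, non-singular
B = [[1.0, 4.0], [0.0, 2.0]]   # det = 2, non-singular

# (AB)(B^{-1}A^{-1}) = A (B B^{-1}) A^{-1} = I, so B^{-1}A^{-1} inverts AB,
# which is exactly why AB is non-singular.
P = matmul(matmul(A, B), matmul(inv2(B), inv2(A)))
I = [[1.0, 0.0], [0.0, 1.0]]
assert all(abs(P[i][j] - I[i][j]) < 1e-12 for i in range(2) for j in range(2))
```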
|
2,310,109 | <p>I'm an undergraduate with little to no background in functional analysis and topology. The whole concept of function spaces is quite fuzzy to me, and I'm having a difficult time conceptualizing it. (Things like there being different notions of compactness in general topological spaces is one of many things confusing me because of what I've learned so far. I have learned many things which turn out to be true only in metric spaces, or in $\mathbb{R}^n$ specifically.)</p>
<p>Consider the following situation:</p>
<p>Let $A \subset \mathbb{R}^n$ and $[a, b] \subset \mathbb{R}$. Let $F$ denote the set of all functions from $A$ to $[a, b]$, and $G \subset F$ denote those functions which possess a particular attribute that we are interested in.</p>
<p>In order to finish a larger proof, I'd like to show that $G$ is closed; later on, I want to work with pointwise convergence of functions in $G$. Since we are dealing with a function space, I'm a bit unsure about how to do this, because I'm uncertain about what constitutes a limit point in this space.</p>
<p>I have showed that, for any sequence $(f_n)_1^\infty$ of functions in $G$ converging pointwise to some $f \in F$, $f$ must be in $G$. I believe that this shows that $G$ is closed, and I believe that it has to do with the connection between the product topology and pointwise convergence of functions, but I would really appreciate feedback on this; have I misunderstood what "closed set" means in this context?</p>
<p>Thanks!</p>
<p>EDIT: In my case, $G \subset F$ is the set of all functions $f:A \rightarrow [a, b]$ such that $f(a) + f(b) + f(c) = c(f)$, for any three pair-wise orthogonal unit vectors $a, b, c$, where $c(f)$ is a constant depending only on $f$. </p>
<p>Second attempt: Take any function $f \in F \setminus G$. There exist two sets of orthogonal unit vectors $(v_1, v_2, v_3)$ and $(v_4, v_5, v_6)$ such that
\begin{equation*}
\Delta = \left| \sum_{i=1}^3 \left( f(v_i) - f(v_{i + 3}) \right) \right| > 0.
\end{equation*}
The set
\begin{equation*}
B_{\Delta/6} = \{ g \in F : \max_{i = 1, 2, \dots, 6} |f(v_i) - g(v_i)| < \Delta / 6 \}
\end{equation*}
is an open neighborhood of $f$. Take any $g \in B_{\Delta/6}$, and let $\delta_i = g(v_i) - f(v_i)$, for $i = 1, 2, \dots 6$, so that $|\delta_i| < \Delta / 6$. We get
\begin{equation*}
\begin{split}
\left| \sum_{i=1}^3 (g(v_i) - g(v_{i+3})) \right| = \left| \sum_{i=1}^3 (f(v_i) + \delta_i - f(v_{i+3}) - \delta_{i+3}) \right| \geq\\
\left| \sum_{i=1}^3 (f(v_i) - f(v_{i+3})) \right| - \left|\sum_{i=1}^3 (\delta_{i+3} - \delta_i) \right|
= \Delta - \left|\sum_{i=1}^3 (\delta_{i+3} - \delta_i) \right| > \Delta - \Delta = 0,
\end{split}
\end{equation*}
using the reverse triangle inequality. Thus $g \in F \setminus G$, so that $B_{ \Delta / 6} \subset F \setminus G$, meaning $F \setminus G$ is open. We conclude that $(F \setminus G)^C = G$ is closed.</p>
<p>Any thoughts?</p>
| Prahlad Vaidyanathan | 89,789 | <p>What you have shown is that $G$ is <em>sequentially closed</em>, which may not imply that $G$ is <em>closed</em>. The two concepts coincide for metric spaces, but not in general. </p>
<p>Here is a standard counterexample which fits your situation as well: take $A = \mathbb{R}$, $a=0$, $b=1$, and let $G$ be the set of all functions $f:A\to [0,1]$ such that $f(x) = 0$ for all but countably many $x\in A$.</p>
<p>This set is sequentially closed because a countable union of countable sets is countable. However, it is not closed in $F$: if $g\in F$ is any function, and $U$ is a basic open neighbourhood of $g$ in the topology of point-wise convergence, then $\exists x_1,x_2,\ldots, x_n\in A$ and $\epsilon > 0$ such that
$$
U = \{f\in F : |f(x_i) - g(x_i)| < \epsilon \quad\forall 1\leq i\leq n\}
$$
Now simply take $f\in G$ such that $f(x_i) = g(x_i)$ for all $1\leq i\leq n$ and $f(x) = 0$ if $x\notin \{x_1,x_2,\ldots, x_n\}$. Then $f\in U$, so $U\cap G\neq \emptyset$. Hence, $g\in \overline{G}$.</p>
|
2,798,207 | <p>This problem also needs to be extended to an $n\times m$ chessboard. I tried to think like this:</p>
<p>First I choose a place for the first king, in $64$ ways. Then I have a choice of $64-5 = 59$ squares for the second king. But this solution is not right, because this is not the case if I place the first king in the outermost rows and columns of squares: then I have $64-4 = 60$ squares for the other king. How can I solve this problem?</p>
| nonuser | 463,553 | <p>You have ${64\choose 2} = 32\cdot 63=2016$ ways to put $2$ kings on the board without restriction (since the kings are identical). Now, there are $7\cdot 8\cdot 2+7\cdot 7\cdot 2 =210$ adjacent pairs of squares (counting directly: each row contains $7$ horizontally adjacent pairs and there are $8$ rows, the same holds for columns, and the two diagonal directions contribute $7\cdot 7$ pairs each).</p>
<p>So we can place them in $2016-210 = 1806$ ways so that they are not adjacent. </p>
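<p>If you want to double-check these counts, a brute-force enumeration over all unordered pairs of squares is small enough to run directly (a sketch; coordinates are $0$-based):</p>

```python
from itertools import combinations

squares = [(r, c) for r in range(8) for c in range(8)]

def adjacent(s, t):
    # Kings attack all 8 surrounding squares, diagonals included,
    # i.e. the Chebyshev distance between the squares is 1.
    return max(abs(s[0] - t[0]), abs(s[1] - t[1])) == 1

pairs = list(combinations(squares, 2))            # unordered placements: 2016
touching = sum(adjacent(s, t) for s, t in pairs)  # adjacent pairs: 210
apart = len(pairs) - touching                     # non-adjacent placements: 1806
```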
|
2,798,207 | <p>This problem also needs to be extended to an $n\times m$ chessboard. I tried to think like this:</p>
<p>First I choose a place for the first king, in $64$ ways. Then I have a choice of $64-5 = 59$ squares for the second king. But this solution is not right, because this is not the case if I place the first king in the outermost rows and columns of squares: then I have $64-4 = 60$ squares for the other king. How can I solve this problem?</p>
| Юрій Ярош | 517,661 | <p>We will solve the problem by cases, counting two kings as attacking when their squares share an edge (horizontally or vertically adjacent), which matches the counts in the question; if diagonal adjacency also counts as attacking, the same case analysis gives $1806$ instead.<br>
1) We put the first king on a corner square. There are $4$ ways to do it, and for the second king we have $64-3=61$ squares (the corner itself and its $2$ orthogonal neighbours are excluded). That gives $4\cdot61=244$ ways.<br>
2) We put the first king on a non-corner edge square. There are $6+6+6+6=24$ ways to do it, and then for the second king there are $64-4=60$ squares. So it is $24\cdot60=1440$ ways.<br>
3) We put the first king neither in a corner nor on an edge. There are $6\cdot6=36$ ways for the first king, and for the second, $64-5=59$ squares. So it is $36\cdot59=2124$ ways.<br>
Together it is $(2124+1440+244)/2=1904$ ways. We divide by 2 because each placement is counted twice in the case analysis, since the order of putting down the kings doesn't matter.
And now, I think, it is easy to generalise to the $n\times m$ case.</p>
|
2,798,207 | <p>This problem also needs to be extended to an $n\times m$ chessboard. I tried to think like this:</p>
<p>First I choose a place for the first king, in $64$ ways. Then I have a choice of $64-5 = 59$ squares for the second king. But this solution is not right, because this is not the case if I place the first king in the outermost rows and columns of squares: then I have $64-4 = 60$ squares for the other king. How can I solve this problem?</p>
| Barry Cipra | 86,747 | <p>If you want "non-attacking" placements, in which "adjacent" includes diagonally adjacent, then the answer is</p>
<p>$$\frac{4(64-4)+4\cdot6(64-6)+6\cdot6(64-9)}{2} = 1806.$$</p>
<p>That is, the "first" king goes either in one of $4$ corners and the "second" king avoids it and the $3$ adjacent squares, or it goes in one of the $6$ other squares along one of the $4$ sides, with the second king avoiding it and the $5$ adjacent squares, or it goes in one of the $6\cdot6$ interior squares and the second king avoids it and the $8$ surrounding squares. That gives the numerator. The denominator comes because you don't want to distinguish "first" from "second."</p>
<p>If "adjacent" is restricted to horizontal and vertical adjacency, then the answer is</p>
<p>$$\frac{4(64-3)+4\cdot6(64-4)+6\cdot6(64-5)}{2} = 1904,$$</p>
<p>by the same reasoning.</p>
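<p>For the $n\times m$ extension asked about in the question, the same case analysis collapses to the closed form $\binom{nm}{2} - \bigl[n(m-1)+m(n-1)+2(n-1)(m-1)\bigr]$ for the diagonal-inclusive ("non-attacking") notion of adjacency. A sketch checking this formula against brute force on small boards (the helper names are mine, not standard):</p>

```python
from itertools import combinations

def by_formula(n, m):
    """Non-attacking placements of two identical kings on an n x m board."""
    total = n * m * (n * m - 1) // 2                 # C(nm, 2) placements
    adjacent = n * (m - 1) + m * (n - 1) + 2 * (n - 1) * (m - 1)
    return total - adjacent

def by_brute_force(n, m):
    cells = [(r, c) for r in range(n) for c in range(m)]
    # Non-adjacent means the Chebyshev distance exceeds 1.
    return sum(max(abs(a - x), abs(b - y)) > 1
               for (a, b), (x, y) in combinations(cells, 2))

for n in range(1, 6):
    for m in range(1, 6):
        assert by_formula(n, m) == by_brute_force(n, m)

# For the standard board: by_formula(8, 8) == 1806
```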
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Hailong Dao | 2,083 | <p>The <a href="https://en.wikipedia.org/wiki/Homological_conjectures_in_commutative_algebra" rel="noreferrer">homological conjectures in commutative algebra</a> using perfectoid methods. A survey on many recent developments written by André can be found <a href="https://arxiv.org/pdf/1811.09843.pdf" rel="noreferrer">here</a>. </p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| kodlu | 17,773 | <p>Konstantin Tikhomirov recently <a href="https://arxiv.org/pdf/1812.09016.pdf" rel="noreferrer">proved</a> that the probability that a random <span class="math-container">$n\times n$</span> Bernoulli matrix <span class="math-container">$M_n$</span> with independent <span class="math-container">$\pm 1$</span> entries, and <span class="math-container">$$\mathbb{P}[M_{ij}=1]=p,\quad 1\leq i,j\leq n,$$</span> is singular is
<span class="math-container">$$
\mathbb{P}[M_n~\mathrm{is~singular}]=(1-p+o_n(1))^n
$$</span>
for any fixed <span class="math-container">$p\in (0,1/2].$</span></p>
<p>This problem was considered by Komlos, Kahn-Komlos-Szemeredi, Bourgain, Tao-Vu etc., so I am unsure if it qualifies in terms of being not-so-famous. </p>
<p>Nevertheless it was exciting reading about it in Gil Kalai's blog <a href="https://gilkalai.wordpress.com/2019/02/02/konstantin-tikhomrov-the-probability-that-a-bernoulli-matrix-is-singular/" rel="noreferrer">here</a> .</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Michael | 38,448 | <p>Manolescu refuted the Triangulation Conjecture. The paper is</p>
<blockquote>
<p><em>Pin(2)-equivariant Seiberg-Witten Floer homology and the Triangulation Conjecture</em>, J. Amer. Math. Soc. <strong>29</strong> (2016), 147-176, doi:<a href="https://doi.org/10.1090/jams829" rel="noreferrer">10.1090/jams829</a>, arXiv:<a href="https://arxiv.org/abs/1303.2354" rel="noreferrer">1303.2354</a></p>
</blockquote>
<p>And you can read a blog post about it at <a href="https://ldtopology.wordpress.com/2013/03/16/manolescu-refutes-the-triangulation-conjecture/" rel="noreferrer">Low Dimensional Topology</a>.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| kodlu | 17,773 | <p>Turan's book <em>On a new method of analysis and its applications</em> focuses on bounds on power sums. The quantity
<span class="math-container">$$
T(m,n)=\inf_{|z_k|=1} \max_{\nu=1,\ldots,m} \left| \sum_{k=1}^n z_k^\nu\right|,
$$</span>
for various choices of <span class="math-container">$m,n$</span> has been of interest since then. The case <span class="math-container">$m\sim n^{B}$</span> has recently been settled by Andersson, using a character sum estimate due to Katz, in the paper available <a href="https://arxiv.org/pdf/0706.4131.pdf" rel="noreferrer">on arXiv here</a>. The result essentially states that
<span class="math-container">$$
T(m,n)\asymp \sqrt{n},
$$</span>
if <span class="math-container">$m=\lfloor n^B \rfloor,$</span> if <span class="math-container">$B>1$</span> is fixed. This was also an open problem by Montgomery in his <em>Ten lectures on the interface between analytic number theory and harmonic analysis.</em></p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Louis D | 17,798 | <p>In 2016, Andrew Suk (nearly) solved the <a href="http://www.quantamagazine.org/a-puzzle-of-clever-connections-nears-a-happy-end-20170530" rel="nofollow noreferrer">"happy ending" problem</a>; that is, he proved (<em>On the Erdős-Szekeres convex polygon problem</em>, J. Amer. Math. Soc. <strong>30</strong> (2017), 1047-1053, doi:<a href="https://doi.org/10.1090/jams/869" rel="nofollow noreferrer">10.1090/jams/869</a>, arXiv:<a href="http://arxiv.org/abs/1604.08657" rel="nofollow noreferrer">1604.08657</a>) that <span class="math-container">$2^{n+o(n)}$</span> points in general position guarantee the existence of <span class="math-container">$n$</span> points in convex position which improves the upper bound of <span class="math-container">$4^{n-o(n)}$</span> given by Erdős and Szekeres in 1935 and nearly matches the lower bound of <span class="math-container">$2^{n-2}+1$</span> given by Erdős and Szekeres in 1960 which they conjectured to be optimal.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
MathOverflow provides an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| ARG | 18,974 | <p>In 2019 <a href="https://arxiv.org/abs/1802.09077" rel="nofollow noreferrer">Anna Erschler and Tianyi Zheng</a> gave a very sharp estimate of the growth of Grigorchuk's first group. Although it was one of the first examples of a group of intermediate growth (a finitely generated group whose growth is neither polynomial nor exponential), exactly how fast it grows was not known. In Grigorchuk's original paper, the exponent <span class="math-container">$\alpha$</span> in <span class="math-container">$\mathrm{exp}(Cn^\alpha)$</span> was only known to lie somewhere between 0.5 and 0.991... Quite a few papers made improvements on the upper bound, e.g. Bartholdi brought it down to 0.7675... and Leonov up to 0.504..., but until then the exact exponent remained unknown.</p>
<p>EDIT: if <span class="math-container">$b_n$</span> is the cardinality of the ball of radius <span class="math-container">$n$</span> in Grigorchuk's first group, Erschler and Zheng proved that
<span class="math-container">$$
\alpha := \lim_{n \to \infty} \frac{ \log \log b_n}{\log n} = \frac{\log 2}{\log \lambda_0} \approx 0.7674
$$</span>
where <span class="math-container">$\lambda_0$</span> is the positive real root of the polynomial <span class="math-container">$x^3-x^2-2x-4$</span>. Note that the group may still grow somehow faster or slower than <span class="math-container">$\mathrm{exp}{(C n^\alpha)}$</span>, but they identified the dominating term in the growth. Also, since changing the generating set is a bi-Lipschitz map, this is the only part of the growth function that is guaranteed to be independent of the generating set.</p>
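<p>(As a quick numerical check of the constant: a bisection sketch for the root $\lambda_0$ of $x^3-x^2-2x-4$, confirming $\log 2/\log \lambda_0 \approx 0.7674$.)</p>

```python
from math import log

# p changes sign on [2, 3]: p(2) = -4, p(3) = 8, so the positive real
# root lies in that interval; refine it by bisection.
p = lambda x: x**3 - x**2 - 2*x - 4
lo, hi = 2.0, 3.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)

lam0 = (lo + hi) / 2
alpha = log(2) / log(lam0)        # the Erschler-Zheng growth exponent
assert abs(alpha - 0.7674) < 1e-3
```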
|
3,882,566 | <p>We have <span class="math-container">$0<b≤ a$</span>; compare the following two quantities:</p>
<p><span class="math-container">$$\underbrace{\dfrac{1+⋯+a^7+a^8}{1+⋯+a^8+a^9}}_{A} \quad \text{and} \quad \underbrace{\dfrac{1+⋯+b^7+b^8}{1+⋯+b^8+b^9}}_{B}$$</span></p>
<p>Source: Lumbreras Editors</p>
<hr />
<p>This was my strategy:</p>
<p><span class="math-container">$1 ≤ \dfrac{1+⋯+a^8}{1+⋯+b^8}$</span>, since <span class="math-container">$a^p ≥ b^p$</span> for all <span class="math-container">$p ≥ 0$</span> (monotonicity).</p>
<p>We also have that <span class="math-container">$ \dfrac{b}{a} ≤ 1 $</span>.</p>
| Calvin Lin | 54,563 | <p>Step 1: Take the inverses, so we want to compare</p>
<p><span class="math-container">$$ \frac { 1+a+\ldots a^9} { 1+a+\ldots a^8 } \text{ vs } \frac{1+b+\ldots b^9 }{1+b+\ldots b^8 } . $$</span></p>
<p>Step 2: Subtract 1 from both sides, we want to compare</p>
<p><span class="math-container">$$ \frac { a^9} { 1+a+\ldots a^8 } \text{ vs } \frac{ b^9 }{1+b+\ldots b^8 } .$$</span></p>
<p>Step 3: Comparing these quantities is obvious by cross multiplying (denominators are positive), and using the fact that <span class="math-container">$ a^9 b^k > a^k b^9$</span> for <span class="math-container">$ k < 9$</span> when <span class="math-container">$a > b$</span> (if <span class="math-container">$a = b$</span> the two quantities are equal), so</p>
<p><span class="math-container">$$ \frac { a^9} { 1+a+\ldots a^8 } > \frac{ b^9 }{1+b+\ldots b^8 } .$$</span></p>
<p>(Go back and fill in the gaps yourself.) Hence,</p>
<p><span class="math-container">$$ \frac { 1+a+\ldots a^8} { 1+a+\ldots a^9 } < \frac{1+b+\ldots b^8 }{1+b+\ldots b^9 } . $$</span></p>
<hr />
<p>Notes</p>
<ul>
<li>Please show your work. When comparing fractions, a good tactic to use is cross multiplying. Nothing harder than that was used here.</li>
<li>If we cross multiplied directly at the start, then subtracting 1 in step 2 is equivalent to removing common terms of <span class="math-container">$(1+a+\ldots+ a^8)(1+b+\ldots + b^8 )$</span>.</li>
<li>This approach generalizes to show that <span class="math-container">$ \frac{ a^n - 1 } { a^m -1 } > \frac{ b^n -1 } { b^m - 1 } $</span> if <span class="math-container">$(a-b)(n-m) > 0$</span>.</li>
</ul>
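<p>A numeric spot check of the conclusion (the sample values $a=3$, $b=2$ are arbitrary):</p>

```python
def partial_ratio(x, top, bottom):
    """(1 + x + ... + x^top) / (1 + x + ... + x^bottom)."""
    s = lambda k: sum(x ** i for i in range(k + 1))
    return s(top) / s(bottom)

a, b = 3.0, 2.0              # any 0 < b < a will do
A = partial_ratio(a, 8, 9)
B = partial_ratio(b, 8, 9)
assert A < B                 # the larger base gives the smaller fraction
```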
|
3,479,883 | <p>I know that (I might be wrong):</p>
<ul>
<li>Symbol for empty or null set : {Ø} or {}</li>
<li>Null or empty set is 'subset of all sets' as well as 'empty or null set' set</li>
<li>So, { {} } is same as { Ø }</li>
</ul>
<p>I just want to know whether { {} } or { Ø } is an empty set or not. And if yes, then we can conclude that if a set contains only a null set (which is by definition always true), then it must be a null or empty set.</p>
<p>(Here I am assuming empty and null are same, because I've read that they sometimes taken as different.)</p>
| nonuser | 463,553 | <p>Yes, they are the same, and each is a set which contains the empty set as an element.</p>
<p>But the symbols {Ø} and {} are not the same: the second one is the empty set, <span class="math-container">$\emptyset$</span>. </p>
|
3,479,883 | <p>I know that (I might be wrong):</p>
<ul>
<li>Symbol for empty or null set : {Ø} or {}</li>
<li>Null or empty set is 'subset of all sets' as well as 'empty or null set' set</li>
<li>So, { {} } is same as { Ø }</li>
</ul>
<p>I just want to know whether { {} } or { Ø } is an empty set or not. And if yes, then we can conclude that if a set contains only a null set (which is by definition always true), then it must be a null or empty set.</p>
<p>(Here I am assuming empty and null are same, because I've read that they sometimes taken as different.)</p>
| Ethan Bolker | 72,858 | <p>The sets you have written down are the same. Neither is empty since each has just one element: the empty set.</p>
<p>Related: <a href="https://math.stackexchange.com/questions/2620616/what-is-the-difference-between-x-and-x-when-x-itself-is-a-set/2620621#2620621">What is the difference between $x$ and $\{x\}$ when $x$ itself is a set?</a></p>
|
1,015,826 | <p>For $r>1$, prove the sequence $$X_n=\left(1+r^n\right)^{1/n}$$ is decreasing. I understand the limit is decreasing and that the limit of this sequence is $r$. I am just not sure on the algebra. My thought is to show $X_n>X_{n+1}$ by showing $X_n-X_{n+1}>0$ for all $n$. I could also use induction; however, I am not sure how that would be done. </p>
<p>If someone is willing to give me a push in the right direction, it would be much appreciated! </p>
| user84413 | 84,413 | <p>Since $\displaystyle(1+r^n)^\frac{1}{n}=\bigg(r^n\bigg[\left(\frac{1}{r}\right)^n+1\bigg]\bigg)^\frac{1}{n}=r\big(1+s^n\big)^\frac{1}{n}$ where $s=\frac{1}{r}$ satisfies $0<s<1$,</p>
<p>$\hspace{.3 in}$it suffices to show that $\big(1+s^n\big)^{1/n}>(1+s^{n+1})^{1/(n+1)}$ for $0<s<1$:</p>
<p>Since $\big(1+s^n\big)^{n+1}>(1+s^n)^n>(1+s^{n+1})^n$ (because $s^n>s^{n+1}$), raising both ends to the power $\frac{1}{n(n+1)}$ gives $\big(1+s^n\big)^{1/n}>(1+s^{n+1})^{1/(n+1)}$.</p>
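<p>(A numeric spot check of the claim for a sample value $r=1.7$, chosen arbitrarily:)</p>

```python
r = 1.7
X = [(1 + r ** n) ** (1 / n) for n in range(1, 40)]   # X_1, X_2, ..., X_39

assert all(X[i] > X[i + 1] for i in range(len(X) - 1))  # strictly decreasing
assert X[-1] > r                                        # always above the limit r
assert X[-1] - r < 1e-3                                 # ... and closing in on it
```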
|
1,015,826 | <p>For $r>1$, prove the sequence $$X_n=\left(1+r^n\right)^{1/n}$$ is decreasing. I understand the limit is decreasing and that the limit of this sequence is $r$. I am just not sure on the algebra. My thought is to show $X_n>X_{n+1}$ by showing $X_n-X_{n+1}>0$ for all $n$. I could also use induction; however, I am not sure how that would be done. </p>
<p>If someone is willing to give me a push in the right direction, it would be much appreciated! </p>
| user84413 | 84,413 | <p>It is enough to show that $(1+r^n)^{n+1}>(1+r^{n+1})^n,\;\;\;$ since then $(1+r^n)^\frac{1}{n}>(1+r^{n+1})^\frac{1}{n+1}$:</p>
<p>$\displaystyle\big(1+r^n\big)^{n+1}-\big(1+r^{n+1}\big)^n=\sum_{k=0}^{n+1}\binom{n+1}{k}(r^n)^k-\sum_{j=0}^n\binom{n}{j}(r^{n+1})^j$</p>
<p>$\displaystyle=\sum_{k=1}^n\binom{n+1}{k}r^{nk}-\sum_{j=1}^{n-1}\binom{n}{j}r^{(n+1)j}=\sum_{j=0}^{n-1}\binom{n+1}{j+1}r^{n(j+1)}-\sum_{j=1}^{n-1}\binom{n}{j}r^{(n+1)j}$</p>
<p>$\;\;\;\;\displaystyle=(n+1)r^n+\sum_{j=1}^{n-1}\bigg[\binom{n+1}{j+1}r^{nj+n}-\binom{n}{j}r^{nj+j}\bigg]>0$</p>
<p>since $\binom{n+1}{j+1}=\binom{n}{j+1}+\binom{n}{j}\implies\binom{n+1}{j+1}>\binom{n}{j}$ and since $nj+n>nj+j$ for $j<n$.</p>
|
4,182,153 | <p>Let A be a nonempty compact subset of <span class="math-container">$R$</span> (real numbers) and let B be a nonempty closed subset of R. Recall
that <span class="math-container">$\operatorname{dist}(A, B) = \inf{|x − y| : x ∈ A, y ∈ B}$</span>. Show that there exist <span class="math-container">$a ∈ A$</span> and <span class="math-container">$b ∈ B$</span> such that
<span class="math-container">$|a − b| = \operatorname{dist}(A, B)$</span>.</p>
<p>How to prove this question?</p>
| Yao Zhao | 889,365 | <p>Proof: From the basic properties of the infimum, we know that we can find elements of a set that are arbitrarily close to its infimum. Hence there exists a sequence <span class="math-container">$(|x_n-y_n|)$</span> such that <span class="math-container">$\lim_{n\to \infty} |x_n-y_n|= \operatorname{dist}(A,B)$</span>, where <span class="math-container">$x_n \in A$</span> and <span class="math-container">$y_n \in B$</span> for all <span class="math-container">$n$</span>.</p>
<p>Because A is compact, <span class="math-container">$\{x_n\}$</span> has a convergent subsequence <span class="math-container">$\{x_{n_k}\}$</span>, where its limit is an element <span class="math-container">$a \in A$</span> as a compact set is closed.</p>
<p>As every convergent sequence is bounded, we know that <span class="math-container">$(|x_n-y_n|)$</span> is bounded. Also, <span class="math-container">$\{x_n\}$</span> is obviously bounded because A is compact. From <span class="math-container">$|y_n| =|x_n-(x_n-y_n)| \leq |x_n|+|x_n-y_n|$</span>, we see that <span class="math-container">$\{y_n\}$</span> is also bounded. In particular, <span class="math-container">$\{y_{n_k}\}$</span> is bounded. By the Bolzano-Weierstrass Theorem, <span class="math-container">$\{y_{n_k}\}$</span> has a convergent subsequence: <span class="math-container">$y_{n_{k_l}} \to b$</span> as <span class="math-container">$l \to \infty$</span>. Since B is closed, we have <span class="math-container">$b \in B$</span>. We also have <span class="math-container">$x_{n_{k_l}} \to a \in A$</span> as <span class="math-container">$l \to \infty$</span>.</p>
<p>Therefore, <span class="math-container">$\operatorname{dist}(A,B)=\lim_{n\to \infty} |x_n-y_n|=\lim_{l\to \infty} |x_{n_{k_l}}-y_{n_{k_l}}| = |a-b|$</span>, for the elements <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span> found above.</p>
|
4,062,242 | <p>Is <span class="math-container">$$\int_1^\infty \frac{\log x}{x^2}dx$$</span>
finite? How to solve this?</p>
| Community | -1 | <p>If you don't like the <span class="math-container">$\log$</span>, then you can turn it back to exponent by setting <span class="math-container">$u = \log x \implies x = e^u\implies dx = e^udu\implies I = \displaystyle \int_{0}^\infty ue^{-2u}(e^{u}du)= \displaystyle \int_{0}^\infty ue^{-u}du= -\displaystyle \int_{0}^\infty ud(e^{-u})=- \left(ue^{-u}|_{u=0}^\infty-\displaystyle \int_{0}^\infty e^{-u}du\right)=\displaystyle \int_{0}^\infty e^{-u}du = 1.$</span></p>
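<p>(A numerical cross-check of the value: a trapezoidal-rule sketch for the substituted integral $\int_0^\infty u e^{-u}\,du$; the truncation point $u=50$ and step count are arbitrary choices.)</p>

```python
import math

# After u = log x (so x = e^u, dx = e^u du), the integral becomes
# the integral of u e^(-u) over [0, inf).  Truncate at u = 50, where
# the tail is about 51 e^(-50) (negligible), and apply the composite
# trapezoidal rule.
f = lambda u: u * math.exp(-u)
N, U = 200_000, 50.0
h = U / N
approx = h * (f(0.0) / 2 + sum(f(i * h) for i in range(1, N)) + f(U) / 2)

assert abs(approx - 1.0) < 1e-5
```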
|
542,454 | <p>There are two tangent lines on $f(x) = \sqrt{x}$ each with the $x$-value $a$ and $b$ respectively. </p>
<p>I need to prove that $c$, the $x$ value of the point at which the two lines intersect each other, is equal to $\sqrt{ab}$, the geometric mean of $a$ and $b$. </p>
<p>I have been trying many different ways of doing this question and I keep getting stuck. </p>
| André Nicolas | 6,312 | <p>The tangent line at $(a,\sqrt{a})$ has equation $y=\frac{1}{2\sqrt{a}}x+\frac{\sqrt{a}}{2}$. </p>
<p>The tangent line at $(b,\sqrt{b})$ has equation $y=\frac{1}{2\sqrt{b}}x+\frac{\sqrt{b}}{2}$. </p>
<p>Set $\frac{1}{2\sqrt{a}}x+\frac{\sqrt{a}}{2}=\frac{1}{2\sqrt{b}}x+\frac{\sqrt{b}}{2}$ and solve for $x$.</p>
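<p>Carrying out that last step (assuming $a \neq b$, so the tangent lines are not parallel):
$$\frac{x}{2\sqrt{a}}+\frac{\sqrt{a}}{2}=\frac{x}{2\sqrt{b}}+\frac{\sqrt{b}}{2}
\;\Longrightarrow\;
x\cdot\frac{\sqrt{b}-\sqrt{a}}{2\sqrt{ab}}=\frac{\sqrt{b}-\sqrt{a}}{2}
\;\Longrightarrow\;
x=\sqrt{ab}.$$</p>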
|
2,855,411 | <p>Find all real number(s) $x$ satisfying the equation $\{(x +1)^3\}$ = $x^3$ , where $\{y\}$ denotes the fractional part of $y$ , for example $\{3.1416\ldots\}=0.1416\ldots$.</p>
<p>I am trying all positive real numbers from $1,2,\dots$ but I didn't get any decimals.</p>
<p>Is there a smarter way to solve this problem? ... Please advise.</p>
| Karn Watcharasupat | 501,685 | <p>The fractional part is always in $[0,1)$, so the right-hand side must satisfy $x^3\in[0,1)$; that is, the domain we are looking for is $x \in [0,1)$.</p>
<p>$y=\{(x+1)^3\}$ is basically $y=(x+1)^3$ chopped into appropriate pieces and translated down by some $k\in\mathbb{Z}$ where $k=\lfloor(x+1)^3\rfloor$</p>
<p>So we solve
$$(x+1)^3-k=x^3$$
which gives
$$k=3x^2+3x+1$$</p>
<p>Since $x \in [0,1)$ gives $1\le(x+1)^3<8$, we check
$$3x^2+3x+1=1,2,3,4,5,6,7$$
whose roots in $[0,1)$ give the answers.</p>
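<p>(A quick numeric verification of this recipe; the quadratic is solved with its explicit root formula, and each candidate is checked directly against the original equation.)</p>

```python
import math

solutions = []
for k in range(1, 8):                  # k = floor((x+1)^3) ranges over 1..7
    disc = 12 * k - 3                  # discriminant of 3x^2 + 3x + (1 - k)
    x = (-3 + math.sqrt(disc)) / 6     # the larger (relevant) root
    if 0 <= x < 1:                     # x^3 must itself be a fractional part
        solutions.append(x)

# Each kept root really satisfies {(x+1)^3} = x^3.
for x in solutions:
    assert abs((x + 1) ** 3 % 1 - x ** 3) < 1e-9

# k = 1, ..., 6 give valid roots; k = 7 gives x = 1, which is excluded.
assert len(solutions) == 6
```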
|
3,726,382 | <p>Prove that the series converge using direct comparison or limit comparison
<span class="math-container">$$\sum \limits_{n=1}^{+\infty} \frac{2^n}{n!}.$$</span></p>
<p>I really don't know how to proceed with the comparison tests though I know how to prove its convergence using ratio test.</p>
| Weierstraß Ramirez | 174,035 | <p>Perhaps it is useful to notice that for an <span class="math-container">$n$</span> large enough:</p>
<p><span class="math-container">$$\left(\frac{2}{n}\right)^{n}=\frac{2^n}{n^n}>\frac{2^n}{n!}>\frac{1}{n!}$$</span></p>
<p>After that, one might proceed by observing that:</p>
<p><span class="math-container">$$\left(\frac{2}{n}\right) < \frac{4}{5} \forall n >3$$</span></p>
<p>After that, you have a simple GP sum:</p>
<p><span class="math-container">$$\sum_{n=1}^{\infty} \left(\frac{2}{n}\right)^{n} < 2 + 1 + \sum_{n=3}^{\infty} (4/5)^n$$</span></p>
<p>Clearly, RHS is convergent.</p>
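<p>A quick numerical sanity check (my own addition): for <span class="math-container">$n \ge 4$</span> one has <span class="math-container">$2^n/n! \le \frac{32}{3}\left(\frac{1}{2}\right)^n$</span>, a convergent geometric majorant, and the partial sums approach <span class="math-container">$e^2-1$</span>:</p>

```python
import math

# each term is dominated by the geometric majorant (32/3) * (1/2)^n for n >= 4
for n in range(4, 60):
    assert 2.0 ** n / math.factorial(n) <= (32.0 / 3.0) * 0.5 ** n + 1e-15

# the partial sums converge to e^2 - 1 (the n = 0 term of exp's series is absent)
partial = sum(2.0 ** n / math.factorial(n) for n in range(1, 60))
print(partial)  # ~ 6.389, i.e. e^2 - 1
```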
|
3,726,382 | <p>Prove that the series converge using direct comparison or limit comparison
<span class="math-container">$$\sum \limits_{n=1}^{+\infty} \frac{2^n}{n!}.$$</span></p>
<p>I really don't know how to proceed with the comparison tests, though I know how to prove its convergence using the ratio test.</p>
| CNiD | 788,800 | <p>So basically this series is taylor expansion of <span class="math-container">$e^2$</span>.</p>
<p>We can prove it's convergence using Ratio test.</p>
<p><span class="math-container">$\lim_{n\rightarrow\infty}\frac{a_{n+1}}{a_n}=\lim_{n\rightarrow\infty}\frac{2^{n+1}n!}{(n+1)!2^n}=\lim_{n\rightarrow\infty}\frac{2}{n+1}=0<1$</span></p>
<p>Hence the series <span class="math-container">$\sum \frac{2^n}{n!}$</span> converges.</p>
|
131,294 | <p>How do I show that $ f(t) = t^2 + t +1 $ is irreducible in $K[t]$, where $K = \{0,1\}$?</p>
<p>I know how to tackle this over $\mathbb{Z}$ or $\mathbb{Q}$ using Gauss or Eisenstein, say... but I'm a little unsure how to proceed in this case.</p>
<p>Any help is much appreciated.</p>
| Prasad G | 25,314 | <p>Suppose $f(t)$ is reducible.(then we have to show that it is contradiction)</p>
<p>$f(t) = (t+a)(t+b)$ where a and b are in $K$</p>
<p><code>Case 1:</code> $a =0,b=0$</p>
<p>$f(t)= t^2 \neq t^2+t+1$. This is a contradiction.</p>
<p>The remaining cases are similar:</p>
<p><code>Case 2:</code> $a=1,b=1$, so $f(t)=(t+1)^2=t^2+2t+1=t^2+1$ in $K[t]$, since $2=0$ in $K$. Again a contradiction.</p>
<p><code>Case 3:</code> $a=0,b=1$ or $a=1,b=0$, so $f(t)=t(t+1)=t^2+t \neq t^2+t+1$. Again a contradiction.</p>
<p>Hence $f(t)$ is irreducible in $K[t]$.</p>
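<p>Since $K$ is finite, the whole case analysis can also be verified by brute force; a quadratic over a field is reducible exactly when it has a root in the field. A small sketch of this check (my own addition):</p>

```python
def f(t):
    return (t * t + t + 1) % 2  # evaluate t^2 + t + 1 in K = {0, 1} = GF(2)

# a degree-2 polynomial over a field is reducible iff it has a root in the field
roots = [t for t in (0, 1) if f(t) == 0]

# cross-check by expanding (t + a)(t + b) = t^2 + (a + b) t + a b over K
target = (1, 1, 1)  # coefficients of t^2 + t + 1
factorizations = [(a, b) for a in (0, 1) for b in (0, 1)
                  if (1, (a + b) % 2, (a * b) % 2) == target]

print(roots, factorizations)  # both empty: f is irreducible over K
```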
|
<p>The exercise states: prove that the limit of the sequence defined by $$a_{n+2}=(a_na_{n+1})^{1/2}, \quad \text{where } a_1 \ge 0,\ a_2 \ge 0,$$</p>
<p>is $L = (a_1a_2^2)^{1/3}$.</p>
<p>The solution says: let $b_n = \frac{a_{n+1}}{a_n}$. Then $$b_{n+1}= \frac{1}{\sqrt{b_n}} \quad \text{for all } n,$$ which implies that
$$b_{n+1}= b_1^{(-1/2)^n} \rightarrow 1 \quad \text{as } n \rightarrow \infty.$$</p>
<p>Consider
$$\prod_{j=2}^{n+1}b_j = \prod_{j=1}^{n}(b_j)^{-1/2} $$</p>
<p>This implies that:$$(a_1^{1/2}a_2)^{-2/3}a_{n+1} = \left( \frac{1}{b_{n+1}} \right)^{2/3}$$...</p>
<p>I am having problems obtaining this last implication. I see that $$\prod_{j=2}^{n+1}b_j = \frac{a_{n+2}}{a_2} \quad \text{and} \quad \prod_{j=1}^{n}b_j = \frac{a_{n+1}}{a_1},$$ but still I struggle.</p>
<p>Any help?</p>
| Dave | 174,047 | <p>Multiply both sides of</p>
<p>$$\prod_{j=2}^{n+1}b_j = \prod_{j=1}^{n}b_j^{-1/2} $$</p>
<p>by $\prod_{j=1}^{n}b_j^{1/2}$ to get</p>
<p>$$b_{n+1}b_1^{1/2} \left(\prod_{j=2}^{n}b_j\right)^{3/2} = b_{n+1}b_1^{1/2} \prod_{j=2}^{n}b_j^{3/2} = 1.$$</p>
<p>This can be rewritten as </p>
<p>$$b_{n+1} (a_2/a_1)^{1/2} (a_{n+1}/a_2)^{3/2} = 1,$$</p>
<p>which can be manipulated to the equality you were asking about.</p>
<p>This solution should deal with the case where one of the $a_n$'s might be zero separately. </p>
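<p>A numerical sanity check of the claimed limit $L = (a_1a_2^2)^{1/3}$ (my own addition, not part of the original solution):</p>

```python
import math

def limit_of_recursion(a1, a2, steps=200):
    """Iterate a_{n+2} = sqrt(a_n * a_{n+1}) and return the last iterate."""
    x, y = a1, a2
    for _ in range(steps):
        x, y = y, math.sqrt(x * y)
    return y

for a1, a2 in [(1.0, 8.0), (3.0, 0.5), (2.0, 2.0)]:
    print(limit_of_recursion(a1, a2), (a1 * a2 ** 2) ** (1 / 3))  # the two agree
```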
|
2,094,123 | <p>A plane curve is printed on a piece of paper with the directions of both axes specified. How can I (roughly) verify if the curve is of the form $y=a e^{bx}+c$ without fitting or doing any quantitative calculation?</p>
<p>For example, for linear curves, I can choose two points on the curve and check if the midpoint is also on the curve. For parabolas, I can examine the geometric relationship between the tangent at a point and the secant connecting the peak and that point. Does the exponential curve have any similar geometric features that I can take advantage of?</p>
| user541686 | 4,890 | <h3>You can't. No, not just "in theory", but also in practice.</h3>
<p>I tried this when doing regression before and I gave up on it once I realized how impossible it is:</p>
<p><a href="https://i.stack.imgur.com/FNYE2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FNYE2.png" alt="Curve" /></a></p>
<p>(Ignore the left upwards part of the parabola; pretend you don't have that piece of information when you're trying to tell which is which.)</p>
<hr />
<h3>Update</h3>
<p>Since I couldn't reproduce the plot above anymore (I only kept the screenshot, and I'm not sure why the formulas don't seem to be reproducing it), I'll include an artificial one that illustrates the same problem:</p>
<p><a href="https://i.stack.imgur.com/hTMAg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hTMAg.png" alt="Plot 2" /></a></p>
<p>To reproduce:</p>
<pre><code>Plot[{Exp[x] / 4, 0.32 x^2 + 0.12 x + 0.26}, {x, 0, 2}, PlotRange -> {Automatic, {0, 2}},
GridLines -> Automatic, AspectRatio -> 1, BaseStyle -> {FontSize -> 14}]
</code></pre>
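<p>To quantify just how close the two curves in that <code>Plot</code> command are, one can measure the maximum gap between $e^x/4$ and the quadratic on $[0,2]$ (a rough sketch, my own addition; the quadratic's coefficients are taken from the <code>Plot</code> call above):</p>

```python
import math

def f_exp(x):
    return math.exp(x) / 4.0                 # Exp[x]/4 from the Plot command

def f_quad(x):
    return 0.32 * x * x + 0.12 * x + 0.26    # the quadratic from the Plot command

xs = [2.0 * i / 1000 for i in range(1001)]   # grid on [0, 2]
max_gap = max(abs(f_exp(x) - f_quad(x)) for x in xs)
print(max_gap)  # under 0.07, on a plot whose vertical range spans ~2 units
```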
|
129,875 | <p>The Fourier transform of the Heaviside step function $u(t)$ <a href="http://fourier.eng.hmc.edu/e101/lectures/handout3/node3.html" rel="nofollow noreferrer">is</a> $\dfrac{1}{iω} + π δ(ω)$.<br>
The Laplace transform of the same function <a href="http://leevaraiya.org/releases/LeeVaraiya_DigitalV2_02.pdf#page=569" rel="nofollow noreferrer">is</a> $\dfrac{1}{s}$. (<strong>Edit:</strong> This was my mistake, see <a href="/a/2728442/4890">my answer</a>.)</p>
<p>I remember the proof <a href="http://211.71.86.13/web/jp/05sb/xhyxt/ckwx/lec17.pdf" rel="nofollow noreferrer">came from derivatives and signums</a>, and I'm <strong>not</strong> interested in the proof.<br>
Rather, I want to understand <em>why</em> they <em>should</em> be different a bit more, shall we say, <em>intuitively</em>.</p>
<p>I mean, the Laplace transform of $x(t)$ is just $$\mathcal{L}(x)(s) = \int_{-∞}^∞ e^{-st}x(t)\,dt$$
whereas the Fourier transform of $x(t)$ is just $$\mathcal{F}(x)(ω) = \int_{-∞}^∞ e^{-iωt}x(t)\,dt$$
so it's pretty obvious they <strong>only</strong> differ by the dummy variable name. So if we substitute $s = iω$, then they <em>should</em> turn out to be the same... and yet the result for the Fourier transform contains an extra Dirac delta.</p>
<p>Could someone please explain why there is such a discrepancy more or less intuitively (rather than just presenting another mathematical proof)?</p>
| Tom Copeland | 27,786 | <p>Actually they do match in the sense that the Laplace transform provides an analytic continuation of the Fourier transform result to the complex plane. Look at the limits of the real and imaginary parts of</p>
<p>$\frac{1}{s}=\frac{s^{*}}{|s|^2}=\frac{\sigma-i\omega}{\sigma^2+\omega^2}$</p>
<p>as the real part of $s$ tends to $0$. There's no discrepancy; you are looking at a one-dimensional slice of a two-dimensional function (the blind men and the elephant allegory). </p>
<p>Hints: Look at <a href="https://math.stackexchange.com/questions/122220/clear-explanation-of-heaviside-function-fourier-transform">MSE-122220</a> and <a href="https://math.stackexchange.com/questions/73922/fourier-transform-of-unit-step">MSE-73922</a>. Also think of the Cauchy contour integral of $f(z)/z$ with the contour being a rectangle about the origin that gradually and symmetrically extends to infinity in length and collapses in height to the real line. Additional info available at Wiki on the <a href="http://en.wikipedia.org/wiki/Cauchy_principal_value" rel="nofollow noreferrer">Cauchy principal value</a> and the <a href="http://en.wikipedia.org/wiki/Dirac_delta_function" rel="nofollow noreferrer">Poisson kernel rep of the nascent delta function</a>.</p>
<p>PS: The <a href="http://math.fullerton.edu/mathews/c2003/LaplaceTransformMod.html" rel="nofollow noreferrer">bilateral Laplace transform</a> equals the unilateral Laplace transform when acting on H(t)f(t) where H(t) is the Heaviside step function. In this case, letting $s= \sigma+i\omega$ clearly shows that the Laplace transform provides an analytic continuation in general of the FT result to the complex plane for $\sigma>0$.</p>
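<p>The Poisson-kernel remark can be verified numerically (my own addition): for every $\sigma>0$ the real part $\frac{\sigma}{\sigma^2+\omega^2}$ integrates to $\pi$, and as $\sigma\to 0$ that mass concentrates at $\omega=0$, which is precisely the $\pi\delta(\omega)$ term of the Fourier transform:</p>

```python
import math

def poisson_mass(sigma, lo, hi, n=200000):
    """Midpoint rule for the integral of sigma / (sigma^2 + w^2) over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(sigma / (sigma ** 2 + (lo + (i + 0.5) * h) ** 2) for i in range(n))

total = poisson_mass(0.1, -200.0, 200.0)  # essentially the full real line
print(total)  # ~ pi, independent of sigma

# as sigma shrinks, the mass inside the fixed window [-0.5, 0.5] tends to pi
near0 = [poisson_mass(s, -0.5, 0.5) for s in (0.5, 0.05, 0.005)]
print(near0)  # increasing toward pi: the kernel concentrates like pi*delta(omega)
```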
|
129,875 | <p>The Fourier transform of the Heaviside step function $u(t)$ <a href="http://fourier.eng.hmc.edu/e101/lectures/handout3/node3.html" rel="nofollow noreferrer">is</a> $\dfrac{1}{iω} + π δ(ω)$.<br>
The Laplace transform of the same function <a href="http://leevaraiya.org/releases/LeeVaraiya_DigitalV2_02.pdf#page=569" rel="nofollow noreferrer">is</a> $\dfrac{1}{s}$. (<strong>Edit:</strong> This was my mistake, see <a href="/a/2728442/4890">my answer</a>.)</p>
<p>I remember the proof <a href="http://211.71.86.13/web/jp/05sb/xhyxt/ckwx/lec17.pdf" rel="nofollow noreferrer">came from derivatives and signums</a>, and I'm <strong>not</strong> interested in the proof.<br>
Rather, I want to understand <em>why</em> they <em>should</em> be different a bit more, shall we say, <em>intuitively</em>.</p>
<p>I mean, the Laplace transform of $x(t)$ is just $$\mathcal{L}(x)(s) = \int_{-∞}^∞ e^{-st}x(t)\,dt$$
whereas the Fourier transform of $x(t)$ is just $$\mathcal{F}(x)(ω) = \int_{-∞}^∞ e^{-iωt}x(t)\,dt$$
so it's pretty obvious they <strong>only</strong> differ by the dummy variable name. So if we substitute $s = iω$, then they <em>should</em> turn out to be the same... and yet the result for the Fourier transform contains an extra Dirac delta.</p>
<p>Could someone please explain why there is such a discrepancy more or less intuitively (rather than just presenting another mathematical proof)?</p>
| user541686 | 4,890 | <p><em>(Realizing 6 years later that I never accepted an answer...)</em></p>
<p>The answer was that <strong>the premise of my question was simply false</strong>.</p>
<p>The (bilateral) Laplace transform of the unit-step function is <strong><em>not</em></strong> $\dfrac{1}{s}$ everywhere.<br>
Rather, the statement <a href="http://leevaraiya.org/releases/LeeVaraiya_DigitalV2_02.pdf#page=569" rel="nofollow noreferrer">was</a> that it is $\dfrac{1}{s}$ <em>in the region of convergence (RoC)</em> $\operatorname{Re}(s) > 0$. </p>
<p>This means the statement said nothing about what happens in the origin.</p>
<p>So... what happens at the origin? </p>
<p>Just substitute $\omega = s/i$ into the Fourier transform and you get the answer:</p>
<p>\begin{align*}
\mathcal{L}(u)
= &\ s\ \mapsto\ \dfrac{1}{i\,\dfrac{s}{i}} + \pi\, \delta\!\left(\frac{s}{i}\right) && \text{($u$ is the unit-step function)} \\
= &\ s\ \mapsto\ \dfrac{1}{s} + \pi\, \delta(s) \left\vert i\right\vert && \text{(scaling property of delta function)} \\
= &\ s\ \mapsto\ \boxed{\dfrac{1}{s} + \pi\, \delta(s)}
\end{align*}</p>
<p>And <strong>this should make sense</strong>: the average ("DC") value of a unit step is 1/2, so its Laplace or Fourier transforms at the origin should be Dirac deltas of weight $\dfrac{1}{2} 2\pi = \pi$, with the $2\pi$ just coming from the use of angular frequency $\omega$.</p>
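<p>The RoC statement can also be checked numerically; a sketch of my own: for any $s$ with $\operatorname{Re}(s)>0$ the integral $\int_0^\infty e^{-st}\,dt$ really does equal $\dfrac{1}{s}$, and the extra $\pi\,\delta(s)$ term only appears on the boundary $\operatorname{Re}(s)=0$:</p>

```python
import cmath

def laplace_of_step(s, T=80.0, n=8000):
    """Composite Simpson's rule for the integral of e^{-s t} over [0, T].

    For Re(s) > 0 and T large, this approximates the integral over [0, inf).
    """
    h = T / n
    total = cmath.exp(0) + cmath.exp(-s * T)  # endpoint terms
    for i in range(1, n):
        total += (4 if i % 2 else 2) * cmath.exp(-s * i * h)
    return total * h / 3

s = 0.5 + 2.0j  # inside the region of convergence Re(s) > 0
print(laplace_of_step(s), 1 / s)  # the two values agree
```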
|
354,250 | <p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p>
<h2>Motivation:</h2>
<p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the question that human brains might do reverse-mode automatic differentiation, or what some call backpropagation [7].</p>
<p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p>
<h2>Problem definition:</h2>
<p>Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think that this may be a problem in reverse mathematics [6]. But, in order to define equivalence a couple definitions are required:</p>
<p><strong>Partial Derivative as a linear map:</strong></p>
<p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p>
<p><span class="math-container">\begin{equation}
\frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{k \to \infty} k \cdot \big(f(x+\tfrac{1}{k}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1}
\end{equation}</span></p>
<p>This is the general setting of numerical differentiation [3].</p>
<p><strong>Partial Derivative as an operator:</strong></p>
<p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p>
<p><span class="math-container">\begin{equation}
\nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2}
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
\nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. \tag{3}
\end{equation}</span></p>
<p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p>
<h2>The special case of classical mechanics:</h2>
<p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p>
<p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p>
<h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2>
<p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p>
<ol>
<li>Everything is formulated in terms of linear operators.</li>
<li>All problems can then be recast in an algebraic language.</li>
</ol>
<p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. That said, I just recently came across this formulation so there may be important subtleties I ignore.</p>
<h2>Related problems:</h2>
<p>Furthermore, I think it may be worth considering the following related questions:</p>
<ol>
<li>What would be left of mathematical physics if we could not compute partial derivatives?</li>
<li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li>
<li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li>
</ol>
<h2>A historical note:</h2>
<p>It is worth noting that more than 1000 years ago, as a result of his profound studies on optics, the mathematician and physicist Ibn al-Haytham (aka Alhazen) reached the following insight:</p>
<blockquote>
<p>Nothing of what is visible, apart from light and color, can be
perceived by pure sensation, but only by discernment, inference, and
recognition, in addition to sensation.-Alhazen</p>
</blockquote>
<p>Today it is known that even color is a construction of the mind as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p>
<p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p>
<p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p>
<h2>References:</h2>
<ol>
<li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li>
<li>L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li>
<li>Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li>
<li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li>
<li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group
Department of Engineering Science
University of Oxford. 2007.</li>
<li>Connie Fan. REVERSE MATHEMATICS. University of Chicago. 2010.</li>
<li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li>
<li>Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li>
<li>Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li>
<li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a
Step Beyond. 2015.</li>
<li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li>
<li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li>
<li>Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (and Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society.</li>
<li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li>
</ol>
| Community | -1 | <blockquote>
<p>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</p>
</blockquote>
<p>Yes. An example is the nuclear shell model as formulated by Maria Goeppert Mayer in the 1950's. (The same would also apply to, for example, the <a href="https://en.wikipedia.org/wiki/Interacting_boson_model" rel="nofollow noreferrer">interacting boson model</a>.) The way this type of shell model works is that you take a nucleus that is close to a closed shell in both neutrons and protons, and you treat it as an inert core with some number of particles and holes, e.g., <span class="math-container">$^{41}\text{K}$</span> (potassium-41) would be treated as one proton hole coupled to two neutrons. There is some vector space of possible states for these three particles, and there is a Hamiltonian that has to be diagonalized. When you diagonalize the Hamiltonian, you have a prediction of the energy levels of the nucleus.</p>
<p>You do have to determine the matrix elements of the Hamiltonian in whatever basis you've chosen. There are various methods for estimating these. (They cannot be determined purely from the theory of quarks and gluons, at least not with the present state of the art.) In many cases, I think these estimates are actually done by some combination of theoretical estimation and empirical fitting of parameters to observed data. If you look at how practitioners have actually estimated them, I'm sure their notebooks do contain lots of calculus, including partial derivatives, or else they are recycling other people's results that were certainly not done in a world where nobody knew about partial derivatives. But that doesn't mean that they really require partial derivatives in order to find them.</p>
<p>As an example, people often use a basis consisting of solutions to the position-space Schrodinger equation for the harmonic oscillator. This is a partial differential equation because it contains the kinetic energy operator, which is basically the Laplacian. But the reality is that the matrix elements of this operator can probably be found without ever explicitly writing down a wavefunction in the position basis and calculating a Laplacian. E.g., there are algebraic methods. And in any case many of the matrix elements in such models are simply fitted to the data.</p>
<p>The interacting boson model (IBM) is probably an even purer example of this, although I know less about it. It's a purely algebraic model. Although its advocates claim that it is in some sense derivable as an approximation to a more fundamental model, I don't think anyone ever actually <em>has</em> succeeded in determining the IBM's parameters for a specific nucleus from first principles. The parameters are simply fitted to the data.</p>
<p>Looking at this from a broader perspective, here is what I think is going on. If you ask a physicist how the laws of physics work, they will probably say that the laws of physics are all wave equations. Wave equations are partial differential equations. However, all of our physical theories except for general relativity fall under the umbrella of quantum mechanics, and quantum mechanics is perfectly linear. There is a no-go theorem by Gisin that says you basically can't get a sensible theory by adding a nonlinearity to quantum mechanics. Because of the perfect linearity, our physical theories can also just be described as exercises in linear algebra, and we can forget about a specific basis, such as the basis consisting of Dirac delta functions in position space.</p>
<p>In terms of linear algebra, there is the problem of determining what is the Hamiltonian. If we don't have any systematic way of determining what is an appropriate Hamiltonian, then we get a theory that lacks predictive power. Even for a finite-dimensional space (such as the shell model), an <span class="math-container">$n$</span>-dimensional space has <span class="math-container">$O(n^2)$</span> unknown matrix elements in its Hamiltonian. Determining these purely by fitting to experimental data would be a vacuous exercise, since typically the number of observations we have available is <span class="math-container">$O(n)$</span>. One way to determine all these matrix elements is to require that the theory consist of solutions to some differential equation. But there is no edict from God that says this is the only way to do so. There are other methods, such as algebraic methods that exploit symmetries. This is the kind of thing that the models described above do, either partially or exclusively.</p>
<p><em>References</em></p>
<p>Gisin, "Weinberg's non-linear quantum mechanics and supraluminal communications," <a href="http://dx.doi.org/10.1016/0375-9601(90)90786-N" rel="nofollow noreferrer">http://dx.doi.org/10.1016/0375-9601(90)90786-N</a> , Physics Letters A 143(1-2):1-2</p>
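<p>To make the "physics as linear algebra, no derivatives required" point concrete, here is a toy sketch of my own (made-up matrix elements, not a real shell-model Hamiltonian): fix a two-level Hamiltonian and diagonalize it in closed form; the eigenvalues are the predicted energy levels.</p>

```python
import math

# toy 2x2 Hamiltonian: two basis states with energies e1, e2, coupled by v
# (made-up matrix elements; in practice they are estimated or fitted to data)
e1, e2, v = 1.0, 3.0, 0.8
H = [[e1, v], [v, e2]]

# eigenvalues of a real symmetric 2x2 matrix, by pure algebra
mean = (e1 + e2) / 2
radius = math.sqrt(((e1 - e2) / 2) ** 2 + v ** 2)
levels = [mean - radius, mean + radius]
print(levels)  # the predicted energy levels

# check H x = E x for each level, with (unnormalized) eigenvector x = (v, E - e1)
for E in levels:
    x = (v, E - e1)
    Hx = (H[0][0] * x[0] + H[0][1] * x[1], H[1][0] * x[0] + H[1][1] * x[1])
    assert abs(Hx[0] - E * x[0]) < 1e-12 and abs(Hx[1] - E * x[1]) < 1e-12
```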
|
354,250 | <p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p>
<h2>Motivation:</h2>
<p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the question that human brains might do reverse-mode automatic differentiation, or what some call backpropagation [7].</p>
<p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p>
<h2>Problem definition:</h2>
<p>Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think that this may be a problem in reverse mathematics [6]. But, in order to define equivalence a couple definitions are required:</p>
<p><strong>Partial Derivative as a linear map:</strong></p>
<p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p>
<p><span class="math-container">\begin{equation}
\frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{k \to \infty} k \cdot \big(f(x+\tfrac{1}{k}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1}
\end{equation}</span></p>
<p>This is the general setting of numerical differentiation [3].</p>
<p><strong>Partial Derivative as an operator:</strong></p>
<p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p>
<p><span class="math-container">\begin{equation}
\nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2}
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
\nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. \tag{3}
\end{equation}</span></p>
<p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p>
<h2>The special case of classical mechanics:</h2>
<p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p>
<p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p>
<h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2>
<p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p>
<ol>
<li>Everything is formulated in terms of linear operators.</li>
<li>All problems can then be recast in an algebraic language.</li>
</ol>
<p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. That said, I just recently came across this formulation so there may be important subtleties I ignore.</p>
<h2>Related problems:</h2>
<p>Furthermore, I think it may be worth considering the following related questions:</p>
<ol>
<li>What would be left of mathematical physics if we could not compute partial derivatives?</li>
<li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li>
<li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li>
</ol>
<h2>A historical note:</h2>
<p>It is worth noting that more than 1000 years ago, as a result of his profound studies on optics, the mathematician and physicist Ibn al-Haytham (aka Alhazen) reached the following insight:</p>
<blockquote>
<p>Nothing of what is visible, apart from light and color, can be
perceived by pure sensation, but only by discernment, inference, and
recognition, in addition to sensation.-Alhazen</p>
</blockquote>
<p>Today it is known that even color is a construction of the mind as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p>
<p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p>
<p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p>
<h2>References:</h2>
<ol>
<li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li>
<li>L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li>
<li>Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li>
<li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li>
<li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group, Department of Engineering Science, University of Oxford. 2007.</li>
<li>Connie Fan. REVERSE MATHEMATICS. University of Chicago. 2010.</li>
<li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li>
<li>Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li>
<li>Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li>
<li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015.</li>
<li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li>
<li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li>
<li>Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (and Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society.)</li>
<li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li>
</ol>
| Mozibur Ullah | 35,706 | <p>I'd query the contention that organisms or even inorganic matter compute in the sense described.</p>
<p>For example, if I drop a stone on the surface of the earth, it falls in a straight line. To call this 'computing' a straight line seems rather a stretch of the word computation; to my thinking, to compute means that one ought to be conscious that one is carrying out a computation. That is, the person who dropped the stone is computing the straight line - and not the stone itself. The stone merely moves in a straight line. We <em>know</em> it moves in a straight line, and hence, by dropping it, we are describing a straight line.</p>
|
229,161 | <p>A sequence of positive integer is defined as follows</p>
<blockquote>
<ul>
<li>The first term is $1$.</li>
<li>The next two terms are the next two even numbers $2$, $4$.</li>
<li>The next three terms are the next three odd numbers $5$, $7$, $9$.</li>
<li>The next $n$ terms are the next $n$ even numbers if $n$ is even or the next $n$ odd numbers if $n$ is odd.</li>
</ul>
</blockquote>
<p>What is the general term $a_n?$</p>
<p><strong>Please, proofs of all these formulas would be nice</strong></p>
| Hagen von Eitzen | 39,174 | <p>The general term is
$$\tag1a_n=2n-\left\lceil \sqrt{2 n}-\frac12\right\rceil.$$
Why?
We have $a_{n+1}=a_{n}+2$ unless there is an integer $\ge\frac12 +\sqrt{2n}$ and $<\frac12+\sqrt{2(n+1)}$.
We need $a_{n+1}=a_n+1$ iff $n$ is one of the numbers $1, 3, 6, 10, \ldots$, i.e. a number of the form $k \choose 2$. This is accounted for by the ceiling/sqrt term.</p>
<p><em>Proof:</em>
That $a_{n}=a_{n-1}+1$ iff $n-1=\frac{k(k+1)}2$ because of the well-known summation $1+2+3+\cdots+k=\frac{k(k+1)}2$ should be clear.
For which $\nu$ does there exist a $k$ such that $\nu-1=\frac{k(k+1)}2$? This is a quadratic in $k$ with solutions
$$\tag2k_{1,2}=\frac{-1\pm\sqrt{1+8(\nu-1)}}2=\frac{-1\pm\sqrt{8\nu-7}}2.$$
Thus the $\nu$ with $a_{\nu}=a_{\nu-1}+1$ are characterized by the fact that $8\nu-7$ is a square (of an odd number).
In total we have $a_n = 2n-1-m_n$ where $m_n$ is the number of $\nu\le n$ for which $(2)$ has a positive integer solution $k$ (starting from $a_1=1$, each step adds $2$ except for the $m_n$ steps that add only $1$). But since there is exactly one such $\nu$ (namely $\nu=\frac{k(k+1)}2+1$) for each $k=1, 2, \ldots$, we find that
$m_n=\left\lfloor\frac{-1+\sqrt{8n-7}}2\right\rfloor$.
This proves a different-looking formula, but it can be simplified to $(1)$ by observing that $\sqrt {2n}-\frac12$ is never an integer (that would make $2n$ the square of an odd integer), so that $\left\lceil\sqrt{2n}-\frac12\right\rceil=\left\lfloor\sqrt{2n}-\frac12\right\rfloor+1$, which absorbs the $-1$. Moreover $\lfloor\sqrt{2n}+\frac12\rfloor$ may be used instead of $\lceil\sqrt{2n}-\frac12\rceil$ in $(1)$ (or just round $\sqrt{2n}$ to the <em>nearest</em> integer). We have
$$\left\lfloor\frac{-1+\sqrt{8n-7}}2\right\rfloor\le\left\lfloor \sqrt{2 n}-\frac12\right\rfloor $$
and strict inequality could only hold if there were a natural number $s$ such that
$$\frac{-1+\sqrt{8n-7}}2<s<\sqrt{2 n}-\frac12$$
i.e.
$$\sqrt{8n-7}<2s+1<2\sqrt{2 n}$$
$$8n-7<(2s+1)^2<8 n.$$
The latter is impossible because odd squares are $\equiv 1\pmod 8$.</p>
<p>Therefore $\left\lfloor\frac{-1+\sqrt{8n-7}}2\right\rfloor$ may be replaced with, e.g., $\left\lfloor \sqrt{2 n}-\frac12\right\rfloor$ or (with adjustment of the complete formula by a constant) with $\left\lceil \sqrt{2 n}-\frac12\right\rceil$.</p>
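<p>As a quick sanity check of formula $(1)$ (my addition, a small Python sketch): generate the sequence directly from its block description and compare.</p>

```python
import math

def direct_terms(count):
    # Build the sequence from its description: the k-th block has k numbers,
    # odd if k is odd and even if k is even; since the previous block ends on
    # a number of the opposite parity, each block simply starts at last + 1.
    terms, last, k = [], 0, 1
    while len(terms) < count:
        block = [last + 1 + 2 * i for i in range(k)]
        terms.extend(block)
        last, k = block[-1], k + 1
    return terms[:count]

def closed_form(n):
    # formula (1): a_n = 2n - ceil(sqrt(2n) - 1/2)
    return 2 * n - math.ceil(math.sqrt(2 * n) - 0.5)

seq = direct_terms(200)
agrees = all(closed_form(n) == seq[n - 1] for n in range(1, 201))
```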
|
229,161 | <p>A sequence of positive integer is defined as follows</p>
<blockquote>
<ul>
<li>The first term is $1$.</li>
<li>The next two terms are the next two even numbers $2$, $4$.</li>
<li>The next three terms are the next three odd numbers $5$, $7$, $9$.</li>
<li>The next $n$ terms are the next $n$ even numbers if $n$ is even or the next $n$ odd numbers if $n$ is odd.</li>
</ul>
</blockquote>
<p>What is the general term $a_n?$</p>
<p><strong>Please, proofs of all these formulas would be nice</strong></p>
| Arthur | 15,500 | <p><a href="http://oeis.org/A001614" rel="nofollow">http://oeis.org/A001614</a> has a few formulas:
$$
a_n = 2n - \left \lfloor \frac{1+ \sqrt{8n-7}}{2} \right\rfloor
$$</p>
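<p>A quick check of this formula (my addition): rebuild the sequence from its block description and compare, using <code>math.isqrt</code> so the floor is computed exactly.</p>

```python
import math

# k-th block: k terms, odd numbers if k is odd, even if k is even;
# each block continues upward from just past the previous one
terms, last, k = [], 0, 1
while len(terms) < 200:
    block = [last + 1 + 2 * i for i in range(k)]
    terms.extend(block)
    last, k = block[-1], k + 1

def a(n):
    # a_n = 2n - floor((1 + sqrt(8n - 7)) / 2)
    return 2 * n - (1 + math.isqrt(8 * n - 7)) // 2

agrees = all(a(n) == terms[n - 1] for n in range(1, 201))
```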
|
3,605,636 | <p>Let <span class="math-container">$m_{a},m_{b},m_{c}$</span> be the lengths of the medians and <span class="math-container">$a,b,c$</span> be the lengths of the sides of a given triangle , Prove the inequality : </p>
<p><span class="math-container">$$m_{a}m_{b}m_{c}\leq\frac{Rs^{2}}{2}$$</span></p>
<p>Where : </p>
<p><span class="math-container">$s : \operatorname{Semiperimeter}$</span></p>
<p><span class="math-container">$R : \operatorname{circumradius}$</span> </p>
<p>I know the relation : </p>
<p><span class="math-container">$$m_{a}^{2}=\frac{2(b^{2}+c^{2})-a^{2}}{4}$$</span></p>
<p>But when I multiple together I dont get simple formulas!</p>
<p>So, I need help finding a solution.
Thanks!</p>
| Michael Rozenberg | 190,319 | <p>In the standard notation we need to prove that:
<span class="math-container">$$\frac{1}{8}\sqrt{\prod_{cyc}(2a^2+2b^2-c^2)}\leq\frac{1}{2}\cdot\frac{abc}{4S}\cdot\frac{(a+b+c)^2}{4}$$</span> or
<span class="math-container">$$a^2b^2c^2(a+b+c)^3\geq\prod_{cyc}(2a^2+2b^2-c^2)\prod_{cyc}(a+b-c).$$</span>
Now, let <span class="math-container">$a+b+c=3u$</span>, <span class="math-container">$ab+ac+bc=3v^2$</span> and <span class="math-container">$abc=w^3$</span>.</p>
<p>Thus, <span class="math-container">$$\prod_{cyc}(2a^2+2b^2-c^2)=\prod_{cyc}(2(a^2+b^2+c^2)-3c^2)=$$</span>
<span class="math-container">$$=8(9u^2-6v^2)^3-12(9u^2-6v^2)^3+18(9u^2-6v^2)(9v^4-6uw^3)-27w^6=$$</span>
<span class="math-container">$$=27(-w^6+2(3u^2-2v^2)(9v^4-6uw^3)-4(3u^2-2v^2)^3).$$</span>
Also, <span class="math-container">$$\prod_{cyc}(a+b-c)=\prod_{cyc}(3u-2c)=27u^3-54u^3+36uv^2-8w^3=$$</span>
<span class="math-container">$$=-8w^3-27u^3+36uv^2.$$</span>
Thus, we need to prove that <span class="math-container">$f(w^3)\geq0,$</span> where
<span class="math-container">$$f(w^3)=u^3w^6-(-w^6+2(3u^2-2v^2)(9v^4-6uw^3)-4(3u^2-2v^2)^3)(-8w^3-27u^3+36uv^2).$$</span>
But <span class="math-container">$$f''(w^3)=2u^3-2(-2w^3+2(3u^2-2v^2)(-6u))(-8)+$$</span>
<span class="math-container">$$-(-8w^3-27u^3+36uv^2)(-2)=-4(157u^3-114uv^2+12w^3)<0,$$</span> which says that <span class="math-container">$f$</span> is a concave function.</p>
<p>Thus, it's enough to prove our inequality for an extreme value of <span class="math-container">$w^3$</span>, which happens in the following cases.</p>
<ol>
<li><p><span class="math-container">$w^3\rightarrow0^+$</span>.
in this case the inequality is obvious;</p></li>
<li><p><span class="math-container">$\prod\limits_{cyc}(a+b-c)\rightarrow0^+$</span>.</p></li>
</ol>
<p>It's obvious again;</p>
<ol start="3">
<li>Two variables are equal.</li>
</ol>
<p>Since our inequality is symmetric and homogeneous, it's enough to assume <span class="math-container">$b=c=1$</span>.</p>
<p>Thus, <span class="math-container">$0<a<2$</span> and we need to prove that
<span class="math-container">$$a^2(a+2)^3\geq(2a^2+1)^2(4-a^2)a^2(2-a)$$</span> or
<span class="math-container">$$(a+2)^2\geq(2a^2+1)^2(2-a)^2$$</span> or
<span class="math-container">$$a+2\geq(2a^2+1)(2-a)$$</span> or
<span class="math-container">$$a(a-1)^2\geq0$$</span> and we are done!</p>
|
138,243 | <p><a href="https://i.stack.imgur.com/kRmeb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRmeb.png" alt="Area and Perimeter"></a></p>
<p>How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of radius <code>r</code> of the outer circle, and find the points of intersection of the inner circles.</p>
| yode | 21,532 | <p>Show it:</p>
<pre><code>RegionPlot[
region = RegionUnion[
Sequence @@
RegionIntersection @@@
Subsets[{Disk[{-1, 0}], Disk[{0, -1}], Disk[{1, 0}],
Disk[{0, 1}]}, {2}],
Fold[RegionDifference, {Disk[{0, 0}, 2], Disk[{-1, 0}],
Disk[{0, -1}], Disk[{1, 0}], Disk[{0, 1}]}]], Frame -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/YOw1U.png" height="350"></p>
<h3>Area</h3>
<pre><code>Area[region]
</code></pre>
<blockquote>
<p>4 (-2 + π)</p>
</blockquote>
<h3>Perimeter</h3>
<pre><code>ArcLength@RegionBoundary[region]
</code></pre>
<blockquote>
<p>12 π</p>
</blockquote>
|
138,243 | <p><a href="https://i.stack.imgur.com/kRmeb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRmeb.png" alt="Area and Perimeter"></a></p>
<p>How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of radius <code>r</code> of the outer circle, and find the points of intersection of the inner circles.</p>
| bobbym | 8,585 | <p>Your question wants a relationship for r which I assume is the radius of the larger circle. You can get it like this:</p>
<pre><code>c1 = ImplicitRegion[(x - r)^2 + y^2 <= r^2, {x, y}];
c2 = ImplicitRegion[x^2 + (y - r)^2 <= r^2, {x, y}];
Assuming[r > 0, Area[RegionIntersection[c1, c2]]]
</code></pre>
<p>yields</p>
<p>$\frac{1}{2} (\pi -2) r^2$ </p>
<p>All the shaded areas in terms of r:</p>
<pre><code>FullSimplify[\[Pi]*r^2 - 4 \[Pi] (r/2)^2 + 8 1/8 (-2 + \[Pi]) r^2]
</code></pre>
<p>$\left ( \pi -2\right )r^2$</p>
<p>Testing for the particular answer given by yode:</p>
<pre><code>(-2 + \[Pi]) r^2 /. r -> 2
</code></pre>
<p>yields</p>
<p>$4\pi - 8$</p>
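<p>If you want a cross-check of the $\frac{1}{2}(\pi-2)r^2$ lens area outside Mathematica (my addition), a stdlib-Python Monte Carlo estimate for $r=1$ works: the lens between the unit disks centered at $(1,0)$ and $(0,1)$ lies inside the unit square, so sampling that square suffices.</p>

```python
import math, random

random.seed(1)
n = 400_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()       # uniform in the unit square
    if (x - 1) ** 2 + y ** 2 <= 1 and x ** 2 + (y - 1) ** 2 <= 1:
        hits += 1
estimate = hits / n            # the sampling square has area 1
exact = (math.pi - 2) / 2      # the closed form above, at r = 1
```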
|
138,243 | <p><a href="https://i.stack.imgur.com/kRmeb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kRmeb.png" alt="Area and Perimeter"></a></p>
<p>How can I draw the figure shown above in rectangular coordinates, calculate the area and perimeter of the shaded region as a function of radius <code>r</code> of the outer circle, and find the points of intersection of the inner circles.</p>
| ubpdqn | 1,997 | <p>Just for fun. By symmetry we need only consider the first quadrant.</p>
<pre><code>Graphics[{EdgeForm[Black], FaceForm[None], Disk[{0, 0}, 2, {0, Pi/2}],
Disk[{1, 0}, 1, {0, Pi}], Disk[{0, 1}, 1, {-Pi/2, Pi/2}],
Line[{{1, 0}, {1, 1}}], Line[{{0, 0}, {1, 1}}],
Line[{{0, 1}, {1, 1}}],
Text[Style["A", 20], {0, 0}, {1, 1}],
Text[Style["B", 20], {1, 0}, {1, 1}],
Text[Style["C", 20], {1, 1}, {-2, -1}],
Text[Style["D", 20], {2, 0}, {1, 1}],
Text[Style["arc 1", 20, Background -> White], {0.7, 0.3}, {0, 0}],
Text[Style["arc 2", 20, Background -> White], {0.3, 0.7}, {0, 0}],
Text[Style["arc 3", 20, Background -> White], {1.4, 1.4}, {0, 0}],
Text[Style["arc 4", 20, Background -> White], {0.8, 1.5}, {0, 0}],
Text[Style["arc 5", 20, Background -> White], {1.5, 0.8}, {0, 0}]
}]
</code></pre>
<p><a href="https://i.stack.imgur.com/md9aF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/md9aF.png" alt="enter image description here"></a></p>
<p>Let the radius of the small circle be 1 (hence the radius of the large circle is 2).</p>
<p>So the perimeter can be seen to be length of arc1+ arc2+arc3+arc4+arc5:</p>
<p>Let $p_i$ represent the length of arc $i$. Now $p_1=p_2=p_4=p_5= \pi/2$ and $p_3= \pi/2 \times 2$. Hence total perimeter:</p>
<pre><code>perimeter = 4 (4 Pi/2 + Pi/2 2)
</code></pre>
<p>i.e. $12\pi$</p>
<p>For the area: area bounded by arc1 and arc 2 is 2 x (area of sector-area of triangle ABC):</p>
<pre><code>area1 = 2 (Pi/4 - 1/2)
</code></pre>
<p>The area bounded by arcs 3,4 and 5= area of quarter circle -area of 2 semicircles+ area of overlap:</p>
<pre><code>area2 = (Pi/2) 2^2/2 - Pi + area1
</code></pre>
<p>Note <code>area1</code>=<code>area2</code>=$\pi/2-1$, so the total area is </p>
<pre><code>total = Simplify[4 (area1 + area2)]
</code></pre>
<p>yielding:</p>
<pre><code>4 (-2 + \[Pi])
</code></pre>
|
344,725 | <p>in $\Delta ABC$,and </p>
<p>$$\dfrac{\sin{(\dfrac{B}{2}+C)}}{\sin^2{B}}=\dfrac{\sin{(\dfrac{C}{2}+B)}}{\sin^2{C}}$$</p>
<p>prove that $B=C$</p>
<p>I think $\sin{(\dfrac{B}{2}+C)}\sin^2{C}=\sin{(\dfrac{C}{2}+B)}\sin^2{B}$</p>
<p>then
$$\sin{(\dfrac{B}{2})}\cos{C}\sin^2{C}+\cos{\dfrac{B}{2}}\sin^3C=\sin{\dfrac{C}{2}}\cos{B}\sin^2B+\cos{\dfrac{C}{2}}\sin^3B$$</p>
<p>so
$$(\sin{\dfrac{B}{2}}-\sin{\dfrac{C}{2}})f(B,C)=0$$</p>
<p>my question:How can prove $f(B,C)\neq 0$ ?</p>
| Christian Blatter | 1,303 | <p>The following picture suggests that the statement might be wrong. It shows that near $\beta=\gamma=1.417079$ there are angles $\beta\ne\gamma$ satisfying the stated equality.</p>
<p><img src="https://i.stack.imgur.com/NVxai.jpg" alt="enter image description here"></p>
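<p>To corroborate the picture numerically (my addition): write $F(B,C)=\frac{\sin(B/2+C)}{\sin^2 B}-\frac{\sin(C/2+B)}{\sin^2 C}$, which vanishes identically on the diagonal, so an extra branch of $\{F=0\}$ can cross the diagonal $B=C=t$ only where $\partial F/\partial B$ vanishes there; a routine differentiation reduces that condition to $\cos\frac{3t}{2}\sin t+4\sin\frac{3t}{2}\cos t=0$. Bisection in Python recovers the value $1.417079$ seen in the plot.</p>

```python
import math

def g(t):
    # transverse-derivative condition along the diagonal B = C = t
    # (obtained from dF/dB there, after clearing a nonzero factor)
    return math.cos(1.5 * t) * math.sin(t) + 4 * math.sin(1.5 * t) * math.cos(t)

lo, hi = 1.3, 1.5            # g changes sign on this bracket
assert g(lo) > 0 > g(hi)
for _ in range(60):          # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
t_star = (lo + hi) / 2
```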
|
842,271 | <p>Evaluation of $\displaystyle \int \frac{\sqrt[3]{x+\sqrt[4]{x}}}{\sqrt{x}}dx$</p>
<p>$\bf{My\; Try::}$ Let $x=t^4\;,$ Then $dx = 4t^3dt$</p>
<p>So Integral is $\displaystyle \int\frac{\sqrt[3]{t^4+t}}{t^2} \cdot 4t^3dt$</p>
<p>So Integral is $\displaystyle 4\int t^{\frac{7}{3}}\cdot (1+t^{-3})^{\frac{1}{3}}$</p>
<p>Now How can i solve after that</p>
<p>Help me</p>
<p>Thanks</p>
| achille hui | 59,379 | <p>Let $\mathcal{I}$ be the integral. You can actually evaluate it using the substitution $x = t^4$.</p>
<p>$$\mathcal{I} =
\int\frac{\sqrt[3]{x + \sqrt[4]{x}}}{\sqrt{x}}dx
= \int\frac{\sqrt[3]{(t^3 + 1)t}}{t^2}4t^3dt
= 4\int\sqrt[3]{1+t^3}t^{4/3}dt
$$
For $|x| < 1$, we can expand the integrand at RHS using following expansion</p>
<p>$$\frac{1}{(1-t)^\gamma} = \sum_{k=0}^\infty \frac{(\gamma)_k}{k!} t^k$$</p>
<p>where $(\gamma)_k = \gamma(\gamma+1)\cdots(\gamma+k-1)$ is the rising <a href="http://en.wikipedia.org/wiki/Pochhammer_symbol">Pochhammer symbol</a>. This gives us</p>
<p>$$
\mathcal{I}
= 4\int\left(\sum_{k=0}^\infty \frac{(-1)^k (-\frac13)_k}{k!}t^{3k}\right)t^{4/3}dt
= 4 \sum_{k=0}^\infty \frac{(-1)^k (-\frac13)_k}{k!}\frac{t^{3k+7/3}}{3k+7/3}
$$
Using another identity
$$\frac{(\gamma)_k}{(\gamma+1)_k} = \frac{\gamma}{\gamma+k}$$
We can transform above expression to
$$\mathcal{I}
= \frac{12}{7} t^{\frac73}
\sum_{k=0}^\infty \frac{(-\frac13)_k}{k!}\frac{(\frac79)_k}{(\frac{16}{9})_k}(-t^3)^k
$$
The expansion in the right is that for a
<a href="http://en.wikipedia.org/wiki/Hypergeometric_function">hypergeometric function ${}_2F_1$</a>. As a result, up to an integration constant, we have</p>
<p>$$\mathcal{I}
= \frac{12}{7} t^{\frac73} {}_2F_1\left( -\frac13, \frac79 ; \frac{16}{9}; -t^3 \right)
= \frac{12}{7} x^{\frac{7}{12}} {}_2F_1\left( -\frac13, \frac79 ; \frac{16}{9}; -x^{\frac34} \right)$$</p>
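<p>A stdlib-Python numeric check of this antiderivative (my addition), valid for $0<t<1$ where the binomial series for $(1+t^3)^{1/3}$ converges: compare the truncated series with direct quadrature of $4t^{4/3}(1+t^3)^{1/3}$.</p>

```python
import math

def F(t, terms=60):
    # F(t) = 4 * sum_k (-1)^k (-1/3)_k / k! * t^(3k + 7/3) / (3k + 7/3)
    total, poch, fact = 0.0, 1.0, 1.0      # (-1/3)_0 = 0! = 1
    for k in range(terms):
        total += (-1) ** k * poch / fact * t ** (3 * k + 7 / 3) / (3 * k + 7 / 3)
        poch *= -1 / 3 + k                 # extend the Pochhammer product
        fact *= k + 1
    return 4 * total

def integrand(t):
    return 4 * t ** (4 / 3) * (1 + t ** 3) ** (1 / 3)

def simpson(f, a, b, n=2000):              # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

a, b = 0.2, 0.8
series_value = F(b) - F(a)
quad_value = simpson(integrand, a, b)
```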
|
<p>I am working with a theorem and I need a reference for the above limit.
Kindly guide.</p>
| Peter Szilas | 408,605 | <p>$0<k<1$, then $1/k >1.$</p>
<p>$1/k= :(1+x)$, with $x \gt 0.$</p>
<p>$(1/k)^n = (1+x)^n \ge 1 +nx.$
(Bernoulli's inequality)</p>
<p>Let $M \gt 1/\epsilon.$</p>
<p>Choose $n_0$ such that $1+n_0x \gt M.$
(Archimedes)</p>
<p>For $n \ge n_0 :$</p>
<p>$(1/k)^n = (1+x)^n \ge 1+nx \gt M \gt 1/\epsilon$,</p>
<p>$\rightarrow$</p>
<p>$k^n \lt \epsilon$.</p>
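<p>The argument is fully constructive; a tiny Python illustration (my addition) computes an explicit $n_0$ from Bernoulli's inequality and checks that it works:</p>

```python
import math

def n0_from_proof(k, eps):
    # choose n0 with 1 + n0*x > 1/eps, where x = 1/k - 1 > 0; Bernoulli
    # then gives (1/k)^n0 >= 1 + n0*x > 1/eps, i.e. k^n0 < eps
    x = 1 / k - 1
    return math.floor((1 / eps - 1) / x) + 1

k, eps = 0.9, 1e-6
n0 = n0_from_proof(k, eps)
```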
|
2,293,600 | <p>How to calculate X $\cap$ $\{X\}$ for finite sets to develop an intuition for intersections?</p>
<p>If $X$ = $\{$1,2,3$\}$, then what is $X$ $\cap$ $\{X\}$? </p>
| Dragonite | 395,130 | <p>As far as developing intuition for intersection, the idea of $A \cap B$ are the elements that $A$ and $B$ both have in common. So if we're looking at $X \cap \left\{ X \right\}$ where $X = \left\{ 1,2,3 \right\}$ then it is a matter of
$$X \cap \left\{ X \right\} = \left\{ 1,2,3 \right\} \cap \left\{ \left\{ 1,2,3 \right\} \right\}.$$
However, there is a rather subtle difference here between the left and right side of the intersection. The left side, $\left\{ 1,2,3\right\} = X$, is the set at hand. Where the right side, $\left\{ X \right\}$ is viewing the <em>set</em> $X$ as an <em>element</em>, which is different than $X$ itself, so they have nothing in common. Hence,
$$X \cap \left\{ X \right\} = \emptyset.$$</p>
|
2,293,600 | <p>How to calculate X $\cap$ $\{X\}$ for finite sets to develop an intuition for intersections?</p>
<p>If $X$ = $\{$1,2,3$\}$, then what is $X$ $\cap$ $\{X\}$? </p>
| fleablood | 280,126 | <p>$\{X\}$ contains one element and one element only. So as $E \cap F \subset F$ we know $E \cap \{X\} \subset \{X\}$. So either $E \cap \{X\} = \{X\}$ if $X \in E$ or $E \cap \{X\} = \emptyset$ if $X \not \in E$. </p>
<p>It violates the axioms of set theory to have a set such that $X \in X$ (a set can't be an element of itself). So $X \notin X$, and hence $X \cap \{X\} = \emptyset$.</p>
<p>As per your example.</p>
<p>$\{1,2,3\} \cap \{\{1,2,3\}\}$.... $1 \not \in \{\{1,2,3\}\}$, $2 \not \in \{\{1,2,3\}\}$, $3 \not \in \{\{1,2,3\}\}$, and $\{1,2,3\} \not \in \{1,2,3\}$. And $1,2,3,\{1,2,3\}$ are everything in either set and none of them are in both sets. So $\{1,2,3\} \cap \{\{1,2,3\}\} = \emptyset$.</p>
<p>Note: It doesn't matter if $X$ is finite or infinite or empty. $X \cap \{X\} = \emptyset$. (Unless you ignore the axiom that $A \not \in A$.)</p>
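<p>The distinction is easy to see concretely in Python (my addition), using <code>frozenset</code> so that a set can itself be an element:</p>

```python
X = frozenset({1, 2, 3})   # the set X
S = frozenset({X})         # the set {X}: its single element is X itself

# X's elements are 1, 2, 3; S's only element is the set X, so they share nothing
intersection = X & S
```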
|
3,613,120 | <p>In elementary algebra and beyond, we are taught to use a sequence of equations to derive a relationship. For instance, to show that <span class="math-container">$a \le 2b - 1$</span> follows from <span class="math-container">$\frac{a+1}{2} = b$</span>, one would use the following sequence of equations, where each equation follows from the previous one.</p>
<p><span class="math-container">$$
\begin{align}
\frac{a + 1}{2} &= b
\newline
a + 1 &= 2b
\newline
a &= 2b-1
\newline
a &\le 2b - 1
\end{align}
$$</span></p>
<p>This notation has always seemed inadequate to me. In particular, when I don't want to waste paper I find myself placing multiple equations in one row with an arrow in between them.</p>
<p><span class="math-container">$$a = b \Rightarrow c = d $$</span></p>
<p>This looks nice but does not mean what I want it to mean since in logic</p>
<p><span class="math-container">$$a \Rightarrow b \Rightarrow c $$</span></p>
<p>evaluates to a single value,</p>
<p><span class="math-container">$$a \Rightarrow (b \Rightarrow c) $$</span></p>
<p>whereas I am using it as a short hand for something like</p>
<p><span class="math-container">$$
\begin{align}
&1. \:& & a
\newline
&2. \:& & a \Rightarrow b
\newline
&3. \:& & b \Rightarrow c
\newline
&4. \:& \therefore \: & \: c
\end{align}
$$</span></p>
<p>I've sometimes used the symbol <span class="math-container">$\rightarrow$</span> as in</p>
<p><span class="math-container">$$a = b \rightarrow c = d $$</span></p>
<p>but what I really want is to chain implications so that I am emphasizing the nature of my derivation as being a logical progression.</p>
<p>I've also used <span class="math-container">$\equiv$</span> as in</p>
<p><span class="math-container">$$a = b \equiv c = d $$</span></p>
<p>but this does not work in many situations such as deriving <span class="math-container">$a \le 2b - 1$</span> from <span class="math-container">$a = 2b - 1$</span>.</p>
<p>Am I overthinking this? Should I just use the typical chain of equations with no symbol to represent the relationship between those equations? Are any of the aforementioned shorthand notations appropriate?</p>
<p>Thanks</p>
| rae306 | 168,956 | <p>This holds in general:</p>
<blockquote>
<p>If <span class="math-container">$f\in C[0,1]$</span>, then the sequence of Bernstein polynomials <span class="math-container">$B_nf$</span> converges uniformly to <span class="math-container">$f$</span> on <span class="math-container">$[0,1]$</span>.</p>
</blockquote>
<p>Its proof is pretty involved: see Ross, Elementary Analysis, theorem <span class="math-container">$27.4$</span>.</p>
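<p>A quick numeric illustration of the statement (my addition): for $f(x)=\lvert x-\tfrac12\rvert$ the sup-distance between the Bernstein polynomial $B_nf$ and $f$ on $[0,1]$ visibly shrinks as $n$ grows.</p>

```python
import math

def bernstein(f, n, x):
    # B_n f(x) = sum_{k=0}^n f(k/n) C(n,k) x^k (1-x)^(n-k)
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)
grid = [i / 500 for i in range(501)]
err = {n: max(abs(bernstein(f, n, x) - f(x)) for x in grid) for n in (10, 50, 200)}
```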
|
3,613,120 | <p>In elementary algebra and beyond, we are taught to use a sequence of equations to derive a relationship. For instance, to show that <span class="math-container">$a \le 2b - 1$</span> follows from <span class="math-container">$\frac{a+1}{2} = b$</span>, one would use the following sequence of equations, where each equation follows from the previous one.</p>
<p><span class="math-container">$$
\begin{align}
\frac{a + 1}{2} &= b
\newline
a + 1 &= 2b
\newline
a &= 2b-1
\newline
a &\le 2b - 1
\end{align}
$$</span></p>
<p>This notation has always seemed inadequate to me. In particular, when I don't want to waste paper I find myself placing multiple equations in one row with an arrow in between them.</p>
<p><span class="math-container">$$a = b \Rightarrow c = d $$</span></p>
<p>This looks nice but does not mean what I want it to mean since in logic</p>
<p><span class="math-container">$$a \Rightarrow b \Rightarrow c $$</span></p>
<p>evaluates to a single value,</p>
<p><span class="math-container">$$a \Rightarrow (b \Rightarrow c) $$</span></p>
<p>whereas I am using it as a short hand for something like</p>
<p><span class="math-container">$$
\begin{align}
&1. \:& & a
\newline
&2. \:& & a \Rightarrow b
\newline
&3. \:& & b \Rightarrow c
\newline
&4. \:& \therefore \: & \: c
\end{align}
$$</span></p>
<p>I've sometimes used the symbol <span class="math-container">$\rightarrow$</span> as in</p>
<p><span class="math-container">$$a = b \rightarrow c = d $$</span></p>
<p>but what I really want is to chain implications so that I am emphasizing the nature of my derivation as being a logical progression.</p>
<p>I've also used <span class="math-container">$\equiv$</span> as in</p>
<p><span class="math-container">$$a = b \equiv c = d $$</span></p>
<p>but this does not work in many situations such as deriving <span class="math-container">$a \le 2b - 1$</span> from <span class="math-container">$a = 2b - 1$</span>.</p>
<p>Am I overthinking this? Should I just use the typical chain of equations with no symbol to represent the relationship between those equations? Are any of the aforementioned shorthand notations appropriate?</p>
<p>Thanks</p>
| s.harp | 152,424 | <p>The first important thing to see is that <span class="math-container">$n(1-e^{1/n})\to1$</span>, as <span class="math-container">$n\to\infty$</span>, which you can show in any way you like. Now the following lemma instantly gives you the result you are looking for:</p>
<blockquote>
<p><strong>Lemma.</strong> Let <span class="math-container">$a_n$</span> be a sequence converging to <span class="math-container">$a$</span>. Then the functions
<span class="math-container">$$ f_n : \Bbb R\to\Bbb R, \quad x\mapsto \left(1+ \frac{a_n x}{n}\right)^n$$</span>
converge uniformly on compacta to the function <span class="math-container">$x\mapsto e^{ax}$</span>.</p>
</blockquote>
<p>To prove the lemma we will show uniform convergence on a set of the form <span class="math-container">$[-R,R]$</span>. Now consider <span class="math-container">$x\in[-R,R]$</span>:
<span class="math-container">$$\left|\left(1+ \frac{a_n x}n\right)^n - e^{a x}\right| ≤ \sum_{k=0}^N \left| \frac{n!}{(n-k)!n^k} \,a_n^k - a^k\ \right| \frac{R^k}{k!} + \sum_{k=N+1}^n \left|\frac{n!}{(n-k)!} \frac{a_n^k}{n^k}\right|\frac{R^k}{k!} +\sum_{k=N+1}^\infty |a|^k\frac{R^k}{k!}.$$</span></p>
<p>Now we just understand these terms and get the result. Note that for any fixed <span class="math-container">$N$</span> the first summand goes to <span class="math-container">$0$</span> as <span class="math-container">$n\to\infty$</span>. For the second summand notice that the <span class="math-container">$\frac{n!}{(n-k)!} \frac{a_n^k}{n^k}$</span> term will for <span class="math-container">$n$</span> large enough be smaller than <span class="math-container">$(|a|+1)^k$</span>, hence this summand can be bounded by <span class="math-container">$\sum_{k=N+1}^\infty \frac{(|a|+1)^k R^k}{k!}$</span>, which can be made as small as you please for <span class="math-container">$N$</span> large enough. Similarly the third summand can be made as small as you like; you just need to choose <span class="math-container">$N$</span> large enough.</p>
<p>This yields a bound for the difference that can be made as small as you want, independently of <span class="math-container">$x$</span> (provided <span class="math-container">$|x|≤R$</span>), giving the desired result.</p>
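<p>A numeric illustration of the lemma (my addition), with the concrete choice $a_n=n(e^{1/n}-1)\to 1$, so that the limit function is $e^x$; the sup-distance on $[-2,2]$ shrinks as $n$ grows.</p>

```python
import math

def sup_dist(n, R=2.0, grid=400):
    a_n = n * (math.exp(1 / n) - 1)        # a_n -> 1
    xs = [-R + 2 * R * i / grid for i in range(grid + 1)]
    return max(abs((1 + a_n * x / n) ** n - math.exp(x)) for x in xs)

errors = [sup_dist(n) for n in (10, 100, 1000)]
```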
|
354,124 | <p>I was stumbled with a basic calculus question by a friend.</p>
<p>The question first asks to find unit vectors $v,w$ s.t $|u+v|$ is
maximal and $|u-w|$ is minimal where $u=(-2,5,3)$.</p>
<p>Then the question asks to find unit vectors $v,w$ s.t $u\cdot v$
is maximal and $|u\cdot w|$ is minimal.</p>
<p>It's easy to write out the equations in all parts, for example for
the first part: denote $v=(x,y,z)$ then we wish to find the maximum
$$(z+3)^{2}+(y+5)^{2}+(x-2)^{2}$$ under $$x^{2}+y^{2}+z^{2}=1$$</p>
<p>and similarly for the second part with the minimum. But this doesn't
seem like the right way to go at this, since I only know to solve
such a question with Lagrange multipliers, and they didn't study this
(yet). </p>
<p>Can anyone please help point me out in the right direction ?</p>
| Eckhard | 53,115 | <p>The right direction is to consider vectors $v$ and $w$ which are collinear with $u$.</p>
<p>For the second part of the question, observe that $u\cdot v=|u||v|\cos\left(\angle(u,v)\right)$. In your case, the absolute values of $u$ and $v$ are fixed, so the values of $u\cdot v$ and $|u\cdot w|$ depend only on the angle between these two vectors.</p>
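<p>Concretely, for $u=(-2,5,3)$ the maximizing unit vector is $v=u/\lvert u\rvert$ with $u\cdot v=\lvert u\rvert=\sqrt{38}$; a quick random-sampling check in Python (my addition):</p>

```python
import math, random

u = (-2.0, 5.0, 3.0)
norm_u = math.sqrt(sum(c * c for c in u))    # sqrt(38)
v_best = tuple(c / norm_u for c in u)        # unit vector along u

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

random.seed(2)
best_random = 0.0
for _ in range(20000):
    w = [random.gauss(0, 1) for _ in range(3)]   # random direction in space
    n = math.sqrt(sum(c * c for c in w))
    best_random = max(best_random, dot(u, [c / n for c in w]))
```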
|
2,752,511 | <p>Prove that if $X$ is Hausdorff, $\Delta=\{(x, x)\mid x\in X\}$ is closed in $X\times X$ (with the product topology).</p>
<p><strong>My attempt:</strong></p>
<p>Let $x_1, x_2\in X$ s.t. $x_1\ne x_2$.</p>
<p>There exist neighborhoods $U_1$ and $U_2$ of $x_1$ and $x_2$ that are disjoint.</p>
<p>$U_1\times U_2$ is a basis element in the product topology on $X\times X$. So, $U_1\times U_2$ is open in $X\times X$.</p>
<p>Let $x\in X$. </p>
<p>$(x, x)\in U_1\times U_2\implies x\in U_1$ and $x\in U_2\implies x\in U_1\cap U_2$, which contradicts the fact that $U_1$ and $U_2$ are disjoint.</p>
<p>So, $(x, x)\notin U_1\times U_2$.</p>
<p>I feel that I'm on the right track but don't know how to proceed. Could someone please help me out?</p>
| TheMagicSnoot | 555,837 | <p>You're basically there, you just need to interpret your result. You found that for any point $(x_1,x_2)\in X\times X-\Delta$, there exists a neighborhood of $(x_1,x_2)$ contained in $X\times X-\Delta$. That is, $X\times X-\Delta$ is open. Therefore $\Delta$ is...</p>
|
1,497,898 | <p>Consider the polynomial $$f(x)=x^4-x^3+14x^2+5x+16$$ and $\mathbb{F}_p$ be the field with $p$ elements, where $p$ is prime. Then</p>
<ol>
<li>Considering $f$ as a polynomial with coefficients in $\mathbb{F_3}$, it has no roots in $\mathbb{F_3}$.</li>
<li><p>Considering $f$ as a polynomial with coefficients in $\mathbb{F_3}$, it is a product of two irreducible factors of degree $2$ over $\mathbb{F_3}$.</p></li>
<li><p>Considering $f$ as a polynomial with coefficients in $\mathbb{F_7}$, it has an irreducible factor of degree $3$.</p></li>
<li>$f$ is a product of two polynomials of degree two over $\mathbb{Z}$.</li>
</ol>
<p><strong>My work:</strong>
$$f(x)=x^4+2x^3+2x^2+2x+1$$ in $\mathbb{F_3}$
which can be written as
$$f(x)=(x^2+1)(x+1)^2$$ hence $1$ and $2$ are wrong. </p>
<p>In $\mathbb{F_7}$, we can write</p>
<p>$$f(x)=x^4+6x^3+5x+2,$$ which is reducible, but I am unable to settle $3$ precisely, and I am also having trouble with $4$. </p>
<p>Am I right with my conclusions? Help me out. </p>
| 2'5 9'2 | 11,123 | <p>To see if the 4th degree polynomial over $\mathbb{Z}_7$ has an irreducible 3rd degree factor, simply check to see if it has a root in $\mathbb{Z}_7$ first by trying all $7$ options. If so, divide by the corresponding factor to leave you with a cubic. Now either the cubic is irreducible, or it has a root in $\mathbb{Z}_7$. Again, try all $7$ options to see.</p>
<p>Over $\mathbb{Z}$, you can look for linear factors by looking for integer roots. And you can look for those by applying the rational root theorem. If you find none, you can move on to trying to represent it as $(x^2+ax+b)(x^2+cx+d)$. You can use your factorization over $\mathbb{Z}_3$ to refine this as $(x^2+3ax+(3b+1))(x^2+(3c+2)x+(3d+1))$, if that helps. Multiply it out and you can slowly rule out all cases, or possibly find the asserted factorization. </p>
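<p>Carrying this recipe out over $\mathbb{F}_7$ in Python (my addition): the quartic $x^4+6x^3+5x+2$ turns out to have the single root $x=1$, and dividing out $x-1$ leaves $x^3+5$, which has no root mod $7$ and is therefore irreducible. That settles statement 3 (and, since the mod-$7$ factorization type is $1+3$, it also rules out a product of two quadratics over $\mathbb{Z}$).</p>

```python
p = 7
f = [2, 5, 0, 6, 1]   # x^4 + 6x^3 + 0x^2 + 5x + 2, lowest degree first

def evaluate(poly, x):
    return sum(c * x ** i for i, c in enumerate(poly)) % p

def roots(poly):
    return [x for x in range(p) if evaluate(poly, x) == 0]

def divide_by_root(poly, r):
    # synthetic division of poly by (x - r) over F_p; exact when r is a root
    out, carry = [], 0
    for c in reversed(poly):          # from the leading coefficient down
        carry = (carry * r + c) % p
        out.append(carry)
    assert out[-1] == 0               # remainder must vanish
    return list(reversed(out[:-1]))

quartic_roots = roots(f)                         # [1]
cubic = divide_by_root(f, quartic_roots[0])      # x^3 + 5
cubic_roots = roots(cubic)                       # []: no roots => degree-3 irreducible
```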
|
1,557,165 | <p>Prove that
$$\int_1^\infty\frac{e^x}{x (e^x+1)}dx$$
does not converge.</p>
<p>How can I do that? I thought about turning it into the form of $\int_b^\infty\frac{dx}{x^a}$, but I find no easy way to get rid of the $e^x$.</p>
| Martin Argerami | 22,857 | <p>For $x\geq1$, you have $e^x>1$, so $2e^x>e^x+1$, or
$$
\frac {e^x}{e^x+1}>\frac12.
$$
Then
$$
\int_1^\infty \frac {e^x}{x (e^x+1)}\,dx\geq\int_1^\infty\frac1x\,dx=\infty .
$$</p>
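<p>A numeric illustration of the comparison (my addition): the partial integrals $\int_1^B$ keep growing and stay above the lower bound $\tfrac12\ln B$, exactly as the estimate predicts.</p>

```python
import math

def integrand(x):
    # e^x / (x (e^x + 1)) rewritten as 1 / (x (1 + e^(-x))) to avoid overflow
    return 1.0 / (x * (1.0 + math.exp(-x)))

def simpson(f, a, b, n=20000):        # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

partials = {B: simpson(integrand, 1.0, B) for B in (10.0, 100.0, 1000.0)}
```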
|
898,151 | <p>I have encountered an statement several times while proving determinant of a block matrix. </p>
<blockquote>
<p>$$\det\pmatrix{A&0\\0&D}\; = \det(A)\det(D)$$</p>
</blockquote>
<p>where $A$ is $k\times k$ and $D$ is $n\times n$ matrix. How to prove this?</p>
<p>Thanks in advance.</p>
| Community | -1 | <p>This is more general result
$$\det\pmatrix{A&B\\0&D} = \det A\det D$$
and to prove it notice that</p>
<p>$$\pmatrix{A&B\\0&D}=\pmatrix{I_k&0\\0&D}\pmatrix{A&B\\0&I_n}$$
and expanding along the first $k$ rows we find
$$\det\pmatrix{I_k&0\\0&D}=\det D$$
and expanding along the last $n$ rows we find
$$\det\pmatrix{A&B\\0&I_n}=\det A$$
and the result follows.</p>
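<p>A quick numeric sanity check of the block-diagonal case (a sketch with a naive Laplace-expansion determinant, fine for small matrices; names are mine):</p>

```python
import random

def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def block_diag(A, D):
    k, n = len(A), len(D)
    return [row + [0] * n for row in A] + [[0] * k + row for row in D]

random.seed(0)
A = [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
D = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
assert det(block_diag(A, D)) == det(A) * det(D)
print(det(A), det(D), det(block_diag(A, D)))
```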
|
1,029,485 | <p>I wish to show the following statement:</p>
<p>$
\forall x,y \in \mathbb{R}
$</p>
<p>$$
(x+y)^4 \leq 8(x^4 + y^4)
$$</p>
<p>What is the scope for generalisation?</p>
<p><strong>Edit:</strong></p>
<p>Apparently the above inequality can be shown using the Cauchy-Schwarz inequality. Could someone please elaborate, stating the vectors you are using in the Cauchy-Schwarz inequality: </p>
<p>$\ \ \forall \ \ v,w \in V, $ an inner product space,</p>
<p>$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
<p>where $\langle v,w\rangle$ is an inner product.</p>
| Joel | 85,072 | <p>If you instead consider $$\left( \frac{x}{2} + \frac{y}{2} \right)^4$$ we know that the function $(\cdot)^4$ is convex. This leads to: $$\left( \frac{x}{2} + \frac{y}{2} \right)^4 \le \frac12 x^4 + \frac12 y^4$$</p>
<p>Multiply both sides by $16$ and we have: $$(x+y)^4 \le 8x^4 + 8y^4.$$</p>
<p>This process works as long as $(\cdot)^p$ is convex, which holds precisely when $p \ge 1$.</p>
<p>For $x, y \ge 0$, you can show that $(x+y)^p \le x^p + y^p$ when $0 < p < 1$ by other means.</p>
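<p>A quick random check of the inequality (for the Cauchy–Schwarz route asked about, one can take $v=(1,1)$, $w=(x,y)$ to get $(x+y)^2\le 2(x^2+y^2)$ and apply it twice, which also produces the constant $8 = 2\cdot 2^2$). Note equality holds at $x=y$:</p>

```python
import random

random.seed(1)
for _ in range(10_000):
    x = random.uniform(-100.0, 100.0)
    y = random.uniform(-100.0, 100.0)
    assert (x + y) ** 4 <= 8.0 * (x ** 4 + y ** 4) + 1e-6  # small float tolerance

# equality case x = y: (2x)^4 = 16 x^4 = 8 (x^4 + x^4)
assert (3.0 + 3.0) ** 4 == 8.0 * (3.0 ** 4 + 3.0 ** 4)
print("inequality holds on all sampled points")
```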
|
187,545 | <p><span class="math-container">$\DeclareMathOperator\GL{GL}\DeclareMathOperator\L{\mathfrak{L}}$</span>The free Lie algebra <span class="math-container">$\L(V)$</span> generated by an <span class="math-container">$r$</span>-dimensional vector space <span class="math-container">$V$</span> is, in the
language of <a href="https://en.wikipedia.org/wiki/Free_Lie_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Free_Lie_algebra</a>,
the free Lie algebra generated by any choice of basis <span class="math-container">$e_1, \ldots , e_r$</span> for the vector space <span class="math-container">$V$</span>. (Work over the field <span class="math-container">${\mathbb R}$</span> or <span class="math-container">${\mathbb C}$</span>, whichever you prefer.)
It is a graded Lie algebra<br />
<span class="math-container">$$\L(V) = V \oplus \L_2 (V) \oplus \L_3 (V) \oplus \ldots .$$</span>
The general linear group <span class="math-container">$\GL(V)$</span> of <span class="math-container">$V$</span> acts on <span class="math-container">$\L(V)$</span> by gradation-preserving Lie algebra automorphisms.
Thus each graded piece <span class="math-container">$\L_k (V)$</span> is a finite dimensional
representation space for <span class="math-container">$\GL(V)$</span>. (The `weight' of <span class="math-container">$\L_k (V)$</span> is <span class="math-container">$k$</span> in the sense that <span class="math-container">$\lambda \mathrm{Id} \in \GL(V)$</span> acts on <span class="math-container">$\L_k (V)$</span> by scalar multiplication by <span class="math-container">$\lambda^k$</span>.)
QUESTION: How does <span class="math-container">$\L_k (V)$</span> break up into <span class="math-container">$\GL(V)$</span>-irreducibles?</p>
<p>I only really know that <span class="math-container">$\L_2 (V) = \Lambda ^2 (V)$</span>, which is already irreducible.</p>
<p>To start the game off, perhaps some reader out there already is familiar with <span class="math-container">$\L_3 (V)$</span>
as a <span class="math-container">$\GL(V)$</span>-rep, and can tell me its irreps in terms of the Young diagrams / Schur theory involving 3 symbols?</p>
<p>(My motivation arises from trying to understand some details of the subRiemannian geometry <a href="https://en.wikipedia.org/wiki/Sub-Riemannian_manifold" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sub-Riemannian_manifold</a> of the Carnot group whose Lie algebra is the free <span class="math-container">$k$</span>-step Lie algebra, which is <span class="math-container">$\L(V)$</span>-truncated after step <span class="math-container">$k$</span>. )</p>
| Vladimir Dotsenko | 1,306 | <p>The Whitehouse module referred to in one of the other answers is not necessary, since it is related to the <em>cyclic</em> operad Lie, that is to the representation of <span class="math-container">$S_{n+1}$</span> in <span class="math-container">$Lie(n)$</span>.</p>
<p>The decomposition in terms of Young diagrams is, as far as I understand, first done in a paper of Kraskiewicz and Weyman (preprint W. Kraskiewicz, J. Weyman. Algebra of Invariants and the Action of a Coxeter Element. Math. Inst. Copernicus Univ. Chopina, Torun (1987), published W. Kraskiewicz, J. Weyman. Algebra of invariants and the action of a Coxeter element. Bayreuth. Math. Schr., 63 (2001)). Neither the preprint nor the published version are easily available, but lots of sources online discuss this topic. One set of slides that I found on Google right now which seems to contain all the statements you might possibly need is <a href="https://personalpages.manchester.ac.uk/staff/Marianne.Johnson/Antalyaslides.pdf" rel="nofollow noreferrer">https://personalpages.manchester.ac.uk/staff/Marianne.Johnson/Antalyaslides.pdf</a></p>
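<p>A useful sanity check on any proposed decomposition into irreducibles: the total dimension of the graded piece $L_k(V)$ for $\dim V = r$ is given by Witt's necklace formula $\dim L_k(V) = \frac1k\sum_{d\mid k}\mu(d)\,r^{k/d}$ (a standard fact, not proved in this thread), so the dimensions of the Young-diagram pieces must add up to it. A small Python sketch:</p>

```python
def mobius(n):
    # Moebius function via trial factorization
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0        # square factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def dim_free_lie(r, k):
    # Witt's formula: dim of the k-th graded piece of the free Lie algebra on r generators
    return sum(mobius(d) * r ** (k // d) for d in range(1, k + 1) if k % d == 0) // k

print([dim_free_lie(2, k) for k in range(1, 7)])  # [2, 1, 2, 3, 6, 9]
assert dim_free_lie(3, 2) == 3 * 2 // 2           # matches dim of Lambda^2 V for r = 3
```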
|
3,275,423 | <p>How do I see that for a <span class="math-container">$K$</span>-vector space <span class="math-container">$V$</span> the map</p>
<blockquote>
<p><span class="math-container">$\bigwedge^d(V^*) \times \bigwedge^d(V) \rightarrow K, (f_1 \wedge ... \wedge f_d, x_1 \wedge ... \wedge x_d) \mapsto \det\left((f_i(x_j))_{i,j}\right)$</span></p>
</blockquote>
<p>is bilinear?</p>
| Berci | 41,488 | <p>The given formula certainly gives a well-defined mapping
<span class="math-container">$$\varphi:V^*\times\dots\times V^*\ \times\ V\times\dots\times V \longrightarrow K$$</span>
Fixing all but one argument makes the matrix of the determinant vary linearly in one row or column. <br>
This shows that <span class="math-container">$\varphi$</span> is <em>multilinear</em>, so that it factors through <span class="math-container">$$V^*\otimes\dots\otimes V^*\ \otimes\ V\otimes\dots\otimes V \longrightarrow K$$</span>
Finally, if <span class="math-container">$f_i=f_j$</span> [or <span class="math-container">$x_i=x_j$</span>] with <span class="math-container">$i\ne j$</span>, the matrix of the determinant will have two identical rows [columns], which shows that restricting to the first [second] <span class="math-container">$d$</span> variables gives an <em>alternating</em> multilinear map, and hence it factors through
<span class="math-container">$$(V^*\land\dots\land V^*)\ \otimes\ (V\land\dots\land V)\ .$$</span></p>
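<p>The multilinearity step can be checked numerically: the pairing $\det\left((f_i(x_j))_{i,j}\right)$ is linear in each vector slot. A small Python sketch for $d=2$, $\dim V=3$ (all names and the particular vectors are mine):</p>

```python
def pairing(fs, xs):
    # fs: list of 2 covectors, xs: list of 2 vectors; det of the 2x2 matrix (f_i(x_j))
    M = [[sum(f[t] * x[t] for t in range(len(x))) for x in xs] for f in fs]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

fs = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]]
x1, x1b, x2 = [1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [2.0, 2.0, 0.0]
a, b = 2.5, -0.75

# linearity in the first vector slot
lhs = pairing(fs, [[a * u + b * v for u, v in zip(x1, x1b)], x2])
rhs = a * pairing(fs, [x1, x2]) + b * pairing(fs, [x1b, x2])
assert abs(lhs - rhs) < 1e-9
print("linear in the first vector slot:", lhs, rhs)
```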
|
716,121 | <p>Construct the matrix corresponding to a rotation of 90 degrees about the y-axis together with a reflection about the (x,z) plane. </p>
<p>Reviewing Linear Algebra and seem to have forgotten some stuff. Not sure what to do with this problem</p>
| Klaas van Aarsen | 134,550 | <p>Figure out what the images are of each of the 3 standard unit vectors.
Put them next to each other in a matrix... and presto! :)</p>
|
716,121 | <p>Construct the matrix corresponding to a rotation of 90 degrees about the y-axis together with a reflection about the (x,z) plane. </p>
<p>Reviewing Linear Algebra and seem to have forgotten some stuff. Not sure what to do with this problem</p>
| jamisans | 102,913 | <p>I'm assuming this is in 3-space, with the rotation taken CCW about the positive $y$-axis (right-hand rule). We can figure out the matrix for this transformation by seeing where it sends the standard basis vectors. $[1, 0, 0]^T$ gets rotated to $[0, 0, -1]^T$, which is fixed by the reflection since it lies in the $(x,z)$ plane. $[0, 1, 0]^T$ is fixed by the rotation (it is the rotation axis) and reflected to $[0, -1, 0]^T$. $[0, 0, 1]^T$ gets rotated to $[1, 0, 0]^T$, which is again fixed by the reflection. Put these transformed basis vectors next to each other as columns and you have your matrix. </p>
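<p>Conventions differ (CW vs. CCW, and which operation is applied first), but under the right-hand-rule CCW rotation about $+y$ the composite matrix can be computed directly; the reflection and the $y$-axis rotation in fact commute, so the order does not matter here. A Python sketch:</p>

```python
import math

def rot_y(theta):
    # right-hand-rule rotation about the +y axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

REFLECT_XZ = [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

M = matmul(REFLECT_XZ, rot_y(math.pi / 2))
for row in M:
    print([round(v) for v in row])
# with this convention the rows come out as [0, 0, 1], [0, -1, 0], [-1, 0, 0]
```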
|
397,274 | <p>Suppose you have a group isomorphism given by the first isomorphism theorem:</p>
<p><span class="math-container">$$G/\ker(\phi) \simeq \operatorname{im}(\phi)$$</span></p>
<p>What can we say about the group <span class="math-container">$\ker(\phi)\times \operatorname{im}(\phi)$</span>? In particular, when does the following hold:</p>
<p><span class="math-container">$$G\simeq \ker(\phi)\times \operatorname{im}(\phi)?$$</span></p>
<p>I ask this question because i want to prove that <span class="math-container">$GL_n^+(\mathbb{R}) \simeq SL_n(\mathbb{R}) \times \mathbb{R}^*_{>0}$</span>, with <span class="math-container">$GL_n^+(\mathbb{R})$</span> the group of matrices with positive determinant. I proved that <span class="math-container">$SL_n(\mathbb{R})$</span> is a normal subgroup and that <span class="math-container">$GL_n^+(\mathbb{R})/ SL_n(\mathbb{R}) \simeq \mathbb{R}^*_{>0}$</span>, using the surjective homomorphism <span class="math-container">$\det(M)$</span>. I tried something with semidirect products but I got stuck.</p>
| Andrea Marino | 177,070 | <p>In your special case you actually have a morphism <span class="math-container">$GL_n ^+ \to SL_n \times \mathbb{R}^+ $</span> given by</p>
<p><span class="math-container">$$ M \mapsto (M/(\det M)^{1/n}, \det M) $$</span></p>
<p>the inverse being given by <span class="math-container">$(N, t) \mapsto t^{1/n} N$</span>.</p>
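<p>A quick numeric check of this map for $n=2$ (function names are mine; the square root is the $n$-th root here since $n=2$, and it exists because $\det M > 0$):</p>

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def phi(M):
    # the map M -> (M / (det M)^{1/n}, det M) from the answer, for n = 2
    d = det2(M)
    s = math.sqrt(d)                  # requires det M > 0, i.e. M in GL_2^+
    return [[m / s for m in row] for row in M], d

def phi_inv(N, t):
    s = math.sqrt(t)
    return [[s * m for m in row] for row in N]

M = [[2.0, 1.0], [0.0, 3.0]]          # det = 6 > 0
N, t = phi(M)
assert abs(det2(N) - 1.0) < 1e-12     # first factor lies in SL_2
back = phi_inv(N, t)
assert all(abs(back[i][j] - M[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("phi and its inverse round-trip on this example")
```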
|
2,281,510 | <p>Why do we replace y by x and then calculate y for calculating the inverse of a function?</p>
<p>So, my teacher said that in order to find the inverse of any function, we need to replace y by x and x by y and then calculate y. The reason being inverse takes y as input and produces x as output.</p>
<p>My question is-</p>
<p>Why do we have to calculate y after swapping? I do not get this part.</p>
| Ahmed S. Attaalla | 229,023 | <p>Let $f(x)=y$, we would like to find $f^{-1}(x)$, to do this note by definition:</p>
<p>$$f(f^{-1}(x))=x$$</p>
<p>If we for the moment call $f^{-1}(x)$ as $y$, then by solving,</p>
<p>$$f(y)=x$$</p>
<p>For $y$ we have found $f^{-1}(x)$. Note in the above we have switched $x$ with $y$ and vice versa.</p>
<p>Actually I think it is sloppy to call $f^{-1}(x)$ as $y$, because it was already defined before. In my opinion it's better to call it $u$ then solve,</p>
<p>$$f(u)=x$$</p>
<p>For $u$, which is doing the same thing.</p>
<p>If $g(x,y)=0$ with $y=f(x)$, then we plug in $f^{-1}(x)$ for $x$ to get,</p>
<p>$$g(f^{-1}(x),x)=0$$</p>
<p>Now for the moment call $f^{-1}(x)$ as $u$, then solving,</p>
<p>$$g(u,x)=0$$</p>
<p>For $u$ or equivalently, </p>
<p>$$g(y,x)=0$$</p>
<p>For $y$ gives $f^{-1}(x)$. Notice again the equation we had to solve is the result of switching $x$ and $y$ in the original equation.</p>
|
1,840,778 | <p>In rectangle $ABCD$, we have $AD = 3$ and $AB = 4$. Let $M$ be the midpoint of $\overline{AB}$, and let $X$ be the point such that $MD = MX$, $\angle MDX = 77^\circ$, and $A$ and $X$ lie on opposite sides of $\overline{DM}$. Find $\angle XCD$, in degrees. </p>
<p><img src="https://i.stack.imgur.com/3TsZm.png" alt="Diagram"></p>
<p>Thanks!</p>
| Jack D'Aurizio | 44,121 | <p>Since $MC=MD=MX$, the points $C,X,D$ lie on a circle centered at $M$. The triangle $MDX$ is isosceles with apex $M$, so $\widehat{XMD}=180^\circ-2\cdot 77^\circ=26^\circ$, and by the central/inscribed angle theorem
$$\widehat{XCD}=\frac{1}{2}\widehat{XMD} = \color{red}{13^\circ}.$$</p>
|
440,791 | <p>I am trying to figure out if the infinite product <span class="math-container">$$\omega=\frac{5\sqrt{3}}{12}\prod\limits_{\substack{p\equiv 1\pmod3 \\
p\ge 13}}\left(\frac{p-2}{p-1}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\
p\ge 13}}\left(\frac{p}{p-1}\right)$$</span> is asymptotically equal to the infinite product <span class="math-container">$$c=\frac{5775}{2592\pi}\prod\limits_{\substack{p\equiv 1\pmod3 \\
p\ge 13}}\left(\frac{p(p-2)}{(p-1)^2}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\
p\ge 13}}\left(\frac{p^2}{p^2-1}\right)$$</span>.</p>
<p>So, I reformulated the products as <span class="math-container">$$\omega(x)=\frac{5\sqrt{3}}{12}\prod\limits_{\substack{p\equiv 1\pmod3 \\
13\leq p\leq x}}\left(\frac{p-2}{p-1}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\
13\leq p\leq x}}\left(\frac{p}{p-1}\right)$$</span> and <span class="math-container">$$c(x)=\frac{5775}{2592\pi}\prod\limits_{\substack{p\equiv 1\pmod3 \\
13\leq p\leq x}}\left(\frac{p(p-2)}{(p-1)^2}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\
13\leq p\leq x}}\left(\frac{p^2}{p^2-1}\right)$$</span>.
Then, taking the quotient we get
<span class="math-container">$$\frac{\omega(x)}{c(x)}=\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 1\pmod3 \\
13\leq p\leq x}}\left(1-\frac{1}{p}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\
13\leq p\leq x}}\left(1+\frac{1}{p}\right)$$</span> <span class="math-container">$$=\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 2\pmod3 \\
13\leq p\leq x}}\left(1-\frac{1}{p^2}\right)\prod\limits_{\substack{p\equiv 1\pmod3 \\
13\leq p\leq x}}\left(1-\frac{1}{p}\right)\prod\limits_{\substack{p\equiv 2\pmod3 \\13\leq p\leq x}}\left(\frac{1}{1-\frac{1}{p}}\right).$$</span></p>
<p>As <span class="math-container">$x\to\infty$</span>, from <a href="https://core.ac.uk/reader/81188410" rel="nofollow noreferrer">A. Languasco's paper</a>, we observe that the last two products in the quotient above are asymptotically equal. So, the limit of the quotient depends on the product <span class="math-container">$$\frac{216\sqrt{3}\pi}{1155}\prod\limits_{\substack{p\equiv 2\pmod3 \\
13\leq p\leq x}}\left(1-\frac{1}{p^2}\right)$$</span>, but I don't know if this converges to <span class="math-container">$1$</span> or not as <span class="math-container">$x\to\infty$</span>. I can see that the last product (except the constant) has something to do with <span class="math-container">$\frac{1}{\zeta(2)}$</span> but I don't know if it's actually smaller or greater than that.</p>
<p>Essentially, I am trying to check whether the two products are asymptotically the same.</p>
<p>I would really appreciate it if somebody could provide some hints or ideas on how to progress from here.</p>
| KConrad | 3,272 | <p>I don't know why you are restricting the products to <span class="math-container">$p \geq 13$</span> or where the factor <span class="math-container">$5\sqrt{3}/12$</span> is coming from. I am going to ignore that and discuss the following product over all primes <span class="math-container">$p$</span>:
<span class="math-container">$$
C = \prod_{p}\left(1- \frac{\chi(p)}{p-1}\right)
$$</span>
where the terms in the product are in order of increasing <span class="math-container">$p$</span> and <span class="math-container">$\chi$</span> is the nontrivial Dirichlet character mod <span class="math-container">$3$</span>, so <span class="math-container">$\chi(p) = 1$</span> if <span class="math-container">$p \equiv 1 \bmod 3$</span>, <span class="math-container">$\chi(p) = -1$</span> if <span class="math-container">$p \equiv 2 \bmod 3$</span>, and <span class="math-container">$\chi(3) = 0$</span>.</p>
<p>When <span class="math-container">$p \equiv 1 \bmod 3$</span> we have
<span class="math-container">$$
1- \frac{\chi(p)}{p-1} = 1 - \frac{1}{p-1} = \frac{p-2}{p-1}
$$</span>
and when <span class="math-container">$p \equiv 2 \bmod 3$</span> we have
<span class="math-container">$$
1- \frac{\chi(p)}{p-1} = 1 + \frac{1}{p-1} = \frac{p}{p-1}
$$</span>
When <span class="math-container">$p = 3$</span>, <span class="math-container">$\chi(p) = 0$</span>, so
<span class="math-container">$$
1- \frac{\chi(p)}{p-1} = 1.
$$</span></p>
<p>Therefore the factors in <span class="math-container">$C$</span> are the same as what you wrote, but you broke up the product into separate products over <span class="math-container">$p \equiv 1 \bmod 3$</span> and <span class="math-container">$p \equiv 2 \bmod 3$</span> (I ignore the condition <span class="math-container">$p \equiv 13$</span>). That is a subtle issue because <em>your products don't converge</em>. I am multiplying the terms at all primes in <span class="math-container">$C$</span> together in increasing order, not using separate products depending on <span class="math-container">$p \bmod 3$</span>.</p>
<p>As an analogue, consider the alternating harmonic series <span class="math-container">$S = \sum_{n \geq 1} (-1)^{n-1}/n$</span>. This converges (and equals <span class="math-container">$\log 2$</span>), but you can't write
<span class="math-container">$S$</span> with two separate sums over odd and even <span class="math-container">$n$</span>:
<span class="math-container">$$
S \not= \sum_{{\rm odd } \ n} \frac{1}{n} - \sum_{{\rm even } \ n} \frac{1}{n}
$$</span>
because those separate sums individually do not converge.</p>
<p>Returning to <span class="math-container">$C$</span>, we want to insert factors into each term to improve the convergence (to make it absolutely convergent). To do that, write
<span class="math-container">$$
1 - \frac{\chi(p)}{p-1} = 1-\frac{\chi(p)/p}{1-1/p} =
1 - \frac{\chi(p)}{p} + \frac{\chi(p)}{p^2} + O\left(\frac{1}{p^3}\right)
$$</span>
by expanding <span class="math-container">$1/(1-1/p)$</span> into a geometric series in powers of <span class="math-container">$1/p$</span>. Since <span class="math-container">$1-\chi(p)/p \sim 1$</span> as <span class="math-container">$p \to \infty$</span>, dividing by <span class="math-container">$1-\chi(p)/p$</span> tells us
<span class="math-container">$$
\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p} = 1 + O\left(\frac{1}{p^2}\right),
$$</span>
so the product
<span class="math-container">$$
\prod_{p} \left(\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p}\right)
$$</span>
is absolutely convergent.</p>
<p>Now we can rewrite <span class="math-container">$C$</span>:
<span class="math-container">$$
C = \prod_p \left(1 - \frac{\chi(p)}{p-1}\right) = \prod_p \left(1-\frac{\chi(p)}{p}\right)\left(\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p}\right),
$$</span>
where the product is in order of increasing <span class="math-container">$p$</span>. Can we split apart this product?</p>
<p>Yes! When <span class="math-container">${\rm Re}(s) > 1$</span>, the <span class="math-container">$L$</span>-function of <span class="math-container">$\chi$</span> has the absolutely convergent Euler product representation
<span class="math-container">$$
L(s,\chi) = \prod_{p} \frac{1}{1-\chi(p)/p^s}
$$</span>
and it can be shown, with some nontrivial work, that this product remains valid on the line <span class="math-container">${\rm Re}(s) = 1$</span> <em>provided</em> the product is taken in order of increasing <span class="math-container">$p$</span> (on that line the product is no longer absolutely convergent). Taking <span class="math-container">$s = 1$</span> and reciprocating,
<span class="math-container">$$
\prod_p \left(1 - \frac{\chi(p)}{p}\right) = \frac{1}{L(1,\chi)}.
$$</span>
Feeding that into the formula for <span class="math-container">$C$</span> above,
<span class="math-container">$$
C = \frac{1}{L(1,\chi)}\prod_p \frac{1-\chi(p)/(p-1)}{1-\chi(p)/p}.
$$</span></p>
<p>It can be shown that <span class="math-container">$L(1,\chi) = \pi/(3\sqrt{3})$</span>, so
<span class="math-container">$$
C = \frac{3\sqrt{3}}{\pi}\prod_p \frac{1-\chi(p)/(p-1)}{1-\chi(p)/p},
$$</span>
where the <span class="math-container">$p$</span>-th factor in this product is <span class="math-container">$1+O(1/p^2)$</span>.</p>
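<p>As an aside, the value $L(1,\chi)=\pi/(3\sqrt3)\approx 0.60460$ quoted above is easy to confirm numerically: pairing the terms $n=3k+1$ and $n=3k+2$ of $\sum_n \chi(n)/n$ makes the series absolutely convergent (each pair is $O(1/k^2)$):</p>

```python
import math

# partial sum of L(1, chi) for the nontrivial character mod 3, in paired form
s = sum(1.0 / (3 * k + 1) - 1.0 / (3 * k + 2) for k in range(200_000))
print(s, math.pi / (3 * math.sqrt(3)))  # both are about 0.60460
```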
<p>What is the <span class="math-container">$p$</span>-th factor here? When <span class="math-container">$\chi(p) = 1$</span>,
<span class="math-container">$$
\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p} = \frac{1-1/(p-1)}{1-1/p} = \frac{p(p-2)}{(p-1)^2}
$$</span>
and when <span class="math-container">$\chi(p) = -1$</span>,
<span class="math-container">$$
\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p} = \frac{1+1/(p-1)}{1+1/p} = \frac{p^2}{p^2-1}.
$$</span>
And when <span class="math-container">$\chi(p) = 0$</span> (namely when <span class="math-container">$p = 3$</span>),
<span class="math-container">$$
\frac{1-\chi(p)/(p-1)}{1-\chi(p)/p} = 1.
$$</span></p>
<p>Because these terms are <span class="math-container">$1 + O(1/p^2)$</span>, the order of multiplication now does not matter and we can split apart this product depending on <span class="math-container">$p \bmod 3$</span>:
<span class="math-container">$$
C = \frac{3\sqrt{3}}{\pi}\prod_{p\equiv 1 \bmod 3}\frac{p(p-2)}{(p-1)^2} \prod_{p \equiv 2 \bmod 3} \frac{p^2}{p^2-1}.
$$</span></p>
<p>On the MO page <a href="https://mathoverflow.net/questions/31150/calculating-the-infinite-product-from-the-hardy-littlewood-conjecture-f?rq=1">here</a> I discuss such a product with <span class="math-container">$\chi$</span> replaced by a more general Legendre symbol.</p>
|
215,474 | <p>Suppose I have a function <code>cyclePart</code> which has a definition for the case</p>
<pre><code>cyclePart[list_->{},n_,Δn_,cycle_:True]:=...
</code></pre>
<p>But for example in the algorithm, if it encounters a case like</p>
<pre><code>cyclePart[{a,b,c,d,e,f,g,h,i,j}->{b,d,i,j},5,2,True]
</code></pre>
<p>I want it to transform it into the previous form so its base definition can be applied. In this case, the number <code>5</code> is referring to the position in the original list and the list to the right of the arrow tells it to drop these elements.</p>
<pre><code>cyclePart[list_->drop_,n_,Δn_,cycle_:True]:=
cyclePart[Complement[list,drop]->{},...updated n...,Δn,cycle]
</code></pre>
<p>But how to transform <code>n</code> here? Is there a builtin function that can get an updated position after element drops? Note in the example, the updated list's position <code>3</code> refers to position <code>5</code> since two elements <code>{b,d}</code> to the left of position <code>5</code> are being removed so position <code>5</code> moves down to position <code>3</code>.</p>
<hr>
<p>As seen in the comment of @kglr <code>Complement[list, drop]</code> should be changed to <code>DeleteCases[list, Alternatives @@ drop]</code> since the former is a set drop which means it removes duplicates and sorts the list in addition to the drop. I incorrectly used <code>Complement[list, drop]</code> but meant the actions of <code>DeleteCases[list, Alternatives @@ drop]</code>.</p>
| bbgodfrey | 1,063 | <p>Progress can be made by integrating over <code>y</code> only (with the correction that <code>{…}</code> be replaced by <code>(…)</code>).</p>
<pre><code>Integrate[x DiracDelta[r x - y] Exp[1/g^2 (Cos[x2 - x] + Cos[x2] + Cos[y + x2])],
{y, 0, 2 Pi}, Assumptions -> r > 0 && 0 < x < 2 Pi]
(* E^((Cos[x - x2] + Cos[x2] + Cos[r x + x2])/g^2) x HeavisideTheta[2 Pi - r x] *)
</code></pre>
<p>In contrast, if <code>r < 0</code>, the result is <code>0</code>, as one would expect. </p>
<p>The remaining integrals take the form,</p>
<pre><code>Integrate[E^((Cos[x - x2] + Cos[x2] + Cos[r x + x2])/g^2)
x HeavisideTheta[2 Pi - r x], {x, 0, 2 Pi}, {x2, -Pi, Pi}]
</code></pre>
<p><strong>Edit</strong>: which can be simplified further to</p>
<pre><code>Integrate[E^((Cos[x - x2] + Cos[x2] + Cos[r x + x2])/g^2) x,
  {x, 0, 2 Pi Min[1, 1/r]}, {x2, -Pi, Pi}]
</code></pre>
<p>Now, if <code>g^2</code> is much greater than <code>1</code>, then the remaining integrations can be performed with the exponential ignored.</p>
<pre><code>Integrate[x, {x, 0, 2 Pi Min[1, 1/r]}, {x2, -Pi, Pi}, Assumptions -> r > 0]
(* 4 Pi^3 Min[1, 1/r]^2 *)
</code></pre>
<p>For <code>g^2</code> not large, the remaining integrals must be performed numerically along the lines suggested by Alx in a comment above, but a 2D plot is slow. </p>
<pre><code>f[r_?NumericQ, g_?NumericQ] := NIntegrate[E^((Cos[x - x2] + Cos[x2] + Cos[r x + x2])/g^2) x,
{x, 0, 2 Pi Min[1, 1/r]}, {x2, -Pi, Pi},
Method -> {Automatic, "SymbolicProcessing" -> False}]
</code></pre>
<p>A 1D plot for <code>g == 1</code> is</p>
<pre><code>Plot[f[r, 1], {r, 0, 4}, MaxRecursion -> 2, ImageSize -> Large,
AxesLabel -> {r, None}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/3ybwC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3ybwC.png" alt="enter image description here"></a></p>
<p>For a small value of <code>g</code>, the curve is somewhat different.</p>
<pre><code>LogPlot[f[r, 1/10], {r, 0, 4}, MaxRecursion -> 3, ImageSize -> Large,
AxesLabel -> {r, None}, LabelStyle -> {15, Bold, Black}, PlotRange -> All]
</code></pre>
<p><a href="https://i.stack.imgur.com/Ml9EM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ml9EM.png" alt="enter image description here"></a></p>
<p>Peaks lie at <code>r = 1</code> and very close to <code>r = 0</code>.</p>
<p><strong>Addendum</strong></p>
<p>The further simplification identified by Andreas (+1), equivalent to</p>
<pre><code>f[r_?NumericQ, g_?NumericQ] := 2 Pi NIntegrate[BesselI[0, g^-2
Sqrt[3 + 2 Cos[x] + 2 Cos[r x] + 2 Cos[(1 + r) x]]] x, {x, 0, 2 Pi Min[1, 1/r]},
Method -> {Automatic, "SymbolicProcessing" -> False}]
</code></pre>
<p>reproduces the plots above about an order of magnitude faster. For completeness, we provide a plot of the two peak values as functions of <code>g</code> (rescaled by <code>Exp[-2.94 g^-2]</code> to keep the curves readable).</p>
<pre><code>LogPlot[{f[.02, g] Exp[-2.94 g^-2], f[1, g] Exp[-2.94 g^-2]}, {g, 1/10, 3},
ImageSize -> Large, AxesLabel -> {g, None}, LabelStyle -> {15, Bold, Black},
PlotRange -> {{0, 3}, Automatic}]
</code></pre>
<p>The blue and orange curves correspond to <code>r = 0.02</code> and <code>r = 1</code>, respectively.</p>
<p><a href="https://i.stack.imgur.com/yyQ1f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yyQ1f.png" alt="enter image description here"></a></p>
|
215,474 | <p>Suppose I have a function <code>cyclePart</code> which has a definition for the case</p>
<pre><code>cyclePart[list_->{},n_,Δn_,cycle_:True]:=...
</code></pre>
<p>But for example in the algorithm, if it encounters a case like</p>
<pre><code>cyclePart[{a,b,c,d,e,f,g,h,i,j}->{b,d,i,j},5,2,True]
</code></pre>
<p>I want it to transform it into the previous form so its base definition can be applied. In this case, the number <code>5</code> is referring to the position in the original list and the list to the right of the arrow tells it to drop these elements.</p>
<pre><code>cyclePart[list_->drop_,n_,Δn_,cycle_:True]:=
cyclePart[Complement[list,drop]->{},...updated n...,Δn,cycle]
</code></pre>
<p>But how to transform <code>n</code> here? Is there a builtin function that can get an updated position after element drops? Note in the example, the updated list's position <code>3</code> refers to position <code>5</code> since two elements <code>{b,d}</code> to the left of position <code>5</code> are being removed so position <code>5</code> moves down to position <code>3</code>.</p>
<hr>
<p>As seen in the comment of @kglr <code>Complement[list, drop]</code> should be changed to <code>DeleteCases[list, Alternatives @@ drop]</code> since the former is a set drop which means it removes duplicates and sorts the list in addition to the drop. I incorrectly used <code>Complement[list, drop]</code> but meant the actions of <code>DeleteCases[list, Alternatives @@ drop]</code>.</p>
| Andreas | 69,887 | <p>if the range of the x2 integration may be shifted to {0,2π} then it can be done and plotted like:</p>
<pre><code>g = 2.; Plot[{NIntegrate[
E^((Cos[x - x2] + Cos[x2] + Cos[r x + x2])/g^2) x HeavisideTheta[2 Pi - r x], {x, 0, 2 Pi}, {x2, 0, 2 Pi}],
2 Pi NIntegrate [x BesselI[0, 1/g^2 \[Sqrt](3 + 2 Cos[x] + 2 Cos[r x] +
2 Cos[(1 + r) x])] HeavisideTheta[2 Pi - r x], {x, 0,
2 Pi}] + 10}, {r, 0, 4}]
</code></pre>
<p>because</p>
<pre><code>Integrate[Exp[a Cos[x]] Exp[c Sin[x]], {x, 0, 2 Pi}]
(* 2 Pi BesselI[0, Sqrt[a^2 + c^2]] *)
</code></pre>
<p>Above is valid if you shift the integration range by any s (for instance s=-Pi):</p>
<pre><code>Integrate[Exp[a Cos[x]] Exp[c Sin[x]], {x, s, 2 Pi+s}]
(* 2 Pi BesselI[0, Sqrt[a^2 + c^2]] *)
</code></pre>
|
231,887 | <p>I'm learning to do proofs, and I'm a bit stuck on this one.
The question asks to prove that for any positive integer $k$, $\gcd(k, k+1) = 1$.</p>
<p>First I tried: $\gcd(k,k+1) = 1 = kx + (k+1)y$ : But I couldn't get anywhere.</p>
<p>Then I tried assuming that $\gcd(k,k+1) \ne 1$ , therefore $k$ and $k+1$ are not relatively prime, i.e. they have a common divisor $d$ s.t. $d \mid k$ and $d \mid k+1$ $\implies$ $d \mid 2k + 1$</p>
<p>Actually, it feels obvious that two integers next to each other, $k$ and $k+1$, could not have a common divisor. I don't know, any help would be greatly appreciated.</p>
| apnorton | 23,353 | <p>Old John's answer in the comments is better than this, but I'm hoping to provide some intuition...</p>
<p>Look at $k!$ instead of $k$:</p>
<ul>
<li>2 divides $k!$, thus it cannot divide $k!+1$. (2 divides every other integer)</li>
<li>3 divides $k!$, thus it cannot divide $k!+1$. (3 divides every third integer)</li>
</ul>
<p>etc, up to and including "$k$ divides $k!$...".</p>
<p>Now look back at $k$: Well, if $2$ divides $k$, then it cannot divide $k+1$ (2 divides every other integer). If $3$ divides $k$, then it cannot divide $k+1$. Etc, until you reach "does $k$ divide $k$?".</p>
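<p>Both observations are easy to spot-check in Python: consecutive integers are coprime, and every $d$ with $2\le d\le k$ divides $k!$ but not $k!+1$.</p>

```python
from math import gcd

# consecutive integers are coprime
assert all(gcd(k, k + 1) == 1 for k in range(1, 10_000))

# every d in 2..k divides k! but cannot divide k! + 1
fact = 1
for k in range(2, 12):
    fact *= k
    assert all(fact % d == 0 and (fact + 1) % d != 0 for d in range(2, k + 1))
print("checks pass")
```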
|
373,906 | <p>(This question is <a href="https://math.stackexchange.com/questions/3859476">originally from Math.SE</a> where it was suggested that I ask the question here)</p>
<p>Let <span class="math-container">$G$</span> be a finite group with fewer than <span class="math-container">$p^2$</span> Sylow <span class="math-container">$p$</span>-subgroups, and let <span class="math-container">$p^n$</span> be the power of <span class="math-container">$p$</span> dividing <span class="math-container">$\lvert G\rvert$</span>. I can show that if <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are any two distinct Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span> then <span class="math-container">$\lvert P\cap Q\rvert=p^{n-1}$</span>. I was wondering if this intersection is necessarily the same across all Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span>.</p>
<blockquote>
<p>Is the intersection <span class="math-container">$P\cap Q$</span> the same for any two distinct Sylow <span class="math-container">$p$</span>-subgroups <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>?</p>
</blockquote>
<p>We might as well assume that <span class="math-container">$G$</span> has more than one Sylow <span class="math-container">$p$</span>-subgroup, in which case here are two equivalent formulations:</p>
<blockquote>
<p>Does the intersection of all Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span> necessarily have order <span class="math-container">$p^{n-1}$</span>?</p>
</blockquote>
<blockquote>
<p>Must there exist a normal subgroup of <span class="math-container">$G$</span> of order <span class="math-container">$p^{n-1}$</span>?</p>
</blockquote>
<p>I'm looking for a proof or counterexample of this conjecture.</p>
<p>I know that the conjecture holds in the case where <span class="math-container">$G$</span> has <span class="math-container">$p+1$</span> Sylow <span class="math-container">$p$</span>-subgroups.</p>
<p>There is some good partial progress in the comments and answers of the Math.SE link.</p>
| Richard Lyons | 99,221 | <p>The conjecture follows quickly from <strong>Brodkey's Theorem</strong>: Let <span class="math-container">$G$</span> be a finite group and <span class="math-container">$p$</span> a prime. Suppose that Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span> are abelian. If <span class="math-container">$O_p(G)=1$</span>, then there exist Sylow <span class="math-container">$p$</span>-subgroups <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> of <span class="math-container">$G$</span> such that <span class="math-container">$P\cap Q=1$</span>.</p>
<p>Here <span class="math-container">$O_p(G)$</span> is the intersection of all Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span>, or equivalently the largest normal <span class="math-container">$p$</span>-subgroup of <span class="math-container">$G$</span>. (Note that <span class="math-container">$O_p(G/O_p(G))=1$</span>.) Brodkey's theorem can be found in several places on the web. It is an exercise in section 1E of Isaacs's <strong>Finite Group Theory</strong>.</p>
<p>Now, your assumption implies that <span class="math-container">$\Phi(P)\le P\cap Q\le Q$</span> for all Sylow <span class="math-container">$p$</span>-subgroups <span class="math-container">$P,Q$</span> of <span class="math-container">$G$</span>, so <span class="math-container">$\Phi(P)\le O_p(G)$</span>. Pass to <span class="math-container">$\bar G=G/O_p(G)$</span>. Then <span class="math-container">$\bar P$</span> is an (elementary) abelian Sylow <span class="math-container">$p$</span>-subgroup of <span class="math-container">$\bar G$</span>, and <span class="math-container">$O_p(\bar G)=1$</span>. (This much was already noted on Math.SE.) Now Brodkey's Theorem gives you <span class="math-container">$\bar P\cap \bar Q=\bar 1$</span> for some Sylow subgroups <span class="math-container">$P,Q$</span> of <span class="math-container">$G$</span>, so <span class="math-container">$P\cap Q=O_p(G)$</span>, as you conjectured.</p>
|
1,953,517 | <p>Let $X,Y$ be two independent Poisson variables with parameters $\mu,\lambda>0$, and let $N:=X+Y$.
What is $\mathbb{E}(X\vert N=n)$?</p>
<p>I already computed $P(X=k\vert N=n)$ for $k,n\in \mathbb{Z}_{+}$, which is $$P(X=k\vert N=n)=\binom{n}{k}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}$$ if $n\ge k$, and $0$ otherwise.</p>
<p>I know that $n=k+j$. But now I get stuck: $$\mathbb{E}(X\vert N=n)=\sum\limits_{k=0}^n k\binom{n}{k}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}=\sum\limits_{k=0}^n k\frac{n!}{k!(n-k)!}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}$$
$$\sum\limits_{k=0}^\infty \frac{(k+j)!}{(k-1)!(j)!}\frac{\mu^{j}\lambda^k}{(\mu+\lambda)^{k+j}}$$</p>
<p>How can I compute the expected value?</p>
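The conditional law computed in the question is Binomial$(n,\lambda/(\lambda+\mu))$, since $\binom{n}{k}\frac{\mu^{n-k}\lambda^k}{(\mu+\lambda)^n}=\binom{n}{k}\bigl(\tfrac{\lambda}{\lambda+\mu}\bigr)^k\bigl(\tfrac{\mu}{\lambda+\mu}\bigr)^{n-k}$, so the sum should come out to $n\lambda/(\lambda+\mu)$. A quick exact sanity check of that closed form (illustrative parameter values only):

```python
from fractions import Fraction
from math import comb

def cond_mean(n, lam, mu):
    # E[X | N = n] computed directly from P(X=k | N=n) = C(n,k) mu^(n-k) lam^k / (lam+mu)^n
    lam, mu = Fraction(lam), Fraction(mu)
    total = (lam + mu) ** n
    return sum(k * comb(n, k) * mu ** (n - k) * lam ** k for k in range(n + 1)) / total

# matches the binomial mean n * lam / (lam + mu)
assert cond_mean(7, 3, 5) == Fraction(7 * 3, 3 + 5)
assert cond_mean(10, 2, 9) == Fraction(10 * 2, 11)
```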
| Andreas Caranti | 58,401 | <p>As a variant on the solution of Benson Lin, and basically as suggested in comments by Doug M and csts, start with only the constraint of at least 10 bicycles in warehouse 1 and 2. This gives you
$$
\binom{100-10-10+3}{3} = \binom{83}{3}
$$
possibilities. Now count the possibilities when warehouse 1 gets at least 10, and warehouse 2 gets at least 21. The number is
$$
\binom{100-10-21+3}{3} = \binom{72}{3}.
$$
Subtract the first from the second to get
$$
91\,881 - 59\,640 = 32\,241.
$$</p>
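The arithmetic in the inclusion–exclusion count above is easy to double-check (`math.comb` is available from Python 3.8):

```python
from math import comb

at_least_10_each = comb(83, 3)    # warehouses 1 and 2 each get at least 10
wh2_at_least_21 = comb(72, 3)     # warehouse 2 additionally gets at least 21
assert at_least_10_each == 91881
assert wh2_at_least_21 == 59640
assert at_least_10_each - wh2_at_least_21 == 32241
```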
|
819,704 | <p>Here is the problem I have </p>
<p>$\lim \limits_{x \to -1} (x + 1)^2 sin (\frac{1}{x + 1})$</p>
<p>I approached it like this:</p>
<p>\begin{align}
-1 \le sin(\displaystyle \frac{1}{x + 1}) \le 1 \\
-(x + 1) \le sin(1) \le (x + 1)
\end{align}</p>
<p>I then go on to solve the limit by replacing $sin (\frac{1}{x + 1})$ with $-(x + 1)$ and $(x + 1)$ respectively.</p>
<p>Although this yields the correct answer, I am unsure if this is the correct way to solve this type of problem using the Squeeze theorem. Is this correct?</p>
| mm-aops | 81,587 | <p>It's not quite right. You should just note that $|\sin (\frac{1}{x+1})|$ is bounded by $1$ (which you did) and that $(x+1)^2 \rightarrow 0$, hence the product goes to zero as well, via $$ - (x+1)^2 \leq (x+1)^2 \sin (\frac{1}{x+1}) \leq (x+1)^2.$$ It isn't true that $\sin(\frac{1}{x+1}) = (x+1)\sin(1)$, which seems to be what you wrote.</p>
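The squeeze $-(x+1)^2 \le (x+1)^2\sin\frac{1}{x+1} \le (x+1)^2$ is also easy to observe numerically near $x=-1$ (an illustration, of course, not a proof):

```python
import math

for k in range(1, 9):
    for sign in (1.0, -1.0):
        x = -1.0 + sign * 10.0 ** (-k)
        t = x + 1.0
        val = t * t * math.sin(1.0 / t)
        assert abs(val) <= t * t  # |sin| <= 1, so the product is squeezed toward 0
```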
|
288,974 | <p>Alright this maybe really funny but I want to know why is this wrong. We often come across identities which we prove by multiplying both the sides of the identity by a certain entity but why don't we multiply it by $0$. That way every identity will be proved in one single line. That is so stupid. I mean, by that way we may also say that $1=2=3$. I know it is wrong. But why? I mean if we can multiply both the sides by $2$ then why not by $0$. For example, consider the following trigonometric identity :</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both the sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| guest196883 | 43,798 | <p>Because to prove $a=b$ you don't have to prove that $$a=b \implies c=c.$$ You have to start from some trivial statement, such as $$c=c,$$ or another axiom, or logical tautology, or proved statement, and derive from that that $a=b$. So in your example, </p>
<p>$$\begin{array}{llcl}& \sin^2 \theta & = & \tan^2 \theta \cos ^2 \theta \\
\implies & {\sin^2 \theta \over \cos^2 \theta } & = & {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta} \\ \implies & \tan ^2 \theta & = & \tan^2\theta \\ \implies & LHS & = & RHS \\
\therefore & proved\end{array}$$</p>
<p>This doesn't prove the initial $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$; it's simply an easy way to "work backwards" in a reversible way to arrive at the actual proof:
$$\tan^2\theta=\tan^2\theta\implies \sin^2 \theta = \tan^2 \theta \cos ^2 \theta
$$
by reversing the multiplication, i.e. using division. This very much doesn't work for $0$, because then you would be doing arithmetic that divides by $0$. </p>
|
1,579,170 | <p>This problem is dependent because it matters which one you choose, so I don't think we can do the multiplication thing in this one. </p>
<ul>
<li>Probability of ( non defective ) = 6/10 </li>
</ul>
<p>What does the question mean when it says all will be non-defective? Is "all" the 2 randomly chosen telephones? How would I do this problem? Since 2 are chosen randomly and 6 are non-defective, I first thought of doing 2/6 because 2 were chosen and 6 in total are non-defective. But if I wanted the probability of non-defective I would just do 6/10? I feel like I don't understand this question.</p>
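Reading "all will be non-defective" as "both of the 2 chosen telephones are non-defective", and assuming the setup the 6/10 above suggests (10 telephones, of which 6 are non-defective), the probability is hypergeometric:

```python
from fractions import Fraction
from math import comb

good, total, chosen = 6, 10, 2
p_all_good = Fraction(comb(good, chosen), comb(total, chosen))
assert p_all_good == Fraction(1, 3)
# equivalently, drawing without replacement: (6/10) * (5/9)
assert p_all_good == Fraction(6, 10) * Fraction(5, 9)
```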
| André Nicolas | 6,312 | <p>We need to assume that $p$ and $q$ are <em>distinct</em>.</p>
<p>Let $L$ be our lcm. Since $p-1$ divides $L$, we have $a^L\equiv 1\pmod{p}$. Similarly, $a^L\equiv 1\pmod{q}$. Thus $p$ divides $a^L-1$ and $q$ divides $a^L-1$. Since $p$ and $q$ are relatively prime, it follows that $pq$ divides $a^L-1$.</p>
|
1,303,577 | <p>I have started to learn about the properties of the <a href="http://en.wikipedia.org/wiki/Quadratic_residue" rel="nofollow">quadratic residues modulo n (link)</a> and reviewing the list of quadratic residues modulo $n$ $\in [1,n-1]$ I found the following possible property:</p>
<blockquote>
<p>(1) $\forall\ p\in\Bbb P,\ p \gt 3:\ \#\{\text{quadratic residues mod } kp\}=p\ $ for $k\in\{2,3\}$</p>
</blockquote>
<p>In other words: (a) if $n$ is $2p$ or $3p$, where $p$ is a prime number greater than $3$, then the total number of the quadratic residues modulo $n$ is exactly the prime number $p$. (b) And every prime number $p$ is the number of quadratic residues modulo $2p$ and $3p$.</p>
<blockquote>
<p>E.g.:</p>
<p>$n=22$, the list of quadratic residues is $\{1,3,4,5,9,11,12,14,15,16,20\}$, the total number is $11 \in \Bbb P$ and $22=11*2$.</p>
<p>$n=33$, the list of quadratic residues is $\{1,3,4,9,12,15,16,22,25,27,31\}$, the total number is $11 \in \Bbb P$ and $33=11*3$.</p>
</blockquote>
<p>I did a quick Python test initially in the interval $[1,10^4]$, no counterexamples found. Here is the code:</p>
<pre><code>def qrmn():
from sympy import is_quad_residue
from gmpy2 import is_prime
def list_qrmn(n):
lqrmn = []
for i in range (1,n):
if is_quad_residue(i,n):
lqrmn.append(i)
return lqrmn
tested1 = 0
tested2 = 0
for n in range (4,10000,1):
lqrmn = list_qrmn(n)
# Test 1
if is_prime(len(lqrmn)):
if n==3*len(lqrmn) or n==2*len(lqrmn):
print("SUCCESS1 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
tested1 = tested1 + 1
# Test 2
if n==3*len(lqrmn) or n==2*len(lqrmn):
if is_prime(len(lqrmn)):
print("SUCCESS2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
tested2 = tested2 + 1
else:
print("ERROR2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
if tested1 == tested2:
        print("\nTEST SUCCESS: iff condition is true")
    else:
        print("\nTEST ERROR: iff condition is not true: " + str(tested1) + " " + str(tested2))
qrmn()
</code></pre>
<p>I am sure this is due to a well known property of the quadratic residues modulo $n$, but my knowledge is very basic (I am a self-learner), and reviewing online I cannot find a property of the quadratic residues that would tell me whether this possible property is true or not.</p>
<p>Please I would like to share with you the following questions:</p>
<blockquote>
<ol>
<li><p>Is (1) a trivial property due to the definition of the quadratic residue modulo n?</p></li>
<li><p>Is there a counterexample?</p></li>
</ol>
</blockquote>
<p>Thank you!</p>
111 | 100,311 | <p>Say $N=2p$. Then
$$\#\{y:x^2\equiv y\ ({\rm{mod}}\ N)\}$$ is equal (by CRT) to
$\#\{y:x^2\equiv y\ ({\rm{mod}}\ p)\}\times 2-1=\frac{p+1}{2}\times 2-1=p$ (we subtract $1$ to discard $y=0$, which is always a square but lies outside the range $[1,N-1]$).
More generally, if $N=p_1p_2\cdots p_k$ and the $p_j$ are distinct odd primes, then you get
$\frac{1}{2^k}\prod_{i=1}^k(p_i+1)-1.$</p>
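A lightweight check of this count, using distinct nonzero squares modulo $n$ (which reproduces the lists for $n=22$ and $n=33$ in the question):

```python
def num_quadratic_residues(n):
    # distinct nonzero values y in [1, n-1] with x^2 = y (mod n) solvable
    return len({(x * x) % n for x in range(1, n)} - {0})

for p in [5, 7, 11, 13, 17, 19]:
    assert num_quadratic_residues(2 * p) == p
    assert num_quadratic_residues(3 * p) == p
```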
|
292,122 | <p>This question actually came out of a question. In some other post, I saw a reference and going through, found this, $n>0$.</p>
<p>Solve for n explicitly without calculator:
$$\frac{3^n}{n!}\le10^{-6}$$</p>
<p>And I would appreciate a hint rather than an explicit solution.</p>
<p>Thank You.</p>
| A Ricko Maulidar | 60,472 | <p>How about making a function? $f(n)=\frac{3^n}{n!}-10^{-6}$, or maybe $f(n)=\frac{3^n}{n!\,10^{-6}}$.</p>
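If an explicit closed form is not required, the smallest qualifying $n$ can be found exactly, using the fact that the ratio of consecutive terms is $3/(n+1)$, so the sequence is strictly decreasing once $n\ge 3$:

```python
from fractions import Fraction

bound = Fraction(1, 10 ** 6)
term = Fraction(3, 1)  # 3^1 / 1!
n = 1
while term > bound:
    n += 1
    term *= Fraction(3, n)  # 3^n/n! = (3^(n-1)/(n-1)!) * 3/n
assert n == 17  # smallest n with 3^n/n! <= 10^-6
```

(Indeed $3^{16}/16!\approx 2.06\times 10^{-6}$ while $3^{17}/17!\approx 3.63\times 10^{-7}$.)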
|
1,897,849 | <p>I'm reading <strong>Awodey's</strong>: Category Theory. I harbor a little confusion: When we speak about a category, say: $\mathbf{Set}$. In the book, usually he talks about this category but there is no notion of quantity of sets. How many sets are actually there?</p>
<p>Another thing that is getting me confused is this: Suppose that I have this category with $n$ sets, I must have the arrows, the composition, the identity, etc. But for these sets, should I assume that functions can be <em>constructed</em> from one set to every other set? That is: For a category with $3$ objects: $A,B,C$, should I assume that I have the following functions?</p>
<p>$$f_1:A\to B\\f_2:A\to C\\f_3:B\to C\\f_4:C\to B\\f_5:C\to A\\f_6:B\to A\\$$</p>
<p>I'm not sure if it's about <em>what I have</em> or <em>what could be made</em>. </p>
| SixWingedSeraph | 318 | <p>First, "Set" is one specific category. Its objects are all the sets and its arrows are all the functions between sets. </p>
<p>In the second paragraph, you suppose you have a category with n sets. To define the category you have to say specifically which n sets the objects are. You then have to say what the arrows are (they will be set functions in this case). Your specification of what the arrows are must satisfy the axioms for a category: (1) If a set S is an object, the identity function on S must be an arrow. (2) If you have an arrow f:S\to T and an arrow g:T\to U then you must specify that the composite g\o f is also in the category. If you do those things, then associativity (in this case) will be automatically satisfied, since composition of set functions is associative. Once you have done those things you have defined a category.</p>
<p>In your second paragraph you mention functions f_1:A to B and so on, but you didn't say what the functions are. Also, you mentioned f_1:A to B and f_3:B to C but you didn't mention any function from A to C, which is required by the axiom that says composites exist. </p>
<p>Also, you ask, "Can I assume that arrows can be constructed from one set to every other set?" That is wrong in two ways: (1) A category with two objects S and T is not required to have an arrow from S to T. (2) You can't "assume" there are arrows, you have to <em>say specifically what they are.</em></p>
|
3,142,339 | <p>Let <span class="math-container">$p$</span> be a real number. I am looking for all <span class="math-container">$(x,y)$</span> such that <span class="math-container">$\ln[e^{x}+e^{y}]=px+(1-p)y$</span>. My effort:</p>
<p>Exponentiating both sides gives <span class="math-container">$e^{x}+e^{y}=e^{px}e^{(1-p)y}$</span>; then let <span class="math-container">$X=e^{x}, Y=e^{y}$</span>, so that <span class="math-container">$X+Y=X^{p}Y^{1-p}$</span>. How can I proceed from here?</p>
| Vasily Mitch | 398,967 | <p>The problem with this expression is that function <span class="math-container">$f(x)=(i+x)^{-1}$</span> is a tricky one:</p>
<p><span class="math-container">$$
f(f(x)) = \frac{1}{i+\frac{1}{i+x}}=\frac{i+x}{-1+ix+1}=-i+\frac1x,\\
f(f(f(x))) = \frac{1}{i+(-i+1/x)}=x.
$$</span></p>
<p>So every number generates an orbit of length 3, except for the two fixed points you have found. Thus the sequence of nested functions <span class="math-container">$f$</span> does not converge unless you start at a fixed point, and so the question as stated makes no sense: you cannot assign any number to this expression.</p>
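The order-3 behaviour of $f(x)=\frac{1}{i+x}$ is easy to confirm numerically at a few generic points (points chosen here arbitrarily, away from the fixed points):

```python
def f(z):
    return 1 / (1j + z)

for z in (0.3 + 0.7j, 2.0 - 1.0j, -5.0 + 0.25j):
    w = f(f(f(z)))
    assert abs(w - z) < 1e-12    # f has order 3: f(f(f(z))) = z
    assert abs(f(z) - z) > 1e-6  # these points are not fixed points
```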
|
1,319,288 | <p>There is a <a href="https://math.stackexchange.com/questions/1103723">similar question</a> in this site but I am not satisfied with the answer, which is basically the same as the proof in the mentioned textbook.</p>
<p>The book (Karel Hrbacek & Thomas Jech, <em>Introduction to Set Theory</em>, 3e, p. 165) states a lemma: For every $\alpha$, $\text{cf}(2^{\aleph_\alpha})>\aleph_\alpha$. Then it asserts that $2^{\aleph_0}$ cannot be $\aleph_\omega$, since $\text{cf}(2^{\aleph_\omega})=\aleph_0$. But I can't see the connection. According to the lemma, $\text{cf}(2^{\aleph_\omega})$ should be larger than $\aleph_\omega>\aleph_0$, how can it equal $\aleph_0$?</p>
<p>On the other hand, I can't see why $\text{cf}(2^{\aleph_\omega})=\aleph_0$ is false either. Since $2^{\aleph_\omega}=\lim\limits_{n\rightarrow\omega}2^{\aleph_n}$, it is the limit of an increasing sequence of ordinals of length $\omega$, so its cofinality should not be greater than $\aleph_0$. Is there something wrong with this reasoning?</p>
| Noah Schweber | 28,111 | <p>As bof says, the statement "$2^{\aleph_\omega}=\lim 2^{\aleph_n}$" is false - to see why, consider the following analogous claim: $$2^\omega=\lim 2^n.$$ They are each wrong for precisely the same reason.</p>
<p>EDIT: Specifically, if we try to "cover" $2^\omega$ by $\bigcup 2^n$ in the obvious way, we miss all the infinite subsets of $\omega$. Similarly, if we try to cover $2^{\aleph_\omega}$ by $\bigcup 2^{\aleph_n}$ we miss all the cofinal subsets of $\aleph_\omega$. Yes, each cofinal $X\subseteq \aleph_\omega$ can be written as $X=\bigcup Y_i$ with each $Y_i\subseteq \aleph_i$, but this doesn't let us build the bijection you might want, for the same reason that saying "each subset of $\omega$ is a limit of finite sets" doesn't let us argue that $2^{\aleph_0}$ is countable.</p>
|
661,182 | <p>I'm taking a discrete structures class and I would appreciate some help with a homework problem. The problem is</p>
<blockquote>
<p>Attempt to find a closed form for the sum $\displaystyle \sum_{k=1}^n k^3$
by perturbation, only to find a closed form for the following sum $\displaystyle \sum_{k=1}^n k^2$.</p>
</blockquote>
<p>I got as far as </p>
<blockquote>
<p>$\displaystyle S_n + (n+1)^3 = a_0 + \sum_{k=1}^n (k+1)^3$</p>
</blockquote>
<p>but now I am stuck. I don't undestand how to finish the problem and the teacher did not explain perturbation with sums very well. If somebody could explain it to me in greater detail and/or show me how to finish the problem I would really appreciate that.</p>
<p>Thanks</p>
| drhab | 75,923 | <p>I am not familiar with perturbation so cannot really help you.</p>
<p>However, maybe this can be useful anyhow:</p>
<p>$$\sum_{k=r}^{n}\binom{k}{r}=\binom{n+1}{r+1}$$
is a nice closed form that can be proved easily by induction. This holds for any nonnegative $r$.</p>
<p>Closed forms for: $$\sum_{k=1}^{n}k^{r}$$
can be deduced from it.</p>
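The identity in this hint is the hockey-stick identity, and it is quick to spot-check, including the way it yields power sums (e.g. $k^2 = 2\binom{k}{2}+\binom{k}{1}$):

```python
from math import comb

for r in range(6):
    for n in range(r, 15):
        assert sum(comb(k, r) for k in range(r, n + 1)) == comb(n + 1, r + 1)

# e.g. it gives sum k^2 via k^2 = 2*C(k,2) + C(k,1)
n = 20
assert sum(k * k for k in range(1, n + 1)) == 2 * comb(n + 1, 3) + comb(n + 1, 2)
```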
|
3,055,324 | <p>I need some help with constructing a proof of the following statement: <span class="math-container">$ \frac{P_1 P_2}{\operatorname{hcf}(P_1,P_2)} = \operatorname{lcm}(P_1,P_2)$</span>, where <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> are polynomials with real coefficients.</p>
<p>I know how to do the same for integers using prime factors and their exponents but not sure where to go with polynomials.</p>
Quang Hoang | 91,708 | <p>It works pretty much the same as for integers if you modify the argument a little. Let <span class="math-container">$L = lcm(P_1, P_2)$</span> and <span class="math-container">$G=gcd(P_1, P_2)$</span>. Then
<span class="math-container">$$P_1 = Gh_1, P_2 = Gh_2,$$</span>
with <span class="math-container">$gcd(h_1, h_2) = 1$</span>. It's easy to see that <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> both divide <span class="math-container">$Gh_1h_2$</span>, so <span class="math-container">$L$</span> also divides <span class="math-container">$Gh_1h_2$</span>. Assume that
<span class="math-container">$$ Gh_1h_2 = Lh,$$</span>
then <span class="math-container">$P_1 h_2 = L h$</span>, or <span class="math-container">$h_2 = \frac{L}{P_1} h$</span>. That is, <span class="math-container">$h$</span> divides <span class="math-container">$h_2$</span>. Similarly <span class="math-container">$h$</span> divides <span class="math-container">$h_1$</span>. Since <span class="math-container">$gcd(h_1,h_2)=1$</span>, <span class="math-container">$h$</span> must be a scalar. In other words,
<span class="math-container">$$L= Gh_1h_2 = \frac{P_1P_2}{G}.$$</span></p>
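The identity $\operatorname{lcm}(P_1,P_2)=P_1P_2/\gcd(P_1,P_2)$ (up to a nonzero scalar) can be checked concretely with a small exact-arithmetic polynomial toolkit; the coefficient-list representation and all helper names below are my own throwaway choices, not from the answer:

```python
from fractions import Fraction

def pdivmod(a, b):
    # long division of coefficient lists (highest degree first) over the rationals
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q, r = [], a[:]
    while len(r) >= len(b):
        c = r[0] / b[0]
        q.append(c)
        r = [r[i] - c * (b[i] if i < len(b) else 0) for i in range(len(r))][1:]
    return q, r

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return out

def pstrip(p):
    # drop leading zero coefficients (keep at least one entry)
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def pgcd(a, b):
    # Euclidean algorithm; returns the monic gcd
    a, b = pstrip(a), pstrip(b)
    while any(b):
        _, r = pdivmod(a, b)
        a, b = b, pstrip(r) if r else [Fraction(0)]
    return [Fraction(c) / Fraction(a[0]) for c in a]

P1 = [1, 0, -1]   # x^2 - 1 = (x-1)(x+1)
P2 = [1, -1, 0]   # x^2 - x = x(x-1)
G = pgcd(P1, P2)                     # -> x - 1
L, rem = pdivmod(pmul(P1, P2), G)    # candidate lcm = P1*P2 / gcd
assert not any(rem)                  # the division is exact
assert not any(pdivmod(L, P1)[1])    # L is a multiple of P1
assert not any(pdivmod(L, P2)[1])    # ... and of P2
assert G == [1, -1] and L == [1, 0, -1, 0]   # x - 1 and x^3 - x
```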
|
293,047 | <p>When I am reading through higher Set Theory books I am frequently met with statements such as '$V$ is a model of ZFC' or '$L$ is a model of ZFC' where $V$ is the Von Neumann Universe, and $L$ the Constructible Universe. For instance, in Jech's 'Set Theory' pg 176, in order to prove the consistency of the Axiom of Choice with ZF, he constructs $L$ and shows that it models the ZF axioms plus AC. </p>
<p>However, isn't this strictly inaccurate, as $V$ and $L$ are proper classes? For instance, by this very method we might as well take it as a <em>theorem</em> in ZFC that ZFC is consistent, since $V$ models ZFC. However this is obviously impossible, as ZFC cannot prove its own consistency. I highly doubt that Jech would make a mistake in such a classic textbook, so I must be missing something.</p>
<p>How could we, for instance, show Con(ZF) $\implies$ Con(ZF + AC) without invoking the use of proper classes? I imagine, for instance, that we would start with some (set sized) model $M$ of ZFC and apply some sort of 'constructible universe' construction to $M$. </p>
| Asaf Karagila | 7,206 | <p>Yes, that is true. But note that in its nature statements like $\operatorname{Con}(T)$ are <em>meta-theoretic</em> statements. So when we say that $V$ is a model of $\sf ZF$, we mean that in the meta-theory it is a model of $\sf ZF$.</p>
<p>This is often something which is not stressed enough in introductions to $V$ and relative consistency results: when we prove that $L$ is a model of $\sf ZFC$, we do not "just prove a meta-theoretic result", we in fact prove a stronger statement:</p>
<blockquote>
<p>There is a formula $L$ in the language of set theory which defines a class that is provably transitive and contains all the ordinals, and for every axiom $\varphi$ of $\sf ZFC$, $\sf ZF\vdash\varphi^\it L$.</p>
</blockquote>
<p>So not only do you have this model; in fact $\sf ZF$ itself proves that each axiom of $\sf ZFC$ holds in $L$.</p>
<hr>
<p>Let me also share, in my first course on axiomatic set theory, which was given by the late Mati Rubin, we had proved that $\sf ZF-Reg$ and $\sf ZF$ are equiconsistent by practically proving that $\sf PRA$ proves that if there is a contradiction in $\sf ZF$, then there is one in $\sf ZF-Reg$.</p>
<p>Of course, the same can be done with $\sf ZF$ and $\sf ZFC$. And it is much more annoying than using the model theoretic approach. Sometimes with impunity when it comes to class models.</p>
|
2,964,359 | <blockquote>
<p>Let <span class="math-container">$(X, d)$</span> be a metric space with no isolated points, and let <span class="math-container">$A$</span>
be a relatively discrete subset of <span class="math-container">$X$</span>. Prove that <span class="math-container">$A$</span> is nowhere
dense in <span class="math-container">$X$</span>.</p>
</blockquote>
<p><strong>relatively discrete subset of</strong> <span class="math-container">$X$</span>:= A subset <span class="math-container">$A$</span> of a topological space <span class="math-container">$(X,\mathscr T)$</span> is relatively discrete provided that for each <span class="math-container">$a\in A$</span>, there exists <span class="math-container">$U\in \mathscr T$</span> such that <span class="math-container">$U \cap A=\{a\}$</span>.</p>
<p>My aim is to prove <span class="math-container">$int(\overline{A})=\emptyset$</span>. Suppose, if possible, that <span class="math-container">$int(\overline{A})\neq \emptyset$</span>, and let <span class="math-container">$x\in int(\overline{A})$</span>; then there exists <span class="math-container">$B_d(x,\epsilon)\subset \overline A=A\cup A'$</span>.</p>
<p>How do I complete the proof?</p>
<p>What if metric space is replaced by arbitrary topological space, will the result still hold?</p>
| Hagen von Eitzen | 39,174 | <p>With your <span class="math-container">$x$</span> and <span class="math-container">$\epsilon$</span>, <span class="math-container">$$\overline A\setminus B_d(x,\epsilon/2)$$</span> is a strictly smaller closed set than <span class="math-container">$\overline A$</span>, hence cannot contain all of <span class="math-container">$A$</span>. Pick <span class="math-container">$a\in A\cap B_d(x,\epsilon/2)$</span>, by which we achieve that <span class="math-container">$$a\in B_d(a,\epsilon/2)\subseteq A\cap\operatorname{int}(\overline A).$$</span>
By relative discreteness, we find <span class="math-container">$\delta>0$</span> such that <span class="math-container">$B_d(a,\delta)\cap A=\{a\}$</span>.
Wlog <span class="math-container">$\delta\le\epsilon/2$</span>. Now
<span class="math-container">$$ S:=(\overline A\setminus B_d(a,\delta))\cup \{a\}$$</span>
is a closed set with <span class="math-container">$A\subseteq S\subseteq \overline A$</span>, hence <span class="math-container">$S=\overline A$</span>. We still have <span class="math-container">$B_d(a,\delta)\subseteq \overline A=S$</span>, but <span class="math-container">$B_d(a,\delta)\cap S=\{a\}$</span>, which means that <span class="math-container">$B_d(a,\delta)=\{a\}$</span>, contrary to the assumption that there are no isolated points.</p>
|
1,375,365 | <p>Find all polynomials for which </p>
<p>What I have done so far:
for $x=8$ we get $p(8)=0$
for $x=1$ we get $p(2)=0$</p>
<p>So there exists a polynomial $p(x) = (x-2)(x-8)q(x)$</p>
<p>This is where I get stuck. How do I continue?</p>
<p><strong>UPDATE</strong></p>
<p>After substituting and simplifying I get
$(x-4)(2ax+b)=4(x-2)(ax+b)$</p>
<p>For $x = 2,8$ I get</p>
<p>$x= 2 \to -8a+b=0$</p>
<p>$x= 8 \to 32a+5b=0$</p>
<p>which gives $a$ and $b$ equal to zero.</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Let the highest of power of $x$ be $n$</p>
<p>So, $(x-8)[a(2x)^n+\cdots]=8(x-1)[ax^n+\cdots]$</p>
<p>Comparing the coefficients of $x^{n+1},$
$$a2^n=8a\implies n=3$$</p>
<p>Let $p(x)=(x-2)(x-8)(ax+b)$ where $a,b$ are arbitrary constants to be determined</p>
<p>Hope you can take it from here.</p>
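The degree comparison here fixes the equation being solved as $(x-8)\,p(2x)=8(x-1)\,p(x)$ (my reading of the truncated question). Carrying the hint through with that reading leads to $b=-4a$, i.e. $p(x)=c\,(x-2)(x-4)(x-8)$, and since both sides are degree-4 polynomials, agreement at more than 5 points verifies the identity:

```python
def p(x):
    return (x - 2) * (x - 4) * (x - 8)

# both sides are degree-4 polynomials, so agreement at 7 points proves the identity
for x in range(-3, 4):
    assert (x - 8) * p(2 * x) == 8 * (x - 1) * p(x)
```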
|
2,946,384 | <p>How to prove that any integer n which is not divisible by 2 or 3 is not divisible by 6?</p>
<p>The point was to prove separately the inverse, converse, and contrapositive of the given statement: "for all integers n, if n is divisible by 6, then n is divisible by 3 and n is divisible by 2".
I have proofs for the converse and inverse similar to those given in comments.
I have trouble only with the proof that an integer not divisible by 2 or 3 is not divisible by 6. </p>
<p>As I review my proof of the inverse statement, I'm not sure of it either: "For all integers n, if n is not divisible by 6, n is not divisible by 3 or n is not divisible by 2."</p>
<p>n = 6*x where x is not an integer<br>
n = 2*3*x<br>
n/2 = 3*x and n/3 = 2*x where 2x or 3x is not an integer,<br>
so n is not divisible by 2 or 3</p>
| J.G. | 56,861 | <p>Since <span class="math-container">$dx=dt/t$</span>, you need to divide the whole integrand by <span class="math-container">$t$</span>.</p>
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
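Theorem 1.5.2 is Cantor's theorem: there is an injection $S\to\mathcal{P}(S)$ (namely $s\mapsto\{s\}$) but no surjection, so the cardinality of $S$ is strictly smaller. For a small finite set this can be checked exhaustively, and the diagonal set $D=\{x : x\notin f(x)\}$ witnesses it for every $f$:

```python
from itertools import combinations, product

S = [0, 1, 2]
power_set = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
assert len(power_set) == 2 ** len(S)

for f in product(power_set, repeat=len(S)):   # every function f: S -> P(S)
    D = {x for x in S if x not in f[x]}       # Cantor's diagonal set
    assert all(D != f[x] for x in S)          # D is never in the image of f
```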
| Muphrid | 45,296 | <p>The relationship between $\mathbb C$ and $\mathbb R^2$ becomes clearer using Clifford algebra.</p>
<p>Clifford algebra admits a "geometric product" of vectors (and more than just two vectors). The so-called complex plane can instead be seen as the algebra of geometric products of two vectors.</p>
<p>These objects--geometric products of two vectors--have special geometric significance, both in 2d and beyond. Each product of two vectors describes a pair of reflections, which in turn describes a rotation, specifying not only the unique plane of rotation but also the angle of rotation. This is at the heart of why complex numbers are so useful for rotations; the generalization of this property to 3d generates quaternions. For this reason, these objects are sometimes called <em>spinors</em>.</p>
<p>On the 2d plane, for every vector $a$, there is an associated spinor $a e_1$, formed using the geometric product. It is this explicit correspondence that is used to convert vector algebra and calculus on the 2d plane to the algebra and calculus of spinors--of "complex numbers"--instead. Hence, much of the calculus that one associates with complex numbers is instead intrinsic to the structure of the 2d plane.</p>
<p>For example, the residue theorem tells us about meromorphic functions' integrals; there is an equivalent vector analysis that tells us about integrals of vector functions whose divergences are delta functions. This involves using Stokes' theorem. There is a very tight relationship between holomorphic functions and vector fields with vanishing divergence and curl.</p>
<p>For this reason, I regard much of the impulse to complexify problem on real vector spaces to be inherently misguided. Often, but not always, there is simply no reason to do so. Many results of "complex analysis" have real equivalents, and glossing over them deprives students of powerful theorems that would be useful outside of 2d.</p>
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
| Wlod AA | 490,755 | <p>For the sake of easy communication, it is common to identify <span class="math-container">$\ \mathbb C\ $</span> and
<span class="math-container">$\ \mathbb R^2\ $</span> via the algebraic construction identifying <span class="math-container">$\ \mathbb C\ $</span> with the field
<span class="math-container">$\mathbb R[i]/(i^2+1).\ $</span> However, there are many other equivalent ways to define <span class="math-container">$\ \mathbb C,\ $</span> e.g. as <span class="math-container">$\mathbb R[\epsilon]/(\epsilon^2+\epsilon+1).\ $</span>
Thus, in principle, an axiomatic approach would be cleaner -- for instance, as an algebraically closed field with an automorphism called conjugation, etc.</p>
<hr>
<p>Complex analysis feels very different from real analysis. Formally, the vector spaces are different in an essential way. E.g. there is always an eigenvalue and an eigenvector over <span class="math-container">$\ \mathbb C\ $</span> but not always over <span class="math-container">$\ \mathbb R.\ $</span> The complex field is much more algebraic and geometric. The real smooth (infinitely differentiable) functions on manifolds are very flexible (see the partition of unity!), they remind you of the real-valued continuous functions on normal and paracompact topological spaces. On the other hand, complex-differentiable functions are right away infinitely differentiable (analytic), they are quite rigid, and they feel almost like polynomials. To Riemann, analytic functions were global creatures rather than local. Euler already looked at analytic functions as at infinite-degree polynomials, and that is how he was able to find/compute <span class="math-container">$\ \sum_{n=1}^\infty\, \frac 1{n^2}\ =\ \pi^2/6.$</span></p>
<p>And this goes on and on.</p>
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
| Allawonder | 145,126 | <p>The basic difference between <span class="math-container">$\mathrm C$</span> and <span class="math-container">$\mathrm R^2$</span> which makes electrical engineers prefer working with complex quantities is that <span class="math-container">$\mathrm C$</span> is not usually thought of as just a set (yes, it's an abuse of notation, but that's common -- it's almost impossible to imagine some set without thinking of at least some structure on it). It has an algebra over it very similar to the usual algebra with real numbers, so we can manipulate these vectors almost as effortlessly as with real numbers -- perhaps sometimes even more effortlessly.</p>
<p>They come into their own when we start doing analysis -- that is, dealing with functions. Functions of a complex variable have remarkable analytic properties which make them easier to work with in many cases. Also, such functions are just an elegant way to model many natural phenomena we may want to analyse. In electrical engineering in particular, the interest is in oscillations, which find a very natural interpretation in terms of complex variables. Couple this with their algebraic properties and you have a powerful system of tools to literally calculate with oscillations (or whatever other objects you're dealing with).</p>
|
1,671,357 | <p>I'm trying to solve a minimization problem whose purpose is to optimize a matrix whose square is close to another given matrix. But I can't find an effective tool to solve it.</p>
<p>Here is my problem:</p>
<blockquote>
<p>Assume we have an unknown matrix $Q$ with parameters $q_{11}, q_{12},q_{14},q_{21},q_{22},q_{23},q_{32},q_{33},q_{34},q_{41},q_{43},q_{44}$, and a given matrix $G$, that is,
$Q=\begin{pmatrix}
q_{11}&q_{12} &0 &q_{14} \\ q_{21}&q_{22}& q_{23}&0\\ 0&q_{32}& q_{33}&q_{34}\\ q_{41}&0& q_{43}&q_{44}
\end{pmatrix} $, $G=\begin{pmatrix}
0.48&0.24 &0.16 &0.12 \\ 0.48&0.24 &0.16 &0.12\\0.48&0.24 &0.16 &0.12\\0.48&0.24 &0.16 &0.12
\end{pmatrix} $.</p>
<p>The problem is how to find the values of $q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44$ such that the square of $Q$ is very close to matrix $G$.</p>
</blockquote>
<p>I choose to minimize the Frobenius norm of their difference, that is, </p>
<blockquote>
<p>$ Q^* ={\arg\min}_{Q} \| Q^2-G\|_F$</p>
<p>s.t. $0\leq q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44 \leq 1$,$\quad$<br>
$\quad$ $q11+q12+q14=1$,
$\quad$ $q21+q22+q23=1$,
$\quad$ $q32+q33+q34=1$,
$\quad$ $q41+q43+q44=1$.</p>
</blockquote>
<p>For days I have been struggling to find an effective tool to carry out the above optimization; can someone help me realize it?</p>
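<p>One concrete way to carry out such a minimization is SciPy's <code>minimize</code> with the SLSQP method (a sketch only; the variable ordering, starting point and tolerances below are my own choices, not part of the question):</p>

```python
import numpy as np
from scipy.optimize import minimize

G = np.tile([0.48, 0.24, 0.16, 0.12], (4, 1))

# Positions of the 12 free entries of Q; the four structural zeros stay fixed.
idx = [(0, 0), (0, 1), (0, 3), (1, 0), (1, 1), (1, 2),
       (2, 1), (2, 2), (2, 3), (3, 0), (3, 2), (3, 3)]

def build_Q(q):
    Q = np.zeros((4, 4))
    for (i, j), v in zip(idx, q):
        Q[i, j] = v
    return Q

def objective(q):
    Q = build_Q(q)
    return np.linalg.norm(Q @ Q - G, 'fro')

# One equality constraint per row: the free entries of each row sum to 1.
cons = [{'type': 'eq',
         'fun': lambda q, r=r: sum(q[k] for k, (i, _) in enumerate(idx) if i == r) - 1}
        for r in range(4)]

res = minimize(objective, np.full(12, 1 / 3),
               bounds=[(0, 1)] * 12, constraints=cons)
print(res.fun)
print(build_Q(res.x).round(3))
```

<p>The starting point (all free entries equal to $1/3$) is already feasible, so SLSQP only needs to decrease the Frobenius objective from there.</p>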
| Samrat Mukhopadhyay | 83,973 | <p>For the modified question, let me try to give an answer that can address situations with matrices having the structure that you have. </p>
<p>Basically, your matrix $G$ has the following structure $$G=uv^T$$ where I have taken $u$ as the all $1$'s vector and $v$ a vector of positive coordinates such that $v^Tu=1$, i.e. $G$ is a <a href="https://en.wikipedia.org/wiki/Stochastic_matrix" rel="nofollow">stochastic matrix</a>. It is also easy to see that $G$ is idempotent i.e. $G^2=G$. </p>
<p>You need to find a stochastic matrix $Q$ such that $\|Q^2-G\|_F$ is minimized. Obviously the only exact solution is $Q=G$. But if you do not want to take $Q=G$, then you cannot find an optimal solution. Instead you can choose a matrix $\delta$ having the property $\delta u=0$, form the matrix $Q=G+\delta$, and then ensure that, for a chosen $\epsilon>0$, $$\|Q^2-G\|_F<\epsilon\implies \|G\delta+\delta G+\delta^2\|_F<\epsilon\\\implies \|uv^T\delta+\delta^2\|_F<\epsilon$$ since $\delta u=0$.</p>
<p><strong>Edit:</strong> regarding how to calculate the matrix $\delta$, I am not sure it is always possible to get a closed-form solution for $\delta$, and you may have to use some numerical technique. Basically, for a given $\epsilon$, you have to expand the squared Frobenius norm, which yields
$$Tr(N\delta^Tvv^T \delta+2\delta^T vu^T\delta^2+\delta^2(\delta^T)^2)<\epsilon^2,$$ where $N=u^Tu$ is the dimension (here $N=4$). As I mentioned earlier, in general, this will ensure that the elements of $\delta$ remain inside a hyperellipse of degree $4$. </p>
|
3,334,031 | <p>I was doing some practice problems that my professor had sent us and I have not been able to figure out one of them. The given equation is:</p>
<p><span class="math-container">$-y^2dx +x^2dy = 0$</span></p>
<p>He then asks us to verify that:</p>
<p><span class="math-container">$ u(x, y) = \frac{1}{(x-y)^2}$</span></p>
<p>is an integrating factor. </p>
<p>I multiplied through to get:</p>
<p><span class="math-container">$\frac{-y^2}{(x-y)^2}dx + \frac{x^2}{(x-y)^2}dy = 0$</span> </p>
<p>However, the partial derivatives of these do not equal each other so I am a bit confused...</p>
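<p>As a side note, the mixed partials do in fact agree once simplified; both reduce to $-2xy/(x-y)^3$, so the multiplied form is exact. A quick symbolic check (a SymPy sketch):</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
M = -y**2 / (x - y)**2   # coefficient of dx
N = x**2 / (x - y)**2    # coefficient of dy
My = sp.simplify(sp.diff(M, y))
Nx = sp.simplify(sp.diff(N, x))
print(My, Nx)                 # both equal -2xy/(x-y)^3
print(sp.simplify(My - Nx))   # 0, so the form is exact
```
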
| N. S. | 9,176 | <p>It has (kinda) exponential growth.</p>
<p>Let <span class="math-container">$f_n$</span> be the number of solutions. Then, it is easy to see that
<span class="math-container">$$f_{m+n-1}\geq f_n\cdot f_m$$</span></p>
<p>Indeed, if <span class="math-container">$x_1,.., x_n$</span> is a solution for <span class="math-container">$n$</span> and <span class="math-container">$y_1,.., y_m$</span> is a solution for <span class="math-container">$m$</span> then
<span class="math-container">$$x_1,...,x_n,y_2,.., y_m$$</span>
is a solution for <span class="math-container">$m+n-1$</span>.</p>
<p>From here, you get that
<span class="math-container">$$f_{n+2}\geq f_3 f_n= 3f_n$$</span></p>
<p>Hence,
<span class="math-container">$$f_{2n}\geq 3^{n-1} f_2 =\frac{1}{3} \cdot (\sqrt{3})^{2n} \\
f_{2n+1}\geq 3^n f_1=\frac{1}{\sqrt{3}} \cdot (\sqrt{3})^{2n+1}$$</span></p>
<p>Also note that at each step there are at most 3 choices, therefore
<span class="math-container">$$f_n \leq 3^{n-1}$$</span></p>
<p>From here, it follows that
<span class="math-container">$$\frac{1}{3} \cdot (\sqrt{3})^{n} \leq f_n \leq \frac{1}{3} \cdot 3^n$$</span></p>
<p><strong>Extra</strong> Same way, for each fixed <span class="math-container">$k$</span> we have
<span class="math-container">$$f_{n+k-1} \geq f_n \cdot f_k$$</span>
giving
<span class="math-container">$$f_n \geq C (\sqrt[k-1]{f_k})^n$$</span></p>
<p>Note that
<span class="math-container">$$\sqrt[6]{83}\approx 2.088, \qquad
\sqrt[7]{177}\approx 2.095$$</span></p>
<p><strong>Added</strong> Note that the equation
<span class="math-container">$$f_{m+n-1}\geq f_n\cdot f_m$$</span>
implies that
<span class="math-container">$$g(m+n) \leq g(m)+g(n)$$</span>
where <span class="math-container">$g(n)=- \ln(f(n+1))$</span>.</p>
<p>Then, by <a href="https://en.wikipedia.org/wiki/Subadditivity#Sequences" rel="nofollow noreferrer">Fekete's Subadditive Lemma</a> the limit </p>
<p><span class="math-container">$$l= \lim_n \frac{g(n)}{n}$$</span>
exists and <span class="math-container">$l=\inf\frac{g(n)}{n}$</span>.</p>
<p>Note that this gives that
<span class="math-container">$$\lim_n \frac{\ln(f(n+1))}{n}=-l \\
\lim_n \ln(f(n+1)^{\frac{1}{n}})=-l \\
\lim_n f(n+1)^{\frac{1}{n}}=e^{-l}\\
\lim_n\sqrt[n]{ \frac{f(n+1)}{(e^{-l})^n} }=1$$</span>
meaning that the <span class="math-container">$b$</span> in your formula must be <span class="math-container">$e^{-l}$</span>. </p>
<p>This shows that asymptotically, in some sense, <span class="math-container">$f_n \simeq (e^{-l})^n$</span>. But it is not clear whether you can get some <span class="math-container">$a$</span> so that, in a stronger sense, <span class="math-container">$f_n\simeq a(e^{-l})^n$</span>.</p>
<p>As for finding <span class="math-container">$l$</span>, Fekete's Subadditive Lemma tells you exactly what it is, but I would be surprised if we could calculate it explicitly.</p>
<p>Note that
<span class="math-container">$$l=\inf\frac{g(n)}{n} \Rightarrow l=\inf \frac{- \ln(f(n+1))}{n} \Rightarrow -l=\sup \ln(f(n+1)^\frac{1}{n}) \Rightarrow e^{-l}=\sup \sqrt[n]{f(n+1)}$$</span></p>
|
3,334,031 | <p>I was doing some practice problems that my professor had sent us and I have not been able to figure out one of them. The given equation is:</p>
<p><span class="math-container">$-y^2dx +x^2dy = 0$</span></p>
<p>He then asks us to verify that:</p>
<p><span class="math-container">$ u(x, y) = \frac{1}{(x-y)^2}$</span></p>
<p>is an integrating factor. </p>
<p>I multiplied through to get:</p>
<p><span class="math-container">$\frac{-y^2}{(x-y)^2}dx + \frac{x^2}{(x-y)^2}dy = 0$</span> </p>
<p>However, the partial derivatives of these do not equal each other so I am a bit confused...</p>
| David E Speyer | 448 | <p>If we restrict ourselves to sequences which stay between <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, we can get an exact count, and thus a lower bound for the original problem. Let <span class="math-container">$M$</span> be the matrix with rows and columns indexed by <span class="math-container">$\{A, A+1, \cdots, B-1,B \}$</span>, with <span class="math-container">$M_{ij}=1$</span> if <span class="math-container">$j=i+1$</span>, <span class="math-container">$i-1$</span> or <span class="math-container">$2i+1$</span>, and <span class="math-container">$M_{ij}=0$</span> otherwise. Let <span class="math-container">$\vec{e}$</span> be the vector whose <span class="math-container">$0$</span>-entry is <span class="math-container">$1$</span> and whose other entries are <span class="math-container">$0$</span>. Then the number of sequences is <span class="math-container">$\vec{e}^T M^n \vec{e}$</span>. For <span class="math-container">$n$</span> large, this grows like <span class="math-container">$\lambda^n$</span> where <span class="math-container">$\lambda$</span> is the largest eigenvalue of <span class="math-container">$M$</span>. </p>
<p>I computed this largest eigenvalue for <span class="math-container">$A=-B$</span> with <span class="math-container">$5 \leq B \leq 20$</span> and got the following sequence, which seems to be approaching a limit around 2.3:</p>
<pre><code>2.28034, 2.29867, 2.30516, 2.31369, 2.31662, 2.31967, 2.32071, 2.32196,
2.32238, 2.32282, 2.32296, 2.32313, 2.32319, 2.32325, 2.32327, 2.32329
</code></pre>
<p>The characteristic polynomial doesn't seem enlightening: For <span class="math-container">$(A,B) = (-10,10)$</span>, I get </p>
<pre><code> (-1 + t) * (1 + t) * (1 - 3 t - t^2 + t^3) * (1 + 3 t - 3 t^2 - 4 t^3 + t^4 + t^5) *
(-1 - 13 t - 6 t^2 + 53 t^3 + 21 t^4 - 76 t^5 - 21 t^6 + 44 t^7 + 8 t^8 - 11 t^9 - t^10 + t^11)
</code></pre>
<p>The largest root, 2.31967, is a root of the degree 11 factor.</p>
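<p>The eigenvalue computation is easy to reproduce numerically (a Python/NumPy sketch of the same transfer matrix; for a nonnegative matrix the spectral radius is the Perron eigenvalue):</p>

```python
import numpy as np

def growth_rate(B):
    # States are the integers -B..B; index k corresponds to the value k - B.
    n = 2 * B + 1
    M = np.zeros((n, n))
    for k in range(n):
        i = k - B
        for j in (i + 1, i - 1, 2 * i + 1):  # the three allowed moves
            if -B <= j <= B:
                M[k, j + B] = 1
    return max(abs(np.linalg.eigvals(M)))    # spectral radius

print(round(growth_rate(10), 5))  # approx 2.31967, the largest root quoted above
```
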
|
654,408 | <p>I know that the volume form on $S^1$ is $\omega= ydx-xdy$. But how I can derive that? The only things that I know are the definition of differential q-form, and the fact that the vector field $v= y \frac{\partial}{\partial x}-x\frac{\partial}{\partial y}$ never vanishes on $S^1$.</p>
| Willie Wong | 1,543 | <ol>
<li><p>You speak of "the" volume form. There are <em>many</em> volume forms on any orientable smooth manifold. Given $\omega$ a volume form, and a non-vanishing smooth function $f$, then $f\omega$ is again a volume form. </p></li>
<li><p>The task of finding "a" volume form is to find a non-vanishing top degree form on the manifold in question. To verify that a one form $\omega$ is a volume form on $S^1$ you of course need to check that for any tangent vector $X$ of $S^1$, the contraction $\omega(X) \neq 0$. Now, any one form on $\mathbb{R}^2$ can be written as $\omega_x \mathrm{d}x + \omega_y \mathrm{d}y$. Since you already know that the tangent space of $S^1$ is spanned by $y \partial_x - x\partial_y$, it suffices to find any pair of real functions $\omega_x, \omega_y$ such that
$$ \omega(X) = \omega_x y - \omega_y x \neq 0 $$
for any $(x,y)\in S^1$. The choice that $\omega_y = -x$ and $\omega_x = y$ is just one of many possible choices. </p></li>
<li><p>Because of the freedom described above, there isn't one method to derive your given one form as <em>the</em> volume form on $S^1$ ... unless you add additional requirements. For example, on the unit circle the volume form you wrote down is the "natural" one (up to choice of orientation) given by the induced Riemannian metric. And perhaps this is what you are seeking, in the end:</p>
<p>The natural volume form on $\mathbb{R}^2$ is $\mathrm{d}x\mathrm{d}y$. Using the Riemannian metric on $\mathbb{R}^2$ we can write down the <em>unit normal</em> vector field to the unit circle, and this happens to be $x\partial_x + y\partial_y$. Since $\mathbb{R}^2$ is orientable and $S^1$ has a unit normal field, it is also orientable. And a volume form (indeed the one for the induced Riemannian metric) can be given by
$$ \omega = (\mathrm{d}x\mathrm{d}y)(x \partial_x + y\partial_y, \cdot) $$</p>
<p>This method is not restricted to Riemannian manifolds. Suppose $\Sigma$ is a hypersurface in a smooth manifold $M$. Assume that $M$ is orientable and so $\omega$ is a volume form for $M$. Assume also that $\Sigma$ admits a field of "normal vectors", by which I mean that there exists a vector field $n$ defined along $\Sigma$ that is never in the tangent space to $\Sigma$. Then a volume form for $\Sigma$ can be found by contracting (taking the interior product) $\iota_n\omega$. </p></li>
</ol>
|
189,069 | <p>The Survival Probability for a walker starting at the origin is defined as the probability that the walker stays positive through n steps. Thanks to the Sparre-Andersen Theorem I know this PDF is given by</p>
<pre><code>Plot[Binomial[2 n, n]*2^(-2 n), {n, 0, 100}]
</code></pre>
<p>However, I want to validate this empirically. </p>
<p>My attempt to validate this for <code>n=100</code>:</p>
<pre><code>FoldList[
If[#2 < 0, 0, #1 + #2] &,
Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0]]
</code></pre>
<p>I want <code>FoldList</code> to stop if <code>#2 < 0</code> evaluates to <code>True</code>, not just substitute in 0. </p>
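<p>(As an independent cross-check of the Sparre-Andersen value, a Python sketch separate from the code above: simulate many walks with standard normal steps and count those whose partial sums stay positive through all $n$ steps.)</p>

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, trials = 5, 200_000
walks = rng.standard_normal((trials, n)).cumsum(axis=1)
survived = (walks > 0).all(axis=1).mean()  # empirical survival probability
exact = comb(2 * n, n) / 4**n              # Sparre-Andersen: C(2n,n) 2^(-2n)
print(survived, exact)                     # both close to 0.246
```
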
| Carl Lange | 57,593 | <p>We can do this using an implementation of <code>FoldWhileList</code>.</p>
<p>First, implement <code>FoldWhileList</code> using <a href="https://mathematica.stackexchange.com/a/19105/57593">this great answer</a>.</p>
<pre><code>FoldWhileList[f_, test_, start_, secargs_List] :=
Module[{tag},
If[# === {}, {start}, Prepend[First@#, start]] &@
Reap[Fold[If[test[##], Sow[f[##], tag], Return[Null, Fold]] &,
start, secargs], _, #2 &][[2]]]
</code></pre>
<p>Now we simply run this using the test <code>#2 >= 0</code> (note that the implementation of <code>NestWhile</code> breaks when <code>test</code> stops evaluating <code>True</code> - our implementation of <code>FoldWhileList</code> also does this, therefore we invert the test you originally used).</p>
<pre><code>FoldWhileList[Plus, #2 >= 0 &, 0,
Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0]]
</code></pre>
<p>We can now estimate your PDF:</p>
<p><a href="https://i.stack.imgur.com/3h0of.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3h0of.png" alt="pdf estimate"></a></p>
<p>and overlay it over the original plot also:</p>
<p><a href="https://i.stack.imgur.com/WMh5x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WMh5x.png" alt="overlaid plots"></a></p>
<p>which doesn't seem like a great match - perhaps there's an issue with your original code, as <a href="https://mathematica.stackexchange.com/a/189088/57593">this answer surmises</a>.</p>
|
2,792,061 | <p>I would like to understand how to apply <em>well-founded induction</em>. I have found two definitions which I list now, followed by the question.</p>
<blockquote>
<p>(1) A binary relation $\prec$ is <a href="http://www.cs.cornell.edu/courses/cs6110/2013sp/lectures/lec07-sp13.pdf" rel="noreferrer"><em>well-founded</em></a> if there are no infinite descending chains. An <em>infinite descending chain</em> is an infinite sequence $a_0, a_1, \dotsc$ such that $a_{i + 1} \prec a_i$ (it goes in reverse order) for all $i \geq 0$.</p>
<p>(2) Well-founded induction says that, in order to prove a property $P$ holds on a set $A$ which has a well-founded binary relation $\prec$, it's enough to prove that $P$ holds for any $a \in A$ whenever $P$ holds for $b \prec a$.</p>
</blockquote>
<p>That last paragraph I don't quite understand. And I am having trouble parsing the multiple-nested $\Rightarrow$ blocks in the formal definition:</p>
<blockquote>
<p>$$\forall a \in A.(\forall b \in A.b \prec a \Rightarrow P(b)) \Rightarrow P(a) \Rightarrow \forall a \in A.P(a)$$</p>
</blockquote>
<p>An alternative from <a href="https://en.wikipedia.org/wiki/Well-founded_relation#Induction_and_recursion" rel="noreferrer">Wikipedia</a> is just as difficult to parse:</p>
<blockquote>
<p>(3) If $x$ is an element of $X$ and $P(y)$ is true for all $y$ such that $y R x$, then $P(x)$ must also be true.</p>
<p>$$\forall x\in X\,[(\forall y\in X\,(y\,R\,x\to P(y)))\to P(x)]\to \forall x\in X\,P(x)$$</p>
</blockquote>
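<p>For reference, one way to disambiguate the Wikipedia formula is to insert every implicit pair of parentheses:</p>
<p>$$\Big[\,\forall x\in X\;\big(\,(\forall y\in X\,(y\,R\,x\to P(y)))\to P(x)\,\big)\,\Big]\;\to\;\forall x\in X\,P(x)$$</p>
<p>So the outermost connective is the final $\to$: the single hypothesis is the whole bracketed induction step (quantified over $x$), and the conclusion is $\forall x\in X\,P(x)$.</p>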
<p>The questions are:</p>
<ol>
<li>If one could break down / parse that formal equation to explain its meaning.</li>
<li>And likewise for that paragraph (2), what it means that "it's enough to prove that $P$ holds for any $a \in A$ whenever $P$ holds for $b \prec a$"</li>
</ol>
<p>My attempt at understanding the wiki version is, for all $x \in X$, then if [the first big block] is true, then we can say for all $x \in X$, $P(x)$ is true. So that is essentially saying if that big chunk is true, then we have proven our property $P(x)$ is true for all x. Not sure how that is the case, but continuing...</p>
<p>Then the "first big block" is saying, if for all $y \in X$ [the second big block] is true, then $P(x)$ is true, for the specific $x$ we are focused on atm. So for all $y \in X$, $P(x)$ is going to be true if that "second big block" is true.</p>
<p>Finally, the "second big block" is saying if $y$ <em>precedes</em> $x$, then $P(y)$ is true. So if $y$ comes before $x$, then we at least know $P(y)$ is true. Not really sure what this means (how we know $P(y)$ is true).</p>
<p>So to summarize, if for all $x$, that for all $y$, if $y$ precedes $x$ then $P(y)$ is true, that if that is true, then $P(x)$ is true, and if that's true, then $P(x)$ is true for all $x$.</p>
<p>I have no idea what I am saying right now lol, I am having a tough time understanding this and looking for some guidance. Thank you so much.</p>
| qualcuno | 362,866 | <p>Note that since you have $m+1$ polynomials, linear independence would in particular imply that $<p_0,...,p_m>$ generates $\mathbb{F}_{\leq m}$. However, this is not so: clearly</p>
<p>$$
<p_0, \dots p_m> \subseteq \{p \in \mathbb{F}_{\leq m} : p(2) = 0\}
$$</p>
<p>which is a proper subspace of $\mathbb{F}_{\leq m}$ since, for example, it does not contain 1. In general you could have linearly independent polynomials that behave like this, as long as they are not 'too many'. Take for example $(X-2)X$ and $(X-2)X^2$ in $\mathbb{R}_{\leq 3}$, they both verify the former property but they are linearly independent, because they have different degrees. We have then constructed a linearly independent set of polynomials that vanishes at $2$. </p>
<p>In general, for any vector space, proving that $v_1, \dots v_n$ are linearly independent consist of showing the following implication</p>
<p>$$
\sum_{i=1}^na_iv_i = 0 \ \Rightarrow a_i = 0 \ (\forall i)
$$</p>
<p>In this particular scenario, the first equality to zero means the zero polynomial, that is, the one whose coefficients are all zero (this shouldn't be confused with $p(z) = 0 \ (\forall z)$, think for example of $X(X-1)$ in $\mathbb{Z}_2$).</p>
|
203,505 | <p>Let <span class="math-container">$P(x)$</span> be a non-constant polynomial with real coefficients.</p>
<p>Can <a href="http://en.wikipedia.org/wiki/Natural_density" rel="noreferrer">natural density</a> of</p>
<p><span class="math-container">$$\{n\ |\ \lfloor P(n)\rfloor \ \text{is prime.}\}$$</span></p>
<p>be positive?</p>
| Igor Rivin | 11,142 | <p>The Bateman-Horn conjecture says no. See <a href="http://projecteuclid.org/download/pdf_1/euclid.bams/1183551839">this paper of Kevin McCurley</a> for related results.</p>
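<p>An empirical illustration of the expected decay (a self-contained Python sketch, using $P(x)=x^2+1$ as an example polynomial): the fraction of $n\le N$ with $n^2+1$ prime shrinks as $N$ grows.</p>

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

densities = []
for N in (100, 1000, 3000):
    count = sum(is_prime(n * n + 1) for n in range(1, N + 1))
    densities.append(count / N)
print(densities)  # roughly 0.19, 0.112, ... : decreasing toward 0
```
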
|
536,362 | <p>Let $\Sigma = \sigma(\mathcal C)$ be the $\sigma$-algebra generated by the countable collection of sets $\mathcal C \subset \mathcal{P}(X)$. How can I prove that if $\mu$ is a $\sigma$-finite measure on $(X,\Sigma)$ then $L^p(X)$ is separable for $1 \le p < \infty$?</p>
<p>I know that simple functions are dense in $L^p(X)$, so I would like to find a countable subset of the set of simple functions that is dense in them. Could you help me please?</p>
| Davide Giraudo | 9,849 | <p>Before going into a formal proof, here is the idea. The space is $\sigma$-finite, so we can "break" it into countably many spaces of finite measure. Up to some technical considerations, we are reduced to the case $X$ of finite measure. An algebra generated by a countable class is countable and we can approximate elements of finite measure by those of a generating algebra.</p>
<p>Let $(A_n,n\geqslant 1)$ be a partition of $X$ into measurable sets of finite measure. </p>
<ol>
<li><p>We show that $(A_n,A_n\cap \Sigma,\mu_{\mid A_n\cap \Sigma})$ is separable. Consider $f\in L^p(A_n)$ and fix $\varepsilon>0$. There is a simple function $f'=\sum_{j=1}^J a_j\chi_{B_j}$ such that $\int_{A_n}|f-f'|^p\mathrm d\mu\lt\varepsilon^p$. Define $\mathcal A_n$ as the algebra generated by sets of the form $A_n\cap C,C\in\mathcal C$. Then $\mathcal A_n$ is countable. <a href="https://math.stackexchange.com/questions/228998/approximating-a-sigma-algebra-by-a-generating-algebra">Approximate $B_j$</a> by $B'_j$, an element of $\mathcal A_n$, that is, such that $\mu(B'_j\Delta B_j)\lt \frac 1{J(|a_j|^p+1)}\varepsilon$. Defining $f'':=\sum_{j=1}^Ja_j\chi_{B'_j}$, we get $\lVert f-f''\rVert^p\lt 2\varepsilon$. </p></li>
<li><p>Define $D_n$ as the set of linear combinations with rational coefficients of characteristic functions of elements of $\mathcal A_n$. Since $\mathcal{A}_n$ is countable, so is $D_n$. Finally, define
$$D:=\bigcup_{N\geqslant 1}\left\{\sum_{i=1}^Nd_i,d_i\in D_i\right\}.$$
Then $D$ is countable and dense in $L^p(X)$.</p></li>
</ol>
|
1,631,589 | <p>Consider the sequence $\{\frac{x^n}{n!}\}_n$ for any number $x$.</p>
<p>By choosing $m>x$ and letting $n>m$, show that:</p>
<p>$\frac{x^n}{n!} < \frac{x^n}{m^n} < \frac{m^m}{(m-1)!}$</p>
<p>I am using the squeeze theorem, but I am unable to get started on the third inequality.</p>
| Maffred | 279,068 | <p>LHS can be interpreted as counting the ways in which you can create two different pairs of two different elements, taken from a set of $n$ toys; while the pairs must be different, they need not be disjoint: they might have a toy in common.</p>
<p>RHS counts the same amount of pairs in a different way: you can create two kinds of such couple of pairs: disjoint ones $(a,b)(c,d)$ and overlapping ones $(a,b)(a,c)$. Remember that every pair must be different and the two couples must be different. The first kind of pairs can be chosen in the following way: first chose $4$ toys in $\binom{n}{4}$ ways, then chose one of the $3$ ways to partition the $4$ into two pairs (to see that there are $3$ ways, focus on one of the four: it must be paired with one of the three remaining ones, and that determines the partition). The second kind of pairs can be chosen by first choosing the toy $a$ that will be common to both pairs in $n$ ways, then chose $2$ toys from the remaining $n-1$ to be $\{b,c\}$; the order does not matter here, so the final choice can be made in $\binom{n-1}{2}$ ways.</p>
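<p>Reading the two sides as the explicit identity $\binom{\binom{n}{2}}{2} = 3\binom{n}{4} + n\binom{n-1}{2}$ (my rendering of the argument above), a quick numerical check in Python:</p>

```python
from math import comb

for n in range(2, 50):
    lhs = comb(comb(n, 2), 2)                  # unordered pairs of distinct pairs
    rhs = 3 * comb(n, 4) + n * comb(n - 1, 2)  # disjoint + overlapping couples
    assert lhs == rhs
print("identity verified for n = 2..49")
```
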
|
3,042,149 | <p>We can't exactly draw a line of length square root of 2 but in an isosceles right angle triangle of sides 1 unit each, the length of hypotenuse will be the square root of 2. Now does it mean we can get the line of exact such length?</p>
<p>How is it possible? How can we get a line of exact length square root of $2$, which we can't construct exactly due to its infinite decimal expansion? So does the Pythagorean theorem mislead us or create a paradox?</p>
| PrincessEev | 597,568 | <p>A lot of numbers are called "constructible", in that, if <span class="math-container">$n$</span> is constructible, then we can find a construction in which a line of <span class="math-container">$n$</span> units can be made. Granted, this is not a trivial task depending on the construction and number involved.</p>
<p>In the sense of construction and only drawing a line, the only thing we can draw "exactly" is a line segment which we would denote as our unit, essentially <span class="math-container">$1$</span>. Everything else requires some method of construction. We can make <span class="math-container">$2$</span> by appending two unit lengths; we can make <span class="math-container">$1/2$</span> by bisecting a unit length. </p>
<p>The only things we're given are a unit length, our straightedge, and a compass. Nothing else; take out the construction and we only have a unit. (And you could even argue drawing the line itself is a unit.) From there, we can find constructible numbers <span class="math-container">$n$</span>, which are <span class="math-container">$n$</span> unit lengths long.</p>
<p>It will also be helpful to note that we cannot use rulers. I mean, they can be used, insofar as being a straightedge, but we're not allowed to use them to measure out, say, <span class="math-container">$2$</span> inches or whatever. This is because the ruler might be flawed, your line might be flawed - it could be too long by ever-so-many atom-widths if you want to be pedantic; what temperature it is could affect the scaling even if only minutely; blah blah blah. In constructions, we effectively view the tools and our execution of the construction as "perfect", free of human errors like these, if just in the theoretical sense.</p>
<p>Why am I rambling on about constructions?</p>
<p>Because that is at the core of why you might say you cannot draw a line of length <span class="math-container">$\sqrt2$</span> units - because I think you're looking at the ruler. You look at it and go <em>"I can't get to it because it's irrational. I can only guesstimate in halves or quarters or whatever's on my ruler, I can only make rational estimations. I cannot find square root of <span class="math-container">$2$</span> just by drawing a line."</em></p>
<p>And you're right - you can't make <span class="math-container">$\sqrt2$</span> just by drawing a line. You can't make <em>anything</em> by drawing a line, except your unit, assuming - again - the inherent perfectness of construction and technique and not knowing exact lengths <em>a priori.</em></p>
<p>The only length you can draw, without other aids, is a line <span class="math-container">$1$</span> unit long.</p>
<p>Seems weird when you say it but that's what it comes down to.</p>
<p>So how do we draw others? We use constructions. We make use of geometric principles to demonstrate other numbers are "constructible." These numbers can be constructed, made. Given a unit, straightedge, and compass, we can make a line <span class="math-container">$n$</span> units long, where <span class="math-container">$n$</span> is constructible.</p>
<p>This gives rise to the <a href="https://en.wikipedia.org/wiki/Constructible_number" rel="nofollow noreferrer">constructible numbers</a>. Like the real numbers, they form a field: you can multiply and add any two constructible numbers to get another one. You can also divide and subtract them.</p>
<p>What are some examples of constructible numbers? Well...</p>
<ul>
<li>All of the natural numbers (just keep appending lines of length <span class="math-container">$1$</span> to each)</li>
<li>All of the rational numbers (bisections and appending of various lengths)</li>
<li>Square roots of rational numbers (the isosceles right triangle of side length <span class="math-container">$a$</span> suffices to give a hypotenuse of length <span class="math-container">$\sqrt a$</span>)</li>
</ul>
<p>These are not the only examples; the article I linked discusses them at further length. There are some numbers known to not be constructible:</p>
<ul>
<li><span class="math-container">$\sqrt[3]2$</span></li>
<li><span class="math-container">$\sqrt \pi$</span></li>
<li><span class="math-container">$\cos \left( \arccos(x)/3 \right)$</span> for constructible <span class="math-container">$x$</span> (related to how angles generally cannot be trisected)</li>
</ul>
<p>I think that's enough rambling. In short:</p>
<ul>
<li>In reality, with only a straightedge, you are given a unit length, representative of <span class="math-container">$1$</span></li>
<li>With compass and straightedge, you can make the constructible numbers (again, see <a href="https://en.wikipedia.org/wiki/Constructible_number" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Constructible_number</a> for more info)</li>
<li>Constructions cannot use rulers to measure out lengths: we can only have a straightedge and compass, and the former allows us a starting, unit length.</li>
<li>In constructions, at least analyzing them in theoretical realm (rather than making practical use of them), we look at them in the sense of being "perfect" and absent of human errors. There are definitely rational approximations for <span class="math-container">$\pi$</span> or <span class="math-container">$\sqrt 2$</span> that are really, REALLY good (look into continued fractions if you're curious), and it shouldn't be hard to convince yourself of our ability to draw those. But constructions - if perfect, if idealized, if absent of error - also allows us to represent irrational numbers. </li>
<li>We can draw right triangles via a construction and isosceles right triangles in particular - this isn't at all difficult - so we can draw lines of <span class="math-container">$\sqrt q$</span> for any rational <span class="math-container">$q$</span> units. This includes <span class="math-container">$q=2$</span>. Of course, if you do it in real life, then it's not going to be exactly right, no measurement is. But in looking at constructions and constructible numbers, we ignore that and look at a theoretical, perfect, idealized case where these human errors aren't present. In <em>that</em> sense, <span class="math-container">$\sqrt 2$</span> is absolutely drawable (when you use a construction to get a right triangle). Of course in real life imperfections arise everywhere - but mathematics is kinda beautiful in that sense, it allows us to consider the idealized situation too, as well as make it practical, give us "close enough" solutions to real life.</li>
</ul>
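<p>On the continued-fraction aside above: the convergents of $\sqrt2=[1;2,2,2,\dots]$ are exactly those really good rational approximations, and they are easy to generate (a small Python sketch):</p>

```python
from fractions import Fraction

x = Fraction(1)          # 0th convergent of sqrt(2)
convergents = []
for _ in range(6):
    convergents.append(x)
    x = 1 + 1 / (1 + x)  # next convergent of [1; 2, 2, 2, ...]
print(convergents)                  # 1, 3/2, 7/5, 17/12, 41/29, 99/70
print(float(convergents[-1]) ** 2)  # about 2.0002: already very close to 2
```
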
|
1,457,063 | <p>I am utterly confused on how to solve this problem. I found a lemma that says $|A\cup B|=|A|+|B|$ is true if the two sets are disjoint, which makes sense, but how do I prove the entire statement $|A\cup B|=|A|+|B|-|A\cap B|$? </p>
| R.N | 253,742 | <p>Suppose $x\in A\cup B$. On the left-hand side it is counted exactly once. On the right-hand side two cases may happen: if $x\notin A\cap B$, then it is counted exactly once, in $|A|$ or in $|B|$ alone; but if $x\in A\cap B$, then $\color{red}{1=1+1-1}$.</p>
|
1,765,538 | <p>If $N$ is the set of all natural numbers, $R$ is a relation on $N \times N$, defined by $(a,b) \simeq (c,d)$ iff $ad=bc$, how can I prove that $R$ is an equivalence relation ?</p>
| Michael Burr | 86,421 | <p>Hint: To prove that this is an equivalence relation, you must prove reflexivity, symmetry, and transitivity. Each of these is a different statement that you must prove.</p>
<ul>
<li><p>Reflexivity: Suppose that $(a,b)\in\mathbb{N}\times\mathbb{N}$. Then, you want to prove that $(a,b)R(a,b)$. This can get a little confusing because sometimes a relation is written as a pair - so students sometimes consider the case where $a=b$, i.e., $(a,a)$. If you want to write a relation in terms of a pair, you would write $((a,b),(a,b))$.</p></li>
<li><p>Symmetry: Suppose that $(a,b),(c,d)\in\mathbb{N}\times\mathbb{N}$ such that $(a,b)R(c,d)$ (so $ad=bc$). Now, you would like to prove $(c,d)R(a,b)$, in other words that $cb=da$. This can get a little confusing because students sometimes try to look at symmetry of $(a,b)$, in other words, look at $(b,a)$. Since the objects are in $\mathbb{N}\times\mathbb{N}$, you want to reverse the two elements of $\mathbb{N}\times\mathbb{N}$.</p></li>
<li><p>Transitivity: Suppose that $(a,b),(c,d),(e,f)\in\mathbb{N}\times\mathbb{N}$ such that $(a,b)R(c,d)$ and $(c,d)R(e,f)$; in other words, you know that $ad=bc$ and $cf=de$. You would like to prove $(a,b)R(e,f)$, in other words, that $af=be$. If you manipulate $ad=bc$ by multiplying both sides by $f$, you get $(af)d$ on the LHS, which contains the desired $af$. Now, use the other equality $cf=de$ on the RHS to rewrite $bcf$ as $bde$, and divide by $d$ (is $d\not=0$?!)</p></li>
</ul>
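<p>For a finite sanity check, all three properties can be verified by brute force on a grid of positive naturals (a Python sketch; positivity matters here, since allowing $0$ breaks transitivity: $(1,0)R(0,0)$ and $(0,0)R(0,1)$, but not $(1,0)R(0,1)$):</p>

```python
from itertools import product

def R(p, q):
    return p[0] * q[1] == p[1] * q[0]   # (a,b) R (c,d)  iff  ad = bc

pts = list(product(range(1, 8), repeat=2))  # pairs of positive naturals
assert all(R(p, p) for p in pts)                              # reflexivity
assert all(R(q, p) for p in pts for q in pts if R(p, q))      # symmetry
assert all(R(p, r) for p, q, r in product(pts, repeat=3)
           if R(p, q) and R(q, r))                            # transitivity
print("R is an equivalence relation on the sample grid")
```
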
|
602,286 | <p>I'm reading a paper which uses the following fact; it appears to be standard but I am not sure where to look for a proof.</p>
<blockquote>
<p><strong>Claim.</strong> Let $M$ be a complete Riemannian manifold (assumed to be second countable, so no long lines). There is an increasing sequence of open sets $U_n$ with $\bigcup_n U_n = M$ and smooth, compactly supported functions $\phi_n : M \to [0,1]$ such that $\phi_n = 1$ on $U_n$ and $\sup_n |\nabla \phi_n| < \infty$.</p>
</blockquote>
<p>This is trivial if $M$ is compact (take $U_n = M$ and $\phi_n = 1$). If we drop the requirement that the derivatives of $\phi_n$ be uniformly bounded, it's a consequence of the $\sigma$-compactness of $M$ and Urysohn's lemma. Also, the completeness is essential as we can see by taking $M$ to be an open interval. </p>
<p>I would appreciate a proof (or at least a hint) or a reference.</p>
| Lame-Ov2.0 | 114,476 | <p>You can factor the right-hand side, so your equation looks like</p>
<p>$\dfrac{dy}{dx} = x^2(y+1)-(y+1) = (x^2-1)(y+1),$</p>
<p>which is separable. The rest shouldn't be too bad.</p>
|
602,286 | <p>I'm reading a paper which uses the following fact; it appears to be standard but I am not sure where to look for a proof.</p>
<blockquote>
<p><strong>Claim.</strong> Let $M$ be a complete Riemannian manifold (assumed to be second countable, so no long lines). There is an increasing sequence of open sets $U_n$ with $\bigcup_n U_n = M$ and smooth, compactly supported functions $\phi_n : M \to [0,1]$ such that $\phi_n = 1$ on $U_n$ and $\sup_n |\nabla \phi_n| < \infty$.</p>
</blockquote>
<p>This is trivial if $M$ is compact (take $U_n = M$ and $\phi_n = 1$). If we drop the requirement that the derivatives of $\phi_n$ be uniformly bounded, it's a consequence of the $\sigma$-compactness of $M$ and Urysohn's lemma. Also, the completeness is essential as we can see by taking $M$ to be an open interval. </p>
<p>I would appreciate a proof (or at least a hint) or a reference.</p>
 | neela | 365,061 | <p>After separating the variables, integrate both sides:
$\log(y+1)=\frac{x^3}{3}-x+c$
is the solution of the given differential equation.</p>
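As a sanity check, taking $c=0$ gives the candidate solution $y(x)=e^{x^3/3-x}-1$, which can be tested numerically against the (assumed) original equation $y'=(x^2-1)(y+1)$:

```python
import math

# Candidate solution y(x) = exp(x^3/3 - x) - 1 (with constant c = 0).
def y(x):
    return math.exp(x**3 / 3 - x) - 1

# Check y' = (x^2 - 1)(y + 1) with a central finite difference.
h = 1e-6
for x in [-1.5, -0.3, 0.7, 1.2]:
    lhs = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    rhs = (x**2 - 1) * (y(x) + 1)           # right-hand side of the ODE
    assert abs(lhs - rhs) < 1e-5, (x, lhs, rhs)

print("solution verified at sample points")
```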
|
667,293 | <p>We are given a $n \times m$ rectangular matrix in which every cell there is a light bulb, together with the information whether the bulb is ON or OFF.</p>
<p>Now i am required to switch OFF all the bulbs but i can perform only one operation that is as follows:</p>
<ul>
<li>I can simultaneously flip all the bulbs from ON to OFF and vice versa in any submatrix.</li>
</ul>
<p>I need to switch OFF all the bulbs in a minimum number of moves and tell the number of moves needed as well as the moves themselves. Is there an efficient algorithm for this?</p>
<p>EXAMPLE: Let us assume $n=2$ and $m=3$ .The initial grid is as follow if $0$ stands for OFF and $1$ for ON:</p>
<p>$$\begin{matrix}1 & 1 & 1 \\ 0 & 1 & 1 \end{matrix}$$</p>
<p>Now we can switch OFF all bulbs in 2 moves which are as follow : </p>
<p>Move 1: Flip bulbs in subarray from $(1,1)$ to $(1,1)$</p>
<p>Move 2: Flip bulbs in subarray from $(1,2)$ to $(2,3)$</p>
| Hagen von Eitzen | 39,174 | <p>There are $n={N+1\choose 2}{M+1\choose2}$ ways to pick a submatrix, each corresponding to a vector $v_i\in\mathbb F_2^{NM}$, $1\le i\le n$. You are looking for a solution of $\sum c_iv_i=A$ that minimizes the weight $w(c)$ (i.e. the number of $i$ with $c_i=1$)</p>
<p><strong>Proposition:</strong> Among all minimum-weight solutions there exists one where no two of the chosen submatrices share a corner.</p>
<p><em>Proof:</em>
Among all minimum weight solutions let $c$ minimize $\sum_{c_i=1}w(v_i)$.
Assume two submatrices $v_iv_j$ with $c_i=c_j=1$ share a corner.
Wlog. it is their top left corner: say $v_i$ covers $(a,b)$ to $(c,d)$ and $v_j$ covers $(a,b)$ to $(e,f)$ with $a\le c$, $b\le d$, $a\le e$, $b\le f$.</p>
<ul>
<li>If $c< e$ and $d< f$, we could flip $(a,d+1)$ to $(c,f)$ and $(c+1,b)$ to $(e,f)$ instead</li>
<li>If $c<e$ and $d=f$, we could flip $(c+1,b)$ to $(e,f)$ instead</li>
<li>If $c<e$ and $d>f$, we could flip $(a,f+1)$ to $(c,d)$ and $(c+1,b)$ to $(e,f)$ instead</li>
<li>All remaining cases can be treated similarly (use symmetry)</li>
</ul>
<p>Since this replaces overlapping rectangles with disjoint rectangles the sum of rectangle weights decreases, contradicting its minimality. $_\square$</p>
<p><strong>EDIT:</strong> This proposition may be helpful to devise an algorithm to find the answer by dynamic programming, though I am not sure yet how to do so without requiring <em>lots</em> of memory. Thanks to <em>Christoph Pregel</em> for pointing out that my original corollary of the proposition was trivial. Indeed we already find a better simple general estimate as follows: A row (or column) of $N$ light bulbs with $n$ separate intervals of on-bulbs (so $N\ge 2n-1$) can easily be cleared with $n$ moves. Thus for even $N$ we need at most $\frac12NM$ moves to clear the whole matrix; the same holds for even $M$; and if both $N$ and $M$ are odd, it takes at most $\frac{N+1}{2}$ moves to reduce to the case of even $M$, i.e. we need at most $\lceil\frac{NM}2\rceil $ moves in general.
In fact one can verify by exhaustive search that any $4\times 4$ matrix can be cleared in at most $5$ moves; this shows that $\le 5\cdot \lceil \frac N4\rceil\cdot \lceil \frac M4\rceil\approx \frac{5}{16}NM$ moves suffice. Again, this does not solve the problem itself, but it may give a helpful heuristic estimate for "distance" in a suitable algorithm.</p>
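For very small grids the minimum number of moves can be found by brute-force breadth-first search over all submatrix flips. This sketch is exponential in $NM$ and is only meant to confirm small instances such as the $2\times 3$ example from the question; it is not the efficient algorithm asked for.

```python
from collections import deque

def min_flips(grid):
    """Brute-force BFS over all submatrix flips; only feasible for tiny grids."""
    n, m = len(grid), len(grid[0])
    start = tuple(tuple(row) for row in grid)
    goal = tuple(tuple(0 for _ in range(m)) for _ in range(n))
    # All axis-aligned submatrices (r1..r2, c1..c2).
    moves = [(r1, c1, r2, c2)
             for r1 in range(n) for r2 in range(r1, n)
             for c1 in range(m) for c2 in range(c1, m)]

    def flip(state, r1, c1, r2, c2):
        rows = [list(row) for row in state]
        for r in range(r1, r2 + 1):
            for c in range(c1, c2 + 1):
                rows[r][c] ^= 1
        return tuple(tuple(row) for row in rows)

    seen = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return seen[state]
        for mv in moves:
            nxt = flip(state, *mv)
            if nxt not in seen:
                seen[nxt] = seen[state] + 1
                queue.append(nxt)
    return None

# The 2x3 example from the question needs exactly 2 moves.
print(min_flips([[1, 1, 1], [0, 1, 1]]))  # -> 2
```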
|
2,426,892 | <blockquote>
<p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p>
</blockquote>
<p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p>
<p>Update:
Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
| Will Jagy | 10,400 | <p>As to the next question: the Pell $-1$ and Pell $+1$ equations.</p>
<p>$$ {106515299132603184503844444}^2 - 2017 \cdot 2371696115380807559791481^2 = -1 $$</p>
<p>$$ 22691017898615873418283839489716246568157231499338273^2 - 2017 \cdot 505243842362839347335084683756885179819279000763128^2 = 1 $$</p>
<p>So, $\sqrt {2017}$ is above
$$ \frac{106515299132603184503844444}{2371696115380807559791481} $$
and below
$$ \frac{22691017898615873418283839489716246568157231499338273}{505243842362839347335084683756885179819279000763128} $$</p>
<pre><code>parisize = 4000000, primelimit = 500509
? a = 106515299132603184503844444
%1 = 106515299132603184503844444
? b = 2371696115380807559791481
%2 = 2371696115380807559791481
? a^2 - 2017 * b^2
%3 = -1
?
? c = 22691017898615873418283839489716246568157231499338273
%4 = 22691017898615873418283839489716246568157231499338273
? d = 505243842362839347335084683756885179819279000763128
%5 = 505243842362839347335084683756885179819279000763128
? c^2 - 2017 * d^2
%6 = 1
?
?
? a^2 + 2017 * b^2 - c
%7 = 0
? 2 * a * b - d
%8 = 0
?
?
</code></pre>
|
2,426,892 | <blockquote>
<p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p>
</blockquote>
<p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p>
<p>Update:
Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
| Robert Soupe | 149,436 | <p>Try graphing $\sqrt x$ for $x \geq 0$. You should see a fairly smooth curve that goes upwards, which means that if $a < b$ and they're both positive, then $\sqrt a < \sqrt b$.</p>
<p>From this, it's clear that the integers you want are $\lfloor \sqrt{2017} \rfloor$ and $\lceil \sqrt{2017} \rceil$. A calculator readily tells us that $\sqrt{2017}$ is approximately 44.911, so the answer is 44 and 45.</p>
<p>If you really want to do it by prime factorization, look at the divisors of 2016. Notice that $2016 = 42 \times 48$. Then $43 \times 47 = 2021$ and $44 \times 46 = 2024$, which should strongly suggest 45 is the greater of the integers you're looking for.</p>
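The floor (and hence the ceiling) of the square root can also be computed exactly with integer arithmetic, for instance with Python's `math.isqrt`:

```python
import math

n = 2017
lo = math.isqrt(n)                    # floor of sqrt(n), exact integer arithmetic
hi = lo if lo * lo == n else lo + 1   # ceiling of sqrt(n)

# The defining property of the integer square root.
assert lo * lo <= n < (lo + 1) ** 2

print(lo, hi)  # -> 44 45
```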
|
2,426,892 | <blockquote>
<p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p>
</blockquote>
<p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p>
<p>Update:
Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
| mathreadler | 213,607 | <p>You can use the Newton square root method <a href="https://en.wikipedia.org/wiki/Methods_of_computing_square_roots" rel="nofollow noreferrer">wikipedia</a> for integers:</p>
<p>$$x_{n+1} = \frac 1 2 \left(x_n+ \frac S {x_n}\right)$$</p>
<p>Let us start with a crappy guess </p>
<ol>
<li>$x_1 = 500$:</li>
<li>$x_2 = \frac{1}{2} (500 + 2017/500) = 252$</li>
<li>$x_3 = \frac 1 2 (252 + 2017/252) = 130$</li>
<li>$x_4 = \frac 1 2 (130 + 2017/130) = 72.7$</li>
<li>$x_5 = \frac 1 2 (73 + 2017/73) = 50$</li>
<li>$x_6 = \frac 1 2 (50 + 2017/50) = 45$</li>
</ol>
<p>Now as the iterations seem to converge we can try $45^2 = 2025$ and we have our answer.</p>
|
2,040,293 | <p>I am trying to follow this tutorial: <a href="http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SystemModeling" rel="nofollow noreferrer">http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SystemModeling</a></p>
<p>I am stuck on how to derive a state-space representation from these transfer functions</p>
<p>$$ \frac{\Phi(s)}{U(s)} = \frac{\frac{ml}{q}s}{s^3+\frac{b(I+ml^2)}{q}s^2-\frac{(M+m)mgl}{q}s-\frac{bmgl}{q}} \\ \frac{X(s)}{U(s)} = \frac{\frac{(I+ml^2)s^2-gml}{q}}{s^4+\frac{b(I+ml^2)}{q}s^3-\frac{(M+m)mgl}{q}s^2-\frac{bmgl}{q}s} $$</p>
<p>The text gives a hint "<em>The linearized equations of motion from above can also be represented in state-space form if they are rearranged into a series of first order differential equations. Since the equations are linear, they can then be put into the standard matrix form shown below.</em>"</p>
<p>But I do not understand this hint, I tried to research on to reduce the order like on <a href="http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx" rel="nofollow noreferrer">http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx</a> but maybe you can give me more input to my brain.</p>
<p>Solution:
$$ \begin{bmatrix} \dot x \\ \ddot x \\ \dot \phi \\ \ddot \phi \end{bmatrix}
= \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & \frac{-(I+ml^2)b}{I(M+m)+Mml^2} & \frac{m^2gl^2}{I(M+m)+Mml^2} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{-mlb}{I(M+m)+Mml^2} & \frac{mgl(M+m)}{I(M+m)+Mml^2} & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{I+ml^2}{I(M+m)+Mml^2} \\ 0 \\ \frac{ml}{I(M+m)+Mml^2} \end{bmatrix} u $$
<p>$$ y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} u $$</p>
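The state-space matrices from the solution can be assembled directly from these expressions; the numeric parameter values below are purely illustrative assumptions, not taken from the linked tutorial.

```python
# Sample parameters (illustrative assumptions only):
# M cart mass, m pendulum mass, b friction, l length, I inertia, g gravity.
M, m, b, l, I, g = 0.5, 0.2, 0.1, 0.3, 0.006, 9.8

p = I * (M + m) + M * m * l**2   # common denominator in every entry

A = [[0, 1,                       0,                       0],
     [0, -(I + m * l**2) * b / p, (m**2 * g * l**2) / p,   0],
     [0, 0,                       0,                       1],
     [0, -(m * l * b) / p,        m * g * l * (M + m) / p, 0]]

B = [[0],
     [(I + m * l**2) / p],
     [0],
     [m * l / p]]

# C picks the cart position x and pendulum angle phi out of the state.
C = [[1, 0, 0, 0],
     [0, 0, 1, 0]]

print("denominator p =", p)
```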
| Community | -1 | <p>We fix $n$ and prove induction on $m$. When $m=0$, we have to show that $x^{n+0}=x^{n}\times x^{0}$. But the left hand side is $x^{n+0}=x^{n}$ while the right hand side is $x^{n}\times 1= x^{n}$, so we are done with $m=0$.</p>
<hr>
<p>Now suppose inductively we have already proven that $x^{n+m}=x^{n}\times x^{m}$; we now wish to show that $x^{n+(m++)}=x^{n}\times x^{m++}$. The left hand side is $x^{(n+m)++}= x^{n+m}\times x$ by definition of exponentiation; by induction hypothesis this is equal to $(x^{n}x^{m})x$. Meanwhile the right hand side is $x^{n}\times x^{m++}= x^{n}(x^{m}x) $. The two sides are thus equal due to the associativity of multiplication. Hope it helps.</p>
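The identity just proved can be sanity-checked by brute force over small natural numbers:

```python
# Brute-force check of x^(n+m) = x^n * x^m on a small sample of naturals.
for x in range(1, 6):
    for n in range(6):
        for m in range(6):
            assert x ** (n + m) == x ** n * x ** m

print("identity verified on the sample")
```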
|
39,790 | <p>I'm teaching a programming class in Python, and I'd like to start with the mathematical definition of an array before discussing how arrays/lists work in Python.</p>
<p>Can someone give me a definition?</p>
| Tim van Beek | 7,556 | <p>An array is a tuple with elements taken from a specific set $S$. When the array can contain variables of a specific type, then $S$ is the set of all possible values of this type.</p>
<p>This is the most general definition, as a mathematician I don't talk about implementation details like the memory structure or the complexity of operations on the array. </p>
<p>It may be useful to consider more specific examples: If you fix a field $k$, consider a $k$ vector space $V$ of dimension $n$ and accept a specific data type of Python as a representation of elements of $k$, you can interpret a one-dimensional $k$-array of length $n$ as a representation of a vector in $V$ (with respect to a fixed basis that is left implicit).</p>
<p>Similarly, one could interpret a two-dimensional array of size $n \times n$ as representing a matrix, i.e. a linear transformation of $V$, etc.</p>
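As a concrete sketch of the last two paragraphs (all names here are illustrative), a plain list plays the role of a vector and a nested list the role of a linear map, tied together by the matrix-vector product:

```python
# A length-n list over a field k represents a vector in k^n (basis left implicit).
v = [1.0, 2.0, 3.0]

# An n x n nested list represents a linear transformation of k^n.
A = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, -1.0]]

def apply(A, v):
    """Matrix-vector product: the action of the linear map on the vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

print(apply(A, v))  # -> [2.0, 2.0, -3.0]
```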
|
930,949 | <p>Given that the circle C has center $(a,b)$ where $a$ and $b$ are positive constants and that C touches the $x$-axis and that the line $y=x$ is a tangent to C show that $a = (1 + \sqrt{2})b$</p>
| Mazdak | 161,745 | <p>As we know, <strong>any declarative sentence that is true or false, but not both, is a statement</strong>. In this case $D$ is a set whose contents we don't know, and we assert that $P(x)$ holds for every $x$ in $D$. The fact that we don't know what the set $D$ is may make the statement false, or it may be true; either way, this does not go against the definition!</p>
|
1,017,411 | <blockquote>
<p>Let <span class="math-container">$R$</span> be a commutative ring with <span class="math-container">$1$</span> and <span class="math-container">$M$</span> an <span class="math-container">$R$</span>-module. <span class="math-container">$$\varphi: \begin{cases}R & \longrightarrow \text{end}_R(M) \\ a & \longmapsto \lambda_a \end{cases} $$</span> is a ring isomorphism for <span class="math-container">$M=R$</span></p>
</blockquote>
<p><strong>My approach</strong>: First to clarify, <span class="math-container">$\lambda_a: M \to M, x \mapsto ax$</span> is the homothetic mapping.</p>
<p>I managed to show that <span class="math-container">$\varphi$</span> is a ring homomorphism with <span class="math-container">$\varphi(1_R)=\lambda_{1_R}=1_{\text{end}_R(M)}=id_M$</span></p>
<p>Now I am stuck with the 2nd part. I was told that the easiest way to complete this exercise is to find the inverse mapping and show that the two composition yield the identity mapping (on the respective set)</p>
<p>At some point I was given the following mapping <span class="math-container">$$\xi : \begin{cases}\text{end}_R(R) & \longrightarrow R \\ \delta & \longmapsto \delta (1) \end{cases} $$</span></p>
<p>I still fail to understand the intuition behind this mapping, how one comes up with such an idea and, on top of that, why it works. Here are my calculations:</p>
<p>Let <span class="math-container">$x \in R$</span> be arbitrary
<span class="math-container">$$\varphi(\xi(\delta(x)))=\varphi(\delta(1))=\lambda_{\delta(1)} $$</span>
My <span class="math-container">$x \in R$</span> seems to have 'vanished' which is clearly a bad calculation on my end. So I suppose I have to do the calculation like that: <span class="math-container">$$\varphi(\xi (\delta(x)))=\varphi(\delta(1)(x))=\lambda_{\delta(1)(x)}\overset{?}=\lambda_{\delta(1)}(x)=\delta(1)x =x \delta(1) \\ = \delta(x) = id_{\text{end}_R(R)}\tag{*}\delta(x)$$</span></p>
<p>After the answers provided below I understand that the above calculation does hold, but there is clearly some magic (and with magic I mean bad mathematics performed by me) going on at the step indicated with ?. It seems that my argument is always getting 'eaten up' or ends up in places where I can no longer work with it.</p>
<p>Furthermore let <span class="math-container">$a \in R$</span> be arbitrary: <span class="math-container">$$\xi(\varphi(a))=\xi(\lambda_a)=\lambda_a(1)=a\cdot 1=a=id_M(a) $$</span>
Which I am okay with.</p>
<p>Could someone please enlighten me with some insight regarding this exercise? Especially in the calculation marked with (*) I am hopelessly lost (because of a mapping defined by a mapping through a mapping .....)</p>
| Avitus | 80,800 | <p>Extending the comments:</p>
<ul>
<li><strong>Ring structure on</strong> $\operatorname{End}_R(M)$ (sketch)</li>
</ul>
<p>The composition $\circ$ is nothing but $(\phi\circ\psi)(m):=\phi(\psi(m))\in M$, for all $\phi, \psi\in \operatorname{End}_R(M)$ and $m\in M$. It is clearly associative with unit $1_{\operatorname{End}_R(M)}$ given by $1_{\operatorname{End}_R(M)}: m\mapsto 1(m):=m$.</p>
<ul>
<li><strong>Isomorphism</strong> $\lambda: R\rightarrow \operatorname{End}_R(R)$</li>
</ul>
<p>We want to show that</p>
<p>$$\lambda\circ\Psi = 1,~~~ \Psi\circ\lambda = 1,$$</p>
<p>where $\Psi: \operatorname{End}_R(R)\rightarrow R$ is given by $$\Psi(\varphi):=\varphi(1),~~ \forall \varphi\in \operatorname{End}_R(R)$$
and identities are $1:\operatorname{End}_R(R)\rightarrow \operatorname{End}_R(R)$, and $1: R\rightarrow R$, respectively.</p>
<p>Let us prove that $\Psi$ is a ring homomorphism, to start with. We have</p>
<p>$$\Psi(\rho\circ\psi):=(\rho\circ\psi)(1)=\rho(\psi(1))=(*)=\rho(1)\psi(1)=\Psi(\rho)\Psi(\psi), $$
where the equality $(*)$ follows as both $\rho$ and $\psi$ are $R$-linear: more explicitly $$\rho(\psi(1))=\rho(1\psi(1))=\rho(1)\psi(1).$$</p>
<p>Let us prove $\lambda\circ\Psi = 1$; for all $\varphi\in \operatorname{End}_R(R)$ we have $\lambda(\Psi(\varphi)):r\mapsto \varphi(1)r=\varphi(r)$, for all $r\in R$ as, by definition, $\varphi$ is $R$-linear. This is equivalent to $\lambda(\Psi(\varphi))=\varphi$, as claimed. </p>
<p>Now, for $\Psi\circ\lambda = 1$, one can write $\Psi(\lambda)=\lambda(1)$, where the r.h.s. is the $R$-linear map $\lambda(1):r\mapsto 1r=r$, for all $r\in R$, i.e. the identity map on $R$. This is what we had to prove.</p>
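As a finite sanity check of the isomorphism (for the illustrative choice $R=\mathbb{Z}/6\mathbb{Z}$, which is not part of the post), one can enumerate all maps $R\to R$ and confirm that the $R$-linear ones are exactly the homotheties $\lambda_a$ with $a=\varphi(1)$, so that $\operatorname{End}_R(R)$ has exactly $|R|$ elements:

```python
from itertools import product

n = 6
R = range(n)

def is_linear(f):
    # R-module endomorphism of R = Z/nZ, with f given as a tuple of values:
    # additivity f(r+s) = f(r) + f(s) and R-linearity f(r*s) = r*f(s), mod n.
    return (all(f[(r + s) % n] == (f[r] + f[s]) % n for r in R for s in R)
            and all(f[(r * s) % n] == (r * f[s]) % n for r in R for s in R))

# Enumerate all n^n functions R -> R and keep the R-linear ones.
endos = [f for f in product(R, repeat=n) if is_linear(f)]

# Every R-linear endomorphism is lambda_a with a = f(1): there are exactly |R|.
assert len(endos) == n
assert all(f == tuple((f[1] * r) % n for r in R) for f in endos)

print("End_R(R) has", len(endos), "elements, matching |R| =", n)
```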
|