Columns: qid (int64, 1 to 4.65M) · question (large_string, lengths 27 to 36.3k) · author (large_string, lengths 3 to 36) · author_id (int64, -1 to 1.16M) · answer (large_string, lengths 18 to 63k)
4,142,540
<p>Let <span class="math-container">$V$</span> be a vector space of dimension <span class="math-container">$n$</span> over <span class="math-container">$\mathbb R$</span> and <span class="math-container">$f,g \in V^* \backslash \{0\}$</span> two linearly independent linear forms. I want to show that <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span>.</p> <p>Since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are linear forms, I know that <span class="math-container">$\dim \ker f = n-1$</span> and <span class="math-container">$\dim \ker g = n-1$</span>. I think I should use the fact that the two forms are linearly independent to show that <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span>, but I don't really see how...</p> <p>I saw a proof using the scalar product, but I would like to see an alternative proof, or maybe an explanation of the scalar product proof.</p>
Boka Peer
304,326
<p>Comments:</p> <p>The following post contains the key part of the proof of your problem. Tsemo has mentioned it in his comment.</p> <p><a href="https://math.stackexchange.com/q/1088623/304326">Proving that linear functionals are linearly independent if and only if their kernels are different</a></p> <p>A bit more explanation/belaboring:</p> <p>From the above post, you conclude that <span class="math-container">$\ker f \cap \ker g$</span> is strictly contained in <span class="math-container">$\ker f$</span> and in <span class="math-container">$\ker g$</span>. Then we have the following:</p> <p><span class="math-container">$$ \dim (\ker f + \ker g) = (n-1) + \big(\dim \ker f - \dim (\ker f \cap \ker g)\big).$$</span> Since <span class="math-container">$\dim \ker f - \dim (\ker f \cap \ker g) &gt; 0$</span>, we have that <span class="math-container">$\dim (\ker f + \ker g)$</span> is strictly greater than <span class="math-container">$n-1$</span>. This shows that <span class="math-container">$\ker f + \ker g = V$</span>, and hence <span class="math-container">$\dim (\ker f \cap \ker g) = (n-1) + (n-1) - n = n-2$</span>. By the way, I assumed that <span class="math-container">$\dim(V) = n \ge 2$</span>.</p>
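A quick numerical sanity check of the claim (my sketch, not part of the proof): stacking $f$ and $g$ as rows of a matrix, the intersection of the kernels is the null space of that matrix, so its dimension is $n$ minus the rank, and independence of the rows forces the rank to be $2$.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a small matrix via exact Gaussian elimination over Q."""
    m = [list(map(Fraction, r)) for r in rows]
    rk, col, nrows, ncols = 0, 0, len(m), len(m[0])
    while rk < nrows and col < ncols:
        piv = next((i for i in range(rk, nrows) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(rk + 1, nrows):
            factor = m[i][col] / m[rk][col]
            m[i] = [a - factor * b for a, b in zip(m[i], m[rk])]
        rk += 1
        col += 1
    return rk

# two linearly independent forms on R^5, written as row vectors (example values mine)
n = 5
f = [1, 2, 0, -1, 3]
g = [0, 1, 1, 4, -2]
assert rank([f]) == 1 and rank([g]) == 1  # each kernel alone has dimension n-1
assert rank([f, g]) == 2                  # independence gives rank 2
print(n - rank([f, g]))                   # dim(ker f ∩ ker g) = n - 2 = 3
```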
1,823,736
<p><a href="http://www.math.drexel.edu/~dmitryk/Teaching/MATH221-SPRING&#39;12/Sample_Exam_solutions.pdf" rel="nofollow">Problem 10c from here</a>.</p> <blockquote> <p>Thirteen people on a softball team show up for a game. Of the $13$ people who show up, $3$ are women. How many ways are there to choose $10$ players to take the field if at least one of these players must be a woman?</p> </blockquote> <p>The given answer is calculated by summing the combination of $1$ woman + $9$ men, $2$ women + $8$ men, and $3$ women + $7$ men.</p> <p>My question is, why can't we set this up as the sum $\binom{3}{1} + \binom{12}{9}$ - picking one of the three women first, then picking $9$ from the remaining $12$ men and women combined? The only requirement is that we have at least one woman, which is satisfied by $\binom{3}{1}$, and that leaves a pool of $12$ from which to pick the remaining $9$. The answer this way is <em>close</em> to the answer given, but it's $62$ short. I get that it's the "wrong" answer but I'm wondering why my thinking was wrong in setting it up this way. Thanks.</p>
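For what it's worth, the gap reported in the question is easy to reproduce (variable names mine):

```python
from math import comb

# correct count: at least one of the 3 women among the 10 chosen
by_cases = sum(comb(3, w) * comb(10, 10 - w) for w in (1, 2, 3))
complement = comb(13, 10) - comb(10, 10)  # all teams minus the one all-men team
assert by_cases == complement == 285

# the setup from the question, taken literally as a sum
flawed = comb(3, 1) + comb(12, 9)
print(by_cases - flawed)  # the reported gap of 62
```

Taken as a product instead, $\binom{3}{1}\binom{12}{9} = 660$ overcounts, since teams with several women are generated more than once.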
Majid
254,604
<p>Well, according to your figure, you need to write the integral as a sum of two integrals.</p> <p>$y = 25 - x^2\rightarrow x=\sqrt{25-y}$ (in the first quadrant).</p> <p>$y = 25 - \frac{25}{3}x\rightarrow x= 3 - \frac{3}{25}y$.</p> <p>$y = 9x - 27\rightarrow x=3+ \frac{1}{9}y$.</p> <p>Now, for $0\leq y\leq 4$, we have $3 - \frac{3}{25}y\leq x\leq 3+ \frac{1}{9}y$, which gives the bounds for the first integral.</p> <p>Next, for $4\leq y\leq 25$, we have $3 - \frac{3}{25}y\leq x\leq \sqrt{25-y}$, which gives the bounds for the second integral.</p> <p>Your answer is the sum of these two integrals. Please note that I had a double integral in mind.</p>
4,312,890
<p>I'm working my way through Grimaldi's textbook, and there's one exercise in the supplementary exercises for Chapter 4 that I don't understand how to approach.</p> <p>Here is the problem: if <span class="math-container">$n \in Z^+$</span>, how many possible values are there for <span class="math-container">$gcd(n,n+3000)$</span>?</p> <p>In case the notation <span class="math-container">$gcd(x,y)$</span> is not universal, it refers to the greatest common divisor of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. I reviewed the teacher's solutions for an explanation of how to solve this problem, but it relies on the fact that <span class="math-container">$gcd(n,n+3000)$</span> is a divisor of <span class="math-container">$3000$</span>, which I don't understand. Any help on either solving the problem or explaining why <span class="math-container">$gcd(n,n+3000)$</span> is a divisor of <span class="math-container">$3000$</span> would be greatly appreciated. Thank you!</p>
MH.Lee
980,971
<p>There are exactly <span class="math-container">$\sigma_0(3000)=32$</span> values.</p> <p>If <span class="math-container">$d$</span> is a divisor of 3000, then <span class="math-container">$d=\text{gcd}(d, d+3000)$</span>, so every divisor occurs. Conversely, by the Euclidean algorithm, <span class="math-container">$\text{gcd}(n, n+3000)=\text{gcd}(n, 3000)$</span> is a divisor of 3000.</p>
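Both directions of the argument can be brute-force checked (my check):

```python
from math import gcd

# every value gcd(n, n+3000) divides 3000, and every divisor of 3000 occurs
values = {gcd(n, n + 3000) for n in range(1, 3001)}
divisors = {d for d in range(1, 3001) if 3000 % d == 0}
assert values == divisors
print(len(values))  # 32, i.e. sigma_0(3000) for 3000 = 2^3 * 3 * 5^3
```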
71,166
<p>This question has been driving me crazy for months now. This comes from work on multiple integrals and convolutions but is phrased in terms of formal power series.</p> <p>We start with a formal power series</p> <p>$P(C) = \sum_{n=0}^\infty a_n C^{n+1}$</p> <p>where $a_n = (-1)^n n!$</p> <p>With these coefficients the formal power series can be expressed as a hypergeometric function</p> <p>$P(C) = C \, _2F_0(1,1;;-C)$</p> <p>I'm then interested in the formal power series $P_T(C)=\frac{P}{1-P}$ as well as, if possible, the series $P^n$ for arbitrary positive integer n (where this is the power series P raised to the nth power).</p> <p>Specifically, if</p> <p>$P_T(C) = \sum_{n=0}^\infty b_n C^{n}$ </p> <p>then I want to construct the function</p> <p>$f(x) = \sum_{n=0}^\infty \frac{b_{n+1}}{(n!)^2} x^{n}$</p> <p>which, from other results, should converge for all x. We can think of this as the doubly-exponential generating function for the $b_n$ sequence.</p> <p>There are rules for multiplying and dividing formal power series (see <a href="http://en.wikipedia.org/wiki/Formal_power_series#Operations_on_formal_power_series" rel="nofollow">here</a>) and I've used these to get a recurrence relation for the coefficients in $P_T(C)$ (as well as $P^n(C)$) but I've been unable to solve these recurrence relations explicitly. They're in a form where each $b_n$ depends on all the previous $b_n$'s and I've not been able to make progress with them.</p> <p>Explicitly, the recurrence relation for the $b_n$ is $b_0 = 1$, $b_n = \sum_{k=1}^n b_{n-k} a_k$ (for n > 1). This looks simple enough but I don't think it has a nice closed-form expression.</p> <p>Nevertheless I do know what the $b_n$ are. They are the sequence <a href="http://oeis.org/A052186" rel="nofollow">A052186</a> (up to plus and minus signs). So</p> <p>$P_T(C) = C+C^3-3 C^4+14 C^5-77 C^6+497 C^7-3676 C^8+\ldots$</p> <p>and</p> <p>$f(x) = 1 + \frac{1}{4}x^2 - \frac{1}{12}x^3 + \frac{7}{288}x^4 - \frac{77}{14400} x^5 + \frac{497}{518400}x^6 +\ldots$</p> <p>The question is: is it possible to figure out the function $f(x)$? Does it have a nice closed form? Perhaps in the form of a hypergeometric function? If it does, great; if it doesn't, then at least I can stop searching for it!</p>
Ralph
10,194
<p>I post this as an answer since it's easier to read this way, but it's really a follow-up to my comment above. </p> <p>The formula I mean is the following: Let $$f(x) = 1 + \sum_{n &gt;0}b_nx^n$$ be a formal power series over a commutative ring with unit. Then $$1/f = 1 + \sum_{n&gt;0}c_nx^n,$$ $$c_n = \sum_{k=1}^n(-1)^k \sum b_{i_1}\cdots b_{i_k}$$ where the inner sum is taken over all tuples $(i_1, \dots, i_k) \in \lbrace 1, \dots, n \rbrace ^k$ such that $i_1 + \dots + i_k = n$. </p> <p>Applied to $1-P$ this yields $1/(1-P) = 1 + \sum_{n&gt;0}c_nx^n$ with $$c_n = (-1)^n\sum_{k=1}^n(-1)^k \sum (i_1-1)! \cdots (i_k-1)!$$ But actually I don't believe that this is very helpful at all. </p>
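Whatever the closed form, the convolution recurrence is at least easy to check numerically. A small script of mine below expands $1/(1-P)$ term by term and reproduces the coefficients quoted in the question; note that $P$ has no constant term, so the coefficient of $C^k$ in $P$ is $a_{k-1}$.

```python
from math import factorial

N = 8
# p[k] = coefficient of C^k in P(C) = sum_{n>=0} (-1)^n n! C^(n+1)
p = [0] + [(-1) ** (k - 1) * factorial(k - 1) for k in range(1, N + 1)]

# b[n] = coefficient of C^n in 1/(1-P), via the standard convolution recurrence
b = [1]
for n in range(1, N + 1):
    b.append(sum(p[k] * b[n - k] for k in range(1, n + 1)))

# P_T = P/(1-P) = 1/(1-P) - 1, so b matches the series quoted in the question
assert b == [1, 1, 0, 1, -3, 14, -77, 497, -3676]
```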
71,166
<p>This question has been driving me crazy for months now. This comes from work on multiple integrals and convolutions but is phrased in terms of formal power series.</p> <p>We start with a formal power series</p> <p>$P(C) = \sum_{n=0}^\infty a_n C^{n+1}$</p> <p>where $a_n = (-1)^n n!$</p> <p>With these coefficients the formal power series can be expressed as a hypergeometric function</p> <p>$P(C) = C \, _2F_0(1,1;;-C)$</p> <p>I'm then interested in the formal power series $P_T(C)=\frac{P}{1-P}$ as well as, if possible, the series $P^n$ for arbitrary positive integer n (where this is the power series P raised to the nth power).</p> <p>Specifically, if</p> <p>$P_T(C) = \sum_{n=0}^\infty b_n C^{n}$ </p> <p>then I want to construct the function</p> <p>$f(x) = \sum_{n=0}^\infty \frac{b_{n+1}}{(n!)^2} x^{n}$</p> <p>which, from other results, should converge for all x. We can think of this as the doubly-exponential generating function for the $b_n$ sequence.</p> <p>There are rules for multiplying and dividing formal power series (see <a href="http://en.wikipedia.org/wiki/Formal_power_series#Operations_on_formal_power_series" rel="nofollow">here</a>) and I've used these to get a recurrence relation for the coefficients in $P_T(C)$ (as well as $P^n(C)$) but I've been unable to solve these recurrence relations explicitly. They're in a form where each $b_n$ depends on all the previous $b_n$'s and I've not been able to make progress with them.</p> <p>Explicitly, the recurrence relation for the $b_n$ is $b_0 = 1$, $b_n = \sum_{k=1}^n b_{n-k} a_k$ (for n > 1). This looks simple enough but I don't think it has a nice closed-form expression.</p> <p>Nevertheless I do know what the $b_n$ are. They are the sequence <a href="http://oeis.org/A052186" rel="nofollow">A052186</a> (up to plus and minus signs). So</p> <p>$P_T(C) = C+C^3-3 C^4+14 C^5-77 C^6+497 C^7-3676 C^8+\ldots$</p> <p>and</p> <p>$f(x) = 1 + \frac{1}{4}x^2 - \frac{1}{12}x^3 + \frac{7}{288}x^4 - \frac{77}{14400} x^5 + \frac{497}{518400}x^6 +\ldots$</p> <p>The question is: is it possible to figure out the function $f(x)$? Does it have a nice closed form? Perhaps in the form of a hypergeometric function? If it does, great; if it doesn't, then at least I can stop searching for it!</p>
StevenJ
4,673
<p>I stumbled across a similar question on MathOverflow <a href="https://mathoverflow.net/questions/45811/use-of-everywhere-divergent-generating-functions">here</a>. It has a lot of insights into $P$, including a simple continued fraction expression for $P$. Details are in a paper <a href="http://www.springerlink.com/content/w5425448n1606161/" rel="nofollow noreferrer">here</a>.</p> <p>They give $P(x) = [1, x, x, 2x, 2x, 3x, 3x, . . . , nx, nx, . . .]$</p> <p>in continued fraction notation.</p> <p>This surely means that</p> <p>$P_T(x)+1 = \frac{1}{1-P(x)}= [1, 1, x, x, 2x, 2x, 3x, 3x, . . . , nx, nx, . . .]$</p> <p>Perhaps written like this it is simple enough to have a nice solution. Maybe?</p>
2,263,230
<p>Let's say I wanted to express sqrt(4i) in a + bi form. A cursory glance at WolframAlpha tells me it has not just a solution of 2e^(i*Pi/4), which I found, but also 2e^(i*(-3Pi/4))</p> <p>Why do roots of unity exist, and why do they exist in this case? How could I find the second solution? </p>
Community
-1
<p>$\mathbf{i}$ is a root of unity. Thus any root of $\mathbf{i}$ is a root of unity.</p> <p>How do you <em>usually</em> find the other square root of a number given one of its square roots? Same thing applies here.</p> <p>Alternatively, recall that the exponential has period $2 \pi \mathbf{i}$. If you wrote</p> <p>$$ \mathbf{i} = e^{\mathbf{i} \pi / 2} $$</p> <p>it's helpful to remember that you also have</p> <p>$$ \mathbf{i} = e^{2 \pi \mathbf{i} n + \mathbf{i} \pi / 2} $$</p> <p>for every integer $n$.</p>
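The $2\pi n$ periodicity mentioned above is all you need to land on both square roots numerically (my illustration):

```python
import cmath

z = 4j
# both square roots come from halving arg(z) + 2*pi*n for n = 0, 1;
# the modulus 2 is sqrt(|z|) = sqrt(4)
roots = [2 * cmath.exp(1j * (cmath.phase(z) + 2 * cmath.pi * n) / 2) for n in (0, 1)]
for r in roots:
    assert abs(r * r - z) < 1e-12
# n = 0 gives 2e^{i pi/4} = sqrt(2) + sqrt(2) i; n = 1 gives its negative
print(roots[0])
```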
2,263,230
<p>Let's say I wanted to express sqrt(4i) in a + bi form. A cursory glance at WolframAlpha tells me it has not just a solution of 2e^(i*Pi/4), which I found, but also 2e^(i*(-3Pi/4))</p> <p>Why do roots of unity exist, and why do they exist in this case? How could I find the second solution? </p>
John Lou
404,782
<p>Roots of unity are basically the idea of multiple square roots extended to roots of any order. First, we have to understand the complex plane. You might be familiar with the equation $e^{i \pi} = -1$. This is a special case of the more general formula $e^{i \theta} = \cos \theta + i\sin \theta$. Based on the properties of complex numbers and exponents, $e^{i \theta_1}*e^{i \theta_2} = e^{i (\theta_1 + \theta_2)}$. From here, if I were to ask you for the cube root of $1$, we see that there must be a radius of $1$ and $3\theta \equiv 0 \pmod{2 \pi}$. For this equation, there are three solutions: $0, 2 \pi/3, 4 \pi/3$, which lead us to the roots of unity.</p>
2,093,720
<p>$$y\, dy+(2+x^2-y^2)\,dx=0$$</p> <p>I tried to solve this equation by putting it into standard form, but it only became more challenging, so any help would be appreciated. </p>
Robert Israel
8,508
<p>The minimum of $f(a,b,c) = a/b + b/c + c/a$ for $a, b, c &gt; 0$ is attained at $a=b=c$, where $f(a,a,a) = 3$. Since this is greater than $2017/1000$, there are no solutions.</p>
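A numerical spot-check of that minimum (random sampling, evidence rather than proof):

```python
import random

random.seed(0)
f = lambda a, b, c: a / b + b / c + c / a
# AM-GM gives f >= 3 because the three terms multiply to 1; equality at a = b = c
assert f(1.0, 1.0, 1.0) == 3.0
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    assert f(a, b, c) >= 3.0 - 1e-9
```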
3,553,681
<p>Consider the identity <span class="math-container">$$\frac{e^{-x}}{1-x}=\sum_{n=0}^{\infty}c_nx^n$$</span></p> <p>Show that for each <span class="math-container">$n\ge0$</span> <span class="math-container">$$\sum_{k=0}^{n}\frac{c_k}{(n-k)!}=1$$</span></p> <p>My attempt: By the Cauchy product, </p> <p><span class="math-container">$$c_k=\sum_{i=0}^{k}\frac{(-1)^i}{i!}$$</span></p> <p>Then <span class="math-container">$$\sum_{k=0}^{n}\frac{c_k}{(n-k)!}=\sum_{k=0}^{n}\sum_{i=0}^{k}\frac{(-1)^i}{(n-k)!i!}$$</span> <span class="math-container">$$=\sum_{i=0}^{n}\sum_{k=i}^{n} \frac{(-1)^i}{(n-k)!i!}$$</span> <span class="math-container">$$=\sum_{i=0}^{n}\frac{(-1)^i}{i!}\sum_{k=i}^{n}\binom nk\frac{k!}{n!}$$</span></p> <p>I got stuck here. So I tried to solve it by taking <span class="math-container">$c_k=\frac{f^{(k)}(0)}{k!}$</span> instead, but I couldn't finish that way either. Could you please give me a hint? It would help me a lot. Thanks!</p>
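Before hunting for the algebraic rearrangement, the identity can be confirmed with exact arithmetic (my check, not part of the exercise):

```python
from fractions import Fraction
from math import factorial

# c_k from the Cauchy product of e^{-x} and 1/(1-x)
def c(k):
    return sum(Fraction((-1) ** i, factorial(i)) for i in range(k + 1))

# the claimed identity: sum_{k=0}^n c_k / (n-k)! = 1 for every n >= 0
for n in range(12):
    assert sum(c(k) / factorial(n - k) for k in range(n + 1)) == 1
```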
Community
-1
<p>May I offer some comments designed to complement the answer of Jean-Claude Collette. Let <span class="math-container">$(H,\langle \cdot,\cdot\rangle)$</span> be a Hilbert space and <span class="math-container">$V\subset H$</span> a subspace. An orthogonal projector <span class="math-container">$\Pi:H\to V$</span> will satisfy for all <span class="math-container">$v\in H$</span> <span class="math-container">$$\langle \Pi v-v,w \rangle = 0 \quad\forall\,w\in V.$$</span> As Jean-Claude Collette showed, your projector <span class="math-container">$P$</span> satisfies <span class="math-container">$$ Pf = \frac{1}{\pi}\cos(2x)\int_{-\pi}^{\pi}\cos(2y)f(y)dy+\frac{1}{\pi}\sin(2x)\int_{-\pi}^{\pi}\sin(2y)f(y)dy+\frac{1}{2\pi}\int_{-\pi}^{\pi}f(y)dy.$$</span> From here it is fairly straightforward that <span class="math-container">\begin{align} \int_{-\pi}^\pi Pf(x)\,dx = &amp; \, \int_{-\pi}^{\pi}f(y)\,dy,\\ \int_{-\pi}^\pi \sin (2x) Pf(x)\,dx = &amp; \, \int_{-\pi}^{\pi}\sin (2y)f(y)\,dy,\\ \int_{-\pi}^\pi \cos (2x)Pf(x)\,dx = &amp; \, \int_{-\pi}^{\pi}\cos (2y)f(y)\,dy.\end{align}</span> So if we endow the space <span class="math-container">$L^2[-\pi,\pi]$</span> with the usual inner product <span class="math-container">$\langle u,v \rangle:=\int_{-\pi}^{\pi}uv\,dx$</span> and define the space <span class="math-container">$V:=\textrm{span}\{1,\sin 2x,\cos 2x \}$</span> then your orthogonal projector <span class="math-container">$P:L^2[-\pi,\pi]\to V$</span> satisfies <span class="math-container">$$\langle P f-f,w \rangle = 0 \quad\forall\,w\in V.\tag{1}\label{1}$$</span> I would note that if <span class="math-container">$P$</span> was defined as the solution to \eqref{1}, then existence and uniqueness would follow from the Riesz Representation Theorem. From here your problem is easy. We have shown that <span class="math-container">$Pf-f$</span> is orthogonal to <span class="math-container">$V$</span>, and linearity and idempotency follow trivially. 
(For the latter show that <span class="math-container">$Pf=f$</span> for all <span class="math-container">$f\in V$</span>).</p>
85,374
<p>I'm currently using <code>WolframLibraryData::Message</code> to generate messages from a library function, like this:</p> <pre><code>Needs[&quot;CCompilerDriver`&quot;] src = &quot; #include \&quot;WolframLibrary.h\&quot; DLLEXPORT mint WolframLibrary_getVersion() {return WolframLibraryVersion;} DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {return 0;} DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {} DLLEXPORT int myFunc(WolframLibraryData libData, mint argc, MArgument* args, MArgument result) { libData-&gt;Message(\&quot;Here's my message\&quot;); MArgument_setReal(result,1.1); return LIBRARY_NO_ERROR; } &quot;; lib = CreateLibrary[src, &quot;mylib&quot;]; myFunc = LibraryFunctionLoad[lib, &quot;myFunc&quot;, {}, Real]; </code></pre> <p>Now the problem can be seen if <code>myFunc[]</code> is called. I get this result:</p> <blockquote> <p>LibraryFunction::Here's my message: -- Message text not found -- &gt;&gt;</p> <p>1.1</p> </blockquote> <p>The problem is this <code>-- Message text not found --</code> part. Apparently I'm generating the message in a wrong way. So how should I do it instead? How do I fill in this &quot;message text&quot; to make it look like messages from normal Mathematica functions?</p>
Szabolcs
12
<p>The simple way is what <a href="https://mathematica.stackexchange.com/a/85393/12">Simon described in his answer</a>.</p> <p>A more flexible way is described under <a href="http://reference.wolfram.com/language/LibraryLink/tutorial/LibraryStructure.html#55120353" rel="nofollow noreferrer">Callback Evaluations in the LibraryLink User's Guide</a>.</p> <p>Note that in the version 10.0-10.2 documentation there's an error: <code>getWSTP</code> should be <code>getWSLINK</code>. You can also use the old (v9) M-prefix function names instead of the WS-prefix ones. For completeness, I'll reproduce the piece of code from there:</p> <pre class="lang-c prettyprint-override"><code>char *msg; // this should contain your message text int pkt; MLINK link = libData-&gt;getMathLink(libData); MLPutFunction(link, "EvaluatePacket", 1); MLPutFunction(link, "Message", 2); MLPutFunction(link, "MessageName", 2); MLPutSymbol(link, "MyFunction"); MLPutString(link, "info"); MLPutString(link, msg); libData-&gt;processMathLink(link); pkt = MLNextPacket(link); if (pkt == RETURNPKT) MLNewPacket(link); </code></pre> <p>This will simply evaluate <code>Message[MyFunction::info, msg]</code>. On the Mathematica side you'll want to define something like</p> <pre><code>MyFunction::info = "Message from LibraryLink: ``" </code></pre> <p>You need to <code>#include "MathLink.h"</code> for this to work. This header is located somewhere under <code>$InstallationDirectory/SystemFiles/Links/MathLink</code>.</p> <hr> <p><strong>Update:</strong> I ran into problems with this when <a href="http://reference.wolfram.com/language/LibraryLink/ref/callback/AbortQ.html" rel="nofollow noreferrer">aborting</a> functions. I don't quite understand what is happening on the MathLink connection during an abort, so for now I simply use <code>if (libData-&gt;AbortQ()) return;</code> before the above.</p>
24,810
<p>The title says it all. Is there a way to take a poll on Maths Stack Exchange? Is a poll an acceptable question?</p>
Gerry Myerson
8,269
<p>Vote this answer up, if you think a poll is an acceptable question. </p>
3,973,611
<p>Let <span class="math-container">$$F(x)=\int_{-\infty}^x f(t)dt,$$</span> where <span class="math-container">$x\in\mathbb{R}$</span>, <span class="math-container">$f\geq 0$</span> is complicated (it cannot be integrated analytically).</p> <p>Can I use Simpson's rule to approximate this integral, knowing that <span class="math-container">$f(-\infty)=0$</span>?</p>
Math Lover
801,574
<p>I would simplify the expression a bit by combining the set with no capital letters and the set with exactly one capital letter, since there is no intersection between them.</p> <p>If <span class="math-container">$A$</span> is the set of passwords with at most one capital letter, <span class="math-container">$B$</span> is the set of passwords with no small letters and <span class="math-container">$C$</span> the set with no digits, then</p> <p><span class="math-container">$|A| = 36^8 + 8 \times 26 \times 36^7$</span></p> <p><span class="math-container">$|B|= 36^8$</span></p> <p><span class="math-container">$|C| = 52^8$</span></p> <p><span class="math-container">$|A \cap B| = 10^8 + 8 \times 26 \times 10^7$</span></p> <p><span class="math-container">$|B \cap C| = 26^8$</span></p> <p><span class="math-container">$|A \cap C| = 26^8 + 8 \times 26 \times 26^7$</span></p> <p><span class="math-container">$|A \cap B \cap C| = 0$</span></p> <p><span class="math-container">$|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |B \cap C| - |A \cap C| + |A \cap B \cap C|$</span></p> <p>And the answer will be <span class="math-container">$ \, \, 62^8 - |A \cup B \cup C|$</span></p>
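The inclusion-exclusion above can be cross-checked by brute force at a shorter password length, where enumeration over character classes is feasible. The sketch below (function and variable names mine) redoes the count for length-4 passwords both ways:

```python
from itertools import product

L, U, D = 26, 26, 10          # lowercase, uppercase, digit alphabet sizes
size = {"l": L, "u": U, "d": D}

def formula(n):
    """62^n minus |A ∪ B ∪ C| as in the answer, with length 8 replaced by n."""
    A = 36 ** n + n * U * 36 ** (n - 1)   # at most one capital
    B = 36 ** n                           # no lowercase
    C = 52 ** n                           # no digits
    AB = 10 ** n + n * U * 10 ** (n - 1)
    BC = 26 ** n
    AC = 26 ** n + n * U * 26 ** (n - 1)
    return 62 ** n - (A + B + C - AB - BC - AC)

def brute(n):
    """Count length-n passwords with >= 2 capitals, >= 1 lowercase, >= 1 digit."""
    total = 0
    for classes in product("lud", repeat=n):
        if classes.count("u") >= 2 and "l" in classes and "d" in classes:
            weight = 1
            for ch in classes:
                weight *= size[ch]
            total += weight
    return total

assert formula(4) == brute(4)
```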
2,950,813
<blockquote> <p>Take <span class="math-container">$G$</span> to be a group of order <span class="math-container">$600$</span>. Prove that for any element <span class="math-container">$a$</span> <span class="math-container">$\in$</span> G there exists an element <span class="math-container">$b$</span> <span class="math-container">$\in$</span> G such that <span class="math-container">$a = b^7$</span>. </p> </blockquote> <p>My thought process: since <span class="math-container">$a = b^7$</span> <span class="math-container">$\implies$</span> <span class="math-container">$|a| = |b^7|$</span>. Consequently <span class="math-container">$\operatorname{lcm}(1,|a|) = \dfrac{1}{7}\operatorname{lcm}(7,|b|)$</span> implies <span class="math-container">$7|a| = 7|b|$</span> so <span class="math-container">$|a| = |b|$</span>. I don't know where I would go from here or if this is even the right approach. </p>
AHusain
277,089
<p>Take the cyclic subgroup generated by <span class="math-container">$a$</span>. It has some order <span class="math-container">$k$</span>, so <span class="math-container">$a^{1+kn}=a$</span> for all <span class="math-container">$n$</span>. Let <span class="math-container">$b=a^l$</span>.</p> <p><span class="math-container">$$ b^7=a^{1+kn}\\ a^{7l}=a^{1+kn}\\ $$</span></p> <p><span class="math-container">$7l \equiv 1 \pmod k$</span></p> <p>Suppose <span class="math-container">$k=2$</span>; then <span class="math-container">$l=1$</span> works and <span class="math-container">$b=a$</span> satisfies <span class="math-container">$b^7=b=a$</span>.</p> <p>Suppose <span class="math-container">$k=20$</span>; then <span class="math-container">$l=3$</span> works and <span class="math-container">$b=a^3$</span> satisfies <span class="math-container">$b^7=a^{21}=a$</span>.</p> <p>What are the possible <span class="math-container">$k$</span>, and what can you then say about the solvability of <span class="math-container">$7l \equiv 1$</span>?</p>
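The possible $k$ are the divisors of $600$, and since $\gcd(7,600)=1$, $7$ is invertible modulo every such $k$. A short loop makes this concrete (my sketch; `pow(7, -1, k)` computes the modular inverse in Python 3.8+):

```python
from math import gcd

divisors = [k for k in range(1, 601) if 600 % k == 0]
for k in divisors:
    assert gcd(7, k) == 1   # 7 shares no factor with 600 = 2^3 * 3 * 5^2
    l = pow(7, -1, k)       # modular inverse, so 7*l ≡ 1 (mod k)
    assert 7 * l % k == 1 % k
```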
1,291,050
<p>I have been working on this problem: show that $\nabla \times (\varphi\nabla\varphi)=0$.</p> <p>I am just having trouble applying the product rule; the result I get is below.</p> <p>$$i(( \frac {d}{dy} )(\varphi \frac {d}{dz} \varphi) - ((\frac {d}{dz})(\varphi \frac {d}{dy} \varphi)) )$$</p> <p>If I take the first part </p> <p>$$(\varphi \frac {d}{dz} \varphi)$$</p> <p>and use the product rule, I get the following</p> <p>$$\frac {d}{dx}(uv)= ((\varphi \frac {d}{dz} \varphi) + ((\frac {d^2}{d^2z} \varphi^2)))$$</p> <p>This doesn't seem right; can someone help by going through how to apply the product rule to this? Thank you.</p>
Censi LI
223,481
<p>I guess the base field you are concerned with is $\mathbb R$? Then by the hypothesis, we can write $A=H+S$, where $H$ is a positive definite symmetric matrix and $S$ is skew-symmetric. Since $H$ is positive definite, there is an invertible matrix $P$ such that $P^THP=I$; note that $P^TSP$ is still skew-symmetric, so there is an orthogonal matrix $O$ such that $O^TP^TSPO$ is of canonical form. Then to compare $\det(A)$ with $\det(H)$ is equivalent to comparing $\det(O^TP^TAPO)$ with $\det(O^TP^THPO)$ (note that $O^TP^TAPO=O^TP^THPO+O^TP^TSPO=I+O^TP^TSPO$, and $O^TP^TSPO$ is of canonical form).</p>
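In the canonical form, $\det(I+S')=\prod_j(1+s_j^2)\ge 1$, so one expects $\det A \ge \det H$. A random numerical spot-check of that comparison (my sketch, not a proof):

```python
import random

random.seed(1)

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for _ in range(1000):
    M = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    # H = M^T M + I is symmetric positive definite
    H = [[sum(M[k][i] * M[k][j] for k in range(3)) + (i == j) for j in range(3)]
         for i in range(3)]
    s = [random.uniform(-1, 1) for _ in range(3)]
    S = [[0, s[0], s[1]], [-s[0], 0, s[2]], [-s[1], -s[2], 0]]  # skew-symmetric
    A = [[H[i][j] + S[i][j] for j in range(3)] for i in range(3)]
    assert det3(A) >= det3(H) - 1e-9
```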
302,179
<p>The question I am working on is:</p> <blockquote> <p>"Use a direct proof to show that every odd integer is the difference of two squares."</p> </blockquote> <p>Proof:</p> <p>Let n be an odd integer: $n = 2k + 1$, where $k \in Z$</p> <p>Let the difference of two different squares be $a^2-b^2$, where $a,b \in Z$.</p> <p>Hence, $n=2k+1=a^2-b^2$...</p> <p>As you can see, this is a dead end. Appealing to the answer key, I found that they let the difference of two different squares be $(k+1)^2-k^2$. I understand their use of $k$ -- $k$ is one number, and $k+1$ is a different number -- however, why did they choose to add $1$? Why couldn't we have added $2$?</p>
Herng Yi
34,473
<p>Note that $a^2 - b^2 = (a + b)(a - b)$. Solve the simultaneous equations $a + b = n$ and $a - b = 1$.</p> <p>This is where you got $(k + 1)^2 - k^2$ from - $(a + b)(a - b)$ then matches to $((k + 1) + k)((k + 1) - k)$.</p>
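The resulting identity is easy to confirm over a range of odd integers (my check):

```python
# every odd n = 2k+1 equals (k+1)^2 - k^2; equivalently, solving
# a + b = n, a - b = 1 gives a = (n+1)/2 and b = (n-1)/2
for n in range(1, 1001, 2):
    a, b = (n + 1) // 2, (n - 1) // 2
    assert a * a - b * b == n
```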
198,116
<p>A finite <a href="http://en.wikipedia.org/wiki/Lattice_(order)" rel="nofollow noreferrer">lattice</a> is planar if it admits a <a href="http://en.wikipedia.org/wiki/Hasse_diagram" rel="nofollow noreferrer">Hasse diagram</a> which is a <a href="http://en.wikipedia.org/wiki/Planar_graph" rel="nofollow noreferrer">planar graph</a> (we want the Hasse diagram to be represented in the plane so that the <span class="math-container">$y$</span>-coordinate of each element respects the order).</p> <p><em>Remark</em>: See the paper <a href="http://www.sciencedirect.com/science/article/pii/0095895676900241" rel="nofollow noreferrer">Planar lattices and planar graphs</a> (1976) by C. R. Platt :<br /> It is shown that a finite lattice is planar if and only if the (undirected) graph obtained from its (Hasse) diagram by adding an edge between its least and greatest elements is a planar graph.</p> <p>The <span class="math-container">$B_3$</span> lattice (below) is not planar.<br /> <img src="https://i.stack.imgur.com/Nxdxm.png" alt="enter image description here" /></p> <p><em>Question</em>: Is a finite <a href="http://en.wikipedia.org/wiki/Distributive_lattice" rel="nofollow noreferrer">distributive lattice</a> planar iff it admits no sublattice isomorphic to <span class="math-container">$B_3$</span>?</p>
Richard Stanley
2,807
<p>A stronger result is due to R. Wille. See for instance page 3 of <a href="http://www.math.uh.edu/~hjm/1973_Lattice/p00512-p00518.pdf">http://www.math.uh.edu/~hjm/1973_Lattice/p00512-p00518.pdf</a>.</p>
198,116
<p>A finite <a href="http://en.wikipedia.org/wiki/Lattice_(order)" rel="nofollow noreferrer">lattice</a> is planar if it admits a <a href="http://en.wikipedia.org/wiki/Hasse_diagram" rel="nofollow noreferrer">Hasse diagram</a> which is a <a href="http://en.wikipedia.org/wiki/Planar_graph" rel="nofollow noreferrer">planar graph</a> (we want the Hasse diagram to be represented in the plane so that the <span class="math-container">$y$</span>-coordinate of each element respects the order).</p> <p><em>Remark</em>: See the paper <a href="http://www.sciencedirect.com/science/article/pii/0095895676900241" rel="nofollow noreferrer">Planar lattices and planar graphs</a> (1976) by C. R. Platt :<br /> It is shown that a finite lattice is planar if and only if the (undirected) graph obtained from its (Hasse) diagram by adding an edge between its least and greatest elements is a planar graph.</p> <p>The <span class="math-container">$B_3$</span> lattice (below) is not planar.<br /> <img src="https://i.stack.imgur.com/Nxdxm.png" alt="enter image description here" /></p> <p><em>Question</em>: Is a finite <a href="http://en.wikipedia.org/wiki/Distributive_lattice" rel="nofollow noreferrer">distributive lattice</a> planar iff it admits no sublattice isomorphic to <span class="math-container">$B_3$</span>?</p>
David Eppstein
440
<p>It's already been answered positively, but here's another argument that shows something a little stronger: every finite distributive lattice either contains a B3 (and is not planar) or it can be drawn as a planar grid graph.</p> <p>By Birkhoff's representation theorem, every finite distributive lattice is isomorphic to the lattice of lower sets of a finite partially ordered set. If this partial order has width three or more, (that is, if it has an antichain of three elements), then they generate a B3 and the lattice is not planar. And if the partial order has width two, then by Dilworth's theorem it can be decomposed into two chains. This decomposition can be used to embed the lattice as a grid graph, by setting the two coordinates of each lower set to be the numbers of elements it has in each of the two chains (and then rotating the whole thing by 45 degrees to make it an upward drawing).</p>
3,853,509
<blockquote> <p>Prove that <span class="math-container">$$\sum_{cyc}\frac{a^2}{a+2b^2}\ge 1$$</span> holds for all positive reals <span class="math-container">$a,b,c$</span> when <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span> or <span class="math-container">$ab+bc+ca=3$</span></p> </blockquote> <hr /> <p><strong>Background</strong> Taking <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span>: this was left as an exercise to the reader in the book 'Secrets in Inequalities', under the section on the 'Cauchy reverse technique', i.e. the sum is rewritten as <span class="math-container">$$\sum_{cyc}\frac{a^2}{a+2b^2}=\sum_{cyc}\left(a- \frac{2ab^2}{a+2b^2}\right)\ge \sum_{cyc}a-\frac{2}{3}\sum_{cyc}{(ab)}^{2/3},$$</span> which is true by AM-GM (<span class="math-container">$a+b^2+b^2\ge 3{(a b^4)}^{1/3}$</span>).</p> <p>By the QM-AM inequality <span class="math-container">$$\sum_{cyc}a\ge \frac{{ \left(\sum \sqrt{a} \right)}^2}{3}=3.$$</span></p> <p>So we are left to prove that <span class="math-container">$$\sum_{cyc}{(ab)}^{2/3}\le 3,$$</span> but I am not able to prove this. Even the case <span class="math-container">$ab+bc+ca=3$</span> seems difficult to me.</p> <p>Please note I am looking for a solution using this Cauchy reverse technique and AM-GM only.</p>
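As numerical evidence only (not the requested AM-GM proof): sampling points on the constraint $\sqrt{a}+\sqrt{b}+\sqrt{c}=3$ supports the remaining claim $\sum_{cyc}(ab)^{2/3}\le 3$, with the maximum apparently at $a=b=c=1$.

```python
import random

random.seed(0)
for _ in range(20000):
    u, v, w = (random.uniform(1e-6, 1) for _ in range(3))
    s = 3 / (u + v + w)               # rescale so sqrt(a)+sqrt(b)+sqrt(c) = 3
    a, b, c = (s * u) ** 2, (s * v) ** 2, (s * w) ** 2
    total = (a * b) ** (2 / 3) + (b * c) ** (2 / 3) + (c * a) ** (2 / 3)
    assert total <= 3 + 1e-9
```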
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>Rewrite <span class="math-container">$$\ f(x) = \frac{1}{2+3x^2}=\frac 12 \times\frac 1 {1+\frac 32 x^2}$$</span> Now, let <span class="math-container">$t=\frac 32 x^2$</span>.</p> <p>Expand in terms of <span class="math-container">$t$</span> and, when done, replace <span class="math-container">$t$</span> and simplify.</p>
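Following the hint (my own check): with $t=\frac32 x^2$, the geometric series gives coefficient $\frac12\left(-\frac32\right)^n$ on $x^{2n}$, which can be verified by multiplying the series back by $2+3x^2$.

```python
from fractions import Fraction

N = 10
# series for 1/(2+3x^2): coefficient of x^(2n) is (1/2) * (-3/2)^n
coeff = [Fraction(1, 2) * Fraction(-3, 2) ** n for n in range(N)]

# multiply back by 2 + 3x^2: the product should be 1 + O(x^(2N))
prod = [2 * coeff[n] + (3 * coeff[n - 1] if n > 0 else 0) for n in range(N)]
assert prod == [1] + [0] * (N - 1)
```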
3,540,613
<p>The integral to solve:</p> <p><span class="math-container">$$ \int{5^{\sin(x)}\cos(x)dx} $$</span></p> <p>I tried a long computation using integration by parts, but I could not finish:</p> <p><span class="math-container">$$ \int{5^{\sin(x)}\cos(x)dx} = \cos(x)\frac{5^{\sin(x)}}{\ln(5)}+\frac{1}{\ln(5)}\Bigg[ \frac{5^{\sin(x)}}{\ln(5)}-\frac{1}{\ln(5)}\int{5^{\sin(x)}\cos(x)dx} \Bigg] $$</span></p>
José Carlos Santos
446,262
<p>If you do <span class="math-container">$\sin x=u$</span> and <span class="math-container">$\cos x\,\mathrm dx=\mathrm du$</span>, your integral becomes<span class="math-container">$$\int 5^u\,\mathrm du.$$</span></p>
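So the antiderivative is $5^{\sin x}/\ln 5 + C$, which a finite-difference check confirms (my sketch):

```python
import math

F = lambda x: 5 ** math.sin(x) / math.log(5)   # antiderivative from u = sin x
f = lambda x: 5 ** math.sin(x) * math.cos(x)   # original integrand

# central differences of F should reproduce f at several sample points
h = 1e-6
for x in [-2.0, -0.5, 0.0, 0.7, 1.3, 2.9]:
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
```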
302
<p>I know that the ratios of consecutive Fibonacci numbers converge to .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?</p>
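The $0.618$ figure is the limit of the ratio of consecutive Fibonacci numbers, $1/\varphi = (\sqrt5-1)/2$; a few lines of code show the convergence:

```python
import math

# ratio F_n / F_(n+1) of consecutive Fibonacci numbers
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b
ratio = a / b
assert abs(ratio - (math.sqrt(5) - 1) / 2) < 1e-12
print(round(ratio, 3))  # 0.618
```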
John D. Cook
136
<p>The golden ratio was used extensively in ancient art, but the man named Fibonacci (Leonardo of Pisa) lived around 1200 AD. It's possible that the Fibonacci series was known before Fibonacci but I'm not aware of this. So I think it's safe to assume the golden ratio is older.</p>
302
<p>I know that the Fibonacci numbers converge to a ratio of .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?</p>
Mensen
774
<p>The answer for either of these is "hundreds of millions of years" due to their emergence/use in biological development programs, the self-assembly of symmetrical viral capsids (the adenovirus for example), and maybe even protein structure. Because of their close relationship I'd be hard pressed to say which 'came first'. </p> <p>If you google for it, you'll find plenty of books and papers. However, be extremely careful about examples without a well-explained functional role... there are an arbitrarily large number of coincidences out there if you're looking for them, and humans excel at numerology. </p>
302
<p>I know that the Fibonacci numbers converge to a ratio of .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?</p>
David Eppstein
440
<p>The golden ratio in mathematics dates back to the Pythagoreans, circa 500 BC, it's true. But the Fibonacci numbers also have a long heritage going back to Pingala in India circa 200 BC.</p> <p>However, the mystical claims about the golden ratio and Fibonacci numbers going back hundreds of millions of years in biology and showing up in every piece of ancient art and architecture seem to date back only to Pacioli in the 16th century AD.</p>
1,853,846
<p>Prove that the equation <span class="math-container">$$x^2 - x + 1 = p(x+y)$$</span> has integral solutions for infinitely many primes <span class="math-container">$p$</span>.</p> <p>First, we prove that there is a solution for at least one prime <span class="math-container">$p$</span>. Now, <span class="math-container">$x(x-1) + 1$</span> is always odd, so there is no solution for <span class="math-container">$p=2$</span>. We prove there is a solution for <span class="math-container">$p=3$</span>: if <span class="math-container">$p=3$</span>, then <span class="math-container">$y = (x-2)^2/3-1$</span>, and we get integral solutions whenever <span class="math-container">$x = 3m +2$</span>, where <span class="math-container">$m$</span> is any integer.</p> <p>We provide a proof by contradiction similar to Euclid's proof that there are infinitely many primes. Let us assume that the equation is solvable for only finitely many primes, and name the largest such prime <span class="math-container">$P$</span>. We set <span class="math-container">$$x = 2\cdot3\cdot5\dots P,$$</span> the product of all primes up to <span class="math-container">$P$</span>. Then the term <span class="math-container">$x(x-1) + 1$</span> is either prime or composite. If it is prime, then we set <span class="math-container">$p = x(x-1) + 1, y = 1 - x$</span> and get a solution. If it is composite, we write <span class="math-container">$x(x-1) + 1 = p\times q$</span>, where <span class="math-container">$p$</span> is any prime factor of <span class="math-container">$x(x-1)+1$</span> and <span class="math-container">$q = (x(x-1)+1)/p$</span> is an integer.
Now, <span class="math-container">$x(x-1) + 1$</span> is not divisible by any prime up to <span class="math-container">$P$</span>, since it leaves a remainder of <span class="math-container">$1$</span> with all of them. So <span class="math-container">$p &gt; P$</span>, and we set <span class="math-container">$y=q-x$</span> for a solution.</p> <p>In either case, we get a solution for a prime <span class="math-container">$p &gt; P$</span>, which means there is no largest prime for which this equation has solutions. This contradicts the assumption that there are solutions for only finitely many primes.</p> <p>I feel like I'm missing some step. Is this correct?</p>
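The key step can be spot-checked with, say, $P=7$ (so $x = 2\cdot3\cdot5\cdot7 = 210$): every prime factor of $x(x-1)+1$ exceeds $P$.

```python
from sympy import factorint, primerange

P = 7
x = 1
for p in primerange(2, P + 1):
    x *= p                   # x = 2*3*5*7 = 210

n = x*(x - 1) + 1            # n = 1 (mod p) for every prime p <= P
print(n, factorint(n))       # all prime factors are > 7
```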
Faibbus
307,662
<p>Or you can use the tangent half-angle formulas:</p>

<p>$\cos(\theta) = \frac{1 - \tan^2(\frac{\theta}{2})}{1 + \tan^2(\frac{\theta}{2})} $</p>

<p>$\sin(\theta) = \frac{2 \tan(\frac{\theta}{2})}{1 + \tan^2(\frac{\theta}{2})}$</p>

<p>So as to get:</p>

<p>$ 8 \frac{2 \tan(\frac{\theta}{2})}{1 + \tan^2(\frac{\theta}{2})} = 4 + \frac{1 - \tan^2(\frac{\theta}{2})}{1 + \tan^2(\frac{\theta}{2})} $</p>

<p>$ 3 \tan^2(\frac{\theta}{2}) - 16 \tan(\frac{\theta}{2}) + 5 = 0$</p>

<p>Which you can solve for $\tan(\frac{\theta}{2})$ and then use to compute $\sin(\theta)$.</p>

<p>Edit: I forgot to mention that you can do this only because $\cos(\frac{\theta}{2}) \neq 0$: indeed $\cos(\frac{\theta}{2}) = 0 \Leftrightarrow \theta = \pi \ [2\pi]$, which forces $\sin(\theta) = 0$ and $\cos(\theta) = -1$, and these do not satisfy $8\sin(\theta) = 4 + \cos(\theta)$.</p>
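Checking the two roots of $3t^2-16t+5=0$ numerically (they are $t=\frac13$ and $t=5$, giving $\sin\theta=\frac35$ and $\sin\theta=\frac{5}{13}$):

```python
import math

# roots of 3 t^2 - 16 t + 5 = 0, where t = tan(theta/2)
disc = 16**2 - 4*3*5                       # 196 = 14^2
roots = [(16 - math.sqrt(disc)) / 6, (16 + math.sqrt(disc)) / 6]

sines = []
for t in roots:
    s = 2*t / (1 + t*t)                    # sin(theta)
    c = (1 - t*t) / (1 + t*t)              # cos(theta)
    assert abs(8*s - (4 + c)) < 1e-12      # the original equation holds
    sines.append(s)

print(roots, sines)   # roots 1/3 and 5; sines 3/5 and 5/13
```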
161,678
<p>Assume a process with Itô dynamics of the generic form $$dX_t=\mu(t,X_t)dt+\sigma(t,X_t)dW_t$$</p>

<p>and let $f:\mathbb{R}\to\mathbb{R}$ be Borel-measurable. Is the following function smooth? $$g(t,x)=\mathbb{E}[f(X_T)|\mathcal{F}_t]$$</p>

<p>I remember coming across the proof of the above once, but I cannot find it any longer.</p>
Thomas
46,773
<p>I assume here the $x$ variable is the initial condition of your process? Here is a partial answer: if $(\mathcal F_t)$ is the Brownian filtration, then by Itô's martingale representation theorem, for any $x$ there is a predictable process $(H_t^x)$ such that $g(t,x)=\mathbb E(f(X_T))+ \int_0^t H^x_s dW_s$. So in that case there is at least continuity in $t$.</p>
153,923
<p>My question is: Solve $\sqrt{x^2 +2x + 1}-\sqrt{x^2-4x+4}=3$</p> <p>I deduced that:$LHS= x+1-(x-2)$</p> <p>I am unable to solve this equation. I would like to get some hints to solve it.</p>
Gigili
181,853
<p>$$\sqrt {x^2 +2x + 1}-\sqrt { x^2-4x+4}= \sqrt{(x+1)^2} - \sqrt{(x-2)^2}=|x+1|-|x-2|$$</p> <p>You have to consider three cases:</p> <ul> <li>$x \geq 2$</li> <li>$-1&lt;x&lt;2$</li> <li>$x \leq -1$</li> </ul>
153,923
<p>My question is: Solve $\sqrt{x^2 +2x + 1}-\sqrt{x^2-4x+4}=3$</p> <p>I deduced that:$LHS= x+1-(x-2)$</p> <p>I am unable to solve this equation. I would like to get some hints to solve it.</p>
Madrit Zhaku
34,867
<p>$\sqrt {x^2 +2x + 1}-\sqrt { x^2-4x+4}= \sqrt{(x+1)^2} - \sqrt{(x-2)^2}=|x+1|-|x-2|=3$</p>

<p>$|x+1|-|x-2|=3$</p>

<p>1) $x\in(-\infty, -1)$ $\Rightarrow$ $|x+1|=-(x+1)=-x-1$, $|x-2|=-(x-2)=2-x$.</p>

<p>$|x+1|-|x-2|=3$ $\Rightarrow$ $-x-1-2+x=3$ $\Rightarrow$ $-3=3$, a contradiction. On this interval the equation has no solution.</p>

<p>2) $x\in[-1, 2)$ $\Rightarrow$ $|x+1|=x+1$, $|x-2|=-(x-2)=2-x$.</p>

<p>$|x+1|-|x-2|=3$ $\Rightarrow$ $x+1-2+x=3$ $\Rightarrow$ $2x=4$ $\Rightarrow$ $x=2$, but $2\notin [-1, 2)$, so on this interval the equation also has no solution.</p>

<p>3) $x\in[2, \infty)$ $\Rightarrow$ $|x+1|=x+1$, $|x-2|=x-2$.</p>

<p>$|x+1|-|x-2|=3$ $\Rightarrow$ $x+1-x+2=3$ $\Rightarrow$ $3=3$.</p>

<p>Every $x$ in this interval satisfies the equation, so there are infinitely many solutions: the solution set is $[2, \infty)$.</p>
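A small numeric spot-check of the case analysis (every $x\ge 2$, including the boundary point $x=2$, is a solution; nothing below $2$ is):

```python
def g(x):
    # |x+1| - |x-2|, the simplified left-hand side
    return abs(x + 1) - abs(x - 2)

solutions = [2, 2.5, 3, 10, 1000]        # all of [2, oo) should work
non_solutions = [-5, -1, 0, 1, 1.999]    # nothing below 2 should
print([g(x) for x in solutions])
print([g(x) for x in non_solutions])
```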
3,997,632
<p>Use the Chain Rule to prove the following.<br /> (a) The derivative of an even function is an odd function.<br /> (b) The derivative of an odd function is an even function.</p> <p><strong>My attempt:</strong></p> <p>I can easily prove these using the definition of a derivative, but I'm having trouble showing them using the chain rule.</p> <p>(a) <span class="math-container">$f(x)$</span> is even <span class="math-container">$ \therefore f(-x) = f(x)$</span>.<br /> We need to show that <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>Let <span class="math-container">$u = -x$</span>.<br /> My reasoning for the next step is that if we want to find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span> from <span class="math-container">$f(u)$</span> we need to use the chain rule and first find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span> and multiply that with the derivative of <span class="math-container">$u$</span> with respect to <span class="math-container">$x$</span>.<br /> <span class="math-container">$f'(x) = f'(u) \cdot u' = - f'(u) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(x) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>The problem with this is that I never used the fact that <span class="math-container">$f$</span> is even and I feel like there is a mistake in my approach, most likely where I explained my reasoning. I just can't see it. Can you please point out the mistake and point me to the right direction?</p> <p>Thanks!</p>
Gibbs
498,844
<p>Actually, you are using the assumption that <span class="math-container">$f$</span> is even.</p> <p>If <span class="math-container">$f$</span> is even, then as you say <span class="math-container">$f(-x) = f(x)$</span>. By differentiating, the left hand side is <span class="math-container">$-f'(-x)$</span> by the chain rule, and the right hand side is simply <span class="math-container">$f'(x)$</span>. Thus <span class="math-container">$-f'(-x) = f'(x)$</span>, or <span class="math-container">$f'(-x) = -f'(x)$</span>, meaning that <span class="math-container">$f'$</span> is odd. You can try with <span class="math-container">$f$</span> odd now.</p>
3,997,632
<p>Use the Chain Rule to prove the following.<br /> (a) The derivative of an even function is an odd function.<br /> (b) The derivative of an odd function is an even function.</p> <p><strong>My attempt:</strong></p> <p>I can easily prove these using the definition of a derivative, but I'm having trouble showing them using the chain rule.</p> <p>(a) <span class="math-container">$f(x)$</span> is even <span class="math-container">$ \therefore f(-x) = f(x)$</span>.<br /> We need to show that <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>Let <span class="math-container">$u = -x$</span>.<br /> My reasoning for the next step is that if we want to find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span> from <span class="math-container">$f(u)$</span> we need to use the chain rule and first find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span> and multiply that with the derivative of <span class="math-container">$u$</span> with respect to <span class="math-container">$x$</span>.<br /> <span class="math-container">$f'(x) = f'(u) \cdot u' = - f'(u) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(x) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>The problem with this is that I never used the fact that <span class="math-container">$f$</span> is even and I feel like there is a mistake in my approach, most likely where I explained my reasoning. I just can't see it. Can you please point out the mistake and point me to the right direction?</p> <p>Thanks!</p>
J.G.
56,861
<p>Since <span class="math-container">$(-x)^\prime=-1$</span>, we can concisely prove both parts viz.<span class="math-container">$$f(-x)=\pm f(x)\implies f^\prime(-x)=-1\cdot\pm f^\prime(x)=\mp f^\prime(x).$$</span></p>
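A SymPy illustration with one even and one odd example (the choices $\cos x$ and $\sin^3 x$ are just convenient test functions):

```python
import sympy as sp

x = sp.symbols('x')

checks = []
for f, parity in [(sp.cos(x), 'even'), (sp.sin(x)**3, 'odd')]:
    df = sp.diff(f, x)
    if parity == 'even':
        # derivative should be odd: f'(-x) + f'(x) == 0
        checks.append(sp.simplify(df.subs(x, -x) + df))
    else:
        # derivative should be even: f'(-x) - f'(x) == 0
        checks.append(sp.simplify(df.subs(x, -x) - df))

print(checks)  # [0, 0]
```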
3,335,060
<blockquote> <p>The number of possible continuous <span class="math-container">$f(x)$</span> defined on <span class="math-container">$[0,1]$</span> for which <span class="math-container">$I_1=\int_0^1 f(x)dx = 1,~I_2=\int_0^1 xf(x)dx = a,~I_3=\int_0^1 x^2f(x)dx = a^2 $</span> is</p> <p><span class="math-container">$(\text{A})~~1~~~(\text{B})~~2~~(\text{C})~~\infty~~(\text{D})~~0$</span></p> <p>I have tried the following. Applying ILATE (the rule of thumb for choosing parts in integration by parts), nothing useful comes up, only further complications like the primitive of the primitive of <span class="math-container">$f(x)$</span>, with no use of the given information either. Using the rule <span class="math-container">$$ \int_a^b g(x)dx = \int_a^b g(a+b-x)dx,$$</span> I combined it with the three constraints to get <span class="math-container">$$ \int_0^1 x^2f(1-x)dx = (a-1)^2 \\ \text{or} \int_0^1 x^2[f(1-x)+f(x)]dx = (a-1)^2 +a^2. \\$$</span> Then I did the following: if <span class="math-container">$f(x) + f(1-x)$</span> is constant, solve with the constraints to find possible solutions. Basically I was looking for solutions where the function also satisfies the rule that <span class="math-container">$f(x) + f(1-x)$</span> is constant. Solving with the other constraints, I obtained that <span class="math-container">$f(x)$</span> follows all four constraints only if the constant <span class="math-container">$[= f(x) + f(1-x)]$</span> is <span class="math-container">$2$</span>, and <span class="math-container">$a$</span> is <span class="math-container">$\frac{\sqrt3\pm1}{2}$</span>.</p> </blockquote>
Community
-1
<p><span class="math-container">$$1+3+\frac92+\frac92+\frac{27}8+\frac{81}{40}+\frac{81}{80}+\frac{243}{560}+\frac{729}{4480}\\ =13+3.375+2.025+1.0125+0.433928\cdots+0.162723\cdots\approx 20.009152$$</span></p>

<p>isn't so difficult. Only the last two terms require a "true" division. </p>
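The listed fractions are exactly the reduced forms of $3^n/n!$ for $n=0,\dots,8$ (e.g. $\frac{81}{80}=\frac{729}{720}$ and $\frac{243}{560}=\frac{2187}{5040}$), so the sum appears to be a partial sum for $e^3\approx 20.0855$; a quick exact recomputation:

```python
from fractions import Fraction
from math import factorial

terms = [Fraction(3**n, factorial(n)) for n in range(9)]
total = sum(terms)
print([str(t) for t in terms])
print(float(total))   # ~20.009152, a truncation of e^3 ~ 20.0855
```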
18,444
<p>I am a student, in my last year of school (17 years old).</p> <p>When I was about 13 years old I fell into the <a href="https://artofproblemsolving.com/news/articles/avoid-the-calculus-trap" rel="noreferrer">calculus trap</a> by starting off learning trigonometry on my own, when I was supposed to be factoring equations or solving basic probability questions. Being good at math (fast arithmetic skills, could grasp new concepts easily, etc.), I came across an interschool math competition at that time which was purely based on geometry. I studied by myself for that competition, mainly the properties of circles and triangles, and, as I mentioned earlier, was introduced to trigonometry. Even though I didn't score well in that competition, I was mesmerized studying those new topics, which led me to discover more and more, until I started learning calculus just one year later.</p> <p>I soon realized that I had fallen into a large hole with no end; though going deeper and deeper was becoming more interesting, it started affecting my performance in other subjects. I also realized that instead of starting with calculus, I could have solved much harder problems related to my school curriculum and hence could have scored better in Olympiads.</p> <p>Unfortunately I am still in that hole, going deeper and deeper, studying more abstract, higher-level concepts. But I will be finishing school next year and will get admission to a college where I aim to study higher maths, and everything will become normal.</p> <p>My main question is: should I encourage my juniors to do the same thing which I did? Or should I guide them to study maths in a more systematic way?</p> <p>The <a href="https://artofproblemsolving.com/news/articles/avoid-the-calculus-trap" rel="noreferrer">link</a> that I provided is from Art of Problem Solving, and I found it a bit harsh towards my situation, so I decided to ask here for guidance.
I also read <a href="https://matheducators.stackexchange.com/questions/7718/effects-of-early-study-of-advanced-books">this</a> question, but that is not exactly my case: I was thorough in my study, and also took help from my teachers if I did not understand anything.</p>
Aviral Sood
14,177
<p>[VERY LONG ANSWER, needs patience to read through]</p> <p>I feel this is a problem many students who are good at maths face. They understand the simple tricks and patterns which are present in the school syllabus and so it is simple for them and after some practise and memorisation they are done. Then they seek out more maths and find out about topics like trigonometry and calculus. </p> <p>One thing which I feel is a big factor causing the drive and a part of the problem which many people ignore is the 'need to be a genius'. Growing up we all hear stories about Einstein and his legendary E=mc^2 formula which he dreamt up because his brain was just so big. In mathematics we have role models like Euler, Gauss and Ramanujan who seemingly picked out amazing results from thin air. These stories make up our perception about scientists who are introverted geniuses which seem to know everything except how to talk to people and conform to normality.</p> <p>So students such as myself begin to rebel against the system and find out more advanced maths on their own. However, this is much more difficult than just following the school curriculum. Even if the topic is within your intellectual capacity, having no one to explain it to you is very discouraging and you give up easily. Since you are talking about studying high level mathematics in school you definitely would have experienced finding out about some topic or the other in which you have no idea how to even process the proofs and theorems related to it after a certain basic point. This is very frustrating because you have an image of being good at maths in your mind and you cannot meet it if you don't immediately understand your textbook or whatever you are studying. 
</p> <p>This then becomes a trap: you don't study topics which you don't immediately understand or have good intuition for, and so you keep going down and down into deep rabbit holes where you go into one sub-topic after another without pausing at any level to expand your knowledge to related topics and building a firm base before you go on into more specialization.</p> <p>This is why there is a need to tell students to be firmly confident in their own stupidity. They don't get things immediately and make like 10 mistakes while solving a question, but they can explore and work harder until they are proficient in that topic. Not only their own stupidity, they should know that everyone else is also stupid. Even Euler, Gauss and Ramanujan were stupid, in the sense that each one of them must have struggled with some topic or the other, and they must have felt frustrated and incompetent many times because of that.</p> <p>The best way I have found to overcome this inferiority complex is to let students make something original of their own. If you know about the process of making an entirely new discovery (new to them, maybe not to the world) without always relying on thought patterns and tricks which are just programmed into you after solving many school and Olympiad level problems (which do not test your mathematical ability accurately), you learn to appreciate many things. You realise how random and arbitrary making progress in a problem is. You can be stuck for days on a single lemma but come up with a one liner that completely resolves it while having a bath. It is also extremely non-linear, which means you can take long detours without coming close to the correct method. 
However, you also realise that progress in a small or big form will always come if you try hard for long enough (and take enough breaks to reset).</p> <p>When you realise that every one of those geniuses went through this same random, frustrating but highly satisfying process every time they solved a hard problem, the illusion of being smart only if you are lightning fast in solving and understanding is quickly broken. The only thing you need to make an independent, worthwhile discovery is lots of studying and lots of thinking. That is how maths in the real world is done. You may never get even close to a level like that of Euler, but it is unreasonable to hold such an expectation.</p> <p>So I think you should encourage your juniors to explore maths: not just topics which are at a higher level, but to truly <em>explore</em> and wonder about things and try to find out more about them. Learning a new topic should be an interest rather than a habit or a drive. It is psychologically very unhealthy and dangerous to have such an unfounded fear of failure and lack of confidence in your abilities. Expanding your horizons above school, competitions and Olympiads helps with that. </p> <p>As an example, when I proved the formula for the sum of a geometric series in 9th class months before it was taught, I felt much more proud and confident in my abilities than when I solved much harder Olympiad type problems which tested not my independent thinking but my ability to remember and apply obscure formulas and patterns. It also probably helped me much more in developing mathematical thinking than those problems.</p> <p>As for school, unfortunately, it is a necessary part of life and you have to devote some time to study school maths and other topics according to what marks you want. 
It is not all bad: if you look close enough, there are many things to be explored in school mathematics.</p> <p>Finally, when talking about studying other subjects, you should have practicality in mind and know about the consequences of your actions when you avoid studying for Olympiads or school. If you are okay with sacrificing that to fulfill your interest in maths, only then should you do that, otherwise you should search for a compromise.</p>
838,400
<p>A question asks whether $\mathbb{Z}^*_{21}$ is cyclic.</p>

<p>I know that a cyclic group must have a generator which can generate all of the elements of the group.</p>

<p>But does this kind of question require me to exhaustively search for a generator? Or is there a more efficient method to quickly determine whether a group is cyclic?</p>
rogerl
27,542
<p>If an abelian group has elements of order $m$ and $n$, then it also has an element of order $lcm(m,n)$, so that is another potential way to cut down on the work you have to do. So, for example, if you were to find an element of order $4$ and one of order $3$ in your group, you would know that there would have to be an element of order $12$, so it would be cyclic.</p> <p>It turns out that there is an explicit characterization of $\mathbb{Z}_n^{\times}$ that depends on the factorization of $n$; in your case this becomes $\mathbb{Z}_{21}^{\times} \cong \mathbb{Z}_3^{\times} \times \mathbb{Z}_7^{\times}$, so once you know this theorem, showing whether this group is cyclic or not becomes pretty easy.</p>
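For this particular group, the element orders can be tabulated directly: the maximal order in $\mathbb{Z}_{21}^\times$ turns out to be $6$, not $\varphi(21)=12$, so no generator exists and the group is not cyclic.

```python
from math import gcd

units = [a for a in range(1, 21) if gcd(a, 21) == 1]   # the 12 units mod 21

def order(a, n=21):
    # multiplicative order of a modulo n (a must be a unit)
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

orders = {a: order(a) for a in units}
print(orders)
print(max(orders.values()))  # 6, strictly less than 12, so not cyclic
```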
838,400
<p>A question asks whether $\mathbb{Z}^*_{21}$ is cyclic.</p>

<p>I know that a cyclic group must have a generator which can generate all of the elements of the group.</p>

<p>But does this kind of question require me to exhaustively search for a generator? Or is there a more efficient method to quickly determine whether a group is cyclic?</p>
KCd
619
<p>In practice, to show $(\mathbf Z/n\mathbf Z)^\times$ is not cyclic you can look for a "fake" square root of $1$, i.e., a solution to $a^2 \equiv 1 \bmod n$ with $a \not\equiv \pm 1 \bmod n$. Then there are at least two subgroups of order $2$, so this group is not cyclic. </p>
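For $n=21$, a brute-force search finds such "fake" square roots of $1$ immediately:

```python
# solutions of a^2 = 1 (mod 21) other than a = 1 and a = 20 (= -1 mod 21)
fake_roots = [a for a in range(2, 20) if a*a % 21 == 1]
print(fake_roots)  # [8, 13]
```

So $\{1, 8, 13, 20\}$ all square to $1$, giving more than one subgroup of order $2$; hence $(\mathbf Z/21\mathbf Z)^\times$ is not cyclic.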
2,620,032
<p>Find the derivative of $y=(\tan (x))^{\log (x)}$</p> <p>I thought of using the power rule that: $$\dfrac {d}{dx} u^n = n.u^{n-1}.\dfrac {du}{dx}$$ Realizing that the exponent $log(x)$ is not constant, I could not use that. </p>
Usermath
496,395
<p>Note that $$\log(y) = \log(x).\log(\tan(x))~.$$ Now, apply chain rule on both sides.</p>
2,620,032
<p>Find the derivative of $y=(\tan (x))^{\log (x)}$</p> <p>I thought of using the power rule that: $$\dfrac {d}{dx} u^n = n.u^{n-1}.\dfrac {du}{dx}$$ Realizing that the exponent $log(x)$ is not constant, I could not use that. </p>
Tsemo Aristide
280,301
<p>$(\tan(x))^{\ln(x)}=\exp(\ln(x)\ln(\tan(x)))$; applying the formula for the derivative of a composition:</p>

<p>$f'(x)=(\ln(x)\ln(\tan(x)))'\exp(\ln(x)\ln(\tan(x)))$,</p>

<p>where $(\ln(x)\ln(\tan(x)))'={1\over x}\ln(\tan(x))+\ln(x){1\over {\tan(x)}}{1\over{\cos^2(x)}}$.</p>
2,620,032
<p>Find the derivative of $y=(\tan (x))^{\log (x)}$</p> <p>I thought of using the power rule that: $$\dfrac {d}{dx} u^n = n.u^{n-1}.\dfrac {du}{dx}$$ Realizing that the exponent $log(x)$ is not constant, I could not use that. </p>
Michael Rozenberg
190,319
<p>$$\left(\left(\tan{x}\right)^{\ln{x}}\right)'=\left(e^{\ln{x}\ln\tan{x}}\right)'=e^{\ln{x}\ln\tan{x}}\left(\ln{x}\ln\tan{x}\right)'=$$ $$=\left(\tan{x}\right)^{\ln{x}}\left(\frac{\ln\tan{x}}{x}+\frac{\ln{x}}{\tan{x}}\cdot\frac{1}{\cos^2x}\right)=\left(\tan{x}\right)^{\ln{x}}\left(\frac{\ln\tan{x}}{x}+\frac{2\ln{x}}{\sin2x}\right).$$</p>
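A SymPy check of this closed form, evaluated numerically at one point to confirm it agrees with direct differentiation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.tan(x)**sp.log(x)

# the claimed derivative: tan(x)^ln(x) * (ln(tan x)/x + 2 ln(x)/sin(2x))
claimed = sp.tan(x)**sp.log(x) * (sp.log(sp.tan(x))/x + 2*sp.log(x)/sp.sin(2*x))

residual = sp.N((sp.diff(y, x) - claimed).subs(x, sp.Rational(6, 5)))
print(residual)  # ~ 0
```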
1,803,416
<p>Does the function $d: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ given by: $$d(x,y)= \frac{\lvert x-y\rvert} {1+{\lvert x-y\rvert}}$$ define a metric on $\mathbb{R}^n?$</p> <p>How do you go about proving this? Do I need to just show that it satisfies the three conditions to be a metric? If so how do I show them?</p>
Mohammad W. Alomari
45,105
<p>The first conditions (non-negativity with $d(x,y)=0\iff x=y$, and symmetry) hold trivially; to prove the third condition (the triangle inequality), consider the function $f(t)=\frac{t}{1+t}$, $t\ge 0$, and check the monotonicity of $f$.</p>
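Not a substitute for the monotonicity argument, but a brute-force numeric check (in one dimension, for brevity) that the triangle inequality indeed holds for this $d$:

```python
import itertools
import random

def d(x, y):
    r = abs(x - y)
    return r / (1 + r)

random.seed(1)
pts = [random.uniform(-10, 10) for _ in range(30)]
violations = [
    (x, y, z)
    for x, y, z in itertools.product(pts, repeat=3)
    if d(x, z) > d(x, y) + d(y, z) + 1e-12
]
print(len(violations))  # 0
```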
127,225
<p>I got stuck solving the following problem:</p> <pre><code>Table[Table[ Table[ g1Size = x; g2Size = y; vals = FindInstance[(a1 - a2) - (b1 - b2) == z &amp;&amp; a1 + b1 == g1Size &amp;&amp; a2 + b2 == g2Size &amp;&amp; a1 + a2 == g1Size &amp;&amp; b1 + b2 == g2Size &amp;&amp; a1 &gt; 0 &amp;&amp; a2 &gt; 0 &amp;&amp; b1 &gt; 0 &amp;&amp; b2 &gt; 0, {a1, a2, b1, b2}, Integers, 3]; aa1 = a1 /. vals; aa2 = a2 /. vals; bb1 = b1 /. vals; bb2 = b2 /. vals; {g1Size, g2Size, z, Flatten@{aa1, aa2, bb1, bb2}} , {z, 0, 10}], {x, 1, 10}], {y, 1, 10}] </code></pre> <p>I want to loop through different values of g1Size, g2Size and z and find the first solution to the system of equations. As soon as a solution for a combination of g1Size, g2Size and z was found, I want to extract the values for a1,a2,b1,b2 and continue with the next loop. In other words, only print the values when vals is not empty, and then stop the z-loop and switch to the next values of x and y.</p> <p>But my output is like this:</p> <pre><code>{{{{1, 1, 0, {a1, a2, b1, b2}}, {1, 1, 1, {a1, a2, b1, b2}}, {1, 1, 2, {a1, a2, b1, b2}}, {1, 1, 3, {a1, a2, b1, b2}}, {1, 1, 4, {a1, a2, b1, b2}}, {1, 1, 5, {a1, a2, b1, b2}}, {1, 1, 6, {a1, a2, b1, b2}}, {1, 1, 7, {a1, a2, b1, b2}}, {1, 1, 8, {a1, a2, b1, b2}}, {1, 1, 9, {a1, a2, b1, b2}}, {1, 1, 10, {a1, a2, b1, b2}}} </code></pre> <p>printing the unevaluated symbol names a1, a2, b1, b2 when no solution was found.</p> <p>My Mathematica coding is a bit rusty and this code seems far from elegant. And I hope it is clear what I mean :).</p>
jkuczm
14,303
<h2>Algorithm</h2> <p>Since the bottleneck is calculating <a href="https://en.wikipedia.org/wiki/Elementary_symmetric_polynomial" rel="nofollow noreferrer">elementary symmetric polynomial</a>, let's search for an efficient algorithm to do it. In <a href="https://math.stackexchange.com/a/1265218/374469">answer to "Algorithm(s) for computing an elementary symmetric polynomial"</a> question, Ben Kuhn shows simple recurrence relation between elementary symmetric polynomials: \begin{align} \forall_{n\in\mathbb N} \quad &amp; s^0_n(x_1, ..., x_n) = 1 \\ \forall_{k,n\in\mathbb N;k&gt;n} \quad &amp; s^k_n(x_1, ..., x_n) = 0 \\ \forall_{k,n\in\mathbb Z_+} \quad &amp; s^k_n(x_1, ..., x_n) = s^k_{n-1}(x_1, ..., x_{n-1}) + x_n s^{k-1}_{n-1}(x_1, ..., x_{n-1}) \end{align} which relates $k$-th polynomial of $n$ variables with $k$-th and $(k-1)$-th polynomial of $n-1$ variables. Using this recurrence we can create simple iterative algorithm starting with polynomials of one variable, using them to calculate polynomials of two variables and so on.</p> <p>For example if we want to calculate $k=3$ polynomial of $n=7$ variables we need to calculate polynomials with following $n$ and $k$:</p> <p>\begin{array}{l|cccc} n &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 &amp; 7 \\ \hline k=0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ k=1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ k=2 &amp; &amp; 2 &amp; 2 &amp; 2 &amp; 2 &amp; 2 \\ k=3 &amp; &amp; &amp; 3 &amp; 3 &amp; 3 &amp; 3 &amp; 3 \\ \end{array}</p> <p>for $k=5$ we need</p> <p>\begin{array}{l|cccc} n &amp; 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 &amp; 7 \\ \hline k=0 &amp; 0 &amp; 0 \\ k=1 &amp; 1 &amp; 1 &amp; 1 \\ k=2 &amp; &amp; 2 &amp; 2 &amp; 2 \\ k=3 &amp; &amp; &amp; 3 &amp; 3 &amp; 3 \\ k=4 &amp; &amp; &amp; &amp; 4 &amp; 4 &amp; 4 \\ k=5 &amp; &amp; &amp; &amp; &amp; 5 &amp; 5 &amp; 5 \\ \end{array}</p> <p>In general we need a list of at most <code>Min[k, n - k] + 1</code> elements to store already calculated values. 
Iteration, with <code>i</code> denoting number of variables from <code>1</code> to <code>n</code>, can be split to three stages:</p> <ol> <li>for <code>i &lt;= n-k</code> we increase number of required polynomials,</li> <li>for <code>n-k+1&lt;= i &lt;=k</code> we need exactly <code>Min[k, n - k] + 1</code> polynomials,</li> <li>for <code>Max[k,n-k]+1&lt;=i&lt;=n</code> number of required polynomials decreases.</li> </ol> <hr> <h2>Implementation</h2> <p>Simple procedural implementation is as follows:</p> <pre><code>symPol // ClearAll symPol[0, vars_List] = 1; symPol[1, vars_List] := Total@vars symPol[k_Integer?Positive, vars_List] /; k &gt; Length@vars = 0; symPol[k_Integer?Positive, vars_List] /; k === Length@vars := Times @@ vars symPol[k_Integer?Positive, vars_List] := Module[{n, l, pols, prev, i}, n = Length@vars; l = Max[k, n - k]; pols = ConstantArray[0, Min[k, n - k] + 1]; pols[[1]] = 1; pols[[2]] = vars // First; Do[ pols[[2 ;;]] += vars[[i]] pols[[;; -2]] , {i, 2, n - k} ]; Do[ prev = pols[[2 ;;]]; pols *= vars[[i]]; pols[[;; -2]] += prev , {i, n - k + 1, k} ]; Do[ prev = pols[[2 ;; l - i]]; pols[[;; l - 1 - i]] *= vars[[i]]; pols[[;; l - 1 - i]] += prev , {i, l + 1, n} ]; pols // First ] </code></pre> <p><code>symPol</code> gives the same results as built-in <code>SymmetricPolynomial</code>:</p> <pre><code>Table[ With[{vars = Table[Unique[], n]}, Expand@symPol[k, vars] == SymmetricPolynomial[k, vars] // Simplify ], {n, 0, 10}, {k, 0, n} ]; And @@ Flatten@% (* True *) </code></pre> <p>but it's much faster and uses much less memory:</p> <pre><code>SymmetricPolynomial[12, Range[25]] // MaxMemoryUsed // Timing symPol[12, Range[25]] // MaxMemoryUsed // Timing (* {6.748, 915253280} *) (* {0., 7984} *) </code></pre> <hr> <p>For actual size matrix from OP:</p> <pre><code>SeedRandom[1211] matsize = 100; subsize = 30; mat = RandomInteger[{-10, 10}, {matsize, matsize}]; </code></pre> <p>we get:</p> <pre><code>(result = Ordering[symPol[subsize, #] &amp; /@ 
Transpose[mat], -1]) // MaxMemoryUsed // RepeatedTiming (* {0.093, 180552} *) result (* {70} *) </code></pre> <p>in 0.093 seconds using 180552 bytes.</p> <hr> <h2>Compilation</h2> <p>Procedural nature of <code>symPol</code> function makes it easily compilable. If machine precision is enough, then we can compile function taking array of reals:</p> <pre><code>symPolR = Compile[{{k, _Integer}, {vars, _Real, 1}}, Module[{n, l, len, pols, prev, i, j}, If[k == 0, Return[1.]]; n = Length@vars; If[k == n, Return[Times @@ vars]]; If[k &gt; n, Return[0.]]; len = Min[k, n - k] + 1; l = Max[k, n - k]; pols = Table[0., {len}]; pols[[1]] = 1.; pols[[2]] = vars // First; Do[ pols[[-j]] += vars[[i]] pols[[-j - 1]], {i, 2, n - k}, {j, len - 1} ]; Do[ Do[ pols[[j]] = pols[[j + 1]] + vars[[i]] pols[[j]], {j, len - 1} ]; pols[[len]] *= vars[[i]] , {i, n - k + 1, k} ]; Do[ pols[[j]] = pols[[j + 1]] + vars[[i]] pols[[j]], {i, l + 1, n}, {j, len + l - i} ]; pols // First ], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True, RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "WVM" ] </code></pre> <p>With <code>"WVM"</code> version we get ten times faster function:</p> <pre><code>(result = Ordering[symPolR[subsize, Transpose@mat], -1]) // MaxMemoryUsed // RepeatedTiming (* {0.009, 172424} *) result (* {70} *) </code></pre> <p>With <code>CompilationTarget -&gt; "C"</code> we get another factor of ten:</p> <pre><code>(result = Ordering[symPolR[subsize, Transpose@mat], -1]) // MaxMemoryUsed // RepeatedTiming (* {0.0007, 164728} *) result (* {70} *) </code></pre>
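For comparison outside Mathematica, the core recurrence $s^k_n = s^k_{n-1} + x_n\,s^{k-1}_{n-1}$ is a short dynamic program in any language. Here is a Python sketch (the plain $O(nk)$ version, without the windowed $O(\min(k,n-k))$ memory optimization used above), cross-checked against brute force:

```python
from itertools import combinations
from math import prod

def sym_pol(k, xs):
    # e[j] = elementary symmetric polynomial s^j of the variables seen so far
    e = [1] + [0] * k
    for x in xs:
        for j in range(k, 0, -1):   # descend so e[j-1] is still the previous row
            e[j] += x * e[j - 1]
    return e[k]

xs = list(range(1, 8))
brute = sum(prod(c) for c in combinations(xs, 3))
print(sym_pol(3, xs), brute)   # both 1960
```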
341,823
<p>Let <span class="math-container">$E\subset B_1(0)\subset \mathbb{R}^n$</span> be a compact set s.t. <span class="math-container">$\lambda(E)=0$</span>, where <span class="math-container">$\lambda$</span> is the Lebesgue measure, and <span class="math-container">$B_1(0)$</span> is the Euclidean unit ball centered at the origin. Is the following integral finite:</p> <p><span class="math-container">$$\int_{B_1(0)}-\log d(x,E)d\lambda(x)&lt;\infty?$$</span></p> <p>Although this question seems trivial, I have failed to find a reference to it or to variations of it in previous discussions. I was not able to come up with a counter-example nor a proof. I also asked in mathstackexchange a variation of it, but didn’t get a sufficient answer.</p> <p>Thanks ahead</p>
Yuval Peres
7,691
<p>The integral in question is finite for most sets of measure zero, but can diverge to <span class="math-container">$\infty$</span> for some sets. An example in one dimension is obtained by constructing a Cantor set where at stage <span class="math-container">$k$</span> the middle <span class="math-container">$1/(k+1)$</span> proportion is removed from each of the <span class="math-container">$2^{k-1}$</span> intervals obtained at stage <span class="math-container">$k-1$</span>. Thus the <span class="math-container">$2^k$</span> intervals obtained at stage <span class="math-container">$k$</span> will each have length <span class="math-container">$2^{-k}/(k+1)$</span>. Therefore, each of the <span class="math-container">$2^k$</span> middle intervals removed in the next stage will have length <span class="math-container">$2^{-k}/[(k+1)(k+2)]$</span>, and each of these will contribute at least <span class="math-container">$k/2$</span> times its length to the integral. Summing over <span class="math-container">$k$</span> gives a harmonic series which diverges. The example can be lifted to higher dimensions by taking a Cartesian product with an <span class="math-container">$(n-1)$</span>-dimensional box.</p>
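Making the final estimate explicit (a sketch of the sum indicated in the answer): the <span class="math-container">$2^{k}$</span> intervals removed after stage <span class="math-container">$k$</span> have total length <span class="math-container">$2^{k}\cdot 2^{-k}/[(k+1)(k+2)]$</span>, so in the one-dimensional example

```latex
\int_{[0,1]} -\log d(x,E)\, d\lambda(x)
\;\ge\; \sum_{k\ge 1} \frac{k}{2}\cdot \frac{2^{k}\cdot 2^{-k}}{(k+1)(k+2)}
\;=\; \sum_{k\ge 1} \frac{k}{2(k+1)(k+2)} \;=\; \infty ,
```

since the summand behaves like $1/(2k)$, a divergent harmonic tail.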
830,111
<p>We have the following set of lines: $$L_1: \frac{x-2}{1}=\frac{y-3}{-2}=\frac{z-1}{-3}$$ $$L_2:\frac{x-3}{1}=\frac{y+4}{3}=\frac{z-2}{-7}$$</p> <p>This leads to the following parametric equations: $$L_1:x=t+2,\space y=-2t+3,\space z=-3t+1$$ $$L_2: x=s+3,\space y=3s-4,\space z=-7s+2$$ The $x$ line looked pretty simple, so I did this: $$t+2=s+3$$$$t=s+1$$</p> <p>Then I simply substituted this into the Y equations $$-2(s+1)+3=3s-4$$ which yields $$s=5,\space t=6$$ That, in turn, gives me: $$x_1=8,x_2=8 \space z_1=-17,z_2=-38$$ forcing me to conclude the lines are skew.</p> <p><em>The lines are not skew</em> - there is an intersection at point $(4,-1,-5)$.</p> <p>Where is the flaw in my analysis?</p> <p>I suspect taking $t=s+1$ is insufficiently bounded to use in this system? If so, how do we prove $x$ equations is sufficient in an $n$-dimensional system of lines to still apply to the system?</p> <p><strong>EDIT:</strong> $5=5s$ actually does not mean that $s=5$. </p>
Ted
15,012
<p>You didn't solve for $s$ and $t$ correctly. $s=5$ doesn't satisfy your equation $-2(s+1)+3 = 3s-4$.</p>
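To spell out the corrected algebra numerically (a quick sketch using the parametrizations from the question): $-2(s+1)+3 = 3s-4$ gives $5 = 5s$, i.e. $s=1$ and $t=2$, which lands both lines on the claimed intersection point.

```python
# L1: (t+2, -2t+3, -3t+1), L2: (s+3, 3s-4, -7s+2); solving the y-equation
# with t = s+1 gives 5 = 5s, so s = 1 (not 5) and t = 2.
s, t = 1, 2
p1 = (t + 2, -2*t + 3, -3*t + 1)
p2 = (s + 3, 3*s - 4, -7*s + 2)
assert p1 == p2 == (4, -1, -5)  # the lines intersect at (4, -1, -5)
```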
404,574
<p>Suppose that:</p> <p>$Y \pmod B = 0$</p> <p>$Y \pmod C = X$</p> <p>I know $B$ and $C$. $Y$ is unknown, it might be an extremely large number, and it does not interest me. </p> <p>The question is: Is it possible to find $X$, and if so, how?</p>
Adriano
76,987
<p>No; more information is needed. To see this, suppose that $B=2$ and $C=5$ and suppose that we know that $Y \bmod 2 = 0$ and we want to figure out $X = Y \bmod 5$. The possibilities for $X$ are not unique and depend on $Y$:</p> <blockquote> <ul> <li>Since $2$ is a factor of $10$, we could have $Y=10$, which yields $X=0$.</li> <li>Since $2$ is a factor of $12$, we could have $Y=12$, which yields $X=2$.</li> <li>Since $2$ is a factor of $14$, we could have $Y=14$, which yields $X=4$.</li> <li>Since $2$ is a factor of $16$, we could have $Y=16$, which yields $X=1$.</li> <li>Since $2$ is a factor of $18$, we could have $Y=18$, which yields $X=3$.</li> </ul> </blockquote>
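A brute-force illustration of the answer's example ($B=2$, $C=5$), confirming that every residue mod $5$ occurs among even $Y$:

```python
# Y ranges over multiples of B = 2; X = Y mod 5 is not determined by that.
residues = sorted({y % 5 for y in range(0, 100, 2)})
assert residues == [0, 1, 2, 3, 4]
```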
1,831,191
<p>I am confused about the following Theorem:</p> <p>Let <span class="math-container">$f: I \to \mathbb{R}^n$</span>, <span class="math-container">$a \in I$</span>. Then the function <span class="math-container">$f$</span> is differentiable at <span class="math-container">$a$</span> if and only if there exists a function <span class="math-container">$\varphi: I \to \mathbb{R}^n$</span> that is continuous in <span class="math-container">$a$</span>, and such that <span class="math-container">$f(x) - f(a) = (x - a)\varphi(x)$</span>, for all <span class="math-container">$x \in I$</span>; furthermore, <span class="math-container">$\varphi(a) = f'(a)$</span>.</p> <p>I understand the proof of this theorem, but something confuses me. Doesn't this theorem state that the derivative of a function in a point is always continuous in that point, since <span class="math-container">$f'(a) = \varphi(a)$</span> is continuous in <span class="math-container">$a$</span>? This would mean that the derivative of a function is always continuous on the domain of the function, but I have encountered counterexamples. I have probably misinterpreted something; any help would be welcome.</p>
Aloizio Macedo
59,234
<p>The point is that $\varphi$ is not $f'$. They just coincide in one point, and it is easy to see that two functions coinciding in one point entails nothing about some relationship of differentiability/continuity etc between one another.</p>
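For reference, a standard counterexample (not from the answer above) showing concretely that $\varphi$ and $f'$ need not agree away from $a$, even though $\varphi(a)=f'(a)$:

```latex
f(x)=\begin{cases}x^{2}\sin(1/x), & x\neq 0,\\ 0, & x=0,\end{cases}
\qquad
\varphi(x)=\frac{f(x)-f(0)}{x-0}=x\sin(1/x)\ (x\neq 0),\quad \varphi(0)=0 .
```

Here $\varphi$ is continuous at $0$ with $\varphi(0)=f'(0)=0$, yet $f'(x)=2x\sin(1/x)-\cos(1/x)$ has no limit as $x\to 0$. The theorem only pins down the value of $\varphi$ at the single point $a$, not the behavior of $f'$ near $a$.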
2,266,573
<p>I am working through some problems about probability and seem to be having trouble working through this one in particular. I'd love some help learning how to go about solving problems such as this.</p> <p>A website estimates that 19% of people have a phobia regarding public speaking. If three students are assigned to a project group, what's the probability...</p> <p>a.) That all 3 students have a fear of public speaking.</p> <p>b.) That none have a fear of public speaking</p> <p>c.) That at least one of the students has a fear of public speaking.</p>
Graham Kemp
135,106
<p>The problem involves the count of 'successes' in a known number of independent trials with equal success rate. &nbsp; There are three people, each with a probability of $0.19$ for possessing phobia.</p> <p>The key point in this problem is to recognise that this count is a Random Variable which follows a <strong>Binomial Distribution</strong>. </p> <p>Familiarity with such will then solve the problem. &nbsp; In particular, you should know the <em>probability mass function</em> (pmf) for a binomial distribution.</p> <p>$$X\sim\mathcal{Bin}(n, p) \quad\iff\quad \mathsf P(X=k) ~=~ \binom n k p^k (1-p)^{n-k}~\mathbf 1_{k\in \{0,..,n\}}$$</p> <p>If you <em>have yet to</em> encounter a binomial distribution, note that this is simply the probability of having a sequence of $k$ successes and $n-k$ failures multiplied by the count of distinct arrangements of that sequence. &nbsp; (Here $n=3$, $p=0.19$, and a success is "having the phobia").</p> <p>$$\mathsf P(X=k) ~=~\begin{cases}0.81^3 &amp;:&amp; k=0 \\ 3\cdot 0.19\cdot 0.81^2 &amp;:&amp; k=1\\3\cdot 0.19^2\cdot 0.81 &amp;:&amp; k=2 \\ 0.19^3 &amp;:&amp; k=3\\ 0 &amp; :&amp; \text{otherwise}\end{cases}$$</p> <p>From there it is simply a matter of determining what probabilities are requested and applying the <em>pmf</em> to evaluate.</p> <hr> <p>PS: The <em>indicator function</em>, $\mathbf 1_{k\in\{0,..,n\}}$, has the value of $1$ when $k$ is an integer between $0$ and $n$, and a value of $0$ otherwise. </p> <p>PPS: $\binom nk$ is the binomial coefficient, also written as ${}^n\mathrm C_k$. &nbsp; It is the count of ways to choose $k$ from a set of $n$ elements (here, people).</p>
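Plugging the numbers in (note that $1-p = 1-0.19 = 0.81$), a short check of the three requested probabilities:

```python
from math import comb

p, n = 0.19, 3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

all_three = pmf[3]       # (a) all three have the fear: 0.19^3
none      = pmf[0]       # (b) none has the fear: 0.81^3
at_least1 = 1 - none     # (c) complement of (b)

assert abs(all_three - 0.19**3) < 1e-12
assert abs(none - 0.81**3) < 1e-12
assert abs(sum(pmf) - 1) < 1e-12   # the pmf sums to 1
```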
1,902,138
<p>It's common to see a plus-minus ($\pm$), for example in describing error $$ t=72 \pm 3 $$ or in the quadratic formula $$ x = \frac{-b \pm \sqrt{b^2-4ac}}{2a} $$ or identities like $$ \sin(A \pm B) = \sin(A) \cos(B) \pm \cos(A) \sin(B) $$</p> <p>I've never seen an analogous version combining multiplication with division, something like $\frac{\times}{\div}$</p> <blockquote> <p>Does this ever come up, and if not why?</p> </blockquote> <p>I suspect it simply isn't as naturally useful as $\pm$. </p>
TheGeekGreek
359,887
<p>The multiplication sign $\cdot$ is usually omitted in abelian groups (or even non-abelian ones). I think the most important point is that something like $\div$ can mislead the reader: if we consider $a\div b$, then $b$ is not on the same level as $a$, but the symbology suggests that it is. This can cause serious errors in calculations, whereas $\frac{a}{b}$ is much clearer, as is $ab^{-1}$. I think this asymmetry of levels is the main reason $\div$ is never used.</p>
97,672
<p>Given that I have a set of equations about varible $x_0,x_1,\cdots,x_n$, which own the following style:</p> <p>$ \left( \begin{array}{cccccccc} \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 \\ \end{array} \right) \left( \begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ \color{red}{x_0} \\ \color{red}{x_1} \\ \color{red}{x_2} \\ \end{array} \right)=\left( \begin{array}{c} (1,1) \\ (2,3) \\ (3,-1) \\ (4,1) \\ (5,0) \\ \end{array} \right) $</p> <p>Obviously, I <strong>cannot</strong> solve this linear system by <code>LinearSolve[]</code>. To solve this equation group, I only used the <code>Solve[]</code>.</p> <pre><code>mat= {{1/6, 2/3, 1/6, 0, 0, 0, 0, 0}, {0, 1/6, 2/3, 1/6, 0, 0, 0, 0}, {0, 0, 1/6, 2/3, 1/6, 0, 0, 0}, {0, 0, 0, 1/6, 2/3, 1/6, 0, 0}, {0, 0, 0, 0, 1/6, 2/3, 1/6, 0}}; eqns = mat.{x0, x1, x2, x3, x4, x0, x1, x2}; </code></pre> <p>$ \begin{pmatrix} \frac{x_0}{6}+\frac{2 x_1}{3}+\frac{x_2}{6}\\ \frac{x_1}{6}+\frac{2 x_2}{3}+\frac{x_3}{6}\\ \frac{x_2}{6}+\frac{2 x_3}{3}+\frac{x_4}{6}\\ \frac{x_0}{6}+\frac{x_3}{6}+\frac{2 x_4}{3}\\ \frac{2 x_0}{3}+\frac{x_1}{6}+\frac{x_4}{6} \end{pmatrix} $</p> <pre><code>yValues = {{1, 1}, {2, 3}, {3, -1}, {4, 1}, {5, 0}}; part1 = {x0, x1, x2, x3, x4} /. Solve[Thread[eqns == yValues[[All, 1]]], {x0, x1, x2, x3, x4}] part2 = {x0, x1, x2, x3, x4} /. 
Solve[Thread[eqns == yValues[[All, 2]]], {x0, x1, x2, x3, x4}] res = Transpose[Join[part1, part2]] </code></pre> <blockquote> <pre><code> {{75/11, -8/11}, {-9/11, 4/11}, {27/11, 58/11}, {3, -38/11}, {39/11, 28/11}} </code></pre> </blockquote> <h3>Question</h3> <p>However, the index $n$ for variables $\{x_0,x_1,\cdots,x_n\}$ is very large ($n=100$) in my work. My solution via <code>Solve[]</code> is very cockamamie. So I would like to know how to deal with this case efficiently with the <em>built-in</em> <code>LinearSolve[]</code>?</p>
J. M.'s persistent exhaustion
50
<p>How to fold a "wide" matrix over to enforce "periodic" conditions:</p> <pre><code>mat = {{1/6, 2/3, 1/6, 0, 0, 0, 0, 0}, {0, 1/6, 2/3, 1/6, 0, 0, 0, 0}, {0, 0, 1/6, 2/3, 1/6, 0, 0, 0}, {0, 0, 0, 1/6, 2/3, 1/6, 0, 0}, {0, 0, 0, 0, 1/6, 2/3, 1/6, 0}}; {m, n} = Dimensions[mat]; LinearSolve[Take[mat, m, m] + PadRight[Take[mat, m, m - n], {m, m}], {{1, 1}, {2, 3}, {3, -1}, {4, 1}, {5, 0}}] </code></pre> <blockquote> <pre><code>{{75/11, -8/11}, {-9/11, 4/11}, {27/11, 58/11}, {3, -38/11}, {39/11, 28/11}} </code></pre> </blockquote>
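For readers without Mathematica, here is the same column-fold in plain Python with exact rationals (an illustrative translation, not part of the answer), solving for the first right-hand-side component only:

```python
from fractions import Fraction as F

# Build the 5x8 banded matrix from the question; column j multiplies the
# j-th entry of (x0, x1, x2, x3, x4, x0, x1, x2), so the last three
# columns "wrap around" onto the first three.
row = [F(1, 6), F(2, 3), F(1, 6)]
mat = [[F(0)] * 8 for _ in range(5)]
for i in range(5):
    for j, v in enumerate(row):
        mat[i][i + j] = v

A = [r[:5] for r in mat]
for i in range(5):
    for j in range(3):
        A[i][j] += mat[i][5 + j]      # fold the wrapped columns back

b = [F(1), F(2), F(3), F(4), F(5)]    # first components of the right-hand side

# Gauss-Jordan elimination over the rationals.
n = 5
M = [A[i] + [b[i]] for i in range(n)]
for c in range(n):
    piv = next(r for r in range(c, n) if M[r][c] != 0)
    M[c], M[piv] = M[piv], M[c]
    for r in range(n):
        if r != c and M[r][c] != 0:
            f = M[r][c] / M[c][c]
            M[r] = [a - f * e for a, e in zip(M[r], M[c])]
x = [M[i][5] / M[i][i] for i in range(n)]

assert x == [F(75, 11), F(-9, 11), F(27, 11), F(3), F(39, 11)]
```

The fold `A[i][j] += mat[i][5 + j]` is exactly the `Take ... + PadRight[Take ...]` trick from the answer.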
2,278,798
<p>The converse statement, "A metric space on which every continuous, real valued function is bounded is compact" is dealt with on this site, as it is in Greene and Gamelin's monograph, "Introduction to Topology", where a hint to its proof is offered. I see no discussion of the direct statement in my title. Is it true? If so, can someone outline a proof?</p>
Tsemo Aristide
280,301
<p>Yes, it is bounded. Suppose it is not: then for every integer $n$ there exists $x_n$ such that $|f(x_n)|\geq n$. Since the domain is compact, you can extract a subsequence $x_{n_i}$ which converges to some $x$; since $f$ is continuous, this implies that $f(x_{n_i})$ converges to $f(x)$. That is a contradiction, since $|f(x_{n_i})|\geq n_i\to\infty$ while a convergent sequence is bounded.</p>
1,890,047
<p>Consider two linear transformations $L_1, L_2: V \to W$.</p> <p>Fix a basis of $V$, $W$, and consider $M_1$, $M_2$, the matrices of the aforementioned transformations w.r.t said basis.</p> <p>Suppose you can obtain $M_2$ from swapping columns in $M_1$.</p> <p>How are $L_1$ and $L_2$ related? (Besides having the same image)</p>
user115350
334,306
<p>Because the transformation matrix is determined by the chosen bases of $V$ and $W$, and column $j$ of $M_1$ records the coordinates of $L_1(v_j)$ in the basis of $W$, swapping columns $i$ and $j$ amounts to swapping the two basis vectors $v_i$ and $v_j$ of $V$ (swapping <em>rows</em> would instead correspond to swapping basis vectors of $W$). Concretely, $L_2 = L_1 \circ P$, where $P$ is the invertible map that exchanges $v_i$ and $v_j$ and fixes the remaining basis vectors: $L_1$ and $L_2$ are the same transformation up to this change of basis of the domain, which is why they have the same image and rank.</p>
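A tiny numeric sanity check of this relationship (the names are illustrative): swapping the columns of a matrix is the same as multiplying on the right by the permutation matrix, i.e. precomposing with a swap of the domain basis; a row swap is left-multiplication instead.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[1, 2], [3, 4]]
M_swapped = [[2, 1], [4, 3]]   # columns of M exchanged
P = [[0, 1], [1, 0]]           # permutation swapping the two basis vectors

assert matmul(M, P) == M_swapped          # column swap: change of basis in the domain
assert matmul(P, M) == [[3, 4], [1, 2]]   # row swap: change of basis in the codomain
```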
16,105
<p>The answer to this question should be obvious, but I can't seem to figure it out. Suppose we have a surface $F$, and a representation $\rho : \pi_1(F)\to SU(n)$. We can define the homology with local coefficients $H_*(F,\rho)$ straightforwardly as the homology of the twisted complex $$C_*(F,\rho):=C_*(\widetilde{F};\mathbf{Z})\otimes_{\mathbf{Z}[\pi_1(F)]} \mathbf{C}^n$$ where $\widetilde{F}$ is the universal cover, and $\mathbf{Z}[\pi_1(F)]$ acts on each side in the obvious way. </p> <p>Now, this complex is actually very easy to compute explicitly: just lift a nice basis of cells in $F$ to $\widetilde{F}$, and write down the boundary maps explicitly. For example, if $F$ is a torus and we take $n=2$, say, we can choose a natural meridian-longitude basis $(x,y)$ for $H_1(F)$, and the twisted boundary map $\partial_1:C_1(F,\rho)=\mathbf{C}^4\to C_0(F,\rho)=\mathbf{C}^2$ is $$ \left( \begin{array}{ccc} \rho(x)-Id \newline\rho(y)-Id\end{array} \right)$$</p> <p>So, here's my question. Since $\rho$ is a unitary representation, we should get a twisted intersection form on $H_1(F)$, simply by combining the untwisted intersection form with the standard hermitian product on $\mathbf{C}^2$, right? And I would imagine this is also really easy to compute, in a similar basis, say? I can't seem to figure out how it would go. Could anyone help me, even show me how it works for the same torus example?</p> <p>Or, if I've said anything wrong, tell me where?</p>
Emerton
2,874
<p>For me it is easier to work with cohomology (just for psychological reasons). Also, I will distinguish the representation $\rho$ from the local system $V$ with fibres ${\mathbb C}^2$ that it gives rise to. So where you would write $H^1(F,\rho)$ I will write $H^1(F,V)$. I will let $\overline{V}$ denote the complex conjugate local system to $V$. (So it is the same underlying local system of abelian groups, but we give it the conjugate action of $\mathbb C$.)</p> <p>The Hermitian pairing on the fibres of $V$ and $\overline{V}$ gives a pairing of local systems $V \times \overline{V} \to \mathbb R$, where $\mathbb R$ is the constant local system with fibre the real numbers. If you like we can think of this as an $\mathbb R$-linear map $V\otimes_{\mathbb C}\overline{V} \to \mathbb R.$ This pairing will induce a map on cohomology $H^2(F,V\otimes_{\mathbb C}\overline{V}) \to H^2(F,\mathbb R)$.</p> <p>There will also be a cup product $H^1(F,V) \times H^1(F,\overline{V}) \to H^2(F, V\otimes_{\mathbb C} \overline{V})$. Composing this with the previous map on $H^2$ gives your twisted cup product $H^1(F,V)\times H^1(F,\overline{V}) \to H^2(F,\mathbb R)$.</p> <p>This gives one perspective on your construction. To compute it, write down the twisted cochains $C^{\bullet}(\tilde{F})\otimes_{\mathbb Z[\pi_1(F)]}\mathbb C^2$, then write down the cup-product $$(C^{\bullet}(\widetilde{F})\otimes_{\mathbb Z[\pi_1(F)]}\mathbb C^2 ) \times (C^{\bullet}(\widetilde{F})\otimes_{\mathbb Z[\pi_1(F)]}\mathbb C^2) \to C^{\bullet}(\widetilde{F})\otimes_{\mathbb Z[\pi_1(F)]} \mathbb R^2 = C^{\bullet}(F,\mathbb R).$$ The cup product will just be given by the usual formula, and then you will also pair the $\mathbb C^2$ parts of the cochains using the hermitian pairing.</p> <p>Hopefully you can follow your nose and do this explicitly for the torus. Then you can just dualize everything to get to the homology version.</p>
2,150,886
<p>I want to find a first order ode, an initial value problem, that has the solution $$y=(1-y_0)t+y_0$$ where $y_0$ is the initial value.The ode has to be of first order, that is: $$y'=f(y).$$ I need this to test a special solver I am building. The main objective is to find an ode that has the property that the end-value goes into the reverse direction of the initial value. My thought is, that the function above is the most simple one that fulfills that requirement. However, any other idea that produces my desired result is welcome. </p> <p>I came so far: since $$y'=1-y_0=f((1-y_0)+y_0)$$ $f$ should be something like $$f(z)=\frac{z-y_0}{t}$$ so we get: $$y'=\frac{y-y_0}{t}$$ However, I don't know how to properly get the initial condition into the equation, the $y_0$ part in the ode itself doesn't seem right, since the initial value can't be put into the ode itself but is a special constrained outside the ode. Can anyone help me get this clear? </p>
Robert Israel
8,508
<p>Solve for $y_0$: </p> <p>$$ y_0 = \frac{t-y}{t-1} $$</p> <p>Then differentiate: $$ 0 = \frac{(t-1)(1-y') - (t-y)}{(t-1)^2} = - \frac{y'}{t-1} + \frac{y-1}{(t-1)^2}$$ i.e. $$ y' = \frac{y-1}{t-1}$$</p>
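A quick numeric check (a sketch) that the affine family $y=(1-y_0)t+y_0$ satisfies $y'=(y-1)/(t-1)$ away from the singular point $t=1$, for an arbitrary initial value:

```python
y0 = 0.3                       # any initial value works the same way
y = lambda t: (1 - y0) * t + y0
dy = 1 - y0                    # exact derivative of the affine solution

for t in [0.0, 0.5, 2.0, 5.0]:           # avoid t = 1, where the ODE is singular
    assert abs(dy - (y(t) - 1) / (t - 1)) < 1e-12
```

Note that every member of the family passes through $(1,1)$, which is exactly why the right-hand side must blow up at $t=1$: uniqueness fails there.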
5,739
<p>Hi, I have recently got interested in multi-index (multi-dimensional) Dirichlet series, i.e. series of the form $F(s_1,...,s_k)=\sum_{(n_1,...,n_k)\in\mathbb{N}^k}\frac{a_{n_1,...,n_k}}{n_1^{s_1}...n_k^{s_k}}$. I found some papers suggesting that multi-index Dirichlet series are in fact a distinct subfield for itself within analytic number theory. So, I´m now looking for some 'basic' learning materials/books or similar on this subject.</p> <p>Any suggestions are greatly appreciated!</p> <p>efq</p> <p>PS: I believe I have already checked most books on multi-dimensional complex analysis/several complex variables.</p>
Jon Awbrey
1,636
<p>I used to study enumerating generating functions, mostly for various families of graphs, that allowed a mix of ordinary and exponential variables for tracking different kinds of additive weights along with dirichlet variables for tracking multiplicative weights. I don't remember there being a lot of literature &mdash; this was a few years back &mdash; but there was some. I'm guessing you already looked in the bibs of Stanley or Goulden and Jackson and so on? Will see if I can dig up some notes, but probably easier just persisting in your web searches.</p>
5,739
<p>Hi, I have recently got interested in multi-index (multi-dimensional) Dirichlet series, i.e. series of the form $F(s_1,...,s_k)=\sum_{(n_1,...,n_k)\in\mathbb{N}^k}\frac{a_{n_1,...,n_k}}{n_1^{s_1}...n_k^{s_k}}$. I found some papers suggesting that multi-index Dirichlet series are in fact a distinct subfield for itself within analytic number theory. So, I´m now looking for some 'basic' learning materials/books or similar on this subject.</p> <p>Any suggestions are greatly appreciated!</p> <p>efq</p> <p>PS: I believe I have already checked most books on multi-dimensional complex analysis/several complex variables.</p>
maki
1,919
<p>De la Breteche recently proved a Tauberian theorem for multiple Dirichlet series (MR1858338 (2002j:11106)). This is useful stuff in applications. It falls just short of proving the main result in the recent paper of Balazard et al.: <a href="http://iml.univ-mrs.fr/~balazard/pdfdjvu/19.pdf" rel="nofollow">http://iml.univ-mrs.fr/~balazard/pdfdjvu/19.pdf</a> (but does so assuming the Riemann Hypothesis). Finally, Daniel Bump (look up his homepage on Google) did a lot of work on multiple Dirichlet series - unfortunately I am not familiar with any of it - it also seems to have a more algebraic flavor. </p> <p>P.S.: It is remarkable that De la Breteche avoids using several complex variables.</p>
5,739
<p>Hi, I have recently got interested in multi-index (multi-dimensional) Dirichlet series, i.e. series of the form $F(s_1,...,s_k)=\sum_{(n_1,...,n_k)\in\mathbb{N}^k}\frac{a_{n_1,...,n_k}}{n_1^{s_1}...n_k^{s_k}}$. I found some papers suggesting that multi-index Dirichlet series are in fact a distinct subfield for itself within analytic number theory. So, I´m now looking for some 'basic' learning materials/books or similar on this subject.</p> <p>Any suggestions are greatly appreciated!</p> <p>efq</p> <p>PS: I believe I have already checked most books on multi-dimensional complex analysis/several complex variables.</p>
Anweshi
2,938
<p>See P. Deligne, <em>Multizeta values</em>, Notes d'exposes, IAS Princeton, for the deep mathematical aspects of this. </p> <p>Also for a general relevance philosophy, see Kontsevich and Zagier, <em>Periods</em>, Mathematics Unlimited(2001). An electronic version is available <a href="http://www.maths.gla.ac.uk/~tl/periods.ps" rel="nofollow">here</a>.</p> <p>There are various references, including those of Zudilin, Cartier, Zagier, Terasoma, Oesterle(On polylogarithms), Manin(iterated integrals and ....). Please look into mathscinet. </p> <p>There seem to be many papers by Dorian Goldfeld and collaborators, too.</p>
3,363,875
<p>When I was reading a book, it said the following is clear:</p> <p>let <span class="math-container">$n$</span> be a positive integer; then <span class="math-container">$$(-1)^n(n+1)\equiv n+1\pmod 4$$</span> Why does this not seem right to me?</p>
Bill Dubuque
242
<p>Their difference <span class="math-container">$\,\underbrace{(n+1)\overbrace{(1 - (-1)^{\large n})}^{\large 0 \ \ {\rm if}\ \ n\ \ {\rm even}}}_{\large {\rm even}\ \times\ 2\ \ {\rm if}\ \ n\ \ {\rm odd}\!\!\!}\ $</span> is divisible by <span class="math-container">$\,4\,$</span> so they are congruent <span class="math-container">$\!\bmod{4}$</span>.</p>
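An exhaustive check over a range of $n$ (of course the divisibility argument above already proves it for all $n$):

```python
# (-1)^n (n+1) and (n+1) leave the same remainder mod 4 for every n >= 1.
for n in range(1, 1001):
    assert ((-1)**n * (n + 1)) % 4 == (n + 1) % 4
```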
2,386,602
<p>This is a question from an exam I recently failed. </p> <p>What is the radius of convergence of the following power series? $$(a) \sum_{n=1}^\infty(n!)^2x^{n^2}$$ and $$(b) \sum_{n=1}^\infty \frac {x^{n^2}}{n!}$$</p> <p>Edit: Here's my attempt at the first one, if someone could tell me if it's any good...</p> <p>$\sum_{n=1}^\infty(n!)^2x^{n^2}$=$\sum_{n=1}^\infty a_nx^n$ where $a_n= 0$ for any $n\notin \{k\in N| \exists t\in N: k=t^2\}$ and $a_n = (n!)^2$ else. So the radius of convergence would be the inverse of $\lim_{n\rightarrow \infty}{(n!)^{2/n}}=\lim e^{2/n\cdot log(n!) }$. The exponent with log of factorial becomes a series, $\sum_{n=1}^{\infty} \frac{logn}{n}$ which diverges by comparison test with $\frac{1}{n}$, so the radius of convergence would be equal to $0$.</p> <p>Second edit: This is wrong. Correct answer below.</p>
levap
32,262
<p>Let's consider the first series $\sum_{n=1}^{\infty} (n!)^2 x^{n^2}$. The easiest way to find the radius of convergence is to forget this is a power series and treat $x$ as a constant. Let's assume $x &gt; 0$ so that the terms of the series are positive and we can use any test we wish for the convergence/divergence of a non-negative series. Alternatively, replace $x$ with $|x|$ and check for absolute convergence instead. Using the ratio test, we get the expression $$ \frac{(n+1)!^2 x^{(n+1)^2}}{(n!)^2 x^{n^2}} = (n+1)^2 x^{2n + 1}. $$ This expression converges if $0 &lt; x &lt; 1$ to $0$ while it diverges if $x \geq 1$. This implies that the series converges if $0 &lt; x &lt; 1$ and diverges if $x \geq 1$. But this is a power series so the only possible value for the radius of convergence is $R = 1$ (because it should converge when $|x| &lt; R$ and diverge when $|x| &gt; R$).</p>
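A numeric illustration of the ratio-test conclusion, working with logarithms to avoid overflow (`log_term` is just an ad-hoc name): inside $|x|<1$ the terms of $\sum (n!)^2 x^{n^2}$ eventually decay, while at $x=1$ they blow up.

```python
from math import lgamma, log

def log_term(n, x):
    """Natural log of (n!)^2 * x^(n^2), for x > 0."""
    return 2 * lgamma(n + 1) + n * n * log(x)

assert log_term(400, 0.9) < log_term(200, 0.9) < 0   # terms -> 0 for x = 0.9
assert log_term(200, 1.0) > log_term(50, 1.0) > 0    # terms blow up at x = 1
```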
3,062,701
<p>I want to solve this system by Least Squares method:<span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix} $$</span> This symmetric matrix is singular with one eigenvalue <span class="math-container">$\lambda1 = 0$</span>, so <span class="math-container">$\ A^t\cdot A$</span> is also singular and for this reason I cannot use the normal equation: <span class="math-container">$\hat x = (A^t\cdot A)^{-1}\cdot A^t\cdot b $</span>. So I performed Gauss-Jordan to the extended matrix to come with <span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 0 &amp; 1 &amp; 2 \\\ 0 &amp; 0 &amp; 0 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\3\\-1\end{pmatrix} $$</span> Finally I solved the <span class="math-container">$\ 2x2$</span> system: <span class="math-container">$$\begin{pmatrix}1 &amp; 2\\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}1\\3\end{pmatrix} $$</span> taking into account that the best <span class="math-container">$\ \hat b\ $</span> is <span class="math-container">$\begin{pmatrix}1\\3\\0\end{pmatrix}$</span></p> <p>The solution is then <span class="math-container">$\ \hat x = \begin{pmatrix}-5\\3\\0\end{pmatrix}$</span></p> <p>Is this approach correct ? 
</p> <p><strong>EDIT</strong></p> <p>Based on the book 'Linear Algebra and Its Applications' by David Lay, I also include the Least Squares method he proposes: <span class="math-container">$(A^tA)\hat x=A^t b $</span></p> <p><span class="math-container">$$A^t b =\begin{pmatrix}5\\9\\13\end{pmatrix}, A^tA = \begin{pmatrix}14 &amp; 20 &amp; 26 \\ 20 &amp; 29 &amp; 38 \\ 26 &amp; 38 &amp; 50\end{pmatrix}$$</span> The reduced echelon form of the augmented matrix is: <span class="math-container">$$ \begin{pmatrix}14 &amp; 20 &amp; 26 &amp; 5 \\ 20 &amp; 29 &amp; 38 &amp; 9 \\ 26 &amp; 38 &amp; 50 &amp; 13 \end{pmatrix} \sim \begin{pmatrix}1 &amp; 0 &amp; -1 &amp; -\frac{35}{6} \\ 0 &amp; 1 &amp; 2 &amp; \frac{13}{3} \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{pmatrix} \Rightarrow \hat x = \begin{pmatrix}-\frac{35}{6} \\ \frac{13}{3} \\ 0 \end{pmatrix}$$</span> taking the free variable <span class="math-container">$z=\alpha$</span> with <span class="math-container">$\alpha=0$</span> </p>
John Doe
399,334
<p>I think you did the Gaussian elimination wrong. </p> <p><span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix}$$</span></p> <p>Becomes <span class="math-container">$$\begin{pmatrix}1&amp;2&amp;3\\0&amp;-1&amp;-2\\0&amp;-2&amp;-4\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}1\\3\\-5\end{pmatrix}$$</span> Which becomes <span class="math-container">$$\begin{pmatrix}1&amp;2&amp;3\\0&amp;-1&amp;-2\\0&amp;0&amp;0\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}1\\3\\-11\end{pmatrix}$$</span>Now notice that the final row says <span class="math-container">$0=-11$</span>. That is a contradiction, hence there are no solutions to this equation.</p>
3,062,701
<p>I want to solve this system by Least Squares method:<span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix} $$</span> This symmetric matrix is singular with one eigenvalue <span class="math-container">$\lambda1 = 0$</span>, so <span class="math-container">$\ A^t\cdot A$</span> is also singular and for this reason I cannot use the normal equation: <span class="math-container">$\hat x = (A^t\cdot A)^{-1}\cdot A^t\cdot b $</span>. So I performed Gauss-Jordan to the extended matrix to come with <span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 0 &amp; 1 &amp; 2 \\\ 0 &amp; 0 &amp; 0 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\3\\-1\end{pmatrix} $$</span> Finally I solved the <span class="math-container">$\ 2x2$</span> system: <span class="math-container">$$\begin{pmatrix}1 &amp; 2\\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}1\\3\end{pmatrix} $$</span> taking into account that the best <span class="math-container">$\ \hat b\ $</span> is <span class="math-container">$\begin{pmatrix}1\\3\\0\end{pmatrix}$</span></p> <p>The solution is then <span class="math-container">$\ \hat x = \begin{pmatrix}-5\\3\\0\end{pmatrix}$</span></p> <p>Is this approach correct ? 
</p> <p><strong>EDIT</strong></p> <p>Based on the book 'Linear Algebra and Its Applications' by David Lay, I also include the Least Squares method he proposes: <span class="math-container">$(A^tA)\hat x=A^t b $</span></p> <p><span class="math-container">$$A^t b =\begin{pmatrix}5\\9\\13\end{pmatrix}, A^tA = \begin{pmatrix}14 &amp; 20 &amp; 26 \\ 20 &amp; 29 &amp; 38 \\ 26 &amp; 38 &amp; 50\end{pmatrix}$$</span> The reduced echelon form of the augmented matrix is: <span class="math-container">$$ \begin{pmatrix}14 &amp; 20 &amp; 26 &amp; 5 \\ 20 &amp; 29 &amp; 38 &amp; 9 \\ 26 &amp; 38 &amp; 50 &amp; 13 \end{pmatrix} \sim \begin{pmatrix}1 &amp; 0 &amp; -1 &amp; -\frac{35}{6} \\ 0 &amp; 1 &amp; 2 &amp; \frac{13}{3} \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{pmatrix} \Rightarrow \hat x = \begin{pmatrix}-\frac{35}{6} \\ \frac{13}{3} \\ 0 \end{pmatrix}$$</span> taking the free variable <span class="math-container">$z=\alpha$</span> with <span class="math-container">$\alpha=0$</span> </p>
AVK
362,247
<p>Your solution is incorrect for the following reason. When you perform the Gauss-Jordan elimination, you transform the original system <span class="math-container">$$\tag{1} Ax=b $$</span> to another <span class="math-container">$$\tag{2} SAx=Sb. $$</span> But the least squares solutions of (1) and (2) do not coincide in general. Indeed, the least squares solution of (1) is <span class="math-container">$A^{+}b$</span>, the least squares solution of (2) is <span class="math-container">$(SA)^{+}Sb$</span>. If <span class="math-container">$A$</span> is invertible, then <span class="math-container">$$(SA)^{+}S=(SA)^{-1}S=A^{-1}=A^{+}$$</span> and everything is OK, but in general case <span class="math-container">$(SA)^{+}S\ne A^{+}$</span>.</p> <p>In your case, in particular, <span class="math-container">$$ S=\left(\begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ 2 &amp; -1 &amp; 0 \\ 1 &amp; -2 &amp; 1 \\ \end{array}\right),\quad (SA)^{+}S=\left(\begin{array}{rrr} -11/6 &amp; 4/3 &amp; 0 \\ -1/3 &amp; 1/3 &amp; 0 \\ 7/6 &amp; -2/3 &amp; 0 \\ \end{array}\right), $$</span> <span class="math-container">$$ A^{+}=\left(\begin{array}{rrr} -13/12 &amp; -1/6 &amp; 3/4 \\ -1/6 &amp; 0 &amp; 1/6 \\ 3/4 &amp; 1/6 &amp; -5/12\\ \end{array}\right).$$</span></p> <p>You can calculate the pseudoinverse matrix by using the <a href="https://en.wikipedia.org/wiki/Rank_factorization" rel="nofollow noreferrer">rank factorization</a>: <span class="math-container">$$ A=BC,\quad B=\left(\begin{array}{rr} 1 &amp; 3\\ 2 &amp; 4\\ 3 &amp; 5\\ \end{array}\right),\quad C=\left(\begin{array}{rrr} 1&amp;1/2&amp;0\\ 0&amp;1/2&amp;1 \end{array}\right) $$</span> (this decomposition comes from the fact the second column of <span class="math-container">$A$</span> is the arithmetic mean of the remaining columns). 
It remains only to calculate the pseudoinverse matrix <span class="math-container">$$ A^{+}=C^{+}B^{+}=C^T(CC^T)^{-1}(B^TB)^{-1}B^T $$</span> and the least squares solution is <span class="math-container">$A^{+}b$</span>.</p>
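One can verify the displayed <span class="math-container">$A^{+}$</span> exactly with rational arithmetic (a quick check, not part of the answer), via two of the Moore–Penrose identities:

```python
from fractions import Fraction as F

A  = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
Ap = [[F(-13, 12), F(-1, 6), F(3, 4)],
      [F(-1, 6),   F(0),     F(1, 6)],
      [F(3, 4),    F(1, 6),  F(-5, 12)]]

def mul(X, Y):
    return [[sum(F(X[i][k]) * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

assert mul(mul(A, Ap), A) == A    # A A+ A = A
assert mul(mul(Ap, A), Ap) == Ap  # A+ A A+ = A+
```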
2,806,858
<p>There is an equation $$\sin2\theta=\sin\theta$$ We need to show when the right-hand side is equal to the left-hand side for $[0,2\pi]$. <hr> Let's rewrite it as $$2\sin\theta\cos\theta=\sin\theta$$ Let's divide both sides by $\sin\theta$ (then $\sin\theta \neq 0 \leftrightarrow \theta \notin \{0,\pi,2\pi\}$) $$2\cos\theta=1$$ $$cos\theta=\frac{1}{2}$$ $$\theta\in\left\{\frac{\pi}{3},\frac{5\pi}{3}\right\}$$ <hr> Now, let's try something different. $$2\sin\theta\cos\theta=\sin\theta$$ $$2\sin\theta\cos\theta-\sin\theta=0$$ $$\sin\theta(2\cos\theta-1)=0$$ We can have the solution when $\sin\theta=0$. $$\sin\theta=0$$ $$\theta \in \left\{0,\pi,2\pi\right\}$$ And when $2\cos\theta-1=0$. $$2\cos\theta-1=0$$ $$2\cos\theta=1$$ $$\cos\theta=\frac{1}{2}$$ $$\theta \in \left\{\frac{\pi}{3},\frac{5\pi}{3}\right\}$$ Therefore the whole solution set is $$\theta \in \left\{0,\pi,2\pi,\frac{\pi}{3},\frac{5\pi}{3}\right\}$$ <strong>This is the correct solution.</strong> <hr> Why is this happening? In the first approach, the extra solution given by $\sin\theta=0$ is not only non-appearing but actually banned. Both approaches look valid to me, yet the first one yields less solutions than the second one. Is the first approach invalid in some cases? This is not the only case when this happens, so I'd like to know when I need to use the second approach to solve the equation, so I don't miss any possible solutions.</p>
Arnaud Mortier
480,423
<p>Wrong:$$ab=ac\Longleftrightarrow b=c$$</p> <p>Correct:$$ab=ac\Longleftrightarrow\cases{b=c\\\text{or}\\a=0}$$</p> <p>Therefore, if you want to simplify by $a$, you automatically end up with a proof by exhaustion, with at least two cases.</p>
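Applied to the equation in question, a quick numerical check (a NumPy sketch, not part of the original answer) confirms that the factored case split finds all five solutions on $[0,2\pi]$:

```python
import numpy as np

# g(t) = sin(2t) - sin(t) vanishes exactly at the five claimed solutions on [0, 2*pi]
roots = [0.0, np.pi / 3, np.pi, 5 * np.pi / 3, 2 * np.pi]
g = lambda t: np.sin(2 * t) - np.sin(t)
assert all(abs(g(t)) < 1e-12 for t in roots)

# Counting sign changes on a fine interior grid finds the three interior roots;
# the remaining two solutions sit at the endpoints 0 and 2*pi.
grid = np.linspace(1e-6, 2 * np.pi - 1e-6, 200_001)
sign_changes = np.sum(np.diff(np.sign(g(grid))) != 0)
assert sign_changes == 3
```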
4,090,408
<p>Show that <span class="math-container">$A$</span> is a whole number: <span class="math-container">$$A=\sqrt{\left|40\sqrt2-57\right|}-\sqrt{\left|40\sqrt2+57\right|}.$$</span> I don't know if this is necessary, but we can compare <span class="math-container">$40\sqrt{2}$</span> and <span class="math-container">$57$</span>: <span class="math-container">$$40\sqrt{2}\Diamond57,\\1600\times2\Diamond 3249,\\3200\Diamond3249,\\3200&lt;3249\Rightarrow 40\sqrt{2}&lt;57.$$</span> Is this actually needed for the solution? So <span class="math-container">$$A=\sqrt{57-40\sqrt2}-\sqrt{40\sqrt2+57}.$$</span> What should I do next?</p>
peter.petrov
116,591
<p><span class="math-container">$$A=\sqrt{57-40\sqrt2}-\sqrt{40\sqrt2+57} = \sqrt{(4\sqrt2-5)^2} - \sqrt{(4\sqrt2+5)^2} $$</span></p> <p><span class="math-container">$$ = (4\sqrt2-5) - (4\sqrt2+5) = -10$$</span></p> <p>So <span class="math-container">$A$</span> is the integer <span class="math-container">$-10$</span>.<br /> It's just written in some slightly convoluted form.</p>
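A numerical sanity check of the two perfect squares (a short Python sketch, not part of the original answer):

```python
import math

s = math.sqrt(2)

# 57 - 40*sqrt(2) = (4*sqrt(2) - 5)^2 and 57 + 40*sqrt(2) = (4*sqrt(2) + 5)^2
assert abs((4 * s - 5) ** 2 - (57 - 40 * s)) < 1e-12
assert abs((4 * s + 5) ** 2 - (57 + 40 * s)) < 1e-12

# Hence A = (4*sqrt(2) - 5) - (4*sqrt(2) + 5) = -10
A = math.sqrt(abs(40 * s - 57)) - math.sqrt(abs(40 * s + 57))
assert abs(A - (-10)) < 1e-9
```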
716,859
<p>Define the mean of order $p$ of $a$ and $b$ as $s_p(a,b)$ $=$ $({a^p + b^p\over 2})^{1/p}$.</p> <p>I have to find the limit of the sequence $s_n(a,b)$. I already know this sequence is bounded above by $b$ (from a previous question) and if I assume the limit exists I can show it is $b$. What I cannot show is that the sequence is increasing. Could someone assist me or show me how to prove this? </p>
mookid
131,738
<p>There is no need to prove that the sequence is increasing:</p> <p>Let us assume that $b&gt;a\ge 0$, that is $a = b-r$, $r&gt;0$.</p> <p>Then $$ \left( \frac{a^p + b^p}2 \right)^{1/p}= b \left( \frac 12 \left[\left(\frac{b-r}b \right)^p + 1 \right]\right)^{1/p}= b\left[\frac 12\right]^{1/p} \left[\left(\frac{b-r}b \right)^p + 1 \right]^{1/p} $$ now $\left[\frac 12\right]^{1/p}\to 1$ and $$ \frac 1p\log\left[\left(\frac{b-r}b \right)^p + 1 \right]\sim \frac 1p\left[\left(\frac{b-r}b \right)^p\right]\to 0 $$ Now use the continuity of the exponential to get $$ \left[\left(\frac{b-r}b \right)^p + 1 \right]^{1/p} \to 1\\ \left( \frac{a^p + b^p}2 \right)^{1/p}\to b $$</p>
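A numerical illustration of the limit (a Python sketch, not part of the original answer; the exponent is kept moderate to avoid float overflow):

```python
# s_p(a, b) = ((a^p + b^p)/2)^(1/p), the mean of order p
def s(a, b, p):
    return ((a ** p + b ** p) / 2) ** (1 / p)

# s_p(2, 3) approaches max(2, 3) = 3 as p grows
values = [s(2.0, 3.0, p) for p in (1, 10, 100, 500)]
assert abs(values[0] - 2.5) < 1e-12               # arithmetic mean at p = 1
assert all(x < y for x, y in zip(values, values[1:]))  # increasing along these p
assert abs(values[-1] - 3.0) < 0.01
```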
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new dramatically simpler proof may represent a much hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to 100-year-old or so theorems.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb{R}^3$</span>, a.k.a.
the Kepler Conjecture</a>(1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
Gil Kalai
1,532
<p><a href="http://en.wikipedia.org/wiki/Graph_minor_theorem">The Graph-Minor Theorem</a>.</p> <p>A graph $H$ is a minor of a graph $G$ if it can be obtained from $G$ by a sequence of edge deletions and contractions. Robertson and Seymour's graph-minor theorem asserts that in every infinite sequence of graphs $G_1,G_2,\dots$ there are $i&lt;j$ such that $G_i$ is a minor of $G_j$. Equivalently, it asserts that every minor-closed family of graphs (example: planar graphs) can be defined by a finite list of forbidden minors (for the example, a theorem of Wagner asserts that the list is $\{K_5,K_{3,3}\}$).</p> <p>The theorem was proved by Robertson and Seymour around 1984. The proof spans 20 papers (published between 1984 and 2004) and is very hard, in spite of some simplifications of some of its ingredients.</p>
Daniel Moskovich
2,051
<p><a href="http://en.wikipedia.org/wiki/Selberg_trace_formula" rel="nofollow">The Selberg Trace Formula- general case</a></p> <hr> <p>Hejhal's original 1983 proof is 1322 pages long! As far as I know, the proof remains famously very hard.</p>
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between two parallel lines $\text{AB}$ and $\text{CD}$, so $$ar\triangle \text{ADC}=ar\triangle \text{BCD}$$ Now the things that should be noticed are:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now in two different triangles, two sides are equal and their areas are also equal, so the third side is also equal, i.e. $\text{AC}=\text{BD}$, which makes this parallelogram a rectangle.</p> <p>Isn't this a claim that every parallelogram is a rectangle, or that a (non-rectangular) parallelogram does not exist?</p>
Arnaldo
391,612
<p>Here is the problem:</p> <p>"<em>Two sides are equal and the area is also equal. So, the third side is also equal</em>"</p> <p>Take as an example:</p> <p>Suppose $\angle ADC=60°$ and $\angle BCD =120°$. </p> <p>You still get $AD=BC$, $DC=DC$ and $S(ADC)=S(BCD)$, but $\Delta ADC \ne \Delta BCD$ because they don't satisfy the rule <em>(side, angle, side)</em>: </p> <p>$AD=BC$, $DC=DC$ but $\angle ADC \ne \angle BCD$.</p> <p><em>P.S.: Having the same area is a consequence of both triangles having the same base ($CD$) and the same height (because $AB$ is parallel to $CD$), but that doesn't guarantee congruence. In order to see that, just move the side $AB$ (keeping it parallel to $CD$ and at the same height): clearly you will change the angles $\angle ADC$ and $\angle BCD$, but the area will remain the same.</em></p> <p><strong>EDIT</strong></p> <p>This post is getting so much attention that I think there is something more to say. </p> <p><a href="https://i.stack.imgur.com/NCbeC.png"><img src="https://i.stack.imgur.com/NCbeC.png" alt="enter image description here"></a> In the geometric construction above we have the line $t$ parallel to $s$. The two parallelograms are just examples of what is happening. For any choice of $AB$ we will always have $AD=BC$ and $DC=DC$, and since $s$ and $t$ are parallel, the height $h$ is constant. It means that we will always have:</p> <p>$$ar(ADC)=ar(BCD)=\frac{m\cdot h}{2}$$</p> <p>and clearly there are infinitely many angles $\angle ADC$ that are not $90°$, for which the parallelogram is not a rectangle. Furthermore, if $\angle ADC \ne 90°$ then $\angle ADC \ne \angle BCD$, and since the rule (side, angle, side) can be taken as the definition of congruence between two triangles, we get $BD \ne AC$.</p>
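A concrete numerical counterexample along these lines (a Python sketch with coordinates chosen for illustration; they are not from the original answer):

```python
# A slanted parallelogram ABCD: D, C on the base line, A, B on a parallel line.
D, C, A, B = (0, 0), (3, 0), (1, 2), (4, 2)

def dist2(p, q):
    # squared distance, exact in integer arithmetic
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def area(p, q, r):
    # triangle area via the shoelace formula
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

# Triangles ADC and BCD share the base DC, have AD = BC, and equal areas ...
assert dist2(A, D) == dist2(B, C)
assert area(A, D, C) == area(B, C, D)

# ... yet they are not congruent: the third sides (the diagonals) differ.
assert dist2(A, C) != dist2(B, D)
```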
11,698
<p><strong>Bug introduced in 8.0 or earlier and fixed in 13.2.0 or earlier</strong></p> <hr /> <p>So I have been fighting with this for a while. I'm trying to get custom frame ticks on both the left and right side of a <code>DistributionChart</code>. It's not going very well. It just keeps throwing errors saying tick position needs to be a number. Which in my code it is. Here is an example:</p> <pre><code>With[ {fps = {120, 60, 50, 40, 30, 25, 20, 15, 10}}, DistributionChart[ RandomVariate[SkewNormalDistribution[##], 100] &amp; @@@ {{20, 13, 5}, {30, 12, 10}}, ChartLabels -&gt; {1, 2}, PlotRange -&gt; {0, 70}, GridLines -&gt; {None, N@Table[1/i*1000, {i, fps}]}, FrameTicks -&gt; { { Table[{N[1/i*1000], NumberForm[N[1/i*1000], 3]}, {i, fps}],(*Left*) Table[{1/i*1000, i}, {i, fps}] (*Right*) }, { None,(*Bottom*) None (*Top*) } } ] ] </code></pre> <p>Any help would be much appreciated.</p>
bmf
85,558
<p>The issue has been resolved in the current version, <code>13.2.0</code> (or possibly an earlier one).</p> <p>The code from the OP now runs without errors:</p> <pre><code>With[{fps = {120, 60, 50, 40, 30, 25, 20, 15, 10}}, DistributionChart[ RandomVariate[SkewNormalDistribution[##], 100] &amp; @@@ {{20, 13, 5}, {30, 12, 10}}, ChartLabels -&gt; {1, 2}, PlotRange -&gt; {0, 70}, GridLines -&gt; {None, N@Table[1/i*1000, {i, fps}]}, FrameTicks -&gt; {{Table[{N[1/i*1000], NumberForm[N[1/i*1000], 3]}, {i, fps}],(*Left*) Table[{1/i*1000, i}, {i, fps}] (*Right*)}, {None,(*Bottom*) None (*Top*)}}]] </code></pre> <p>Screenshot for completeness:</p> <blockquote> <p><a href="https://i.stack.imgur.com/rTWpl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rTWpl.png" alt="screen" /></a></p> </blockquote>
1,043,266
<p>Look carefully at this problem (I have solved it on my own; I'm only asking about the magical coincidence):</p> <blockquote> <p>A bag contains 6 notes of 100 Rs., 2 notes of 500 Rs. and 3 notes of 1000 Rs. Mr. A draws two notes from the bag, then Mr. B draws two notes from the bag.<br> (i) Find the probability that A has drawn 600 Rs.<br> (ii) Find the probability that B has drawn 600 Rs.<br> (iii) B has drawn 600 Rs.; find the probability that A has also drawn 600 Rs.<br> (iv) A has drawn 600 Rs.; find the probability that B has drawn 600 Rs.<br></p> <hr> <p>(i)$$P=\frac{\binom61\binom21}{\binom{11}2}=\frac{12}{55}$$ (ii)<strong>Total Probability Theorem:</strong> Considering various cases, depending upon what A chooses,<br> in the order $2H,2F,2T,1H1T,1H1F,1F1T$,<br> where H=100(<strong>H</strong>undred), F=500(<strong>F</strong>ive-hundred), T=1000(<strong>T</strong>housand), this is: $$P=\frac{\binom41\binom21}{\binom92}\frac{\binom62}{\binom{11}2} +\frac{0}{\binom92}\frac{\binom22}{\binom{11}2} +\frac{\binom61\binom21}{\binom92}\frac{\binom32}{\binom{11}2} +\frac{\binom51\binom21}{\binom92}\frac{\binom61\binom31}{\binom{11}2} +\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2} +\frac{\binom61\binom11}{\binom92}\frac{\binom21\binom31}{\binom{11}2}=\frac{12}{55}$$ <strong>Oh my God! What's happening here?</strong><br> (iii)<strong>Bayes' Theorem:</strong> $$P=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ (iv)<strong>Conditional Probability:</strong> $$P(B|A)=\frac{P(AB)}{P(A)}=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ <strong>Not again, you must be joking!</strong></p> </blockquote> <p>Why doesn't it make any difference? Think intuitively: if A has taken some notes, there are fewer notes left in the bag, so there should be a difference in probability. Why doesn't order matter here?</p>
ml0105
135,298
<blockquote> <blockquote> <p>(i)Find the probability that A has drawn 600 Rs.</p> </blockquote> </blockquote> <p>Only $A$ is drawing. It doesn't matter if he picks the $100$ or $500$ first. So order does not matter.</p> <blockquote> <blockquote> <p>(ii)Total Probability Theorem: Considering various cases, depending upon what A chooses: In order of 2H,2F,2T,1H1T,1H1F,1F1T where H=100(Hundred),F=500(Five-hundred),T=1000(Thousand)]is: </p> </blockquote> </blockquote> <p>It's actually broken down into cases pretty well. If $A$ chose $2H$, the odds are: $\binom{6}{2}/\binom{11}{2}$. We then multiply this by the odds of $B$ getting $600$ given this, which are $\binom{4}{1} \binom{2}{1}/\binom{9}{2}$. </p> <p>So the idea is this: we break $A$'s outcomes down into cases. For a given case, we then examine what is left and calculate $B$'s odds. Then by rule of product, we multiply. Each case is disjoint, so we add up the results. Does this make sense?</p> <blockquote> <blockquote> <p>(iv)A has drawn 600 Rs.,then find the probability that B has drawn 600 Rs.</p> </blockquote> </blockquote> <p>So there are $5H$ and $1F$ left. The odds of getting 600 are: </p> <p>$$\frac{ \binom{5}{1} \binom{1}{1} }{\binom{9}{2} } = \frac{5}{36}$$</p> <p>Notice that I didn't go through the whole process with Bayes' Theorem. I just cut to the chase by analyzing the cases.</p> <p>Notice as well that the order in which $B$ or $A$ choose their respective bills do not matter. It just matters that we are careful about analyzing what is left after $A$'s turn.</p> <p>Note- if you want me to go through $3$, I will be happy to do so. Just comment and let me know.</p>
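The coincidence can be confirmed by exhaustive enumeration (a Python sketch, not part of the original answer): treat the eleven notes as distinguishable and let A take the first two and B the next two of an ordered draw.

```python
from fractions import Fraction
from itertools import permutations

notes = [100] * 6 + [500] * 2 + [1000] * 3

count_A = count_B = count_both = 0
draws = list(permutations(range(11), 4))   # ordered draws of 4 distinct notes
for d in draws:
    a = notes[d[0]] + notes[d[1]]          # A's total
    b = notes[d[2]] + notes[d[3]]          # B's total
    count_A += (a == 600)
    count_B += (b == 600)
    count_both += (a == 600 and b == 600)

total = len(draws)                         # 11*10*9*8 = 7920
P_A = Fraction(count_A, total)
P_B = Fraction(count_B, total)
P_A_given_B = Fraction(count_both, count_B)

assert P_A == P_B == Fraction(12, 55)      # parts (i) and (ii)
assert P_A_given_B == Fraction(5, 36)      # parts (iii) and (iv), by symmetry
```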
4,440,233
<blockquote> <p>Find all the functions <span class="math-container">$f:\mathbb{Z}^+ \to \mathbb{Z}^+$</span> such that <span class="math-container">$f(f(x)) = 15x-2f(x)+48$</span>.</p> </blockquote> <p>If <span class="math-container">$f$</span> is a polynomial of degree <span class="math-container">$n$</span>, we have that <span class="math-container">$\deg(f(f(x))) = n^2$</span> and <span class="math-container">$\deg(15x-2f(x)+48)=n$</span>. Therefore, the only possible polynomials that satisfy the condition have degree <span class="math-container">$0$</span> or <span class="math-container">$1$</span>.</p> <p>Let <span class="math-container">$f:\mathbb{Z}^+ \to \mathbb{Z}^+$</span> be a function that holds the condition of the problem given by <span class="math-container">$f(x)=ax+b$</span> for some constants <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Since <span class="math-container">$$f(f(x)) = f(ax+b) = a(ax+b)+b = a^2x + (a+1)b$$</span> and <span class="math-container">$$15x-2f(x)+48 = 15x-2(ax+b)+48 = (15-2a)x+(48-2b),$$</span> it follows that <span class="math-container">$$a^2+2a-15=0 \quad\text{and}\quad (a+1)b=48-2b.$$</span> From the first equation, we get that <span class="math-container">$a=-5$</span> or <span class="math-container">$a=3$</span>. If <span class="math-container">$a=-5$</span>, from the second equation we get that <span class="math-container">$b=-24$</span>, and it contradicts that <span class="math-container">$f[\mathbb{Z}^+]\subseteq \mathbb{Z}^+$</span>. If <span class="math-container">$a=3$</span>, then <span class="math-container">$b=8$</span>. Therefore, <span class="math-container">$f(x)=3x+8$</span> is the only polynomial that satisfies the condition of the problem. 
I guess that this is the only solution, but I do not know how to prove it.</p> <p><strong>Edit:</strong> I was trying to prove that the iterations of any function <span class="math-container">$f$</span> that satisfies the problem have the same behaviour. For instance, by iterating <span class="math-container">$f$</span> we have that <span class="math-container">$2f^3(x)+f^4(x)-15f^2(x)=48$</span>, so these functions are almost the same except for constant terms. Is this idea useful for completing the problem?</p>
Tob Ernack
275,602
<p>Note: the proof of convergence of the sequence used below is not complete, I might come back later to fix it, but there are already other answers anyway.</p> <hr /> <p>Due to the requirement that <span class="math-container">$f(x) \in \mathbb{Z}^+$</span> whenever <span class="math-container">$x \in \mathbb{Z}^+$</span>, we must have <span class="math-container">$15x - 2f(x) + 48 \gt 0$</span>, so</p> <p><span class="math-container">$$f(x) \lt \frac{15x + 48}{2}\tag{1}$$</span></p> <p>By substituting <span class="math-container">$x$</span> with <span class="math-container">$f(x)$</span> in <span class="math-container">$(1)$</span>, we obtain</p> <p><span class="math-container">$$f(f(x)) \lt \frac{15 f(x) + 48}{2}\tag{2}$$</span></p> <p>Using the functional equation to eliminate <span class="math-container">$f(f(x))$</span> we get <span class="math-container">$$15x - 2f(x) + 48 \lt \frac{15f(x) + 48}{2}\tag{3}$$</span></p> <p>Combining (1) and (3), it follows that <span class="math-container">$$f(x) \gt \frac{30x + 48}{19}\tag{4}$$</span></p> <p>Substituting <span class="math-container">$x$</span> with <span class="math-container">$f(x)$</span> in (4) and using the functional equation we get <span class="math-container">$$15x - 2f(x) + 48 \gt \frac{30f(x) + 48}{19}\tag{5}$$</span></p> <p>From this it follows that <span class="math-container">$$f(x) \lt \frac{285x + 864}{68}\tag{6}$$</span></p> <hr /> <p>We can keep continuing the same way. In order to discover a pattern while repeating this, suppose we have found so far that <span class="math-container">$$ax + b \lt f(x) \lt Ax + B\tag{7}$$</span> for some coefficients <span class="math-container">$a, b, A, B$</span>. 
Then by substituting <span class="math-container">$x$</span> with <span class="math-container">$f(x)$</span> and applying the functional equation, we get <span class="math-container">$$af(x) + b \lt 15x - 2f(x) + 48 \lt Af(x) + B$$</span> and therefore <span class="math-container">$$\frac{15}{A + 2}x + \frac{48 - B}{A + 2} \lt f(x) \lt \frac{15}{a + 2}x + \frac{48 - b}{a + 2} \tag{8}$$</span></p> <p>From (7) and (8) we obtain a sequence of coefficients <span class="math-container">$a_n, b_n, A_n, B_n$</span> satisfying the recurrence</p> <p><span class="math-container">$$\begin{pmatrix}a_{n+1} \\ b_{n+1} \\ A_{n+1} \\ B_{n+1}\end{pmatrix} = \begin{pmatrix}\frac{15}{A_n + 2} \\ \frac{48 - B_n}{A_n + 2} \\ \frac{15}{a_n + 2} \\ \frac{48 - b_n}{a_n + 2}\end{pmatrix} = \mathbf{F}\begin{pmatrix}a_n \\ b_n \\ A_n \\ B_n\end{pmatrix}\tag{9}$$</span></p> <p>Now assuming the sequence (9) converges, it must converge to a fixed point of <span class="math-container">$\mathbf{F}$</span>. The fixed points are found by solving the equations <span class="math-container">$$\begin{pmatrix}a \\ b \\ A \\ B\end{pmatrix} = \begin{pmatrix}\frac{15}{A + 2} \\ \frac{48 - B}{A + 2} \\ \frac{15}{a + 2} \\ \frac{48 - b}{a + 2}\end{pmatrix}$$</span></p> <p>From the first and third components we get a quadratic with two solutions. But only one is positive and since we started with <span class="math-container">$a_0 = 0, b_0 = 0, A_0 = \frac{15}{2}, B_0 = \frac{48}{2} = 24$</span>, it will be the one to use. This solution is <span class="math-container">$a = A = 3$</span>. 
Then we can solve for <span class="math-container">$b$</span> and <span class="math-container">$B$</span> from the second and fourth equations to obtain <span class="math-container">$b = B = 8$</span>.</p> <p>This means that by repeating the procedure described above, we can get a sequence of inequalities</p> <p><span class="math-container">$$a_nx + b_n \lt f(x) \lt A_nx + B_n$$</span></p> <p>where <span class="math-container">$a_n, A_n$</span> are arbitrarily close to <span class="math-container">$3$</span> and <span class="math-container">$b_n, B_n$</span> are arbitrarily close to <span class="math-container">$8$</span>, so that <span class="math-container">$f(x)$</span> is arbitrarily close to <span class="math-container">$3x + 8$</span> (pointwise). For any fixed integer <span class="math-container">$x$</span>, we can repeat this procedure finitely many times before the error in approximation is less than <span class="math-container">$1/2$</span> and by the requirement that <span class="math-container">$f(x)$</span> is an integer, it must be equal to <span class="math-container">$3x + 8$</span>.</p> <hr /> <h2>Proof of convergence of the sequence (9)</h2> <p>The initial conditions are <span class="math-container">$a_0 = 0, b_0 = 0, A_0 = \frac{15}{2}, B_0 = \frac{48}{2} = 24$</span>.</p> <p>Looking at the first and third components, we have the coupled recurrence equations <span class="math-container">$$a_{n+1} = \frac{15}{A_n + 2}$$</span> <span class="math-container">$$A_{n+1} = \frac{15}{a_n + 2}$$</span></p> <p>It can be checked that <span class="math-container">$0 \leq \frac{15}{x + 2} \lt 3$</span> when <span class="math-container">$x \gt 3$</span> and <span class="math-container">$\frac{15}{x + 2} \gt 3$</span> when <span class="math-container">$0 \leq x \lt 3$</span>. 
Using this fact and induction it is clear that <span class="math-container">$0 \leq a_n \lt 3 \lt A_n$</span> for all <span class="math-container">$n$</span>.</p> <p>We can decouple the equations to obtain the recurrence <span class="math-container">$$x_{n+2} = \frac{15x_n + 30}{2x_n + 19}$$</span> where <span class="math-container">$x_n$</span> can be either <span class="math-container">$a_n$</span> or <span class="math-container">$A_n$</span>.</p> <p>The function <span class="math-container">$\frac{15x + 30}{2x + 19}$</span> is strictly increasing when <span class="math-container">$x \geq 0$</span>. It can also be checked that <span class="math-container">$\frac{15x + 30}{2x + 19} \gt x$</span> when <span class="math-container">$0 \leq x \lt 3$</span> and <span class="math-container">$\frac{15x + 30}{2x + 19} \lt x$</span> when <span class="math-container">$x \gt 3$</span>.</p> <p>This means that if <span class="math-container">$0 \leq x_n \lt 3$</span> then <span class="math-container">$x_n \lt x_{n+2} \lt 3$</span> and if <span class="math-container">$x_n \gt 3$</span> then <span class="math-container">$x_n \gt x_{n+2} \gt 3$</span>.</p> <p>Since <span class="math-container">$a_0 = 0$</span> and <span class="math-container">$A_0 = \frac{15}{2}$</span>, this result shows that <span class="math-container">$$0 \leq a_n \lt a_{n+2} \lt 3 \lt A_{n+2} \lt A_n\text{ for all }n$$</span></p> <p>This shows that the subsequences <span class="math-container">$a_{2n}, a_{2n+1}$</span>, <span class="math-container">$A_{2n}$</span>, <span class="math-container">$A_{2n+1}$</span> are all bounded and monotonic, so they each converge. From the recurrence we get <span class="math-container">$A_1 = \frac{15}{a_0 + 2} = \frac{15}{2} = A_0$</span> and <span class="math-container">$a_2 = \frac{15}{A_1 + 2} = \frac{15}{A_0 + 2} = a_1$</span>. 
This means that the subsequences <span class="math-container">$a_{2n+1}$</span> and <span class="math-container">$a_{2n+2}$</span> are identical, and similarly <span class="math-container">$A_{2n}$</span> and <span class="math-container">$A_{2n+1}$</span> are identical, so in fact the sequences <span class="math-container">$a_n$</span> and <span class="math-container">$A_n$</span> converge to positive values <span class="math-container">$a$</span> and <span class="math-container">$A$</span>. Since <span class="math-container">$a$</span> and <span class="math-container">$A$</span> must be fixed points of the recurrence, the only possibility is <span class="math-container">$a = 3 = A$</span>.</p> <p>Now we look at the second and fourth components of (9): <span class="math-container">$$b_{n+1} = \frac{48 - B_n}{A_n + 2}$$</span> <span class="math-container">$$B_{n+1} = \frac{48 - b_n}{a_n + 2}$$</span></p> <p>This can be written in matrix form as <span class="math-container">$$\begin{pmatrix}b_{n+1} \\ B_{n+1}\end{pmatrix} = \begin{pmatrix}0 &amp; -\frac{1}{A_n + 2} \\ -\frac{1}{a_n + 2} &amp; 0\end{pmatrix}\begin{pmatrix}b_n \\ B_n\end{pmatrix} + \begin{pmatrix}\frac{48}{A_n + 2} \\ \frac{48}{a_n + 2}\end{pmatrix}$$</span> or more compactly as <span class="math-container">$$\mathbf{b}_{n+1} = \mathbf{A}_n\mathbf{b}_n + \mathbf{u}_n$$</span> where <span class="math-container">$\mathbf{b}_n = \begin{pmatrix}b_n \\ B_n\end{pmatrix}$</span>, <span class="math-container">$\mathbf{A}_n = \begin{pmatrix}0 &amp; -\frac{1}{A_n + 2} \\ -\frac{1}{a_n + 2} &amp; 0\end{pmatrix}$</span> and <span class="math-container">$\mathbf{u}_n = \begin{pmatrix}\frac{48}{A_n + 2} \\ \frac{48}{a_n + 2}\end{pmatrix}$</span>.</p> <p>The general solution is <span class="math-container">$$\mathbf{b}_n = \left(\mathbf{A}_{n-1}\cdots\mathbf{A}_0\right)\mathbf{b}_0 + \sum\limits_{i = 0}^{n-1}\left(\mathbf{A}_{n-1}\cdots\mathbf{A}_{i+1}\right)\mathbf{u}_i$$</span></p> <p>Convergence should follow from 
the fact that the <span class="math-container">$\mathbf{A}_n$</span> have Frobenius norm less than <span class="math-container">$1$</span> and the <span class="math-container">$\mathbf{u}_n$</span> are bounded.</p>
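The contraction can be watched numerically (a Python sketch, not part of the original answer): iterating the coefficient map from (9) starting at the initial bounds drives the coefficients to $(3, 8, 3, 8)$, and $f(x)=3x+8$ indeed satisfies the functional equation exactly.

```python
# Iterate the coefficient map F from (9), starting at the initial bounds
a, b, A, B = 0.0, 0.0, 15 / 2, 24.0
for _ in range(200):
    # simultaneous update: all right-hand sides use the previous values
    a, b, A, B = 15 / (A + 2), (48 - B) / (A + 2), 15 / (a + 2), (48 - b) / (a + 2)

assert abs(a - 3) < 1e-9 and abs(A - 3) < 1e-9
assert abs(b - 8) < 1e-9 and abs(B - 8) < 1e-9

# The limiting bounds pin down f(x) = 3x + 8, which satisfies f(f(x)) = 15x - 2f(x) + 48
f = lambda x: 3 * x + 8
assert all(f(f(x)) == 15 * x - 2 * f(x) + 48 for x in range(1, 1000))
```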
4,440,233
<blockquote> <p>Find all the functions <span class="math-container">$f:\mathbb{Z}^+ \to \mathbb{Z}^+$</span> such that <span class="math-container">$f(f(x)) = 15x-2f(x)+48$</span>.</p> </blockquote> <p>If <span class="math-container">$f$</span> is a polynomial of degree <span class="math-container">$n$</span>, we have that <span class="math-container">$\deg(f(f(x))) = n^2$</span> and <span class="math-container">$\deg(15x-2f(x)+48)=n$</span>. Therefore, the only possible polynomials that satisfy the condition have degree <span class="math-container">$0$</span> or <span class="math-container">$1$</span>.</p> <p>Let <span class="math-container">$f:\mathbb{Z}^+ \to \mathbb{Z}^+$</span> be a function that holds the condition of the problem given by <span class="math-container">$f(x)=ax+b$</span> for some constants <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Since <span class="math-container">$$f(f(x)) = f(ax+b) = a(ax+b)+b = a^2x + (a+1)b$$</span> and <span class="math-container">$$15x-2f(x)+48 = 15x-2(ax+b)+48 = (15-2a)x+(48-2b),$$</span> it follows that <span class="math-container">$$a^2+2a-15=0 \quad\text{and}\quad (a+1)b=48-2b.$$</span> From the first equation, we get that <span class="math-container">$a=-5$</span> or <span class="math-container">$a=3$</span>. If <span class="math-container">$a=-5$</span>, from the second equation we get that <span class="math-container">$b=-24$</span>, and it contradicts that <span class="math-container">$f[\mathbb{Z}^+]\subseteq \mathbb{Z}^+$</span>. If <span class="math-container">$a=3$</span>, then <span class="math-container">$b=8$</span>. Therefore, <span class="math-container">$f(x)=3x+8$</span> is the only polynomial that satisfies the condition of the problem. 
I guess that this is the only solution, but I do not know how to prove it.</p> <p><strong>Edit:</strong> I was trying to prove that the iterations of any function <span class="math-container">$f$</span> that satisfies the problem have the same behaviour. For instance, by iterating <span class="math-container">$f$</span> we have that <span class="math-container">$2f^3(x)+f^4(x)-15f^2(x)=48$</span>, so these functions are almost the same except for constant terms. Is this idea useful for completing the problem?</p>
Sil
290,240
<p>Put <span class="math-container">$a_0=a \in \mathbb{Z}^{+}$</span> arbitrary and <span class="math-container">$a_n=f(a_{n-1})$</span> for <span class="math-container">$n \geq 1$</span>. The functional equation gives a non-homogeneous linear recurrence <span class="math-container">$$ a_{n}=-2a_{n-1}+15a_{n-2}+48. $$</span> Homogenizing it by substitution <span class="math-container">$b_n=a_n+4$</span> we get <span class="math-container">$$ b_n=-2b_{n-1}+15b_{n-2}. $$</span> The characteristic equation is <span class="math-container">$x^2+2x-15=(x+5)(x-3)$</span>, hence by the standard result for linear recurrences we have some constants <span class="math-container">$A,B$</span> such that <span class="math-container">$$ a_n=A3^n+B(-5)^n-4. $$</span> If <span class="math-container">$B\neq 0$</span>, the term <span class="math-container">$(-5)^n$</span> will dominate over <span class="math-container">$3^n$</span> in <span class="math-container">$a_n$</span> for sufficiently large <span class="math-container">$n$</span>, regardless of the value of <span class="math-container">$A$</span>. Hence <span class="math-container">$a_n$</span> will take arbitrarily large negative or positive values, based on the parity of <span class="math-container">$n$</span>. However, we know <span class="math-container">$a_n=f(a_{n-1})$</span> must be positive for <span class="math-container">$n \geq 1$</span>, thus <span class="math-container">$B=0$</span>. Also from <span class="math-container">$n=0$</span> we find <span class="math-container">$A=a+4$</span>, and overall <span class="math-container">$$ a_n=3^n(a+4)-4. $$</span> Finally, from <span class="math-container">$n=1$</span> we obtain <span class="math-container">$f(a)=a_1=3(a+4)-4=3a+8$</span>. Since <span class="math-container">$a$</span> was arbitrarily chosen, we have <span class="math-container">$f(x)\equiv 3x+8$</span>.
Plugging back to the functional equation verifies it is indeed a solution and by the above construction also the only one.</p>
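A quick numeric sanity check of this answer (my own addition, not part of the original): both the functional equation and the closed form for the iterates can be verified directly.

```python
# Sanity check: f(x) = 3x + 8 satisfies f(f(x)) = 15x - 2f(x) + 48,
# and iterating f from any start a matches the closed form a_n = 3^n (a + 4) - 4.

def f(x):
    return 3 * x + 8

for x in range(1, 50):
    assert f(f(x)) == 15 * x - 2 * f(x) + 48   # the functional equation

for a in range(1, 10):
    x = a
    for n in range(8):
        assert x == 3**n * (a + 4) - 4          # the closed form for a_n
        x = f(x)
```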
3,111,489
<p>For which <span class="math-container">$p,q$</span> does the <span class="math-container">$\int_0^{\infty} \frac{x^p}{\mid{1-x}\mid^q}dx$</span> exist?</p> <p>Can you help me? I have been sitting for hours on this question.</p> <p>I got that it exists for <span class="math-container">$ q&lt;1$</span> and <span class="math-container">$p&gt;q+1$</span>, but I am not sure if that's right.</p>
Calvin Khor
80,734
<p>The only correct bound you have is <span class="math-container">$$q&lt;1.$$</span> <span class="math-container">$x^p|1-x|^{-q}$</span> has potential issues at <span class="math-container">$0,1,+\infty$</span>. </p> <ol> <li>At <span class="math-container">$0$</span>, the function is like <span class="math-container">$x^p$</span>. need <span class="math-container">$p&gt;-1$</span>.</li> <li>At <span class="math-container">$1$</span>, the function is like <span class="math-container">$|1-x|^{-q}$</span>. Need <span class="math-container">$q&lt;1$</span>.</li> <li>At <span class="math-container">$\infty$</span>, the function is like <span class="math-container">$x^{p-q}$</span>. Here we need <span class="math-container">$p-q&lt;-1$</span>. </li> </ol> <p>The result is the following triangle of admissible values: (<span class="math-container">$x$</span>-axis is <span class="math-container">$p$</span>, <span class="math-container">$y$</span>-axis is <span class="math-container">$q$</span>)</p> <p><a href="https://i.stack.imgur.com/gHgJU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gHgJU.png" alt="enter image description here"></a></p> <p>From this graph, we can verify that your answer is not right: your answer allows the values <span class="math-container">$(p,q) = (-5,-10)$</span>. Indeed, <span class="math-container">$-10=q&lt;1$</span> and <span class="math-container">$p=-5 &gt; -10+1=q+1$</span>. But this gives the integral <span class="math-container">$$ \int_0^\infty \frac{|1-x|^{10}}{x^5} dx$$</span> This integrand has a non-integrable singularity at 0, is integrable around 1, and has polynomial growth at infinity.</p>
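A small helper (my own addition, not from the answer) encoding the three conditions above; it also confirms that the region proposed in the question admits pairs outside the actual triangle.

```python
# The three conditions derived above: p > -1 (integrability at 0),
# q < 1 (integrability at 1), and p - q < -1 (integrability at infinity).

def admissible(p, q):
    return p > -1 and q < 1 and p - q < -1

assert admissible(-0.5, 0.9)       # a point inside the triangle
assert not admissible(-5, -10)     # the counterexample pair from the answer
assert not admissible(-1, 0.5)     # boundary p = -1 fails
assert not admissible(0, 0)        # fails the condition at infinity
```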
1,079,995
<p>I can't understand how: $$ \frac {2\times{^nC_2}}{5} $$</p> <p>Equals:</p> <p>$$ 2\times \frac {^nC_2}{5} $$</p> <p>If we forget the combination and replace it with a $10$, the result is clearly different. $1$ in the first example and $0.5$ in the second.</p>
k170
161,538
<p>Using your example, $$ \frac{2\times 10}{5}=\frac{20}{5}=4$$ And $$ 2\times\frac{10}{5}=2\times 2=4$$ In general for any $c\not =0$, we have $$ \frac{a\times b}{c}=a\times\frac{b}{c}= b\times\frac{a}{c} $$</p>
393,293
<p>I need an upper bound for $$\frac{ax}{x-2}$$ I know that $1\leq a&lt; 2$ and $x\geq 0$.</p> <p>This upper bound can include just $a$ and constant numbers not $x$.</p> <p>thanks a lot.</p>
in_mathematica_we_trust
27,030
<p>And a picture for the graphical learners.</p> <p><img src="https://i.stack.imgur.com/Sd0T2.png" alt="enter image description here"></p>
3,001,700
<p>I am trying to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that solve the equation <span class="math-container">$15x - 16y = 10$</span>, usually in this type of question I would use Euclidean Algorithm to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> but it doesn't seem to work for this one. Computing the GCD just gives me <span class="math-container">$16 = 15 + 1$</span> and then <span class="math-container">$1 = 16 - 15$</span> which doesn't really help me. I can do this question with trial and error but was wondering if there was a method to it.</p> <p>Thank you</p>
user
505,767
<p>Note that by Bezout's identity since <span class="math-container">$\gcd(15,16)=1$</span> we have</p> <p><span class="math-container">$$15\cdot (-1+k\cdot 16)+16 \cdot (1-k\cdot 15)=1 \quad k\in\mathbb{Z}$$</span></p> <p>are all the solution for <span class="math-container">$15a+16b=1$</span> and from here just multiply by <span class="math-container">$10$</span>.</p>
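A computational sketch of this approach (my own addition): the extended Euclidean algorithm produces the Bezout coefficients for $15$ and $16$, which are then scaled by $10$ and shifted to give the full solution family of $15x-16y=10$.

```python
# Extended Euclidean algorithm: returns (g, s, t) with a*s + b*t = g = gcd(a, b).
def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = egcd(15, 16)
assert g == 1 and 15 * s + 16 * t == 1

# Convert 15s + 16t = 1 to the form 15x - 16y = 10.
x0, y0 = 10 * s, -10 * t
assert 15 * x0 - 16 * y0 == 10

# The general solution: shift x by 16k and y by 15k.
for k in range(-3, 4):
    x, y = x0 + 16 * k, y0 + 15 * k
    assert 15 * x - 16 * y == 10
```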
202,742
<p>Consider a <a href="http://en.wikipedia.org/wiki/Circular_layout" rel="noreferrer">circular drawing</a> of a simple (in particular, loopless) graph $G$ in which edges are drawn as straight lines inside the circle. The <em>crossing graph</em> for such a drawing is the simple graph whose nodes correspond to the edges of $G$ and in which two nodes are adjacent if and only if the corresponding edges cross.</p> <p><strong>Example.</strong> The graph $G$ has four vertices (1–4) and three edges (a–c) where $a = 12$, $b = 13$, $c = 24$. In the circular drawing, $b$ and $c$ cross, so the crossing graph has three nodes and a single edge $bc$.</p> <p><img src="https://i.imgur.com/HvO93fm.jpg?1" alt="Example of a graph drawing and its crossing graph"></p> <p>Here are my questions:</p> <ol> <li><p>Is every simple graph the crossing graph of some circular graph drawing?</p></li> <li><p>If not, what does a counterexample look like?</p></li> <li><p>If yes, how can such a graph drawing be constructed?</p></li> </ol>
Zsbán Ambrus
5,340
<p>No, and you can see this from just a counting argument.</p> <p>For determining which of the $ n $ chords of the circle intersect, it is enough to know the order of the $ 2n $ endpoints on the circle. (You can assume that no two endpoints coincide.) There are at most $ (2n)!/2^n $ such orders (the two endpoints of a chord aren't distinguishable). On the other hand, there are exactly $ 2^{n(n-1)/2} $ simple graphs on $ n $ nodes. The latter grows faster. </p> <p>You can even get an explicit bound from this: there are more simple graphs on 13 nodes than arrangements of 13 chords in the circle, so at least one of those graphs can't be generated this way.</p>
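The two counts can be compared numerically (my own addition): taking the same $n$ for both the number of chords and the number of graph nodes, the graph count first overtakes the order count at $n = 13$.

```python
# Compare 2^(n(n-1)/2) simple graphs on n nodes against the (2n)!/2^n
# upper bound on endpoint orders of n chords.
from math import factorial

def orders(n):
    return factorial(2 * n) // 2**n

def graphs(n):
    return 2 ** (n * (n - 1) // 2)

assert graphs(12) < orders(12)   # at n = 12 there are still more orders
assert graphs(13) > orders(13)   # at n = 13 the graphs already outnumber them
```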
2,150,552
<p>I'm following a YouTube linear algebra course. (<a href="https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14" rel="nofollow noreferrer">https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14</a>)<br> In part 9 there's the following question: <a href="https://i.stack.imgur.com/S3UdB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3UdB.jpg" alt="enter image description here"></a></p> <p>I don't know what the formula is. What I figured out is that there is a link with the fibonacci sequence.</p> <p>I also tried to convert A to eigenbasis. I get this: </p> <p>\begin{bmatrix}\frac{{-\sqrt5 + 1}}{2}&amp;0\\0&amp;\frac{{\sqrt5 + 1}}{2}\\\end{bmatrix}</p> <p>How do I go back to normal basis and what is the formula?</p> <p>This is what I have now: <a href="https://i.stack.imgur.com/wVxPY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wVxPY.jpg" alt="enter image description here"></a></p>
la flaca
279,164
<p>I think the answer is no. Take $(a,b) \in R$ and $(b,a) \in S$, then $(a,b),(b,a) \in R \cup S$ but it doesn't imply $(a,b)=(b,a)$.<br> The key here is that $\{(a,b),(b,a)\} \subset R \cup S$ but it is not the case that $\{(a,b),(b,a)\} \subset R$ nor $\{(a,b),(b,a)\} \subset S$ so you can't use the antisymmetry of $R$ or $S$ to conclude $(a,b)=(b,a)$</p> <p>Added: If $R\cup S$ were antisymmetric, then the following should hold for any $a,b\in A$: $$(a,b),(b,a) \in R \cup S \implies (a,b)=(b,a)$$ For your example in the comment, let $A=\{1,2,3\}$, $R=\{(1,2),(2,2),(3,1)\}$, and $S=\{(1,2),(1,3)\}$.<br> Note both $R$ and $S$ are antisymmetric but $(1,3)$ and $(3,1)$ are in $R \cup S$ but $1 \not=3$, hence $R \cup S$ is not antisymmetric</p>
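The counterexample at the end of the answer can be verified mechanically with a small helper (my own code, not from the answer):

```python
# A relation is antisymmetric iff (a,b) and (b,a) both present forces a == b.
def antisymmetric(rel):
    return all(not ((b, a) in rel and a != b) for (a, b) in rel)

R = {(1, 2), (2, 2), (3, 1)}
S = {(1, 2), (1, 3)}

assert antisymmetric(R)
assert antisymmetric(S)
assert not antisymmetric(R | S)   # (1,3) and (3,1) are both in the union
```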
2,150,552
<p>I'm following a YouTube linear algebra course. (<a href="https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14" rel="nofollow noreferrer">https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14</a>)<br> In part 9 there's the following question: <a href="https://i.stack.imgur.com/S3UdB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3UdB.jpg" alt="enter image description here"></a></p> <p>I don't know what the formula is. What I figured out is that there is a link with the fibonacci sequence.</p> <p>I also tried to convert A to eigenbasis. I get this: </p> <p>\begin{bmatrix}\frac{{-\sqrt5 + 1}}{2}&amp;0\\0&amp;\frac{{\sqrt5 + 1}}{2}\\\end{bmatrix}</p> <p>How do I go back to normal basis and what is the formula?</p> <p>This is what I have now: <a href="https://i.stack.imgur.com/wVxPY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wVxPY.jpg" alt="enter image description here"></a></p>
lordoftheshadows
303,196
<p>This is not true. Consider the set $\{A,B\}$ and the relations $R=\{(A,B)\}$ and $S=\{(B,A)\}$. Both relations are antisymmetric, but their union isn't antisymmetric because both $(A,B)$ and $(B,A)$ are members of the union but $A \neq B$.</p>
2,125,018
<blockquote> <p>You toss a fair coin 3 times; events:</p> <p>A = "first flip H"</p> <p>B = "second flip T"</p> <p>C = "all flips H"</p> <p>D = "at least 2 flips T"</p> <p><strong>Q:</strong> Which events are independent?</p> </blockquote> <p>From the informal def. it is where one doesn't affect the other.</p> <p>So in this case, $A$ and $B$ seem independent? Any others?</p>
Jean Marie
305,862
<p>Here is a complete solution using $\tan$ and $\tan^{-1}$.</p> <p>Let us fix notations, with points </p> <p>$a=(x_a,y_a)=(L_1\cos(\theta_1),L_1\sin(\theta_1))$ and</p> <p>$b=(L_3 \cos(\theta), L_3 \sin(\theta))$. </p> <p>We have:</p> <p>$$\tag{1}\vec{ba}\binom{x_a-L_3 \cos(\theta)}{y_a-L_3 \sin(\theta)}.$$ </p> <p>It is not difficult to see that the (negative) angle $-\alpha$ that $\vec{ba}$ makes with respect to the horizontal reference is such that $\alpha+\gamma=\pi$. Thus, using (1):</p> <p>$$\tan(\alpha)=-\left(\frac{y_a-L_3 \sin(\theta)}{x_a-L_3 \cos(\theta)}\right)$$</p> <p>Therefore, </p> <p>$$\tag{2}\gamma=\pi-\alpha=\pi+\tan^{-1}\left(\frac{y_a-L_3 \sin(\theta)}{x_a-L_3 \cos(\theta)}\right).$$</p> <p>It suffices now to differentiate (2) (I used Mathematica for that) to obtain:</p> <blockquote> <p>$$\tag{3}\dfrac{\Delta \gamma}{\Delta \theta}\approx\dfrac{E-L_3^2}{L_1^2+L_3^2-2E} \ \ \text{with} \ \ E:=L_3(x_a\cos(\theta)+y_a\sin(\theta))$$</p> </blockquote> <p>Edit: Formula (3) can be written in a different way:</p> <blockquote> <p>$$ \dfrac{\Delta \gamma}{\Delta \theta}\approx \dfrac{L_1L_3\cos(\theta_1-\theta)-L_3^2}{L_2^2}$$</p> </blockquote> <p>Explanation: one can apply the <strong>cosine rule</strong> to the denominator of (3): $L_1^2+L_3^2-2L_1 L_3\cos(\theta_1-\theta)=L_2^2$ applied to triangle Oab.</p>
154,893
<p>I am having trouble figuring this out.</p> <p>$$\sqrt {1+\left(\frac{x}{2}- \frac{1}{2x}\right)^2}$$</p> <p>I know that $$\left(\frac{x}{2} - \frac{1}{2x}\right)^2=\frac{x^2}{4} - \frac{1}{2} + \frac{1}{4x^2}$$ but I have no idea how to factor this since I have two x terms with vastly different degrees, 2 and -2.</p>
Rick Decker
36,993
<p>I presume you aren't asked to solve this (since it isn't an equation), but rather are asked to express it in a tidier form. Carrying on, we have \begin{align*} 1+\left(\frac{x^2}{4}-\frac{1}{2}+\frac{1}{4x^2}\right) &amp;=\frac{x^2}{4}+\frac{1}{2}+\frac{1}{4x^2}\\\\ &amp;= \left(\frac{x}{2}+\frac{1}{2x}\right)^2\\\\ &amp;= \left(\frac{x^2+1}{2x}\right)^2 \end{align*}</p> <p>Carry on from there: put the whole expression under the radical, use the fact that $\sqrt{a^2}=\mid a\mid$ to get $$ \left|\frac{x^2+1}{2x}\right| $$ </p> <p>By the way, this idiom, $4ab+(a-b)^2=(a+b)^2$, is very common and should eventually be part of your mathematical toolkit.</p>
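A numeric spot check of the simplification above (my own addition): for $x\neq 0$ the original radical equals $\left|\frac{x^2+1}{2x}\right|$.

```python
# Compare sqrt(1 + (x/2 - 1/(2x))^2) against |x^2 + 1| / |2x| at sample points.
from math import sqrt, isclose

for x in [-3.0, -0.5, 0.25, 1.0, 7.5]:
    lhs = sqrt(1 + (x / 2 - 1 / (2 * x)) ** 2)
    rhs = abs((x * x + 1) / (2 * x))
    assert isclose(lhs, rhs)
```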
2,648,370
<p>$$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$ This is the farthest I've got: $$=\int\frac{x^2}{\sqrt{1-(x-1)^2}}dx$$</p>
lab bhattacharjee
33,337
<p>As $0&lt;x&lt;2,$</p> <p>$$\dfrac{x^2}{\sqrt{2x-x^2}}=\dfrac{x^{3/2}}{\sqrt{2-x}}$$</p> <p>set $x=2\sin^2t,x^{3/2}=\text{?}$</p> <p>$dx=\text{?}$ and $\sqrt{2-x}=+\sqrt2\cos t$</p>
2,648,370
<p>$$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$ This is the farthest I've got: $$=\int\frac{x^2}{\sqrt{1-(x-1)^2}}dx$$</p>
lab bhattacharjee
33,337
<p>Hint:</p> <p>As $\dfrac{d(2x-x^2)}{dx}=2-2x$</p> <p>$$\dfrac{x^2}{\sqrt{2x-x^2}}=\dfrac{x^2-2x+2x-2+2}{\sqrt{2x-x^2}}$$</p> <p>$$=-\sqrt{1-(x-1)^2}-\dfrac{2-2x}{\sqrt{2x-x^2}}+\dfrac2{\sqrt{1-(x-1)^2}}$$</p> <p>Now use $\#1,\#8$ of <a href="http://www.sosmath.com/tables/integral/integ13/integ13.html" rel="nofollow noreferrer">this</a></p>
2,648,370
<p>$$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$ This is the farthest I've got: $$=\int\frac{x^2}{\sqrt{1-(x-1)^2}}dx$$</p>
damier.godfred
530,152
<p>Ok, so building off of what <a href="https://math.stackexchange.com/users/33337/lab-bhattacharjee">lab bhattacharjee</a> said: <span class="math-container">$$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$</span> <span class="math-container">$$=-\int\sqrt{1-(x-1)^2}dx-\int\dfrac{2-2x}{\sqrt{2x-x^2}}dx+2\int\dfrac1{\sqrt{1-(x-1)^2}}dx$$</span> Ok, so I use #8 on the 1st integral, u-substitution on the 2nd, and #1 on the 3rd. <span class="math-container">$$=-(\frac{(x-1)\sqrt{1-(x-1)^2}}{2}+\frac{1}{2}\arcsin(x-1))-2\sqrt{2x-x^2}+2\arcsin(x-1)+C$$</span> Simplify. <span class="math-container">$$=-\frac{(x-1)\sqrt{1-(x-1)^2}}{2}-2\sqrt{2x-x^2}+\frac{3}{2}\arcsin(x-1)+C$$</span></p>
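A finite-difference check (my own addition, not part of the answer) that the antiderivative found above differentiates back to the integrand on $0 < x < 2$ (where $\sqrt{1-(x-1)^2}=\sqrt{2x-x^2}$):

```python
from math import sqrt, asin, isclose

def F(x):
    # The antiderivative from the answer, using sqrt(1-(x-1)^2) = sqrt(2x-x^2).
    s = sqrt(2 * x - x * x)
    return -(x - 1) * s / 2 - 2 * s + 1.5 * asin(x - 1)

def integrand(x):
    return x * x / sqrt(2 * x - x * x)

h = 1e-6
for x in [0.3, 0.8, 1.0, 1.4, 1.9]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert isclose(deriv, integrand(x), rel_tol=1e-5)
```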
208,744
<p>I was asked to show that $\frac{d}{dx}\arccos(\cos{x}), x \in R$ is equal to $\frac{\sin{x}}{|\sin{x}|}$. </p> <p>What I was able to show is the following:</p> <p>$\frac{d}{dx}\arccos(\cos(x)) = \frac{\sin(x)}{\sqrt{1 - \cos^2{x}}}$</p> <p>What justifies equating $\sqrt{1 - \cos^2{x}}$ to $|\sin{x}|$?</p> <p>I am aware of the identity $ \sin{x} = \pm\sqrt{1 - \cos^2{x}}$, but I still do not see how that leads to that conclusion.</p>
Christian Blatter
1,303
<p>The function $f:\ x\mapsto\arccos(\cos x)$ is even and $2\pi$-periodic, since the "inner" function $\cos$ has these properties. For $0\leq x\leq \pi$ by definition of $\arccos$ we have $f(x)=x$. Therefore $f$ is the $2\pi$-periodic continuation of the function $$f_0(x)\ :=\ |x|\qquad(-\pi\leq x\leq\pi)$$ to the full real line ${\mathbb R}$, and $f'$ (where defined) is the $2\pi$-periodic continuation of $$f_0'(x)={\rm sgn}(x)\qquad(-\pi&lt;x&lt;\pi)$$ to all of ${\mathbb R}$. One way to express this extension is indeed given by $$f'(x)\ =\ {\sin x\over|\sin x|}\qquad (x\notin \pi{\mathbb Z})\ ,$$ but we can do without computing sines: $$f'(x)=(-1)^{\lfloor x/\pi\rfloor}\qquad (x\notin \pi{\mathbb Z})\ .$$</p>
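A numeric illustration (my own addition) of $f'(x)=\frac{\sin x}{|\sin x|}$ for $f(x)=\arccos(\cos x)$, away from multiples of $\pi$:

```python
# Central finite differences of acos(cos(x)) should equal sign(sin(x)).
from math import acos, cos, sin, isclose

def f(x):
    return acos(cos(x))

h = 1e-6
for x in [0.5, 2.0, 4.0, -1.0, 7.0]:
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    assert isclose(deriv, sin(x) / abs(sin(x)), rel_tol=1e-4)
```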
3,812,432
<p>For <span class="math-container">$a,b,c&gt;0.$</span> Prove<span class="math-container">$:$</span> <span class="math-container">$$4\Big(\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} \Big)+\dfrac{81}{(a+b+c)^2}\geqslant{\dfrac {7(a+b+c)}{abc}}$$</span></p> <p>My proof is using SOS<span class="math-container">$:$</span></p> <p><span class="math-container">$${c}^{2}{a}^{2} {b}^{2}\Big( \sum a\Big)^2 \sum a^2 \Big\{ 4\Big(\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} \Big)+\dfrac{81}{(a+b+c)^2}-{\dfrac {7(a+b+c)}{abc}} \Big\}$$</span> <span class="math-container">$$=\dfrac{1}{2} \sum {a}^{2}{b}^{2} \left( {a}^{2}+{b}^{2}-2\,{c}^{2} +5bc-10ab+5\, ac \right) ^{2} +\dfrac{1}{2} \prod (a-b)^2 \left( 7\sum a^2 +50\sum bc \right) \geqslant 0.$$</span></p> <p>From this we see that the inequality is true for all <span class="math-container">$a,b,c \in \mathbb{R};ab+bc+ca\geqslant 0.$</span></p> <p>But we also have this inequality for <span class="math-container">$a,b,c \in \mathbb{R}.$</span> Which verify by Maple.</p> <p>I try and I found a proof but I'm not sure<span class="math-container">$:$</span></p> <p>If replace <span class="math-container">$(a,b,c)$</span> by <span class="math-container">$(-a,-b,-c)$</span> we get the same inequality.</p> <p>So we may assume <span class="math-container">$a+b+c\geqslant 0$</span> (because if <span class="math-container">$a+b+c&lt;0$</span> we can let <span class="math-container">$a=-x,b=-y,c=-z$</span> where <span class="math-container">$x+y+z \geqslant 0$</span> and the inequality is same!)</p> <p>Let <span class="math-container">$a+b+c=1,ab+bc+ca=\dfrac{1-t^2}{3} \quad (t\geqslant 0), r=abc.$</span> Need to prove<span class="math-container">$:$</span></p> <p><span class="math-container">$$f(r) =81\,{r}^{2}-15\,r+\dfrac{4}{9} \left( t-1 \right) ^{2} \left( t+1 \right) ^{2 }\geqslant 0.$$</span></p> <p>It's easy to see, when <span class="math-container">$r$</span> increase then <span class="math-container">$f(r)$</span> decrease. 
Since <span class="math-container">$r\leqslant \dfrac{1}{27} \left( 2\,t+1 \right) \left( t-1\right) ^{2} \quad$</span>(see <a href="https://artofproblemsolving.com/community/c1101515h2255918_helpful_lemma_for_the_homogeneous_inequality" rel="nofollow noreferrer">here</a>), we get<span class="math-container">$:$</span></p> <p><span class="math-container">$$f(r)\geqslant f\Big(\dfrac{1}{27} \left( 2\,t+1 \right) \left( t-1\right) ^{2}\Big)=\dfrac{1}{9} {t}^{2} \left( 2\,t-1 \right) ^{2} \left( t-1 \right) ^{2} \geqslant 0.$$</span></p> <p>Done.</p> <p>Could you check it for me? Who has a proof for <span class="math-container">$a,b,c \in \mathbb{R}$</span>?</p>
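A brute-force numeric check of the inequality for positive reals (my own addition; a grid check is numerical evidence, not a proof). Equality cases such as $a=b=c$ and $(a,b,c)\propto(1,1,4)$ show up as zeros.

```python
# LHS - RHS of the inequality, sampled on a grid of positive triples.
def lhs_minus_rhs(a, b, c):
    s = a + b + c
    return 4 * (1 / a**2 + 1 / b**2 + 1 / c**2) + 81 / s**2 - 7 * s / (a * b * c)

vals = [0.1, 0.5, 1.0, 2.0, 5.0]
for a in vals:
    for b in vals:
        for c in vals:
            # small negative tolerance for floating-point roundoff at equality
            assert lhs_minus_rhs(a, b, c) >= -1e-9
```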
RobPratt
683,666
<p>You want to avoid 12 properties: <span class="math-container">\begin{matrix} 000xxx &amp; x000xx &amp; xx000x &amp; xxx000 \\ 111xxx &amp; x111xx &amp; xx111x &amp; xxx111 \\ 222xxx &amp; x222xx &amp; xx222x &amp; xxx222 \\ \end{matrix}</span> The inclusion-exclusion formula is <span class="math-container">$$\sum_{k=0}^{12} (-1)^k N_k,$$</span> where <span class="math-container">$N_k$</span> is an overcount of the number of <span class="math-container">$6$</span>-strings that satisfy <span class="math-container">$k$</span> properties. Clearly, <span class="math-container">$N_k=0$</span> for <span class="math-container">$k&gt;4$</span>. We have <span class="math-container">\begin{align} N_0 &amp;= \binom{12}{0} 3^6 = 729 \\ N_1 &amp;= \binom{12}{1} 3^{6-3} = 324 \\ N_2 &amp;= 2 \binom{3}{2} 3^{6-6} + 3 \left(3 \cdot 3^{6-4} + 2 \cdot 3^{6-5} + 1 \cdot 3^{6-6}\right) = 108 \\ N_3 &amp;= 3 \left(2 \cdot 3^{6-5} + 2 \cdot 3^{6-6}\right) = 24 \\ N_4 &amp;= 3 \left(1 \cdot 3^{6-6}\right) = 3 \\ \end{align}</span> Putting it all together, we obtain <span class="math-container">$$729 - 324 + 108 - 24 + 3 = 492$$</span></p>
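A brute-force confirmation of the count (my own addition): enumerate all $3^6$ ternary strings and keep those with no three equal consecutive symbols.

```python
# Count length-6 strings over {0,1,2} avoiding the 12 "triple" properties above.
from itertools import product

count = sum(
    1
    for s in product(range(3), repeat=6)
    if all(not (s[i] == s[i + 1] == s[i + 2]) for i in range(4))
)
assert count == 492
```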
179,886
<p>I have series of values which, by visual inspection, appear to be sums of certain constants, not divisible by each other, with rational weights. I want to convert these sums to vectors of weights for a specific basis vector.</p> <p>I have an initial solution based on <code>FindInstance</code>, which works reasonably but I think is not necessarily elegant:</p> <pre><code>ClearAll[splitSumCoefficients]; splitSumCoefficients[sum_, basis_] := With[{cvals = c /@ basis}, cvals/d /. # &amp; /@ FindInstance[ d sum == basis.cvals &amp;&amp; d != 0, {Sequence @@ cvals, d}, Integers]]; </code></pre> <p>It works fine for a case where vector values could actually be extracted using expression rewriting:</p> <pre><code># -&gt; splitSumCoefficients[#, {1, 1/E, E}] &amp; /@ Table[TrigExpand@ SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, n}], {n, 0, 5}] </code></pre> <blockquote> <p>$$\begin{array}{c} \frac{e}{2}-\frac{1}{2 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{1}{2} &amp; \frac{1}{2} \\ \end{array} \right) \\ 1+\frac{1}{2 e}-\frac{e}{2}\to \left( \begin{array}{ccc} 1 &amp; \frac{1}{2} &amp; -\frac{1}{2} \\ \end{array} \right) \\ -\frac{1}{2 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{1}{2} &amp; 0 \\ \end{array} \right) \\ \frac{1}{2 e}-\frac{1}{6}\to \left( \begin{array}{ccc} -\frac{1}{6} &amp; \frac{1}{2} &amp; 0 \\ \end{array} \right) \\ \frac{e}{16}-\frac{7}{16 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{7}{16} &amp; \frac{1}{16} \\ \end{array} \right) \\ \frac{1}{120}+\frac{7}{16 e}-\frac{e}{16}\to \left( \begin{array}{ccc} \frac{1}{120} &amp; \frac{7}{16} &amp; -\frac{1}{16} \\ \end{array} \right) \\ \end{array}$$</p> </blockquote> <p>It also works in a case where expression rewriting would already be a little more tricky:</p> <pre><code># -&gt; splitSumCoefficients[#, {1, Sinh[1], Cosh[1]}] &amp; /@ Table[SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, n}], {n, 0, 5}] </code></pre> <blockquote> 
<p>$$\begin{array}{c} \sinh (1)\to \left( \begin{array}{ccc} 0 &amp; 1 &amp; 0 \\ \end{array} \right) \\ 1-\sinh (1)\to \left( \begin{array}{ccc} 1 &amp; -1 &amp; 0 \\ \end{array} \right) \\ \frac{1}{2} (\sinh (1)-\cosh (1))\to \left( \begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; -\frac{1}{2} \\ \end{array} \right) \\ \frac{1}{6} (-1-3 \sinh (1)+3 \cosh (1))\to \left( \begin{array}{ccc} -\frac{1}{6} &amp; -\frac{1}{2} &amp; \frac{1}{2} \\ \end{array} \right) \\ \frac{\sinh (1)}{2}-\frac{3 \cosh (1)}{8}\to \left( \begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; -\frac{3}{8} \\ \end{array} \right) \\ \frac{1}{120} (1-60 \sinh (1)+45 \cosh (1))\to \left( \begin{array}{ccc} \frac{1}{120} &amp; -\frac{1}{2} &amp; \frac{3}{8} \\ \end{array} \right) \\ \end{array}$$</p> </blockquote> <p>(This code can also convert above expressions between $1/sinh/cosh$ and $1/\frac{1}{e}/e$ basis automatically.)</p> <p>Would there be a more practical solution than <code>FindInstance</code> for this problem?</p> <p><strong>EDIT</strong>:</p> <p>A failing example:</p> <pre><code>splitSumCoefficients[#, {1, Sinh[1], Cosh[1]}] &amp;@ SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, 143}] </code></pre> <blockquote> <p>{}</p> </blockquote> <p>(That is, no solutions from <code>FindInstance</code>.)</p>
Daniel Lichtblau
51
<p>Could use <a href="http://reference.wolfram.com/language/ref/FindIntegerNullVector.html" rel="nofollow noreferrer"><code>FindIntegerNullVector</code></a>.</p> <pre><code>s143 = SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, 143}]; ff = FindIntegerNullVector[{1, Sinh[1], Cosh[1], s143}, WorkingPrecision -&gt; 2000] (* Out[141]= {1, \ 3961260330770617476883241243800272591914813378841561500324133049366105\ 0123701401772607848046869189487230434065769624598074056818708818344785\ 0696665716427876426887640556807563267222769242157291176636056668271262\ 5711003773130484745502258300781250000, \ -301687271813430245298615167128172580774085014946504395637814915763577\ 7158504130808243502776912161388156812463524288595215733594917752859937\ 3098739537990205122300228063774095483218191584013468406883227014757659\ 90607108694209686634044834136962890625, \ 3854370717180072770521565736493325081944432179154696438432688127620284\ 5420193798918144180166658987031965483631719296696351202501036957071818\ 6035253548159443361661549763406518875415055453101304885936693315514033\ 76640000000000000000000000000000000000} *) </code></pre> <p>It is straightforward to check that this is the correct result. The desired form is simply <code>-Most[ff]/Last[ff]</code>.</p>
631,214
<p>Two kids start to run from the same point, in the same direction, on a circular track with perimeter 400 m. The velocity of each kid is constant. The first kid runs each lap in 20 sec less than his friend. They meet for the first time 400 sec after the start. Q: Find their velocities.</p> <p>I came up with one equation:</p> <p>400/v1 + 20 = 400/v2 </p> <p>But what is the second equation? ("They meet for the first time 400 sec after the start.")</p>
Emanuele Paolini
59,304
<p>You can prove that $e^t &gt; 2^t &gt; t^2$ for $t$ sufficiently large. Hence you can prove that $$ \lim_{t\to +\infty} \frac{t}{e^t} = 0 $$ without using derivatives. Let $x=e^{-t}$ and you find $$ 0 = \lim_{t\to +\infty} \frac{t}{e^t} = \lim_{x\to 0} -x \log x. $$</p> <p>The point, however, is: how has the function $\log x$ been defined?</p>
1,354,953
<p>Solve for the function f(x):</p> <p>$$f(x)=\frac{x}{x+f\left(\frac{x}{x+f(x)}\right)}$$ I'm not able to solve this. </p> <p>[For instance, I tried solving for $f(\frac{x}{x+f(x)})$, but this doesn't lead me anywhere as the value obtained, when substituted into the original equation, just yields $f(x)=f(x)$]</p> <p>Question: <strong>What is f(x) = ?</strong></p> <p><em>I suppose I need to mention that f(x) should be continuous, and that its domain and values are subsets of $R$.</em></p> <p>(Related : <a href="https://math.stackexchange.com/questions/1280806/functional-equation-fx-frac11f-frac11fx">Functional equation $f(x)=\frac{1}{1+f(\frac{1}{1+f(x)})}$</a> , whose confirmed solutions are $-\phi$ and $\frac{1}{\phi}$)</p>
mahdokht
253,415
<p>Take $g=\frac { x }{ x+f } $ (#). Differentiating both sides of this gives $1=g+g'x+f'g+g'f$ $(*)$. Substituting $g=\frac { x }{ x+f } $ into the equation: $$\frac { x }{ g } -x=\frac { x }{ x+f(g) } $$ Differentiating both sides of this and using $(*)$: $2x-gx=g-g^2$. Find $g$ from this quadratic equation; then find $f$ from (#).</p>
2,303,795
<p>So I know that by Euler's homogeneous function theorem $m$ is a positive number, but why is it an integer? And how can one prove that $f$ is a polynomial of degree $m$?</p>
Robert Z
299,698
<p>Alternative way. In order to avoid integration, note that $$f_n(x)-f(x)= \int_{1/n}^{x+1/n} \sin^2(t) \, dt-\int_0^x \sin^2(t) \, dt= \int_{x}^{x+1/n} \sin^2(t) \, dt-\int_0^{1/n} \sin^2(t) \, dt$$ Hence for $x\geq 0$, $$|f_n(x)-f(x)|\leq \int_{x}^{x+1/n} 1 \, dt+\int_0^{1/n} 1 \, dt= \frac{2}{n} $$ which implies that $(f_n)_n$ converges uniformly to $f$ in $[0,+\infty)$. The same argument works if we replace $\sin(x)$ with any bounded continuous function in $[0,+\infty)$.</p>
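A numeric illustration (my own addition) of the bound $|f_n(x)-f(x)|\leq \frac{2}{n}$, using the closed form $\int \sin^2(t)\,dt = \frac{t}{2}-\frac{\sin(2t)}{4}$:

```python
from math import sin

def P(t):
    # An antiderivative of sin^2(t).
    return t / 2 - sin(2 * t) / 4

def f(x):
    # Integral of sin^2 from 0 to x (note P(0) = 0).
    return P(x) - P(0)

def f_n(n, x):
    # Integral of sin^2 from 1/n to x + 1/n.
    return P(x + 1 / n) - P(1 / n)

xs = [k * 0.05 for k in range(2000)]   # grid on [0, 100)
for n in [1, 2, 5, 10, 100]:
    sup = max(abs(f_n(n, x) - f(x)) for x in xs)
    assert sup <= 2 / n
```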
3,628,374
<p>We have that <span class="math-container">$W \in \mathbb{R}^{n \times m}$</span> and we want to find <span class="math-container">$$\text{prox}(W) = \arg\min_Z\Big[\frac{1}{2} \langle W-Z, W-Z \rangle+\lambda ||Z||_* \Big]$$</span></p> <p>Here, <span class="math-container">$||Z||_*$</span> represents the trace norm of <span class="math-container">$Z$</span>.</p> <p>I tried getting the derivative of the whole thing, and to do that I used that the derivative of trace norm is <span class="math-container">$UV^T$</span> (according to <a href="https://math.stackexchange.com/questions/701062/proximal-operator-and-the-derivative-of-the-matrix-nuclear-norm">Proximal Operator and the Derivative of the Matrix Nuclear Norm</a>). However, after this, I don't really know how to proceed. </p>
Community
-1
<p>This gives a solution by group action theory, as you wish, <em>but</em> I couldn't avoid using the result (*) hereunder, which is seemingly as much non-elementary as the tools in the other answers/comments (for its proof, see in this site e.g. <a href="https://math.stackexchange.com/q/3525343/750041">here</a>).</p> <hr> <p>Let's define <span class="math-container">$X:=\{M\in \operatorname{GL}_2(\mathbb{F}_p)\mid M^p=I, M\neq I\}$</span> (the elements of order <span class="math-container">$p$</span>).</p> <p><strong>Lemma</strong>. <span class="math-container">$G\in \operatorname{GL}_2(\mathbb{F}_p)\wedge M\in X \Longrightarrow GMG^{-1}\in X$</span>.</p> <p><em>Proof</em>. <span class="math-container">$(GMG^{-1})^p=GM^pG^{-1}=GIG^{-1}=GG^{-1}=I$</span>, and <span class="math-container">$GMG^{-1}=I$</span> would force <span class="math-container">$M=I$</span>; hence <span class="math-container">$GMG^{-1}\in X$</span>.</p> <p><span class="math-container">$\Box$</span></p> <p>Therefore, <span class="math-container">$\operatorname{GL}_2(\mathbb{F}_p)$</span> acts by conjugation on <span class="math-container">$X$</span>. Note that <span class="math-container">$\tilde M:=\begin{bmatrix}1&amp;1\\0&amp;1\end{bmatrix}\in X$</span>, because <span class="math-container">$\tilde M^p=I$</span> and <span class="math-container">$\tilde M\neq I$</span>, and thence we can apply the Orbit-Stabilizer Theorem to <span class="math-container">$\tilde M$</span>:</p> <p><span class="math-container">$$|O(\tilde M)||\operatorname{Stab}(\tilde M)|=|\operatorname{GL}_2(\mathbb{F}_p)|=(p^2-1)(p^2-p)=p(p+1)(p-1)^2 \tag 1$$</span></p> <p>Now, <span class="math-container">$\operatorname{Stab}(\tilde M)=\{G\in \operatorname{GL}_2(\mathbb{F}_p)\mid G\tilde M=\tilde MG\} = \Biggl\{G\in \operatorname{GL}_2(\mathbb{F}_p)\mid G=\begin{bmatrix}a&amp;b\\0&amp;a\end{bmatrix}, a,b\in \mathbb{F}_p\Biggr\}$</span>, whence <span class="math-container">$|\operatorname{Stab}(\tilde M)|=p(p-1)$</span> and finally, by <span class="math-container">$(1)$</span>, <span class="math-container">$|O(\tilde M)|=(p+1)(p-1)=p^2-1$</span>. 
Now, <span class="math-container">$|X|=p^2-1$</span> <a href="https://math.stackexchange.com/q/3525343/750041">(*)</a> and <span class="math-container">$O(\tilde M) \subseteq X$</span>, whence <span class="math-container">$O(\tilde M)=X$</span>, and the action is transitive.</p>
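<p>The counting result (*), and the transitivity it yields, can be sanity-checked by brute force for small primes (a throwaway script of mine; not part of the argument):</p>

```python
from itertools import product

def order_p_count(p):
    """Count M in GL_2(F_p) with M^p = I and M != I, by exhaustive search."""
    I = (1, 0, 0, 1)

    def mul(A, B):
        a, b, c, d = A
        e, f, g, h = B
        return ((a * e + b * g) % p, (a * f + b * h) % p,
                (c * e + d * g) % p, (c * f + d * h) % p)

    count = 0
    for M in product(range(p), repeat=4):
        a, b, c, d = M
        if (a * d - b * c) % p == 0:  # skip non-invertible matrices
            continue
        P = I
        for _ in range(p):
            P = mul(P, M)  # P = M^p after the loop
        if P == I and M != I:
            count += 1
    return count

counts = {p: order_p_count(p) for p in (2, 3, 5)}  # expect p^2 - 1 each
```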
2,397,874
<p>I am new to absolute values and inequalities, and I came across this problem:</p> <p>$ 2^{\vert x + 1 \vert} - 2^x = \vert 2^x - 1\vert + 1 $ for $ x $</p> <p>How do I find $ x $?</p>
Michael Rozenberg
190,319
<p>Let $(5+2\sqrt6)^{x^2-3}=t$. Hence, $t+\frac{1}{t}=10$ and we have $t=5\pm2\sqrt{6}$.</p> <p>Thus, $x^2-3=1$ or $x^2-3=-1$, which gives the answer: $$\{2,-2,\sqrt2,-\sqrt2\}$$</p>
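<p>(A quick numerical check of these four roots, assuming, as the substitution indicates, that the original equation was <span class="math-container">$(5+2\sqrt6)^{x^2-3}+(5-2\sqrt6)^{x^2-3}=10$</span>; note that <span class="math-container">$(5+2\sqrt6)(5-2\sqrt6)=1$</span>:)</p>

```python
from math import sqrt

def lhs(x):
    """Left side of (5+2*sqrt(6))^(x^2-3) + (5-2*sqrt(6))^(x^2-3) = 10."""
    e = x * x - 3
    return (5 + 2 * sqrt(6)) ** e + (5 - 2 * sqrt(6)) ** e

roots = [2, -2, sqrt(2), -sqrt(2)]
checks = [abs(lhs(x) - 10) < 1e-9 for x in roots]
```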
3,163,342
<p>Find all the ring homomorphisms <span class="math-container">$f$</span> : <span class="math-container">$\mathbb{Z}_6\to\mathbb{Z}_3$</span>.</p> <p>Definition of ring homomorphism:</p> <p>The function f: R → S is a ring homomorphism if:</p> <p>1) <span class="math-container">$f(1)$</span> = <span class="math-container">$1$</span></p> <p>2) <span class="math-container">$f(a+b)$</span> = <span class="math-container">$f(a)$</span> + <span class="math-container">$f(b)$</span> for all a,b in R</p> <p>3) <span class="math-container">$f(ab)$</span> = <span class="math-container">$f(a)$</span> <span class="math-container">$f(b)$</span> for all a,b in R</p> <p>Does it make sense to say that in this case </p> <p><span class="math-container">$f(6) = f(1) + f(1) + f(1) + f(1) +f(1) + f(1) = 1 + 1 + 1 + 1 + 1 + 1 = 0$</span> in <span class="math-container">$\mathbb{Z}_3$</span> </p> <p>Could you explain how we find ring homomorphisms in general, not only <span class="math-container">$\mathbb{Z}_m \to\mathbb{Z}_n$</span> for particular <span class="math-container">$m$</span> and <span class="math-container">$n$</span>.</p>
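<p>For a concrete case this small, one can simply enumerate all <span class="math-container">$3^6$</span> maps <span class="math-container">$\mathbb{Z}_6\to\mathbb{Z}_3$</span> and test the three conditions directly; a brute-force sketch of my own, not a substitute for the algebra:</p>

```python
from itertools import product

def is_ring_hom(f):
    """f is a tuple (f(0), ..., f(5)) of values in Z_3; test the three axioms."""
    if f[1] != 1:  # condition 1): f(1) = 1
        return False
    for a in range(6):
        for b in range(6):
            if f[(a + b) % 6] != (f[a] + f[b]) % 3:   # condition 2)
                return False
            if f[(a * b) % 6] != (f[a] * f[b]) % 3:   # condition 3)
                return False
    return True

homs = [f for f in product(range(3), repeat=6) if is_ring_hom(f)]
# Exactly one survives: reduction mod 3, f(k) = k mod 3 (well-defined since 3 | 6).
```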
J. W. Tanner
615,567
<p>You made a mistake: <span class="math-container">$ 3(-\frac{1}{3}(x - 1))= -(x-1),$</span> not <span class="math-container">$ -3(x - 1).$</span></p>
772,665
<p><strong>Question:</strong></p> <blockquote> <p>For any $a,b\in \mathbb{N}^{+}$, if $a+b$ is a square number, then $f(a)+f(b)$ is also a square number. Find all such functions.</p> </blockquote> <p><strong>My try:</strong> It is clear that the function $$f(x)=x$$ satisfies the given conditions, since: $$f(a)+f(b)=a+b.$$</p> <p>But is it the only function that fits our needs? </p> <p>A friend of mine gave me this problem; maybe it is a mathematical olympiad problem. Thank you for your help.</p>
Ahmed Bachir
984,531
<p>We can find some other solutions, such as the zero function, the functions of the form <span class="math-container">$f(n)=a^2n$</span>, or the constant functions of the form <span class="math-container">$f(n)=\dfrac{a^2}{2}$</span>, or another function <span class="math-container">$f$</span> which can be determined following this method.<br /> Here I am just giving a way to build a function with this property.<br /> Suppose that <span class="math-container">$a, b \in \mathbb{N}^{+} $</span> are such that their sum is a perfect square, that is, there exists <span class="math-container">$n \in \mathbb{N}_{≥2}$</span> such that<br /> <span class="math-container">$$ a+b=n^2 $$</span> which is equivalent to <br /> <span class="math-container">$$ a=n^2-b $$</span> which implies that <br /> <span class="math-container">$$ f(a)+f(b)=f(n^2-b)+f(b) $$</span> From here we can see that we are looking for a function <span class="math-container">$f$</span> such that <br /> <span class="math-container">$$ \forall n \in \mathbb{N}_{\geq 2}, \forall b \in \{ 1, 2,..., [ \dfrac{n^2}{2}] \}, \exists a(b,n) \in \mathbb{N}; f(n^2-b)+f(b)=(a(b,n))^2$$</span> Let's start building.<br /> For <span class="math-container">$n=2$</span> we need to give values to <span class="math-container">$f(b) $</span> and <span class="math-container">$a(b, 2)$</span> for <span class="math-container">$ b \in \{ 1, 2 \}$</span> such that<br /> <span class="math-container">$$ f(4-b)+f(b)=(a(b,2))^2$$</span> that is, we need to give values to <span class="math-container">$f(1), f(2), f(3)$</span> and <span class="math-container">$a(1, 2), a(2,2)$</span> such that<br /> <span class="math-container">$$ \begin{cases} f(3)+f(1)=(a(1,2))^2 \\ f(2)+f(2)=(a(2,2))^2 \end{cases} $$</span> We then see that we can assign arbitrary values to <span class="math-container">$f(1), a(1,2), a(2,2)$</span> and define <span class="math-container">$f(2), f(3)$</span> as follows:<br /> <span class="math-container">$$ \begin{cases} f(2)=\dfrac{(a(2,2))^2}{2} \\ f(3)=(a(1,2))^2-f(1) \end{cases}$$</span> Before moving to the next step we need to note that if <span class="math-container">$n$</span> is even, then taking <span class="math-container">$ b= \left[ \dfrac{n^2}{2}\right]$</span> in the equality <span class="math-container">$ f(n^2-b)+f(b)=(a(b,n))^2 $</span> gives <br /> <span class="math-container">$$ f( [ \dfrac{n^2}{2}] )=\dfrac{a( [ \dfrac{n^2}{2}], n)^2}{2}...(*) $$</span> and we can choose any natural value for <span class="math-container">$a( [ \dfrac{n^2}{2}], n)$</span> to define <span class="math-container">$f( [ \dfrac{n^2}{2}] )$</span>. <br /> In the next steps we have to take into account that <span class="math-container">$f( [ \dfrac{n^2}{2}])$</span> has already been determined. <br /> For <span class="math-container">$n=3$</span> we need to give values to <span class="math-container">$f(b) $</span> and <span class="math-container">$a(b, 3)$</span> for <span class="math-container">$ b \in \{ 1, 2, 3, 4\}$</span> such that<br /> <span class="math-container">$$ f(9-b)+f(b)=(a(b,3))^2$$</span> that is, <span class="math-container">$$ \begin{cases} f(8)+f(1)=(a(1,3))^2 \\ f(7)+f(2)=(a(2,3))^2\\ f(6)+f(3)=(a(3,3))^2\\ f(5)+f(4)=(a(4,3))^2 \end{cases}$$</span> Here we need to see that <span class="math-container">$f(2)$</span> and <span class="math-container">$f(8)$</span> have already been chosen via the equality <span class="math-container">$(*) $</span> (applied with <span class="math-container">$n=2$</span> and <span class="math-container">$n=4$</span>), which implies that <span class="math-container">$f(1)$</span> is no longer arbitrary as in the first step, so we need to change the choice of <span class="math-container">$f(1)$</span> to be of the form <span class="math-container">$$ f(1)=(a(1,3))^2-f(8)$$</span> and since <span class="math-container">$f(2), f(3)$</span> are well determined we can easily assign values to <span class="math-container">$f(7), f(6)$</span>.<br /> But <span class="math-container">$f(4), f(5)$</span> remain undetermined; it seems that we can assign an arbitrary value to <span class="math-container">$f(4)$</span> and then get <span class="math-container">$f(5)=(a(4,3))^2-f(4)$</span>, but is this true? <br /> Maybe in the next steps we will need to change one of these values, as we did with <span class="math-container">$f(1)$</span>, and we iterate to compute the next values of the function <span class="math-container">$f$</span>.</p>
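<p>(Editorial aside: the family <span class="math-container">$f(n)=a^2n$</span> mentioned at the top is easy to spot-check by brute force over small pairs; an illustration only, not a proof. Here <code>c</code> plays the role of the constant <span class="math-container">$a$</span>:)</p>

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

def check_family(c, limit):
    """Check f(n) = c^2 * n: whenever a+b is a square, f(a)+f(b) must be too."""
    for a in range(1, limit):
        for b in range(1, limit):
            if is_square(a + b) and not is_square(c * c * a + c * c * b):
                return False
    return True

ok = all(check_family(c, 60) for c in (1, 2, 3))
```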
104,875
<p>I've been looking for a solution to this problem (for other applications too) for some time, but haven't come up with one that does not involve <code>Animate</code> or similar (and it never works).</p> <p>Take this example: plot a function (say <code>f=a/x</code>) for different <code>a</code>. The y-axis plot range is based off of <code>a/1</code> but there are, say, 3 possible plot ranges:</p> <pre><code>range=Which[f&lt;2,2,f&lt;5,5,f&lt;10,10] </code></pre> <p>for <code>1&lt;=a&lt;=10</code>. Each time <code>range</code> changes as <code>a</code> changes (with the <code>Manipulate</code> slider), the <code>FrameStyle</code> for the changing y-axis should flash red and then return to black.</p> <p>Every time I've encountered this issue I was using <code>Manipulate</code> (and need to find a solution to this while still using <code>Manipulate</code>).</p> <p>Here's what I want to show but WITHOUT having to use the method I used to create this, which was:</p> <pre><code>Which[ f[1] &lt; 1.9, Black, 1.9 &lt;= f[1] &lt;= 2.1, Red, 2.1 &lt; f[1] &lt; 4.9, Black, 4.9 &lt;= f[1] &lt;= 5.1, Red, 5.1 &lt; f[1] &lt; 9.9, Black, 9.9 &lt;= f[1] &lt;= 10, Red ] </code></pre> <p>for the <code>FrameStyle</code>. Here's what it did, for clarification:</p> <p><a href="https://i.stack.imgur.com/suFLd.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/suFLd.gif" alt="enter image description here"></a></p>
Jack LaVigne
10,917
<p>Here is something that works.</p> <p>I am quite certain that there are more efficient ways to accomplish this but at least the code is fairly easy to follow.</p> <p>I tried wrapping the right hand side of <code>AxesStyle</code> in <code>Dynamic</code> but it didn't work, so I ended up wrapping the whole plot in <code>Dynamic</code>.</p> <p>Declaring <code>Manipulate</code> variables with <code>ControlType-&gt;None</code> is one way of introducing dynamic variables (one could also use <code>DynamicModule</code>).</p> <p>When you get a working version you will undoubtedly want to reduce the time for the <code>Pause</code> and may want to get rid of the <code>Thick</code> when the y-axis is Red.</p> <p>One of the workhorses is the list of functions that follow the <code>Dynamic[Plot,...</code>.</p> <p><code>Dynamic</code> can take an expression and zero, one, two or three functions. For the case with three functions the first one applies at the start, the second during, and the last at the end.</p> <pre><code>Manipulate[ ytop = Which[ a &lt; 2, 2, a &lt; 5, 5, a &gt;= 5, 10 ]; If[ytop != ytopSaved, ytopSaved = ytop; scaleChanged = True ]; Dynamic[ Plot[ a/x, {x, 0, 10}, PlotRange -&gt; {{0, 10}, {0, ytop}}, AxesStyle -&gt; { Black, ydirective } ], {ydirective = If[scaleChanged, Directive[Thick, Red], Black ], If[scaleChanged, Pause[1]], scaleChanged = False; ydirective == Black} ], (* Manipulate variables *) {{ydirective, Black}, ControlType -&gt; None}, {{ytop, 2}, ControlType -&gt; None}, {{ytopSaved, 2}, ControlType -&gt; None}, {{scaleChanged, False}, ControlType -&gt; None}, {{a, 1}, 0, 10, Appearance -&gt; "Open"} ] </code></pre> <p>The top figure shows the case where the y axis scale has not changed, and the bottom figure shows the case during the change of the y axis scale.</p> <p><img src="https://i.stack.imgur.com/Gzuzf.png" alt="Mathematica graphics"></p>
130,502
<p>I obtained a numerical solution from the following code with <code>NDSolve</code></p> <pre><code>L = 20; tmax = 27; \[Sigma] = 2; myfun = First[h /. NDSolve[{D[h[x, y, t], t] + Div[h[x, y, t]^3*Grad[Laplacian[h[x, y, t], {x, y}], {x, y}], {x, y}] + Div[h[x, y, t]^3*Grad[h[x, y, t], {x, y}], {x, y}] == 0, h[x, y, 0] == 1 + 1/(2*\[Pi]*\[Sigma]^2)*Exp[-((x - 10)^2/(2*\[Sigma]^2) + (y - 10)^2/(2*\[Sigma]^2))], h[0, y, t] == h[L, y, t], h[x, 0, t] == h[x, L, t]}, h, {x, 0, L}, {y, 0, L}, {t, 0, tmax}, Method -&gt; {"MethodOfLines", "SpatialDiscretization" -&gt; {"TensorProductGrid", "MinPoints" -&gt; 60, "MaxPoints" -&gt; 60, "DifferenceOrder" -&gt; 4}}, StepMonitor :&gt; Print[t]]] </code></pre> <p>(<em>It took about 7 sec to solve on my old laptop.</em>)</p> <p>Next, I am trying to make an animation and export a .gif file to present its evolution as follows: (<em>taking about 50 sec</em>)</p> <pre><code>mpl = Table[Plot3D[myfun[x, y, t], {x, 0, L}, {y, 0, L}, PlotRange -&gt; All, PlotPoints -&gt; 40, ImageSize -&gt; 400, PlotLabel -&gt; Style["t = " &lt;&gt; ToString[t], Bold, 18]], {t, 0, 27, 1}]; Export["test.gif", mpl, "DisplayDurations" -&gt; 1, "AnimationRepetitions" -&gt; Infinity] </code></pre> <p><a href="https://i.stack.imgur.com/GAWwG.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/GAWwG.gif" alt="enter image description here"></a></p> <p><strong>Here are my questions:</strong></p> <p>As you may see, during the evolution <strong>(1)</strong> the box (frame) of the animation is shrinking and expanding, though slightly, and <strong>(2)</strong> the growth in amplitude is shown by rescaling the vertical coordinate. If one neglects the scaling of this coordinate and only observes the middle peak, he may not perceive its growth. This is a problem when making a presentation. 
</p> <p>I don't know the reason for the first observation; as for the second one, I think MMA tries to highlight the surface variation at <em>every</em> instant by scaling the vertical axis synchronously.</p> <p>Can anyone please help me to suppress the oscillation of the 3D box and hold the vertical axis range fixed at that of the final frame at $t_\text{max}=27$ (i.e. about z=6 here), because I want to show the surface evolution from a small fluctuation to the final big amplitude. Thanks!</p>
Nasser
70
<p>To prevent shaking, try adding <code>ImagePadding</code>; for the other issue, you can fix the vertical plot range. </p> <pre><code>mpl = Table[ Plot3D[myfun[x, y, t], {x, 0, L}, {y, 0, L}, PlotRange -&gt; {Automatic, Automatic, {0, 6}}, PlotPoints -&gt; 40, ImageSize -&gt; 400, PlotLabel -&gt; Style["t = " &lt;&gt; ToString[t], Bold, 18], ImagePadding -&gt; 30], {t, 0, 27, 1}]; </code></pre> <p><a href="https://i.stack.imgur.com/UER1w.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/UER1w.gif" alt="enter image description here"></a></p>
1,801,112
<p>Find the simplest solution:</p> <p>$y' + 2y = z' + 2z$. I am not sure about the proper notation; $y'$ means the first derivative of $y$. ($\frac{dy}{dt}+ 2y = \frac{dz}{dt} + 2z$)</p> <p>$y(0)=1$</p> <p>I got kind of confused: is $y=z=1$ a proper solution here? Or is it disqualified because a constant does not depend on time, and something like $e^t$ is the simplest solution?</p> <p>You can choose z and y however you like.</p>
Mark Viola
218,419
<p>Let <span class="math-container">$w(t)=y(t)-z(t)$</span>. Then, we have</p> <p><span class="math-container">$$\frac{dy(t)}{dt}+2y(t)=\frac{dz(t)}{dt}+2z(t)\implies \frac{dw(t)}{dt}+2w(t)=0$$</span></p> <p>Hence, <span class="math-container">$w(t)=Ae^{-2t}$</span> for some constant <span class="math-container">$A$</span>, from which we find that <span class="math-container">$y(t)=z(t)+Ae^{-2t}$</span> . Using the initial condition, <span class="math-container">$y(0)=1$</span>, we obtain</p> <p><span class="math-container">$$y(t)=z(t)+(1-z(0))e^{-2t} \tag 1$$</span></p> <p>and we are done!</p> <hr /> <blockquote> <p><strong>EDIT:</strong></p> <p>It appears that the OP would like the &quot;simplest&quot; functions <span class="math-container">$y(t)$</span> and <span class="math-container">$z(t)$</span> that satisfy <span class="math-container">$(1)$</span>. Choosing <span class="math-container">$y(t)=z(t)=1$</span> satisfies <span class="math-container">$(1)$</span> and provides the &quot;simplest&quot; functions that do so.</p> </blockquote>
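<p>(Formula <span class="math-container">$(1)$</span> can be sanity-checked numerically for a concrete <span class="math-container">$z$</span>, here <span class="math-container">$z(t)=\sin t$</span> (my arbitrary choice), using central finite differences:)</p>

```python
from math import sin, exp

def z(t):
    return sin(t)

def y(t):
    return z(t) + (1 - z(0)) * exp(-2 * t)  # formula (1) with z(t) = sin(t)

def d(f, t, h=1e-6):
    """Central finite-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

residuals = [abs((d(y, t) + 2 * y(t)) - (d(z, t) + 2 * z(t)))
             for t in (0.1, 0.5, 1.0, 2.0)]
initial_ok = abs(y(0) - 1) < 1e-12
```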
3,831,387
<p><span class="math-container">$X,Y\sim N(0,1)$</span> are independent; consider <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span>.</p> <p>I can see why <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span> are independent based on the fact that their joint distribution is equal to the product of their marginal distributions. I'm just having trouble understanding <em>intuitively</em> why this is so.</p> <p>This is how I see it: When you look at <span class="math-container">$X+Y=u$</span>, the set <span class="math-container">$\{(x,u-x)|x\in\mathbb{R}\}$</span> is the list of possibilities for <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p> <p>And intuitively, I understand independence of two random variables <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as the probability of the event <span class="math-container">$A=a$</span> being completely unaffected by the event <span class="math-container">$B=b$</span> happening.</p> <p>But when you look at <span class="math-container">$X+Y=u$</span> given that <span class="math-container">$X-Y=v$</span>, the set of possibilities has only one value <span class="math-container">$(\frac{u+v}{2},\frac{u-v}{2})$</span>.</p> <p>So, <span class="math-container">$\mathbb{P}(X+Y=u|X-Y=v)\neq \mathbb{P}(X+Y=u)$</span>.</p> <p>Doesn't this mean that <span class="math-container">$X+Y$</span> is affected by the occurrence of <span class="math-container">$X-Y$</span>? So, they would have to be dependent? I'm sorry if this comes off as really stupid, but it has been driving me crazy; even though I am sure that they are independent, it just doesn't feel right.</p> <p>Thank you.</p>
Alecos Papadopoulos
87,400
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be two random variables, with finite second moment. Consider the variables <span class="math-container">$Z_1=X-Y$</span> and <span class="math-container">$Z_2=X+Y$</span>.</p> <p>Their covariance is</p> <p><span class="math-container">$$\rm{Cov}(Z_1, Z_2) = E[(X-Y)(X+Y)] - E(X-Y)E(X+Y) = {\rm Var}(X) - {\rm Var}(Y).$$</span></p> <p>So</p> <p><span class="math-container">$${\rm Var}(X) = {\rm Var}(Y) \implies \rm{Cov}(Z_1, Z_2) = 0,\;\;\; {\rm Var}(X) \neq {\rm Var}(Y) \implies \rm{Cov}(Z_1, Z_2) \neq 0.$$</span></p> <p>So a <em>necessary</em> condition for independence of <span class="math-container">$Z_1$</span> and <span class="math-container">$Z_2$</span> is that <span class="math-container">${\rm Var}(X) = {\rm Var}(Y)$</span>. No matter what the marginal and joint distributions of the variables involved are, if the variances of the <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> variables are not equal, the independence result cannot hold.</p> <p><em>Given</em> this, the second required condition for independence of <span class="math-container">$Z_1, Z_2$</span> is that their joint distribution is such that zero covariance implies independence. There are many such distribution families, not just the Normal. For example, if the joint distribution is of the Farlie-Gumbel-Morgenstern type.</p> <p><em>PS: Now the interesting question becomes: assume that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have no moments. Under which conditions will <span class="math-container">$Z_1$</span> and <span class="math-container">$Z_2$</span> be independent?</em></p> <p>PS2: The above result neither makes nor uses the assumption that <span class="math-container">$X,Y$</span> are independent random variables.</p>
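<p>(A quick Monte Carlo illustration of the necessary condition <span class="math-container">${\rm Var}(X)={\rm Var}(Y)$</span>; simulation only, of course, not a proof:)</p>

```python
import random

def corr_sum_diff(sigma_x, sigma_y, n=100_000, seed=0):
    """Sample correlation of (X+Y, X-Y) for independent centered normals."""
    rng = random.Random(seed)
    z1, z2 = [], []
    for _ in range(n):
        x = rng.gauss(0, sigma_x)
        y = rng.gauss(0, sigma_y)
        z1.append(x + y)
        z2.append(x - y)
    m1, m2 = sum(z1) / n, sum(z2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(z1, z2)) / n
    v1 = sum((a - m1) ** 2 for a in z1) / n
    v2 = sum((b - m2) ** 2 for b in z2) / n
    return cov / (v1 * v2) ** 0.5

r_equal = corr_sum_diff(1, 1)    # Var(X) = Var(Y): correlation near 0
r_unequal = corr_sum_diff(1, 2)  # Var(X) = 1, Var(Y) = 4: near (1-4)/5 = -0.6
```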
1,842,826
<blockquote> <p>Explain why the columns of a $3 \times 4$ matrix are linearly dependent</p> </blockquote> <p>I am also curious what people are talking about when they say "rank". We haven't touched anything with the word rank in our linear algebra class.</p> <p>Here is what I've come up with as a solution; will this suffice?</p> <p>I know that the columns of a matrix $A$ are <strong>linearly independent</strong> <strong>iff</strong> the equation $Ax = 0$ has <strong>only</strong> the <strong>trivial solution</strong>. $\therefore$ If the equation $Ax= 0$ does <strong>not</strong> have <strong>only</strong> the <strong>trivial solution</strong> $\implies$ that the columns of the matrix $A$ are <strong>linearly dependent</strong>?</p> <p><strong>UPDATE</strong> I don't understand why the columns of a $3\times 4$ matrix are always linearly dependent. What about $\begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;1&amp;0&amp;0\\0&amp;0&amp;1&amp;0\end{bmatrix}$</p> <p>where $x_1 = 0$, and now $x_1= x_2 = x_3 = \dots = 0$; then we can see that $x_1v_1 + x_2v_2 + x_3v_3 + \dots = 0 $ and we have the trivial solution?</p>
janezdu
348,666
<p>Your solution is almost there, you just need to incorporate the original question in. In other words, you need to show that $Ax=0$ has more than one solution when $A$ is $3 \times 4$. </p> <p>I also like to look at this problem as working with 4 vectors in $\rm I\!R^3$. You can definitely construct one of them with a linear combination of the others, or one of them must be the $0$ vector.</p>
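<p>(For the concrete matrix in the question's update, a nontrivial solution of <span class="math-container">$Ax=0$</span> is easy to exhibit: the fourth column is the zero vector, so <span class="math-container">$x_4$</span> is a free variable:)</p>

```python
A = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

x = [0, 0, 0, 1]   # nontrivial: puts weight on the zero fourth column
Ax = matvec(A, x)
```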
3,542,885
<p>Let <span class="math-container">$P(x, y) = ax^2 + bxy + cy^2 + dx + ey + h$</span> and suppose <span class="math-container">$b^2 - 4ac &gt; 0.$</span></p> <p>I know that we can re-write <span class="math-container">$P(x, y)$</span> as a polynomial of <span class="math-container">$x:$</span> <span class="math-container">$$P(x, y) = ax^2 + (by+d)x + (cy^2 + ey + h).$$</span> From here, we can get the discriminant <span class="math-container">$\Delta_x(y)$</span> in terms of <span class="math-container">$y:$</span> <span class="math-container">$$\Delta_x(y) = (by+d)^2 - 4a(cy^2 + ey + h) = (b^2 - 4ac)y^2 + (2bd - 4ae)y + (d^2 - 4ah).$$</span></p> <p>Given the assumptions, I'm supposed to show that one of the following occurs: </p> <p>1) <span class="math-container">$\{ y \mid \Delta_x(y) \geq 0\} = \mathbb{R} \text{ and } \Delta_x(y) \neq 0 $</span> </p> <p>2) <span class="math-container">$\{ y \mid \Delta_x(y) = 0\} = \{y_0\} \text{ and }\{ y \mid\Delta_x(y) &gt; 0\} = \{ y\mid y \neq y_0\}$</span></p> <p>3) there exist real numbers <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, <span class="math-container">$\alpha &lt; \beta$</span>, such that <span class="math-container">$\{y \mid \Delta_x(y)\geq0\} = \{y \mid y \leq \alpha\} \cup \{y \mid y \geq \beta\}.$</span></p> <p>In the first case, we're supposed to get a hyperbola opening left and right. In the second case, we'll get two lines intersecting at a point. In the final case, we'll get a hyperbola opening up and down. However, I can't see how to show these things rigorously. Any insight would be appreciated.</p>
Quanto
686,284
<p>Recognizing that the right triangles ABC and AED are similar, we have </p> <p><span class="math-container">$$\frac {AE}{AD} = \frac{AB}{AC}$$</span></p> <p>Substitute <span class="math-container">$AE = y_1-y_2$</span>, <span class="math-container">$AC = y_1$</span> and <span class="math-container">$AD = \frac15AB = a$</span> into the above ratio,</p> <p><span class="math-container">$$\frac{y_1-y_2}{a} = \frac{5a}{y_1}\implies y_1^2-y_2y_1-5a^2=0$$</span></p> <p>Solve for <span class="math-container">$y_1$</span> in terms of known <span class="math-container">$a$</span> and <span class="math-container">$y_2$</span>,</p> <p><span class="math-container">$$y_1 = \frac12 y_2 + \frac12\sqrt{y_2^2+20a^2}$$</span></p> <p>Then, the length of ED can be calculated as,</p> <p><span class="math-container">$$ED^2= AE^2 - AD^2 = (y_1-y_2)^2-a^2 = \frac14(\sqrt{y_2^2+20a^2}-y_2)^2-a^2$$</span></p> <p>or </p> <p><span class="math-container">$$ED = \left( 4a^2 +\frac12y_2^2-\frac12 y_2 \sqrt{y_2^2+20a^2}\right)^{1/2}$$</span></p> <p>Also, the angle <span class="math-container">$\angle ABC$</span> is given by</p> <p><span class="math-container">$$\sin\angle ABC = \frac{AC}{AB} = \frac{y_1}{5a} =\frac1{10}\left(\frac{y_2}{a}+ \sqrt{\left(\frac{y_2}{a}\right)^2+20} \right)$$</span></p>
2,903,557
<p>I am having trouble proving the following identity: </p> <p>$$\frac{\sinh \tau +\sinh i\sigma }{\cosh \tau +\cosh i\sigma }=-\coth \left(i \frac{\sigma +i\tau }{2}\right)$$</p> <p>I have tried using identities and the definitions but haven't had much luck. This is a missing step in inverting the bipolar coordinates. Any assistance is appreciated.</p>
Stefan Lafon
582,769
<p>Let $$u_r= \frac 1 r - \frac 1{r+1}$$ Then $$\begin{split} u_r-u_{r+1} &amp;= \frac 1 r - \frac 1{r+1} -\bigg( \frac 1 {r+1} - \frac 1{r+2}\bigg) \\ &amp;= \frac 1 r - \frac 2{r+1} + \frac 1 {r+2} \\ &amp;= \frac{ (r+2)(r+1) -2r(r+2) + r(r+1)}{r(r+1)(r+2)}\\ &amp;= \frac 2 {r(r+1)(r+2)}\\ \end{split}$$ With that, the sum to compute is $$\begin{split} S_n &amp;= \frac 1 2 \sum_{r=1}^n(n-r+1)(u_r-u_{r+1})\\ &amp;= \frac 1 2 \bigg (\sum_{r=1}^n(n-r+1)u_r-\sum_{r=2}^{n+1} (n-r+2)u_r\bigg)\\ &amp;= \frac 1 2 \bigg ( nu_1-u_{n+1} -\sum_{r=2}^n u_r \bigg)\\ &amp;= \frac 1 2 \bigg ( nu_1 -\sum_{r=2}^{n+1} u_r \bigg)\\ &amp;= \frac 1 2 \bigg (nu_1 - \frac 1 2 + \frac 1 {n+2} \bigg) \\ &amp;= \frac 1 2 \bigg (\frac {n-1}2 + \frac 1 {n+2} \bigg) \\ \end{split} $$</p>
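<p>(The closed form can be confirmed with exact rational arithmetic; a quick verification script of mine:)</p>

```python
from fractions import Fraction as F

def s_direct(n):
    """Direct evaluation: sum_{r=1}^{n} (n - r + 1) / (r (r+1) (r+2))."""
    return sum(F(n - r + 1, r * (r + 1) * (r + 2)) for r in range(1, n + 1))

def s_closed(n):
    """The closed form derived above."""
    return F(1, 2) * (F(n - 1, 2) + F(1, n + 2))

match = all(s_direct(n) == s_closed(n) for n in range(1, 30))
```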
1,052,073
<p><strong>Assume $V$ is a real $n$-dimensional vector space, and $v,w \in V $. Define $ T \in L(V)$ by $ T(u) = u - (u,v)w$. Find a formula for Trace(T)</strong></p> <p>All I know about this is that trace is sum of the diagonal entries of the matrix. So how do I find the diagonal entries? I don't really know what steps to follow. </p>
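<p>(Whatever formula one derives can be checked against a brute-force computation of the matrix of <span class="math-container">$T$</span>. In coordinates, taking <span class="math-container">$(\cdot,\cdot)$</span> to be the dot product in an orthonormal basis, <span class="math-container">$T$</span> has matrix <span class="math-container">$I - wv^T$</span>, which suggests <span class="math-container">$\operatorname{Trace}(T)=n-(v,w)$</span>. A small check, my own:)</p>

```python
def trace_T(v, w):
    """Trace of T(u) = u - <u,v> w, via the coordinate matrix I - w v^T."""
    n = len(v)
    M = [[(1 if i == j else 0) - w[i] * v[j] for j in range(n)]
         for i in range(n)]
    return sum(M[i][i] for i in range(n))

v = [1, 2, 3]
w = [4, 0, -1]
t = trace_T(v, w)
inner = sum(a * b for a, b in zip(v, w))  # <v,w> = 1 here
```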
MvG
35,416
<p>Let's start with the comment Hagen wrote:</p> <blockquote> <p>Construct a regular pentagon, connect each vertex to its center, and prolong three of its edges (all but two non-adjacent edges). This gives you a $(108°,36°,36°)$ triangle partitioned into seven acute triangles. You can solve a wide range of cases by distorting this figure. Hopefully, one can manage to solve every right triangle this way - for any obtuse triangle can be partitioned into two right triangles.</p> </blockquote> <p>So can we solve every right triangle that way? To answer this question, I'll label all my angles:</p> <p><img src="https://i.stack.imgur.com/Ex4Jd.png" alt="Figure"></p> <p>Now one can formulate a linear program in these angles $\alpha_1$ through $\alpha_{21}$ and one extra angle $\varepsilon$. The rules of the linear program are as follows:</p> <ul> <li>The sum of angles for every one of the seven small triangles must be $180°$.</li> <li>At every vertex which is incident with more than one triangle, the incident angles must add up to $180°$ except for $\alpha_6+\alpha_7=90°$ and $\alpha_{17}+\alpha_{18}+\alpha_{19}+\alpha_{20}+\alpha_{21}=360°$. </li> <li>All angles must be strictly greater than $0°$, which I express as $\alpha_i\ge\varepsilon$.</li> <li>All angles must be strictly less than $90°$, which I express as $\alpha_i\le90°-\varepsilon$.</li> <li>$\varepsilon$ must be strictly greater than zero, which we'll have to ensure by including it in our objective function.</li> </ul> <p>Now one can look for solutions of this linear program. I did so trying to minimize $10\alpha_1-\varepsilon$. 
Or, phrased differently, I tried to make the right triangle far from isosceles while at the same time trying to avoid both zero angles and right angles.</p> <p>The numeric result I found was this:</p> <p>\begin{align*} \varepsilon&amp;=2.07899474469°\cdot10^{-10} \\ \alpha_{1}&amp;=3.21777264646°\cdot10^{-10} &amp; \alpha_{8}&amp;=75.0558118504° &amp; \alpha_{15}&amp;=54.2736044438° \\ \alpha_{2}&amp;=89.9999999997° &amp; \alpha_{9}&amp;=60.1977531668° &amp; \alpha_{16}&amp;=89.9999999999° \\ \alpha_{3}&amp;=89.9999999998° &amp; \alpha_{10}&amp;=44.7464349828° &amp; \alpha_{17}&amp;=70.0447167828° \\ \alpha_{4}&amp;=44.3278546796° &amp; \alpha_{11}&amp;=45.2535650175° &amp; \alpha_{18}&amp;=81.3985408766° \\ \alpha_{5}&amp;=45.6721453206° &amp; \alpha_{12}&amp;=60.5175473216° &amp; \alpha_{19}&amp;=79.9759354909° \\ \alpha_{6}&amp;=54.3519191885° &amp; \alpha_{13}&amp;=74.2288876609° &amp; \alpha_{20}&amp;=69.2961073381° \\ \alpha_{7}&amp;=35.6480808115° &amp; \alpha_{14}&amp;=35.7263955563° &amp; \alpha_{21}&amp;=59.2846995116° \end{align*}</p> <p>So as you can see, there was a solution which is very degenerate, with $\alpha_1$ really close to zero. On the other hand, realizing that situation under the given constraints required some angles to be close to right. Of course, the above numerical evidence is no proof, but I'd be really surprised if something which works for such small angles doesn't work for arbitrary small angles, as long as you make $\varepsilon$ sufficient small as well.</p> <p>Come to think of it, the small deviations from $0°$ resp. $90°$ in the above numbers are likely an artifact of how the solver works. A different solver gave me results where the angles were exactly $0°$ resp. $90°$. And it makes sense to assume that you can achieve $0°$ if you allow $90°$, and that doing so is better in terms of the objective function than the above. 
To actually forbid those solutions for all solvers would likely require a fixed non-zero lower bound on $\varepsilon$.</p>
2,270,861
<p>What follows is part of Exercise 1.34 from Pillay's <em>Introduction to Stability Theory</em>. Suppose the following:</p> <ol> <li>$M \prec N$.</li> <li>$N$ is $|M|^+$-saturated.</li> <li>$p \in S_1(M)$, $q \in S_1(N)$.</li> <li>$q \supset p$ is a coheir of $p$.</li> </ol> <p>Construct a sequence $(a_i \mid i &lt; \omega)$ of elements in $N$ inductively as follows: let $a_0$ realize $p$, and given $a_0, \dots, a_n$, let $a_{n+1}$ realize $q \upharpoonright Ma_0\dots a_n$. The exercise is to show that $(a_i)_i$ is an indiscernible sequence over $M$.</p> <p>My attempt is to show, by induction on $n$, that for $\bar ab$ and $\bar a'b'$, subsequences of $(a_i)_i$ of length $n$, the type of $\bar a$ over $M$ is the same as that of $\bar a'$. Suppose that the following are equivalent for a formula $\phi$ without parameters, $\bar cd $ a subsequence of $(a_i)_i$, and $\bar m \in M$:</p> <ol> <li>$N \models \phi(\bar c, d; \bar m)$</li> <li>There exists $d' \in M$ such that $N \models \phi(\bar c, d';\bar m)$.</li> </ol> <p>Note that since $q$ is a coheir, 1 implies 2. If that is true, what I need to prove is a straightforward consequence of the induction hypothesis.</p> <p>My question is whether or not the two conditions are equivalent, and how to show it.</p>
Mark Viola
218,419
<p>HINT:</p> <p>Integrate by parts the integral $\int_0^\infty e^{-x^2}2y\cos(2xy)\,dx$ with $u=e^{-x^2}$ and $v=\sin(2xy)$.</p>
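<p>(This hint is the standard route to the classical formula <span class="math-container">$\int_0^\infty e^{-x^2}\cos(2xy)\,dx=\frac{\sqrt\pi}{2}e^{-y^2}$</span>, which is easy to confirm numerically; a rough trapezoid check of mine:)</p>

```python
from math import exp, cos, pi, sqrt

def gauss_cos_integral(y, upper=10.0, steps=100_000):
    """Trapezoid approximation of integral_0^upper of exp(-x^2) cos(2 x y) dx."""
    h = upper / steps
    total = 0.5 * (1.0 + exp(-upper * upper) * cos(2 * upper * y))
    for k in range(1, steps):
        x = k * h
        total += exp(-x * x) * cos(2 * x * y)
    return total * h

errs = [abs(gauss_cos_integral(y) - sqrt(pi) / 2 * exp(-y * y))
        for y in (0.0, 0.5, 1.0, 2.0)]
```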
1,293,725
<p>Here I have a question:</p> <p>Solve for real values of $x$: $$|x^2 -2x -3| &gt; |x^2 +7x -13|$$</p> <p>I got the answer as $x = (-\infty, \frac{1}{4}(-5-3\sqrt{17}))$ and $x=(\frac{10}{9},\frac{1}{4}(3\sqrt{17}-5))$</p> <p>Please verify whether it is correct. Thanks</p>
copper.hat
27,978
<p>Brute force (as I am wont to do):</p> <p>Look at where $(x^2-2x-3)^2 - (x^2+7x-13)^2 = -(9x-10)(2x^2+5x-16)$ is strictly positive.</p> <p>The zeroes are ${10 \over 9}$ and ${1 \over 4} (-5 \pm 3 \sqrt{17})$.</p> <p>Since the leading coefficient is $-1$, we see that the answer is $(-\infty,{1 \over 4} (-5 - 3 \sqrt{17})) \cup ({10 \over 9}, {1 \over 4} (-5 + 3 \sqrt{17}))$.</p>
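<p>(A numerical cross-check of this solution set by random sampling; my own script, with the constants taken from the roots above:)</p>

```python
import random
from math import sqrt

alpha = (-5 - 3 * sqrt(17)) / 4
beta = (-5 + 3 * sqrt(17)) / 4

def in_solution_set(x):
    return x < alpha or 10.0 / 9.0 < x < beta

def inequality_holds(x):
    return abs(x * x - 2 * x - 3) > abs(x * x + 7 * x - 13)

rng = random.Random(1)
agree = all(in_solution_set(x) == inequality_holds(x)
            for x in (rng.uniform(-15, 15) for _ in range(10_000)))
```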
627,815
<p>What is the best way to write 'exclusively divisible by' a given number in terms of set notation? eg: the set of natural numbers that are divisible by $2$ and only $2$; the set of natural numbers that are divisible by $3$ and only $3$; $\dots 5$, $7\dots$ etc.</p>
Cameron Buie
28,900
<p>Given a prime natural number $p,$ it seems that the set (let's call it $\Bbb D_p$) of natural numbers "exclusively divisible by $p$" (according to your description) would be:</p> <p>$$\Bbb D_p=\left\{n\in\Bbb N:\exists k\in\Bbb N\left(n=p^k\right)\right\}$$</p> <p>For example, $$\begin{align}\Bbb D_2 &amp;= \left\{n\in\Bbb N:\exists k\in\Bbb N\left(n=2^k\right)\right\}\\ &amp;= \left\{2^1,2^2,2^3,2^4,\dots\right\}\\ &amp;= \{2,4,8,16,\dots\}.\end{align}$$</p> <p><strong>Nota Bene</strong>: I assume in this answer that your natural numbers do <em>not</em> include $0.$ If $0$ <em>is</em> a natural number by your definition, then we will need to alter the definition of $\Bbb D_p$ above, since while $1=p^0,$ we certainly can't say that $1$ is divisible by $p.$ Instead, we will say:</p> <p>$$\Bbb D_p=\left\{n\in\Bbb N:\exists k\in\Bbb N\left[\left(n=p^k\right)\wedge(k\ne0)\right]\right\}$$</p>
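<p>(For illustration, with the zero-excluded definition, <span class="math-container">$\Bbb D_p$</span> up to a bound can be generated either from powers of <span class="math-container">$p$</span> or by stripping factors of <span class="math-container">$p$</span>; the two constructions agree. My sketch:)</p>

```python
def powers_of(p, bound):
    """D_p up to bound via the definition n = p^k, k >= 1."""
    out, n = [], p
    while n <= bound:
        out.append(n)
        n *= p
    return out

def only_prime_factor(p, bound):
    """Numbers in [2, bound] whose only prime factor is p."""
    result = []
    for n in range(2, bound + 1):
        m = n
        while m % p == 0:
            m //= p
        if m == 1:  # everything divided out, so n was a pure power of p
            result.append(n)
    return result

d2 = powers_of(2, 100)  # the numbers "exclusively divisible by 2" up to 100
same = all(powers_of(p, 500) == only_prime_factor(p, 500) for p in (2, 3, 5, 7))
```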
1,803,589
<p>I'm stuck at this. How is RHS rearranged? Is it a change of index?</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} - \sum_{n=1}^{N} \frac{1}{n} = \sum_{n=N+1}^{2N} \frac{1}{n} $$</p> <p>I'm stuck here:</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} = \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{2N} $$ $$ \sum_{n=1}^{N} \frac{1}{n} =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N} $$ $$ \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{2N}-(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N})= \frac{1}{2N}-\frac{1}{N}=\frac{-1}{2N} $$ Thanks!</p>
Emilio Novati
187,568
<p>Your reorder is wrong. See here: $$ \sum_{n=1}^{2N} \frac{1}{n} - \sum_{n=1}^{N} \frac{1}{n}= \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{2N}-\left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N}\right)= $$ $$= \left(\frac{1}{1}-\frac{1}{1}\right)+\left(\frac{1}{2}-\frac{1}{2}\right)+\cdots+\left(\frac{1}{N}-\frac{1}{N}\right)+\frac{1}{N+1}+\frac{1}{N+2}+\cdots+\frac{1}{2N}= \sum_{n=N+1}^{2N}\frac{1}{n} $$</p>
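<p>(The index shift can also be confirmed with exact rational arithmetic; a quick check:)</p>

```python
from fractions import Fraction as F

def H(a, b):
    """Sum of 1/n for n = a, ..., b (inclusive)."""
    return sum(F(1, n) for n in range(a, b + 1))

ok = all(H(1, 2 * N) - H(1, N) == H(N + 1, 2 * N) for N in range(1, 40))
```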
65,691
<p>The question of generalising circle packing to three dimensions was asked in <a href="https://mathoverflow.net/questions/65677/">65677</a>. There is a clear consensus that there is no obvious three dimensional version of circle packing.</p> <p>However I have seen a comment that circle packing on surfaces and Ricci flow on surfaces are related. The circle packing here is an extension of circle packing to include intersection angles between the circles with a particular choice for these angles. My initial question is to ask for an explanation of this.</p> <p>My real question should now be apparent. There is an extension of Ricci flow to three dimensions: so is there some version of circle packing in three dimensions which can be interpreted as a combinatorial version of Ricci flow?</p>
Igor Rivin
11,142
<p>Actually, there are a number of references by Ben Chow, Feng Luo, and D. Glickenstein on this subject, mostly in two dimensions. Glickenstein's work (Glickenstein was a student of Ben Chow's) is more three-dimensional. Some relevant references are below. The curvature flow approach is distinct from the even more popular variational approach (though the two approaches intersect nontrivially).</p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=0127372" rel="nofollow noreferrer">MR0127372</a> (23 #B418) Regge, T. General relativity without coordinates. (Italian summary) Nuovo Cimento (10) 19 1961 558–571. </p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1393382" rel="nofollow noreferrer">MR1393382</a> (97k:52022) Cooper, Daryl(1-UCSB); Rivin, Igor(4-WARW-MI) Combinatorial scalar curvature and rigidity of ball packings. Math. Res. Lett. 3 (1996), no. 1, 51–60. </p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2136536" rel="nofollow noreferrer">MR2136536</a> (2006a:53081) Glickenstein, David A maximum principle for combinatorial Yamabe flow. Topology 44 (2005), no. 4, 809–825. (Reviewer: Igor Rivin), 53C44 (52C15) </p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2136535" rel="nofollow noreferrer">MR2136535</a> (2005k:53108) Glickenstein, David A combinatorial Yamabe flow in three dimensions. Topology 44 (2005), no. 4, 791–808. (Reviewer: Igor Rivin), 53C44 (52C15) </p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2100762" rel="nofollow noreferrer">MR2100762</a> (2005m:53122) Luo, Feng Combinatorial Yamabe flow on surfaces. Commun. Contemp. Math. 6 (2004), no. 5, 765–780. (Reviewer: Igor Rivin), 53C44 (53C21) </p> <p><a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2015261" rel="nofollow noreferrer">MR2015261</a> (2005a:53106) Chow, Bennett; Luo, Feng Combinatorial Ricci flows on surfaces. J. Differential Geom. 63 (2003), no. 1, 97–129.
(Reviewer: Igor Rivin), 53C44 </p> <p><a href="https://arxiv.org/abs/1010.4070" rel="nofollow noreferrer">arXiv:1010.4070</a> [pdf, ps, other] Discrete Laplace-Beltrami Operator Determines Discrete Riemannian Metric Xianfeng David Gu, Ren Guo, Feng Luo, Wei Zeng</p> <p><a href="https://arxiv.org/abs/1005.4648" rel="nofollow noreferrer">arXiv:1005.4648</a> [pdf, other] Computing Quasiconformal Maps on Riemann surfaces using Discrete Curvature Flow W. Zeng, L.M. Lui, F. Luo, J.S. Liu T.F. Chan, S.T. Yau, X.F. Gu</p>