| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,200,940 | <p><a href="https://i.stack.imgur.com/7k9P8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7k9P8.jpg" alt="enter image description here"></a></p>
<p>Velleman's logic in sentence 3 under figure 4 is confusing me. He is using lines two and four of the truth table to infer what Q of line 1 should be. But lines two and four assume P --> Q is true, while line 1 assumes P --> Q is false. So the latter doesn't follow from the former in the way Velleman is reasoning here.</p>
<p>Furthermore, it is already given in the first two columns of P and Q that they are both F, so it is very strange how he reasons that there is even a possible "inference" of Q being T.</p>
<p>Help anyone.</p>
<p>Thanks!</p>
| Bram28 | 256,001 | <p>Velleman's logic is correct. Here is what he is saying. Suppose we change the first line of the truth-table for <span class="math-container">$P \to Q$</span>, so it becomes:</p>
<p><span class="math-container">\begin{array}{cc|c}
P&Q&P \to Q\\
\hline
F&F&F\\
F&T&T\\
T&F&F\\
T&T&T\\
\end{array}</span></p>
<p>OK, so now look at the following argument:</p>
<p><span class="math-container">$$P \to Q$$</span></p>
<p><span class="math-container">$$\therefore Q$$</span></p>
<p>Is this valid? With the truth-table as defined above, it would be, since in every case where the premise <span class="math-container">$P \to Q$</span> is true (lines 2 and 4 of the truth-table), <span class="math-container">$Q$</span> is also true.</p>
<p>But, Velleman says, this would be very unintuitive (he gives the example with the eggs). Therefore, we better not use the truth-table above.</p>
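<p>This validity check is easy to mechanize. Below is a short Python sketch (an editorial illustration, not from Velleman's book or the original answer) that tests the argument "P → Q, therefore Q" against both truth tables by brute-force enumeration:</p>

```python
from itertools import product

def implies_std(p, q):
    # the standard material conditional: false only on the row (T, F)
    return (not p) or q

def implies_mod(p, q):
    # Velleman's hypothetical table with row (F, F) changed to F;
    # its column F, T, F, T then coincides with q itself
    return q

def valid(premise):
    # "premise, therefore Q" is valid iff Q holds on every row
    # of the truth table where the premise holds
    return all(q for p, q in product([False, True], repeat=2) if premise(p, q))

print(valid(implies_std))  # False: from P -> Q alone, Q does not follow
print(valid(implies_mod))  # True: under the altered table the bad inference becomes "valid"
```

<p>The enumeration makes Velleman's point explicit: changing a single row of the table turns an obviously unintuitive argument into a "valid" one, which is why that row must be T.</p>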
|
1,446,168 | <p>Let $x$ and $y$ be two random variables. </p>
<p>Suppose $m$ is a random variable that is independent of $x$ and has the following distribution:</p>
<p>$$\text{Pr}(m = 1|x) = 0.5,$$ $$\text{Pr}(m = -1|x) = 0.5.$$</p>
<p>Let $y$ be given by: $$y= \left\{ \begin{array}{lcc}
0 & \text{if } x\geq0 \\
\\ m & \text{if } x<0 \\
\\
\end{array}
\right.$$</p>
<p>To show that $x$ and $y$ are not independent, can I use a more rigorous method other than just observing that it's the way $y$ is defined that clearly makes it dependent on $x?$ </p>
<p>I wanted to use one of these formulas for non-independence: $f(x,y) \neq f(x)f(y)$ or $f(y|x) \neq f(y)$. </p>
<p>Given the information, is it possible to even find $f(x)?$ I think it's possible to find $f(y|x)$ and $f(y)$ given the information, but not sure how they would be different. </p>
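<p>One concrete way to exhibit the dependence is by simulation. The sketch below is an editorial illustration: the marginal distribution of $x$ is not given in the question, so a uniform distribution on $[-1,1]$ is assumed purely for demonstration. It estimates $\Pr(y=0)$ and $\Pr(y=0\mid x\ge 0)$ and shows they differ, i.e. $f(y|x)\neq f(y)$:</p>

```python
import random

random.seed(0)
N = 100_000
samples = []
for _ in range(N):
    x = random.uniform(-1, 1)    # assumed marginal for x, purely for illustration
    m = random.choice([-1, 1])   # independent of x, P(m = 1) = P(m = -1) = 1/2
    y = 0 if x >= 0 else m       # y as defined in the question
    samples.append((x, y))

p_y0 = sum(y == 0 for _, y in samples) / N
p_y0_given_xpos = (sum(y == 0 for x, y in samples if x >= 0)
                   / sum(x >= 0 for x, _ in samples))
print(p_y0_given_xpos)  # exactly 1.0 by construction
print(p_y0)             # about 0.5, so Pr(y=0 | x >= 0) != Pr(y=0)
```

<p>Since the conditional and unconditional probabilities of the event $\{y=0\}$ differ, $x$ and $y$ cannot be independent, whatever the exact marginal of $x$ is (as long as both signs of $x$ have positive probability).</p>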
| hunter | 108,129 | <p>In $\mathbb{R}$, let $X = \{0\} \bigcup \{1, \frac{1}{2}, \frac{1}{3}, \ldots \}$. Then $X$ is compact and has infinitely many isolated points.</p>
|
3,052,746 | <p>I want to solve <span class="math-container">$2x = \sqrt{x+3}$</span>, which I have tried as below:</p>
<p><span class="math-container">$$\begin{equation}
4x^2 - x -3 = 0 \\
x^2 - \frac14 x - \frac34 = 0 \\
x^2 - \frac14x = \frac34 \\
\left(x - \frac12 \right)^2 = 1 \\
x = \frac32 , -\frac12
\end{equation}$$</span></p>
<p>This, however, is incorrect.</p>
<p>What is wrong with my solution?</p>
| Hugh Entwistle | 361,701 | <p>From </p>
<p><span class="math-container">$$x^2-\frac{1}{4}x=\frac{3}{4}$$</span> to <span class="math-container">$$\left(x-\frac{1}{2} \right)^2=1$$</span> you have not completed the square correctly. </p>
<p>It should instead be </p>
<p><span class="math-container">$$\left(x-\frac{1}{8} \right)^2-\frac{1}{64}=\frac{3}{4}$$</span></p>
<p>Additionally, please note that even if we were to proceed with your original solution as 'correct', one of your solutions does not in fact work. When we originally square both sides we have, in a sense, 'modified' the question -- allowing for extraneous solutions for <span class="math-container">$x$</span>. Your original solution of <span class="math-container">$x=-\frac{1}{2}$</span> does not satisfy the original equation - we must be mindful to check our solutions in these situations. </p>
|
3,052,746 | <p>I want to solve <span class="math-container">$2x = \sqrt{x+3}$</span>, which I have tried as below:</p>
<p><span class="math-container">$$\begin{equation}
4x^2 - x -3 = 0 \\
x^2 - \frac14 x - \frac34 = 0 \\
x^2 - \frac14x = \frac34 \\
\left(x - \frac12 \right)^2 = 1 \\
x = \frac32 , -\frac12
\end{equation}$$</span></p>
<p>This, however, is incorrect.</p>
<p>What is wrong with my solution?</p>
| Deepak | 151,732 | <p>Two mistakes:</p>
<p>1) mistake in completing the square. Remember, divide the coefficient of the <span class="math-container">$x$</span> term by two, not multiply. You should've got: <span class="math-container">$(x-\frac 18)^2 = \frac 34 + \frac{1}{64}$</span>.</p>
<p>2) when you square, you run the risk of introducing "redundant roots". This is because when you solve by squaring, you're really solving <span class="math-container">$2x = \pm\sqrt{x+3}$</span>. So put your solutions back into the original to see which one(s) satisfy the original equation, and discard the rest. This applies whenever you raise both sides of an equation to any even power.</p>
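<p>Both mistakes are easy to verify mechanically. The Python sketch below (an editorial illustration, not part of the original answer) solves the squared equation $4x^2-x-3=0$ correctly and then discards the root that fails the original equation:</p>

```python
import math

# roots of the squared equation 4x^2 - x - 3 = 0 (obtained from 2x = sqrt(x+3))
a, b, c = 4, -1, -3
disc = math.sqrt(b*b - 4*a*c)
roots = [(-b + disc) / (2*a), (-b - disc) / (2*a)]
print(roots)  # [1.0, -0.75] -- note: not the 3/2 and -1/2 of the faulty square-completion

# keep only the roots that satisfy the *original* equation 2x = sqrt(x+3)
genuine = [x for x in roots if math.isclose(2*x, math.sqrt(x + 3))]
print(genuine)  # [1.0]: x = -3/4 is extraneous, introduced by squaring
```

<p>This illustrates both points at once: the quadratic's true roots are $1$ and $-3/4$, and only $x=1$ survives the check against the un-squared equation.</p>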
|
3,830,636 | <p>This is something I'm doing for a video game, so you may see some nonsense in the examples I provide.</p>
<p>Here's the problem:
I want to get a specific amount of minerals; to get these minerals I need to refine ore. There are various kinds of ore, and each of them provides different amounts of minerals per volume. So I want to calculate the optimum amount of ore (the least possible volume) needed to get the required amount of minerals.</p>
<p>For example:</p>
<ul>
<li>35 m<sup>3</sup> of Plagioclase is refined into 15 Tritanium, 19 Pyerite, 29 Mexallon</li>
<li>120 m<sup>3</sup> of Kernite is refined into 79 Tritanium, 144 Mexallon, 14 Isongen</li>
</ul>
<p>How could I go about calculating the combination of Plagioclase and Kernite that gives at least 1000 Tritanium and 500 Mexallon with the least amount of ore (by volume)?</p>
<p>I think this is a linear programming problem, but I haven't touched this subject in years.</p>
| RobPratt | 683,666 | <p>Yes, this is linear programming.
<span class="math-container">\begin{align}
&\text{minimize} & 35p+120k \\
&\text{subject to}
&15p+79k &\ge 1000 \\
&&29p+144k &\ge 500 \\
&&p &\ge 0\\
&&k &\ge 0
\end{align}</span>
The unique optimal solution is <span class="math-container">$(p,k)=(0,1000/79)$</span>. If <span class="math-container">$p$</span> and <span class="math-container">$k$</span> are required to be integers, the unique optimal solution is <span class="math-container">$(p,k)=(0,13)$</span>.</p>
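<p>For a two-variable LP like this one, an optimum is attained at a vertex of the feasible region, so the answer can be checked without an LP solver by enumerating intersections of constraint boundaries. A pure-Python sketch (an editorial illustration; the model is exactly the one above):</p>

```python
from itertools import combinations

# constraints a*p + b*k >= c; the last two encode p >= 0 and k >= 0
cons = [(15, 79, 1000),   # Tritanium
        (29, 144, 500),   # Mexallon
        (1, 0, 0),
        (0, 1, 0)]
volume = lambda p, k: 35*p + 120*k

def feasible(p, k, eps=1e-9):
    return all(a*p + b*k >= c - eps for a, b, c in cons)

# an optimum of a 2-variable LP is attained at a vertex: the intersection
# of two constraint boundaries that satisfies all the constraints
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue
    p = (c1*b2 - c2*b1) / det
    k = (a1*c2 - a2*c1) / det
    if feasible(p, k):
        vertices.append((p, k))

best = min(vertices, key=lambda v: volume(*v))
print(best, volume(*best))  # p = 0, k = 1000/79 ~ 12.66, about 1518.99 m^3 of ore
```

<p>The enumeration recovers the stated optimum $(p,k)=(0,1000/79)$: the Mexallon constraint turns out to be slack, so only the Tritanium requirement binds.</p>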
|
3,956,828 | <p>I can find the nth integral of <span class="math-container">$\ln(z)$</span> as follows:
<span class="math-container">\begin{aligned}
\left(\frac d{dz}\right)^{-n}\ln(z)&=\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln(t)dt\\
&=\frac1{n!}\left[\int\limits_0^z\frac1t(z-t)^ndt-z^n\ln(0)\right]\\
&=\frac1{n!}\left[\int\limits_0^1\frac{z^nt^n-z^n+z^n}{1-t}dt-z^n\ln(0)\right]\\
&=\frac{\ln(z)-H_n}{n!}z^n,
\end{aligned}</span>
but I can't get very far with <span class="math-container">$\ln(1+z/k)$</span>. I was able to come up with this but it is only a conjecture:
<span class="math-container">\begin{aligned}
\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln\left(1+\frac tk\right)dt&=\frac{\ln\left(1+\frac{z}{k}\right)-H_n}{n!}(k+z)^n+\sum ^{n}_{i=0}\frac{H_{i}}{i!(n-i)!}z^{n-i}k^i\\
&=\frac1{n!}\ln\left( 1+\frac{z}{k}\right)( k+z)^{n} -\sum ^{n}_{i=0}\frac{H_{n} -H_{i}}{i!( n-i) !} z^{n-i} k^{i}.
\end{aligned}</span>
I'm only using <span class="math-container">$k,n\in\mathbb{N}$</span> and <span class="math-container">$z\in\mathbb{R}$</span> for now. Any help proving this would be appreciated.</p>
| Spador Yedi | 664,849 | <p>I realized that showing how I came up with my conjecture could probably provide the basis of a proof. I'm not sure if I should post this as a separate answer or as part of the question, but since it kind of answers the question I'm posting as an answer. I will not, however, accept this.</p>
<p>I already knew the answer would be in the form <span class="math-container">$$\left(\frac d{dz}\right)^{-n}\ln\left(1+\frac zk\right)=\frac{\ln\left(1+\frac zk\right)-H_n}{n!}(k+z)^n+P(z,k,n),$$</span> for some polynomial <span class="math-container">$P$</span>, thanks in part to the nth integral of <span class="math-container">$\ln(z)$</span>:
<span class="math-container">\begin{aligned}
\left(\frac d{dz}\right)^{-n}\ln\left(1+\frac zk\right)&=\left(\frac d{d(1+z/k)}\frac{d(1+z/k)}{dz}\right)^{-n}\ln\left(1+\frac zk\right)\\
&\rightsquigarrow\left(\frac d{d(1+z/k)}\right)^{-n}\ln\left(1+\frac zk\right)\left(\frac{d(1+z/k)}{dz}\right)^{-n}\\
&=\frac{\ln\left(1+\frac zk\right)-H_n}{n!}\left(1+\frac zk\right)^n\left(\frac1k\right)^{-n}\\
&=\frac{\ln\left(1+\frac zk\right)-H_n}{n!}(k+z)^n.
\end{aligned}</span>
I made a table of <span class="math-container">$P$</span> to hopefully find a pattern
<span class="math-container">\begin{matrix}
n & P(z,k,n)\\
0 & 0\\
1 & k\\
2 & kz+\frac{3k^2}4\\
3 & \frac{kz^2}2+\frac{3k^2z}4+\frac{11k^3}{36}\\
4 & \frac{kz^3}3+\frac{3k^2z^2}8+\frac{11k^3z}{36}+\frac{25k^4}{288}\\
\dots & \dots
\end{matrix}</span>
and indeed several nice patterns appear. The table shows that we can expect <span class="math-container">$P$</span> to be in the form <span class="math-container">$$P(z,k,n)=\sum_{i=0}^n\frac{H_i}{i!(n-i)!}z^{n-i}k^i,$$</span> which then leads to
<span class="math-container">\begin{aligned}
\left(\frac d{dz}\right)^{-n}\ln\left(1+\frac zk\right)&=\frac{\ln\left(1+\frac zk\right)-H_n}{n!}(k+z)^n+\sum_{i=0}^n\frac{H_i}{i!(n-i)!}z^{n-i}k^i\\
&=\frac1{n!}\ln\left(1+\frac zk\right)(k+z)^n-\sum_{i=0}^n\frac{H_n-H_i}{i!(n-i)!}z^{n-i}k^i.
\end{aligned}</span>
Thus, all that really has to be done is prove this form of <span class="math-container">$P$</span>.</p>
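<p>The conjecture can also be sanity-checked numerically. The sketch below (an editorial illustration; the choices $k=1$, $z=2$ are arbitrary) compares the left-hand integral, computed with a plain trapezoidal rule, against the conjectured closed form for $n=1,2,3$:</p>

```python
import math

def harmonic(n):
    return sum(1/j for j in range(1, n + 1))

def lhs(n, k, z, steps=100_000):
    # (1/Gamma(n)) * integral_0^z (z-t)^(n-1) ln(1 + t/k) dt by the trapezoidal rule
    h = z / steps
    f = lambda t: (z - t)**(n - 1) * math.log(1 + t/k)
    total = (f(0) + f(z)) / 2 + sum(f(i*h) for i in range(1, steps))
    return total * h / math.factorial(n - 1)

def rhs(n, k, z):
    # the conjectured closed form
    s = sum((harmonic(n) - harmonic(i)) * z**(n - i) * k**i
            / (math.factorial(i) * math.factorial(n - i)) for i in range(n + 1))
    return math.log(1 + z/k) * (k + z)**n / math.factorial(n) - s

for n in (1, 2, 3):
    assert abs(lhs(n, 1, 2.0) - rhs(n, 1, 2.0)) < 1e-6
print("conjecture agrees numerically for n = 1, 2, 3")
```

<p>This is of course no proof, but it agrees with the table of $P(z,k,n)$ above and makes a slip in the pattern-matching unlikely.</p>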
|
4,422,512 | <p>I have a matrix of the following form:</p>
<p><span class="math-container">$$M:=\left(\begin{array}{c|ccc}
A & & * &\\
\hline 0 & & &\\
0 & & B &\\
0 & & &\\
\end{array}\right)$$</span></p>
<p>Here the <span class="math-container">$0$</span>'s represent matrices all of whose entries are equal to <span class="math-container">$0$</span>. Moreover, <span class="math-container">$A$</span> is an invertible square matrix. Is it true that <span class="math-container">$M$</span> is invertible iff <span class="math-container">$B$</span> is invertible? My guess is that in this case <span class="math-container">$\det(M)=\det(A)\det(B)$</span> holds, but I am not completely convinced that this is true.</p>
| zwim | 399,263 | <p>Note that, even if you set aside the fact that for <span class="math-container">$B$</span> invertible the Schur-type identity <span class="math-container">$$\det(M)=\det(A-*B^{-1}0)\det(B)=\det(A)\det(B)$$</span> already settles the question, it is possible to solve the system easily by blocks:</p>
<p><span class="math-container">$MX=0\iff\left(\begin{array}{c|ccc}
A & & C &\\
\hline 0 & & &\\
0 & & B &\\
0 & & &\\
\end{array}\right)\left(\begin{array}{c}
x\\
\hline \\
y\\
\\
\end{array}\right)=0\iff \begin{cases}Ax+Cy=0\\\\By=0\end{cases}$</span></p>
<p>If <span class="math-container">$B$</span> is invertible then <span class="math-container">$y=0$</span>, so we are left with <span class="math-container">$Ax=0$</span>, and since <span class="math-container">$A$</span> is invertible too, <span class="math-container">$x=0$</span>. Therefore <span class="math-container">$X=(x,y)^T=(0,0)^T=0$</span>, the kernel of <span class="math-container">$M$</span> is trivial, and <span class="math-container">$M$</span> is invertible.</p>
<p>If <span class="math-container">$B$</span> is not invertible, there exists <span class="math-container">$y\neq 0$</span> such that <span class="math-container">$By=0$</span>; but since <span class="math-container">$A$</span> is invertible we can always take <span class="math-container">$x=-A^{-1}Cy$</span>, making <span class="math-container">$X=(x,y)^T\neq 0$</span> a nonzero element of the kernel, so <span class="math-container">$M$</span> is not invertible either.</p>
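<p>The determinant identity is easy to spot-check numerically as well. A pure-Python sketch (an editorial illustration with arbitrarily chosen blocks $A$, $B$, $C$):</p>

```python
def det(M):
    # Laplace expansion along the first row; fine for small integer matrices
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

A = [[2, 1], [1, 1]]                    # invertible 2x2 block
B = [[3, 0, 1], [1, 2, 0], [0, 1, 1]]   # 3x3 block
C = [[5, -2, 7], [0, 4, 1]]             # arbitrary upper-right block

# assemble the block matrix M = [[A, C], [0, B]]
M = [A[i] + C[i] for i in range(2)] + [[0, 0] + B[i] for i in range(3)]
print(det(A), det(B), det(M))  # det(M) equals det(A) * det(B)
assert det(M) == det(A) * det(B)
```

<p>As expected, the off-diagonal block $C$ plays no role in the determinant of a block upper-triangular matrix.</p>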
|
252,870 | <p>Given a polynomial, lets say for example <span class="math-container">$f(x,y) = (1+x+y)^2 = 1+2x+x^2+2y+2xy+y^2$</span>, I'd like to be able to order the terms of the polynomial by total degree, either in increasing or decreasing order (and if alphabetical order can be taken into account within terms of the same total order, then that would be great, but not necessary).</p>
<p>I'd like a function to take in <span class="math-container">$1+2x+x^2+2y+2xy+y^2$</span> and return <span class="math-container">$(1) + (2x + 2y) + (x^2 + 2xy + y^2)$</span>, or in reverse order (and not necessarily with parentheses, but that would be nice to work with).</p>
<p>I've tried various commands using <code>Collect[]</code> and <code>MonomialList[]</code>, and while <code>MonomialList[f(x,y),{x,y},"DegreeLexicographic"]</code> gives a list of the terms in the order I want, I would like the full expression.</p>
| Daniel Huber | 46,318 | <p>I am not sure if I understand your description correctly. You have a list of names in "hypatia". Then the same names appear in a list of lists named "names", and you want to determine the indices of names from "hypatia" in the list of lists "names". The command <code>Position</code> will do the job. Here is an example:</p>
<p>For an example we create names from numbers:</p>
<pre><code>hypatia = Table[ToString[i], {i, 100}];
</code></pre>
<p>We then split these names into a list of lists:</p>
<pre><code>names = Partition[hypatia, 10];
</code></pre>
<p>You could additionally add names that do not appear in "hypatia", but for simplicity I am not doing this. Now <code>Position</code> will give you the indices:</p>
<pre><code>Flatten[Position[names, #]] & /@ hypatia
</code></pre>
<p><a href="https://i.stack.imgur.com/b6l5V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b6l5V.png" alt="enter image description here" /></a></p>
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>E.g. $\sqrt2 + 1$ can be expressed as a continued fraction, and by looking at the fraction, it can be argued that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Ross Millikan | 1,827 | <p>Yes, it is true that $x+1$ being irrational implies $x$ is irrational. Given that $x+1$ is irrational, assume $x=\frac ab$ with $a,b$ integers. Then $x+1=\frac {a+b}b$ would be rational as well.</p>
|
1,416,275 | <p>I've just begun the study of linear functionals and the dual basis, and the book I'm reading says the dual space $V^{*}$ may be identified with the space of row vectors. This notion seems very important, but I'm having trouble understanding it. Here is the text:</p>
<blockquote>
<p>Let $\sigma$ be an element of the dual space $V^{*}$, i.e. a linear
map $\sigma: V \rightarrow K$. Choose a basis for $V$, say the usual
basis; then $\sigma$ is represented by a matrix $[\sigma]$.
However, such a matrix $[\sigma]$ is a row vector. Also, the map
$\sigma \rightarrow [\sigma]$ is a vector space isomorphism.</p>
<p>On the other hand, any row vector $\phi = (a_1, \ldots, a_n)$ defines
a linear functional $\phi: V \rightarrow K$ by \begin{align*}
\phi(x_1, \ldots, x_n) = (a_1, \ldots, a_n) \begin{pmatrix} x_1 \\
\vdots \\ x_n \end{pmatrix} \end{align*} or simply $\phi(x_1, \ldots,
x_n) = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n$.</p>
</blockquote>
<p>The author speaks of the matrix representation $[\sigma]$, but he doesn't really explain it. Why is this matrix a row vector? Also, the second part of the text: is this merely a definition? Why does he claim $\phi(x_1, \ldots, x_n) = a_1 x_1 + \ldots + a_n x_n$? The output of a linear functional is supposed to be a scalar, not a vector? And this is clearly a linear combination of vectors...</p>
<p>Maybe some of the advanced mathematicians here could give me some examples, because I can't get my head around this at the moment. </p>
| Berci | 41,488 | <p>Let us fix a basis $\def\b{{\bf b}} \b_1,\dots,\b_n$ for $V$. For coordinates of vectors of $V$ we use <em>column</em> vectors, i.e. $\pmatrix{x_1\\ \vdots\\ x_n}$ represents the vector $x_1\b_1+\dots+x_n\b_n$.</p>
<p>Any element of the dual space (i.e. a linear $f:V\to K$) is uniquely determined by its action on the basis, and will simply correspond to the row vector
$${\bf w_f}:=\pmatrix{f(\b_1)&f(\b_2)&\dots&f(\b_n)}\,.$$
Use linearity to prove ${\bf w_f}\cdot{\bf v}=f({\bf v})$ for all vectors ${\bf v}=x_1\b_1+\dots+x_n\b_n$.</p>
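<p>The recipe above can be made concrete in a few lines. In the sketch below (an editorial illustration; the particular functional and the standard basis are chosen just for the example) the row vector of values $f(\mathbf b_i)$ reproduces $f$ via a dot product:</p>

```python
# a linear functional f(x1, x2, x3) = 2*x1 - x2 + 3*x3 on R^3
f = lambda v: 2*v[0] - v[1] + 3*v[2]

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # the standard basis b_1, b_2, b_3
w = [f(b) for b in e]                   # row vector [f(b_1) f(b_2) f(b_3)]
print(w)  # [2, -1, 3]

v = (1, 4, 0.5)                              # an arbitrary column vector
dot = sum(wi * vi for wi, vi in zip(w, v))   # the product w . v
print(dot)                                   # -0.5, a scalar
assert dot == f(v)                           # the row vector reproduces f
```

<p>This also answers the asker's worry: the output really is a scalar, because a $1\times n$ row times an $n\times 1$ column is a $1\times 1$ matrix.</p>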
|
272,144 | <p>Consider the multi-valued function <span class="math-container">$f(z)=\sqrt{(z-a)(z+\bar a)}, \Im{a}>0,\Re{a}>0$</span>. To make the function single-valued, one needs to make a branch cut. Suppose <span class="math-container">$a=e^{i\theta}$</span>; my choice of branch cut is <span class="math-container">$e^{it},t\in (\theta,\pi-\theta)$</span>. This uniquely defines my function <span class="math-container">$f(z)$</span>; now I want to study the level curves of <span class="math-container">$f$</span>. How can I visualize them in Mathematica?</p>
<p><strong>Note:</strong> How to choose a branch so that the cut is part of the level curve (say <span class="math-container">$\Im f=0$</span>)?</p>
<p><strong>Update:</strong> In fact, since we know the effect of passing the cut is changing sign. We thus can define the radical by the following Riemann-Hilbert Problem:
<span class="math-container">$$R_+(z)=-R_-(z),\quad z\in \Gamma,$$</span>
where <span class="math-container">$\Gamma$</span> is any branch cuts you want. Then up to some proper normalization, the solution is
<span class="math-container">$$\exp\{h(z)+C_\Gamma(\log(-1))(z)\},$$</span>
where <span class="math-container">$C_\Gamma$</span> is the Cauchy transform. If the branch cut is properly parametrized, the integral can be computed in Mathematica using <code>NIntegrate</code>.</p>
| I.M. | 26,815 | <p>Perhaps you can use <code>ComplexPlot</code> (since v12)</p>
<pre><code>ClearAll[f] ;
f[w_][z_] := Sqrt[(z - w)(z + Conjugate[w])] ;
Manipulate[
ComplexPlot[
f[Exp[I*theta]][z],
{z,-5 - 5I, 5 + 5I},
PlotPoints -> 100,
MaxRecursion -> 2,
ColorFunction -> "CyclicLogAbsArg",
PlotLegends -> Automatic
],
{theta, 0, 2*Pi, 0.1*Pi},
ContinuousAction -> False
]
</code></pre>
<p><a href="https://i.stack.imgur.com/SnKtT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SnKtT.png" alt="enter image description here" /></a></p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| Ryan Reich | 6,545 | <p>I just decided this quarter to use slides for my calculus class, a large-lecture course of the sort I'd never done before; I figured it would be easier to see the "board" if it were on the big screen. Here is the progression of my mistakes and corrections:</p>
<ul>
<li><p>My first lectures had too many words. Slides are great for presenting the wordy parts of math, because they take so long to write and then the students have to write them again. What is not great about them is how much they encourage this behavior.</p></li>
<li><p>Since I was giving a "slide presentation" or a "lecture" rather than a "class", my mindset was different: sort of presentation-to-the-investors rather than gathering-the-children. My slides went by too quickly.</p></li>
<li><p>I eventually slowed myself down by basing the lectures around computations rather than information. Beamer is pretty good (though not ideal) for this, because you can uncover each successive part of an equation. If you break down your slides like this, it is <em>almost</em> as natural as writing on the board.</p></li>
<li><p>My students themselves actually brought up the point that Terry Tao mentioned in his answer: the slides were too transient. They also wanted printouts. Having to prepare the slides for being printed in "handout" mode changed how I organized them: for one, no computation should be longer than one frame (something I should have realized earlier). Also, there should be minimally complex animations, since you don't see them in the printout.</p></li>
<li><p>Many of them expressed the following conservative principle: they had "always" had math taught on the board and preferred the old way. So I've started mixing the board with the slides: I write the statement of the problem on a slide, solve it on the board, and maybe summarize the solution on the slides. This works very well.</p></li>
<li><p>Now I can reserve the slides for two things: blocks of text (problem statements, statements of the main topic of the lesson) and pictures. TikZ, of course, does better pictures than I do, especially when I lose my colored chalk.</p></li>
</ul>
<p>Preparing these lectures used to take me forever. Using beamer does require that you learn how it wants you to use it: don't recompile compulsively, because each run takes a full minute, and don't do really tricky animations. Every picture takes an extra hour to prepare. If you stick to writing a fairly natural summary of a lesson, broken by lots of \pause's and the occasional <code>\begin{overprint}...\end{overprint}</code> for long bulleted lists, an hour lecture will take about two hours to prepare.</p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| JP McCarthy | 35,482 | <p>Finally, a question on MO I feel qualified to answer!</p>
<p>I am a PhD student in Ireland doing an amount of lecturing. As a first remark, I am lucky in the sense that undergraduate maths was never especially easy for me and therefore I empathise with the average student. My second remark is that I hope for a career lecturing in the Irish Institute of Technology sector, where the role is primarily teaching, as opposed to the university sector, where research is the primary role. Hence I am acutely interested in my skills as a mathematics teacher.</p>
<p>The second half of the answers here are closer to my philosophy than the first. A particular distinction must be put on the classroom environment and facilities. Regardless, my first instinct is that slides alone are sub-optimal.</p>
<p>The alternative to this is to produce everything on the blackboard. I did this last year in a differential calculus module (the students were maths studies --- by and large headed towards a career as "high school" mathematics teachers). The emphasis in this course is to convey to the students that although differential calculus is a relatively intuitive subject with the motivation coming from geometric concerns, as mathematicians we must also be rigorous, logical and precise in our thinking. Hence, we are not merely making a series of calculations and passing exams --- we must understand the content. When I wrote blackboard after blackboard of notes, the students did not have any chance of understanding the material. While I am a fervent believer that exercises and reflection are the best way for a student to achieve this aim, I am reminded of my undergraduate experience where certain obstacles lay in the path of me putting in this work and luckily my presence at lecture-time was sufficient for me to grasp the general theory and progress (eventually with first class grades) despite less than exemplary exam results in previous years. Put simply, ordinary students do not have the faculties to take down written notes and consider the important comments of the lecturer in real time.</p>
<p>However, slides do not work because mathematics is not a spectator sport (not a cliche when the average student is first interested in passing exams --- it is the goal of the educator to transcend this). It takes a superlative lecturer and a cohort of motivated and enthusiastic students to assimilate a lecture purely by ear. At least once I had a lecturer of this standard but I would vouch that were engineering, scientific or humanities students subjected to his fantastic delivery and questioning, they would simply fall asleep. It is a curse but a fact (among my students at least --- none of whom are Math majors), that the average student does not have that aptitude to bask in such splendour.</p>
<p>My compromise, therefore is the very similar to what has been suggested above. I produce a set of notes (available soft-bound in a local printing house), with gaps which we fill in during the class (I print the notes onto an acetate sheet which I project onto a screen and can write on with a marker). All the theorems are writ-large, and everything else is teased out per a blackboard with suitable prior fillings in to both give the students a sneak preview and for the practical reasons of properly spacing out my scribblings. Does the need arise, I can put more complicated graphics in this set of notes. Today we introduced implicit differentiation and I projected this Wikipedia page <a href="http://en.wikipedia.org/wiki/List_of_curves">list of curves</a> onto the screen and this was but a two minute interlude.</p>
<p>The issue of students looking ahead was addressed by a motivation at the start of term (we are studying continuous and differentiable (smooth) functions; we draw a picture; we translate these geometric pictures into algebraic ones and never lose sight of this fact).</p>
<p>I have covered more content this year than last using this method, the first continuous assessment results showed a marked improvement and I am ahead of schedule despite being able to allocate a lot more time to comments and explanation of subtleties.</p>
|
424,675 | <p>Just one simple question:</p>
<p>Let $\tau =(56789)(3456)(234)(12)$.</p>
<p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p>
<p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
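<p>For what it's worth, the bookkeeping can be done in a few lines of pure Python (an editorial sketch, not part of the original thread): compose the cycles right-to-left, read off the disjoint cycle decomposition, and apply the standard class-size formula $n!/\prod_l l^{m_l}\,m_l!$ in $S_9$:</p>

```python
from math import factorial, prod
from collections import Counter

N = 9  # tau is a permutation of {1, ..., 9}

def cycle_to_map(cycle):
    # represent one cycle, e.g. (5,6,7,8,9), as a map on {1, ..., N}
    m = {i: i for i in range(1, N + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

def compose(f, g):
    # (f o g)(x) = f(g(x)): g is applied first
    return {x: f[g[x]] for x in f}

# tau = (56789)(3456)(234)(12), with the rightmost factor applied first
tau = cycle_to_map((5, 6, 7, 8, 9))
for c in [(3, 4, 5, 6), (2, 3, 4), (1, 2)]:
    tau = compose(tau, cycle_to_map(c))

# read off the disjoint cycle decomposition
seen, cycles = set(), []
for start in range(1, N + 1):
    if start in seen:
        continue
    cyc, x = [], start
    while x not in seen:
        seen.add(x)
        cyc.append(x)
        x = tau[x]
    if len(cyc) > 1:
        cycles.append(tuple(cyc))
print(cycles)  # [(1, 4, 2), (3, 6), (5, 7, 8, 9)]

# class size in S_9: 9! / prod over cycle lengths l (multiplicity m) of l^m * m!,
# counting fixed points as 1-cycles
ctype = Counter(len(c) for c in cycles)
ctype[1] += N - sum(len(c) for c in cycles)
size = factorial(N) // prod(l**m * factorial(m) for l, m in ctype.items())
print(size)  # 15120
```

<p>Since conjugacy classes in $S_n$ are exactly the cycle types, the class of $\tau=(1\,4\,2)(3\,6)(5\,7\,8\,9)$ has $9!/(4\cdot 3\cdot 2)=15120$ elements.</p>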
| Han de Bruijn | 96,057 | <p>Isn't it a pity that one has to <B>choose</B> between these methods, while they all have their advantages and disadvantages? Wouldn't it be better to take the best of both worlds and mix the ingredients together? Perhaps a decent research effort of this kind would result in just <B>one</B> numerical method for solving PDE's instead of two or three distinct ones. Here is my attempt:
<UL><LI>
<A HREF="http://www.alternatievewiskunde.nl/sunall/index.htm" rel="nofollow">Unified Numerical Analysis</A> /
<A HREF="http://www.alternatievewiskunde.nl/origineel.htm" rel="nofollow">Highlights</A>
</LI></UL>
Where it should be emphasized that more than one life will be needed to really accomplish things. So, where is my backup?</p>
|
424,675 | <p>Just one simple question:</p>
<p>Let $\tau =(56789)(3456)(234)(12)$.</p>
<p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p>
<p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
| Han de Bruijn | 96,057 | <p>It shall be argued in this post that the whole idea of one Numerical Method being superior to
another is merely a prejudice that rests upon insufficient in-depth analysis of the real thing.</p>
<p>The argumentation will proceed at hand of a two-dimensional example.<BR>The reader is invited
not to skip through but take notice of the details.<P>
Numerical Analysis of Diffusion starts with a well known Partial Differential
Equation (PDE). The problem will be restricted <I>here</I> to the simpler case of
two space dimensions:
$$
\frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} = 0
$$
$ (x,y) = $ Cartesian coordinates. A possible interpretation of the vector
$ (Q_x,Q_y) $ is the heat flux. The differential equation then follows from the
law of conservation of energy. In case of pure diffusion of heat, also known
as conduction, the components of the heat flux are related to temperature $T$ as
follows:
$$
Q_x = - \lambda \frac{\partial T}{\partial x} \qquad \qquad Q_y = - \lambda \frac{\partial T}{\partial y}
$$
Where $ \lambda = $ thermal conductivity. Hence the final differential equation
for the temperature field is actually of the second degree. In order to make
the PDE amenable for numerical treatment, an integration procedure has to be
resorted to. At this point, there occurs a splitting into several distinct
roads, all leading to a numerical solution, more or less efficiently.</p>
<p><H3>Triangle isoparametrics</H3></p>
<p>The simplest Finite Element in two dimensions - and my absolute
<A HREF="http://math.stackexchange.com/questions/837245/example-of-a-problem-made-easier-with-skew-coordinates/1086013#1086013">favorite</A>
- is the linear triangle:<BR>
<a href="https://i.stack.imgur.com/qSjcG.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/qSjcG.jpg" alt="enter image description here"></a><BR>
Let's summarize the isoparametrics (= affine transformation) in the first place:
$$
\begin{cases}
x = x_1 + (x_2-x_1)\xi + (x_3-x_1)\eta \\
y = y_1 + (y_2-y_1)\xi + (y_3-y_1)\eta \\
f = f_1 + (f_2-f_1)\xi + (f_3-f_1)\eta
\end{cases}
$$
It will be shown next how partial differentiation at such a linear triangle takes place.
First do the chain rules with global $(x,y)$ and local $(\xi,\eta)$ coordinates:
$$
\begin{cases} \Large
\frac{\partial f}{\partial \xi} =
\frac{\partial f}{\partial x}\frac{\partial x}{\partial \xi} +
\frac{\partial f}{\partial y}\frac{\partial y}{\partial \xi} \\ \Large
\frac{\partial f}{\partial \eta} =
\frac{\partial f}{\partial x}\frac{\partial x}{\partial \eta} +
\frac{\partial f}{\partial y}\frac{\partial y}{\partial \eta}
\end{cases} \quad \Longleftrightarrow \quad \Large
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix} =
\begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\
\frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix}
\begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix}
$$
But what we need is the inverse, with determinant / Jacobian
$\;\Delta = (\partial x / \partial \xi)(\partial y / \partial \eta)
- (\partial x / \partial \eta)(\partial y / \partial \xi)$ :
$$
\Large
\begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix} =
\begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\
\frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix}^{-1}
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix} =
\begin{bmatrix} \frac{\partial y}{\partial \eta} & -\frac{\partial y}{\partial \xi} \\
-\frac{\partial x}{\partial \eta} & \frac{\partial x}{\partial \xi} \end{bmatrix} / \Delta
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix}
$$
Giving full discretization of the function derivatives, with determinant / Jacobian
$\;\Delta = (x_2-x_1)(y_3-y_1)-(x_3-x_1)(y_2-y_1)$ :
$$
\Delta
\begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix} =
\begin{bmatrix} (y_3-y_1) & -(y_2-y_1) \\ -(x_3-x_1) & (x_2-x_1) \end{bmatrix}
\begin{bmatrix} f_2-f_1 \\ f_3-f_1 \end{bmatrix}
$$
Here $\Delta$ is the area of a vector parallelogram, which is twice the area of the
triangle. The above can also be written as:
$$
\Delta \left[ \begin{array}{c} \partial f / \partial x \\ \partial f / \partial y \end{array} \right] =
\left[ \begin{array}{ccc} +(y_2 - y_3) & +(y_3 - y_1) & +(y_1 - y_2) \\
-(x_2 - x_3) & -(x_3 - x_1) & -(x_1 - x_2) \end{array}
\right] \left[ \begin{array}{c} f_1 \\ f_2 \\ f_3 \end{array} \right]
$$
The matrix in this last formula should be memorized; it is called a <I>differentiation matrix</I>.</p>
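<p>As a quick sanity check, the differentiation matrix can be exercised numerically. The following Python sketch (an editor's illustration, not part of the original derivation) recovers the exact gradient of a linear function on an arbitrary triangle:</p>

```python
import numpy as np

def triangle_gradient(xy, f):
    """Gradient of the linear interpolant on a triangle.

    xy : (3, 2) array of vertex coordinates; f : (3,) nodal values.
    Implements  Delta * grad(f) = D @ f  with the matrix derived above.
    """
    (x1, y1), (x2, y2), (x3, y3) = xy
    delta = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # 2 * triangle area
    D = np.array([[y2 - y3, y3 - y1, y1 - y2],
                  [x3 - x2, x1 - x3, x2 - x1]])
    return D @ f / delta

# The gradient of f(x,y) = 2x + 3y + 1 is (2, 3), recovered exactly
# because the linear interpolant of a linear function is that function itself.
xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.0]])
f = 2 * xy[:, 0] + 3 * xy[:, 1] + 1
grad = triangle_gradient(xy, f)
print(grad)  # close to [2. 3.]
```

<p>The same matrix works for any triangle, however distorted, as long as $\Delta \neq 0$.</p>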
<p><H3>Finite Element Method</H3></p>
<p>When using a Finite Element method, the differential equation may be multiplied at
first with an arbitrary (test)function. Subsequently the PDE is integrated over
the domain of interest. Let the test function be called $f$, then:
$$
\iint f . \left[ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} \right] \, dx dy = 0
$$
It can be shown that this integral formulation is (more or less) equivalent with the original
partial differential equation. This is due to the fact that $f$ is an <I>arbitrary</I> function.
It should be non-zero, continuous and integrable, though.<BR>
Partial integration, or applying Green's theorem (which is the same),
results in an expression with line-integrals over the boundaries and an area
integral over the bulk field. The latter is given by:
$$
- \iint \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \, dx dy
$$
Mind the minus sign. The advantage accomplished herewith is a reduction of the
difficulty of the problem: only derivatives of the <I>first</I> degree are left.
As a next step, the domain of interest is split up into "elements" $E$.
Due to this, also the integral will split up into separate contributions,
each contribution corresponding with an element:
$$
- \sum_E \iint \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \, dx dy
$$
It is clear that $\partial f / \partial x$ and $\partial f / \partial y$ are constants. While considering
only 2-D diffusion, $Q_x$ and $Q_y$ are also partial derivatives of the first
degree, hence constants. Herewith the bulk Finite Element formulation, for one triangle, is given by:
$$
- \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \iint dx dy =
- \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \Delta/2
$$
The remaining integral is simply the area of the triangle, $\Delta/2$. Applying now
the differentiation matrix, we find:
$$
= - \frac{1}{2} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right]
\left[ \begin{array}{cc} y_2 - y_3 & x_3 - x_2 \\
y_3 - y_1 & x_1 - x_3 \\
y_1 - y_2 & x_2 - x_1 \end{array} \right]
\left[ \begin{array}{c} Q_x \\ Q_y \end{array} \right] =
$$ $$
= \frac{1}{2} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right]
\left[ \begin{array}{c} (y_3 - y_2) Q_x - (x_3 - x_2) Q_y \\
(y_1 - y_3) Q_x - (x_1 - x_3) Q_y \\
(y_2 - y_1) Q_x - (x_2 - x_1) Q_y \end{array} \right]
$$
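<p>A small consistency check on this elementwise form (a Python sketch, editor's illustration): for a constant test function $f_1=f_2=f_3$ the contribution vanishes identically, because the entries of the 3-vector sum to zero. This is the discrete counterpart of the fact that a constant test function has zero gradient:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
(x1, y1), (x2, y2), (x3, y3) = rng.normal(size=(3, 2))  # arbitrary triangle
Qx, Qy = rng.normal(size=2)                             # arbitrary constant flux

# The 3-vector multiplying [f1 f2 f3] in the one-triangle contribution:
v = 0.5 * np.array([(y3 - y2) * Qx - (x3 - x2) * Qy,
                    (y1 - y3) * Qx - (x1 - x3) * Qy,
                    (y2 - y1) * Qx - (x2 - x1) * Qy])

# A constant test function f1 = f2 = f3 = c contributes c * v.sum() = 0,
# since the y-differences and the x-differences each telescope to zero.
print(abs(v.sum()) < 1e-12)  # True
```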
Actually, we don't want to subdivide the Finite Element domain into triangular
elements, but rather into quadrilateral elements. However, any quad element,
in turn, can be subdivided yet into triangles, even in two different ways:
<BR>
<a href="https://i.stack.imgur.com/e5cmb.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/e5cmb.jpg" alt="enter image description here"></a>
<BR>
In addition, what we want is a configuration in which all quad vertices play an
equally important role. In order to accomplish this, all of the four triangles
must be present in our formulation, simultaneously. For just one quadrilateral,
it boils down to renumbering vertices in the formulation for a single triangle,
according to the following permutations:
<PRE>
1 2 3 2 4 1 3 1 4 4 3 2
</PRE>
Also an upper label (<I>not</I> a power) will be attached to the values $(Q_x,Q_y)$,
because it must be denoted at which triangle the discretization takes place.
Any contributions are summed now over the four triangles (and the whole is divided
by a factor two again):
$$
\frac{1}{4} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right]
\left[ \begin{array}{c} (y_3 - y_2) Q^1_x - (x_3 - x_2) Q^1_y \\
(y_1 - y_3) Q^1_x - (x_1 - x_3) Q^1_y \\
(y_2 - y_1) Q^1_x - (x_2 - x_1) Q^1_y
\end{array} \right] +
$$ $$
\frac{1}{4} \left[ \begin{array}{ccc} f_2 & f_4 & f_1 \end{array} \right]
\left[ \begin{array}{c} (y_1 - y_4) Q^2_x - (x_1 - x_4) Q^2_y \\
(y_2 - y_1) Q^2_x - (x_2 - x_1) Q^2_y \\
(y_4 - y_2) Q^2_x - (x_4 - x_2) Q^2_y
\end{array} \right] +
$$ $$
\frac{1}{4} \left[ \begin{array}{ccc} f_3 & f_1 & f_4 \end{array} \right]
\left[ \begin{array}{c} (y_4 - y_1) Q^3_x - (x_4 - x_1) Q^3_y \\
(y_3 - y_4) Q^3_x - (x_3 - x_4) Q^3_y \\
(y_1 - y_3) Q^3_x - (x_1 - x_3) Q^3_y
\end{array} \right] +
$$ $$
\frac{1}{4} \left[ \begin{array}{ccc} f_4 & f_3 & f_2 \end{array} \right]
\left[ \begin{array}{c} (y_2 - y_3) Q^4_x - (x_2 - x_3) Q^4_y \\
(y_4 - y_2) Q^4_x - (x_4 - x_2) Q^4_y \\
(y_3 - y_4) Q^4_x - (x_3 - x_4) Q^4_y
\end{array} \right] \mbox{ }
$$
The four overlapping triangles at the corners of the quadrilateral are a small but significant twist to the standard Finite Element Method, which is motivated by the end-result.</p>
<p><H3>Finite Volume Method</H3></p>
<p>In order to save unnecessary paperwork, the following shorthand notation has
been adopted. It may be interpreted as an outer product:
$$
r_{ij} \times Q_k = (y_i - y_j) Q^k_x - (x_i - x_j) Q^k_y
= (x_j - x_i) Q^k_y - (y_j - y_i) Q^k_x
$$
Terms belonging to $f_k, k=1 ... 4$ are collected together. By doing so, the
standard Finite Element assembly procedure is demonstrated at a small scale.
What else is the Finite Element matrix than just an incomplete system of
equations?
$$
\frac{1}{4} \left[ \begin{array}{cccc} f_1 & f_2 & f_3 & f_4 \end{array} \right]
\left[ \begin{array}{c}
r_{32} \times Q_1 + r_{42} \times Q_2 + r_{34} \times Q_3 + 0 \\
r_{13} \times Q_1 + r_{14} \times Q_2 + 0 + r_{34} \times Q_4 \\
r_{21} \times Q_1 + 0 + r_{41} \times Q_3 + r_{42} \times Q_4 \\
0 + r_{21} \times Q_2 + r_{13} \times Q_3 + r_{23} \times Q_4
\end{array} \right]
$$
Subsequently use:
$$
r_{32} = r_{34} + r_{42} \qquad r_{14} = r_{13} + r_{34} \qquad
r_{41} = r_{42} + r_{21} \qquad r_{23} = r_{21} + r_{13}
$$
To put the above in a more handsome form:
$$
\left[ \begin{array}{cccc} f_1 & f_2 & f_3 & f_4 \end{array} \right]
\left[ \begin{array}{c}
\frac{1}{2} r_{42} \times \frac{1}{2} (Q_1+Q_2) + \frac{1}{2} r_{34} \times \frac{1}{2} (Q_1+Q_3) \\
\frac{1}{2} r_{13} \times \frac{1}{2} (Q_1+Q_2) + \frac{1}{2} r_{34} \times \frac{1}{2} (Q_2+Q_4) \\
\frac{1}{2} r_{21} \times \frac{1}{2} (Q_1+Q_3) + \frac{1}{2} r_{42} \times \frac{1}{2} (Q_3+Q_4) \\
\frac{1}{2} r_{21} \times \frac{1}{2} (Q_2+Q_4) + \frac{1}{2} r_{13} \times \frac{1}{2} (Q_3+Q_4)
\end{array} \right]
$$
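<p>The equivalence of the two forms can also be verified numerically. The Python sketch below (editor's illustration) evaluates both the collected Finite Element form and the midpoint-flux form for random vertex coordinates and random triangle fluxes; they agree identically, by virtue of the vector identities $r_{32}=r_{34}+r_{42}$ and so on:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)  # quad vertices 1..4 (0-based below)
Q = rng.normal(size=(4, 2))                    # flux (Qx, Qy) of triangles 1..4

def r_cross(i, j, q):
    """Shorthand  r_ij x Q = (y_i - y_j) Q_x - (x_i - x_j) Q_y  (0-based)."""
    return (y[i] - y[j]) * q[0] - (x[i] - x[j]) * q[1]

# Collected Finite Element form (the rows of the 4-vector, factor 1/4):
fe = 0.25 * np.array([
    r_cross(2, 1, Q[0]) + r_cross(3, 1, Q[1]) + r_cross(2, 3, Q[2]),
    r_cross(0, 2, Q[0]) + r_cross(0, 3, Q[1]) + r_cross(2, 3, Q[3]),
    r_cross(1, 0, Q[0]) + r_cross(3, 0, Q[2]) + r_cross(3, 1, Q[3]),
    r_cross(1, 0, Q[1]) + r_cross(0, 2, Q[2]) + r_cross(1, 2, Q[3]),
])

# Midpoint-flux (Finite Volume) form: half edges times averaged fluxes:
fv = np.array([
    0.5 * r_cross(3, 1, 0.5 * (Q[0] + Q[1])) + 0.5 * r_cross(2, 3, 0.5 * (Q[0] + Q[2])),
    0.5 * r_cross(0, 2, 0.5 * (Q[0] + Q[1])) + 0.5 * r_cross(2, 3, 0.5 * (Q[1] + Q[3])),
    0.5 * r_cross(1, 0, 0.5 * (Q[0] + Q[2])) + 0.5 * r_cross(3, 1, 0.5 * (Q[2] + Q[3])),
    0.5 * r_cross(1, 0, 0.5 * (Q[1] + Q[3])) + 0.5 * r_cross(0, 2, 0.5 * (Q[2] + Q[3])),
])

print(np.allclose(fe, fv))  # True
```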
It's a triviality, but nevertheless: a picture says more than a thousand words.
<BR>
<a href="https://i.stack.imgur.com/Gzx2n.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Gzx2n.jpg" alt="enter image description here"></a>
<BR>
<a href="https://i.stack.imgur.com/NvOaC.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/NvOaC.jpg" alt="enter image description here"></a>
<BR>
It is seen that the four pieces-of-equations correspond with four pieces of
line-integrals, each of them belonging to one of the vertices. Midpoints of
triangle sides are connected by lines at which the integration takes place.
The heat flux at a midpoint is the average of values at the vertices.<BR>
Let's adopt another point of view now and no longer concentrate on elements but
on vertices. Instead of arranging vertices around an element, elements are
arranged around a vertex. Label triangle side midpoints as $a,b,c,d,e,f,g,h$.<BR>
It is immediately noted that the lines connecting the midsides of the triangles
around a vertex, when tied together, neatly delineate a closed area, which can
be interpreted as a kind of 2-D Finite Volume. Expressed in the outer product
formalism, we find:
$$
r_{ba} \times Q_a + r_{cb} \times Q_c + r_{dc} \times Q_c +
r_{ed} \times Q_e + r_{fe} \times Q_e + r_{gf} \times Q_g +
r_{hg} \times Q_g + r_{ah} \times Q_a
$$
Which is the content of one equation in the Finite Element global matrix.
All terms together represent a discretization of the following circular
integral:
$$
\sum r \times Q = \oint Q_y dx - Q_x dy
$$
With help of Green's theorem, however, such a circular integral can be converted
into a "volume" integral, over the area indicated in the above figure:
$$
\oint Q_y dx - Q_x dy =
+ \iint \left[ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} \right] \, dx dy
$$
Conservation of heat is integrated over a finite volume, which is wrapped around
a vertex. So we have arrived at sort of a Finite Difference method. To be more precise:
at a Finite Volume Method. It is remarked that this F.V. procedure has been
applicable for curvilinear grids from the start.<BR>
Apply a Finite Element (Galerkin) method to a mesh of quadrilaterals. Subdivide
each of the quads into four (overlapping) triangles, in the two ways that are
possible. Then such a method is <I>equivalent</I> to a Finite Volume method:
midsides of the triangles, around the vertex of interest, are neatly connected
together, to form the boundary of a 2-D finite volume, and the conservation law
is integrated over this volume.<BR>
<B>Unification of a Finite Element and a Finite Volume method</B>
has been accomplished herewith, for a restricted class of 2-D diffusion problems.</p>
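<p>As a last numerical illustration (editor's sketch, not part of the original argument): the discrete circular integral $\sum r \times Q$ around <I>any</I> closed polygon vanishes for a constant flux, because the coordinate differences telescope. A uniform flux is divergence-free, so nothing accumulates inside the finite volume:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
poly = rng.normal(size=(8, 2))  # vertices of a closed loop (like a .. h above)
Qx, Qy = 1.7, -0.4              # constant flux

total = 0.0
for k in range(len(poly)):
    (xi, yi) = poly[k]
    (xj, yj) = poly[(k + 1) % len(poly)]
    total += (yi - yj) * Qx - (xi - xj) * Qy  # one segment's  r x Q

print(abs(total) < 1e-12)  # True: the sum telescopes around the closed loop
```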
|
424,675 | <p>Just one simple question:</p>
<p>Let $\tau =(56789)(3456)(234)(12)$.</p>
<p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p>
<p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
| Han de Bruijn | 96,057 | <p><H2>Labrujère's Problem</H2></p>
<p>In February 1976, Dr. Th.E. Labrujère, at the National Aerospace Laboratory
NLR, the Netherlands, wrote a memorandum whose title, translated into
English, reads: <I>The "Least Squares - Finite Element" Method</I> [L.S.FEM]
<I>applied to the 2-D Incompressible Flow around a Circular Cylinder</I>. To be more
precise: incompressible <I>and irrotational</I> flow.<BR>
In this memorandum, it was firmly established that a straightforward application
of the Least Squares Method, using linear triangular Finite Elements, quite
unexpectedly, <I>does not work well</I>. Herewith, Labrujère's report demonstrates
a scientific integrity which is rarely seen these days. With our
own software we have been able to reproduce the poor results as obtained by NLR:</p>
<p><a href="https://i.stack.imgur.com/h4bCg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4bCg.jpg" alt="enter image description here"></a></p>
<p>Improving on these results has been a non-trivial task. On the side of
NLR, it could only be accomplished by introducing highly complicated elements.
On the side of myself, it could only be accomplished by adopting an approach
which is quite deviant from the common Finite Element methodology. It has to be
decided by Occam's Razor which of the two approaches is to be preferred.</p>
<p><H2>The Calgary Solution</H2></p>
<p>In December 1976, Labrujère's problem was "solved" by G. de Vries,
T.E. Labrujère himself and D.H. Norrie, at the Mechanical Engineering
Department of the University of
Calgary, Alberta, Canada. The result is written down in their Report no.86:
A Least Squares Finite Element Solution for Potential Flow. The abstract of
this report is copied here:<BR>
<a href="https://i.stack.imgur.com/57TC1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57TC1.jpg" alt="enter image description here"></a>
<BR>
It seems to me that the above
solution is of purely academic interest, though. The apparent need for
fifth-order trial functions shall make this method <I>unworkable</I>
in practice. Even if attention is restricted to the simple case at hand,
it's way too complicated. What's worse, generalization is likely to be
hard. In the end, 2-D and 3-D Navier Stokes equations (at a curvilinear grid,
preferably) need to be solved. So the point of departure must be something
which is much more simple. Especially the number of unknowns at each nodal point
should not exceed the absolute minimum, the number of degrees of freedom: two.
I have never been in doubt that an alternative least squares finite element
solution, <I>having</I> such desirable properties, <I>must</I> be possible.
I have a dream ..</p>
<p>Incompressible irrotational (ideal) flow of an inviscid fluid is described by
the following system of linear first-order (!) Partial Differential Equations
(PDE's):
$$
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \quad \mbox{: incompressible} \\
\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} = 0 \quad \mbox{: irrotational}
$$
Here: $(x,y) =$ coordinates , $(u,v) =$ velocity-components.<BR>
There does <I>not</I> exist a kind of "natural" variational principle for the
above differential equations. Conventional Finite Element Methods, however,
are very much dependent upon the existence of such principles. There <I>must
be something to minimize</I> (or to "make stationary"). In cases like the above,
it seems, at first sight, that L.S.FEM offers a possible solution. That is because
Least Squares Finite Element Methods
proceed by constructing an alternative minimum principle: square the equations
just as-they-are (!) , add these squares together, integrate their sum over the
area of interest and minimize the result as a function of the unknowns. This is
the approach as described in O.C. Zienkiewicz "The Finite Element Method" (1977)
chapter 3.14.2. In our case:
$$
\iint \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]^2 \right\} \, dx.dy = \mbox{minimum}(u,v)
$$
Simple as it sounds, but watch out! People (including myself) have wasted
<I>very</I> much time trying to get this method to work. After many years of
frustration, I even had to give up for a while. Appearance is highly
deceptive here: Least Squares may be the most <I>tricky</I> Finite Element Method
that has ever been invented. We have already seen that Least Squares <I>does
not work well</I> for linear triangles, that is <I>iff</I> the method is applied
in a straightforward Finite Element manner. Which is the bare essence of
Labrujère's Problem. Start of personal motivation.
It is our purpose to show, in the end, that Labrujère's problem can be solved in a
<I>proper</I> manner. Herewith I mean: a simple and straightforward manner.
However, to that end, we must look at the problem from a different, or should
I rather say a "difference" perspective. As if it were essentially a Finite
Difference problem, namely, instead of the Finite Element problem that it only
<I>appears</I> to be. With other words:<BR><B>the Least Squares Finite Element Method is a Finite Difference Method in disguise</B>.</p>
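<p>For reference, the exact solution of the cylinder problem is available in closed form: the complex velocity is $u - iv = 1 - 1/z^2$ for a unit cylinder in a uniform stream. The Python sketch below (editor's illustration) checks by central differences that this field satisfies both first-order equations pointwise:</p>

```python
def uv(x, y):
    """Ideal flow around the unit cylinder: u - iv = 1 - 1/z^2."""
    r4 = (x * x + y * y) ** 2
    return 1 - (x * x - y * y) / r4, -2 * x * y / r4

h, x0, y0 = 1e-5, 1.3, 0.7  # a point outside the cylinder
du_dx = (uv(x0 + h, y0)[0] - uv(x0 - h, y0)[0]) / (2 * h)
dv_dy = (uv(x0, y0 + h)[1] - uv(x0, y0 - h)[1]) / (2 * h)
dv_dx = (uv(x0 + h, y0)[1] - uv(x0 - h, y0)[1]) / (2 * h)
du_dy = (uv(x0, y0 + h)[0] - uv(x0, y0 - h)[0]) / (2 * h)

# Both residuals vanish, to finite-difference accuracy:
print(abs(du_dx + dv_dy) < 1e-6)  # incompressible
print(abs(dv_dx - du_dy) < 1e-6)  # irrotational
```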
<p><H2>A Difference Perspective</H2></p>
<p>Let's look at the details. At first, the global F.E. integral is split up into
separate contributions, from all finite elements $(E)$ in the mesh:
$$
\sum_E \iint \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]^2 \right\} \, dx.dy = \mbox{minimum}
$$
It is often advantageous to carry out a Numerical Integration, instead of an
"exact" one (see e.g. Zienkiewicz chapter 8.8). This means that function
values are to be determined at so-called integration points $p$. With each
integration point $p$ a certain weight factor $w_p$ is associated:
$$
\sum_E \sum_p w_p \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_p^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_p^2 \right\} \, J_p = \mbox{minimum}
$$
Here $J_p$ is the Jacobian (determinant), which is the result of a transformation from global to
local F.E. coordinates. The Jacobians $J_p$ as well as the weighting factors $w_p$ are positive
real-valued numbers.<BR>What follows is a small step for man:
<I>unify</I> the summations over the elements and the integration
points, resulting in one global summation over all integration points
$(i=E,p)$, where $(i)$ becomes the global index of any "integration point".
This merely says that summing over elements, together with their integration
points, is equivalent with summing over all the integration points
in the whole domain of interest, in one big sweep.
In this way, integration points can be interpreted as more elementary than the
elements themselves. And an element with more than one integration point can be
considered as a superposition of elementary integrated elements, with only one
integration point $(i)$ in each of them:
$$
\sum_i w_i \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_i^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_i^2 \right\} \, J_i = \mbox{minimum} = 0
$$
In order for L.S.FEM to work properly, the minimum required must be a small
number, rapidly approximating zero, as the size of the elements becomes less.
Thus maybe it would be not such a weird idea to demand that the minimum value
should merely <I>be zero from the start</I>. But in that case the above "variational
integral" would have been equivalent to a non-squared system of linear equations.
For <I>when</I> can a sum of squares possibly be zero? If and only if each
of the separate terms in the sum is equal to zero:
$$
\left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_i = 0 \quad ; \quad
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_i = 0
\quad \mbox{: for each integration point } (i)
$$
Let's go one more step further. It is realized that each 'integration point' in
the grid does in fact nothing other than create two independent equations.
All integration points together contribute to the fact that a whole system of
linear equations emerges in this way. Nothing prevents us from calling this a "Finite
Difference" system of equations. Let's therefore, at last, replace the notion
of 'an integration point' simply by: 'an F.D. equation'. And here we are!
<P><I>
Any feasible Least Squares Finite Element Method is equivalent
with forcing to zero the sum of squares of all equations emerging from some
Finite Difference Method.<BR>L.S.FEM gives rise to the same solution
as an equivalent system of finite difference equations.
</I><P>
We are ready now to look at Labrujère's problem in the following way. Let it
be required that the Least Squares Finite Element Method always leads to an
acceptable solution, with moderate mesh sizes. Then, of course, in the
associated Finite Difference system, the number of unknowns $N$ should be
<I>equal</I> to the number of independent
equations $M$. If such is not the case, namely, then the system is likely to
be overdetermined. And it is doubtful if the Least Squares minimum can still
approach zero, fast enough. A simple count of the triangles involved with
Labrujère's problem reveals that such kind of a delicate
balance between unknowns and equations is definitely <I>not</I> achieved there:
the number of elements outweighs the number of nodal points by a factor $2$!
This means that there are roughly twice as many "unsquared" F.D. equations
as there are unknowns. Apart from of any more complicated kind of argument,
like higher order continuity, this surely throws up a basic question.</p>
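<p>The count itself is elementary. A minimal Python sketch (editor's illustration, assuming a structured $m \times n$ grid of nodes with every quad split into two triangles):</p>

```python
def counts(m, n):
    """Nodes versus triangles on an m-by-n structured grid,
    with each of the (m-1)(n-1) quads split into two triangles."""
    return m * n, 2 * (m - 1) * (n - 1)

nodes, triangles = counts(100, 100)
print(nodes, triangles)  # 10000 19602: nearly twice as many triangles,
# hence nearly twice as many first-order equations as unknowns
```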
<p>I am not qualified to check out whether Norrie and DeVries implicitly addressed
that question, in their report. They first kept the triangular shapes. I guess
that, in order to compensate for an abundance of elementary equations, they
had to introduce even so <I>many</I> additional variables. Now it becomes clear
what kind of different approach may be feasible here. For the <I>only</I> thing
that has to be accomplished is: a perfect balancing between the number of equations
and the number of unknowns. Instead of increasing both these numbers by some
complicated mechanism.</p>
<p><H2>References</H2></p>
<p>Th.E. Labrujère,<BR>'DE "EINDIGE ELEMENTEN - KLEINSTE KWADRATEN" METHODE
TOEGEPAST OP DE 2D INCOMPRESSIBELE STROMING OM EEN CIRKEL CYLINDER',
Memorandum WD-76-030, Nationaal Lucht- en Ruimtevaartlaboratorium (NLR),
Noordoostpolder, 23 februari 1976.</p>
<p>G. de Vries, T.E. Labrujère, D.H. Norrie,<BR>'A LEAST SQUARES FINITE
ELEMENT SOLUTION FOR POTENTIAL FLOW', Report No.86, Department of
Mechanical Engineering, The University of Calgary, Alberta, Canada,
December 1976.</p>
<p>O.C. Zienkiewicz,<BR>'The Finite Element Method', 3rd edition, McGraw-Hill
U.K. 1977, ISBN 0-07-084072-5</p>
<p>To be continued as:
<H2><A HREF="https://math.stackexchange.com/questions/2023382/any-employment-for-the-varignon-parallelogram">Any employment for the Varignon parallelogram?</A></H2></p>
<p><IMG SRC="https://i.stack.imgur.com/aUqVh.jpg"></p>
<p>Take a good look at these (Least Squares) Finite Difference Elements:
<UL>
<LI>No continuity requirements, at all, on the components of velocity</LI>
<LI>First order trial functions (linear) for both components of velocity</LI>
<LI>Numerical results in close agreement with the theoretical solution</LI>
</UL></p>
<p><IMG SRC="https://i.stack.imgur.com/VwuYx.jpg"></p>
|
211,778 | <p>In</p>
<pre><code>Block[{ ds = Dataset[{
<|"task" -> "task4", "parents" -> "parent1",
"start" -> "2019-05-14 17:10", "end" -> "2019-05-14 17:15",
"utility" -> "0.9"|>}
],
hrAssociation =
KeySort[Merge[
Rule @@@
Flatten[GroupBy[
ds[All, <|"d" -> (DateObject /@ {#"start", #"end"})|> &],
First -> Last,
Map[{CurrentDate[#[[1]], "Day"],
DateDifference[#[[1]], #[[2]], "Hour"]} &]] // Normal //
Values, 1] // Sort, Total]]},
hrAssociation
]
</code></pre>
<p>I get</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/wyo5f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wyo5f.png" alt="enter image description here"></a></p>
</blockquote>
<p>and I think it's because I'm referring to the first dataset variables (<code>ds</code>) deep within the second variable (<code>hrAssociation</code>). How to fix?</p>
<p>I have tried replacing <code>Block</code> with <code>With</code> and <code>Module</code> and the problem persists.</p>
| Rolf Mertig | 29 | <p>This works, because the initializers in a <code>Block</code> or <code>Module</code> variable list are evaluated in the <em>enclosing</em> scope, so the second initializer cannot see the local <code>ds</code>; moving the assignments into the body avoids that:</p>
<pre><code>Module[{ds, hrAssociation}
,
ds = Dataset[{
<|"task" -> "task4", "parents" -> "parent1",
"start" -> "2019-05-14 17:10", "end" -> "2019-05-14 17:15",
"utility" -> "0.9"|>}
]
; hrAssociation =
KeySort[Merge[
Rule @@@
Flatten[GroupBy[
ds[All, <|"d" -> (DateObject /@ {#"start", #"end"})|> &],
First -> Last,
Map[{CurrentDate[#[[1]], "Day"],
DateDifference[#[[1]], #[[2]], "Hour"]} &]] // Normal //
Values, 1] // Sort, Total]
]
; hrAssociation
]
</code></pre>
|
211,778 | <p>In</p>
<pre><code>Block[{ ds = Dataset[{
<|"task" -> "task4", "parents" -> "parent1",
"start" -> "2019-05-14 17:10", "end" -> "2019-05-14 17:15",
"utility" -> "0.9"|>}
],
hrAssociation =
KeySort[Merge[
Rule @@@
Flatten[GroupBy[
ds[All, <|"d" -> (DateObject /@ {#"start", #"end"})|> &],
First -> Last,
Map[{CurrentDate[#[[1]], "Day"],
DateDifference[#[[1]], #[[2]], "Hour"]} &]] // Normal //
Values, 1] // Sort, Total]]},
hrAssociation
]
</code></pre>
<p>I get</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/wyo5f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wyo5f.png" alt="enter image description here"></a></p>
</blockquote>
<p>and I think it's because I'm referring to the first dataset variables (<code>ds</code>) deep within the second variable (<code>hrAssociation</code>). How to fix?</p>
<p>I have tried replacing <code>Block</code> with <code>With</code> and <code>Module</code> and the problem persists.</p>
| b3m2a1 | 38,205 | <p>To add to Rolf's answer, if you <em>need</em> this in the body of the declaration you can use <code>:=</code></p>
<pre><code>Block[{ds =
Dataset[{<|"task" -> "task4", "parents" -> "parent1",
"start" -> "2019-05-14 17:10", "end" -> "2019-05-14 17:15",
"utility" -> "0.9"|>}
],
hrAssociation :=
KeySort[
Merge[
Rule @@@
Flatten[
GroupBy[
ds[All, <|"d" -> (DateObject /@ {#"start", #"end"})|> &],
First -> Last,
Map[{CurrentDate[#[[1]], "Day"],
DateDifference[#[[1]], #[[2]], "Hour"]} &
]
] // Normal // Values, 1] // Sort,
Total
]
]
},
hrAssociation
]
</code></pre>
|
189,293 | <p><img src="https://i.stack.imgur.com/Ngfb2.jpg" alt="Vesica Pisces"></p>
<p>I have the radius and center $(x,y)$ on both circles, but how do I get the $(x,y)$ of the red circle, or in other words how do I get the $(x,y)$ position of where the circles intersect at the top or bottom?</p>
| Jakube | 38,034 | <p>Here is a nice example:
<a href="http://www.analyzemath.com/CircleEq/circle_intersection.html" rel="nofollow">http://www.analyzemath.com/CircleEq/circle_intersection.html</a></p>
<p>Just set up the two circle equations, $(x-a)^2+(y-b)^2=r^2$ for each center $(a,b)$ and radius $r$, and follow the instructions.</p>
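<p>In code, that construction boils down to a few lines. A Python sketch (the function and variable names are mine, not from the linked page):</p>

```python
import math

def circle_intersections(x0, y0, r0, x1, y1, r1):
    """Intersection points of two circles (assumed to intersect)."""
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)  # distance along the center line
    h = math.sqrt(r0 * r0 - a * a)             # half-distance between the points
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return ((xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d))

# Vesica piscis: two unit circles whose centers are one radius apart.
p, q = circle_intersections(0, 0, 1, 1, 0, 1)
print(p, q)  # the bottom and top tips, (0.5, -sqrt(3)/2) and (0.5, +sqrt(3)/2)
```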
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| MJD | 25,554 | <p>Łukasiewicz notation for logic represents $\land \lor \leftrightarrow$ with the letters $K A E$ respectively, so that for example $r\lor(p\land q)$ is $ArKpq$. $K A E$ are the initials of the Polish words <em>koniunkcja</em>, <em>alternatywa</em>, <em>ekwiwalencja</em>.</p>
<p>I don't know why Łukasiewicz used $C$ to represent material implication. </p>
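<p>Prefix notation needs no parentheses, which makes it trivial to evaluate mechanically. A small Python sketch (my own illustration; the connective letters follow Łukasiewicz, with <code>N</code> for negation):</p>

```python
def evaluate(expr, env):
    """Evaluate a formula in Łukasiewicz (prefix) notation.

    K = and, A = or, E = iff, N = not; lowercase letters are variables.
    """
    def parse(i):
        c = expr[i]
        if c == 'N':
            v, j = parse(i + 1)
            return (not v), j
        if c in 'KAE':
            a, j = parse(i + 1)
            b, k = parse(j)
            return {'K': a and b, 'A': a or b, 'E': a == b}[c], k
        return env[c], i + 1
    return parse(0)[0]

# ArKpq  is  r or (p and q):
print(evaluate("ArKpq", {'p': True, 'q': False, 'r': True}))  # True
```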
|
2,432,817 | <p>Let $X$ and $Y$ be topological spaces and let $A \subseteq X$ be a subspace of $X$. Suppose $A$ is homeomorphic to some subspace $B \subseteq Y$ of $Y$. Let $f$ explicitly denote this homeomorphism.</p>
<p>If $f : A \to B$ is a homeomorphism, does $f$ extend to a homeomorphism between $\text{Cl}_X(A)$ and $\text{Cl}_Y(B)$, i.e does there exist a $g : Cl_X(A) \to Cl_Y(B)$, such that $g|_{A} = f$.</p>
<p>More generally if $A$ and $B$ are homeomorphic, does that imply that $Cl_X(A)$ and $Cl_Y(B)$ are homeomorphic? </p>
<hr>
<p>If $X$ and $Y$ are homeomorphic, then this is true, since it is a well-known theorem that $f[Cl_X(A)] = Cl_Y(f[A]) = Cl_Y(B)$, if $f : X \to Y$ is a homeomorphism. </p>
<p>However I can't seem to come up with a counterexample for my question, since I assume the implication is false.</p>
<hr>
<p><em>Edit :</em> I know a counterexample that comes from CW Complexes, where if $(X, \xi)$ is a CW-Complex, then $\xi$ is a collection of open cells $e$, which are topological spaces homeomorphic to $\mathbb{B}^n$, the open unit ball in $\mathbb{R}^n$. Each $e \subseteq X$ is a subspace of a Hausdorff space $X$.</p>
<p>Also a closed cell $\bar{e}$ is a topological space homeomorphic to the closed unit ball $\bar{\mathbb{B}}^n \subseteq \mathbb{R}^n$.</p>
<p>Now in this example $Y = \mathbb{R}^n$. It is also known that for an $e \in \xi$ we may have $Cl_X(e) \neq \bar{e}$, while $\bar{e} \cong \bar{\mathbb{B}}^n = Cl_Y(\mathbb{B}^n)$ and $e \cong \mathbb{B}^n$.</p>
<p>Hence $e$ and $\mathbb{B}^n$ are homeomorphic; however, $Cl_X(e)$ is not homeomorphic to $Cl_Y(\mathbb{B}^n) = \bar{\mathbb{B}}^n$, since $Cl_X(e) \neq \bar{e}$.</p>
<hr>
<p>But using the above example feels like bringing a gun to a knife fight, are there any simpler counterexamples?</p>
| DanielWainfleet | 254,665 | <p>Suppose $X$ has a sub-space $A$ that is homeomorphic to $X$ where $A$ is not a closed subset of $X$ and $Cl_X(A)\ne A$ and where $A$ is not homeomorphic to $Cl_X(A)$.Let $X=Y=B$ and let $f:A\to B$ be any homeomorphism. Since $f$ is a bijection from $A$ onto $Y$ and $Cl_X(A)\ne A,$ therefore $f$ cannot be extended to a bijection from $Cl_X(A)$ onto $Cl_Y(B)=Y.$ And $Cl_Y(B)=Cl_X(X)=X$ is homeomophic to $A$, which is not homeomorphic to $Cl_X(A).$ </p>
<p>A simple example is $X=Y=B=\mathbb R$ and $A=(-\pi /2,+\pi /2)$ and $f(x)=\tan x.$</p>
|
2,236,846 | <p>For $x\in A=[a,b]$</p>
<p>$\sup_{x\in A}|f(x)|\ge\int_{a}^{b}f(x)dx$</p>
<p>I'm just wondering if this is an analysis result or if the result is slightly different from this?</p>
<p>Sorry I just realised it was a greater than sign not an equals!</p>
| Trevor Gunn | 437,127 | <p>If $f(x) \le g(x)$ for all $x \in [a,b]$ (and both are integrable) then $$\int_a^b f(x) \,dx \le \int_a^b g(x) \,dx.$$ If you then take $g$ to be the constant function $g(x) = \sup_{t \in [a,b]} |f(t)|$ what this says is that
$$ \int_a^b f(x) \,dx \le (b - a) \sup_{t \in [a,b]} |f(t)|. $$</p>
<p>Note that you do need the $(b - a)$ because if, for example, $f = $ the constant function $1$ and $[a,b] = [0,2]$ then $\int_0^2 1 \,dx = 2 > \sup_{t \in [0,2]} |1| = 1$.</p>
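<p>A quick numerical illustration (a Python sketch, not part of the argument): approximate $\int_0^2 \sin x \, dx$ by the midpoint rule and compare it with the bound $(b-a)\sup|f|$:</p>

```python
import math

a, b, n = 0.0, 2.0, 100_000
xs = [a + (b - a) * (k + 0.5) / n for k in range(n)]  # midpoints
integral = sum(math.sin(x) for x in xs) * (b - a) / n  # midpoint rule
bound = (b - a) * max(abs(math.sin(x)) for x in xs)    # (b - a) * sup|f|

# Exactly: the integral is 1 - cos(2), about 1.416, while the bound is 2.
print(integral <= bound)  # True
```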
|
4,498,203 | <p>We know that each row (and each column) of composition table of a finite group, is a rearrangement (permutation) of the elements of the group.</p>
<p>How about the other way round? If we have a composition table where each row and each column is a permutation of the elements of a set, does this composition table necessarily define a group?</p>
<p>If not then give a counter example.</p>
| Arturo Magidin | 742 | <p>What you describe is a <a href="https://en.wikipedia.org/wiki/Quasigroup" rel="noreferrer">quasigroup</a>.</p>
<p>A quasigroup is an ordered pair <span class="math-container">$(A,\cdot)$</span>, where <span class="math-container">$A$</span> is a set, and <span class="math-container">$\cdot$</span> is a binary operation on <span class="math-container">$A$</span> with the property that for all <span class="math-container">$a,b\in A$</span> there exist unique solutions to the equations <span class="math-container">$a\cdot x = b$</span> and <span class="math-container">$y\cdot a=b$</span>. If you think in terms of the <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="noreferrer">Cayley table</a>, you ask that each row and each column contain each element of <span class="math-container">$A$</span> exactly once; that is, that the Cayley table be a <a href="https://en.wikipedia.org/wiki/Latin_square" rel="noreferrer">Latin square</a>.</p>
<p><a href="https://math.stackexchange.com/q/755410/742">Quasigroups that are not groups</a> exist for all orders greater than or equal to <span class="math-container">$3$</span>; if you allow the empty set, it is also a quasigroup that is not a group.</p>
<p>A quasigroup is a group if and only if the operation is associative.</p>
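<p>For concreteness, here is a small sketch (my addition, in Python) exhibiting subtraction modulo $3$ as a quasigroup that is not a group: its Cayley table is a Latin square, but the operation is not associative.</p>

```python
from itertools import product

n = 3
op = lambda a, b: (a - b) % n  # subtraction modulo 3

# Every row and every column of the Cayley table is a permutation of {0, 1, 2},
# so (Z/3, -) is a quasigroup: its Cayley table is a Latin square.
rows = [{op(a, b) for b in range(n)} for a in range(n)]
cols = [{op(a, b) for a in range(n)} for b in range(n)]

# But subtraction is not associative, so this quasigroup is not a group.
non_assoc = [(a, b, c) for a, b, c in product(range(n), repeat=3)
             if op(op(a, b), c) != op(a, op(b, c))]
```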
|
90,673 | <p>Let $M$ be a closed orientable irreducible 3-mfd, let $T$ be a non-separating torus in $M$, we cut $M$ along $T$ and glue two solid tori along the two boundary tori, and we get a new closed 3-mfd $M_1$ (we require that $M_1$ also be irreducible). Now if there exists a non-separating torus $T_1$ in $M_1$, we continue the above process, getting a new closed 3-mfd $M_2$ ...</p>
<p>My question is: can you find an $M$ and choose suitable $T_i$, gluing the solid tori suitably, so that the process goes on infinitely?</p>
<p>Or can you prove that it is impossible to find such an example? (For example, from $M \to M_1$, some invariant of 3-mfds decreases strictly.)</p>
<hr>
<p>After Kevin's example, I added the condition that "$M_1$ also is irreducible". This condition is natural in the original setting of this question: 3-manifolds with Anosov flows. </p>
| Kevin Walker | 284 | <p>Start with $S^2\times \{1\}$ inside $S^2\times S^1$. Increase the genus of this non-separating 2-sphere to obtain a non-separating torus $T\subset S^2\times S^1$. Cutting along $T$ yields $S^2\times I$ with a 1-handle attached to each boundary component. It is possible to glue solid tori to the two boundary components to obtain $S^2\times S^1 \# S^2\times S^1$. This process can be continued indefinitely, always increasing first Betti number.</p>
<p>Perhaps you want to require that $T$ is incompressible?</p>
|
2,906,865 | <p>I am trying to find a general formula for the sequence $(x_n)$ defined by
$$x_1=1, \quad x_{n+1}=\dfrac{7x_n + 5}{x_n + 3}, \quad \forall n>1.$$
I tried putting $y_n = x_n + 3$; then $y_1=4$ and
$$\quad y_{n+1}=\dfrac{7(y_n-3) + 5}{y_n }=7 - \dfrac{16}{y_n}, \quad \forall n>1.$$
From here, I can't solve it. How can I determine general formula of above sequence?</p>
<p>With <em>Mathematica</em>, I found $x_n = \dfrac{5\cdot 4^n-8}{4^n+8}$. I want to know a method for solving the problem, rather than just being handed the formula.</p>
| Yuta | 590,171 | <p>Here is a method that I read from a book, though I have not thought deeply about why it works in general.</p>
<hr>
<p>If there exists real numbers $\alpha$, $\beta$ and $r$ such that</p>
<p>$$\frac{a_{n+1}-\beta}{a_{n+1}-\alpha}=r\cdot\frac{a_n-\beta}{a_n-\alpha}$$</p>
<p>for all $n\in\mathbb{N}$, then the sequence $\{b_n\}$, where $b_n=\frac{a_n-\beta}{a_n-\alpha}$, would be geometric and can be solved easily.</p>
<p>So our job is to find such $\alpha$, $\beta$ and $r$.</p>
<p>Substituting the recurring equation,</p>
<p>\begin{align}
\frac{a_{n+1}-\beta}{a_{n+1}-\alpha}&=\frac{\frac{7a_n+5}{a_n+3}-\beta}{\frac{7a_n+5}{a_n+3}-\alpha} \\
&=\frac{7a_n+5-\beta(a_n+3)}{7a_n+5-\alpha(a_n+3)} \\
&=\frac{(7-\beta)a_n+(5-3\beta)}{(7-\alpha)a_n+(5-3\alpha)} \\
&=\frac{7-\beta}{7-\alpha}\cdot\frac{a_n-\left(-\frac{5-3\beta}{7-\beta}\right)}{a_n-\left(-\frac{5-3\alpha}{7-\alpha}\right)}
\end{align}</p>
<p>Hence the trick should work if there is a solution for $\alpha=-\frac{5-3\alpha}{7-\alpha}$ and $\beta=-\frac{5-3\beta}{7-\beta}$ and $r=\frac{7-\beta}{7-\alpha}$.</p>
<p>Noting that $\alpha$ and $\beta$ are roots of $u=-\frac{5-3u}{7-u}$.</p>
<p>\begin{align}
u&=-\frac{5-3u}{7-u} \\
u(7-u)&=-(5-3u) \\
u^2-4u-5&=0 \\
u&=-1\enspace\text{or}\enspace 5 \\
\end{align}</p>
<p>Take $(\alpha,\,\beta)=(-1,\,5)$. $r=\frac{7-5}{7-(-1)}=\frac{1}{4}$ follows.</p>
<p>$$b_1=\frac{a_1-\beta}{a_1-\alpha}=\frac{1-5}{1-(-1)}=-2$$</p>
<p>For all $n\in\mathbb{N}$,</p>
<p>$$b_n=r^{n-1}\cdot b_1=\left(\frac{1}{4}\right)^{n-1}(-2)=\frac{-8}{4^n}$$</p>
<p>Back substitute into $b_n=\frac{a_n-\beta}{a_n-\alpha}$.</p>
<p>\begin{align}
\frac{-8}{4^n}&=\frac{a_n-5}{a_n-(-1)} \\
-8(a_n+1)&=4^n(a_n-5) \\
a_n&=\frac{5\cdot 4^n-8}{4^n+8}
\end{align}</p>
<p>The same result as given by <em>Mathematica</em>.</p>
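<p>One can confirm the closed form against the recurrence with exact rational arithmetic (my addition, in Python):</p>

```python
from fractions import Fraction

def closed_form(n):
    # a_n = (5 * 4^n - 8) / (4^n + 8), the formula derived above
    p = 4 ** n
    return Fraction(5 * p - 8, p + 8)

# Iterate a_{n+1} = (7 a_n + 5) / (a_n + 3) exactly and compare term by term.
a = Fraction(1)
matches = [closed_form(1) == a]
for n in range(1, 20):
    a = (7 * a + 5) / (a + 3)
    matches.append(closed_form(n + 1) == a)
```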
|
3,076,253 | <p>Problem: Let <span class="math-container">$I=[0,1]$</span> be the closed unit interval. Suppose <span class="math-container">$f$</span> is a continuous mapping of <span class="math-container">$I$</span> into <span class="math-container">$I$</span>. Prove that <span class="math-container">$f(x)=x$</span> for at least one <span class="math-container">$x \in I$</span>. </p>
<p>Attempt: </p>
<p>We have a known result: </p>
<p>Let <span class="math-container">$g$</span> be a continuous real valued function on a metric space <span class="math-container">$X$</span>. Then <span class="math-container">$Z(g)=\{p \in X: g(p)=0 \}$</span>, i.e. the zero set of <span class="math-container">$g$</span> is closed. [Proof is simple using "inverse image of a closed set is closed under continuous map"] </p>
<p>Now, we construct <span class="math-container">$F:I \to I$</span>, such that <span class="math-container">$F(x) =f(x)-x$</span>, which is definitely continuous, being a linear combination of two continuous functions.<br>
[It is assumed that <span class="math-container">$F(x)=f(x)-x>0$</span>, <span class="math-container">$\forall x\in I$</span>. [ If <span class="math-container">$x>f(x)$</span>, consider the function <span class="math-container">$F_1(x)=x-f(x)$</span> ]. Otherwise, if the function <span class="math-container">$F(x)$</span> changes sign somewhere within the interval, it must attain the value <span class="math-container">$0$</span>, giving us <span class="math-container">$f(x)=x$</span> ].</p>
<p>But we know that <span class="math-container">$Z(F)$</span> is closed, hence it cannot be <span class="math-container">$\phi$</span>. We are done. </p>
<p>Is this at all a valid proof? </p>
<p>Edit: Being doubtful, I write up another approach:</p>
<p>The function <span class="math-container">$f$</span> maps into <span class="math-container">$I$</span>, i.e. into itself. Hence, for <span class="math-container">$f(x)>x$</span> to hold for every value of <span class="math-container">$x$</span>, its range would have to exceed <span class="math-container">$I$</span>; the best case scenario would be the identity map, for which <span class="math-container">$f(x)=x$</span> for all <span class="math-container">$x$</span>. Otherwise, <span class="math-container">$F(x)$</span> must change sign. </p>
| Community | -1 | <p>The first thing you should note is that if <span class="math-container">$\tau, \sigma \in S_n$</span>, then <span class="math-container">$\tau \circ \sigma \in S_n$</span>. This means that if you have two permutations, then their product is also a permutation of the same permutation group. </p>
<p>You also know that <span class="math-container">$S_n$</span> has exactly <span class="math-container">$n!$</span> elements. </p>
<p>Now imagine I give you a set <span class="math-container">$A$</span> where <strong>all of its elements are permutations</strong> from <span class="math-container">$S_n$</span>. Mathematically speaking this means that <span class="math-container">$A \subset S_n$</span>.</p>
<p>Now let's think about how many elements <span class="math-container">$A$</span> can have. </p>
<p>If <span class="math-container">$A = S_n$</span>, then <span class="math-container">$A$</span> contains all permutations from <span class="math-container">$S_n$</span>. So how many elements does <span class="math-container">$A$</span> have? Exactly <span class="math-container">$n!$</span>, since there are exactly <span class="math-container">$n!$</span> permutations in <span class="math-container">$S_n$</span>.</p>
<p>What if <span class="math-container">$A$</span> contains exactly <span class="math-container">$n!$</span> permutations from <span class="math-container">$S_n$</span>? Then <span class="math-container">$A$</span> must contain all permutations from <span class="math-container">$S_n$</span>, since there are only <span class="math-container">$n!$</span> elements in <span class="math-container">$S_n$</span>. So <span class="math-container">$A = S_n$</span></p>
<p>What I have shown is that any set <span class="math-container">$A$</span> that only contains permutations from <span class="math-container">$S_n$</span> has exactly <span class="math-container">$n!$</span> elements if and only if <span class="math-container">$A = S_n$</span>. </p>
<p>The key point here is that <span class="math-container">$A$</span> only has permutations from <span class="math-container">$S_n$</span> as its elements. And if it has <span class="math-container">$n!$</span> permutations (that is, all permutations) as its elements, then it must be equal to <span class="math-container">$S_n$</span> (and vice versa).</p>
<p>This is essentially all the proposition says.</p>
<p>The set <span class="math-container">$\left\{\sigma \circ \tau : \sigma \in S_n\right\}$</span> is a subset of <span class="math-container">$S_n$</span>, that is it only contains permutations from <span class="math-container">$S_n$</span>. Why? See my first statement at the top of this post.</p>
<p>So <span class="math-container">$\left\{\sigma \circ \tau : \sigma \in S_n\right\} = S_n$</span> is equivalent to saying that <span class="math-container">$\left\{\sigma \circ \tau : \sigma \in S_n\right\}$</span> has exactly <span class="math-container">$n!$</span> elements.</p>
<p>The same holds for <span class="math-container">$\left\{\tau \circ \sigma: \sigma \in S_n\right\} = S_n$</span></p>
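<p>The proposition can be checked exhaustively for a small case such as $S_3$ (my addition, in Python, with permutations represented as tuples acting on $\{0,1,2\}$):</p>

```python
from itertools import permutations

def compose(s, t):
    # (s o t)(i) = s(t(i)); permutations are tuples acting on {0, 1, 2}
    return tuple(s[t[i]] for i in range(len(t)))

S3 = set(permutations(range(3)))
tau = (1, 0, 2)  # a fixed permutation (here a transposition)

left = {compose(s, tau) for s in S3}   # {sigma o tau : sigma in S3}
right = {compose(tau, s) for s in S3}  # {tau o sigma : sigma in S3}
```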
|
50,994 | <p>I am trying to calculate the following integral. </p>
<pre><code>sigma1 = 10.0; sigma2 = 5.0; delta = 0.5;
t[x1_, y1_, x_, y_] := 100*HeavisideLambda[sigma1^-1*(x - x1), sigma2^-1*(y - y1)];
B2[x1_, y1_, x_, y_] := HeavisideTheta[(delta/2)^2 - (x - x1)^2, (delta/2)^2 - (y - y1)^2];
trans[x1_, y1_, x2_, y2_] :=
NIntegrate[B2[x1, y1, xz, yz]*t[xz, yz, xp, yp]*
(B2[x2, y2, xz, yz] - B2[x2, y2, xp, yp]),
{xp, x2 - 2.0*sigma1, x2 + 2.0*sigma1},
{yp, y2 - 2.0*sigma2, y2 + 2.0*sigma2},
{xz, x1 - 0.5*delta, x1 + 0.5*delta},
{yz, y1 - 0.5*delta, y1 + 0.5*delta},
WorkingPrecision -> 12, AccuracyGoal -> 8, MinRecursion -> 8, MaxRecursion -> 100];
</code></pre>
<p>I am interested in the value of the integral for the following inputs:</p>
<pre><code>trans[0, 0, delta, 0]
</code></pre>
<p>Here is my problem: for values of delta greater than 0.5 (I tried 1.0, 0.9, 0.8, 0.7, 0.6, 0.515), the result is a negative number, and it takes some time for <em>Mathematica</em> to come up with the result. For any value of delta smaller than 0.5, <em>Mathematica</em> immediately returns 0, and doesn't give any hints about what is wrong.</p>
<p>This is a part of a refinement study and I need to choose smaller and smaller values for delta. Do you know how I can make this work?</p>
| george2079 | 2,079 | <p>ah, you are right for that condition: <code>B2[x2, y2, xz, yz]</code> and <code>B2[x2, y2, xp, yp]</code> are returning 0 for every <em>sample</em> point in the domain. This is not so much an answer, but I thought it may be useful to show how to use <code>EvaluationMonitor</code> and <code>Reap/Sow</code> to get at this:</p>
<pre><code> trans[x1_, y1_, x2_, y2_] :=
NIntegrate[B2[x1, y1, xz, yz]*t[xz, yz, xp, yp]*
((b21 = B2[x2, y2, xz, yz]) - (b22 = B2[x2, y2, xp, yp])),
{xp, x2 - 2.0*sigma1, x2 + 2.0*sigma1},
{yp, y2 - 2.0*sigma2, y2 + 2.0*sigma2},
{xz, x1 - 0.5*delta, x1 + 0.5*delta},
{yz, y1 - 0.5*delta, y1 + 0.5*delta},
WorkingPrecision -> 12, AccuracyGoal -> 8, MinRecursion -> 8,
MaxRecursion -> 100, EvaluationMonitor :> Sow[{b21, b22}]];
(First@Last@Reap[trans[0, 0, delta, 0]])
</code></pre>
<blockquote>
<p>result 14,000 {0,0}'s</p>
</blockquote>
<p>I didn't get so far as studying why this happens, but use of the same symbol <code>x1</code> for different things makes it a little confusing to read.</p>
<h2>edit</h2>
<p>Looking at this a little deeper, I suspect you may have a situation where the integrand is zero everywhere except for a small region. <code>NIntegrate</code> samples over a pretty coarse grid, doesn't happen to hit any non-zero points, and stops. Try some of the other integration <code>Methods</code>. (<code>MinRecursion</code> doesn't seem to do anything, BTW.) You might also see if you can identify the nonzero region and tighten the integration limits.</p>
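<p>This failure mode is language-independent. The sketch below (my addition, in Python rather than <em>Mathematica</em>) shows a coarse fixed sampling grid returning exactly 0 for an integrand supported on a narrow region, while a fine grid resolves it:</p>

```python
def grid_integral(f, a, b, n):
    # midpoint rule on n equal subintervals
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Indicator of a bump of width 0.01 centred at an arbitrary point; exact integral = 0.01.
bump = lambda x: 1.0 if abs(x - 0.473) < 0.005 else 0.0

coarse = grid_integral(bump, 0.0, 1.0, 10)      # all 10 sample points miss the bump -> 0
fine = grid_integral(bump, 0.0, 1.0, 100000)    # resolves the bump -> ~ 0.01
```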
|
114,289 | <p>I am trying to use C++ programs through MathLink in my notebooks, but I cannot compile successfully the simple programs included in Mathematica. </p>
<p>I do not have a specific question, I am just looking for guidance.</p>
<p><code>$Version
$SystemID
"9.0 for Linux x86 (64-bit) (November 20, 2012)"
"Linux-x86-64"</code></p>
<p>My operating system is Linux Mint 17.3 Cinnamon 64-bit.</p>
<p>Step by step, what I am trying to do is the following:</p>
<p><code>cd $InstallationDirectory/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions/
mcc -o addtwo ../MathLinkExamples/addtwo.c ../MathLinkExamples/addtwo.tm</code></p>
<p>Since I am trying to compile from the directory where the libraries and header (mathlink.h) is, I think it should work (I am no C expert either). I've also tried copying the addtwo.c and addtwo.tm files in the "CompilerAdditions" folder and run everything from there with the same results.
I get the following errors</p>
<p><code>$InstallationDirectory/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions/libML64i3.so: undefined reference to 'shm_open'
(*more undefined references, see below*)
collect2: error: ld returned 1 exit status</code></p>
<p>Undefined references to: 'sem_init', 'sem_unlink', 'sem_close', 'pthread_sigmask', 'sem_destroy', 'shm_unlink', 'pthread_create', 'sem_post', 'sem_trywait', 'sem_open', 'sem_wait', and 'pthread_join'. It seems related to semaphore, but I am really clueless here.</p>
| halirutan | 187 | <p>The easiest way to see what libraries are required is to let <em>Mathematica</em> compile it and look at the commandline. First, locate the two required files <code>addtwo.tm</code> and <code>addtwo.c</code>. This might be a bit different on your system:</p>
<pre><code>file = FileNames["addtwo.*", {$InstallationDirectory}, Infinity][[5 ;; 6]]
(* {"/usr/local/Wolfram/Mathematica/10.3.1/SystemFiles/Links/\
MathLink/DeveloperKit/Linux-x86-64/MathLinkExamples/addtwo.c", \
"/usr/local/Wolfram/Mathematica/10.3.1/SystemFiles/Links/MathLink/\
DeveloperKit/Linux-x86-64/MathLinkExamples/addtwo.tm"} *)
</code></pre>
<p>After that, just try:</p>
<pre><code>Needs["CCompilerDriver`"];
CreateExecutable[file, "addTwo", "ShellOutputFunction" :> Print,
"ShellCommandFunction" :> Print]
</code></pre>
<p>and you see (1) whether it succeeds and (2) which libraries are linked. Here, it is:</p>
<pre><code>-l"ML64i4" -lm -lpthread -lrt -lstdc++ -ldl -luuid
</code></pre>
|
1,111,935 | <p>For a given $n \in \Bbb N$, how do you find the minimum $m \in \Bbb N$ which satisfies the inequality below?</p>
<p>$$3^{3^{3^{3^{\unicode{x22F0}^{3}}}}} (m \text{ times}) > 9^{9^{9^{9^{\unicode{x22F0}^{9}}}}} (n \text{ times})$$</p>
<p>What I have tried to do so far is decomposing the $9$ on the right side into $3\cdot 3$ or into $3^2$, but neither way got me very far and I couldn't find a pattern.</p>
| Nathan E. | 173,608 | <p>I can tell you with certainty that <code>n+1</code> is the minimum value for m. Working it out is the real problem. I'm sure it's correct, though, because if you take the 3's n+1 times and subtract the 9's n times, </p>
<pre><code>3^3^3^3 (n+1 times) - 9^9^9 (n times) > 0
</code></pre>
<p>it's always positive and the difference is increasing. I have not, however, been able to model this with a function that you can take the derivative of and find a minimum value for.</p>
|
1,111,935 | <p>For a given $n \in \Bbb N$, how do you find the minimum $m \in \Bbb N$ which satisfies the inequality below?</p>
<p>$$3^{3^{3^{3^{\unicode{x22F0}^{3}}}}} (m \text{ times}) > 9^{9^{9^{9^{\unicode{x22F0}^{9}}}}} (n \text{ times})$$</p>
<p>What I have tried to do so far is decomposing the $9$ on the right side into $3\cdot 3$ or into $3^2$, but neither way got me very far and I couldn't find a pattern.</p>
| Hasit Bhatt | 261,725 | <p>$9^{9^{9^{9^{\dots}}}}$ ($n$ times) $= 3^{2\cdot 3^{2\cdot 3^{2\cdot 3^2{\dots}}}}$, a tower containing $n$ factors of $2$.</p>
<p>$2\cdot 3^2 < 3^{2+1} = 3^3$, and more generally $2\cdot 3^{x} < 3^{x+1}$ for every $x\ge 0$ (since $2<3$), so each factor of $2$ can be absorbed by raising the exponent above it by one.</p>
<p>$\implies 9^{9^{9^{9^{\dots}}}} n$ times $\lt 3^{3^{3^{3^{\dots}}}} n+1 $ times</p>
<p>So, $ m = n + 1 $</p>
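<p>The conclusion $m=n+1$ can be checked directly for small $n$ (my addition, in Python; for $n=3$ the numbers are too large, so the comparison is done one exponent level down, as in the argument above):</p>

```python
def tower(base, k):
    # base^^k: a power tower of k copies of base
    v = 1
    for _ in range(k):
        v = base ** v
    return v

# 3^^n < 9^^n < 3^^(n+1) for n = 1, 2 by direct computation.
minimal = all(tower(3, n) < tower(9, n) for n in (1, 2))
enough = all(tower(9, n) < tower(3, n + 1) for n in (1, 2))

# For n = 3: 9^^3 = 9^(9^9) = 3^(2 * 3^18) and 3^^4 = 3^(3^27),
# so it suffices to compare the exponents.
n3 = 2 * 3 ** 18 < 3 ** 27
```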
|
2,917,299 | <p><strong>Let $h: R\to S$ be a ring homomorphism. Let $P\subset R$ be a prime ideal.</strong>
<strong>Give an example to show that in general $h(P)$ is not an ideal of $S$</strong></p>
<p>The first thing I think is to take $R=\mathbb{Z}$ and $P=(2)$ but I do not know how to take $S$ or if this works in this way, any help is appreciated.</p>
<p><em>Note:</em> $R$ and $S$ are commutative rings with unit.</p>
| Yanko | 426,577 | <p>Consider $S=\mathbb{R}$ with the inclusion map $h(x)=x$, $R=\mathbb{Z}$, and any prime ideal of $\mathbb{Z}$, say $P=(2)$ as you suggested. Then $h(P)=2\mathbb{Z}$ is not an ideal of $\mathbb{R}$: for instance $\frac{1}{2}\cdot 2=1\notin 2\mathbb{Z}$, so $h(P)$ does not absorb multiplication by elements of $S$.</p>
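<p>A quick arithmetic check of this example (my addition, in Python; I use $\mathbb{Q}$ in place of $\mathbb{R}$ so the computation is exact, but the point is the same): $h(P)=2\mathbb{Z}$ fails to absorb multiplication by elements of the larger ring.</p>

```python
from fractions import Fraction

def in_2Z(q):
    # is q an even integer, i.e. an element of h(P) = 2Z?
    return q.denominator == 1 and q.numerator % 2 == 0

x = Fraction(2)          # 2 lies in h(P) = 2Z
r = Fraction(1, 2)       # 1/2 lies in the bigger ring
prod = r * x             # = 1, which is not in 2Z, so h(P) is not an ideal
```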
|
2,874,763 | <p>I know for 3-D $$\nabla^2 \left(\frac1r\right)=-4\pi\, \delta(\vec{r})\,.$$
I would like to know, what is $$\text{Div}\cdot\text{Grad}\left(\frac{1}{r^2}\right)$$ in 4-Dimensions ($r^2=x_1^2+x_2^2+x_3^2+x_4^2$)?</p>
| Batominovski | 72,152 | <p>Let $\sigma_k$ denote the hypersurface area measure on the $k$-sphere. For example,
$$\text{d}\sigma_1(\varphi_1)=\text{d}\phi_1\,,\,\,\text{d}\sigma_2(\varphi_1,\varphi_2)=\sin(\varphi_2)\,\text{d}\varphi_1\,\text{d}\varphi_2\,,$$
and
$$\text{d}\sigma_3(\varphi_1,\varphi_2,\varphi_3)=\sin(\varphi_2)\,\sin^2(\varphi_3)\,\text{d}\varphi_1\,\text{d}\varphi_2\,\text{d}\varphi_3\,.$$
In general,
$$\text{d}\sigma_k(\varphi_1,\varphi_2,\ldots,\varphi_k)=\prod_{j=1}^k\,\left(\sin^{j-1}(\varphi_j)\,\text{d}\varphi_j\right)\text{ for all }k=1,2,3,\ldots\,.$$
For simplicity, write $\Phi_k$ for the angular tuple $\left(\varphi_1,\varphi_2,\ldots,\varphi_k\right)$.
The volume element in the polar coordinates of $\mathbb{R}^n$ is given by $$\text{d}\lambda_n(\mathbf{x})=r^{n-1}\,\text{d}r\,\text{d}\sigma_{n-1}\left(\Phi_{n-1}\right)\,,$$
if $\mathbf{x}\in\mathbb{R}^n$ is represented by the polar coordinates $(r,\varphi_1,\varphi_2,\ldots,\varphi_{n-1})$. Define
$$\Psi_n(\mathbf{x}):=\frac{1}{\|\mathbf{x}\|_2^{n-2}}\text{ for all }\mathbf{x}\in\mathbb{R}^n\text{ for }n>2\,,$$
where $\|\_\|_2$ is the usual Euclidean norm on $\mathbb{R}^n$.</p>
<p>For a differentiable function $f:\Omega\to\mathbb{R}$, where $\Omega$ is an open region in $\mathbb{R}^n$, the gradient $\boldsymbol{\nabla}f:\Omega\to\mathbb{R}^n$ is given by
$$(\boldsymbol{\nabla}f)(\mathbf{x}):=\sum_{j=1}^n\,\left(\frac{\partial f}{\partial x_j}(\mathbf{x})\right)\,\mathbf{e}_j\text{ for all }\mathbf{x}\in\Omega\,,$$
where $\mathbf{x}=\left(x_1,x_2,\ldots,x_n\right)$ and $\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n$ are the usual standard basis vectors of $\mathbb{R}^n$. For a differentiable vector function $\mathbf{v}:\Omega\to\mathbb{R}^n$, the divergence $\boldsymbol{\nabla}\cdot\mathbf{v}:\Omega\to\mathbb{R}$ is the function
$$(\boldsymbol{\nabla}\cdot\mathbf{v})(\mathbf{x}):=\sum_{j=1}^n\,\left(\frac{\partial v_j}{\partial x_j}(\mathbf{x})\right)\,,$$
where $\mathbf{v}=\left(v_1,v_2,\ldots,v_n\right)$. The Laplacian operator $\nabla^2$ is defined by
$$\nabla^2f:=\boldsymbol{\nabla}\cdot(\boldsymbol{\nabla}f)\,.$$</p>
<p>We want to evaluate
$$L(f):=\int_{\mathbb{R}^n}\,\Psi_n(\mathbf{x})\,(\nabla^2f)(\mathbf{x})\,\text{d}\lambda_n(\mathbf{x})$$
for a well behaved function $f$ (say, $f$ is sufficiently fast decaying at large distances from the origin $\boldsymbol{0}_n$ of $\mathbb{R}^n$, at least equipped with second weak derivatives, and with bounded first weak derivatives). We can see that the distribution $\nabla^2\Psi_n$ satisfies
$$\int_{\mathbb{R}^n}\,f(\mathbf{x})\,(\nabla^2\Psi_n)(\mathbf{x})\,\text{d}\lambda_n(\mathbf{x})=(-1)^2\,\int_{\mathbb{R}^n}\,\Psi_n(\mathbf{x})\,(\nabla^2f)(\mathbf{x})\,\text{d}\lambda_n(\mathbf{x})=L(f)\,.$$
(The equality above is where the assumption that $f$ is fast decaying at large distances comes into play.) Note that
$$\left(\boldsymbol{\nabla}\Psi_n\right)(\mathbf{x}) = -(n-2)\,\left(\frac{\mathbf{x}}{\|\mathbf{x}\|_2^{n}}\right)\text{ for all }\mathbf{x}\neq \boldsymbol{0}_n\,,$$
and so
$$(\nabla^2\Psi_n)(\mathbf{x})=-(n-2)\,\left(\frac{n}{\|\mathbf{x}\|_2^n}-n\,\sum_{j=1}^n\,\frac{x_j^2}{\|\mathbf{x}\|_2^{n+2}}\right)=0\text{ for every }\mathbf{x}\neq \boldsymbol{0}_n\,.$$
That is,
$$L(f)=\lim_{R\to0^+}\,\int_{\mathbb{R}\setminus B_R(\boldsymbol{0}_n)}\,\Psi_n(\mathbf{x})\,\big(\nabla^2 f\big)(\mathbf{x})\,\text{d}\lambda_n(\mathbf{x})\,,\tag{*}$$
where $B_\rho(\mathbf{y})$ is the open ball centered at $\mathbf{y}\in\mathbb{R}^n$ with radius $\rho>0$.</p>
<p>We can write
$$f(\mathbf{x})\,(\nabla^2\Psi_n)(\mathbf{x})=\big(\boldsymbol{\nabla}\cdot(f\,\boldsymbol{\nabla}\Psi_n)\big)(\mathbf{x})-\big((\boldsymbol{\nabla}f)(\mathbf{x})\big)\cdot\big((\boldsymbol{\nabla}\Psi_n)(\mathbf{x})\big)$$
and
$$\Psi_n(\mathbf{x})\,(\nabla^2f)(\mathbf{x})=\big(\boldsymbol{\nabla}\cdot(\Psi_n\,\boldsymbol{\nabla}f)\big)(\mathbf{x})-\big((\boldsymbol{\nabla}\Psi_n)(\mathbf{x})\big)\cdot\big((\boldsymbol{\nabla}f)(\mathbf{x})\big)\,.$$
Thus,
$$f(\mathbf{x})\,(\nabla^2\Psi_n)(\mathbf{x})=\Psi_n(\mathbf{x})\,(\nabla^2f)(\mathbf{x})+(\boldsymbol{\nabla}\cdot \mathbf{v})(\mathbf{x})\,,$$
where
$$\mathbf{v}:=f\,(\boldsymbol{\nabla}\Psi_n)-\Psi_n\,(\boldsymbol{\nabla}f)\,.$$
For $\mathbf{x}\neq \boldsymbol{0}_n$, we obtain
$$\Psi_n(\mathbf{x})\,(\nabla^2f)=-(\boldsymbol{\nabla}\cdot \mathbf{v})(\mathbf{x})\,.$$ From (*), we get
$$L(f)=-\lim_{R\to0^+}\,\int_{\mathbb{R}^n\setminus B_R(\boldsymbol{0}_n)}\,(\boldsymbol{\nabla}\cdot\mathbf{v})(\mathbf{x})\,\text{d}\lambda_n(\mathbf{x})\,.$$
Using the Divergence Theorem, we obtain
$$L(f)=\lim_{R\to 0^+}\,\int_{\partial B_R(\boldsymbol{0}_n)}\,\mathbf{v}(\mathbf{x})\cdot\mathbf{x}\,\|\mathbf{x}\|_2^{n-2}\,\text{d}\sigma_{n-1}(\Phi_{n-1})\,.$$
That is,
$$\begin{align}L(f)&=\lim_{R\to 0^+}\,\int_{\partial B_R(\boldsymbol{0}_n)}\,\Big(f(\mathbf{x})\,(\boldsymbol{\nabla}\Psi_n)(\mathbf{x})-\Psi_n(\mathbf{x})\,(\boldsymbol{\nabla}f)(\mathbf{x})\Big)\cdot\mathbf{x}\,\|\mathbf{x}\|_2^{n-2}\,\text{d}\sigma_{n-1}(\Phi_{n-1})\\
&=\lim_{R\to0^+}\,\left(\int_{\partial B_R(\boldsymbol{0}_n)}\,(-n+2)\,f(\mathbf{x})\,\text{d}\sigma_{n-1}(\Phi_{n-1})-\int_{\partial B_R(\boldsymbol{0}_n)}\,\mathbf{x}\cdot(\boldsymbol{\nabla}f)(\mathbf{x})\,\text{d}\sigma_{n-1}(\Phi_{n-1})\right)
\\
&=-(n-2)\,f(\boldsymbol{0}_n)\,\int_{\partial B_1(\boldsymbol{0}_n)}\,\text{d}\sigma_{n-1}(\Phi_{n-1})-0=-(n-2)\,\Sigma_{n-1}\,f(\boldsymbol{0}_n)\,,\end{align}$$
where $\Sigma_k$ denotes the hypersurface measure of the unit $k$-sphere. (For example, $\Sigma_1=2\pi$, $\Sigma_2=4\pi$, $\Sigma_3=2\pi^2$, and for every $k=1,2,3,\ldots$, $\Sigma_k=\dfrac{2\pi^{\frac{k+1}{2}}}{\Gamma\left(\frac{k+1}{2}\right)}$, where $\Gamma$ is the gamma function.)</p>
<p>In conclusion,
$$\left(\nabla^2\Psi_n\right)(\mathbf{x})=-\frac{2(n-2)\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}\,\delta_n(\mathbf{x})\,,\tag{#}$$
where $\delta_n$ is the $n$-dimensional Dirac delta distribution. In particular,
$$\left(\nabla^2\Psi_3\right)(\mathbf{x})=-4\pi\,\delta_3(\mathbf{x})\text{ and }\left(\nabla^2\Psi_4\right)(\mathbf{x})=-4\pi^2\,\delta_4(\mathbf{x})\,.$$
In fact, it also makes sense to consider $\Psi_1$, where $\Psi_1(x)=|x|$ for all $x\in\mathbb{R}$. The same formula (#) works and we can readily check that
$$\left(\nabla^2\Psi_1\right)(x)=2\,\delta_1(x)\text{ for all }x\in\mathbb{R}\,.$$ For $\Psi_2$, (#) also works trivially, noting that $\Psi_2\equiv 1$ almost everywhere, whence $\nabla^2\Psi_2\equiv 0$ almost everywhere.</p>
<p>If you want to obtain a similar result in the $2$-dimensional case, then you can take $$\Xi(\mathbf{x}):=\ln\big(\|\mathbf{x}\|_2\big)\text{ for all }\mathbf{x}\in\mathbb{R}^2\setminus\{\boldsymbol{0}_2\}\,.$$
Then,
$$L(f):=\int_{\mathbb{R}^2}\,\Xi(\mathbf{x})\,(\nabla^2f)(\mathbf{x})\,\text{d}\lambda_2(\mathbf{x})=\int_{\mathbb{R}^2}\,f(\mathbf{x})\,(\nabla^2\Xi)(\mathbf{x})\,\text{d}\lambda_2(\mathbf{x})$$
for all well behaved functions $f:\mathbb{R}^2\to\mathbb{R}$. Note that
$$(\boldsymbol{\nabla}\Xi)(\mathbf{x})=\frac{\mathbf{x}}{\|\mathbf{x}\|_2^2}\text{ and }(\nabla^2\Xi)(\mathbf{x})=0\text{ for all }\mathbf{x}\neq \boldsymbol{0}_2\,.$$
We perform the same trick as before to get
$$\begin{align}L(f)&=\lim_{R\to0^+}\,\int_{\partial B_R(\boldsymbol{0}_2)}\,\Big(f(\mathbf{x})\,(\boldsymbol{\nabla}\Xi)(\mathbf{x})-\Xi(\mathbf{x})\,(\boldsymbol{\nabla}f)(\mathbf{x})\Big)\cdot\mathbf{x}\,\text{d}\sigma_1(\Phi_1)
\\&=f(\boldsymbol{0}_2)\,\int_{\partial B_1(\boldsymbol{0}_2)}\,\text{d}\sigma_1(\Phi_1)-0=\Sigma_1\,f(\boldsymbol{0}_2)=2\pi\,f(\boldsymbol{0}_2)\,.
\end{align}$$
That is,
$$(\nabla^2\Xi)(\mathbf{x})=2\pi\,\delta_2(\mathbf{x})\,.$$</p>
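<p>The constants $\Sigma_k$ quoted above, and the coefficient $(n-2)\,\Sigma_{n-1}$ in the final formula, can be sanity-checked numerically (my addition, in Python):</p>

```python
from math import gamma, pi, isclose

def sphere_area(k):
    # hypersurface measure of the unit k-sphere: 2 * pi^((k+1)/2) / Gamma((k+1)/2)
    return 2 * pi ** ((k + 1) / 2) / gamma((k + 1) / 2)

# coefficient (n - 2) * Sigma_{n-1} appearing in the Laplacian of Psi_n
coeff = lambda n: (n - 2) * sphere_area(n - 1)
```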
|
818,161 | <p>Suppose that repetitions are not allowed.</p>
<p>There are $6 \cdot 5 \cdot 4 \cdot 3$ numbers with $4$ digits that can be formed from the digits $1,2,3,5,7,8$.</p>
<p>How many of them contain the digits $3$ and $5$?</p>
<p>I thought that I could subtract from the total number of numbers those that do not contain $3 \text{ and } 5$. I thought that the latter is equal to $4 \cdot 3 \cdot 2$, because then we can only use the digits $1,2,7,8$.</p>
<p>So the result would be $6 \cdot 5 \cdot 4 \cdot 3-4 \cdot 3 \cdot 2=336$, but I found a different result in my textbook.</p>
<p>Is the way I did it wrong??</p>
| Community | -1 | <p>There are $6!$ combinations of six different numbers, and there are $6^6$ combinations of six rolls. The probability is then:
$$
p = \frac{6!}{6^6}
$$</p>
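<p>An exhaustive count (my addition, in Python) confirms the ratio: of the $6^6$ equally likely outcomes, exactly $6!$ use six different numbers.</p>

```python
from itertools import product
from math import factorial

total = 0
distinct = 0
for roll in product(range(1, 7), repeat=6):  # all 6^6 possible outcomes
    total += 1
    if len(set(roll)) == 6:                  # all six values different
        distinct += 1

probability = distinct / total
```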
|
2,103,602 | <p>What is the maximum value of
$\displaystyle{{1 + 3a^{2} \over \left(a^{2} + 1\right)^{2}}}$, given that $a$ is a real number, and for what values of $a$ does it occur ?.</p>
| Simply Beautiful Art | 272,831 | <p>Let $x=a^2\ge0$. We then have to look at</p>
<p>$$\frac{1+3x}{(x+1)^2}$$</p>
<p>upon differentiating and setting equal to $0$, we get</p>
<p>$$\frac{3(x+1)^2-2(x+1)(1+3x)}{(x+1)^4}=0\\\implies3(x+1)^2=2(x+1)(1+3x)\\\implies3(x+1)=2(1+3x)\\\implies x=\frac13$$</p>
<p>Thus, the relative maxima (or minima) occurs at</p>
<p>$$a=\pm\sqrt x=\pm\frac{\sqrt3}3$$</p>
|
2,103,602 | <p>What is the maximum value of
$\displaystyle{{1 + 3a^{2} \over \left(a^{2} + 1\right)^{2}}}$, given that $a$ is a real number, and for what values of $a$ does it occur ?.</p>
| Michael Rozenberg | 190,319 | <p>Let $a^2=x$.</p>
<p>Hence $\frac{1+3x}{(1+x)^2}\leq\frac{9}{8}$ is equivalent to $(3x-1)^2\geq0$, which is true.</p>
<p>The equality occurs for $x=\frac{1}{3}$, i.e. for $a=\pm\frac{1}{\sqrt{3}}$, which says that the answer is $\frac{9}{8}$.</p>
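<p>A quick numerical scan (my addition, in Python) agrees with the bound and with the equality case:</p>

```python
f = lambda a: (1 + 3 * a * a) / (a * a + 1) ** 2

# scan a over [-3, 3] on a fine grid; the maximum should be 9/8, near a = ±1/sqrt(3)
grid_max = max(f(i / 10000.0) for i in range(-30000, 30001))
at_critical = f(3 ** -0.5)  # value at a = 1/sqrt(3)
```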
|
4,309,797 | <p>I have a question which asks me to compute the double integral
<span class="math-container">$$\iint_B(y^2-x^2)\,dA$$</span> where B is the region enclosed by <span class="math-container">$$y=x,\quad y=x+2,\quad y=\frac{1}{x},\quad y=\frac{2}{x}.$$</span> I made a change of variables by letting <span class="math-container">$$u=xy \qquad \text{and}\qquad v=y-x$$</span> which gives me a very nice region in the <span class="math-container">$uv$</span>-plane <span class="math-container">$$1\le u\le 2\qquad\text{and}\qquad 0\le v\le 2$$</span> However, I am having a hard time representing the integrand in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>: <span class="math-container">$$y^2-x^2=(y-x)(y+x)=v(y+x)$$</span> Is it possible to express this in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>, or am I wasting my time?</p>
| Sidvhid Hsinynjad | 866,977 | <p>Hint:</p>
<p>You may use the following identity</p>
<p><span class="math-container">$$(y+x)^2-(y-x)^2=4xy$$</span></p>
|
246,606 | <p>I have matrix:</p>
<p>$$
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$</p>
<p>And I want to calculate $\det{A}$, so I have written:</p>
<p>$$
\begin{array}{|cccc|ccc}
1 & 2 & 3 & 4 & 1 & 2 & 3 \\
2 & 3 & 3 & 3 & 2 & 3 & 3 \\
0 & 1 & 2 & 3 & 0 & 1 & 2 \\
0 & 0 & 1 & 2 & 0 & 0 & 1
\end{array}
$$</p>
<p>From this I get that:</p>
<p>$$
\det{A} = (1 \cdot 3 \cdot 2 \cdot 2 + 2 \cdot 3 \cdot 3 \cdot 0 + 3 \cdot 3 \cdot 0 \cdot 0 + 4 \cdot 2 \cdot 1 \cdot 1) - (3 \cdot 3 \cdot 0 \cdot 2 + 2 \cdot 2 \cdot 3 \cdot 1 + 1 \cdot 3 \cdot 2 \cdot 0 + 4 \cdot 3 \cdot 1 \cdot 0) = (12 + 0 + 0 + 8) - (0 + 12 + 0 + 0) = 8
$$</p>
<p>But WolframAlpha is saying that <a href="http://www.wolframalpha.com/input/?i=det+%7B%7B1%2C2%2C3%2C4%7D%2C%7B2%2C3%2C3%2C3%7D%2C%7B0%2C1%2C2%2C3%7D%2C%7B0%2C0%2C1%2C2%7D%7D&dataset=" rel="nofollow">it is equal 0</a>. So my question is where am I wrong?</p>
| Brusko651 | 50,608 | <p>The trick you are applying (the Rule of Sarrus) only works for $3\times 3$ matrices.</p>
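<p>Indeed, an exact cofactor expansion (my addition, in Python) reproduces WolframAlpha's value of $0$ for this matrix, confirming that the Sarrus-style diagonal scheme used in the question is not valid for $4\times 4$ determinants:</p>

```python
def det(m):
    # Laplace (cofactor) expansion along the first row; exact for integer entries
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[1, 2, 3, 4],
     [2, 3, 3, 3],
     [0, 1, 2, 3],
     [0, 0, 1, 2]]
```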
|
2,195,287 | <blockquote>
<p>Knowing that $p$ is prime and $n$ is a natural number show that
$$n^{41}\equiv n\bmod 55$$
using Fermat's little theorem
$$n^p\equiv n\bmod p$$</p>
</blockquote>
<p>If the exercise was to show that
$$n^{41}\equiv n\bmod 11$$ I would just rewrite $n^{41}$ as a power of $11$ and would easily prove that the congruence is true in this case but I cannot apply the same logic when I have $\bmod55$ since $n^{41}$ cannot be written as power of $55$.</p>
<p>Any hint?</p>
| user21820 | 21,820 | <p>Actually the chinese remainder theorem is unnecessary here.</p>
<p>$n^{41} \equiv n \pmod{5}$ and hence $n^{41}-n = 5c$ for some integer $c$.</p>
<p>$n^{41} \equiv n \pmod{11}$ and hence $11 \mid n^{41}-n = 5c$.</p>
<p>$11$ is prime and does not divide $5$, so by Euclid's lemma $11 \mid c$.</p>
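<p>An exhaustive check over a range of $n$ (my addition, in Python, using built-in modular exponentiation) agrees with the congruence and with the two prime moduli used in the argument:</p>

```python
# n^41 ≡ n (mod 55), and the two prime factors separately
mod55 = all(pow(n, 41, 55) == n % 55 for n in range(1000))
mod5 = all(pow(n, 41, 5) == n % 5 for n in range(1000))
mod11 = all(pow(n, 41, 11) == n % 11 for n in range(1000))
```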
|
255,295 | <p>I just did one exercise stating: Prove that the linear map $M: X \rightarrow C([0,1])$ is continuous iff for every $t\in[0,1]$ the rule $x\mapsto (Mx)(t)$ defines a continuous linear functional on $X$.
The next exercise stated: State and prove a similar continuity criterion for linear maps $M:X\rightarrow Y$, where $Y$ is an arbitrary Banach space.</p>
<p>Is there some theorem which states that $M$ is continuous iff $x\mapsto \ell(Mx)$ is continous for all linear functionals $\ell:Y\rightarrow \mathbb{K}$ in $Y'$? or what does it mean?</p>
<p>I posted a new try to a proof, can someone please confirm it or post another one?</p>
| Johan | 49,267 | <p>The map $x\mapsto \hat{\ell}(Mx)$ is a linear map $\ell \colon X \to \mathbb{K}$.
By the continuity of $\hat{\ell}$ and uniform boundedness we have $\sup_{\ell} \|\ell\| = c$.</p>
<p>\begin{equation}
\begin{split}
\|M\| =& \sup_{\|x\| = 1} \|Mx\| = \sup_{\|\hat{\ell}\| \leq 1}\sup_{\|x\| = 1} \|\hat{\ell}(Mx)\| \leq \sup_{\hat{\ell}} \sup_{\|x\| = 1} \|\hat{\ell}(Mx)\| \\
=& \sup_{\ell , \|x\| = 1} \|\ell x\| \leq \|\ell\|= c
\end{split}
\end{equation}
Is this correct?</p>
|
255,295 | <p>I just did one exercise stating:
Prove that the linear map $M: X \rightarrow C([0,1])$, is continuous iff for every $t\in[0,1]$, the rule $x\rightarrow (Mx)(t)$ defines a continuous linear functional on X.
the next exercise stated:
State, and prove a similar continuity criterion for linear maps $M:X\rightarrow Y$ where Y is an arbitrary Banach space.</p>
<p>Is there some theorem which states that $M$ is continuous iff $x\mapsto \ell(Mx)$ is continous for all linear functionals $\ell:Y\rightarrow \mathbb{K}$ in $Y'$? or what does it mean?</p>
<p>I posted a new try to a proof, can someone please confirm it or post another one?</p>
| Martin Argerami | 22,857 | <p>Your answer seems to be circling around the right ideas, but I'm not sure if you are using them in the right way. </p>
<p>Fix $x\in X$. Consider the map $T_x:Y'\to\mathbb K$, given by $T_x(\ell)=\ell(Mx)$. Note that
$$
\|T_x\|=\sup\{|T_x(\ell)|:\ \|\ell\|=1\}=\sup\{|\ell(Mx)|:\ \|\ell\|=1\}=\|Mx\|
$$
where the last equality is a classical application of the Hahn-Banach Theorem. </p>
<p>Now apply the Uniform Boundedness Theorem over the unit ball of $X$ (so we need $X$ to be Banach): for a fixed $\ell$,
$$
\sup\{|T_x(\ell)|:\ \|x\|\leq1\}=\sup\{|\ell(Mx)|:\ \|x\|\leq1\}=\|\ell\circ M\|<\infty.
$$
By the UBP, there exists $c>0$ such that $\|T_x\|\leq c$ for all $x$ with $\|x\|\leq1$. So
$$
\|Mx\|=\|T_x\|\leq c,\ \text{ if }\|x\|\leq1.
$$
So $M$ is bounded on the unit ball, and is thus continuous. </p>
|
292,518 | <p>Can someone help me with the following proof involving positive definite matrices:</p>
<p>Suppose $X\succ 0$ positive definite. Show that $X-v{v^T}\succ 0$ if and only if ${v^T}X^{-1}v \le 1$.</p>
<p>Thanks in advance. </p>
| copper.hat | 27,978 | <p>Here is another approach. I assume $X$ is symmetric.</p>
<p>To simplify life, all square matrices below are assumed symmetric.</p>
<p>Since $X>0$ it has a square root satisfying $X= (X^{\frac{1}{2}})^2$.</p>
<p>Note that if $B$ is invertible, then $A>0$ iff $BAB>0$. Also note that $I-u u^T>0$ iff $\|u\| <1$. To see the latter, note that $u$ is an eigenvector corresponding to the eigenvalue $1-\|u\|^2$, and all other eigenvalues are $1$.</p>
<p>Hence $X-v v^T >0$ iff $I-(X^{-\frac{1}{2}} v)(X^{-\frac{1}{2}} v)^T >0$ iff $\|X^{-\frac{1}{2}}v\|^2 < 1$.</p>
<p>Since $\|X^{-\frac{1}{2}}v\|^2 = v^T X^{-1} v$, we are finished.</p>
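To see the equivalence in action, here is a small numeric sanity check (a sketch; the $2\times 2$ example matrix and test vectors are arbitrary choices, and Sylvester's criterion stands in for positive definiteness):

```python
X = [[2.0, 0.0], [0.0, 3.0]]
X_inv = [[0.5, 0.0], [0.0, 1.0 / 3.0]]   # inverse of X, precomputed by hand

def is_pos_def_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix: both leading minors positive
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

def x_minus_vvt(v):
    return [[X[i][j] - v[i] * v[j] for j in range(2)] for i in range(2)]

def vt_xinv_v(v):
    # the criterion v^T X^{-1} v
    return sum(v[i] * X_inv[i][j] * v[j] for i in range(2) for j in range(2))

test_vs = ([1.0, 1.0], [1.2, 1.2], [0.5, -0.3], [1.5, 0.9])
agree = all(is_pos_def_2x2(x_minus_vvt(v)) == (vt_xinv_v(v) < 1.0) for v in test_vs)
```

On these examples the two conditions $X-vv^T\succ 0$ and $v^TX^{-1}v<1$ agree, as the proof predicts.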
|
292,518 | <p>Can someone help me with the following proof involving positive definite matrices:</p>
<p>Suppose $X\succ 0$ positive definite. Show that $X-v{v^T}\succ 0$ if and only if ${v^T}X^{-1}v \le 1$.</p>
<p>Thanks in advance. </p>
| user1551 | 1,551 | <p>I prefer copper.hat's approach, but it doesn't hurt to view the problem from other perspectives.</p>
<p>The assertion in the problem statement is obvious if $v=0$. So, assume $v\not=0$. Since $X$ is positive definite, it defines an inner product $\langle u,w\rangle=w^TXu$ on $\mathbb{R}^n$. Therefore, every nonzero vector $w\in\mathbb{R}^n$ can be written as $w=a(X^{-1}v)+bu$, where $(a,b)\not=(0,0)$ and $0\not=u\perp (X^{-1}v)$ w.r.t. the aforementioned inner product. That is, $0=\langle X^{-1}v,u\rangle=u^TX(X^{-1}v)=u^Tv$. So,
\begin{align*}
&\phantom{=} w^T(X - vv^T)w\\
&=(aX^{-1}v + bu)^T X (aX^{-1}v + bu) - (aX^{-1}v + bu)^T vv^T(aX^{-1}v + bu)\\
&=a^2 (v^T X^{-1}v)(1 - v^T X^{-1}v)
+ b^2 u^T Xu.
\end{align*}
Hence $X-vv^T$ is positive semidefinite if and only if $1 - v^T X^{-1}v\ge0$.</p>
|
3,676,911 | <p>I'm trying to understand what is the Hessian matrix of <span class="math-container">$f\colon\mathbb{R}^{n}\to\mathbb{R}$</span>
defined by <span class="math-container">$f\left(x\right)=\left\langle Ax,x\right\rangle \cdot\left\langle Bx,x\right\rangle $</span>
where <span class="math-container">$A,B$</span> are symetric <span class="math-container">$n\times n$</span> matrices. What I know is that
if we let <span class="math-container">$g\left(x\right)=\left\langle Ax,x\right\rangle $</span> and <span class="math-container">$h\left(x\right)=\left\langle Bx,x\right\rangle $</span>
then <span class="math-container">$\nabla g\left(x\right)=2Ax,\nabla h\left(x\right)=2Bx$</span> and
<span class="math-container">$\nabla^{2}g\left(x\right)=2A,\nabla^{2}h\left(x\right)=2B$</span>. Also
by the product rule we have <span class="math-container">$\left(fg\right)'=f'g+fg'$</span> which then
gives us
<span class="math-container">\begin{align*}
\left(fg\right)'' & =f''g+f'g'+f'g'+fg''=\\
& =f''g+2f'g'+fg''
\end{align*}</span>
Regarding <span class="math-container">$\nabla f\left(x\right)$</span> as a column vector, I tried to
implement this on the given <span class="math-container">$f\left(x\right)$</span> and what I got is
<span class="math-container">$$
\nabla f\left(x\right)=\nabla\left(gh\right)\left(x\right)=2Ax\cdot\left\langle Bx,x\right\rangle +\left\langle Ax,x\right\rangle \cdot2Bx
$$</span>
which seems to have worked fine with a concrete example. But then
I got to the Hessian:
<span class="math-container">\begin{align*}
\nabla^{2}f\left(x\right) & =\nabla^{2}\left(gh\right)\left(x\right)=2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{2Ax\cdot2Bx}}+\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{2Ax\cdot2Bx}}+\left\langle Ax,x\right\rangle \cdot2B=\\
& =2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{8Ax\cdot Bx}}+\left\langle Ax,x\right\rangle \cdot2B
\end{align*}</span>
Now as <span class="math-container">$Ax,Bx$</span> in <span class="math-container">$\left(\ast\right)$</span> are both column vectors I
thought I should try this instead
<span class="math-container">$$
\nabla^{2}f\left(x\right)=2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\ast\right)}}{\underbrace{8Ax\cdot\left(Bx\right)^{T}}}+\left\langle Ax,x\right\rangle \cdot2B
$$</span>
But that didn't work with my example.</p>
<p>In general I feel the whole process of differentiating functions that
are represented by matrices is quite a mystery to me when it comes
to where I should transpose and so on. Any help is appreciated. Thanks
in advance.</p>
| J. Heller | 309,909 | <p>We can write formulas for <span class="math-container">$f_i$</span> and <span class="math-container">$f_{ij}$</span> (individual first and second partial derivatives) of <span class="math-container">$f$</span>:
<span class="math-container">$$
f_i(x) = g_i(x)h(x) + g(x)h_i(x)
$$</span>
and
<span class="math-container">$$
f_{ij}(x) = g_{ij}(x)h(x) + g_i(x)h_j(x) + g_j(x)h_i(x) + g(x)h_{ij}(x).
$$</span></p>
<p>We can also write the quadratic form <span class="math-container">$x^{\textrm{T}} A x$</span> in a form that is easier to differentiate:
<span class="math-container">$$
g(x) = \sum_i \sum_j A_{ij}x_i x_j
$$</span>
where <span class="math-container">$A_{ij}=A_{ji}$</span> is row <span class="math-container">$i$</span>, column <span class="math-container">$j$</span> of <span class="math-container">$A$</span> and <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>th variable. So
<span class="math-container">$$
\begin{align}
g_k(x) &= \sum_i \sum_j A_{ij}(\delta_{ik}x_j + x_i\delta_{jk}) \\
&= \sum_j A_{kj} x_j + \sum_i A_{ik}x_i \\
&= \sum_i 2A_{ik}x_i \\
&= 2(A_{k*} \cdot x)
\end{align}
$$</span>
where <span class="math-container">$\delta_{ij}$</span> is the Kronecker delta function and <span class="math-container">$A_{k*}$</span> is the <span class="math-container">$k$</span>th row of <span class="math-container">$A$</span>. The second partial derivative with respect to variables <span class="math-container">$k$</span> and <span class="math-container">$l$</span> is
<span class="math-container">$$
g_{kl}(x) = \sum_i 2A_{ik}\delta_{il} = 2A_{kl}.
$$</span></p>
<p>Using these formulas for the partial derivatives of <span class="math-container">$g$</span> (and <span class="math-container">$h$</span>) gives the desired result:
<span class="math-container">$$
f_{ij}(x) = 2A_{ij}h(x) + 4(A_{i*}\cdot x)(B_{j*}\cdot x) + 4(A_{j*}\cdot x)(B_{i*}\cdot x)
+ 2B_{ij}g(x).
$$</span></p>
<p>
I derived the identities <span class="math-container">$\nabla g = 2Ax$</span> and <span class="math-container">$\nabla^2 g = 2A$</span> in component form and then used this to compute the individual components of the Hessian of <span class="math-container">$f$</span>. The point is that when working with matrices, it is often easier to break everything down into individual components. For example, in a matrix product <span class="math-container">$PQ$</span>, you would work with <span class="math-container">$(PQ)_{ij}$</span> instead of the matrix product itself.
</p>
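The formula can also be checked numerically with central differences (a sketch; the symmetric $2\times2$ matrices and the test point below are made-up examples):

```python
# check the closed-form Hessian of f(x) = <Ax,x><Bx,x> against finite differences
A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, -1.0], [-1.0, 4.0]]

def quad(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

def f(x):
    return quad(A, x) * quad(B, x)

def row_dot(M, i, x):
    # (M x)_i, i.e. M_{i*} . x
    return sum(M[i][j] * x[j] for j in range(2))

def hessian_formula(x):
    g, h = quad(A, x), quad(B, x)
    return [[2 * A[i][j] * h
             + 4 * row_dot(A, i, x) * row_dot(B, j, x)
             + 4 * row_dot(A, j, x) * row_dot(B, i, x)
             + 2 * B[i][j] * g
             for j in range(2)] for i in range(2)]

def hessian_numeric(x, eps=1e-4):
    # second-order central differences of f
    def fd(i, j):
        def shifted(si, sj):
            y = list(x)
            y[i] += si * eps
            y[j] += sj * eps
            return f(y)
        return (shifted(1, 1) - shifted(1, -1)
                - shifted(-1, 1) + shifted(-1, -1)) / (4 * eps * eps)
    return [[fd(i, j) for j in range(2)] for i in range(2)]

x0 = [0.7, -0.4]
H_exact = hessian_formula(x0)
H_fd = hessian_numeric(x0)
```

The two Hessians agree to finite-difference accuracy, and the closed form is manifestly symmetric, unlike the $8Ax\cdot(Bx)^T$ term the question tried.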
|
408,601 | <p>I am asked to find the derivative of $\left(x^x\right)^x$. So I said let $$y=(x^x)^x \Rightarrow \ln y=x\ln x^x \Rightarrow \ln y = x^2 \ln x.$$Differentiating both sides, $$\frac{dy}{dx}=y(2x\ln x+x)=x^{x^2+1}(2\ln x+1).$$</p>
<p>Now I checked this answer with Wolfram Alpha and I get that this is only correct when $x\in\mathbb{R},~x>0$. I see that if $x<0$ then $(x^x)^x\neq x^{x^2}$ but if $x$ is negative $\ln x $ is meaningless anyway (in real analysis). Would my answer above be acceptable in a first year calculus course? </p>
<p>So, how do I get the correct general answer, that $$\frac{dy}{dx}=(x^x)^x (x+x \ln(x)+\ln(x^x)).$$</p>
<p>Thanks in advance. </p>
| Fly by Night | 38,495 | <p>If $y=(x^x)^x$ then $\ln y = x\ln(x^x) = x^2\ln x$. Then apply the product rule:</p>
<p>$$ \frac{1}{y} \frac{dy}{dx} = 2x\ln x + \frac{x^2}{x} = 2x\ln x + x$$</p>
<p>Hence $y' = y(2x\ln x + x) = (x^x)^x(2x\ln x + x).$</p>
<p>This looks a little different to your expression, but note that $\ln(x^x) \equiv x\ln x$.</p>
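If you want to double-check the formula numerically (a sketch; the evaluation point $x=1.5$ and the step size are arbitrary choices):

```python
from math import exp, log

def f(x):
    return exp(x * x * log(x))          # (x^x)^x rewritten for x > 0

def fprime(x):
    return f(x) * (2 * x * log(x) + x)  # the derivative found above

x0, eps = 1.5, 1e-6
numeric = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)   # central difference
```

The central-difference estimate matches `fprime(1.5)` to many digits.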
|
3,500,418 | <p>I am finding the pointwise limit of the function <span class="math-container">$f_n(x) = \frac{x^n}{3-x^n}$</span> for <span class="math-container">$x ∈ [0,1]$</span> and <span class="math-container">$n ∈ N$</span></p>
<p>In order to do this I first divided through by <span class="math-container">$x^n$</span>, yielding me <span class="math-container">$$ \frac{1}{\frac{3}{x^n}-1}$$</span></p>
<p>Using this I have determined that <span class="math-container">$f_n(x) \to f(x) :=
\begin{cases}
0,\ 0 \leq x<1 \\
\frac{1}{2},\ x=1
\end{cases}$</span></p>
<p>If I assume that I have done this correctly then I could deduce that the convergence cannot be uniform on <span class="math-container">$[0, 1]$</span> since each <span class="math-container">$f_n$</span> is continuous, but the limit function is not continuous.</p>
<p>Am I correct to make this assumption or is there actually uniform convergence? If there is, how is it determined?</p>
<p>Due to this I have also adjusted the bounds of <span class="math-container">$x$</span> to <span class="math-container">$[0,\frac{1}{2}]$</span> to see if this instead would have uniform convergence. </p>
<p>In this case we would have <span class="math-container">$f_n(x) \to f(x) :=
\begin{cases}
0,\ 0 \leq x≤\frac{1}{2}
\end{cases}$</span></p>
<p>What are the implications for the uniform convergence of this? Surely this is also not uniform convergence for similar reasons as above?</p>
<p>I am struggling to get my head around all this so any help would be greatly appreciated!</p>
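As a numerical look at the second part (a sketch; the grid of 1001 points on $[0,\tfrac12]$ is an arbitrary choice), one can tabulate $\sup_{[0,1/2]}|f_n(x)-0|$ and watch whether it shrinks:

```python
def f(n, x):
    return x ** n / (3 - x ** n)

grid = [0.5 * k / 1000 for k in range(1001)]
# grid maximum of |f_n - 0| over [0, 1/2] for n = 1..20
sups = [max(f(n, x) for x in grid) for n in range(1, 21)]
```

The suprema decay roughly like $2^{-n}/3$, which is numerical evidence (not a proof) that the convergence on $[0,\tfrac12]$ is in fact uniform, in contrast with $[0,1]$ where the discontinuous limit rules uniformity out.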
| Paultje | 466,494 | <p>One possible way would be to use the definition of <span class="math-container">$\cos(x)$</span> that is <span class="math-container">$$ \cos(x) = \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{(2k)!}$$</span></p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| Zev Chonoles | 1,916 | <p>I think Jones + Jones </p>
<p><a href="http://rads.stackoverflow.com/amzn/click/3540761977" rel="nofollow">http://www.amazon.com/Elementary-Number-Theory-Gareth-Jones/dp/3540761977</a></p>
<p>would be a good all-around introduction. It has solutions to every problem in the back, which can be helpful for self-study.</p>
|
2,611,382 | <p>Solve the equation,</p>
<blockquote>
<p>$$
\sin^{-1}x+\sin^{-1}(1-x)=\cos^{-1}x
$$</p>
</blockquote>
<p><strong>My Attempt:</strong>
$$
\cos\Big[ \sin^{-1}x+\sin^{-1}(1-x) \Big]=x\\
\cos\big(\sin^{-1}x\big)\cos\big(\sin^{-1}(1-x)\big)-\sin\big(\sin^{-1}x\big)\sin\big(\sin^{-1}(1-x)\big)=x\\
\sqrt{1-x^2}.\sqrt{2x-x^2}-x.(1-x)=x\\
\sqrt{2x-x^2-2x^3+x^4}=2x-x^2\\
\sqrt{x^4-2x^3-x^2+2x}=\sqrt{4x^2-4x^3+x^4}\\
x(2x^2-5x+2)=0\\
\implies x=0\quad or \quad x=2\quad or \quad x=\frac{1}{2}
$$
Actual solutions exclude $x=2$; i.e., the solutions are $x=0$ or $x=\frac{1}{2}$.
I think additional solutions are added because of the squaring of the term $2x-x^2$ in the steps. </p>
<p>So, how do you solve it avoiding the extra solutions in similar problems ?</p>
<p><strong>Note:</strong> I don't want to substitute the solutions to find the wrong ones.</p>
| lab bhattacharjee | 33,337 | <p>Like Barry Cipra,</p>
<p>$$2\arcsin x=\arccos(1-x)$$</p>
<p>Now $0\le\arccos(1-x)\le\pi$ and $-\pi\le2\arcsin x\le\pi$</p>
<p>$\implies\arcsin x\ge0\iff x\ge0$</p>
<p>Now for $\arcsin x\ge0,2\arcsin x=\begin{cases}\arccos(1-2x^2)&\mbox{if }x\ge0\\
-\arccos(1-2x^2)& \mbox{if }x<0\end{cases}$</p>
<p>$x\ge0\implies 1-x=1-2x^2$</p>
<p>Can you take it from here?</p>
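One way to discard the extraneous root without "plugging in and hoping" is to check the domain restrictions of $\sin^{-1}$ first: $x=2$ already violates $|x|\le 1$, so no substitution is needed to reject it. A quick sketch (the tolerance is an arbitrary choice):

```python
from math import asin, acos, isclose

def in_domain(x):
    # both arcsine arguments must lie in [-1, 1]
    return -1 <= x <= 1 and -1 <= 1 - x <= 1

def solves(x):
    return in_domain(x) and isclose(asin(x) + asin(1 - x), acos(x), abs_tol=1e-9)

status = {x: solves(x) for x in (0, 0.5, 2)}
```

Here `status` marks $x=0$ and $x=\frac12$ as solutions, while $x=2$ is rejected by the domain check alone.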
|
3,030,753 | <p>Let <span class="math-container">$f:\mathbb R \rightarrow \mathbb R$</span> be a continuous function and <span class="math-container">$x_0 \in \mathbb R$</span> such that f is differentiable on both intervals <span class="math-container">$(-\infty, x_0]$</span> and <span class="math-container">$[x_0, +\infty)$</span>. Prove or disprove that there exist two functions <span class="math-container">$g, h : \mathbb R \rightarrow \mathbb R$</span> differentiable everywhere such that</p>
<p><span class="math-container">$$
f(x) = g(x) + h(x)|x - x_0|\ \ \forall x \in \mathbb R.
$$</span></p>
<p>This feels like it characterizes every non-differentiable point of a continuous function in terms of absolute values but I couldn't come up with a function to disprove nor I was able to construct <span class="math-container">$g$</span> and <span class="math-container">$h$</span>.</p>
<p>Help and directions appreciated.</p>
| copper.hat | 27,978 | <p>Here is a hint:</p>
<p>Suppose <span class="math-container">$\phi(x) = \begin{cases} ax, & x < 0 \\
bx, & x \ge 0 \end{cases}$</span>, note that we can write
<span class="math-container">$\phi(x) = {b-a \over 2} |x| + {a+b \over 2} x$</span>.</p>
|
97,340 | <p>Fellow Puny Humans, </p>
<p>A <em>geometric net</em> is a system of points and lines that obeys three axioms:</p>
<ol>
<li>Each line is a set of points.</li>
<li>Distinct lines have at most one point in common.</li>
<li>If $p$ is a point and $L$ is a line with $p \notin L$, then there is exactly one line $M$ such that $p \in M$ and $L \cap M = \phi $. </li>
</ol>
<p>And whenever $L \cap M = \phi$ we say that $L$ is parallel to $M$, i.e. $L || M$.</p>
<p>So far so good.</p>
<p>I want to partition the lines of a geometric net into <em>equivalence classes</em>, with two lines in the same class if they are <em>equal or parallel</em>. One can easily show that the relation <em>equal or parallel</em> is an <em>equivalence relation</em>.</p>
<p>Let's say there are $m$ such classes; then how many points does a line have in each class? For a given line $l$ in any class, if a point $p \in l$, then how many lines pass through $p$? </p>
<p>For example, if I partition them into two classes $CL_1$ and $CL_2$ of parallel or equal lines, then the number of points on any line in $CL_1$ is equal to the number of lines in $CL_2$. This implies that each point belongs to two lines. Can this be extended to the case when the number of classes is $m$, i.e. each point belongs to $m$ lines? I am confused because I can not show it for the case when more than two lines pass through the same point.</p>
<p>This problem is from TAOCP 4(a) : <em>combinatorial searching</em>, Problem 21 (Addison-Wesley).</p>
| Brian M. Scott | 12,042 | <p>Let $C_1,\dots,C_m$ be your equivalence classes. Suppose first that there is a point $p$ that is in more than $m$ lines; then there is some class $C_i$ such that $p$ is in two distinct lines in $C_i$. But that’s impossible, since the lines in $C_i$ are mutually parallel, so every point is in at most $m$ lines.</p>
<p>Now suppose that some point $p$ is in fewer than $m$ lines; then there is some class $C_i$ such that no line in $C_i$ contains $p$. Let $L\in C_i$; $p\notin L$, so by axiom (3) there is a line $M$ such that $p\in M$ and $M\,||\,L$, hence $M\in C_i$. But then $p\in M\in C_i$, contradicting the assumption that no line in $C_i$ contains $p$. Thus, each point is in at least $m$ lines.</p>
<p>Putting the two pieces together, we see that every point of the net belongs to exactly $m$ lines.</p>
|
3,949,580 | <p>Is it possible to set this integral up without using substitution?</p>
<p><span class="math-container">$$\iint_D e^{x+y} \,\mathrm{d}x\,\mathrm{d}y\,,$$</span> where</p>
<p><span class="math-container">$$D = \left\{-1\le x+y \le 1, -1 \le -x + y \le 1\right\}$$</span></p>
<p>The answer is: <span class="math-container">$e-\frac{1}{e}$</span></p>
| Bram28 | 256,001 | <p>It has nothing to do with the operators being binary, but rather with how those operations deal with the operands.</p>
<p>Let's assume the jar has <span class="math-container">$n$</span> balls, and let that be the left operand. Then, adding <span class="math-container">$2$</span> balls to the jar would correspond to the operation <span class="math-container">$n+2$</span>, while doubling the balls would correspond to the operation <span class="math-container">$n \times 2$</span></p>
<p>Now, addition can be seen as a repeated process of adding 1 ball to the jar some number of times. How many times? Well, that's what the right operand says. OK, but how many each time? Exactly <span class="math-container">$1$</span>. So, we know everything to do what needs to be done from the right operand alone.</p>
<p>For example, adding 2 balls means adding 1 ball exactly 2 times. No need to know the left operand (i.e. what's already in the jar).</p>
<p>On the other hand, multiplication can be seen as a repeated process of adding n balls to the jar some number of times. How many times? Again, that's what the right operand says (well, you need to subtract <span class="math-container">$1$</span> if you start out with the <span class="math-container">$n$</span> balls already in the jar). OK, but how many each time? Well, that's the number <span class="math-container">$n$</span>, i.e. the number of balls already in the jar, i.e. the left operand.</p>
<p>For example, multiplying the number of balls by <span class="math-container">$2$</span> (i.e. doubling the number of balls) means adding n balls to the already existing <span class="math-container">$n$</span> balls. So this time, yes, we need to know the left operand.</p>
|
1,779,965 | <p>Given the numbers $x = 123$ and $y = 100$ how to apply the Karatsuba algorithm to multiply these numbers ?</p>
<p>The formula is </p>
<pre><code>xy = 10^n (ac) + 10^(n/2) (ad+bc) + bd
</code></pre>
<p>As I understand $n = 3$ (number of digits) and I tried writing the numbers as </p>
<pre><code>x = 10*12+3 , y = 10*10 +0 thus a = 12 , b = 3 , c = 10 , d = 0
</code></pre>
<p>or</p>
<pre><code>x = 100*1+23 , y = 100*1 +0 thus a = 1 , b = 23 , c = 1 , d = 0
</code></pre>
<p>I looked at some explanations of the algorithm and tried it successfully for other numbers , but I don't know how to solve it in this particular case.</p>
<p>Is this example part of a more general case of the algorithm (like 3-digit numbers)? I found a question that may be related (<a href="https://math.stackexchange.com/questions/220419/karatsuba-multiplication-with-integers-of-size-3">Karatsuba multiplication with integers of size 3</a>) and from the answer I gather that it's impossible , so is it that Karatsuba can't multiply $2$ numbers of $3$-digits or is there a way to do this ?</p>
| Peter Phipps | 15,984 | <p>What you have written is almost correct.
The Karatsuba Algorithm expands the multiplication of $123$ and $100$ to
$$\begin{aligned}
(12\times 10+3)(10\times10+0)&= 10^2\times(12\times10) + 10^1\times (12\times0 + 3 \times 10) + 3\times 0\\
&= 12000 + 300 + 0\\
&= 12300
\end{aligned}$$</p>
<p>The <a href="https://en.wikipedia.org/wiki/Karatsuba_algorithm" rel="nofollow">Wikipedia page</a> describes the algorithm and how it is used to multiply integers of any length. The only caveat is that the two numbers being multiplied are of the same length.</p>
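For completeness: the expansion above still uses four digit products; Karatsuba's actual saving is computing $ad+bc$ as $(a+b)(c+d)-ac-bd$, i.e. with one extra multiplication instead of two. A minimal one-level sketch of that split (base-10, matching the worked example):

```python
def karatsuba_step(x, y, half=10):
    a, b = divmod(x, half)   # x = a*half + b
    c, d = divmod(y, half)   # y = c*half + d
    ac = a * c
    bd = b * d
    # the trick: ad + bc recovered from a single extra product
    ad_plus_bc = (a + b) * (c + d) - ac - bd
    return ac * half * half + ad_plus_bc * half + bd

print(karatsuba_step(123, 100))  # splits as (12*10+3)*(10*10+0) -> 12300
```

Passing `half=100` instead gives the second split from the question, `x = 100*1 + 23`, and yields the same product.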
|
1,338,980 | <p>Suppose you have a set of data $\{x_i\}$ and $\{y_i\}$ with $i=0,\dots,N$. In order to find two parameters $a,b$ such that the line
$$
y=ax+b,
$$
give the best linear fit, one proceed minimizing the quantity
$$
\sum_i^N[y_i-ax_i-b]^2
$$
with respect to $a,b$ obtaining well-known results. </p>
<p>Imagine now that we desire a fit with a function like
$$
y=ax^p+b.
$$
After some manipulation one obtains the following relations
$$
a=\frac{N\sum_i(y_ix_i^p)-\sum_iy_i\cdot\sum_ix_i^p}{N\sum_i(x_i^p)^2-(\sum_ix_i^p)^2},
$$
$$
b=\frac{1}{N}[\sum_iy_i-a\sum_ix_i^p]
$$
and
$$
\frac{1}{N}[N\sum_i(y_ix_i^p\ln x_i)-\sum_iy_i\cdot\sum_ix_i^p\ln x_i]=\frac{a}{N}[N\sum_i(x_i^p)^2\ln x_i-\sum_ix_i^p\cdot\sum_ix_i^p\ln x_i].
$$
To me it seems that from this it is nearly impossible to extract the exponent $p$. Am I correct?</p>
| JJacquelin | 108,514 | <p>If you want to fit the function
$$y=a+b\:X^c$$
with a set of data $(X_1,y_1),(X_2,y_2),...,(X_k,y_k),...,(X_n,y_n)$, this is possible in various ways. The usual methods are iterative and require guessed initial values $a_0,b_0,c_0$ to start the process.</p>
<p>A non-usual method is described in the paper : <a href="https://fr.scribd.com/doc/31477970/Regressions-et-trajectoires-3D" rel="nofollow noreferrer">https://fr.scribd.com/doc/31477970/Regressions-et-trajectoires-3D</a> , pages 16-17. Since there is no available translation, the next information is sufficient to apply this particular method :</p>
<p>First, let $x=\ln(X)$</p>
<p>Note that $X$ is supposed to be positive because, if not, $X^c$ would be complex.</p>
<p>So, the data set becomes $(x_1,y_1),(x_2,y_2),...,(x_k=\ln(X_k),y_k),...,(x_n,y_n)$</p>
<p>and the function to be fitted is :
$$y=a+be^{cx}$$
Then see:</p>
<p><a href="https://math.stackexchange.com/questions/1337601/fit-exponential-with-constant/1337641#1337641">Fit exponential with constant</a> </p>
<p>where all the details of the computation are given in order to obtain the approximate values of $a,b,c$.</p>
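If an iterative solver is not at hand, the normal equations also suggest a simple profile search: for each fixed exponent $p$ the model $y=ax^p+b$ is linear in $a,b$, so one can scan a grid of $p$ values and keep the best residual. A sketch (the synthetic data, grid range, and step are made up for illustration):

```python
def fit_for_p(xs, ys, p):
    # ordinary least squares for y = a*x^p + b with the exponent p held fixed
    n = len(xs)
    t = [x ** p for x in xs]
    st, sy = sum(t), sum(ys)
    sty = sum(ti * yi for ti, yi in zip(t, ys))
    stt = sum(ti * ti for ti in t)
    a = (n * sty - st * sy) / (n * stt - st * st)
    b = (sy - a * st) / n
    sse = sum((a * ti + b - yi) ** 2 for ti, yi in zip(t, ys))
    return a, b, sse

xs = [0.5 + 0.1 * k for k in range(20)]
ys = [2.0 * x ** 1.5 + 1.0 for x in xs]   # noiseless data with true p = 1.5

# profile the exponent over a grid and keep the smallest residual
best_p = min((p / 100 for p in range(50, 301)),
             key=lambda p: fit_for_p(xs, ys, p)[2])
a, b, _ = fit_for_p(xs, ys, best_p)
```

On this noiseless example the scan recovers $p=1.5$, $a=2$, $b=1$; with noisy data one would refine the grid or hand `best_p` to an iterative optimizer as the starting guess.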
|
1,619,371 | <p>I was working on a problem and reduced it to evaluating</p>
<p>$$\int_{0}^{1}\sqrt{1+x^a}\,dx~~a>0$$</p>
<p>Any suggestions? Thanks</p>
| Jack D'Aurizio | 44,121 | <p>If $a>0$ we have:
$$ \color{red}{I(a)}=\int_{0}^{1}\sqrt{1+x^a}\,dx = \phantom{}_2 F_1\left(-\frac{1}{2},\frac{1}{a};1+\frac{1}{a};-1\right) \tag{1}$$
by just expanding $\sqrt{1+z}$ as a Taylor series. Approximations for the RHS can be computed from:
$$ I(a) = \frac{1}{a}\int_{0}^{1} z^{\frac{1}{a}-1}\sqrt{1+z}\,dz\color{red}{\approx}\frac{1}{a}\int_{0}^{1}z^{\frac{1}{a}-1}\left(1+(\sqrt{2}-1)z\right)\,dz = \color{red}{\frac{a+\sqrt{2}}{a+1}}.\tag{2}$$
A better approximation can be computed from
$$\forall z\in[0,1],\qquad \sqrt{1+z}\approx 1+(2\sqrt{6}-3-\sqrt{2})\,z+(2+2 \sqrt{2}-2 \sqrt{6})\,z^2 \tag{3}$$
that gives:</p>
<blockquote>
<p>$$ I(a)\approx \frac{2a^2+(2\sqrt{6}-1)a+\sqrt{2}}{(a+1)(2a+1)}.\tag{4} $$</p>
</blockquote>
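A quick numerical comparison of $(2)$ and $(4)$ against direct quadrature (a sketch; composite Simpson's rule with an arbitrary panel count stands in for the exact integral):

```python
from math import sqrt

S2, S6 = sqrt(2.0), sqrt(6.0)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

def I(a):
    return simpson(lambda x: sqrt(1 + x ** a), 0.0, 1.0)

def approx2(a):
    # the linear-interpolation estimate (2)
    return (a + S2) / (a + 1)

def approx4(a):
    # the quadratic-interpolation estimate (4)
    return (2 * a * a + (2 * S6 - 1) * a + S2) / ((a + 1) * (2 * a + 1))
```

For $a=1$ the exact value is $\frac23(2\sqrt2-1)\approx 1.21895$; $(2)$ gives $\approx 1.2071$ and $(4)$ gives $\approx 1.2189$, consistent with $(4)$ being the sharper estimate.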
|
1,334,527 | <p>The integral in hand is
$$
I(n) = \frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx
$$
I don't know whether it has a closed form or not, but currently I only want to know its asymptotic behavior. Setting $x=\cos\theta$, then
$$
I(n) = \frac{1}{\pi}\int_{0}^{\pi/2} \Big[(1+2\cos\theta)^{2n}+(1-2\cos\theta)^{2n}\Big]\, d\theta
$$
The second term can be neglected, therefore
$$
I(n) \sim \frac{1}{\pi}\int_{0}^{\pi/2}(1+2\cos\theta)^{2n}\, d\theta
$$
How can I move on?</p>
| Claude Leibovici | 82,404 | <p><em>This is not an answer but it is too long for a comment</em></p>
<p>For the antiderivative $$J_n = \frac{1}{\pi}\int \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx$$ there is a "closed" form which involves the Appell hypergeometric function of two variables $\frac 2{1-x}$ and $\frac 3{2(1-x)}$, which is not very useful.</p>
<p>It is more ugly for the integral $$I_n = \frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx$$ but computing the first terms and searching $OEIS$ it appears that this is sequence $A082758$ which corresponds to the sum of the squares of the trinomial coefficients. </p>
<p>According to this page, Vaclav Kotesovec proposed in 2012, besides the recurrence relation $$n (2 n-1) I_n=\left(14 n^2+n-12\right) I_{n-1}+3 \left(14 n^2-71 n+78\right) I_{n-2}-27 (n-2) (2 n-5)
I_{n-3}$$ an approximation formula $$I_n\approx \frac{3^{2 n+\frac{1}{2}}}{2 \sqrt{2 \pi n} }$$</p>
<p>Written as $$I_n = \frac{1}{\pi}\int_{0}^{\pi/2} \Big[(1+2\cos\theta)^{2n}+(1-2\cos\theta)^{2n}\Big]\, d\theta$$ using the binomial expansion and power reduction for the cosines, the antiderivative writes $$\frac 1 \pi \Big(\alpha_n \theta+\sum_{k=0}^n \beta_k \sin(2k\theta)\Big)$$ and so $I_n=\frac {\alpha_n} 2$</p>
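Both the $A082758$ identification and the quoted recurrence are easy to spot-check (a sketch; the range tested is arbitrary):

```python
def trinomial_coeffs(n):
    # coefficients of (1 + x + x^2)^n
    c = [1]
    for _ in range(n):
        c = [sum(c[k - j] for j in range(3) if 0 <= k - j < len(c))
             for k in range(len(c) + 2)]
    return c

def I_seq(nmax):
    # I_n as the sum of squares of the trinomial coefficients (OEIS A082758)
    return [sum(t * t for t in trinomial_coeffs(n)) for n in range(nmax + 1)]

seq = I_seq(8)

def recurrence_holds(n):
    lhs = n * (2 * n - 1) * seq[n]
    rhs = ((14 * n * n + n - 12) * seq[n - 1]
           + 3 * (14 * n * n - 71 * n + 78) * seq[n - 2]
           - 27 * (n - 2) * (2 * n - 5) * seq[n - 3])
    return lhs == rhs
```

The first values are $1, 3, 19, 141, 1107, \dots$, matching A082758, and the recurrence holds for every index checked.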
|
2,779,152 | <p>Consider a Poisson process with rate $\lambda$ in a given time interval $[0,T]$. The inter-arrival time between successive arrivals is negative exponential distributed with mean $\frac{1}{\lambda}$ such that $X_1 >0$, and $\sum_{i=1}^\text{Last} X_i < T$, where $X$ represents inter-arrival time.</p>
<p>What about the distribution of the time between the last arrival and the ending time $T$? Is it also negative exponential distributed with mean $\frac{1}{\lambda}$? Can we study the time segment $[0,T]$ of the Poisson process in the backward direction too? In the forward direction, the time between $t=0$ and the first arrival is negative exponential distributed. In the backward direction, the last arrival becomes the first arrival, so the time between $t=T$ and the last arrival should also be negative exponential distributed. Is there any way to justify this, or some reference?</p>
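The reversal intuition can be probed by simulation (a sketch; the rate, horizon, and sample size below are arbitrary choices). Reversing a Poisson process on $[0,T]$ gives another Poisson process, so the backward gap $T-(\text{last arrival})$ should satisfy $P(\text{gap}>s)=e^{-\lambda s}$ for $s<T$, exactly like the first waiting time, with the gap equal to $T$ when there are no arrivals at all:

```python
import random
from math import exp

random.seed(0)
lam, T, trials = 1.0, 10.0, 20000

def backward_gap():
    # simulate one path on [0, T]; return T minus the last arrival time (or T if none)
    t, last = 0.0, None
    while True:
        t += random.expovariate(lam)
        if t > T:
            return T - last if last is not None else T
        last = t

gaps = [backward_gap() for _ in range(trials)]
s = 1.0
empirical = sum(g > s for g in gaps) / trials
theoretical = exp(-lam * s)
```

The empirical survival probability at $s=1$ lands close to $e^{-\lambda s}$, supporting the truncated-exponential picture.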
| Atmos | 516,446 | <p>HINT : </p>
<p>An idea is to use d'Alembert's ratio criterion, which states that if
$$
\frac{a_{n+1}}{a_n} \underset{n \rightarrow +\infty}{\rightarrow} \ell
$$
then</p>
<ul>
<li><p>$\ell>1$ makes the series $\sum_{n \in \mathbb{N}}^{ }a_n$ diverge.</p></li>
<li><p>$\ell<1$ makes the series $\sum_{n \in \mathbb{N}}^{ }a_n$ converge.</p></li>
<li><p>$\ell=1$ we cannot conclude whether it converges or not.</p></li>
</ul>
<p>Just a simple result before justifying it : if
$$
\frac{a_{n+1}}{a_n}\leq k
$$
with $k \in \left]0,1\right[$ then the series $\sum_{n \in \mathbb{N}}^{ }a_n$ converges. And if
$$
\frac{a_{n+1}}{a_n}\geq k
$$
with $k>1$ then the series $\sum_{n \in \mathbb{N}}^{ }a_n$ diverges.</p>
<p>It comes from the fact that we have with the hypothesis
$$
a_{n+1} \leq k^n a_0
$$
and the series $\sum k^n$ converges because $k \in \left]0,1\right[$. Same for ther other case
$$
a_{n+1} \geq k^n a_0
$$
and the series $\sum k^n$ diverges because $k>1$</p>
<p>Now, let's proof the result I've stated :</p>
<p>By definition of convergence, $\forall \epsilon>0, \ \exists N$ so that for $n>N$
$$
\left|\frac{a_{n+1}}{a_n}-\ell\right|<\epsilon \Rightarrow \ell-\epsilon<\frac{a_{n+1}}{a_n}<\ell+\epsilon
$$</p>
<ul>
<li><p>If $\ell>1$, we choose $\displaystyle \epsilon=\frac{\ell-1}{2} \Rightarrow \ell-\epsilon>1$ so with the previous result, the series $\sum_{n \in \mathbb{N}}^{ }a_n$ diverges.</p></li>
<li><p>If $\ell<1$, we choose $\displaystyle \epsilon=\frac{1-\ell}{2} \Rightarrow \ell+\epsilon<1$ so with the previous result, the series $\sum_{n \in \mathbb{N}}^{ }a_n$ converges.</p></li>
</ul>
<p>Now, for your exercise, we have
$$
a_n=\frac{1}{\sqrt{5}}\left(\phi^n-\left(\overline{\phi}\right)^n\right)
$$
where $\displaystyle \phi=\frac{1+\sqrt{5}}{2}>1$ and $\displaystyle \overline{\phi}=\frac{1-\sqrt{5}}{2}<1$ hence
$$
\frac{a_{n}}{a_{n+1}}=\frac{\phi^{n}-\left(\overline{\phi}\right)^{n}}{\phi^{n+1}-\left(\overline{\phi}\right)^{n+1}}\underset{(+\infty)}{\sim}\frac{1}{\phi}=\frac{2}{1+\sqrt{5}}<1
$$
Hence with the criteria the series $\displaystyle \sum_{n \in \mathbb{N}^{*}}\frac{1}{a_n}$ converges.</p>
|
19,261 | <p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p>
<ol>
<li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li>
<li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li>
<li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li>
</ol>
<blockquote>
<p>Then $v_i$, $v_j$ are adjacent iff $N_i$
and $N_j$ are not coprime,</p>
</blockquote>
<p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p>
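A tiny worked instance of the three steps (a sketch on a hypothetical graph: a triangle $\{0,1,2\}$ with a pendant edge $\{2,3\}$; the primes chosen are arbitrary):

```python
from math import gcd

edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
maximal_cliques = [{0, 1, 2}, {2, 3}]

n_i = [2, 3, 5, 7]   # step 1: pairwise coprime vertex numbers
p_j = [11, 13]       # step 2: a fresh prime per maximal clique

# step 3: N_i = n_i times the primes of the cliques containing v_i
N = []
for v in range(4):
    val = n_i[v]
    for prime, clique in zip(p_j, maximal_cliques):
        if v in clique:
            val *= prime
    N.append(val)

def adjacent(u, v):
    return gcd(N[u], N[v]) > 1
```

Here $N = (22, 33, 715, 91)$, and for every pair of distinct vertices the gcd test reproduces the edge set exactly, as claimed.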
<p>Being free in assigning the numbers $n_i$ and $p_j$ gives rise to a lot of possibilities, but also to the following question:</p>
<blockquote>
<p><strong>QUESTION</strong></p>
<p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$
is minimal (among all that do the job) — and if so: how?</p>
</blockquote>
<p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
| Charles Siegel | 622 | <p>You might want to look up some things about index theorems (particularly Atiyah-Singer). They tend to relate topological and geometric data, so you can put geometric data in and topological data out.</p>
|
3,997,532 | <p>Please help me with this. I can't prove the result. I tried integration by parts, among other things, but nothing worked.</p>
<p><span class="math-container">$$\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx$$</span></p>
| Quanto | 686,284 | <p>Note <span class="math-container">$\frac{1}{e^x+1}
= \frac12-\frac12\tanh\frac x2 $</span> and the odd function <span class="math-container">$\tanh\frac x2$</span> does not contribute to the integral over <span class="math-container">$(-1,1)$</span>. Thus</p>
<p><span class="math-container">$$\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx
= \int_{-1}^{1}\frac{x^2}2\,dx=\frac13
$$</span></p>
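As a numerical sanity check of the result (a sketch using a plain midpoint rule with an arbitrary panel count):

```python
from math import exp

def f(x):
    return x * x / (exp(x) + 1)

# midpoint rule on [-1, 1]
n = 20000
h = 2.0 / n
approx = sum(f(-1.0 + (k + 0.5) * h) for k in range(n)) * h
```

The quadrature agrees with $\frac13$ to well within the rule's error.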
|
61,933 | <p>Consider a lattice in R^3.
Is there some "canonical" way or ways to choose a basis in it?</p>
<p>I mean in R^2 we can choose a basis |h_1| < |h_2| and |(h_2, h_1)| < 1/2 |h_1|.
Considering lattices with fixed determinant and up to unitary transformations, we get the standard picture of PSL(2,Z) acting on the upper half plane, which has the fundamental domain |tau| > 1, |Re(tau)| < 1/2. </p>
<p>What are the similar results for other small dimensions R^3, R^4, C^4, C^8 ?
What are the algorithms to find such lattice reductions?</p>
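In R^2, the reduction described above is the classical Lagrange-Gauss algorithm, which is also the 2-dimensional case of LLL. A minimal sketch with integer vectors (the example inputs are arbitrary bases of Z^2):

```python
def lagrange_gauss(b1, b2):
    # 2D basis reduction: on return, |b1| <= |b2| and |<b1, b2>| <= |b1|^2 / 2
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1
        m = round(dot(b1, b2) / dot(b1, b1))
        if m == 0:
            return b1, b2
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

print(lagrange_gauss((1, 0), (7, 1)))   # -> ((1, 0), (0, 1))
```

The termination condition `m == 0` is exactly the size-reduction criterion from the question; in higher dimensions LLL replaces this pairwise step with a size-reduced, nearly orthogonal basis.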
| Richard Borcherds | 51 | <p>The book by Terras "Harmonic analysis on symmetric spaces and applications" volume 2 has some stereoscopic pictures of the fundamental domains for some similar groups.</p>
|
2,725,697 | <p>A weird question that has me confused. Suppose I have a symmetric matrix $A$, which has to be computed somehow. For example, the Hessian matrix is a symmetric matrix that is computed by taking the gradient twice. A covariance matrix is also symmetric as another example. $A$ will have $n^2$ entries, but one really only needs to compute about $n^2/2$ of them since it is symmetric. </p>
<p>Now consider a vector of appropriate length $z$. The product $Az$ will yield a vector, not a matrix. So it seems like in terms of computation time/steps that the product $Az$ can actually be calculated faster than computing $A$ itself? Has anyone ever thought of this, and if so in what context could this be of use?</p>
| user3658307 | 346,641 | <p>There are some cases where this is true. For instance, if $A=ab^T=a\otimes b$ is a rank 1 matrix, then you should definitely not compute $A$, or store it. Just keep $a$ and $b$ around instead and it will be much faster/cheaper to compute $Av$ (or almost anything really).</p>
<p>Another interesting case is <em>Hessian-vector products</em>. Let's say you want $H_f[x] u$ where $H$ is the Hessian and $u$ is a vector. But $H$ can be very expensive to compute or store. (Computing $\nabla f$ is usually feasible).
Let $v(x) = \nabla f(x)$. Because
$$
v(x+\delta) = \nabla f(x+\delta) \approx \nabla f(x) + H_f[x]\delta
$$
Suppose $\delta = \epsilon u$. Then substituting:
$$
v(x+\epsilon u) = \nabla f(x+\epsilon u) \approx \nabla f(x) + \epsilon H_f[x] u \;\;\implies\;\; H_f[x] u \approx \frac{\nabla f(x+\epsilon u) -\nabla f(x)}{\epsilon}
$$
so you can estimate $H_f[x] u$ without actually computing $H_f$. (Note that <a href="https://justindomke.wordpress.com/2009/01/17/hessian-vector-products/" rel="nofollow noreferrer">the second-order finite difference is usually better</a>).</p>
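That approximation takes only a couple of lines to implement (a sketch; the sample function, test point, and step size are made up, and per the linked note a second-order central difference is used):

```python
from math import sin, cos

def grad_f(x):
    # gradient of the running example f(x0, x1) = x0^2 * x1 + sin(x0 * x1)
    x0, x1 = x
    return [2 * x0 * x1 + x1 * cos(x0 * x1),
            x0 * x0 + x0 * cos(x0 * x1)]

def hvp(grad, x, u, eps=1e-6):
    # Hessian-vector product H_f[x] u without ever forming H_f
    gp = grad([xi + eps * ui for xi, ui in zip(x, u)])
    gm = grad([xi - eps * ui for xi, ui in zip(x, u)])
    return [(a - b) / (2 * eps) for a, b in zip(gp, gm)]

hv = hvp(grad_f, [0.5, 0.8], [1.0, -2.0])
```

Two gradient evaluations thus buy a full Hessian-vector product, regardless of the dimension.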
<p>I'm not really sure in general though. It probably depends on the mechanism by which $A$ is generated (or the properties of $A$) that determines whether it can be "skipped".</p>
<hr>
<p>Two cases that are only tangentially related to your question also come to mind.</p>
<p>When solving $Ax=b$, it is often a bad idea to compute $A^{-1}$. <a href="https://en.wikipedia.org/wiki/System_of_linear_equations#Other_methods" rel="nofollow noreferrer">Instead</a>, you can solve the system without ever computing $A^{-1}$, which might not fit in memory, and is almost certainly slower and less numerically stable to calculate. If you use e.g. LU factorization, then solving is even reusable too. Some Krylov subspace methods have similar principles I think.</p>
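<p>The "solve, don't invert" point can be made concrete with a tiny sketch (my own pure-Python illustration; real code would of course call an LAPACK-backed routine): Gaussian elimination with partial pivoting solves $Ax=b$ directly on the augmented system.</p>

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting;
    the inverse matrix is never formed."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented copy
    for col in range(n):
        # Partial pivoting: swap up the row with the largest entry.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
print(x)  # satisfies A x = b
```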
<p>Another cool case is used in <a href="https://en.wikipedia.org/wiki/Kernel_method" rel="nofollow noreferrer">kernel methods</a> from machine learning (also <a href="https://stats.stackexchange.com/questions/152897/how-to-intuitively-explain-what-a-kernel-is">here</a>). Suppose you have an input space $X$ and a high dimensional feature space $F$ (which is an inner-product space), with a map $\Phi:X\rightarrow F$. The point of $F$ is to restructure the data in a way that makes it more amenable to some task (e.g. more separable). If $x_1, x_2\in X$, then we might be interested in the inner product $\langle\Phi(x_1),\Phi(x_2)\rangle_F$ (e.g. to get the similarity between the points).
However, for many spaces $F$, there is a <em>kernel function</em> $K:X\times X\rightarrow \mathbb{R}$ such that $K(x_1,x_2) = \langle\Phi(x_1),\Phi(x_2)\rangle_F$ can be computed <em>very cheaply</em> in the lower space, without ever calculating $\Phi$.</p>
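<p>A toy instance of this (my own example, not from the answer): the degree-2 polynomial kernel $K(x,y)=(x\cdot y)^2$ on $\mathbb{R}^2$ computes exactly the inner product of the explicit feature map $\Phi(x)=(x_1^2,\sqrt{2}\,x_1x_2,x_2^2)$, without ever visiting the feature space.</p>

```python
import math

def phi(x):
    # Explicit degree-2 feature map on R^2.
    return [x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2]

def kernel(x, y):
    # Same inner product, computed entirely in the input space.
    return (x[0] * y[0] + x[1] * y[1]) ** 2

x, y = [1.0, 2.0], [3.0, -1.0]
lhs = sum(a * b for a, b in zip(phi(x), phi(y)))  # <Phi(x), Phi(y)>
rhs = kernel(x, y)                                # K(x, y)
print(lhs, rhs)
```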
|
1,465,627 | <p>The problem is to maximize the determinant of a $3 \times 3$ matrix with elements from $1$ to $9$.<br>
Is there a method to do this without resorting to brute force?</p>
| mathreadler | 213,607 | <p><strong>HINT:</strong></p>
<p>The product of the eigenvalues is the determinant, and the trace is the sum of the eigenvalues. It may be reasonable to believe that a large eigenvalue sum increases the chance of a large eigenvalue product. If that is the case, then 9, 8, 7 would be good candidates for the diagonal elements. Then some symmetry argument could possibly be used for the remaining 6 positions.</p>
<p>This would reduce from having to try <span class="math-container">$9! = 362880$</span> to <span class="math-container">$6! = 720$</span> matrices.</p>
<p><strong>SPOILER WARNING:</strong></p>
<p>Using Matlab or Octave, we can check with a short but obscure brute-force script (don't worry, it took maybe 5-10 seconds on my machine):</p>
<pre><code>b = reshape(permute(perms(1:9),[3,2,1]),[3,3,362880]);
s=-1; for o_=1:size(b,3); if det(b(:,:,o_))>s; i_=o_; end; s = max(s,det(b(:,:,o_))); end;
b(:,:,i_)
</code></pre>
<p>We find that the matrix</p>
<p><span class="math-container">$$\left[\begin{array}{rrr}7&1&5\\6&8&3\\2&4&9\end{array}\right]$$</span></p>
<p>has a determinant of <span class="math-container">$412$</span>, which supposedly is the largest of the bunch. Of course, as someone mentioned in the comments, row and column permutations give the same value (up to sign), so the important thing is that 7, 8, 9 don't share any row or column, as we could then permute them onto the diagonal.</p>
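<p>For readers without Matlab, the same brute force is a few lines of Python (my own port of the idea, not the original script):</p>

```python
from itertools import permutations

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

best_val, best_mat = None, None
for p in permutations(range(1, 10)):
    m = (p[0:3], p[3:6], p[6:9])
    v = det3(m)
    if best_val is None or v > best_val:
        best_val, best_mat = v, m

print(best_val, best_mat)
```

<p>It confirms the maximum value of $412$ (attained by many row/column-permuted variants of the matrix above).</p>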
|
966,798 | <p>How I solve the following equation for $0 \le x \le 360$:</p>
<p>$$
2\cos2x-4\sin x\cos x=\sqrt{6}
$$</p>
<p>I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$:</p>
<p>$$
2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\
2\cos2x-2\sin2x=\sqrt{6}\\
R = \sqrt{4} = 2 \\
\alpha = \arctan \frac{2}{2} = 45\\
\therefore \cos(2x + 45) = \frac{\sqrt6}{2}
$$</p>
<p>which is impossible. I then tried to use t-substitution, where:</p>
<p>$$
t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2}
$$</p>
<p>but the algebra got unreasonably complicated. What am I missing?</p>
| John | 105,625 | <p>Because here $\epsilon$ is an arbitrary positive number, it could be 2. Hence we use minimum so that $|x|> \frac{1}{2}$ (bounded away from 0) to control the denominator.</p>
|
966,798 | <p>How I solve the following equation for $0 \le x \le 360$:</p>
<p>$$
2\cos2x-4\sin x\cos x=\sqrt{6}
$$</p>
<p>I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$:</p>
<p>$$
2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\
2\cos2x-2\sin2x=\sqrt{6}\\
R = \sqrt{4} = 2 \\
\alpha = \arctan \frac{2}{2} = 45\\
\therefore \cos(2x + 45) = \frac{\sqrt6}{2}
$$</p>
<p>which is impossible. I then tried to use t-substitution, where:</p>
<p>$$
t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2}
$$</p>
<p>but the algebra got unreasonably complicated. What am I missing?</p>
| CopyPasteIt | 432,081 | <p>If challenged with any <span class="math-container">$\varepsilon \gt 0$</span>, by setting</p>
<p><span class="math-container">$\tag 1 \Large{\delta = 1-\frac{1}{\varepsilon+1}}$</span></p>
<p>the statement</p>
<p><span class="math-container">$\tag 2 \Large{|x - 1| \lt \delta \text{ implies } |\frac{1}{x} - 1| \lt \varepsilon}$</span></p>
<p>will always be true (and makes sense since <span class="math-container">$\delta$</span> is always less than <span class="math-container">$1$</span>).</p>
<p>We won't bother proving this, but we can demonstrate it by plugging in some positive values for <span class="math-container">$\varepsilon$</span> into this <a href="https://www.wolframalpha.com/input/?i=solve%3A%20%5Bx%20%3E%200%20and%20%20%7Cx%20-%201%7C%20%3C%20%281-1%2F%28%5Cvarepsilon%2B1%29%29%20and%20%20%7C1%2Fx%20-%201%7C%20%3C%20%5Cvarepsilon%5D%20where%20%5Cvarepsilon%20%3D%201%2F100%20" rel="nofollow noreferrer">Wolfram Calculation</a>.</p>
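<p>The same demonstration can also be scripted directly (my own sketch): sample many $x$ in the $\delta$-neighbourhood for several values of $\varepsilon$ and confirm the implication in $(2)$.</p>

```python
def delta_for(eps):
    # The choice above: delta = 1 - 1/(eps + 1) = eps/(eps + 1) < 1.
    return 1 - 1 / (eps + 1)

def implication_holds(eps, samples=10000):
    d = delta_for(eps)
    for i in range(1, samples):
        # x sweeps the open interval (1 - d, 1 + d).
        x = 1 - d + 2 * d * i / samples
        if not abs(1 / x - 1) < eps:
            return False
    return True

results = {eps: implication_holds(eps) for eps in (0.01, 0.5, 1.0, 2.0, 100.0)}
print(results)
```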
|
869,268 | <p>I am asked if $\{n, n^{2}, n^{3}\}$ forms a group under multiplication modulo $m$ where $m = n + n^{2} + n^{3}.$</p>
<p>As an example we see that $\{2, 4, 8\}$ does form a group modulo $14,$ with identity $8,$ but am stuck starting the proof for the general case. Thanks in advance.</p>
| Ragnar | 91,741 | <p>We work modulo $m=n+n^2+n^3$. Since $m|n^4-n$, we have that $n^4=n$. This implies* $n^3=1$, so $n^3$ is the identity. Also, when $x,y\in\{n,n^2,n^3\}=G$, we have $xy\in G$, because $xy=n^an^b=n^{a+b}=n^c$, where $c-1=a+b-1\mod 3$. Take for example $x=n^2$ and $y=n^3$. Then, $xy=n^5=n^4\cdot n=n\cdot n=n^2$. So in the exponent, we may subtract $3$ when it is at least $4$, making it the modulo $3$ remainder. (Note that we should take $n^3$ instead of $n^0$ here, since $n^3$ is already in our group.)</p>
<p>*this is not always true, but since we don't actually use that $n^3=1$ but only $n^4=n$, everything is fine. </p>
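<p>The $n=2$ instance from the question is quick to confirm by brute force (my own check, just arithmetic mod $14$):</p>

```python
n = 2
m = n + n**2 + n**3              # 14
G = {n % m, n**2 % m, n**3 % m}  # {2, 4, 8}

# Closure under multiplication mod m.
closed = all((a * b) % m in G for a in G for b in G)

# n^3 acts as the identity, consistent with n^4 = n (mod m).
e = n**3 % m
is_identity = all((e * a) % m == a for a in G)

print(G, closed, e, is_identity)
```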
|
1,068,631 | <p>I want to find the solutions of the equation $$\left[z- \left( 4+\frac{1}{2}i\right)\right]^k = 1 $$ </p>
<p>in terms of roots of unity.</p>
<p>When I try to solve this, I get
\begin{align*}z - 4 - \dfrac i2 &= 1\\
z-\dfrac{i}{2}&=5\\
\dfrac{2z-i}2 &= 5\\
z&= 5 + \dfrac i2\end{align*}</p>
<p>Is this the right approach?</p>
<p>I want to do the same for $$\left[z-\left(4+\frac{1}{2}i\right)\right]^k = 2$$</p>
<p>as well.</p>
| DeepSea | 101,504 | <p>Let $w = z - 4 - \dfrac{i}{2} \to w^k = 1 \to w = e^{\left(\dfrac{2\pi in}{k}\right)}, n = 0,1,\cdots ,(k-1) \to z = e^{\left(\dfrac{2\pi in}{k}\right)} + 4 + \dfrac{i}{2}$,</p>
<p>For b), $w^k = 2 = \left(\sqrt[k]{2}\right)^k \to \left(\dfrac{w}{\sqrt[k]{2}}\right)^k = 1 \to \dfrac{w}{\sqrt[k]{2}} = e^{\left(\dfrac{2\pi in}{k}\right)} \to w = \sqrt[k]{2}\cdot e^{\left(\dfrac{2\pi in}{k}\right)} \to z = \sqrt[k]{2}\cdot e^{\left(\dfrac{2\pi in}{k}\right)} + 4 + \dfrac{i}{2}, n = 0,1,\cdots ,(k-1)$.</p>
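<p>A quick numeric check of the recipe for, say, $k=5$ (my own verification, written with the shift $z = w + 4 + \frac{i}{2}$ so that $z - \left(4+\frac{1}{2}i\right) = w$):</p>

```python
import cmath

k = 5
shift = 4 + 0.5j

# (a): shifted k-th roots of unity.
roots_a = [cmath.exp(2j * cmath.pi * n / k) + shift for n in range(k)]
# (b): scale each root of unity by 2**(1/k) before shifting.
roots_b = [2 ** (1 / k) * cmath.exp(2j * cmath.pi * n / k) + shift
           for n in range(k)]

ok_a = all(abs((z - shift) ** k - 1) < 1e-9 for z in roots_a)
ok_b = all(abs((z - shift) ** k - 2) < 1e-9 for z in roots_b)
print(ok_a, ok_b)
```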
|
2,502,963 | <p>How do you prove that $e=\sum_{n=0}^{\infty}\frac{1}{n!}$? Here I am assuming $e:=\lim_{n\to\infty}(1+\frac{1}{n})^n$. Do you have any good PDF file or booklet available online on this? I do not like how my analysis text handles this...</p>
| Adayah | 149,178 | <p>First prove by the ratio test that the series $\displaystyle \sum_{k=0}^{\infty} \frac{1}{k!}$ converges and denote the sum by $S$.</p>
<p>Then note that</p>
<p>$$\begin{align*}
\left( 1 + \frac{1}{n} \right)^n & = \sum_{k=0}^n \binom{n}{k} \cdot 1^{n-k} \cdot \left( \frac{1}{n} \right)^k = \sum_{k=0}^n \frac{n (n-1) \ldots (n-k+1)}{k!} \cdot \frac{1}{n^k} \\[1ex]
& = \sum_{k=0}^n \frac{1}{k!} \cdot \left( 1 - \frac{1}{n} \right)\left(1-\frac{2}{n}\right) \ldots \left( 1 - \frac{k-1}{n} \right) = \sum_{k=0}^n \frac{1}{k!} \cdot P_n^{(k)}
\end{align*}$$</p>
<p>where</p>
<p>$$P_n^{(k)} = \left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right) \ldots \left(1-\frac{k-1}{n}\right).$$</p>
<p>We can see that for each $k \in \mathbb{N}$ we have $0 \leqslant P_n^{(k)} \leqslant 1$ and $\displaystyle \lim_{n \to \infty} P_n^{(k)} = 1$. </p>
<p>Fix $\varepsilon > 0$ and let $K$ be so large that $\displaystyle \sum_{k=0}^K \frac{1}{k!} \geqslant (1-\varepsilon)S$. Now let $N$ be so large that whenever $n \geqslant N$, for $k = 0, 1, \ldots, K$ we have $P_n^{(k)} > 1-\varepsilon$.</p>
<p>So for $n \geqslant N$</p>
<p>$$\begin{align*}
(1-\varepsilon)^2 S \leqslant (1-\varepsilon) \sum_{k=0}^K \frac{1}{k!} \leqslant \sum_{k=0}^K \frac{1}{k!} P_n^{(k)} \leqslant \left(1+\frac{1}{n}\right)^n \leqslant \sum_{k=0}^n \frac{1}{k!} \leqslant S.
\end{align*}$$</p>
<p>Therefore </p>
<p>$$e = \lim_{n \to \infty} \left(1+\frac{1}{n}\right)^n = S = \sum_{k=0}^{\infty} \frac{1}{k!}.$$</p>
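<p>The two limits can also be watched converge numerically (my own illustration, not part of the proof): the factorial series reaches machine precision after about twenty terms, while $(1+1/n)^n$ converges only at rate roughly $e/(2n)$.</p>

```python
import math

def partial_factorial_sum(K):
    # sum_{k=0}^{K} 1/k!, accumulating 1/k! incrementally.
    s, term = 0.0, 1.0
    for k in range(K + 1):
        s += term
        term /= (k + 1)
    return s

S = partial_factorial_sum(20)      # very fast convergence
c = (1 + 1 / 10**6) ** (10**6)     # slow convergence
print(S, c, math.e)
```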
|
1,699,752 | <p>Does $ 1 + 1/2 - 1/3 + 1/4 +1/5 - 1/6 + 1/7 + 1/8 - 1/9 + ...$ converge? </p>
<p>I know that $(a_n)= 1/n$ diverges, and $(a_n)= (-1)^n (1/n)$, converges, but given this pattern of a negative number every third element, I am unsure how to determine if this converges. </p>
<p>I tried to use the comparison test, but could not find sequences to compare it to, and the alternating series test doesn't seem to work, because every other is not negative. </p>
| Robert Israel | 8,508 | <p>Hint: $1 + 1/2 - 1/3 > 1$, $1/4 + 1/5 - 1/6 > 1/4$, $1/7 + 1/8 - 1/9 > 1/7$, ...</p>
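<p>Numerically, the grouping in the hint is persuasive too (my own illustration): each block of three consecutive terms exceeds $\frac{1}{3k+1}$, so the partial sums dominate a divergent harmonic-type series.</p>

```python
def block(k):
    # k-th group of three consecutive terms, k = 0, 1, 2, ...
    return 1 / (3 * k + 1) + 1 / (3 * k + 2) - 1 / (3 * k + 3)

# Each block beats the corresponding term of a divergent series ...
dominates = all(block(k) > 1 / (3 * k + 1) for k in range(10000))

# ... so the partial sums keep growing, roughly like (1/3) * log N.
s_small = sum(block(k) for k in range(100))
s_large = sum(block(k) for k in range(100000))
print(dominates, s_small, s_large)
```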
|
829,449 | <p>I am confused on the concept of extensionality versus intensionality. When we say 2<3 is True, we say that 2<3 can be demonstrated by a mathematical proof. So, according to mathematical logic, it is true. Yet, when we consider x(x+1) and X^2 + X, we can say that the x is the same for = 1. However, we call this intensional since the two expressions are true for the same value. This I understand. However, what I am having difficulty with is the claim that numbers are by their very nature abstract objects. So, how is it that there exists any truth values for mathematical statements? I know this seems like a general question but I am having difficulty in wrapping my head around the fact since a proposition about an abstract object by its very nature is intensional. Why then is the number 1 fixed. Is it simply because we agree that 1 is 1 and nothing else? And, does mathematical logic itself establish the meaning of 1?</p>
| m.g. | 154,039 | <p>I remember that we did some geometrical optics in high school. Maybe it is a little too advanced, but maybe you'd like to judge for yourself. What I concretely remember is:</p>
<ul>
<li>Snell's law</li>
<li>Lens optics, especially the "thin lens formula"</li>
</ul>
<p>Snell's law uses very basic trigonometry (actually only the definition of sin), but you could avoid this by just using ratios.</p>
|
64,646 | <p>In $\triangle{ABC}$, given $\angle{A}=80^\circ$, $\angle{B}=\angle{C}=50^\circ$, D is a point in $\triangle{ABC}$, which $\angle{DBC}=20^\circ,\angle{DCB}=40^\circ$. Then how to find find $\angle{DAC}$?</p>
<p>thanks.</p>
| K. Raghavendran | 32,872 | <p>I tried the geometric method but could not succeed. However, the trigonometric route yields the result.
Here it is:
Extend $CD$ to meet $AB$ at $G$. Angle $B$ is $50$ and angle $BCD$ is $40$, so $CG$ is perpendicular to $AB$.
Let $AH$ be the bisector of angle $A$, meeting $BC$ at $H$ (since the triangle is isosceles, $AH \perp BC$). Let the required angle $DAC$ be $k$ degrees.
$$
GD = AG\tan(80-k) = BG\tan(30),
$$
$$
\tan(80-k) = \frac{GD}{AG} = \frac{BG\tan(30)}{AG}.
$$
$$
BG = BC\cos(50)
$$
$$
BC = 2BH = 2AB\cos(50).
$$
So,
$$
BG = 2AB\cos(50) \cos(50)
$$
$$
AG = AC \cos(80) = AB \cos(80)
$$
So,
$$
\tan(80-k) = \frac{BG\tan(30)}{AG} = \frac{2\cos(50) \cos(50) \tan(30)}{\cos(80)}
$$
$$
80-k = \arctan\left(\frac{2\cos (50) \cos (50) \tan (30)}{\cos (80)}\right) = 70^\circ
$$
Thus the required $\angle{DAC} = 10^\circ$</p>
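<p>The value can be cross-checked with plain coordinate geometry (my own verification, independent of the derivation above): put $B=(0,0)$ and $C=(1,0)$, intersect the two rays that define $D$, and measure $\angle DAC$ directly.</p>

```python
import math

def tan_deg(d):
    return math.tan(math.radians(d))

B, C = (0.0, 0.0), (1.0, 0.0)
A = (0.5, 0.5 * tan_deg(50))                   # apex of the 80-50-50 triangle
x = tan_deg(40) / (tan_deg(20) + tan_deg(40))  # ray from B at 20 deg meets
D = (x, x * tan_deg(20))                       # ray from C at 40 deg here

def angle_deg(P, Q, R):
    """Angle QPR in degrees."""
    v1 = (Q[0] - P[0], Q[1] - P[1])
    v2 = (R[0] - P[0], R[1] - P[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

dac = angle_deg(A, D, C)
print(dac)  # ~ 10
```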
|
631,388 | <p>If $\lim_{n\rightarrow \infty }{a_n}=\alpha (\neq 0) $ and $\lim_{n\rightarrow \infty }{b_n}=\beta$, then $\lim_{n\rightarrow \infty }{a_n}^{b_n}=\alpha ^\beta $?</p>
<p>I unconsciously used this but I realized I'd never seen this theorem before. Is it true?</p>
| Community | -1 | <p>Notice that
<span class="math-container">$$a_n^{b_n}=e^{b_n\log(a_n)}$$</span>
so by the <strong>continuity of the exponential and logarithmic functions</strong> you have the result, of course with the <strong>assumption</strong> <span class="math-container">$\boldsymbol{a_n>0}$</span> and <span class="math-container">$\boldsymbol{\alpha>0}$</span>.</p>
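<p>A quick numeric illustration (my own): with $a_n = 2 + 1/n \to 2$ and $b_n = 3 - 1/n \to 3$, the identity gives $a_n^{b_n} = e^{b_n\log(a_n)} \to 2^3 = 8$.</p>

```python
import math

def a(n):
    return 2 + 1 / n

def b(n):
    return 3 - 1 / n

# a_n^{b_n} via a^b = exp(b * log a), valid since a_n > 0.
vals = [math.exp(b(n) * math.log(a(n))) for n in (10, 1000, 100000)]
print(vals)
```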
|
2,227,047 | <p>For any $x=x_1, \dotsc, x_n$, $y=y_1, \dotsc, y_n$ in $\mathbf E^n$, define $\|x-y\|=\max_{1 \le k \le n}|x_k-y_k|$. Let $f\colon\mathbf E^n \to \mathbf E^n$ be given by $f(x)=y$, where $y_k= \sum_{i=1}^n a_{ki} x_i + b_k$ where $k =1,2, \dotsc,n$. Under what conditions is $f$ a contraction mapping?</p>
<p>Any hint or solution for this question? I am beginner for this course, I can not understand clearly. </p>
| David H | 55,051 | <hr>
<p>Suppose $\alpha\in\mathbb{C}\setminus\left(-\infty,0\right]$, and set $\left|\alpha\right|=:\rho\in\left(0,\infty\right)\land\arg{\left(\alpha\right)}=:\theta\in\left(-\pi,\pi\right)$. Given $x\in\mathbb{R}$, we have the following expression for the modulus of the complex expression $\frac{1}{1+\alpha\,x^{2}}$ as a manifestly real function in all of its parameters:</p>
<p>$$\begin{align}
\frac{1}{\left|1+\alpha\,x^{2}\right|}
&=\frac{1}{\sqrt{\left(1+\alpha\,x^{2}\right)\left(1+\bar{\alpha}\,x^{2}\right)}}\\
&=\frac{1}{\sqrt{1+\left(\alpha+\bar{\alpha}\right)x^{2}+\alpha\bar{\alpha}\,x^{4}}}\\
&=\frac{1}{\sqrt{1+2\,\Re{\left(\alpha\right)}\,x^{2}+\left|\alpha\right|^{2}x^{4}}}\\
&=\frac{1}{\sqrt{1+2\rho\cos{\left(\theta\right)}\,x^{2}+\rho^{2}x^{4}}}.\\
\end{align}$$</p>
<p>As such, define the real function $J:\left(0,\infty\right)\times\left(-\pi,\pi\right)\rightarrow\mathbb{R}$ via the definite integral</p>
<p>$$J{\left(\rho,\theta\right)}:=\int_{-\infty}^{\infty}\frac{\mathrm{d}x}{\sqrt{1+2\rho\cos{\left(\theta\right)}\,x^{2}+\rho^{2}x^{4}}}.$$</p>
<p>Since $J{\left(\rho,\theta\right)}$ is even in $\theta$, we may go ahead and assume WLOG that $0\le\theta<\pi$. Given real parameters $\left(\rho,\theta\right)\in\left(0,\infty\right)\times\left(0,\pi\right)$, we find</p>
<p>$$\begin{align}
J{\left(\rho,\theta\right)}
&=\int_{-\infty}^{\infty}\frac{\mathrm{d}x}{\sqrt{1+2\rho\cos{\left(\theta\right)}\,x^{2}+\rho^{2}x^{4}}}\\
&=\int_{-\infty}^{\infty}\frac{\mathrm{d}y}{\sqrt{\rho}\sqrt{1+2y^{2}\cos{\left(\theta\right)}+y^{4}}};~~~\small{\left[\sqrt{\rho}\,x=y\right]}\\
&=\frac{2}{\sqrt{\rho}}\int_{0}^{\infty}\frac{\mathrm{d}y}{\sqrt{1+2y^{2}\cos{\left(\theta\right)}+y^{4}}}\\
&=\frac{2}{\sqrt{\rho}}\int_{0}^{\infty}\frac{\mathrm{d}y}{\sqrt{\left[1-2y\sin{\left(\frac{\theta}{2}\right)}+y^{2}\right]\left[1+2y\sin{\left(\frac{\theta}{2}\right)}+y^{2}\right]}}\\
&=\frac{2}{\sqrt{\rho}}\int_{0}^{\infty}\frac{\mathrm{d}y}{\sqrt{4y^{2}\left[\frac{1+y^{2}}{2y}-\sin{\left(\frac{\theta}{2}\right)}\right]\left[\frac{1+y^{2}}{2y}+\sin{\left(\frac{\theta}{2}\right)}\right]}}\\
&=\frac{2}{\sqrt{\rho}}\int_{-1}^{1}\frac{\mathrm{d}t}{\left(1-t^{2}\right)\sqrt{\left[\frac{1+t^{2}}{1-t^{2}}-\sin{\left(\frac{\theta}{2}\right)}\right]\left[\frac{1+t^{2}}{1-t^{2}}+\sin{\left(\frac{\theta}{2}\right)}\right]}};~~~\small{\left[y=\frac{1-t}{1+t}\right]}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{\left[1+t^{2}-\left(1-t^{2}\right)\sin{\left(\frac{\theta}{2}\right)}\right]\left[1+t^{2}+\left(1-t^{2}\right)\sin{\left(\frac{\theta}{2}\right)}\right]}}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{\left[1-\sin{\left(\frac{\theta}{2}\right)}+\left(1+\sin{\left(\frac{\theta}{2}\right)}\right)t^{2}\right]\left[1+\sin{\left(\frac{\theta}{2}\right)}+\left(1-\sin{\left(\frac{\theta}{2}\right)}\right)t^{2}\right]}}\\
&=\small{\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{\left[2\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}+2t^{2}\sin^{2}{\left(\frac{\pi}{4}+\frac{\theta}{4}\right)}\right]\left[2\sin^{2}{\left(\frac{\pi}{4}+\frac{\theta}{4}\right)}+2t^{2}\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\right]}}}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{4\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\sin^{2}{\left(\frac{\pi}{4}+\frac{\theta}{4}\right)}\left[1+\frac{t^{2}\sin^{2}{\left(\frac{\pi}{4}+\frac{\theta}{4}\right)}}{\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}\right]\left[1+\frac{t^{2}\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}{\sin^{2}{\left(\frac{\pi}{4}+\frac{\theta}{4}\right)}}\right]}}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{4\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\cos^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\left[1+\frac{t^{2}\cos^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}{\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}\right]\left[1+\frac{t^{2}\sin^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}{\cos^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}\right]}}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{\cos^{2}{\left(\frac{\theta}{2}\right)}\left[1+t^{2}\cot^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\right]\left[1+t^{2}\tan^{2}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\right]}}\\
&=\frac{4}{\sqrt{\rho}}\int_{0}^{\cot{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}\frac{\sec{\left(\frac{\theta}{2}\right)}\tan{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}}{\sqrt{\left(1+u^{2}\right)\left[1+u^{2}\tan^{4}{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}\right]}}\,\mathrm{d}u;~~~\small{\left[t\cot{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}=u\right]}\\
\end{align}$$</p>
<p>Introducing the auxiliary parameter, $\tan{\left(\frac{\pi}{4}-\frac{\theta}{4}\right)}=:\tau\in\left(0,1\right)$, we obtain the following expression of the elliptic integral $J{\left(\rho,\theta\right)}$ in its Legendre canonical form:</p>
<p>$$\begin{align}
J{\left(\rho,\theta\right)}
&=\frac{2}{\sqrt{\rho}}\int_{0}^{\tau^{-1}}\frac{\left(1+\tau^{2}\right)}{\sqrt{\left(1+u^{2}\right)\left(1+\tau^{4}u^{2}\right)}}\,\mathrm{d}u\\
&=\frac{2\left(1+\tau^{2}\right)}{\sqrt{\rho}}\int_{0}^{\arctan{\left(\frac{1}{\tau}\right)}}\frac{\sec^{2}{\left(\varphi\right)}}{\sqrt{\left[1+\tan^{2}{\left(\varphi\right)}\right]\left[1+\tau^{4}\tan^{2}{\left(\varphi\right)}\right]}}\,\mathrm{d}\varphi;~~~\small{\left[u=\tan{\left(\varphi\right)}\right]}\\
&=\frac{2\left(1+\tau^{2}\right)}{\sqrt{\rho}}\int_{0}^{\cot^{-1}{\left(\tau\right)}}\frac{\mathrm{d}\varphi}{\sqrt{\cos^{2}{\left(\varphi\right)}+\tau^{4}\sin^{2}{\left(\varphi\right)}}}\\
&=\frac{2\left(1+\tau^{2}\right)}{\sqrt{\rho}}\int_{0}^{\cot^{-1}{\left(\tau\right)}}\frac{\mathrm{d}\varphi}{\sqrt{1-\left(1-\tau^{4}\right)\sin^{2}{\left(\varphi\right)}}}\\
&=\frac{2\left(1+\tau^{2}\right)}{\sqrt{\rho}}F{\left(\cot^{-1}{\left(\tau\right)},\sqrt{1-\tau^{4}}\right)}.\blacksquare\\
\end{align}$$</p>
<p>As of now, I have not made any attempt to verify that the incomplete elliptic integral found in the last line above is ultimately equivalent to the <em>complete</em> elliptic integral produced by Mathematica, in which case we've inadvertently stumbled upon an exotic looking transformation identity that can be used to intimidate calculus students during exams, though not much else. ;) </p>
<hr>
<p><strong>Note:</strong> The definition for the incomplete elliptic integral of the first kind used by Wolfram Alpha and Mathematica differs from mine (which comes from <a href="http://dlmf.nist.gov/19.2#ii" rel="noreferrer">DLMF</a>):</p>
<p>$$F{\left(\theta,\kappa\right)}:=\int_{0}^{\theta}\frac{\mathrm{d}\varphi}{\sqrt{1-\kappa^{2}\sin^{2}{\left(\varphi\right)}}};~~~\small{0\le\theta\le\frac{\pi}{2}\land-1\le\kappa\le1\land\neg\left(\theta=\frac{\pi}{2}\land\kappa^{2}=1\right)}.$$</p>
<hr>
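<p>As a numeric spot-check of the reduction (my own verification with plain Simpson quadrature; the Legendre form is taken with the prefactor $\frac{2\left(1+\tau^{2}\right)}{\sqrt{\rho}}$ carried through from the penultimate line of the chain):</p>

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

rho, theta = 2.0, 1.0
tau = math.tan(math.pi / 4 - theta / 4)

# Left side: J(rho, theta), with x in [0, inf) mapped to t in [0, 1)
# via x = t/(1 - t); the transformed integrand stays bounded.
def lhs_integrand(t):
    x = t / (1 - t)
    jac = 1 / (1 - t) ** 2
    return jac / math.sqrt(1 + 2 * rho * math.cos(theta) * x * x
                           + rho**2 * x**4)

J = 2 * simpson(lhs_integrand, 0.0, 1.0 - 1e-9)

# Right side: (2 (1 + tau^2)/sqrt(rho)) * F(arccot(tau), sqrt(1 - tau^4)),
# with F computed from its DLMF integral definition.
k2 = 1 - tau**4
F = simpson(lambda p: 1 / math.sqrt(1 - k2 * math.sin(p) ** 2),
            0.0, math.atan(1 / tau))
rhs = 2 * (1 + tau**2) / math.sqrt(rho) * F

print(J, rhs)
```

<p>(For $\theta\to0$ both sides reduce to $\pi/\sqrt{\rho}$, since $\tau\to1$ and the quartic becomes $(1+x^2)^2$, which is a reassuring limiting case.)</p>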
|
7,108 | <p>I need help to make a diagram(square), someone can teach me how to do? </p>
<p>I know that I could look at the posts to see a model, but I am stopped for 7 days to edit questions</p>
<p>Thanks in advance.</p>
| apnorton | 23,353 | <p>I would suggest also posting a short/abbreviated (textual) answer. If I came across an answer that was simply a YouTube video, I would skip by the answer rather than upvoting. (This is not to say I wouldn't watch the video if I asked the question--rather, if I didn't ask the question, but just stumbled upon it.)</p>
<p>I tend not to watch random videos. It takes longer for me to determine if a video is good/incredible (versus a total waste of my time) than to determine if a textual response is good. </p>
<p>That said: if there was a clear textual response that I liked, and it closed by saying, "I've explained this in [a more clear fashion]/[more detail]/[a graphical way]/[etc.] in this video (link)," I would probably follow the video link.</p>
<p>tl;dr: A video link by itself wouldn't attract my attention. However, if accompanied by a short/barebones answer, I would be much more interested.</p>
|
724,900 | <p>Assuming $y(x)$ is differentiable. </p>
<p>Then, what is formula for differentiation ${d\over dx}f(x,y(x))$?</p>
<p>I examine some example but get no clue....</p>
| Michael Hoppe | 93,935 | <p>You're substituting $c(t)=(t,y(t))$ in $f$. Hence $(f\circ c)'(t)=\langle \nabla f(c(t)),c'(t)\rangle$.</p>
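<p>Written out, this is $\frac{d}{dx}f(x,y(x)) = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\,y'(x)$, which is easy to confirm numerically (my own example, with $f(x,y)=x^2y$ and $y(x)=\sin x$):</p>

```python
import math

def f(x, y):
    return x * x * y

def y_of(x):
    return math.sin(x)

def total_derivative(x):
    # d/dx f(x, y(x)) = f_x + f_y * y'(x)
    fx = 2 * x * y_of(x)           # partial in x
    fy = x * x                     # partial in y
    return fx + fy * math.cos(x)   # y'(x) = cos x

def finite_difference(x, h=1e-6):
    g = lambda t: f(t, y_of(t))
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 0.8
print(total_derivative(x0), finite_difference(x0))
```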
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Richard Dore | 27 | <p>Unlike full choice, you probably use countable choice all over the place without even recognizing it. Every time you do something iteratively and then take some sort of limit to your construction, you're using countable choice. In many cases, if you do very careful bookkeeping, you can eliminate it on a case by case basis. But you have to be very careful. Without countable choice, the countable union of countable sets isn't necessarily countable.</p>
|
634,890 | <blockquote>
<p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p>
<ol>
<li>The discussion here has turned too chatty and not suitable for the MSE framework. </li>
<li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li>
<li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li>
</ol>
</blockquote>
<p>Eminent Kazakh mathematician
Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p>
<p>Is it correct?</p>
<p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p>
<p>A link to the paper (in Russian):
<a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p>
<p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p>
<p>please confine answers to any actual mathematical error found!
thanks</p>
| myw01 | 121,064 | <p>I have started to translate the paper so that English speakers can explore it. I've only had time for the abstract, introduction, and main result statement, but that already gives an important part of the picture. Any further contributions are welcome. <a href="https://github.com/myw/navier_stokes_translate">https://github.com/myw/navier_stokes_translate</a></p>
|
137,794 | <p>I'm plotting the electric field of a charged ring based a solution from Jackson's <em>Electrodynamics</em>. </p>
<p><em>Mathematica</em> handles <code>VectorPlot3D</code> and <code>SliceVectorPlot3D</code> for the field without a hitch, and <code>SliceContourPlot3D</code> of the field magnitude as well. </p>
<p>However, attempting to produce a straight-up <code>ContourPlot3D</code> of the field magnitude returns multiple errors, including <code>Power::infy</code>,<code>Infinity::indet</code>, and <code>Power::indet</code>, along with general stop for those errors.</p>
<p>Does anyone have any ideas as to why this might be, and how to work around it to generate a contour plot? This is, by the way, version 10.3.</p>
<p>(I aplogize for the greek letters, they really didn't seem to want to copy cleanly.)</p>
<pre><code>Clear["Global`*"]
Φ[s_, z_] := Piecewise[{
{q*Sum[(R^l/(s^2 + z^2)^(0.5*(l + 1)))*LegendreP[l, 1/Sqrt[1 + s^2/z^2]], {l, 0, k}], Sqrt[s^2 + z^2] > R},
{q*Sum[((s^2 + z^2)^(0.5*l)/R^(l + 1))*LegendreP[l, 1/Sqrt[1 + s^2/z^2]], {l, 0, k}], Sqrt[s^2 + z^2] < R}
}]
dΦ = -{D[Φ[s, z], s], 0, D[Φ[s, z], z]};
dΦc = Simplify[TransformedField["Cylindrical" -> "Cartesian", dΦ, {s, ϕ, z} -> {x, y, Q}] /. Q -> z];
k = 5; q = 5; R = 0.2;
cont = SliceContourPlot3D[Norm[dΦc], "CenterPlanes",
{x, -0.35, 0.35}, {y, -0.35, 0.35}, {z, -0.35, 0.35},
ImageSize -> Large, PlotLegends -> Automatic, Contours -> 16,
ColorFunction -> "Rainbow"]
Clear[q, k, R];
k = 5; q = 5; R = 0.2;
ContourPlot3D[Norm[dΦc],
{x, -0.35, 0.35}, {y, -0.35, 0.35}, {z, -0.35, 0.35},
ImageSize -> Large, PlotLegends -> Automatic, Contours -> 16,
ColorFunction -> "Rainbow", RegionFunction -> Function[{x, y, z}, x*y*z > 0]]
Clear[k, q, R]
</code></pre>
| Jason B. | 9,490 | <p>Extended comment here, since it doesn't answer the underlying question of <em>why does this happen</em>, hopefully someone else may have an answer about why you get those errors. This is a workaround I would use.</p>
<p>For something like this (where the computations seem to take forever and I don't know how long it will take to do the underlying adaptive sampling) I <em>always</em> try to use <code>ListContourPlot3D</code> instead. It allows me to see the progress easily, it allows me to decide how long it's going to take. </p>
<p>Check how long a single computation takes,</p>
<pre><code>Norm[dΦc] /. {x -> .2, y -> .2,
z -> .2} // RepeatedTiming
(* {0.0011, 59.3032} *)
</code></pre>
<p>Now decide how long it will take to get a 3D grid over your range <code>{-0.35, 0.35}</code> using a step size of 0.02,</p>
<pre><code>36^3 First[%]
(* 52. *)
</code></pre>
<p>You can use a finer grid if you want to wait more than 52 seconds.</p>
<pre><code>data = Table[
Norm[dΦc] /. {x -> xx, y -> yy, z -> zz},
   {xx, -.35, .35, .02}, {yy, -.35, .35, .02}, {zz, -.35, .35, .02}] ~Monitor~ {xx, yy, zz}
</code></pre>
<p>(using <code>Monitor</code> to check the progress), and then plot it</p>
<pre><code>ListContourPlot3D[data,
DataRange -> {{-.35, .35}, {-.35, .35}, {-.35, .35}},
ImageSize -> Large, PlotLegends -> Automatic, Contours -> 16,
ColorFunction -> "Rainbow",
RegionFunction -> Function[{x, y, z}, x*y*z > 0]]
</code></pre>
<p><img src="https://i.stack.imgur.com/ZkIB3.png" alt="Mathematica graphics"></p>
<p>You might get even better results with a finer grid. Or you can use <code>ListInterpolation</code> and get something like <a href="https://i.stack.imgur.com/tr1co.png" rel="nofollow noreferrer">this</a>, but it still gives the same errors (just returns a result quicker).</p>
|
4,496,815 | <blockquote>
<p>For <span class="math-container">$n, m \in \mathbb{N}, m \leq n$</span>, let <span class="math-container">$P(n, m)$</span> denote the number of permutations of length <span class="math-container">$n$</span> for which <span class="math-container">$m$</span> is the first number whose position is left unchanged. Thus, <span class="math-container">$P(n, 1) = (n - 1)!$</span> and <span class="math-container">$P(n, 2) = (n - 1)! - (n - 2)!$</span>. Show that <span class="math-container">$$P(n, m + 1) = P(n, m) - P(n - 1, m)$$</span> for each <span class="math-container">$m = 1, 2, \cdots, n - 1$</span>.</p>
</blockquote>
<p>Hello, can someone help me with the combinatorial proof for this?</p>
<p>I can prove it in other way, by proving that <span class="math-container">$$P(n, m) = \sum_{i = 0}^{m - 1}(-1)^i\binom{m-1}{i}(n - i-1)!$$</span>
using PIE. Now, turning <span class="math-container">$P(n, m) - P(n - 1,m)$</span> into <span class="math-container">$P(n, m+1)$</span> is just algebraic manipulation.</p>
<p>I'd be thankful if someone could help in proving this combinatorially.</p>
<p>Thanks</p>
| M1183 | 531,544 | <p>The function <span class="math-container">$f: \mathbb{R}_+ \rightarrow \mathbb{R}, x\mapsto f(x)=x+1/x$</span> is continuously differentiable for all <span class="math-container">$x$</span> in its domain. Its derivative is</p>
<p><span class="math-container">$$f'(x)=1-1/x^2.$$</span></p>
<p>Thus an extremum <span class="math-container">$x^*$</span> is determined by the condition <span class="math-container">$f'(x^*)=1-1/(x^*)^2=0$</span>, such that <span class="math-container">$x^*=1$</span> is the unique extremum. As <span class="math-container">$f'(x)<0$</span> for <span class="math-container">$0<x<1$</span> and <span class="math-container">$f'(x)>0$</span> for <span class="math-container">$x>1$</span>, the extremum is a minimum. The function <span class="math-container">$f$</span> evaluates to <span class="math-container">$f(x^*)=2$</span> at its minimum.</p>
<p>As a consequence, the original <span class="math-container">$\mu$</span> asked for assumes a <strong>minimum</strong> value where all the three expressions <span class="math-container">$f(x)$</span> are minimal for <span class="math-container">$x\in\{\alpha, \beta,\gamma\}$</span> and thus evaluates to <span class="math-container">$\mu_\textsf{min}=2^{(2^2)}=16$</span> as stated in the task description.</p>
<p>Moreover, <span class="math-container">$\mu$</span> is unbounded and can grow arbitrarily large. Thus a <strong>maximum</strong> does not exist on the unbounded domain. To prove this statement, assume this were not the case and <span class="math-container">$\mu\leq M$</span> for some fixed <span class="math-container">$M\in\mathbb{R}$</span> with <span class="math-container">$M>2$</span>. Then take <span class="math-container">$\alpha>M$</span> and <span class="math-container">$\beta,\gamma$</span> arbitrary positive. As <span class="math-container">$f(x)\geq 2$</span> for all positive <span class="math-container">$x$</span>, we have</p>
<p><span class="math-container">$$\mu=M^{(2^2)}>M$$</span> because <span class="math-container">$M>1$</span> and <span class="math-container">$2^2>1$</span>, a contradiction to our assumption.</p>
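<p>The claimed minimum of $x+1/x$ is easy to confirm numerically (my quick check): on a grid over $(0,5]$ the function never drops below $2$ and attains $2$ at $x=1$.</p>

```python
def f(x):
    return x + 1 / x

xs = [i / 1000 for i in range(1, 5001)]  # grid over (0, 5]
vals = [f(x) for x in xs]
m = min(vals)
argmin = xs[vals.index(m)]
print(m, argmin)
```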
|
70,429 | <p>For a $n$-dim smooth projective complex algebraic variety $X$, we can form the complex line bundle $\Omega^n$ of holomorphic $n$-form on $X$. Let $K_X$ be the divisor class of $\Omega^n$, then $K_X$ is called the canonical class of $X$.</p>
<p><strong>Question</strong>: Is the homology class of $K_X$ in $H_{2n-2}(X)$ a topological invariant? If so, please tell me the idea of the proof or some references. If not, please give me counterexamples.</p>
| Dmitri Panov | 943 | <p>It is well known that in dimension $3$ and higher there exist complex structures on diffeomorphic manifolds with totally different Chern classes (and Chern numbers).</p>
<p>For the case of complex manifolds you can check </p>
<p><a href="https://mathoverflow.net/questions/26586/can-one-bound-the-todd-class-of-a-3-dimensional-variety-polynomially-in-c-3/26598#26598">Can one bound the todd class of a 3-dimensional variety polynomially in c_3 </a></p>
<p>For the case of complex projective manifolds the reference given in the same answer: </p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0903/0903.1587v1.pdf" rel="nofollow noreferrer">http://arxiv.org/PS_cache/arxiv/pdf/0903/0903.1587v1.pdf</a></p>
|
2,934,238 | <p>Let <span class="math-container">$a\in \mathbb{Q}$</span> such that <span class="math-container">$18a$</span> and <span class="math-container">$25a$</span> are integers, then we wish to prove that <span class="math-container">$a$</span> must be an integer itself. What that means is that <span class="math-container">$a=\frac{p}{1}$</span> where <span class="math-container">$p \in \mathbb{Z}$</span>. What we do know is that we can express the <span class="math-container">$\gcd(18,25)$</span> as:
<span class="math-container">$$ \gcd(18,25)=18x +25y$$</span> Now if <span class="math-container">$x=y=a$</span>, we are done, since:
<span class="math-container">$$ \gcd(18,25)=18a +25a=43a$$</span> as the <span class="math-container">$\gcd$</span> is always an integer and so is 43, so <span class="math-container">$a$</span> is also an integer.</p>
<p>But, how would I generalise this?</p>
| José Carlos Santos | 446,262 | <p>All you know is that there are <em>some</em> <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with that property, but that doesn't imply that you can take <span class="math-container">$x=y=a$</span>.</p>
<p>Note that <span class="math-container">$\gcd(18,25)=1$</span>. Therefore, there are integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> such that <span class="math-container">$18x+25y=1$</span>. But then <span class="math-container">$a=18xa+25ya\in\mathbb Z$</span>, since <span class="math-container">$18a,25a,x,y\in\mathbb Z$</span>.</p>
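<p>The argument can be checked computationally; the extended-Euclidean helper and the search bounds below are my own illustration, not part of the answer:</p>

```python
from fractions import Fraction

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(18, 25)
assert g == 1 and 18 * x + 25 * y == 1

# Search small rationals a = p/q: whenever 18a and 25a are both integers,
# the identity a = x*(18a) + y*(25a) shows a is an integer too.
for p in range(-50, 51):
    for q in range(1, 20):
        a = Fraction(p, q)
        if (18 * a).denominator == 1 and (25 * a).denominator == 1:
            assert a == x * (18 * a) + y * (25 * a)
            assert a.denominator == 1
```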
|
201,820 | <p>Suppose we have in <code>~/time-data/time-data.org</code> the following data:</p>
<pre><code>* Parent1
:LOGBOOK:
CLOCK: [2019-07-09 Tue 00:00]--[2019-07-09 Tue 00:20] => 0:20
:END:
** Child1
:LOGBOOK:
CLOCK: [2019-07-10 Wed 00:02]--[2019-07-10 Wed 00:40] => 0:38
:END:
** Child2
:LOGBOOK:
CLOCK: [2019-07-11 Thu 00:02]--[2019-07-11 Thu 06:40] => 0:38
:END:
</code></pre>
<p>We can then use <a href="https://github.com/atheriel/org-clock-csv" rel="nofollow noreferrer">atheriel/org-clock-csv</a> to pull this data via</p>
<pre><code>(org-clock-csv-to-file "~/time-data/time-data.csv" '("~/time-data/time-data.org"))
</code></pre>
<p>which populates <code>time-data.csv</code> with</p>
<pre><code>task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
</code></pre>
<p>so that in Mathematica we can run:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/43DSa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43DSa.png" alt="enter image description here"></a></p>
</blockquote>
<p><strong>Question:</strong> How do we get a <code>DateListPlot</code> out of this that shows, i.e., hours spent per day?</p>
<hr>
<p><strong>EDIT:</strong> I fed everyone's answers through my actual data (which spans several months) and <a href="https://www.wolframcloud.com/obj/george.w.singer/Published/time-data" rel="nofollow noreferrer">published them here</a>. I get lots of errors and (mostly) unparsable graphs. I think these answers are getting me closer to something usable though!</p>
| kglr | 125 | <pre><code>csv = "task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-07 00:00,2019-07-07 00:20,,,
Child1,Parent1,,2019-07-8 00:02,2019-07-8 00:40,,,
Child2,Parent1,,2019-07-9 00:02,2019-07-9 06:40,,,
Parent2,,,2019-07-08 00:00,2019-07-08 00:20,,,
Child21,Parent2,,2019-07-9 00:02,2019-07-9 00:40,,,
Child22,Parent2,,2019-07-10 00:02,2019-07-10 06:40,,,
Parent3,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child31,Parent3,,2019-07-10 00:02,2019-07-10 00:40,,,
Child32,Parent3,,2019-07-11 00:02,2019-07-11 06:40,,,";
dt = ImportString[csv, "CSV", "HeaderLines" -> 1] /. {a_, "", b__} :> {a, a, b};
dt2 = Values @ GroupBy[dt, #[[2]] &,
Labeled[Interval[DateObject[#, "Minute"] & /@ {#, #2}], #3, Above] & @@@
#[[All, {4, 5, 1}]] &];
tlp = TimelinePlot[dt2,
PlotStyle -> Thread[Directive[{Red, Green, Blue}, CapForm["Round"], Thickness[.015]]],
AxesOrigin -> Bottom, ImageSize -> 800, AspectRatio -> 1/3]
</code></pre>
<p><a href="https://i.stack.imgur.com/VkbCu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VkbCu.png" alt="enter image description here"></a></p>
<pre><code>edges = DirectedEdge @@@ DeleteCases[dt[[All, {2, 1}]], {a_, a_}];
vertices = VertexList[edges];
vcoords = Association @
Cases[tlp[[1]], Text[v_, Offset[o_, vc_], ___] :> v[[1]] -> vc, All];
grph = Show @ Graph[vertices, edges, VertexShapeFunction -> None,
EdgeShapeFunction -> ({Arrowheads[{{.02, .8}}],
Arrow@GraphElementData[{"CurvedArc", "Curvature" -> -.00001}][##]} &),
VertexCoordinates -> (vcoords /@ vertices), AspectRatio -> 1/3];
Show[ tlp, Prolog -> grph[[1]]]
</code></pre>
<p><a href="https://i.stack.imgur.com/ngOTi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ngOTi.png" alt="enter image description here"></a></p>
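<p>For the &quot;hours per day&quot; aggregation itself, here is a plain-Python cross-check (my own sketch, independent of the Mathematica code above; it naively attributes each clocked interval to its start date). The resulting date/hours pairs are exactly what one would feed to a date-based plot:</p>

```python
import csv
import io
from collections import defaultdict
from datetime import datetime

sample = """task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
"""

fmt = "%Y-%m-%d %H:%M"
hours_per_day = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    start = datetime.strptime(row["start"], fmt)
    end = datetime.strptime(row["end"], fmt)
    # naive: the whole interval is credited to its start date
    hours_per_day[start.date().isoformat()] += (end - start).total_seconds() / 3600

assert abs(hours_per_day["2019-07-09"] - 20 / 60) < 1e-9
assert abs(hours_per_day["2019-07-11"] - (6 + 38 / 60)) < 1e-9
```

Intervals spanning midnight would need to be split across dates; the sketch does not handle that case.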
|
3,858,414 | <p>I need help solving this task; if anyone has seen a similar problem, it would help me.</p>
<p>The task is:</p>
<p>Calculate using the rule <span class="math-container">$\lim\limits_{x\to \infty}\left(1+\frac{1}{x}\right)^x=\large e $</span>:</p>
<p><span class="math-container">$\lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}
$</span></p>
<p>I tried this:</p>
<p><span class="math-container">$ \lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{1+\frac{\sin x}{\cos x}}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{\sin x+\cos x}{\cos x\cdot(1+\sin x)}\right)^{\Large\frac{1}{\sin x}}
$</span></p>
<p>But I do not know how to continue from here.
Thanks in advance!</p>
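<p>A numeric check I tried (my own, just to guess the value) suggests the limit is <span class="math-container">$1$</span>:</p>

```python
import math

def g(x):
    return ((1 + math.tan(x)) / (1 + math.sin(x))) ** (1 / math.sin(x))

# tan x - sin x behaves like x**3 / 2, so the exponentiated difference
# vanishes as x -> 0 and the expression should tend to 1
for x in (0.1, 0.01, 0.001):
    assert abs(g(x) - 1) < x
```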
| QED | 91,884 | <p>There are only two possibilities: either <span class="math-container">$[x]\cap[y]=\emptyset$</span> or <span class="math-container">$[x]\cap[y]\neq\emptyset$</span>. Now <span class="math-container">$[x]\cap[y]\neq\emptyset$</span> means that there exists an element <span class="math-container">$z\in[x]\cap[y]$</span>.</p>
<p>If <span class="math-container">$z\in[x]\cap[y]$</span>, then <span class="math-container">$xRz$</span> and <span class="math-container">$yRz$</span>, which implies by transitivity and symmetry (<span class="math-container">$yRz\implies zRy$</span>) that <span class="math-container">$xRy$</span>. In that case you can show <span class="math-container">$[x]=[y]$</span>. This is because for any <span class="math-container">$a\in A$</span> with <span class="math-container">$aRx$</span>, <span class="math-container">$aRx$</span> and <span class="math-container">$xRy$</span> implies by transitivity that <span class="math-container">$aRy$</span>, and hence <span class="math-container">$[x]\subseteq[y]$</span>. The reverse inclusion <span class="math-container">$[y]\subseteq[x]$</span> follows similarly.</p>
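<p>A tiny computational illustration with a concrete equivalence relation (congruence mod <span class="math-container">$3$</span>; my own example, not from the question):</p>

```python
# a R b  iff  a % 3 == b % 3 on A = {0, ..., 8}: reflexive, symmetric, transitive
A = range(9)
cls = {x: frozenset(y for y in A if y % 3 == x % 3) for x in A}

# any two equivalence classes are either identical or disjoint
for x in A:
    for y in A:
        assert cls[x] == cls[y] or cls[x].isdisjoint(cls[y])
```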
|
10,468 | <p>I know that many graph problems can be solved very quickly on graphs of bounded degeneracy or arboricity. (It doesn't matter which one is bounded, since they're at most a factor of 2 apart.) </p>
<p>From Wikipedia's article on the clique problem I learnt that finding cliques of any constant size k takes linear time on graphs of bounded arboricity. That's pretty cool.</p>
<p>I wanted to know more examples of algorithms where the bounded arboricity condition helps. This might even be well-studied enough to have a survey article written on it. Unfortunately, I couldn't find much about my question. Can someone give me examples of such algorithms and references? Are there some commonly used algorithmic techniques that exploit this promise? How can I learn more about these results and the tools they use?</p>
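<p>To make the kind of algorithm I mean concrete: the standard trick is to orient edges along a degeneracy ordering, so that every vertex has few out-neighbours. The sketch below (my own illustration; it uses a simple quadratic-time ordering rather than the linear-time bucket version) lists triangles this way:</p>

```python
def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex (quadratic, for clarity)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed, order = set(), []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        order.append(v)
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return order

def triangles(adj):
    """List each triangle once by orienting edges along the degeneracy order."""
    pos = {v: i for i, v in enumerate(degeneracy_order(adj))}
    out = {v: {w for w in adj[v] if pos[w] > pos[v]} for v in adj}  # few out-edges
    tris = []
    for v in adj:
        for w in out[v]:
            tris.extend((v, w, u) for u in out[v] & out[w])
    return tris

# K4: every 3-subset of its 4 vertices is a triangle
adj = {v: {u for u in range(4) if u != v} for v in range(4)}
assert len(triangles(adj)) == 4
```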
| Carter Tazio Schonwald | 426 | <p>I think you might want to look at the related/(same?) concept of treewidth. Its a much stronger sparseness requirement than constant degree, planar, etc, and if you have something like $\mathcal O(\log n)$ tree width, many NP-Hard problems on graphs become easy (such as computing graph cutes). Unfortunately in general computing a tree width decomposition is np hard </p>
<p>See the wikipedia <a href="http://en.wikipedia.org/wiki/Tree_decomposition" rel="nofollow">page</a> for details.</p>
|
1,412,594 | <p>I'm a statistics teacher at a college. One day a student came to me with a doubt about a probability exercise. The text goes like this:</p>
<blockquote>
<p>A person has two boxes $A$ and $B$. The first contains $4$ white balls and $5$ black balls, and the second contains $5$ white balls and $4$ black balls. The person randomly takes one ball from the first box and puts it into the second box. After that, he takes a ball from the second box. Find the probability of taking balls of the same color in this process (i.e., the one that is taken from box $A$ to $B$ and the one taken from box $B$).</p>
</blockquote>
<p>The student made the following:
Let $C$ be the event of taking balls of the same color in the process above described.
Let $W$ be the event of taking White balls from both boxes and $Bl$ the event of taking black balls from both boxes. Let $Wb_1$ be the event of taking a white ball from the first box and $Wb_2$ the event of taking a white ball from the second box. The same with $Blb_1$ and $Blb_2$. Then,</p>
<p>\begin{align}
\mathbb P(C)&=\mathbb P(W)+\mathbb P(Bl)\\ &= \mathbb P(Wb_1)\mathbb P(Wb_2)+\mathbb P(Blb_1)\mathbb P(Blb_2)\\&= \frac49 \cdot\frac6{10} + \frac59\cdot\frac5{10}
\end{align}</p>
<p>I told the student the reasoning was wrong: he has to use conditional probability, because the events $Wb_1$ and $Wb_2$, as well as $Blb_1$ and $Blb_2$, are not independent. A probability teacher (his actual teacher) told the student he was right, and that's why I am making this post. Who is right? Thanks!</p>
| joriki | 6,622 | <p>The calculation on the right-hand side is correct; just the notation is bad, because as you say the right-hand factors in both terms are conditional probabilities. A better way to write this would be</p>
<p>$$
P(\text{C})=P(\text{W})+P(\text{Bl})= P(\text{Wb1})P(\text{Wb2}\mid\text{Wb1})+P(\text{Blb1})P(\text{Blb2}\mid\text{Blb1})=\frac49\cdot\frac6{10}+\frac59\cdot\frac5{10}\;.
$$</p>
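<p>For what it's worth, the value can be confirmed with exact arithmetic (my own check):</p>

```python
from fractions import Fraction as F

p_w1 = F(4, 9)            # white ball drawn from box A
p_w2_given_w1 = F(6, 10)  # white from box B after a white ball was added
p_b1 = F(5, 9)
p_b2_given_b1 = F(5, 10)  # black from box B after a black ball was added

p_same = p_w1 * p_w2_given_w1 + p_b1 * p_b2_given_b1
assert p_same == F(49, 90)
```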
|
1,412,594 | <p>I'm a statistics teacher at a college. One day a student came to me with a doubt about a probability exercise. The text goes like this:</p>
<blockquote>
<p>A person has two boxes $A$ and $B$. The first contains $4$ white balls and $5$ black balls, and the second contains $5$ white balls and $4$ black balls. The person randomly takes one ball from the first box and puts it into the second box. After that, he takes a ball from the second box. Find the probability of taking balls of the same color in this process (i.e., the one that is taken from box $A$ to $B$ and the one taken from box $B$).</p>
</blockquote>
<p>The student made the following:
Let $C$ be the event of taking balls of the same color in the process above described.
Let $W$ be the event of taking White balls from both boxes and $Bl$ the event of taking black balls from both boxes. Let $Wb_1$ be the event of taking a white ball from the first box and $Wb_2$ the event of taking a white ball from the second box. The same with $Blb_1$ and $Blb_2$. Then,</p>
<p>\begin{align}
\mathbb P(C)&=\mathbb P(W)+\mathbb P(Bl)\\ &= \mathbb P(Wb_1)\mathbb P(Wb_2)+\mathbb P(Blb_1)\mathbb P(Blb_2)\\&= \frac49 \cdot\frac6{10} + \frac59\cdot\frac5{10}
\end{align}</p>
<p>I told the student the reasoning was wrong: he has to use conditional probability, because the events $Wb_1$ and $Wb_2$, as well as $Blb_1$ and $Blb_2$, are not independent. A probability teacher (his actual teacher) told the student he was right, and that's why I am making this post. Who is right? Thanks!</p>
| heropup | 118,193 | <p>As has already been pointed out, the precise and careful choice of notation is paramount.</p>
<p>The event $C$ of choosing the same color is the disjoint union of two separate events, so it is better to choose the following notation: Let $(b_1, b_2)$ be the random outcome of the two ball draws in order, where $b_i \in \{W, B\}$ for $i = 1, 2$. Thus, we want: $$\begin{align*} \Pr[(W,W) \cup (B,B)] &= \Pr[(W,W)] + \Pr[(B,B)]. \end{align*}$$ Now consider the first term on the RHS: $$\Pr[(W,W)] = \Pr[b_1 = W \cap b_2 = W] = \Pr[b_2 = W \mid b_1 = W]\Pr[b_1 = W],$$ by the definition of conditional probability. Similarly, $$\Pr[(B,B)] = \Pr[b_2 = B \mid b_1 = B]\Pr[b_1 = B].$$ Now we easily see $$\Pr[b_1 = W] = \frac{4}{4+5} = \frac{4}{9},$$ and $$\Pr[b_1 = B] = \frac{5}{4+5} = \frac{5}{9},$$ and the conditional probabilities are $$\Pr[b_2 = W \mid b_1 = W] = \frac{5+1}{4+5+1} = \frac{6}{10},$$ and $$\Pr[b_2 = B \mid b_1 = B] = \frac{4+1}{4+5+1} = \frac{5}{10}.$$ Then we get the desired probability $$\Pr[(W,W) \cup (B,B)] = \frac{4(6) + 5(5)}{9(10)} = \frac{49}{90}.$$ Note how we have clearly defined random variables $b_1, b_2$. Precise notation is crucial not just for solving problems but also in making the solution intelligible and rigorous.</p>
|
4,531,652 | <p>In my school book, I read this theorem</p>
<blockquote>
<p>Let <span class="math-container">$n>0$</span> be an odd natural number (an odd positive integer); then the equation <span class="math-container">$$x^n=a$$</span> has exactly one real root.</p>
</blockquote>
<p>But the book doesn't provide a proof; it only states <span class="math-container">$x=\sqrt [n]a$</span>.
How can I prove this theorem?</p>
<p>I tried to prove some special cases</p>
<p><span class="math-container">$$x^3=8$$</span>
<span class="math-container">$$(x-2)(x^2+2x+4)=0$$</span>
<span class="math-container">$$x=2 \vee x^2+2x+4=0$$</span></p>
<p>But the Discriminant of <span class="math-container">$x^2+2x+4=0$</span> equals to <span class="math-container">$2^2-4×4=-12<0$</span>. So <span class="math-container">$x=2$</span> is an only root. But for <span class="math-container">$x^5=32$</span>, I got <span class="math-container">$x=2$</span> and <span class="math-container">$x^4+2x^3+4x^2+8x+16=0$</span>.</p>
<p>I don't know how I can proceed.</p>
| user | 505,767 | <p>We can assume wlog <span class="math-container">$x$</span> and <span class="math-container">$a$</span> both positive such that <span class="math-container">$x^n=a$</span> indeed</p>
<p><span class="math-container">$$x^n=a \iff (-x)^n =(-1)^nx^n =-a$$</span></p>
<p>Then assume by contradiction <span class="math-container">$\exists y>0 \; y\neq x$</span> such that <span class="math-container">$y^n=a$</span> then</p>
<p><span class="math-container">$$x^n-y^n=(x-y)(x^{n-1}+x^{n-2}y+\ldots xy^{n-2}+y^{n-1})=0$$</span></p>
<p>which is impossible, that is <span class="math-container">$x^n$</span> is injective.</p>
<p>Therefore it suffices to show that at least one solution exists and it follows from <a href="https://en.wikipedia.org/wiki/Intermediate_value_theorem" rel="nofollow noreferrer">IVT</a> using that <span class="math-container">$x^n$</span> is continuous with <span class="math-container">$x^n=0$</span> at <span class="math-container">$x=0$</span> and <span class="math-container">$\lim_{x\to \infty}x^n = \infty$</span>, that is <span class="math-container">$x^n$</span> is also surjective.</p>
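<p>The existence half (the IVT step) can be mirrored numerically by bisection, which is essentially a constructive IVT for the increasing function <span class="math-container">$x^n$</span> on <span class="math-container">$[0,\infty)$</span> (my own sketch, positive <span class="math-container">$a$</span> only):</p>

```python
def nth_root(a, n, tol=1e-12):
    """Bisection for the unique positive solution of x**n == a (a > 0, n odd)."""
    lo, hi = 0.0, max(1.0, a)   # x**n is increasing on [0, inf) and hi**n >= a
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** n < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(nth_root(8, 3) - 2) < 1e-9
assert abs(nth_root(32, 5) - 2) < 1e-9
```

For negative $a$ one uses the oddness of $n$: the root is $-\,$the root of $|a|$.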
|
664 | <p>Erdős's 1947 probabilistic trick provided a lower exponential bound for the Ramsey number $R(k)$. Is it possible to explicitly construct 2-colourings on exponentially sized graphs without large monochromatic subgraphs?</p>
<p>That is, can we explicitly construct (edge) 2-colourings on graphs of size $c^k$, for some $c>0$, with no monochromatic complete subgraph of size $k$?</p>
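<p>For reference, the 1947 counting bound itself is easy to evaluate: if <span class="math-container">$\binom{n}{k}\,2^{1-\binom{k}{2}} < 1$</span>, then <span class="math-container">$R(k,k) > n$</span>. A small sketch (my own, using the standard union-bound form) computes the largest such <span class="math-container">$n$</span>:</p>

```python
from math import comb

def erdos_lower_bound(k):
    """Largest n with C(n, k) * 2**(1 - C(k, 2)) < 1, so R(k, k) > n."""
    n = k
    while comb(n + 1, k) * 2 ** (1 - comb(k, 2)) < 1:
        n += 1
    return n

# the bound grows roughly like 2**(k/2), i.e. exponentially in k
assert erdos_lower_bound(4) >= 2 ** 2
assert erdos_lower_bound(10) >= 2 ** 5
```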
| user1347 | 1,347 | <p>As was mentioned in the previous answers, the answer is no. Or more accurately I'd say that the answer is <em>currently no</em>, but possibly yes. </p>
<p>Also, consider the related question of constructing a bipartite graph with parts of size $2^n$, which contains no $K_{k,k}$ and whose complement contains no $K_{k,k}$, where $k = O(n)$. Such an explicit construction will, as far as I can tell, have a huge impact on derandomization of randomized algorithms, among other topics in theoretical computer science. See e.g. <a href="http://www.math.ias.edu/~avi/PUBLICATIONS/ABSTRACT/fp60ab.pdf" rel="nofollow">this paper</a>, where such an explicit construction is given for $k = 2^{n^{o(1)}}$.</p>
<p>You might also be interested in the paper accompanying Luca Trevisan's talk at ICM '06 (it seems I cannot post the link, being a new user; you can google it, though; its title is "Pseudorandomness and Combinatorial Constructions"). This may contain more connections between explicit constructions of combinatorial objects and applications in theoretical computer science.</p>
|
175,723 | <p>I am reading Goldstein's Classical Mechanics and I've noticed there is copious use of the $\sum$ notation. He even writes the chain rule as a sum! I am having a real hard time following his arguments where this notation is used, often with differentiation and multiple indices thrown in for good measure. How do I get some working insight into how sums behave without actually saying "Now imagine n=2. What does the sum become in this case?" Is there an easier way to do this? Is there an "algebra" or "calculus" of sums, like a set of rules for manipulating them? I've seen some documents on the web but none of them seem to come close to Goldstein's usage in terms of sophistication. Where can I get my hands on practice material for this notation?</p>
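<p>The manipulations I keep running into can at least be spot-checked numerically. Two of the most common ones (shifting an index and swapping the order of a double sum) look like this; the examples are my own:</p>

```python
n = 7
a = [k * k for k in range(n + 2)]

# index shift: sum_{k=1}^{n} a_k == sum_{j=0}^{n-1} a_{j+1}
assert sum(a[k] for k in range(1, n + 1)) == sum(a[j + 1] for j in range(n))

# swapping a finite double sum: sum_i sum_j b_ij == sum_j sum_i b_ij
b = [[i * 10 + j for j in range(4)] for i in range(3)]
assert sum(sum(row) for row in b) == sum(sum(b[i][j] for i in range(3)) for j in range(4))
```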
| robjohn | 13,854 | <p>This demonstrates what Robert Israel suggests.</p>
<p>Suppose that
$$
\theta^3+11\theta-4=0
$$
and
$$
\alpha=\frac{\theta^2-\theta}{2}
$$
Then
$$
\begin{align}
\alpha^0&=\frac22\\
\alpha^1&=\frac{\theta^2-\theta}{2}\\
\alpha^2&=\frac{-5\theta^2+13\theta-4}{2}\\
\alpha^3&=\frac{19\theta^2-107\theta+36}{2}
\end{align}
$$
and
$$
\begin{bmatrix}36&-107&19\end{bmatrix}
\begin{bmatrix}
2&0&0\\
0&-1&1\\
-4&13&-5
\end{bmatrix}^{-1}
=\begin{bmatrix}-4&-36&-11\end{bmatrix}
$$
Therefore,
$$
\alpha^3+11\alpha^2+36\alpha+4=0
$$
and $\alpha$ is an algebraic integer.</p>
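<p>The power computations above can be machine-checked with exact rational arithmetic, reducing powers of $\theta$ via $\theta^3 = 4 - 11\theta$ (my own sketch):</p>

```python
from fractions import Fraction as F

def reduce_mod(coeffs):
    """Rewrite a polynomial in theta (index = power) using theta^3 = 4 - 11*theta."""
    c = list(coeffs)
    while len(c) > 3:
        top = c.pop()          # coefficient of the current top power theta^k, k >= 3
        c[-3] += 4 * top       # theta^k = 4*theta^(k-3) - 11*theta^(k-2)
        c[-2] += -11 * top
    while len(c) < 3:
        c.append(F(0))
    return c

def mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return reduce_mod(out)

alpha = [F(0), F(-1, 2), F(1, 2)]            # alpha = (theta^2 - theta)/2
a2 = mul(alpha, alpha)
a3 = mul(a2, alpha)
assert a2 == [F(-2), F(13, 2), F(-5, 2)]     # (-5 theta^2 + 13 theta - 4)/2
assert a3 == [F(18), F(-107, 2), F(19, 2)]   # (19 theta^2 - 107 theta + 36)/2

total = [a3[i] + 11 * a2[i] + 36 * alpha[i] for i in range(3)]
total[0] += 4
assert total == [F(0), F(0), F(0)]           # alpha^3 + 11 alpha^2 + 36 alpha + 4 == 0
```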
|
2,128,182 | <p>I've been looking for a definition of <em>game</em> in game theory. I'd like to know if there is a definition shorter than that of von Neumann and Morgenstern in <em>Theory of Games and Economic Behavior</em> and not so vague as "interactive decision problem" or "situation of conflict, or any other kind of interaction". I've started a study of the proof of the existence of Nash equilibria using Brouwer's fixed-point theorem, and I am looking for a definition that allows me to understand concepts such as <em>normal-form game</em> and <em>mixed strategy</em> without excessive complexity. I'd appreciate some bibliographic suggestions. Thank you!</p>
| Anna SdTC | 410,766 | <p>This definition is from Osborne and Rubinstein, "A Course in Game Theory", section 1.1:</p>
<blockquote>
<p>Game theory is a bag of analytical tools designed to help us understand the phenomena that we observe when decision-makers interact. The basic assumptions that underlie the theory are that decision-makers pursue well-defined exogenous objectives (they are rational) and take into account their knowledge or expectations of other decision-makers’ behavior (they reason strategically).</p>
<p>A game is a description of strategic interaction that includes the constraints on the actions that the players can take and the players’ interests, but does not specify the actions that the players do take. A solution is a systematic description of the outcomes that may emerge in a family of games. Game theory suggests reasonable solutions for classes of games and examines their properties.</p>
</blockquote>
<p>In Fudenberg and Tirole, "Game Theory", subsection 1.1.1, there is a more formal definition of normal-form games.</p>
|
3,809,127 | <blockquote>
<p>Determine if the sequence <span class="math-container">$x_k \in \mathbb{R}^3$</span> is convergent when <span class="math-container">$$x_k=(2k, 1, k^{-1})$$</span></p>
</blockquote>
<p>Our professor gave a hint that one should look at <span class="math-container">$||2k-a||$</span> and try to find a contradiction here.</p>
<p>So assuming that it converges to say <span class="math-container">$(a,b,c)$</span> we get <span class="math-container">$$||(2k,1,k^{-1})-(a,b,c)||.$$</span></p>
<p>Using the hint I found that <span class="math-container">$$||2k-a||=||(1,0,0)\cdot((2k,1,k^{-1})-(a,b,c))||.$$</span></p>
<p>From Cauchy-Schwartz it follows that <span class="math-container">$$||2k-a|| \leqslant||(1,0,0)||\ ||(2k,1,k^{-1})-(a,b,c)|| = ||(2k,1,k^{-1})-(a,b,c)||$$</span></p>
<p>Hence a contradiction, since <span class="math-container">$||2k-a||$</span> grows without bound as <span class="math-container">$k \to \infty.$</span></p>
<p>So, a couple of questions. First, I'm a bit confused by the notation here: should I have <span class="math-container">$|2k-a|$</span> instead of <span class="math-container">$||2k-a||$</span>? And more importantly, how should I come up with the intuition to look at <span class="math-container">$||2k-a||$</span> or <span class="math-container">$|2k-a|$</span> (whichever is the right notation)? I probably wouldn't have guessed this if I were doing this on an exam...</p>
| Batominovski | 72,152 | <p>Write <span class="math-container">$f(\pi):=\sum\limits_{i=1}^n\,\big|\pi(i)-i\big|$</span> for each <span class="math-container">$\pi\in S_n$</span>. Decompose a permutation <span class="math-container">$\pi\in S_n$</span> as a product of disjoint cycles <span class="math-container">$$\pi=\gamma_1\gamma_2\cdots\gamma_k\,.$$</span>
For each <span class="math-container">$j=1,2,\ldots,k$</span>, let <span class="math-container">$m(\gamma_j)$</span> and <span class="math-container">$M(\gamma_j)$</span> denote the smallest entry and the largest entry in the cycle <span class="math-container">$\gamma_j$</span>, respectively. Prove (using the Triangle Inequality for absolute values) that
<span class="math-container">$$f(\pi)\leq 2\,\sum_{j=1}^k\,\big(M(\gamma_j)-m(\gamma_j)\big)\,.\tag{*}$$</span>
Use the previous inequality to show that
<span class="math-container">$$\max_{\pi\in S_n}\,f(\pi)\leq 2\,\left(\sum_{j=\left\lceil \frac{n}{2}\right\rceil+1}^n\,j-\sum_{j=1}^{\left\lfloor \frac{n}{2}\right\rfloor}\,j\right)=2\left\lfloor\frac{n}{2}\right\rfloor\left\lceil\frac{n}{2}\right\rceil=2\left\lfloor\frac{n^2}{4}\right\rfloor=\left\lfloor\frac{n^2}{2}\right\rfloor\,.\tag{#}$$</span>
See the spoiler below if you want to see how to deduce (#) from (*).</p>
<blockquote class="spoiler">
<p> To deduce (#) from (*), we note the following. If we reorder <span class="math-container">$M(\gamma_1)$</span>, <span class="math-container">$M(\gamma_2)$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$M(\gamma_k)$</span> from the largest to the smallest and call the new sequence <span class="math-container">$M_1$</span>, <span class="math-container">$M_2$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$M_k$</span>, then <span class="math-container">$$M_j\leq n-j+1\text{ for }j=1,2,\ldots,k\,.$$</span> Similarly, if we reorder <span class="math-container">$m(\gamma_1)$</span>, <span class="math-container">$m(\gamma_2)$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$m(\gamma_k)$</span> from the smallest to the largest and call the new sequence <span class="math-container">$m_1$</span>, <span class="math-container">$m_2$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$m_k$</span>, then <span class="math-container">$$m_j\geq j\text{ for }j=1,2,\ldots,k\,.$$</span></p>
</blockquote>
<p>The inequality (#) becomes an equality if
<span class="math-container">$$\pi(i)=n-i+1$$</span>
for every <span class="math-container">$i=1,2,\ldots,n$</span>. The reversal is not the only maximizer, however: equality in (#) requires, for even <span class="math-container">$n$</span>, that <span class="math-container">$\pi$</span> interchange the sets <span class="math-container">$\Big\{1,2,\ldots,\frac{n}{2}\Big\}$</span> and <span class="math-container">$\Big\{\frac{n}{2}+1,\ldots,n\Big\}$</span>. That is, for even <span class="math-container">$n$</span> there are precisely <span class="math-container">$\left(\left(\frac{n}{2}\right)!\right)^2$</span> permutations <span class="math-container">$\pi\in S_n$</span> such that <span class="math-container">$f(\pi)$</span> is the maximum value <span class="math-container">$\left\lfloor\dfrac{n^2}{2}\right\rfloor$</span>; for odd <span class="math-container">$n$</span> the middle index <span class="math-container">$\left\lceil\frac{n}{2}\right\rceil$</span> gives extra freedom, so there are even more (for instance, three maximizers for <span class="math-container">$n=3$</span>).</p>
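<p>A brute-force check (my own sketch) confirms that <span class="math-container">$\left\lfloor n^2/2\right\rfloor$</span> is the maximum and is attained by the reversal for small <span class="math-container">$n$</span>:</p>

```python
from itertools import permutations

def f(perm):  # perm[i-1] == pi(i)
    return sum(abs(p - i) for i, p in enumerate(perm, start=1))

for n in range(1, 8):
    reversal = tuple(range(n, 0, -1))   # pi(i) = n - i + 1
    assert f(reversal) == n * n // 2
    assert max(f(p) for p in permutations(range(1, n + 1))) == n * n // 2
```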
|
2,211,075 | <p>I don't understand the following example from my math book.</p>
<p>Solve the equation <code>sin(theta) = -0.428</code> for <code>theta</code> in <code>radians</code> to 2 decimal places, where <code>0 <= theta <= 2PI</code>.</p>
<p>And this is the answer:</p>
<p><code>theta = -0.44 + 2PI = 5.84rad and theta = PI - (-0.44) = 3.58rad</code></p>
<p>I don't understand the part why we need to add <code>2PI</code> in the first answer and add <code>PI</code> in second answer?</p>
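<p>To see that both values actually solve the equation, here is a quick numeric check (my own, not from the book):</p>

```python
import math

theta_ref = math.asin(-0.428)       # principal value, about -0.44 rad
sol1 = theta_ref + 2 * math.pi      # shift the principal value into [0, 2*pi)
sol2 = math.pi - theta_ref          # the supplementary solution, also in [0, 2*pi)

assert abs(math.sin(sol1) - (-0.428)) < 1e-9
assert abs(math.sin(sol2) - (-0.428)) < 1e-9
assert round(sol1, 2) == 5.84
assert round(sol2, 2) == 3.58
```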
| Community | -1 | <p>If the limits exist and the denominator is nonzero, then</p>
<p>$$ \lim_{x \to a} \frac{ \frac{f(x) - f(a)}{x-a} }{g(x) }
= \frac{\lim_{x \to a} \frac{f(x) - f(a)}{x-a}}{\lim_{x \to a} g(x)} $$</p>
<p>and so you could conclude</p>
<p>$$ \lim_{x \to a} \frac{ \frac{f(x) - f(a)}{x-a} }{g(x) } = \frac{f'(a)} {\lim_{x \to a} g(x)}$$</p>
<p>But you are not in that situation, and that's not what you tried to conclude either.</p>
|
2,747,578 | <p>Let $S,T$ be sets with $|S|>|T|$ and let $R$ be a set of relations on $T$.<br>
Why, then, is $\langle S|-\rangle$ not isomorphic to $\langle T|R\rangle$?</p>
<p>This came up when I wanted to solve a different problem, <a href="https://math.stackexchange.com/q/2747383/506844">which I also asked on this site</a>. Unfortunately, the answers provided used a completely different strategy and I still wonder about how to prove this.</p>
<p>Intuitively it's clear, but I am looking for a rigorous proof.</p>
| Hagen von Eitzen | 39,174 | <p>For completeness, the following includes a proof (admittedly, using the axiom of choice) of the closely related fact that free groups are isomorphic only if they are over sets of the same cardinality. </p>
<p>Assume $\phi\colon\langle T\mid R\rangle\to \langle S\rangle $ is an isomorphism (or just an epimorphism). Together with the canonical projection $\pi\colon \langle T\rangle\to \langle T\mid R\rangle$, we obtain an onto homomorphism $\phi\circ\pi\colon \langle T\rangle\to \langle S\rangle$.
Let $V$ be an $\Bbb F_2$ vector space of dimension $|S|$. A bijection from $S$ to a basis of $V$ gives rise to an onto homomorphism $\psi\colon \langle S\rangle \to V$.
Then $\psi(\phi(\pi(T)))$ is a generating system of $V$, hence contains a basis of $V$. We conclude that
$$ |T|\ge |\psi(\phi(\pi(T)))|\ge \dim V=|S|,$$
contradicting the assumption that $|S|>|T|$.</p>
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>However, I would have expected the definition to be the other way round, i.e. with the implication reversed. The reason is that just looking at the metric space definition of continuity:</p>
<blockquote>
<p>$\forall \epsilon>0,\ \exists \delta >0,\ \forall x \in X,\ d(x,p) < \delta \implies d(f(x),f(p)) < \epsilon$</p>
</blockquote>
<p>seems to talk about balls (i.e. open sets) in X and then has a forward arrow pointing at open sets in Y, so it seems natural to expect the implication to go that way round. However, it does not. Why does it not go that way? What is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might even be confused about why the topological definition of continuity requires us to start from things in the target space Y and then require things in the domain. Can't we just map things from X to Y and have them be close? <strong>Why do we need to posit things about Y first in either definition for the definition of continuity to work properly</strong>?</p>
<hr>
<p>I can't help but point out that the question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems similar, but perhaps lacks the detailed discussion of the direction of the implication that I need to really understand why the definition is not reversed, or what happens if we do reverse it. The second answer there attempts to explain why we require $f^{-1}$ to preserve openness, but it's not conceptually obvious to me why that's the case or what's going on. Any help?</p>
<hr>
<p>For whoever suggests closing the question: the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point, <strong>explaining the difference between an open mapping and a continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so that's as far as my background in analysis goes; i.e., metric spaces are my place of understanding.</p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are suppose to map "nearby points to nearby points" so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? </p>
| Ittay Weiss | 30,953 | <p>The definition of continuity at a point $a$ for a function $f\colon A\to B$ (say between metric spaces) is: for all $\varepsilon >0$ there exists $\delta>0$ such that if $d(x,a)<\delta$, then $d(fx,fa)<\varepsilon$. Now, notice that the $\varepsilon$ is used for a condition in the codomain and the $\delta$ is used for a condition in the domain. So the order of quantification is: for all something in the codomain, there is a something in the domain such that blah blah blah. The topological definition of continuity reads: for all open in the codomain, the inverse image is open in the domain. This shows that in fact the variance in both definitions is the same: continuity of a function $f\colon A\to B$ means you can pull information back from $B$ to $A$. So, the contravariance in the definition of topological continuity is not anything you haven't seen in the metric definition already. You just always thought the metric definition is variant, but it was contravariant all the time. The topological formulation simply makes it unavoidable to notice. </p>
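<p>One can also see the asymmetry in a toy example: the identity from a Sierpiński-style topology to the discrete topology is an open map but not continuous. A small sketch (my own, finite spaces only):</p>

```python
X = {1, 2}
TX = [set(), {1}, {1, 2}]        # Sierpinski topology on X
TY = [set(), {1}, {2}, {1, 2}]   # discrete topology on Y = {1, 2}

f = {1: 1, 2: 2}                 # the identity map, as a dict

def preimage(f, U):
    return {x for x in f if f[x] in U}

def image(f, U):
    return {f[x] for x in U}

continuous = all(preimage(f, U) in TX for U in TY)
open_map = all(image(f, U) in TY for U in TX)

# identity from (X, Sierpinski) to (Y, discrete): open, but NOT continuous,
# because the preimage of the open set {2} is {2}, which is not open in TX
assert open_map and not continuous
```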
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>However, I would have expected the definition to be the other way round, i.e. the reverse of the implication I wrote above. The reason for that is that just looking at the metric-space definition of continuity:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about balls (i.e. open sets) in X and then has a forward arrow to open sets in Y, so it seems natural to expect the implication to go that way round. However, it does not. Why does it not go that way? What is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might even be confused about why the topological definition of continuity requires us to start from things in the target space Y and then require things in the domain. Can't we just map things from X to Y and require that they stay close? <strong>Why do we have to posit things about Y first in either definition for the definition of continuity to work properly</strong>?</p>
<hr>
<p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lacks the detailed discussion of the direction of the implication that I need to really understand why the definition is not reversed, or what happens if we do reverse it. The second answer there attempts to explain why we require $f^{-1}$ to preserve the property of openness, but it's not conceptually obvious to me why that's the case or what's going on. Any help?</p>
<hr>
<p>For whoever suggests closing the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point I noticed: pointing out <strong>the difference between an open mapping and a continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so that's as far as my background in analysis goes, i.e. metric spaces are my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail down what my main confusion is. In conceptual terms, continuous functions are supposed to map "nearby points to nearby points", so the metric-space definition makes sense to me in that light. However, that doesn't seem obvious unless we take "open sets" as the definition of "close by". Balls are open, but there are plenty of open sets that are not "close by", for example the union of two open balls. I think this is what is confusing me most. How does the topological definition respect that conceptual requirement? </p>
| John Bollinger | 404,964 | <blockquote>
<p>I would have expected the definition to be the other way round</p>
</blockquote>
<p>I take you to be proposing this:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f(U)$ is open for every open $U\subseteq X$</p>
</blockquote>
<p>But that does not serve. In particular, consider constant functions. Constant functions are among those that meet our expectations for continuity, and constant functions over metric spaces are in fact continuous by the metric-space definition of continuity. But if $f\colon X\to Y$ is a constant function and $V \subseteq X$ is nonempty then $f(V) = \{k\}$ for some $k \in Y$, and in many cases we care about, such singleton sets are closed, not open.</p>
<p>On the other hand, consider a constant function $f$ defined as above, and let $U\subseteq Y$ be open. The preimage $f^{-1}(U)$ of $U$ is either $\emptyset$ or $X$, which are both open by definition in every topology over $X$, so the definition you started with serves for this example.</p>
<p>On the third hand, consider $f\colon \mathbb R \to \mathbb R$ defined by $f(x) = -1$ if $x \lt 0$ and $f(x) = 1$ if $x \ge 0$. To demonstrate that it is discontinuous, choose, say, the open interval $\left(\frac{1}{2},\frac{3}{2}\right)$. The preimage of that open set is the <em>closed</em> set $\left[0,\infty\right)$.</p>
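<p>To see the jump concretely, here is a small numerical sketch (my addition, purely for illustration; the helper names are made up) checking that the preimage of $\left(\frac{1}{2},\frac{3}{2}\right)$ under this step function contains $0$ and points arbitrarily close to $0$ on the right, but nothing to the left of $0$, i.e. it behaves like the half-closed ray $\left[0,\infty\right)$:</p>

```python
def f(x):
    # the step function from the example: -1 for x < 0, 1 for x >= 0
    return -1.0 if x < 0 else 1.0

def in_preimage(x, lo=0.5, hi=1.5):
    # x lies in the preimage of (lo, hi) iff f(x) lands strictly inside it
    return lo < f(x) < hi

# points arbitrarily close to 0 from the right are all in the preimage...
right_points = [10.0 ** (-k) for k in range(1, 12)]
# ...while every sampled point to the left of 0 is not
left_points = [-x for x in right_points]

all_right_in = all(in_preimage(x) for x in right_points)
all_left_out = not any(in_preimage(x) for x in left_points)
zero_in = in_preimage(0.0)
```

<p>Since the preimage contains $0$ but no point to the left of $0$, it contains no neighborhood of $0$ and so cannot be open.</p>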
<p>More generally, the definition captures the idea of a point of discontinuity in the <em>range</em> of the function, and that should seem natural, because that's what you look for when visually inspecting the graph of a function for discontinuities.</p>
|
205,671 | <p>How would one go about showing that the polar version of the Cauchy-Riemann equations is sufficient for differentiability of a complex-valued function which has continuous partial derivatives? </p>
<p>I haven't found any proof of this online.</p>
<p>One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.</p>
<p>What are some other ways to do it?</p>
| James S. Cook | 36,530 | <p><em>I happen to have some notes on this question. What follows here is the usual approach, it's just multivariate calculus paired with the Cauchy Riemann equations. I have an idea for an easier way, I'll post it as a second answer in a bit if it works.</em></p>
<p>If we use polar coordinates to rewrite $f$ as follows:
$$ f(x(r,\theta),y(r,\theta)) = u(x(r,\theta),y(r,\theta))+iv(x(r,\theta),y(r,\theta)) $$
we use shorthands $F(r,\theta)=f(x(r,\theta),y(r,\theta))$ and $U(r,\theta )=u(x(r,\theta),y(r,\theta))$ and $V(r,\theta )=v(x(r,\theta),y(r,\theta))$. We derive the CR-equations in polar coordinates via the chain rule from multivariate calculus,
$$ U_r = x_ru_x + y_ru_y = \cos(\theta)u_x + \sin(\theta)u_y \ \
\text{and} \ \ U_{\theta} = x_{\theta}u_x + y_{\theta}u_y = -r\sin(\theta)u_x + r\cos(\theta)u_y $$
Likewise,
$$ V_r = x_rv_x + y_rv_y = \cos(\theta)v_x + \sin(\theta)v_y \ \
\text{and} \ \ V_{\theta} = x_{\theta}v_x + y_{\theta}v_y = -r\sin(\theta)v_x + r\cos(\theta)v_y $$
We can write these in matrix notation as follows:
$$ \left[ \begin{array}{l} U_r \\ U_{\theta} \end{array} \right] = \left[ \begin{array}{ll} \cos(\theta) & \sin(\theta) \\ -r\sin(\theta) & r\cos(\theta) \end{array} \right]\left[ \begin{array}{l} u_x \\ u_y \end{array} \right] \ \ \text{and} \ \
\left[ \begin{array}{l} V_r \\ V_{\theta} \end{array} \right] = \left[ \begin{array}{ll} \cos(\theta) & \sin(\theta) \\ -r\sin(\theta) & r\cos(\theta) \end{array} \right]\left[ \begin{array}{l} v_x \\ v_y \end{array} \right] $$
Multiply these by the inverse matrix: $\left[ \begin{array}{ll} \cos(\theta) & \sin(\theta) \\ -r\sin(\theta) & r\cos(\theta) \end{array} \right]^{-1} = \frac{1}{r}\left[ \begin{array}{ll} r\cos(\theta) & -\sin(\theta) \\ r\sin(\theta) & \cos(\theta) \end{array} \right]$ to find
$$ \left[ \begin{array}{l} u_x \\ u_y \end{array} \right] = \frac{1}{r}\left[ \begin{array}{ll} r\cos(\theta) & -\sin(\theta) \\ r\sin(\theta) & \cos(\theta) \end{array} \right]\left[ \begin{array}{l} U_r \\ U_{\theta} \end{array} \right] = \left[ \begin{array}{l} \cos(\theta)U_r - \tfrac{1}{r}\sin(\theta)U_{\theta} \\
\sin(\theta)U_r + \tfrac{1}{r}\cos(\theta)U_{\theta} \end{array} \right] $$
A similar calculation holds for $V$. To summarize:
$$ u_x = \cos(\theta)U_r - \tfrac{1}{r}\sin(\theta)U_{\theta} \ \ \ \ v_x = \cos(\theta)V_r - \tfrac{1}{r}\sin(\theta)V_{\theta} $$
$$ u_y =\sin(\theta)U_r + \tfrac{1}{r}\cos(\theta)U_{\theta} \ \ \ \ v_y =\sin(\theta)V_r + \tfrac{1}{r}\cos(\theta)V_{\theta} $$
The CR-equation $u_x=v_y$ yields:
$$ (A.) \ \ \cos(\theta)U_r - \tfrac{1}{r}\sin(\theta)U_{\theta} = \sin(\theta)V_r + \tfrac{1}{r}\cos(\theta)V_{\theta} $$
Likewise the CR-equation $u_y=-v_x$ yields:
$$ (B.) \ \ \sin(\theta)U_r + \tfrac{1}{r}\cos(\theta)U_{\theta} = -\cos(\theta)V_r + \tfrac{1}{r}\sin(\theta)V_{\theta}$$
Multiply (A.) by $r\sin(\theta)$ and $(B.)$ by $r\cos(\theta)$ and subtract (A.) from (B.):
$$ \boxed{U_{\theta} = -rV_r} $$
Likewise multiply (A.) by $r\cos(\theta)$ and $(B.)$ by $r\sin(\theta)$ and add (A.) and (B.):
$$ \boxed{rU_r = V_{\theta}} $$
Finally, recall that $z = re^{i\theta}=r(\cos(\theta)+i\sin(\theta))$ hence
\begin{align} \notag
f'(z) &= u_x+iv_x \\ \notag
&= (\cos(\theta)U_r - \tfrac{1}{r}\sin(\theta)U_{\theta})+i(\cos(\theta)V_r - \tfrac{1}{r}\sin(\theta)V_{\theta}) \\ \notag
&= (\cos(\theta)U_r + \sin(\theta)V_{r})+i(\cos(\theta)V_r - \sin(\theta)U_{r}) \\ \notag &= (\cos(\theta)- i\sin(\theta))U_r + i(\cos(\theta)-i\sin(\theta))V_r \\ \notag
&= e^{-i\theta}( U_r+iV_r) \notag
\end{align}</p>
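<p>As a quick sanity check on the boxed equations and the final formula (my addition, not part of the original derivation), one can verify them numerically for the test function $f(z) = z^2$, i.e. $U = r^2\cos 2\theta$ and $V = r^2\sin 2\theta$, for which $f'(z) = 2z$:</p>

```python
import cmath
import math

def U(r, th):
    # real part of z^2 in polar coordinates
    return r * r * math.cos(2 * th)

def V(r, th):
    # imaginary part of z^2 in polar coordinates
    return r * r * math.sin(2 * th)

h = 1e-6          # step for central finite differences
r, th = 1.3, 0.7  # an arbitrary sample point

Ur = (U(r + h, th) - U(r - h, th)) / (2 * h)
Uth = (U(r, th + h) - U(r, th - h)) / (2 * h)
Vr = (V(r + h, th) - V(r - h, th)) / (2 * h)
Vth = (V(r, th + h) - V(r, th - h)) / (2 * h)

# polar Cauchy-Riemann equations: U_theta = -r V_r and r U_r = V_theta
cr1_residual = abs(Uth + r * Vr)
cr2_residual = abs(r * Ur - Vth)

# final formula: f'(z) = e^{-i theta} (U_r + i V_r), expected to equal 2z here
z = r * cmath.exp(1j * th)
fprime_err = abs(cmath.exp(-1j * th) * (Ur + 1j * Vr) - 2 * z)
```
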
|
205,671 | <p>How would one go about showing that the polar version of the Cauchy-Riemann equations is sufficient for differentiability of a complex-valued function which has continuous partial derivatives? </p>
<p>I haven't found any proof of this online.</p>
<p>One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.</p>
<p>What are some other ways to do it?</p>
| user55789 | 200,806 | <p>I understand this topic is somewhat old, but I still feel like I can contribute to it with a derivation that is, in my eyes, much simpler.</p>
<p>The key is to understand the <em>goal</em>: we want to derive some analogue of the CR-equations, except in polar form. We therefore wish to relate $u_\theta$ with $v_r$ and $v_\theta$ with $u_r$.</p>
<p>Take the CR-equations in the $xy$-plane as an example and expand:
\begin{equation}
u_x = v_y \Leftrightarrow u_\theta \theta_x = v_r r_y.
\end{equation}
Now remember the definitions of polar coordinates and take the appropriate derivatives:
\begin{align*}
x = r\cos\theta \Rightarrow x_\theta = -r\sin\theta \Rightarrow \theta_x = \frac1{-r\sin\theta}\\
y = r\sin\theta \Rightarrow y_r = \sin\theta \Rightarrow r_y = \frac1{\sin\theta}.
\end{align*}
Now fill in the expressions into our expanded CR-equation to find
\begin{align*}
u_\theta \frac1{-r\sin\theta} = v_r \frac1{\sin\theta} \Rightarrow u_\theta = -r v_r.
\end{align*}
In a similar fashion we can also derive
\begin{equation}
u_r = \frac1r v_\theta.
\end{equation}</p>
|
2,214,030 | <p>$\mathbb{R}^{13}$ has two subspaces $S$ and $T$ such that $\dim(S)=7$ and $\dim(T)=8$. <br/></p>
<p>⒜ max dim (S∩T)=?<br/>
⒝ min dim (S∩T)=?<br/>
⒞ max dim (S+T)=?<br/>
⒟ min dim (S+T)=?<br/>
⒠ dim(S∩T) + dim (S+T)=?</p>
| Benjamin Dickman | 37,122 | <p>The absolute value is greater than <strong>or equal to</strong> zero.</p>
<p>The formal definition of a limit is written precisely to work outside of the case in which $x = a$.</p>
<p>E.g. consider an example of a function $f(x) = 0$ for all $ x\neq 0$, $f(0) = 1$. Because we are not considering $f(0)$ for $a=0$, we can talk about the limit $\lim_{x \rightarrow 0} f(x)$, which then exists and equals $L = 0$: For any $\varepsilon > 0$, we take $\delta = 1$. Then all $f(x)$ where $0 < |x| < 1$ have value $0$, because $f(0)$ is not considered. And so $|f(x) - L| = |0 - 0| = 0 < \varepsilon$ for all such $x$. We cannot make all $f(x)$ close to $L$, only all $f(x)$ for $x \neq a$. </p>
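<p>A small numerical illustration of this example (my addition): sampling points with $0 < |x| < \delta = 1$ but excluding $0$ itself, every value of $f$ is within $\varepsilon$ of $L = 0$, even though $f(0) = 1$ is not:</p>

```python
def f(x):
    # the example: f(x) = 0 for all x != 0, but f(0) = 1
    return 1.0 if x == 0 else 0.0

L = 0.0      # the claimed limit at a = 0
eps = 0.25   # any positive epsilon works here
delta = 1.0  # the delta chosen in the answer

# sample points with 0 < |x| < delta (note that x = 0 is excluded)
xs = [k / 1000.0 for k in range(-999, 1000) if k != 0]

limit_condition_holds = all(abs(f(x) - L) < eps for x in xs)
value_at_zero_is_far = abs(f(0.0) - L) >= eps
```
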
|
121,362 | <p>I have a set of sample time-series data below of monthly prices for two companies. </p>
<p>Q1. I want to calculate monthly and quarterly log returns. What is the most expedient way to do this? <code>TimeSeriesAggregate[]</code> only has the standard <code>Mean</code>, etc. </p>
<p>Q2. With the returns from Q1, what is the most expedient method to calculate the correlation of the monthly returns between the two companies?</p>
<p>Q3. How would it be possible to calculate six-monthly log returns and then create a series of overlapping 6-month log returns, so I can derive 7 overlapping 6-month outcomes from the limited dataset below; i.e. <code>[1m-6m, 2m-7m, 3m-8m, ...]</code> (and then calculate a correlation between these)?</p>
<pre><code>(data1 = {{Date, CompanyA, CompanyB}, {"16/01/2007", 3655,
1000}, {"16/02/2007", 3655, 1000}, {"16/03/2007", 3655,
1000}, {"16/04/2007", 3655, 1000}, {"16/05/2007", 3655,
1000}, {"16/06/2007", 3435, 1011}, {"16/07/2007", 3528,
1012}, {"16/08/2007", 3348, 1013}, {"16/09/2007", 3648,
1022}, {"16/10/2007", 3648, 1022}, {"16/11/2007", 3648,
1022}, {"16/12/2007", 3648, 1022}});
(data2 = MapAt[DateList[{#, {"Day", "Month", "Year"}}] &,
data1, {2 ;;, 1}]) // Grid
</code></pre>
<p>Thanks</p>
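<p>For concreteness, here is a rough sketch of the computations Q1-Q3 are asking for (written in plain Python purely as illustration, since the goal is the Mathematica equivalent; the helper names <code>log_returns</code> and <code>corr</code> are mine). A "1m-6m" return spans five monthly steps, $\ln(P_6/P_1)$, which is how 7 overlapping outcomes arise from 12 monthly prices:</p>

```python
import math

# monthly prices from the sample data above (CompanyA, CompanyB)
prices_a = [3655, 3655, 3655, 3655, 3655, 3435, 3528, 3348, 3648, 3648, 3648, 3648]
prices_b = [1000, 1000, 1000, 1000, 1000, 1011, 1012, 1013, 1022, 1022, 1022, 1022]

def log_returns(p, lag=1):
    # lag=1: monthly, lag=3: quarterly, lag=5: overlapping "1m-6m" style windows
    return [math.log(p[i] / p[i - lag]) for i in range(lag, len(p))]

def corr(x, y):
    # plain Pearson correlation
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

monthly_a = log_returns(prices_a)           # 11 monthly log returns
monthly_b = log_returns(prices_b)
rho_monthly = corr(monthly_a, monthly_b)    # Q2: correlation of monthly returns

six_month_a = log_returns(prices_a, lag=5)  # Q3: 7 overlapping 6-month windows
```
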
| Mr.Wizard | 121 | <p>I am late to see this question but here is a solution closely based on my answer to <a href="https://mathematica.stackexchange.com/q/7360/121">Creating a Sierpinski gasket with the missing triangles filled in</a>.</p>
<pre><code>tri[n_] :=
Table[{2 j - i, Sqrt[3] i}, {i, 0, n}, {j, i, n}] //
Partition[Riffle @@ #, 3, 1] & /@ Partition[#, 2, 1] &
</code></pre>
<p>Example of use:</p>
<pre><code>Map[{RandomColor[], Polygon@#} &, tri[5], {2}] // Graphics
</code></pre>
<p><a href="https://i.stack.imgur.com/Uhc9p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uhc9p.png" alt="enter image description here"></a></p>
<hr>
<h2>A different approach</h2>
<p>For some reason I found this problem unusually interesting so that even after "solving" it I was thinking about it. It occurred to me that the total number of triangles is $n^2$ therefore I wanted to make a function that could generate these from a call to <code>Array</code> rather than <code>Table</code>. (The latter permits non-rectangular indices as used in my first method.)</p>
<p>My method is to reflect the triangles that fall outside of the target back inside. </p>
<p><a href="https://i.stack.imgur.com/zavf5.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zavf5.gif" alt="enter image description here"></a></p>
<pre><code>fn[n_] := Array[fn, {n, n}]
fn[i_, j_] /; j > i := fn[j, i + 1, -1]
fn[x_, y_, s_: 1] :=
{ 2 x - y + {0, 1, 2}, Sqrt[3] {y, s + y, y} }\[Transpose] // Polygon
Map[{RandomColor[], #} &, fn[7], {2}] // Graphics
</code></pre>
<p><a href="https://i.stack.imgur.com/Hj4YL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hj4YL.png" alt="enter image description here"></a></p>
<ul>
<li>Note: by design every triangle is generated separately, which is not as efficient as my first approach, which generates entire rows in one operation.</li>
</ul>
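<p>For readers who don't speak Wolfram Language, the same reflect-back-inside construction can be transcribed into Python (my transcription, shown only to expose the vertex arithmetic; it reproduces the $n^2$ triangle count and the constant triangle area $\sqrt{3}$):</p>

```python
import math

S3 = math.sqrt(3)

def fn(i, j):
    # mirror of the Mathematica rules: indices with j > i are reflected
    # back inside as inverted (s = -1) triangles
    if j > i:
        x, y, s = j, i + 1, -1
    else:
        x, y, s = i, j, 1
    return [(2 * x - y, S3 * y),
            (2 * x - y + 1, S3 * (s + y)),
            (2 * x - y + 2, S3 * y)]

def triangles(n):
    return [fn(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]

def area(tri):
    # shoelace formula for a triangle
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

tris = triangles(7)
count = len(tris)                                     # should be 7^2 = 49
areas_ok = all(abs(area(t) - S3) < 1e-9 for t in tris)
```
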
<p>Keeping the coloration separate allows some interesting flexibility. Coloring sequentially provides a pleasing effect due to the order of generation.</p>
<pre><code>Module[{i = 0},
Map[{ColorData["Rainbow"][i++/144], #} &, fn[12], {2}] // Graphics
]
</code></pre>
<p><a href="https://i.stack.imgur.com/DmkuH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DmkuH.png" alt="enter image description here"></a></p>
<p>Color based on the array coordinates:</p>
<pre><code>Array[{Hue[##/400, #/7, #2/7], fn @ ##} &, {7, 7}] // Graphics
</code></pre>
<p><a href="https://i.stack.imgur.com/mJ50i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mJ50i.png" alt="enter image description here"></a></p>
|
121,362 | <p>I have a set of sample time-series data below of monthly prices for two companies. </p>
<p>Q1. I want to calculate monthly and quarterly log returns. What is the most expedient way to do this? <code>TimeSeriesAggregate[]</code> only has the standard <code>Mean</code>, etc. </p>
<p>Q2. With the returns from Q1, what is the most expedient method to calculate the correlation of the monthly returns between the two companies?</p>
<p>Q3. How would it be possible to calculate six-monthly log returns and then create a series of overlapping 6-month log returns, so I can derive 7 overlapping 6-month outcomes from the limited dataset below; i.e. <code>[1m-6m, 2m-7m, 3m-8m, ...]</code> (and then calculate a correlation between these)?</p>
<pre><code>(data1 = {{Date, CompanyA, CompanyB}, {"16/01/2007", 3655,
1000}, {"16/02/2007", 3655, 1000}, {"16/03/2007", 3655,
1000}, {"16/04/2007", 3655, 1000}, {"16/05/2007", 3655,
1000}, {"16/06/2007", 3435, 1011}, {"16/07/2007", 3528,
1012}, {"16/08/2007", 3348, 1013}, {"16/09/2007", 3648,
1022}, {"16/10/2007", 3648, 1022}, {"16/11/2007", 3648,
1022}, {"16/12/2007", 3648, 1022}});
(data2 = MapAt[DateList[{#, {"Day", "Month", "Year"}}] &,
data1, {2 ;;, 1}]) // Grid
</code></pre>
<p>Thanks</p>
| kglr | 125 | <p>Using the trick in <a href="https://mathematica.stackexchange.com/a/61807/125">this answer</a> to use <code>MeshFunctions</code> and <code>Dynamic</code> <code>MeshShading</code> with random colors:</p>
<pre><code>coloredTriangles = ParametricPlot[{x, y Sqrt[3] Min[x, 2 - x]}, {x, 0, 2}, {y, 0, 1},
MeshFunctions -> {Sqrt[3] # + #2 &, #2 - Sqrt[3] # &, #2 &},
Mesh -> # - 1, Exclusions -> None, ImageSize -> 200,
MeshShading -> Dynamic@{{{RandomColor[], RandomColor[]}, {RandomColor[],
RandomColor[]}}}, Frame -> False, Axes -> False] &;
</code></pre>
<p>Examples:</p>
<pre><code>Row[coloredTriangles /@ {3, 4, 6, 8}, Spacer[5]]
</code></pre>
<p><img src="https://i.stack.imgur.com/NGKTL.png" alt="Mathematica graphics"></p>
|
26,893 | <p>Does there exist a function $f \in C^2[0,\infty]$ (that is, $f$ is $C^2$ and has finite limits at $0$ and $\infty$) with $f''(0) = 1$, such that for any $g \in L^p(0,T)$ (where $T > 0$ and $1 \leq p < \infty$ may be chosen freely) we get
$$
\int_0^T \int_0^\infty \frac{u^2-s}{s^{5/2}} \exp{\left( -\frac{u^2}{2s} \right)} f(u) g(s) du ds = 0?
$$</p>
| Willie Wong | 1,543 | <p>I'm pretty sure the answer is no, there exists no such $f$. Here I first give a physical argument. </p>
<p>First define the function $K(u,s) = \frac{1}{\sqrt{s}} \exp (-u^2 / 2s )$. Up to some constant normalisation factors, this is the <a href="http://en.wikipedia.org/wiki/Heat_equation#Fundamental_solutions" rel="nofollow">Heat Kernel</a>. </p>
<p>Observe that, by an explicit calculation, your integral can be written as </p>
<p>$$ \int_0^T \int_0^\infty (\partial_{uu}^2K)(u,s)f(u)g(s) ~ du ~ ds $$</p>
<hr>
<p>Now, notice that the condition is linear. If $f$ and $\tilde{f}$ are solutions, then $af + b\tilde{f}$ is also a solution. Next, observe that using the fundamental theorem of calculus, $f(u) = \textrm{constant}$ is a solution: since $\partial_uK$ vanishes both at 0 and infinity. Therefore we can, without loss of generality, assume that the solution we are looking for has $f(0) = 0$. So we can extend $f$ continuously to the negative real line by setting $f(x) = 0$ whenever $x < 0$. Then using that $K$ is an even function:</p>
<p>$$ \int_0^{\infty} \partial_{uu}^2K(u,s)f(u) du = \int_{-\infty}^\infty \partial_{uu}^2K(0-u,s)f(u) du = \partial_{uu}^2 (K_s*f)(0) $$</p>
<p>where $K_s*f$ is the convolution of the heat kernel $K(u,s)$ against $f(u)$ extended to the whole real line. In other words, that is the evaluation of the second derivative of $K_s*f$ at the origin. </p>
<p>Now, $K_s*f$ is a solution to the heat equation with initial data at $s=0$ being $f$. So in particular, up to a constant factor, </p>
<p>$$ \partial_{uu}^2(K_s*f) = \partial_s(K_s*f)$$</p>
<p>So your desired integral condition, since you allow $g$ to be arbitrary, tells you that $K_s*f(0) = 0$ for all $s$. (Which is, in fact, basically what joriki wrote in his comment.) </p>
<hr>
<p>The condition that $f''(0) = 1$ tells you that the initial temperature fluctuation is non-zero near the origin. As heat is diffusive, to the left of the origin you have no heat content, to the right you start with some non-zero temperature arbitrarily close to the origin. So in arbitrary short time you should feel some heat at the origin. </p>
<hr>
<p>The mathematical argument follows: </p>
<p>Now, using joriki's comment, the problem reduces to considering constant-$s$ slices. Integrating by parts twice in $u$, we have that your condition implies</p>
<p>$$ \int_0^\infty K(u,s) f''(u) du + \frac{1}{\sqrt{s}} f'(0) = 0 $$</p>
<p>Taking $s\to 0$, the first term converges to some finite value which is non-zero by the assumption that $f''(0) = 1\neq 0$ and $f''$ is continuous. This gives a contradiction as, if $f'(0) = 0$ then the above equation would require $f''(0) = 0$. And if $f'(0) \neq 0$, the above equation shows that the integral $\int_0^\infty K(u,s) f''(u) du \nearrow \infty$ as $s\searrow 0$. </p>
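<p>The scaling behind this contradiction can also be checked numerically. The sketch below (my addition, not part of the original answer) evaluates the inner integral from the question for $f(u) = u^2/2$, used only as a local model with $f''(0) = 1$; it ignores the boundedness requirement, but only the behavior near $0$ matters as $s \searrow 0$. The substitution $u = \sqrt{s}\, x$ shows the integral equals $\int_0^\infty \frac{x^4 - x^2}{2} e^{-x^2/2}\, dx = \frac{\sqrt{2\pi}}{2}$ for every $s$, so it stays bounded away from $0$ instead of vanishing:</p>

```python
import math

def inner_integral(s, f, n_steps=50000):
    # trapezoid rule for  int_0^inf (u^2 - s) s^{-5/2} exp(-u^2/(2s)) f(u) du,
    # truncated at u = 10 sqrt(s); the Gaussian tail beyond that is negligible
    umax = 10.0 * math.sqrt(s)
    h = umax / n_steps
    total = 0.0
    for k in range(n_steps + 1):
        u = k * h
        w = 0.5 if k in (0, n_steps) else 1.0
        total += w * (u * u - s) * s ** -2.5 * math.exp(-u * u / (2 * s)) * f(u)
    return total * h

f = lambda u: u * u / 2.0               # f''(0) = 1
expected = math.sqrt(2 * math.pi) / 2   # about 1.2533, independent of s

values = [inner_integral(s, f) for s in (0.5, 0.1, 0.02)]
max_err = max(abs(v - expected) for v in values)
```
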
|
936,138 | <p>I need help approaching a proof which deals with inequalities:</p>
<p>If p and r are the precision and recall of a test, then the F1 measure of the test is
defined to be
$$F(p, r) = \frac{2pr}{p+r}$$</p>
<p>Prove that, for all positive reals p, r, and t, if t ≥ r then F(p, t) ≥ F(p, r)</p>
<p>What's the first step to approaching this problem? Do I need to look at this with different cases? </p>
| André Nicolas | 6,312 | <p>A standard way is to look at $F(p,t)-F(p,r)$, which is
$$\frac{2pt}{p+t}-\frac{2pr}{p+r}.$$
Bring to a common denominator and simplify. We get
$$\frac{2pt(p+r)-2pr(p+t)}{(p+t)(p+r)},$$
which simplifies to
$$\frac{2p^2(t-r)}{(p+t)(p+r)}.$$
This is clearly $\ge 0$. </p>
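<p>The algebra can be spot-checked numerically; here is a small sketch (my addition) comparing the direct difference against the simplified closed form over random positive inputs:</p>

```python
import random

random.seed(0)

def F(p, r):
    # the F1 measure: harmonic mean of precision p and recall r
    return 2 * p * r / (p + r)

max_form_gap = 0.0          # largest |direct - closed form| observed
min_direct = float("inf")   # smallest F(p, t) - F(p, r) observed

for _ in range(1000):
    p = random.uniform(0.01, 10.0)
    r = random.uniform(0.01, 10.0)
    t = r + random.uniform(0.0, 10.0)   # guarantees t >= r

    direct = F(p, t) - F(p, r)
    closed = 2 * p * p * (t - r) / ((p + t) * (p + r))

    max_form_gap = max(max_form_gap, abs(direct - closed))
    min_direct = min(min_direct, direct)
```
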
|
4,242,765 | <p>Let <span class="math-container">$X_k = 1$</span> with probability <span class="math-container">$0.5$</span> and <span class="math-container">$X_k = -1$</span> with probability 0.5, and let <span class="math-container">$X_k$</span> be independent random variables <span class="math-container">$k = (1,2,...,n)$</span>.</p>
<p>I was able to prove that the characteristic function <span class="math-container">$\Omega_X (\omega)$</span> of the random variable <span class="math-container">$\sum_{i=1}^n \frac{X_i}{\sqrt{n}}$</span> equals <span class="math-container">$\left(\cos{\frac{\omega}{\sqrt{n}}}\right)^n$</span> without too much trouble.</p>
<p>However, I'm struggling to prove the next part of the question, which asks me to prove that the characteristic function approaches $e^{-\frac{\omega^2}{2}}$ as $n$ approaches infinity.</p>
<p>There is a hint provided which says that I should take the logarithm of the characteristic function and then apply L'Hopital's rule, but I'm struggling at the LHOP part. I'm sure it's a weakness in my calc skills, but I seem to be stuck unfortunately. Any help would be much appreciated!</p>
<p>Problem is from Schaum's Outline of Probability and Statistics, 4th edition. P 3.74.</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$$\lim_{t \to 0} \frac {\ln (\cos (\omega \sqrt t))} t$$</span> <span class="math-container">$$=\lim_{t \to 0} \frac {-\omega \sin (\omega \sqrt t) \cdot \frac 1 2 t^{-1/2}} {\cos (\omega \sqrt t)}$$</span> <span class="math-container">$$=-\frac {\omega^{2}} 2$$</span> using the fact that <span class="math-container">$\frac {\sin x} x \to 1$</span> as <span class="math-container">$ x \to 0$</span>. Put <span class="math-container">$t=\frac 1n$</span> to see that the required limit is <span class="math-container">$e^{-\frac {\omega^{2}} 2}$</span>.</p>
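<p>A quick numerical sanity check of the limit (my addition): for large $n$, $\left(\cos\frac{\omega}{\sqrt{n}}\right)^n$ is already very close to $e^{-\omega^2/2}$:</p>

```python
import math

def char_fn(omega, n):
    # characteristic function of sum_i X_i / sqrt(n): (cos(omega / sqrt(n)))^n
    return math.cos(omega / math.sqrt(n)) ** n

omega = 1.5
target = math.exp(-omega ** 2 / 2)  # the claimed limit
gap = abs(char_fn(omega, 10 ** 6) - target)
```
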
|