| qid | question | author | author_id | answer |
|---|---|---|---|---|
712 | <p>I read <a href="https://math.stackexchange.com/questions/625/why-is-the-derivative-of-a-circles-area-its-perimeter-and-similarly-for-spheres">this question</a> the other day and it got me thinking: the area of a circle is $\pi r^2$, which differentiates to $2 \pi r$, which is just the perimeter of the circle. </p>
<blockquote>
<p>Why doesn't the same thing happen for squares? </p>
</blockquote>
<p>If we start with the area formula for squares, $l^2$, this differentiates to $2l$ which is sort of right but only <em>half</em> the perimeter. I asked my calculus teacher and he couldn't tell me why. Can anyone explain???</p>
| Larry Wang | 73 | <p>Actually, it is also true for squares (and for regular polygons in general!). The problem you ran into is what the <a href="http://en.wikipedia.org/wiki/Apothem" rel="noreferrer">equivalent of "r"</a> is. The side length of a square is actually more comparable to the circle's diameter.</p>
<p>Instead, the correct analogue of the circle's radius is the distance from the center of the square to the midpoint of one side, which is only half as long as the square's side.</p>
<p><img src="https://i.imgur.com/FIoOK2h.png" alt="alt text"> </p>
<p>Here, we have $A = (2r)^2 = 4 r^2$ and $P = 4 (2r) = 8 r$. The perimeter is the derivative of the area with respect to $r$, just as in the case of a circle.</p>
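As a quick sanity check of this identity (an illustrative sketch, not part of the original answer; function names are my own), one can compare a numerical derivative of $A(r)=4r^2$ with $P(r)=8r$:

```python
# Numerical check: with r the apothem of the square, A(r) = (2r)^2 = 4r^2
# and P(r) = 4(2r) = 8r, so dA/dr should equal P.

def area(r):
    return 4 * r * r          # A = (2r)^2

def perimeter(r):
    return 8 * r              # P = 4 * (2r)

def dA_dr(r, h=1e-6):
    # central-difference approximation of the derivative of A at r
    return (area(r + h) - area(r - h)) / (2 * h)

r = 3.0
print(dA_dr(r), perimeter(r))   # both ~24.0
```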
|
786,187 | <p>I can't seem to figure this one out:</p>
<p>$\int \frac{t^5}{\sqrt{t^2+2}}dt$</p>
<p>$t^5 / \sqrt{t^2+2}$</p>
<p>I know I need to substitute t for a trigonometric function, $\tan$ I think</p>
<p>Any hints are greatly appreciated!</p>
| Samrat Mukhopadhyay | 83,973 | <p>Let $$I_n=\int \frac{t^n}{\sqrt{t^2+a^2}} dt$$ Put $t=a\tan u$ to get $$I_n=a^{n}\int {\tan^nu}\ {\sec u\, du}$$ If $n$ is odd then, writing $\tan^n u=(\sec^2u-1)^{(n-1)/2}\tan u$ and substituting $z=\sec u$ (so $dz=\sec u\tan u\,du$), you will get $$I_n=a^n\int (z^2-1)^{(n-1)/2}dz$$ which you can then expand and integrate. </p>
<p>If $n$ is even then, writing $J_n=\int \tan^n u\,\sec u\ du$ (so that $I_n=a^nJ_n$), integration by parts gives $$J_n=\tan^{n-1}u\sec u-(n-1)\int \tan ^{n-2}u \sec^3u\ du=\tan^{n-1}u\sec u-(n-1)(J_{n-2}+J_n),$$ hence $nJ_n=\tan^{n-1}u\sec u-(n-1)J_{n-2}$.</p>
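For the concrete integral in the question ($n=5$, $a=\sqrt2$), carrying the odd-$n$ substitution through with $z=\sec u=\sqrt{t^2+2}/\sqrt2$ gives, after expanding and restoring $t$, the antiderivative $F(t)=\frac{(t^2+2)^{5/2}}{5}-\frac{4(t^2+2)^{3/2}}{3}+4\sqrt{t^2+2}$. A numerical sketch (names my own) checks that $F'(t)=t^5/\sqrt{t^2+2}$:

```python
import math

# Antiderivative obtained from the odd-n case with a^2 = 2, n = 5:
# F(t) = (t^2+2)^(5/2)/5 - 4(t^2+2)^(3/2)/3 + 4*sqrt(t^2+2)
def F(t):
    s = math.sqrt(t * t + 2)
    return s**5 / 5 - 4 * s**3 / 3 + 4 * s

def integrand(t):
    return t**5 / math.sqrt(t * t + 2)

# central difference: F'(t) should match the integrand
t, h = 1.3, 1e-6
approx = (F(t + h) - F(t - h)) / (2 * h)
print(approx, integrand(t))   # the two values agree
```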
|
4,340,560 | <p>I have to teach limits to infinity of real functions of one variable.
I would like to start my course with a beautiful example, not simply a basic function like <span class="math-container">$1/x.$</span> For instance, I thought of using the functions linked to the propagation of covid-19 and show that, under the basic model, the number of contaminations will go to <span class="math-container">$0$</span> when time goes to <span class="math-container">$+\infty.$</span> However, this is a bad idea because the model is not so easy to explain and moreover students are sick of covid-subjects.</p>
<p>Hence, I ask you some help to find interesting examples from physics, geography, etc ... I suppose that an example with "time" going to <span class="math-container">$+\infty$</span> would be nice.</p>
| Ahmed zeribi | 716,266 | <p>"John Napier" and how he got to "e=2.7182818284..." might be a good real-life limit.
(<a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/E_(mathematical_constant)</a> )</p>
|
4,340,560 | <p>I have to teach limits to infinity of real functions of one variable.
I would like to start my course with a beautiful example, not simply a basic function like <span class="math-container">$1/x.$</span> For instance, I thought of using the functions linked to the propagation of covid-19 and show that, under the basic model, the number of contaminations will go to <span class="math-container">$0$</span> when time goes to <span class="math-container">$+\infty.$</span> However, this is a bad idea because the model is not so easy to explain and moreover students are sick of covid-subjects.</p>
<p>Hence, I ask you some help to find interesting examples from physics, geography, etc ... I suppose that an example with "time" going to <span class="math-container">$+\infty$</span> would be nice.</p>
| Piquito | 219,998 | <p>COMMENT. You can look up examples of mathematical modeling (there are many!) and then give them the appropriate tone for your purposes. For example, suppose that a severe drought has eliminated all mammals of a certain species in a region, and that a board of ecologists wants to repopulate the area with these mammals. To do so, they introduce <span class="math-container">$20$</span> of the animals (males and females, of course) into the affected region.
Suppose that the following model has been established for the corresponding reproduction giving the number <span class="math-container">$N$</span> of animals as a function of time <span class="math-container">$t$</span>
<span class="math-container">$$N(t)=\frac{20+7t}{1+0.02t}$$</span>
According to the level of your students, several questions could be asked before computing the limit as <span class="math-container">$t$</span> tends to infinity: for instance, finding the value of <span class="math-container">$N$</span> at <span class="math-container">$t = 0$</span>, or noting that many of the values are not integers, in which case the fractional part must be dropped. But the fundamental clarification has to be that a mathematical model is usually built for ideal conditions (no severe floods, wars, fires or anything like that).</p>
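The limit itself is easy to exhibit numerically (a sketch, not part of the original comment): since the leading coefficients are $7$ and $0.02$, $N(t)\to 7/0.02=350$ as $t\to+\infty$.

```python
def N(t):
    # population model from the answer: N(t) = (20 + 7t) / (1 + 0.02t)
    return (20 + 7 * t) / (1 + 0.02 * t)

# the horizontal asymptote is the ratio of leading coefficients: 7 / 0.02 = 350
for t in (0, 10, 100, 10_000, 1_000_000):
    print(t, N(t))
# N(0) = 20, and N(t) approaches 350 as t grows
```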
|
3,009,351 | <p>Consider the vectors <span class="math-container">$q_1=(1,1,1)$</span> and <span class="math-container">$q_3=(1,1,-2)$</span>. I need to find a third vector <span class="math-container">$q_2$</span> such that <span class="math-container">$\{q_1,q_2,q_3\}$</span> is an orthogonal basis for <span class="math-container">$\mathbb{R}^3$</span>. </p>
<p>My problem is the following: I did take <span class="math-container">$v=(1,0,0)$</span> and I did verify that <span class="math-container">$\{q_1,q_3,v\}$</span> is a basis for <span class="math-container">$\mathbb{R}^3$</span>. Then I did take <span class="math-container">$$q_2=v-\langle v|q_1\rangle q_1-\langle v|q_3\rangle q_3=(-1,-2,1)$$</span></p>
<p>And, by Gram-Schmidt process, <span class="math-container">$q_2$</span> must be orthogonal to <span class="math-container">$q_1$</span> and <span class="math-container">$q_3$</span>. But, as we can see, it does not happen. So, where is my mistake?</p>
| user | 505,767 | <p><strong>HINT</strong></p>
<p>Since <span class="math-container">$q_1$</span> and <span class="math-container">$q_3$</span> are orthogonal it suffices to find <span class="math-container">$q_2$</span> by</p>
<p><span class="math-container">$$q_2=q_1\times q_3$$</span></p>
<p>As an alternative by GS we have</p>
<p><span class="math-container">$$q_2=v-\langle v|\hat q_1\rangle \hat q_1-\langle v| \hat q_3\rangle \hat q_3=(1,0,0)-\frac13(1,1,1)-\frac16(1,1,-2)=\left(\frac12,-\frac12,0\right)$$</span></p>
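Both routes can be checked numerically (an illustrative sketch with hand-rolled helpers; note the Gram-Schmidt route divides by $\langle q_i,q_i\rangle$, which is exactly the normalization the question omitted):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q1 = (1, 1, 1)
q3 = (1, 1, -2)

# route 1: the cross product is orthogonal to both factors
q2_cross = cross(q1, q3)                      # (-3, 3, 0)

# route 2: Gram-Schmidt on v = (1, 0, 0) with properly normalized projections
v = (1, 0, 0)
q2_gs = tuple(v[i] - dot(v, q1)/dot(q1, q1)*q1[i]
                   - dot(v, q3)/dot(q3, q3)*q3[i] for i in range(3))

print(q2_cross, q2_gs)                        # (-3, 3, 0)  (0.5, -0.5, 0.0)
print(dot(q2_cross, q1), dot(q2_cross, q3))   # both 0
```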
|
366,103 | <p>As a part of my current homework assignment, I am to derive the first variation of energy identity. Working out the problem with my friends, we came to exactly the same argument as presented in <a href="http://www-personal.umich.edu/%7Ewangzuoq/635W12/Notes/Lec%2024.pdf" rel="noreferrer">these notes</a> (I have cut out some irrelevant parts from the presentation there, but kept the explanation of terminology and notation; at any rate, I am just using the notes to save myself the work of typing the whole thing).</p>
<blockquote>
<p><img src="https://i.stack.imgur.com/5kGWk.png" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/EdC0u.png" alt="enter image description here" /></p>
</blockquote>
<p>I understand every step except the second:</p>
<blockquote>
<p><span class="math-container">$\color{blue}{\bf \large (3)}$</span> <span class="math-container">$\nabla$</span> is a metric connection and <span class="math-container">$\langle\,\cdot\,,\,\cdot\,\rangle$</span> is symmetric</p>
<p><span class="math-container">$\color{blue}{\bf \large (4)}$</span> <span class="math-container">$\nabla$</span> is torsion-free (this is mentioned above: "where in the fourth equality...")</p>
<p><span class="math-container">$\color{blue}{\bf \large (5)}$</span> <span class="math-container">$\nabla$</span> is a metric connection</p>
<p><span class="math-container">$\color{blue}{\bf \large (6)}$</span> fundamental theorem of calculus</p>
</blockquote>
<p><strong>What is the justification for step <span class="math-container">$\color{red}{\bf (2)}$</span>?</strong></p>
<p>We have an ordinary scalar function on <span class="math-container">$[a,b]\times (-\epsilon,\epsilon)$</span>, which for convenience let's name <span class="math-container">$h$</span>:
<span class="math-container">$$h(t,s)=\langle \dot{\gamma}_s(t),\dot{\gamma}_s(t) \rangle.$$</span>
We take the partial derivative of <span class="math-container">$h$</span> w.r.t. <span class="math-container">$s$</span>, which is again a scalar function on <span class="math-container">$[a,b]\times(-\epsilon,\epsilon)$</span>. Now, I'm fine with the equality
<span class="math-container">$$\dot{\gamma}_s(t)=\frac{\partial f}{\partial t},$$</span>
but how exactly does the differentiation <span class="math-container">$\dfrac{\partial}{\partial s}$</span> get turned into <span class="math-container">$\nabla_{\tfrac{\partial f}{\partial s}}$</span>?</p>
| Yuri Vyatkin | 2,002 | <p><strong>Update</strong>. Apparently the question is now clear to the OP, but anyway I've decided to summarize the discussion that occurred here for those who are not interested in reading the whole lot.</p>
<p>As the OP commented below @Ted's answer,</p>
<blockquote>
<p>But isn't $\nabla$ the connection on M, whereas $\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial s}\rangle$ is still a scalar function on $[a,b]\times(-\epsilon,\epsilon)$? How can we apply $\nabla$ to it?</p>
</blockquote>
<p>After some thought I've realized that we all tried to justify the step (2) which turned out to be wrong and unnecessary. In fact, that comment reveals that merely the domain of the derivative is not correct.</p>
<p>The calculation is still fine despite the problem with the second step, because one simply has to use the fact that
$$
\frac{d}{d s} \langle X, Y \rangle = \langle D_s X, Y \rangle + \langle X, D_s Y \rangle
$$
where $X$ and $Y$ are vector fields along a curve with parameter $s$, and $D_s$ is the covariant derivative along the curve (see e.g. Lemma 5.2 of J.M. Lee's "Riemannian Manifolds", p. 67). Now I think that Proposition 2.2 in M.P. do Carmo's "Riemannian Geometry", p. 50, states this fact in an even more suitable form, because in Lee's book we have to go back to Lemma 4.9 on p. 57, part (c), which implies that if $X$ is an extendible vector field along the curve then for any extension $\widetilde{X}$ of this field we have
$$
D_s X(s) = \nabla_{\frac{\partial f}{\partial s}} \widetilde{X}
$$
<em>along the curve</em>.</p>
<p>I keep the text below for the sake of completeness.</p>
<hr>
<p>I would say that $\frac{\partial}{\partial s}$ at $s=0$ is the same as the variation vector $V$, and for a scalar $h$ we can say that $V\,h = \nabla_V h$</p>
<p><strong>Edit</strong>. Let me make my remark more convincing.</p>
<p>The partial derivative of a scalar $h(t,s)$ w.r.t. $s$ is precisely the covariant derivative of $h$ along the curve $p_t (s)$ (for each $t$ fixed). This is just because covariant derivatives of functions (scalars) are the same as the directional derivatives (as @Ted correctly pointed), and they are the same as the Lie derivatives, and the exterior... - they all agree on functions.</p>
<p>By definition, the variation vector field $V$ is the pushforward of the coordinate vector field $\frac{\partial}{\partial s}$, that is
$$
V := \mathrm{d}f \left( \frac{\partial}{\partial s} \right) = \frac{\partial f}{\partial s}
$$
so we have
$$
\frac{\partial}{\partial s} h = \mathrm{D}_s h = \nabla_V h
$$</p>
<p>Here is the picture:
<img src="https://i.stack.imgur.com/RRXQh.png" alt="Variation of a curve"></p>
|
1,326,652 | <p>What would be the nature of the roots of the equation $$2x^2 - 2\sqrt{6} x + 3 = 0$$</p>
<p>My book says that since the discriminant is 0, the roots are rational and equal.
But discriminant can be used for determining the nature of roots only when the roots are rational numbers. Is the answer in the book wrong because actually the nature of roots should be irrational?</p>
| Hagen von Eitzen | 39,174 | <p>The discriminant can always be used. Also, your suspicion that the roots are irrational does not conflict with the claim that they are real.
Note that
$$ 2x^2-2\sqrt 6x+3=(\sqrt 2 x-\sqrt 3)^2$$</p>
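A quick numerical check (illustrative, not part of the answer): the discriminant $(2\sqrt6)^2-4\cdot 2\cdot 3=24-24$ vanishes, and the double root $\sqrt6/2$ is irrational.

```python
import math

# 2x^2 - 2*sqrt(6)x + 3 = 0
a, b, c = 2.0, -2.0 * math.sqrt(6), 3.0
disc = b * b - 4 * a * c
root = -b / (2 * a)                # the double root, sqrt(6)/2 ~ 1.2247...

print(disc)                        # ~0 (up to floating-point rounding)
print(root, math.sqrt(6) / 2)
print(a * root**2 + b * root + c)  # ~0: the root satisfies the equation
```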
|
475,267 | <p>Suppose $f(0) =0 $ and $0<f''(x)<\infty (\forall$ $x>0)$, then $\frac{f(x)}{x}$ strictly increases as $x$ increases. </p>
<p>I have shown that $f'(x)-\frac{f(x)}{x} = \frac{1}{2}xf''(c)$, for some $c\in (0,x)$. How do I proceed from here? </p>
| Alexy Vincenzo | 33,821 | <p>Consider $g(t)=tf'(t)-f(t)-\frac{1}{2}t^2A $ defined on $[0,x]$, where $A=2\frac{xf'(x)-f(x)}{x^2}$. Then $g(0)=g(x)=0$. By Rolle's theorem, $\exists c\in(0,x)$ such that $g'(c)=0$. It follows that $cf''(c)+f'(c)-f'(c)-cA=0$ and upon simplification we get $f'(x)-\frac{f(x)}{x}=\frac{1}{2}xf''(c)$. To conclude, note that $\left(\frac{f(x)}{x}\right)'=\frac{f'(x)-\frac{f(x)}{x}}{x}=\frac{f''(c)}{2}>0$ for $x>0$, so $\frac{f(x)}{x}$ is strictly increasing.</p>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| George Frank | 30,674 | <p>I hope Von Neumann was one of the very few of us who realized that remembering and familiarity are not at all the same as understanding. Understanding means that we see the need, the origin, the purpose of definitions, the REASON a definition is made. The a priori purpose. And it means that when we encounter a "theorem" that reveals a property of a mathematical object, that it is identified as such. Mathematics is NOT an abstraction of reality. It is very MUCH a PART of reality. Because it is created and used to model reality does not make it out of this world. There IS no other world that is not fictitious. Mathematics is a thing in itself. Telling us that it's "abstract" and about "reasoning" is terribly misinforming. For example, geometry is ABOUT the existence and quantitative properties of plane and solid closed figures, usually bounded by straight line segments; but not necessarily. Circular and other shaped sides are perfectly possible and just as real. The emphasis on proof is obscuring and debilitating in all areas of mathematics. Proof is unique to mathematics but it is NOT what mathematics is ABOUT. It is interesting to wonder WHY proof is possible in mathematics.</p>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| Carlos Medina | 4,698 | <p>I would just like to add a reformulation, an isomorphism if you will, of von Neumann’s quote, in the shape of a story/anecdote, by a rather perspicacious fella that goes by the name of John Cleese, who, from what I have read so far of his autobiography, is either a professional thief, a bible salesman, a professional soccer player, or one of the founders of a legendary comedic troupe who's gone totally insane... but, I digress... Here's the anecdote in question:</p>
<p>“[…] I encountered the teacher who made the greatest impression on me: Mr. Bartlett. He became my maths master, and during the first term he taught me, I have to confess that I understood next to nothing. But when he taught me the same things next term, I grasped them instantly: they had become self-evident. So I was moved up a form, where Mr. Bartlett introduced me to new mathematical ideas, all of them incomprehensible—until the following term when they became blindingly obvious, and I assimilated them effortlessly. Promotion, in other words, was followed by bewilderment, and the next term, by full comprehension. Mr. Bartlett was a very good teacher.”</p>
<p>Cleese, John. So, Anyway... (p. 47). Crown/Archetype. Kindle Edition. </p>
|
1,578,940 | <p>I am asked to find $[T]_{\alpha}^{\alpha}$ for linear map $T: M_{2\times 2} \rightarrow M_{2\times 2}$ where $\alpha$ is the standard basis and </p>
<p>$T(x) =
\begin{bmatrix}
1 & 1 \\
0 & -1
\end{bmatrix}
x$</p>
<p>How can I approach this... I tried applying T to every vector in the standard basis and then decomposing the result in terms of the standard basis, but this yields a $4\times 4$ matrix and I am totally lost.</p>
<p>Any help?</p>
| seeker | 267,945 | <p>Hint :-</p>
<p>First of all since $M_{2\times 2}$ is $4$ dimensional hence the matrix for $T$ must be a $4\times 4$ matrix. So if we denote by $E_{ij}$ the basis of $M_{2\times 2}$, where $E_{ij}$ is $2\times 2 $ matrix having $1$ as the $ij$-th entry and the rest of the entry is $0$, then </p>
<p>$T(E_{11})$ = $\begin{bmatrix}
1 & 1 \\
0 & -1
\end{bmatrix}$.$\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}$ = $\begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix} $ = $1.E_{11}+0.E_{12}+0.E_{21}+0.E_{22}$ and hence the first column of the matrix of $T$ contains $1,0,0,0$ in that order from top to bottom.</p>
<p>Similarly, find $T(E_{12})$, etc., write it as a linear combination of $E_{ij}$'s and write the matrix of $T$.</p>
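The same computation can be carried out for all four basis matrices at once (a sketch with hand-rolled helpers; coordinates are taken in the basis $E_{11},E_{12},E_{21},E_{22}$, flattened row by row):

```python
# Build the 4x4 matrix of T(X) = M @ X on M_{2x2}.
M = [[1, 1], [0, -1]]

def matmul2(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def flatten(A):
    # coordinates of A in the basis E11, E12, E21, E22
    return [A[0][0], A[0][1], A[1][0], A[1][1]]

basis = []
for i in range(2):
    for j in range(2):
        E = [[0, 0], [0, 0]]
        E[i][j] = 1
        basis.append(E)

# the columns of [T] are the coordinates of T(E_ij)
cols = [flatten(matmul2(M, E)) for E in basis]
T = [[cols[j][i] for j in range(4)] for i in range(4)]
for row in T:
    print(row)
# [1, 0, 1, 0] / [0, 1, 0, 1] / [0, 0, -1, 0] / [0, 0, 0, -1]
```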
|
303,944 | <p>Interesting problem I spotted while learning:</p>
<blockquote>
<p>Let <span class="math-container">$X=\left\{1,..,n\right\}$</span>. We randomly select a subset of <span class="math-container">$X$</span> and name it <span class="math-container">$A$</span>. Each subset is equally likely.</p>
<p>a) Find the expected value of the sum of elements of A.</p>
<p>b) Find the expected value of the sum of elements of A, on condition that it has <span class="math-container">$k$</span> elements.</p>
</blockquote>
<p>a) I think I know how to solve a). If each subset is selected with the same probability, then I think it is equivalent to selecting each element of <span class="math-container">$X$</span> with probability <span class="math-container">$\frac{1}{2}$</span>. So, using indicators, we get that the expected value we are looking for is <span class="math-container">$\frac{n(n+1)}{4}$</span>. But I can't find any rigorous argument why it is equivalent to selecting each element with probability <span class="math-container">$1/2$</span>.</p>
<p>b) A small observation with <span class="math-container">$k=1$</span> (each element selected with probability <span class="math-container">$1/n$</span>) and <span class="math-container">$k=n$</span> (each element selected with probability <span class="math-container">$1$</span>) gives me the feeling that the approach from a) can be used with probability <span class="math-container">$k/n$</span>, and then the result is <span class="math-container">$\frac{k(n+1)}{2}$</span>. But it is much less intuitive than the observation in a). No idea how to prove this. Can anyone help?</p>
| Ewan Delanoy | 15,381 | <p>All your guesses are correct. What happens here is that you initially work
on the probability space $\Omega={\cal P}(X)$, the set of all subsets of $X$.</p>
<p>On that probability space, you can define the random variable $V_i$, equal to $1$
if $i\in A$ and $0$ otherwise.</p>
<p>Denote by $E_i=\lbrace A \in \Omega | i \in A \rbrace$ and
$N_i=\lbrace A \in \Omega | i \not\in A \rbrace$. Then $E_i$ and $N_i$ have the same number of elements (indeed $A \mapsto \lbrace i \rbrace \cup A$ is a bijection from $N_i$ to $E_i$) so it is equally likely that $A$ contains $i$ or not :
$P(V_i=0)=P(V_i=1)=\frac{1}{2}$. This justifies your “a)”.</p>
|
303,944 | <p>Interesting problem I spotted while learning:</p>
<blockquote>
<p>Let <span class="math-container">$X=\left\{1,..,n\right\}$</span>. We randomly select a subset of <span class="math-container">$X$</span> and name it <span class="math-container">$A$</span>. Each subset is equally likely.</p>
<p>a) Find the expected value of the sum of elements of A.</p>
<p>b) Find the expected value of the sum of elements of A, on condition that it has <span class="math-container">$k$</span> elements.</p>
</blockquote>
<p>a) I think I know how to solve a). If each subset is selected with the same probability, then I think it is equivalent to selecting each element of <span class="math-container">$X$</span> with probability <span class="math-container">$\frac{1}{2}$</span>. So, using indicators, we get that the expected value we are looking for is <span class="math-container">$\frac{n(n+1)}{4}$</span>. But I can't find any rigorous argument why it is equivalent to selecting each element with probability <span class="math-container">$1/2$</span>.</p>
<p>b) A small observation with <span class="math-container">$k=1$</span> (each element selected with probability <span class="math-container">$1/n$</span>) and <span class="math-container">$k=n$</span> (each element selected with probability <span class="math-container">$1$</span>) gives me the feeling that the approach from a) can be used with probability <span class="math-container">$k/n$</span>, and then the result is <span class="math-container">$\frac{k(n+1)}{2}$</span>. But it is much less intuitive than the observation in a). No idea how to prove this. Can anyone help?</p>
| passerby51 | 7,202 | <p>Regarding your first question, note that a subset has an equivalent representation as a binary sequence. For example, for $n = 4$, the subset $\{1,3\}$ can be identified with $(1,0,1,0)$. Now, picking a subset uniformly at random, is like picking a sequence uniformly at random out of the $2^n$ possibilities. You should be able to argue the rest.</p>
<p>Regarding your 2nd question, let $X_i$ be the indicator that $i$ is present in the random subset (i.e., the $i$-th position in the binary representation above is $1$). Then, you want
$$
E\Big[ \sum_{i=1}^n i X_i \Big| \sum_{i=1}^n X_i = k\Big] = \sum_{i=1}^n i\, E\Big[X_i \Big| \sum_j X_j = k\Big]
$$
Now, you can use symmetry. Let $a_i :=E[X_i \Big| \sum_j X_j = k]$ (which is the conditional probability of choosing the $i$). Then all $a_i$ should be equal and $\sum_i a_i = k$ (why?), from which it follows that $a_i = k/n$. This give you the answer that you have, and also shows that the conditional probability of choosing the $i$ is $k/n$ for every $i$. </p>
|
2,607,449 | <p>I'm trying to get intuition about why gradient is pointing to the direction of the steepest ascent. I got confused because I found that directional derivative is explained with help of gradient and gradient is explained with help of directional derivative.</p>
<p>Please explain what are the exact steps that lead from
directional derivative defined by the limit $\nabla_{v} f(x_0) = \lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h$ to directional derivative defined as dot product of gradient and vector $\nabla_{v} f(x_0) = \nabla f(x_0)\cdot{v}$ ?</p>
<p>In other words how to prove the following? $$\lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h = \nabla f(x_0)\cdot{v}$$</p>
| Jean-Philippe Burelle | 521,944 | <p>This is really a linear algebra question.</p>
<p>You can show that the directional derivative depends linearly on the direction vector $v$, that is, it satisfies the relation:</p>
<p>$$\nabla_{a v + b u} f = a\,\nabla_v f + b\,\nabla_u f.$$</p>
<p>For scalars $a,b$ and vectors $v,u$. The gradient is the vector of partial derivatives, which in turn are just the directional derivatives in the direction of the basis vectors : $\frac{\partial}{\partial x_k} f =\nabla_{e_k} f$. Now, writing $v$ in the canonical basis $v = v_1 e_1 + \dots + v_n e_n$, by the linearity above :</p>
<p>$$\nabla_{v} f = v_1 \frac{\partial}{\partial x_1} f + \cdots + v_n \frac{\partial}{\partial x_n} f$$</p>
<p>Which is the formula for the dot product you mentioned.</p>
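The identity can be spot-checked numerically (a sketch; the function $f$, the point and the direction $v$ are arbitrary choices of mine):

```python
# f(x, y) = x^2*y + y^3, point p = (1, 2), direction v = (3, -1)
def f(x, y):
    return x * x * y + y ** 3

p = (1.0, 2.0)
v = (3.0, -1.0)
h = 1e-6

# limit quotient (f(p + h*v) - f(p)) / h
lhs = (f(p[0] + h * v[0], p[1] + h * v[1]) - f(*p)) / h

# gradient at p: (2xy, x^2 + 3y^2) = (4, 13)
grad = (2 * p[0] * p[1], p[0] ** 2 + 3 * p[1] ** 2)
rhs = grad[0] * v[0] + grad[1] * v[1]   # dot product: 12 - 13 = -1

print(lhs, rhs)   # lhs ~ -1, rhs = -1.0
```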
|
17,885 | <p>In most education systems, Mathematics is a compulsory subject from primary school all the way to the start of university. A common reason given is that essential concepts like addition and multiplication are taught to the children. </p>
<p>But for many high school students, especially those who are keen on pursuing the humanities, they do not see any point in studying Mathematics for the rest of their schooling life, or how any concept in Mathematics could possibly be applied in their future work.</p>
<p>Why is Mathematics a compulsory subject for high school students, especially those who are clearly studying in Humanities streams?</p>
| Henry Towsner | 62 | <p>Questions like this, or variants (from students, the notorious "when will I use this in real life") seem to be pretty common, and I'm always a little surprised, because the unstated premise - that high school is supposed to teach students things narrowly tailored to their future career - is so obviously false.</p>
<p>It's obviously false because <em>almost no</em> academic subjects in a conventional high school curriculum are widely applicable career skills. The actual essentials for functioning in ordinary society - literacy, basic arithmetic, a minimal ability to write - are covered by middle school, if not elementary school. (I'm distinguishing academic subjects, because, at least in the US, it's common for high schools to also offer more practical classes.)</p>
<p>I've never understood why math is the target of these questions so much more frequently than other subjects. Many fewer people <em>need</em> social studies in their professional life than need high school level math, but we understand that we don't teach social studies primarily as a career skill: we teach them because they're necessary to being an informed citizen.</p>
<p>You can get through a day at most jobs without knowing anything about history. What you can't do without some knowledge of history is make any sense of the news. Math has a similar role: we live in a world in which math plays a central role in nearly everything that happens around us - it's central to the technology we all use on a daily basis, to understanding the flood of scientific (especially health related) information that surrounds us, and to the decisions that affect us being made by corporations and governments all the time. We teach math because understanding what's going on in the world requires some basic fluency in math.</p>
<p>There are good arguments that current high school curricula don't do that as well as they should. Most high school curricula do, especially in college-oriented tracks, mix that with more specialized material needed for students going into STEM fields. So I don't mean this as a defense of any particular curriculum. But I do mean this as a rejection of the premise: the math curriculum, like the rest of the high school curriculum, will never make sense if the main question you ask is "how will I use this in my job".</p>
|
17,885 | <p>In most education systems, Mathematics is a compulsory subject from primary school all the way to the start of university. A common reason given is that essential concepts like addition and multiplication are taught to the children. </p>
<p>But for many high school students, especially those who are keen on pursuing the humanities, they do not see any point in studying Mathematics for the rest of their schooling life, or how any concept in Mathematics could possibly be applied in their future work.</p>
<p>Why is Mathematics a compulsory subject for high school students, especially those who are clearly studying in Humanities streams?</p>
| Community | -1 | <blockquote>
<p>Why is Mathematics a compulsory subject for high school students, especially those who are clearly studying in Humanities streams?</p>
</blockquote>
<ol>
<li><p>A kid at age 14 is not ready to make irrevocable decisions that will affect them for the rest of their life. That's why we don't let them get married. I have a friend who, at age 30, decided to apply to grad school in sociology, and she is now tenured at a research university. A huge obstacle for her when she was re-entering school was that she needed to learn enough statistics. She had had a fine high school and college education, but hadn't concentrated much on math. If she had simply been encouraged to stop studying math completely at age 14, then I can't imagine how she could have ever gotten over this hurdle at age 30.</p></li>
<li><p>The kid may want to study the humanities in college but then, say, go into business. If they don't understand enough math to get through a watered-down 9th grade algebra class, then they aren't going to be able to do the relevant quantitative reasoning.</p></li>
<li><p>The reason why a country like the US has universal, free, and compulsory education is not (just) that it helps kids get jobs and boosts the economy. Education is also necessary in order to have a functioning democracy. Voters who can't do basic algebra are going to be severely handicapped in making decisions about issues like nuclear power and global warming.</p></li>
<li><p>When you provide multiple tracks for students through the educational system, it can have nasty side-effects. In the US, there was a time when African-American and Latino students were routinely sent into one academic track, while white kids were put on a more demanding one. It's pretty common for kids to get put in less demanding classes because of superficial issues like poor handwriting. For these reasons, I think it's better simply to provide classes at a variety of <em>levels</em>, but not to classify kids into different categories.</p></li>
</ol>
<p>If you want to criticize the practice of forcing kids to take math, then I think there are some more appropriate targets for criticism:</p>
<ol>
<li><p>Efforts to force kids to take algebra at lower and lower ages, such as attempts in California to make all kids take algebra in 8th grade.</p></li>
<li><p>Requirements that college biology majors take a full year of calculus (including stuff like doing integrals using trig substitutions) and calculus-based physics.</p></li>
<li><p>Unrealistic government requirements that encourage public schools to pretend that all students are succeeding at a high level in math, when in fact many aren't succeeding at all.</p></li>
</ol>
|
3,043,699 | <p>I'm having difficulty understanding how below integral is evaluated : </p>
<p><span class="math-container">$$\int_0^1\frac{(1-y)^2}{2}+(1-y)y dy = \frac{-1(1-y)^3}{6}+\frac{y^2}{2}-\frac{y^3}{3}\bigg|^1_0$$</span></p>
<p>What are steps involved in this evaluation ?</p>
<p>For <span class="math-container">$\frac{(1-y)^2}{2}$</span> this appears to be evaluated as <span class="math-container">$\frac{(1-y)^{2+1}}{(3)2}$</span>, but I'm unsure how the <span class="math-container">$-1$</span> is inserted?</p>
| Kavi Rama Murthy | 142,385 | <p>The antiderivative of <span class="math-container">$(1-y)^{2}$</span> is not <span class="math-container">$(1-y)^{3} /3$</span>. If you differentiate the latter you get <span class="math-container">$-(1-y)^{2}$</span>. Hence the antiderivative is <span class="math-container">$-(1-y)^{3} /3$</span>.</p>
|
3,043,699 | <p>I'm having difficulty understanding how below integral is evaluated : </p>
<p><span class="math-container">$$\int_0^1\frac{(1-y)^2}{2}+(1-y)y dy = \frac{-1(1-y)^3}{6}+\frac{y^2}{2}-\frac{y^3}{3}\bigg|^1_0$$</span></p>
<p>What are steps involved in this evaluation ?</p>
<p>For <span class="math-container">$\frac{(1-y)^2}{2}$</span> this appears to be evaluated as <span class="math-container">$\frac{(1-y)^{2+1}}{(3)2}$</span>, but I'm unsure how the <span class="math-container">$-1$</span> is inserted?</p>
| Rhys Hughes | 487,658 | <p>Remember that where <span class="math-container">$f(x)$</span> is linear we have that:</p>
<p><span class="math-container">$$\int[f(x)]^n dx=\frac{[f(x)]^{n+1}}{(n+1)f'(x)}+C$$</span></p>
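Applying this to the original problem (an illustrative check, not part of the answer): the antiderivative of the integrand is $-\frac{(1-y)^3}{6}+\frac{y^2}{2}-\frac{y^3}{3}$, and the definite integral comes out to $\frac16-(-\frac16)=\frac13$, which a crude midpoint Riemann sum confirms.

```python
def integrand(y):
    return (1 - y) ** 2 / 2 + (1 - y) * y

def antiderivative(y):
    # -(1-y)^3/6 + y^2/2 - y^3/3
    return -(1 - y) ** 3 / 6 + y ** 2 / 2 - y ** 3 / 3

exact = antiderivative(1.0) - antiderivative(0.0)   # 1/6 - (-1/6) = 1/3
n = 100_000
riemann = sum(integrand((i + 0.5) / n) for i in range(n)) / n   # midpoint rule
print(exact, riemann)   # both ~0.3333...
```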
|
4,349,748 | <p>I would appreciate some help with this.</p>
<p>Is it true that:
<span class="math-container">$$
\nabla \cdot \left( \nabla \vec{v}\right)^T= \nabla \left( \nabla \cdot \vec{v}\right)
$$</span></p>
<p>How can I show this? Is the gradient of a vector mathematically defined?</p>
<p>Best regards</p>
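<p>For smooth fields in Cartesian coordinates the identity does hold, since $[\nabla\cdot(\nabla \vec v)^T]_i = \partial_j\partial_i v_j = \partial_i\partial_j v_j = [\nabla(\nabla\cdot\vec v)]_i$ by equality of mixed partials. Below is a quick finite-difference illustration (a sketch; the test field, step size, and evaluation point are arbitrary choices, and the convention $(\nabla \vec v)_{ij}=\partial_i v_j$ is assumed):</p>

```python
import math

# Smooth test field v(x, y, z) = (sin(x*y), cos(y*z), x*z**2),
# chosen arbitrarily for illustration.
def v(p):
    x, y, z = p
    return [math.sin(x * y), math.cos(y * z), x * z ** 2]

h = 1e-3

def D(i, g, p):
    # Central difference of a scalar function g in coordinate direction i.
    pp, pm = list(p), list(p)
    pp[i] += h
    pm[i] -= h
    return (g(pp) - g(pm)) / (2 * h)

def component(j):
    return lambda r: v(r)[j]

point = [0.3, 0.7, 0.2]

def second(i, j):
    # Approximates d_j d_i v_j at `point`.
    return D(j, lambda q: D(i, component(j), q), point)

# [div (grad v)^T]_i = sum_j d_j (d_i v_j)
lhs = [sum(second(i, j) for j in range(3)) for i in range(3)]

# [grad (div v)]_i = d_i (sum_j d_j v_j)
def div_v(q):
    return sum(D(j, component(j), q) for j in range(3))

rhs = [D(i, div_v, point) for i in range(3)]
```

The two vectors agree up to floating-point roundoff, as the symmetric difference stencils commute exactly.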
| Henno Brandsma | 4,280 | <p>Your boundedness proof does not use nets at all and can also be simplified: let <span class="math-container">$A$</span> be compact and let <span class="math-container">$a_0 \in A$</span>. Consider the open cover <span class="math-container">$\{\mathbb{B}(a_0,n)\mid n \in \Bbb N\}$</span> of <span class="math-container">$A$</span>: it has a finite subcover and the one with the largest radius in it contains all other ones (the balls are increasing) so for one <span class="math-container">$N$</span> we have <span class="math-container">$A \subseteq \Bbb{B}(a_0,N)$</span> and we're done. (no triangle inequality etc.)</p>
<p>The closedness argument is fine: Let <span class="math-container">$(x_d)_{d \in D}$</span> be a net in <span class="math-container">$A$</span> converging to some <span class="math-container">$x \in X$</span> in <span class="math-container">$(X,\tau)$</span> and we want to show <span class="math-container">$x \in A$</span>. By compactness of <span class="math-container">$(A,\tau_A)$</span> for some <span class="math-container">$a_0 \in A$</span> and some subnet <span class="math-container">$(x_{\phi(i)})_{i \in I}$</span> we have <span class="math-container">$x_{\phi(i)} \to a_0$</span> in <span class="math-container">$(A,\tau_A)$</span> and so also (lemma) <span class="math-container">$x_{\phi(i)} \to a_0$</span> in <span class="math-container">$(X,\tau)$</span>. As subnets of a convergent net have the same limits, <span class="math-container">$x_{\phi(i)} \to x$</span> in <span class="math-container">$(X,\tau)$</span> as well and by <span class="math-container">$T_2$</span> ness of <span class="math-container">$X$</span>, <span class="math-container">$a_0=x$</span> and so <span class="math-container">$x \in A$</span> as required.</p>
<p>This proof is not noticably simpler than the standard proof using a cover and for a beginning student the latter proof is probably more insightful.</p>
|
64,022 | <p><strong>Edit</strong></p>
<p>(As Robert pointed out, what I was trying to prove is incorrect. So now I ask the right question here, to avoid duplicate question)</p>
<p>For infinite independent Bernoulli trials with success probability $p$, define a random variable N which equals the number of successful trials. Intuitively, we know if $p > 0$, $\Pr \{N < \infty \} = 0$, in other words $N \rightarrow \infty$. But I got stuck when I try to prove it mathematically.</p>
<p>\begin{aligned}
\Pr \{ N < \infty \}
& = \Pr \{ \cup_{n=1}^{\infty} [N \le n] \} \\
& = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\
& = \lim_{n \rightarrow \infty}\sum_{i=1}^{n} b(i; \infty, p) \\
& = \sum_{i=1}^{\infty} b(i; \infty, p) \\
\end{aligned}</p>
<p>I've totally no idea how to calculate the last expression.</p>
<hr>
<p>(Original Question)</p>
<p>For infinite independent Bernoulli trials with success probability $p$, define a random variable N which equals the number of successful trials. Can we prove that $\Pr \{N < \infty \} = 1$ by:</p>
<p>\begin{aligned}
\Pr \{ N < \infty \}
& = \Pr \{ \cup_{n=1}^{\infty} [N \le n] \} \\
& = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\
& = \lim_{n \rightarrow \infty}\sum_{i=1}^{n} b(i; \infty, p) \\
& = \sum_{i=1}^{\infty} b(i; \infty, p) \\
& = \lim_{m \rightarrow \infty}\sum_{i=1}^{m} b(i; m, p) \\
& = \lim_{m \rightarrow \infty}[p + (1 - p)]^m \\
& = \lim_{m \rightarrow \infty} 1^m \\
& = 1
\end{aligned}</p>
<p>I know there must be some mistake in the process because if $p = 1$, N must infinite. So the equation only holds when $ p < 1 $. Which step is wrong?</p>
| Robert Israel | 8,508 | <p>As long as $p > 0$, $N$ will be $\infty$ with probability 1. The first mistake is in
$$ \sum_{i=1}^\infty b(i;\infty,p) = \lim_{m \to \infty} \sum_{i=1}^m b(i; m,p)$$</p>
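<p>The point can be illustrated numerically: for fixed $k$, the probability of at most $k$ successes in $m$ trials tends to $0$ as $m$ grows, so no mass remains on finite values in the limit (a sketch with illustrative values $p=0.3$, $k=5$):</p>

```python
from math import comb

def binom_cdf(k, m, p):
    # P(at most k successes in m Bernoulli(p) trials)
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k + 1))

p, k = 0.3, 5   # illustrative values
probs = [binom_cdf(k, m, p) for m in (10, 50, 100, 200)]
```

The probabilities decrease rapidly toward zero as the number of trials grows.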
|
2,243,646 | <p>Suppose a, b, c are positive real numbers such that </p>
<p>$$(1+a+b+c)\left(1+\frac 1a+\frac 1b+\frac 1c\right)=16$$
Then is it true that we must have $a+b+c=3$ ?</p>
<p>Please help me to solve this. Thanks in advance. </p>
| Dr. Sonnhard Graubner | 175,066 | <p>Remember the inequality between the arithmetic and the harmonic mean
$$\frac{1+a+b+c}{4}\geq \frac{4}{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+1}$$ for all $$a,b,c>0,$$
which rearranges to $$(1+a+b+c)\left(1+\frac 1a+\frac 1b+\frac 1c\right)\geq 16.$$ The equality holds if and only if $$a=b=c=1.$$ Since the given product equals $16$, equality must hold, so $a=b=c=1$ and thus $$a+b+c=3.$$</p>
|
2,243,646 | <p>Suppose a, b, c are positive real numbers such that </p>
<p>$$(1+a+b+c)\left(1+\frac 1a+\frac 1b+\frac 1c\right)=16$$
Then is it true that we must have $a+b+c=3$ ?</p>
<p>Please help me to solve this. Thanks in advance. </p>
| lulu | 252,071 | <p>Expand your product to get $$4+\left(a+\frac 1a\right)+\left(b+\frac 1b\right)+\left(c+\frac 1c\right)+\left(\frac ab+\frac ba\right)+\left(\frac ac+\frac ca\right)+\left(\frac bc+\frac cb\right)=16$$</p>
<p>Now, the arithmetic-geometric inequality tells us that (for $x>0$): $$\left(x+\frac 1x\right)≥2$$ and that equality only holds when $x=1$. This quickly implies that each of the variable terms in the expanded expression must be $2$ and that $a=b=c=1$ and we are done.</p>
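<p>A quick numerical illustration of the expanded inequality (a sketch; the random samples are illustrative only):</p>

```python
import random

def product(a, b, c):
    # The quantity (1+a+b+c)(1 + 1/a + 1/b + 1/c) from the question.
    return (1 + a + b + c) * (1 + 1 / a + 1 / b + 1 / c)

random.seed(0)
samples = [tuple(random.uniform(0.1, 10) for _ in range(3)) for _ in range(1000)]
min_val = min(product(a, b, c) for a, b, c in samples)
```

Every random positive triple gives a product of at least $16$, and $a=b=c=1$ attains exactly $16$.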
|
2,534,369 | <p>I am trying to work my through the exercises in Spivak's <em>Calculus on Manifolds.</em> I am currently working on the exercises in Chapter 3 which deals with Integration. I am having trouble with the following question:</p>
<blockquote>
<p>Let:</p>
<p>\begin{equation}
f(x,y)=\begin{cases}
0, & \text{if $x$ is irrational}.\\
0, & \text{if $x$ is rational, $y$ is irrational}. \\
1/q, & \text{if $x$ is rational, $y=p/q$ in lowest terms}.
\end{cases}
\end{equation}</p>
<p>Show that $f$ is integrable on $A = [0,1] \times [0,1]$ and $\int_A f = 0$.</p>
</blockquote>
<p>I was thinking of trying to prove that this set is Jordan Measurable and that it's Jordan measure is zero and that it is therefore Riemann Integrable but I am not sure how to do this or if it is even the best way to solve this problem.</p>
<p>If I could show that $f$ is continuous on $A$ up to a set of Jordan Measure $0$, then $f$ would be integrable but again, I'm not sure I can do this or if its even appropriate for this problem.</p>
<p>Any assistance that anyone could provide would be greatly appreciated.</p>
<p>Thank you.</p>
| WannaBeRealAnalysist | 319,730 | <p>So here is an attempt at a solution:</p>
<p>So for any partition $P$, </p>
<p>$u(f,P) = 0$, so it should be enough to show that $U(f,P)$ is arbitrarily close to $0$. For a natural number $q$, consider the partition,</p>
<p>$P = \bigl((0,1/q,2/q,\cdots,(q-1)/q,1),(0,1)\bigr)$.</p>
<p>Let $x \in [\frac{p-1}{q}, \frac{p}{q}]$, with $p < q$ and $\frac{p}{q}$ in lowest terms. </p>
<p>Then, if $x = \frac{a}{b}$, </p>
<p>$b \ge q$</p>
<p>So, for any rectangle in the partition $P$, $U(f,P) = \frac{1}{q^2}$</p>
<p>And since $q$ can be chosen to be arbitrarily large, the upper sum of $f$ is arbitrarily close to the lower sum of $f$ for an appropriate partition. Thus $f$ is integrable.</p>
<p>Furthermore,</p>
<p>$\int_{[0,1] \times [0,1]} f = \inf_P U(f,P) \le q \cdot (1/q^2) = 1/q$ for every $q$, hence the integral is $0$</p>
<p>Is this correct?</p>
|
4,090,775 | <p>Let <span class="math-container">$\mu$</span> be the Lebesgue measure, and <span class="math-container">$E$</span> and <span class="math-container">$F$</span> compact subsets of <span class="math-container">$\mathbb{R}$</span> such that <span class="math-container">$E\subset F$</span> and <span class="math-container">$\mu(E)=1$</span> and <span class="math-container">$\mu(F)=3$</span>. Prove there is a compact subset <span class="math-container">$K$</span> of <span class="math-container">$\mathbb{R}$</span> such that <span class="math-container">$E \subseteq K \subseteq F$</span> and <span class="math-container">$\mu(K)=2$</span>. First and foremost, as <span class="math-container">$E$</span> and <span class="math-container">$F$</span> are compact subsets of <span class="math-container">$\mathbb{R}$</span>, there are open subsets of <span class="math-container">$\mathbb{R}$</span>, <span class="math-container">$E'$</span> and <span class="math-container">$F'$</span>, such that <span class="math-container">$E \subset E'$</span> and <span class="math-container">$F' \subset F$</span> and <span class="math-container">$\mu^{\ast}(E'- E) < \epsilon$</span> and <span class="math-container">$\mu^{\ast}(F'- F) < \epsilon$</span>, where <span class="math-container">$\mu(E')=\mu(E)=1$</span> and <span class="math-container">$\mu(F')=\mu(F)=3$</span>. Obviously, <span class="math-container">$\mu^{\ast}$</span> stands for Lebesgue outer measure. So far, I've run out of ideas and approaches to solve this. I have googled properties about the Lebesgue measure of compact sets and there is nothing about them that I have found helpful, for instance, all compact sets are Lebesgue measurable and <span class="math-container">$\mu(E)< \infty$</span> for every compact set in <span class="math-container">$\mathbb{R}$</span>. 
But I don't know how I'm supposed to construct this compact set <span class="math-container">$K$</span> such that <span class="math-container">$E \subseteq K \subseteq F$</span> and <span class="math-container">$\mu(K)=2$</span>. Thanks!</p>
| HallaSurvivor | 655,547 | <p>Hint:</p>
<p>Consider the sets <span class="math-container">$[-r,r] \subseteq \mathbb{R}$</span>.</p>
<p>Can you show <span class="math-container">$r \mapsto \mu(E \cup ([-r,r] \cap F))$</span> is continuous? Can you show for any <span class="math-container">$r$</span> the set we're measuring is compact? What does the intermediate value theorem buy us?</p>
<hr />
<p>I hope this helps ^_^</p>
|
664,220 | <p>If $A$ and $B$ Commute, $\exp((A+B)t)= \exp(At)\cdot\exp(Bt)$? is this statement true?</p>
| dato datuashvili | 3,196 | <p>If $A\cdot B=B\cdot A$,</p>
<p>then $e^{A+B}=e^{A}\cdot e^{B}$.</p>
<p>Since $A$ and $B$ commute, so do $At$ and $Bt$, so applying the same identity to them gives</p>
<p>$e^{At+Bt}=e^{At}\cdot e^{Bt}$.</p>
<p>There is a question related to the matrix exponential:</p>
<p><a href="https://math.stackexchange.com/questions/370817/commuting-in-matrix-exponential">Commuting in Matrix Exponential</a></p>
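<p>A numerical illustration of the commuting case (a sketch; the matrix exponential is approximated by a truncated Taylor series, and $B$ is chosen as a polynomial in $A$ so that $AB=BA$):</p>

```python
import numpy as np

def expm(M, terms=30):
    # Truncated Taylor series exp(M) = sum_k M^k / k!; adequate for
    # the small matrices used here.
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = 2 * A + 0.5 * np.eye(2)   # polynomial in A, hence commutes with A
t = 0.7

lhs = expm((A + B) * t)
rhs = expm(A * t) @ expm(B * t)
```

Since $At$ and $Bt$ commute, the two sides agree to machine precision.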
|
664,220 | <p>If $A$ and $B$ Commute, $\exp((A+B)t)= \exp(At)\cdot\exp(Bt)$? is this statement true?</p>
| Robert Lewis | 67,071 | <p>It is in fact the case. For a thorough explanation, please see my answer to <a href="https://math.stackexchange.com/questions/568450/m-n-in-bbb-r-n-times-n-show-that-emn-emen-given-mn-nm">this question</a>. It's only a mouse click away!</p>
<p>Hope this helps. Cheerio,</p>
<p>and as always,</p>
<p><strong><em>Fiat Lux!!!</em></strong></p>
|
2,008,437 | <p>Given four points in the plane, there exists a one-dimensional family of conics through these, often called a pencil of conics. The locus of the centers of symmetry for all of these conics is again a conic. What's the most elegant way of computing it?</p>
<p>I know I could choose five arbitrary elements from the pencil, compute their centers and then take the conic defined by these. I can also do so on a symbolic level, to obtain a general formula. But that formula is at the coordinate level, and my CAS is still struggling with the size of the polynomials involved here. There has to be a better way.</p>
<p>Bonus points if you know a name for this conic. Or – as the center is the pole of the line at infinity – a name for the more general locus of the pole of an arbitrary line with respect to a given pencil of conics.</p>
| Narasimham | 95,860 | <p>Excuse me, I am unable to understand: since a fifth point fixes the conic and its center, what is the variable fifth point $P_5$ that determines a locus of the corresponding conic centers?</p>
<p>To find a locus we need to define our object's property <em>independently</em>, but that property should <em>not again depend</em> on what is fixed already. Kindly clarify.</p>
<p>For discussion, referring only to a simple pencil with a fixed center at the origin,</p>
<p>take the conic passing through $ ( ( \pm 1,\pm 1) , (0,p) )$ in the standard form </p>
<p>$$ a x^2 + 2 h x y + b y^2 + 2 f x + 2 g y=1. $$</p>
<p>Solving by the standard coordinate approach, the high symmetry leaves only the terms with non-zero $(a,b)$:</p>
<p>$$ (p^2-1)x^2 + y^2 = p^2$$</p>
<p>We have the pencil sketched.</p>
<p>EDIT1:</p>
<p>And in one more example for obtaining all conics' common center at origin with a new choice of $P_5$ set on a circle radius $2,( x = 2 \cos t, y= 2 \sin t), x= \pm 1,y= \pm 1$ we have the central conics (red) added in the sketch below.</p>
<p><a href="https://i.stack.imgur.com/zmx10.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zmx10.png" alt="Conics Pencil "></a></p>
|
1,085,164 | <p><strong>I am currently taking a course on Numerical PDE. The course covers the following topics listed below.</strong> </p>
<p><strong>Chapter 1: Solutions to Partial Differential Equations:</strong></p>
<p><strong>Chapter 2: Introduction to Finite Elements:</strong></p>
| Cookie | 111,793 | <p><a href="http://link.springer.com/book/10.1007%2F978-1-4899-7278-1" rel="nofollow">Numerical PDEs by J.W. Thomas</a> might be a good book. If you check its table of contents, they have the majority of the topics you listed. </p>
<p>Here is one example of professor's <a href="http://www.tat.physik.uni-tuebingen.de/~kokkotas/Teaching/Num_Methods_files/Comp_Phys8.pdf" rel="nofollow">lecture notes</a> based on the same book.</p>
|
2,810,755 | <p>I'm trying to set up a double integral on $e^{-(x+y)}$ and the range listed is: $0 < x < y < \infty.$</p>
<p>I'm interpreting this as $0 < x < y$ for $x$'s range and $x < y < \infty$ as $y$'s range. I've put those bounds on my two integrals and proceeded with the mission. However, $y$ is still lingering in my result. (The homework is already turned in and I'm sure I've done it wrong; now I just want to know how to do it properly.)</p>
<p>My result of the double integral was: $-((e^{-2y})/2) + (1/2)$.
I suspect that how I set up the double integral was the source of the problem. How should it have looked?</p>
<p>The integral I calculated was
$$\int_0^y\int_x^\infty e^{-(x+y)}dydx$$</p>
| Doug M | 317,162 | <p>You look great right up through here.</p>
<p>Assume:
$1\cdot2+2\cdot3+3\cdot4+...+k(k+1)=\frac{(k(k+1)(k+2))}{3}$</p>
<p>We must show that:</p>
<p>$1\cdot2+2\cdot3+3\cdot4+...+k(k+1)+(k+1)(k+2)=\frac{(k+1)(k+2)(k+3)}{3}$</p>
<p>$1\cdot2+2\cdot3+3\cdot4+...+k(k+1)+(k+1)(k+2) = \frac{(k(k+1)(k+2))}{3}+(k+1)(k+2)\ $ by the inductive hypothesis.</p>
<p>Rather than multiply everything out, notice that $(k+1)(k+2)$ is a common factor.</p>
<p>$(k+1)(k+2)(\frac{k}{3} + 1)\\
\frac{(k+1)(k+2)(k + 3)}{3}\\
$</p>
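<p>The closed form can be spot-checked numerically (a sketch):</p>

```python
def lhs(n):
    # 1*2 + 2*3 + ... + n*(n+1)
    return sum(k * (k + 1) for k in range(1, n + 1))

def rhs(n):
    # Closed form n(n+1)(n+2)/3 from the inductive proof.
    return n * (n + 1) * (n + 2) // 3

checks = [lhs(n) == rhs(n) for n in range(1, 51)]
```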
|
1,289,834 | <p>Let $\gcd(x,y,z)=1$.Can we find 3 non-perfect squares $x,y,z\in \mathbb{Z},$ such that $a \in \mathbb{Z} \geq 2$
$$a=\left(\sqrt{2(\sqrt{y}+\sqrt{z})(\sqrt{x}+\sqrt{z})}-\sqrt{y}-\sqrt{z}\right)^2$$
I cannot seem to find any such triplets. Any hints on how to prove it?</p>
| Domenico Vuono | 227,073 | <p>Let $y=k^2$, $z=r^2$ and $x=c^2$(with $k,r,c$ integers) then the expression becomes: $$(\sqrt {2\cdot (k+r)(c+r)}-k-r)^2$$
Now for $a$ to be an integer $$2(k+r)(c+r)=n^2$$
with $n$ integer.
We put $$k+r=2^{2p+1}\cdot t^s$$
And $$c+r=t^v$$
(or vice versa, with $v$ and $s$ both odd or even).
If $r=1$ $$k=2^{2p+1}\cdot t^s-1$$
And $$c=t^v-1$$. This can be a solution.</p>
|
1,289,834 | <p>Let $\gcd(x,y,z)=1$.Can we find 3 non-perfect squares $x,y,z\in \mathbb{Z},$ such that $a \in \mathbb{Z} \geq 2$
$$a=\left(\sqrt{2(\sqrt{y}+\sqrt{z})(\sqrt{x}+\sqrt{z})}-\sqrt{y}-\sqrt{z}\right)^2$$
I cannot seem to find any such triplets. Any hints on how to prove it?</p>
| Barry Cipra | 86,747 | <p>Checking a solution given by Winther in comments beneath the OP led me to the following:</p>
<blockquote>
<p>In general, if $z=x+y$, then $a=x$.</p>
</blockquote>
<p>This is seen by writing</p>
<p>$$\sqrt{2(\sqrt x+\sqrt z)(\sqrt y+\sqrt z)}=\sqrt x+\sqrt y+\sqrt z$$</p>
<p>squaring both sides and expanding to get</p>
<p>$$2(\sqrt{xy}+\sqrt{xz}+\sqrt{yz}+z)=x+y+z+2(\sqrt{xy}+\sqrt{xz}+\sqrt{yz})$$</p>
<p>and then cancelling left and right, leaving $z=x+y$.</p>
<p>Whether this gives <em>all</em> solutions to the OP's equation remains to be seen. (I'm not offering an opinion one way or the other.)</p>
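<p>The claim is easy to verify numerically for sample triples with $z=x+y$ (a sketch; the triples are illustrative, with none of $x,y,z$ a perfect square):</p>

```python
from math import sqrt

def a_value(x, y, z):
    # The expression from the question, with the triple (x, y, z).
    inner = sqrt(2 * (sqrt(y) + sqrt(z)) * (sqrt(x) + sqrt(z)))
    return (inner - sqrt(y) - sqrt(z)) ** 2

# Triples with z = x + y (none of x, y, z a perfect square).
triples = [(2, 3, 5), (5, 7, 12), (6, 7, 13)]
results = [a_value(x, y, z) for x, y, z in triples]
```

In each case the value comes out equal to $x$, as derived above.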
|
2,976,057 | <p>I’m doing partial fractions and need to factorize the denominator. They are quadratic. However there are some that aren’t so easy to factorize and my first choice was to use the quadratic equation to find the roots however comparing my answer with the correct one the signs are different. Is the quadratic formula only to be used when the equation is equal to zero? The answer used another method of factorizing that didn’t involve equating anything to zero and I can’t find anything about it online. Where did I go wrong?
My denominator is: </p>
<p><span class="math-container">$-3z^2 -4z-1$</span></p>
<p>the correct answer is:
<span class="math-container">$-(3z+1)(z+1)$</span></p>
<p>while if I do this using the quadratic formula I get:
<span class="math-container">$(3z+1)(z+1)$</span></p>
<p>however if I factorize the negative sign then use the quadratic formula I get the correct answer which is confusing to me.</p>
| Jean-Luc Bouchot | 24,153 | <p>I'm guessing, what you have is the following fraction:
<span class="math-container">$$
f(z) = \frac{N(z)}{D(z)},
$$</span>
where <span class="math-container">$D(z) = -3z^2-4z-1$</span>. </p>
<p>Basically, you may write <span class="math-container">$D(z) = -(3z^2+4z+1) = -D_2(z)$</span>. </p>
<p>Finally, you can decompose the fraction <span class="math-container">$f$</span> as <span class="math-container">$f(z) = \frac{N(z)}{D(z)}$</span> or as <span class="math-container">$f(z) = \frac{-N(z)}{D_2(z)}$</span>. </p>
<p>In other words, you'll get the same partial fraction in any case (you'll keep only the two factors in the denominators, and push the minus sign to the numerator)</p>
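<p>The sign bookkeeping can be confirmed by comparing both factorizations at a few sample points (a sketch):</p>

```python
def D(z):
    # The denominator from the question.
    return -3 * z**2 - 4 * z - 1

def with_sign(z):
    # The correct factorization, with the minus sign pulled out.
    return -(3 * z + 1) * (z + 1)

def without_sign(z):
    # The factorization missing the minus sign.
    return (3 * z + 1) * (z + 1)

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
matches = all(abs(D(z) - with_sign(z)) < 1e-12 for z in pts)
differs = any(abs(D(z) - without_sign(z)) > 1e-9 for z in pts)
```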
|
1,620,795 | <p>Find function f(x), where:
$$f(3)=3$$
$$f'(3)=3$$
$$f'(4)=4$$
$$f''(3) \text{ does not exist}$$
<p>How to find function like this in <strong>general?</strong> What steps should I do?</p>
| Claudius | 218,931 | <p>You know that $\lvert x_i\rvert \le \max_{j=1,\dotsc,n}\lvert x_j\rvert$ for each $i=1,\dotsc, n$. Hence $\lvert x_i\rvert\le \lVert x\rVert_\infty$ for each $i=1,\dotsc,n$. Now,
\begin{align*}
\lvert x\cdot y\rvert &= \left\lvert\sum_{i=1}^nx_iy_i\right\rvert \le \sum_{i=1}^n \lvert x_i\rvert\cdot \lvert y_i\rvert\\
&\le \sum_{i=1}^n\lVert x\rVert_\infty\cdot \lvert y_i\rvert\\
&= \lVert x\rVert_\infty\cdot \sum_{i=1}^n\lvert y_i\rvert = \lVert x\rVert_\infty\cdot \lVert y\rVert_1.
\end{align*}</p>
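<p>A quick random-sample check of the bound $\lvert x\cdot y\rvert \le \lVert x\rVert_\infty \lVert y\rVert_1$ (a sketch; the dimension and sample ranges are arbitrary choices):</p>

```python
import random

random.seed(1)

def check(n=5):
    # Draw random vectors and test |x.y| <= ||x||_inf * ||y||_1.
    x = [random.uniform(-10, 10) for _ in range(n)]
    y = [random.uniform(-10, 10) for _ in range(n)]
    dot = sum(a * b for a, b in zip(x, y))
    inf_norm = max(abs(a) for a in x)
    one_norm = sum(abs(b) for b in y)
    return abs(dot) <= inf_norm * one_norm + 1e-12

results = [check() for _ in range(200)]
```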
|
2,523,581 | <p>I'm a bit rusty on my complex numbers, how would you solve the following problem on paper?</p>
<blockquote>
<p>Determine and sketch (graph) the set of all complex numbers of form:
$$z_n=\frac{2n+1}{n-i},n\in\mathbb R$$</p>
</blockquote>
<p>Rationalizing yields $$S=\left\{z_n\in\mathbb C : z_n=\frac{n + 2 n^2}{1 + n^2}+\frac{1 + 2 n}{1 + n^2}i\right\}$$</p>
<p>How do I proceed to sketch (graph) this now on paper? <em>(Wolframalpha yields <a href="http://www.wolframalpha.com/input/?i=x%3D(m+%2B+2+m%5E2)%2F(1+%2B+m%5E2)+,+y%3D+(1+%2B+2+m)%2F(1+%2B+m%5E2)" rel="nofollow noreferrer">a circle</a>)</em></p>
<p>I assume I need to find the center and the radius of that circle which would be enough to sketch the graph, but I can't quite proceed from this point on. </p>
| nonuser | 463,553 | <p>Say $$w= {2n+1\over n-i}\Longrightarrow n ={wi+1\over w-2}$$</p>
<p>Since $\overline{n}=n$ we have:</p>
<p>$$ (\overline{w}-2)(wi+1)= (-i\overline{w}+1)(w-2)$$</p>
<p>so</p>
<p>$$ |w|^2i-2wi+\overline{w}-2 = -i|w|^2+2i\overline{w}+w-2$$</p>
<p>so
$$ |w|^2 = (\overline{w}+w)+{w-\overline{w}\over 2i}$$</p>
<p>Now, since $w=x+yi$ we have</p>
<p>$$ x^2+y^2 = 2x+y$$</p>
<p>So this is circle $$(x-1)^2+(y-{1\over 2})^2 ={5\over 4}$$</p>
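<p>A quick numeric check that the points $z_n=(2n+1)/(n-i)$ do lie on this circle (a sketch; the sample values of $n$ are arbitrary):</p>

```python
def z(n):
    return (2 * n + 1) / (n - 1j)

def circle_residual(w):
    # Should vanish on the circle (x-1)^2 + (y-1/2)^2 = 5/4.
    x, y = w.real, w.imag
    return (x - 1) ** 2 + (y - 0.5) ** 2 - 5 / 4

residuals = [abs(circle_residual(z(n))) for n in (-10, -2, -0.5, 0, 0.5, 3, 100)]
```

All residuals vanish to machine precision.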
|
2,523,581 | <p>I'm a bit rusty on my complex numbers, how would you solve the following problem on paper?</p>
<blockquote>
<p>Determine and sketch (graph) the set of all complex numbers of form:
$$z_n=\frac{2n+1}{n-i},n\in\mathbb R$$</p>
</blockquote>
<p>Rationalizing yields $$S=\left\{z_n\in\mathbb C : z_n=\frac{n + 2 n^2}{1 + n^2}+\frac{1 + 2 n}{1 + n^2}i\right\}$$</p>
<p>How do I proceed to sketch (graph) this now on paper? <em>(Wolframalpha yields <a href="http://www.wolframalpha.com/input/?i=x%3D(m+%2B+2+m%5E2)%2F(1+%2B+m%5E2)+,+y%3D+(1+%2B+2+m)%2F(1+%2B+m%5E2)" rel="nofollow noreferrer">a circle</a>)</em></p>
<p>I assume I need to find the center and the radius of that circle which would be enough to sketch the graph, but I can't quite proceed from this point on. </p>
| Hari Shankar | 351,559 | <p>$\dfrac{1}{\overline{z_n}} = \dfrac{n+i}{2n+1} = \dfrac{n}{2n+1}+\dfrac{i}{2n+1}$</p>
<p>Evidently $\dfrac{1}{\overline{z_n}}$ lies on the line $L: 2x+y=1$</p>
<p>$z_n$ hence lies on the curve obtained by inverting $L$ in the circle $|z|=1$. The result is a circle passing through origin $O$. The radius and centre of this circle are found using properties of inversion as below:</p>
<p>Since $L$ is at a distance of $\dfrac{1}{\sqrt 5}$ from origin, the foot of the perpendicular $P$ from $O$ transforms to a point $P'$ at a distance of $\sqrt 5$ along the line $x-2y=0$ i.e. to $P'(2,1)$. Also $P'$ becomes the other end of the diameter from $O$. Hence the radius of the circle is $\dfrac{\sqrt 5}{2}$ and center is $\left(1, \dfrac{1}{2} \right)$</p>
<p>Hence $z_n$ lies on $\left(x-1\right)^2+\left(y-\dfrac{1}{2}\right)^2 = \dfrac{5}{4}$</p>
|
3,681,254 | <p><span class="math-container">$$\int_{0}^{\frac{\pi}{2}}{e^{-x}} \cos3xdx$$</span>
using the integration by parts formula
<span class="math-container">$$\int{f(x).g(x)'}\mathrm{d}x = (f(x).g(x)) - \int{f'(x)g(x)dx}$$</span>
so it should be
<span class="math-container">$${{{e^{-x}}.\frac{1}{3}\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}\frac{1}{3}\cos(3x)dx}$$</span>
but the correct next step should be
<span class="math-container">$${{{-e^{-x}}.\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}(-3\sin(3x))dx}$$</span></p>
<p>which should be the correct step? </p>
| P. Lawrence | 545,558 | <p>There are two ways of doing this question.</p>
<p>(i) First you can write <span class="math-container">$e^{-x}$</span> as <span class="math-container">$(-e^{-x})'$</span> and then apply the integration by parts formula, which gives other terms and, up to numerical factors, the integral of <span class="math-container">$e^{-x}\sin(3x).$</span> Then once more write <span class="math-container">$e^{-x}$</span> as <span class="math-container">$(-e^{-x})'$</span> and apply the integration by parts formula again, which gives you back other terms and the desired integral with a number in front of it, so you can solve for the desired integral.</p>
<p>(ii) First you can write <span class="math-container">$\cos(3x)$</span> as <span class="math-container">$(\frac{1}{3}\sin(3x))'$</span> and then apply the integration by parts formula, which gives other terms and, up to numerical factors, the integral of <span class="math-container">$e^{-x}\sin(3x).$</span> Then write <span class="math-container">$\sin(3x)$</span> as <span class="math-container">$(-\frac{1}{3}\cos(3x))'$</span> and apply the integration by parts formula again, which gives you back other terms and the desired integral with a number in front of it, so you can solve for the desired integral.</p>
|
3,681,254 | <p><span class="math-container">$$\int_{0}^{\frac{\pi}{2}}{e^{-x}} \cos3xdx$$</span>
using the integration by parts formula
<span class="math-container">$$\int{f(x).g(x)'}\mathrm{d}x = (f(x).g(x)) - \int{f'(x)g(x)dx}$$</span>
so it should be
<span class="math-container">$${{{e^{-x}}.\frac{1}{3}\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}\frac{1}{3}\cos(3x)dx}$$</span>
but the correct next step should be
<span class="math-container">$${{{-e^{-x}}.\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}(-3\sin(3x))dx}$$</span></p>
<p>which should be the correct step? </p>
| Anton Vrdoljak | 744,799 | <p><span class="math-container">$I=\int_0^{\frac{\pi}{2}}e^{-x}\cos {3x}dx = \text{int. by parts} =\left(-e^{-x}\cos{3x}\right)_0^{\frac{\pi}{2}}-3\int_0^{\frac{\pi}{2}}e^{-x}\sin {3x}dx\\
= 1-3\int_0^{\frac{\pi}{2}}e^{-x}\sin {3x}dx\\
= \text{int. by parts} = 1-3\left( \left( -e^{-x}\sin{3x} \right)_0^{\frac{\pi}{2}} + 3\int_0^{\frac{\pi}{2}}e^{-x}\cos {3x}dx \right) \\
=1-3\left(e^{-\frac{\pi}{2}} + 3\int_0^{\frac{\pi}{2}}e^{-x}\cos {3x}dx \right) \\
= 1 - 3e^{-\frac{\pi}{2}}-9I\\$</span></p>
<p>Now we have:
<span class="math-container">$$10I= 1 - 3e^{-\frac{\pi}{2}} \iff I = \frac{1 - 3e^{-\frac{\pi}{2}}}{10}.$$</span></p>
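<p>Checking the closed form against a numerical quadrature (a sketch using composite Simpson's rule):</p>

```python
import math

def f(x):
    return math.exp(-x) * math.cos(3 * x)

def simpson(g, a, b, n=10_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

numeric = simpson(f, 0, math.pi / 2)
closed_form = (1 - 3 * math.exp(-math.pi / 2)) / 10
```

The quadrature agrees with $(1-3e^{-\pi/2})/10 \approx 0.0376$.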
|
1,207,134 | <p>Given the image:
<img src="https://i.stack.imgur.com/EJ3ax.jpg" alt="enter image description here"></p>
<p>and that $x_0 = 1, y_0=0$ and $\text{angles} \space θ_i
, i = 1, 2, 3, · · ·$ can be arbitrarily picked.</p>
<p>How can I derive a recurrence relationship for $x_{n+1}$ and $x_n$?</p>
<p>I actually know what the relationship is, however, don't know how to derive it.</p>
<p>Although it is obvious to see that $x_0 = x_1$</p>
<p>so for $n = 0$</p>
<p>We have $x_{1} = x_{0}+0 $</p>
<p>The recurrence relations are</p>
<p>$x_{n+1}=x_n−y_ntan(θ_{n+1})$</p>
<p>and </p>
<p>$y_{n+1}=y_n+x_ntan(θ_{n+1})$
But I can't get the derivation.</p>
<p><img src="https://i.stack.imgur.com/ZBzWJ.jpg" alt="enter image description here"></p>
<p>I tried taking some arbitrary right angle triangle and constructing two vectors.</p>
<p>$\mathop r_{\sim} = \langle x_n,y_n \rangle $</p>
<p>and a vector perpendicular to $\mathop r_\sim$, $\mathop d_{\sim} = \langle a,b\rangle$ such that</p>
<p>$\mathop r_{\sim} \cdot \mathop d_{\sim} = 0 $</p>
<p>Then we can construct a unit vector for $\mathop d_\sim$ and eventually construct a line through a point?</p>
| Blue | 409 | <p>If $P_i = (x_i,y_i)$, then $P_{n+1}$ is obtained by rotating $P_n$ about the origin by angle $\theta_{n+1}$, and scaling by a factor of $\frac{|OP_{n+1}|}{|OP_n|} = \sec\theta_{n+1}$. Using a rotation matrix, we can write</p>
<p>$$P_{n+1} \;=\; \sec\theta_{n+1}\;\left[\begin{array}{cc}
\cos\theta_{n+1} & -\sin\theta_{n+1} \\
\sin\theta_{n+1} & \phantom{-}\cos\theta_{n+1}
\end{array}\right]\;P_n \;=\; \left[\begin{array}{cc}
1 & -\tan\theta_{n+1} \\
\tan\theta_{n+1} & 1
\end{array}\right]\;P_n$$</p>
<p>Therefore,
$$\begin{align}
x_{n+1} &= x_n - y_n\tan\theta_{n+1} \\
y_{n+1} &= x_n \tan\theta_{n+1} + y_n
\end{align}$$</p>
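<p>The recurrence can be checked against the rotate-and-scale description, since each step should multiply $|OP|$ by $\sec\theta_{n+1}$ (a sketch; the angles are arbitrary illustrative choices):</p>

```python
import math

angles = [0.2, 0.5, 0.1, 0.9]   # arbitrary illustrative angles
x, y = 1.0, 0.0                 # starting point (x0, y0)
radii = [math.hypot(x, y)]
for th in angles:
    # The recurrence from the answer.
    x, y = x - y * math.tan(th), x * math.tan(th) + y
    radii.append(math.hypot(x, y))

# Each step should scale |OP| by sec(theta), since rotation preserves length.
ratios = [radii[i + 1] / radii[i] for i in range(len(angles))]
expected = [1 / math.cos(th) for th in angles]
```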
|
3,467,790 | <p>I saw this in the following question: <a href="https://math.stackexchange.com/questions/24413/is-there-a-function-with-infinite-integral-on-every-interval">Is there a function with infinite integral on every interval?</a></p>
<p>I already understood all other steps on the first answer, however, I don't know how to prove the following step:</p>
<blockquote>
<p>Let <span class="math-container">$\{q_n\}$</span> be an enumeration of the rational numbers, how can I justify that
<span class="math-container">$$\sum_{n=1}^\infty \frac{2^{-n}}{|x-q_n|}<\infty$$</span>
for almost every <span class="math-container">$x\in\mathbb{R}$</span> (i.e. almost everywhere)?</p>
</blockquote>
<p>I know it has something to do with the fact that <span class="math-container">$2^{-n}$</span> tends to zero exponentially while <span class="math-container">$|x-q_n|$</span> tends to zero linearly.</p>
<p>Also, there are some modifications that I made that shouldn't change the result: using all the rational numbers instead of only those between 0 and 1, and removing the square root (since it is squared anyway). </p>
| Kavi Rama Murthy | 142,385 | <p>I think you have to take <span class="math-container">$(q_n)$</span> to be an enumeration of rationals in a finite interval instead of the whole line. </p>
<p>(This argument treats the square-root form of the original linked question.) By Tonelli, <span class="math-container">$\int_a^{b} \sum \frac {2^{-n}} {\sqrt{|x-q_n|}} dx = \sum \int_a^{b}\frac {2^{-n}} {\sqrt{|x-q_n|}} dx$</span>, and the substitution <span class="math-container">$y=x-q_n$</span> gives <span class="math-container">$\int_a^{b} \frac 1 {\sqrt{|x-q_n|}}dx=\int_{a-q_n}^{b-q_n} \frac 1 {\sqrt {|y|} } dy$</span>. Since this integral is bounded uniformly in <span class="math-container">$n$</span> (the <span class="math-container">$q_n$</span> lie in a finite interval) and <span class="math-container">$\sum \frac1 {2^{n}} <\infty$</span>, it follows that <span class="math-container">$\int_a^{b} \sum \frac {2^{-n}} {\sqrt{|x-q_n|}} dx <\infty$</span>, which implies <span class="math-container">$\sum \frac {2^{-n}} {\sqrt{|x-q_n|}} <\infty$</span> for almost all <span class="math-container">$x \in (a,b)$</span>. Since <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary, the sum is finite for almost all real values of <span class="math-container">$x$</span>.</p>
|
3,467,790 | <p>I saw this in the following question: <a href="https://math.stackexchange.com/questions/24413/is-there-a-function-with-infinite-integral-on-every-interval">Is there a function with infinite integral on every interval?</a></p>
<p>I already understood all other steps on the first answer, however, I don't know how to prove the following step:</p>
<blockquote>
<p>Let <span class="math-container">$\{q_n\}$</span> be an enumeration of the rational numbers, how can I justify that
<span class="math-container">$$\sum_{n=1}^\infty \frac{2^{-n}}{|x-q_n|}<\infty$$</span>
for almost every <span class="math-container">$x\in\mathbb{R}$</span> (i.e. almost everywhere)?</p>
</blockquote>
<p>I know it has something to do with the fact that <span class="math-container">$2^{-n}$</span> tend to zero exponentially while <span class="math-container">$|x-q_n|$</span> tends to zero linearly.</p>
<p>Also, there are some modification that I made that shouldn't change the result, which are using all the rational numbers instead of only those between 0 and 1, and removing the square root (since it is squared anyways) </p>
| acreativename | 347,666 | <p>You could define the sets</p>
<p><span class="math-container">$A_{q_j , \epsilon} := \{ y |$</span> <span class="math-container">$ $</span> <span class="math-container">$|y-q_j| \leq \epsilon \cdot (1.5)^{-j} \}$</span></p>
<p>and the set <span class="math-container">$A_{\epsilon} := \bigcup_{j = 1}^{\infty} A_{q_j,\epsilon}$</span> and note that </p>
<p><span class="math-container">$m(A_{\epsilon}) \leq 4\epsilon$</span> (since <span class="math-container">$\sum_{j=1}^{\infty}(1.5)^{-j}=2$</span>) and on <span class="math-container">$A_{\epsilon}^c$</span>; the value of </p>
<p><span class="math-container">$\sum_{n=1}^{\infty} \frac{2^{-n}}{|x-q_n|}$</span> is at most </p>
<p><span class="math-container">$\sum_{n=1}^{\infty} 2^{-n}(1.5)^{n}\frac{1}{\epsilon}$</span> which is finite; </p>
<p>Now take the set <span class="math-container">$\bigcup_{\epsilon > 0} A_{\epsilon}^c$</span> and note that for <span class="math-container">$z \in \bigcup_{\epsilon > 0} A_{\epsilon}^c$</span> there is some <span class="math-container">$\beta_z > 0$</span> so that <span class="math-container">$z \in A_{\beta_z}^c$</span> for which <span class="math-container">$\sum_{n=1}^{\infty} \frac{2^{-n}}{|z-q_n|} \leq \sum_{n=1}^{\infty} 2^{-n}1.5^{n}\frac{1}{\beta_z} < \infty$</span>. Since <span class="math-container">$m\left(\bigcap_{\epsilon > 0} A_{\epsilon}\right)=0$</span>, almost every <span class="math-container">$x$</span> belongs to this union, so the sum is finite almost everywhere.</p>
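<p>A numerical illustration of the conclusion, using the enumeration of rationals in $(0,1)$ by increasing denominator and the irrational point $x=1/\sqrt2$ (a sketch; the enumeration and sample point are illustrative choices, and the $2^{-n}$ decay makes the partial sums stabilize quickly):</p>

```python
from math import gcd, sqrt

def rationals(count):
    # Enumerate rationals p/q in (0,1) in lowest terms, by increasing q.
    out = []
    q = 2
    while len(out) < count:
        for p in range(1, q):
            if gcd(p, q) == 1:
                out.append(p / q)
                if len(out) == count:
                    break
        q += 1
    return out

x = 1 / sqrt(2)
qs = rationals(60)
partial = []
s = 0.0
for n, r in enumerate(qs, start=1):
    s += 2.0 ** (-n) / abs(x - r)
    partial.append(s)
```

The partial sums stabilize after a few dozen terms, consistent with finiteness at this point.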
|
1,990,865 | <blockquote>
<p>Given set $\{3 - x \mid x < 0\}$, find if it is bounded.</p>
</blockquote>
<p>So, from intuition it should be bounded from below, but how to show it?</p>
| Beans on Toast | 257,517 | <p><a href="https://i.stack.imgur.com/KwOUh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KwOUh.png" alt="Equation of straight line for real x"></a></p>
<p>Proof without words in this plot.</p>
|
4,200,164 | <blockquote>
<p>How do you show that for <span class="math-container">$x \in (0,2 \pi)$</span> the following series converges?
<span class="math-container">$$\sum_{n=1}^\infty \frac{\cos(xn)}{n}$$</span></p>
</blockquote>
<p>Of course, this series doesn't converge absolutely. For <span class="math-container">$x= \pi$</span> you get the convergence with the Leibniz criterion. For other <span class="math-container">$x$</span> in that interval the cosine still takes positive and negative values evenly enough that I expect the series to converge. How can this be proved formally?</p>
| QC_QAOA | 364,346 | <p>We have</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{\cos(nx)}{n}=\Re \left[\sum_{n=1}^\infty \frac{e^{i nx}}{n}\right]$$</span></p>
<p>This then <a href="https://www.wolframalpha.com/input/?i=sum%28exp%28i%20x%20n%29%2Fn%2C%28n%2C1%2Cinfty%29%29" rel="nofollow noreferrer">simplifies</a> to</p>
<p><span class="math-container">$$=\Re\left[-\ln(1-e^{ix})\right]$$</span></p>
<p>You could prove this using the Taylor series for <span class="math-container">$\ln(1-x)$</span>, but this is delicate, since that series is normally defined for <span class="math-container">$|x|<1$</span> and boundary points require more finesse. To get the real part of this we have</p>
<p><span class="math-container">$$1-e^{ix}=1-\cos(x)-i\sin(x)=\sqrt{2-2 \cos (x)}e^{i\phi}$$</span></p>
<p>where <span class="math-container">$\phi=\arg(1-e^{ix})$</span>. Then</p>
<p><span class="math-container">$$\Re\left[-\ln(1-e^{ix})\right]=\Re\left[-\ln\left(\sqrt{2-2 \cos (x)}e^{i\phi}\right)\right]$$</span></p>
<p><span class="math-container">$$=\Re\left[-i\phi-\frac{1}{2}\ln(2-2\cos(x))\right]=-\frac{1}{2}\ln(2-2\cos(x))$$</span></p>
<p>This exists for all <span class="math-container">$x\neq 2\pi k$</span> (<span class="math-container">$k\in\mathbb{Z}$</span>).</p>
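<p>As a sanity check (editorial addition, not part of the original answer; the sample point <code>x = 1.0</code> and the cutoff are arbitrary), the partial sums can be compared numerically against the closed form $-\frac{1}{2}\ln(2-2\cos(x))$:</p>

```python
import math

# Compare partial sums of sum_{n>=1} cos(n x)/n with -(1/2) ln(2 - 2 cos x).
def partial_sum(x, M):
    return sum(math.cos(n * x) / n for n in range(1, M + 1))

def closed_form(x):
    return -0.5 * math.log(2.0 - 2.0 * math.cos(x))

x = 1.0
approx = partial_sum(x, 200_000)
exact = closed_form(x)
```

<p>The agreement improves roughly like $1/M$, consistent with the conditional (not absolute) convergence noted in the question.</p>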
|
4,619,641 | <p>I need to find the image of the function <span class="math-container">$f(z) = z^2$</span> whose domain is <span class="math-container">${z: Re(z) > 0}$</span>. I first let <span class="math-container">$z = x + iy$</span>. Then <span class="math-container">$$w = f(z) = z^2 = (x+iy)^2 = (x^2-y^2) + 2xyi$$</span> Hence <span class="math-container">$u(x,y) = x^2-y^2$</span>, and <span class="math-container">$v(x,y) = 2xy$</span> Then I first consider the boundary, which is when <span class="math-container">$Re(z) = 0$</span>. When <span class="math-container">$Re(z) = 0$</span>, <span class="math-container">$x = 0$</span>. Then <span class="math-container">$z = yi$</span>. Then I plug this back into my original function, and get <span class="math-container">$$f(z) = z^2 = (yi)^2 = -y^2$$</span>, but I'm confused about what to do next, since I need to consider when <span class="math-container">$Re(z) > 0$</span>. Thanks!</p>
| Ben | 650,264 | <p>You can indeed infer a Wick's theorem for spheres from the usual one for Gaussians.</p>
<p>As we know that only an even number of contractions gives a non-zero result, let us consider a <span class="math-container">$2k$</span>-point function for the standard Gaussian in <span class="math-container">$\mathbb R^n$</span>
<span class="math-container">$$
\begin{align*}
\mathbb E_{\mathbb R^n}(x^1_{i_1}\ldots x^{2k}_{i_{2k}}) &= \frac{1}{(2\pi)^{n/2}}\int_{\mathbb R^n} d^n x\, e^{-x\cdot x/2}x^1_{i_1}\ldots x^{2k}_{i_{2k}}\\
& =\frac{1}{(2\pi)^{n/2}}\int_0^\infty dr\, r^{2k+n-1}e^{-r^2/2} \int_{S^{n-1}} d\Omega\, \hat x^1_{i_1}\ldots \hat x^{2k}_{i_{2k}},
\end{align*}
$$</span>
where we wrote out the integral in spherical coordinates and split coordinates into a unit vector and a radius as <span class="math-container">$x_i = r \hat x_i$</span>.</p>
<p>The second integral is almost the expectation value we are after but it needs normalization by the volume of the sphere <span class="math-container">$V_{S^{n-1}} = \frac{2\pi^{n/2}}{\Gamma(n/2)}$</span>. Thus, we find
<span class="math-container">$$
\mathbb E_{\mathbb R^n}(x^1_{i_1}\ldots x^{2k}_{i_{2k}})=f(k,n)\, \mathbb E_{S^{n-1}}(\hat x^1_{i_1}\ldots \hat x^{2k}_{i_{2k}}),\qquad f(k,n)= \frac{V_{S^{n-1}}}{(2\pi)^{n/2}}\int dr\, r^{2k+n-1}e^{-r^2/2},
$$</span>
which we can simplify as
<span class="math-container">$$
f(k,n) = \frac{2^k\Gamma(k+n/2)}{\Gamma(n/2)} = \frac{(2(k-1)+n)!!}{(n-2)!!}.
$$</span></p>
<p>As a check we can find the 2-point function
<span class="math-container">$$
\mathbb{E}_{S^{n-1}}(x_i x_j) = \frac{1}{f(1,n)}\delta_{ij} = \frac{\delta_{ij}}{n},
$$</span>
which is correct.</p>
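<p>The two expressions for <span class="math-container">$f(k,n)$</span> can be cross-checked numerically (editorial addition; the ranges of <code>k</code> and <code>n</code> below are arbitrary):</p>

```python
import math

# Check that 2^k Gamma(k + n/2)/Gamma(n/2) agrees with (2(k-1)+n)!!/(n-2)!!,
# and that k = 1 gives n, matching E[x_i x_i] = 1/n on the unit sphere S^{n-1}.
def f_gamma(k, n):
    return 2.0 ** k * math.gamma(k + n / 2) / math.gamma(n / 2)

def dfact(m):  # double factorial, with 0!! = 1
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

def f_dfact(k, n):
    return dfact(2 * (k - 1) + n) / dfact(n - 2)

checks = [(f_gamma(k, n), f_dfact(k, n))
          for k in range(1, 5) for n in range(2, 7)]
```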
|
3,906,403 | <p>Let <span class="math-container">$u$</span> and <span class="math-container">$v$</span> be two vectors of <span class="math-container">$\Bbb R^n$</span>. We assume that <span class="math-container">$||u|| = ||v||$</span>.</p>
<ul>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are unitary.</p>
</li>
<li><p>Vectors <span class="math-container">$2(u + v)$</span> and <span class="math-container">$2(u-v)$</span> are orthogonal.</p>
</li>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are dependent.</p>
</li>
<li><p>All the other answers are wrong.</p>
</li>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are orthogonal.</p>
</li>
</ul>
<p>I have the find the one which is true.</p>
<p>I think that only "<span class="math-container">$2(u + v)$</span> and <span class="math-container">$2(u-v)$</span> are orthogonal" is true, but it is unclear how I can do that. I know that two vectors <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are orthogonal if and only if <span class="math-container">$||x+y||^2 = ||x||^2 + ||y||^2$</span>.</p>
| Mike | 544,150 | <p>Two vectors <span class="math-container">$y$</span> and <span class="math-container">$z$</span> are orthogonal if <span class="math-container">$y \cdot z$</span> is 0. Now setting <span class="math-container">$y=2(u+v)$</span> and <span class="math-container">$z=2(u-v)$</span>, it follows that <span class="math-container">$y \cdot z = 4||u||^2 - 4||v||^2$</span>. If <span class="math-container">$||u||$</span> and <span class="math-container">$||v||$</span> are equal, it follows that <span class="math-container">$y \cdot z$</span> is 0, and so <span class="math-container">$y=2(u+v)$</span> and <span class="math-container">$z=2(u-v)$</span> are orthogonal.</p>
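<p>A quick numeric illustration (editorial addition; the example vectors are made up, chosen with <span class="math-container">$||u||=||v||=5$</span>):</p>

```python
# If ||u|| = ||v||, then 2(u+v) and 2(u-v) should have zero dot product.
u = [3.0, 0.0, 4.0]   # norm 5
v = [0.0, 5.0, 0.0]   # norm 5

y = [2 * (a + b) for a, b in zip(u, v)]
z = [2 * (a - b) for a, b in zip(u, v)]
dot = sum(a * b for a, b in zip(y, z))
```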
|
3,906,403 | <p>Let <span class="math-container">$u$</span> and <span class="math-container">$v$</span> be two vectors of <span class="math-container">$\Bbb R^n$</span>. We assume that <span class="math-container">$||u|| = ||v||$</span>.</p>
<ul>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are unitary.</p>
</li>
<li><p>Vectors <span class="math-container">$2(u + v)$</span> and <span class="math-container">$2(u-v)$</span> are orthogonal.</p>
</li>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are dependent.</p>
</li>
<li><p>All the other answers are wrong.</p>
</li>
<li><p>The vectors <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are orthogonal.</p>
</li>
</ul>
<p>I have the find the one which is true.</p>
<p>I think that only "<span class="math-container">$2(u + v)$</span> and <span class="math-container">$2(u-v)$</span> are orthogonal" is true, but it is unclear how I can do that. I know that two vectors <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are orthogonal if and only if <span class="math-container">$||x+y||^2 = ||x||^2 + ||y||^2$</span>.</p>
| Souza | 468,313 | <p>A nice way to see that is using the <a href="https://en.wikipedia.org/wiki/Polarization_identity" rel="nofollow noreferrer">Polarization Identity</a>, that is, <span class="math-container">$\langle x,y\rangle=\dfrac14\left(|x+y|^2-|x-y|^2\right)$</span>. Setting <span class="math-container">$x=2(u+v)$</span> and <span class="math-container">$y=2(u-v)$</span>, it follows that
<span class="math-container">$$\langle 2(u+v),2(u-v)\rangle=\dfrac14\left(|4u|^2-|4v|^2\right)=4(|u|^2-|v|^2)=0$$</span></p>
|
2,579,074 | <p>When I try to factor the quadratic form, I end up with </p>
<p>$$6x^2+4y^2+2z^2+4xz-4yz = 2((x+z)^2+2x^2+(y-z)^2-z^2)$$ </p>
<p>which does not ensure that $f(x,y,z) \geq 0$ for all $x, y, z \geq 0$ since the $z$ term is negative. How should these kinds of problems be tackled?</p>
| Allure | 511,061 | <p>There's an error in your computation - the $y^2$ term doesn't match on both sides of the equation.</p>
<p>Having said that, you can still argue that the RHS is positive because $(x+z)^2 \geq z^2$ if $x\geq0$.</p>
|
2,579,074 | <p>When I try to factor the quadratic form, I end up with </p>
<p>$$6x^2+4y^2+2z^2+4xz-4yz = 2((x+z)^2+2x^2+(y-z)^2-z^2)$$ </p>
<p>which does not ensure that $f(x,y,z) \geq 0$ for all $x, y, z \geq 0$ since the $z$ term is negative. How should these kinds of problems be tackled?</p>
| user | 505,767 | <p>As an alternative at completing the square you can look at the <strong>matrix associated to the quadratic form</strong>:</p>
<p>$$x^TAx$$</p>
<p>$$A=\begin{bmatrix}
6 & 0 & 2 \\
0 & 4 & -2 \\
2 & -2 & 2 \\
\end{bmatrix}$$</p>
<p>and note that its <strong>signature</strong> is $(n_0=0,n_+=3,n_-=0)$, thus the quadratic form is <strong>positive definite</strong>.</p>
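<p>A concrete check of this (editorial addition) via Sylvester's criterion: the form is positive definite iff all leading principal minors of $A$ are positive.</p>

```python
# Leading principal minors of the matrix A of the quadratic form.
A = [[6, 0, 2],
     [0, 4, -2],
     [2, -2, 2]]

m1 = A[0][0]
m2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
m3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
      - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
      + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
```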
|
2,579,074 | <p>When I try to factor the quadratic form, I end up with </p>
<p>$$6x^2+4y^2+2z^2+4xz-4yz = 2((x+z)^2+2x^2+(y-z)^2-z^2)$$ </p>
<p>which does not ensure that $f(x,y,z) \geq 0$ for all $x, y, z \geq 0$ since the $z$ term is negative. How should these kinds of problems be tackled?</p>
| Will Jagy | 10,400 | <p>Your quadratic form is positive definite. I do not know what the eigenvalues of the matrix are. This method is the same as "completing the square." The diagonal entries of the diagonal matrix $D$ are all positive.</p>
<p>$$ P^T H P = D $$
$$\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
- \frac{ 1 }{ 3 } & \frac{ 1 }{ 2 } & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
6 & 0 & 2 \\
0 & 4 & - 2 \\
2 & - 2 & 2 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & 0 & - \frac{ 1 }{ 3 } \\
0 & 1 & \frac{ 1 }{ 2 } \\
0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrr}
6 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & \frac{ 1 }{ 3 } \\
\end{array}
\right)
$$
$$ Q^T D Q = H $$
$$\left(
\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
\frac{ 1 }{ 3 } & - \frac{ 1 }{ 2 } & 1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
6 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & \frac{ 1 }{ 3 } \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1 & 0 & \frac{ 1 }{ 3 } \\
0 & 1 & - \frac{ 1 }{ 2 } \\
0 & 0 & 1 \\
\end{array}
\right)
= \left(
\begin{array}{rrr}
6 & 0 & 2 \\
0 & 4 & - 2 \\
2 & - 2 & 2 \\
\end{array}
\right)
$$</p>
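<p>The congruence <span class="math-container">$P^THP=D$</span> above can be verified in exact rational arithmetic (editorial addition):</p>

```python
from fractions import Fraction as F

# Verify P^T H P = D exactly, with P read off from the display above.
H = [[F(6), F(0), F(2)],
     [F(0), F(4), F(-2)],
     [F(2), F(-2), F(2)]]
P = [[F(1), F(0), F(-1, 3)],
     [F(0), F(1), F(1, 2)],
     [F(0), F(0), F(1)]]
D = [[F(6), F(0), F(0)],
     [F(0), F(4), F(0)],
     [F(0), F(0), F(1, 3)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pt = [[P[j][i] for j in range(3)] for i in range(3)]
lhs = matmul(matmul(Pt, H), P)
```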
|
2,880,377 | <p>As the title states, I need to find the limit for $x\left(x + 1 - \frac{1}{\sin(\frac{1}{1+x})}\right)$ as $x \rightarrow \infty$, as part of a larger proof I am working on.</p>
<p>I believe the answer is 0. I think that to start, I can show that $\frac{1}{\sin(\frac{1}{1+x})} \rightarrow x + 1$ for large $x$. By looking at the series expansion for $\sin$, it's clear that $\sin(\frac{1}{1+x})$ approximates $\frac{1}{1+x}$ for large $x$, as the higher-power terms $\left(\frac{1}{1+x}\right)^3, \left(\frac{1}{1+x}\right)^5, \ldots$ vanish faster, but would it be sufficient to state this? Is there not a more rigorous way of showing this to be true?</p>
<p>If my approach is entirely wrong, or there is a more elegant way of reaching the answer, please share.</p>
| user | 505,767 | <p>To avoid Taylor expansion, substituting $y=\frac{1}{1+x}\to 0$ we have that</p>
<p>$$x\left(x + 1 - \frac1{\sin\left(\frac{1}{1+x}\right)}\right)=\frac{1-y}{y}\left(\frac1y - \frac1{\sin y}\right)=$$$$=\frac{1-y}{y}\left(\frac{\sin y -y}{y\sin y}\right)=(1-y)\frac{y}{\sin y}\left(\frac{\sin y -y}{y^3}\right)\to 1 \cdot 1 \cdot \left(-\frac16\right)=-\frac16$$</p>
<p>indeed we have that as $y\to 0$</p>
<p>$$\frac{\sin y -y}{y^3}\to -\frac16$$</p>
<p>refer to <a href="https://math.stackexchange.com/q/387333/505767">Are all limits solvable without L'Hôpital Rule or Series Expansion</a>.</p>
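<p>A numeric sanity check of the value $-\frac16$ (editorial addition; the sample point is arbitrary):</p>

```python
import math

# g(x) = x*(x + 1 - 1/sin(1/(1+x))) should approach -1/6 as x grows.
def g(x):
    return x * (x + 1 - 1.0 / math.sin(1.0 / (1.0 + x)))

val = g(1.0e4)
```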
|
25,412 | <p>This is hopefully a simpler version of this previous unanswered <a href="https://mathematica.stackexchange.com/questions/23496/solving-recursion-relations-using-mathematica">question</a> of mine. </p>
<p>Let me just focus on the two expressions $F_2^{(s)}$ and $F_3^{(s)}$ given in A.3 and A.4 of page 19 of <a href="http://arxiv.org/pdf/1301.7182v2.pdf" rel="nofollow noreferrer">this paper</a>. </p>
<ul>
<li>How do I get Mathematica to just even manipulate such vector expressions? Like if I want to calculate $(F_2^{(s)})^2$ or $F_2^{(s)} F_3^{(s)}$ etc? </li>
</ul>
<hr>
<p>To make the question clear let me add in some more details about what I exactly want, </p>
<p>I define the function F2s as,</p>
<pre><code>F2s[q_, k1_] := (5/
14) + (3 (Norm[k1])^2)/(28 (Norm[q])^2) + (3 Norm[
k1]^2)/(28 (Norm[q - k1])^2) - (5)/(28 (Norm[q])^2 (Norm[
q - k1])^(-2)) - (5)/(28 (Norm[q])^(-2) (Norm[
q - k1])^(2)) + ( (Norm[
k1])^4)/(14 (Norm[q])^2 (Norm[q - k1])^2 )
</code></pre>
<p>But when I ask it to be squared all I get is this!
(basically nothing has been done and the situation doesn't change with taking a FullSimplify either) </p>
<pre><code>(2 Norm[k1]^4 - 5 (Norm[q]^2 - Norm[-k1 + q]^2)^2 + 3 Norm[k1]^2 (Norm[q]^2 + Norm[-k1 + q]^2))^2/(784 Norm[q]^4 Norm[-k1 + q]^4)
</code></pre>
<p>I would have wanted the answer to be given in the way I gave the functions $F2s$ - as a sum of fractions each of which is a product of powers of $q$, $k1$ and $\vert \vec{q} - \vec{k1}\vert$. How do I get that? </p>
| Simon Woods | 862 | <p>If you are using <em>Mathematica</em> version 9, the best approach is probably to use the new symbolic tensor functionality as suggested by zentient.</p>
<p>However for this problem it may be sufficient to explicitly specify a rule to convert expressions like <code>Norm[-q]</code> into <code>Norm[q]</code>:</p>
<pre><code>myform = Expand[# /. Norm[-x_ + y_.] :> Norm[x - y]] &;
(F2s[q, k1]*F2s[-q, k2]) // myform // Short // TraditionalForm
</code></pre>
<p><img src="https://i.stack.imgur.com/uUf1C.png" alt="enter image description here"></p>
|
322,598 | <p><a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="noreferrer">Partially ordered sets</a> (<em>posets</em>) are important objects in combinatorics (with <a href="https://gilkalai.wordpress.com/2019/02/05/extremal-combinatorics-v-posets/" rel="noreferrer">basic connections to extremal combinatorics</a> and to algebraic combinatorics) and also in other areas of mathematics. They are also related to <em>sorting</em> and to other questions in the theory of computing. I am asking for a list of open questions and conjectures about posets.</p>
| Sam Hopkins | 25,028 | <p>Let <span class="math-container">$P$</span> be a (not necessarily finite) poset such that each element covers and is covered by a finite number of elements. Then we can define the operators <span class="math-container">$U,D\colon \mathbb{Q}P\to\mathbb{Q}P$</span> on the vector space of formal linear combinations of elements of <span class="math-container">$P$</span> by <span class="math-container">$U(p) = \sum_{p\lessdot q} q$</span> and <span class="math-container">$D(p) = \sum_{q\lessdot p}q$</span>. Such a poset <span class="math-container">$P$</span> is called <em><span class="math-container">$r$</span>-differential</em> if it is locally finite, graded, with a unique minimal element, and these up and down operators satisfy <span class="math-container">$DU-UD=rI$</span> (where <span class="math-container">$I$</span> is the identity map). See <a href="https://en.wikipedia.org/wiki/Differential_poset" rel="noreferrer">https://en.wikipedia.org/wiki/Differential_poset</a>.</p>
<p>The two prominent examples of <span class="math-container">$1$</span>-differential posets are <a href="https://en.wikipedia.org/wiki/Young%27s_lattice" rel="noreferrer">Young's lattice</a> and the <a href="https://en.wikipedia.org/wiki/Young%E2%80%93Fibonacci_lattice" rel="noreferrer">Young-Fibonacci lattice</a>. It is known that these are the only <span class="math-container">$1$</span>-differential lattices, although this was only proved relatively recently by Byrnes (<a href="https://conservancy.umn.edu/handle/11299/142992" rel="noreferrer">https://conservancy.umn.edu/handle/11299/142992</a>). The open problem is: <strong>are all <span class="math-container">$r$</span>-differential lattices products of Young's lattice and the Young-Fibonacci lattice</strong>?</p>
|
322,598 | <p><a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="noreferrer">Partially ordered sets</a> (<em>posets</em>) are important objects in combinatorics (with <a href="https://gilkalai.wordpress.com/2019/02/05/extremal-combinatorics-v-posets/" rel="noreferrer">basic connections to extremal combinatorics</a> and to algebraic combinatorics) and also in other areas of mathematics. They are also related to <em>sorting</em> and to other questions in the theory of computing. I am asking for a list of open questions and conjectures about posets.</p>
| Tri | 51,389 | <p>Find a nice expression for the number of down-sets (order ideals) of a product of four finite chains (totally ordered sets), <span class="math-container">$|\bf2^{(\bf k\times \bf m\times \bf n\times \bf r)}|$</span>, where <span class="math-container">$k,m,n,r$</span> are positive integers and "<span class="math-container">$\bf k$</span>" denotes the <span class="math-container">$k$</span>-element chain, <span class="math-container">$\{0,1,\dots,k-1\}$</span>.</p>
<p>Richard P. Stanley writes (on page 83 of <em>Ordered Structures and Partitions</em>), "Nothing significant seems to be known in general about" this quantity.</p>
<p><a href="http://www-math.mit.edu/~rstan/pubs/pubfiles/9.pdf#page=87" rel="nofollow noreferrer">http://www-math.mit.edu/~rstan/pubs/pubfiles/9.pdf#page=87</a></p>
<p>The MacMahon formula for <span class="math-container">$|\bf2^{(\bf k\times \bf m\times \bf n)}|$</span> is <span class="math-container">$\prod_{h=0}^{k-1}\prod_{i=0}^{m-1}\prod_{j=0}^{n-1}\frac{h+i+j+2}{h+i+j+1}$</span>.</p>
<p>I'd be willing to accept a nested summation.</p>
<p>See page 2 of James Propp, "Generating Random Elements of Finite Distributive Lattices," <em>Electronic Journal of Combinatorics</em> <strong>4</strong> (1997), R15.
<a href="https://www.combinatorics.org/ojs/index.php/eljc/article/view/v4i2r15/pdf#page=2" rel="nofollow noreferrer">https://www.combinatorics.org/ojs/index.php/eljc/article/view/v4i2r15/pdf#page=2</a></p>
<p>See Theorem 3.3 on page 114 of Joel Berman and Peter Koehler, "Cardinalities of Finite Distributive Lattices," <em>Mitteilungen aus dem Mathem. Seminar Giessen</em> <strong>121</strong> (1976), 103-124.
<a href="http://homepages.math.uic.edu/~berman/freedist.pdf#page=7" rel="nofollow noreferrer">http://homepages.math.uic.edu/~berman/freedist.pdf#page=7</a></p>
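<p>For small cases MacMahon's product can be checked against a brute-force count of down-sets (editorial addition). For <span class="math-container">$k=m=n=2$</span> the poset <span class="math-container">$\bf2\times\bf2\times\bf2$</span> is the Boolean lattice on three atoms, which has <span class="math-container">$20$</span> down-sets:</p>

```python
from itertools import product

# Count down-sets of the poset 2 x 2 x 2 by brute force and compare with
# MacMahon's product formula for k = m = n = 2.
points = list(product(range(2), repeat=3))

def is_down_set(S):
    return all(q in S
               for p in S
               for q in points
               if all(qi <= pi for qi, pi in zip(q, p)))

count = sum(1 for bits in product([0, 1], repeat=len(points))
            if is_down_set({p for p, b in zip(points, bits) if b}))

macmahon = 1.0
for h in range(2):
    for i in range(2):
        for j in range(2):
            macmahon *= (h + i + j + 2) / (h + i + j + 1)
```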
|
1,363,407 | <p>If $x,y,z$ are elements of a group such that $xyz=1,$ then which of the following are true?</p>
<ol>
<li>$yzx=1$</li>
<li>$yxz=1$</li>
<li>$zxy=1$</li>
<li>$zyx=1$</li>
</ol>
<p>I have found options 1 and 3 to be correct, but how to prove that options 2 and 4 are wrong (that is, what the given answers say)?</p>
| Asinomás | 33,907 | <p>$1$ is always true. We are told $x$ is the inverse of $yz$ and so $yzx=1$. </p>
<p>Analogously, $3$ is always true. We are told $xy$ is the inverse of $z$ and so $zxy=1$.</p>
<hr>
<p>The others are not true, for $2$ notice if $xy\neq yx$ then they can't have the same inverse. (examples of $xy\neq yx$ exist in any non-abelian group)</p>
<p>$4$ is slightly trickier, it combines the ideas of $1$ and $2$.</p>
<p>You can solve it as follows: If $zyx=1$ then $yxz=1$ (since $z$ and $yx$ would be inverses). Notice this is problem $2$.</p>
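<p>A concrete check in the smallest non-abelian group, $S_3$ (editorial addition; the particular transpositions are an arbitrary choice):</p>

```python
# Permutations as tuples: p maps i -> p[i]; compose(p, q) means "p after q".
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2)
x = (1, 0, 2)                 # transposition (0 1)
y = (0, 2, 1)                 # transposition (1 2)
z = inverse(compose(x, y))    # chosen so that x y z = e

xyz = compose(compose(x, y), z)
yzx = compose(compose(y, z), x)
zxy = compose(compose(z, x), y)
yxz = compose(compose(y, x), z)
zyx = compose(compose(z, y), x)
```

<p>As expected, the cyclic rearrangements give the identity while the other two do not.</p>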
|
313,508 | <p>If anyone is familiar with Horowitz and Hill... it's exercise 1.19</p>
<p>Show that all the average power delivered to the preceding circuit winds up in the resistor. Do this by computing the value of $V^2/R$. What is the power, in watts for a series circuit of a 1$\mu F$ capacitor and a 1k resistor placed across the 110 Volt RMS, 60 Hz powerline?</p>
<p>The circuit in question is an AC supply with a cap and resistor in series. Simple.</p>
<p>I've been pouring myself into this complex algebra problem and have made no progress over several hours. I'm desperate to understand what they want but cannot get there without some help.</p>
| copper.hat | 27,978 | <p>(I am not sure what part is throwing you, please ask for elaboration in the comments and I will do my best to explain.)</p>
<p>The usual convention is to use $j = \sqrt{-1}$ to avoid confusion with the symbol for current, $i$.</p>
<p>If the supply voltage is $v(t) = \sqrt{2} \operatorname{Re} V_{\text{sup}} e^{j \omega t}$ and the steady state current is $i(t) = \sqrt{2} \operatorname{Re} I e^{j \omega t}$, then the instantaneous power delivered to the circuit is (as is usual convention, $V_{\text{sup}}, I \in \mathbb{C}$ are RMS quantities, also note that $\operatorname{Re} Z = \frac{1}{2}(Z+\overline{Z})$):
\begin{eqnarray}
p(t) &=& v(t)i(t) \\
&=& 2(\operatorname{Re} V_{\text{sup}} e^{j \omega t})(\operatorname{Re} I e^{j \omega t}) \\
&=& 2 \frac{1}{2}(V_{\text{sup}} e^{j \omega t}+ \overline{V_{\text{sup}}} e^{-j \omega t}) \frac{1}{2}(I e^{j \omega t}+ \overline{I} e^{-j \omega t}) \\
&=& \frac{1}{2}( V_{\text{sup}} I e^{j 2 \omega t} +
\overline{V_{\text{sup}}} I +
V_{\text{sup}} \overline{I} +
\overline{V_{\text{sup}}} \overline{I} e^{-j 2 \omega t}) \\
&=& (\operatorname{Re} V_{\text{sup}} I e^{j 2 \omega t}) + (\operatorname{Re} \overline{V_{\text{sup}}} I)
\end{eqnarray}
from which we see that the average power (integrating $t \mapsto e^{j 2 \omega t}$ over a cycle results in zero) is given by $P = \operatorname{Re} \overline{V_{\text{sup}}} I$.</p>
<p>Since the circuit is linear (lumped, time invariant, etc.), the voltage and current are related by $V_{\text{sup}} = I Z(j\omega)$.
In this case, $Z(j\omega) = R + \frac{1}{j \omega C}$, so we obtain the average power delivered to the circuit is
\begin{eqnarray}
P &=& \operatorname{Re} \overline{V_{\text{sup}}} I \\
&=& \operatorname{Re} \frac{|V_{\text{sup}}|^2 }{Z(j\omega)} \\
&=& |V_{\text{sup}}|^2 \operatorname{Re}( \frac{1}{R} \frac{(\omega R C)^2 + j \omega R C}{1+(\omega R C)^2}) \\
&=& \frac{|V_{\text{sup}}|^2}{R} \frac{(\omega R C)^2}{1+(\omega R C)^2}
\end{eqnarray}</p>
<p>Now consider the average power delivered to the resistor: $P_R = \operatorname{Re} \overline{V_{\text{R}}} I$. Since the capacitor and resistor form a voltage divider, we have $V_{\text{R}} = V_{\text{sup}} \frac{R}{Z(j \omega)}$, from which we get $P_R = \operatorname{Re} \overline{V_{\text{sup}} \frac{R}{Z(j \omega)}} \frac{V_{\text{sup}}}{Z(j \omega)} = R \frac{|V_{\text{sup}}|^2}{|Z(j \omega)|^2} = P $. Hence the average power delivered to the circuit is the same as the average power delivered to the resistor.</p>
<p>For the computational part, we use $P= \frac{|V_{\text{sup}}|^2}{R} \frac{(\omega R C)^2}{1+(\omega R C)^2} = \frac{|V_{\text{sup}}|^2}{R} \frac{1}{1+\frac{1}{(\omega R C)^2}}$, with $|V_{\text{sup}}| = 110$, $\omega = 120 \pi$, $C=1 \mu F$, $R=1k$ to get $P \approx 1.5057 W$.</p>
<p>(Contrast this with the $> 12 W$ power that would be delivered if the capacitor was removed.)</p>
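<p>The final numerical evaluation can be reproduced in a few lines (editorial addition):</p>

```python
import math

# P = |V|^2/R * 1/(1 + 1/(w R C)^2), with V = 110 V RMS, f = 60 Hz,
# R = 1 kOhm, C = 1 uF.
V, R, C = 110.0, 1.0e3, 1.0e-6
w = 2.0 * math.pi * 60.0
P = V ** 2 / R / (1.0 + 1.0 / (w * R * C) ** 2)
```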
|
367,638 | <p>Reading over a book on computability, it asserts that in P.C., if A is a theorem, then A has arbitrarily many proofs. I can't see how that would work; would you run an infinite loop in the sequence of well-formed formulae?</p>
| Doug Spoonwood | 11,300 | <p>There exist several ways to indicate this. I'll consider classical logical systems which only have detachment and universal substitution as primitive inference rules. I use Polish notation. Classical logic has CpCqp as a theorem. So, if "a" is a theorem, and "b" is a theorem we can prove "a" as follows:</p>
<p>Substitute p with a, q with b in CpCqp, abbreviated CpCqp p/a, q/b. The "*" symbol separates the left hand side from the right hand side.</p>
<p>1 CaCba </p>
<pre><code>1*Ca-2
</code></pre>
<p>This says that formula 1 comes as equiform to CaCba, with "a" as the antecedent, and 2 as the consequent, and we will detach 2. </p>
<p>2 Cba</p>
<pre><code>2*Cb-3
</code></pre>
<p>3 a</p>
<p>Since b comes as arbitrary, and classical logic has an arbitrary "number" of theorems, it follows that we can prove "a" in an arbitrary "number" of ways.</p>
<p>We also have CNNpp and CpNNp as theorems. So, if "a" is a theorem, then we can obtain NNa as a theorem, NNNNa as a theorem, and so on. Then we can prove "a" by using CNNpp and making appropriate substitutions.</p>
|
1,598,255 | <p>Consider the system
$$
\dot{x}=y,\qquad\dot{y}=x+x^2-y.
$$
It has two equilibria, namely $(0,0)$ and $(-1,0)$.</p>
<p>I would like to linearize the system in both equilibria.</p>
<p>My start is to set $\Delta x=x-x_0,\qquad \Delta y=y-y_0$. Then
$$
\dot{\Delta x}=\dot{x}-\dot{x_0}=y-y_0=\Delta y
$$
and
$$
\dot{\Delta y}=\dot{y}-\dot{y_0}=\Delta x-\Delta y+x^2-x_0^2.
$$</p>
<p>How can I get rid of the summand $x^2-x_0^2$?</p>
<p>Can I approximate the function $f(x)=x^2$ by Taylor, getting $x^2-x_0^2\approx 2x_0\Delta x$?</p>
<p>I then would get the linearization matrices
$$
\begin{pmatrix}0 & 1\\ 1 & -1\end{pmatrix}\text{ for }(0,0)
$$
and, similarly,
$$
\begin{pmatrix}0 & 1\\-1 & -1\end{pmatrix}\text{ for }(-1,0).
$$</p>
| Daniel Robert-Nicoud | 60,713 | <p>I will do the general case.</p>
<p>Let $F:\mathbb{R}^n\to\mathbb{R}^n$ be a smooth map. Consider the differential equation on $\mathbb{R}^n$ given by
$$\dot{z} = F(z).$$
An <em>equilibrium point</em> is a point $z_0\in\mathbb{R}^n$ where $F(z_0)=0$. This implies that $z(t)=z_0$ is a solution of the differential equation. The linearization of the differential equation at any point is given by taking the Taylor series of $F$ in the right hand side and cutting it off after the linear term. If you are at an equilibrium point, then the constant term is zero and you get
$$\dot{z}=D_{z_0}F(z).$$
This gives a good approximation for the behavior of solutions of the original differential equation near an equilibrium point.</p>
<p>In your special case, we have
$$F\pmatrix{x\\y} = \pmatrix{y\\x^2+x-y}.$$
Therefore, the matrices you obtain are
$$D_{(0,0)}F = \pmatrix{0&1\\1&-1},\qquad D_{(-1,0)}F=\pmatrix{0&1\\-1&-1},$$
which agrees with your solution.</p>
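<p>The two matrices can be double-checked by central finite differences (editorial addition; the step size is an arbitrary small value):</p>

```python
# Finite-difference Jacobian of F(x, y) = (y, x + x^2 - y) at the equilibria.
def F(x, y):
    return (y, x + x * x - y)

def jacobian(x0, y0, h=1e-6):
    fxp, fxm = F(x0 + h, y0), F(x0 - h, y0)
    fyp, fym = F(x0, y0 + h), F(x0, y0 - h)
    return [[(fxp[0] - fxm[0]) / (2 * h), (fyp[0] - fym[0]) / (2 * h)],
            [(fxp[1] - fxm[1]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)]]

J0 = jacobian(0.0, 0.0)    # expect [[0, 1], [1, -1]]
J1 = jacobian(-1.0, 0.0)   # expect [[0, 1], [-1, -1]]
```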
|
11,299 | <p>How to write this small piece in a functional way (i.e. without state variables)?:</p>
<pre><code>test[oldJ_List, newJ_List] := Total[Abs[oldJ - newJ]] > 1;
relax[j_List, x_?NumericQ] := Mean[Nearest[j, x, 4]];
j = Range[100]; (* any numeric list *)
j1 = j/2; (*some initial value for the While[] test to return True*)
While[test[j1, j],
j1 = j;
(j[[#]] = relax[j, j[[#]]]) & /@ Range@Length@j]
</code></pre>
| kglr | 125 | <p>This also works:</p>
<pre><code>fold = Function[{lst},Fold[(ReplacePart[#1, #2 ->relax[#1, #1[[#2]]]]) &,
lst, Range@Length@lst]];
fxpnt = FixedPoint[fold, #, SameTest -> (Not[test[#1, #2]] &)] &;
fxpnt@j
</code></pre>
|
221,137 | <p>What's the difference between Fourier transformations and Fourier Series? </p>
<p>Are they the same, where a transformation is just used when its applied (i.e. not used in pure mathematics)?</p>
| Community | -1 | <p>If you have a locally compact Abelian group <span class="math-container">$G$</span> you can define a group called the <a href="https://en.wikipedia.org/wiki/Pontryagin_duality" rel="nofollow noreferrer">Pontryagin dual group</a> - <span class="math-container">$\widehat{G}$</span>. You can define a <a href="https://en.wikipedia.org/wiki/Haar_measure" rel="nofollow noreferrer">Haar measure</a> on <span class="math-container">$G$</span>, <span class="math-container">$\mu$</span>. We can define the Fourier transform of a function <span class="math-container">$f\in L^1(G)$</span>:</p>
<p><span class="math-container">$$\widehat f(\chi)=\int_Gf(x)\overline{\chi(x)}d\mu(x)$$</span></p>
<p><span class="math-container">$\widehat f(\chi)$</span> is a bounded continuous function that vanishes at infinity on <span class="math-container">$\widehat{G}$</span>.</p>
<p>If <span class="math-container">$G=\Bbb R$</span> then <span class="math-container">$\widehat{G}=\Bbb R$</span> and we have the regular Fourier transform.</p>
<p>If <span class="math-container">$G=S^1$</span> then <span class="math-container">$\widehat{G}=\Bbb Z$</span> and we have the Fourier series (an example of a Fourier transform).</p>
|
1,511,753 | <p>Using the concept of self-similarity, it's possible to encode the decimal expansion of a number as a sort of 'fractal' object. For instance, consider the sequence,</p>
<p>$$(1) \quad C_0=0.1, \ C_1=0.101, \ C_2=0.101000101, \ C_3=0.101000101000000000101000101,\ldots,C_n$$</p>
<p>The astute reader will notice this is analogous to the construction of the <a href="https://en.wikipedia.org/wiki/Cantor_set" rel="nofollow">Cantor Set</a>. The number, I'd assume, is irrational. However, there is a fairly simple way to construct the number, and thus find its decimal expansion. In fact, the number $C_n$ satisfies,</p>
<p>$$(2) \quad C_{n+1}=C_n+C_n \cdot 10^{-2 \cdot 3^{n}}$$</p>
<blockquote>
<p>Do similar methods exist for other reals such as $\sqrt{2}$ or $\pi$? If they do, how are these methods developed?</p>
</blockquote>
| fleablood | 280,126 | <p>It depends on the real number, but most "normal" irrational numbers will not have such constructions. This is intuitive, as any such method would be finitely expressed and there would only be countably many of them. $\pi$ and $\sqrt 2$ have several series expansions that allow us to calculate the decimal expansions. But your "arbitrary" real number will not. We do not even have any method to describe an arbitrary real number.</p>
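<p>Recursion (2) in the question is easy to verify with exact rational arithmetic (editorial addition):</p>

```python
from fractions import Fraction as F

# C_{n+1} = C_n * (1 + 10^(-2*3^n)) should reproduce the digit patterns in (1).
C = [F(1, 10)]   # C_0 = 0.1
for n in range(3):
    C.append(C[-1] * (1 + F(1, 10 ** (2 * 3 ** n))))
```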
|
3,691,747 | <p>Consider a random sample of 11 letters chosen from the alphabet with replacement. What is the probability that the letters can be arranged to spell ‘mississippi’?</p>
<p>For this question, the number of ways to choose 11 letters would be (26)^(11).
To spell 'mississippi', we need:
1 M, 4 I's, 4 S's and 2 P's.</p>
<p>Hence, there are 11! ways of getting these specific letters.
Hence, the probability of spelling 'mississippi' would be: </p>
<p>[11!] divided by [(26)^(11)].</p>
<p>Is this correct? Any alternative way of thinking about this?</p>
| twosigma | 780,083 | <p>Your denominator of <span class="math-container">$26^{11}$</span> is correct, but your numerator should use the multinomial coefficient <span class="math-container">$\displaystyle \binom{11}{1,4,4,2} = \frac{11!}{1!4!4!2!}$</span>.</p>
<p>Why? Well, the multinomial coefficient <span class="math-container">$\displaystyle \binom{n}{n_1, n_2, ..., n_k}$</span> tells us how many ways there are to put <span class="math-container">$n$</span> objects into <span class="math-container">$k$</span> groups of sizes <span class="math-container">$n_1, n_2, ..., n_k$</span>, where the sizes add up to <span class="math-container">$n$</span>. For example, we might have <span class="math-container">$10$</span> different objects and we want to figure out how many ways there are to distribute them into <span class="math-container">$3$</span> bins of sizes <span class="math-container">$3, 5, 2$</span>. Then the formula tells us there are <span class="math-container">$\displaystyle \frac{10!}{3!5!2!} = 2,520$</span> ways.</p>
<p>With the word "MISSISSIPPI", you can think of there being <span class="math-container">$11$</span> spots, and we have to assign each of these spots to a letter -- one of M, I, S or P. There is 1 M, 4 I's, 4 S's, and 2 P's. So, we can think of this question as asking how many ways are there to take <span class="math-container">$11$</span> objects and putting them into four groups of sizes 1, 4, 4 and 2. This is simply the multinomial coefficient <span class="math-container">$\displaystyle \binom{11}{1,4,4,2} = \frac{11!}{1!4!4!2!} = 34,650$</span>.</p>
<p>Edit: Alternatively, another way to think about it is, as another answer pointed out, as a permutation with repetitions. You have to divide by those factorials so as to remove repeats.</p>
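<p>The corrected probability is tiny but easy to compute exactly (editorial addition):</p>

```python
from math import factorial

# Probability that 11 uniformly random letters can be arranged to spell MISSISSIPPI.
multinomial = factorial(11) // (factorial(1) * factorial(4) * factorial(4) * factorial(2))
prob = multinomial / 26 ** 11
```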
|
1,378,769 | <p>The equation is given by</p>
<p>$$ \sum_{n=1}^N \min(\gamma, \beta a_n)=N$$
where $\beta$ is the variable with $\beta\in[0,\sqrt\gamma/\min(a_n\mid a_n>0)]$, $ \gamma $ is a constant with $1\le\gamma\le N$, and the $a_n$ are nonnegative constants.</p>

<p>I know that the function $f(\beta)=\sum_{n=1}^N \min(\gamma, \beta a_n)-N$ is strictly increasing. Also, $f(0)<0 $ and $ f(\sqrt\gamma/\min(a_n\mid a_n > 0 )) > 0$. Thus, there must exist a unique solution to the above equation. How can I find the solution numerically, e.g. using MATLAB? Increasing $ \beta$ slowly works, but it is not accurate. </p>
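<p>Since $f$ is monotone and changes sign on the stated bracket, plain bisection converges to machine precision. A sketch in Python (the data vector <code>a</code> below is made up purely for illustration):</p>

```python
import numpy as np

def solve_beta(a, gamma, tol=1e-12):
    # Bisection for the unique root of f(beta) = sum_n min(gamma, beta*a_n) - N.
    # f is nondecreasing in beta, so bisection converges once the root is bracketed.
    a = np.asarray(a, dtype=float)
    N = len(a)
    f = lambda b: np.minimum(gamma, b * a).sum() - N
    lo, hi = 0.0, 1.0
    while f(hi) < 0:          # grow the upper end until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Made-up example with N = 5 and gamma = 2; the exact root is beta = 5/8.
a = [0.5, 1.0, 1.5, 2.0, 3.0]
print(solve_beta(a, gamma=2.0))  # ≈ 0.625
```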
| Steven Alexis Gregory | 75,410 | <p>A sequence is a function $f:\mathbb Z^+ \to \mathbf S$.</p>
<p>$f:\mathbb Z^+ \to \mathbf S$ is a special case of a relation
$f \subset \mathbb Z^+ \times \mathbf S.$</p>
<p>The cartesian product $\mathbb Z^+ \times \mathbf S$ is the set
$\{(x,y): x \in \mathbb Z^+$ and $y \in \mathbf S \}$</p>
<p>I'm not sure what you mean by "basic".</p>
<p>It seems that you are trying to use the category theory definition of direct product, which is a "generalization" of cartesian products but is not the same thing. </p>
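<p>A toy illustration of "a sequence is a functional subset of the cartesian product", restricted to a finite window of $\mathbb Z^+$ purely for demonstration:</p>

```python
from itertools import product

S = {'a', 'b'}
window = range(1, 6)                     # a finite window of Z^+

pairs = set(product(window, S))          # the (restricted) cartesian product Z^+ x S
f = {(n, 'a' if n % 2 else 'b') for n in window}  # a sequence f: Z^+ -> S

# f is a relation, i.e. a subset of the product, with exactly one pair per index.
print(f <= pairs, len(f) == len(window))  # True True
```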
|
2,581,593 | <p>I am trying to compute the Hessian of a linear MSE (mean squared error) function using index notation. I would be glad if you could check my result and tell me whether the way I use the index notation is correct.</p>
<p>The linear MSE:
$$L(w) = \frac{1}{2N} e^T e$$ where $e=(y-Xw)$,</p>
<p>$y \in \mathbb{R}^{N\times 1}$ (vector)</p>

<p>$X \in \mathbb{R}^{N\times D}$ (matrix) </p>

<p>$w \in \mathbb{R}^{D\times 1}$ (vector) </p>
<p>Now the aim is to calculate the Hessian: $\frac{\partial^2 L(w)}{\partial w\,\partial w^T}$</p>
<p>I proceed as follows:</p>
<p>$\frac{\partial L(w)}{\partial w_i w_j}=\frac{1}{\partial w_i \partial w_j} [\frac{1}{2N}(y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}\frac{1}{\partial w_j} [\frac{1}{2N}(y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{2N}\frac{1}{\partial w_j} (y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j)\frac{1}{\partial w_j} (y_i-x_{ij} w_j)]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j)\frac{-x_{ij} w_j}{\partial w_j}]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j) (-x_{ij})]$</p>
<p>$=\frac{1}{N}\frac{1}{\partial w_i}[(y_i-x_{ij} w_j) (-x_{ij})]$</p>
<p>$=\frac{1}{N}\frac{-x_{ij} w_j}{\partial w_i}(-x_{ij})]$</p>
<p>$=\frac{1}{N}(-x_{ij}\delta_{ji})(-x_{ij})]$</p>
<p>$=\frac{1}{N}(-x_{ji})(-x_{ij})]$</p>
<p>If I now convert it back to matrix notation the result would be:</p>
<p>$$\frac{\partial^2 L(w)}{\partial w\,\partial w^T} = \frac{1}{N} X^T X $$</p>
<p><strong>Is my use of the index notation correct?</strong></p>
| frank | 506,630 | <p>For ease of typing, I'll represent the differential operator $\frac{\partial}{\partial w_k}$ by $d_k$</p>
<p>The known relationships are
$$\eqalign{
e_i &= X_{ij}w_j - y_i \cr
d_ke_i &= X_{ij}\,d_kw_j =X_{ij}\,\delta_{jk} = X_{ik} \cr
}$$
Use this to find the derivatives of the objective function
$$\eqalign{
L &= \frac{1}{2N} e_ie_i \cr
d_kL &= \frac{1}{N} e_i\,d_ke_i = \frac{1}{N} e_iX_{ik} \cr
d_md_kL &= \frac{1}{N} X_{ik}\,d_me_i = \frac{1}{N} X_{ik}X_{im} \cr
}$$</p>
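<p>This closed form $\frac{1}{N} X_{ik}X_{im}$, i.e. $\frac1N X^TX$ in matrix notation, can be sanity-checked against a finite-difference Hessian; the random data below is purely illustrative:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 3
X = rng.normal(size=(N, D))
y = rng.normal(size=N)

L = lambda w: 0.5 / N * np.sum((y - X @ w) ** 2)

# Second-order finite difference; exact for a quadratic up to rounding error.
w0, h = rng.normal(size=D), 1e-4
H = np.empty((D, D))
for j in range(D):
    for k in range(D):
        ej, ek = np.eye(D)[j] * h, np.eye(D)[k] * h
        H[j, k] = (L(w0 + ej + ek) - L(w0 + ej) - L(w0 + ek) + L(w0)) / h**2

print(np.allclose(H, X.T @ X / N, atol=1e-4))  # True
```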
|
2,581,593 | <p>I am trying to compute the Hessian of a linear MSE (mean squared error) function using index notation. I would be glad if you could check my result and tell me whether the way I use the index notation is correct.</p>
<p>The linear MSE:
$$L(w) = \frac{1}{2N} e^T e$$ where $e=(y-Xw)$,</p>
<p>$y \in \mathbb{R}^{N\times 1}$ (vector)</p>

<p>$X \in \mathbb{R}^{N\times D}$ (matrix) </p>

<p>$w \in \mathbb{R}^{D\times 1}$ (vector) </p>
<p>Now the aim is to calculate the Hessian: $\frac{\partial^2 L(w)}{\partial w\,\partial w^T}$</p>
<p>I proceed as follows:</p>
<p>$\frac{\partial L(w)}{\partial w_i w_j}=\frac{1}{\partial w_i \partial w_j} [\frac{1}{2N}(y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}\frac{1}{\partial w_j} [\frac{1}{2N}(y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{2N}\frac{1}{\partial w_j} (y_i-x_{ij} w_j)^2]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j)\frac{1}{\partial w_j} (y_i-x_{ij} w_j)]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j)\frac{-x_{ij} w_j}{\partial w_j}]$</p>
<p>$=\frac{1}{\partial w_i}[\frac{1}{N}(y_i-x_{ij} w_j) (-x_{ij})]$</p>
<p>$=\frac{1}{N}\frac{1}{\partial w_i}[(y_i-x_{ij} w_j) (-x_{ij})]$</p>
<p>$=\frac{1}{N}\frac{-x_{ij} w_j}{\partial w_i}(-x_{ij})]$</p>
<p>$=\frac{1}{N}(-x_{ij}\delta_{ji})(-x_{ij})]$</p>
<p>$=\frac{1}{N}(-x_{ji})(-x_{ij})]$</p>
<p>If I now convert it back to matrix notation the result would be:</p>
<p>$$\frac{\partial^2 L(w)}{\partial w\,\partial w^T} = \frac{1}{N} X^T X $$</p>
<p><strong>Is my use of the index notation correct?</strong></p>
| V. Vancak | 230,329 | <p>Matrix notations:
$$
\frac{\partial}{\partial w} (Y - Xw)'(Y-Xw) = -2X'(Y-Xw).
$$
Using indices, you are taking the derivative of the sum of squares w.r.t. each of the $w_j$, i.e.,
$$
\frac{\partial}{\partial w_j} \sum_{i=1}^N\Big(y_i - \sum_{k=1}^D x_{ik} w_k\Big)^2 = -2 \sum_{i=1}^N\Big(y_i - \sum_{k=1}^D x_{ik} w_k\Big)x_{ij}.
$$
Back to the matrix notation for the second derivative (the Hessian matrix),
$$
\frac{\partial^2}{\partial w\,\partial w'} (Y - Xw)'(Y-Xw) = \frac{\partial}{\partial w'} \big(-2X'(Y-Xw)\big) = 2X'X.
$$
Using index notations, you are taking the derivative w.r.t. each $w_k$, $k=1,\dots,D$, of each of the aforementioned $D$ equations, i.e., on the diagonal,
$$
\frac{\partial^2}{\partial w_j^2} \sum_{i=1}^N\Big(y_i - \sum_{k=1}^D x_{ik} w_k\Big)^2 = \frac{\partial}{\partial w_j}\Big(-2 \sum_{i=1}^N\Big(y_i - \sum_{k=1}^D x_{ik} w_k\Big)x_{ij}\Big) = 2\sum_{i=1}^N x_{ij}^2,
$$
and for the cross terms,
$$
\frac{\partial^2}{\partial w_j \partial w_k} \sum_{i=1}^N\Big(y_i - \sum_{l=1}^D x_{il} w_l\Big)^2 = \frac{\partial}{\partial w_k}\Big(-2 \sum_{i=1}^N\Big(y_i - \sum_{l=1}^D x_{il} w_l\Big)x_{ij}\Big) = 2\sum_{i=1}^N x_{ij}x_{ik}.
$$
The last expression is the $jk$-th (and the $kj$-th) entry of $2X'X$ for $j\neq k$, and the equation before it gives the entries on the main diagonal of $2X'X$.</p>
|
313,597 | <p>Let $V$ be a finite-dimensional vector space with an ordered base $\beta$ over a field $F$.</p>
<p>Let $x\in V$ and let $[x]_{\beta}$ be the coordinate vector of $x$ relative to $\beta$.</p>
<p>Is this an element of $M_{n\times 1}(F)$ or $F^n$?</p>
<p>Since we usually treat them as matrices, I think it should be defined as a matrix, so that $[x]_{\beta} \in M_{n\times 1}(F)$. However, the text I'm studying states that it is an element of $F^n$. (Note that $F^n\neq M_{n\times 1}(F)=F^{n\times 1}$.)</p>

<p>The reason why I'm distinguishing these two, even though they carry the same information, is to define matrix multiplication formally. For example, an element of $F^n$ is sometimes treated as a column vector and sometimes as a row vector. If these two are not distinguished, then multiplication cannot be well defined.</p>

<p>Am I correct? If I am wrong, or if what I'm worrying about doesn't matter, please explain why. Thank you in advance.</p>
| Community | -1 | <p>The purpose of a coordinate vector is so that you can turn problems into matrix algebra; $M_{n \times 1}(F)$ is the best choice for what a coordinate vector <em>should</em> be if you're into such fine detail.</p>
<p>But it really is a <em>fine</em> detail. Part of the power of linear and multi-linear algebra is how smoothly you can shift between many different interpretations of an object. e.g. an element of $M_{n \times n}(F)$ can be thought of as a linear transformation on $M_{n \times 1}(F)$, a linear transformation on $M_{1 \times n}(F)$, a linear transformation on any $M_{n \times m}(F)$, a linear transformation on $V = F^n$, a linear transformation on $V^*$, an element of $(V^*)^n$, an element of $(V^n)^*$, an element of $M_{2 \times 2}(M_{n/2 \times n/2}(F))$, ..., a full rank matrix in $M_{m \times n}(F)$ with $m < n$ can be viewed as a subspace of $F^n$, ....</p>
|
3,336,592 | <p>I could prove the following result from my Real Analysis course:</p>
<blockquote>
<p>Let <span class="math-container">$f:[0,1] \rightarrow [0,1]$</span> be an increasing mapping. Then it has a fixed point.</p>
</blockquote>
<p>I understand that this is a very baby version of Tarski’s Fixed Point Theorem. Now, I wish to generalize this a little bit and get the following:</p>
<blockquote>
<p>Let <span class="math-container">$f:[0,1]^n \rightarrow [0,1]^n$</span> in which <span class="math-container">$f$</span> is increasing in the sense that if <span class="math-container">$y \geq x$</span> coordinate wise then <span class="math-container">$f(y) \geq f(x)$</span> coordinate wise. Then, f has a fixed point.</p>
</blockquote>
<p>From my point of view, we could just pick a point <span class="math-container">$x_0 \in [0,1]^n$</span>, fix all coordinates but one and apply the above lemma to that coordinate. Then, when the first coordinate of the fixed point is found, we do the same for the second and so on.</p>
<p>However, I am not sure this route would be successful and even if it is, I can’t write the extension formally. Any ideas? Thanks a lot in advance!</p>
| Chris Eagle | 693,182 | <p>Your summary seems accurate, with one exception: The theory of algebraically closed fields of characteristic 0 is complete. Perhaps you meant the theory of algebraically closed fields, without specifying the characteristic?</p>
|
3,336,592 | <p>I could prove the following result from my Real Analysis course:</p>
<blockquote>
<p>Let <span class="math-container">$f:[0,1] \rightarrow [0,1]$</span> be an increasing mapping. Then it has a fixed point.</p>
</blockquote>
<p>I understand that this is a very baby version of Tarski’s Fixed Point Theorem. Now, I wish to generalize this a little bit and get the following:</p>
<blockquote>
<p>Let <span class="math-container">$f:[0,1]^n \rightarrow [0,1]^n$</span> in which <span class="math-container">$f$</span> is increasing in the sense that if <span class="math-container">$y \geq x$</span> coordinate wise then <span class="math-container">$f(y) \geq f(x)$</span> coordinate wise. Then, f has a fixed point.</p>
</blockquote>
<p>From my point of view, we could just pick a point <span class="math-container">$x_0 \in [0,1]^n$</span>, fix all coordinates but one and apply the above lemma to that coordinate. Then, when the first coordinate of the fixed point is found, we do the same for the second and so on.</p>
<p>However, I am not sure this route would be successful and even if it is, I can’t write the extension formally. Any ideas? Thanks a lot in advance!</p>
| DanielV | 97,045 | <blockquote>
<p>We can have undecidable and incomplete theories, e.g. Peano Arithmetic.</p>
</blockquote>
<p>This is based on a very different definition of complete than what you wrote. Gödel's Incompleteness Theorem uses the "if it is true then it is provable" pseudo definition of completeness. And he gets around the ambiguity of that definition by only needing to give 1 meaningful counterexample, a unary predicate <span class="math-container">$P$</span> with the quality that there is a proof for <span class="math-container">$P(0)$</span> and a proof for <span class="math-container">$P(1)$</span> and a proof for <span class="math-container">$P(2)$</span>, etc, but there is no proof of <span class="math-container">$\forall x . P(x)$</span>.</p>
<p>The definition of completeness you give is the one that a person would mean if they said "propositional logic is complete"; that is, that every propositional statement has a proof or disproof. But an IMO better way to phrase the definition in that case is "if it exists in this language, then it has a proof". In the definition there is no particular reason to separate cases according to <span class="math-container">$\lnot$</span>.</p>
<p>If someone was to say a theory is complete, I'm not even sure I could guess what they mean. A theory is just a set of theorems (although usually in context, with some sort of deductive closure). It is usually meaningless to say a theory is (in)complete, except maybe relative to a grammar, you would instead say whether a logic is complete. </p>
<p>When they say "[a particular] first order logic" is complete, what they mean is that every statement that is a tautology (relative to whichever first order model theory they are using) has a proof in that logic. So when they talk about the completeness of [a particular] first order logic, in absolutely no way are they suggesting that it is decidable; that is, they are not at all alluding to the definition in the original question. It's all just first order model theory stuff.</p>
<p>Completeness is used to mean a lot of different things.</p>
|
1,997,407 | <p>What is the relationship between the characteristic polynomial of two square matrices and the characteristic polynomial of the product of these two square matrices? If I know the characteristic polynomials of each one of these matrices, what can I say about the characteristic polynomial of their product?
I can't seem to find this information anywhere.</p>
<p>Thank you!</p>
| Dan Fox | 60,380 | <p>In general there is no nice relation. The product of nilpotent matrices need not be nilpotent (for example, consider the product of a nontrivial nilpotent Jordan block with its transpose), so the characteristic polynomial of the product of two matrices whose characteristic polynomials have only $0$ as a root can have a nonzero root.</p>
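<p>This counterexample is easy to check numerically; with the $2\times 2$ nilpotent Jordan block both factors have characteristic polynomial $t^2$, yet the product does not:</p>

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent Jordan block, char. poly. t^2
B = A.T                                  # its transpose, also nilpotent
P = A @ B                                # [[1, 0], [0, 0]] -- eigenvalues 1 and 0

print(np.poly(A), np.poly(P))  # coefficient lists of t^2 and t^2 - t
```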
|
1,997,407 | <p>What is the relationship between the characteristic polynomial of two square matrices and the characteristic polynomial of the product of these two square matrices? If I know the characteristic polynomials of each one of these matrices, what can I say about the characteristic polynomial of their product?
I can't seem to find this information anywhere.</p>
<p>Thank you!</p>
| ManifoldFR | 254,063 | <p>Suppose the matrices are such that $AB=BA$: one can put both matrices in triangular form within the same basis $(e_i)$ should this condition be satisfied. Call $\lambda_i$ (resp. $\mu_i$) the eigenvalues of $A$ and (resp. $B$) associated with eigenvector $e_i$. Then the roots of the characteristic polynomial of $AB$ i.e. the eigenvalues of this matrix are the products $\lambda_i\mu_i$ of the roots of the characteristic polynomials $\lambda_i$ and $\mu_i$ of $A$ and $B$. </p>
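<p>A quick numerical illustration of this commuting case (the matrices below are made up; $B$ is a polynomial in $A$, so $AB=BA$):</p>

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])
B = A @ A + np.eye(3)   # a polynomial in A, hence AB = BA; eigenvalues 2, 5, 10

P = A @ B
# Eigenvalues of the product are the pairwise products 1*2, 2*5, 3*10.
print(sorted(np.linalg.eigvals(P).real))
```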
|
4,138,346 | <p>I would like someone to verify this exercise for me. Please.</p>
<p>Find the following limit:</p>
<p><span class="math-container">$\lim\limits_{n \to \infty}\left(\dfrac{1}{n+1}+\dfrac{1}{n+2}+...+\dfrac{1}{3n}\right)$</span></p>
<p><span class="math-container">$=\lim\limits_{n \to \infty}\left(\dfrac{1}{n+1}+\dfrac{1}{n+2}+...+\dfrac{1}{n+2n}\right)$</span></p>
<p><span class="math-container">$=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n} \dfrac{1}{n+k}$</span></p>
<p><span class="math-container">$=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n} \dfrac{1}{n\left(1+\frac{k}{n}\right)}$</span></p>
<p><span class="math-container">$=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n}\left(\dfrac{1}{1+\frac{k}{n}}\cdot\dfrac{1}{n}\right)$</span></p>
<p><span class="math-container">$=\displaystyle\int_{1+0}^{1+2} \frac{1}{x} \,dx$</span></p>
<p><span class="math-container">$=\displaystyle\int_{1}^{3} \frac{1}{x} \,dx$</span></p>
<p><span class="math-container">$=\big[\ln|x|\big] _{1}^3$</span></p>
<p><span class="math-container">$=\ln|3|-\ln|1|$</span></p>
<p><span class="math-container">$=\ln(3)-\ln(1)$</span></p>
<p><span class="math-container">$=\ln(3)$</span></p>
| Z Ahmed | 671,540 | <p>Lastly, use <span class="math-container">$\;k/n\rightarrow x,\;\;1/n \to dx\;,\;$</span> then
<span class="math-container">$$L=\int_{0}^{2} \frac{dx}{1+x}=\ln(1+x)\big|_{0}^{2}=\ln 3\;.$$</span></p>
<p>Edit: the lower limit is <span class="math-container">$x_l=1/n$</span> and the upper is <span class="math-container">$x_u=2n/n$</span>; when <span class="math-container">$n$</span> is large (<span class="math-container">$n\to\infty$</span>) these tend to 0 and 2.</p>
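<p>The convergence to <span class="math-container">$\ln 3$</span> can also be observed numerically; the error shrinks like <span class="math-container">$O(1/n)$</span>:</p>

```python
from math import log

def partial_sum(n):
    # S_n = 1/(n+1) + 1/(n+2) + ... + 1/(3n)
    return sum(1.0 / k for k in range(n + 1, 3 * n + 1))

for n in (10, 1_000, 100_000):
    print(n, partial_sum(n), log(3) - partial_sum(n))
```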
|
70,158 | <p>This question is related to <a href="https://mathoverflow.net/questions/52825/coloring-mathbb-zk"> this one </a> but feels more Ramsey-type, so perhaps it is easier. Let $S$ be a finite set, $|S|=k$. Suppose we color all subsets of $S$ in $1000$ colors. What is the maximal (in terms of $k$) guaranteed length $l=l(k)$ of a monochromatic sequence of pairwise different subsets $A_1,A_2,..., A_l$ such that $|A_i\setminus A_{i+1}|+|A_{i+1}\setminus A_i|\le 2$ for every $i$? Clearly if $A$ is a subset of $S$ such that all 2-element subsets of $A$ are monochromatic, then $l(k)\ge |A|-1$ (there is a sequence of 2-element subsets of $A$ which satisfies the above property). So $l(k)$ is at least as big as the corresponding number from Ramsey theory. Is it much bigger? The number 1000 is of course "any fixed number". </p>
<p><b> Update 1 </b> Fedor and Tony showed below that $l(k)\ge k/1000$. Thus only the first question remains: What is $l(k)$? Is it exponential in $k$, for example?</p>
<p><b> Update 2 </b> Although the question I asked makes sense (see Update 1), I realized that it is not the question I meant to ask. Here is the correct question. Same assumptions: $|S|=k$, 1000 colors. We consider monochromatic sequences of pairwise different subsets ${\mathcal A}=A_1,A_2,...,A_l$, where $|A_i\setminus A_{i+1}|+|A_{i+1}\setminus A_i|\le 2$. For each of these sequences we compute $\chi({\mathcal A})=|A_1\setminus A_l|+|A_l\setminus A_1|$. Now the question: what is the maximal guaranteed $\chi({\mathcal A})$ in terms of $k$, call it $\chi(k)$? By Ramsey, this number grows with $k$. Indeed if we color just $s$-element subsets, we will be able (if $k\gg s$) to find a subset of size $2s$ where all subsets of size $s$ are colored with the same color; then we can find a monochromatic sequence of subsets of size $s$ with the above property and $\chi=2s$ because the first and the last subsets in that sequence are disjoint. The question is what is the growth rate of $\chi(k)$. The question is motivated by Justin Moore's answer
<a href="https://mathoverflow.net/questions/37449/covers-of-z-infty"> here.</a> </p>
| Fedor Petrov | 4,312 | <p>It is much bigger for sure, even if we restrict to subsets of cardinality 2 (call them edges). You need a monochromatic path of length $\ell$. Take a color with at least $k(k-1)/2000$ edges of this color, and consider only them. Consider a maximal path in this graph. It has length at most $\ell-1$. Hence its endpoints (both of them) have degree at most $\ell-1$. Remove one and repeat (or use induction). We get that our graph has at most $(\ell-1)k$ edges. So $k(k-1)/2000\leq (\ell-1)k$, i.e. $\ell\geq (k-1)/2000+1$. </p>
|
2,790,910 | <blockquote>
<p>Let <span class="math-container">$X \sim N (0, 1)$</span> and <span class="math-container">$Y ∼ N (0, 1)$</span> be two independent random variables, and define <span class="math-container">$Z = \min(X, Y )$</span>. Prove that <span class="math-container">$Z^2\sim\chi^2(1),$</span> i.e. Chi-Squared with degree of freedom <span class="math-container">$1.$</span></p>
</blockquote>
<p>I found the density functions of <span class="math-container">$X$</span> and <span class="math-container">$Y,$</span> as they are normally distributed. How would one use the fact that <span class="math-container">$Z = \min(X,Y)$</span> to answer the question? Thanks!</p>
| Wanshan | 530,208 | <p>$1-F_Z(t) = P(Z>t) = P(X>t)P(Y>t) =\frac{1}{2\pi}\left[ \int_t^{\infty}\exp(-x^2/2) \, dx \right]^2$. Taking the derivative w.r.t. $t$, we get
$$
f_Z(t) = -\frac{d}{dt}\frac{1}{2\pi} \left[ \int_t^\infty \exp(-x^2/2)\,dx \right]^2 = \frac{1}{\pi}\exp(-t^2/2)\left[\int_t^{\infty}\exp(-x^2/2)\,dx\right].
$$
Now let $W = Z^2$
\begin{align}
1-F_W(t) = {} & P(W>t) = P(Z>\sqrt{t})+P(Z<-\sqrt{t})\\[10pt]
= {} & \int_{\sqrt{t}}^{\infty}\frac{1}{\pi}\exp(-s^2/2) \left[\int_s^\infty \exp(-x^2/2)\,dx\right]\,ds \\[10pt]
& {} + \int_{-\infty}^{-\sqrt{t}}\frac{1}{\pi}\exp(-s^2/2) \left[ \int_s^{\infty}\exp(-x^2/2)\,dx\right]\,ds\\[10pt]
= {} & \int_{\sqrt{t}}^{\infty}\frac{1}{\pi}\exp(-s^2/2)\left[ \int_s^\infty \exp(-x^2/2)\,dx \right] \, ds \\[10pt]
& {} + \int^\infty_{\sqrt{t}}\frac{1}{\pi}\exp(-s^2/2) \left[ \int^{s}_{-\infty}\exp(-x^2/2)\,dx\right]\,ds\\[10pt]
= {} & \int_{\sqrt{t}}^\infty \frac{1}{\pi}\exp(-s^2/2)\,\sqrt{2\pi}\,ds
\end{align}
Taking the derivative, we get $f_W(t) = \frac{1}{\sqrt{2\pi t}}\exp(-t/2)$, which is the same as $f_{\chi^2_1}(t) = \frac{1}{\sqrt{2\pi t}}\exp(-t/2)$.</p>
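<p>A quick Monte Carlo sanity check: samples of $\min(X,Y)^2$ should reproduce the mean $1$ and variance $2$ of $\chi^2(1)$ (sample size and seed below are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
W = np.minimum(X, Y) ** 2

# chi-squared(1) has mean 1 and variance 2.
print(W.mean(), W.var())
```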
|
2,666,390 | <p>How do I show that for $x>0$:
$$\frac{x-(x^2+1)\arctan(x)}{x^2(x^2+1)} < 0$$</p>
<p>I tried to do it somehow using the fact that $$\frac{\arctan(x)}{x} < 1$$ but still didn't figure it out...</p>
| TheSimpliFire | 471,884 | <p>We have $$\begin{align}\frac{x-(x^2+1)\arctan x}{x^2(x^2+1)} < 0&\iff x<(x^2+1)\arctan x\\&\iff (1+x^2)\arctan x-x>0\end{align}$$ Since $(1+x^2)\arctan x-x$ vanishes at $x=0$, it suffices to show that $$\frac d{dx}\left[(1+x^2)\arctan x-x\right]=2x\arctan x+\frac{1+x^2}{1+x^2}-1=2x\arctan x>0$$ for $x>0$, which holds since both $x$ and $\arctan x$ are greater than $0$ when $x>0$. Hence the result.</p>
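<p>A numerical spot-check of $(1+x^2)\arctan x - x > 0$ over several orders of magnitude of $x$:</p>

```python
from math import atan

xs = [10.0 ** e for e in range(-4, 7)]
vals = [(1 + x * x) * atan(x) - x for x in xs]
print(all(v > 0 for v in vals))  # True
```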
|
3,669,961 | <p>Let <span class="math-container">$f:[a,b] \longrightarrow \mathbb{R}$</span> be an integrable function. I know that if <span class="math-container">$ f> 0 $</span> then
<span class="math-container">$$\int_{a}^{b} f(x)\; dx >0.$$</span> Is the converse true? That is, if <span class="math-container">$$\int_{a}^{b} f(x)\; dx >0$$</span> does it follow that <span class="math-container">$f>0$</span>?
I couldn't think of an example showing the converse is false.</p>
| Saptak Bhattacharya | 734,601 | <p>If you do not restrict <span class="math-container">$f$</span> to be non negative, take <span class="math-container">$f=2\chi_{(1,2]} - \chi_{[0,1)}$</span> on <span class="math-container">$[0,2]$</span>. Otherwise, even if you restrict <span class="math-container">$f$</span> to be non negative, this is not always the case. For example, <span class="math-container">$f=\chi_{(0,1]}$</span> on <span class="math-container">$[0,1]$</span>. If you want a general answer in terms of the Lebesgue integral, it's true that on a measure space <span class="math-container">$X$</span>, if the integral of a non negative measurable function <span class="math-container">$f$</span> is zero, then <span class="math-container">$f=0$</span> almost everywhere. You can prove this quite easily with Markov's inequality.</p>
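<p>The first counterexample integrates to <span class="math-container">$2\cdot 1 - 1 = 1 > 0$</span> while being negative on <span class="math-container">$[0,1)$</span>; a crude midpoint-rule check in Python (illustration only, any numerical integrator would do):</p>

```python
def f(x):
    # f = 2 on (1, 2], -1 on [0, 1); the single point x = 1 does not affect the integral
    if x > 1:
        return 2.0
    if x < 1:
        return -1.0
    return 0.0

n = 200_000
h = 2.0 / n
integral = sum(f((i + 0.5) * h) * h for i in range(n))  # midpoint rule on [0, 2]
print(integral)  # ≈ 1 > 0, although f < 0 on [0, 1)
```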
|
3,669,961 | <p>Let <span class="math-container">$f:[a,b] \longrightarrow \mathbb{R}$</span> be an integrable function. I know that if <span class="math-container">$ f> 0 $</span> then
<span class="math-container">$$\int_{a}^{b} f(x)\; dx >0.$$</span> Is the converse true? That is, if <span class="math-container">$$\int_{a}^{b} f(x)\; dx >0$$</span> does it follow that <span class="math-container">$f>0$</span>?
I couldn't think of an example showing the converse is false.</p>
| Saket Gurjar | 769,080 | <p>What does <span class="math-container">$\int_{a}^{b} f(x)\; dx >0$</span> show? It says that the area above the x-axis is greater than the area below the x-axis.</p>

<p>For example, a function which is always positive lies entirely above the x-axis, so the area above the x-axis is always greater than the area below (which is zero here).</p>
<p>eg. <span class="math-container">$f(x) = (x-3)^2 +2 $</span></p>
<p><a href="https://i.stack.imgur.com/omw9F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/omw9F.png" alt="enter image description here"></a></p>
<p>But the fact that the integral is positive does not mean that the function is positive: it only means that the area above the x-axis is greater. So a portion of the function can lie below the x-axis, provided the area under that portion is smaller than the area above.</p>
<p>eg. <span class="math-container">$f(x) = x(x-3)(x-5)$</span> from <span class="math-container">$x=0.5$</span> to <span class="math-container">$x=3.5$</span></p>
<p><a href="https://i.stack.imgur.com/ZNtBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZNtBF.png" alt="enter image description here"></a></p>
<p>Here the portion (1) above the x-axis is clearly larger than the portion (2) below the x-axis, so the definite integral is positive, but notice that the function is also negative in places. </p>
|
2,426,450 | <p>Today when I was solving problems from GRE Manhattan I ran into a strange word problem.</p>
<p><a href="https://i.stack.imgur.com/nqzes.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nqzes.png" alt="enter image description here"></a></p>
<p>The first line of the problem already seems weird since $\frac{3}{8}$ of $420$ is not an integer, namely it is equal to $\frac{3}{8}\cdot 420=157,5$. Am I right?</p>
| Especially Lime | 341,019 | <p>You're right. It works out that $105$ students took both, $52.5$ took French but not Geography, $63$ took Geography but not French, and $199.5$ took neither. So the answer intended is presumably that only C is true, but in fact the situation described is impossible.</p>
|
389,912 | <p>I'm writing a paper and want to cite some references to efficiently prove that over any field <span class="math-container">$k$</span> of characteristic zero, every irreducible representation of a product of symmetric groups, say</p>
<p><span class="math-container">$$ S_{n_1} \times \cdots \times S_{n_p} $$</span></p>
<p>is isomorphic to a tensor product <span class="math-container">$\rho_1 \otimes \cdots \otimes \rho_p$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$S_{n_i}$</span>.</p>
<p>I have an open mind about this, but I'm imagining doing it by finding references for these two claims:</p>
<ol>
<li><p>If <span class="math-container">$k$</span> is an algebraically closed field of characteristic zero, every irreducible representation of a product <span class="math-container">$G_1 \times G_2$</span> of finite groups is of the form <span class="math-container">$\rho_1 \otimes \rho_2$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$G_i$</span>.</p>
</li>
<li><p>If <span class="math-container">$k$</span> has characteristic zero and <span class="math-container">$\overline{k}$</span> is its algebraic closure, every finite-dimensional representation of <span class="math-container">$S_n$</span> over <span class="math-container">$\overline{k}$</span> is isomorphic to one of the form <span class="math-container">$\overline{k} \otimes_k \rho$</span> where <span class="math-container">$\rho$</span> is a representation of <span class="math-container">$S_n$</span> over <span class="math-container">$k$</span>.</p>
</li>
</ol>
<p>Serre's book <em>Linear Representations of Finite Groups</em> states the first fact for <span class="math-container">$k = \mathbb{C}$</span> but apparently not for a general algebraically closed field of characteristic zero. (It's Theorem 10.) It could be true already for any field of characteristic zero, which would simplify my life.</p>
<p>The second fact should be equivalent to saying that <span class="math-container">$\mathbb{Q}$</span> is a splitting field for any symmetric group, which seems to be something everyone knows - yet I haven't found a good reference.</p>
| Mare | 61,949 | <p>Question 1 is a special case of the following statement for finite dimensional algebras:
Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be finite dimensional algebras over a field <span class="math-container">$K$</span> such that <span class="math-container">$A/rad(A)$</span> and <span class="math-container">$B/rad(B)$</span> are isomorphic to a direct product of matrix algebras over <span class="math-container">$K$</span> (which is always true when <span class="math-container">$K$</span> is algebraically closed).</p>
<p>When <span class="math-container">$e_i$</span> and <span class="math-container">$e_i'$</span> are pairwise orthogonal primitive idempotents which sum to 1 for <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively, then <span class="math-container">$e_i \otimes_K e_i'$</span> are pairwise orthogonal primitive idempotents which sum to 1 for <span class="math-container">$A \otimes_K B$</span>.
Thus when <span class="math-container">$S_i$</span> and <span class="math-container">$S_i'$</span> are the simple <span class="math-container">$A$</span> and <span class="math-container">$B$</span>-modules respectively, then <span class="math-container">$S_i \otimes S_i'$</span> are the simple <span class="math-container">$A \otimes_K B$</span> modules. This is proved for <span class="math-container">$A=B$</span> in the book "Frobenius algebras I" by Skowronski and Yamagata in chapter IV as Proposition 11.3, but the proof works in exactly the same way when <span class="math-container">$A \neq B$</span>.</p>
<p>For question 2, one can find in section 4.5, corollary 4.16, of the book "A tour of representation theory" by Martin Lorenz the fact that <span class="math-container">$\mathbb{Q}$</span> is a splitting field for the symmetric group. The whole section 4 in this book is dedicated to the representation theory of the symmetric group in characteristic 0 and might be one of the nicest modern approaches to this problem.</p>
<p>Thus since <span class="math-container">$\mathbb{Q}$</span> is a splitting field for the symmetric group, it is true for any field <span class="math-container">$K$</span> of characteristic 0 (not just algebraically closed fields) that the irreducible representations of a direct product of symmetric groups is given as a tensor product of the irreducible representations of the single symmetric groups.</p>
|
389,912 | <p>I'm writing a paper and want to cite some references to efficiently prove that over any field <span class="math-container">$k$</span> of characteristic zero, every irreducible representation of a product of symmetric groups, say</p>
<p><span class="math-container">$$ S_{n_1} \times \cdots \times S_{n_p} $$</span></p>
<p>is isomorphic to a tensor product <span class="math-container">$\rho_1 \otimes \cdots \otimes \rho_p$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$S_{n_i}$</span>.</p>
<p>I have an open mind about this, but I'm imagining doing it by finding references for these two claims:</p>
<ol>
<li><p>If <span class="math-container">$k$</span> is an algebraically closed field of characteristic zero, every irreducible representation of a product <span class="math-container">$G_1 \times G_2$</span> of finite groups is of the form <span class="math-container">$\rho_1 \otimes \rho_2$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$G_i$</span>.</p>
</li>
<li><p>If <span class="math-container">$k$</span> has characteristic zero and <span class="math-container">$\overline{k}$</span> is its algebraic closure, every finite-dimensional representation of <span class="math-container">$S_n$</span> over <span class="math-container">$\overline{k}$</span> is isomorphic to one of the form <span class="math-container">$\overline{k} \otimes_k \rho$</span> where <span class="math-container">$\rho$</span> is a representation of <span class="math-container">$S_n$</span> over <span class="math-container">$k$</span>.</p>
</li>
</ol>
<p>Serre's book <em>Linear Representations of Finite Groups</em> states the first fact for <span class="math-container">$k = \mathbb{C}$</span> but apparently not for a general algebraically closed field of characteristic zero. (It's Theorem 10.) It could be true already for any field of characteristic zero, which would simplify my life.</p>
<p>The second fact should be equivalent to saying that <span class="math-container">$\mathbb{Q}$</span> is a splitting field for any symmetric group, which seems to be something everyone knows - yet I haven't found a good reference.</p>
| Maxime Ramzi | 102,343 | <p>This is not quite what you're looking for, but here's a theorem (followed by a reference) which justifies the heuristic that the representation theory of a finite group over an algebraically closed field of characteristic <span class="math-container">$0$</span> "doesn't depend on the field" :</p>
<blockquote>
<p>Suppose <span class="math-container">$K$</span> is a field in which every irreducible representation of <span class="math-container">$G$</span> is absolutely irreducible. Then for any field extension <span class="math-container">$K'/K$</span>, the induced morphism on representation rings <span class="math-container">$R_K(G)\to R_{K'}(G)$</span> is an isomorphism.</p>
</blockquote>
<p>This is in the first paragraph of section 14.6 in Serre's book.</p>
<p>For instance, here's how it can help for statement 1. : if <span class="math-container">$K$</span> is an algebraically closed field of characteristic <span class="math-container">$0$</span>, then</p>
<p>a) all irreducible representations are absolutely irreducible, by Schur's lemma and the existence of eigenvalues</p>
<p>b) <span class="math-container">$K$</span> has a common field extension with <span class="math-container">$\mathbb C$</span>.</p>
<p>From there it's not hard to see that if statement 1. holds over <span class="math-container">$\mathbb C$</span>, it does so over any algebraically closed field of characteristic <span class="math-container">$0$</span>: indeed, the morphism <span class="math-container">$R_K(G)\to R_{K'}(G)$</span> maps the "positive part" to the positive part, <span class="math-container">$R_K^+(G)\to R_{K'}^+(G)$</span>, so if the former map is an isomorphism, so is the latter; and since free commutative monoids have at most one basis, this induces a bijection between the irreducible representations. Apply this to <span class="math-container">$G_1,G_2$</span> and <span class="math-container">$G_1\times G_2$</span>.</p>
<p>You can also deduce 2. from the similar fact over <span class="math-container">$\overline{\mathbb Q}$</span> or <span class="math-container">$\mathbb C$</span> if you use the other statement from the same paragraph of Serre's book, namely that <span class="math-container">$R_K(G)\to R_{K'}(G)$</span> is always injective, no matter what the field extension <span class="math-container">$K'/K$</span> is.</p>
<p>Indeed, your statement 2. is then saying that <span class="math-container">$R_k(S_n)\to R_\overline k(S_n)$</span> is an isomorphism, but this follows from the following chain of morphisms, where <span class="math-container">$K$</span> is a common extension of <span class="math-container">$\overline k$</span> and <span class="math-container">$\mathbb C$</span>: <span class="math-container">$R_\mathbb Q(S_n)\to R_k(S_n)\to R_\overline k(S_n)\to R_K(S_n)$</span>. By the earlier statement, the last morphism is an isomorphism, but also the composite (if you already know the surjectivity statement for <span class="math-container">$\mathbb C$</span>), therefore by injectivity so is the middle one.</p>
<p>In other words, the slogan works in these situations, and you can find the appropriate precise statements in this first paragraph of section 14.6 of Serre's book.</p>
|
389,912 | <p>I'm writing a paper and want to cite some references to efficiently prove that over any field <span class="math-container">$k$</span> of characteristic zero, every irreducible representation of a product of symmetric groups, say</p>
<p><span class="math-container">$$ S_{n_1} \times \cdots \times S_{n_p} $$</span></p>
<p>is isomorphic to a tensor product <span class="math-container">$\rho_1 \otimes \cdots \otimes \rho_p$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$S_{n_i}$</span>.</p>
<p>I have an open mind about this, but I'm imagining doing it by finding references for these two claims:</p>
<ol>
<li><p>If <span class="math-container">$k$</span> is an algebraically closed field of characteristic zero, every irreducible representation of a product <span class="math-container">$G_1 \times G_2$</span> of finite groups is of the form <span class="math-container">$\rho_1 \otimes \rho_2$</span> where <span class="math-container">$\rho_i$</span> is an irreducible representation of <span class="math-container">$G_i$</span>.</p>
</li>
<li><p>If <span class="math-container">$k$</span> has characteristic zero and <span class="math-container">$\overline{k}$</span> is its algebraic closure, every finite-dimensional representation of <span class="math-container">$S_n$</span> over <span class="math-container">$\overline{k}$</span> is isomorphic to one of the form <span class="math-container">$\overline{k} \otimes_k \rho$</span> where <span class="math-container">$\rho$</span> is a representation of <span class="math-container">$S_n$</span> over <span class="math-container">$k$</span>.</p>
</li>
</ol>
<p>Serre's book <em>Linear Representations of Finite Groups</em> states the first fact for <span class="math-container">$k = \mathbb{C}$</span> but apparently not for a general algebraically closed field of characteristic zero. (It's Theorem 10.) It could be true already for any field of characteristic zero, which would simplify my life.</p>
<p>The second fact should be equivalent to saying that <span class="math-container">$\mathbb{Q}$</span> is a splitting field for any symmetric group, which seems to be something everyone knows - yet I haven't found a good reference.</p>
| Benjamin Steinberg | 15,934 | <p>To summarize the situation given in the other answers (no real new content here) it is classical theory going back to Young that the complex irreducible representations of <span class="math-container">$S_n$</span> can be defined over <span class="math-container">$\mathbb Q$</span> (i.e., written with <span class="math-container">$\mathbb Q$</span>-coefficients, or written as <span class="math-container">$\mathbb C\otimes_{\mathbb Q}V$</span> with <span class="math-container">$V$</span> a <span class="math-container">$\mathbb QS_n$</span>-irreducible module) and references were given; i.e., the <span class="math-container">$\mathbb Q$</span>-irreducibles are absolutely irreducible. This can be done via Young symmetrizers and anti-symmetrizers, polytabloids, or a number of other approaches and I have nothing to add to the discussion.</p>
<p>What this means concretely is that <span class="math-container">$\mathbb QS_n\cong \prod_{i=1}^{p_n}M_{d_i}(\mathbb Q)$</span> where <span class="math-container">$p_n$</span> is the number of partitions of <span class="math-container">$n$</span> and <span class="math-container">$d_i$</span> is the dimension of the <span class="math-container">$i^{th}$</span>-irreducible representations (and of course all these <span class="math-container">$d_i$</span> are well known through tableaux combinatorics and involve counting semi-standard Young tableaux). One way to see this is to use that if <span class="math-container">$V$</span> is a finite dimensional <span class="math-container">$\mathbb QS_n$</span>-module, then <span class="math-container">$\mathrm{End}_{\mathbb CS_n}(\mathbb C\otimes_{\mathbb Q} V)\cong \mathbb C\otimes_{\mathbb Q}\mathrm{End}_{\mathbb QS_n}(V)$</span> by standard arguments and so by Schur's lemma, if <span class="math-container">$\mathbb C\otimes_{\mathbb Q}V$</span> is irreducible, then <span class="math-container">$\mathrm{End}_{\mathbb CS_n}(\mathbb C\otimes_{\mathbb Q} V)$</span> one-dimensional over <span class="math-container">$\mathbb C$</span> and hence <span class="math-container">$\mathrm{End}_{\mathbb QS_n}(V)$</span> is one-dimensional over <span class="math-container">$\mathbb Q$</span> and so apply Wedderburn-Artin to <span class="math-container">$\mathbb QS_n$</span> to get the statement.</p>
<p>Now, let's just handle <span class="math-container">$\mathbb Q[S_n\times S_m]\cong \mathbb QS_n\otimes_{\mathbb Q}\mathbb QS_m$</span>. Then by the above, we have that this tensor product is isomorphic to <span class="math-container">$$\prod_{i=1}^{p_n}\prod_{j=1}^{p_m}M_{d_i}(\mathbb Q)\otimes_{\mathbb Q} M_{c_j}(\mathbb Q)$$</span> where I introduced <span class="math-container">$c_j$</span> for the dimensions of the <span class="math-container">$S_m$</span>-irreducibles over <span class="math-container">$\mathbb Q$</span>. Obviously <span class="math-container">$$M_{d_i}(\mathbb Q)\otimes_{\mathbb Q}M_{c_j}(\mathbb Q)\cong M_{d_i}(M_{c_j}(\mathbb Q))\cong M_{d_ic_j}(\mathbb Q)$$</span> and hence <span class="math-container">$M_{d_i}(\mathbb Q)\otimes_{\mathbb Q}M_{c_j}(\mathbb Q)$</span> is simple with a unique simple module (up to isomorphism) which has dimension <span class="math-container">$d_ic_j$</span> (and this dimension characterizes the simple module). Consequently the tensor product of the unique simple modules of the two tensor factors is the unique simple module for this tensor product, e.g., by dimension consideration. Putting it all together, we get that the simple <span class="math-container">$\mathbb Q[S_n\times S_m]$</span>-modules are the tensor products of the simple <span class="math-container">$\mathbb QS_n$</span>-modules and <span class="math-container">$\mathbb QS_m$</span>-modules. Now since <span class="math-container">$K\otimes_{\mathbb Q} M_r(\mathbb Q)\cong M_r(K)$</span> and <span class="math-container">$K\otimes_{\mathbb Q}\mathbb Q^r\cong K^r$</span>, the situation doesn't change when we extend the scalars. We get the same number of irreducibles and they are obtained by extending the scalars from those of <span class="math-container">$\mathbb Q[S_n\times S_m]$</span>. Of course the argument is the same for any finite number of factors.</p>
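<p>As a small numerical sanity check of the dimension count above (an illustration of mine, not part of the argument): the rational irreducible dimensions of <span class="math-container">$S_3$</span> are <span class="math-container">$1,1,2$</span> and those of <span class="math-container">$S_2$</span> are <span class="math-container">$1,1$</span>, so the irreducibles of <span class="math-container">$S_3\times S_2$</span> should have the pairwise products as dimensions, with squares summing to <span class="math-container">$|S_3\times S_2|=12$</span>. A quick sketch in Python:</p>

```python
from itertools import product

# Dimensions of the absolutely irreducible representations over Q
# (standard facts from Young's theory): trivial, sign, standard for S_3.
dims_s3 = [1, 1, 2]
dims_s2 = [1, 1]

# Irreducibles of S_3 x S_2 are tensor products, so their dimensions
# are the pairwise products d_i * c_j.
dims_product = [d * c for d, c in product(dims_s3, dims_s2)]

# Wedderburn-Artin: the squares of the irreducible dimensions sum to |G|.
assert sum(d * d for d in dims_s3) == 6        # |S_3|
assert sum(c * c for c in dims_s2) == 2        # |S_2|
assert sum(d * d for d in dims_product) == 12  # |S_3 x S_2|
print(sorted(dims_product))  # [1, 1, 1, 1, 2, 2]
```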
<p><strong>Tiny update.</strong> Although John didn't ask for this, it is also well known that the <span class="math-container">$p$</span>-element field <span class="math-container">$\mathbb F_p$</span> is a splitting field for the symmetric group in characteristic <span class="math-container">$p$</span>. From this one can again deduce that all irreducible representations of products of symmetric groups are tensor products of irreducible representations of the factors over any field. Likely there is a 100% direct proof using tableaux and the like. But here is one possible proof. First note that if <span class="math-container">$G$</span> is any finite group, <span class="math-container">$K$</span> is an algebraically closed field of characteristic <span class="math-container">$p$</span> and <span class="math-container">$|G|=p^nm$</span> with <span class="math-container">$\gcd(p,m)=1$</span>, then one can show that each character <span class="math-container">$\chi$</span> of <span class="math-container">$G$</span> over <span class="math-container">$K$</span> takes values that are sums of <span class="math-container">$m^{th}$</span>-roots of unity since <span class="math-container">$1$</span> is the only <span class="math-container">$p^{th}$</span>-root of unity in <span class="math-container">$K$</span>. Hence the character field of <span class="math-container">$\chi$</span> is a finite field. The theory of Schur indices together with Wedderburn's theorem that there are no finite division rings then tells you that <span class="math-container">$\chi$</span> is realizable over the character field <span class="math-container">$\mathbb F_p(\chi)$</span>. Moreover, a character of <span class="math-container">$G$</span> is well known to be determined by its values on the <span class="math-container">$p$</span>-regular elements of <span class="math-container">$G$</span> (the elements of order prime to <span class="math-container">$p$</span>).
Now in <span class="math-container">$S_n$</span>, every element <span class="math-container">$g$</span> of order prime to <span class="math-container">$p$</span> is conjugate to <span class="math-container">$g^p$</span>. Hence <span class="math-container">$\chi(g)=\chi(g^p) = \Phi(\chi(g))$</span> where <span class="math-container">$\Phi$</span> is the Frobenius automorphism <span class="math-container">$x\mapsto x^p$</span>. Therefore, <span class="math-container">$\chi$</span> takes values in <span class="math-container">$\mathbb F_p$</span> and so is realizable over <span class="math-container">$\mathbb F_p$</span>.</p>
|
370,151 | <p>Let $f: \Bbb R → \Bbb R$ be a continuous function such that $f(x)=x$ has no real solution. Then is it true that $f(f(x))=x$ also has no real solution?</p>
| P.. | 39,722 | <p>If $f(f(c))=c$ and $f(c)=r$, then $f(r)=c$.<br>
Note that $c\neq r$: otherwise $f(c)=c$, and $f(x)=x$ would have a solution. Now consider the continuous function $g(x)=f(x)-x$. We have $g(r)=f(r)-r=c-r$ and $g(c)=f(c)-c=r-c$, which are nonzero and of opposite signs, so by the intermediate value theorem there is an $s$ between $r$ and $c$ with $g(s)=0$, i.e. $f(s)=s$. This contradicts the hypothesis, so $f(f(x))=x$ has no real solution either.</p>
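<p>A quick numerical illustration (mine, and only an illustration, not a proof): take $f(x)=x+e^{-x}$, which satisfies $f(x)>x$ everywhere and hence has no fixed point; consistently with the above, $f(f(x))>f(x)>x$, so $f\circ f$ has no fixed point either.</p>

```python
import math

def f(x):
    # f(x) - x = exp(-x) > 0 for every real x, so f has no fixed point.
    return x + math.exp(-x)

# Sample points on [-10, 10]: neither f nor f(f(.)) ever meets y = x.
for x in [k / 10 for k in range(-100, 101)]:
    assert f(x) > x
    assert f(f(x)) > x  # consistent with the result proved above
print("no fixed point of f or f(f(.)) found")
```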
|
207,433 | <p>(When one says several things in a question, then several things may get answered and others neglected. Hence this posting overlaps with one of my earlier ones, but (I hope) this one will be short, simple, and narrowly focused.)</p>
<p><b>In what contexts in mathematics, at any level of sophistication, do products of logarithms, all to the same base, arise naturally?</b></p>
<p>(I know that I've come across them a few times while doing so routine calculus problems, but I can't remember anything specific about it.)</p>
| val11 | 5,856 | <p>In computer science, the notion of <a href="http://en.wikipedia.org/wiki/Polylogarithmic" rel="nofollow">polylogarithmic growth</a> appears naturally when analyzing time/space complexity of algorithms. </p>
<p>There also exists the notion of <a href="http://en.wikipedia.org/wiki/Quasi-polynomial_time#Quasi-polynomial_time" rel="nofollow">quasi-polynomial time</a>, which is a bit more rare though.</p>
|
4,005,381 | <p>Why is the probability of a single-element event "zero" in the continuous model? The explanation given is based on the probability additivity axioms, but how? Some more explanation, with a source, would be helpful.</p>
<p>Cool, thanks!</p>
| Joe | 623,665 | <p>Suppose that there is a biased random number generator that can produce numbers between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. The probability density function corresponding to this generator is <span class="math-container">$f(x)=2x$</span>:</p>
<p><a href="https://i.stack.imgur.com/BYAC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BYAC2.png" alt="Probability Density Function" /></a></p>
<p>To find the probability of falling into a particular interval <span class="math-container">$[a,b]$</span>, you have to compute the integral between <span class="math-container">$a$</span> and <span class="math-container">$b$</span> of the probability density function. In our example, that means finding <span class="math-container">$\int_{a}^{b} 2x \, dx$</span>:
<span class="math-container">$$
\int_{a}^{b} 2x \, dx = \left[x^2\right]_{a}^{b} = b^2 - a^2
$$</span>
In fact, you need not use integration. The area of the trapezium shown below is given by
<span class="math-container">$$
\text{area} = \text{base} \times \text{average height} = (b-a) \times \frac{2a+2b}{2} = b^2 - a^2
$$</span></p>
<p><a href="https://i.stack.imgur.com/I0XRB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I0XRB.png" alt="Area over an interval [a,b]" /></a></p>
<p>Since the probability of an event corresponds to area, we know that
<span class="math-container">$$
P(a \leq x \leq b) = b^2 - a^2 \, .
$$</span>
The probability of a single event happening corresponds to an interval of width zero. In other words,
<span class="math-container">$$
P(x=a) = P(a \leq x \leq a) = a^2 - a^2 = 0
$$</span>
This means that in a continuous probability distribution, the chance of a single event happening is equal to <span class="math-container">$0$</span>.</p>
<p>You probably still find this answer a little unconvincing. In a discrete setting, the sum of the probabilities of all the events is equal to one. In a continuous setting, not only is this untrue, it doesn't make any sense: there is no sensible way to define the sum of uncountably many terms. One way to reconcile the differences between discrete and continuous settings is to realise that a continuous probability distribution is in a sense the 'limit' of a discrete one, as user2661923 has already mentioned. Imagine we approximate the probability density function <span class="math-container">$f(x) = 2x$</span> with <span class="math-container">$10$</span> rectangles of width <span class="math-container">$0.1$</span>:</p>
<p><a href="https://i.stack.imgur.com/m5FDf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m5FDf.png" alt="10 rectangles" /></a></p>
<p>These rectangles give us an upper bound for the chance of falling into an interval, with
<span class="math-container">\begin{align}
P(x \in [a,a+0.1]) &\leq \text{interval width} \times \text{height of rectangle at end of interval}\\
&= 0.1 \times 2(a+0.1) = \frac{a+0.1}{5}
\end{align}</span>
What about the general case? If we split the probability density function into <span class="math-container">$n$</span> intervals of width <span class="math-container">$1/n$</span>, then
<span class="math-container">$$
P(x \in [a,a+1/n]) \leq 1/n \times 2(a+1/n) = \frac{2(a+1/n)}{n}
$$</span></p>
<p><a href="https://i.stack.imgur.com/Va0vy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Va0vy.png" alt="General case" /></a></p>
<p>Furthermore, since <span class="math-container">$a$</span> is a subset of <span class="math-container">$[a,a+1/n]$</span>
<span class="math-container">$$
P(x = a) \leq P(x \in [a,a+1/n]) \, .
$$</span>
Hence,
<span class="math-container">$$
P(x=a) \leq P(x \in [a,a+1/n]) \leq \frac{2(a+1/n)}{n} \, .
$$</span>
For <em>any</em> interval of width <span class="math-container">$1/n$</span>, the above inequality must be satisfied. If we make the interval arbitrarily small, then <span class="math-container">$n$</span> tends to infinity, and
<span class="math-container">$$
\frac{2(a+1/n)}{n}
$$</span>
tends to <span class="math-container">$0$</span>. This means for every <span class="math-container">$\varepsilon > 0$</span>, <span class="math-container">$P(x=a) < \varepsilon$</span>. But since probabilities must be nonnegative, we are left with one option:
<span class="math-container">$$
P(x = a) = 0 \, .
$$</span></p>
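<p>For readers who like empirical checks, here is a small simulation (an addition of mine): sampling from the density <span class="math-container">$f(x)=2x$</span> via inverse-transform sampling (if <span class="math-container">$U$</span> is uniform on <span class="math-container">$[0,1]$</span>, then <span class="math-container">$\sqrt U$</span> has CDF <span class="math-container">$x^2$</span>), the fraction of samples in <span class="math-container">$[a,b]$</span> approaches <span class="math-container">$b^2-a^2$</span>, while no sample lands exactly on a prescribed point.</p>

```python
import random

random.seed(0)
n = 200_000
# Inverse-transform sampling: sqrt(U) has CDF P(X <= x) = x^2,
# i.e. density f(x) = 2x on [0, 1].
samples = [random.random() ** 0.5 for _ in range(n)]

a, b = 0.3, 0.7
frac = sum(a <= x <= b for x in samples) / n
assert abs(frac - (b * b - a * a)) < 0.01  # b^2 - a^2 = 0.40

# Hitting a fixed point exactly has probability zero.
hits = sum(x == 0.5 for x in samples)
print(frac, hits)
```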
|
1,363,967 | <p>A group $G$ acts on a set $X$ transitively and a normal subgroup $H$ fixes a point $x_{0} \in X$, i.e. $h \cdot x_{0}=x_{0}$ for all $h \in H$. Show that $h \cdot x = x$ for all $h \in H$ and $x \in X$.</p>
<p>Since the action is transitive, $\mathcal{O}_{x}=\{g\cdot x : g \in G\}=X$ for any $x \in X$. </p>
<p>I've been fooling around with the fact that $g\cdot x = x_{0}$ for some $g \in G$ and using the fact that $H$ is a normal subgroup but haven't really gotten anywhere. I feel like this should follow straight from the axioms of group action and definition of normal subgroup, or am I missing something?</p>
| Matt Samuel | 187,867 | <p>Suppose the normal subgroup $H$ fixes $x$, let $y$ be in the set, and let $g$ be such that $y=gx$ (such a $g$ exists by transitivity). Then $gHg^{-1}=H$ fixes $y$: indeed, for $h\in H$ write $h=gh'g^{-1}$ with $h'\in H$, so that $h\cdot y = gh'g^{-1}\cdot(g\cdot x)=g\cdot(h'\cdot x)=g\cdot x=y$. Since $y$ was arbitrary, $H$ fixes every point.</p>
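<p>A concrete example may help (my own illustration, with the example chosen by me): let $G=D_4$ act on the two diagonals of a square. The action is transitive, and the normal subgroup $H=\{e,r^2\}$ fixes one diagonal, so by the statement it must fix both. A brute-force check:</p>

```python
r = (1, 2, 3, 0)   # rotation of the square's vertices 0,1,2,3
s = (1, 0, 3, 2)   # a reflection

def compose(p, q):
    # permutation composition: apply q first, then p
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Close {identity, r, s} under composition to get the dihedral group D_4.
G = {(0, 1, 2, 3)}
frontier = {r, s}
while frontier:
    G |= frontier
    frontier = {compose(p, q) for p in G for q in G} - G
assert len(G) == 8

# G acts on the two diagonals of the square.
diagonals = [frozenset({0, 2}), frozenset({1, 3})]
def act(p, d):
    return frozenset(p[v] for v in d)

# The action is transitive: the orbit of one diagonal is everything.
assert {act(g, diagonals[0]) for g in G} == set(diagonals)

# H = {e, r^2} is a normal subgroup fixing the first diagonal ...
r2 = compose(r, r)
H = {(0, 1, 2, 3), r2}
assert all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
assert all(act(h, diagonals[0]) == diagonals[0] for h in H)

# ... and, as the statement predicts, it therefore fixes every point.
assert all(act(h, d) == d for h in H for d in diagonals)
print("H acts trivially on the diagonals")
```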
|
3,827,650 | <p>Does the generalised integral</p>
<p><span class="math-container">$\int_{0}^{\pi}\frac{\sqrt x}{\sin x}dx$</span></p>
<p>converge or diverge?</p>
<p>The first thing I would do here is split it into two integrals</p>
<p><span class="math-container">$$\int_0^\pi \frac{\sqrt x}{\sin{x}}dx=\int_0^{\frac{\pi}{2}} \frac{\sqrt x}{\sin{x}}dx+\int_{\frac{\pi}{2}}^\pi \frac{\sqrt x}{\sin{x}}dx$$</span></p>
<p>But then I am a bit stuck. I don't know if I now should compare it to something (and in that case what?), or if I should expand it with Taylor or something.</p>
| Claude Leibovici | 82,404 | <p>For <span class="math-container">$x$</span> close to <span class="math-container">$0$</span>,
<span class="math-container">$$\frac{\sqrt x}{\sin (x)}=\frac{1}{\sqrt{x}}+\frac{x^{3/2}}{6}+O\left(x^{7/2}\right)$$</span> so no problem.</p>
<p>But close to <span class="math-container">$x=\pi$</span>
<span class="math-container">$$\frac{\sqrt x}{\sin (x)}=-\frac{\sqrt{\pi }}{x-\pi }-\frac{1}{2 \sqrt{\pi }}+\frac{\left(3-4 \pi ^2\right)
}{24 \pi ^{3/2}} (x-\pi )+O\left((x-\pi )^2\right)$$</span> and, here, there is a major issue.</p>
<p>Amazingly, the problem could have been solved <span class="math-container">$\color{red}{1,400}$</span> years ago using the approximation
<span class="math-container">$$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$</span> which appears in the <em>Mahabhaskariya</em> of Bhaskara I, a seventh-century Indian mathematician.</p>
<p>This would give
<span class="math-container">$$I(a)=\int_{0}^{a}\frac{\sqrt x}{\sin (x)}dx\simeq \int_{0}^{a} \frac{5 \pi ^2-4 (\pi -x) x}{16 (\pi -x) \sqrt{x}} dx$$</span>
<span class="math-container">$$I(a)=\frac{5 \pi ^{3/2}}{8} \tanh ^{-1}\left(\frac{\sqrt{a}}{\sqrt{\pi
}}\right)-\frac{a^{3/2}}{6}$$</span> and, as shown below, it is a decent approximation
<span class="math-container">$$\left(
\begin{array}{ccc}
a & \text{approximation} & \text{exact} \\
0.25 & 0.98828 & 1.00209 \\
0.50 & 1.41108 & 1.42619 \\
0.75 & 1.75095 & 1.76576 \\
1.00 & 2.05704 & 2.07133 \\
1.25 & 2.35188 & 2.36586 \\
1.50 & 2.65145 & 2.66535 \\
1.75 & 2.97141 & 2.98529 \\
2.00 & 3.33164 & 3.34531 \\
2.25 & 3.76309 & 3.77615 \\
2.50 & 4.32461 & 4.33677 \\
2.75 & 5.16168 & 5.17417 \\
3.00 & 6.89988 & 6.92410
\end{array}
\right)$$</span></p>
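<p>The closed form for <span class="math-container">$I(a)$</span> is easy to check numerically (a verification sketch of mine; the substitution <span class="math-container">$x=t^2$</span> below is only there to tame the <span class="math-container">$1/\sqrt{x}$</span> behaviour of the integrand near <span class="math-container">$0$</span>):</p>

```python
import math

def I_approx(a):
    # The closed form derived above from Bhaskara's sine approximation.
    return (5 * math.pi ** 1.5 / 8) * math.atanh(math.sqrt(a / math.pi)) - a ** 1.5 / 6

def I_numeric(a, n=200_000):
    # Numerical value of I(a) = integral of sqrt(x)/sin(x) over [0, a].
    # Substituting x = t^2 gives the integrand 2 t^2 / sin(t^2), which is
    # smooth near t = 0 (it tends to 2), so a plain midpoint rule suffices.
    T = math.sqrt(a)
    h = T / n
    return sum(2 * ((k + 0.5) * h) ** 2 / math.sin(((k + 0.5) * h) ** 2)
               for k in range(n)) * h

for a in (0.5, 1.0, 2.0):
    assert abs(I_approx(a) - I_numeric(a)) < 0.02
print(round(I_approx(1.0), 5), round(I_numeric(1.0), 5))
```

<p>The tolerance reflects the fact that, away from the endpoint <span class="math-container">$\pi$</span>, the approximation agrees with the exact values in the table to better than one percent.</p>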
|
3,117,139 | <p>I am learning to calculate the arc length by reading a textbook, and there is a question</p>
<p><a href="https://i.stack.imgur.com/Zigqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zigqv.png" alt="enter image description here"></a></p>
<p>However, I get stuck at calculating</p>
<p><span class="math-container">$$\int^{\arctan{\sqrt15}}_{\arctan{\sqrt3}} \frac{\sec{(\theta)} (1+\tan^2{(\theta)})} {\tan{\theta}} d\theta$$</span> How can I continue calculating it?</p>
<p><strong>Update 1:</strong></p>
<p><span class="math-container">$$\int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \frac{\sec{(\theta)} (1+\tan^2{(\theta)})} {\tan{\theta}} d\theta = \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} (\csc{(\theta)} + \sec{(\theta)} \tan{(\theta)}) d\theta \\
= \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \csc{(\theta)}\, d\theta + \left.\frac{1}{\cos{(\theta)}}\right|^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \\
= \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \csc{(\theta)}\, d\theta + \frac{1}{\cos{(\arctan{\sqrt{15}})}} - \frac{1}{\cos{(\arctan{\sqrt3})}}$$</span></p>
<p>But how can I get the final result?</p>
<p><strong>Update 2:</strong></p>
<p>Because <span class="math-container">$\frac{1}{\cos{(x)}} = \sqrt{ \frac{\cos^2{(x)} + \sin^2{(x)}}{\cos^2{(x)}}} = \sqrt{1+\tan^2{(x)}}$</span>, I get </p>
<p><span class="math-container">$$\frac{1}{\cos{(\arctan{\sqrt{15}})}} - \frac{1}{\cos{(\arctan{\sqrt3})}} = \sqrt{1+15} - \sqrt{1+3} = 2$$</span> </p>
<p>However, for the first part <span class="math-container">$\int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \csc{(\theta)} d\theta$</span>, I get </p>
<p><span class="math-container">$$ \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}} \csc{(\theta)} d\theta = \left.\log \tan{\frac{\theta}{2}}\right|^{\arctan{\sqrt{15}}}_{\arctan{\sqrt3}}$$</span></p>
<p>How can I continue it?</p>
| Claude Leibovici | 82,404 | <p><em>This is just a personal opinion.</em></p>
<p>I must confess that, when I started working on Mathematics Stack Exchange, I was surprised to see how the "u" substitution was used (and then taught).</p>
<p>When I was young (that is to say, a long time ago!), the way we were taught was quite different. It went like this:
<span class="math-container">$$u=f(x) \implies x=f^{(-1)}(u)\implies dx=\frac{du}{f'\left(f^{(-1)}(u)\right)}$$</span></p>
<p>For example, using the case you give
<span class="math-container">$$u=x^2 \implies x=\sqrt u\implies dx=\frac{du}{2 \sqrt{u}}$$</span></p>
<p>Another example
<span class="math-container">$$u=\sin(x)\implies x=\sin ^{-1}(u)\implies dx=\frac{du}{\sqrt{1-u^2}}$$</span></p>
<p>For sure, this can make some calculations longer, but I still think that it is clearer, not to say more "natural".</p>
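<p>Whichever way one writes the substitution, it is easy to sanity-check numerically (my own check, with the integrand chosen for the example): with <span class="math-container">$u=x^2$</span>, so <span class="math-container">$dx=\frac{du}{2\sqrt u}$</span>, one gets <span class="math-container">$\int_0^1 x\cos(x^2)\,dx=\int_0^1 \frac{\cos u}{2}\,du=\frac{\sin 1}{2}$</span>:</p>

```python
import math

def midpoint(f, a, b, n=100_000):
    # Simple midpoint-rule quadrature.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lhs = midpoint(lambda x: x * math.cos(x * x), 0.0, 1.0)
rhs = midpoint(lambda u: 0.5 * math.cos(u), 0.0, 1.0)

# Both sides of the substitution agree, and match the closed form.
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - 0.5 * math.sin(1.0)) < 1e-6
print(lhs)
```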
|
4,552,988 | <p>I want to prove by diagonalization that the set of surjective total computable functions from N to N is not recursively enumerable. I know that the result is trivial using Rice's theorem, but I am trying to prove it only by a direct diagonalization argument. However, supposing that we can enumerate the functions of the set, I am unable to construct a proper surjective and total function that cannot belong to the set.</p>
| Z Ahmed | 671,540 | <p>We have <span class="math-container">$f(x)=\int_0^{x} \frac{dt}{\sqrt{1+t^4}}$</span>, so by the Leibniz rule we have <span class="math-container">$f'(x)=\frac{1}{\sqrt{1+x^4}}$</span></p>
<p>Let <span class="math-container">$y=f(x)\implies x=f^{-1} (y)=g(y).$</span></p>
<p>Start with <span class="math-container">$f(g(x))=x$</span> or <span class="math-container">$g(f(x))=x$</span></p>
<p>Differentiate with respect to <span class="math-container">$x$</span> to get <span class="math-container">$g'(f(x)) f'(x)=1 \implies g'(y_0)=\frac{1}{f'(x_0)}$</span>,
where <span class="math-container">$(x_0,y_0)$</span> lies on the curve <span class="math-container">$y=f(x)$</span>. Here, we have <span class="math-container">$x_0=0=y_0$</span>, so
<span class="math-container">$$g'(0)=\frac{1}{f'(0)}=1$$</span></p>
<p>One may also write <span class="math-container">$$\frac{df^{-1}(y)}{dy}|_{y=y_0}=\frac{1}{f'(x_0)}$$</span></p>
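<p>A numerical cross-check of <span class="math-container">$g'(0)=1$</span> (an addition of mine, using a simple quadrature and the identity <span class="math-container">$g(f(x))=x$</span>, which makes the difference quotient of <span class="math-container">$g$</span> at <span class="math-container">$0$</span> equal to <span class="math-container">$x/f(x)$</span>):</p>

```python
import math

def f(x, n=20_000):
    # Midpoint-rule value of f(x) = integral of 1/sqrt(1 + t^4) over [0, x].
    h = x / n
    return sum(1.0 / math.sqrt(1.0 + ((k + 0.5) * h) ** 4)
               for k in range(n)) * h

# Since g(f(x)) = x, the difference quotient of g = f^{-1} at 0 is
# x / f(x); it should tend to 1 / f'(0) = 1 as x -> 0.
for x in (0.1, 0.01, 0.001):
    assert abs(x / f(x) - 1.0) < x ** 2  # in fact the error is about x^4 / 10
print(0.01 / f(0.01))
```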
|
4,552,988 | <p>I want to prove by diagonalization that the set of surjective total computable functions from N to N is not recursively enumerable. I know that the result is trivial using Rice's theorem, but I am trying to prove it only by a direct diagonalization argument. However, supposing that we can enumerate the functions of the set, I am unable to construct a proper surjective and total function that cannot belong to the set.</p>
| Joseph Fox | 1,106,755 | <p>From what you have, <span class="math-container">$g'(0) = \sqrt{1+g^4(0)}$</span>.</p>
<p>Now, <span class="math-container">$g(0) = f^{-1}(0)$</span>, and by definition, <span class="math-container">$f(g(0)) = 0$</span>. Note that <span class="math-container">$f(x) = 0$</span> only when <span class="math-container">$x=0$</span> since the integrated function is strictly positive. Hence, <span class="math-container">$g(0) = 0$</span> and <span class="math-container">$g'(0) = 1$</span>.</p>
|
199,889 | <p>What are applications of the theory of Berkovich analytic spaces? The analytification $X \mapsto X^{\mathrm{an}}$</p>
| ACL | 10,696 | <p>I would first recommend the paper of Antoine Ducros (<a href="http://webusers.imj-prg.fr/%7Eantoine.ducros/asterisque.pdf" rel="noreferrer">Espaces analytiques <span class="math-container">$p$</span>-adiques au sens de Berkovich</a>, Séminaire Bourbaki, exposé 958, 2006) for a general survey of the theory, with applications.</p>
<p>Here is a list of applications which I find striking, starting from those mentioned by Ducros's survey.</p>
<ul>
<li><p>Étale cohomology. Berkovich developed a good theory of étale cohomology for his analytic spaces, which had applications in the Langlands program (for example, in the proof by Harris-Taylor of the local Langlands conjecture).</p>
</li>
<li><p>Proof (by Berkovich) of a conjecture of Deligne that the vanishing/nearby cycles (for a scheme over a discrete valuation ring) only depend on the formal completion.</p>
</li>
<li><p>Non-archimedean analogue of the classical potential theory on Riemann surfaces (Thuillier, Favre/Rivera-Letelier, Baker/Rumely).</p>
</li>
<li><p>Non-archimedean equidistribution theorems in the framework of Arakelov geometry (myself, Favre/Rivera-Letelier, Baker/Rumely, Gubler, Yuan), with applications to the Bogomolov conjecture for abelian varieties of function fields (Gubler, Yamaki), algebraic dynamics of Manin-Mumford/Mordell-Lang type (Yuan/Zhang, Dujardin/Favre,...).</p>
</li>
<li><p>Berkovich spaces of <span class="math-container">$\mathbf Z$</span> (Poineau) have applications to complicated rings of power series with integral coefficients introduced by Harbater and to their Galois theory. (In some sense, a geometrization of Harbater's formal patching.)</p>
</li>
<li><p>Mirror symmetry (Kontsevich/Soibelman) via the study of non-archimedean degenerations of Calabi-Yau manifolds. Recent developments in birational geometry (Mustață/Nicaise, Nicaise/Xu, Temkin) and viz. a non-archimedean analogue of the Monge-Ampère equation (Boucksom/Favre/Jonsson, Yuan/Zhang, Liu Y.).</p>
</li>
<li><p>Relation with tropical geometry (Baker/Payne/Rabinoff, my work with Ducros, Gubler/Rabinoff/Werner,...)</p>
</li>
<li><p>Relations with non-archimedean Arakelov geometry (Gubler/Künnemann, Ducros and myself)</p>
<p>A notable feature of the Berkovich spaces is the presence of (sometimes canonical) closed subspaces endowed with canonical piecewise linear structures onto which the analytic spaces retract by deformation (Berkovich, Hrushovski/Loeser,...). Those subspaces (“skeleta”) carry a large amount of geometric information and are of tremendous use in the theory.</p>
</li>
</ul>
|
1,873,825 | <p>I can see why linear regression is linear, i.e., because it is represented by a line, but what does regression have to do with the term as a whole? </p>
<p>What is the meaning this word contributes to the term?</p>
| V. Vancak | 230,329 | <p>Semi-intuitive explanation: </p>
<p>Assume that we are interested in one's IQ. Not in a score that he might get in some IQ test, but rather in his true IQ value. So, we have to assume that there exists such a value. Let's denote it by $\mu$. However, it is impossible to measure it directly. As such, we can use IQ tests to estimate it. Denote by $X_i$ his score in the $i$th test. We can model this score by $X_i = \mu +\epsilon_i$, where $\epsilon_i$ is the random error of the $i$th test with $\mathbb{E}\epsilon_i = 0$ and $var(\epsilon_i) = \sigma^2$. That is, the score of the $i$th test is composed of his real value (signal) and some random error (noise). Because $\mathbb{E}X_i=\mu$, his scores will (in some sense) cluster around his real IQ value. Therefore, after $n$ such tests we will take his average score as the estimator of this value. This average will indeed tend to $\mu$, in the sense that the larger the number of tests he takes, the more accurately the sample average estimates the true IQ. </p>
<p>What exactly the random error is, is a more philosophical than statistical question. That is, it may stem from imperfections of the tool (tests may vary in difficulty), from subject-related factors (tiredness, mood, etc.), or even from some inherent property of the IQ itself (i.e., it is not a scalar but rather a random variable, in which case you may be interested in its mean value), or from some combination of the above. </p>
<p>Formally:
The linearity of a regression model does not mean that it is a straight line. Namely, any model that can be written in matrix notation as
$$
Y=X\beta +\epsilon,
$$<br>
is called linear. Special cases like $y=\beta x +\epsilon$ and $y=\beta_0 + \beta_1x +\epsilon $ are indeed <em>estimated</em> by straight lines, or can be viewed as straight lines (signal) perturbed by some noise $\epsilon$. Linearity means that the first derivatives of $y$ with respect to the parameters do not depend on the parameters: if $\partial y / \partial \beta_j = x_j$, $j=1,...,p$ with $x_1 = 1$, then the model is linear; if at least one of the derivatives depends on the parameters, then it is non-linear. The observations may be interpreted as random fluctuations around the mean $\mathbb{E}Y = X\beta$, where $\mathbb{E}\epsilon = 0$. The estimation methods try to estimate this mean by minimizing (usually) the squared error, i.e., the (squared empirical) deviation from this unknown mean. </p>
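<p>The "signal plus noise" picture above is easy to simulate (an illustration of mine; the numbers are invented for the example): model the test scores as $X_i=\mu+\epsilon_i$ and watch the sample average close in on $\mu$ as $n$ grows.</p>

```python
import random

random.seed(1)
mu = 100.0  # the (unobservable) true value we want to estimate

def scores(n, sigma=15.0):
    # X_i = mu + eps_i with E[eps_i] = 0 and Var(eps_i) = sigma^2.
    return [mu + random.gauss(0.0, sigma) for _ in range(n)]

for n in (10, 1_000, 100_000):
    print(n, round(sum(scores(n)) / n, 2))

# Law of large numbers: with many tests the average is close to mu.
avg = sum(scores(100_000)) / 100_000
assert abs(avg - mu) < 0.5
```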
|
1,833,495 | <p>I tried to solve this like this.</p>
<p>$x=1,y=1$ is a solution.</p>
<p>Now let $x=a$, $y=b$ with $a\geq 1$, $b\geq2$.</p>
<p>Then $11$ divides $11^b = 10^a+1$,</p>
<p>so $10^a \equiv 10 \pmod{11}$, and the order of $10$ modulo $11$ is $2$.</p>
<p>Then there is a contradiction.</p>
<p>Can I solve this problem by taking $x=a$, $y=b$ with $a\geq2$, $b\geq1$?</p>
| Ghartal | 83,884 | <p>Let's compute the $5$-adic valuation of both sides. By lifting the exponent,$$x=v_5(11^y-1)=v_5(y)+v_5(11-1)=v_5(y)+1,$$which gives $v_5(y)=x-1$, so we can write $y=5^{x-1}z$, where $z$ is a positive integer with $\gcd(z, 5)=1$. But for $x>1$ $$11^{5^{x-1}z}-1>11^{x}-1>10^x,$$which is a contradiction. Hence $x=1$, which leads to $y=1$.</p>
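A brute-force check over small exponents agrees with this conclusion (plain Python, exact integers; the equation $10^x+1=11^y$ is the one treated above):

```python
# Exhaustive search over small positive exponents for 10**x + 1 == 11**y.
solutions = [(x, y) for x in range(1, 60) for y in range(1, 60)
             if 10 ** x + 1 == 11 ** y]
# Only (1, 1) appears, matching the valuation argument.
```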
|
123,102 | <p>As far as I know, it is an open problem to give a formula counting transitive relations on an $n$-element set. Is it easier to count the idempotent relations, that is relations that are both transitive and interpolative? (A relation $\rho$ is interpolative when $x\rho y\implies((\exists z)\ x\rho z \wedge z\rho y).$)</p>
<p>Also, if we denote the number of transitive relations on an $n$-element set by $T_n$ and the number of idempotent relations by $I_n$, can we say what the asymptotic behavior of $I_n/T_n$ is?</p>
| Benjamin Steinberg | 15,934 | <p>The answer seems to be in Butler, K. K.-H.,The number of idempotents in (0,1)-matrix semigroups, Linear Algebra and Its Applications 5 (1972), 233–246. I will see if I have access to the journal and will tell you more.</p>
<p><strong>Edit.</strong> The paper is <a href="http://ac.els-cdn.com/0024379572900055/1-s2.0-0024379572900055-main.pdf?_tid=8eece726-80e4-11e2-9fcb-00000aab0f26&acdnat=1361973171_ff59deefabd771cffb05e77de54d3a1e" rel="noreferrer">here</a> for free. It counts the idempotents by D-class so it is not written down in a simple succinct formula. If you google idempotent boolean matrix there are further papers which may be of use.</p>
|
4,213,335 | <p>Consider this equation where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are positive integers.</p>
<p><span class="math-container">$$k = \frac{2^a - 1}{2^{a+b} - 3^b}$$</span></p>
<p>This equation has the trivial solution <span class="math-container">$k=1, a=1, b=1$</span>.</p>
<p>How would I find more solutions, or show that no more exist? I'm not asking anyone to solve it, just to explain how I might explore the solutions myself.</p>
<hr />
<p>I came up with this equation as one whose solution would imply one specific kind of cycle in the Collatz iteration. Since the Collatz conjecture is believed to hold, I expect that there is proof that this equation has no other solutions, and am interested to see what mathematical techniques can be used to eliminate just this one case.</p>
| Gottfried Helms | 1,714 | <p>Looking at old entries in my literature database, I found an interesting limiting formula for a lower bound of <span class="math-container">$2^{a+b}-3^b$</span>. Some short tinkering with it seems to show that you can prove your conjecture for all <span class="math-container">$a+b > 27$</span>.<br />
The formula is (<em>Stroeker/Tijdeman,'71</em>): <span class="math-container">$$ \mid 2^x - 3^y \mid \gt \exp(x (\log 2- \frac1{10})) \qquad \text{for all } x,y \in \mathbb N \quad \text{and } x\gt27 \quad \;^{[1]}\tag 1$$</span>
This can be applied to your equation. By (1) we can write
<span class="math-container">$$ 2^{a+b}-3^b \gt \mu ^{a+b} \qquad \text{where } \mu =1.80967483607... \tag 2$$</span>
and thus
<span class="math-container">$$ k = { 2^a-1\over 2^{a+b}-3^b} \lt { 2^a\over \mu^{a+b}} \qquad \text{for }a+b\gt 27 \tag 3$$</span>
Here the rhs can be found to be smaller than <span class="math-container">$1$</span> for <span class="math-container">$(a+b) \gt 27$</span>:
<span class="math-container">$$ \text{(rhs)}=\exp( a\cdot \ln2 - (a+b)(\ln2 - 1/10))\\
=\exp(0.1 a-( \ln2-0.1)b) \\
\approx \frac{1.1^a}{1.8^b} \lt 1 \tag 4$$</span>
and thus we can conclude
<span class="math-container">$$\implies k \lt 1\qquad \text{for }a+b\gt 27 \tag 5$$</span></p>
<p><span class="math-container">$\qquad\qquad\quad$</span><sub>(Hope I didn't mess with signs and computation...)</sub></p>
<p>Someone might even brush this up a bit...</p>
<hr>
<p><span class="math-container">$\;^{[1]}$</span>The citation of formula (1) is from</p>
<pre><code>R.J.STROEKER & R.TIJDEMAN
Diophantine equations (with appendix by P.L.Cijsouw, A.Korlaar & R.Tijdeman)
in: MATHEMATICAL CENTRE TRACTS 154, COMPUTATIONAL METHODS IN NUMBER THEORY; PART I;
MATHEMATISCH CENTRUM, AMSTERDAM 1982
</code></pre>
<p>and they attribute this result to W.J.Ellison in 1970/1971</p>
<pre><code>[25] ELLISON,W.J., Recipes for solving diophantine problems by Baker's method,
Sèm.Th.Nombr.,1970-1971,Exp.No.11, Lab.Thèorie Nombres,
C.N.R.S.,Talence,1971.
</code></pre>
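Independent of the bound, the exploration the asker wanted can be started by brute force (plain Python, exact integer arithmetic): search for pairs $(a,b)$ where $2^{a+b}-3^b$ is positive and divides $2^a-1$, so that $k$ is a positive integer. In the range below only the trivial solution appears:

```python
# Look for pairs (a, b) where 2**(a+b) - 3**b is positive and divides 2**a - 1,
# i.e. where k = (2**a - 1) / (2**(a+b) - 3**b) is a positive integer.
hits = []
for a in range(1, 100):
    for b in range(1, 100):
        d = 2 ** (a + b) - 3 ** b
        if d > 0 and (2 ** a - 1) % d == 0:
            hits.append((a, b, (2 ** a - 1) // d))
```

Such a search of course only rules out small exponents; the analytic bound above is what handles the rest.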
|
2,588,271 | <blockquote>
<p>Show that $\mathcal{L}\{t^{1/2}\}=\sqrt{\pi}/(2s^{3/2}), \: s>0$</p>
</blockquote>
<p>By the definition of Laplace transform we get:</p>
<p>$$\mathcal{L}\{t^{1/2}\} = \int_0^\infty t^{1/2}e^{-st} \, dt = \{x = \sqrt{st} \} = \dfrac{2}{s^{3/2}} \int_0^\infty e^{-x^2} x^2 \, dx. $$</p>
<p>A known and easily proved result is $\int_0^\infty e^{-x^2} \, dx = \dfrac{\sqrt{\pi}}{2}$. How (if possible) can I use this result to determine the integral above? I was thinking either integration by parts or perhaps a suitable coordinate transformation (e.g. polar coordinates) but my attempts failed. </p>
| Crescendo | 390,385 | <p>Start off with the substitution $u=x^2$, so that$$I=\frac 12\int\limits_0^{\infty}du\,\sqrt ue^{-u}$$The latter integral is simply the factorial function$$n!=\int\limits_0^{\infty}dt\,t^ne^{-t}$$evaluated at $n=\tfrac12$, so$$I=\frac 12\left(\frac 12\right)!=\frac {\sqrt{\pi}}4$$</p>
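A quick numerical sanity check of the final formula (plain Python, crude trapezoidal rule at $s=1$, with the integral truncated at $t=40$ where the integrand is negligible):

```python
import math

s = 1.0
# Trapezoidal approximation of the integral of sqrt(t) * exp(-s*t) over [0, inf),
# truncated at t = 40 (the integrand is negligible beyond that for s = 1).
h = 1e-3
total = 0.0
t = 0.0
prev = 0.0  # integrand value at t = 0
while t < 40.0:
    t += h
    cur = math.sqrt(t) * math.exp(-s * t)
    total += 0.5 * (prev + cur) * h
    prev = cur

expected = math.sqrt(math.pi) / (2 * s ** 1.5)
```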
|
474,260 | <p>We know that an irrational number, unlike a rational number, does not have a finite repeating block of digits.<br>
This means that we can find out which digit occurs at any given position of a rational number.<br>
But what about irrational numbers?<br>
For example:<br>
How can we find out which digit occurs at the fortieth position of $\sqrt{2}$, which equals $1.414213\ldots$?<br>
Is it possible to solve this kind of problem for any irrational number?</p>
| Hagen von Eitzen | 39,174 | <p>Let $\alpha$ be an irrational number. As long as there exists an algorithm that can decide whether $\alpha>q$ or $\alpha<q$ for any given rational $q$, you can obtain arbitrarily good rational approximations of $\alpha$. In particular, you can find upper and lower bounds good enough to uniquely determine any desired number of decimals.</p>
<p>For $\alpha=\sqrt 2$, the decision algorithm is quite simple: if $q=\frac nm$ with $n\in\mathbb Z, m\in\mathbb N$, then $\alpha<q\iff n>0\land n^2>2m^2$.</p>
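For $\sqrt 2$ the bounds can be pushed as far as desired with exact integer arithmetic. A quick sketch in Python (using `math.isqrt`, which computes integer square roots exactly) recovers the fortieth decimal asked about:

```python
from math import isqrt

n = 40  # how many decimal places we want
# floor(sqrt(2) * 10**n), computed with exact integer arithmetic:
digits = str(isqrt(2 * 10 ** (2 * n)))
# digits[0] is the leading '1'; digits[k] is the k-th decimal of sqrt(2).
fortieth = digits[n]
```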
|
474,260 | <p>We know that an irrational number, unlike a rational number, does not have a finite repeating block of digits.<br>
This means that we can find out which digit occurs at any given position of a rational number.<br>
But what about irrational numbers?<br>
For example:<br>
How can we find out which digit occurs at the fortieth position of $\sqrt{2}$, which equals $1.414213\ldots$?<br>
Is it possible to solve this kind of problem for any irrational number?</p>
| name | 407,177 | <p>In general, no.</p>
<p>Suppose that for every irrational number $r$ there were an algorithm that takes a natural number $n$ as input and returns the $n$-th digit of $r$.
The possible algorithms are countable, while the irrationals are not; hence it is not possible to have such an algorithm for every irrational.</p>
<p>However, such algorithms do exist for the so-called computable numbers.</p>
|
2,764,586 | <p>Let $S$ be the set of all complex numbers $z$ satisfying the rule $$|z-i|=\sqrt{2}|\bar{z}+1|$$</p>
<p>Show that $S$ contains points on a circle.</p>
<p>My attempt,</p>
<p>By substituting $z = x + yi$, and squaring both sides. But I can't get the circle form. </p>
| Hypergeometricx | 168,053 | <p>Let $z=x+iy$. Note that $\overline{z}=x-iy$.</p>
<p>$$\begin{align}
|z-i|&=\sqrt{2}|\overline{z}+1|\\
|z-i|^2&=2|\overline{z}+1|^2\\
|x+i(y-1)|^2&=2|(x+1)-iy|^2\\
x^2+(y-1)^2&=2\left((x+1)^2+y^2\right)\\
x^2+4x+2+y^2+2y-1&=0\\
(x+2)^2+(y+1)^2&=2^2
\end{align}$$</p>
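A quick numerical spot-check of the derivation (plain Python; the circle with centre $-2-i$ and radius $2$ is taken from the last line):

```python
import cmath
import math

center = complex(-2, -1)  # centre -2 - i, radius 2, as derived above
for k in range(12):
    theta = 2 * math.pi * k / 12
    z = center + 2 * cmath.exp(1j * theta)
    lhs = abs(z - 1j)                            # |z - i|
    rhs = math.sqrt(2) * abs(z.conjugate() + 1)  # sqrt(2) |conj(z) + 1|
    assert abs(lhs - rhs) < 1e-9
```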
|
1,903,495 | <p>Suppose $y_1=y_1(x_1,x_2)$ and $y_2=y_2(x_1,x_2)$,
so that</p>
<p>$$dy_1=\frac{\partial y_1}{\partial x_1}dx_1+\frac{\partial y_1}{\partial x_2} \, dx_2$$</p>
<p>$$dy_2=\frac{\partial y_2}{\partial x_1}dx_1+\frac{\partial y_2}{\partial x_2} \, dx_2$$</p>
<p>Then I took the product of the above two, but I am unable to reach the result.</p>
| applyb | 170,197 | <p>(a) $C(25,5)$</p>
<p>(b) $C(15,5)$</p>
<p>(c) Three balls must be red which is equal to $C(15,3)$ and two balls must be white which is equal to $C(10,2)$. So, in total we have $C(15,3)C(10,2)$</p>
<p>(d) Here we must count samples with four red balls (and one white) and samples with five red balls (and zero white). Apply the same rule as in (c) to compute the respective numbers of samples and sum them up: $C(15,4)C(10,1) + C(15,5)$ </p>
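For reference, the four counts are easy to evaluate with Python's `math.comb`:

```python
from math import comb

total_samples = comb(25, 5)                                  # (a)
all_red = comb(15, 5)                                        # (b)
three_red_two_white = comb(15, 3) * comb(10, 2)              # (c)
at_least_four_red = comb(15, 4) * comb(10, 1) + comb(15, 5)  # (d)
```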
|
1,972,079 | <p>I would like to split the splines of DXF files into lines and arcs in 2D for a graphics editor. From the DXF file, I have extracted the following data:</p>
<ul>
<li>degree of spline curve</li>
<li>number of knots and knot vectors</li>
<li>number of control points and their coordinates</li>
<li>number of fit points and their coordinates</li>
</ul>
<p>Using the extracted data, the following need to be found:</p>
<ul>
<li>start and end points of lines</li>
<li>start and end points, center point, and radius of arcs.</li>
</ul>
<p>Looking at the extracted data, I am confused about which control points control which knots.
I have found <a href="https://hakantiftikci.files.wordpress.com/2009/09/biarccurvefitting2.pdf" rel="nofollow">this paper</a> about biarc curve fitting. Is it only for two connected arcs, or is it also useful for splines with many knot points? Also, it still needs tangents to calculate the points of the arcs. Which algorithms should I use to find the points of the arcs and lines?</p>
| fang | 180,792 | <p>Biarc fitting finds two tangentially connected arcs, or one line and one arc, that meet the given two end points and two end tangents. You can use it as a core algorithm to approximate a spline with lines and arcs (connected with G1 continuity). The algorithm would be something like this:</p>
<ol>
<li>Compute the start point, end point, start tangent and end tangent of the spline.</li>
<li>Use biarc algorithm to find the two curves (two arcs or one line and one arc) that meet the two end points and two end tangents.</li>
<li>Compute the deviation between the two curves and the original spline. If the deviation is sufficiently small, you are done. If not, subdivide the spline at t=0.5 and repeat steps 1 to 3 for the two split splines.</li>
</ol>
<p>At the end, you should have a series of lines/arcs connected with tangent continuity that approximates the spline within a certain tolerance.</p>
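The recursive structure of steps 1 to 3 can be sketched in Python. This is only an illustration: a hypothetical `curve` evaluator stands in for the spline, and straight chords stand in for the fitted biarcs (a real implementation would call a biarc fitter and measure deviation against the fitted pieces):

```python
import math

# Stand-in for the spline evaluator; any point-evaluable parametric curve works.
def curve(t):
    return (t, math.sin(t))

def deviation(t0, t1, samples=20):
    # Max distance from sampled curve points to the chord through curve(t0), curve(t1).
    x0, y0 = curve(t0)
    x1, y1 = curve(t1)
    chord = math.hypot(x1 - x0, y1 - y0)
    worst = 0.0
    for i in range(1, samples):
        t = t0 + (t1 - t0) * i / samples
        x, y = curve(t)
        # Perpendicular distance from (x, y) to the line through the endpoints.
        d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / chord
        worst = max(worst, d)
    return worst

def subdivide(t0, t1, tol=1e-2):
    # Accept the fitted piece if it is close enough to the curve,
    # otherwise split at the parametric midpoint and recurse on both halves.
    if deviation(t0, t1) <= tol:
        return [(t0, t1)]
    tm = 0.5 * (t0 + t1)
    return subdivide(t0, tm, tol) + subdivide(tm, t1, tol)

segments = subdivide(0.0, math.pi)
```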
|
190,905 | <p>I want to make two general graphics something like the following.</p>
<p><a href="https://i.stack.imgur.com/1SNYt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1SNYt.png" alt="enter image description here"></a></p>
<p>I want to illustrate the approximation of the function by a polynomial. I have tried the following.</p>
<pre><code>Plot[{Sin[x] + 2, Sin[2 x] + 2}, {x, 0, 8}, AxesLabel -> {x, y},
Ticks -> {{0.8, 2.4, 4, 5.5, 7.1}, {0, 1, 2, 3}}]
</code></pre>
<p>I also want to put <span class="math-container">$ x_{0}, x_{1}, x_{2},...,x_{n} $</span> at the points where the graphs differ the most. I appreciate any help.</p>
| zhk | 8,538 | <p>Something like this?</p>
<pre><code> Plot[{Sin[x] + 2, Sin[2 x] + 2}, {x, 0, 8}, AxesLabel -> {x, y},
Ticks -> {{{1, "x[0]"}, {2, "x[1]"}, {3, ""}, {4, "x[3]"}, {5,
""}, {6, ""}, {7, "x[n]"}}, None},
PlotLabels -> Placed[{"y=f(x)", "y=p(x)"}, {Scaled[5], Above}],
AxesStyle -> Arrowheads[{0.0, 0.05}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/ONP4M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ONP4M.png" alt="enter image description here"></a></p>
<p>Or </p>
<pre><code>Plot[{Sin[x] + 2, Sin[2 x] + 2}, {x, 0, 8}, AxesLabel -> {x, y},
Ticks -> {{{1, "x[0]"}, {2, "x[1]"}, {3, ""}, {4, "x[2]"}, {5,
""}, {6, ""}, {7, "x[n]"}}, None},
AxesStyle -> Arrowheads[{0.0, 0.05}],
Epilog -> {Text[Style["y=f(x)", 22], Scaled[{0.5, 0.9}]],
Text[Style["y=p(x)", 22], Scaled[{0.25, 0.97}]]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/kOofz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kOofz.png" alt="enter image description here"></a></p>
|
829,433 | <p><img src="https://i.stack.imgur.com/FgnTN.jpg" alt="enter image description here"></p>
<p>Why is it that when I take the average of the column "per hour", it's different from the total?</p>
| RandomUser | 142,278 | <p>Let's normalize some of the numbers. For cases, I'll divide by $1140$. For hours, I'll divide by $103.01$.</p>
<pre><code>ASC CASES HOURS
Feeder/Invoices 3.26 1.82
Manual 2.59 2.94
Audit/Processes 2.26 7.63
Other 1.00 1.00
</code></pre>
<p>As you can see, the number of hours for Audit/Processes is much larger than the other ASCs, while the number of cases doesn't vary as much. This results in its lower per hour rate of $3$ having a larger sway than the other ASCs, since it alone represents over half the total hours.</p>
<p>If the number of hours were equal for each ASC, then you could just take the average of the per hour rates and it would match the total. But since each ASC's weight (its share of the total hours) is not equal, you can't just take the average of them.</p>
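The effect is easy to reproduce with made-up numbers (Python; the figures below are illustrative, not the spreadsheet's):

```python
# Illustrative numbers: one category with many cases in few hours,
# one with few cases in many hours.
cases = [80, 20]
hours = [10, 40]

rates = [c / h for c, h in zip(cases, hours)]  # per-hour rate of each category
mean_of_rates = sum(rates) / len(rates)        # naive average of the column: 4.25
overall_rate = sum(cases) / sum(hours)         # hour-weighted total: 100 / 50 = 2.0
```

The naive average of the column overweights the fast category; the true total rate weights each category by its hours.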
|
1,892,367 | <p>In Euclidean geometry two polygons are said to be similar if, by rotation and scaling, one can be transformed into the other, and vice versa. If we consider a general metric space, does this notion exist? I can see how this works for scaling: you consider $d(a_i , a_{i+1} )$, where $a_i$ are the vertices of the polygon; then two figures are similar if the distances of one can be scaled by the same constant to yield the distances of the other. My problem is more with rotation and angle. The notion of homeomorphism is too general, because all (non-self-intersecting) polygons are homeomorphic. </p>
| Eric Wofsey | 86,856 | <p>There are two different ways you can set this up in the language of metric spaces. If $X$ and $Y$ are metric spaces, say that a bijection $f:X\to Y$ is a <em>similarity</em> if there exists a constant $r>0$ such that $d(f(x),f(y))=rd(x,y)$ for all $x,y\in X$. There are then two definitions we could make:</p>
<ul>
<li>Let $X$ and $Y$ be metric spaces. Then $X$ and $Y$ are similar if there exists a similarity $f:X\to Y$.</li>
<li>Let $Z$ be a metric space and $X,Y\subseteq Z$ be subsets. Then $X$ and $Y$ are similar in $Z$ if there exists a similarity $f:Z\to Z$ such that $f(X)=Y$.</li>
</ul>
<p>Clearly, if $X$ and $Y$ are similar in $Z$, then they are similar by the first definition, since you can just restrict $f:Z\to Z$ to a map $X\to Y$ to get a similarity from $X$ to $Y$. The converse is not true in general, though: there might exist a similarity $X\to Y$ that can't be extended to a similarity from all of $Z$ to itself. However, when $Z=\mathbb{R}^2$, it turns out that any similarity between subsets of $Z$ extends to a similarity from $Z$ to itself. So for subsets of the plane, these two definitions coincide, and are the usual definition of similarity.</p>
<p>What does this have to do with rotations (and reflections and translations)? Well, (compositions of) those are exactly the isometries of the plane. And it turns out that every similarity from $\mathbb{R}^2$ to itself can be written as a composition of a scaling followed by an isometry. In general, though, the right definition of "similarity" is any map which multiplies distances by a fixed nonzero constant.</p>
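For finite point sets the first definition is easy to check directly: a bijection is a similarity iff all pairwise distances scale by one fixed constant $r>0$. A small sketch in Python, with hypothetical triangles where one is a scaled rotation of the other:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_similarity(X, Y, tol=1e-9):
    # Maps X[i] -> Y[i]; checks d(Y_i, Y_j) = r * d(X_i, X_j) for one fixed r.
    r = dist(Y[0], Y[1]) / dist(X[0], X[1])
    return r > 0 and all(
        abs(dist(Y[i], Y[j]) - r * dist(X[i], X[j])) <= tol
        for i in range(len(X)) for j in range(i + 1, len(X))
    )

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
# Y is X scaled by 3 and rotated a quarter turn: (x, y) -> (-3y, 3x).
Y = [(-3 * y, 3 * x) for (x, y) in X]
```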
|