| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,225 | <p>If $f: \mathbb{R} \to \mathbb{R}$ is a continuous function satisfying $f(x)=f(2x+1)$, then it's not too hard to show that $f$ is constant.</p>
<p>My question is: suppose $f$ is continuous and satisfies $f(x)=f(2x+1)$; can the domain of $f$ be restricted so that $f$ need not be constant? If yes, give an example of such a function.</p>
| Community | -1 | <p>As in <a href="https://mathoverflow.net/questions/31990/continuous-functions-remaining-constant">the previous proof</a> of $f$ being constant on $\mathbb{R}$, define $g(x) = f(x-1)$, so that $g(x) = g(2x)$; the domains of $f$ and $g$ are just shifted versions of each other.</p>
<p>Certainly, if the domain of $g$ is small enough, say $[2,3]$, then $g$ can be any continuous function, because the domain contains no $x$ and $2x$ at the same time. A more interesting question is: how large can we make the domain so that $g$ will still not be constant? The answer to this is suggested by <a href="https://math.stackexchange.com/questions/2225/function-satisfying-fxf2x1/2228#2228">JDH's answer</a>: if we remove only the single point 0, making the domain $\mathbb{R} \setminus \\{0\\}$, it is disconnected into two components which can independently have constant values.</p>
<p>How big can a domain be on which $g$ is not even locally constant? Remove an arbitrarily small interval around $0$. Take any non-constant continuous function $h$ which is periodic with unit period, and let $g(x) = h(\log_2 |x|)$. Then $g(x) = g(2x)$ for all $x$ in the domain, and $g$ is continuous there.</p>
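<p>A quick numerical sanity check of this construction (a sketch; here $h(t)=\sin(2\pi t)$ stands in for the non-constant unit-periodic function):</p>

```python
import math

def h(t):
    # any non-constant continuous function with unit period
    return math.sin(2 * math.pi * t)

def g(x):
    # defined for x away from 0; the |x| covers both components of the domain
    return h(math.log2(abs(x)))

# g(x) == g(2x), since log2|2x| = log2|x| + 1 and h has unit period
for x in [0.3, 1.7, 5.0, -0.3, -4.2]:
    assert abs(g(x) - g(2 * x)) < 1e-9
```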
|
1,278,848 | <p>Based on <a href="https://math.stackexchange.com/questions/1267021/let-m-subseteq-mathbbrk-manifold-topology-vs-trace-topology/1267760?noredirect=1#comment2573732_1267760">this</a> question I'd like to know: are there compact (sub)manifolds without boundary in $\mathbb{R}^n$? Because, as that question shows, the topology of the manifold has to be the trace topology; thus compact subspaces (in particular, <em>compact</em> manifolds) are characterized by the Heine-Borel theorem: they are precisely those sets in $\mathbb{R}^n$ that are closed and bounded.</p>
<p>But, as far as I know (and I'm just starting to read about manifolds and haven't got a good grasp on the formal definitions yet), manifolds without boundary aren't closed, so they can't be compact?</p>
| Tim Raczkowski | 192,581 | <p>The Riemann sphere is a compact complex manifold without boundary.</p>
|
1,278,848 | <p>Based on <a href="https://math.stackexchange.com/questions/1267021/let-m-subseteq-mathbbrk-manifold-topology-vs-trace-topology/1267760?noredirect=1#comment2573732_1267760">this</a> question I'd like to know: are there compact (sub)manifolds without boundary in $\mathbb{R}^n$? Because, as that question shows, the topology of the manifold has to be the trace topology; thus compact subspaces (in particular, <em>compact</em> manifolds) are characterized by the Heine-Borel theorem: they are precisely those sets in $\mathbb{R}^n$ that are closed and bounded.</p>
<p>But, as far as I know (and I'm just starting to read about manifolds and haven't got a good grasp on the formal definitions yet), manifolds without boundary aren't closed, so they can't be compact?</p>
| aGer | 191,887 | <p>Well, there are several examples. @Dorebell mentioned one.
Here are some other compact manifolds without boundary:</p>
<ul>
<li>Torus</li>
<li>Double Torus</li>
<li>Klein bottle</li>
</ul>
|
1,278,848 | <p>Based on <a href="https://math.stackexchange.com/questions/1267021/let-m-subseteq-mathbbrk-manifold-topology-vs-trace-topology/1267760?noredirect=1#comment2573732_1267760">this</a> question I'd like to know: are there compact (sub)manifolds without boundary in $\mathbb{R}^n$? Because, as that question shows, the topology of the manifold has to be the trace topology; thus compact subspaces (in particular, <em>compact</em> manifolds) are characterized by the Heine-Borel theorem: they are precisely those sets in $\mathbb{R}^n$ that are closed and bounded.</p>
<p>But, as far as I know (and I'm just starting to read about manifolds and haven't got a good grasp on the formal definitions yet), manifolds without boundary aren't closed, so they can't be compact?</p>
| Andrew D. Hwang | 86,418 | <p>$\newcommand{\Reals}{\mathbf{R}}$Comparing this question with your linked question, the central issue seems to be the term "closedness", which perhaps feels bothersome because manifolds are unions of open sets.</p>
<p>If that's really the question, the resolution comes down to "relative topology", how "open" and "closed" are defined for <em>subsets</em> of $\Reals^{n}$.</p>
<p>If $X \subset \Reals^{n}$ is an arbitrary non-empty set, we say $A \subset X$ is (<em>relatively</em>) <em>open in $X$</em> if there exists an open set $U \subset \Reals^{n}$ such that $A = X \cap U$. Relatively closed sets are defined similarly.</p>
<p>In particular, <em>every</em> set $X$ is both (relatively) open and closed as a subset of itself: The set $U = \Reals^{n}$ is both open and closed, and $X = X \cap \Reals^{n}$.</p>
<p>Now let's take an example of a compact manifold in $\Reals^{n}$, such as the $(n - 1)$-sphere
$$
S^{n-1} = \{x \text{ in } \Reals^{n} : \|x \| = 1\}.
$$
As the zero set of a continuous function $f:\Reals^{n} \to \Reals$, the sphere $S^{n-1}$ is closed in $\Reals^{n}$. Certainly, $S^{n-1}$ is not open in $\Reals^{n}$; in fact, $S^{n-1}$ has empty interior in $\Reals^{n}$.</p>
<p>As noted above, however, $S^{n-1}$ is both open and closed <em>as a subset of itself</em>. There's no contradiction because "open" and "closed" are relative concepts. (Remarkably and not completely obviously, "compactness" of $X$ is an intrinsic concept, not depending on how $X$ is viewed as a subset of a larger universe.)</p>
<p>Going back to the sphere, the hemispheres
$$
H_{i}^{+} = \{x \text{ in } S^{n-1} : x_{i} > 0\},\qquad
H_{i}^{-} = \{x \text{ in } S^{n-1} : x_{i} < 0\}
$$
are (relatively!) open subsets of $S^{n-1}$ (why?) constituting a covering by coordinate charts. It follows that $S^{n-1}$ is a compact manifold in $\Reals^{n}$.</p>
<p>(Incidentally, this is an "inefficient" covering of the sphere; stereographic projection allows the sphere to be covered with <em>two</em> coordinate charts, each covering the complement of one point.)</p>
|
297,812 | <p>If $a-b=b-c$, how can one find the value of $a^2-2b^2+c^2$?</p>
| Inquest | 35,001 | <p>\begin{align}
a-b&=b-c\\
a+c&=2b\\
(a+c)^2&=(2b)^2\\
\end{align}</p>
<blockquote class="spoiler">
<p>\begin{align}a^2+c^2+2ac&=4b^2\\a^2+c^2-2b^2&=2b^2-2ac\\\end{align}</p>
</blockquote>
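<p>The identity reached in the spoiler can be spot-checked on any arithmetic progression (a sketch; the numbers are arbitrary):</p>

```python
# any a, b, c with a - b = b - c
a, b, c = 11.0, 7.0, 3.0
assert a - b == b - c

lhs = a**2 + c**2 - 2 * b**2   # the quantity asked about
rhs = 2 * b**2 - 2 * a * c     # the form reached in the spoiler
assert abs(lhs - rhs) < 1e-9   # both equal 32 here
```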
|
1,029,868 | <p>Let $$
A=\begin{bmatrix}
1 & 1 & 2\\
1 & 2 & 1\\
2 & 1 & 1
\end{bmatrix}$$</p>
<p>Show that $ A^{-1}=\dfrac{1}{4}(-A^2+4A+I)$</p>
<p>I have absolutely no clue how to do this. Could someone be kind enough to explain and provide an answer? I believe it has something to do with the Cayley–Hamilton Theorem, as the question is from that problem set, but I don't understand how to use it to solve this problem. Your help is appreciated. Thanks </p>
| TenaliRaman | 29,755 | <ul>
<li>Show that $|\lambda I_3 - A| = \lambda^3 - 4 \lambda^2 - \lambda + 4$</li>
<li>Use the Cayley–Hamilton Theorem to show that $A^3 - 4 A^2 - A + 4I_3 = 0$</li>
<li>Show that $A^{-1}$ exists (how?)</li>
<li>Multiply the equation from step 2 by $A^{-1}$ and rearrange</li>
</ul>
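<p>These hints can be verified numerically with NumPy (a sketch, not part of the original answer):</p>

```python
import numpy as np

A = np.array([[1, 1, 2],
              [1, 2, 1],
              [2, 1, 1]], dtype=float)
I = np.eye(3)

# Step 1: characteristic polynomial is λ³ − 4λ² − λ + 4
assert np.allclose(np.poly(A), [1, -4, -1, 4])

# Step 2: Cayley–Hamilton gives A³ − 4A² − A + 4I = 0
assert np.allclose(A @ A @ A - 4 * (A @ A) - A + 4 * I, 0)

# Step 3: det(A) = −4 ≠ 0, so the inverse exists
# Step 4: multiply by A⁻¹ and rearrange: A⁻¹ = ¼(−A² + 4A + I)
A_inv = 0.25 * (-(A @ A) + 4 * A + I)
assert np.allclose(A_inv @ A, I)
```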
|
21,857 | <p>This semester I am attending a reading seminar on non-archimedean analytic geometry (a subject I know nothing about), roughly following the <a href="http://math.arizona.edu/~swc/aws/07/speakers/index.html">notes of Conrad</a>. </p>
<p>Reading Conrad's notes (and e.g. those of Bosch) it struck me that the prime spectrum of affinoid algebras never seems to appear, only the maximal spectrum. Can somebody explain the reason for this? </p>
| Kevin Buzzard | 1,384 | <p>I am surprised that Brian got to this one first without making what I thought was another obvious comment: affinoids are Jacobson rings! A function which is zero at all points of an affinoid rigid space corresponds to an element of your affinoid algebra which is in all maximal ideals and hence (by Jacobson-ness) is nilpotent. For a general ring this certainly isn't true: the intersection of all prime ideals is the nilpotent elements, but the intersection of all maximal ideals might be bigger (think of a 1-dimensional local ring, for example). </p>
|
21,857 | <p>This semester I am attending a reading seminar on non-archimedean analytic geometry (a subject I know nothing about), roughly following the <a href="http://math.arizona.edu/~swc/aws/07/speakers/index.html">notes of Conrad</a>. </p>
<p>Reading Conrad's notes (and e.g. those of Bosch) it struck me that the prime spectrum of affinoid algebras never seems to appear, only the maximal spectrum. Can somebody explain the reason for this? </p>
| Emerton | 2,874 | <p>Another point to bear in mind, in addition to those raised by Brian and Kevin, is that generic points (in the sense of non-maximal prime ideals) don't make sense in analytic geometry.</p>
<p>For example, the Tate algebra $\mathbb Q_p\langle\langle x\rangle \rangle$ contains
one non-maximal prime ideal, the zero ideal. Geometrically it corresponds to the closed
disk $|x| \leq 1$. Where in this disk would the generic point corresponding to the zero
ideal live? The point is that, unlike in algebraic geometry, in rigid analytic geometry one can find disjoint open subsets of irreducible spaces such as the closed disk.</p>
<p>In Berkovich's theory, one does have generic points, but they consist of more data than just a prime ideal; one must also choose a norm on the residue field. (This relates to Brian's comment.) Geometrically, this choice of norm pins down where on the rigid space the generic point lives.</p>
|
1,322,076 | <p>Hey, can anybody help me with the following proof? I am trying to solve the following limit using epsilon-delta. I have found the limit to be 1/3 using the squeeze theorem and have got this far, but I am a bit confused about where to go now, since I have both a $3x$ and a $\sin x$ when trying to find an epsilon.
Thanks in advance!<img src="https://i.stack.imgur.com/eUP2B.jpg" alt="enter image description here"></p>
| xanthousphoenix | 209,166 | <p>You are using the wrong definition of limits for this case. When dealing with limits at infinity, you want to use <a href="http://www.millersville.edu/~bikenaga/math-proof/limits-at-infinity/limits-at-infinity.html" rel="nofollow">this</a> definition. As RowanS stated, you can use the fact that the absolute value of sin(x) is bounded in the proof.</p>
|
2,010,255 | <p>While finding the Taylor Series of a function, <strong>when</strong> are you allowed to substitute? And <strong>why</strong>?</p>
<p>For example:</p>
<p>Around $x=0$ for $e^{2x}$ I apparently am allowed to substitute $u=2x$ and then use the known series for $e^u$. But for $e^{x+1}$ I am not allowed to substitute $u=x+1$.</p>
<p>I know the technique for finding the Taylor Series of $e^{x+1}$ around $x=0$ by taking $e^{x+1}=e\times e^x$. However, I am looking for understanding and intuition for when and why it is allowed to apply substitution.</p>
<p>Note: there are several question that are similar to this one, but I have found none that actually answers the question "why"; or that shows a complete proof.</p>
<hr>
<p>EDIT: Thanks to the answer of Markus Scheuer I should refine the question to cases where the series is finite, for example $n\to3$</p>
| Enrico M. | 266,764 | <p>The quantity $2x$ is a product, and as $x\to 0$ it remains small.</p>
<p>The quantity $x+n$ for $n\neq 0$ is no longer small, so you are no longer expanding around zero; you are expanding around $n$.</p>
|
92,867 | <p>Suppose we have some random variable $X$ that ranges over some sample space $S$. We also have two probability models $F$ and $G$. Let $f(x)$ and $g(x)$ be the probability density functions for these distributions. Does the following quantity $$ \log \frac{f(x)}{g(x)} = \log \frac{P(F|x)}{P(G|x)}- \log \frac{P(F)}{P(G)}$$ basically tell us how much more likely it is that model $F$, rather than model $G$, is the true model?</p>
| Elvis | 21,435 | <p>I am totally confused by the last comment made by Michael (the answer is ok, it is the link with logistic regression which went too far for me). Logistic regression is to be used when you have pairs of observations (X, Y) where Y is a binary variable (taking values in {0,1}) which is modeled as a Bernoulli variable $\mathcal{B}(p)$ the parameter of which depends on the value $x$ taken by $X$ : $\mathrm{logit}(p) = \beta_0 + \beta_1 x$. Here you don’t observe a variable Y taking value 1 when the model is F and 0 when it is G; the model is fixed beforehand and would not change along the observations... and you wouldn’t write $\mathrm{logit} P(F) = \beta_0 + \beta_1 x$. To me, this doesn't make any sense.</p>
<p>I will slightly reword Michael’s answer, just to give you some additional keywords. If you have a single observation $x$, then $f(x)$ is the <em>likelihood</em> of the model F, denote it by $L(F; x) = f(x)$, and $g(x)$ is the likelihood of the model G, denote it by $L(G;x) = g(x)$. As you stated, the <em>likelihood ratio</em> $L(F ; x)/L(G ;x) = f(x)/g(x)$ tells you how much the data support F against G.</p>
<p>If you have <em>prior probabilities</em> for F and G, denoted by P(F) and P(G) = 1 - P(F), then you can write <em>posterior probabilities</em> P(F|x) and P(G|x). You have
$$ P(F | x) = { L(F ; x) P(F) \over L(F;x) P(F) + L(G;x) P(G)},$$
$$ P(G | x) = { L(G ; x) P(G) \over L(F;x) P(F) + L(G;x) P(G)},$$
and
$$ {P(F | x) \over P(G |x) } = {P(F) \over P(G)} \times {L(F ; x) \over L(G ;x)}.$$
This is as Michael stated an application of Bayes’ theorem. The quantity P(F)/P(G) = P(F)/(1-P(F)) is called the <em>odds</em> of the model F. You can take the log of this last equality to get an additive statement, which is very usual (cf Michael’s answer). The quantity L(F;x)/L(G;x) is called a <em>Bayes factor</em>.</p>
<p>If you have $n$ independent observations $\mathbf{x} = x_1, \dots, x_n$, the same thing holds with $L(F ; \mathbf{x}) = \prod_i f(x_i)$ and $L(G ; \mathbf{x}) = \prod_i g(x_i)$.</p>
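<p>A small numeric illustration of these identities (an assumption for illustration only: $F$ and $G$ are taken here as unit-variance Gaussians centered at $0$ and $1$):</p>

```python
import math

def f(x):  # density under model F = N(0, 1)
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def g(x):  # density under model G = N(1, 1)
    return math.exp(-(x - 1)**2 / 2) / math.sqrt(2 * math.pi)

x = 0.2          # a single observation
pF = 0.3         # prior P(F); prior P(G) = 1 - pF

bayes_factor = f(x) / g(x)                     # L(F;x) / L(G;x)
post_F = f(x) * pF / (f(x) * pF + g(x) * (1 - pF))
post_G = 1 - post_F

# posterior odds = prior odds × Bayes factor
assert abs(post_F / post_G - (pF / (1 - pF)) * bayes_factor) < 1e-12
```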
|
121,909 | <p>I came across this question while studying primitive roots. I know it has something to do with the fact that if the order of $a$ is $m$ then for every $k \in \mathbb{Z}$, the order of $a^k$ is $m/(m,k)$. The question is as follows: </p>
<blockquote>
<p>Let $p$ be an odd prime. Prove that $a^2$ is never a primitive root $\pmod{p}$. </p>
</blockquote>
<p>I would appreciate any help. Thank you.</p>
| bgins | 20,321 | <p>If $(a,p)=1$, then $1\equiv a^{p-1}=(a^2)^\frac{p-1}{2}$ implies that $\text{ord}_p(a^2)\le\frac{p-1}{2}$.</p>
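<p>A brute-force check of this bound for small odd primes (a sketch; <code>order</code> is a naive helper written for illustration, not a library function):</p>

```python
def order(a, p):
    # multiplicative order of a modulo p (assumes gcd(a, p) = 1)
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

# ord_p(a²) ≤ (p−1)/2 < p−1, so a² is never a primitive root mod p
for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    for a in range(1, p):
        assert order(a * a % p, p) <= (p - 1) // 2
```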
|
1,237,528 | <p>$$ \displaystyle \int_{0}^{z} \sqrt {1 + \tan^2\left(\dfrac{\pi}{4} \dfrac{z}{H}\right)} \, dz $$</p>
<p>gives</p>
<p>$$ \dfrac{4H}{\pi} \sinh^{-1} \left( \tan\left( \dfrac{\pi}{4} \dfrac{z}{H} \right) \right) $$</p>
<p>Please advise on a solution.</p>
<p>edit:- </p>
<p>I can get to </p>
<p>$$\dfrac{4H}{\pi} \displaystyle {\int_{0}^{\dfrac{\pi z}{4H}}} \sec {u} {du}$$</p>
<p>Please help after this step ?</p>
| Mark Viola | 218,419 | <p>There is always more than one way to represent a solution. </p>
<p>So, let's note a couple of things here.</p>
<hr>
<p>First, note that the hyperbolic sine function $\sinh x =\frac12 (e^x-e^{-x})$ has inverse function </p>
<p>$$\sinh^{-1}x=\log\left(x+\sqrt{1+x^2}\right)$$</p>
<p>To see this, let's solve the equation $x=\sinh y$ for $y$. Thus,</p>
<p>$$\begin{align}
x&=\frac12 (e^y-e^{-y})\\
0&=e^y-e^{-y}-2x\\
0&=(e^y)^2-2xe^y-1
\end{align}$$</p>
<p>whereupon solving the quadratic formula for $e^y$ reveals that</p>
<p>$$e^y=x + \sqrt{x^2+1}$$</p>
<p>Observe that we rejected the "negative" square root solution since $e^y>0$. Finally, taking logarithms on both sides yields the result</p>
<p>$$y=\sinh^{-1}x =\log\left(x+\sqrt{1+x^2}\right)$$</p>
<hr>
<p>The second thing to note is that the integral of the secant function $\int \sec x dx$ is given by</p>
<p>$$\begin{align}
\int \sec x dx &= \log |\tan x+\sec x|+C\\
&=\log |\tan x+\sqrt{1+\tan^2x}|+C\\
&=\sinh^{-1}(\tan x)+C
\end{align}$$</p>
<p>where in going from the first line to the second, we restricted $x$ such that $|x|<\pi/2$. So, we see that, as always, there are alternative ways to express the result. One way here is to use the log function while another way is to use the inverse hyperbolic sine function.</p>
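<p>The agreement between the two representations can be spot-checked numerically on $|x|<\pi/2$:</p>

```python
import math

# log(tan x + sec x) agrees with asinh(tan x) when |x| < π/2,
# since sec x = √(1 + tan²x) there (cos x > 0)
for x in [-1.2, -0.5, 0.0, 0.7, 1.3]:
    lhs = math.log(math.tan(x) + 1 / math.cos(x))
    rhs = math.asinh(math.tan(x))
    assert abs(lhs - rhs) < 1e-9
```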
|
138,723 | <p>By cleaning up a notebook, I mean: how can I hide all the code in the notebook so that the end-users can't see it? I saw Eric Schulz's famous interactive calculus textbook; the users can't see the code, and there are no cell brackets on the right-hand side of the CDF. </p>
| m_goldberg | 3,066 | <h3>Update</h3>
<p>I have incorporated Kuba's improvement into the code.</p>
<p>Here is how I would do it.</p>
<ol>
<li><p>In a working notebook (not the target notebook) put the following code.</p>
<pre><code>With[{nb = target},
  SetOptions[nb, ShowCellBracket -> False];
  SetOptions[#, CellOpen -> False] & /@ Cells[nb, CellStyle -> "Input"];]

With[{nb = target},
  SetOptions[nb, ShowCellBracket -> True];
  SetOptions[#, CellOpen -> True] & /@ Cells[nb, CellStyle -> "Input"];]
</code></pre>
<p>The first code cell will do the clean-up. The second lets you undo it if that becomes necessary.</p></li>
<li><p>Now in the target notebook evaluate </p>
<pre><code>EvaluationNotebook[]
</code></pre>
<p>This will return a notebook object which will look something like</p>
<p>$\qquad$<a href="https://i.stack.imgur.com/qkV8y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qkV8y.png" alt="nbobj"></a></p></li>
<li><p>Cut the notebook object from the target notebook.</p></li>
<li><p>Select the first token <code>target</code> in the working notebook and paste the notebook object over it.</p></li>
<li><p>Do the same thing for the other <code>target</code> token.</p></li>
<li><p>Delete the <code>EvaluationNotebook[]</code> code from the target notebook.</p></li>
<li><p>Evaluate the <strong><em>first</em></strong> of the two code cells.</p></li>
</ol>
<p>After pasting the notebook object into working notebook, that notebook should look like this:</p>
<p><a href="https://i.stack.imgur.com/ykUgs.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ykUgs.png" alt="code"></a></p>
|
2,090,512 | <p>You can calculate the <strong>volume of a parallelepiped</strong> by $|(A \times B) \cdot C|$, where $A$, $B$ and $C$ are vectors. I wonder, does the order matter? If it does, how is it determined? I know I can just put it in a matrix and calculate the determinant, but I would like to know how it works in this case. </p>
<p>Thanks!</p>
| David K | 139,123 | <p>If you know that
<a href="https://math.stackexchange.com/questions/314275/scalar-triple-product-why-equivalent-to-determinant">the scalar triple product is equal to the determinant of a matrix
whose rows are the components of the vectors</a>,
and if you recall the effects of operations on the rows of a matrix,
then you can show that swapping <em>any two</em> of the vectors $A,B,C$ in the
scalar triple product $(A \times B) \cdot C$
will swap the corresponding rows of the matrix
and therefore will flip the sign of the determinant but will not
change the magnitude of the determinant.
Hence the interchange of any two vectors (which could be
$B$ and $C$ or could be $A$ and $C$, not just $A$ and $B$)
will likewise flip the sign of the scalar triple product
but will not change its magnitude.</p>
<p>Any reordering of the three vectors $A$, $B$, and $C$
can be accomplished by either one or two interchanges of two vectors.
For example, to get from $(A,B,C)$ to $(B,C,A)$,
swap the first two vectors, then the last two.
Hence of the six possible ways to order the three vectors
$A$, $B$, and $C$, three orderings will give you positive
scalar triple products and three will give you
negative scalar triple products,
but all scalar triple products will have the same magnitude.</p>
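<p>These sign rules can be spot-checked with NumPy on random vectors:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))   # three random vectors in R³

def triple(u, v, w):
    return np.dot(np.cross(u, v), w)    # (u × v) · w

t = triple(A, B, C)
# swapping any two vectors flips the sign but keeps the magnitude
assert np.isclose(triple(B, A, C), -t)
assert np.isclose(triple(C, B, A), -t)
assert np.isclose(triple(A, C, B), -t)
# a cyclic permutation (two swaps) leaves the value unchanged
assert np.isclose(triple(B, C, A), t)
```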
|
2,762,715 | <blockquote>
<p>Let $a, b$ be positive real numbers such that $a - b = 10$. What is the smallest value of the constant $k$ for which $\sqrt {x^2 + ax} - \sqrt{x^2 + bx} < k$ for all $x>0$? </p>
</blockquote>
<p>I don't get how to approach this problem. Any help would be appreciated. </p>
| trancelocation | 467,003 | <p>An elementary way ($x>0$):
$$f(x) = \sqrt{x^2 + ax} - \sqrt{x^2+bx} = \frac{x^2 + ax - (x^2+bx)}{\sqrt{x^2 + ax} + \sqrt{x^2+bx}} =\frac{(a - b)x}{x\sqrt{1 + \frac{a}{x}} + x\sqrt{1 + \frac{b}{x}}}= \frac{(a - b)}{\sqrt{1 + \frac{a}{x}} + \sqrt{1 + \frac{b}{x}}} < \frac{a-b}{2}= 5$$</p>
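<p>A numeric spot-check of this bound (the choice $a=12$, $b=2$ is an arbitrary pair with $a-b=10$):</p>

```python
import math

a, b = 12.0, 2.0   # any positive pair with a - b = 10

def f(x):
    return math.sqrt(x * x + a * x) - math.sqrt(x * x + b * x)

# f(x) < 5 for all x > 0, with f(x) → 5 as x → ∞, so k = 5 is smallest
for x in [0.1, 1.0, 10.0, 1e3, 1e6]:
    assert f(x) < 5
assert f(1e8) > 5 - 1e-3
```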
|
3,187,451 | <p>Can you help me find a function <span class="math-container">$f(X,Y)$</span>, such that <span class="math-container">$f(1,x) = f(x,1) = f(\ln x, \ln x)$</span>?</p>
<p>Either for all <span class="math-container">$x$</span>, or in the limit as <span class="math-container">$x$</span> tends to infinity, all three expressions must become equal.
Actually, in computer science there are arrays in which an update takes constant <span class="math-container">$O(1)$</span> time and a cumulative sum takes linear <span class="math-container">$O(n)$</span> time.</p>
<p>In cumulative arrays, update takes <span class="math-container">$O(n)$</span> time and sum is constant time.</p>
<p>While in segment trees, both are <span class="math-container">$O(\ln n)$</span></p>
<p>So I thought that there might be some <span class="math-container">$f(\text{update}, \text{sum}) = \text{constant function}$</span>.</p>
| AsdrubalBeltran | 62,547 | <p><span class="math-container">$$f(X,Y)=(X-1)(Y-1)(X-Y)$$</span>
<span class="math-container">$f(x,1)=f(1,x)=f(\ln{x},\ln{x})=0$</span></p>
|
3,203,282 | <p>Given that <span class="math-container">$C[-\pi,\pi]$</span> is complete:
How can we prove, by using the supremum norm, that the space:</p>
<p><span class="math-container">$$C_p[-\pi,\pi]=\{f\in C[-\pi,\pi]\mid f(-\pi)=f(\pi)\}$$</span></p>
<p>is also complete? thank you!</p>
| Frank W | 552,735 | <p>Another possible way is to differentiate <span class="math-container">$\sin^2x$</span> and observe that<span class="math-container">$$[\sin^2x]'=2\sin x\cos x=\sin 2x$$</span></p>
<p>Thus, using the Taylor series for <span class="math-container">$\sin x$</span> gives<span class="math-container">$$\sin 2x=\sum\limits_{n\geq0}(-1)^n\frac {(2x)^{2n+1}}{(2n+1)!}=2\sum\limits_{n\geq0}(-1)^n\frac {4^nx^{2n+1}}{(2n+1)!}$$</span></p>
<p>Now integrate with respect to <span class="math-container">$x$</span> to get the expansion!</p>
|
3,033,812 | <p>My problem: If there are 5 different candies in a jar and a child wants to take out one or more candies, how many ways can this be done? </p>
<p>I said it is <span class="math-container">$^5C_1 -\; ^5C_0 = 5-1 = 4$</span> ways. The <span class="math-container">$-1$</span> for the unwanted case using this trick:</p>
<p>At least/At most = total number of combinations - unwanted cases</p>
<p>But according to my answer sheet, it said <span class="math-container">$2^5 -1$</span> is the answer.</p>
<p>So my question is that in what situations should I use exponents and what impact does it have? </p>
| Kyky | 423,726 | <p>Think of it like this: the child can either take a specific candy or not take it. This means we have <span class="math-container">$2$</span> possibilities for whether this candy is taken or not. Given we have <span class="math-container">$5$</span> candies, we have <span class="math-container">$2\cdot2\cdot2\cdot2\cdot2=2^5$</span> ways of selecting from the <span class="math-container">$5$</span> candies. Since the case in which <span class="math-container">$0$</span> candies are taken must be excluded, we have <span class="math-container">$2^5-1$</span> ways. Using <span class="math-container">$^5C_1-{}^5C_0$</span> would work only if we could pick exactly one candy (although the <span class="math-container">$^5C_0$</span> is unnecessary).</p>
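<p>The count can be confirmed by brute-force enumeration:</p>

```python
from itertools import combinations

candies = ['A', 'B', 'C', 'D', 'E']   # 5 different candies
ways = [combo for r in range(1, 6) for combo in combinations(candies, r)]
assert len(ways) == 2**5 - 1          # 31 non-empty selections
```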
|
2,358,838 | <p>I can see the answer to this in my textbook; however, I am not quite sure how to solve this for myself . . . the book has the following:</p>
<blockquote>
<p>To take advantage of the inductive hypothesis, we use these steps:</p>
<p>$ 7^{(k+1)+2} + 8^{2(k+1)+1} = 7^{k+3} + 8^{2k+3} $</p>
<p>$$
= 7\cdot7^{k+2} + 8^{2}\cdot8^{2k+1}\\
= 7\cdot7^{k+2} + 64\cdot8^{2k+1}\\
= 7(7^{k+2}+8^{2k+1})+57\cdot8^{2k+1}\\ $$</p>
</blockquote>
<p>While the answer is apparent to me <em>now</em>, how exactly would I go about figuring out a similar algebraic manipulation if I were to see something like this on a test? Is there an algorithm or a way of thinking about how to break this down that I'm missing? I think I'm most lost regarding the move from the second-to-last to the last equation.</p>
<p><em>Source: Discrete Mathematics and its Applications (7th ed), Kenneth H. Rosen (p.322)</em></p>
| Ross Millikan | 1,827 | <p>The steps to the third line seem routine, trying to find the terms of the inductive hypothesis. Once you are at the third line you have to decide to split the $64$ into $7+57$. You might just notice that both numbers are important in the problem and try it. You might notice that splitting out $7$ of the second term is what is needed to complete the inductive hypothesis in the first term, so try it. When you see $57$ is left you should be convinced that is the right track.</p>
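<p>Reading the manipulation as a proof that $57$ divides $7^{n+2}+8^{2n+1}$ for every $n\ge 0$, the claim can be brute-force checked:</p>

```python
# base case: 7² + 8¹ = 57; the induction step splits 64 as 7 + 57
for n in range(200):
    assert (7**(n + 2) + 8**(2*n + 1)) % 57 == 0
```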
|
1,889,957 | <p>I'm a bit rusty on my math notations and I'd like to write that:</p>
<blockquote>
<p>There exists a unique element $z$ such that $z$ belongs to the collection of values returned by $f(x,y)$</p>
</blockquote>
<p>Honestly I'm not just rusty I'm also mostly ignorant of math except from basic functions and basic matrix operations.</p>
<p>I'm in the context of computer programming and I want to write down a specification, and for my own curiosity (and fun) I was wondering how this would be written in a more scientific way.</p>
<p>I'd go with something like:</p>
<blockquote>
<p>$\exists z\in S$ such that...</p>
</blockquote>
<p>And then I'm lost with how to specify that $S$ is the result of $f(x,y)$.</p>
<p>Some usage of $P(z)$ maybe ?</p>
<p>Also $S$ means "set" right? So it doesn't work because $z$ may be present multiple times, but IDK if there's a symbol for such "collection".</p>
<p>I've googled around but it's a bit hard to find the right keywords for searching something like this.</p>
<p>Thank you.</p>
<p><strong>EDIT</strong>:</p>
<p>I knew I'd make a mistake while posting this... I've mistakenly named $z$ as $x$, leading to the confusion that it is the same $x$ that is in $f(x,y)$, while actually it is not.</p>
<p>So I have renamed it $z$, sorry about that.</p>
<p><strong>EDIT 2</strong>:</p>
<p>There are multiples solutions that have been provided in the answers and for this I'm thankful, but I can't identify if one matches what I want.</p>
<p>And there are also a lot of questions which I believe are due to me not giving enough details or not expressing myself correctly, and I realize now that I have made a mistake on the way so I will try to add more details and maybe it will help to make the answers converge.</p>
<p>I have a function, say $f$, that given two arguments, say $x\in X$ and $y\in Y$, will return a collection of values, say $S$ whose values are taken from $Z$.</p>
<p>And I want $S$ to contain only $z$ (possibly multiple times).</p>
<p>Given $S1$ and $S2$, the respective results of $f(x1,y1)$ and $f(x2,y2)$, there cannot be a given $z$ that is present in both $S1$ and $S2$.</p>
<p>For the record, $y1$ may be equal to $y2$.</p>
<p>Also $y$ depends on $x$ so I guess we start with the second part of what @celtschk said in his comment and simplify:</p>
<blockquote>
<p>$$S = \bigg\{f(x, g(x)) : x ∈ X \bigg\} ⊂ Z$$</p>
</blockquote>
<p>But the first part should be:</p>
<blockquote>
<p>"$z$ exists at least once and is unique in $S$"</p>
</blockquote>
<p>and I don't know how to write that :)</p>
| Roby5 | 243,045 | <p>Let </p>
<p>$$P=\frac{a}{b+c}+\frac{b}{c+d}+\frac{c}{d+a}+\frac{d}{a+b}$$</p>
<p>$$Q=\frac{b}{b+c}+\frac{c}{c+d}+\frac{d}{d+a}+\frac{a}{a+b}$$</p>
<p>$$R=\frac{c}{b+c}+\frac{d}{c+d}+\frac{a}{d+a}+\frac{b}{a+b}$$</p>
<p>We have $$Q+R=4\tag{1}$$</p>
<p>$$P+Q=\frac{a+b}{b+c}+\frac{b+c}{c+d}+\frac{c+d}{d+a}+\frac{d+a}{a+b} \overbrace{\geq}^{\color{red}{\text{AM} \geq \text{GM}}} 4\tag{2}$$</p>
<p>$$\begin{align} P+R&=\frac{a+c}{b+c}+\frac{b+d}{c+d}+\frac{c+a}{d+a}+\frac{d+b}{a+b}\\ &= \left(\frac{a+c}{b+c}+\frac{a+c}{d+a}\right)+ \left(\frac{b+d}{c+d}+\frac{b+d}{a+b}\right) \\
&\overbrace{\geq}^{\color{blue}{\text{Titu's Lemma}}} \frac{4(a+c)}{a+b+c+d}+\frac{4(b+d)}{a+b+c+d}=4\tag{3}
\end{align}
$$</p>
<p>Using $(1),(2)$ and $(3)$, we get the desired result.</p>
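<p>Reading the conclusion as $P \ge 2$ (add $(2)$ and $(3)$ and subtract $(1)$: $2P \ge 8 - (Q+R) = 4$), a numeric spot-check:</p>

```python
import random

def P(a, b, c, d):
    return a/(b+c) + b/(c+d) + c/(d+a) + d/(a+b)

random.seed(1)
for _ in range(10000):
    a, b, c, d = (random.uniform(0.01, 100) for _ in range(4))
    assert P(a, b, c, d) >= 2 - 1e-12

# equality holds at a = b = c = d
assert abs(P(1, 1, 1, 1) - 2) < 1e-12
```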
|
390,129 | <p>Let <span class="math-container">$O$</span> be a <span class="math-container">$d$</span>-dimensional rotation matrix (i.e., it has real entries and <span class="math-container">$OO^T = O^TO = I$</span>). Let <span class="math-container">$\mathbf{x}$</span> be a uniformly random bitstring of length <span class="math-container">$d$</span>, i.e., <span class="math-container">$\mathbf{x} \sim U(\{0,1\}^d)$</span>. In other words, <span class="math-container">$\mathbf{x}$</span> is a vertex of the Hamming cube, selected uniformly at random. I would like to show that there exists a <span class="math-container">$C > 0$</span> such that
<span class="math-container">$$\mathbb{P}\left[\|O\mathbf{x}\|_1 \leq \frac{d}{4}\right] \leq 2^{-Cd}.$$</span>
I am horribly stuck; any ideas on how to approach this problem would be very much appreciated. Below are some of my own attempts. This question is cross-posted at Math Stack Exchange <a href="https://math.stackexchange.com/questions/4099958/probability-of-ell-1-norms-of-vertices-of-the-rotated-hamming-cube">here</a>.</p>
<hr>
<p><strong>Observation 1:</strong> If <span class="math-container">$O = I$</span>, then the statement holds.</p>
<p>If <span class="math-container">$O = I$</span>, then <span class="math-container">$\|O\mathbf{x}\|_1 = \|\mathbf{x}\|_1$</span> is simply the number of ones in the bitstring. Among the <span class="math-container">$2^d$</span> choices for <span class="math-container">$\mathbf{x}$</span>, the number of choices that satisfies <span class="math-container">$\|\mathbf{x}\|_1 \leq d/4$</span> is</p>
<p><span class="math-container">$$1 + \binom{d}{1} + \binom{d}{2} + \cdots + \binom{d}{\lfloor d/4\rfloor} \leq 2^{dH(\lfloor d/4\rfloor/d)} \leq 2^{dH(1/4)},$$</span>
hence the probability is upper bounded by <span class="math-container">$2^{-d(1-H(1/4))}$</span>. Here, <span class="math-container">$H(\cdot)$</span> is the binary entropy function, i.e., <span class="math-container">$H(p) = -p\log_2(p) - (1-p)\log_2(1-p)$</span>.</p>
<p><strong>Observation 2:</strong> Numerical experiments support this result. Below is a plot of the probability versus the dimension, where <span class="math-container">$O$</span> is selected at random:</p>
<p><a href="https://i.stack.imgur.com/sI9dn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sI9dn.png" alt="Plot of the probability versus the dimension" /></a></p>
<p>The blue line is the probability. The orange line is the bound derived in the case where <span class="math-container">$O = I$</span>.</p>
<p>For comparison, here is the same numerical experiment, but with <span class="math-container">$O = I$</span>:</p>
<p><a href="https://i.stack.imgur.com/ksyxQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ksyxQ.png" alt="enter image description here" /></a></p>
<p>Thus, it appears that the introduction of <span class="math-container">$O$</span> decreases the probability.</p>
<p>Both plots are obtained by sampling <span class="math-container">$100000$</span> <span class="math-container">$\mathbf{x}$</span>'s at random. The code is here:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import random
from scipy.stats import ortho_group

H = lambda p : -p * np.log2(p) - (1-p) * np.log2(1-p)
C = 1 - H(1/4)
print(C)

N = 100000
ds, Ps = [], []
for d in range(2, 40):
    O = ortho_group.rvs(dim = d)
    # O = np.eye(d)
    P = 0
    for _ in range(N):
        x = random.choices(range(2), k = d)
        if np.linalg.norm(O @ x, ord = 1) <= d/4:
            P += 1/N
    print(d, P)
    ds.append(d)
    Ps.append(P)

fig = plt.figure()
ax = fig.gca()
ax.plot(ds, Ps)
ax.plot(ds, [2**(-C*d) for d in ds])
ax.set_yscale('log')
ax.set_xlabel('d')
ax.set_ylabel('P')
plt.show()
</code></pre>
| Marco | 143,536 | <p>Here is an attempt to the problem for a worst-case <span class="math-container">$O$</span>, with worse constants. So fix <span class="math-container">$O$</span>, letting <span class="math-container">$o_i$</span> denote its <span class="math-container">$i$</span>th row, and take <span class="math-container">$X$</span> random in <span class="math-container">$\{0,1\}^d$</span>.</p>
<ol>
<li><p>We claim that <span class="math-container">$E |\langle o_i, X\rangle| \ge cst$</span>. To see this, write <span class="math-container">$$\langle o_i, X\rangle = \langle o_i, \frac{{\bf 1}}{2}\rangle + \langle o_i, (X - \frac{{\bf 1}}{2})\rangle$$</span> and assume WLOG that the first term on the RHS is non-negative. The second term on the RHS is a weighted sum of Rademacher random variables, and so with probability at least <span class="math-container">$\frac{1}{20}$</span> it is above its standard deviation, which is <span class="math-container">$\Omega(1)$</span> (see for example <a href="https://www.math.uni.wroc.pl/~pms/files/16.1/Article/16.1.9.pdf" rel="nofollow noreferrer">this paper</a> of Oleszkiewicz).</p>
</li>
<li><p>Adding over all <span class="math-container">$i$</span>'s, the result holds in expectation: <span class="math-container">$E \|OX\|_1 \ge cst\cdot d$</span>. But since the function <span class="math-container">$x \mapsto \|Ox\|_1$</span> is <span class="math-container">$\sqrt{d}$</span>-Lipschitz wrt <span class="math-container">$\ell_2$</span> (and convex), we should be able to use concentration to say that the probability that we get below this mean minus <span class="math-container">$\frac{cst \cdot d}{2}$</span> is at most <span class="math-container">$e^{-cst' \cdot d}$</span> (see for example Corollary 4.23 of van Handel's <a href="https://web.math.princeton.edu/~rvan/APC550.pdf" rel="nofollow noreferrer">notes</a>). This gives the result.</p>
</li>
</ol>
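<p>A small Monte-Carlo sanity check of step 1 (a sketch assuming NumPy; it uses a QR-based random orthogonal matrix rather than a worst-case one, so it only illustrates the claim rather than proving it):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d = 30
# A random orthogonal matrix from the QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
X = rng.integers(0, 2, size=(50_000, d))        # uniform random bitstrings
per_coord = np.abs(X @ Q.T).sum(axis=1).mean() / d
# per_coord estimates E||QX||_1 / d, which the argument says is Omega(1)
assert per_coord > 0.2
```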
|
3,659,413 | <p>I'm reading the book "Topological Graph Theory" by Gross and I've gone through a fair bit of it. It seems like the entire book is leading up to being able to imbed a group onto a surface, and I have no idea why you would want to do that.
I am a physics major and not very advanced in math.
Any insight would be appreciated!</p>
| Jonas Linssen | 598,157 | <p>Many graph theoretic problems become easy when it is known that the graph is planar ie. can be embedded in the sphere. For example the isomorphism problem for planar graphs can be solved in polynomial time, they can be 5colored in polynomial time (I don’t know about the time complexity of finding a 4coloring though) etc. I would guess that the complexity of these problems increases with the complexity of the (in some sense minimal) surfaces you can embedd into. I would bet that this has been studied extensively, but I don’t know a reference. Moreover I bet that one can classify graphs by the surfaces they embedd into.</p>
|
2,900 | <p>I saved an <code>InterpolationFunction</code> in a ".mx" files using <code>DumpSave</code> on a variable that was scoped by a <code>Module</code>. Here is a stripped-down example:</p>
<pre><code>Module[{interpolation},
interpolation=Interpolation[Range[10]];
DumpSave["interpolation.mx", interpolation];
]
</code></pre>
<p>Is there a way to find out the variable name, presumably of the form <code>interpolation$nnn</code>, of the expression when I <code>Get</code> the interpolation? It is not apparent what the variable is when using</p>
<pre><code><<"interpolation.mx"
</code></pre>
<p>Next time I will not use a <code>Module</code> for scoping the save variable, but meantime I'd like to access the saved data and assign it to a new variable.</p>
| JxB | 63 | <p>Here is another method, although I don't know how to capture the symbol name programatically...</p>
<pre><code>On[General::newsym];
Get["/tmp/test.mx"];
Off[General::newsym];
(* General::newsym: Symbol a$1772 is new. >> *)
</code></pre>
|
4,092,994 | <p>The question is</p>
<blockquote>
<p>Find the solutions to the equation <span class="math-container">$$2\tan(2x)=3\cot(x) , \space 0<x<180$$</span></p>
</blockquote>
<p>I started by applying the tan double angle formula and recipricoal identity for cot</p>
<p><span class="math-container">$$2* \frac{2\tan(x)}{1-\tan^2(x)}=\frac{3}{\tan(x)}$$</span>
<span class="math-container">$$\implies 7\tan^2(x)=3 \therefore x=\tan^{-1}\left(-\sqrt\frac{3}{7} \right)$$</span>
<span class="math-container">$$x=-33.2,33.2$$</span></p>
<p>Then by using the quadrants
<a href="https://i.stack.imgur.com/QFDTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFDTs.png" alt="quadrant" /></a></p>
<p>I was led to the final solution that <span class="math-container">$x=33.2,146.8$</span>; however, the answer in the book has an additional solution of <span class="math-container">$x=90$</span>. I understand the reasoning that <span class="math-container">$\tan(180)=0$</span> and <span class="math-container">$\cot(x)$</span> tends to zero as x tends to 90, but how was this solution found?</p>
<p>Is there a process for consistently finding these "hidden answers"?</p>
| David G. Stork | 210,401 | <p>Perhaps this graph will help reveal the answers (abscissa in radians):</p>
<p><a href="https://i.stack.imgur.com/UDvZW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UDvZW.png" alt="enter image description here" /></a></p>
|
4,092,994 | <p>The question is</p>
<blockquote>
<p>Find the solutions to the equation <span class="math-container">$$2\tan(2x)=3\cot(x) , \space 0<x<180$$</span></p>
</blockquote>
<p>I started by applying the tan double angle formula and recipricoal identity for cot</p>
<p><span class="math-container">$$2* \frac{2\tan(x)}{1-\tan^2(x)}=\frac{3}{\tan(x)}$$</span>
<span class="math-container">$$\implies 7\tan^2(x)=3 \therefore x=\tan^{-1}\left(-\sqrt\frac{3}{7} \right)$$</span>
<span class="math-container">$$x=-33.2,33.2$$</span></p>
<p>Then by using the quadrants
<a href="https://i.stack.imgur.com/QFDTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFDTs.png" alt="quadrant" /></a></p>
<p>I was led to the final solution that <span class="math-container">$x=33.2,146.8$</span>; however, the answer in the book has an additional solution of <span class="math-container">$x=90$</span>. I understand the reasoning that <span class="math-container">$\tan(180)=0$</span> and <span class="math-container">$\cot(x)$</span> tends to zero as x tends to 90, but how was this solution found?</p>
<p>Is there a process for consistently finding these "hidden answers"?</p>
| g.kov | 122,782 | <blockquote>
<p>Find the solutions to the equation<br />
<span class="math-container">\begin{align} 2\tan(2x)=3\cot(x),\quad 0^\circ<x<180^\circ \tag{1}\label{1}\end{align}</span></p>
</blockquote>
<p>As it was already noted, <span class="math-container">$\tan x$</span> is not defined
on the whole range <span class="math-container">$(0^\circ,180^\circ)$</span>, but
<span class="math-container">$\cot x$</span> is, so if we use it instead in \eqref{1}:</p>
<p><span class="math-container">\begin{align}
\frac2{\cot 2x}&=3\cot(x)
,\\
\frac{4\cot x}{\cot^2 x-1}
&= 3\cot x
,
\end{align}</span></p>
<p>we get the missing third solution, <span class="math-container">$\cot x=0$</span>.</p>
|
2,529,262 | <p>I have five real numbers $a,b,c,d,e$ and their arithmetic mean is $2$. I also know that the arithmetic mean of $a^2, b^2,c^2,d^2$, and $e^2$ is $4$. Is there a way by which I can prove that the range of $e$ (or any ONE of the numbers) is $[0,16/5]$. I ran across this problem in a book and am stuck on it. Any help would be appreciated.</p>
| Michael Rozenberg | 190,319 | <p>By C-S $$(1^2+1^2+1^2+1^2)(a^2+b^2+c^2+d^2)\geq(a+b+c+d)^2$$ or</p>
<p>$$4(a^2+b^2+c^2+d^2)\geq(a+b+c+d)^2$$ or</p>
<p>$$4(20-e^2)\geq(10-e)^2$$ or
$$(e-2)^2\leq0$$ or
$$e=2.$$</p>
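<p>A quick check of the algebra in the last two steps, verifying the polynomial identity at integer points (plain Python; since both sides are quadratics, three points would already pin it down):</p>

```python
# 4(20 - e^2) - (10 - e)^2 expands to -5(e - 2)^2, which is <= 0 with
# equality exactly at e = 2.
for e in range(-10, 11):
    assert 4 * (20 - e**2) - (10 - e)**2 == -5 * (e - 2)**2
```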
<p>This method works in the general case.</p>
<p>Given: $a+b+c+d+e=k$ and $a^2+b^2+c^2+d^2+e^2=l$.</p>
<p>Find the range of $e$. </p>
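<p>For the general case, the same Cauchy-Schwarz step gives a quadratic constraint on $e$ (one possible way to finish the stated exercise; it assumes $5l\geq k^2$, which Cauchy-Schwarz on all five variables forces anyway):</p>
<p>$$4(l-e^2)\geq (k-e)^2 \iff 5e^2-2ke+k^2-4l\leq 0 \iff \frac{k-2\sqrt{5l-k^2}}{5}\leq e\leq \frac{k+2\sqrt{5l-k^2}}{5}.$$</p>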
|
3,258,249 | <p><span class="math-container">$\lim\limits_{n\to\infty}{\sum\limits_{k=n}^{5n}{k-1 \choose n-1}(\frac{1}{5})^{n}(\frac{4}{5})^{k-n}}$</span></p>
<p>It's clear that we can simplify the limit a little bit, after which we get:</p>
<p><span class="math-container">$\lim\limits_{n\to\infty}{(\frac{1}{4})^{n}\sum\limits_{k=n}^{5n}{k-1 \choose n-1}(\frac{4}{5})^{k}}$</span></p>
<p>I could further simplify the expression, but I feel like there's a more elegant solution. </p>
<p>Give me a hint, please</p>
| user10354138 | 592,552 | <p><strong>Hint for a probabilistic proof</strong>: look at the negative binomial distribution as the sum of independent geometric distributions, and apply the central limit theorem.</p>
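<p>To make the hint concrete: the sum equals <span class="math-container">$P(T\le 5n)$</span>, where <span class="math-container">$T$</span> is the number of Bernoulli(<span class="math-container">$1/5$</span>) trials needed to collect <span class="math-container">$n$</span> successes, so <span class="math-container">$E[T]=5n$</span> and the central limit theorem suggests the limit <span class="math-container">$\frac{1}{2}$</span>. A quick numerical check (plain Python; exact binomials, floats for the powers):</p>

```python
from math import comb

def partial_sum(n, p=0.2):
    """P(T <= 5n) for the negative binomial waiting time T of n successes."""
    q = 1 - p
    return sum(comb(k - 1, n - 1) * p**n * q**(k - n)
               for k in range(n, 5 * n + 1))

# n = 1 reduces to P(first success within 5 trials) = 1 - 0.8^5
assert abs(partial_sum(1) - (1 - 0.8**5)) < 1e-12
# the value drifts toward 1/2 as n grows
assert abs(partial_sum(80) - 0.5) < 0.1
```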
|
77,504 | <p>I'm in the embarrassing situation that I want to ask a question that
was <a href="https://mathoverflow.net/questions/14175/how-to-learn-about-shimura-varieties">already asked</a>, but (for complicated reasons) never answered. I'd
like to try with a blank slate.</p>
<p>Shimura varieties show connections to a lot of interesting
mathematical subjects. They're a topic of active research and have
been of importance in number theory and the Langlands program.</p>
<p>However, the theory has a bit of a reputation: for heavy
prerequisites; for a large and difficult-to-penetrate body of
literature; for seminar talks that spend a minimum of half an hour
getting past the definitions. ("Aren't you assuming that the
polarizations are principal here?" "I don't see why that has
cocompact center.")</p>
<p>Let's suppose that a poor graduate student doesn't have the best
access to the experts, but has gone to lengths to make themselves
familiar with "the basics" on modular curves and Shimura curves.
There is still a bewildering abundance of new material and new
ideas to absorb:</p>
<ul>
<li><p>Abelian schemes.</p></li>
<li><p>Reductive algebraic groups and the switch to the adelic perspective.</p></li>
<li><p>Representation theory and the switch in perspective on
modular/automorphic forms.</p></li>
<li><p>$p$-divisible groups and their various equivalent formulations.</p></li>
<li><p>Moduli problems and geometric invariant theory.</p></li>
<li><p>Deformation theory.</p></li>
<li><p>Polarizations. (Yes, I think this deserves its own bullet point.)</p></li>
<li><p>(This is a placeholder for any and all major topics that I forgot.)</p></li>
</ul>
<p>Obviously there is a lot to learn, and there's no magic way to obtain
enlightenment.</p>
<p>But for an outsider, it's not clear where to start, what a good place
to read is, what really constitutes the "core" of the subject, or even
if one might cobble together a basic education while learning things
that will prove useful outside of this specialty.</p>
<p>Is a route from modular curves to Shimura varieties that will help
both with understanding the basics of the subject, and with getting an
idea of where to learn more?</p>
<p>Thank you (and sorry for the side commentary).</p>
<p>(Often these kinds of questions ask for "<a href="https://mathoverflow.net/questions/11219/what-is-a-good-roadmap-for-learning-shimura-curves">roadmaps</a>"; but "roadmap"
seems like it presupposes the existence of roads.)</p>
| Andrei Halanay | 1,220 | <p>I think a great introduction to this subject is given in two articles of J.S. Milne: one from the book
<a href="http://www.claymath.org/library/proceedings/cmip04c.pdf">James Arthur, David Ellwood, Robert Kottwitz (eds.)-Harmonic Analysis, the Trace Formula, and Shimura Varieties</a> and a shorter <a href="http://www.jmilne.org/math/xnotes/svh.pdf">version</a> available at his website. Especially the first one does not assume many pre-requisites and goes from modular curves to Shimura varieties (with a view towards the Langlands program).</p>
|
3,087,933 | <p>I read in the book <em>A First Course in Probability</em> by Sheldon Ross the following statement:</p>
<blockquote>
<p><strong>Technical Remark.</strong> We have supposed that <span class="math-container">$P(E)$</span> is defined for all the events <span class="math-container">$E$</span> of the sample space. Actually, when the sample space is an uncountably infinite set, <span class="math-container">$P(E)$</span> is defined only for a class of events called measurable. However, this restriction need not concern us as all events of any practical interest are measurable.</p>
</blockquote>
<p>The set of real numbers is uncountably infinite, so how can we calculate the probability of, say, picking an even number?</p>
| BadAtAlgebra | 611,990 | <p>The answer is that there is a <span class="math-container">$0$</span> chance that a randomly chosen real number is even: the even integers form a countable set, which has measure zero, while the set of real numbers is much larger (a bigger type of infinity, to be precise).</p>
|
65,270 | <p>On <a href="https://crypto.stanford.edu/pbc/notes/elliptic/divisor.html" rel="nofollow noreferrer">this page</a>, the author states:</p>
<blockquote>
<p>It turns out this definition can be extended to points of order 2, and also the point O (when we homogenize the functions and work over the projective plane). Moreover, every rational function has as many zeroes as poles counting multiplicities, because of the way we extend the definition to the point at infinity.</p>
</blockquote>
<p>I'm interested as to why every rational function has as many zeros as poles. That seems to be caused by "homogenization", so how homogenization works and why do we need it?</p>
<p>Why do we need to worry about points of order 2?</p>
| Gooz | 16,218 | <p>Given a rational function $f$ on an elliptic curve (or a smooth projective curve) $E$, we have that $$\sum_{x\in E} \textrm{ord}_x(f) =0. $$ If $\textrm{ord}_x(f) >0$, we say that $f$ has a zero of order $\textrm{ord}_x(f)$ at $x$. If $\textrm{ord}_x(f)<0$, we say that $f$ has a pole of order $-\textrm{ord}_x(f)$ at $x$. The statement I just gave is usually phrased as "the degree of a principal divisor is zero".</p>
<p>Here $\textrm{ord}_x$ is the discrete valuation on (the fraction field of) the local ring of $E$ at $x$. If you want to compute it for a rational function $f$ which is regular on an open affine $U$ containing $x$ you can choose an element $\pi \in \mathcal{O}(U)$ which generates the maximal ideal of $\mathcal{O}_{X,x}$ and write $f= u \pi^n$ on $U$ with $u$ a unit in the local ring. Then $\textrm{ord}_x(f) = n$. In general, to compute the order of $f$ at $x$ one writes $f$ as a quotient of two regular functions in an open affine neighborhood of $x$.</p>
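<p>A concrete illustration of the degree-zero statement on the projective line (a stand-in for the elliptic-curve case, where the same bookkeeping applies): take $f(x) = \frac{x^2}{x-1}$ on $\mathbb{P}^1$. Then $f$ has a zero of order $2$ at $x=0$ and a pole of order $1$ at $x=1$; writing $f$ in the coordinate $t = 1/x$ at infinity gives $f = \frac{1}{t(1-t)}$, a pole of order $1$ at $\infty$. The orders sum to $2 - 1 - 1 = 0$, as claimed.</p>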
|
4,122,425 | <p>Let’s say a corona test is correct with <code>p=0.8</code>. If I now take two tests. What’s the probability that I get a correct result?</p>
<p>I thought of <code>0.8*0.8</code>, but that makes no sense, since the probability should not decrease, and <code>0.8+0.8</code> gives a probability over 1, which makes no sense either. Or maybe this is a Bayes probability example?</p>
<p>Edit:
I would like to extend my question: what’s the probability that I test negative with one test, and with two tests? There the probability with two tests should increase if I am actually negative, shouldn't it?
Thanks for the answers.</p>
| Garo | 526,127 | <ul>
<li>If <code>0.8</code> would be the probability of a correct one then <code>1 - 0.8 = 0.2</code> would be the probability of a incorrect one</li>
<li><code>0.8 * 0.8 = 0.64</code> will be the probability that they are <strong>both</strong> correct</li>
<li>Which means that the reverse: <code>1 - 0.64 = 0.36</code> is the probability that <strong>at least 1</strong> is <strong>in</strong>correct</li>
<li>The probability that they are <strong>both in</strong>correct will be <code>0.2 * 0.2 = 0.04</code></li>
<li>The probability that <strong>at least 1</strong> is correct will become: <code>1 - 0.04 = 0.96</code></li>
<li>The probability that <strong>exactly</strong> 1 will be correct will be <code>1 - 0.64 - 0.04 = 0.32</code></li>
<li>The probability that it will correct <strong>followed by in</strong>correct will be <code>0.32 / 2 = 0.16</code></li>
<li>It will also be <code>0.16</code> for <strong>in</strong>correct <strong>followed by</strong> correct</li>
</ul>
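<p>The arithmetic above can be reproduced by enumerating the four outcomes directly (a plain-Python sketch; <code>True</code> means "the test result is correct"):</p>

```python
p = 0.8
prob = {(a, b): (p if a else 1 - p) * (p if b else 1 - p)
        for a in (True, False) for b in (True, False)}

both_correct   = prob[(True, True)]                         # 0.8 * 0.8  = 0.64
both_incorrect = prob[(False, False)]                       # 0.2 * 0.2  = 0.04
at_least_one   = 1 - both_incorrect                         #              0.96
exactly_one    = prob[(True, False)] + prob[(False, True)]  # 0.16 + 0.16 = 0.32

assert abs(sum(prob.values()) - 1) < 1e-12
```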
<p><strong>NOTE: This is just the math we are ignoring a lot of things here.</strong></p>
<p>2 important examples (of the many):</p>
<ul>
<li>The most common case of wrong results is incorrect testing, so <code>0.8</code> for the general population might be e.g. <code>0.6</code> for each test in your case if it's always the same person taking the test</li>
<li>In real life you should not only take the probability of correct or incorrect into account, instead you should use the probabilities of:
<ul>
<li>correct positive</li>
<li>correct negative</li>
<li>incorrect positive</li>
<li>incorrect negative</li>
</ul>
</li>
</ul>
|
<p>I know the answer is $n=6$, but I can't figure out how to solve it.
I tried dividing by $n!$, but that didn't work because there isn't one on the RHS to cancel... I also tried using Gamma function properties, but that didn't work either... </p>
<p>Any help would be appreciated.</p>
<p>Thanks.</p>
| SlipEternal | 156,808 | <p>Multiply both sides by 5!. That gives you:</p>
<p>$n!((n+1)(n+2)-1) = 330\cdot 5!$</p>
<p>So, you now have the general format of a solution. We know that $n!$ divides $330\cdot 5!$, so $n\le 6$. Trial and error will get you there quickly.</p>
<p>$1!(2\cdot 3-1) = 5\cdot 1! \neq 330\cdot 5!$</p>
<p>$2!(3\cdot 4-1) = 11\cdot 2! \neq 330\cdot 5!$</p>
<p>$3!(4\cdot 5-1) = 19\cdot 3! \neq 330\cdot 5!$</p>
<p>$4!(5\cdot 6-1) = 29\cdot 4! \neq 330\cdot 5!$</p>
<p>$5!(6\cdot 7-1) = 41\cdot 5! \neq 330\cdot 5!$</p>
<p>$6!(7\cdot 8-1) = 55\cdot 6! = 55\cdot 6\cdot 5! = 330\cdot 5!$</p>
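<p>The trial-and-error step is easy to automate (plain Python; the search cap follows from $n!$ having to divide $330\cdot 5!$, and $7!$ does not divide $39600$):</p>

```python
from math import factorial

target = 330 * factorial(5)   # 39600
solutions = [n for n in range(1, 7)   # 7! does not divide 39600, so n <= 6
             if factorial(n) * ((n + 1) * (n + 2) - 1) == target]
assert solutions == [6]
```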
|
5,231 | <p>I have coordinates for 4 vertices/points that define a plane and the normal/perpendicular.
The plane has an arbitrary rotation applied to it.</p>
<p>How can I 'un-rotate'/translate the points so that the plane has rotation 0 on x,y,z ?</p>
<p>I've tried to get the plane rotation from the plane's normal:</p>
<pre><code>rotationX = atan2(normal.z,normal.y);
rotationY = atan2(normal.z,normal.x);
rotationZ = atan2(normal.y,normal.x);
</code></pre>
<p>Is this correct ?</p>
<p>How do I apply the inverse rotation to the position vectors ?</p>
<p>I've tried to create a matrix with those rotations and multiply it with the vertices,
but it doesn't look right.</p>
<p>At the moment, I've wrote a simple test using <a href="http://processing.org/" rel="nofollow noreferrer">Processing</a> and can be seen <a href="http://lifesine.eu/so/vertex_rotation/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>float s = 50.0f;//scale/unit
PVector[] face = {new PVector(1.08335042,0.351914703846,0.839020013809),
new PVector(-0.886264681816,0.69921118021,0.839020371437),
new PVector(-1.05991327763,-0.285596489906,-0.893030643463),
new PVector(0.909702301025,-0.63289296627,-0.893030762672)};
PVector n = new PVector(0.150384, -0.500000, 0.852869);
PVector[] clone;
void setup(){
size(400,400,P3D);
smooth();
clone = unRotate(face,n,true);
}
void draw(){
background(255);
translate(width*.5,height*.5);
if(mousePressed){
rotateX(map(mouseY,0,height,0,TWO_PI));
rotateY(map(mouseX,0,width,0,TWO_PI));
}
stroke(128,0,0);
beginShape(QUADS);
for(int i = 0 ; i < 4; i++) vertex(face[i].x*s,face[i].y*s,face[i].z*s);
endShape();
stroke(0,128,0);
beginShape(QUADS);
for(int i = 0 ; i < 4; i++) vertex(clone[i].x*s,clone[i].y*s,clone[i].z*s);
endShape();
}
//get rotation from normal
PVector getRot(PVector loc,Boolean asRadians){
loc.normalize();
float rz = asRadians ? atan2(loc.y,loc.x) : degrees(atan2(loc.y,loc.x));
float ry = asRadians ? atan2(loc.z,loc.x) : degrees(atan2(loc.z,loc.x));
float rx = asRadians ? atan2(loc.z,loc.y) : degrees(atan2(loc.z,loc.y));
return new PVector(rx,ry,rz);
}
//translate vertices
PVector[] unRotate(PVector[] verts,PVector no,Boolean doClone){
int vl = verts.length;
PVector[] clone;
if(doClone) {
clone = new PVector[vl];
for(int i = 0; i<vl;i++) clone[i] = PVector.add(verts[i],new PVector());
}else clone = verts;
PVector rot = getRot(no,false);
PMatrix3D rMat = new PMatrix3D();
rMat.rotateX(-rot.x);rMat.rotateY(-rot.y);rMat.rotateZ(-rot.z);
for(int i = 0; i<vl;i++) rMat.mult(clone[i],clone[i]);
return clone;
}
</code></pre>
<p>Any syntax/pseudo code or explanation is useful.</p>
<p>What I'm trying to achieve is this:
If I have a rotated plane:
<img src="https://i.stack.imgur.com/bZ1fn.png" alt="rotated plane"></p>
<p>How can I move the vertices to get something that has no rotation:
<img src="https://i.stack.imgur.com/ogFR0.png" alt="plane with no rotations"></p>
<p>Thanks!</p>
<p><strong>UPDATE:</strong></p>
<p>@muad</p>
<p>I'm not sure I understand. I thought I was using matrices for rotations.
PMatrix3D's rotateX, rotateY, rotateZ calls should do the rotations for me.
Doing it manually would be declaring 3d matrices and multiplying them.
Here's a little snippet to illustrate this:</p>
<pre><code> PMatrix3D rx = new PMatrix3D(1, 0, 0, 0,
0, cos(rot.x),-sin(rot.x), 0,
0, sin(rot.x),cos(rot.x) , 0,
0, 0, 0, 1);
PMatrix3D ry = new PMatrix3D(cos(rot.y), 0,sin(rot.y), 0,
0, 1,0 , 0,
-sin(rot.y), 0,cos(rot.y), 0,
0, 0,0 , 1);
PMatrix3D rz = new PMatrix3D(cos(rot.z),-sin(rot.z), 0, 0,
sin(rot.z), cos(rot.z), 0, 0,
0 , 0, 1, 0,
0 , 0, 0, 1);
PMatrix3D r = new PMatrix3D();
r.apply(rx);r.apply(ry);r.apply(rz);
//test
PMatrix rmat = new PMatrix3D();rmat.rotateX(rot.x);rmat.rotateY(rot.y);rmat.rotateZ(rot.z);
float[] frmat = new float[16];rmat.get(frmat);
float[] fr = new float[16];r.get(fr);
println(frmat);println(fr);
/*
Outputs:
[0] 0.059300933
[1] 0.09312407
[2] -0.99388695
[3] 0.0
[4] 0.90466285
[5] 0.41586864
[6] 0.09294289
[7] 0.0
[8] 0.42198166
[9] -0.9046442
[10] -0.059584484
[11] 0.0
[12] 0.0
[13] 0.0
[14] 0.0
[15] 1.0
[0] 0.059300933
[1] 0.09312407
[2] -0.99388695
[3] 0.0
[4] 0.90466285
[5] 0.41586864
[6] 0.09294289
[7] 0.0
[8] 0.42198166
[9] -0.9046442
[10] -0.059584484
[11] 0.0
[12] 0.0
[13] 0.0
[14] 0.0
[15] 1.0
*/
</code></pre>
| Community | -1 | <p>Try to represent rotations using matrices instead of angles - then finding the inverse is easy.</p>
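<p>For example, one can build the matrix directly from the normal (a sketch in Python/NumPy rather than Processing; the Rodrigues-style construction below sends the plane's normal to the z-axis, which is exactly the "un-rotation" asked for; apply it to every vertex):</p>

```python
import numpy as np

def rotation_to_z(n):
    """Orthogonal matrix R with R @ n = (0, 0, 1), for a unit vector n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    c = float(n @ z)
    if np.isclose(c, -1.0):              # n antiparallel to z: rotate pi about x
        return np.diag([1.0, -1.0, -1.0])
    v = np.cross(n, z)                   # rotation axis (unnormalized)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # cross-product matrix of v
    # Rodrigues: R = I + [v]x + [v]x^2 / (1 + c)
    return np.eye(3) + vx + (vx @ vx) / (1.0 + c)

normal = np.array([0.150384, -0.5, 0.852869])   # the normal from the question
R = rotation_to_z(normal)
# flattened = vertices @ R.T puts all four vertices into a z = const plane
```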
|
<p>How do I prove that in a finite group $G$, for each element $g$ in $G$ there is a natural power (say $k$), depending on $g$, such that $g^k=e$?
I need to show both the existence of $k$ and its dependence on which $g$ I choose.</p>
<p>I tried write it that way, but I don't have any direction in the proof: </p>
<p>$$\left|G\right| = n:\qquad \left\{e,\ g,\ g^2,\ \ldots,\ g^{n-1}\right\}\subseteq G$$</p>
<p>Can anybody give me any direction of thinking ?</p>
| Chinny84 | 92,628 | <p>Having had to search "Coefficient of area expansion" (and I did physics at uni) you did not explain that you are working with this
$$
L = L_0\left(1+\alpha\Delta T\right)
$$
so we have
$$
A = L^2 = L_0^2\left(1+2\alpha\Delta T + \alpha^2(\Delta T)^2\right)\approx L_0^2\left(1+2\alpha\Delta T\right)
$$
where we ignore terms of order higher than $\Delta T$,
or
$$
A = A_0\left(1+2\alpha\Delta T\right) = A_0 + \Delta A
$$
now we have
$$
\frac{A_0+\Delta A}{A_0} = \left(1+2\alpha\Delta T\right)
$$
so the percentage increase of the area is
$$
\frac{\Delta A}{A_0} = 2\alpha \Delta T
$$</p>
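<p>Plugging in numbers (a quick sketch; the coefficient below is the usual linear-expansion value for aluminium and is my assumption, not part of the question):</p>

```python
alpha = 23e-6          # linear expansion coefficient of aluminium, 1/K (assumed)
dT = 50.0              # temperature rise, K (assumed)

approx = 2 * alpha * dT                 # the O(dT) formula derived above
exact = (1 + alpha * dT) ** 2 - 1       # keeping the alpha^2 (dT)^2 term

# The relative area increase is about 0.23 %, and the dropped term only
# contributes (alpha * dT)^2 ~ 1.3e-6.
assert abs(exact - approx - (alpha * dT) ** 2) < 1e-12
```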
|
<p>I can't prove this problem. Can you help me? The problem says:</p>
<p><em>If $\{X_n\}$ is a sequence of identically distributed random variables with finite mean, then
$$\lim_{n\to\infty}\frac{1}{n}\mathbb{E}\Big[\max_{1\leq j\leq n} |X_j|\Big] = 0$$
[HINT: Use Exercise 17 to express the mean of the maximum.]</em></p>
<p>In problem 17 show that if X is a positive random variable, then we have
$$\mathbb{E}[X]=\int_0^\infty \!\! P(X > x)\, dx=\int_0^\infty \!\! P(X \geq x)\, dx$$</p>
<p>I proved problem 19 assuming independence, and for the general case I found some upper bounds for the maximum of dependent variables on the internet, but none of them worked.</p>
<p>Thank you for helping me.</p>
| aghil alaee | 492,049 | <p>Since $h(x)=|f(x)|$ is a convex function whenever $f$ is affine (why?), $\max_{1\leq i\leq n}|X_i|$ is also convex, being a maximum of convex functions. You can prove the result using Jensen's inequality. Be happy.</p>
|
<p>Does the following series converge? Please explain what method you used in your proof.
$$\sum_{n=3}^\infty \frac{\tan\left(\frac{\pi}{n}\right)}{n}$$</p>
| ncmathsadist | 4,154 | <p>Note that $\tan(x)\sim x$ as $x\to 0$, so $\frac{\tan(\pi/n)}{n}\sim \frac{\pi}{n^2}$, and the series converges by limit comparison with $\sum \frac{\pi}{n^2}$. </p>
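<p>A quick numerical check (plain Python): the partial sums stabilize, consistent with comparison against $\sum \pi/n^2$.</p>

```python
from math import tan, pi

partial = sum(tan(pi / n) / n for n in range(3, 200_000))
# the tail beyond 200000 is below roughly pi / 200000, so this is close to the sum
assert 1.0 < partial < 2.0
```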
|
3,338,388 | <p>I tried to calculate the expression:
<span class="math-container">$$\lim_{n\to\infty}\prod_{k=1}^\infty \left(1-\frac{n}{\left(\frac{n+\sqrt{n^2+4}}{2}\right)^k+\frac{n+\sqrt{n^2+4}}{2}}\right)$$</span>
in WolframAlpha, but it does not interpret it correctly. </p>
<p>Could someone help me type it in and get the answer? Is it <span class="math-container">$1/2$</span>?</p>
<hr>
<p><strong>Edit:</strong> This was the <a href="https://www.mat.uniroma2.it/~tauraso/AMM/AMM12110.pdf" rel="nofollow noreferrer">AMM problem 12110</a>, whose deadline passed on 31 August 2019.</p>
<p>As an alternative numerical method, I could calculate the value in MS Excel.</p>
| bilgamish | 558,586 | <p>Here is the screenshot of MS Excel spreadsheet:</p>
<p><a href="https://i.stack.imgur.com/d4FVG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d4FVG.png" alt="enter image description here"></a></p>
|
138,079 | <p>I want to find an elegant method to rearrange these two sublists:</p>
<pre><code>SeedRandom[1]
list = {RandomInteger[10, {4, 2}], RandomInteger[{10, 30}, {4, 2}]}
</code></pre>
<blockquote>
<p>{{{1,4},{0,7},{0,0},{8,6}},{{11,20},{11,11},{25,17},{27,16}}}</p>
</blockquote>
<p>Make these two sublists’ elements have shortest distance from inside to outside. This is my current method:</p>
<pre><code>MapAt[Reverse,
Transpose[
Reap[Nest[
MapThread[
DeleteCases, {#,
Sow[First[
MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}],
N[EuclideanDistance @@ #] &]]]}] &, list,
Length[First[list]]]][[2, 1]]], {1}]
</code></pre>
<blockquote>
<p>{{{0,0},{1,4},{0,7},{8,6}},{{11,11},{11,20},{25,17},{27,16}}}</p>
</blockquote>
<p><img src="https://i.stack.imgur.com/77NCM.png" alt=""> </p>
<p>Show it in graphic:</p>
<pre><code>ListPlot[Map[Labeled[#, ToString[#]] &, #] & /@ rearrangPoint,
PlotStyle -> PointSize[.03]]~Show~
ListLinePlot[
Reap[Nest[
MapThread[
DeleteCases, {#,
Sow[First[
MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}],
N[EuclideanDistance @@ #] &]]]}] &, list,
Length[First[list]]]][[2, 1]], PlotStyle -> ColorData[3],
PlotLegends -> Automatic, AspectRatio -> Automatic]
</code></pre>
<p><img src="https://i.stack.imgur.com/r6jJP.png" alt=""> </p>
<p>But I have to say this method is too ugly.</p>
| WReach | 142 | <p>Here is a shorter, though not necessarily prettier, solution:</p>
<pre><code>ReplaceList[list, {{___, l_, ___}, {___, r_, ___}} :> {l, r} -> N@EuclideanDistance[l, r]] //
SortBy[Last] //
DeleteDuplicates[#, #[[1, 1]] == #2[[1, 1]] || #[[1, 2]] == #2[[1, 2]]&] & //
{#[[All, 1, 1]] // Reverse, #[[All, 1, 2]]} &
(* {{{0, 0}, {1, 4}, {0, 7}, {8, 6}}, {{11, 11}, {11, 20}, {25, 17}, {27, 16}}} *)
</code></pre>
|
138,079 | <p>I want to find an elegant method to rearrange these two sublists:</p>
<pre><code>SeedRandom[1]
list = {RandomInteger[10, {4, 2}], RandomInteger[{10, 30}, {4, 2}]}
</code></pre>
<blockquote>
<p>{{{1,4},{0,7},{0,0},{8,6}},{{11,20},{11,11},{25,17},{27,16}}}</p>
</blockquote>
<p>Make these two sublists’ elements have shortest distance from inside to outside. This is my current method:</p>
<pre><code>MapAt[Reverse,
Transpose[
Reap[Nest[
MapThread[
DeleteCases, {#,
Sow[First[
MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}],
N[EuclideanDistance @@ #] &]]]}] &, list,
Length[First[list]]]][[2, 1]]], {1}]
</code></pre>
<blockquote>
<p>{{{0,0},{1,4},{0,7},{8,6}},{{11,11},{11,20},{25,17},{27,16}}}</p>
</blockquote>
<p><img src="https://i.stack.imgur.com/77NCM.png" alt=""> </p>
<p>Show it in graphic:</p>
<pre><code>ListPlot[Map[Labeled[#, ToString[#]] &, #] & /@ rearrangPoint,
PlotStyle -> PointSize[.03]]~Show~
ListLinePlot[
Reap[Nest[
MapThread[
DeleteCases, {#,
Sow[First[
MinimalBy[Transpose[{First /@ Nearest @@ #, Last[#]}],
N[EuclideanDistance @@ #] &]]]}] &, list,
Length[First[list]]]][[2, 1]], PlotStyle -> ColorData[3],
PlotLegends -> Automatic, AspectRatio -> Automatic]
</code></pre>
<p><img src="https://i.stack.imgur.com/r6jJP.png" alt=""> </p>
<p>But I have to say this method is too ugly.</p>
| Jack LaVigne | 10,917 | <p><strong><em>Elegant</em></strong> is completely subjective. It could mean <strong><em>short</em></strong> or possibly <strong><em>easy to follow</em></strong> or something entirely different.</p>
<p>The solution below is not the shortest but I do find it relatively easy to follow.</p>
<pre><code>SeedRandom[1]
list = {RandomInteger[10, {4, 2}], RandomInteger[{10, 30}, {4, 2}]}
(* { {{1, 4}, {0, 7}, {0, 0}, {8, 6}},
{{11, 20}, {11, 11}, {25, 17}, {27,16}} } *)
</code></pre>
<h2>Step 1 - newList</h2>
<p>A function is defined that will take as input the form of your list:</p>
<pre><code> {set1, set2}
⇓
{{{w1, x1}, {w2, x2}, ..., {wn, xn}}, {{y1, z1}, {y2, z2}, ..., {yn, zn}}}
</code></pre>
<p>It will find the pair <code>{{wi,xi}, {yj,zj}}</code> that represents the minimum distance, <code>Sow</code> it and return the complete list with <code>{wi,zi}</code> removed from <code>set1</code> and <code>{yj,zj}</code> removed from <code>set2</code>.</p>
<pre><code>newList[list_] := Module[
{
tuplesList = Tuples[list],
distanceList,
minimum,
position,
pair
},
distanceList = N@EuclideanDistance[#[[1]], #[[2]]] & /@ tuplesList;
minimum = N@Min[distanceList];
position = Position[distanceList, minimum];
pair = Flatten[Extract[tuplesList, position], 1];
Sow[pair];
{DeleteCases[list[[1]], pair[[1]]],
DeleteCases[list[[2]], pair[[2]]]}
]
</code></pre>
<p>Test it on the complete list</p>
<pre><code>newList[list]
(* {{{1, 4}, {0, 7}, {0, 0}}, {{11, 20}, {25, 17}, {27, 16}}} *)
</code></pre>
<h2>Step 2 - sortedNestedList</h2>
<p>Using <code>newList</code> we produce a sorted list using <code>Nest</code>, <code>Sow</code> and <code>Reap</code>.</p>
<pre><code>sortedNestedList = Reap[Nest[newList, list, Length@list[[1]]]][[2, 1]]
(* { {{8, 6}, {11, 11}}, {{0, 7}, {11, 20}},
{{1, 4}, {25, 17}}, {{0, 0}, {27, 16}}} *)
</code></pre>
<h2>Step 3 - Extract the final answer</h2>
<p>The final list is extracted from <code>sortedNestedList</code> by reversing the first column (new <code>set1</code>) and simply copying the second column (new <code>set2</code>).</p>
<pre><code>{Reverse@sortedNestedList[[All, 1]], sortedNestedList[[All, 2]]}
(* { {{0, 0}, {1, 4}, {0, 7}, {8, 6}},
{{11, 11}, {11, 20}, {25, 17}, {27, 16}}} *)
</code></pre>
<h2>Putting it all together</h2>
<p>The function <code>sortedList</code> encapsulates the previous three steps</p>
<pre><code>sortedList[list_] := Module[
{
sortedNestedList =
Reap[Nest[newList, list, Length@list[[1]]]][[2, 1]]
},
{Reverse@sortedNestedList[[All, 1]], sortedNestedList[[All, 2]]}
]
</code></pre>
<p>Testing it on the original list</p>
<pre><code>sortedList[list]
(* { {{0, 0}, {1, 4}, {0, 7}, {8, 6}},
{{11, 11}, {11, 20}, {25, 17}, {27, 16}}} *)
</code></pre>
|
<p>I need to find the number of permutations conjugate to the permutation (12)(34) in the symmetric group <span class="math-container">$S_6$</span> of degree 6.</p>
<p>My answer is 6! = 720</p>
<p>Is this correct?</p>
<p>I concluded that (12)(34)=(12)(34)(5)(6), and the number of candidates in <span class="math-container">$S_6$</span> is 6!, as conjugate permutations need to have the same cycle type.</p>
<p>Edit:</p>
<p>It seems to be <span class="math-container">$6!/(2^2 \cdot 2! \cdot 2!) = 45$</span>: divide by <span class="math-container">$2$</span> for each <span class="math-container">$2$</span>-cycle, by <span class="math-container">$2!$</span> for swapping the two <span class="math-container">$2$</span>-cycles, and by <span class="math-container">$2!$</span> for permuting the two fixed points.</p>
| Tanner Swett | 13,524 | <p>Here's one way to "translate" it.</p>
<blockquote>
<p>If <span class="math-container">$L^+(P,N_0)$</span> is the set of functions <span class="math-container">$f:P\rightarrow N_0$</span> with the property that
<span class="math-container">$$\exists\; n_0 \in N_0 \;\; \forall \; p \in P:\quad p\ge n_0 \implies f(p) = 0$$</span></p>
</blockquote>
<p>Define <span class="math-container">$L^+(P,N_0)$</span> as the set of all expressions of the form</p>
<p><span class="math-container">$$2^{x_2} \ 3^{x_3} \ 5^{x_5} \ \cdots,$$</span></p>
<p>where the bases are the prime numbers, and <span class="math-container">$x_2, x_3, x_5, \ldots$</span> are non-negative integers, only finitely many of which are nonzero.</p>
<p>In other words, <span class="math-container">$L^+(P,N_0)$</span> is the set of all possible prime factorizations.</p>
<blockquote>
<p>then there exists a bijection <span class="math-container">$N_1 \rightarrow L^+(P,N_0) $</span> such that if <span class="math-container">$n \mapsto f$</span> then
<span class="math-container">$$n = \prod_{p\in P} p^{f(p)}$$</span></p>
</blockquote>
<p>Then there is a function which maps positive integers to prime factorizations, such that the value of the prime factorization is (as we would hope) the number which produced it. Furthermore, this function is a bijection.</p>
<p>In other words, each positive integer has exactly one prime factorization, and furthermore, each possible prime factorization is in fact the prime factorization of exactly one positive integer.</p>
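<p>As a concrete illustration of this bijection (my own sketch, not part of the answer), one can compute the finitely-supported exponent function of a positive integer by trial division, and rebuild the integer as the product <span class="math-container">$\prod_p p^{f(p)}$</span>:</p>

```python
def exponents(n):
    """Prime factorization of n as {prime: exponent}, via trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                 # leftover prime factor
        f[n] = f.get(n, 0) + 1
    return f

def rebuild(f):
    """The inverse map: multiply out p**e over the factorization."""
    n = 1
    for p, e in f.items():
        n *= p ** e
    return n

print(exponents(360))             # {2: 3, 3: 2, 5: 1}
print(rebuild(exponents(360)))    # 360
```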
|
324,557 | <p>Map the common part of the disks $|z|<1$ and $|z-1|<1$ on the inside of the unit circle. Choose the mapping sot hat the two symmetries are preserved.</p>
<p>I don't really know how to approach this??</p>
<p>Any suggestions on how to start constructing such a linear transformation??</p>
<p>Thanks in advance!</p>
| Ittay Weiss | 30,953 | <p>To solve such questions it helps to construct small examples of transitive relations <em>in the most obvious way</em>. So, let $A=\{1,2,3\}$ and take $R=\{(1,2),(2,3),(1,3)\}$. It is constructed by force to be transitive, but computing $R\circ R$ reveals that $R\circ R\ne R$. </p>
<p>The moral is not so much any of the particularities of this solution, but rather the general strategy: construct small objects and check!</p>
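<p>The check is quick to mechanize (my own sketch): composing the relation with itself keeps only the pairs reachable in exactly two steps, so $R\circ R=\{(1,3)\}\ne R$ even though $R$ is transitive.</p>

```python
R = {(1, 2), (2, 3), (1, 3)}

def compose(S, T):
    """Relation composition: (a, c) whenever a S b and b T c for some b."""
    return {(a, c) for (a, b) in S for (b2, c) in T if b == b2}

print(compose(R, R))         # {(1, 3)}
print(compose(R, R) <= R)    # True: R is transitive
print(compose(R, R) == R)    # False
```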
|
199,738 | <p>It is known that, if a function $f$ from a planar domain $D$ to a Banach space $A$ is weakly analytic [i.e. $l(f)$ is analytic for every $l$ in $A^*$], then $f$ is strongly analytic [i.e. $\lim_{h \to 0} h^{-1}[f(z+h)-f(z)]$ exists in norm for every $z$ in $D$].</p>
<p>Now the question is, if above $f$ is assumed to be weakly continuous [i.e.$l(f)$ is continuous for every $l$ in $A^*$], then is it true that $f$ will be strongly continuous.[i.e. $\lim_{h \to 0} [f(z+h)-f(z)] = 0$ in norm for every $z$ in $D$.] </p>
| Rabee Tourky | 39,780 | <p>Regarding the clarified question with finite dimensional domain. Let $X$ be an infinite dimensional separable and reflexive Banach space. Its unit ball $B$ is weakly compact and metrizable. It is also convex. </p>
<p>So by the Hahn–Mazurkiewicz theorem there exists a continuous function $f\colon [0,1]\to B$ that is onto $B$. This function is not norm continuous, as $B$ is not norm compact.</p>
|
187,459 | <p>What are all 4-regular graphs such that every edge in the graph lies in a unique-4 cycle?</p>
<p>Among all such graphs, if we impose a further restriction that any two 4-cycles in the graph have at most one vertex in common, then can we characterize them in some way?</p>
<p>When is it possible to draw such a graph on a plane such that every 4-cycle is of the form: (a,c)-(b,c)-(b,d)-(a,d)-(a,c) for some a,b,c,d ?</p>
| Brendan McKay | 9,025 | <p>The number of vertices $n$ must be even or the number of 4-cycles is not an integer. The number of simple connected quartic graphs with the first condition is 0 for $n<12$ and $2,4,25,459$ for $n=12,14,16,18$. One of those on 12 vertices is the <a href="https://en.wikipedia.org/wiki/Cuboctahedron" rel="nofollow">cuboctahedron</a>. </p>
<p>After the cuboctahedron, the next 3-connected planar quartic graph with the first property has 20 vertices and there are 2 with 24 vertices.</p>
<p>This construction may be useful: A quartic graph with $n$ vertices and the first property has $n/2$ 4-cycles. Make a new graph $H$ with the 4-cycles of $G$ as vertices and an edge wherever two 4-cycles meet at a vertex. You get a quartic multigraph with half the number of vertices, simple if $G$ also satisfies the second property. To get back from $H$ to $G$ you need to choose a cyclic order of the edges around each vertex, which is similar to embedding it on an orientable surface except that reversing the order at some vertices doesn't change the result. This operation is related to the <a href="https://en.wikipedia.org/wiki/Medial_graph" rel="nofollow">medial graph</a> construction. It would probably not be hard to characterise when the medial graph of an embedded quartic graph has the required properties.</p>
|
295,076 | <p>If a finite-dimensional vector space $V$ is a direct sum of two subspaces $W_1$ and $W_2$, prove that $V^* = W_1^0 \oplus W_2^0$.</p>
<p>Where $V^*$ is the dual space of $V$ and $W^0$ is the annihilator of $W$.</p>
| DonAntonio | 31,254 | <p>Hint:</p>
<p>Look at $\,V^*/W_1^0\,$ and check your last question, already answered.</p>
|
3,070,788 | <p>Can anyone explain to me why the variance of the standard normal distribution is 1? I am trying to understand the mechanism behind standardising random variable. While I know minus the variable by the mean is like shifting the graph to make it centre at the origin, I don't know why dividing it by SD makes the variable having SD = 1 as well</p>
| Community | -1 | <p>It is immediate that <span class="math-container">$K=7$</span> and <span class="math-container">$Q=9$</span>. Then <span class="math-container">$L+2Z=37$</span> and <span class="math-container">$L+Z=24$</span> yield <span class="math-container">$Z=13,L=11$</span>. <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> follow.</p>
|
762,472 | <p>Let $C$ be a circle of radius $r$ in the plane. Let $p$ be a point in the plane that lies outside of $C$. Show that there are exactly two lines through $p$ that are tangent to $C$.</p>
<hr>
<p>It is one of those questions that seem very intuitive but very hard to prove for me. How do I show that there are "exactly" two tangent lines? Try to construct a third one but reach a contradiction? I'd appreciate any help. Thanks.</p>
| ajotatxe | 132,456 | <p><strong>A proof</strong></p>
<p>Let $s$ be a tangent line through $P$. Call $T$ the point at which the line touches $C$. The radius at $T$ is perpendicular to $s$.</p>
<p>So you have to find the points $T$ such that the angle $\angle PTO$ is right (I have called $O$ the center of $C$).</p>
<p>But these points have to lay on the circle $C'$ whose diameter is $OP$. Since $O$ is inside $C$ and $P$ is outside, there are exactly two points of intersection between $C$ and $C'$, which give two tangent lines.</p>
<p><strong>Another proof</strong></p>
<p>Note, as in previous proof, that the tangent line and the corresponding radius are perpendicular. So $POT$ is a right triangle. This means that
$$OP^2=r^2+PT^2$$</p>
<p>Since $r$ and $OP$ are fixed, so is $PT$. That is, the points of tangency are at a fixed distance $r'$ from $P$.
By the triangle inequality, $|r-r'|<OP<r+r'$, from which we can deduce that the circle $C''$ with center at $P$ and radius $r'$ intersects $C$ in exactly two points. These are the points of tangency.</p>
|
762,472 | <p>Let $C$ be a circle of radius $r$ in the plane. Let $p$ be a point in the plane that lies outside of $C$. Show that there are exactly two lines through $p$ that are tangent to $C$.</p>
<hr>
<p>It is one of those questions that seem very intuitive but very hard to prove for me. How do I show that there are "exactly" two tangent lines? Try to construct a third one but reach a contradiction? I'd appreciate any help. Thanks.</p>
| colormegone | 71,645 | <p>A proof using analytic geometry --</p>
<p>We can place the circle of radius $ \ r \ $ with its center at the origin, so its equation is $ \ x^2 \ + \ y^2 \ = \ r^2 \ $ . We can also pick a point $ \ (C, 0 ) \ $ on the positive $ \ x-$ axis. (This is equivalent to just saying we have some circle and some external point, so we'll define the center of the circle as the "origin" and the line through the center and the external point as the $ \ x-$ axis.) So far, this looks like <strong>Kaj_H</strong>'s set-up.</p>
<p>A tangent line to the circle makes contact with it at a point $ \ (X,Y) \ $ , with the coordinates satisfying $ \ X^2 \ + \ Y^2 \ = \ r^2 \ $ . The radius from the origin to that point has slope $ \ m \ = \ \frac{Y}{X} \ $ . The tangent line is <em>perpendicular</em> to that radius, so its slope is $ \ m' \ = \ -\frac{1}{m} \ = \ -\frac{X}{Y} \ $ . The equation of the tangent line is then</p>
<p>$$ y \ - \ 0 \ = \ \left( -\frac{X}{Y} \ \right) \ \cdot \ ( \ x - C \ ) \ \ \Rightarrow \ \ y \ = \ \frac{XC}{Y} \ - \ \frac{X}{Y} x \ \ . $$ </p>
<p>Now, a tangent point on that line is given by </p>
<p>$$ Y \ = \ \frac{XC}{Y} \ - \ \frac{X}{Y} \cdot X \ \ \Rightarrow \ \ Y^2 \ = \ XC \ - \ X^2 \ \ \Rightarrow \ \ X^2 \ + \ Y^2 \ = \ XC \ \ . $$</p>
<p>[We know this manipulation is "safe" because a tangent line is not going to contact the circle at $ \ Y = 0 \ $ . ]</p>
<p>This means that $ \ XC \ = \ r^2 \ $ ; since we chose $ \ C \ $ to be positive, $ \ X \ $ must be as well. But we also have that $ \ Y^2 \ = \ r^2 \ - \ X^2 \ $ . A tangent line cannot contact the circle at $ \ X = 0 \ $ , as this would require a tangent line of slope zero (we see this from the line equation above). Consequently, $ 0 \ < \ r^2 \ - \ X^2 \ < \ r^2 \ $ , so $ \ Y \ $ has two permissible values; thus, there are two possible tangent lines.</p>
<p>This argument is (unavoidably) related to those offered by <strong>ajotatxe</strong> and <strong>Kaj_H</strong>.</p>
<p>[A heuristic "proof": I am standing outside a circular garden, loop of pavement, etc. I see a left-hand edge and a right-hand edge. Lines-of-sight to the edges follow tangent lines.]</p>
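<p>A numerical sanity check of this computation (my own sketch, with an arbitrary radius and external point): the contact points are $ \ X = r^2/C \ $ , $ \ Y = \pm\sqrt{r^2 - X^2} \ $ , and at each one the radius $ \ OT \ $ is indeed perpendicular to $ \ PT \ $ .</p>

```python
from math import sqrt, isclose

r, C = 1.0, 2.0              # circle radius and external point (C, 0), with C > r
X = r**2 / C                 # common x-coordinate of the two tangent points
for Y in (sqrt(r**2 - X**2), -sqrt(r**2 - X**2)):
    assert isclose(X**2 + Y**2, r**2)          # T = (X, Y) lies on the circle
    dot = X * (X - C) + Y * Y                  # OT . PT, with P = (C, 0)
    assert isclose(dot, 0.0, abs_tol=1e-12)    # radius perpendicular to tangent
print("tangent points:", (X, sqrt(r**2 - X**2)), (X, -sqrt(r**2 - X**2)))
```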
|
993,767 | <p>Suppose $V$ is an inner product space over $\mathbb F$ and $u$,$v$ ∈ $V$ and
$\|u\| ≤ \|u + av\|$
for all $a$ ∈ $\mathbb{F}$.Then I want to show that $u$ and $v$ are orthogonal.I want to prove it geometrically.Somebody please give me some hint.</p>
| mookid | 131,738 | <p>Consider a bijection $\Bbb N \to \Bbb Q$: $f(n)$, and define $a_n = f(n)$.</p>
<p>Now let $x\in \Bbb R$. There is a sequence of rationals $(q^{(x)}_n)$ such as
$$
q^{(x)}_n\uparrow x
$$</p>
<p>Define $b_1 = q_1^{(x)}$ and, for every $n$ take $b_{n+1} \in \{q_k^{(x)}: k > n \}
\cap \{ f(k): k > n \}
$. This way, the sequence $(b_n)$ is extracted from $(a_n)$. And as $\lim_{n\to\infty} q_n^{(x)}
= x$, you also have $\lim_{n\to \infty} b_n = x$.</p>
|
2,847,419 | <p>I know that if <br/>
$\sigma , \delta$ are two functions, then <br/>
$1)$ $\sigma \circ \delta$ is onto (resp. one-one) if both $\sigma$ and $\delta$ are onto (resp. one-one).<br/>
I can prove this fact.
I want to find counterexamples, for both cases, showing that the converse is not true.
<br/> Any help will be appreciated. </p>
| Mohammad Riazi-Kermani | 514,496 | <p>$$f'(x)=k(x+e^x)^{k-1} \times (1+e^x)=0 $$</p>
<p>has only one solution which is where $x+e^x=0$ and that is the point that you want to approximate. </p>
<p>The answer should be negative so $x=0.567$ is problematic. </p>
|
1,450,176 | <p>I would like to evaluate this limit :$$\displaystyle \lim_{x \to \infty} ({x\sin \frac{1}{x} })^{1-x}$$.</p>
<p>I used taylor expansion at $y=0$ , where $x$ go to $\infty$ i accrossed this </p>
<p>problem : ${1}^{-\infty }$ then i can't judge if this limit equal's $1$ , </p>
<p>because it is indeterminate case ,Then is there a mathematical way to </p>
<p>evaluate this limit ?</p>
<p>Thank you for any help </p>
| Victor | 142,550 | <p>I see you're a high school teacher so you're familiar with the following concepts :</p>
<blockquote>
<p>$\bullet$ $\sin(\frac{1}{x}) \simeq \frac{1}{x} - \frac{1}{6x^3} \text{ } [\text{as x $\rightarrow$ $\infty$}]$</p>
<p>$\bullet $ $ \lim_{x \to \infty} (1-\frac{k}{x})^x = e^{-k} $</p>
</blockquote>
<p><br/>
Compile these facts to get :</p>
<p>$$\underset{x \to \infty}{\lim} \bigg(1 - \frac{1}{6x^2} \bigg)^{1-x} = 1$$</p>
|
1,450,176 | <p>I would like to evaluate this limit :$$\displaystyle \lim_{x \to \infty} ({x\sin \frac{1}{x} })^{1-x}$$.</p>
<p>I used taylor expansion at $y=0$ , where $x$ go to $\infty$ i accrossed this </p>
<p>problem : ${1}^{-\infty }$ then i can't judge if this limit equal's $1$ , </p>
<p>because it is indeterminate case ,Then is there a mathematical way to </p>
<p>evaluate this limit ?</p>
<p>Thank you for any help </p>
| egreg | 62,967 | <p>Compute the limit of the logarithm:
\begin{align}
\lim_{x\to\infty}(1-x)\log(x\sin(1/x))&=
\lim_{t\to0^+}\left(1-\frac{1}{t}\right)\log\frac{\sin t}{t}
\\[6px]
&=\lim_{t\to0^+}\log\frac{\sin t}{t}-\lim_{t\to0^+}\frac{\log\sin t-\log t}{t}\\[6px]
&=-\lim_{t\to0^+}\left(\frac{\cos t}{\sin t}-\frac{1}{t}\right)\\[6px]
&=-\lim_{t\to0^+}\frac{t\cos t-\sin t}{t^2}\cdot
\lim_{t\to0^+}\frac{t}{\sin t}\\[6px]
&=\lim_{t\to0^+}\frac{t\sin t}{2t}\\[6px]
&=0
\end{align}
Of course this can be simplified by recalling that $(\sin t)/t=1-t^2/6+o(t^3)$, hence $\log\bigl((\sin t)/t\bigr)=-t^2/6+o(t^3)$, so we have
$$
\lim_{t\to0^+}\left(1-\frac{1}{t}\right)\log\left(\frac{\sin t}{t}\right)=
\lim_{t\to0^+}\frac{(t-1)\bigl(-t^2/6+o(t^3)\bigr)}{t}=0
$$
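<p>Both answers conclude that the logarithm tends to $0$, i.e. the original limit is $1$; a quick numeric check (my own) agrees:</p>

```python
from math import sin

def f(x):
    return (x * sin(1 / x)) ** (1 - x)

for x in (10.0, 1e2, 1e4):
    print(x, f(x))    # the values approach 1 as x grows
```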
|
94,134 | <p>I have a feeling that the following inequality should be very easy to prove:</p>
<p>$$
x^n \geq \prod_{i=1}^n{(x+k_i)},\quad\text{where } \sum_{i=1}^{n}{k_i}=0,\quad \text{and } x+k_i>0\text{ for all } i
$$</p>
<p>(and the equality only holds when all the $k_i=0$).</p>
<p>It seems intuitively obvious (when $n=2$, a square has a greater area than a rectangle with the same perimeter, when $n=3$, a cube has greater volume than a rectangular prism with the same surface area, etc.) but I can't find an appropriately easy proof.</p>
<p>I think I can show it analytically by finding the local maximum for $f(x_1,\ldots,x_n)=\prod_{i=1}^n{x_i}$ within the box $\max{x_i}=r$ in the upper-right quadrant, but I feel like there should be a neat algebraic/geometric argument, since it's such an intuitive statement.</p>
| Community | -1 | <p>The AM-GM inequality gives us
$$\prod_i (x+k_i)^{1/n} \leq {1\over n}\sum_i (x+k_i)=x.$$
Now take the $n$th power of both sides. </p>
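<p>A quick numerical illustration (my own sketch): perturbations $k_i$ summing to zero can only shrink the product, with equality exactly when all $k_i=0$.</p>

```python
from math import prod  # Python 3.8+

x, n = 5.0, 4
for k in ([1, -2, 1, 0], [0.5, -0.5, 2, -2], [0, 0, 0, 0]):
    assert abs(sum(k)) < 1e-12          # the perturbations sum to zero
    lhs, rhs = x ** n, prod(x + ki for ki in k)
    print(k, lhs, rhs)                  # rhs <= lhs, equality only for all-zero k
    assert rhs <= lhs + 1e-9
```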
|
3,867,197 | <p>Let <span class="math-container">$A$</span> be the following matrix</p>
<p><span class="math-container">$$\left(
\begin{array}{ccc}
1 & 0 & x \\
0 & 1 & y \\
x & y & 1
\end{array}
\right)$$</span></p>
<p>I have to prove that if, at least <span class="math-container">$x+y>\frac{3}{2}$</span>, <span class="math-container">$A$</span> is not positive definite.</p>
<p>I have tried to prove it by calculating the eigenvalues, obtaining:
<span class="math-container">$$
\begin{array}{c}
\lambda_1=1\\
\lambda_2=1+\sqrt{x^2+y^2} \\
\lambda_3=1-\sqrt{x^2+y^2}
\end{array}
$$</span></p>
<p>It is obvious that <span class="math-container">$\lambda_1$</span> and <span class="math-container">$\lambda_2$</span> are always positive, so I only have to take care of <span class="math-container">$\lambda_3$</span>. The problem is that I cannot relate the given condition with <span class="math-container">$1-\sqrt{x^2+y^2}<0$</span>, which would prove that the matrix is not positive definite.</p>
| Will Jagy | 10,400 | <p>your matrix is symmetric real,</p>
<p>Use <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia#Statement_in_terms_of_eigenvalues" rel="nofollow noreferrer">Sylvester's Law of Inertia</a></p>
<p>Congruence:</p>
<p><span class="math-container">$$
\left(
\begin{array}{rrr}
1&0&0 \\
0&1&0 \\
-x&-y&1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1&0&x\\
0&1&y \\
x&y&1 \\
\end{array}
\right)
\left(
\begin{array}{rrr}
1&0&-x \\
0&1&-y \\
0&0&1 \\
\end{array}
\right) =
\left(
\begin{array}{rrr}
1&0&0 \\
0&1&0 \\
0&0&1-x^2 - y^2 \\
\end{array}
\right)
$$</span></p>
<p>The final diagonal matrix and your original are positive definite if and only if <span class="math-container">$x^2 + y^2 < 1$</span>. In particular, if <span class="math-container">$x+y>\frac{3}{2}$</span>, then <span class="math-container">$x^2+y^2\ge\frac{(x+y)^2}{2}>\frac{9}{8}>1$</span>, so <span class="math-container">$A$</span> is not positive definite.</p>
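<p>The congruence is easy to verify mechanically; here is a small pure-Python check (my own sketch) for a few sample values of <span class="math-container">$x,y$</span>:</p>

```python
def matmul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

for x, y in [(0.3, 0.4), (1.0, 1.0), (0.9, 0.9)]:
    A  = [[1, 0, x], [0, 1, y], [x, y, 1]]
    E  = [[1, 0, 0], [0, 1, 0], [-x, -y, 1]]
    Et = [[1, 0, -x], [0, 1, -y], [0, 0, 1]]     # transpose of E
    D  = matmul(matmul(E, A), Et)
    expect = [[1, 0, 0], [0, 1, 0], [0, 0, 1 - x**2 - y**2]]
    assert all(abs(D[i][j] - expect[i][j]) < 1e-12
               for i in range(3) for j in range(3))
print("E A E^T = diag(1, 1, 1 - x^2 - y^2) verified")
```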
|
1,879,673 | <p>I have woven the below incomplete proof of the following claim:</p>
<blockquote>
<p><em>Claim</em>. If $X$ is completely regular and $Y$ is a compactification of $X$,
then there is a unique, continuous, surjective, closed map
$g:\beta\left(X\right)\to Y$ which is the identity on
$X$.</p>
</blockquote>
<p><em>Here, $\beta\left(X\right)$ is the Stone-Čech compactification of $X$.</em></p>
<p><em>Proof</em>. Let $f:X\to Y$ be such that $x\mapsto x$. Then $f$ is continuous. Since $f$ is continuous and $Y$ is compact and Hausdorff, it is the case that $f$ extends uniquely to a continuous map $g:\beta\left(X\right)\to Y$. Then $g$ is the identity on $X$. Let $C\subseteq\beta\left(X\right)$ be closed. Since $\beta\left(X\right)$ is compact, it is the case that $C$ is compact. Since $g$ is continuous, it is the case that $g\left(C\right)$ is compact. Since $Y$ is Hausdorff, it is the case that $g\left(C\right)$ is closed. Then $g$ is closed...</p>
<p>I do not know how to show that $g$ is surjective. Am I allowed to use the "maximality" of $\beta\left(X\right)$? If so, then I believe that it would follow that $Y\subseteq\beta\left(X\right)$, which would imply that $g$ is surjective. I am not sure because this "maximality" is not defined in terms of containment.</p>
| Eric Wofsey | 86,856 | <p>There is one important fact you still haven't used: namely, that $Y$ is a compactification of $X$, which means not just that $X\subseteq Y$ and $Y$ is compact Hausdorff but that $X$ is dense in $Y$. As you have shown, $g$ is a closed map. In particular, taking $C=\beta X$, the image of $g$ is closed. But the image of $g$ contains $X$, so the image of $g$ must be all of $Y$ since $X$ is dense in $Y$.</p>
|
587,275 | <p>I was trying to understand why $e^{x}$ is special by finding the derivatives of other exponential functions and comparing the results. So I tried ${\rm f}\left(x\right) = 2^{x}$, but now I'm stuck.</p>
<p>Here's my final step:
<strong>$\displaystyle{{\rm f}'\left(x\right)
= \lim_{h \to 0}{2^{x}\left(2^{h} - 1\right) \over h}}$.</strong> </p>
| Alex | 38,873 | <p>One of definitions of logarithm is (see <a href="http://en.wikipedia.org/wiki/Logarithm#From_Napier_to_Euler" rel="nofollow">here</a>)
$$
\log x = \lim_{n \to \infty}\frac{x^{\frac{1}{n}}-1}{\frac{1}{n}}
$$
Hence denote $h=\frac{1}{n}$
$$
\lim_{h \to 0}\frac{2^{x+h}-2^x}{h}=2^x \lim_{h \to 0}\frac{2^h-1}{h}=2^x \log 2
$$
Hence if you replace $2$ with the base of the natural logarithm, you get $(e^x)'_x=e^x$ </p>
|
587,275 | <p>I was trying to understand why $e^{x}$ is special by finding the derivatives of other exponential functions and comparing the results. So I tried ${\rm f}\left(x\right) = 2^{x}$, but now I'm stuck.</p>
<p>Here's my final step:
<strong>$\displaystyle{{\rm f}'\left(x\right)
= \lim_{h \to 0}{2^{x}\left(2^{h} - 1\right) \over h}}$.</strong> </p>
| Dan | 1,374 | <p>It helps here to use implicit differentiation.</p>
<p>$y = a^x$</p>
<p>Take the natural logarithm of both sides.</p>
<p>$\ln{y} = x \ln{a}$</p>
<p>Differentiate both sides.</p>
<p>$\frac{1}{y} dy = dx \ln{a}$</p>
<p>Multiply and divide.</p>
<p>$\frac{dy}{dx} = y \ln{a}$</p>
<p>Substitute the original definition of $y = a^x$.</p>
<p>$\frac{dy}{dx} = a^x \ln{a}$</p>
<p>So, the derivative of $2^x$ is $2^x \ln{2}$, and the derivative of $e^x$ is $e^x \ln{e} = e^x$.</p>
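<p>The formula $\frac{d}{dx}a^x = a^x \ln a$ can be checked against a central difference quotient (my own sketch, with an arbitrary base and point):</p>

```python
from math import log

a, x, h = 3.0, 2.0, 1e-6
numeric = (a**(x + h) - a**(x - h)) / (2 * h)   # central difference approximation
exact = a**x * log(a)                           # the claimed derivative a^x ln a
print(numeric, exact)                           # the two agree to many digits
```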
|
699,383 | <p>I am a non-mathematician who knows some elemententary calculus ans I want to prove that the sequence $(x_n)$ given by</p>
<p>$$
x_n=-\sqrt{n} + n\ln\Big(1+\frac{1}{\sqrt{n}}\Big)
$$</p>
<p>is decreasing. Is there an elegant way to show this?</p>
| Carser | 132,859 | <p>You want to show that $x_n$ is monotonically decreasing, or in other words that $\frac{d}{dn} x_n$ is always non-positive. The derivative is not very friendly looking, but you get
$$ \frac{d}{dn} x_n = \frac{-2 \sqrt{n} + 2(n + \sqrt{n}) \log(\frac{1}{\sqrt{n}}+1)-1}{2(n+\sqrt{n})} $$
You can plug in some values for $n$ to convince yourself that this is <em>always</em> negative for positive $n$. More rigorously, expanding $\log(1+\frac{1}{\sqrt n})=\frac{1}{\sqrt n}-\frac{1}{2n}+\frac{1}{3n^{3/2}}-\cdots$ shows that the numerator equals $-\frac{1}{3\sqrt n}+O(\frac{1}{n})$, which is negative: at $n=1$ it is $4\log 2-3\approx-0.23$, and the leading term dominates for larger $n$. </p>
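<p>A direct numeric check (my own) that the sequence really decreases — it descends from $x_1=-1+\ln 2\approx-0.307$ toward the limit $-\tfrac12$:</p>

```python
from math import sqrt, log

def x(n):
    return -sqrt(n) + n * log(1 + 1 / sqrt(n))

vals = [x(n) for n in range(1, 2001)]
assert all(b < a for a, b in zip(vals, vals[1:]))   # strictly decreasing
print(vals[0], vals[-1])    # from about -0.307 down toward -1/2
```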
|
3,129,248 | <p>I am solving ordinary differential equation in <span class="math-container">$S'$</span> (dual to Schwartz space) given as:</p>
<p><span class="math-container">$y' + ay = \delta$</span>, where <span class="math-container">$\delta$</span> is a Dirac delta function.</p>
<p>The general solution of homogenous equation is <span class="math-container">$Ce^{-ax}$</span>, where <span class="math-container">$C$</span> is a constant.</p>
<p>I actually started solving it via Fourier transform, but it is not probably efficient and I got for <span class="math-container">$x \lt 0$</span> a zero solution. But according to my textbook the solution is:</p>
<p><span class="math-container">$y(x) =
\begin{cases}
(C+1)e^{-ax}, & x \gt 0 \\[2ex]
Ce^{-ax}, & x \lt 0
\end{cases}$</span></p>
<p>And no matter how long I am staring at it, I don't understand. My textbook solves it via fundamental solution of the equation given as this in general: <span class="math-container">$Lu =f$</span>, where <span class="math-container">$L$</span> is an ordinary differential operator. And then I suppose is used the gluing of the solution (which I don't know how to proceed, nor I found any good example on the internet).</p>
<p>Can anyone help me to understand this?</p>
| Korvet | 647,164 | <p>We are solving the <strong><em>inhomogeneous</em></strong> ordinary differential equation in <span class="math-container">$S'$</span> (the dual of the Schwartz space)
<span class="math-container">$$y' + ay = \delta(x),$$</span> where <span class="math-container">$\delta(x)$</span> is the Dirac delta function.
The general solution of the homogeneous equation is <span class="math-container">$y_{1}(x)=Ce^{-ax}$</span>, where <span class="math-container">$C$</span> is a constant.
The general solution of the inhomogeneous equation is <span class="math-container">$y(x)=y_{1}(x)+y_{2}(x)$</span>, where <span class="math-container">$y_{2}(x)$</span> is a particular solution of the inhomogeneous equation, found by variation of the constant (Lagrange's method):
<span class="math-container">$$y_{2}(x)=e^{-ax}\int_{-\infty }^{x}\delta (t)e^{at}\,dt=e^{-ax}\int_{-\infty }^{x}\delta (t)e^{a\cdot 0}\,dt=e^{-ax}\int_{-\infty }^{x}\delta (t)\,dt=e^{-ax}\,\theta (x),$$</span>
where <span class="math-container">$\theta \left ( x \right )=\left\{\begin{matrix}1,& x>0 \\ 0 ,& x<0\end{matrix}\right.$</span> is the Heaviside step function and <span class="math-container">$t$</span> is the integration variable. Consequently,
<span class="math-container">$$y(x)=y_{1}+y_{2}=Ce^{-ax}+\theta (x)e^{-ax}=\left (\theta (x)+C \right )e^{-ax},$$</span>
which matches the textbook's piecewise solution.</p>
|
3,029,778 | <p>I asked a similar question in <a href="https://math.stackexchange.com/questions/3029766/positive-definite-matrix-implies-the-infimum-of-eigenvalues-are-positive">here</a>, but actually what I want to ask is more difficult as described below:</p>
<p>Suppose <span class="math-container">$P(x): \mathbb{R} \to \mathbb{R}^{n \times n}$</span> is always a positive semi-definite matrix. Now if there is a set <span class="math-container">$\Omega \subset \mathbb{R}$</span> such that we know the infimum of the determinant of <span class="math-container">$P(x)$</span> over <span class="math-container">$\Omega$</span> is always positive, then does it imply that the infimum (over <span class="math-container">$\Omega$</span>) of the minimum eigenvalue of <span class="math-container">$P(x)$</span> is always positive? In a mathematical way:</p>
<p>Is the following conclusion correct?
<span class="math-container">\begin{equation}
\inf_{x \in \Omega}\{\det(P(x))\}>0 \implies \inf_{x \in \Omega} \{\lambda_{{\rm min}}(P(x)) \} > 0
\end{equation}</span>.</p>
| user1551 | 1,551 | <p>No. Consider e.g. <span class="math-container">$P(x)=\operatorname{diag}(x,\frac1x)$</span> over <span class="math-container">$\Omega=[1,+\infty)$</span>.</p>
<p>It is true, however, that if <span class="math-container">$\Omega$</span> is compact, <span class="math-container">$P$</span> is continuous and <span class="math-container">$P(x)$</span> is positive definite over <span class="math-container">$\Omega$</span>, then <span class="math-container">$\inf_{x\in\Omega}\lambda_\min(P(x))>0$</span>. This is because the eigenvalues of a matrix vary continuously with the matrix's entries and every continuous function attains its minimum on a compact set.</p>
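<p>Concretely (my own sketch), for <span class="math-container">$P(x)=\operatorname{diag}(x,1/x)$</span> the determinant is identically <span class="math-container">$1$</span> while the least eigenvalue <span class="math-container">$1/x$</span> tends to <span class="math-container">$0$</span> over <span class="math-container">$\Omega=[1,+\infty)$</span>:</p>

```python
for x in (1.0, 10.0, 1000.0):
    det = x * (1 / x)          # determinant of diag(x, 1/x)
    lam_min = min(x, 1 / x)    # eigenvalues of a diagonal matrix are its entries
    print(x, det, lam_min)
# det stays 1, but lam_min -> 0: a positive inf of det does not
# bound the smallest eigenvalue away from zero
```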
|
13,882 | <p>Background: When Ueno builds the fully faithful functor from Var/k to Sch/k he mentions that the variety $V$ can be identified with the rational points of $t(V)$ over $k$. I know how to prove this on affine everything and will work out the general case at some future time.</p>
<p>The question that this got me thinking about was if $X$ is a $k$-scheme where $k$ is algebraically closed, then are the $k$-rational points of $X$ just the closed points? This is probably extremely well known, but I can't find it explicitly stated nor can I find a counterexample.</p>
<p>For $k$ not algebraically closed, I can come up with examples where this is not true. So in general is there some relation between the closed points and rational points on schemes (everything over $k$)?</p>
<p>This would give a bit more insight into what this functor does. It takes the variety and makes all the points into closed points of a scheme, then adds the generic points necessary to actually make it a legitimate scheme. General tangential thoughts on this are welcome as well.</p>
| user717 | 717 | <p>If $k$ is algebraically closed and $X$ is a $k$-scheme locally of finite type, then the $k$-rational points are precisely the closed points. (See EGA 1971, Ch. I, Corollaire 6.5.3).</p>
<p>More generally: if $k$ is a field and $X$ is a $k$-scheme locally of finite type, then $X$ is a Jacobson scheme (i.e. it is quasi-isomorphic to its underlying ultrascheme) and the closed points are precisely the points $x \in X$ such that $\kappa(x)|k$ is a finite extension.</p>
<p>You should also confer the appendix of EGA 1971. There it is shown that for any field $k$ the category of $k$-schemes locally of finite type with morphisms locally of finite type is equivalent to the category of $k$-ultraschemes (a $k$-ultrascheme is locally the maximal spectrum of a $k$-algebra). </p>
|
2,681,621 | <p>I'm trying to calculate the following limit:</p>
<p>$$\lim_{x\to\pi} \dfrac{1}{x-\pi}\left(\sqrt{\dfrac{4\cos²x}{2+\cos x}}-2\right)$$</p>
<p>I thought of calculating this:</p>
<p>$$\lim_{t\to0} \dfrac{1}{t}\left(\sqrt{\dfrac{4\cos²(t+\pi)}{2+\cos(t+\pi)}}-2\right)$$</p>
<p>Which is the same as:</p>
<p>$$\lim_{t\to0} \dfrac{1}{t}\left(\sqrt{\dfrac{4\cos²t}{2-\cos t}}-2\right)$$</p>
<p>I don't have an idea about where to go from here.</p>
| user | 505,767 | <p>From here by first order binomial expansion</p>
<p>$$\frac{1}{t}\left(\sqrt{\frac{4\cos^2t}{2-\cos t}}-2\right)=\frac1t\left(2\cos t\,(1+(1-\cos t))^{-\frac12}-2\right)\sim\frac1t\left(2\cos t\left(1-\frac12(1-\cos t)\right)-2\right)=\frac1t(2\cos t-\cos t+\cos^2t-2)=\frac{\cos^2t+\cos t-2}{t}=\frac{(\cos t-1)(\cos t+2)}{t^2}\cdot t\to -\frac12 \cdot 3\cdot 0=0$$</p>
<p>As an alternative by algebraic manipulation</p>
<p>$$\frac{1}{t}\left(\sqrt{\frac{4\cos^2t}{2-\cos t}}-2\right)=
\frac{1}{t}\frac{\sqrt{4\cos^2t}-2\sqrt{2-\cos t}}{\sqrt{2-\cos t}}
\frac{\sqrt{4\cos^2t}+2\sqrt{2-\cos t}}{\sqrt{4\cos^2t}+2\sqrt{2-\cos t}}
=\frac{1}{t}\frac{4\cos^2t-8+4\cos t}{\sqrt{2-\cos t}(\sqrt{4\cos^2t}+2\sqrt{2-\cos t})}
=t\frac{\cos t -1}{t^2}\frac{4(\cos t+2)}{\sqrt{2-\cos t}(\sqrt{4\cos^2t}+2\sqrt{2-\cos t})}\to0\cdot\left(-\frac12\right)\cdot 3=0$$</p>
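<p>A numeric check (my own) is consistent with the limit being $0$ — near $x=\pi$ the expression shrinks like a constant multiple of $x-\pi$:</p>

```python
from math import cos, pi, sqrt

def g(x):
    return (sqrt(4 * cos(x)**2 / (2 + cos(x))) - 2) / (x - pi)

for t in (1e-1, 1e-2, 1e-3):
    print(t, g(pi + t))    # shrinks proportionally to t, hence tends to 0
```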
|
544,008 | <p>We know since $\mathbb{Q}$ is countable that there exist a bijection $f : \mathbb{Z} \to \mathbb{Q} $. If we view $\mathbb{Q}$ and $\mathbb{Z}$ are topological subspaces of $\mathbb{R}$, are theo homeomorphic??</p>
| Don Shanil | 103,948 | <p>First pick a topology. In this case I assume it is the induced (subspace) topology. Now any topological invariant gives an obstruction to a homeomorphism. For example, every point of $\mathbb{Z}$ is isolated in the subspace topology, so $\mathbb{Z}$ is discrete, whereas $\mathbb{Q}$ has no isolated points. So the answer is no, they are not homeomorphic.</p>
|
2,607,090 | <p>I have a function for which I know:</p>
<p>$f(2) = 2x -3y \\
f(3) = 5x - 6y \\
f(4) = 9x - 10 y \\
f(5) = 14x - 15y$</p>
<p>Assuming that $f$ is a polynomial, how do I find the general expression for $f$? After many minutes of fiddling I eventually found that this general expression works:</p>
<p>$f(N) = \frac{N(N+1)-2}{2}x - \frac{N(N+1)}{2}y$.</p>
<p>It's easy to verify that the expression works, but I found this by trial-and-error and I don't know if it's either unique or the simplest solution.</p>
| Thomas Pastor | 518,233 | <p>This is called <a href="https://en.wikipedia.org/wiki/Regression_analysis" rel="nofollow noreferrer">Regression</a>.</p>
<p>$$f(N) = f_1(N)\,x + f_2(N)\,y$$</p>
<p>First, you need to define the desired form of your expression, i.e. what you mean by "simplest". For example, people often use the linear form $\hat{f_1}(N) = a_1 N + b_1$, $\hat{f_2}(N) = a_2 N + b_2$.</p>
<p>Second, you need to define the metric used to evaluate the performance of your regression, for example minimizing the squared error
$$\epsilon = \sum_N (\hat{f_1}(N) - f_1(N))^2 = \sum_N (a_1 N + b_1 - f_1(N))^2$$</p>
<p>Setting the partial derivatives of $\epsilon$ to zero yields the values of $a_1$ and $b_1$ that minimize the squared error.</p>
<p>In your case, if you choose the quadratic form $f_1(N) = a_1N^2+b_1N+c_1$, the partial derivatives will yield $a_1 = \frac12$, $b_1=\frac12$ and $c_1=-1$, i.e. $f_1(N)=\frac{N(N+1)-2}{2}$, matching the expression you found.</p>
<p>Note that all mathematical software packages have regression functions, but you always have to choose the form; since the form is a modelling choice, the answer is not unique.</p>
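<p>Since the four given data points are consistent, the quadratic is exactly determined and no least-squares machinery is needed: solving the $3\times3$ interpolation system from three of the points (a plain-Python sketch of mine) recovers the coefficients of the asker's formula $f_1(N)=\frac{N(N+1)-2}{2}$ and predicts the fourth point.</p>

```python
def solve3(M, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination (pivots nonzero here)."""
    A = [row[:] + [val] for row, val in zip(M, v)]
    for i in range(3):
        A[i] = [a / A[i][i] for a in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [a - A[j][i] * b for a, b in zip(A[j], A[i])]
    return [row[3] for row in A]

# fit f1(N) = a*N^2 + b*N + c through (2, 2), (3, 5), (4, 9)
pts = [(2, 2), (3, 5), (4, 9)]
a, b, c = solve3([[N * N, N, 1] for N, _ in pts], [y for _, y in pts])
print(a, b, c)              # ~ 0.5, 0.5, -1.0
print(a * 25 + b * 5 + c)   # ~ 14.0, matching the fourth data point f1(5) = 14
```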
|
3,290,199 | <p>If I throw a fair dice <span class="math-container">$12$</span> times, the expected number of <span class="math-container">$6$</span> is <span class="math-container">$2$</span> i.e <span class="math-container">$6$</span> is expected to appear <span class="math-container">$2$</span> times when the dice is thrown <span class="math-container">$12$</span> times. But the probability of getting <span class="math-container">$6$</span> exactly <span class="math-container">$2$</span> times is <span class="math-container">${12}\choose{2}$$(1/6)^{2} (5/6)^{10} $</span> which is less than <span class="math-container">$1$</span>. </p>
<p>Now my question is <strong>How can you expect the face value six to appear for two times , when the possibility of that appearing for two times is very low?</strong></p>
<p>I am tying to give an analogy..If you are participating a game where you can win , lose or remain undecided. How can you expect to win When you know the possibility of winning the game is very low?</p>
<p>Can anyone please make me understand where I am getting wrong? I am really trying hard to understand.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>No, the function does not have to be bounded to have an integral.</p>
<p>Consider <span class="math-container">$$ \int _0^1 \frac {dx}{\sqrt x}$$</span> which is an improper integral because the integrand is not bounded on <span class="math-container">$(0,1)$</span>.</p>
<p>However, the antiderivative is <span class="math-container">$2\sqrt x$</span>, which results in a bounded value:</p>
<p><span class="math-container">$$ \int _0^1 \frac {dx}{\sqrt x} =2 $$</span></p>
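The value is easy to confirm with a computer algebra system; here is a sketch with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The integrand 1/sqrt(x) is unbounded near 0, yet the improper integral converges.
val = sp.integrate(1 / sp.sqrt(x), (x, 0, 1))
print(val)  # 2
```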
|
3,290,199 | <p>If I throw a fair die <span class="math-container">$12$</span> times, the expected number of <span class="math-container">$6$</span>s is <span class="math-container">$2$</span>, i.e. <span class="math-container">$6$</span> is expected to appear <span class="math-container">$2$</span> times when the die is thrown <span class="math-container">$12$</span> times. But the probability of getting <span class="math-container">$6$</span> exactly <span class="math-container">$2$</span> times is <span class="math-container">$\binom{12}{2}(1/6)^{2} (5/6)^{10}$</span>, which is less than <span class="math-container">$1$</span>. </p>
<p>Now my question is <strong>How can you expect the face value six to appear for two times , when the possibility of that appearing for two times is very low?</strong></p>
<p>I am trying to give an analogy. If you are participating in a game where you can win, lose or remain undecided, how can you expect to win when you know the possibility of winning the game is very low?</p>
<p>Can anyone please help me understand where I am going wrong? I am really trying hard to understand.</p>
| eyeballfrog | 395,748 | <p>If <span class="math-container">$f$</span> is continuous on the interval, no additional condition is needed for it to have an antiderivative. Pick any point <span class="math-container">$c$</span> in the interval and <span class="math-container">$\int_c^x f(x')dx'$</span> will be an antiderivative, since <span class="math-container">$f(x)$</span> is bounded on <span class="math-container">$[c,x]$</span>.</p>
<p>If <span class="math-container">$f$</span> is not continuous, then things get trickier. For the normal sort of functions you encounter in calculus class, it is sufficient for the function to be bounded on every finite closed subinterval of the domain.</p>
|
2,244,423 | <p>The function given is $f(x) = \sqrt[3]{{x}^2(2-x)}$.</p>
<p>Can anybody help me find all asymptotes of this function? I know it doesn't have a vertical asymptote, and that the slope of its slant asymptote is $\sqrt[3]{-1}=-1$, but I don't know how to find the slant (oblique) asymptote itself.</p>
<p>I'd prefer if someone could help me solve it using the formula given below:
$y = kx + l$ where $k = \lim_{x\to\infty} \dfrac{f(x)}{x}$ and $l=\lim_{x\to\infty}[f(x)-kx]$. I found $k=-1$ but I don't know how to find $l$.</p>
| Community | -1 | <p>You want to compute</p>
<p>$$l=\lim_{x\to\infty}\left(\sqrt[3]{{x}^2(2-x)}+x\right).$$</p>
<p>To get rid of the cube root, you can multiply by the conjugate trinomial and get</p>
<p>$$l=\lim_{x\to\infty}\left(\frac{{x}^2(2-x)+x^3}{\sqrt[3]{{x}^2(2-x)}^2-\sqrt[3]{{x}^2(2-x)}\,x+x^2}\right).$$</p>
<p>The numerator simplifies to $2x^2$, and after factoring $x^2$ out of the denominator, the expression tends to</p>
<p>$$\frac2{(-1)^2-(-1)+1}=\frac{2}{3},$$ so the slant asymptote is $y=-x+\frac{2}{3}$.</p>
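The two limits can also be checked with SymPy. For $x>2$ the radicand is negative, so the real cube root is written as $-(x^2(x-2))^{1/3}$, an equivalent form for large $x$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = -(x**2 * (x - 2)) ** sp.Rational(1, 3)   # real cube root of x^2*(2-x) for x > 2

k = sp.limit(f / x, x, sp.oo)        # slope of the slant asymptote
l = sp.limit(f - k * x, x, sp.oo)    # intercept
print(k, l)  # -1 2/3
```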
|
3,320,830 | <p>I was wondering if the inequality
<span class="math-container">$$\left|\int_0^T f(t,\omega )dW_t\right|\leq \int_0^T|f(t,\omega )|dW_t$$</span> holds for stochastic integrals. In fact, I don't see such a property in any book, nor on Google, so I have some doubts. What do you think?</p>
| Parcly Taxel | 357,390 | <p>A Raipur-bound train is only going to meet Nagpur-bound trains. And we can draw a diagram for that:</p>
<p><img src="https://i.stack.imgur.com/v129m.png" alt=""></p>
<p>The answer is <span class="math-container">$12$</span> trains.</p>
|
1,221,056 | <p>I have an assignment and will use it as an example; I found the solution computationally but want to understand the idea.</p>
<p>It is about the <em>SubBytes</em> procedure in AES, particularly about finding the inverse of a polynomial.</p>
<p>Suppose we have the element $A=x^5+1$ in the finite field $F=\mathbb{Z}_2[x]/(x^8+x^4+x^3+x+1)$, and it is required to compute $A^{-1}$, so that $AA^{-1}=1$.</p>
<p>Computationally $A^{-1}=x^6 + x^5 + x^3 + x^2 + x$, $AA^{-1}=(x^5+1)(x^6 + x^5 + x^3 + x^2 + x)=x^{11} + x^{10} + x^8 + x^7 + x^5 + x^3 + x^2 + x$</p>
<p>Let's name $I=x^8+x^4+x^3+x+1$ for convenience; it generates the ideal we quotient by. Then $AA^{-1}-I\cdot x^3-I\cdot x^2-I=1$, so $AA^{-1}$ reduces to $1$ all right.</p>
<p>Well, how to get $A^{-1}=x^6 + x^5 + x^3 + x^2 + x$? This is where it gets messy for me.</p>
<p>It is my understanding that $A^{-1}$ is found by the extended Euclidean algorithm, in this case $(x^5+1)\cdot A^{-1}+I\cdot R=1$, where $R \in F$.</p>
<p>What I do.. try to do:</p>
<p>$(x^8+x^4+x^3+x+1)=(x^5+1)(x^3)+(x^4 + x + 1)$</p>
<p>$(x^5+1)=(x^4 + x + 1)(x)+(x^2 + x + 1)$
... it gets messy from here; see my work at Cloud Sage[Removed].</p>
<p>What am I doing wrong above when I try to find <code>F.xgcd(A)</code>?</p>
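For anyone wanting to double-check the arithmetic without Sage, here is a small plain-Python sketch (polynomials over $\mathbb{Z}_2$ encoded as bit masks, with the AES modulus as the bit mask <code>0x11B</code>); it confirms that $x^6+x^5+x^3+x^2+x$ really is the inverse of $x^5+1$:

```python
def clmul(a, b):
    """Carry-less product of two GF(2)[x] polynomials given as bit masks."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def reduce_mod(p, mod=0x11B):
    """Reduce p modulo I = x^8 + x^4 + x^3 + x + 1 (bit mask 0x11B)."""
    while p.bit_length() >= mod.bit_length():
        p ^= mod << (p.bit_length() - mod.bit_length())
    return p

A    = 0b00100001  # x^5 + 1
Ainv = 0b01101110  # x^6 + x^5 + x^3 + x^2 + x

print(reduce_mod(clmul(A, Ainv)))  # 1
```

A brute-force scan over all 255 nonzero field elements finds the same (unique) inverse, which is a handy cross-check on an extended-Euclidean implementation.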
| lab bhattacharjee | 33,337 | <p>$$\cos^2x=1-\sin^2x=(1-\sin x)(1+\sin x)\iff\frac{\cos x}{1+\sin x}=\frac{1-\sin x}{\cos x}$$</p>
|
1,692,346 | <p>I have heard of a statement like this:</p>
<blockquote>
<p>A car can technically never run out of gas (when still moving) if the driver uses half of the gas left each time.</p>
</blockquote>
<p>Is this possible (mathematically speaking)?</p>
| Dan Christensen | 3,515 | <p>You are overthinking this. Yes, in draining a 100 litre tank full of gasoline, you can imagine that infinitely many events occur: At some point in time for example, the tank will be (1) 1/2 full, and (2) 1/4 full, and (3) 1/8 full, and so on. But we can measure the rate at which the tank is being emptied, in units of say litres per hour, and calculate precisely when the tank will be emptied even if infinitely many of the above "events" had occurred in the interval. Nothing "paradoxical" here.</p>
|
1,039,474 | <p>Solve the equation $x^4 - 14x^3 + 50x^2 -14x + 1 = 0$. <br/> I am not sure how best to proceed, and would like a solution that does not involve the general quartic formula.</p>
| sciona | 195,458 | <p><strong>Hint:</strong> First observe that the equation is palindromic. Divide throughout by $x^2$ and rewrite it as a quadratic in $\left(x+\dfrac{1}{x}\right)$.</p>
|
1,039,474 | <p>Solve the equation $x^4 - 14x^3 + 50x^2 -14x + 1 = 0$. <br/> I am not sure how best to proceed, and would like a solution that does not involve the general quartic formula.</p>
| Varun Iyer | 118,690 | <p>A more detailed solution:</p>
<p>If we divide the equation by $x^2$:</p>
<p>$$\frac{x^4}{x^2} - \frac{14x^3}{x^2} + \frac{50x^2}{x^2} - \frac{14x}{x^2} + \frac{1}{x^2} = x^2 - 14x + 50 - \frac{14}{x} + \frac{1}{x^2}$$</p>
<p>Then, combining like terms, we notice that:</p>
<p>$$x^2 + \frac{1}{x^2} - 14\left(x+\frac{1}{x}\right) + 50$$</p>
<p>If we let $y = x+\frac{1}{x}$ </p>
<p>Note that:</p>
<p>$$\left(x+\frac{1}{x}\right)^2 = x^2 + \frac{1}{x^2} + 2, \qquad\text{so}\qquad x^2 + \frac{1}{x^2} = y^2 - 2$$</p>
<p>Therefore,</p>
<p>$$x^2 + \frac{1}{x^2} - 14\left(x+\frac{1}{x}\right) + 50 = y^2 -2 -14y + 50 =0$$</p>
<p>$$y^2 - 14y +48 =0$$</p>
<p>$$(y-6)(y-8) = 0$$</p>
<p>Therefore, $x + \frac{1}{x} = 6$ or $x + \frac{1}{x} = 8$.</p>
<p>Can you take it from here?</p>
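To check the endgame: the substitution corresponds to the factorization $(x^2-6x+1)(x^2-8x+1)$, which SymPy confirms along with the four roots $3\pm2\sqrt{2}$ and $4\pm\sqrt{15}$:

```python
import sympy as sp

x = sp.symbols('x')
quartic = x**4 - 14*x**3 + 50*x**2 - 14*x + 1

# x + 1/x = y with y in {6, 8} is the same as factoring into x^2 - y*x + 1:
assert sp.expand((x**2 - 6*x + 1) * (x**2 - 8*x + 1)) == quartic

print(sp.solve(quartic, x))  # 3 ± 2*sqrt(2) and 4 ± sqrt(15)
```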
|
3,370,076 | <p>The total mechanical energy is conserved when a ball is dropped from a height of 4.00 <span class="math-container">$\mathit{m}$</span>, and it makes an elastic collision with the ground. Assuming no non-conservative forces are acting, find the period of the ball's motion. Here <span class="math-container">$g = 9.81\ \mathrm{m/s^2}$</span>.</p>
<p><span class="math-container">\begin{align}
PE_g &= U_s \\
mgh &= \frac{1}2 kA^2 \\
mgh &= \frac{1}2 kh^2 \\
2mgh &= kh^2 \\
2\frac{g}{h} &= \frac{k}{m} \\
\omega &= \sqrt{\frac{k}{m}} = \sqrt{\frac{2g}{h}} \\
T &= \frac{2 \pi}{\omega}=2\pi \sqrt{\frac{h}{2g}} = \sqrt{2} \pi\sqrt{\frac{h} {g}}=2.837 s
\end{align}</span></p>
<p>Is my approach correct?</p>
<h2>Fixed Approach</h2>
<p><span class="math-container">\begin{align}
mgh &= \frac{1}2 m v^2_f \\
v_f &= \sqrt{2gh} \\
\frac{v_f - v_0}{g} &= t = \frac{T}{2} \\
2t &= T = 1.80 s
\end{align}</span></p>
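Numerically (with $g=9.81\ \mathrm{m/s^2}$ and $h=4.00\ \mathrm m$), the two approaches give the two values quoted above; only the free-fall one models a bouncing ball, since the motion is not simple harmonic:

```python
from math import pi, sqrt

g, h = 9.81, 4.00

T_shm  = 2 * pi * sqrt(h / (2 * g))   # SHM-style model from the first attempt
T_fall = 2 * sqrt(2 * h / g)          # fall time down plus the same back up

print(round(T_shm, 3), round(T_fall, 3))  # 2.837 1.806
```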
| Hagen von Eitzen | 39,174 | <p>Let <span class="math-container">$\beta(x)=a^{-1}x$</span>. Then <span class="math-container">$\alpha\circ \beta$</span> and <span class="math-container">$\beta\circ \alpha$</span> are both the identity map.</p>
|
301,264 | <p>Note: There is another question of the same title, but it is different and asks for group theory prerequisites in algebraic topology, while I want the topology prerequisites. </p>
<p>I am a physics undergrad, and I wish to take up a course on Introduction to Algebraic Topology next semester, which basically covers the first two chapters of Hatcher, on the Fundamental Group and Homology. However, I don't have a formal mathematics background in point-set topology, and I don't have enough time to go through whole books such as Munkres. So what part of the point-set topology in Munkres is actually used in the first two chapters of Hatcher?</p>
<p>More importantly, I wanted to know if the first chapter of the book <a href="http://rads.stackoverflow.com/amzn/click/1441972536">Topology, Geometry and Gauge Fields by Naber</a> or the first two chapters of Lee's Topological Manifolds would be sufficient to provide the necessary background for Hatcher.</p>
<p>Thanks in advance!</p>
| Sigur | 31,682 | <p>For sure you'll need <em>continuous functions</em>, <em>homeomorphisms</em>, <em>connectedness</em>, <em>compactness</em>, <em>coverings</em> and many others.</p>
|
635,351 | <p>It is well known that if a series $\sum\limits_{k= 0}^\infty a_k$ converges, then $a_k \to 0$. </p>
<p>However, this is not true for integrals. What makes them different? Is it simply that they are "smoother?" Is there a rigorous way to explain this difference?</p>
| Glen O | 67,842 | <p>As with so many things in Mathematics, the actual identification of the right way to figure something out is actually more of an art than a science (although trial and error will often get you there).</p>
<p>A good rule of thumb with integration of functions that are products of trig is that, if you can't see an obvious substitution, try integration by parts. Pattern recognition is also useful a lot of the time - for instance, how might you solve the case where $n=0$? $n=1$? $n=2$? Can you generalise the approach, either for all $n$, all integer $n$, or all even (or odd) integer $n$?</p>
|
114,733 | <p>Say you have the half-plane $\{z\in\mathbb{C}:\Re(z)>0\}$. Is there a rigorous explanation why the transformation $w=\dfrac{z-1}{z+1}$ maps the half plane onto $|w|<1$?</p>
| WimC | 25,313 | <p>You can also check it explicitly:</p>
<p>$$
\left| \frac{z-1}{z+1} \right|^2 = \frac{z-1}{z+1}\cdot\frac{\overline{z}-1}{\overline{z}+1} = \frac{|z|^2-2 \Re(z) +1}{|z|^2+2 \Re(z)+1} < 1.
$$</p>
<p>The last inequality follows simply because $\Re(z) > 0$ and so the numerator is smaller than the denominator.</p>
<p>The other way around: the inverse is given by</p>
<p>$$
z \mapsto \frac{1+z}{1-z}
$$</p>
<p>and we can check the real part for $|z| < 1$:</p>
<p>$$
\Re\left(\frac{1+z}{1-z}\right) = \frac{1}{2} \left( \frac{1+z}{1-z} + \frac{1+\overline{z}}{1-\overline{z}}\right) = \frac{1-|z|^2}{|1-z|^2} > 0.
$$</p>
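Both directions are also easy to spot-check numerically at random points (a quick sanity check, not a proof):

```python
import random

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(1e-6, 10), random.uniform(-10, 10))  # Re(z) > 0
    w = (z - 1) / (z + 1)
    assert abs(w) < 1                      # the map lands in the unit disc
    assert ((1 + w) / (1 - w)).real > 0    # and the inverse goes back

print("ok")
```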
|
764,947 | <p>I want to solve the following exercise:<br/>
<br/>
Show that the two elliptic curves $E/ \mathbb{Q}$ and $E'/ \mathbb{Q}$ are isomorphic.<br/>
$E: y^2 = x^3+x-2$ and $E': y'^2 = x'^3-\frac{1}{3}x' - \frac{52}{27}$. <br/>
<br/>
I am trying to find a change of variables $(x,y)\mapsto(x',y')$ transforming the Weierstraß equation defining $E$ to the Weierstraß equation defining $E'$.<br/>
<br/>
I tried this by guesswork because I couldn't think of a clever way.<br/>
A first idea was to put $y = (\sqrt{27 y'}-\sqrt{2})$ because then I already get $27y'^2 = 27x'^3-27x'-52$ which is $y'^2 = x'^3-x'-\frac{52}{27}$ which looks a bit more like $E'$. But I don't know what to do about the $x$. <br/>
<br/>
Is there a more strategic way to do this? Does anyone have a hint how to solve this exercise?<br/>
<br/>
All the best!</p>
| Álvaro Lozano-Robledo | 14,699 | <p>Well, you must have the wrong equations, because they are <strong>not isomorphic</strong>. The $j$-invariant classifies elliptic curves up to isomorphism (over $\mathbb{C}$), and the $j$-invariants of these curves are $432/7$ and $-64/25$, respectively. Since they are distinct, they are not isomorphic.</p>
<p>In light of Noam Elkies' answer, the $j$-invariant of $y^2=x^3+x^2-2$ is indeed $-64/25$, the same as $E'$ in the statement of the problem.</p>
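For curves in short Weierstrass form $y^2=x^3+ax+b$ the $j$-invariant is $j = 1728\cdot\frac{4a^3}{4a^3+27b^2}$, so the two values quoted above are a one-liner to verify with exact rational arithmetic:

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of the short Weierstrass curve y^2 = x^3 + a*x + b."""
    return 1728 * 4 * a**3 / (4 * a**3 + 27 * b**2)

print(j_invariant(Fraction(1), Fraction(-2)))           # 432/7
print(j_invariant(Fraction(-1, 3), Fraction(-52, 27)))  # -64/25
```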
|
1,748,547 | <p>Show that if the closed interval $[a,b]$ is covered by finitely many open intervals $(a_1,b_1), ...,(a_n,b_n)$, then $$b-a \le \sum^n_{i=1}(b_i-a_i)$$. </p>
<p>I know that $(a_1,b_1), ...,(a_n,b_n)$ form an open covering of $[a,b]$, and my thought is to show the inequality by mathematical induction, but not sure how to prove this. Could someone provide a complete proof please? Thanks a lot. </p>
| Eman Yalpsid | 94,959 | <p>Take any $n$ element cover.
The case $n=1$ is clear. Assume $n >1$.</p>
<p>If no two intervals intersect each other, then notice that no $a_i$ or $b_i$ is covered, therefore $a_i ,b_i \in \mathbb R \setminus [a,b]$ for all $i$. But $[a,b]$ is covered so there has to be an $i$ such that $a_i < a < b < b_i$ so the RHS above is at least $b-a$.</p>
<p>If there are at least two intersecting sets, then choose two and take their union; this way you have reduced the original cover to an $(n-1)$-element cover, and thus you can apply induction. Finally, noting that the union's length is at most the sum of the lengths, you should be done.</p>
|
2,235,610 | <p>I need some help with the proof of the uniformization theorem (Silverman's Advanced Topics ...).</p>
<p>If we have $G_{4}(\Lambda_{1})=G_{4}(\Lambda_{2})$ and $G_{6}(\Lambda_{1})=G_{6}(\Lambda_{2})$ (with $\Lambda_{1},\Lambda_{2}$ two lattices and $G_{n}$ the Eisenstein series),</p>
<p>why do we have $\Lambda_{1}=\Lambda_{2}$?</p>
| Joe Silverman | 317,822 | <p>It might be easiest to first note that the $j$-invariants $j(\Lambda_1)=j(\Lambda_2)$ are equal and use the theorem that the $j$-invariant defines an injective map from the space of lattices modulo homothety to the affine line. Thus the equality $j(\Lambda_1)=j(\Lambda_2)$ implies that $\Lambda_1=c\Lambda_2$ for some $c\in\mathbb C^*$. Next use the fact that $G_{2k}(c\Lambda)=c^{-2k}G_{2k}(\Lambda)$ and your assumption that $G_4(\Lambda_1)=G_4(\Lambda_2)$ and $G_6(\Lambda_1)=G_6(\Lambda_2)$ to conclude that $c^2=1$, provided that both $G_4$ and $G_6$ values are non-zero. Hence $c=\pm1$, which gives the desired result, since clearly $-\Lambda=\Lambda$. Finally, if one of $G_4$ or $G_6$ is zero, you only get $c^4=1$ or $c^6=1$, but in each case one can show that the lattices have CM by the appropriate root of unity.</p>
|
660,259 | <p>$f(y)=\begin{cases} \frac{b}{y^2}, & y\ge b,\\ 0, & \mbox{elsewhere}\end{cases}$.</p>
<p>is a bona fide probability density function for a random variable, $Y$. Assuming $b$ is a known
constant and $U$ has a uniform distribution on the interval $(0, 1)$, transform $U$ to obtain a random variable with the same distribution as $Y$.</p>
<p>I have no clue how to get started on this question. Could anyone help me get started or give some hints?</p>
| copper.hat | 27,978 | <p>Assume $b>0$.</p>
<p>Let $\phi(\alpha) = p \{ y | y \le \alpha \} = \int_{-\infty}^\alpha f(y) dy = \begin{cases} 0, & \alpha <b \\ 1-{b \over \alpha}, & \alpha \ge b\end{cases}$.
Note that the restriction $\phi:[b,\infty) \to [0,1)$ is a bijection, and its inverse
$\phi^{-1}:[0,1) \to [b,\infty)$ is given by
$\phi^{-1}(y) = { b \over 1-y}$.</p>
<p>Then $\phi^{-1}(U)$ is a random variable with distribution $\phi$.</p>
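A quick Monte Carlo sketch of this inverse-transform recipe, with an arbitrary choice $b=2$: the CDF gives $P(Y\le 2b)=1-\frac{b}{2b}=\frac12$, which the sample fraction should approximate:

```python
import random

random.seed(1)
b = 2.0

# phi^{-1}(u) = b / (1 - u) maps U ~ Uniform(0, 1) to Y with density b/y^2 on [b, oo)
samples = [b / (1 - random.random()) for _ in range(100_000)]

frac = sum(y <= 2 * b for y in samples) / len(samples)
print(round(frac, 2))  # close to 0.5
```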
|
1,507,290 | <p>Kindly help me understand this statement made by my prof.</p>
<blockquote>
<p>The identity matrix I has the property that any non zero vector <span class="math-container">$V$</span> is an eigenvector of eigenvalue <span class="math-container">$1$</span>.</p>
</blockquote>
<p>My assumption of this statement is that the column vector (1,1) multiplied by the identity matrix is equal to the identity matrix. But the confusing part is when he says "...any non zero..". This is implying we can use other values that don't equal one. I believe the eigenvalue would change in light of the different non-<span class="math-container">$1$</span> values.</p>
| Ben Grossmann | 81,360 | <p>From your question, it seems that you don't understand what eigenvectors are.</p>
<p>If $A$ is a matrix, then we call $v$ an eigenvector if it is not zero and $Av=\lambda v$ for some constant (that is, some scalar) $\lambda$. The constant $\lambda$ is called an eigenvalue of $A$.</p>
<p>Note that for every vector $v$, $Iv=1\cdot v=v$. So, if $v$ is not zero, $v$ is an eigenvector of $I$, and the associated eigenvalue is $1$.</p>
|
1,507,290 | <p>Kindly help me understand this statement made by my prof.</p>
<blockquote>
<p>The identity matrix I has the property that any non zero vector <span class="math-container">$V$</span> is an eigenvector of eigenvalue <span class="math-container">$1$</span>.</p>
</blockquote>
<p>My assumption of this statement is that the column vector (1,1) multiplied by the identity matrix is equal to the identity matrix. But the confusing part is when he says "...any non zero..". This is implying we can use other values that don't equal one. I believe the eigenvalue would change in light of the different non-<span class="math-container">$1$</span> values.</p>
| upe | 459,399 | <h2>Eigenvectors & Eigenvalues</h2>
<p><a href="https://www.youtube.com/watch?v=PFDu9oVAE-g" rel="nofollow noreferrer">3Blue1Brown's video on eigenvectors and eigenvalues</a> explains the eigenvectors and eigenvalues visually.</p>
<p>In general, matrix-vector multiplication <span class="math-container">$Av = b$</span> maps the vector <span class="math-container">$v$</span> to the vector <span class="math-container">$b$</span>.
Accordingly, the matrix multiplication with the identity matrix <span class="math-container">$I$</span> maps the vector <span class="math-container">$v$</span> to itself <span class="math-container">$Iv = v$</span>.</p>
<p>Here is my attempt to visualize an example for <span class="math-container">$Av = b$</span> in 2D space:</p>
<p><a href="https://i.stack.imgur.com/XfGzf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XfGzf.png" alt="matrix-vector multiplication example" /></a></p>
<p>By multiplying <span class="math-container">$A$</span> and <span class="math-container">$v$</span> we get <span class="math-container">$b$</span>.
As you can see, to get <span class="math-container">$b$</span> we need to stretch and rotate <span class="math-container">$v$</span>.
The rotation and scaling factor are given by <span class="math-container">$A$</span>.</p>
<p>Given that <span class="math-container">$A$</span> is an arbitrary matrix, there can exist <strong>nonzero</strong> vectors such that when multiplied with <span class="math-container">$A$</span> they do <strong>not</strong> get rotated (only scaled).
These vectors are eigenvectors.
Therefore, we can write <span class="math-container">$\lambda v = b$</span> for every eigenvector <span class="math-container">$v$</span>, where <span class="math-container">$\lambda$</span> is a scalar factor.
We call <span class="math-container">$\lambda$</span> the eigenvalue.</p>
<p>In the following example, <span class="math-container">$A$</span> describes a reflection along the diagonal from bottom left to top right:</p>
<p><a href="https://i.stack.imgur.com/O2ka9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2ka9.png" alt="eigenvector example" /></a></p>
<p>Note that every vector on one of the diagonals does not get rotated, only scaled by a factor <span class="math-container">$\pm \lambda$</span>.
This means that all vectors on one of the diagonals are eigenvectors.</p>
<h2>To answer the question</h2>
<p>When you look at the matrix-vector multiplication with the identity matrix <span class="math-container">$Iv = v$</span>, all vectors stay the same.
In particular:</p>
<ol>
<li>There is no rotation, which means all vectors are eigenvectors (except <span class="math-container">$\vec{0}$</span>).</li>
<li>All vectors get scaled by a factor of <span class="math-container">$1$</span>, which means the eigenvalue for every eigenvector is <span class="math-container">$1$</span>.</li>
</ol>
<p>And therefore,</p>
<blockquote>
<p>The identity matrix I has the property that any non-zero vector V is an eigenvector of eigenvalue 1.</p>
</blockquote>
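A two-line NumPy check of the statement (every nonzero vector is fixed, so every eigenvalue is <span class="math-container">$1$</span>):

```python
import numpy as np

I = np.eye(3)
vals = np.linalg.eigvals(I)
print(vals)  # all ones

v = np.array([2.0, -5.0, 0.3])      # an arbitrary nonzero vector
assert np.allclose(I @ v, 1.0 * v)  # I v = 1 * v, so v is an eigenvector
```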
|
550,659 | <blockquote>
<p>A space <span class="math-container">$X$</span> is locally metrizable if each point <span class="math-container">$x$</span> of <span class="math-container">$X$</span> has a neighborhood that is metrizable in the subspace topology. Show that a compact Hausdorff space <span class="math-container">$X$</span> is metrizable if it is locally metrizable.</p>
<p><strong>Hint:</strong> Show that <span class="math-container">$X$</span> is a finite union of open subspaces, each of which has a countable basis.</p>
</blockquote>
<p>I tried to use compactness, but I do not know whether the open sets are compact subspaces.</p>
| D Wiggles | 103,836 | <p>For every $x\in X$, there exists a neighborhood $U_x$ which is metrizable. These neighborhoods cover $X$, i.e., $X=\bigcup_x U_x$. Now use the definition of compactness to reduce this to a finite union, $X=U_1\cup\ldots\cup U_n$. Each of these sets is metrizable, so pick metrics which are defined locally on each $U_i$. Lastly, use a partition of unity to patch together the local metrics into a global one.</p>
|
135,936 | <p>I need this one result to do a problem correctly.</p>
<p>I want to show that for any $b \in \mathbb{C}$ and $z$ a complex variable:</p>
<p>$$ |z^2 + b^2| \geq |z|^{2} - |b|^{2}$$ </p>
<p>My attempts have only led me to conclude that </p>
<p>$$ |z^2 + b^2| > \frac{|z|^{2} + |b|^{2}}{2}$$ </p>
| Tomarinator | 21,832 | <p>We know (by the reverse triangle inequality) that</p>
<p>$$ |z^2 + b^2| \geq |z^{2}| - |b^{2}|,$$</p>
<p>and that for any complex number $x$,</p>
<p>$$|x^{2}| = |x|^{2}.$$</p>
<p>Therefore,
$$ |z^2 + b^2| \geq |z^{2}| - |b^{2}| = |z|^{2} - |b|^{2},$$</p>
<p>hence
$$ |z^2 + b^2| \geq |z|^{2} - |b|^{2}.$$</p>
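A random spot-check of the final inequality (not a proof, just reassurance):

```python
import random

random.seed(0)
for _ in range(10_000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    b = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    # reverse triangle inequality combined with |x^2| = |x|^2
    assert abs(z**2 + b**2) >= abs(z)**2 - abs(b)**2 - 1e-9

print("ok")
```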
|
14,385 | <p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p>
<p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number ("the point where a line crosses the y-axis" and "The point at which a curve or function crosses the y-axis").</p>
<p>Further investigation yielded inconsistencies: the Wikipedia entry for "<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>" states that in the equation <span class="math-container">$y=mx+b$</span>, "<span class="math-container">$b$</span> is the y-intercept of the line"; the Wolfram MathWorld entry for "<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>" states that "The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span class="math-container">$y=mx+b$</span>."</p>
<hr />
<p><sup>Edit made on February 21, 2021</sup></p>
<p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p>
<blockquote>
<p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the
axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p>
</blockquote>
<p>Unfortunately, the book does not consistently use that definition.</p>
<blockquote>
<p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p>
</blockquote>
<p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p>
<hr />
<p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines "intercept" and consistently uses that definition?</p>
| JTP - Apologise to Monica | 64 | <p>This is a case where you might be looking for a distinction that's pretty subtle.</p>
<p>By definition, the y-intercept occurs at x=0. In one notation, it's literally f(0), where the x-value is explicit. I'd be OK with a student's answer to "What is the y-intercept?" being simply the y value, or the $(0,y_0)$ point. </p>
<p>If a teacher prefers one, you can ask</p>
<ul>
<li>What is the y value of the y-intercept?</li>
</ul>
<p>or </p>
<ul>
<li>Give the point (coordinate) of the y-intercept.</li>
</ul>
<p>When I was in high school, one math teacher was fussy about 'negative' vs 'minus'. He insisted that an answer, "-4" should never be pronounced "minus four". He declared "negative is an adjective, minus is a verb." While I suppose this is true, I never found value in correcting a student who is otherwise doing the math correctly. This case may be similar: if they are getting the concept, don't focus on a point (pun intended) that may be a matter of preference. </p>
|
14,385 | <p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p>
<p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number ("the point where a line crosses the y-axis" and "The point at which a curve or function crosses the y-axis").</p>
<p>Further investigation yielded inconsistencies: the Wikipedia entry for "<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>" states that in the equation <span class="math-container">$y=mx+b$</span>, "<span class="math-container">$b$</span> is the y-intercept of the line"; the Wolfram MathWorld entry for "<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>" states that "The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span class="math-container">$y=mx+b$</span>.</p>
<hr />
<p><sup>Edit made on February 21, 2021</sup></p>
<p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p>
<blockquote>
<p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the
axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p>
</blockquote>
<p>Unfortunately, the book does not consistently use that definition.</p>
<blockquote>
<p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p>
</blockquote>
<p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p>
<hr />
<p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines "intercept" and consistently uses that definition?</p>
| Dan Fox | 672 | <p>This question reflects the dangers of over-formalizing the language used to discuss simple things. A pedantic speaker might distinguish between the <em>y-intercept b</em> and the <em>intercept (0, b)</em> (although <em>intersection point (0, b)</em> might be a better name for the latter), but very little is gained by fussing about such a distinction, particularly at the elementary level. </p>
<p>The example is very different from the one some commenters have referenced, which treats the distinction between critical point and critical value. The point on the earth where the temperature is lowest and what that temperature is are two very different things in qualitative terms, and so one needs terminology to distinguish them. In the present case, there is very little difference qualitatively; the fussing is about whether one chooses to refer to the point or its ordinate as the <em>y-intercept</em>. Both choices are reasonable and neither is consequential. The difficulty arises mostly because of overly rigid teachers who insist that one is right and the other is wrong because <em>I said so</em> or for some similar reason lacking in justification.</p>
<p>In practice I think it's best to adopt a consistent usage, simply because that is easier for students, but not to fuss when individual students, for having read a book on their own, gone to a tutor, or simply not speaking with the same pedantic care as the teacher, use the other terminology, as long as it is clear that they have answered correctly whatever question requires its use.</p>
|
14,385 | <p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p>
<p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number ("the point where a line crosses the y-axis" and "The point at which a curve or function crosses the y-axis").</p>
<p>Further investigation yielded inconsistencies: the Wikipedia entry for "<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>" states that in the equation <span class="math-container">$y=mx+b$</span>, "<span class="math-container">$b$</span> is the y-intercept of the line"; the Wolfram MathWorld entry for "<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>" states that "The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span class="math-container">$y=mx+b$</span>.</p>
<hr />
<p><sup>Edit made on February 21, 2021</sup></p>
<p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p>
<blockquote>
<p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the
axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p>
</blockquote>
<p>Unfortunately, the book does not consistently use that definition.</p>
<blockquote>
<p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p>
</blockquote>
<p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p>
<hr />
<p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines "intercept" and consistently uses that definition?</p>
| svavil | 6,275 | <p><a href="https://matheducators.stackexchange.com/a/14387/6275">JoeTaxpayer's answer</a> says the distinction is subtle. To me, the distinction is non-existent.</p>
<p>I don't see any benefit in discerning two concepts that are:</p>
<ol>
<li>Closely related,</li>
<li>In trivial bijection with each other, and</li>
<li>Never used separately in any other context.</li>
</ol>
<p>In this case, the intercept value $y_0$ corresponds trivially to the intercept point $(0, y_0)$, and a large part of mathematical education is teaching students to recognize equivalent concepts and group them together.</p>
|
3,465,945 | <p>Prove that <span class="math-container">$\inf f(A) \leq f( \inf A)$</span> if <span class="math-container">$f: [-\infty, + \infty] \to \mathbb{R}$</span> is continuous and <span class="math-container">$A \neq \emptyset$</span> is a subset of <span class="math-container">$\mathbb{R}$</span>.</p>
<p>Attempt;</p>
<p>Put <span class="math-container">$a:= \inf A$</span>. Choose a sequence in <span class="math-container">$A$</span> such that <span class="math-container">$a_n \to a$</span>. Then</p>
<p><span class="math-container">$$ \inf f(A)\leq\lim_{n \to \infty} \underbrace{f(a_n)}_{\geq \inf f(A)} = f(a) = f( \inf A)$$</span></p>
<p>and we can conclude.</p>
<p>Is this correct?</p>
| user284331 | 284,331 | <p>Since the domain of <span class="math-container">$f$</span> is <span class="math-container">$[-\infty,\infty]$</span>, something is worth mentioning.</p>
<p>For a nonempty subset <span class="math-container">$A$</span> of <span class="math-container">$\mathbb{R}$</span>, <span class="math-container">$\inf A$</span> always exists in <span class="math-container">$[-\infty,\infty)$</span>, but it could be the case that <span class="math-container">$\inf A=-\infty$</span>. In that case, choose a sequence <span class="math-container">$(x_{n})\subseteq A$</span> with <span class="math-container">$x_{n}\rightarrow-\infty$</span>; since <span class="math-container">$f$</span> is continuous at <span class="math-container">$-\infty$</span>, we still have <span class="math-container">$f(x_{n})\rightarrow f(-\infty)$</span>.</p>
|
3,093,660 | <p>This is an introducory task from an exam. </p>
<p><strong>If</strong> <span class="math-container">$z = -2(\cos{5} - i\sin{5})$</span>, <strong>then what are:</strong></p>
<p><span class="math-container">$Re(z), Im(z), arg(z)$</span> and <span class="math-container">$ |z|$</span>?</p>
<p>First of all, how is it possible that the modulus is negative <span class="math-container">$|z|=-2$</span>? Or is the modulus actually <span class="math-container">$|z|= 2$</span> and the minus is kind of in front of everything, and that's why the sign inside of the brackets is changed as well? That would make some sense.</p>
<p>I assume <span class="math-container">$arg(z) = 5$</span>. How do I calculate <span class="math-container">$Re(z) $</span> and <span class="math-container">$Im(z)$</span>? Something like this should do the job?</p>
<p><span class="math-container">$$arg(z) = \frac{Re(z)}{|z|}$$</span></p>
<p><span class="math-container">$$5 = \frac{Re(z)}{2}$$</span></p>
<p><span class="math-container">$$10 = Re(z)$$</span></p>
<p>And analogically with <span class="math-container">$Im(z):$</span></p>
<p><span class="math-container">$$arg(z) = \frac{Im(z)}{|z|}$$</span></p>
<p><span class="math-container">$$5 = \frac{Im(z)}{2} \Rightarrow Im(z) = Re(z) = 10$$</span></p>
<p>I'm sure I'm confusing something here because, probably somewhere wrong <span class="math-container">$\pm$</span> signs.</p>
<p>Help's appreciated.</p>
<p><strong>And finally:</strong> is there some good calculator for complex numbers? Let's say I have a polar form and I want to find out the <span class="math-container">$Re(z), Im(z)$</span> and such. Wolframalpha seems like doesn't work fine for that.</p>
| Theo Bendit | 248,286 | <p>Currently, the number is not in polar form, as it should be in the form <span class="math-container">$r(\cos(\theta) + i \sin(\theta))$</span>, where <span class="math-container">$r \ge 0$</span>. Note the <span class="math-container">$+$</span> sign, and the non-negative number <span class="math-container">$r$</span> out the front. Every complex number, including the one given, has a polar form (in fact, infinitely many), and from this you can read off the modulus and argument. But, since this is not in polar form, you need to do some extra work.</p>
<p>First, try absorbing the minus sign into the brackets:</p>
<p><span class="math-container">$$2(-\cos 5 + i \sin 5).$$</span>
Then, recalling that <span class="math-container">$\sin(\pi - x) = \sin(x)$</span> and <span class="math-container">$\cos(\pi - x) = -\cos(x)$</span>, we get
<span class="math-container">$$2(\cos(\pi - 5) + i \sin(\pi - 5)).$$</span>
This is now in polar form. The modulus is <span class="math-container">$2$</span>, and one of the infinitely many arguments is <span class="math-container">$\pi - 5$</span>.</p>
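As a quick numerical sanity check (using Python's <code>cmath</code>; my own addition, not part of the original answer), one can confirm that the modulus is indeed <span class="math-container">$2$</span> and the principal argument is <span class="math-container">$\pi-5$</span>:

```python
import cmath
import math

# z = -2(cos 5 - i sin 5), written out as a complex number
z = complex(-2 * math.cos(5), 2 * math.sin(5))

modulus = abs(z)            # should be 2
argument = cmath.phase(z)   # principal argument, should equal pi - 5

assert abs(modulus - 2) < 1e-12
assert abs(argument - (math.pi - 5)) < 1e-12
```

Note that <span class="math-container">$\pi-5\approx-1.858$</span> already lies in <span class="math-container">$(-\pi,\pi]$</span>, so it is the principal argument.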
|
4,150,320 | <p>I need to prove <span class="math-container">$\displaystyle \lim _{x\to 2-} \left(\frac{|x-2|}{x^2-4}\right)=\frac{-1}{4}$</span></p>
<p>I know the definition: <span class="math-container">$\forall \varepsilon >0, \exists \delta >0$</span> such that if <span class="math-container">$0<2-x<\delta$</span> then <span class="math-container">$\left|\left(\dfrac{|x-2|}{x^2-4}\right)+\dfrac{1}{4}\right|<\varepsilon$</span></p>
<p>And I also know how to calculate a limit but I don't know how to prove that a limit is correct</p>
| Paul Sinclair | 258,282 | <p>The main thing here is that they've apparently chosen to prefer using <span class="math-container">$Q$</span> to using <span class="math-container">$Q_2$</span>, so they rewrite <span class="math-container">$Q_2 = Q - Q_1$</span>, and substitute for <span class="math-container">$Q_2$</span> in your calculation (FYI - <code>\cdots</code> will produce "<span class="math-container">$\cdots$</span>"):
<span class="math-container">$$Q_1 \left(p_{11}-p_{12}\right) =Q_2 \left(p_{22}-p_{12}\right) + Q_3 \left(p_{23}-p_{13}\right) +\cdots$$</span>
<span class="math-container">$$Q_1 \left(p_{11}-p_{12}\right) =Q\left(p_{22}-p_{12}\right) - Q_1\left(p_{22}-p_{12}\right) + Q_3 \left(p_{23}-p_{13}\right) +\cdots$$</span>
<span class="math-container">$$Q_1 \left([p_{11}-p_{12}] + [p_{22}-p_{12}] \right) =Q\left(p_{22}-p_{12}\right) + Q_3 \left(p_{23}-p_{13}\right) +\cdots$$</span>
<span class="math-container">$$Q_1 \left(p_{11}+p_{22}-2p_{12}\right) =Q\left(p_{22}-p_{12}\right) + Q_3 \left(p_{23}-p_{13}\right) +\cdots$$</span>
<span class="math-container">$$Q_1 = Q\dfrac{p_{22}-p_{12}}{p_{11}+p_{22}-2p_{12}} + Q_3 \dfrac{p_{23}-p_{13}}{p_{11}+p_{22}-2p_{12}} + Q_4 \dfrac{p_{24}-p_{14}}{p_{11}+p_{22}-2p_{12}}+\cdots$$</span></p>
<p>For the calculation of <span class="math-container">$V_3$</span>, they just substituted this expression for <span class="math-container">$Q_1$</span> into <span class="math-container">$$V_3 = \left( p_{31} - p_{32} \right) Q_{1} +p_{32} Q + p_{33} Q_{3} + \cdots$$</span>
So
<span class="math-container">$$V_3 = (p_{31}-p_{32})\left(Q\dfrac{(p_{22}-p_{12})}{p_{11}+p_{22}-2p_{12}} + Q_3 \dfrac{(p_{23}-p_{13})}{p_{11}+p_{22}-2p_{12}} + \cdots\right)\\+p_{32}Q + p_{33}Q_{3} + \cdots\\
= \left(Q\dfrac{(p_{22}-p_{12})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}}+ Q_3 \dfrac{(p_{23}-p_{13})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}} + \cdots\right)\\+p_{32}Q + p_{33}Q_{3} + \cdots\\
=Q\left(p_{32}+\dfrac{(p_{22}-p_{12})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}}\right) + Q_3\left(p_{33}+\dfrac{(p_{23}-p_{13})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}}\right) + \cdots$$</span></p>
<p>The final thing they did <em>only applies to the coefficient of <span class="math-container">$Q_3$</span></em>: Noting that <span class="math-container">$p_{ij} = p_{ji}$</span>, they had the bright idea of rewriting <span class="math-container">$$(p_{31} - p_{32}) = (p_{13} - p_{23}) = - (p_{23} - p_{13})$$</span>
so <span class="math-container">$$(p_{23}-p_{13})(p_{31}-p_{32}) = -(p_{23}-p_{13})^2$$</span> and
<span class="math-container">$$\left(p_{33}+\dfrac{(p_{23}-p_{13})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}}\right) = \left(p_{33}-\dfrac{(p_{23}-p_{13})^2}{p_{11}+p_{22}-2p_{12}}\right)$$</span>
But again, this only affects the <span class="math-container">$Q_3$</span> term. It is not in the <span class="math-container">$Q$</span> term, and it is not in later terms either. The <span class="math-container">$Q_4$</span> term is
<span class="math-container">$$Q_4\left(p_{34}+\dfrac{(p_{24}-p_{14})(p_{31}-p_{32})}{p_{11}+p_{22}-2p_{12}}\right)$$</span>
(And the very fact that they used this special handling for the last term they actually show, in a series you are supposed to extrapolate by pattern, indicates that their idea was far less bright than they thought.)</p>
|
672,736 | <p>Let $A = \begin{bmatrix}1&2&1\\0&1&0\\1&3&1\end{bmatrix}$. Find the eigenvalues of $A$.</p>
<p>I think I got a pretty steady ground on how I approached this, I just have some difficulty getting the right answer.</p>
<p>What I have done so far:</p>
<p>$P(\lambda) = det(A - \lambda I)$</p>
<p>$det\begin{bmatrix}1-\lambda&2&1\\0&1-\lambda&0\\1&3&1-\lambda\end{bmatrix} = 0$</p>
<p>$=(1-\lambda)(1-\lambda)^2 - 2(0) + 1(1-\lambda) = 0$</p>
<p>$= (1- \lambda) ^3 +(1-\lambda) = 0$</p>
<p>But I'm not getting the right eigenvalues. The above answer gives me the eigenvalue: 1 only.</p>
<p>but the right answer is: 2, 1, 0.</p>
| copper.hat | 27,978 | <p>A trial & error approach:</p>
<p>Note that $A(e_1+e_2) = 2(e_1+e_2)$, and $A(e_1-e_2) = 0$. Note also that $A A^T \neq A^T A$, hence $A$ is not normal and cannot be orthogonally diagonalized (so I cannot just look for a vector normal to the other two). </p>
<p>Try using the basis (or rather the inverse) $P= \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \end{bmatrix}$, to get $P^{-1} A P = \begin{bmatrix} 2 & 0 & {5 \over 2} \\
0 & 0 & -{1 \over 2} \\ 0 & 0 & 1 \end{bmatrix}$, from which we can read off the eigenvalues.</p>
<p>(Note that even though I did above, we do not need to explicitly compute the inverse, we need only solve $Pv_3 = A P e_3 = A e_2$ for the last row of $v_3$.)</p>
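For what it's worth, a short brute-force check (my own sketch, in plain Python with no libraries assumed) confirms the eigenvalues $0, 1, 2$ by evaluating the characteristic polynomial at small integers:

```python
# Evaluate det(A - lambda*I) at small integers and read off the roots.
A = [[1, 2, 1], [0, 1, 0], [1, 3, 1]]

def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(lam):
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

roots = [lam for lam in range(-5, 6) if char_poly(lam) == 0]
assert roots == [0, 1, 2]
```

Since the characteristic polynomial is a cubic, three roots is all of them.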
|
109,569 | <p>Let's say I have a complex valued matrix $\begin{pmatrix}1+I&2+2I&3+3I\\4+4I&5+5I&6+6I\end{pmatrix}$ represented by a list:</p>
<pre><code> list = {{1 + I, 2 + 2 I, 3 + 3 I}, {4 + 4 I, 5 + 5 I, 6 + 6 I}}
</code></pre>
<p>I know how to plot each point of the matrix on the complex plane:</p>
<pre><code>Mlist = Table[Table[{Re[list[[i,j]]], Im[list[[i,j]]]}, {i,1,2}], {j,1,3}];
ListPlot[Mlist, PlotRange -> All]
</code></pre>
<p>In my case, I have 1000 rows, and I would like 2 things:</p>
<ul>
<li><p>Each point on a given row has the same color</p></li>
<li><p>The color of each row vary regularly along the number of row.</p></li>
</ul>
<p>I have no idea how to handle this. Any suggestion?</p>
| Leonid Shifrin | 81 | <p>While the concerns about performance degradation may in many cases be unwarranted, here is a way that avoids double traversal:</p>
<pre><code>exp /. b[a_]*c_f :> With[{res = c /. d[a] :> e}, res /; res =!= c]
</code></pre>
<p>This uses the semantics of local variables shared between the body and the condition of the rule, and assumes that replacements change the original expression <code>c</code>. If the condition does not hold, the whole rule isn't considered matched, so is not applied.</p>
|
4,141,378 | <p>In this equation quasi-linear eq <span class="math-container">$\Big\{\exp(f(x,y))\dfrac{\partial f(x,y)}{\partial x} + \dfrac{y}{x} \dfrac{\partial f(x,y)}{\partial y} = 1\Big\}$</span> how <span class="math-container">$f$</span> changes based on <span class="math-container">$x$</span> and <span class="math-container">$y$</span> analytically? Thanks for your advice.</p>
| DanielWainfleet | 254,665 | <p>A topological space is a pair <span class="math-container">$(X,T)$</span> where <span class="math-container">$T$</span> is a collection of some or all of the subsets of X such that</p>
<p>(i). <span class="math-container">$\emptyset\in T$</span> and <span class="math-container">$X\in T,$</span></p>
<p>(ii). If <span class="math-container">$S\subset T$</span> then <span class="math-container">$\bigcup S\in T,$</span></p>
<p>(iii). If <span class="math-container">$S\subset T$</span> and if <span class="math-container">$S$</span> is <em>finite</em> then <span class="math-container">$\bigcap S\in T.$</span></p>
<p><span class="math-container">$T$</span> is called a topology on <span class="math-container">$X.$</span> Members of <span class="math-container">$T$</span> are called open sets. If <span class="math-container">$X$</span> has more than one member, then whether a subset of <span class="math-container">$X$</span> is open or not depends on which <span class="math-container">$T$</span> we are considering.</p>
<p>This is very general. Nothing in the definition requires that anything, other than <span class="math-container">$\emptyset$</span> and <span class="math-container">$X,$</span> belongs to <span class="math-container">$T.$</span> Nothing in the def'n prevents <span class="math-container">$all$</span> subsets of <span class="math-container">$X$</span> from belonging to <span class="math-container">$T.$</span></p>
<p>It is very common to refer to "the space <span class="math-container">$X$</span>" without specifying <span class="math-container">$T.$</span> When <span class="math-container">$n\in \Bbb Z^+$</span> and <span class="math-container">$X=\Bbb R^n,$</span> it is common to assume that <span class="math-container">$T$</span> is the "standard" ("usual") topology.</p>
<p>For <span class="math-container">$x\in X,$</span> and for a given <span class="math-container">$T,$</span> a neighborhood (nbhd) of <span class="math-container">$x$</span> is any set <span class="math-container">$V\subseteq X$</span> such that there exists <span class="math-container">$U\in T$</span> with <span class="math-container">$x\in U\subseteq V.$</span></p>
|
1,700,689 | <p>Let $A, B$, and $C$ be sets. If $A\backslash B$ is a subset of $C$, then $A\backslash C$ is a subset of $B$. Is this a direct proof where I let $x$ be an element of $A$ and then work from there? I can't seem to figure out all of the cases. Thanks for help in advance.</p>
| Community | -1 | <p>You can do it in a direct proof style (let $x$ be...), but you may also prove it by manipulating only sets:
We have $A\setminus B = A \cap B^c \subset C\implies C^c \subset A^c \cup B \implies C^c \cap A \subset A \cap(A^c \cup B)=(A\cap A^c)\cup (A\cap B)=\emptyset \cup (A\cap B)=A\cap B \subset B$, so we finally have $A\setminus C \subset B$. (Try to understand every step taken here; it may give some intuition about these types of proofs.) </p>
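The implication can also be confirmed by brute force over a small universe (a sketch I'm adding for illustration; three elements suffice, since the statement only involves element-wise membership):

```python
from itertools import chain, combinations

U = [0, 1, 2]
subsets = [set(s) for s in chain.from_iterable(combinations(U, r)
                                               for r in range(len(U) + 1))]

# every triple (A, B, C) with A \ B subset of C must satisfy A \ C subset of B
violations = [(A, B, C) for A in subsets for B in subsets for C in subsets
              if A - B <= C and not (A - C <= B)]
assert violations == []
```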
|
272,846 | <p>Suppose I have a List of numbers:</p>
<pre><code>num = Range[5]
</code></pre>
<p>I want to combine the second and the third element into a sublist to get the result as {1,{2,3},4,5}.<br />
I tried using this:</p>
<pre><code>MapAt[List, num, {{2}, {3}}]
</code></pre>
<p>which is not giving me the desired result. What changes are needed to be made?<br />
Can the same changes be applied to this code:</p>
<pre><code>music = SoundNote["CSharp", 0.1, 0.2, "Violin"]
</code></pre>
<p>to get the result as SoundNote[CSharp,{0.1,0.2},Violin]?</p>
| Daniel Huber | 46,318 | <p>Another possibility;</p>
<pre><code>num = Range[5];
num[[2 ;; 3]] = {num[[2 ;; 3]], Hold@Nothing[]};
num = num // ReleaseHold
(* {1, {2, 3}, 4, 5} *)
</code></pre>
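For comparison (my own addition, outside Mathematica), plain Python can express the same regrouping directly with slice assignment:

```python
num = list(range(1, 6))       # [1, 2, 3, 4, 5]
num[1:3] = [num[1:3]]         # replace the slice 2, 3 by one nested list
assert num == [1, [2, 3], 4, 5]
```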
|
272,846 | <p>Suppose I have a List of numbers:</p>
<pre><code>num = Range[5]
</code></pre>
<p>I want to combine the second and the third element into a sublist to get the result as {1,{2,3},4,5}.<br />
I tried using this:</p>
<pre><code>MapAt[List, num, {{2}, {3}}]
</code></pre>
<p>which is not giving me the desired result. What changes are needed to be made?<br />
Can the same changes be applied to this code:</p>
<pre><code>music = SoundNote["CSharp", 0.1, 0.2, "Violin"]
</code></pre>
<p>to get the result as SoundNote[CSharp,{0.1,0.2},Violin]?</p>
| lericr | 84,894 | <p>For the update with SoundNote, I recommend that you simply write your own "fixing" function:</p>
<pre><code>FixSoundNote[SoundNote[pitch_, start_, end_, style_]] :=
SoundNote[pitch, {start, end}, style]
</code></pre>
<p>Usage:</p>
<pre><code>badMusic = SoundNote["CSharp", 0.1, 0.2, "Violin"];
goodMusic = FixSoundNote[badMusic];
Sound[goodMusic]
</code></pre>
<p>You can add other "fix" rules as you discover other malformations that have resulted from the pre-processing.</p>
<p>Having said all of that, it might be better to fix the pre-processing if that's something that you have under your control.</p>
|
3,014,453 | <p>If there is a number somewhere between 0 and 100 and you have to find it with the least attempts possible. Every attempt consists of you checking if the number is smaller (or bigger) than a number in the said interval (0 to 100). My guess would be you start with the half way point.</p>
<p>Is it smaller than 50? yes ---> is it smaller than 25? no ---> is it smaller than 37.5? yes ---> ...etc </p>
<p>If this is indeed the faster method, what would be the formula that expresses it? If this isn't the fastest method, what is it and how is it expressed mathematically and verbally? Thanks.</p>
| Bram28 | 256,001 | <p>Using the <em>exact</em> halfway point is <em>not</em> the fastest method. For example, suppose the number is <span class="math-container">$98$</span>. Then you get:</p>
<p><span class="math-container">$50 \rightarrow 75 \rightarrow 87.5 \rightarrow 93.75 \rightarrow 96.875 \rightarrow 98.4375 \rightarrow 97.65625$</span></p>
<p>... and only now you know it is <span class="math-container">$98$</span></p>
<p>However, if you use whole numbers, (and let's assume we round down)then you are ruling out those very numbers when it is not that number. So again, if the number is <span class="math-container">$98$</span>:</p>
<p><span class="math-container">$50 \rightarrow 75 \rightarrow 87 \rightarrow 93 \rightarrow 96 \rightarrow 98$</span></p>
<p>and now you got it in <span class="math-container">$6$</span> ... and if the number was <span class="math-container">$99$</span>, you'd know it after these <span class="math-container">$6$</span> steps as well. </p>
<p>In fact, with this whole number method, it will be true that you will know the number after at most <span class="math-container">$\lfloor log_2 99 \rfloor = 6$</span> steps.</p>
<p>Here is a proof by induction (on <span class="math-container">$m$</span>) as to why: The general claim is that if you have a choice of <span class="math-container">$2^m \le n < 2^{m+1} $</span> numbers, it will take at most <span class="math-container">$\lfloor log_2 n \rfloor = m$</span> steps to figure out the number.</p>
<p>Base: <span class="math-container">$m=0$</span> (<span class="math-container">$n=1$</span>)</p>
<p>If there is only <span class="math-container">$1$</span> number, you immediately know what that number is, so that takes <span class="math-container">$0$</span> steps. And indeed, <span class="math-container">$\lfloor log_2 1 \rfloor = 0$</span></p>
<p>Step:</p>
<p>Suppose you have <span class="math-container">$2^{m+1} \le n < 2^{(m+1)+1} $</span> numbers left. You pick the halfway number, rounding down if necessary. In the worst case scenario (which is where <span class="math-container">$n$</span> is even, and you rounded down for your pick), you are left with exactly <span class="math-container">$\frac{n}{2}$</span> numbers. Now, given that <span class="math-container">$2^{m+1} \le n < 2^{(m+1)+1} $</span>, we have that <span class="math-container">$2^m \le \frac{n}{2} < 2^{m+1}$</span> and thus, by inductive hypothesis, it takes at most <span class="math-container">$m$</span> more steps to figure out the number from this point on. Given that you just took a guess, that means that it takes at most <span class="math-container">$m+1$</span> steps to figure out the number, which is what we need to show.</p>
<p>OK, so we have proven that with a choice of <span class="math-container">$2^m \le n < 2^{m+1} $</span> numbers, it will take at most <span class="math-container">$\lfloor log_2 n \rfloor = m$</span> steps to figure out the number. So, in your case, it takes indeed at most <span class="math-container">$6$</span> steps to figure out the number. Since your method of picking the exact halfway point with rounding to a whole number takes <span class="math-container">$7$</span> steps in some cases, your method is not the best.</p>
<p>OK, but how do we know that there is not a better method yet, other than using this method of picking the halfway point, and rounding to a whole number? Well, any other method would at some point have to deviate from this method, and thus at some point it would either not pick a whole number (like your original method), or pick a number that would split the leftover numbers in two piles, with one pile at least <span class="math-container">$2$</span> larger than the other. But in either case, that means that in the worst case scenario you end up with a pile of numbers that is at least as big as the pile you end up with using the above method. </p>
<p>But if you have more numbers to choose from, it can, in the worst case scenario, never take less steps to figure out the number than when you have fewer numbers, because it that were so, then you could of course figure out the number with the smaller pile in just as few steps by simply adding some extra numbers and making the pile just as big. </p>
<p>So, given that with some alternative method and in the worst case scenario you always get a pile with at least as many numbers as with the above method, this alternative method cannot take any fewer steps in the worst case scenario. Hence, there is no algorithm that has a worst case scenario that has better performance than the above method... meaning that the minimum number of moves in the worst case scenario is indeed <span class="math-container">$6$</span>.</p>
|
2,890,625 | <p>Suppose $f(x)$ is differentiable on $[0,1]$, and $f(0)=0$, $f(x)\ne 0,\forall x\in(0,1)$ , Prove for every $n,m\in\mathbb{N^+}$, there exists $\xi=\xi_{n,m}\in(0,1)$ such that
$$n\cdot\frac{f'(\xi)}{f(\xi)}=m\cdot\frac{f'(1-\xi)}{f(1-\xi)}$$</p>
| Theo Bendit | 248,286 | <p>Without loss of generality, we may assume $f(x) > 0$ for all $x$, since it cannot change sign without violating the intermediate value theorem.</p>
<p>Let $h(x) = \ln(f(x)) + \frac{m}{n} \ln(f(1 - x))$, defined over $(0, 1)$. Then,
$$h'(x) = \frac{f'(x)}{f(x)} - \frac{m}{n} \frac{f'(1 - x)}{f(1 - x)}$$
so $h'(x) = 0$ if and only if $x$ is a suitable choice of $\xi$. Since $\lim_{x \to 0^+} f(x) = 0$ and $\lim_{x \to 1^-} f(x) = f(1) > 0$, we have that
$$\lim_{x \to 0^+} h(x) = \lim_{x \to 1^-} h(x) = -\infty.$$
It follows therefore that $h$ achieves a maximum somewhere in $(0, 1)$. This point will be the $\xi$ you're looking for.</p>
|
871,412 | <p>$$I=\int_a^b \sin(\alpha-\beta x^2)\cos(x)\, dx.$$</p>
<p>Can anybody tell me how to solve this integral?
I know that it is related to the <a href="http://www.it.uom.gr/teaching/linearalgebra/NumericalRecipiesInC/c6-9.pdf" rel="nofollow">Fresnel integral</a> if the $\cos(x)$ term is absent. </p>
| user71352 | 71,352 | <p>If $\beta>0$</p>
<p>$c+d=2\alpha-2\beta x^{2}$</p>
<p>$c-d=2x$</p>
<p>so $c=\alpha+x-\beta x^{2}=(\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}-\sqrt{\beta}x)^{2}$ and $d=\alpha-x-\beta x^{2}=(\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}+\sqrt{\beta}x)^{2}$</p>
<p>then using that $\sin(c)+\sin(d)=2\sin(\frac{1}{2}(c+d))\cos(\frac{1}{2}(c-d))$ we have:</p>
<p>$\int_{a}^{b}\sin(\alpha-\beta x^{2})\cos(x)dx$</p>
<p>$=\frac{1}{2}\int_{a}^{b}\sin((\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}-\sqrt{\beta}x)^{2})dx+\frac{1}{2}\int_{a}^{b}\sin((\alpha+\frac{1}{4\beta})-(\frac{1}{2\sqrt{\beta}}+\sqrt{\beta} x)^{2})dx$</p>
<p>Notice that $\int_{s}^{t}\sin(c-u^{2})du=\int_{s}^{t}\sin(c)\cos(u^{2})du-\int_{s}^{t}\cos(c)\sin(u^{2})du$</p>
<p>$=\sin(c)(C(t)-C(s))-\cos(c)(S(t)-S(s))$</p>
<p>where $S(x)=\int_{0}^{x}\sin(p^{2})dp$ and $C(x)=\int_{0}^{x}\cos(p^{2})dp$</p>
<p>A similar procedure can be done for $\beta<0$ since $\sin(-x)=-\sin(x)$. If $\beta=0$ then this is just $\sin(\alpha)\int_{a}^{b}\cos(x)dx$.</p>
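The product-to-sum step can be sanity-checked numerically (a sketch with arbitrary sample values $\alpha=1$, $\beta=2$; my own addition): the integrand equals the half-sum of $\sin(c)$ and $\sin(d)$ pointwise, so the split of the integral is exact:

```python
import math

alpha, beta = 1.0, 2.0          # sample values with beta > 0
s = math.sqrt(beta)

def c(x):
    return (alpha + 1 / (4 * beta)) - (1 / (2 * s) - s * x) ** 2

def d(x):
    return (alpha + 1 / (4 * beta)) - (1 / (2 * s) + s * x) ** 2

for i in range(101):
    x = i / 50 - 1.0             # sample points in [-1, 1]
    lhs = math.sin(alpha - beta * x**2) * math.cos(x)
    rhs = 0.5 * math.sin(c(x)) + 0.5 * math.sin(d(x))
    assert abs(lhs - rhs) < 1e-12
```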
|
1,864,604 | <p>What's the difference between $f(x)=f(a-x)$ and $f(x)=f(x-a)$ ?</p>
<p>It's a pretty simple question maybe, but I'm unable to understand this one. </p>
| Martin Kochanski | 340,970 | <p>They mean two different things, and without knowing <strong>what</strong> it is that you don't understand, it's hard to know how to explain.</p>
<p>The way to understand it is to abandon algebra. Put $a=5$ and try different values of $x$: 0, 1, 2, 3, 4, 5 and so on.</p>
<p>You will find that:</p>
<ul>
<li><p>$f(x)=f(a-x)$ means what it says. For example, that $f(0)=f(a)$ and $f(-1)=f(a+1)$. To summarize: $f$ is mirror-symmetrical about the value $x=\frac12{a}$.</p></li>
<li><p>$f(x)=f(x-a)$ means what it says. For example, that $f(a)=f(0)=f(-a)$. To summarise: $f$ is periodic with period $a$.</p></li>
</ul>
<p><strong>Note for pedants</strong>: $f(x)=f(x-a)$ actually means that $f$ is periodic with period $|a|$ (that is, $a$ if $a$ is positive, $-a$ if $a$ is negative), and if $a=0$ it means $f(x)=f(x)$, which is always true but very uninteresting. </p>
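A tiny numerical illustration of the two behaviours (my own, with the example functions $\cos(2\pi x/a)$ and $(x-a/2)^2$ and $a=5$, not from the answer): the first satisfies the periodicity relation, the second the mirror relation:

```python
import math

a = 5.0

def periodic(x):                 # satisfies f(x) = f(x - a): period a
    return math.cos(2 * math.pi * x / a)

def mirrored(x):                 # satisfies f(x) = f(a - x): symmetric about a/2
    return (x - a / 2) ** 2

for x in [-3.0, 0.0, 1.25, 2.5, 7.0]:
    assert abs(periodic(x) - periodic(x - a)) < 1e-9
    assert abs(mirrored(x) - mirrored(a - x)) < 1e-9
```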
|