| qid | question | author | author_id | answer |
|---|---|---|---|---|
142,819 | <p>I am currently studying Serge Lang's book "Algebra". On page 25 it is proved that if $G$ is a cyclic group of order $n$, and if $d$ is a divisor of $n$, then there exists a unique subgroup $H$ of $G$ of order $d$.</p>
<p>I have trouble seeing why the proof (as explained below) settles the uniqueness part.</p>
<p>The proof (as I understand it) goes as follows: </p>
<p>First we show existence of the subgroup $H$, given any choice of a divisor $d$ of $n$. </p>
<p>So suppose $n = dm$. Obviously, one can construct a surjective homomorphism $f : \mathbb{Z} \to G$, and it is also clear that $f(m\mathbb{Z}) \subset G$ is a subgroup of $G$. The resulting isomorphism $\mathbb{Z}/m\mathbb{Z} \cong G/f(m\mathbb{Z})$ leads us to conclude that the index of $f(m\mathbb{Z})$ in $G$ is $m$ and so the order of $f(m\mathbb{Z})$ must be $d$.</p>
<p>Ok, so we have shown that a subgroup having order $d$ exists.</p>
<p>The second part is then to show uniqueness - and here is where I am lost as I don't understand why the following argument serves this end:</p>
<p>Suppose $H$ is any subgroup of order $d$. Looking at the inverse image $f^{-1}(H)$ in $\mathbb{Z}$, we know it must be of the form $k\mathbb{Z}$ for some positive integer $k$ (since all non-trivial subgroups of $\mathbb{Z}$ can be written in this form). Now $H = f(k\mathbb{Z})$ has order $d$, and $\mathbb{Z}/k\mathbb{Z} \cong G/H$, where the group on the right-hand side has order $n/d = m$. From this isomorphism we can therefore conclude that $k = m$. Here Lang ends by saying "... and H is uniquely determined".</p>
<p>But why is this? Does he mean uniquely determined up to isomorphism? Because what I think I have shown is that any subgroup of order $d$ must be isomorphic to $m\mathbb{Z}$ - yet this gives me uniqueness only up to isomorphism. What am I missing?</p>
<p>Thanks for your help! </p>
| Brian M. Scott | 12,042 | <p>What you're missing is that the homomorphism $f$ is fixed. <strong>Every</strong> subgroup $H$ of $G$ of order $d$ <strong>is</strong> the group $f[m\Bbb Z]$, so they're all the same subgroup of $G$.</p>
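<p>For a concrete sanity check of the statement being proved (my own illustration, not part of Lang's argument or this answer), one can enumerate subgroups of $\mathbb{Z}/n\mathbb{Z}$ directly; every subgroup of a cyclic group is cyclic, so listing the subgroups generated by single elements finds all of them:</p>

```python
def cyclic_subgroup(n, g):
    # subgroup of Z/nZ (written additively) generated by the element g
    h, x = {0}, g % n
    while x not in h:
        h.add(x)
        x = (x + g) % n
    return frozenset(h)

def subgroups_of_order(n, d):
    # all subgroups of Z/nZ of order d (every subgroup of a cyclic
    # group is cyclic, so enumerating single generators finds them all)
    subs = {cyclic_subgroup(n, g) for g in range(n)}
    return {s for s in subs if len(s) == d}

# for each divisor d of 12 there is exactly one subgroup of order d
for d in (1, 2, 3, 4, 6, 12):
    print(d, len(subgroups_of_order(12, d)))  # always 1
```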
|
4,504,768 | <p>We have to prove that
<span class="math-container">$$
\lim_{(x,y)\to(0,0)} \frac{x^3+y^4}{x^2+y^2} =0
$$</span></p>
<p>Kindly check if my proof below is correct.</p>
<p><b>Proof</b></p>
<p>We need to show there exists <span class="math-container">$\delta>0$</span> for an <span class="math-container">$\varepsilon>0$</span> such that
<span class="math-container">$$
\left| \frac{x^3+y^4}{x^2+y^2} \right| < \varepsilon \implies \sqrt{x^2+y^2}< \delta
$$</span></p>
<p>Start
<span class="math-container">$$
\left| \frac{x^3}{x^2+y^2} \right| <\left| \frac{x^3+y^4}{x^2+y^2} \right| < \varepsilon
$$</span></p>
<p>Note
<span class="math-container">$$
\left| \frac{x^3}{x^2+y^2} \right|= \frac{x^2|x|}{x^2+y^2}>\frac{(x^2-y^2)|x|}{x^2+y^2}
$$</span></p>
<p>Therefore
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<\varepsilon \tag{1}
$$</span></p>
<p>Note
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<|x|=\sqrt{x^2}<\sqrt{x^2+y^2}<\delta
$$</span></p>
<p>So
<span class="math-container">$$
\frac{x^2-y^2}{x^2+y^2}|x|<\delta \tag{2}
$$</span></p>
<p>From <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, we can say
<span class="math-container">$$
\delta=\varepsilon
$$</span></p>
| David A. Craven | 804,921 | <p>(Here I prove that the intersection of two subgroups of a symmetric group that are generated by transpositions is itself generated by transpositions.)</p>
<p>Let <span class="math-container">$G$</span> be a subgroup of <span class="math-container">$S_n$</span> generated by transpositions. We claim that <span class="math-container">$G$</span> is a product of the symmetric groups on the orbits of <span class="math-container">$G$</span>. It suffices to do this in the case where <span class="math-container">$G$</span> is transitive, and here we may assume that <span class="math-container">$(1,2)$</span> lies in <span class="math-container">$G$</span>. Let <span class="math-container">$\sim$</span> be the relation given by <span class="math-container">$i\sim j$</span> if and only if <span class="math-container">$(i,j)$</span> lies in <span class="math-container">$G$</span>, including the case <span class="math-container">$(i,i)=1$</span>. Then <span class="math-container">$\sim$</span> is reflexive and symmetric, and we claim it is transitive. But if <span class="math-container">$(i,j)$</span> and <span class="math-container">$(j,k)$</span> are in <span class="math-container">$G$</span>, they generate the subgroup <span class="math-container">$\mathrm{Sym}(\{i,j,k\})$</span> of <span class="math-container">$G$</span>, which contains <span class="math-container">$(i,k)$</span>. Hence <span class="math-container">$\sim$</span> is an equivalence relation. But then <span class="math-container">$G$</span> contains all transpositions of <span class="math-container">$S_n$</span>, so is <span class="math-container">$S_n$</span>.</p>
<p>Thus <span class="math-container">$G$</span> is simply a direct product of the symmetric groups on the orbits of <span class="math-container">$G$</span>, and thus we can easily understand the intersection of any two such subgroups. <span class="math-container">$i$</span> and <span class="math-container">$j$</span> lie in the same orbit of <span class="math-container">$G\cap H$</span> if and only if they lie in the same orbit of <span class="math-container">$G$</span> and of <span class="math-container">$H$</span>, and <span class="math-container">$G\cap H$</span> is the product of symmetric groups on those orbits (since <span class="math-container">$(i,j)$</span> lies in <span class="math-container">$G\cap H$</span>).</p>
<p>The general case, even where <span class="math-container">$T$</span> consists of the set of elements of order <span class="math-container">$2$</span>, is false, and counterexamples are numerous. One is given in the other answer to the question.</p>
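<p>For a small brute-force check of the orbit claim (my own sketch, not part of the answer): generate the subgroup from a few transpositions and compare its order with the product of factorials of the orbit sizes.</p>

```python
from math import factorial

def transposition(n, i, j):
    # the transposition (i j) as a permutation of range(n)
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def generated_group(n, gens):
    # brute-force closure: repeatedly right-multiply by the generators
    group, frontier = {tuple(range(n))}, [tuple(range(n))]
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = tuple(p[g[i]] for i in range(n))
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group

# generators (0 1), (1 2), (3 4) on {0,...,5}: orbits {0,1,2}, {3,4}, {5}
n = 6
gens = [transposition(n, 0, 1), transposition(n, 1, 2), transposition(n, 3, 4)]
G = generated_group(n, gens)
print(len(G))  # 3! * 2! * 1! = 12, as the answer predicts
```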
|
637,897 | <p>I have a 2-Sphere with a finite number $k$ of points removed (at least 3), and I want to equip it with a Riemannian metric of constant negative curvature. </p>
<p>My first thought was to take a free subgroup $A_k$ of rank $k$ of the isometries of the hyperbolic plane $H$, which acts fix point free and discontinuous on $H$ (such a group exists).</p>
<p>If I take the quotient $H/A_k$, I end up with a complete manifold of constant negative curvature, with isomorphic fundamental group to the punctured Sphere, but are those two manifolds homeomorphic? Maybe this is a terribly clumsy way to try to accomplish this task.</p>
| Giulio Belletti | 99,588 | <p>That's not necessarily the case.</p>
<p>Take the sphere with 3 punctures: it has fundamental group free on two generators, right?</p>
<p>Now take the torus with one puncture (which does indeed have a hyperbolic structure). You can easily see that this has the same fundamental group.
So, to answer your question you probably need to know which group you are using (simply because we have just seen that it depends on that).</p>
<p>To endow the thrice punctured sphere with a hyperbolic metric, you should start with two identical copies of an ideal triangle in $\mathbb{H}^2$, and glue them together. If you are careful you can check that this gives you a hyperbolic surface homeomorphic to $S^2-\{p_1,p_2,p_3\}$.
Do you see how you could generalize this to, say, a 4-punctured sphere?</p>
<p>For another approach, non explicit:
A thrice punctured sphere $S$ can be embedded (continuously) in $\mathbb{C}$, as $\mathbb{C}$ with two holes. So, $S$ has a Riemann surface structure. By the uniformization theorem, it is holomorphically covered by $\mathbb{H}$, the upper half plane (since we removed at least 2 points). This allows you to put a hyperbolic metric on $S$. For details and references, see the Wikipedia article: <a href="http://en.wikipedia.org/wiki/Uniformization_theorem#Geometric_classification_of_surfaces" rel="nofollow">http://en.wikipedia.org/wiki/Uniformization_theorem#Geometric_classification_of_surfaces</a></p>
|
276,310 | <p>I'm trying to eliminate variables in some fairly simple sets of equations. A typical example is: </p>
<p>$$ 9 x^2 + 18 xy + 9 y^2 - 32 = 256z$$
$$ 9 x^2 + 6 xy - 3 y^2 - 8 = 376z$$
$$ 9 x^2 - 6 xy + y^2 = 512z$$</p>
<p>I'd like to eliminate $x$ and $y$. Mathematica tells me that the answer is $ 161 z^2 -162z + 1 = 0 $. OK. Good.</p>
<p><strong>But how would I do this elimination manually (without Mathematica). I'm hoping that there is some fairly simple mechanical process that I can express in a few hundred lines of C code.</strong></p>
<p>I realize that general elimination procedures are pretty complex, but this sort of problem looks quite special and therefore easier (I hope). Roughly speaking, it's just a system of "linear" equations in the variables $x^2$, $xy$, $y^2$, and $z$. Is there some sort of diagonalization process that can be applied, for example ?</p>
<p>Edit:
From the proposed answer below, I see that things can be simplified by setting $p=3x+3y$ and $q = 3x-y$. Then the equations become:</p>
<p>$$p^2 = 32 + 256z$$
$$pq = 8 + 376z$$
$$q^2 = 512z$$
I can then eliminate $p$ and $q$. Is there always a linear transformation that simplifies the problem in this way? If there is, how do I compute it?</p>
| Bojan Serafimov | 56,860 | <p>Factorize the left-hand sides to separate $x$ and $y$; the rest is easy.</p>
|
276,310 | <p>I'm trying to eliminate variables in some fairly simple sets of equations. A typical example is: </p>
<p>$$ 9 x^2 + 18 xy + 9 y^2 - 32 = 256z$$
$$ 9 x^2 + 6 xy - 3 y^2 - 8 = 376z$$
$$ 9 x^2 - 6 xy + y^2 = 512z$$</p>
<p>I'd like to eliminate $x$ and $y$. Mathematica tells me that the answer is $ 161 z^2 -162z + 1 = 0 $. OK. Good.</p>
<p><strong>But how would I do this elimination manually (without Mathematica). I'm hoping that there is some fairly simple mechanical process that I can express in a few hundred lines of C code.</strong></p>
<p>I realize that general elimination procedures are pretty complex, but this sort of problem looks quite special and therefore easier (I hope). Roughly speaking, it's just a system of "linear" equations in the variables $x^2$, $xy$, $y^2$, and $z$. Is there some sort of diagonalization process that can be applied, for example ?</p>
<p>Edit:
From the proposed answer below, I see that things can be simplified by setting $p=3x+3y$ and $q = 3x-y$. Then the equations become:</p>
<p>$$p^2 = 32 + 256z$$
$$pq = 8 + 376z$$
$$q^2 = 512z$$
I can then eliminate $p$ and $q$. Is there always a linear transformation that simplifies the problem in this way? If there is, how do I compute it?</p>
| Mark Bennet | 2,906 | <p>You can use Gaussian elimination (or other equivalent methods) to find expressions for $x^2$, $xy$ and $y^2$ in terms of $z$, treating them as independent variables, and substitute these to get an equation for $z$. Then you still have to solve for $x$ and $y$. </p>
<p>There is an easier way in this case, but in a general case this makes progress.</p>
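<p>Here is the whole procedure from this answer in a few lines of Python (my own sketch; exact rational arithmetic, with $u=x^2$, $v=xy$, $w=y^2$ treated as unknowns that are affine in $z$, and the consistency condition $uw=v^2$ supplying the equation for $z$):</p>

```python
from fractions import Fraction as F

# Treat u = x^2, v = x*y, w = y^2 as independent unknowns; each is then
# an affine function of z, so solving the 3x3 system exactly at z = 0
# and z = 1 determines u(z), v(z), w(z) completely.
A = [[9, 18, 9], [9, 6, -3], [9, -6, 1]]

def rhs(z):
    return [256 * z + 32, 376 * z + 8, 512 * z]

def solve3(A, b):
    # Gauss-Jordan elimination in exact rational arithmetic
    M = [[F(A[i][j]) for j in range(3)] + [F(b[i])] for i in range(3)]
    for c in range(3):
        p = next(r for r in range(c, 3) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

s0, s1 = solve3(A, rhs(F(0))), solve3(A, rhs(F(1)))
(u0, u1), (v0, v1), (w0, w1) = [(s0[i], s1[i] - s0[i]) for i in range(3)]

# x^2 * y^2 = (x*y)^2 forces u(z)*w(z) - v(z)^2 = 0, a quadratic in z:
c2 = u1 * w1 - v1 * v1
c1 = u0 * w1 + u1 * w0 - 2 * v0 * v1
c0 = u0 * w0 - v0 * v0
print(c2 / c0, c1 / c0, 1)  # 161, -162, 1  (up to the overall factor c0)
```

<p>The consistency condition is where Mathematica's quadratic $161z^2-162z+1=0$ comes from; everything before it is ordinary linear algebra.</p>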
|
276,310 | <p>I'm trying to eliminate variables in some fairly simple sets of equations. A typical example is: </p>
<p>$$ 9 x^2 + 18 xy + 9 y^2 - 32 = 256z$$
$$ 9 x^2 + 6 xy - 3 y^2 - 8 = 376z$$
$$ 9 x^2 - 6 xy + y^2 = 512z$$</p>
<p>I'd like to eliminate $x$ and $y$. Mathematica tells me that the answer is $ 161 z^2 -162z + 1 = 0 $. OK. Good.</p>
<p><strong>But how would I do this elimination manually (without Mathematica). I'm hoping that there is some fairly simple mechanical process that I can express in a few hundred lines of C code.</strong></p>
<p>I realize that general elimination procedures are pretty complex, but this sort of problem looks quite special and therefore easier (I hope). Roughly speaking, it's just a system of "linear" equations in the variables $x^2$, $xy$, $y^2$, and $z$. Is there some sort of diagonalization process that can be applied, for example ?</p>
<p>Edit:
From the proposed answer below, I see that things can be simplified by setting $p=3x+3y$ and $q = 3x-y$. Then the equations become:</p>
<p>$$p^2 = 32 + 256z$$
$$pq = 8 + 376z$$
$$q^2 = 512z$$
I can then eliminate $p$ and $q$. Is there always a linear transformation that simplifies the problem in this way? If there is, how do I compute it?</p>
| DonAntonio | 31,254 | <p>Mark's idea with the unknowns $\,x^2\,,\,xy\,,\,y^2\,$ , using the augmented matrix:</p>
<p>$$\begin{pmatrix} 9&18&9&\;\;256z+32\\9&6&\!\!\!\!-3&\;\;376z+8\\9&\!\!\!\!-6&1&\;\;512z\end{pmatrix}\stackrel{\begin{cases}R_2-R_1\\R_3-R_1\end{cases}}\longrightarrow \begin{pmatrix} 9&18&9&\;\;256z+32\\0&\!\!\!\!-12&\!\!\!\!-12&\;\;120z-24\\0&\!\!\!\!-24&\!\!\!\!-8&\;\;256z-32\end{pmatrix}\longrightarrow$$</p>
<p>$$\stackrel{R_3-2R_2}\longrightarrow \begin{pmatrix} 9&18&9&\;\;256z+32\\0&\!\!\!\!-12&\!\!\!\!-12&\;\;120z-24\\0&0&16&\;\;16z+16\end{pmatrix}$$</p>
<p>We thus get </p>
<p>$$\text{From}\,\,R_3\,:\;\;16y^2=16z+16\Longrightarrow y=\pm\sqrt{z+1}$$</p>
<p>$$\text{From}\,\,R_2\wedge R_3\,\,\text{above}\;\;-12xy-12y^2=120z-24\Longrightarrow $$</p>
<p>$$\mp 12x\sqrt{z+1}=120z-24+12(z+1)=132z-12\Longrightarrow \pm x\sqrt{z+1}=1-11z\Longrightarrow$$</p>
<p>$$x=\pm\frac{1-11z}{\sqrt{z+1}}$$</p>
<p>and so on. </p>
|
16,831 | <p>As the title says, I'm wondering if there is a continuous function such that $f$ is nonzero on $[0, 1]$, and for which $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$. I am trying to solve a problem proving that if (on $C([0, 1])$) $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 0$, then $f$ must be identically zero. I presume then we do require the $n=0$ case to hold too, otherwise it wouldn't be part of the statement. Is there any function which is not identically zero which satisfies $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$?</p>
<p>The statement I am attempting to prove is homework, but this is just idle curiosity (though I will tag it as homework anyway since it is related). Thank you!</p>
| Qiaochu Yuan | 232 | <p>The answer is no. Actually I believe the following is a theorem whose name totally escapes me at the moment: assume that $f$ is continuous and let $a_n$ be a sequence of increasing positive integers such that $\int_0^1 f(x) x^{a_n} \, dx = 0$ for $n \ge 1$. If $\sum \frac{1}{a_n}$ diverges, then $f$ is identically zero! (Edit: this is a corollary of the <a href="http://en.wikipedia.org/wiki/M%C3%BCntz%E2%80%93Sz%C3%A1sz_theorem">Müntz–Szász</a> theorem - thanks, Moron!)</p>
<p>In other words, the problem isn't phrased the way it is because stronger statements are false; the stronger versions are just harder to prove.</p>
|
140,294 | <p>Generative adversarial networks (GAN) are regarded by Yann LeCun as "the most interesting idea in the last ten years in machine learning". They can be used to generate photo-realistic images that are almost indistinguishable from real ones.</p>
<p>GAN trains two competing neural networks: a generator network which generates images, and a discriminator network which distinguishes the generated images from the real training images. For example, the images shown below are generated by the network from the texts above them (taken from Han Zhang et al., <a href="https://arxiv.org/pdf/1612.03242.pdf" rel="noreferrer">StackGAN: Text to Photo-realistic Image Synthesis
with Stacked Generative Adversarial Networks</a>).</p>
<p><a href="https://i.stack.imgur.com/r3L60.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/r3L60.jpg" alt="enter image description here"></a></p>
<p>I'm wondering whether we can implement a simplified version of that in <em>Mathematica</em>, given that the neural network framework has been enhanced greatly in version 11.1.</p>
| Pavel Perikov | 67,512 | <p>Please ignore the whole text below, just don't use MMA for NN learning NOW — you'll spend unbelievable amounts of time studying the MMA way of doing things and run into many bugs along the way.</p>
<p>If your goal is not to become an expert in the Wolfram Mathematica NN framework — don't go this way. Invest your time in integration with Python et al.</p>
<p>Talking about GANs in the NeuralNetworks settings... I'd say — do not use it.</p>
<p>Now we have <code>NetGANOperator</code> and it works IF you can reproduce the original loss function, by setting "Wasserstein" as the loss setting of <code>NetGANOperator</code>.</p>
<p>It will take some creativity and data wrangling on your side to make your discriminator and generator fit into the <code>NetGANOperator</code> assumptions (one input/one output, etc.).</p>
<p>If you don't believe me just look at all the "trained separately" sections in all the "Construction notebook" sections of the <code>NetModel</code>.</p>
<p>I greatly appreciate the effort all the people put into making NNs easy in MMA, but the result is what we have. I personally think that reproducing more low-level functionality, like the common "forward/reverse/get gradients", would be good.</p>
<p>Probably you'll spend much more time than it will take to adapt the original python source to your needs.</p>
<p>There's no way I'm aware of to reproduce GAN behaviour (without using <code>NetGANOperator</code>). Original suggestions from WRI to use <code>NetMapOperator</code> for discriminator, or use <code>NetInsertSharedArrays</code> just do not work, so they finally implemented <code>NetGANOperator</code>.</p>
<p>There're lots of subtleties like <code>BatchNormalizationLayer</code>. So you'll spend your time investigating NeuralNetwork framework (and bugs in the NN framework). That time could be spent in generating python code btw.</p>
<p>So just call Python if you have to use MMA for machine learning. NeuralNetworks framework is finely designed, the only problem with it — it's of little usability beyond the documentation examples (biased opinion here).</p>
<p>And you will NOT be able to easily reproduce what everybody does in frameworks like PyTorch, Tensorflow and so on.</p>
<p>WRI just decided they know better, so they came up with the whole <code>NetTrain</code> thing, but you're not allowed to pass gradients or change the graph etc. Just use what we provided to you.</p>
<p>The code is behind the MXNetLink library — you can't study or modify it. If somebody decided you don't need some functionality — you're on your own.</p>
<p>So just adapt Python and what the world does and use <code>ExternalSessionEvaluate</code>. The idea of having everything inside MMA is great but just does not work in cases of rapidly developing knowledge areas.</p>
<p>I don't know how much creativity and stamina was put into NeuralNetworks (inference, declarative approach, visualisation, net surgery etc etc etc), but it now looks like the tool to run the examples. Buggy, btw (like <code>NetEncoder["Image"]</code> leaking memory for files and unusable as of late 2021).</p>
<p>Sorry, WRI people! you're the greatest!!!!!!</p>
<p>P.S. I finally demoed the classic Pix2Pix training. Because I love Mathematica:)</p>
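<p>To make the "just call Python" advice concrete, here is a deliberately tiny adversarial training loop in plain NumPy (my own toy sketch, nothing like StackGAN or <code>NetGANOperator</code>): a linear generator $G(z)=az+b$ chases 1-D data drawn from $N(4,1)$, while a logistic discriminator $D(x)=\sigma(wx+c)$ tries to tell real from fake.</p>

```python
import numpy as np

# Minimal 1-D GAN sketch: generator G(z) = a*z + b, discriminator
# D(x) = sigmoid(w*x + c), real data ~ N(4, 1), noise z ~ N(0, 1).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # --- discriminator ascent on log D(real) + log(1 - D(fake)) ---
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((1 - dr) * real) - np.mean(df * fake)
    gc = np.mean(1 - dr) - np.mean(df)
    w += lr * gw
    c += lr * gc

    # --- generator ascent on log D(fake) (the non-saturating loss) ---
    df = sigmoid(w * fake + c)
    ga = np.mean((1 - df) * w * z)
    gb = np.mean((1 - df) * w)
    a += lr * ga
    b += lr * gb

print(b)   # should have drifted toward the data mean, 4
```

<p>Even this toy shows the alternating-update structure that the "real" frameworks make easy to express; that is exactly what is painful to set up inside <code>NetTrain</code>.</p>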
|
2,583,454 | <p>Consider for instance the linear system:</p>
<p>$$\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>This is over determined and thus has no solution. Yet, by simply multiplying both sides by $\textbf{A}^T$:</p>
<p>$$\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
5 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
x \\
y \\
\end{array}
\right)=\left(
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 4 & 6 \\
\end{array}
\right).\left(
\begin{array}{c}
1 \\
2 \\
4 \\
\end{array}
\right)$$</p>
<p>We find that the system now has a unique solution, which is the (x,y) that minimizes the squared error.</p>
<p>Now I understand the derivation of why multiplying by the transpose helps to find the pseudoinverse which then helps to perform OLS regression, but my question is perhaps a bit more fundamental. </p>
<p>How can multiplying both sides of an equation by a matrix change a system which previously had no solutions into one that has a unique solution? This seems to go against my assumption that the solutions to $\textbf{A}x = \textbf{B}$ are the same as the solutions to $\textbf{P}\textbf{A}x = \textbf{P}\textbf{B}$.</p>
| Robert Israel | 8,508 | <p>The solutions to $Ax=b$ are the same as those of $PAx=Pb$ if $P$ is one-to-one, i.e. $\ker(P) = \{0\}$. If $P$ is not one-to-one, so that $P y = 0$ for some $y \ne 0$, then any $x$ such that $Ax = b + y$ is a solution of $PAx = Pb$ but not a solution of $Ax = b$. </p>
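<p>A small numerical illustration of this answer (my own sketch), using the matrices from the question: the nontrivial kernel of $\textbf{P}=\textbf{A}^T$ is exactly what lets a new "solution" appear, and that solution is the least-squares one.</p>

```python
import numpy as np

# The 3x2 system from the question: ker(A) = {0}, but A^T has a
# nontrivial kernel on R^3, so multiplying by A^T is not reversible.
A = np.array([[1., 2.], [3., 4.], [5., 6.]])
b = np.array([1., 2., 4.])

# normal equations: A^T A is 2x2 and invertible, so this has one solution
x = np.linalg.solve(A.T @ A, A.T @ b)

r = A @ x - b
print(np.linalg.norm(r))      # > 0: the original system is inconsistent
print(A.T @ r)                # ~ 0: residual is orthogonal to col(A),
                              # i.e. x is the least-squares solution
```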
|
30,292 | <p>One can view a random walk as a discrete process whose continuous
analog is diffusion.
For example, discretizing the heat diffusion equation
(in both time and space) leads to random walks.
Is there a natural continuous analog of discrete self-avoiding walks?
I am particularly interested in self-avoiding polygons,
i.e., closed self-avoiding walks.</p>
<p>I've only found reference
(in Madras and Slade,
<em>The Self-Avoiding Walk</em>, p.365ff)
to continuous analogs of "weakly self-avoiding walks"
(or "self-repellent walks") which discourage but
do not forbid self-intersection.</p>
<p>I realize this is a vague question,
reflecting my ignorance of the topic.
But perhaps those knowledgeable could point me to the
the right concepts. Thanks!</p>
<p><b>Addendum</b>.
<a href="http://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner_evolution">Schramm–Loewner evolution</a> is the answer. It is the conjectured scaling limit of the self-avoiding walk and several other related stochastic processes. Conjectured in low dimensions,
proved in high dimensions, as pointed out by Yuri and Yvan. Many thanks for your help!</p>
| PeterR | 4,553 | <p>For SLE see <a href="http://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner_evolution" rel="nofollow">http://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner_evolution</a></p>
|
24,230 | <p>$f$ is continuous between $[0,1]$, and $f(0)=f(1)$.</p>
<p>I want to prove that there is an $a \in [0,0.5]$ such that $f(a+0.5)=f(a)$.</p>
<p>OK, so Rolle's theorem might be useful here, but I can't see the connection to the derivative.</p>
<p>(Weierstrass? Uniform continuity?) I'd be glad for some hints.</p>
<p>Thanks.</p>
| Shai Covo | 2,810 | <p>HINT: Work with the function $g(a)=f(a+0.5)-f(a)$. Consider $g(0)$ and $g(0.5)$ (and their sum).</p>
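<p>A numerical illustration of the hint (my own sketch, with an arbitrarily chosen $f$ satisfying $f(0)=f(1)$): since $g(0)+g(0.5)=f(1)-f(0)=0$, the values $g(0)$ and $g(0.5)$ have opposite signs unless one vanishes, so bisection finds a root of $g$ in $[0,0.5]$.</p>

```python
import math

def f(x):
    # an arbitrary continuous f with f(0) == f(1) (both equal 0 here)
    return math.sin(2 * math.pi * x) + x * (1 - x)

def g(a):
    return f(a + 0.5) - f(a)

# g(0) + g(0.5) = f(1) - f(0) = 0, so g changes sign on [0, 0.5]
lo, hi = 0.0, 0.5
for _ in range(80):           # bisection on g
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
a = 0.5 * (lo + hi)
print(a, f(a + 0.5) - f(a))   # f(a + 0.5) matches f(a)
```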
|
97,062 | <p><strong>Bug introduced in 9.0 or earlier and fixed in 10.4.0</strong></p>
<hr>
<p>Why does this work?</p>
<pre><code>Solve[5 Tan[t] + 9 == 0 && 0 <= t < 2 Pi , t]
{{t -> π - ArcTan[9/5]}, {t -> 2 π - ArcTan[9/5]}}
</code></pre>
<p>But this doesn't.</p>
<pre><code>NSolve[5 Tan[t] + 9 == 0 && 0 <= t < 2 Pi , t]
{}
</code></pre>
| m_goldberg | 3,066 | <p>As I noted in a comment to the question, I made a query about this issue to WRI tech support. I have now received a reply. I quote the relevant part.</p>
<blockquote>
<p>It does appear that NSolve is not behaving properly, and I have forwarded an incident report to our developers with the information you provided. I would like to include a link to the stack exchange article; do you have the stack exchange article number?</p>
</blockquote>
<p>I have sent a reply giving the URL of this question as requested.</p>
<p>Also, on the basis of this reply from WRI, I am marking the question with <a href="/questions/tagged/bugs" class="post-tag" title="show questions tagged 'bugs'" rel="tag">bugs</a>.</p>
<h3>Update</h3>
<p>I received another email from WRI tech support concerning this issue saying the developers have agreed to fix it.</p>
<blockquote>
<p>Thank you for the link to the article. I have heard back from our development team and a fix for this issue is expected in a future release.</p>
</blockquote>
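<p>As an independent check (mine, in Python rather than Mathematica), the two roots returned by <code>Solve</code> do satisfy the equation and lie in the requested interval:</p>

```python
import math

# Solve's roots of 5 tan(t) + 9 == 0 with 0 <= t < 2*pi
roots = [math.pi - math.atan(9 / 5), 2 * math.pi - math.atan(9 / 5)]
for t in roots:
    print(t, 5 * math.tan(t) + 9)   # residual ~ 0
```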
|
2,474,355 | <p>I have this system of linear equations.</p>
<p><em>2x<sub>1</sub> - 4x<sub>2</sub> - x<sub>3</sub> = 1</em></p>
<p><em>x<sub>1</sub> - 3x<sub>2</sub> + x<sub>3</sub> = 1</em></p>
<p><em>3x<sub>1</sub> - 5x<sub>2</sub> - 3x<sub>3</sub> = 1</em></p>
<p>What is the best way or is there any special way to solve this sort of system?</p>
| Theoretical Economist | 388,944 | <p>Consider the set of natural numbers $\mathbb N$. Let $d$ be the discrete metric, so that $d(m,n)=1$ if $n \neq m$ and $d(n,n)=0$. Define an alternative metric </p>
<p>$$\rho(m,n)=\left\vert \frac{1}{m} - \frac{1}{n} \right\vert.$$</p>
<p>Both $d$ and $\rho$ induce the discrete topology on $\mathbb N$, and hence are equivalent in the sense that they have the same convergent sequences. More explicitly, say that the metrics $d$ and $\rho$ are equivalent if $x_n \overset{d}{\to}x \iff x_n \overset{\rho}{\to} x$. This is the same as the metrics inducing the same topology.</p>
<p>However, the sequence $\left\{1,2,3,... \right\}$ is $\rho$-Cauchy but not $d$-Cauchy.</p>
<p>To see that $\rho$ induces the discrete topology, let $r_n = \frac{1}{n(n+1)}$, and observe that $B_{r_n}(n) = \{ n \}$.</p>
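<p>These claims are easy to check computationally (my own sketch, using exact rational arithmetic):</p>

```python
from fractions import Fraction

def d(m, n):
    return 0 if m == n else 1

def rho(m, n):
    return abs(Fraction(1, m) - Fraction(1, n))

# tail of (1, 2, 3, ...): the rho-distances within {N, N+1, ...} are at
# most 1/N, so the sequence is rho-Cauchy, while distinct points always
# have d-distance 1, so it is not d-Cauchy.
N = 1000
tail_rho = max(rho(N, n) for n in range(N, N + 50))
print(tail_rho, d(N, N + 1))

# the rho-ball of radius 1/(n(n+1)) around n contains only n itself,
# so singletons are open: rho induces the discrete topology.
n = 7
ball = [m for m in range(1, 10000) if rho(n, m) < Fraction(1, n * (n + 1))]
print(ball)
```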
|
926,168 | <p>In how many ways can 3 teachers and 4 pupils be arranged in a line if the pupils and teachers must alternate?</p>
<p>How do I get the answer? The given answer is $144$.</p>
| lab bhattacharjee | 33,337 | <p>Using <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html">Prosthaphaeresis Formulas</a>,</p>
<p>$$\sin2A+\sin2B=2\sin(A+B)\cos(A-B)$$</p>
<p>and $$\sin(2A+2B+2C)-\sin2C=2\sin(A+B)\cos(A+B+2C)$$</p>
<p>Finally,
$$\cos(A-B)-\cos(A+B+2C)=2\sin(A+C)\sin(B+C)$$</p>
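<p>A quick numerical spot-check of the three identities used above (my own sketch):</p>

```python
import math, random

# spot-check the three product-to-sum identities at random angles
random.seed(0)
for _ in range(200):
    A, B, C = (random.uniform(-3.0, 3.0) for _ in range(3))
    assert math.isclose(math.sin(2*A) + math.sin(2*B),
                        2 * math.sin(A + B) * math.cos(A - B), abs_tol=1e-9)
    assert math.isclose(math.sin(2*A + 2*B + 2*C) - math.sin(2*C),
                        2 * math.sin(A + B) * math.cos(A + B + 2*C), abs_tol=1e-9)
    assert math.isclose(math.cos(A - B) - math.cos(A + B + 2*C),
                        2 * math.sin(A + C) * math.sin(B + C), abs_tol=1e-9)
print("all three identities hold")
```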
|
157,587 | <p>I know the following is a well-known result.</p>
<p>Let $D = B(0,1) \subset \mathbb{C} $ a disc, $f$ holomorphic on $D$. Show that $$ 2|f^{'}(0)| \le \sup_{z, w \in D} |f(z)-f(w)|$$
Furthermore, there is equality if and only if $f$ is linear.</p>
<p>I need some reference about the second part, i.e. there is equality if and only if $f$ is linear.</p>
| Alexandre Eremenko | 25,510 | <p>Edit. </p>
<p>Let us first prove the inequality:
$$\sup_{z,w}|f(z)-f(w)|\geq\sup_z|f(z)-f(-z)|=\sup|g(z)|\geq|g'(0)|=2|f'(0)|,$$
where the Schwarz Lemma was applied to $g(z)=f(z)-f(-z)$. Equality in Schwarz lemma can
happen only if $g(z)=kz$, thus $f(z)=kz+\phi(z)$, where $\phi$ is even.</p>
<p>Now let us see when equality is possible in the first inequality. We must have
$$\sup_{|z|<1,|w|<1}|k(z-w)+\phi(z)-\phi(w)|=2|k|,$$
where $\phi$ is an even function analytic in the unit disc. We have to derive from here that
$\phi$ is constant.</p>
<p>See the reference
<a href="http://www.math.wustl.edu/~geknese/schwarzpoly.pdf" rel="nofollow noreferrer">http://www.math.wustl.edu/~geknese/schwarzpoly.pdf</a>,
where extremal functions for the Schwarz lemma in the polydisc are described.
But those functions are extremal at every point,
while our function is extremal at only one point, the origin.</p>
<p>I can prove that $\phi$ is constant only under the additional restriction that
$\phi$ is differentiable in the closed disc. WLOG $k=1$.
Let $|z|=1$ and put $w=-ze^{it}$, where $t$ is small. Then, neglecting the high powers
of $t$, we must have
$$|2+it-\phi'(z)it|\leq 2$$
This implies that $\phi'(z)$ must be real for all $z$ on the unit circle. But such a function must be constant, and since $\phi$ is even, we conclude that $\phi$ is constant.</p>
<p>To get rid of the additional assumption of differentiability, one may combine
<a href="https://mathoverflow.net/questions/157741/characterization-of-discs">this</a> and
and <a href="https://mathoverflow.net/questions/157740/maximal-regions-with-given-diameter">this</a>. </p>
|
1,000,705 | <p>I have been trying to solve this problem for hours. </p>
<p>$\dfrac{9e^{2x}}{8x+3}$</p>
<p>I know $u'(x)$ will be $18e^{2x}$
and $v'(x)$ will be $8$</p>
<p>Written out, it will be $\dfrac{(8x+3)(18e^{2x})-(9e^{2x})(8)}{(8x+3)^2}$</p>
<p>I get to the part above^^ and I'm not sure what to do. I know it's probably something simple that I'm over or under thinking, but please help! </p>
| Aaron Maroja | 143,413 | <p>The determinant is given by</p>
<p>$$\det A =
\begin{vmatrix}
1 & 2 & -1 \\
\color{red}2 & \color{red}0 & \color{red}2\\
-1 & 2 & k
\end {vmatrix} =
-\color{red} 2\begin{vmatrix}2 & -1 \\ 2 & k \end {vmatrix}
+\color{red} 0\begin{vmatrix}1 & -1 \\ -1 & k \end {vmatrix}
-\color{red} 2\begin{vmatrix}1 & 2 \\ -1 & 2 \end {vmatrix}
\\= -2(2k+2) - 2(2 + 2) = -4k - 12$$</p>
<p>Now $A$ is not invertible $\Leftrightarrow \det A = 0$. Then $k = -3$.</p>
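<p>A short script (my own sketch) confirming the expansion and the conclusion:</p>

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def A(k):
    return [[1, 2, -1], [2, 0, 2], [-1, 2, k]]

for k in range(-5, 6):
    assert det3(A(k)) == -4 * k - 12   # matches the expansion above
print(det3(A(-3)))   # 0: A fails to be invertible exactly at k = -3
```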
|
2,916,685 | <p>What are some common, preferably uncomplicated functions $ f: [a,b] \rightarrow \mathbb{R} $ that are Riemann integrable on $ [c,b] $ for all $ c \in (a,b) $ but not integrable on $ [a,b] $. </p>
<p>I know $ f = 1/x $ is one such function for the interval $ [0,1] $. Are there any other examples? </p>
| Slepecky Mamut | 180,179 | <p>$\sin(1/x)$ is integrable on $[0,1]$; the integral converges to $\sin(1)-\operatorname{Ci}(1)=0.504067\ldots$, where $\operatorname{Ci}$ is the cosine integral.</p>
<p>A correct example is $\sin(1/x)/x$.</p>
|
2,916,685 | <p>What are some common, preferably uncomplicated functions $ f: [a,b] \rightarrow \mathbb{R} $ that are Riemann integrable on $ [c,b] $ for all $ c \in (a,b) $ but not integrable on $ [a,b] $. </p>
<p>I know $ f = 1/x $ is one such function for the interval $ [0,1] $. Are there any other examples? </p>
| katosh | 494,716 | <p>You can construct many more functions by first selecting a differentiable $F\colon(a,b]\to\mathbb R$ that is not defined (or is infinite) at $x=a$ and finite for $x\in(a,b]$. Then just take the derivative as your example, $f(x) := \frac{d}{dx}F(x)$.</p>
<p>For example, with any differentiable function $g\colon\mathbb R\to\mathbb R$ such that $\lim_{x\to-\infty}|g(x)|=\infty$, you can construct
$$
F(x) = g\circ\log(x)\\
f(x) = \frac{d}{dx}F(x) = \frac{g'\circ\log(x)}{x}
$$
This way you know
$$
\int_a^bf(x)\mathrm dx = F(b) - F(a)
$$
which is finite for $a,b>0$ but undefined if $a=0$.</p>
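<p>A numerical sanity check of this construction (my own sketch), taking $g$ to be the identity so that $F=\log$ and $f(x)=1/x$:</p>

```python
import math

# g(x) = x gives F(x) = log x and f(x) = 1/x: Riemann integrable on
# [c, 1] for every c in (0, 1], while F(c) -> -infinity as c -> 0+
def f(x):
    return 1.0 / x

def riemann(fn, a, b, n=100000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

for c in (0.5, 0.1, 0.01):
    print(c, riemann(f, c, 1.0), -math.log(c))   # the two columns agree
```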
|
1,371,580 | <p>I'm reading "A Concise Introduction to Pure Mathematics" by Liebeck, and in the exercises of the second chapter I found this question:</p>
<p>"Show that the decimal expression for $\sqrt 2 $ is not periodic"</p>
<p>If I write $\sqrt 2$ in its decimal form, I should obtain something like:</p>
<p>$\sqrt 2 = {a_0}.{a_1}{a_2}{a_3}\ldots$</p>
<p>But how can I prove that there is no periodic string of digits in ${a_1}{a_2}{a_3}\ldots$?</p>
<p>Should I prove this by contradiction?</p>
<p>Thanks a lot for your help, and excuse any grammatical mistakes I may have made; English is not my native language.</p>
| barak manos | 131,263 | <p>Assume by contradiction a periodic representation of $\sqrt2$ on base $10$.</p>
<p>Let $A$ denote the digit-sequence in the integer part of $\sqrt2$.</p>
<p>Let $B$ denote the digit-sequence in the non-periodic prefix of the fractional part of $\sqrt2$.</p>
<p>Let $C$ denote the digit-sequence in the periodic remainder of the fractional part of $\sqrt2$.</p>
<p>Let $|A|$ denote the length of $A$, $|B|$ denote the length of $B$, and $|C|$ denote the length of $C$.</p>
<p>Then $\sqrt2=A+\dfrac{B}{10^{|B|}}+\dfrac{C}{10^{|B|}\left(10^{|C|}-1\right)}$.</p>
<p>But ${A,B,C,|A|,|B|,|C|}\in\mathbb{N}\implies{A+\dfrac{B}{10^{|B|}}+\dfrac{C}{10^{|B|}\left(10^{|C|}-1\right)}}\in\mathbb{Q}$.</p>
<p>This leads to the fact that $\sqrt2\in\mathbb{Q}$, which is false, hence the assumption is false.</p>
<p>Note that although the proof above is for base $10$, it can be used for any other integer base.</p>
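<p>The reconstruction of a periodic decimal as a rational number is easy to verify computationally (my own sketch; note that the periodic block contributes $C/\bigl(10^{|B|}(10^{|C|}-1)\bigr)$):</p>

```python
from fractions import Fraction

def from_periodic(A, B, lenB, C, lenC):
    # value of the decimal A . B (C C C ...), with |B| = lenB, |C| = lenC;
    # the result is automatically a Fraction, i.e. rational, which is
    # exactly the contradiction the proof exploits
    return A + Fraction(B, 10**lenB) + Fraction(C, 10**lenB * (10**lenC - 1))

print(from_periodic(0, 1, 1, 6, 1))        # 0.1(6) = 1/6
print(from_periodic(0, 0, 0, 142857, 6))   # 0.(142857) = 1/7
```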
|
560,929 | <p>Consider a circle with two perpendicular chords, dividing the circle into four regions $X, Y, Z, W$(labeled):</p>
<p><img src="https://i.stack.imgur.com/2TDK5.png" alt="enter image description here"></p>
<p>What is the maximum and minimum possible value of </p>
<p>$$\frac{A(X) + A(Z)}{A(W) + A(Y)}$$</p>
<p>where $A(I)$ denotes the area of $I$?</p>
<p>I know (instinctively) that the value will be maximum when the two chords will be the diameters of the circle, in that case, the area of the four regions will be equal and the value of the expression will be $1$. </p>
<p>I don't know how to rigorously prove this, however. And I have absolutely no idea about minimizing the expression. </p>
| MvG | 35,416 | <p>Assume that the line $BD$ between areas $X\cup Y$ and $W\cup Z$ is always horizontal, so the other chord will always be vertical. You may assume your circle to be the unit circle (since the radius will cancel out of the expression in any case). You can parametrize your whole setup by the coordinates of the intersection between these two chords, namely the point $D=(x,y)$ with $x^2+y^2\le1$. Depending on these coordinates, you can compute the areas described in your question, using <a href="http://en.wikipedia.org/wiki/Circular_segment" rel="nofollow noreferrer">circular segments</a> and triangles. I did this using <a href="http://sagemath.org/" rel="nofollow noreferrer">sage</a>, and the ugly result looks like this:</p>
<pre><code>-(2*x*y - 2*sqrt(-x^2 + 1)*sqrt(-y^2 + 1) + sqrt(2*sqrt(-y^2 + 1)*x -
2*sqrt(-x^2 + 1)*y + 2)*sqrt(-1/2*sqrt(-y^2 + 1)*x + 1/2*sqrt(-x^2 +
1)*y + 1/2) + sqrt(1/2*sqrt(-y^2 + 1)*x - 1/2*sqrt(-x^2 + 1)*y +
1/2)*sqrt(-2*sqrt(-y^2 + 1)*x + 2*sqrt(-x^2 + 1)*y + 2) -
2*arcsin(1/2*sqrt(2*sqrt(-y^2 + 1)*x - 2*sqrt(-x^2 + 1)*y + 2)) -
2*arcsin(1/2*sqrt(-2*sqrt(-y^2 + 1)*x + 2*sqrt(-x^2 + 1)*y + 2)))/(2*x*y
+ 2*sqrt(-x^2 + 1)*sqrt(-y^2 + 1) - sqrt(2*sqrt(-y^2 + 1)*x +
2*sqrt(-x^2 + 1)*y + 2)*sqrt(-1/2*sqrt(-y^2 + 1)*x - 1/2*sqrt(-x^2 +
1)*y + 1/2) - sqrt(1/2*sqrt(-y^2 + 1)*x + 1/2*sqrt(-x^2 + 1)*y +
1/2)*sqrt(-2*sqrt(-y^2 + 1)*x - 2*sqrt(-x^2 + 1)*y + 2) +
2*arcsin(1/2*sqrt(2*sqrt(-y^2 + 1)*x + 2*sqrt(-x^2 + 1)*y + 2)) +
2*arcsin(1/2*sqrt(-2*sqrt(-y^2 + 1)*x - 2*sqrt(-x^2 + 1)*y + 2)))
</code></pre>
<p>If you had to do things manually, you might want to spend time simplifying this beast, but since all I care about at the moment is visualizing this, I'm fine as long as my computer can deal with it.</p>
<p>You can plot the result. For $x,y\ge0$ the result looks like this:</p>
<p><img src="https://i.stack.imgur.com/OEcAk.jpg" alt="Ratio, evaluated for a quarter circle"></p>
<p>In the bottom plane you see your quarter circle of all possible locations for $D$, and in the vertical direction you see the value of your fraction. In this image, the maximal value of $1$ can indeed be observed for $D=(0,0)$. But <em>any</em> point on a horizontal or vertical line through the origin will yield the same value. So it is sufficient that <em>one</em> of the two lines passes through the origin. Which makes sense due to the symmetric way the areas are distributed to numerator and denominator in this case.</p>
<p>The minimal value is “obviously” (although this is no proof!) at $x=y=\frac12\sqrt2$, i.e. half way between these two and at the very rim of the circle. There you get a value of</p>
<p>$$\frac{\pi-2}{\pi+2}\approx 0.222$$</p>
<p>Note that this is the same value <a href="https://math.stackexchange.com/users/55051/david-h">David H</a> already gave in <a href="https://math.stackexchange.com/questions/560929/how-to-divide-a-circle-with-two-perpendicular-chords-to-minimize-and-maximize#comment1190069_560929">a comment</a>.</p>
<p>But as comments already pointed out, it is far from obvious that $D$ has to lie in the first quadrant. In other words, if you don't always associate $W$ with the area that contains the origin, then the maximal value will necessarily be the reciprocal of your minimal value, i.e.</p>
<p>$$\frac{\pi+2}{\pi-2}\approx 4.504$$</p>
<p>To visualize this case with all quadrants included, you can extend the above plot to the following one:</p>
<p><img src="https://i.stack.imgur.com/6KaGN.jpg" alt="Function value over whole circle"></p>
<p>Since the scales of various areas are so very different, the overall shape might be clearer if you take the logarithms of the fractions you gave. Then you get the following symmetrical result:</p>
<p><img src="https://i.stack.imgur.com/eC8xl.jpg" alt="Logarithms of function values"></p>
<p>As you are considering ways to prove these facts, looking at the plots might suggest possible approaches. For example, you might be able to argue that looking for a minimum in a quarter circle is enough, since all other cases can be reduced to that one. You might want to use polar coordinates, as the mesh in the above plots suggests. You could try to demonstrate that increasing the radius will always decrease the function value, so that it is sufficient to look at configurations which have $D$ on the circle itself. Then you have a much simpler 1d problem, which should be open to common techniques from calculus.</p>
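<p>As a rough numerical sanity check (not a proof) of the extremal value, one can estimate the four areas by Monte Carlo with the intersection point $D$ placed on the rim at $45^\circ$:</p>

```python
import math
import random

random.seed(0)
dx = dy = math.sqrt(2) / 2          # D on the unit circle at 45 degrees
areas = [0, 0, 0, 0]
for _ in range(400_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y > 1:
        continue                     # outside the unit circle
    areas[(x > dx) + 2 * (y > dy)] += 1

# Opposite pairs of regions: the pair away from the origin vs. the pair containing it
small = areas[1] + areas[2]
large = areas[0] + areas[3]
ratio = small / large
print(ratio, (math.pi - 2) / (math.pi + 2))  # both close to 0.222
```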
|
47,603 | <p>Is it possible to express the functions $S(x)=x+1$ and $Pd(x)=x\dot{-}1$ in terms of the functions $f_1$, $f_2$, $f_3$ and $f_4$, where $f_1(x)=0$ if $x$ is even or $1$ if $x$ is odd, $f_2(x)=\mbox{quot}(x,2)$, $f_3(x)=2x$ and $f_4(x)=2x+1$? For example, $S(x)=f_4(f_2(x))$ if x is even. Is there a similar formula if $x$ is odd?</p>
| drbobmeister | 8,472 | <p>As I recall from reading and classes in mathematical logic, the natural numbers are
"defined"--postulated might be a better word--more or less as follows:</p>
<p>A.) There is a set $N$;</p>
<p>B.) $N$ has a "preferred element" called $0$;</p>
<p>C.) There is a function $s:N \to N$, such that i.) $0$ is not in the range of $s$;
ii.) every $n \in N$, $n \ne 0$, <em>is</em> in the range of $s$; iii.) $s$ is injective, meaning $s(n) = s(m)$ implies $n = m$.</p>
<p>D.) The principle of mathematical induction holds: if for a "statement" $P(n)$ depending
on $n$ (I'm not going into the fully nuanced logical definition of a proposition containing a variable here.) $P(0)$ holds and "$P(n)$ implies $P(s(n))$" holds, then $P(n)$ holds for all $n \in N$.</p>
<p>The equation $s(n) = n + 1$ then just defines a shorthand for $s(n)$. From this point of view, taking your functions as definitive is complicated, since for example we don't even
know what "even" and "odd" mean at this point in the logical development, much less
$quot(x, 2)$ (by which I assume you mean the quotient of $x$ divided by $2$). To set up
a system for the natural numbers using your functions seems a lot more complex
than the above (which I believe is Peano's) formulation. For example, you'd have to
postulate $N$ as being decomposable into two sets (the evens and the odds), and postulate
the relationship between them, and then figure out how to axiomatize $2x$ etc. From
the Peano axioms, all this stuff can be defined and/or proved in a relatively straightforward way. Finally, I don't readily see how you would talk about the relationship
between the evens and the odds without using something which for all the world looks an
awful lot like $s(n)$. Also worth noting is that your expression for $s(x)$ really
involves a conditional (is $x$ even or odd?), whereas the standard postulate does not.</p>
<p>Response to Ryan Budney's comment: it seems to me that Tim's use of the term $S(x)$ indicates a context of the successor function from Peano arithmetic, whence his
use of the word "define" in the title, though "axiomatize" or "postulate" might be
more on point.</p>
<p>Response to Yemon Choi's comment: "generate" might be what Tim wants to do, but he
gives no starting point, leaving the question "generate from what?" hanging.</p>
|
144,864 | <p>This is my homework question:
Calculate $\int_{0}^{1}x^2\ln(x) dx$ using Simpson's formula. Maximum error should be $1/2\times10^{-4}$</p>
<p>For solving the problem, I need to calculate the fourth derivative of $x^2\ln(x)$. It is $-2/x^2$, which is unbounded on $(0,1)$, so I can't calculate $h$ from the following error formula for Simpson's rule.</p>
<p>$$-\frac{(b-a)}{180}h^4f^{(4)}(\eta)$$</p>
<p>How can I solve it?</p>
| mrf | 19,440 | <p>As already noticed, $f(x)$ is not $C^4$ on the closed interval $[0,1]$, and a direct estimate on the error in Simpson's method is troublesome. One way to handle things is to remove the left end point as described by Jonas Meyer. Another way to handle weak singularities as these is to start with a change of variables.</p>
<p>For this particular integral, you can check that the substitution $x = t^a$ for $a$ large enough will turn the integrand into a $C^4$ function. For example, $a = 2$ gives $x = t^2$, $dx = 2t\,dt$, so</p>
<p>$$\int_0^1 x^2 \ln(x)\,dx = \int_0^1 t^4 \ln(t^2)\,2t\,dt= 4\int_0^1 t^5\ln(t)\,dt.$$</p>
<p>Let $g(t) = t^5\ln(t)$. You can check that $g(t)$ is $C^4$ on $[0,1]$. (Extended to $g(0) = 0$, of course.) Furthermore $|g^{(4)}(t)| \le 154$ on $[0,1]$. Simpson's rule on $g$ now works (reasonably) well.</p>
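<p>A short numerical check of this substitution (a sketch with a hand-rolled composite Simpson's rule; the exact value of the original integral is $-1/9$):</p>

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def g(t):
    # g(t) = 4 t^5 ln t, extended by g(0) = 0; C^4 on [0, 1]
    return 4 * t**5 * math.log(t) if t > 0 else 0.0

approx = simpson(g, 0.0, 1.0, 64)
print(approx, -1 / 9)  # error well below the required 0.5e-4
```

<p>With $|g^{(4)}|\le 154$ and $h = 1/64$, the error bound $\frac{1}{180}h^4\cdot 154$ is already far below $\frac12\times 10^{-4}$.</p>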
|
3,256,767 | <p>So I'm trying to understand a solution made by my teacher for a question that asks me to determine whether the following is true. I'm having trouble understanding where some values in the steps are coming from.</p>
<p>Like for the first part, I don't really get where n≥5 came from. My guess is getting 16n^2 + 25 to equal 16n^2 + n^2 by substituting n with 5. But I was wondering why 25 turned into n^2 in the first place?</p>
<p>I also have no idea where k = 5 came from.</p>
<p>For the second part of the solution, I'm also having similar struggles. Why did 16n^2 turn into 15n^2 + n^2? I'm also not sure where n≥41 and k=41 came from. I would really appreciate some clarification because I'm having trouble understanding this unit. </p>
<p><a href="https://i.stack.imgur.com/97ORS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97ORS.png" alt="enter image description here"></a></p>
| copper.hat | 27,978 | <p>Find two vectors <span class="math-container">$b_3,b_4$</span> such that with <span class="math-container">$b_1 = (1,1,1,1)^T, b_2 = (1,0,1,0)^T$</span>, the
vectors <span class="math-container">$b_1,...,b_4$</span> span <span class="math-container">$\mathbb{R}^4$</span>.</p>
<p>Note that <span class="math-container">${\cal R} \phi = \operatorname{sp} \{ b_1,b_2\}$</span>, so you <strong>must</strong> have <span class="math-container">$\phi(b_3), \phi(b_4) \in {\cal R} \phi$</span>,
otherwise this would lead to a contradiction.</p>
<p>So you can choose <span class="math-container">$\phi(b_3), \phi(b_4)$</span> arbitrarily as long as they lie in <span class="math-container">${\cal R} \phi$</span>. (Choosing the zero vector is an easy one.)</p>
<p>Once you have chosen these, this defines <span class="math-container">$\phi $</span> completely and the matrix
representation <span class="math-container">$A$</span> is straightforward to obtain.</p>
<p>We have <span class="math-container">$y_k = \phi(b_k) = A b_k = AB e_k$</span>, where <span class="math-container">$B$</span> is the matrix with columns
<span class="math-container">$b_1,...,b_4$</span>. From this we get <span class="math-container">$Y=AB$</span>, where <span class="math-container">$Y$</span> is the matrix with columns <span class="math-container">$y_1,...,y_4$</span>, and so
<span class="math-container">$A = Y B^{-1}$</span>.</p>
|
3,256,767 | <p>So I'm trying to understand a solution made by my teacher for a question that asks me to determine whether the following is true. I'm having trouble understanding where some values in the steps are coming from.</p>
<p>Like for the first part, I don't really get where n≥5 came from. My guess is getting 16n^2 + 25 to equal 16n^2 + n^2 by substituting n with 5. But I was wondering why 25 turned into n^2 in the first place?</p>
<p>I also have no idea where k = 5 came from.</p>
<p>For the second part of the solution, I'm also having similar struggles. Why did 16n^2 turn into 15n^2 + n^2? I'm also not sure where n≥41 and k=41 came from. I would really appreciate some clarification because I'm having trouble understanding this unit. </p>
<p><a href="https://i.stack.imgur.com/97ORS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97ORS.png" alt="enter image description here"></a></p>
| zipirovich | 127,842 | <p>I can offer you a hint for an ad hoc solution for this exercise specifically (rather than all such questions in general).</p>
<p>One way to construct the matrix of such a linear transformation is to know the images under <span class="math-container">$\varphi$</span> of the standard basis vectors <span class="math-container">$e_1=(1,0,0,0)$</span>, <span class="math-container">$e_2=(0,1,0,0)$</span>, <span class="math-container">$e_3=(0,0,1,0)$</span>, and <span class="math-container">$e_4=(0,0,0,1)$</span>. Then you will put them down as the columns of the desired matrix:
<span class="math-container">$$M = \begin{bmatrix} \varphi(e_1) & \varphi(e_2) & \varphi(e_3) & \varphi(e_4) \\ \end{bmatrix}.$$</span></p>
<p>Note that you already know <span class="math-container">$\varphi(e_1)+\varphi(e_3)=\varphi(1,0,1,0)=(2,1,0)$</span>, as it's given to you. And you can also find <span class="math-container">$\varphi(e_2)+\varphi(e_4)=\varphi(0,1,0,1)=\varphi(1,1,1,1)-\varphi(1,0,1,0)$</span>.</p>
<p>From here, you can make up as many examples satisfying the given conditions as you want. Pick any two vectors in <span class="math-container">$\mathbb{R}^3$</span> that add up to <span class="math-container">$(2,1,0)$</span> to be the values of <span class="math-container">$\varphi(e_1)$</span> and <span class="math-container">$\varphi(e_3)$</span>. And pick any two vectors in <span class="math-container">$\mathbb{R}^3$</span> that add up to what you need to be the values of <span class="math-container">$\varphi(e_2)$</span> and <span class="math-container">$\varphi(e_4)$</span>.</p>
|
2,673,465 | <p>Suppose that $n \in \mathbb{N}$ is composite and has a prime factor $q$. If $k \in \mathbb{Z}$ is the greatest number for which $q^k$ divides $n$, how can I show that $q^k$ does not divide ${{n}\choose{q}}$?</p>
<p>Clearly, since
$$
{{n}\choose{q}} = \frac{n!}{(n-q)!q!} = \frac{n(n-1)(n-2) \dots (n-q+1)}{q!}
$$
So it suffices to show that no element of the set
$$
\left\{ n-1, n-2, \dots, n-q+1 \right\}
$$
is divisible by $q$, which is clearly true, but I am unsure of how to show this rigorously. </p>
| Francisco José Letterio | 482,896 | <p>We can also use Kummer's Theorem (<a href="https://en.m.wikipedia.org/wiki/Kummer%27s_theorem" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Kummer%27s_theorem</a>), since it was designed for this kind of problem.</p>
<p>Is it possible that when adding n-q and q we get k carry-overs?</p>
<p>First we must notice a couple of things</p>
<p>1) Since $q|n$, the last digit of $n$ in its base-$q$ expansion MUST be 0 (can you see why?)</p>
<p>2)Notice that $q = (10)_q$ </p>
<p>So now let's add $(n-q)$ + $(q)$ and use Kummer's Theorem. Since $q = (10)_q$, we can only have carryovers at the second position (from right to left) onwards to the left.</p>
<p>If we were to have k carry-overs, we'd then have them at the second position (for $q^1$), third position (for $q^2$) all the way to the $k+1$th position (for $q^k$)</p>
<p>But getting a carry in all these positions must mean that $n = Aq^{k + 1+r}+ Bq^{k+r} + \dots + Cq^{k+1}$</p>
<p>This means that n is now a number which, when written in base q, has its last nonzero digit at the position for $q^{k+1}$. From this position it's all zeros to the right</p>
<p>This means that $n$ must be divisible by $q^{k+1}$. But this is absurd: it contradicts the fact that $q^k$ is the largest power of $q$ that divides $n$.</p>
<p>This contradiction was caused by our assumption that we could have $k$ carry-overs when adding $q + (n-q)$.</p>
<p>Since it is impossible for this to happen, we now know that the $q$-adic valuation of ${n \choose q}$ must be strictly less than $k$.</p>
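<p>The conclusion is easy to spot-check with exact arithmetic (a quick sketch):</p>

```python
from math import comb

def valuation(m, p):
    # Largest e with p**e dividing m
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

# If q**k is the largest power of the prime q dividing n,
# then the q-adic valuation of C(n, q) is strictly less than k.
for n, q in [(12, 2), (12, 3), (54, 3), (250, 5), (1024, 2)]:
    k = valuation(n, q)
    assert valuation(comb(n, q), q) < k
```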
|
1,950,809 | <p>I'm fairly certain that the probability of both dice returning an even number is $1/4$.</p>
<p>I got this by saying that since these are independent events, with each die returning an even number being $1/2$, then the probability of both being even is $1/2 \times 1/2 = 1/4$.</p>
<p>Further, there are 36 outcomes, and all possible even number combinations are $(2, 2), (2, 4), (2, 6), (4, 4), (4, 6), (6, 6), (6, 4), (6, 2), (4, 2)$. There are nine of them and $9/36 = 1/4$</p>
<p>What I can't seem to get over is that there are an equal number of odd and even numbers, so why is the answer not $1/2$?</p>
<p>I know that it's not one half, but I can't explain why. </p>
| avs | 353,141 | <p>Instead of thinking of pairs of outcomes, incorporate the two dice into one, more complicated probability space. It is true that there are equally many odd and even scores <em>for a single die</em>. But, with two dice scores being viewed as one outcome to a new, more complicated experiment, we have an exhaustive and mutually exclusive list of four events: (die 1: odd, die 2: odd), (even, odd), (odd, even), and (even, even). These events are equiprobable, so each has probability $1/4$. Notice that the probability of having an even score on <em>at least one die</em> is not less (actually, greater) than $1/2$.</p>
|
3,071,076 | <p>Let <span class="math-container">$ABC$</span> be an acute angled triangle whose inscribed circle touches <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> at <span class="math-container">$D$</span> and <span class="math-container">$E$</span> respectively. Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be the points of intersection of the bisectors of the angles <span class="math-container">$ACB$</span> and <span class="math-container">$ABC$</span> with the line <span class="math-container">$DE$</span> and let <span class="math-container">$Z$</span> be the midpoint of the side <span class="math-container">$BC$</span>. Prove that the triangle <span class="math-container">$XYZ$</span> is equilateral if and only if <span class="math-container">$\angle A = 60^o$</span>.</p>
<p>I dont know why, but it seems to me that <span class="math-container">$\Delta ADE$</span> and <span class="math-container">$\Delta XYZ$</span> are similar (or maybe congruent :\ ). Is it true? Or no? Please help.</p>
| Oldboy | 401,277 | <p><a href="https://i.stack.imgur.com/gCq7u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCq7u.png" alt="enter image description here"></a></p>
<p>Let us first show that <span class="math-container">$\angle BXC=\angle BYC=90^\circ$</span>.</p>
<p>Notice that triangle <span class="math-container">$ADE$</span> is isosceles so <span class="math-container">$\angle AED=90^\circ-\alpha/2$</span>. It means that <span class="math-container">$\angle DEC=\angle XEC=90^\circ+\alpha/2$</span>. We also know that <span class="math-container">$\angle ECX=\gamma/2$</span>. From triangle <span class="math-container">$XEC$</span>:</p>
<p><span class="math-container">$$\angle CXE=180^\circ-\angle XEC-\angle ECX=180^\circ-(90^\circ+\alpha/2)-\gamma/2=\beta/2$$</span></p>
<p>It follows immediately that <span class="math-container">$\angle DXI=180^\circ-\beta/2$</span> and <span class="math-container">$\angle DXI+\angle DBI=180^\circ$</span>. Therefore, quadrilateral <span class="math-container">$BIXD$</span> is concyclic. Because of that:</p>
<p><span class="math-container">$$\angle BXC=\angle BXI=\angle BDI=90^\circ\tag{1}$$</span></p>
<p>In a similar way we can show that:</p>
<p><span class="math-container">$$\angle BYC=90^\circ\tag{2}$$</span></p>
<p>Because of (1) and (2) points <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> must be on a circle with diameter BC with center <span class="math-container">$Z$</span>. So triangle <span class="math-container">$XYZ$</span> is isosceles with <span class="math-container">$ZX=ZY$</span>.</p>
<p>Now:</p>
<p><span class="math-container">$$\angle XZY=2\angle XBY=2(\angle XBC-\angle IBC)=2(90^\circ-\gamma/2-\beta/2)=\alpha$$</span></p>
<p>So triangle <span class="math-container">$XYZ$</span> is equilateral if and only if <span class="math-container">$\alpha=60^\circ$</span>.</p>
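<p>A numerical check of the whole configuration for <span class="math-container">$\alpha=60^\circ$</span> (a sketch using standard incircle facts: the tangent length from <span class="math-container">$A$</span> is <span class="math-container">$s-a$</span>, and the incenter has barycentric weights <span class="math-container">$a,b,c$</span>):</p>

```python
import math

def intersect(p1, p2, p3, p4):
    # Intersection of line p1p2 with line p3p4
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# An acute triangle with angle A = 60 degrees
A = (0.0, 0.0)
B = (4.0, 0.0)
C = (3 * math.cos(math.pi / 3), 3 * math.sin(math.pi / 3))

a, b, c = dist(B, C), dist(C, A), dist(A, B)
s = (a + b + c) / 2
I = ((a * A[0] + b * B[0] + c * C[0]) / (2 * s),
     (a * A[1] + b * B[1] + c * C[1]) / (2 * s))   # incenter

# Incircle touch points on AB and AC (tangent length from A is s - a)
D = (A[0] + (s - a) * (B[0] - A[0]) / c, A[1] + (s - a) * (B[1] - A[1]) / c)
E = (A[0] + (s - a) * (C[0] - A[0]) / b, A[1] + (s - a) * (C[1] - A[1]) / b)

X = intersect(C, I, D, E)            # bisector from C meets line DE
Y = intersect(B, I, D, E)            # bisector from B meets line DE
Z = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

sides = sorted([dist(X, Y), dist(Y, Z), dist(Z, X)])
print(sides)  # all three equal: XYZ is equilateral, with side a/2
```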
|
3,066,530 | <p><span class="math-container">$$\lim_{x\to 0} \frac {(\sin(2x)-2\sin(x))^4}{(3+\cos(2x)-4\cos(x))^3}$$</span> </p>
<p>without L'Hôpital.</p>
<p>I've tried using equivalences with <span class="math-container">${(\sin(2x)-2\sin(x))^4}$</span> and arrived at <span class="math-container">$-x^{12}$</span> but I don't know how to handle <span class="math-container">${(3+\cos(2x)-4\cos(x))^3}$</span>. Using <span class="math-container">$\cos(2x)=\cos^2(x)-\sin^2(x)$</span> hasn't helped, so any hint?</p>
| Mike_ | 632,850 | <p>One can evaluate all such lims using series expansions.
<span class="math-container">$$
\sin(x) = x - \frac{x^3}{6} + o(x^4)\\
\cos(x) = 1 - \frac{x^2}{2} + o(x^4)
$$</span>
<a href="https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions</a></p>
<p>Just substitute the functions with their expansions. Then find the limit of the resulting expression, which behaves like a ratio of polynomials. Add more <span class="math-container">$x^n$</span> terms if the first two are not enough.</p>
<p>L'Hôpital's rule is another form of this more general approach.</p>
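<p>A quick numerical check of this approach: the expansions give numerator <span class="math-container">$\sim x^{12}$</span> and denominator <span class="math-container">$\sim (x^4/2)^3 = x^{12}/8$</span>, so the limit should be <span class="math-container">$8$</span>.</p>

```python
import math

def f(x):
    num = (math.sin(2 * x) - 2 * math.sin(x)) ** 4
    den = (3 + math.cos(2 * x) - 4 * math.cos(x)) ** 3
    return num / den

print(f(0.01))  # close to 8
```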
|
3,587,387 | <p>Assume that you draw a card from a standard deck. Find the probability of drawing a heart given that you drew a face card (J, Q, or K). Using probability formulas, how do I figure this out?
What exactly does "given" mean in these equations?</p>
| Noah Schweber | 28,111 | <p>It looks like there's some confusion over the basic definitions here.</p>
<hr>
<blockquote>
<p>I am able to show that there are an infinite number of non-prime natural numbers, but I don't know how to show that the entire set of non-prime natural numbers is infinite.</p>
</blockquote>
<p>These are the same thing: "has infinitely many elements" and "is infinite" are just two ways of phrasing the same property.</p>
<hr>
<blockquote>
<p>Is it as easy as saying that it is a subset of the naturals and since the set of naturals is infinite, this set must also be infinite?</p>
</blockquote>
<p>This makes me think that you might be conflating "is infinite" with "is <strong>countable</strong>." It's certainly not true that every subset of the naturals is infinite, but it is true that every subset of the naturals is countable <em>(or finite - some texts define "countable" as "in bijection with <span class="math-container">$\mathbb{N}$</span>" as opposed to "in bijection with some subset of <span class="math-container">$\mathbb{N}$</span>")</em>.</p>
<p>So if the problem you're trying to solve is "Show that the set of composite numbers is <strong>countable and infinite</strong>, then you've more-or-less outlined it:</p>
<ul>
<li><p>It's a subset of <span class="math-container">$\mathbb{N}$</span>, so trivially countable (in the broader sense of allowing finiteness).</p></li>
<li><p>The argument you give in the OP shows that it's infinite.</p></li>
</ul>
|
3,587,387 | <p>Assume that you draw a card from a standard deck. Find the probability of drawing a heart given that you drew a face card (J, Q, or K). Using probability formulas, how do I figure this out?
What exactly does "given" mean in these equations?</p>
| mathematics2x2life | 79,043 | <p>You have a few issues. First, you say that <span class="math-container">$T$</span> is infinite because it is contained in the set of all composite (non-prime) numbers. But this is the set you wanted to prove was infinite! What you really mean to say is that because there are infinitely many distinct prime numbers, the set <span class="math-container">$T$</span> is infinite. Because <span class="math-container">$T$</span> is contained in the set of composite numbers (every number in <span class="math-container">$T$</span> is divisible by <span class="math-container">$2$</span> and some prime, so the numbers in <span class="math-container">$T$</span> cannot be prime), then it must be that the set of composite numbers is infinite [so in a sense, you have proved what you wanted, modulo phrasing errors]. As for your last sentence, I think you mean how do you prove the set of primes is infinite; for that, <a href="https://math.stackexchange.com/questions/842187/proof-of-infinitely-many-primes-clarification/842192#842192">see this answer</a>. </p>
<p>But you do not need to invoke the infinitude of primes at all! Consider the set <span class="math-container">$T= \{ 2n \colon n \in \mathbb{N}, n>1\}$</span>, i.e. the even numbers bigger than <span class="math-container">$2$</span>. This set is clearly infinite because <span class="math-container">$\mathbb{N}$</span> is. Moreover, every number in <span class="math-container">$T$</span> is divisible by <span class="math-container">$2$</span> and some other integer, so every number in <span class="math-container">$T$</span> is composite. Therefore, <span class="math-container">$T$</span> is a subset of the composite numbers that is infinite, so that this set is infinite! You do not need to even consider this <span class="math-container">$T$</span>; for example, <span class="math-container">$T= \{ 2^n \colon n >1\}$</span>, <span class="math-container">$T= \{ 2 \cdot 3^n \colon n \geq 1\}$</span>, etc. all work as well!</p>
<p>Perhaps what you're worried about is that your (or even my) <span class="math-container">$T$</span> does not contain <em>all</em> composite numbers. But you do not need to find them all, just infinitely many of them, to prove that the set of all of them is infinite. Because the set of composite numbers (other than 1) is the complement of the set of prime numbers in <span class="math-container">$\mathbb{N}$</span>, finding one set is the 'same' as finding the other. But finding all the primes is <em>really</em> hard! So it will be equally hard to find all the composite numbers. Again, luckily, this is not what is required for your proof.</p>
|
2,724,744 | <p>I have a basic math question.</p>
<p>If I have the following inequality:
$$-a-b > -1$$
and I want to flip (or reverse) the sign. Which of the following is correct? And why?</p>
<p>i) $a+b \le 1$<br>
ii) $a+b < 1$</p>
<p>Many thanks! (:</p>
| user | 505,767 | <p>The step is</p>
<p>$$-a-b > -1\iff (-a-b)(-1) \stackrel{reversed}{\color{red}<} (-1)(-1)\iff a+b<1$$</p>
<p>Consider a numerical example:</p>
<p>$$1 > -1\iff 1(-1) < (-1)(-1)\iff -1<1$$</p>
<p>Note also that for $-a-b \ge -1$ the following holds</p>
<p>$$-a-b \ge -1\iff a+b\le1$$</p>
|
2,724,744 | <p>I have a basic math question.</p>
<p>If I have the following inequality:
$$-a-b > -1$$
and I want to flip (or reverse) the sign. Which of the following is correct? And why?</p>
<p>i) $a+b \le 1$<br>
ii) $a+b < 1$</p>
<p>Many thanks! (:</p>
| Rócherz | 451,007 | <p>Start with $-a-b>-1$.</p>
<p>Add $1+a+b$ to both sides to get $1>a+b$.</p>
<p>Which is the same as $a+b<1$. So ii).</p>
<p>That's why multiplying by a negative number reverses the inequality sign.</p>
<p>Just a comment: $a+b<1$ implies $a+b \leq 1$, but $a+b \leq 1$ <strong>does not</strong> imply $a+b<1$, because of how $<$ and $\leq$ are related:
$$a+b \leq 1 \iff a+b<1 \text{ or } a+b=1.\\ a+b<1 \iff a+b\leq 1 \text{ and } a+b \neq 1.$$</p>
|
1,301,509 | <p>I have the following integral, which should result in 1, as shown by the sketch, but in my calculation I get the result 0. What's my mistake?</p>
<p>Sorry the comments are in German and please note that a German 1 often looks like an English 7. Anything in the picture which looks like a 7 to you is in fact a 1.</p>
<p><img src="https://i.stack.imgur.com/zBwwQ.jpg" alt="enter image description here"></p>
| nullUser | 17,459 | <p>The issue is that you changed the bounds incorrectly. You correctly wrote
$$
\int_{z(-1)}^{z(0)}
$$
and then incorrectly changed it to
$$
\int_{0}^{1}
$$
when it should be
$$
\int_{1}^{0}
$$</p>
<p>One final note though. You should not use substitution to deal with a multiplicative constant. Just use linearity:
$$
\int_{a}^{b} c f(x)dx = c \int_{a}^b f(x)dx
$$</p>
<p>The $-1$ in your problem you could just "pull out" of the integral.</p>
|
3,022,921 | <p>If $6$ divides $x$ and $8$ divides $x$, how do you deduce that $24$ divides $x$?</p>
| Henry Lee | 541,220 | <p>what we know is:</p>
<p><span class="math-container">$\frac{x}{6}$</span> is an integer, so <span class="math-container">$\frac{x}{2}$</span> and <span class="math-container">$\frac{x}{3}$</span> are also integers. Also:</p>
<p><span class="math-container">$\frac x8$</span> is an integer, so <span class="math-container">$\frac x4$</span> and <span class="math-container">$\frac x2$</span> are also integers.</p>
<p>But for <span class="math-container">$x$</span> to be divisible by both <span class="math-container">$8$</span> and <span class="math-container">$6$</span>, it must be divisible by <span class="math-container">$1,2,3,4,6,8$</span>. The smallest positive number divisible by both <span class="math-container">$8$</span> and <span class="math-container">$6$</span> is their LCM, <span class="math-container">$24$</span>, and every common multiple of <span class="math-container">$8$</span> and <span class="math-container">$6$</span> is a multiple of their LCM. So <span class="math-container">$x$</span> must be a multiple of <span class="math-container">$24$</span> and is therefore divisible by <span class="math-container">$24$</span>.</p>
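<p>A brute-force spot check of the claim (a quick sketch; <code>math.lcm</code> needs Python 3.9+):</p>

```python
import math

assert math.lcm(6, 8) == 24

# Every x divisible by both 6 and 8 is divisible by 24
for x in range(1, 100_000):
    if x % 6 == 0 and x % 8 == 0:
        assert x % 24 == 0
```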
|
4,609,001 | <p>The following definition of a pmf is on page 51 of Probability and Statistical Inference by Robert V. Hogg et al.</p>
<p>The pmf <span class="math-container">$f(x)$</span> of a discrete random variable X is a function that satisfies the following properties:<br />
(a)<span class="math-container">$f(x)\gt 0, x\in S;$</span><br />
(b)<span class="math-container">$\sum_{x\in S}f(x)=1;$</span><br />
(c)<span class="math-container">$P(X\in A)=\sum_{x\in A}f(x),$</span> where <span class="math-container">$A\subset S$</span>.</p>
<p>My question is about part (c). <span class="math-container">$X$</span> is a random variable, and <span class="math-container">$A$</span> is a subset of the sample space <span class="math-container">$S$</span>, so how can <span class="math-container">$X\in A$</span>? Moreover, how can <span class="math-container">${x\in A}$</span>?</p>
<p>In my opinion, the LHS of part(c) just means that we want to compute the probability of a single event, and the RHS of part(c) just means that we sum up all the related possibilities that we need.</p>
<p>For example, if <span class="math-container">$X$</span> is the sum of two fair six-sided dice, then <span class="math-container">$P(X=3)=P(\{(1,2),(2,1)\})=1/36+1/36=2/36$</span>, where <span class="math-container">$A=\{(1,2),(2,1)\}$</span> is a subset of <span class="math-container">$S$</span>.</p>
<p>Am I right? And could someone explain more about part(c)? It's better if someone can give me an example.</p>
| Pavel Kocourek | 1,134,951 | <p>You want to show that for any fixed <span class="math-container">$x_1<x<x_2$</span> in <span class="math-container">$I$</span> the following statements are equivalent:</p>
<p>(a) <span class="math-container">$\quad$</span> <span class="math-container">$(x,f(x))$</span> is below the line joining <span class="math-container">$x_1,f(x_1)$</span> and <span class="math-container">$x_2,f(x_2)$</span>;</p>
<p>(b) <span class="math-container">$\quad$</span> the slope of the chord joining <span class="math-container">$(x_1, f(x_1))$</span> and
<span class="math-container">$(x, f(x))$</span> is less than or equal to the slope of the chord joining <span class="math-container">$(x, f(x))$</span>
and <span class="math-container">$(x_2, f(x_2))$</span>.</p>
<p>Let us formalise the geometric intuition: Define <span class="math-container">$\lambda \in (0,1)$</span> to be such that <span class="math-container">$x = (1-\lambda) x_1 + \lambda x_2$</span>, and let <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span> be the slopes of the chords from condition (b):
<span class="math-container">$$
a_1 = \frac{f(x)-f(x_1)}{x-x_1},
\quad \text{and} \quad
a_2 = \frac{f(x)-f(x_2)}{x-x_2}.
$$</span></p>
<p>Then conditions (a) and (b) can be formulated as:</p>
<p>(a) <span class="math-container">$\quad$</span> <span class="math-container">$f(x) \leq (1-\lambda) f(x_1) + \lambda f(x_2) \tag{*};$</span></p>
<p>(b) <span class="math-container">$\quad$</span> <span class="math-container">$a_1\leq a_2$</span>.</p>
<p>Subtracting <span class="math-container">$f(x)$</span> from both sides of the inequality (*) we obtain an equivalent inequality
<span class="math-container">$$
0 \leq (1-\lambda) [f(x_1)-f(x)] + \lambda [f(x_2)-f(x)],
$$</span>
which, using the definitions of <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span>, can be rewritten as
<span class="math-container">$$
0 \leq (1-\lambda) a_1(x_1-x) + \lambda a_2 (x_2-x). \tag{**}
$$</span>
From the definition of <span class="math-container">$\lambda$</span> we have
<span class="math-container">$$
0 = (1-\lambda) (x_1-x) + \lambda (x_2-x).
$$</span>
Multiplying this equation by <span class="math-container">$a_1$</span> and subtracting it from (**), we obtain the following equivalent inequality
<span class="math-container">$$
0 \leq \lambda (a_2-a_1) (x_2-x).
$$</span>
Since <span class="math-container">$\lambda>0$</span> and <span class="math-container">$x_2>x$</span>, the obtained inequality is equivalent to <span class="math-container">$a_2 \geq a_1$</span>.</p>
<p>We conclude that conditions (a) and (b) are equivalent for any <span class="math-container">$x_1<x<x_2$</span> in <span class="math-container">$I$</span>.</p>
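<p>As a quick numeric sanity check of this equivalence (a hedged illustration, not part of the proof), one can compare the two conditions for a convex function such as $f(x)=x^2$; the function, test points, and tolerance below are illustrative choices:</p>

```python
# Numeric sanity check of the equivalence of (a) and (b) for f(x) = x**2.
# The function f, the test points, and the tolerance are illustrative choices.

def chord_slope(f, p, q):
    """Slope of the chord joining (p, f(p)) and (q, f(q))."""
    return (f(q) - f(p)) / (q - p)

def condition_a(f, x1, x, x2):
    """(x, f(x)) lies on or below the chord from x1 to x2."""
    lam = (x - x1) / (x2 - x1)
    return f(x) <= (1 - lam) * f(x1) + lam * f(x2) + 1e-12

def condition_b(f, x1, x, x2):
    """Left chord slope <= right chord slope."""
    return chord_slope(f, x1, x) <= chord_slope(f, x, x2) + 1e-12

f = lambda t: t * t  # convex, so both conditions should hold
for x1, x, x2 in [(-2.0, -0.5, 1.0), (0.0, 0.3, 2.0), (-1.0, 0.0, 1.0)]:
    assert condition_a(f, x1, x, x2) == condition_b(f, x1, x, x2)
```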
|
1,757,092 | <p>I want to find an explicit formula for $\sum_{n=0}^\infty n^3x^n$ for $|x|<1$. Is the idea to first show that this series is convergent and then find the number it converges to? I tried to use the ratio test, but it didn't work. Any suggestions? Thanks!</p>
| pancini | 252,495 | <p>By the root test
$$\limsup_{n\to\infty} \sqrt[n]{|n^3x^n|}=|x|\limsup_{n\to\infty}n^{3/n}=|x|$$
so we have convergence for $|x|<1$.</p>
<p>Now note that
$$\sum_{n=0}^\infty x^n=\frac{1}{1-x}$$
so
$$\sum_{n=0}^\infty nx^{n-1}=\frac{1}{(1-x)^2}$$
and
$$\sum_{n=0}^\infty nx^{n}=\frac{x}{(1-x)^2}.$$
You should be able to continue this process. </p>
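<p>Continuing the process two more steps leads, after some algebra, to the closed form $\sum_{n=0}^\infty n^3x^n = \dfrac{x(x^2+4x+1)}{(1-x)^4}$ for $|x|<1$. A quick numeric cross-check against a truncated partial sum (a hedged sketch; the sample point and truncation level are arbitrary):</p>

```python
# Compare the closed form x(x^2 + 4x + 1)/(1 - x)^4 with a truncated
# partial sum of n^3 x^n at a sample point inside the radius of convergence.

def closed_form(x):
    return x * (x * x + 4 * x + 1) / (1 - x) ** 4

def partial_sum(x, terms=200):
    return sum(n ** 3 * x ** n for n in range(terms))

x = 0.5
assert abs(closed_form(x) - partial_sum(x)) < 1e-9
assert abs(closed_form(0.5) - 26.0) < 1e-9  # 0.5 * 3.25 / 0.0625 = 26
```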
|
4,022,415 | <p>My attempt:</p>
<p>The triple <span class="math-container">$\lbrace6,19,30\rbrace$</span>, any two of which sum to a square, is sufficient to show that two sets are impossible.</p>
<p>Using a computer program with a brute force method I found that separating the numbers <span class="math-container">$1$</span> through <span class="math-container">$85$</span> into three sets is possible as shown below:</p>
<p><span class="math-container">$\lbrace1,4,6,9,13,14,17,18,20,26,28,33,34,37,41,42,49,54,56,57,62,69,70,73,76,78,81,85\rbrace$</span><br />
<span class="math-container">$\lbrace2,5,8,10,12,21,22,25,29,30,32,38,40,45,46,48,50,53,58,61,64,65,66,72,74,77,82,84\rbrace$</span><br />
<span class="math-container">$\lbrace3,7,11,15,16,19,23,24,27,31,35,36,39,43,44,47,51,52,55,59,60,63,67,68,71,75,79,80,83\rbrace$</span></p>
<p>but <span class="math-container">$1$</span> through <span class="math-container">$86$</span> is impossible.</p>
<p><strong>Edit:</strong></p>
<p>WhatsUp in the answer below provides
the set of four numbers: <span class="math-container">$\lbrace1058, 6338, 10823, 13826\rbrace$</span> with an explanation of how he got them. This is an alternative, non-brute-force way of showing that separating the natural numbers into three sets is impossible.
In a comment of <a href="https://math.stackexchange.com/questions/1576986/pairwise-sums-are-perfect-squares">this</a> question a set of five numbers <span class="math-container">$\lbrace 7442, 28658,148583,177458,763442\rbrace$</span>
is provided by the user Bob Kadylo. This shows that four sets are impossible.</p>
<p><strong>Edit 2:</strong></p>
<p>In a previous version of my post I proved that the number of sets needed for the natural numbers <span class="math-container">$1$</span> to <span class="math-container">$N$</span>, so that no pair of numbers in the same set sums to a square, is no more than <span class="math-container">$\lfloor\sqrt{2N-1}\rfloor$</span>. I realized that I can do significantly better than this. In order to explain the method with the smaller upper bound I have to transform the problem into graph theory. An equivalent formulation has <span class="math-container">$N$</span> vertices labeled from <span class="math-container">$1$</span> to <span class="math-container">$N$</span>, with a pair of vertices connected iff their labels add up to a square. Then our goal is to color the vertices using the least number of colors so that no two connected vertices share a color. The first step in the greedy coloring algorithm is to make a numbered list of colors (ex. RED-1, BLUE-2, GREEN-3, YELLOW-4, etc.), adding more colors to the list if the process ever requires them. The next step is to pick an uncolored vertex and use the lowest color number that doesn't already appear among the vertices connected to it. Repeat this step until all vertices are colored. In the worst case this uses one more color than the largest vertex degree: if every vertex connected to the maximum-degree vertex has a different color, then the maximum-degree vertex needs a color different from all of those. The vertex with the greatest degree (or tied for the greatest) is <span class="math-container">$3$</span>, with degree <span class="math-container">$\lfloor\sqrt{N+3}\rfloor-1$</span>. Therefore the number of sets (or colors) required is no more than <span class="math-container">$\lfloor\sqrt{N+3}\rfloor$</span>.
We can do slightly better by using <a href="https://en.wikipedia.org/wiki/Brooks%27_theorem" rel="nofollow noreferrer">Brooks' theorem</a>, which states that if a graph is simple, connected, not complete, and not an odd cycle, then the number of colors needed is at most <strong>equal</strong> to the maximum vertex degree. This lowers the upper bound to <span class="math-container">$\lfloor\sqrt{N+3}\rfloor-1$</span> sets, which is the significant improvement over <span class="math-container">$\lfloor\sqrt{2N-1}\rfloor$</span> mentioned at the beginning.</p>
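<p>The greedy coloring described above is easy to sketch in code. The following is a hedged illustration (not the original brute-force program): it colors the vertices $1,\dots,N$ in increasing order and then checks that no two same-colored numbers sum to a square and that the number of colors stays within the $\lfloor\sqrt{N+3}\rfloor$ bound.</p>

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def greedy_square_free_coloring(N):
    """Assign each of 1..N the smallest color not used by any earlier
    number that sums with it to a perfect square."""
    color = {}
    for v in range(1, N + 1):
        forbidden = {color[u] for u in range(1, v) if is_square(u + v)}
        c = 0
        while c in forbidden:
            c += 1
        color[v] = c
    return color

N = 85
coloring = greedy_square_free_coloring(N)
# No two numbers of the same color may sum to a square.
for u in range(1, N + 1):
    for v in range(u + 1, N + 1):
        if coloring[u] == coloring[v]:
            assert not is_square(u + v)
# Number of colors stays within the maximum-degree-plus-one bound.
assert len(set(coloring.values())) <= isqrt(N + 3)
```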
<p><strong>End edits</strong></p>
<p>For each natural number <span class="math-container">$X$</span> there are <span class="math-container">$\lfloor\sqrt{2X-1}\rfloor-\lfloor\sqrt{X}\rfloor$</span> numbers less than <span class="math-container">$X$</span> that sum with <span class="math-container">$X$</span> to a square. Since the expression <span class="math-container">$\lfloor\sqrt{2X-1}\rfloor-\lfloor\sqrt{X}\rfloor$</span> increases as <span class="math-container">$X$</span> gets larger, my guess is that separating the natural numbers into a finite number of sets, so that no pair of numbers in the same set sums to a square, is impossible.</p>
| WhatsUp | 256,378 | <p>Without bruteforcing, I find the list <span class="math-container">$\{1058, 6338, 10823, 13826\}$</span> which shows that three sets is not enough.</p>
<p>My approach:</p>
<p>I start from the equations
<span class="math-container">\begin{eqnarray}
a + b &=& u^2\\
c + d &=& v^2\\
a + c &=& x^2\\
b + d &=& y^2\\
a + d &=& m^2\\
b + c &=& n^2.
\end{eqnarray}</span>
The coefficient matrix of this system has rank <span class="math-container">$4$</span>, which means that there are just <span class="math-container">$6 - 4 = 2$</span> linearly independent relations among <span class="math-container">$u^2, \dots, n^2$</span>. They are:
<span class="math-container">$$u^2 + v^2 = x^2 + y^2 = m^2 + n^2.$$</span></p>
<p>Also, if we assume <span class="math-container">$a < b < c < d$</span>, then we have <span class="math-container">$u^2 < x^2 < m^2, n^2 < y^2 < v^2$</span>.</p>
<p>In order to get positive solutions <span class="math-container">$a, b, c, d$</span>, it suffices to have <span class="math-container">$u^2 + x^2 > n^2$</span>.</p>
<p>Now I simply take the number <span class="math-container">$N = 5 \times 13 \times 17 \times 29$</span>, which can be written as the sum of two squares in many ways. Somewhere in the middle, I take out these three:
<span class="math-container">$$N = 86^2 + 157^2 = 109^2 + 142^2 = 122^2 + 131^2.$$</span> These are my candidates of <span class="math-container">$u^2, \dots, n^2$</span>.</p>
<p>It only remains to solve <span class="math-container">$a, b, c, d$</span> back from the original equations, which gives the list in the very beginning.</p>
<p>Luckily, the solutions are integers. But even if we got non-integral solutions, we could always multiply all of them by some square to clear the denominators.</p>
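<p>The back-substitution can be checked mechanically. Below is a hedged sketch in Python (the assignment of the three representations to $u,v,x,y,m,n$ is the one consistent with the ordering $u^2 < x^2 < m^2, n^2 < y^2 < v^2$):</p>

```python
from itertools import combinations
from math import isqrt

def is_square(m):
    return isqrt(m) ** 2 == m

# Three representations of N = 5 * 13 * 17 * 29 as a sum of two squares,
# assigned so that u^2 < x^2 < m^2 and n^2 < y^2 < v^2.
u, v = 86, 157
x, y = 109, 142
m, n = 122, 131
N = 5 * 13 * 17 * 29
assert u**2 + v**2 == x**2 + y**2 == m**2 + n**2 == N

# Solve the original linear system back for a, b, c, d:
# (a+b) + (a+c) - (b+c) = 2a gives a, then the rest follow.
a = (u**2 + x**2 - n**2) // 2
b = u**2 - a
c = x**2 - a
d = m**2 - a
assert sorted([a, b, c, d]) == [1058, 6338, 10823, 13826]

# All six pairwise sums are perfect squares, so the four numbers must
# lie in four different sets: three sets are not enough.
for p, q in combinations([a, b, c, d], 2):
    assert is_square(p + q)
```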
<hr />
<p>I also tried to extend this method to five numbers. It became a bit messy though, so I gave up midway.</p>
|
224,019 | <p>I am trying to compute the volume of intersection of the following two regions:</p>
<pre><code>a = 0.857597;
b = 1.653926;
hexagon = Polygon[{{0, (b - a)/2, 1/2}, {(b - a)/2, 0, 1/2},
{1/2, 0, (b - 1)/(2 a)}, {1/2, (b - 1)/2, 0}, {(b - 1)/2, 1/2, 0},
{0, 1/2, (b - 1)/(2 a)}}];
octahedron = ImplicitRegion[Abs[x] + Abs[y] + a Abs[z] <= b/2, {x, y, z}];
region2 = ImplicitRegion[1 >= RegionDistance[hexagon, {x, y, z}], {x, y, z}];
</code></pre>
<p><code>NIntegrate</code> directly doesn't work:</p>
<pre><code>NIntegrate[1, {x, y, z} ∈ RegionIntersection[octahedron, region2]]
</code></pre>
<p>It results in a crash after using up the memory (32GB).</p>
<p>I tried to use <code>DiscretizeRegion</code> first:</p>
<pre><code>octd = DiscretizeRegion[octahedron, {{-1, 1}, {-1, 1}, {-1, 1}}];
regd = DiscretizeRegion[region2, {{-1, 2}, {-1, 2}, {-1, 2}}]; (* This takes 40 minutes *)
RegionIntersection[octd, regd]
</code></pre>
<p>This returns an error: “BoundaryMeshRegion: The boundary surface is not closed because the edges <<2>> only come from a single face.”</p>
<p>I also tried to discretize the regions using <code>NDSolve`FEM`ToElementMesh</code>.</p>
<pre><code>Needs["NDSolve`FEM`"];
ToElementMesh[region2, {{-1, 2}, {-1, 2}, {-1, 2}}]
</code></pre>
<p>This crashes without using significant memory. Computing finite element mesh on the first region does not crash, but intersecting it with the second region results in a crash without significant memory usage.</p>
<pre><code>octf = ToElementMesh[octahedron, {{-1, 1}, {-1, 1}, {-1, 1}}];
RegionIntersection[octf, regd]
</code></pre>
<p>I have reported the issues with <code>ToElementMesh</code> to Wolfram Support.</p>
<p>Is there any workaround?</p>
<pre><code>$Version (* 12.1.0 for Mac OS X x86 (64-bit) (March 18, 2020) *)
</code></pre>
| user21 | 18,437 | <p>Here is an approach based on creating exact regions:</p>
<pre><code>a = Rationalize[0.857597, 10^-16];
b = Rationalize[1.653926, 10^-16];
hexagon =
Polygon[{{0, (b - a)/2, 1/2}, {(b - a)/2, 0, 1/2}, {1/2,
0, (b - 1)/(2 a)}, {1/2, (b - 1)/2, 0}, {(b - 1)/2, 1/2, 0}, {0,
1/2, (b - 1)/(2 a)}}] // Simplify;
octahedron =
ImplicitRegion[Abs[x] + Abs[y] + a Abs[z] <= b/2, {x, y, z}];
rd = RegionDistance[hexagon, {x, y, z}];
region2 = ImplicitRegion[1 >= rd, {x, y, z}];
ri = RegionIntersection[octahedron, region2];
</code></pre>
<p>This will run for a few seconds but will return an exact region that we then can mesh.</p>
<pre><code>Needs["NDSolve`FEM`"]
bounds = {{-1, 1}, {-1, 1}, {-1, 1}};
mesh = ToElementMesh[ri, bounds,
"BoundaryMeshGenerator" -> {"RegionPlot",
"SamplePoints" -> {15, 15, 31}}];
mesh["Wireframe"["MeshElementStyle" -> FaceForm[Green]]]
</code></pre>
<p><a href="https://i.stack.imgur.com/z1Hgj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z1Hgj.png" alt="enter image description here" /></a></p>
<pre><code>NIntegrate[1, {x, y, z} \[Element] mesh]
0.871456
</code></pre>
<p>I have also tried to make use of the <a href="https://reference.wolfram.com/language/OpenCascadeLink/tutorial/UsingOpenCascadeLink.html" rel="nofollow noreferrer">OpenCascadeLink</a> based on the approach given by @flinty.</p>
<pre><code>hexcenter = RegionCentroid[hexagon];
hexnormal =
Normalize[
Cross[hexagon[[1, 1]] - hexcenter, hexagon[[1, 2]] - hexcenter]];
hexradius = Norm[hexcenter - hexagon[[1, 1]]];
cylinderhack =
Cylinder[{hexcenter - hexnormal, hexcenter + hexnormal},
hexradius];
hexhack =
Flatten[{MeshPrimitives[hexagon, 1] /. Line -> Cylinder,
MeshPrimitives[hexagon, 0] /. Point -> Ball, cylinderhack}];
</code></pre>
<p>Load the link and convert the primitives into open cascade shapes:</p>
<pre><code>Needs["OpenCascadeLink`"]
shapes = OpenCascadeShape /@ hexhack;
union = OpenCascadeShapeUnion[shapes];
oocOcta = OpenCascadeShape[ToBoundaryMesh[octahedron]];
res = OpenCascadeShapeIntersection[union, oocOcta];
</code></pre>
<p>If you have a better representation of the octahedron, then we'd not need to convert to a boundary element mesh that is then converted to open cascade.</p>
<p>Get the boundary element mesh:</p>
<pre><code>bmesh2 = OpenCascadeShapeSurfaceMeshToBoundaryMesh[res];
</code></pre>
<p>However, when we look at the <code>MeshRegion</code> version of the boundary element mesh we will see that there is a very slight elevation at the intersection - it's very hard to see at the top left corner:</p>
<pre><code>MeshRegion[bmesh2]
</code></pre>
<p><a href="https://i.stack.imgur.com/rJAbB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rJAbB.png" alt="enter image description here" /></a></p>
<p>And that can not be meshed with <code>ToElementMesh</code> - which is not ideal but understandable.</p>
<hr />
<p><strong>Edit by @YizhenChen:</strong></p>
<p>The following representation of the octahedron gives more accurate answers:</p>
<pre><code>octahedron = ConvexHullMesh[{{b/2, 0, 0}, {-b/2, 0, 0}, {0, b/2, 0},
{0, -b/2, 0}, {0, 0, b/(2 a)}, {0, 0, -b/(2 a)}}];
</code></pre>
<p>The <code>cylinderhack</code> given by @flinty is also incorrect, because it results in the "very slight elevation" seen in the figure above. The correct one is:</p>
<pre><code>cylinderhack =
Apply[Prism[{hexagon[[1, #1]] + hexnormal,
hexagon[[1, #2]] + hexnormal, hexagon[[1, #3]] + hexnormal,
hexagon[[1, #1]] - hexnormal, hexagon[[1, #2]] - hexnormal,
hexagon[[1, #3]] - hexnormal}] &, #] & /@ {{1, 2, 3},
{1, 3, 4}, {1, 4, 5}, {1, 5, 6}};
</code></pre>
|
1,435,590 | <p>Suppose I have a statement like this:</p>
<p>(~p ^ ~q) V (p ^ q)</p>
<p>If I understand this correctly, I can apply the law to both sides separately while leaving the OR in the middle intact. Leaving this:</p>
<p>(p V q) V (~p V ~q)</p>
<p>Is this valid? (as opposed to taking the negation of the entire statement)</p>
| Brian Tung | 224,454 | <p>Depending on the level of familiarity that your assignments are assuming, you may be able to get some traction by trying some values. For instance, assuming you know the quadratic formula, you know that if you can find one factor by trial-and-error, you can use the quadratic formula to factor the remaining two (if they are real).</p>
<p>Furthermore, homework assignments often use integer roots. You might expect, therefore, that some possible roots would be $+1$, $-1$, $+3$, or $-3$ (since $3$ is divisible by all of those). Indeed, if you try (say) $x = 3$, you get $3^3-3^2-5\cdot3-3 = 27-9-15-3 = 0$. Because $x^3-x^2-5x-3 = 0$ when $x = 3$, you know that $x-3$ must be a factor of $x^3-x^2-5x-3$. Use polynomial division to obtain the quotient when dividing $x^3-x^2-5x-3$ by $x-3$ to obtain a second-degree polynomial (a quadratic expression, in other words). If you know how to factor those, then you are home free. </p>
|
1,435,590 | <p>Suppose I have a statement like this:</p>
<p>(~p ^ ~q) V (p ^ q)</p>
<p>If I understand this correctly, I can apply the law to both sides separately while leaving the OR in the middle intact. Leaving this:</p>
<p>(p V q) V (~p V ~q)</p>
<p>Is this valid? (as opposed to taking the negation of the entire statement)</p>
| E.H.E | 187,799 | <p>$$x^3-x^2-5x-3=x^3+2x^2-3x^2-6x+x-3$$
$$x^3+2x^2+x-3x^2-6x-3=0$$
$$x(x^2+2x+1)-3(x^2+2x+1)=0$$
$$(x^2+2x+1)(x-3)=0$$</p>
|
1,641,922 | <p>I've come across this exercise:<br>
Suppose that $D=\{z:|z| \le 1\}\subset \mathbb C$ and $$f:D\rightarrow\mathbb C$$
suppose that for every $z\in D$ such that $|z|<1$ $$|f(z)-\bar z|<0.9$$ where $\bar z$ is the complex conjugate of $z$. Prove that $f$ cannot be analytic in $D$.<br>
I started with assuming that $f$ is indeed analytic in order to get a contradiction. My first attempt was to try and get some similarities between $f$ and $g(z)=\bar z$ since they are relatively close to each other, and $g$ is not analytic. This idea quickly failed. Also I tried to integrate $f$ around $D$, or see if it possible that $f$ satisfies the Cauchy–Riemann equations, which also did not get me any further.</p>
| mvw | 86,776 | <blockquote>
<p>How to solve such questions?</p>
</blockquote>
<p>The approach is to model the given information as some variables and their relationship as equations or inequalities and then try one of the mathematical methods to find a solution, which is then translated back into the terms of the original problem.</p>
<p>Here you have the age of two persons, which one might abstract as variables $C$ and $J$.</p>
<p>Now about the given information:</p>
<blockquote>
<p>Catherine is now twice as old as Jason</p>
</blockquote>
<p>This can be expressed as
$$
C = 2 J \quad (1)
$$
if $C$ is Catherine's present age, $J$ is Jason's present age.</p>
<blockquote>
<p>$6$ years ago she was $5$ times as old as he was.</p>
</blockquote>
<p>Six years ago is $C - 6$ and Jason was $J - 6$ years old.
The resulting equation is:
$$
C - 6 = 5 \, (J - 6) \quad (2)
$$
If you had linear algebra you might recognize this as an inhomogeneous system of two linear equations in the unknowns $C$ and $J$.
\begin{align}
C - 2 J &= 0 \quad (3) \\
C - 5 J &= -24 \quad (4)
\end{align}
There are systematic ways to solve this.</p>
<p><strong>Solution using algebraic equivalence transformations:</strong></p>
<p>One way is to first solve both equations for the same variable; this means using equivalence transformations to bring the equations into the form
\begin{align}
X &= F_1(Y) \\
X &= F_2(Y)
\end{align}
then equate,
$$
F_1(Y) = F_2(Y)
$$
and solve this single equation for the remaining variable (if the system is solvable). That value can then be used with either of the original equations to recover the other variable:
$$
X = F_1(Y)
$$
In this case we have $X = C$, $Y = J$ and:
\begin{align}
C &= 2 J\\
C &= 5 J -24
\end{align}
Equating gives:
$$
2J = 5 J - 24
$$
or
$$
24 = 3 J
$$
which implies
$$
J = 24/3 = 8
$$
Using one of the original equations featuring $C$ we get
$$
C = 2 J = 2 \cdot 8 = 16
$$</p>
<p><strong>Graphical Solution:</strong></p>
<p>As these are few variables you can also attempt a graphical solution, meaning: you can try to read the solution (if it exists) from the graph.</p>
<p><a href="https://i.stack.imgur.com/EOfI5m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EOfI5m.png" alt="graohical solution"></a>
(<a href="https://i.stack.imgur.com/EOfI5.png" rel="nofollow noreferrer">Large Version</a>)</p>
<p>The $x$-coordinate is Catherine's current age $C$, the $y$-coordinate Jason's current age $J$. </p>
<p>The green line $a$ shows all points $(x, y)$ which satisfy equation $(3)$,
the blue line $b$ shows all points $(x, y)$ which satisfy equation $(4)$.</p>
<p>If both lines have an intersection, then the points of the intersection set satisfy both equations. They would be solutions.
Here the intersection is marked by a big red dot named $P$.</p>
<p><strong>Solution via Methods of Linear Algebra:</strong></p>
<p>More elegant is the formulation using matrices and vectors:
$$
A x = b
$$
where
$$
A =
\begin{pmatrix}
1 & -2 \\
1 & -5
\end{pmatrix}
\\
x =
\begin{pmatrix}
C \\
J
\end{pmatrix}
\quad
b =
\begin{pmatrix}
0 \\
-24
\end{pmatrix}
$$
For the number of possible solutions of such a system -- no solution, one solution, infinite many solutions -- the determinant of the matrix $A$ is important. Here it is
$$
\det A = -3 \ne 0
$$
which means, the system has one solution, the matrix $A$ has an inverse matrix, which can be used to get a solution:
$$
x = A^{-1} b
$$</p>
<p>You can solve this by hand, using Gaussian elimination,
$$
[A \mid b] =
\left[
\begin{array}{rr|r}
1 & -2 & 0 \\
1 & -5 & -24
\end{array}
\right]
\to
\left[
\begin{array}{rr|r}
1 & -2 & 0 \\
0 & -3 & -24
\end{array}
\right]
\to
\left[
\begin{array}{rr|r}
1 & -2 & 0 \\
0 & 1 & 8
\end{array}
\right]
\to
\left[
\begin{array}{rr|r}
1 & 0 & 16 \\
0 & 1 & 8
\end{array}
\right]
$$
(this calculation is similar to fellow user menag's answer).
or using an explicit formula for the inverse of $2 \times 2$ matrices
$$
A =
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
\\
\det A = a d - b c = 1 \cdot (-5) - (-2) \cdot 1 = -3
\\
A^{-1} = \frac{1}{\det A}
\begin{pmatrix}
d & -b \\
-c & a
\end{pmatrix}
=
-\frac{1}{3}
\begin{pmatrix}
-5 & 2 \\
-1 & 1
\end{pmatrix}
$$
which leads to
$$
x = A^{-1} b
=
-\frac{1}{3}
\begin{pmatrix}
-5 & 2 \\
-1 & 1
\end{pmatrix}
\begin{pmatrix}
0 \\
-24
\end{pmatrix}
=
-\frac{1}{3}
\begin{pmatrix}
-48 \\
-24
\end{pmatrix}
=
\begin{pmatrix}
16 \\
8
\end{pmatrix}
$$
or some CAS (computer algebra system) like Octave.</p>
<p><strong>Using a Computer Algebra System:</strong></p>
<pre><code>octave> A = [1,1;-2,-5]'
A =
1 -2
1 -5
octave> det(A)
ans = -3
octave> inv(A)
ans =
1.66667 -0.66667
0.33333 -0.33333
octave> b = [0, -24]'
b =
0
-24
octave> x = inv(A)* b
x =
16
8
</code></pre>
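<p>For readers without Octave, the same computation can be sketched in plain Python using exact rational arithmetic (a hedged translation of the steps above, not part of the original answer):</p>

```python
from fractions import Fraction

# The system A x = b with A = [[1, -2], [1, -5]], b = [0, -24].
a11, a12, a21, a22 = 1, -2, 1, -5
b1, b2 = 0, -24

det = a11 * a22 - a12 * a21
assert det == -3  # nonzero, so exactly one solution

# Explicit 2x2 inverse applied to b: x = A^{-1} b.
C = Fraction(a22 * b1 - a12 * b2, det)   # Catherine's age
J = Fraction(-a21 * b1 + a11 * b2, det)  # Jason's age
assert (C, J) == (16, 8)

# Check against the original word problem.
assert C == 2 * J            # Catherine is twice as old as Jason
assert C - 6 == 5 * (J - 6)  # six years ago, five times as old
```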
|
1,641,922 | <p>I've come across this exercise:<br>
Suppose that $D=\{z:|z| \le 1\}\subset \mathbb C$ and $$f:D\rightarrow\mathbb C$$
suppose that for every $z\in D$ such that $|z|<1$ $$|f(z)-\bar z|<0.9$$ where $\bar z$ is the complex conjugate of $z$. Prove that $f$ cannot be analytic in $D$.<br>
I started with assuming that $f$ is indeed analytic in order to get a contradiction. My first attempt was to try and get some similarities between $f$ and $g(z)=\bar z$ since they are relatively close to each other, and $g$ is not analytic. This idea quickly failed. Also I tried to integrate $f$ around $D$, or see if it possible that $f$ satisfies the Cauchy–Riemann equations, which also did not get me any further.</p>
| paw88789 | 147,810 | <p>It can be helpful to organize the information into a 'now-and-then' chart. Let Jason's age now be $J$.</p>
<p>$$\begin{array} {c|c|c} &\mbox{Jason's age}&\mbox{Catherine's age} \\ \hline
\mbox{Now} &J&2J\\ \mbox{6 years ago} &J-6& 2J-6 \end{array}$$</p>
<p>Then $2J-6=5(J-6)$, which you can easily solve.</p>
|
134,001 | <p>what is the basic difference between the Discrete Fourier Transform and the Wavelet Transform ? and why does JPEG2000 preferred DWT over DCT or DFT ? </p>
| Emre | 9,901 | <p>Sinusoids and wavelets are the <a href="http://en.wikipedia.org/wiki/Basis_%28linear_algebra%29" rel="nofollow">bases</a> used in DCT (JPEG) and DWT (JPEG2000), respectively. Lossy compression works by finding a basis (think: alphabet) that represents the signal using as few elements (think: words or letters) as possible. The loss is a result of discarding elements that do not make a significant contribution. To continue the analogy, if you take a sentence in English and discard the vowels you can roughly guess what was meant, while using fewer letters. Lossy compression works in the same way.</p>
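<p>To make the basis idea concrete, here is a hedged, minimal sketch in Python (a toy 1-D analogue, not the actual JPEG pipeline): a signal that happens to be a single DCT basis vector is reconstructed almost exactly from just one retained coefficient.</p>

```python
from math import cos, pi

def dct2(x):
    """Unnormalized 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * cos(pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of the unnormalized DCT-II above."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * cos(pi * (n + 0.5) * k / N)
                                     for k in range(1, N))
            for n in range(N)]

N = 32
# A signal equal to one cosine basis vector (frequency k = 3).
signal = [cos(pi * (n + 0.5) * 3 / N) for n in range(N)]
coeffs = dct2(signal)

# "Lossy compression": zero out every coefficient but the largest one.
kmax = max(range(N), key=lambda k: abs(coeffs[k]))
compressed = [c if k == kmax else 0.0 for k, c in enumerate(coeffs)]

reconstructed = idct2(compressed)
assert kmax == 3
assert max(abs(s - r) for s, r in zip(signal, reconstructed)) < 1e-9
```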
|
2,391,624 | <p>This question pertains to Mosteller's classic book <em>Fifty Challenging Problems in Probability</em>. Specifically, this in regards to an algebraic operation Mosteller performs in the solution to the first question, entitled "The Sock Drawer."</p>
<p>Mosteller writes:</p>
<blockquote>
<p>Then we require the probability that both are red to be $\frac{1}{2}$, or $$\frac{r}{r+b}*\frac{r-1}{r+b-1}=\frac{1}{2}\text{.}$$
…</p>
<p>Notice that $$\frac{r}{r+b}\gt\frac{r-1}{r+b-1}\text{, for $b > 0$.}$$ Therefore we can create the inequalities $$\left(\frac{r}{r+b}\right)^2 \gt \frac 12 \gt \left(\frac{r-1}{r+b-1}\right)^2$$</p>
</blockquote>
<p>Despite much staring, and not knowing what to Google, I am stumped! In that last step, how does he do that‽</p>
<p>Many thanks,<br>
James</p>
| Ross Presser | 83,388 | <p>If $x>y>0$, then $x\cdot x > xy$. But $xy = \frac12$, so $x^2 > \frac12$.</p>
<p>Similarly, $xy > y\cdot y$, so $\frac12 > y^2$.</p>
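<p>For the sock-drawer problem itself, the integer solutions of $\frac{r}{r+b}\cdot\frac{r-1}{r+b-1}=\frac12$ can be found by exact rational search; a hedged sketch (the search bounds are arbitrary):</p>

```python
from fractions import Fraction

def prob_both_red(r, b):
    """Probability that two socks drawn without replacement are both red."""
    return Fraction(r, r + b) * Fraction(r - 1, r + b - 1)

solutions = [(r, b)
             for b in range(1, 50)
             for r in range(2, 200)
             if prob_both_red(r, b) == Fraction(1, 2)]
# The two smallest solutions: 3 red / 1 black, then 15 red / 6 black.
assert solutions[:2] == [(3, 1), (15, 6)]
```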
|
679,135 | <p><img src="https://i.stack.imgur.com/uJEMx.png" alt="enter image description here">The question is find the δ by The maximum likelihood estimation?
My answer is δ=0 but I am not sure whether it is correct and how tho show its biasness?</p>
| sas | 21,699 | <p>Geometric solution.</p>
<p>You have a quadrilateral: two of its sides are your numbers $r_1$ and $r_2$, and its two diagonals are $r_1-r_2$ and $r_1+r_2$. </p>
<p>The diagonals are equal, so the quadrilateral is a rectangle.</p>
|
2,549,690 | <p>Is a direct sum of cyclic groups cyclic? I know every abelian group is a direct sum of cyclic groups of prime power orders, but I can't make use of this.</p>
| Mariah | 319,890 | <p>Especially in $\mathbb{R^n}$ you can picture homeomorphisms, which are a smooth deformation of a sort.</p>
<p>How to get from one ball with radius $r$ to another with radius $s$? Simply shrink or expand the balls by some proper constant..</p>
|
3,988,084 | <p>I have 3 points:</p>
<p><span class="math-container">$$A = (0,4) \\
B=(-5,0) \\
C=(5,0)$$</span></p>
<p>I need to find a polynomial that goes through B and C, and is tangent to <span class="math-container">$f(x) = (2/3)x+4$</span> at A.</p>
<p>I know that tangency means the derivative of the polynomial at that point must equal the derivative of $f(x)$ there.</p>
<p>This is probably wrong, but I did the interpolation using points B, C and <span class="math-container">$(0,2/3)$</span>. I got</p>
<p><a href="https://i.stack.imgur.com/fdHvm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fdHvm.png" alt="enter image description here" /></a></p>
<p>Help?</p>
| Piquito | 219,998 | <p><span class="math-container">$P(x)=(x^2-25)(ax+b)$</span></p>
<p><span class="math-container">$P(0)=-25b=4\iff b=\dfrac{-4}{25}$</span></p>
<p><span class="math-container">$P'(x)=3ax^2-\dfrac{8x}{25}-25a\Rightarrow P'(0)=-25a=\dfrac23\Rightarrow a=\dfrac{-2}{75}$</span></p>
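<p>A quick exact check of the resulting cubic $P(x)=(x^2-25)\left(-\frac{2}{75}x-\frac{4}{25}\right)$, using rational arithmetic (a hedged illustration of the conditions, not part of the derivation):</p>

```python
from fractions import Fraction

a = Fraction(-2, 75)
b = Fraction(-4, 25)

def P(x):
    return (x * x - 25) * (a * x + b)

# Passes through B = (-5, 0), C = (5, 0) and A = (0, 4).
assert P(-5) == 0 and P(5) == 0 and P(0) == 4

# Tangent to f(x) = (2/3)x + 4 at A: P'(0) should equal 2/3.
# P'(x) = 3a x^2 + 2b x - 25a, so P'(0) = -25a.
assert -25 * a == Fraction(2, 3)
```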
|
332,603 | <p>I've come across this article:
<a href="http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/" rel="noreferrer">http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/</a></p>
<p>and this paper:
<a href="http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf" rel="noreferrer">http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf</a></p>
<p>They say that there is a formula such that when you give it $n$, it returns the $n$-th prime number, whereas other articles state that no formula discovered so far does such a thing.</p>
<p>If such a formula indeed exists, then why is a new largest known prime discovered from time to time? It would be very simple to use the formula to find a larger one.</p>
<p>I just want to confirm whether such a formula exists or not.</p>
| Robert Israel | 8,508 | <p>It depends on what you mean by "formula". Certainly no formula is known such that using it to find a new very large prime would be "very simple".</p>
|
2,877,578 | <p>Yesterday, I asked the question: <a href="https://math.stackexchange.com/questions/2876740/prove-that-if-a-b-are-closed-then-exists-u-v-open-sets-such-that-u-cap?noredirect=1#comment5938458_2876740">Prove that if $A,B$ are closed then, $ \exists\;U,V$ open sets such that $U\cap V= \emptyset$</a>. </p>
<p>Here is the correct question: prove that if $A,B$ are closed sets in a metric space such that $A\cap B= \emptyset$, there exists $U,V$ open sets such that $A\subset U$, $B\subset V$, and $U\cap V= \emptyset$. </p>
<p>I am thinking of going by contradiction, that is: $\forall\; U,V$ open sets such that $A\subset U$, $B\subset V$, and $U\cap V\neq \emptyset$. </p>
<p>Let $ U,V$ open. Then, $\exists\;r_1,r_2$ such that $B(x,r_1)\subset U$ and $B(x,r_2)\subset V.$ I got stuck here!</p>
<p>I'm thinking of using the properties of a $T_4$-space but I can't work out a proof! Any solution or reference related to metric spaces?</p>
| Lev Bahn | 523,306 | <p>If our space is a metric space $X$, let's define</p>
<p>$f_S(x)=dist(x,S)=\inf_{y\in S}|x-y|$ and note that, for each closed set $S$, it is a continuous function on the space. </p>
<p>Let $U=\left\{x\in X: f_A(x) < f_B(x) \right\}=(f_A-f_B)^{-1}((-\infty,0))$</p>
<p>and $V=\left\{x\in X : f_B(x)<f_A(x) \right\}=(f_A-f_B)^{-1}(0,\infty)$.</p>
<p>And indeed $A\subset U$ and $B\subset V$: for $x\in A$ we have $f_A(x)=0$, while $f_B(x)>0$ because $B$ is closed and disjoint from $A$; symmetrically for $x\in B$.</p>
<p>Also, since $f_A-f_B$ are continuous function and $(0,\infty)$ and $(-\infty,0)$ are open in $\mathbb{R}$, $U$ and $V$ are open and disjoint.</p>
<p>Claim. $f_S(x)$ is continuous on $X$ for each closed set $S$.</p>
<p>Proof) Let $\epsilon>0$ and $x,y\in X$ be given. Without loss of generality assume $f_S(x)-f_S(y)\geq 0$, and choose $a\in S$ with $|y-a|\leq f_S(y)+\epsilon$. Then,</p>
<p>$f_S(x)-f_S(y)=\inf_{z\in S}|x-z|-\inf_{z\in S}|y-z| \leq \inf_{z\in S}|x-z|-|y-a|+\epsilon\leq |x-a|-|y-a|+\epsilon \leq |x-y|+\epsilon. $ </p>
<p>Since $\epsilon>0$ is arbitrary, we get $|f_S(x)-f_S(y)|\leq|x-y|$, so $f_S$ is Lipschitz continuous with constant $1$. </p>
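<p>A tiny numeric illustration of the construction on the real line (hedged; the sets and sample points are arbitrary choices): with $A=\{0\}$ and $B=[2,3]$, the sets $U$ and $V$ defined by comparing the two distance functions separate $A$ from $B$.</p>

```python
# Distance functions to two disjoint closed subsets of R:
# A = {0} and B = [2, 3].
def f_A(x):
    return abs(x)

def f_B(x):
    return max(2 - x, 0, x - 3)  # distance from x to the interval [2, 3]

in_U = lambda x: f_A(x) < f_B(x)  # open set containing A
in_V = lambda x: f_B(x) < f_A(x)  # open set containing B

assert in_U(0) and not in_V(0)            # A is inside U
assert in_V(2) and in_V(2.5) and in_V(3)  # B is inside V
# U and V are disjoint: no point satisfies both strict inequalities.
for x in [i / 10 for i in range(-50, 51)]:
    assert not (in_U(x) and in_V(x))
```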
|
333,360 | <p>I know the series for $\cos(x)$ it is $\sum \limits_{n=0}^\infty \dfrac{(-1)^n x^{2n}}{(2n)!}$ </p>
<p>Multiplying by $x$ will result in $\sum \limits_{n=0}^\infty \dfrac{\left(-1\right)^ n x^{2n+1}}{\left(2n\right)!}$ </p>
<p>Which is great when you already know the series; however, my question is how does one find Maclaurin series when you don't already know the series? </p>
| Jared | 65,034 | <p>Are you trying to find the Maclaurin series for $x\cos(x)$? If so, what you've done is valid.</p>
<p>Practically, it is much easier to find the Maclaurin series representing a given function by relating it to some function with known Maclaurin series, as you've done here. In general, you must take care to note where the series converges, but here your radius of convergence will be infinite.</p>
<p>You can also use the following formula to calculate the Maclaurin series:
$$
\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n
$$
This is valid whenever $f(x)$ is infinitely differentiable in a neighborhood of $0$. Whether or not $f(x)$ is equal to its Maclaurin series is another question, but in your case, it is. Using this formula to find Maclaurin series is often very difficult, as finding patterns for the derivatives of $f(x)$ is not always clear. In the case of $\cos(x)$, the pattern is fairly clear, so for your question, it is a natural starting point.</p>
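<p>A quick numeric check that the series above really represents $x\cos(x)$ (hedged; the truncation level and sample points are arbitrary):</p>

```python
from math import cos, factorial

def xcos_series(x, terms=20):
    """Truncated Maclaurin series of x*cos(x)."""
    return sum((-1) ** n * x ** (2 * n + 1) / factorial(2 * n)
               for n in range(terms))

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(xcos_series(x) - x * cos(x)) < 1e-9
```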
|
1,665,533 | <p>Let $\mathcal{E}_1, ...,\mathcal{E}_n$ be collections of measurable sets on $(\Omega,\mathcal{F},P)$, each closed under intersection. Suppose
\begin{align*}
P(A_1\cap...\cap\ A_n)=P(A_1)\cdot ... \cdot P(A_n),
\end{align*}
for all $A_i \in \mathcal{E}_i$ for $1 \leq i \leq n$. </p>
<p>Now I want to show that the $\sigma$-algebras $\sigma(\mathcal{E}_i)$ for $1 \leq i \leq n$ are independent, using an application of the $\pi$-$\lambda$-theorem. </p>
<p>Since $\mathcal{E}_i$ for $1 \leq i \leq n$ are closed under intersection, each $\mathcal{E}_i$ is a $\pi$-system. Now, for me it is unclear how to define a $\lambda$-system and how to apply the $\pi$-$\lambda$-theorem.</p>
| John | 105,625 | <p>Fix $A_i \in \mathcal{E}_i$ for $i=2,\cdots,n$, and define $G=\{B\in \sigma(\mathcal{E}_1)\mid P(B\cap A_2\cap\cdots\cap A_n)=P(B)\cdot P(A_2)\cdots P(A_n)\}$.</p>
<p>By assumption $\mathcal{E}_1\subset G$, so it suffices to check from the definition that $G$ is a $\lambda$-system.</p>
<p>Then by the $\pi$-$\lambda$ theorem, we have $\sigma(\mathcal{E}_1)\subset G$. This shows that if $\mathcal{E}_1, \cdots,\mathcal{E}_n$ are independent, then $\sigma(\mathcal{E}_1), \mathcal{E}_2, \cdots,\mathcal{E}_n$ are independent. Now repeat the argument with $\mathcal{E}_2$ (then $\mathcal{E}_3$, and so on) to replace each collection in turn by the $\sigma$-algebra it generates.</p>
|
886,626 | <p>I want to solve the following system of congruences:</p>
<p>$ x \equiv 1 \mod 2 $</p>
<p>$ x \equiv 2 \mod 3 $</p>
<p>$ x \equiv 3 \mod 4 $</p>
<p>$ x \equiv 4 \mod 5 $</p>
<p>$ x \equiv 5 \mod 6 $</p>
<p>$ x \equiv 0 \mod 7 $</p>
<p>I know, but do not understand why, that the first two congruences are redundant. Why is this the case? I see that the modulo of the congruences are not pairwise relatively prime, but why does this cause a redundancy or contradiction? Further, why is it that in the solution to this system, we discard the first two congruences and not </p>
<p>$ x \equiv 3 \mod 4 $</p>
<p>$ x \equiv 5 \mod 6 $</p>
<p>being that $ gcd(3,6) = 3 $ and $gcd(2,4) = 2$ ?</p>
<p>EDIT:</p>
<p>How is the modulus of the unique solution affected if I instead consider the system of congruences without the redundancy, i.e. does $M = 4 * 5 * 6 * 7$ or does it remain $M= 2*3*4*5*6*7$?</p>
| Steven Alexis Gregory | 75,410 | <p>$x \equiv 1 \mod 2$</p>
<p>$x \equiv 2 \mod 3$</p>
<p>$x \equiv 3 \mod 4 \implies x \equiv 1 \pmod 2$</p>
<p>$x \equiv 4 \mod 5$</p>
<p>$x \equiv 5 \mod 6 \iff
\left.\begin{cases}
x \equiv 1 \mod 2 \\
x \equiv 2 \mod 3
\end{cases} \right\}$
By the CRT.</p>
<p>$x \equiv 0 \mod 7$</p>
<hr>
<p>So first we replace $x \equiv 5 \mod 6$ with
$\left.\begin{cases}
x \equiv 1 \mod 2 \\
x \equiv 2 \mod 3
\end{cases} \right\}$</p>
<hr>
<p>$x \equiv 1 \mod 2$</p>
<p>$x \equiv 2 \mod 3$</p>
<p>$x \equiv 3 \mod 4 \implies x \equiv 1 \pmod 2$</p>
<p>$x \equiv 4 \mod 5$</p>
<p>$x \equiv 1 \mod 2$</p>
<p>$x \equiv 2 \mod 3$</p>
<p>$x \equiv 0 \mod 7$</p>
<hr>
<p>Now, because $x \equiv 1 \pmod 2$ is redundant, we remove all instances of it, and we remove all but one instance of $x \equiv 2 \mod 3$.</p>
<hr>
<p>$x \equiv 2 \mod 3$</p>
<p>$x \equiv 3 \mod 4$</p>
<p>$x \equiv 4 \mod 5$</p>
<p>$x \equiv 0 \mod 7$</p>
<hr>
<p>In this case, we can cheat a little if we change the first three congruences to equivalent congruences.</p>
<p>$
\left.
\begin{array}{l}
x \equiv -1 \mod 3 \\
x \equiv -1 \mod 4 \\
x \equiv -1 \mod 5
\end{array}
\right\} \ \iff x \equiv -1 \mod 60$
(Again, by the CRT.)</p>
<p>$x \equiv 0 \mod 7$</p>
<hr>
<p>So we now have</p>
<hr>
<p>$x \equiv -1 \mod 60$</p>
<p>$x \equiv 0 \mod 7$</p>
<hr>
<p>To solve this, we start with $x \equiv 0 \mod 7$, which implies that
$x = 7n$ for some integer $n$. Substitute that into $x \equiv -1 \mod 60$ and you get</p>
<p>$7n \equiv -1 \mod 60$</p>
<p>So we need to find $\dfrac 17 \pmod{60}$. The most fundamental way to do this is to inspect numbers congruent to $1 \pmod{60}$ until we find one that is a multiple of $7$. At worst, we will have to examine $7$ such numbers.</p>
<p>$1, 61, 121, 181, 241, \color{red}{301}$</p>
<p>Since $7 \times 43 = 301$, then $\dfrac 17 \equiv 43 \pmod{60}$. So we conclude that $n \equiv -43 \equiv 17 \mod{60}$.</p>
<p>Then $x = 7n \equiv 119 \pmod{420}$.</p>
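<p>A quick Python check of the conclusion (a sketch; the variable names are mine): it verifies $x=119$ against all six original congruences, and computes the modulus of the unique solution as $\operatorname{lcm}(2,3,4,5,6,7)=420$ rather than the product $2\cdot3\cdot4\cdot5\cdot6\cdot7$, which also answers the question's edit.</p>

```python
from math import gcd

# (residue, modulus) pairs from the original system
congruences = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (0, 7)]

x = 119
assert all(x % m == r % m for r, m in congruences)

# Because the moduli are not pairwise coprime, the solution is
# unique modulo their lcm, not their product.
lcm = 1
for _, m in congruences:
    lcm = lcm * m // gcd(lcm, m)
print(lcm)  # 420
```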
|
399,948 | <p>How do you, in general, find the trigonometric function values? I know how to find them for 30, 45, and 60 using the 60-60-60 and 45-45-90 triangles, but don't know for, say, $\sin(15)$ or $\tan(75)$ or $\csc(50)$, etc. I tried looking for how to do it, but neither my textbook nor any other place has a tutorial for it. I want to know how to find the exact values for all the trigonometric functions like $\sin x$, $\csc x$, ... as opposed to looking them up or using a calculator. According to my textbook, $\sin(15)=0.26$, $\tan(75)=3.73$, and $\csc(50)=1.31$, but it doesn't show where those numbers came from, as if they were dropped from Math heaven!</p>
| M. Strochyk | 40,362 | <p>The value of $\sin{x}$ can be calculated to any prescribed accuracy from the Taylor representation
$$\sin{x}=\sum\limits_{n=0}^{\infty}{\dfrac{(-1)^n x^{2n+1}}{(2n+1)!}}$$ or infinite product
$$\sin{x}=x\prod\limits_{n=1}^{\infty}{\left(1-\dfrac{x^2}{\pi^2 n^2} \right)}.$$
For particular angles, numerous <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Infinite_product_formulae" rel="nofollow">trigonometric identities</a> can be used.</p>
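<p>A short Python sketch of both formulas (the function names are mine); note that the infinite product converges far more slowly than the Taylor series, so it is truncated after many factors.</p>

```python
import math

def sin_taylor(x, terms=12):
    # Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

def sin_product(x, factors=10000):
    # Truncation of x * prod_{n>=1} (1 - x^2 / (pi^2 n^2))
    p = x
    for n in range(1, factors + 1):
        p *= 1 - x**2 / (math.pi**2 * n**2)
    return p

x = math.radians(15)            # the question's sin(15 degrees)
print(round(sin_taylor(x), 2))  # 0.26, the textbook value
```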
|
463,190 | <p>How to show that $\gcd(a + b, a^2 + b^2) = 1\mbox{ or } 2$ for coprime $a$ and $b$?</p>
<p>I know the fact that $\gcd(a,b)=1$ implies $\gcd(a,b^2)=1$ and $\gcd(a^2,b)=1$, but how do I apply this to that?</p>
| Community | -1 | <p>Hint: Write $a^2+b^2=(a + b)(a − b)+2b^2$. </p>
<p>Now you can show that $\gcd(a+b, b^2)=1$, so that $\gcd(a+b, a^2+b^2)=\gcd(a + b, 2b^2) = 1\text{ or }2$.</p>
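<p>A brute-force Python check of both the identity in the hint and the conclusion (a sketch; the search ranges are arbitrary):</p>

```python
from math import gcd

# The identity behind the hint: a^2 + b^2 = (a+b)(a-b) + 2b^2
for a in range(1, 50):
    for b in range(1, 50):
        assert a*a + b*b == (a + b)*(a - b) + 2*b*b
        if gcd(a, b) == 1:
            # For coprime a, b the gcd is always 1 or 2
            assert gcd(a + b, a*a + b*b) in (1, 2)

# Both values occur: 2 when a and b are both odd, 1 otherwise
print(gcd(3 + 5, 9 + 25), gcd(2 + 3, 4 + 9))  # 2 1
```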
|
177,124 | <h1>Definitions and notations.</h1>
<p>Let $\mathcal{P}(X)$ the <strong>power set</strong> of $X$.</p>
<p>Let $\tau_X\subseteq\mathcal{P}(X)$ a <strong>topology</strong> on X.</p>
<p>We call $A$ <strong>irreducible</strong> if every time $A=B\cup C$ with $B,C$ closed set then $(B=A)\vee(C=A)$.</p>
<p>We call $X$ <strong>sober</strong> if every non empty irreducible closed set is the closure of a (single one) point.</p>
<p>We call $K$ <strong>compact</strong> if every open covering $(U_i)_{i\in I}\subseteq\tau_X$ of $K$ (i.e. $K\subseteq\bigcup_{i\in I}U_i$) admits a finite subcovering of $K$ (i.e. there is a finite $J\subseteq I$ s.t. $K\subseteq\bigcup_{j\in J}U_j$). Note that $(X,\tau_X)$ is not required to be T$_2$.</p>
<p>We call $A$ <strong>relatively compact in</strong> $B$ if $A\subseteq B$ and every open covering of $B$ admits a finite subcovering of $A$. Write $A\ll B$ if $A$ is relatively compact in $B$ (note: by definitions $A$ is compact iff $A\ll A$).</p>
<p>We say that $F$ has the <strong>relatively compactness property</strong> if for all $A\in F$ exist $B\in F$ s.t. $B\ll A$.</p>
<p>We call $D\subseteq\mathcal{P}(X)$ <strong>direct</strong> if $D\neq\emptyset$ and for all $A,B\in D$ there exists $C\in D$ s.t. $A\cup B\subseteq C$. In such a case we call $C$ an <strong>upper bound</strong> of $\{A,B\}$. In other words, $D$ is directed if it is non-empty and closed under upper bounds of its finite subsets.</p>
<p>We call <strong>supremum</strong> of $A\subseteq\mathcal{P}(X)$ the least upper bound (by inclusion) of $A$, i.e. $S$ is a supremum of $A$ if for all $B\in A$ we have $B\subseteq S$ and $S$ is a subset of all other sets with the same property.
(note: if it exists, there is at most one supremum).</p>
<p>We call $S\subseteq \mathcal{P}(X)$ <strong>scott open</strong> if $S$ is upward closed and every time it contains the supremum of a direct set $D$ then $S\cap D\neq\emptyset$.</p>
<p>We call $F\subset\mathcal{P}(X)$ a <strong>filter</strong> if $\emptyset\notin F$, it is an upward set (i.e. if $A\in F$ and $A\subseteq B$ then $B\in F$) and it is closed by finite intersections.</p>
<p>We call $\mathcal{Ofilt}(X)$ the space of the scott open filters on $X$.</p>
<p>We call $A\subseteq X$ <strong>saturated</strong> if $A=\bigcap\{U\in\tau_X\mid A\subseteq U\}$.</p>
<p>We call $\mathcal{Q}(X)\subseteq\mathcal{P}(X)$ the set of all saturated and compact subset of $X$.</p>
<h1>The claim.</h1>
<p>Let $(X,\tau_X)$ a sober (and second countable) space. Then</p>
<p>$\begin{align}
f\colon\mathcal{Q}(X)&\to\mathcal{Ofilt}(\tau_X)\\
Q&\mapsto f(Q)=\{U\in\tau_X\mid Q\subset U\},
\end{align}$</p>
<p>is a bijective function whose inverse is the map which associates to a scott open filter in $\tau_X$ the intersection of the filter.</p>
<p>Note: we’ve put in brackets the assumption that $X$ is second countable because, for our purpose, we have it. In any case the proposition seems to be true without that assumption, as is shown in Theorem 2.16 of [1].</p>
<h1>My question, some explanations and some requests.</h1>
<p>I'm able to prove that the function $f$ is well defined and injective. On the other hand, the proof that the intersection of such a filter is compact (it is obviously a saturated set) is a really hard problem for me.</p>
<p>If possible, I’m looking for a self-contained (maybe direct) proof: I lost myself in the cross-references from one article to another.</p>
<p>What follows is my steps (without the final one).</p>
<h2>Beginning of (my) proof.</h2>
<p>Note: I'm supposing that X is second countable.</p>
<p>Let $F$ be a scott open filter of $\tau_X$ and let $P=\bigcap F$.</p>
<p>Let $(V_n)_{n\in\omega}$ be an arbitrary open covering of $P$ (possibly with repetitions). We have to prove that it has a finite subcovering of $P$ (we can suppose the covering to be countable because we have supposed $X$ is second countable).</p>
<p>Let $W_k=\bigcup_{n≤k}V_n$, so for any $k\in\omega$ we have $W_k\subseteq W_{k+1}$ and $P\subset\bigcup_{k\in\omega}W_k$. We note that $\{W_k\mid k\in\omega\}$ forms a direct set and that $\bigcup_{k\in\omega}W_k$, its supremum, is open. So if we prove that $\bigcup_{k\in\omega}W_k\in F$ we can conclude, thanks to the scott openness, because each $W_k$ is a finite union of sets from the sequence $(V_n)_{n\in\omega}$, and any $W_k\in F$ contains $P$.</p>
<p>On the other hand, if we suppose that we have proved the statement, then the intersection of the filter (i.e. $P$) is in $\mathcal{Q}(X)$ and by $f$ it would be mapped back again to $F$ (thanks to the injectivity). So, if the statement is true, $F$ contains all open set containing $P$.</p>
<p>If we are able to prove that a general open set containing $P$ is in $F$ (that we know is consistent and "true"...), then we'll conclude that $\bigcup_{k\in\omega}W_k\in F$ (because it is open) and so we'll conclude the proof.</p>
<p>So, let $A\in\tau_X$ an open set of $X$ containing $P$.</p>
<p>First of all if $P\in F$ then $A\in F$ (because $F$ is a filter).</p>
<p>So we suppose $P\notin F$. Then (only by second countability) we can take a decreasing (by inclusion) sequence $(U_n)_{n\in\omega}\subseteq F$ s.t. $\bigcap_{n\in\omega}U_n=P$.</p>
<p>On the first hand if exist $m\in\omega$ s.t. $U_m\subseteq A$ then $A\in F$ (because $F$ is a filter) so we can suppose that for all $n\in\omega$ we have $U_n\setminus A\neq\emptyset$.</p>
<p>So let $C_n=U_n\setminus A\neq\emptyset$ and let $C=\bigcap_{n\in\omega}C_n$.
There are only two cases: $C\neq\emptyset$ or $C=\emptyset$.</p>
<p>If $C\neq\emptyset$ then (because $C\subseteq\bigcap_{n\in\omega}U_n=P$) we reach a contradiction, i.e. $C\subseteq P\subseteq A$ while $C$ does not contain any point of $A$; so it must be $C=\emptyset$ and so...</p>
<h2>Step-conclusion.</h2>
<p>With regard to the proof of the statement I'm not able to go on from what I did in the last section; but I'm sure that without the assumption of scott openness the theorem is false.</p>
<p>we give a counterexample: let $X=\mathbb{R}$, $\tau_\mathbb{R}="\text{standard topology generated by the open intervals}"$, $P=\mathbb{Z}$, $F=\{A\in\tau_\mathbb{R}\mid \mathbb{Z}\subseteq A\}$; then let $\{\bigcup_{z\in\mathbb{Z}}(a_z,b_z)\mid\forall z\in\mathbb{Z}\; a_z,b_z\in\mathbb{Q}\wedge z\in(a_z,b_z)\}\subseteq F$ be the sequence of "$(U_n)_{n\in\omega}$", decrescent by inclusion, whose intersection is $P$.</p>
<p>So,</p>
<p>$\mathbb{R}$ is sober and second countable;</p>
<p>$F$ is the filter of all open set containing $P$ (but F is not scott open, e.g. $((-n,n))_{n\in\omega}$ is clearly a direct sequence whose union is $\mathbb{R}\in F$ but for all $n\in\omega$ we have $(-n,n)\notin F$);</p>
<p>the intersection of $F$ is $P$ but $P$ is clearly not compact, e.g. $\{(z-\frac{1}{2},z+\frac{1}{2})\mid z\in\mathbb{Z}\}$ is clearly a covering of $\mathbb{Z}$ without a finite sub covering.</p>
<h2>Moreover.</h2>
<p>I'm aware that the guideline of the proof that I followed cannot be applied in the general setting (without the second countability hypothesis on $X$)... but I'm not in the general case and I'm looking for a "simple", direct and clear proof (if it exists).</p>
<p>In a little more specific setting (which is my case) we don't require directly that $F$ is scott open but that it respects the relatively compactness property, which implies the scott openness for $F$.</p>
<p>Indeed, if $D$ is a direct subset of $\mathcal{P}(\tau_X)$ whose supremum $S$ (i.e. $S=\bigcup D$) lies on $F$ (so $D$ is a covering of $S$), then there exists $A\in F$ with $A\ll S$, i.e. there must be a finite subset of $D$ that covers a set $A$ who lies on $F$ and so the union of $D$ must lie on $F$ too (because $F$ is a filter). To conclude we have only to note that a finite union of elements of a direct set lies on the direct set (by definition of direct set) and so our union is in $D$. Then $D\cap F\neq\emptyset$ and so $F$ is scott open.</p>
<p>Note that in my counterexample, obviously, $F$ fails this property too...</p>
<p>In any case if you use the relatively compactness instead of the scott openness to find a (simpler) proof... it will be fine for me!</p>
<p>Thank you all,</p>
<p>Corrado.</p>
<h1>References.</h1>
<p>[1]: Karl Heinrich Hofmann and Michael William Mislove. <em>Local compactness and continuous lattices</em>. In Bernhard Banaschewski and Rudolf-Eberhard Hoffmann, editors, Continuous Lattices, volume 871 of Lecture Notes in Mathematics, pages 209–248. Springer Berlin Heidelberg, 1981 (<strong>Theorem 2.16 pag. 226</strong>)</p>
<p>see also (if interested on my specific setting)</p>
<p>[2]: Matthew de Brecht. <em>Quasi-polish spaces</em>. Annals of Pure and Applied Logic, (164):356–381, 2013. (last three lines of the proof of the <strong>Theorem 44 pag. 369</strong>)</p>
| მამუკა ჯიბლაძე | 41,291 | <p>A self-contained proof is in the book "Continuous lattices and domains" by G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. W. Mislove and D. S. Scott.</p>
<p>The particular place you need is Lemma II-1.19 on page 146.</p>
|
177,124 | <h1>Definitions and notations.</h1>
<p>Let $\mathcal{P}(X)$ the <strong>power set</strong> of $X$.</p>
<p>Let $\tau_X\subseteq\mathcal{P}(X)$ a <strong>topology</strong> on X.</p>
<p>We call $A$ <strong>irreducible</strong> if every time $A=B\cup C$ with $B,C$ closed set then $(B=A)\vee(C=A)$.</p>
<p>We call $X$ <strong>sober</strong> if every non empty irreducible closed set is the closure of a (single one) point.</p>
<p>We call $K$ <strong>compact</strong> if every open covering $(U_i)_{i\in I}\subseteq\tau_X$ of $K$ (i.e. $K\subseteq\bigcup_{i\in I}U_i$) admits a finite subcovering of $K$ (i.e. there is a finite $J\subseteq I$ s.t. $K\subseteq\bigcup_{j\in J}U_j$). Note that $(X,\tau_X)$ is not required to be T$_2$.</p>
<p>We call $A$ <strong>relatively compact in</strong> $B$ if $A\subseteq B$ and every open covering of $B$ admits a finite subcovering of $A$. Write $A\ll B$ if $A$ is relatively compact in $B$ (note: by definitions $A$ is compact iff $A\ll A$).</p>
<p>We say that $F$ has the <strong>relatively compactness property</strong> if for all $A\in F$ exist $B\in F$ s.t. $B\ll A$.</p>
<p>We call $D\subseteq\mathcal{P}(X)$ <strong>direct</strong> if $D\neq\emptyset$ and for all $A,B\in D$ there exists $C\in D$ s.t. $A\cup B\subseteq C$. In such a case we call $C$ an <strong>upper bound</strong> of $\{A,B\}$. In other words, $D$ is directed if it is non-empty and closed under upper bounds of its finite subsets.</p>
<p>We call <strong>supremum</strong> of $A\subseteq\mathcal{P}(X)$ the least upper bound (by inclusion) of $A$, i.e. $S$ is a supremum of $A$ if for all $B\in A$ we have $B\subseteq S$ and $S$ is a subset of all other sets with the same property.
(note: if it exists, there is at most one supremum).</p>
<p>We call $S\subseteq \mathcal{P}(X)$ <strong>scott open</strong> if $S$ is upward closed and every time it contains the supremum of a direct set $D$ then $S\cap D\neq\emptyset$.</p>
<p>We call $F\subset\mathcal{P}(X)$ a <strong>filter</strong> if $\emptyset\notin F$, it is an upward set (i.e. if $A\in F$ and $A\subseteq B$ then $B\in F$) and it is closed by finite intersections.</p>
<p>We call $\mathcal{Ofilt}(X)$ the space of the scott open filters on $X$.</p>
<p>We call $A\subseteq X$ <strong>saturated</strong> if $A=\bigcap\{U\in\tau_X\mid A\subseteq U\}$.</p>
<p>We call $\mathcal{Q}(X)\subseteq\mathcal{P}(X)$ the set of all saturated and compact subset of $X$.</p>
<h1>The claim.</h1>
<p>Let $(X,\tau_X)$ a sober (and second countable) space. Then</p>
<p>$\begin{align}
f\colon\mathcal{Q}(X)&\to\mathcal{Ofilt}(\tau_X)\\
Q&\mapsto f(Q)=\{U\in\tau_X\mid Q\subset U\},
\end{align}$</p>
<p>is a bijective function whose inverse is the map which associates to a scott open filter in $\tau_X$ the intersection of the filter.</p>
<p>Note: we’ve put in brackets the assumption that $X$ is second countable because, for our purpose, we have it. In any case the proposition seems to be true without that assumption, as is shown in Theorem 2.16 of [1].</p>
<h1>My question, some explanations and some requests.</h1>
<p>I'm able to prove that the function $f$ is well defined and injective. On the other hand, the proof that the intersection of such a filter is compact (it is obviously a saturated set) is a really hard problem for me.</p>
<p>If possible, I’m looking for a self-contained (maybe direct) proof: I lost myself in the cross-references from one article to another.</p>
<p>What follows is my steps (without the final one).</p>
<h2>Beginning of (my) proof.</h2>
<p>Note: I'm supposing that X is second countable.</p>
<p>Let $F$ be a scott open filter of $\tau_X$ and let $P=\bigcap F$.</p>
<p>Let $(V_n)_{n\in\omega}$ be an arbitrary open covering of $P$ (possibly with repetitions). We have to prove that it has a finite subcovering of $P$ (we can suppose the covering to be countable because we have supposed $X$ is second countable).</p>
<p>Let $W_k=\bigcup_{n≤k}V_n$, so for any $k\in\omega$ we have $W_k\subseteq W_{k+1}$ and $P\subset\bigcup_{k\in\omega}W_k$. We note that $\{W_k\mid k\in\omega\}$ forms a direct set and that $\bigcup_{k\in\omega}W_k$, its supremum, is open. So if we prove that $\bigcup_{k\in\omega}W_k\in F$ we can conclude, thanks to the scott openness, because each $W_k$ is a finite union of sets from the sequence $(V_n)_{n\in\omega}$, and any $W_k\in F$ contains $P$.</p>
<p>On the other hand, if we suppose that we have proved the statement, then the intersection of the filter (i.e. $P$) is in $\mathcal{Q}(X)$ and by $f$ it would be mapped back again to $F$ (thanks to the injectivity). So, if the statement is true, $F$ contains all open set containing $P$.</p>
<p>If we are able to prove that a general open set containing $P$ is in $F$ (that we know is consistent and "true"...), then we'll conclude that $\bigcup_{k\in\omega}W_k\in F$ (because it is open) and so we'll conclude the proof.</p>
<p>So, let $A\in\tau_X$ an open set of $X$ containing $P$.</p>
<p>First of all if $P\in F$ then $A\in F$ (because $F$ is a filter).</p>
<p>So we suppose $P\notin F$. Then (only by second countability) we can take a decreasing (by inclusion) sequence $(U_n)_{n\in\omega}\subseteq F$ s.t. $\bigcap_{n\in\omega}U_n=P$.</p>
<p>On the first hand if exist $m\in\omega$ s.t. $U_m\subseteq A$ then $A\in F$ (because $F$ is a filter) so we can suppose that for all $n\in\omega$ we have $U_n\setminus A\neq\emptyset$.</p>
<p>So let $C_n=U_n\setminus A\neq\emptyset$ and let $C=\bigcap_{n\in\omega}C_n$.
There are only two cases: $C\neq\emptyset$ or $C=\emptyset$.</p>
<p>If $C\neq\emptyset$ then (because $C\subseteq\bigcap_{n\in\omega}U_n=P$) we reach a contradiction, i.e. $C\subseteq P\subseteq A$ while $C$ does not contain any point of $A$; so it must be $C=\emptyset$ and so...</p>
<h2>Step-conclusion.</h2>
<p>With regard to the proof of the statement I'm not able to go on from what I did in the last section; but I'm sure that without the assumption of scott openness the theorem is false.</p>
<p>we give a counterexample: let $X=\mathbb{R}$, $\tau_\mathbb{R}="\text{standard topology generated by the open intervals}"$, $P=\mathbb{Z}$, $F=\{A\in\tau_\mathbb{R}\mid \mathbb{Z}\subseteq A\}$; then let $\{\bigcup_{z\in\mathbb{Z}}(a_z,b_z)\mid\forall z\in\mathbb{Z}\; a_z,b_z\in\mathbb{Q}\wedge z\in(a_z,b_z)\}\subseteq F$ be the sequence of "$(U_n)_{n\in\omega}$", decrescent by inclusion, whose intersection is $P$.</p>
<p>So,</p>
<p>$\mathbb{R}$ is sober and second countable;</p>
<p>$F$ is the filter of all open set containing $P$ (but F is not scott open, e.g. $((-n,n))_{n\in\omega}$ is clearly a direct sequence whose union is $\mathbb{R}\in F$ but for all $n\in\omega$ we have $(-n,n)\notin F$);</p>
<p>the intersection of $F$ is $P$ but $P$ is clearly not compact, e.g. $\{(z-\frac{1}{2},z+\frac{1}{2})\mid z\in\mathbb{Z}\}$ is clearly a covering of $\mathbb{Z}$ without a finite sub covering.</p>
<h2>Moreover.</h2>
<p>I'm aware that the guideline of the proof that I followed cannot be applied in the general setting (without the second countability hypothesis on $X$)... but I'm not in the general case and I'm looking for a "simple", direct and clear proof (if it exists).</p>
<p>In a little more specific setting (which is my case) we don't require directly that $F$ is scott open but that it respects the relatively compactness property, which implies the scott openness for $F$.</p>
<p>Indeed, if $D$ is a direct subset of $\mathcal{P}(\tau_X)$ whose supremum $S$ (i.e. $S=\bigcup D$) lies on $F$ (so $D$ is a covering of $S$), then there exists $A\in F$ with $A\ll S$, i.e. there must be a finite subset of $D$ that covers a set $A$ who lies on $F$ and so the union of $D$ must lie on $F$ too (because $F$ is a filter). To conclude we have only to note that a finite union of elements of a direct set lies on the direct set (by definition of direct set) and so our union is in $D$. Then $D\cap F\neq\emptyset$ and so $F$ is scott open.</p>
<p>Note that in my counterexample, obviously, $F$ fails this property too...</p>
<p>In any case if you use the relatively compactness instead of the scott openness to find a (simpler) proof... it will be fine for me!</p>
<p>Thank you all,</p>
<p>Corrado.</p>
<h1>References.</h1>
<p>[1]: Karl Heinrich Hofmann and Michael William Mislove. <em>Local compactness and continuous lattices</em>. In Bernhard Banaschewski and Rudolf-Eberhard Hoffmann, editors, Continuous Lattices, volume 871 of Lecture Notes in Mathematics, pages 209–248. Springer Berlin Heidelberg, 1981 (<strong>Theorem 2.16 pag. 226</strong>)</p>
<p>see also (if interested on my specific setting)</p>
<p>[2]: Matthew de Brecht. <em>Quasi-polish spaces</em>. Annals of Pure and Applied Logic, (164):356–381, 2013. (last three lines of the proof of the <strong>Theorem 44 pag. 369</strong>)</p>
| Corrado | 54,610 | <p>We have (only) to prove that for every open set <span class="math-container">$A\supseteq P$</span>, where <span class="math-container">$P=\bigcap F$</span>, we have <span class="math-container">$A\in F$</span> (for the whole notation you can see the question).</p>
<h1>The missing step.</h1>
<p>(we refer to [3], thanks to the suggestion of მამუკა ჯიბლაძე)</p>
<p>Suppose <span class="math-container">$P\subseteq A\notin F$</span>. Then (since <span class="math-container">$X\in F$</span>) there must exist <span class="math-container">$V\in\tau_X$</span> s.t. <span class="math-container">$A\subseteq V\notin F$</span> with <span class="math-container">$T\in F$</span> for every open set <span class="math-container">$T\supsetneq V$</span>, i.e. <span class="math-container">$V$</span> is a maximal open set containing <span class="math-container">$A$</span> with respect to not being in <span class="math-container">$F$</span>.</p>
<p>So if <span class="math-container">$B,C\in\tau_X$</span> are such that <span class="math-container">$B\cap C=V$</span>, then <span class="math-container">$(B=V) \vee (C=V)$</span>: otherwise both would strictly contain <span class="math-container">$V$</span>, hence lie in <span class="math-container">$F$</span>, and then <span class="math-container">$V\in F$</span> because <span class="math-container">$F$</span> is a filter; absurd.</p>
<p>Then <span class="math-container">$X\setminus V$</span> is a closed (obviously) and irreducible set, indeed if <span class="math-container">$C_1,C_2\subsetneq(X\setminus V)$</span> are two closed sets s.t. <span class="math-container">$C_1\cup C_2=(X\setminus V)$</span> then <span class="math-container">$(X\setminus C_1)\cap(X\setminus C_2)=V$</span> but <span class="math-container">$(X\setminus C_1)\supsetneq V \wedge (X\setminus C_2)\supsetneq V$</span>, absurd.</p>
<p>Thanks to sobriety, there exists <span class="math-container">$p\in X$</span> s.t. <span class="math-container">$\overline{\{p\}}=X\setminus V$</span>.</p>
<p>Now, for all <span class="math-container">$G\in F$</span> we have <span class="math-container">$p\in G$</span>, because if <span class="math-container">$p\notin G$</span> then <span class="math-container">$\overline{\{p\}}\cap G=(X \setminus V)\cap G=\emptyset$</span> and so <span class="math-container">$G\subseteq V$</span> and <span class="math-container">$V\in F$</span>, because <span class="math-container">$F$</span> is a filter; absurd.</p>
<p>So, for all <span class="math-container">$G\in F$</span> we have <span class="math-container">$p\in G$</span> but this means that <span class="math-container">$p\in P$</span> and then <span class="math-container">$p\in A\subseteq V=(X\setminus\overline{\{p\}})$</span>, absurd.</p>
<p>So <span class="math-container">$A\in F.\quad\square$</span></p>
<h2>The existence of V</h2>
<p>Let <span class="math-container">$D=\{E\in(\tau_X\setminus F)\mid A\subseteq E\}$</span>, ordered by inclusion. So <span class="math-container">$D$</span> is a poset. Let <span class="math-container">$M$</span> be the least upper bound of an arbitrary chain in <span class="math-container">$D$</span>, i.e. <span class="math-container">$M$</span> is the union of the chain. If <span class="math-container">$M\in F$</span> then, thanks to the Scott openness of <span class="math-container">$F$</span>, there must be an element of the chain which lies in <span class="math-container">$F$</span>, absurd.</p>
<p>So, the least upper bound of every chain in <span class="math-container">$D$</span> lies in <span class="math-container">$D$</span>. By Zorn's lemma there is a maximal element in <span class="math-container">$D$</span>. Call one of them <span class="math-container">$V$</span>.</p>
<h1>Reference.</h1>
<p>[3]: G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. W. Mislove and D. S. Scott. <em>Continuous lattices and domains</em> (Lemma II-1.19, page 146).</p>
|
3,170,871 | <p>Could anyone please give me a hint on how to compute the following integral?</p>
<p><span class="math-container">$$\int \sqrt{\frac{x-2}{x^7}} \, \mathrm d x$$</span></p>
<p>I'm not required to use hyperbolic/ inverse trigonometric functions.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Write your integrand in the form <span class="math-container">$$\frac{\sqrt{x-2}}{x^{7/2}}$$</span> and then substitute <span class="math-container">$$u=\sqrt{x}$$</span> so you will get <span class="math-container">$$2\int\frac{\sqrt{u^2-2}}{u^6}\,du.$$</span> After this, substitute <span class="math-container">$$u=\sqrt{2}\sec(s)$$</span> to get <span class="math-container">$$\frac{1}{2}\int\sin^2(s)\cos^3(s)\,ds.$$</span></p>
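<p>As a numerical sanity check (a Python sketch; the sample interval $[3,4]$ and the quadrature routine are my own choices), one can compare the integral before and after each substitution. Carrying the trigonometric substitution through, with $\cos(s)=\sqrt{2}/u$, the integrand fully simplifies to $\frac{1}{2}\sin^2(s)\cos^3(s)$:</p>

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Original integrand on [3, 4], where x - 2 > 0
I1 = simpson(lambda x: math.sqrt((x - 2) / x**7), 3, 4)

# After u = sqrt(x): limits become [sqrt(3), 2]
I2 = simpson(lambda u: 2 * math.sqrt(u*u - 2) / u**6, math.sqrt(3), 2)

# After u = sqrt(2) sec(s): cos(s) = sqrt(2)/u gives the s-limits
s1 = math.acos(math.sqrt(2) / math.sqrt(3))
s2 = math.pi / 4
I3 = simpson(lambda s: 0.5 * math.sin(s)**2 * math.cos(s)**3, s1, s2)

print(I1, I2, I3)  # three matching values
```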
|
373,357 | <p>I've thought of using split-complex and complex numbers together for building a 3-dimensional space (related to my <a href="https://math.stackexchange.com/questions/372747/what-are-the-uses-of-split-complex-numbers?noredirect=1">previous question</a>). I then found out that using both together, we can have trouble with the product $ij$. So by adding another dimension, I've defined $$k=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$$
with the property $k^2=1$. So numbers of the form $a+bi+cj+dk$, where $(a,b,c,d) \in \Bbb R^4$, $i$ is the imaginary unit, $j$ is the elementary unit of the split-complex numbers, and $k$ is the number defined above, could be represented in a 4-dimensional space. I know that these numbers look like the Quaternions. They are not! So far, I came up with the multiplication table below:
$$\begin{array}{|l |l l l|}\hline
& i&j&k \\ \hline
i&-1&k&j \\
j& -k&1&i \\
k& -j&-i&1 \\ \hline
\end{array}$$</p>
<p>We can note that, as with the Quaternions, commutativity no longer holds for these numbers. When I showed this work to my math teacher he said basically this:</p>
<ol>
<li>It's not coherent to use numbers with different properties as basis elements, since $i^2=-1$ whereas $j^2=k^2=1$</li>
<li>2x2 matrices don't represent anything on a 4-dimensional space</li>
</ol>
<p>Can somebody explain these 2 things to me? What's incoherent here?</p>
| Willie Wong | 1,543 | <p><a href="https://math.stackexchange.com/a/373468/1543">rschwieb</a> already gave you the high powered answer. Here let me give you the low-powered version of what he wrote. </p>
<p>Consider the collection of $2\times 2$ matrices with real entries. We can write each matrix as
$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} $$
and if we re-organize the presentation, it can be identified with an element of $\mathbb{R}^4$
$$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} $$
By writing it as a matrix, you allow yourself to do "multiplication" by matrix multiplication. </p>
<p>Now, we can write
$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} = A \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix} + B \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} + C \begin{pmatrix} 0 & 0 \\ 1 & 0\end{pmatrix} + D \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} $$
which, if you know a bit of linear algebra, is just expressing a $2\times 2$ matrix in a <em>basis</em>. </p>
<p>As it turns out, what you've done is basically just choosing a different basis for the $2\times 2$ matrices. You chose</p>
<p>$$ \mathbf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \quad \mathbf{i} = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} $$
and
$$ \mathbf{j} = \begin{pmatrix} 0 & -1 \\ -1 & 0\end{pmatrix} \quad \mathbf{k} = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} $$</p>
<p>We can solve for the "standard" basis $\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}$ etc. in terms of this new basis. Plugging it back in to the expression then we have</p>
<p>$$ \begin{pmatrix} A & B \\ C & D\end{pmatrix} = \frac{A}{2} (\mathbf{1} + \mathbf{k}) + \frac{B}{2} (-\mathbf{i} - \mathbf{j}) + \frac{C}{2} (\mathbf{i} - \mathbf{j}) + \frac{D}{2} (\mathbf{1} - \mathbf{k}) $$</p>
<p>This identification can be reversed (exercise for you!). But in any case your identification of $a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ with the $\mathbb{R}^4$ vector $(a,b,c,d)$ corresponds then, to identifying the matrix $\begin{pmatrix} A & B \\ C & D\end{pmatrix}$ with the element
$$\begin{pmatrix} \frac12 (A + D) \\ \frac12 (C - B) \\ -\frac12 (B+C) \\ \frac12 (A-D) \end{pmatrix}$$
which can be realized as the <em>linear transformation of $\mathbb{R}^4$</em> given by the matrix multiplication
$$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} \mapsto \begin{pmatrix} \tfrac12 & 0 & 0 &\tfrac12 \\ 0 & -\tfrac12 & \tfrac12 & 0 \\ 0 & -\tfrac12 & -\tfrac12 & 0 \\ \tfrac12 & 0 & 0 & -\tfrac12 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix}$$</p>
<hr>
<p>What is the lesson behind all this? Given any four real numbers, you can of course identify them with an element of $\mathbb{R}^4$. The real question starts when you ask "how is this identification meaningful"? The first thing you can do is to try a little bit of linear algebra like I outlined above. But things get real exciting when you start connecting the algebra to geometry, and that's where the power of the Clifford Algebra that rschwieb mentioned really shines. </p>
<p>For the time being, if you cannot completely absorb the abstract nonsense in the definitions of Clifford algebras, it may be worthwhile to set your goal a tiny bit lower and think only about <a href="http://en.wikipedia.org/wiki/Geometric_algebra" rel="nofollow noreferrer">geometric algebra</a>. (Unfortunately the Wikipedia link is not the best way to learn about this, read <a href="http://www.mrao.cam.ac.uk/~clifford/introduction/intro/intro.html" rel="nofollow noreferrer">this first</a>, and if you are interested, perhaps follow a textbook such as <a href="http://faculty.luther.edu/~macdonal/laga/" rel="nofollow noreferrer">this</a>.)</p>
|
55,918 | <blockquote>
<p><strong>Zariski's Main Theorem</strong> (<a href="http://www.numdam.org/numdam-bin/fitem?id=PMIHES_1966__28__5_0" rel="noreferrer">EGA IV</a>, Thm 8.12.6): Suppose $Y$ is a quasi-compact and quasi-separated scheme, and $f:X\to Y$ is quasi-finite, separated, and finitely presented. Then $f$ factors as $X\xrightarrow{g} Z\xrightarrow{h} Y$, where $g$ is an open immersion and $h$ is finite.</p>
</blockquote>
<p>Is there a canonical choice for the factorization $f=h\circ g$, at least under some circumstances? </p>
<p>
For example, suppose $f$ factors as $X\to U\to Y$, where $X\to U$ is finite étale and $U\to Y$ is a Stein open immersion (i.e. the pushforward of $\mathcal O_U$ is $\mathcal O_Y$). Then I'm pretty sure the Stein factorization $X\to \mathit{Spec}_Y(f_*\mathcal O_X)\to Y$ witnesses Zariksi's Main Theorem (i.e. is an open immersion followed by a finite map).
</p>
<p>In general, when does the Stein factorization witness ZMT? In the cases where it fails to witness ZMT (e.g. $X$ finite over an affine open in $Y$), is there some other canonical witness?</p>
| Qing Liu | 3,485 | <p>I realized that I completely missed the second part of the question (the example). Note that ZMT implies that $f$ is a quasi-affine morphism. Then $X\to \mathit{Spec}(f_*\mathcal O_X)$ is always an open immersion (see <a href="http://www.math.columbia.edu/algebraic_geometry/stacks-git/morphisms.pdf" rel="nofollow noreferrer">the Stacks project</a>, chapter 21, Lemma 12.3). So the Stein factorization witnesses ZMT if and only if $f_*\mathcal O_X$ is finite over $\mathcal O_Y$.</p>
<p>Some comments: one should note that in general, the quasi-coherent algebra $f_*\mathcal O_X$ is not finite over $\mathcal O_Y$, and even worse, the morphism $\mathit{Spec}(f_*\mathcal O_X)\to Y$ may not be of finite type (take $Y$ an algebraic variety and $f$ an open immersion; then whether $\mathcal O(X)$ is finitely generated is related to Hilbert's 14th problem). Now consider a ZMT factorization $X\to Z\to Y$. If the complement of $X$ in $Z$ consists only of points of depth at least 2 (see <a href="https://mathoverflow.net/questions/45347">discussions here</a>), then $f_*\mathcal O_X=h_*\mathcal O_Z$ is finite and we are happy. This happens when $X$ is normal <s>(or with non-normal locus finite over $Y$) and surjective to $Y$</s> with complement in $Z$ of codimension at least 2. But I don't have a general criterion. </p>
|
272,173 | <p>I'm looking for several references on the spectral analysis of the Laplacian operator. It is such a well-known topic, but I'm struggling a bit to locate modern systematic expositions in the literature. </p>
<p>I'd appreciate multiple suggestions that explore the topic using different approaches too.</p>
<p>I'm particularly interested in the variational characterization of the eigenvalues and eigenfunctions.</p>
| Shahrooz | 19,885 | <p>Actually, it is dangerous to answer this question, since there are a lot of good resources in this direction and there are many famous professors here who know this field much better than I do. But I want to recommend a nice book that I believe fits your purpose:</p>
<p>"Spectral Theory in Riemannian Geometry", written by Olivier Lablée, available
<a href="http://bookstore.ams.org/emstext-17" rel="noreferrer">here</a>.</p>
<p>It is a very readable book, especially chapter $6$: "Can one hear the holes of a drum?"</p>
<p>Also, you can see the book "Functional Analysis, Spectral Theory, and Applications", written by Einsiedler and Ward, <a href="http://www.springer.com/gp/book/9783319585390" rel="noreferrer">here</a>.</p>
<p>I hope these books are useful as a starting point (both for study and for reference), especially the first one.</p>
|
3,977,081 | <p>I’m just a high school student, so I may be somewhat logically flawed in understanding this.</p>
<p>According to wikipedia, the definition of function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>, and the function <span class="math-container">$f$</span> maps <span class="math-container">$x$</span> to <span class="math-container">$y$</span>.</p>
<p>But how about <span class="math-container">$f(x)$</span>? I often see notation such as <span class="math-container">$f(1) = 0$</span> in my textbook. Doesn’t that mean <span class="math-container">$f(x)$</span> is first assigned a value, which is then transferred to <span class="math-container">$y$</span>? So there must be two transitions/mappings between the input <span class="math-container">$x$</span> and the output <span class="math-container">$y$</span>, right?</p>
<p>My conceptual model of function is like this: A definition of function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span>, a forwarder <span class="math-container">$f(x)$</span> with its domain <span class="math-container">$F$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>. The function <span class="math-container">$f$</span> first maps <span class="math-container">$x$</span> to <span class="math-container">$f(x)$</span> then maps <span class="math-container">$f(x)$</span> to <span class="math-container">$y$</span>.</p>
<p>These two definitions are not quite the same.</p>
<hr />
<p>On 2022.6.29: The picture below had solved my confusion.</p>
<p><a href="https://i.stack.imgur.com/0idkt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0idkt.jpg" alt="enter image description here" /></a></p>
| Michael Burr | 86,421 | <p>We talk about domains and codomains of a function, not of the variables of a function. You might not have come across the term codomain before, but I think that it's the best for what you're trying to describe.</p>
<p>So, the domain of <span class="math-container">$f$</span> is <span class="math-container">$X$</span> and the codomain of <span class="math-container">$f$</span> is <span class="math-container">$Y$</span>. Often, we write this as <span class="math-container">$f:X\rightarrow Y$</span> to indicate that the valid inputs to <span class="math-container">$f$</span> are points in <span class="math-container">$X$</span> and every output of <span class="math-container">$f$</span> is a point in <span class="math-container">$Y$</span>.</p>
<p>In your example, <span class="math-container">$x$</span> is a point in <span class="math-container">$X$</span>, which is the domain of <span class="math-container">$f$</span>, and <span class="math-container">$y$</span> is a point in the codomain of <span class="math-container">$f$</span>, which is <span class="math-container">$Y$</span>.</p>
|
3,977,081 | <p>I’m just a high school student, so I may be somewhat logically flawed in understanding this.</p>
<p>According to wikipedia, the definition of function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>, and the function <span class="math-container">$f$</span> maps <span class="math-container">$x$</span> to <span class="math-container">$y$</span>.</p>
<p>But how about <span class="math-container">$f(x)$</span>? I often see notation such as <span class="math-container">$f(1) = 0$</span> in my textbook. Doesn’t that mean <span class="math-container">$f(x)$</span> is first assigned a value, which is then transferred to <span class="math-container">$y$</span>? So there must be two transitions/mappings between the input <span class="math-container">$x$</span> and the output <span class="math-container">$y$</span>, right?</p>
<p>My conceptual model of function is like this: A definition of function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span>, a forwarder <span class="math-container">$f(x)$</span> with its domain <span class="math-container">$F$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>. The function <span class="math-container">$f$</span> first maps <span class="math-container">$x$</span> to <span class="math-container">$f(x)$</span> then maps <span class="math-container">$f(x)$</span> to <span class="math-container">$y$</span>.</p>
<p>These two definitions are not quite the same.</p>
<hr />
<p>On 2022.6.29: The picture below had solved my confusion.</p>
<p><a href="https://i.stack.imgur.com/0idkt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0idkt.jpg" alt="enter image description here" /></a></p>
| CyclotomicField | 464,974 | <p>The notation <span class="math-container">$f(1)=0$</span> means that <span class="math-container">$f$</span> maps <span class="math-container">$1 \in X$</span> to <span class="math-container">$0 \in Y$</span>. This agrees with the familiar notation <span class="math-container">$y=f(x)$</span> form that most people encounter in high school such as <span class="math-container">$y=ax^2+bx+c$</span>. This means that <span class="math-container">$f(x) \in Y$</span> is always true. There is no other assignment operation occurring. Alternatively if you define functions as a kind of relation <span class="math-container">$f \subset X \times Y$</span> then <span class="math-container">$f(1)=0$</span> means <span class="math-container">$(1,0) \in f$</span>. In both cases it's merely two names of the same thing.</p>
|
4,064,209 | <p><a href="https://i.stack.imgur.com/Ux3cH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ux3cH.png" alt="enter image description here" /></a></p>
<p>Above is the exercise. Showing that <span class="math-container">$S$</span> is bounded is straightforward from <span class="math-container">$A$</span> being bounded and the triangle inequality, but I thought that to show <span class="math-container">$S$</span> is closed, I would do the usual thing of assuming <span class="math-container">$w \in \bar S$</span>; then by definition, there is a sequence <span class="math-container">$\{s_n\}$</span> with elements in <span class="math-container">$S$</span> so that <span class="math-container">$s_n \to w$</span> w.r.t. the absolute metric.</p>
<p>The problem is now I am stuck and can't seem to use the fact that <span class="math-container">$A$</span> is closed to conclude that <span class="math-container">$w \in S$</span>. Is this approach even possible? And is there a better approach, perhaps showing that <span class="math-container">$\mathbb{R^2} \backslash A$</span> is open is easier? Thanks!</p>
<p>Note: Please do not use compactness as I have not covered it yet!</p>
<p>(I have an exam soon and so I would like exposure to as many problems as possible. Sorry if I haven't shown too much work before posting.)</p>
| RRL | 148,510 | <p>The weakest condition under which this holds is with at least one of the series <span class="math-container">$\sum a_k$</span> and <span class="math-container">$\sum b_k$</span> converging absolutely.</p>
<p>Suppose that <span class="math-container">$\sum_{k=0}^\infty a_k$</span> is absolutely convergent and <span class="math-container">$\sum_{k=0}^\infty b_k$</span> is convergent, so that there exist bounds <span class="math-container">$A,B > 0$</span> such that for all <span class="math-container">$n \in \mathbb{N}$</span>,</p>
<p><span class="math-container">$$\tag{1}\sum_{k=0}^n |a_k| < A, \quad \left|\sum_{k=0}^n b_k\right| < B$$</span></p>
<p>Define the partial sums <span class="math-container">$A_n = \sum_{k=0}^n a_k$</span>, <span class="math-container">$B_n = \sum_{k=0}^n b_k$</span>, and note that</p>
<p><span class="math-container">$$C_n = \sum_{k=0}^n\sum_{i+j=k}a_ib_j = \sum_{k=0}^n\sum_{j=0}^ka_jb_{k-j} $$</span></p>
<p>The objective is to prove that <span class="math-container">$\lim_{n \to \infty}C_n = \lim_{n \to \infty}A_nB_n = \sum_{k=0}^\infty a_k \sum_{k=0}^\infty b_k. $</span></p>
<p>We have (looking at a table with entries <span class="math-container">$a_kb_j$</span> in row <span class="math-container">$k$</span> and column <span class="math-container">$j$</span> will make this clear)</p>
<p><span class="math-container">$$\tag{2}|C_{2n}- A_nB_n| = \left|\sum_{k=0}^{n-1}a_k\sum_{j=n+1}^{2n-k}b_j + \sum_{k=n+1}^{2n}a_k\sum_{j=0}^{2n-k}b_j\right| \\ \leqslant \sum_{k=0}^{n-1}|a_k|\left|\sum_{j=n+1}^{2n-k}b_j\right| + \sum_{k=n+1}^{2n}|a_k|\left|\sum_{j=0}^{2n-k}b_j\right|$$</span></p>
<p>By the Cauchy criterion, for any <span class="math-container">$\epsilon > 0$</span> there exists <span class="math-container">$N \in \mathbb{N}$</span> such that for all <span class="math-container">$n \geqslant N$</span> and all <span class="math-container">$m \geqslant 0$</span>, we have</p>
<p><span class="math-container">$$\tag{3}\sum_{k=n+1}^{n+m} |a_k| < \frac{\epsilon}{A+B}, \quad \left|\sum_{j=n+1}^{n+m} b_j\right| < \frac{\epsilon}{A+B}$$</span></p>
<p>Applying (3) with <span class="math-container">$m = n-k$</span> and the bounds in (1) to (2), we get for all <span class="math-container">$n \geqslant N$</span></p>
<p><span class="math-container">$$|C_{2n}- A_nB_n|\leqslant A \cdot\frac{\epsilon}{A+B} + \frac{\epsilon}{A+B} \cdot B= \epsilon$$</span></p>
<p>Since <span class="math-container">$|C_{2n} - \lim_{n \to \infty}A_nB_n| \leqslant |C_{2n} - A_nB_n|+ |A_nB_n - \lim_{n \to \infty}A_nB_n|$</span> this implies that the subsequence <span class="math-container">$C_{2n}$</span> converges to <span class="math-container">$\lim_{n \to \infty} A_nB_n$</span>. By a similar argument we can show that the subsequence <span class="math-container">$C_{2n+1}$</span> converges to <span class="math-container">$\lim_{n \to \infty} A_{n+1}B_n$</span>.
Q.E.D.</p>
<hr />
<p>The result does not hold if both series <span class="math-container">$\sum a_k$</span> and <span class="math-container">$\sum b_k$</span> are conditionally convergent. The standard counterexample is</p>
<p><span class="math-container">$$a_k = b_k = \frac{(-1)^{k+1}}{\sqrt{k+1}}$$</span></p>
|
3,898,411 | <p>How can I prove that <span class="math-container">$ (a_n) = \frac{n^3 -1}{2n^3-n} $</span> converges?</p>
<p>I've calculated the limit and got a result of a 1/2.</p>
<p>Now I need to prove that this limit exists. So, I tried to use the definition and find an <span class="math-container">$M$</span> such that <span class="math-container">$n > M \implies |a_n - L| < \epsilon$</span> for <span class="math-container">$ \epsilon > 0 $</span>, but I couldn't reach a result.</p>
<p>Is there any strategy to prove that this sequence converges?</p>
| Jack LeGrüß | 831,874 | <p>You can solve the problem in three cases:</p>
<p><strong>Case 1</strong>: <span class="math-container">$m\ge2$</span> and <span class="math-container">$2^m>3^n$</span>.</p>
<p>Here, looking modulos <span class="math-container">$3$</span> and <span class="math-container">$4$</span> demands that <span class="math-container">$m$</span> is even and <span class="math-container">$n$</span> is odd. The problem then reduces to the form
<span class="math-container">$$(2^{m/2})^2=x^2+3\,(3^{(n-1)/2})^2 \tag{1}$$</span>
for some integer <span class="math-container">$x$</span>; thus we are looking at solutions to the Diophantine equation
<span class="math-container">$$z^2=x^2+3y^2\,.$$</span>
Checking modulo <span class="math-container">$8$</span>: since <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are both odd, <span class="math-container">$z^2=x^2+3y^2\equiv 4\mod 8$</span>; as <span class="math-container">$z^2=2^m$</span>, this forces <span class="math-container">$m=2$</span>, so the only solution to Eq. (1) is <span class="math-container">$$(m,n)=(2,1)\,.$$</span></p>
<p><strong>Case 2</strong>: <span class="math-container">$m\ge2$</span> and <span class="math-container">$2^m<3^n$</span>.</p>
<p>Here also, reducing modulo <span class="math-container">$3$</span> and modulo <span class="math-container">$4$</span> instead shows that <span class="math-container">$m$</span> is odd and <span class="math-container">$n$</span> is even. Now, the problem reduces to the form
<span class="math-container">$$(3^{n/2})^2=x^2+2\,(2^{(m-1)/2})^2 \tag{2}$$</span>
for some integer <span class="math-container">$x$</span>; thus, this time, we are looking at solutions to the Diophantine equation
<span class="math-container">$$z^2=x^2+2y^2\,.$$</span>
The general solution to this equation (derived in a manner similar to Pythagorean triples) is given by <span class="math-container">$$z=a^2+2b^2\,,\qquad x=a^2-2b^2\,,\qquad y=2ab$$</span> for some integers <span class="math-container">$a,b$</span>. Comparing to Eq. (2) forces <span class="math-container">$a=\pm 1$</span> and <span class="math-container">$b=\pm 2^{(m-3)/2}$</span>, which implies that <span class="math-container">$3^{n/2}=z=(\pm 1)^2+2(\pm 2^{(m-3)/2})^2= 1+2^{m-2}$</span>, that is, <span class="math-container">$3^{n/2}-2^{m-2}=1$</span>, which has the only solutions <span class="math-container">$$(m,n)\in\{(3,2),(5,4)\}$$</span> thanks to Mihailescu's theorem (Catalan's conjecture).</p>
<p><strong>Case 3</strong>: <span class="math-container">$m\in\{0,1\}$</span>.</p>
<p>For <span class="math-container">$n=0$</span>, we have the obvious solutions <span class="math-container">$$(m,n)\in\{(0,0),(1,0)\}\,.$$</span> Hence suppose <span class="math-container">$n\ge 1$</span>. Here, we have <span class="math-container">$3^n-2^m=y^2$</span> for some integer <span class="math-container">$y$</span>, which can be rewritten as
<span class="math-container">$$y^2=3^{e_n}\left(3^{(n-e_n)/3}\right)^3-2^m\,,$$</span> where <span class="math-container">$e_n\in\{0,1,2\}$</span> and <span class="math-container">$n\equiv e_n \pmod 3$</span>. Multiplying through by <span class="math-container">$3^{2e_n}$</span>, we see that we are actually looking at integer solutions to the Mordell elliptic curve <span class="math-container">$$y^2=x^3-2^m\cdot 3^{2e_n}\,,$$</span> and a solution to your problem corresponds to <span class="math-container">$x=3^{(n+2e_n)/3}$</span>. Mordell (1920) proved that there are only <em>finitely many</em> integral solutions to such elliptic curves (you can find the integer solutions listed here <a href="https://web.archive.org/web/20110618033146/http://tnt.math.se.tmu.ac.jp/simath/MORDELL/MORDELL-" rel="nofollow noreferrer">https://web.archive.org/web/20110618033146/http://tnt.math.se.tmu.ac.jp/simath/MORDELL/MORDELL-</a>); in particular, for <span class="math-container">$m=1$</span> and <span class="math-container">$e_n=0$</span>, it is a theorem of Fermat that the only integral solutions to the elliptic curve are <span class="math-container">$(x,y)=(3,\pm 5)$</span>, which thus gives the only solution to your problem as <span class="math-container">$$(m,n)=(1,3)\,.$$</span> When <span class="math-container">$m=e_n=0$</span>, the only integral solution is <span class="math-container">$(x,y)=(1,0)$</span>, which doesn’t correspond to a solution to your problem.
For <span class="math-container">$e_n=1$</span>, the tables at the linked address show that there are <strong>no</strong> integral solutions for <span class="math-container">$m=0$</span>, while for <span class="math-container">$m=1$</span> the only ones are <span class="math-container">$(x,y)=(3,\pm 3)$</span>, which forces the only solution to your problem in this case to be <span class="math-container">$$(m,n)=(1,1)\,.$$</span> Finally, for <span class="math-container">$e_n=2$</span>, there are <strong>no</strong> integral solutions to the elliptic curve for <span class="math-container">$m=1$</span>, but for <span class="math-container">$m=0$</span>, the only solution is <span class="math-container">$(x,y)=(13,46)$</span>, which does not correspond to a solution to your problem.</p>
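<p>As a sanity check on the case analysis, a small brute-force search (Python sketch, added for illustration; not part of the original argument) over the box $0\le m\le 8$, $0\le n\le 6$ recovers exactly the seven solutions found above:</p>

```python
from math import isqrt

def is_square(k):
    # perfect-square test for a nonnegative integer
    return k >= 0 and isqrt(k) ** 2 == k

# all (m, n) in the box for which |2^m - 3^n| is a perfect square
found = sorted((m, n) for m in range(9) for n in range(7)
               if is_square(abs(2**m - 3**n)))
expected = sorted([(0, 0), (1, 0), (1, 1), (2, 1), (1, 3), (3, 2), (5, 4)])
assert found == expected
```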
|
191,796 | <blockquote>
<p>I met with the following difficulty reading the paper <a href="http://www.cnki.com.cn/Article/CJFDTotal-ZZDZ198801008.htm" rel="nofollow">Li, Rong Xiu "The properties of a matrix order column" (1988)</a>:</p>
<p>Define the matrix $A=(a_{jk})_{n\times n}$, where
$$a_{jk}=\begin{cases}
j+k\cdot i&j<k\\
k+j\cdot i&j>k\\
2(j+k\cdot i)& j=k
\end{cases}$$
and $i^2=-1$.</p>
<p>The author says it is easy to show that $\operatorname{rank}(A)=n$. I have proved it for $n\le 5$, but I couldn't prove it for general $n$.</p>
</blockquote>
<p>Following is an attempt to solve this problem:
let
$$A=P+iQ$$
where
$$P=\begin{bmatrix}
2&1&1&\cdots&1\\
1&4&2&\cdots& 2\\
1&2&6&\cdots& 3\\
\cdots&\cdots&\cdots&\cdots&\cdots\\
1&2&3&\cdots& 2n
\end{bmatrix},Q=\begin{bmatrix}
2&2&3&\cdots& n\\
2&4&3&\cdots &n\\
3&3&6&\cdots& n\\
\cdots&\cdots&\cdots&\cdots&\cdots\\
n&n&n&\cdots& 2n\end{bmatrix}$$</p>
<p>and define
$$J=\begin{bmatrix}
1&0&\cdots &0\\
-1&1&\cdots& 0\\
\cdots&\cdots&\cdots&\cdots\\
0&\cdots&-1&1
\end{bmatrix}$$
then we have
$$JPJ^T=J^TQJ=\begin{bmatrix}
2&-2&0&0&\cdots&0\\
-2&4&-3&\ddots&0&0\\
0&-3&6&-4&\ddots&0\\
\cdots&\ddots&\ddots&\ddots&\ddots&\cdots\\
0&0&\cdots&-(n-2)&2(n-1)&-(n-1)\\
0&0&0&\cdots&-(n-1)&2n
\end{bmatrix}$$
and $$A^HA=(P-iQ)(P+iQ)=P^2+Q^2+i(PQ-QP)=\binom{P}{Q}^T\cdot\begin{bmatrix}
I& iI\\
-iI & I
\end{bmatrix} \binom{P}{Q}$$</p>
| Włodzimierz Holsztyński | 8,385 | <p>A modest introductory step only. The following partial algebraization might be useful: the present matrix is given by:</p>
<ul>
<li>$\quad a_{kk}\ :=\ 2\cdot(k\ +\ i\cdot k)$</li>
<li>$\quad a_{km}\ :=\ \min(k,m)\ +\ \imath\cdot\max(k,m)$</li>
</ul>
<p>for $k,m=1,\ldots,n$ and $k\ne m$. However, we may equivalently consider the matrix obtained from the given one by multiplying all entries by $1-i$, which does not change the rank. We obtain a matrix $(b_{km})$ as follows:</p>
<ul>
<li>$\quad b_{kk}\,\ :=\,\ 4\cdot k$</li>
<li>$\quad b_{km}\,\ :=\,\ (k+m)\ +\ \imath\cdot|k-m|$</li>
</ul>
<p>for $k,m=1,\ldots,n$ and $k\ne m$.</p>
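<p>Since all entries are Gaussian integers, both the reduction and the rank claim can be checked exactly for the small cases mentioned in the question. The Python sketch below is added for illustration: it computes the determinant by Laplace expansion (every intermediate value here is a Gaussian integer small enough to be represented exactly by Python's <code>complex</code>), verifies that multiplying by $1-i$ produces the matrix $(b_{km})$, and confirms $\det A\ne 0$ for $n\le 5$.</p>

```python
def det(M):
    # Laplace expansion along the first row; exact for these small sizes
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for c in range(n):
        minor = [row[:c] + row[c + 1:] for row in M[1:]]
        total += (-1)**c * M[0][c] * det(minor)
    return total

def a(j, k):
    # entries of the original matrix (1-indexed)
    return 2*(j + 1j*j) if j == k else min(j, k) + 1j*max(j, k)

def b(j, k):
    # proposed entries after multiplying through by 1 - i
    return 4*j if j == k else (j + k) + 1j*abs(j - k)

for n in range(1, 6):
    A = [[a(j, k) for k in range(1, n + 1)] for j in range(1, n + 1)]
    B = [[b(j, k) for k in range(1, n + 1)] for j in range(1, n + 1)]
    # multiplying every entry by 1 - i turns A into B exactly
    assert all((1 - 1j)*A[j][k] == B[j][k] for j in range(n) for k in range(n))
    assert det(A) != 0   # nonzero determinant, i.e. full rank, for n <= 5
```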
<blockquote>
<p><em>Good luck, and I will try to continue too.</em></p>
</blockquote>
|
2,820,696 | <p>We were just discussing with colleagues the number of combinations you could get with two "normal", $6$-sided dice. Almost all of my colleagues were saying $36$ ($6^2$), which I agree with as such, but you will get almost half of the possible combinations counted twice.
If I count, the number of different combinations with $2$ dice is $21.$ I'm not able though to get to the formula that would let me calculate this for $3,4,5$ or more dice.</p>
<p>I remember from my old mathematics courses that there are several different formulas you can "pick" depending on repetition, whether order matters, etc.
What would be the appropriate formula taking into account $n$ as the number of dice and $s$ as the number of sides?</p>
| drhab | 75,923 | <p>This count equals the number of solutions of $a_1+\cdots+a_s=n$ where the $a_i$ are nonnegative integers.</p>
<p>Here $a_i$ stands for the number of dice that show face $i$.</p>
<p>Applying <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">stars and bars</a> we find $$\binom{n+s-1}{s-1}$$ possibilities.</p>
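<p>A quick enumeration (Python sketch, added for illustration) confirms the stars-and-bars count against a direct listing of the unordered outcomes:</p>

```python
from itertools import combinations_with_replacement
from math import comb

def unordered_outcomes(n, s):
    # multisets of n results from an s-sided die
    return sum(1 for _ in combinations_with_replacement(range(1, s + 1), n))

def stars_and_bars(n, s):
    return comb(n + s - 1, s - 1)

assert unordered_outcomes(2, 6) == 21   # the value counted by hand in the question
for n in range(1, 5):
    for s in (4, 6, 8, 20):
        assert unordered_outcomes(n, s) == stars_and_bars(n, s)
```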
|
1,299,127 | <p>Can someone please help me answer this question as I cannot seem to get to the answer.
Please note that the Cauchy integral formula must be used in order to solve it.</p>
<p>Many thanks in advance!
\begin{equation*}
\int_{|z|=3}\frac{e^{zt}}{z^2+4}\,dz=\pi i\sin(2t).
\end{equation*}</p>
<p>Also $|z| = 3$ is given the counterclockwise direction.</p>
| Olivier Oloa | 118,798 | <p><strong>Hint.</strong> The denominator $\displaystyle z^2+4=(z-2i)(z+2i)$ of the function
$$
z \longmapsto \frac{e^{zt}}{z^2+4}
$$ gives two poles inside $|z|=3$ and by the Cauchy integral formula we have
$$
\int_{|z|=3}\frac{e^{zt}}{z^2+4}dz=2i\pi\left({\rm{Res}}_{z=-2i}f(z)+{\rm{Res}}_{z=2i}f(z)\right).
$$ You conclude using for example
$$
{\rm{Res}}_{z=-2i}f(z)=\lim_{z\to -2i}\left((z+2i)\times\frac{e^{zt}}{z^2+4}\right)=\ldots
$$</p>
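<p>The stated value can also be corroborated numerically by parametrizing the circle. The Python sketch below (added for illustration) approximates the contour integral with the trapezoid rule, which converges extremely fast for smooth periodic integrands:</p>

```python
import cmath
import math

def contour_integral(t, N=4000):
    # parametrize |z| = 3 counterclockwise: z = 3 e^{i theta}, dz = 3i e^{i theta} dtheta
    total = 0.0 + 0.0j
    for k in range(N):
        theta = 2*math.pi*k/N
        z = 3*cmath.exp(1j*theta)
        dz = 3j*cmath.exp(1j*theta)*(2*math.pi/N)
        total += cmath.exp(z*t)/(z*z + 4)*dz
    return total

for t in (0.0, 0.5, 1.3):
    assert abs(contour_integral(t) - math.pi*1j*math.sin(2*t)) < 1e-10
```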
|
2,873,474 | <p>My staff room is having a debate about the construction of sample spaces.</p>
<blockquote>
<p>When you toss a coin twice, do you consider the sample space to be
$$\{H,H\}, \{H,T\}, \{T,T\}$$ or $$\{H,H\}, \{H,T\}, \{T,H\},\{T,T\}$$</p>
</blockquote>
<p>In my humble opinion, I feel there is no single correct answer at the moment because we do not have enough information. My feeling is that a sample space can only be established here if the order is relevant to the question at hand. In the absence of this information, either one could be the sample space.</p>
<p>However, I would like the community's thoughts on this. Is there in fact a single correct answer? Is there a mathematical reason for it being that answer?</p>
| Community | -1 | <p>From my book: </p>
<blockquote>
<p>The set of all possible outcomes is the <em>sample space</em> corresponding to an
experiment</p>
</blockquote>
<p>The key word is "experiment": the sample space consists of those outcomes that are possible for the experiment in question. For example:</p>
<blockquote>
<p>The number of jobs in a print queue of a mainframe computer may be
modeled as</p>
</blockquote>
<p>$$ \Omega = \{ 0 ,1, 2, ,3 \cdots\} $$</p>
<blockquote>
<p>the set of all non-negative integers. However, in practice, there is
likely an upper limit $N$ on it.</p>
</blockquote>
<p>$$ \Omega = \{ 0, 1, 2, 3, \cdots, N \} $$</p>
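<p>Back to the coin question: the two candidate sample spaces can be compared directly. The point, illustrated in the Python sketch below (added here, not part of the original answer), is that the ordered space has four <em>equally likely</em> outcomes under a fair coin, while the unordered space has three outcomes that are <em>not</em> equally likely:</p>

```python
from itertools import product
from collections import Counter

# ordered sample space: ('H','H'), ('H','T'), ('T','H'), ('T','T')
ordered = list(product("HT", repeat=2))
assert len(ordered) == 4                 # 4 equally likely outcomes

# collapse order to get the unordered sample space
unordered = Counter(tuple(sorted(o)) for o in ordered)
assert len(unordered) == 3               # {H,H}, {H,T}, {T,T}
# but these 3 outcomes are not equally likely under a fair coin:
assert unordered[('H', 'T')] == 2        # HT arises in two ways
assert unordered[('H', 'H')] == unordered[('T', 'T')] == 1
```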
|
3,493,151 | <p>This is a calculus problem from a high school math contest in Greece,from 2012.</p>
<p>I wish to know some solutions for this. I attempted to solve it.</p>
<blockquote>
<p>Let <span class="math-container">$f:\Bbb{R} \to \Bbb{R}$</span> differentiable such that <span class="math-container">$\lim_{x \to +\infty}f(x)=+\infty$</span> and <span class="math-container">$\lim_{x \to +\infty}\frac{f'(x)}{f(x)}=2$</span>.Show that <span class="math-container">$$\lim_{x \to +\infty}\frac{f(x)}{x^{2012}}=+\infty$$</span></p>
</blockquote>
<p>Here is my attempt:</p>
<p><span class="math-container">$\frac{f(x)}{x^{2012}}=e^{\ln{\frac{f(x)}{x^{2012}}}}$</span></p>
<p>Now <span class="math-container">$\ln{\frac{f(x)}{x^{2012}}}=\ln{f(x)}-2012\ln x=\ln{x}\left( \frac{\ln{f(x)}}{\ln{x}}-2012\right)$</span></p>
<p>Now from the hypothesis we see that <span class="math-container">$\lim_{x \to +\infty}\ln{f(x)}=+\infty$</span></p>
<p>By L'Hospital's rule we have that <span class="math-container">$$\lim_{x \to +\infty}\frac{\ln{f(x)}}{\ln{x}}=\lim_{x \to +\infty}x \frac{f'(x)}{f(x)}=2(+\infty)=+\infty$$</span></p>
<p>Thus <span class="math-container">$$\lim_{x \to +\infty}\ln{x}\left( \frac{\ln{f(x)}}{\ln{x}}-2012\right)=+\infty$$</span></p>
<p>Finally <span class="math-container">$\lim_{x \to +\infty}\frac{f(x)}{x^{2012}}=+\infty$</span></p>
<p>Is this solution correct?</p>
<p>If it is,then are there also better and quicker ways to solve this?</p>
<p>Thank you in advance.</p>
| Community | -1 | <p>I think you can just use L'Hôpital's rule inductively:</p>
<p>Given <span class="math-container">$n \in \Bbb{N}$</span>, we have (since the numerator and denominator diverge):
<span class="math-container">$\lim_{x \to \infty}\frac{f(x)}{x^n} = \lim_{x \to \infty} \frac{f’(x)}{nx^{n-1}} = \lim_{x \to \infty} \frac{f(x) \frac{f’(x)}{f(x)}}{nx^{n-1}} = \frac{2}{n} \lim_{x \to \infty} \frac{f(x)}{x^{n-1}} $</span></p>
<p>So inductively we get:</p>
<p><span class="math-container">$\lim_{x \to \infty}\frac{f(x)}{x^n} = \frac{2^n}{n!} \lim_{x \to \infty}f(x) = \infty$</span></p>
|
118,029 | <p>It is well known that a generic hypersurface of degree $2n-3$ in $\mathbb CP^n$ has finite number of lines. I would like to ask a couple of questions about lines on Fermat hypersurfaces and their symmetries: </p>
<p>$$\sum_{i=1}^{n+1}x_i^{2n-3}=0.$$</p>
<p>Fermat hypersurfaces have a group of automorphisms of order $(2n-3)^n(n+1)!$. In the case $n=3$ (the case of cubic) this group is acting transitively on the collection of $27$ lines and this rases some questions.</p>
<p>The first question is pedagogical, I plan to use it for teaching and really want to know the answer. </p>
<p><em>Question 1.</em> Is there some slick way to give a high-school proof of the fact that there are exactly $27$ lines on Femat cubic in $\mathbb CP^3$ using (or not) the symmetries of the cubic but without using any theory at all?</p>
<p>Further questions are not for teaching, I am just curious about them.</p>
<p><em>Question 2.</em> Is it known that a Fermat hypersurface of degree $2n-3$ has a finite number of lines for any
$n$? Is it known that these lines are never multiple?</p>
<p><em>Question 3.</em> Can one say something about the number of orbits of the action of symmetries on lines on
a Fermat hypersurface of degree $2n-3$? For example, what happen in the case of quintic, $n=5$? According
to wiki a generic quintic has $2875=125\cdot 23$ lines, so if Fermat quintic is generic, there should be more than one orbit in the action on lines on it. What is the number of orbits?</p>
<p>I would be happy to know the answer on any of these questions.</p>
| Sasha | 4,428 | <p>Assume for example that $n = 2k + 1$ is odd. Let $\xi^{2n-3} = -1$. Then for any $(y_0,y_1,\dots,y_k) \in \mathbb{CP}^k$ the point $(y_0,\xi y_0,y_1,\xi y_1,\dots,y_k, \xi y_k)$ is on the Fermat hypersurface. So, it contains $\mathbb{CP}^k$. In particular, if $k \ge 2$ (and so $n \ge 5$) the number of lines is infinite. A similar argument works for even $n \ge 6$.</p>
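<p>This construction is easy to check numerically. The Python sketch below (added for illustration) takes $n=5$ (degree $7$) and verifies that every point of the form $(y_0,\xi y_0,y_1,\xi y_1,y_2,\xi y_2)$ lies on the Fermat hypersurface, so the hypersurface contains a $\mathbb{CP}^2$ and hence infinitely many lines:</p>

```python
import cmath
import random

n = 5
d = 2*n - 3                      # degree 7
xi = cmath.exp(1j*cmath.pi/d)    # xi**d = -1
random.seed(0)

for _ in range(20):
    # random point (y0, y1, y2) of a CP^2 worth of parameters
    y = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
    x = []
    for yj in y:
        x += [yj, xi*yj]         # (y0, xi*y0, y1, xi*y1, y2, xi*y2)
    # the terms pair up as y_j^d + (xi*y_j)^d = y_j^d (1 + xi^d) = 0
    assert abs(sum(c**d for c in x)) < 1e-9
```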
|
149,161 | <p>A finite simplicial set is a simplicial set having only a finite number of non-degenerate simplices. It is not hard to show that every finite simplicial set has only a finite number of simplices in each degree. My question is: does the converse hold? That is, is every simplicial set having a finite number of simplices in each degree necessarily finite?</p>
| Peter LeFanu Lumsdaine | 2,273 | <p>Take $X$ to be the “infinite-dimensional dunce’s cap”, with a unique non-degenerate simplex $x_n$ in each dimension, and with every face of $x_n$ equal to $x_{n-1}$.</p>
<p>Explicitly, $X_n = \coprod_{m \leq n} \mathrm{Surj}([n],[m])$. So it’s clear that this has finitely many simplices in each dimension, but infinitely many non-degenerate ones in total.</p>
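<p>One can make the count explicit: a monotone surjection $[n]\to[m]$ is determined by which $n-m$ of the $n$ "steps" it collapses, so $|X_n|=\sum_{m\le n}\binom{n}{m}=2^n$, finite in every degree, while the non-degenerate simplices (one $x_n$ per degree) are infinite in number. The brute-force Python sketch below (added for illustration) confirms this for small $n$:</p>

```python
from itertools import product

def monotone_surjections(n, m):
    # order-preserving surjections [n] = {0..n} -> [m] = {0..m}
    count = 0
    for f in product(range(m + 1), repeat=n + 1):
        if all(f[i] <= f[i + 1] for i in range(n)) and set(f) == set(range(m + 1)):
            count += 1
    return count

for n in range(6):
    total = sum(monotone_surjections(n, m) for m in range(n + 1))
    assert total == 2**n                      # finitely many n-simplices
    assert monotone_surjections(n, n) == 1    # the unique non-degenerate x_n
```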
|
62,526 | <p>I want to show the following:</p>
<p>$X$ $n$-connected $\iff $ any continuous map $f:K \rightarrow X$ where $K$ is a cell complex of dimension $\leq n$ is homotopic to a constant map</p>
<p>For this I think I can use the following:
$X$ $n$-connected $\iff $ every continuous map $f: S^i \rightarrow X$ with $i \le n$ is homotopic to a constant map.</p>
<p>Proof:</p>
<p>"$\Leftarrow$"</p>
<p>If any continuous map $f:K \rightarrow X$ where $K$ is a cell complex of dimension $\leq n$ is homotopic to a constant map, then in particular any $f: S^i \rightarrow X$ with $i \le n$ is homotopic to a constant map, since $S^i$ is a cell complex of dimension $\leq n$. So $X$ is $n$-connected.</p>
<p>"$\Rightarrow$"</p>
<p>I'm not sure how to proceed in this direction. I know $X$ is $n$-connected and so $\pi_i (X) = 0$ for all $i \leq n$. I also know any $f: S^i \rightarrow X$ is null-homotopic. </p>
<p>How to proceed from here? Many thanks for your help!</p>
| AlexE | 7,110 | <p>This is an application of the second theorem of chapter 10.3 in <em>May: A Concise Course in Algebraic Topology</em>, i.e. there you can find your proof.</p>
|
215,531 | <p>I must solve the following inequality:</p>
<p>$\frac{x-3}{1-2x}<0$</p>
<p>Now the text says that I have to solve the inequality "directly", without solving the corresponding equations.</p>
<p>What does that mean?</p>
<p>I would say that I have to multiply by $(1-2x)$ then I get</p>
<p>$x-3<0$ and </p>
<p>$L_1 = [x \le 2]$</p>
<p>$L_2 = [x >0] $</p>
<p>but I have the feeling that I'm doing something wrong.
In particular, I don't understand what the remark about solving the inequality "directly" means. Could someone explain that to me?</p>
| Salech Alhasov | 25,654 | <p>Since</p>
<p><span class="math-container">$$\frac{x-3}{1-2x}=\frac{(x-3)(1-2x)}{(1-2x)(1-2x)},\quad x\neq \frac{1}{2}$$</span></p>
<p>and <span class="math-container">$(1-2x)(1-2x)=(1-2x)^2>0$</span>, then</p>
<p><span class="math-container">$$\frac{x-3}{1-2x}=\frac{(x-3)(1-2x)}{(1-2x)^2}<0.$$</span></p>
<p>We are allowed to multiply that inequality by <span class="math-container">$(1-2x)^2$</span> and get</p>
<p><span class="math-container">$$(x-3)(1-2x)<0$$</span></p>
<p>Can you solve that?</p>
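<p>The claimed equivalence, and the resulting solution set $x<\tfrac12$ or $x>3$, can be spot-checked numerically (Python sketch, added for illustration):</p>

```python
def frac_neg(x):
    return (x - 3) / (1 - 2*x) < 0

def prod_neg(x):
    return (x - 3) * (1 - 2*x) < 0

# sample points of the form k/7, which avoid x = 1/2 where the fraction is undefined
samples = [k / 7 for k in range(-40, 60)]
for x in samples:
    assert frac_neg(x) == prod_neg(x)           # the two inequalities agree
    assert frac_neg(x) == (x < 0.5 or x > 3)    # solution set: x < 1/2 or x > 3
```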
|
2,617,621 | <p>$$z^4 =\lvert z \rvert , \quad z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| nonuser | 463,553 | <p>From $|z|=z^4$ we get $|z|=|z|^4$ so $|z|=1$ or $|z|=0$ (so $z=0$.)</p>
<p>In the first case $z^4 =1$ so $$(z+i)(z-i)(z+1)(z-1)=0$$ </p>
<p>so <strong>yes you find all the solutions.</strong></p>
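A quick numerical confirmation in plain Python complex arithmetic (the grid bounds and resolution are arbitrary choices) that exactly these five values solve the equation:

```python
# The five claimed solutions satisfy z^4 = |z| up to rounding error.
candidates = [0, 1, -1, 1j, -1j]
for z in candidates:
    assert abs(z**4 - abs(z)) < 1e-12

# A coarse grid search over [-1.5, 1.5]^2 turns up no other solutions.
found = set()
for a in range(-150, 151):
    for b in range(-150, 151):
        z = complex(a / 100, b / 100)
        if abs(z**4 - abs(z)) < 1e-9:
            found.add(z)
assert found == {0, 1, -1, 1j, -1j}
print("solutions:", sorted(found, key=lambda z: (z.real, z.imag)))
```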
|
2,617,621 | <p>$$z^4 =\lvert z \lvert , z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| Martin Argerami | 22,857 | <p>Your first method is a bit unclear because of the use of the fourth root: it is not true, even for a real number, that $\sqrt {z^2}=z$. A good piece of advice is to avoid "taking roots" unless unavoidable. </p>
<p>Your second method works by chance due to the fact that in this particular case all solutions have either real or imaginary part equal to zero. </p>
<p>One "clean" way of doing this is to see the equation as
$$
r^4e^{4i\theta}=r.
$$
This forces $r^4=r$, so $r=0$ or $r=1$; if $r=1$, we also have $e^{4i\theta}=1$, which gives the other four solutions $1,-1,i,-i$.</p>
|
2,617,621 | <p>$$z^4 =\lvert z \lvert , z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| ArsenBerk | 505,611 | <p>If we define $z = re^{i\theta}$, we have $|z| = r$ and $z^4 = r^4e^{4i\theta}$. Notice that they are equal when $r^3e^{i(4\theta)} = 1$ or $r = 0$. From here, you can easily find $\theta = 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$ with $r = 1$ and corresponding $z$ values will be $1, i, -1, -i$ and for $r = 0$, we have $z = 0$ so you have already found all the solutions.</p>
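The nonzero solutions can be generated directly from this polar description ($r=1$, $\theta = k\pi/2$); a short numerical check using Python's `cmath`:

```python
import cmath

# r = 1 with 4*theta a multiple of 2*pi gives theta = k*pi/2, k = 0..3,
# plus the trivial solution z = 0.
roots = [cmath.exp(1j * k * cmath.pi / 2) for k in range(4)] + [0]
for z in roots:
    assert abs(z**4 - abs(z)) < 1e-12
print([complex(round(z.real), round(z.imag)) for z in roots])
# [(1+0j), 1j, (-1+0j), -1j, 0j]
```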
|
2,617,621 | <p>$$z^4 =\lvert z \lvert , z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| Bernard | 202,857 | <p>The result is correct, but it is faster with the exponential form of complex numbers: if $z=r\,\mathrm e^{i\theta}$, the equation becomes
$$r=r^4\,\mathrm e^{4i\theta}\iff\begin{cases}r=0\quad\text{or}\\
r^3=1\:\wedge\:\mathrm e^{4i\theta}=1\end{cases}\iff \begin{cases}z=0\quad\text{or}\\ r=1\:\wedge\: \theta\equiv 0\pmod{\frac\pi2}. \end{cases}$$</p>
|
2,617,621 | <p>$$z^4 =\lvert z \lvert , z \in \mathbb{C}$$</p>
<p>Applying the formula to calculate $ \sqrt[4]{z} $, I find that solutions have to have this form:</p>
<p>$$z=\sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{\pi}{2}}=i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \frac{3 \pi}{2}}=-i \ \sqrt[4]{\lvert z \lvert}$$
$$z=\sqrt[4]{\lvert z \lvert} \ e^{i \pi}=-\sqrt[4]{\lvert z \lvert}$$</p>
<p><br> </p>
<p>Using the Cartesian form:</p>
<p>$$(a+i b)^4=\sqrt{a^2+b^2}$$</p>
<p><br></p>
<p>$z=0$ is a solution</p>
<p><br></p>
<p>If $a=0$ : $$(i b)^4=\lvert b \lvert $$
$$b^4=\lvert b \lvert$$</p>
<p><br></p>
<p>$-i$ and $i$ are solutions</p>
<p><br></p>
<p>If $b=0$ : $$a^4=\lvert a \lvert $$</p>
<p><br></p>
<p>$-1$ and $1$ are solutions. </p>
<p><br></p>
<p>Finally, these are the solutions of $z^4=\lvert z \lvert$: $$0,-i,i,1,-1$$ </p>
<p>Is it correct? Thanks!</p>
| user | 505,767 | <p>Yes you are correct, indeed note that</p>
<p>$$\begin{cases}\lvert z \lvert = z^4 \iff |z|=0 \quad \lor\quad |z|=1\\\\z^4 = \lvert z \lvert =\bar z^4 \iff \mathcal{Im}(z^4)=0\end{cases}$$</p>
<p>thus all the <strong>non trivial solutions</strong> $z\neq0$ are</p>
<p>$$z=e^{ik\frac{\pi}{2}} \quad \forall k \in \mathbb{Z}$$</p>
|
20,771 | <p><strong>Background:</strong></p>
<p>Let $G$ be a profinite group. If $M$ is a discrete $G$-module, then $M=\varinjlim_U M^U$, where the direct limit is taken with respect to inclusions over all open normal subgroups of $G$, and one naturally has $H^n(G,M)\simeq\varinjlim H^n(G/U,M^U)$, where the cohomology groups on the right can be regarded as the usual abstract cohomology groups of the finite groups $G/U$ (this is sometimes, as in Serre's Local Fields, taken as the definition of $H^n(G,M)$).</p>
<p>More generally if one has a projective system of profinite groups $(G_i,\varphi_{ij})$ and a direct system of abelian groups $(M_i,\psi_{ij})$ such that $M_i$ is a discrete $G_i$-module and the pair $(\varphi_{ij},\psi_{ij})$ is compatible in the sense of group cohomology for all $i,j$, then $\varinjlim M_i$ is canonically a discrete $\varprojlim G_i$-module, the groups $H^n(G_i,M_i)$ form a direct system, and one has $H^n(\varprojlim G_i,\varinjlim M_i)\simeq\varinjlim H^n(G_i,M_i)$. The statement and straightforward proof of this more general result can be found, for instance, in Shatz' book on profinite groups.</p>
<p><strong>Question:</strong></p>
<p>In general, I'm wondering if there are, under appropriate hypotheses, any similar formulae for projective limits of discrete $G$-modules. Now, given a projective system of discrete $G$-modules $(M_i,\psi_{ij})$, it isn't even obvious to me that the limit will again be a discrete $G$-module, and at any rate, while each $M_i$ is discrete, the limit (in its natural topology) will be discrete if and only if it is finite. So, for the sake of specificity, I'll give a particular situation in which I'm interested. If $R$ is a complete, Noetherian local ring with maximal ideal $\mathfrak{m}$ and finite residue field and $M$ is a finite, free $R$-module as well as a discrete $G$-module such that the $G$-action is $R$-linear, then the canonical isomorphism of $R$-modules $M\simeq\varprojlim M/\mathfrak{m}^iM$ is also a $G$-module isomorphism (each $M/\mathfrak{m}^iM$ is a discrete $G$-module with action induced from that of $M$). Moreover, in this case, one can see that the limit is a discrete $G$-module (because it is isomorphic to one as an abstract $G$-module!). There is a natural homomorphism $C^n(G,M)\rightarrow\varprojlim C^n(G,M/\mathfrak{m}^iM)$ where the projective limit is taken with respect to the maps induced by the projections $M/\mathfrak{m}^jM\rightarrow M/\mathfrak{m}^iM$, and this induces similar map on cohomology. I initially thought the map at the level of cochains was trivially surjective, just because of the universal property of projective limits. However, given a ``coherent sequence" of cochains $f_i:G\rightarrow M/\mathfrak{m}^iM$, the property gives me a map $f:G\rightarrow M$ that is continuous when $M$ is regarded in its natural profinite topology, which is, as I noted above, most likely coarser than the discrete topology, so this might not be a cochain. So, what I'd really like to know is whether or not the map on cohomology is an isomorphism.</p>
<p><strong>Why I Care:</strong> The reason I'd like to know that the map described above is an isomorphism is to apply it to the particular case of $G=\hat{\mathbb{Z}}$. It is well known (and can be found, for instance, in Serre's Local Fields) that $H^2(\hat{\mathbb{Z}},A)=0$ for $A$ a torsion abelian group. In particular the higher cohomology of a finite $\hat{\mathbb{Z}}$-module vanishes, and I'd like to be able to conclude that the same is true for my $M$ above, being a projective limit of finite abelian groups. </p>
<p>Thanks!</p>
<ul>
<li>Keenan</li>
</ul>
| Ahmed Matar | 10,766 | <p>Hi Keenan,</p>
<p>You're right that the projective limit of discrete $G$-modules is not necessarily discrete. To take the cohomology of such "topological $G$-modules" you can use continuous cochain cohomology and this continuous cochain cohomology commutes with inverse limits under certain conditions. See section 7 of chapter II of Cohomology of Number Fields by Neukirch, Schmidt & Wingberg. </p>
|
3,471,684 | <p>Is it always true that if natural numbers <span class="math-container">$a,b \in \Bbb N$</span> satisfy LCM<span class="math-container">$(a,b)=16\cdot(a,b)$</span>, then <span class="math-container">$a|b$</span> or <span class="math-container">$b|a$</span>?</p>
<p>I used the formula LCM<span class="math-container">$(a,b)=\frac{a\cdot b}{(a,b)}$</span></p>
<p><span class="math-container">$\frac{a\cdot b}{(a,b)}=16\cdot(a,b) \implies a\cdot b= (4\cdot(a,b))^2$</span></p>
<p>Is it somehow useful? What should I do next?</p>
| Calvin Lin | 54,563 | <p>You're nearly there!</p>
<p>Let <span class="math-container">$ a = k_a (a,b) , b = k_b (a,b)$</span> where <span class="math-container">$(k_a, k_b ) = 1$</span>. </p>
<p><strong>Hint:</strong> What is the value of <span class="math-container">$ k_a k_b$</span> according to your equation?</p>
<p><strong>Hint:</strong> Can we conclude that one of them must be 1?<br>
If yes, then we either have <span class="math-container">$ a \mid b$</span> or <span class="math-container">$b\mid a$</span>. If no, then we have a counterexample. </p>
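The conclusion can also be confirmed by brute force over a finite range (an independent check, not a proof; the bound 300 is arbitrary):

```python
from math import gcd

# Whenever lcm(a, b) = 16 * gcd(a, b), one of a, b divides the other.
hits = 0
for a in range(1, 301):
    for b in range(1, 301):
        g = gcd(a, b)
        if a * b // g == 16 * g:          # lcm(a, b) == 16 * gcd(a, b)
            assert a % b == 0 or b % a == 0
            hits += 1
print(f"checked, {hits} qualifying pairs, no counterexample")
```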
|
3,990,195 | <p>I need some assistance solving what seems to be a very <a href="https://i.stack.imgur.com/arMiv.png" rel="nofollow noreferrer">intuitive problem</a>, but becomes tough when only using strict natural deduction and not assuming De Morgan laws.</p>
<p>Laws allowed: Implication, And, Or, MT, PBC, Copy Rule, Negation, Double Negation, Contradictions, law of excluded middle</p>
<p>I'm thinking it uses the law of excluded middle but I can't quite figure it out.</p>
<p><span class="math-container">$$ \lnot(P \land \lnot Q), \; (\lnot P \to S) \land \lnot Q \;\;\; \text{premises} \tag{1} $$</span></p>
<p><span class="math-container">$$ T \lor S \;\;\; \text{conclusion} \tag{2} $$</span></p>
| Mauro ALLEGRANZA | 108,274 | <p><em>Hint</em></p>
<ol>
<li><p><span class="math-container">$\lnot (P \land \lnot Q)$</span> --- premise</p>
</li>
<li><p><span class="math-container">$(\lnot P \to S) \land \lnot Q$</span> --- premise</p>
</li>
<li><p><span class="math-container">$\lnot Q$</span> --- from 2) by <span class="math-container">$(\land \text E)$</span></p>
</li>
<li><p><span class="math-container">$P$</span> --- assumed [a]</p>
</li>
<li><p><span class="math-container">$(P \land \lnot Q)$</span> --- from 4) and 3) by <span class="math-container">$(\land \text I)$</span></p>
</li>
</ol>
<blockquote>
<ol start="6">
<li>Contradiction !</li>
</ol>
</blockquote>
<p>and so on, deriving the sought conclusion: <span class="math-container">$T \lor S$</span>.</p>
<p>The Natural Deduction rules needed, in addition to the <span class="math-container">$\land$</span>-rules above, are <span class="math-container">$(\lnot \text I), (\to \text E)$</span> and <span class="math-container">$(\lor \text I)$</span>.</p>
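While the exercise asks for a natural deduction derivation, the underlying entailment can at least be sanity-checked by a brute-force truth table:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Over all 16 truth assignments: whenever both premises hold, so does T or S.
for P, Q, S, T in product([False, True], repeat=4):
    premise1 = not (P and not Q)              # not(P and not Q)
    premise2 = implies(not P, S) and not Q    # (not P -> S) and not Q
    if premise1 and premise2:
        assert T or S
print("entailment holds in every model")
```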
|
4,108,926 | <p>I was reading Axler's Linear Algebra Done Right, and the following appears as exercise <span class="math-container">$3$</span> in chapter <span class="math-container">$5$</span>, section A:</p>
<blockquote>
<p>Suppose <span class="math-container">$T \in \mathcal{L}(V)$</span> and <span class="math-container">$T^2 = I$</span> and <span class="math-container">$-1$</span> is not an eigenvalue of T. Prove that <span class="math-container">$T = I$</span></p>
</blockquote>
<p>I try to prove it as follows:</p>
<p>Suppose <span class="math-container">$T \neq I$</span>, and let <span class="math-container">$v \in V$</span> be such that:
<span class="math-container">$$Tv = -w \quad (1)$$</span>,
for some <span class="math-container">$w \in V$</span>.
Now apply <span class="math-container">$T$</span> to both sides gives us:
<span class="math-container">$$T(Tv) = -Tw$$</span>
<span class="math-container">$$\implies T^2v = -Tw$$</span>
<span class="math-container">$$\implies v = -Tw \quad (2)$$</span>
Now lets use <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> as follows:
<span class="math-container">$$T(v + w) = Tv + Tw = -(v + w)$$</span>
This implies that <span class="math-container">$-1$</span> is an eigenvalue of <span class="math-container">$T$</span>, contradicting the assumption which completes the proof.</p>
<p>Is that correct?</p>
<p><strong>Note: I do not ask for any other solutions; please read the question carefully!</strong></p>
<p>#Edit:</p>
<p>To use <span class="math-container">$T \neq I$</span> and to prove that there <span class="math-container">$v, w \in V$</span> such that <span class="math-container">$v + w \neq 0$</span> and <span class="math-container">$Tv = -w$</span>:</p>
<p>First if <span class="math-container">$Tv = 0$</span> then <span class="math-container">$v = 0$</span> otherwise we would have <span class="math-container">$T^2v = 0$</span> contradicting that <span class="math-container">$T^2 = I$</span>.</p>
<p>So assume that <span class="math-container">$Tv \neq v$</span> for some <span class="math-container">$v \in V$</span> (i.e <span class="math-container">$T \neq I)$</span> and <span class="math-container">$Tv = -w$</span>,
if <span class="math-container">$v + w = 0 \implies v = -w$</span> we have:</p>
<p><span class="math-container">$$-Tw = Tv = -w$$</span>
<span class="math-container">$$\implies Tw = w$$</span>
<span class="math-container">$$\implies T(-v) = -v$$</span>
<span class="math-container">$$\implies Tv = v$$</span>
contradicting the assumption that <span class="math-container">$Tv \neq v$</span>.</p>
| tchappy ha | 384,082 | <p>I solved this exercise as follows:</p>
<blockquote>
<p>Since <span class="math-container">$-1$</span> is not an eigenvalue of <span class="math-container">$T$</span>, <span class="math-container">$T+I$</span> is invertible by 5.6 on p.134 in "Linear Algebra Done Right 3rd Edition" by Sheldon Axler.<br />
Since <span class="math-container">$T^2=I$</span>, <span class="math-container">$T^2-I=(T+I)(T-I)=0$</span>.<br />
So, <span class="math-container">$T-I=(T+I)^{-1}(T+I)(T-I)=(T+I)^{-1}0=0$</span>.<br />
So, <span class="math-container">$T=I$</span>.</p>
</blockquote>
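A small numerical illustration (not a proof; the dimension, matrix, and seed are arbitrary) of why excluding the eigenvalue $-1$ matters:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(4, 4))       # generic, hence invertible, basis change
Pinv = np.linalg.inv(P)

# An involution WITH eigenvalue -1: T^2 = I, yet T != I.
T = P @ np.diag([1.0, 1.0, -1.0, -1.0]) @ Pinv
assert np.allclose(T @ T, np.eye(4))
assert not np.allclose(T, np.eye(4))

# Strip out the -1 eigenvalues and the same construction collapses to I,
# exactly as the proposition predicts.
T_good = P @ np.diag([1.0, 1.0, 1.0, 1.0]) @ Pinv
assert np.allclose(T_good, np.eye(4))
print("illustration consistent with the proposition")
```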
|
529,053 | <p>I have already proved that if ${X_k}$ converges to a limit $L$, then any subsequence of it also converges to $L$. And now the question asks to show that if ${X_k}$ has two subsequence which converge to two different limits, then ${X_k}$ can not be convergent.</p>
| Luke Skywalker | 54,762 | <p>I believe you are getting a little confused in the logic of the question. If you think it through, when you say that you have already proved that if a sequence converges, then every subsequence converges to the same limit, you can readily answer the question using just that.</p>
<p>To see this more carefully, argue by contradiction. Assume that the given sequence has two subsequences that converge to different limits AND suppose for the sake of contradiction that the original sequence is also convergent. What to do now? Well, since you are assuming that the sequence is convergent, by what you said you proved, then every subsequence converges to the same limit. In particular, the two subsequences must converge to the same limit as well...but you had said before that they converged to different limits! Contradiction!!</p>
<p>Thus, the original sequence cannot be convergent. </p>
|
1,195,175 | <p>In <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a> the solid angle is defined as follows:</p>
<blockquote>
<p>In geometry, a solid angle (symbol: Ω) is the two-dimensional angle in three-dimensional space that an object subtends at a point. </p>
</blockquote>
<p>Why solid angle is a two dimensional angle?</p>
| ASB | 111,607 | <p>$ \dfrac{f(x+\delta x)-f(x)}{(x+\delta x)-x}=\dfrac{\sqrt{x+\delta x}-\sqrt{x}}{\delta x}=\dfrac{(\sqrt{x+\delta x}-\sqrt{x})(\sqrt{x+\delta x}+\sqrt{x})}{\delta x (\sqrt{x+\delta x}+\sqrt{x})}=\dfrac{1}{\sqrt{x+\delta x}+\sqrt{x}} $</p>
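The rationalisation above is for $f(x)=\sqrt{x}$, whose difference quotient therefore tends to $\frac{1}{2\sqrt{x}}$ as $\delta x \to 0$; a quick numerical check (the sample point and step size are arbitrary):

```python
import math

x, h = 4.0, 1e-4
dq = (math.sqrt(x + h) - math.sqrt(x)) / h          # raw difference quotient
simplified = 1 / (math.sqrt(x + h) + math.sqrt(x))  # rationalised form
assert abs(dq - simplified) < 1e-9                  # identical for any h > 0
assert abs(dq - 1 / (2 * math.sqrt(x))) < 1e-4      # tends to 1/(2*sqrt(x))
print(dq, 1 / (2 * math.sqrt(x)))
```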
|
497,015 | <p>So this is an exercise. Does anyone have a hint? </p>
<p><strong>If A is both orthogonal and an orthogonal projector, what can you then conclude about A?</strong></p>
<p>I know that an $n\times n$ matrix $P$ is an orthogonal projector if it is both idempotent ($P^2 = P$) and symmetric ($P = P^T$ ). Such a matrix projects any given $n$-vector orthogonally onto a subspace (namely, the column space of $P$) but leaves unchanged any vector that is already in that subspace. </p>
<p>Furthermore, due to orthogonality: $A^TA=I$</p>
| Owen Sizemore | 1,193 | <p>Hint: Assume $P$ is diagonal...</p>
|
652,446 | <p>I just ran into the following problem: the random variables $X$ and $Y$ are independent, where $X \sim Normal(1,1)$ and $Y \sim Gamma(\lambda,p)$ with $E(Y) = 1$ and $Var(Y) = 1/2$. How do we find $E[(X+Y)^3]$? I've tried a convolution, which leads to a really ugly looking integral from which I then have to get the third moment. I've tried characteristic functions and ran into the same problem. I'm sure there has to be some easier way to solve this. Any ideas?</p>
| Dilip Sarwate | 15,941 | <p>$$E[(X+Y)^3] = E[X^3+3X^2Y+3XY^2+Y^3] = E[X^3]+3E[X^2]E[Y] +3E[X]E[Y^2]+E[Y^3]$$ when $X$ and $Y$ are independent.</p>
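A sketch of the resulting computation in SymPy, assuming the shape/rate convention for the Gamma distribution (then $E(Y)=p/\lambda=1$ and $Var(Y)=p/\lambda^2=1/2$ force $p=2$, $\lambda=2$) and using $E[X^3]=\mu^3+3\mu\sigma^2$ for a normal variable:

```python
import sympy as sp

# Normal(1, 1): E X = 1, E X^2 = sigma^2 + mu^2 = 2, E X^3 = mu^3 + 3*mu*sigma^2 = 4
EX1, EX2, EX3 = 1, 2, 4

# Gamma with shape p = 2, rate lam = 2: E Y^k = p(p+1)...(p+k-1) / lam^k
p, lam = 2, 2
EY1 = sp.Rational(p, lam)                         # 1
EY2 = sp.Rational(p * (p + 1), lam**2)            # 3/2
EY3 = sp.Rational(p * (p + 1) * (p + 2), lam**3)  # 3

result = EX3 + 3 * EX2 * EY1 + 3 * EX1 * EY2 + EY3
print(result)  # 35/2
```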
|
105,535 | <p>In a thread in <a href="https://math.stackexchange.com/questions/186292/derivatives-of-the-riemann-zeta-function-at-s-0">MSE</a> I proposed an older routine of mine for the efficient computation of coefficients; I use a very similar routine for the quick&dirty computation of the Stieltjes-constants. </p>
<p>This motivated me to try to improve my earlier toy-computations and now calculate the first 512 Stieltjes constants to 1000 decimal digits of precision. I'm unable to estimate the number of correct digits by analytical arguments; at least WolframAlpha allowed me to display StieltjesGamma[511] to 400 digits, which matched my own computations. </p>
<p>The only freely available table around seems to be that of S. Plouffe (linked via <a href="http://en.wikipedia.org/wiki/Stieltjes_constants" rel="nofollow noreferrer">wikipedia</a>) but they display only the first 78 numbers to 256 digits precision. </p>
<p><strong>Update2:</strong> This is the effective formula to which the Pari/GP code reduces: </p>
<p>Let $ \qquad h_c = {1\over c!} \sum_{k=0}^\infty (-1)^k {\ln(1+k)^c\over1+k}$ This is done using the <em>sumalt</em>-procedure. </p>
<p>Next let $ \qquad r_c = - {\ln(2)^{c-1}\over c!} b_c$ where $b_c$ are the Bernoulli numbers </p>
<p>Then $ \qquad \gamma_c = c! \sum_{d=0}^{c+1} h_d \cdot r_{c+1-d} $ </p>
<p><em>So my question:</em></p>
<blockquote>
<p>how could I possibly get an educated guess for the number of correct digits based on my Pari/GP-routine? </p>
</blockquote>
<p>Alternatively: </p>
<blockquote>
<p>is there some table with comparable precision around such that I can at least check the match for the first m digits (where m should optimally go to 1000)?</p>
</blockquote>
<p><em>(here is the table with my current computations of <a href="http://go.helms-net.de/math/tables/stieltjes_512x1000.zip" rel="nofollow noreferrer">512 coeffs by 1000 digits</a>)</em><br>
<hr>
<strong>Update1:</strong><br>
Heuristically I find that, beginning with some precision, say $300$ dec digits at the first $\gamma_0$ , I simply lose one digit of precision per step in the index, so $\gamma_k$ has roughly $300-k$ correct digits, maybe a handful less.<br>
For this I compared the values computed with precision $200,300,400,500,600,700$ against those computed with precision $800$; $\gamma_0$ had nearly all leading digits constant when the precision was increased, so it was always correct to the full precision.<br>
That would mean that if I want $1000$ correct digits for $\gamma_{511}$ I need dec precision of (at least) $1550$ . Simple, if that is true...</p>
<hr>
<p>Here is my routine. I reduced the precision-parameter so that this can just be copied & pasted to a Pari/GP-environment. For precision of 1000 dec digits and 512 coefficients this must be optimized due to exorbitant increase of stack and computation-time otherwise</p>
<p>Prepare computations with parameters for precision of computation</p>
<pre><code>termsforseries = 32
digitstocompute = 200; digitstoshow = 12;
default(realprecision,digitstocompute)
default(format,Str("g0.",digitstoshow))
default(seriesprecision,termsforseries)
</code></pre>
<p>Compute the coefficients of the Laurent-expansion of the zeta by conversion from the same series-type of the eta-function (the alternating zeta) </p>
<pre><code>\\ ========= Zeta Laurent-expansion providing Stieltjes-coefficients ====
ps_eta = sumalt(k=0,taylor((-1)^k/(1+k)^(1-x),x))
tmp = Vec(1-2*2^(-(1-x)));
tmp[1]=0; \\ make the first zero exact. this step is needed for
\\ allowing the reciprocal of the powerseries
ps_etatozeta=1/Ser(tmp)
ps_zeta = ps_eta * ps_etatozeta \\ contains now the Stieltjes-coefficients
tmp=Vec(ps_zeta);tmp=vector(#tmp-1,c,tmp[1+c]) \\ remove the first coefficient (at 1/x)
sti = vector(#tmp,r,tmp[r]*(r-1)!) \\ extract Stieltjes-constants by mult with factorials
</code></pre>
| Fredrik Johansson | 4,854 | <p>I have recently computed a large table of rigorous values of the Stieltjes constants. Thanks to some coding by Jon Bober, the table can be browsed using a web interface on LMFDB.org (the L-functions and modular forms database):</p>
<p><a href="http://beta.lmfdb.org/riemann/stieltjes/" rel="nofollow">http://beta.lmfdb.org/riemann/stieltjes/</a></p>
<p>For any $n \le 10^5$, the web interface allows printing $\gamma_n$ to at least 10000 digits (over 30000 digits for small $n$). The raw data (huge files) can also be downloaded.</p>
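For spot checks of individual table entries, mpmath also computes Stieltjes constants to arbitrary precision (slow for large $n$, but convenient for validation):

```python
from mpmath import mp, stieltjes

mp.dps = 50                 # working precision in decimal digits
gamma0 = stieltjes(0)       # gamma_0 is the Euler-Mascheroni constant
assert abs(gamma0 - mp.euler) < mp.mpf(10) ** (-45)
print(gamma0)
print(stieltjes(5))         # compare against any published table entry
```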
|
1,731,382 | <p>Notice that the parabola, defined by certain properties, is also the trajectory of a cannon ball. Does the same sort of thing hold for the catenary? That is, is the catenary, defined by certain properties, also the trajectory of something?</p>
| Plutoro | 108,709 | <p>Neglecting air resistance, and assuming constant gravity, the trajectory of anything will be a parabola. If there is air resistance, trajectories of roughly spherical objects become a lot more complicated, and are not described easily using nice geometric terms. However, the trajectories do involve the hyperbolic cosine function, which traces out the catenary curve. See <a href="https://en.wikipedia.org/wiki/Trajectory_of_a_projectile" rel="nofollow">this page</a> for details. In spherical classical gravity, trajectories are conic sections, especially hyperbolas and ellipses.</p>
<p>It may be possible to get a true catenary if you had an object with strange aerodynamic properties, or with a very precise arrangement of objects forming a gravitational field. But in either case, the catenary trajectory would be entirely contrived.</p>
|
1,731,382 | <p>Notice that the parabola, defined by certain properties, is also the trajectory of a cannon ball. Does the same sort of thing hold for the catenary? That is, is the catenary, defined by certain properties, also the trajectory of something?</p>
| Student of physics | 682,067 | <p>The relativistic trajectory of an object under the influence of a constant force field perpendicular to its initial direction of motion is a catenary. The trajectory reduces to a parabola in the non-relativistic limit.</p>
<p>(Eg : motion of electron under constant electric field)</p>
|
83,607 | <p>While solving the heat equation in one spatial variable $u_t = u_{xx} $ (x goes from 0 to L) with the initial temperature distribution $T_0 \frac{x(L-x)}{L^2}$ , and with neumann boundary conditions $u_x(0,t) = u_x(L,t) = 0$, I got some really weird behaviour from NDSolve.</p>
<p>My code looks like this:</p>
<pre><code>h[x_] := x*(30 - x)/900;
pde = D[u[t, x], t] == D[u[t, x], x, x]
begin = 0;
end = 30;
bc = {u[0, x] == 100*h[x], (Derivative[0, 1][u])[t, begin] ==
0, (Derivative[0, 1][u])[t, end] == 0};
finaltime = 100
s = NDSolve[{pde}~Join~bc, u, {t, 0, finaltime}, {x, begin, end}];
</code></pre>
<p>Since heat cannot flow out through the ends, continuing this in time should yield a smoothing until it reaches the average everywhere. Instead, I get a very weird time evolution which, when plotted, seems to be of the form $u_x(x,t) = u_x(x,0) - kt$. This is particularly infuriating because the problem seems to be intermittent. Taking the square of h does not cause any trouble.</p>
<p>The problem seems to magically fix itself if I instead feed in a truncated cosine series into the code:</p>
<pre><code>rule = t -> FourierCosSeries[t*(2*Pi - t), t, 35];
f[x_] := (t /. rule) /. (t -> x)
g[x_] := f[2*Pi*x/30]/(4*Pi*Pi)
</code></pre>
<p>and insert g instead of h into the code above. Trying a function interpolation gave only errors.</p>
<p>Is there a fix which is more general than this quick hack? </p>
| toadatrix | 120 | <p>I believe that all you need to do is select the following method option in NDSolve</p>
<pre><code>Method -> {"PDEDiscretization" -> {"MethodOfLines", {"SpatialDiscretization" -> "FiniteElement"}}}
</code></pre>
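Independently of NDSolve, the expected physics (insulated ends conserve heat, so the profile flattens to the spatial average $\frac{100}{30}\int_0^{30} \frac{x(30-x)}{900}\,dx = \frac{50}{3} \approx 16.67$) can be confirmed with a short explicit finite-difference sketch in Python; the grid and step sizes below are arbitrary stable choices:

```python
import numpy as np

L, N = 30.0, 61
dx = L / (N - 1)
x = np.linspace(0.0, L, N)
u = 100.0 * x * (L - x) / L**2          # initial temperature profile
dt = 0.4 * dx**2                        # r = dt/dx^2 = 0.4 <= 1/2: stable

for _ in range(20000):                  # integrate to t = 2000
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2      # Neumann end via ghost point
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2   # Neumann end at x = L
    u = u + dt * lap

print(u.max() - u.min())   # ~0: the profile is flat
print(u.mean())            # ~16.67, the conserved average
```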
|
4,643,832 | <p><strong>Question</strong></p>
<p><a href="https://i.stack.imgur.com/XSeNR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSeNR.png" alt="enter image description here" /></a></p>
<p>I am trying to prove this using balls (that is what we use in my school). The definition is that a subset <span class="math-container">$A$</span> is open if <span class="math-container">$\forall a \in A$</span> <span class="math-container">$\exists r$</span> such that <span class="math-container">$B_r(a) \subseteq A $</span>.</p>
<p><strong>Textbook Solution</strong></p>
<p><a href="https://i.stack.imgur.com/qNnal.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNnal.png" alt="enter image description here" /></a></p>
<p><strong>Confusion</strong></p>
<p>This is the answer my book gave, but the problem is that I don't understand why we chose $\delta=1-||(x-1)^2-(y+2)^2||$ . Looking at the condition for $(x,y)$ to be in $A$, I realized that for $a=(-1,2)$ we could have $r=1$ and $B_1((-1,2))=\{(x,y)\in \mathbb{R}^2 :||(x,y)-(-1,2)||<1 \}$, so $a=(-1,2)$ is open. But I don't see why we chose that $\delta$.</p>
<p>Note: I am not saying I don't agree with it. I just don't understand the trick to know the <span class="math-container">$\delta$</span> to choose.</p>
| TheBestMagician | 815,074 | <p>Taking the first two terms of the Binomial Theorem,
<span class="math-container">$$\left(1+\frac{1}{y}\right)^{x-y}\ge 1^{x-y}+\frac{x-y}y\cdot 1^{x-y-1}=\frac{x}{y}$$</span></p>
|
1,085,702 | <p>It's said that a computer program "prints" a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can "print":</p>
<ol>
<li>All the prime numbers.</li>
<li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li>
<li>Numbers including "<span class="math-container">$7$</span>" in them.</li>
</ol>
<p>Prove there is a set that no computer program can print.</p>
<p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p>
<p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful, but I don't really understand them, nor do I need to use them, for it's about sets.</p>
| Mark Bennet | 2,906 | <p>If the number of printable sequences can be taken to be countable, then count the sequences and then create a sequence as follows:</p>
<p>The first term is one greater than the first term of the first sequence
The second term is the smallest integer greater than both the first term already determined and the second term of the second sequence</p>
<p>and so forth, in imitation of the Cantor Diagonal Argument and preserving the defining property of the sequence.</p>
<hr>
<p>But the question leaves open whether the elements of $A$ have to be consecutive in the printout - a single program is mentioned. For this note that every sequence of increasing integers is a subsequence, in order, of $1,2,3,4,5 \dots $</p>
<p>It would be possible to organise finite sequences in order so that they are consecutive within the printout, but don't necessarily start in the first place (there are countably many finite sequences - just put them in order of a suitable list) - but this won't work for the infinite sequences</p>
|
1,548,667 | <p>Consider the following steady state problem</p>
<p>$$\Delta T = 0,\,\,\,\, (x,y) \in \Omega, \space \space 0 \leq x \leq 4 ,\space \space \space\space 0 \leq y \leq 2 $$</p>
<p>$$ T(0,y) = 300, \space \space T(4,y) = 600$$</p>
<p>$$ \frac{\partial T}{\partial y}(x,0) = 0, \space \space \frac{\partial T}{\partial y}(x,2) = 0$$</p>
<p>I want to derive the analytical solution to this problem.</p>
<p>1) Use separation of variables.</p>
<p>$$\frac{X^{''}}{X}= -\frac{Y^{''}}{Y} = -\lambda $$
$$X^{''} + \lambda X = 0 \tag{1}$$
$$Y^{''} - \lambda Y = 0 \tag{2}$$</p>
<p>The solution to $(2$) is</p>
<p>$Y(y) = C_1 \cos(\alpha y)+C_2 \sin(\alpha y)$, writing $\lambda = -\alpha^2$</p>
<p>From $Y^{'}(0) = 0$ we find that
$$C_2 = 0$$ and
$$Y^{'}(2) = -C_1\alpha \sin(2\alpha) = 0 \tag{3}$$</p>
<p>with $(3)$ giving that $\alpha = n\frac{\pi}{2}$</p>
<p>so $Y$ is given by</p>
<p>$$Y(y) = C_1\cos\left(\frac{n\pi}{2}y\right)$$</p>
<p>The solution to $(1)$ is</p>
<p>$$X(x) = Ae^{\alpha x}+Be^{-\alpha x}$$</p>
<p>where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$ </p>
<p>So the solution is:</p>
<p>$$u(x,y) = X(x)Y(y) = C_1\cos(\alpha y)(Ae^{\alpha x}+Be^{-\alpha x}) \tag{4}$$</p>
<p>where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$ </p>
<p>Inserting the B.C. in $(4)$ gives:</p>
<p>$$u(0,y) \implies E_n = \frac{300}{\cos(\alpha y)}$$</p>

<p>$$u(4,y) \implies F_n = \frac{600}{G \cos(\alpha y)Ae^{4 \alpha }+H\cos(\alpha y)e^{-4 \alpha }}$$</p>
<p>This is how far I have come. How do I continue?</p>
| Evgeny | 87,697 | <p><strong>Sort of guide</strong></p>
<ol>
<li><p>Transform the equation such that it'll have homogeneous boundary conditions.<br>
I suggest using $W(x, y) = 300 + 75x$ : this is the simplest function that has $W(0, y) = 300$, $W(4, y) = 600$ and by the way $\frac{\partial W}{\partial y} \equiv 0$. What will happen to solutions of the original equation if we subtract $W(x, y)$ ? Let's see:
$$\Delta (T - W) = \Delta T - \Delta W = 0 - 0 = 0. $$
So, $T-W$ solves the equation $\Delta u = 0$, but with homogeneous boundary conditions of the same type.</p></li>
<li><p>From separation of variables you have that you are trying to find solutions of form $u(x, y) = X(x) \cdot Y(y)$ such that they satisfy boundary conditions and $\Delta u = 0$. This leads to following equations:
$$ X'' = - \lambda X $$
$$ Y'' = \lambda Y $$
plus boundary conditions. Because we want to find non-trivial solutions to this equation, boundary conditions yield:
$$ u(0, y) = 0 \Leftrightarrow X(0) Y(y) = 0 \Leftrightarrow X(0) = 0 $$
$$ u(4, y) = 0 \Leftrightarrow X(4) Y(y) = 0 \Leftrightarrow X(4) = 0 $$
$$ u'_{y}(x, 0) = 0 \Leftrightarrow X(x) Y'(0) = 0 \Leftrightarrow Y'(0) = 0 $$
$$ u'_{y}(x, 2) = 0 \Leftrightarrow X(x) Y'(2) = 0 \Leftrightarrow Y'(2) = 0 $$
Then you find for which values of $\lambda$ you can satisfy these boundary conditions. You will obtain a countable (or empty) set of such 'eigenvalues' $\lambda_k$ with corresponding functions $X_k$ and $Y_k$ (each depends on two parameters -- that is because they are general solutions of a second order ODE).</p></li>
<li><p>Find the solution in form $U(x, y) = \sum_{k \in \mathbb{N}} X_k (x) \cdot Y_k (y) $. Any finite or infinite sum of such functions satisfies the Laplace equation and the boundary conditions, so you just have to determine the coefficients in $X_k$ and $Y_k$. </p></li>
<li><p>Don't forget to add $W(x, y)$ to $U(x, y)$ :)</p></li>
</ol>
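<p>Since the boundary data in this problem is independent of $y$, the Fourier coefficients in step 3 all turn out to vanish and the exact solution is just the lifting function itself, $T(x,y) = 300 + 75x$. A small finite-difference sanity check (my sketch, not part of the guide above; Jacobi iteration on a uniform grid, with the Neumann walls imposed by mirroring) confirms this:</p>

```python
def solve_laplace(nx=11, ny=6, iters=1500):
    """Jacobi iteration for Laplace's equation on [0,4] x [0,2]:
    T(0,y)=300 and T(4,y)=600 (Dirichlet in x), dT/dy = 0 at y=0 and
    y=2 (Neumann in y, imposed by mirroring across the boundary).
    Grid spacing is equal in both directions (dx = dy = 0.4)."""
    T = [[450.0] * ny for _ in range(nx)]
    for j in range(ny):               # fixed Dirichlet columns
        T[0][j] = 300.0
        T[nx - 1][j] = 600.0
    for _ in range(iters):
        new = [row[:] for row in T]
        for i in range(1, nx - 1):
            for j in range(ny):
                below = T[i][j - 1] if j > 0 else T[i][1]            # mirror at y=0
                above = T[i][j + 1] if j < ny - 1 else T[i][ny - 2]  # mirror at y=2
                new[i][j] = 0.25 * (T[i - 1][j] + T[i + 1][j] + below + above)
        T = new
    return T

T = solve_laplace()
# The iterate converges to T(x,y) = 300 + 75x, independent of y.
assert all(abs(T[i][j] - (300.0 + 75.0 * 0.4 * i)) < 1e-6
           for i in range(11) for j in range(6))
```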
|
<p>I want to be a mathematician or computer scientist. I'm going to be a junior in high school, and I skipped precalc/trig to go straight to AP Calc since I've studied a lot of analysis and stuff on my own. My dad wants me to memorize about 30 trig identities (though some of them are very similar) since I'm missing trig. I've gone through and proved all of them, but memorizing them seems like a waste of effort. My dad is a physicist, so he is good at math, but I think he may be wrong here. Can't one just use de Moivre's theorem to get around memorizing the identities?</p>
| Keith | 244,951 | <p>Let me first note that physicists may be better people to ask this question than mathematicians are.</p>
<p>I think it's worth remembering a few, and knowing how to rederive the others.</p>
<p>The important ones to remember initially are: </p>
<p>(a) The definitions of $\tan$, $\cot$, $\sec$, $\csc$, in terms of $\sin$ and $\cos$, as well as the identity $\cot x = 1/(\tan x)$. </p>
<p>(b) The Pythagorean identity $\sin^2 x + \cos^2 x = 1$, and the corresponding ones relating $\tan$ and $\sec$, then $\cot$ and $\csc$.</p>
<p>(c) The reduction formulas involving trigonometric functions of $-x$, $\pi/2 - x$, $\pi + x$, $2\pi + x$ (and especially the periods of $\sin, \cos, \tan$). These, as well as the geometric reasons for them, should be learned independently of (d), focusing mainly on $\sin$, $\cos$ and $\tan$.</p>
<p>(d) the angle-sum formulas for $\sin$ and $\cos$.</p>
<p>Eventually, from repeated use, you will probably learn the angle-difference formulas for $\sin$ and $\cos$, the double-angle formulas for $\sin$ and $\cos$, the formulas for $\sin^2 x$ and $\cos^2 x$, perhaps the angle-sum and angle-difference formulas for $\tan$. I have never learned the product-to-sum and sum-to-product formulas and re-derive them whenever I need them, though some people might disagree with this approach.</p>
<p>Overall then, you should learn the most important ones first, and then through practice your list of memorized identities will start to look more and more like the one your dad wants you to learn. </p>
<p>It <em>is</em> better to use complex numbers for anything involving $3x$, $4x$, etc., and cubes or higher powers of $\cos$ and $\sin$.</p>
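<p>For instance (a quick sketch of the complex-number route; the helper name is mine), de Moivre's theorem $(\cos x + i\sin x)^n = \cos nx + i\sin nx$ recovers the triple-angle formula $\cos 3x = 4\cos^3 x - 3\cos x$ without memorization:</p>

```python
import math

def cos_nx(n, x):
    # de Moivre: the real part of (cos x + i sin x)^n is cos(nx).
    return ((math.cos(x) + 1j * math.sin(x)) ** n).real

x = 0.7
assert abs(cos_nx(3, x) - math.cos(3 * x)) < 1e-12
# Expanding (c + i s)^3 and using s^2 = 1 - c^2 gives cos 3x = 4c^3 - 3c.
assert abs(4 * math.cos(x) ** 3 - 3 * math.cos(x) - math.cos(3 * x)) < 1e-12
```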
<p>Sometimes in calculus, depending on the level you learn it at, it's important to know $\sin x$ and $\cos x$ in terms of $\tan x/2$. There is a nice geometric way to do this by considering $\tan x/2$ as the slope of the line $l$ from $(-1,0)$ to $A = (\cos x,\sin x)$. $A$ can be found as the point of intersection of $l$ and the perpendicular to $l$ passing through $(1,0)$.</p>
|
2,674,799 | <blockquote>
<p>Let $X_{2n}$ be the group whose presentation is$\langle x,y\,|\,x^n=y^2=1, xy=yx^2\rangle$. From $x=xy^2$, it is seen that $x^3=1$, hence $X_{2n}$ has at most $6$ elements. I have to show that if $n=3k$, then $X_{2n}$ has exactly $6$ elements. </p>
</blockquote>
<p>I can't see where I am going wrong if I assume $x=1$.</p>
| cansomeonehelpmeout | 413,677 | <p>You know that $y^2=1$. To show that $x^3=1$, notice that $xyxy=x(yxy)=x(x^2)=x^3$, since $yxy=y^2x^2=x^2$; also $xyxy=yx^2yx^2=yx(xyx)x=yxyx^4=x^6$, so $x^3=x^6$, and cancelling $x^3$ gives $x^3=1$. From here on, you can show that the 6 elements are: $$1,x,x^2,y,xy,yx$$ and that multiplying any two of these yields no new element.</p>
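<p>A concrete check (my sketch, not part of the argument above): for $n=3$ the presentation is realized by permutations, taking $x$ to be a 3-cycle and $y$ a transposition in $S_3$, so the group really does reach $6$ elements:</p>

```python
from itertools import product

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0,1,2} stored as tuples.
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
x = (1, 2, 0)   # the 3-cycle 0 -> 1 -> 2 -> 0
y = (1, 0, 2)   # the transposition swapping 0 and 1

# The defining relations hold for these concrete permutations:
assert compose(x, compose(x, x)) == e              # x^3 = 1
assert compose(y, y) == e                          # y^2 = 1
assert compose(x, y) == compose(y, compose(x, x))  # xy = yx^2

# Close {e, x, y} under composition to enumerate the whole group.
G = {e, x, y}
changed = True
while changed:
    changed = False
    for a, b in list(product(G, repeat=2)):
        c = compose(a, b)
        if c not in G:
            G.add(c)
            changed = True
assert len(G) == 6
```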
|
<p>I am a programmer, so to me $[x] \neq x$—a scalar in some sort of container is not equal to the scalar. However, I just read in a math book that for $1 \times 1$ matrices, the brackets are often dropped. This strikes me as very sloppy notation if $1 \times 1$ matrices are not at least <em>functionally equivalent</em> to scalars. As I began to think about the matrix operations I am familiar with, I could not think of any (though I am weak on matrices) in which a $1 \times 1$ matrix would not act the same way as a scalar would when the corresponding scalar operations were applied to it. So, is $[x]$ functionally equivalent to $x$? And can we then say $[x] = x$? (And are those two different questions, or are entities in mathematics "duck typed" as we would say in the coding world?)</p>
| charles.y.zheng | 7,862 | <p>No. To give a concrete example, you can multiply a 2x2 matrix by a scalar, but you can't multiply a 2x2 matrix by a 1x1 matrix.</p>
<p>It is sloppy notation.</p>
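<p>The point can be made concrete in code (a sketch; the helper functions below are mine, not a standard library): scalar multiplication is defined for a matrix of any shape, while a matrix product with a $1 \times 1$ factor requires matching inner dimensions and so is undefined against a $2 \times 2$ matrix.</p>

```python
def matmul(A, B):
    # Standard matrix product; the inner dimensions must agree.
    if len(A[0]) != len(B):
        raise ValueError("dimension mismatch")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def scalar_mul(c, A):
    # A scalar multiplies a matrix of any shape entrywise.
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
assert scalar_mul(3, A) == [[3, 6], [9, 12]]   # scalar times 2x2: fine
try:
    matmul(A, [[3]])                            # 2x2 times 1x1: undefined
    raised = False
except ValueError:
    raised = True
assert raised
```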
|
1,531,646 | <p>Find the following limit</p>
<p>$$
\lim_{x\to0}\left(\frac{1+x2^x}{1+x3^x}\right)^\frac1{x^2}
$$</p>
<p>I have used natural logarithm to get</p>
<p>$$
\exp\lim_{x\to0}\frac1{x^2}\ln\left(\frac{1+x2^x}{1+x3^x}\right)
$$</p>
<p>After this, I have tried l'Hôpital's rule but I was unable to get it to a simplified form.</p>

<p>How should I proceed from here? Any help here is much appreciated!</p>
| Hosein Rahnama | 267,844 | <p><strong>Solution Procedure Using Taylor Series and L'Hôpital</strong></p>
<p>Try to understand or prove each of the following steps:</p>
<p>1) $\ln \left( {\frac{{1 + x{2^x}}}{{1 + x{3^x}}}} \right) = \ln \left( {1 + x\frac{{{2^x} - {3^x}}}{{1 + x{3^x}}}} \right)$ </p>
<p>2) $\ln (1 + u) = u + O(u^2)$ as $u \to 0$</p>
<p>3) $O\left( {{x^2}{{\left( {{{{2^x} - {3^x}} \over {1 + x{3^x}}}} \right)}^2}} \right) = O\left( {{x^4}} \right) = o({x^2})$</p>
<p>4) $\ln \left( {1 + x{{{2^x} - {3^x}} \over {1 + x{3^x}}}} \right) = x{{{2^x} - {3^x}} \over {1 + x{3^x}}} + o({x^2})$</p>
<p>5) $\mathop {\lim }\limits_{x \to 0} {1 \over {{x^2}}}\ln \left( {1 + x{{{2^x} - {3^x}} \over {1 + x{3^x}}}} \right) = \mathop {\lim }\limits_{x \to 0} \frac{1}{x^2} \times x{{{2^x} - {3^x}} \over {\left( {1 + x{3^x}} \right)}}$</p>
<p>6) $\mathop {\lim }\limits_{x \to 0} {{{2^x} - {3^x}} \over {x\left( {1 + x{3^x}} \right)}} = \mathop {\lim }\limits_{x \to 0} {{{2^x}\ln 2 - {3^x}\ln 3} \over {\left( {1 + x{3^x}} \right) + x\left( {1 + x\ln 3} \right){3^x}}} = \ln 2 - \ln 3 = \ln \left( {{2 \over 3}} \right)$</p>
<p>7) $ {\exp\lim_{x\to0}\frac1{x^2}\ln\left(\frac{1+x2^x}{1+x3^x}\right)} = {\exp(\ln{2 \over 3}})= {2 \over 3}$</p>
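<p>A quick numerical sanity check of the result (my addition, not part of the derivation): evaluating the original expression for small $x$ shows it approaching $2/3$.</p>

```python
def f(x):
    # The original expression ((1 + x*2^x) / (1 + x*3^x))^(1/x^2).
    return ((1 + x * 2 ** x) / (1 + x * 3 ** x)) ** (1 / x ** 2)

# The values approach 2/3 = 0.666... as x -> 0.
for x in (1e-2, 1e-3, 1e-4):
    assert abs(f(x) - 2 / 3) < 0.01
assert abs(f(1e-4) - 2 / 3) < 1e-3
```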
|
15,033 | <p>I have noticed in <a href="https://meta.mathoverflow.net/questions/833/who-are-the-mathoverflow-moderators">this post</a> that at MO they have e-mail address <code>moderators@mathoverflow.net</code>, which can be used to contact moderators.</p>
<p>Is there a similar address for moderators of this site? If not, would creating such e-mail (and putting contact information at some visible place) be useful?</p>
| Community | -1 | <p>No, there is no such mail address for this site. Using email for moderator purposes has certain drawbacks, and SE tries to keep all moderator communication inside the SE platform for that reason. Keeping this all inside the SE software makes it much easier to see what happened afterwards, which is very useful if there are any complaints about the issue. </p>
<p>In most cases, you don't need to contact <em>all</em> moderators, you need to contact <em>any</em> moderator. And that is what flags are meant for. For the rare case where the issue is too complicated for a flag, you can use the "contact us" link at the bottom. This goes to SE, but they can forward the information to the moderators or handle it themselves. Anything that doesn't have to be private should be on meta anyway.</p>
<p>Suspensions are always accompanied by a moderator message (this is enforced by the software), and the suspended user can reply to the moderator message. They can only reply once to each moderator message though to prevent abuse or spamming of the moderators.</p>
|
78,368 | <p>Please, can anybody give a reference(s) to some good recent review papers about copulas and time series?</p>
| The Bridge | 2,642 | <p>Here is what I found in my e-library. The following articles can be found on arXiv or SSRN:</p>
<p>Brahimi, Necir - A Semiparametric Estimation of Copula Models Based on the Method of Moments</p>
<p>Chicheportiche, Bouchaud - Goodness-of-Fit tests with Dependent Observations</p>
<p>Amblard, Girard - Estimation Procedures for a Semiparametric Family of Bivariate Copulas</p>
<p>Bergsma - Nonparametric Testing of Conditional Independence by Means of the Partial Copula</p>
<p>Segers - Asymptotics of Empirical Copula Processes under Nonrestrictive Smoothness Assumptions</p>
<p>Segers - Weak Convergence of Empirical Copula Processes under Nonrestrictive Smoothness Assumptions</p>
<p>Otherwise you can check the book by Nelsen, "An Introduction to Copulas". </p>
<p>Regards</p>
|
<p>Basically the question is asking us to prove that, given any integers <span class="math-container">$$x_1,x_2,x_3,x_4,x_5$$</span> 3 of the integers from the set above, say <span class="math-container">$$x_a,x_b,x_c$$</span> satisfy this equation: <span class="math-container">$$x_a^2 + x_b^2 + x_c^2 = 3k$$</span> So I know I am supposed to use the pigeonhole principle to prove this. I know that if I have 5 pigeons and 2 holes then 1 hole will have 3 pigeons. But what I am confused about is how do you define the holes? Do I just say that a container has the property that if 3 integers are in it then those 3 integers squared sum up to a multiple of 3?</p>
| DeepSea | 101,504 | <p>Every square is <span class="math-container">$\equiv 0$</span> or <span class="math-container">$1 \pmod 3$</span>, and those two residues are the holes. Let's look at any <span class="math-container">$3$</span> of them, say <span class="math-container">$a,b,c$</span> among <span class="math-container">$a,b,c,d,e$</span>. In the worst scenario their squares are not all congruent, so (mod <span class="math-container">$3$</span>) you have <span class="math-container">$a^2 = 0, b^2=1, c^2=0$</span> or <span class="math-container">$a^2=0, b^2=1, c^2=1$</span>. For the last <span class="math-container">$2$</span> numbers <span class="math-container">$d,e$</span>: if at least one, say <span class="math-container">$d$</span>, has <span class="math-container">$d^2 = 0$</span>, you're done. If not, <span class="math-container">$d^2= e^2 = 1$</span>, and then <span class="math-container">$b^2+d^2+e^2 = 0$</span>, all mod <span class="math-container">$3$</span>. And you are done. </p>
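<p>An exhaustive check (my sketch): since sums of squares mod $3$ depend only on the residues of the five integers, verifying all $3^5 = 243$ residue patterns covers every possible choice of five integers.</p>

```python
from itertools import combinations, product

def has_triple(nums):
    # Is there a choice of 3 of the given integers whose squares
    # sum to a multiple of 3?
    return any((a * a + b * b + c * c) % 3 == 0
               for a, b, c in combinations(nums, 3))

# Squares only depend on residues mod 3, so checking all 3^5 = 243
# residue patterns covers every possible 5-tuple of integers.
assert all(has_triple(t) for t in product(range(3), repeat=5))
```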
|