3,151,662
<p>Consider <span class="math-container">$a_1,\dots,a_n\in\mathbb{R}^n$</span> and identify <span class="math-container">$a_j\in\mathcal{L}(\mathbb{R},\mathbb{R}^n)$</span> via <span class="math-container">$\varphi\mapsto \varphi1$</span>.</p> <p>Also, consider <span class="math-container">$A\in\mathcal{L}(\mathbb{R}^n)$</span> given by <span class="math-container">$$A\colon (x_1,\dots,x_n)\mapsto a_1x_1 + \dots + a_nx_n\tag{$\star$}$$</span></p> <p>What's the name or symbol of the map <span class="math-container">$$\mathcal{L}(\mathbb{R},\mathbb{R}^n)\times\dots\times\mathcal{L}(\mathbb{R},\mathbb{R}^n)\to\mathcal{L}(\mathbb{R}^n,\mathbb{R}^n),\quad(a_1,\dots,a_n)\mapsto A$$</span> where <span class="math-container">$A$</span> and <span class="math-container">$a_1,\dots,a_n$</span> are related as in <span class="math-container">$(\star)$</span>? I'd like to write e.g. <span class="math-container">$A=a_1\otimes\dots\otimes a_n$</span>. Is there some higher level concept that induces that map? (E.g. some time ago I wondered if this map would be the tensor product). </p> <p>The matrix equivalent would be saying that the columns of <span class="math-container">$A$</span> are the column vectors <span class="math-container">$a_1,\dots,a_n$</span> and writing <span class="math-container">$A=\begin{bmatrix}a_1&amp; \dots &amp;a_n\end{bmatrix}$</span>. However, I'd like to keep things matrix free.</p> <p>Thanks in advance.</p>
Mason
464,960
<p>There are plenty of applications of determinants, but I will just mention one that applies to optimization. A totally unimodular matrix is a matrix (it doesn't have to be square) in which every square submatrix has determinant 0, 1 or -1. It turns out (by Cramer's rule) that if the constraint matrix <span class="math-container">$A$</span> of a linear program max <span class="math-container">$\{c'x:\: Ax \leq b, x \in \mathbb{R}^n_+\} $</span> is totally unimodular and <span class="math-container">$b$</span> is integral, the program is guaranteed to have an integer optimal solution if an optimal solution exists. In other words, the polyhedron <span class="math-container">$P = \{x:\: Ax \leq b\}$</span> has integer vertices in <span class="math-container">$\mathbb{R}^n$</span>. This has major implications in integer programming, as we can solve an integer program whose constraint matrix is totally unimodular as a linear program. This is advantageous because a linear program can be solved in polynomial time, whereas no polynomial-time algorithm is known for general integer programs.</p>
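Total unimodularity can be sanity-checked by brute force on small matrices. This sketch is not from the original answer; the example matrices are my own (an interval matrix, a classic totally unimodular family, and a small non-TU counterexample):

```python
import itertools
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """Brute-force check: every square submatrix must have determinant in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                    return False
    return True

# Interval matrix (consecutive ones in each row): totally unimodular
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(is_totally_unimodular(A))   # True

# Contains a 2x2 submatrix of determinant 2, so not TU
B = np.array([[1, 1],
              [-1, 1]], dtype=float)
print(is_totally_unimodular(B))   # False
```

This exponential-time check is only for illustration; in practice total unimodularity is recognized through structural results (e.g. network and interval matrices).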
J. Wang
545,729
<p>My first brief understanding of matrices is that they offer an elegant way to deal with data (combinatorially, sort of). A classical and really concrete example would be a discrete Markov chain (don't be frightened by its name). Say you are given the following information: if today is rainy, then tomorrow has a 0.9 probability of being rainy; if today is sunny, then tomorrow has a 0.5 probability of being rainy. Then you may organize these data into a matrix:</p> <p><span class="math-container">$$A=\begin{pmatrix} 0.9 &amp; 0.5 \\ 0.1 &amp; 0.5 \end{pmatrix}$$</span></p> <p>Now if you compute <span class="math-container">$A^2=\begin{pmatrix} 0.86 &amp; 0.7 \\ 0.14 &amp; 0.3 \end{pmatrix}$</span>, what do you get? 0.86 is the probability that if today is rainy then the day after tomorrow is still rainy, and 0.7 is the probability that if today is sunny then the day after tomorrow is rainy. And this pattern holds for <span class="math-container">$A^n$</span> for arbitrary <span class="math-container">$n$</span>.</p> <p>That's the simple point: matrices are a way to calculate elegantly. In my understanding, this aligns with the spirit of mathematics. Math occurs when people try to solve practical problems. People find that if they make good definitions and use good notation, things become a lot easier. Here comes math. And the matrix is such a good notation for making things easier.</p>
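A minimal numerical check of the matrix product above (Python/NumPy here purely for illustration):

```python
import numpy as np

# Columns index today's weather (rainy, sunny); rows index tomorrow's.
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])

A2 = A @ A   # two-step (day-after-tomorrow) transition probabilities
print(A2)    # matches the matrix in the answer: [[0.86 0.7], [0.14 0.3]]

# n-step probabilities are the n-th matrix power:
A5 = np.linalg.matrix_power(A, 5)
```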
3,318,993
<p>Consider the functions <span class="math-container">$x$</span> and <span class="math-container">$x^2$</span> on <span class="math-container">$\mathbb{R}$</span>. Clearly, they are linearly independent.<br> But consider the following argument.</p> <p>Consider the matrix <span class="math-container">$$A = \begin{bmatrix} x &amp; x^2\\ 0 &amp; 0\\ \end{bmatrix}$$</span></p> <p>Clearly, the determinant is zero. This implies the existence of a nonzero matrix, <span class="math-container">$$B =\begin{bmatrix} a\\ b\\ \end{bmatrix}$$</span> such that <span class="math-container">$$AB=0$$</span>.</p> <p>This implies that <span class="math-container">$ax+bx^2=0$</span> for some nonzero <span class="math-container">$a$</span> or some nonzero <span class="math-container">$b$</span>. But this implies that <span class="math-container">$x$</span> and <span class="math-container">$x^2$</span> are linearly dependent.</p> <p>Clearly, false.</p> <hr> <p>Where’s the flaw?</p>
Zbigniew
11,995
<p><span class="math-container">$x$</span> and <span class="math-container">$x^2$</span> are considered to be elements of the vector space <span class="math-container">$\mathbb{R}[X]$</span> or <span class="math-container">$\mathbb{R}[X]_2 $</span> endowed with the basis <span class="math-container">$1,x,x^2$</span>. Then <span class="math-container">\begin{align*} x=&amp;0\times 1 + 1\times x+ 0\times x^2\\ x^2=&amp;0\times 1 + 0\times x+ 1\times x^2 \end{align*}</span>, hence <span class="math-container">$\begin{pmatrix} 0 \\ 1\\0\end{pmatrix}$</span>, <span class="math-container">$\begin{pmatrix} 0 \\ 0\\1\end{pmatrix}$</span> are the coordinates of <span class="math-container">$x$</span> and <span class="math-container">$x^2$</span> respectively. Clearly, these two vectors are linearly independent.</p>
120,067
<p>The <em>theta function</em> is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property.</p> <blockquote> <p><strong>Theta reciprocity</strong>: $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$.</p> </blockquote> <p>This theorem, while fundamentally analytic&mdash;the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform&mdash;has serious arithmetic significance.</p> <ul> <li><p>It is the key ingredient in the proof of the functional equation of the Riemann zeta function.</p></li> <li><p>It expresses the <em>automorphy</em> of the theta function.</p></li> </ul> <p>Theta reciprocity also provides an analytic proof (actually, the <em>only</em> proof, as far as I know) of the Landsberg-Schaar relation</p> <p>$$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$</p> <p>where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon&gt;0$, and then let $\epsilon\to 0$.</p> <p>This reduces to the formula for the quadratic Gauss sum when $q=1$:</p> <p>$$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} = \begin{cases} \sqrt{p} &amp; \textrm{if } \; p\equiv 1\mod 4 \\\ i\sqrt{p} &amp; \textrm{if } \; p\equiv 3\mod 4 \end{cases}$$</p> <p>(where $p$ is an odd prime). 
From this, it's not hard to deduce Gauss's "golden theorem".</p> <blockquote> <p><strong>Quadratic reciprocity</strong>: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$.</p> </blockquote> <p>For reference, this is worked out in detail in the paper "<a href="http://www.math.kth.se/~akarl/langmemorial.pdf">Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals</a>" by Anders Karlsson.</p> <hr> <p>I feel like there is some deep mathematics going on behind the scenes here, but I don't know what.</p> <blockquote> <p>Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)?</p> </blockquote> <p>Hopefully some wise number theorist can shed some light on this!</p>
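The Landsberg-Schaar relation quoted above is easy to sanity-check numerically (a quick sketch, not part of the original post; the ranges of p and q are arbitrary):

```python
import cmath
import math

def ls_lhs(p, q):
    """Left side: (1/sqrt(p)) * sum_{n<p} exp(2*pi*i*n^2*q/p)."""
    return sum(cmath.exp(2j * math.pi * n * n * q / p) for n in range(p)) / math.sqrt(p)

def ls_rhs(p, q):
    """Right side: (e^{i*pi/4}/sqrt(2q)) * sum_{n<2q} exp(-pi*i*n^2*p/(2q))."""
    prefactor = cmath.exp(1j * math.pi / 4) / math.sqrt(2 * q)
    return prefactor * sum(cmath.exp(-1j * math.pi * n * n * p / (2 * q)) for n in range(2 * q))

for p in range(1, 8):
    for q in range(1, 5):
        assert abs(ls_lhs(p, q) - ls_rhs(p, q)) < 1e-9

# Quadratic Gauss sum special case q = 1 for odd primes:
assert abs(math.sqrt(5) * ls_lhs(5, 1) - math.sqrt(5)) < 1e-9        # p = 1 mod 4
assert abs(math.sqrt(3) * ls_lhs(3, 1) - 1j * math.sqrt(3)) < 1e-9   # p = 3 mod 4
```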
paul garrett
15,629
<p>One way a person could stumble on quadratic reciprocity while looking at theta functions is by trying to prove that Weil's construction of an adelic "Segal-Shale-Weil/oscillator representation" really is a representation, and really produces automorphic forms. This would lead a person to see that certain products of local characters must be Hecke characters... and that Poisson summation exactly proves this.</p> <p>In somewhat more detail: given a local field $k$ (maybe not char 2), the "basic" <em>local</em> Segal-Shale-Weil repn is a repn of a two-fold cover $Mp_1(k)$ on Schwartz functions on $k$. For $K$ either a quadratic extension of $k$ or $k\oplus k$, the analogous repn of $Mp_1(k)$ descends to $SL_2(k)$, since the relevant cycle "splits". Without even knowing how to verify that splitting, one could still just try to directly assemble this local repn from a Bruhat decomposition and from the action of the standard unipotent radical, the standard Levi, and the Weyl element. </p> <p>That is, with the rough idea that the Weyl element should be Fourier transform, up to sign..., that the Levi component should act by dilation, up to maybe a character and a norm to make the operator unitary, and that the unipotent radical should act by multiplication by a quadratic exponential... (which one might get from the repn of the Lie algebra of $SL_2(\mathbb R)$ on various function spaces on $\mathbb R$!!!)... one sees that for these pieces to fit together to make a repn, the Levi action is indeed not just dilation (with adjustment by norm), but twisted by the norm residue character attached to $K/k$. 
This gives a repn of $O(N_{K/k})\times SL_2(k)$ viewing the norm as a quadratic form on $K$.</p> <p>When the local situation arises across all places from a number field extension $K/k$ (or function field, too, tho' eschewing char$=2$), one can try to apply this to the global pair $O(N_{K/k})\times SL_2(k)$ to make binary theta series and "special waveforms" by lifting Hecke characters from the adele group of $SO(K/k)$ to the adele group of $SL_2(k)$. This is the smallest interesting "theta correspondence", and the holomorphic case was known to Hecke, and the waveform case was Maass' thesis, though of course they wrote things classically and over $\mathbb Q$.</p> <p>To a Schwartz function $\varphi$ on the adeles of $K$ attach a "theta kernel" by $\theta_\varphi(g,h)=\sum_{\lambda\in K} \big((g,h)\cdot \varphi\big)(\lambda)$ where $g,h$ are in the adelized $SL_2(k)$ and $O(N_{K/k})$. This is immediately invariant under the action of the rational points of $O(N_{K/k})$, but it is not obvious that it is invariant under $SL_2(k)$. But, as expected (!), one can verify that it <em>is</em>.</p> <p>One point is that the character by which the Levi acts is a Hecke character. If not, the theta kernel will not be suitably invariant... but/and one uses Poisson summation and natural computations to prove that it <em>is</em> a Hecke character. Great! That is, one proves that the product of the local norm residue symbols is a Hecke character. From this, one proves the corresponding reciprocity for the quadratic Hilbert symbols, and then for quadratic symbols.</p> <p>The argument is not as short as one might wish... 
but is fairly natural.</p> <p>I did not quite see/find exactly that argument in Weil, despite looking again later, perhaps because of the style of his piece, but/and I'm sure he would say it is implicit, and one sees that it must be.</p> <p>After I went through the above story first-hand years ago, I wrote up the spin-off that proves quadratic reciprocity over global fields not char zero... a recently-slightly-revised version is at <a href="http://www.math.umn.edu/~garrett/m/v/quad_rec_02.pdf">http://www.math.umn.edu/~garrett/m/v/quad_rec_02.pdf</a></p>
1,841,958
<p>This is a claim on Wikipedia: <a href="https://en.wikipedia.org/wiki/Partially_ordered_set">https://en.wikipedia.org/wiki/Partially_ordered_set</a></p> <p>I am not sure how to make sense of the claim.</p> <p>What does it mean by "ordered by inclusion"? Inclusion as in $\subseteq$? </p> <p>Can someone provide a small example of a couple of subspaces being "ordered" by inclusion?</p> <p>Is this a linear order?</p>
Ashwin Ganesan
157,927
<p>Let $A$ be the set of all subspaces of $\mathbb{R}^3$. Let $R$ be a binary relation on $A$ defined by $R: = \{(U,V): U \mbox{ is a subspace of } V \}$. So the relation $R$ is a subset of $A \times A$. This relation is reflexive (because every subspace $V$ is a subspace of itself), antisymmetric (if $U$ is a subspace of $V$ and $V$ is a subspace of $U$, then $U=V$) and transitive (if $U$ is a subspace of $V$ and $V$ is a subspace of $W$, then $U$ is a subspace of $W$). Hence, the relation $R$ is a partial order. </p> <p>This partial order is not a linear order because if $e_i$ denotes the unit vector in the $i$th direction, then the subspaces $U = \operatorname{span}\{e_1,e_2\}$ and $V=\operatorname{span}\{e_2,e_3\}$ are incomparable, i.e., neither is $U$ a subspace of $V$ nor is $V$ a subspace of $U$. </p>
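To make the incomparability concrete, here is a small numerical sketch (my own illustration, using the rank criterion: span(U) is contained in span(V) iff appending U's columns to V does not increase the rank):

```python
import numpy as np

def is_subspace_of(U, V):
    """Is the column span of U contained in the column span of V?"""
    return np.linalg.matrix_rank(np.hstack([V, U])) == np.linalg.matrix_rank(V)

E = np.eye(3)
U = E[:, [0, 1]]   # span{e1, e2}
V = E[:, [1, 2]]   # span{e2, e3}

print(is_subspace_of(U, E))  # True: every subspace sits inside R^3
print(is_subspace_of(U, V))  # False
print(is_subspace_of(V, U))  # False: U and V are incomparable
```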
15,159
<p>Specifically, is it possible for a non-Noetherian ring $R$ to have $R[x]$ Noetherian? Every reference I've seen for the Hilbert basis theorem only states the direction "$R$ Noetherian $\Rightarrow$ $R[x]$ Noetherian", which would certainly seem to imply that the converse is false. Unfortunately, it's tough to think about non-Noetherian rings, and what I'm sure is most people's favorite example of one, $K[x_1,x_2,\ldots]$ for a field $K$, is obviously not going to help us here.</p>
Wanderer
1,107
<p>If $A$ is an ideal of $R$, then $A[X]$ is an ideal of $R[X]$, right? So an ascending chain of ideals in $R$ which does not stabilize gives you an ascending chain of ideals in $R[X]$ which doesn't stabilize either?</p>
Pete L. Clark
1,149
<p>Dear Zev,</p> <p>There are some sources which give the converse. See e.g. pp. 64-65 of</p> <p><a href="http://alpha.math.uga.edu/%7Epete/integral.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/integral.pdf</a></p> <p>For that matter, see also pp. 32-33 of <em>loc. cit.</em> for the Chinese Remainder Theorem and its converse. (And I am not the only one to do this...)</p> <p>Note that in both cases the converse is left as an exercise. I think (evidently) that this is the right way to go: it may not be so easy for the journeyman mathematician to come up with the statement of the converse, but having seen the statement it is a very valuable exercise to come up with the proof. (In particular, I believe that in a math text or course at the advanced undergraduate level and beyond, most exercises should indeed be things that one could find useful later on, and not just things which are challenging to prove but deservedly forgettable.)</p> <p>Finally, by coincidence, just yesterday in my graduate course on local fields I got to the proof of the &quot;tensor product theorem&quot; on the classification of norms in a finite-dimensional field extension (which came up in a previous MO answer). The key idea of the proof -- which I found somewhat challenging to write; I certainly admit to the possibility of improvements in the exposition -- seems to be suspiciously close to the valuation-theoretic analogue of the converse of the Chinese Remainder Theorem! See pp. 18-19 of</p> <p><a href="http://alpha.math.uga.edu/%7Epete/8410Chapter2.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/8410Chapter2.pdf</a></p> <p>if you're interested.</p>
2,099,828
<p>Can anyone show me that both $\cos t$ and $\sin t$ are eigen-signals? Here is a little bit of background on eigen-functions. </p> <blockquote> <p>The output of a continuous-time, linear time-invariant system is denoted by $T\{z(t)\}$, where $z(t)$ is the input signal. A signal $z(t)$ is called an eigen-signal of the system $T$ when $T\{z(t)\} = \gamma z(t)$, where $\gamma$ is a complex number, in general, and is called an eigenvalue of $T$. <strong>EDIT</strong>: Suppose the impulse response of the system $T$ is real and even.</p> </blockquote>
jnez71
295,791
<p>The output of the LTI system will be the convolution of its impulse response with the input (<a href="https://en.wikipedia.org/wiki/Linear_time-invariant_theory#Impulse_response_and_convolution" rel="nofollow noreferrer">see</a>). Since the impulse response of a Lyapunov stable LTI system is a finite sum of complex exponentials (<a href="http://tutorial.math.lamar.edu/Classes/DE/DiracDeltaFunction.aspx" rel="nofollow noreferrer">see example 1 here</a>), and the sin and cos functions may also be represented as a sum of complex exponentials (<a href="https://en.wikipedia.org/wiki/Euler&#39;s_formula#Relationship_to_trigonometry" rel="nofollow noreferrer">see</a>), it is clear that the integrand in the convolution operation is also an exponential. The integral of an exponential is a scaling of that same exponential. Thus, an input with a sinusoidal (complex exponential) form yields an output also with a sinusoidal (complex exponential) form.</p> <p>I have to argue with your T{z(t)} notation though, because in many cases T{sin(t)} = a*sin(t+b). The "phase shift" b cannot be ignored. Of course, this is just a notation issue, because generally speaking, your problem is supposing z(t) = a*e^(s*t) and T{z} will result in only a scaling of the complex amplitude of the phasor z(t). For example, T{z} could yield a*e^(b)*e^(s*t) = a*e^(s*t+b), i.e. a phase shift, while there is still a clear eigenvalue of e^b. Note that a, b, and s are all in general complex.</p> <p>The fact that the LTI system is Lyapunov stable is also important. Without Lyapunov stability, it can have an impulse response of the form t*e^-t which would break this argument. I assume that whatever "even" means in regards to the impulse response is meant to handle that case.</p>
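A numerical illustration of the convolution argument (my own sketch; the first-order system h(t) = e^{-t} for t >= 0 and the frequency w = 2 are arbitrary choices). After the transient dies out, the output is the input cosine scaled by |H(jw)| and phase-shifted by arg H(jw), where H(s) = 1/(1+s) for this impulse response:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
w = 2.0

h = np.exp(-t)                          # impulse response h(t) = e^{-t}, t >= 0
z = np.cos(w * t)                       # input, switched on at t = 0
y = np.convolve(z, h)[: t.size] * dt    # output y = (h * z)(t), Riemann sum

H = 1.0 / (1.0 + 1j * w)                # transfer function at s = jw
y_ss = np.abs(H) * np.cos(w * t + np.angle(H))  # predicted steady state

i = int(15.0 / dt)                      # well past the transient
print(abs(y[i] - y_ss[i]) < 1e-2)       # True: same cosine, scaled and shifted
```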
1,039,141
<blockquote> <p>Let <span class="math-container">$X = \mathbb{R}$</span> and <span class="math-container">$Y = \{x \in \mathbb{R} :x ≥ 1\}$</span>, and define <span class="math-container">$G : X → Y$</span> by <span class="math-container">$$G(x) = e^{x^2}.$$</span> Prove that <span class="math-container">$G$</span> is onto.</p> </blockquote> <p>Is this going along the right path, and if so how do I get the function to equal <span class="math-container">$y$</span>?</p> <blockquote> <p><span class="math-container">$G: \mathbb{R} \to\mathbb{N}_1$</span>. Let <span class="math-container">$y \in \mathbb{N}_1$</span>.</p> <p><em>claim:</em> <span class="math-container">$\sqrt{\ln y}$</span> maps to <span class="math-container">$y$</span>.</p> <p>Does <span class="math-container">$\sqrt{\ln y}$</span> belong to <span class="math-container">$\mathbb{N}_1$</span>? Yes, because <span class="math-container">$y \in \mathbb{N}_1$</span>, <span class="math-container">$G( \sqrt{\ln y})=e^{(\sqrt{\ln y})^2}$</span>.</p> </blockquote>
Paul
17,980
<p>For any $y\in Y$, there is $x= \sqrt{\ln y} \in X$ such that $G(x)=y$.</p>
Graham Kemp
135,106
<p>A function is surjective (onto) iff every element in the codomain is mapped to by at least one element in the domain.</p> <p>So to determine whether $\forall y\in [1, \infty), \exists x\in \mathbb R: y=e^{x^2}$, we ask: do the roots $x=\pm \sqrt{\ln y}$ have a real value for all $y\in [1,\infty)$?</p> <p>Alternatively, we observe the value of the function at $x=0$ and its behaviour as $0&gt;x\to-\infty$ and as $0&lt;x\to\infty$.</p>
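A quick numerical sanity check of the witness x = sqrt(ln y) (illustration only; the sample values of y are my own):

```python
import math

def G(x):
    return math.exp(x * x)

for y in [1.0, 1.5, 2.0, 10.0, 1e6]:
    x = math.sqrt(math.log(y))        # real, since ln y >= 0 whenever y >= 1
    assert abs(G(x) - y) <= 1e-9 * y  # G maps the witness back to y
```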
2,512,461
<blockquote> <p>For any non-zero vector $x$, $$ \lVert x\rVert_0 \geq \frac{\lVert x\rVert_1^2}{\lVert x\rVert_2^2} $$</p> </blockquote> <p>I am trying to prove this inequality using the definition of the $\ell_0$ "norm" (the number of nonzero elements in the vector) and the definitions of the $\ell_1$ and $\ell_2$ norms, but I'm getting nowhere. I tried using $1$ or $n$ to prove it but it didn't help. </p>
Clement C.
75,808
<p>This is a direct consequence of Cauchy-Schwarz: writing $\mathbf{1}_{\{a \neq 0\}}$ for the indicator of $\{a\neq 0\}$,</p> <p>$$ \mathbf{1}_{\{a \neq 0\}} = \begin{cases} 1 &amp;\text{ if } a\neq 0\\0&amp;\text{ otherwise}\end{cases} $$ we have $$\begin{align} \lVert x\rVert_1 = \sum_{i} \lvert x_i\rvert = \sum_{i} \lvert x_i\rvert \mathbf{1}_{\{x_i \neq 0\}} &amp;\leq \sqrt{\sum_{i} \lvert x_i\rvert^2}\sqrt{\sum_{i} \mathbf{1}_{\{x_i \neq 0\}}^2} = \sqrt{\sum_{i} \lvert x_i\rvert^2}\sqrt{\sum_{i} \mathbf{1}_{\{x_i \neq 0\}}}\\ &amp;= \lVert x\rVert_2\sqrt{\lVert x\rVert_0} \end{align}$$ where the only inequality is Cauchy-Schwarz, and we used the fact that $\mathbf{1}_{\{x_i \neq 0\}}=\mathbf{1}_{\{x_i \neq 0\}}^2$ for all $i$ (as $0^2=0$ and $1^2=1$).</p> <p>Squaring and rearranging gives the result.</p>
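A quick numerical check of the inequality on random sparse vectors (illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(1, 10))
    x = rng.normal(size=n) * (rng.random(size=n) < 0.7)  # zero out ~30% of entries
    if not x.any():
        continue  # the inequality is stated for non-zero vectors
    l0 = np.count_nonzero(x)
    ratio = np.abs(x).sum() ** 2 / (x ** 2).sum()
    assert l0 >= ratio - 1e-12  # ||x||_0 >= ||x||_1^2 / ||x||_2^2
```

Equality is attained when all nonzero entries of x have the same absolute value, which is exactly the equality case of Cauchy-Schwarz here.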
167,326
<p>Let $ S = \sum_{i=1}^N X_i$ where:</p> <ul> <li>Each $X_i$ is independently 3 or 9 (with equal probability), and</li> <li>The sample size $N$ is itself an independent random variable with $N \sim \text{NegativeBinomial}(r,p)$, e.g. $r = 5$ and $p = \frac34$</li> </ul> <p>Let $W = \begin{cases}S-10 &amp; S &gt; 10 \\ 0 &amp; S \leq 10 \end{cases} . \quad$ </p> <p>Find: (i) the pmf of $S\quad$ (ii) the pmf of $W \quad$ (iii) If not the former, can one at least find $\mathbb{E}[W]$ ?</p> <hr> <p>How can I obtain the distribution of $W$ using Mathematica? Although it may not be able to give me a reasonable PDF, <strong>can I at least find the expected value of $W$ with Mathematica? Also, can I draw random samples from the distribution of $W$?</strong> If so, how would I do this? Thank you.</p>
Henrik Schumacher
38,178
<p>Simulating random samples is straightforward:</p> <pre><code>r = 5; p = 0.75; m = 10000000; Nlist = RandomVariate[NegativeBinomialDistribution[r, p], m]; W = Ramp[Subtract[Total[RandomChoice[{3, 9}, #] &amp; /@ Nlist, {2}], 10]]; </code></pre> <p>You <em>could</em> obtain the empirical probability density function by</p> <pre><code>distro = EmpiricalDistribution[W]; ρ = PDF[distro, #] &amp;; </code></pre> <p>However, using <code>BinCounts</code> is much faster:</p> <pre><code>density = N[BinCounts[W, {0, Max[W], 1}]/m]; ListPlot[density, PlotRange -&gt; All, Filling -&gt; 0] </code></pre> <p><a href="https://i.stack.imgur.com/3wjQY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3wjQY.png" alt="enter image description here"></a></p>
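For readers without Mathematica, a rough Python/NumPy equivalent of the same simulation (my own translation; NumPy's `negative_binomial`, like Mathematica's `NegativeBinomialDistribution`, counts failures before the r-th success):

```python
import numpy as np

rng = np.random.default_rng(1)
r, p, m = 5, 0.75, 1_000_000

N = rng.negative_binomial(r, p, size=m)   # random sample sizes
# Each X_i is 3 or 9 with equal probability, i.e. 3 plus 6*Bernoulli(1/2),
# so S = 3*N + 6*Binomial(N, 1/2) avoids an inner loop over draws.
S = 3 * N + 6 * rng.binomial(N, 0.5)
W = np.maximum(S - 10, 0)

print(W.mean())   # Monte Carlo estimate of E[W]
```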
112,651
<p>What is known about the set of well orderings of $\aleph_0$ in set theory without choice? I do not mean the set of countable well-order types, but the set of all subsets of $\aleph_0$ which (relative to a pairing function) code well orderings. And I would be interested in an answer in, say, ZF without choice. My actual concern is higher order arithmetic.</p> <p>I would not be surprised if ZF proves there are continuum many. But I don't know.</p> <p>At the opposite extreme, is it provable in ZF that there are not more well orderings of $\aleph_0$ than there are countable well-order types?</p>
François G. Dorais
2,000
<p>This is an aside that I mentioned elsewhere long ago but deserves mention here since it homes in on the counterintuition that probably led Colin to doubt the answer.</p> <p>As Colin pointed out, every $R \subset \omega$ can be interpreted as a binary relation on $\omega$ through a pairing function. This leads to a partition $\mathcal{B}$ of $\mathcal{P}(\omega)$ into isomorphism classes of binary relational structures $(\omega,R)$. Every countable infinite ordinal $\alpha$ has its own isomorphism class $B_\alpha \in \mathcal{B}$ and therefore $\aleph_1 \preceq \mathcal{B}$. We can also see that $2^{\aleph_0} \preceq \mathcal{B}$ in a multitude of ways. For example, we can map each $X \subseteq \omega$ to the isomorphism class of the directed graph consisting of one directed cycle of length $n+1$ for each $n \in X$ and infinitely many isolated points to fill space. In fact, we see that $\aleph_1 + 2^{\aleph_0} \preceq \mathcal{B}$ since the ranges of these two maps are disjoint. This is all provable without the axiom of choice.</p> <p>There are models of ZF in which $2^{\aleph_0}$ and $\aleph_1$ are incomparable cardinals. Solovay's model where all sets of reals are Lebesgue measurable is such an example. In such models, $\mathcal{B}$ must have cardinality strictly greater than $2^{\aleph_0}$... Yes, that's right: $\mathcal{B}$ is a partition of $\mathcal{P}(\omega)$ that has more pieces than there are elements in $\mathcal{P}(\omega)$!</p>
3,125,263
<p>I can't solve the last exercise in a worksheet of Pre-Calculus problems. It says:</p> <p>The quadratic function <span class="math-container">$f(x)=ax^2+bx+c$</span> determines a parabola that passes through the points <span class="math-container">$(0, 2)$</span> and <span class="math-container">$(4, 2)$</span>, and its vertex has coordinates <span class="math-container">$(x_v, 0)$</span>.</p> <p>a) Calculate the coordinate <span class="math-container">$x_v$</span> of the parabola's vertex.</p> <p>b) Calculate the coefficients <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span>.</p> <p>How can I get the parabola's equation from this information and find what is requested?</p> <p>I would appreciate any help. Thanks in advance.</p>
J. W. Tanner
615,567
<p>Since <span class="math-container">$f(0)=c$</span> and we are given <span class="math-container">$f(0)=2$</span>, we see immediately that <span class="math-container">$c=2.$</span></p> <p>Furthermore, the equation in vertex form is <span class="math-container">$f(x)=a(x-x_v)^2+k$</span>, </p> <p>and since we are given <span class="math-container">$f(x_v)=0$</span>, we see that <span class="math-container">$k=0,$</span> i.e., <span class="math-container">$f(x)=a(x-x_v)^2$</span>. </p> <p>From <span class="math-container">$a(x-x_v)^2=ax^2+bx+2$</span> we see that <span class="math-container">$ax_v^2 = 2$</span> and <span class="math-container">$-2ax_v=b.$</span> </p> <p>Since <span class="math-container">$f(4)=f(0)=2$</span>, <span class="math-container">$(4-x_v)^2=x_v^2$</span>, which means <span class="math-container">$x_v=2$</span>. Thus <span class="math-container">$a=\frac12$</span> and <span class="math-container">$b=-2.$</span></p>
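A quick numerical check that the coefficients found above satisfy all three conditions:

```python
a, b, c = 0.5, -2.0, 2.0   # the coefficients derived above
x_v = 2.0

def f(x):
    return a * x * x + b * x + c

assert f(0.0) == 2.0   # passes through (0, 2)
assert f(4.0) == 2.0   # passes through (4, 2)
assert f(x_v) == 0.0   # vertex at (2, 0)
assert b == -2 * a * x_v and a * x_v ** 2 == 2.0  # consistent with vertex form
```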
3,608,441
<p>It is well known that we can define <span class="math-container">$e^x$</span> by the following limit</p> <p><span class="math-container">$$e^{x}=\lim_{n\to\infty}\left(1+{x\over n}\right)^n$$</span></p> <p>I would like to show that the RHS sequence is always less than or equal to <span class="math-container">$e^x$</span> for all <span class="math-container">$-n\le x\le0$</span> and <span class="math-container">$n&gt;1$</span>, and what I had currently done is to study the property of this sequence (which is defined as <span class="math-container">$f(x)$</span>)</p> <p><span class="math-container">$$f(x)=\left(1+{x\over n}\right)^n$$</span></p> <p>I also find its first and second derivative <span class="math-container">$f'(x)=\left(1+{x\over n}\right)^{n-1}$</span> and <span class="math-container">$f''(x)={n-1\over n}\left(1+{x\over n}\right)^n$</span> and show that they are strictly positive for all <span class="math-container">$x\in[-n,0]$</span>. As a result <span class="math-container">$f(x)$</span> is monotonically increasing and concave up. By plugging in the end points I found that <span class="math-container">$f(0)=1\le e^0$</span> and <span class="math-container">$f(-n)=0&lt;e^{-n}$</span>, I wonder if these conditions allow me conclude that <span class="math-container">$f(x)\le e^x$</span> for all <span class="math-container">$x\in[-n,0]$</span>.</p>
Paramanand Singh
72,031
<p>Let <span class="math-container">$x=-y$</span> so that <span class="math-container">$0\leq y\leq n$</span>. If <span class="math-container">$a_n$</span> is the sequence in question then consider <span class="math-container">$$b_n=\frac{1}{a_n}=\left(1-\frac{y}{n}\right)^{-n}=1+y+\frac{(1/n+1)}{2!}y^2+\frac {(1/n+1)(2/n+1)}{3!}y^3+\dots$$</span> via the binomial theorem (valid for <span class="math-container">$0\leq y&lt;n$</span>). Clearly <span class="math-container">$b_n$</span> is decreasing in <span class="math-container">$n$</span>, hence <span class="math-container">$a_n$</span> is increasing and therefore does not exceed its limit <span class="math-container">$e^x$</span>. </p>
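A quick numerical illustration of the monotonicity claim (illustration only, for one sample value of x in the allowed range):

```python
import math

x = -1.5  # any x in [-n, 0] with n > 1 works here
vals = [(1 + x / n) ** n for n in range(2, 60)]

assert all(u <= v for u, v in zip(vals, vals[1:]))  # a_n is increasing in n
assert all(v <= math.exp(x) for v in vals)          # and never exceeds its limit e^x
```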
3,717,932
<p>How can this identity convolution be shown?</p> <p><span class="math-container">$$\int^\infty_{-\infty} f(\tau)\delta(t-\tau)d\tau=f(t)$$</span></p> <p>I keep getting stuck in traps when trying to show this and need a bit of assistance</p>
md2perpe
168,433
<p><span class="math-container">$$ \int_{-\infty}^{\infty} f(\tau) \, \delta(t-\tau) \, d\tau = \{ \tau = t-\sigma \} = \int_{\infty}^{-\infty} f(t-\sigma) \, \delta(\sigma) \, (-d\sigma) \\ = \int_{-\infty}^{\infty} f(t-\sigma) \, \delta(\sigma) \, d\sigma = f(t-0) = f(t). $$</span></p>
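One way to see why the sifting identity works in practice: replace the delta by a narrow Gaussian of unit area (a "nascent" delta function) and evaluate the convolution numerically; as the width shrinks, the result approaches f(t). A sketch with my own choices of f, t, and widths:

```python
import numpy as np

def delta_eps(s, eps):
    """Narrow Gaussian of width eps and unit area: a nascent delta function."""
    return np.exp(-s ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))

f = np.sin
t = 0.5
tau = np.linspace(-10.0, 10.0, 400001)
dtau = tau[1] - tau[0]

for eps in (0.5, 0.1, 0.02):
    val = np.sum(f(tau) * delta_eps(t - tau, eps)) * dtau  # approximates the convolution
    print(eps, val)   # tends to sin(0.5) ~ 0.4794 as eps shrinks
```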
40,348
<p>I'm trying to prove the following statement (an exercise in Bourbaki's <em>Set Theory</em>): </p> <p><em>If $E$ is an infinite set, the set of subsets of $E$ which are equipotent to $E$ is equipotent to $\mathfrak{P}(E)$.</em> </p> <p>As a hint, there is a reference to a proposition of the book, which reads: </p> <p><em>Every infinite set $X$ has a partition $(X_\iota)_{\iota\in I}$ formed of countably infinite sets, the index set $I$ being equipotent to $X$.</em> </p> <p>I don't have any idea how that proposition might help. </p> <p>If $E$ is countable, then a subset of $E$ is equipotent to $E$ iff it is infinite. But the set of all finite subsets of $E$ is equipotent to $E$. So its complement in $\mathfrak{P}(E)$ has to be equipotent to $\mathfrak{P}(E)$ by Cantor's theorem. Hence the statement is true if $E$ is countable. Unfortunately, I don't see a way to generalize this argument to uncountable $E$.</p> <p>I'd be glad for a small hint to get me going. </p>
JDH
413
<p>Using the axiom of choice, every infinite set $X$ can be divided into two disjoint sets $X_0\sqcup X_1$, both of which are equinumerous with $X$. (Just well-order $X$, and take every other point in the enumeration.) </p> <p>Now, consider all sets of the form $X_0\cup A$ for any $A\subset X_1$. There are $2^X$ many such $A$ and hence $2^X$ many such sets, and each is equinumerous with the original set $X$. So we've got $2^X$ many sets as desired, and there cannot be more than this, so this is the precise number. </p> <p>Incidentally, the stated answer to this question does in fact depend on the axiom of choice, since it is known to be consistent with $ZF+\neg AC$ that there are infinite Dedekind finite sets, and these are not equinumerous with any proper subsets of themselves. So for such an infinite set $X$, there would be only one subset to which it is equinumerous. </p>
875,644
<p>I have a parabolic basin which I am trying to find the equation for so I can reproduce it. I have taken $3$ points along one line of it to find the equation of the parabola, and I'm wondering if there is a way I can go from this to the equation of the parabolic basin. The equation I have for the parabola is:</p> <p>$y = 0.1x^2+0.3 $</p> <p>($b= 0$ so no $x$ term).</p> <p>I understand the equation of a parabolic basin takes the form:</p> <p>$z = ax^2 + by^2$ or something along those lines. </p> <p>Any help would be appreciated.</p>
Did
6,179
<p>$$m=y_1-y_2,\ t=y_1\implies g(y_1-y_2)\mathbf 1_{0\lt y_2\lt y_1\lt1}=g(m)\mathbf 1_{0\lt m\lt t\lt1}$$</p>
3,123,857
<p>I have to find the integral of <span class="math-container">$$\int_{M_0}^{\infty} q(m, \mu, \sigma) \beta e^{-\beta(m-M_0)}\,\mathrm{d}m,$$</span> where <span class="math-container">$q(m, \mu, \sigma)$</span> is the normal cumulative distribution function, <span class="math-container">$M_0$</span> is a constant, <span class="math-container">$m$</span> is the variable, and <span class="math-container">$\beta$</span>, <span class="math-container">$\mu$</span>, and <span class="math-container">$\sigma$</span> are parameters. I have done the integration using the error function as follows:</p> <p><span class="math-container">\begin{align} \int_{M_0}^{\infty} q(m, \mu, \sigma) \beta e^{-\beta(m-M_0)}\,\mathrm{d}m &amp;=\beta e^{\beta M_0} \int_{M_0}^{\infty} \frac{1}{2} \Bigg[ 1+\operatorname{erf}\Bigg (\frac{(m-\mu)}{\sigma \sqrt{2}} \Bigg ) \Bigg] e^{-\beta m }\,\mathrm{d}m \\ &amp;=\frac{1}{2} + \frac{1}{2} \beta e^{\beta M_0} \int_{M_0}^{\infty} \operatorname{erf} \Bigg (\frac{(m-\mu)}{\sigma \sqrt{2}} \Bigg)e^{- \beta m}\,\mathrm{d}m \end{align}</span></p> <p>Here I get stuck. Could anyone please help solve this?</p>
Stan Tendijck
526,717
<p>I will present the solution as a general step-by-step method. I hope you find it useful.</p> <p>First of all, use integration by parts to get rid of the normal CDF in the integral. Can you give it a go? If you have some difficulties, just ask me.</p> <p>If you have done this correctly, you will end up with an integral of the form <span class="math-container">$$ \int_a^b e^{a x^2 + b x + c}\,\mathrm{d} x $$</span> which looks intractable, but turns out to be quite easy if you allow the normal CDF in your answer. The main idea is completing the square: the integral can be rewritten as <span class="math-container">$$ \int_a^b e^{k(x + l)^2 + m}\,\mathrm{d} x $$</span> (with <span class="math-container">$k&lt;0$</span>, which is needed for convergence) and this is then solved as <span class="math-container">$$ \int_a^b e^{k(x + l)^2 + m}\,\mathrm{d} x = e^{m} \int_a^b e^{-\frac{(x + l)^2}{2\sigma^2}}\,\mathrm{d} x = e^{m}\sqrt{2\pi \sigma^2}\left[\Phi(b) - \Phi(a)\right] $$</span> where <span class="math-container">$\sigma^2 = -1/(2k)$</span> and <span class="math-container">$\Phi(z)$</span> is the CDF of a normal distribution with mean <span class="math-container">$-l$</span> and variance <span class="math-container">$\sigma^2$</span>. This may seem a bit unmotivated, but all we did was rewrite the integrand as a multiple of a normal pdf. In general this cannot be simplified any further; however, in your case you have <span class="math-container">$b=\infty$</span> and thus <span class="math-container">$\Phi(b)=1$</span>.</p>
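<p>A numerical sanity check of the completing-the-square identity in the convergent case <span class="math-container">$k&lt;0$</span>, writing the normal CDF with mean <span class="math-container">$-l$</span> and variance <span class="math-container">$-1/(2k)$</span> (the parameter values below are illustrative):</p>

```python
import math

def normal_cdf(z, mu, sigma):
    return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

def direct(a, b, k, l, m, n=100000):
    # Midpoint rule for the integral of exp(k(x+l)^2 + m) over [a, b]
    h = (b - a) / n
    return sum(math.exp(k * (a + (i + 0.5) * h + l) ** 2 + m)
               for i in range(n)) * h

def closed_form(a, b, k, l, m):
    sigma = math.sqrt(-1.0 / (2.0 * k))   # variance is -1/(2k), so k < 0
    return (math.exp(m) * math.sqrt(2.0 * math.pi) * sigma
            * (normal_cdf(b, -l, sigma) - normal_cdf(a, -l, sigma)))

k, l, m = -0.7, 0.3, 0.1
```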
650,710
<p>How would I go about simplifying $4(a-2(b-c)-(a-(b-2)))$. Show working out and steps please.</p> <p>I'd show my working out but I'm not really sure where to start. Firstly, I would want to get rid of the 4 so I'd times everything else by 4 right? No idea. </p>
Vibhs
123,765
<p>$$4(a-2(b-c)-(a-(b-2))) = 4(a-2b+2c-(a-b+2)) = 4(a-2b+2c-a+b-2) = 4(2c-b-2) = 8c-4b-8$$</p>
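<p>The simplification can be spot-checked numerically at random values (a sketch):</p>

```python
import random

def original(a, b, c):
    return 4 * (a - 2 * (b - c) - (a - (b - 2)))

def simplified(b, c):
    return 8 * c - 4 * b - 8

random.seed(0)
checks = [abs(original(a, b, c) - simplified(b, c))
          for a, b, c in ((random.uniform(-10, 10),
                           random.uniform(-10, 10),
                           random.uniform(-10, 10)) for _ in range(100))]
```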
2,238,734
<p>Let $G$ be the group of rationals under addition. If $G_1$ and $G_2$ are two non-trivial subgroups of $G$, prove that $G_1 \cap G_2 \neq \{0\}$.</p>
Salvatore Baldino
374,244
<p>Well, the second definition is not so useful as $\mathbb R^n$ is a vector space, so if you take any two vectors in a subset of $\mathbb R^n$ it is guaranteed that their sum will belong to $\mathbb R^n$ (analogously for any product by a scalar).</p> <p>It would be useful to see what the symbol $&lt;\cdot&gt;$ means, with something in place of the dot. If I'm guessing correctly, $&lt;\cdot&gt;$ is the span of the elements of the set.</p> <p>What does that mean? It means that, if $S=\{x_1,...,x_k\}$, $&lt;S&gt;$ is the span of those vectors. The span is defined as the set of all vectors $x$ that can be expressed as $x=\sum_{i=1}^k a_i x_i$, for some coefficients (uniqueness of the $a_i$ is not required).</p> <p>From this definition, the proof that $&lt;S&gt;$ is a vector space is trivial.</p>
1,983,745
<p>I need to choose whether this is a product notation or a summation. I can't figure out which one it is.</p> <p>I have this expression:</p> <p>$$2 \times 4 \times 6 \times 8 \times 10 \ldots \times 40$$</p> <p>The answer is either:</p> <blockquote> <p>$$\sum_{m=2}^{40} m$$</p> </blockquote> <p><strong>or</strong> </p> <blockquote> <p>$$\prod_{m=2}^{40} m$$</p> </blockquote>
user2825632
250,232
<p>You are multiplying values, so you should probably use the product notation:</p> <p>$$\prod_{m=1}^{20}2m = 2 \times 4 \times 6 ... \times 40$$</p> <p>When you have something like $\prod_{m=2}^{40}m$ as in your example, this actually represents the product $2 \times 3 \times 4 ... \times 39 \times 40$ - it includes the odd numbers too, since $m$ increases by $1$ each time. To increase it by $2$, use $(2m)$ in the product instead of just $m$.</p>
1,983,745
<p>I need to choose whether this is a product notation or a summation. I can't figure out which one it is.</p> <p>I have this expression:</p> <p>$$2 \times 4 \times 6 \times 8 \times 10 \ldots \times 40$$</p> <p>The answer is either:</p> <blockquote> <p>$$\sum_{m=2}^{40} m$$</p> </blockquote> <p><strong>or</strong> </p> <blockquote> <p>$$\prod_{m=2}^{40} m$$</p> </blockquote>
Hypergeometricx
168,053
<p>Can also be expressed as $$2^{20} 20!$$</p>
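<p>A quick check that this closed form agrees with the product $2\times4\times\cdots\times40$:</p>

```python
import math

product = 1
for m in range(1, 21):
    product *= 2 * m          # 2 x 4 x 6 x ... x 40
```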
3,197,540
<p>Let a function be defined as:</p> <p><span class="math-container">$ f(x)=x^2\sin{\left(\frac 1x\right)}$</span> for <span class="math-container">$x \neq 0$</span> and <span class="math-container">$ f(x)=0$</span> for <span class="math-container">$x=0$</span></p> <p>I'm trying to prove that f is differentiable at 0 using the definition of derivative. However in the process of doing this I was stopped by this limit:</p> <p><span class="math-container">$$ \lim_{h \to 0} \frac{\sin\left({\frac{1}{x+h}}\right)-\sin\left({\frac{1}{x}}\right)}{h} $$</span></p> <p>Is it possible to solve this limit question without using l'Hopital's rule?</p>
DINEDINE
506,164
<p>Hint:</p> <p><span class="math-container">$$\left|\frac{x^2\sin\left(\frac1x\right)-0}{x-0}\right|\le |x|$$</span></p>
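<p>Numerically, the difference quotient at $0$ is indeed squeezed by $|x|$ (the step sizes below are illustrative):</p>

```python
import math

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# difference quotient (f(h) - f(0)) / h = h * sin(1/h)
quotients = [(h, (f(h) - f(0)) / h) for h in (1e-2, -1e-2, 1e-4, -1e-4, 1e-6)]
squeezed = all(abs(q) <= abs(h) for h, q in quotients)
```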
3,027,528
<p>I am trying to resolve an exercise and there are 2 point that are missing in order to finalize:</p> <p>Suppose <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, and <span class="math-container">$P$</span> are <span class="math-container">$R$</span>-modules, and <span class="math-container">$f:A \rightarrow B$</span> and <span class="math-container">$g:B\rightarrow C$</span> are both <span class="math-container">$R$</span>-module morphisms.</p> <p>1) <span class="math-container">$\forall \phi : C \rightarrow P$</span> morphism, if <span class="math-container">$\phi \circ g = 0 \Rightarrow \phi = 0$</span>, for a morphism <span class="math-container">$\phi : C \rightarrow P$</span>, does this imply that <span class="math-container">$g$</span> is surjective? Why?</p> <p>2) If <span class="math-container">$\phi \circ g \circ f = 0$</span> <span class="math-container">$ \forall \phi : C \rightarrow P$</span> morphism does this mean that <span class="math-container">$g \circ f = 0$</span>? Why?</p>
zipirovich
127,842
<p>1) No, not necessarily. Here's a counterexample. Let <span class="math-container">$B=\mathbb{Z}$</span> and <span class="math-container">$C=P=\mathbb{Q}$</span> as <span class="math-container">$R=\mathbb{Z}$</span>-modules. Further, let <span class="math-container">$g:B\to C$</span>, i.e. <span class="math-container">$g:\mathbb{Z}\to\mathbb{Q}$</span>, be the inclusion map <span class="math-container">$g(n)=n$</span>. Then for any <span class="math-container">$\varphi:C\to P$</span>, i.e. for any <span class="math-container">$\varphi:\mathbb{Q}\to\mathbb{Q}$</span>, <span class="math-container">$\varphi\circ g=0$</span> implies <span class="math-container">$\varphi=0$</span> (basically, because <span class="math-container">$\varphi\circ g=0$</span> implies <span class="math-container">$\varphi(1)=0$</span> implies <span class="math-container">$\varphi=0$</span>). And yet, <span class="math-container">$g$</span> is not surjective.</p> <p>2) No, not necessarily. Here's a counterexample. Let <span class="math-container">$A=B=C=\mathbb{Z}$</span> and <span class="math-container">$P=\mathbb{Z}_2$</span> as <span class="math-container">$R=\mathbb{Z}$</span>-modules. Further, let <span class="math-container">$f:\mathbb{Z}\to\mathbb{Z}$</span> be the identity map <span class="math-container">$f(n)=n$</span> and <span class="math-container">$g:\mathbb{Z}\to\mathbb{Z}$</span> be the multiplication-by-two map <span class="math-container">$g(n)=2n$</span>. Then for any <span class="math-container">$\varphi:\mathbb{Z}\to\mathbb{Z}_2$</span>, we have <span class="math-container">$\varphi\circ g\circ f=0$</span>, even though <span class="math-container">$g\circ f\neq0$</span>.</p>
111,425
<p>If $R$ is a unital integral ring, then its characteristic is either $0$ or prime. If $R$ is a ring without unit, then the char of $R$ is defined to be the smallest positive integer $p$ s.t. $ pa = 0 $ for some nonzero element $a \in R$. I am not sure how to prove that the characteristic of an integral domain without a unit is still either $0$ or a prime $p$. I know that if $p$ is the char of $R$, then $px = 0 $ for all $x \in R$. If we assume $ p \neq 0 $ and $R$ has nonzero char, and $p$ factors into $nm$, then $ (nm) a = 0 $ , which means $ n (ma) = 0 $. Well $ma \neq 0$, because this would contradict the minimality of $p$ on $a$. But I don't know where to go from this point w/o invoking a unit. </p> <p>Edit: I had left out the assumption that $R$ is assumed to be a integral domain. This has been corrected. </p>
Patrick Da Silva
10,704
<p>You don't need to invoke units. As your proof stated, suppose $p = nm$ with $1 &lt; n, m &lt; p$ and $(nm)a = 0$ for some non-zero $a \in R$. Since $p = nm$ is the <em>least</em> positive integer with $pa = 0$, we get $na \neq 0$ and $ma \neq 0$. But then $$ 0 = 0a = ((nm)a)a = (nm)a^2 = (na)(ma), $$ which is a contradiction, because $na \neq 0 \neq ma$ and $R$ is an integral domain, so $(na)(ma) \neq 0$.</p> <p>Hope that helps,</p>
656,531
<p>The definition of a partial derivative is the "derivative of a multi-variable function relative to a single variable when all other variables are held constant".</p> <p>But isn't the regular derivative (for one-variable functions) just a trivial case of this, where there are no other variables to hold constant? Why do we need the separate notation for partial derivatives (that is, writing $\displaystyle \frac{\partial f}{\partial x}$ rather than just $\displaystyle \frac{df}{dx}$)?</p>
Lost
100,183
<p>Because there exists a notion of "total derivative" that is the multivariate analogue of the 1-D derivative you're familiar with. The total derivative of a function $f: \mathbb{R}^n \to \mathbb{R}^m$ is known as the Jacobian of $f$. You may have worked with this while doing change of variables with multiple integrals. See <a href="http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant</a>. When we say $f$ is differentiable at a point $a$, we mean that its Jacobian exists at $a$. There is a theorem that says that if all of the partial derivatives of $f$ exist and are continuous at $a$ then $f$ is differentiable at $a$, but the converse isn't true. See <a href="https://math.stackexchange.com/questions/44355/can-being-differentiable-imply-having-continuous-partial-derivatives">Can &quot;being differentiable&quot; imply &quot;having continuous partial derivatives&quot;?</a>.</p> <p>Furthermore, the total derivative defines a linear map and is used as a linear approximation of $f$, via Taylor's Theorem (<a href="http://en.wikipedia.org/wiki/Taylor_series#Taylor_series_in_several_variables" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Taylor_series#Taylor_series_in_several_variables</a>), as well as in the implicit function theorem (<a href="http://en.wikipedia.org/wiki/Implicit_function_theorem" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Implicit_function_theorem</a>) among others.</p> <p>Now, the notion of the total derivative of $f: \mathbb{R}^n \to \mathbb{R}^m$ can be extended to functions $f: \Omega \subset V \to W$ where $V$ and $W$ are normed vector spaces and $\Omega$ is open (probably further, but I haven't gotten there yet).</p> <blockquote> <p>Definition: We say that $f: \Omega \subset V \to W$ is <em>differentiable at $a \in \Omega$</em> if there exists a <em>bounded linear operator</em> $L_f[h]$ such that:</p> <blockquote> <p>$f(a+h) = f(a) + L_f[h] + E[h]$</p> <p>where $\lim_{h \to 0} \frac{|E[h]|_W}{|h|_V} = 0$, i.e. the error term vanishes as $h$ approaches zero (think about the definition of the derivative in $\mathbb{R}$)</p> </blockquote> <p>An aside: bounded linear operator here means that $|L_f(x) - L_f(y)|_W \leq C|x - y|_V$ for $x, y \in \Omega$ and a constant $C$.</p> </blockquote>
656,531
<p>The definition of a partial derivative is the "derivative of a multi-variable function relative to a single variable when all other variables are held constant".</p> <p>But isn't the regular derivative (for one-variable functions) just a trivial case of this, where there are no other variables to hold constant? Why do we need the separate notation for partial derivatives (that is, writing $\displaystyle \frac{\partial f}{\partial x}$ rather than just $\displaystyle \frac{df}{dx}$)?</p>
Martín-Blas Pérez Pinilla
98,199
<p>The total derivative/differential of Fréchet (see Lost's answer) is the right generalization for several reasons. Two rather big ones:</p> <p>(1) existence of partial derivatives does not imply continuity;</p> <p>(2) the failure of the chain rule for partial derivatives.</p>
2,425,157
<p>How do I show that $$ \frac 12 \left(\frac 1 {3^2}+\frac 1{4^2}+ \frac 1{5^2}+\dots\right) &lt; \frac 1 {3^2} + \frac 1{5^2} + \frac1{7^2} +\dots \quad ?$$</p>
Robert Z
299,698
<p>After moving the odd terms from the LHS to the RHS, we obtain the following equivalent inequality, $$\frac 12 \left(\frac 1{4^2}+ \frac 1{6^2}+ \frac 1{8^2}+\dots\right) &lt; \left(1-\frac 12\right)\left( \frac 1 {3^2} + \frac 1{5^2} + \frac1{7^2} +\dots\right).$$ Then note that for each integer $n\ge 2$, the term $\dfrac{1}{(2n)^2}$ is less than $\dfrac{1}{(2n-1)^2}$</p>
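<p>A partial-sum check of the rearranged inequality (truncated at finitely many terms; a sketch):</p>

```python
N = 200000
# 1/2 (1/4^2 + 1/6^2 + ...)  vs  1/2 (1/3^2 + 1/5^2 + ...)
even_part = 0.5 * sum(1.0 / (2 * n) ** 2 for n in range(2, N))
odd_part = 0.5 * sum(1.0 / (2 * n - 1) ** 2 for n in range(2, N))
```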
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
CiaPan
152,299
<p>Map any strictly increasing sequence $(a_n)$ to a sequence $(b_n)$ of its increments modulo $2$: $$\{0,1\}\ni b_n \equiv a_{n+1}-a_n \pmod 2$$ This is, of course, not bijective or even injective, but it is surjective mapping, hence the cardinality of the set of $a$ sequences is not less than that of $b$ sequences.</p> <p>And the latter is known to be strictly greater than $|\mathbb N|$ because $b$ are binary sequences, which are one-to-one representation of $2^\mathbb N$. They can also be bijectively mapped onto $\mathbb R$.</p>
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
orangeskid
168,051
<p>For $\alpha &gt; 1$ consider the strictly increasing sequence $f_{\alpha}(n)= [n \alpha]$</p> <p>The map $\alpha \mapsto f_{\alpha}(\cdot)$ is injective, since $\lim_{n\to \infty} \frac{[n\alpha]}{n} = \alpha$.</p>
3,231,387
<p>I have been given the following quadratic equation and am asked to find the range of its roots <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, where <span class="math-container">$\alpha&gt;\beta$</span> <span class="math-container">$$(k+1)x^2 - (20k+14)x + 91k +40 =0,$$</span> where <span class="math-container">$k&gt;0$</span> .<br><br> Here's my approach. <br><br> I applied the quadratic formula for the roots and got <span class="math-container">$$\alpha=\frac{(10k+7) +3\sqrt{k^2+k+1}}{k+1}$$</span> Similarly <span class="math-container">$$\beta=\frac{(10k+7)-3\sqrt{k^2+k+1}}{k+1}$$</span> But how do I find the range? Please help.</p>
Maverick
171,392
<p>Your equation can be re-written as <span class="math-container">$$f(x)=(x-4)(x-10)+k(x-7)(x-13)$$</span> So it can be seen that</p> <p><span class="math-container">$f(4)=27k$</span></p> <p><span class="math-container">$f(7)=-9$</span></p> <p><span class="math-container">$f(10)=-9k$</span></p> <p><span class="math-container">$f(13)=27$</span></p> <p>Since <span class="math-container">$f(7)$</span> and <span class="math-container">$f(13)$</span> are of opposite signs, clearly one root must lie in the interval <span class="math-container">$(7,13)$</span>. </p> <p>Therefore we can say that another real root must also exist, since in a quadratic equation with real coefficients complex/imaginary roots can only occur in conjugate pairs.</p> <p>Hence the equation will have real roots for all real values of <span class="math-container">$k$</span></p>
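<p>These four evaluations can be spot-checked against the original quadratic for a few values of $k&gt;0$ (a sketch):</p>

```python
def f(x, k):
    # the original quadratic, expanded
    return (k + 1) * x**2 - (20 * k + 14) * x + (91 * k + 40)

values_ok = all(
    abs(f(4, k) - 27 * k) < 1e-9
    and abs(f(7, k) + 9) < 1e-9
    and abs(f(10, k) + 9 * k) < 1e-9
    and abs(f(13, k) - 27) < 1e-9
    for k in (0.1, 1.0, 5.0)
)
```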
3,655,545
<p>What is the asymptotic behaviour of the difference <span class="math-container">$$ c_j - c_{j+1} $$</span> for <span class="math-container">$j\rightarrow \infty$</span> if <span class="math-container">$(c_j)_{j\in\mathbb{N}}$</span> is a null sequence?</p>
Aladin
707,258
<p>Yes, the difference tends to <span class="math-container">$0$</span>: since <span class="math-container">$\lim_{n \to \infty}c_n = 0$</span> and hence also <span class="math-container">$\lim_{n \to \infty}c_{n+1} = 0$</span>, we get <span class="math-container">$\lim_{n \to \infty}(c_n - c_{n+1}) = \lim_{n \to \infty}c_n - \lim_{n \to \infty}c_{n+1}= 0-0=0$</span>. </p> <p>Or by definition: let <span class="math-container">$\epsilon &gt; 0$</span>; then there exists <span class="math-container">$N \in \mathbb{N}$</span> such that for all <span class="math-container">$n &gt; N$</span> we have <span class="math-container">$|c_n| &lt; \frac{\epsilon}{2} $</span> and <span class="math-container">$|c_{n+1}| &lt; \frac{\epsilon}{2} $</span>. </p> <p>We want to show <span class="math-container">$|c_n - c_{n+1}| &lt; \epsilon$</span>, and indeed by the triangle inequality:</p> <p><span class="math-container">$|c_n - c_{n+1}| \leq |c_n| + |c_{n+1}| &lt; \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$</span></p>
10,873
<p>We have two tags with identical names on main and meta:</p> <p><a href="https://math.stackexchange.com/questions/tagged/computer-science">(main:computer-science)</a> and <a href="https://math.meta.stackexchange.com/questions/tagged/computer-science">(meta:computer-science)</a></p> <p>For main tags one can use <code>[tag:computer-science]</code> , i.e. <a href="https://math.stackexchange.com/questions/tagged/computer-science" class="post-tag" title="show questions tagged &#39;computer-science&#39;" rel="tag">computer-science</a>, but it's not possible to address meta tags in the same way. Please tell me if I'm wrong.</p>
Community
-1
<p>You can address meta tags like <code>[meta-tag:computer-science]</code>: <a href="/questions/tagged/computer-science" class="post-tag" title="show questions tagged 'computer-science'" rel="tag">computer-science</a></p>
3,950,098
<p>I can evaluate the limit with L'Hospital's rule:</p> <p><span class="math-container">$\lim_{n\to\infty}n(\sqrt[n]{4}-1)=\lim_{n\to\infty}\cfrac{(4^{\frac1n}-1)}{\dfrac1n}=\lim_{n\to\infty}\cfrac{\dfrac{-1}{n^2}\times 4^{\frac1n}\times\ln4}{\dfrac{-1}{n^2}}=\ln4$</span></p> <p>But is there any way to do it without using L'Hospital's rule?</p>
Dark Malthorp
532,432
<p>Here's a trick to prove convergence of the continuous limit<span class="math-container">$$ \lim\limits_{x\rightarrow\infty} x \left(\sqrt[x]4 - 1\right) $$</span> if you know also know how to integrate <span class="math-container">$2^x$</span>. Observe:<span class="math-container">\begin{eqnarray} x \left(\sqrt[x]4 - 1\right) = x \left(\sqrt[x]2 - 1\right)\left(\sqrt[x]2 + 1\right) = 2x\left(\sqrt[2x]4 - 1\right)\frac{\left(\sqrt[x]2 + 1\right)}{2} \end{eqnarray}</span> Thus we get <span class="math-container">$$ x\left(\sqrt[x]4 - 1\right) = \frac{\sqrt[x]2 + 1}2 2x\left(\sqrt[2x]4-1\right) \frac{\sqrt[x]2 + 1}2 \frac{\sqrt[2x]2 + 1}{2} 4x\left(\sqrt[4x]4-1\right) = \left(\prod_{k=0}^{n-1} \frac{\sqrt[2^kx]2+1}{2}\right) 2^n x \left(\sqrt[2^n x]4 -1\right) $$</span> Note that the multiplicands are always bigger than <span class="math-container">$1$</span>, so this implies that <span class="math-container">$2^n x \left(\sqrt[2^nx]4 - 1\right)$</span> is decreasing in <span class="math-container">$n$</span>, so the limit exists for each <span class="math-container">$x$</span>. It's also pretty clear that the function <span class="math-container">$x\to x\left(\sqrt[x]4-1\right)$</span> doesn't have any oscillations in the limit, hence we have that the continuous limit converges, so we just have to compute it for one value of <span class="math-container">$x$</span>, say <span class="math-container">$x=1$</span>. 
Thus we have <span class="math-container">$$ \lim\limits_{z\rightarrow\infty} z \left(\sqrt[z]4-1\right) = \lim\limits_{n\rightarrow\infty} 2^n \left(\sqrt[2^n]4-1\right) = \lim\limits_{n\rightarrow\infty} \frac{\sqrt[1]4-1}{\prod\limits_{k=0}^{n-1} \frac{\sqrt[2^k]2+1}{2}} = \frac2 {\prod\limits_{k=1}^\infty \frac{2^{1/2^k} + 1}{2}} $$</span> Expanding the partial products, we can see that this product actually isn't hard to evaluate:<span class="math-container">\begin{eqnarray} \prod_{k=1}^1 \frac{2^{\frac1{2^k}} + 1}{2} &amp;=&amp; \frac{2^\frac12 + 1}2\\ \prod_{k=1}^2 \frac{2^{\frac1{2^k}} + 1}{2} &amp;=&amp; \frac{2^\frac14 + 2^\frac12 + 2^\frac34 + 1}4\\ &amp;\vdots&amp;\\ \prod_{k=1}^N \frac{2^{\frac1{2^k}} + 1}{2} &amp;=&amp; \frac{2^\frac1{2^N} + 2^\frac{2}{2^N} + \cdots + 2^\frac{2^N-1}{2^N} + 2^\frac{2^N}{2^N}}{2^N} \end{eqnarray}</span> The RHS is a right Riemann sum for <span class="math-container">$\int_0^1 2^t dt$</span> splitting the interval into <span class="math-container">$2^N$</span> subintervals. Thus we get <span class="math-container">$$ \prod_{k=1}^\infty \frac{2^{\frac1{2^k}} + 1}{2} = \int_0^1 2^t dt = \frac{2-1}{\ln 2} = \frac1{\ln 2} $$</span> Hence we conclude <span class="math-container">$$ \lim\limits_{z\rightarrow\infty} z\left(\sqrt[z]4-1\right) = \frac{2}{1/\ln 2} = 2\ln 2 = \ln 4 $$</span></p>
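<p>Both the limit and the infinite-product evaluation can be checked numerically (a sketch):</p>

```python
import math

# n(4^{1/n} - 1) -> ln 4, evaluated at n = 10^6
limit_approx = 10**6 * (4 ** (1.0 / 10**6) - 1)

# the product (2^{1/2^k} + 1)/2 over k >= 1 converges to 1/ln 2
prod = 1.0
for k in range(1, 60):
    prod *= (2 ** (1.0 / 2**k) + 1.0) / 2.0
```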
2,995,643
<p>Here is a thought experiment I have. </p> <p>Say we flip a unique coin where we have a 99.99999999999% chance of it landing on heads, and a .000000000001% chance of it landing on tails (the two possibilities equal to 100%).</p> <p>And say we have an <em>infinite</em> number of coins flipped all at once (and only one time).</p> <p>Is it possible that none of the trials will experience the coin land on tails?</p>
Théophile
26,091
<p>The probability of getting only heads on <span class="math-container">$n$</span> flips is <span class="math-container">$0.9999999999999^n$</span>. While this is close to <span class="math-container">$1$</span> for small <span class="math-container">$n$</span>, eventually it decreases to <span class="math-container">$0$</span>, i.e., <span class="math-container">$$\lim_{n\to\infty}0.9999999999999^n=0$$</span></p> <p>So while it is technically <em>possible</em> for all the coins to land heads, the probability is <span class="math-container">$0$</span>, so effectively it will never happen.</p>
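<p>Numerically (the exponents below are illustrative):</p>

```python
import math

p = 0.9999999999999            # P(heads) = 1 - 10^-13 for a single coin
# P(all heads in n flips) = p**n = exp(n * ln p), roughly exp(-n * 1e-13)
after_1e13 = p ** 10**13       # still about e^-1
after_1e16 = p ** 10**16       # about e^-1000: underflows to 0.0 in floats
```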
3,182,532
<p>I am confused about converting a <strong>Probability Density Function</strong> from <strong>Polar coordinates</strong> to <strong>Cartesian coordinates</strong>. </p> <p>Here is an example:</p> <p>In Polar coordinates, we can have a <strong>Gaussian probability function</strong>:</p> <p><strong><span class="math-container">$P(r,\theta)=Ae^{-r^2/2\sigma^2}$</span></strong> according to the transformation: <span class="math-container">$r^2=x^2+y^2 \textrm{ and } \theta=\tan^{-1}(y/x)$</span>.</p> <p>This function in Cartesian coordinates should also be a Gaussian function:</p> <p><strong><span class="math-container">$P(x,y)=Ae^{-(x^2+y^2)/2\sigma^2}$</span></strong></p> <p><strong>But</strong> somebody told me that in this transformation, I should multiply by the absolute value of the Jacobian determinate in order to have:</p> <p><strong><span class="math-container">$P(x,y)=Ae^{-(x^2+y^2)/2\sigma^2}/\sqrt{x^2+y^2}$</span></strong></p> <p>And the result is not Gaussian anymore!</p> <p>Could someone tell me which one is correct and also the reason, please?</p>
mjw
655,367
<p><strong>UPDATE</strong></p> <p>If your function in polar coordinates is a circularly symmetric Gaussian centered at the origin, then it could be written <span class="math-container">$P_{r\, \theta}(r,\theta)=A\,r\,e^{-r^2/2\sigma^2}$</span> and you can obtain <span class="math-container">$A$</span> from </p> <p><span class="math-container">$$\int_0^{2\pi} \int_0^\infty P_{r\, \theta}(r,\theta) \,dr\, d\theta = 1.$$</span></p> <p>Integration yields: <span class="math-container">$\displaystyle A = \frac{1}{2 \pi \sigma^2}$</span>.</p> <p>The Jacobian of the transformation is <span class="math-container">$$J(x,y)=\begin{vmatrix}\cos \theta&amp; -r \sin \theta \\ \sin \theta &amp;r\cos \theta \end{vmatrix}={r}. $$</span> so that (see text referenced in the comments below)</p> <p><span class="math-container">$$P_{x\, y}(x,y)= \frac{P_{r,\,\theta}(r,\theta)}{|J(x,y)|}=\frac{1}{r}P_{r,\, \theta}\left(\sqrt{x^2+y^2},\tan^{-1} \frac{y}{x}\right).$$</span></p> <p>Thus <span class="math-container">$$\displaystyle P_{x\, y}(x,y) = \frac{1}{2\pi \sigma^2} e^\frac{-(x^2 + y^2)}{2\sigma^2}.$$</span> </p>
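<p>The normalization <span class="math-container">$A=1/(2\pi\sigma^2)$</span> can be verified by numerical integration of the total probability mass (a sketch; the value of <span class="math-container">$\sigma$</span> is illustrative):</p>

```python
import math

sigma = 1.3
A = 1.0 / (2.0 * math.pi * sigma**2)

# total mass = (integral over theta in [0, 2pi]) x (radial integral of
# A * r * exp(-r^2 / 2 sigma^2)), radial part by a midpoint rule
n, rmax = 100000, 12.0 * sigma
h = rmax / n
radial = 0.0
for i in range(n):
    r = (i + 0.5) * h
    radial += r * math.exp(-r * r / (2.0 * sigma**2)) * h
total_mass = 2.0 * math.pi * A * radial
```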
366,687
<p>I am interested in the status of the conjecture about the minimum number of edge crossings <span class="math-container">$cr(K_{m,n})$</span> in a drawing of the complete bipartite graph <span class="math-container">$K_{m,n}$</span>.</p> <p>The Wikipedia article <a href="https://en.wikipedia.org/wiki/Tur%C3%A1n%27s_brick_factory_problem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Tur%C3%A1n%27s_brick_factory_problem</a> led me to study the original papers of Zarankiewicz (<em>On a problem of P. Turan concerning graphs</em>) from 1954 and of Urbanik (<em>Solution du problème posé par P. Turán</em>) from 1955.</p> <p>I wondered whether someone could tell me whether an asymptotic approach has been successfully attempted (letting <span class="math-container">$n\to\infty$</span>). If so, I would be very interested in any references for that.</p>
John Machacek
51,668
<p>The Electronic Journal of Combinatorics has many <a href="https://www.combinatorics.org/ojs/index.php/eljc/issue/view/Surveys" rel="noreferrer">Dynamic Surveys</a>, one of which is <a href="https://www.combinatorics.org/ojs/index.php/eljc/article/view/DS21/pdf" rel="noreferrer">The Graph Crossing Number and its Variants: A Survey</a> by Schaefer, which first appeared in 2013 and has been updated as recently as Feb 14, 2020. From the bottom of page 40 onto page 41 you will find this conjecture for complete bipartite graphs discussed (with many references). As far as I can tell from the survey the conjecture is open (both for exact values and asymptotically).</p> <p>One paper you might be interested in is <a href="https://doi.org/10.1016/j.jctb.2012.11.001" rel="noreferrer">Zarankiewiczʼs Conjecture is finite for each fixed <span class="math-container">$m$</span></a> by Christian, Richter, and Salazar. This paper shows that for each <span class="math-container">$m$</span>, if the conjecture holds for all <span class="math-container">$n$</span> up to some very large <span class="math-container">$N(m)$</span> (which is an explicit value), then the conjecture is true for <span class="math-container">$K_{n,m}$</span> with any <span class="math-container">$n$</span>.</p> <p>This survey also references another survey <a href="https://link.springer.com/chapter/10.1007/978-3-319-31940-7_13" rel="noreferrer">Turán’s Brick Factory Problem: The Status of the Conjectures of Zarankiewicz and Hill</a> by Székely. (I haven't been able to access this survey, so I don't know exactly what's inside.)</p>
3,715,522
<p>I am trying to understand fully how drug half-life works. So I derived this relationship: </p> <p><span class="math-container">$$U_{r} = \frac{1+U_{r-1}}{2}$$</span> where <span class="math-container">$U_{0}=0$</span> and <span class="math-container">$r$</span> ranges over the natural numbers.</p> <p>My issue is how to deduce a relationship for the sum to infinity:</p> <p><span class="math-container">$$S_{\infty}=\lim_{n\to\infty} \sum_{r=1}^n U_{r}$$</span></p> <p>Consequently I need to get the relationship for <span class="math-container">$S_{\infty}$</span> if <span class="math-container">$U_{r} = \frac{A+U_{r-1}}{2}$</span> and <span class="math-container">$U_{0}=0$</span></p>
quasi
400,434
<p>Suppose <span class="math-container">$u_0,u_1,u_2,...$</span> are defined by <span class="math-container">$$ \left\lbrace \begin{align*} u_0\!&amp;=\,0\\[4pt] u_{n+1}\!&amp;=\frac{a+u_n}{2}\\[4pt] \end{align*} \right. $$</span> for some constant <span class="math-container">$a &gt; 0$</span>. <p> Then we have <span class="math-container">\begin{align*} u_1&amp;=\frac{a}{2}+\frac{1}{2}u_0\\[4pt] u_2&amp;=\frac{a}{2}+\frac{1}{2}u_1\\[4pt] u_3&amp;=\frac{a}{2}+\frac{1}{2}u_2\\[4pt] \!\!\!\!\!\vdots&amp;\phantom{=\frac{a}{2}}\;\;\,\vdots\\[4pt] u_{n}&amp;=\frac{a}{2}+\frac{1}{2}u_{n-1}\\[4pt] \end{align*}</span> Summing the above equations, we get <span class="math-container">$$\;\;\;\;S_n=\frac{a}{2}{\,\cdot\,}\,n+\frac{1}{2}S_{n-1}$$</span> but it's easily shown that for all <span class="math-container">$k$</span> we have <span class="math-container">$S_k\ge 0$</span>, so <span class="math-container">$$S_n\ge \frac{a}{2}{\,\cdot\,}\,n\qquad\;\;$$</span> hence <span class="math-container">${\displaystyle{\lim_{n\to\infty}}}S_n=\infty$</span>.</p>
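<p>The recurrence and the linear lower bound on <span class="math-container">$S_n$</span> can be checked numerically, here with <span class="math-container">$a=1$</span> as in the original question (a sketch):</p>

```python
a = 1.0
u, S = 0.0, 0.0
partial_sums = []
for n in range(1, 201):
    u = (a + u) / 2.0            # u_n = (a + u_{n-1}) / 2
    S += u
    partial_sums.append(S)

# u_n -> a, while S_n >= (a/2) * n, so the series diverges
lower_bound_ok = all(S_n >= (a / 2.0) * n
                     for n, S_n in enumerate(partial_sums, start=1))
```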
2,202,529
<p>I have difficulties understanding the relationship between the inverse of a number and the gcd.</p> <p>If I want to know if a specific <code>number modulo n</code> has an inverse, I check if the gcd between the number and the modulus is 1. Why?</p> <pre><code>a≡b(n) has an inverse only if gcd(a,n)=1 </code></pre> <p>I know that <code>gcd(a,n)</code> is the last nonzero remainder in the Euclidean algorithm; why does that prove that <code>a</code> has an inverse <code>modulo n</code>?</p>
edgar alonso
329,621
<p>It does not imply induction. One of your hypotheses is that $f(X)\subseteq X$, and when proceeding by induction this is exactly what you want to prove for $f(n)=n+1$, so that you can deduce $X=\mathbb{N}$.</p>
2,202,529
<p>I have difficulties understanding the relationship between the inverse of a number and the gcd.</p> <p>If I want to know if a specific <code>number modulo n</code> has an inverse, I check if the gcd between the number and the modulus is 1. Why?</p> <pre><code>a≡b(n) has an inverse only if gcd(a,n)=1 </code></pre> <p>I know that <code>gcd(a,n)</code> is the last nonzero remainder in the Euclidean algorithm; why does that prove that <code>a</code> has an inverse <code>modulo n</code>?</p>
EuYu
9,246
<p>Like Hayden pointed out in the comments, your theorem doesn't really have much to do with mathematical induction. Notice that the first hypothesis of your theorem says that $Y\backslash X \subseteq f(X)$. So the two hypotheses of your theorem combined says that $$Y\backslash X \subseteq f(X) \subseteq X.$$ Your theorem effectively says that if $X\subseteq Y$ and $Y\backslash X\subseteq X$, then $Y=X$. It has nothing to do with $f$ in particular. This is not a statement about induction or even mappings, but rather a statement about set inclusions.</p>
1,700,246
<p>Let $F=\mathbb{F}_{q}$, where $q$ is an odd prime power. Let $e,f,d$ be a standard basis for the $3$-dimensional orthogonal space $V$, i.e. $(e,e)=(f,f)=(e,d)=(f,d)=0$ and $(e,f)=(d,d)=1$. I have an element $g\in SO_{3}(q)$ defined by: $g: e\mapsto -e$, $f\mapsto \frac{1}{2}e -f +d$, $d\mapsto e+d$. I would like to determine the spinor norm of this element using Proposition 1.6.11 in the book 'The Maximal Subgroups of the Low-Dimensional Finite Classical Groups' by Bray, Holt and Roney-Dougal. </p> <p>The proposition is quite long to state, so it would be handy if someone who can help already has a copy of the book to refer to. If not, then please let me know and I can post what the proposition says. </p> <p>I have followed the proposition and have the matrices $$A:=I_{3}-g=\left( \begin{array}{ccc} 2 &amp; -\frac{1}{2} &amp; -1 \\ 0 &amp; 2 &amp; 0 \\ 0 &amp; -1 &amp; 0 \end{array} \right)$$ $$F= \textrm{matrix of invariant symmetric bilinear form =}\left( \begin{array}{ccc} 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{array} \right).$$</p> <p>If $k$ is the rank of $A$, the proposition says to let $B$ be a $k\times 3$ matrix whose rows form a basis of a complement of the nullspace of $A$. I have that $\ker A=&lt;(1,0,2)^{T}&gt;$. Now, by the way the proposition is stated, it seems as if it does not make a difference as to what complement is taken. However, I have tried 3 different complements and I get contradictory answers each time.</p> <p>Try 1) Orthogonal complement of $\ker A$, where $B=\left( \begin{array}{ccc} 1 &amp; 0 &amp; 0 \\ 0 &amp; -2 &amp; 1 \end{array} \right)$. This gives me $det(BAFB^{T})=-25$.</p> <p>Try 2) $B=\left( \begin{array}{ccc} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \end{array} \right)$. This gives me $det(BAFB^{T})=-4$.</p> <p>Try 3) $B=\left( \begin{array}{ccc} 0 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 \end{array} \right)$. 
This gives me $det(BAFB^{T})=0$.</p> <p>The problem is that whatever complement I take the determinants should all be non-zero squares at the same time, but this is not the case. </p> <p>I am not sure if I have misunderstood how to use the proposition. Any help will be appreciated. Thanks.</p>
mvw
86,776
<p>This would mean $$ X = \sqrt{x} &lt; r &lt; Y = \sqrt{y} $$ and $X, Y \in \mathbb{R}$ with $X \ne Y$. If I remember right, there is always at least one rational number between two different real numbers. (<a href="https://proofwiki.org/wiki/Between_two_Real_Numbers_exists_Rational_Number" rel="nofollow">Link</a>)</p>
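The density claim can even be made constructive. Here is a small sketch (my own construction, not from the linked proof): given reals $X \lt Y$, choose a denominator $n$ with $1/n \lt Y - X$; then the least integer $m$ exceeding $Xn$ satisfies $X \lt m/n \le X + 1/n \lt Y$.

```python
# Sketch (not from the linked proof): find a rational strictly between two
# distinct reals X < Y, subject to floating-point precision.
from fractions import Fraction
from math import floor, sqrt

def rational_between(X, Y):
    n = floor(1 / (Y - X)) + 1   # denominator with 1/n < Y - X
    m = floor(X * n) + 1         # least integer with m > X*n
    return Fraction(m, n)        # then X < m/n <= X + 1/n < Y

r = rational_between(sqrt(2), sqrt(3))
print(r)  # a rational strictly between sqrt(2) and sqrt(3)
```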
3,380,081
<p>Question: Suppose <span class="math-container">$n(S)$</span> is the number of subsets of <span class="math-container">$S$</span> and <span class="math-container">$|S|$</span> is the number of elements of <span class="math-container">$S$</span>. If <span class="math-container">$n(A)+n(B)+n(C)=n(A\cup B\cup C)$</span> and <span class="math-container">$|A|=|B|=100$</span>, find the minimum value of <span class="math-container">$|A\cap B\cap C|$</span>.</p> <p>Now, I realise PIE is the only way to go, but I don't know how to handle the intersections of the sets taken two at a time. Also, I know that this is a duplicate, but the original one wasn't answered fully. I'd request you to kindly provide a solution before mercilessly closing it off.</p> <p>Plus, I'm not good at even very basic set theory. If you could recommend a short but good book on set theory, I'd be much obliged.</p> <p>Thank you all!</p>
Martund
609,343
<p>You can actually <strong>exactly</strong> evaluate the area, without integration. Observe that the required area is <span class="math-container">$$A=A_1-A_2-A_3$$</span> where <span class="math-container">$A_1$</span> is the area of the rectangle having vertices <span class="math-container">$(0,0), (0,2), (4,0), (4,2)$</span>, hence, <span class="math-container">$$A_1=4\times 2=8$$</span> <span class="math-container">$A_2$</span> is the area of the small rectangle having vertices <span class="math-container">$(0,0), (0,1), (1,0), (1,1)$</span>, hence, <span class="math-container">$$A_2=1\times 1=1$$</span> Now, <span class="math-container">$A_3$</span> is the area between the graph of <span class="math-container">$y=\sqrt{x}$</span> and the <span class="math-container">$y$</span>-axis from <span class="math-container">$y=1$</span> to <span class="math-container">$y=2$</span>. This is actually the area under the curve <span class="math-container">$y=x^2$</span> from <span class="math-container">$x=1$</span> to <span class="math-container">$x=2$</span>. This area can be evaluated as the limit of a sum. <span class="math-container">$$A_3=\lim_{n\to\infty}\frac{1}{n}\sum_{r=n+1}^{2n}\Big(\frac{r}{n}\Big)^2$$</span> <span class="math-container">$$=\lim_{n\to\infty}\frac{1}{n^3}\Big(\sum_{r=1}^{2n} r^2-\sum_{r=1}^{n} r^2\Big)$$</span> <span class="math-container">$$=\lim_{n\to\infty}\frac{1}{n^3}\Big(\frac{2n(2n+1)(4n+1)}{6}-\frac{n(n+1)(2n+1)}{6}\Big)$$</span> <span class="math-container">$$=\frac{2\times 2\times 4}{6}-\frac{1\times 1\times 2}{6}$$</span> <span class="math-container">$$=\frac{7}{3}$$</span> So, the required area is <span class="math-container">$$A=8-1-\frac{7}{3}=\frac{14}{3}$$</span></p>
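As a quick sanity check (my own addition, not part of the original answer), the Riemann sum for $A_3$ can be evaluated numerically for a large $n$ and compared against $7/3$, and the total area against $14/3$:

```python
# Sketch: approximate A_3 with the finite Riemann sum from the answer,
# then recompute the total area A = A_1 - A_2 - A_3.
n = 100_000
A3 = sum((r / n) ** 2 for r in range(n + 1, 2 * n + 1)) / n
A = 8 - 1 - A3
print(A3, A)  # close to 7/3 and 14/3
```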
165,853
<blockquote> <p>Schauder's conjecture: &quot;<em>Every continuous function, from a nonempty compact and convex set in a (Hausdorff) topological vector space into itself, has a fixed point.</em>&quot; [Problem 54 in The Scottish Book]</p> </blockquote> <p>I wonder whether this conjecture has been resolved. I know R. Cauty [Solution du problème de point fixe de Schauder, Fund. Math. 170 (2001) 231–246] proposed an answer, but apparently at the international conference &quot;Fixed Point Theory and its Applications&quot; in 2005, T. Dobrowolski remarked that there is a gap in the proof.</p>
Jochen Wengenroth
21,051
<p>In <em>Points fixes des applications compactes dans les espaces ULC</em>, published on the <a href="https://arxiv.org/abs/1010.2401" rel="nofollow noreferrer">arXiv</a> in 2010, Robert Cauty wrote</p> <p><em>il y a d’ailleurs une erreur dans la demonstration du lemme 3 de [2], qu’il n’y a plus de raison de corriger, vu la superiorite de la nouvelle approche</em></p> <p>(there is, by the way, an error in the proof of lemma 3 from [2], which there is no longer any reason to correct, in view of the superiority of the new approach)</p> <p>It seems thus that Cauty still (or again) claims that the Schauder conjecture is settled.</p> <p>[2] is: R. Cauty. Solution du problème de point fixe de Schauder. Fund. Math. 170, 2001, 231-246.</p> <hr /> <p>Edit (August 2016). I was quite surprised that apparently there is no version of the mentioned 2010 arXiv article published in an international journal. While searching the web I learned that Robert Cauty died in 2013.</p>
3,936,187
<blockquote> <p>Consider the differential equation <span class="math-container">$$(1+t)y''+2y=0$$</span> with the variable coefficient <span class="math-container">$(1+t)$</span>, with <span class="math-container">$t\in \mathbb{R}$</span>.</p> <p>Set <span class="math-container">$y(t)=\sum_{n=0}^{\infty}a_nt^n$</span>. What are the first 4 terms in the associated power series?</p> <p><a href="https://i.stack.imgur.com/yYwaI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yYwaI.png" alt="enter image description here" /></a></p> </blockquote> <h2>An Attempt</h2> <p>We write <span class="math-container">$$(1+t)y''=\sum_{n=2}^{\infty}a_nn(n-1)t^{n-2}+ \sum_{n=2}^{\infty}a_nn(n-1)t^{n-1}$$</span> <span class="math-container">$$2y=\sum_{n=0}^{\infty}2a_nt^{n} $$</span></p> <p>Combining these three sums, the equation becomes</p> <p><span class="math-container">$$\sum_{n=2}^{\infty}a_nn(n-1)t^{n-2}+ \sum_{n=2}^{\infty}a_nn(n-1)t^{n-1}+ \sum_{n=0}^{\infty}2a_nt^{n}=0$$</span></p> <p>We now want all the powers to be <span class="math-container">$t^n$</span>. <span class="math-container">$$\sum_{n=0}^{\infty}a_{n+2}(n+2)(n+1)t^{n}+ \sum_{n=1}^{\infty}a_{n+1}(n+1)nt^{n}+ \sum_{n=0}^{\infty}2a_nt^{n}=0 $$</span></p> <p>We finally want the indices of summation to all be the same. <span class="math-container">$$2a_0+2a_2+ \sum_{n=1}^{\infty}a_{n+2}(n+2)(n+1)t^{n}+ \sum_{n=1}^{\infty}a_{n+1}(n+1)nt^{n}+ \sum_{n=1}^{\infty}2a_nt^{n}=0$$</span></p> <p>Which simplifies to <span class="math-container">$$2a_0+2a_2+ \sum_{n=1}^{\infty}\big[a_{n+2}(n+2)(n+1)+a_{n+1}n(n+1)+2a_n \big]t^n=0$$</span></p> <p>Clearly, I'm not getting anywhere close to any of the answers given. I just can't find what I'm doing wrong. I have also tried with Maple, but that didn't give me any real result either.</p> <p>I hope someone can help me out.</p>
Alann Rosas
743,337
<p>If</p> <p><span class="math-container">$$y(t)=\sum_{n=0}^{\infty}c_n t^n$$</span></p> <p>then</p> <p><span class="math-container">$$y''(t)=\sum_{n=2}^{\infty}n(n-1)c_n t^{n-2}$$</span></p> <p>which is equivalent to</p> <p><span class="math-container">$$y''(t)=\sum_{n=0}^{\infty}(n+1)(n+2)c_{n+2}t^n$$</span></p> <p>so</p> <p><span class="math-container">\begin{align*} (1+t)y''(t) &amp;= y''(t)+ty''(t)\\ &amp;=\sum_{n=0}^{\infty}\left[(n+1)(n+2)c_{n+2}t^n\right]+t\sum_{n=0}^{\infty}(n+1)(n+2)c_{n+2}t^n\\ &amp;= \left(2c_2+2\cdot 3c_3 t+3\cdot 4c_4 t^2+\cdots\right)+t\left(2c_2+2\cdot 3c_3 t+3\cdot 4c_4 t^2+\cdots\right)\\ &amp;= \left(2c_2+2\cdot 3c_3 t+3\cdot 4c_4 t^2+\cdots\right)+\left(2c_2 t+2\cdot 3c_3 t^2+3\cdot 4c_4 t^3+\cdots\right)\\ &amp;= \left(1\cdot 2c_2+0\cdot 1c_1\right)+\left(2\cdot 3c_3+1\cdot 2c_2\right)t+\left(3\cdot 4c_4+2\cdot 3c_3\right)t^2+\cdots\\ &amp;= \sum_{n=0}^{\infty}\left[(n+1)(n+2)c_{n+2}+n(n+1)c_{n+1}\right]t^n \end{align*}</span></p> <p>It follows that</p> <p><span class="math-container">\begin{align*} 0 &amp;= (1+t)y''(t)+2y(t)\\ &amp;= \sum_{n=0}^{\infty}\left[(n+1)(n+2)c_{n+2}+n(n+1)c_{n+1}\right]t^n+\sum_{n=0}^{\infty}2c_n t^n\\ &amp;= \sum_{n=0}^{\infty}\left[(n+1)(n+2)c_{n+2}+n(n+1)c_{n+1}+2c_n\right]t^n \end{align*}</span></p> <p>and upon equating coefficients,</p> <p><span class="math-container">$$(n+1)(n+2)c_{n+2}+n(n+1)c_{n+1}+2c_n=0$$</span></p> <p>No initial conditions were given, so <span class="math-container">$c_0$</span> and <span class="math-container">$c_1$</span> can be chosen arbitrarily. The recurrence relation above then gives</p> <p><span class="math-container">\begin{align*} c_2 &amp;= -c_0\\ c_3 &amp;= \frac{c_0-c_1}{3}\\ &amp;\text{ }\vdots \end{align*}</span></p> <p>Thus,</p> <p><span class="math-container">$$y(t)=c_0+c_1 t-c_0t^2+\frac{c_0-c_1}{3}t^3+\cdots$$</span></p> <p>so (d) is the correct answer.</p> <p><strong>Edit</strong>: I’m apparently blind! 
While writing this answer, I didn’t notice that (d) is the exact same series I derived. In light of this observation (thank you Carl!), I’ve edited out the last part of my original response, where I claimed that (b) was the correct answer.</p>
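As a sanity check (my own sketch, not part of the original answer), the recurrence can be iterated with exact rational arithmetic to confirm $c_2=-c_0$ and $c_3=\frac{c_0-c_1}{3}$:

```python
# Sketch: generate series coefficients from the recurrence
#   (n+1)(n+2) c_{n+2} + n(n+1) c_{n+1} + 2 c_n = 0,
# with c_0 and c_1 chosen freely (no initial conditions were given).
from fractions import Fraction

def coeffs(c0, c1, count):
    c = [Fraction(c0), Fraction(c1)]
    for n in range(count - 2):
        c.append(-(n * (n + 1) * c[n + 1] + 2 * c[n]) / ((n + 1) * (n + 2)))
    return c

c = coeffs(1, 0, 4)  # values 1, 0, -1, 1/3: so c_2 = -c_0 and c_3 = (c_0 - c_1)/3
d = coeffs(0, 1, 4)  # values 0, 1, 0, -1/3
```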
1,641,137
<p>Let $(X,d)$ be a metric space, $a \in X$, and $\delta$ be a positive real number. Then the open ball $B(a;\delta)$ is defined as $$B(a;\delta) \colon= \left\{ \ x \in X \ \colon \ d(x,a) &lt; \delta \ \right\},$$ whereas the sphere $S(a; \delta)$ is defined as $$S(a;\delta) \colon= \left\{ \ x \in X \ \colon \ d(x,a) = \delta \ \right\},$$ Then the closure $\overline{B(a;\delta)}$ of $B(a;\delta)$ need not equal $B(a;\delta) \cup S(a;\delta)$. </p> <p>In particular, in the Euclidean space $\mathbb{R}^k$, this holds, whereas in a discrete metric space (with more than one point) this fails. Am I right? </p> <p>Now is (are) there any necessary and / or sufficient condition(s) on $(X,d)$ under which $$\overline{B(a;\delta)} = B(a;\delta) \cup S(a;\delta)?$$</p>
Thomas Andrews
7,933
<p>This is a global condition - that is, it is both necessary and sufficient to have your condition be true for all $x,\delta$.</p> <p>You need:</p> <blockquote> <p>(Condition 1): Given any $x\neq y$ and any $\epsilon&gt;0$, there is some $z$ so that $d(y,z)&lt;\epsilon$ and $d(x,z)&lt;d(x,y)$.</p> </blockquote> <p>That is, every neighborhood of $y\neq x$ has a point closer to $x$ than $y$ is.</p> <p>For example, the discrete space fails because for some $\epsilon&gt;0$ there is no $z\neq y$ with $d(y,z)&lt;\epsilon$.</p> <p>It's necessary because if you have a counter-example to my condition, with $x\neq y\in X$, define $r=d(x,y)$. Then $y$ is on the sphere of radius $r$ but, for some $\epsilon&gt;0$, $B_{\epsilon}(y)\cap B_r(x)=\emptyset$, so $y$ is not in the closure of $B_r(x)$. </p> <p>It is sufficient because if $y$ is on the sphere of radius $r$ around $x$, then $d(x,y)=r$. Now, for each $\epsilon_k=\frac{1}{k}$, find $z_k\in B_{\epsilon_k}(y)$ with $d(x,z_k)&lt;d(x,y)=r$. Then $z_k$ is a sequence in $B_r(x)$ which converges to $y$, so $y$ is in the closure of $B_r(x)$.</p> <p>This can be rewritten as:</p> <blockquote> <p>(Condition 2): If $x\in X$ and $U$ is an open set not containing $x$, then the function $U\to\mathbb R^{+}$ defined as $u\mapsto d(x,u)$ does not achieve its infimum in $U$.</p> </blockquote> <p>The relationship to condition (1) is more obvious, I suppose, if you rewrite condition 2 as:</p> <blockquote> <p>(Condition 1.5): Given open $U$ and $x\notin U$, then for any $y\in U$, there is a $z\in U$ so that $d(x,z)&lt;d(x,y)$. </p> </blockquote> <p>That's therefore clearly an extended version of Condition (1), applied to all open sets containing $y$, rather than just open balls around $y$.</p> <p><strong>Proof that Condition 1 and Condition 2 are equivalent</strong></p> <p><strong>Assume Condition (1).</strong></p> <p>Let $U\subseteq X$ be open and $x\notin U$. </p> <p>For any $y\in U$, pick $\epsilon&gt;0$ so that $B_{\epsilon}(y)\subseteq U$. 
This can be done because $U$ is open.</p> <p>But condition (1) means that there must be a $z\in B_{\epsilon}(y)\subseteq U$ so that $d(x,z)&lt;d(x,y)$. So $d(x,y)$ is not a lower bound for $\{d(x,u)\mid u\in U\}$, for any $y\in U$, proving condition $2$.</p> <p><strong>Assuming Condition (2):</strong></p> <p>Given $x\neq y\in X$. If $\epsilon&gt;0$ is chosen, define $U=B_{\epsilon}(y)$. </p> <p>If $x\in U$, then $d(x,x)=0&lt;d(x,y)$, so we can just choose $z=x$.</p> <p>If $x\notin U$, then, since $U$ is open, we know by condition (2) that $d(x,y) \neq \inf_{u\in U} d(x,u)$, so there must be a $z\in U=B_{\epsilon}(y)$ with $d(x,z)&lt;d(x,y)$.</p> <p>Thus we have Condition (1).</p>
670,292
<p>Could someone assist with the following three surface integrals? </p> <p><strong>Q1</strong> The portion of the cone $z=\sqrt{x^2+y^2}$ that lies inside the cylinder $x^2+y^2 =2x$. </p> <p><strong>Q2</strong> The portion of the paraboloid $z=1-x^2-y^2$ that lies above the $xy$-plane.</p> <p><strong>Q3</strong> The portion of the paraboloid $2z = x^2+y^2$ that is inside the cylinder $x^2+y^2=8$.</p> <p>Any assistance will be greatly appreciated.</p>
Robert Israel
8,508
<p>Hint: if $b_n = a_n/(1 + a_n)$, then $a_n = b_n/(1-b_n)$.</p> <p>By the way, the condition that $a_n$ is bounded is not needed here (it would be needed if you weren't told $\ell \ne 1$).</p>
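A quick numerical check of the hinted inversion (my own sketch, not part of the hint): the substitutions $b=\frac{a}{1+a}$ and $a=\frac{b}{1-b}$ undo each other for any $a\ge 0$.

```python
# Sketch: verify that b = a/(1+a) and a = b/(1-b) are inverse substitutions.
for a in [0.25, 1.0, 7.5, 100.0]:
    b = a / (1 + a)       # b lies in [0, 1) when a >= 0
    back = b / (1 - b)    # recovers a (up to floating-point error)
    print(a, b, back)
```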
1,762,001
<p>I recently watched a <a href="https://www.youtube.com/watch?v=SrU9YDoXE88" rel="noreferrer">video about different infinities</a>. That there is $\aleph_0$, then $\omega, \omega+1, \ldots 2\omega, \ldots, \omega^2, \ldots, \omega^\omega, \varepsilon_0, \aleph_1, \omega_1, \ldots, \omega_\omega$, etc..</p> <p>I can't find myself in all of this. Why there are so many infinities, and why even bother to classify infinity, when infinity is just... infinity? <strong>Why do we use all of these symbols? What does even each type of infinity mean?</strong></p>
Christian Gaetz
75,296
<p>I won't comment on your more philosophical questions, but I will give what I think is one of the more important applications of different sizes of infinity.</p> <p>There is a rigorous mathematical way of thinking about a computer program, called a Turing machine. One can show that the cardinality of the set of Turing machines is $\aleph_0$; however, the set of all possible problems you might want a computer program to solve is strictly bigger (the cardinality of $\mathbb{R}$). The very real application in this case is the conclusion that there are some problems which are not solvable by any computer program.</p>
3,568,230
<p>My question is: why, in general, can we not write down a formula for the <span class="math-container">$n$</span>-th term, <span class="math-container">$S_{n}$</span>, of the sequence of partial sums?</p> <p>I will explain better in what follows, but the question is basically the one above.</p> <p>Suppose then you have an <em>infinite sequence</em> in your pocket, <span class="math-container">$\{a_{0},a_{1},a_{2},...\}$</span>, or,</p> <p><span class="math-container">$$\{a_{0},a_{1},a_{2},...\} \equiv \{a_{n}\}_{n=0}^{\infty} \tag{1}$$</span></p> <p><span class="math-container">$(1)$</span> is a fundamental object because you can "sum up" all the terms of this particular sequence, like <span class="math-container">$a_{0}+a_{1}+a_{2}+\cdot \cdot \cdot$</span>, to define another object. Doing that, you construct the object called the <em>infinite series of the infinite sequence <span class="math-container">$\{a_{n}\}_{n=0}^{\infty}$</span></em>: </p> <p><span class="math-container">$$a_{0}+a_{1}+a_{2}+\cdot \cdot \cdot \equiv \sum^{\infty}_{n=0}a_{n} \tag{2}$$</span></p> <p>The next thing you might like to do is ask yourself whether an infinite series indeed has some value <span class="math-container">$s \in \mathbb{K}$</span> (<span class="math-container">$\mathbb{K}$</span> a field). 
The procedure to answer that question is then first to construct another infinite sequence called <em>the sequence of partial sums of the series</em>:</p> <p><span class="math-container">$$\{S_{0},S_{1},S_{2},S_{3},...,S_{k},...\} \equiv \{S_{n}\}_{n=0}^{\infty} \tag{3} $$</span></p> <p>Which is:</p> <p><span class="math-container">$$\begin{cases} S_{0} = \sum^{0}_{n=0}a_{n} = a_{0}\\S_{1} = \sum^{1}_{n=0}a_{n} = a_{0} + a_{1} \\ S_{2} = \sum^{2}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} \\ S_{3} = \sum^{3}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} + a_{3} \\\vdots\\ S_{k} = \sum^{k}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} + a_{3}+\cdot \cdot \cdot+a_{k}\\ \vdots \end{cases} $$</span></p> <p>and then calculate the limit of this sequence <span class="math-container">$(3)$</span>, like:</p> <p><span class="math-container">$$ \lim_{n\to \infty} \sum^{n}_{j=0}a_{j} \equiv \lim_{n\to \infty} S_{n} \tag{4} $$</span></p> <p>Now, if the limit <span class="math-container">$(4)$</span> has a value <span class="math-container">$s = L$</span>, then we can say that the <em>Sum of the Series</em> is that limit:</p> <p><span class="math-container">$$ \sum^{\infty}_{n=0}a_{n} = s \tag{5}$$</span></p> <p><span class="math-container">$$ * * * $$</span></p> <p>Now, if we do not have a proper expression for <span class="math-container">$S_{n} = \sum^{n}_{k=0}a_{k}$</span>, then the whole "direct limit calculus" does not work and we need other methods to find the value (more generally the convergence) of a series (e.g. the integral test). The thing is, I do not see (understand) why we cannot in general write down a formula for <span class="math-container">$S_{n}$</span>, while sometimes we can. 
For instance, I do not see why on the one hand we can write down a formula for the geometric series but on the other hand we cannot for the harmonic series; for me, the <span class="math-container">$S_{n}$</span> term of the harmonic series, to plug into the limit, is given by:</p> <p><span class="math-container">$$ S_{n} = \sum^{n}_{k=1}\frac{1}{k} = 1+ \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n} \equiv \Big( 1+ \frac{1}{2} + \frac{1}{3} + ... \Big) + \frac{1}{n} = $$</span></p> <p><span class="math-container">$$= C + \frac{1}{n} $$</span></p> <p><span class="math-container">$C$</span> a constant since it's a finite sum. Then,</p> <p><span class="math-container">$$\lim_{n\to \infty} C + \frac{1}{n} = C$$</span></p> <p>Then,</p> <p><span class="math-container">$$\sum^{\infty}_{n=1}\frac{1}{n} = C$$</span></p> <p>I know that what I wrote above isn't right, but I simply do not understand why. There's a subtle thing that I do not understand. Anyway, the question is posted above.</p> <p>Thank You. </p>
José Carlos Santos
446,262
<p>Yes, you are right: in general we cannot write down a formula for the <span class="math-container">$n$</span><sup>th</sup> partial sum of a sequence. Just as there is no simple closed expression for most primitives, such as <span class="math-container">$\int\frac1{\log(x)}\,\mathrm dx$</span>, <span class="math-container">$\int e^{x^2}\,\mathrm dx$</span>, and so on. And, in general, there is no simple closed expression for <span class="math-container">$\prod_{k=1}^na_k$</span>. There is nothing peculiar about series here.</p>
3,568,230
<p>My question is: why, in general, can we not write down a formula for the <span class="math-container">$n$</span>-th term, <span class="math-container">$S_{n}$</span>, of the sequence of partial sums?</p> <p>I will explain better in what follows, but the question is basically the one above.</p> <p>Suppose then you have an <em>infinite sequence</em> in your pocket, <span class="math-container">$\{a_{0},a_{1},a_{2},...\}$</span>, or,</p> <p><span class="math-container">$$\{a_{0},a_{1},a_{2},...\} \equiv \{a_{n}\}_{n=0}^{\infty} \tag{1}$$</span></p> <p><span class="math-container">$(1)$</span> is a fundamental object because you can "sum up" all the terms of this particular sequence, like <span class="math-container">$a_{0}+a_{1}+a_{2}+\cdot \cdot \cdot$</span>, to define another object. Doing that, you construct the object called the <em>infinite series of the infinite sequence <span class="math-container">$\{a_{n}\}_{n=0}^{\infty}$</span></em>: </p> <p><span class="math-container">$$a_{0}+a_{1}+a_{2}+\cdot \cdot \cdot \equiv \sum^{\infty}_{n=0}a_{n} \tag{2}$$</span></p> <p>The next thing you might like to do is ask yourself whether an infinite series indeed has some value <span class="math-container">$s \in \mathbb{K}$</span> (<span class="math-container">$\mathbb{K}$</span> a field). 
The procedure to answer that question is then first to construct another infinite sequence called <em>the sequence of partial sums of the series</em>:</p> <p><span class="math-container">$$\{S_{0},S_{1},S_{2},S_{3},...,S_{k},...\} \equiv \{S_{n}\}_{n=0}^{\infty} \tag{3} $$</span></p> <p>Which is:</p> <p><span class="math-container">$$\begin{cases} S_{0} = \sum^{0}_{n=0}a_{n} = a_{0}\\S_{1} = \sum^{1}_{n=0}a_{n} = a_{0} + a_{1} \\ S_{2} = \sum^{2}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} \\ S_{3} = \sum^{3}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} + a_{3} \\\vdots\\ S_{k} = \sum^{k}_{n=0}a_{n} = a_{0} + a_{1} + a_{2} + a_{3}+\cdot \cdot \cdot+a_{k}\\ \vdots \end{cases} $$</span></p> <p>and then calculate the limit of this sequence <span class="math-container">$(3)$</span>, like:</p> <p><span class="math-container">$$ \lim_{n\to \infty} \sum^{n}_{j=0}a_{j} \equiv \lim_{n\to \infty} S_{n} \tag{4} $$</span></p> <p>Now, if the limit <span class="math-container">$(4)$</span> has a value <span class="math-container">$s = L$</span>, then we can say that the <em>Sum of the Series</em> is that limit:</p> <p><span class="math-container">$$ \sum^{\infty}_{n=0}a_{n} = s \tag{5}$$</span></p> <p><span class="math-container">$$ * * * $$</span></p> <p>Now, if we do not have a proper expression for <span class="math-container">$S_{n} = \sum^{n}_{k=0}a_{k}$</span>, then the whole "direct limit calculus" does not work and we need other methods to find the value (more generally the convergence) of a series (e.g. the integral test). The thing is, I do not see (understand) why we cannot in general write down a formula for <span class="math-container">$S_{n}$</span>, while sometimes we can. 
For instance, I do not see why on the one hand we can write down a formula for the geometric series but on the other hand we cannot for the harmonic series; for me, the <span class="math-container">$S_{n}$</span> term of the harmonic series, to plug into the limit, is given by:</p> <p><span class="math-container">$$ S_{n} = \sum^{n}_{k=1}\frac{1}{k} = 1+ \frac{1}{2} + \frac{1}{3} + ... + \frac{1}{n} \equiv \Big( 1+ \frac{1}{2} + \frac{1}{3} + ... \Big) + \frac{1}{n} = $$</span></p> <p><span class="math-container">$$= C + \frac{1}{n} $$</span></p> <p><span class="math-container">$C$</span> a constant since it's a finite sum. Then,</p> <p><span class="math-container">$$\lim_{n\to \infty} C + \frac{1}{n} = C$$</span></p> <p>Then,</p> <p><span class="math-container">$$\sum^{\infty}_{n=1}\frac{1}{n} = C$$</span></p> <p>I know that what I wrote above isn't right, but I simply do not understand why. There's a subtle thing that I do not understand. Anyway, the question is posted above.</p> <p>Thank You. </p>
johnnyb
298,360
<p>So, the answer depends on what you are willing to consider as a solution. If I understand your question, you are asking why, if it is rational to think of a particular series, then it is also rational to think of its partial sums, and there should be no reason we cannot deduce a general formula for the partial sum from <span class="math-container">$k = 1$</span> to <span class="math-container">$n$</span> that doesn't include counting, since it is literally a function of <span class="math-container">$n$</span>. That is, the result is (a) fixed, and (b) a function of <span class="math-container">$n$</span>, <span class="math-container">$f(n)$</span>, so why can't there be a formula for it?</p> <p>The answer, in short, is based on what you consider to be a valid list of symbols in your function. For instance, let's say that we hadn't discovered exponentiation yet. In such a case, we would not be able to write the result of the geometric series as a formula, right? Therefore, the set of functions that we are able to write a formula for is actually a function of the operators that we know about (or are willing to include).</p> <p>Ultimately, any given function can be written as a formula, if you are willing to make the function itself a standard operator. So, if I have a series <span class="math-container">$S$</span> with a formula for the <span class="math-container">$k$</span>th term <span class="math-container">$g(k)$</span>, I can define a function <span class="math-container">$Q(n)$</span> to represent the partial sum to <span class="math-container">$n$</span>, such that <span class="math-container">$Q(n) = \sum\limits_{k = 1}^n g(k)$</span>. You might ask me how I am going to represent this as a formula. Well, to some extent, I already have! 
<span class="math-container">$Q(n)$</span> IS a representation of this as a formula, if I allow it into my list of allowable functions for representation.</p> <p>In short, we need to recognize that the functions that we know about, use, and allow in our formulas have some amount of arbitrariness to them, many are historically contingent on the functions that humans have found interesting. Therefore, the list of formulas that can be modeled using these sub-formulas will be likewise limited. You can always expand this list if you want to, and if you add in the right operators to your list you will be able to represent the function you are seeking, since you could always add the function itself as one of your allowed functions.</p>
180,169
<p>Can anyone give me suggestions for new books about Besicovitch's almost periodic functions? Thanks a lot. </p>
Vladimir S Matveev
14,515
<p>I add a small $\varepsilon$ to Robert's answer, namely a simple explanation and a simple example of what he said concerning the 2-dimensional case. A conformal structure of signature (1,1) on a surface is essentially the same as a pair of everywhere transversal<br> foliations. Well, up to a double cover, to be precise. </p> <p>Indeed, for such a metric, the light cone at every point consists of two straight lines in the tangent space intersecting at the origin. The foliations are given by the condition that they are tangent to these straight lines. </p> <p>It is known (see for example sect. 5.1 of <a href="http://lanl.arxiv.org/abs/1002.3934" rel="noreferrer">http://lanl.arxiv.org/abs/1002.3934</a>) that for globally flat metrics these foliations are somehow standard: moreover, any flat torus is a quotient of $(R^2, g=dxdy)$ by a lattice. It is also easy to construct foliations that are not ``standard''; say, we can easily construct a foliation that has a Reeb component. The conformal structure corresponding to this foliation is conformally flat, since in dimension 2 any metric is conformally flat, but it is not globally conformally flat. </p>
3,711,744
<p>Let <span class="math-container">$C_1\geq C_2\geq\dots\geq C_n$</span> be a fixed set of positive numbers. Maximize the linear function <span class="math-container">$L(x_1, x_2, \dots, x_n)=\sum^n_1C_jx_j$</span> on the closed set described by the inequalities <span class="math-container">$0\leq x_j\leq 1, \sum^n_1 x_j\leq A$</span>.<br> I think this question can be solved by the simplex method, but it appears in a calculus book, so can I solve it using some pure calculus method? I tried to find the critical points, but the gradient equals <span class="math-container">$[C_1, C_2, \dots, C_n]$</span>, which cannot equal <span class="math-container">$0$</span>.</p>
twosigma
780,083
<p>Here is a way to get two such maps if we are given a basis of a vector space <span class="math-container">$V$</span>.</p> <p>Let <span class="math-container">$v_1, …, v_k, v$</span> be a basis of <span class="math-container">$V$</span>. Then observe that <span class="math-container">$v_1, …, v_k, v + v_1$</span> is a basis for <span class="math-container">$V$</span>. Now define <span class="math-container">$U$</span> on the basis <span class="math-container">$v_1, …, v_k, v + v_1$</span> as follows: <span class="math-container">$v_i \mapsto 0$</span> for <span class="math-container">$1 \leq i \leq k$</span>, and <span class="math-container">$v + v_1 \mapsto v + v_1$</span>. Define <span class="math-container">$T$</span> on the basis <span class="math-container">$v_1, …, v_k, v$</span> as follows: <span class="math-container">$v_i \mapsto v_i$</span> for <span class="math-container">$1 \leq i \leq k$</span>, and <span class="math-container">$v \mapsto 0$</span>.</p> <p>Then <span class="math-container">$\ker U = \text{span}(v_1, …, v_k) = \text{im} \, T$</span>, so in particular <span class="math-container">$\text{im} \, T \subseteq \ker U$</span>, hence <span class="math-container">$UT = 0$</span>. </p> <p>On the other hand, <span class="math-container">$\{ v + v_1 \}$</span> is a basis for <span class="math-container">$\text{im} \, U$</span> and <span class="math-container">$\{ v \}$</span> is a basis for <span class="math-container">$\ker T$</span>, however <span class="math-container">$v + v_1$</span> is not in the span of <span class="math-container">$v$</span>, so there is a vector in <span class="math-container">$\text{im} \, U$</span> that isn't in <span class="math-container">$\ker T$</span>, hence <span class="math-container">$\text{im} \, U \not \subseteq \ker T$</span>. This means <span class="math-container">$TU \neq 0$</span>.</p>
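A concrete instance of this construction (my own sketch, not part of the original answer) takes $k=1$, $V=\mathbb{R}^2$, $v_1=e_1$, $v=e_2$. Then $T$ sends $e_1\mapsto e_1$, $e_2\mapsto 0$, and since $U(v)=U(v+v_1)-U(v_1)=v+v_1$, $U$ sends $e_1\mapsto 0$, $e_2\mapsto e_1+e_2$. Multiplying the matrices both ways confirms $UT=0$ while $TU\neq 0$:

```python
# Sketch: check UT = 0 and TU != 0 for the k = 1, V = R^2 instance,
# where T: e1 -> e1, e2 -> 0 and U: e1 -> 0, e2 -> e1 + e2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1, 0], [0, 0]]
U = [[0, 1], [0, 1]]

UT = matmul(U, T)  # [[0, 0], [0, 0]] -- the zero map
TU = matmul(T, U)  # [[0, 1], [0, 0]] -- nonzero
```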
1,102,709
<p>I'd like to know if there is an explicit atlas for the manifold $\mathbb{R}P^3$ which is defined as the quotient of the three-sphere by the antipodal mapping.</p> <p>Thanks.</p>
Per Erik Manne
33,572
<p>There is a version of Weierstrass' approximation theorem, due to Torsten Carleman (1927), which is valid for the interval $(-\infty,\infty)$, but it requires you to replace the approximating polynomial by a convergent power series:</p> <p>If $\epsilon : \bf R\rm \to (0,\infty)$ is a positive, continuous function (for instance, a positive constant), then for any continuous function $f :\bf R \rm \to \bf C\rm$ there is an <em>entire</em> function $h(z)$ such that $|h(t)-f(t)|&lt;\epsilon(t)$ for all $t\in\bf R\rm$. </p> <p>(That $h(z)$ is entire is equivalent to saying that there is a Taylor series expansion $h(z)=\sum_{n=0}^\infty a_n z^n$ which converges on all of $\bf C$.)</p>
2,973,314
<p>If we have to find the sum of $n$ terms of a G.P., we have two formulas for it: (1) <span class="math-container">$a(1-r^n)/(1-r)$</span> and (2) <span class="math-container">$a(r^n-1)/(r-1)$</span>. Now I know how (1) is derived, but I don't know about (2) (is it obtained by multiplying the numerator and denominator of (1) by $-1$?). I am also confused about when to use each one, and why there exist two formulas for the same objective. Please explain.</p>
Sam Streeter
487,113
<p>As you have mentioned in your post and as Dr. Sonnhard Graubner has mentioned in his answer, you can get one expression from the other by multiplying by <span class="math-container">$-1$</span> on the numerator and denominator. Which one you use is just a matter of preference. In particular, when working with geometric series where <span class="math-container">$r &gt; 1$</span>, I would guess that people prefer to work with (2), since <span class="math-container">$r^n - 1$</span> and <span class="math-container">$r-1$</span> are positive quantities, so the resulting fraction is more aesthetic, but, for the same reason, (1) will tend to be used when <span class="math-container">$r &lt; 1$</span>.</p>
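A quick numerical check (a Python sketch; the function names are mine) shows the two formulas agree with each other and with the term-by-term sum:

```python
# Compare the two closed forms for the sum of a geometric progression
# with the direct term-by-term sum. Both closed forms require r != 1.

def gp_sum_v1(a, r, n):
    # formula (1): a(1 - r^n) / (1 - r), often preferred when r < 1
    return a * (1 - r**n) / (1 - r)

def gp_sum_v2(a, r, n):
    # formula (2): a(r^n - 1) / (r - 1), often preferred when r > 1
    return a * (r**n - 1) / (r - 1)

def gp_sum_direct(a, r, n):
    # sum of a, ar, ar^2, ..., ar^(n-1)
    return sum(a * r**k for k in range(n))

for a, r, n in [(3, 0.5, 10), (3, 2, 10), (5, -1.5, 7)]:
    s1, s2, s3 = gp_sum_v1(a, r, n), gp_sum_v2(a, r, n), gp_sum_direct(a, r, n)
    assert abs(s1 - s2) < 1e-9 and abs(s1 - s3) < 1e-9
```

Both closed forms break down at $r=1$, where the sum is simply $na$.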
360,608
<p>In physics I came across these kind of equations when I am trying to find the asymptotic behaviour of some function.</p> <p>Can anyone explain if there is any sense in talking about $\sin(x)$ or $\cos(x)$ as $x$ tends to infinity?</p> <p>$$\lim_{x\rightarrow\infty}\;\sin(x)?$$</p>
Community
-1
<p>If we take $x_n=2\pi n$ and $x'_n=2\pi n+\frac{\pi}{2}$ then we have $$\lim_{n\to\infty}x_n=\lim_{n\to\infty}x'_n=+\infty$$ but $$\lim_{n\to\infty}\sin(x_n)=0\neq1=\lim_{n\to\infty}\sin(x'_n)$$ hence $\displaystyle\lim_{x\to\infty}\sin x$ does not exist.</p>
149,790
<p>I know that if $x$ is a rational multiple of $\pi$, then $tan(x)$ is <a href="http://divisbyzero.com/2010/10/28/trigonometric-functions-and-rational-multiples-of-pi/">algebraic</a>.</p> <p>Is there a fairly simple way to express $x$ as $\pi\ m/n$, if $tan(x)$ is given as a square root of a rational?</p>
ile
43,192
<p>If $x \in \mathbb{Q}$ and $\tan^2(x\pi) \in \mathbb{Q}$, then $\tan(x\pi) \in \{0, \pm\sqrt{3}, \pm\frac{1}{\sqrt{3}}, \pm 1 \}$.</p> <p>Chapter 11 of the <em><a href="http://books.google.com/books?id=emE6SLT3CLsC&amp;dq=Angles+Whose+Squared+Trigonometric+Functions+Are+Rational&amp;source=gbs_navlinks_s" rel="nofollow noreferrer">A Concrete Approach to Abstract Algebra</a></em> has a simple proof of this fact.</p> <p>Also, a similar <a href="https://math.stackexchange.com/questions/79861/arctan2-a-rational-multiple-of-pi">question</a> has proofs that can be extended to this case.</p>
282,050
<p>I have the equation $y = -x^2 + 2x + 7$. How can I change it to canonical form, which looks like $y^2 = 2px$? ($p$ will be a parameter.)</p> <p>What I've tried so far: $$\begin{align} y &amp;= -x^2 + 2x + 7\\ y &amp;= -(x^2 - 2x + 1) + 8\\ (y-8) &amp;= -(x-1)^2 \\ (y-8)^2 &amp;= 2*(0.5)*(x-1)^4 \end{align} $$</p> <p>But I have read somewhere it's wrong, so how do I make it correct?</p> <p>Or is my solution correct?</p>
Adi Dani
12,848
<p>$$y = -x^2 + 2x + 7 $$ $$y = -(x^2 - 2x +1)+8 $$ $$y = -(x- 1)^2+8 $$ $$(x- 1)^2=-(y-8) $$ $$(x- 1)^2=2(-\frac{1}{2})(y-8)\Rightarrow p=-\frac{1}{2},x-1=Y,y-8=X $$ $$Y^2=2pX$$</p>
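As a quick sanity check (a Python sketch), points on the original parabola satisfy the vertex form $Y^2 = 2pX$ with $p=-\frac{1}{2}$:

```python
# Verify that points on y = -x^2 + 2x + 7 satisfy the vertex form
# (x - 1)^2 = -(y - 8), i.e. Y^2 = 2pX with p = -1/2, Y = x - 1, X = y - 8.
for x in [-3.0, -1.0, 0.0, 0.5, 1.0, 2.5, 4.0]:
    y = -x**2 + 2 * x + 7
    Y, X = x - 1, y - 8
    assert abs(Y**2 - 2 * (-0.5) * X) < 1e-12
```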
1,278,848
<p>Based on <a href="https://math.stackexchange.com/questions/1267021/let-m-subseteq-mathbbrk-manifold-topology-vs-trace-topology/1267760?noredirect=1#comment2573732_1267760">this</a> question I'd like to know: Are there compact (sub)manifolds without boundary in $\mathbb{R}^n$? Because, as that question shows, the topology of the manifolds has to be the trace topology; thus compact subspaces (in particular, <em>compact</em> manifolds) are characterized by the Heine-Borel theorem: They are precisely those sets in $\mathbb{R}^n$ that are closed and bounded.</p> <p>But, as far as I know (and I'm just starting to read about manifolds and haven't got a good grasp on the formal definitions yet), manifolds without boundary aren't closed, so they can't be compact?</p>
RobertCRH
395,336
<p>A compact manifold without a boundary is called a <a href="https://en.wikipedia.org/wiki/Closed_manifold" rel="nofollow noreferrer">closed manifold</a>, so this is certainly an important class of manifolds.</p> <p>Example: An $(n-1)$-sphere ${\cal S}^{n-1}$ is a closed manifold, but the unit ball it encloses, $\{\mathbf{x}\in \mathbb{R}^{n} : \|\mathbf{x}\|\leq 1\}$, is a compact manifold with boundary ${\cal S}^{n-1}$. </p>
3,746,630
<p>So I am solving some probability/finance books and I've gone through two similar problems that conflict in their answers.</p> <h2>Paul Wilmott</h2> <p>The first book is Paul Wilmott's <a href="https://smile.amazon.com/Frequently-Asked-Questions-Quantitative-Finance/dp/0470748753" rel="nofollow noreferrer">Frequently Asked Questions in Quantitative Finance</a>. This book poses the following question:</p> <blockquote> <p>Every day a trader either makes 50% with probability 0.6 or loses 50% with probability 0.4. What is the probability the trader will be ahead at the end of a year, 260 trading days? Over what number of days does the trader have the maximum probability of making money?</p> </blockquote> <p><strong>Solution:</strong></p> <blockquote> <p>This is a nice one because it is extremely counterintuitive. At first glance it looks like you are going to make money in the long run, but this is not the case. Let n be the number of days on which you make 50%. After <span class="math-container">$n$</span> days your returns, <span class="math-container">$R_n$</span> will be: <span class="math-container">$$R_n = 1.5^n 0.5^{260−n}$$</span> So the question can be recast in terms of finding <span class="math-container">$n$</span> for which this expression is equal to 1.</p> </blockquote> <p>He does some math, which you can do as well, that leads to <span class="math-container">$n=164.04$</span>. So a trader needs to win at least 165 days to make a profit. He then says that the average profit <em>per day</em> is:</p> <blockquote> <p><span class="math-container">$1−e^{0.6 \ln1.5 + 0.4\ln0.5}$</span> = −3.34%</p> </blockquote> <p>Which is mathematically wrong, but assuming he just switched the numbers and it should be:</p> <blockquote> <p><span class="math-container">$e^{0.6 \ln1.5 + 0.4\ln0.5} - 1$</span> = −3.34%</p> </blockquote> <p>That still doesn't make sense to me. Why are the probabilities in the exponents? 
I don't get Wilmott's approach here.</p> <p>*PS: I ignore the second question, just focused on daily average return here.</p> <hr /> <h2>Mark Joshi</h2> <p>The second book is Mark Joshi's <a href="https://smile.amazon.com/Quant-Interview-Questions-Answers-Second/dp/0987122827" rel="nofollow noreferrer">Quant Job Interview Question and Answers</a> which poses this question:</p> <blockquote> <p>Suppose you have a fair coin. You start off with a dollar, and if you toss an <em>H</em> your position doubles, if you toss a <em>T</em> it halves. What is the expected value of your portfolio if you toss infinitely?</p> </blockquote> <p><strong>Solution</strong></p> <blockquote> <p>Let <span class="math-container">$X$</span> denote a toss, then: <span class="math-container">$$E(X) = \frac{1}{2}\cdot 2 + \frac{1}{2}\cdot\frac{1}{2} = \frac{5}{4}$$</span> So for <span class="math-container">$n$</span> tosses: <span class="math-container">$$R_n = \left(\frac{5}{4}\right)^n$$</span> Which tends to infinity as <span class="math-container">$n$</span> tends to infinity</p> </blockquote> <hr /> <hr /> <p>Uhm, excuse me what? Who is right here and who is wrong? Why do they use different formulas? Using Wilmott's (second, corrected) formula for Joshi's situation, I get that the average return per day is:</p> <p><span class="math-container">$$ e^{0.5\ln(2) + 0.5\ln(0.5)} - 1 = 0\% $$</span></p> <p>I ran a Python simulation of this, simulating <span class="math-container">$n$</span> days/tosses/whatever, and it seems that the above is not correct. Joshi was right, the portfolio tends to infinity. Wilmott was also right, the portfolio goes to zero when I use his parameters.</p> <p>Wilmott also explicitly dismisses Joshi's approach saying:</p> <blockquote> <p>As well as being counterintuitive, this question does give a nice insight into money management and is clearly related to the Kelly criterion. 
If you see a question like this it is meant to trick you if the expected profit, here 0.6 × 0.5 + 0.4 × (−0.5) = 0.1, is positive with the expected return, here −3.34%, negative.</p> </blockquote> <p>So what is going on?</p> <p>Here is the code:</p> <pre><code>import random

def traderToss(n_tries, p_win, win_ratio, loss_ratio):
    SIM = 10**5                       # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        curr = 1                      # Starting portfolio
        for _ in range(n_tries):      # number of flips/days/whatever
            if random.random() &lt; p_win:
                curr *= win_ratio     # win with probability p_win
            else:
                curr *= loss_ratio    # lose with probability 1 - p_win
        ret += curr                   # add portfolio value after this simulation
    print(ret/SIM)                    # Print average terminal portfolio value
</code></pre> <p>Use: <code>traderToss(260, 0.6, 1.5, 0.5)</code> to test Wilmott's trader scenario.</p> <p>Use: <code>traderToss(260, 0.5, 2, 0.5)</code> to test Joshi's coin flip scenario.</p> <hr /> <hr /> <p>Thanks to the followup comments from Robert Shore and Steve Kass below, I have figured out one part of the issue. <strong>Joshi's answer assumes you play once, therefore the returns would be additive and not multiplicative.</strong> His question is vague enough, using the word &quot;your portfolio&quot;, suggesting we place our returns back in for each consecutive toss. 
If this were the case, we need the <a href="https://www.investopedia.com/articles/investing/071113/breaking-down-geometric-mean.asp" rel="nofollow noreferrer">geometric mean</a>, not the arithmetic mean, which is the expected value calculation he does.</p> <p>This is verifiable by changing the python simulation to:</p> <pre><code>import random

def traderToss():
    SIM = 10**5          # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        if random.random() &gt; 0.5:
            curr = 2     # Our portfolio becomes 2
        else:
            curr = 0.5   # Our portfolio becomes 0.5
        ret += curr
    print(ret/SIM)       # Print average single-day return
</code></pre> <p>This yields <span class="math-container">$\approx 1.25$</span> as in the book.</p> <p>However, if returns are multiplicative, we need a different approach, which I assume is Wilmott's formula. This is where I'm stuck, because I still don't understand the Wilmott formula. Why is the end-of-day portfolio on average</p> <p><span class="math-container">$$ R_{day} = r_1^{p_1} \cdot r_2^{p_2} \cdots r_n^{p_n}, $$</span></p> <p>where <span class="math-container">$r_i$</span> and <span class="math-container">$p_i$</span> are the portfolio multiplier and probability for scenario <span class="math-container">$i$</span>, and there are <span class="math-container">$n$</span> possible scenarios? Where does this (generalized) formula come from in probability theory? This isn't a geometric mean. Then what is it?</p>
T_M
562,248
<p>Joshi's problem is a much easier problem and he is correct. Wilmott's problem is a little bit more subtle, and I think he is misleading about what he is computing. The main point is that returns are not additive, so the trap is to compute expectation of the return on a given day and then &quot;add it up&quot; to conclude that you are expected to win overall. It's counterintuitive that this does not work.</p> <p>So Wilmott is correct when he says that the expected profit on day 1 is <span class="math-container">$$ 0.6 \times (1.5 - 1) + 0.4 \times (0.5 - 1) = 0.1. $$</span> If we write <span class="math-container">$X$</span> for the <em>return</em> on day 1, then: <span class="math-container">$$ \mathbb{E}(1+X) = 0.6 \times 1.5 + 0.4 \times 0.5 = 1.1. $$</span></p> <p>I think Wilmott's language is misleading for the newcomer (which is annoying as he's supposed to be famous for teaching basic quant principles to newcomers). By &quot;average profit per day&quot; in the sentence you quote he seems to be referring to something like &quot;expected daily rate of profit&quot;. To shed a bit more light on what he means, suppose you want to compute the expected return after <span class="math-container">$n$</span> days: To do this, let <span class="math-container">$X_1,\dots, X_n$</span> be i.i.d. random variables where <span class="math-container">$X_k$</span> is defined as the return on day <span class="math-container">$k$</span>. These are not additive: The return after <span class="math-container">$n$</span> days is given by the random variable <span class="math-container">$R_n = (1+X_1)(1+X_2)\cdots (1+X_n)$</span>. But log-returns are additive: <span class="math-container">$$ \log R_n = \sum_{i=1}^n \log (1+X_i), $$</span> so that by linearity of expectation (and i.i.d. assumption) we can compute the expectation of the log-return now as: <span class="math-container">$$ \mathbb{E}(\log R_n) = n \mathbb{E}(\log (1+X)) = n\Bigl(0.6 \log 1.5 + 0.4 \log 0.5\Bigr). 
$$</span> So you can see that what matters in the long run for the expected log-return is whether the expression in the brackets on the right-hand side is bigger than zero or not.</p> <hr /> <p>Wilmott seems to use the value of <span class="math-container">$$ e^{\mathbb{E}(\log R_1)} - 1 = e^{(0.6 \log 1.5 + 0.4 \log 0.5)} - 1 $$</span> to make the same point I am making above. But since we've taken an expectation, we can't pull the <span class="math-container">$\mathbb{E}$</span> through a logarithm or exponential to &quot;convert&quot; easily back to <span class="math-container">$\mathbb{E}(R)$</span>. I don't know... this might be one of those quant things that is used as a measure of rate of return but isn't the same as <span class="math-container">$\mathbb{E}(R)$</span>.</p>
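To make the gap between $\mathbb{E}(R_n)$ and $e^{\mathbb{E}(\log R_n)}$ concrete, here is a small simulation (a sketch with Wilmott's parameters; the variable names are mine). The <em>median</em> terminal portfolio tracks $e^{n\,\mathbb{E}(\log(1+X))}$, which is tiny, even though $\mathbb{E}(R_n)=1.1^{260}$ is huge:

```python
import math
import random

random.seed(0)
p_win, up, down, n_days = 0.6, 1.5, 0.5, 260
SIM = 10000

# Expected log-return per day: 0.6*ln(1.5) + 0.4*ln(0.5) < 0,
# so the *typical* portfolio decays even though E[1+X] = 1.1 > 1.
mean_log = p_win * math.log(up) + (1 - p_win) * math.log(down)

# Simulate terminal portfolios R_n and take the median outcome.
finals = []
for _ in range(SIM):
    r = 1.0
    for _ in range(n_days):
        r *= up if random.random() < p_win else down
    finals.append(r)
finals.sort()
median = finals[SIM // 2]

assert mean_log < 0                # negative expected daily log-return
assert median < 0.01               # the typical trader ends deep in the red
# the median agrees with exp(n * E[log(1+X)]) on a log scale
assert abs(math.log(median) - n_days * mean_log) < 1.0
```

The sample mean of the terminal portfolios, by contrast, is extremely heavy-tailed: a handful of lucky paths dominate it, which is exactly why $\mathbb{E}(R_n)$ can be astronomically large while almost every individual path loses money.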
1,884,852
<p>Suppose there are <em>k</em> dice thrown. Let <em>M</em> denote the minimum of the <em>k</em> numbers rolled. </p> <p>I've learned that finding the individual probability is:</p> <p>$$P(M = m) = P(M \ge m) - P(M \ge m + 1) $$</p> <p>Can someone please explain this to me? I've tried plugging in values for $m = 1, ... , 6$ but it isn't clear to me how that formula is derived. </p>
André Nicolas
6,312
<p>How can the minimum be $\ge m$? There are two possibilities: (i) the minimum is exactly $m$ or (ii) the minimum is greater than $m$. </p> <p>The minimum is greater than $m$ precisely if the minimum is $\ge m+1$.</p> <p>The possibilities (i) and (ii) are disjoint, so $$\Pr(M\ge m)=\Pr(M=m)+\Pr(M\ge m+1).$$ From this we get immediately $$\Pr(M=m)=\Pr(M\ge m)-\Pr(M\ge m+1).$$ </p> <p><em>Remark</em>: The above deals with your specific question. But to finish your problem, note that the minimum is $\ge w$ precisely if all the values are $\ge w$. And for $w=1,2,\dots, 6$ the tosses are all $\ge w$ with probability $\left(\frac{7-w}{6}\right)^k$.</p>
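Both the identity and the closing remark can be checked with a quick simulation (a sketch with $k=3$ dice):

```python
import random

random.seed(1)
k, TRIALS = 3, 200000
counts = [0] * 7  # counts[m] = number of trials with minimum exactly m

for _ in range(TRIALS):
    m = min(random.randint(1, 6) for _ in range(k))
    counts[m] += 1

def p_min_at_least(w):
    # P(M >= w) = ((7 - w)/6)^k: all k dice must show at least w
    return ((7 - w) / 6) ** k

for m in range(1, 7):
    # P(M = m) = P(M >= m) - P(M >= m + 1); note P(M >= 7) = 0
    exact = p_min_at_least(m) - p_min_at_least(m + 1)
    assert abs(counts[m] / TRIALS - exact) < 0.01
```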
4,283,707
<p>When solving this problem I arrive at an unfriendly cubic equation; is there any algebraic shortcut?</p> <p><span class="math-container">$$W=\frac{3+\left [ \sqrt[3]{4+\sqrt[3]{4+...}} \right ]^{2}}{1+\left [ \sqrt[3]{4+\sqrt[3]{4+...}} \right ]^{-1}}$$</span></p> <p>To do this, let <span class="math-container">$$P= \sqrt[3]{4+\sqrt[3]{4+...}}$$</span> <span class="math-container">$$P^{3}= 4+\sqrt[3]{4+...}$$</span> <span class="math-container">$$P^{3}=4+P$$</span> <span class="math-container">$$P^{3}-P-4=0$$</span> <span class="math-container">$$P\approx 1.7963$$</span></p> <p>Evaluating the expression I arrive at approximately 4, but I solved the cubic by software. Is there another way to approach this problem?</p>
march
852,914
<p>Assuming that the nested cube-root converges, which I believe it does, let <span class="math-container">$x$</span> be the unique real root of <span class="math-container">$P^3-P-4$</span>. Then <span class="math-container">$x^3 = x+4$</span>, and therefore <span class="math-container">$$ W=\frac{3+x^2}{1+x^{-1}}= \frac{3x+x^3}{x+1} = \frac{3x+x+4}{x+1} = \frac{4x+4}{x+1} = 4. $$</span></p>
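Numerically, the nested radical can be evaluated by fixed-point iteration, which also confirms $W=4$ without solving the cubic (a sketch):

```python
# Iterate P <- (4 + P)^(1/3); near the fixed point the map is a contraction,
# so the nested radical converges to the real root of P^3 - P - 4 = 0.
P = 4 ** (1 / 3)
for _ in range(100):
    P = (4 + P) ** (1 / 3)

assert abs(P**3 - P - 4) < 1e-9       # P is the real root, P ≈ 1.7963

W = (3 + P**2) / (1 + 1 / P)
assert abs(W - 4) < 1e-9              # the algebraic shortcut gives exactly 4
```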
4,036,896
<p>Let <span class="math-container">$\boldsymbol{A}$</span> be a real symmetric matrix and <span class="math-container">$\boldsymbol{B}$</span> a real antisymmetric matrix with <span class="math-container">$\boldsymbol{A}^2 = \boldsymbol{B}^2$</span>; prove <span class="math-container">$\boldsymbol{A} = \boldsymbol{B} = \boldsymbol{0}$</span>.</p> <hr /> <p>I tried the second-order case. Let <span class="math-container">$\boldsymbol{A} = \begin{bmatrix} a_{11} &amp; a_{12} \\ a_{12} &amp; a_{22} \end{bmatrix}$</span>, <span class="math-container">$\boldsymbol{B} = \begin{bmatrix} b_{11} &amp; b_{12} \\ {-b_{12}} &amp; b_{22} \end{bmatrix}$</span>, <span class="math-container">$a_{ij}, b_{ij} \in \mathbb{R}$</span>.</p> <p><span class="math-container">\begin{align} \boldsymbol{A}^2 &amp;= \begin{bmatrix} {a_{11}^2 + a_{12}^2} &amp; {a_{11}a_{12} + a_{12}a_{22}} \\ {a_{11}a_{12} + a_{12}a_{22}} &amp; {a_{12}^2 + a_{22}^2} \end{bmatrix} \\ \boldsymbol{B}^2 &amp;= \begin{bmatrix} {b_{11}^2 - b_{12}^2} &amp; {b_{11}b_{12} + b_{12}b_{22}} \\ {-b_{11}b_{12} - b_{12}b_{22}} &amp; {-b_{12}^2 + b_{22}^2} \end{bmatrix} \end{align}</span></p> <p>Because <span class="math-container">$\boldsymbol{A}^2 = \boldsymbol{B}^2$</span>, I got <span class="math-container">\begin{align} a_{11}^2 + a_{12}^2 &amp;= b_{11}^2 - b_{12}^2 \\ a_{11}a_{12} + a_{12}a_{22} &amp;= b_{11}b_{12} + b_{12}b_{22} \\ a_{11}a_{12} + a_{12}a_{22} &amp;= -b_{11}b_{12} - b_{12}b_{22} \\ a_{12}^2 + a_{22}^2 &amp;= -b_{12}^2 + b_{22}^2 \end{align}</span></p> <p>Hence, <span class="math-container">\begin{align} a_{11}a_{12}+a_{12}a_{22}=b_{11}b_{12}+b_{12}b_{22}=0 \end{align}</span></p> <p>I lost my momentum here.</p>
TheSilverDoe
594,484
<p>Let <span class="math-container">$\mu$</span> be a complex eigenvalue of <span class="math-container">$B$</span>. Since <span class="math-container">$B$</span> is antisymmetric, <span class="math-container">$\mu \in i\mathbb{R}$</span>, so <span class="math-container">$\mu^2$</span> is a nonpositive real number. But <span class="math-container">$\mu^2$</span> is an eigenvalue of <span class="math-container">$B^2$</span>, hence of <span class="math-container">$A^2$</span>, and the eigenvalues of <span class="math-container">$A^2$</span> are all nonnegative since <span class="math-container">$A$</span> is symmetric. So <span class="math-container">$\mu=0$</span>. You get <span class="math-container">$B=0$</span>, so <span class="math-container">$A^2=0$</span>, so <span class="math-container">$A$</span> is real symmetric and nilpotent, so <span class="math-container">$A=0$</span>.</p>
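The two spectral facts used here can be illustrated with small concrete matrices (a sketch in plain Python; the matrices below are arbitrary examples, not part of the problem):

```python
# For a 2x2 antisymmetric B = [[0, b], [-b, 0]] one computes B^2 = -b^2 * I,
# so every eigenvalue of B^2 is -b^2 <= 0, matching mu in iR, mu^2 <= 0.
# A symmetric A has real eigenvalues, so A^2 has eigenvalues >= 0.

def matmul2(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

b = 3.0
B = [[0.0, b], [-b, 0.0]]
B2 = matmul2(B, B)
assert B2 == [[-9.0, 0.0], [0.0, -9.0]]   # B^2 = -b^2 I: eigenvalues -9, -9

# eigenvalues of A^2 for a symmetric A, via the characteristic polynomial
A = [[1.0, 2.0], [2.0, 5.0]]
A2 = matmul2(A, A)
tr = A2[0][0] + A2[1][1]
det = A2[0][0] * A2[1][1] - A2[0][1] * A2[1][0]
disc = (tr * tr - 4 * det) ** 0.5
eigs = [(tr - disc) / 2, (tr + disc) / 2]
assert all(e >= 0 for e in eigs)          # spectrum of A^2 is nonnegative
```

The only way the two spectra can agree is if both consist of zeros, which is the heart of the argument above.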
Fred
380,717
<p>Let us denote the usual inner product on <span class="math-container">$ \mathbb R^n$</span> by <span class="math-container">$(\cdot|\cdot).$</span> Then</p> <p><span class="math-container">$$||Ax||^2=(Ax|Ax)=(A^TAx|x)=(A^2x|x)=(B^2x|x)=(Bx|B^Tx)=(Bx|-Bx)=-(Bx|Bx)=-||Bx||^2$$</span></p> <p>for all <span class="math-container">$x \in \mathbb R^n.$</span></p> <p>Hence <span class="math-container">$-||Bx||^2 \ge 0$</span> for all <span class="math-container">$x \in \mathbb R^n.$</span> This gives</p> <p><span class="math-container">$||Bx||^2 = 0$</span> for all <span class="math-container">$x \in \mathbb R^n$</span> and thus <span class="math-container">$||Ax||^2 = 0$</span> for all <span class="math-container">$x \in \mathbb R^n$</span>.</p> <p>Consequence: <span class="math-container">$A=B=0.$</span></p>
297,812
<p>If $a-b=b-c$, how do we find the value of $a^2-2b^2+c^2$?</p>
André Nicolas
6,312
<p>Note that $a-b=b-c$ is equivalent to $c=2b-a$.</p> <p>Substitute for $c$ in $a^2-2b^2+c^2$. We get $a^2-2b^2+(2b-a)^2$.</p> <p>Expand the square. We get $a^2-2b^2+(4b^2-4ab+a^2)$.</p> <p>This simplifies to $2a^2-4ab+2b^2$, which simplifies to $2(a-b)^2$.</p> <p>Further simplification is not possible, since we have only one fact (equation), so cannot expect to eliminate more than one variable. </p>
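A quick random-sample check of the simplification (a sketch; integers keep the arithmetic exact):

```python
import random

random.seed(2)
# Pick a, b freely and set c = 2b - a (equivalent to a - b = b - c),
# then check a^2 - 2b^2 + c^2 = 2(a - b)^2.
for _ in range(1000):
    a = random.randint(-100, 100)
    b = random.randint(-100, 100)
    c = 2 * b - a
    assert a**2 - 2 * b**2 + c**2 == 2 * (a - b) ** 2
```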
1,029,868
<p>Let $$ A=\begin{bmatrix} 1 &amp; 1 &amp; 2\\ 1 &amp; 2 &amp; 1\\ 2 &amp; 1 &amp; 1 \end{bmatrix}$$</p> <p>Show that $ A^{-1}=\dfrac{1}{4}(-A^2+4A+I)$</p> <p>I have absolutely no clue how to do this. Could someone be kind enough to explain and provide an answer? I believe it has something to do with the Cayley-Hamilton Theorem, as the question is from that problem set, but I don't understand how to use it to solve this problem. Your help is appreciated. Thanks </p>
Martin Argerami
22,857
<p>The characteristic polynomial is $(\lambda-4)(\lambda-1)(\lambda+1)=\lambda^3-4\lambda^2-\lambda+4$. So, by Cayley-Hamilton, $$ A^3-4A^2-A+4I=0. $$ Then $$I=\frac14\,(A+4A^2-A^3),$$ and since $A$ is invertible (its determinant is $-4\ne0$) we have, multiplying by $A^{-1}$, $$ A^{-1}=\frac14\,(I+4A-A^2). $$</p>
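One can verify the resulting inverse directly by checking $A\cdot\frac14(I+4A-A^2)=I$ (a sketch in plain Python, no numpy):

```python
# Verify A^{-1} = (1/4)(I + 4A - A^2) for the given 3x3 matrix
# by checking that A times the candidate inverse is the identity.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 1, 2], [1, 2, 1], [2, 1, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = matmul(A, A)

# candidate inverse from Cayley-Hamilton: (1/4)(I + 4A - A^2)
Ainv = [[(I[i][j] + 4 * A[i][j] - A2[i][j]) / 4 for j in range(3)]
        for i in range(3)]

prod = matmul(A, Ainv)
for i in range(3):
    for j in range(3):
        assert abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
```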
1,846,592
<p>I know that a discrete topological space is one where all singletons are open.</p> <p>For example, $\mathbb{N}$ with the subspace topology inherited from $(\mathbb{R}, \mathfrak{T}_{usual})$. This is the case because we can find $\{n\} = (a,b) \cap \mathbb{N}$ which is open. Hence all singletons are open.</p> <p>But are all sets clopen? Closed?</p> <p>My thoughts: Suppose we take a singleton $\{x\}$ in a discrete space $X$. We know the singleton is open, hence $\{x\}^c$ is closed. But $\{x\}^c$ is also an arbitrary union of singletons, so it is open as well; hence all sets are clopen. </p>
J.-E. Pin
89,374
<p><strong>Hint</strong>. Use the fact that open sets are closed under arbitrary union.</p>
2,010,255
<p>While finding the Taylor Series of a function, <strong>when</strong> are you allowed to substitute? And <strong>why</strong>?</p> <p>For example:</p> <p>Around $x=0$ for $e^{2x}$ I apparently am allowed to substitute $u=2x$ and then use the known series for $e^u$. But for $e^{x+1}$ I am not allowed to substitute $u=x+1$.</p> <p>I know the technique for finding the Taylor Series of $e^{x+1}$ around $x=0$ by taking $e^{x+1}=e\times e^x$. However, I am looking for understanding and intuition for when and why it is allowed to apply substitution.</p> <p>Note: there are several questions that are similar to this one, but I have found none that actually answers the question "why" or shows a complete proof.</p> <hr> <p>EDIT: Thanks to the answer of Markus Scheuer, I should refine the question to cases where the series is finite, for example $n\to3$.</p>
hamam_Abdallah
369,188
<p>If $ f(x)=P_n(x)+x^n\epsilon(x)$ then</p> <p>$f(u(x))=P_n(u(x))+(u(x))^n\epsilon(u(x))$</p> <p>with $\lim_{x\to 0}\epsilon(x)=0$.</p> <p>Thus we need that</p> <p>$\lim_{x\to 0}\epsilon(u(x))=0$,</p> <p>so we must have</p> <p>$$\lim_{x\to 0}u(x)=0$$</p>
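Numerically the difference is visible: with $u=2x\to 0$ the degree-3 truncation error vanishes like $x^4$, while with $u=x+1\to 1$ the substituted polynomial misses $e^{x+1}$ at $x=0$ by a fixed gap (a sketch):

```python
import math

def P3(u):
    # degree-3 Maclaurin polynomial of e^u
    return 1 + u + u**2 / 2 + u**3 / 6

# u = 2x -> 0 as x -> 0: substitution is legitimate, error ~ (2x)^4 / 24
err_small = abs(math.exp(2 * 0.01) - P3(2 * 0.01))
err_smaller = abs(math.exp(2 * 0.001) - P3(2 * 0.001))
assert err_small < 1e-7 and err_smaller < 1e-11

# u = x + 1 -> 1 as x -> 0: P3(x + 1) is NOT the Maclaurin polynomial of
# e^{x+1}; at x = 0 the error is already e - P3(1), a fixed positive gap.
gap = abs(math.exp(0 + 1) - P3(0 + 1))
assert gap > 0.05   # e - (1 + 1 + 1/2 + 1/6) ≈ 0.0516
```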
85,052
<p>A housemate of mine and I disagree on the following question: </p> <p>Let's say that we play a game of yahtzee. Of the five dice you throw, two dice show the value 1, two other dice show the value 2, and one die shows six dots on the top side. Since you haven't thrown a "full house" yet, you re-roll the die that showed 6, hoping for a one or a two. You get to throw this die at most two times. If you throw a one or a two the first time, you stop, because now you already have the "full house". If you haven't thrown a one or a two, you throw the die again, hoping for a one or a two this time.</p> <p>Now, what is the probability that you throw a one or a two with the fifth die within two turns? (Given the way a rational person operates in this situation.)</p> <p>My take on this question was the following: the probability that you throw a one or a two the first time with the fifth die is $1/3$, and the probability that you don't throw a one or a two the first time but do throw one the second time is $ 2/3 \cdot 1/3 = 2/9$. Adding these values gives the probability: $1/3 + 2/9 = 3/9 + 2/9 = 5/9$. </p> <p>My housemate, however, argues that the chance to throw a one or a two the first time is $1/3$, and believes that throwing the fifth die again gives you a probability of throwing a one or a two of $1/3$ again. Adding these values gives the probability of throwing a full house as $1/3 + 1/3 = 2/3$. </p> <p>Who is right, my housemate or me?</p> <p>I strongly believe I am right, but even if you tell me I'm right, I might not be able to convince my housemate of the truth. He argues that my way of reasoning implies that the probability of throwing a one or a two with the fifth die the second time is smaller than throwing it the first time. Could you please also provide me with a pedagogically sound way to explain to him why the probability is $5/9$?
</p> <p>Thanks in advance</p>
JavaMan
6,491
<p>You are right. The easiest way to see this is to recognize the probability that you roll a $1$ or a $2$ as </p> <p>$$ 1 - Pr(\text{not rolling a }1 \text{ or } 2) = 1 - (4/6)^2 = 5/9. $$</p>
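A short simulation settles the disagreement (a sketch):

```python
import random

random.seed(3)
TRIALS = 200000
hits = 0
for _ in range(TRIALS):
    # up to two rolls of the fifth die, stopping on a 1 or a 2
    first = random.randint(1, 6)
    if first <= 2 or random.randint(1, 6) <= 2:
        hits += 1

estimate = hits / TRIALS
assert abs(estimate - 5 / 9) < 0.01   # 5/9 ≈ 0.5556, not 2/3
```

The housemate's error is double-counting: adding $1/3 + 1/3$ counts the outcomes where both rolls succeed twice, which is why his "probability" would exceed $1$ if more rolls were allowed.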
121,909
<p>I came across this question while studying primitive roots. I know it has something to do with the fact that if the order of $a$ is $m$ then for every $k \in \mathbb{Z}$, the order of $a^k$ is $m/(m,k)$. The question is as follows: </p> <blockquote> <p>Let $p$ be an odd prime. Prove that $a^2$ is never a primitive root $\pmod{p}$. </p> </blockquote> <p>I would appreciate any help. Thank you.</p>
Bill Dubuque
242
<p>Your hunch is correct: $\rm\:(m,k)&gt;1\:\Rightarrow\:ord(a^k) = m/(m,k) &lt; m = ord(a).\:$ Since $\rm\:a^k\:$ has smaller order than $\rm\:a,\:$ it doesn't have maximal order in $\rm\left&lt;a\right&gt;.\:$ Your problem is the special case $\rm\:2 = k\ |\ m$</p>
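The order formula can be checked computationally for small primes (a sketch; brute-force order computation):

```python
# ord(a^k) = m / gcd(m, k) with m = ord(a). With k = 2 and p an odd prime,
# ord(a^2) is at most (p-1)/2 < p-1, so a^2 is never a primitive root mod p.

def order(a, p):
    # multiplicative order of a modulo p, for gcd(a, p) = 1
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    for a in range(1, p):
        assert order(a * a % p, p) <= (p - 1) // 2
```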
3,530,492
<blockquote> <p>Evaluate</p> <p><span class="math-container">$$ \int_0^{e^{\pi}} |\cos\ (\ln x)|dx$$</span></p> </blockquote> <p><em>My ideas:</em> I substituted <span class="math-container">$u = \ln x$</span> and tried to evaluate</p> <p><span class="math-container">$$\int_{-\infty}^\pi |\cos u|\ e^u du$$</span></p> <p>Integration by parts didn't yield any results however. Does anybody have another idea?</p>
Z Ahmed
671,540
<p>Substituting <span class="math-container">$x=e^t$</span>, <span class="math-container">$$I=\int_{0}^{e^{\pi}}|\cos (\ln x)|\, dx= \int_{-\infty} ^{\pi} e^t~ |\cos t|\, dt$$</span> Since <span class="math-container">$\cos t\ge 0$</span> on <span class="math-container">$[0,\pi/2]$</span> and <span class="math-container">$\cos t\le 0$</span> on <span class="math-container">$[\pi/2,\pi]$</span>, <span class="math-container">$$\implies I=\int_{-\infty}^{0} e^{t} |\cos t|~ dt+\int_{0}^{\pi/2} e^{t} \cos t ~dt-\int_{\pi/2}^{\pi} e^{t} \cos t~ dt$$</span> <span class="math-container">$$\implies I=\int_{0}^{\infty} e^{-t} |\cos t|~ dt+\int_{0}^{\pi/2} e^{t} \cos t ~dt-\int_{\pi/2}^{\pi} e^{t} \cos t~ dt$$</span> Let <span class="math-container">$$J=\int_{0}^{\infty} e^{-t} |\cos t| ~dt = \lim_{n \rightarrow \infty} \int_{0}^{n\pi} e^{-t} |\cos t|\, dt =\lim_{n \rightarrow \infty}\left(1+e^{-\pi}+e^{-2\pi}+\dots+e^{-(n-1)\pi}\right) K$$</span> <span class="math-container">$$\implies J=\frac{K}{1-e^{-\pi}}$$</span> Because the period of <span class="math-container">$|\cos t|$</span> is <span class="math-container">$\pi$</span>, we have broken the integral into the sections <span class="math-container">$[0,\pi], [\pi, 2\pi],\dots,[(n-1)\pi, n\pi]$</span>, where <span class="math-container">$K=\int_{0}^{\pi} e^{-t} |\cos t|\, dt$</span>. <span class="math-container">$$\implies I=J+\int_{0}^{\pi/2} e^{t} \cos t ~dt-\int_{\pi/2}^{\pi} e^{t} \cos t~ dt$$</span> By integration by parts we have <span class="math-container">$$\int e^{t} \cos t\, dt =\frac{1}{2} e^{t} [\cos t+ \sin t]~~~~(*)$$</span> so <span class="math-container">$$I=J+\frac{1}{2}[e^{\pi}+2e^{\pi/2}-1]$$</span> Similarly, <span class="math-container">$\int e^{-t} \cos t\, dt =\frac{1}{2} e^{-t} [\sin t-\cos t]$</span> gives <span class="math-container">$$K=\frac{1}{2}(1-e^{-\pi}+2 e^{-\pi/2})$$</span> So we finally get <span class="math-container">$$I=\frac{e^{\pi}-1+2e^{\pi/2}}{2(e^{\pi}-1)}+\frac{e^{\pi}+2e^{\pi/2}-1}{2}=\frac{e^{\pi}(e^{\pi}-1+2e^{\pi/2})}{2(e^{\pi}-1)}$$</span></p>
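The closed form can be double-checked by direct numerical quadrature of $\int_{-\infty}^{\pi} e^t|\cos t|\,dt$ (a sketch using a plain trapezoid rule; the truncation point $t=-40$ is arbitrary but safe since the integrand is below $e^{-40}$ there):

```python
import math

# I = integral of e^t |cos t| dt over (-inf, pi], truncated at t = -40
a, b, N = -40.0, math.pi, 400000
h = (b - a) / N
total = 0.5 * (math.exp(a) * abs(math.cos(a)) + math.exp(b) * abs(math.cos(b)))
for i in range(1, N):
    t = a + i * h
    total += math.exp(t) * abs(math.cos(t))
numeric = total * h

e_pi, e_half = math.exp(math.pi), math.exp(math.pi / 2)
closed = e_pi * (e_pi + 2 * e_half - 1) / (2 * (e_pi - 1))   # ≈ 16.598

assert abs(numeric - closed) < 1e-3
```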
1,736,376
<p>Let $R$ be a ring and $I$ the set of non-invertible elements of $R$. </p> <p>If $(I,+)$ is an additive subgroup of $(R,+)$, then show that $I$ is an ideal of $R$ and so $R$ is local. </p> <p>I have done the following: </p> <p>Since $(I,+)$ is an additive subgroup of $(R,+)$, we have that $\forall a,b \in I$ : $a+b\in I$. </p> <p>But how can we show that it holds that $ax\in I, \forall a\in I, \forall x\in R$ ? </p>
Alex M.
164,025
<p>I believe that somewhere in your textbook $R$ is assumed to be commutative. Otherwise, the product of a non-invertible element with an arbitrary element of the ring <a href="https://math.stackexchange.com/questions/627562/can-the-product-of-two-non-invertible-elements-in-a-ring-be-invertible">may turn out to be invertible</a> (the examples given there are even stronger: they show that the product of two non-invertible elements might be invertible).</p> <p>Let $a \in I$ and $x\in R$. Assume that $ax$ is invertible; there must exist, then, some $y \in R$ such that $(ax) y = 1$, which is equivalent to $a (xy) = 1$, which means that $xy$ is a right inverse for $a$, which will also be a left inverse because $R$ is commutative, so $a$ is invertible, which is a contradiction, therefore $ax$ is not invertible so $ax \in I$.</p>
1,237,528
<p>$$ \int_{0}^{z} \sqrt {1 + \tan^2\left(\dfrac{\pi}{4} \dfrac{z}{H} \right)}\, dz $$</p> <p>gives</p> <p>$$ \dfrac{4H}{\pi} \sinh^{-1} \left( \tan \dfrac{\pi}{4} \dfrac{z}{H} \right) $$</p> <p>Please advise on a solution.</p> <p>Edit: I can get to</p> <p>$$\dfrac{4H}{\pi} \int_{0}^{\frac{\pi z}{4H}} \sec u \, du$$</p> <p>How can I proceed after this step?</p>
tired
101,233
<p>1.) Define $\frac{\pi/4 }{H}=a $</p> <p>2.) Substitute $a z'=\arctan(r),\quad dz'=\frac{dr}{a(1+r^2)}$. This gives</p> <p>$$ I(a)=\frac{1}{a}\int_0^{\tan(a z)}\frac{\sqrt{1+r^2}}{1+r^2}\,dr=\frac{1}{a}\int_0^{\tan(a z)}\frac{1}{\sqrt{1+r^2}}\,dr $$</p> <p>Furthermore $\int\frac{1}{\sqrt{1+r^2}}\,dr=\text{arcsinh}(r)+C$</p> <p>and therefore $$ I(a)=\frac{1}{a}\text{arcsinh}(\tan(az)) $$</p> <p>Done!</p> <p>Sidenote: This integral is only well behaved (in a straightforward manner) as long as $az&lt; \pi/2$, because of the divergence of $\tan$.</p>
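A numeric spot check of the identity (Python; the test values $H=2$, $z=1.5$ are arbitrary choices with $az<\pi/2$):

```python
import math

# For a = (pi/4)/H, int_0^z sec(a t) dt should equal (1/a) * arcsinh(tan(a z)).
H, z = 2.0, 1.5
a = (math.pi / 4) / H

def midpoint(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

numeric = midpoint(lambda t: 1 / math.cos(a * t), 0.0, z)
closed = math.asinh(math.tan(a * z)) / a
print(numeric, closed)
```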
2,624,669
<blockquote> <p>Find the global maximum and global minimum of $$f(x)=3(x-2)^{\frac{2}{3}}-(x-2)$$ over the interval $[0,20]$.</p> </blockquote> <p><strong>My input:</strong> The derivative vanishes at $x=10$; a left neighborhood gives a positive derivative and a right neighborhood gives a negative derivative. Therefore $x=10$ is where the function attains the global maximum (correct me if I have written something wrong). But I am not able to figure out the global minimum. I saw the graph of this function on Desmos and there is a peak at $2$ that I am not able to understand either. Is the derivative undefined at the peak?</p>
Dr. Sonnhard Graubner
175,066
<p>Careful: $x=10$ is only a <em>local</em> maximum. On a closed interval you must also compare the endpoints: $f(0)=3\cdot 2^{2/3}+2\approx 6.76$ and $f(20)=3\cdot 18^{2/3}-18\approx 2.61$, while $f(10)=4$. So the global maximum on $[0,20]$ is attained at the endpoint $x=0$. The global minimum is attained at $x=2$, where $$f(2)=0,$$ since $f\geq 0$ on the whole interval. The "peak" you see at $x=2$ is a cusp: there $f'(x)=2(x-2)^{-1/3}-1$ is indeed undefined.</p>
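A quick grid check (Python, illustrative only; the 200001-point grid is an arbitrary choice) shows how the candidates compare on $[0,20]$:

```python
# Grid check of the extrema of f(x) = 3(x-2)^(2/3) - (x-2) on [0, 20].
# (x-2)^(2/3) must be computed as the real cube root squared for x < 2,
# which equals |x-2|^(2/3).
def f(x):
    u = x - 2
    return 3 * (abs(u) ** (2 / 3)) - u

xs = [i * 20 / 200000 for i in range(200001)]
fmax_x = max(xs, key=f)
fmin_x = min(xs, key=f)
print(fmax_x, f(fmax_x))
print(fmin_x, f(fmin_x))
```

The maximizer lands at the endpoint $x=0$ (value about $6.76$, beating $f(10)=4$), and the minimizer at $x=2$ (value $0$).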
3,263,076
<p>Let <span class="math-container">$\Gamma\subset PSL_2(\mathbb{R})$</span> be a cofinite Fuchsian group (e.g. a Fuchsian group with finite fundamental domain). Does <span class="math-container">$\Gamma$</span> necessarily contain a hyperbolic element? </p> <p>At first, I tried to use the fact that <span class="math-container">$tr(\gamma)&gt;2$</span> if <span class="math-container">$\gamma\in \Gamma$</span> is hyperbolic, but I failed at this. (Which does not mean it is not possible and if it were, I would appreciate the simplicity of this approach.)</p> <p>Now, I thought one could use the following two facts</p> <ul> <li><p>A non-elementary Fuchsian group (the orbit <span class="math-container">$\Gamma z $</span> is infinite for all <span class="math-container">$z\in \mathbb{H}$</span>) must contain a hyperbolic element.</p></li> <li><p>A Fuchsian group is elementary if it is either cyclic or generated by the Moebius transformations <span class="math-container">$g(z)=kz$</span> and <span class="math-container">$h(z)=-\frac{1}{z}$</span></p></li> </ul> <p>If I wanted to use the above, I would need to show that the fundamental domain of both a cyclic group and the one generated by <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are finite, I assume. However, I fail with that. Maybe somebody can help me there?</p> <p>I appreciate any help - so if my thoughts are leading in the wrong direction, I am very happy to check out a new approach!</p>
Severin Schraven
331,816
<p>We show that a pretty ring has exactly one unit. Indeed, if <span class="math-container">$0 \neq u$</span> is not a unit and <span class="math-container">$e$</span> is a unit, then <span class="math-container">$$ (u+e) + 0 = e + u $$</span> tells us that <span class="math-container">$u+e$</span> is not a unit (otherwise we get the contradiction <span class="math-container">$u=0$</span>). Now take two units <span class="math-container">$e, \tilde{e}$</span>; then <span class="math-container">$$ e + (u+ \tilde{e}) = \tilde{e} + (u+ e) $$</span> implies <span class="math-container">$e=\tilde{e}$</span> and hence <span class="math-container">$1$</span> is the only unit. </p> <p>On the other hand every ring with only one unit is a pretty ring as we can write every <span class="math-container">$x\neq 0$</span> as <span class="math-container">$$ x = 1 + (x-1).$$</span></p> <p>Thus, we have for a unital ring <span class="math-container">$R\neq \mathbb{Z}/2 \mathbb{Z}$</span>: <span class="math-container">$$ R \text{ is a pretty ring} \quad \Leftrightarrow \quad \vert R^\times \vert =1 $$</span> In particular we have <span class="math-container">$-1=1$</span> and thus a pretty ring has either characteristic equal to <span class="math-container">$1$</span> or <span class="math-container">$2$</span>. Note that both cases are possible as the zero ring is a pretty ring.</p> <p><strong>Shorter proof:</strong> Assume that the characteristic of the pretty ring is not <span class="math-container">$2$</span>. Then we get from <span class="math-container">$1+0=-1+2$</span> that <span class="math-container">$2$</span> must be a unit. Let <span class="math-container">$u\neq 0$</span> be a non-unit (exists as a pretty ring is not a field), then <span class="math-container">$1+0=(u+1) - u$</span> implies that <span class="math-container">$u+1$</span> is a non-unit. Then we get from <span class="math-container">$(u+1)+1=u +2$</span> that <span class="math-container">$0=1$</span>, i.e. our pretty ring is the zero ring.
Therefore, a pretty ring has characteristic <span class="math-container">$1$</span> or <span class="math-container">$2$</span>.</p>
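A tiny consistency check (Python; restricted to the rings $\mathbb{Z}/n\mathbb{Z}$, my own illustration) of the unit-count criterion: among these rings, exactly one unit occurs only for $n=1$ (the zero ring) and $n=2$, matching the characteristic-$1$-or-$2$ conclusion.

```python
from math import gcd

# Units of Z/nZ are the residues coprime to n; count them for small n.
unit_counts = {n: sum(1 for x in range(n) if gcd(x, n) == 1) for n in range(1, 13)}
only_one_unit = [n for n, c in unit_counts.items() if c == 1]
print(only_one_unit)
```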
3,130,877
<p>Do normal geometry concepts, such as parallels, angles, areas and triangles, still apply on a Mobius band?</p> <p>If not, in which cases do they fail?</p> <p>For example, what would three lines on a Mobius band form? A triangle, if they are not parallel? Or might it be something else entirely?</p>
Mark Fischler
150,362
<p>Away from the edges, the geometry of a Moebius band is locally Euclidean, so the usual concepts and theorems of geometry all apply. The issues involving edges are qualitatively no different than if you were trying to do geometry on a disk instead of a plane: non-parallel lines may still never meet because they fall off the edge.</p> <p>On the other hand, if you could form a one-sided surface with no boundaries, such as a Klein bottle, then things would be different. For one thing, the sum of the angles in a triangle is no longer 180 degrees. Since there is no obvious way to have a Klein bottle without some places being more tightly curved than others, this gets messy really fast, but the concepts are not a lot different than those of spherical geometry or hyperbolic geometry.</p>
70,801
<p>I am asked to find how many $k$-dimensional subspaces there are in a vector space $V$ over $\mathbb F_p$, $\dim V = n$.</p> <p>My attempt: 1) Let's find the total number of elements in $V$: assume that $\{v_1, v_2, \cdots, v_n\}$ is a basis of $V$. Then, for every $v \in V$ we can write down $$ v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n $$ and since the coordinates ($a_1, \cdots, a_n$) are from $\mathbb F_p$ there are $p^n$ vectors in $V$; $p^n-1$ without the zero vector.</p> <p>2) Let's look at the situation where $k=1$. Let's call this 1-dimensional space $V&#39;$. $$\forall v&#39; \in V&#39;. v&#39; = a_1 v_1$$ where $v_1$ is a basis of $V&#39;$. We know that two non-zero vectors $u \in V&#39;_1$ and $v \in V&#39;_2$ from distinct 1-dimensional subspaces are not linearly dependent. So, every 1-dimensional subspace has $(p-1)$ bases. Therefore, there are $\frac{p^n - 1}{p-1}$ possible 1-dimensional subspaces in $V$.</p> <p>3) A k-dimensional subspace is determined by the set of its bases. Since a basis cannot contain the zero vector we can write down a formula for selecting $k$ linearly independent vectors: $C^k_m (p-1)^k$, where $m = \frac{p^n - 1}{p-1}$. Here we first choose $k$ 1-dimensional subspaces and then we choose one of the $(p-1)$ non-zero vectors from each of the subspaces.</p> <p>4) .. unfortunately, this is where I am stuck. My intuition says that the answer may be $\frac{p^n - 1}{(p-1)^k}$, but this might be completely wrong and I don't know how to go about finishing the problem.</p> <p>Thanks in advance.</p>
Marc van Leeuwen
18,880
<p>Just for the record, the number asked for here is the <a href="http://en.wikipedia.org/wiki/Gaussian_binomial_coefficient">Gaussian binomial coefficient</a> $\binom nk_q$ evaluated at $q=p$. The indeterminate of the Gaussian binomial coefficient is traditionally called $q$, and I guess this is in particular because of the usefulness of setting it equal to the order of a finite field.</p>
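For concreteness, a short sketch (Python; my own illustration) of evaluating the Gaussian binomial coefficient at $q=p$ via the standard product formula $\binom{n}{k}_q=\prod_{i=0}^{k-1}\frac{q^{n-i}-1}{q^{i+1}-1}$:

```python
# Gaussian binomial coefficient C(n, k)_q evaluated at an integer q;
# at q = p it counts the k-dimensional subspaces of F_p^n.
def gaussian_binomial(n, k, q):
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

print(gaussian_binomial(3, 2, 2))  # 7 two-dimensional subspaces of F_2^3
print(gaussian_binomial(4, 2, 3))
```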
70,801
<p>I am asked to find how many $k$-dimensional subspaces there are in a vector space $V$ over $\mathbb F_p$, $\dim V = n$.</p> <p>My attempt: 1) Let's find the total number of elements in $V$: assume that $\{v_1, v_2, \cdots, v_n\}$ is a basis of $V$. Then, for every $v \in V$ we can write down $$ v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n $$ and since the coordinates ($a_1, \cdots, a_n$) are from $\mathbb F_p$ there are $p^n$ vectors in $V$; $p^n-1$ without the zero vector.</p> <p>2) Let's look at the situation where $k=1$. Let's call this 1-dimensional space $V&#39;$. $$\forall v&#39; \in V&#39;. v&#39; = a_1 v_1$$ where $v_1$ is a basis of $V&#39;$. We know that two non-zero vectors $u \in V&#39;_1$ and $v \in V&#39;_2$ from distinct 1-dimensional subspaces are not linearly dependent. So, every 1-dimensional subspace has $(p-1)$ bases. Therefore, there are $\frac{p^n - 1}{p-1}$ possible 1-dimensional subspaces in $V$.</p> <p>3) A k-dimensional subspace is determined by the set of its bases. Since a basis cannot contain the zero vector we can write down a formula for selecting $k$ linearly independent vectors: $C^k_m (p-1)^k$, where $m = \frac{p^n - 1}{p-1}$. Here we first choose $k$ 1-dimensional subspaces and then we choose one of the $(p-1)$ non-zero vectors from each of the subspaces.</p> <p>4) .. unfortunately, this is where I am stuck. My intuition says that the answer may be $\frac{p^n - 1}{(p-1)^k}$, but this might be completely wrong and I don't know how to go about finishing the problem.</p> <p>Thanks in advance.</p>
Abdolzadeh.H
330,189
<p>Let $n=\dim V$, where $V$ is a vector space over any finite field $\mathbb{F}$ with $|\mathbb{F}|=r$, and let $W$ be a $k$-dimensional subspace of $V$. First of all we count the number of (ordered) bases of $W$. There are $r^k-1$ non-zero vectors in $W$, so to select a basis of $W$, the first member can be chosen in $r^k-1$ ways, the second member in $r^k-r$ ways, and so on. Hence the total number is $s=(r^k-1)(r^k-r)\cdots (r^k-r^{k-1})$.</p> <p>How many ordered $k$-tuples of vectors in $V$ generate a $k$-dimensional subspace of $V$? The answer is as follows: to construct such a tuple $A=(v_1,\cdots , v_k)$ of linearly independent vectors, obviously $v_1$ can be chosen in $r^n-1$ ways, the vector $v_2$ in $r^n-r$ ways, $\cdots$, and $v_k$ in $r^n-r^{k-1}$ ways. So there are $t=(r^n-1)(r^n-r)\cdots(r^n-r^{k-1})$ such tuples. Let us call this family of tuples $\mathcal{B}$ and the family of bases of $W$ $\mathcal{A}$. The size of $\mathcal{A}$ is $s$ and the size of $\mathcal{B}$ is $t$. A member $A$ of $\mathcal{B}$ generates $W$ if and only if $A$ lies in $\mathcal{A}$; on the other hand, every $k$-dimensional subspace of $V$ is generated by some member of $\mathcal{B}$, and as $W$ was typical, there are exactly $s$ members of $\mathcal{B}$ that generate any given subspace. Therefore the number of different $k$-dimensional subspaces of $V$ is $t/s$.</p>
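A brute-force check of the $t/s$ count (Python; my own illustration for the smallest interesting case $V=\mathbb F_2^3$, $k=2$): enumerate ordered pairs of linearly independent vectors, group them by the subspace they span, and compare with $t/s$.

```python
from itertools import product

# Non-zero vectors of F_2^3.
vectors = [v for v in product((0, 1), repeat=3) if any(v)]

def span(u, v):
    # All F_2-linear combinations a*u + b*v, as a frozenset of points.
    return frozenset(tuple((a * x + b * y) % 2 for x, y in zip(u, v))
                     for a in (0, 1) for b in (0, 1))

# A pair (u, v) is independent exactly when its span has 4 elements.
subspaces = {span(u, v) for u in vectors for v in vectors if len(span(u, v)) == 4}
t_over_s = ((2**3 - 1) * (2**3 - 2)) // ((2**2 - 1) * (2**2 - 2))
print(len(subspaces), t_over_s)
```

Both counts give $7$, the number of $2$-dimensional subspaces of $\mathbb F_2^3$.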
2,178,318
<p>On a test I wrote an implication arrow "$\implies$" to show that I deduced one statement from the previous one, but I didn't get full score since it was more accurate to use an equivalence arrow "$\iff$". For example: $$ 2x = 4 \implies x = 2 $$ but it's also true the other way around: $$ 2x = 4 \impliedby x = 2$$ so it is more correct to write equivalence arrow: $$ 2x = 4 \iff x = 2$$ Given this i would assume that if $Q \implies P$ is <strong>true</strong>, then $Q \impliedby P$ is <strong>false</strong>. <br><strong>Is this correct?</strong></p> <p><br>I don't want to check whether a statement only implies or is equivalent to another every time I do some operations to it. <br>So my second question is then: is there some other more loosely defined implication arrow that allows me to show that implication in one direction is true, without saying that implication the other direction is false? I also came across <a href="https://i.stack.imgur.com/FpyJi.png" rel="noreferrer">this picture</a>, but i'm not entirely sure what the difference between those two definitions are.</p>
Gordon Geringas
571,600
<blockquote> <p>"Given this i would assume that if Q⟹P is true, then Q⟸P is false. Is this correct?"</p> </blockquote> <p>No, that is not implied. However, you may be thinking of Modus Tollens, which goes:</p> <blockquote> <p>P⇒Q, notQ hence notP</p> </blockquote> <p>For example, if it rains (P) then it is wet outside (Q). Since it is not wet outside (notQ) it did not rain (not P). </p> <p>Counterexample to the logic you questioned, showing it's not correct: </p> <blockquote> <p>Assume it is correct and apply it to the rain example to get:</p> <p>If it rains then it is wet, hence it's not true that if it were wet outside then it has rained. </p> <blockquote> <p>That doesn't make any sense when you say it out loud, and that's not an accident. The logic is actually contradictory unless we add additional premises.</p> </blockquote> <p>Our assumption that it was correct has been refuted. </p> </blockquote> <p>The double arrow should be used whenever you have identified things either formally or by definition as one another. </p> <p>For example:</p> <blockquote> <p>You are a bachelor iff you are unmarried. (true by definition)</p> <p>A linear transformation between two vector spaces is one-to-one iff it maps only one element to 0. (requires formal proof in both directions)</p> </blockquote>
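The truth-table content of the answer can be checked mechanically (Python; my own illustration): $P\Rightarrow Q$ being true does not force $Q\Rightarrow P$ to be false, since there are valuations where both directions hold.

```python
# Enumerate all truth assignments and find those where both P => Q and
# Q => P are true; if any exist, the questioned inference is refuted.
implies = lambda p, q: (not p) or q
witnesses = [(p, q) for p in (False, True) for q in (False, True)
             if implies(p, q) and implies(q, p)]
print(witnesses)
```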
1,505,076
<p>This might sound a stupid question but it is indeed a real one.</p> <p>I'm trying to figure a Confidence interval for the average age of my population.</p> <p>Given i have a population of 100 individual, and i sample 3 of them. From CLT, i can say that $Var[\bar{x}_3] = \frac{s^2}{3} $. Alright. </p> <p>I want to increase precision of my confidence interval and thus i sample more. I head toward a 100 individual sample.</p> <p>From CLT, i can say that $Var[\bar{x}_{100}] = \frac{s^2}{100} $, which clearly is not zero. Small but not Zero.</p> <p>Where did i do something wrong?</p> <p>Thanks a lot!</p>
A.S.
274,197
<p>You are sampling from a finite population, so your samples are not in fact independent - once you have sampled a person, they are out and you are sampling from a smaller pool of people. Once your sample size gets comparable (in order of magnitude) with the population size, these effects start to matter more and more.</p>
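This effect is captured exactly by the finite population correction: for sampling without replacement, $\operatorname{Var}(\bar x_n)=\frac{\sigma^2}{n}\cdot\frac{N-n}{N-1}$, which is $0$ when $n=N$. A small exhaustive check (Python; the toy population is an arbitrary choice of mine):

```python
from itertools import combinations

# Enumerate every size-n sample from a tiny population and compare the
# exact variance of the sample mean with the FPC formula.
population = [3, 7, 8, 12, 14, 20]
N, n = len(population), 3

means = [sum(c) / n for c in combinations(population, n)]
grand = sum(means) / len(means)
var_of_mean = sum((m - grand) ** 2 for m in means) / len(means)

mu = sum(population) / N
sigma2 = sum((x - mu) ** 2 for x in population) / N   # population variance
formula = (sigma2 / n) * (N - n) / (N - 1)
print(var_of_mean, formula)
```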
2,358,838
<p>I can see the answer to this in my textbook; however, I am not quite sure how to solve this for myself . . . the book has the following:</p> <blockquote> <p>To take advantage of the inductive hypothesis, we use these steps:</p> <p>$ 7^{(k+1)+2} + 8^{2(k+1)+1} = 7^{k+3} + 8^{2k+3} $</p> <p>$$ = 7\cdot7^{k+2} + 8^{2}\cdot8^{2k+1}\\ = 7\cdot7^{k+2} + 64\cdot8^{2k+1}\\ = 7(7^{k+2}+8^{2k+1})+57\cdot8^{2k+1}\\ $$</p> </blockquote> <p>While the answer is apparent to me <em>now</em>; how exactly would I go about figuring out a similar algebraic manipulation if I were to see something like this on a test? Is there an algorithm or a way of thinking about how to break this down that I'm missing? I think I'm most lost regarding the move from the second to last and last equations.</p> <p><em>Source: Discrete Mathematics and its Applications (7th ed), Kenneth H. Rosen (p.322)</em></p>
G Cab
317,234
<p>"..for every non-negative integer $n$" means that it shall be valid starting from $n=0$.<br> In fact $F(0)= 7^2+8=57$.</p> <p>You already found that $F(n+1)=7 \,F(n)+57\cdot 8^{2n+1}$, which is divisible by $57$ whenever $F(n)$ is.<br> So by induction $$57\mid F(n)\quad \text{for all } n \ge 0.$$</p>
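Both the claim and the recurrence used in the induction step can be checked directly (Python; the bound 30 is an arbitrary choice):

```python
# F(n) = 7^(n+2) + 8^(2n+1); check divisibility by 57 and the identity
# F(n+1) = 7*F(n) + 57*8^(2n+1) for a range of n.
F = lambda n: 7 ** (n + 2) + 8 ** (2 * n + 1)
ok_div = all(F(n) % 57 == 0 for n in range(30))
ok_rec = all(F(n + 1) == 7 * F(n) + 57 * 8 ** (2 * n + 1) for n in range(30))
print(ok_div, ok_rec)
```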
198,995
<p>From Barbeau's <em>Polynomials</em>:</p> <blockquote> <ul> <li>(a) Is it possible to find a polynomial, apart from the constant $0$ itself, which is identically equal to $0$ (i.e. a polynomial $P(t)$ with some nonzero coefficient such that $P(c)=0$ for each number $c$)?</li> </ul> </blockquote> <p>And then I thought about 2 hypotheses:</p> <ol> <li><em>P(c-c)</em></li> <li>I thought about a polynomial such as $ax^2+bx+c=0$; then I could make a polynomial with $a=1$, $b=-x$, $c=0$ which would render $x^2+(-x)x=0$. I tested it on Mathematica with values from $-10$ to $10$ and it gave me $0$ for all these values.</li> </ol> <p>When I went for the answer, I found:</p> <p><img src="https://i.stack.imgur.com/qUtmS.jpg" alt="enter image description here"></p> <p>I couldn't understand it; can you help me? I'm trying to understand what he is doing in this answer. I guess it's a way to prove it, but it's still intractable to me. Could you explain it to me or recommend something to read? I'll be happy if you also tell me something about my hypotheses. Thanks.</p>
Marc van Leeuwen
18,880
<p>I guess there is no way to forbid people to present contorted arguments to prove a simple result, but they do the reader a disservice by masking the essential point. Here the essential point is that we want a polynomial that vanishes <em>in more points than its degree</em>, and this can only be achieved by the zero polynomial. (In abstract algebra one should specify: polynomial in one variable over a commutative domain, but that is the case here, so you can forget this.) This is (more or less) the proof referred to at the end of your citation, with reference to the Factor Theorem: if $P$ vanishes in distinct points $a_1,a_2,\ldots,a_n$ then $P$ is divisible by the product $(x-a_1)(x-a_2)\cdots(x-a_n)$ (using an induction argument: $P$ is divisible by $x-a_n$ and $P/(x-a_n)$ still vanishes in $a_1,a_2,\ldots a_{n-1}$), so either $P=0$ or $P$ has degree at least $n$. Add to that the fact that there are more numbers one can take for the $a_i$ (namely infinitely many) than the degree of any fixed polynomial, and you are done.</p> <p>This argument also immediately shows that one should not expect this result if one only has finitely many numbers to take for the $a_i$, as happens when working over a finite field, as the product taken over <em>all possible values</em> for $a_i$ gives a counterexample. For instance $x(x-1)=x^2-x$ is a nonzero polynomial over $\mathbf Z/2\mathbf Z$ that vanishes "everywhere", i.e., on $\{0,1\}$. Those who have a copy of Lang's <em>Linear Algebra</em> at hand should check how he swindles to establish the result in arbitrary fields, on which he even bases his <em>definition</em> of polynomials; a fine example of a textbook blunder*.</p> <p>So we are left with the Factor Theorem. But in spite of the capitals, it is no big deal; the text in the question in fact already uses it for $a=0$. 
But the general case is similar: if $a$ is a value at which a polynomial $P$ in $x$ vanishes, write the $P$ as a polynomial in $y=x-a$ by using $x=y+a$ and expanding; now since $P$ vanishes at $y=0$ the result has a zero constant term, so it is divisible by $y$ QED.</p> <hr> <p>*Upon rereading that book, I think I found that the explanation lies in the fact that in it Lang <em>defines the term field to mean sub-field of $\Bbb C$</em>, for which limited case the result is true. I still think that, even when writing for a specific audience, one should not so constrain established mathematical terms (the same goes for defining "polynomials" as polynomial functions), any in any case avoid talking about "arbitrary fields" if one does.</p>
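The finite-field caveat and the root bound can both be checked directly (Python; the degree-2 polynomial $(x-1)(x-3)$ and the test range are arbitrary choices of mine):

```python
# Over Z/2Z the nonzero polynomial x^2 - x vanishes at every point,
# while over the integers a nonzero polynomial of degree d vanishes at
# no more than d of the test points.
vanishes_everywhere = all((x * x - x) % 2 == 0 for x in (0, 1))

def poly(x):                      # (x-1)(x-3), degree 2
    return (x - 1) * (x - 3)

roots = [x for x in range(-50, 51) if poly(x) == 0]
print(vanishes_everywhere, roots)
```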
65,304
<p>I have a plane curve $C$ described by parametric equations $x(t)$ and $y(t)$ and a function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$. The line integral of $f$ along $C$ is the area of the "fence" whose path is governed by $C$ and height is governed by $f$.</p> <p><img src="https://i.stack.imgur.com/4rmZy.png" alt="enter image description here"></p> <p>How can I generate a picture of the "fence" in Mathematica?</p> <p>For the sake of a concrete example, let's borrow from Stewart (since I already borrowed his picture). For $0 \leq t \leq \pi$, define $$ \begin{align*} x(t) &amp;= \cos t\\ y(t) &amp;= \sin t\\ f(x,y) &amp;= 2 + x^2y \end{align*} $$ so that $$ \begin{align*} f(x(t),y(t)) &amp;= 2 + \cos^2 t \sin t. \end{align*} $$</p>
ubpdqn
1,997
<p>I have not done the labeling but this is a start:</p> <pre><code>axes[n_] := With[{uv = n IdentityMatrix[3]}, Graphics3D[{Arrow[{{0, 0, 0}, #}] &amp; /@ uv, MapThread[ Text[#1, 1.1 #2] &amp;, {Style[#, 20] &amp; /@ {"x", "y", "z"}, uv}]}, Boxed -&gt; False]]; p = ParametricPlot3D[{Cos[t], Sin[t], u (2 + Cos[t]^2 Sin[t])}, {t, 0, Pi/2}, {u, 0, 1}, MeshFunctions -&gt; {#4 &amp;, #5 &amp;}, Mesh -&gt; {10, {0.99}}, MeshStyle -&gt; {Directive[Blue, Thick], Directive[Red, Thickness[0.01]]}, Boxed -&gt; False, PlotStyle -&gt; {LightBlue, Opacity[0.5]}, Axes -&gt; False]; Show[axes[3], p] </code></pre> <p><img src="https://i.stack.imgur.com/5BwUv.png" alt="enter image description here"></p>
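Independently of the plot, the fence's area can be sanity-checked (Python rather than Mathematica, purely as a cross-check): on the unit circle the arc-length element is $ds=dt$, so the line integral reduces to $\int_0^\pi (2+\cos^2 t\,\sin t)\,dt = 2\pi + \tfrac{2}{3}$.

```python
import math

# Midpoint-rule evaluation of the line integral over the half circle,
# compared with the exact value 2*pi + 2/3.
def g(t):
    return 2 + math.cos(t) ** 2 * math.sin(t)

n = 100_000
h = math.pi / n
numeric = h * sum(g((i + 0.5) * h) for i in range(n))
exact = 2 * math.pi + 2 / 3
print(numeric, exact)
```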
1,889,957
<p>I'm a bit rusty on my math notations and I'd like to write that:</p> <blockquote> <p>It exists a unique element $z$ such that $z$ belongs to the collection of values returned by $f(x,y)$</p> </blockquote> <p>Honestly I'm not just rusty I'm also mostly ignorant of math except from basic functions and basic matrix operations.</p> <p>I'm in the context of computer programming and I want to write down a specification, and for my own curiosity (and fun) I was wondering how this would be written in a more scientific way.</p> <p>I'd go with something like:</p> <blockquote> <p>$\exists z\in S$ such that...</p> </blockquote> <p>And then I'm lost with how to specify that $S$ is the result of $f(x,y)$.</p> <p>Some usage of $P(z)$ maybe ?</p> <p>Also $S$ means "set" right? So it doesn't work because $z$ may be present multiple times, but IDK if there's a symbol for such "collection".</p> <p>I've googled around but it's a bit hard to find the right keywords for searching something like this.</p> <p>Thank you.</p> <p><strong>EDIT</strong>:</p> <p>I knew I'd make a mistake while posting this... 
I've mistakenly named $x$, $x$, leading to the confusion that it is the same $x$ that is in $f(x,y)$, while actually it is not.</p> <p>So I have renamed it $z$, sorry about that.</p> <p><strong>EDIT 2</strong>:</p> <p>There are multiples solutions that have been provided in the answers and for this I'm thankful, but I can't identify if one matches what I want.</p> <p>And there are also a lot of questions which I believe are due to me not giving enough details or not expressing myself correctly, and I realize now that I have made a mistake on the way so I will try to add more details and maybe it will help to make the answers converge.</p> <p>I have a function, say $f$, that given two arguments, say $x\in X$ and $y\in Y$, will return a collection of values, say $S$ whose values are taken from $Z$.</p> <p>And I want $S$ to contain only $z$ (possibly multiple times).</p> <p>Given $S1$ and $S2$ the respective results of $f(x1,y1)$ and $f(x2,y2)$, there can not be a given $z$ that would be present in both $S1$ and $S2$.</p> <p>For the record, $y1$ may be equal to $y2$.</p> <p>Also $y$ depends on $x$ so I guess we start with the second part of what @celtschk said in his comment and simplify:</p> <blockquote> <p>$$S = \bigg\{f(x, g(x)) : x ∈ X \bigg\} ⊂ Z$$</p> </blockquote> <p>But the first part should be:</p> <blockquote> <p>"$z$ exists at least once and is unique in $S$"</p> </blockquote> <p>and I don't know how to write that :)</p>
Michael Rozenberg
190,319
<p>$\sum\limits_{cyc}\frac{a}{b+c}=\sum\limits_{cyc}\frac{a^2}{ab+ac}\geq\frac{(a+b+c+d)^2}{\sum\limits_{cyc}(ab+ac)}\geq2$</p> <p>Because the last inequality is just $(a-c)^2+(b-d)^2\geq0$. Done!</p>
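A randomized sanity check (Python; my own illustration) of the four-variable cyclic inequality $\frac{a}{b+c}+\frac{b}{c+d}+\frac{c}{d+a}+\frac{d}{a+b}\geq 2$:

```python
import random

# Sample many positive quadruples and track the smallest cyclic sum seen;
# it should never drop below 2 (equality requires a = c and b = d).
random.seed(0)

def cyc_sum(a, b, c, d):
    return a / (b + c) + b / (c + d) + c / (d + a) + d / (a + b)

min_seen = min(cyc_sum(*[random.uniform(0.01, 10) for _ in range(4)])
               for _ in range(10_000))
print(min_seen)
```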
2,723,585
<p>If $\textbf{A}$ is a square matrix, how can I prove, by using the power series of matrices, that the above equality holds? Note that $x \in \mathbb{N}$ and $\textbf{A}$ is a square matrix.</p>
David C. Ullrich
248,223
<p>If $AB=BA$ it follows by induction on $n$ that the binomial theorem holds for $(A+B)^n$ ($n\in\Bbb N$). Now if you simply multiply the two power series and collect terms, this shows that$$e^{A+B}=e^Ae^B\quad(AB=BA).$$</p> <p>By induction on $n$ this shows that $$\left(e^A\right)^n=e^{nA}\quad(n\in\Bbb N).$$</p> <p>It also follows that $e^Ae^{-A}=e^0=I,$ so $$\left(e^A\right)^{-1}=e^{-A}.$$</p> <p>Hence for $n\in\Bbb N$ you have $$\left(e^A\right)^{-n}=\left(\left(e^A\right)^n\right)^{-1}=\left(e^{nA}\right)^{-1}=e^{-nA}.$$</p>
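A numerical illustration of both identities via truncated power series (pure Python with 2x2 matrices as nested lists; the particular commuting pair, with $B=0.5I+0.3A$ a polynomial in $A$, is my own choice):

```python
# Check e^(A+B) = e^A e^B for a commuting pair and (e^A)^3 = e^(3A),
# using a truncated matrix exponential series.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=40):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = mat_add(result, term)
    return result

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.5, 0.3], [0.0, 0.5]]          # B = 0.5*I + 0.3*A, so AB = BA

product_rule = close(mat_exp(mat_add(A, B)), mat_mul(mat_exp(A), mat_exp(B)))
power_rule = close(mat_mul(mat_mul(mat_exp(A), mat_exp(A)), mat_exp(A)),
                   mat_exp([[3.0 * x for x in row] for row in A]))
print(product_rule, power_rule)
```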
2,416,510
<p>I have a matrix $A \in \mathbb{R}^{n×n}$. I would like to choose two diagonal matrices $D_1,D_2 \in \mathbb{R}^{n×n}$ such that $\text{cond}(D_1AD_2)$ is minimal. How can I find such diagonal matrices? </p>
Jaroslaw Matlak
389,592
<p><img src="https://i.imgur.com/ZUa8pHw.png" alt=""></p> <p>What you need to compute is $\alpha+\beta+\gamma+\delta=?$</p>
48,726
<p>I'm trying to plot a 3d revolution plot from a set of 2d points. These data points form a 2d curve, then we rotate that curve around y axis and get a 3d surface. @<a href="https://mathematica.stackexchange.com/users/50/50">J. M.</a> has a well explained and very helpful post at <a href="https://mathematica.stackexchange.com/a/11738/1364">here</a> which deals exactly the problem I have. However, I tried to use the method, and get a 3d surface that is very rough and not smooth.</p> <p>Here is the 2d data points:</p> <pre><code>points=Uncompress["1:eJx13Hc4lX/4OHAjW7JnhKRCpEIazrESokhCpUhZFR6KhLLik6w0NBSVShItoew9I7KyRcnWkozffY7v9/yuy/11/uJ6Xeec5zzPe9zv+z2kbF1Mj1cy0NG5L6Gjo9vl6O5xHP75//8RdP/zIthqIqQ/+0vkLfBLe+8n2IXeJBGHIt9WjWPnmTQW8VO6RSI2N3DYTGBXdrHfWfU5iURI5U2wswUgJ7HRV+oYvCIRno5F8Suwbz8vrxN8Hjz4Z/MhSezzn59JIra8NP+hir0mp0p1s0geiYjWrvMwwr41nt7/mjr4R/5HVsbYj8UYqbbYFZKIvQ+zPtthp/srzGocXkIiksU7dp7DXm417qP3CjyH0Nb3Xuz6y0kEs8qSs0L4/mU9C2Bjn6oiEZLfv5M9zyN/v1yw5qxqNYlQGLiXvR/7+URyyuG1tSSi0+RrY5Iv8u8lEaw2d+tIxOkpraiNPsj7ZBltr0+ANwfa5n4+t8j9qScR+/K2MYl4I+9K7mu/k9tAIvarV7UMeiGPKhcv4jH4RCJI1eSgA9gN/a1yTeMbSYTaFi39qDPIfY4VntjS0UQiakwvWu05jXxDDOtr/YfNJKKM59jbQI9F7n8Libj/8kbIMnfkcqumSvfNtJIIlvDKq0VuyEXVC3tGZz+TiFKrVOm7rsiFD3nU6Qq1k4i/B/+mm7sgl55d4dcu1kEiHJ+espI9hTylIOl3vmUniTBdbcbz6sQiz7eLRCRc+aUZ5Yy850lCrUgcuJTni2F/J+QrZHcyGVh1k4i2H5NzpQ7IieWlbstLwQ3KxqvK7ZG/Mt3Yo6PQA+VD++eF9uPIB5y2LnvF0ksi7PPiRtrsFik/4E+3nb7EiT1wuuiHyzvwSG41fvWjyD+Eu7LUvP9CIpwEXtowHUZuwd4wdY2hj0S8+ch8pPcQcmOJ+j4lErjuv9jOzIO4/A6d2l3i3U8ijoo1XPhtvkj5BA+0Pd1oij02bdO5zDxwsYIPfCn7kFtmj21dof6NRNDrfjrzwRi5xmRX4DINcEaTd1feYr9mFyhirAVuXafJ/gD7ilNL133wAd8JtznGCHnGRpn+Yc4BEjF6c9P9cQPkp+P7FHILwW8WTIpH6S1Sf8BTPiqWrcAuY/6+7v4f8LDpwIK4HciLyalHQnQGScTY5fK/pmT8fri6/86CNw4urdlDQr7tsWFHdzo4Q4n/+u0ayCcu2o5FFg+RiNy8q0Or1JELUevnMLSfW0P876ghv/Vlg4DeGXD7qwN89SrIL9+/P2e5Y4REDKzVJg0oI6cWvzfg0jma7g1KyCelRHc9UxklEdW2Jn8c1uH2J6eHS6YUnC9mPPqZ3CL1f4xECGtO23GuQZ6ceiE/1Bt8g3Jm66wM8t4TPMv+PADfPn3vi8tKfH2U4jcJPvoo0yFxBfLWgMa1OtbjJMKiL3V/lChys2ssA
w1HwLOIs/Eu2D/SGfcPOoE7UcqRyCLtDzi9nJbINX7k9C52gh6iEySiXpOZuXYZ8pM9Ij0NeuCG9akKakuRD+2nPCFwn/dDvdmsuP3dma/N1wA+xB3jvJ4J+fFfb2+5Kv+A8m0f4N88J/5/t2/gNZcNjb5OIx/I+i2dFg5upMD5NHIKedvRFj4usZ8kgtOY77fXBPKOmqjYtxvBH7oZinGOI0/a+CxPXw983KBYb3gU+Zuxy4fVJsDD4GNk+pGPHBsY76f7RSLMd2il7upDPt++gg/KMp1VwT7Go2f3egl4Lv/sLD92/sHvDieywBWcE9TjW5F/1KXUUPC9BhZ6u1rw9fWfUyVGwGf0mPQSm5C7FPwInHL/TSLE1ZcITn5APt9+g3PJua1xrEZutTrl+vpm8EBdrp0qFcgnLg5+Clz3B+IH345wo2LkRe/jpJRdwP1W7Ji5mI+8xL7j90g+OLu2kH7xe+Q566W/jvSDV26y6IvJQj7fP0ySCH2xnV+yXiE3it0UmnIT/FNF7pKgZ8gbr4e2Zb0B1yg/GmL8FPl8//CXRLh/Lk93fYCfX7vYgeO7wH0+8Gx3u4vcy9c2UScZ/Km7cMPkdeTz/Qd43G3D/2RjkLdKMSqumAOXbjh62joKuZjdtlM18VMkQlCiWvJBIPKn1Bd4yYHqA20B+P4v3x3a/AK8baPDuCH2HUuKTzt1gtdQ2nE/5NRWi/cfifi8lUv3rTf25Lr6KDtw13aW/s0eyOf7N/Axsd2KVwj8/A0E1Syug/uX+Z80ckX+PNlmh6XQNPSPJMWRdceR35Wi3GHw3A8bL8fZIg9bw1leHwa+3/PJp9mDyC/5ZbNLVoCHhm1ZIW2BfL7/nCEReld8tThMcf2X0XULcgMfk2dymjFA3tAqsmbUh+JipT6i2FnISnqy/uDm/2WUFukjj6I2W+Aat0Y5vmgi/9LSUtCoNEsi3vkyiazYhuvHnlXasZ7gYxKdU1MqyOf7b/BLeZ6xXMrIL7Q+E9/SBb7riNG1bgXcvqrvyTLxnIP4X8/ZNVwcedAySg0GD7r6WENvOfKlcS3BGy+DN67UEnYRQ16RolGs9gach/ogkFf/snKSW05HJg6nRERwsyOfjx/Aq9svlVezINd3HT93Yju43pVCBnVGXP+jReT5CPCBp/ISlyaXo/hM2xa6KHAeneq90j+Qvy3MEl2fC36tTezal2/IrSaUOpeVgROnJBOEviL/S23+wN8frgjh7EAuLiRTYy5BTyY8f01Yj35C3nVAL6SWDC7Xk2b5qgZ5uh/lBoG79xSxRZQhb057oazgAX5m3Dp6XxHy605GFm8LwJXdqjK3vkbuTo1vwGe5yj4dfIU8hPWTHVsFuEPykZjCl8gP/2YzOtIMbkYZJqcil492NLsoyUAmdl+vN6lLQL4+69v9MEXw6yNpMar3kJdmbLFr1gbPzqPr/O8Wct4bjMYlZ8C9Zw1F70chH6bGV+BTS6PEYv5DzslrTb+qHFyVaY545oP8yH4md/YKcHYpBS5P7E9/UdoV8IupJsu43ZBb3H/hc1KckUzsqEnvd3JArnieT2O1KXitirO9yAHk9tT4DbyFXlom0xz5huO8B5VdwW1//rqauxe5Y2V2nPId8BvUhhj//uVMTNkF4H6cpZxtGshlqB0keDlfNuP5bcjvns0UX94GLt7SnxGzGX++3AbPboElZKLA56NRtBzybmr8CC42FM61byXy8yxb3MOPgf/9FnI1WhD5nXumzlcdwK+a1XF/EsDXd+Jluext8EKGFf6/2ZDHHo9X83gH3uA7k6xLj1wks0+EexR8+acZCb+fYgv9ALV4MZGJiMK1XdxjyEVv6bw/xQV+k+/CXO0Q8s+7r8Z93Q7eSmlHupAbxL58rm0DzulRlafajFz4CaUAgr9Y1+Df/gn5BYHNiiVe4IRg9Va1j8gtKkIqe16BH2K98Ox4IfJP1PiXiVL/zAP8spB7u61ocZwBn35NaamQ6
6Vb/z3IyEwmIs1VktySkTdyMZhHsoK3Zsnotichv9K7OaJ3M3iSuGmP5W3kOk7Vl96ZgteeCGCziUFuaj5dsToOXJRnF/1rH+S7qfE3eP5/mZ2l5/DvV5OXZrkP/uBOjst27BdH3+k8fwwuTg2kkCtbJRtfGQfXlBJ3tjyKnHOFz1g+CwuZ2Le8mT/tIPIVAnvotiuDs42JzCzdgzyPLN39lwzO70RUmRgiL6fG/+BOw6I8TtuRD93zKPOJAg+QfxBwbCPycIbIUqVicMv46eOXVuLfT02Pgts9jpUylEAuNFeSOtcJPj5Uv/msGHK+6mFD2TnwI9RAFrnd2j2D+9lZyYRNUFPSOy7kWtYiNwKUwA8/XzJ6mRG5NnV8Am5tI7hBYU50oU+NVkdIW4In7emsZf6D/FS2sx/TefAaF5XE+EHkdNchKk8ED/WxYDjVgTzoM7HmbRX4mjFRid465L+sZMgKP8GdA5/uOVGIPJs6/mGD57t1v7BlFvLrvW3n7IXBfayOkmXSkUfsFxd+owb+gDoQQf54cMcHITvw2qJk22O3kHvXURoA8JSLpRGuV5Fb/RR38rwBXhDycX9UCPKMH2nSZangMko/gx75IVejjr/Av0pe/Bp4Cnn5P9EzXv/AvW5ue/zVFnlhuNDVgi3sZELo8pT1a0PkP6FSbtUGr4sR331zJ/JO1RSvdXoUp/6BfEJtYlunCXg/9YOQf4RC++MM+A6jAYaHKsi/c4wub7kCLnmqetBRHvkL6mMBz2J+k/5cGrn8ap6cpR3gJ6OkjzcvQz5+Vkv+EsUdNebKkrGfolYrDjLx46EJj+aMyELfY7OeNVYOnCO00sPgJ3IW402pwSfB11IfFPKV68weHDoDLpVe3RHfjFye2iyC09/pIpOakIsbPXlR7Q+uxpryIKQe+VdHwup6DvjqfRV2TpnIv1G7TXDrwQdslU+R79fPPnGNmZNMfL7+Q3fqJvJ8ztyIms3ge/rf/vkViDyRGjaB893+1UP2xe9/y+ISbQwear9cKMQbuYRq+LD/QfAT1IqE/N2ES17ydfDiJUfVNhxAbksNq8EzVq86ZLwH+cC+vckvOsDVKmTGy7Yiv7ZNq6trFFzdzLzJdBPyFdRh21Jon2ydbnitxO/feEqwXhZ8FbuE/LnlyEdHeSpuG4CnUhsq5FktRzVHT4GH807IrJkRXuht1LQG+P1Vaf+MfyMPXP+wLPUROMusgOSPL8jvi53g5HoLnpHenNPUjvwmdXwOvnrvseFj5cjDDnk0KcyB91rlWoTkIHcPMtp7dSMXmZigXGci8ksa9f+mdMBl704b9t9Bbk6d9gD3dW2tMo1FHvTK6GyOI3iRcET83v+QP5i4ctk7FDzw9a3k317I+aj5AfAmgfpdmjbIsx+Zik9VgndWbOZg2Ye842vfFfE/4CrUjhJ5kxXv87PLlpGJbd2NTkLKyGvdKANIcOV3lVVKcshnlG9rTOwAZ91syxnJh/zLy+YE1r3gl2+fW6e2FHk4NT8B7u3Qu+baD6GFvmRq+9Gy6+CjOrnZIQPIPXo5lu/LAveiBirIj6qz99m2gkf3O3K9KkZukHIYIhBwF/5DEY3vkddLrh0c4uImEyImUTVZCcgZRoTkvXjAc+VHV9HdQx5IzX+AiwcZ+fNeRG6Zy/voy3pwhzFNvq1+yFs1Vb2aNMDfUQM95Nyj0XlbTMEFX0oIeVkit6VU/0Pgk3PDF2IM8fVbxyq9cgPXmGAr+rseOZfZuzfqkeC2M+WaQ+zIydT8C/jWLTc1elnx9RcTLbLR4Btk4vrdsB+IveaSSvFZCUqgjDzYttUjJQ28KtdgaFmXIGp/oXe+WAQu6lct+LAEeclgFZtcLXj0Jq6+m2+Rd09Oae7uAdekDhSQh+pfGkseBbcwtfdXjEQ+Q80Pga9Z+rxQKQi5wWO9pnXLeMiEhGueeOFx5OYOahzeK8AjiR+lrSbI21L5YQQIHuBoe7BZEzknE1NOq
iL4l3CRrP0ayD94JfGObQAPog60kNfYFjxUNgZPTibMhwSQZ1HzU+D8jxSXNU0JLHSndLqmMWfwNW+q//s3gJw9xrZP6hx4KXWgiVzIsDMgIhS8mOyYVvoOeSwlPRID/iBmdPfOp8hLiuPH194G/wbF4H0c8uLObEWFRHDbqiVNMeHIWd9QfiB49GUJ2dALyP0nO2aU34GzUwfiyAmPO63l+eBz++NnDh9G7knNv4H7BxtNl5oj3y4vzr7lM7gCpR9RRf7qgNR1+w7wktBQbQVl5Cl1Lhwsf8Hvbn84qcSM3HyQkiADP+yZ/ESPEXlVnOUGpn/g6i5NBpMMyCcPHRDUorgRNRGC3MAinP+BEC+0vzvEO4oa+Bf6Jmr+D9yGWfi/ilTkZ5pZu3U2gfMlTejefIS88ar06j1bwaOoiR7kIXRWuxN2g3sfcB/vOY2clxLV24LvCovb8dAYeaHmMYteB/BNZIePf8jI675YbzF0Aa+nJrKQ99s83i8ZAJ7dU3GPgRP5ODU/Ce63vzxatocP9e9qavusboNbjZY5sjQhv7Jh7dGweHBBaqIO+UY9xq3Rr8E73EhLmO8jr5W9DTcOnHXj1M2Tp5Hzs0tOFFWDd1nu15w9gVz58fLYV3XgltREJfKmm6SmmX7w5M8erXSbkT+n5k/B975WGwtjQ97Ts6/Sjo6PTDDVno8+zIjvT6/4PR5G8DvzH7TQV30O9y0SAH8u58D/pgr5x2r3+3xC4GtYb3EZlyM/3mRzpG8NeOf8g1roOvKz6ooK4Marwjn5LyO/TJ2ABaefiSJW+yJ3fKqULqoJ/jm4pt7XGDkL/zlF9l3g0vMVZaGTCs7l85qDVwjpZs5yI3eefyxkwvzIlvclfTwLXcXzyGo+Z/DJ3qG9UfXIvRoT5sxcwI/NN1QLXU5BN5zvArhs2tURqevI100X7pgLBu8yaN1TG4ScWYosU3oF/Ml8R7LQo+lzBYaugqtlH685aoHcgBJexIIzPvuu5LwT+cs1HO4/EsHfn2zfz8iHvIu/cX9bMvjgfEe90PcRZh7hb8B1jxqvLatAbu2RzL4pC7zg/cCNP6+RnyvPUSzNB1ecD4QWep9+g0p3EeX3eyicowtAvma+WsP9T5MKS7JGLqwSd1i2Adw05U6cmDryzUrickQTuNt8oInGd6mtn/w6wW88caydYEBuKHn8P+de8D+VIhdOdi1b6L+vnlslPgzuOB+oo/hJujmwfRy88dZdlv2GyMMU9EY7J8HbpTqaA+q4UPluzfBRmgIfq6cOdBZ6wibnuiGKWz16/PlNPHJmOsqLH8an5SZx56SQb+rXD+ZiAFdS7txpPbJ0oe/J3HXrCMXPzA8kFzpj7F+6OooHbK5T6mpFLq8gJcPKBh659K/N8a3If/6ouuHKDj59kDoQX+gTxzKVrDnBdxdH8Fb3cS50aWKygJ4bfGyA99ypUOSxikbXLvCCB84nMha6W5ZGMr8AeLOaagH/T46FntoXEWxH8SSftSvFPiBXfpJWUs8Hzj6f6FnoCvmTAX084O7d7PQ/lJFf+KE/vo5y/ae9PrrYDrIv9BjpgLP+XOBR84myhX7RtWr/qqUU/+ZlKHAGuY310Lgb5f4GfBPmvsqEXN+AITOO8nwE5xORC91B2SI5gBU8Ion5dn4m8ktSKXTdS8BTMpu/uUki76Jr4wiheNx8InfR8jl4Ytr/bhXrQl8peJcpj1I/Uu85nFx2AHldtxjd2t/gMvOJblQ+Ha7NpI2Bi8SqetfTIX/Vvom8cpDS/utWSD4sYEHx7bEdFqmU+v90fiJhoWslVnpfbAfXHJJv2CyHvDElK3WiHvy7e2jXvxzmhb5+wDVNpQ58/fxEDOqfT9i+GfgAfulKeEh1NHJa+9jMwWIj6oq8TU0v2TEP/JvcW6NROuRaLTFbtN+Bv52f6FromeWUAAl81lBxO0cMcovPl+TePgf3tTk20uqFXMCbzbUyAdzZ8cL5C6uRb/IyY
P9wG1xjfiJxoe98stYh9Ba4zsSja2+nliza/11oJtddG0H+5WlQgtJFcItnf4y2+SJPO37bfdQLvGh+IhbFb2wnBf1dwfl+8QoLMyGnxRcrEzXYZMoYF3pZXqy4kBV4U7vkze5I5Cos2YFtJuAG8xPd6P57FsmkaIPfKlEUTRZFTouvHmxu+1pWzIDKX9FL1q/y4EnmG9b0PUU+EGj8clQWvG5+oQDq33pvTi/hpcSPd01/R2gjp8WvTn6HT0VN0y/0wUp6Dt3f4EcLXazSepG3fWVqFR8Bt5hfiLHQP/sn20e3giukBQfHByKnxe9uNZaiAzuR78pd8e1nDviQn5vUppXIt2Q90cx4SRkfzC9kQe2nm+6I4wNwiXM3HvkXIKeNX9zH7dk2X0PuKRpPPAihjK9uD2zItkVef5HLTNsD3G5+IdBCNzNX8aynjL8OuprkSCxDThu/yZwseX9sei53gfso7zJlMgAXjhDkqY1C/k5n5ZE/euARv5+52oQgp41P97KO7nrGi1x6I4++2GrwY5xv7YLHZhe6q8Vgr4woeCM1fkGe/txpSScz+McMu+KnZshp4/PElFW8vJLIZVe2abcPgx+cX4i20IdM1KeG+sAV1zia+uUgb0vujWSvBmdrtihI34Gclp9Y43dByHcN8t836yOzisFHXLm7zaSR37oSPaFRCN5LjW+Qmx/zOhlLyb/8uXtUKSl4eqHT8jsN21sf7tuDfI9RyMqMAPAN345ljcz+W+hJTlHFHUfBnXV1NwncRk7LX+kyJJh+jECuH6Mt+uEQuJfmDMk7FHnBowaGzxbgY33UhZTo+89LVJpvpuTX7O/9PC+InJa/q/bnONk3PLXQn5cHrffkBz8zv5B0oe//UuL/mhdcqvKT96dE5GknluWXcYNfqypskIpHTstPSgeGXfxqiZzpgUptwhB4tnWL1vPNyE1yVlsrVIHPnKIupF3ow2az+pdKwC9aLE1LqUROy89GiM1J5ZYgz/O0WqGTB851535YZibyT8ylhUQ8uKmppniKNXJa/vnFmo6dyhrIX/x99qXiLHiITpiJ0u/Jha7J6vx5lpL/ZrmRsn5XH3Ja/rxaRV19Lgf5W7Imk7AJuPU121C2u8gHuV+r8cuD79jr0/TKEDltfkD7rtHHLTuR54R8WnVsDbhKv3pury5yw9EjI1dWgrPPL9Re6P6tAzay/5ZB/6KdkZA++2eh0+Y/1h6OfE76iXyIw9de5RP4FcLc52Ef8oba5zo298HHJsRNuYqQ0+Z3JoUncvtTkV8we2VrbAneyJG+rjgGuYb8AabP+uC7efsvOYUip81PVWWQ4znskVdciXdaNcEF8eeMvnKQPnLNru16w5/Bq14cD5rZipw2/8bg+6S4VBw543S58dWb4Pu7xds4uJDb6n5rLTsKvjP7+o01f3+j9vF/5w+DB9OkQ8eQO1829NyxFbx4TfTpiwPIJ4a385HGl5KJs43u23OqkNPmR12yU0uPlSF/tzz2y1wbeIRr87/eIuTqKc0Cc5T5W6H5jQwLfTWHScw7ig+TTvlsxk6b/2Xpuzeigj3J7j1LCsV3rflZvwH7lb9sWp6S4IbLlsxcPoecNr99WzJRxMMb+Uxjl0nhUvBoB878ndjNJUZ6MzI5oX4yDaeVeyGnzc+T++s9pM8iv5TQc0DyEXgm262Us9iLHzs5HjYBv+Hg0Uj4IKetP8js07YtxN75YpvNDRL4bH5oCK8vcrq7gjs3fucgE2+ENW4cx/eHtn6irqnebDP2uF3JZzKawIcelCfsxT6n5Tt6Mhi8dKm4Sq4rctr6j9zkMtM9bsir3Y++Uj0FnvB2vSWbO/LnZYaDw6LgQhcO+jPg30db37Jra8zpKj/8/qIbazXYwQWI0N+9F5DfOy4gZ/OMnUwciU7N97qJnLY+Z2xH8GwbdpvlTWYySeCNepwO6reQexnZnLGJZF+0fvAlFsecoKw/uta9a43QC+S09UsKTU0qP
C+RX5NNW+qhAX5FNOeU6yvktxhOb3jXx0YmHtZE2XPh+k1bf3VBZcfo6Qrk7wVmc+/UgjN8nevirkZel/yenu4iuMd3FceOfuS09WO90vlvO4eQL1GaNaU/DD7CfrY2/xfyR+9Cy3bIgWfGD+1MF0DtJ2193OthQ4Pzq5FHPdvDJNrHSibK455o3Sch9/NQueheC77s36olsruQ09b36c7uC1l6GHnvN4d4Dg/wa9reytpeyOmF8zR+ngIXOGsX6+ONnLY+8UwwT0ZzJHIfl5GHlYLgIw2CrHQJyDdp/Q1qnGIhEyaWf7NUU5DT1mfamJ+OYM1GLvjbPSksE3zn8Vcd78uRO57YUJlxEfyyekT7WCdy2vrSsEPc9FzjyAMGDo6xHQYffdqeQ8zg/jM9ia+Psv517Lbcn4DtKD6hrY/ldLY2uoNddYPMdBLFX+r4v0zF/mjPegd5ii8S/+zx0PdKS2MmE0uL7thfPI+ctj54ewYxORCGnJHdQtAkFHzrp++rBXF855R+6me6JbijdsbyQ/nIaeubO+jPH1FtRf7zELNINmV9dDBXskXSLHKu8aLulwzgww9vXGdjxPHx/67fPvClv65iFfIguUs6z5LBs+g39ifrIPfjUPV9cAn8aPcxxr2WyGnrz1vYPmvtc0Fuwd9Tv8YMfHaFyDerIOSzwgk3WDnB3TZt/ESfjpy2vp7HbN+5kffIPfoGRPdOLyETtp9zyvIKkN93Sv51qWvJouOTM6znM/Qegx/ouMm4hxONb2j7B3gicoazBJHbm8aL+YeCNzel0h2VRd5uruSqbwI+HNH9iM8UOW3/w3LtoN+sJ5BLiYa3BDGBT2me0blxFbnZy4KaA9OMMD5zf9Mncgc5bf8Gc9TLhL25yDmvN7V8fwQedTaykfcL8gEPkyL7APDvhzsajBnR+JW2P6V25135AVHkB5YwcEmZgOcm5To/kUPuGsmo1z/LAOXHL5Ip1h05bX9NoqwvedYDufofJyPfafB1QwPrGk8jbzTW0XaZZFh0fC70K0rO/TFl/5EVq71TLnLa/qHXzfSWElXI07OtvbV9wauM77gfb0M+zZQk074HXEpglO8DA8pf0PY/hWzRS26WQh6c0bDn7jQ9jO/MZf6L2o2cdTnH9lWT4FHWdg+l9yKn7e/6w9Elb+6FnF+Pzzj4Mbg9S6LWQCzys9x3OqrcwRsFblmWliCn7U97sN5lruET8jM9lWciTMF7Bpc+rehBbvzlwCCrCP2i+aEzK5/fk52lg+8/Ls77TRE5bf/d1CrzYpFNyF+b1VrTj4L/Cte6unkb8lauVzZWSeCHwt52uJ9FTts/6Pp8OoslEjnH+EsXFyfwR4nKlwvTkKt63eg8ZAae9fjDLY1C5FcOa994u45u0fwdbX9kv+qDtF8DyHdWnq0MnJsjEc4Ng55hXCg/uF3u9urMavCjd8lqplLIafs/r253M2vZitwn5cnB86bgpctqrUmmyEvCtAY31cySiEN2zoNxAchp+1f/ef17syMc+4GWuDsPwRu73yaqXkVOTf8ZgTObnd7b9QJ5WDqvb7Mu5f3/d36VYfvB5wZ/ZkhE1nKdsN1NyGn7d5251BmOtSPXa0p0j60GdzCXEBTtR+7yd2rDGwPwi5yslt1iKD9M25/8qV328LKNyLdcHOsaLJ8mERH+DJ7pu5Hbb9unFHsX3IlNW9DOBjltf7WHS8VQRBDy0tqhtaUT/0jE8qTJD36Pkf83fdCdtQR8VvxBeF4mctr+8F9klx1bW5BnphikmeqBT7RsSTAdQf5m6anIq3lTJMJSy7fsNB/K39P2xz/NuPWGWxg50/0puUN3wHPvRK3jFUP+0uQ4g7EF+CLzB31PhI3Mxv+SCOHO8RdfNZDT9v///U8mLl0Tufev7/uG3oOLrci+R9qJXMgkUKRNG7yw4F6X0RHktPML+h+zvLntgFzqm25GeO4k1I8xic+5Z5BfdbU29o0BzxLP8az2Q047f
0Hul5J682Xk6veC7kysApeab8gW+oYu7q8fu/+QiI9lrwj6O8gTkyMZmUvBZ8aj1znGI6edL/HAkVWs6jly1+BzV6aVwC8l2Vxgy0T++M3mA4MffpOIgGeJ7ollyGnnY6hkh4YeqUVutVvlNZsf+PBz6yW/PiGX2xB+fKT5F9TPn0uZi8aQ0873OClxK5N1AvlaYxEDsSpwlYHg6S/Yw6ws7Xqegi8yf+Zu8VJKVBT8r6KtasEyNP9GO79EwC6Z9Sof8k1bpMYnun+SCIYXqiLdwsglVoTcCXAAl1D25mNbh5x2Pstk0m7dKhXk/RaD9x9//0EimszXMhlrIh+TftRcnQ7u6Nr2YoMBctr5MlbM5OTL5sjXKF1O4GUGF8tgat/ngPyWfFP2oYkJEpG0cyok0wk57XyckODIPqPTyKeYquq8jcGPV4S63DmPvJfYKjTKBP612LRKPxQ57Xyf693bymcjkT/bLXd9yXXwK9Q/kJMj5R1WioJzn1ohopaInHZ+UfQhrbuHk5Ar9BWK5beDX5Zk8bv3DHlcsljmfkXwLYJDx2QKkNPOXwptfS+Vjz3F+1CC7yrwi+O9x7YUIt+Z4JbTyAS+yPyzleSIkUfrCIm4UJBn9KkFOe18KbMwXlXNNuQ/IoX4hYPBj+ZkvOTpxPfvt+ngQedhEjHkkD67ehw57XysIykGs8Rv5IEhWn7Xng6RCGmWI7c9ZnD54F3K/mh6kEQUvDQfe8qK5u9p53ud2KUgq8mNPCNfi7jz4TuJCH++PJd3OXKdnEvL1kaAdxdafDKQRE47n4x1ON2rcjVyL3tbFeOt4PzzA4GFHvT06TYPbnDvX/I1xuuRKwkMfShmBbcoX/L8Hnba+W0/M5Qnf23G7zfND1575SuJKA+aamAhI58qWjv1Qxf875v7qnJ4/QPt/DnfKDu+KX3kn/821XTwgYvwl3/O24281MNeRjqwj0TcvE/Ofb0POe38vPKNJ9O6rJD3/4rr0b32hUTEGo2Wmh5GLnaYaOnJ6yURv8c9GX7bI6ed/xc2ZbryuTNyT5bKXTGC4K08+p1Np5Dv1pDdUDzbTSJ6S1rdZb2R084v7GV+9NrWB/nYrpyHJTfAuceCh3f7IQ8Qb730eFsXiai9vb3P/DJy2vmLg5LpVhLhyJ2uvJj+KwyuV1PwrxQ7B7X+dpCIwO8RDt54/cs+K0J1JhJ8kfUx4SK6XD/WtpOIkYsaKx/dQk47f/Jt6phqC/bvwvGZwwzgbQ5zLdy3kd9vCjljy90K/bvblcOSDxapXy0k4lVVi1Ia9miNhG1B98H/9v5r3PUQuWL5C3/9X40komJtfNDm5EXKN7jrWLEy/zN8fdemtrzXBn9nKn1aMgV5XmcTvWJTPYn49n2IOerlIuUP/P6r7TW+r5BfiFZ0aRv4SCIMhD8yZb1G7rez3Vokoxb6lwtu5KTMRcoH+Lki5w3fs/D6p/wTzf6SH6B9+LYrSOs9cpO5SztOKFfC+KW8pv553iL3v5xEGIaHjygWIK874vPR4GQZiYjLLGWoLET+8OSBD4kTRdA+8CnYvy5d5P4UQvvYxXBiuAzX/8GWe5r5+SQiv6q55GQFcv3L6i22e7IhPj0rtF2nepHrz4TxK/PE4zPYtTxzbvQcSScRe77yuyRhn7MnJ2o4pJGIZmmxg/GLfX4SiTg1Jcp3G3vJhU6mHa33SYTODQlmlhrk2rkfi9mvhsH3z1wbkMbvp/ufF1H+sPq1Mvj/A+ndrWo="]; </code></pre> <p>and this is how it looks like</p> <pre><code>Graphics[Line[points], Frame -&gt; True] </code></pre> <p><img src="https://i.stack.imgur.com/nscUw.png" alt="enter image description here"></p> <p>and then I use J. 
M.'s method (code copied and modified from <a href="https://mathematica.stackexchange.com/a/11738/1364">here</a>) (it will take about 10 seconds to run)</p> <pre><code>parametrizeCurve[pts_List, a : (_?NumericQ) : 1/2] := FoldList[Plus, 0, Normalize[(Norm /@ Differences[pts])^a, Total]] /; MatrixQ[pts, NumericQ] tvals = parametrizeCurve[points]; m = 3; knots = Join[ConstantArray[0, m + 1], MovingAverage[ArrayPad[tvals, -1], m], ConstantArray[1, m + 1]]; bas = Table[BSplineBasis[{m, knots}, j - 1, tvals[[i]]], {i, Length[points]}, {j, Length[points]}]; ctrlpts = LinearSolve[bas, points]; circPoints = {{1, 0}, {1, 1}, {-1, 1}, {-1, 0}, {-1, -1}, {1, -1}, {1,0}}; circKnots = {0, 0, 0, 1/4, 1/2, 1/2, 3/4, 1, 1, 1}; circWts = {1, 1/2, 1/2, 1, 1/2, 1/2, 1}; wgpts = Map[Function[pt, Append[#1 pt, #2]], circPoints] &amp; @@@ ctrlpts; wgwts = ConstantArray[circWts, Length[ctrlpts]]; </code></pre> <p>and then generate the 3D surface</p> <pre><code>Graphics3D[{Directive[EdgeForm[]], BSplineSurface[wgpts, SplineClosed -&gt; {False, True}, SplineDegree -&gt; {3, 2}, SplineKnots -&gt; {knots, circKnots}, SplineWeights -&gt; wgwts]}, Boxed -&gt; False] </code></pre> <p><img src="https://i.stack.imgur.com/Ds37G.png" alt="enter image description here"></p> <p>We can see that the surface is not smooth. It looks like the surface is composed of flat rings.
So how can we make the surface smooth?</p> <p>Edit:</p> <p>I think this unsmoothness may come from my data rather than the rotation process, so I tried to smooth my data using something like</p> <pre><code>pointsSmooth=ExponentialMovingAverage[points, 1/20]; </code></pre> <p>then I get a smoother surface, but <code>ExponentialMovingAverage</code> seems to have removed the end points and there is a hole in the surface, which I don't want.</p> <p><img src="https://i.stack.imgur.com/oyGlp.png" alt="enter image description here"></p> <p>Also, smoothing using a smoothing constant like 1/20 largely modified the original data:</p> <pre><code>Graphics[{Red, Line[ExponentialMovingAverage[points, 1/20]], Blue, Line[points]}, Frame -&gt; True, AspectRatio -&gt; 1] </code></pre> <p><img src="https://i.stack.imgur.com/kyf6Z.png" alt="enter image description here"></p> <p>So is it possible to smooth the data while keeping the general shape, so that it will give a better smooth surface? Or are there other ways to construct a smooth surface from the data?</p>
Community
-1
<p>Sounds like what you actually need after your edit is a way to smooth a list of data while keeping the endpoints fixed. Here's a dumb approach that will work with any "symmetrical" smoothing filter, including <code>GaussianFilter</code>, <code>MeanFilter</code>, even <code>MedianFilter</code>. It won't work with <code>ExponentialMovingAverage</code>, though, because that's not symmetrical, although it should work if you average the results from <code>ExponentialMovingAverage</code> and <code>Reverse@ExponentialMovingAverage@Reverse</code>.</p> <pre><code>smooth[list_, filter_] := Take[filter[Join[ (2 First@list - #) &amp; /@ Reverse@Rest@list, list, (2 Last@list - #) &amp; /@ Reverse@Most@list]], {Length@list, 2 Length@list - 1}] </code></pre> <p>All it does is it extends the data in a "flipped" form about both endpoints -- for example, $[1,2,10]$ will become $[\color{grey}{-8,0},1,2,10,\color{grey}{18,19}]$ -- then smooths <em>that</em>, and drops the extra entries.</p> <p>For example:</p> <pre><code>smooth[{a, b, c, d, e}, GaussianFilter[#, 1] &amp;] // Simplify </code></pre> <blockquote> <pre><code>{a, (b BesselI[0, 1/4] + (a + c) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]), (c BesselI[0, 1/4] + (b + d) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]), (d BesselI[0, 1/4] + (c + e) BesselI[1, 1/4])/(BesselI[0, 1/4] + 2 BesselI[1, 1/4]), e} </code></pre> </blockquote> <p>:)</p> <hr> <p>Turns out <code>GaussianFilter</code> smooths across all dimensions of the array by default, so the $x$-coordinate gets averaged with the $y$-coordinate and vice versa.
Oops.</p> <pre><code>{xs, ys} = smooth[#, GaussianFilter[#, 5] &amp;] &amp; /@ Transpose[points]; {fx, fy} = Interpolation /@ {xs, ys}; RevolutionPlot3D[{fx[i], fy[i]}, {i, 1, Length[points]}] </code></pre> <p><img src="https://i.stack.imgur.com/i596Y.png" alt="enter image description here"></p> <p>You can set <code>smoothedPoints = Transpose@{xs, ys}</code> if you want to use the whole <code>parametrizeCurve</code> stuff instead.</p>
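For readers without Mathematica, the reflect-then-filter idea above translates directly; here is a rough Python sketch (the function names and the simple moving-average filter are illustrative choices, not the original code):

```python
def smooth(values, filt):
    """Mirror the data about both endpoints, filter, then crop back.

    Mirroring about an endpoint p maps each sample x to 2*p - x, so any
    symmetric filter leaves p itself fixed.
    """
    n = len(values)
    left = [2 * values[0] - x for x in reversed(values[1:])]
    right = [2 * values[-1] - x for x in reversed(values[:-1])]
    extended = filt(left + values + right)
    # The original samples occupy indices n-1 .. 2n-2 of the extended list.
    return extended[n - 1:2 * n - 1]

def moving_average(values, radius=1):
    """A simple symmetric filter: average over a window of 2*radius + 1."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

data = [1.0, 2.0, 10.0, 3.0, 7.0]
smoothed = smooth(data, moving_average)
```

Because mirroring about an endpoint p sends x to 2p - x, the symmetric window around p averages pairs that cancel, so the endpoints survive unchanged while the interior is smoothed.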
4,092,994
<p>The question is</p> <blockquote> <p>Find the solutions to the equation <span class="math-container">$$2\tan(2x)=3\cot(x) , \space 0&lt;x&lt;180$$</span></p> </blockquote> <p>I started by applying the tan double angle formula and the reciprocal identity for cot:</p> <p><span class="math-container">$$2\cdot \frac{2\tan(x)}{1-\tan^2(x)}=\frac{3}{\tan(x)}$$</span> <span class="math-container">$$\implies 7\tan^2(x)=3 \therefore x=\tan^{-1}\left(\pm\sqrt\frac{3}{7} \right)$$</span> <span class="math-container">$$x=-33.2,33.2$$</span></p> <p>Then by using the quadrants <a href="https://i.stack.imgur.com/QFDTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFDTs.png" alt="quadrant" /></a></p> <p>I was led to the final solution that <span class="math-container">$x=33.2,146.8$</span>; however, the answer in the book has an additional solution of <span class="math-container">$x=90$</span>. I understand the reasoning that <span class="math-container">$\tan(180)=0$</span> and <span class="math-container">$\cot(x)$</span> tends to zero as <span class="math-container">$x$</span> tends to 90, but how was this solution found?</p> <p>Is there a process for consistently finding these &quot;hidden answers&quot;?</p>
Quanto
686,284
<p>Factorize the equation as follows</p> <p><span class="math-container">\begin{align} 2\tan(2x)-3\cot(x) =&amp; \frac{2\sin2x}{\cos 2x} - \frac{3\cos x}{\sin x}\\ =&amp; \frac{2\sin2x\sin x-3 \cos x\cos2x }{ \sin x\cos 2x}\\ =&amp; \frac{\cos x(10\sin^2x-3 )}{ \sin x\cos 2x}\\ \end{align}</span> where the factor <span class="math-container">$\cos x =0$</span> captures the solution <span class="math-container">$x=\frac\pi2$</span> and <span class="math-container">$10\sin^2x -3=0$</span> yields <span class="math-container">$x= \sin^{-1}\sqrt{\frac3{10}}, \&gt;\pi - \sin^{-1}\sqrt{\frac3{10}}$</span>.</p>
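A quick numeric sanity check of these three roots, sketched in Python (the degree conversion and the 1e-9 tolerance are incidental choices):

```python
import math

def f(x_deg):
    """Evaluate 2*tan(2x) - 3*cot(x) at an angle x given in degrees."""
    x = math.radians(x_deg)
    return 2.0 * math.tan(2.0 * x) - 3.0 * math.cos(x) / math.sin(x)

# The factor 10*sin(x)^2 - 3 vanishes at asin(sqrt(3/10)) and its
# supplement, and the factor cos(x) vanishes at 90 degrees.
root = math.degrees(math.asin(math.sqrt(3.0 / 10.0)))
candidates = [root, 90.0, 180.0 - root]
residuals = [abs(f(x)) for x in candidates]
```

The first root is about 33.2 degrees, matching the value the question obtained from the tangent form.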
1,478,142
<p>Evaluate these limits by relating them to a derivative. </p> <p>$\lim\limits_{x \to 0} \frac{\sqrt{\cos{x}}-1}{x}$</p>
MPW
113,214
<p><strong>Hint:</strong> Put $f(t) = \sqrt{\cos t}$, note that $f(0)=1$ and $f(0+x) = f(x) = \sqrt{\cos x}$, and $$\frac{f(0+x)-f(0)}{x}=\frac{\sqrt{\cos x} - 1}{x}$$</p> <p>Can you take it from here?</p> <p><strong>Note:</strong> You may recognize the form on the left side of the last line better if you write $h$ (or $\Delta x$) instead of $x$.</p>
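Carrying the hint to its end, the limit equals $f'(0)$ with $f(t)=\sqrt{\cos t}$; since $f'(t)=-\sin t/(2\sqrt{\cos t})$, this is $0$. A difference-quotient check in Python (purely illustrative, with arbitrarily chosen step sizes) agrees:

```python
import math

def difference_quotient(h):
    """(f(0 + h) - f(0)) / h for f(t) = sqrt(cos t), using f(0) = 1."""
    return (math.sqrt(math.cos(h)) - 1.0) / h

# f'(t) = -sin(t) / (2 sqrt(cos t)), hence f'(0) = 0: the quotients
# should shrink toward zero as h does.
values = [difference_quotient(10.0 ** -k) for k in range(1, 6)]
```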
1,784,912
<p>In this question, I know that $\text{C},\text{R},\text{T},\text{A}\in\mathbb{R}^+$.</p> <p>I have this circuit (the bottom of the resistor is connected to earth ($0$)): <a href="https://i.stack.imgur.com/hfKGJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfKGJ.jpg" alt="enter image description here"></a></p> <p>When I use the Laplace transform, I can find that:</p> <ul> <li>$$\text{V}_{\text{out}}(s)=\frac{\text{R}}{\text{R}+\frac{1}{\text{C}s}}\cdot\text{V}_{\text{in}}(s)$$</li> </ul> <p>My input function $\text{V}_{\text{in}}(t)$ is:</p> <p><a href="https://i.stack.imgur.com/KDFT5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDFT5.jpg" alt="enter image description here"></a></p> <p>When I use the Laplace transform, I can find that:</p> <ul> <li>$$\text{V}_{\text{in}}(s)=\frac{\text{A}\tanh\left(\frac{\text{T}s}{4}\right)}{s}$$</li> </ul> <p>Now, when I substitute that in, I get:</p> <ul> <li>$$\text{V}_{\text{out}}(s)=\frac{\text{R}}{\text{R}+\frac{1}{\text{C}s}}\cdot\frac{\text{A}\tanh\left(\frac{\text{T}s}{4}\right)}{s}$$</li> </ul> <p>So, when I solved the inverse Laplace transform,
I got ($\text{H}(x)$ is the Heaviside step function):</p> <ul> <li>$$\text{V}_{\text{out}}(t)=\text{A}\exp\left[-\frac{t}{\text{CR}}\right]-2\text{A}\sum_{n=0}^{\infty}\text{H}\left(t-\frac{\text{T}n}{2}\right)\exp\left[-\frac{\left(t-\frac{\text{T}n}{2}\right)}{\text{CR}}\right]$$</li> </ul> <p>Now, when I choose the values $\text{T}=\frac{1}{50},\text{R}=1980,\text{A}=6,\text{C}=\frac{47\times10^{-6}}{10}$,</p> <p>I got a graph that looks like:</p> <p><a href="https://i.stack.imgur.com/MWobO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWobO.jpg" alt="enter image description here"></a></p> <blockquote> <p>Q: When I built it, I looked at the scope and it told me that the graph I should get looks something like the picture down here. Where is my mistake?</p> </blockquote> <p><a href="https://i.stack.imgur.com/keQ13.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/keQ13.jpg" alt="enter image description here"></a></p> <p>I noticed that when I took out the floor part of my $\text{V}_{\text{out}}(t)$ function (I get that with Mathematica) and set $\text{A}=-6$, I got a graph that looks more like the thing I expected, but here I don't understand why it oscillates around $18$; it should be around $0$: <a href="https://i.stack.imgur.com/kyNbi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kyNbi.jpg" alt="enter image description here"></a></p>
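For reference, the derived expression can be evaluated directly; a Python sketch with the parameter values above (whether the expression itself is correct is exactly what is in question here — the Heaviside factors just make the sum finite at any fixed $t$):

```python
import math

# Parameter values chosen in the question
A, T, R = 6.0, 1 / 50, 1980.0
C = 47e-6 / 10
RC = R * C

def v_out(t):
    """Evaluate the question's inverse-transform expression at time t.

    The Heaviside factors truncate the series: only the terms with
    T*n/2 <= t contribute.
    """
    total = A * math.exp(-t / RC)
    n = 0
    while T * n / 2 <= t:
        total -= 2 * A * math.exp(-(t - T * n / 2) / RC)
        n += 1
    return total

samples = [v_out(t) for t in (0.005, 0.105, 0.505)]
```

Evaluated this way, the expression stays negative and settles well below zero, matching the plotted graph rather than the scope trace.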
Rorschach 0007
334,308
<p>Your circuit is a passive differentiator, I think. There is one zero and one pole, and the differentiating action on the square wave (hence the spikes to be expected, as your picture shows) occurs at frequencies below the pole frequency. You should be getting a Dirac delta function in the output for this, I think, after taking the inverse transform.</p>
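The high-pass (differentiator) character this answer describes can be seen from the magnitude of $H(j\omega)=\frac{R}{R+1/(j\omega C)}=\frac{j\omega RC}{1+j\omega RC}$; here is a short Python sketch using the question's component values (the sampled frequencies are arbitrary):

```python
import math

R = 1980.0
C = 47e-6 / 10

def gain(omega):
    """|H(j*omega)| for H(s) = R / (R + 1/(C*s)) = s*R*C / (1 + s*R*C)."""
    x = omega * R * C
    return x / math.sqrt(1.0 + x * x)

corner = 1.0 / (R * C)  # pole frequency, rad/s
# Sample the gain from two decades below to two decades above the pole.
gains = [gain(f * corner) for f in (0.01, 0.1, 1.0, 10.0, 100.0)]
```

The gain rises roughly proportionally to frequency below the pole (the differentiating region) and flattens toward 1 above it.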
71,117
<p>I have this assertion: if $p$ is a prime such that $p\equiv 11 \pmod{56}$, then $p$ splits in $\mathbb{Z}[\sqrt{14}]$ (the discriminant of $\mathbb{Z}[\sqrt{14}]$ is $56$.)</p> <p>Why? Does $p\equiv 11\pmod{56}$ imply $14$ is a quadratic residue mod $p$?</p>
Brandon Carter
1,016
<p>If we have a quadratic field $K = \mathbb{Q}(\sqrt{d})$ with $d$ squarefree, then an odd prime $p$ splits if and only if $\left(\frac{d}{p}\right) = 1$. </p> <p><em>Claim</em>: If $q$ is an odd prime with $q \equiv 3 \pmod 4$, then $q$ is a quadratic residue mod $p$ if and only if $p \equiv \pm b^2 \pmod {4q}$, where $b$ is an odd integer prime to $q$. The proof is straightforward from the law of quadratic reciprocity.</p> <p>So, $\left(\frac{14}{p}\right) = \left(\frac{2}{p}\right) \left(\frac{7}{p}\right)$.</p> <p>$2$ is a quadratic residue mod $p$ if and only if $p \equiv \pm 1 \pmod 8$. Since $p \equiv 11 \pmod {56}$ we see that $p \equiv 3 \pmod 8$. So $\left( \frac{2}{p}\right) = -1$.</p> <p>On the other hand, we can use the claim above to calculate $\left(\frac{7}{p}\right)$. So we need only check that $p \equiv \pm b^2 \pmod {28}$. Since $p \equiv 11 \pmod {56}$, $p \equiv 11 \pmod {28}$. And simple computation shows that $\pm 11$ are non-quadratic residues mod $28$. Hence $\left(\frac{7}{p}\right) = -1$.</p> <p>So $\left(\frac{14}{p}\right) = 1$, and so $p$ splits in $\mathcal{O}_K = \mathbb{Z}[\sqrt{14}]$.</p>
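The conclusion is easy to spot-check by brute force: for every prime $p \equiv 11 \pmod{56}$ below some bound, Euler's criterion $14^{(p-1)/2} \equiv 1 \pmod p$ should hold. A small Python sketch (the bound 5000 and the trial-division primality test are arbitrary choices):

```python
def is_prime(n):
    """Trial-division primality test; fine for the small bound used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes p congruent to 11 mod 56 below a small bound
primes = [p for p in range(11, 5000, 56) if is_prime(p)]

# Euler's criterion: 14 is a QR mod p exactly when 14^((p-1)/2) == 1 mod p
euler = [pow(14, (p - 1) // 2, p) for p in primes]
```

Note that no such p divides 14: p is odd, and 11 mod 7 = 4, so Euler's criterion applies cleanly.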
386,921
<p>I am 16 years old at the time of writing (so I have no supervisors to seek advice from) and I have written a mathematics research paper, which I plan on submitting to a journal for publication. I asked an <a href="https://academia.stackexchange.com/questions/164114/how-to-structure-a-proof-by-induction-in-a-maths-research-paper?noredirect=1#comment442141_164114">identical question</a> on Academia.SE and I was advised to ask the question here.</p> <p>For a couple of the assertions that I make, I use proofs by induction. Now, in school we're encouraged to write proofs by induction in the following (rigid) format:</p> <p><strong>Base case:</strong>...</p> <p><strong>Assumption(s):</strong> ….</p> <p><strong>Inductive step:</strong> ….</p> <p><strong>Conclusion:</strong> ….</p> <p>I have noticed that no research articles that I have seen have written proofs by induction using this sort of format. The authors usually make it flow much more smoothly, eg 'For the base case, the result is trivial. Now assume the result holds for some <span class="math-container">$n=k$</span>, so that …. Now consider the expression for <span class="math-container">$n=k+1$</span> … and by the inductive hypothesis this equals … hence the result is true by mathematical induction.'</p> <p>So, is it good practice to write proofs by induction in the pretty rigid structure I first outlined or is it ok/better to write the proofs more naturally so that it flows better?</p>
2734364041
111,215
<p>Writing a proof for school is very different from writing a proof for a research paper. Perhaps the most important distinction is that the audiences are completely different. In school, your audience is your instructor, whose job is to assess your ability to learn and apply a principle. The audience of a research article is the professional mathematical community, where favorable viewing of your work hinges on novelty of ideas, correctness, readability, and possibly elegance, not rigid adherence to one person's notion of how to organize thoughts. With that in mind, I know I would prefer to read a proof with a nice natural flow instead of one that is written in rigid adherence to one specific instructor's preferences.</p> <p>When you finish writing your paper, I recommend that you send your paper to a professional researcher with whom you have a good working relationship, someone who can give you candid, meaningful, and constructive feedback. As you go about writing your paper, I recommend reading as many papers in professional journals as you can so that you get a sense for what good writing looks like. This second bit of advice is tricky without knowing what area(s) of research interest you. So perhaps the professional researcher with whom you have a good working relationship might direct you to some examples of quality writing.</p>
386,921
<p>I am 16 years old at the time of writing (so I have no supervisors to seek advice from) and I have written a mathematics research paper, which I plan on submitting to a journal for publication. I asked an <a href="https://academia.stackexchange.com/questions/164114/how-to-structure-a-proof-by-induction-in-a-maths-research-paper?noredirect=1#comment442141_164114">identical question</a> on Academia.SE and I was advised to ask the question here.</p> <p>For a couple of the assertions that I make, I use proofs by induction. Now, in school we're encouraged to write proofs by induction in the following (rigid) format:</p> <p><strong>Base case:</strong>...</p> <p><strong>Assumption(s):</strong> ….</p> <p><strong>Inductive step:</strong> ….</p> <p><strong>Conclusion:</strong> ….</p> <p>I have noticed that no research articles that I have seen have written proofs by induction using this sort of format. The authors usually make it flow much more smoothly, eg 'For the base case, the result is trivial. Now assume the result holds for some <span class="math-container">$n=k$</span>, so that …. Now consider the expression for <span class="math-container">$n=k+1$</span> … and by the inductive hypothesis this equals … hence the result is true by mathematical induction.'</p> <p>So, is it good practice to write proofs by induction in the pretty rigid structure I first outlined or is it ok/better to write the proofs more naturally so that it flows better?</p>
AfterMath
153,908
<p>Welcome to math overflow!</p> <p>There is no need to list the names of the steps of the induction (Base case, Inductive step, etc.) if it is clear from context and/or obvious what you're doing. This will likely only make the argument take up more physical space on the page than strictly necessary. It should, however, be clear that you are doing an argument by induction, what the base case is, and how the induction step is performed.</p> <p>If a step of the argument is a little bit difficult to comprehend mentally, then I would say there is no loss in making it explicitly clear why the argument holds, without drowning in details, of course.</p> <p>I had a teacher once who used the term &quot;Weierstrass-rigorous&quot; for certain arguments / persons. This refers to the ideal standard that your proofs should be logically consistent (not contradict themselves) and &quot;water-proof&quot;, in other words <em>correct</em>, as others have stated. When attempting to conform to such a standard, it often happens that there are steps used in a proof that actually assume more than they seem to. For example, something as innocent as</p> <p><span class="math-container">$$"\text{Let} \hspace{2mm}(x_n)_{n=1}^{\infty}\hspace{2mm} \text{be a sequence of elements in} \hspace{2mm} X", $$</span></p> <p>may actually require the assumption of the Axiom of Choice to obtain the sequence.</p> <p>The proof should also be readable. This means, among other things, that all symbols, functions, etc. that you utilize throughout the proof are defined either a priori or on the fly. It is also a bonus if the proof follows a natural progression, if this is possible.</p> <p>In summary: too rigorous is better than too little rigorous, and write the proof in a way that makes it readable (or perhaps even enjoyable). And don’t forget to &quot;get to the point&quot; in the proof.</p> <p>[Ps. Writing style is very much a personal preference, as long as it is rigorous.]</p>
258,132
<p>Consider the following simple example as motivation for my question. If it were the case that, say, the Riemann hypothesis turned out to be independent of ZFC, I have no doubt it would be accepted by many as a new axiom (or some stronger principle which implied it). This is because we intuitively think that if we cannot find a zero off of the half-line, then there is no zero off the half-line. It just doesn't "really" exist.</p> <p>Similarly, if Goldbach's conjecture is independent of ZFC, we would accept it as true as well, because we could never find any counter-examples.</p> <p>However, is there any reason we should suppose that adding these two independent statements as axioms leads to a consistent system? Yes, because we have the "standard model" of the natural numbers (assuming sufficient consistency).</p> <p>But can this Platonic line of thinking work in a first-order way, for any theory? Or is it specific to the natural numbers, and its second-order model? </p> <p>In other words, let $T$ be a (countable, effectively enumerable) theory of first-order predicate logic. Define ${\rm Plato}(T)$ to be the theory obtained from $T$ by adjoining statements $p$ such that: $p:=\forall x\ \varphi(x)$ where $\varphi(x)$ is a sentence with $x$ a free-variable (and the only one) and $\forall x\ \varphi(x)$ is independent of $T$. Does ${\rm Plato}(T)$ have a model? Is it consistent?</p> <p>The motivation for my question is that, as an algebraist, I have a very strong intuition that if you cannot construct a counter-example then there is no counter-example and the corresponding universal statement is "true" (for the given universe of discourse). In particular, I'm thinking of the Whitehead problem, which was shown by Shelah to be independent of ZFC. From an algebraic point of view, this seems to suggest that Whitehead's problem has a positive solution, since you cannot really find a counter-example to the claim. 
But does adding the axiom "there is no counter-example to the Whitehead problem" disallow similar new axioms for other independent statements? Or can this all be done in a consistent way, as if there really is a Platonic reality out there (even if we cannot completely touch it, or describe it)?</p>
Andrej Bauer
1,176
<p>It cannot be done in a consistent way.</p> <p>Consider a <em>closed</em> statement $\psi$ which is independent of a theory $T$, and take $\forall x . \psi$ and $\forall x . \lnot\psi$. (I made the closed statement have a dummy free variable to satisfy your condition.) Both statements are of the kind you are asking for, but when we add both to $T$ we get an inconsistent theory.</p> <p>It should be clear that one can come up with examples where the two sentences that contradict each other are not so blatantly in opposition with each other. And with a bit of work we can even come up with examples where the free variable $x$ is doing something.</p>
258,132
<p>Consider the following simple example as motivation for my question. If it were the case that, say, the Riemann hypothesis turned out to be independent of ZFC, I have no doubt it would be accepted by many as a new axiom (or some stronger principle which implied it). This is because we intuitively think that if we cannot find a zero off of the half-line, then there is no zero off the half-line. It just doesn't "really" exist.</p> <p>Similarly, if Goldbach's conjecture is independent of ZFC, we would accept it as true as well, because we could never find any counter-examples.</p> <p>However, is there any reason we should suppose that adding these two independent statements as axioms leads to a consistent system? Yes, because we have the "standard model" of the natural numbers (assuming sufficient consistency).</p> <p>But can this Platonic line of thinking work in a first-order way, for any theory? Or is it specific to the natural numbers, and its second-order model? </p> <p>In other words, let $T$ be a (countable, effectively enumerable) theory of first-order predicate logic. Define ${\rm Plato}(T)$ to be the theory obtained from $T$ by adjoining statements $p$ such that: $p:=\forall x\ \varphi(x)$ where $\varphi(x)$ is a sentence with $x$ a free-variable (and the only one) and $\forall x\ \varphi(x)$ is independent of $T$. Does ${\rm Plato}(T)$ have a model? Is it consistent?</p> <p>The motivation for my question is that, as an algebraist, I have a very strong intuition that if you cannot construct a counter-example then there is no counter-example and the corresponding universal statement is "true" (for the given universe of discourse). In particular, I'm thinking of the Whitehead problem, which was shown by Shelah to be independent of ZFC. From an algebraic point of view, this seems to suggest that Whitehead's problem has a positive solution, since you cannot really find a counter-example to the claim. 
But does adding the axiom "there is no counter-example to the Whitehead problem" disallow similar new axioms for other independent statements? Or can this all be done in a consistent way, as if there really is a Platonic reality out there (even if we cannot completely touch it, or describe it)?</p>
Erfan Khaniki
83,598
<p>This post is not an answer to your question, but it explains the reason that if <span class="math-container">$\bf GC$</span> (or <span class="math-container">$\bf RH$</span>) is independent of <span class="math-container">$\bf ZFC$</span>, then <span class="math-container">$\bf GC$</span> (or <span class="math-container">$\bf RH$</span>) is true in the standard model of the natural numbers.</p> <p>The reason is that <span class="math-container">$\bf GC$</span> and <span class="math-container">$\bf RH$</span> are <span class="math-container">$\Pi_1$</span> sentences in the language of arithmetic.</p> <p>Def.</p> <ol> <li><span class="math-container">$x|y := \exists z(z \leq y \land x\cdot z = y)$</span></li> <li><span class="math-container">$Pr(x) := x&gt;1 \land \forall y(y&lt;x\land y&gt;0 \land y|x \to y=1)$</span></li> </ol> <p>Therefore <span class="math-container">$\bf GC$</span> can be defined by <span class="math-container">$\forall x\exists y,z\leq 2\cdot x+4\,(y+z = 2\cdot x+4 \land Pr(y) \land Pr(z))$</span>, which is a <span class="math-container">$\Pi_1$</span> sentence (the existential quantifiers are bounded). For a <span class="math-container">$\Pi_1$</span> definition of <span class="math-container">$\bf RH$</span> see <a href="https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence">here</a>.</p> <p>Let <span class="math-container">$\phi(x)$</span> be a <span class="math-container">$\Delta_0$</span> formula and suppose <span class="math-container">${\bf PA} \nvdash \exists x \neg \phi(x)$</span>; then <span class="math-container">$\mathbb{N}\models \forall x \phi(x)$</span>.
This is true because of the <span class="math-container">$\Sigma_1$</span> completeness of <span class="math-container">${\bf PA}$</span>, that is, if <span class="math-container">$\psi$</span> is a <span class="math-container">$\Sigma_1$</span> sentence, then <span class="math-container">$\mathbb{N}\models \psi$</span> iff <span class="math-container">${\bf PA}\vdash \psi$</span>.</p> <p>The important thing in this argument is the <span class="math-container">$\Pi_1$</span> definability of the problem. For example, the consistency of <span class="math-container">$\bf PA$</span> is a <span class="math-container">$\Pi_1$</span> sentence and, by the second incompleteness theorem, <span class="math-container">${\bf PA} \nvdash Con_{\bf PA}$</span>, but <span class="math-container">$\mathbb{N}\not\models \neg Con_{\bf PA}$</span>; therefore we cannot prove similar theorems for formulas at other levels of the arithmetical hierarchy beyond <span class="math-container">$\Pi_1$</span>.</p>
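The practical content of being $\Pi_1$ is that each instance is decidable by a bounded search, mirroring the bounded quantifiers in the formula for $\bf GC$ above; a throwaway Python illustration (the bound 200 is arbitrary):

```python
def is_prime(n):
    """Trial-division primality test for the small values used below."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_instance(x):
    """Decide the bounded statement: 2*x + 4 is a sum of two primes.

    The search over y is bounded by 2*x + 4, just as in the formula.
    """
    even = 2 * x + 4
    return any(is_prime(y) and is_prime(even - y) for y in range(2, even))

checked = all(goldbach_instance(x) for x in range(200))
```

A counterexample to $\bf GC$, if one existed, would thus be verifiable in finite time — which is exactly why independence of a $\Pi_1$ sentence forces its truth in $\mathbb{N}$.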
258,132
<p>Consider the following simple example as motivation for my question. If it were the case that, say, the Riemann hypothesis turned out to be independent of ZFC, I have no doubt it would be accepted by many as a new axiom (or some stronger principle which implied it). This is because we intuitively think that if we cannot find a zero off of the half-line, then there is no zero off the half-line. It just doesn't "really" exist.</p> <p>Similarly, if Goldbach's conjecture is independent of ZFC, we would accept it as true as well, because we could never find any counter-examples.</p> <p>However, is there any reason we should suppose that adding these two independent statements as axioms leads to a consistent system? Yes, because we have the "standard model" of the natural numbers (assuming sufficient consistency).</p> <p>But can this Platonic line of thinking work in a first-order way, for any theory? Or is it specific to the natural numbers, and its second-order model? </p> <p>In other words, let $T$ be a (countable, effectively enumerable) theory of first-order predicate logic. Define ${\rm Plato}(T)$ to be the theory obtained from $T$ by adjoining statements $p$ such that: $p:=\forall x\ \varphi(x)$ where $\varphi(x)$ is a sentence with $x$ a free-variable (and the only one) and $\forall x\ \varphi(x)$ is independent of $T$. Does ${\rm Plato}(T)$ have a model? Is it consistent?</p> <p>The motivation for my question is that, as an algebraist, I have a very strong intuition that if you cannot construct a counter-example then there is no counter-example and the corresponding universal statement is "true" (for the given universe of discourse). In particular, I'm thinking of the Whitehead problem, which was shown by Shelah to be independent of ZFC. From an algebraic point of view, this seems to suggest that Whitehead's problem has a positive solution, since you cannot really find a counter-example to the claim. 
But does adding the axiom "there is no counter-example to the Whitehead problem" disallow similar new axioms for other independent statements? Or can this all be done in a consistent way, as if there really is a Platonic reality out there (even if we cannot completely touch it, or describe it)?</p>
Joel David Hamkins
1,946
<p>The phenomenon accords more strongly with your philosophical explanation if you ask also that the sentences have complexity $\Pi^0_1$. That is, the universal statement $\forall x\ \varphi(x)$ should have $\varphi(x)$ involving only bounded quantifiers, so that we can check $\varphi(x)$ for any particular $x$ in finite time. If you drop that requirement, there are some easy counterexamples, hinted at or given already in the comments and other answers.</p> <p>But meanwhile, even in the case you require $\varphi(x)$ to have only bounded quantifiers, there is still a counterexample. </p> <p><strong>Theorem.</strong> If $\newcommand\PA{\text{PA}}\newcommand\Con{\text{Con}}\PA$ is consistent, then there is a consistent theory $T$ extending $\PA$ with two $\Pi^0_1$ sentences $$\forall x\ \varphi(x)$$ $$\forall x\ \psi(x)$$ both of which are consistent with and independent of $T$, but which are not jointly consistent with $T$.</p> <p><strong>Proof.</strong> Let $T$ be the theory $\PA+\neg\Con(\PA)$, which is consistent if $\PA$ is consistent. Let $\rho$ be the <a href="https://en.wikipedia.org/wiki/Rosser%27s_trick" rel="noreferrer">Rosser sentence</a> of this theory, which asserts that the first proof in $T$ of $\rho$ comes only after the first proof of $\neg\rho$ (see also my discussion of <a href="http://jdh.hamkins.org/every-function-can-be-computable/" rel="noreferrer">the Rosser tree</a>). Our two $\Pi^0_1$ sentences are:</p> <ul> <li>every proof of $\rho$ from $T$ has a smaller proof of $\neg\rho$. </li> <li>every proof of $\neg\rho$ from $T$ has a smaller proof in $T$ of $\rho$. </li> </ul> <p>The first statement is equivalent to $\rho$, and the second is equivalent over $T$ to $\neg\rho$, since $T$ proves that every statement is provable; the only question is which proof comes first. 
So both statements are consistent with $T$.</p> <p>But the sentences are not jointly consistent with $T$, since in any model of $T$, both $\rho$ and $\neg\rho$ are provable from $T$, and so one of the proofs has to come first. <strong>QED</strong></p>
4,122,425
<p>Let’s say a corona test is correct with <code>p=0.8</code>. If I now take two tests, what’s the probability that I get a correct result?</p> <p>I first thought of <code>0.8*0.8</code>, but that makes no sense, since the probability should not decrease, and <code>0.8+0.8</code> gives a probability over 1, which makes no sense either. Or maybe this is a Bayes probability example?</p> <p>Edit: I would like to extend my question: What’s the probability that I test negative with one test and with two tests? The probability with two tests should increase if I am actually negative? Thanks for the answers.</p>
MathR
876,250
<p>0.2<span class="math-container">$\cdot$</span>0.8 + 0.8<span class="math-container">$\cdot$</span>0.2 + 0.8<span class="math-container">$\cdot$</span>0.8 = 0.96 = P(“at least one correct test”), i.e. the complement of both tests being wrong: 1 − 0.2<span class="math-container">$\cdot$</span>0.2 = 0.96.</p>
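In code: a quick Monte Carlo check of this computation (a sketch, assuming the two tests are independent and each is correct with probability 0.8):

```python
import random

def simulate(trials=200_000, p=0.8, seed=1):
    """Estimate P(at least one of two independent tests is correct)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        a = random.random() < p  # first test correct?
        b = random.random() < p  # second test correct?
        if a or b:
            hits += 1
    return hits / trials

print(simulate())  # close to 1 - 0.2 * 0.2 = 0.96
```

The estimate lands near 0.96, matching the complement rule above.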
2,420,727
<p>I'm trying to evaluate </p> <blockquote> <p>$$\lim _{ x\to -\infty } \frac { 2x-3 }{ \sqrt { x^{ 2 }+7x-2 } } $$</p> </blockquote> <p>by rationalizing the denominator, but I am not getting anywhere. Can someone please help me with this?</p> <p>Thanks</p>
Siong Thye Goh
306,553
<p>$$\lim_{x\to -\infty}\left(\frac{2x-3}{\sqrt{x^2+7x-2}}\right) = \lim_{x\to -\infty}\left(\frac{2\frac{x}{|x|}-\frac{3}{|x|}}{\sqrt{1+\frac{7}{x}-\frac{2}{x^2}}}\right)=-2$$</p>
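A quick numerical sanity check of this limit (just a sketch, evaluating the expression at increasingly negative $x$):

```python
import math

def f(x):
    return (2 * x - 3) / math.sqrt(x**2 + 7 * x - 2)

# The ratio should approach -2 as x -> -infinity.
for x in (-1e3, -1e6, -1e9):
    print(x, f(x))
```

The printed values drift toward $-2$, consistent with the sign coming from $\sqrt{x^2}=|x|=-x$ for negative $x$.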
2,420,727
<p>I'm trying to evaluate </p> <blockquote> <p>$$\lim _{ x\to -\infty } \frac { 2x-3 }{ \sqrt { x^{ 2 }+7x-2 } } $$</p> </blockquote> <p>by rationalizing the denominator, but I am not getting anywhere. Can someone please help me with this?</p> <p>Thanks</p>
Nosrati
108,128
<p>\begin{align} \lim_{x\to\pm\infty}\frac{2x-3}{\sqrt{x^2+7x-2}} &amp;= \lim_{x\to\pm\infty}\frac{x(2-\dfrac3x)}{\sqrt{x^2(1+\dfrac7x-\dfrac{2}{x^2})}} \\ &amp;= \lim_{x\to\pm\infty}\frac{x}{\sqrt{x^2}} \times \lim_{x\to\pm\infty}\frac{2-\dfrac3x}{\sqrt{1+\dfrac7x-\dfrac{2}{x^2}}} \\ &amp;= \lim_{x\to\pm\infty}\frac{x}{\sqrt{x^2}} \times \lim_{x\to\pm\infty}\frac{2-0}{\sqrt{1+0-0}} \\ &amp;= \lim_{x\to\pm\infty}\frac{x}{\sqrt{x^2}} \times 2 \\ &amp;= \lim_{x\to\pm\infty}\frac{2x}{|x|} \end{align} Since $\sqrt{x^2}=|x|$, this is $2$ as $x\to+\infty$ and $-2$ as $x\to-\infty$; your limit is taken as $x\to-\infty$, so the answer is $-2$.</p>
3,954,865
<p>I am trying to solve a question but am stuck on the steps. I cannot find any similar questions. With the help of some online resources I calculated some parts of the question, but I can see that is not enough. I know my approach lacks information, but this is as far as I have gotten; I was ill with covid during the class hours and could not follow the class examples, and I thought someone could help me solve it and learn the subject.</p> <p>With the help of the answers from here I try to give an answer. It still needs some improvements, but I tried to do my best. I still do not have an answer for part D and am confused about the confidence level (part C) and the significance level (part B).</p> <p><strong>My answers:</strong></p> <p><span class="math-container">$n = 9,\quad \sum x = 3970,\quad \text{mean } \bar x = 441.1111,\quad \text{sample variance } s^2 = 161.1111,\quad s = \sqrt{161.1111} = 12.6929$</span></p> <p><span class="math-container">$t = \frac{\bar x - \mu_0}{s/\sqrt n} = \frac{441.1111 - 500}{12.6929/\sqrt 9} = -13.9185$</span></p> <p>We subtract 1 to get the degrees of freedom: <span class="math-container">$df = n - 1 = 9 - 1 = 8$</span>.</p> <p>Two-sided probability: <span class="math-container">$P(|T| \geq 13.9185) = 0.00000069$</span>. So this is the p-value.</p> <p>We will reject <span class="math-container">$H_0$</span> at <span class="math-container">$\alpha = 1\%$</span> and also at any larger <span class="math-container">$\alpha$</span>.</p> <p>(i) <span class="math-container">$\alpha = 0.10$</span>: with <span class="math-container">$df = 8$</span> the critical t-value is <span class="math-container">$t_c = 1.86$</span>, so <span class="math-container">$$CI = \left(\bar x - t_c\frac{s}{\sqrt n},\ \bar x + t_c\frac{s}{\sqrt n}\right) = (441.1111 - 7.868,\ 441.1111 + 7.868) = (433.243,\ 448.979)$$</span></p> <p>For the other <span class="math-container">$t_c$</span> values:</p> <blockquote> <p>(ii) <span class="math-container">$\alpha = 0.05$</span>: <span class="math-container">$t_c = 2.306$</span>, CI = (431.354, 450.868); (iii) <span class="math-container">$\alpha = 0.01$</span>: <span class="math-container">$t_c = 3.355$</span>, CI = (426.915, 455.308).</p> </blockquote> <p>For the confidence levels (i) 0.90, (ii) 0.95, (iii) 0.99 in part C, none of the confidence intervals contains 500.</p> <p><strong>The Question:</strong></p> <p>The worker says that the mean purchasing cost is 500 USD. We decide to test this.</p> <p>For a random sample of 9 purchases drawn from a normally distributed population with unknown variance, the costs are:</p> <pre><code>430, 450, 450, 440, 460, 420, 430, 450, 440. </code></pre> <p>A) Conduct a hypothesis test of whether the population mean purchasing cost equals 500 USD. Include all assumptions, the hypotheses, test statistic, and P-value and interpret the result in context.</p> <p>B) For which significance levels can you reject <span class="math-container">$H_0$</span>? (i) 0.10, (ii) 0.05, or (iii) 0.01.</p> <p>C) Based on the answers in part B), for which confidence levels would the confidence interval contain 500? (i) 0.90, (ii) 0.95, or (iii) 0.99.</p> <p>D) Use part B) and part C) to illustrate the correspondence between results of significance tests and results of confidence intervals.</p>
BruceET
221,800
<p>Notice that <em>all nine</em> of the observations are \$460 and below? Just from common sense, what does that tell you about the claim that average cost is \$500.</p> <p>You already have a thoughtful Answer from @tommik (+1), but because you ask I will show some additional detail.</p> <hr /> <p>Here is a relevant t test from a recent release of Minitab. How much of the output can your find by hand? What parts of the question can you answer from this?</p> <pre><code>One-Sample T: x Test of μ = 500 vs ≠ 500 Variable N Mean StDev SE Mean 95% CI T P x 9 441.11 12.69 4.23 (431.35, 450.87) -13.92 0.000 Descriptive Statistics: x Variable N Mean SE Mean StDev Minimum Q1 Median Q3 Maximum x 9 441.11 4.23 12.69 420.00 430.00 440.00 450.00 460.00 </code></pre> <blockquote> <p>I don't know whether you are taking this course in a classroom or online. A lot of online courses are using hastily written texts with confusing, useless problems. By contrast, <em>this</em> is a very nice problem carefully written (probably by a real statistician) to encourage your intuitive insight into hypothesis testing and confidence intervals. It will be worth your trouble to do computations, look at results, compare with computer printout, and think carefully about each of your answers.</p> </blockquote> <hr /> <p>Below is output from R statistical software for the same problem.</p> <pre><code>x = c(430, 450, 450, 440, 460, 420, 430, 450, 440) summary(x); length(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 
420.0 430.0 440.0 441.1 450.0 460.0 [1] 9 # sample size [1] 12.69296 # sample SD t.test(x, mu = 500, conf.lev=.99) One Sample t-test data: x t = -13.918, df = 8, p-value = 6.874e-07 alternative hypothesis: true mean is not equal to 500 99 percent confidence interval: 426.9145 455.3077 sample estimates: mean of x 441.1111 boxplot(x, ylim=c(400,500), col=&quot;skyblue2&quot;) abline(h=500, col=&quot;green2&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/chMoC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/chMoC.png" alt="enter image description here" /></a></p>
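The key numbers in the printouts above can also be reproduced with nothing but the Python standard library (a sketch; the critical value $t_{0.025,\,8} = 2.306$ is hard-coded rather than looked up in a table):

```python
import math
import statistics

x = [430, 450, 450, 440, 460, 420, 430, 450, 440]
n = len(x)

mean = statistics.mean(x)   # sample mean, about 441.11
sd = statistics.stdev(x)    # sample SD (n-1 denominator), about 12.69
se = sd / math.sqrt(n)      # standard error of the mean, about 4.23

# One-sample t statistic for H0: mu = 500
t = (mean - 500) / se

# 95% CI using the hard-coded critical value t_{0.025, df=8} = 2.306
lo, hi = mean - 2.306 * se, mean + 2.306 * se

print(round(t, 2), (round(lo, 2), round(hi, 2)))  # -13.92 (431.35, 450.87)
```

These agree with the Minitab line (`T = -13.92`, 95% CI `(431.35, 450.87)`) and the R output above.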
311,380
<p>Prove that the relation $x \sim y$ iff $y$ is an element of the connected component of $x$ is an equivalence relation.</p> <p>This question is confusing me, do I simply go about showing the relation is reflexive, symmetric, and transitive? I don't really see how to do this for this question. Any suggestions or hints are appreciated! </p>
Cameron Buie
28,900
<p>Let's call our space $S$. For any $x\in S,$ we can define $C_x$, the connected component of $x$, to be the $\subseteq$-greatest connected subset $A$ of $S$ such that $x\in A.$ Put another way, $C_x$ is the union of all connected subsets of $S$ containing $x$ as an element--we can show that such a union is connected in various ways.</p> <p>Then $x\sim y$ if and only if $y\in C_x$. Clearly, $x\in C_x$, so $x\sim x$.</p> <p>If $x\sim y$, then $C_x$ is a connected set containing $y$, so $C_x\subseteq C_y$ (since $C_y$ is the $\subseteq$-greatest subset of $S$ containing $y$), and so $x\in C_y$, meaning $y\sim x$.</p> <p>If $x\sim y$ and $y\sim z$, then $y\in C_x$ and $z\in C_y$. As with the symmetry proof, we then have $C_x\subseteq C_y$ and $C_y\subseteq C_z$, so it follows that $x\in C_z$. But then $C_z$ is a connected set containing $x$, so $C_z\subseteq C_x$, and so $z\in C_x$, meaning $x\sim z$.</p>
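As a discrete illustration (not part of the topological proof): in a finite graph, "$y$ lies in the connected component of $x$" is exactly the partition produced by a flood-fill, and the same three properties can be checked exhaustively:

```python
from collections import defaultdict

def component(adj, x):
    """Connected component of x in an undirected graph, via flood-fill."""
    seen, stack = {x}, [x]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return frozenset(seen)

# Toy graph with two components: {1, 2, 3} and {4, 5}.
adj = defaultdict(set)
for a, b in [(1, 2), (2, 3), (4, 5)]:
    adj[a].add(b)
    adj[b].add(a)

nodes = [1, 2, 3, 4, 5]

def related(x, y):
    return y in component(adj, x)

# Reflexivity, symmetry, transitivity -- checked exhaustively.
assert all(related(v, v) for v in nodes)
assert all(related(y, x) for x in nodes for y in nodes if related(x, y))
assert all(related(x, z) for x in nodes for y in nodes for z in nodes
           if related(x, y) and related(y, z))
```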