Columns: tag (stringclasses, 9 values) · question_body (stringlengths, 61–12.9k) · accepted_answer (stringlengths, 38–36.4k) · second_answer (stringlengths, 63–33k)
linear-algebra
<p>This may be a trivial question yet I was unable to find an answer:</p> <p>$$\left \| A \right \| _2=\sqrt{\lambda_{\text{max}}(A^{^*}A)}=\sigma_{\text{max}}(A)$$</p> <p>where the spectral norm $\left \| A \right \| _2$ of a complex matrix $A$ is defined as $$\text{max} \left\{ \|Ax\|_2 : \|x\| = 1 \right\}$$</p> ...
<p>Put <span class="math-container">$B=A^*A$</span>, which is a Hermitian matrix. A linear transformation of the Euclidean vector space <span class="math-container">$E$</span> is Hermitian iff there exists an orthonormal basis of <span class="math-container">$E$</span> consisting of eigenvectors of <span class="math-container">$B$</span>. Let <span cl...
<p>First of all, <span class="math-container">$$\begin{align*}\sup_{\|x\|_2 =1}\|Ax\|_2 &amp; = \sup_{\|x\|_2 =1}\|U\Sigma V^Tx\|_2 = \sup_{\|x\|_2 =1}\|\Sigma V^Tx\|_2\end{align*}$$</span> since <span class="math-container">$U$</span> is unitary, that is, <span class="math-container">$\|Ux_0\|_2^2 = x_0^TU^TUx_0 = x_0...
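As a numerical sanity check of the identity $\|A\|_2=\sqrt{\lambda_{\max}(A^*A)}$, here is a minimal sketch using a matrix of my own choosing (not from the question); the largest eigenvalue of the symmetric $2\times2$ matrix $A^TA$ comes from the quadratic formula, and the norm is compared against a direct search over unit vectors.

```python
import math

# Hypothetical example matrix (my choice, not from the post)
A = [[3.0, 0.0], [4.0, 5.0]]

# B = A^T A, a symmetric 2x2 matrix
B = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
     [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2]]

# Eigenvalues of a symmetric 2x2 [[a,b],[b,d]] via the quadratic formula
a, b, d = B[0][0], B[0][1], B[1][1]
disc = math.sqrt((a - d)**2 + 4*b*b)
lam_max = (a + d + disc) / 2
spectral_norm = math.sqrt(lam_max)

# Compare with a direct search over unit vectors x = (cos t, sin t)
best = 0.0
for k in range(100000):
    t = 2 * math.pi * k / 100000
    x, y = math.cos(t), math.sin(t)
    Ax = (A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y)
    best = max(best, math.hypot(*Ax))

print(spectral_norm, best)  # the two values agree closely
```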
differentiation
<p>If water is poured into a cone at a constant rate and if <span class="math-container">$\frac {dh}{dt}$</span> is the rate of change of the depth of the water, I understand that <span class="math-container">$\frac {dh}{dt}$</span> is decreasing. However, I don't understand why <span class="math-container">$\frac {dh}...
<blockquote> <p>I understand that <span class="math-container">$\frac {dh}{dt}$</span> is decreasing. However, I don't understand on an <strong>intuitive</strong> level why <span class="math-container">$\frac {dh}{dt}$</span> is non-linear.</p> </blockquote> <p>Because any strictly decreasing function that is linear...
<p>The volume of water is changing linearly, but the height and volume are related nonlinearly. That is why <span class="math-container">$h(t)$</span> is non-linear.</p> <p><span class="math-container">\begin{eqnarray*} V &amp;=&amp; \frac{1}{3} \pi r^2 h\\ r &amp;=&amp; h \tan \theta\\ V &amp;=&amp; \frac{1}{3} \pi \...
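A small numeric sketch of the relations above, assuming for concreteness $\tan\theta=1$ and a constant inflow $dV/dt=1$ (both my choices): then $V=\frac{\pi}{3}h^3$ gives $h(t)=(3t/\pi)^{1/3}$, and the numerically estimated $dh/dt$ visibly shrinks over time.

```python
import math

# Assume tan(theta) = 1 and inflow rate dV/dt = 1, so V = (pi/3) h^3
# and therefore h(t) = (3 t / pi)^(1/3), nonlinear in t.
def h(t):
    return (3 * t / math.pi) ** (1 / 3)

# Central-difference estimates of dh/dt at a few times
rates = []
for t in [1.0, 2.0, 4.0, 8.0]:
    eps = 1e-6
    rates.append((h(t + eps) - h(t - eps)) / (2 * eps))

print(rates)  # strictly decreasing: the water rises ever more slowly
```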
geometry
<p>The number $\pi$ is defined as the ratio between the circumference and diameter of a circle. How do we know the value $\pi$ is correct for every circle? How do we truly know the value is the same for every circle?</p> <p>How do we know that $\pi = {C\over d}$ for any circle? Is there a proof that states the follo...
<p>This is not a very rigorous proof, but it is how I was taught the fact that the circumference of a circle is proportional to its radius.</p> <p><a href="https://i.sstatic.net/Yg2CC.png" rel="noreferrer"><img src="https://i.sstatic.net/Yg2CC.png" alt="Two concentric circles"></a></p> <p>Consider two concentric circ...
<p>Here is a rigorous proof.</p> <p>First, we need a definition for the distance of two points along a section of a curve. This is called arc length and is defined as the limit of the sum of line-segments of the curve.</p> <p><a href="https://i.sstatic.net/xOje2.png" rel="noreferrer"><img src="https://i.sstatic.net/xOj...
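The arc-length idea can be illustrated numerically: approximate each circle by an inscribed regular polygon and divide its perimeter by the diameter. The library value of π is used only to place the vertices evenly; the point of the sketch is that the resulting ratio does not depend on the radius.

```python
import math

def perimeter_ratio(radius, n=100000):
    # Perimeter of an inscribed regular n-gon, divided by the diameter.
    side = 2 * radius * math.sin(math.pi / n)
    return n * side / (2 * radius)

# The ratio is the same for circles of very different sizes
ratios = [perimeter_ratio(r) for r in (1.0, 5.0, 123.0)]
print(ratios)  # all approximately pi, independent of the radius
```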
probability
<p>Wikipedia says:</p> <blockquote> <p>The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one.</p> </blockquote> <p>and it also says.</p> <blockquote> <p>Unlike a probability, a probability density function can take on values greater than one; for examp...
<p>Consider the uniform distribution on the interval from $0$ to $1/2$. The value of the density is $2$ on that interval, and $0$ elsewhere. The area under the graph is the area of a rectangle. The length of the base is $1/2$, and the height is $2$ $$ \int\text{density} = \text{area of rectangle} = \text{base} \cdo...
<p>Remember that the 'PD' in PDF stands for "probability density", not probability. Density means probability per unit value of the random variable. That can easily exceed $1$. What has to be true is that the integral of this density function taken with respect to this value must be exactly $1$.</p> <p>If we know a PD...
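The rectangle computation from the first answer can be carried out exactly with rational arithmetic: the density value 2 exceeds 1, yet the total probability (area) is exactly 1.

```python
from fractions import Fraction

# Uniform density on [0, 1/2]: f(x) = 2 there, 0 elsewhere.
base = Fraction(1, 2)
height = Fraction(2)      # the density itself is greater than 1
area = base * height      # integral of the density = area of the rectangle
print(area)               # 1
```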
differentiation
<p>What would be the derivative of square roots? For example if I have $2 \sqrt{x}$ or $\sqrt{x}$.</p> <p>I'm unsure how to find the derivative of these and include them especially in something like implicit.</p>
<p>Let $f(x) = \sqrt{x}$, then $$f'(x) = \lim_{h \to 0} \dfrac{\sqrt{x+h} - \sqrt{x}}{h} = \lim_{h \to 0} \dfrac{\sqrt{x+h} - \sqrt{x}}{h} \times \dfrac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} = \lim_{h \to 0} \dfrac{x+h-x}{h (\sqrt{x+h} + \sqrt{x})}\\ = \lim_{h \to 0} \dfrac{h}{h (\sqrt{x+h} + \sqrt{x})} = \lim_...
<p>$\sqrt x=x^{1/2}$, so you just use the power rule: the derivative is $\frac12x^{-1/2}$.</p>
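A quick numeric check of the power rule: the difference quotient at a sample point (my choice, $x=4$) approaches $\frac12 x^{-1/2}=0.25$.

```python
import math

def difference_quotient(x, h):
    return (math.sqrt(x + h) - math.sqrt(x)) / h

x = 4.0
approx = difference_quotient(x, 1e-8)
exact = 0.5 * x ** (-0.5)   # power rule: (1/2) x^(-1/2)
print(approx, exact)        # both about 0.25
```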
geometry
<p>I have the equation not in the center, i.e.</p> <p>$$\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1.$$</p> <p>But what will be the equation once it is rotated?</p>
<p>After a lot of mistakes I finally got the correct equation for my problem:</p> <p><span class="math-container">$$\dfrac {((x-h)\cos(A)+(y-k)\sin(A))^2}{a^2}+\dfrac{((x-h) \sin(A)-(y-k) \cos(A))^2}{b^2}=1,$$</span></p> <p>where <span class="math-container">$h, k$</span> and <span class="math-container">$a, b$</span>...
<p>The equation you gave can be converted to the parametric form: $$ x = h + a\cos\theta \quad ; \quad y = k + b\sin\theta $$ If we let $\mathbf x_0 = (h,k)$ denote the center, then this can also be written as $$ \mathbf x = \mathbf x_0 + (a\cos\theta)\mathbf e_1 + (b\sin\theta)\mathbf e_2 $$ where $\mathbf e_1 = (1,0...
probability
<p>The following is a homework question for which I am asking guidance.</p> <blockquote> <p>Let $A$, $B$, $C$ be independent random variables uniformly distributed between $(0,1)$. What is the probability that the polynomial $Ax^2 + Bx + C$ has real roots?</p> </blockquote> <p>That means I need $P(B^2 -4AC \geq 0$)...
<p>Hints: First consider $B^2 \geq 4AC$. Now, if $U$ is uniform$(0,1)$, then $-\log(U)$ is exponential$(1)$; further, the sum of two independent exponential$(1)$ random variables has pdf $x e^{-x}$, $x &gt; 0$. Thus, using the law of total probability, the answer can be found by solving an elementary one dimensional in...
<p>Hint: You are looking for the volume of the $(a,b,c) \in [0,1]^3$ such that $b^2 \geq 4ac$. </p>
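The hint about the volume of $\{(a,b,c)\in[0,1]^3 : b^2\ge 4ac\}$ lends itself to a Monte Carlo sketch. The closed form $(5+6\ln 2)/36\approx 0.254$ is the commonly quoted answer to this problem, used here only as a reference value.

```python
import math, random

random.seed(0)
N = 200000
# Sample (B, A, C) uniform on [0,1]^3 and count B^2 >= 4AC
hits = sum(1 for _ in range(N)
           if random.random() ** 2 >= 4 * random.random() * random.random())
estimate = hits / N
exact = (5 + 6 * math.log(2)) / 36   # commonly quoted closed form, ~0.2544
print(estimate, exact)
```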
logic
<p><strong>Context:</strong> I'm studying for my discrete mathematics exam and I keep running into this question that I've failed to solve. The question is as follows.</p> <hr> <p><strong>Problem:</strong> The main form for normal induction over natural numbers $n$ takes the following form:</p> <ol> <li>$P(1)$ is tr...
<p>Consider the following definition of mathematical induction (adapted from David Gunderson's book <em>Handbook of Mathematical Induction</em>):</p> <hr> <p><strong>Principle of mathematical induction:</strong> For some fixed integer $b$, and for each integer $n\geq b$, let $S(n)$ be a statement involving $n$. If</p...
<p>Your question seems somewhat unclear to me, as it stands, but I'll answer the one in the title, and if the question is updated, I'll address that too.</p> <p>Mathematical induction can be taken as its own axiom, independent from the other (though, as comments point out, it can be proven as a theorem in common syste...
matrices
<p><span class="math-container">$$\det(A^T) = \det(A)$$</span></p> <p>Using the geometric definition of the determinant as the area spanned by the <em>columns</em>, could someone give a geometric interpretation of the property?</p>
<p><em>A geometric interpretation in four intuitive steps....</em></p> <p><strong>The Determinant is the Volume Change Factor</strong></p> <p>Think of the matrix as a geometric transformation, mapping points (column vectors) to points: $x \mapsto Mx$. The determinant $\mbox{det}(M)$ gives the factor by which volumes ...
<p>This is more-or-less a reformulation of Matt's answer. He relies on the existence of the SVD-decomposition, I show that <span class="math-container">$\det(A)=\det(A^T)$</span> can be stated in a little different way.</p> <p>Every square matrix can be represented as the product of an orthogonal matrix (representing a...
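The identity $\det(A)=\det(A^T)$ is easy to spot-check numerically; here is a minimal sketch for a $3\times3$ example of my own choosing, with the determinant written out by cofactor expansion.

```python
def det3(M):
    # Cofactor expansion along the first row of a 3x3 matrix
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]   # arbitrary example matrix
print(det3(A), det3(transpose(A)))        # equal, as the property predicts
```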
differentiation
<p>It is often quoted in physics textbooks for finding the electric potential using Green's function that </p> <p>$$\nabla ^2 \left(\frac{1}{r}\right)=-4\pi\delta^3({\bf r}),$$ </p> <p>or more generally </p> <p>$$\nabla ^2 \left(\frac{1}{|| \vec x - \vec x'||}\right)=-4\pi\delta^3(\vec x - \vec x'),$$</p> <p>where ...
<p>The gradient of $\frac1r$ (noting that $r=\sqrt{x^2+y^2+z^2}$) is</p> <p>$$ \nabla \frac1r = -\frac{\mathbf{r}}{r^3} $$ when $r\neq 0$, where $\mathbf{r}=x\mathbf{i}+y\mathbf{j}+z\mathbf{k}$. Now, the divergence of this is</p> <p>$$ \nabla\cdot \left(-\frac{\mathbf{r}}{r^3}\right) = 0 $$ when $r\neq 0$. Therefore,...
<p>I'm new around here (so suggestions about posting are welcome!) and want to give my contribution to this question, even though a bit old. I feel I need to because using the divergence theorem in this context is not quite rigorous. Strictly speaking $1/r$ is not even differentiable at the origin. So here's a proof us...
differentiation
<p>When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle.</p> <p>Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$.</p> <p>Is this just a coincidence, or is there some de...
<p>Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an <a href="http://en.wikipedia.org/wiki/Annulus_%28mathematics%29" rel="noreferrer">annulus</a> (or ring) with inner circumference $2 \pi r$ and outer circumference $2\pi(r+dr)$. As this ring is extremely thin, we can ...
<p>$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Bd}{\partial}\DeclareMathOperator{\vol}{vol}$The formulas are no accident, but not especially deep. The explanation comes down to a couple of geometric observations.</p> <ol> <li><p>If $X$ is the closure of a bounded open set in the Euclidean space $\Reals^{n}$ (such as ...
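The annulus argument can be checked numerically: the central-difference derivative of the area at a sample radius (my choice, $r=3$) matches the circumference $2\pi r$.

```python
import math

def area(r):
    return math.pi * r * r

r = 3.0
eps = 1e-7
# Central-difference estimate of dA/dr
numeric = (area(r + eps) - area(r - eps)) / (2 * eps)
circumference = 2 * math.pi * r
print(numeric, circumference)  # both about 18.85
```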
linear-algebra
<p>I'm trying to intuitively understand the difference between SVD and eigendecomposition.</p> <p>From my understanding, eigendecomposition seeks to describe a linear transformation as a sequence of three basic operations (<span class="math-container">$P^{-1}DP$</span>) on a vector:</p> <ol> <li>Rotation of the coordin...
<p>Consider the eigendecomposition $A=P D P^{-1}$ and SVD $A=U \Sigma V^*$. Some key differences are as follows,</p> <ul> <li>The vectors in the eigendecomposition matrix $P$ are not necessarily orthogonal, so the change of basis isn't a simple rotation. On the other hand, the vectors in the matrices $U$ and $V$ in th...
<p>I encourage you to see an <span class="math-container">$(m \times n)$</span> real-valued matrix <span class="math-container">$A$</span> as a bilinear operator between two spaces; intuitively, one space lies to the left (<span class="math-container">$R^m$</span>) and the other (<span class="math-container">$R^n$</spa...
linear-algebra
<p>For any <span class="math-container">$(a_1,a_2,\cdots,a_n)\in\mathbb{R}^n$</span>, a matrix <span class="math-container">$A$</span> is defined by</p> <p><span class="math-container">$$A_{ij}=\frac1{1+|a_i-a_j|}$$</span></p> <p>Is <span class="math-container">$\det(A)$</span> always non-negative? I did some numerical...
<p>Here is an analytic proof (which I think I have learnt somewhere else, although not in this form). We first show an algebraic fact:</p> <blockquote> <p><strong>Theorem 1.</strong> Let <span class="math-container">$\mathbb{K}$</span> be a commutative ring, and let <span class="math-container">$n\in\mathbb{N} $</span>...
<p>This answer is a suggestion as to how you may prove the result, but does not contain a proof. Instead, I'll show how a similar problem has been answered, and offer pointers to claims that would show your result:</p> <p>Consider the matrix: $$A_{ij} = \frac{1}{1 + |a_i - a_j|^2}$$ Ultimately we will find that $\det...
logic
<p><em>I'm sorry if this is a duplicate in any way. I doubt it's an original question. Due to my ignorance, it's difficult for me to search for appropriate things.</em></p> <h2>Motivation.</h2> <p>This question is inspired by Exercise 1.2.16 of <a href="http://www.math.wisc.edu/%7Emiller/old/m571-08/simpson.pdf" rel="n...
<p>Well, let's look at the structure of the problem:</p> <p>There is a set $S$ of suspects (three in the original problem, a countably infinite number of them in Hilbert's hotel).</p> <p>There's a subset $G\subset S$ of guilty suspects.</p> <p>And there's a mapping $f:S\to P(S)$ where $P(S)$ is the power set (set of...
<p>It sounds to me like you are asking about Infinitary logic. I've pondered this idea myself a fair bit. For instance, we can make sense of the 'limit object' of this sequence $$ a_1 \wedge a_2, (a_1 \wedge a_2) \wedge a_3, ((a_1 \wedge a_2) \wedge a_3) \wedge a_4$$ where $\wedge $ denotes logical and. In this case ...
logic
<p>Recently I learned that for any set A, we have $\varnothing\subset A$.</p> <p>I found some explanation of why it holds.</p> <blockquote> <p>$\varnothing\subset A$ means "for every object $x$, if $x$ belongs to the empty set, then $x$ also belongs to the set A". This is a vacuous truth, because the antecedent ($x...
<p>There’s no conflict: you’ve misinterpreted the second highlighted statement. What it actually says is that $\varnothing$ and $A$ have no element in common, i.e., that $\varnothing\cap A=\varnothing$. This is not the same as saying that $\varnothing$ is not a subset of $A$, so it does not conflict with the fact that ...
<p>From Halmos's <a href="https://books.google.com/books?id=x6cZBQ9qtgoC" rel="noreferrer">Naive Set Theory</a>:</p> <p><a href="https://i.sstatic.net/IfcEP.png" rel="noreferrer"><img src="https://i.sstatic.net/IfcEP.png" alt="enter image description here" /></a></p> <hr /> <p>A transcription:</p> <blockquote> <p>The e...
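The vacuous-truth argument is mirrored directly in how set libraries behave; a minimal sketch in Python:

```python
A = {1, 2, 3}
empty = set()

# "Every element of the empty set is in A" is vacuously true:
# there is nothing to check, hence no possible counterexample.
print(empty.issubset(A))      # True
print(empty.issubset(set()))  # True: the empty set is a subset of itself too
```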
game-theory
<p>Two persons have 2 uniform sticks with equal length which can be cut at any point. Each person will cut the stick into $n$ parts ($n$ is an odd number). And each person's $n$ parts will be permuted randomly, and be compared with the other person's sticks one by one. When one's stick is longer than the other person's...
<p>When intuition doesn't help, try brute force.</p> <pre><code>trials = Table[With[{k = RandomReal[{0, 1}, {7}]}, k/Total[k]], {50}];
Column[Sort[Transpose[{Total /@ Table[Total[Sign[trials[[a]] - trials[[b]]]], {a, 1, 50}, {b, 1, 50}], trials}]]]</code></pre> <p>Now we can look at some best/worst performers.</p> <p>{-55, {0.0186...
<p>Here's a (partial) answer to the setting where the number of sticks is 3 (i.e. $n = 3$). With some effort, one can show the following claims: </p> <p>Given the strategy $(a,a,b)$ where $a \ge b$ (i.e. you break your stick into two equal parts of size $a$ and one smaller part of size $b = 1-2a$), the optimal value f...
logic
<p>Most of the systems mathematicians are interested in are consistent, which means, by Gödel's incompleteness theorems, that there must be unprovable statements.</p> <p>I've seen a simple natural language statement here and elsewhere that's supposed to illustrate this: "I am not a provable statement." which leads to ...
<p>Here's a nice example that I think is easier to understand than the usual examples of Goodstein's theorem, Paris-Harrington, etc. Take a countably infinite paint box; this means that it has one color of paint for each positive integer; we can therefore call the colors <span class="math-container">$C_1, C_2, $</span...
<p>Any statement which is not logically valid (read: always true) is unprovable. The statement $\exists x\exists y(x&gt;y)$ is not provable from the theory of linear orders, since it is false in the singleton order. On the other hand, it is not disprovable since any other order type would satisfy it.</p> <p>The statem...
matrices
<p>The Frobenius norm of a $m \times n$ matrix $F$ is defined as</p> <p>$$\| F \|_F^2 := \sum_{i=1}^m\sum_{j=1}^n |f_{i,j}|^2$$</p> <p>If I have $FG$, where $G$ is a $n \times p$ matrix, can we say the following?</p> <p>$$\| F G \|_F^2 = \|F\|_F^2 \|G\|_F^2$$</p> <p>Also, what does Frobenius norm mean? Is it analog...
<p>Actually there is <span class="math-container">$$||FG||^2_F \leqslant||F||^2_F||G||^2_F$$</span> The proof is as follows. <span class="math-container">\begin{align} \|FG\|^2_F&amp;=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{p}\left|\sum\limits_{k=1}^nf_{ik}g_{kj}\right|^2 \\ &amp;\leqslant\sum\limits_{i=1}^{m}\sum\limi...
<p>Let <span class="math-container">$A$</span> be <span class="math-container">$m \times r$</span> and <span class="math-container">$B$</span> be <span class="math-container">$r \times n$</span>. A better bound here is <span class="math-container">$$ \| A B \|_F \le \|A\| \|B\|_F \quad (*) $$</span> where <span class="...
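The submultiplicativity bound $\|FG\|_F^2\le\|F\|_F^2\|G\|_F^2$ from the first answer is easy to stress-test on random matrices; a minimal stdlib-only sketch (dimensions are my own choices):

```python
import math, random

random.seed(1)

def frob(M):
    return math.sqrt(sum(x * x for row in M for x in row))

def matmul(F, G):
    return [[sum(F[i][k] * G[k][j] for k in range(len(G)))
             for j in range(len(G[0]))] for i in range(len(F))]

# Check the bound ||FG||_F <= ||F||_F * ||G||_F on random 2x3 and 3x4 matrices
ok = True
for _ in range(100):
    F = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    G = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
    ok = ok and frob(matmul(F, G)) <= frob(F) * frob(G) + 1e-12
print(ok)  # True: the inequality held in every trial
```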
logic
<p>So a while ago I saw a proof of the Completeness Theorem, and the hard part of it (all logically valid formulae have a proof) went thusly:</p> <blockquote> <p>Take a theory $K$ as your base theory. Suppose $\varphi$ is logically valid but not a theorem. Then you can add $\neg\varphi$ to $K$'s axioms, forming a ne...
<p>The property that "every consistent theory has a model" does <strong>not</strong> hold for second-order logic.</p> <p>Consider, for example the second-order Peano axioms, which are well known to have only $\mathbb N$ as their model (in standard semantics). Extend the language of the theory with a new constant $c$, ...
<p>First of all, even in the first-order case that "proof" doesn't work: how do you get $M$? You need the assumption that, if $K'$ is consistent, then it has a model; but this is exactly what you are trying to prove.</p> <p>Second of all, in order to even ask if the completeness theorem holds for second-order logic, w...
geometry
<p>This is something that always annoys me when putting an A4 letter in an oblong envelope: one has to estimate where to put the creases when folding the letter. I normally start from the bottom and estimate by eye where to fold. Then I turn the letter over and fold bottom to top. Most of the time ending up with three d...
<p>Fold twice to obtain quarter markings at the paper bottom. Fold along the line through the top corner and the third of these marks. The vertical lines through the first two marks intersect this inclined line at thirds, which allows the final foldings.</p> <p>(Photo by Ross Millikan below - if the image helped you, ...
<p>Here is a picture to go with Hagen von Eitzen's answer. The horizontal lines are the result of the first two folds. The diagonal line is the third fold. The heavy lines are the points at thirds for folding into the envelope.</p> <p><img src="https://i.sstatic.net/LZr8b.jpg" alt="enter image description here"></p...
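The construction can be verified with exact rational arithmetic. Model the sheet with width 1 and height $H$ (for A4, $H=297/210$; the coordinates are my own setup): the fold line through the top-left corner $(0,H)$ and the third quarter-mark $(3/4,0)$ meets the verticals $x=1/4$ and $x=1/2$ exactly at $2H/3$ and $H/3$.

```python
from fractions import Fraction

H = Fraction(297, 210)  # A4 height with the width normalised to 1

# Fold line through (0, H) and (3/4, 0): y(x) = H * (1 - x / (3/4))
def y(x):
    return H * (1 - x / Fraction(3, 4))

# Intersections with the vertical lines through the first two quarter marks
print(y(Fraction(1, 4)), 2 * H / 3)  # equal: two thirds of the height
print(y(Fraction(1, 2)), H / 3)      # equal: one third of the height
```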
probability
<p>I want to teach a short course in probability and I am looking for some counter-intuitive examples in probability. I am mainly interested in the problems whose results seem to be obviously false while they are not.</p> <p>I already found some things. For example these two videos:</p> <ul> <li><a href="http://www.you...
<p>The most famous counter-intuitive probability theory example is the <a href="https://brilliant.org/wiki/monty-hall-problem/">Monty Hall Problem</a></p> <ul> <li>In a game show, there are three doors behind which there are a car and two goats. However, which door conceals which is unknown to you, the player.</li> <l...
<h1>Birthday Problem</h1> <p>For me this was the first example of how counter intuitive the real world probability problems are due to the inherent underestimation/overestimation involved in mental maps for permutation and combination (which is an inverse multiplication problem in general), which form the basis for pr...
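The birthday surprise is a short exact computation: multiply the probabilities of "no collision yet" person by person and subtract from 1.

```python
def no_shared_birthday(n, days=365):
    # P(all n birthdays distinct), ignoring leap years and seasonality
    p = 1.0
    for k in range(n):
        p *= (days - k) / days
    return p

p23 = 1 - no_shared_birthday(23)
print(p23)  # about 0.507: just 23 people already beat a coin flip
```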
matrices
<blockquote> <p>Let <span class="math-container">$A$</span> be a block upper triangular matrix:</p> <p><span class="math-container">$$A = \begin{pmatrix} A_{1,1}&amp;A_{1,2}\\ 0&amp;A_{2,2} \end{pmatrix}$$</span></p> <p>where <span class="math-container">$A_{1,1} ∈ C^{p \times p}$</span>, <span class="math-container">$...
<p>Let $A$ be the original matrix of size $n \times n$. One way out is to use the identity. (Result from Schur Complement) <a href="https://en.wikipedia.org/wiki/Schur_complement" rel="noreferrer">https://en.wikipedia.org/wiki/Schur_complement</a></p> <p>$\det \left( \begin{matrix} B_{1,1}&amp;B_{1,2}\\ B_{2,1 }&amp;B...
<p>A simpler way is from the definition. It is easy to show that if $\lambda_1$ is an eigenvalue of the upper diagonal block $A_{1,1}$, with eigenvector $p_1$ (of size $n_1$), then it's also an eigenvalue of the full matrix, with the same eigenvector augmented with zeros.</p> <p>$A_{1,1} \; p_1 = \lambda_1 p_1$ with $...
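The block-triangular determinant identity $\det(A)=\det(A_{1,1})\det(A_{2,2})$ can be spot-checked from the Leibniz definition; the $4\times4$ example below is my own.

```python
from itertools import permutations

def det(M):
    # Leibniz formula: sum over permutations with the inversion-count sign
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

A11 = [[1, 2], [3, 4]]   # det = -2
A22 = [[5, 6], [7, 8]]   # det = -2
A = [[1, 2, 9, 10],      # block upper triangular: zero lower-left block
     [3, 4, 11, 12],
     [0, 0, 5, 6],
     [0, 0, 7, 8]]
print(det(A), det(A11) * det(A22))  # both 4
```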
probability
<p>For independent events, the probability of <em>both</em> occurring is the <strong>product</strong> of the probabilities of the individual events: </p> <p>$Pr(A\; \text{and}\;B) = Pr(A \cap B)= Pr(A)\times Pr(B)$.</p> <p>Example: if you flip a coin twice, the probability of heads both times is: $1/2 \times 1/2 =1/4...
<p>I like this answer taken from <a href="https://web.archive.org/web/20180128170657/http://mathforum.org/library/drmath/view/74065.html" rel="nofollow noreferrer">Link</a> :</p> <p>&quot; It may be clearer to you if you think of probability as the fraction of the time that something will happen. If event A happens 1/...
<p>If you randomly pick one from $n$ objects, each object has the probability $\frac{1}{n}$ of being picked. Now imagine you pick randomly <em>twice</em> - one object from a set of $n$ objects, and a second object from a <em>different</em> set of $m$ objects. There are $n\cdot m$ possible pairs of objects, and thus th...
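The coin-flip example can be verified by exact enumeration of the sample space: all four outcomes are equally likely, and exactly one of them is HH.

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes of two independent fair coin flips
outcomes = list(product("HT", repeat=2))
p_both_heads = Fraction(sum(1 for o in outcomes if o == ("H", "H")),
                        len(outcomes))
print(p_both_heads)  # 1/4, i.e. 1/2 * 1/2
```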
probability
<p>What’s the probability of getting 3 heads and 7 tails if one flips a fair coin 10 times. I just can’t figure out how to model this correctly.</p>
<p>Your question is related to the <a href="http://en.wikipedia.org/wiki/Binomial_distribution">binomial distribution</a>.</p> <p>You do $n = 10$ trials. The probability of one successful trial is $p = \frac{1}{2}$. You want $k = 3$ successes and $n - k = 7$ failures. The probability is:</p> <p>$$ \binom{n}{k} p^k (1...
<p>We build a mathematical model of the experiment. Write H for head and T for tail. Record the results of the tosses as a string of length $10$, made up of the letters H and/or T. So for example the string HHHTTHHTHT means that we got a head, then a head, then a head, then a tail, and so on.</p> <p>There are $2^{1...
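The binomial count from the accepted answer is a one-liner with `math.comb`: choose which 3 of the 10 positions are heads, out of $2^{10}$ equally likely strings.

```python
from math import comb

n, k = 10, 3
p = comb(n, k) / 2 ** n
print(comb(n, k), p)  # 120 favourable strings; probability 120/1024 = 15/128
```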
game-theory
<p>I am attempting to determine two variables in this game:</p> <ol> <li>The optimum strategy: (What number the bettor should stay at)</li> <li>The expected value given perfect play: (The percent return on a bet when using the optimum strategy)</li> </ol> <p>Here is how the game works: There are two players. One is t...
<p>In the following I shall treat the continuous version of the game: The rolling of the die is modeled by the drawing of a real number uniformly distributed in $[0,1]$.</p> <p>Only the bettor can have a strategy, and this strategy is completely characterized by some number $\xi\in\,]0,1[$. It reads as follows: ...
<p>I would approach this by essentially working backwards. First, you can figure out what the dealer's probabilities are if the bettor outcome is fixed. Let $p_w(m,n)$, $p_d(m,n)$, $p_l(m,n)$ be the probabilities that the dealer wins, draws or loses respectively if his current total is $n$ and bettor's total is $m$. (A...
probability
<p>The Product of Two Gaussian Random Variables is not Gaussian distributed:</p> <ul> <li><a href="https://math.stackexchange.com/questions/101062/is-the-product-of-two-gaussian-random-variables-also-a-gaussian">Is the product of two Gaussian random variables also a Gaussian?</a></li> <li><a href="http://mathworld.wolf...
<p>The product of the PDFs of two random variables $X$ and $Y$ will give the <em>joint</em> distribution of the vector-valued random variable $(X,Y)$ in the case that $X$ and $Y$ are independent. Therefore, if $X$ and $Y$ are normally distributed independent random variables, the product of their PDFs is <strong>bivar...
<p>1.) The first example is already sufficient. Just to throw in another one for a sum of Gaussian variables, consider diffusion: at each step in time a particle is perturbed by a random, Gaussian-distributed step in space. At each time the distribution of its possible positions in space will be a Gaussian because the ...
probability
<p>I saw this problem yesterday on <a href="https://www.reddit.com/r/mathriddles/comments/3sa4ci/colliding_bullets/" rel="noreferrer">reddit</a> and I can't come up with a reasonable way to work it out.</p> <hr /> <blockquote> <p>Once per second, a bullet is fired starting from <span class="math-container">$x=0$</span>...
<p>Among the <span class="math-container">$N$</span> bullets fired at time <span class="math-container">$0,1,2...N$</span> (with <span class="math-container">$N$</span> even), let us call <span class="math-container">$B_{\max, N}$</span> the one with the highest velocity. We have two cases:</p> <ul> <li><p>if <span cla...
<p>I did some experiment with Mathematica for even $N$ and found some regularities, but I don't have an explanation for them at the moment.</p> <p>1) If we fix $N$ different speeds for the bullets and count how many bullets escape to infinity for all the $N!$ permutations of those speeds among bullets, then the result...
logic
<p>Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:</p> <p><span class="math-container">$$x^2\ne x\implies x\ne 1$$</span></p> <p>I immediately answered true, but for some reason, everyone (including my classmates and ...
<p>The short answer is: Yes, it is true, because the contrapositive just expresses the fact that $1^2=1$.</p> <p>But in controversial discussions of these issues, it is often (but not always) a good idea to try out non-mathematical examples:</p> <hr> <p>"If a nuclear bomb drops on the school building, you die."</p> ...
<p>First, some general remarks about logical implications/conditional statements. </p> <ol> <li><p>As you know, $P \rightarrow Q$ is true when $P$ is false, or when $Q$ is true. </p></li> <li><p>As mentioned in the comments, the <em>contrapositive</em> of the implication $P \rightarrow Q$, written $\lnot Q \righta...
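Point 2 above (the equivalence of an implication with its contrapositive) can be confirmed mechanically by checking every row of the truth table:

```python
def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false
    return (not p) or q

# P -> Q and (not Q) -> (not P) agree on all four truth-table rows
agree = all(implies(p, q) == implies(not q, not p)
            for p in (False, True) for q in (False, True))
print(agree)  # True
```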
logic
<p>I wanted to give an easy example of a non-constructive proof, or, more precisely, of a proof which states that an object exists, but gives no obvious recipe to create/find it.</p> <p>Euclid's proof of the infinitude of primes came to mind, however there is an obvious way to "fix" it: just try all the numbers betwee...
<p>Some digit occurs infinitely often in the decimal expansion of $\pi$.</p>
<p>Claim: There exist irrational $x,y$ such that $x^y$ is rational.</p> <p>Proof: If $\sqrt2^{\sqrt2}$ is rational, take $x=y=\sqrt 2$. Otherwise take $x=\sqrt2^{\sqrt2}, y=\sqrt2$, so that $x^y=2$.</p>
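Whichever case of the proof applies, the second case really does land on a rational number, since $\bigl(\sqrt2^{\sqrt2}\bigr)^{\sqrt2}=\sqrt2^{\,2}=2$; a quick floating-point check:

```python
import math

s = math.sqrt(2)
value = (s ** s) ** s   # = sqrt(2) ** 2 = 2, up to rounding
print(value)            # about 2.0
```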
matrices
<p>I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view: </p> <blockquote> <p>"The rank of a matrix A is the number of non-zero rows in the reduced row-echelon form of A".</p> </blockquote> <p>The lecturer then explained t...
<p>The answer is yes. This statement often goes under the name "row rank equals column rank". Knowing that, it is easy to search the internet for proofs.</p> <p>Also any reputable linear algebra text should prove this: it is indeed a rather important result.</p> <p>Finally, since you said that you had only a substi...
<p>There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself):</p> <p><a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29" rel="noreferrer">http...
differentiation
<p>Are $\sin$ and $\cos$ the only functions that satisfy the following relationship: $$ x'(t) = -y(t)$$ and $$ y'(t) = x(t) $$</p>
<p>The relationships $x'(t) = -y(t)$ and $y'(t) = x(t)$ imply $$x''(t) = -y'(t) = -x(t)$$ i.e. $$x''(t) = -x(t)$$ which only has solutions $x(t) = A \cos t + B \sin t$ for some constants $A$, $B$. For a given choice of the constants we then get $y(t) = -x'(t) = A \sin t - B \cos t$.</p>
<p>Basically, yes, they are. More precisely: if $x,y\colon\mathbb{R}\longrightarrow\mathbb{R}$ are differentiable functions such that $x'=-y$ and that $y'=x$, then there are numbers $k$ and $\omega$ such that$$(\forall t\in\mathbb{R}):x(t)=k\cos(t+\omega)\text{ and }y(t)=k\sin(t+\omega).$$</p>
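A numeric spot-check of the general solution above, with sample constants $k$ and $\omega$ of my own choosing: central-difference derivatives confirm $x'=-y$ and $y'=x$ at several times.

```python
import math

k, w = 2.5, 0.7  # arbitrary sample constants
x = lambda t: k * math.cos(t + w)
y = lambda t: k * math.sin(t + w)

eps = 1e-6
ok = True
for t in [0.0, 0.5, 1.0, 2.0]:
    dx = (x(t + eps) - x(t - eps)) / (2 * eps)
    dy = (y(t + eps) - y(t - eps)) / (2 * eps)
    # Check x'(t) = -y(t) and y'(t) = x(t) up to discretisation error
    ok = ok and abs(dx + y(t)) < 1e-6 and abs(dy - x(t)) < 1e-6
print(ok)  # True
```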
linear-algebra
<blockquote> <p>Alice and Bob play the following game with an $n \times n$ matrix, where $n$ is odd. Alice fills in one of the entries of the matrix with a real number, then Bob, then Alice and so forth until the entire matrix is filled. At the end, the determinant of the matrix is taken. If it is nonzero, Alice wins...
<p>I tried to approach it from Leibniz formula for determinants</p> <p>$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n A_{i,\sigma_i}.$$</p> <p>There are $n!$ factorial terms in this sum. Alice will have $\frac{n^2+1}{2}$ moves whereas Bob has $\frac{n^2-1}{2}$ moves. There are $n^2$ variab...
<p>The problem states that <span class="math-container">$n$</span> is odd, ie., <span class="math-container">$n \geq 3$</span>.</p> <p>With Alice and Bob filling the determinant slots in turns, the objective is for Alice to obtain a non-zero determinant value and for Bob to obtain a zero determinant value.</p> <p>So, A...
linear-algebra
<p>Let $A$ be the matrix $$A = \left(\begin{array}{cc} 41 &amp; 12\\ 12 &amp; 34 \end{array}\right).$$</p> <p>I want to decompose it into the form of $B^2$.</p> <p>I tried diagonalization , but can not move one step further.</p> <p>Any thought on this? Thanks a lot!</p> <p>ONE STEP FURTHER:</p> <p>How to find a...
<p>This is an expansion of Arturo's comment.</p> <p>The matrix has eigenvalues $50,25$, and eigenvectors $(4,3),(-3,4)$, so it eigendecomposes to $$A=\begin{pmatrix}4 &amp; -3 \\ 3 &amp; 4\end{pmatrix} \begin{pmatrix}50 &amp; 0 \\ 0 &amp; 25\end{pmatrix} \begin{pmatrix}4 &amp; -3 \\ 3 &amp; 4\end{pmatrix}^{-1}.$$</p> ...
<p>For the first part of your question, here is a solution that only works for 2-by-2 matrices, but it has the merit that <strong><em>no eigenvalue is needed</em></strong>.</p> <p>Recall that in the two-dimensional case, there is a magic equation that is useful in many situations. It is $X^2-({\rm tr}X)X+(\det X)I=0$,...
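Following the eigendecomposition route from the first answer, $B=P\,\operatorname{diag}(\sqrt{50},5)\,P^{-1}$ with $P$ built from the eigenvectors $(4,3)$ and $(-3,4)$; a short check that $B^2$ recovers $A$:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[41.0, 12.0], [12.0, 34.0]]

# Eigenvectors (4,3), (-3,4) with eigenvalues 50 and 25; det(P) = 25
P = [[4.0, -3.0], [3.0, 4.0]]
Pinv = [[4.0 / 25, 3.0 / 25], [-3.0 / 25, 4.0 / 25]]
D_sqrt = [[math.sqrt(50), 0.0], [0.0, 5.0]]  # square roots of the eigenvalues

B = matmul(matmul(P, D_sqrt), Pinv)
B2 = matmul(B, B)
print(B2)  # recovers A up to rounding
```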
game-theory
<p>I am trying to understand how to compute all Nash equilibria in a 2 player game, but I fail when there are more than 2 possible options to play. Could somebody explain to me how to calculate a matrix like this (without computer) \begin{matrix} 1,1 &amp; 10,0 &amp; -10,1 \\ 0,10 &amp; 1,1 &amp; 10,1 \\ 1,-10 &...
<p>A Nash equilibrium is a profile of strategies <span class="math-container">$(s_1,s_2)$</span> such that the strategies are best responses to each other, i.e., no player can do strictly better by deviating. This helps us to find the (pure strategy) Nash equilibria.</p> <p>To start, we find the best response for playe...
<p>I think I finally understood how to get all equilibria via the support concept.</p> <p>Given payoff matrix A for Player 1 and B for Player 2. That a point $(x,y)=(x_1,x_2,x_3,y_1,y_2,y_3)$ is a Nash equilibrium with supp $x =I$ and supp $ y =J$ (which means that $x_i&gt;0 $ for $i \in I$ and $ x_i=0 $ for $ i \not\...
probability
<p>Say I have $X \sim \mathcal N(a, b)$ and $Y\sim \mathcal N(c, d)$. Is $XY$ also normally distributed?</p> <p>Is the answer any different if we know that $X$ and $Y$ are independent?</p>
<p>The product of two Gaussian random variables is distributed, in general, as a linear combination of two Chi-square random variables:</p> <p>$$ XY \,=\, \frac{1}{4} (X+Y)^2 - \frac{1}{4}(X-Y)^2$$ </p> <p>Now, $X+Y$ and $X-Y$ are Gaussian random variables, so that $(X+Y)^2$ and $(X-Y)^2$ are Chi-square distributed w...
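<p>A quick simulation illustrates the non-normality directly. For independent standard normals, the product $XY$ has excess kurtosis $6$ (since $E[(XY)^4]=E[X^4]E[Y^4]=9$ while the variance is $1$), whereas any Gaussian has excess kurtosis $0$:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
prod = x * y

# Excess kurtosis: 0 for a Gaussian, 6 for the product of two standard normals.
centered = prod - prod.mean()
excess_kurtosis = (centered**4).mean() / centered.var()**2 - 3
print(excess_kurtosis)   # close to 6, far from the Gaussian value 0
```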
<p>As @Yemon Choi showed in the first question, without any hypothesis the answer is negative since $P(X^2&lt;0)=0$ whereas $P(U&lt;0)\neq 0$ if $U$ is Gaussian.</p> <p>For the second question the answer is also no. Take $X$ and $Y$ two Gaussian random variables with mean $0$ and variance $1$. Since they have the same...
logic
<p>Can anyone explain why the predicate <code>all</code> is true for an empty set? If the set is empty, there are no elements in it, so there are not really any elements to apply the predicate to. So it feels to me it should be false rather than true.</p>
<p>It hinges on the Law of the Excluded Middle. The claim itself is either TRUE or FALSE, one way or the other, not both, not neither.<br><br> Pretend that I am asserting "For every $x\in S$, property $P(x)$ holds." How could you declare me to be a liar? You would have to produce an element of the set ($S=\varnothing$,...
<blockquote> <p>"All of my children are rock stars."</p> <p>"If we go through the list of my children, one at a time, you will never find one that is not a rock star."</p> </blockquote> <p>Do you want the above two sentences to mean the same thing?</p> <p>Also, do you want</p> <blockquote> <p>"Not all of ...
combinatorics
<p>One of my friends found this riddle.</p> <blockquote> <p>There are 100 soldiers. 85 lose a left leg, 80 lose a right leg, 75 lose a left arm, 70 lose a right arm. What is the minimum number of soldiers losing all 4 limbs?</p> </blockquote> <p>We can't seem to agree on a way to approach this.</p> <p>Right off the bat...
<p>Here is a way of rewriting your original argument that should convince your friend:</p> <blockquote> <p>Let $A,B,C,D\subset\{1,2,\dots,100\}$ be the four sets, with $|A|=85$,$|B|=80$,$|C|=75$,$|D|=70$. Then we want the minimum size of $A\cap B\cap C\cap D$. Combining the fact that $$|A\cap B\cap C\cap D|=100-|A...
<p>If you add up all the injuries, there is a total of 310 sustained. In the extreme case, each of the 100 soldiers lost 3 limbs, which accounts for 300 injuries and leaves 10 remaining. Therefore, at least 10 soldiers must have sustained an additional injury, thus losing all 4 limbs.</p> <p>The manner in which you've argued your answer seems to me logical and correct.</p>
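<p>The counting argument in both answers amounts to one line of arithmetic:</p>

```python
# Each of the 100 soldiers can absorb at most 3 injuries without losing
# all four limbs; 310 total injuries therefore force at least 10 soldiers
# to lose all four.
injuries = [85, 80, 75, 70]
soldiers = 100
min_all_four = sum(injuries) - 3 * soldiers
print(min_all_four)  # → 10
```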
differentiation
<p>I'm studying about EM-algorithm and on one point in my reference the author is taking a derivative of a function with respect to a matrix. Could someone explain how does one take the derivative of a function with respect to a matrix...I don't understand the idea. For example, lets say we have a multidimensional Gau...
<p>It's not the derivative with respect to a matrix really. It's the derivative of $f$ with respect to each element of a matrix and the result is a matrix.</p> <p>Although the calculations are different, it is the same idea as a Jacobian matrix. Each entry is a derivative with respect to a different variable.</p> <p>...
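<p>To make the elementwise picture concrete, here is a finite-difference sketch for a hypothetical scalar function of a matrix, $f(A)=\operatorname{tr}(A^2)$, whose matrix of partials works out to $2A^T$:</p>

```python
import numpy as np

def f(A):
    return np.trace(A @ A)        # a scalar function of a matrix

A = np.array([[1.0, 2.0], [3.0, 4.0]])
h = 1e-6
grad = np.zeros_like(A)
for i in range(2):                # one partial derivative per matrix entry
    for j in range(2):
        E = np.zeros_like(A)
        E[i, j] = h
        grad[i, j] = (f(A + E) - f(A - E)) / (2 * h)

print(np.allclose(grad, 2 * A.T, atol=1e-4))  # → True
```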
<p>You can view this in the same way you would view a function of any vector. A matrix is just a vector in a normed space where the norm can be represented in any number of ways. One possible norm would be the root-mean-square of the coefficients; another would be the sum of the absolute values of the matrix coefficien...
matrices
<p>Given an $n\times n$-matrix $A$ with integer entries, I would like to decide whether there is some $m\in\mathbb N$ such that $A^m$ is the identity matrix.</p> <p>I can solve this by regarding $A$ as a complex matrix and computing its Jordan normal form; equivalently, I can compute the eigenvalues and check whether ...
<p>The following conditions on an $n$ by $n$ integer matrix $A$ are equivalent: </p> <p>(1) $A$ is invertible and of finite order. </p> <p>(2) The minimal polynomial of $A$ is a product of distinct cyclotomic polynomials. </p> <p>(3) The elementary divisors of $A$ are cyclotomic polynomials. </p>
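<p>For small examples, the naive test can be sketched directly; note the cap on $m$ below is an ad hoc illustration, whereas the cyclotomic criterion above is what yields a provably sufficient bound on the order:</p>

```python
import numpy as np

def finite_order(A, max_m=12):
    """Return the least m <= max_m with A^m == I, else None.
    The cap max_m is illustrative, not a proven bound."""
    n = len(A)
    P = np.eye(n, dtype=int)
    for m in range(1, max_m + 1):
        P = P @ A
        if np.array_equal(P, np.eye(n, dtype=int)):
            return m
    return None

R = np.array([[0, -1], [1, 0]])      # 90-degree rotation: order 4
print(finite_order(R))               # → 4
```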
<p>Answer amended in view of Rasmus's comment:</p> <p>I'm not sure how useful it is, but here's a remark. If $A$ has finite order, clearly $\{\|A^{m}\|: m \in \mathbb{N} \}$ is bounded (any matrix norm you care to choose will do). </p> <p>On the other hand, if the semisimple part (in its Jordan decomposition as a co...
logic
<p>Is $\forall x\,\exists y\, Q(x, y)$ the same as $\exists y\,\forall x\,Q(x, y)$?</p> <p>I read in the book that the order of quantifiers makes a big difference, so I was wondering if these two expressions are equivalent or not. </p> <p>Thanks. </p>
<p>Certainly not. In that useful &quot;loglish&quot; dialect (a halfway house between the formal language and natural English), the first says</p> <blockquote> <p>For any <span class="math-container">$x$</span>, there is a <span class="math-container">$y$</span> such that <span class="math-container">$Qxy$</span>.</p> ...
<p>Because I <em>really</em> hate the real world analogies (after an exam I was forced to read nearly 300 answers mumbling "every pot has a lid" analogies), let me give you a mathematical way of understanding this without evaluating the actual formulas.</p> <p>Let $M$ be an arbitrary structure for our language, let $A...
combinatorics
<p>I think this soft question may be marked "opinion-based" or "off-topic", but I really do not know where else to get help. So <em>please</em> read my question before it's closed, I am really desperately in need of help... Thanks in advance for all people paying attention to this question!</p> <hr> <p><strong>Backgr...
<p>A soft question should receive a soft answer. This is primarily opinion-based, but I want to make sure that if this question gets closed, you get at least one opinion.</p> <blockquote> <p>Is it normal to find it hard to solve the problems in that book?</p> </blockquote> <p>Combinatorics has always had the reputation of having h...
<p>In my personal experience, I find that reviewing the basics of subjects in which I am advanced will often settle my frustrations and allow me to continue to higher levels. In addition to discrete mathematics, I enjoy learning languages. Through high school, it was very mechanical, not so much fun, but I was always st the t...
combinatorics
<p>In one of his interviews, <a href="https://www.youtube.com/shorts/-qvC0ISkp1k" rel="noreferrer">Clip Link</a>, Neil DeGrasse Tyson discusses a coin toss experiment. It goes something like this:</p> <ol> <li>Line up 1000 people, each given a coin, to be flipped simultaneously</li> <li>Ask each one to flip if <strong>...
<p>It is known (to a nonempty set of humans) that when <span class="math-container">$p=\frac12$</span>, there is no limiting probability. Presumably the analysis can be (might have been) extended to other values of <span class="math-container">$p$</span>. Even more surprisingly, the reason I know this is because it end...
<p>There is a pretty simple formula for the probability of a unique winner, although it involves an infinite sum. To derive the formula, suppose that there are <span class="math-container">$n$</span> people, and that you continue tossing until everyone is out, since they all got tails. Then you want the probability t...
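<p>A small Monte Carlo sketch of this setup: everyone tosses until their first tails, and we estimate the probability that a single person lasts strictly longest. The sizes below are scaled down from 1000 people to keep the simulation fast:</p>

```python
import random

def p_unique_winner(n=200, trials=2000, seed=1):
    """Estimate P(unique survivor) when each of n people counts heads
    before their first tails and the strict maximum wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        best, count = -1, 0
        for _ in range(n):
            heads = 0
            while rng.random() < 0.5:   # heads streak before first tails
                heads += 1
            if heads > best:
                best, count = heads, 1
            elif heads == best:
                count += 1
        wins += (count == 1)
    return wins / trials

print(p_unique_winner())   # typically near 0.72, oscillating with n
```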
linear-algebra
<p>I am trying to understand how - exactly - I go about projecting a vector onto a subspace.</p> <p>Now, I know enough about linear algebra to know about projections, dot products, spans, etc etc, so I am not sure if I am reading too much into this, or if this is something that I have missed.</p> <p>For a class I am ...
<p>I will talk about orthogonal projection here.</p> <p>When one projects a vector, say $v$, onto a subspace, you find the vector in the subspace which is "closest" to $v$. The simplest case is of course if $v$ is already in the subspace, then the projection of $v$ onto the subspace is $v$ itself.</p> <p>Now, the sim...
<p>Take a basis $\{v_1, \dots, v_n\}$ for the "signal subspace" $V$. Let's assume $V$ is finite dimensional for simplicity and practical purposes, but you can generalize to infinite dimensions. Let's also assume the basis is orthonormal.</p> <p>The projection of your signal $f$ onto the subspace $V$ is just</p> <p>...
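<p>With an orthonormal basis, the formula is just a sum of dot products; here is a minimal numeric sketch in $\mathbb{R}^3$:</p>

```python
import numpy as np

# Project f onto span{v1, v2}, with {v1, v2} orthonormal:
# proj = <f, v1> v1 + <f, v2> v2.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
f = np.array([3.0, 4.0, 5.0])
proj = (f @ v1) * v1 + (f @ v2) * v2
print(proj)   # → [3. 4. 0.]
```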
linear-algebra
<p>I'm trying to prove the following: Let $A$ be a $k\times k$ matrix, let $D$ have size $n\times n$, and $C$ have size $n\times k$. Then,</p> <p>$$\det\left(\begin{array}{cc} A&amp;0\\ C&amp;D \end{array}\right) = \det(A)\det(D).$$</p> <p>Can I just say that $AD - 0C = AD$, and I'm done?</p>
<p>If <span class="math-container">$A$</span> is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence both sides of the equation vanish.</p> <p>If <span class="math-container">$A$</span> is not singular, we have</p> <p><span class="math-container">$$\pmatrix{I&am...
<p>As @user153012 is asking for a proof in full detail, here is a brute-force approach using an explicit expression of a determinant of an $n$ by $n$ matrix, say $A = (a[i,j])$, $$\det A = \sum_{\sigma\in S_n}\operatorname{sgn}\sigma \prod_i a[{i,\sigma(i)}],$$ where $S_n$ is the symmetric group on $[n] = \{1,\dots, n\...
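<p>The identity is easy to spot-check numerically with random blocks (a sketch, not a proof):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 4
A = rng.standard_normal((k, k))
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))
M = np.block([[A, np.zeros((k, n))], [C, D]])   # the block-triangular matrix

print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D)))  # → True
```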
linear-algebra
<p>I gave the following problem to students:</p> <blockquote> <p>Two $n\times n$ matrices $A$ and $B$ are <em>similar</em> if there exists a nonsingular matrix $P$ such that $A=P^{-1}BP$.</p> <ol> <li><p>Prove that if $A$ and $B$ are two similar $n\times n$ matrices, then they have the same determinant and th...
<p>If <span class="math-container">$A$</span> is a <span class="math-container">$2\times 2$</span> matrix with determinant <span class="math-container">$d$</span> and trace <span class="math-container">$t$</span>, then the characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-contain...
<p>As Eric points out, such $2\times2$ matrices are special. In fact, there are only two such pairs of matrices. The number depends on how you count, but the point is that such matrices have a <em>very</em> special form.</p> <p>Eric proved that the two matrices must have a double eigenvalue. Let the eigenvalue be $\la...
combinatorics
<p>The intent of this question is to provide a list of learning resources for people who are new to generating functions and would like to learn about them. </p> <p>I'm personally interested in combinatorics, and I sometimes use generating functions in answers to combinatorial questions on stackexchange, but I know m...
<p>Here are some resources to get you started on generating functions. With one exception, which is clearly designated, any of the items mentioned here should be suitable to provide a gentle introduction to GFs for total newbies.</p> <ul> <li><p><em>generatingfunctionology</em> by Herbert S. Wilf is probably the best i...
<blockquote> <p>One of the treasures which might fit the needs is <em><a href="https://www.csie.ntu.edu.tw/~r97002/temp/Concrete%20Mathematics%202e.pdf" rel="noreferrer">Concrete Mathematics</a></em> by R.L. Graham, D.E. Knuth and O. Patashnik. </p> <p>A starting point could be section 5.4 <em>Generating Fun...
probability
<p>A mathematician and a computer are playing a game: First, the mathematician chooses an integer from the range $2,...,1000$. Then, the computer chooses an integer <em>uniformly at random</em> from the same range. If the numbers chosen share a prime factor, the larger number wins. If they do not, the smaller number wi...
<p>For fixed range:</p> <pre><code>range = 16;
a = Table[Table[FactorInteger[y][[n, 1]], {n, 1, PrimeNu[y]}], {y, 1, range}];
b = Table[Sort@DeleteDuplicates@Flatten@Table[
      Table[Position[a, a[[y, m]]][[n, 1]], {n, 1, Length@Position[a, a[[y, m]]]}],
      {m, 1, PrimeNu[y]}], {y, 1, range}];
c = Table[Complement[Range[ra...
<p>First consider choosing a prime $p$ in the range $[2,N]$. You lose only if the computer chooses a multiple of $p$ or a number smaller than $p$, which occurs with probability $$ \frac{(\lfloor{N/p}\rfloor-1)+(p-2)}{N-1}=\frac{\lfloor{p+N/p}\rfloor-3}{N-1}. $$ The term inside the floor function has derivative $$ 1-\f...
geometry
<p><a href="https://i.sstatic.net/UK66w.png" rel="noreferrer"><img src="https://i.sstatic.net/UK66w.png" alt="illustration"></a></p> <p>There is a river in the shape of an annulus. Outside the annulus there is town "A" and inside there is town "B". One must build a bridge towards the center of the annulus such that th...
<p>To take advantage of <a href="http://en.wikipedia.org/wiki/Snell%27s_law" rel="nofollow noreferrer">Snell's law</a>, applying a limit argument: We want to find the trajectory of a light ray where the velocity on the inner and outer terrain is constant (say <span class="math-container">$v$</span>) and the velocity on...
<p>(Edited to make coordinates symmetrical about the $x$-axis from the beginning. Also, notation is changed slightly to help distinguish the path's fixed endpoints from the bridge's variable endpoints.)</p> <hr> <p>Take our path to have fixed endpoints $A(a\cos\phi, a\sin\phi)$, $B(b \cos\phi,-b\sin\phi)$ and our bri...
combinatorics
<p>Let <span class="math-container">$\mathbb{N}=\{0,1,2,\ldots\}$</span>. Does there exist a bijection <span class="math-container">$f\colon\mathbb{N}\to\mathbb{N}$</span> such that <span class="math-container">$f(0)=0$</span> and <span class="math-container">$|f(n)-f(n-1)|=n$</span> for all <span class="math-container...
<p>The good news is that your function will never get stuck. The bad news is that the reason is quite easy: your function will never have value 5. Let's see how to get it.</p> <p>Suppose there are $k$ and $\ell$ such that $f(\ell) = 5$, and $f(n) = f(n - 1) + n$ for all $13 \le n \le k$, and $f(n) = f(n - 1) - n$ for all $...
<p><strong>Such a bijection exists</strong>.</p> <p>It is convenient to assume that we are looking for an infinite jump sequence on $\mathbb N$ starting from $0$, never returning to a previous position and ultimately filling all $\mathbb N$. The distance of the $n$th jump is restricted to be $n$, we can choose only it...
differentiation
<p>In complex analysis class the professor said that in complex analysis, if a function is differentiable once, it can be differentiated an infinite number of times. In real analysis there are cases where a function can be differentiated twice, but not 3 times.</p> <p>Does anyone have an idea what he had in mind? I mean specific e...
<blockquote> <p><em>What function cannot be differentiated $3$ times?</em></p> </blockquote> <p>Take an integrable discontinuous function (such as the sign function), and integrate it three times. Its first integral is the absolute value function, which is continuous, as are all of its further integrals. The third integral (here $\tfrac16 x^2|x|$, up to constants) is then twice differentiable, but a third derivative would have to be the original discontinuous function, so it fails to exist at the discontinuity.</p>
<p>$f(x) =\begin{cases} x^3, &amp; \text{if $x\ge 0$} \\ -x^3, &amp; \text{if $x \lt 0$} \\ \end{cases}$</p> <p>The first and second derivatives equal $0$ when $x=0$, but the third derivative results in different slopes.</p>
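<p>A finite-difference sketch makes the failure visible: the one-sided third derivatives of this $f(x)=x^2|x|$ disagree at $0$:</p>

```python
# f(x) = x^3 for x >= 0, -x^3 for x < 0 (i.e. f(x) = x^2 * |x|).
def f(x):
    return x**3 if x >= 0 else -x**3

h = 1e-4
# One-sided third-order finite differences at 0:
d3_right = (f(3*h) - 3*f(2*h) + 3*f(h) - f(0)) / h**3
d3_left = (f(0) - 3*f(-h) + 3*f(-2*h) - f(-3*h)) / h**3
print(round(d3_right), round(d3_left))   # → 6 -6: no third derivative at 0
```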
logic
<p>So a while ago I saw a proof of the Completeness Theorem, and the hard part of it (all logically valid formulae have a proof) went thusly:</p> <blockquote> <p>Take a theory $K$ as your base theory. Suppose $\varphi$ is logically valid but not a theorem. Then you can add $\neg\varphi$ to $K$'s axioms, forming a ne...
<p>The property that "every consistent theory has a model" does <strong>not</strong> hold for second-order logic.</p> <p>Consider, for example the second-order Peano axioms, which are well known to have only $\mathbb N$ as their model (in standard semantics). Extend the language of the theory with a new constant $c$, ...
<p>First of all, even in the first-order case that "proof" doesn't work: how do you get $M$? You need the assumption that, if $K'$ is consistent, then it has a model; but this is exactly what you are trying to prove.</p> <p>Second of all, in order to even ask if the completeness theorem holds for second-order logic, w...
logic
<p>In logic, a semantics is said to be compact iff, whenever every finite subset of a set of sentences has a model, the entire set does too. </p> <p>Most logic texts either don't explain the terminology, or allude to the topological property of compactness. I see an analogy as, given a topological space X and a subset...
<p>The Compactness Theorem is equivalent to the compactness of the <a href="http://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_Boolean_algebras" rel="noreferrer">Stone space</a> of the <a href="http://en.wikipedia.org/wiki/Lindenbaum%E2%80%93Tarski_algebra" rel="noreferrer">Lindenbaum–Tarski algebra</a> ...
<p>The analogy for the compactness theorem for propositional calculus is as follows. Let $p_i $ be propositional variables; together, they take values in the product space $2^{\mathbb{N}}$. Suppose we have a collection of statements $S_t$ in these boolean variables such that every finite subset is satisfiable. Then I ...
probability
<p>It seems that there are two ideas of expectation, variance, etc. going on in our world.</p> <p><strong>In any probability textbook:</strong></p> <p>I have a random variable <span class="math-container">$X$</span>, which is a <em>function</em> from the sample space to the real line. Ok, now I define the expectation o...
<p>You ask a very insightful question that I wish were emphasized more often.</p> <p><strong>EDIT</strong>: It appears you are seeking reputable sources to justify the above. Sources and relevant quotes have been provided.</p> <p>Here's how I would explain this:</p> <ul> <li>In probability, the emphasis is on populatio...
<p>The first definitions you gave are correct and standard, and statisticians and data scientists will agree with this. (These definitions are given in statistics textbooks.) The second set of quantities you described are called the &quot;sample mean&quot; and the &quot;sample variance&quot;, not mean and variance.</p>...
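<p>A tiny sketch of the distinction for a fair die: the population mean is the fixed number $3.5$ determined by the distribution, while the sample mean is a statistic of the data that merely estimates it:</p>

```python
import random

rng = random.Random(0)
population_mean = 3.5                            # E[X], a constant of the model
sample = [rng.randint(1, 6) for _ in range(100_000)]
sample_mean = sum(sample) / len(sample)          # a function of the observed data
print(abs(sample_mean - population_mean) < 0.05) # → True (law of large numbers)
```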
logic
<p>I recently pulled out <a href="http://digitalcommons.trinity.edu/mono/7/" rel="noreferrer">my old Real Analysis textbook</a> and noticed something that didn't stand out when I was taking the class all those years ago. When the book is listing out the axioms it seems to assume we understand what equality is.</p> <p>...
<p>You're right -- the formal properties of equality need to be defined somewhere.</p> <p>The properties we need are the pure equality axioms: $$ x=x \qquad x=y\Rightarrow y=x \qquad x=y\land y=z\Rightarrow x=z, $$ <em>plus</em> the crucial property that we're allowed to substitute equals for equals in an expression a...
<p><em>Two sets are equal if they are the same, i.e. contain the same elements.</em></p> <p>So in your context, $x+y=y+x$ for all $x,y \in \mathbb R$ means that the two sets $x+y$ and $y+x$ are the same. In fact there are many ways to define the reals: at least Dedekind cuts, or via Cauchy sequences. In all cases, a ...
linear-algebra
<p>This is a question from the free Harvard online abstract algebra <a href="http://www.extension.harvard.edu/open-learning-initiative/abstract-algebra" rel="noreferrer">lectures</a>. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see <a href="https://math.stackexchange.com/ques...
<p>A direct computation is also fine: $$(PP^T)_{ij} = \sum_{k=1}^n P_{ik} P^T_{kj} = \sum_{k=1}^n P_{ik} P_{jk}$$ but $P_{ik}$ is usually 0, and so $P_{ik} P_{jk}$ is usually 0. The only time $P_{ik}$ is nonzero is when it is 1, but then there are no other $i&#39; \neq i$ such that $P_{i&#39;k}$ is nonzero ($i$ is the...
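<p>The computation can be spot-checked directly (a minimal sketch):</p>

```python
import numpy as np

perm = [2, 0, 1]                        # a permutation of {0, 1, 2}
P = np.eye(3, dtype=int)[perm]          # permutation matrix: shuffled rows of I
print(np.array_equal(P @ P.T, np.eye(3, dtype=int)))   # → True
```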
<p>Another way to prove it is to realize that any permutation matrix is the product of <em>elementary</em> permutations, where by <em>elementary</em> I mean a permutation that swaps two entries. Since in an identity matrix swapping $i$ with $j$ in a row is the same as swapping $j$ with $i$ in a column, such matrix is s...
linear-algebra
<p>I was wondering if "dot product" is technically the term used when discussing whether the product of $2$ vectors is equal to $0$. And would anyone agree that "inner product" is the term used when discussing whether the integral of the product of $2$ functions is equal to $0$? Or is there no difference at all between a dot product an...
<p>In my experience, <em>the dot product</em> refers to the product $\sum a_ib_i$ for two vectors $a,b\in \Bbb R^n$, and that "inner product" refers to a more general class of things. (I should also note that the real dot product is extended to a complex dot product using the complex conjugate: $\sum a_i\overline{b}_i)...
<p>A dot product is a very specific inner product that works on $\Bbb{R}^n$ (or more generally $\Bbb{F}^n$, where $\Bbb{F}$ is a field) and refers to the inner product given by</p> <p>$$(v_1, ..., v_n) \cdot (u_1, ..., u_n) = v_1 u_1 + ... + v_n u_n$$</p> <p>More generally, an inner product is a function that takes i...
logic
<p>In a probability course, a game was introduced for which a logical approach won't yield a winning strategy, but a probabilistic one will. My problem is that I don't remember the details (the rules of the game)! I would be thankful if anyone can complete the description of the game. I give the outline of the game, be...
<p>In the <a href="http://blog.plover.com/math/envelope.html" rel="noreferrer">Envelope Paradox</a> player 1 writes any two different numbers $a&lt; b$ on two slips of paper. Then player 2 draws one of the two slips each with probability $\frac 12$, looks at its number $x$, and predicts whether $x$ is the larger or th...
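<p>A simulation sketch of player 2's threshold strategy: guess "larger" exactly when the observed number exceeds a random threshold. The numbers $a,b$ and the exponential threshold law below are illustrative; any threshold distribution with full support works:</p>

```python
import random

rng = random.Random(0)
a, b = 10.0, 20.0                 # player 1's hidden numbers, a < b
trials = 100_000
wins = 0
for _ in range(trials):
    x = a if rng.random() < 0.5 else b   # slip drawn with probability 1/2
    t = rng.expovariate(1 / 15.0)        # random threshold, full support on (0, inf)
    predicted_larger = x > t
    wins += predicted_larger == (x == b) # correct prediction?
print(wins / trials)   # strictly above 1/2
```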
<p>I know this is a late answer, but I'm pretty sure I know what game OP is thinking of (and none of the other answers have it right).</p> <p>The way it works is person A chooses to hide either $100$ or $200$ dollars in an envelope, and person B has to guess the amount that person A hid. If person B guesses correctly ...
differentiation
<p>What is the Jacobian matrix?</p> <p>What are its applications? </p> <p>What is its physical and geometrical meaning?</p> <p>Can someone please explain with examples?</p>
<p>The Jacobian $df_p$ of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$ at a point $p$ is its best linear approximation at $p$, in the sense that $f(p + h) = f(p) + df_p(h) + o(|h|)$ for small $h$. This is the "correct" generalization of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$, and...
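<p>A finite-difference sketch for a hypothetical map $f:\mathbb{R}^2\to\mathbb{R}^2$, checked against the hand-computed Jacobian:</p>

```python
import numpy as np

def f(p):                                  # f(x, y) = (x^2 y, 5x + sin y)
    x, y = p
    return np.array([x**2 * y, 5 * x + np.sin(y)])

def jacobian(f, p, h=1e-6):
    """Numerical Jacobian: column j holds the partials with respect to p_j."""
    p = np.asarray(p, dtype=float)
    cols = [(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(len(p))]
    return np.column_stack(cols)

J = jacobian(f, [1.0, 2.0])
expected = np.array([[4.0, 1.0], [5.0, np.cos(2.0)]])   # [[2xy, x^2], [5, cos y]]
print(np.allclose(J, expected, atol=1e-4))              # → True
```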
<p>Here is an <strong>example</strong>. Suppose you have two implicit differentiable functions</p> <p>$$F(x,y,z,u,v)=0,\qquad G(x,y,z,u,v)=0$$</p> <p>and the functions, also differentiable, $u=f(x,y,z)$ and $v=g(x,y,z)$ such that </p> <p>$$F(x,y,z,f(x,y,z),g(x,y,z))=0,\qquad G(x,y,z,f(x,y,z),g(x,y,z))=0.$$</p> <p>I...
linear-algebra
<p>In which cases is the inverse of a matrix equal to its transpose, that is, when do we have <span class="math-container">$A^{-1} = A^{T}$</span>? Is it when <span class="math-container">$A$</span> is orthogonal? </p>
<p>If $A^{-1}=A^T$, then $A^TA=I$. This means that each column has unit length and is perpendicular to every other column. That means it is an orthonormal matrix.</p>
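<p>For instance, a rotation matrix (a minimal sketch):</p>

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: orthonormal columns
print(np.allclose(np.linalg.inv(Q), Q.T))         # → True
```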
<p>You're right. This is the definition of an orthogonal matrix.</p>
differentiation
<p>I am trying to figure out a the derivative of a matrix-matrix multiplication, but to no avail. <a href="https://www.math.uwaterloo.ca/%7Ehwolkowi/matrixcookbook.pdf" rel="noreferrer">This document</a> seems to show me the answer, but I am having a hard time parsing it and understanding it.</p> <p>Here is my problem:...
<p>For the first question alone (without context) I'm going to prove something else first (then check the $\boxed{\textbf{EDIT}}$ for what is asked):</p> <p>Suppose we have three matrices $A,X,B$ that are $n\times p$, $p\times r$, and $r\times m$ respectively. Any element $w_{ij}$ of their product $W=AXB$ is expressed...
<p>Like most articles on Machine Learning / Neural Networks, the linked document is an awful mixture of code snippets and poor mathematical notation.</p> <p>If you read the comments preceding the code snippet, you'll discover that <strong>dX</strong> does not refer to an increment or differential of <span class="math-c...
linear-algebra
<p>This is my first semester of quantum mechanics and higher mathematics and I am completely lost. I have tried to find help at my university, browsed similar questions on this site, looked at my textbook (Griffiths) and read countless of pdf's on the web but for some reason I am just not getting it. </p> <p>Can someo...
<p>In short terms, kets are vectors on your Hilbert space, while bras are linear functionals of the kets to the complex plane</p> <p>$$\left|\psi\right&gt;\in \mathcal{H}$$</p> <p>\begin{split} \left&lt;\phi\right|:\mathcal{H} &amp;\to \mathbb{C}\\ \left|\psi\right&gt; &amp;\mapsto \left&lt;\phi\middle|\psi\right&gt;...
<p>First, the $bra$c$ket$ notation is simply a convenience invented to greatly simplify, and abstractify the mathematical manipulations being done in quantum mechanics. It is easiest to begin explaining the abstract vector we call the "ket". The ket-vector $|\psi\rangle $ is an abstract vector, it has a certain "size...
geometry
<p>Yesterday I was tutoring a student, and the following question arose (number 76):</p> <p><img src="https://i.sstatic.net/7nw6J.png" alt="enter image description here"></p> <p>My student believed the answer to be J: square. I reasoned with her that the information given only allows us to conclude that the <em>top a...
<p>Clearly the figure is a trapezoid because you can construct an infinite number of quadrilaterals consistent with the given constraints so long as the vertical height $h$ obeys $0 &lt; h \leq 9$ inches. Only one of that infinite number of figures is a square.</p> <p>I would email the above statement to the teacher...
<p>Of course, you are right. Send an email to the teacher with a concrete example, given that (s)he seems to be geometrically challenged. For instance, you could attach the following pictures with the email, <strong>which are both drawn to scale</strong>. You should also let him/her know that you need $5$ parameters to...
linear-algebra
<p>How would you prove the trace of a transformation from V to V (where V is finite dimensional) is independent of the basis chosen?</p>
<p>The simplest way is to note that a basis transformation of a transformation $T$ is done via $ATA^{-1}$ where $A$ is an invertible matrix, and that the trace has the property $\operatorname{tr}(AB)=\operatorname{tr}(BA)$. Putting this together, you get $$\operatorname{tr}(ATA^{-1}) = \operatorname{tr}(A^{-1}AT) = \op...
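<p>A random spot-check of $\operatorname{tr}(ATA^{-1})=\operatorname{tr}(T)$ (a sketch; a random Gaussian matrix is invertible with probability 1):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))          # generically invertible
similar = A @ T @ np.linalg.inv(A)
print(np.isclose(np.trace(similar), np.trace(T)))   # → True
```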
<p>An elementary proof could be the following. First, let $A$ be the matrix of your linear transformation in any basis of $V$. The characteristic polynomial of $A$ is </p> <p>$$ Q_A(t) = \mathrm{det}\ (A - tI) = \begin{vmatrix} a^1_1 - t &amp; a^1_2 &amp; \dots &amp; a^1_n \\ a^2_1 &amp; a^2_2 - t &amp;...
logic
<p>Why if in a formal system the Peirce's law $((P\rightarrow Q)\rightarrow P) \rightarrow P$ is true, the law of excluded middle $P \lor \neg P$ is true too?</p>
<p><strong>Intuition:</strong></p> <p>Peirce's law is of the form $$\big((A \to B) \to A\big) \quad \to \quad A,$$ that is, given $(A \to B) \to A$ we could deduce $A$. Therefore, we will try to construct $$((P \lor \neg P) \to \bot) \to (P \lor \neg P).$$ The trick is that for any $A \to B$ we can strengthen $A$ or r...
<p>This is probably what you're looking for: <a href="https://proofwiki.org/wiki/Peirce%27s_Law_is_Equivalent_to_Law_of_Excluded_Middle" rel="nofollow noreferrer">Peirce's Law Equivalent to Law of Excluded Middle</a>. Note, however, that this proof depends on a few assumptions which are <em>not</em> embodied in every l...
number-theory
<p>In <a href="https://math.stackexchange.com/questions/3533451/will-there-at-some-point-be-more-numbers-with-n-factors-than-prime-numbers-for?noredirect=1&amp;lq=1">this</a> question I plotted the number of numbers with <span class="math-container">$n$</span> prime factors. It appears that the further out on the numbe...
<p>Yes, the line for numbers with <span class="math-container">$3$</span> prime factors will be overtaken by another line. As shown &amp; explained in <a href="https://hermetic.ch/prf/plotting_prime_number_freq.htm" rel="nofollow noreferrer">Prime Factors: Plotting the Prime Factor Frequencies</a>, even up to <span cla...
<p>Just another plot to about <span class="math-container">$250\times10^9$</span>, showing the relative amount of numbers below with x factors (with multiplicity) <a href="https://i.sstatic.net/KE3Fa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KE3Fa.png" alt="enter image description here" /></a></p> ...
probability
<p>What is an intuitive interpretation of the 'events' $$\limsup A_n:=\bigcap_{n=0}^{\infty}\bigcup_{k=n}^{\infty}A_k$$ and $$\liminf A_n:=\bigcup_{n=0}^{\infty}\bigcap_{k=n}^{\infty}A_k$$ when $A_n$ are subsets of a measured space $(\Omega, F,\mu)$. Of the first it should be that 'an infinite number of those events is...
<p>Try reading it piece by piece. Recall that $A\cup B$ means that at least one of $A$, $B$ happens and $A\cap B$ means that both $A$ and $B$ happen. Infinite unions and intersections are interpreted similarly. In your case, $\bigcup_{k=n}^{\infty}A_k$ means that at least one of the events $A_k$ for $k\geq n$ happens. ...
<p>The $\limsup$ is the collection of all elements which appear in every tail of the sequence, namely results which occur infinitely often in the sequence of events.</p> <p>The $\liminf$ is the union of all elements appearing in <em>all</em> events from a certain point in time, namely results which occur in all but fi...
geometry
<p>I was inspired by <a href="https://commons.wikimedia.org/wiki/File:Mathematical_implication_diagram-alt-large-print.svg" rel="noreferrer">this</a> flowchart of mathematical sets and wanted to try and visualize it, since I internalize math best in that way. This is what I've come up with so far:</p> <p><a href="http...
<p>My advice is to place a lot more landmarks like <span class="math-container">$\mathbb R^n$</span>. Ideally, every area should have at least one point in it, which will serve to prove that the area really belongs there. It will also clarify what the relationships really mean. For example, all manifolds are metrizable...
<p>Ad the issue with inner product Banach space vs. Hilbert space: Every inner product space induces a norm and every norm induces a metric. A Banach space is a normed vector space such that the induced metric is complete. A Hilbert space is an inner product space such that the induced metric is complete. So in your di...
linear-algebra
<p>Let $A$ be an n×n matrix with eigenvalues $\lambda_i, i=1,2,\dots,n$. Then $\lambda_1^k,\dots,\lambda_n^k$ are eigenvalues of $A^k$.</p> <ol> <li>I was wondering if $\lambda_1^k,\dots,\lambda_n^k$ are <strong>all</strong> the eigenvalues of $A^k$?</li> <li>Are the algebraic and geometric multiplicities of $\lambda_...
<p>The powers of the eigenvalues are indeed all the eigenvalues of $A^k$. I will limit myself to $\mathbb{R}$ and $\mathbb{C}$ for brevity.</p> <p>The algebraic multiplicity of the matrix will indeed be preserved (up to merging as noted in the comments). An easy way to see this is to take the matrix over $\mathbb{C}$ ...
<p>I feel like I should mention this theorem. I forgot its name, but I think it's one of the spectral theorems (the spectral mapping theorem). It says that if $\lambda$ is an eigenvalue of a matrix $A$ and $f(x)$ is any analytic function, then $f(\lambda)$ is an eigenvalue of $f(A)$. So even $\sin(A)$ will have $\sin(\lambda)$ as its eigenvalues.</p> <...
matrices
<p>I've seen the statement "The matrix product of two orthogonal matrices is another orthogonal matrix. " on Wolfram's website but haven't seen any proof online as to why this is true. By orthogonal matrix, I mean an $n \times n$ matrix with orthonormal columns. I was working on a problem to show whether $Q^3$ is an or...
<p>If <span class="math-container">$$Q^TQ = I$$</span> <span class="math-container">$$R^TR = I,$$</span> then <span class="math-container">$$(QR)^T(QR) = (R^TQ^T)(QR) = R^T(Q^TQ)R = R^TR = I.$$</span> Of course, this can be extended to <span class="math-container">$n$</span> many matrices inductively.</p>
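<p>A quick numeric sketch, generating random orthogonal factors via QR decomposition:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # two random orthogonal matrices
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
P = Q1 @ Q2
print(np.allclose(P.T @ P, np.eye(4)))              # → True: the product is orthogonal
```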
<p>As an alternative to the other fine answers, here's a more geometric viewpoint:</p> <p>Orthogonal matrices correspond to linear transformations that preserve the length of vectors (isometries). And the composition of two isometries $F$ and $G$ is obviously also an isometry.</p> <p>(Proof: For all vectors $x$, the ...
probability
<p>Randomly break a stick (or a piece of dry spaghetti, etc.) in two places, forming three pieces. The probability that these three pieces can form a triangle is $\frac14$ (coordinatize the stick from $0$ to $1$, call the breaking points $x$ and $y$, consider the unit square of the coordinate plane, shade the areas th...
<p>The three triangle inequalities are</p> <p>\begin{align} x + y &amp;&gt; 1-x-y \\ x + (1-x-y) &amp;&gt; y \\ y + (1-x-y) &amp;&gt; x \\ \end{align}</p> <p>Your problem is that in picking the smaller number first from a uniform distribution, it's going to end up being bigger than it would if you had just picked two...
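The $\frac14$ answer is easy to confirm by simulation; a minimal Monte Carlo sketch, breaking at two independent uniform points:

```python
import random

random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    x, y = random.random(), random.random()
    a, b = min(x, y), max(x, y)
    pieces = (a, b - a, 1 - b)            # the three piece lengths
    # The three triangle inequalities reduce to: every piece < 1/2.
    if max(pieces) < 0.5:
        hits += 1

estimate = hits / trials                  # should be close to 1/4
```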
<p>FYI: This question was included in a Martin Gardner 'Mathematical Games' article for Scientific American some years ago. He showed that there were 2 ways of randomly choosing the 2 'break' points:</p> <ol> <li>choose two random numbers from 0 to 1, or</li> <li>choose one random number, break the stick at that poi...
linear-algebra
<p>I am quite confused about this. I know that a zero eigenvalue means the null space has nonzero dimension, and that the rank of the matrix is then less than the dimension of the whole space. But is the number of distinct eigenvalues (and thus of independent eigenvectors) equal to the rank of the matrix?</p>
<p>Well, if $A$ is an $n \times n$ matrix, the rank of $A$ plus the nullity of $A$ is equal to $n$; that's the rank-nullity theorem. The nullity is the dimension of the kernel of the matrix, which is all vectors $v$ of the form: $$Av = 0 = 0v.$$ The kernel of $A$ is precisely the eigenspace corresponding to eigenvalue ...
<p>My comment is 7 years late but I hope someone might find some useful information.</p> <p>First, the number of linearly independent eigenvectors of a rank <span class="math-container">$k$</span> matrix can be greater than <span class="math-container">$k$</span>. For example <span class="math-container">\begin{align} ...
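Two standard counterexamples, sketched in NumPy, show that neither the number of independent eigenvectors nor the number of nonzero eigenvalues needs to equal the rank:

```python
import numpy as np

# diag(1, 0) has rank 1, yet two independent eigenvectors (e1 and e2).
D = np.diag([1.0, 0.0])
rank_D = int(np.linalg.matrix_rank(D))

# The nilpotent matrix N has rank 1, but every eigenvalue is 0, so the
# number of nonzero eigenvalues (zero) does not match the rank either.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
rank_N = int(np.linalg.matrix_rank(N))
eigs_N = np.linalg.eigvals(N)

assert rank_D == 1
assert rank_N == 1 and np.allclose(eigs_N, 0.0)
```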
logic
<p>I am struggling to understand this. According to truth tables, if $P$ is false, it doesn't matter whether $Q$ is true or not: Either way, $P \implies Q$ is true.</p> <p>Usually when I see examples of this people make up some crazy premise for $P$ as a way of showing that $Q$ can be true or false when $P$ is somethi...
<p>Consider the statement:</p> <blockquote> <p>All multiples of 4 are even.</p> </blockquote> <p>You would say that statement is true, right?</p> <p>So let's formulate that in formal logic language:</p> <blockquote> <p>$\forall x: 4|x \implies 2|x$</p> </blockquote> <p>(Here "$a|b$" means "$a$ divides $b$", th...
<p>This is done so that classical propositional calculus follows some natural rules. Let's try to motivate this, without getting into technical details:</p> <p>The expression "$P\Rightarrow Q$" should be read "$P$ implies $Q$", or "whenever $P$ is true, $Q$ is also true".</p> <p>The negation of such an expression wou...
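The convention can be checked mechanically: material implication is the same truth function as "(not P) or Q". A tiny Python enumeration of the truth table:

```python
def implies(p, q):
    """Material implication: P -> Q is (not P) or Q."""
    return (not p) or q

table = {(p, q): implies(p, q) for p in (False, True) for q in (False, True)}

# When P is false, the implication is true regardless of Q (vacuous truth).
assert table[(False, False)] and table[(False, True)]
# When P is true, the implication simply reports Q.
assert table[(True, True)] and not table[(True, False)]
```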
linear-algebra
<blockquote> <p>Assume that we are working over a complex space $W$ of dimension $n$. When would an operator on this space have the same characteristic and minimal polynomial? </p> </blockquote> <p>I think the easy case is when the operator has $n$ distinct eigenvalues, but what about if it is diagonalizable? Is tha...
<p><strong>Theorem.</strong> <em>Let $T$ be an operator on the finite dimensional complex vector space $\mathbf{W}$. The characteristic polynomial of $T$ equals the minimal polynomial of $T$ if and only if the dimension of each eigenspace of $T$ is $1$.</em></p> <p><em>Proof.</em> Let the characteristic and minimal po...
<p>The following equivalent criteria, valid for an arbitrary field, are short to state. Whether or not any one of the conditions is easy to test computationally may depend on the situation, though 2. is in principle always doable.</p> <p><strong>Proposition.</strong> <em>The following are equivalent for a linear operat...
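The eigenspace criterion is easy to see on a pair of $2\times 2$ examples; here is a NumPy sketch with matrices chosen purely for illustration:

```python
import numpy as np

I = np.eye(2)
A = np.diag([2.0, 2.0])        # eigenspace of 2 has dimension 2
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])     # eigenspace of 2 has dimension 1

# For A, (A - 2I) = 0 already: the minimal polynomial is (x - 2), of
# degree 1, strictly smaller than the characteristic polynomial (x - 2)^2.
assert np.allclose(A - 2 * I, 0.0)

# For B, (B - 2I) != 0 but (B - 2I)^2 = 0: the minimal polynomial is
# (x - 2)^2, equal to the characteristic polynomial, as the theorem predicts.
assert not np.allclose(B - 2 * I, 0.0)
assert np.allclose((B - 2 * I) @ (B - 2 * I), 0.0)
```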
linear-algebra
<p>In <a href="https://math.stackexchange.com/questions/1696686/is-linear-algebra-laying-the-foundation-for-something-important">a recent question</a>, it was discussed how LA is a foundation to other branches of mathematics, be they pure or applied. One answer argued that linear problems are <em>fully understood</em>,...
<p>It's closer to true that all the questions in finite-dimensional linear algebra that can be asked in an introductory course can be answered in an introductory course. This is wildly far from true in most other areas. In number theory, algebraic topology, geometric topology, set theory, and theoretical computer scien...
<p>The hard parts of linear algebra have been given new and different names, such as representation theory, invariant theory, quantum mechanics, functional analysis, Markov chains, C*-algebras, numerical methods, commutative algebra, and K-theory. Those are full of mysteries and open problems.</p> <p>What is left ove...
linear-algebra
<p>I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view: </p> <blockquote> <p>"The rank of a matrix A is the number of non-zero rows in the reduced row-echelon form of A".</p> </blockquote> <p>The lecturer then explained t...
<p>The answer is yes. This statement often goes under the name "row rank equals column rank". Knowing that, it is easy to search the internet for proofs.</p> <p>Also any reputable linear algebra text should prove this: it is indeed a rather important result.</p> <p>Finally, since you said that you had only a substi...
<p>There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself):</p> <p><a href="http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29" rel="noreferrer">http...
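"Row rank equals column rank" is also easy to observe numerically; a short NumPy sketch with an arbitrary seeded random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)

# Row rank (rank of A) always equals column rank (rank of A^T).
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)

# Appending a linear combination of existing rows changes neither.
B = np.vstack([A, A[0] + 2 * A[1]])
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(B.T)
```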
linear-algebra
<p>I don't understand how to find the multiplicity of an eigenvalue. To be honest, I am not sure what the book means by multiplicity. </p> <p>For instance, finding the multiplicity of each eigenvalue for the given matrix: $$\begin{bmatrix}1 &amp; 4\\2 &amp; 3\end{bmatrix}$$</p> <p>I found the eigenvalues of this mat...
<p>The characteristic polynomial of the matrix is $p_A(x) = \det (xI-A)$. In your case, $A = \begin{bmatrix} 1 &amp; 4 \\ 2 &amp; 3\end{bmatrix}$, so $p_A(x) = (x+1)(x-5)$. Hence it has two distinct eigenvalues and each occurs only once, so the algebraic multiplicity of both is one.</p> <p>If $B=\begin{bmatrix} 5 &amp...
<p>Let me explain the two multiplicities that I know are related to eigen-values of matrices:<br> Firstly, what is the eigenvalue of a matrix $A$? By definition it consists of the zeros of the polynomial: $\det(A-xI)$. So the muliplicities that they occur in this polynomial are defined to be the multiplicities of the e...
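For the concrete matrix in the question, the computation can be checked numerically:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
eigs = np.sort(np.linalg.eigvals(A).real)

# Roots of the characteristic polynomial (x + 1)(x - 5): two distinct
# eigenvalues, each a simple root, so each has algebraic multiplicity 1.
assert np.allclose(eigs, [-1.0, 5.0])
```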
combinatorics
<blockquote> <p>A professor knows $9$ jokes and tells $3$ jokes per lecture. Prove that in a course of $13$ lectures there is going to be a pair of jokes that will be told together in at least $2$ lectures.</p> </blockquote> <p>I've started with counting how many possibilities there are to tell jokes in a lecture....
<p>I will assume that he doesn't tell the same joke twice (or three times) in the same lecture. Else, here is a counterexample:</p> <p>Let $\{a_1, \ldots, a_9\}$ be the set of jokes. On the $i$-th day for $1 \leq i \leq 9$, tell jokes $(a_i, a_i, a_i)$. Then tell $(a_1, a_2, a_3)$, $(a_4, a_5, a_6)$, $(a_7, a_8, a_9)$...
<p>Notice that there are $\binom{9}{2}=36$ unique pairs of jokes.</p> <p>In every lecture there are three jokes (A, B and C), so three unique pairs (AB, AC, BC) are used per lecture. </p> <p>In a series of 13 lectures there are $13\cdot3=39$ pairs of used jokes. Thus, by the pigeonhole principle, at least one o...
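The counting in this argument can be spelled out in a couple of lines of Python:

```python
from math import comb

num_pairs = comb(9, 2)               # 36 distinct pairs of jokes
pairs_per_lecture = comb(3, 2)       # each lecture exposes 3 pairs
pairs_used = 13 * pairs_per_lecture  # 39 pair-slots over 13 lectures

# 39 pair-slots but only 36 distinct pairs: by pigeonhole, some pair
# of jokes must be told together in at least two lectures.
assert num_pairs == 36 and pairs_used == 39
assert pairs_used > num_pairs
```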
matrices
<p>How could we prove that the "The trace of an idempotent matrix equals the rank of the matrix"?</p> <p>This is another property that is used in my module without any proof, could anybody tell me how to prove this one?</p>
<p>Sorry to post a solution to such an old question, but "The trace of an idempotent matrix equals the rank of the matrix" is a very basic problem, and every answer here uses <strong>eigenvalues</strong>. But there is another way that should be highlighted.</p> <p><strong>Solution:</strong></p>...
<p>An idempotent has two possible eigenvalues, zero and one, and the multiplicity of one as an eigenvalue is precisely the rank. Therefore the trace, being the sum of the eigenvalues, <em>is</em> the rank (assuming your field contains $\mathbb Q$...)</p>
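A quick numerical illustration with an orthogonal projection, the standard example of an idempotent matrix; the data matrix is an arbitrary seeded random choice:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))                 # full column rank (a.s.)

# Orthogonal projection onto the column space of A: idempotent by design.
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ P, P)                    # idempotent
assert np.isclose(np.trace(P), 2.0)             # trace = rank = dim col(A)
assert np.linalg.matrix_rank(P) == 2
```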
linear-algebra
<p>In the <a href="https://archive.org/details/springer_10.1007-978-1-4684-9446-4" rel="noreferrer">book</a> of <em>Linear Algebra</em> by Werner Greub, whenever we choose a field for our vector spaces, we always choose an arbitrary field $F$ of <strong>characteristic zero</strong>, but to understand the importance of ...
<p>The equivalence between symmetric bilinear forms and quadratic forms given by the <a href="https://en.wikipedia.org/wiki/Polarization_identity" rel="noreferrer">polarization identity</a> breaks down in characteristic $2$.</p>
<p>Many arguments using the trace of a matrix will no longer be true in general. For example, a matrix $A\in M_n(K)$ over a field of characteristic zero is <em>nilpotent</em>, i.e., satisfies $A^n=0$, if and only if $\operatorname{tr}(A^k)=0$ for all $1\le k\le n$. For fields of prime characteristic $p$ with $p\mid n$ ...
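The characteristic-zero direction of the trace criterion can be sanity-checked numerically with a strictly upper-triangular (hence nilpotent) matrix:

```python
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])       # strictly upper triangular

# A is nilpotent: A^3 = 0, and tr(A^k) = 0 for k = 1, 2, 3.
assert np.allclose(np.linalg.matrix_power(A, 3), 0.0)
traces = [np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3)]
assert all(abs(t) < 1e-12 for t in traces)
```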
differentiation
<p>Can the order of a differentiation and summation be interchanged, and if so, what is the basis of the justification for this?</p> <p>E.g. is <span class="math-container">$\frac{\mathrm{d}}{\mathrm{d}x}\sum_{n=1}^{\infty}f_n(x)$</span> equal to <span class="math-container">$\sum_{n=1}^{\infty}\frac{\mathrm{d}}{\mathr...
<p>If $\sum f_n'$ converges uniformly, then yes. This is a standard theorem proved in texts like Rudin's <em>Principles of mathematical analysis</em> (see 7.17, 3rd Ed for details).</p> <p>More generally, one of the following 3 things can happen:</p> <ol> <li>The series is not differentiable.</li> <li>The series of ...
<p>This is under the assumption that you are taking a sequence of functions, i.e., your limit here is over a sequence $f_n$ of functions with a single variable $x$.</p> <p>This is not always possible. While differentiation is linear, this does <em>not</em> extend to infinite sums.</p> <p>This question is related to t...
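A concrete case where the interchange is valid: inside its radius of convergence, a power series may be differentiated term by term. A numerical sketch with the geometric series:

```python
# For |x| < 1: sum_{n>=0} x^n = 1/(1-x), and differentiating term by term
# gives sum_{n>=1} n x^(n-1) = 1/(1-x)^2 (uniform convergence on |x| <= r < 1).
x, N = 0.5, 60
termwise = sum(n * x ** (n - 1) for n in range(1, N))
closed_form = 1.0 / (1.0 - x) ** 2

assert abs(termwise - closed_form) < 1e-12
```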
logic
<p>I am talking about classical logic here.</p> <p>I admit this might be a naive question, but as far as I understand it: Syntactic entailment means there is a proof using the syntax of the language, while on the other hand semantic entailment does not care about the syntax, it simply means that a statement must be tr...
<p>First of all, let me set the terminology straight:</p> <p>By a syntactic proof (<span class="math-container">$\vdash$</span>) we mean a proof operating purely on a set of rules that manipulate strings of symbols, without making reference to semantic notions such as assignment, truth, model or interpretation. A synta...
<p>The main reason to care about syntactic proofs is that they are crucial to the foundations of mathematics. If you are (say) formulating axioms for set theory which you will use as the foundation for all of mathematics, you need an unambiguous notion of proof that relies on an absolute minimum of background concepts...
matrices
<p>Let <span class="math-container">$T$</span> be a linear operator on a finite-dimensional vector space <span class="math-container">$V$</span> over the field <span class="math-container">$K$</span>, with <span class="math-container">$\dim V=n$</span>. Is there a definition of the determinant of <span class="math-cont...
<p>This answer was edited quite a few times after receiving valuable input from several users. While its present form reflects quite faithfully the process that led to it, the patchy nature of the text is perhaps not especially pleasing to read. I thus decided to add a (hopefully) last edit down at the bottom, with...
<p>An endomorphism <span class="math-container">$T$</span> of a vector space <span class="math-container">$E$</span> yields endomorphisms on various vector spaces deduced from <span class="math-container">$E$</span>. For instance on the dual space <span class="math-container">$E'$</span>, the space of linear forms on <...
probability
<p>I've a confession to make. I've been using PDF's and PMF's without actually knowing what they are. My understanding is that density equals area under the curve, but if I look at it that way, then it doesn't make sense to refer to the &quot;mass&quot; of a random variable in discrete distributions. How can I interpre...
<p>(This answer takes as its starting point the OP's question in the comments, "Let me understand mass before going to density. Why do we call a point in the discrete distribution as mass? Why can't we just call it a point?")</p> <p>We could certainly call it a point. The utility of the term "probability mass functio...
<p>Probability mass functions are used for discrete distributions; they assign a probability to each point in the sample space. The integral of a probability density function, on the other hand, gives the probability that a random variable falls within some interval. </p>
geometry
<p>Tessellation is fascinating to me, and I've always been amazed by the drawings of M.C.Escher, particularly interesting to me, is how he would've gone about calculating tessellating shapes.</p> <p>In my spare time, I'm playing a lot with a series of patterns which use a hexagonal grid, and a tessellating design I sa...
<p>What's going on is a little <a href="http://en.wikipedia.org/wiki/Group_theory">group theory</a>, specifically the study of <a href="http://en.wikipedia.org/wiki/Wallpaper_group">wallpaper groups</a>. The best introduction I know to this subject (that specifically cares about tessellations and is written for non-mat...
<p>In addition to some of the "technical" issues involved in making a particular design (treated above) there is also the issue of the difference between symmetry of a pattern when the colors are disregarded and the symmetry of the pattern that includes the colors. In addition to the book: Symmetry of Things, which doe...
differentiation
<p>When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle.</p> <p>Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$.</p> <p>Is this just a coincidence, or is there some de...
<p>Consider increasing the radius of a circle by an infinitesimally small amount, $dr$. This increases the area by an <a href="http://en.wikipedia.org/wiki/Annulus_%28mathematics%29" rel="noreferrer">annulus</a> (or ring) with inner radius $2 \pi r$ and outer radius $2\pi(r+dr)$. As this ring is extremely thin, we can ...
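The annulus picture is mirrored by a plain finite-difference check; the radius and step size below are arbitrary choices:

```python
from math import pi

def area(r):
    return pi * r ** 2

def volume(r):
    return 4.0 / 3.0 * pi * r ** 3

r, dr = 3.0, 1e-6
dA = (area(r + dr) - area(r)) / dr       # ~ circumference 2*pi*r
dV = (volume(r + dr) - volume(r)) / dr   # ~ surface area 4*pi*r^2

assert abs(dA - 2 * pi * r) < 1e-4
assert abs(dV - 4 * pi * r ** 2) < 1e-3
```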
<p>$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Bd}{\partial}\DeclareMathOperator{\vol}{vol}$The formulas are no accident, but not especially deep. The explanation comes down to a couple of geometric observations.</p> <ol> <li><p>If $X$ is the closure of a bounded open set in the Euclidean space $\Reals^{n}$ (such as ...
linear-algebra
<blockquote> <p>The linear transformation matrix for a reflection across the line <span class="math-container">$y = mx$</span> is:</p> <p><span class="math-container">$$\frac{1}{1 + m^2}\begin{pmatrix}1-m^2&amp;2m\\2m&amp;m^2-1\end{pmatrix} $$</span></p> </blockquote> <p>My professor gave us the formula above with no e...
<p>You can have (far) more elegant derivations of the matrix when you have some theory available. The low-tech way using barely more than matrix multiplication would be:</p> <p>The line $y = mx$ is parametrised by $t \cdot \begin{pmatrix}1\\m\end{pmatrix}$. The line orthogonal to it is parametrised by $r \cdot \begin{...
<p>Another way. To reflect along a line that forms an angle <span class="math-container">$\theta$</span> with the horizontal axis is equivalent to:</p> <ul> <li>rotate an angle <span class="math-container">$-\theta$</span> (to make the line horizontal)</li> <li>invert the <span class="math-container">$y$</span> coordin...
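Either derivation can be verified directly from the formula; a short NumPy check, with the slope $m = 2$ chosen arbitrarily:

```python
import numpy as np

def reflection(m):
    """Reflection across y = m x, using the formula from the question."""
    return np.array([[1 - m * m, 2 * m],
                     [2 * m, m * m - 1]]) / (1 + m * m)

R = reflection(2.0)

assert np.allclose(R @ R, np.eye(2))                        # an involution
assert np.allclose(R @ np.array([1.0, 2.0]), [1.0, 2.0])    # fixes the line
assert np.allclose(R @ np.array([-2.0, 1.0]), [2.0, -1.0])  # flips the normal
```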
probability
<p>In the book "Zero: The Biography of a Dangerous Idea", author Charles Seife claims that a dart thrown at the real number line would never hit a rational number. He doesn't say that it's only "unlikely" or that the probability approaches zero or anything like that. He says that it will never happen because the irrati...
<p>Mathematicians are strange in that we distinguish between "impossible" and "happens with probability zero." If you throw a magical super sharp dart at the number line, you'll hit a rational number with probability zero, but it isn't <em>impossible</em> in the sense that there do exist rational numbers. What <em>is</...
<p>Note that if you randomly (i.e. uniformly) choose a real number in the interval $[0,1]$ then for <em>every</em> number there is a zero probability that you will pick this number. This does not mean that you did not pick <em>any</em> number at all.</p> <p>Similarly with the rationals, while infinite, and dense and a...
geometry
<p>Circular manholes are great because the cover cannot fall down the hole. If the hole were square, the heavy metal cover could fall down the hole and kill someone working down there.</p> <p>Circular manhole: <img src="https://i.sstatic.net/JQweE.png" alt="Circle" /></p> <p><em>Can manholes be made in other shapes t...
<p>Any manhole cover bounded by a <a href="http://en.wikipedia.org/wiki/Curve_of_constant_width">curve of constant width</a> will not fall through. The circle is the simplest such curve.</p>
<p>A manhole cover can't fall into the hole if the minimum width of the cover is greater than the maximum width of the hole.</p> <p>For example, consider a one-meter square cover over a square hole slightly smaller than $1\over\sqrt 2$ meter on a side. The diagonal of the hole is slightly less than 1 meter, so the cov...
probability
<p>Let $X_{i}$, $i=1,2,\dots, n$, be independent random variables of geometric distribution, that is, $P(X_{i}=m)=p(1-p)^{m-1}$. How to compute the PDF of their sum $\sum_{i=1}^{n}X_{i}$?</p> <p>I know intuitively it's a negative binomial distribution $$P\left(\sum_{i=1}^{n}X_{i}=m\right)=\binom{m-1}{n-1}p^{n}(1-p)^{m...
<p>Let $X_{1},X_{2},\ldots$ be independent rvs having the geometric distribution with parameter $p$, i.e. $P\left[X_{i}=m\right]=pq^{m-1}$ for $m=1,2.\ldots$ (here $p+q=1$). </p> <p>Define $S_{n}:=X_{1}+\cdots+X_{n}$.</p> <p>With induction on $n$ it can be shown that $S_{n}$ has a negative binomial distribution with ...
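The induction can be corroborated by simulation; a sketch comparing the empirical distribution of $S_{3}$ with the negative binomial formula (the parameters are arbitrary choices):

```python
import random
from math import comb

random.seed(0)
p, n, trials = 0.3, 3, 100_000

def geometric(p):
    """Trials up to and including the first success, P(X = m) = p q^(m-1)."""
    m = 1
    while random.random() >= p:
        m += 1
    return m

counts = {}
for _ in range(trials):
    s = sum(geometric(p) for _ in range(n))
    counts[s] = counts.get(s, 0) + 1

# Compare the empirical pmf at m = 5 with the negative binomial formula.
m = 5
empirical = counts.get(m, 0) / trials
exact = comb(m - 1, n - 1) * p ** n * (1 - p) ** (m - n)
assert abs(empirical - exact) < 0.01
```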
<p>Another way to do this is by using moment-generating functions. In particular, we use the theorem, a probability distribution is unique to a given MGF(moment-generating functions).<br/> Calculation of MGF for negative binomial distribution: <br/></p> <p><span class="math-container">$$X\sim \text{NegBin}(r,p),\ P(X=x...
matrices
<p>I have a very simple question that can be stated without any proof. Are all eigenvectors, of any matrix, always orthogonal? I am trying to understand principal components and it is crucial for me to see the basis of eigenvectors.</p>
<p>In general, for any matrix, the eigenvectors are NOT always orthogonal. But for a special type of matrix, the symmetric matrix, the eigenvalues are always real and the eigenvectors corresponding to distinct eigenvalues are always orthogonal. If the eigenvalues are not distinct, an orthogonal basis for this eigenspace can be...
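Both halves of this statement are easy to observe numerically; the matrices below are small illustrative choices:

```python
import numpy as np

# Symmetric matrix: eigenvectors for distinct eigenvalues are orthogonal.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, V = np.linalg.eigh(S)
assert np.isclose(V[:, 0] @ V[:, 1], 0.0)

# Non-symmetric matrix: eigenvectors need not be orthogonal.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
_, W = np.linalg.eig(A)
assert not np.isclose(W[:, 0] @ W[:, 1], 0.0)
```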
<p>Fix two linearly independent vectors $u$ and $v$ in $\mathbb{R}^2$, define $Tu=u$ and $Tv=2v$. Then extend linearly $T$ to a map from $\mathbb{R}^n$ to itself. The eigenvectors of $T$ are $u$ and $v$ (or any multiple). Of course, $u$ need not be perpendicular to $v$.</p>
probability
<p>Imagine a game of chess where both players generate a list of legal moves and pick one uniformly at random.</p> <blockquote> <p><em>Q</em>: What is the expected outcome for white?</p> </blockquote> <ul> <li>1 point for black checkmated, 0.5 for a draw, 0 for white checkmated. So the expected outcome is given by...
<p>I found a bug in the code given in Hooked's answer (which means that my original reanalysis was also flawed): one also has to check for insufficient material when assessing a draw, i.e.</p> <pre><code>int(board.is_stalemate()) </code></pre> <p>should be replaced with</p> <pre><code>int(board.is_insufficient_materia...
<p><strong>Update</strong>: The code below has a small, but significant oversight. I was unaware that a stalemate would not be counted the same way as a board with insufficient pieces to play and this changes the answer. @Winther has fixed the bug and <a href="https://math.stackexchange.com/a/846750/196">reran the simu...
probability
<p>I have this problem on a textbook that doesn't have a solution. It is:</p> <p>Let $$f(x)=\frac{\binom{r}{x} \binom{N-r}{n-x}}{\binom{N}{n}}\;,$$ and keep $p=\dfrac{r}{N}$ fixed. Prove that $$\lim_{N \rightarrow \infty} f(x)=\binom{n}{x} p^x (1-p)^{n-x}\;.$$</p> <p>Although I can find lots of examples using the bin...
<p>Write the pmf of the hypergeometric distribution in terms of factorials: $$\begin{eqnarray} \frac{\binom{r}{x} \binom{N-r}{n-x}}{\binom{N}{n}} &amp;=&amp; \frac{r!}{\color\green{x!} \cdot (r-x)!} \frac{(N-r)!}{\color\green{(n-x)!} \cdot (N-n -(r-x))!} \cdot \frac{\color\green{n!} \cdot (N-n)!}{N!} \\ &amp;=&amp; ...
<p><span class="math-container">$X_N\sim\;hypergeom(N, r, x)\\ with\;x=0, 1, 2,..., n.$</span></p> <p>When <span class="math-container">$N$</span> and <span class="math-container">$N-r$</span> are large and <span class="math-container">$n$</span> is small, <span class="math-container">$X_N$</span> has a binomial distri...
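The limit can be watched numerically straight from the definitions; the parameters below are arbitrary choices:

```python
from math import comb

def hypergeom_pmf(x, N, r, n):
    return comb(r, x) * comb(N - r, n - x) / comb(N, n)

def binom_pmf(x, n, p):
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

p, n, x = 0.4, 10, 3
errors = []
for N in (100, 1000, 100_000):
    r = int(p * N)                        # keep p = r/N fixed
    errors.append(abs(hypergeom_pmf(x, N, r, n) - binom_pmf(x, n, p)))

# The discrepancy shrinks (roughly like 1/N) as N grows.
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-3
```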
linear-algebra
<p>Assuming that we can't bold our variables (say, we're writing math as opposed to typing it), is it "not mathematically mature" to put an arrow over a vector?</p> <p>I ask this because in my linear algebra class, my professor never used arrow notation, so sometimes it wasn't obvious between distinguishing a scalar a...
<p>A sign of mathematical maturity is the awareness that truth is invariant under changes of notation.</p>
<p>A main issue with marking <em>vectors</em> with an arrow is that it is context dependent what is considered as a <em>vector.</em></p> <p>Let us decide we mark an element $\mathbb{R}^3$ as a vector, so we write $\vec{v}$ for it. Now, we want to multiply it with a $3\times 3$ matrix, since it is a matrix there is n...
probability
<p>I'm struggling with the concept of conditional expectation. First of all, if you have a link to any explanation that goes beyond showing that it is a generalization of elementary intuitive concepts, please let me know. </p> <p>Let me get more specific. Let <span class="math-container">$\left(\Omega,\mathcal{A},P\ri...
<p>Maybe this simple example will help. I use it when I teach conditional expectation. </p> <p>(1) The first step is to think of ${\mathbb E}(X)$ in a new way: as the best estimate for the value of a random variable $X$ in the absence of any information. To minimize the squared error $${\mathbb E}[(X-e)^2]={\mathbb ...
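Step (1) can be demonstrated with a small simulation: over a sample, the mean is exactly the constant that minimizes the average squared error (the die-roll setup is my illustrative choice):

```python
import random

random.seed(0)
sample = [random.randint(1, 6) for _ in range(100_000)]   # fair-die rolls
mean = sum(sample) / len(sample)

def mse(e):
    """Average squared error of the constant guess e."""
    return sum((x - e) ** 2 for x in sample) / len(sample)

# The sample mean beats any nearby constant guess.
assert mse(mean) <= mse(mean + 0.1)
assert mse(mean) <= mse(mean - 0.1)
```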
<p>I think a good way to answer question 2 is as follows.</p> <p>I am performing an experiment, whose outcome can be described by an element $\omega$ of some set $\Omega$. I am not going to tell you the outcome, but I will allow you to ask certain questions yes/no questions about it. (This is like "20 questions", bu...
matrices
<p>As a part of an exercise I have to prove the following:</p> <p>Let $A$ be an $(n \times m)$ matrix. Let $A^T$ be the transposed matrix of $A$. Then $AA^T$ is an $(n \times n)$ matrix and $A^TA$ is an $(m \times m)$ matrix. $AA^T$ then has a total of $n$ eigenvalues and $A^TA$ has a total of $m$ eigenvalues.</p> <p...
<p>Let $\lambda$ be an eigenvalue of $A^TA$, i.e. $$A^T A x = \lambda x$$ for some $x \neq 0$. We can multiply $A$ from the left and get $$A A^T (Ax) = \lambda (Ax).$$</p> <p>What can you conclude from this?</p>
<p>In fact, the nonzero eigenvalues of $AB$ and $BA$ are the same for any rectangular matrices $A$ and $B$. This follows from the fact that $trace((AB)^k) = trace((BA)^k)$ and that the coefficients of the characteristic polynomial of a square matrix $A$ are functions of $trace(A^k)$.</p>
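This is straightforward to observe numerically; a seeded random rectangular matrix, with $B = A^T$ for convenience:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
B = A.T

eig_ab = np.sort(np.linalg.eigvals(A @ B).real)   # 3 eigenvalues
eig_ba = np.sort(np.linalg.eigvals(B @ A).real)   # 5 eigenvalues

# BA has 5 - 3 = 2 extra eigenvalues, and they are (numerically) zero;
# the nonzero eigenvalues agree with those of AB.
assert np.allclose(eig_ba[:2], 0.0, atol=1e-10)
assert np.allclose(eig_ba[2:], eig_ab)
```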
matrices
<p>Suppose I have a square matrix $\mathsf{A}$ with $\det \mathsf{A}\neq 0$.</p> <p>How could we define the following operation? $$\mathsf{A}!$$</p> <p>Maybe we could make some simple example, admitted it makes any sense, with </p> <p>$$\mathsf{A} = \left(\begin{matrix} 1 &amp; 3 \\ 2 &amp; 1 \end{matrix} \right) ...
<p>For any holomorphic function <span class="math-container">$G$</span>, we can define a corresponding matrix function <span class="math-container">$\tilde{G}$</span> via (a formal version of) the Cauchy Integral Formula: We set <span class="math-container">$$\tilde{G}(B) := \frac{1}{2 \pi i} \oint_C G(z) (z I - B)^{-1...
<p>The <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a> is analytic. Use the <a href="https://en.wikipedia.org/wiki/Matrix_function">power series</a> of it.</p> <p>EDIT: already done: <a href="http://www.sciencedirect.com/science/article/pii/S0893965997001390">Some properties of Gamma and Beta...
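A minimal sketch of this idea, defining $A! := \Gamma(A+I)$ through the eigendecomposition; it assumes $A$ is diagonalisable with eigenvalues avoiding the poles of $\Gamma$, and the helper name is mine, not a library routine:

```python
import numpy as np
from math import gamma

def matrix_factorial(A):
    """A! := Gamma(A + I), computed through the eigendecomposition.

    A sketch: assumes A is diagonalisable with real eigenvalues at which
    Gamma(z + 1) is defined (no nonpositive-integer eigenvalues).
    """
    w, V = np.linalg.eig(A)
    G = np.diag([gamma(z.real + 1.0) for z in w])
    return (V @ G @ np.linalg.inv(V)).real

# Sanity check on a diagonal matrix, where the definition reduces to
# ordinary factorials of the diagonal entries.
D = np.diag([1.0, 3.0])
assert np.allclose(matrix_factorial(D), np.diag([1.0, 6.0]))

# The example matrix from the question (eigenvalues 1 +/- sqrt(6)).
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
F = matrix_factorial(A)
assert np.isfinite(F).all()
```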
probability
<p>Need some help.</p> <p>I'm a first year teacher, moving into a probability unit.</p> <p>I was looking through my given materials, and think I've found two errors. It's been a long time since I've done probability and I don't want to ask staff for fear of sounding stupid.</p> <p>I created a Word document that I t...
<p>Not only you're right, but the second question is quite weird: the question asks for $P(A\cap B)$ when in the end the final result is supposedly $P(A\cup B)$. Moreover, $A$ <strong>contains</strong> $B$, which is never pointed out; the term "overlapping" is very misleading in this case. I would throw this material a...
<p>I'm pretty sure it's an error in the course materials. The events of a number being odd and a number being prime are dependent on each other (an even number cannot be prime unless it is $2$), so the rule $P(A)*P(B)=P(A\cap B)$ is not valid here (it is only valid for independent events). So: trust your instincts, you ar...
differentiation
<p>I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this:</p> <p>$$\begin{align} (e^x)' &amp;= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h}...
<p>As to your comment:</p> <p>Consider the differential equation</p> <p>$$y - \left( {1 + \frac{x}{n}} \right)y' = 0$$</p> <p>Its solution is clearly $$y_n={\left( {1 + \frac{x}{n}} \right)^n}$$</p> <p>If we let $n \to \infty$ "in the equation" one gets</p> <p>$$y - y' = 0$$</p> <p>One should expect that the sol...
<p>Let us say $y=e^h -1$, then $\lim_{h \rightarrow 0} \dfrac{e^h -1}{h} = \lim_{y \rightarrow 0}{\dfrac{y}{\ln{(y+1)}}} = \lim_{y \rightarrow 0} {\dfrac{1}{\dfrac{\ln{(y+1)}}{y}}} = \lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}}$. It is easy to prove that $\lim_{y \rightarrow 0}{(y+1)}^\frac{1}{y} = e$. Then u...
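The limit $\lim_{h\to 0}(e^h-1)/h = 1$ is also visible numerically:

```python
from math import expm1

# expm1(h) = e^h - 1, computed accurately for small h.
hs = (1e-1, 1e-3, 1e-5)
errors = [abs(expm1(h) / h - 1.0) for h in hs]

# The error shrinks roughly like h/2 as h -> 0.
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-4
```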
differentiation
<p>Given a smooth real function <span class="math-container">$f$</span>, we can approximate it as a sum of polynomials as <span class="math-container">$$f(x+h)=f(x)+h f'(x) + \frac{h^2}{2!} f''(x)+ \dotsb = \sum_{k=0}^n \frac{h^k}{k!} f^{(k)}(x) + h^n R_n(h),$$</span> where <span class="math-container">$\lim_{h\to0} R_...
<p>Yes. There is a <em>geometric</em> explanation. For simplicity, let me take <span class="math-container">$x=0$</span> and <span class="math-container">$h=1$</span>. By the Fundamental Theorem of Calculus (FTC), <span class="math-container">$$ f(1)=f(0)+\int_{0}^{1}dt_1\ f'(t_1)\ . $$</span> Now use the FTC for the <...
<p>Here is a heuristic argument which I believe naturally explains why we expect the factor <span class="math-container">$\frac{1}{k!}$</span>.</p> <p>Assume that <span class="math-container">$f$</span> is a &quot;nice&quot; function. Then by linear approximation,</p> <p><span class="math-container">$$ f(x+h) \approx f...
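The geometric reading of the $\frac{1}{k!}$ (the volume of the ordered simplex $\{1>t_1>\dots>t_k>0\}$ from the iterated-integral answer) can be checked by Monte Carlo:

```python
import random
from math import factorial

random.seed(0)
k, trials = 3, 200_000
hits = 0
for _ in range(trials):
    t = [random.random() for _ in range(k)]
    if all(t[i] > t[i + 1] for i in range(k - 1)):   # strictly decreasing
        hits += 1

estimate = hits / trials          # fraction landing in the ordered simplex
assert abs(estimate - 1.0 / factorial(k)) < 0.01    # ~ 1/6
```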
matrices
<p>I am representing my 3d data using its sample covariance matrix. I want to know what the determinant of the covariance matrix represents. If the determinant is positive, zero, negative, high positive, high negative, what does it mean or represent?</p> <p>Thanks</p> <p>EDIT:</p> <p>Covariance is being used to represent v...
<p>I would like to point out that there is a connection between the determinant of the covariance matrix of (Gaussian distributed) data points and the differential entropy of the distribution.</p> <p>To put it in other words: Let's say you have a (large) set of points from which you assume it is Gaussian distributed. I...
<p>It cannot be negative, since the covariance matrix is positive semidefinite (not necessarily strictly). So all its eigenvalues are nonnegative, and the determinant is the product of these eigenvalues. Its square root measures, in a certain sense, the volume of the $n$-dimensional (3 in your case) $\sigma$-cube. It is analog...
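Both points (nonnegativity, and the determinant as a generalized variance tied to Gaussian entropy) can be illustrated in NumPy; the data here are synthetic Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=np.diag([1.0, 2.0, 3.0]),
                            size=10_000)

S = np.cov(X, rowvar=False)        # 3x3 sample covariance
det_S = np.linalg.det(S)           # generalized variance, ~ 1*2*3 = 6

# Differential entropy of the fitted Gaussian: 0.5 * ln((2 pi e)^n det S).
n = S.shape[0]
entropy = 0.5 * np.log((2 * np.pi * np.e) ** n * det_S)

assert det_S > 0                   # covariance is positive semidefinite
assert abs(det_S - 6.0) < 1.0      # near the true determinant
```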