| qid | question | author | author_id | answer |
|---|---|---|---|---|
4,133,782 | <p>I am having trouble finding a formula that connects the two and can produce an answer. Does anyone know how this is done? I tried $y=mx+b$, $m=3$, and $b=5-a$, but I don't know what to do next, or whether I even started correctly.</p>
| Still Learning | 362,881 | <p>As an add on to user169852’s proof, I would note that Strang’s argument that <span class="math-container">$ A^T A $</span> is invertible if A has full column rank is fairly simple.</p>
<p>He shows that for any matrix A, <span class="math-container">$ A^T A $</span> has the same nullspace as A:</p>
<p>(1) Clearly the nullspace of A is contained in the nullspace of <span class="math-container">$ A^T A $</span>: if <span class="math-container">$ A x = 0 $</span>, then <span class="math-container">$ A^T A x = 0 $</span>.</p>
<p>(2) To show the reverse inclusion, suppose that <span class="math-container">$ A^T A x = 0 $</span>. Then <span class="math-container">$ x^T A^T A x = 0 $</span>, so <span class="math-container">$ (A x)^T (A x) = 0 $</span>. I.e., the norm of A x is zero and hence A x = 0. So the nullspace of <span class="math-container">$ A^T A $</span> is contained in the nullspace of A.
Hence the nullspace of A equals the nullspace of <span class="math-container">$ A^T A $</span>.</p>
<p>user169852 has already shown that A having full column rank implies its nullspace is trivial. So <span class="math-container">$ A^T A $</span> is a square matrix with trivial nullspace and hence is invertible.</p>
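Both steps of this argument are easy to sanity-check numerically. The following sketch (using NumPy; the random matrices are arbitrary examples, not part of the original argument) verifies invertibility of $A^T A$ for a full-column-rank $A$ and the nullspace equality for a rank-deficient one:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tall matrix with full column rank (a generic random matrix has this
# property with probability 1).
A = rng.standard_normal((6, 3))
G = A.T @ A  # the Gram matrix A^T A

# Full column rank => trivial nullspace => A^T A is invertible.
assert np.linalg.matrix_rank(A) == 3
assert abs(np.linalg.det(G)) > 1e-9

# Nullspace equality: for a rank-deficient B, rank(B^T B) still equals
# rank(B), which by rank-nullity is the "same nullspace" statement.
B = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 3))  # rank 2
assert np.linalg.matrix_rank(B.T @ B) == np.linalg.matrix_rank(B)
print("checks passed")
```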
|
1,386,343 | <p>Let $P$ be an idempotent $n \times n$ matrix ($P^2 = P$). What is $(I + P)^{-1}$? I've been thinking about this problem for a while, but can't find an answer. I tried a few examples, but I'm not sure what the general pattern is.</p>
| Emilio Novati | 187,568 | <p>Hint:
$$
(I+P)(P-2I)=P-2I+P^2-2P=P-2I+P-2P=-2I
$$</p>
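The identity in the hint implies $(I+P)^{-1} = I - \tfrac{P}{2}$, which can be checked numerically. A NumPy sketch, using an orthogonal-projection matrix as an arbitrary example of an idempotent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build an idempotent P: the orthogonal projection onto the column space
# of an arbitrary matrix X, i.e. P = X (X^T X)^{-1} X^T, satisfies P^2 = P.
X = rng.standard_normal((4, 2))
P = X @ np.linalg.inv(X.T @ X) @ X.T
I = np.eye(4)
assert np.allclose(P @ P, P)

# The hint's identity (I+P)(P-2I) = -2I, hence (I+P)^{-1} = I - P/2.
assert np.allclose((I + P) @ (P - 2 * I), -2 * I)
assert np.allclose(np.linalg.inv(I + P), I - P / 2)
print("checks passed")
```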
|
301,662 | <p>This is a challenging puzzle I heard from my little brother.</p>
<p>For some $n$ and $x$, $\sum_{k=1}^n \sin^{2k}(x) = 2013$.</p>
<p>Is it possible to deduce
$$\sum_{k=1}^n \cos^{2k}(x) \text{ ?}$$</p>
<p>Edit:
I've just noticed something which now seems obvious to me.<br>
Choose $n = 2013$ and $x = \pi/2$, which satisfies the condition. It follows that the cosine terms sum to zero. I'm not sure whether this solution is unique.</p>
| Gerry Myerson | 8,269 | <p>As noted, the equation holds if $n=2013$ and $x=\pi/2$. Now let $n=2014$. By continuity, there is a value of $x$ a tiny bit smaller than $\pi/2$ for which the equation will hold, and, for this value of $x$, the cosine sum will not be zero. So one cannot deduce the cosine sum from knowing the first equation holds. </p>
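This continuity argument is easy to illustrate numerically. The sketch below (Python; the bisection bracket and tolerances are arbitrary choices) locates an $x$ a tiny bit below $\pi/2$ where the sine sum equals $2013$ for $n = 2014$, and evaluates the corresponding nonzero cosine sum:

```python
import math

n = 2014

def sin_power_sum(x, n=n):
    s = math.sin(x) ** 2
    return sum(s ** k for k in range(1, n + 1))

# At x = pi/2 the sum equals n = 2014 > 2013, and it is smaller for smaller
# x in [1, pi/2], so bisection finds an x just below pi/2 with sum = 2013.
lo, hi = 1.0, math.pi / 2
for _ in range(200):
    mid = (lo + hi) / 2
    if sin_power_sum(mid) < 2013:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2

cos_sum = sum(math.cos(x) ** (2 * k) for k in range(1, n + 1))
print(sin_power_sum(x), cos_sum)  # sine sum ~ 2013; cosine sum small but nonzero
```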
|
2,349,124 | <p>I keep on hitting a road block in trying to solve this, especially when trying to prove it going from the right hand side to the left hand side. </p>
| Atif Farooq | 451,530 | <p><em>Proof</em>. </p>
<p>$(\Rightarrow).$
Assume that $X=\varnothing$ and that $y\in Y$. Evidently $y\not\in X$, therefore $y\in Y\backslash X$ and consequently $y\in (Y\backslash X)\cup (X\backslash Y)$. Since $y$ was arbitrary, we may conclude that $Y\subseteq (Y\backslash X)\cup (X\backslash Y)$.</p>
<p>Assume now that $z\in(Y\backslash X)\cup (X\backslash Y)$; evidently $X\backslash Y = \varnothing$, thus $z\in(Y\backslash X)$ and consequently $z\in Y$.
Since our choice of $z$ was arbitrary it follows that $(Y\backslash X)\cup (X\backslash Y)\subseteq Y$.</p>
<p>We may now conclude that $Y = (Y\backslash X)\cup (X\backslash Y)$.</p>
<p>$(\Leftarrow).$ Assume, for the purpose of contradiction, that $Y=(Y\backslash X)\cup (X\backslash Y)$ and that $X\neq\varnothing$. Then there is some $x\in X$, and either $x\in Y$ or $x\not\in Y$; we argue by cases.</p>
<p><em>Case 1 ($x\in Y$):</em> Assume $x\in Y$. Since $Y = (Y\backslash X)\cup (X\backslash Y)$, it follows that $x\in (Y\backslash X)\cup (X\backslash Y)$ and therefore $x\not\in X\cap Y$; but clearly $x\in X\cap Y$, resulting in a contradiction.</p>
<p><em>Case 2 ($x\not\in Y$):</em> Assume $x\not\in Y$. Since $Y = (Y\backslash X)\cup (X\backslash Y)$, it follows that $x\not\in (Y\backslash X)\cup (X\backslash Y)$; but clearly $x\in X\backslash Y$ and therefore $x\in(Y\backslash X)\cup (X\backslash Y)$, again resulting in a contradiction.</p>
<p>Therefore it must be that $X=\varnothing$.</p>
<p>$\blacksquare$</p>
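As a sanity check (not a proof, since $X$ and $Y$ are arbitrary sets), the equivalence $X = \varnothing \iff Y = (Y\backslash X)\cup(X\backslash Y)$ can be verified exhaustively over a small universe; $\{0,1,2\}$ below is an arbitrary choice:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Exhaustively check  X = {}  <=>  Y = (Y \ X) U (X \ Y)  over a small universe.
U = {0, 1, 2}
for X in subsets(U):
    for Y in subsets(U):
        lhs = (X == frozenset())
        rhs = (Y == (Y - X) | (X - Y))
        assert lhs == rhs
print("equivalence holds for all subsets of", U)
```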
|
2,349,124 | <p>I keep on hitting a road block in trying to solve this, especially when trying to prove it going from the right hand side to the left hand side. </p>
| Peter Szilas | 408,605 | <p>$ X,Y \subset T$.</p>
<p>$X = \emptyset \iff $</p>
<p>$Y = ( X \cap Y^C ) \cup ( X^C \cap Y )$.</p>
<p>1) $\Rightarrow$ :</p>
<p>Let $X = \emptyset$ .</p>
<p>$( X \cap Y^C ) \cup ( X^C \cap Y)$ =</p>
<p>$\emptyset \cup ( T \cap Y) = Y$.</p>
<p>2) $\Leftarrow$ :</p>
<p>Let $Y = ( X \cap Y^C ) \cup ( X^C \cap Y)$.</p>
<p>$A := ( X \cap Y^C )$; $B := ( X^C \cap Y )$.</p>
<p>$Y = A \cup B$.</p>
<p>1) $A \subset Y$; </p>
<p>It follows that $A = \emptyset$, since every element of $A$ is an element of $Y^C$. </p>
<p>( Note: $Y \cap Y^C = \emptyset$ ).</p>
<p>$A = (X \cap Y^C) = \emptyset$,</p>
<p>$(\star) \rightarrow X \subset Y$.</p>
<p>We are left with:</p>
<p>$Y = X^C \cap Y$, or </p>
<p>$Y^C = X \cup Y^C$,</p>
<p>$(\star \star) \rightarrow X \subset Y^C$.</p>
<p>Putting together: </p>
<p>$X\subset Y$ and $X\subset Y^C $</p>
<p>$ \rightarrow X = \emptyset$.</p>
|
716,561 | <p>I want to show that $$\sum\limits_{n=1}^{\infty}\frac{i^n}{\sqrt{n}}$$ is convergent, but not absolutely convergent.</p>
<p>Demonstrating that it is not absolutely convergent is easy since $$\left|\frac{i^n}{\sqrt{n}} \right|=\frac{1}{\sqrt{n}}$$ but $$\sum\limits_{n=1}^{\infty}\frac{1}{\sqrt{n}}$$ diverges. I'm stuck showing that it is conditionally convergent.</p>
| Community | -1 | <p>We have</p>
<p>$$\left|\sum_{k=1}^n i^k\right|=\left|\frac{1-i^n}{1-i}\right|\le\frac{1+|i^n|}{\sqrt 2}=\sqrt2$$
and the sequence $\left(\frac1{\sqrt n}\right)_n$ decreases to $0$, so by <a href="http://en.wikipedia.org/wiki/Dirichlet%27s_test">Dirichlet's test</a>
the given series is convergent; since it is not absolutely convergent, as you proved, the series is conditionally convergent. </p>
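Both ingredients of the test (the $\sqrt2$ bound on the partial sums of $i^k$, and the boundedness of the partial sums of the whole series) can be checked numerically. A Python sketch; the cutoff of $10^4$ terms is an arbitrary choice:

```python
import math

I_POW = [1, 1j, -1, -1j]  # i^n depends only on n mod 4, so compute it exactly

partial = 0      # partial sums of the bounded factor: sum_{k<=n} i^k
max_abs = 0.0
series = 0       # partial sums of sum i^n / sqrt(n)
for n in range(1, 10001):
    partial += I_POW[n % 4]
    max_abs = max(max_abs, abs(partial))
    series += I_POW[n % 4] / math.sqrt(n)

print(max_abs)      # sqrt(2): the partial sums cycle through i, i-1, -1, 0
print(abs(series))  # stays bounded, consistent with convergence
```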
|
716,561 | <p>I want to show that $$\sum\limits_{n=1}^{\infty}\frac{i^n}{\sqrt{n}}$$ is convergent, but not absolutely convergent.</p>
<p>Demonstrating that it is not absolutely convergent is easy since $$\left|\frac{i^n}{\sqrt{n}} \right|=\frac{1}{\sqrt{n}}$$ but $$\sum\limits_{n=1}^{\infty}\frac{1}{\sqrt{n}}$$ diverges. I'm stuck showing that it is conditionally convergent.</p>
| Carl Butcher | 85,836 | <p>$\sum_{n=1}^\infty \frac{i^n}{\sqrt{n}} = \sum_{n=1}^\infty \frac{i^{2n}}{\sqrt{2n}} + \sum_{n=1}^\infty \frac{i^{2n-1}}{\sqrt{2n-1}} = \sum_{n=1}^\infty (-1)^{n}\frac{1}{\sqrt{2n}} + i \sum_{n=1}^\infty (-1)^{n-1}\frac{1}{\sqrt{2n-1}}$ (using $i^{2n}=(-1)^n$ and $i^{2n-1}=i\,(-1)^{n-1}$).</p>
<p>The decreasing sequences $\{\frac{1}{\sqrt{2n}}\}_n$ and $\{\frac{1}{\sqrt{2n-1}}\}_n$ converge to $0$ as $n \rightarrow \infty$. Hence, by the alternating series test, both series on the right-hand side of the above equation converge, and therefore so does the left-hand side.</p>
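The split can be verified on partial sums (a Python sketch; note that $i^{2n} = (-1)^n$, $i^{2n-1} = i(-1)^{n-1}$, and that $2N$ terms of the original series pair up exactly with $N$ terms of each alternating sum):

```python
import math

N = 2000                   # N paired terms = 2N terms of the original series
I_POW = [1, 1j, -1, -1j]   # i^n depends only on n mod 4

original = sum(I_POW[n % 4] / math.sqrt(n) for n in range(1, 2 * N + 1))
even = sum((-1) ** n / math.sqrt(2 * n) for n in range(1, N + 1))
odd = sum((-1) ** (n - 1) / math.sqrt(2 * n - 1) for n in range(1, N + 1))

# The 2N-term partial sum splits exactly into the two alternating sums.
assert abs(original - (even + 1j * odd)) < 1e-10
print("split verified for", 2 * N, "terms")
```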
|
112,503 | <p>I am working the a subject guide on involving $L$-Systems and have the alphabet $A = \{a, b, c\}$. The initiator is the string $a$ and the rules of substitution $a \to ba$, $b \to ccb$, $c \to a$. </p>
<p>The study guide gives the first five generations as:</p>
<p>$$[a] \to [ba] \to [ccba] \to [acba] \to [aaba] \to [aaccba]$$</p>
<p>I can't for the life of me figure out how this works. No rules regarding the order of substitution are provided, and my lecturer says that it is possible to arrive at this.</p>
<p>Does anybody have any ideas?</p>
| Trismegistos | 23,730 | <p>It looks like only one symbol is substituted in one step. The symbol $c$ gets highest priority, followed by $b$ and then $a$. When there are multiple instances of the same symbol, the leftmost is changed. But of course this is guessing, and the example is too short to allow much confidence.</p>
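This guess is easy to test in code. The sketch below implements the proposed rule (substitute one symbol per generation, priority $c > b > a$, leftmost occurrence) and reproduces the study guide's generations:

```python
# The guessed scheme: substitute a single symbol per generation, preferring
# c, then b, then a, always at its leftmost occurrence.
rules = {"a": "ba", "b": "ccb", "c": "a"}

def step(s):
    for sym in "cba":  # priority c > b > a
        k = s.find(sym)
        if k != -1:
            return s[:k] + rules[sym] + s[k + 1:]
    return s

gens = ["a"]
for _ in range(5):
    gens.append(step(gens[-1]))
print(gens)  # ['a', 'ba', 'ccba', 'acba', 'aaba', 'aaccba']
```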
|
2,756,332 | <p>It is <a href="https://math.stackexchange.com/questions/985879/relation-between-trace-and-rank-for-projection-matrices">not difficult</a> to show that if $A \in M_n(k)$ for some field $k$, and $A^2=A$ then $\operatorname{tr}(A) = \dim(\operatorname{Im}(A))$</p>
<p>In <a href="https://mathoverflow.net/questions/13526/geometric-interpretation-of-trace/13530#comment447113_13527">this comment</a>, it is written:</p>
<blockquote>
<p>This property, together with linearity, determines the trace uniquely,
and so one can view the trace as the linearised version of the
dimension-counting operator.</p>
</blockquote>
<p>What does it mean precisely? If $f : M_n(k) \to k$ is $k$-linear (say with $k = \Bbb C$), and $f(A) = \dim(\operatorname{Im}(A))$ for any $A^2= A \in M_n(k)$, then we have that $f$ is the trace function?</p>
| Martin Argerami | 22,857 | <p>Given any $A\in M_n(\mathbb C)$, we have $A=\text{Re}\,A+i\text{Im}\,A$, where
$$
\text{Re}\,A=\frac{A+A^*}2,\ \ \ \text{Im}\,A=\frac{A-A^*}{2i}
$$
are selfadjoint. So it is enough to test the assertion for selfadjoint matrices. In such case we have the spectral theorem available, which tells us that if $A=A^*$, then
$$
A=\sum_{j=1}^n\lambda_j P_j,
$$
where the $P_j$ are pairwise orthogonal rank-one projections. Then
$$
f(A)=\sum_j\lambda_j f(P_j)=\sum_j\lambda_j=\text{Tr}\,(A).
$$</p>
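The spectral-decomposition step can be illustrated numerically: decompose a self-adjoint matrix into rank-one projections (each idempotent, so $f$ takes the value $1$ on it) and compare the eigenvalue sum with the trace. A NumPy sketch with an arbitrary random Hermitian matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2  # an arbitrary self-adjoint matrix

# Spectral theorem: A = sum_j lambda_j P_j with rank-one projections P_j.
eigvals, eigvecs = np.linalg.eigh(A)
projections = [np.outer(eigvecs[:, j], eigvecs[:, j].conj()) for j in range(4)]
assert np.allclose(sum(l * P for l, P in zip(eigvals, projections)), A)

for P in projections:
    assert np.allclose(P @ P, P)          # idempotent
    assert np.linalg.matrix_rank(P) == 1  # so f(P) = dim Im(P) = 1

# Hence f(A) = sum_j lambda_j f(P_j) = sum_j lambda_j = Tr(A).
assert np.isclose(sum(eigvals), np.trace(A).real)
print("f(A) = Tr(A) verified on this example")
```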
|
3,439,626 | <p>I need to prove the following statement:</p>
<p>Let <span class="math-container">$a, b, n \in \Bbb{Z}$</span> with <span class="math-container">$ n\ge 2,\ \gcd(a,n)=1$</span>. Prove that if <span class="math-container">$s_{1},s_{2}$</span> are solutions to <span class="math-container">$ax\equiv b \pmod{n}$</span>, then <span class="math-container">$s_{1}\equiv s_{2} \pmod{n}$</span>.</p>
<p>I don't know where to start my proof. I do know that if you have one solution, then adding multiples of the modulus gives equivalent solutions, and that there are $n$ possible residues. But I don't think my argument is correct. </p>
| Andrea Mori | 688 | <p>If <span class="math-container">$a\in\Bbb Z$</span> is such <span class="math-container">${\rm gcd}(a,n)=1$</span> then <span class="math-container">$a$</span> has a multiplicative inverse modulo <span class="math-container">$n$</span>, i.e. there exists <span class="math-container">$a^*\in\Bbb Z$</span> such that <span class="math-container">$aa^*\equiv1\bmod n$</span>.</p>
<p>Thus, if you have a congruence
<span class="math-container">$$
ab_1\equiv ab_2\bmod n
$$</span>
simply multiplying both hands by the multiplicative inverse <span class="math-container">$a^*$</span> gives
<span class="math-container">$$
b_1\equiv b_2\bmod n.
$$</span>
This is false if <span class="math-container">${\rm gcd}(a,n)\neq1$</span>, e.g. <span class="math-container">$2\cdot1\equiv2\cdot4\bmod6$</span> but <span class="math-container">$1\not\equiv4\bmod6$</span>.</p>
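Both the uniqueness and the counterexample are easy to check computationally (a Python sketch; the modulus and coefficients are arbitrary example values, and `pow(a, -1, n)` requires Python 3.8+):

```python
from math import gcd

n, a, b = 26, 7, 4   # arbitrary example values with gcd(a, n) = 1
assert gcd(a, n) == 1

# All solutions of a*x = b (mod n) among the residues 0, ..., n-1:
solutions = [x for x in range(n) if (a * x - b) % n == 0]
print(solutions)     # exactly one residue class

# The unique solution is a^{-1} b mod n (pow(a, -1, n) needs Python 3.8+).
assert solutions == [pow(a, -1, n) * b % n]

# When gcd(a, n) != 1 this fails: 2*1 = 2*4 (mod 6) although 1 != 4 (mod 6).
assert (2 * 1 - 2 * 4) % 6 == 0 and (1 - 4) % 6 != 0
```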
|
4,559,503 | <blockquote>
<p>Let <span class="math-container">$\displaystyle f(x)=\frac{1+\cos(2\pi x)}2$</span> for <span class="math-container">$x\in\mathbb R$</span>, and <span class="math-container">$f^n=\underbrace{ f \circ \cdots \circ f}_{n}$</span>. Is it true that for Lebesgue almost every <span class="math-container">$x$</span>, <span class="math-container">$\displaystyle\lim_{n \to \infty} f^n(x)=1$</span>?</p>
</blockquote>
<p>I'm more inclined to believe that the answer is "yes".</p>
<p>This is Problem <span class="math-container">$5$</span> of the <a href="https://artofproblemsolving.com/community/c2557361_2021_mikloacutes_schweitzer" rel="noreferrer"><span class="math-container">$2021$</span> Miklós Schweitzer</a>. Recently, a <a href="https://math.stackexchange.com/questions/4558540/spivak-ch-22-problem-20-f-continuous-sequence-x-fx-ffx-fffx?noredirect=1&lq=1">related question</a> reminded me of this problem. After spending some time on it, I found that it is a hard problem, as Miklós Schweitzer problems always are. Almost every problem from that competition is very difficult for me.</p>
<p>First of all, for a fixed <span class="math-container">$x_0\in\mathbb R$</span>, if <span class="math-container">$f^n(x_0)$</span> is convergent, then its limit <span class="math-container">$\ell$</span> must be a fixed point of <span class="math-container">$f$</span>. Since <span class="math-container">$f(x)=\cos^2(\pi x)\in[0,1]$</span>, we must have <span class="math-container">$\ell\in[0,1]$</span>. Let's find the fixed points of <span class="math-container">$f$</span>. Let <span class="math-container">$g(x)=f(x)-x$</span> for <span class="math-container">$x\in[0,1]$</span>, then we need to find the zeroes of <span class="math-container">$g$</span>. Since <span class="math-container">$g'(x)=-\pi\sin(2\pi x)-1$</span>, <span class="math-container">$g'$</span> has two zeroes <span class="math-container">$\eta_1,\eta_2\in[0,1]$</span> with <span class="math-container">$1/2<\eta_1<3/4$</span>, <span class="math-container">$3/4<\eta_2<1$</span>, and <span class="math-container">$\sin(2\pi\eta_1)=\sin(2\pi\eta_2)=-1/\pi$</span>. Hence, <span class="math-container">$g$</span> is decreasing in <span class="math-container">$[0, \eta_1)$</span>, then increasing in <span class="math-container">$(\eta_1, \eta_2)$</span>, and then decreasing in <span class="math-container">$(\eta_2,1]$</span>. Note that <span class="math-container">$g(1/2)=-1/2<0, g(1)=0$</span>, we know that <span class="math-container">$g(\eta_1)<0$</span> and <span class="math-container">$g(\eta_2)>0$</span>. Therefore, we can find three zeroes of <span class="math-container">$g$</span>, named by <span class="math-container">$\ell_1$</span>, <span class="math-container">$\ell_2$</span> and <span class="math-container">$\ell$</span> with <span class="math-container">$\ell_1\in(0,1/2)$</span>, <span class="math-container">$\ell_2\in(\eta_1, \eta_2)$</span> and <span class="math-container">$\ell=1$</span>.</p>
<p><a href="https://i.stack.imgur.com/Gl1UO.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Gl1UO.jpg" alt="The graphs" /></a></p>
<p>We can find the locations the fixed points <span class="math-container">$\ell_1, \ell_2$</span> more accurately. Indeed, since
<span class="math-container">$$g\left(\frac14\right)=\cos^2\left(\frac\pi4\right)-\frac14=\frac12-\frac14>0,\qquad g\left(\frac13\right)=\cos^2\left(\frac\pi3\right)-\frac13=\frac14-\frac13<0,$$</span>
we have <span class="math-container">$\ell_1\in(1/4,1/3)$</span>, hence
<span class="math-container">$$f'(\ell_1)=-\pi\sin(2\pi\ell_1)<-\pi\sin\left(\frac{2\pi}3\right)=-\frac{\sqrt 3}2\pi<-1.$$</span>
Also,
<span class="math-container">$$g\left(\frac56\right)=\cos^2\left(\frac56\pi\right)-\frac56=\frac34-\frac56<0,\qquad g\left(\frac{11}{12}\right)=\frac{1+\cos\left(\frac{11}6\pi\right)}2-\frac{11}{12}=\frac{\sqrt3-2}4>0,$$</span>
we have <span class="math-container">$\ell_2\in(5/6,11/12)$</span>, hence
<span class="math-container">$$f'(\ell_2)=-\pi\sin(2\pi\ell_2)>-\pi\sin\left(\frac{11\pi}6\right)=\frac{1}2\pi>1.$$</span></p>
<p><strong>The following are not rigorous.</strong></p>
<p>Therefore, locally, near <span class="math-container">$\ell_1$</span>, <span class="math-container">$f$</span> behaves like <span class="math-container">$-A(x-\ell_1)$</span> with <span class="math-container">$A>1$</span>. Consider the map <span class="math-container">$f_1: x\mapsto -Ax$</span>; then <span class="math-container">$f_1^n(x)$</span> converges if and only if <span class="math-container">$x=0$</span>. This indicates that, for fixed <span class="math-container">$x_0$</span>, if the sequence <span class="math-container">$\{f^n(x_0)\}$</span> doesn't reach <span class="math-container">$\ell_1$</span>, it will not converge to <span class="math-container">$\ell_1$</span>; a similar analysis on <span class="math-container">$\ell_2$</span> indicates that <span class="math-container">$\{f^n(x_0)\}$</span> will not converge to <span class="math-container">$\ell_2$</span> if it doesn't touch <span class="math-container">$\ell_2$</span>; hence, if <span class="math-container">$\{f^n(x_0)\}$</span> converges without touching <span class="math-container">$\ell_1, \ell_2$</span>, then the limit must be <span class="math-container">$\ell=1$</span>. I think the ideas in this paragraph can be written down rigorously, although I don't know how to write a clean proof.</p>
<p>Another question I've not had any ideas: What if <span class="math-container">$\{f^n(x_0)\}$</span> diverges? To finish the problem, even if we write down a proof about the above paragraph, we also need to prove that for a.e. <span class="math-container">$x$</span>, the sequence <span class="math-container">$\{f^n(x)\}$</span> is convergent.</p>
<p>Any help would be appreciated!</p>
| acreativename | 347,666 | <p>It suffices for iteration purposes to restrict <span class="math-container">$f$</span> to <span class="math-container">$[0,1]$</span> as <span class="math-container">$f(.) \in [0,1]$</span>.</p>
<p>Set</p>
<p><span class="math-container">$$A:= \{x: x\in[0,1], \exists n \in \mathbb{N};\text{ } f^{n}(x) \in (l_{2},1]\} $$</span></p>
<p>where <span class="math-container">$l_{2}$</span> is the second largest fixed point of <span class="math-container">$f$</span> (which is roughly <span class="math-container">$0.89$</span>). If <span class="math-container">$A$</span> has measure <span class="math-container">$1$</span> then almost all points converge to <span class="math-container">$1$</span>; this fact has been discussed by people before me. <span class="math-container">$\textbf{Assume}$</span> for the sake of contradiction that <span class="math-container">$m(A) < 1$</span>.</p>
<p><span class="math-container">$\textbf{Definition 1:}$</span> When <span class="math-container">$V \subset [0,1]$</span> and <span class="math-container">$n \in \mathbb{N}$</span> then</p>
<p><span class="math-container">$$f^{n}(V) := \{y: \exists x \in V\text{ so that } y = f^{n}(x)\}$$</span></p>
<p><span class="math-container">$$f^{-n}(V) := \{z: f^{n}(z) \in V\}$$</span></p>
<p>Note that <span class="math-container">$[0,1] = A \cup B$</span> where
<span class="math-container">$$B=\bigcap_{j=1}^{\infty}f^{-j}([0,l_{2}]).$$</span> Observe that <span class="math-container">$f^{-n-1}([0,l_{2}]) \subseteq f^{-n}([0,l_{2}])$</span> for all <span class="math-container">$n \geq 1$</span>: indeed, if <span class="math-container">$x \in f^{-n-1}([0,l_{2}])\cap (f^{-n}([0,l_{2}]))^c$</span> then <span class="math-container">$f^{n}(x) > l_{2}$</span>, and since <span class="math-container">$f$</span> maps <span class="math-container">$(l_{2},1]$</span> into itself, <span class="math-container">$f^{n+1}(x) > l_{2}$</span>, so <span class="math-container">$f^{n+1}(x) \notin [0,l_{2}]$</span>, which is impossible.</p>
<p>Getting our hands dirty we can verify that <span class="math-container">$[0,l_{2}] \subseteq [0,0.9]$</span>, <span class="math-container">$f^{-1}([0,l_{2}]) \subseteq [0.1, 0.9]$</span> and <span class="math-container">$f^{-2}([0,l_{2}]) \subseteq f^{-1}([0.1,0.9]) \subseteq [0.1,0.4] \cup [0.6,0.9].$</span></p>
<p>Thus <span class="math-container">$B \subseteq [0.1,0.4] \cup [0.6,0.9]$</span>, furthermore
<span class="math-container">$$B = \bigcap_{j=1}^{\infty}f^{-j}([0.1,0.4] \cup [0.6,0.9]).$$</span>
Set</p>
<p><span class="math-container">$$C = [0.1,0.4] \cup [0.6,0.9]$$</span></p>
<p>On <span class="math-container">$C$</span> we have <span class="math-container">$1.84 \leq |f'(.)| \leq \pi$</span>. Furthermore, as before, we have <span class="math-container">$f^{-n-1}(C) \subseteq f^{-n}(C)$</span>: indeed, if <span class="math-container">$x \in f^{-n-1}(C) \cap (f^{-n}(C))^{c} $</span> then <span class="math-container">$f^n(x) \not \in C$</span>, and since <span class="math-container">$f$</span> maps <span class="math-container">$[0,1]\setminus C$</span> into itself, <span class="math-container">$f^{n+1}(x) \not \in C$</span>, which is impossible.</p>
<p>Since we know that <span class="math-container">$m(A) < 1$</span> and <span class="math-container">$m(A)+m(B) = 1$</span>, it follows that <span class="math-container">$m(B)>0$</span>.</p>
<p><span class="math-container">$\textbf{Claim 1:}$</span> <span class="math-container">$$m(f^{-n}(C)) \rightarrow m(B)$$</span></p>
<p><span class="math-container">$\textbf{Proof:}$</span> Note that the characteristic functions <span class="math-container">$1_{f^{-n}(C)}(.)$</span> decrease downward to <span class="math-container">$1_{B}(.)$</span> (as <span class="math-container">$n\rightarrow \infty$</span>). Thus by the monotone convergence theorem we have that <span class="math-container">$$\int_{0}^{1}1_{f^{-n}(C)}(x)dx \rightarrow \int_{0}^{1}1_{B}(x)dx = m(B).$$</span></p>
<p><span class="math-container">$$\Rightarrow m(f^{-n}(C)) \rightarrow m(B).$$</span></p>
<p><span class="math-container">$\textbf{Definition:}$</span> For an interval <span class="math-container">$I \subseteq [0,1]$</span> with non empty interior, define</p>
<p><span class="math-container">$$P_I := \frac{m(A\cap I)}{|I|} = \frac{m(A\cap [a,b])}{b-a} = \frac{\int_{I}1_{A}(x) dx}{|I|}$$</span></p>
<p>where <span class="math-container">$I = [a,b]$</span>.</p>
<p><span class="math-container">$\textbf{Definition:}$</span> For an interval <span class="math-container">$[a,b] \subseteq [0,1]$</span> we set <span class="math-container">$T((\gamma_{j})_{j=1}^{n}, [a,b])$</span>, where <span class="math-container">$\gamma_{j} \in \{0,1\}$</span> as follows; if <span class="math-container">$n = 1$</span> then
<span class="math-container">$T((\gamma_{j})_{j=1}^{1}, [a,b]) = \begin{cases} f^{-1}([a,b])\cap[0,\frac{1}{2}] & \text{ if }\gamma_{1} = 0 \\
f^{-1}([a,b]) \cap[\frac{1}{2},1] & \text{ if }\gamma_{1} = 1
\end{cases}$</span></p>
<p>and for <span class="math-container">$n > 1$</span> we define
<span class="math-container">$$T((\gamma_{j})_{j=1}^{n}, [a,b])= T((\gamma_{j})_{j=2}^{n},T((\gamma_{k})_{k=1}^{1}, [a,b]))$$</span></p>
<p><span class="math-container">$\textbf{Example 1:}$</span> <span class="math-container">$T((0,1),[0,1]) = T((1), T((0),[0,1]))= T((1),[0,\frac{1}{2}])= [\frac{1}{2},\frac{3}{4}]$</span></p>
<p><span class="math-container">$\textbf{Observation 1:}$</span> <span class="math-container">$f^{-n}([a,b])$</span> is a union of <span class="math-container">$2^n$</span> disjoint closed intervals, each of the form <span class="math-container">$T((\gamma_{j})_{j=1}^{n}, [a,b])$</span> for some sequence <span class="math-container">$(\gamma_{j})_{j=1}^{n} \in \{0,1\}^{n}$</span></p>
<p><span class="math-container">$\textbf{Lemma 1:}$</span> <span class="math-container">$$\lim_{n \rightarrow \infty}\min( \min_{V \in \{0,1\}^{n}}P_{T(V,[0.1,0.4])}, \min_{V \in \{0,1\}^{n}}P_{T(V,[0.6,0.9])}) = 0$$</span></p>
<p><span class="math-container">$\textbf{Proof:}$</span> Suppose that there exists a sequence of naturals <span class="math-container">$n_{1}< n_{2},...$</span> and an <span class="math-container">$\alpha > 0$</span> so that</p>
<p><span class="math-container">$$\min( \min_{V \in \{0,1\}^{n_{j}}}P_{T(V,[0.1,0.4])}, \min_{V \in \{0,1\}^{n_{j}}}P_{T(V,[0.6,0.9])}) \geq \alpha,$$</span></p>
<p>then for such <span class="math-container">$n_{j}$</span> note that</p>
<p><span class="math-container">$$f^{-n_{j}}(C) = \bigcup_{V \in \{0,1\}^{n_{j}}}T(V,[0.1,0.4]) \cup \bigcup_{W \in \{0,1\}^{n_{j}}}T(W,[0.6,0.9])$$</span></p>
<p>a union of <span class="math-container">$2^{n_{j}+1}$</span> intervals, if each such interval <span class="math-container">$U$</span> had <span class="math-container">$P_{U} \geq \alpha$</span> then we have the inequality;
<span class="math-container">$$m(B) \leq m(f^{-n_{j}}(C))(1-\alpha)$$</span></p>
<p>Taking limits as in claim <span class="math-container">$1)$</span> we have</p>
<p><span class="math-container">$$m(B) \leq m(B)(1-\alpha)$$</span></p>
<p>which is only possible when <span class="math-container">$\alpha = 0$</span> or <span class="math-container">$m(B) = 0$</span>, as <span class="math-container">$m(B) \neq 0$</span> we have a contradiction.</p>
<p><span class="math-container">$\textbf{Lemma 2:}$</span> If <span class="math-container">$I$</span> is a subinterval of <span class="math-container">$C$</span> (which is <span class="math-container">$[0.1,0.4] \cup [0.6,0.9]$</span>) then when <span class="math-container">$\gamma \in \{0,1\}$</span>, <span class="math-container">$T((\gamma), I)$</span> has length between <span class="math-container">$\frac{|I|}{\pi}$</span> and <span class="math-container">$\frac{|I|}{1.84}$</span>.</p>
<p><span class="math-container">$\textbf{Proof:}$</span> <span class="math-container">$T((0),I)$</span> and <span class="math-container">$T((1),I)$</span> are two distinct intervals with the same size (as <span class="math-container">$T((1),I) = 1 - T((0),I)$</span>). If <span class="math-container">$f([u,v]) = I$</span> where <span class="math-container">$u,v \geq \frac{1}{2}$</span> then <span class="math-container">$f([u,v])= [\cos^2(\pi u), \cos^2(\pi v)]$</span> (here <span class="math-container">$[u,v]$</span> corresponds to <span class="math-container">$T((1),I)$</span>). Thus <span class="math-container">$|f([u,v])| = |f'(\eta)||u-v|$</span> for some <span class="math-container">$\eta \in [u,v]$</span>. On <span class="math-container">$C$</span> we have <span class="math-container">$1.84 \leq |f'(\eta)| \leq \pi$</span>. Thus <span class="math-container">$$1.84 |T((\gamma),I)| \leq |I| \leq \pi |T((\gamma),I)|.$$</span></p>
<p><span class="math-container">$\textbf{Definition:}$</span> Given an interval <span class="math-container">$I \subseteq [0,1]$</span> we set <span class="math-container">$M_{I} := \sup_{x \in I}|f'(x)|$</span> and <span class="math-container">$m_{I} = \inf_{x \in I}|f'(x)|$</span>.</p>
<p><span class="math-container">$\textbf{Observation 2:}$</span> We have <span class="math-container">$M_{I}-m_{I} \leq 2\pi^2 |I|$</span> by the mean value theorem applied to <span class="math-container">$f'(.)$</span>.</p>
<p><span class="math-container">$\textbf{Lemma 3:}$</span> <span class="math-container">$$P_{I} \leq \frac{M_{T((\gamma),I)}}{m_{T((\gamma),I)}}P_{T((\gamma),I)}$$</span></p>
<p><span class="math-container">$\textbf{Proof:}$</span> As before suppose <span class="math-container">$f([u,v]) = I$</span> where <span class="math-container">$u,v \geq \frac{1}{2}$</span> (as by symmetry <span class="math-container">$m([u,v]\cap A) = m([1-u,1-v] \cap A)$</span>). Note here that <span class="math-container">$T((1),I) = [u,v]$</span>. Observe that</p>
<p><span class="math-container">$$P_{I} = \frac{\int_{I}1_{A}(x)dx}{|I|}$$</span></p>
<p><span class="math-container">$$=\frac{\int_{u}^{v}1_{A}(\cos^2(\pi x))\pi \sin(2\pi x) dx}{|I|} \text{(by integration by substitution)}$$</span></p>
<p><span class="math-container">$$= \frac{\int_{u}^{v}1_{A}(x)\pi \sin(2\pi x) dx}{|I|} \text{ (note that }1_{A}(\cos^2(\pi x))=1_{A}(x))$$</span></p>
<p><span class="math-container">$$\leq \frac{M_{[u,v]}\int_{u}^{v}1_{A}(x)}{|I|}$$</span></p>
<p><span class="math-container">$$= \frac{M_{[u,v]}P_{[u,v]}|v-u|}{|I|}$$</span></p>
<p><span class="math-container">$$\leq \frac{M_{[u,v]}P_{[u,v]}|v-u|}{m_{[u,v]}|v-u|} \text{ (by M.V.T)}$$</span></p>
<p><span class="math-container">$$= \frac{M_{T((1),I)}P_{T((1),I)}}{m_{T((1),I)}}$$</span></p>
<p>the case for <span class="math-container">$\gamma = 0$</span> is symmetrical.</p>
<p><span class="math-container">$\textbf{Observation 3:}$</span> If <span class="math-container">$I \subseteq C$</span> (<span class="math-container">$I$</span> is a closed connected interval), we have by lemma 2 that
<span class="math-container">$$P_{I} \leq \frac{1.84 + 2\pi^2|T((\gamma),I)|}{1.84}P_{T((\gamma),I)} \leq (1 + \frac{2\pi^2 |I|}{1.84^2})P_{T((\gamma),I)}$$</span>. By induction and lemma 2, we gather that</p>
<p><span class="math-container">$$P_{I} \leq (\prod_{j=0}^{n-1}(1+\frac{2\pi^2|I|}{1.84^{j+2}}))P_{T((\gamma_{j})_{j=1}^{n},I)}$$</span></p>
<p>Note that <span class="math-container">$\sum_{j=0}^{\infty} \frac{2\pi^2|I|}{1.84^{j+2}} < \infty$</span> thus</p>
<p><span class="math-container">$\prod_{j=0}^{\infty}(1+\frac{2\pi^2|I|}{1.84^{j+2}}) = K_{I} < \infty.$</span></p>
<p>Now set <span class="math-container">$C_{1} = [0.1, 0.4]$</span> and <span class="math-container">$C_{2} = [0.6,0.9]$</span>. As per Lemma 1, there exist <span class="math-container">$i \in \{1,2\}$</span>, a natural number sequence <span class="math-container">$n_{1}<n_{2}<\dots$</span>, and a sequence of sequences <span class="math-container">$(V_{j})_{j=1}^{\infty}$</span> with <span class="math-container">$V_{j} \in \{0,1\}^{n_{j}}$</span> so that <span class="math-container">$P_{T(V_{j}, C_{i})} \leq \frac{1}{j}$</span>. Thus we must have <span class="math-container">$P_{C_{i}} \leq \frac{K_{C_{i}}}{j}$</span> for all <span class="math-container">$j \in \mathbb{N}$</span>, hence <span class="math-container">$P_{C_{i}} = 0$</span>; but this is impossible, as we have <span class="math-container">$m(C_{i}\cap A) > 0 $</span> for <span class="math-container">$i = 1,2$</span>.</p>
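As an empirical complement (not part of the proof), one can iterate $f$ from random starting points and observe that almost every orbit eventually enters $(l_{2}, 1]$ and then climbs to $1$. A Monte Carlo sketch in Python; the seed count, iteration cap, and the $0.999$ threshold are arbitrary choices:

```python
import math, random

def f(x):
    return (1 + math.cos(2 * math.pi * x)) / 2

# Iterate f from random seeds; once an orbit exceeds l2 (~0.89) the iterates
# increase monotonically to 1, so crossing 0.999 is a proxy for convergence.
random.seed(0)
trials, converged = 400, 0
for _ in range(trials):
    x = random.random()
    for _ in range(5000):
        if x > 0.999:
            converged += 1
            break
        x = f(x)

print(converged / trials)  # close to 1, consistent with m(A) = 1
```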
|
2,390,077 | <p>is it possible to find a matrix $B$ which fulfills:</p>
<p>$BAA^TB^T=I$, where $I$ is identity matrix and $A$ strictly lower triangular?</p>
<p>Thank you very much in advance!</p>
| tiefi | 415,002 | <p>It is never possible. Say $A$ is an $n\times n$ matrix. Then, because it is strictly(!) lower triangular, $A$ has rank at most $n-1$. So $BAA^TB^T$ also has rank at most $n-1$. But on the right side of your equation, you have $I$, which has rank $n$. </p>
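The rank argument can be confirmed numerically for any particular $B$ (a NumPy sketch; the size $n = 4$ and the random matrices are arbitrary examples):

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)

# A strictly lower-triangular matrix: zeros on and above the diagonal.
A = np.tril(rng.standard_normal((n, n)), k=-1)
assert np.linalg.matrix_rank(A) <= n - 1

# rank(B A A^T B^T) <= rank(A) <= n-1 for any B, so it can never equal I.
B = rng.standard_normal((n, n))
assert np.linalg.matrix_rank(B @ A @ A.T @ B.T) <= n - 1
print("rank deficit confirmed")
```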
|
1,342,570 | <p>So, this was my initial proof:</p>
<hr>
<p>Assume $R$ is a ring, and $a,b\in R$</p>
<p>Let $x_1$ and $x_2$ be solutions of $ax=b$</p>
<p>Hence, $ax_1=b=ax_2 \Rightarrow ax_1-ax_2=0_R \Rightarrow a(x_1-x_2)=0_R$</p>
<p>Thus, we have $x_1-x_2=0_R \Rightarrow x_1=x_2$, and only one solution exists.</p>
<hr>
<p>Only now did I realize that I can only assume $x_1-x_2=0_R$ from $a(x_1-x_2)=0_R$ if $R$ was an integral domain. I didn't know why they provided that $R$ had an identity or why $a$ is a unit.</p>
| Brian Fitzpatrick | 56,960 | <p>Let $R$ be a unital ring with $a,b\in R$ where $a$ is a unit. Then $ax=b$ if and only if $x=a^{-1}b$. Indeed, if $ax=b$, then $$x=1_Rx=a^{-1}ax=a^{-1}b$$
Conversely
$$
a(a^{-1}b)=(aa^{-1})b=1_Rb=b
$$</p>
|
1,342,570 | <p>So, this was my initial proof:</p>
<hr>
<p>Assume $R$ is a ring, and $a,b\in R$</p>
<p>Let $x_1$ and $x_2$ be solutions of $ax=b$</p>
<p>Hence, $ax_1=b=ax_2 \Rightarrow ax_1-ax_2=0_R \Rightarrow a(x_1-x_2)=0_R$</p>
<p>Thus, we have $x_1-x_2=0_R \Rightarrow x_1=x_2$, and only one solution exists.</p>
<hr>
<p>Only now did I realize that I can only assume $x_1-x_2=0_R$ from $a(x_1-x_2)=0_R$ if $R$ was an integral domain. I didn't know why they provided that $R$ had an identity or why $a$ is a unit.</p>
| Alex Mathers | 227,652 | <p>Your solution is close to being great. First off, they told you that $R$ is a ring with unity because only such rings can have units. Second, your proof doesn't require knowing that $R$ is an integral domain. Towards the end, you have</p>
<p>$$a(x_1-x_2)=0$$</p>
<p>Then, because $a$ is a unit, $a^{-1}$ exists, and</p>
<p>$$a^{-1}a(x_1-x_2)=a^{-1}0=0$$</p>
<p>$$x_1-x_2=0\implies x_1=x_2$$</p>
<p>so the solution is unique, as you stated.</p>
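A brute-force check of this uniqueness claim in $\Bbb Z/n\Bbb Z$ (a Python sketch; $n = 12$ is an arbitrary example), including a non-unit $a$ for which uniqueness fails:

```python
# Brute force in Z/nZ: if a is a unit, ax = b has exactly one solution;
# for a zero divisor the count can be zero or several. n = 12 is arbitrary.
def solution_count(a, b, n):
    return sum(1 for x in range(n) if (a * x - b) % n == 0)

n = 12
units = [a for a in range(1, n) if any((a * x) % n == 1 for x in range(n))]
for a in units:
    for b in range(n):
        assert solution_count(a, b, n) == 1

print(solution_count(2, 4, 12))  # 2: the non-unit a = 2 gives x = 2 and x = 8
```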
|
4,234 | <p>I recently pasted the following code:</p>
<pre><code> my @cards = qw(BB BR RR);
my $n_trials = shift || 100;
for (1 .. $n_trials) {
my $card = $cards[ int(rand 3) ];
my @faces = split //, $card;
my $face_choice = int(rand 2);
my ($face, $other_face) = @faces[$face_choice, 1-$face_choice];
if ($face eq "R") {
# this trial is spoiled; do it over
redo;
} else {
$count{$other_face} += 1;
}
}
print "In $n_trials trials:\n";
for my $other_face (keys %count) {
print " The other face was '$other_face' in $count{$other_face} trials\n";
}
</code></pre>
<p>The Markdown engine displays the code incorrectly. For example, the <code>for</code> in the fourth line is indented by eight spaces, the same as the <code>my</code> on the third line. But it is displayed as if it were indented 12 spaces.</p>
<p>Similarly, the three following lines that begin with <code>my</code> should all be aligned the same, but the third (<code>my $face_choice</code>) is indented four spaces too far.</p>
<p>There are other errors; you can view the source of this note to see how it should be indented.</p>
<p>The code does not contain any tabs or trailing spaces.</p>
<p><a href="https://i.stack.imgur.com/FEl1M.png" rel="nofollow noreferrer">Here is a screenshot that shows the incorrect rendering</a>, in case it works properly on your browser.</p>
| Zev Chonoles | 264 | <p>It appears to be a conflict between the code formatting and MathJax. For comparison, here is your code, but with all of the dollar signs replaced by S's:</p>
<pre><code> my @cards = qw(BB BR RR);
my Sn_trials = shift || 100;
for (1 .. Sn_trials) {
my Scard = Scards[ int(rand 3) ];
my @faces = split //, Scard;
my Sface_choice = int(rand 2);
my (Sface, Sother_face) = @faces[Sface_choice, 1-Sface_choice];
if (Sface eq "R") {
# this trial is spoiled; do it over
redo;
} else {
Scount{Sother_face} += 1;
}
}
print "In Sn_trials trials:\n";
for my Sother_face (keys %count) {
print " The other face was 'Sother_face' in Scount{Sother_face} trials\n";
}
</code></pre>
<p>Now the displayed version matches the actual number of spaces used. I don't have any idea about the specifics of what's going wrong, or how to fix it, but I'm sure that <a href="http://meta.math.stackexchange.com/users/7798/davide-cervone?tab=answers">Davide Cervone</a> will be able to sort it out.</p>
<hr>
<p>I remembered that the <a href="https://meta.stackexchange.com/q/1777/161783"><code><pre></code> HTML tag</a> acts like the four spaces do in markdown, and it appears that formatting code in this way avoids the issue (somehow):</p>
<pre>
my @cards = qw(BB BR RR);
my $n_trials = shift || 100;
for (1 .. $n_trials) {
my $card = $cards[ int(rand 3) ];
my @faces = split //, $card;
my $face_choice = int(rand 2);
my ($face, $other_face) = @faces[$face_choice, 1-$face_choice];
</pre>
|
138,800 | <h1>Background</h1>
<p>I have a block of code, reproduced at the bottom of this post, consisting of combined \$PreRead and \$PrePrint statements, that automatically formats outputs as 'input = output', and also allows easy inline combination of math and text (allowing the text to be placed either before, after, or both), as follows. [Note that one merely needs to put the text in quotes, and separate it from the math with a semicolon (;).] </p>
<pre><code>int=Integrate[x^2,x]
"Letting"; int=Integrate[x^2,x]
"We find that"; int/x; ", as expected."
</code></pre>
<blockquote>
<p>$\text{int}=\int x^2 \, dx=\frac{x^3}{3}$</p>
<p>$\color{blue}{\textit {Letting}}\>\text{int}=\int x^2 \, dx=\frac{x^3}{3}$</p>
<p>$\color{blue}{\textit {We find that}}\>\frac{int}{x} = \frac{x^2}{3}\> \color{blue}{\textit {, as expected.}}$</p>
</blockquote>
<p>[Attribution: The code is a synthesis and extension, done by MB1965 and myself (<a href="https://mathematica.stackexchange.com/questions/134653/combined-inline-printing-of-input-output-and-text-w-minimal-added-syntax/134657#134657">Combined inline printing of input, output, and text, w/ minimal added syntax</a>), of code blocks written by Simon Rochester (<a href="https://mathematica.stackexchange.com/questions/134406/would-like-input-and-output-printed-on-same-line-w-o-needing-extra-syntax">Would like input and output printed on same line, w/o needing extra syntax</a>) and Mr. Wizard (<a href="https://mathematica.stackexchange.com/questions/11961/notebook-formatting-easier-descriptions-for-equations-and-results/11987#11987">Notebook formatting - easier descriptions for equations and results?</a> ).]</p>
<h1>Issue</h1>
<p>As far as I can tell, the code works perfectly. However, I've subsequently realized that the code's input=output functionality isn't appropriate for graphical output, since it puts the graphics inline with the input (making them too small, and thus necessitating manual resizing):</p>
<pre><code>Plot[x^2, {x, -10, 10}]
</code></pre>
<p><a href="https://i.stack.imgur.com/mNTEt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mNTEt.png" alt="enter image description here"></a></p>
<h1>Objective</h1>
<p>To address the above issue, I'd like to modify the code to enable me to tell Mathematica to not implement its inline input=output functionality for specific inputs. I had in mind adding some distinguishing symbol, or combination of symbols, not used elsewhere in Mathematica, at the end of my input (e.g., a triple semicolon or colon): </p>
<pre><code>Plot[x^2, {x, -10, 10}];;;
</code></pre>
<p>In such cases, it would be nice to retain the code's ability to add text to output; here I'd like the text to be placed either above or below the graphics (depending on whether it's entered before or after the math in the input), rather than inline.</p>
<p>Secondarily, I'd also like to modify the code to be able to specify specific commands for which its inline input=output functionality is to be disabled (Plot, ListPlot, etc.).</p>
<p>I've made several attempts to achieve these modifications, but they were all unsuccessful.</p>
<p>Finally, note that I could globally deactivate the code prior to a given evaluation using:</p>
<pre><code>$PreRead = .
$PrePrint = .
</code></pre>
<p>...and then reactivate it.</p>
<p>But that's somewhat inconvenient and impractical, since I would then need to go back up to wherever I'd posted the code block to reactivate it. In addition, this would preclude me from being able to globally evaluate the entire notebook (because of the code's length, it's not practical to re-paste it following every statement for which I'd deactivated it).</p>
<hr>
<h1>Code</h1>
<pre><code>$note1 = Null;
$note2 = Null;
$note3 = Null;
$outputStyles =
<|
"Default" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
},
"Before" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
},
"After" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
}
|>;
boxExpr[body_] :=
RowBox@{"Replace", "[", "\"thisIsJustATag\"", ";", body, ",",
"Null", "->", "\"\"", "]"};
styleNote[note_, style_] :=
Style[ToExpression@note,
Sequence @@ Lookup[$outputStyles, style, $outputStyles["Default"]]];
extractNotes[boxes_] :=
Replace[boxes, {RowBox[{note1_String?(StringMatchQ[#, "\"*\""] &),
";", body__, ";",
note2_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note1 =
styleNote[note1, "Before"]; $note2 =
styleNote[note2, "After"];
boxExpr@body),
RowBox[{body__, ";",
note_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note2 =
styleNote[note, "After"];
$note1 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &), ";",
body__}] :> ($note1 = styleNote[note, "After"];
$note2 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &),
";"}] :> ($note3 = styleNote[note, "Neither"];
$note2 = Null; $note1 = Null;
note),
e_ :> ($note1 = Null; $note2 = Null; boxExpr@e)}];
applyFormatting[out_] :=
With[{line = $Line},
HoldForm[In[line] = $placeHolder] /.
DownValues[In] /. {
$placeHolder -> out,
HoldPattern[
Replace[CompoundExpression["thisIsJustATag", expr_],
Null -> ""]] :> expr
}
/. {
HoldPattern[a_ = ""] :> a,
HoldPattern[a_ = a_] :> a,
HoldPattern[a_ = HoldForm[a_]] :> a,
HoldPattern[(c : (a_ = b_)) = b_] :> c,
HoldPattern[(a_ = b_) = c_] :> HoldForm[a = b = c]
}
];
addNotes[formatted_] :=
TraditionalForm@Switch[{$note1, $note2, $note3},
{Null, Null, Except@Null},
With[{r = $note3}, $note3 = Null; r],
{Except@Null, Except@Null, _},
With[{r1 = $note1, r2 = $note2}, $note1 = $note2 = Null;
Row[{r1, formatted, r2}, Spacer[5]]
],
{Except@Null, _, _},
With[{r = $note1}, $note1 = Null;
Row[{r, formatted}, Spacer[5]]
],
{_, Except@Null, _},
With[{r = $note2}, $note2 = Null;
Row[{formatted, r}, Spacer[5]]
],
_,
formatted
];
$PreRead = extractNotes;
$PrePrint = addNotes@*applyFormatting;
</code></pre>
<h1>Update</h1>
<p>For the convenience of the reader, here is the current code, incorporating Mr. Wizard's additions:</p>
<pre><code>$note1 = Null;
$note2 = Null;
$note3 = Null;
$outputStyles = <|
"Default" -> {Blue, 15, Italic, FontFamily -> "Times"},
"Before" -> {Blue, 15, Italic, FontFamily -> "Times"},
"After" -> {Blue, 15, Italic, FontFamily -> "Times"}|>;
boxExpr[body_] :=
RowBox@{"Replace", "[", "\"thisIsJustATag\"", ";", body, ",",
"Null", "->", "\"\"", "]"};
styleNote[note_, style_] :=
Style[ToExpression@note,
Sequence @@ Lookup[$outputStyles, style, $outputStyles["Default"]]];
extractNotes[boxes_] :=
Replace[boxes, {RowBox[{note1_String?(StringMatchQ[#, "\"*\""] &),
";", body__, ";",
note2_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note1 =
styleNote[note1, "Before"]; $note2 =
styleNote[note2, "After"];
boxExpr@body),
RowBox[{body__, ";",
note_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note2 =
styleNote[note, "After"];
$note1 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &), ";",
body__}] :> ($note1 = styleNote[note, "After"];
$note2 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &),
";"}] :> ($note3 = styleNote[note, "Neither"];
$note2 = Null; $note1 = Null;
note), e_ :> ($note1 = Null; $note2 = Null; boxExpr@e)}];
applyFormatting[out_] :=
With[{line = $Line},
HoldForm[In[line] = $placeHolder] /.
DownValues[In] /. {$placeHolder -> out,
HoldPattern[
Replace[CompoundExpression["thisIsJustATag", expr_],
Null -> ""]] :> expr} /. {HoldPattern[a_ = ""] :> a,
HoldPattern[a_ = a_] :> a, HoldPattern[a_ = HoldForm[a_]] :> a,
HoldPattern[(c : (a_ = b_)) = b_] :> c,
HoldPattern[(a_ = b_) = c_] :> HoldForm[a = b = c]}];
addNotes[formatted_] :=
TraditionalForm@
Switch[{$note1, $note2, $note3}, {Null, Null, Except@Null},
With[{r = $note3}, $note3 = Null; r], {Except@Null,
Except@Null, _},
With[{r1 = $note1, r2 = $note2}, $note1 = $note2 = Null;
Row[{r1, formatted, r2}, Spacer[5]]], {Except@Null, _, _},
With[{r = $note1}, $note1 = Null;
Row[{r, formatted}, Spacer[5]]], {_, Except@Null, _},
With[{r = $note2}, $note2 = Null;
Row[{formatted, r}, Spacer[5]]], _, formatted];
bypass = Replace[
RowBox[{b1___, RowBox[{b2___, ";;"}], ";"}] :> ($bypass = True;
RowBox[{b1, b2}])];
applyFormatting[out_] /; $bypass := Pane[out];
self : addNotes[formatted_] /; $bypass := ($bypass =.;
Unevaluated[self] /. (DownValues[addNotes] /. Row -> Column))
SetAttributes[graphicsQ, HoldFirst]
graphicsQ[_Graphics | _Graphics3D | _Graph | _Image | _Image3D] = True;
graphicsQ[Legended[_?graphicsQ, ___]] = True;
graphicsQ[{___, _?graphicsQ, ___}] = True;
applyFormatting[out_?graphicsQ] :=
Column[{# /. DownValues[In], Pane@out}] &[
HoldForm@InputForm@In@# &@$Line] /.
HoldPattern[Replace["thisIsJustATag"; expr_, Null -> ""]] :> expr
$PreRead = extractNotes@*bypass;
$PrePrint = addNotes@*applyFormatting;
</code></pre>
<p>Mr. Wizard's very nice code blocks succeed in accomplishing both my primary and secondary goals, so I have accepted his answer. But, for completeness, I should note that three corner issues remain. The first two involve the new code; the third, involving the use of the semicolon to suppress output (including that the system sometimes prints it in red), is a carry-over from the code I originally posted (interestingly, when I quit the kernel, the red semicolons revert to black):</p>
<p><a href="https://i.stack.imgur.com/8261e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8261e.png" alt="enter image description here"></a></p>
<p>TEST CODE:</p>
<pre><code>ParametricPlot[{{2 Cos[t], 2 Sin[t]}, {2 Cos[t], Sin[t]}, {Cos[t],
2 Sin[t]}, {Cos[t], Sin[t]}}, {t, 0, 2 Pi},
PlotLegends -> "Expressions"];
ParametricPlot[{{2 Cos[t], 2 Sin[t]}, {2 Cos[t], Sin[t]}, {Cos[t],
2 Sin[t]}, {Cos[t], Sin[t]}}, {t, 0, 2 Pi},
PlotLegends -> "Expressions"]
\[Alpha] = Integrate[x^2, x] ;;;
Sin[x] // N ;;;
"Some text"; Integrate[x^2, x];
Graphics[{Thick, Green,Rectangle[{0, -1}, {2, 1}], Red, Disk[], Blue,
Circle[{2, 0}], Yellow, Polygon[{{2, 0}, {4, 1}, {4, -1}}],
Purple, Arrowheads[Large], Arrow[{{4, 3/2}, {0, 3/2}, {0, 0}}],
Black, Dashed, Line[{{-1, 0}, {4, 0}}]}];
Graphics3D[Cylinder[]];
a = Plot[x^2, {x, -10, 10}];
</code></pre>
<p><a href="https://i.stack.imgur.com/UKGhb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UKGhb.png" alt="enter image description here"></a></p>
| Mr.Wizard | 121 | <h1>Pane</h1>
<p>It appears that someone along the way took out my <a href="https://mathematica.stackexchange.com/a/11987/121">carefully placed</a> <a href="http://reference.wolfram.com/language/ref/Pane.html" rel="nofollow noreferrer"><code>Pane</code></a> which prevented downsizing of graphics. If you just want the graphics full size add it back in, either in your <code>$PrePrint</code> code or manually:</p>
<pre><code>Pane @ Plot[x^2, {x, -10, 10}]
</code></pre>
<p><a href="https://i.stack.imgur.com/FoZth.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FoZth.png" alt="enter image description here"></a></p>
<h1>Intermediate Goal</h1>
<p>To realize your intermediate goal of bypassing the auto-input-inlining, and to change the style of comments from row to column on bypassed output, I would add these lines:</p>
<pre><code>bypass =
Replace[
RowBox[{b1___, RowBox[{b2___, ";;"}], ";"}] :>
($bypass = True; RowBox[{b1, b2}])
];
applyFormatting[out_] /; $bypass := Pane[out];
self : addNotes[formatted_] /; $bypass := ($bypass =.;
Unevaluated[self] /. (DownValues[addNotes] /. Row -> Column))
$PreRead = extractNotes@*bypass;
$PrePrint = addNotes@*applyFormatting;
</code></pre>
<p>Now:</p>
<p><a href="https://i.stack.imgur.com/bJmm3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bJmm3.png" alt="enter image description here"></a></p>
<p>And also:</p>
<pre><code>"label above"; Plot[x^2, {x, -10, 10}]; "label below" ;;;
</code></pre>
<p><a href="https://i.stack.imgur.com/SaE7e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SaE7e.png" alt="enter image description here"></a></p>
<h1>Final Goal</h1>
<p>I believe this additional code will achieve the final goal as discussed in the comments.</p>
<p><em>Now hopefully working in the three additional cases I forgot to handle.</em></p>
<pre><code>SetAttributes[graphicsQ, HoldFirst]
graphicsQ[_Graphics | _Graphics3D | _Graph | _Image | _Image3D] = True;
graphicsQ[Legended[_?graphicsQ, ___]] = True;
graphicsQ[{___, _?graphicsQ, ___}] = True;
applyFormatting[out_?graphicsQ] :=
Column[{# /. DownValues[In], Pane @ out}] &[
HoldForm @ InputForm @ In @ # & @ $Line
] /. HoldPattern[Replace["thisIsJustATag"; expr_, Null -> ""]] :> expr
</code></pre>
<p>Now:</p>
<pre><code>Plot[Sinc[x], {x, 0, 12}]
</code></pre>
<p><a href="https://i.stack.imgur.com/Pp4xz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pp4xz.png" alt="enter image description here"></a></p>
|
3,902,937 | <p>I wish to compute the number of possible partitions <span class="math-container">$S$</span> of a set of cardinal <span class="math-container">$np$</span> into <span class="math-container">$n$</span> subsets of cardinal <span class="math-container">$p$</span>.
It is easy to obtain the formula:
<span class="math-container">$\displaystyle S=\sum_{k=0}^n {{pk}\choose{p}}$</span>.
However I am getting stuck here without recourse to algebra. I have in fact used generating functions to compute the sum, but I am looking for a non-algebraic argument (for example double counting?) that could help me compute this sum directly, as I am sure there is a way to.</p>
| Wuestenfux | 417,848 | <p>"I wish to compute the number of possible partitions S of a set of cardinal np into n subsets of cardinal p."</p>
<p>Hint: Stirling numbers of 2nd kind: <span class="math-container">$S(n,k)$</span> is the number of partitions of an <span class="math-container">$n$</span> element set into <span class="math-container">$k$</span> subsets.</p>
<p>"It is easy to obtain the formula : <span class="math-container">$S=\sum_{k=0}^n{np\choose p}$</span>." Summand is independent of <span class="math-container">$k$</span>.</p>
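<p>A small sketch of the hinted recurrence (my addition, not part of the original hint; the standard recursion is $S(n,k)=S(n-1,k-1)+k\,S(n-1,k)$):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the 2nd kind: partitions of an n-set into k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # the last element is either a singleton block, or joins one of k blocks
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]
```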
|
3,902,937 | <p>I wish to compute the number of possible partitions <span class="math-container">$S$</span> of a set of cardinal <span class="math-container">$np$</span> into <span class="math-container">$n$</span> subsets of cardinal <span class="math-container">$p$</span>.
It is easy to obtain the formula:
<span class="math-container">$\displaystyle S=\sum_{k=0}^n {{pk}\choose{p}}$</span>.
However I am getting stuck here without recourse to algebra. I have in fact used generating functions to compute the sum, but I am looking for a non-algebraic argument (for example double counting?) that could help me compute this sum directly, as I am sure there is a way to.</p>
| Phicar | 78,870 | <p>You are close, but not quite. Just put your <span class="math-container">$n\cdot p$</span> objects in a line and permute them. Then divide them from left to right taking <span class="math-container">$p$</span> at a time and divide by the order in each group and for the order of the pairings, you will get
<span class="math-container">$$\frac{(n\cdot p)!}{n!\cdot (p!)^n}$$</span></p>
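<p>A brute-force cross-check of this count for small cases (my addition, not part of the original answer):</p>

```python
from itertools import permutations
from math import factorial

def count_by_formula(n, p):
    return factorial(n * p) // (factorial(n) * factorial(p) ** n)

def count_by_enumeration(n, p):
    """Unordered partitions of {0,...,np-1} into n blocks of size p."""
    seen = set()
    for perm in permutations(range(n * p)):
        blocks = frozenset(frozenset(perm[i * p:(i + 1) * p]) for i in range(n))
        seen.add(blocks)
    return len(seen)

for n, p in [(2, 2), (3, 2), (2, 3)]:
    assert count_by_formula(n, p) == count_by_enumeration(n, p)
print("formula agrees with brute-force enumeration")
```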
|
2,372,762 | <p>So I know in order to prove a function is bijective, you need to prove that it is both injective and surjective. I know that to prove it is an injection, I need to make $f(x) = f(y)$, and try to get $x=y$ from that, but I can't seem to manipulate the equations to do so. </p>
<p>Also, how would I prove that this is surjective? </p>
| Donald Splutterwit | 404,247 | <p>Suppose we have a Young tableau with shape
\begin{eqnarray*}
(\underbrace{k,\cdots,k}_{a_k \text{times}}, \underbrace{k-1,\cdots,k-1}_{a_{k-1} \text{times}},\cdots,\underbrace{2,\cdots,2}_{a_2 \text{times}}\underbrace{1,\cdots,1}_{a_1 \text{times}})
\end{eqnarray*}
This will give a contribution of $x^n$ to the generating function, where $n=a_1+2a_2+\cdots+(k-1)a_{k-1}+ka_k$.</p>
<p>To get all possible Young tableaux, $k$ can take any value $1,2,\cdots$ and the values $a_k$ can take any value $0,1,\cdots$. So the generating function is given by
\begin{eqnarray*}
\sum_{a_1=0}^{\infty} \sum_{a_2=0}^{\infty} \cdots \sum_{a_{k-1}=0}^{\infty} \sum_{a_k=0}^{\infty} \cdots x^n
\end{eqnarray*}
All of the sums will "decouple" and this can be rewritten as
\begin{eqnarray*}
\sum_{a_1=0}^{\infty} x^{a_1} \sum_{a_2=0}^{\infty} x^{2 a_2}\cdots \sum_{a_{k-1}=0}^{\infty} x^{(k-1)a_{k-1}} \sum_{a_k=0}^{\infty} x^{ka_k}\cdots
\end{eqnarray*}
Now sum each of these geometric series & we get
\begin{eqnarray*}
\frac{1}{(1-x)} \frac{1}{(1-x^2)} \cdots \frac{1}{(1-x^{k-1})} \frac{1}{(1-x^k)} \cdots = \prod_{k=1}^{\infty} \frac{1}{(1-x^k)}
\end{eqnarray*}</p>
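<p>A quick numeric cross-check of these coefficients (my addition): expanding the product up to $x^{10}$ reproduces the partition numbers $1,1,2,3,5,7,11,15,22,30,42$.</p>

```python
def partition_counts(N):
    """Coefficients of prod_{k>=1} 1/(1-x^k) up to x^N: multiply by the
    geometric series 1 + x^k + x^{2k} + ... for each k in turn."""
    coeffs = [1] + [0] * N
    for k in range(1, N + 1):
        for i in range(k, N + 1):
            coeffs[i] += coeffs[i - k]
    return coeffs

print(partition_counts(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```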
|
385,537 | <p>How would you go about proving the following?</p>
<p>$${1- \cos A \over \sin A } + { \sin A \over 1- \cos A} = 2 \operatorname{cosec} A $$</p>
<p>This is what I've done so far:</p>
<p>$$LHS = {1+\cos^2 A -2\cos A + 1 - \cos^2A \over \sin A(1-\cos A)}$$</p>
<p>....no idea how to proceed .... X_X</p>
| user123 | 76,473 | <p>I'll go step by step.</p>
<p>$${1 - \cos A \over \sin A} + {\sin A \over 1 - \cos A}$$</p>
<p>$${(1 - \cos A)^2 \over \sin A (1 - \cos A)} + {\sin^2 A \over \sin A(1 - \cos A)}$$</p>
<p>$$(1 - \cos A)^2 + \sin^2 A \over \sin A(1 - \cos A)$$</p>
<p>Expanding $(1 - \cos A)^2$ yields:</p>
<p>$$1 - 2\cos A + \cos^2 A + \sin^2 A \over \sin A(1 - \cos A)$$</p>
<p>Knowing that $\cos^2 x + \sin^2 x = 1$, we can change the expression to the following:</p>
<p>$$1 - 2 \cos A + 1 \over \sin A(1 - \cos A)$$</p>
<p>$$2 - 2 \cos A \over \sin A(1 - \cos A)$$</p>
<p>We factorize using $2$:</p>
<p>$$2(1 - \cos A) \over \sin A(1 - \cos A)$$</p>
<p>$$2 \over \sin A$$</p>
<p>$$2 \left ({1 \over \sin A} \right)$$</p>
<p>Finally:</p>
<p>$$ 2 \csc A$$</p>
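<p>A numeric spot-check of the identity at a few angles (my addition, not part of the derivation):</p>

```python
import math

# Check (1 - cos A)/sin A + sin A/(1 - cos A) == 2 csc A at sample angles
# (avoiding A = 0 and A = pi, where the expressions are undefined).
for a in [0.3, 1.0, 2.0, 2.9]:
    lhs = (1 - math.cos(a)) / math.sin(a) + math.sin(a) / (1 - math.cos(a))
    rhs = 2 / math.sin(a)
    assert math.isclose(lhs, rhs), (a, lhs, rhs)
print("identity verified at sample angles")
```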
|
385,537 | <p>How would you go about proving the following?</p>
<p>$${1- \cos A \over \sin A } + { \sin A \over 1- \cos A} = 2 \operatorname{cosec} A $$</p>
<p>This is what I've done so far:</p>
<p>$$LHS = {1+\cos^2 A -2\cos A + 1 - \cos^2A \over \sin A(1-\cos A)}$$</p>
<p>....no idea how to proceed .... X_X</p>
| Inceptio | 63,477 | <p><strong>Hint:</strong></p>
<p>$1-\cos A=1-(1-2 \sin^2\dfrac{A}{2})=2\sin^2 \dfrac{A}{2}$</p>
<p>$\dfrac{2 \sin^2 \dfrac{A}{2}}{2 \sin \dfrac{A}{2} \cos \dfrac{A}{2}}=\tan \dfrac{A}{2}$</p>
<p>The other expression will be $\cot \dfrac{A}{2}$</p>
<p>$(\tan^2 \dfrac{A}{2}+1) /\tan\dfrac{A}{2}= \dfrac{\sec^2 \dfrac{A}{2} \cos \dfrac{A}{2}}{\sin \dfrac{A}{2}}= \dfrac{1}{\sin \dfrac{A}{2}\cos \dfrac{A}{2}}= \dfrac{2}{\sin A}$</p>
|
385,537 | <p>How would you go about proving the following?</p>
<p>$${1- \cos A \over \sin A } + { \sin A \over 1- \cos A} = 2 \operatorname{cosec} A $$</p>
<p>This is what I've done so far:</p>
<p>$$LHS = {1+\cos^2 A -2\cos A + 1 - \cos^2A \over \sin A(1-\cos A)}$$</p>
<p>....no idea how to proceed .... X_X</p>
| lab bhattacharjee | 33,337 | <p>LCM is not required.</p>
<p>Observe that the first term already has $\sin A$ in the denominator.</p>
<p>$$\text{Now,}\frac{\sin A}{1-\cos A}=\frac{\sin A(1+\cos A)}{1-\cos^2A}=\frac{\sin A(1+\cos A)}{\sin^2A}=\frac{1+\cos A}{\sin A}$$</p>
<p>$$\text{So,}{1- \cos A \over \sin A } + { \sin A \over 1- \cos A} =\frac{1-\cos A}{\sin A}+\frac{1+\cos A}{\sin A}=\frac2{\sin A}=2\csc A$$</p>
|
2,870,729 | <blockquote>
<p>Why does $|e^{ix}|^2 = 1$?</p>
</blockquote>
<p>The book said $e^{ix} = \cos x + i\sin x$, and square it, then $|e^{ix}|^2 = \cos^2x + \sin^2x = 1$.</p>
<p>But, when I calculated it, $ |e^{ix}|^2 = \left|\cos x + i\sin x\right|^2 = \cos^2x - \sin^2x + 2i\sin x\cos x$.</p>
<p>I can't make it equal $1$. How can I do it?</p>
| Siong Thye Goh | 306,553 | <p>If $x \in \mathbb{R}$,
$$\cos^2x - \sin^2 x= \cos(2x)$$</p>
<p>$$2 \sin x \cos x =\sin(2x)$$ </p>
<p>$$|(\cos x + i \sin x)^2|^2=\cos^2(2x)+\sin^2(2x)=1$$</p>
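<p>A quick numeric confirmation (my addition) that the modulus is $1$ for real $x$:</p>

```python
import cmath
import math

for x in [0.0, 0.7, 1.5, 3.0]:
    z = cmath.exp(1j * x)
    assert math.isclose(abs(z) ** 2, 1.0)
    # the book's route: |cos x + i sin x|^2 = cos^2 x + sin^2 x = 1
    assert math.isclose(math.cos(x) ** 2 + math.sin(x) ** 2, 1.0)
print("|e^{ix}|^2 = 1 at all sample points")
```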
|
1,484,838 | <p>Ok, so I just learned trig identities and I come across this problem that had it's answer to it, and I have no idea how they got to that answer.</p>
<p>Here is the problem:</p>
<p>$$
\frac{-\sec\theta}{1-\cos\theta}=\frac{-1-\sec\theta}{\sin^2\theta}
$$</p>
<p>Now the problem calls for the left side to be adjusted. Here's where it came to first:</p>
<p>$$
\frac{-1-\sec\theta}{1-\cos^2\theta}
$$</p>
<p>After that, it then came to the solution which was:</p>
<p>$$
\frac{-1-\sec\theta}{\sin^2\theta}
$$</p>
<p>I'm stumped on how they got to the second step. What am I missing?</p>
| gdm | 196,478 | <p>$1= \cos^2(\theta) + \sin^2(\theta)$ (Pythagoras).
<a href="https://i.stack.imgur.com/aFKYH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aFKYH.jpg" alt="enter image description here"></a></p>
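<p>A numeric spot-check of the target identity (my addition): multiplying the numerator and denominator of the left side by $1+\cos\theta$ and using $\sin^2\theta = 1-\cos^2\theta$ gives the right side.</p>

```python
import math

# -sec t / (1 - cos t)  ==  (-1 - sec t) / sin^2 t  at sample angles
for t in [0.4, 1.1, 2.0, 2.8]:
    sec = 1 / math.cos(t)
    lhs = -sec / (1 - math.cos(t))
    rhs = (-1 - sec) / math.sin(t) ** 2
    assert math.isclose(lhs, rhs), (t, lhs, rhs)
print("both sides agree at sample angles")
```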
|
12,690 | <p>I understand Lie groups are defined by the structure constants associated with the Lie brackets, which are treated as commutators in quantum mechanics, but I don't know of a mathematical theory related to group theory that defines or uses an anticommutator. If Lie group theory uses the commutator, what theory uses the anticommutator?</p>
<p>Finite groups (not Lie groups, which are continuous) can be specified by structure equations analogous to a Lie bracket, but more general, of which commutation or anticommutation relations are just one of infinitely many possibilities. Is there such a variety of possible structure equations in continuous groups too?</p>
| Raskolnikov | 3,567 | <p>The algebraic structure corresponding most naturally to the anticommutator is that of the <a href="http://en.wikipedia.org/wiki/Clifford_algebra" rel="nofollow">Clifford algebras</a>.</p>
<p>One can construct a Clifford algebra by defining $n$ generating elements $\mathbf{e}_j$ with $j \in \{1,\ldots,n\}$ such that they satisfy the conditions:</p>
<p>$$ \mathbf{e}_i \mathbf{e}_j + \mathbf{e}_j \mathbf{e}_i = 2 \delta_{ij} \; ,$$</p>
<p>where $\delta_{ij}$ is the Kronecker delta. From these elements you can form products of elements of the form</p>
<p>$$\mathbf{e}_{i_1}\mathbf{e}_{i_2}\ldots\mathbf{e}_{i_p} \; ,$$</p>
<p>for $p$ varying from $0$ to $n$. There's a total of $2^n$ products that can be formed and from these, taking linear combinations, you can generate the rest of the algebra.</p>
<p>You can check that Pauli matrices and quaternions form Clifford algebras.</p>
<p>Now, if you take a Clifford algebra with $n$ generating elements, and construct the elements $M_{ij}=\frac{1}{2}\mathbf{e}_{i}\mathbf{e}_{j}$, you can show that they form the basis of the Lie algebra $so(n)$.</p>
<p>Another construct related to fermions, and which is important in the path integral representation of fermions, are the <a href="http://en.wikipedia.org/wiki/Grassmann_number" rel="nofollow">Grassmann numbers</a>.</p>
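<p>A small numeric illustration of the Clifford relations (my addition, using the Pauli matrices mentioned above):</p>

```python
import numpy as np

# Pauli matrices as generators e_1, e_2, e_3 of a Clifford algebra (n = 3)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s1, s2, s3]

# e_i e_j + e_j e_i = 2 delta_ij (times the identity matrix)
for i, a in enumerate(paulis):
    for j, b in enumerate(paulis):
        anti = a @ b + b @ a
        expected = 2 * np.eye(2) if i == j else np.zeros((2, 2))
        assert np.allclose(anti, expected)
print("Pauli matrices satisfy the Clifford relations")
```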
|
185,112 | <p>Does there exist $m,n\ge1$, an $m \times n$ matrix $A$, and a vector $x \in \mathbb{R}^n$ such that:</p>
<ul>
<li>The entries of $A$ are $\in \{0, 1\}$.</li>
<li>For all pairs of columns $u, v$ of $A$ the entries of $u - v$ are never either all non-negative or all non-positive (i.e. there is a positive entry and a negative entry in $u - v$).</li>
<li>$\sum_i x_i = 0$.</li>
<li>The entries of $Ax$ are all non-negative with at least one entry being strictly positive.</li>
</ul>
<p>Edit: It turns out this is not true via a concrete counterexample found by my collaborator. I would list it, but it's rather large.</p>
| Joel David Hamkins | 1,946 | <p>Unless I have misunderstood, here is a counterexample. Let $A$ be the $2\times 2$ identity matrix. This has your Pareto property on $x-y$, but if $x=[{a\atop b}]$, then $Ax=x$, and there is no way to have both $a+b=0$ and both $a$ and $b$ non-negative with at least one positive.</p>
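<p>A tiny numeric illustration of this counterexample (my addition, not part of the original answer):</p>

```python
import numpy as np

A = np.eye(2)                      # 0/1 entries
d = A[:, 0] - A[:, 1]              # column difference (1, -1)
assert (d > 0).any() and (d < 0).any()   # the Pareto condition holds

# Every x with sum(x) = 0 has the form (a, -a); since Ax = x, the entries
# of Ax can never be all non-negative with one strictly positive.
for a in np.linspace(-1, 1, 201):
    y = A @ np.array([a, -a])
    assert not ((y >= 0).all() and (y > 0).any())
print("no admissible x exists for A = I, as the answer argues")
```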
|
3,953,681 | <p>I have a basic question but I have failed in solving it. I have the equation of a cylinder which is <span class="math-container">$y^2 + z^2 = r^2$</span> (centered in the x-axis). The parametric equation (dependent on <span class="math-container">$L$</span> and <span class="math-container">$s$</span>) is <span class="math-container">$(x,y,z) = (L, r\cos(s), r\sin(s))$</span>.</p>
<p>I would like to rotate it by a certain angle <span class="math-container">$\theta$</span> (anticlockwise). Thus I have the new axes from the rotation as: <span class="math-container">$x=x'*\cos\theta + z'*\sin\theta$</span>, <span class="math-container">$y=y'$</span> and <span class="math-container">$z=r*\sin\theta$</span>. However, when rewriting the equation of the cylinder as <span class="math-container">$(y')^2 + (-x'*\sin\theta + z'*\cos\theta)^2 = r^2$</span> and parametrizing, I get: <span class="math-container">$(x,y,z) = (L, r*\cos(s), z+x'*\tan\theta)$</span>, with <span class="math-container">$z=r*\sin\theta$</span>. When I plot this, I get an elliptic cylinder.
Does anyone know what I am doing wrong? I need such an equation because I will generate multiple cylinders later computationally.</p>
<p>I have followed previous posts such as <a href="https://math.stackexchange.com/questions/2733090/if-i-have-an-oblique-cylinder-can-i-trim-it-in-to-a-rectilinear-cylinder">If I have an oblique cylinder can I trim it in to a rectilinear cylinder?</a> but they actually obtain the elliptic cylinder.</p>
<p>Many thanks!</p>
| Quanto | 686,284 | <p>Substitute <span class="math-container">$x=\sqrt{\frac ab }\frac1t$</span> to get</p>
<p><span class="math-container">$$\int \frac{dx}{x\sqrt{a - bx^2}}=-\frac1{\sqrt a}\int \frac{dt}{\sqrt{t^2-1}}= - \frac1{\sqrt a}\cosh^{-1}t=- \frac1{\sqrt a}\ln\left(t+\sqrt{t^2-1}\right)
$$</span></p>
<p>which is equal to <span class="math-container">$\frac {1}{\sqrt{a}}\ln\left(\frac{\sqrt{a}-\sqrt{a-bx^2}}{x\sqrt{a}}\right)$</span>.</p>
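<p>A finite-difference check of this antiderivative (my addition; the constants $a=5$, $b=2$ are arbitrary test values):</p>

```python
import math

a, b = 5.0, 2.0   # arbitrary test constants with a, b > 0

def F(x):
    """Claimed antiderivative (valid for 0 < x < sqrt(a/b))."""
    return (1 / math.sqrt(a)) * math.log(
        (math.sqrt(a) - math.sqrt(a - b * x * x)) / (x * math.sqrt(a)))

def integrand(x):
    return 1 / (x * math.sqrt(a - b * x * x))

for x in [0.3, 0.7, 1.2]:
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert math.isclose(deriv, integrand(x), rel_tol=1e-5)
print("F'(x) matches the integrand numerically")
```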
|
1,574,663 | <p>I'm a first time Calc I student with a professor who loves using $e^x$ and logarithms in questions. So, loosely I know L'Hopital's rule states that when you have a limit that is indeterminate, you can differentiate the function to then solve the problem. But what do you do when no matter how much you differentiate, you just keep getting an indeterminate answer? For example, a problem like</p>
<p>$\lim _{x\to \infty }\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}$</p>
<p>When you apply L'Hopital's rule you just endlessly keep getting an indeterminate answer. With just my basic understanding of calculus, how would I go about solving a problem like that?</p>
<p>Thanks</p>
| marty cohen | 13,079 | <p>$$\frac{e^x+e^{-x}}{e^x-e^{-x}}
=\frac{e^x-e^{-x}+2e^{-x}}{e^x-e^{-x}}
=1+\frac{2e^{-x}}{e^x-e^{-x}}
=1+\frac{2}{e^{2x}-1}
\to 1$$
as $x \to \infty$.</p>
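<p>A quick numeric look at the same limit (my addition), confirming the rewriting $1+\frac{2}{e^{2x}-1}$:</p>

```python
import math

def f(x):
    return (math.exp(x) + math.exp(-x)) / (math.exp(x) - math.exp(-x))

for x in [1, 2, 5, 10, 20]:
    print(x, f(x))           # values decrease toward 1

assert abs(f(20) - 1) < 1e-12
assert math.isclose(f(5), 1 + 2 / (math.exp(10) - 1))   # rewritten form
```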
|
4,204,133 | <p>Trying to solve <a href="https://math.stackexchange.com/questions/4201328/how-can-i-arrange-a-group-of-people-at-tables-and-switch-them-around-so-that-no">this problem</a> led me to consider the following generalization.</p>
<p>Let <span class="math-container">$g$</span> and <span class="math-container">$p$</span> be positive integers. Imagine that you own <span class="math-container">$g$</span> distinct board games, where each game requires exactly <span class="math-container">$p$</span> people to play. You plan to host a game night, and invite exactly <span class="math-container">$p\times g$</span> friends over to play your games. You plan to do this over the course of <span class="math-container">$g$</span> rounds, where in each round, the group divides itself into <span class="math-container">$g$</span> groups of <span class="math-container">$p$</span> people, and each group plays a game. The goal is to do this in such that way that</p>
<ol>
<li><p>Everyone plays every game exactly once, and</p>
</li>
<li><p>No pair of people play together more than once.</p>
</li>
</ol>
<p>This is equivalent to finding a <span class="math-container">$g\times (gp)$</span> matrix such that each column is a permutation of <span class="math-container">$\{1,\dots,g\}$</span>, each number in <span class="math-container">$\{1,\dots,g\}$</span> appears <span class="math-container">$p$</span> times in each row, and where any two columns agree in at most one place (each column is a player, each row is a round, the entry says which game that player plays in that round). When <span class="math-container">$p=1$</span>, this is just a Latin square of order <span class="math-container">$g$</span>.</p>
<p>My question is, <strong>for what values of <span class="math-container">$g$</span> and <span class="math-container">$p$</span> is this possible</strong>? Also, I am curious if anyone has seen this problem in the literature. This certainly bears a resemblance to the social golfer problem. Both this and the SGP involve dividing <span class="math-container">$pg$</span> people into <span class="math-container">$g$</span> groups of <span class="math-container">$p$</span> over the course of several rounds, such that no pair of people play together twice. The difference is that here, people are playing distinct games instead of splitting into unlabeled groups, so constraint <span class="math-container">$(1)$</span> does not make sense in the context of SGP and makes this problem distinct.</p>
<p>This is possible in the trivial cases where <span class="math-container">$g=1$</span> or <span class="math-container">$p=1$</span>. Besides these, I know that <span class="math-container">$$g\ge p+1$$</span> is a necessary condition for a schedule to exist. Consider the <span class="math-container">$p$</span> people who play Monopoly (say) in the first round. In the next round, they must all play different games, so there must be at least <span class="math-container">$p$</span> other games besides Monopoly.</p>
<p>I suspect that this is also a sufficient condition, since I have only found positive results in this regime:</p>
<ul>
<li><p>You can succeed with <span class="math-container">$p=2$</span>, as long as <span class="math-container">$g$</span> is odd. For each <span class="math-container">$x,y\in \{0,1,\dots,g-1\}$</span>, persons numbered <span class="math-container">$2x$</span> and <span class="math-container">$2y+1$</span> will play game number <span class="math-container">$x+y\pmod g$</span> during round number <span class="math-container">$x-y\pmod g$</span>.</p>
</li>
<li><p>It is also possible when <span class="math-container">$(g,p)=(4,3)$</span> and <span class="math-container">$(5,4)$</span>. I found the schedule for the former case by hand, and for the latter case by computer. However, I cannot see how the <a href="https://math.stackexchange.com/a/4203659/177399">schedule</a> I found would generalize. Here is the solution for <span class="math-container">$(g,p)=(4,3)$</span>:
<span class="math-container">$$
\begin{array}{|cccccccccccc|}\hline
1&1&1&2&2&2&3&3&3&4&4&4\\
2&3&4&1&3&4&1&2&4&1&2&3\\
3&4&2&4&1&3&2&4&1&3&1&2\\
4&2&3&3&4&1&4&1&2&2&3&1\\\hline
\end{array}
$$</span></p>
</li>
</ul>
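<p>For the explicit <span class="math-container">$p=2$</span> construction above, the stated properties can be machine-checked. The sketch below (the helper names are my own) verifies, for several odd <span class="math-container">$g$</span>, that each round partitions the <span class="math-container">$2g$</span> people into <span class="math-container">$g$</span> pairs, that no pair of people plays together twice, and that each person plays every game exactly once:</p>

```python
# Check the p = 2 construction for odd g: persons 2x and 2y+1 (x, y in 0..g-1)
# play game (x + y) mod g during round (x - y) mod g.
def schedule(g):
    rounds = [[set() for _ in range(g)] for _ in range(g)]  # rounds[r][m] = players of game m in round r
    for x in range(g):
        for y in range(g):
            rounds[(x - y) % g][(x + y) % g].update({2 * x, 2 * y + 1})
    return rounds

def valid(g):
    rounds, pairs = schedule(g), set()
    for r in range(g):
        if sorted(p for m in range(g) for p in rounds[r][m]) != list(range(2 * g)):
            return False                      # someone missing or double-booked this round
        for m in range(g):
            if len(rounds[r][m]) != 2:        # each game must get exactly p = 2 players
                return False
            pair = frozenset(rounds[r][m])
            if pair in pairs:                 # no pair of people may meet twice
                return False
            pairs.add(pair)
    # each person should play every game exactly once over the g rounds
    for person in range(2 * g):
        games = sorted(m for r in range(g) for m in range(g) if person in rounds[r][m])
        if games != list(range(g)):
            return False
    return True

print(all(valid(g) for g in (3, 5, 7, 9, 11)))  # True
```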
<p>Therefore, I further ask <strong>is it indeed true that a schedule exists if and only if <span class="math-container">$g\ge p+1$</span></strong>?</p>
| Maximilian Janisch | 631,742 | <p>Hint: Is <span class="math-container">$]z-r, z+r[\times\{0\}$</span> an open subset of <span class="math-container">$\mathbb R^2$</span> ?</p>
|
1,511,246 | <blockquote>
<p>What is the value of $0.7\overline{54}$ +$0.69\overline2$?</p>
<p>(a) $\frac{1813}{900}$ (b) $\frac{1783}{910}$ (c) $\frac{14323}{9900}$ (d) $\frac{13243}{9900}$</p>
</blockquote>
<p>I get</p>
<p>@edit</p>
<p>$$\frac{754-7}{990} + \frac{692-69}{900} = \frac{747}{990} + \frac{623}{900} = \frac{1}{90}\left(\frac{747}{11} + \frac{623}{10}\right)$$</p>
<p>$$= \frac{1}{9900}\left(747\cdot 10 + 623\cdot 11\right) = \frac{7470 + 6853}{9900} = \frac{14323}{9900}$$</p>
<p>Thanks for the help; I had not multiplied by $90$.</p>
<p>Can anyone guide me how to solve the problem?</p>
| gt6989b | 16,192 | <p><strong>HINT</strong></p>
<p>Easier to convert to regular fractions:
$$
0.7\overline{54} = \frac{7}{10} + \frac{54}{99 \cdot 10}
$$
Can you convert the other one?</p>
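<p>As a cross-check, the same conversion rule (repeating block minus non-repeating part, over the matching string of 9s and 0s) can be run through Python's <code>fractions</code> module:</p>

```python
from fractions import Fraction

a = Fraction(754 - 7, 990)   # 0.7(54): "754" minus the non-repeating "7", over 990
b = Fraction(692 - 69, 900)  # 0.69(2): "692" minus the non-repeating "69", over 900
print(a, b, a + b)           # 83/110 623/900 14323/9900
```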
|
112,432 | <p>a) Is the following statement true? Let $h$ be analytic in the unit disk such that $$|h(z)|\le \frac{|z|^2}{1-|z|^2},$$ then $$|h'(z)|\le \frac{2}{(1-|z|^2)^2}.$$
a') Is the following statement true? Let $h$ be analytic in the unit disk such that $$|h(z)|\le \frac{|z|^2}{1-|z|^2},$$ then the inequality $$|h'(z)|\le \frac{8}{\pi(1-|z|^2)^2}$$ is sharp. The inequality can be proved by using the Schur test and a Riesz-Thorin convexity type theorem (Dunford & Schwartz 1958, §VI.10.11).</p>
<p>b) If $$|h(z)|\le \frac{|z|^2}{|1-z^2|}$$ then we have better conclusion $$|h'|\le \frac{2|z|}{(1-|z|^2)|1-z^2|}$$ and this follows by using Schwarz lemma. Namely in this case $$|H(z)|=|(1-z^2) h(z)/z^2|\le 1.$$ Then $$|H'(z)|\le \frac{1-|H(z)|^2}{1-|z|^2}.$$</p>
<p>As $$H'(z)=(1-z^2) h'(z)/z^2-2/z^3 h(z),$$ it follows that $$|(1-z^2) h'(z)/z^2|\le \frac{2(1-|z|^2)/|z|^3 h(z)+1-|H(z)|^2}{1-|z|^2}$$ $$\le \frac{2|H(z)|/|z| +1-|H(z)|^2}{1-|z|^2}\le \frac{2|z|^{-1}}{1-|z|^2}.$$</p>
<p>The question a) is related to a precise estimation of the norm of a Bergman projection into the Bloch space, and is far from being homework.</p>
| fedja | 1,131 | <p>Looks like we are closing the question anyway, so I'll just provide a counterexample quickly before the final vote is cast. </p>
<p>If you think a bit of what is asked and what the natural freedoms and scalings are present here, you'll see that it is enough to get an analytic $f$ in the right half-plane $x>0$ ($z=x+iy$ as usual) such that $|f|<1/x$ and $|f'(1)|>1$. Now just take something like $f(z)=\frac 1z-aze^{-\sqrt{z}}$ with sufficiently small positive $a$. I leave it to somebody else to beat $4$ in the upper bound. </p>
<p>As to "motivation" in general, look up in the evening. You'll see the stars in the sky. What other motivation do you need?</p>
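<p>One can at least probe fedja's counterexample numerically. The sketch below is only a spot check on a finite grid (the grid and the choice <span class="math-container">$a=0.01$</span> are mine, and it proves nothing by itself): it samples <span class="math-container">$|f(z)|<1/x$</span> at a few points of the right half-plane and checks <span class="math-container">$|f'(1)|>1$</span> by a central difference:</p>

```python
import cmath

a = 0.01
f = lambda z: 1 / z - a * z * cmath.exp(-cmath.sqrt(z))  # fedja's candidate counterexample

# spot-check |f(z)| < 1/x at sample points of the right half-plane x > 0
ok = all(abs(f(complex(x, y))) < 1 / x
         for x in (0.1, 0.5, 1, 2, 5, 10)
         for y in (-20, -5, -1, 0, 1, 5, 20))

# numerical derivative at z = 1; analytically |f'(1)| = 1 + a/(2e) > 1
h = 1e-6
fp1 = (f(1 + h) - f(1 - h)) / (2 * h)
print(ok, abs(fp1) > 1)  # True True
```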
|
3,415,378 | <p>I am looking for an estimation or an approximation of </p>
<p><span class="math-container">$\sum _{k=1}^{n}{\log(k)\binom {n}{k}}$</span></p>
<p>Any hints will be appreciated.
Thank you.</p>
| Jack D'Aurizio | 44,121 | <p>Interesting question. Frullani's integral provides an integral representation for <span class="math-container">$\log(k)$</span>,
<span class="math-container">$$ \log(k) = \int_{0}^{+\infty}\frac{e^{-x}-e^{-kx}}{x}\,dx $$</span>
which in combination with the binomial theorem leads to
<span class="math-container">$$ \begin{eqnarray*}\sum_{k=1}^{n}\log(k)\binom{n}{k} &=& \int_{0}^{+\infty}\left[(2^n-1)e^{-x}-(1+e^{-x})^n+1\right]\frac{dx}{x}\\&=&\int_{0}^{1}\underbrace{\frac{1-(1+t)^n+t(2^n-1)}{t\log(t)}}_{f_n(t)}\,dt.\end{eqnarray*} $$</span>
Most of the mass of the integral comes from the right endpoint of the integration range, and since
<span class="math-container">$$ \lim_{t\to 1^-}f_n(t) = 1+\left(\frac{n}{2}-1\right)2^n,\qquad \lim_{t\to 1^-}f_n'(t) = -\frac{1}{2}+\left(\frac{n^2}{8}-\frac{3n}{8}+\frac{1}{2}\right)2^n $$</span>
we may approximate
<span class="math-container">$$ \int_{0}^{1}f_n(t)\,dt \approx \int_{0}^{1} f_n(1^-)e^{-\frac{f_n'(1^-)}{f_n(1^-)}x}\,dx\approx \frac{2^{n+1}(n-2)^2}{n^2-3n+4}\left(1-e^{-\frac{n^2-3n+4}{4n-8}}\right). $$</span></p>
|
428,841 | <p>Let $x_{n} = \sqrt{1 + \sqrt{2 + \sqrt{3 + ...+\sqrt{n}}}}$</p>
<p>a) Show that $x_{n} < x_{n+1}$</p>
<p>b) Show that $x_{n+1}^{2} \leq 1+ \sqrt{2} x_{n}$</p>
<p>Hint : Square $x_{n+1}$ and factor a 2 out of the square root</p>
<p>c) Hence Show that $x_{n}$ is bounded above by 2. Deduce that $\lim\limits_{n\to \infty} x_{n}$ exists.</p>
<p>Any help? I don't know where to start.</p>
| DonAntonio | 31,254 | <p>Hints: Induction on</p>
<p>$$\bullet\;\;x_n<x_{n+1}\iff 1+\sqrt{2+\sqrt{3\ldots+\sqrt n}}<1+\sqrt{2+\sqrt{3+\ldots+\sqrt{n+\sqrt{n+1}}}}\iff$$</p>
<p>$$2+\sqrt{3+\ldots+\sqrt n}<2+\sqrt{3+\ldots\sqrt{n+1}}\iff\ldots$$</p>
<p>$$\bullet\bullet\;x_{n+1}^2=1+\sqrt{2+\sqrt{3+\ldots+\sqrt{n+1}}}\le 1+\sqrt2\left(\sqrt{1+\sqrt{2+\ldots+\sqrt n}}\right)=1+\sqrt2\,x_n$$</p>
<p>$$\iff\left(\sqrt{2+\sqrt{3+\ldots+\sqrt{n+1}}}\right)\le\sqrt{2+2\sqrt{2+\ldots+\sqrt n}}\iff\ldots$$</p>
<p>For (c) you're already done with (a)-(b) since then you have a monotone ascending sequence bounded from above, so the sequence's limit equals its supremum...</p>
|
1,606,202 | <p>I'm having trouble figuring out why these two different ways to write this combination give different answers. Here is the scenario:</p>
<p>Q: Choose a group of 10 people from 17 men and 15 women, in how many ways are at most 2 women chosen?</p>
<p>Solution A: From 17 men choose 8, and from 15 women choose 2. Or from 17 men choose 9, and from 15 women choose 1. Or from 17 men choose 10.</p>
<p>C(17,8)*C(15,2)+C(17,9)*C(15,1)+C(17,10) = 2936648 ways</p>
<p>Solution B: Choose from the men to fill the first 8 positions and choose the next 2 positions from the remaining men and women.</p>
<p>C(17,8)*[C(9,2)+C(9,1)*C(14,1)+C(14,2)] = 6150430 ways</p>
<p>What is wrong with my logic or interpretation here? </p>
| Thomas Andrews | 7,933 | <p>Consider a simpler problem. How many ways are there to choose ten men from a set of ten men? By your second reasoning, choose eight first, then choose another two. That gives a total of:
$$\binom{10}{8}\binom{2}{2}=45$$
Why is that reasoning problematic?</p>
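<p>The overcount can be made concrete by brute force on a smaller analogue, say <span class="math-container">$5$</span> men and <span class="math-container">$3$</span> women choosing <span class="math-container">$4$</span> with at most <span class="math-container">$1$</span> woman (the numbers here are my own choice): the case-by-case count agrees with direct enumeration, while the "men first, then the rest" count is larger, because each all-male group is counted once for every way of splitting it:</p>

```python
from itertools import combinations
from math import comb

men, women = range(5), range(5, 8)       # 5 men (0-4), 3 women (5-7)
people = list(men) + list(women)

# direct enumeration: 4-person groups with at most 1 woman
brute = sum(1 for s in combinations(people, 4)
            if sum(p in women for p in s) <= 1)

cases = comb(5, 4) + comb(5, 3) * comb(3, 1)       # split by number of women (Solution A style)
split = comb(5, 3) * (comb(2, 1) + comb(3, 1))     # 3 men "first", then 1 more (Solution B style)
print(brute, cases, split)  # 35 35 50
```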
|
3,985,447 | <p>Which of these is a possible solution for
<span class="math-container">$$\cos^2(x)+\sin^2(x)-1=0$$</span>
in the interval <span class="math-container">$x\in[0,2\pi]$</span></p>
<p>a. <span class="math-container">$x=\frac{\pi}{4}$</span><br />
b. <span class="math-container">$x=\pi$</span><br />
c. <span class="math-container">$x=\frac{5\pi}{3}$</span><br />
d. all of the above</p>
| John Smith | 836,805 | <p>Like aryan said, it is an identity:
<span class="math-container">$\sin^2(x)+\cos^2(x)=1$</span> holds for all <span class="math-container">$x$</span>, so with a simple substitution you have <span class="math-container">$1-1=0$</span>;
therefore it is true for all <span class="math-container">$x$</span>.</p>
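<p>A quick numerical check confirms the identity at the three candidate values (up to floating-point error):</p>

```python
import math

for x in (math.pi / 4, math.pi, 5 * math.pi / 3):
    print(abs(math.cos(x) ** 2 + math.sin(x) ** 2 - 1) < 1e-12)  # True for each x
```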
|
3,971,059 | <p><span class="math-container">$x_n$</span> is a sequence. The only thing I can do here is just write the definition of <span class="math-container">$\lim_{n \to \infty} x_n=a$</span>, but that doesn't seem helpful to me. I tried looking for some inequalities that would help, and the only one that I thought could be a little bit helpful was the reverse triangle inequality, but I don't know how to use that here. Could you please give some advice or a hint?</p>
<p>Thanks in advance</p>
| Servaes | 30,382 | <p>This follows immediately, by induction on <span class="math-container">$k$</span>, from the product rule for limits; if the limits <span class="math-container">$\lim_{n\to\infty}a_n$</span> and <span class="math-container">$\lim_{n\to\infty}b_n$</span> exist, then
<span class="math-container">$$\lim_{n\to\infty}(a_n\cdot b_n)=\left(\lim_{n\to\infty}a_n\right)\cdot\left(\lim_{n\to\infty}b_n\right).$$</span></p>
|
3,910,053 | <p>The function
<span class="math-container">$u(x,t) = \frac{2}{\sqrt{\pi}}$$\int_{0}^\frac{x}{\sqrt{t}} e^{-s^2}ds$</span>
satisfies the partial differential equation</p>
<p><span class="math-container">$$\frac{\partial u}{\partial t} = K\frac{\partial^2u}{\partial x^2}$$</span></p>
<p>where <span class="math-container">$K$</span> is a positive constant number. Find the value of <span class="math-container">$K$</span>.</p>
<p>The value of <span class="math-container">$K$</span> that I calculated isn't a constant. Can anyone please help with this question?
Thank you so much!</p>
| egreg | 62,967 | <p>If <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$A$</span>, then <span class="math-container">$Av=\lambda v$</span> for some <span class="math-container">$v\ne0$</span>. Therefore
<span class="math-container">$$
(A+2I)v=Av+2v=\lambda v+2v=(\lambda+2)v
$$</span>
and so <span class="math-container">$\lambda+2$</span> is an eigenvalue of <span class="math-container">$A+2I$</span>.</p>
<p>The converse is also true: if <span class="math-container">$\mu$</span> is an eigenvalue of <span class="math-container">$A+2I$</span>, then <span class="math-container">$\mu-2$</span> is an eigenvalue of <span class="math-container">$A$</span> (similar proof).</p>
<p>Thus you find <em>all</em> the eigenvalues of <span class="math-container">$A+2I$</span> by adding <span class="math-container">$2$</span> to the eigenvalues of <span class="math-container">$A$</span>. Note that <span class="math-container">$A$</span> can be of <em>any</em> order.</p>
<p>The characteristic polynomial of <span class="math-container">$A$</span> is <span class="math-container">$x^2-3x-10=(x+2)(x-5)$</span>, so the eigenvalues are <span class="math-container">$-2$</span> and <span class="math-container">$5$</span>. The eigenvalues of <span class="math-container">$A+2I$</span> are <span class="math-container">$0$</span> and <span class="math-container">$7$</span>.</p>
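<p>The shift can be seen concretely. The <span class="math-container">$2\times2$</span> matrix below is a hypothetical stand-in chosen only so that its characteristic polynomial is <span class="math-container">$x^2-3x-10$</span>:</p>

```python
import math

# a hypothetical 2x2 matrix with characteristic polynomial x^2 - 3x - 10
A = [[1, 3],
     [4, 2]]

def eigs2(M):
    # eigenvalues of a 2x2 matrix from trace and determinant
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - d) / 2, (tr + d) / 2])

shifted = [[A[0][0] + 2, A[0][1]], [A[1][0], A[1][1] + 2]]  # A + 2I
print(eigs2(A))        # [-2.0, 5.0]
print(eigs2(shifted))  # [0.0, 7.0]
```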
|
227,096 | <p>Q:
Let $A$ be an $n\times n$ matrix defined by $A_{ij}=1$ for all $i,j$.
Find the characteristic polynomial of $A$.</p>
<p>There is probably a way to calculate the characteristic polynomial $(\det(A-tI))$ directly but I've spent a while not getting anywhere and it seems cumbersome. Something tells me there is a more intelligent and elegant way. The rank of $A$ is only 2. Is there a way to use this?</p>
| Rudy the Reindeer | 5,798 | <p>This is a sum over all entries $M_{ij}$ of $M$, multiplying the diagonal entries $M_{ii}$ by $2$.</p>
|
4,046,356 | <p>Recently, some of the remarkable properties of second-order
Eulerian numbers <span class="math-container">$ \left\langle\!\!\left\langle n\atop k\right\rangle\!\!\right\rangle$</span> <a href="https://oeis.org/A340556" rel="nofollow noreferrer">A340556</a> have been proved on MSE [ <a href="https://math.stackexchange.com/questions/4034224"> a </a>,
<a href="https://math.stackexchange.com/questions/4037172"> b </a>, <a href="https://math.stackexchange.com/questions/4037946"> c </a> ].</p>
<p>But there are also other notable identities to which these numbers lead.
For instance, we also noticed the following identity, which we haven't seen
elsewhere (reference?):</p>
<p><span class="math-container">$$ \sum_{j=0}^{k} \binom{n-j}{n-k}
\left\langle\!\! \left\langle n\atop j\right\rangle\!\! \right\rangle \,=\,
\sum_{j=0}^k (-1)^{j+k} \binom{n+k}{n+j} \left\{ n+j \atop j\right \} \quad ( n \ge 0) $$</span></p>
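<p>The identity can be checked by machine for small <span class="math-container">$n$</span>. The sketch below assumes the A340556 indexing, i.e. row <span class="math-container">$n\ge1$</span> begins with a <span class="math-container">$0$</span>, so these are the classical second-order Eulerian numbers with <span class="math-container">$k$</span> shifted by one:</p>

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def W(n, k):
    # second-order Eulerian numbers in the A340556 indexing: W(0,0)=1, row n>=1 starts with 0
    if k < 0 or k > n:
        return 0
    if n == 0:
        return 1 if k == 0 else 0
    return k * W(n - 1, k) + (2 * n - k) * W(n - 1, k - 1)

@lru_cache(maxsize=None)
def S2(n, k):
    # Stirling numbers of the second kind
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def lhs(n, k):
    return sum(comb(n - j, n - k) * W(n, j) for j in range(k + 1))

def rhs(n, k):
    return sum((-1) ** (j + k) * comb(n + k, n + j) * S2(n + j, j) for j in range(k + 1))

print(all(lhs(n, k) == rhs(n, k) for n in range(8) for k in range(n + 1)))  # True
```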
<p>If we use the notation <span class="math-container">$ \operatorname{W}_{n, k} $</span> for these numbers,
we can also introduce the corresponding polynomials.</p>
<p><span class="math-container">$$ \operatorname{W}_{n}(x) = \sum_{k=0}^n \operatorname{W}_{n, k} x^k \quad ( n \ge 0) $$</span></p>
<p>So far so routine, but then came the surprise:</p>
<p><span class="math-container">$$ 3^n W_n\left(-\frac13\right) \, = \, 2^n \left\langle\!\!\left\langle - \frac{1}{2} \right\rangle\!\!\right\rangle_n \quad ( n \ge 0) $$</span></p>
<p>On the right side are the numbers we recently asked about their
<a href="https://math.stackexchange.com/questions/4044848">combinatorial significance</a>!</p>
<p>Should I trust this strange equation?</p>
| Peter Luschny | 406,546 | <p>A sketch in three steps: First, we need a definition for <span class="math-container">$ \operatorname{W}_{n}(x)$</span>. With this, we do not want to assume the
second-order Eulerian numbers. So we take the RHS of the identity.</p>
<p>The next and primary step is to show that in general</p>
<p><span class="math-container">$$ \operatorname{W}_{n}(x) = (1 + x)^n
\left\langle \! \left\langle \frac{x}{1+x} \right\rangle \! \right\rangle _n. $$</span></p>
<p>Finally, plug in <span class="math-container">$ x = -\frac13 $</span>, et voilà,
<span class="math-container">$$ \operatorname{W}_{n}\left( - \frac13 \right) = \left( \frac23 \right)^n
\left\langle \! \left\langle -\frac12 \right\rangle \! \right\rangle _n .$$</span></p>
<p>The choice of the notation <span class="math-container">$ \operatorname{W}_{n}(x) $</span> was not accidental.
We are speaking here about the <em>Ward polynomials</em>. Their coefficients are
the <em>Ward numbers</em> which be found at <a href="https://oeis.org/A269939" rel="nofollow noreferrer">A269939</a>.</p>
|
292,651 | <blockquote>
<p>Does an integer $9<n<100$ exist such that the last 2 digits of $n^2$ is $n$? If yes, how to find them? If no, prove it.</p>
</blockquote>
<p>This problem puzzled me for a day, but I'm not making much progress. Please help. Thanks.</p>
| Hanul Jeon | 53,976 | <p>This problem is equivalent to $n^2\equiv n \pmod{100}$, and by <a href="http://www.wolframalpha.com/input/?i=x%5E2-x=0%20mod%20100" rel="nofollow">WolframAlpha</a> the solutions of this equation are $n=25,76$.</p>
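<p>A one-line brute-force search confirms this:</p>

```python
# two-digit n whose square ends in n
print([n for n in range(10, 100) if n * n % 100 == n])  # [25, 76]
```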
|
292,651 | <blockquote>
<p>Does an integer $9<n<100$ exist such that the last 2 digits of $n^2$ is $n$? If yes, how to find them? If no, prove it.</p>
</blockquote>
<p>This problem puzzled me for a day, but I'm not making much progress. Please help. Thanks.</p>
| lab bhattacharjee | 33,337 | <p>So, we need $n^2\equiv n\pmod{100}\iff 100\mid n(n-1)$</p>
<p>Now, $(n,n-1)=1$ and $100=2^2\cdot5^2$</p>
<p>So, </p>
<p>either (i) $100\mid n\implies n=100k$ where integer $k\ge0$</p>
<p>So we need $0<100k<100\implies 0<k<1$ which is not possible .</p>
<p>or (ii) $100\mid (n-1)\implies n=100k+1$ where integer $k\ge0$ </p>
<p>So we need $0<100k+1<100\implies 0<k<1$ which is not possible .</p>
<p>or (iii) $4\mid n$ and $25\mid (n-1)$</p>
<p>(a) $n\equiv0\pmod 4$ and $n\equiv1\pmod{25}$</p>
<p>Applying well-known <a href="http://mathworld.wolfram.com/ChineseRemainderTheorem.html" rel="nofollow">Chinese Remainder Theorem</a>,</p>
<p>$$n\equiv0\cdot b_1\cdot\frac{25\cdot4}4+1\cdot b_2\cdot\frac{25\cdot4}{25}\pmod{100}\equiv4b_1\pmod{100}$$ </p>
<p>where $b_1\cdot\frac{25\cdot4}4\equiv1\pmod4$ and $b_2\cdot\frac{25\cdot4}{25}\equiv1\pmod{25}$ </p>
<p>But we don't need $b_1$ as its coefficient is already $0$</p>
<p>and $4b_2\equiv1\pmod{25}\implies b_2\equiv4^{-1}\pmod{25}$</p>
<p>Using the <a href="http://en.wikipedia.org/wiki/Fundamental_recurrence_formulas#The_determinant_formula" rel="nofollow">Convergent</a> property of continued fraction, $\frac{25}4=6+\frac14$</p>
<p>So, the last but one convergent is $\frac61\implies 25\cdot1-4\cdot6=1\implies 4^{-1}\equiv-6\pmod{25}$</p>
<p>$\implies b_2\equiv-6\pmod{25}\implies n\equiv4\cdot(-6)\equiv-24\equiv76\pmod{100}$</p>
<p>(b)We have $n=4d,n-1=25e\implies 4d-25e=1$</p>
<p>$\implies 25e=4d-1 \implies 25(e+1)=4(d+6)\implies \frac{4(d+6)}{25}=e+1$ an integer </p>
<p>So, $25\mid(d+6)$ as $(25,4)=1$ $\implies d=25f-6$ for some integer $f$</p>
<p>$\implies n=4d=4(25f-6)=100f-24$</p>
<p>Using $(a)$ or $(b),$ we need $9<100f-24<100\implies 1\le f<2\implies f=1\implies n=76$</p>
<p>or (iv) $4\mid (n-1)$ and $25\mid n\implies n-1=4a,n=25b\implies 25b-4a=1$</p>
<p>Applying one of the two approaches $(a),(b)$ mentioned above, we get $n=100c+25 $</p>
<p>So we need $9<100c+25<100\implies 0\le c<1\implies c=0\implies n=25$</p>
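<p>Cases (iii) and (iv) can also be finished mechanically with modular inverses; Python's built-in <code>pow</code> computes them:</p>

```python
# case (iii): n ≡ 0 (mod 4) and n ≡ 1 (mod 25), so n = 4b with 4b ≡ 1 (mod 25)
b = pow(4, -1, 25)                 # 19, i.e. -6 mod 25, matching b_2 above
print(4 * b % 100)                 # 76
# case (iv): n ≡ 1 (mod 4) and n ≡ 0 (mod 25), so n = 25c with 25c ≡ 1 (mod 4)
c = pow(25, -1, 4)                 # 1
print(25 * c % 100)                # 25
```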
|
292,651 | <blockquote>
<p>Does an integer $9<n<100$ exist such that the last 2 digits of $n^2$ is $n$? If yes, how to find them? If no, prove it.</p>
</blockquote>
<p>This problem puzzled me for a day, but I'm not making much progress. Please help. Thanks.</p>
| awllower | 6,792 | <p>Write $n=10a+b$. Then $n^2 \equiv 20ab+b^2 \pmod{100}$. So the problem is reduced to solving $20ab+b^2\equiv 10a+b \pmod{100}$. Hence $100|b(20a+b-1)-10a$. So $10|b(b-1)$. But $0\leq b<10$, thus either $b$ is even and $b-1$ is divisible by $5$, or $b-1$ is even and $b$ is a multiple of $5$.<br>
In the former case, $b$ must be $6$ and $100|110a+30$, viz. $a=7$ and $n=76$.<br>
In the latter, $b$ must be $5$, so that $100|90a+20$, namely, $a$=$2$ and $n=25$.</p>
|
455,230 | <p>I found this proposition and don't see exactly as to why it is true and even more so, why the converse is false:</p>
<p>Proposition 1. The equivalence between the proposition $z \in D$ and the proposition $(\exists x \in D)x = z$ is provable from the definitory equations of the existential quantifier and of the equality relation. If $D = \{t_{1},t_{2},...t_{n}\}$, the sequent
$z = t_{1} \vee z = t_{2} \vee \ldots \vee z = t_{n} \vdash z \in D$ is provable from the definition of the additive disjunction $\vee$.</p>
<p>The converse sequent $z \in D \vdash z = t_{1} \vee z = t_{2} \vee \ldots \vee z = t_{n}$ is not provable.</p>
<p>The author goes on to say: "We adopt the intuitionistic interpretation of disjunction. With respect to it, one can characterize a particular class of finite sets"</p>
<p>On the first part of the proposition, I am not sure what the point is: if we take an element $z$ in $D$, then we could just call this element $x$, and hence $x = z$. Is there something more to this? On the second part of Proposition 1, since $z$ equals some $t \in D$, $t$ is in the set of axioms, and $t = z$, we get $t \vdash z$, as $z$ is derivable from $t$.</p>
<p>For the converse if $z \in D$ then why wouldn't z = some $t \in D$? Is this because of the Incompleteness theorem? That perhaps there D as a set of axioms has some consequence which can not be proven by the set of axioms in D? Or perhaps I am way off here.</p>
<p>Any ideas?</p>
<p>Thanks,</p>
<p>Brian</p>
| Matt E | 221 | <p>If $S$ is a simple module over a ring $A$, and $x$ is a non-zero element of
$S$, then the annihilator of $x$ is a maximal left ideal $\mathfrak m$ of $A$. </p>
<p>Thus if $S$ embeds into $A$, then $A$ contains a non-zero element annihilated by $\mathfrak m$.</p>
<p>If $A$ is not a simple module over itself (i.e. is not a division algebra), then $\mathfrak m$ will contain non-zero elements, and so if $A$ is an integral domain (i.e. does not contain non-zero zero-divisors), then it will not contain any simple submodules.</p>
<hr>
<p>The example $K[X]$ of a polyomial algebra that appears in the other answers is a special case of this general observation, since $K[X]$ is an integral domain.</p>
|
3,242,921 | <p>Prove that the equation<span class="math-container">$$x^4+(a-2)x^3+(a^2-2a+4)x^2-x+1=0$$</span>
does not admit <span class="math-container">$$x=-2$$</span> as a triple root.</p>
| JJC94 | 215,893 | <p>I figured it out thanks to Servaes.</p>
<p>Every hyperplane in <span class="math-container">$V=\mathbb{F}_2^n$</span> contains exactly <span class="math-container">$2^{n-1}-1$</span> non-zero vectors; thus, any set of <span class="math-container">$2^{n-1}$</span> distinct, non-zero vectors in <span class="math-container">$V$</span> cannot be contained in a single hyperplane, so they must span <span class="math-container">$V$</span>.</p>
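<p>The two counting facts used here can be confirmed by brute force for a small <span class="math-container">$n$</span> (the sketch below encodes vectors of <span class="math-container">$\mathbb{F}_2^3$</span> as bitmasks):</p>

```python
from itertools import combinations

n = 3
vecs = list(range(1, 2 ** n))               # nonzero vectors of F_2^n as bitmasks

def dot(u, v):
    # standard bilinear pairing over F_2: parity of the popcount of the AND
    return bin(u & v).count("1") % 2

def span(S):
    # XOR-closure of a set of bitmask vectors = their F_2-span
    closure = {0}
    for v in S:
        closure |= {v ^ w for w in closure}
    return closure

# every hyperplane (kernel of a nonzero functional) has 2^(n-1) - 1 nonzero vectors
print(all(sum(dot(a, v) == 0 for v in vecs) == 2 ** (n - 1) - 1 for a in vecs))  # True

# any 2^(n-1) distinct nonzero vectors span the whole space
print(all(len(span(S)) == 2 ** n for S in combinations(vecs, 2 ** (n - 1))))     # True
```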
|
60,259 | <p>The independence of theorems in some propositional calculus systems seems well studied. For example, if we just have the rules of detachment, substitution, and replacement, and every theorem of this axiom set {((p->q)->((q->r)->(p->r))), ((~p->p)->p), (p->(~p->q))}=<strong>X</strong> as our system <strong>X'</strong>, every theorem of <strong>X</strong>, as I've read, can get proved independent of any other theorem of the set (except itself). But, this "independence" simply has to happen outside of the system, since within the system, all the theorems of <strong>X</strong> imply each other. This works out this way, since (p->(q->(p->q))) happens in the system and via the rule of substitution (or equivalently (q->(p->q)), we can get every instance of (x->y) as a theorem, where x, y belong to <strong>X</strong>. In turn, this yields short proofs of every axiom from any other axiom of the axiom set. </p>
<pre><code>1 x since x is a theorem by hypothesis
2 (x->y) which can get formally demonstrated by the above
3 y 1, 2 detachment
</code></pre>
<p>So, "independence" of logical axioms simply can't mean that we can't prove a logical axiom from another axiom within a system like this, since all axioms can get derived from each other within <strong>X'</strong>. What then does "independence" of logical axioms mean precisely?</p>
<p>Edit: So it can get checked that (x->y) holds in the sense above, I've reproduced the necessary parts of Jan Lukasiewicz's <em>Elements of Mathematical Logic</em> to prove the "law of simplification" (q->(p->q)). Symbols of the type Lz refers to a thesis given by Lukasiewicz, while Sz refers to intermediate expressions (which also qualify as theorems of his system) which he indicates as intermediate expressions via his shorthand for proofs. I've included spaces where substitutions have gotten made. A string followed by "L1, p/Cpq" for example means that in thesis L1 all instances of "p" occuring in that thesis have gotten uniformly substituted by "Cpq". </p>
<pre><code>L1 CCpqCCqrCpr axiom
L2 CCNppp axiom
L3 CpCNpq axiom
S1 CC Cpq CCqrCpr CC CCqrCpr s C Cpq s L1, p/Cpq, q/CCqrCpr, r/s
L4 CCCCqrCprsCCpqs L1, S1 detachment
S2 CCCC Cqr Csr C p Csr CCsqCpCsr CC p Cqr CCsqCpCsr L4, q/Cqr, r/Csr, s/CCsqCpCsr
S3 CCCC q r C s r CpCsr CC s q CpCsr L4, p/s, s/CpCsr
L5 CCpCqrCCsqCpCsr S2, S3 detachment
S4 CCCC q r C p r CCCprsCCqrs CC p q CCCprsCCqrs L4, s/CCCprsCCqrs
S5 CC Cqr Cpr CC Cpr s C Cqr s L1, p/Cqr, q/Cpr, r/s
L6 CCpqCCCprsCCqrs S4, S5 detachment
S6 CC Cpq C CCprs CCqrs CC t CCprs C Cpq C t CCqrs L5, p/Cpq, q/CCprs, r/CCqrs, s/t
L7 CCtCCprsCCpqCtCCqrs L6, S6 detachment
S7 CC p CNpq CC CNpq r C p r L1, q/CNpq
L9 CCCNpqrCpr L3, S7 detachment
S8 CCCN p q CCCNpppCCqpp C p CCCNpppCCqpp L9, r/CCCNpppCCqpp
S9 CC Np q CCC Np p p CC q p p L6, p/Np, r/p, s/p
L10 CpCCCNpppCCqpp S8, S9 detachment
S10 C CCNppp CCCN CCNppp CCNppp CCNppp CC q CCNppp CCNppp L10, p/CCNppp
S11 CCCNCCNpppCCNpppCCNpppCCqCCNpppCCNppp L2, S10 detachment
S12 CCN CCNppp CCNppp CCNppp L2, p/CCNppp
L11 CCqCCNpppCCNppp S11, S12 detachment
S13 CCCN t CCNppp CCNppp C t CCNppp L9, p/t, q/CCNppp, r/CCNppp
S14 CC Nt CCNppp CCNppp L11 q/Nt
L12 CtCCNppp S14, S13 detachment
S15 CC t CC Np p p CC Np q C t CC q p p L7, p/Np, r/p, s/p
L13 CCNpqCtCCqpp L12, S15 detachment
S16 CC CNpq CtCCqpp CC CtCCqpp r C CNpq r L1 p/CNpq, q/CtCCqpp
L14 CCCtCCqpprCCNpqr L13, S16 detachment
S17 CCC NCCqpp CC q p p CCqpp CCN p q CCqpp L14, t/NCCqpp, r/CCqpp
S18 CCN CCqpp CCqpp CCqpp L2, p/CCqpp
L15 CCNpqCCqpp S18, S17 detachment
S19 CCCN p q CCqpp C p CCqpp L9, r/CCqpp
L17 CpCCqpp L15, S19 detachment
S20 CC q C CNpq q CC p CNpq C q C p q L5, p/q, q/CNpq, r/q, s/p
S21 C q CC Np q q L17, p/q, q/Np
S22 CCpCNpqCqCpq S20, S21 detachment
L18 CqCpq L3, S22 detachment
</code></pre>
<p>Now, since we have (q->(p->q)) as a theorem, by uniformly substituting thesis L1 for q, and L2 for p, since L1 holds as a thesis, we can obtain that (L1->L2) as a thesis, or equivalently, a theorem. So, consequently, one completely prove any thesis by any other thesis <em>within</em> the system using just detachment via a proof of the type "x, (x->y), y". What then does "independence" of logical axioms mean precisely?</p>
| Kaveh | 468 | <p>In classical propositional logic, we say that a formula $\varphi$ is independent from a theory (i.e. a set of formulas) $T$ if $T \nvDash \varphi$ and $T \nvDash \lnot \varphi$. Alternatively, if $T \cup \{\varphi\}$ and $T \cup \{\lnot \varphi\}$ are both satisfiable. Obviously any classical propositional tautology is part of the classical propositional logic and cannot be independent from any theory.</p>
<p>More generally, assume that we have a logic (which gives us some definition for $T \vDash \varphi$). We can similarly define the independence of a formula from a theory w.r.t. that logic.</p>
|
60,259 | <p>The independence of theorems in some propositional calculus systems seems well studied. For example, if we just have the rules of detachment, substitution, and replacement, and every theorem of this axiom set {((p->q)->((q->r)->(p->r))), ((~p->p)->p), (p->(~p->q))}=<strong>X</strong> as our system <strong>X'</strong>, every theorem of <strong>X</strong>, as I've read, can get proved independent of any other theorem of the set (except itself). But, this "independence" simply has to happen outside of the system, since within the system, all the theorems of <strong>X</strong> imply each other. This works out this way, since (p->(q->(p->q))) happens in the system and via the rule of substitution (or equivalently (q->(p->q)), we can get every instance of (x->y) as a theorem, where x, y belong to <strong>X</strong>. In turn, this yields short proofs of every axiom from any other axiom of the axiom set. </p>
<pre><code>1 x since x is a theorem by hypothesis
2 (x->y) which can get formally demonstrated by the above
3 y 1, 2 detachment
</code></pre>
<p>So, "independence" of logical axioms simply can't mean that we can't prove a logical axiom from another axiom within a system like this, since all axioms can get derived from each other within <strong>X'</strong>. What then does "independence" of logical axioms mean precisely?</p>
<p>Edit: So it can get checked that (x->y) holds in the sense above, I've reproduced the necessary parts of Jan Lukasiewicz's <em>Elements of Mathematical Logic</em> to prove the "law of simplification" (q->(p->q)). Symbols of the type Lz refers to a thesis given by Lukasiewicz, while Sz refers to intermediate expressions (which also qualify as theorems of his system) which he indicates as intermediate expressions via his shorthand for proofs. I've included spaces where substitutions have gotten made. A string followed by "L1, p/Cpq" for example means that in thesis L1 all instances of "p" occuring in that thesis have gotten uniformly substituted by "Cpq". </p>
<pre><code>L1 CCpqCCqrCpr axiom
L2 CCNppp axiom
L3 CpCNpq axiom
S1 CC Cpq CCqrCpr CC CCqrCpr s C Cpq s L1, p/Cpq, q/CCqrCpr, r/s
L4 CCCCqrCprsCCpqs L1, S1 detachment
S2 CCCC Cqr Csr C p Csr CCsqCpCsr CC p Cqr CCsqCpCsr L4, q/Cqr, r/Csr, s/CCsqCpCsr
S3 CCCC q r C s r CpCsr CC s q CpCsr L4, p/s, s/CpCsr
L5 CCpCqrCCsqCpCsr S2, S3 detachment
S4 CCCC q r C p r CCCprsCCqrs CC p q CCCprsCCqrs L4, s/CCCprsCCqrs
S5 CC Cqr Cpr CC Cpr s C Cqr s L1, p/Cqr, q/Cpr, r/s
L6 CCpqCCCprsCCqrs S4, S5 detachment
S6 CC Cpq C CCprs CCqrs CC t CCprs C Cpq C t CCqrs L5, p/Cpq, q/CCprs, r/CCqrs, s/t
L7 CCtCCprsCCpqCtCCqrs L6, S6 detachment
S7 CC p CNpq CC CNpq r C p r L1, q/CNpq
L9 CCCNpqrCpr L3, S7 detachment
S8 CCCN p q CCCNpppCCqpp C p CCCNpppCCqpp L9, r/CCCNpppCCqpp
S9 CC Np q CCC Np p p CC q p p L6, p/Np, r/p, s/p
L10 CpCCCNpppCCqpp S8, S9 detachment
S10 C CCNppp CCCN CCNppp CCNppp CCNppp CC q CCNppp CCNppp L10, p/CCNppp
S11 CCCNCCNpppCCNpppCCNpppCCqCCNpppCCNppp L2, S10 detachment
S12 CCN CCNppp CCNppp CCNppp L2, p/CCNppp
L11 CCqCCNpppCCNppp S11, S12 detachment
S13 CCCN t CCNppp CCNppp C t CCNppp L9, p/t, q/CCNppp, r/CCNppp
S14 CC Nt CCNppp CCNppp L11 q/Nt
L12 CtCCNppp S14, S13 detachment
S15 CC t CC Np p p CC Np q C t CC q p p L7, p/Np, r/p, s/p
L13 CCNpqCtCCqpp L12, S15 detachment
S16 CC CNpq CtCCqpp CC CtCCqpp r C CNpq r L1 p/CNpq, q/CtCCqpp
L14 CCCtCCqpprCCNpqr L13, S16 detachment
S17 CCC NCCqpp CC q p p CCqpp CCN p q CCqpp L14, t/NCCqpp, r/CCqpp
S18 CCN CCqpp CCqpp CCqpp L2, p/CCqpp
L15 CCNpqCCqpp S18, S17 detachment
S19 CCCN p q CCqpp C p CCqpp L9, r/CCqpp
L17 CpCCqpp L15, S19 detachment
S20 CC q C CNpq q CC p CNpq C q C p q L5, p/q, q/CNpq, r/q, s/p
S21 C q CC Np q q L17, p/q, q/Np
S22 CCpCNpqCqCpq S20, S21 detachment
L18 CqCpq L3, S22 detachment
</code></pre>
<p>Now, since we have (q->(p->q)) as a theorem, by uniformly substituting thesis L1 for q, and L2 for p, since L1 holds as a thesis, we can obtain that (L1->L2) as a thesis, or equivalently, a theorem. So, consequently, one completely prove any thesis by any other thesis <em>within</em> the system using just detachment via a proof of the type "x, (x->y), y". What then does "independence" of logical axioms mean precisely?</p>
| Doug Spoonwood | 11,300 | <p>Let's call a theorem or axiom A dependent if given a set Z of axioms and primitive inference rules, then a proof of A can get written from Z. An axiom is independent of Z if given Z, then a proof of A cannot get written. A set of axioms S for system Y comes as independent if each subsystem of Y with just one less axiom under the same rule(s) of inference as Y cannot prove the axiom omitted.</p>
|
2,621,871 | <p>My lecture notes say that</p>
<blockquote>
<p>A topological space is an ordered pair <span class="math-container">$(X, \tau)$</span>, where <span class="math-container">$X$</span> is a set and <span class="math-container">$\tau$</span> is a collection of subsets of <span class="math-container">$X$</span> that satisfy</p>
<ol>
<li><p><span class="math-container">$X$</span> and <span class="math-container">$\emptyset$</span> belong to <span class="math-container">$\tau$</span>.</p>
</li>
<li><p>Any finite or infinite union of the elements of <span class="math-container">$\tau$</span> belong to <span class="math-container">$\tau$</span>.</p>
</li>
<li><p>Any finite intersection of the elements of <span class="math-container">$\tau$</span> belong to <span class="math-container">$\tau$</span>.</p>
</li>
</ol>
<p><strong>The elements of <span class="math-container">$\tau$</span> are called open sets</strong> and <span class="math-container">$\tau$</span> is said to be a topology on <span class="math-container">$X$</span></p>
</blockquote>
<p>I also know that for a metric space <span class="math-container">$(X,d)$</span>, a set <span class="math-container">$M \subset X$</span> is open if for all <span class="math-container">$x \in M$</span>, <span class="math-container">$\exists \varepsilon >0$</span> such that <span class="math-container">$\mathbb{B}_{\varepsilon}(x) \subset M$</span>.</p>
<p>However, I am slightly confused about how the two definitions of openness match up...</p>
<p>Which is the more general version of openness? Is there a proof to show the more general idea of openness implies the other or to show they are equivalent?</p>
| Cubic Bear | 378,597 | <p>It is an exercise to prove. For $x\in A\cap B$, where $A,B$ are two open sets, there exist $\gamma, \delta>0$ such that $\mathbb{B}_\gamma(x)\subseteq A$ and $\mathbb{B}_{\delta }(x)\subseteq B$; then take $\epsilon=\min (\gamma, \delta)$, so that
$\mathbb{B}_{\epsilon }(x)\subseteq A \cap B$. This proves that $A\cap B$ is open. The argument for arbitrary unions is even easier. </p>
<p>Moreover, there are several sets of axioms which can define a topology, such as the open sets axioms, closed sets axioms, neighborhood axioms, neighborhood basis axioms, and closure axioms; see the <a href="https://en.wikipedia.org/wiki/Topological_space" rel="nofollow noreferrer">wiki</a>. They determine each other. </p>
<p>And in fact, a metric space gives a topology via the neighborhood basis axioms. </p>
|
3,481,661 | <p>Let <span class="math-container">$X$</span>, <span class="math-container">$Y$</span> be Banach spaces, and let <span class="math-container">$T:X\rightarrow Y$</span> be an operator that is continuous when <span class="math-container">$X$</span> is endowed with the weak topology and <span class="math-container">$Y$</span> with the norm topology. I am trying to show that <span class="math-container">$T$</span> is of finite rank, i.e., that its range is a finite-dimensional vector space.</p>
<p>The first thing I tried was taking a basis in <span class="math-container">$X$</span> and looking at the image but I do not think the assumptions say anything useful about this.
I also saw on some other post the same question for Hilbert spaces in which they looked at the inverse image of the (closed) unit ball, but I am not sure how they arrived at that idea and how I can use this. </p>
<p>A subset <span class="math-container">$U$</span> of <span class="math-container">$X$</span> is open if there exist a <span class="math-container">$\rho>0$</span> and <span class="math-container">$\omega_1,\ldots,\omega_n\in X^*$</span> the dual space of <span class="math-container">$X$</span> such that <span class="math-container">$\{y\in X: \lvert\omega_i(x-y)\rvert<\rho\}\subset U $</span> for all <span class="math-container">$\omega_i$</span>.</p>
| Aphelli | 556,825 | <p>We know that the inverse image of <span class="math-container">$B_Y(0,1)$</span> contains an open subset, so there are linear forms <span class="math-container">$u_1,\ldots,u_n$</span> on <span class="math-container">$X$</span> and some <span class="math-container">$\epsilon > 0$</span> such that for any <span class="math-container">$x \in X$</span>, if <span class="math-container">$|u_i(x)| < \epsilon$</span>, then <span class="math-container">$\|Tx\| < 1$</span>. </p>
<p>In particular, let <span class="math-container">$K$</span> be the intersection of the kernels of the <span class="math-container">$u_i$</span>. Then <span class="math-container">$T_{|K}$</span> is a linear map that is bounded (as in, a bounded function), so is zero. </p>
<p>So, it remains to show that if <span class="math-container">$T:X \rightarrow Y$</span> is linear and vanishes on an intersection of finitely many hyperplanes, then <span class="math-container">$T$</span> has finite rank. </p>
<p>We prove it by assuming that <span class="math-container">$T$</span> vanishes on the intersections of the kernels of the <span class="math-container">$v_i$</span> (<span class="math-container">$1 \leq i \leq m$</span>), <span class="math-container">$v_i \in X^*$</span>, the algebraic dual. </p>
<p>We can suppose that <span class="math-container">$m$</span> is minimal with this property, i.e. that for any <span class="math-container">$i$</span>, <span class="math-container">$T$</span> doesn’t vanish on the intersection of the kernels of the <span class="math-container">$v_j$</span>, <span class="math-container">$j \neq i$</span>. </p>
<p>Now, for each <span class="math-container">$1 \leq i \leq m$</span>, let <span class="math-container">$y_i$</span> be such that <span class="math-container">$v_j(y_i)=0$</span> (<span class="math-container">$j \neq i$</span>) and <span class="math-container">$T(y_i) \neq 0$</span>, thus <span class="math-container">$v_i(y_i)\neq 0$</span> (up to scaling by a constant factor we may assume <span class="math-container">$v_i(y_i)=1$</span>). </p>
<p>Let now <span class="math-container">$x \in X$</span>. Then it is easy to check that <span class="math-container">$x-\sum_{i=1}^m{v_i(x)y_i}$</span> is in the kernel of every <span class="math-container">$v_i$</span>, so in the kernel of <span class="math-container">$T$</span>, thus <span class="math-container">$T(x)$</span> is a linear combination of <span class="math-container">$T(y_1),\ldots,T(y_n)$</span> and we are done. </p>
|
2,548,353 | <p><strong>Find the number of $4\times4$ matrices such that $|a_{ij}| = 1 \forall i,j\in[1,4]$ , and sum of every row and column is zero.</strong></p>
<p>I tried 'counting' the number of matrices that satisfy the above conditions, that is, elements are $1$ or $-1$ and sum of every row and column is zero.</p>
<p>In the attempt to generate a recursion I started off with a $2\times2$ matrix, for which case the answer is $2$. (First element is 1 or -1, other elements are decided accordingly)
However, this method becomes cumbersome and mathematically disappointing for $3\times3$ and larger matrices.</p>
<p>Could someone please explain the method, or post a solution to the problem?
Is it possible to generalise the result to an $n\times n$ matrix? </p>
| Zach Teitler | 343,280 | <p>The first row has two $1$s and two $-1$s. There are $\binom{4}{2} = 6$ ways they can be arranged in the first row. We'll count the number of matrices with first row $(1,1,-1,-1)$, then multiply by $6$ to account for all the other arrangements of the first row.</p>
<p>The second row can be one of the three following things: $(1,1,-1,-1)$, or $(-1,-1,1,1)$, or $(1,-1,1,-1)$. The third case accounts for $4$ possibilities: the first two columns might be switched, or the last two columns might be switched. We'll count the number of matrices in each of these three cases (multiplying the third case's count by $4$, to account for the switches).</p>
<p><em>Case 1:</em> The matrix looks like
$$
\begin{pmatrix}
1 & 1 & -1 & -1 \\
1 & 1 & -1 & -1 \\
* & * & * & * \\
* & * & * & *
\end{pmatrix}
$$
There is only one possible matrix:
$$
\begin{pmatrix}
1 & 1 & -1 & -1 \\
1 & 1 & -1 & -1 \\
-1 & -1 & 1 & 1 \\
-1 & -1 & 1 & 1 \\
\end{pmatrix}
$$</p>
<p><em>Case 2:</em> The matrix looks like
$$
\begin{pmatrix}
1 & 1 & -1 & -1 \\
-1 & -1 & 1 & 1 \\
* & * & * & * \\
* & * & * & *
\end{pmatrix}
$$
In the last two rows, there is precisely one $-1$ in each column and two $-1$s in each row. There are $\binom{4}{2}=6$ ways to choose the locations of two $-1$s in the third row; then the fourth row is determined (it has $-1$s in the complementary positions).</p>
<p><em>Case 3:</em> The matrix looks like
$$
\begin{pmatrix}
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\
* & * & * & * \\
* & * & * & *
\end{pmatrix}
$$
The first and last columns are determined:
$$
\begin{pmatrix}
1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\
-1 & * & * & 1 \\
-1 & * & * & 1
\end{pmatrix}
$$
There are two solutions: the remaining block can be $\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$ or the opposite. That gives two solutions when the second row is as displayed, but because of the possibility of switching the first two or last two columns, we count $8$ solutions.</p>
<p><em>Subtotal:</em> With this first row, we have $1+6+8=15$ matrices.</p>
<p><em>Total:</em> There are $6$ equivalent arrangements of the first row, so we have $6 \cdot 15 = 90$ matrices in total.</p>
<p>It seems unlikely that this kind of approach could generalize to larger matrices.</p>
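<p>(For reassurance, the total can be checked by brute force over the $6^4$ choices of rows with two $1$s and two $-1$s each; this sketch is mine, not part of the argument above.)</p>

```python
from itertools import product, combinations

# All rows of length 4 with two +1s and two -1s (each such row sums to zero).
rows = []
for pos in combinations(range(4), 2):
    row = [-1] * 4
    for p in pos:
        row[p] = 1
    rows.append(tuple(row))

# Keep only the matrices whose columns also sum to zero.
count = sum(
    1
    for m in product(rows, repeat=4)
    if all(sum(m[r][c] for r in range(4)) == 0 for c in range(4))
)
print(count)  # 90
```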
|
1,528,507 | <p>So I started reading Conjecture and Proof by Miklos Laczkovich and one of the first proofs he provides is that of the irrationality of the square root of two. I am aware there are alternative proofs (one of which is geometric and another that uses the fundamental theorem of arithmetic) but I have a few questions about this one.</p>
<p>The Proof:</p>
<p>Suppose $\sqrt2 = p/q$, where $p, q$ are positive integers and let $q$ be the smallest such number. Then $2q^2=p^2$ and thus $p^2$ is even. Then $p$ itself must be even; let $p=2p_1$. Then $2q^2 = (2p_1)^2=4p_{1}^{2}$, so $q^2 = 2p_{1}^{2}$ and thus $q$ is also even. If $q=2q_1$ then $\sqrt{2} =p/q=p_{1}/q_{1}$. Since $q_{1}<q$, this contradicts the minimality of $q$. </p>
<p>My questions:</p>
<p>Why do we let $q$ be the smallest such number? I understand that this creates the contradiction at the end of the proof but I don't know why this is a fundamental need. </p>
<p>Also, if we are picking from the set of positive integers and $q$ is the smallest, wouldn't that make $q=1$ if we exclude $0$ from the set, and $q=0$ if we do include $0$ in the set? So in the first case $\sqrt{2}=p$ and in the second case $\sqrt{2}$ is undefined. I am unsure of where I am going wrong here.</p>
| user247327 | 247,327 | <p>Saying "q is the smallest such number" is the same as saying the fraction is "reduced to lowest terms". Essentially the proof is showing that you <strong>always</strong> have factors of 2 in both numerator and denominator so that a fraction equal to square root of 2 <strong>cannot</strong> be reduced to lowest terms, a contradiction.</p>
|
1,528,507 | <p>So I started reading Conjecture and Proof by Miklos Laczkovich and one of the first proofs he provides is that of the irrationality of the square root of two. I am aware there are alternative proofs (one of which is geometric and another that uses the fundamental theorem of arithmetic) but I have a few questions about this one.</p>
<p>The Proof:</p>
<p>Suppose $\sqrt2 = p/q$, where $p, q$ are positive integers and let $q$ be the smallest such number. Then $2q^2=p^2$ and thus $p^2$ is even. Then $p$ itself must be even; let $p=2p_1$. Then $2q^2 = (2p_1)^2=4p_{1}^{2}$, so $q^2 = 2p_{1}^{2}$ and thus $q$ is also even. If $q=2q_1$ then $\sqrt{2} =p/q=p_{1}/q_{1}$. Since $q_{1}<q$, this contradicts the minimality of $q$. </p>
<p>My questions:</p>
<p>Why do we let $q$ be the smallest such number? I understand that this creates the contradiction at the end of the proof but I don't know why this is a fundamental need. </p>
<p>Also, if we are picking from the set of positive integers and $q$ is the smallest, wouldn't that make $q=1$ if we exclude $0$ from the set, and $q=0$ if we do include $0$ in the set? So in the first case $\sqrt{2}=p$ and in the second case $\sqrt{2}$ is undefined. I am unsure of where I am going wrong here.</p>
| hmakholm left over Monica | 14,366 | <p>The reason for assuming that $q$ is minimal IS that without this assumption we wouldn't reach a contradiction at the end, so the proof wouldn't work.</p>
<p>Writing a proof is not a mechanical process where each step follows with necessity from what comes before it. Instead, very often the reason to do something is that it will enable a step <em>later</em> in the proof to work.</p>
<p>It's like explaining the right path through a maze: The answer to "why should I turn right here?" is not of the form "because we're following a rule that says we must turn right after we have turned left a prime number of times" (or whatever) -- the answer is, "because that's the way that leads to the exit!"</p>
<p>The person who <em>created</em> the walkthrough of the maze has probably explored more of it than the instructions say, lots of blind ends and so forth. But he doesn't chronicle all of those false starts in his final explanation of how to get through.</p>
<p>A proof is like a cheat sheet for a maze: All that's required of it is that it leads to the exit; it doesn't need to catalogue all of the blind alleys, or all of the possible steps that <em>don't</em> turn out to help us prove our goal.</p>
<hr>
<blockquote>
<p>wouldn't that make $q=1$ if our set is all positive integers if we exlude $0$ in the set, and $q=0$ if we do include $0$ in the set?</p>
</blockquote>
<p>No. The proof step doesn't say "let $q$ be the smallest integer", but instead "let $q$ be the smallest integer <em>such that there is a $p$</em> with $p/q=\sqrt2$". Even though at this point in the proof we're assuming that <em>some</em> $p/q$ equals $\sqrt 2$, so there is <em>some</em> $q$ we can choose, it doesn't immediately follow from this that $1$ is among the possible $q$s we can choose.</p>
|
172,119 | <p>For a random graph G(n,p) what is the expected number of connected components? What is the probability distribution of this value?
I'm specially interested in what happens for small values of p, before the giant component emerges.</p>
| apg | 90,619 | <p>See Mathew Kahle and Elizabeth Meckes <a href="https://arxiv.org/abs/1009.4130" rel="nofollow noreferrer">"Limit theorems for Betti numbers of random simplicial complexes"</a></p>
<p>They say that the distribution is Gaussian in <span class="math-container">$G(n,p)$</span> with <span class="math-container">$n \to \infty$</span>, but only state the theorem for first homology or higher i.e. not for connected components. Nevertheless, the number of components converges via a central limit theorem to the normal distribution.</p>
<p><strong>In fact, asymptotically almost all nodes are isolated when <span class="math-container">$np \to 0$</span> as <span class="math-container">$n \to \infty$</span>.</strong></p>
<p>With <span class="math-container">$np \to c$</span> with <span class="math-container">$0 < c < 1$</span>, we have normal isolated nodes, normal 2-cliques, and so on. Though they intersect, this is vanishingly small as <span class="math-container">$n \to \infty$</span>, so the branching processes are asymptotically independent. How many branching processes survive until exactly displaying cardinality 2 (i.e. one child)? Then count three, four etc. All normal, but expectation and variance need to be calculated. See the comments to the OP.</p>
<p>When critical, also normal. When there is a giant component, as long as <span class="math-container">$n$</span> is large, then the expected number of e.g. isolated vertices is large, so asymptotically normal. All we need is an asymptotically infinite number of connected components I think. See the proof in that paper.</p>
<p>If, for example, <span class="math-container">$np = c \log(n)$</span> with <span class="math-container">$c<1$</span>, then the number of components is the number of isolated nodes plus one for the giant component itself. This is because each node has binomial degree with expectation <span class="math-container">$np$</span>, and in the limit is isolated with probability <span class="math-container">$e^{-np}$</span>. So there is a coin toss for each node, i.e. take <span class="math-container">$n$</span> chances of being isolated with this probability, giving <span class="math-container">$\mathbb{E}I_{0} = n e^{-np}$</span>.</p>
<p>When the graph is a.a.s. connected, there is only one component. But this is just normal, with zero variance and unit mean. In fact, all the Betti numbers of the corresponding random clique complex are normal. The main theorems in that paper give the normality, and formulas for the expectation.</p>
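<p>To experiment with these regimes numerically, the component count of a <span class="math-container">$G(n,p)$</span> sample can be computed with a small union-find sketch (my code, not from the cited paper; the parameters are illustrative):</p>

```python
import random

def count_components(n, p, rng):
    """Number of connected components of one G(n, p) sample, via union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:           # include edge {i, j} with probability p
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    components -= 1
    return components

rng = random.Random(0)
n = 200
for p in (0.0, 1 / n, 1.0):
    samples = [count_components(n, p, rng) for _ in range(20)]
    print(p, sum(samples) / len(samples))
```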
|
2,232,952 | <p>Could you please help me solve these questions?</p>
<p><a href="https://i.stack.imgur.com/iRgCE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRgCE.png" alt="enter image description here" /></a></p>
<p>Consider the bases S={<span class="math-container">$u_1, u_2, u_3$</span>}, and T={<span class="math-container">$v_1, v_2, v_3$</span>}, with <br/></p>
<p><span class="math-container">$u_1$</span>=[-3, 0, -3], <span class="math-container">$u_2$</span>=[-3, 2, -1], <span class="math-container">$u_3$</span>=[1, 6, -1], <br/>
<span class="math-container">$v_1$</span>=[-6, -6, 0],
<span class="math-container">$v_2$</span>=[-2, -6, 4], <span class="math-container">$v_3$</span>=[-2, -3, 7] <br/>
(a) Find the transition matrix from <span class="math-container">$S$</span> to <span class="math-container">$T$</span> <br/>
(b) Using the result in (a), compute the coordinate vector <span class="math-container">$[w]_T$</span> where <span class="math-container">$w$</span>=[-5, 8, -5]</p>
| dantopa | 206,581 | <p>As noted by @Emilio Novati, the key idea is to connect to the <em>standard basis</em>. </p>
<p>Vectors will be colored according to the basis membership, and named chromatically:
$$
\color{blue}{\mathbf{B}\ (standard)}, \qquad
\color{red}{\mathbf{R}\ (u)}, \qquad
\color{green}{\mathbf{G}\ (v)}.
$$
$$
\mathbf{R}_{\color{red}{R}\to \color{blue}{B}}=
\color{black}{\left[
\begin{array}{rrr}
-3 & -3 & 1 \\
0 & 2 & 6 \\
-3 & -1 & -1 \\
\end{array}
\right]}
$$
Example: the second vector in the $\color{red}{\mathbf{R}}$ basis has the following coordinates in the $\color{blue}{standard}$ basis
$$
\mathbf{R}_{\color{red}{R}\to \color{blue}{B}}
\color{red}{\left[
\begin{array}{c}
0 \\
1 \\
0
\end{array}
\right]}
=
\color{blue}{\left[
\begin{array}{r}
-3 \\
2 \\
-1
\end{array}
\right]}
$$
You may think of the matrix $\mathbf{R}_{\color{red}{R}\to \color{blue}{B}}$ as an operator which takes a $\color{red}{red}$ vector and returns a $\color{blue}{blue}$ vector. </p>
<p>The inverse matrix is a map which connects vectors in the $\color{blue}{standard}$ basis to vectors in the $\color{red}{\mathbf{R}}$ basis:
$$
\mathbf{R}^{-1}_{\color{blue}{B}\to \color{red}{R}}
%
\color{blue}{\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right]}
=
%
\frac{1}{24}
\left[
\begin{array}{rrr}
2 & -2 & -10 \\
-9 & 3 & 9 \\
3 & 3 & -3 \\
\end{array}
\right]
%
\color{blue}{\left[
\begin{array}{c}
0 \\
0 \\
1
\end{array}
\right]}
=
%
\color{red}{\frac{1}{24}
\left[
\begin{array}{r}
-10 \\
9 \\
-3
\end{array}
\right]}
$$</p>
<p>For the $\color{green}{\mathbf{G}}$ basis, with the vectors $v_1, v_2, v_3$ as columns, the maps are
$$
\mathbf{G}_{\color{green}{G}\to \color{blue}{B}}=
\color{black}{\left[
\begin{array}{rrr}
-6 & -2 & -2 \\
-6 & -6 & -3 \\
0 & 4 & 7 \\
\end{array}
\right]}, \qquad
%
\mathbf{G}^{-1}_{\color{blue}{B}\to \color{green}{G}}=
\frac{1}{24}
\color{black}{\left[
\begin{array}{rrr}
-5 & 1 & -1 \\
7 & -7 & -1 \\
-4 & 4 & 4 \\
\end{array}
\right]}
$$</p>
<p><strong>Transition from $S$ to $T$</strong></p>
<p>We connect all bases through the hub of the $\color{blue}{standard}$ basis. Start with a vector in the $\color{red}{\mathbf{R}}$ basis, map that to a vector in the $\color{blue}{standard}$ basis, then map that to a vector in the $\color{green}{\mathbf{G}}$ basis:
$$
\color{red}{
\left[
\begin{array}{c}
x_{1} \\
y_{1} \\
z_{1}
\end{array}
\right]}
%
\quad \Longrightarrow \quad
%
\color{blue}{
\left[
\begin{array}{c}
x_{2} \\
y_{2} \\
z_{2}
\end{array}
\right]}
%
\quad \Longrightarrow \quad
%
\color{green}{
\left[
\begin{array}{c}
x_{3} \\
y_{3} \\
z_{3}
\end{array}
\right]}
%
$$
The formal steps are
$$
\begin{align}
%
\mathbf{R}_{\color{red}{R}\to \color{blue}{B}}
\color{red}{
\left[
\begin{array}{c}
x_{1} \\
y_{1} \\
z_{1}
\end{array}
\right]}
%
&=
%
\color{blue}{
\left[
\begin{array}{c}
x_{2} \\
y_{2} \\
z_{2}
\end{array}
\right]} \\[3pt]
%
\mathbf{G}^{-1}_{\color{blue}{B}\to \color{green}{G}}
\left(
\mathbf{R}_{\color{red}{R}\to \color{blue}{B}}
\color{red}{
\left[
\begin{array}{c}
x_{1} \\
y_{1} \\
z_{1}
\end{array}
\right]} \right)
%
&=
%
\mathbf{G}^{-1}_{\color{blue}{B}\to \color{green}{G}}
\color{blue}{
\left[
\begin{array}{c}
x_{2} \\
y_{2} \\
z_{2}
\end{array}
\right]}
%
=
%
\color{green}{
\left[
\begin{array}{c}
x_{3} \\
y_{3} \\
z_{3}
\end{array}
\right]}
%
\end{align}
$$
The operator which maps $\color{red}{red}$ vectors to $\color{green}{green}$ is
$$
\mathbf{X}_{\color{red}{R}\to \color{green}{G}} =
\mathbf{G}^{-1}_{\color{blue}{B}\to \color{green}{G}}
\mathbf{R}_{\color{red}{R}\to \color{blue}{B}} =
%
\frac{1}{12}
\left(
\begin{array}{rrr}
9 & 9 & 1 \\
-9 & -17 & -17 \\
0 & 8 & 8 \\
\end{array}
\right)%
$$
<p><strong>Computation</strong></p>
<p>Turn a $\color{red}{red}$ vector into a $\color{green}{green}$ vector:
$$
\mathbf{X}_{\color{red}{R}\to \color{green}{G}}
\color{red}{
\left[
\begin{array}{r}
-5 \\
8 \\
-5
\end{array}
\right]}
=
\color{green}{
\frac{1}{6}
\left[
\begin{array}{r}
11 \\
-3 \\
12
\end{array}
\right]}
$$</p>
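<p>As an exact-arithmetic cross-check (a sketch of mine: it treats $w$ as coordinates in the $S$ basis and arranges the basis vectors as matrix columns):</p>

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over Fractions."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]          # scale pivot row to 1
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [row[n] for row in M]

# Basis vectors as matrix columns: u_i for S, v_i for T.
R = [[-3, -3, 1], [0, 2, 6], [-3, -1, -1]]
G = [[-6, -2, -2], [-6, -6, -3], [0, 4, 7]]
w_S = [-5, 8, -5]

# Standard coordinates of w, then solve G [w]_T = w_std.
w_std = [sum(R[i][j] * w_S[j] for j in range(3)) for i in range(3)]
w_T = solve(G, w_std)
print(w_T)  # [Fraction(11, 6), Fraction(-1, 2), Fraction(2, 1)]
```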
|
2,463,421 | <p>The question is:</p>
<p>Nadir Airways offers three types of tickets on their Boston-New York flights. First-class tickets are \$140, second-class tickets are \$110, and stand-by tickets are \$78. If 69 passengers pay a total of \$6548 for their tickets on a particular flight, how many of each type of ticket
were sold?</p>
<p>Now I set up my equation as </p>
<p>$140x+110y+78z=6548$</p>
<p>But I'm confused how to go from here. I know I need to find the GCD in order to evaluate that the equation has a solution and then set up my formulas for
$x=x_{0}+\frac{b}{d}(n)$ and $y=y_{0}-\frac{a}{d}(n)$</p>
<p>I've solved Diophantine equations before but only in the form $ax+by=c$. How do I continue from here? I'm not interested in the solution, I can do that by myself, but I would like to know the process for solving these types of Diophantine equations. </p>
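<p>(For reference, any answer can be sanity-checked by brute force over the passenger counts, using the second equation $x+y+z=69$ from the passenger total; this is a quick sketch, not the Diophantine process I am asking about.)</p>

```python
# Brute-force check: x first-class, y second-class, z stand-by tickets,
# with x + y + z = 69 passengers and total fare 6548.
solutions = [
    (x, y, 69 - x - y)
    for x in range(70)
    for y in range(70 - x)
    if 140 * x + 110 * y + 78 * (69 - x - y) == 6548
]
print(solutions)  # [(9, 19, 41)]
```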
| Donald Splutterwit | 404,247 | <p>i) is fine.</p>
<p>ii) There are $15$ possible values for the first digit, $15$ for the second digit, $14$ for the third and $13$ for the last digit. Giving $ 15 \times 15 \times 14 \times 13$.</p>
<p>iii) There are $15$ possible values for the first digit, $14$ for the second digit, $13$ for the third and $1$ for the last digit. Giving $ 15 \times 14 \times 13 \times 1$.</p>
<p>iv) The first digit cannot be zero so there are $ \binom {15}{4} $ ways to choose $4$ digits , now put them in ascending order. </p>
|
382,504 | <p>Let <span class="math-container">$n \geq 2$</span> be an integer, and let <span class="math-container">$f(x) = \prod\limits_{k = 1}^n(x - \alpha_k)$</span> be a monic irreducible polynomial in <span class="math-container">$\mathbb Z[x]$</span>, with the property that <span class="math-container">$f(-\alpha_k) \neq 0$</span> for any <span class="math-container">$k = 1, 2, \ldots, n$</span>.</p>
<p>Is there anything meaningful that we can say about <span class="math-container">$\operatorname{Res}(f(x), f(-x))$</span>, the resultant of <span class="math-container">$f(x)$</span> and <span class="math-container">$f(-x)$</span>?</p>
<p>To rephrase, what can be said about the value of the product <span class="math-container">$\prod\limits_{k = 1}^nf(-\alpha_k)$</span>? Perhaps, it can be expressed somehow through <span class="math-container">$n$</span> and the discriminant of <span class="math-container">$f(x)$</span>?</p>
<p>One thing that I can note is that <span class="math-container">$f(-\alpha_1), \ldots, f(-\alpha_k)$</span> are algebraic conjugates, which means that their product is equal to the norm of <span class="math-container">$f(-\alpha_1)$</span>. Thus, up to a sign, the product <span class="math-container">$\prod\limits_{k = 1}^nf(-\alpha_k)$</span> is equal to the constant coefficient of the minimal polynomial of <span class="math-container">$f(-\alpha_1)$</span>. But what is this constant coefficient is a mystery.</p>
| Richard Stanley | 2,807 | <p>We have <span class="math-container">$\mathrm{Res}(f(x),f(-x))=2^n a_n P(\alpha)^2$</span>, where
<span class="math-container">$P(\alpha)=\prod_{1\leq i<j\leq n}(\alpha_i+\alpha_j)$</span>. By
e.g. the case <span class="math-container">$d=2$</span> of Exercise 7.30 in <em>Enumerative Combinatorics</em>,
vol. 2, we have <span class="math-container">$P(\alpha)=s_{n-1,n-2,\dots,1}(\alpha)$</span>, where
<span class="math-container">$s_{n-1,n-2,\dots,1}$</span> is a Schur function. By the dual Jacobi-Trudi
identity (Corollary 7.16.2 of the above reference), we get
<span class="math-container">$$ \mathrm{Res}(f(x),f(-x)) = 2^n a_n
\left( \det[a_{n-2i+j}]_{i,j=1}^{n-1}\right)^2, $$</span>
where we set <span class="math-container">$a_0=1$</span> and <span class="math-container">$a_{-k}=0$</span> for <span class="math-container">$k>0$</span>. For instance, when
<span class="math-container">$n=3$</span> the determinant is
<span class="math-container">$$ \left| \begin{array}{cc} a_2 & a_3\\ 1 & a_1\end{array} \right|
=a_2a_1-a_3. $$</span></p>
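<p>As a quick numeric sanity check of the <span class="math-container">$n=3$</span> case (a sketch; it assumes the convention <span class="math-container">$\operatorname{Res}(f,g)=\det$</span> of the Sylvester matrix and uses the illustrative coefficients <span class="math-container">$a_1=2$</span>, <span class="math-container">$a_2=3$</span>, <span class="math-container">$a_3=5$</span>):</p>

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    rows = []
    for i in range(n):                    # n shifted copies of f
        rows.append([0] * i + list(f) + [0] * (n - 1 - i))
    for i in range(m):                    # m shifted copies of g
        rows.append([0] * i + list(g) + [0] * (m - 1 - i))
    return rows

def det(M):
    """Determinant by Gaussian elimination over exact Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d                        # row swap flips the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            M[r] = [a - factor * b for a, b in zip(M[r], M[c])]
    return d

# f(x) = x^3 + 2x^2 + 3x + 5, so f(-x) = -x^3 + 2x^2 - 3x + 5.
a1, a2, a3 = 2, 3, 5
res = det(sylvester([1, a1, a2, a3], [-1, a1, -a2, a3]))
formula = 2 ** 3 * a3 * (a2 * a1 - a3) ** 2   # 2^n a_n (a_2 a_1 - a_3)^2, n = 3
print(res, formula)  # 40 40
```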
|
2,215,087 | <p>I'm trying to show that $\mathbb{Z}[\sqrt{11}]$ is Euclidean with respect to the function $a+b\sqrt{11} \mapsto|N(a+b\sqrt{11})| = | a^2 -11b^2|$</p>
<p>By multiplicativity, it suffices to show that $\forall x \in \mathbb{Q}(\sqrt{11}) \exists n \in \mathbb{Z}(\sqrt{11}):|N(n-x)| < 1$</p>
<p>For the analogous statement for $\mathbb Z [\sqrt6]$, it worked by considering different cases, so I tried to do the same thing here. Here is what I did so far:</p>
<p>Let $x+y\sqrt{11} \in \mathbb Q (\sqrt{11})$</p>
<p><strong>Case 1:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \frac{1}{\sqrt{11}}$, then we can choose such a $b$ and a $a \in \mathbb Z$ s.t. $|x-a| \leq \frac{1}{2}$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on suppose $\forall b \in \mathbb Z: |y-b| > \frac{1}{\sqrt{11}}$</p>
<p><strong>Case 2:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \sqrt{\frac{5}{44}}$ Then we have $1 < 11 (y-b)^2 < \frac{5}{4}$, so we can choose $a \in \mathbb Z$ such that $\frac{1}{2} \leq |x-a| \leq 1$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on suppose $\forall b \in \mathbb Z: |y-b| > \sqrt{\frac{5}{44}}$</p>
<p><strong>Case 3:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \sqrt{\frac{2}{11}}$ Then we can choose $a \in \mathbb Z $ s.t. $1 \leq |x-a| \leq \frac{3}{2}$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on, we may suppose that $|y-b| > \sqrt{\frac{2}{11}}$.</p>
<p>This is where I'm stuck. I tried choosing $b \in \mathbb Z$ s.t. $\frac{1}{2} \geq |y-b| > \sqrt{\frac{2}{11}}$, but then I run into problems, whether I choose $a \in \mathbb Z$ s.t. $1 \leq |x-a| \leq \frac{3}{2}$ or s.t. $ \frac{3}{2} \leq |x-a| \leq 2$</p>
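<p>(For what it's worth, the existence of such approximations can be sanity-checked numerically; the search window and sample points in this sketch are my own choices.)</p>

```python
from fractions import Fraction
from itertools import product

def norm_within_one(x, y, window=6):
    """Search integers a, b near (x, y) with |(x-a)^2 - 11(y-b)^2| < 1."""
    for da, db in product(range(-window, window + 1), repeat=2):
        a, b = round(x) + da, round(y) + db
        if abs((x - a) ** 2 - 11 * (y - b) ** 2) < 1:
            return (a, b)
    return None

# A couple of sample points of the unit square (exact rational arithmetic):
print(norm_within_one(Fraction(1, 2), Fraction(1, 2)))
print(norm_within_one(Fraction(0), Fraction(1, 2)))   # needs a as far out as 5
```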
| Gerry Myerson | 8,269 | <p>I believe the result is proved in Oppenheim, Quadratic fields with and without Euclid's algorithm, Math Annalen 109 (1934) 349-352, and I think this paper is freely available <a href="https://eudml.org/doc/159685" rel="noreferrer">here</a>. The proof is essentially the first half of page 350, together with preliminary observations on page 349. </p>
|
136,363 | <p>Consider the following case:</p>
<pre><code>(a^3*b) //. {a^2 -> c, a*b -> d}
</code></pre>
<p>instead of <code>c d</code> the output is:</p>
<pre><code>(*a^3*b*)
</code></pre>
<p>How can I get what I want?</p>
| mikado | 36,788 | <p>In general, you want to make the left hand sides of your rules as simple as possible. A simple way of doing what you want is</p>
<pre><code>(a^3*b) //. {a -> Sqrt[c], b -> d/a}
(* c d *)
</code></pre>
|
3,769,843 | <p>This is a multiple-choice question from my textbook</p>
<p>Let <span class="math-container">$A=\{1,2,3\}$</span>. The no. of relations containing <span class="math-container">$(1,2)$</span> and <span class="math-container">$(1,3)$</span> which are reflexive and Symmetric but not transitive is</p>
<p>(A) <span class="math-container">$1$</span></p>
<p>(B) <span class="math-container">$2$</span></p>
<p>(C) <span class="math-container">$3$</span></p>
<p>(D) <span class="math-container">$4$</span></p>
<hr />
<p><em><strong>My Approach:</strong></em>
<span class="math-container">$A=\{1,2,3\}$</span></p>
<p>Relation <span class="math-container">$R$</span> must contain <span class="math-container">$(1,2)$</span> and <span class="math-container">$(1,3)$</span></p>
<p>For <span class="math-container">$R$</span> to be Reflexive, it must contain <span class="math-container">$(2,2)$</span> and <span class="math-container">$(1,1)$</span></p>
<p>For <span class="math-container">$R$</span> to be symmetric, it must contain <span class="math-container">$(2,1)$</span> and <span class="math-container">$(3,1)$</span></p>
<p>For <span class="math-container">$R$</span> not to be Transitive, it must not contain <span class="math-container">$(2,3)$</span> and <span class="math-container">$(3,2)$</span></p>
<p>Therefore, <span class="math-container">$R=\{(1,2),(1,3),(2,2),(1,1),(3,1),(2,1)\}$</span></p>
<p>Any other addition to <span class="math-container">$R$</span> will not satisfy the stated conditions.</p>
<p>Hence, option <span class="math-container">$A$</span> is correct</p>
<p>Am I right?</p>
<p>[Edit:</p>
<p>R contains <span class="math-container">$(3,3)$</span> as well]</p>
| None | 806,517 | <p>Third take. Stop changing the definitions! :)</p>
<p>We have a sequence of cumulative rotation unit quaternions <span class="math-container">$R_i$</span> and cumulative translations <span class="math-container">$T_i$</span> in each local coordinate system, with <span class="math-container">$Q_i$</span> describing the rotation from the global coordinate system to local coordinate system at step <span class="math-container">$i$</span>, and <span class="math-container">$P_i$</span> being the sum total translations up to and including step <span class="math-container">$i$</span> in the global coordinate system:</p>
<p><span class="math-container">$$Q_i = Q_{i-1} R_{i-1} \tag{1a}\label{None1a}$$</span>
or equivalently
<span class="math-container">$$Q_i = Q_{i+1} R_i^{-1} \tag{1b}\label{None1b}$$</span>
and
<span class="math-container">$$P_i = P_{i-1} + Q_{i-1} T_{i-1} Q_{i-1}^{-1} \tag{2a}\label{None2a}$$</span>
or equivalently
<span class="math-container">$$P_i = P_{i+1} - Q_i T_i Q_i^{-1} \tag{2b}\label{None2b}$$</span></p>
<p>We can use <span class="math-container">$\eqref{None1a}$</span> or <span class="math-container">$\eqref{None1b}$</span> to construct <span class="math-container">$Q_0^\prime, Q_1^\prime, \dots, Q_{N-1}^\prime, Q_{N}^\prime$</span>. With <span class="math-container">$\eqref{None1a}$</span>, set <span class="math-container">$Q_N^\prime = 1$</span> and iterate <span class="math-container">$i = N-1, N-2, \dots, 2, 1, 0$</span>. With <span class="math-container">$\eqref{None1b}$</span>, set <span class="math-container">$Q_0^\prime = 1$</span> and iterate <span class="math-container">$i = 1, 2, \dots, N-1, N$</span>.</p>
<p>Then, fix <span class="math-container">$Q_k = 1$</span> (where <span class="math-container">$0 \le k \le N$</span>), by iterating
<span class="math-container">$$Q_i = Q_k^{\prime -1} Q_i^\prime, \quad \forall i \tag{3}\label{None3}$$</span></p>
<p>After you have the orientations, you can calculate the total translations <span class="math-container">$P_i$</span> using
<span class="math-container">$$P_0^\prime = \vec{0}, \quad P_i^\prime = P_{i-1}^\prime + Q_{i-1} T_{i-1} Q_{i-1}^{-1} \tag{4a}\label{None4a}$$</span>
or
<span class="math-container">$$P_N^\prime = \vec{0}, \quad P_i^\prime = P_{i+1}^\prime - Q_i T_i Q_i^{-1} \tag{4b}\label{None4b}$$</span>
which will yield the same translations except for a constant difference. Again, to fix <span class="math-container">$P_k = \vec{0}$</span> for some <span class="math-container">$0 \le k \le N$</span>, iterate
<span class="math-container">$$P_i = P_i^\prime - P_k^\prime, \quad \forall i \tag{5}\label{None5}$$</span></p>
<p>As you can clearly see, this has <span class="math-container">$O(N)$</span> time complexity.</p>
<p>If <span class="math-container">$k = N$</span>, use the <span class="math-container">$(a)$</span> methods; and if <span class="math-container">$k = 0$</span>, the <span class="math-container">$(b)$</span> methods. This way the fixup is an identity operation, and can be skipped.</p>
<p>Note that with the fixup pass, both <span class="math-container">$(a)$</span> and <span class="math-container">$(b)$</span> work for all <span class="math-container">$k$</span>, including <span class="math-container">$k = 0$</span> and <span class="math-container">$k = N$</span>. Choosing one over the other is just an optimization.</p>
<hr />
<p>Here is an example Python3 program, that implements two helper classes, <code>Vector</code> and <code>Versor</code>, and implements the above logic. (If you save this as <code>example.py</code>, you can use <code>pydoc3 example</code> to see the API it provides. It is in Public Domain, and you can use the classes in your own scripts if you put it in the same directory, and add <code>from example import Vector, Versor</code>.)</p>
<pre><code>"""3D Euclidean vector class 'Vector', and unit quaternion class 'Versor'
-- a playground for experimenting with rotations in 3D"""
# SPDX-License-Identifier: CC0-1.0
from math import sqrt, pi, sin, cos, atan2
class Vector(tuple):
"""3D Cartesian vector type"""
FORMAT = "(%8.5f,%8.5f,%8.5f)"
def __new__(cls, x, y, z):
"""Create a new vector"""
return tuple.__new__(cls, (float(x), float(y), float(z)))
@property
def x(self):
"""x coordinate"""
return self[0]
@property
def y(self):
"""y coordinate"""
return self[1]
@property
def z(self):
"""z coordinate"""
return self[2]
@property
def norm(self):
"""Euclidean length"""
return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
@property
def normsqr(self):
"""Euclidean length squared"""
return self[0]*self[0] + self[1]*self[1] + self[2]*self[2]
@property
def unit_vector(self):
"""Scaled to unit length"""
n = sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
if n > 0:
return tuple.__new__(Vector, (self[0]/n, self[1]/n, self[2]/n))
else:
return tuple.__new__(Vector, (0, 0, 0))
def perpendicular_to(self, other):
"""Part perpendicular to another vector"""
nn = float(other[0])**2 + float(other[1])**2 + float(other[2])**2
if nn > 0:
d = (self[0]*other[0] + self[1]*other[1] + self[2]*other[2]) / nn
return tuple.__new__(Vector, (self[0] - d*other[0],
self[1] - d*other[1],
self[2] - d*other[2]))
else:
return tuple.__new__(Vector, (0, 0, 0))
def transform(self, matrix, translate=(0,0,0), pretranslate=(0,0,0)):
"""Transform this point by rotation matrix and translation"""
curr = Vector(self[0]+pretranslate[0], self[1]+pretranslate[1], self[2]+pretranslate[2])
        post = Vector(translate[0], translate[1], translate[2])
return Vector(matrix[0][0]*curr[0] + matrix[0][1]*curr[1] + matrix[0][2]*curr[2] + post[0],
matrix[1][0]*curr[0] + matrix[1][1]*curr[1] + matrix[1][2]*curr[2] + post[1],
matrix[2][0]*curr[0] + matrix[2][1]*curr[1] + matrix[2][2]*curr[2] + post[2])
def __str__(self):
"""Conversion to string"""
return Vector.FORMAT % self
def __abs__(self):
"""Absolute value is the Euclidean length"""
return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2])
def __neg__(self):
"""Opposite vector, negation"""
return tuple.__new__(Vector, (-self[0], -self[1], -self[2]))
def __bool__(self):
"""Nonzero vectors are considered True, zero vectors False"""
return (self[0]*self[0] + self[1]*self[1] + self[2]*self[2] > 0)
def __add__(self, other):
"""Vector addition"""
return tuple.__new__(Vector, (self[0]+other[0], self[1]+other[1], self[2]+other[2]))
def __radd__(self, other):
"""Vector addition"""
return tuple.__new__(Vector, (other[0]+self[0], other[1]+self[1], other[2]+self[2]))
def __sub__(self, other):
"""Vector subtraction"""
return tuple.__new__(Vector, (self[0]-other[0], self[1]-other[1], self[2]-other[2]))
    def __rsub__(self, other):
        """Vector subtraction (reflected): other - self"""
        return tuple.__new__(Vector, (other[0]-self[0], other[1]-self[1], other[2]-self[2]))
def __mul__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (self[0]*scalar, self[1]*scalar, self[2]*scalar))
else:
return NotImplemented
def __rmul__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (scalar*self[0], scalar*self[1], scalar*self[2]))
else:
return NotImplemented
def __truediv__(self, scalar):
if isinstance(scalar, (int, float)):
return tuple.__new__(Vector, (self[0]/scalar, self[1]/scalar, self[2]/scalar))
else:
return NotImplemented
def __rtruediv__(self, ignored):
return NotImplemented
def __or__(self, other):
"""Dot product, a | b"""
if isinstance(other, (list, tuple)) and len(other) == 3:
return self[0]*other[0] + self[1]*other[1] + self[2]*other[2]
else:
return NotImplemented
def dot(self, other):
"""Dot product"""
return self[0]*other[0] + self[1]*other[1] + self[2]*other[2]
def __xor__(self, other):
"""Cross product, a ^ b"""
if isinstance(other, (list, tuple)) and len(other) == 3:
return tuple.__new__(Vector, ( self[1]*other[2] - self[2]*other[1],
self[2]*other[0] - self[0]*other[2],
self[0]*other[1] - self[1]*other[0] ))
else:
return NotImplemented
def cross(self, other):
"""Cross product"""
return tuple.__new__(Vector, ( self[1]*other[2] - self[2]*other[1],
self[2]*other[0] - self[0]*other[2],
self[0]*other[1] - self[1]*other[0] ))
class Versor(tuple):
"""Unit quaternion type describing an orientation or a rotation"""
FORMAT = "(%8.5f;%8.5f,%8.5f,%8.5f)"
def __new__(cls, w, x, y, z):
"""Create a new versor"""
w = float(w)
x = float(x)
y = float(y)
z = float(z)
n = sqrt(w*w + x*x + y*y + z*z)
if n == 0:
w = 1
x = 0
y = 0
z = 0
elif n != 1:
w /= n
x /= n
y /= n
z /= n
return tuple.__new__(cls, (w, x, y, z))
@classmethod
def from_axis_angle(cls, axis, angle):
"""Create a versor from an axis and angle in degrees"""
x = float(axis[0])
y = float(axis[1])
z = float(axis[2])
h = angle * pi / 360.0
n = sqrt(x*x + y*y + z*z)
if n == 0:
return tuple.__new__(cls, (1, 0, 0, 0))
s, c = sin(h), cos(h)
return tuple.__new__(cls, (c, s*x/n, s*y/n, s*z/n))
def __str__(self):
"""Conversion to string"""
return Versor.FORMAT % self
    def __abs__(self):
        """Absolute value is the quaternion norm (1 for a true versor)"""
        return sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
def __neg__(self):
return Versor(-self[0], -self[1], -self[2], -self[3])
def __mul__(self, other):
if isinstance(other, (int, float)):
return (self[0]*other, self[1]*other, self[2]*other, self[3]*other)
elif isinstance(other, (list, tuple)) and len(other) == 4:
return (self[0]*other[0] - self[1]*other[1] - self[2]*other[2] - self[3]*other[3],
self[0]*other[1] + self[1]*other[0] + self[2]*other[3] - self[3]*other[2],
self[0]*other[2] - self[1]*other[3] + self[2]*other[0] + self[3]*other[1],
self[0]*other[3] + self[1]*other[2] - self[2]*other[1] + self[3]*other[0])
else:
return NotImplemented
def __rmul__(self, other):
if isinstance(other, (int, float)):
return (other*self[0], other*self[1], other*self[2], other*self[3])
elif isinstance(other, (list, tuple)) and len(other) == 4:
return (other[0]*self[0] - other[1]*self[1] - other[2]*self[2] - other[3]*self[3],
other[0]*self[1] + other[1]*self[0] + other[2]*self[3] - other[3]*self[2],
other[0]*self[2] - other[1]*self[3] + other[2]*self[0] + other[3]*self[1],
other[0]*self[3] + other[1]*self[2] - other[2]*self[1] + other[3]*self[0])
else:
return NotImplemented
@property
def w(self):
"""Versor real component"""
return self[0]
@property
def x(self):
"""Versor vector x component"""
return self[1]
@property
def y(self):
"""Versor vector y component"""
return self[2]
@property
def z(self):
"""Versor vector z component"""
return self[3]
@property
def axis(self):
"""Rotation unit axis vector"""
n = sqrt(self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
if n > 0:
return Vector(self[1]/n, self[2]/n, self[3]/n)
else:
return Vector(0, 0, 0)
@property
def angle(self):
"""Rotation angle, in degrees"""
        n = sqrt(self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
        if n > 0:
            return atan2(n, self[0]) * 360.0 / pi
        else:
            return 0.0
@property
def normalized(self):
"""Versor normalized to unit length"""
n = sqrt(self[0]*self[0] + self[1]*self[1] + self[2]*self[2] + self[3]*self[3])
if n == 0:
return tuple.__new__(Versor, (1, 0, 0, 0))
elif n != 1:
return tuple.__new__(Versor, (self[0]/n, self[1]/n, self[2]/n, self[3]/n))
else:
return self
@property
def inverse(self):
"""Inverse of this versor"""
return tuple.__new__(Versor, (self[0], -self[1], -self[2], -self[3]))
@property
def matrix(self):
"""Rotation matrix corresponding to this versor"""
c01 = 2*self[0]*self[1]
c02 = 2*self[0]*self[2]
c03 = 2*self[0]*self[3]
c11 = 2*self[1]*self[1]
c12 = 2*self[1]*self[2]
c13 = 2*self[1]*self[3]
c22 = 2*self[2]*self[2]
c23 = 2*self[2]*self[3]
c33 = 2*self[3]*self[3]
return ( Vector(1-c22-c33, c12-c03, c13+c02),
Vector(c12+c03, 1-c11-c33, c23-c01),
Vector(c13-c02, c23+c01, 1-c11-c22) )
def rotate(self, other):
"""For vector v, q.rotate(v) calculates q v q^-1 .
For quaternion p, q.rotate(p) calculates q p."""
if isinstance(other, (list, tuple)) and len(other) == 3:
x = float(other[0])
y = float(other[1])
z = float(other[2])
c00 = self[0]*self[0]
c11 = self[1]*self[1]
c22 = self[2]*self[2]
c33 = self[3]*self[3]
c01 = 2*self[0]*self[1]
c02 = 2*self[0]*self[2]
c03 = 2*self[0]*self[3]
c12 = 2*self[1]*self[2]
c13 = 2*self[1]*self[3]
c23 = 2*self[2]*self[3]
return Vector(x*(c00+c11-c22-c33) + y*(c12-c03) + z*(c02+c13),
x*(c03+c12) + y*(c00-c11+c22-c33) + z*(c23-c01),
x*(c13-c02) + y*(c01+c23) + z*(c00-c11-c22+c33))
elif isinstance(other, (list, tuple)) and len(other) == 4:
return Versor(self[0]*other[0] - self[1]*other[1] - self[2]*other[2] - self[3]*other[3],
self[0]*other[1] + self[1]*other[0] + self[2]*other[3] - self[3]*other[2],
self[0]*other[2] - self[1]*other[3] + self[2]*other[0] + self[3]*other[1],
self[0]*other[3] + self[1]*other[2] - self[2]*other[1] + self[3]*other[0])
else:
raise ValueError("Cannot rotate a %s" % str(type(other)))
def rotated_by(self, other):
"""For quaternion p, q.rotated_by(p) calculates p q."""
w = other[0]*self[0] - other[1]*self[1] - other[2]*self[2] - other[3]*self[3]
x = other[0]*self[1] + other[1]*self[0] + other[2]*self[3] - other[3]*self[2]
y = other[0]*self[2] - other[1]*self[3] + other[2]*self[0] + other[3]*self[1]
z = other[0]*self[3] + other[1]*self[2] - other[2]*self[1] + other[3]*self[0]
n = sqrt(w*w + x*x + y*y + z*z)
if n == 0:
return tuple.__new__(Versor, (1, 0, 0, 0))
elif n != 1:
w /= n
x /= n
y /= n
z /= n
return tuple.__new__(Versor, (w, x, y, z))
def transform(self, points, translate=(0, 0, 0), pretranslate=(0,0,0)):
result = []
pre = Vector(pretranslate[0], pretranslate[1], pretranslate[2])
post = Vector(translate[0], translate[1], translate[2])
for p in points:
            result.append(self.rotate(Vector(p[0], p[1], p[2]) + pre) + post)
return result
if __name__ == '__main__':
from random import Random
rng = Random()
N = 10
# Cumulative inverse rotations and local translations
R = []
T = []
for i in range(0, N):
while True:
x = rng.uniform(-1,+1)
y = rng.uniform(-1,+1)
z = rng.uniform(-1,+1)
n = sqrt(x*x + y*y + z*z)
if n > 0.1 and n < 1.0:
break
R.append(Versor.from_axis_angle(Vector(x, y, z), rng.uniform(-180, 180)))
T.append(Vector(rng.uniform(-2,+2), rng.uniform(-2,+2), rng.uniform(-2,+2)))
n = N - 1
# Valid range for k is 0..N, inclusive.
k = round(rng.uniform(-0.49, N+0.49))
Qa = [ None ]*(N+1)
Qb = [ None ]*(N+1)
Pa = [ None ]*(N+1)
Pb = [ None ]*(N+1)
#
# Orientation
#
# a) Forwards
Qa[0] = Versor(1,0,0,0)
for i in range(1, N+1): # i = 1, 2, ..., N-1, N.
Qa[i] = Qa[i-1].rotate(R[i-1])
# b) Backwards
Qb[N] = Versor(1,0,0,0)
for i in range(N-1,-1,-1): # i = N-1, N-2, ..., 1, 0.
Qb[i] = Qb[i+1].rotate(R[i].inverse)
print("Orientation quaternions before fixup pass:")
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
print("%s : %s" % (str(Qa[i]), str(Qb[i])))
# Orientation fixup.
QaFix = Qa[k].inverse
QbFix = Qb[k].inverse
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
Qa[i] = QaFix.rotate(Qa[i])
Qb[i] = QbFix.rotate(Qb[i])
#
# Position
#
# a) Forwards
Pa[0] = Vector(0,0,0)
for i in range(1, N+1): # i = 1, 2, ..., N-1, N.
Pa[i] = Pa[i-1] + Qa[i-1].rotate(T[i-1])
# b) Backwards
Pb[N] = Vector(0,0,0)
for i in range(N-1,-1,-1): # i = N-1, N-2, ..., 1, 0.
Pb[i] = Pb[i+1] - Qb[i].rotate(T[i])
# Position fixup.
PaFix = Pa[k]
PbFix = Pb[k]
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
Pa[i] = Pa[i] - PaFix
Pb[i] = Pb[i] - PbFix
print("Orientation quaternions after fixup pass for k=%d:" % k)
qerr = [ 0, 0, 0, 0 ]
    perr = [ 0, 0, 0 ]
for i in range(0, N+1): # i = 0, 1, ..., N-1, N.
print("%s = %s %s = %s" % (str(Qa[i]), str(Qb[i]), str(Pa[i]), str(Pb[i])))
qerr = [ max(qerr[0], abs(Qa[i][0] - Qb[i][0])),
max(qerr[1], abs(Qa[i][1] - Qb[i][1])),
max(qerr[2], abs(Qa[i][2] - Qb[i][2])),
max(qerr[3], abs(Qa[i][3] - Qb[i][3])) ]
        perr = [ max(perr[0], abs(Pa[i][0] - Pb[i][0])),
                 max(perr[1], abs(Pa[i][1] - Pb[i][1])),
                 max(perr[2], abs(Pa[i][2] - Pb[i][2])) ]
print("Max errors: %.6f %.6f %.6f %.6f %.6f %.6f %.6f" % (*qerr, *perr))
</code></pre>
<p>When run, the program generates ten rotations and translations, and uses the two iteration directions to solve <span class="math-container">$Q_i$</span> and <span class="math-container">$P_i$</span> with a random <span class="math-container">$k$</span>.</p>
<p>The output contains first <span class="math-container">$Q_a^\prime$</span> and <span class="math-container">$Q_b^\prime$</span> before the fixup pass, then <span class="math-container">$Q_a$</span>, <span class="math-container">$Q_b$</span>, <span class="math-container">$P_a$</span>, and <span class="math-container">$P_b$</span>. If this approach works, then <span class="math-container">$Q_a = Q_b$</span> and <span class="math-container">$P_a = P_b$</span>, with a selected row (<span class="math-container">$k = 0$</span> being the first) having <span class="math-container">$Q_a = Q_b = 1$</span> and <span class="math-container">$P_a = P_b = \vec{0}$</span>.
The last line reports the maximum component-wise differences between each pair of <span class="math-container">$Q_a$</span>, <span class="math-container">$Q_b$</span> and <span class="math-container">$P_a$</span>, <span class="math-container">$P_b$</span>.</p>
<p>Here is the output from an example run:</p>
<pre><code>Orientation quaternions before fixup pass:
( 1.00000; 0.00000, 0.00000, 0.00000) : ( 0.77039;-0.19139,-0.58682,-0.15973)
( 0.41966;-0.19167,-0.83338, 0.30435) : (-0.15381;-0.53969,-0.79943, 0.21446)
( 0.69413; 0.48687,-0.52190, 0.09363) : ( 0.33662; 0.10393,-0.86924, 0.34685)
( 0.77771; 0.49771,-0.37579,-0.07897) : ( 0.46126; 0.22090,-0.84049, 0.17893)
( 0.92556; 0.28991,-0.23517,-0.06307) : ( 0.62045; 0.04565,-0.78269, 0.01871)
(-0.24442; 0.33496,-0.88851,-0.19650) : (-0.67698; 0.27822,-0.63217, 0.25427)
( 0.40926; 0.48004,-0.01805,-0.77572) : ( 0.27267; 0.74382,-0.47920,-0.37783)
(-0.34109; 0.24307,-0.89218,-0.16908) : (-0.76681; 0.20925,-0.55835, 0.23762)
(-0.48096; 0.51707, 0.30692,-0.63805) : (-0.19338; 0.91384, 0.31398,-0.17004)
( 0.45215; 0.51974, 0.68178, 0.24620) : ( 0.88721; 0.27829, 0.22401, 0.29195)
( 0.77039; 0.19139, 0.58682, 0.15973) : ( 1.00000; 0.00000, 0.00000, 0.00000)
Orientation quaternions after fixup pass for k=5:
(-0.24442;-0.33496, 0.88851, 0.19650) = (-0.24442;-0.33496, 0.88851, 0.19650) (-0.36075, 0.15348,-3.62981) = (-0.36075, 0.15348,-3.62981)
( 0.51388; 0.34046, 0.64085, 0.45752) = ( 0.51388; 0.34046, 0.64085, 0.45752) ( 0.47612,-0.59985,-2.00515) = ( 0.47612,-0.59985,-2.00515)
( 0.43873;-0.16576, 0.87133,-0.14426) = ( 0.43873;-0.16576, 0.87133,-0.14426) ( 0.18914,-0.35767,-1.86871) = ( 0.18914,-0.35767,-1.86871)
( 0.32603;-0.37848, 0.85420,-0.14422) = ( 0.32603;-0.37848, 0.85420,-0.14422) ( 1.60830,-2.51529,-2.32274) = ( 1.60830,-2.51529,-2.32274)
( 0.09222;-0.39072, 0.91569, 0.01848) = ( 0.09222;-0.39072, 0.91569, 0.01848) ( 0.61687,-1.03912,-1.88966) = ( 0.61687,-1.03912,-1.88966)
( 1.00000; 0.00000, 0.00000,-0.00000) = ( 1.00000; 0.00000,-0.00000, 0.00000) ( 0.00000, 0.00000, 0.00000) = ( 0.00000, 0.00000, 0.00000)
( 0.22923;-0.94011, 0.20253,-0.15045) = ( 0.22923;-0.94011, 0.20253,-0.15045) ( 1.83789,-0.53006,-0.60693) = ( 1.83789,-0.53006,-0.60693)
( 0.99072; 0.07993,-0.09386, 0.05718) = ( 0.99072; 0.07993,-0.09386, 0.05718) ( 2.73874,-2.16200,-1.44103) = ( 2.73874,-2.16200,-1.44103)
( 0.14344;-0.59250,-0.61448,-0.50078) = ( 0.14344;-0.59250,-0.61448,-0.50078) ( 2.76191,-1.35384,-0.52793) = ( 2.76191,-1.35384,-0.52793)
(-0.59057;-0.19371, 0.41969,-0.66149) = (-0.59057;-0.19371, 0.41969,-0.66149) ( 4.10824,-0.82590,-2.47648) = ( 4.10824,-0.82590,-2.47648)
(-0.67698;-0.27822, 0.63217,-0.25427) = (-0.67698;-0.27822, 0.63217,-0.25427) ( 5.38527,-3.28029,-2.42548) = ( 5.38527,-3.28029,-2.42548)
Max errors: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
</code></pre>
<p>Hopefully this suffices as a numerical proof that this approach works.</p>
|
721,449 | <p>I need to determine all the positive divisors of 7!. I got 360 as the total number of positive divisors for 7!. Can someone confirm, or give the real answer?</p>
| Alessandro Codenotti | 136,041 | <p>Once you factorize a number as $N=p_1^{a_1}p_2^{a_2}p_3^{a_3}...p_n^{a_n}$, $p_i$ prime for every $i$, $a_i>0$ for every $i$ the number of divisors is given by $(a_1+1)(a_2+1)(a_3+1)...(a_n+1)$.</p>
<p>It is easy to see why this formula works from a combinatorial point of view: the divisors of $N$ are exactly the numbers of the form $p_1^{b_1}p_2^{b_2}p_3^{b_3}...p_n^{b_n}$, with $b_i\leq a_i$ for every $i$, but this time some (or all) of the $b_i$ can be $0$; this means we can pick $a_i+1$ values for $b_i$, from $0$ to $a_i$.</p>
<p>In your case $7!=2^43^25^17^1$, so it has $(4+1)(2+1)(1+1)(1+1)=60$ divisors.</p>
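<p>A quick brute-force check (my addition, not part of the original answer; plain Python with no assumptions beyond the formula above) confirms the count:</p>

```python
from math import factorial

def divisor_count(n):
    """Count the positive divisors of n by trial division up to sqrt(n)."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            # d and n // d are both divisors; they coincide when d*d == n
            count += 1 if d * d == n else 2
        d += 1
    return count

n = factorial(7)  # 5040 = 2^4 * 3^2 * 5 * 7
assert divisor_count(n) == (4 + 1) * (2 + 1) * (1 + 1) * (1 + 1) == 60
print(divisor_count(n))  # 60
```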
|
4,382,317 | <p>Show that <span class="math-container">$|z+1| = 2\cos(\frac{\theta}{2})$</span>, <span class="math-container">$z = cis(\theta)$</span> and <span class="math-container">$z \in C$</span></p>
<p>Here is what I have managed to do:</p>
<ul>
<li><span class="math-container">$r=1$</span></li>
<li><span class="math-container">$(z+1)^2 = (\cos(\theta)+i\sin(\theta)+1)^2$</span></li>
<li><span class="math-container">$(z+1)^2 = (\cos^2(\theta)+2(i\sin(\theta)+1)\cos(\theta)+(i\sin(\theta)+1)^2)$</span></li>
<li><span class="math-container">$(z+1)^2 = (\cos^2(\theta)+2i\sin(\theta)\cos(\theta)+2\cos(\theta)+(-\sin^2(\theta)+2i\sin(\theta)+1))$</span></li>
</ul>
<p>What I wanted to do was to take the square root of both sides to get the absolute value, however, I need to simplify the right-hand side first. I need help with that part</p>
| Samual | 766,626 | <p>note that
<span class="math-container">$z = \text{cis}(\theta)$</span> and <span class="math-container">$1 = \text{cis}(0)$</span>,</p>
<p>you can rewrite the function as
<span class="math-container">$|\text{cis}(\theta)+\text{cis }(0)|$</span></p>
<p>use the identity that
<span class="math-container">$ \text{cis}(x+y) = \text{cis}(x) \ \text{cis}(y)$</span> with <span class="math-container">$\frac{\theta}{2}$</span> from <span class="math-container">$\theta$</span> and <span class="math-container">$0$</span></p>
|
1,963,295 | <p>I have the following equation for a decision boundary line: $-w_0 = w_1x_1 + w_2x_2$ and I want to prove that the distance from the decision boundary to the origin is $l = \frac{w^Tx}{||w||}$. I am having trouble wrapping my mind around how I can just get the distance from a line to a point. Am I supposed to be averaging the distances of all the points on the line to the point?</p>
| dr.ivanova | 192,583 | <p>Let <span class="math-container">$X\sim N(\mu, \sigma)$</span>. "Rectified" Gaussian is then <span class="math-container">$Y = \max(0, X)$</span>. For both the expectation and the variance use the law of total expectation, as suggested by Michael. I'll also make use of the first two moments of the truncated normal distribution, which are readily available from <a href="https://en.wikipedia.org/wiki/Truncated_normal_distribution" rel="nofollow noreferrer">Wikipedia</a>.</p>
<p><span class="math-container">\begin{align}
\mathbb{E}Y &= \mathbb{E}[X|X>0]\mathbb{P}(X>0) + 0\times\mathbb{P}(X\leq0)
\end{align}</span></p>
<p>Notice that the random variable <span class="math-container">$X|X>0$</span> is a truncated normal (with parameters <span class="math-container">$\mu, \sigma, 0, \infty$</span>) which has a mean <span class="math-container">$\mu + \sigma \frac{\phi(-\mu/\sigma)}{1-\Phi(-\mu/\sigma)}$</span>. Hence, as already established:
<span class="math-container">$$
\mathbb{E}Y = \mu \left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \sigma \phi\left(-\frac{\mu}{\sigma}\right)
$$</span></p>
<p>For the variance
<span class="math-container">\begin{align}
Var(Y) &= \mathbb{E}[Y^2] - (\mathbb{E}Y)^2 \\
&= \mathbb{E}[X^2|X>0]\mathbb{P}(X>0) + 0\times\mathbb{P}(X\leq0)- (\mathbb{E}Y)^2
\end{align}</span></p>
<p>The only thing needed to calculate the above which we don't have is the second moment of the truncated normal, <span class="math-container">$\mathbb{E}[X^2|X>0]$</span>, which can be obtained from its mean and the variance:
<span class="math-container">\begin{align}
\mathbb{E}[X^2|X>0] &= \sigma^2\left( 1+ \frac{-\frac{\mu}{\sigma} \phi(-\frac{\mu}{\sigma})}{1-\Phi\left(-\frac{\mu}{\sigma}\right)} - \frac{\phi\left(-\frac{\mu}{\sigma} \right)^2}{\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right)^2}\right) +
\left(\mu + \sigma \frac{\phi\left(-\frac{\mu}{\sigma}\right)}{1-\Phi\left(-\frac{\mu}{\sigma}\right)} \right)^2 \\
& = \sigma^2 +\mu^2 +\mu\sigma \frac{\phi\left(-\frac{\mu}{\sigma}\right)}{1-\Phi\left(-\frac{\mu}{\sigma}\right)}
\end{align}</span></p>
<p>You can expand <span class="math-container">$(\mathbb{E}Y)^2$</span> and try to simplify a bit further:
<span class="math-container">\begin{align}
Var(Y) &= (\sigma^2 + \mu^2)\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \mu\sigma \phi\left(-\frac{\mu}{\sigma}\right) - \left(\mu \left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \sigma \phi\left(-\frac{\mu}{\sigma}\right) \right)^2 \\
&= \mu^2\Phi\left(-\frac{\mu}{\sigma}\right)\left(1-\Phi\left(-\frac{\mu}{\sigma}\right)\right) + \mu\sigma\phi\left(-\frac{\mu}{\sigma}\right)\left(2\Phi\left(-\frac{\mu}{\sigma}\right)-1 \right) + \sigma^2\left(1-\Phi\left(-\frac{\mu}{\sigma}\right) - \phi\left(-\frac{\mu}{\sigma}\right)^2\right)
\end{align}</span></p>
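<p>As a sanity check on both closed forms above (my addition; the values of <code>mu</code>, <code>sigma</code>, and the sample size are arbitrary choices), a small Monte Carlo simulation:</p>

```python
import random
from math import sqrt, pi, exp, erf

def phi(t):
    """Standard normal pdf."""
    return exp(-t * t / 2) / sqrt(2 * pi)

def Phi(t):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(t / sqrt(2)))

def rectified_mean(mu, sigma):
    """E[max(0, X)] for X ~ N(mu, sigma^2), per the closed form above."""
    a = -mu / sigma
    return mu * (1 - Phi(a)) + sigma * phi(a)

def rectified_var(mu, sigma):
    """Var(max(0, X)) as E[Y^2] - (E[Y])^2, per the derivation above."""
    a = -mu / sigma
    m = rectified_mean(mu, sigma)
    second_moment = (sigma**2 + mu**2) * (1 - Phi(a)) + mu * sigma * phi(a)
    return second_moment - m * m

# Monte Carlo check against the closed forms
random.seed(1)
mu, sigma, n = 0.7, 1.3, 200_000
ys = [max(0.0, random.gauss(mu, sigma)) for _ in range(n)]
mean_mc = sum(ys) / n
var_mc = sum((y - mean_mc) ** 2 for y in ys) / n
assert abs(mean_mc - rectified_mean(mu, sigma)) < 0.02
assert abs(var_mc - rectified_var(mu, sigma)) < 0.05
```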
|
3,339,780 | <p>The popular definition of a vector is </p>
<blockquote>
<p>A vector is an object that has both a magnitude and <strong>a</strong> direction.</p>
</blockquote>
<p>We know that zero vector has no specific <strong>single</strong> direction.</p>
<p>Then how can it be a vector?</p>
| John Bentin | 875 | <p>You are right to question this definition. It suggests that every vector is associated with a unique direction. This is almost true, with the sole exception of the zero vector, which cannot sensibly be said to have any direction. Unfortunately, a more accurate definition</p>
<p>“a vector is an object that has both a magnitude and a direction, with the sole exception of the zero vector, which cannot sensibly be said to have any direction”</p>
<p>is considerably less snappy. When you get to more advanced treatment of vectors, the lack of a direction for the zero vector is made quite explicit.</p>
|
3,339,780 | <p>The popular definition of a vector is </p>
<blockquote>
<p>A vector is an object that has both a magnitude and <strong>a</strong> direction.</p>
</blockquote>
<p>We know that zero vector has no specific <strong>single</strong> direction.</p>
<p>Then how can it be a vector?</p>
| Joonas Ilmavirta | 166,535 | <p>What you quote is a reminder, not a definition.
If you are asked to calculate a vector, you know the answer shouldn't be a single number.
The zero vector disagrees with that reminder, and that is a useful caveat to learn.</p>
<p>When vectors are first introduced, they might be treated as arrows and explained with pictures in a way that emphasizes that direction matters.
But if you actually get to give a definition of a vector in linear algebra (an element of a vector space), it looks different.</p>
<p>The direction is of a vector is not a canonically defined object, just a useful way of thinking.
Consider two possible definitions in <span class="math-container">$\mathbb R^n$</span>:</p>
<ol>
<li>The direction of a vector <span class="math-container">$x\in\mathbb R^n$</span> is a vector <span class="math-container">$v\in S^{n-1}$</span> so that <span class="math-container">$x=tv$</span> for some <span class="math-container">$t\in\mathbb R$</span>, <span class="math-container">$t\geq0$</span>.</li>
<li>The direction of <span class="math-container">$x$</span> is the unit vector <span class="math-container">$x/|x|$</span>.</li>
</ol>
<p>They only disagree on the zero vector, and both definitions are reasonable.
Whether the zero vector has any or no direction is an irrelevant choice.
If you are on a space without a norm, you can define a direction as an equivalence class.
Sometimes it can be useful to think that <span class="math-container">$v$</span> and <span class="math-container">$-v$</span> have the same direction, sometimes not.
Going from a vector space to the corresponding space of directions is essentially projectivization.</p>
<p>How you should define a direction depends on what you want to do with it.
At a general level with no application in sight I would say that there simply is no canonical definition.
Sometimes it's useful to say <span class="math-container">$1/0=\infty$</span>, sometimes it's better to leave <span class="math-container">$1/0$</span> undefined.</p>
<p>If these concepts are unfamiliar or weird to you, the message is even clearer:
The concept of direction is a tricky one to define.
I think it's often best left as a heuristic term instead of a technical one.</p>
|
3,237,476 | <p>I have a trouble with calculating the sum of this series:</p>
<p><span class="math-container">$$2+\sum_{n=1}^{\infty}\frac{1-n}{9n^3-n}$$</span></p>
<p>I tried to split it into three separate series like this:
<span class="math-container">$$2+\sum_{n=1}^{\infty}\frac{1-n}{9n^3-n} =2+\sum_{n=1}^{\infty}\frac{2}{3n+1}+\sum_{n=1}^{\infty}\frac{1}{3n-1}-\sum_{n=1}^{\infty}\frac{1}{n} $$</span> but I'm not able to continue, can you give me some tips?</p>
| J.G. | 56,861 | <p>Let's rewrite <span class="math-container">$\frac{1}{n}$</span> as <span class="math-container">$\frac{3}{3n}$</span>, so your sum is <span class="math-container">$S+2$</span> with<span class="math-container">$$S:=\sum_{n\ge 1}\left(\frac{1}{3n-1}-\frac{3}{3n}+\frac{2}{3n+1}\right)\\=\sum_{n\ge 1}\int_0^1 x^{3n-2}\left(1-3x+2x^2\right)\mathrm dx=\int_0^1\frac{x-3x^2+2x^3}{1-x^3}\mathrm dx.$$</span>Thus <span class="math-container">$$S+2=\int_0^1\frac{2+3x}{1+x+x^2}\mathrm dx=\frac32\int_0^1\frac{1+2x}{1+x+x^2}\mathrm dx+\frac12\int_0^1\frac{\mathrm dx}{1+x+x^2}.$$</span>The rest I'll leave to you.</p>
|
141,138 | <p>In "Catégories Tannakiennes" by Savedra Rivano (under A. Grothendieck supervision) at pag.78 he define a rigid category $\mathscr{C}$ as a monoidal symmetrical closed such that the natural morphisms $[X, X'] \otimes [Y, Y'] \to [X \otimes Y, X' \otimes Y' ]$ and $\theta: X \mapsto [[X, I], I] $ are isomorphisms.</p>
<p>The first is adjoint to $[X, X'] \otimes [Y, Y'] \otimes X \otimes Y \cong
[X, X'] \otimes X \otimes [Y, Y'] \otimes Y \xrightarrow{t \otimes t} X' \otimes Y' $</p>
<p>the latter is adjoint to $X \otimes [X, I] \cong [X, I] \otimes X \xrightarrow{t} I$.</p>
<p>where $t_{A, B}: [A, B] \otimes B \to A$, $u_{X, Y}: X \to [Y, X \otimes Y]$ the co-unity and unity of the adjunction $(A \otimes B, C) \cong (A, [B, C])$.</p>
<p>Let us write $[X, I] := \widehat{X}$.</p>
<p>Follow the natural isomorphisms:</p>
<p>1) $[X, Y] \cong \hat{X} \otimes Y $</p>
<p>2) $([X, Y])^\wedge \cong [\widehat{Y}, \widehat{X}] $</p>
<p>Now, in mathematical literature are studied many kinds of specialized monoidal categories: compact closed, tortile, autonomous, rigid (but a different definition) ecc. ecc. and many work about coherence questions about.</p>
<p><strong>I ask</strong>: this example of "rigid category" (of Savedra Rivano) is a particular case of some well studied type of monoidal categories?</p>
<p>A more simple question: let $\tau_X: I\xrightarrow{j} [X, X]\cong \widehat{X}\otimes X$.</p>
<p><strong>I ask</strong>: (in a rigid category) is the morphisms $I\xrightarrow{\tau_X}\widehat{X}\otimes X \xrightarrow{s}
X\otimes \widehat{X} \xrightarrow{\theta \otimes 1} \widehat{\widehat{X}}\otimes \widehat{X} $ equal to $\tau_{\widehat{X}}: I\to \widehat{\widehat{X}}\otimes \widehat{X}$ ?</p>
| Todd Trimble | 2,926 | <p>Rigid monoidal categories in this sense are equivalent to compact closed categories. Their coherence was studied by Kelly and LaPlaza: </p>
<ul>
<li>G.M. Kelly and M. LaPlaza, Coherence for compact closed categories, Journal of Pure and Applied Algebra, vol. 19, pp. 193-213, 1980. </li>
</ul>
<p>In particular, the answer to your second (simpler) question is 'yes'. </p>
|
141,138 | <p>In "Catégories Tannakiennes" by Savedra Rivano (under A. Grothendieck supervision) at pag.78 he define a rigid category $\mathscr{C}$ as a monoidal symmetrical closed such that the natural morphisms $[X, X'] \otimes [Y, Y'] \to [X \otimes Y, X' \otimes Y' ]$ and $\theta: X \mapsto [[X, I], I] $ are isomorphisms.</p>
<p>The first is adjoint to $[X, X'] \otimes [Y, Y'] \otimes X \otimes Y \cong
[X, X'] \otimes X \otimes [Y, Y'] \otimes Y \xrightarrow{t \otimes t} X' \otimes Y' $</p>
<p>the latter is adjoint to $X \otimes [X, I] \cong [X, I] \otimes X \xrightarrow{t} I$.</p>
<p>where $t_{A, B}: [A, B] \otimes B \to A$, $u_{X, Y}: X \to [Y, X \otimes Y]$ the co-unity and unity of the adjunction $(A \otimes B, C) \cong (A, [B, C])$.</p>
<p>Let us write $[X, I] := \widehat{X}$.</p>
<p>Follow the natural isomorphisms:</p>
<p>1) $[X, Y] \cong \hat{X} \otimes Y $</p>
<p>2) $([X, Y])^\wedge \cong [\widehat{Y}, \widehat{X}] $</p>
<p>Now, in mathematical literature are studied many kinds of specialized monoidal categories: compact closed, tortile, autonomous, rigid (but a different definition) ecc. ecc. and many work about coherence questions about.</p>
<p><strong>I ask</strong>: this example of "rigid category" (of Savedra Rivano) is a particular case of some well studied type of monoidal categories?</p>
<p>A more simple question: let $\tau_X: I\xrightarrow{j} [X, X]\cong \widehat{X}\otimes X$.</p>
<p><strong>I ask</strong>: (in a rigid category) is the morphisms $I\xrightarrow{\tau_X}\widehat{X}\otimes X \xrightarrow{s}
X\otimes \widehat{X} \xrightarrow{\theta \otimes 1} \widehat{\widehat{X}}\otimes \widehat{X} $ equal to $\tau_{\widehat{X}}: I\to \widehat{\widehat{X}}\otimes \widehat{X}$ ?</p>
| mszyld | 39,603 | <p>The answer to both your questions is yes. The second one
may be more elementary, but since I'm used to working with a definition
of duality different that the one of Saavedra Rivano, I prefer to answer
both questions in order.</p>
<p>Answer to the first question: I copy from the introduction of my degree
thesis (<a href="http://arxiv.org/pdf/1110.5293v1.pdf" rel="nofollow">http://arxiv.org/pdf/1110.5293v1.pdf</a>)
"In section 3 we define the important concept of dual pairing. In [1] and
in [3] the authors use two (a priori) different definitions of duality, so
we expose both and establish relations between them. We prove that
they are equivalent, but in the process we also obtain a formulation of
the concept of rigid category (of [3]) equivalent to the usual one but
with fewer axioms and valid also in the non-symmetric case"</p>
<p>[1] A. Joyal and R. Street. An Introduction to Tannaka Duality and
Quantum Groups in Category Theory, Proceedings, Como 1990, Lecture Notes in Math. 1488, pages 413-492.</p>
<p>[3] P. Deligne and J.S. Milne. Tannakian Categories. Hodge Cocycles
Motives and Shimura Varieties, pages 101-228, 1982.</p>
<p>Note: the definition of [3] is the same as the one of Saavedra Rivano.</p>
<p>Note: the full text of the thesis is in spanish.</p>
<p>After having written my thesis, I found out that Deligne also does this
(better of course) in Pierre Deligne, Catégories tannakiennes. This is in The Grothendieck Festschrift, Volume 2, 111--195. Birkhauser, 1990.
This is done in section 2, Rappels et compléments: catégories
tensorielles (text is in french). This is stated explicitly in proposition 2.3 and in 2.5.</p>
<p>To summarize, a rigid category as in Saavedra-Rivano or as in Deligne-Milne is equivalent to an autonomous category (as in Joyal-Street §9) that is symmetric. This is by definition a symmetric monoidal category where every object $X$ has a (right and left) dual $X^*$ (i.e. there are arrows $\eta: I \rightarrow X^* \otimes X$, $\varepsilon: X \otimes X^* \rightarrow I$ satisfying the two triangular identities); you can also find this in the wikipedia article
<a href="http://en.wikipedia.org/wiki/Rigid_category" rel="nofollow">http://en.wikipedia.org/wiki/Rigid_category</a> (there it says that there are
at least two equivalent definitions but only states this one, maybe someone reading this is willing to correct it?).</p>
<p>Answer to the second question: now we know that a rigid category as in
Saavedra-Rivano satisfies that every object has a dual $X^*$ as above.
Since the category is symmetric, right and left duals coincide and ${X^*}^*
= X$. Therefore the first composition of your second question (without
the last arrow if we take ${X^*}^* = X$ as definition of ${X^*}^*$) is precisely the way one defines the first arrow ($\eta: I \rightarrow {X^*}^* \times X^*$) that expresses the duality between ${X^*}^*$ and $X^*$ (remember ${X^*}^*=X$).</p>
<p>In other words, the symmetry morphism $s$ is the one that yields the equality ${X^*}^* = X$, by taking the $\eta$ corresponding to the duality ${X^*}^* \dashv X^*$ ($\tau_{\widehat{X}}$ in your notation) as the first composition of your second question by definition. I state this well known fact explicitly in proposition 3.9 of my degree thesis.</p>
|
3,012,558 | <p>Let <span class="math-container">$C \subseteq \mathbb{R}^n$</span> be a closed convex set, and <span class="math-container">$x^* \in C^c$</span> (not in <span class="math-container">$C$</span> and its closure). </p>
<p>Define the Euclidean distance from <span class="math-container">$x^*$</span> to <span class="math-container">$C$</span> as <span class="math-container">$d_C(x^*):=\min_{z \in C}\|z -x^*\|_2$</span>.</p>
<p>Let <span class="math-container">$D$</span> be a closed convex set containing <span class="math-container">$C$</span>, i.e., <span class="math-container">$C \subseteq D$</span>. </p>
<p>Show that
<span class="math-container">$$
d_D(x^*) \leq d_C(x^*)
$$</span></p>
<p>I do not know how to use <span class="math-container">$C \subseteq D$</span> together with taking minimum.</p>
| B.Swan | 398,679 | <p>Since <span class="math-container">$C \subseteq D$</span>, the minimum over <span class="math-container">$C$</span> can only be greater than or equal to the minimum over <span class="math-container">$D$</span>, so </p>

<p><span class="math-container">$$d_D(x^*)=\min_{z \in D}\|z -x^*\|_2 \leq \min_{z \in C}\|z -x^*\|_2= d_C(x^*)$$</span></p>
|
501,512 | <p>The lines CD and EF are perpendicular with points $C(1,2)$, $D(3,-4)$, $E(-2,5)$, and $F(k,4)$. Find the value of the constant $k$.</p>
| Ted Shifrin | 71,348 | <p>The OP presumably does not know about vectors or dot products. Here's the appropriate hint: What do you know about the <em>slopes</em> of perpendicular lines? Now can you do the problem?</p>
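<p>If you follow the hint through (the slopes of perpendicular lines multiply to $-1$), the computation can be scripted; an illustrative Python sketch, not part of the original hint (spoiler: it yields $k=-5$):</p>

```python
from fractions import Fraction

# Slope of CD through C(1, 2) and D(3, -4).
m_cd = Fraction(-4 - 2, 3 - 1)                 # -3

# Perpendicular lines have slopes whose product is -1.
m_ef = Fraction(-1) / m_cd                     # 1/3

# EF through E(-2, 5) and F(k, 4) has slope (4 - 5) / (k - (-2)).
# Solving (4 - 5) / (k + 2) = m_ef for k:
k = Fraction(4 - 5) / m_ef - 2

assert k == -5
assert m_cd * (Fraction(4 - 5) / (k + 2)) == -1
```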
|
2,425,127 | <p>I am learning about general solutions to differential equations and would like to ask whether my solution is mathematically correct. </p>
<p>I was asked to find the general solution to the differential equation </p>
<p>$$\frac{dy}{dx} = 2e^{x-y}$$</p>
<p>So I did the following - </p>
<p>$$\int e^y dy = 2\int e^x dx$$
$$e^y = 2 e^x + C $$
$$y = \ln (2 e^x + C) $$</p>
<p>Now, my book says that the solution in the form $y = f(x)$ is $y = \ln (2 e^x + C) $.</p>
<p>However, I progressed further and did the following:</p>
<p>$$y = \ln (2 e^x + C) $$
$$ = \ln (e^{{x}^{2}} + C) $$
$$y = x^2 + C' $$</p>
<p>where $C'$ is a modified constant from the original constant $C$. </p>
<p>Is this an acceptable solution?</p>
| doraemon | 473,786 | <p>Note this inequality: $$2e^x\neq e^{x^2}$$</p>
|
315,004 | <p>For a knot <span class="math-container">$K$</span>, let <span class="math-container">$\Sigma_K$</span> be the double cyclic branched cover of a knot. </p>
<p>By the classical work of <strong>Casson</strong> and <strong>Gordon</strong>, we know that if <span class="math-container">$K$</span> is smoothly slice, then <span class="math-container">$\Sigma_K$</span> bounds a rational homology ball.</p>
<p>Is there any well-known counter-example for the reversed direction?</p>
<p><strong>EDIT</strong> More general statement is true due to the work of Casson & Gordon, just seen in <a href="https://arxiv.org/pdf/1811.01433.pdf" rel="noreferrer">ACP18</a>. </p>
<p>For any prime <span class="math-container">$p$</span> and positive integer <span class="math-container">$r$</span>, the <span class="math-container">$p^r$</span>-fold cyclic branched cover of a knot <span class="math-container">$ K$</span>, denoted by <span class="math-container">$\Sigma_{p^r}(K)$</span>, is a <span class="math-container">$\mathbb Z_p$</span>-homology sphere.</p>
<p><strong>Theorem:</strong> If <span class="math-container">$K$</span> is smoothly slice, then <span class="math-container">$\Sigma_{p^r}(K)$</span> bounds <span class="math-container">$\mathbb Z_p$</span>-homology ball.</p>
| Oğuz Şavk | 131,172 | <p>Probably, I found some more explicit examples:</p>
<p><strong>Akbulut</strong> and <strong>Larson</strong> recently showed in <a href="https://arxiv.org/pdf/1704.07739.pdf" rel="nofollow noreferrer">AL18</a> that the family of Brieskorn spheres <span class="math-container">$\Sigma(2,4n+1,12n+5)$</span> and <span class="math-container">$\Sigma(3,3n+1,12n+5)$</span> bound rational homology balls.</p>
<p>These Brieskorn spheres are respectively the <span class="math-container">$2$</span>- and <span class="math-container">$3$</span>-fold cyclic branched covers of the torus knots <span class="math-container">$T_{4n+1,12n+5}$</span> and <span class="math-container">$T_{3n+1,12n+5}$</span> in <span class="math-container">$S^3$</span>. But these knots are not smoothly slice, due for example to the slice genus formula (the Milnor conjecture) proven by <strong>Kronheimer</strong> and <strong>Mrowka</strong> in <a href="http://www.math.harvard.edu/~kronheim/thom1.pdf" rel="nofollow noreferrer">KM93</a>.</p>
|
3,984,480 | <p>Show that <span class="math-container">$-\vec{0} = \vec{0}$</span> in any vector space.</p>
<p>I know this is a seemingly obvious statement but is the following justification correct:</p>
<p>Assume <span class="math-container">$-\vec{0} \neq \vec{0}$</span>.</p>
<p><span class="math-container">$$(4): \vec{u} + \vec{0} = \vec{u}$$</span>
<span class="math-container">$$ \vec{u} + \vec{0} + (-\vec{0}) = \vec{u} + (-\vec{0})$$</span>
<span class="math-container">$$(5): \vec{u} -\vec{u} + \vec{0} + (-\vec{0}) = \vec{u} - \vec{u} + (-\vec{0})$$</span>
<span class="math-container">$$(5): \vec{0} + (-\vec{0}) = (-\vec{0}) \rightarrow -\vec{0} + \vec{0} = \vec{0}$$</span></p>
<p>Hence we clearly see that <span class="math-container">$-\vec{0} = \vec{0}$</span> QED Is this reasoning correct?</p>
| Teplotaxl | 613,332 | <ol>
<li>A Theorem of Gromov states that: If <span class="math-container">$G$</span> is quasi-isometric to <span class="math-container">$\mathbb{Z}^{n}$</span> then <span class="math-container">$G$</span> has a finite index subgroup isomorphic to <span class="math-container">$\mathbb{Z}^{n}$</span>. (From a geometric information for the group we get algebraic information.) Therefore <span class="math-container">$\mathbb{Z}^{n}$</span> is quasi-isometrically rigid. The proof for general <span class="math-container">$n$</span> is quite difficult but for <span class="math-container">$n=1$</span> is far easier.</li>
<li>Also an interesting topic is the mapping class group <span class="math-container">$Mod(M)$</span> (the group of symmetries of a topological surface <span class="math-container">$M$</span>). If <span class="math-container">$M$</span> is compact and orientable then <span class="math-container">$Mod(M)$</span> is generated by Dehn twists. Now two examples are if <span class="math-container">$M$</span> is the annulus <span class="math-container">$S^{1}\times[0,1]$</span> then <span class="math-container">$Mod(M)\cong \mathbb{Z}$</span> and if <span class="math-container">$M$</span> is a pair of pants (a sphere with three disks removed) then <span class="math-container">$Mod(M)\cong \mathbb{Z}^{3}$</span>.</li>
</ol>
|
649,570 | <p>How do we show that there is only one solution to,$$\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+x}}}}=\sqrt[3]{6+\sqrt[3]{6+\sqrt[3]{6+\sqrt[3]{6+x}}}}$$</p>
<p>I guess it is only $x=2$.
Please help.</p>
| achille hui | 59,379 | <p>Let $\;f(x) = \sqrt{2+x}\;$ and $\;g(x) = \sqrt[3]{6+x}$; both are strictly increasing functions of $x$ when $x \ge -2$.
Since $(x+2)^3 - (x+6)^2 = (x-2)(x^2 + 7x + 14)$ and $x^2 + 7x + 14 > 0$ for all $x$,
we have
$$\begin{cases} f(x) > g(x) > 2,& x > 2\\f(x) = g(x) = 2,& x = 2\\f(x) < g(x) < 2, & x <2\end{cases}$$
So for any $x > 2$, we have
$$\begin{align}
& f(x) > g(x) > 2\\
\implies & f(f(x)) > f(g(x)) > g(g(x)) > 2\\
\implies & f(f(f(x))) > f(g(g(x)) > g(g(g(x)) > 2\\
\implies & f(f(f(f(x))) > f(g(g(g(x))) > g(g(g(g(x)))) > 2\\
\implies & f(f(f(f(x))) \ne g(g(g(g(x))))
\end{align}$$
Please note that in the above deductions, we are using the following reasoning repeatedly.
$$\underbrace{g\circ\cdots\circ g(x)}_{k \text{ terms}} > 2
\implies
f(\underbrace{g\circ\cdots\circ g(x)}_{k \text{ terms}}) >
\underbrace{g\circ\cdots\circ g(x)}_{k+1 \text{ terms}} > 2.$$</p>
<p>Similar logic shows that $f(f(f(f(x)))) \ne g(g(g(g(x))))$ for $x < 2$. As a result,
$x = 2$ is the only solution for the equation $f(f(f(f(x)))) = g(g(g(g(x))))$.</p>
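<p>The monotonicity argument can be sanity-checked numerically; an illustrative Python sketch (not part of the proof) comparing the four-fold iterates of $f$ and $g$:</p>

```python
def f(x):
    return (2 + x) ** 0.5

def g(x):
    return (6 + x) ** (1 / 3)

def iterate(h, x, n=4):
    # Apply h to x a total of n times.
    for _ in range(n):
        x = h(x)
    return x

# At x = 2 both four-fold iterates stay at the fixed point 2; away from 2,
# the sign of f^4(x) - g^4(x) matches the sign of x - 2, as argued above.
assert abs(iterate(f, 2) - 2) < 1e-9
assert abs(iterate(g, 2) - 2) < 1e-9
assert iterate(f, 5) > iterate(g, 5)   # x > 2  =>  f^4(x) > g^4(x) > 2
assert iterate(f, 0) < iterate(g, 0)   # x < 2  =>  f^4(x) < g^4(x) < 2
```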
|
1,339,875 | <p>Suppose we are given a function
$$g\left ( x \right )= \sum_{n=1}^{\infty}\frac{\sin \left ( nx \right )}{10^{n}\sin \left ( x \right )},x\neq k\pi , k\in\mathbb{Z}$$
and
$$g\left ( k\pi \right )=\lim _{x\rightarrow k\pi}g\left ( x \right )$$
I found that $\lim _{x\rightarrow k\pi}g\left ( x \right )= \sum_{n=1}^{\infty}\frac{1}{10^{n}}$ for both odd and even $k$. However, I am still unsure how to proceed from here. What does continuity even mean for functions like this? If I wanted to take the derivative of this function, surely I must prove first that it converges and then I would apply differentiation term by term.
Proving periodicity for one term would be easy enough if it weren't for the $10^{n}$ term in the denominator. How to deal with it?
<em>EDIT</em>: How to find the Fourier series of this function( by first proving that Dirichlet's conditions are met)</p>
| Claude Leibovici | 82,404 | <p>I am not sure that I arrive at the same result $$g(k\pi)=\sum_{n=1}^{\infty}\frac{1}{10^{n}}=\frac{1}{9}$$ </p>
<p>Using complex representation of $\sin(x)$, what I got is that $$g\left ( x \right )= \sum_{n=1}^{\infty}\frac{\sin \left ( nx \right )}{10^{n}\sin \left ( x \right )}=-\frac{10 e^{i x}}{\left(-10+e^{i x}\right) \left(-1+10 e^{i x}\right)}$$ So, $g\big(2k\pi\big)=\frac{10}{81}$, $g\big((2k+1)\pi\big)=\frac{10}{121}$.</p>
|
1,339,875 | <p>Suppose we are given a function
$$g\left ( x \right )= \sum_{n=1}^{\infty}\frac{\sin \left ( nx \right )}{10^{n}\sin \left ( x \right )},x\neq k\pi , k\in\mathbb{Z}$$
and
$$g\left ( k\pi \right )=\lim _{x\rightarrow k\pi}g\left ( x \right )$$
I found that $\lim _{x\rightarrow k\pi}g\left ( x \right )= \sum_{n=1}^{\infty}\frac{1}{10^{n}}$ for both odd and even $k$. However, I am still unsure how to proceed from here. What does continuity even mean for functions like this? If I wanted to take the derivative of this function, surely I must prove first that it converges and then I would apply differentiation term by term.
Proving periodicity for one term would be easy enough if it weren't for the $10^{n}$ term in the denominator. How to deal with it?
<em>EDIT</em>: How to find the Fourier series of this function( by first proving that Dirichlet's conditions are met)</p>
| Christian Blatter | 1,303 | <p>It is a basic precalculus fact that for arbitrary complex $z$ with $|z|<1$ one has
$$\sum_{k=0}^\infty z^k={1\over 1-z}\ .$$
Furthermore, by Euler's equation, for arbitrary real $x$ we may write $\sin x={\rm Im}\>e^{ix}$ . Putting these two together we obtain
$$\eqalign{\sum_{n=1}^\infty{\sin (nx)\over 10^n}&={\rm Im}\sum_{n=1}^\infty\left({e^{ix}\over10}\right)^n={\rm Im}{e^{ix}/10\over 1-e^{ix}/10}\cr &={\rm Im}{e^{ix}(10-e^{-ix})\over (10-e^{ix})(10-e^{-ix})}={10\sin x\over101-20\cos x}\ .\cr}$$
It follows that
$$g(x)={10\over 101-20\cos x}\qquad(x\in{\mathbb R}\setminus \pi{\mathbb Z})\ .\tag{1}$$
It follows that the apparent singularities at integer $x$ are removable, so that $g$ has a natural $C^\infty$ extension to all of ${\mathbb R}$, and this extension is $2\pi$-periodic. Note that the initially given expression for $g$ is <em>not</em> its Fourier series. In order to obtain the latter one would have to compute the integrals $$a_k:={1\over\pi}\int_0^{2\pi}g(x)\>\cos(kx)\>dx\ ,$$ using the expression $(1)$ for $g$. The resulting expressions for the $a_k$ will not be simple.</p>
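<p>As a numerical sanity check of $(1)$: the $10^{-n}$ factor makes the series converge very fast, so a modest partial sum already matches the closed form to machine precision (an illustrative Python sketch):</p>

```python
import math

def g_series(x, terms=30):
    # Partial sum of sum_{n>=1} sin(n x) / (10**n * sin(x)).
    return sum(math.sin(n * x) / 10 ** n for n in range(1, terms + 1)) / math.sin(x)

def g_closed(x):
    return 10 / (101 - 20 * math.cos(x))

for x in (0.3, 1.0, 2.5, -0.7):   # values away from multiples of pi
    assert abs(g_series(x) - g_closed(x)) < 1e-12

# The removable singularities: g(0) = 10/81 and g(pi) = 10/121.
assert abs(g_closed(0) - 10 / 81) < 1e-15
assert abs(g_closed(math.pi) - 10 / 121) < 1e-15
```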
|
1,915,366 | <p>Suppose we have two Gaussian distributed random variables $X$~$N(0,\sigma^2)$ and $Y$~$N(0,\sigma^2)$. These variables are not independent. What is the expected value of the product of the squares of these random variables?</p>
<p>$E[X^2Y^2]$ = ??</p>
<p>Edit 1: They are jointly Gaussian distributed with correlation coefficient $\rho$ </p>
<p>Edit 2: $X$~$N(0,\sigma^2)$, $Y$~$N(0,\sigma^2)$</p>
| Frank | 87,112 | <p>See <a href="https://stats.stackexchange.com/a/182822/">Variance of product of dependent variables</a> at CrossValidated, specifically the second answer, for the outline of a derivation. In your case, because $X$ and $Y$ have zero mean and the same variance, the formula is simpler.</p>
<p>As explained there, the conditional density of $Y$ given $X = x$ is normal, with mean $\rho x$ and variance $\sigma^2(1-\rho^2)$. It follows that
\begin{align*}
E[X^2 Y^2 \mid X]&= X^2 E[Y^2 \mid X] \\
&= X^2 \left[
\sigma^2 (1-\rho^2) + \rho^2 X^2
\right].
\end{align*}
By the law of iterated expectation,
$$
E[X^2Y^2] = E\left[ E[X^2Y^2 \mid X] \right] = \sigma^2(1-\rho^2) E[X^2] + \rho^2 E[X^4]
$$
and you can look up the moments $E[X^2]$ and $E[X^4]$ at <a href="https://en.wikipedia.org/wiki/Normal_distribution#Moments" rel="nofollow noreferrer">the Wikipedia article on the normal distribution</a>. You'll end up with $
\sigma^4(1+2\rho^2)
$.</p>
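<p>The closed form $\sigma^4(1+2\rho^2)$ is easy to spot-check by Monte Carlo; an illustrative Python sketch (the tolerance is loose to allow for sampling noise):</p>

```python
import math
import random

random.seed(0)
sigma, rho, n = 1.3, 0.6, 200_000

total = 0.0
for _ in range(n):
    # (X, Y) jointly Gaussian with correlation rho, built from two
    # independent standard normals.
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = sigma * z1
    y = sigma * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    total += (x * y) ** 2

estimate = total / n
exact = sigma ** 4 * (1 + 2 * rho ** 2)
assert abs(estimate - exact) / exact < 0.05    # within 5% of sigma^4 (1 + 2 rho^2)
```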
|
72,669 | <p>I encountered this site today <a href="https://code.google.com/p/google-styleguide/">https://code.google.com/p/google-styleguide/</a> regarding the programming style in some languages. What would be best programming practices in Mathematica, for small and large projects ?</p>
| murray | 148 | <p>Let me try with a few simple ("obvious"?) style guidelines I try to follow:</p>
<ul>
<li>Use meaningful names that are spelled out or that use widely-adopted abbreviations from the field of application.</li>
<li>Begin names with lower-case letters (except when they're going into a Package for others' use) and then use camelCasing.</li>
<li>Avoid nesting functions too deeply with use of brackets; instead, try to use the built-in special input forms, e.g., <code>/@</code> for <code>Map</code>, <code>@@</code> for <code>Apply</code>, prefix form with <code>@</code>, and (when it's semantically appropriate) postfix form with <code>//</code>.</li>
<li>Intermix text cells containing documentation with input cells containing code.</li>
<li>Define <code>::usage</code> strings for functions to be used by others, or for functions whose syntax you may readily forget.</li>
</ul>
|
523,529 | <p>I'm new to inequalities in mathematical induction and don't know how to proceed further. So far I was able to do this:<br>
$V(1): 1≤1 \text{ true}$ <br>
$V(n): n!≤((n+1)/2)^n$ <br>
$V(n+1): (n+1)!≤((n+2)/2)^{(n+1)}$<br><br></p>
<p>and I've got : <br>$(((n+1)/2)^n)\cdot(n+1)≤((n+2)/2)^{(n+1)}$ <br>$((n+1)^n)n(n+1)≤((n+2)^n)((n/2)+1)$</p>
| Smylic | 100,361 | <p>It is easier to prove this inequality without induction. Indeed, $$0 < i\cdot (n + 1 - i) = \left(\frac{n+1}2 + \frac{2i - n - 1}2\right)\left(\frac{n+1}2 - \frac{2i - n - 1}2\right) = \left(\frac{n+1}2\right)^2 - \left(\frac{2i - n - 1}2\right)^2 \le \left(\frac{n+1}2\right)^2.$$
Multiplying these inequalities over all $i = 1, 2, \ldots, \left\lfloor\frac n2\right\rfloor$, together with the middle factor $\frac{n+1}2 = \frac{n+1}2$ when $n$ is odd, gives $n! \le \left(\frac{n+1}2\right)^n$ as desired.</p>
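<p>Both the single-factor bound and the final inequality can be checked numerically for small $n$; an illustrative Python sketch, not part of the proof:</p>

```python
from math import factorial

for n in range(1, 20):
    # The inequality to be proved: n! <= ((n+1)/2)^n.
    assert factorial(n) <= ((n + 1) / 2) ** n
    # The key single-factor bound used in the pairing argument.
    for i in range(1, n + 1):
        assert i * (n + 1 - i) <= ((n + 1) / 2) ** 2
```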
|
980,941 | <p>How can I calculate $1+(1+2)+(1+2+3)+\cdots+(1+2+3+\cdots+n)$? I know that $1+2+\cdots+n=\dfrac{n+1}{2}\dot\ n$. But what should I do next?</p>
| R. J. Mathar | 185,604 | <p>This is the $n$-th partial sum of the triangular numbers, i.e. the $n$-th tetrahedral number $\frac{n(n+1)(n+2)}{6}$, as listed in <a href="http://oeis.org/A000292" rel="nofollow">http://oeis.org/A000292</a>.</p>
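<p>For reference, these partial sums match the tetrahedral numbers $n(n+1)(n+2)/6$ (the known closed form for OEIS A000292); a quick illustrative Python check:</p>

```python
def nested_sum(n):
    # 1 + (1+2) + (1+2+3) + ... + (1+2+...+n), using 1+2+...+k = k(k+1)/2.
    return sum(k * (k + 1) // 2 for k in range(1, n + 1))

for n in range(1, 50):
    assert nested_sum(n) == n * (n + 1) * (n + 2) // 6
```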
|
1,027,807 | <p>So I have this question that looks like</p>
<p>$$ \frac{x^3 + 3x^2 - x - 8}{x^2 + x - 6} $$</p>
<p>and first I got the partial fraction so getting </p>
<p>$$ x + 2 + \frac{3x + 4}{x^2 + x -6} $$</p>
<p>but now I'm trying to integrate it and I cannot remember for the life of me how I should integrate the fraction on the end. Please help.</p>
| rae306 | 168,956 | <p><strong>Hint</strong>:</p>
<p>Remember that $x^2+x-6=(x-2)(x+3)$.</p>
<p>Now apply the partial fraction decomposition again:</p>
<p>$$\frac{3x+4}{x^2+x-6}=\frac{3x+4}{(x-2)(x+3)}=\frac{A}{x-2}+\frac{B}{x+3}$$</p>
<p>Also, as a sanity check, your initial polynomial division is correct:</p>

<p>$$\frac{x^3 + 3x^2 - x - 8}{x^2 + x - 6}=x+2+\frac{3x+4}{x^2+x-6}$$</p>
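<p>Carrying the decomposition through gives $A=2$ and $B=1$; a small illustrative Python spot-check (not part of the hint itself):</p>

```python
# 3x + 4 = A(x + 3) + B(x - 2); plugging in x = 2 and x = -3 gives A and B.
A = (3 * 2 + 4) / (2 + 3)        # A = 2
B = (3 * (-3) + 4) / (-3 - 2)    # B = 1
assert A == 2 and B == 1

# Spot-check the identity at a few points away from the poles x = 2, x = -3.
for x in (0.0, 1.0, 5.0, -1.5):
    lhs = (3 * x + 4) / (x ** 2 + x - 6)
    rhs = A / (x - 2) + B / (x + 3)
    assert abs(lhs - rhs) < 1e-12
```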
|
3,275,892 | <p>I'm having trouble graphing the following inequality:
<span class="math-container">$(x+y) \div (x-y) \ge 0$</span>.
I know what the graph looks like, but I can't grasp the thought process behind solving the problem.</p>
<p>Thanks in advance!</p>
| Dr. Sonnhard Graubner | 175,066 | <p>First of all, we must have <span class="math-container">$$x\neq y.$$</span> Now there are two cases.
First case: if <span class="math-container">$$x>y,$$</span> the denominator is positive, so the inequality holds exactly when <span class="math-container">$$x\geq -y.$$</span>
Second case: if
<span class="math-container">$$x<y,$$</span> the denominator is negative, so we need <span class="math-container">$$x\le-y.$$</span></p>
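<p>The case analysis can be brute-force checked on a grid of sample points; an illustrative Python sketch:</p>

```python
def predicted(x, y):
    # Case analysis: for x > y we need x >= -y; for x < y we need x <= -y.
    return x >= -y if x > y else x <= -y

for x in [i / 2 for i in range(-8, 9)]:
    for y in [j / 2 for j in range(-8, 9)]:
        if x == y:
            continue            # the fraction is undefined on the line x = y
        assert ((x + y) / (x - y) >= 0) == predicted(x, y)
```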
|
4,185,658 | <p>In the proof of Proposition. 1.3. page 100 Functional Analysis book of Conway the following claim (<span class="math-container">$X$</span> is a TVS and <span class="math-container">$p$</span> is a seminorm.)</p>
<p>If <span class="math-container">$0 \in \operatorname{Int}{\{x \in X : p(x) \le 1}\}$</span> then <span class="math-container">$0 \in \operatorname{Int}{\{x \in X : p(x) \le \epsilon}\}$</span> for every <span class="math-container">$\epsilon >0$</span>. Why is this true?</p>
| Ramiro | 190,563 | <p>Suppose <span class="math-container">$0 \in \operatorname{Int}{\{x \in X : p(x) \le 1}\}$</span>.
Given <span class="math-container">$\varepsilon >0$</span>, let <span class="math-container">$f: X \rightarrow X$</span> be defined by <span class="math-container">$f(x) = \frac{1}{\varepsilon} x$</span>. We have that <span class="math-container">$f$</span> is continuous. Now, <span class="math-container">$\operatorname{Int}{\{x \in X : p(x) \le 1}\}$</span> is an open set in <span class="math-container">$X$</span> and so we have that <span class="math-container">$f^{-1}(\operatorname{Int}{\{x \in X : p(x) \le 1}\})$</span> is an open set in <span class="math-container">$X$</span> and
<span class="math-container">$$ 0 \in f^{-1}(\operatorname{Int}\{x \in X : p(x) \le 1\}) \subseteq \{x \in X : p(x) \le \varepsilon\}$$</span>
So, <span class="math-container">$0\in \operatorname{Int}{\{x \in X : p(x) \le \varepsilon}\}$</span>.</p>
|
138,698 | <p>I want to evaluate the following double summation</p>
<pre><code>Sum[(-1)^(i + j + i*j)*Exp[-Pi/2*( i^2 + j^2)], {i, -Infinity,
Infinity}, {j, -Infinity, Infinity}]
</code></pre>
<p>I am really new both to using Mathematica and to doing mathematics with a computer. I don't know if there are special techniques for dealing with this kind of summation (lattice sums) in Mathematica.</p>
<p>When I evaluate the above expression, Mathematica refuses to evaluate it and just reprints it in the output.</p>
<p>Theoretically, the expected value is 0.</p>
| mikado | 36,788 | <p>As noted by Szabolcs, there is an inefficiency in generating the table of real elements one at a time. </p>
<pre><code>v = RandomReal[{0, 1}, 1000000];
Table[Timing[Length[Select[v, # < 0.5 &]]][[1]], {10}]
(* {0.384, 0.38, 0.376, 0.38, 0.38, 0.38, 0.504, 0.48, 0.384, 0.38} *)
</code></pre>
<p>However, a more important time saving can be made by compiling the selection function </p>
<pre><code>sel = Compile[{{x, _Real, 1}}, Select[x, # < 0.5 &]];
Table[Timing[Length[sel[v]]][[1]], {10}]
(* {0.072, 0.068, 0.068, 0.068, 0.072, 0.068, 0.064, 0.068, 0.068, 0.068} *)
</code></pre>
|
4,478,109 | <p>Find the value of <span class="math-container">$$\int\frac{1+\ln x}{4+x\ln x^2}\mathrm{d}x$$</span> I have a very bad understanding of integrals where some function of a variable is in the denominator. I know I have to do some kind of substitution and I even tried that but can't get any help from it. Please forgive me if this is a very simple question because I am really bad at integrals (denominator ones).</p>
<p>Any help will be greatly appreciated.</p>
| insipidintegrator | 1,062,486 | <p><span class="math-container">$$x\ln(x^2)=2\cdot x\ln x. $$</span>
Now, try to differentiate the function <span class="math-container">$f(x)= x\ln x.$</span> So by using the product rule, <span class="math-container">$$\frac{d}{dx}(x\ln x)= x\cdot\frac 1x + 1\cdot \ln x=1+\ln x.$$</span> This is the numerator of your integrand. Hence, substitute <span class="math-container">$x\ln x=u$</span>, (you could also do <span class="math-container">$4+2x \ln x=u$</span>) so that <span class="math-container">$du=(1+\ln x)dx$</span>. Hence the integral becomes <span class="math-container">$$\int \frac{du}{4+2u}$$</span> Now put <span class="math-container">$4+2u=t$</span> so that <span class="math-container">$2du=dt$</span>. Thus we get <span class="math-container">$$ \int \frac{du}{4+2u}=\int \frac{dt}{2t}=\frac 12 \ln t+C= \frac 12 \ln(2u+4)+c= \frac 12 \ln(2x\ln x+4)+C= \frac 12 \ln(x\ln(x^2)+4)+C.$$</span> where <span class="math-container">$C$</span> is the integration constant.</p>
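<p>The antiderivative can be confirmed numerically by comparing a central-difference derivative of the result with the integrand; an illustrative Python sketch:</p>

```python
import math

def F(x):
    # Candidate antiderivative: (1/2) ln(x ln(x^2) + 4).
    return 0.5 * math.log(x * math.log(x ** 2) + 4)

def integrand(x):
    return (1 + math.log(x)) / (4 + x * math.log(x ** 2))

h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):   # points where both F and the integrand are defined
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-6
```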
|
4,478,109 | <p>Find the value of <span class="math-container">$$\int\frac{1+\ln x}{4+x\ln x^2}\mathrm{d}x$$</span> I have a very bad understanding of integrals where some function of a variable is in the denominator. I know I have to do some kind of substitution and I even tried that but can't get any help from it. Please forgive me if this is a very simple question because I am really bad at integrals (denominator ones).</p>
<p>Any help will be greatly appreciated.</p>
| Davius | 910,130 | <p>First note that <span class="math-container">$\ln(x^2)$</span> is just <span class="math-container">$2\ln x$</span>. Then consider the function <span class="math-container">$f(x) = \ln(a+bx\ln(x))$</span>, for which you have:</p>
<p><span class="math-container">$$f'(x) = \frac{b+b\ln(x)}{a+bx\ln x}$$</span></p>
<p>Now you are on the path to finding your answer.</p>
|
638,566 | <p>I came across a question yesterday about combinations, and I wanted to know what the correct answer was. The question states as follows:</p>
<p>There are 8 spaces that are alternately black and white. There is one king, one queen, two identical rooks, two identical bishops, and two identical knights. The king needs to be surrounded by the two rooks. Then, the two bishops can be put on any of the remaining spaces as long as one is on a black space and the other is on a white space. Finally, the queen and the knights can be placed anywhere in the remaining spaces in relation to the other pieces. How many possible arrangements are there considering these conditions?</p>
| bof | 111,012 | <p>Assuming that the king and rooks do <strong>not</strong> have to be on consecutive squares, the question is about the number of legal arrays in <a href="http://www.chessvariants.org/diffsetup.dir/fischer.html" rel="nofollow">Fischer Random Chess</a> aka <a href="http://en.wikipedia.org/wiki/Chess960" rel="nofollow">Chess960</a>. The answer is (surprise!) $960$. Here is an explanation from the <a href="http://en.wikipedia.org/wiki/Chess960#Why_960" rel="nofollow">Wikipedia page</a>:</p>
<blockquote>
<p>Each bishop can take one of four positions, the queen one of six, and the two knights can assume five or four possible positions respectively. This leaves three open squares which the king and rooks must occupy according to setup stipulations, without choice. This means there are 4×4×6×5×4 = 1920 possible starting positions if the two knights were different in some way. However, the two knights are indistinguishable during play (if swapped, there would be no difference), so the number of distinguishable possible positions is half of 1920, or 1920/2 = 960. (Half of the 960 are left-right mirror images of the other half, however Chess960 castling rules preserve left-right asymmetry in play.)</p>
</blockquote>
<p>Of these $960$ positions, just $108$ satisfy the further condition that the rooks be right next to the king. Namely, there are $6$ ways to place the RKR combo. Any placement of the RKR combo leaves three squares of one color and two squares of the other color, so there are $3\times2$ ways to place the bishops. The queen then has a choice of $3$ squares, and the knights take what's left; $6\times3\times2\times3=108$.</p>
<p>The requirement that the king be placed between the rooks is natural, because it permits a kind of generalized <a href="http://en.wikipedia.org/wiki/Castling" rel="nofollow">castling</a> with an effect somewhat similar to castling in orthodox chess. Requiring the rooks to be placed right next to the king has no apparent chessic motivation.</p>
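<p>Both counts, the 960 legal arrays and the 108 with the rooks right next to the king, can be reproduced by brute-force enumeration; an illustrative Python sketch:</p>

```python
from itertools import permutations

pieces = ['R', 'R', 'K', 'B', 'B', 'Q', 'N', 'N']
arrangements = set(permutations(pieces))   # 8!/(2! 2! 2!) = 5040 distinct rows

def bishops_opposite(row):
    i, j = [k for k, p in enumerate(row) if p == 'B']
    return (i - j) % 2 == 1                # opposite square colors

def king_between_rooks(row):
    r1, r2 = [k for k, p in enumerate(row) if p == 'R']
    return r1 < row.index('K') < r2

def rooks_adjacent_to_king(row):
    k = row.index('K')
    return 0 < k < 7 and row[k - 1] == 'R' and row[k + 1] == 'R'

chess960 = [r for r in arrangements if bishops_opposite(r) and king_between_rooks(r)]
assert len(chess960) == 960

surrounded = [r for r in chess960 if rooks_adjacent_to_king(r)]
assert len(surrounded) == 108
```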
|
3,066,530 | <p><span class="math-container">$$\lim_{x\to 0} \frac {(\sin(2x)-2\sin(x))^4}{(3+\cos(2x)-4\cos(x))^3}$$</span> </p>
<p>without L'Hôpital.</p>
<p>I've tried using equivalences with <span class="math-container">${(\sin(2x)-2\sin(x))^4}$</span> and arrived at <span class="math-container">$-x^{12}$</span> but I don't know how to handle <span class="math-container">${(3+\cos(2x)-4\cos(x))^3}$</span>. Using <span class="math-container">$\cos(2x)=\cos^2(x)-\sin^2(x)$</span> hasn't helped, so any hint?</p>
| MSDG | 447,520 | <p><strong>Hint:</strong> Note that
<span class="math-container">$$ 3+\cos(2x)-4\cos(x) = 3 + 2\cos^2(x) - 1 - 4\cos(x) = 2(\cos(x)-1)^2, $$</span>
and that
<span class="math-container">$$ \sin(2x) - 2\sin(x) = 2\sin(x)\cos(x)-2\sin(x) = 2\sin(x)(\cos(x)-1). $$</span></p>
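<p>Combining the two identities, the expression equals $2\sin^4 x/(\cos x - 1)^2$, which tends to $8$ as $x\to 0$; a quick numerical check (illustrative Python, with $x$ kept away from $0$ to avoid catastrophic cancellation):</p>

```python
import math

def f(x):
    num = (math.sin(2 * x) - 2 * math.sin(x)) ** 4
    den = (3 + math.cos(2 * x) - 4 * math.cos(x)) ** 3
    return num / den

# With the two identities, f(x) = 2 sin(x)^4 / (cos(x) - 1)^2.
for x in (0.5, 0.1, 0.01):
    simplified = 2 * math.sin(x) ** 4 / (math.cos(x) - 1) ** 2
    assert abs(f(x) - simplified) < 1e-4

# The limit as x -> 0 is 8.
assert abs(f(0.01) - 8) < 1e-2
```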
|
2,479,290 | <p>Came across this question in my textbook:</p>
<p>$f(x) = (1+2x)^{10}$. Determine $f^{(5)}(0)$ using the binomial theorem.</p>
<p>If I am correct, the author of the book want me not to use the power rule. How else do I compute this? </p>
| Graham Kemp | 135,106 | <p><strong>Strong Induction</strong> :</p>
<p>We let <span class="math-container">$n$</span> refer to some arbitrary natural number, and by assuming <span class="math-container">$P(0),..., P(n-1)$</span>, we make <em>a valid argument</em> that these together imply <span class="math-container">$P(n)$</span>. Thus we conclude <span class="math-container">$\forall n\in\Bbb N:P(n)$</span>.</p>
<p>Thus we don't <em>need</em> to prove the base case <span class="math-container">$P(0)$</span> is true to make the strong inductive argument. Although a proof might do so; there is nothing prohibiting it.</p>
<blockquote>
<p>The argument for <span class="math-container">$P(0)$</span> holds true vacuously since <span class="math-container">$[\forall k \in N, k<0, P(k)]$</span> is always false (i.e. there are no natural numbers that are less than zero) so we have that <span class="math-container">$[\forall k \in N, k<0, P(k)] \rightarrow P(0)$</span>. Therefore, <span class="math-container">$P(0)$</span> is proven as the base case while proving the inductive hypothesis.</p>
</blockquote>
<p>No, the argument is that <span class="math-container">$[\forall k\in N, k<0: P(k)]$</span> is vacuously <em>true</em>. There does not exist a natural number less than <span class="math-container">$0$</span> where the premise does not hold.</p>
<p>So, since <span class="math-container">$[\forall k\in\Bbb N, k<0:P(k)]$</span> is vacuously true, and <em>if</em> <span class="math-container">$[\forall k\in\Bbb N, k<n:P(k)]\to P(n)$</span> <em>can be proven</em> for an <em>arbitrary</em> natural number, then <span class="math-container">$P(0)$</span> <em>will</em> be true. And then so will <span class="math-container">$P(1)$</span>, <span class="math-container">$P(2)$</span>, and cetera.</p>
<p>Still, it is safest to check.</p>
<hr />
<blockquote>
<p>All natural numbers are even. No base case required. Well, suppose that <span class="math-container">$[\forall k \in N, k<n, P(k)]$</span> this is true. Then it's clearly true for <span class="math-container">$n$</span>, since <span class="math-container">$n=(n-2)+2$</span>. Since <span class="math-container">$n-2$</span> is even by the hypothesis, and <span class="math-container">$n-2+2$</span> is just adding 2 to an even number, we still have an even number.</p>
</blockquote>
<p>No, to have a valid strong induction you need an argument that establishes that the truth of <span class="math-container">$P(n)$</span> is <em>entailed</em> by <span class="math-container">$P(k)$</span> holding for <em>all</em> <span class="math-container">$k$</span> which are naturals less than <span class="math-container">$n$</span>; not just for some of them.</p>
|
2,724,744 | <p>I have a basic math question.</p>
<p>If I have the following inequality:
$$-a-b > -1$$
and I want to flip (or reverse) the sign. What is the correct way of the following? And why?</p>
<p>i) $a+b \le 1$<br>
ii) $a+b < 1$</p>
<p>Many thanks! (:</p>
| CY Aries | 268,334 | <p>$-a-b>-1$ $\implies$ $-a-b+(a+b+1)>-1+(a+b+1)$ $\implies$ $1>a+b$</p>
|
4,609,001 | <p>The following definition of pmf is on page51 from Probability and statistical inference by Robert V. Hogg, etc.</p>
<p>The pmf <span class="math-container">$f(x)$</span> of a discrete random variable X is a function that satisfies the following properties:<br />
(a)<span class="math-container">$f(x)\gt 0, x\in S;$</span><br />
(b)<span class="math-container">$\sum_{x\in S}f(x)=1;$</span><br />
(c)<span class="math-container">$P(X\in A)=\sum_{x\in A}f(x),$</span> where <span class="math-container">$A\subset S$</span>.</p>
<p>My question is about part(c). X is a random variable, and A is a subset of sample space S, how can <span class="math-container">$X\in A$</span>. Moreover, how can <span class="math-container">${x\in A}$</span>?</p>
<p>In my opinion, the LHS of part(c) just means that we want to compute the probability of a single event, and the RHS of part(c) just means that we sum up all the related possibilities that we need.</p>
<p>For example, if X: numbers of the sum of a fair six-side dice. Then P(X=3)=P({(1,2),(2,1)})=1/36+1/36=2/36, where A is a subset of S.</p>
<p>Am I right? And could someone explain more about part(c)? It's better if someone can give me an example.</p>
| Abolfazl Chaman motlagh | 938,462 | <p>for any <span class="math-container">$x \in (x_1,x_2)$</span> we can write : <span class="math-container">$x=tx_1+(1-t)x_2$</span> which <span class="math-container">$t \in (0,1)$</span>. if you solve it for <span class="math-container">$t$</span> you get :
<span class="math-container">$$
t = \frac{x_2-x}{x_2-x_1}
$$</span>
so <span class="math-container">$x$</span> become :
<span class="math-container">$$
x = \big( \frac{x_2-x}{x_2-x_1} \big)x_1 + \big( \frac{x-x_1}{x_2-x_1} \big)x_2
\quad \forall x \in (x_1,x_2) $$</span>
With this transformation, the proof of both directions amounts to writing down the given inequality:
<span class="math-container">$$
\frac{f(x)-f(x_1)}{x-x_1} \leq \frac{f(x_2)-f(x)}{x_2-x}
\\ \iff (x_2-x+x-x_1)f(x) \leq (x_2-x)f(x_1)+(x-x_1)f(x_2)
\\ \iff f(x) \leq \big( \frac{x_2-x}{x_2-x_1} \big)f(x_1) +\big( \frac{x-x_1}{x_2-x_1} \big)f(x_2)
\\ \iff f(tx_1+(1-t)x_2) \leq tf(x_1)+(1-t)f(x_2) \qquad \forall t \in (0,1) \quad \square$$</span></p>
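<p>The equivalence can be spot-checked with a concrete convex function such as $f(x)=e^x$; an illustrative Python sketch:</p>

```python
import math

f = math.exp                      # a convex function
x1, x2 = 0.0, 2.0

for t in (0.1, 0.25, 0.5, 0.9):   # interior values, so x is strictly between x1 and x2
    x = t * x1 + (1 - t) * x2
    # Chord-slope form of convexity:
    left = (f(x) - f(x1)) / (x - x1)
    right = (f(x2) - f(x)) / (x2 - x)
    assert left <= right
    # Equivalent standard form:
    assert f(x) <= t * f(x1) + (1 - t) * f(x2)
```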
|
444,448 | <p>Let $M$ be a set of prime numbers of $\mathbb{Q}$ . The limit
$$d(M)= \lim_{s\rightarrow 1^+} \frac{ \sum_{p \in M} p^{-s} }{ - \log(s-1)}$$
Where $p$ is a prime of $\mathbb{Q}$ is called <strong>Dirichlet Density</strong> of $M$. Also, the <strong>Natural density</strong> of $M$ is the limit
$$ \delta(M)= \lim_{x\rightarrow \infty} \frac{ \# \{ p \in M : p \leq x\}}{ \# \{ p \in \mathbb{Q} : p \leq x\}} $$</p>
<p>Now, let $$M= \bigcup_{k=0}^{\infty}\{ p \mbox{ prime} : 10^k \leq p < 2\cdot10^k \}$$ Show that $\delta(M)$ not exists and $d(M)=\frac{\log(2)}{\log(10)}$</p>
<p>For $\delta(M)$ i can apply the Prime Number Theorem, but if $$M(x)= \# \{ p \in M : p \leq x \}= \sum_{p \in M, p\leq x}1 = \sum_{0 \leq k \leq \frac{\log(x)}{\log(10)}} \sum_{10^k \leq p < 2\cdot10^k, p \leq x} 1$$</p>
<p>I do not know how to follow...</p>
| Kunnysan | 84,764 | <p>This is a non-trivial result. I may give you an idea of the proof, but first you should look at <a href="http://ac.els-cdn.com/0022314X84900611/1-s2.0-0022314X84900611-main.pdf?_tid=40706d58-ed8a-11e2-921a-00000aab0f26&acdnat=1373919061_d93a04eb74763c51d5c52d7dc3395665" rel="nofollow">this</a>.</p>
|
3,265,243 | <p>The number of ways of placing <span class="math-container">$n$</span> objects so that none is in its original position is given by the inclusion-exclusion number <span class="math-container">$D_n$</span>: </p>
<p><span class="math-container">$n! \left( 1-\dfrac{1}{1!}+\dfrac{1}{2!}-\cdots+(-1)^n\dfrac{1}{n!} \right)$</span></p>
<p>which can also be written as:</p>
<p><span class="math-container">$n!\sum_{i=1}^{n-1} (-1)^{i+1}/(i+1)!$</span></p>
<p>Using a different approach, I came to the following answer, which has probably already been proven:</p>
<p><span class="math-container">$(n-1)!\sum_{i=1}^{n-1} (-1)^{i+1}(n-i)/(i-1)!$</span></p>
<p>The 2 different summations give the same answer (I have tested for n=2,3,4,5,6,7). However, each term for any <span class="math-container">$i$</span> is different in each summation. How can the equality of the 2 summations be proved (if it is indeed the case for all <span class="math-container">$n$</span>)? </p>
| XbarJim | 682,244 | <p>Let <span class="math-container">$LHS(n)$</span> denote the expression <span class="math-container">$n!\sum_{i=1}^{n-1} (-1)^{i+1}/(i+1)!$</span></p>
<p>Let <span class="math-container">$RHS(n)$</span> denote the expression <span class="math-container">$(n-1)!\sum_{i=1}^{n-1} (-1)^{i+1}(n-i)/(i-1)!$</span></p>
<p>Claim <span class="math-container">$LHS(n)=RHS(n) \ \ \forall n$</span></p>
<p>E.g. for <span class="math-container">$n=4$</span> we have that <span class="math-container">$LHS=12-4+1=9$</span> and <span class="math-container">$RHS=18-12+3=9$</span></p>
<p><strong>Proof by Induction:</strong></p>
<p>Assume that <span class="math-container">$LHS(n)=RHS(n)$</span> for some given <span class="math-container">$n$</span></p>
<p>Want <span class="math-container">$LHS(n+1)=RHS(n+1)$</span></p>
<p><span class="math-container">$LHS(n+1)=\dots=(n+1)LHS(n)+(-1)^{n+1}$</span></p>
<p><span class="math-container">$RHS(n+1)=\dots$</span></p>
<p><strong>THAT IS ALL I HAVE TIME FOR RIGHT NOW, SORRY!</strong></p>
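<p>A numeric verification (my own, not part of the answer) that the two summations agree and both count derangements, using exact rational arithmetic and a brute-force count:</p>

```python
# Numeric verification (mine, not part of the answer) that the two summations
# agree and both count derangements, using exact rational arithmetic.
from fractions import Fraction
from math import factorial
from itertools import permutations

def lhs(n):
    return factorial(n) * sum(Fraction((-1) ** (i + 1), factorial(i + 1))
                              for i in range(1, n))

def rhs(n):
    return factorial(n - 1) * sum(Fraction((-1) ** (i + 1) * (n - i), factorial(i - 1))
                                  for i in range(1, n))

def brute(n):
    # count permutations of {0,...,n-1} with no fixed point
    return sum(1 for p in permutations(range(n)) if all(p[k] != k for k in range(n)))

agree = all(lhs(n) == rhs(n) == brute(n) for n in range(2, 8))
```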
|
224,019 | <p>I am trying to compute the volume of intersection of the following two regions:</p>
<pre><code>a = 0.857597;
b = 1.653926;
hexagon = Polygon[{{0, (b - a)/2, 1/2}, {(b - a)/2, 0, 1/2},
{1/2, 0, (b - 1)/(2 a)}, {1/2, (b - 1)/2, 0}, {(b - 1)/2, 1/2, 0},
{0, 1/2, (b - 1)/(2 a)}}];
octahedron = ImplicitRegion[Abs[x] + Abs[y] + a Abs[z] <= b/2, {x, y, z}];
region2 = ImplicitRegion[1 >= RegionDistance[hexagon, {x, y, z}], {x, y, z}];
</code></pre>
<p><code>NIntegrate</code> directly doesn't work:</p>
<pre><code>NIntegrate[1, {x, y, z} ∈ RegionIntersection[octahedron, region2]]
</code></pre>
<p>It results in a crash after using up the memory (32GB).</p>
<p>I tried to use <code>DiscretizeRegion</code> first:</p>
<pre><code>octd = DiscretizeRegion[octahedron, {{-1, 1}, {-1, 1}, {-1, 1}}];
regd = DiscretizeRegion[region2, {{-1, 2}, {-1, 2}, {-1, 2}}]; (* This takes 40 minutes *)
RegionIntersection[octd, regd]
</code></pre>
<p>This returns an error: “BoundaryMeshRegion: The boundary surface is not closed because the edges <<2>> only come from a single face.”</p>
<p>I also tried to discretize the regions using <code>NDSolve`FEM`ToElementMesh</code>.</p>
<pre><code>Needs["NDSolve`FEM`"];
ToElementMesh[region2, {{-1, 2}, {-1, 2}, {-1, 2}}]
</code></pre>
<p>This crashes without using significant memory. Computing finite element mesh on the first region does not crash, but intersecting it with the second region results in a crash without significant memory usage.</p>
<pre><code>octf = ToElementMesh[octahedron, {{-1, 1}, {-1, 1}, {-1, 1}}];
RegionIntersection[octf, regd]
</code></pre>
<p>I have reported the issues with <code>ToElementMesh</code> to Wolfram Support.</p>
<p>Is there any workaround?</p>
<pre><code>$Version (* 12.1.0 for Mac OS X x86 (64-bit) (March 18, 2020) *)
</code></pre>
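<p>One side observation that may explain the numerical difficulty (my own remark, verified below in Python rather than Mathematica): all six vertices of <code>hexagon</code> satisfy |x| + |y| + a|z| = b/2 exactly, so the hexagon lies in the boundary surface of the octahedron.</p>

```python
# Side observation (mine, checked in Python rather than Mathematica): every vertex
# of the hexagon satisfies |x| + |y| + a|z| = b/2, i.e. the hexagon lies in the
# boundary surface of the octahedron -- one reason the intersection is delicate.
a = 0.857597
b = 1.653926
hexagon = [
    (0.0, (b - a) / 2, 0.5), ((b - a) / 2, 0.0, 0.5),
    (0.5, 0.0, (b - 1) / (2 * a)), (0.5, (b - 1) / 2, 0.0),
    ((b - 1) / 2, 0.5, 0.0), (0.0, 0.5, (b - 1) / (2 * a)),
]

def octa(p):
    x, y, z = p
    return abs(x) + abs(y) + a * abs(z)

max_residual = max(abs(octa(v) - b / 2) for v in hexagon)
```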
| flinty | 72,682 | <p>This is not ideal, but it gives an approximate resulting region. I first generate random points on the hexagon and add a random vector on the unit sphere. I take the convex hull of the points which is acceptable because the blob must be convex. Finally I discretize the octahedron and intersect with <code>crudehexagonblob</code>:</p>
<pre><code>crudehexagonblob =
ConvexHullMesh[# + RandomPoint[Sphere[#, 1]] & /@
RandomPoint[hexagon, 40000]];
RegionIntersection[DiscretizeRegion[octahedron], crudehexagonblob]
</code></pre>
<p>Sadly convex hull is buggy and if I do 50000 or 20000 points I get an empty region, so I did 40000 and it worked. What a mess.</p>
<p>You could find a way to represent <code>region2</code> differently. I'm thinking you can put spheres at all vertices and cylinders along all edges and join it to a cylinder at the center. I think this combination of spheres and cylinders is identical to <code>region2</code>:</p>
<pre><code>RegionPlot3D[1 >= RegionDistance[hexagon, {x, y, z}], {x, -2, 2}, {y, -2, 2}, {z, -2, 2}]
hexcenter = RegionCentroid[hexagon];
hexnormal = Normalize[Cross[hexagon[[1, 1]] - hexcenter, hexagon[[1, 2]] - hexcenter]];
hexradius = Norm[hexcenter - hexagon[[1, 1]]];
cylinderhack = Cylinder[{hexcenter - hexnormal, hexcenter + hexnormal}, hexradius];
hexhack = Flatten[{
MeshPrimitives[hexagon, 1] /. Line -> Cylinder,
MeshPrimitives[hexagon, 0] /. Point -> Ball,
cylinderhack}];
Graphics3D[hexhack]
</code></pre>
<p><a href="https://i.stack.imgur.com/Wu0Vy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wu0Vy.png" alt="hexagon blob plots"></a></p>
<p>Unfortunately I had to use the same hack with <code>ConvexHullMesh</code> and random points to get a mesh out of the <code>RegionUnion</code> of these combined cylinders and spheres, because if you discretize them individually and <code>RegionUnion</code> them together it fails. Still, this mesh is pretty good:</p>
<pre><code>cvxhm = ConvexHullMesh[RandomPoint[RegionUnion[RegionBoundary /@ hexhack], 40000]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gynptm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gynptm.png" alt="approx mesh blob hexagon"></a></p>
<p>And disappointingly we can't even intersect this with the octahedron! I welcome any advice to get this to work: </p>
<pre><code>(* unfortunately this fails for me in v12.1 *)
RegionIntersection[
DiscretizeRegion@octahedron,
cvxhm
]
</code></pre>
<p>Even though it doesn't provide a satisfying answer, I hope I've provided something you or somebody else can build on.</p>
|
6,562 | <p>I want to make some button shaped graphics that would essentially be a rectangular shape with curved edges. In the example below I have used <code>Polygon</code> rather than <code>Rectangle</code> so as to take advantage of <code>VertexColors</code> and have a gradient fill. The code below illustrates the sort of thing I want in so far as the <code>Frame</code> with <code>RoundingRadius</code> shows where I want the boundaries of the <code>Graphic</code> to be cut off (for example).</p>
<pre><code>Framed[Graphics[{
Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}},
VertexColors -> {Red, Red, Blue, Blue}]
},
AspectRatio -> 0.2,
ImagePadding -> 0,
ImageMargins -> 0,
ImageSize -> 200,
PlotRangePadding -> 0],
ContentPadding -> True,
FrameMargins -> 0,
ImageMargins -> 0,
RoundingRadius -> 20]
</code></pre>
<p>I'm thinking there is probably a very straight forward way of accomplishing this. Is there some way to exclude parts of the <code>Graphic</code> that fall outside the <code>Frame</code> from displaying? Any alternative methods would be welcome.</p>
<p><strong>Edit</strong></p>
<p>I had been expecting that this was going to be possible with existing options rather than having to write functions. @Mr.Wizard provided a concise solution from existing built in functionality but I ultimately didn't want a raster solution. @Heike used <code>RegionPlot</code> like the others, but in a way in which the user, i.e. me, could implement it by simply changing a rounding radius parameter, so that makes it a more straight forward solution IMO.</p>
| Heike | 46 | <p>This answer uses <code>RegionPlot</code> to plot the rounded rectangle. In <code>roundedRect</code>, <code>{{xmin, xmax}, {ymin, ymax}}</code> is the range of the rectangle and <code>rad</code> the rounding radius. <code>roundedRect</code> accepts any option of <code>RegionPlot</code>, in particular <code>ColorFunction</code> which you can use to shade the rectangle. </p>
<pre><code>Options[roundedRect] = Options[RegionPlot];
SetOptions[roundedRect, {Frame -> False, Axes -> False, BoundaryStyle -> None}];
roundedRect[range : {{xmin_, xmax_}, {ymin_, ymax_}}, rad_,
opt : OptionsPattern[roundedRect]] := Module[{p, norm},
p = 1/Log2[Sqrt[2] + 2];
norm[pt_, pt0_] := Total[Abs[pt - pt0]^p]^(1/p) > rad;
RegionPlot[And @@ (norm[{x, y}, #] & /@ Tuples[range]),
{x, xmin, xmax}, {y, ymin, ymax}, opt,
AspectRatio -> Abs[ymax - ymin]/Abs[xmax - xmin],
Evaluate[Options[roundedRect]]]]
</code></pre>
<p><strong>Example</strong></p>
<pre><code>roundedRect[{{0, 5}, {0, 1}}, .4, ColorFunction -> (Blend[{Red, Blue}, #2] &)]
</code></pre>
<p><img src="https://i.stack.imgur.com/Z3ET7.png" alt="Mathematica graphics"></p>
<p><strong>Edit</strong> </p>
<p>@Heike I hope you do not mind me making a change to your answer. I think this is more Mathematica like by having the rounding radius as an option.</p>
<pre><code>ClearAll[roundedRect];
Options[roundedRect] = Flatten[{RoundingRadius -> 0.5, Options[RegionPlot]}];
SetOptions[roundedRect, {Frame -> False, Axes -> False, BoundaryStyle -> None}];
roundedRect[range : {{xmin_, xmax_}, {ymin_, ymax_}},
opt : OptionsPattern[roundedRect]] := Module[{p, norm, opts, rad},
rad = OptionValue[RoundingRadius];
opts = FilterRules[{opt}, Options[RegionPlot]];
p = 1/Log2[Sqrt[2] + 2];
norm[pt_, pt0_] := Total[Abs[pt - pt0]^p]^(1/p) > rad;
RegionPlot[
And @@ (norm[{x, y}, #] & /@ Tuples[range]), {x, xmin, xmax}, {y,
ymin, ymax}, Evaluate@opts,
AspectRatio -> Abs[ymax - ymin]/Abs[xmax - xmin]]]
</code></pre>
<p>example:</p>
<pre><code>roundedRect[{{0, 5}, {0, 1}}, Frame -> False, RoundingRadius -> 0.4,
ColorFunction -> (Blend[{Red, Blue}, #2] &)]
</code></pre>
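<p>A note on the exponent (my own inference, not stated in the answer): with <code>p = 1/Log2[Sqrt[2] + 2]</code>, the corner curve |x|^p + |y|^p = r^p reaches distance r along each axis, and this exact p makes its reach along the 45-degree diagonal equal r(√2 − 1), the same closest approach to the corner as a genuine circular rounding arc of radius r. A Python check of that claim:</p>

```python
import math

# Why p = 1/Log2[Sqrt[2] + 2]? My inference (not stated in the answer): the corner
# curve |x|^p + |y|^p = r^p reaches distance r along each axis, and this exact p
# makes its reach along the 45-degree diagonal equal r*(sqrt(2) - 1), the same
# closest approach to the corner as a genuine circular rounding arc of radius r.

p = 1 / math.log2(math.sqrt(2) + 2)
r = 1.0

# along the diagonal a point at distance d from the corner is (d/sqrt(2), d/sqrt(2));
# solving 2*(d/sqrt(2))**p == r**p for d:
diag_reach = math.sqrt(2) * r * 2 ** (-1 / p)

circle_reach = r * (math.sqrt(2) - 1)  # nearest point of a circular arc of radius r
gap = abs(diag_reach - circle_reach)
```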
|
3,940,447 | <p>Disclaimer: I believe this proof is wrong, but I'm asking because I can't find what's wrong with it, which means I must have some basic misunderstanding of the concepts involved.</p>
<p>First, some definitions. Recursively, we define an ordinal <span class="math-container">$\alpha$</span> to be <span class="math-container">$\eta$</span>-unboundedly elementary:</p>
<p><span class="math-container">$\alpha$</span> is <span class="math-container">$0$</span>-unboundedly elementary iff <span class="math-container">$\alpha$</span> is an ordinal<br />
<span class="math-container">$\alpha$</span> is <span class="math-container">$\eta$</span>-unboundedly elementary iff for all <span class="math-container">$\beta<\eta$</span>, for all <span class="math-container">$\gamma$</span>, there exists <span class="math-container">$\delta>\gamma$</span>, such that <span class="math-container">$V_\alpha\prec V_\delta$</span> and <span class="math-container">$\delta$</span> is <span class="math-container">$\beta$</span>-unboundedly elementary. That is, <span class="math-container">$V_\alpha$</span> is elementary in unboundedly many <span class="math-container">$V_\delta$</span> for <span class="math-container">$\delta$</span>'s lower in this hierarchy.</p>
<p>Some examples: <span class="math-container">$1$</span>-unboundedly elementary ordinals are those <span class="math-container">$\alpha$</span>'s such that <span class="math-container">$V_\alpha\prec V_\beta$</span> for unboundedly many ordinals <span class="math-container">$\beta$</span>. In this <a href="http://jdh.hamkins.org/otherwordly-cardinals/" rel="noreferrer">blogpost</a> they are called totally otherworldly cardinals. <span class="math-container">$2$</span>-unboundedly elementary ordinals are those <span class="math-container">$\alpha$</span>'s such that <span class="math-container">$V_\alpha$</span> is elementary in unboundedly many totally otherworldly cardinals.</p>
<p>An interesting feature of these unboundedly elementaries is that they exhibit increasing degree of correctness (where <span class="math-container">$\kappa$</span> is <span class="math-container">$\Sigma_n$</span>-correct iff <span class="math-container">$V_\kappa\prec_{\Sigma_n}V$</span>). Let us first see that <span class="math-container">$1$</span>-unboundedly elementary ordinals are <span class="math-container">$\Sigma_3$</span>-correct:<br />
That they are <span class="math-container">$\Sigma_2$</span>-correct is proven in the blogpost linked above. To see that they are in fact <span class="math-container">$\Sigma_3$</span>-correct, suppose <span class="math-container">$\exists x~\varphi(x)$</span> is a true <span class="math-container">$\Sigma_3$</span> statement with parameters in <span class="math-container">$V_\alpha$</span>. Fix a witness <span class="math-container">$a$</span> such that <span class="math-container">$\varphi(a)$</span> is true and fix <span class="math-container">$\beta$</span> large enough with <span class="math-container">$a\in V_\beta$</span> and <span class="math-container">$V_\alpha\prec V_\beta$</span>. Then since <span class="math-container">$\beta$</span> must be a <span class="math-container">$\beth$</span>-fixed point (this follows from the fact that, for any ordinals <span class="math-container">$\alpha,\beta$</span>, <span class="math-container">$V_\alpha\prec V_\beta$</span> implies <span class="math-container">$V_\beta\vDash \mathsf{ZFC}$</span>), <span class="math-container">$V_\beta\prec_{\Sigma_1}V$</span>. Then the truth of the <span class="math-container">$\Pi_2$</span> statement <span class="math-container">$\varphi(a)$</span> is preserved downward to <span class="math-container">$V_\beta$</span>. This means <span class="math-container">$V_\beta\vDash \exists x~\varphi(x)$</span>, and by elementarity, so does <span class="math-container">$V_\alpha$</span>. This shows that <span class="math-container">$\alpha$</span> is <span class="math-container">$\Sigma_3$</span>-correct.</p>
<p>Essentially the same trick can give us that <span class="math-container">$2$</span>-unboundedly elementary ordinals are <span class="math-container">$\Sigma_4$</span> correct: this is because there are unboundedly many <span class="math-container">$\Sigma_3$</span>-correct ordinals that they are elementary in, and if <span class="math-container">$\exists x~\varphi(x)$</span> is a true <span class="math-container">$\Sigma_4$</span> statement, then we can find some target <span class="math-container">$V_\beta$</span> large enough to include a witness, then proceed the same way as in the last paragraph.</p>
<p>Now here's the problem: the recursive definition of <span class="math-container">$\eta$</span>-unboundedly elementary ordinals is <span class="math-container">$\Pi_3$</span>. This means ``there exists a <span class="math-container">$2$</span>-unboundedly elementary ordinal" a <span class="math-container">$\Sigma_4$</span> statement. So the least <span class="math-container">$2$</span>-unboundedly elementary ordinal must be below the least <span class="math-container">$\Sigma_4$</span>-correct cardinal. But this contradicts the fact we've just shown, that <span class="math-container">$2$</span>-unboundedly elementary ordinals are <span class="math-container">$\Sigma_4$</span>-correct!</p>
<p>Nevertheless, assuming there is an inaccessible cardinal, then the usual Löwenheim–Skolem argument shows the consistency of essentially any degree of unboundedly elementary ordinals, and in particular the consistency of <span class="math-container">$2$</span>-unboundedly elementary.</p>
<p>Taking stock, we have: <span class="math-container">$\mathsf{ZFC}\vdash ``\text{there exists an inaccessible cardinal}"\rightarrow$</span> Con(<span class="math-container">$2$</span>-unboundedly elementary), and also <span class="math-container">$\mathsf{ZFC}\vdash``2\text{-unboundedly elementary ordinals are inconsistent}"$</span>. Putting things together, this shows that <span class="math-container">$\mathsf{ZFC}$</span> refutes the existence of inaccessible cardinals. What went wrong?</p>
<p>Some possibilities I can think of: 1) the proof of their increasing degree of correctness is wrong, 2) the complexity calculation of the recursive definition is wrong (or maybe it is not a legitimate recursive definition after all, i.e. there are some metamathematical issues), 3) inaccessible cardinals do not show that these unboundedly elementary ordinals are consistent.</p>
| lulu | 252,071 | <p>The distribution is correct but the expected value is not.</p>
<p>I think it is simpler to approach this by noting that you expect it to take <span class="math-container">$\frac 1p$</span> tries for a girl and <span class="math-container">$\frac 1{1-p}$</span> for a boy so the expected number of times to get one of each is <span class="math-container">$$E=p\times \left(1+\frac 1{1-p}\right)+(1-p)\times \left(1+\frac 1p\right)=\boxed {\frac {p^2-p+1}{p\,(1-p)}}$$</span></p>
<p>We remark that, for <span class="math-container">$p=\frac 12$</span>, this yields <span class="math-container">$E=\frac {3/4}{1/4}=3$</span>, which is easily seen to be correct (after the first child, you expect it to take two tries to get the opposite gender). The formula provided by the OP gives <span class="math-container">$2$</span>, so presumably there was an error in summing the infinite series. Indeed, the infinite series is a bit messy, which is a good reason for looking for alternate methods of solution.</p>
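<p>A deterministic cross-check of the closed form (my own sketch): for n ≥ 2 the number of children N satisfies P(N = n) = p^(n−1)(1−p) + (1−p)^(n−1)p (the first n−1 children of one gender, the n-th of the other), and summing n·P(N = n) reproduces the boxed expression:</p>

```python
# Deterministic check (mine) of the closed form: for n >= 2,
#   P(N = n) = p^(n-1) * (1-p) + (1-p)^(n-1) * p
# (first n-1 children of one gender, the n-th of the other), and
# E[N] = sum over n of n * P(N = n) should equal (p^2 - p + 1) / (p(1-p)).

def expected_children(p, n_max=2000):
    return sum(n * (p ** (n - 1) * (1 - p) + (1 - p) ** (n - 1) * p)
               for n in range(2, n_max))

def closed_form(p):
    return (p * p - p + 1) / (p * (1 - p))

worst = max(abs(expected_children(p) - closed_form(p)) for p in (0.2, 0.5, 0.7))
```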
|
2,492,206 | <p>Let $(X, \mathcal{X})$ and $(Y,\mathcal{Y})$ be measurable spaces, and $f: X \to Y$ a measurable function. Let $A \in \mathcal{X}$ be a measurable subset of $X$.</p>
<p>Is it guaranteed that $f_{\mid A}: A \to Y$ is measurable?</p>
<p>The measurable space on $A$ is $(X, \mathcal{X})$ restricted to $A$. Formally, the $\sigma$-algebra on $A$ is $\mathcal{A}=\{ S \cap A \mid S \in \mathcal{X} \}$.</p>
<hr>
<p><strong>Proof suggestion:</strong></p>
<p>My guess is that it is, because for a measurable $B \in \mathcal{Y}$, $f_{\mid A}^{-1}(B)=f^{-1}(B) \cap A$, and $f^{-1}(B)$ and $A$ are both measurable.</p>
<hr>
<p><strong>On different notions of measurability</strong>: Does anything change if we use one of the following definitions for measurable functions?</p>
<ul>
<li>The preimage of every measurable set is measurable (this is the standard definition in my opinion)</li>
<li>The preimage of every open set is measurable</li>
<li>The preimage of every open set is open (this is a sub-case of the last case)</li>
</ul>
| Lucio | 495,116 | <p>The proof, as pointed out, is correct.</p>
<p>Regarding the notions of measurability: no, they are not the same. First of all, the last notion (the preimage of every open set is open) is the notion of continuity, which is in general much stronger than measurability. Secondly, observe that no topology has been defined on either $X$ or $Y$, so you do not even know that open sets are measurable; to compare the notions you would need to assume it. However, if you take the sigma-algebra on $Y$ to be the Borel sigma-algebra, i.e. the sigma-algebra generated by the open sets of $Y$, then the first two notions are equivalent.</p>
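<p>A finite toy example of the proof suggestion (my own construction; the specific sets are arbitrary): take the sigma-algebra on X = {1,2,3,4} generated by the partition {{1,2},{3,4}}, a measurable f into Y = {'a','b'}, and check mechanically that every preimage under the restriction lands in the trace sigma-algebra on A:</p>

```python
# Finite toy example (my own construction) of the proof suggestion: X = {1,2,3,4}
# with the sigma-algebra generated by the partition {{1,2},{3,4}}, Y = {'a','b'}
# with its power set; f is measurable and A = {1,2} is measurable, and every
# preimage under the restriction f|_A lands in the trace sigma-algebra on A.

X_alg = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 2, 3, 4})]
Y_alg = [frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})]

f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}   # preimages of Y_alg sets lie in X_alg
A = frozenset({1, 2})

trace_alg = {frozenset(S & A) for S in X_alg}   # {S & A : S in X_alg}

def restricted_preimage(B):
    return frozenset(x for x in A if f[x] in B)

restriction_measurable = all(restricted_preimage(B) in trace_alg for B in Y_alg)
```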
|
2,391,624 | <p>This question pertains to Mosteller's classic book <em>Fifty Challenging Problems in Probability</em>. Specifically, this in regards to an algebraic operation Mosteller performs in the solution to the first question, entitled "The Sock Drawer."</p>
<p>Mosteller writes:</p>
<blockquote>
<p>Then we require the probability that both are red to be $\frac{1}{2}$, or $$\frac{r}{r+b}*\frac{r-1}{r+b-1}=\frac{1}{2}\text{.}$$
…</p>
<p>Notice that $$\frac{r}{r+b}\gt\frac{r-1}{r+b-1}\text{, for $b > 0$.}$$ Therefore we can create the inequalities $$\left(\frac{r}{r+b}\right)^2 \gt \frac 12 \gt \left(\frac{r-1}{r+b-1}\right)^2$$</p>
</blockquote>
<p>Despite much staring, and not knowing what to Google, I am stumped! In that last step, how does he do that‽</p>
<p>Many thanks,<br>
James</p>
| Angina Seng | 436,618 | <p>You have two positive numbers, which I'll call $x$ and $y$, with
$xy=\frac12$ and $x>y$.
Then
$$x^2>xy>y^2$$
which is the same as
$$x^2>\frac12>y^2.$$</p>
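<p>As a check (my own sketch; r = 3, b = 1 and r = 15, b = 6 are the classic sock-drawer solutions Mosteller discusses), a small search confirms the sandwich x² > 1/2 > y² for every solution:</p>

```python
from fractions import Fraction

# Small search (my own sketch) for (r, b) with r(r-1)/((r+b)(r+b-1)) = 1/2; the
# classic sock-drawer solutions (3,1) and (15,6) should show up, and every
# solution should satisfy Mosteller's sandwich x^2 > 1/2 > y^2.

solutions = []
for r in range(1, 100):
    for b in range(1, 100):
        x = Fraction(r, r + b)          # P(first sock red)
        y = Fraction(r - 1, r + b - 1)  # P(second sock red | first red)
        if x * y == Fraction(1, 2):
            solutions.append((r, b))
            assert x * x > Fraction(1, 2) > y * y
```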
|
4,030,296 | <p>I'm reading Carothers' Real Analysis, and I'm currently looking at <strong>homeomorphisms</strong>. The author says "two intervals that look different, are different" - i.e. they are not homeomorphic. The proof is done for the case <span class="math-container">$(0,1]$</span> and <span class="math-container">$(a,b)$</span>, where the first interval is semi-open so we expect that the two are not homeomorphic.</p>
<p>I have some difficulty following the book's argument which I paraphrase here (for convenience) with inline questions.</p>
<blockquote>
<p>Proof by contradiction. Suppose <span class="math-container">$(0,1]$</span> and <span class="math-container">$(a,b)$</span> are homeomorphic. Then by removing <span class="math-container">$1$</span> from <span class="math-container">$(0,1]$</span> we get <span class="math-container">$(0,1)$</span>, and by removing its image <span class="math-container">$f(1) = c$</span> from <span class="math-container">$(a,b)$</span> we have that <span class="math-container">$(0,1)$</span> is homeomorphic to <span class="math-container">$(a,c)\cup (c,b)$</span>.</p>
</blockquote>
<p>Wait, why is that? I understand that we are deleting an element each from the domain and codomain, so the bijection remains a bijection. What about the continuity of <span class="math-container">$f$</span> and <span class="math-container">$f^{-1}$</span> though? We literally threw away an element, how do we know that continuity still holds?</p>
<blockquote>
<p>But <span class="math-container">$(0,1)$</span> is homeomorphic to <span class="math-container">$\Bbb R$</span>, so <span class="math-container">$\Bbb R$</span> would have to be homeomorphic to <span class="math-container">$(a,c)\cup (c,b)$</span> too. So <span class="math-container">$\Bbb R$</span> can be written as the disjoint union of two non-trivial open sets, which is impossible.</p>
</blockquote>
<p>Two questions:</p>
<ol>
<li>Why is <span class="math-container">$(0,1)$</span> homeomorphic to <span class="math-container">$\Bbb R$</span>? We really need to produce a homeomorphism, that's all I'm asking.</li>
<li>How to prove that <span class="math-container">$\Bbb R$</span> cannot be written as the disjoint union of two non-trivial open sets? The idea that comes to my mind is - we already know that any open subset of <span class="math-container">$\Bbb R$</span> can be written as the disjoint union of unique open maximal intervals. Since <span class="math-container">$\Bbb R$</span> is open in <span class="math-container">$\Bbb R$</span>, we can obviously do the same for it as well, and clearly, our choice of intervals would not be maximal in the event that we could write <span class="math-container">$\Bbb R$</span> as the union of two disjoint open intervals. Does this make sense?</li>
</ol>
<p>Thank you!</p>
| DanielWainfleet | 254,665 | <p>In <span class="math-container">$\Bbb R^2$</span> let <span class="math-container">$C$</span> be the circle centered at <span class="math-container">$(0,2)$</span> with radius <span class="math-container">$1.$</span> Let <span class="math-container">$D=\{(x,y)\in C: y<2\}$</span>...Now <span class="math-container">$D$</span> is a semi-circle without its end-points, and <span class="math-container">$D$</span> is homeomorphic to <span class="math-container">$(0,1).$</span></p>
<p>For <span class="math-container">$p\in D$</span> let <span class="math-container">$l_p$</span> be the line thru <span class="math-container">$p$</span> and <span class="math-container">$(0,2)$</span> and let <span class="math-container">$l_p$</span> intersect the <span class="math-container">$x$</span>-axis at <span class="math-container">$(f(p),0)$</span>... (Draw a diagram of this.)</p>
<p><span class="math-container">$f: D\to \Bbb R$</span> is a homeomorphism. A composition of homeomorphisms is a homeomorphism, so if <span class="math-container">$g: (0,1)\to D$</span> is a homeomorphism, then so is <span class="math-container">$(fg):(0,1)\to\Bbb R.$</span></p>
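<p>For question 1 of the post, an explicit homeomorphism between (0,1) and ℝ can also be written down directly (a standard choice of mine, not the one constructed in this answer): h(x) = tan(π(x − 1/2)), with continuous inverse h⁻¹(y) = 1/2 + arctan(y)/π. A numeric sketch:</p>

```python
import math

# An explicit homeomorphism (0,1) -> R (a standard choice, not the projection
# used in this answer): h(x) = tan(pi*(x - 1/2)); its continuous inverse is
# h_inv(y) = 1/2 + atan(y)/pi.

def h(x):
    return math.tan(math.pi * (x - 0.5))

def h_inv(y):
    return 0.5 + math.atan(y) / math.pi

xs = [0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]
round_trip = max(abs(h_inv(h(x)) - x) for x in xs)
monotone = all(h(xs[i]) < h(xs[i + 1]) for i in range(len(xs) - 1))
```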
|
1,266,210 | <p>Hello everybody, my query is regarding the number of positive integral solutions.</p>
<blockquote>
<p>In the sport of cricket, find the number of ways in which a batsman can score $14$ runs in $6$ balls not scoring more than $4$ runs in any ball.</p>
</blockquote>
| Sachin Gupta | 490,560 | <p>My solution is based on counting non-negative integral solutions.</p>
<p>$a+b+c+d+e+f=14$ provided the condition $0 \leq a,b,c,d,e,f \leq 4$</p>
<p>let $a=4-a_1,\ b=4-a_2,\ \dots,\ f=4-a_6$ and substitute into the given equation</p>
<p>we get,</p>
<p>$(4-a_1)+(4-a_2)+(4-a_3)+(4-a_4)+(4-a_5)+(4-a_6)=14$</p>
<p>$24-(a_1+a_2+a_3+a_4+a_5+a_6)=14$</p>
<p>$a_1+a_2+a_3+a_4+a_5+a_6=10$ (now $a \leq 4$ gives $4-a_1 \leq 4 \Rightarrow a_1 \geq 0$, and $a \geq 0$ gives $a_1 \leq 4$, so the upper bound does not disappear)</p>
<p>so,</p>
<p>$a_1+a_2+a_3+a_4+a_5+a_6=10$ provided $0 \leq a_1,a_2,a_3,a_4,a_5,a_6 \leq 4$</p>
<p>Without the upper bound, the number of non-negative integral solutions of a linear equation is ${{n+r-1}\choose{r-1}} = {{10+6-1}\choose{6-1}} = {{15}\choose{5}} = 3003$. By inclusion-exclusion we must subtract the solutions with some $a_i \geq 5$ (put $a_i'=a_i-5$, leaving a sum of $5$) and add back those with two such variables (sum $0$):</p>
<p>${{15}\choose{5}} - {{6}\choose{1}}{{10}\choose{5}} + {{6}\choose{2}}{{5}\choose{5}} = 3003 - 1512 + 15 = 1506$</p>
<p>so total no of ways $=1506$</p>
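<p>A brute-force check (my own) of the count; note that the substitution also forces $a_i \leq 4$ (since $a \geq 0$), so the upper bound must be handled by inclusion-exclusion:</p>

```python
from itertools import product
from math import comb

# Brute force (my own check): every sequence of 6 balls scoring 0..4 runs each,
# counting those that total 14, compared against inclusion-exclusion on the
# substituted equation a_1 + ... + a_6 = 10 with 0 <= a_i <= 4.

brute = sum(1 for balls in product(range(5), repeat=6) if sum(balls) == 14)

via_ie = comb(15, 5) - comb(6, 1) * comb(10, 5) + comb(6, 2) * comb(5, 5)
```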
|
3,988,084 | <p>I have 3 points:</p>
<p><span class="math-container">$$A = (0,4) \\
B=(-5,0) \\
C=(5,0)$$</span></p>
<p>I need to find a polynomial that goes through B and C, and is tangent to <span class="math-container">$f(x) = (2/3)x+4$</span> at A.</p>
<p>I know that tangent means it must be equal to the derivative of f(x) at that point.</p>
<p>This is probably wrong, but I did the interpolation using points B, C and <span class="math-container">$(0,2/3)$</span>. I got</p>
<p><a href="https://i.stack.imgur.com/fdHvm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fdHvm.png" alt="enter image description here" /></a></p>
<p>Help?</p>
| Hagen von Eitzen | 39,174 | <p>You have four conditions
<span class="math-container">$$\begin{align}f(0)&=4\\f(-5)&=0\\f(5)&=0\\f'(0)&=\frac23\end{align} $$</span>
and so we should be looking for four unknowns, i.e., <span class="math-container">$f$</span> is of degree <span class="math-container">$3$</span>, or
<span class="math-container">$$ f(x)=ax^3+bx^2+cx+d$$</span>
and the four conditions translate into linear equations in the unknown coefficients:
<span class="math-container">$$\begin{align}d&=4\\-125a+25b-5c+d&=0\\125a+25b+5c+d&=0\\c&=\frac23\end{align} $$</span>
As <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are available immediately, you are essentially left with a simple system of two very user-friendly linear equations in <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
<p>Your interpolation considers only the first three givens.</p>
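<p>Solving the system exactly (my own verification of the method, using rational arithmetic):</p>

```python
from fractions import Fraction as F

# Exact solve (my verification of the method): f(x) = a x^3 + b x^2 + c x + d with
# f(0)=4, f(-5)=0, f(5)=0, f'(0)=2/3. Adding/subtracting the two interpolation
# equations -125a+25b-5c+d=0 and 125a+25b+5c+d=0 isolates b and a.

d = F(4)
c = F(2, 3)
b = -d / 25    # sum of the equations:  50b + 2d = 0
a = -c / 25    # difference:            250a + 10c = 0

def f(x):
    return a * x ** 3 + b * x ** 2 + c * x + d

def fprime(x):
    return 3 * a * x ** 2 + 2 * b * x + c
```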
|
1,679,615 | <p>From what I have read, a transitive relation is one where, if xRy and yRz are both true, then xRz has to be true. </p>
<p>I'm doing some practice problems and I'm a little confused with identifying a transitive relation. </p>
<p>My first example is an "equivalence relation":
$S=\{1,2,3\}$ and $R = \{(1,1),(1,3),(2,2),(2,3),(3,1),(3,2),(3,3)\}$.
My book's solutions say that this relation is
Reflexive and Symmetric</p>
<p>My second example is a "partial order":
$S=\{1,2,3\}$ and $R =\{(1,1),(2,3),(1,3)\}$.
My book's solutions say it is
Antisymmetric and Transitive</p>
<p>I got confused about why the partial order (second example) is transitive. So what I did is apply $1R1$ and $1R3$, hence $1R3$ ($xRy$ and $yRz$, so $xRz$). </p>
<p>I tried to apply this to my first example (equivalence relation). What I did is $1R1$ and $1R3$ so $1R3$ ($xRy$ and $yRz$, so $xRz$).</p>
<p>Can someone explain what I'm missing or doing wrong? What can I do to identify a transitive relation? As you can see, both practice examples contain the same instance $1R1$ and $1R3$ so $1R3$ ($xRy$ and $yRz$, so $xRz$), yet one relation is transitive and the other is not.</p>
| Ove Ahlman | 222,450 | <p>$xRy$ and $yRz$ should imply $xRz$ for all choices of $x,y,z$.</p>
<p>In your case you have just noticed that $1R3$ holds in both examples. This is, however, not sufficient for transitivity; you have to check all possible cases of $x,y$ and $z$ in order to show that a relation is transitive. </p>
<p>In the case of $R =\{(1,1),(2,3),(1,3)\}$, the only triples where both $xRy$ and $yRz$ hold are $x=1$, $y=1$, $z=3$ and $x=1$, $y=1$, $z=1$. If we choose $x,y$ or $z$ in any other way then either $xRy$ or $yRz$ fails. Now it is straightforward to check that if $x=1$, $y=1$ and $z=3$ then $xRy$, $yRz$ and $xRz$ all hold, and likewise if $x=1$, $y=1$ and $z=1$.</p>
<p>Now for the other relation $R = \{(1,1),(1,3),(2,2),(2,3),(3,1),(3,2),(3,3)\}$, choosing $x=1,y=1,z=3$ does not pose a problem, as $1R1$, $1R3$ and $1R3$ hold. However, if we choose $x=2,y=3,z=1$ then we see that $2R3$ and $3R1$ both hold, yet $2R1$, which would have to hold for the relation to be transitive, does not (since $(2,1)\notin R$). Thus this gives one example where the implication "$xRy$ and $yRz$ imply $xRz$" fails, and so the relation is not transitive.</p>
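<p>The "check all triples" procedure is easy to mechanize; a sketch (my own) testing both examples from the question:</p>

```python
from itertools import product

# Mechanized "check all triples" (my own sketch) for the two relations in the
# question: transitivity requires xRy and yRz to imply xRz for EVERY x, y, z in S.

def is_transitive(R, S):
    return all((x, z) in R
               for x, y, z in product(S, repeat=3)
               if (x, y) in R and (y, z) in R)

S = {1, 2, 3}
R1 = {(1, 1), (1, 3), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}  # "equivalence" example
R2 = {(1, 1), (2, 3), (1, 3)}                                   # "partial order" example
```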
|