2,259,840
<blockquote> <p>Points $P$, $Q$, and $R$ lie on the same line. Three semi-circles with the diameters $PQ$, $QR$, and $PR$ are drawn on the same side of the line segment $PR$. (That is, suppose we have an <a href="https://en.wikipedia.org/wiki/Arbelos" rel="nofollow noreferrer">arbelos</a>.) The centers of the semi-circles are $A$, $B$, and $O$, respectively. A circle with center $C$ touches all three semi-circles. Show that the radius of this circle is $$c = \frac{ab(a+b)}{a^2+ab+b^2}$$ where $a :=|AQ|$ and $b :=|BQ|$ are the radii of the smaller two semi-circles.</p> </blockquote> <p>I know that since this is a trigonometry question, I have to construct a triangle somewhere. However, I am unsure as to whether I should construct the triangle between points ACQ or points ACB. </p>
CY Aries
268,334
<p>Hint: Draw the figure and you will have $AC=a+c$, $BC=b+c$, $OA=b$, $OB=a$ and $OC=a+b-c$. Apply the cosine formula to find $\cos\angle BAC$ in</p> <p>(i) $\triangle ABC$</p> <p>(ii) $\triangle OAC$</p>
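A quick numerical check of the hint (my addition, not part of the answer): at the claimed radius $c$, the law-of-cosines expressions for $\cos\angle BAC$ computed in $\triangle ABC$ and in $\triangle OAC$ agree, for several choices of $a$ and $b$.

```python
def check(a, b):
    # Claimed radius of the circle inscribed in the arbelos
    c = a * b * (a + b) / (a * a + a * b + b * b)
    AB, AC, BC = a + b, a + c, b + c   # distances from the hint
    OA, OC = b, a + b - c
    # cos(angle BAC) via the law of cosines in triangle ABC ...
    cos1 = (AB**2 + AC**2 - BC**2) / (2 * AB * AC)
    # ... and in triangle OAC (O lies on segment AB, so it is the same angle)
    cos2 = (OA**2 + AC**2 - OC**2) / (2 * OA * AC)
    return abs(cos1 - cos2)

for a, b in [(1, 1), (2, 3), (0.5, 4)]:
    assert check(a, b) < 1e-12
```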
4,447,522
<p>Show that <span class="math-container">$$\cot\left(\dfrac{\pi}{4}+\beta\right)+\dfrac{1+\cot\beta}{1-\cot\beta}=-2\tan2\beta$$</span> I'm supposed to solve this problem only with sum and difference formulas (identities).</p> <p>So the LHS is <span class="math-container">$$\dfrac{\cot\dfrac{\pi}{4}\cot\beta-1}{\cot\dfrac{\pi}{4}+\cot\beta}+\dfrac{1+\cot\beta}{1-\cot\beta}=\dfrac{\cot\beta-1}{1+\cot\beta}+\dfrac{1+\cot\beta}{1-\cot\beta}=\dfrac{4\cot\beta}{1-\cot^2\beta}$$</span> I also tried to work with <span class="math-container">$\sin\beta$</span> and <span class="math-container">$\cos\beta$</span> and arrived at <span class="math-container">$$\dfrac{4\sin\beta\cos\beta}{\sin^2\beta-\cos^2\beta}$$</span> I don't see how to get <span class="math-container">$-2\tan2\beta$</span> from here (even with other identities).</p>
Anamaria
857,136
<p><span class="math-container">$$\frac{4 \sin \beta \cos \beta}{\sin ^{2} \beta-\cos ^{2} \beta} = \frac{2 \sin2\beta}{-(\cos ^{2} \beta-\sin ^{2} \beta)}=-2\frac{\sin 2\beta}{\cos 2\beta}=-2 \tan(2\beta)$$</span></p>
1,341,440
<p>I came across a claim in a paper on branching processes which says that the following is an <em>immediate consequence</em> of the B-C lemmas:</p> <blockquote> <p>Let $X, X_1, X_2, \ldots$ be nonnegative iid random variables. Then $\limsup_{n \to \infty} X_n/n = 0$ if $EX&lt;\infty$, and $\limsup_{n \to \infty} X_n/n = \infty$ if $EX=\infty$.</p> </blockquote> <p>So to apply the BC lemmas to these, I want to essentially show that $$(1) \; \textrm{If } EX&lt;\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \epsilon\}) = 0 \quad \forall \epsilon&gt;0$$ $$(2) \; \textrm{If } EX=\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \delta\}) = 1 \quad \forall \delta&gt;0$$</p> <p>But I keep getting stuck. For example if I want to apply the first BC lemma to (1), then using Markov's inequality only gives $P(X_n &gt; n\epsilon) &lt; EX/n\epsilon$, which isn't summable. Am I missing something right under my nose?</p>
Nicky Hekster
9,605
<p>Put $N=\langle x \rangle$. Then $|N|=4$ and the index $[G:N]=2$, so $G=N \cup gN$, with $g \notin N$. $N$ is normal, so $g^{-1}x^2g \in N$. But this element, being conjugate to $x^2$, has order equal to that of $x^2$, which is $2$. Since $x^2$ is the unique element of order $2$ in $N$, it follows that $g^{-1}x^2g=x^2$. So $x^2$ commutes with $g$ and of course with any power of $x$. Hence $x^2 \in Z(G)$.</p>
627,871
<p>Let $\mathbf{A}$ be an algebra (in the sense of universal algebra) of some signature $\Sigma$. By <em>quasi-identity</em> I mean a formula of the form</p> <p>$$(\forall x_1) (\forall x_2) \dots (\forall x_n) \left(\left[\bigwedge_{i=1}^{k}t_i(x_1, \dots, x_n)=s_i(x_1, \dots, x_n)\right]\rightarrow t(x_1, \dots, x_n)=s(x_1, \dots, x_n) \right) \;, $$</p> <p>where $t_i(x_1, \dots, x_n), s_i(x_1, \dots, x_n), t(x_1, \dots, x_n), s(x_1, \dots, x_n)$ are terms (using the algebra operations) with all their variables among $x_1, \dots, x_n$.</p> <p>Since the class of all algebras (of the considered signature) satisfying some quasi-identity is (allegedly) not a variety in general, and such a class is clearly closed under taking subalgebras and products, it follows that it is not closed under taking quotients in general.</p> <p>So the question is:</p> <p><strong>Is there some (possibly elementary) example of an algebra satisfying some quasi-identity and its quotient where the quasi-identity does not hold?</strong></p> <p>My only idea was to show this for the cancellation law in some monoid, but I do not see any example (mainly because I do not see how the quotients look).</p> <p>Thanks in advance for any help.</p>
J.-E. Pin
89,374
<p>Your idea was correct. Take the cancellation law in a commutative monoid. It is satisfied by $\mathbb N$ but not by its quotient $(\{0, 1\}, +)$ (obtained by identifying all positive numbers to $1$). In this latter monoid, you have $0 + 1 = 1 = 1 + 1$, but $0 \not= 1$.</p>
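A tiny runnable illustration of this quotient (my addition; the quotient operation is modeled as `min(x + y, 1)`, i.e. all positive integers identified with $1$):

```python
# Addition on {0, 1} after identifying all positive integers with 1
add = lambda x, y: min(x + y, 1)

# Cancellation fails in the quotient: 0 + 1 == 1 + 1, yet 0 != 1
assert add(0, 1) == add(1, 1) == 1
# In N itself, a + c == b + c of course implies a == b
```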
1,390,976
<p>Similar to <a href="https://math.stackexchange.com/questions/54763/what-do-algebra-and-calculus-mean">What do Algebra and Calculus mean?</a>, what is the difference between a logic and a calculus?</p> <p>I am learning about the different kinds of logics, and often when I look them up in a different resource, some people call it a logic, others call it a calculus (<em><a href="https://en.wikipedia.org/wiki/Propositional_calculus" rel="noreferrer">propositional calculus</a></em> and <em>propositional logic</em>). Or some calculus is defined as a logical system, like the <a href="https://en.wikipedia.org/wiki/Situation_calculus" rel="noreferrer">situation calculus</a>:</p> <blockquote> <p>The <strong>situation calculus</strong> is a <em>logic formalism</em> designed for representing and reasoning about dynamical domains.</p> </blockquote> <p>When do you call something a calculus vs. a logic?</p> <p>It seems that the definitions of "a logic" and "a calculus" are often circular. A logic is a calculus, and a calculus is a logic. Or a calculus is rules for calculating, while a logic is rules for inference. But in this sense, they're both systems of rules, so maybe they are both just generally "formal systems", and when focusing on inference it's a "logic", and when focusing on calculation it's a "calculus"?</p>
starseed_trooper
515,191
<p>I would elaborate on the answer above with respect to the etymology of logic. Logic has been closely associated with philosophy since the days of Aristotle. Many philosophers are well-versed in logic, but not as something that you represent symbolically. This is what is known as informal logic and involves such things as logical fallacies and assessing the validity of an argument.</p> <p>Then, at the very foundational level, we have the work of logicians like Frege, Russell, and Gödel, who saw logic as the foundation from which to derive arithmetic and thence, algebra and calculus. In this view, the difference between the terms is not only historical, but hierarchical as well.</p>
276,329
<p>I have a problem from Gelfand's "Algebra" textbook that I've been unable to solve; here it is:</p> <p><strong>Problem 268.</strong> </p> <p>What is the possible number of solutions of the equation $$ax^6+bx^3+c=0\;?$$</p> <p>Thanks in advance.</p>
Belgi
21,335
<p><strong>Hint:</strong> Set $t=x^3$ and solve for $t$</p>
276,329
<p>I have a problem from Gelfand's "Algebra" textbook that I've been unable to solve; here it is:</p> <p><strong>Problem 268.</strong> </p> <p>What is the possible number of solutions of the equation $$ax^6+bx^3+c=0\;?$$</p> <p>Thanks in advance.</p>
Elias Costa
19,266
<p>Applying the <a href="http://en.wikipedia.org/wiki/Quadratic_formula#Quadratic_formula" rel="nofollow">quadratic formula</a> to $$ a\cdot (x^3)^2+b\cdot(x^3)+c=0 $$ shows that the possible roots satisfy<br> $$ x^3 =\left[\frac{-b+\sqrt{b^2-4ac}}{2a}\right] \quad \mbox{ or } \quad x^{3} =\left[\frac{-b-\sqrt{b^2-4ac}}{2a}\right] $$ I believe that the greatest difficulty is to extract all the cube roots of these expressions; for each cube root, use <a href="http://en.wikipedia.org/wiki/DeMoivre_formula" rel="nofollow">de Moivre's formula</a>. Extracting the cube roots of $\left[\frac{-b+\sqrt{b^2-4ac}}{2a}\right]$ and $\left[\frac{-b-\sqrt{b^2-4ac}}{2a}\right]$ by <a href="http://en.wikipedia.org/wiki/DeMoivre_formula" rel="nofollow">de Moivre's formula</a>, we find that the equation has six roots $$ x_{+\,i} =\sqrt[3\;]{\frac{-b+\sqrt{b^2-4ac}}{2a}}\cdot \omega^i \quad \mbox{ and } \quad x_{-\,i} =\sqrt[3\;]{\frac{-b-\sqrt{b^2-4ac}}{2a}}\cdot \omega^i $$ where $i=0,1,2$ and $\omega$ is a complex cube root of unity, for example $$ \omega=-\frac{1}{2}+i\cdot\frac{\sqrt{3}}{2} $$</p>
2,098,395
<p>Evaluate the following:</p> <p>$$\sum_{r=0}^{50} (r+1) ^{1000-r}C_{50-r}$$</p> <p>Using $^{n}C_{r}=^{n}C_{n-r}$ we get $\sum_{r=0}^{50} (r+1) ^{1000-r}C_{950}$,</p> <p>but I am not sure how to evaluate $\sum_{r=0}^{50} r \cdot \hspace{0.5 mm} ^{1000-r}C_{950}$</p>
DXT
372,201
<p>$\displaystyle \sum^{50}_{r=0}r\cdot \binom{1000-r}{950}=0\cdot \binom{1000}{950}+1\cdot \binom{999}{950}+2\cdot \binom{998}{950}+\cdots \cdots \cdots +50\cdot \binom{950}{950}$</p> <p>Using $\displaystyle \binom{n}{k} = $ the coefficient of $x^k$ in $(1+x)^n$,</p> <p>this is the coefficient of $\displaystyle x^{950}$ in</p> <p>$\displaystyle 1\cdot (1+x)^{999}+2\cdot (1+x)^{998}+3\cdot (1+x)^{997}+\cdots \cdots +50 \cdot (1+x)^{950}$</p> <p>Let $S=1\cdot (1+x)^{999}+2\cdot (1+x)^{998}+3\cdot (1+x)^{997}+\cdots \cdots +50 \cdot (1+x)^{950}\cdots \cdots (\star)$</p> <p>Multiplying both sides by $\displaystyle \frac{1}{1+x}$:</p> <p>$\displaystyle S\cdot \frac{1}{1+x}=1\cdot (1+x)^{998}+2\cdot (1+x)^{997}+\cdots+49 \cdot (1+x)^{950}+50\cdot (1+x)^{949}\cdots \cdots (\star \star)$</p> <p>Subtracting $(\star\star)$ from $(\star)$, $\displaystyle S\cdot\bigg(1-\frac{1}{1+x}\bigg) = (1+x)^{999}+(1+x)^{998}+\cdots \cdots +(1+x)^{950}-50(1+x)^{949}$</p> <p>So $\displaystyle S \cdot \frac{x}{1+x} = \frac{(1+x)^{1000}-(1+x)^{950}}{1+x-1}-50(1+x)^{949}$</p> <p>So $\displaystyle S = \frac{(1+x)^{1001}-(1+x)^{951}}{x^2}-\frac{50(1+x)^{950}}{x}$</p> <p>So we want the coefficient of $x^{950}$ in $\displaystyle \frac{(1+x)^{1001}-(1+x)^{951}}{x^2}$ minus the coefficient of $x^{950}$ in $\displaystyle \frac{50(1+x)^{950}}{x}$,</p> <p>i.e. the coefficient of $x^{952}$ in $\bigg((1+x)^{1001}-(1+x)^{951}\bigg)$ minus $50$ times the coefficient of $x^{951}$ in $(1+x)^{950}$, which is $0$.</p> <p>So we are getting $\displaystyle \binom{1001}{952}$</p>
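The final identity can be checked directly (my addition, not part of the answer); `math.comb` is Python's binomial coefficient. The second assertion adds the hockey-stick term $\sum_r \binom{1000-r}{950}=\binom{1001}{951}$ to recover the OP's full sum.

```python
from math import comb

# The sum the OP was stuck on telescopes to a single binomial coefficient
s = sum(r * comb(1000 - r, 950) for r in range(51))
assert s == comb(1001, 952)

# The full original sum additionally picks up the hockey-stick term
full = sum((r + 1) * comb(1000 - r, 950) for r in range(51))
assert full == comb(1001, 952) + comb(1001, 951)
```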
2,049,685
<p>If team 1 has a probability of $p$ of winning each game against team 2, what is the probability (formula) that team 1 will win 7 games first?</p> <p>There are no ties and the teams play until one team wins 7 games.</p>
Canardini
341,007
<p>A state $(a,b)$ records the number of victories $a$ for team 1 and $b$ for team 2.</p> <p>Team 1 wins if the series reaches one of the states $(7,0), (7,1),...,(7,6)$.</p> <p>Each state $(7,k)$ is tied to $\binom{6+k}{k}$ possible scenarios, each occurring with the same probability $p^7(1-p)^{k}$. Indeed, if the result is $(7,k)$, it means that $7+k$ games were played, the $(7+k)^{th}$ game was won by team 1, and we pick the $k$ losses out of the $6+k$ remaining games. Therefore $P((7,k))=\binom{6+k}{k}p^7(1-p)^{k}$.</p> <p>Finally, your probability is $$\sum_{k=0}^{6}P((7,k))=\sum_{k=0}^{6}{\binom{6+k}{k}p^7(1-p)^{k}}$$</p>
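A quick check of the formula (my addition): at $p=1/2$ the answer must be $1/2$ by symmetry, and the extremes $p=0,1$ are certain.

```python
from math import comb

def p_win_series(p, n=7):
    # Probability that team 1 reaches n wins first (no ties)
    return sum(comb(n - 1 + k, k) * p**n * (1 - p)**k for k in range(n))

# Sanity checks: symmetry at p = 1/2, certainty at the extremes
assert abs(p_win_series(0.5) - 0.5) < 1e-12
assert p_win_series(1.0) == 1.0 and p_win_series(0.0) == 0.0
```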
2,096,408
<p>My Physics book has many graphs. Some are straight lines, some parabolas, while others are hyperbolas. I have not studied these curves (conic sections) yet, and to me a parabola and a hyperbola look just the same. Is there any way of knowing whether a curve is a parabola or a hyperbola just by looking at its graph?</p>
JonathanZ supports MonicaC
275,313
<p>This isn't an infallible method, but every hyperbola has two <a href="https://en.wikipedia.org/wiki/Asymptote" rel="nofollow noreferrer">asymptotes</a>, whereas parabolas don't have even one. </p>
20,726
<p>The following situation is ubiquitous in mathematical physics. Let $\Lambda_N$ be a finite-size lattice with linear size $N$. A typical example would be the subset of $\mathbb{Z}\times\mathbb{Z}$ given by those pairs of integers $(j,k)$ such that $j,k \in \{0,\ldots,N-1\}$. On each vertex $j$ of the lattice place a copy of the vector space $\mathbb{C}^d$. The total space will be the tensor product of all of these spaces. Then define a Hamiltonian acting on this total space as follows: $$ H = \sum_{k \in \Lambda_N} h_k$$ for some Hermitian matrices $h_k$ which act like the identity everywhere except on the vector spaces located on site $k$ and in the neighborhood surrounding $k$. Typically, one is interested in the case where there is a translational symmetry (except at the boundary) in the definition of the $h_k$. Denote the eigenvalues of $H$ in increasing order by $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_M$. </p> <blockquote> <p>For an arbitrary fixed family of Hamiltonians $H$, what proof techniques exist for computing an upper and a lower bound on $\Delta = \lambda_2 - \lambda_1$ as a function of $N$? In particular, we want to know if $\Delta$ decays to zero as a function of $N$, or if it is lower-bounded by some constant independent of $N$.</p> </blockquote> <p>The gap $\Delta$ is the energy gap between the ground state and the first excited state of an interacting quantum system. Understanding this quantity tremendously impacts our understanding of the different phases of matter, but it is extremely difficult to compute or even bound for all but the simplest cases (like when all the $h_k$ commute). 
This difficulty persists even when there is significant additional (physically motivated) structure in the problem, such as considering only $h_k$ which are projectors, and where there is a unique zero-energy eigenstate (all others having positive energy for any finite $N$).</p> <p>More general formulations of this question also have applications to expansion properties of graphs, mixing times of Markov chains, and many other things. I’m happy to hear answers related to these as well, but I’m hoping to find answers that are useful for the structure of local Hamiltonians, as defined above.</p>
Helge
3,983
<p>This is a bit off-topic, but powerful techniques to show that a gap exists can be found in the following papers:</p> <ul> <li><p>Michael Goldstein, Wilhelm Schlag, <em><a href="https://arxiv.org/abs/math/0511392" rel="nofollow noreferrer">On resonances and the formation of gaps in the spectrum of quasi-periodic Schroedinger equations</a></em></p> </li> <li><p>Artur Avila, Jairo Bochi, David Damanik, <em><a href="https://arxiv.org/abs/0903.2281" rel="nofollow noreferrer">Opening Gaps in the Spectrum of Strictly Ergodic Schrödinger Operators</a></em></p> </li> </ul> <p>The setting is somewhat different, of course, since they consider discrete Schroedinger operators ...</p>
4,118,149
<p>Let's say I have the following matrix: <span class="math-container">\begin{bmatrix} \frac{1}{3} &amp; \frac{2}{3} &amp; 0 &amp; \frac{2}{3} \\ \frac{2}{3} &amp; -\frac{1}{3} &amp; \frac{2}{3} &amp; 0 \\ a &amp; b &amp; c &amp; d \\ e &amp; f &amp; g &amp; h \end{bmatrix}</span></p> <p>How do I find the last <span class="math-container">$2$</span> rows such that this matrix is orthogonal? I know that it's orthogonal if <span class="math-container">$M M^T = Id$</span>, but this isn't helping me, nor is the Gram-Schmidt process.</p>
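One concrete way to do the completion (my sketch, not taken from any answer here): the missing rows must form an orthonormal basis of the orthogonal complement of the first two rows, and the SVD provides exactly that.

```python
import numpy as np

# The last two rows must span the orthogonal complement of the first two,
# so take an orthonormal basis of the null space of the 2x4 block via the SVD.
top = np.array([[1/3,  2/3, 0.0, 2/3],
                [2/3, -1/3, 2/3, 0.0]])
_, _, Vt = np.linalg.svd(top)
bottom = Vt[2:]          # right singular vectors for the zero singular values

M = np.vstack([top, bottom])
assert np.allclose(M @ M.T, np.eye(4))   # M is orthogonal
```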
Surb
154,545
<p>First, you can easily solve this equation and see what happens. But you can also see it without solving the equation. Let <span class="math-container">$g(x,y)=(y^2-1)\cos(x)$</span>. The function <span class="math-container">$g(x,\cdot )$</span> is locally Lipschitz, and thus your IVP has a unique local solution. Let <span class="math-container">$x_0&gt;0$</span> and <span class="math-container">$y:[0,x_0)\to \mathbb R$</span> be a solution of your IVP. Since <span class="math-container">$y(0)=0$</span>, there is <span class="math-container">$u&lt;x_0$</span> s.t. <span class="math-container">$y(x)\in (-1,1)$</span> for all <span class="math-container">$x\in [0,u)$</span>. Suppose that there is <span class="math-container">$\bar x\in[u,x_0)$</span> s.t. <span class="math-container">$y(\bar x)=\pm 1$</span> (suppose WLOG that <span class="math-container">$y(\bar x)=1$</span>). By uniqueness of the solution, <span class="math-container">$y\equiv 1$</span> on <span class="math-container">$[0,x_0)$</span>, which contradicts <span class="math-container">$y(0)=0$</span>. Therefore <span class="math-container">$u=x_0$</span>, i.e. <span class="math-container">$y(x)\in (-1,1)$</span> for all <span class="math-container">$x\in [0,x_0)$</span>.</p>
2,049,207
<p>The question is this: <strong><em>How many ways are there to put 5 different balls into 3 different boxes so that none of the boxes is empty?</em></strong> </p> <p>The correct answer as per my lecturer's notes is <strong>150</strong>, and I would like to know where I am going wrong in my approach.</p> <p><strong>Here is how I approached it (wrongly):</strong></p> <p>Separated into three tasks: </p> <p>1) Picking three balls from 5 to put in the boxes</p> <pre><code> {5 \choose 3} ways </code></pre> <p>2) Permuting those balls</p> <pre><code> 3! ways </code></pre> <p>3) Placing the other two balls can be done in two ways:</p> <p>Either put both in one of the boxes</p> <pre><code> 3 ways </code></pre> <p>Or put both each in a different box</p> <pre><code> P(3,2) ways to pick two boxes and arrange the balls in them </code></pre> <p>Task 3 has a total of </p> <pre><code> 6 +3 = 9 ways </code></pre> <p>Total using product rule is (as per my approach, which is incorrect):</p> <pre><code> 60 * (9) = 540 </code></pre> <p>I have seen other approaches to getting the correct answer(including using Stirling numbers of the second kind followed by permutation, inclusion-exclusion), but would like to know what is the correct way to split this into tasks and use the product rule (without using Stirling numbers).</p>
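The stated answer $150$ can be confirmed by brute force (my addition, not one of the counting approaches discussed):

```python
from itertools import product

# Assign each of the 5 distinct balls to one of 3 distinct boxes and keep
# only the assignments that leave no box empty.
count = sum(1 for assign in product(range(3), repeat=5)
            if set(assign) == {0, 1, 2})
assert count == 150   # matches the lecturer's answer
```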
Nicky Hekster
9,605
<p>$A_5$ is simple so you will not find any interesting normal subgroups. Let me give you a hint: if $g,a,b \in G$, then $g^{-1}abg=g^{-1}ag \cdot g^{-1}bg$. This should help you prove that the subgroup generated by a conjugacy class is actually normal.</p>
637,199
<p>If $K^T=K$, $K^3=K$, $K1=0$ and $K\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]=\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]$,</p> <p>how can I find the trace of $K$ and the determinant of $K$?</p> <p>I think for determinant of $K$, since $K^3-K=(K^2-I)K=0$, then $K^2=I$ since $K$ is nonzero. Then this implies $|K|^2=1$ implies $|K|=\pm 1$, where the two lines | | denotes the determinant.</p> <p>But I'm not sure if $tr(K^2)=tr(I)=3$?</p>
Community
-1
<p>The polynomial $x^3-x=x(x^2-1)$ annihilates the matrix $K$, which is diagonalizable since it is real <em>(not clear from the hypothesis)</em> symmetric, or since the polynomial has simple roots. Moreover, $0$ and $1$ are eigenvalues of $K$, since $K1=0$ and since $$K\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]=\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]$$ Now, since the determinant is the product of the eigenvalues, we have $\det K=0$, and for the trace there are three possibilities: </p> <ul> <li>if $-1$ is an eigenvalue of $K$ then the trace is $0$</li> <li>if $0$ is an eigenvalue with multiplicity $2$ the trace is $1$</li> <li>if $1$ is an eigenvalue with multiplicity $2$ the trace is $2$</li> </ul>
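A concrete matrix realizing the hypotheses (my construction, not from the answer): $K = vv^T/\lVert v\rVert^2$ with $v=(1,2,-3)$. It is symmetric, $K^2=K$ (so $K^3=K$), it kills the all-ones vector because $1+2-3=0$, it fixes $v$, and it lands in the trace-$1$ case with eigenvalues $1,0,0$.

```python
import numpy as np

v = np.array([1.0, 2.0, -3.0])
K = np.outer(v, v) / (v @ v)      # rank-1 orthogonal projection onto span(v)

ones = np.ones(3)
assert np.allclose(K, K.T)                 # symmetric
assert np.allclose(K @ K @ K, K)           # K^3 = K
assert np.allclose(K @ ones, 0)            # K 1 = 0
assert np.allclose(K @ v, v)               # K v = v
assert abs(np.linalg.det(K)) < 1e-12       # det K = 0
assert abs(np.trace(K) - 1) < 1e-12        # trace 1 case
```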
3,466,870
<p>Suppose </p> <p><span class="math-container">$$a^2 = \sum_{i=1}^k b_i^2$$</span> </p> <p>where <span class="math-container">$a, b_i \in \mathbb{Z}$</span>, <span class="math-container">$a&gt;0, b_i &gt; 0$</span> (and <span class="math-container">$b_i$</span> are not necessarily distinct).</p> <p>Can any positive integer be the value of <span class="math-container">$k$</span>?</p> <hr> <p>The reason I am interested in this: in an irreptile tiling where the smallest piece has area <span class="math-container">$A$</span>, we have <span class="math-container">$a^2A = \sum_{i=1}^k b_i^2A$</span>, where we have <span class="math-container">$k$</span> pieces scaled by <span class="math-container">$b_i$</span> to tile the big figure, which is scaled by <span class="math-container">$a$</span>. I am wondering what constraints there are on the number of pieces.</p> <p>Here is an example tiling that realizes <span class="math-container">$4^2 = 3^2 + 7 \cdot 1^2$</span>, so <span class="math-container">$k = 8$</span>.</p> <p><a href="https://i.stack.imgur.com/LdwQy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LdwQy.png" alt="enter image description here"></a></p>
J.G.
56,861
<p>Yes, <span class="math-container">$k$</span> can be arbitrary. Define a sequence<span class="math-container">$$a_1:=3,\,a_{k+1}:=\frac12\left(a_k^2+1\right)$$</span> of odd positive integers (since <span class="math-container">$\frac12((2n+1)^2+1)=2(n^2+n)+1$</span>), so<span class="math-container">$$a_{k+1}^2-a_k^2=\frac14\left[(a_k^2+1)^2-4a_k^2\right]=\left[\frac12(a_k^2-1)\right]^2$$</span>is a perfect square. Now define<span class="math-container">$$b_1:=3,\,b_{k+1}:=\frac12(a_k^2-1)$$</span>so <span class="math-container">$a_k^2=\sum_{i=1}^kb_i^2$</span> for all positive integers <span class="math-container">$k$</span>. The sequence <span class="math-container">$a_n$</span> is called the <a href="https://oeis.org/A053630" rel="noreferrer">Pythagorean spiral</a> or <a href="https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences" rel="noreferrer">OEIS</a> A053630.</p>
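A quick check of the recurrence (my addition): with $a_1=b_1=3$, $a_{k+1}=\tfrac12(a_k^2+1)$ and $b_{k+1}=\tfrac12(a_k^2-1)$, we indeed get $a_k^2=\sum_{i=1}^k b_i^2$ for the first several $k$.

```python
# a_1 = b_1 = 3; a_{k+1} = (a_k^2 + 1)/2, b_{k+1} = (a_k^2 - 1)/2
a, bs = 3, [3]
for _ in range(10):
    assert a * a == sum(b * b for b in bs)
    # both updates use the current a (RHS is evaluated before assignment)
    a, bs = (a * a + 1) // 2, bs + [(a * a - 1) // 2]
```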
3,466,870
<p>Suppose </p> <p><span class="math-container">$$a^2 = \sum_{i=1}^k b_i^2$$</span> </p> <p>where <span class="math-container">$a, b_i \in \mathbb{Z}$</span>, <span class="math-container">$a&gt;0, b_i &gt; 0$</span> (and <span class="math-container">$b_i$</span> are not necessarily distinct).</p> <p>Can any positive integer be the value of <span class="math-container">$k$</span>?</p> <hr> <p>The reason I am interested in this: in an irreptile tiling where the smallest piece has area <span class="math-container">$A$</span>, we have <span class="math-container">$a^2A = \sum_{i=1}^k b_i^2A$</span>, where we have <span class="math-container">$k$</span> pieces scaled by <span class="math-container">$b_i$</span> to tile the big figure, which is scaled by <span class="math-container">$a$</span>. I am wondering what constraints there are on the number of pieces.</p> <p>Here is an example tiling that realizes <span class="math-container">$4^2 = 3^2 + 7 \cdot 1^2$</span>, so <span class="math-container">$k = 8$</span>.</p> <p><a href="https://i.stack.imgur.com/LdwQy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LdwQy.png" alt="enter image description here"></a></p>
David G.
733,021
<p>Solutions exist for every <span class="math-container">$k&gt;0$</span>. The simplest forms use most of the <span class="math-container">$b_n$</span> values as <span class="math-container">$1$</span>. I will list them as "<span class="math-container">$b_*$</span>". I suspect there are infinitely many distinct answers for each <span class="math-container">$k\ge2$</span>, but I can't prove it.</p> <p>if <span class="math-container">$k = 2n$</span> (k is even) and large enough (<span class="math-container">$k\ge4$</span> or <span class="math-container">$n&gt;1$</span>):</p> <ul> <li><span class="math-container">$ a = n $</span></li> <li><span class="math-container">$ b_1 = n-1 $</span></li> <li><span class="math-container">$ b_* = 1 $</span></li> </ul> <p><span class="math-container">\begin{align} \sum_{i=1}^k b_i^2 &amp; = (n-1)^2 + (2n-1)\cdot1^2 \\ &amp; = n^2 - 2n + 1 + 2n - 1 \\ &amp; = n^2 \\ &amp; = a^2 \\ \end{align}</span></p> <p>if <span class="math-container">$k = 4n + 1$</span> and large enough (<span class="math-container">$k\ge9$</span> or <span class="math-container">$n&gt;1$</span>)</p> <ul> <li><span class="math-container">$ a = n+1 $</span></li> <li><span class="math-container">$ b_1 = n-1 $</span></li> <li><span class="math-container">$ b_* = 1 $</span></li> </ul> <p><span class="math-container">\begin{align} \sum_{i=1}^k b_i^2 &amp; = (n-1)^2 + ((4n+1)-1)\cdot1^2 \\ &amp; = n^2 - 2n + 1 + 4n \\ &amp; = n^2 +2n + 1 \\ &amp;= (n + 1)^2 \\ &amp; = a^2 \\ \end{align}</span></p> <p>if <span class="math-container">$k = 4n + 3$</span> </p> <ul> <li><span class="math-container">$ a = 2n+3 $</span></li> <li><span class="math-container">$ b_1 = 2n+2 $</span></li> <li><span class="math-container">$ b_2 = 2 $</span></li> <li><span class="math-container">$ b_* = 1 $</span></li> </ul> <p><span class="math-container">\begin{align} \sum_{i=1}^k b_i^2 &amp; = (2n+2)^2 + 2^2 + ((4n+3)-2)\cdot1^2 \\ &amp; = 4n^2 + 8n + 4 + 4 + 4n + 1 \\ &amp; = 4n^2 + 12n + 9 \\ &amp;= (2n + 3)^2 \\ &amp; = a^2 \\ \end{align}</span></p> <p>The only values that don't work with these patterns are <span class="math-container">$k$</span> = 1, 2, or 5.</p> <p>For <span class="math-container">$k=1$</span>, <span class="math-container">$a=b_1$</span> for any value.</p> <p>For <span class="math-container">$k=2$</span>, we have the well-known case, with minimal value <span class="math-container">$5^2=4^2+3^2$</span></p> <p>For <span class="math-container">$k=5$</span>, the minimal value is <span class="math-container">$4^2=3^2+2^2+1^2+1^2+1^2$</span></p>
3,466,870
<p>Suppose </p> <p><span class="math-container">$$a^2 = \sum_{i=1}^k b_i^2$$</span> </p> <p>where <span class="math-container">$a, b_i \in \mathbb{Z}$</span>, <span class="math-container">$a&gt;0, b_i &gt; 0$</span> (and <span class="math-container">$b_i$</span> are not necessarily distinct).</p> <p>Can any positive integer be the value of <span class="math-container">$k$</span>?</p> <hr> <p>The reason I am interested in this: in an irreptile tiling where the smallest piece has area <span class="math-container">$A$</span>, we have <span class="math-container">$a^2A = \sum_{i=1}^k b_i^2A$</span>, where we have <span class="math-container">$k$</span> pieces scaled by <span class="math-container">$b_i$</span> to tile the big figure, which is scaled by <span class="math-container">$a$</span>. I am wondering what constraints there are on the number of pieces.</p> <p>Here is an example tiling that realizes <span class="math-container">$4^2 = 3^2 + 7 \cdot 1^2$</span>, so <span class="math-container">$k = 8$</span>.</p> <p><a href="https://i.stack.imgur.com/LdwQy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LdwQy.png" alt="enter image description here"></a></p>
ralphmerridew
231,358
<p>Yes.</p> <p>For <span class="math-container">$k = 2$</span>: <span class="math-container">$3^2 + 4^2 = 5^2$</span></p> <p>For <span class="math-container">$k &gt; 2$</span>:<br> Start with a solution for <span class="math-container">$k-1$</span><br> Multiply both sides by <span class="math-container">$5^2$</span><br> Replace one <span class="math-container">$(5a)^2$</span> on the left with <span class="math-container">$(3a)^2$</span> + <span class="math-container">$(4a)^2$</span>.</p> <p><span class="math-container">$3^2 + 4^2 = 5^2$</span><br> <span class="math-container">$15^2 + 20^2 = 25^2$</span><br> <span class="math-container">$9^2 + 12^2 + 20^2 = 25^2$</span> (k = 3) </p> <p><span class="math-container">$45^2 + 60^2 + 100^2 = 125^2$</span><br> <span class="math-container">$27^2 + 36^2 + 60^2 + 100^2 = 125^2$</span> (k = 4) </p> <p>Repeat until k is as desired.</p>
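This construction is easy to check mechanically (my addition): scale by $5$, then split the last term using $3^2+4^2=5^2$.

```python
# Start from 3^2 + 4^2 = 5^2; repeatedly multiply both sides by 5^2 and
# split one (5a')^2 term into (3a')^2 + (4a')^2 to go from k terms to k+1.
bs, a = [3, 4], 5
for k in range(2, 12):
    assert sum(b * b for b in bs) == a * a and len(bs) == k
    bs = [5 * b for b in bs]               # multiply both sides by 5^2
    a *= 5
    b0 = bs.pop()                          # every term is now divisible by 5
    assert b0 % 5 == 0
    bs += [3 * (b0 // 5), 4 * (b0 // 5)]   # replace b0^2 by (3b0/5)^2 + (4b0/5)^2
```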
3,466,870
<p>Suppose </p> <p><span class="math-container">$$a^2 = \sum_{i=1}^k b_i^2$$</span> </p> <p>where <span class="math-container">$a, b_i \in \mathbb{Z}$</span>, <span class="math-container">$a&gt;0, b_i &gt; 0$</span> (and <span class="math-container">$b_i$</span> are not necessarily distinct).</p> <p>Can any positive integer be the value of <span class="math-container">$k$</span>?</p> <hr> <p>The reason I am interested in this: in a irreptile tiling where the smallest piece has area <span class="math-container">$A$</span>, we have <span class="math-container">$a^2A = \sum_{i=1}^k b_i^2A$</span>, where we have <span class="math-container">$k$</span> pieces scaled by <span class="math-container">$b_i$</span> to tile the big figure, which is scaled by <span class="math-container">$a$</span>. I am wondering what constraints there are on the number of pieces.</p> <p>Here is an example tiling that realizes <span class="math-container">$4^2 = 3^2 + 7 \cdot 1^2$</span>, so <span class="math-container">$k = 8$</span>.</p> <p><a href="https://i.stack.imgur.com/LdwQy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LdwQy.png" alt="enter image description here"></a></p>
Piquito
219,998
<p>The well-known generalization of Pythagorean triples lets us say YES. Just as two parameters are needed for <span class="math-container">$k=2$</span>, for <span class="math-container">$k\gt2$</span> we need <span class="math-container">$k$</span> arbitrary parameters <span class="math-container">$t_1,t_2,\cdots,t_k$</span>, and we have the easily verified identity <span class="math-container">$$(t_k^2-t_1^2-t_2^2-\cdots-t_{k-1}^2)^2+(2t_1t_k)^2+(2t_2t_k)^2+\cdots+(2t_{k-1}t_k)^2=(t_1^2+\cdots+t_k^2)^2.$$</span></p> <p>(Note that for <span class="math-container">$k=2$</span> we have the well-known parameterization of Pythagorean triples.)</p>
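The identity is easy to verify numerically (my addition), for random integer parameters:

```python
import random

# (t_k^2 - t_1^2 - ... - t_{k-1}^2)^2 + sum (2 t_i t_k)^2 = (t_1^2 + ... + t_k^2)^2
for _ in range(20):
    k = random.randint(2, 8)
    t = [random.randint(1, 9) for _ in range(k)]
    head = (t[-1]**2 - sum(x * x for x in t[:-1]))**2
    lhs = head + sum((2 * x * t[-1])**2 for x in t[:-1])
    rhs = sum(x * x for x in t)**2
    assert lhs == rhs
```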
431,236
<p>I have a cylinder of radius 4 and height 10 that is at a 30 degree angle. I need to find the volume.</p> <p>I have no clue how to do this, I have spent quite a while on it and went through many ideas but I think my best idea was this.</p> <p>I know that the radius is 4 so if I cut the cylinder in half from corner to corner I will have two side lengths giving me a third side length. So this gives</p> <p>$$\sqrt{116} = height$$ </p> <p>Or the length of the tall sides.</p> <p>Now I just plug this into my formula</p> <p>$$\pi r^2 h$$</p> <p>$$\pi *16*\sqrt116$$</p> <p>This is about $34\pi$ which is way off. What did I do wrong?</p>
chijioke
116,927
<p>Assuming that the height given in the question is the slant height, the vertical height $H$ must be found before proceeding: $H = l\sin 30^\circ = 10\cdot\tfrac12 = 5$. Substituting into the usual volume formula then gives $V = \pi r^2 H = 80\pi \approx 251.33$.</p>
384,006
<p>Just came across the following question:</p> <blockquote> <p>Let $S=\{2,5,13\}$. Notice that $S$ satisfies the following property: for any $a,b \in S$ and $a \neq b$, $ab-1$ is a perfect square. Show that for any positive integer $d \not\in S$, $S \cup \{d\}$ does not satisfy the above property.</p> </blockquote> <p>This question can be done by considering modulo 4.</p> <p>Here comes my question:</p> <blockquote> <p>What is the greatest value of $|A|$ if all elements in $A$ are different and for any $a,b \in A$ and $a \neq b$, $ab-1$ is a perfect square?</p> </blockquote> <p>Remark: $A$ may not contain any of $\{2,5,13\}$, example: $\{17, 26, 85\}$</p> <p>Edit: From the link <a href="http://web.math.pmf.unizg.hr/~duje/intro.html">here</a>, there are infinitely many 3-element sets satisfying the property. These sets are of the form $\{a, b, a+b+2r\}$ where $r^2 = ab-1$. Are we able to find a 4-element set that satisfies the property?</p>
Matthew W.
82,204
<p>I have been trying to find a 4-element set that satisfies the property. I ran a computer program to check all 4-element sets of integers &lt;= 20,000 and found nothing, so I am guessing that there are no such sets, although of course I have nothing conclusive. <p> The program also listed all of the 3-element sets that it found, and I've noticed that each set, taken modulo 4, is either {1, 1, 1} or {1, 1, 2}. I can now prove this:</p> <blockquote> It is fairly easy to see that, if a*b-1 is square, then {a, b} modulo 4 is one of {1, 1}, {3, 3}, {1, 2}, and {2, 3}. <p> Next, looking through some notes on number theory that I printed from MIT OpenCourseware, there is a lemma which states that, because a*b is the sum of two squares (n^2 + 1^2), any prime factors of a*b which are congruent to 3 mod 4 must divide both squares, i.e., n and 1. Since no primes divide 1, this implies that a*b has no prime factors congruent to 3 mod 4, from which it follows that a and b are both not congruent to 3 mod 4. <p> Thus, {a, b} modulo 4 is one of {1, 1}, {1, 2}. From this, it is easy to see that any 3-element set satisfying the property is of one of the forms {1, 1, 1}, {1, 1, 2}, as desired. </blockquote> <p><p> @pipi: how did you prove the original problem "modulo 4"? Does your result imply that <em>any</em> set of the form {1, 1, 2} modulo 4 cannot be extended to a 4-element set? If so, by using your argument and then modifying it to work on {1, 1, 1}, we may be able to prove that no 4-element sets satisfying the property exist.</p>
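A smaller-scale version of the search described above (my re-implementation, with a much smaller bound than the 20,000 used in the answer), which also confirms the mod-4 observation on everything it finds:

```python
from math import isqrt

def ok(a, b):
    # True iff a*b - 1 is a perfect square
    n = a * b - 1
    return isqrt(n) ** 2 == n

N = 120
triples = [(a, b, c) for a in range(1, N) for b in range(a + 1, N)
           for c in range(b + 1, N) if ok(a, b) and ok(a, c) and ok(b, c)]
assert (2, 5, 13) in triples
# Every set found is {1,1,1} or {1,1,2} modulo 4, as argued above
assert all(sorted(x % 4 for x in t) in ([1, 1, 1], [1, 1, 2]) for t in triples)
```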
2,238,614
<p>For $n\ge3$ a given integer, find a Pythagorean Triple having n as one of its members.<br> Hint: For n an odd integer, consider the triple $$\left(n, \frac 12\left(n^2-1\right), \frac 12(n^2+1)\right);$$ For n even, consider the triple $$\left(n, \left(\frac{n^2}{4}\right)-1, \left(\frac{n^2}{4}\right)+1 \right)$$</p> <p>I have been trying to solve this problem by letting $n=x-y$ or $n=2xy$ but have been unable to use the hints properly. Do I just add the squares of the first 2 and see if it equals the square of the last? I am just missing the concept here. Any suggestions would be appreciated.</p> <p>$$n^2+\left(\frac{n^2-1}{2}\right)^2=n^2+\frac{n^4-2n^2+1}{4}=\frac{n^4+2n^2+1}{4}$$</p> <p>$$\left( \frac{n^2+1}{2}\right)^2=\frac{n^4+2n^2+1}{4}$$</p> <p>Is this all I have to do for odd?</p>
Micah
30,836
<p>You do want to add the squares of the first two and see if it equals the square of the last.</p> <p>You also have to make sure all three numbers are integers, which is why you need to do something different depending on whether $n$ is even or odd.</p>
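The two hinted families can also be checked mechanically; a small sketch (function names are mine):

```python
def triple_for(n):
    """A Pythagorean triple containing n >= 3, following the hint."""
    if n % 2 == 1:                               # n odd
        return (n, (n * n - 1) // 2, (n * n + 1) // 2)
    return (n, n * n // 4 - 1, n * n // 4 + 1)   # n even

def is_pythagorean(t):
    a, b, c = t
    return a * a + b * b == c * c
```

For example, `triple_for(3)` gives $(3,4,5)$ and `triple_for(8)` gives $(8,15,17)$, and both branches produce integers precisely because of the parity split.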
2,461,506
<p>I am trying to derive / prove the fourth order accurate formula for the second derivative:</p> <p>$f''(x) = \frac{-f(x + 2h) + 16f(x + h) - 30f(x) + 16f(x - h) - f(x -2h)}{12h^2}$.</p> <p>I know that in order to do this I need to take some linear combination for the Taylor expansions of $f(x + 2h)$, $f(x + h)$, $f(x - h)$, $f(x -2h)$. For example, when deriving the the centered-difference formula for the first derivative, the Taylor expansion of $f(x + h)$ minus $f(x-h)$ can be computed to give the desired result of $f'(x)$, in that case.</p> <p>In what way would I have to combine these Taylor expansions above to obtain the required result?</p>
Vladimir F Героям слава
134,138
<p>You can easily derive the formula, if you do not know it, by differentiating the Lagrange interpolating polynomial twice:</p> <pre><code>D[InterpolatingPolynomial[{{-2 h, y0}, {-h, y1}, {0, y2}, {h, y3}, {2 h, y4}}, x], {x, 2}] /. x -&gt; 0 </code></pre> <p><kbd> <a href="https://www.wolframalpha.com/input/?i=D%5BD%5BInterpolatingPolynomial%5B%7B%28-2*h%2Cy0%29%2C%28-1*h%2Cy1%29%2C%280*h%2Cy2%29%2C%281*h%2Cy3%29%2C%282*h%2Cy4%29%7D%2Cx%5D%2Cx%5D%2Cx%5D+%2F.+x%3D0" rel="nofollow noreferrer">Try at Wolfram Alpha</a></kbd></p> <p><a href="https://i.stack.imgur.com/OvjUA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OvjUA.png" alt="enter image description here" /></a></p> <p>The other answers show how to prove the order of accuracy of an already-known formula.</p>
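Independently of the symbolic derivation, the stencil itself can be sanity-checked numerically: with $f=\sin$ the error should shrink by roughly a factor of $16$ when $h$ is halved (a sketch, names mine):

```python
import math

def d2_fourth_order(f, x, h):
    """Fourth-order central approximation of f''(x)."""
    return (-f(x + 2 * h) + 16 * f(x + h) - 30 * f(x)
            + 16 * f(x - h) - f(x - 2 * h)) / (12 * h * h)

def err(h, x=1.0):
    """Error against the exact second derivative of sin, which is -sin."""
    return abs(d2_fourth_order(math.sin, x, h) - (-math.sin(x)))
```

The error ratio `err(0.1) / err(0.05)` lands near $16 = 2^4$, consistent with fourth-order accuracy.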
3,424,687
<blockquote> <p>Let <span class="math-container">$n$</span> be a positive integer and a complex number with unit modulus is a solution of the equation <span class="math-container">$z^n+z+1=0$</span>. Prove that <span class="math-container">$n $</span> can't be <span class="math-container">$196$</span>. </p> </blockquote> <p>The above question has been bothering me since a long time. I 've tried using the Euler's form for <span class="math-container">$z $</span> and have obtained <span class="math-container">$\sin 2nx=-0.5$</span>. I don't know how to use that. Would someone help me to solve this problem?</p> <p>Thanks in advance.</p>
Mohammad Riazi-Kermani
514,496
<p>Let <span class="math-container">$$z^{196}+z+1=0$$</span></p> <p>Then we have <span class="math-container">$$z^{196}= -z-1$$</span> </p> <p>Thus <span class="math-container">$$|z|^{196} =|-z-1|$$</span></p> <p>Since <span class="math-container">$|z|=1$</span> we get <span class="math-container">$|z+1|=1$</span></p> <p>Let <span class="math-container">$z=x+iy$</span> then we have <span class="math-container">$z+1=(x+1)+iy$</span> so <span class="math-container">$|z+1|^2 =(x+1)^2+y^2 =1$</span></p> <p>That is <span class="math-container">$x^2+y^2+2x+1=1$</span> which implies <span class="math-container">$x=-1/2$</span> </p> <p>Since we have <span class="math-container">$|z|=1$</span> we have <span class="math-container">$z=-1/2 \pm i {\sqrt 3}/2 =e^{\pm 2\pi i/3}$</span> </p> <p>Finally, since <span class="math-container">$z^3=1$</span> and <span class="math-container">$196\equiv 1 \pmod 3$</span>, we get <span class="math-container">$z^{196}=z$</span>, so <span class="math-container">$z^{196}+z+1=2z+1\neq 0$</span>. Hence neither candidate satisfies <span class="math-container">$$z^{196}+z+1=0$$</span></p>
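A quick floating-point check of the conclusion (variable names are mine):

```python
import cmath
import math

z = cmath.exp(2j * math.pi / 3)   # the candidate root found above

# z lies on the unit circle and satisfies |z + 1| = 1 ...
on_circle = abs(abs(z) - 1) < 1e-12 and abs(abs(z + 1) - 1) < 1e-12

# ... yet z^196 + z + 1 = 2z + 1 (since z^3 = 1 and 196 = 3*65 + 1),
# which has modulus sqrt(3), so it is far from 0.
residual = abs(z ** 196 + z + 1)
```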
2,087,107
<p>In the following integral</p> <p>$$\int \frac {1}{\sec x+ \mathrm {cosec} x} dx $$</p> <p><strong>My try</strong>: Multiplied and divided by $\cos x$ and Substituting $\sin x =t$. But by this got no result.</p>
Dhanvi Sreenivasan
332,720
<p><strong>HINT</strong> </p> <p>Use half-angle formulae to substitute for $\sec x$ and $\csc x$, and use the substitution $u = \tan \frac{x}{2}$, to get the following expression:</p> <p>$$I = 4\int\frac{u(u^2-1)}{(u^2+1)^2(u^2-2u-1)}du$$</p> <p>This should be easier to compute, using factorization and other basic techniques</p>
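One way to gain confidence in the hint is a numerical check: writing $f(x)=\frac{1}{\sec x+\csc x}=\frac{\sin x\cos x}{\sin x+\cos x}$, the $u$-integrand above, multiplied by $du/dx=\frac{1+u^2}{2}$ at $u=\tan\frac{x}{2}$, should reproduce $f(x)$ (a sketch, with names of my choosing):

```python
import math

def f(x):
    """Original integrand 1/(sec x + csc x) = sin x cos x / (sin x + cos x)."""
    return math.sin(x) * math.cos(x) / (math.sin(x) + math.cos(x))

def g(u):
    """The u-integrand from the hint (without the du)."""
    return 4 * u * (u * u - 1) / ((u * u + 1) ** 2 * (u * u - 2 * u - 1))

def g_pulled_back(x):
    """g(tan(x/2)) times du/dx = (1 + u^2)/2; should equal f(x)."""
    u = math.tan(x / 2)
    return g(u) * (1 + u * u) / 2
```

The two agree to machine precision away from the singularities of the integrand.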
350,747
<p>Base case: $n=1$. Picking the $2n+1=3$ consecutive numbers $5,6,7$ we get $5+6+7=18$, and $2(1)+1=3$ indeed divides 18. The base case holds. Now let $k\ge1$ and assume the claim holds for $n=k$. We want to show it holds for $n=k+1$, that is, for $2(k+1)+1=(2k+2)+1$ consecutive numbers....</p> <p>Now I'm stuck. Any ideas?</p>
André Nicolas
6,312
<p>For clarity, we start by taking $n=4$. </p> <p>Look at the following $2n+1=9$ consecutive integers: $$-4, \quad-3,\quad -2, \quad -1,\quad 0,\quad 1,\quad 2,\quad 3,\quad 4.$$</p> <p>The sum of these is $0$, obviously divisible by $2n+1$.</p> <p>Now any $9$ consecutive integers can be obtained by adding a suitable constant $b$, possibly negative, to each of the numbers above. </p> <p>But that means their sum is $0$, incremented by $9b$. So their sum is a multiple of $9$.</p> <p><strong>Exactly</strong> the same argument works for general $2n+1$, except that we start with the numbers $0, \pm 1, \pm 2, \dots, \pm n$, arranged in increasing order. </p>
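The centering argument is easy to test exhaustively for small cases (a sketch):

```python
def consecutive_sum(start, count):
    """Sum of `count` consecutive integers beginning at `start`."""
    return sum(range(start, start + count))

# The sum of any 2n+1 consecutive integers is divisible by 2n+1,
# for many odd lengths and starting points (including negative ones).
checks = [
    consecutive_sum(b, 2 * n + 1) % (2 * n + 1) == 0
    for n in range(1, 20)
    for b in range(-30, 30)
]
```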
733,675
<p>A new question has emerged after this one was successfully answered by r9m: <a href="https://math.stackexchange.com/questions/731292/inequality-with-abcd-2/731930#731930">If $a+b+c+d = 2$, then $\frac{a^2}{(a^2+1)^2}+\frac{b^2}{(b^2+1)^2}+\frac{c^2}{(c^2+1)^2}+\frac{d^2}{(d^2+1)^2}\le \frac{16}{25}$</a>. I thought of this generalization. Does it hold?</p> <p>$$\dfrac{x_1^2}{(x_1^2+1)^2}+\dfrac{x_2^2}{(x_2^2+1)^2}+\cdot\cdot\cdot+\dfrac{x_n^2}{(x_n^2+1)^2}\le \dfrac{n^2}{(n+1)^2}$$ with $$x_1+x_2+\cdot\cdot\cdot+x_n= \sqrt{n}$$ $$x_1,x_2,\cdot\cdot\cdot,x_n \ge0$$ $$ n \in \mathbb{N}$$</p>
Shane
58,882
<p>I'm afraid the inequality is wrong. Note that RHS $&lt; 1$. However, if we take $x:=1$, then $x^2/(x^2+1)^2=1/4$. Thus, we choose a large $n$, and let as many $x_i$ as possible be $1$; then the inequality fails. But I believe that there exists a bound on $n$ below which the inequality holds.</p>
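Concretely, at $n=16$ one can take four of the $x_i$ equal to $1$ and the rest $0$, so that the constraint $\sum x_i=\sqrt{16}=4$ holds while the left side already reaches $1$ (a sketch):

```python
def term(x):
    """One summand x^2 / (x^2 + 1)^2."""
    return x * x / (x * x + 1) ** 2

n = 16
xs = [1.0] * 4 + [0.0] * (n - 4)     # sum is 4 = sqrt(16)
lhs = sum(term(x) for x in xs)       # 4 * (1/4) = 1
rhs = n * n / (n + 1) ** 2           # 256/289 < 1
```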
1,220,502
<p>The problem is this. Given that $\int_0^a f(x) dx = \int_0^a f(a-x)dx$, evaluate $$\int_0^\pi \frac{x\sin x}{1+\cos^2x} dx$$</p> <p>I write the integral as $$\int_0^\pi \frac{(\pi-x)\sin(\pi-x)}{1+\cos^2(\pi-x)} dx$$ but I don't see how that helps to do it.</p>
Olivier Oloa
118,798
<p>If you set $$ I=\int_0^\pi \frac{x\sin x}{1+\cos^2x} dx $$then $$\begin{align} I&amp;=\int_0^\pi \frac{(\pi-x)\sin(\pi-x)}{1+\cos^2(\pi-x)} dx\\\\ &amp;=\int_0^\pi \frac{(\pi-x)\sin x}{1+\cos^2x} dx\\\\ &amp;=\pi\int_0^\pi \frac{\sin x}{1+\cos^2x} dx-\int_0^\pi \frac{x\sin x}{1+\cos^2x} dx\\\\ &amp;=\pi\int_0^\pi \frac{\sin x}{1+\cos^2x} dx-I \end{align}$$ giving $$I=\frac{\pi}2\int_0^\pi \frac{\sin x}{1+\cos^2x}\, dx=-\frac{\pi}2\left[ \arctan(\cos x)\right]_0^\pi=\color{red}{\frac{\pi^2}4}.$$</p>
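A crude midpoint rule corroborates the closed form $\pi^2/4 \approx 2.4674$ (a sketch, names mine):

```python
import math

def integrand(x):
    return x * math.sin(x) / (1 + math.cos(x) ** 2)

def midpoint(f, a, b, n):
    """Composite midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

approx = midpoint(integrand, 0.0, math.pi, 100_000)
exact = math.pi ** 2 / 4
```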
281,450
<p>How can I find the area bound by $\;x=0,\, x=1,\;$ the $\;x$-axis ($y = 0$) and $\;y=x^2+2x\;$ using Riemann sums? </p> <p>I want to use the right-hand sum. Haven't really found any good resources online to explain the estimation of areas bounded by curves, hoping anyone here can help?</p> <p>By the way, I would like there to be 100 intervals.</p>
Hagen von Eitzen
39,174
<p>The sequence for which $(1+x+x^{10})^{20}$ is the generating function, is $1, 20, 190, 1140, 4845, 15504, 38760, 77520, 125970, 167960, 184776, 168340, 129390, 96900, 116280, 248064, 547485, 1008900, 1511830, 1847580, 1847751, 1515060, 1036830, 697680, 813960, 1705440, 3546540, 6049980, 8314400, 9237820, 8315160, 6065940, 3682200, 2403120, 3294600, 7209360, 14137710, 22174140, 27713590, 27713400, 22175565, 14186160, 7635720, 5426400, 9593100, 21318000, 38818140, 55427940, 62355150, 55426800, 38814264, 21395520, 10445820, 9767520, 21744360, 46636032, 77602365, 99768240, 99768240, 77597520, 46597272, 21705600, 10581480, 15736560, 39031320, 77613024, 116396280, 133024320, 116396280, 77597520, 38876280, 15116400, 9573720, 22713360, 55465560, 99768240, 133024320, 133024320, 99768240, 55426800, 22296690, 7558200, 9321780, 27790920, 62355150, 99768240, 116396280, 99768240, 62355150, 27713400, 8481980, 3359200, 9363770, 27713400, 55426800, 77597520, 77597520, 55426800, 27713400, 9237800, 2032316, 2015520, 8314020, 22170720, 38798760, 46558512, 38798760, 22170720, 8314020, 1847560, 352716, 1511640, 6046560, 14108640, 21162960, 21162960, 14108640, 6046560, 1511640, 167960, 125970, 1007760, 3527160, 7054320, 8817900, 7054320, 3527160, 1007760, 125970, 0, 77520, 542640, 1627920, 2713200, 2713200, 1627920, 542640, 77520, 0, 0, 38760, 232560, 581400, 775200, 581400, 232560, 38760, 0, 0, 0, 15504, 77520, 155040, 155040, 77520, 15504, 0, 0, 0, 0, 4845, 19380, 29070, 19380, 4845, 0, 0, 0, 0, 0, 1140, 3420, 3420, 1140, 0, 0, 0, 0, 0, 0, 190, 380, 190, 0, 0, 0, 0, 0, 0, 0, 20, 20, 0, 0, 0, 0, 0, 0, 0, 0, 1$ (and then only $0$'s)</p> <p>Note that $a_n$ counts the number of ways to write $n$ as sum of $20$ numbers $\in\{0,1,10\}$.</p>
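For the right-hand Riemann sum asked about in the question: with $f(x)=x^2+2x$ on $[0,1]$ and $100$ subintervals, take $x_i=i/100$ and sum $f(x_i)\,\Delta x$ for $i=1,\dots,100$. Since $\int_0^1(x^2+2x)\,dx=\frac43$, the sum should land slightly above that (a sketch, names mine):

```python
def right_riemann(f, a, b, n):
    """Right-hand Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(1, n + 1))

f = lambda x: x * x + 2 * x
rs = right_riemann(f, 0.0, 1.0, 100)   # a slight overestimate: f is increasing
exact = 4 / 3
```

With $n=100$ the sum works out to $1.34835$, about $0.015$ above the exact area.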
67,929
<p>Joel David Hamkins in an answer to my question <a href="https://mathoverflow.net/questions/67259/countable-dense-sub-groups-of-the-reals">Countable Dense Sub-Groups of the Reals</a> points out that "one can find an uncountable chain of countable dense additive subgroups of $\mathbb{R}$ whose subset relation has the order type of the continuum $\langle \mathbb{R},&lt;\rangle$."</p> <p>I would like to know what is the cardinality of the set of countable dense additive subgroups of the reals (up to isomorphism)? Is it undecidable?</p>
Juris Steprans
13,878
<p>The result of Joel quoted above certainly shows a complicated structure, but does not actually provide continuum many non-isomorphic subgroups since there is no reason why $G$ being a subgroup of $H$ should imply that $G$ and $H$ are not isomorphic. Indeed, considering two groups generated by the rationals and two infinite algebraically independent sets of reals will provide counterexamples to this.</p> <p>However, the structure theorem of rank-one torsion-free abelian groups does show that there are continuum many such groups. For example, for any set of primes $P$ let $G_P$ be the set of all rationals whose denominator is a product of primes only from $P$. It is easy to see that every element has $p^\text{th}$ roots if $p \in P$ but that $1$ has no $q^\text{th}$ root if $q\notin P$. Hence all these groups are non-isomorphic.</p>
3,075,979
<p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p> <p>I tried mathematical induction, but even with the binomial formula it becomes a little complicated.</p> <p>Please show me your proof.</p> <p>Sorry if this question was already asked; I did not find it. In that case, sharing the link will be enough.</p>
David Quinn
187,299
<p>Hint... if you only want to use induction, let <span class="math-container">$$f(k)=15k^7+21k^5+70k^3-k$$</span> and consider <span class="math-container">$$f(k+1)-f(k)$$</span></p> <p>For the induction step you have to show this is divisible by <span class="math-container">$105$</span></p> <p>So, for example, <span class="math-container">$$(k+1)^7-k^7=7N+1$$</span> where <span class="math-container">$N$</span> is an integer, etc...</p> <p>Can you finish?</p>
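Both the divisibility claim and the induction step can be checked quickly (a sketch, names mine):

```python
def f(k):
    """105 times the original expression, as in the hint."""
    return 15 * k**7 + 21 * k**5 + 70 * k**3 - k

# f(k) is divisible by 105 for every integer k, and so is the
# difference f(k+1) - f(k) used in the induction step.
all_divisible = all(f(k) % 105 == 0 for k in range(-50, 51))
step_divisible = all((f(k + 1) - f(k)) % 105 == 0 for k in range(-50, 51))
```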
3,075,979
<p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p> <p>I tried mathematical induction, but even with the binomial formula it becomes a little complicated.</p> <p>Please show me your proof.</p> <p>Sorry if this question was already asked; I did not find it. In that case, sharing the link will be enough.</p>
Jean-Claude Arbaut
43,608
<p>You can use the <a href="https://en.wikipedia.org/wiki/Binomial_transform" rel="nofollow noreferrer">binomial transform</a> to prove that</p> <p><span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105} \\={k\choose1}+28{k\choose2}+292{k\choose3}+1248{k\choose4}+2424{k\choose5}+2160{k\choose6}+720{k\choose7}$$</span></p>
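The stated expansion can be verified exactly with rational arithmetic; since both sides are degree-$7$ polynomials in $k$, agreement at a handful of points already settles the identity (a sketch, names mine):

```python
from fractions import Fraction
from math import comb

def lhs(k):
    return (Fraction(k**7, 7) + Fraction(k**5, 5)
            + Fraction(2 * k**3, 3) - Fraction(k, 105))

def rhs(k):
    coeffs = [1, 28, 292, 1248, 2424, 2160, 720]   # for C(k,1) ... C(k,7)
    return sum(c * comb(k, i + 1) for i, c in enumerate(coeffs))

agree = all(lhs(k) == rhs(k) for k in range(0, 40))
```

Since the right side is an integer combination of binomial coefficients, this also re-proves integrality of the expression.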
3,075,979
<p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p> <p>I tried mathematical induction, but even with the binomial formula it becomes a little complicated.</p> <p>Please show me your proof.</p> <p>Sorry if this question was already asked; I did not find it. In that case, sharing the link will be enough.</p>
Cardioid_Ass_22
631,681
<p>Base case for <span class="math-container">$k=1$</span>: <span class="math-container">$$\frac{1^7}{7}+\frac{1^5}{5}+\frac{2*1^3}{3}-\frac{1}{105}=1$$</span></p> <p>Now, assume for some k that <span class="math-container">$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$</span> is indeed an integer. </p> <p>Then <span class="math-container">$$\frac{(k+1)^7}{7}+\frac{(k+1)^5}{5}+\frac{2(k+1)^3}{3}-\frac{k+1}{105}=\\\frac{\sum_{i=0}^7\binom{7}{i}k^i}{7}+\frac{\sum_{i=0}^5\binom{5}{i}k^i}{5}+2\frac{\sum_{i=0}^3\binom{3}{i}k^i}{3}-\frac{k+1}{105}=$$</span> </p> <p>Extracting the highest indexed term from each sum (and the <span class="math-container">$-\frac{k}{105}$</span> at the end): <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}+\frac{\sum_{i=0}^6\binom{7}{i}k^i}{7}+\frac{\sum_{i=0}^4\binom{5}{i}k^i}{5}+2\frac{\sum_{i=0}^2\binom{3}{i}k^i}{3}-\frac{1}{105}$$</span> </p> <p>By the induction hypothesis, the sum of the first four terms is an integer so, if we can show the rest of the above sum is an integer, we will be done. Use the fact that, for any prime, <span class="math-container">$p$</span>, <span class="math-container">$p|\binom{p}{k}$</span> where <span class="math-container">$1\leq k\leq p-1$</span>. This is because <span class="math-container">$$\binom{p}{k}=\frac{p(p-1)...(p-k+1)}{k(k-1)...1}$$</span> </p> <p><span class="math-container">$p$</span> divides the numerator but not the denominator (as <span class="math-container">$1\leq k\leq p-1$</span>) so <span class="math-container">$p|\binom{p}{k}$</span> </p> <p>So each term in the remaining sum with index <span class="math-container">$i$</span> <span class="math-container">$\geq1$</span> and <span class="math-container">$\leq p-1$</span> (<span class="math-container">$p$</span> being the respective prime in each sum) is divisible by the corresponding <span class="math-container">$p$</span> in the denominator and produces an integer. 
The only non-integer terms left will be the ones at <span class="math-container">$i=0$</span>, i.e. <span class="math-container">$$\frac{\binom{7}{0}k^0}{7}+\frac{\binom{5}{0}k^0}{5}+\frac{2\binom{3}{0}k^0}{3}-\frac{1}{105}=\frac{1}{7}+\frac{1}{5}+\frac{2}{3}-\frac{1}{105}=1$$</span> </p> <p>So <span class="math-container">$\frac{(k+1)^7}{7}+\frac{(k+1)^5}{5}+\frac{2(k+1)^3}{3}-\frac{k+1}{105}$</span> is a sum of integers making it an integer.</p>
2,759,827
<blockquote> <p>Let $\{x_n\}$ be a bounded sequence and $s=\sup\{x_n|n\in\mathbb N\}.$ Show that if $s\notin \{x_n|n\in\mathbb N\},$ then there exists a subsequence that converges to $s$.</p> </blockquote> <p>$s-1$ cannot be an upper bound, and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_1\in \mathbb N:n_1\ge1:s-1&lt;x_{n_1}&lt;s.$</p> <p>$s-\frac{1}{2}$ cannot be an upper bound, and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_2\in \mathbb N:n_2\ge1:s-\frac{1}{2}&lt;x_{n_2}&lt;s.$</p> <p>$\dots$</p> <p>$s-\frac{1}{k}$ cannot be an upper bound, and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_k\in \mathbb N:n_k\ge1:s-\frac{1}{k}&lt;x_{n_k}&lt;s.$</p> <p>$\dots$</p> <p><strong>My doubt</strong></p> <p>I can produce numbers like this. How do we guarantee that we can choose $n_2&gt;n_1$ such that $s-\frac{1}{2}&lt;x_{n_2}&lt;s$ holds? Similarly for the other cases. If we can show this, then the required subsequence would be $\{x_{n_k}\}$.</p>
user284331
284,331
<p>One can do it by choosing $n_{k+1}\geq 1$ such that $\max\left\{s-\dfrac{1}{k+1},x_{1},...,x_{n_{k}}\right\}&lt;x_{n_{k+1}}&lt;s$; since $x_{n_{k+1}}$ exceeds every one of $x_1,\dots,x_{n_k}$, this forces $n_{k+1}&gt;n_k$.</p>
2,461,962
<p>I'm looking to reproduce \begin{align} \partial_{j_m,j_n}\bigg|_{\bf{j}=0}\exp\left(\frac12\bf{j}^\top\bf{B}\bf{j}\right) = B_{mn} \end{align} where $B_{mn}=B_{nm}$ is a real, symmetric, positive-definite $N\times N$ matrix. I have tried the following, and I know this is incorrect due to the surplus of indices. \begin{align} \partial_{j_m,j_n}\bigg|_{\bf{j}=0}\exp\left(\frac12j_mB_{mn}j_n\right) &amp;= \left[\partial_{j_m} \frac12B_{mn}j_m\exp\left(\frac12j_mB_{mn}j_n\right)\right]_{\bf{j}=0} \\ &amp;= \left[\frac12B_{mn}\exp\left(\frac12j_mB_{mn}j_n\right) + \frac14B_{mn}j_mB_{mn}j_n\exp\left(\frac12j_mB_{mn}j_n\right)\right]_{\bf{j}=0} \\ &amp;= \frac12B_{mn} \end{align}</p> <p>I'm naively working with the indices it seems. Can someone clarify my mistakes?</p> <p>(PS. Please edit the title to something more descriptive if you can think of something.)</p> <p>With Jiaqi Li's answer, I think I've understood how to go about this: \begin{align} \partial_{j_m,j_n}\exp\left(\frac12j_rB_{rs}j_s\right) &amp;= \frac12\partial_{j_m}\left[\left(B_{ns}j_s+j_rB_{rn}\right)\exp\left(\frac12j_rB_{rs}j_s\right) \right] \\ &amp;= \frac12\left(B_{nm}+B_{mn}\right)\exp\left(\frac12j_rB_{rs}j_s\right) \\ &amp;\quad+ \frac14\left(B_{ns}j_s+j_rB_{rn}\right)\left(B_{ms}j_s+j_rB_{rm}\right)\exp\left(\frac12j_rB_{rs}j_s\right) \end{align}</p> <p>This appears to evaluate to the desired result for $\bf{B}$ symmetric. The two major mistakes were the initial usage of the same indices inside the exponential and forgetting that $\frac{\partial x_i}{\partial x_j} = \delta_{ij}$.</p>
Brethlosze
386,077
<p>Using doubled index sum notation the result is straightforward. $$ \left.\frac{\partial}{\partial x_m}\frac{\partial}{\partial x_n} e^{\frac12 x_iB_{ij}x_j}\right|_{x=0}\\ =\left.\frac{\partial}{\partial x_m} \frac 12 (B_{nq}x_q+x_pB_{pn})e^{\frac12 x_iB_{ij}x_j}\right|_{x=0}\\ =\left.(\frac 12 (B_{nm}+B_{mn})+\frac 14 (B_{nq'}x_{q'}+x_{p'}B_{p'n})(B_{mq''}x_{q''}+x_{p''} B_{p''m}))e^{\frac12 x_iB_{ij}x_j}\right|_{x=0}\\ =\frac 12 (B_{nm}+B_{mn})\\ $$ The given result holds when $B$ is symmetric.</p>
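The conclusion — for symmetric $B$ the Hessian of $e^{\frac12 x^\top Bx}$ at $x=0$ is $\frac12(B+B^\top)=B$ — is easy to confirm with central differences (a sketch; the test matrix is an arbitrary choice):

```python
import math

B = [[2.0, 1.0], [1.0, 3.0]]   # symmetric test matrix

def f(x):
    """exp(0.5 * x^T B x) for a length-2 vector x."""
    q = sum(x[i] * B[i][j] * x[j] for i in range(2) for j in range(2))
    return math.exp(0.5 * q)

def hessian_at_zero(h=1e-3):
    """Central-difference Hessian of f at the origin."""
    def pt(i, si, j, sj):
        v = [0.0, 0.0]
        v[i] += si * h
        v[j] += sj * h
        return v
    return [[(f(pt(m, 1, n, 1)) - f(pt(m, 1, n, -1))
              - f(pt(m, -1, n, 1)) + f(pt(m, -1, n, -1))) / (4 * h * h)
             for n in range(2)]
            for m in range(2)]
```

Note that the mixed-difference formula also covers the diagonal entries: for $m=n$ it reduces to the usual second difference with step $2h$.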
1,483,802
<p>Take $B(0,1)$ the ball in $\mathbb{R}^2$ with the normalized Lebesgue measure $\lambda$ such that $\int_{B(0,1)} d \lambda=1.$</p> <p>Now, I want to show, or give a counterexample that this is false, that for all $f \in H^1_0(B(0,1))$ we have for fixed constants $a,b&gt;0$ and any(!) $p \in (2,\infty)$ \begin{equation} ||f||_p^2 \le a \left(\int_{B(0,1)}| \nabla f|^2 d\lambda \right) + b ||f||_2^2. \end{equation}</p> <p>Does anybody know how to do this? The normal Sobolev inequality is apparently too weak to show this, as this holds for any $p$ and fixed $a,b$.</p>
Community
-1
<p>If such $a, b$ are found for an $f\in W^{1,2}_0(B)$, then $$\|f\|_p &lt;C$$</p> <p>for all $p &gt;2$. In particular, this shows $\|f\|_\infty \le C$, since <a href="https://math.stackexchange.com/questions/242779/limit-of-lp-norm">$\|f\|_p \to \|f\|_\infty$ as $p\to \infty$</a>.</p> <p>In particular, $f\in L^\infty$. Thus one cannot find such $a, b$ even for a fixed unbounded $f\in W^{1,2}_0(B)$. For example, pick $\phi$ to be a smooth function supported in $B_{3/4}(0)$ that equals $1$ on $B_{1/2}(0)$. Then </p> <p>$$f(x) = -\phi(x) \log\left(\log \left(1 +\frac 1{|x|}\right)\right)$$</p> <p>is such an example. </p>
1,457,623
<p>A floor is paved with rectangular marble blocks, each of length $a$ and breadth $b$. A circular block of diameter $c(c&lt;a,b)$ is thrown on the floor at random. Show that the chance that it falls entirely on one rectangular block is $\frac{(a-c)(b-c)}{ab}$<br></p> <p>I thought over this problem and found the total number of cases to be the area of a rectangular marble block, but I cannot find the number of favorable cases. It cannot be the area of the rectangular block $-$ the area of the circular block, as the stated answer shows. What should be the correct logic? Please help.</p>
ewcz
274,913
<p>To have the circle contained entirely in one block, its center must lie within an inner rectangle of size $(a-c)\times(b-c)$, since the center must be at a distance of at least $c/2$ from each of the edges. Dividing this by the total area $a\times b$ yields the desired result.</p> <p><a href="https://i.stack.imgur.com/BIEDs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIEDs.jpg" alt="enter image description here"></a></p>
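The formula is easy to corroborate by simulation: drop the center uniformly on a single tile and test whether it lies in the inner rectangle (a sketch; all parameters and names are arbitrary choices):

```python
import random

def estimate(a, b, c, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that the circle fits in one tile."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.uniform(0, a), rng.uniform(0, b)   # circle's center
        if c / 2 <= x <= a - c / 2 and c / 2 <= y <= b - c / 2:
            hits += 1
    return hits / trials

a, b, c = 3.0, 2.0, 1.0
est = estimate(a, b, c)
exact = (a - c) * (b - c) / (a * b)   # = 1/3 for these values
```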
299,405
<p>(a) In how many ways can the students answer a 10-question true false examination? </p> <p>(b) In how many ways can the student answer the test in part (a) if it is possible to leave a question unanswered in order to avoid an extra penalty for a wrong answer</p> <hr> <p>For part (a) I've got the answer, it is $2^{10}$.</p> <p>For part (b) I think the answer is $ 10 \times 2^9 $ because the number of ways to choose the question to answer is 10 and in each selection the number of ways to answer the question is $2^9$ but the answer provided in the book is $3^{10}$.</p> <p>Can someone explain to me?</p>
Andreas Caranti
58,401
<p>If the answer has to be $3^{10}$, then this means that in case (b) it is intended that <em>for each question</em> the student can choose 1 out of 3 possibilities: true, false, or no answer. </p>
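The three-options-per-question count can be seen by direct enumeration for a small exam (a sketch, names mine):

```python
from itertools import product

def answer_sheets(num_questions):
    """All ways to mark each question true, false, or leave it blank."""
    return list(product(("T", "F", "blank"), repeat=num_questions))
```

Then `len(answer_sheets(n))` equals `3 ** n`; for the original 10-question test this is 59049.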
3,609,191
<p>Actually I am not very comfortable with using blocks, I understand the definition that it is a maximal <span class="math-container">$2$</span>-connected graph, though.</p> <p><strong>My attempt</strong> Suppose not. Then there exists a maximal graph <span class="math-container">$G$</span> which cannot be written as an edge disjoint union of its blocks.</p> <p>I cannot go further. I am not even sure why maximal exists, because the number of vertices of <span class="math-container">$G$</span> are not predetermined here. I am not even sure how can we write complete graph in such a fashion.</p> <p>Please help</p>
Aryak Sen
768,532
<p>A block of a graph G is a maximal connected subgraph of G that has no articulation point (cut-point or cut-vertex). If G itself is connected and has no cut-vertex, then G is a block. Two or more blocks of a graph can meet in at most a single vertex, which must necessarily be an articulation point of the graph.</p> <p>Hence any graph can be written as an edge-disjoint union of its blocks, because of their maximality. We can prove this by contradiction. Suppose G is a graph which cannot be represented as an edge-disjoint union of blocks; then there are two distinct blocks <strong>X</strong> and <strong>Y</strong> which have a common edge <strong>uv</strong>. Without loss of generality, we may assume that <strong>u</strong> has a neighbour <strong>m</strong> in <strong>X</strong>. Since <strong>v</strong> and <strong>m</strong> are vertices of the block <strong>X</strong>, there exists a path <strong>P</strong> between <strong>v</strong> and <strong>m</strong> not containing <strong>u</strong>; evidently, <strong>P</strong> does not contain the edge <strong>uv</strong>. Also, a path can be represented as a union of single edges ($K_2$ graphs, which are by definition blocks). Let <strong>S</strong> be the set of edges of <strong>X</strong> other than <strong>uv</strong>, <strong>um</strong> and the edges of <strong>P</strong>. Then G can be represented by <strong>Y</strong> ∪ <strong>um</strong> ∪ <strong>P</strong> ∪ <strong>S</strong>, which can be intuitively viewed as a union of edge-disjoint blocks — a contradiction.</p>
84,605
<p>Let $w$ be a word in letters $x_1,...,x_n$. A value of $w$ is any word of the form $w(u_1,...,u_n)$ where $u_1,...,u_n$ are words. For example, $abaaba$ is a value of $x^2$. A word $u$ is called unavoidable if every infinite word in a finite alphabet contains a value of $u$ as a subword. There is a nice characterization of unavoidable words due to Zimin. A word $u$ in $n$ letters is unavoidable if and only if a value of $u$ is a subword of the $n$th Zimin word $Z_n$ defined by induction: $Z_1=x_1$,...,$Z_n=Z_{n-1}x_nZ_{n-1}$, that is $Z_1=x_1, Z_2=x_1x_2x_1, Z_3=x_1x_2x_1x_3x_1x_2x_1,...$. Zimin words appear very often in algebra. For example, if one lists binary expansions of all numbers $1,2,3,...$ and records the number of 0s at the end of each number, plus 1, one will get $12131214121...$ which is the infinite Zimin word. Values of Zimin words also appeared as $m$-sequences in Levitzki's description of Baer radical (see Jacobson's book "Structure of rings") and in the work of Schutzenberger. The Zimin words have obvious fractal structure, so these words could have appeared in other areas of mathematics as well. </p> <p><b> Question. </b> Do Zimin words appear in your area of mathematics?</p> <p>This might be a "big list" question. But I do not know how big the list is, it may be empty. If it turns out to be big, I will make the question "community wiki". </p> <p><b> Update 1. </b> Googling 121312141213121 ($=Z_4$) returns 439 results including a discussion at reddit. </p> <p><b> Update 2. </b> The most curious among these links is <a href="http://webcache.googleusercontent.com/search?q=cache:wJzPCA5sw0cJ:www.patents.com/us-4389538.html+&amp;cd=1&amp;hl=en&amp;ct=clnk&amp;gl=us">this link</a> to a US patent. It looks like Zimin words up to $Z_5$ at least have been patented before Zimin introduced them. </p> <p><b> Update 3. </b> This is needed for the book with the current title "Words and their meaning" which we are writing together with Mikhail Volkov. 
There we already have four different applications of Zimin words to different areas of algebra, and we would like to mention applications outside algebra as well. </p>
Gerhard Paseman
3,402
<p>Yes. However, the initial application is related to semigroup varieties, so it is likely very boring to you.</p> <p>In studying the hyperidentity for associativity, one can look at its representation on algebras of type &lt;2&gt;, a.k.a. groupoids or magmas or sets with one binary operation. Such an algebra is hyperassociative if each of the derived terms is an associative binary operation. Thus, such an algebra must be a semigroup and also satisfy xyxzxyx=xyzyx. As a result, the variety of such semigroups is locally finite, and this leads to a nice representation of all hyperassociative semigroups as a finitely based variety, with additional equations stating that x^2, x^2y, and xy^2 are also associative. Libor Polak published his analysis of this in Algebra Universalis, v 36, pp 363-378 (1996) and in an additional paper which discussed certain subvarieties of this variety.</p> <p>I studied hyperassociativity (whether it could be represented as a finite set of identities) in other similarity types as well, but did not use Zimin words as much. However, a weaker version of hyperidentities involving (essentially) a subset of derived terms was studied by Denecke and others, and for hyperassociativity I believe small Zimin words were used. However, my recollection on this is hazy, and should not be relied upon.</p> <p>Gerhard "Ask Me About Hyperassociative Algebras" Paseman, 2011.12.30</p>
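The Zimin recursion $Z_n=Z_{n-1}x_nZ_{n-1}$ from the question, together with its binary trailing-zeros description, can be sketched in a few lines (digits stand for the letters $x_1,x_2,\dots$; identifier names are mine):

```python
def zimin(n):
    """n-th Zimin word as a digit string: Z_1 = '1', Z_k = Z_{k-1} + k + Z_{k-1}."""
    w = "1"
    for k in range(2, n + 1):
        w = w + str(k) + w
    return w

def ruler(k):
    """Number of trailing zeros in the binary expansion of k, plus 1."""
    return (k & -k).bit_length()
```

The $k$-th letter of $Z_n$ (1-indexed, $k<2^n$) is exactly `ruler(k)`, which is the description of the infinite Zimin word given in the question.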
32,021
<p>On more than one occasion, always with an explicit disclaimer, I have posted a comment of more than 600 characters as an &quot;answer&quot;. I have done this because I have quite often seen other people do it, and I have never once, in 5 years in Maths.SE, seen anyone object to the practice. But a comment I posted in this way last night has been deleted. The reason given was &quot;low quality&quot;. (I have undeleted it, but I have no idea if that action will remain in effect for long.)</p> <p>Has there been a change in policy?</p> <p>Or is there some other reason why this comment in particular was singled out for deletion? Is it perhaps connected with there having been a truly extraordinary comment thread on the question? The thread - including the first comment, which had nothing to do with the very strange dispute that suddenly erupted - was deleted in its entirety, not even moved to chat (as I had requested, in order to mitigate the extreme distraction from the question that had been asked). That is something else that I have never seen happen in my 5 years in Maths.SE, and this coincidence seems highly unlikely to be accidental.</p> <p><a href="https://math.stackexchange.com/posts/3739469/timeline">Timeline for answer to Is the sequence <span class="math-container">$(B_n)_{n \in \Bbb{N}}$</span> unbounded, where <span class="math-container">$B_n := \sum_{k=1}^n\mathrm{sgn}(\sin(k))$</span>? by Calum Gilhooley - Mathematics Stack Exchange</a>.</p>
Aloizio Macedo
59,234
<p>First I must point out that this question is somewhat unclear. What do you mean by &quot;post a <em>comment</em> of more than 600 characters (...)&quot;? (Emphasis mine.) If &quot;comment&quot; means that <em>I would post it in the comments below the question, but there is no space</em>, then obviously this has never been OK. It is even somewhat self-contradictory, considering the role that comments have.</p> <p>By the development of the discussion, I take it you mean by &quot;comment&quot; just that it is not a complete answer, and I'll assume this to be the case for my answer.</p> <p>This is somewhat tricky. The short answer is: it depends on how useful the answer is.</p> <p>The long answer is that in general, the network allows (and wants) answers to be useful and relevant. It is the case that in Mathematics those two are heavily correlated with correctness and completeness. So the &quot;safest&quot; way to make a useful answer is just to make a complete and correct answer. Other kinds of answers that can also be OK can be summarized as &quot;partial&quot; and &quot;incomplete&quot;, and I'll clarify the distinction I make. (The way I use the terms may be nonstandard, which is why I'm clarifying.)</p> <p>By &quot;partial answers&quot; I mean answers that solve a problem considering some simplifying hypotheses. For example, a question which asks &quot;Is every foo a bar?&quot; can have a useful answer &quot;If the foo is also a xyzzy, then this holds because of (...)&quot;. How useful that is obviously depends on the question, on the restriction and the context, and will be evaluated by the community. (E.g., if someone asks &quot;Is every continuous real function <span class="math-container">$f$</span> on <span class="math-container">$[0,1]$</span> integrable&quot;, then answering it with &quot;Yes if we assume it to be constant&quot; is obviously not useful. 
)</p> <p>By &quot;incomplete answers&quot; I mean answers which make significant progress towards a solution but do not quite manage to close it up. Again, how useful that is will depend on the question, the context and will be evaluated by the community.</p> <p>Those kinds of answers <em>can be</em> OK. Again, it depends and will be evaluated by the community. As always, these evaluations can be contested by the community. Preferably, all this should happen in a reasonable way, no matter how outrageous one thinks something is.</p> <p>If an &quot;answer&quot; is not any of those, then it will usually be the case that it is not even an answer. And answers should be answers. (I know this sounds facetious, but there is no simpler way to put it. There is a flag for something being &quot;Not an answer&quot; for a reason.) So it is most likely not OK to post &quot;something&quot; as an answer which is not a complete and correct answer, or a partial answer, or an incomplete answer. (There may be exceptions, but I cannot think of one.) For example, we frequently delete things posted as &quot;answers&quot; that are questions about other answers, or corrections about other answers etc.</p> <p>Now, turning to the specific case of your answer. The first version of your answer is not a complete and correct answer, not a partial answer and not an incomplete answer. (As per what I defined above.) This already points to it being potentially not useful. But more specifically, it was just some computations. It is entirely reasonable to be considered not an answer, and as such, should have been flagged as such and elaborated upon or deleted. To be honest, I say this almost objectively.</p>
3,337,440
<p>Suppose that, in a memoryless way, an object A can suddenly transform into object B or object C. Once it transforms, it can no longer transform again (so if it becomes B, it cannot become C, and vice versa) </p> <p>Suppose that the pdf of an object A becoming object B is </p> <p><span class="math-container">$$\lambda e^{-\lambda t}$$</span></p> <p>Where <span class="math-container">$t$</span> is the time of the transition</p> <p>And that same object A becoming object C instead has a pdf of</p> <p><span class="math-container">$$\mu e^{-\mu t}$$</span></p> <p>We can integrate over the timespan <span class="math-container">$\tau$</span> of the experiment to derive</p> <p>P(A -> B over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\lambda \tau}$</span></p> <p>P(A -> C over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\mu \tau}$</span></p> <p>But the events A -> B over timespan <span class="math-container">$\tau$</span> and A -> C over timespan <span class="math-container">$\tau$</span> are mutually exclusive, so in theory </p> <p>P(A transitions to B or C over timespan <span class="math-container">$\tau$</span>) = P(A -> B over timespan <span class="math-container">$\tau$</span>) + P(A -> C over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\lambda \tau} + 1-e^{-\mu \tau} = 2-(e^{-\lambda \tau} + e^{-\mu \tau})$</span></p> <p>But this is clearly incorrect, because as the timespan <span class="math-container">$\tau$</span> increases without bound, the probability of transition increases to be greater than 1, which is impossible.</p> <p>Where did I go wrong? At first pass, everything I did seems correct, but it obviously isn't. </p>
Grada Gukovic
679,434
<p>Although the underlying processes are independent, the events <span class="math-container">$A \rightarrow B$</span> and <span class="math-container">$A \rightarrow C$</span> are not independent. <span class="math-container">$A \rightarrow B$</span> can occur only if <span class="math-container">$A \rightarrow C$</span> has not already occurred and vice versa. Thus you are looking for the probabilities of the events: <span class="math-container">$A \rightarrow B$</span> occurs before <span class="math-container">$A \rightarrow C$</span> and before <span class="math-container">$\tau:$</span> <span class="math-container">$P((A \rightarrow B) \cap (A \rightarrow C)^c) = \int_{0}^{\tau}\lambda e^{-\lambda t} e^{-\mu t} dt$</span> and </p> <p><span class="math-container">$A \rightarrow C$</span> occurs before <span class="math-container">$A \rightarrow B$</span> and before <span class="math-container">$\tau:$</span> <span class="math-container">$P((A \rightarrow C) \cap (A \rightarrow B)^c) = \int_{0}^{\tau}\mu e^{-\mu t} e^{-\lambda t} dt$</span>.</p> <p>As <span class="math-container">$\tau \rightarrow \infty$</span> one of the events will occur.</p>
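As a sanity check, the closed forms of these two integrals can be compared against a direct simulation of the race; the Python sketch below (parameter values and variable names are mine) does so.

```python
import math
import random

random.seed(0)
lam, mu, tau = 1.3, 0.7, 2.0
N = 200_000

# Simulate the race: candidate times T_B ~ Exp(lam) and T_C ~ Exp(mu);
# the transition that fires first (and before tau) is the one that happens.
b_wins = c_wins = 0
for _ in range(N):
    tb = random.expovariate(lam)
    tc = random.expovariate(mu)
    if tb < min(tc, tau):
        b_wins += 1
    elif tc < min(tb, tau):
        c_wins += 1

# Closed forms of the integrals in the answer:
# P(A -> B first, before tau) = int_0^tau lam e^{-lam t} e^{-mu t} dt
p_b = lam / (lam + mu) * (1 - math.exp(-(lam + mu) * tau))
p_c = mu / (lam + mu) * (1 - math.exp(-(lam + mu) * tau))

print(p_b, b_wins / N)
print(p_c, c_wins / N)
print(p_b + p_c)   # stays below 1 for every tau
```

The two probabilities now sum to at most 1, resolving the paradox in the question.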
2,189,123
<p>There are 36 gangsters, and several gangs these gangsters belong to. No two gangs have identical roster, and no gangster is an enemy of anyone in their gang. However, each gangster has at least one enemy in every gang they are <strong>not</strong> in. What is the greatest possible number of gangs? </p>
bof
111,012
<p>This question is the case $n=36$ of the general problem: find the maximum number of maximal cliques in a graph on $n$ vertices. The general solution is given by OEIS sequence <a href="http://oeis.org/A000792" rel="nofollow noreferrer">A000792</a>, namely: $a(3k)=3^k,$ $\ a(3k+1)=4\cdot3^{k-1},$ $\ a(3k+2)=2\cdot3^k,$ except that $a(1)=1.$ The OEIS cites M. Capobianco and J. C. Molluzzo, <em>Examples and Counterexamples in Graph Theory</em>, North-Holland 1978, p. 207 for this result.</p> <p><strong>Update.</strong> This is a 1960 result of R. E. Miller and D. E. Muller:</p> <blockquote> <p><a href="http://users.monash.edu.au/~davidwo/MillerMuller-NumberMaximalCliques.pdf" rel="nofollow noreferrer">R. E. Miller and D. E. Muller, A problem of maximum consistent subsets, IBM Research Report RC-240, J. T. Watson Research Center, 1960.</a></p> </blockquote> <p>The same result was obtained independently by John W. Moon and Leo Moser in 1965:</p> <blockquote> <p><a href="http://users.monash.edu.au/~davidwo/MoonMoser65.pdf" rel="nofollow noreferrer">J. W. Moon and L. Moser, On cliques in graphs, Israel J. Math. 3 (1965), 23–28.</a></p> </blockquote> <p>A new and simple proof was given by David R. Wood in 2007:</p> <p><a href="https://arxiv.org/pdf/1104.1243.pdf" rel="nofollow noreferrer">David R. Wood, On the number of maximal independent sets in a graph, Graphs Combin. 23 (2007), 337–352.</a></p> <p>These references and more can be found in the answers to <a href="https://cstheory.stackexchange.com/questions/8390/the-number-of-cliques-in-a-graph-the-moon-and-moser-1965-result">this cstheory.stackexchange question</a>.</p>
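The piecewise formula quoted above is easy to tabulate; a small Python sketch (function name is mine) evaluates it, giving $3^{12}=531441$ for the $n=36$ case of the question.

```python
def max_maximal_cliques(n):
    # OEIS A000792: maximum number of maximal cliques in an n-vertex graph
    if n == 1:
        return 1
    q, r = divmod(n, 3)
    return {0: 3 ** q, 1: 4 * 3 ** (q - 1), 2: 2 * 3 ** q}[r]

print([max_maximal_cliques(n) for n in range(1, 8)])   # 1, 2, 3, 4, 6, 9, 12
print(max_maximal_cliques(36))                         # 3**12 = 531441
```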
3,169,668
<p>If A and B don't commute, are there counterexamples where AB is diagonalizable but BA is not?</p> <p>I read that if AB=BA then both AB and BA are diagonalizable. </p>
Michael Biro
29,356
<p>Try <span class="math-container">$A = \begin{bmatrix}0&amp;0\\1&amp;0\end{bmatrix}$</span> and <span class="math-container">$B = \begin{bmatrix}0&amp;0\\0&amp;1\end{bmatrix}$</span></p>
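The counterexample can be checked mechanically; the pure-Python sketch below (helper names mine) uses the fact that a $2\times2$ matrix is diagonalizable over $\mathbb{C}$ iff its eigenvalues are distinct or it is a scalar multiple of the identity.

```python
def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def diagonalizable_2x2(m):
    # Distinct eigenvalues <=> nonzero discriminant of the characteristic
    # polynomial; otherwise only scalar matrices are diagonalizable.
    (a, b), (c, d) = m
    disc = (a + d) ** 2 - 4 * (a * d - b * c)   # tr^2 - 4 det
    return disc != 0 or (b == 0 and c == 0 and a == d)

A = [[0, 0], [1, 0]]
B = [[0, 0], [0, 1]]
AB = matmul(A, B)   # the zero matrix: trivially diagonalizable
BA = matmul(B, A)   # nonzero nilpotent: not diagonalizable
print(AB, diagonalizable_2x2(AB))
print(BA, diagonalizable_2x2(BA))
```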
1,926,423
<p>My problem with this is the final step. Through iterative substitution, I come up with the following: $$T(n) = T(n-4) + (n-3) + (n-2) + (n-1) + n$$</p> <p>which leads to the general form: $$T(n) = T(n-k) + kn - \frac{[k(k-1)]}{2}$$</p> <p>The restrictions are $T(1)=1$ and $n=2^k-1$. What I am doing at this point is solving for $k$, when $T(1) = 1$ which gives me : $$n = 2^k - 1$$ $$n + 1 = 2^k$$ $$2 = 2^k$$ ($n=1$, since this is for $T(1)=1$)</p> <p>$$1 = k$$</p> <p>plugging back in I just come up with the original formula which is obviously not correct. </p> <p>What am I doing wrong in the final step (if that really is my problem)? Based on other examples I have seen, this would work out nicely if I can put $k$ in terms of $n$ instead of a constant, but I am drawing a blank on how to do this and I cannot find it anywhere.</p>
Olivier Oloa
118,798
<p>Alternatively, one may use a <em>telescoping sum</em>. From $$ T(k)-T(k-1)=k, \qquad k=1,2,3,\cdots, \tag1 $$ one gets $$ \sum_{k=1}^n \left(T(k)-T(k-1)\right)=T(n)-T(0)=\sum_{k=1}^n k, \qquad n\ge1, $$ giving</p> <blockquote> <p>$$ T(n)=\frac{n(n+1)}2+T(0),\qquad n\ge1. $$</p> </blockquote>
1,926,423
<p>My problem with this is the final step. Through iterative substitution, I come up with the following: $$T(n) = T(n-4) + (n-3) + (n-2) + (n-1) + n$$</p> <p>which leads to the general form: $$T(n) = T(n-k) + kn - \frac{[k(k-1)]}{2}$$</p> <p>The restrictions are $T(1)=1$ and $n=2^k-1$. What I am doing at this point is solving for $k$, when $T(1) = 1$ which gives me : $$n = 2^k - 1$$ $$n + 1 = 2^k$$ $$2 = 2^k$$ ($n=1$, since this is for $T(1)=1$)</p> <p>$$1 = k$$</p> <p>plugging back in I just come up with the original formula which is obviously not correct. </p> <p>What am I doing wrong in the final step (if that really is my problem)? Based on other examples I have seen, this would work out nicely if I can put $k$ in terms of $n$ instead of a constant, but I am drawing a blank on how to do this and I cannot find it anywhere.</p>
AlphaNumeric
96,679
<p>$\begin{array}{rcl}T(n) &amp;=&amp; T(n-1) + n \\ &amp;=&amp; T(n-2) + (n-1) + n \\ &amp;=&amp; \ldots \\ &amp;=&amp; T(n-[n-1]) + (n-[n-2]) + (n-[n-3]) + \ldots + n \\ &amp;=&amp; T(1) + 2 + 3 + \ldots + n \\ &amp;=&amp; 1 + 2 + 3 + \ldots + n \\ &amp;=&amp; \sum_{j=1}^{n}j \\ &amp;=&amp; \frac{1}{2}n(n+1) \end{array}$ </p> <p>If $n = 2^{k}-1$ then $T(n) = T(2^{k}-1) = \frac{1}{2}(2^{k}-1)(2^{k}-1+1) = (2^{k}-1)2^{k-1}$</p>
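Both closed forms can be checked directly against the recurrence; a quick Python sketch (names mine):

```python
def T(n):
    # iterate the recurrence T(n) = T(n-1) + n with T(1) = 1
    t = 1
    for i in range(2, n + 1):
        t += i
    return t

for n in range(1, 200):
    assert T(n) == n * (n + 1) // 2               # general closed form

for k in range(1, 12):
    n = 2 ** k - 1
    assert T(n) == (2 ** k - 1) * 2 ** (k - 1)    # the n = 2^k - 1 case
print("ok")
```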
2,584,688
<p>Consider normed spaces $X$ and $Y$. You can assume that they are Banach spaces if needed. Let $\mathcal{L}(X, Y)$ denote the spaces of bounded linear operators from $X$ to $Y.$ Now consider the set </p> <p>$$\Omega=\{T \in L(X,Y): T \textrm{ is onto}\}.$$ Is $\Omega$ open with the norm topology? </p>
mechanodroid
144,766
<p>$\Omega$ is not open in general, at least if $X$ and $Y$ are not assumed to be Banach.</p> <p>Consider $c_{00}$, the space of all finitely-supported sequences, equipped with $\|\cdot\|_2$.</p> <p>Define $A : c_{00} \to c_{00}$ as </p> <p>$$A(x_1, x_2, x_3, \ldots) = \left(x_1, \frac12x_2, \frac13x_3, \ldots\right)$$</p> <p>$A$ is bounded and onto.</p> <p>However, take any $\varepsilon &gt; 0$ and pick $n_0 \in \mathbb{N}$ such that $\frac{1}{n_0} &lt; \varepsilon$.</p> <p>Define $A_{n_0} : c_{00} \to c_{00}$ as:</p> <p>$$A_{n_0}(x_1, x_2, x_3, \ldots) = \left(x_1, \frac12x_2, \frac13x_3, \ldots, \frac1{n_0 - 1}x_{n_0 - 1}, 0, \frac1{n_0+1}x_{n_0+1}, \ldots\right)$$</p> <p>$A_{n_0}$ is bounded and not onto. Furthermore, we have:</p> <p>$$(A - A_{n_0})(x_1, x_2, \ldots ) = \left(0, \ldots, 0, \frac{1}{n_0} x_{n_0}, 0, \ldots\right)$$</p> <p>Therefore $\|A - A_{n_0}\| = \frac{1}{n_0} &lt; \varepsilon$.</p>
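The key computation $\|A - A_{n_0}\| = 1/n_0$ can be probed numerically on truncated vectors; a small Python sketch (names and truncation length mine):

```python
import math
import random

random.seed(0)
n0 = 5
dim = 10   # truncation: vectors supported on the first 10 coordinates

def D(x):
    # (A - A_{n0}) zeroes every coordinate except the n0-th, scaled by 1/n0
    return [xi / n0 if i == n0 - 1 else 0.0 for i, xi in enumerate(x)]

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

# the bound ||D x|| <= ||x|| / n0 on random finitely supported vectors ...
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in range(dim)]
    assert norm(D(x)) <= norm(x) / n0 + 1e-12

# ... with equality at the n0-th basis vector, so ||A - A_{n0}|| = 1/n0
e = [0.0] * dim
e[n0 - 1] = 1.0
print(norm(D(e)))   # 0.2 == 1/n0
```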
286,574
<p>How can one prove that each non-convex polygon with no self-intersecting parts has at least one interior angle whose size is less than $180$ degrees?</p>
Gerry Myerson
8,269
<p>The formula for the sum of the angles of a polygon, as a function of the number of sides, holds for non-convex as well as for convex polygons, and the result follows immediately from that formula. </p>
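The counting step left implicit here can be spelled out numerically; a tiny Python sketch (assuming the standard angle-sum formula $(n-2)\cdot180^\circ$ for simple polygons):

```python
def angle_sum(n):
    # interior angle sum of a simple n-gon, convex or not
    return (n - 2) * 180

# If every interior angle were at least 180 degrees, the total would be
# at least 180*n, which exceeds (n-2)*180; so some angle is below 180.
for n in range(3, 50):
    assert 180 * n > angle_sum(n)
print(angle_sum(3), angle_sum(4))   # 180 360
```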
286,574
<p>How can one prove that each non-convex polygon with no self-intersecting parts has at least one interior angle whose size is less than $180$ degrees?</p>
Community
-1
<p>Hint: Consider the vertex with the largest $x$-coordinate.</p>
4,142,540
<p>Let <span class="math-container">$V$</span> a vector subspace of dimension <span class="math-container">$n$</span> on <span class="math-container">$\mathbb R$</span> and <span class="math-container">$f,g \in V^* \backslash \{0\}$</span> two linearly independent linear forms. I want to show that <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span>.</p> <p>Since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are linear forms, I know that dim ker <span class="math-container">$f =n-1$</span> and dim ker <span class="math-container">$g =n-1$</span>. I think I should use the fact that the two forms are linearly independent to <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span> but I don't really see how...</p> <p>I saw a proof with scalar product but I would like to see an alternative proof or maybe an explanation of the scalar product's proof.</p>
Tsemo Aristide
280,301
<p>If <span class="math-container">$f,g$</span> are linearly independent, then <span class="math-container">$\ker f +\ker g=V$</span>.</p> <p><span class="math-container">$\dim(\ker f+\ker g) =\dim \ker f +\dim \ker g-\dim(\ker f\cap \ker g)$</span> implies that</p> <p><span class="math-container">$n = (n-1)+(n-1)-\dim(\ker f\cap \ker g)$</span>, and so <span class="math-container">$\dim(\ker f\cap \ker g)=n-2$</span>.</p>
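The rank–nullity count is easy to confirm on a concrete example; the Python sketch below (exact rational arithmetic, example vectors mine) computes $\dim(\ker f\cap\ker g)$ as $n$ minus the rank of the matrix whose rows are $f$ and $g$.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                fac = m[i][col] / m[r][col]
                m[i] = [a - fac * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n = 7
f = [1, 2, 0, -1, 3, 5, 2]   # two linearly independent forms on R^7
g = [0, 1, 4, 2, -2, 1, 0]

nullity = n - rank([f, g])   # dim(ker f  ∩  ker g) by rank-nullity
print(nullity)               # 5 == n - 2
```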
243,903
<p>I have posted this question on Math StackExchange but did not get any answer, so I am trying my luck here. </p> <p>The only simple finite groups admitting an irreducible character of degree 3 are $\mathfrak{A}_5$ and $PSL(2,7)$. That seems to be a result coming from Blichfeldt's work on $GL(3,\mathbb{C})$, which I cannot find. Is there a proof available somewhere? </p>
yakov
92,209
<p>A comprehensive report on groups possessing a faithful irreducible character of small degree is contained in Walter Feit's report `The current situation in the theory of finite simple groups, Actes Congr. Internat. Math. Nice 1970, vol. 1 Gauthier-Villars, Paris, 1971, 55-93.'</p>
3,453,175
<p>If <span class="math-container">$y=\dfrac {1}{x^x}$</span> then show that <span class="math-container">$y'' (1)=0$</span></p> <p>My Attempt:</p> <p><span class="math-container">$$y=\dfrac {1}{x^x}$$</span> Taking <span class="math-container">$\ln$</span> on both sides, <span class="math-container">$$\ln (y)= \ln \left(\dfrac {1}{x^x}\right)$$</span> <span class="math-container">$$\ln (y)=-x.\ln (x)$$</span> Differentiating both sides with respect to <span class="math-container">$x$</span> <span class="math-container">$$\dfrac {1}{y}\cdot y'=-(1+\ln (x))$$</span></p>
lab bhattacharjee
33,337
<p><span class="math-container">$y(1)=?$</span></p> <p><span class="math-container">$$-y_1=y(1+\ln x)$$</span></p> <p><span class="math-container">$y_1(1)=?$</span></p> <p><span class="math-container">$-y_2=y_1(1+\ln x)+y/x$</span></p> <p><span class="math-container">$$y_2(1)=-y_1(1)-y(1)=?$$</span></p>
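The claimed value $y''(1)=0$ can be verified numerically; the Python sketch below (step size mine) compares a central second difference with the closed form $y''=((1+\ln x)^2-1/x)\,y$ obtained by differentiating $\ln y=-x\ln x$ twice.

```python
import math

def y(x):
    return x ** (-x)          # y = 1 / x^x

def y2(x):
    # y' = -(1 + ln x) y, and differentiating once more:
    # y'' = ((1 + ln x)^2 - 1/x) * y
    return ((1 + math.log(x)) ** 2 - 1 / x) * y(x)

h = 1e-5
numeric = (y(1 + h) - 2 * y(1) + y(1 - h)) / h ** 2   # central difference at x = 1
print(y2(1), numeric)   # both ~ 0
```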
4,166,579
<p>I'm trying to write this proof but I'm not completely sure how to start. Discrete math has been pretty rough for me so far, so any help would be greatly appreciated!</p>
Robert Lee
695,196
<p><em><strong>Hint:</strong></em> If <span class="math-container">$f(S) \subseteq f(T)$</span> then for every <span class="math-container">$s \in S$</span> there exists a <span class="math-container">$t \in T$</span> such that <span class="math-container">$f(s) = f(t)$</span>. Recalling that <span class="math-container">$f$</span> is injective, what can you do with the previous equation?</p>
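The hint can be brute-force checked on a small finite example; the Python sketch below (domain and map are mine) verifies exhaustively that for an injective $f$, $f(S)\subseteq f(T)$ forces $S\subseteq T$.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

dom = [1, 2, 3, 4]
f = lambda x: 10 * x          # an injective map on dom

# exhaustive check: whenever f(S) is a subset of f(T), S is a subset of T
for S in subsets(dom):
    for T in subsets(dom):
        if {f(s) for s in S} <= {f(t) for t in T}:
            assert set(S) <= set(T)
print("ok")
```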
1,703,120
<p>So I have a vector <span class="math-container">$a =( 2 ,2 )$</span> and a vector <span class="math-container">$b =( 0, 1 )$</span>.<br /> As my teacher told me, <span class="math-container">$ab = (-2, -1 )$</span>.</p> <p><span class="math-container">$ab = b-a = ( 0, 1 ) - ( 2, 2 ) = ( 0-2, 1-2 ) = ( -2, -1 )$</span><br /> <span class="math-container">$ab = a-b = ( 2 ,2 ) - ( 0 ,1 ) = ( 2-0,2-1 ) = ( 2 ,1 )$</span></p> <p>It seems like it's the same, but the negative signs are gone.</p> <p>Why do I have to subtract a from b to get ab? Why not a-b or a+b?</p>
chandings
203,901
<p>I know I am late to answer, and I do not have a lot of knowledge of mathematics, but I will try to simplify. Please do not refrain from commenting if I get it horribly or even slightly wrong.</p> <p>To simplify, let us first imagine things in one dimension, that is, on a number line. Now we have two points on the number line, a and b.</p> <p>b can either be in the positive direction, i.e. right of a, or in the negative direction, i.e. left of a.</p> <p>So to find the direction from a to b, i.e. ab, we subtract a from b, i.e. b - a. If b is greater than a (i.e. in the positive direction) we get a positive answer, and if b is less than a (i.e. in the negative direction) we get a negative answer.</p> <p>When we move to 2D the same logic is applied to find the direction, hence</p> <p>ab = b - a, i.e. the direction from a to b,</p> <p>and ba = a - b, i.e. the direction from b to a.</p> <p>It is also important to understand that the directions from a to b and from b to a are different values.</p> <p>Hope this helps.</p>
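The defining property behind this, that starting at a and translating by b - a lands on b, is easy to demonstrate; a tiny Python sketch with the question's vectors:

```python
a = (2, 2)
b = (0, 1)

ab = (b[0] - a[0], b[1] - a[1])   # displacement from a to b
ba = (a[0] - b[0], a[1] - b[1])   # displacement from b to a

print(ab, ba)   # (-2, -1) and (2, 1)

# the defining property: a + ab == b, and b + ba == a
assert (a[0] + ab[0], a[1] + ab[1]) == b
assert (b[0] + ba[0], b[1] + ba[1]) == a
```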
2,263,230
<p>Let's say I wanted to express sqrt(4i) in a + bi form. A cursory glance at WolframAlpha tells me it has not just a solution of 2e^(i*Pi/4), which I found, but also 2e^(i*(-3Pi/4))</p> <p>Why do roots of unity exist, and why do they exist in this case? How could I find the second solution? </p>
Kaynex
296,320
<p>$$\sqrt{4i}$$ $$= \sqrt{4e^{i(\pi /2 + 2\pi n)}}$$ $$= 2e^{i(\pi/4 + \pi n)}$$ $$= 2e^{\pi i/4}, 2e^{5\pi i/4}$$</p> <p>Every nonzero number has two square roots, three cube roots, four fourth roots, etc.</p>
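The two roots can be computed programmatically from the polar form; a Python sketch (variable names mine):

```python
import cmath

z = 4j
r, theta = cmath.polar(z)     # 4i = 4 e^{i pi/2}, so r = 4, theta = pi/2

# the two square roots sqrt(r) e^{i(theta/2 + k pi)}, k = 0, 1
roots = [cmath.rect(r ** 0.5, theta / 2 + k * cmath.pi) for k in range(2)]
print(roots)   # 2 e^{i pi/4} = sqrt(2)(1 + i)  and  2 e^{i 5pi/4} = -sqrt(2)(1 + i)
```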
279,994
<p>Note that S[n] is the sum of the first n terms of the sequence a[n]. It is known that a[1]==1, and that the sequence {S[n]/a[n]} is an arithmetic sequence with common difference 1/3. Find the general term formula of the sequence a[n].</p> <p>Let b[n]==S[n]/a[n]; first work out the general term formula of b[n], and then work out a[n]?</p> <pre><code>RSolve[{b[n + 1] == b[n] + 1/3, b[1] == 1}, b[n], n] </code></pre>
Bob Hanlon
9,362
<pre><code>Clear[a, b, m, S]; b[n_] = RSolveValue[ {b[n + 1] == b[n] + 1/3, b[1] == 1}, b[n], n] (* (2 + n)/3 *) a[1] = 1; S[n_] = Sum[a[k], {k, 1, n}]; m = 5; sol = Solve[Table[b[n] == S[n]/a[n], {n, 2, m}], Array[a, m - 1, 2]][[1]] (* {a[2] -&gt; 3, a[3] -&gt; 6, a[4] -&gt; 10, a[5] -&gt; 15} *) seq = Array[a, m] /. sol (* {1, 3, 6, 10, 15} *) a[n_] = FindSequenceFunction[seq, n] (* 1/2 n (1 + n) *) </code></pre> <p>Verifying,</p> <pre><code>b[n] == S[n]/a[n] (* True *) </code></pre>
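The Mathematica result can be cross-checked independently; the Python sketch below (exact rationals) confirms that with $a(n)=n(n+1)/2$, the sequence $b[n]=S[n]/a[n]$ equals $(n+2)/3$, an arithmetic sequence with common difference $1/3$ and $b[1]=1$.

```python
from fractions import Fraction

def a(n):
    return Fraction(n * (n + 1), 2)   # candidate general term from the answer

S = Fraction(0)
prev = None
for n in range(1, 100):
    S += a(n)                         # S[n] = a[1] + ... + a[n]
    b = S / a(n)
    assert b == Fraction(n + 2, 3)    # b[n] = (n + 2)/3, so b[1] = 1
    if prev is not None:
        assert b - prev == Fraction(1, 3)   # common difference 1/3
    prev = b
print("ok")
```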
3,353,826
<p>All the vertices of quadrilateral <span class="math-container">$ABCD$</span> are at the circumference of a circle and its diagonals intersect at point <span class="math-container">$O$</span>. If <span class="math-container">$∠CAB = 40°$</span> and <span class="math-container">$∠DBC = 70°$</span>, <span class="math-container">$AB = BC$</span>, then find <span class="math-container">$∠DCO$</span>.</p>
Bart Michels
43,288
<p>I have seen <span class="math-container">$\lfloor x \rceil$</span>. It must have been in the context of math olympiads, so I can't point to a book that uses it. Wikipedia suggest this notation, among others: <a href="https://en.wikipedia.org/wiki/Nearest_integer_function" rel="noreferrer">nearest integer function</a>.</p> <p>Personally, I would prefer <span class="math-container">$[x]$</span>, being a cleaner mix of <span class="math-container">$\lfloor x \rfloor$</span> and <span class="math-container">$\lceil x \rceil$</span>. But I've seen this notation being used for the floor function. Especially in older texts, say, pre-TeX era.</p> <p>You could also do something like <span class="math-container">$\mathrm{nint}(x)$</span>, but in formulas that could be cumbersome.</p> <p>See also the remarks at <a href="http://mathworld.wolfram.com/NearestIntegerFunction.html" rel="noreferrer">Mathworld</a>.</p>
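The ambiguity at half-integers shows up in programming languages too; the Python sketch below (the helper name `nint` is mine) contrasts the built-in round-half-to-even with a half-up convention for $\lfloor x\rceil$.

```python
import math

def nint(x):
    # round-half-up convention for the nearest integer function
    return math.floor(x + 0.5)

# Python's built-in round uses round-half-to-even ("banker's rounding"):
print(round(0.5), round(1.5), round(2.5))   # 0 2 2
print(nint(0.5), nint(1.5), nint(2.5))      # 1 2 3
print(nint(-0.5))                           # 0
```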
1,480,179
<p>Let $n \ge 2$, let $A \in M_n$ be Hermitian, and let $B \in M_{n-1}$ be a leading principal submatrix of $A$.</p> <p>If $B$ is positive semidefinite and $\operatorname{rank} B = \operatorname{rank} A$, why is $A$ positive semidefinite?</p>
user1551
1,551
<p>The size of $B$ doesn't matter (you don't need $B\in M_{n-1}$; $B$ can be smaller sized). As long as it is a positive semidefinite leading principal submatrix of a Hermitian matrix $A$ of the same rank, $A$ must be positive semidefinite.</p> <p>Let $B$ be $r\times r$ and $A=\pmatrix{B&amp;X\\ X^\ast&amp;Y}$. Since $A$ has the same rank as $B$, its last $n-r$ columns must be linear combinations of its first $r$ columns. That is, $\pmatrix{X\\ Y}=\pmatrix{B\\ X^\ast}C$ for some matrix $C$. It follows that $X=BC,\ Y=C^\ast BC$ and $$ A=\pmatrix{B&amp;BC\\ C^\ast B&amp;C^\ast BC} =\pmatrix{B^{1/2}\\ C^\ast B^{1/2}} \pmatrix{B^{1/2}&amp;B^{1/2}C}, $$ which is positive semidefinite because it is a Gram matrix.</p>
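The Gram-matrix conclusion can be probed numerically; the Python sketch below (sizes and seed mine, real case for simplicity) assembles $A$ from a random PSD $B$ and a random $C$ exactly as above, then checks that $x^TAx\ge0$ on random vectors.

```python
import random

random.seed(1)
r, s = 3, 2

def mul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def tr(a):
    return [list(col) for col in zip(*a)]

M = [[random.uniform(-1, 1) for _ in range(r)] for _ in range(r)]
B = mul(tr(M), M)        # B = M^T M is positive semidefinite
C = [[random.uniform(-1, 1) for _ in range(s)] for _ in range(r)]

# assemble A = [[B, BC], [C^T B, C^T B C]] as in the answer
top = [rb + rbc for rb, rbc in zip(B, mul(B, C))]
CtB = mul(tr(C), B)
bottom = [rcb + rcbc for rcb, rcbc in zip(CtB, mul(CtB, C))]
A = top + bottom

# x^T A x = (x1 + C x2)^T B (x1 + C x2) >= 0, so sampling never goes negative
worst = 0.0
for _ in range(2000):
    x = [random.uniform(-1, 1) for _ in range(r + s)]
    q = sum(x[i] * A[i][j] * x[j] for i in range(r + s) for j in range(r + s))
    worst = min(worst, q)
print(worst)   # never meaningfully below 0
```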
3,977,281
<p>Is it possible for a function to not have a maximum or a minimum? (So that I can't find the decreasing and increasing intervals.) If so, how do we show it mathematically?</p> <p>I was practicing and found these two functions.</p> <p><span class="math-container">$a. f(x) = x+\sqrt{x^2-1} $</span> and <span class="math-container">$b. f(x) = \frac{x^2}{x^2+4} $</span></p> <p><span class="math-container">$a. f'(x) = 1 + \frac{x}{\sqrt{x^2-1}} = 0$</span><br /> However, I can't find the extrema.<br /> I tried to multiply with <span class="math-container">$\frac{\sqrt{x^2-1}}{\sqrt{x^2-1}}$</span> and obtain <span class="math-container">$x\sqrt{x^2-1}=-x^2+1$</span><br /> But I still can't find the extrema. If I only paid attention to <span class="math-container">$\frac{1}{\sqrt{x^2-1}}$</span>, I'll obtain <span class="math-container">$x=-1$</span> or <span class="math-container">$x=1$</span>.<br /> Thus, <span class="math-container">$f(-1)=-1$</span> as the minimum value and <span class="math-container">$f(1)=1$</span> as the maximum value.<br /> But I think this is wrong.</p> <p><span class="math-container">$b. f'(x) = \frac{2x(x^2+4)-x^2(2x)}{(x^2+4)^2}=0$</span><br /> <span class="math-container">$x=0$</span> <br /> <span class="math-container">$---0+++$</span> <br /> So I know that <span class="math-container">$0$</span> will give us the minimum value. But what about the maximum?<br /> <span class="math-container">$f''(x)=\frac{8(-3x^2+4)}{(x^2+4)^3}$</span> <br /> <span class="math-container">$f''(0)=\frac{1}{2} &gt;0$</span> So 0 really gives the minimum.<br /> So, the function decreases when x&lt;0 and increases when x&gt;0.</p>
saulspatz
235,128
<p>For the first problem, <span class="math-container">$1 + \frac{x}{\sqrt{x^2-1}} = 0$</span> gives <span class="math-container">$$\frac{x}{\sqrt{x^2-1}}=-1\\x=-\sqrt{x^2-1}\\x^2=x^2-1$$</span> which has no solution, so the derivative never vanishes. That means that the function has no extremum at a point of differentiability, but we still have to consider other points.</p> <p>The natural domain of the function is <span class="math-container">$|x|\geq1$</span> and the function fails to be differentiable at <span class="math-container">$x=\pm1$</span>. The function takes its absolute minimum value of <span class="math-container">$-1$</span> at <span class="math-container">$x=-1$</span>, which you can show by proving that <span class="math-container">$f$</span> decreases when <span class="math-container">$x&lt;-1$</span>. (Obviously, <span class="math-container">$f(x)&gt;0$</span> for <span class="math-container">$x\geq1$</span>.)</p>
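A quick numerical look (Python sketch, sample points mine) supports this: $f(-1)=-1$, and the values increase toward $0$ as $x\to-\infty$, so $f$ is decreasing on $(-\infty,-1]$ and the absolute minimum $-1$ is attained at $x=-1$.

```python
import math

def f(x):
    return x + math.sqrt(x * x - 1)   # natural domain |x| >= 1

print(f(-1))                          # -1, the claimed absolute minimum
xs = [-1, -2, -3, -5, -10, -100]
vals = [f(x) for x in xs]
print(vals)
# moving left from x = -1 the values increase toward 0 from below,
# i.e. f decreases on (-inf, -1]; and f(x) > 0 for x >= 1
assert all(u < v for u, v in zip(vals, vals[1:]))
assert all(f(x) > 0 for x in (1, 1.5, 2, 10))
```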
894,152
<blockquote> <p>Let $x_{i}\ge 0$ for $i\in\{1,2,\cdots,n\}$ and $x_{1}+x_{2}+\cdots+x_{n}=n$ for $n\ge 3$</p> <p>Show that for all strictly positive integers $k\ge2$ the following inequality holds : $$\sum_{i=1}^{n}x^k_{i}\ln{x_{i}}\ln{\dfrac{x_{i}}{n}}\le 0$$</p> </blockquote> <p>We consider $$f(x)=x^k\ln{x}\ln{\dfrac{x}{n}}$$ Then $$f'(x)=kx^{k-1}\ln{x}\cdot\ln{\dfrac{x}{n}}+x^{k-1}\ln{\dfrac{x}{n}}+\dfrac{x^{k-1}}{n}\ln{x}$$ $$\Longrightarrow f''(x)=k(k-1)x^{k-2}\cdot\ln{x}\cdot\ln{\dfrac{x}{n}}+kx^{k-2}\ln{\dfrac{x}{n}}+\dfrac{kx^{k-2}}{n}\ln{x}+(k-1)x^{k-2}\ln{\dfrac{x}{n}}+\dfrac{x^{k-2}}{n}+\dfrac{k-1}{n}x^{k-2}\ln{x}+\dfrac{x^{k-2}}{n}.$$ Unfortunately I can't determine the sign of $f''(x)$, which I would need in order to use <a href="http://mathworld.wolfram.com/JensensInequality.html" rel="nofollow">Jensen's Inequality</a> to prove it.</p> <p>So how can we prove this inequality? </p>
babbupandey
92,239
<p>I think you need to look at $\ln\frac{x_i}{n}$ in your equation. Since $$x_1 + x_2 + \dots + x_n = n,$$ we have $$x_i \le n,$$ and notice that $$\ln\frac{x_i}{n} = \ln(x_i) - \ln(n) \implies \ln\frac{x_i}{n} \le 0,$$ hence the expression $${x_i}^k\ln{x_i}\ln\frac{x_i}{n} \le 0$$ whenever $x_i \ge 1$, as then ${x_i}^k$ and $\ln{x_i}$ are nonnegative. (For terms with $x_i &lt; 1$ the factor $\ln{x_i}$ is negative as well, so those terms need a separate argument.) Hence the sum of these nonpositive terms is at most zero.</p>
894,152
<blockquote> <p>Let $x_{i}\ge 0$ for $i\in\{1,2,\cdots,n\}$ and $x_{1}+x_{2}+\cdots+x_{n}=n$ for $n\ge 3$</p> <p>Show that for all strictly positive integers $k\ge2$ the following inequality holds : $$\sum_{i=1}^{n}x^k_{i}\ln{x_{i}}\ln{\dfrac{x_{i}}{n}}\le 0$$</p> </blockquote> <p>We consider $$f(x)=x^k\ln{x}\ln{\dfrac{x}{n}}$$ Then $$f'(x)=kx^{k-1}\ln{x}\cdot\ln{\dfrac{x}{n}}+x^{k-1}\ln{\dfrac{x}{n}}+\dfrac{x^{k-1}}{n}\ln{x}$$ $$\Longrightarrow f''(x)=k(k-1)x^{k-2}\cdot\ln{x}\cdot\ln{\dfrac{x}{n}}+kx^{k-2}\ln{\dfrac{x}{n}}+\dfrac{kx^{k-2}}{n}\ln{x}+(k-1)x^{k-2}\ln{\dfrac{x}{n}}+\dfrac{x^{k-2}}{n}+\dfrac{k-1}{n}x^{k-2}\ln{x}+\dfrac{x^{k-2}}{n}.$$ Unfortunately I can't determine the sign of $f''(x)$, which I would need in order to use <a href="http://mathworld.wolfram.com/JensensInequality.html" rel="nofollow">Jensen's Inequality</a> to prove it.</p> <p>So how can we prove this inequality? </p>
oknsnl
169,453
<p>When $x_i\le 1$ or $x_i\ge e$, we have $x_i^k\ln^2{x_i}\ln{\dfrac{x_i}n}\ge x_i^k\ln{x_i}\ln{\dfrac{x_i}n}$.</p> <p>Call $y$ the $x$'s with $x_i \le 1$ or $x_i \ge e$,</p> <p>and call $z$ the $x$'s with $1 \le x_i \le e$.</p> <p>From this we can write a function greater than $\sum_{i=1}^{n}x^k_{i}\ln{x_{i}} ( \ln{x_i}-\ln{n})$:</p> <p>$gf(x)=\sum_{i=1}^{n}y^k_{i}\ln^2{y_{i}} ( \ln{y_i}-\ln{n}) + \sum_{i=1}^{n}z^k_{i}\ln{z_{i}} ( \ln{z_i}-\ln{n})$</p> <p>Now mark this function's positive parts 'p' and negative parts 'n':</p> <p>$gf(x)=\sum_{i=1}^{n}p*p*n+\sum_{i=1}^{n}p*p*n \;\Rightarrow\; gf(x)=n+n=n$</p> <p>That means $gf$ is a negative function; since it is bigger than $\sum_{i=1}^{n}x^k_{i}\ln{x_{i}} ( \ln{x_i}-\ln{n})$, we have proved $\sum_{i=1}^{n}x^k_{i}\ln{x_{i}} ( \ln{x_i}-\ln{n})\le 0$.</p> <p>I hope this helps.</p> <p>I have a better solution:</p> <p>$\sum_{i=1}^{n}x^k_{i}\ln{x_{i}} ( \ln{x_i}-\ln{n})\le \lim_{a\to 0}\sum_{i=1}^{n}(n-a)^k\ln{(n-a)} ( \ln{(n-a)}-\ln{n})$</p> <p>$\lim_{a\to 0}\sum_{i=1}^{n}(n-a)^k\ln{(n-a)} ( \ln{(n-a)}-\ln{n})\le0$, since it has the form $\lim_{a\to 0}\sum_{i=1}^{n}(\text{positive})*(\text{positive})*(\text{negative})\le0$;</p> <p>then the lesser function must satisfy the same bound:</p> <p>$\sum_{i=1}^{n}x^k_{i}\ln{x_{i}} ( \ln{x_i}-\ln{n})\le 0$</p> <p>This was inspired by user45... from the comments.</p>
3,423,674
<p>According to my calculus professor and MIT open coursework, taking the derivative of (x^2+4)^-1 is an application of the chain rule, not the power rule. The answer to the question is -2x(x^2+4)^-2, which makes sense to me, but I just don't understand why this is considered an application of the chain rule rather than the power rule, since the power rule says that d/dx(x^n) = nx^(n-1).</p> <p>Here is a link to the MIT coursework I am talking about: <a href="https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/1.-differentiation/part-a-definition-and-basic-rules/session-11-chain-rule/MIT18_01SCF10_ex11sol.pdf" rel="nofollow noreferrer">https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/1.-differentiation/part-a-definition-and-basic-rules/session-11-chain-rule/MIT18_01SCF10_ex11sol.pdf</a></p>
Graham Kemp
135,106
<p>The Power Rule was applied, but so too was the chain rule, and also a few unmentioned others.</p> <p>The power rule may only be applied when the derivation 'numerator' is a power of the 'denominator' , so first you must use the Chain Rule to obtain such.</p> <p>Thus taking everything one elementary step at a time:</p> <p><span class="math-container">$$\begin{align}\dfrac{\mathrm d~(x^2+4)^{-1}}{\mathrm d~x}&amp;=\dfrac{\mathrm d ~(x^2+4)^{-1}}{\mathrm d~(x^2+4)}\cdot\dfrac{\mathrm d~(x^2+4)}{\mathrm d~x}&amp;&amp;\text{Chain Rule}\\[2ex]&amp;=-(x^2+4)^{-2}\cdot\dfrac{\mathrm d~(x^2+4)}{\mathrm d~x}&amp;&amp;\text{Power Rule}\\[2ex]&amp;=-(x^2+4)^{-2}\cdot\left(\dfrac{\mathrm d~x^2}{\mathrm d~x}+\dfrac{\mathrm d~4}{\mathrm d~x}\right)&amp;&amp;\text{Additive Rule}\\[2ex]&amp;=-(x^2+4)^{-2}\cdot\left(2x+\dfrac{\mathrm d~4}{\mathrm d~x}\right)&amp;&amp;\text{Power Rule }\\[2ex]&amp;=-(x^2+4)^{-2}\cdot\left(2x+0\right)&amp;&amp;\text{Derivation of a Constant}\\[2ex]&amp;= -2x(x^2+4)^{-2} &amp;&amp;\text{Algebraic Rearrangement}\end{align}$$</span></p>
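The end result $-2x(x^2+4)^{-2}$ can be verified against a numerical derivative; a short Python sketch (step size and sample points mine):

```python
def f(x):
    return (x * x + 4) ** -1

def fp(x):
    return -2 * x * (x * x + 4) ** -2   # result of the chain-rule computation

h = 1e-6
for x in (-3.0, -1.0, 0.0, 0.5, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(numeric - fp(x)) < 1e-6
print("ok")
```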
2,428,009
<p>I want to solve the equation $2^n=2k$ for $n$ even with $n,k \in \Bbb{N}$. I'm not sure how to go about this, using logarithm makes me enter the reals.</p>
gammatester
61,216
<p><strong>The</strong> solution of $2^n=2k$ is $n=1+\log_2k$. If this value is an integer, you are done; otherwise use $\lceil n \rceil$ or $\lfloor n \rfloor$ depending on the problem.</p>
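In code (Python sketch, function name mine):

```python
import math

def exponent(k):
    # the real solution n of 2**n == 2*k
    return 1 + math.log2(k)

print(exponent(4))    # 3.0, and indeed 2**3 == 8 == 2*4
print(exponent(16))   # 5.0, and indeed 2**5 == 32 == 2*16

n = exponent(6)       # ~3.585: no integer solution for k = 6
# the floor/ceil of n bracket 2*k between two powers of two
assert 2 ** math.floor(n) <= 2 * 6 <= 2 ** math.ceil(n)
```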
1,224,180
<p>Q: evaluate $\lim_{x \to \infty}$ $ (x-1)\over \sqrt {2x^2-1}$</p> <p>What I did:</p> <p>when taking $\lim_{x \to \infty}$ you must put the argument in the form $1/x$, so that you know it is equal to $0$</p> <p>but in this exercise the farthest I got was</p> <p>$\lim_{x \to \infty}$ $x \over x \sqrt{2}$ $1-(1/x) \over (1/x) - ??$</p>
JMP
210,189
<p>$$\lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-2}}$$</p> <p>$$\lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2}x}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2(x^2-1)}}$$</p> <p>$$\lim_\limits{x\to\infty}\dfrac{1}{\sqrt{2}}-\dfrac{1}{\sqrt{2}x}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \lim_\limits{x\to\infty}\sqrt{\dfrac{x-1}{2(x+1)}}$$</p> <p>$$\dfrac{1}{\sqrt{2}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \lim_\limits{x\to\infty}\sqrt{\dfrac{x+1-2}{2(x+1)}}$$</p> <p>$$\dfrac{1}{\sqrt{2}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \lim_\limits{x\to\infty}\sqrt{\dfrac{1}{2}-\dfrac{1}{x+1}}$$</p> <p>$$\dfrac{1}{\sqrt{2}}\le \lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\le \dfrac{1}{\sqrt{2}}$$</p> <p>In fact we can say $\lim_\limits{x\to\infty}\dfrac{x-1}{\sqrt{2x^2-1}}\to \dfrac{1}{\sqrt{2}}$ from beneath.</p>
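Numerically (Python sketch, sample points mine), the quotient does approach $1/\sqrt2$ from below, as the squeeze bounds indicate:

```python
import math

def f(x):
    return (x - 1) / math.sqrt(2 * x * x - 1)

target = 1 / math.sqrt(2)
for x in (10.0, 1e3, 1e6):
    print(x, f(x), target)

# always strictly below the limit, and converging to it
assert all(f(x) < target for x in (10.0, 1e3, 1e6))
```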
3,551,030
<p>Let the real sequence <span class="math-container">$x_n$</span> satisfy <span class="math-container">$\Vert x_{n+2} - x_{n+1}\Vert = M \Vert x_{n+1} - x_{n} \Vert$</span>.</p> <p>If <span class="math-container">$0&lt; \vert M \vert &lt;1$</span>, then <span class="math-container">$x_n$</span> surely converges, since it is contractive.</p> <p>But when <span class="math-container">$\vert M \vert =1 $</span>, we can't say whether <span class="math-container">$x_n$</span> converges or not, as the example <span class="math-container">$x_n = {1 \over n}$</span> shows.</p> <p>So I considered the case <span class="math-container">$\vert M \vert &gt;1$</span>. My provisional conclusion was that <span class="math-container">$x_n$</span> diverges when <span class="math-container">$\vert M \vert &gt;1 $</span>, and I started trying to prove it myself, but I don't have any idea how to show it. Hence, how can one show this sequence is not convergent in <span class="math-container">$\mathbb{R}$</span> when <span class="math-container">$M \in \mathbb{R}$</span> is positive with <span class="math-container">$\vert M \vert &gt;1$</span>? Any help or hint would be appreciated.</p>
Peter Szilas
408,605
<p>Assume <span class="math-container">$M &gt;1$</span>;</p> <p><span class="math-container">$b_n:=|x_{n+1}-x_n|$</span>;</p> <p>Let <span class="math-container">$n_0$</span> be a positive integer s.t. <span class="math-container">$b_{n_0} &gt;0.$</span> (Such an <span class="math-container">$n_0$</span> exists (why?).)</p> <p><span class="math-container">$b_{n+1} &gt;b_n$</span>, an increasing sequence, bounded below by <span class="math-container">$b_{n_0}&gt;0$</span> for <span class="math-container">$n \gt n_0$</span>.</p> <p><span class="math-container">$\lim_{n \rightarrow \infty}b_n \not =0$</span>.</p> <p>Hence <span class="math-container">$x_n$</span> does not converge (why?)</p>
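One concrete realization of the gap recursion (Python sketch, constants mine) shows the gaps $b_n$ blowing up like $M^n$, so the successive differences do not go to $0$ and $(x_n)$ cannot be Cauchy:

```python
M = 1.5
x = [0.0, 1.0]                 # any start with x_1 != x_0
for _ in range(40):
    gap = abs(x[-1] - x[-2])
    x.append(x[-1] + M * gap)  # one way to realize |b_{n+1}| = M |b_n|

gaps = [abs(v - u) for u, v in zip(x, x[1:])]
print(gaps[0], gaps[-1])       # gaps grow like M**n
```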
2,933,383
<p>Cauchy's theorem of limits states that if <span class="math-container">$\ \lim_{n \to \infty} a_n=L ,$</span> then <span class="math-container">$$ \lim_{n \to \infty} \frac{a_1+a_2+\cdots+a_n}{n}=L $$</span> If I apply this in the series <span class="math-container">$$S = \lim_{n\to\infty} \dfrac{1}{n}[e^{\frac{1}{n}} + e^{\frac{2}{n}} + e^{\frac{3}{n}} + e^{\frac{4}{n}} + e^{\frac{5}{n}} + e^{\frac{6}{n}} + ... + e^{\frac{n}{n}}]$$</span> Here <span class="math-container">$a_n=e^{n/n},$</span> Hence, <span class="math-container">$\lim_{n \to \infty} a_n=e \ $</span>, so <span class="math-container">$S=e$</span>. I know this is incorrect because the integration gives the answer <span class="math-container">$e-1$</span>. It seems to me that the reason for this is related to the terms of the sequence being dependent on <span class="math-container">$n$</span>. I can't be sure of this because the same theorem gives the correct answer for the limit of this series(below) which is given in my book as a solved example of the above theorem <span class="math-container">$$ \ \lim_{n \to \infty} \left( \frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}} \right)=1 $$</span></p>
Mostafa Ayaz
518,023
<p>You can't apply the theorem in the first case, since <span class="math-container">$$a_n=e^{n\over n}=e$$</span> doesn't make sense here. Another reason is that <span class="math-container">$a_k=e^{k\over n}$</span>, which says <span class="math-container">$a_k$</span> depends on <span class="math-container">$n$</span> (!!), which is quite obviously not allowed. Instead <span class="math-container">$$\lim_{n\to \infty}{1\over n}\sum_{i=1}^{n} e^{i\over n}=\int_{0}^{1}e^xdx=e-1.$$</span> As for the second one, I am suspicious for the same reason as in the former case. In this case we can easily write <span class="math-container">$$ {n\over \sqrt{n^2+n}}\le\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}\le{n\over \sqrt{n^2+1}}$$</span> and use the squeeze theorem to obtain the result.</p>
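The Riemann-sum value $e-1$ can be checked numerically; a Python sketch (sample sizes mine):

```python
import math

def right_riemann(n):
    # (1/n) * sum_{i=1}^{n} e^{i/n}, the right Riemann sum for e^x on [0, 1]
    return sum(math.exp(i / n) for i in range(1, n + 1)) / n

for n in (10, 100, 10000):
    print(n, right_riemann(n), math.e - 1)
```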
3,693,196
<p>This is known as the factor formula. It is used for the addition of sin functions. I don't understand how the two are equal though. How would you get to the right side of the equation using the left?</p> <p><span class="math-container">$$\sin(s) + \sin(t) = 2 \sin\left(\frac{s+t}{2}\right) \cos \left(\frac{s-t}{2}\right)$$</span></p>
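One derivation using only the sum and difference formulas: write $s$ and $t$ in terms of their half-sum and half-difference, expand each sine, and add (the second terms cancel).

```latex
\begin{align*}
s &= \frac{s+t}{2} + \frac{s-t}{2}, \qquad t = \frac{s+t}{2} - \frac{s-t}{2},\\
\sin s &= \sin\frac{s+t}{2}\cos\frac{s-t}{2} + \cos\frac{s+t}{2}\sin\frac{s-t}{2},\\
\sin t &= \sin\frac{s+t}{2}\cos\frac{s-t}{2} - \cos\frac{s+t}{2}\sin\frac{s-t}{2},\\
\sin s + \sin t &= 2\sin\frac{s+t}{2}\cos\frac{s-t}{2}.
\end{align*}
```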
global05
696,211
<p>P(A|B) = P(A∩B)/P(B)</p> <p>P(A∩B) = 0.8</p> <p>P(B) = 0.9</p> <p>Hence P(A|B) = 0.8/0.9 = 8/9.</p>
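The same arithmetic in a few lines of Python (values as given above):

```python
p_a_and_b = 0.8   # P(A ∩ B)
p_b = 0.9         # P(B)

# Definition of conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)  # 0.888... = 8/9
```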
1,004,303
<p>Let $S=\{(x,0):x\in\mathbb R\} \cup\{(x,1/x):x&gt;0\}$. Prove that $S$ is not a connected space (the topology on $S$ is the subspace topology).</p> <p>My thoughts: in the first set $x$ is any real number, and I can't see that this set is open in $S$. I can't find a suitable intersection anyhow.</p>
Michael Hardy
11,667
<p>The set $\{(x,0) : x\in\mathbb R\}$ is open in $S$ because every point $(x,0)$ has an open neighborhood that does not intersect the graph of $y=1/x$. Just use $I\times\{0\}$ where $I$ is any open interval containing $x$.</p> <p>Then do a similar thing with the set $\{(x,1/x): x&gt;0\}$.</p>
2,141,082
<p>Let me first put down a couple definitions, two of which have terminology I make up for this post. If you already know about sheaf theory, you can safely skip Definitions 1-3 and 7-8, and the Construction. Definitions 4-6 introduce notation and terminology that is probably nonstandard, so I recommend reading those in any case. $\DeclareMathOperator{\coker}{coker}\DeclareMathOperator{\Im}{Im} \newcommand{\~}{\sim}$</p> <p><strong>Definition 1 (Sheaf)</strong></p> <p>Let $X$ be a topological space. A <em>sheaf</em> on $X$ is a map $\mathcal{F}:\operatorname{Open}(X)\to(\operatorname{Ab})$, i.e. a map which to every open $U\subseteq X$ assigns an abelian group $\mathcal{F}(U)=\Gamma(\mathcal{F},U)$, such that:</p> <ol> <li>For all $U\subseteq V\subseteq X$ there is a group morphism $\tau_{U,V}:\mathcal{F}(V)\to\mathcal{F}(U)$, called <em>restriction morphism</em>, such that $\tau_{U,V}\circ\tau_{V,W}=\tau_{U,W}$ for all $U\subseteq V\subseteq W$;</li> <li>If $U=\bigcup_iU_i$ and $\tau_{U_i,U}(s)=\tau_{U_i,U}(t)$, then $s=t$ in $\mathcal{F}(U)$;</li> <li>If $s_i\in\mathcal{F}(U_i)$ and, for all $i,j$, $\tau_{U_i\cap U_j,U_i}(s_i)=\tau_{U_i\cap U_j,U_j}(s_j)$, then there exists $s\in\mathcal{F}(\bigcup_iU_i)$ such that $\tau_{U_i,U}(s)=s_i$ for all $i$.</li> </ol> <p>A map $\mathcal{F}:\operatorname{Open}(X)\to(\operatorname{Ab})$ satisfying only 1., and not 2. 
and 3., is a <em>presheaf</em>.</p> <p><strong>Definition 2 (sheaf morphism)</strong></p> <p>Given two sheaves $\mathcal{F},\mathcal{G}$, a <em>sheaf morphism</em> $\phi:\mathcal{F}\to\mathcal{G}$ is the specification of a group morphism $\phi_U:\mathcal{F}(U)\to\mathcal{G}(U)$ (or, we may say, an element $\phi\in\prod_{U\in\operatorname{Open}(X)}\operatorname{Hom}(\mathcal{F}(U),\mathcal{G}(U))$) such that, for all $U,V$, $\phi_U\circ\tau_{U,V}=\tau_{U,V}\circ\phi_V$, where $\tau_{U,V}$ is the restriction morphism of $\mathcal{F}$ on the LHS and of $\mathcal{G}$ on the RHS.</p> <p><strong>Definition 3 (kernel of a morphism)</strong></p> <p>Given $\phi:\mathcal{F}\to\mathcal{G}$ a sheaf morphism, $\ker\phi$ (the <em>kernel</em> of $\phi$) is defined by:</p> <p>$$(\ker\phi)(U):=\ker(\phi_U).$$</p> <p>It is easily verified that this is a sheaf.</p> <p><strong>Definition 4 (Image and Cokernel presheaves of a morphism)</strong></p> <p>Given $\phi$ as in the previous definition, I will denote by $\operatorname{Im}^p\phi$ and $\operatorname{coker}^p\phi$ the presheaves:</p> <p>$$(\operatorname{Im}^p\phi)(U):=\operatorname{Im}(\phi_U),\qquad(\operatorname{coker}^p\phi)(U):=\operatorname{coker}\phi_U.$$</p> <p>These are, in general, not sheaves. However, given any presheaf, there is a construction (explained <a href="https://rigtriv.wordpress.com/2008/01/30/morphisms-of-sheaves/" rel="noreferrer">here</a>) that turns it into a sheaf with minimal variation, in the sense that the stalks (see below) stay the same. 
This is called <em>sheafification</em>.</p> <p><strong>Definition 5 (Image and Cokernel sheaves of a morphism)</strong></p> <p>The sheafifications of $\operatorname{Im}^p\phi$ and $\coker^p\phi$ as defined above are called <em>Image</em> and <em>cokernel</em> of $\phi$, and denoted as $\operatorname{Im}\phi$ and $\coker\phi$ respectively.</p> <p><strong>Definition 6 (C-surjective, I-surjective and injective morphisms)</strong></p> <p>Given a morphism $\phi:\mathcal{F}\to\mathcal{G}$, we say it is:</p> <ol> <li><em>injective</em> if $\ker\phi=0$, i.e. $(\ker\phi)(U)=0$ for all $U$;</li> <li><em>I-surjective</em> if $\Im\phi=\mathcal{G}$;</li> <li><em>C-surjective</em> if $\coker\phi=0$.</li> </ol> <p><strong>Definition 7 (stalks)</strong></p> <p>Given $\mathcal{F}$ a (pre)sheaf, set:</p> <p>$$\mathcal{F}(x):=\{(U,s):s\in\mathcal{F}(U),x\in U\in\operatorname{Open}(X)\}.$$</p> <p>Introduce the equivalence relation:</p> <p>$$(U,s)\~(V,t)\iff\exists W\subseteq U\cap V:\tau_{U,W}(s)=\tau_{V,W}(t).$$</p> <p>The <em>stalk</em> of $\mathcal{F}$ at $x$ is the quotient:</p> <p>$$\mathcal{F}_x:=\frac{\mathcal{F}(x)}{\~}.$$</p> <p><strong>Note</strong></p> <p>I know the stalk can be denoted as:</p> <p>$$\mathcal{F}_x=\lim_{U\ni x}\mathcal{F}(U),$$</p> <p>but I avoid that notation since I have never done that much category theory and, in particular, I have never seen limits of functors in enough detail to not perceive that limit notation as foreign.</p> <p><strong>Definition 8 (germs)</strong></p> <p>Given $\mathcal{F}$ a sheaf or presheaf, let $s\in\mathcal{F}(U)$. We set:</p> <p>$$s_x:=[(U,s)],$$</p> <p>that is, $s_x$ is the equivalence class of $(U,s)$ in the stalk $\mathcal{F}_x$. 
$s_x$ is called the <em>germ</em> of $s$ at $x$.</p> <p>The "germification" map $s\mapsto s_x$ is a group homomorphism from $\mathcal{F}(U)$ to $\mathcal{F}_x$ for any $x\in X,x\in U\in\operatorname{Open}(X)$, as can easily be verified.</p> <p><strong>Construction</strong></p> <p>Let $\phi:\mathcal{F}\to\mathcal{G}$ be a sheaf morphism. For all $x\in X$, there is an induced morphism:</p> <p>$$\phi_x:\mathcal{F}_x\to\mathcal{G}_x,\qquad\phi_x(s_x):=(\phi_U(s))_x,$$</p> <p>where $(U,s)$ is any representative of the germ $s_x\in\mathcal{F}_x$. This is well-defined. Indeed, if $(U,s)\~(V,t)$, then $\tau_{U,W}(s)=\tau_{V,W}(t)$ for some $W\subseteq U\cap V$, and by definition $(U,s)\~(W,\tau_{U,W}(s)),(V,t)\~(W,\tau_{V,W}(t))$. We would need $(\phi_U(s))_x=(\phi_V(t))_x$. Then again:</p> <p>$$(U,\phi_U(s))\~(W,\tau_{U,W}(\phi_U(s)))=(W,\phi_W(\tau_{U,W}(s)))=(W,\phi_W(\tau_{V,W}(t)))=(W,\tau_{V,W}(\phi_V(t)))\~(V,\phi_V(t)).$$</p> <p><strong>Definition 9 (stalk-injectivity, stalk-I-surjectivity and stalk-C-surjectivity)</strong></p> <p>Given a sheaf morphism $\phi:\mathcal{F}\to\mathcal{G}$, we call it</p> <ol> <li>Stalk-injective if $\phi_x:\mathcal{F}_x\to\mathcal{G}_x$ is injective for all $x$;</li> <li>Stalk-I-surjective if $\Im\phi_x=\mathcal{G}_x$ for all $x$;</li> <li>Stalk-C-surjective if $\coker\phi_x=0$ for all $x$. 
</li> </ol> <p>That said, a couple WBFs (WannaBe Facts) one might like to prove.</p> <p><strong>WBF 1</strong></p> <p>$\phi$ is C-surjective iff it is I-surjective.</p> <p><strong>WBF 2</strong></p> <p>$\phi$ is stalk-C-surjective iff it is stalk-I-surjective.</p> <p><strong>WBF 3</strong></p> <p>$\phi$ is C-surjective iff it is stalk-C-surjective.</p> <p><strong>WBF 4</strong></p> <p>$\phi$ is I-surjective iff it is stalk-I-surjective.</p> <p><strong>WBF 5</strong></p> <p>$\phi$ is injective iff it is stalk-injective.</p> <p>Unfortunately, WBF 1 is, unless I'm much mistaken, not true.</p> <p><strong>Fact 1</strong></p> <p>C-surjectivity implies I-surjectivity, but not vice versa.</p> <p><em>Proof</em>.</p> <p>C-surjectivity implies $\coker^p\phi=0$, since any presheaf is a sub-presheaf of its sheafification (i.e. $\mathcal{F}(U)$ is always contained in $\mathcal{F}^+(U)$, $\mathcal{F}^+$ being the sheafification of $\mathcal F$). But then $\Im^p\phi=\mathcal G$, and $\Im^p\subseteq\Im$ just like $\coker^p\subseteq\coker$, and so we have I-surjectivity.</p> <p>Let $\Omega^p$ be the sheaf of $p$-forms on a manifold. The exterior derivative can be seen as a sheaf morphism $d:\Omega^p\to Z^{p+1}$, $Z^{p+1}$ being the closed $(p+1)$-forms. $\Im^pd=B^{p+1}$, the presheaf of exact forms, whose sheafification is just $Z^{p+1}$, since every closed form is locally exact, otherwise known as a gluing of exact forms and hence an element of the sheafification. So we have I-surjectivity. However, if the cohomology of the manifold is nontrivial, $\coker d_U=H^{p+1}(U)$ is in general nontrivial, which renders C-surjectivity impossible. $\square$</p> <p><strong>Fact 2</strong></p> <p>WBF 2 holds.</p> <p><em>Proof</em>.</p> <p>$\coker\phi_x=\frac{\mathcal G_x}{\Im\phi_x}$, which is zero iff $\phi_x$ is surjective. 
$\square$</p> <p><strong>Corollary 1</strong></p> <p>Seeing as WBF 3 and WBF 4, along with WBF 2 (which is true), would imply WBF 1 (which is false), one of those must be false as well.</p> <p><strong>Fact 3</strong></p> <p>WBF 5 holds.</p> <p><em>Proof.</em></p> <p>Assume $\phi$ is stalk-injective. Let $s\in\mathcal{F}(U)$ satisfy $\phi_U(s)=0$. Then $\phi_x(s_x)=(\phi_U(s))_x=0_x=0$ for all $x\in U$, so $s_x=0$ for all $x\in U$, by stalk-injectivity. But this means that, for all $x\in U$, there is $V\subseteq U$ containing $x$ such that $\tau_{U,V}(s)=0$. But by axiom 2. of the definition of sheaf, this implies $s=0$ on all $U$, proving $\phi_U$ is injective. So one direction is done.</p> <p>Conversely, let $\phi$ be injective. Assume $\phi_x(s_x)=0$ for some $s_x\in\mathcal{F}_x$. Take a representative $(U,s)$ of $s_x$. Then $(\phi_U(s))_x=\phi_x(s_x)=0$ implies there is $V\subseteq U$ such that $x\in V$ and $\tau_{U,V}(\phi_U(s))=0$. But that is $\phi_V(\tau_{U,V}(s))$, so $\tau_{U,V}(s)$ must be zero by injectivity of $\phi_V$. But then $s_y=0$ for all $y\in V$, in particular for $y=x$, proving stalk-injectivity. $\square$</p> <p><strong>Fact 4</strong></p> <p>I-surjectivity implies stalk-I-surjectivity.</p> <p><em>Proof</em>.</p> <p>Let $t_x\in\mathcal{G}_x$. Take a representative $(U,t)$ such that the germ of $t$ at $x$ is the aforechosen $t_x$. $\phi_U$ is surjective by hypothesis, hence there exists $(U,s)$ such that $\phi_U(s)=t$. But then $\phi_x(s_x)=(\phi_U(s))_x=t_x$, so $\phi_x$ is surjective. 
$\square$.</p> <p>Let us draw a diagram of what we have proved about the various types of surjectivity, and deduce a couple corollaries by looking at it.</p> <p>$$\begin{array}{ccc} \text{C-surjectivity} &amp; &amp; \text{stalk-C-surjectivity} \\ \not\uparrow\downarrow &amp; &amp; \uparrow\downarrow \\ \text{I-surjectivity} &amp; \rightarrow &amp; \text{stalk-I-surjectivity} \end{array}$$</p> <p><strong>Corollary 2</strong></p> <p>Stalk-C-surjectivity cannot imply C-surjectivity.</p> <p><em>Proof</em>.</p> <p>Suppose otherwise. Then I-surjectivity implies stalk-I-surjectivity (Fact 4), which implies stalk-C-surjectivity (Fact 2), which implies C-surjectivity (hypothesis); and yet I-surjectivity does not imply C-surjectivity (counterexample in Fact 1), contradiction. $\square$</p> <p><strong>Corollary 3</strong></p> <p>C-surjectivity implies stalk-C-surjectivity.</p> <p><em>Proof</em>.</p> <p>Assume C-surjectivity. Then we have I-surjectivity (Fact 1), and hence stalk-I-surjectivity (Fact 4), and hence stalk-C-surjectivity (Fact 2). $\square$</p> <p>So we add another couple arrows to the diagram.</p> <p>$$\begin{array}{ccc} \text{C-surjectivity} &amp; ^{\not\leftarrow}_{\rightarrow} &amp; \text{stalk-C-surjectivity} \\ \not\uparrow\downarrow &amp; &amp; \uparrow\downarrow \\ \text{I-surjectivity} &amp; \rightarrow &amp; \text{stalk-I-surjectivity} \end{array}$$</p> <p>There remains therefore one last question.</p> <blockquote> <p><strong>Does stalk-I-surjectivity imply I-surjectivity?</strong></p> </blockquote> <p>And that is my question. I tried proving it, and I cannot seem to get to the end. 
In fact, there is also another question.</p> <blockquote> <p><strong>Is all the above correct?</strong></p> </blockquote> <p>I am particularly doubtful about WBF 1 being false, since my Complex Geometry teacher said that «$\phi$ […] è suriettivo se il fascio-immagine è tutto $\mathcal{G}$, oppure il cokernel è 0, è la stessa cosa» ($\phi$ […] is surjective if the image sheaf is the whole of $\mathcal{G}$, or if the cokernel is zero, it's the same thing), and I seem to have just disproven his statement.</p> <p><strong>Update</strong></p> <p>I thought I had answered the first question. I was writing a self-answer, which started by proving the following.</p> <blockquote> <p><strong>Lemma</strong></p> <p>For all $x$, we have:</p> <p>$$(\ker^p\phi)_x=\ker\phi_x,\qquad(\Im^p\phi)_x=\Im\phi_x.$$</p> <p><em>Proof</em>.</p> <p>$$((\ker^p\phi)_x\subseteq\ker\phi_x)$$</p> <p>Let $s_x\in(\ker^p\phi)_x$. This means $s_x$ is the germ at $x$ of some $s\in(\ker^p\phi)(U)$, by definition of stalks, and by definition of the kernel presheaf we have $\phi_U(s)=0$. Hence, by definition of the stalk morphism, $\phi_x(s_x)=(\phi_U(s))_x=0_x=0$.</p> <p>$$((\ker^p\phi)_x\supseteq\ker\phi_x)$$</p> <p>Let $s_x\in\ker\phi_x$, i.e. $\phi_x(s_x)=0$. By definition of the stalk morphism, $\phi_x(s_x)$ is $(\phi_U(s))_x$ for any representative $(U,s)$ of $s_x$. $(\phi_U(s))_x=0$ implies there is $V\subseteq U$ such that $\tau_{U,V}(\phi_U(s))=0$. But that is $\phi_V(\tau_{U,V}(s))$, meaning $\tau_{U,V}(s)\in\ker\phi_V$. Naturally, $s_x$ is also the germ of $\tau_{U,V}(s)$ at $x$, which gives us $s_x\in(\ker^p\phi)_x$.</p> <p>$$((\Im^p\phi)_x\subseteq\Im\phi_x)$$</p> <p>Let $s_x\in(\Im^p\phi)_x$. This means there is $s\in(\Im^p\phi)(U)$ such that its germ at $x$ is $s_x$. $s\in(\Im^p\phi)(U)$ means $s\in\Im\phi_U$, so there is $t\in\mathcal{F}(U)$ such that $\phi_U(t)=s$. 
But this means $\phi_x(t_x)=s_x$, showing $s_x\in\Im\phi_x$.</p> <p>$$((\Im^p\phi)_x\supseteq\Im\phi_x)$$</p> <p>Let $s_x\in\Im\phi_x$. This means there is $t_x\in\mathcal{F}_x:\phi_x(t_x)=s_x$. Taking a representative $(U,t)$ such that $t_x$ is the germ of $t$ at $x$, we will have $(\phi_U(t))_x=s_x$. But that means $s_x$ is the germ of something in $\Im\phi_U$, and hence $s_x\in(\Im^p\phi)_x$. $\square$</p> </blockquote> <p>This should still allow me to conclude that the answer to that question is yes, since if $\phi$ is stalk-I-surjective then $(\Im\phi)_x=(\Im^p\phi)_x=\Im\phi_x=\mathcal{G}_x$, and… wait. Is it true that if $\mathcal{H}$ is a subsheaf of $\mathcal{G}$ and $\mathcal{H}_x=\mathcal{G}_x$ for all $x$, then $\mathcal{G}=\mathcal{H}$? Because if so, since $\Im\phi$ is a subsheaf of $\mathcal{G}$ (right?), I have concluded.</p> <p>Anyways, as I wrote the second $\subset$ part of the Lemma proof, Roland posted his answer, pointing out that indeed my proof of half of WBF 1 is wrong.</p> <p>He didn't say anything about Facts 2-3, so I assume I did not go wrong there. Since the first half of the Lemma is essentially equivalent to Fact 3, I guess I can safely assume the first half of my proof of the Lemma is OK.</p> <p>I will come back to Facts 1 and 4 later. Right now, let me prove the following.</p> <p><strong>Fact U1</strong></p> <p>If $\mathcal{F}$ is a subsheaf of $\mathcal{G}$ (i.e. they are both sheaves and $\mathcal{F}(U)\subseteq\mathcal{G}(U)$ for all open $U$) and $\mathcal{F}_x=\mathcal{G}_x$ for all $x\in X$, then $\mathcal{F}=\mathcal{G}$.</p> <p><em>Proof</em>.</p> <p>Assume, to the contrary, that there is an open $U$ such that $\mathcal{F}(U)\subsetneq\mathcal{G}(U)$. Take any $s\in\mathcal{G}(U)\smallsetminus\mathcal{F}(U)$. Assume, for the moment, that I can find a covering $\{U_i\}$ of $U$ with open sets such that $\mathcal{F}(U_i)=\mathcal{G}(U_i)$ for all $i$. 
Then we have $\tau_{U_i,U}(s)=:s_i\in\mathcal{G}(U_i)=\mathcal{F}(U_i)$, and naturally $\tau_{U_i\cap U_j,U_i}(s_i)=\tau_{U_i\cap U_j,U_j}(s_j)$, so by condition 3. in the definition of sheaf we should have $s\in\mathcal{F}(U)$, contradiction.</p> <p>It remains to prove that I can find such a covering $\{U_i\}$. Note that condition 3 does not require the covering to be finite, so we can use the set $\{V\subseteq U:\mathcal{F}(V)=\mathcal{G}(V)\}$ as a candidate. If that does not cover $U$, then we have $x\in U$ such that, for all $V\subseteq U$ containing $x$, there is $s_V\in\mathcal{G}(V)\smallsetminus\mathcal{F}(V)$. I would like to deduce from this that $\mathcal{F}_x\neq\mathcal{G}_x$. But I'm not sure how to exclude that, for every $V$, there be $W_V\subseteq V$ such that $\tau_{W_V,V}(s_V)\in\mathcal{F}(W_V)$. Maybe I'll think about this and come back afterwards. For the time being, $\not\square$.</p> <p>It should be true that $\Im^p\phi$ is a sub-presheaf of $\Im\phi$. If "separated presheaf" means something satisfying 1. and 2., but not necessarily 3., from the definition of sheaf, then $\Im^p\phi$ is a separated presheaf, since it is a sub-presheaf of $\mathcal{G}$. I think I remember reading, on a trip on Google, that if a presheaf is separated, then it is a sub-presheaf of its sheafification, which would conclude here.</p> <p>So now we are left with these questions:</p> <ol> <li>How to prove WBFs 1, 4 and 5;</li> <li>Is the content of this update correct so far?</li> <li>How to conclude the proof of Fact U1;</li> <li>How to prove that a separated presheaf is a sub-presheaf of its sheafification.</li> </ol> <p><strong>Update 2</strong></p> <p>Let us answer question 4.</p> <p><strong>Fact U2</strong></p> <p>For every presheaf $\mathcal{F}$, there exists a natural morphism $\phi:\mathcal{F}\to\mathcal{F}^+$ ($\mathcal{F}^+$ being the sheafification). It is injective iff the presheaf is separated, and, under the hypothesis of 2., it is surjective iff 3. 
holds.</p> <p><em>Proof</em>.</p> <p>Set $\phi_U(s)=\{x\mapsto s_x\}$. Here $\{x\mapsto s_x\}$ is a map from $U$ to the union of the stalks at points of $U$ such that, for all $x$, $s_x\in\mathcal{F}_x$. Also, by construction, this is an element of $\mathcal{F}^+(U)$, so our map is well-defined.</p> <p>The kernel $\ker\phi_U$ is precisely the set of all $s\in\mathcal{F}(U)$ such that there exists a covering $U\subseteq\bigcup_iU_i$ satisfying $\tau_{U_i,U}(s)=0$ for all $i$. Hence, it is trivial for all $U$ iff the presheaf is separated, but triviality of all $\ker\phi_U$ is precisely the injectivity of $\phi$.</p> <p>Surjectivity of $\phi$ means that, as Roland points out at the end of the answer, if $s\in\mathcal{F}^+(U)$, then there is a covering $\{U_i\}$ of $U$ such that $\tau_{U_i,U}(s)=\phi_{U_i}(t_i)$ for all $i$, where $t_i$ are some elements of $\mathcal{F}(U_i)$. If I had injectivity, I could certainly conclude that these have coinciding restrictions to the intersections, and then condition 3. would let me glue them to find a preimage, and $\phi_U$ would be surjective for all $U$, implying surjectivity.</p> <p>I cannot seem to prove the other direction, but this is all I need. $\not\,\square$</p> <p>This implies that a sheaf is isomorphic to its sheafification, and any separated presheaf is isomorphic to a sub-presheaf of its sheafification. In particular, the answer to question 4 is yes.</p>
Roland
113,969
<p>That is a very long post. It is nice to give all the definitions, but I bet that anyone who doesn't know them will not read through the whole post ;)</p> <p>About all your wanna-be facts: all of them are true; these are all very basic facts about sheaf theory. They are in every book on the topic. So let me point out where the mistakes are...</p> <p>First of all I want to give a bit of category theory. In an abelian category <span class="math-container">$\mathcal{C}$</span>, given a morphism <span class="math-container">$f:A\rightarrow B$</span>, the equivalence <span class="math-container">$\operatorname{Im}f=B \Leftrightarrow\operatorname{coker}f=0$</span> holds. It is an important fact that the category of sheaves of abelian groups is an abelian category.</p> <p><strong>Fact 1</strong> You say that any presheaf is a sub-presheaf of its sheafification. This is false (as any sub-presheaf of a sheaf is separated).</p> <p>You say <span class="math-container">$\operatorname{coker} d_U=H^{p+1}(U)$</span>. This is true, but <span class="math-container">$\operatorname{coker} d_U\neq(\operatorname{coker} d)(U)$</span>. As you explained in Definitions 4-5, one needs to sheafify the presheaf <span class="math-container">$U\mapsto\operatorname{coker} d_U$</span>. In your example, <span class="math-container">$U\mapsto H^{p+1}(U)$</span> is only a presheaf, and its sheafification is the zero sheaf (because locally a manifold is contractible, so has zero cohomology). By the way, you have an example of a presheaf which is not a subsheaf of its sheafification...</p> <p><strong>Fact 4</strong> Again, you assume that <span class="math-container">$\phi_U$</span> is onto. It might not be so; remember that to get the image sheaf, one needs to sheafify the image presheaf. 
In fact, there is also this characterization of a surjective morphism of sheaves : <span class="math-container">$\phi:\mathcal{F}\rightarrow\mathcal{G}$</span> is onto iff for every open <span class="math-container">$U$</span> and every section <span class="math-container">$t\in\mathcal{G}(U)$</span>, there exists an open cover <span class="math-container">$U=\bigcup U_i$</span> and sections <span class="math-container">$s_i\in\mathcal{F}(U_i)$</span> such that <span class="math-container">$\phi_{U_i}(s_i)=t_{|U_i}$</span>. In other words, <span class="math-container">$\phi_U$</span> needs not be surjective, but any sections are locally in the image.</p> <hr /> <p>Here are some comments, about your updates :</p> <ul> <li><p>the lemma is true, and its proof is ok.</p> </li> <li><p>the fact U1 is true. Here is a quick proof : let <span class="math-container">$s\in\mathcal{G}(U)$</span>. We want to prove that <span class="math-container">$s\in\mathcal{F}(U)$</span>. For all <span class="math-container">$x\in U$</span>, <span class="math-container">$s_x\in\mathcal{F}_x$</span>, this means that there exists <span class="math-container">$U_x\subset U$</span> and sections <span class="math-container">$t^x\in\mathcal{F}(U_x)$</span> such that <span class="math-container">$t^x_x=s_x$</span>. And this equality means that there exists <span class="math-container">$V_x\subset U_x$</span> such that <span class="math-container">$t^x_{|V_x}=s_{|V_x}$</span>. 
Clearly <span class="math-container">$(V_x)$</span> form an open cover of <span class="math-container">$U$</span> and the <span class="math-container">$t^x_{|V_x}$</span> glue to give <span class="math-container">$s\in\mathcal{F}(U)$</span>.</p> </li> <li><p>About the proof of the (true) fact that <span class="math-container">$\phi:\mathcal{F}\rightarrow\mathcal{F}^+$</span> is into iff <span class="math-container">$\mathcal{F}$</span> is separated, and <span class="math-container">$\phi$</span> is an isomorphism iff <span class="math-container">$\mathcal{F}$</span> is a sheaf. That makes four implications, and your proof will be clearer if you say what do you assume and what do you want to prove. Moreover, your characterization of surjectivity is wrong : here <span class="math-container">$\phi$</span> is a map of presheaves, not sheaves (even if both source and target are sheaves !), so surjectivity of <span class="math-container">$\phi$</span> simply means <span class="math-container">$\phi_U$</span> onto for every <span class="math-container">$U$</span>. Finally, the other direction is very easy : if <span class="math-container">$\phi$</span> is into, <span class="math-container">$\mathcal{F}$</span> is isomorphic to <span class="math-container">$\operatorname{im}\phi$</span> which is a subpresheaf of a sheaf, so <span class="math-container">$\mathcal{F}$</span> is separated. Similarly, if <span class="math-container">$\phi$</span> is an isomorphism, <span class="math-container">$\mathcal{F}$</span> is isomorphic to a sheaf, so it is a sheaf.</p> </li> </ul>
480,504
<p>If <span class="math-container">$M$</span> is a symmetric positive-definite matrix, is it possible to get a <strong>positive</strong> lower bound on the smallest eigenvalue of <span class="math-container">$M$</span> in terms of a matrix norm of <span class="math-container">$M$</span> or elements of <span class="math-container">$M$</span>? E.g., I want <span class="math-container">$$\lambda_{\text{min}} \geq f(\lVert M \rVert)$$</span> or something like that. <span class="math-container">$M$</span> is a Gram matrix, if that helps.</p>
Evan
38,878
<p>A quick comment: If you have diagonal dominance, then <a href="https://en.wikipedia.org/wiki/Gershgorin_circle_theorem" rel="noreferrer">Gershgorin's circle theorem</a> for eigenvalues will get you at least something. So for each row, subtract the sum of the absolute values of the off-diagonal terms from the diagonal term, and take the minimum over the rows. That is a lower bound on the smallest eigenvalue, and it will be positive (again, if you have strict diagonal dominance, which may not hold for all Gram matrices). </p>
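A sketch of this bound in Python (using NumPy; the matrix below is a made-up diagonally dominant example, not from the question):

```python
import numpy as np

def gershgorin_lower_bound(M):
    """Lower bound on the smallest eigenvalue of a symmetric matrix M:
    min over rows of (diagonal entry minus sum of |off-diagonal| entries).
    The bound is positive only when M is strictly diagonally dominant."""
    M = np.asarray(M, dtype=float)
    off_diag = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
    return np.min(np.diag(M) - off_diag)

M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.5, 0.5, 2.0]])
print(gershgorin_lower_bound(M), np.linalg.eigvalsh(M).min())
```

The bound is never above the true smallest eigenvalue, since every eigenvalue lies in some Gershgorin disc.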
1,223,209
<p>As part of another problem I am working on, I have the following product to work out. </p> <p>$\begin{bmatrix} 1 &amp; 2 &amp; 3 \end{bmatrix} \cdot h $</p> <p>where $h$ is a scalar. My question is, if I commute the row vector and the scalar then I can just multiply it through. If I think of the $h$ as a $1 \times 1$ matrix however, it seems that this isn't allowed. </p> <p>I know it sounds simple, but I want to understand the subtlety involved. </p>
Rellek
228,621
<p>Well, from your equation, I notice that it is only increasing, which can't be right, since the probability should keep going down. So, instead of having a constant denominator, there needs to be some variable there. Also, if I checked this right, there should be a 0 probability of getting a "new" ball on your 6th try.</p> <p>I would say that you actually have 10 balls to choose from after you draw your first 3, since you're allowed to pick one of your old balls. Then, after that, you've added 2 more "old" balls. So, you only have 8 choices. Keep going, you've got 6, 4, 2, and finally 0. So, yeah, it is much simpler than you thought originally. You should be taking the probability of getting 2 new balls on the first pulls and then multiplying it by the probability of getting 2 more new balls on the second, and so on.</p> <p>I got $\frac{84}{6655}$ as the probability. Do you know what the correct answer is?</p>
1,291,050
<p>I have been doing this problem: $∇ × (\varphi∇\varphi)=0$</p> <p>I am just having trouble applying the product rule; the result I get is below.</p> <p>$$i(( \frac {d}{dy} )(\varphi \frac {d}{dz} \varphi) - ((\frac {d}{dz})(\varphi \frac {d}{dy} \varphi)) )$$</p> <p>If I take the first part </p> <p>$$(\varphi \frac {d}{dz} \varphi)$$</p> <p>and use the product rule I get the following</p> <p>$$\frac {d}{dx}(uv)= ((\varphi \frac {d}{dz} \varphi) + ((\frac {d^2}{d^2z} \varphi^2)))$$</p> <p>This doesn't seem right; can someone help by going through how to apply the product rule here? Thank you.</p>
Alex Fok
223,498
<p>Claim: $\det(A^{-1}A^\top+I)\geq 2^n$.</p> <p>Proof: Note that $\det(A^{-1}A^\top)=1$. Let $\lambda_1, \cdots, \lambda_n$ be the eigenvalues of $A^{-1}A^\top$. Then $\lambda_1\cdots\lambda_n=1$ and the eigenvalues of $A^{-1}A^\top+I$ are $\lambda_1+1, \cdots, \lambda_n+1$. So \begin{align*} \det(A^{-1}A^\top+I)&amp;=(\lambda_1+1)\cdots(\lambda_n+1)\\ &amp;\geq (2\sqrt{\lambda_1})\cdots(2\sqrt{\lambda_n})\\ &amp;=2^n \end{align*}</p> <p>Now we have $\displaystyle\det H=\det\left(\frac{A+A^\top}{2}\right)=\det(A)\det\left(\frac{I+A^{-1}A^\top}{2}\right)\geq\det(A)$. It is easy to see that equality holds if and only if $A^{-1}A^\top=I$, i.e. $A$ is symmetric (under the assumption that the eigenvalues of $A$ are positive).</p>
3,573,575
<p>I'm trying to find the eigenvalues of a matrix <span class="math-container">$$A=\begin{bmatrix}2/3 &amp; -1/4 &amp; -1/4 \\ -1/4 &amp; 2/3 &amp; -1/4 \\ -1/4 &amp; -1/4 &amp; 2/3\end{bmatrix}$$</span></p> <p>The eigenvalues of this matrix are the roots <span class="math-container">$\lambda$</span> of the equation <span class="math-container">$\det(A-\lambda I)=0$</span>. Expanding this determinant with Sarrus's Rule gives a third-degree polynomial, whose roots can apparently be estimated by iterative methods. Before I start exploring those avenues, however, I'd like to know if there is a more practical method to compute the eigenvalues of this matrix.</p>
Mohammad Riazi-Kermani
514,496
<p>Check for the rational roots of the characteristic polynomial.</p> <p>The eigenvalues are <span class="math-container">$$\frac {11}{12}, \frac {11}{12}, \frac {1}{6} $$</span></p>
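These values are easy to verify numerically (a NumPy sketch). A useful observation: $A=\frac{11}{12}I-\frac{1}{4}J$ with $J$ the all-ones matrix, and $J$ has eigenvalues $3,0,0$, which explains the clean rational roots.

```python
import numpy as np

A = np.array([[ 2/3, -1/4, -1/4],
              [-1/4,  2/3, -1/4],
              [-1/4, -1/4,  2/3]])

# eigvalsh is the symmetric eigenvalue routine; returns values in ascending order
vals = np.sort(np.linalg.eigvalsh(A))
print(vals)  # approximately [1/6, 11/12, 11/12]
```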
2,060,891
<p>Number of solutions of $a^3=e$ in $C_9$</p> <p>The solution goes: $a^3=e$ if and only if $a$ lies in the unique subgroup of $C_9$ of order $3$ thus there are $3$ solutions.</p> <p>I'm questioning why? </p>
MikeWasTaken
392,196
<p>The textbook must be wrong then.</p> <p><a href="https://i.stack.imgur.com/c0ADa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c0ADa.png" alt="enter image description here"></a></p> <p>6 combinations: (4,6), (5,5), (5,6), (6,4), (6,5), (6,6)</p>
176,059
<p>I asked this question in MSE, but I did not receive any answer, so I repeat it here:</p> <p><a href="https://math.stackexchange.com/questions/858238/a-question-on-fixed-point-property">https://math.stackexchange.com/questions/858238/a-question-on-fixed-point-property</a></p> <p>Assume that $0&lt;k&lt;n-1$, and note that $\mathbb{C}P^{k}$ can be considered as a closed subset of $\mathbb{C}P^{n}$, in a natural way. We collapse $\mathbb{C}P^{k}$ to a point. The resulting space is denoted by $\mathbb{C}P^{n}/\mathbb{C}P^{k}$.</p> <p><strong>My fixed point question:</strong></p> <blockquote> <p>Does $\mathbb{C}P^{n}/\mathbb{C}P^{k}$ satisfy the fixed point property? (At least when $n$ is even.)</p> </blockquote> <p>This question is motivated by:</p> <p><a href="https://math.stackexchange.com/questions/845057/show-mathbbcp2-cp1-is-not-a-retract-of-mathbbcp4-cp1#comment1754879_845057">https://math.stackexchange.com/questions/845057/show-mathbbcp2-cp1-is-not-a-retract-of-mathbbcp4-cp1#comment1754879_845057</a></p>
user56137
56,137
<p>I don't think $\mathbb{C}P^n/\mathbb{C}P^k$ (or $\mathbb{R}P^n/\mathbb{R}P^k$) ever has the fixed point property in the range you describe. I haven't thought about this very long so could be wrong, but on first look I think you can construct an endomorphism with no fixed points in the following way:</p> <p>Consider $\mathbb{C}P^n$ as the space of 1-dimensional complex subspaces of $\mathbb{C}^{n+1}$, and $\mathbb{C}P^k$ as 1-dim subspaces in some $k+1$ dimensional subspace. Now every 1-dim subspace in $\mathbb{C}^{n+1}$ has $n$ naturally associated other subspaces, namely those orthogonal to it. So to construct a continuous endomorphism, we could try to find a continuous choice of orthogonal subspace, and in order for it to pass to an endomorphism of the quotient, we just need to make sure that all the subspaces contained in $\mathbb{C}^{k+1}$ are sent to a common subspace. Since $k&lt;n$, there is at least one choice of 1-dim subspace which is orthogonal to all of $\mathbb{C}^{k+1}$, so pick one and call it $V$. Now given any 1-dim subspace $U$ which is not in $\mathbb{C}^{k+1}$, $U$ has an orthogonal projection $\bar{U}$ onto $\mathbb{C}^{k+1}$, and so we can apply the unique unitary operator which fixes the orthogonal complement $W$ of $U+\bar{U}$ ($W$ is codimension 2) and sends $\bar{U}$ to $U$. Under this map, $V$ is sent to a subspace orthogonal to $U$. So we have assigned to every 1-dim subspace of $\mathbb{C}^{n+1}$ an orthogonal one, in a manner which is clearly continuous on passage to $\mathbb{C}P^n$, and so gives a map which by construction passes to the quotient $\mathbb{C}P^n/\mathbb{C}P^k$ and has no fixed points.</p>
302,179
<p>The question I am working on is:</p> <blockquote> <p>"Use a direct proof to show that every odd integer is the difference of two squares."</p> </blockquote> <p>Proof:</p> <p>Let n be an odd integer: $n = 2k + 1$, where $k \in Z$</p> <p>Let the difference of two different squares be, $a^2-b^2$, where $a,b \in Z$.</p> <p>Hence, $n=2k+1=a^2-b^2$...</p> <p>As you can see, this a dead-end. Appealing to the answer key, I found that they let the difference of two different squares be, $(k+1)^2-k^2$. I understand their use of $k$--$k$ is one number, and $k+1$ is a different number--;however, why did they choose to add $1$? Why couldn't we have added $2$?</p>
P.K.
34,397
<p>$$2k + 1 = 1 \times (2k +1) = (\color{#C00}{k + 1} -{\bf k}) (\color{#C00}{k + 1} +\bf{ k})$$</p>
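As a quick sanity check (added here, not part of the original answer), the identity $2k+1=(k+1)^2-k^2$ can be verified numerically over a range of odd integers:

```python
# Verify that every odd integer 2k + 1 equals (k + 1)^2 - k^2.
def as_difference_of_squares(n):
    """Given an odd integer n = 2k + 1, return the pair (k + 1, k)."""
    assert n % 2 != 0, "n must be odd"
    k = (n - 1) // 2
    return (k + 1, k)

# Check the identity for a range of odd integers, including negatives.
for n in range(-99, 100, 2):
    a, b = as_difference_of_squares(n)
    assert a * a - b * b == n
```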
4,092,473
<p>I am trying to determine whether the following set is bounded: <span class="math-container">$$S=\bigcup_{a\in(0,1)} M_{a},$$</span> where <span class="math-container">$$M_{a}=\{(x,y) \in \mathbb{R}^2: ax+(1-a)y=b, x&gt;0, y&gt;0, b \text{ is a fixed positive real number}\}.$$</span> I think this set is bounded since all lines in the set are bounded by the vertical line <span class="math-container">$x = b$</span> and the horizontal line <span class="math-container">$y=b$</span>. However, I am not sure how to prove it formally. Thanks!</p> <p><strong>Edited: the definition of the set has been rewritten, and my attempt is as follows.</strong> <span class="math-container">$$\lim_{a \rightarrow 0} M_a = \{(x,y)\in \mathbb{R}^2: y=b,\ x\in (0,\infty)\}$$</span> and <span class="math-container">$$\lim_{a \rightarrow 1} M_a = \{(x,y)\in \mathbb{R}^2: x=b,\ y\in (0,\infty)\}.$$</span> Does this imply that <span class="math-container">$S$</span> is bounded?</p>
Rahul Madhavan
439,353
<p>Claim: The set <span class="math-container">$S=\bigcup_{0&lt;a&lt;1}\{(x,y) \in \mathbb{R}\times\mathbb{R}: ax+(1-a)y=b,\ x&gt;0,\ y&gt;0\}$</span> is unbounded if <span class="math-container">$b&gt;0$</span> and empty otherwise.</p> <p>Proof: For <span class="math-container">$x,y&gt;0$</span> and <span class="math-container">$0&lt;a&lt;1$</span>, the quantity <span class="math-container">$ax+(1-a)y$</span> is positive. So if <span class="math-container">$b\leq 0$</span>, then we can't find any appropriate <span class="math-container">$(x,y)$</span>; therefore <span class="math-container">$S$</span> is empty.</p> <p>For the other case, <span class="math-container">$b&gt;0$</span>, WLOG let's assume <span class="math-container">$b$</span> is some number greater than 1; specifically, let's take <span class="math-container">$b=2$</span>. (If not, <span class="math-container">$(x,y)$</span> can be appropriately scaled so that this is true.)</p> <p>Suppose, toward a contradiction, that <span class="math-container">$S$</span> is bounded, say <span class="math-container">$S\subset[0,N]\times[0,N]$</span> for some large <span class="math-container">$N$</span>. Given any such <span class="math-container">$N$</span>, one can find some <span class="math-container">$\epsilon&gt;0$</span> such that <span class="math-container">$\frac{1}{\epsilon}&gt;N$</span>. Set <span class="math-container">$a=\epsilon$</span>. We then set <span class="math-container">$y$</span> to be some small quantity, say <span class="math-container">$\epsilon$</span>, such that <span class="math-container">$b-(1-\epsilon)y&gt;1$</span>. Then for such points <span class="math-container">$(x,y)$</span> in S: <span class="math-container">\begin{align*} a x+(1-a)y&amp;=b\\ a x &amp;=b-(1-a)y\\ \epsilon x &amp;&gt;1\\ x &amp;&gt;\frac{1}{\epsilon}\\ x &amp;&gt;N\\ \end{align*}</span></p> <p>Thus we have found a point <span class="math-container">$(x,y) \in S$</span> that is not contained in <span class="math-container">$[0,N]\times[0,N]$</span>, contradicting the assumed bound on <span class="math-container">$S$</span>. Therefore <span class="math-container">$S$</span> is unbounded.</p>
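The contradiction can also be illustrated numerically (an added illustration, not part of the original proof): fix $b=2$ and, for any candidate bound $N$, take $a=y=\varepsilon$ with $\varepsilon = 1/(2N)$; the resulting point of $S$ lies on the line $M_a$ yet has $x > N$:

```python
def point_in_S_beyond(N, b=2.0):
    """Return (a, x, y) with 0 < a < 1, x, y > 0, a*x + (1-a)*y = b and x > N."""
    eps = 1.0 / (2.0 * N)          # so that 1/eps = 2N > N
    a, y = eps, eps
    x = (b - (1.0 - a) * y) / a    # solve a*x + (1-a)*y = b for x
    return a, x, y

for N in (10.0, 1e3, 1e6):
    a, x, y = point_in_S_beyond(N)
    assert 0 < a < 1 and x > 0 and y > 0
    assert abs(a * x + (1 - a) * y - 2.0) < 1e-9  # the point lies on the line M_a
    assert x > N                                  # ...yet escapes [0, N] x [0, N]
```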
317,601
<blockquote> <p>Let <span class="math-container">$F$</span> be a ring, let <span class="math-container">$f(x)=a_0+a_1x+\cdots+a_nx^n$</span> be in <span class="math-container">$F[x]$</span>, and <span class="math-container">$f'(x)$</span> be the regular derivative of <span class="math-container">$f(x)$</span>.</p> <p>Prove that <span class="math-container">$(f+g)'(x)=f'(x)+g'(x)$</span>.</p> <p>Conclude that we can define a homomorphism of abelian groups <span class="math-container">$D:F[x]\to F[x]$</span> by <span class="math-container">$D(f(x))=f'(x)$</span>.</p> </blockquote> <p>How to prove that <span class="math-container">$(fg)'=f'g+fg'$</span>?</p>
Zev Chonoles
264
<p>There isn't really anything to prove; part of the axioms for a ring are that a ring is an abelian group under addition. If you feel the need to say anything about the issue at all, it should be fine to just say $F[x]$ is abelian group under addition by the ring axioms.</p> <hr> <p><em>Hint for finding the kernel of $D$ when $F$ is of prime characteristic</em></p> <p>Note that $$D(x^p)=px^{p-1}=0$$ because $p=0$ in the field $F$. Do you see now which polynomials will have a derivative which equals 0?</p>
1,464,143
<p>Using L'Hôpital's rule, show that $\lim_{n \to \infty} n\ln\left(1+\frac{1}{n}\right) = 1$. Can you even apply the rule here, since there isn't a quotient? Clearly $n$ tends to infinity and $\ln\left(1+\frac{1}{n}\right)$ tends to $0$, so their limits aren't matching.</p> <p>So I set $u=n $</p> <p>$du=1$</p> <p>$v= \ln\left(1+\frac{1}{n}\right)$</p> <p>$dv= -\frac{1}{n^2+n}$</p> <p>Hence $\ln\left(1+\frac{1}{n}\right) - \frac{1}{n+1}$, in which both of these tend to $0$, so I am completely lost.</p>
Bernard
202,857
<p>Using L'Hospital's rule here is properly ridiculous: set $\;\dfrac1n=x$ and observe we have a rate of variation: $$\lim_{n\to\infty}n\ln\Bigl(1+\frac1n\Bigr)=\lim_{x\to0}\frac{\ln(1+x)}x=\bigl(\ln(1+x)\bigr)'_{x=0}=1.$$</p>
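A quick numerical check (added for illustration, not part of the original answer): the expansion $\ln(1+x)=x-\frac{x^2}{2}+\cdots$ gives $n\ln(1+1/n)=1-\frac{1}{2n}+O(n^{-2})$, so the gap from $1$ shrinks like $1/(2n)$:

```python
import math

def seq(n):
    """The sequence n * ln(1 + 1/n), which tends to 1."""
    return n * math.log(1.0 + 1.0 / n)

for n in (10, 1000, 10**6):
    # n*ln(1 + 1/n) = 1 - 1/(2n) + O(1/n^2), so the gap is below 1/n.
    assert abs(seq(n) - 1.0) < 1.0 / n
```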
302
<p>I know that the ratios of consecutive Fibonacci numbers converge to about .618 (the reciprocal of the golden ratio), and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the... significance?</p>
Larch
31,272
<p>The golden ratio appeared in nature long before humans evolved to think about Fibonacci numbers.</p>
34,671
<p>Going through some old papers, I came up with a simple-looking problem I thought about 5 years ago or so. </p> <p>MO wants motivation ... Associated to a probability measure on a metric space is something called "quantization dimension" ... this involves defining a function $D \colon (0,\infty) \to (0,\infty)$. Exactly how is not the point here, but see for example </p> <p><a href="http://www.ams.org/mathscinet-getitem?mr=1877974" rel="nofollow">http://www.ams.org/mathscinet-getitem?mr=1877974</a></p> <p>Lindsay, L. J. and Mauldin, R. D. Quantization dimension for conformal iterated function systems. Nonlinearity 15 (2002), no. 1, 189--199. </p> <p>It was observed numerically that $D$ is increasing and concave, but proof was lacking. When we do this for the simplest possible self-similar measure (similarities with ratios $s_1, s_2$ and probabilities $p_1, p_2$) I still did not solve it, even though it looks like an elementary calculus exercise. Here it is.</p> <blockquote> <p>Let $s_1, s_2, p_1, p_2$ be positive real numbers such that $s_1 &lt; 1$, $s_2 &lt; 1$, $p_1+p_2=1$. For $r&gt;0$ define $D = D(r)$ implicitly by $$ \left(p_1 s_1^r\right)^{D/(r+D)} + \left(p_2 s_2^r\right)^{D/(r+D)} = 1. $$ Then:<br> Does it follow that $D'(r) \ge 0$? [YES]<br> Does it follow that $D''(r) \le 0$? [OPEN]</p> </blockquote> <p>At least it was open back then!</p>
fedja
1,131
<p>All right. The hardest thing is to do all algebra right (I am always prone to misplacing + and -, so check everything I say in the first part. :)).</p> <p>Let $\lambda=\log \frac 1p$ and $\Gamma=\log\frac 1s$. We want to create a situation when there are 3 points on the line $D=1+br$ ($b&gt;0$). Let's write $(ps^r)^{D/(D+r)}$ as the exponent of minus $\lambda\frac{1+br}{1+(b+1)r}+\Gamma\frac{r(1+br)}{1+(b+1)r}$. Our first step will be to replace $(b+1)r$ by $r$. Since $\Gamma&gt;0$ is free ($\lambda$'s are restricted to $\sum_j e^{-\lambda_j}=1$), we still get $\lambda\frac{1+br}{1+r}+\Gamma\frac{r(1+br)}{1+r}$ but now $b\in(0,1)$. Now replace $1+r$ with $r$. We'll get $\lambda\frac{1-b+br}{r}+\Gamma\frac{(r-1)(1-b+br)}{r}$. Now let's open the parentheses and group the terms. If I haven't made an odd number of errors, we get $(\lambda-\Gamma)(1-b)r^{-1}+\Gamma b r+(\lambda-\Gamma)b+\Gamma(1-b)$ ($r\ge 1$). Now let us replace $r$ by $\frac{1-b}{b}r$ to get $(\lambda-\Gamma)br^{-1}+\Gamma(1-b) r+(\lambda-\Gamma)b+\Gamma(1-b)$ ($r\ge \frac b{1-b}$) and denote $X=(\lambda-\Gamma)b$, $Y=\Gamma(1-b)$. Thus, the problem is reduced to asking whether the sum of two exponents of the kind $$ \exp(-Xr^{-1}-Yr-X-Y) $$ where $r&gt; 0$, $Y&gt;0$ and $X$ is unrestricted can take the value $1$ three times on the positive semiaxis (the leftmost root will be $b/(1-b)$, after which you can go back and discern the initial values). </p> <p>The rest is simple analysis.</p> <p>For one exponent, let $X=Y=B$. Then at $1$, it is $e^{-4B}$ and at $1/2$ it is $e^{-4.5B}$. Now for the second exponent choose $Y=A$ and $X=-\varepsilon$. Then $X$ guarantees that we have $+\infty$ at $0+$ but is invisible for any noticeable positive $r$. 
Also, at $+\infty$, we have $0$, so we just want the value at $1$ to be larger than $1$ and the value at $1/2$ to be smaller than $1$, which results in the system of inequalities $$ e^{-1.5A}+e^{-4.5B}&lt;1;\qquad e^{-2A}+e^{-4B}&gt;1 $$ Now, take relatively small $A$ and put $e^{-4B}=2A$. The second inequality is then fine and the first one is $e^{-1.5 A}+A^{9/8}&lt;1$, which is true if $A$ is small enough.</p> <p>UPDATE: The counterexample failed, so let's try the proof. The same chain of changes of variable can be applied to the line $D=ar+b$. Note that $D'&gt;0$ and $(D/r)'&lt;0$ so the only chance to have this line to intersect the graph of $D$ more than once is to take $a,b&gt;0$.</p> <p>Thus, using the tangent line to the graph of $D$ at some positive point where concavity is violated, the problem can be restated as follows: the sum $F(r)=e^{-Ar^{-1}-Br-(A+B)}+e^{-Cr^{-1}-Dr-(C+D)}$ cannot have the value $1$, the derivative $0$ and positive second derivative at any point except the point corresponding to the case when the original $r$ is $0$. Here $B,D&gt;0$ and $A,C$ are unrestricted.</p> <p>If $A,C\le 0$, then $F$ is decreasing and the claim is trivial. So, let us assume that $C&gt; 0$.</p> <p>The nice case is when $A&gt;0$ as well. 
In this case, we just need to switch to the variables $x=\frac 1{r+1}$ and $y=\frac r{r+1}$ and write $F$, $F'$ and $F''$ explicitly (differentiating with respect to $x$ instead of $r$):</p> <p>$e^{-\frac Ax-\frac By}+e^{-\frac Cx-\frac Dy}=1$</p> <p>$\left(\frac A{x^2}-\frac B{y^2}\right)e^{-\frac Ax-\frac By}+\left(\frac C{x^2}-\frac D{y^2}\right)e^{-\frac Cx-\frac Dy}=0$</p> <p>$\left[\left(\frac A{x^2}-\frac B{y^2}\right)^2-2\left(\frac A{x^3}+\frac B{y^3}\right)\right]e^{-\frac Ax-\frac By}+\left[\left(\frac C{x^2}-\frac D{y^2}\right)^2-2\left(\frac C{x^3}+\frac D{y^3}\right)\right]e^{-\frac Cx-\frac Dy}\ge 0$</p> <p>Note that the first square bracket can be non-negative only if $\max(A/x,B/y)&gt;2$, in which case the first exponent is at most $e^{-2}$. This tells us that the second exponent is certainly above $1/e$, so $\frac Cx+\frac Dy&lt;1$ and the cubic sum in the second square bracket beats the square of the quadratic sum even with coefficient $1$. Thus, the last inequality implies </p> <p>$\left(\frac A{x^2}-\frac B{y^2}\right)^2 e^{-\frac Ax-\frac By}\ge\left(\frac C{x^3}+\frac D{y^3}\right)e^{-\frac Cx-\frac Dy}$.</p> <p>Using the estimate $e^{-t}\ge 1-te^{-t}$ for $t&gt;0$, we conclude that the first equality implies </p> <p>$e^{-\frac Ax-\frac By}\ge \left(\frac C{x}+\frac D{y}\right)e^{-\frac Cx-\frac Dy}$.</p> <p>Now, the second inequality certainly implies that</p> <p>$\left|\frac A{x^2}-\frac B{y^2}\right|e^{-\frac Ax-\frac By}&lt;\left(\frac C{x^2}+\frac D{y^2}\right)e^{-\frac Cx-\frac Dy}$.</p> <p>Now, looking at the left hand sides, which form a geometric progression, we see that the right hand sides violate Cauchy-Schwarz, so this case is done.</p> <p>In the second case $A&lt;0$, we start with noticing that a dip on the line $F=1$ implies that $F$ takes the same value $1$ five or more times (counting with multiplicity). So, we need to show that it cannot happen. 
My original idea was to show that $X,Y,X^2,XY,Y^2$ is a Chebyshev system on the curve $e^{-X}+e^{-Y}=1$. This can be done (differentiating quotients and getting rid of the functions one by one, as usual) but I couldn't finish the necessary computations without Maxima and posting the resulting long expressions was certainly out of question. Finally I settled on a different change of variable.</p> <p>TO BE CONTINUED... </p>
2,269,042
<p>Consider $f(x) = \lfloor x \rfloor + \lfloor -x \rfloor$. Now find the value of $\lim_{x \to \infty} f(x)$. I know that if $x_0 \in \mathbb{R}$ then $\lim_{x \to x_0} f(x) = -1$, but I don't know whether the same is true at infinity.</p>
boaz
83,796
<p>It has no limit when $x\to\infty$. Consider the sequences $$ x_n=n\qquad y_n=n+\frac{1}{2} $$ Both sequence tend to $\infty$, but notice that $f(x_n)=0$ while $f(y_n)=-1$ for every $n$.</p>
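The two subsequences can be seen concretely (an added illustration): $f$ vanishes at the integers and equals $-1$ at the half-integers, so it oscillates forever:

```python
import math

def f(x):
    return math.floor(x) + math.floor(-x)

for n in range(1, 50):
    assert f(n) == 0          # along x_n = n,       f(x_n) = n + (-n)       = 0
    assert f(n + 0.5) == -1   # along y_n = n + 1/2, f(y_n) = n + (-n - 1)   = -1
```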
2,612,416
<p>Can you please help me with this limit? I can´t use L'Hopital rule.</p> <p>$$\lim_{x\to \infty} \frac{\sqrt{4x^2+5}-3}{\sqrt[3]{x^4}-1} $$</p>
Guy Fsone
385,707
<p>Since the OP seems unsure whether $x\to \infty$ or $x\to1$ is intended, I have included both answers:</p> <hr> <p>If $x\to \infty$ then we have $$\lim_{x\to \infty} \frac{\sqrt{4x^2+5}-3}{\sqrt[3]{x^4}-1} =\lim_{x\to \infty}{\sqrt{4+{5\over x^2}}-{3\over x}\over \sqrt[3]{x}-{1\over x}} =\lim_{x\to \infty}{\sqrt{4}\over \sqrt[3]{x}} = 0$$</p> <hr> <p>And, if $x\to 1,$ then we have:</p> <p>By definition of the derivative at $x=1$ we have $$\lim_{x\to 1} \frac{\sqrt{4x^2+5}-3}{\sqrt[3]{x^4}-1} = \lim_{x\to 1} \frac{\sqrt{4x^2+5}-3}{x-1}\lim_{x\to 1} \frac{x-1}{\sqrt[3]{x^4}-1} = \left(\sqrt{4x^2+5}\right)'\frac{1}{\left(\sqrt[3]{x^4}\right)'}\Bigg|_{x=1} = 1$$</p> <p>Since $$\left(\sqrt{4x^2+5}\right)'\Bigg|_{x=1} = \frac{4x}{\sqrt{4x^2+5}}\Bigg|_{x=1} = \frac{4}{3}$$</p> <p>and $$\left(\sqrt[3]{x^4}\right)'\Bigg|_{x=1} = \frac{4}{3}x^{4/3-1}\Bigg|_{x=1} = \frac{4}{3}$$</p>
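Both limits can be spot-checked numerically (an added illustration, not part of the original answer): for large $x$ the ratio behaves like $2x/x^{4/3}=2x^{-1/3}\to 0$, while near $x=1$ both numerator and denominator vanish like $\frac43(x-1)$, so the ratio tends to $1$:

```python
def g(x):
    """The expression (sqrt(4x^2 + 5) - 3) / (x^(4/3) - 1)."""
    return ((4.0 * x * x + 5.0) ** 0.5 - 3.0) / (x ** (4.0 / 3.0) - 1.0)

# As x -> infinity the expression behaves like 2 / x^(1/3) -> 0.
assert abs(g(1e9)) < 1e-2

# Near x = 1 both numerator and denominator vanish like (4/3)(x - 1),
# so the ratio tends to 1.
assert abs(g(1.000001) - 1.0) < 1e-3
```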
2,612,416
<p>Can you please help me with this limit? I can´t use L'Hopital rule.</p> <p>$$\lim_{x\to \infty} \frac{\sqrt{4x^2+5}-3}{\sqrt[3]{x^4}-1} $$</p>
Martín-Blas Pérez Pinilla
98,199
<p>Hint: use that $$a^2 - b^2 = (a - b)(a + b)$$ and $$a^3 - b^3 = (a - b)(a^2 + ab + b^2).$$</p>
227,311
<p>It is known that the roots of the Chebyshev polynomials of the second kind, denoted by $U_n(x)$, lie in the interval $(-1,1)$. I have noticed, by looking at small values of $n$, that the roots of $(1-x)U_n(x)+U_{n-1}(x)$ lie in the interval $(-2,2)$. However, I don't have a clear idea how to start proving this; could anyone help me, please?</p> <p><em>P.S.</em> I have asked this question on StackExchange and set a bounty, but still have not received any answer for it: <a href="https://math.stackexchange.com/q/1583588/60099">see this link</a> </p>
John Jiang
4,923
<p>The key is to write $U_n(x)$ in terms of $x$ explicitly. Let $x = \cos t$, then $\sin t = \sqrt{1-x^2}$, where we fixed a branch of square root so that $\sqrt{-1} = i$; the value of $U_n$ is independent of the choice. Then $e^{it} = x + i \sqrt{1 - x^2}$. For $|x| \ge 1$, we can thus write $e^{it} = x - \sqrt{x^2 - 1}$.</p> <p>For $|x| \ge 1$, let $y = x - \sqrt{x^2 -1}$, $A = 1-\sqrt{x^2 -1}$ and $B = (1-x)y + 1 $. Observe that $0 &lt; y \le 1$. By analytic continuation, we can write</p> <p>$$(1-x)U_n(x) + U_{n-1}(x) = \frac{ A y^{-(n+1)} -B y^n }{2 \sqrt{x^2 -1}}.$$</p> <p>For $x &gt; \sqrt{2}$, $A &lt; 0$ and $B &gt; 0$. So it's pretty clear the function has no zero there.</p> <p>Similar argument can be made for $x \le -1$.</p>
1,902,138
<p>It's common to see a plus-minus ($\pm$), for example in describing error $$ t=72 \pm 3 $$ or in the quadratic formula $$ x = \frac{-b \pm \sqrt{b^2-4ac}}{2a} $$ or identities like $$ \sin(A \pm B) = \sin(A) \cos(B) \pm \cos(A) \sin(B) $$</p> <p>I've never seen an analogous version combining multiplication with division, something like $\frac{\times}{\div}$</p> <blockquote> <p>Does this ever come up, and if not why?</p> </blockquote> <p>I suspect it simply isn't as naturally useful as $\pm$. </p>
barak manos
131,263
<p>I think that this question is primarily opinion-based, so here is my opinion:</p> <ul> <li>The expression $[t=72\pm3]$ is equivalent to $[t=72+(+3)]\vee[t=72+(-3)]$</li> <li>The expression $[t=72\frac{\times}{\div}3]$ would be equivalent to $[t=72\times3]\vee[t=72\times\frac13]$</li> </ul> <p>So the second operand "looks the same" in the case of $\pm$ but not in the case of $\frac{\times}{\div}$.</p> <p>If we had a different notation for $\frac13$ (for example, $\color\red3$), then it might have seemed more appropriate to denote something like $[t=72\frac{\times}{\div}3]$, which would be equivalent to $[t=72\times3]\vee[t=72\times\color\red3]$.</p> <p>So it's basically a matter of "backward compatibility" with our existing notation for <em>inverse</em>...</p>
4,121,607
<p>I want to find a function which satisfies certain following limits.</p> <p>The question is: Find a function which satisfies</p> <p><span class="math-container">$$ \lim_{x\to5} f(x)=3, \text{ and } f(5) \text{ does not exist} $$</span></p> <p>I would think that because it says <span class="math-container">$f(5)$</span> doesn't exist, there must be a fraction with <span class="math-container">$(x-5)$</span> on the bottom. I would think <span class="math-container">$f(x) = \frac{15}{x-5}$</span> but that tends to infinity as <span class="math-container">$x\to5$</span></p>
Amaan M
860,916
<p>There are, generally, three types of discontinuities: removable, jump, and asymptotic. If you have a jump or asymptotic discontinuity, a finite two-sided limit won't exist.</p> <p>If you have a jump discontinuity, your left-handed and right-handed limits aren't equal, so the limit doesn't exist.</p> <p>If you have an asymptotic discontinuity, your left and right-handed limits are each <span class="math-container">$\pm \infty$</span>, so even if a two-sided limit exists, it's <span class="math-container">$\pm \infty$</span>, not a finite number. The solution you tried first had this problem - it was asymptotic, and tended to <span class="math-container">$\infty$</span> from the right and <span class="math-container">$-\infty$</span> from the left.</p> <p>The simplest way to have a two-sided limit at a point where a function isn't defined is with a removable discontinuity.</p> <p>As Thomas Andrews noted, a very simple solution is <span class="math-container">$\frac{3(x-5)}{x-5}$</span>. That fraction is equivalent to <span class="math-container">$3$</span> everywhere except at <span class="math-container">$x=5$</span>, where it has a hole.</p> <p>You could, in fact, take any function <span class="math-container">$g$</span> such that <span class="math-container">$g$</span> is continuous at <span class="math-container">$x = 5$</span> and <span class="math-container">$g(5) = 3$</span>, then let <span class="math-container">$f(x) = g(x)*\frac{x-5}{x-5}$</span>. Similarly, you could define <span class="math-container">$f$</span> piecewise, where it's equal to <span class="math-container">$g$</span> everywhere except <span class="math-container">$x = 5$</span>, and undefined at <span class="math-container">$x = 5$</span>. 
That's the solution that jjagmath is getting at.</p> <p>Thomas's solution uses <span class="math-container">$g(x) = 3$</span>, but you could use, for example, <span class="math-container">$g(x) = x-2$</span>, or <span class="math-container">$g(x) = \sqrt{x+4}$</span>, or anything else that satisfies those above conditions.</p>
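A small illustration (added here) of the example $f(x)=\frac{3(x-5)}{x-5}$ mentioned above: it equals $3$ everywhere except at $x=5$, where evaluation fails, so the limit exists but $f(5)$ does not:

```python
def f(x):
    # Equals 3 for every x != 5; the 0/0 at x = 5 makes f(5) undefined.
    return 3.0 * (x - 5.0) / (x - 5.0)

# The two-sided limit at 5 is 3: values near 5 are 3 (up to rounding).
assert abs(f(4.999) - 3.0) < 1e-12
assert abs(f(5.001) - 3.0) < 1e-12

# f(5) itself does not exist: evaluation raises a division-by-zero error.
try:
    f(5.0)
    raised = False
except ZeroDivisionError:
    raised = True
assert raised
```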
4,017,554
<p>Simple question - how to prove that:</p> <p><span class="math-container">$\sqrt {x\times y} = \sqrt x \times \sqrt y$</span> ?</p> <p>If I use exponentiation the answer seems easy, because</p> <p><span class="math-container">$(x\times y)^n = x^n \times y^n$</span> because I get</p> <p><span class="math-container">$(x\times y) \times\cdots\times (x\times y)$</span> (where <span class="math-container">$x$</span> occurs <span class="math-container">$n$</span> times and <span class="math-container">$y$</span> occurs <span class="math-container">$n$</span> times) can be rewritten as: <span class="math-container">$x \times\cdots\times x \times y \times \cdots \times y$</span>.</p> <p>But in the case of square roots that's not so obvious, because I can't rewrite it the same way. I can of course reason that <span class="math-container">$\sqrt x$</span> is <span class="math-container">$x^{\frac12}$</span> and <span class="math-container">$\sqrt y$</span> is <span class="math-container">$y^{\frac12}$</span> and think using induction, but that doesn't quite satisfy me.</p>
uriyaba
728,938
<p>Since the exponent here isn't a natural number, we can approach the proof from a different angle.</p> <p>Rewrite the square root as a power of <span class="math-container">$0.5$</span>, then use the <a href="https://mathinsight.org/exponentiation_basic_rules#power_product" rel="nofollow noreferrer">power of a product rule</a> to get:</p> <p><span class="math-container">$$\sqrt{xy} = (xy)^{0.5} = x^{0.5} \cdot y^{0.5} = \sqrt x \cdot \sqrt y$$</span></p> <p>This is true, of course, as long as <span class="math-container">$x$</span> and <span class="math-container">$y$</span> aren't negative numbers.</p>
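A quick numeric illustration (added here) of the rule for nonnegative inputs, and of why the negative case is excluded over the reals:

```python
import math

# For nonnegative x and y, sqrt(x*y) = sqrt(x) * sqrt(y) up to rounding.
for x in (0.0, 1.0, 2.0, 9.0, 123.456):
    for y in (0.0, 0.25, 7.0, 1e6):
        assert math.isclose(math.sqrt(x * y),
                            math.sqrt(x) * math.sqrt(y),
                            rel_tol=1e-12, abs_tol=1e-12)

# For two negative numbers the rule breaks down over the reals:
# sqrt((-4)*(-9)) = 6, but sqrt(-4) and sqrt(-9) are not real numbers.
assert math.sqrt((-4) * (-9)) == 6.0
```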
1,043,956
<p>Find a normal vector and a tangent vector to the curve given by the equation: $x^5 + y ^5 =2x^3$ at the point $P(1, 1)$. Find the equation of the tangent line. <br/> Edit: The notes I have:<img src="https://i.stack.imgur.com/arOee.png" alt="enter image description here"></p> <p>Taking $f(x, y) = x^5 - 2x^3 + y^5 = 0$, I got $m_n = (-1,5), m_t = (5, 1)$ and $x = 5y-4$ for the tangent line. <br/></p> <p>I'm very unsure whether this formula is the one I should (or can) use, and whether I've used it correctly.</p>
HDE 226868
170,257
<p>You can find the slope of the tangent line by differentiating. If you don't want to do implicit differentiation (which may be simpler in this case), you can just do some algebra beforehand: $$x^5+y^5=2x^3 \to y^5=2x^3-x^5 \to y=\sqrt[5]{2x^3-x^5}$$ and differentiate according to the power rule. You have the slope of your tangent line; knowing that it goes through $(1,1)$, you should have enough information to solve for that.</p> <p>The tangent vector will have a slope exactly the same as that of the tangent line. The normal vector will have a slope that is the negative inverse of that of the tangent vector. If $m_t$ is the slope of the tangent vector, the slope $m_n$ of the normal vector will be $-\frac{1}{m_t}$.</p> <p>In the method you mentioned, you really just have to find $c$ here to match it up with that equation. You would just re-arrange the equation so that all the constants are on one side of the equation. Here, we find that $$F(x,y)=x^5-2x^3+y^5=c$$ and $c=0$.</p>
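The asker's answer can be spot-checked numerically (an added sketch; with $F(x,y)=x^5-2x^3+y^5$, the gradient $\nabla F(1,1)=(-1,5)$ is the normal direction, approximated here by central finite differences):

```python
def F(x, y):
    return x**5 - 2.0 * x**3 + y**5

h = 1e-6
# Central finite differences approximate the gradient (the normal direction).
Fx = (F(1 + h, 1) - F(1 - h, 1)) / (2 * h)
Fy = (F(1, 1 + h) - F(1, 1 - h)) / (2 * h)
assert abs(Fx - (-1.0)) < 1e-6   # matches m_n = (-1, 5)
assert abs(Fy - 5.0) < 1e-6

# The tangent vector (5, 1) is orthogonal to the normal (-1, 5).
assert (-1.0) * 5.0 + 5.0 * 1.0 == 0.0

# The tangent line x = 5y - 4 passes through P(1, 1).
assert 5 * 1 - 4 == 1
```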
97,672
<p>Given that I have a set of equations about varible $x_0,x_1,\cdots,x_n$, which own the following style:</p> <p>$ \left( \begin{array}{cccccccc} \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; \frac{1}{6} &amp; \frac{2}{3} &amp; \frac{1}{6} &amp; 0 \\ \end{array} \right) \left( \begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ \color{red}{x_0} \\ \color{red}{x_1} \\ \color{red}{x_2} \\ \end{array} \right)=\left( \begin{array}{c} (1,1) \\ (2,3) \\ (3,-1) \\ (4,1) \\ (5,0) \\ \end{array} \right) $</p> <p>Obviously, I <strong>cannot</strong> solve this linear system by <code>LinearSolve[]</code>. To solve this equation group, I only used the <code>Solve[]</code>.</p> <pre><code>mat= {{1/6, 2/3, 1/6, 0, 0, 0, 0, 0}, {0, 1/6, 2/3, 1/6, 0, 0, 0, 0}, {0, 0, 1/6, 2/3, 1/6, 0, 0, 0}, {0, 0, 0, 1/6, 2/3, 1/6, 0, 0}, {0, 0, 0, 0, 1/6, 2/3, 1/6, 0}}; eqns = mat.{x0, x1, x2, x3, x4, x0, x1, x2}; </code></pre> <p>$ \begin{pmatrix} \frac{x_0}{6}+\frac{2 x_1}{3}+\frac{x_2}{6}\\ \frac{x_1}{6}+\frac{2 x_2}{3}+\frac{x_3}{6}\\ \frac{x_2}{6}+\frac{2 x_3}{3}+\frac{x_4}{6}\\ \frac{x_0}{6}+\frac{x_3}{6}+\frac{2 x_4}{3}\\ \frac{2 x_0}{3}+\frac{x_1}{6}+\frac{x_4}{6} \end{pmatrix} $</p> <pre><code>yValues = {{1, 1}, {2, 3}, {3, -1}, {4, 1}, {5, 0}}; part1 = {x0, x1, x2, x3, x4} /. Solve[Thread[eqns == yValues[[All, 1]]], {x0, x1, x2, x3, x4}] part2 = {x0, x1, x2, x3, x4} /. 
Solve[Thread[eqns == yValues[[All, 2]]], {x0, x1, x2, x3, x4}] res = Transpose[Join[part1, part2]] </code></pre> <blockquote> <pre><code> {{75/11, -8/11}, {-9/11, 4/11}, {27/11, 58/11}, {3, -38/11}, {39/11, 28/11}} </code></pre> </blockquote> <h3>Question</h3> <p>However, the index $n$ for the variables $\{x_0,x_1,\cdots,x_n\}$ is very large ($n=100$) in my work, so my approach with <code>Solve[]</code> is quite clumsy. How can I handle this case efficiently with the <em>built-in</em> <code>LinearSolve[]</code>?</p>
Cesareo
62,129
<p>This script accepts the equations (consistent equations) with the unknowns in any order.</p> <pre><code>vars = Variables[eqns] A = Grad[eqns, vars] B1 = Map[First, yValues] B2 = Map[Last, yValues] LinearSolve[A, B1] LinearSolve[A, B2] </code></pre>
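For readers outside Mathematica, the same idea can be sketched in plain Python (an added illustration, not part of the answer): fold the repeated columns into an effective $5\times5$ coefficient matrix and solve exactly over the rationals. This reproduces the values quoted in the question.

```python
from fractions import Fraction as F

s, t, z = F(1, 6), F(2, 3), F(0)
mat = [
    [s, t, s, z, z, z, z, z],
    [z, s, t, s, z, z, z, z],
    [z, z, s, t, s, z, z, z],
    [z, z, z, s, t, s, z, z],
    [z, z, z, z, s, t, s, z],
]

# The unknown vector is (x0, x1, x2, x3, x4, x0, x1, x2):
# fold columns 5..7 back onto columns 0..2 to get an effective 5x5 system.
A = [[mat[i][j] + (mat[i][j + 5] if j < 3 else z) for j in range(5)]
     for i in range(5)]

def solve(A, b):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # find a pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

part1 = solve(A, [F(1), F(2), F(3), F(4), F(5)])
part2 = solve(A, [F(1), F(3), F(-1), F(1), F(0)])

# Matches the exact values quoted in the question.
assert part1 == [F(75, 11), F(-9, 11), F(27, 11), F(3), F(39, 11)]
assert part2 == [F(-8, 11), F(4, 11), F(58, 11), F(-38, 11), F(28, 11)]
```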
268,185
<p><strong>Question:</strong> </p> <ol> <li>Given a PDE, is there a general method to show that it is <em>not solvable</em> using the inverse scattering transform?</li> <li>Specifically, for the perturbed 1D NLS or the 2D cubic NLS, where was it first shown that these equations can not be solved using <em>any</em> form of the inverse scattering transform.</li> </ol> <p><strong>Background and details:</strong> The cubic 1D nonlinear Schrodinger equation (NLS) $$ iu_t + u _{xx} + |u | ^2 u = 0$$ and the KdV equation $$u_t -6uu_x+u_{xxx} = 0$$ are both known to be integrable, and solvable via the <a href="https://en.wikipedia.org/wiki/Inverse_scattering_transform" rel="nofollow noreferrer">inverse scattering transform</a>. So, given the initial condition $u(t=0,x)=u_0 (x)$, one can compute these constants and solve an inverse, linear, auxilary problem to find $u$ for all times $t$. For example, for the cubic 1d NLS this is the Zakharov-Shabat equations, and for the KdV it is the linear, time-independant Schrodinger equation. </p> <p>The 2D cubic NLS, or almost every perturbation of the 1D case, e.g., $$iu_t +u_{xx} + |u|^2 u -\epsilon |u|^4u = 0 \, ,$$ is known to be <em>not</em> solvable using the inverse scattering transform, i.e., not integrable. I didn't find any reference that explains why, however.</p>
Phil Harmsworth
106,467
<p>The prolongation structure method developed by Wahlquist and Estabrook is one method to show whether or not a PDE is solvable via the inverse scattering transform. (There are others - refer to Y. Kosmann-Schwarzbach, B. Grammaticos, K. M. Tamizhmani (eds.), <em>Integrability of Nonlinear Systems</em>, Lect. Notes Phys. 638 Springer-Verlag 2004.)</p> <p>For example, in an appendix to their second paper (on the NLS equation - J. Math. Phys. 17 (1976) 1293-1297) Estabrook and Wahlquist analyse the KdV-like equation $u_t + u_{xxx} + f(u)\,u_x = 0$, concluding that it admits a non-trivial prolongation structure (i.e. is integrable via IST) only if $f(u) = 2\alpha + 6\beta u+12\gamma u^2$, for some constants $\alpha$, $\beta$, $\gamma$.</p> <p>Dodd and Fordy <em>"The prolongation structures of quasi-polynomial flows"</em> Proc Roy. Soc. Lond. A<strong>385</strong> (1983) 389-429 discuss methods for dealing with a general class of equations that includes your variants of the NLS. Applying their methods to your quoted example would show why it does not produce a non-trivial prolongation structure.</p> <p>As far as I can see from the literature, perturbations of the type you describe are discussed as perturbations of the IST solution of the unperturbed equation, e.g. V. I. Karpman <em>"Soliton Evolution in the Presence of Perturbation"</em> Physica Scripta 20 (1979) 462</p> <p>Edit: most, if not all, of the equations in two spatial dimensions - such as the Kadomtsev–Petviashvili equation - place heavy restrictions on soliton behaviour. They've sometimes been described as "one and a half dimensional" problems.</p>
1,890,047
<p>Consider two linear transformations $L_1, L_2: V \to W$.</p> <p>Fix a basis of $V$, $W$, and consider $M_1$, $M_2$, the matrices of the aforementioned transformations w.r.t said basis.</p> <p>Suppose you can obtain $M_2$ from swapping columns in $M_1$.</p> <p>How are $L_1$ and $L_2$ related? (Besides having the same image)</p>
paf
333,517
<p>If $M_2$ is obtained from $M_1$ by swapping the first two columns (for example), and if we denote by $(v_1,\dots,v_n)$ the basis of $V$, then we have $$L_2(v_1)=L_1(v_2)\text{ and }L_2(v_2)=L_1(v_1).$$</p> <p>This is because in the matrix of a linear map $L:V\to W$, the $j$th column contains the coordinates (in the basis of $W$ chosen) of $L(v_j)$, the image of the $j$th basis vector of $V$. Hence, the first equality I wrote above means that the first column of $M_2$ is the second column of $M_1$ and the second equality I wrote above means that the second column of $M_2$ is the first column of $M_1$.</p> <p><strong>Edit:</strong> If $\varphi$ is the permutation of the $v_i$'s (e.g. $\varphi(v_1)=v_2$, $\varphi(v_2)=v_1$ and $\varphi(v_i) = v_i\;\forall i\geq 3$ in the example above), then $$L_2 = L_1\circ \varphi.$$ Indeed, one can check that $\forall i \in\{1,\dots,n\},\;L_2(v_i) = L_1(\varphi(v_i))$. </p> <p>In fact, since the transpositions generate the symmetric group, this example is sufficient.</p>
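A tiny numeric sketch (added here; the matrix entries are arbitrary illustrations): swapping the first two columns of $M_1$ is the same as multiplying on the right by the permutation matrix $P$ that exchanges the first two basis vectors, i.e. $M_2 = M_1 P$, matching $L_2 = L_1\circ\varphi$:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# An arbitrary 2x3 matrix M1 of a linear map L1 : R^3 -> R^2.
M1 = [[1, 2, 3],
      [4, 5, 6]]

# Permutation matrix swapping the first two basis vectors of R^3.
P = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]

# M2 = matrix of L2 = L1 o phi: exactly M1 with its first two columns swapped.
M2 = matmul(M1, P)
assert M2 == [[2, 1, 3],
              [5, 4, 6]]
```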
102,963
<p>Could someone please explain the difference between the group of all icosahedral symmetries and S5? I know that the former is a direct product, but don't they work the same? Say I have an icosahedron; why wouldn't S5 work as a description of its symmetries? Thank you very much.</p> <p><strong>Added:</strong> When counting the symmetries of a platonic solid, in this case the icosahedron, does it include reflecting along a plane cutting through the solid, in a sort of turning-itself-inside-out reflection? I read that the symmetries counted should be "orientation-preserving". What does that mean?</p>
Thomas Andrews
7,933
<p>Hint: If $g$ is the symmetry that send each point to it's opposite point in the icosahedron, then show $\{1,g\}$ is a normal subgroup of the group of symmetries. Show that $S_5$ does not have any normal subgroup of order $2$.</p>
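The second part of the hint can be verified by brute force (an added illustration): a normal subgroup $\{1,g\}$ of order $2$ would force $g$ to be central, since $hgh^{-1}\in\{1,g\}$ and $hgh^{-1}\neq 1$, and $S_5$ has no non-identity central element:

```python
from itertools import permutations

S5 = list(permutations(range(5)))   # all 120 elements of S_5
identity = tuple(range(5))

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(5))

# If {1, g} were normal, h g h^{-1} would equal g for every h, i.e. g central.
central = [g for g in S5
           if g != identity and all(compose(h, g) == compose(g, h) for h in S5)]
assert central == []   # trivial center => no normal subgroup of order 2
```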
360,464
<p>Is there a publication containing this obvious fact: For any real <span class="math-container">$T&gt;0$</span>, any natural <span class="math-container">$n$</span>, any complex <span class="math-container">$c_1,\dots,c_n$</span>, and any distinct complex <span class="math-container">$z_1,\dots,z_n$</span> such that <span class="math-container">$\sum_1^n c_k e^{tz_k}=0$</span> for all <span class="math-container">$t\in[0,T)$</span>, we have <span class="math-container">$c_1=\dots=c_n=0$</span>? </p> <p>Somehow, I cannot find such a publication. </p>
Alexandre Eremenko
25,510
<p>Let <span class="math-container">$y_k(t)=e^{tz_k}$</span>. Proving by contradiction, suppose that they are linearly dependent, that is <span class="math-container">$$\sum_{k=1}^nc_ky_k\equiv 0.$$</span> Differentiating <span class="math-container">$n-1$</span> times we obtain a homogeneous system of linear equations with respect to <span class="math-container">$c_k$</span>. To have a non-trivial solution, this system must have vanishing determinant. The determinant is: <span class="math-container">$$\left|\begin{array}{cccc}y_1&amp;y_2&amp;\ldots&amp; y_n\\ y_1^\prime&amp; y_2^\prime&amp;\ldots&amp;y_n^\prime\\ \ldots&amp;\ldots&amp;\ldots&amp;\ldots\\ y_1^{(n-1)}&amp; y_2^{(n-1)}&amp;\ldots&amp; y_n^{(n-1)}\end{array}\right|=A(t) \left|\begin{array}{cccc}1&amp;1&amp;\ldots&amp;1\\ z_1&amp;z_2&amp;\ldots&amp; z_n\\ \ldots&amp;\ldots&amp;\ldots&amp;\ldots\\ z_1^{n-1}&amp;z_2^{n-1}&amp;\ldots&amp;z_n^{n-1}\end{array}\right|,$$</span> where <span class="math-container">$A(t)=e^{t(z_1+\ldots+z_n)}\neq 0$</span>. The determinant on the right-hand side is easy to compute. Consider it as a polynomial with respect to <span class="math-container">$z_n$</span>. It is evidently of degree <span class="math-container">$n-1$</span> and has <span class="math-container">$n-1$</span> roots at <span class="math-container">$z_1,\ldots,z_{n-1}$</span>. Therefore it is of the form <span class="math-container">$$C(z_1,\ldots,z_{n-1})(z_n-z_1)\ldots(z_n-z_{n-1}).$$</span> Looking at the top degree term, we conclude that <span class="math-container">$C$</span> is a similar polynomial. So by induction our determinant is <span class="math-container">$$\prod_{i&lt;k}(z_k-z_i).$$</span> This is never zero, since the <span class="math-container">$z_k$</span> are distinct.</p> <p>References. Polya Szego, Problems and theorems of analysis, vol II, Part 7, "Determinants and quadratic forms''. Computation of the Vandermonde determinant is problem 2. 
The Wronskian criterion of linear independence is problem 60.</p> <p>Remark. Vandermonde's determinant is computed in ANY undergraduate textbook of linear algebra, as a first example of a determinant. For example, I teach linear algebra with the textbook of Strang, and differential equations with the textbook of Boyce and DiPrima. Both of them have the Vandermonde determinant.</p> <p>Remark 2. Undergraduate textbooks are rarely freely available online. If you insist on a free online reference, you may refer to the proof above. </p>
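The Vandermonde evaluation used above is also easy to sanity-check numerically. The sketch below (my own addition, not from the references: a naive Leibniz-expansion determinant in exact rational arithmetic, with an arbitrary sample of distinct points) verifies that the determinant with rows $1, z, \ldots, z^{n-1}$ equals the product of pairwise differences $z_k - z_i$:

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Determinant by Leibniz expansion (fine for small matrices)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the permutation sign
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def vandermonde(zs):
    """Row i holds the i-th powers z_k**i, as in the proof above."""
    n = len(zs)
    return [[Fraction(z) ** i for z in zs] for i in range(n)]

zs = [2, 3, 5, 7]  # arbitrary distinct sample points
lhs = det(vandermonde(zs))
rhs = Fraction(1)
for i in range(len(zs)):
    for k in range(i + 1, len(zs)):
        rhs *= zs[k] - zs[i]
assert lhs == rhs and lhs != 0
```

Since every factor is non-zero when the points are distinct, the determinant is non-zero, matching the final step of the proof.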
360,464
<p>Is there a publication containing this obvious fact: For any real <span class="math-container">$T&gt;0$</span>, any natural <span class="math-container">$n$</span>, any complex <span class="math-container">$c_1,\dots,c_n$</span>, and any distinct complex <span class="math-container">$z_1,\dots,z_n$</span> such that <span class="math-container">$\sum_1^n c_k e^{tz_k}=0$</span> for all <span class="math-container">$t\in[0,T)$</span>, we have <span class="math-container">$c_1=\dots=c_n=0$</span>? </p> <p>Somehow, I cannot find such a publication. </p>
Todd Trimble
2,926
<p>I will recount the more general statement of linear independence of characters, given in Lang's Algebra book, and credited to Artin. Let <span class="math-container">$G$</span> be a group, and <span class="math-container">$K$</span> a field. Then distinct homomorphisms <span class="math-container">$\phi_1, \ldots, \phi_n: G \to K^\times$</span> are linearly independent. </p> <p><strong>Proof:</strong> Suppose not, and suppose we have a nontrivial linear relation </p> <p><span class="math-container">$$a_1 \phi_1 + \ldots + a_n \phi_n = 0,\qquad (1)$$</span> </p> <p>where <span class="math-container">$n$</span> is taken as small as possible. Clearly <span class="math-container">$n&gt;1$</span> and <span class="math-container">$a_i \neq 0$</span> for all <span class="math-container">$i$</span>. Because the <span class="math-container">$\phi_i$</span> are distinct, we can find an element <span class="math-container">$g \in G$</span> such that <span class="math-container">$\phi_1(g) \neq \phi_2(g)$</span>. We have </p> <p><span class="math-container">$$a_1 \phi_1(gh) + a_2 \phi_2(gh) + \ldots + a_n \phi_n(gh) = 0$$</span> </p> <p>for all <span class="math-container">$h \in G$</span>; by virtue of the <span class="math-container">$\phi_i$</span> being homomorphisms, this may be rewritten to say </p> <p><span class="math-container">$$a_1 \phi_1(g)\phi_1 + a_2 \phi_2(g)\phi_2 + \ldots + a_n \phi_n(g)\phi_n = 0, \qquad (2)$$</span></p> <p>Dividing <span class="math-container">$(2)$</span> by <span class="math-container">$\phi_1(g)$</span> and then subtracting (1) from the result, we arrive at a linear relation</p> <p><span class="math-container">$$\left(a_2 \frac{\phi_2(g)}{\phi_1(g)} - a_2\right) \phi_2 + \ldots = 0$$</span> </p> <p>which has fewer than <span class="math-container">$n$</span> summands and is nontrivial by choice of <span class="math-container">$g$</span>, contradiction. <span class="math-container">$\Box$</span></p>
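As a complementary numerical illustration (my own sketch, not from Lang): for a finite abelian group, independence of characters also follows from their orthogonality. The code below checks that the four characters of Z/4Z, chi_k(g) = exp(2*pi*i*k*g/4), are orthonormal; pairing a hypothetical relation sum_k c_k chi_k = 0 with chi_j would then recover c_j = 0.

```python
import cmath

def chi(k, g, n=4):
    """Character chi_k of Z/nZ evaluated at g."""
    return cmath.exp(2j * cmath.pi * k * g / n)

n = 4
# Orthonormality: (1/n) sum_g chi_k(g) * conj(chi_j(g)) = delta_{jk}.
# Any linear relation sum_k c_k chi_k = 0, paired with chi_j,
# therefore forces c_j = 0 for every j.
for j in range(n):
    for k in range(n):
        ip = sum(chi(k, g, n) * chi(j, g, n).conjugate() for g in range(n)) / n
        expected = 1 if j == k else 0
        assert abs(ip - expected) < 1e-12
```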
5,739
<p>Hi, I have recently got interested in multi-index (multi-dimensional) Dirichlet series, i.e. series of the form $F(s_1,...,s_k)=\sum_{(n_1,...,n_k)\in\mathbb{N}^k}\frac{a_{n_1,...,n_k}}{n_1^{s_1}...n_k^{s_k}}$. I found some papers suggesting that multi-index Dirichlet series are in fact a distinct subfield of analytic number theory in its own right. So I'm now looking for some 'basic' learning materials/books or similar on this subject.</p> <p>Any suggestions are greatly appreciated!</p> <p>efq</p> <p>PS: I believe I have already checked most books on multi-dimensional complex analysis/several complex variables.</p>
user3010
3,010
<p>I don't know about general multi-index Dirichlet series, but there is a good amount of theory on <em>multiple zeta-functions</em> (special cases of what you are asking for). There is plenty of material on this in MathSciNet.</p>
1,207,250
<p>I have to calculate $\lim_{x \to \infty}{x-x^2\ln(1+\frac{1}{x})}$. I rewrote it as $\lim_{x \to \infty}{\frac{x-x^3\ln^2(1+\frac{1}{x})}{1 + x\ln(1+\frac{1} {x})}}$ and tried to apply L'Hôpital's rule but it didn't work. How to end this?</p>
Prasun Biswas
215,900
<p>Make the substitution $t=\dfrac{1}{x}$. Then, $x\to\infty \implies t\to 0^+$.</p> <p>$$\lim_{x\to\infty}\left(x-x^2\ln\left(1+\frac{1}{x}\right)\right)\\ = \lim_{t\to 0}\left(\frac{1}{t}-\frac{1}{t^2}\ln(1+t)\right)\\= \lim_{t\to 0}\left(\frac{t-\ln(1+t)}{t^2}\right)$$</p> <p>This comes out as $\frac{0}{0}$ on direct plugging of values, so it's ready for some L'Hôpital bash.</p> <p>$$\lim_{t\to 0}\left(\dfrac{1-\frac{1}{1+t}}{2t}\right)$$</p> <p>Again, by direct plugging, we get $\frac{0}{0}$, so a second application of L'Hôpital works.</p> <p>$$\lim_{t\to 0}\left(\dfrac{0+\frac{1}{(1+t)^2}}{2}\right)=\lim_{t\to 0}\left(\frac{1}{2(1+t)^2}\right)=\boxed{\frac{1}{2}}$$</p>
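The boxed value can be double-checked numerically. A small sketch (using `log1p` to avoid catastrophic cancellation for large x; that numerical-stability choice is mine, not part of the answer):

```python
import math

def f(x):
    # x - x^2 * ln(1 + 1/x); log1p(1/x) computes ln(1 + 1/x) stably
    return x - x * x * math.log1p(1 / x)

# As x grows, f(x) should approach the limit 1/2.
for x in [10.0, 100.0, 1000.0]:
    print(x, f(x))

assert abs(f(1e6) - 0.5) < 1e-4
```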
1,207,250
<p>I have to calculate $\lim_{x \to \infty}{x-x^2\ln(1+\frac{1}{x})}$. I rewrote it as $\lim_{x \to \infty}{\frac{x-x^3\ln^2(1+\frac{1}{x})}{1 + x\ln(1+\frac{1} {x})}}$ and tried to apply L'Hôpital's rule but it didn't work. How to end this?</p>
tqviet
378,056
<p>The reason why L'Hôpital's rule didn't work is that the factor you introduced has a finite non-zero limit, i.e. $$ 1 + x \ln \left( 1 + \frac{1}{x} \right) = 1 + \ln \left( \left( 1 + \frac{1}{x} \right)^x \right) \to 1 + \ln \left( e \right)= 2 $$ when $x \to \infty$, so the resulting quotient is not an indeterminate form. Another approach, besides the one from Prasun Biswas, is to use the Taylor series of $\ln\left(1+\frac{1}{x}\right)$ in powers of $1/x$, valid for $x&gt;1$: \begin{align} x - x^2 \ln \left( 1 + \frac{1}{x} \right) &amp;= x - x^2 \left( \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{ k} \frac{1}{x^k} \right) = x - x^2 \left( \frac{1}{x} - \frac{1}{2x^2} + \sum_{k=3}^{\infty} \frac{(-1)^{k+1}}{k} \frac{1}{x^k} \right) \\ &amp;= \frac{1}{2} - \sum_{k=3}^{\infty} \frac{(-1)^{k+1}}{k} \frac{1}{x^{k-2}} \end{align} Letting $x \to \infty$, one ends up with the limit $\frac{1}{2}$.</p>
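A quick numerical check of this expansion (the truncation depth and sample point below are arbitrary choices; note that once the x^2 is distributed, the k >= 3 tail is subtracted from 1/2):

```python
import math

def f(x):
    # exact expression; log1p avoids cancellation in ln(1 + 1/x)
    return x - x * x * math.log1p(1 / x)

def series(x, terms=10):
    # 1/2 minus the tail sum_{k>=3} (-1)^(k+1) / (k * x^(k-2))
    s = 0.5
    for k in range(3, 3 + terms):
        s -= (-1) ** (k + 1) / (k * x ** (k - 2))
    return s

x = 10.0
assert abs(f(x) - series(x)) < 1e-9
```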
182,346
<p>Let's call a polygon $P$ <em>shrinkable</em> if any down-scaled (dilated) version of $P$ can be translated into $P$. For example, the following triangle is shrinkable (the original polygon is green, the dilated polygon is blue):</p> <p><img src="https://i.stack.imgur.com/M0LOu.png" alt="enter image description here"></p> <p>But the following U-shape is not shrinkable (the blue polygon cannot be translated into the green one):</p> <p><img src="https://i.stack.imgur.com/S30bD.png" alt="enter image description here"></p> <p>Formally, a compact $\ P\subseteq \mathbb R^n\ $ is called <em>shrinkable</em> iff:</p> <p>$$\forall_{\mu\in [0;1)}\ \exists_{q\in \mathbb R^n}\quad \mu\!\cdot\! P\, +\, q\ \subseteq\ P$$</p> <p>What is the largest class of shrinkable polygons?</p> <p>Currently I have the following sufficient condition: if $P$ is <a href="https://en.wikipedia.org/wiki/Star-shaped_polygon" rel="nofollow noreferrer">star-shaped</a> then it is shrinkable. </p> <p><em>Proof</em>: By definition of a star-shaped polygon, there exists a point $A\in P$ such that for every $B\in P$, the segment $AB$ is entirely contained in $P$. Now, for all $\mu\in [0;1)$, let $\ q := (1-\mu)\cdot A$. This effectively translates the dilated $P'$ such that $A'$ coincides with $A$. Now every point $B'\in P'$ lies on the segment between $A$ and the corresponding $B\in P$, and is hence contained in $P$.</p> <p><img src="https://i.stack.imgur.com/sdPRw.png" alt="enter image description here"></p> <p>My questions are:</p> <p>A. Is the condition of being star-shaped also necessary for shrinkability?</p> <p>B. Alternatively, what other condition on $P$ is necessary?</p>
Stewart Hinsley
76,354
<p>If I understand the question correctly the requirement is for a figure, F, such that there exists a translation T(c) for all contractions c, such that cF+T(c) lies within F. It seems to me that that criterion holds for monoconvex hexagons (chevrons) and biconvex hexagons (hourglasses), which are not stars.</p>
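The star-shaped sufficiency argument from the question is also easy to exercise numerically on a concrete example. The sketch below (my own choices: a convex pentagon with an arbitrary interior kernel point; convex polygons are star-shaped, so this only tests the easy case) applies the translation q = (1 - mu) * A from the proof and tests containment with half-plane inequalities:

```python
def inside_convex(poly, p, eps=1e-12):
    """True if p lies in the convex polygon poly (counter-clockwise vertices)."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # point must be on the left of every directed edge
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        if cross < -eps:
            return False
    return True

poly = [(0.0, 0.0), (4.0, 0.0), (5.0, 3.0), (2.0, 5.0), (-1.0, 2.0)]  # CCW
A = (2.0, 2.0)  # an interior (kernel) point

for mu in [0.0, 0.3, 0.7, 0.99]:
    q = ((1 - mu) * A[0], (1 - mu) * A[1])  # translation from the proof
    shrunk = [(mu * x + q[0], mu * y + q[1]) for x, y in poly]
    # for a convex polygon, vertex containment implies containment
    assert all(inside_convex(poly, v) for v in shrunk)
```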
3,062,701
<p>I want to solve this system by the Least Squares method:<span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix} $$</span> This symmetric matrix is singular with eigenvalue <span class="math-container">$\lambda_1 = 0$</span>, so <span class="math-container">$\ A^t\cdot A$</span> is also singular and for this reason I cannot use the normal equation: <span class="math-container">$\hat x = (A^t\cdot A)^{-1}\cdot A^t\cdot b $</span>. So I applied Gauss-Jordan elimination to the augmented matrix to arrive at <span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 0 &amp; 1 &amp; 2 \\\ 0 &amp; 0 &amp; 0 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\3\\-1\end{pmatrix} $$</span> Finally I solved the <span class="math-container">$\ 2\times 2$</span> system: <span class="math-container">$$\begin{pmatrix}1 &amp; 2\\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}1\\3\end{pmatrix} $$</span> taking into account that the best <span class="math-container">$\ \hat b\ $</span> is <span class="math-container">$\begin{pmatrix}1\\3\\0\end{pmatrix}$</span></p> <p>The solution is then <span class="math-container">$\ \hat x = \begin{pmatrix}-5\\3\\0\end{pmatrix}$</span></p> <p>Is this approach correct?
</p> <p><strong>EDIT</strong></p> <p>Based on the book 'Linear Algebra and Its Applications' by David Lay, I also include the Least Squares method he proposes: <span class="math-container">$(A^tA)\hat x=A^t b $</span></p> <p><span class="math-container">$$A^t b =\begin{pmatrix}5\\9\\13\end{pmatrix}, A^tA = \begin{pmatrix}14 &amp; 20 &amp; 26 \\ 20 &amp; 29 &amp; 38 \\ 26 &amp; 38 &amp; 50\end{pmatrix}$$</span> The reduced echelon form of the augmented matrix is: <span class="math-container">$$ \begin{pmatrix}14 &amp; 20 &amp; 26 &amp; 5 \\ 20 &amp; 29 &amp; 38 &amp; 9 \\ 26 &amp; 38 &amp; 50 &amp; 13 \end{pmatrix} \sim \begin{pmatrix}1 &amp; 0 &amp; -1 &amp; -\frac{35}{6} \\ 0 &amp; 1 &amp; 2 &amp; \frac{13}{3} \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{pmatrix} \Rightarrow \hat x = \begin{pmatrix}-\frac{35}{6} \\ \frac{13}{3} \\ 0 \end{pmatrix}$$</span> taking the free variable <span class="math-container">$z=\alpha$</span> with <span class="math-container">$\alpha=0$</span> </p>
random
513,275
<p>Since the matrix has the eigenvector <span class="math-container">$\pmatrix{1&amp;-2&amp;1\cr}^t$</span> with eigenvalue <span class="math-container">$0$</span>, one has</p> <p><span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x+t\\y-2t\\z+t\end{pmatrix}$$</span></p> <p>for all <span class="math-container">$t$</span>, so there is a least squares solution with <span class="math-container">$z=0$</span>, which makes it a least squares solution for</p> <p><span class="math-container">$$\begin{pmatrix}1 &amp; 2 &amp; 3\\\ 2 &amp; 3 &amp; 4 \\\ 3 &amp; 4 &amp; 5 \end{pmatrix}\begin{pmatrix}x\\y\\0\end{pmatrix} \approx\begin{pmatrix}1\\5\\-2\end{pmatrix} \text{ or, equivalently, } \begin{pmatrix}1 &amp; 2 \\\ 2 &amp; 3 \\\ 3 &amp; 4 \end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} \approx\begin{pmatrix}1\\5\\-2\end{pmatrix}$$</span></p> <p>That makes it a regular least squares problem with solution <span class="math-container">$\pmatrix{x&amp;y\cr}^t=\pmatrix{-35/6&amp;13/3\cr}^t$</span>, so the solutions for the original problem are <span class="math-container">$$\begin{pmatrix}-\frac{35}6 +t\\\frac{13}3-2t\\t\end{pmatrix}$$</span></p>
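The solution with t = 0 can be confirmed in exact arithmetic: a vector is a least squares solution precisely when it satisfies the normal equations A^T A x = A^T b (equivalently, when the residual b - A x is orthogonal to the columns of A). A short sketch with Python fractions:

```python
from fractions import Fraction as F

A = [[F(1), F(2), F(3)],
     [F(2), F(3), F(4)],
     [F(3), F(4), F(5)]]
b = [F(1), F(5), F(-2)]
x = [F(-35, 6), F(13, 3), F(0)]  # candidate least squares solution

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

At = transpose(A)
# normal equations: A^T A x = A^T b
lhs = matvec(At, matvec(A, x))
rhs = matvec(At, b)
assert lhs == rhs
```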
1,515,776
<p>How can I solve something like this?</p> <p>$$3^x+4^x=7^x$$</p> <p>I know that $x=1$, but I don't know how to find it. Thank you!</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>As for $0&lt;a&lt;1,$ $$a^m&gt;a^n$$ if $m&lt;n,$</p> <p>$$\left(\dfrac37\right)^m+\left(\dfrac47\right)^m&gt;\left(\dfrac37\right)^n+\left(\dfrac47\right)^n$$</p> <p>if $m&lt;n$</p>
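A numerical illustration of the hint (the sample grid, bracket, and iteration count below are arbitrary choices): f(x) = (3/7)^x + (4/7)^x is strictly decreasing with f(1) = 1, so x = 1 is the unique solution of the original equation.

```python
def f(x):
    # (3/7)^x + (4/7)^x - 1; a root of f solves 3^x + 4^x = 7^x
    return (3 / 7) ** x + (4 / 7) ** x - 1

# strictly decreasing on a sample grid
xs = [i / 10 for i in range(-20, 21)]
vals = [f(x) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))

# bisection on [0, 2] converges to the root x = 1
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:   # f decreasing: positive value means mid is left of the root
        lo = mid
    else:
        hi = mid
assert abs((lo + hi) / 2 - 1.0) < 1e-9
```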