578,961
<p>Let $\mathbf{T}=[\mathbf{t}_1,\dots,\mathbf{t}_d]$ be an $m\times d$ matrix whose columns $\mathbf{t}_i$ are linearly independent, and assume $d&lt;\min(m,n)$. Let $\mathbf{H}$ be an $n\times m$ matrix and let $\mathbf{W}$ be an $n \times n$ positive definite matrix. For $i=1,\dots,d$, define the matrices \begin{align} \mathbf{B}_i&amp;=\mathbf{H}\mathbf{t}_i\mathbf{t}_i^H\mathbf{H}^H \\ \mathbf{C}_i&amp;=\mathbf{W}+\sum_{k\neq i}\mathbf{H}\mathbf{t}_k\mathbf{t}_k^H\mathbf{H}^H \\ \mathbf{D}_i&amp;=\mathbf{C}_i^{-1/2}\mathbf{B}_i\mathbf{C}_i^{-1/2} \end{align} Here $\mathbf{A}^H$, $\mathbf{A}^{-1}$, and $\mathbf{A}^{1/2}$ denote the Hermitian (conjugate) transpose, the inverse, and the Cholesky (or square) root of a matrix $\mathbf{A}$, respectively.</p> <p>Each $\mathbf{D}_i$ is a rank-one matrix, since each $\mathbf{B}_i$ is rank one, so it has exactly one non-zero eigenvalue. Let the non-zero eigenvalue of $\mathbf{D}_i$ be $\alpha_i$, for $i\in\{1,\dots,d\}$.</p> <p><strong>CLAIM:</strong> $\alpha_1,\dots,\alpha_d$ are also the eigenvalues of $\mathbf{T}^H\mathbf{H}^H\mathbf{W}^{-1}\mathbf{H}\mathbf{T}$.</p> <p>Is this true? If so, how do I prove it? I am finding it really difficult.</p>
adam W
43,193
<p>For the matrices of interest I can show a shared null space of their similar matrices. This may not be helpful, but I hope it is, and I did the work so I thought I would show it.</p> <p>Let $\mathbf{M} = \mathbf{H}\mathbf{T}$. In matrix notation, writing $\mathbf{e}_i$ for the $i$-th standard basis column (so $\mathbf{e}_1^H = \pmatrix{1 &amp; 0 &amp; 0 &amp; \cdots}$), your equations are $$\mathbf{B}_i = \mathbf{M}\mathbf{e}_i\mathbf{e}_i^H\mathbf{M}^H$$ and $$\mathbf{C}_i = \mathbf{W} +\mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H$$ The first matrix satisfies the similarity $$\mathbf{D}_i \sim \mathbf{B}_i\mathbf{C}_i^{-1}\tag{1}$$</p> <p>The second matrix satisfies the similarity $$\mathbf{M}^H\mathbf{W}^{-1}\mathbf{M}\sim \mathbf{M}\mathbf{M}^H\mathbf{W}^{-1}\tag{2}$$</p> <p>Let $\mathbf{V}$ be the right pseudo-inverse of $\mathbf{M}^H$, so that $\mathbf{M}^H\mathbf{V} = \mathbf{I}$. The two matrices have the same right null space: $$\left(\mathbf{B}_i\mathbf{C}_i^{-1}\right)\left[\mathbf{W}\mathbf{V}\mathbf{M}^H-\mathbf{W}\right] = 0 \tag{A}$$ and $$\left(\mathbf{M}\mathbf{M}^H\mathbf{W}^{-1}\right)\left[\mathbf{W}\mathbf{V}\mathbf{M}^H-\mathbf{W}\right] = 0 \tag{B}$$ which is a little surprising to me, given the way $\mathbf{C}_i$ is built from $\mathbf{W}$ and that (A) is true for each $i$. (B) is relatively obvious, so I will show (A). 
(A) is true if and only if $$\mathbf{B}_i\mathbf{C}_i^{-1}\mathbf{W}\mathbf{V}\mathbf{M}^H = \mathbf{B}_i\mathbf{C}_i^{-1}\mathbf{W} $$ Using from the definition of $\mathbf{C}_i$ that $$\mathbf{W} = \mathbf{C}_i - \mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H \tag{W}$$ we have $$\mathbf{C}_i^{-1}\mathbf{W} = \mathbf{C}_i^{-1}\left[\mathbf{C}_i - \mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H\right]$$ $$= \mathbf{I} - \mathbf{C}_i^{-1}\mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H$$ and from $\mathbf{V}$ a right inverse of $\mathbf{M}^H$ $$\mathbf{C}_i^{-1}\mathbf{W}\mathbf{V}\mathbf{M}^H = \mathbf{V}\mathbf{M}^H - \mathbf{C}_i^{-1}\mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H$$ and therefore $$\mathbf{B}_i\mathbf{C}_i^{-1}\mathbf{W}\mathbf{V}\mathbf{M}^H = \underbrace{\overbrace{\mathbf{B}_i\mathbf{V}}^{\mathbf{M}\mathbf{e}_i\mathbf{e}_i^H}\mathbf{M}^H}_{\mathbf{B}_i} - \mathbf{B}_i\mathbf{C}_i^{-1}\mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H$$ $$=\mathbf{B}_i\left[\mathbf{I} - \mathbf{C}_i^{-1}\mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H\right]$$ $$=\mathbf{B}_i\mathbf{C}_i^{-1}\left[\mathbf{C}_i - \mathbf{M}\left(\mathbf{I} - \mathbf{e}_i\mathbf{e}_i^H\right)\mathbf{M}^H\right]$$ $$=\mathbf{B}_i\mathbf{C}_i^{-1}\mathbf{W}$$ where the last step uses the equation (W) for the equality on $\mathbf{W}$</p>
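One small remark (my addition, not part of the answer above): since each $\mathbf{D}_i$ has rank one, its single non-zero eigenvalue equals its trace, and the cyclic property of the trace makes that eigenvalue explicit:

```latex
\alpha_i
= \operatorname{tr}\!\left(\mathbf{C}_i^{-1/2}\mathbf{B}_i\mathbf{C}_i^{-1/2}\right)
= \operatorname{tr}\!\left(\mathbf{B}_i\mathbf{C}_i^{-1}\right)
= \mathbf{t}_i^H\mathbf{H}^H\mathbf{C}_i^{-1}\mathbf{H}\mathbf{t}_i .
```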
252,870
<p>Given a polynomial, let's say for example <span class="math-container">$f(x,y) = (1+x+y)^2 = 1+2x+x^2+2y+2xy+y^2$</span>, I'd like to be able to order the terms of the polynomial by total degree, in either increasing or decreasing order (and if alphabetical order can be taken into account within terms of the same total degree, that would be great, but it's not necessary).</p> <p>I'd like a function to take in <span class="math-container">$1+2x+x^2+2y+2xy+y^2$</span> and return <span class="math-container">$(1) + (2x + 2y) + (x^2 + 2xy + y^2)$</span>, or the reverse order (not necessarily with parentheses, but that would be nice to work with).</p> <p>I've tried various commands using Collect[] and MonomialList[], and while MonomialList[f(x,y),{x,y},&quot;DegreeLexicographic&quot;] gives a list of the terms in the order I want, I would like the full expression.</p>
kglr
125
<pre><code>hypatia = {&quot;HIP87382&quot;, &quot;2MASS19290895+4311502&quot;, &quot;HIP98314&quot;, &quot;HIP98316&quot;, &quot;HIP106931&quot;}; </code></pre> <p>A random list of names:</p> <pre><code>SeedRandom[1] names = Table[RandomSample[#, 5] &amp; @ Union[{&quot;2259072226846681494447849441045193405&quot;, &quot;HIP87382&quot;, &quot;HD162826&quot;, &quot;BD+4003225B+4003225&quot;, &quot;2MASS17511402+4004208&quot;, &quot;GaiaDR21344497769227698432&quot;, &quot;TYC3093-01946-1&quot;, &quot;HR06669&quot;, &quot;GaiaDR11344497764930721536&quot;}, hypatia], 10]; Column @ names </code></pre> <p><a href="https://i.stack.imgur.com/LVuuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LVuuz.png" alt="enter image description here" /></a></p> <p><strong>1.</strong> Using <a href="https://reference.wolfram.com/language/ref/Outer.html" rel="nofollow noreferrer"><code>Outer</code></a>:</p> <pre><code>poslist = Join @@ Outer[ If[MemberQ[names[[#]], hypatia[[#2]]], {#, #2}, Nothing] &amp;, Range @ Length @ names, Range @ Length @ hypatia] </code></pre> <blockquote> <pre><code>{{1, 4}, {1, 5}, {2, 1}, {3, 1}, {3, 3}, {4, 2}, {5, 2}, {7, 2}, {8, 3}, {9, 2}, {9, 3}} </code></pre> </blockquote> <p><strong>2.</strong> Using <a href="https://reference.wolfram.com/language/ref/Tuples.html" rel="nofollow noreferrer"><code>Tuples</code></a>:</p> <pre><code>posF = If[MemberQ[names[[#]], hypatia[[#2]]], {#, #2}, Nothing] &amp;; poslist2 = posF @@@ Tuples[{Range @ Length @ names, Range @ Length @ hypatia}] </code></pre> <blockquote> <pre><code> {{1, 4}, {1, 5}, {2, 1}, {3, 1}, {3, 3}, {4, 2}, {5, 2}, {7, 2}, {8, 3}, {9, 2}, {9, 3}} </code></pre> </blockquote> <p><strong>3.</strong> Using <a href="https://reference.wolfram.com/language/ref/MapIndexed.html" rel="nofollow noreferrer"><code>MapIndexed</code></a> + <a href="https://reference.wolfram.com/language/ref/PositionIndex.html" rel="nofollow noreferrer"><code>PositionIndex</code></a>:</p> <pre><code>poslist3 = 
Join @@ MapIndexed[Join @@ (Thread[{#2[[1]], PositionIndex[hypatia]@#}] /. {_, _Missing} :&gt; Nothing) &amp;, names, {2}] </code></pre> <blockquote> <pre><code> {{1, 5}, {1, 4}, {2, 1}, {3, 1}, {3, 3}, {4, 2}, {5, 2}, {7, 2}, {8, 3}, {9, 2}, {9, 3}} </code></pre> </blockquote>
542,808
<p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational; thus if I could prove that 1 + the number in question is irrational, then it stood to reason that I was also proving that the number itself is irrational.</p> <p>E.g. $\sqrt2 + 1$ can be expressed as a continued fraction, and by looking at the fraction, one can see that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p> <p>My professor said this is not always true, but I can't think of an example that suggests this.</p> <p>If $x+1$ is irrational, is $x$ always irrational?</p> <p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
DKal
83,540
<p>In fact, for any rational number $r$ it is true that the irrationality of $x+r$ implies the irrationality of $x$. This is due to the fact that the rationals are closed under addition. Assume that $x+r$ is irrational and, for contradiction, that $x$ is rational. By the fact that the rationals are closed under addition ($\mathbb{Q}$ is a field), you get that $x+r$ is rational, a contradiction.</p>
542,808
<p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational; thus if I could prove that 1 + the number in question is irrational, then it stood to reason that I was also proving that the number itself is irrational.</p> <p>E.g. $\sqrt2 + 1$ can be expressed as a continued fraction, and by looking at the fraction, one can see that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p> <p>My professor said this is not always true, but I can't think of an example that suggests this.</p> <p>If $x+1$ is irrational, is $x$ always irrational?</p> <p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
ncmathsadist
4,154
<p>The rational numbers are closed under addition and subtraction. Let $w$ be any irrational number and $r$ be a rational number. Since $$ (r + w) - r = w$$ and $w$ is irrational, at least one of the two terms in the difference on the left must be irrational (a difference of two rationals is rational). Since $r$ is rational, the irrational one must be $r + w$.</p>
3,754,819
<p>Evaluate the integral: <span class="math-container">$$\int_{1}^{\sqrt{2}} \frac{x^4}{(x^2-1)^2+1}\,dx$$</span></p> <p>The denominator is irreducible over the reals, so if I want to factorize and use partial fractions, it has to be done over the complex numbers, and then as an indefinite integral we get <span class="math-container">$$x + \frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 - i}}\right)}{\sqrt{-1 - i}} + \frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 + i}}\right)}{\sqrt{-1 + i}}+C$$</span></p> <p>But evaluating this from <span class="math-container">$1$</span> to <span class="math-container">$\sqrt{2}$</span> is another mess, keeping the principal values in mind. I also tried the substitution <span class="math-container">$x \mapsto \sqrt{x+1}$</span>, after which the integral becomes</p> <p><span class="math-container">$$\frac{1}{2}\int_{0}^1 \frac{(x+1)^{3/2}}{x^2+1}\,dx$$</span></p> <p>I don't see where I can go from here. The substitution <span class="math-container">$x\mapsto \tan x$</span> also leads me nowhere.</p> <p>Should I approach the problem in some other way?</p>
Quanto
686,284
<p>Note <span class="math-container">\begin{align} I=&amp;\int_{1}^{\sqrt{2}} \frac{x^4}{(x^2-1)^2+1}\,dx\\ = &amp;\int_{1}^{\sqrt{2}} \left(1+\frac{2x^2-2}{x^4-2x^2+2}\right)\,dx\\ = &amp;\sqrt2-1+\int_{1}^{\sqrt{2}} \frac{2-\frac2{x^2}}{x^2+\frac2{x^2}-2}dx\\ =&amp; \sqrt2-1 + (1+\frac1{\sqrt2})I_1 + (1-\frac1{\sqrt2})I_2\tag1\\ \end{align}</span></p> <p>where</p> <p><span class="math-container">\begin{align} I_1= \int_{1}^{\sqrt{2}} \frac{1-\frac{\sqrt2}{x^2}}{x^2+\frac2{x^2}-2}dx &amp;=\int_{1}^{\sqrt{2}} \frac{d(x+\frac{\sqrt2}{x})}{(x+\frac{\sqrt2}x)^2-2(1+\sqrt2)}=0 \\ I_2= \int_{1}^{\sqrt{2}} \frac{1+\frac{\sqrt2}{x^2}}{x^2+\frac2{x^2}-2}dx &amp;=\int_{1}^{\sqrt{2}} \frac{d(x-\frac{\sqrt2}{x})}{(x-\frac{\sqrt2}x)^2+2(\sqrt2-1)}\\ &amp;=\sqrt{\frac2{\sqrt2-1}} \tan^{-1}\sqrt{\frac{\sqrt2-1}2} \end{align}</span></p> <p>(Note <span class="math-container">$I_1=0$</span> because <span class="math-container">$x+\frac{\sqrt2}{x}$</span> takes the same value <span class="math-container">$1+\sqrt2$</span> at both endpoints.) Plug <span class="math-container">$I_1$</span> and <span class="math-container">$I_2$</span> into (1) to obtain</p> <p><span class="math-container">$$I = \sqrt2-1 + \sqrt{\sqrt2-1}\tan^{-1}\sqrt{\frac{\sqrt2-1}2} $$</span></p>
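As a sanity check (my addition, not part of the answer), one can compare a composite Simpson's rule approximation of the integral against the closed form above:

```python
# Numerical sanity check: Simpson's rule for the integral on [1, sqrt(2)]
# versus the closed form  sqrt(2)-1 + sqrt(sqrt(2)-1) * arctan(sqrt((sqrt(2)-1)/2)).
import math

def f(x):
    return x**4 / ((x**2 - 1)**2 + 1)

# Composite Simpson's rule with n (even) subintervals.
a, b, n = 1.0, math.sqrt(2), 1000
h = (b - a) / n
s = f(a) + f(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * f(a + k * h)
numeric = s * h / 3

r = math.sqrt(2) - 1  # the quantity sqrt(2)-1 appearing throughout the closed form
closed = r + math.sqrt(r) * math.atan(math.sqrt(r / 2))

assert abs(numeric - closed) < 1e-9
print(numeric, closed)
```

Both values agree to well within Simpson's rule's error bound, which supports the derivation.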
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third-year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing so.</p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and to do all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one about Japanese chalk, once discussed here, but as using slides in the classroom is becoming more and more common in many other fields, I think it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Toby Bartels
8,508
<p>Like Terry Tao, I find the transience of slides to be a problem. This is one reason why I stopped using slides as such and began using a single continuous-scroll page for each topic. I lecture from the bottom of the page, so students who are behind can still see the top. (I'm also one of those people who mixes the projector and the board, with bullet points and formulas on the projector and worked-out examples on the board, so I don't scroll down the page very quickly. Fortunately I work in a facility where the lighting allows this.)</p>
424,675
<p>Just one simple question:</p> <p>Let $\tau =(56789)(3456)(234)(12)$.</p> <p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p> <p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
Shuhao Cao
7,200
<p>Here is an old scicomp.SE question that answers part of your question: <a href="https://scicomp.stackexchange.com/questions/290/what-are-criteria-to-choose-between-finite-differences-and-finite-elements">What are criteria to choose between finite-differences and finite-elements</a>?</p> <p>In my humble opinion, FEM is the most flexible one in terms of dealing with complex geometry and complicated boundary conditions. FEM also allows adaptive/local procedures to get higher-order local approximation or to battle singularities. FEM's basis functions can be discontinuous and not well-defined pointwise, which is a nice heritage from the Hilbert space framework. For computational fluid dynamics and electromagnetism, FEM is the way to incorporate the intrinsic geometrical properties of the solutions.</p> <p>For FVM: partly you can refer to my answer here: <a href="https://math.stackexchange.com/questions/327569/how-should-a-numerical-solver-treat-conserved-quantities/372087">How should a numerical solver treat conserved quantities?</a> It is also worth noting that FVM typically has a lower order of approximation.</p> <p>Some recent developments in FEM address the problems I mentioned in the answer above. For example, for convection-dominated PDEs, the traditional continuous Galerkin framework for FEM doesn't work well: it introduces dissipation over time and oscillation across material layers in the numerical solution. Now there are Discontinuous Galerkin FEM (a higher-order FVM) and hybridized DG-FEM (see here: <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.134.6414" rel="nofollow noreferrer">Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems</a>) to remedy these two effects.</p> <p>FDM and FVM are easy to implement, but the price of this convenience of implementation is limited applicability across different PDEs.</p>
424,675
<p>Just one simple question:</p> <p>Let $\tau =(56789)(3456)(234)(12)$.</p> <p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p> <p>The first step is to write it as disjoint cycles, I guess. What's next? :)</p>
AGN
252,682
<p><strong>FDM</strong></p> <p>FDM is created from the basic definition of differentiation, that is $$ \frac{df}{dx}=\frac{f(x+h)-f(x)}{h}$$ where $h$ tends to zero.</p> <p>In numerical analysis it is not possible to divide a number by $0$, so "zero" here means a small number. So FDM is similar to differential calculus, but it has killed the heart of it, namely the limit tending to zero. In most cases the accuracy of FDM therefore increases as the grid is refined. It is an easy method, but not reliable for conservative differential equations or solutions with shocks. It is tough to implement in complex geometry, where it needs a complex mapping, and the mapping makes the governing equation even tougher. Extending it to higher-order accuracy is very simple.</p> <p><strong>FEM:</strong></p> <p>FEM is a numerical tool borrowed from the calculus of variations. There are many types of FEM, such as the point collocation method, the sub-domain method, etc. One assumes a trial function and multiplies it by a weighting function; in Galerkin's method the trial function itself is the weighting function, while different methods follow different ways of weighting. The product of the weighting and trial functions is then integrated over the control volume (the weak form) and equated to zero (the procedure differs between types of FEM, but the theme is the same). This yields a set of algebraic equations, and solving it gives the solution. Since we work only with the error and the differential equation, the conservation law may sometimes be violated. This method is more accurate than FVM and FDM. It is ideal for linear PDEs, but expensive and complex for non-linear PDEs. Higher-order accuracy is achieved by using higher-order bases (i.e. shape functions); extending to higher-order accuracy is relatively more complex than in FVM and FDM, and higher-order accurate calculations are expensive in computation and mathematical formulation, especially for non-linear PDEs. It is mostly suitable for heat transfer, structural mechanics, vibrational analysis, etc.</p> <p><strong>FVM:</strong> This is similar to FDM, with a difference: it does not kill the theme of differentiation, because we integrate the differential equation over a control volume while discretizing the domain. Since we have integrated the differential equation, the discretization is mathematically valid. FVM can be loosely viewed as FEM with weighting function equal to 1. Here fluxes are integrated and the resultant is set to zero, so flux is conserved. It can handle almost any PDE and complex domains. Interpolation from face to centre reduces the accuracy of this process. Accuracy is based on the order of the polynomial used. FVM can also produce numerical solutions of any order of accuracy, similar to FDM, but more expensively than FDM; aeroacoustic problems use FVM with schemes of about $11^{th}$ order, and such schemes are rarely used even in DNS and LES. It is ideal for fluid mechanics.</p>
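The claim above that FDM accuracy increases as the grid is refined can be illustrated with a tiny sketch (my addition, in Python for convenience; the function and the step sizes are arbitrary choices):

```python
# Forward difference (f(x+h) - f(x)) / h approximates f'(x); refining the
# "grid" (shrinking h) reduces the error, here for f = sin at x = 1.
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # d/dx sin(x) = cos(x)
errors = [abs(forward_diff(math.sin, x, h) - exact) for h in (0.1, 0.01, 0.001)]

# Each tenfold refinement of h shrinks the error (roughly tenfold: first order).
assert errors[0] > errors[1] > errors[2]
print(errors)
```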
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
Stefan Hansen
25,632
<p>A function is often called càdlàg if it is right-continuous and admits left limits. The term comes from the French <em>continue à droite, limite à gauche</em> ('continuous on the right, limit on the left').</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
citedcorpse
52,216
<p>In homology one has a sequence of "differentials". Their images are usually denoted $B(X)$, apparently from the German word for "images" (<em>Bilder</em>), and their kernels $Z(X)$ from the German word for "cycles" (<em>Zyklen</em>).</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
Przemysław Scherwentke
72,361
<p>In old Polish textbooks for secondary school, complex numbers were denoted $\bf Z$, from <em>zespolone</em> ('complex'), and integers were denoted $\bf C$, from <em>całkowite</em> ('integer').</p>
1,516,363
<p>If $f: (0,+ \infty) \rightarrow \mathbb{R}$ is continuous and $$f(x + y) = f(x) + f(y) ,$$ is $f$ then linear?</p> <p>I saw somewhere that if $f$ is continuous, one can drop the condition $f(ax)= a f(x), \forall a, x \in \mathbb{R}$. Is this true?</p>
ncmathsadist
4,154
<p>Notice that for positive integers $p$ and $q$ it is easy to show that $$ f(p/q) = \frac{p}{q} \cdot f(1).$$ Extension by density, using the continuity of $f$, does the rest.</p>
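The omitted step can be sketched as follows (a standard argument, filled in here for completeness): additivity gives $f(nx)=nf(x)$ for every positive integer $n$ by induction, hence

```latex
q\,f\!\left(\frac{p}{q}\right) = f\!\left(q\cdot\frac{p}{q}\right) = f(p) = p\,f(1)
\quad\Longrightarrow\quad
f\!\left(\frac{p}{q}\right) = \frac{p}{q}\,f(1).
```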
4,498,203
<p>We know that each row (and each column) of the composition table of a finite group is a rearrangement (permutation) of the elements of the group.</p> <p>How about the other way round? If we have a composition table where each row and each column is a permutation of the elements of a set, does this composition table necessarily define a group?</p> <p>If not, give a counterexample.</p>
Michael Kinyon
444,012
<p>For an example with an identity element and inverses, consider <span class="math-container">$$\begin{array}{c|ccccc} \ast &amp; e &amp; a &amp; b &amp; c &amp; d\\ \hline e &amp; e &amp; a &amp; b &amp; c &amp; d \\ a &amp; a &amp; e &amp; c &amp; d &amp; b \\ b &amp; b &amp; d &amp; e &amp; a &amp; c \\ c &amp; c &amp; b &amp; d &amp; e &amp; a \\ d &amp; d &amp; c &amp; a &amp; b &amp; e \end{array}.$$</span> It is easy to find triples that do not associate, but the nonassociativity can also be seen from the fact that this loop (quasigroup with identity element) has order 5 but every element has order 2, hence Lagrange's theorem does not hold.</p>
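For a concrete non-associating triple (my addition, reading products from the table with the row element acting first):

```latex
(a \ast b) \ast d = c \ast d = a,
\qquad
a \ast (b \ast d) = a \ast c = d,
\qquad a \neq d .
```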
2,573,572
<p>Here is the expression to take the derivative of. $$C = \frac{1}{2}\sum_j (y_j - a_j^L)^2$$</p> <p>Here is the result. $$\frac{\partial C}{\partial a_j^L} = 2(a_j^L-y_j)$$</p> <p>Multiplying by 2, then again by the derivative of the inside (-1) seems reasonable, but what happened to the summation?</p>
Matthew Leingang
2,785
<p>Let's save the index $j$ for the derivative and write $$ C = \frac{1}{2} \sum_{i=1}^n (y_i - a_i^L)^2 $$ Therefore by the sum rule and chain rule, $$ \frac{\partial C}{\partial a_j^L} = \frac{1}{2}\sum_{i=1}^n 2(y_i - a_i^L)(-1)\frac{\partial a_i^L}{\partial a_j^L} = \sum_{i=1}^n (a_i^L - y_i)\frac{\partial a_i^L}{\partial a_j^L} $$ Assuming the variables $\left(a_1^L,\dots,a_n^L\right)$ are independent, the derivative of any one of them with respect to any other one of them is zero. But the derivative of each of them with respect to itself is one. In other words, $$ \frac{\partial a_i^L}{\partial a_j^L} = \delta_{ij} = \begin{cases} 1 &amp; i = j \\ 0 &amp; i \neq j \end{cases} $$ The symbol $\delta_{ij}$ is called the <strong><a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker delta</a></strong>.</p> <p>Returning to the summation, we see that each term is multiplied by $0$ except in the case $i=j$. So the only surviving term is that one: $$ \frac{\partial C}{\partial a_j^L} = a_j^L-y_j $$ (Note that the factor $2$ from the square cancels the $\frac{1}{2}$ in front of the sum; the quoted result $2(a_j^L-y_j)$ arises when the cost is written without the $\frac{1}{2}$.)</p>
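A quick finite-difference check (my sketch, not part of the original answer) makes the surviving term and the constant factor explicit: with the $\frac{1}{2}$ in front of the sum the gradient matches $a_j^L - y_j$, while dropping the $\frac{1}{2}$ gives the quoted $2(a_j^L - y_j)$:

```python
# Numerically differentiate C = (1/2) sum_i (y_i - a_i)^2 with respect to a_j.
y = [1.0, 2.0, 3.0]
a = [0.5, 2.5, -1.0]

def C(a_vec, half=True):
    s = sum((yi - ai) ** 2 for yi, ai in zip(y, a_vec))
    return 0.5 * s if half else s

eps, j = 1e-6, 1
bumped = a[:]
bumped[j] += eps
grad_half = (C(bumped) - C(a)) / eps                   # ~ a_j - y_j
grad_nohalf = (C(bumped, False) - C(a, False)) / eps   # ~ 2 * (a_j - y_j)

assert abs(grad_half - (a[j] - y[j])) < 1e-4
assert abs(grad_nohalf - 2 * (a[j] - y[j])) < 1e-4
```

Only the $j$-th term of the sum moves when $a_j^L$ is bumped, which is exactly the Kronecker-delta collapse described above.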
2,203,066
<p>The definition I have is the following:</p> <blockquote> <p>A vector space V is said to be <strong>finite-dimensional</strong> if there is a finite set of vectors in V that spans V and is said to be <strong>infinite-dimensional</strong> if no such set exists.</p> </blockquote> <p>However, with this definition I can't determine whether the vector space $\mathbb{R}^3$ is finite-dimensional or infinite-dimensional (I am assuming that it is finite since the dimension of $\mathbb{R}^3$ is $3$)</p> <p>Going with my thought process, though, I know that $(1,0,0),(0,1,0),(0,0,1)$ spans $\mathbb{R}^3$. However we can also check that $(2,0,0),(0,2,0),(0,0,2)$ spans $\mathbb{R}^3$. Also note that $(3,0,0),(0,3,0),(0,0,3)$ spans $\mathbb{R}^3$. This process could be continued over and over to show that there are infinitely many vectors that span $\mathbb{R}^3$. </p> <p>Wouldn't this mean that $\mathbb{R}^3$ is infinite-dimensional? Because there isn't a finite number of vectors that span $\mathbb{R}^3$. (Again I want to say this isn't the case and that there is something I am overlooking.) </p>
cws
225,972
<p>The question is whether there exists a finite set of vectors that spans the vector space (equivalently, a finite basis). If such a set exists, the vector space is said to be finite-dimensional; if not, it is infinite-dimensional. The definition asks only for <em>some</em> finite spanning set; it does not matter that there are infinitely many different spanning sets, as in your examples. An example of an infinite-dimensional vector space is the vector space of all power series.</p> <p>Contrast this with the vector space of all polynomials of degree less than or equal to 3, $\mathbb{P}_3 [t]$, which has finite dimension 4, since one basis consists of $\{1,t,t^2,t^3\}$.</p>
2,507,864
<blockquote> <p>Check if for any two set families $\mathcal A $ and $\mathcal B $ the following is true: $\bigcup (\mathcal A \cap \mathcal B) = \bigcup \mathcal A \cap \bigcup \mathcal B$</p> </blockquote> <p>First of all I considered an example: $\mathcal A = \{ \{1,2\}, \{1,3\} \}$ and $\mathcal B = \{\{1,2\},\{3,5\}\}$<br> Now, $\mathcal A \cap \mathcal B = \{\{1,2\}\}$, and so $\bigcup(\mathcal A \cap \mathcal B) =\{1,2 \}$. Whereas, on the other hand, $\bigcup \mathcal A= \{1,2,3 \} $ and $\bigcup \mathcal B = \{1,2,3,5 \}$, and so their intersection is $\{1,2,3 \}$, and so my thesis is that the statement is not true. But now, when I start to evaluate it using the Axiom of Extensionality, I get: $$(\exists X)((X\in \mathcal A \land X\in \mathcal B )\land x \in X)$$ $$\iff(\exists X)(X \in \mathcal A\land x\in X) \land (\exists X)(X \in \mathcal B \land x\in X)$$ $$\iff x \in \bigcup \mathcal A \land x \in \bigcup \mathcal B \iff x \in (\bigcup \mathcal A \cap \bigcup \mathcal B)$$ So, on the one hand the example I provided shows that the theorem is false, but on the other hand, the definitions say that it is actually true. Therefore I must have erred somewhere in my reasoning, but I can't see where. Could you steer me towards this error?</p>
Zhuoran He
485,692
<p>Maximum preserves convexity and minimum preserves concavity. So the maximum of two concave functions may be neither concave nor convex. It may become double peaked. For example,</p> <p>$$f(x)=\max[-|x+1|,-|x-1|]$$</p> <p>has an "M"-shaped graph. The minimum of two concave functions is always concave. This is not difficult to prove. Use the definition. For concave $f(x),g(x)$, we have</p> <p>$$\theta f(x_0)+(1-\theta)f(x_1)\leq f(x_\theta),$$ $$\theta g(x_0)+(1-\theta)g(x_1)\leq g(x_\theta),$$</p> <p>where $x_\theta=\theta x_0+(1-\theta)x_1$ and $\theta\in[0,1]$. Therefore</p> <p>$$\theta\min[f(x_0),g(x_0)]+(1-\theta)\min[f(x_1),g(x_1)]\leq\theta f(x_0)+(1-\theta)f(x_1)\leq f(x_\theta),$$</p> <p>and similarly, $$\theta\min[f(x_0),g(x_0)]+(1-\theta)\min[f(x_1),g(x_1)]\leq\theta g(x_0)+(1-\theta)g(x_1)\leq g(x_\theta).$$</p> <p>Therefore,</p> <p>$$\theta\min[f(x_0),g(x_0)]+(1-\theta)\min[f(x_1),g(x_1)]\leq \min[f(x_\theta),g(x_\theta)],$$</p> <p>which proves that $\min[f(x),g(x)]$ is concave.</p>
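A numerical illustration (my sketch) using the answer's own example $f(x)=-|x+1|$, $g(x)=-|x-1|$: the midpoint form of concavity holds for the minimum at every sampled pair, while the "M"-shaped maximum violates it:

```python
# Midpoint concavity: h((x0+x1)/2) >= (h(x0)+h(x1))/2 for a concave h.
f = lambda x: -abs(x + 1)
g = lambda x: -abs(x - 1)

def midpoint_concavity_violated(h, x0, x1):
    # True when the midpoint inequality fails (small tolerance for rounding).
    return (h(x0) + h(x1)) / 2 > h((x0 + x1) / 2) + 1e-12

pairs = [(-2 + 0.37 * i, -2 + 0.53 * j) for i in range(11) for j in range(8)]
m = lambda x: min(f(x), g(x))
M = lambda x: max(f(x), g(x))

# min(f, g) passes the midpoint test everywhere sampled ...
assert not any(midpoint_concavity_violated(m, x0, x1) for x0, x1 in pairs)
# ... while max(f, g) fails between its two peaks: M(-1)=M(1)=0 but M(0)=-1.
assert midpoint_concavity_violated(M, -1.0, 1.0)
```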
1,090,620
<p>I don't know how to solve this limit:</p> <p>$$ \lim_{y\to0} \frac{x e^ { \frac{-x^2}{y^2}}}{y^2}$$</p> <p>On the one hand $\frac{1}{e^ { \frac{x^2}{y^2}}} \to 0$,</p> <p>but $\frac{x}{y^2} \to +\infty$.</p> <p>So does this limit present the indeterminate form $0 \cdot \infty$?</p>
egreg
62,967
<p>For $x\ne0$, set $x^2/y^2=t$; then, as $y\to0$, we have $t\to\infty$, so the limit becomes $$ \lim_{t\to\infty}\frac{1}{x}te^{-t}=\frac{1}{x}\lim_{t\to\infty}\frac{t}{e^t} $$ which is easily shown to be $0$. If $x=0$ there is of course nothing to do.</p>
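One elementary way to see that this last limit is $0$ without l'Hôpital (my addition): since $e^t \ge t^2/2$ for $t\ge 0$ (from the power series of $e^t$),

```latex
0 \le \frac{t}{e^t} \le \frac{t}{t^2/2} = \frac{2}{t} \xrightarrow[t\to\infty]{} 0 .
```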
114,289
<p>I am trying to use C++ programs through MathLink in my notebooks, but I cannot successfully compile the simple example programs included with Mathematica.</p> <p>I do not have a specific question, I am just looking for guidance.</p> <pre><code>$Version
"9.0 for Linux x86 (64-bit) (November 20, 2012)"
$SystemID
"Linux-x86-64"
</code></pre> <p>My operating system is Linux Mint 17.3 Cinnamon 64-bit.</p> <p>Step by step, what I am trying to do is the following:</p> <pre><code>cd $InstallationDirectory/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions/
mcc -o addtwo ../MathLinkExamples/addtwo.c ../MathLinkExamples/addtwo.tm
</code></pre> <p>Since I am trying to compile from the directory where the libraries and the header (mathlink.h) are, I think it should work (I am no C expert either). I've also tried copying the addtwo.c and addtwo.tm files into the "CompilerAdditions" folder and running everything from there, with the same results. I get the following errors:</p> <pre><code>$InstallationDirectory/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions/libML64i3.so: undefined reference to 'shm_open'
(*more undefined references, see below*)
collect2: error: ld returned 1 exit status
</code></pre> <p>Undefined references to: 'sem_init', 'sem_unlink', 'sem_close', 'pthread_sigmask', 'sem_destroy', 'shm_unlink', 'pthread_create', 'sem_post', 'sem_trywait', 'sem_open', 'sem_wait', and 'pthread_join'. It seems related to semaphores, but I am really clueless here.</p>
Andrew Klofas
39,468
<p>Without knowing specifics (I haven't done exactly what you are trying to do), it's pretty common to see those kinds of linker errors if you don't have the correct link parameters. Try adding '-lrt' (without quotes) to the end of the mcc command; the shm_* and sem_* symbols live in librt. If the pthread_* references are still undefined, '-lpthread' should cover those. Does that help?</p>
17,143
<p>My next project I'd like to start working on is Domain Coloring. I am aware of the beautiful discussion at:</p> <p><a href="https://mathematica.stackexchange.com/questions/7275/how-can-i-generate-this-domain-coloring-plot">How can I generate this &quot;domain coloring&quot; plot?</a></p> <p>And I am studying it. However, a lot of the articles on domain coloring refer back to Hans Lundmark's page at:</p> <p><a href="http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html" rel="nofollow noreferrer">http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html</a></p> <p>So, I would like to begin my work by using Mathematica to draw these three images based on Hans' notes. I'd appreciate if anyone can provide some code that will produce these images, as I could use it to start my study of the rest of Hans' page.</p> <p><img src="https://i.stack.imgur.com/FuqMb.jpg" alt="arg"></p> <p><img src="https://i.stack.imgur.com/9S0I6.jpg" alt="abs"></p> <p><img src="https://i.stack.imgur.com/8cqhp.png" alt="blend"></p> <p>A very small adjustment. Still learning.</p> <pre><code>g[{f_, cf_}] := DensityPlot[f, {x, -1, 1}, {y, -1, 1}, PlotPoints -&gt; 51, ColorFunction -&gt; cf, Frame -&gt; False]; g /@ {{Arg[-(x + I y)], "SolarColors"}, {Mod[Log[2, Abs[x + I y]], 1], GrayLevel}} ImageMultiply @@ % </code></pre> <p><img src="https://i.stack.imgur.com/115CH.png" alt="scheme-blend-1"></p> <p>Not sure where to put my current question, so I'll update here. Just came back to visit and discovered some wonderful answers at the bottom of this list. 
I do understand the opening code:</p> <pre><code>f[z_] := (z + 2)^2*(z - 1 - 2 I)*(z + I) paint[z_] := Module[{x = Re[z], y = Im[z]}, color = Blend[{Black, Red, Orange, Yellow}, Rescale[ArcTan[-x, -y], {-Pi, Pi}]]; shade = Mod[Log[2, Abs[x + I y]], 1]; Darker[color, shade/4]] </code></pre> <p>But then I encounter difficulty with the following code:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; 400, PlotRangePadding -&gt; 0, MeshStyle -&gt; None, ImageSize -&gt; 300] </code></pre> <p>I'm good with the first few lines. It looks like ParametricPlot is plotting points, where x and y both range from -3 to 3 (correct me if I am wrong). I also understand the ColorFunctionScaling and the ColorFunction lines. I understand Axes, PlotRangePadding, MeshStyle, and ImageSize. Where I am having trouble is with what PlotPoints -&gt; 50 and Mesh -&gt; 400 are doing.</p> <p>First of all, my image size is 300. What does PlotPoints -&gt; 50 mean? Does that mean it will sample an array of 50x50 points out of 300x300 and scale the results to fit in the domain [-3,3]x[-3,3]? My next question is, do those points then get colored? And if so, how are the remainder of the points in the image colored? For example, I tried:</p> <pre><code>Table[ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], PlotPoints -&gt; n, MeshStyle -&gt; None], {n, 10, 50, 10}] </code></pre> <p>And the images got a little sharper as PlotPoints -&gt; n increased.</p> <p>Here's another question. What does Mesh -&gt; 400 do in this situation? 
For example, I tried lowering the mesh number:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; 100, PlotRangePadding -&gt; 0, MeshStyle -&gt; None, ImageSize -&gt; 300] </code></pre> <p>And was completely surprised that it had an effect on the image, particularly when MeshStyle->None. Here's the image I get:</p> <p><img src="https://i.stack.imgur.com/4jqEj.png" alt="today"></p> <p>Why does setting Mesh->100 decrease the sharpness of the image?</p> <p>One final question I have regards adding the mesh lines. Simon suggested<br> For the mesh you could do something like Mesh->{Range[-5,5],Range[-5,5]}, MeshStyle->Opacity[0.5], MeshFunctions->{(Re@f[#1+I #2]&amp;),(Im@f[#1+I #2]&amp;)} and cormullion added them to produce a beautiful result, but I tried this:</p> <pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3}, ColorFunctionScaling -&gt; False, ColorFunction -&gt; Function[{x, y}, paint[f[x + y I]]], Frame -&gt; False, Axes -&gt; False, MaxRecursion -&gt; 1, PlotPoints -&gt; 50, Mesh -&gt; {Range[-5, 5], Range[-5, 5]}, PlotRangePadding -&gt; 0, MeshStyle -&gt; Opacity[0.5], MeshFunctions -&gt; {(Re@f[#1 + I #2] &amp;), (Im@f[#1 + I #2] &amp;)}, ImageSize -&gt; 300] </code></pre> <p>And got this resulting image.</p> <p><img src="https://i.stack.imgur.com/zamAO.png" alt="today2"></p> <p>So I am clearly missing something. Maybe someone could post the code that gives cormullion's last image?</p> <p>OK, just purchased and installed Presentations package. 
Tried this:</p> <pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)], zmin = -2 - 2 I, zmax = 2 + 2 I, colorFunction = Function[arg, HotColor[Rescale[arg, {-Pi, Pi}]]], imgSize = 400}, Draw2D[{ComplexDensityDraw[Arg[f[z]], {z, zmin, zmax}, ColorFunction -&gt; colorFunction, ColorFunctionScaling -&gt; False, Mesh -&gt; 50, MeshFunctions -&gt; {Function[{x, y}, Abs[f[x + I y]]]}, PlotPoints -&gt; {50, 50}]}, Frame -&gt; True, FrameLabel -&gt; {Re, Im}, PlotLabel -&gt; Row[{"Arg coloring and Abs mesh of ", f[z]}], RotateLabel -&gt; False, BaseStyle -&gt; 12, ImageSize -&gt; imgSize]] </code></pre> <p>But got this colorless image.</p> <p><img src="https://i.stack.imgur.com/xSlX8.png" alt="today3"></p> <p>Any thoughts on how to fix this?</p>
Dr. belisarius
193
<p>Here is something quickly made, and similar to what you are after. You'll have to work out the details, though:</p> <pre><code>HotColor[ z_ ] := Which[ 0 &lt;= z &lt;= 3/8, RGBColor[z 8/3, 0, 0], 3/8 &lt;= z &lt;= 6/8, RGBColor[1, (z - 3/8) 8/3, 0], True, RGBColor[1, 1, (z - 6/8) 8/6] ]; g[{f_, cf_}] := DensityPlot[f, {x, -1, 1}, {y, -1, 1}, PlotPoints -&gt; 50, ColorFunction -&gt; cf, Frame -&gt; False]; g /@ {{ArcTan[-x, -y], HotColor}, {SawtoothWave[3 Norm[{x, y}]], GrayLevel}} ImageMultiply @@ % </code></pre> <p><img src="https://i.stack.imgur.com/rhRXf.png" alt="Mathematica graphics"></p>
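<p>For readers outside Mathematica, the same two-layer scheme — a hue layer driven by the argument and a grey sawtooth layer driven by the modulus, multiplied together as <code>ImageMultiply</code> does — can be sketched in NumPy. This only builds the RGB array (hand it to any image viewer to display); the function names are my own, and the color ramp just mirrors the piecewise <code>HotColor</code> above:</p>

```python
import numpy as np

def hot_color(t):
    # Map t in [0, 1] to RGB along black -> red -> yellow -> white,
    # mirroring the piecewise HotColor definition above.
    t = np.asarray(t, dtype=float)
    r = np.clip(t * 8 / 3, 0, 1)
    g = np.clip((t - 3 / 8) * 8 / 3, 0, 1)
    b = np.clip((t - 6 / 8) * 8 / 6, 0, 1)
    return np.stack([r, g, b], axis=-1)

def domain_color(n=200):
    # RGB array for the identity map on [-1,1]^2: a hue layer from the
    # argument and a sawtooth shading layer from log2|z|, multiplied.
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    z = x + 1j * y
    arg_layer = hot_color((np.arctan2(-y, -x) + np.pi) / (2 * np.pi))
    shade = np.mod(np.log2(np.abs(z) + 1e-12), 1.0)
    return arg_layer * shade[..., None]   # elementwise, like ImageMultiply

img = domain_color(64)
```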
1,018,235
<p>Mathematically speaking, given $c\in\mathbb{R}$, can I say that: $c\leq\infty$?</p> <p>E.g., is $10 \leq \infty$ a correct mathematical statement?</p> <p>I know this comparison is true in computer arithmetic, however is it correct from mathematical point of view? Does the "equality" part in $\leq$ matter here?</p>
Community
-1
<p>The real numbers $\mathbb R$ do not contain $\infty$ as an element, so with the relation $\le_\mathbb R$, the statement $c\le\infty$ does not make sense.</p> <p>The <a href="http://en.wikipedia.org/wiki/Extended_real_number_line" rel="nofollow"><strong>extended real number line</strong></a> $\overline{\mathbb R}$ <em>does</em> contain both $\infty$ and $-\infty$. While it loses the additive group structure of the standard reals, it retains a total ordering induced by $\le_\overline{\mathbb R}$, and under that relation, it is indeed true that</p> <p>$$c\le_\overline{\mathbb R}\infty$$</p> <p>for any $c\in \mathbb R$.</p>
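<p>The question's aside about computer arithmetic is no accident: IEEE-754 floating point in effect implements a finite slice of the extended real line, so the comparison behaves exactly as in $\overline{\mathbb R}$. A small Python illustration:</p>

```python
import math

# IEEE-754 floats include +inf and -inf, ordered like the extended reals:
# every finite value c satisfies -inf < c < +inf, and <= is reflexive at inf.
c = 10.0
finite_below_inf = c <= math.inf           # holds for every finite c
inf_le_inf = math.inf <= math.inf          # the "equality part" of <= at inf
ordered = -math.inf < c < math.inf
```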
1,018,235
<p>Mathematically speaking, given $c\in\mathbb{R}$, can I say that: $c\leq\infty$?</p> <p>E.g., is $10 \leq \infty$ a correct mathematical statement?</p> <p>I know this comparison is true in computer arithmetic, however is it correct from mathematical point of view? Does the "equality" part in $\leq$ matter here?</p>
Qiaochu Yuan
232
<p>Sure. An example where this is used in mathematics is when talking about <a href="http://en.wikipedia.org/wiki/Lp_space" rel="nofollow">$L^p$-spaces</a>. The only thing you need to know about these is that they depend on a real parameter $p$ which is allowed to take the value $\infty$, and it's common to make statements about $L^p$-spaces which are uniform in the parameter $p$, e.g. to say "for all $1 \le p \le \infty$, the $L^p$-space satisfies..." </p>
1,902,842
<p>In Bert Mendelson's <em>Introduction to Topology</em>, the first exercise of Ch. 1 Sec. 5 states:</p> <blockquote> <p>Let $X\subset A$ and $Y\subset B$. Prove that $$C(X\times Y)=A\times C(Y)\cup C(X) \times B.$$</p> </blockquote> <p>I have seen a "proof" of this, but I remain unsatisfied with the result. As support, I offer the following as a counterexample. </p> <p>Let $A=\{-1,0,1\}=B$. Also let $X=\{0,1\}$ and $Y=\{-1,0\}$. These satisfy the preconditions. Now, is it true that</p> <p>$$(\{0,1\}\times \{-1,0\})^C=\{-1,0,1\}\times \{-1,0\} \cup \{0,1\}^C\times \{-1,0,1\}.$$</p> <p>It is easy enough to see that $X\times Y=\{(0,0),(0,-1),(1,0),(1,-1)\}$. The complement* would then be anything not in this set, for example $(2,2)$. However, certainly $(2,2)$ is in neither $\{-1,0,1\}\times \{-1,0\}$ nor $\{0,1\}^C\times \{-1,0,1\}$.</p> <p>(*Is this definition of complement correct?)</p> <p>Is there some underlying assumption of which I should be aware? Is staying within the bounds of the parent sets a standard practice? Is my counterexample unreasonable? Please advise.</p>
Doug M
317,162
<p>From the previous line</p> <p>$|x-2||x+3| &lt; \epsilon$</p> <p>we have established that $|x-2|$ is less than $\delta$, and we need a construction for $\delta$ that makes all of this true, roughly</p> <p>$\delta &lt; \frac {\epsilon}{|x+3|}$</p> <p>And now we are where your question begins.</p> <p>Demand that $\delta &lt; 1$. For the purposes of the proof you only need to show that <em>some</em> $\delta$ exists, so you may place any further demands on it that you like; if the circumstances require, you can demand it be smaller still.</p> <p>If $\delta &lt; 1$ and $|x-2| &lt; \delta$, then $|x+3|&lt;6$, since</p> <p>$-1 &lt; x-2 &lt; 1\\ 4 &lt; x+3&lt; 6$</p> <p>So take</p> <p>$\delta = \min \left(1,\frac \epsilon 6\right)$</p>
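<p>The choice $\delta=\min(1,\epsilon/6)$ can be checked numerically. This Python sketch assumes the limit in question is $\lim_{x\to2}(x^2+x)=6$, since $|x-2||x+3|=|x^2+x-6|$, and samples points inside the punctured $\delta$-ball:</p>

```python
def delta(eps):
    # the answer's choice: delta = min(1, eps / 6)
    return min(1.0, eps / 6.0)

def check(eps, samples=2000):
    # sample x with 0 < |x - 2| < delta(eps); confirm |(x^2 + x) - 6| < eps
    d = delta(eps)
    for i in range(1, samples):
        for sign in (1.0, -1.0):
            x = 2.0 + sign * d * i / samples
            if not abs((x * x + x) - 6.0) < eps:
                return False
    return True

results = [check(e) for e in (1.0, 0.1, 0.01)]
```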
1,904,903
<p>Taken from Soo T. Tan's Calculus textbook, Chapter 9.7, Exercise 27:</p> <p>Define $$a_n=\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}$$ One needs to prove the convergence or divergence of the series $$\sum_{n=1}^{\infty} a_n$$</p> <p>upon finding the radius of convergence for $\sum_{n=1}^{\infty}\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}\cdot x^{2n+1}$ to be $1$ and checking the endpoints. Also, please use tests and methods that are taught in introductory courses.</p> <p>Answer keys show divergence, but without explanation. </p>
grand_chat
215,011
<p>Rewrite the $n$th term by sliding each of the factors in the numerator one position to the left. This gives $$ a_n = \frac 21\frac43\frac65\cdots\frac{2n}{2n-1}\frac1{2n+1}. $$ We now see $a_n$ is a product consisting of factors bigger than one, multiplied onto the final factor $\frac1{2n+1}$. Conclude $$ a_n&gt;\frac1{2n+1}, $$ so the series $\sum a_n$ diverges.</p>
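<p>The comparison $a_n>\frac1{2n+1}$ is easy to confirm with exact rational arithmetic; a Python sketch (helper names are mine):</p>

```python
from fractions import Fraction

def a(n):
    # a_n = (2/3)(4/5)...(2n/(2n+1)), computed exactly
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= Fraction(2 * k, 2 * k + 1)
    return out

# every term dominates the matching term of the divergent series sum 1/(2n+1)
bound_holds = all(a(n) > Fraction(1, 2 * n + 1) for n in range(1, 120))
partial_sum = float(sum(a(n) for n in range(1, 120)))  # grows without bound
```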
2,195,287
<blockquote> <p>Knowing that $p$ is prime and $n$ is a natural number show that $$n^{41}\equiv n\bmod 55$$ using Fermat's little theorem $$n^p\equiv n\bmod p$$</p> </blockquote> <p>If the exercise was to show that $$n^{41}\equiv n\bmod 11$$ I would just rewrite $n^{41}$ as a power of $11$ and would easily prove that the congruence is true in this case but I cannot apply the same logic when I have $\bmod55$ since $n^{41}$ cannot be written as power of $55$.</p> <p>Any hint?</p>
kub0x
309,863
<p>Since $n^{10} \equiv 1 \pmod{11}$ whenever $\gcd(n,11)=1$, we also have $n^{10k} \equiv 1 \pmod{11}$.</p> <p>Thus for $k=4$ we get $n^{40} \equiv 1 \pmod{11}$, hence $n^{41} \equiv n \pmod{11}$ (using Fermat's little theorem; the case $11 \mid n$ is trivial, since both sides are $0$).</p> <p>For modulus $55$ you can use the fact that $55=11\cdot 5$, so by Fermat's little theorem:</p> <p>$n^{11} \equiv n \pmod{11}$ and $n^{5} \equiv n \pmod 5$</p> <p>The same argument as above, now with $n^{4} \equiv 1 \pmod 5$ for $\gcd(n,5)=1$, gives $n^{41}=n^{40}\cdot n \equiv n \pmod 5$.</p> <p>Then regroup using the CRT for modulus $55$: since $45 \equiv 1 \pmod{11}$, $45 \equiv 0 \pmod 5$, $11 \equiv 0 \pmod{11}$ and $11 \equiv 1 \pmod 5$,</p> <p>$n^{41} \equiv 45n + 11n \equiv 56n \equiv n \pmod{55}$</p>
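<p>Since both sides of the congruence depend only on $n \bmod 55$, the claim can also be verified exhaustively over one full residue system; a quick Python check:</p>

```python
# n^41 ≡ n (mod 55) for every residue, plus the two Fermat ingredients
check55 = all(pow(n, 41, 55) == n % 55 for n in range(55))
check11 = all(pow(n, 41, 11) == n % 11 for n in range(11))
check5 = all(pow(n, 41, 5) == n % 5 for n in range(5))
```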
1,985,427
<p>$$ A= \begin{bmatrix} 2 &amp; 1 &amp; -1 \\ -2 &amp; -2 &amp; 1 \\ 0 &amp; -2 &amp; 1 \\ \end{bmatrix} $$</p> <p>Can someone show me the best way to approach this? Should I use pivoting? I tried using the formula, but I think that only works for 2 x 2 matrices. </p>
Joffan
206,402
<p>As an easy-to-understand process, you can note that $A.A^{-1} = I$ and then undertake parallel row operations on $A$ and $I$ to transform this into $I.A^{-1}=X$, where $X$ is the result of the same operations on $I$ that transformed $A$ into $I$. You can check, through considering the action of matrix multiplication, that the effect of row scaling or combinations maintains the equality through each transformation.</p> <p>For convenience during the elimination process this can be written as an augmented matrix $[A\mid I]$:</p> <p>$$\begin{align} &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 1 &amp; -1 &amp; 1 &amp; 0 &amp; 0 \\ -2 &amp; -2 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; -2 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right] \tag{$A\mid I$}\\ &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 1 &amp; -1 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; -1 &amp; 0 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; -2 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right] \tag{add r1 to r2}\\ &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 0 &amp; -1 &amp; 2 &amp; 1 &amp; 0 \\ 0 &amp; -1 &amp; 0 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; -2 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right] \tag{add r2 to r1}\\ &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 0 &amp; -1 &amp; 2 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; -1 &amp; 0 \\ 0 &amp; -2 &amp; 1 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right] \tag{mult r2 by -1}\\ &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 0 &amp; -1 &amp; 2 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; -2 &amp; -2 &amp; 1 \\ \end{array} \right] \tag{add 2xr2 to r3}\\ &amp; \left[ \begin{array}{ccc|ccc} 2 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; 1 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; -2 &amp; -2 &amp; 1 \\ \end{array} \right] \tag{add r3 to r1} \\ &amp; \left[ \begin{array}{ccc|ccc} 1 &amp; 0 &amp; 0 &amp; 0 &amp; -0.5 &amp; 0.5 \\ 0 &amp; 1 &amp; 0 &amp; -1 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 
1 &amp; -2 &amp; -2 &amp; 1 \\ \end{array} \right] \tag{mult r1 by 0.5}\\ \text{so }&amp;A^{-1} = \begin{bmatrix} 0 &amp; -0.5 &amp; 0.5 \\ -1 &amp; -1 &amp; 0 \\ -2 &amp; -2 &amp; 1 \\ \end{bmatrix} \end{align}$$</p> <p>There are plenty of more sophisticated methods but I think this is a good basic tool to get started with.</p>
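<p>The result of the elimination can be double-checked numerically; a sketch with NumPy:</p>

```python
import numpy as np

A = np.array([[ 2.,  1., -1.],
              [-2., -2.,  1.],
              [ 0., -2.,  1.]])

# the inverse produced by the row reduction above
A_inv = np.array([[ 0. , -0.5,  0.5],
                  [-1. , -1. ,  0. ],
                  [-2. , -2. ,  1. ]])

# Row-reducing [A | I] to [I | A^{-1}] should leave A @ A_inv = I;
# we also compare against NumPy's own inverse.
ok_identity = np.allclose(A @ A_inv, np.eye(3))
ok_numpy = np.allclose(A_inv, np.linalg.inv(A))
```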
2,751,909
<blockquote> <p>Let $f$ be a non-negative differentiable function such that $f'$ is continuous and $\displaystyle\int_{0}^{\infty}f(x)\,dx$ and $\displaystyle\int_{0}^{\infty}f'(x)\,dx$ exist.</p> <p>Prove or give a counter example: $f'(x)\overset{x\rightarrow \infty}{\rightarrow} 0$</p> </blockquote> <p><strong>Note:</strong> I think it is not true but I couldn't find a counter example.</p>
Ian
83,396
<p>The $P$ will need to depend on $\epsilon$; you will not be able to find one $P$ for all $\epsilon$. To put it another way, your quantifiers are in the wrong order. </p> <p>That said, the key property enabling you to do this problem directly from the definition is that $f(x)=x$ is an increasing function. This means </p> <p>$$U(f,P)=\sum_{i=1}^n f(x_i)(x_i-x_{i-1}) \\ L(f,P)=\sum_{i=1}^n f(x_{i-1})(x_i-x_{i-1})$$</p> <p>where $P$ is $-1=x_0&lt;x_1&lt;\dots&lt;x_n=1$. To make these close to each other, a uniform partition will suffice.</p>
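<p>Concretely, for $f(x)=x$ on $[-1,1]$ the uniform partition into $n$ pieces gives $U(f,P)-L(f,P)=\sum_{i}(x_i-x_{i-1})^2=4/n$, so any $n>4/\epsilon$ works. A quick Python check (names are mine):</p>

```python
def riemann_gap(n):
    # U(f,P) - L(f,P) for f(x) = x on [-1, 1] with a uniform partition
    # into n subintervals; f is increasing, so sup/inf sit at the endpoints.
    xs = [-1 + 2 * i / n for i in range(n + 1)]
    upper = sum(xs[i] * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    lower = sum(xs[i - 1] * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    return upper - lower

gaps = [riemann_gap(n) for n in (10, 100, 1000)]   # ~ 4/n each
```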
78,001
<p>I have to find the exponential generating function for placing distinct objects into $k$ distinct boxes with at least $m$ object per box, indexed by the number of objects. Could you help me please? Also with some hints</p>
Phira
9,325
<p>It is easy to do for $k=1$. For general $k$, one can use the correspondence between disjoint union of labelled combinatorial objects and products of exponential generating functions.</p>
78,001
<p>I have to find the exponential generating function for placing distinct objects into $k$ distinct boxes with at least $m$ object per box, indexed by the number of objects. Could you help me please? Also with some hints</p>
Brian M. Scott
12,042
<p>Let $a_k(m,n)$ be the number of ways of placing $n$ distinct objects in $k$ distinct boxes if there must be at least $m$ objects in each box. Suppose first that $k=1$. Clearly $a_1(m,n)=0$ if $n&lt;m$, and $a_1(m,n)=1$ if $n\ge m$, so the exponential generating function for the $a_1(m,n)$ is $$A_1(x)=\sum_{n\ge 0}a_1(m,n)\frac{x^n}{n!}=\sum_{n\ge m}\frac{x^n}{n!}=e^x-\sum_{n=0}^{m-1}\frac{x^n}{n!}.$$</p> <p>Suppose now that you know $A_k(x)$ for some $k\ge 1$. To distribute $n$ objects amongst $k+1$ boxes you must first choose some of them to go into box $k+1$ and then distribute the remainder amongst the first $k$ boxes. Suppose that you put $r$ objects into box $k+1$. You can choose these $r$ objects in $\binom{n}r$ ways, and there are then $a_1(m,r)$ ways to ‘distribute’ them to box $k+1$ and $a_k(m,n-r)$ ways to distribute the remainder amongst the other $k$ boxes. Summing over the possible values of $r$, we find that $$a_{k+1}(m,n) = \sum_{r=0}^n\binom{n}r a_1(m,r)a_k(m,n-r)\;,$$ which is exactly what is needed to give us $A_{k+1}(x)=A_1(x)A_k(x)$. By induction, then, $$A_k(x)=A_1(x)^k=\left(e^x-\sum_{n=0}^{m-1}\frac{x^n}{n!}\right)^k\tag{1}.$$ If we set $$p_m(x)=\sum_{n=0}^{m-1}\frac{x^n}{n!},$$ the Taylor polynomial of order $m-1$ for the exponential function, $(1)$ can be written simply $$A_k(x) = \big(e^x-p_m(x)\big)^k.$$</p>
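<p>The formula $(1)$ can be sanity-checked against brute-force enumeration for small parameters; this Python sketch extracts $n!\,[x^n]\,(e^x-p_m(x))^k$ by truncated power-series multiplication (the helper names are mine):</p>

```python
from itertools import product
from math import factorial

def brute_count(k, m, n):
    # place n labelled objects into k labelled boxes, every box gets >= m
    return sum(1 for assign in product(range(k), repeat=n)
               if all(assign.count(box) >= m for box in range(k)))

def egf_coefficient(k, m, n):
    # n! * [x^n] (e^x - p_m(x))^k, via truncated series arithmetic
    N = n + 1
    base = [0.0] * N                       # coefficients of e^x - p_m(x)
    for j in range(m, N):
        base[j] = 1.0 / factorial(j)
    series = [0.0] * N
    series[0] = 1.0                        # constant series 1
    for _ in range(k):                     # multiply k copies of base
        series = [sum(series[i] * base[j - i] for i in range(j + 1))
                  for j in range(N)]
    return round(series[n] * factorial(n))

checks = all(brute_count(k, m, n) == egf_coefficient(k, m, n)
             for k in (1, 2, 3) for m in (1, 2) for n in range(7))
```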
2,184,056
<p>To compute the oblique asymptote as $x \to +\infty$, we can first compute $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x}$, it it exists, and $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x} = k$, then we can further compute $\mathop {\lim }\limits_{x \to + \infty } (f(x) - kx)=b$, and if it exists then the asymptote would be $y = kx + b$.</p> <p>But I am wondering if the existence of $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x}$ always imply the existence of the second limit $\mathop {\lim }\limits_{x \to + \infty } f(x) - kx$ and hence the asymptote? If not, any counterexample is appreciated.</p>
onamoonlessnight
188,019
<p>Generally no - just take any function $f(x)$ that grows sub-linearly at infinity. For example, $$ f(x) = \sqrt{x} $$ means $k=0$, but then $$ \lim_{x \to \infty} (f(x) - kx) = \lim_{x \to \infty } \sqrt{x} = \infty. $$</p>
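<p>Numerically, the counterexample looks like this when sampling along $x=10^p$:</p>

```python
import math

f = math.sqrt
xs = [10.0 ** p for p in range(2, 9)]    # x -> infinity along powers of 10

slopes = [f(x) / x for x in xs]          # f(x)/x -> k = 0 ...
residues = [f(x) - 0 * x for x in xs]    # ... but f(x) - kx = sqrt(x) -> infinity
```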
3,232,341
<p>How would I show this? I know a directed graph with no cycles has at least one node of outdegree zero (because a graph where every node has outdegree one contains a cycle), but do not know where to go from here.</p>
sam000013
719,586
<p>This proof shows that a DAG (Directed Acyclic Graph) has at least one node of indegree <span class="math-container">$ 0 $</span> and at least one node of outdegree <span class="math-container">$ 0 $</span>, which includes the statement to be shown. Because the graph has no cycles, every path has finite length, so in a finite DAG a longest path exists. Let <span class="math-container">$ P $</span> be a longest path, running from a vertex <span class="math-container">$ u $</span> (its source) to a vertex <span class="math-container">$ v $</span> (its destination). There cannot be an incoming edge on <span class="math-container">$ u $</span> from any vertex <span class="math-container">$ u' $</span>: such a <span class="math-container">$ u' $</span> cannot lie on <span class="math-container">$ P $</span> (that would close a cycle), so prepending the edge <span class="math-container">$ u' \to u $</span> to <span class="math-container">$ P $</span> would produce a strictly longer path, a contradiction. Symmetrically, there cannot be an outgoing edge from <span class="math-container">$ v $</span> to any vertex <span class="math-container">$ v' $</span>, since appending it would again give a longer path. Hence <span class="math-container">$ u $</span> has indegree <span class="math-container">$ 0 $</span> and <span class="math-container">$ v $</span> has outdegree <span class="math-container">$ 0 $</span>.</p>
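<p>The same idea is constructive: following edges forward from any vertex of a finite DAG must stop at a sink within $|V|$ steps (a repeated vertex would mean a cycle), and following edges backward finds a source. A sketch in Python (the dict-of-successors representation is my own choice):</p>

```python
def find_source_and_sink(adj):
    # adj: dict vertex -> list of successors in a finite DAG.
    vertices = list(adj)
    # forward walk -> a vertex of outdegree 0 (sink)
    v = vertices[0]
    for _ in range(len(vertices)):
        if not adj[v]:
            break
        v = adj[v][0]
    sink = v
    # backward walk -> a vertex of indegree 0 (source)
    preds = {u: [w for w in adj if u in adj[w]] for u in adj}
    u = vertices[0]
    for _ in range(len(vertices)):
        if not preds[u]:
            break
        u = preds[u][0]
    return u, sink

dag = {1: [2, 3], 2: [4], 3: [4], 4: []}
src, snk = find_source_and_sink(dag)
```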
248,313
<p>Assume that $f:\mathbb R \rightarrow \mathbb R$ is continuous and $h\in \mathbb R$. Let $\Delta_h^n f(x)$ be a finite difference of $f$ of order $n$, i.e</p> <p>$$ \Delta_h^1 f(x)=f(x+h)-f(x), $$ $$ \Delta_h^2f(x)=\Delta_h^1f(x+h)-\Delta_h^1 f(x)=f(x+2h)-2f(x+h)+f(x), $$ $$ \Delta_h^3 f(x)=\Delta_h^2f(x+h)-\Delta_h^2f(x)=f(x+3h)-3f(x+2h)+3f(x+h)-f(x), $$ etc. There is an explicite formula for $n$-th difference: $$ \Delta_h^n f(x)=\sum_{k=0}^n (-1)^{n-k}\frac{n!}{k!(n-k)!} f(x+kh). $$</p> <p>Assume now that $n\in \mathbb N$ and $f:\mathbb R \rightarrow \mathbb R$ are such that for each $x \in \mathbb R$: $$ \frac{\Delta_h^n f(x)}{h^n} \rightarrow 0 \textrm{ as } h \rightarrow 0. $$ Is it then $f$ a polynomial of degree $\leq n-1$?</p> <p>It is clear if $n=1$, because then $f'(x)=0$ for $x\in \mathbb R$.</p> <p>Edit. Without continuity assumption about $f$ it is not true, because for $n-1$-additive function $F$ which is not $n-1$-linear we have $\Delta_h^nf(x)=0$, where $f(x)=F(x,...,x)$.</p>
Davide Giraudo
9,849
<p>The result holds if we assume that $f$ is $n$-times differentiable; otherwise, as WimC shows, it's not necessarily the case.</p> <p>Using <a href="https://math.stackexchange.com/questions/243425/prove-that-hk0-lim-t-to0-frac-sum-j-0k-binomkj-1k-jhjt/243449#243449">this thread</a> and translated functions ($f(\cdot)=h(x+\cdot)$), we can see that $$\lim_{h\to 0}\frac{\Delta_h^nf(x)}{h^n}=f^{(n)}(x),$$ so the hypothesis yields $f^{(n)}\equiv 0$, hence $f$ is a polynomial of degree at most $n-1$.</p>
248,313
<p>Assume that $f:\mathbb R \rightarrow \mathbb R$ is continuous and $h\in \mathbb R$. Let $\Delta_h^n f(x)$ be a finite difference of $f$ of order $n$, i.e</p> <p>$$ \Delta_h^1 f(x)=f(x+h)-f(x), $$ $$ \Delta_h^2f(x)=\Delta_h^1f(x+h)-\Delta_h^1 f(x)=f(x+2h)-2f(x+h)+f(x), $$ $$ \Delta_h^3 f(x)=\Delta_h^2f(x+h)-\Delta_h^2f(x)=f(x+3h)-3f(x+2h)+3f(x+h)-f(x), $$ etc. There is an explicite formula for $n$-th difference: $$ \Delta_h^n f(x)=\sum_{k=0}^n (-1)^{n-k}\frac{n!}{k!(n-k)!} f(x+kh). $$</p> <p>Assume now that $n\in \mathbb N$ and $f:\mathbb R \rightarrow \mathbb R$ are such that for each $x \in \mathbb R$: $$ \frac{\Delta_h^n f(x)}{h^n} \rightarrow 0 \textrm{ as } h \rightarrow 0. $$ Is it then $f$ a polynomial of degree $\leq n-1$?</p> <p>It is clear if $n=1$, because then $f'(x)=0$ for $x\in \mathbb R$.</p> <p>Edit. Without continuity assumption about $f$ it is not true, because for $n-1$-additive function $F$ which is not $n-1$-linear we have $\Delta_h^nf(x)=0$, where $f(x)=F(x,...,x)$.</p>
WimC
25,313
<p>Let $f(x) = |x|$ then $\Delta_h^2(f)$ has support $[-2h, 0]$. In particular $\lim_{h \to 0}\Delta_h^2(f)/h^2 = 0$ pointwise, but $f$ is not a polynomial.</p> <p><strong>Edit:</strong> If the convergence in $x$ is <em>uniform</em> on an interval $[a, b]$ then I think that $f$ is a polynomial on that interval. This may follow from Fourier expansion, but I don't have time now to hammer out the fine points (if it can be done).</p>
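<p>Both phenomena are easy to observe numerically: for a smooth function the scaled $n$-th difference approaches the $n$-th derivative, while for $f(x)=|x|$ the second difference at any fixed $x\neq 0$ is eventually exactly zero, since $[x,x+2h]$ misses the kink. A Python sketch (helper names mine):</p>

```python
from math import comb, sin, cos

def dn(f, x, h, n):
    # n-th forward difference: sum_k (-1)^(n-k) C(n,k) f(x + k h)
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k * h)
               for k in range(n + 1))

# Smooth case: Delta_h^3 sin(x) / h^3 -> sin'''(x) = -cos(x)
x, h = 0.7, 1e-3
smooth_err = abs(dn(sin, x, h, 3) / h**3 - (-cos(x)))

# WimC's example: Delta_h^2 |x| / h^2 -> 0 at every fixed x != 0
abs_vals = [dn(abs, 0.5, step, 2) / step**2 for step in (0.1, 0.01, 0.001)]
```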
2,286,749
<p>My question is about the general solution for the following differential equation: $$ \frac{dx}{dt} = x^a(1-x)^b,\quad a,b\gt 0~~~~~~~~~~~~~~~(1)~. $$</p> <p>Obviously, if $a=b=1$ then (1) reduces to $$ \frac{dx}{dt} = x(1-x) $$ which has as solution $$ x(t) = \frac{1}{1 + A e^{-t}}\,,$$ for some constant, $A$. In fact, for $a,b$ positive integers, a solution can be obtained by using method of separation of variables and partial fractions. I want to be able to find a solution that considers all cases and which would obviously include the special cases when $a$ and $b$ are positive integers.</p>
Jan Eerland
226,665
<p>Well, separating variables we have:</p>

<p>$$\text{x}'\left(t\right)=\text{x}\left(t\right)^\text{a}\cdot\left(1-\text{x}\left(t\right)\right)^\text{b}\space\Longleftrightarrow\space\int\frac{\text{x}'\left(t\right)}{\text{x}\left(t\right)^\text{a}\cdot\left(1-\text{x}\left(t\right)\right)^\text{b}}\space\text{d}t=\int1\space\text{d}t\tag1$$</p>

<p>For the integrals:</p>

<ul> <li>$$\int1\space\text{d}t=t+\text{C}_1\tag2$$</li> <li>Substitute $\text{u}=\text{x}\left(t\right)$, so $\text{d}\text{u}=\text{x}'\left(t\right)\space\text{d}t$: $$\int\frac{\text{x}'\left(t\right)}{\text{x}\left(t\right)^\text{a}\cdot\left(1-\text{x}\left(t\right)\right)^\text{b}}\space\text{d}t=\int\frac{1}{\text{u}^\text{a}\cdot\left(1-\text{u}\right)^\text{b}}\space\text{d}\text{u}\tag3$$</li> </ul>
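<p>Formally, the $u$-integral in $(3)$ is an incomplete Beta function, $\int u^{-a}(1-u)^{-b}\,\mathrm{d}u = B(u;\,1-a,\,1-b)$ up to a constant, which has no elementary closed form for most $a,b$. For $a=b=1$ it recovers the logistic solution quoted in the question, which can be verified numerically (a Python sketch):</p>

```python
from math import exp

# For a = b = 1 the claimed solution is x(t) = 1 / (1 + A e^{-t});
# check x' = x(1 - x) by comparing a centered difference quotient
# against the right-hand side at several times t.
A = 2.0          # arbitrary constant, fixed by an initial condition

def x(t):
    return 1.0 / (1.0 + A * exp(-t))

h = 1e-6
max_err = max(abs((x(t + h) - x(t - h)) / (2 * h) - x(t) * (1 - x(t)))
              for t in [-2.0, -1.0, 0.0, 1.0, 2.0])
```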
4,206,147
<blockquote> <p><span class="math-container">$f(f(x))=f(x),$</span> for all <span class="math-container">$x\in\Bbb R$</span> suppose <span class="math-container">$f$</span> is differentiable, show <span class="math-container">$f$</span> is constant or <span class="math-container">$f(x)=x$</span></p> </blockquote> <p>Clearly, <span class="math-container">$f'(f(x))f'(x)=f'(x)$</span>. This implies for each <span class="math-container">$x$</span>, <span class="math-container">$f'(f(x))=1$</span>, or <span class="math-container">$f'(x)=0$</span>. But this is not enough.</p>
FShrike
815,585
<p>Since <span class="math-container">$f(f(x))=f(x)$</span>, every value <span class="math-container">$y=f(x)$</span> in the range of <span class="math-container">$f$</span> is a fixed point: <span class="math-container">$f(y)=y$</span>. Since <span class="math-container">$f$</span> is continuous, its range is an interval <span class="math-container">$I$</span>, and <span class="math-container">$f$</span> is the identity on <span class="math-container">$I$</span>; in particular <span class="math-container">$f'(y)=1$</span> at every interior point of <span class="math-container">$I$</span>, consistent with the derivative computation above.</p> <p>If <span class="math-container">$I$</span> is a single point <span class="math-container">$\{k\}$</span>, then <span class="math-container">$f(x)=k$</span> for all <span class="math-container">$x\in\mathbb{R}$</span>, and <span class="math-container">$f$</span> is constant.</p> <p>Otherwise, suppose <span class="math-container">$I$</span> had a finite right endpoint <span class="math-container">$b$</span>. Taking <span class="math-container">$y_n\in I$</span> with <span class="math-container">$y_n\to b$</span>, continuity gives <span class="math-container">$f(b)=\lim f(y_n)=\lim y_n=b$</span>, so <span class="math-container">$b\in I$</span>. The left-hand difference quotients at <span class="math-container">$b$</span>, taken through points of <span class="math-container">$I$</span>, all equal <span class="math-container">$1$</span>; but for <span class="math-container">$x&gt;b$</span> we have <span class="math-container">$f(x)\le b=f(b)$</span>, so the right-hand difference quotients are <span class="math-container">$\le 0$</span>. These cannot converge to a common value <span class="math-container">$f'(b)$</span>, contradicting differentiability at <span class="math-container">$b$</span>. The same argument rules out a finite left endpoint. Hence <span class="math-container">$I=\mathbb{R}$</span>, and <span class="math-container">$f(x)=x$</span> for all <span class="math-container">$x\in\mathbb{R}$</span>.</p>
1,345,643
<p>In an exercise it seems I must use Pascal's triangle to solve this $(z^1+z^2+z^3+z^4)^3$. The result would be $z^3 + 3z^4 + 6z^5 + 10z^ 6 + 12z^ 7 + 12z^ 8 + 10z^ 9 + 6z^ {10} + 3z^ {11} + z^{12}$. But how do I use the triangle to get to that result? Personally I can only solve things like $(x+y)^2$ and $(x+y)^3$.</p> <p>Thanks for any tips that may be given.</p>
A C
246,578
<p>You can take $x = z$ and $y = z^2 + z^3 + z^4$. Then the problem is $(x+y)^3$, which you can expand with Pascal's triangle. Next resubstitute $y$ in the expansion and choose $a = z^2$ and $b = z^3 + z^4$, and expand those powers by Pascal's triangle. Finally resubstitute for $a$ and $b$, expand the remaining powers by Pascal's triangle, and collect like powers of $z$.</p>
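<p>The target expansion can be verified by direct polynomial multiplication, which is also a handy way to check each Pascal-triangle step; a short Python sketch:</p>

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (index = power of z)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

base = [0, 1, 1, 1, 1]                  # z + z^2 + z^3 + z^4
cube = poly_mul(poly_mul(base, base), base)

# coefficients of z^0 .. z^12 from the question's stated expansion
expected = [0, 0, 0, 1, 3, 6, 10, 12, 12, 10, 6, 3, 1]
```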
179,377
<p>Consider the $k \times k$ block matrix:</p> <p>$$C = \left(\begin{array}{ccccc} A &amp; B &amp; B &amp; \cdots &amp; B \\ B &amp; A &amp; B &amp;\cdots &amp; B \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ B &amp; B &amp; B &amp; \cdots &amp; A \end{array}\right) = I_k \otimes (A - B) + \mathbb{1}_k \otimes B$$</p> <p>where $A$ and $B$ are size $n \times n$ and $\mathbb{1}$ is the matrix of all ones.</p> <p>It would seem that the formula for the determinant of $C$ is simply:</p> <p>$$\det(C) = \det(A-B)^{k-1} \det(A+(k-1) B)$$</p> <p>Can anyone explain why this seems to be true or offer a proof or direct me to a proof?</p>
Christian Remling
48,839
<p>We can just manipulate $C$ in the usual way by row operations: Subtract the last "row" from all the other "rows" (this is really several traditional row operations done at once). This produces $$ \begin{pmatrix} A- B &amp;0&amp; 0 &amp; \ldots &amp; 0 &amp;B-A \\ 0 &amp; A-B &amp;0 &amp;\ldots &amp; 0 &amp; B-A\\ &amp;&amp; \ldots &amp;&amp;&amp;\\ B &amp; B &amp; B &amp; \ldots &amp; B &amp; A \end{pmatrix} . $$ Assume for the moment that $A-B$ is invertible. Subtract $B(A-B)^{-1}$ times all the other rows from the last row; we multiply from the left so that we indeed obtain linear combinations of the <em>rows.</em> This gives an upper triangular matrix with diagonal entries $A-B$ ($k-1$ times) and $A+(k-1)B$. We now read off the asserted formula.</p> <p>The invertible matrices are dense, so I obtain the general case by approximation.</p>
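<p>The identity is easy to test numerically for random $A$, $B$; a NumPy sketch that builds $C$ via Kronecker products, as in the question:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def block_C(A, B, k):
    # C = I_k (x) (A - B) + 1_k (x) B: A on diagonal blocks, B elsewhere
    return np.kron(np.eye(k), A - B) + np.kron(np.ones((k, k)), B)

n, k = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

lhs = np.linalg.det(block_C(A, B, k))
rhs = np.linalg.det(A - B) ** (k - 1) * np.linalg.det(A + (k - 1) * B)
rel_err = abs(lhs - rhs) / max(abs(lhs), 1e-12)
```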
179,377
<p>Consider the $k \times k$ block matrix:</p> <p>$$C = \left(\begin{array}{ccccc} A &amp; B &amp; B &amp; \cdots &amp; B \\ B &amp; A &amp; B &amp;\cdots &amp; B \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ B &amp; B &amp; B &amp; \cdots &amp; A \end{array}\right) = I_k \otimes (A - B) + \mathbb{1}_k \otimes B$$</p> <p>where $A$ and $B$ are size $n \times n$ and $\mathbb{1}$ is the matrix of all ones.</p> <p>It would seem that the formula for the determinant of $C$ is simply:</p> <p>$$\det(C) = \det(A-B)^{k-1} \det(A+(k-1) B)$$</p> <p>Can anyone explain why this seems to be true or offer a proof or direct me to a proof?</p>
Rodrigo de Azevedo
91,764
<p>Let us assume that $A-B$ is invertible. Write</p> <p>$$\begin{array}{rl} C &amp;= \begin{bmatrix} A &amp; B &amp; \ldots &amp; B\\ B &amp; A &amp; \ldots &amp; B\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\B &amp; B &amp; \ldots &amp; A\end{bmatrix}\\\\ &amp;= \begin{bmatrix} A-B &amp; O_n &amp; \ldots &amp; O_n\\ O_n &amp; A-B &amp; \ldots &amp; O_n\\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ O_n &amp; O_n &amp; \ldots &amp; A-B \end{bmatrix} + \begin{bmatrix} B \\ B\\ \vdots \\ B\end{bmatrix} \begin{bmatrix} I_n \\ I_n\\ \vdots \\ I_n\end{bmatrix}^T\\\\ &amp;= (I_k \otimes (A-B)) + (1_k \otimes B) (1_k \otimes I_n)^T\\\\ &amp;= (I_k \otimes (A-B)) \left(I_{nk} + (I_k \otimes (A-B)^{-1}) (1_k \otimes B) (1_k \otimes I_n)^T\right)\end{array}$$</p> <p>Using Sylvester's determinant identity,</p> <p>$$\begin{array}{rl} \det (C) &amp;= \det\left((I_k \otimes (A-B)) \left(I_{nk} + (I_k \otimes (A-B)^{-1}) (1_k \otimes B) (1_k \otimes I_n)^T\right)\right)\\\\ &amp;= \det(I_k \otimes (A-B)) \cdot \det \left( I_{nk} + (I_k \otimes (A-B)^{-1}) (1_k \otimes B) (1_k \otimes I_n)^T \right)\\\\ &amp;= \det(I_k \otimes (A-B)) \cdot \det \left( I_{n} + (1_k \otimes I_n)^T (I_k \otimes (A-B)^{-1}) (1_k \otimes B) \right)\\\\ &amp;= \left(\det(A - B)\right)^k \cdot \det \left( I_{n} + k (A-B)^{-1} B \right)\\\\ &amp;= \det((A-B)^k) \cdot \det \left( I_{n} + k (A-B)^{-1} B \right)\\\\ &amp;= \det((A-B)^{k-1}) \cdot \det \left( A-B + k B \right)\\\\ &amp;= (\det(A-B))^{k-1} \cdot \det \left( A + (k-1) B \right)\end{array}$$</p>
1,658,577
<p>I'm an electrical/computer engineering student and have taken fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectfully), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p> <p>I'm really interested in learning rigorous math, however the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs. </p> <p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. However, I don't think not revisiting topics I already know is a good idea, because it would be next to impossible to learn higher level math without knowing lower level math from a proof based point of view.</p>
David
297,532
<p>There are countries where people are puzzled if you tell them there is a distinction between "calculus" and "analysis." They think "calculus" is just an old-fashioned name for analysis. The reason these subjects are viewed as different in North America is because a typical "calculus" class is where one learns the mechanical aspects and basic applications of calculus, whereas "analysis" is where you learn everything that comes after that, including relearning the theoretical parts of calculus, but without an excessive focus on the familiar practical aspects.</p> <p>So to get to my point, most textbooks published in the U.S. or Canada with a title like "mathematical analysis" are made precisely for people like you. For example, you could use Apostol's <em>Mathematical Analysis</em>. <em>Mathematical Analysis I, II</em> by Zorich, translated from Russian, is also very good, and can be used in conjunction with the problem book by Makarov and Goluzina, which has mostly non-routine problems with hints and answers. I think you would find either of these books preferable to Spivak's <em>Calculus</em>, which was originally intended for people learning the material for the first time (though it's not always used that way). One caveat about Apostol's book is that the chapter on Riemann-Stieltjes integration can be a bit difficult for those who haven't studied the plain Riemann integral rigorously elsewhere.</p> <p>In many North American universities, there is a second course in linear algebra that revisits the subject from a more theoretical perspective. And of course, there are textbooks to match this approach, usually beginning with vector spaces (e.g., Lang or Friedberg/Insel/Spence). However, in your case an attractive alternative might be to study abstract algebra and linear algebra concurrently (vector spaces being a special case of groups, and groups being widely applied in linear algebra in other ways). 
An excellent book that takes this combined approach is <em>Algebra</em> by Artin. Godement's <em>Algebra</em> is also outstanding, though a good deal drier.</p> <p>I would consider basic mathematical analysis and algebra at the level of those books to be the foundation of an undergraduate education in mathematics. After that, there are many directions you can go in. I think you'll find that books in differential equations, complex analysis, etc., that are aimed at people who have already reached that level of sophistication are sufficiently different from what you've done before that you won't feel frustrated by them. You can get an idea of what topics are most important by looking at the undergraduate curricula of good universities.</p> <p>There are also books on analysis and algebra at a lower level of difficulty and covering much less material. <em>Elementary Analysis</em> by Ross and <em>Abstract Algebra</em> by Herstein come to mind. (The latter doesn't discuss linear algebra.)</p>
3,261,846
<blockquote> <p>What is the solution to the IVP <span class="math-container">$$y'+y=|x|, \ x \in \mathbb{R}, \ y(-1)=0$$</span></p> </blockquote> <p>The general solution of the above problem is <span class="math-container">$y_{g}(x)=ce^{-x}$</span>.</p> <p>How to find the particular solution? As <span class="math-container">$|x|$</span> is not differentiable at origin. Is there any alternate way to get the solution?</p>
user289143
289,143
<p>You have to distinguish the two cases <span class="math-container">$x &lt; 0$</span> and <span class="math-container">$x \geq 0$</span> and see that the two solutions "match" at the origin.</p> <p>When <span class="math-container">$x &lt; 0$</span>, <span class="math-container">$y'+y=-x$</span>: you look for a particular solution of the form <span class="math-container">$y_p(x)=ax+b$</span>, which gives <span class="math-container">$a+ax+b=-x$</span>, therefore <span class="math-container">$a=-1$</span> and <span class="math-container">$b=-a=1$</span>, so <span class="math-container">$y_p(x)=-x+1$</span></p> <p>When <span class="math-container">$x \geq 0$</span>, <span class="math-container">$y'+y=x$</span>: you look for a particular solution of the form <span class="math-container">$y_p(x)=ax+b$</span>, which gives <span class="math-container">$a+ax+b=x$</span>, therefore <span class="math-container">$a=1$</span> and <span class="math-container">$b=-a=-1$</span>, so <span class="math-container">$y_p(x)=x-1$</span></p> <p>So now we have <span class="math-container">$y(x)=c_1e^{-x}-x+1$</span> for <span class="math-container">$x &lt; 0$</span> and <span class="math-container">$y(x)=c_2e^{-x}+x-1$</span> for <span class="math-container">$x \geq 0$</span></p> <p>The initial condition <span class="math-container">$y(-1)=0$</span> gives us <span class="math-container">$0=c_1e+1+1$</span>, i.e. 
<span class="math-container">$c_1=-2e^{-1}$</span>, and we get <span class="math-container">$y(x)=-2e^{-x-1}-x+1$</span> for <span class="math-container">$x &lt; 0$</span>.</p> <p>Now we want our two solutions to match at the origin, so <span class="math-container">$-2e^{-1}+1=c_2-1$</span>, hence <span class="math-container">$c_2=2(1-e^{-1})$</span>, and we get <span class="math-container">$y(x)=2(1-e^{-1})e^{-x}+x-1$</span> when <span class="math-container">$x \geq 0$</span></p> <p>Finally we can write our solution <span class="math-container">$$y(x)=-2e^{-x-1}-x+1 \ \mathrm{when\ } x &lt; 0$$</span></p> <p><span class="math-container">$$y(x)=2(1-e^{-1})e^{-x}+x-1 \ \mathrm{when \ }x \geq 0 $$</span></p>
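<p>As a quick sanity check (my own addition, not part of the derivation), the piecewise solution can be verified numerically: the initial condition, the matching at the origin, and the residual <span class="math-container">$y'+y-|x|$</span> should all be numerically zero.</p>

```python
import math

def y(x):
    # the piecewise solution derived above
    if x < 0:
        return -2.0 * math.exp(-x - 1) - x + 1
    return 2.0 * (1 - math.exp(-1)) * math.exp(-x) + x - 1

def residual(x, h=1e-6):
    # central-difference approximation of y'(x) + y(x) - |x|
    yprime = (y(x + h) - y(x - h)) / (2 * h)
    return yprime + y(x) - abs(x)

print(y(-1.0))                        # 0.0  (initial condition holds)
print(y(-1e-9) - y(1e-9))             # ~0   (the two branches agree at the origin)
print(max(abs(residual(x)) for x in (-2.0, -0.5, 0.5, 2.0)))  # tiny
```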
3,261,846
<blockquote> <p>What is the solution to the IVP <span class="math-container">$$y'+y=|x|, \ x \in \mathbb{R}, \ y(-1)=0$$</span></p> </blockquote> <p>The general solution of the above problem is <span class="math-container">$y_{g}(x)=ce^{-x}$</span>.</p> <p>How to find the particular solution? As <span class="math-container">$|x|$</span> is not differentiable at origin. Is there any alternate way to get the solution?</p>
Community
-1
<p>To find a particular solution, you can explore the Ansatz</p> <p><span class="math-container">$$y=|x|,$$</span> giving <span class="math-container">$$y'+y=\text{sgn}(x)+|x|.$$</span></p> <p>This is valid on the whole real line, except at the origin, where the discontinuity is unavoidable. So compensating the sign term, there is a piecewise solution,</p> <p><span class="math-container">$$y=|x|-\text{sgn}(x).$$</span></p> <p>The rest is standard.</p> <blockquote class="spoiler"> <p> <span class="math-container">$$x&lt;0\to y=-\dfrac2ee^{-x}-x+1,\\x&gt;0\to y=ce^{-x}+x-1$$</span> where <span class="math-container">$c$</span> is unknown.</p> </blockquote>
3,261,846
<blockquote> <p>What is the solution to the IVP <span class="math-container">$$y'+y=|x|, \ x \in \mathbb{R}, \ y(-1)=0$$</span></p> </blockquote> <p>The general solution of the above problem is <span class="math-container">$y_{g}(x)=ce^{-x}$</span>.</p> <p>How to find the particular solution? As <span class="math-container">$|x|$</span> is not differentiable at origin. Is there any alternate way to get the solution?</p>
JJacquelin
108,514
<p><span class="math-container">$$y'+y=|x|$$</span> Solving without the condition:</p> <p>Case <span class="math-container">$x&gt;0 \quad:\quad y'+y=x \qquad y=c_1e^{-x}+x-1$</span></p> <p>Case <span class="math-container">$x&lt;0 \quad:\quad y'+y=-x \qquad y=c_2e^{-x}-x+1$</span> <span class="math-container">$$y=ce^{-x}+(x-1)\text{ sgn}(x)$$</span> where sgn<span class="math-container">$(x)$</span> denotes the sign of <span class="math-container">$x$</span>.</p> <p>The function is discontinuous at <span class="math-container">$x=0$</span>.</p> <p><a href="https://i.stack.imgur.com/OhO1k.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OhO1k.gif" alt="enter image description here"></a></p> <p>The discontinuous curve represented is only an example for a particular value of <span class="math-container">$c_1=c_2=c$</span>. Of course there are infinitely many solutions. Don't confuse this general equation with the solution below, which is strictly the answer corresponding to the condition <span class="math-container">$y(-1)=0$</span>.</p> <p>With the condition <span class="math-container">$\quad y(-1)=0$</span>:</p> <p><span class="math-container">$y(-1)=0=c_2e^{-(-1)}+((-1)-1)(-1)=c_2e+2\quad;\quad c_2=-\frac{2}{e}$</span> <span class="math-container">$$y=-2e^{-x-1}-x+1 \quad \text{in } x\leq 0. \tag 1$$</span></p> <p>In <span class="math-container">$x&gt;0$</span> :</p> <p>Condition <span class="math-container">$y(0)=1-2e^{-1}$</span></p> <p><span class="math-container">$y(0)=1-2e^{-1}=c_1e^{-0}+0-1=c_1-1\quad;\quad c_1=2-2e^{-1}$</span> <span class="math-container">$$y=2(1-e^{-1})e^{-x}+x-1 \quad \text{in } x\geq 0. 
\tag 2$$</span></p> <p><a href="https://i.stack.imgur.com/SLKA3.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLKA3.gif" alt="enter image description here"></a></p> <p>Note added after the comments:</p> <p>The above solution, Eqs. <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, is the same as LutzL's solution written in a different form: <span class="math-container">$$y=ce^{-x}+sign(e^{-x}-1+x)$$</span> with <span class="math-container">$c=1+2e^{-1}$</span>.</p>
2,352,811
<p>Why is it not enough for the partial derivatives to exist to imply differentiability of the function? Why is the continuity of the partial derivatives needed?</p>
Michael Hoppe
93,935
<p>To answer your first question: because they're partial, i.e., derivatives in very special directions. Imagine, for example, a surface which is generated by a ray parallel to the $x$-$y$-plane that is rotating in such a way that during the first quarter of its rotation it rises by $1$, during the next quarter it falls by $1$, then falls by $1$ again, and in its last quarter it rises again by $1$.</p>
1,443,441
<blockquote> <p>If <span class="math-container">$\frac{x^2+y^2}{x+y}=4$</span>,then all possible values of <span class="math-container">$(x-y)$</span> are given by<br></p> <p><span class="math-container">$(A)\left[-2\sqrt2,2\sqrt2\right]\hspace{1cm}(B)\left\{-4,4\right\}\hspace{1cm}(C)\left[-4,4\right]\hspace{1cm}(D)\left[-2,2\right]$</span><br></p> </blockquote> <p>I tried this question.<br></p> <p><span class="math-container">$\frac{x^2+y^2}{x+y}=4\Rightarrow x+y-\frac{2xy}{x+y}=4\Rightarrow x+y=\frac{2xy}{x+y}+4$</span><br></p> <p><span class="math-container">$x-y=\sqrt{(\frac{2xy}{x+y}+4)^2-4xy}$</span>, but I am not able to proceed. I am stuck here. Is my method wrong?</p>
Wanderer
195,012
<p>${x^2+y^2\over x+y}=4 \implies x^2+y^2=4x+4y \implies x^2+y^2-4x-4y=0 \implies (x-2)^2+(y-2)^2=(2\sqrt{2})^2$ which is a circle with center $(2,2)$ and radius $2\sqrt{2}$</p>
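<p>To finish from here (this completion is mine, not the answerer's): on the circle, $x = 2+2\sqrt{2}\cos t$ and $y = 2+2\sqrt{2}\sin t$, so $x-y = 2\sqrt{2}(\cos t - \sin t) = 4\cos(t+\pi/4)$, which ranges over $[-4,4]$, i.e. option (C). (The origin, where $x+y=0$, must be excluded, but there $x-y=0$, which does not affect the extremes.) A quick numerical check:</p>

```python
import math

# sample x - y around the circle (x-2)^2 + (y-2)^2 = 8
r = 2 * math.sqrt(2)
vals = [(2 + r * math.cos(t)) - (2 + r * math.sin(t))
        for t in (2 * math.pi * k / 100000 for k in range(100000))]
print(min(vals), max(vals))   # ≈ -4 and 4
```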
452,306
<p>I am trying to be able to find the radius of a cone combined with a cylinder. see my other question (Solving for radius of a combined shape of a cone and a cylinder where the cone is base is concentric with the cylinder? part2 )</p> <p>I have a volume calculation that Has been reduced as far as I know how to.</p> <p>Know values:</p> <p>$$v=65712.4$$ $$x=3$$ $$y=2$$ $$\theta=30$$ $$r=unknown$$</p> <p>$$v=\pi r^3\left(2y-\frac{2}{3}\tan\theta-\frac{x}{r}\right)$$</p> <p>Since I haven't solved a Quadratic equation in a while. </p> <p>I would appreciate it explained in steps. </p> <p>Thank You For Your Time.</p>
DBS
67,937
<p>I am editing what I said before, which mistakenly assumed $A$ above is a subgroup. </p> <p>Here is the answer. Assume that $g$ is not nilpotent (in that case there is nothing to show). </p> <p>Claim: The point $\{1\}$ is not isolated in $\bar{A}$. </p> <p>The claim implies that there is a non-trivial sequence in $\bar{A}$, hence in $A$, which converges to 1. Then writing this sequence as $\{ g^{k_{i}} \}$ we get $\{ g^{k_{i}-1} \} \rightarrow g^{-1}$. Hence $g^{-1} \in \bar{A}$, and thus $\bar{A}$ is the closure of the subgroup generated by $A$ and hence is a subgroup (which is a simple fact).</p> <p>To prove the claim: the set $A$ has at least one limit point (because of compactness). Call it $x$. Then there is a sequence $\{ g^{n_{k}} \} \rightarrow x$. Hence $x^{-1}g^{n_{k}} \rightarrow 1$. </p>
117,024
<p>The trivial approach of counting the number of triangles in a simple graph $G$ of order $n$ is to check for every triple $(x,y,z) \in {V(G)\choose 3}$ if $x,y,z$ forms a triangle. </p> <p>This procedure gives us the algorithmic complexity of $O(n^3)$.</p> <p>It is well known that if $A$ is the adjacency matrix of $G$ then the number of triangles in $G$ is $tr(A^3)/6.$</p> <p>Since matrix multiplication can be computed in time $O(n^{2.37})$ it is natural to ask:</p> <p>Is there any (known) faster method for computing the number of triangles of a graph?</p>
Listing
3,123
<p>Let me cite this <a href="https://www-complexnetworks.lip6.fr/%7Elatapy/Publis/triangles_short.pdf" rel="noreferrer">paper</a> from 2007 (Practical algorithms for triangle computations in very large (sparse (power-law)) graphs by Matthieu Latapy):</p> <blockquote> <p>The fastest algorithm known for finding and counting triangles relies on fast matrix product and has an <span class="math-container">$\mathcal{O}(n^\omega)$</span> time complexity, where <span class="math-container">$\omega &lt; 2.376$</span> is the fast matrix product exponent. This approach however leads to a <span class="math-container">$\theta(n^2)$</span> space complexity.</p> </blockquote> <p>The paper also discusses improvements for sparse graphs, as well as algorithms for listing the triangles.</p>
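<p>For concreteness, here is a small illustration (my own, not from the paper) that the trace formula from the question agrees with the naive $O(n^3)$ triple check; the graph $K_4$ minus one edge has exactly two triangles.</p>

```python
def matmul(X, Y):
    # plain cubic-time matrix product, enough for a tiny example
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
A = [[0] * n for _ in range(n)]
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]:   # K4 minus the edge (2,3)
    A[i][j] = A[j][i] = 1

# naive check of all vertex triples
naive = sum(1 for x in range(n) for y in range(x + 1, n) for z in range(y + 1, n)
            if A[x][y] and A[y][z] and A[x][z])

# tr(A^3)/6: each triangle is counted once per vertex and per orientation
A3 = matmul(matmul(A, A), A)
via_trace = sum(A3[i][i] for i in range(n)) // 6

print(naive, via_trace)   # 2 2
```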
3,388,457
<p>I made the equation <span class="math-container">$$(100b+40+a)-(100a+40+b)=99$$</span> and simplified it to <span class="math-container">$b-a=1$</span>, but I do not know where to go from there.</p>
farruhota
425,072
<p>You found: <span class="math-container">$b-a=1$</span>. </p> <p><span class="math-container">$a4b$</span> is divisible by <span class="math-container">$9$</span>, so: <span class="math-container">$a+b=5$</span> or <span class="math-container">$14$</span>. </p> <p>Adding the two equations you get: <span class="math-container">$2b=6$</span> or <span class="math-container">$15$</span>. </p> <p>Can you finish?</p>
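<p>A brute force over the ten possibilities for each digit (a sketch of mine, using the answer's divisibility-by-9 condition on <span class="math-container">$a4b$</span>) confirms the unique solution <span class="math-container">$a=2$</span>, <span class="math-container">$b=3$</span>, i.e. the number <span class="math-container">$243$</span>:</p>

```python
sols = [(a, b) for a in range(10) for b in range(10)
        if (100*b + 40 + a) - (100*a + 40 + b) == 99   # the equation from the question
        and (a + 4 + b) % 9 == 0]                      # digit sum of a4b divisible by 9
print(sols)   # [(2, 3)]
```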
2,725,697
<p>A weird question that has me confused. Suppose I have a symmetric matrix $A$, which has to be computed somehow. For example, the Hessian matrix is a symmetric matrix that is computed by taking the gradient twice. A covariance matrix is also symmetric as another example. $A$ will have $n^2$ entries but really only need to compute $n^2/2$ of them since it is symmetric. </p> <p>Now consider a vector of appropriate length $z$. The product $Az$ will yield a vector, not a matrix. So it seems like in terms of computation time/steps that the product $Az$ can actually be calculated faster than computing $A$ itself? Has anyone ever thought of this, and if so in what context could this be of use?</p>
Rócherz
451,007
<p>It sounds like something that would be discussed in Numerical Linear Algebra courses. I haven't dealt with it directly, though, just with similar operations.</p>
936,200
<p>Suppose that $x_0$ is a real number and $x_n = \frac{1+x_{n-1}}{2}$ for all natural $n$. Use the Monotone Convergence Theorem to prove $x_n \to 1$ as $n$ grows.</p> <p>Can someone please help me? I don't know what to assume, since I don't know whether the sequence is increasing or decreasing when $x_0 &lt; 1$ and when $x_0 &gt; 1$. Any hint/help would really help. Thank you.</p>
Marc van Leeuwen
18,880
<p>The relation $\in$ is typed <code>\in</code> (in TeX) and often pronounced "is in", or "is a member of", or "is an element of". But many variations occur according to the particular set, so you might pronounce $z\in\mathbf C$ as "$z$ is a complex number" rather than "$z$ is an element of the set of complex numbers".</p> <p>Your confusion is due to the two different roles of a variable. A variable is a name, a lexical object introduced in a mathematical text, usually specifying what <em>kind</em> of value it <em>designates</em> by specifying a set of allowed values, its domain. The domain is an attribute of the name of the variable. However, when reasoning about the variable, one assumes that it designates one specific element of its domain, the (current) value of the variable. When talking about $y$ in this way, one always means the value of (the name) $y$, not the name $y$ itself. It is the value of $y$ that is an element of (is a member of, is in) the domain of (the name) $y$.</p>
4,637,565
<p>I am thinking of positive sequences whose sum is infinite but whose sum of squares is not?</p> <p>One representative sequence is <span class="math-container">$$x[n] = \frac{a}{n+b},$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are given real numbers such that <span class="math-container">$a&gt;0$</span> and <span class="math-container">$b\ge0$</span>.</p> <p>I know that there will be infinitely many more sequences <span class="math-container">$x[n]$</span> such that <span class="math-container">$x[n]\ge0, ~x=1, 2, ...$</span>, <span class="math-container">$\sum x[n] = \infty$</span>, and <span class="math-container">$\sum (x[n])^2 &lt;= M$</span> for a sufficiently large constant value <span class="math-container">$M$</span>.</p> <p>Can you give me some examples? If possible, I would really appreciate it if you could tell me how to find these sequences (i.e., methodology of how to find).</p>
Daron
53,993
<p>The standard example is the sequence <span class="math-container">$x_n = 1/n$</span>.</p> <p>We have <span class="math-container">$\sum_{n=1}^N \frac{1}{n} \simeq \log N$</span> and so <span class="math-container">$\sum_{n=1}^N \frac{1}{n}$</span> diverges.</p> <p>At the same time <span class="math-container">$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$</span> and the squared series converges.</p> <p>More generally we have <span class="math-container">$\sum_{n=1}^N \frac{1}{n^p} \simeq \frac{N^{1-p}}{1-p}$</span> for <span class="math-container">$0&lt;p &lt;1$</span>, and so the series diverges, while for <span class="math-container">$p &gt;1$</span> the series converges. You can use these two facts to come up with other examples.</p>
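<p>A quick numerical illustration (mine) of the standard example: the partial sums of <span class="math-container">$1/n$</span> track <span class="math-container">$\log N$</span> and keep growing, while the partial sums of <span class="math-container">$1/n^2$</span> stay below <span class="math-container">$\pi^2/6$</span>.</p>

```python
import math

N = 10_000
harmonic = sum(1.0 / n for n in range(1, N + 1))
squares = sum(1.0 / n ** 2 for n in range(1, N + 1))

# harmonic - log(N) tends to Euler's constant 0.5772...
print(harmonic, math.log(N))
# the squared series is already within ~1/N of pi^2/6
print(squares, math.pi ** 2 / 6)
```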
966,798
<p>How I solve the following equation for $0 \le x \le 360$:</p> <p>$$ 2\cos2x-4\sin x\cos x=\sqrt{6} $$</p> <p>I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$:</p> <p>$$ 2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\ 2\cos2x-2\sin2x=\sqrt{6}\\ R = \sqrt{4} = 2 \\ \alpha = \arctan \frac{2}{2} = 45\\ \therefore \cos(2x + 45) = \frac{\sqrt6}{2} $$</p> <p>which is impossible. I then tried to use t-substitution, where:</p> <p>$$ t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2} $$</p> <p>but the algebra got unreasonably complicated. What am I missing?</p>
Ross Millikan
1,827
<p>Note that along the way he assumed $\delta \lt \frac 12$. This allowed the following calculations to go through. As long as $\epsilon$ is small, $\frac \epsilon 2 \lt \frac 12$, but a nasty opponent (who knew our proof, say) could give us $\epsilon =5$, say. If we just set $\delta = \frac \epsilon 2$, our opponent could say "Look, it doesn't work at $x=0$, and $|0-1| \lt \frac 52$."</p>
869,268
<p>I am asked if $\{n, n^{2}, n^{3}\}$ forms a group under multiplication modulo $m$ where $m = n + n^{2} + n^{3}.$</p> <p>As an example we see that $\{2, 4, 8\}$ does form a group modulo $14,$ with identity $8,$ but am stuck starting the proof for the general case. Thanks in advance.</p>
Bill Dubuque
242
<p>Though one can verify this result by brute force calculation, one gains much more insight by examining the ring-theoretic structure governing the result. First, we have factorizations</p> <p>$\quad \smash[t]{(\color{#0a0}{\overbrace{n^3\!+\!n^2\!+\!n}^{\large m}})}(n\!-\!1)\, =\, n(n^3\!-1),\,$ and $\ G = \langle n\rangle = \{1,n,n^2\}$ is a subgroup of $\,\Bbb Z/(n^3\!-1)\phantom{I^{I^{I^I}}}$</p> <p>Chinese rem (CRT) $\,\Rightarrow\, \Bbb Z/(n^3\!-1)\times \Bbb Z/n \,\cong\, \Bbb Z/n(n^3\!-1)\,\ $ by $\,\ (a,b)\to\, \color{#c00}{n^3} a + (1\!-\!n^3)\, b$</p> <p>A ring hom is a group hom, therefore $\,(G,0)\to \color{#c00}{n^3}G \,$ remains a group in $\,R = \Bbb Z/n(n^3\!-1),\,$ namely $\ n^3 G = {n^3}\{1,n,n^2\} = \{n^3,n^4,n^5\} = \{n^3,n,n^2\},\:$ since $\,\ n^4 = n\,$ in $\ \Bbb Z/(n^4\!-n)$</p> <p>And, again, the image of $\ n^3 G \subset R\,$ remains a group in $\,R/\color{#0a0}m \,\cong\,\Bbb Z/\color{#0a0}m\ \ $ <strong>QED</strong></p>
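<p>The structural argument above is easy to corroborate by the brute-force calculation mentioned at the start; the following sketch (mine) checks the group axioms for <span class="math-container">$\{n, n^2, n^3\}$</span> under multiplication mod <span class="math-container">$m = n+n^2+n^3$</span> for the first few <span class="math-container">$n$</span>.</p>

```python
for n in range(2, 20):
    m = n + n**2 + n**3
    G = [n % m, n**2 % m, n**3 % m]
    # closure under multiplication mod m
    assert all((x * y) % m in G for x in G for y in G)
    # a unique identity element
    ident = [e for e in G if all((e * x) % m == x for x in G)]
    assert len(ident) == 1
    # every element has an inverse
    assert all(any((x * y) % m == ident[0] for y in G) for x in G)
print("group axioms hold for n = 2..19; e.g. n = 2 gives {2, 4, 8} mod 14 with identity 8")
```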
1,721,925
<p>I've been introduced more or less to these methods of finding a root of a function (a point where it intersects the $x$ axis), but I'm not sure when they should be used and what are the advantages of one method over the other. </p> <p>I think that it would be nice to have an answer that puts them in comparison and shows the situations where one method would be more advantageous than the other, etc.</p> <p>So, my question are: </p> <ul> <li><p>When should we use one method over the other and why? </p></li> <li><p>And, what are the advantages and disadvantages therefore of one method over the other?</p></li> </ul> <p>Also, if you want to provide a brief explanation of each of the method, I think that the answer would be more complete and interesting.</p>
Lutz Lehmann
115,115
<p>You should never seriously use bisection.</p> <p>If you think that derivatives are hard, use the secant method.</p> <p>If you want to force convergence and can find intervals with opposite signs of the function, then use one of the anti-stalling variants of regula falsi. If you think that convergence could be faster, use a method based on the inverse quadratic like Muller or Brent.</p> <p>If derivatives are not so hard, then use Newton. To encourage convergence, combine with line-search.</p>
1,721,925
<p>I've been introduced more or less to these methods of finding a root of a function (a point where it intersects the $x$ axis), but I'm not sure when they should be used and what are the advantages of one method over the other. </p> <p>I think that it would be nice to have an answer that puts them in comparison and shows the situations where one method would be more advantageous than the other, etc.</p> <p>So, my question are: </p> <ul> <li><p>When should we use one method over the other and why? </p></li> <li><p>And, what are the advantages and disadvantages therefore of one method over the other?</p></li> </ul> <p>Also, if you want to provide a brief explanation of each of the method, I think that the answer would be more complete and interesting.</p>
Simply Beautiful Art
272,831
<p>I'm of an opinion a bit different from that of <a href="https://math.stackexchange.com/a/1721990">Lutz Lehmann's</a>. Here are my thoughts concerning root-finding in 1 dimension:</p> <p>TL;DR although no method is best and each has their drawbacks, it is often the case that another method can be used to counteract the drawback. The drawbacks to this mindset are either a necessary understanding of the provided function (will Newton's method work well here?) or more complicated code combining multiple methods (which method to be used each iteration?).</p> <ul> <li><p>You should never use bisection <em>on its own</em>, unless you are absolutely certain the root cannot be linearly approximated nicely, a rather extreme case where no method outperforms bisection. Even if you believe this may be the case, you may want to look at the following points.</p> </li> <li><p>You should, whenever possible, use bisection <em>with</em> other methods, even if you believe the root may not be linearly approximated nicely. Such a method is called a <em>hybrid method</em>, and can be used to <em>guarantee convergence</em> in the event that something like Newton's method is not converging fast enough, or worse, diverging. See <a href="https://youtu.be/FD3BPTMGJds" rel="nofollow noreferrer">Newt-safe</a> for example.</p> </li> <li><p>For bracketing methods (methods which bound the root like bisection, which should be used whenever possible), it is often better to first try something else before resorting to bisection (which should be a worst case resort). Lutz's answer mentions several of these.</p> </li> <li><p>Newton's method should be reserved for cases when computing <span class="math-container">$f(x)/f'(x)$</span> is quite easy (such as for a polynomial). Otherwise it is probably simpler to just use the secant method.</p> </li> <li><p>Fixed-point iteration should never be used outside of a theoretical situation. 
There are not many advantages here aside from being marginally simpler to use for really simple or specific situations.</p> </li> </ul> <p>It is worth noting that although the second and third points are desirable, they are not always necessary. You may be able to deduce beforehand that Newton's method is perfectly fine on its own without any of the additional considerations above. On top of this, although the concept of a hybrid method sounds enticing, it can lead to significantly more complicated code. In other words, this may be quite overkill for the given problem.</p>
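<p>As an illustration of the hybrid idea, here is a minimal Python sketch (my own, not code from any of the linked references): keep a sign-change bracket, take a Newton step only when it stays inside the bracket, and otherwise bisect, so convergence is guaranteed while Newton's speed is kept whenever possible.</p>

```python
import math

def newton_safe(f, df, a, b, tol=1e-12, max_iter=100):
    """Newton's method guarded by bisection on a bracket [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        # accept the Newton step only if it lands strictly inside the bracket
        newton_ok = dfx != 0 and a < x - fx / dfx < b
        x_new = x - fx / dfx if newton_ok else 0.5 * (a + b)
        f_new = f(x_new)
        # shrink the bracket using the sign of f at the new point
        if f_new * fa < 0:
            b = x_new
        else:
            a, fa = x_new, f_new
        if abs(x_new - x) < tol or f_new == 0:
            return x_new
        x = x_new
    return x

root = newton_safe(math.cos, lambda t: -math.sin(t), 1.0, 2.0)
print(root)   # ≈ 1.5707963 (pi/2)
```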
985,212
<p>Can you row reduce the matrix before computing $\det(\lambda I-A)$? Will this still give an equivalent characteristic polynomial?</p>
BioCoder
186,331
<p>No, you can't row reduce in advance; you will in general get a different characteristic polynomial if you do that. For example, the matrix $A=\left[ \begin{array} {lcr} -1 &amp; 1 &amp; 0 \\ -4 &amp; 3 &amp; 0 \\ 1 &amp; 0 &amp; 2 \\ \end{array} \right]$ has characteristic polynomial $(2-\lambda)(1-\lambda)^2$, but the row-reduced matrix $A'= \left[ \begin{array} {lcr} -1 &amp; 1 &amp; 0 \\ 0 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 2 \\ \end{array} \right]$ has the different characteristic polynomial $(2-\lambda)(1+\lambda)^2$.</p>
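<p>The example can be confirmed with a few lines of pure Python (my own sketch): evaluating $\det(\lambda I - M)$ at $\lambda = 1$ distinguishes the two matrices, since $1$ is an eigenvalue of $A$ but not of $A'$.</p>

```python
def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def charpoly_at(M, lam):
    # evaluates det(lam*I - M)
    return det3([[lam * (i == j) - M[i][j] for j in range(3)] for i in range(3)])

A  = [[-1, 1, 0], [-4, 3, 0], [1, 0, 2]]
Ar = [[-1, 1, 0], [0, -1, 0], [0, 0, 2]]   # a row reduction of A

# 0 vs -4: lambda = 1 is an eigenvalue of A only, so the polynomials differ
print(charpoly_at(A, 1), charpoly_at(Ar, 1))
```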
551,964
<p>I have an optimization problem where I have to optimize a function $f(A)$, where $A$ is a (sparse) matrix, like</p> <p>$$A = \begin{array}{cccc} A_1 &amp; A_0 &amp; A_0 &amp; 0 \\ A_0 &amp; A_2 &amp; 0 &amp; A_0 \\ A_0 &amp; 0 &amp; A_3 &amp; A_0 \\ 0 &amp; A_0 &amp; A_0 &amp; A_4 \\ \end{array}$$</p> <p>$A$ is a positive definite matrix and the variables are $A_i$. I want to optimize over these variables with the constraint that the matrix $A$ is positive definite. Is there any software to help me out with this?</p>
Ross B.
68,567
<p>The CVX software package for MATLAB can handle semidefinite programming problems (SDPs).</p>
551,964
<p>I have an optimization problem where I have to optimize a function $f(A)$, where $A$ is a (sparse) matrix, like</p> <p>$$A = \begin{array}{cccc} A_1 &amp; A_0 &amp; A_0 &amp; 0 \\ A_0 &amp; A_2 &amp; 0 &amp; A_0 \\ A_0 &amp; 0 &amp; A_3 &amp; A_0 \\ 0 &amp; A_0 &amp; A_0 &amp; A_4 \\ \end{array}$$</p> <p>$A$ is a positive definite matrix and the variables are $A_i$. I want to optimize over these variables with the constraint that the matrix $A$ is positive definite. Is there any software to help me out with this?</p>
srihegde
567,367
<p>There are plenty of packages for semidefinite programming (SDP). I have experience using the <a href="http://cvxopt.org/" rel="nofollow noreferrer">CVXOPT</a> and <a href="http://www.cvxpy.org/" rel="nofollow noreferrer">CVXPY</a> Python packages.</p> <p>Also take a look at <a href="https://peterwittek.com/sdp-in-python.html" rel="nofollow noreferrer">Peter Wittek</a>'s blog post for a well-curated list of SDP solver packages for different versions of Python.</p>
590,891
<p>I'm going back to school and haven't taken a math class in years, so I'm brushing up on the basics.</p> <p>The text states $\frac{g(t + \Delta t)^2}{2} = \frac{gt^2}{2} + \frac{g}{2}\left(2t\Delta t + \Delta t^2\right)$.</p> <p>(Sorry for the lack of formatting. I'll probably get slammed, but I couldn't figure it out on my phone...)</p> <p>My question is: how did they arrive at that conclusion? I've spent the last four hours trying to work it out to no avail. I'm very discouraged at this point, so any clarification would be very helpful.</p>
Way to infinity
53,489
<p>Use the following formula and you will get your answer:</p> <p>$(a+b)^2 = a^2+2ab+b^2$</p>
686,361
<p>Given if we know $P(S)$ and $P(C|S)$ and $P(D|S)$, how do you compute $E[C|D=d]$? One way that I thought of is to find the conditional probability of $P(C|D)$ by computing the joint probability $P(C,D,S)$ and marginalizing it over $S$. But, $P(D|S)$ is a binomial distribution with parameter $q$ and $S$. Finding the full joint probability distribution will be too complicated. Does anyone know an easier way to find $E[C|D=d]$? Thanks. </p> <p>Note: I forgot to mention that C and D are independent given S</p>
Bombyx mori
32,240
<p>You have $$ P(C\cap S)=P(S)P(C|S), P(D\cap S)=P(S)P(D|S) $$ This information is not enough to let you compute $$ P(C\cap D), P(C|D) $$ because you lost information at $$ P(C\cap D\cap S^{c}) $$ and in principle you do not know what this is. </p>
64,646
<p>In $\triangle{ABC}$, given $\angle{A}=80^\circ$, $\angle{B}=\angle{C}=50^\circ$, let $D$ be a point in $\triangle{ABC}$ such that $\angle{DBC}=20^\circ,\angle{DCB}=40^\circ$. How can one find $\angle{DAC}$?</p> <p>Thanks.</p>
Job Bouwman
274,003
<p>The regular 18-gon has the nice property that, from each vertex, the other vertices are seen in directions separated by $10^\circ$. Embed $\triangle{ABC}$ in this polygon as shown below. </p> <p><a href="https://i.stack.imgur.com/KS5d9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KS5d9.jpg" alt="Solution"></a></p> <p>Now use this polygon to construct the two equilateral triangles as illustrated, and consider the intersection of their drawn bisectors. This intersection is the center of a circle through A and C, and it is easy to verify that this center is $D$. Obviously $\triangle{ADC}$ is isosceles, so $\angle{DAC} = \angle{DCA} = 10^\circ$</p>
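<p>A coordinate computation (my own sanity check, independent of the polygon construction) confirms the result: place $B=(0,0)$, $C=(1,0)$, locate $A$ from the base angles of $50^\circ$, locate $D$ from $\angle DBC = 20^\circ$ and $\angle DCB = 40^\circ$, and measure $\angle DAC$.</p>

```python
import math

def angle_deg(P, Q, R):
    # the angle QPR at vertex P, in degrees
    v1 = (Q[0] - P[0], Q[1] - P[1])
    v2 = (R[0] - P[0], R[1] - P[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

B, C = (0.0, 0.0), (1.0, 0.0)
A = (0.5, 0.5 * math.tan(math.radians(50)))        # apex above the midpoint of BC
t20, t40 = math.tan(math.radians(20)), math.tan(math.radians(40))
xD = t40 / (t20 + t40)   # ray from B at 20 deg meets ray from C at 180 - 40 deg
D = (xD, xD * t20)

print(angle_deg(A, D, C))   # ≈ 10.0
```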
631,388
<p>If $\lim_{n\rightarrow \infty }{a_n}=\alpha (\neq 0) $ and $\lim_{n\rightarrow \infty }{b_n}=\beta$, then $\lim_{n\rightarrow \infty }{a_n}^{b_n}=\alpha ^\beta $?</p> <p>I unconsciously used this but I realized I'd never seen this theorem before. Is it true?</p>
Mario De León
14,759
<p>Write $a_n^{b_n} = e^{b_n \log a_n}$ (which makes sense for $\alpha&gt;0$, since then $a_n&gt;0$ eventually). The product of convergent sequences converges, so $b_n \log a_n \to \beta \log \alpha$, and by continuity of the exponential $$e^{b_n \cdot \log a_n} \longrightarrow e^{\beta \log \alpha} = \alpha^\beta$$ when $n \rightarrow \infty$.</p>
2,458,184
<p>Let $a, b$ be non-negative integers and $p\ge3$ be a prime number. If $a^2+b^2$ and $a+b$ are divisible by $p$ does it mean $a$ and $b$ are always divisible by $p$?</p>
Joffan
206,402
<p>Since $a+b\equiv 0 \bmod p$, we have $a\equiv -b $ and thus $ a^2\equiv b^2 \bmod p$. </p> <p>Then $a^2+b^2\equiv 2a^2 \equiv 0 \bmod p$ and since $p&gt;2$ we know $p\mid a^2$ and thus $p\mid a$ and $p\mid b$.</p>
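<p>An exhaustive check over residues (a small sketch of mine; residues suffice because divisibility depends only on $a, b \bmod p$) confirms the argument for the first few odd primes. Note the claim genuinely fails at $p=2$, where $a=b=1$ is a counterexample, which is why $p&gt;2$ is needed.</p>

```python
for p in (3, 5, 7, 11, 13):
    for a in range(p):
        for b in range(p):
            if (a + b) % p == 0 and (a * a + b * b) % p == 0:
                assert a % p == 0 and b % p == 0
# p = 2 fails: 2 divides 1+1 and 1^2+1^2, yet 2 does not divide 1
assert (1 + 1) % 2 == 0 and (1 ** 2 + 1 ** 2) % 2 == 0 and 1 % 2 != 0
print("verified for p in {3, 5, 7, 11, 13}")
```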
2,227,047
<p>For any $x=x_1, \dotsc, x_n$, $y=y_1, \dotsc, y_n$ in $\mathbf E^n$, define $\|x-y\|=\max_{1 \le k \le n}|x_k-y_k|$. Let $f\colon\mathbf E^n \to \mathbf E^n$ be given by $f(x)=y$, where $y_k= \sum_{i=1}^n a_{ki} x_i + b_k$ for $k =1,2, \dotsc,n$. Under what conditions is $f$ a contraction mapping?</p> <p>Any hint or solution for this question? I am a beginner in this course and cannot understand it clearly.</p>
JanG
266,041
<p>In this answer I use a variable substitution which I cannot find in the already published answers.</p> <p>Say that $\alpha \neq 0$ and $\alpha = \varrho e^{i\theta}, \, -\pi &lt;\theta&lt; \pi$. Then $|1+\alpha x^2| = \sqrt{\varrho^2x^4 +2\varrho\cos \theta x^2+1}$ and \begin{gather*} I = \int_{-\infty}^{\infty}\dfrac{dx}{|1+\alpha x^2|} = 2\int_{0}^{\infty}\dfrac{dx}{\sqrt{\varrho^2x^4 +2\varrho\cos \theta x^2+1}} =\dfrac{2}{\sqrt{\varrho}} \int_{0}^{\infty}\dfrac{dx}{\sqrt{x^4 +2\cos \theta x^2+1}} = \\[2ex] \dfrac{4}{\sqrt{\varrho}} \int_{0}^{1}\dfrac{dx}{\sqrt{x^4 +2\cos \theta x^2+1}} = \dfrac{4}{\sqrt{\varrho}} \int_{0}^{1}\dfrac{dx}{\sqrt{x^4+2x^2+1-4x^2\sin^2\frac{\theta}{2}}} = \\[2ex] \dfrac{4}{\sqrt{\varrho}} \int_{0}^{1}\dfrac{dx}{(x^2+1)\sqrt{1-\frac{4x^2}{(x^2+1)^2}\sin^2\frac{\theta}{2}}}.\tag{1} \end{gather*} For $0&lt;x&lt;1$ we put $y = \dfrac{2x}{x^2+1}, 0&lt;y&lt;1$. Then \begin{equation*} y(x^2+1)=2x\tag{2} \end{equation*} and \begin{equation*} x= \dfrac{1-\sqrt{1-y^2}}{y}.\tag {3} \end{equation*} From (2) we get \begin{equation*} (x^2+1)dy + 2xydx=2dx \Leftrightarrow dx= \dfrac{x^2+1}{2(1-xy)}dy = \dfrac{x^2+1}{2\sqrt{1-y^2}}dy \end{equation*} where we have used (3) in the last step. Finally we use that in (1). Thus \begin{equation*} I = \dfrac{2}{\sqrt{\varrho}} \int_{0}^{1}\dfrac{dy}{\sqrt{1-y^2}\sqrt{1-y^2\sin^2\frac{\theta}{2}}} = \dfrac{2}{\sqrt{|\alpha|}}K\left(\sin^2\frac{\theta}{2}\right). \end{equation*}</p>
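<p>The closed form is easy to test numerically (my own check, using only the standard library): the substitution $x=\tan u$ turns the integral into a smooth one over $(-\pi/2,\pi/2)$, and $K$ is evaluated through the arithmetic-geometric mean, in the parameter convention $K(m)$ with $m=k^2$, matching the answer's $K\left(\sin^2\frac{\theta}{2}\right)$.</p>

```python
import cmath
import math

def ellipk(m):
    # complete elliptic integral K(m), parameter m = k^2, via the AGM:
    # K(m) = pi / (2 * agm(1, sqrt(1 - m)))
    a, g = 1.0, math.sqrt(1.0 - m)
    for _ in range(60):
        a, g = (a + g) / 2.0, math.sqrt(a * g)
    return math.pi / (2.0 * a)

def integral(alpha, n=4000):
    # x = tan(u) turns dx / |1 + alpha x^2| into du / |cos^2 u + alpha sin^2 u|,
    # integrated here with the composite Simpson rule
    f = lambda u: 1.0 / abs(math.cos(u) ** 2 + alpha * math.sin(u) ** 2)
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

alpha = 1j                                  # rho = 1, theta = pi/2
rho, theta = abs(alpha), cmath.phase(alpha)
closed = 2.0 / math.sqrt(rho) * ellipk(math.sin(theta / 2.0) ** 2)
print(integral(alpha), closed)              # both ≈ 3.70815
```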
157,731
<p>I have a coupled PDE. How can I convert the equation from $(x,t)$ to $(p,t)$, the Fourier space, in Mathematica? </p> <p>\begin{equation} \frac{\partial c}{\partial t} +\frac{\partial d}{\partial t} = -4\gamma(\frac{\partial a}{\partial x} +x (\frac{\partial c}{\partial x} +\frac{\partial d}{\partial x}) - \frac{\partial^2 c}{\partial x^2} - \frac{\partial^2 d}{\partial x^2}) \end{equation}</p> <p>$\gamma$ is a constant. How can I write the corresponding equation in Fourier space?</p> <pre><code>Derivative[0, 1][c][x, t] + Derivative[0, 1][d][x, t] == -4 \[Gamma] (Derivative[1, 0][a][x, t] + x (Derivative[1, 0][c][x, t] + Derivative[1, 0][d][x, t]) - Derivative[2, 0][c][x, t] - Derivative[2, 0][d][x, t]) </code></pre>
LLlAMnYP
26,956
<p><strong>EDIT 17.10.2017:</strong> I've completely redone this to eliminate any manual labor.</p> <p>Converting to Fourier space is quite simple. We must realize that any function</p> <p>$$ f(x,t) = \int f(p, t) e^{i p x} dp $$</p> <p>That is</p> <pre><code>f[x, t] = InverseFourierTransform[f[p, t], p ,x] </code></pre> <p>So the replacement rule I use is</p> <pre><code>rule = {Derivative[i_, j_][f_][x, t] :&gt; D[InverseFourierTransform[f[p, t], p, x], {x, i}, {t, j}], f_[x, t] :&gt; InverseFourierTransform[f[p, t], p, x]} </code></pre> <p>Let's define</p> <pre><code>expr = Derivative[0, 1][c][x, t] + Derivative[0, 1][d][x, t] == -4 \[Gamma] (Derivative[1, 0][a][x, t] + x (Derivative[1, 0][c][x, t] + Derivative[1, 0][d][x, t]) - Derivative[2, 0][c][x, t] - Derivative[2, 0][d][x, t]) // FullSimplify </code></pre> <blockquote> <pre><code>Derivative[0, 1][c][x, t] + Derivative[0, 1][d][x, t] + 4*\[Gamma]*( Derivative[1, 0][a][x, t] + x*(Derivative[1, 0][c][x, t] + Derivative[1, 0][d][x, t]) - Derivative[2, 0][c][x, t] - Derivative[2, 0][d][x, t] ) == 0 </code></pre> </blockquote> <p>The rhs is zero anyhow, so we do</p> <pre><code>FourierTransform[First@expr /. rule, x, p] // FullSimplify == 0 </code></pre> <blockquote> <p>$$ -4 i \gamma p a(p,t)+4 \gamma \left(\left(p^2-1\right) d(p,t)-p \left(c^{(1,0)}(p,t)+d^{(1,0)}(p,t)\right)\right)+c^{(0,1)}(p,t)+4 \gamma \left(p^2-1\right) c(p,t)+d^{(0,1)}(p,t)=0 $$</p> </blockquote> <p>I hope that now nothing is missed.</p> <p><strong>UPDATE 19.10.17</strong></p> <p>My rule can be simplified to</p> <pre><code>rule2 = f : Alternatives[a, b, c, d] :&gt; (InverseFourierTransform[f[p, #2], p, #] &amp;) </code></pre> <p>Then</p> <pre><code>FourierTransform[(expr // Expand // First) /. rule2, x, p] == 0 </code></pre> <p>also gives the desired result.</p>
3,897,067
<p>Consider a binary operation <span class="math-container">$*$</span> acting from a set <span class="math-container">$X$</span> to itself. It's useful and standard to work with operations which are associative, such that <span class="math-container">$(a*b)*c = a*(b*c)$</span>. What about operations which are not associative?</p> <p>Is there any way to characterize all different possible types of such binary operations <span class="math-container">$*$</span> which are not associative? Eg. Can we say that if <span class="math-container">$*$</span> is not associative, then it must instead satisfy one of set of other possible properties, depending on any other additional operations that we have on our set <span class="math-container">$X$</span>?</p> <p>If we also add some additional structure to our set <span class="math-container">$X$</span> so that we can add elements together and multiply by scalars, it's standard to quantify the amount that two elements of <span class="math-container">$X$</span> commute with each other under <span class="math-container">$*$</span> by calculating the commutator <span class="math-container">$[a,b] = a*b - b*a$</span>. Is it ever useful to consider an 'associative commutator' <span class="math-container">$[abc] = (a*b)*c - a*(b*c)$</span>, for a given non-associative <span class="math-container">$*$</span>?</p> <p>Finally, I know from Lie algebras that if <span class="math-container">$*$</span> anticommutes then it can be natural to consider a Jacobi identity</p> <p><span class="math-container">$(a*b)*c = a*(b*c) - b*(a*c)$</span></p> <p>Are there other natural extensions of associativity in different settings? Why do Lie algebras use this Jacobi identity and not for example</p> <p><span class="math-container">$(a*b)*c = a*(b*c) + k b*(a*c)$</span></p> <p>Where k is a scalar?</p>
runway44
681,431
<blockquote> <p>Is there any way to characterize all different possible types of operation which are not associative?</p> </blockquote> <p>This is too broad and subjective to answer I think. What exactly is a &quot;type&quot; of operation? I assume you're already talking about binary operations, so presumably a &quot;type&quot; of operation is one which satisfies certain identities, like the associative identity. Certain specific examples come to mind:</p> <ul> <li>Jacobi identity for Lie algebras,</li> <li>Jordan identity for Jordan algebras,</li> <li>Moufang identities for loops,</li> <li>Self-distributive laws for racks and quandles,</li> </ul> <p>and certainly others (I am not an expert in nonassociative algebra). Many of the identities above are not three-variable identities, but still. Generally, interesting algebras and their identities are not chosen randomly but rather follow from certain canonical examples whose properties are generalized. The algebras are meant to represent certain structures, and the identities ensure that. For instance, Lie algebras linearize Lie groups, and similarly Jordan algebras linearize projective spaces, Moufang identities generalize octonions' alternativity, racks and quandles represent how groups act on themselves by conjugation, etc.</p> <p>Ultimately there is a &quot;type&quot; of operation for every possible set of &quot;words&quot; you can pick from the free magma (or if you allow addition, free nonassociative algebra) on so many generators. (There is going to be redundancy in this - different sets of words can yield the same class of algebras.)</p> <blockquote> <p>Can we say that if ∗ is not associative, then it must instead satisfy one of set of other possible properties, depending on any other additional operations that we have on our set <span class="math-container">$X$</span>?</p> </blockquote> <p>Probably not. 
For instance the free nonassociative algebra on some generating set strikes me as a candidate for not having any &quot;properties&quot; (i.e. identities).</p> <blockquote> <p>Is it ever useful to consider an 'associative commutator' for a given non-associative ∗?</p> </blockquote> <p>Yes. The <strong>associator</strong> is useful for instance in (efficiently) proving the octonions are an alternative algebra (which is like halfway to being associative), which is in turn useful for many things like simplifying octonion expressions and classifying subalgebras and reasoning about automorphisms of <span class="math-container">$\mathbb{O}$</span>. The octonion associator also gives rise to the <a href="https://math.stackexchange.com/questions/1852658/octonionic-formula-for-the-ternary-eight-dimensional-cross-product">exceptional ternary 8D cross product</a>.</p> <p>There's probably a lot more you can do with it in general nonassociative algebras but I wouldn't know.</p> <blockquote> <p>Why do Lie algebras use this Jacobi identity</p> </blockquote> <p>Consider where Lie algebras come from. Start with a Lie group <span class="math-container">$G$</span>. The tangent space <span class="math-container">$\mathfrak{g}$</span> tells you all the directions that one-parameter subgroups can point in. The addition operation on <span class="math-container">$\mathfrak{g}$</span> corresponds to the group operation on <span class="math-container">$G$</span>. Indeed, the exponential <span class="math-container">$\exp:\mathfrak{g}\to G$</span> is approximately linear in a neighborhood of <span class="math-container">$0$</span> with quadratic error term. As <span class="math-container">$G$</span> acts on itself by conjugation (and there are many sources listing example after example to show conjugation in a group is very important), so too it acts on <span class="math-container">$\mathfrak{g}$</span> by conjugation. 
Define <span class="math-container">$\mathrm{Ad}_A(Y)=AYA^{-1}$</span> for <span class="math-container">$A\in G,Y\in\mathfrak{g}$</span>. If we differentiate this at <span class="math-container">$A=I$</span> with tangent vector <span class="math-container">$X$</span> we get <span class="math-container">$\mathrm{ad}_X(Y)=XY-YX=[X,Y]$</span>, the &quot;commutator bracket.&quot; Note the adjoint action preserves this operation, and if we differentiate <span class="math-container">$\mathrm{Ad}_A[Y,Z]=[\mathrm{Ad}_AY,\mathrm{Ad}_AZ]$</span> at <span class="math-container">$A=I$</span> again with the product rule we get the identity <span class="math-container">$\mathrm{ad}_X[Y,Z]=[\mathrm{ad}_XY,Z]+[Y,\mathrm{ad}_XZ]$</span>, which says <span class="math-container">$\mathrm{ad}_X$</span> is a &quot;derivation&quot; (i.e. satisfies the &quot;product rule&quot; like a derivative, but with the commutator bracket instead of multiplication). This identity may be rearranged to the more cyclically symmetric form you know as the Jacobi identity.</p> <p>All of the other identities I listed above have similar stories of where they come from. The Jordan identity comes from an algebraic investigation of spaces of Hermitian matrices (which are the span of projection operators, which correspond to points in projective spaces). Apparently the Jordan identity also has an interpretation in terms of the inversion symmetry of a Riemannian symmetric space, but I don't know how that story goes. The Moufang identity comes from investigating real normed division algebras, which leads to the octonions, which leads to the alternative identities, and then the simplest four-term identities one can check are where one term is repeated. The self-distributive law for racks and quandles comes from the fact conjugation is an automorphism in a group.</p>
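The derivation property and its rearranged cyclic form are easy to spot-check on concrete matrices, since the commutator bracket satisfies them exactly. A quick sketch in Python/NumPy (the random 3×3 matrices are my own test setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# ad_X is a derivation: [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]]
lhs = bracket(X, bracket(Y, Z))
rhs = bracket(bracket(X, Y), Z) + bracket(Y, bracket(X, Z))
print(np.allclose(lhs, rhs))  # True

# Rearranged, the cyclically symmetric Jacobi identity:
jacobi = (bracket(X, bracket(Y, Z))
          + bracket(Y, bracket(Z, X))
          + bracket(Z, bracket(X, Y)))
print(np.allclose(jacobi, 0))  # True
```

Both hold identically for any square matrices, which is the algebraic content of the differentiation argument above.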
3,034,421
<p>Let's say I have 2 multivariate functions:</p> <pre><code>f(x,y) = x - y
g(x,y) = x + y
</code></pre> <p>How do I get the composition of these 2 functions <span class="math-container">$g(f(x,y))$</span>? </p>
Shubham Johri
551,962
<p>Use the converse of the distributive property:</p> <p><span class="math-container">$((x=0)\lor(y=0)\lor(3x+4y=-2))\land((x=0)\lor(y=0)\lor(x+3y=-1))\\\equiv(x=0)\lor(y=0)\lor[(3x+4y=-2)\land(x+3y=-1)]$</span></p> <p><span class="math-container">$3x+4y+2=0=x+3y+1$</span> is just a pair of straight lines (linear equations) intersecting at <span class="math-container">$(-2/5,-1/5)$</span>. Therefore, you have <span class="math-container">$(x=0)\lor(y=0)\lor(x=-2/5\land y=-1/5)$</span></p>
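The intersection point quoted above is easy to confirm symbolically; a small cross-check in Python/SymPy (my own, not part of the original answer):

```python
from sympy import Eq, Rational, solve, symbols

x, y = symbols('x y')
# The two lines 3x + 4y = -2 and x + 3y = -1 from the answer:
sol = solve([Eq(3*x + 4*y, -2), Eq(x + 3*y, -1)], [x, y])
print(sol)  # {x: -2/5, y: -1/5}
```

This matches the stated intersection $(-2/5,-1/5)$.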
3,120,729
<p>I came across this exercise:</p> <blockquote> <p>Prove that <span class="math-container">$$\tan x+2\tan2x+4\tan4x+8\cot8x=\cot x$$</span></p> </blockquote> <p>Proving this seems tedious but doable, I think, by exploiting double angle identities several times, and presumably several terms on the left hand side would vanish or otherwise reduce to <span class="math-container">$\cot x$</span>.</p> <p>I started to wonder if the pattern holds, and several plots for the first few powers of <span class="math-container">$2$</span> seem to suggest so. I thought perhaps it would be easier to prove the more general statement:</p> <blockquote> <p>For <span class="math-container">$n\in\{0,1,2,3,\ldots\}$</span>, prove that <span class="math-container">$$2^{n+1}\cot(2^{n+1}x)+\sum_{k=0}^n2^k\tan(2^kx)=\cot x$$</span></p> </blockquote> <p>Presented this way, a proof by induction seems to be the smart way to do it.</p> <p><strong>Base case:</strong> Trivial, we have</p> <p><span class="math-container">$$\tan x+2\cot2x=\frac{\sin x}{\cos x}+\frac{2\cos2x}{\sin2x}=\frac{\cos^2x}{\sin x\cos x}=\cot x$$</span></p> <p><strong>Induction hypothesis:</strong> Assume that</p> <p><span class="math-container">$$2^{N+1}\cot(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)=\cot x$$</span></p> <p><strong>Inductive step:</strong> For <span class="math-container">$n=N+1$</span>, we have</p> <p><span class="math-container">$$\begin{align*} 2^{N+2}\cot(2^{N+2}x)+\sum_{k=0}^{N+1}2^k\tan(2^kx)&amp;=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)\\[1ex] &amp;=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)+\cot x \end{align*}$$</span></p> <p>To complete the proof, we need to show</p> <p><span class="math-container">$$2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)=0$$</span></p> <p>I noticed that if I ignore the common factor of <span class="math-container">$2^{N+1}$</span> and make the substitution <span class="math-container">$y=2^{N+1}x$</span>, this 
reduces to the base case,</p> <p><span class="math-container">$$2^{N+1}\left(2\cot2y+\tan y-\cot y\right)=0$$</span></p> <p>and this appears to complete the proof, and the original statement is true.</p> <p>First question: <strong>Is the substitution a valid step in proving the identity?</strong></p> <p>Second question: <strong>Is there a nifty way to prove the special case for <span class="math-container">$n=2$</span>?</strong></p>
DXT
372,201
<p>Let <span class="math-container">$\displaystyle S =\sum^{2}_{k=0}2^k\tan(2^k x)+8\cot (8x).$</span></p> <p>Then <span class="math-container">$\displaystyle \int Sdx=-\sum^{2}_{k=0}\ln\big(\cos(2^k x)\big)+\ln(\sin 8x)$</span></p> <p>Using <span class="math-container">$\displaystyle \prod^{n-1}_{r=0}\cos(2^r x)=\frac{1}{2^n}\frac{\sin(2^nx)}{\sin x}.$</span></p> <p><span class="math-container">$\displaystyle \int Sdx=-\ln\bigg(\frac{1}{2^3}\frac{\sin 8x}{\sin x}\bigg)+\ln(\sin 8x)$</span></p> <p><span class="math-container">$$\displaystyle \frac{d}{dx}\int Sdx=\frac{d}{dx}\bigg[\ln(\sin x)+\ln(8)\bigg]$$</span></p> <p><span class="math-container">$$S=\sum^{2}_{k=0}2^k\tan(2^k x)+8\cot (8x)=\cot x$$</span></p>
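The general identity (and hence the $n=2$ case above) can also be checked numerically; a small Python sketch of my own, keeping $x$ small enough that every $\tan$ and $\cot$ argument stays away from a pole:

```python
import math
import random

def lhs(x, n):
    """2^{n+1} cot(2^{n+1} x) + sum_{k=0}^{n} 2^k tan(2^k x)."""
    s = 2**(n + 1) / math.tan(2**(n + 1) * x)
    s += sum(2**k * math.tan(2**k * x) for k in range(n + 1))
    return s

random.seed(1)
for n in range(5):
    x = random.uniform(0.01, 0.045)  # all arguments stay inside (0, pi/2)
    assert math.isclose(lhs(x, n), 1 / math.tan(x), rel_tol=1e-8)
print("identity verified numerically for n = 0..4")
```

This is only a sanity check, of course, not a substitute for the induction.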
1,368,073
<p>Halmos, in Naive Set Theory, on page 19, provides a definition of intersection restricted to subsets of $E$, where $C$ is the collection of the sets intersected. The point is to allow the case where $C$ is $\emptyset$, which with this definition of intersection gives $E$ as the result. </p> <blockquote> <p>$\{x \in E: x \in X$ for every $X$ in $C\}$</p> </blockquote> <p>My problem lies in interpreting the sentence. I wanted to read it as:</p> <blockquote> <p>"Elements x in E, given that: Element x is in X for every X in C"</p> </blockquote> <p>My brain, tuned by a number of popular programming languages, wants to evaluate the terms in the condition reading from left to right. And clearly, no element $x$ will be in any $X$ if $C$ is $\emptyset$, and if the condition is evaluated to false, $E$ will not be the result of the intersection.</p> <p>After struggling for a while, I figured that I had to read the sentence as:</p> <blockquote> <p>"Elements x in E, given that: For all X that are in C, x is in all of them"</p> </blockquote> <p>The <em>for</em> part of the condition has to be the pivotal one. It has to be the first term you evaluate. In analogy with common programming languages.</p> <p>Questions:</p> <ol> <li>Is my new reading and conclusion correct?</li> <li>How does one learn the order of evaluation in set theoretic expressions?</li> </ol> <p>Edit: Corrected after discussion with coldnumber.</p> <p>Edit 2: Upon rereading the previous chapter, I've found that Halmos actually explains his "for every". The condition "$x \in X$ for every $X$ in $C$" actually means "for all $X$ (if $X \in C$, then $x \in X$)" -- which seems to give an unambiguous order of evaluation.</p>
David Holden
79,543
<p>maybe think of it this way (remembering that your sets $X$ belong to some universe which is not to be identified with the collection $C$). to avoid confusion i use the bound variable $y$ for the universally quantified statement (in place of your $X$): $$ \{x: (x \in E) \land \forall y (y \in C \Rightarrow x \in y)\} $$</p>
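The "order of evaluation" worry from the question maps directly onto how universally quantified conditions behave in most programming languages: a "for all" over an empty collection is vacuously true. A small illustration in Python (my own, for intuition only):

```python
E = {1, 2, 3, 4}

def restricted_intersection(C, E):
    """Halmos's {x in E : x in X for every X in C}."""
    return {x for x in E if all(x in X for X in C)}

print(restricted_intersection([{1, 2}, {2, 3}], E))  # {2}
# For empty C, all(...) is vacuously True, so the result is E itself:
print(restricted_intersection([], E))                # {1, 2, 3, 4}
```

This matches the "for all X (if X in C, then x in X)" reading: when C is empty the implication has no counterexamples, so every x in E qualifies.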
95,242
<p>Is it possible to use <code>ProbabilityScalePlot</code> to show different plot markers in a single dataset, such as in going from <code>plot2</code> to <code>plot3</code> below?</p> <pre><code>nPoints = 10; x = RandomVariate[NormalDistribution[1, 1], nPoints]; y = RandomVariate[LogNormalDistribution[1, 1], nPoints]; z = RandomVariate[WeibullDistribution[1, 1], nPoints]; plot1 = SmoothHistogram[{x, y, z}, Filling -&gt; Axis] plot2 = ProbabilityScalePlot[{x, y, z}] plot3 = ProbabilityScalePlot[Flatten[{x, y, z}]] </code></pre>
Dr. belisarius
193
<pre><code>s = GatherBy[First@Cases[FullForm@plot3, Point[h___] :&gt; h, Infinity], Function[{u}, MemberQ[#, u[[1]]] &amp; /@ {x, y, z}]] plot3 /. Point[__] :&gt; MapThread[{#1, Point@#2} &amp;, {{Red, Blue, Green}, s}] </code></pre> <p><img src="https://i.stack.imgur.com/uQwnR.png" alt="Mathematica graphics"></p>
186,553
<p><strong>Problem:</strong> Test the convergence of $\sum_{n=0}^{\infty} \frac{n^{k+1}}{n^k + k}$, where $k$ is a positive constant.</p> <p>I'm stumped. I've tried to apply several different convergence tests, but still can't figure this one out.</p>
Roman Chokler
38,328
<p>The quickest way is just to recall the Pythagorean triples $(3,4,5)$ and $(5,12,13)$, due to 5 being common to both triples and 13 being the diagonal length, thus establishing that 3, 4, and 12 satisfy $3^2+4^2+12^2=13^2$ as desired for the correct diagonal length. Notice that the product of these three numbers in the triples is 144 and they give the correct surface area.</p>
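The arithmetic behind chaining the two triples is a one-liner to verify in Python:

```python
assert 3**2 + 4**2 == 5**2            # (3, 4, 5): face diagonal 5
assert 5**2 + 12**2 == 13**2          # (5, 12, 13): 5 now a leg
assert 3**2 + 4**2 + 12**2 == 13**2   # chained: space diagonal 13
print(3 * 4 * 12)  # 144
```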
634,890
<blockquote> <p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p> <ol> <li>The discussion here has turned too chatty and not suitable for the MSE framework. </li> <li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li> <li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li> </ol> </blockquote> <p>Eminent Kazakh mathematician Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p> <p>Is it correct?</p> <p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p> <p>A link to the paper (in Russian): <a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p> <p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p> <p>please confine answers to any actual mathematical error found! thanks</p>
James Robinson
123,656
<p>A full translation of the main theorem (Theorem 6.1) and conditions (Y.1)-(Y.4), due to Sergei Chernyshenko, can be found at</p> <p><a href="http://go.warwick.ac.uk/jcrobinson/lf/otelbaev">http://go.warwick.ac.uk/jcrobinson/lf/otelbaev</a></p> <p>There is also a brief discussion of the method of proof used by Otelbaev.</p> <p>[Responses below are to an earlier version of this post in which I thought I had found an error in the Galerkin argument used in the final part of the proof of Theorem 6.1.]</p>
4,588,408
<p>This is a step in a guided proof that the cyclotomic polynomial <span class="math-container">$\Phi_n$</span> is the minimal polynomial of <span class="math-container">$u$</span>, a primitive <span class="math-container">$n$</span>-th root of unity. I already know that <span class="math-container">$\Phi_n(u)=0$</span>, so the minimal polynomial <span class="math-container">$P$</span> of <span class="math-container">$u$</span> divides <span class="math-container">$\Phi_n$</span>; I need to show the converse. Any hints?</p>
Aphelli
556,825
<p>Here’s the elementary argument.</p> <p>If <span class="math-container">$\Phi_n$</span> isn’t irreducible, it is a product of two monic polynomials <span class="math-container">$A$</span> and <span class="math-container">$B$</span> with integer coefficients and positive degree. Write <span class="math-container">$X^n-1=ABC$</span>, then <span class="math-container">$X(X^n-1)’-n(X^n-1)=n$</span>, so that <span class="math-container">$n \in (A,B)$</span>.</p> <p>We can assume that <span class="math-container">$A$</span> is irreducible and that it has a root <span class="math-container">$\omega$</span> and a prime <span class="math-container">$p$</span> not dividing <span class="math-container">$n$</span> such that <span class="math-container">$A(\omega^p) \neq 0$</span>.</p> <p>Then <span class="math-container">$\omega$</span> is a root of <span class="math-container">$B(X^p)$</span>, so that <span class="math-container">$A|B(X^p)$</span>. Write <span class="math-container">$a,b$</span> the reductions mod <span class="math-container">$p$</span> of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. The above divisibility means that, because <span class="math-container">$p|B(X^p)-B(X)^p$</span>, <span class="math-container">$a|b^p$</span>. But <span class="math-container">$n \in (a,b)$</span>, so that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are coprime. So <span class="math-container">$a=1$</span> and we get a contradiction.</p>
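The two facts at play — that $\Phi_n$ divides $X^n-1$ and is irreducible over $\mathbb{Q}$ — can be spot-checked with SymPy (my own cross-check, not part of the proof):

```python
from sympy import Poly, Symbol, cyclotomic_poly, div

x = Symbol('x')
for n in [3, 5, 8, 12]:
    phi = cyclotomic_poly(n, x)
    # Phi_n divides x^n - 1 ...
    q, r = div(x**n - 1, phi, x)
    assert r == 0
    # ... and is irreducible over Q, hence it is the minimal polynomial
    # of each of its roots (the primitive n-th roots of unity).
    assert Poly(phi, x, domain='QQ').is_irreducible
print("Phi_n divides x^n - 1 and is irreducible for n = 3, 5, 8, 12")
```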
159,563
<p>I have some <a href="https://pastebin.com/MGEzkeC3" rel="nofollow noreferrer">data</a> and want to fit it to Planck's law for black body radiation. The problem is that Mathematica does not give me the correct coefficients.</p> <p>When I evaluate</p> <pre><code>dati = Import[&quot;https://pastebin.com/raw/MGEzkeC3&quot;, &quot;Table&quot;];
h = 6.62607004*10^(-34);
c = 299792458;
kb = 1.38064852*10^(-23);
Planks[l_, T_, A_] := (1/A)*(((2*h*c^2)/l^5)*(1/(Exp[((h*c)/(l*kb*T))] - 1)));
fittesana2 = FindFit[dati, Planks[l, T, A], {T, A } , l];
Show[ Plot[fittesana2[l], {l, 400, 900}, PlotStyle -&gt; Red, PlotRange -&gt; All], ListPlot[dati], Frame -&gt; True]
Pfit = NonlinearModelFit[dati, Planks[l, T, A], {{A, 1*10^8}, {T, 1700}}, l];
Show[ Plot[Pfit[l], {l, 400, 900}, PlotStyle -&gt; Red, PlotRange -&gt; All], ListPlot[dati], Frame -&gt; True]
Normal[Pfit]
Pfit[&quot;ANOVATable&quot;]
Pfit[&quot;ParameterTable&quot;]
Pfit[&quot;FitCurvatureTable&quot;]
</code></pre> <p>I get</p> <p><a href="https://i.stack.imgur.com/sCkM5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCkM5.png" alt="enter image description here" /></a></p> <p>Sorry for forgetting to write down the constants. My data covers only part of the black-body radiation curve. The unit of the x-axis is nanometers (nm), and of the y-axis uW/cm^2/nm.<br /> <strong>Update:</strong> As suggested by @JimB, I changed my fitting function. I tried the function @JimB suggested, but a different one was easier for me, because I need to find the temperature (Te). 
Here is the code:</p> <pre><code>h = 6.62607004*10^(-34);
c = 299792458;
kb = 1.38064852*10^(-23);
b = 2*6.62607004*10^(-34)*299792458^2*Pi;
d = (6.62607004*10^(-34)*299792458)/(1.38064852*10^(-23));
dati = ImportString[Import[&quot;H2liesma.txt&quot;], &quot;Table&quot;];
Plankulis[la_, Te_, G_, b_, d_] := (1/G)*(b/(la^5*(Exp[d/(la*Te)] - 1)));
Pfit3 = FindFit[dati, Plankulis[la, Te, G, b, d], {G, 1*10^(9)}, { Te, 1500} , la];
Show[Plot[Pfit3[la], {la, 400, 900}, PlotStyle -&gt; Red, PlotRange -&gt; All], ListPlot[dati], Frame -&gt; True]
</code></pre> <p>I get:</p> <pre><code> FindFit::nonopt: Options expected (instead of la) beyond position 4 in FindFit[{{400.035,-0.00759963},{400.409,0.0136996},{400.783,-0.000465753},{401.157,0.00636862},{401.531,0.0205706},{401.904,0.0257837},{402.278,0.0298773},{402.652,0.00226108},{403.025,0.0188769},{403.399,-0.0230916},{403.772,-0.00365794},{404.146,0.00856837},&lt;&lt;28&gt;&gt;,{414.961,-0.00272152},{415.333,-0.00222349},{415.706,-0.00943255},{416.078,-0.00921836},{416.45,0.00204648},{416.823,-0.0261218},{417.195,-0.00775242},{417.567,0.0140285},{417.939,-0.00992257},{418.311,-0.00711655},&lt;&lt;1408&gt;&gt;},&lt;&lt;24&gt;&gt;/&lt;&lt;1&gt;&gt;,{&lt;&lt;1&gt;&gt;},&lt;&lt;1&gt;&gt;,la]. An option must be a rule or a list of rules. &gt;&gt; </code></pre> <p>When I write the analytical solution for my function:</p> <pre><code>b = 2*6.62607004*10^(-34)*299792458^2*Pi;
d = (6.62607004*10^(-34)*299792458)/(1.38064852*10^(-23));
Plankulis1[la_] := (1/G)*(b/(la^5*(Exp[d/(la*Te)] - 1)));
Te = 1500;
G = 1*10^(9);
Plankulis[G, b, la, d, Te]
Plot[Plankulis1[la], {la, 400*10^(-9), 700*10^(-8)}, {PlotRange -&gt; Full}, Frame -&gt; True]
</code></pre> <p>I get: <a href="https://i.stack.imgur.com/QzrRD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QzrRD.png" alt="enter image description here" /></a></p> <p>I get what I need.<br /> What am I doing wrong? I did not really understand it from the answers. 
Thank you.</p>
JimB
19,758
<p><strong><em>Correction:</em></strong> What a difference good starting values can make. What I presented earlier totally missed values of the parameters that allow for a good fit.</p> <p>Using a simpler parameterization with <code>A -&gt; 2 a c^2 h</code> and <code>T -&gt; (c h t)/kb</code> we have the equation </p> <pre><code>1/(a (-1 + E^(1/(l t))) l^5) </code></pre> <p>$$\frac{1}{a l^5 \left(e^{\frac{1}{l t}}-1\right)}$$</p> <p>Using <code>FindFit</code> with good starting values (suggested by @SimonWoods) we have the following:</p> <pre><code>sol = FindFit[dati, 1/(a (-1 + E^(1/(l t))) l^5), {{a, 1/(6 10^24)}, {t, 0.00007}}, l] (* {a -&gt; 5.707889613047356`*^-24,t -&gt; 0.00007085381301746858`} *) Show[Plot[1/(a (-1 + E^(1/(l t))) l^5) /. sol, {l, 400, 900}, PlotStyle -&gt; {Thickness[0.02], Red}, PlotRange -&gt; All], ListPlot[dati, PlotStyle -&gt; White]] </code></pre> <p><a href="https://i.stack.imgur.com/tkMs1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tkMs1.png" alt="Data and fit"></a></p>
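For comparison, the same behavior shows up in other fitters: with this simplified parameterization and starting values in the right ballpark, a least-squares routine recovers the parameters easily. A sketch in Python/SciPy on noise-free synthetic data (the parameter values are my own, chosen near the fit above; this is a cross-check, not part of the answer):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(l, a, t):
    # Same simplified parameterization: 1 / (a * l^5 * (exp(1/(l t)) - 1))
    return 1.0 / (a * l**5 * np.expm1(1.0 / (l * t)))

l = np.linspace(400, 900, 200)
a_true, t_true = 5.7e-24, 7.09e-5
y = model(l, a_true, t_true)  # synthetic, noise-free data

# Starting values in the right ballpark, as with FindFit above:
popt, _ = curve_fit(model, l, y, p0=[1 / (6e24), 7e-5])
print(popt)  # recovers approximately (5.7e-24, 7.09e-5)
```

The `np.expm1` call is just a numerically careful way of writing `exp(...) - 1`.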
153,448
<p>On the complex plane, I have a transformation "T" such that :</p> <p>$z' = (m+i)z + m - 1 - i$ ($z'$ is the image and $z$ the preimage, $z$ and $z'$ are both complex number)</p> <p>and $m$ is a real number. </p> <p>I'd need to determine "$m$" such that this transformation "T" is a rotation.</p> <p>I know a rotation can be written under the form : $z'- w = k (z - w)$ with "$w$" the complex number associated with the center and "$k$" a complex number modulus 1. But I can't find how to put "T" under the form of a rotation.</p> <p>Some hint would be very appreciated, Thanks.</p>
Community
-1
<p>If you integrate it out, you will get $$\dfrac{f(x)^3}{3} = x^3(1+x)^2$$ Hence, $$f(x)^3 = 3x^3(1+x)^2$$ Setting $x = 2$, gives us $$f(2)^3 = 3 \times 2^3 \times 3^2 = 6^3.$$ Since $f(x) \in \mathbb{R}$, we get that $f(2) = 6$.</p>
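The arithmetic in the final step checks out:

```python
# f(2)^3 = 3 * 2^3 * (1 + 2)^2 = 216 = 6^3, so f(2) = 6
value = 3 * 2**3 * (1 + 2)**2
print(value)  # 216
assert value == 6**3
```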
201,820
<p>Suppose we have in <code>~/time-data/time-data.org</code> the following data:</p> <pre><code>* Parent1
:LOGBOOK:
CLOCK: [2019-07-09 Tue 00:00]--[2019-07-09 Tue 00:20] =&gt; 0:20
:END:
** Child1
:LOGBOOK:
CLOCK: [2019-07-10 Wed 00:02]--[2019-07-10 Wed 00:40] =&gt; 0:38
:END:
** Child2
:LOGBOOK:
CLOCK: [2019-07-11 Thu 00:02]--[2019-07-11 Thu 06:40] =&gt; 0:38
:END:
</code></pre> <p>We then can use <a href="https://github.com/atheriel/org-clock-csv" rel="nofollow noreferrer">atheriel/org-clock-csv</a> to pull this data via</p> <pre><code>(org-clock-csv-to-file "~/time-data/time-data.csv" '("~/time-data/time-data.org"))
</code></pre> <p>which populates <code>time-data.csv</code> with</p> <pre><code>task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
</code></pre> <p>so that in Mathematica we can run:</p> <blockquote> <p><a href="https://i.stack.imgur.com/43DSa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43DSa.png" alt="enter image description here"></a></p> </blockquote> <p><strong>Question:</strong> How do we get a <code>DateListPlot</code> out of this that shows, e.g., hours spent per day?</p> <hr> <p><strong>EDIT:</strong> I fed everyone's answers through my actual data (which spans several months) and <a href="https://www.wolframcloud.com/obj/george.w.singer/Published/time-data" rel="nofollow noreferrer">published them here</a>. I get lots of errors and (mostly) unparsable graphs. I think these answers are getting me closer to something usable though!</p>
Edmund
19,542
<p>You may use the <code>"Dataset"</code> and <code>"HeaderLines"</code> <a href="https://reference.wolfram.com/language/ref/Import.html" rel="nofollow noreferrer"><code>Import</code></a> options for <a href="http://reference.wolfram.com/language/ref/format/CSV.html" rel="nofollow noreferrer"><code>"CSV"</code></a> along with <a href="https://reference.wolfram.com/language/ref/Dataset.html" rel="nofollow noreferrer"><code>Dataset</code></a> and <a href="https://reference.wolfram.com/language/ref/Query.html" rel="nofollow noreferrer"><code>Query</code></a>.</p> <p>Using a slightly modified <code>csv</code> from @M.R.</p> <pre><code>csv = "task,parents,category,start,end,effort,ishabit,tags Parent1,,,2019-07-07 00:00,2019-07-07 00:20,,, Child1,Parent1,,2019-07-8 00:02,2019-07-8 00:40,,, Child2,Parent1,,2019-07-9 00:02,2019-07-9 06:40,,, Parent2,,,2019-07-08 00:00,2019-07-08 00:20,,, Child21,Parent2,,2019-07-9 00:02,2019-07-9 00:40,,, Child22,Parent2,,2019-07-10 00:02,2019-07-10 06:40,,, Parent3,,,2019-07-09 00:00,2019-07-09 00:20,,, Child31,Parent3,,2019-07-10 00:02,2019-07-10 00:40,,, Child32,Parent3,,2019-07-11 00:02,2019-07-11 06:40,,,"; </code></pre> <p><code>Import</code> (using <code>ImportString</code> but its the exact same options for <code>Import</code>) as a <code>Dataset</code> with</p> <pre><code>ds = ImportString[csv, {"CSV", "Dataset"}, "HeaderLines" -&gt; 1] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/3iyDJ.png" alt="Mathematica graphics"></p> </blockquote> <p>Convert <code>"start"</code> and <code>"end"</code> to <a href="https://reference.wolfram.com/language/ref/DateObject.html" rel="nofollow noreferrer"><code>DateObject</code></a>s and fill in <code>"parents"</code> for parent task to make grouping easier.</p> <pre><code>ds = Query[All, &lt;|#, "parents" -&gt; If[#parents == "", #task, #parents]|&gt; &amp;]@ Query[All, Thread[{"start", "end"} -&gt; DateObject]]@ds </code></pre> <blockquote> <p><img 
src="https://i.stack.imgur.com/Tkx7c.png" alt="Mathematica graphics"></p> </blockquote> <p><a href="https://reference.wolfram.com/language/ref/GroupBy.html" rel="nofollow noreferrer"><code>GroupBy</code></a> <code>"parents"</code> then the <a href="https://reference.wolfram.com/language/ref/CurrentDate.html" rel="nofollow noreferrer"><code>CurrentDate</code></a> <code>"Day"</code> of <code>"start"</code>, calculate the <a href="https://reference.wolfram.com/language/ref/DateDifference.html" rel="nofollow noreferrer"><code>DateDifference</code></a> in <code>"Hour"</code>s between <code>"start"</code> and <code>"end"</code>, <a href="https://reference.wolfram.com/language/ref/Total.html" rel="nofollow noreferrer"><code>Total</code></a> the hours per start day.</p> <pre><code>dsHours = ds[ GroupBy[{#parents &amp;, CurrentDate[#start, "Day"] &amp;}] /* KeySort, All, Total, DateDifference[#start, #end, "Hour"] &amp; ] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/Eid48.png" alt="Mathematica graphics"></p> </blockquote> <p>Then <a href="https://reference.wolfram.com/language/ref/DateListPlot.html" rel="nofollow noreferrer"><code>DateListPlot</code></a>.</p> <pre><code>DateListPlot[dsHours, Filling -&gt; Axis] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/K5naq.png" alt="Mathematica graphics"></p> </blockquote> <p>or <a href="https://reference.wolfram.com/language/ref/DateHistogram.html" rel="nofollow noreferrer"><code>DateHistogram</code></a></p> <pre><code>DateHistogram[ dsHours[All, {Keys, Values} /* Apply[WeightedData]], "Day", ChartLegends -&gt; Automatic ] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/JaFlF.png" alt="Mathematica graphics"></p> </blockquote> <p>Hope this helps.</p>
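For an independent cross-check outside Mathematica, the same per-day totals can be computed from the CSV with Python/pandas (my own sketch, using the three-row sample from the question; column names as produced by org-clock-csv):

```python
import io

import pandas as pd

csv = """task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
"""

df = pd.read_csv(io.StringIO(csv), parse_dates=["start", "end"])
df["hours"] = (df["end"] - df["start"]).dt.total_seconds() / 3600

# Hours clocked per day, keyed by the day the clock started:
per_day = df.groupby(df["start"].dt.date)["hours"].sum()
print(per_day)  # roughly 0.33 h, 0.63 h, 6.63 h for the three days
```

This mirrors the grouping-by-start-day approach used in the Mathematica answer above.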
244,769
<p>I am DMing a game of DnD and one of my players is really into fear effects, which is cool, but the effect of having monsters suffer from the &quot;panicked&quot; condition gets tedious to render via dice rolls.</p> <p>The rule is, on the battle grid the monster will run for 1 square in a random direction, then from that new position it will move into another random adjacent square. Repeat this process until it has moved its full move speed.</p> <pre><code>movespeed = 6;
points = Point[NestList[{(#[[1]] + RandomChoice[{-1, 0, 1}]), #[[2]] + RandomChoice[{-1, 0, 1}]} &amp;, {11/2, 11/2}, movespeed]];
Graphics[{PointSize[Large], points}, GridLines -&gt; {Range[0, 11], Range[0, 11]}, PlotRange -&gt; {{0, 11}, {0, 11}}, Axes -&gt; True]
</code></pre> <p>I have written some code that shows me the squares the monster moves through, but I would love to replace the little black dots with numbers like &quot;1&quot;, &quot;2&quot;, ..., &quot;6&quot; so that I know the path it actually took.</p>
Daniel Huber
46,318
<p>Assume a,b,c are given as:</p> <pre><code>{a, b, c} = {{0.2, 0.8}, {0.1, 0.15}, {0.8, 0.25}};
</code></pre> <p>The origin of the uv system is on a-b: orig = b + x(a-b), where we need to determine x. This is done by using the fact that (c-orig) is perpendicular to (a-b):</p> <pre><code>orig = b + x (a - b) /. Solve[(b + x (a - b) - c).(a - b) == 0, x][[1]];
</code></pre> <p>With the origin, it is easy to determine unit vectors along u and v:</p> <pre><code>{eu, ev} = {(a - orig)/Norm[a - orig], (c - orig)/Norm[c - orig]};
</code></pre> <p>To map coordinates x/y to u/v we need first to subtract orig from x/y and then project onto eu and ev. To map from u/v to x/y we need to multiply eu by u and ev by v, add both together and finally add orig:</p> <pre><code>xy2uv[p : {x_, y_}] := {eu, ev}.(p - orig);
uv2xy[p : {u_, v_}] := p.{eu, ev} + orig;
</code></pre> <p>As a test, the u/v coordinates of c should be {0,v}:</p> <pre><code>t = xy2uv[c]
(*{-1.38778*10^-17, 0.676654}*)
</code></pre> <p>And if we back map t we should again get the coordinates of c:</p> <pre><code>uv2xy[t]
(*{0.8, 0.25}*)
</code></pre> <p><strong>Addendum</strong></p> <p>If the u/v coordinates have a different scale, the transformation is:</p> <pre><code>xy2uv[p : {x_, y_}] := {eu, ev}.(p - orig) scaleuv;
uv2xy[p : {u_, v_}] := (p/scaleuv).{eu, ev} + orig;
</code></pre>
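The construction translates directly to NumPy, which gives a quick independent round-trip check (my own translation of the code above, with the same a, b, c):

```python
import numpy as np

a, b, c = np.array([0.2, 0.8]), np.array([0.1, 0.15]), np.array([0.8, 0.25])

# Origin on segment a-b chosen so that (c - orig) is perpendicular to (a - b):
ab = a - b
orig = b + (np.dot(c - b, ab) / np.dot(ab, ab)) * ab

# Unit vectors along u (toward a) and v (toward c); rows of M are orthonormal:
eu = (a - orig) / np.linalg.norm(a - orig)
ev = (c - orig) / np.linalg.norm(c - orig)
M = np.array([eu, ev])

def xy2uv(p):
    return M @ (p - orig)

def uv2xy(p):
    return p @ M + orig

t = xy2uv(c)
print(t)         # first component ~ 0: c lies on the v axis, second ~ 0.6767
print(uv2xy(t))  # back-maps to [0.8, 0.25]
```

The round trip is exact (up to floating point) because the rows of `M` are orthonormal by construction, and the numbers agree with the Mathematica output shown above.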
4,531,652
<p>In my school book, I read this theorem</p> <blockquote> <p>Let <span class="math-container">$n&gt;0$</span> be an odd natural number (or an odd positive integer); then the equation <span class="math-container">$$x^n=a$$</span> has exactly one real root.</p> </blockquote> <p>But the book doesn't provide a proof; it only states <span class="math-container">$x=\sqrt [n]a$</span>. How can I prove this theorem?</p> <p>I tried to prove some special cases</p> <p><span class="math-container">$$x^3=8$$</span> <span class="math-container">$$(x-2)(x^2+2x+4)=0$$</span> <span class="math-container">$$x=2 \vee x^2+2x+4=0$$</span></p> <p>But the discriminant of <span class="math-container">$x^2+2x+4=0$</span> equals <span class="math-container">$2^2-4×4=-12&lt;0$</span>. So <span class="math-container">$x=2$</span> is the only root. But for <span class="math-container">$x^5=32$</span>, I got <span class="math-container">$x=2$</span> and <span class="math-container">$x^4+2x^3+4x^2+8x+16=0$</span>.</p> <p>I don't know how I can proceed.</p>
Mark Bennet
2,906
<p>To prove this you can do the following to show that the function <span class="math-container">$y=x^n$</span> is increasing when <span class="math-container">$n$</span> is odd.</p> <p>So suppose <span class="math-container">$a\gt b$</span> and <span class="math-container">$n$</span> is odd, we want to prove <span class="math-container">$a^n\gt b^n$</span>. Well if <span class="math-container">$a\gt 0 \gt b$</span> then you are adding the positive terms <span class="math-container">$a^n$</span> and <span class="math-container">$-b^n$</span>. Else <span class="math-container">$$a^n-b^n=(a-b)\left(a^{n-1}+a^{n-2}b+\dots b^{n-1}\right)$$</span></p> <p>Here <span class="math-container">$a-b$</span> is positive by hypothesis and every term <span class="math-container">$a^rb^{n-1-r}$</span> is non-negative because <span class="math-container">$n$</span> is odd and <span class="math-container">$a$</span> and <span class="math-container">$b$</span> do not have opposite signs (one of them could be zero). Finally since <span class="math-container">$a\gt b$</span> we have <span class="math-container">$a\neq b$</span> so that either <span class="math-container">$a^{n-1}$</span> or <span class="math-container">$b^{n-1}$</span> is positive (non zero) so <span class="math-container">$a^n-b^n$</span> is the product of two strictly positive numbers and is positive.</p>
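The monotonicity established above also gives a practical way to compute the unique root: since $x \mapsto x^n$ is strictly increasing for odd $n$, plain bisection works for any real $a$, negative or positive. An illustrative Python sketch:

```python
def odd_root(a, n, tol=1e-12):
    """Unique real solution of x**n == a for odd n, found by bisection.
    Works for negative a too, since x -> x**n is strictly increasing."""
    assert n % 2 == 1
    lo, hi = -1.0, 1.0
    # grow the bracket until it contains the root
    while lo ** n > a:
        lo *= 2
    while hi ** n < a:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** n < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r1 = odd_root(32, 5)    # the x^5 = 32 example: root 2
r2 = odd_root(-8, 3)    # also works for negative a: root -2
```

The bisection step is valid precisely because of the strict monotonicity proved in the answer.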
4,052,760
<blockquote> <p>Prove that <span class="math-container">$\int\limits^{1}_{0} \sqrt{x^2+x}\,\mathrm{d}x &lt; 1$</span></p> </blockquote> <p>I'm guessing it would not be too difficult to solve by just calculating the integral, but I'm wondering if there is any other way to prove this, like comparing it with an easy-to-calculate integral. I tried comparing it with <span class="math-container">$\displaystyle\int\limits^{1}_{0} \sqrt{x^2+1}\,\mathrm{d}x$</span>, but this is greater than <span class="math-container">$1$</span>, so I'm all out of ideas.</p>
aschepler
2,236
<p>Yet another way: If <span class="math-container">$0 &lt; x &lt; 1$</span>, then <span class="math-container">$x^2 &lt; x$</span>. So</p> <p><span class="math-container">$$\int_0^1 \sqrt{x^2+x} \ dx &lt; \int_0^1 \sqrt{2x}\ dx = \frac{2 \sqrt{2}}{3} &lt; 1$$</span></p> <p>(This bound is about 0.9428, so not as good as <a href="https://math.stackexchange.com/a/4052785/2236">Martin R's answer</a>)</p>
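A quick numerical sanity check of the comparison (illustrative Python, simple midpoint rule) confirms both the bound and its quality:

```python
import math

def midpoint(f, a, b, n=100_000):
    """Composite midpoint-rule estimate of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val = midpoint(lambda x: math.sqrt(x * x + x), 0.0, 1.0)   # about 0.84
bound = 2 * math.sqrt(2) / 3                               # about 0.9428
```

As expected, `val < bound < 1`, with the true integral sitting noticeably below the comparison value.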
664
<p>Erdős's 1947 probabilistic trick provided an exponential lower bound for the Ramsey number $R(k)$. Is it possible to explicitly construct 2-colourings on exponentially sized graphs without large monochromatic subgraphs?</p> <p>That is, can we explicitly construct (edge) 2-colourings on graphs of size $c^k$, for some $c&gt;0$, with no monochromatic complete subgraph of size $k$?</p>
Harrison Brown
382
<p>I believe the answer is "no"; the best known constructions only give no clique or independent set of size about $2^\sqrt{n}$ in a graph with $2^n$ vertices. Bill Gasarch has a page on the subject <a href="http://www.cs.umd.edu/~gasarch/const_ramsey/const_ramsey.html" rel="nofollow">here</a>, although I don't know how frequently it updates.</p>
664
<p>Erdős's 1947 probabilistic trick provided an exponential lower bound for the Ramsey number $R(k)$. Is it possible to explicitly construct 2-colourings on exponentially sized graphs without large monochromatic subgraphs?</p> <p>That is, can we explicitly construct (edge) 2-colourings on graphs of size $c^k$, for some $c&gt;0$, with no monochromatic complete subgraph of size $k$?</p>
Alon Amit
25
<p>I also believe the answer is "no". Another reference is <a href="http://www.math.tau.ac.il/~nogaa/PDFS/ap5.pdf" rel="nofollow">this paper</a>, which treats off-diagonal Ramsey numbers (e.g. graphs with no clique of size k and no anti-clique of size l). </p>
1,476,456
<p>How many positive (integers) numbers less than $1000$ with digit sum to $11$ and divisible by $11$?</p> <p>There are $\lfloor 1000/11 \rfloor = 90$ numbers less than $1000$ divisible by $11$.</p> <p>$N = 100a + 10b + c$ where $a + b + c = 11$ and $0 \le a, b, c \le 9$</p> <p>I got $\binom{13}{2} - 9 = 69$ solutions.</p>
Adelafif
229,367
<p>$N=a+10b+100c$, so $N\equiv a-b+c \pmod{11}$, which forces $a-b+c=0$ or $a-b+c=11$. Also we have $a+b+c=11$. We get two cases:</p> <ul> <li>$a-b+c=0,\;a+b+c=11$ from which we get $2a+2b=11$; impossible.</li> <li>$a-b+c=11,\;a+b+c=11$ from which we get $a+c=11,\; b=0$. They are $209, 308, 407, 506, 605, 704, 803, 902$.</li> </ul>
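A brute-force check in Python (illustrative) confirms the count and the list:

```python
# all positive integers below 1000 that are divisible by 11
# and whose digits sum to 11
hits = [n for n in range(1, 1000)
        if n % 11 == 0 and sum(map(int, str(n))) == 11]
# -> [209, 308, 407, 506, 605, 704, 803, 902]
```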
232,777
<p>Let $F$ be an ordered field.</p> <p>What is the least ordinal $\alpha$ such that there is no order-embedding of $\alpha$ into any bounded interval of $F$?</p>
Panurge
82,840
<p>Here is my proof.</p> <p>Lemma. Let $X_{1}, \ldots , X_{s}$ be distinct sets with nonempty union, let $l$ be a natural number such that $l \leq s$, and assume that for all distinct indices $i_{1}, \ldots , i_{l}$ in $\{1, \ldots s \}$, $X_{i_{1}} \cup \ldots \cup X_{i_{l}}$ is equal to the whole union $X_{1} \cup \ldots \cup X_{s}$.</p> <p>Then one can find $s+1-l$ sets among $X_{1}, \ldots , X_{s}$ that have an element in common.</p> <p>Proof of the lemma. Take a natural number $m$ with the same properties as $l$, but minimal. Since the whole union $X_{1} \cup \ldots \cup X_{s}$ is nonempty by hypothesis, $m \geq 1$. By minimality of $m$, we can choose $i_{1}, \ldots , i_{m-1}$ in $\{1, \ldots s \}$ such that $X_{i_{1}} \cup \ldots \cup X_{i_{m-1}}$ is not equal to the whole union $X_{1} \cup \ldots \cup X_{s}$. Choose an element $x$ not in $X_{i_{1}} \cup \ldots \cup X_{i_{m-1}}$. Let $j$ be any index not in $\{i_{1}, \ldots , i_{m-1}\}$. Since $m$ has the same properties as $l$, $X_{i_{1}} \cup \ldots \cup X_{i_{m-1}} \cup X_{j}$ is equal to the whole union $X_{1} \cup \ldots \cup X_{s}$. Thus, for every $j$ not in $\{i_{1}, \ldots , i_{m-1}\}$, $x$ is in $X_{j}$. Since there are $s+1-m \geq s+1-l$ such indices $j$, we have proved the lemma.</p> <p>Statement in the opening post. If $k$ and $r$ are natural numbers such that $r \leq k$, if a union closed family of sets ("union closed" means that the union of two sets from the family is always a member of the family) has at least ${k \choose r} + 1$ members with cardinality $r$, then this family has at least two members with cardinality $\geq k$.</p> <p>Proof. Induction on $r$. It is trivially true for $r=0$ (the hypothesis is impossible in this case). Let $X_{1}, \ldots , X_{s}$ be distinct members of the family with cardinality $r$, where $s = {k \choose r} + 1$. Let $F$ denote the union-closed family generated by these sets. We have to prove that $F$ has at least two members with cardinality $\geq k$. Assume it is false (denying hypothesis). 
Let $V$ denote the union of the sets. Then $V$ has cardinality at least $k+1$ (since it has more than ${k \choose r}$ subsets with cardinality $r$). In view of our denying hypothesis, $V$ is the only member of $F$ with cardinality at least $k$. The thesis is trivially true for $k=r$, thus we can assume $k\geq r+1$. The union of any ${k-1 \choose r} + 1$ of the sets $X_{i}$ (each of cardinality $r$) has cardinality at least $k$ (same reasoning as for the cardinality of $V$); thus, in view of the unicity of $V$, the union of any ${k-1 \choose r} + 1$ of the sets $X_{i}$ is always equal to $V$. By the lemma, applied to the sets $X_{1}, \ldots X_{s}$, one can find</p> <p>${k \choose r} + 2 - ({k-1 \choose r} + 1) = {k-1 \choose r-1} + 1$ sets $X_{i}$ with a common element $x$. The ${k-1 \choose r-1} + 1$ sets $X_{i} \setminus \{x\}$ are of cardinality $r-1$. By the induction hypothesis, the union-closed family generated by these sets has at least two members with cardinality at least $k-1$, thus the union-closed family generated by the corresponding $X_{i}$ has at least two members with cardinality at least $k$. Thus the denying hypothesis contradicts itself.</p> <p>This proof seems a bit complicated to me, and I hope I made no errors...</p>
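The lemma can be spot-checked by brute force over small families; here is an illustrative Python sketch over subsets of a 4-element universe, with sets encoded as bitmasks:

```python
from itertools import combinations

def union(sets):
    u = 0
    for m in sets:
        u |= m
    return u

def intersection(sets):
    a = -1                                 # all-ones mask
    for m in sets:
        a &= m
    return a

def lemma_holds(family, l):
    """If every union of l members equals the whole union,
    then some s + 1 - l members must share a common element."""
    s = len(family)
    total = union(family)
    if total == 0:
        return True                        # hypothesis (nonempty union) fails
    if any(union(sub) != total for sub in combinations(family, l)):
        return True                        # hypothesis fails, nothing to check
    return any(intersection(sub) != 0
               for sub in combinations(family, s + 1 - l))

masks = range(1, 16)                       # nonempty subsets of {0,1,2,3}
ok = all(lemma_holds(fam, l)
         for s in (2, 3)
         for fam in combinations(masks, s)
         for l in range(1, s + 1))
```

The check runs over all families of 2 or 3 distinct nonempty subsets and all admissible values of $l$, and finds no counterexample.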
611,361
<p>Let's have the function $f$ defined by: $$f(x)=2\sum_{k=1}^{\infty}\frac{e^{kx}}{k^3}-x\sum_{k=1}^{\infty}\frac{e^{kx}}{k^2},\quad x\in(-2\pi,0\,\rangle$$ My question: Can somebody expand it into a correct Maclaurin series, but in an unconventional way? Conventional would be, e.g., using the $n$-th derivative of $f(x)$ at zero.</p> <p>Re-edited: Let me explain the reason for my question. What follows is like the conventional use of the expansion of $e^{kx}$, but using incorrect arguments (using zetas for divergent series). The nice thing is that the final result looks correct! We have: \begin{align}f(x)&amp;=2\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^{m-3}x^m}{m!}-\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^{m-2}x^{m+1}}{m!}=\\&amp;= \sum_{m=0}^{\infty}\frac{x^m}{m!}2\sum_{k=1}^{\infty}k^{m-3}-\sum_{m=0}^{\infty}\frac{x^{m+1}}{m!}\sum_{k=1}^{\infty}k^{m-2}=\\&amp;=\sum_{m=0}^{\infty}\frac{x^m}{m!}2\zeta(3-m)-\sum_{m=0}^{\infty}\frac{x^{m+1}}{m!}\zeta(2-m)=\\&amp;=\sum_{m=0}^{\infty}\frac{x^m}{m!}2\zeta(3-m)-\sum_{m=1}^{\infty}\frac{x^{m}}{(m-1)!}\zeta(3-m)=\\&amp;=\sum_{m=0}^{\infty}\frac{x^m}{m!}(2-m)\zeta(3-m)\end{align} So we get a nice Maclaurin series containing zetas: $$f(x)=\sum_{m=0}^{\infty}\frac{x^m}{m!}(2-m)\zeta(3-m)$$ And now, if somebody finds an expansion using some unconventional technique, there is a chance to get an interesting formula for $\zeta(3)$. That's the motivation for my question.</p>
Claude Leibovici
82,404
<p>My first reaction and idea was to find a closed form for $f(x)$ which I could later expand as a Taylor series. From the definition, the function can be written as<br> <code>f(x) = -x PolyLog[2, E^x] + 2 PolyLog[3, E^x]</code><br> which looks simple. However, the difficulties start when I try to expand this result as a Taylor series; the result is quite unpleasant and not really workable. </p> <p>While I was typing these first comments, Farshad Nahangi's answer appeared, which is very nice and to which nothing has to be added if I want to avoid some redundancy.</p>
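As a purely numerical side note (illustrative Python, not part of either answer): summing the defining series directly at a test point and comparing term by term with the question's zeta series suggests the formula is right for every $m \neq 2$, while the formally indeterminate $m=2$ coefficient $(2-m)\zeta(3-m) = 0\cdot\zeta(1)$ appears to contribute $x^2/2$ rather than $0$:

```python
import math

x = -0.5   # any test point in (-2*pi, 0)

# direct summation of the defining series (converges geometrically for x < 0)
S2 = sum(math.exp(k * x) / k**2 for k in range(1, 200))
S3 = sum(math.exp(k * x) / k**3 for k in range(1, 200))
f_direct = 2 * S3 - x * S2

# zeta(3 - m) for the orders m = 0..8 that matter numerically here
zeta = {3: 1.2020569031595943, 2: math.pi**2 / 6, 0: -0.5,
        -1: -1 / 12, -2: 0.0, -3: 1 / 120, -4: 0.0, -5: -1 / 252}
f_zeta = sum((2 - m) * zeta[3 - m] * x**m / math.factorial(m)
             for m in range(9) if m != 2)

gap = f_direct - f_zeta     # numerically this comes out as x**2 / 2
```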
3,963,884
<p>Suppose <span class="math-container">$(X_n)_n$</span> are i.i.d. random variables and let <span class="math-container">$W_n = \sum_{k=1}^n X_k$</span>. Assume that there exist <span class="math-container">$u_n&gt;0 , v_n \in \mathbb{R}$</span> such that</p> <p><span class="math-container">$$\frac{1}{u_n}W_n-v_n\Rightarrow W$$</span></p> <p>where <span class="math-container">$W$</span> is not degenerate. Show that</p> <p><span class="math-container">$$u_n\to \infty , \frac{u_n}{u_{n+1}}\to 1$$</span></p> <p>What happens to <span class="math-container">$u_n$</span> if <span class="math-container">$W$</span> is degenerate?</p> <p><strong>Hint:</strong> You may need to consider <span class="math-container">${u_{2n}}/{u_n}$</span>.</p> <p>Is the following attempt, for the first part, correct?</p> <p>In order to remove <span class="math-container">$v_n,$</span> we can consider <span class="math-container">$\frac{1}{u_n}\sum_{k=1}^n(X_{2k+1}-X_{2k})$</span> which converges in distribution to a non-degenerate random variable <span class="math-container">$Y.$</span> So we can suppose that <span class="math-container">$\frac{1}{u_n}W_n$</span> converges in distribution to <span class="math-container">$W$</span>.</p> <p>If <span class="math-container">$W$</span> is non-degenerate then there exists <span class="math-container">$x \in \mathbb{R};|\phi_W(x)|&lt;1$</span>; since <span class="math-container">$\frac{1}{u_n}W_n-v_n\Rightarrow W$</span>, there exists <span class="math-container">$k \in \mathbb{N};|\phi_{X_1}(\frac{x}{u_k})|&lt;1$</span>, which means that <span class="math-container">$X_1$</span> is not degenerate. If <span class="math-container">$(u_n)_n$</span> is bounded from above, then there exists a subsequence <span class="math-container">$(u_{k_n})$</span> such that <span class="math-container">$W_{k_n}$</span> converges in distribution. 
Let <span class="math-container">$(u_{q_n})_n$</span> be an arbitrary subsequence; since <span class="math-container">$X_1$</span> is non-degenerate, <span class="math-container">$\sum_{l=1}^{q_n}X_l=W_{q_n}$</span> doesn't converge in distribution, so <span class="math-container">$u_{q_n}$</span> is not bounded from above and we can extract a subsequence of <span class="math-container">$u_{q_n}$</span> diverging to <span class="math-container">$+\infty.$</span></p> <p>In case <span class="math-container">$W$</span> is degenerate, is it possible to know the behavior of <span class="math-container">$u_n$</span>?</p>
Botnakov N.
452,350
<p>In order to finish the solution we should show that <span class="math-container">$\frac{u_{n+1}}{u_n} \to 1$</span>.</p> <p>As <span class="math-container">$\frac{X_{n+1}}{u_n} =\frac{X_{1}}{u_n} $</span> in distribution and <span class="math-container">$u_n \to \infty$</span>, we have <span class="math-container">$\frac{X_{n+1}}{u_n} \to 0$</span> in distribution. We have</p> <p><span class="math-container">$\frac{\sum_{k=1}^{n+1} X_k - X_{n+1}}{u_n} = \frac{\sum_{k=1}^n X_k}{u_n} \to W$</span>, <span class="math-container">$\frac{X_{n+1}}{u_n} \to 0$</span>, thus <span class="math-container">$\frac{\sum_{k=1}^{n+1} X_k}{u_n} \to W$</span>. But we know that <span class="math-container">$\xi_n = \frac{\sum_{k=1}^{n+1} X_k}{u_{n+1}} \to W$</span>. Put <span class="math-container">$c_n = \frac{u_{n+1}}{u_n}$</span>. Thus <span class="math-container">$\xi_n \to W$</span> and <span class="math-container">$c_n \xi_n = \frac{u_{n+1}}{u_n} \frac{\sum_{k=1}^{n+1} X_k}{u_{n+1}} = \frac{\sum_{k=1}^{n+1} X_k}{u_{n}} \to W$</span>.</p> <p>So we know that <span class="math-container">$\xi_n \to W$</span>, <span class="math-container">$c_n \xi_n \to W$</span> in distribution, <span class="math-container">$c_n &gt;0$</span>, <span class="math-container">$W$</span> is nondegenerate and we want to show that <span class="math-container">$c_n \to 1$</span>.</p> <p>Let us prove it by contradiction. Suppose that there is <span class="math-container">$c_{n_k} \to c \in [0,1) \cup (1, \infty]$</span>. Instead of <span class="math-container">$c_{n_k}$</span> we will write <span class="math-container">$c_n$</span>.</p> <p>According to Skorokhod's representation theorem we may assume W.L.O.G. that <span class="math-container">$\xi_n \to W$</span> a.s. (and still <span class="math-container">$c_n \xi_n \to W$</span> in distribution).</p> <p><strong>Case 1.</strong> <span class="math-container">$c = \infty$</span>. 
As <span class="math-container">$\xi_n \to W$</span> a.s., <span class="math-container">$|W| &lt; \infty$</span>, <span class="math-container">$c_n \to \infty$</span>, we have <span class="math-container">$|c_n \xi_n(\omega)| \to +\infty$</span> for <span class="math-container">$\omega: W(\omega) \ne 0$</span>. But <span class="math-container">$|c_n \xi_n| \to |W| &lt; \infty$</span> in distribution. Thus <span class="math-container">$W = 0$</span> a.s. It's a contradiction.</p> <p><strong>Case 2.</strong> <span class="math-container">$c=0$</span>. As <span class="math-container">$\xi_n \to W$</span> and <span class="math-container">$c_n \to 0$</span>, we have <span class="math-container">$c_n \xi_n \to 0$</span>, but <span class="math-container">$c_n \xi_n \to W$</span>. Thus <span class="math-container">$W = 0$</span> a.s. It's a contradiction.</p> <p><strong>Case 3.</strong> <span class="math-container">$c \in (0,1) \cup (1,\infty)$</span>. As <span class="math-container">$\xi_n \to W, c_n \xi_n \to W, c_n \to c$</span>, we have <span class="math-container">$W = cW$</span> in distribution and hence <span class="math-container">$\frac{1}{c}W = W$</span> in distribution.</p> <p>We see that it's sufficient to show that <span class="math-container">$W = dW$</span> in distribution with <span class="math-container">$d \in (0,1)$</span> (and <span class="math-container">$d=c$</span> or <span class="math-container">$d = \frac1{c}$</span>) is impossible.</p> <p>As <span class="math-container">$W =dW$</span>, we have <span class="math-container">$$W = dW = d(dW) = d^2 W = \ldots = d^n W $$</span> in distribution.</p> <p>Put <span class="math-container">$\eta_n = d^n W$</span>. Thus <span class="math-container">$\eta_n \to 0$</span>, because <span class="math-container">$d^n \to 0$</span>. But <span class="math-container">$\eta_n = W$</span> in distribution. Thus <span class="math-container">$W=0$</span>. 
It's a contradiction.</p> <p>Hence we get a contradiction in every case, q.e.d.</p>
3,690,185
<p>By <span class="math-container">$a_n \sim b_n$</span> I mean that <span class="math-container">$\lim_{n \rightarrow \infty} \frac{a_n}{b_n} = 1$</span>. The problem is to show that <span class="math-container">$\int_{0}^{1}{(1+x^2)^n dx} \sim \frac{2^n}{n}$</span>.</p> <p>I don't know how to do this problem. I have tried to apply the binomial theorem and I got <span class="math-container">$$\int_{0}^{1}{(1+x^2)^n dx} = \int_0^1 \sum_{k=0}^n{\binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \int_0^1{ \binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \frac {\binom{n}{k}}{2k+1}$$</span> But I don't know what I could do with this, nor if it is a correct approach. </p>
Barry Cipra
86,747
<p>Let <span class="math-container">$u=(1+x^2)/2$</span>, so that <span class="math-container">$du=x\,dx=\sqrt{2u-1}\,dx$</span>. It follows that</p> <p><span class="math-container">$${n\over2^n}\int_0^1(1+x^2)^n\,dx=n\int_{1/2}^1{u^n\over\sqrt{2u-1}}\,du=n\int_{1/2}^1u^n\,du+n\int_{1/2}^1u^n\left({1\over\sqrt{2u-1}}-1\right)\,du$$</span></p> <p>Now</p> <p><span class="math-container">$$n\int_{1/2}^1u^n\,du={n\over n+1}\left(1-\left(1\over2\right)^{n+1}\right)\to1(1-0)=1$$</span></p> <p>So it remains to show that</p> <p><span class="math-container">$$n\int_{1/2}^1u^n\left({1\over\sqrt{2u-1}}-1\right)\,du\to0$$</span></p> <p>Note that</p> <p><span class="math-container">$$0\le{1\over\sqrt{2u-1}}-1={1-\sqrt{2u-1}\over\sqrt{2u-1}}={2(1-u)\over\sqrt{2u-1}(1+\sqrt{2u-1})}\le{2(1-u)\over\sqrt{2u-1}}$$</span></p> <p>so it's enough to show that</p> <p><span class="math-container">$$n\int_{1/2}^1{u^n(1-u)\over\sqrt{2u-1}}\,du\to0$$</span></p> <p>It's convenient to let <span class="math-container">$u=1-v$</span> and, taking <span class="math-container">$n=N^3$</span> to be sufficiently large, rewrite the integral as</p> <p><span class="math-container">$$\begin{align} N^3\int_0^{1/2}{v(1-v)^{N^3}\,dv\over\sqrt{1-2v}} &amp;=N^3\int_0^{1/N^2}{v(1-v)^{N^3}\,dv\over\sqrt{1-2v}} +N^3\int_{1/N^2}^{1/2}{v(1-v)^{N^3}\,dv\over\sqrt{1-2v}}\\ &amp;\le{N^3\over\sqrt{1-2/N^2}}\int_0^{1/N^2}v\,dv+{N^3\over2}\left(1-{1\over N^2}\right)^{N^3}\int_0^{1/2}{dv\over\sqrt{1-2v}}\\ &amp;={1\over2N\sqrt{1-2/N^2}}+{N^3\over2}\left(\left(1-{1\over N^2}\right)^{N^2} \right)^N\\ &amp;\le{1\over N}+N^3(e^{-1/2})^N\\ &amp;\to0+0=0 \end{align}$$</span></p> <p>(Note, these calculations do not require <span class="math-container">$n$</span> to be an integer.)</p>
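The limit can also be checked numerically (illustrative Python; the integrand is rewritten as $n\,((1+x^2)/2)^n$ so that large $n$ does not overflow):

```python
def scaled_integral(n, steps=100_000):
    """Midpoint-rule estimate of (n / 2**n) * integral_0^1 (1 + x^2)^n dx,
    computed as n * integral_0^1 ((1 + x^2) / 2)^n dx to avoid overflow."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += ((1 + x * x) / 2) ** n
    return n * h * total

vals = {n: scaled_integral(n) for n in (25, 100, 400)}
# the values approach 1 as n grows
```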
3,600,633
<p>As I was reading <a href="https://math.stackexchange.com/questions/1918673/how-can-i-prove-that-the-finite-extension-field-of-real-number-is-itself-or-the">this question</a>, I saw Ethan's answer. However, perhaps this is very obvious, but why must the degree of the polynomial be at most <span class="math-container">$2$</span>? I get that the polynomial must be irreducible, but does that force the degree to be at most <span class="math-container">$2$</span>?</p>
HallaSurvivor
655,547
<p>We know that, over <span class="math-container">$\mathbb{C}$</span>, we can factor any polynomial entirely into linear terms (this is the fundamental theorem of algebra). Moreover, one can show that whenever <span class="math-container">$f$</span> has real coefficients and <span class="math-container">$z$</span> is a root of <span class="math-container">$f$</span>, then <span class="math-container">$\overline{z}$</span> (the complex conjugate) must be a root of <span class="math-container">$f$</span> as well. </p> <p>Now, given <span class="math-container">$f \in \mathbb{R}[x]$</span>, we factor it into linear terms over <span class="math-container">$\mathbb{C}$</span>. We look at each root <span class="math-container">$\alpha$</span> in turn. If it is real, then <span class="math-container">$(x-\alpha)$</span> is a factor of <span class="math-container">$f$</span> over <span class="math-container">$\mathbb{R}$</span> as well. If it has a nonzero imaginary part, then <span class="math-container">$\overline{\alpha}$</span> must be a root too, and then <span class="math-container">$(x - \alpha)(x - \overline{\alpha})$</span> is a quadratic with real coefficients. </p> <p>Since these are the only two cases which can occur, we have factored <span class="math-container">$f$</span> into linear and quadratic parts.</p> <hr> <p>I hope this helps ^_^</p>
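The pairing step can be made concrete: for a non-real root $\alpha$, the product $(x-\alpha)(x-\overline{\alpha}) = x^2 - 2\,\mathrm{Re}(\alpha)\,x + |\alpha|^2$ has real coefficients. A quick Python illustration:

```python
alpha = 1 + 2j                      # any non-real complex number
conj = alpha.conjugate()

# coefficients of (x - alpha)(x - conj) = x^2 + b x + c
b = -(alpha + conj)                 # -2 Re(alpha)
c = alpha * conj                    # |alpha|^2

# both coefficients have zero imaginary part, i.e. are real
assert b.imag == 0 and c.imag == 0
# and the quadratic really vanishes at alpha
assert alpha**2 + b * alpha + c == 0
```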
3,905,629
<p>I need to compute a limit:</p> <p><span class="math-container">$$\lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x$$</span></p> <p>I tried to apply the L'Hôpital rule, but the emerging terms become too complicated and don't seem to simplify.</p> <p><span class="math-container">$$ \lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x \\ = \exp (\lim_{x \to 0+} x \ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})) \\ = \exp (\lim_{x \to 0+} \frac {\ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})} {\frac 1 x}) \\ = \exp \lim_{x \to 0+} \dfrac {\dfrac {\cos \sqrt x} {\sqrt x} + \dfrac {\sin \dfrac 1 x} {2 \sqrt x} - \dfrac {\cos \dfrac 1 x} {x^{3/2}}} {- \dfrac {1} {x^2} \left(2\sin \sqrt x + \sqrt x \sin \frac{1}{x} \right)} $$</span></p> <p>I've calculated several values of this function, and it seems to have a limit of <span class="math-container">$1$</span>.</p>
Cesareo
397,348
<p>Hint.</p> <p>For <span class="math-container">$x&gt;0$</span> small we have</p> <p><span class="math-container">$$ \left(2\sin(\sqrt{x})-\sqrt{x}\right)^x\le \sigma(x)\le \left(2\sin(\sqrt{x})+\sqrt{x}\right)^x $$</span></p> <p><a href="https://i.stack.imgur.com/H9KMf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H9KMf.jpg" alt="enter image description here" /></a></p>
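The hint can be checked numerically: both envelope functions, raised to the power $x$, tend to $1$ as $x \to 0^+$, which squeezes the original expression. An illustrative Python sketch:

```python
import math

def lower(x): return (2 * math.sin(math.sqrt(x)) - math.sqrt(x)) ** x
def upper(x): return (2 * math.sin(math.sqrt(x)) + math.sqrt(x)) ** x

samples = {x: (lower(x), upper(x)) for x in (1e-2, 1e-4, 1e-8)}
# both envelopes approach 1 as x -> 0+
```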
488,141
<p>\begin{align*}A=\left(\begin{array}{cccc} 1 &amp; 2 &amp; 3 &amp; 4 \\ 0 &amp; 1 &amp; 2 &amp; 3 \\ 0 &amp; 0 &amp; 1 &amp; 2 \\ 0 &amp; 0 &amp; 0 &amp; 1 \\\end{array}\right);\end{align*}</p> <p>All the eigenvalues are $1$; I know one of the eigenvectors is $(1,0,0,0)$. Is that all?</p> <p>Mathematica gives the following; why not {{1,0,0,0},{1,0,0,0},{1,0,0,0},{1,0,0,0}}? </p> <pre><code>Eigenvectors[A] </code></pre> <p>\begin{align*}\left(\begin{array}{cccc} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \\\end{array}\right)\end{align*}</p>
Alex Youcis
16,497
<p>Note that the matrix of the multiplication map $m_\alpha:\mathbb{Q}(\sqrt[4]{2})\to \mathbb{Q}(\sqrt[4]{2})$ by a generic element $\alpha=a+b\sqrt[4]{2}+c\sqrt[4]{2^2}+d\sqrt[4]{2^3}$ is (in the canonical basis)</p> <p>$$\begin{pmatrix}a &amp; 2d &amp; 2c &amp; 2b \\ b &amp; a &amp; 2d &amp; 2c \\ c &amp; b &amp; a &amp; 2d \\ d &amp; c &amp; b &amp; a \end{pmatrix}$$</p> <p>so, the trace of the matrix is $4a$. Now, note that if $\sqrt{3}\in\mathbb{Q}(\sqrt[4]{2})$ then </p> <p>$$\text{Tr}_{\mathbb{Q}(\sqrt[4]{2})/\mathbb{Q}}(\sqrt{3})=\text{Tr}_{\mathbb{Q}(\sqrt{3})/\mathbb{Q}}\left(\text{Tr}_{\mathbb{Q}(\sqrt[4]{2})/\mathbb{Q}(\sqrt{3})}(\sqrt{3})\right)=\text{Tr}_{\mathbb{Q}(\sqrt{3})/\mathbb{Q}}(2\sqrt{3})=0$$</p> <p>(this is just using the transitivity of the field trace--a similar result can be arrived at by calculating the minimal polynomial of $\sqrt{3}$). So, $a=0$. Thus, by dividing both sides by $\sqrt[4]{2}$ we get </p> <p>$$\sqrt[4]{\frac{9}{2}}=b+c\sqrt[4]{2}+d\sqrt[4]{2^2}$$</p> <p>Taking the minimal polynomial of the left hand side shows that $\text{Tr}_{\mathbb{Q}(\sqrt[4]{2})/\mathbb{Q}}\left(\sqrt[4]{\frac{9}{2}}\right)=0$. But, the trace of the right hand side is $4b$. Continuing this process you find that $a=b=c=d=0$, which is clearly impossible.</p> <p><b>EDIT:</b> Just for funsies, here is another number theoretic way to approach this problem. This is actually what initially occurred to me--not the trace trick. I hope you find it enlightening.</p> <p>$\text{ }$ $\text{ }$</p> <p>Derivation of the ring of integers of $\mathbb{Q}(\sqrt[4]{2})$--skip if you're willing to believe it's $\mathbb{Z}[\sqrt[4]{2}]$.</p> <hr> <p>Let $K=\mathbb{Q}(\sqrt[4]{2})$. It's obvious that $\mathbb{Z}[\sqrt[4]{2}]\subseteq\mathcal{O}_K$, and so to show that $\mathcal{O}_K=\mathbb{Z}[\sqrt[4]{2}]$ it suffices to show that $\mathbb{Z}[\sqrt[4]{2}]$ is integrally closed, or equivalently, that it is locally integrally closed. 
</p> <p>But, since $\mathbb{Z}[\sqrt[4]{2}]$ is a simple integral extension of $\mathbb{Z}$, whose generator has minimal polynomial $f(x)=x^4-2$, it suffices to check that it is integrally closed at the maximal ideals containing $f'(\sqrt[4]{2})=4 \sqrt[4]{2^3}$. In particular, we see that if $f'(\sqrt[4]{2})\in M$, then $8=\sqrt[4]{2}f'(\sqrt[4]{2})\in M$, so that $M$ lies over $2$. But, since $x^4-2\equiv x^4 \mod 2$, the only max ideal of $\mathbb{Z}[\sqrt[4]{2}]$ lying over $2$ is $M=(2,\sqrt[4]{2})$. But, since this max ideal is already principal, $M=(\sqrt[4]{2})$, it's trivial that the only maximal ideal, and thus the only prime ideal (since $\mathbb{Z}[\sqrt[4]{2}]$ is dimension $1$, and thus so is $\mathbb{Z}[\sqrt[4]{2}]_M$) of $\mathbb{Z}[\sqrt[4]{2}]_M$ is principal. Thus, $\mathbb{Z}[\sqrt[4]{2}]_M$ is a PID, and so, in particular, integrally closed. It follows from the previous discussion that $\mathbb{Z}[\sqrt[4]{2}]$ is integrally closed, and thus $\mathcal{O}_K=\mathbb{Z}[\sqrt[4]{2}]$. </p> <hr> <p>$\text{ }$ $\text{ }$</p> <p>Now, note that we can use the Dedekind-Kummer theorem to factor $3\mathcal{O}_K$. Namely, since $(x^2+x+2)(x^2+2x+2)$ is the factorization into irreducibles of $x^4-2$ in $\mathbb{F}_3[x]$, we see that $(3,\sqrt[4]{2^2}+\sqrt[4]{2}+2)(3,\sqrt[4]{2^2}+2\sqrt[4]{2}+2)$ is the factorization of $3\mathcal{O}_K$ into primes. In particular, $3$ doesn't ramify in $K$ (this could have also been determined by finding $d_K$, but I thought it would be nice to have the actual factorization).</p> <p>But, since $3$ ramifies in $\mathbb{Q}(\sqrt{3})$, we must have that $\mathbb{Q}(\sqrt{3})\not\subseteq\mathbb{Q}(\sqrt[4]{2})$, as desired.</p> <p>The above is all kind of misleading. It makes it seem like the technique involving traces is much more natural than the above, but this is only because some more sophisticated machinery went into it--and that I fully fleshed it out. 
The idea is simpler, and is the one I would imagine would immediately occur to most people. Namely, it's simple--$3$ ramifies in $\mathbb{Q}(\sqrt{3})$ but not in $\mathbb{Q}(\sqrt[4]{2})$, so we can't possibly have $\sqrt{3}\in\mathbb{Q}(\sqrt[4]{2})$. </p>
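The multiplication-matrix computation from the first part can be mirrored in code. This hedged Python sketch (the values of a, b, c, d are arbitrary test numbers) builds the matrix of multiplication by $\alpha = a + bt + ct^2 + dt^3$, where $t = 2^{1/4}$, in the basis $1, t, t^2, t^3$, and confirms that its trace is $4a$:

```python
from fractions import Fraction as F

def mult_matrix(a, b, c, d):
    """Matrix of multiplication by alpha = a + b*t + c*t^2 + d*t^3,
    where t = 2^(1/4) (so t^4 = 2), in the basis 1, t, t^2, t^3."""
    cols = [[a, b, c, d]]                 # alpha * 1
    for _ in range(3):                    # alpha * t^(j+1) from alpha * t^j:
        p = cols[-1]                      # multiplying by t shifts coefficients
        cols.append([2 * p[3], p[0], p[1], p[2]])   # and t^4 becomes 2
    # transpose the columns into rows
    return [[cols[j][i] for j in range(4)] for i in range(4)]

a, b, c, d = F(3), F(-1), F(2), F(5)
M = mult_matrix(a, b, c, d)
trace = sum(M[i][i] for i in range(4))    # comes out as 4*a
assert trace == 4 * a
assert M[0] == [a, 2 * d, 2 * c, 2 * b]   # first row: a, 2d, 2c, 2b
```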
2,823,758
<p>I was learning the definition of continuity as:</p> <blockquote> <p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p> </blockquote> <p>For me this translates to the following implication:</p> <blockquote> <p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p> </blockquote> <p>However, I would have expected the definition to be the other way round, i.e. with the implication reversed. The reason for that is that just by looking at the metric space definition of continuity:</p> <blockquote> <p>$\exists q = f(p) \in Y, \forall \epsilon&gt;0,\exists \delta &gt;0, \forall x \in X, 0 &lt; d(x,p) &lt; \delta \implies d(f(x),q) &lt; \epsilon$</p> </blockquote> <p>it seems to be talking about balls (i.e. open sets) in X and then has a forward arrow to open sets in Y, so it seems natural to expect the direction of the implication to go that way round. However, it does not. Why does it not go that way? What is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p> <p>I think conceptually I might even be confused why the topological definition of continuity requires one to start from things in the target space Y and then require things in the domain. Can't we just map things from X to Y and have them be close? <strong>Why do we need to posit things about Y first in either definition for the definition of continuity to work properly</strong>?</p> <hr> <p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lacks the detailed discussion of the direction of the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. 
The second answer there makes an attempt at explaining why we require $f^{-1}$ to preserve openness, but it's not conceptually obvious to me why that's the case or what's going on. Any help?</p> <hr> <p>For whoever suggests closing the question, the question is quite clear:</p> <blockquote> <p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p> </blockquote> <hr> <p>As an additional important point I noticed: pointing out <strong>the difference between an open mapping and a continuous function would be very useful</strong>.</p> <hr> <p>Note: I encountered this in baby Rudin, so that's as far as my background in analysis goes, i.e. metric spaces are my place of understanding. </p> <hr> <p>Extra confusion/Appendix:</p> <p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are supposed to map "nearby points to nearby points", so for me the metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological definition respecting that conceptual requirement? </p>
Daniel Schepler
337,888
<p>I think in the translation, it might help to separate out the direct generalization of the notion of "continuity at a point" from the general topological arguments that this generalization being true at every point is equivalent to the condition on inverse images of open sets.</p> <p>So, recall that for a map $f : X \to Y$ between metric spaces, and $x_0 \in X$, we have $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon &gt; 0, \exists \delta &gt; 0, \forall x \in X, d(x, x_0) &lt; \delta \rightarrow d(f(x), f(x_0)) &lt; \epsilon. $$ Now let us express what this condition is saying in terms of open balls: first, $d(f(x), f(x_0)) &lt; \epsilon$ is equivalent to $f(x) \in B_\epsilon(f(x_0))$, which is further equivalent to $x \in f^{-1}(B_\epsilon(f(x_0)))$. On the other hand, $d(x, x_0) &lt; \delta$ is equivalent to $x \in B_\delta(x_0)$. Therefore, $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon &gt; 0, \exists \delta &gt; 0, \forall x \in X, x \in B_\delta(x_0) \rightarrow x \in f^{-1}(B_\epsilon(f(x_0))). $$ Now, the $\forall x \in X$ part is equivalent to a subset condition, so $f$ is continuous at $x_0$ if and only if: $$ \forall \epsilon &gt; 0, \exists \delta &gt; 0, B_\delta(x_0) \subseteq f^{-1}(B_\epsilon(f(x_0))). $$ Now, note that the $\exists \delta &gt; 0, \ldots$ part is precisely equivalent by definition to: "$f^{-1}(B_\epsilon(f(x_0)))$ is a neighborhood of $x_0$." Furthermore, the collection of $B_\epsilon(f(x_0))$ for $\epsilon &gt; 0$ is precisely the neighborhood basis at $f(x_0)$ coming from the metric on $Y$. 
To summarize, we have seen that more or less directly:</p> <blockquote> <p>$f$ is continuous at $x_0$ if and only if for all basic neighborhoods $N$ of $f(x_0)$, we have $f^{-1}(N)$ is a neighborhood of $x_0$.</p> </blockquote> <hr> <p>Now, not all topological spaces in general will have a natural system of neighborhood bases, so usually the generalization of continuity at a point to general maps of topological spaces will look something like:</p> <blockquote> <p><strong>Definition:</strong> Let $f : X \to Y$ be a map between topological spaces, and $x_0 \in X$. Then $f$ is continuous at $x_0$ if and only if one of the following equivalent statements is true:</p> <ol> <li>For every neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$.</li> <li>For every open neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$.</li> <li>(In the presence of a given system of neighborhood bases on $Y$:) For every basic neighborhood $N$ of $f(x_0)$, we have that $f^{-1}(N)$ is a neighborhood of $x_0$.</li> </ol> </blockquote> <p>(Of course, I think in practice, most textbooks will likely just choose one of these conditions as the definition - in my experience, usually either (1) or (2) - and then prove the equivalence to the other conditions as separate results.)</p> <p>Also, we have the general topological fact: "For any subset $U \subseteq X$, $U$ is open if and only if $U$ is a neighborhood of all of its elements." Using this, it is easy to prove the first equivalence in the below revised definition of continuity:</p> <blockquote> <p><strong>Definition:</strong> Let $f : X \to Y$ be a map between topological spaces. 
Then $f$ is continuous if and only if one of the following equivalent statements is true:</p> <ol> <li>$f$ is continuous at every point of $X$.</li> <li>For every open subset $V \subseteq Y$, we have that $f^{-1}(V)\subseteq X$ is open.</li> <li>(In the presence of a given basis for the topology of $Y$:) For every basic open subset $V \subseteq Y$, we have that $f^{-1}(V) \subseteq X$ is open.</li> </ol> </blockquote> <p>(Of course, again most textbooks will present (2) as the definition of continuity, and then prove equivalence to (1) and (3) as separate results.)</p> <hr> <p>Now, according to the translation above, the $\epsilon$-$\delta$ definition of continuity is most closely related to (1) above, with the continuity at a point $x_0 \in X$ being expanded from (3). Looking more closely at the initial expansion, we see that the overall structure "if $V$ is a basic open neighborhood of $f(x_0)$ then $f^{-1}(V)$ is a neighborhood of $x_0$" expands to the $\forall \epsilon &gt; 0, \exists \delta &gt; 0, \ldots$ part. Whereas the part the question is about, the part $d(x, x_0) &lt; \delta \rightarrow d(f(x), f(x_0)) &lt; \epsilon$, is actually part of the expansion of "$f^{-1}(V)$ is a neighborhood of $x_0$."</p>
109,734
<p>I am trying to do this homework problem and I have no idea how to approach it. I have tried many methods, all resulting in failure. I went to the book's website and it offers no help. I am trying to find the derivative of the function $$y=\cot^2(\sin \theta)$$</p> <p>I could be incorrect, but a trig function squared means the trig function of the angle value, with the result then squared, not the trig function of the angle value squared; that would give a different answer. Knowing this, I also know that I can't use the table of simple trig derivatives, so I can't just take the derivative as $$y=\cot^2(x)$$ $$ x=\sin(\theta)$$ </p> <p>This does not help because I can't get the derivative of cot squared. What I did try was to rewrite it as $\frac{\cos x}{\sin x}\frac{\cos x}{\sin x}$ and then find the derivative of that, but something went wrong and it does not produce an answer like the one in the book. In fact, the book gets a csc squared in the answer, so I know they are doing something very different.</p>
André Nicolas
6,312
<p>You started in a way that leads to the answer.</p> <p>Let $y=\cot^2(\sin \theta)$. We want to find $\dfrac{dy}{d\theta}$. Make the substitution $x=\sin\theta$. (Comment: when we are using substitution, it is more common to use letters like $u$, $v$, $w$, but $x$ is fine here.) Note that $$y=\cot^2 x.$$ Differentiate with respect to $\theta$. We get $$\frac{dy}{d\theta}=\frac{dx}{d\theta}\frac{dy}{dx}.$$</p> <p>Easily, we have $\dfrac{dx}{d\theta}=\cos\theta$. We still need to find $\dfrac{dy}{dx}$ where $y=\cot^2 x$. </p> <p>How shall we do this? There are several possible ways. For example, $\cot^2 x$ is a <em>product</em>, so we could use the Product Rule. Or else, we can use the Chain Rule again. Let $w=\cot x$. Then $$y=w^2,$$ and therefore $$\frac{dy}{dx}=\frac{dw}{dx}\frac{dy}{dw}.$$ Since $y=w^2$, we have $\dfrac{dy}{dw}=2w$. We still need $\dfrac{dw}{dx}$. </p> <p>Since $w=\cot x$, in order to find $\dfrac{dw}{dx}$ we need to find the derivative of $\cot x$. There are many approaches to this. Perhaps it is one of the derivatives that you just remember: the answer is $-\csc^2 x$. Or if you don't remember this derivative, use the fact that $$\cot x=\frac{\cos x}{\sin x}$$ and use the Quotient Rule. After a while, we find that $$\frac{dw}{dx}=\frac{-(\sin^2 x+\cos^2 x)}{\sin^2 x}.$$ Using the fact that $\sin^2 x+\cos^2 x=1$, you can simplify this to $-\dfrac{1}{\sin^2 x}$, and then to $-\csc^2 x$. Finally, it is time to put the pieces together. We had $$\frac{dy}{d\theta}=\frac{dx}{d\theta}\frac{dy}{dx}=\frac{dx}{d\theta}\frac{dw}{dx}\frac{dy}{dw}.$$</p> <p>In the calculations above, we found: $$\frac{dx}{d\theta}=\cos\theta;\qquad \frac{dw}{dx}=-\csc^2 x;\qquad \frac{dy}{dw}=2w.$$ Multiply them together, but first express everything in terms of the original variable $\theta$. So $-\csc^2 x=-\csc^2(\sin\theta)$ and $2w=2\cot x=2\cot(\sin\theta)$.</p>
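The whole computation can be double-checked with a computer algebra system. A quick sketch in SymPy (the variable names are mine), multiplying the three pieces found above and comparing with direct differentiation:

```python
import sympy as sp

theta = sp.symbols('theta')

# Differentiate y = cot^2(sin(theta)) directly.
y = sp.cot(sp.sin(theta))**2
dy = sp.diff(y, theta)

# The three pieces from the chain-rule computation, multiplied together:
# dx/dtheta = cos(theta), dw/dx = -csc^2(sin(theta)), dy/dw = 2*cot(sin(theta)).
expected = sp.cos(theta) * (-sp.csc(sp.sin(theta))**2) * 2 * sp.cot(sp.sin(theta))

# The difference should simplify to zero, confirming the two expressions agree.
residual = sp.simplify(dy - expected)
```

SymPy writes the direct derivative in terms of $-1-\cot^2$, which is the same as $-\csc^2$, so the residual simplifies to zero.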
4,268,962
<blockquote> <p>Check whether <span class="math-container">$y=\ln (xy)$</span> is a solution of the following differential equation</p> <p><span class="math-container">$$(xy-x)y''+xy'^2+yy'-2y'=0$$</span></p> </blockquote> <p>First I tried to solve the equation:</p> <p><span class="math-container">$$x(yy''-y''+y'^2)+yy'-2y'=0$$</span> <span class="math-container">$$x((yy')'-y'')+(yy')-2y'=0$$</span> Since I have <span class="math-container">$-y''$</span> in the parentheses, the substitution <span class="math-container">$z=yy'$</span> doesn't work here; if it were <span class="math-container">$-2y''$</span> instead, I could use the substitution <span class="math-container">$u=yy'-2y'$</span>, but that is not the case.</p> <hr /> <p>My second try was taking the derivative of the proposed solution (i.e. <span class="math-container">$y=\ln(xy)$</span>) and plugging it into the D.E.:</p> <p><span class="math-container">$$y'=\frac1x+\frac{y'}y\quad\Rightarrow y'\left(1-\frac1y\right)=\frac1x\quad\Rightarrow y'=\frac y{y-1}\times \frac1x$$</span></p> <p><span class="math-container">$$y''=\frac{-1}{x^2}+\frac{yy''-y'^2}{y^2}\quad\Rightarrow y''=\frac{y}{y-1}\times\left(\frac{-1}{x^2}-\frac{y'^2}{y^2}\right)$$</span> But it gets really ugly when I plug <span class="math-container">$y,y',y''$</span> into the original equation.</p>
Rezha Adrian Tanuharja
751,970
<p>Here is an alternative</p> <p><span class="math-container">$$ \begin{align} 0&amp;=(xy-x)\cdot\frac{d^{2}y}{dx^{2}}+x\cdot\left(\frac{dy}{dx}\right)^{2}+y\cdot\frac{dy}{dx}-2\cdot\frac{dy}{dx}\\ \\ &amp;=\frac{d}{dx}\left[(xy-x)\cdot\frac{dy}{dx}-y\right]\\ \\ \\ C_{1}&amp;=(xy-x)\cdot\frac{dy}{dx}-y\\ \\ &amp;=xy\cdot\frac{dy}{dx}-\frac{d}{dx}(xy) \end{align} $$</span></p> <p>In a particular case when <span class="math-container">$C_{1}=0$</span> we have the following:</p> <p><span class="math-container">$$ \begin{align} \frac{dy}{dx}&amp;=\frac{1}{xy}\cdot\frac{d}{dx}(xy)\\ \\ y&amp;=\ln(xy)+C_{2} \end{align} $$</span></p> <p>In a particular case when <span class="math-container">$C_{2}=0$</span> we get what is asked</p>
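The key step, that the left-hand side of the ODE is an exact derivative of $(xy-x)y'-y$, can be verified symbolically. A sketch in SymPy:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
yp = sp.diff(y, x)

# Left-hand side of the differential equation.
lhs = (x*y - x)*sp.diff(y, x, 2) + x*yp**2 + y*yp - 2*yp

# The first integral used above: (xy - x) y' - y.
first_integral = (x*y - x)*yp - y

# The ODE's left side minus the derivative of the first integral cancels exactly.
residual = sp.expand(lhs - sp.diff(first_integral, x))
```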
3,907,928
<p>Suppose the solutions to a general cubic equation <span class="math-container">$ax^3+bx^2+cx+d=0$</span> are to be found. According to Cardano's method, first a variable substitution must be carried out to convert the general cubic to a depressed cubic: <span class="math-container">$$ax^3+bx^2+cx+d=0\rightarrow t^3+pt+q=0\ \text{where $x=t-\frac{b}{3a}$}$$</span> This seems pretty clear, but what happens next is not so obvious to me.</p> <p>Now we take <span class="math-container">$t=u+v$</span> to get <span class="math-container">$u^3+v^3+(3uv+p)(u+v)+q=0$</span>. Assuming <span class="math-container">$3uv+p=0$</span>, we get the system <span class="math-container">$$u^3+v^3=-q$$</span> <span class="math-container">$$uv=\frac{-p}{3}$$</span> Using Vieta's formulas, the quadratic equation with <span class="math-container">$u^3$</span> and <span class="math-container">$v^3$</span> as roots is <span class="math-container">$$z^2+qz-\frac{p^3}{27}=0$$</span> Now the roots of the quadratic can be found by analysing the discriminant.</p> <p>My question is: what was the inspiration behind assuming <span class="math-container">$3uv+p=0$</span>, other than that it helps to form the quadratic equation?</p> <p><em>Please help</em></p> <p><em><strong>THANKS</strong></em></p>
Will Orrick
3,736
<p>There is a geometric picture behind this. The expression <span class="math-container">$(u+v)^3$</span> can be visualized as a cube subdivided into eight rectangular solids of sides <span class="math-container">$u\times u\times u$</span>, <span class="math-container">$u\times u\times v$</span> (permuted three ways), <span class="math-container">$u\times v\times v$</span> (permuted three ways), and <span class="math-container">$v\times v\times v$</span>. As an alternative, however, it can be understood as subdivided into five rectangular solids of sides <span class="math-container">$u\times u\times u$</span>, <span class="math-container">$t\times u\times v$</span> (cyclically permuted three ways), and <span class="math-container">$v\times v\times v$</span>.</p> <p><a href="https://i.stack.imgur.com/AtnPr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtnPr.png" alt="enter image description here" /></a></p> <p>If <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are negative, the equation <span class="math-container">$t^3+pt=-q$</span> can be understood geometrically as saying that if from a cube of side <span class="math-container">$t$</span> a rectangular solid of length <span class="math-container">$t$</span> and cross-sectional area <span class="math-container">$-p$</span> is removed, or three rectangular solids of length <span class="math-container">$t$</span> and cross-sectional area <span class="math-container">$-\frac{p}{3}$</span> are removed, then the remaining volume is <span class="math-container">$-q$</span>. If we arrange for the cross-sectional area <span class="math-container">$-\frac{p}{3}$</span> to be that of a <span class="math-container">$u\times v$</span> rectangle, then the remaining volume will consist of a cube of side <span class="math-container">$u$</span> and a cube of side <span class="math-container">$v$</span>. 
Hence we have <span class="math-container">$$ v=-\frac{p}{3u},\quad u^3+v^3=-q, $$</span> which is quadratic in <span class="math-container">$u^3$</span>.</p> <p>This doesn't come completely out of the blue: at the point in time when Tartaglia solved the depressed cubic, there was a 3500 year history of solving quadratics using an analogous 2-dimensional geometric picture. The ancient Mesopotamians, for example, solved, in essence, the problem of finding <span class="math-container">$t$</span> given that <span class="math-container">$t(t+a)=b$</span>. They did this by arranging a square of side <span class="math-container">$t$</span>, a square of side <span class="math-container">$\frac{a}{2}$</span>, and rectangles of dimensions <span class="math-container">$t\times\frac{a}{2}$</span> and <span class="math-container">$\frac{a}{2}\times t$</span> into a square of side <span class="math-container">$t+\frac{a}{2}$</span>, which must then have area <span class="math-container">$b+\left(\frac{a}{2}\right)^2$</span>. From <span class="math-container">$\left(t+\frac{a}{2}\right)^2=b+\left(\frac{a}{2}\right)^2$</span>, one can compute <span class="math-container">$t+\frac{a}{2}$</span> and hence <span class="math-container">$t$</span>. Although the Mesopotamian algorithms were described in words only, accompanied neither by equations nor by geometric diagrams, a careful <a href="https://www.mprl-series.mpg.de/textbooks/2/4/index.html" rel="nofollow noreferrer">analysis</a> by Jens Høyrup of the geometric terminology used provides convincing evidence that this was how they understood their procedures. Closer to the time of Tartaglia, al-Khwarizmi solved quadratics by methods based on a similar geometric picture.</p>
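The resulting procedure is easy to put into code. A minimal sketch for a depressed cubic $t^3+pt+q=0$ with real coefficients, assuming the discriminant expression $q^2/4+p^3/27$ is nonnegative (the casus irreducibilis with three real roots is not handled; the function names are mine):

```python
import math

def real_cbrt(x):
    """Real cube root, handling negative inputs."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def cardano_depressed(p, q):
    """One real root of t^3 + p*t + q = 0, assuming q^2/4 + p^3/27 >= 0."""
    # u^3 and v^3 are the roots of z^2 + q*z - p^3/27 = 0.
    disc = q * q / 4.0 + p ** 3 / 27.0
    u = real_cbrt(-q / 2.0 + math.sqrt(disc))
    v = real_cbrt(-q / 2.0 - math.sqrt(disc))
    return u + v

# Example: t^3 - 6t - 9 = 0 has the root t = 3
# (here u^3 = 8 and v^3 = 1, so u + v = 2 + 1 = 3).
root = cardano_depressed(-6.0, -9.0)
```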
1,586,354
<p>I did the following exercise:</p> <blockquote> <p>Suppose $n$ is an even positive integer and $H$ is a subgroup of $\mathbb{Z}_n$ (integers mod n with addition). Prove that either every member of $H$ is even or exactly half of the members of $H$ are even.</p> </blockquote> <p>My answer:</p> <p>Since $\mathbb{Z}_n$ is cyclic so is $H$. If $k$ generates $H$ when $k$ is even then every element in $H$ is even. If $k$ is odd then exactly every other element is even which proves the claim. </p> <p>Assuming my proof is correct I was wondering how else to do this. The exercise appears before the chapter about cyclic groups. </p> <blockquote> <p>How to answer this question without using any knowledge of cyclic groups, generators, etc.?</p> </blockquote>
Matt Samuel
187,867
<p>Suppose there is an element $x$ that isn't even. Let $A$ be the set of even elements in the subgroup and define $B=\{x+a:a\in A\}$. Then every element of $B$ is odd. Prove that $A$ and $B$ have the same number of elements and the subgroup is the disjoint union of $A$ and $B$.</p>
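Both cases are easy to confirm by brute force for small even $n$: enumerating the subgroup generated by each element of $\mathbb{Z}_n$ covers every subgroup. A quick sketch (function names are mine):

```python
def cyclic_subgroup(k, n):
    """The subgroup of Z_n generated by k under addition mod n."""
    return {(i * k) % n for i in range(n)}

def check_parity_claim(n):
    """For even n: every subgroup is all-even or exactly half even."""
    assert n % 2 == 0
    for k in range(n):
        H = cyclic_subgroup(k, n)
        evens = sum(1 for h in H if h % 2 == 0)
        if not (evens == len(H) or 2 * evens == len(H)):
            return False
    return True

# Check the claim for all even n up to 20.
results = [check_parity_claim(n) for n in range(2, 21, 2)]
```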
2,402,410
<p>I defined the "function":</p> <p>$$f(t)=t \delta(t)$$</p> <p>I know that Dirac "function" is undefined at $t=0$ (see <a href="http://web.mit.edu/2.14/www/Handouts/Convolution.pdf" rel="nofollow noreferrer">http://web.mit.edu/2.14/www/Handouts/Convolution.pdf</a>).</p> <p>In Wolfram I get $0 \delta(0)=0$ (<a href="http://www.wolframalpha.com/input/?i=0" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=0</a>*DiracDelta(0)). Why? I expect $0 \delta(0)=undefined$ (if $\delta(0)=\infty$, thus I will have an indeterminate form $0 \infty$).</p> <p>Thank you for your time.</p>
Eric Towers
123,905
<p>Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=DiracDelta(1)" rel="nofollow noreferrer">evaluates</a> <code>DiracDelta(1)</code>, giving zero. Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=DiracDelta(0)" rel="nofollow noreferrer">fails to interpret</a> <code>DiracDelta(0)</code>, nor does it <a href="http://www.wolframalpha.com/input/?i=1*DiracDelta(0)" rel="nofollow noreferrer">evaluate</a> <code>1 * DiracDelta(0)</code>, leaving <code>δ(0)</code> unevaluated. Wolfram Alpha believes that the product of <code>0</code> and an unevaluated symbol is zero. You can see this with <a href="http://www.wolframalpha.com/input/?i=0*f" rel="nofollow noreferrer"><code>0*f</code></a> and with <a href="http://www.wolframalpha.com/input/?i=0*DiracDelta(0)" rel="nofollow noreferrer"><code>0*DiracDelta(0)</code></a>, both yielding $0$.</p> <p>This is not so surprising, given the definition of a field. If we presume that we work in a <a href="https://en.wikipedia.org/wiki/Field_(mathematics)" rel="nofollow noreferrer">field</a> large enough to contain $\mathbb{R}$ and $\delta(0)$, so that it is possible to interpret the string "$0 \cdot \delta(0)$", then we must find that $0 \cdot \delta(0) = 0$. It's a <a href="https://en.wikipedia.org/wiki/Field_(mathematics)#Consequences_of_the_definition" rel="nofollow noreferrer">standard elementary exercise</a> to show that multiplying any element of a field by the additive identity yields the additive identity.</p> <p>We may also choose to use elementary limit relations.
Suppose we have defined $\delta(t) = \lim_{x \rightarrow 0} f(x,t)$ (a common example is to define $\delta$ to be the limit of Gaussian PDFs as the standard deviation goes to zero), as long as each limit on each RHS in the following exists: \begin{align*} 0 \cdot \delta(0) &amp;\triangleq 0 \cdot \lim_{x \rightarrow 0} f(x,0) \\ &amp;= \lim_{x \rightarrow 0} 0 \cdot \lim_{x \rightarrow 0} f(x,0) \\ &amp;= \lim_{x \rightarrow 0} (0 \cdot f(x,0) ) \\ &amp;= \lim_{x \rightarrow 0} 0 \\ &amp;= 0 \end{align*} The only questionable limit is the first one, from our definition of $\delta$. If we claim $0$ is in the domain of $\delta$, we must allow this limit. Everything else follows. (When one is being careful, one does not claim that $\delta$ has a "domain" because $\delta$ is not a function. At this level of care, one describes delta as an element of some dual space of your space of functions so that the right place for it to appear is as a multiplicand in an integrand. Then one defines $\delta$ in terms of its behaviour in this compound object, not as if it were an independently meaningful thing. This also happens in elementary Calculus : $\mathrm{d}x$ is not a separately intelligible part of $\frac{\mathrm{d}}{\mathrm{d}x}$. The compound object is defined, not its parts. This point of view is described in other answers.)</p>
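For what it's worth, SymPy appears to follow the same convention as Wolfram Alpha here: $\delta$ at a nonzero point evaluates to zero, $\delta(0)$ is left unevaluated, and the product of the exact integer $0$ with the unevaluated expression collapses to $0$. A quick check (this illustrates the CAS convention, not a statement about distributions):

```python
import sympy as sp

# DiracDelta at a nonzero point evaluates to zero...
at_one = sp.DiracDelta(1)

# ...while DiracDelta(0) is left unevaluated,
at_zero = sp.DiracDelta(0)

# and the product of the exact integer 0 with the unevaluated symbol is 0.
product = 0 * sp.DiracDelta(0)
```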
1,554,285
<p>Here's my problem:</p> <blockquote> <p>In Ohio, 55% of the population support the republican candidate in an upcoming election. 200 people are polled at random. If we suppose that each person’s vote (for or against) is a Bernoulli random variable with probability p, and votes are independent,</p> <p>(a) Show that the number of people polled that support the democratic candidate X has distribution Bin(200, .45) and calculate the mean and variance.</p> <p>(b) Calculate directly the probability that more than half of the polled people will vote for the democratic candidate. Tell me the equation that you used to solve this.</p> <p>(c) Use the CLT to approximate the Binomial probability and calculate the approximate probability that half of the polled people will vote for the democratic candidate</p> </blockquote> <p>And here's what I got so far:</p> <p><strong>Part a:</strong> Suppose $X$ people support the democratic candidate; then there are $\binom {200} {X}$ possible ways to select those people, giving $\binom {200} {X} (0.45)^X (0.55)^{200-X}$. Therefore the given distribution is a binomial distribution with $n=200$ and $p=0.45$.</p> <p>According to the theorem, the mean of the probability distribution is given as $E(X) = n*p = 200 * 0.45 = 90$</p> <p>The variance of the probability distribution is given as $E(X^2) - (E(X))^2 = np(1-p)$</p> <p>For this problem,</p> <p>$200*(0.45)*(1-0.45) = 49.5$</p> <p><strong>Part b:</strong> More than half of the people voting for the democratic candidate would be equal to $\sum\limits_{i=101}^{i=200} \binom {200} {i} (0.45)^i (0.55)^{200-i}$</p> <p><strong>Part c</strong> I'm at a total loss. </p> <p>I'm very new to these sorts of problems and suspect I might be way off the mark on every part. Any guidance would be appreciated. (Apologies if this is way too long a problem, I can split it up.)</p>
Erick
224,176
<p>$c)$ Let $X\sim Bin(200,.45)$ count the number of votes for the democratic candidate, so $X=\sum_{i=1}^{200}\xi_i$ where the $\xi_i\sim Bernoulli(.45)$ are i.i.d. By the central limit theorem we have, approximately: $$X\sim^{\star}N(np,np(1-p))$$ $$X\sim^{\star}N(90,49.5)$$ So: $$P[X&gt; 100]=1-P[X\leq100]\approx $$ $$1-P\left[\dfrac{X-90}{\sqrt{49.5}}\leq \dfrac{100-90}{\sqrt{49.5}}\approx1.4213\right]$$ $$\approx1-.92115=.07885$$</p>
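The quality of the CLT approximation can be checked against the exact binomial tail from part (b). A small sketch using only the standard library (the normal CDF is computed via the error function):

```python
import math

n, p = 200, 0.45
mean = n * p                      # 90
var = n * p * (1 - p)             # 49.5

# Exact tail probability P[X > 100] for X ~ Bin(200, 0.45).
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(101, n + 1))

# Normal (CLT) approximation: P[Z > (100 - 90)/sqrt(49.5)].
z = (100 - mean) / math.sqrt(var)
approx = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

Both values come out small (roughly 7-8%), and they agree to about one percentage point; a continuity correction would bring them closer still.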
4,487,380
<p>I was reading my calculus book when I came across a note worth attention. It says:</p> <blockquote> <p>Integrals of the form <span class="math-container">$\int P(x)e^{ax}dx$</span> have a special property. After calculating the integral, we obtain a function of the form <span class="math-container">$Q(x)e^{ax}$</span> where <span class="math-container">$Q(x)$</span> is a polynomial of the same degree as <span class="math-container">$P(x)$</span>. This is called the method of indefinite coefficients.</p> </blockquote> <p>For example, to evaluate <span class="math-container">$\displaystyle\int (3x^3-17)e^{2x}dx$</span>, we could proceed traditionally by integration by parts, but let me show you my, or rather the author's, method:</p> <p>Let this be equal to <span class="math-container">$(Ax^3+Bx^2+Cx+D)e^{2x}$</span></p> <p>Now differentiating both sides we get, <span class="math-container">$$(3x^3-17)e^{2x}=2(Ax^3+Bx^2+Cx+D)e^{2x}+e^{2x}(3Ax^2+2Bx+C)$$</span> Now we cancel <span class="math-container">$e^{2x}$</span> on both sides, and the rest is equating the coefficients.</p> <p>I have seen that this property is applicable in every question, but I don't know the mathematical proof of it. It wasn't even in the book.</p> <p>Any help regarding the proof is greatly appreciated.</p>
Robert Lee
695,196
<p>The statement</p> <blockquote> <p><span class="math-container">$\int P(x) e^{ax}\, \mathrm{d}x = Q(x) e^{ax}$</span> with <span class="math-container">$\deg(P(x)) = \deg(Q(x)) $</span></p> </blockquote> <p>is equivalent (by the definition of antiderivative) to showing that <span class="math-container">$$ \frac{\mathrm{d}}{\mathrm{d}x}Q(x) e^{ax} = P(x) e^{ax} $$</span> where <span class="math-container">$\deg(P(x)) = \deg(Q(x)) $</span>.</p> <p>Thus, writing <span class="math-container">$Q(x) = \sum_{k=0}^\color{blue}{n} c_k x^k$</span> for some constants <span class="math-container">$c_k$</span> with <span class="math-container">$c_n \neq 0$</span> then <span class="math-container">$\deg(Q(x)) = \color{blue}{n}$</span>. So we get</p> <p><span class="math-container">\begin{align} \frac{\mathrm{d}}{\mathrm{d}x}Q(x) e^{ax} &amp; =\frac{\mathrm{d}}{\mathrm{d}x}\left( c_0 e^{ax} +\sum_{k=1}^n c_k x^ke^{ax}\right)\\ &amp; =ac_0e^{ax} +\sum_{k=1}^n c_k \left( kx^{k-1} e^{ax} + x^kae^{ax}\right)\\ &amp; \overset{\color{purple}{k-1\to k}}{=} \left(ac_0+ \sum_{k=\color{purple}{0}}^{\color{purple}{n-1}} c_{k+1} (k+1) x^{k} +\sum_{k=1}^n c_k ax^k\right) e^{ax}\\ &amp; = \underbrace{\left(\sum_{k=0}^{n}C_k x^k\right)}_{P(x)} e^{ax} \end{align}</span> where <span class="math-container">$$ C_k =\begin{cases} a c_0 + c_1, &amp; k=0\\ c_{k+1}(k+1) + c_ka, &amp; 1\le k \le n-1\\ c_ka, &amp; k=n \end{cases} $$</span> and since <span class="math-container">$C_n = c_n a \neq 0$</span> then <span class="math-container">$\deg(P(x)) = n$</span> as well.</p>
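The claim can be spot-checked for the book's example with SymPy: integrate directly, peel off $e^{2x}$, and confirm the remaining factor is a degree-3 polynomial. A sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Integrate the example directly.
antideriv = sp.integrate((3*x**3 - 17) * sp.exp(2*x), x)

# Peel off e^{2x}: the remaining factor Q(x) should be a degree-3 polynomial.
Q = sp.simplify(antideriv * sp.exp(-2*x))
degree = sp.Poly(Q, x).degree()

# Differentiating back should recover the original integrand exactly.
residual = sp.simplify(sp.diff(antideriv, x) - (3*x**3 - 17)*sp.exp(2*x))
```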
1,643,649
<blockquote> <p>I need to show that $\Bbb Z^*_8$ is not isomorphic to $\Bbb Z^*_{10}$.</p> </blockquote> <p>$\Bbb Z^*_n$ means integers up to $n$ coprime with $n$</p> <p>I do not know how to do this. I have difficulties doing proofs involving isomorphisms. A methodological answer would be highly appreciated.</p> <p>Thanks in advance!</p>
Alex Wertheim
73,817
<p>Hint: see if you can show that one of these groups is cyclic, whereas the other is not. </p>
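Following the hint, a quick brute-force computation of element orders settles it: a finite group is cyclic exactly when some element has order equal to the group's size. A sketch (helper names are mine):

```python
from math import gcd

def units_mod(n):
    """Elements of Z*_n: residues in 1..n-1 coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order_mod(a, n):
    """Multiplicative order of a modulo n."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

orders_8 = [order_mod(a, 8) for a in units_mod(8)]     # Z*_8 = {1, 3, 5, 7}
orders_10 = [order_mod(a, 10) for a in units_mod(10)]  # Z*_10 = {1, 3, 7, 9}
```

Both groups have 4 elements, but every element of $\Bbb Z^*_8$ squares to the identity (maximum order 2, so no generator), while $\Bbb Z^*_{10}$ contains an element of order 4 and hence is cyclic; therefore the groups are not isomorphic.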
3,789,060
<p>I was asked the following question:</p> <blockquote> <p>Determine if the following set is a vector space:<br /> <span class="math-container">$$W=\left\{\left[\begin{matrix}p\\q\\r\\s\\\end{matrix}\right]:\begin{matrix}-3p+2q=-s\\p=-s+3r\\\end{matrix}\right\}$$</span></p> </blockquote> <p>I know the answer is yes and you can show it by showing that $W$ is a subspace of <span class="math-container">$\mathbb{R}^4$</span>. But, I have no idea how to show that, or in general how to determine if a set is a vector space. I am interested in understanding so that I can apply it to future questions, not just so that I can answer this question.</p>
John Wayland Bales
246,513
<p>We have</p> <p><span class="math-container">$$ \arctan(\cot(\pi x))=\frac{\pi}{2}-\pi x+\pi\left\lfloor x\right\rfloor $$</span></p> <p>so it is continuous on intervals <span class="math-container">$[n,n+1)$</span> for <span class="math-container">$n$</span> an integer.</p>
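The identity is easy to spot-check numerically away from the integer points, where $\cot(\pi x)$ is undefined. A quick sketch:

```python
import math

def lhs(x):
    # arctan(cot(pi*x))
    return math.atan(1.0 / math.tan(math.pi * x))

def rhs(x):
    # pi/2 - pi*x + pi*floor(x)
    return math.pi / 2 - math.pi * x + math.pi * math.floor(x)

# Sample points avoiding integers, including negative x and x > 1.
samples = [0.1, 0.3, 0.7, 1.2, 2.9, -0.4]
max_err = max(abs(lhs(x) - rhs(x)) for x in samples)
```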
4,098,630
<p>I am trying to solve for x and y using the following equation: <span class="math-container">$4i + 2 = \frac{x + iy + 5 + 4i}{2x + 2iy - 5}$</span></p> <p>I got it down to real and imaginary, but am unsure what to do next. <span class="math-container">$3x-8y-15=(-8x-3y+24)i$</span></p>
lonza leggiera
632,373
<p><strong>Hint</strong></p> <p>If <span class="math-container">$\ x,y\ $</span> are required to be real numbers, then equating the real and imaginary parts of both sides of the equation <span class="math-container">$$ 3x-8y-15=(-8x-3y+24)i $$</span> will give you two linear equations in <span class="math-container">$\ x\ $</span> and <span class="math-container">$\ y\ $</span> which you can solve to obtain their values.</p>
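Carrying the hint through with SymPy, declaring $x$ and $y$ real so the real/imaginary split is automatic (starting from the cleared-denominator form of the original equation):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Clear the denominator: (2 + 4i)(2x + 2iy - 5) - (x + iy + 5 + 4i) = 0.
expr = sp.expand((2 + 4*sp.I) * (2*x + 2*sp.I*y - 5) - (x + sp.I*y + 5 + 4*sp.I))

# Real and imaginary parts each vanish: two linear equations in x and y.
sol = sp.solve([sp.re(expr), sp.im(expr)], [x, y])
```

The real part gives $3x-8y-15=0$ and the imaginary part $8x+3y-24=0$, whose solution is $x=\frac{237}{73}$, $y=-\frac{48}{73}$.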
4,098,630
<p>I am trying to solve for x and y using the following equation: <span class="math-container">$4i + 2 = \frac{x + iy + 5 + 4i}{2x + 2iy - 5}$</span></p> <p>I got it down to real and imaginary, but am unsure what to do next. <span class="math-container">$3x-8y-15=(-8x-3y+24)i$</span></p>
Learner
855,893
<p>Suppose $a$ and $b$ are two real numbers. If at some point we arrive at a condition of the form $a=ib$, then since $a$ and $b$ are real, the two sides can be equal only when both are zero, so $a=b=0$. You can proceed with your problem from here.</p>
2,469,720
<p>Math problem:</p> <blockquote> <p>Find $x$, given that $ \, 2^2 \times 2^4 \times 2^6 \times 2^8 \times \ldots \times 2^{2x} = \left( 0.25 \right)^{-36}$</p> </blockquote> <p>To solve this question, I changed the left side of the equation to $2^{2+4+6+ \ldots + 2x}$ and the right side to: $\frac{2^{74}}{3^{36}}$.</p> <p>My question is: how can $3$ to a power (in this case $36$) be changed to $2$ to a power? (Algebraically, without a calculator.)</p> <p>By checking with a calculator and taking $\log$s, I found that the exponent is not a whole number, so this must be the wrong method for this question.</p>
user577215664
475,762
<p>$( 0.25)^{-36}=\left( \frac 1 4\right) ^{-36}=(2^{-2})^{-36}=(2^{36})^2$</p> <p>$2^2\,2^4\,2^6\cdots 2^{2x}=(2^1\,2^2\,2^3\cdots 2^x)^2$</p> <p>So we must have:</p> <p>$(2^{36})^2=(2^1\,2^2\,2^3\cdots 2^x)^2$</p> <p>Or simply:</p> <p>$2^{36}=2^1\,2^2\,2^3\cdots 2^x$</p> <p>$1+2+3+\cdots+x=36$</p> <p>$\frac {(x+1)x} 2=36$</p> <p>$x^2+x=72$</p> <p>$x=8$</p> <p>Check: $8^2+8=72$</p>
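A quick sanity check of the result in Python, keeping everything as exact integers:

```python
import math

x = 8

# Left side: 2^2 * 2^4 * ... * 2^{2x} = 2^{2 + 4 + ... + 2x}
left = math.prod(2**(2*k) for k in range(1, x + 1))

# Right side: (0.25)^{-36} = 4^{36}, kept as an exact integer.
right = 4**36
```

Both sides equal $2^{72}$, confirming $x=8$.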