1,961,727
<p>As far as I understand, <a href="https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process">Gram–Schmidt orthogonalization</a> starts with a set of linearly independent vectors and produces a set of mutually orthonormal vectors that spans the same space that the starting vectors did.</p> <p>I have no problem understanding the algorithm, but here is the thing I fail to get. Why do I need to do all these calculations? For example, instead of doing the calculations provided in the example section of that wiki page, why can't I just grab two basis vectors $w_1 = (1, 0)'$ and $w_2 = (0, 1)'$? They are clearly orthonormal and span the same subspace as the original vectors $v_1 = (3, 1)'$, $v_2 = (2, 2)'$.</p> <p>It is clear that I'm missing something important, but I can't see what exactly.</p>
Community
-1
<p>Orthonormal bases are nice because several formulas are much simpler when vectors are given wrt an ON basis. </p> <p>Example: Let $\mathcal E = \{e_1, \dots, e_n\}$ be an ON basis. Then the Fourier expansion of any vector $v\in\operatorname{span}(\mathcal E)$ is just $$v = (v\cdot e_1)e_1 + (v\cdot e_2)e_2 + \cdots + (v\cdot e_n)e_n$$</p> <p>Notice that there are no normalization factors and we don't need to construct a dual basis -- it's just a really simple formula.</p> <p>In your example, of course $\{(1,0),(0,1)\}$ spans the same space as $\{(3,1),(2,2)\}$. But let me provide an example of my own: what about $\{(1.1,1.2,0.9,2.1,4),(3,-2,6,14,2),(6,6,6,3.4,11.1)\}$? There's certainly no subset of the standard basis vectors that spans the same space as these linearly independent vectors. But this is a pretty poor choice of basis because they're not orthonormal. It'd sure be nice if we had some algorithm that could produce an ON basis from them...</p>
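To make that last point concrete, here is a minimal Python sketch of (modified) Gram–Schmidt applied to those three 5-dimensional vectors. The function name and the use of plain lists are my own illustration, not part of the answer:

```python
from math import sqrt

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (plain lists)."""
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:
            # subtract the projection of the running vector onto each earlier basis vector
            c = sum(wi * ei for wi, ei in zip(w, e))
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

vs = [(1.1, 1.2, 0.9, 2.1, 4), (3, -2, 6, 14, 2), (6, 6, 6, 3.4, 11.1)]
es = gram_schmidt(vs)
# each e_i now has unit length, and distinct e_i, e_j are orthogonal
```

The output vectors span the same subspace as the inputs, but dot products against them now give Fourier coefficients directly, as in the answer's expansion formula.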
2,007,373
<p>At some point in your life you were taught how to understand the dimensions of a line, a point, a plane, and an n-dimensional object. </p> <p>For me the first instance that comes to memory was in 7th grade in an inner-city USA school district. </p> <p>Getting to the point, my geometry teacher taught,</p> <p>"a point has no length, width, or depth in any dimension; if you take a string of points and line them up for "x" distance you have a line; the line has "x" length and zero height; when you stack the lines on top of each other for "y" distance you get a plane"</p> <p>Meanwhile I'm experiencing cognitive dissonance: how can anything with zero length or width be stacked on top of itself and build itself into something with width or length?</p> <p>I quit math. </p> <p>Cut to a few years after high school, and I'm deep in the maths. </p> <p>I rationalized geometry with my own theory, which didn't conflict with any of geometry or trigonometry. </p> <p>I theorized that a point in space is an infinitely small space in every dimension, such that you can add points together to get a line, or add lines to get a plane. </p> <p>Now you can say that the line has infinitely small height approaching zero, but not zero.</p> <p>What really triggered me is that a Linear Algebra professor at my school said that lines have zero height and didn't listen to my argument. . . </p> <p>I don't know if my intuition is any better than hers . . . if I'm wrong, if she's wrong . . . </p> <p>I would very much appreciate some advice on how to deal with these sorts of things. </p>
AnoE
360,316
<blockquote> <p>Meanwhile I'm experiencing cognitive dissonance: how can anything with zero length or width be stacked on top of itself and build itself into something with width or length?</p> </blockquote> <p>That's not what happens, at all, when going from zero to one and then on to n dimensions. You are thinking too literally. A mathematical point, line, or plane does not exist in physical reality; it is simply a thought construct. </p> <p>We have to separate the "dots" we make on paper from mathematical "points". The point your professor is likely talking about probably consists of a vector (a bunch of numbers) together with an associated vector base in some vector space, with no degree of freedom (0 dimensions). A line is the same, with one degree of freedom (1 dimension), which is achieved by adding a second vector, multiplied by a variable. A plane is the same, with two degrees of freedom (2 dimensions), achieved by adding two vectors multiplied by two independent variables, and so on. Or, if vector spaces are not used, then the point is simply a collection of numbers describing its location.</p> <p>At no point whatsoever does even the <em>concept</em> of "thickness" enter the picture. At no point whatsoever do we "stack points with zero length and end up with a line". </p> <p>In general, pretty much anything can be a vector space; those are not limited to what we understand as points, lines, planes, volumes, at all. But in each vector space, the concepts are the same - you have objects which are spanned by vectors, which have <em>no other attributes</em> except those that are explicitly given (a base and something spanning them) and so on; and usually no easy/intuitive/naive correlation to "reality".</p> <blockquote> <p>What really triggered me is a Linear Algebra professor at my school said that lines have zero height and didn't listen to my argument. . .</p> </blockquote> <p>So, did <em>you</em> then listen to <em>her</em> argument?</p> <p>EDIT: removed some stuff not really belonging here.</p>
2,109,832
<p>This is for beginners in probability!</p> <p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
David Holden
79,543
<p>for any $\theta$ we have $$ \sin 2\theta = 2 \sin \theta\cos \theta $$ if<br> $$ \theta = \arcsin \frac35 $$ then, trivially, $$ \sin \theta = \frac35 $$ using $$ \cos^2 \theta +\sin^2 \theta = 1 $$ can you compute $\cos \theta$ to finish off? (how do you interpret the fact that there are two possible values of $\cos \theta$?)</p>
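A quick numeric sanity check of the identity (Python used purely as a calculator here), using the principal value of the arcsine, for which $\cos\theta = \frac45 > 0$:

```python
import math

theta = math.asin(3 / 5)                    # principal value, so cos(theta) > 0
cos_theta = math.sqrt(1 - (3 / 5) ** 2)     # = 4/5, from cos^2 + sin^2 = 1

# sin(2*theta) = 2 sin(theta) cos(theta) = 2 * (3/5) * (4/5) = 24/25
assert abs(math.sin(2 * theta) - 2 * (3 / 5) * cos_theta) < 1e-12
assert abs(math.sin(2 * theta) - 24 / 25) < 1e-12
```

The other sign of $\cos\theta$ corresponds to the other angle with the same sine, which flips the sign of $\sin 2\theta$.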
567,391
<p>A bus follows its route through nine stations, and contains six passengers. What is the probability that no two passengers will get off at the same station? </p> <p>no detailed solution is required here but an idea of the general line of thought could be nice...</p>
Max Sherman
36,053
<p>Usually what you want to do is pick any arbitrary element of $A \cup B$, call it $x$. So $x \in A \cup B$. Then show that $x \in C$, using what you know about elements of $A$ and $B$.</p>
567,391
<p>A bus follows its route through nine stations, and contains six passengers. What is the probability that no two passengers will get off at the same station? </p> <p>no detailed solution is required here but an idea of the general line of thought could be nice...</p>
amWhy
9,003
<p>To show $A\cup B \subseteq C$, we show that for any $x \in A\cup B$, it follows that $x \in C$.</p> <hr> <p>Assumptions (Givens): $A\subseteq C$, $B \subseteq C$.</p> <p>Suppose $x \in A\cup B.\;\;$ Then $x \in A$, or $x\in B\;$ (from the definition of set union). </p> <p>Now, use what you know from our givens above to argue, therefore, $x \in C$.</p> <p>Thus, we will have shown $A\cup B \subseteq C$.</p>
1,976,382
<p>Hölder's inequality for finite sums is given by $$\sum_{k=0}^n|a_kb_k|\leq\left(\sum_{k=0}^n|a_k|^p\right)^{1/p}\left(\sum_{k=0}^n|b_k|^q\right)^{1/q},$$ where $1/p+1/q=1$, $p,q\in(1,\infty)$.</p> <p>Is there a "similar" inequality which gives a lower bound for the left hand sum? I have searched, but found nothing so far.</p>
Mark Fischler
150,362
<p>The corresponding lower bound is $$ \max_k |a_k b_k| $$ which is not very interesting because it squelches $p$ and $q$, but it saturates (e.g. when only one product $a_kb_k$ is nonzero), so we can't produce a more interesting lower bound.</p>
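A small numeric illustration (my own, with arbitrarily chosen vectors) that this lower bound and Hölder's upper bound do bracket the sum; here $p=3$, $q=\frac32$, so $\frac1p+\frac1q=1$:

```python
a = [1.0, -2.0, 3.0]
b = [0.5, 4.0, -1.0]
p, q = 3.0, 1.5                     # conjugate exponents: 1/p + 1/q = 1

lhs = sum(abs(x * y) for x, y in zip(a, b))
upper = (sum(abs(x) ** p for x in a)) ** (1 / p) * \
        (sum(abs(y) ** q for y in b)) ** (1 / q)   # Hoelder upper bound
lower = max(abs(x * y) for x, y in zip(a, b))      # the trivial lower bound

assert lower <= lhs <= upper
```

Saturation of the lower bound is visible with $a=(1,0)$, $b=(1,0)$, where the sum equals $\max_k|a_kb_k|=1$ exactly.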
9,934
<p>I have been given some code with the following line</p> <pre><code>PeriodicExtension[g_, x_] := If[Abs[x] &lt; Pi, g[x], PeriodicExtension[g, x - 2 Sign[x] Pi]] </code></pre> <p>I do not understand the syntax. I would appreciate if someone can explain what this code does for different values of <code>x</code>.</p>
Dr. belisarius
193
<pre><code>PeriodicExtension[g_, x_] := If[Abs[x] &lt; Pi, g[x], PeriodicExtension[g, x - 2 Sign[x] Pi]] g[x_] := x Plot[PeriodicExtension[g, x], {x, 0, 4 Pi}] </code></pre> <p><img src="https://i.stack.imgur.com/9Tgp0.png" alt="Mathematica graphics"></p>
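For readers without Mathematica, here is a rough Python translation of the same recursion (the function name is mine): while $|x|\ge\pi$, shift $x$ by $2\pi$ toward the base interval $(-\pi,\pi)$, then apply $g$. With $g(x)=x$ it reproduces the sawtooth in the plot above.

```python
import math

def periodic_extension(g, x):
    # recursively shift x by 2*pi toward the base interval (-pi, pi)
    if abs(x) < math.pi:
        return g(x)
    return periodic_extension(g, x - 2 * math.copysign(math.pi, x))

# with g(x) = x, points 2*pi apart map to the same value
assert abs(periodic_extension(lambda t: t, 0.5) - 0.5) < 1e-12
assert abs(periodic_extension(lambda t: t, 2 * math.pi + 0.5) - 0.5) < 1e-12
```

`math.copysign(math.pi, x)` plays the role of `Sign[x] Pi` in the Mathematica code.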
3,534,364
<blockquote> <p><span class="math-container">$x^2y'^2 + 3xyy' +2y^2 = 0 $</span></p> </blockquote> <p>Usually, to solve an ODE with respect to <span class="math-container">$y'=p$</span>, we first isolate the <span class="math-container">$y$</span>, to get <span class="math-container">$y = f(x,p)$</span> and then differentiate with respect to <span class="math-container">$x$</span> to get an expression that only depends on <span class="math-container">$x$</span> and <span class="math-container">$p$</span>. Then, we can write <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in terms of <span class="math-container">$p$</span> and get to a solution.</p> <p>But what do we do if, like in this particular example, we can't isolate the <span class="math-container">$y$</span>? Is there another method for this kind of ODE?</p> <p>My manual lists the solutions for this equation as <span class="math-container">$xy=c$</span> or <span class="math-container">$yx^2 =c$</span>, but I have no clue as to how they come to that conclusion. </p>
Robert Israel
8,508
<p>You can factor your differential equation to get <span class="math-container">$$ (x y' + y)(x y' + 2 y) = 0 $$</span></p> <p>so either <span class="math-container">$x y' + y = 0$</span> or <span class="math-container">$x y' + 2 y = 0$</span>. One gives you <span class="math-container">$y = c/x$</span>, the other <span class="math-container">$c/x^2$</span>.</p> <p>Actually we should be careful to check that you can't switch from one of these to the other while maintaining differentiability: it turns out that you can't.</p>
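A quick numeric check (Python as a calculator; the helper name is mine) that both families from the factorization satisfy the original equation:

```python
def residual(x, y, yp):
    # left-hand side of the ODE: x^2 y'^2 + 3 x y y' + 2 y^2
    return x**2 * yp**2 + 3 * x * y * yp + 2 * y**2

c = 5.0
for x in (0.5, 1.0, 2.0, 3.0):
    y1, y1p = c / x, -c / x**2           # y = c/x   =>  y' = -c/x^2
    y2, y2p = c / x**2, -2 * c / x**3    # y = c/x^2 =>  y' = -2c/x^3
    assert abs(residual(x, y1, y1p)) < 1e-9
    assert abs(residual(x, y2, y2p)) < 1e-9
```

For $y=c/x$ the three terms are $c^2/x^2 - 3c^2/x^2 + 2c^2/x^2 = 0$; for $y=c/x^2$ they are $4c^2/x^4 - 6c^2/x^4 + 2c^2/x^4 = 0$.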
1,198,722
<p>I am working with a standard linear program:</p> <p>$$\text{min}\:\:f'x$$ $$s.t.\:\:Ax = b$$ $$x ≥ 0$$</p> <p><strong>Goal:</strong> I want to enforce every nonzero component $x_i$ of $x$ to be greater than or equal to a certain threshold $k$. In other words, I want to add a conditional bound to the LP: if any $x_i$ is > 0, enforce $x_i$ ≥ k.</p> <p><strong>Main issue:</strong> Is there a way to set up this problem as an LP? Any alternate approaches? Any input would be appreciated and I'm happy to provide any additional info as needed! Thanks! </p>
bassen
194,657
<p>Like Rahul mentioned in a comment to your question, this is not possible (incidentally, I do not agree with TravisJ's comment that an ILP is a special case of an LP. Rather, an LP is a special case of a mixed-integer linear program, of which integer-linear programming is also a special case. But I do not have enough points to comment yet). However, all is not necessarily lost.</p> <p>If your goal is to solve some practical problem (rather than showing that this can be solved through linear programming), modeling your problem as a mixed-integer linear program and solving that instead might actually work. Solvers such as CPLEX and Gurobi are surprisingly fast.</p> <p>If you have some a priori upper bound $M_i$ on each $x_i$, you could do the following: for each $x_i$, you can introduce a boolean variable $b_i$ and have the constraints:</p> <p>$x_i\geq k\cdot b_i$, $x_i \leq M_i\cdot b_i$. </p>
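A tiny brute-force check (my own sketch, not solver code) that the two constraints above encode the implication "$x_i > 0 \Rightarrow x_i \ge k$": a value $x$ is feasible for some choice of the boolean $b$ exactly when $x = 0$ or $k \le x \le M$.

```python
def feasible(x, b, k, M):
    # the MILP constraints from the answer: x >= k*b and x <= M*b, with b in {0, 1}
    return x >= k * b - 1e-12 and x <= M * b + 1e-12

k, M = 2.0, 10.0
for x in [0.0, 0.5, 1.9, 2.0, 5.0, 10.0]:
    # x is representable iff some b in {0, 1} makes both constraints hold
    allowed = any(feasible(x, b, k, M) for b in (0, 1))
    assert allowed == (x == 0.0 or k <= x <= M)
```

Choosing $b_i=0$ forces $x_i=0$; choosing $b_i=1$ forces $k \le x_i \le M_i$, which is exactly the conditional bound asked for.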
2,393,525
<p>I have two questions which I think both concern the same problem I am having. Is $...121212.0$ a rational number and is $....12121212....$ a rational number? The reason I was thinking it could be a number is when you take the number $x=0.9999...$, then $10x=9.999...$ . Therefore, we conclude $9x=9$ which means $x=1$. Why could or couldn't you do the same thing and divide the first number in similar fashion by defining it as $x$ and then taking $x/100$?</p>
Noah Schweber
28,111
<p>You write:</p> <blockquote> <p>I thought the real numbers were defined as the numbers on the number line.</p> </blockquote> <p>This isn't really a definition of the real numbers, since "number line" is a bit vague, but: a key fact about the number line as generally understood is that the distance between any two points is finite. If I imagine a number with digits stretching infinitely far to the left, such a number is infinitely large, that is, infinitely far away from zero; and these don't have a place on the number line as generally understood.</p> <p>This is not to say that we can't give a mathematically precise meaning to such objects! Indeed, one particular formalization of them - the <a href="https://en.wikipedia.org/wiki/P-adic_number" rel="nofollow noreferrer"><em>$p$-adic numbers</em></a> - plays an important role in number theory and algebraic geometry (and they allow manipulations such as that in your last comment). However, it's important to note that these are not, in fact, real numbers.</p> <p>Put another way, while you can manipulate these expressions in an interesting way (e.g. conclude that $...99999=-1$), that does not in any way mean that they correspond to something <em>in the particular number system "the real numbers"</em>; rather, it merely suggests that they may be interesting objects in their own right. 
There are lots of very interesting objects (besides the $p$-adics mentioned above) that we can make sense of, which are not real numbers:</p> <ul> <li><p><a href="https://en.wikipedia.org/wiki/Complex_number" rel="nofollow noreferrer">Square roots of negative numbers.</a></p></li> <li><p><a href="https://en.wikipedia.org/wiki/Non-standard_analysis" rel="nofollow noreferrer">Infinitesimals.</a></p></li> <li><p><a href="https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis" rel="nofollow noreferrer">Non-zero numbers which, when squared, equal zero.</a></p></li> <li><p><a href="https://en.wikipedia.org/wiki/Quaternion" rel="nofollow noreferrer">Complex numbers but this time with even <em>more</em> square roots of negative numbers, because why quit when you're ahead.</a></p></li> <li><p>And, dearest to my heart, <a href="https://en.wikipedia.org/wiki/Ordinal_number" rel="nofollow noreferrer">various different kinds of infinity</a>.</p></li> </ul>
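A small stdlib illustration of the $...99999=-1$ manipulation mentioned above: the finite truncations of $...9999$ already behave like $-1$ modulo powers of $10$. This is only an analogy, not a construction of the $10$-adic numbers themselves:

```python
for n in range(1, 8):
    trunc = int("9" * n)                 # the last n digits of ...9999
    # "...9999 + 1 = ...0000": adding 1 carries all the way out,
    # so the truncation is congruent to -1 modulo 10^n
    assert (trunc + 1) % 10**n == 0
```

In the $10$-adic metric these truncations converge to an element that genuinely equals $-1$, which is the precise version of the manipulation in the question.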
3,558,784
<p>A question is as follows: Consider a open top cylinder with radius <span class="math-container">$R$</span> and height <span class="math-container">$H$</span> full of water and tilt the cylinder to pour the water until the water surface at the base of the cylinder intersects the diameter of the base.</p> <p>Find the volume of the water remaining.</p> <p>My approach is to use multivariable integration, and I considered freezing the water and tilting the cylinder back to upright position, then setting the coordinate axis on the center of base of the cylinder. Then I found the plane containing the surface of water by finding 3 points:</p> <p>(0,0,0), (R,0,0), (0,R,H), on the surface of water in the cylinder, which gave the plane <span class="math-container">$z=\frac{H\cdot y}{R}$</span></p> <p>Finally, I integrated this using cylindrical coordinates, with z from 0 to the plane, and <span class="math-container">$D$</span> the region that is a semicircle <span class="math-container">$D = \{(x,y):x^2+y^2\leq R^2, y\geq0\}$</span></p> <p><span class="math-container">$$\iint_D\int_{z=0}^{\frac{H\cdot r\cdot \sin \theta }{R}}r dzdA,$$</span> where <span class="math-container">$D$</span> is just <span class="math-container">$\theta$</span> from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span> and <span class="math-container">$r$</span> from <span class="math-container">$0$</span> to <span class="math-container">$R$</span>.</p> <p>This yields the answer <span class="math-container">$\frac{2HR^2}{3}$</span>. I find it suspicious the answer does not depend on <span class="math-container">$\pi$</span>, so what did I do wrong? Or does the answer really not depend on <span class="math-container">$\pi$</span>?</p>
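For what it's worth, a midpoint-rule evaluation of the stated double integral (with arbitrarily chosen $R$ and $H$; variable names are mine) agrees with $\frac{2HR^2}{3}$, so the absence of $\pi$ is not an arithmetic slip -- the $\sin\theta$ in the upper limit integrates to $2$ over $[0,\pi]$ rather than contributing a factor of $\pi$:

```python
import math

R, H = 2.0, 3.0
n = 400
dr, dth = R / n, math.pi / n

vol = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    for j in range(n):
        th = (j + 0.5) * dth
        # inner z-integral gives H*r*sin(th)/R; Jacobian contributes another r
        vol += (H * r * math.sin(th) / R) * r * dr * dth

assert abs(vol - 2 * H * R**2 / 3) < 1e-3   # here 2*3*2^2/3 = 8
```

Exactly: $\int_0^\pi\int_0^R \frac{H}{R} r^2\sin\theta \,dr\,d\theta = \frac{H}{R}\cdot\frac{R^3}{3}\cdot 2 = \frac{2HR^2}{3}$.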
John Omielan
602,049
<p>Notice when you multiply out the second line you get <span class="math-container">$8 \times 65548 = 524384$</span> (you have a typo in your second line where it says <span class="math-container">$65568x^2$</span>) for the <span class="math-container">$x^3$</span> term and <span class="math-container">$-7 \times 42109 = -294763$</span> (you have a typo in your first line where it says -<span class="math-container">$295763y^3$</span>) for the <span class="math-container">$y^3$</span> term. What was likely done was those coefficients were factored (note that <span class="math-container">$524384 = 2^5 \times 7 \times 2341$</span> (so <span class="math-container">$8$</span> is a factor due to the <span class="math-container">$2^5 = 32$</span> factor) and <span class="math-container">$294763 = 7 \times 17 \times 2477$</span>), with the factors checked to see if any of them work with the linear term, with the middle coefficient in the second factor, i.e., <span class="math-container">$53954$</span>, being determined &amp; then checked to see if you got one consistent value for it when you compare the result against the coefficients of the middle <span class="math-container">$2$</span> terms in the first line, i.e., <span class="math-container">$-27204$</span> and <span class="math-container">$-40806$</span>. 
In this case, <span class="math-container">$8x - 7y$</span> is what works.</p> <p>In particular, you have (I used negative for the <span class="math-container">$b$</span> term due to the <span class="math-container">$y^3$</span> term being negative)</p> <p><span class="math-container">$$(ax - by)(cx^2 + dxy + ey^2) = (ac)x^3 + (ad - bc)x^2y + (ae - bd)xy^2 + (-be)y^3 \tag{1}\label{eq1A}$$</span></p> <p>Matching coefficients gives</p> <p><span class="math-container">$$ac = 524384 \implies c = \frac{524384}{a} \tag{2}\label{eq2A}$$</span></p> <p><span class="math-container">$$ad - bc = -27204 \implies d = \frac{-27204 + bc}{a} \tag{3}\label{eq3A}$$</span></p> <p><span class="math-container">$$ae - bd = -40806 \implies d = \frac{40806 + ae}{b} \tag{4}\label{eq4A}$$</span></p> <p><span class="math-container">$$-be = -294763 \implies e = \frac{294763}{b} \tag{5}\label{eq5A}$$</span></p> <p>Note when you choose <span class="math-container">$a$</span> and <span class="math-container">$b$</span> that you get <span class="math-container">$c$</span> from \eqref{eq2A} and <span class="math-container">$e$</span> from \eqref{eq5A}. However, you <em>must</em> then get the same <span class="math-container">$d$</span> value in \eqref{eq3A} and \eqref{eq4A}. There are many situations where no combination of integral <span class="math-container">$a$</span> and <span class="math-container">$b$</span> will work. However, in this case, you have that <span class="math-container">$a = 8$</span> and <span class="math-container">$b = 7$</span> do work.</p> <p>FYI, this process is similar to what is stated in the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">Rational root theorem</a>.</p>
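The coefficient matching above can be verified mechanically by convolving the coefficient lists (a small sketch of mine, using the corrected coefficients of the homogeneous cubic):

```python
# homogeneous polynomials as coefficient lists in descending powers of x
lin = [8, -7]                        # 8x - 7y
quad = [65548, 53954, 42109]         # 65548x^2 + 53954xy + 42109y^2

cubic = [0, 0, 0, 0]
for i, a in enumerate(lin):
    for j, b in enumerate(quad):
        cubic[i + j] += a * b        # convolution = polynomial multiplication

# coefficients of x^3, x^2 y, x y^2, y^3
assert cubic == [524384, -27204, -40806, -294763]
```

Each entry reproduces one of the matching equations: $8\cdot 65548$, $8\cdot 53954 - 7\cdot 65548$, $8\cdot 42109 - 7\cdot 53954$, and $-7\cdot 42109$.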
3,558,784
<p>A question is as follows: Consider a open top cylinder with radius <span class="math-container">$R$</span> and height <span class="math-container">$H$</span> full of water and tilt the cylinder to pour the water until the water surface at the base of the cylinder intersects the diameter of the base.</p> <p>Find the volume of the water remaining.</p> <p>My approach is to use multivariable integration, and I considered freezing the water and tilting the cylinder back to upright position, then setting the coordinate axis on the center of base of the cylinder. Then I found the plane containing the surface of water by finding 3 points:</p> <p>(0,0,0), (R,0,0), (0,R,H), on the surface of water in the cylinder, which gave the plane <span class="math-container">$z=\frac{H\cdot y}{R}$</span></p> <p>Finally, I integrated this using cylindrical coordinates, with z from 0 to the plane, and <span class="math-container">$D$</span> the region that is a semicircle <span class="math-container">$D = \{(x,y):x^2+y^2\leq R^2, y\geq0\}$</span></p> <p><span class="math-container">$$\iint_D\int_{z=0}^{\frac{H\cdot r\cdot \sin \theta }{R}}r dzdA,$$</span> where <span class="math-container">$D$</span> is just <span class="math-container">$\theta$</span> from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span> and <span class="math-container">$r$</span> from <span class="math-container">$0$</span> to <span class="math-container">$R$</span>.</p> <p>This yields the answer <span class="math-container">$\frac{2HR^2}{3}$</span>. I find it suspicious the answer does not depend on <span class="math-container">$\pi$</span>, so what did I do wrong? Or does the answer really not depend on <span class="math-container">$\pi$</span>?</p>
zwim
399,263
<p>Another method is to set <span class="math-container">$y=tx$</span> to get the polynomial <span class="math-container">$f(t)=\sum\limits_{i=0}^3 a_it^i$</span></p> <p><span class="math-container">$$f(t)=524384-27204t-40806t^2-294763t^3$$</span></p> <p>Possible rational roots of <span class="math-container">$f$</span> are to be searched in <span class="math-container">$$r\in\left\{\pm\dfrac{\operatorname{divisors}(a_0)}{\operatorname{divisors}(a_3)}\right\}$$</span></p> <p><a href="https://www.chilimath.com/lessons/intermediate-algebra/rational-roots-test/" rel="nofollow noreferrer">https://www.chilimath.com/lessons/intermediate-algebra/rational-roots-test/</a></p> <p>In this case it is not a very interesting method since <span class="math-container">$\begin{cases}a_0=(2)^5(7)(2341) &amp;\text{has 24 divisors}\\ a_3=-(7)(17)(2477)&amp;\text{has 8 divisors}\end{cases}$</span></p> <p>and there are <span class="math-container">$192$</span> possible <span class="math-container">$r$</span> to test (<span class="math-container">$\times 2$</span> for the sign, minus duplicates). But for equations whose coefficients have fewer factors, it may be a suitable method. However, in this particular case John's method is faster.</p> <p>Anyway, we find that <span class="math-container">$\frac 87$</span> is the only rational root: <span class="math-container">$f(\frac 87)=0$</span></p> <p>So <span class="math-container">$y=\frac 87 x\iff 7y=8x$</span> and you can factorize by <span class="math-container">$(8-7t)$</span> or equivalently by <span class="math-container">$(8x-7y)$</span>.</p> <p><br></p> <p>The factorization is done incrementally: first divide the leading term by <span class="math-container">$8x$</span> to get the first factor term. Multiply this term by <span class="math-container">$-7y$</span> and calculate the remainder. Then go on by dividing the remainder by <span class="math-container">$8x$</span> again, and so forth, until you get the complete factorization.</p> <p><span class="math-container">$\begin{array}{ll} 524384x^3−27204x^2y−40806xy^2−294763y^3\\\\ = (8x-7y)\times(65548x^2+\cdots) &amp; \text{gives }-458836x^2y &amp; \text{miss}\quad 431632x^2y\\ = (8x-7y)\times(65548x^2+53954xy+\cdots) &amp; \text{gives }-377678xy^2 &amp; \text{miss}\quad 336872xy^2\\ = (8x-7y)\times(65548x^2+53954xy+42109y^2) &amp; \text{gives }-294763y^3 &amp; \text{miss}\quad 0y^3 \end{array}$</span></p>
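An exact check of the root with the standard library (variable names are mine):

```python
from fractions import Fraction

coeffs = [524384, -27204, -40806, -294763]   # a_0 .. a_3 of f(t)

def f(t):
    # evaluate f(t) = sum_i a_i t^i exactly when t is a Fraction
    return sum(c * t**i for i, c in enumerate(coeffs))

assert f(Fraction(8, 7)) == 0     # so 7y = 8x, i.e. (8x - 7y) is a factor
assert f(Fraction(1, 1)) != 0     # a root candidate that fails
```

Using `Fraction` avoids any floating-point doubt about whether $f(\frac87)$ is exactly zero.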
2,281,932
<p>If Peano axioms uniquely determine the natural numbers, doesn't this mean that Peano axioms are categorical and hence complete?</p> <p>If above is true, how is it explained by Goedel's incompleteness theorem?</p>
M. Winter
415,941
<p>The Peano axioms do <em>not</em> pin down the natural numbers uniquely (see <a href="https://math.stackexchange.com/a/2251401/415941">this</a> amazing answer to a similar question of mine).</p> <p>As you said, there is Gödel's incompleteness theorem which prevents this from happening. Each approach to pin down the natural numbers must fail on at least one of these points:</p> <ul> <li>You will have other models, so-called <em>non-standard natural numbers</em>, that also satisfy your axioms. Your axiom system is too weak to tell you which model you are currently talking about. This happens with the first-order Peano axioms.</li> <li>If your axioms determine the natural numbers uniquely, then there is still no way to prove all truths about them, because we have no way to carry out these proofs. Our proof techniques are just too weak. This is seen as worse than not pinning down $\Bbb N$. This happens with the second-order Peano axioms.</li> <li>The axioms you desire are not computably enumerable, i.e. there is no procedure to write them down. There is indeed an axiom system that describes $\mathbb N$ uniquely, <em>but</em> there is no algorithm in the world to give it to you or write it out completely (or in a closed form). This happens with the theory $\mathrm{Th}(\Bbb N)$ of all true sentences of $\Bbb N$.</li> <li>Absurd but possible: your proof system is chosen so unfavorably that it can prove all truths about $\Bbb N$, but also some <em>wrong theorems</em> about it. Very undesirable. Such a system is called <em>not sound</em>.</li> </ul> <p>You can see this as the incapability of our finite axioms and proof systems (in the end, our finite human nature) to talk about infinite structures of some specific expressive power. We would need infinitely much space and/or time to describe them uniquely.</p>
1,699,833
<p>I found a problem I don't really know how to solve, although it should be something very easy, since it is Algebra I material.</p> <blockquote> <p>Let $f= 29X^5−13X^4−44X^3+ 18X^2+ 35X+ 10\in\mathbb{Z}[X]$.</p> <p>1) Decompose $f$ into irreducible factors in $\mathbb{F}_2[X]$ and $\mathbb{F}_3[X]$.</p> <p>2) Conclude: $f$ is irreducible over $\mathbb{Z}$ and $\mathbb{Q}$.</p> </blockquote> <p>1) Is clear: $\bar{f}=X^5+X^4+ X\in\mathbb{F}_2[X]$ and $\bar{f}= 2X^5−X^4−2X^3+ 2X+ 1\in\mathbb{F}_3[X]$.</p> <p>So $\bar{f}=X^5+X^4+ X=X(X^4+X^3+1)\in\mathbb{F}_2[X]$ and $X^4+X^3+1$ is irreducible, <s>because $0+0+1=1=1+1+1$ in $\mathbb{F}_2$.</s></p> <p><strong>Edit:</strong> because no irreducible polynomial with degree $\le 2$ divides the latter polynomial. </p> <p><s>Furthermore, $\bar{f}=2X^5−X^4−2X^3+ 2X+ 1=2X^5+2X^4+X^3+ 2X+ 1\in\mathbb{F}_3[X]$ is irreducible, because $f(0)=1$, $f(1)=8=2,f(2)=109=1$.</s></p> <p><strong>Edit:</strong> $\bar{f}=2X^5−X^4−2X^3+ 2X+ 1=2X^5+2X^4+X^3+ 2X+ 1=2(X^2+1)(X^3+X^2+X+2)\in\mathbb{F}_3[X]$ with irreducible factors. Thank you Nicolas!</p> <p>2) It's clear: if $f$ is irreducible over $\mathbb{Z}$, then it is irreducible over $\mathbb{Q}$ by a basic theorem of algebra, which gives this result (indeed an equivalence) for factorial rings (UFDs) and their fields of fractions. </p> <p>But I don't know how to conclude irreducibility using 1). Is it something about the degrees of the irreducible polynomials?</p> <p>Thank you for your answers!</p>
Nikolas Wojtalewicz
312,038
<blockquote> <p>$X^4+X^3+1$ is irreducible, because $0+0+1=1=1+1+1$ in $\mathbb{F}_2$</p> </blockquote> <p>Not true: or, at least, the argument is incomplete, because having no linear factors does not make a polynomial irreducible. A polynomial can have no linear factors and still be reducible. Consider \begin{align} p = \left( X^2 + X + 1 \right) \in \mathbb{F}_2[X] \end{align}</p> <p>Then certainly $p^2$ has no linear factors and is reducible. </p> <p>To prove irreducibility of a quartic over a finite field, the only way that I know of is to list all irreducible polynomials of degree $2$ or less. If none of those polynomials divide your polynomial, then it is irreducible.</p> <p>The reasoning for part 2) is that, if your polynomial were reducible over $\mathbb{Z}$, say $f=gh$, then it could also be factored that way modulo a prime $p$. Comparing the possible degrees of the factors modulo $2$ and modulo $3$ then shows that no factorization over $\mathbb{Z}$ is possible.</p>
1,699,833
<p>I found a problem I don't really know how to solve, although it should be something very easy, since it is Algebra I material.</p> <blockquote> <p>Let $f= 29X^5−13X^4−44X^3+ 18X^2+ 35X+ 10\in\mathbb{Z}[X]$.</p> <p>1) Decompose $f$ into irreducible factors in $\mathbb{F}_2[X]$ and $\mathbb{F}_3[X]$.</p> <p>2) Conclude: $f$ is irreducible over $\mathbb{Z}$ and $\mathbb{Q}$.</p> </blockquote> <p>1) Is clear: $\bar{f}=X^5+X^4+ X\in\mathbb{F}_2[X]$ and $\bar{f}= 2X^5−X^4−2X^3+ 2X+ 1\in\mathbb{F}_3[X]$.</p> <p>So $\bar{f}=X^5+X^4+ X=X(X^4+X^3+1)\in\mathbb{F}_2[X]$ and $X^4+X^3+1$ is irreducible, <s>because $0+0+1=1=1+1+1$ in $\mathbb{F}_2$.</s></p> <p><strong>Edit:</strong> because no irreducible polynomial with degree $\le 2$ divides the latter polynomial. </p> <p><s>Furthermore, $\bar{f}=2X^5−X^4−2X^3+ 2X+ 1=2X^5+2X^4+X^3+ 2X+ 1\in\mathbb{F}_3[X]$ is irreducible, because $f(0)=1$, $f(1)=8=2,f(2)=109=1$.</s></p> <p><strong>Edit:</strong> $\bar{f}=2X^5−X^4−2X^3+ 2X+ 1=2X^5+2X^4+X^3+ 2X+ 1=2(X^2+1)(X^3+X^2+X+2)\in\mathbb{F}_3[X]$ with irreducible factors. Thank you Nicolas!</p> <p>2) It's clear: if $f$ is irreducible over $\mathbb{Z}$, then it is irreducible over $\mathbb{Q}$ by a basic theorem of algebra, which gives this result (indeed an equivalence) for factorial rings (UFDs) and their fields of fractions. </p> <p>But I don't know how to conclude irreducibility using 1). Is it something about the degrees of the irreducible polynomials?</p> <p>Thank you for your answers!</p>
user26857
121,097
<p>We have $\bar{f}=X(X^4+X^3+1)\in\mathbb{F}_2[X]$ and $\bar f=2(X^2+1)(X^3+X^2+X+2)\in\mathbb F_3[X]$ (credit to Macavity) as factorizations into irreducibles.<br> If $f$ is reducible over $\mathbb Z$ then $f=gh$ with $g,h\in\mathbb Z[X]$. We have $\deg g+\deg h=5$, $1\le\deg g&lt;5$ and $1\le\deg h&lt;5$. Moreover, the possible leading coefficients of $g$ and $h$ are $1$ and $29$, respectively $-1$ and $-29$, and all of them are non-zero modulo $2$, respectively $3$.<br> Now reduce $f=gh$ modulo $2$ to get $\bar f=\bar g\bar h$. Furthermore, $\deg\bar g=\deg g$ and $\deg\bar h=\deg h$, so $\deg g=1$ and $\deg h=4$ (or vice versa).<br> Then reduce $f=gh$ modulo $3$ and similarly get $\deg g=2$ and $\deg h=3$ (or vice versa).<br> Thus we have reached a contradiction. </p>
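The two reductions and factorizations can be double-checked with a few lines of Python (my own helper, multiplying coefficient lists modulo $p$):

```python
def polymul_mod(a, b, p):
    # multiply coefficient lists (ascending powers) modulo p
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

f = [10, 35, 18, -44, -13, 29]     # ascending: 10 + 35X + 18X^2 - 44X^3 - 13X^4 + 29X^5

# mod 2: f = X * (X^4 + X^3 + 1)
assert [c % 2 for c in f] == polymul_mod([0, 1], [1, 0, 0, 1, 1], 2)

# mod 3: f = 2 * (X^2 + 1) * (X^3 + X^2 + X + 2)
g = polymul_mod([2], polymul_mod([1, 0, 1], [2, 1, 1, 1], 3), 3)
assert [c % 3 for c in f] == g
```

The degree patterns $(1,4)$ mod $2$ and $(2,3)$ mod $3$ are incompatible, which is exactly the contradiction in the argument above.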
273,798
<p>I am writing a large numerical code where I care a lot about performance, so I am trying to write compiled functions that are as fast as possible.</p> <p>I need to write a function that does the following. Consider a list of positive integers, for example {5,3}, take its flattened binary form (with a given number of digits, let's say 5), which is {0, 0, 1, 0, 1, 0, 0, 0, 1, 1} in our example, and then count how many 1s there are starting from the left and stopping at some index1, then at some index2, then at some index3, then at some index4, etc... Finally, sum all the results and return the total. The list {index1, index2, index3, index4, ...} is given as an input, and in all cases it contains at most 4 indexes. For example, if index1=4 we encounter the number 1 just once, and if index2=6, we encounter the number 1 twice, so the function should return 1+2=3. Here's my code so far</p> <pre><code>CCSign = Compile[{ {L,_Integer},{f,_Integer},{indexes,_Integer,1},{state,_Integer,1} }, With[{ binarystate = Flatten[IntegerDigits[#,2,L]&amp;/@state] }, Total[ Total@Take[binarystate, #]&amp;/@indexes ] ], CompilationTarget-&gt;&quot;C&quot; ]; </code></pre> <p>Is there some way to improve it and make it faster?</p> <p>Thank you!</p>
Syed
81,355
<p>Please investigate how this technique fares in your application. Let's say:</p> <pre><code>SeedRandom[1]; binSequence = RandomInteger[{0, 1}, 20] </code></pre> <blockquote> <p>{1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1}</p> </blockquote> <pre><code>indices = {5, 9, 15}; acc = Accumulate[binSequence] </code></pre> <blockquote> <p>{1, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 8}</p> </blockquote> <p>Now it is a matter of reading these values.</p> <pre><code>Total@acc[[indices]] </code></pre>
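The same prefix-sum idea in Python, for comparison (stdlib only; the variable names mirror the Mathematica example, and the indices are 1-based as in `Part`):

```python
from itertools import accumulate

bin_sequence = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
indices = [5, 9, 15]                  # 1-based, as in Mathematica

acc = list(accumulate(bin_sequence))  # running count of 1s (prefix sums)
total = sum(acc[i - 1] for i in indices)

assert acc == [1, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 8]
assert total == 3 + 4 + 5
```

Computing the prefix sums once turns each query into a single lookup, instead of re-summing a prefix per index as `Total@Take[...]` does.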
3,069,987
<p>I know that any numbers x and y whose sum equals 1 will satisfy the equation <span class="math-container">$x^2 + y = y^2 + x$</span>.</p> <p>Algebraic proof: </p> <p>Given: <span class="math-container">$x + y = 1$</span></p> <p><span class="math-container">$$LS = x^2+ y = (1-y)^2 + y = 1 - 2y+y^2 + y = y^2 - y + 1$$</span></p> <p><span class="math-container">$$RS = y^2 + x = y^2 + (1-y) = y^2 - y + 1$$</span></p> <p>Therefore,<span class="math-container">$$ LS = RS $$</span></p> <p>How can this be proved geometrically? (Ex. in a diagram of rectangular areas)</p> <p>I tried to add a square piece with side lengths y to a rectangle with side lengths x and x+y, but I can't seem to prove it geometrically. </p> <p>Can someone help? </p>
Jaap Scherphuis
362,967
<p>Here is a picture. The left shows <span class="math-container">$y^2+x$</span>, the right <span class="math-container">$x^2+y$</span>.</p> <p><a href="https://i.stack.imgur.com/D7MKa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D7MKa.png" alt="enter image description here"></a></p>
132,862
<p>Is it true that given a matrix $A_{m\times n}$, $A$ is regular / invertible if and only if $m=n$ and the columns of $A$ form a basis of $\mathbb{R}^n$?</p> <p>Seems so to me, but I haven't seen anything in my book yet that says it directly.</p>
Trismegistos
23,730
<p>Proving theorems by induction is a very important technique; without it mathematics would not be the same, so the axiom was added to formalize our intuition. From what I know, the surprise was that this axiom is not a consequence of the rest of the axioms but needs to be stated explicitly.</p>
179,581
<p><strong>Problem:</strong></p> <p>(a). If $f$ is continuous on $[a,b]$ and $\int_a^x f(t) dt = 0$ for all $x \in [a,b]$, show that $f(x) = 0$ for all $x \in [a,b]$.</p> <p>(b). If $f$ is continuous on $[a,b]$ and $\int_a^x f(t)dt = \int_x^b f(t)dt$ for all $x \in [a,b]$, show that $f(x)=0$ for all $x\in [a,b]$.</p> <p><strong>Work so far:</strong></p> <p>For (a), I think I am supposed to use Leibniz's rule and differentiate both sides and say $f(x)d/dx(x) - f(a)d/dx(a) = 0,$ so $f(x)-0=0$ and $f(x)=0.$ For (b) I think I am supposed to use Leibniz's Rule and differentiate both sides and get $f(x)d/dx(x) - f(a)d/dx(a) = f(b)d/dx(b) - f(x)d/dx(x)$, thus $f(x) - 0 = 0 - f(x)$, $2f(x) = 0$, and $f(x) = 0$....am I going about this correctly?</p>
Kevin Arlin
31,228
<p>Yes, you've solved these accurately. As a small point, I wouldn't call these Leibniz' rule problems, as that name usually refers to differentiating an integral whose limits and integrand both depend on a parameter (differentiation under the integral sign). This is an advanced calculus technique you may not have encountered yet. </p> <p>Rather, this problem is really just using the fundamental theorem of calculus, which says for example that $\int_a^x f(t) dt=F(x)-F(a)$ with $F$ an antiderivative of $f$; then it's immediate from the rules of differential calculus that the derivative of the left-hand side with respect to $x$ is $f(x)$.</p>
3,366,064
<p>I have a baking recipe that calls for 1/2 tsp of vanilla extract, but I only have a 1 tsp measuring spoon available, since the dishwasher is running. The measuring spoon is very nearly a perfect hemisphere. </p> <p>My question is, to what depth (as a percentage of hemisphere radius) must I fill my teaspoon with vanilla such that it contains precisely 1/2 tsp of vanilla? Due to the shape, I obviously have to fill it more than halfway, but how much more?</p> <p>(I nearly posted this in the Cooking forum, but I have a feeling the answer will involve more math knowledge than baking knowledge.)</p>
Quanto
686,284
<p>It may be surprising that the problem actually admits an analytic solution.</p> <p>A spherical cap is the difference between two overlapping cones, one with a spherical bottom and the other with a flat bottom, i.e.</p> <p><span class="math-container">$$ V = \frac{2\pi}{3}r^2h - \frac{\pi}{3}(2rh-h^2)(r-h) =\frac{\pi}{3}(3rh^2-h^3)$$</span></p> <p>which, on setting <span class="math-container">$V$</span> equal to half of the hemisphere volume <span class="math-container">$\frac{2\pi}{3}r^3$</span>, i.e. <span class="math-container">$V=\frac{\pi}{3}r^3$</span>, becomes</p> <p><span class="math-container">$$\left(\frac rh \right)^3 - 3\frac rh+1=0$$</span></p> <p>Let <span class="math-container">$\frac rh = 2\cos x$</span> and compare with <span class="math-container">$4\cos^3 x -3\cos x -\cos 3x=0$</span> to obtain <span class="math-container">$x=40^\circ$</span>. Thus, the depth <span class="math-container">$h$</span> as a fraction of the radius <span class="math-container">$r$</span> is</p> <p><span class="math-container">$$\frac hr = \frac{1}{2}\sec 40^\circ$$</span></p>
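A quick numeric sanity check of the closed form (plain Python; the radius is normalised to 1):

```python
import math

# depth as a fraction of the radius: h/r = sec(40 deg) / 2
h = 1 / (2 * math.cos(math.radians(40)))   # with r = 1, h ~ 0.6527

# r/h should be a root of t^3 - 3t + 1 = 0
t = 1 / h
residual = t**3 - 3 * t + 1

# a cap of depth h should hold exactly half the hemisphere's volume
cap = (math.pi / 3) * (3 * h**2 - h**3)
hemisphere = 2 * math.pi / 3
fraction = cap / hemisphere                 # should be 0.5
print(h, residual, fraction)
```

So the spoon must be filled to about 65.3% of its depth.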
1,783,323
<p>Given the transition matrix for a 2 state Markov chain, how do I find the $n$-step transition matrix $P^n$? I also need to take $n \to \infty$ and find the invariant probability $\pi$.</p>
nicomezi
316,579
<p>A common way to find $P^n$ is to diagonalize your matrix. Then you will have $P=MDM^{-1}$ with D a diagonal matrix, so $P^n=MD^nM^{-1}$. So taking $n \rightarrow \infty$ will be easy.</p> <p>Also, if $\mu$ is a measure of probability on your two states MC, $\underset{n\to \infty}\lim\mu P^n$, if it converges, is an invariant probability. </p>
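For illustration, the computation might look like this in NumPy for a made-up 2-state chain (the matrix below is hypothetical, not from the question):

```python
import numpy as np

# hypothetical 2-state transition matrix; rows sum to 1
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# diagonalize: P = M D M^{-1}, hence P^n = M D^n M^{-1}
eigvals, M = np.linalg.eig(P)
M_inv = np.linalg.inv(M)
Pn = M @ np.diag(eigvals**50) @ M_inv   # P^50

# as n grows, every row approaches the invariant distribution
print(Pn)
```

For this matrix the eigenvalues are 1 and 0.5, so the second term dies off geometrically and every row of `Pn` converges to the invariant distribution (0.8, 0.2).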
875,729
<p>Prove, without using induction, that a real symmetric matrix $A$ can be decomposed as $A = Q^T \Lambda Q$, where $Q$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix with eigenvalues of $A$ as its diagonal elements.</p> <p>I can see that all eigenvalues of $A$ are real, and the corresponding eigenvectors are orthogonal, but I failed to see that when putting all (interesting) eigenvectors together, they form a basis of $\mathbb{R}^n$.</p> <p><strong>Edit</strong></p> <p>The reason I asked this question is to show that a real symmetric matrix is diagonalizable, so let's not use that fact for a while. Other than that, any undergraduate level linear algebra can be used. </p> <p><strong>Edit 2</strong></p> <p>After reading <strong>Algebraic Pavel</strong>'s answer, I feel like ruling out Schur Decomposition as well, but I can't keep ruling out theorems, so...if a proof is too obvious, that's probably not what I am looking for, though it may be a technically correct answer.</p> <p>Thanks.</p>
Algebraic Pavel
90,996
<p>Provided that the Schur decomposition is an allowed tool:</p> <p>Using the Schur decomposition, we have that there exists an orthogonal $Q$ and an upper triangular $R$ such that $A=QRQ^T$. Since $A$ is symmetric, $Q^TAQ=R$ is symmetric as well. Therefore $R$ is symmetric. A symmetric triangular matrix is necessarily diagonal.</p> <hr> <p>There is also a neat theory behind tridiagonal matrices, which can help:</p> <p>It is easy to show that for any real $A$ there is an orthogonal matrix $Q$ such that $Q^TAQ=H$, where $H$ is upper Hessenberg. If $A$ is symmetric it then follows that $H$ is symmetric as well and hence tridiagonal. Now if the tridiagonal matrix $H$ is unreduced (none of the upper and lower diagonal entries are zero), then the eigenvalues of $H$ (and therefore of $A$) are distinct. Equivalently, if $A$ has repeated eigenvalues then $H$ is reduced (some upper and lower (symmetrically) diagonal entries are zero and hence $H$ is a block diagonal matrix). Consequently, one must have that each repeated eigenvalue must be in different unreduced diagonal blocks of $H$. In each of these blocks we must find at least one eigenvector (actually, only one) and it is almost trivial to show that these eigenvectors must be linearly independent (therefore, we can orthogonalize them).</p>
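As a numerical illustration of the statement being proved, NumPy's `eigh` produces exactly such an orthogonal matrix and diagonal eigenvalue matrix (the symmetric matrix below is a random example):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                 # an arbitrary real symmetric matrix

w, Q = np.linalg.eigh(A)          # eigenvalues w, orthonormal eigenvectors as columns of Q
Lam = np.diag(w)

# A = Q Lam Q^T with Q^T Q = I (the transpose convention in the question
# is the same decomposition with Q replaced by Q^T)
print(np.round(Q.T @ Q, 6))
```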
627,258
<p>Hello everybody,<br> I'm trying to find another approach to topology in order to justify the axiomatization of topology. My idea was as follows:</p> <p>Given an <strong>arbitrary</strong> collection of subsets of some space: $\mathcal{C}\in\mathcal{P}^2(\Omega)$<br> Define a closure operator by: $\overline{A}:=\bigcap_{A\subseteq C\in\mathcal{C}}C$<br> This gives rise to a topology apart from the space itself being open.<br> However, considering the space as being equipped with a notion of closeness, all topological questions can be studied - as in topological spaces.<br> <em>(I left out the details as being part of my research)</em></p> <p>So my question is:<br> <em>What could go BADLY wrong if a collection satisfied all the axioms for open sets except that the entire space is not necessarily open?</em></p> <p>Thanks for your help! Cheers Alex</p>
Dominik
50,527
<p>In fact, the consequences of leaving out this axiom are rather dull.</p> <p>Let $X$ be such an <em>almost topological space</em>. Now let $$Y\subset X$$ be the subset of all elements $x$ such that there is an open set $U\ni x$. Taking the union of all such open sets $U$ you see that $Y$ is open and every open set is contained in $Y$. In particular, the open sets induce a proper topology on $Y$.</p> <p>Thus we see that $X$ is the union of a topological space $Y$ and some pathological points $X\setminus Y$ which possess no open neighborhood at all.</p>
496,255
<p>Let $u$ be an integer of the form $4n+3$, where $n$ is a positive integer. Can we find integers $a$ and $b$ such that $u = a^2 + b^2$? If not, how to establish this for a fact? </p>
Marquis Randell
104,178
<p>For any integer $n$, $n \equiv 0, 1, 2$ or $3 \pmod 4$. So $n^{2} \equiv 0$ or $1 \pmod 4$. Then for any integers $a$ and $b$, $a^{2} + b^{2} \equiv 0, 1$ or $2 \pmod 4$. This means the sum of two squares can only be of the form $4k$, $4k+1$ or $4k+2$, but never $4k+3$. Thus no integer of the form $4k+3$ is the sum of two squares.</p>
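The claim is also easy to confirm by brute force (a throwaway Python check, assuming nothing beyond the statement itself):

```python
# every representable n < 1000 has a, b <= 31, since a^2 <= n < 32^2
sums_of_two_squares = {a * a + b * b for a in range(32) for b in range(32)}

# no number congruent to 3 mod 4 should appear among them
violations = [n for n in range(1000) if n % 4 == 3 and n in sums_of_two_squares]
print(violations)  # []
```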
1,665,833
<p>Given $A \in M_{m \times n}(\mathbb{R})$, assume that $\{v_1,\dots,v_n\}$ is a basis for $\mathbb{R}^n$ such that $\{v_1,\dots,v_k\}$ is a basis for $\operatorname{Null}(A)$.</p> <p>How would I prove that $\{Av_{k+1},\dots,Av_n\}$ spans $\operatorname{Col}(A)$?</p>
Sam
630,614
<p>@henry and @user104111 I will share the same answer as the <a href="https://stats.stackexchange.com/questions/1624/based-on-z-score-is-it-possible-to-compute-confidence-without-looking-at-a-z-ta/389082#389082">thread here</a> because I understand what you're saying. You don't want software or a tool to build a table; you need the formula &amp; methods used to create the table from scratch and find the values in it.</p> <p>So to find the values, you can proceed with more than one method. You can use <a href="https://en.wikipedia.org/wiki/Simpson%27s_rule" rel="nofollow noreferrer">Simpson's rule</a> and approximate each individual value in a <a href="https://www.ztable.net" rel="nofollow noreferrer">z score table</a> for both the negative and positive side. Alternatively, you can also use series approximation or numerical integration. As @whuber added in the other thread, the Mills ratio works well out in the tails: see stats.stackexchange.com/questions/7200.</p> <p>Hope this clears your doubts. Feel free to ask if you have any questions and I will elaborate my answer.</p> <p>Disc: I'm affiliated with the site linked above</p>
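To make the Simpson's-rule suggestion concrete, here is a minimal sketch; the function name and step count are my own arbitrary choices:

```python
import math

def phi_cdf(z, n=1000):
    """P(Z <= z) for standard normal Z, via Simpson's rule on [0, z]."""
    pdf = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    h = z / n  # n must be even for Simpson's rule
    s = pdf(0) + pdf(z)
    s += 4 * sum(pdf(i * h) for i in range(1, n, 2))
    s += 2 * sum(pdf(i * h) for i in range(2, n, 2))
    return 0.5 + s * h / 3  # 0.5 accounts for the left half of the bell

print(round(phi_cdf(1.96), 4))  # matches the usual z-table entry 0.9750
```

Tabulating `phi_cdf` over a grid of z values reproduces the table.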
3,565,015
<p>I generated this polynomial after playing around with the golden ratio. I first observed that (using various properties of <span class="math-container">$\phi$</span>), <span class="math-container">$\phi^3+\phi^{-3}=4\phi-2$</span>. This equation has no significance at all, I just mention it because the whole problem stems from me wondering: which other numbers does this equation hold for?</p> <p>The six possible answers are the roots of <span class="math-container">$x^6-4x^4+2x^3+1=0$</span>. Note that I am <em>not</em> interested in solving for <span class="math-container">$x$</span> itself as much as I am interested in a method which would allow me to completely factor out this polynomial into lowest degree factors which still have real coefficients. Note that I am treating this equation as if I had no clue that the golden ratio is one of the solutions. In other words, I am trying to factor this equation as if I never saw it before, so I can't just immediately factor out <span class="math-container">$(x^2-x-1)$</span> without a justifiable process, even though it is indeed one of the factors.</p> <p>I first observed that the equation holds for <span class="math-container">$x=1$</span>, so I was able to divide out <span class="math-container">$(x-1)$</span> to get the factorization of:</p> <p><span class="math-container">$$(x-1)(x^5+x^4-3x^3-x^2-x-1)$$</span></p> <p>I tried making an assumption that the quintic reduces to a product of <span class="math-container">$(x^3+Ax^2+Bx+C)(x^2+Dx+E)$</span>, multiplying out, and equalling coefficients, but I ended up with a system of two extremely convoluted equations which I had no idea how to solve. I also tried to turn the first five terms of the quintic into a palindromic polynomial and then perform the standard method of factoring palindromic polynomials, to no avail.</p> <p>I am either missing something, or I don't know of a nice method that would let this expression be factored. 
I'm looking forward to being enlightened, thanks for any help.</p>
Toby Mak
285,313
<p>Your original method is tedious but it can be done.</p> <p>You can show that <span class="math-container">$(x^3+Ax^2+Bx+C)(x^2+Dx+E)$</span> is equal to:</p> <p><span class="math-container">$$x^5+(D+A)x^4+(E+AD+B)x^3 + (AE+BD+C)x^2 + (BE+CD)x + CE$$</span></p> <p>so <span class="math-container">$A+D = 1, B+AD+E = -3, AE+BD+C=-1, BE+CD=-1, CE=-1$</span>.</p> <p>Assuming <span class="math-container">$A,B,C,D,E$</span> are all integers, we either have <span class="math-container">$C=-1, E=1$</span> or <span class="math-container">$C=1, E=-1$</span>.</p> <p>If <span class="math-container">$C=-1, E=1$</span>, then we have:</p> <p><span class="math-container">$$A+D=1 \tag{1}$$</span> <span class="math-container">$$B+AD=-4 \tag{2}$$</span> <span class="math-container">$$A+BD=0 \tag{3}$$</span> <span class="math-container">$$B-D=-1 \tag{4}$$</span></p> <p><span class="math-container">$(1)+(4)$</span> gives <span class="math-container">$A+B=0$</span> so <span class="math-container">$A=-B$</span>, which gives:</p> <p><span class="math-container">$$-B+D=1 \tag{5}$$</span> <span class="math-container">$$B-BD=-4 \tag{6}$$</span> <span class="math-container">$$-B+BD=0 \tag{7}$$</span> <span class="math-container">$$B-D=-1 \tag{8}$$</span></p> <p>and this is clearly impossible since <span class="math-container">$(6) + (7)$</span> gives <span class="math-container">$0=-4$</span>.</p> <p>Therefore we must have <span class="math-container">$C=1, E=-1$</span>:</p> <p><span class="math-container">$$A+D=1 \tag{9}$$</span> <span class="math-container">$$B+AD=-2 \tag{10}$$</span> <span class="math-container">$$-A+BD=-2 \tag{11}$$</span> <span class="math-container">$$-B+D=-1 \tag{12}$$</span></p> <p>This time <span class="math-container">$(9)-(12)$</span> gives <span class="math-container">$A+B=2$</span>, so <span class="math-container">$A=2-B$</span>:</p> <p><span class="math-container">$$-B+D=-1 \tag{13}$$</span> <span class="math-container">$$B+2D-BD=-2 \tag{14}$$</span> <span class="math-container">$$B+BD=0 \tag{15}$$</span> <span class="math-container">$$-B+D=-1 \tag{16}$$</span></p> <p><span class="math-container">$(14)+(15)$</span> gives <span class="math-container">$2B+2D = -2$</span>, so <span class="math-container">$B+D=-1$</span>. When we add this to <span class="math-container">$(16)$</span>, <span class="math-container">$2D=-2$</span> so <span class="math-container">$D=-1$</span>.</p> <p>And the rest follows:</p> <p><span class="math-container">$$B - D = 1 \Rightarrow B+1=1, B=0$$</span> <span class="math-container">$$A=2-B \Rightarrow A=2$$</span></p> <p>so the factorisation is <span class="math-container">$(x-1)(x^3+2x^2+1)(x^2-x-1)$</span>.</p> <p>I wouldn't wish this method on anybody.</p>
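The resulting factorisation is easy to double-check by multiplying the coefficient lists back together (a small ad-hoc script):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f1 = [-1, 1]        # x - 1
f2 = [1, 0, 2, 1]   # x^3 + 2x^2 + 1
f3 = [-1, -1, 1]    # x^2 - x - 1

product = poly_mul(poly_mul(f1, f2), f3)
print(product)  # [1, 0, 0, 2, -4, 0, 1], i.e. x^6 - 4x^4 + 2x^3 + 1
```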
1,617,698
<p>While I was trying to find the formula of something by my own means I came across this sum which I need to solve, however I don't know if there is a solution for it, maybe it doesn't mean anything and I made a mistake. However if there's an equation which can replace this sum I will appreciate it a lot if you show me which one and how did you find the answer!</p>
N. S.
9,176
<p>Multiply your sum by $\sin\left(\frac{\pi}{4n}\right)$ and use the formula $$\sin\left(\frac{i \pi}{2n}\right)\sin\left(\frac{\pi}{4n}\right)=\frac{1}{2}\left[\cos\left(\frac{i \pi}{2n}-\frac{\pi}{4n}\right)-\cos\left(\frac{i \pi}{2n}+\frac{\pi}{4n}\right)\right]$$</p>
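The product-to-sum identity behind this telescoping trick is easy to sanity-check numerically (the value of $n$ below is arbitrary):

```python
import math

n = 7  # arbitrary choice
for i in range(1, 2 * n):
    a = i * math.pi / (2 * n)
    b = math.pi / (4 * n)
    lhs = math.sin(a) * math.sin(b)
    rhs = 0.5 * (math.cos(a - b) - math.cos(a + b))
    assert abs(lhs - rhs) < 1e-12
print("identity holds")
```

Summing the right-hand side over $i$ makes consecutive cosine terms cancel, which is what collapses the sum.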
1,562,010
<p>Let $f(x) \in \mathbb Z[x]$ be an irreducible monic polynomial such that $|f(0)|$ is not a perfect square . Then is $f(x^2)$ also irreducible in $\mathbb Z[x]$ ?</p> <p>( It is supposed to have an elementary solution , without using any field-extension etc. )</p>
An Hoa
41,874
<p>Let $\alpha$ be a root of $f$. Let $K = \mathbb{Q}(\alpha)$ and $L = \mathbb{Q}(\sqrt{\alpha})$. We have tower of extension $$L \supseteq K \supseteq \mathbb{Q}$$ By irreducibility of $f$, we know that $[K : \mathbb{Q}] = \deg f$ and obviously $[L : K] \leq 2$. If $[L : K] = 2$ then $[L : \mathbb{Q}] = 2 \deg f$ so that $f(x^2)$ must be irreducible for it must then be minimal polynomial for $\sqrt{\alpha}$. If $[L : K] = 1$ then obviously, $f(x^2)$ is reducible for it is divisible by minimal polynomial say $g(x) \in \mathbb{Q}[x]$ for $\sqrt{\alpha}$, which is of the same degree as $f$ because $\deg g = [L : \mathbb{Q}] = \deg f$. So whether $f(x^2)$ is irreducible or not is entirely decided by whether $L \not= K$ (equivalently, $\sqrt{\alpha} \in K$) or not.</p> <p>Now if $L = K$ i.e. $f(x^2)$ is reducible then we further have factorization $f(x^2) = (-1)^{\deg g} g(x) g(-x)$ just by noting that if $\gamma$ is a root of $g$ then $-\gamma$ is also a root of $f(x^2)$ and hence, $|f(0)| = |g(0)|^2$ is a square. So if we assume $|f(0)|$ is not a square then $f(x^2)$ must be irreducible.</p>
313,030
<p>I often find myself writing a definition which requires a proof. You are defining a term and, contextually, need to prove that the definition makes sense. </p> <p>How can you express that? What about a definition with a proof?</p> <p>Sometimes one can write the definition and then the theorem. But it often happens that many definitions which should stay together need to be split because a theorem is required in between.</p> <p>A tentative example:</p> <p><strong>Definition</strong> (rational numbers) Let <span class="math-container">$\sim$</span> be the equivalence relation on <span class="math-container">$\mathbb Z^*\times \mathbb Z$</span> given by <span class="math-container">$$ (q,p) \sim (q',p') \iff pq' = p'q. $$</span> We define <span class="math-container">$\mathbb Q= (\mathbb Z^*\times \mathbb Z)/\sim$</span>. On <span class="math-container">$\mathbb Q$</span> we define addition and multiplication as follows <span class="math-container">$$ [(q,p)] + [(q',p')] = [(qq',pq'+p'q)] \\ [(q,p)] \cdot[(q',p')] = [(qq',pp')] $$</span> With these operations and choosing <span class="math-container">$0_\mathbb Q=[(1,0)]$</span> and <span class="math-container">$1_\mathbb Q=[(1,1)]$</span>, it turns out that <span class="math-container">$\mathbb Q$</span> is a field.</p> <p><strong>Proof.</strong> We are going to prove that <span class="math-container">$\sim$</span> is indeed an equivalence relation, that addition and multiplication are well defined and that the resulting set is a field. [...]</p>
Iosif Pinelis
36,721
<p><span class="math-container">$\newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}}$</span> I think the notion of "well-defined" may not always be well defined and should perhaps be avoided. In your example, it may be unclear what exactly is being proved. </p> <p>I also think it is all right to introduce notions within a statement; this can be done without ambiguity by using terms such as "define" and "introduce" and/or the symbol "<span class="math-container">$:=$</span>" meaning "[is] defined as". I have done it many times in my papers and never had a reviewer complain about that. In particular, your example could be rewritten as follows. </p> <hr> <p>\subsection{Rational numbers}</p> <p>The following proposition introduces, in a justified manner, the field of rational numbers. </p> <blockquote> <p><strong>Proposition</strong> </p> <p>(I) The binary relation <span class="math-container">$\sim$</span> on <span class="math-container">$P:=\Z\times\Z^*$</span> defined by the condition <span class="math-container">\begin{equation} (p_1,q_1)\sim(p_2,q_2)\iff p_1q_2=p_2q_1 \end{equation}</span> for <span class="math-container">$(p_1,q_1)$</span> and <span class="math-container">$(p_2,q_2)$</span> in <span class="math-container">$P$</span> is an equivalence. Let then <span class="math-container">\begin{equation} \Q:=P/\sim. \end{equation}</span></p> <p>(II) Consider the binary operations <span class="math-container">$\oplus$</span> and <span class="math-container">$\odot$</span> on <span class="math-container">$P$</span> defined by the formulas <span class="math-container">\begin{align} (p_1,q_1)\oplus(p_2,q_2)&amp;:=(p_1q_2+p_2q_1,q_1q_2), \\ (p_1,q_1)\odot(p_2,q_2)&amp;:=(p_1p_2,q_1q_2) \end{align}</span> for <span class="math-container">$(p_1,q_1)$</span> and <span class="math-container">$(p_2,q_2)$</span> in <span class="math-container">$P$</span>. 
Then for any <span class="math-container">$r_1,\tilde r_1,r_2,\tilde r_2$</span> in <span class="math-container">$P$</span> such that <span class="math-container">$r_1\sim\tilde r_1$</span> and <span class="math-container">$r_2\sim\tilde r_2$</span> we have <span class="math-container">\begin{equation} r_1\oplus r_2\sim\tilde r_1\oplus\tilde r_2\quad\text{and}\quad r_1\odot r_2\sim\tilde r_1\odot\tilde r_2. \end{equation}</span></p> <p>(III) Define now the binary operations <span class="math-container">$+$</span> and <span class="math-container">$\cdot$</span> on <span class="math-container">$\Q$</span> by the formulas<br> <span class="math-container">\begin{equation} [r_1]+[r_2]:=[r_1\oplus r_2]\quad\text{and}\quad [r_1]\cdot[r_2]:=[r_1\odot r_2] \end{equation}</span> for all <span class="math-container">$r_1,r_2$</span> in <span class="math-container">$P$</span>. Let also <span class="math-container">$0_\Q:=[(0,1)]$</span> and <span class="math-container">$1_\Q:=[(1,1)]$</span>. Then <span class="math-container">$(\Q,+,\cdot,0_\Q,1_\Q)$</span> is a field. </p> </blockquote> <p><em>Proof.</em> <span class="math-container">$\ldots$</span> </p> <hr>
4,608,805
<p>Suppose that I have a class of 35 students whose average grade is 90. I randomly picked 5 students whose average came out to be 85. Assume their grades are i.i.d and of normal <span class="math-container">$N(\mu, \sigma^2)$</span>. From the example I have seen, <span class="math-container">$\mu$</span> is usually called the population mean and should be equal to <span class="math-container">$90$</span>. The sample mean is usually referred to as <span class="math-container">$\frac{\sum{X_i}}{5}$</span>. When we do hypothesis testing we can ask whether the sample mean is equal to <span class="math-container">$90$</span>.</p> <ol> <li><p>Is the sample mean <span class="math-container">$\frac{\sum X_i}{5}$</span> or <span class="math-container">$85$</span>?</p> </li> <li><p>The population mean mathematically should be <span class="math-container">$\mu$</span>, but I think people also say that <span class="math-container">$90$</span> is the population mean. This does not make sense to me since it is not obvious to me why <span class="math-container">$90 = \mu$</span>. <span class="math-container">$90$</span> is calculated through the sum of the grades divided by 35, whereas <span class="math-container">$\mu$</span> is equal to some integral. I just do not see how they can be equal to each other.</p> </li> </ol>
José Gabriel Astaíza-Gómez
361,031
<p>We do hypothesis testing when the population size exceeds the number of our observations. That is the meaning of <span class="math-container">$n\rightarrow \infty$</span> in, e.g. <span class="math-container">$\sqrt{n}(\bar{X}_n - \mu)\rightarrow_d N(0,\sigma^2)$</span> where <span class="math-container">$\bar{X}_n=\dfrac{1}{n}\sum x_i$</span> (the <strong>Lindeberg-Levy CLT</strong>). Notice that the value of <span class="math-container">$\bar{X}_n$</span> depends on <span class="math-container">$n$</span>, i.e. the sample size.</p> <p>In your example, we have a population with a finite and known number of elements, with a mean of 90, which is also known.</p> <ol> <li>You randomly picked 5 students whose average came out to be 85. In math language, that is written as <span class="math-container">$\dfrac{1}{5}\sum_{i=1}^5 X_i = 85$</span></li> <li>When <span class="math-container">$n\rightarrow \infty$</span> the distribution can be approximated by a continuous function, and we can use integrals instead of sums. In your example, we can use sums directly. Assuming all probabilities <span class="math-container">$p_i, \ i=1,2,...,35$</span> are equal, then the first moment of your distribution is <span class="math-container">$$\mu = \mathbb{E}(X) =\sum_{i=1}^{35} X_i p_i = \dfrac{1}{35}\sum_{i=1}^{35} X_i=90$$</span></li> </ol>
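The distinction can be seen in a tiny simulation (the grades below are synthetic, merely mimicking the class of 35):

```python
import random

random.seed(1)
grades = [random.gauss(90, 5) for _ in range(35)]  # hypothetical class grades

# population mean: average over ALL 35 students -- a fixed number
mu = sum(grades) / len(grades)

# sample mean: average over a random draw of 5 -- a random variable,
# it changes from draw to draw
sample_means = [sum(random.sample(grades, 5)) / 5 for _ in range(3)]
print(mu, sample_means)
```

The population mean is a single constant; each draw of 5 produces a different sample mean scattered around it.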
1,002,777
<p>I want to convert this polynomoial to partial fraction.</p> <p>$$ \frac{x^2-2x+2}{x(x-1)} $$</p> <p>I proceed like this: $$ \frac{x^2-2x+2}{x(x-1)} = \frac{A}{x} + \frac{B}{x-1} $$ Solving, $$ A=-2,B=1 $$ But this does not make sense. What is going wrong?</p>
Module
114,669
<p>$$\frac{x^2-2x+2}{x(x-1)}=1+\frac Ax+\frac B{x-1}$$</p> <p>The fraction is improper (numerator and denominator have the same degree), so a constant term $1$ must be split off first; that is what went wrong in your attempt. Now what you have to do to solve for A and B is to multiply both sides of the equation by $x(x-1)$, which gives</p> <p>$$x^2-2x+2=x^2-x+A(x-1)+Bx,$$ so $$A(x-1)+Bx=(A+B)x-A=-x+2.$$ From here on it's pretty easy to solve for both A and B.</p>
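Solving as above gives $A=-2$, $B=1$, and the full decomposition $1-\frac{2}{x}+\frac{1}{x-1}$ can be confirmed numerically at a few arbitrary points:

```python
# (x^2 - 2x + 2) / (x(x-1))  should equal  1 - 2/x + 1/(x - 1)
for x in (0.5, 2.0, -3.0, 7.5):   # arbitrary points avoiding the poles 0 and 1
    lhs = (x * x - 2 * x + 2) / (x * (x - 1))
    rhs = 1 - 2 / x + 1 / (x - 1)
    assert abs(lhs - rhs) < 1e-12
print("decomposition checks out")
```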
2,136,024
<p>I am having problems with this linear algebra proof:</p> <blockquote> <p>Let $ A $ be a square matrix of order $ n $ that has exactly one nonzero entry in each row and each column. Let $ D $ be the diagonal matrix whose $ i^{th} $ diagonal entry is the nonzero entry in the $i^{th}$ row of $A$</p> <p>For example:</p> <p>$A = \begin{bmatrix}0 &amp; 0 &amp; a_1 &amp; 0\\a_2 &amp; 0 &amp; 0 &amp; 0\\0 &amp; 0 &amp; 0 &amp; a_3 \\0 &amp; a_4 &amp; 0 &amp; 0 \end{bmatrix} \quad $ $D = \begin{bmatrix}a_1 &amp; 0 &amp; 0 &amp; 0\\0 &amp; a_2 &amp; 0 &amp; 0\\0 &amp; 0 &amp; a_3 &amp; 0\\0 &amp; 0 &amp; 0 &amp; a_4 \end{bmatrix}$</p> <p>A permutation matrix, P, is defined as a square matrix that has exactly one 1 in each row and each column</p> <p>Please prove that:</p> <ol> <li>$ A = DP $ for a permutation matrix $ P $</li> <li>$ A^{-1} = A^{T}D^{-2} $</li> </ol> </blockquote> <p>My attempt:</p> <p>For 1, I tried multiplying elementary matrices to $ D $ to transform it into $ A $:</p> <p>$$ A = D * E_1 * E_2 * \cdots * E_k $$</p> <p>Since I am performing post multiplication with elementary matrices, the effect would be a column wise operation on D. But I can't see how this swaps the elements of $ D $ to form $A$. I also cannot prove that the product of the elementary matrices will be a permutation matrix.</p> <p>For 2, my attempt is as follows (using a hint that $PP^{T} = I$):</p> <p>$$ \begin{aligned} A^{T}D^{-2} &amp;= (DP)^{T}D^{-2} \\ &amp;= (P^{T})(D^{T})(D^{-1})(D^{-1}) \\ &amp;= (P^{-1})(D^{T})(D^{-1})(D^{-1}) \end{aligned} $$</p> <p>I am not sure how to complete the proof since I cannot get rid of the term $D^{T}$.</p> <p>Could someone please advise me on how to solve this problem?</p>
Fimpellizzeri
173,410
<p><strong>Hint:</strong> For $(1)$, find a matrix $P(i,j)$ that swaps columns $i$ and $j$. Your permutation matrix will be a product of $P(i,j)$'s.</p> <p>For $(2)$, try to convince yourself that when $D$ is diagonal, $D^{T}=D$. It's not too hard!</p>
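Both identities can also be checked numerically on the $4\times 4$ pattern from the question (the nonzero entries below are arbitrary picks):

```python
import numpy as np

a = np.array([3.0, 5.0, 7.0, 2.0])        # arbitrary nonzero a_1..a_4
A = np.zeros((4, 4))
A[0, 2], A[1, 0], A[2, 3], A[3, 1] = a    # sparsity pattern from the question
D = np.diag(a)

# dividing row i by a_i leaves only the 1s: P is the permutation matrix
P = np.linalg.inv(D) @ A

# identity (1): A = D P;  identity (2): A^{-1} = A^T D^{-2}
print(np.allclose(D @ P, A))
```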
2,165,213
<p><strong>The Problem</strong></p> <p>Let $V=k^3$ for some field $k$. Let $W$ be the subspace spanned by $(1,0,0)$ and let $U$ be the subspace spanned by $(1,1,0)$ and $(0,1,1)$. Show that $V= W \oplus U$. Explain your argument in detail.</p> <hr> <p><strong>What I Know</strong></p> <ol> <li><p>I know that a field $k^n=n$-tuples of elements of $k$.</p></li> <li><p>I know that a subset $W$ of a vector space $V$ over a field $k$ is a <em>subspace</em> if the operations of $V$ make $W$ into a vector space over $k$.</p></li> <li><p>I know that if span$(S)=V$ for a set $S$ in a vector space $V$, where $S$ is linearly independent, then $S$ is a <em>basis</em> for $V$.</p></li> <li><p>I know that the <em>external direct sum</em> $V \oplus W$ for vector spaces $V$ and $W$ over a field $k$ is defined as the set of all ordered pairs $(v,w)$ such that $v\in V$ and $w \in W$.</p> <hr></li> </ol> <p><strong>What I Don't Know</strong></p> <ol> <li><p>How to <em>apply</em> what I listed above to help me solve the problem. I am absolutely atrocious at this material and struggle so much in simply starting these problems.</p></li> <li><p>If everything I listed above is even relevant to the problem at hand.</p></li> <li><p>If what I listed above is insufficient to complete the problem. </p> <hr></li> </ol> <p>Text: <em>Abstract Linear Algebra</em> by Curtis</p>
Kenny Wong
301,805
<p>I presume you're trying to identify $\mathbb R[x]/(x-5)$?</p> <p>$\phi$ is a homomorphism of rings. As you mentioned, ${\rm ker} \phi = (x-5)$. It shouldn't be hard to see that ${\rm im} \phi = \mathbb R$. Now use the fact that if $\phi : R \to S$ is a ring homomorphism, then $R/{\rm ker } \phi \cong {\rm im} \phi$.</p>
3,060,742
<p><span class="math-container">$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = 1.644934$</span> or <span class="math-container">$\frac{\pi^2}{6}$</span></p> <p>What if we take every 3rd term and add them up? </p> <p>A = <span class="math-container">$ \frac{1}{3^2} + \frac{1}{6^2} + \frac{1}{9^2} + \cdots = ??$</span></p> <p>How to take every 3rd-1 term and add them up?</p> <p>B = <span class="math-container">$ \frac{1}{2^2} + \frac{1}{5^2} + \frac{1}{8^2} + \cdots = ??$</span></p> <p>How to take every 3rd-2 term and add them up?</p> <p>C = <span class="math-container">$ \frac{1}{1^2} + \frac{1}{4^2} + \frac{1}{7^2} + \cdots = ??$</span></p> <p>I am not sure how to adapt Eulers methods as he used the power series of sin for his arguments: <a href="https://en.wikipedia.org/wiki/Basel_problem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Basel_problem</a></p>
Jack D'Aurizio
44,121
<p>As a complement to Mark's answer, </p> <p><span class="math-container">$$\sum_{n\geq 0}\frac{1}{(3n+1)^2}=-\int_{0}^{1}\sum_{n\geq 0} x^{3n}\log(x)\,dx=\int_{0}^{1}\frac{-\log x}{1-x^3}\,dx $$</span> (and similarly <span class="math-container">$\sum_{n\geq 0}\frac{1}{(3n+2)^2}$</span>) can be expressed in terms of dilogarithms, since <span class="math-container">$$ \int_{0}^{1}\frac{-\log x}{1-a x}=\frac{\text{Li}_2(a)}{a} $$</span> for any <span class="math-container">$|a|\leq 1$</span>, with <span class="math-container">$\text{Li}_2(a)=\sum_{n\geq 1}\frac{a^n}{n^2}$</span>. This is equivalent to stating that <span class="math-container">$\psi'\left(\frac{1}{3}\right)$</span> and <span class="math-container">$\psi'\left(\frac{2}{3}\right)$</span> can be computed through the discrete Fourier transform. It is worth noticing that <span class="math-container">$$\text{Re}\,\text{Li}_2(e^{i\theta})=\sum_{n\geq 1}\frac{\cos(n\theta)}{n^2} $$</span> is a continuous and piecewise-parabolic function, as the formal primitive of the sawtooth wave. On the contrary, <span class="math-container">$\text{Im}\,\text{Li}_2(e^{i\theta})$</span> does not have a nice closed form, in general. Ref.: <a href="https://en.wikipedia.org/wiki/Spence%27s_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spence%27s_function</a></p>
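A partial-sum computation ties the three series back to the Basel value (plain Python; the truncation point is arbitrary):

```python
import math

N = 200_000
A = sum(1 / (3 * k) ** 2 for k in range(1, N))      # 1/3^2 + 1/6^2 + ...
B = sum(1 / (3 * k + 2) ** 2 for k in range(0, N))  # 1/2^2 + 1/5^2 + ...
C = sum(1 / (3 * k + 1) ** 2 for k in range(0, N))  # 1/1^2 + 1/4^2 + ...

# A is exactly (1/9) * zeta(2), and the three residue classes
# recombine to zeta(2) = pi^2 / 6
print(A, B, C, A + B + C, math.pi ** 2 / 6)
```

Only the first series has an elementary closed form; the other two require the trigamma/dilogarithm values discussed above.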
1,433,980
<p>so the problem I m having deals with conditional probability. I am given so much information and don't know what to do with what. Here is the problem:</p> <p>"A study investigated whether men and women place more importance on a mate's ability to express his/her feelings or on a mate's ability to make a good living. In the study, 55% of the participants were men, 71% of participants said that feelings were more important, and 35% of the participants were men that said feelings were more important. Suppose that an individual is randomly selected from the participants in this study. Let M be the event that the individual is male and F be the event that the individual said that feelings were more important."</p> <p>I am asked to find $P(M' \cap F)$</p> <p>I get confused on which percentages to use. For instance, 55% of all the participants were men, so that means 45% were women. 71% of the participants said feelings were more important so that means 29% said feelings were not important. Of the 71% that said feelings were important, 35% of them were men. So does that means the percentage of women that said feelings were important is 65%? Or would that be 36%? Since I am finding $P(M' \cap F)$, for M' would I use the 65% (or 36%) or would I use the 45% of the total population?</p> <p>Thanks</p>
MegaboofMD
269,606
<p>Note that $M \cap F$ and $M' \cap F$ partition $F$, so $$P(M' \cap F) = P(F) - P(M \cap F) = 0.71 - 0.35 = 0.36.$$</p> <p>Equivalently, $P(M' \cap F) = P(F) \cdot P(M' \mid F)$, where $P(F) = 0.71$ and $P(M' \mid F) = 0.36/0.71$, which again gives $0.71 \cdot 0.36/0.71 = 0.36$.</p>
M.K.
124,485
<p>It's sometimes easier to break this down into a table as follows:</p> <p><a href="https://i.stack.imgur.com/lGPLF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGPLF.png" alt="enter image description here"></a></p> <p>As you can see the percentage that the are not male and consider feelings important is 36% i.e probability is 0.36.</p>
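The subtraction behind the table can be scripted; a minimal Python sketch using only the three numbers given in the problem statement:

```python
# Given values from the problem statement
P_M = 0.55        # P(male)
P_F = 0.71        # P(feelings more important)
P_M_and_F = 0.35  # P(male AND feelings)

# The remaining cells of the 2x2 table follow by subtraction
P_Mc_and_F = P_F - P_M_and_F              # female, feelings
P_M_and_Fc = P_M - P_M_and_F              # male, not feelings
P_Mc_and_Fc = 1 - P_F - P_M + P_M_and_F   # female, not feelings

print(round(P_Mc_and_F, 2))  # 0.36
```

The four cells sum to 1, which is a quick consistency check on the numbers given.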
30,718
<p>As we all know, questions lacking context are strongly discouraged on this site. This includes mainly "homework questions" that look a bit like:</p> <blockquote> <p>Prove that <span class="math-container">$\lim_{x\to0}x^2=0$</span> using <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> definition of the limit</p> </blockquote> <p>and contain nothing else in the question body. It is for this reason that the close reason of "lacking context and/or other details" exists, to ensure that we are not spammed with and overwhelmed by such questions which display no effort on the part of the OP.</p> <p>I used to think this was quite clear: if the OP showed effort, even if it might have led to little to no progress, then it has context and should be allowed and not closed. But recently, I encountered <a href="https://math.stackexchange.com/questions/3366064/how-deep-is-the-liquid-in-a-half-full-hemisphere">this question</a> about how deep the liquid in a half-full hemisphere should fill it up to. Seeing essentially no mathematical effort by the OP, I was immediately tempted to downvote and close the question as off-topic due to lack of context. This is especially since the question is something that could come up in any introductory course on calculus, just phrased differently. But at the same time, the OP did provide some sort of "context", albeit a non-mathematical one---that they were trying to measure the exact amount of vanilla extract to use in a cooking recipe. This makes the motivation clear in some sense, but the "context" provided isn't what one would usually expect, and certainly not one I would have considered prior to coming across this question.</p> <p>So, my question is: <strong>How exactly should we define "context" in general, and in this particular case, should the question be regarded as lacking context?</strong></p>
hardmath
3,111
<p>I often claim to have a <em>de minimis</em> requirement for context, in which even a slight indication of effort or interest in a problem suffices.</p> <p>It is with the intent of offering an example that I feel <em>narrowly misses</em> this qualification that I post <a href="https://math.stackexchange.com/questions/3368062/prove-every-5-integer-numbers-have-2-numbers-that-their-sum-or-difference-or-m">this recent Close review</a> instance:</p> <p>Title: <strong>Prove : every 5 integer numbers have 2 numbers that their sum or difference or multiplication divisible by 10</strong></p> <p>Body: "I think that I can prove that with pigenhole-principle[sic] , but I can't do that."</p> <p>I've left a Comment inviting the OP to add more in the way of context (and to use the body to give a self-contained problem statement). My thought about this example is that the OP knew this was an exercise to promote/reinforce learning the Pigeonhole Principle, and so included that phrase in lieu of a proper explanation of the problem setup and goal, omitting all but the vague suggestion of an approach and "difficulty encountered".</p> <p>I find this post unsatisfactory for Math.SE's goal of collecting excellent content to help students of mathematics at all levels, but given the luxury of waiting an hour or two (the post was three hours old when I came across it), I'd like to see the OP respond to my Comment. So I chose to "skip" in review, but will vote to place it on-hold (Close) if there is no response in the near future. [Absent that luxury I'd have voted to close it in its present form.]</p>
603,986
<p>Show that in a finite field $F$ there exists $p(x)\in F[X]$ such that $p(f)\neq 0\;\;\forall f\in F$.</p> <p>Any ideas how to prove it?</p>
DBFdalwayse
88,790
<p>Consider counting the number of monic polynomials of a fixed degree that have roots: say for degree $3$, those that split completely have the form $(x-r_1)(x-r_2)(x-r_3)$ (notice that different permutations of $r_1,r_2,r_3$ generate the same polynomial). Compare this with the overall number of monic polynomials $x^3+ax^2+bx+c$, where (hint) $(a,b,c)$ ranges over $\mathbb F^3$.</p> <p>EDIT: The count I had in mind was: for degree $q&lt;|\mathbb F|$, there are $\binom{|\mathbb F|}{q}$ ways of having all distinct roots, then further terms for each pattern of repeated roots, down to $|\mathbb F|$ ways of having all roots equal; compare with the $|\mathbb F|^q$ monic polynomials of degree $q$ in total. Even for $q=2$, we have $\binom{|\mathbb F|}{2}+|\mathbb F|$ split polynomials, versus $|\mathbb F|^2$ monic quadratics.</p>
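To make the count concrete, here is a brute-force sketch in Python over the small field $\mathbb F_3$ (using nothing beyond the standard library): it confirms that strictly fewer than $27$ of the monic cubics $x^3+ax^2+bx+c$ have a root in $\mathbb F_3$, so a root-free polynomial certainly exists.

```python
from itertools import product

q = 3
F = range(q)

total = 0
rootless = 0
for a, b, c in product(F, repeat=3):  # monic cubics x^3 + a x^2 + b x + c
    total += 1
    has_root = any((x**3 + a * x**2 + b * x + c) % q == 0 for x in F)
    if not has_root:
        rootless += 1

print(total, rootless)  # 27 8
```

The 8 root-free monic cubics are exactly the irreducible ones, matching the classical count $(3^3-3)/3 = 8$.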
8,997
<p>I have a set of data points in two columns in a spreadsheet (OpenOffice Calc):</p> <p><img src="https://i.stack.imgur.com/IPNz9.png" alt="enter image description here"></p> <p>I would like to get these into <em>Mathematica</em> in this format:</p> <pre><code>data = {{1, 3.3}, {2, 5.6}, {3, 7.1}, {4, 11.4}, {5, 14.8}, {6, 18.3}} </code></pre> <p>I have Googled for this, but what I find is about importing the entire document, which seems like overkill. Is there a way to kind of cut and paste those two columns into <em>Mathematica</em>? </p>
LIU Qi
1,902
<p>Notice that you can use <code>Import["file.xlsx", {"Data",k,m,n}]</code> to import data at cell {m,n} on the k-th sheet in the file. To import a range of data, simply replace the numbers <code>m</code> and <code>n</code> by a range list, e.g.</p> <p><code>Import["file.xlsx", {"Data",1,Table[i,{i,3,8}],{1,2}}]</code> </p> <p>will import data from rows 3 to 8, columns 1 and 2 from the 1st sheet in your data file, giving you a 6×2 list. I assume this also works for OpenOffice files.</p>
481,167
<p>Let $V$ be a $\mathbb{R}$-vector space. Let $\Phi:V^n\to\mathbb{R}$ be a multilinear symmetric operator.</p> <p>Is it true and how do we show that for any $v_1,\ldots,v_n\in V$, we have:</p> <p>$$\Phi[v_1,\ldots,v_n]=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi (v_{j_1}+\cdots+v_{j_k}),$$ where $\phi(v)=\Phi(v,\ldots,v)$.</p> <p>My question comes from the fact that I have seen this formula when I was reading about mixed volume, and also when I was reading about mixed Monge-Ampère measure. The setting was not exactly the one of a vector space $V$ but I think the formula is true here and I am interested in having this property shown out of the specific context of Monge-Ampère measures or volumes. I have done some work in the other direction, <em>i.e.</em> starting from an operator $\phi:V\to\mathbb{R}$ satisfying some condition and obtaining a multilinear operator $\Phi$ ; below are the results I have seen in this direction.</p> <p>I already know that if $\phi':V\to\mathbb{R}$ is such that for any $v_1,\ldots,v_n\in V$, $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ is a homogeneous polynomial of degree $n$ in the variables $\lambda_i$, then there exists a unique multilinear symmetric operator $\Phi':V^n\to\mathbb{R}$ such that $\Phi'(v,\ldots,v)=\phi'(v)$ for any $v\in V$. 
Moreover $\Phi'(v_1,\ldots,v_n)$ is the coefficient of the symmetric monomial $\lambda_1\cdots\lambda_n$ in $\phi'(\lambda_1 v_1+\ldots+\lambda_n v_n)$ (see <a href="https://math.stackexchange.com/questions/469342/symmetric-multilinear-form-from-an-homogenous-form">Symmetric multilinear form from an homogenous form.</a>).</p> <p>I also know that if $\phi'(\lambda v)=\lambda^n \phi'(v)$ and we define $$\Phi''(v_1,\ldots,v_n)=\frac{1}{n!} \sum_{k=1}^n \sum_{1\leq j_1&lt;\cdots&lt;j_k\leq n} (-1)^{n-k}\phi' (v_{j_1}+\cdots+v_{j_k}),$$ then $\Phi''(v,\ldots,v)=\frac{1}{n!} \sum_{k=1}^n (-1)^{n-k} \binom{n}{k} k^n \phi'(v)=\phi'(v)$ (see <a href="https://math.stackexchange.com/questions/465172/show-this-equality-the-factorial-as-an-alternate-sum-with-binomial-coefficients">Show this equality (The factorial as an alternate sum with binomial coefficients).</a>). It is clear that $\Phi''$ is symmetric, but I don't know if $\Phi''$ is multilinear.</p> <p>Formula for $n=2$: $$\Phi[v_1,v_2]=\frac12 [\phi(v_1+v_2)-\phi(v_1)-\phi(v_2)].$$</p> <p>Formula for $n=3$: $$\Phi[v_1,v_2,v_3]=\frac16 [\phi(v_1+v_2+v_3)-\phi(v_1+v_2)-\phi(v_1+v_3)-\phi(v_2+v_3)+\phi(v_1)+\phi(v_2)+\phi(v_3)].$$</p>
Ewan Delanoy
15,381
<p>SKETCH OF THE PROOF : Your big sums are always sums of (sums of sums of) terms of the form <span class="math-container">$\Phi(v_{k_1},v_{k_2},\ldots ,v_{k_n})$</span> for some tuples of indices <span class="math-container">$(k_1,k_2, \ldots ,k_n)$</span>. Thanks to the symmetry of <span class="math-container">$\Phi$</span>, we can always rearrange and put the tuple in increasing order. You are then left with a simpler sum with less terms, where exactly one term is multilinear (the term <span class="math-container">$\Phi(v_1,v_2, \ldots ,v_n)$</span>) and all the others are not. So this is a Rambo-like situation of one against a hundred. But fortunately for us, the combinatorial property (let us call it <span class="math-container">$P$</span>) that for any finite set <span class="math-container">$X$</span>, the sum <span class="math-container">$\sum_{B \subseteq X}(-1)^{|B|}$</span> is zero except when <span class="math-container">$X$</span> is empty, allows us to show that all the non-multilinear terms have zero coefficient in the sum.</p> <p>THE DETAILS : Given a tuple <span class="math-container">$(k_1,\ldots ,k_n)$</span>, denote by <span class="math-container">$\rho(k_1,k_2,\ldots,k_n)$</span> the rearranged tuple according to increasing order. (Thus, <span class="math-container">$\rho(1,3,2)=(1,2,3)$</span>).</p> <p>It is more convenient here to view tuples as functions, so we shall speak of <span class="math-container">$u$</span> and <span class="math-container">$\rho u$</span> where <span class="math-container">$u$</span> and <span class="math-container">$\rho u$</span> are maps <span class="math-container">$\lbrace 1,2, \ldots, n\rbrace \to \lbrace 1,2, \ldots, n\rbrace$</span> and <span class="math-container">$\rho u$</span> is increasing. 
Also, we put <span class="math-container">$\psi(f)=\Phi(v_{f(1)},\ldots,v_{f(n)})$</span>.</p> <p>For an arbitrary increasing tuple <span class="math-container">$i$</span>, denote by <span class="math-container">$w(i)$</span> the number of tuples <span class="math-container">$j$</span> satisfying <span class="math-container">$\rho(j)=i$</span>. For any <span class="math-container">$A \subseteq \lbrace 1,2, \ldots ,n \rbrace$</span>, denote by <span class="math-container">$I(A)$</span> the set of all increasing maps <span class="math-container">$\lbrace 1,2, \ldots ,n \rbrace \to A$</span>. Also, let <span class="math-container">$I=I(\lbrace 1,2, \ldots, n \rbrace)$</span> and <span class="math-container">$V(f)=\lbrace A \subseteq \lbrace 1,2, \ldots ,n \rbrace | f\in I(A) \rbrace$</span> . Note that if <span class="math-container">$K(f)=\lbrace 1,2, \ldots ,n \rbrace \setminus Im(f)$</span>, then there is a natural bijection between <span class="math-container">${\cal P}(K(f))$</span> and <span class="math-container">$V(f)$</span>, given by <span class="math-container">$B \mapsto Im(f) \cup B$</span>.</p> <p>Let <span class="math-container">$$ \lambda (A)=\phi\bigg(\sum_{a\in A}v_a\bigg) \tag{2} $$</span></p> <p>Then, expanding <span class="math-container">$\lambda(A)$</span> completely shows that</p> <p><span class="math-container">$$ \lambda (A)=\sum_{f\in I(A)} w(f) \psi(f) \tag{3} $$</span></p> <p>Then, the RHS (call it <span class="math-container">$\Phi''$</span>) of the desired equality can be rewritten as</p> <p><span class="math-container">$$ \begin{eqnarray} \Phi'' &amp;=&amp; \sum_{A\subseteq \lbrace 1,2, \ldots ,n \rbrace}(-1)^{n-|A|}\lambda(A)\\ &amp;=&amp; \sum_{A\subseteq \lbrace 1,2, \ldots ,n \rbrace}(-1)^{n-|A|}\sum_{f\in I(A)} w(f) \psi(f) \\ &amp;=&amp; \sum_{f\in I}w(f)\psi(f)\sum_{A\in V(f)}(-1)^{n-|A|} \\ &amp;=&amp; \sum_{f\in I}w(f)\psi(f)\sum_{B\subseteq K(f)}(-1)^{n-|Im(f)|+|B|} \\ &amp;=&amp; w({\mathsf{id}})\psi(\mathsf{id}) \ \text{by 
property } P. \\ &amp;=&amp; n! \Phi(v_1,v_2, \ldots ,v_n) \end{eqnarray} $$</span></p> <p>which concludes the proof.</p>
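The identity can also be sanity-checked numerically. A Python sketch for $n=3$ with the (illustrative) symmetric trilinear form $\Phi(u,v,w)=\sum_i u_i v_i w_i$ on $\mathbb R^2$:

```python
from itertools import combinations
from math import factorial

def Phi(u, v, w):
    # an illustrative symmetric trilinear form on R^2
    return sum(a * b * c for a, b, c in zip(u, v, w))

def phi(v):
    return Phi(v, v, v)

def polarization(vs):
    # (1/n!) * sum over nonempty S of {1..n} of (-1)^(n-|S|) * phi(sum_{j in S} v_j)
    n, dim = len(vs), len(vs[0])
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            s = tuple(sum(vs[j][i] for j in S) for i in range(dim))
            total += (-1) ** (n - k) * phi(s)
    return total / factorial(n)

v1, v2, v3 = (1, 2), (3, -1), (0, 5)
print(Phi(v1, v2, v3), polarization([v1, v2, v3]))  # -10 -10.0
```

Both evaluations agree, as the proof above guarantees for any symmetric multilinear $\Phi$.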
737,835
<p>Why is $[0,1]$ not homeomorphic to $[0,1]^2$? It seems that the easiest way to show this is to find some inconsistency between the open set structures of the two. It is clear that the two share the same cardinality. Both are compact. Both are normal since they are metric spaces. However, where to find the open set structure that is not shared by the two? Any hint, please?</p>
Siminore
29,672
<p>If you remove an <em>inner</em> point from $[0,1]$, the resulting space is disconnected. This is clearly false if you remove <em>any</em> point from $[0,1]\times [0,1]$.</p> <p>This is a rather rough approach. A much more general one follows from Dimension Theory, or from the <a href="http://en.wikipedia.org/wiki/Invariance_of_domain" rel="nofollow">Invariance-of-domain Theorem</a>. It is stated for open subsets, but I think it is pretty easy to deduce your statement from the statement that $(0,1)$ is not homeomorphic to $(0,1) \times (0,1)$.</p>
Stefano
108,586
<p>If $A$ is homeomorphic to $B$ through $f$, then $A \setminus \lbrace a \rbrace$ is homeomorphic to $B \setminus \lbrace f \left( a \right) \rbrace$ through $f$. Then pick $a= \frac{1}{2}$. $\left[ 0,1\right] \setminus \lbrace \frac{1}{2} \rbrace$ is disconnected, while $\left[ 0,1\right] \times \left[ 0,1\right] \setminus \lbrace p \rbrace$ is connected for any $p \in \left[ 0,1\right] \times \left[ 0,1\right]$.</p>
229,606
<p>I need a little help in proving the following result:</p> <p>Consider the ring $R:=\mathbb{F}_q[X]/(X^n-1)$, where $\mathbb{F}_q$ is a finite field of cardinality $q$ and $n\in\mathbb{N}$. Then any ideal $I$ of $R$ is principal and can be written as $I=(g(X))$, such that $g(X)|(X^n-1)$.</p>
Berci
41,488
<p><strong>Hint:</strong> Think about $R=\Bbb F_q[X]/(X^n-1)$ as the ring of polynomials of degree $&lt;n$, and multiplication is '<em>modulo $(X^n-1)$</em>', meaning that $X^n=1$ is <em>the rule</em> to use in $R$.</p>
3,472,151
<p>I find two main sources on how to compute the half-derivative of <span class="math-container">$e^x$</span>. Both make sense to me, but they give different answers.</p> <p>Firstly, people argue that <span class="math-container">$$\begin{align} \frac{\mathrm{d}}{\mathrm{d} x} e^{k x} &amp;= k e^{k x} \\[4pt] \frac{\mathrm{d}^2}{\mathrm{d} x^2} e^{k x} &amp;= k^2 e^{k x} \\[4pt] \frac{\mathrm{d}^n}{\mathrm{d} x^n} e^{k x} &amp;= k^n e^{k x} \end{align}$$</span></p> <p>Therefore, it seems very reasonable that <span class="math-container">$$\frac{\mathrm{d}^{1/2}}{\mathrm{d} x^{1/2}} e^{k x} = \sqrt{k} e^{k x}$$</span></p> <p>But this is not what the usual formula gives: <span class="math-container">$$ \frac{\mathrm{d}^{1/2}}{\mathrm{d} x^{1/2}} e^{k x} = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} \int \limits_0^x \mathrm{d} t \frac{e^{k t}}{\sqrt{x-t}} = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} e^{k x} \int \limits_0^x \mathrm{d} u \frac{e^{- k u}}{\sqrt{u}} = \\ = \frac{1}{\Gamma (1/2)} \frac{\mathrm{d}}{\mathrm{d} x} \frac{2\, e^{k x}}{\sqrt{k}} \int \limits_0^{\sqrt{k x}} \mathrm{d} s \, e^{-s^2} $$</span> (substituting <span class="math-container">$s=\sqrt{ku}$</span>).</p> <p>We can already see that this is not equal to <span class="math-container">$\sqrt{k} e^{k x}$</span>.</p> <p>So who is right?</p> <p>Why do we use the latter formula in almost all cases but somehow we settle for the simpler formula when it comes to the exponential?</p> <hr> <p>Note that both satisfy that if we apply the half-derivative twice, we get the usual first derivative. (In the first case, it's simply because <span class="math-container">$\sqrt{k} \sqrt{k} = k$</span>; in the second case, there's a proof on Wiki using the properties of beta and gamma functions - plus I verified this numerically even though I couldn't express the integrals in a closed-form.)</p> <p>I also have a hard time accepting this second, complicated, formula, mainly because any integer derivative of the exponential gives exponential, but for the half-derivative we get this weird monstrosity. On the other hand, it should be consistent with the formulas for the half-derivative for all powers of <span class="math-container">$x$</span> when it's put together in the infinite series for <span class="math-container">$e^{k x}$</span>.</p> <p>Can anyone shed some light on this issue for me, please?</p>
Ninad Munshi
698,724
<p>Those two formulae are not as different as you think. Finish the computation:</p> <p><span class="math-container">$$\frac{1}{\Gamma\left(\frac{1}{2}\right)}\frac{d}{dx}\left(e^{kx}\int_0^x u^{-\frac{1}{2}}e^{-ku}du\right) = \frac{1}{\Gamma\left(\frac{1}{2}\right)}\left(ke^{kx}\int_0^x u^{-\frac{1}{2}}e^{-ku}du + \frac{1}{\sqrt{x}}\right)$$</span></p> <p><span class="math-container">$$= \sqrt{k}e^{kx}\operatorname{erf}(\sqrt{kx}) + \frac{1}{\sqrt{\pi x}}$$</span></p> <p>So yes, the various definitions for fractional derivatives are not in agreement with each other (Fourier vs. calculus/combinatorics), but they still do resemble each other and some underlying structure to some degree, especially in the behavior of leading order terms.</p>
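This closed form can be cross-checked numerically; here is a sketch with mpmath, assuming its `differint` routine implements the Riemann–Liouville differintegral with lower limit $x_0=0$ (the same definition used above):

```python
from mpmath import mp, differint, exp, erf, sqrt, pi

mp.dps = 25
x, k = 1, 1  # evaluate the half-derivative of e^{kx} at x = 1

numeric = differint(lambda t: exp(k * t), x, 0.5)  # Riemann-Liouville D^{1/2}
closed = sqrt(k) * exp(k * x) * erf(sqrt(k * x)) + 1 / sqrt(pi * x)
print(numeric, closed)

assert abs(numeric - closed) < 1e-8
```

For $k=x=1$ both evaluate to roughly $2.8549$, consistent with the formula $\sqrt{k}\,e^{kx}\operatorname{erf}(\sqrt{kx}) + 1/\sqrt{\pi x}$.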
2,277,115
<p>I'm asking for examples of interesting categories in which there exist non-isomorphic objects $X$ and $Y$, a split monomorphism $f : X \to Y$, and a split epimorphism $g : X \to Y$. Spelled out, there should exist maps $f : X \leftrightarrow Y : f'$ such that $f'f = \mathrm{id}_{X}$ and maps $g : X \leftrightarrow Y : g'$ such that $gg' = \mathrm{id}_{Y}$ such that there is no pair of maps $h : X \leftrightarrow Y : h'$ satisfying $h'h = \mathrm{id}_{X}$ and $hh' = \mathrm{id}_{Y}$.</p> <p>My professor found a seemingly relevant exercise from Rowen's "Graduate Algebra: Noncommutative view" suggesting this may occur in R-Mod, but I haven't got the book on hand and remember having trouble understanding the exercise anyways. Additionally, he more specifically asked if this can happen in Top.</p>
Niels J. Diepeveen
3,457
<p>In topological terms, you are asking for two non-homeomorphic spaces, each of which is homeomorphic to a retract of the other. You can easily find examples of such spaces by stringing together an infinite chain of copies of two non-homeomorphic spaces. Taking a circle and a segment as an example, we get subspaces of the plane shaped like o--o--o--o--o--o--o--.... and --o--o--o--o--o--o--o....</p> <p>Each of these spaces can be obtained from the other by mapping the first link in the chain to its connecting point, which is a retraction.</p> <p>Note that the fact that the chains are not homeomorphic cannot be deduced from the fact that the segment and the circle are not homeomorphic, but we can easily see that every point of the first space is a local cut point, which is not true for the second one.</p> <p>Countless examples can be constructed in similar ways. For a compact metrizable example you might consider the one point compactification of the spaces above.</p>
1,285,014
<p>Let $R,S$ be commutative rings with identity.</p> <p>Proving that $X \sqcup Y$ is an affine scheme is the same as proving that $Spec(R) \sqcup Spec(S) = Spec(R \times S)$.</p> <p>I proved that if $R,S$ are rings, then the ideals of $R \times S$ are exactly of the form $P \times Q$, where $P$ is an ideal of $R$ and $Q$ is an ideal of $S$.</p> <p>However, for prime ideals this is not true in general.</p> <p>If $I$ is a prime ideal of $R \times S$, then $I = \mathfrak{p} \times \mathfrak{q}$, where $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$.</p> <p>But if $\mathfrak{p}$ is a prime ideal of $R$ and $\mathfrak{q}$ is a prime ideal of $S$, it is not true in general that $\mathfrak{p} \times \mathfrak{q}$ is a prime ideal of $R \times S$.</p> <p>Then, $Spec(R \times S) \subseteq Spec(R) \times Spec(S)$ and the reverse inclusion is false in general.</p> <p>My question is, what is $Spec(R) \sqcup Spec(S)$ set-theoretically, in order to use what I proved above?</p>
Mauro ALLEGRANZA
108,274
<p>Of course, we can also use the tableau method, obtaining the same result produced by the use of a truth table.</p> <p>We have to apply the tableau to the original formula, checking its <em>satisfiability</em>.</p> <p>Each open path defines a (set of) assignments to the <em>sentential variables</em> satisfying the formula. Every assignment will form a "basic conjunct" that must be "disjoined" to have the required $DNF$.</p> <p>If we start with $T[p \to ¬(q \lor r)]$ and apply the rule for $T\to$, we get two branches: one with $Fp$ and the other with $T[¬(q∨r)]$, i.e. $F[q∨r]$.</p> <p>The left branch is finished without closing and thus gives the four possible conjuncts with $p$ <em>false</em>, i.e. the four conjuncts : $\lnot p \land \ldots$ (see the above answer).</p> <p>The same for the right branch; applying the rule for $F\lor$ we get: $Fq$ and $Fr$, i.e. the two conjuncts : $p \land \lnot q \land \lnot r$ and $\lnot p \land \lnot q \land \lnot r$.</p> <p>We have only to note that the second one is already present among the four conjuncts previously produced by the left branch, and we are done.</p>
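For comparison, the same DNF can be read off mechanically from a truth table; a small Python sketch enumerating the assignments that satisfy $p \to \neg(q \lor r)$:

```python
from itertools import product

def formula(p, q, r):
    # p -> not(q or r)
    return (not p) or not (q or r)

minterms = []
for p, q, r in product([True, False], repeat=3):
    if formula(p, q, r):
        lits = [v if b else "¬" + v for v, b in zip("pqr", (p, q, r))]
        minterms.append(" ∧ ".join(lits))

print(" ∨ ".join(f"({m})" for m in minterms))
print(len(minterms))  # 5 satisfying assignments: four with ¬p, plus p ∧ ¬q ∧ ¬r
```

This matches the tableau result: the four conjuncts with $p$ false, plus the single conjunct $p \land \lnot q \land \lnot r$ from the right branch.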
4,462,081
<p>I actually already have the solution to the following expression, yet it takes a long time for me to decipher the first operation provided in the answer. I understand all of the following except how to convert <span class="math-container">$\left(1+e^{i\theta \ }\right)^n=\left(e^{\frac{i\theta }{2}}\left(e^{\frac{-i\theta }{2}}+e^{\frac{i\theta }{2}}\right)\right)^n$</span></p> <p>I am not sure if I wrote the expression correctly, I am new to this website.</p> <p>Thank You!</p>
Mark Bennet
2,906
<p>If you had <span class="math-container">$1+x^2$</span> you could write it as <span class="math-container">$x\left(\dfrac 1x+x\right)$</span> if you wanted to.</p> <p>This occurs occasionally with the trigonometric/complex exponential functions because, of course <span class="math-container">$e^{-ia}=\dfrac 1{e^{ia}}$</span> and you can write the sine and cosine functions as a scalar multiple of something which looks like <span class="math-container">$x\pm\dfrac 1x$</span>.</p> <p>Of course what you have doesn't quite look like <span class="math-container">$1+x^2$</span> so you have to notice that form lurking beneath the surface. Worth noting the structure, though, as you will likely encounter it again.</p> <p>And that is what is going on here - other answers will be likely more explicit. I just wanted to highlight this useful idea.</p>
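Since $e^{-i\theta/2}+e^{i\theta/2}=2\cos(\theta/2)$, the factored form equals $(2\cos(\theta/2))^n e^{in\theta/2}$; a quick numerical check in Python:

```python
import cmath
import math

theta, n = 0.7, 5
lhs = (1 + cmath.exp(1j * theta)) ** n
# 1 + e^{i theta} = e^{i theta/2}(e^{-i theta/2} + e^{i theta/2}) = 2 cos(theta/2) e^{i theta/2}
rhs = (2 * math.cos(theta / 2)) ** n * cmath.exp(1j * n * theta / 2)

assert abs(lhs - rhs) < 1e-12
print(lhs)
```

Pulling out the "half-angle" exponential is exactly what makes the modulus ($2^n\cos^n(\theta/2)$) and argument ($n\theta/2$) visible at a glance.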
612,827
<p>I'm self studying with Munkres's topology and he uses the uniform metric several times throughout the text. When I looked in Wikipedia I found that there's this concept of a <a href="http://en.wikipedia.org/wiki/Uniform_space" rel="nofollow">uniform space</a>.</p> <p>I'd like to know what its uses are (outside point-set topology) and whether it's an important thing to learn on a first run through topology.</p>
arsmath
4,880
<p>Uniform spaces are right on the boundary of formalisms that are worth knowing. Topological groups are definitely worth knowing, and are central to analysis and its applications. The topology of a topological group is determined by the system of neighborhoods of the identity. They also have a natural notion of uniform continuity.</p> <p>In applications, the topology on a group is frequently given by a family of pseudometrics. (A simple example is a Banach space -- a vector space is a kind of group, and the topology on the group is given by a single metric.) Families of pseudometrics also have a notion of uniform continuity.</p> <p>The axioms of a uniform space generalize these two cases &mdash; except that it's not really a generalization: the topology on a group is always given by a family of pseudometrics. The interesting thing is that this proof &mdash; which can be phrased in terms of neighborhoods of the identity &mdash; immediately generalizes to all uniform spaces. So while uniform spaces are not any more general, once you do the work of proving that theorem for topological groups, you get uniform spaces for free.</p> <p>So if you don't like the definition of uniform spaces, or find it hard to understand, the idea is not strictly necessary. But it's not much more work beyond understanding neighborhoods of the identity of a topological group.</p>
2,541,044
<p>I read this argument on the internet about how the solution to the sleeping beauty problem is $\frac{1}{3}$:</p> <p>All these events are equally likely in the experiment : </p> <ol> <li><p>Coin landed Heads, it's Monday and Beauty is awake</p></li> <li><p>Coin landed Heads, it's Tuesday and Beauty is asleep</p></li> <li><p>Coin landed Tails, it's Monday and Beauty is awake</p></li> <li><p>Coin landed Tails, it's Tuesday and Beauty is awake</p></li> </ol> <p>All these are mutually exclusive and exhaustive and also equally likely. So, all four of these events have a probability $\frac{1}{4}$. But when Beauty is awakened, she knows that she isn't asleep. So, the second possibility can be ruled out. The rest three are still equally likely with a probability $\frac{1}{3}$. Hence the probability that the coin landed Heads is $\frac{1}{3}$.</p> <p>But I remember something from the Monty Hall problem and this situation looks somewhat similar. The solution assumes that when possibility no. 2 is ruled out, the remaining three remain equally likely. This doesn't happen in the Monty Hall problem.</p> <p>For example, there are 100 doors. A prize is behind one of them. Clearly, all the doors are equally likely to have the prize. We pick one random door. The probability that it has the prize is $\frac{1}{100}$. The probability that the prize is in one of the remaining doors is $\frac{99}{100}$. When doors from the set of remaining 99 doors are ruled out one by one, all the doors no longer remain equally likely. Our door still has the probability $\frac{1}{100}$ while the group of remaining doors still hold a probability of $\frac{99}{100}$.</p> <p>Could this be true for the Sleeping Beauty Problem too? I mean the possibilities 1. and 2. 
that I've listed collectively hold a probability of $\frac{1}{2}$ and even when possibility no.2 is ruled out, its probability gets transferred to possibility no.1, so that it still has a probability of $\frac{1}{2}$.</p> <p><strong>EDIT:</strong> Suppose Beauty is the contestant on the Monty Hall Show. She is presented four doors in front of her, A, B, C, D. Clearly, all the doors currently have a winning probability of $\frac{1}{4}$. But she knows that before the prize was put behind one of the doors, a coin was tossed. If it landed heads, the prize was placed in one of the doors A or B and in case it was tails, the prize was put in C or D. Beauty knows this. Now, the host rules out door B as a possibility (which is equivalent to Beauty ruling out possibility 2). Do the doors A, C and D remain equally likely to have the prize or is it safer to choose A?</p> <p>I think it's safer to choose A because either you can assume the coin landed tails and further burden yourself in choosing between C and D or you can assume the coin landed heads and then choose A, the only remaining Heads door.</p>
spaceisdarkgreen
397,125
<p>To add to Qiaochu's answer, the Monty Hall problem's solution depends on our understanding of the exact process the game follows. The assumption is that if you are standing in front of a door with the goat, Monty will always open the other door with the goat. What if instead the procedure that Monty will open either door you aren't standing in front of with equal probability and if the door has the prize, you lose (or win, for that matter)? What if the procedure is that he will open any of the three doors with equal probability (and if he opens your door you get what's behind it, and if he opens a door that isn't in front of you and the prize is behind it you lose/win). </p> <p>In either case there is no longer any advantage to switching. If you were on this game show and a door with a goat that you weren't in front of happened to open up, but you didn't know which procedure Monty was following, you would not know whether there was any value to switching$.^*$ Compare to Qiaochu's sleeping beauty.</p> <p>$\;^*$ Well, I suppose one should view the fact that you were shown a goat behind a door that you aren't in front of as evidence that you are playing the canonical version of the game since this situation is most probable under this version. But then to calculate odds you would need to specify what the possible versions of the game are and prior probabilities for each of them.</p>
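The protocol dependence is easy to see in simulation. Here is a Monte Carlo sketch of the classic three-door game versus the "Monty opens a uniformly random other door" variant, discarding runs where the random host happens to reveal the prize:

```python
import random

random.seed(0)
N = 200_000

def classic():
    # host knowingly opens an empty door you didn't pick; does switching win?
    prize, pick = random.randrange(3), random.randrange(3)
    opened = next(d for d in range(3) if d != pick and d != prize)
    switch = next(d for d in range(3) if d != pick and d != opened)
    return switch == prize

def random_host():
    # host opens a uniformly random other door; None = run discarded (prize shown)
    prize, pick = random.randrange(3), random.randrange(3)
    opened = random.choice([d for d in range(3) if d != pick])
    if opened == prize:
        return None
    switch = next(d for d in range(3) if d != pick and d != opened)
    return switch == prize

classic_rate = sum(classic() for _ in range(N)) / N
valid = [r for r in (random_host() for _ in range(N)) if r is not None]
random_rate = sum(valid) / len(valid)

print(round(classic_rate, 3), round(random_rate, 3))  # ~0.667 and ~0.5
```

Conditioned on seeing a goat, switching wins about $2/3$ of the time under the canonical protocol but only about $1/2$ under the random-host protocol — the same observation carries different evidential weight depending on the procedure that produced it.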
1,666,615
<blockquote> <p>For the series $\sum_{k=1}^{\infty}a_k$, suppose that there is a number $r$ with $0\leq r&lt;1$ and a natural number $N$ such that $$|a_k|^{1/k}&lt;r\qquad\text{for all indices $k\geq N$}$$ Prove that $\sum_{k=1}^{\infty}a_k$ converges absolutely.</p> </blockquote> <p>Proof:</p> <p>For a given $r\in\mathbb{R}$ with $0\leq r&lt;1$ and $N\in\mathbb{N}$ satisfy $|a_k|^{1/k}&lt;r$ for all indices $k\geq N$, that gives $|a_k|&lt;r^k$. Now, define $s_n=\sum_{k=1}^{n}|a_k|$ be a sequence of partial sum of $\sum_{k=1}^{\infty}|a_k|$. Since $\sum_{k=1}^{n}r^k$ converges to $(1-r^{n+1})/(1-r)$, for all $\epsilon&gt;0$, this gives $$\left|\sum_{k=1}^{n}r^k-\frac{1-r^{n+1}}{1-r}\right|&lt;\frac{\epsilon}{2}\qquad\text{for all $k\geq N$}$$ Then for all $j,k\geq N$, we have \begin{align*} \left|\sum_{j=1}^{n}a_j-\sum_{k=1}^{n}a_k\right|&lt;\left|\sum_{j=1}^{n}r^j-\sum_{k=1}^{n}r^k\right|&amp;=\left|\sum_{j=1}^{n}r^j-\frac{1-r^{n+1}}{1-r}+\frac{1-r^{n+1}}{1-r}-\sum_{k=1}^{n}r^k\right|\\ &amp;\leq\left|\sum_{j=1}^{n}r^j-\frac{1-r^{n+1}}{1-r}\right|+\left|\frac{1-r^{n+1}}{1-r}-\sum_{k=1}^{n}r^k\right|\\ &amp;=\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon \end{align*} Hence, $\{s_n\}$ is a Cauchy sequence which implies $\{s_n\}$ is convergent, so there exists an $M\in\mathbb{R}$ such that $\sum_{k=1}^{n}a_k\leq M$. This inequality implies $\sum_{k=1}^{\infty}|a_k|$ is convergent; therefore, $\sum_{k=1}^{\infty}a_k$ converges absolutely.</p> <hr> <p>Is this solution valid? If not, can someone give me a hint or suggestion to reach the answer? Thanks.</p>
ldiaz
709,059
<p>Since you are trying to prove that </p> <p><span class="math-container">$$\sum_{k=0}^\infty a_k$$</span> converges absolutely, consider the sum <span class="math-container">$$|\sum_{k=0}^\infty a_k|$$</span> = <span class="math-container">$$|\sum_{k=0}^N a_k + \sum_{k=N+1}^\infty a_k|$$</span> <span class="math-container">$\le$</span> <span class="math-container">$$|\sum_{k=0}^N a_k| + |\sum_{k=N+1}^\infty a_k|$$</span></p> <p>Notice that the first sum is convergent because it has finitely many terms. The second sum is where you use the given assumption that <span class="math-container">$|a_k|^\frac1k &lt; r$</span> for <span class="math-container">$k \ge N$</span> and that <span class="math-container">$0 \le r &lt; 1$</span>, which gives <span class="math-container">$|a_k| &lt; r^k$</span>. </p> <p>Invoking the triangle inequality, the original assumption, and comparison with the convergent geometric series <span class="math-container">$\sum_k r^k$</span> should show that you have a sum of two convergent series. </p>
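<p>As a numerical illustration of why the tail is controlled by a geometric series, here is a quick Python check of my own (the particular sequence $a_k=(-0.6)^k$ and the bound $r=0.7$ are illustrative choices, not part of the problem): every partial sum of $\sum|a_k|$ stays below the geometric bound $r/(1-r)$.</p>

```python
r = 0.7
ks = range(1, 200)
a = [(-0.6)**k for k in ks]                  # |a_k|^(1/k) = 0.6 < r for every k
assert all(abs(x)**(1.0/k) < r for k, x in zip(ks, a))

partial = 0.0
for x in a:
    partial += abs(x)
    # each partial sum of |a_k| is dominated by sum_{k>=1} r^k = r/(1-r)
    assert partial < r/(1 - r)
```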
947,730
<p>I'm trying to do this for practice but I'm just going nowhere with it, I'd love to see some work and answers on it.</p> <p>Thanks :)</p> <p>Find a polynomial that passes through the points (-2,-1), (-1,7), (2,-5), (3,-1). Present the answer in standard form.</p> <p>What I've tried:</p> <p><img src="https://i.stack.imgur.com/Wsvj9.jpg" alt="What I firstly tried but went nowhere"></p> <p><img src="https://i.stack.imgur.com/R6USZ.jpg" alt="Another attempt to get somewhere"></p>
André Nicolas
6,312
<p><strong>Hints:</strong> </p> <p>Way 1: Consider the polynomial $$\small A(x+1)(x-2)(x-3)+B(x+2)(x-2)(x-3)+C(x+2)(x+1)(x-3)+D(x+2)(x+1)(x-2).$$ You can find constants $A,B,C,D$ such that the above polynomial will do the job. For example, to make the polynomial be equal to $-1$ at $-2$, all we need to do is to make $A(-2+1)(-2-2)(-2-3)=-1$. </p> <p>The hard thing about this procedure is presenting the answer in standard form. That is a routine but very unpleasant calculation. </p> <p>Way 2: Let our polynomial be $ax^3+bx^2+cx+d$. Because the curve $y=ax^3+bx^2+cx+d$ passes through $(-2,-1)$ we have $$-8a+4b-2c+d=-1.$$ Similarly, we obtain three other linear equations in $4$ unknowns. It takes a fair amount of routine work, but you can solve the resulting system of $4$ linear equations in $4$ unknowns, and then you are finished. </p>
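<p>The unpleasant expansion in Way 1 can be delegated to a few lines of exact arithmetic. This Python sketch (my own; the helper names are invented for illustration) builds the interpolant from the Lagrange basis polynomials and expands it into standard form.</p>

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, constant term first."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def lagrange(points):
    """Sum over i of y_i * prod_{j != i} (x - x_j)/(x_i - x_j), expanded."""
    total = [Fraction(0)]
    for i, (xi, yi) in enumerate(points):
        term = [Fraction(yi)]
        for j, (xj, _) in enumerate(points):
            if j != i:
                # multiply by the linear factor (x - xj)/(xi - xj)
                term = poly_mul(term, [Fraction(-xj, xi - xj),
                                       Fraction(1, xi - xj)])
        total = poly_add(total, term)
    return total

pts = [(-2, -1), (-1, 7), (2, -5), (3, -1)]
coeffs = lagrange(pts)                    # constant term first
print([int(c) for c in coeffs])           # [5, -5, -2, 1]
```

The output corresponds to $x^3-2x^2-5x+5$, which one can confirm passes through all four points.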
339,880
<p>I'm interested in examples where the sum of a set with itself is a substantially bigger set with nice structure. Here are two examples:</p> <ul> <li><strong>Cantor set</strong>: Let <span class="math-container">$C$</span> denote the ternary Cantor set on the interval <span class="math-container">$[0,1]$</span>. Then <span class="math-container">$C+C = [0,2]$</span>. There are several nice proofs of this result. Note that the set <span class="math-container">$C$</span> has measure zero, so is "thin" compared to the interval <span class="math-container">$[0,2]$</span> whose measure is positive. </li> <li><strong>Goldbach Conjecture</strong>: Let <span class="math-container">$P$</span> denote the set of odd primes and <span class="math-container">$E_6$</span> the set of even integers greater than or equal to 6. Then the conjecture is equivalent to <span class="math-container">$P + P = E_6$</span>. Note that the primes have asymptotic density zero on the integers, so the set <span class="math-container">$P$</span> is "thin" relative to the positive integers.</li> </ul> <p>Are there other nice examples?</p>
José Hdz. Stgo.
1,593
<p>I know you asked for examples of the "thin + thin = nice and thick" phenomenon but, since <a href="https://www.usatoday.com/story/news/nation/2019/09/10/palindrome-week-last-one-century/2273558001/" rel="noreferrer"><strong>Palindrome Week</strong></a> is all the rage these days, I can't avoid mentioning the following example of "thin + thin + thin = nice and thick".</p> <p>A couple of years ago, <a href="https://mathoverflow.net/users/31020/javier">J. Cilleruelo</a>(†), F. Luca, and L. Baxter <a href="https://arxiv.org/pdf/1602.06208.pdf" rel="noreferrer">proved that</a> every natural number <span class="math-container">$n$</span> can be written as a sum of three <a href="http://oeis.org/A002113" rel="noreferrer">palindromic numbers</a>. Since the natural density of the set of palindromic numbers is <span class="math-container">$0$</span>, if we agree to regard <span class="math-container">$\mathbb{N}$</span> as "nice" and "thick", we do have here--as promised--an example of the "thin + thin + thin = nice and thick" phenomenon.</p>
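<p>The three-palindromes statement is easy to spot-check by brute force for small numbers. The Python snippet below is my own quick check (it treats $0$ as a palindrome so that, e.g., $1 = 1+0+0$ counts; the bound of $1000$ is arbitrary).</p>

```python
def is_pal(n):
    s = str(n)
    return s == s[::-1]

LIMIT = 1000
pals = [n for n in range(LIMIT) if is_pal(n)]   # includes 0
palset = set(pals)

def sum_of_three_palindromes(n):
    # brute force: try all pairs of palindromes not exceeding n
    for a in pals:
        if a > n:
            break
        for b in pals:
            if a + b > n:
                break
            if (n - a - b) in palset:
                return True
    return False

assert all(sum_of_three_palindromes(n) for n in range(1, LIMIT))
```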
3,552,219
<p>I come across an explanation of recursion complexity. This screenshot is in question:</p> <p><a href="https://i.stack.imgur.com/ySKdo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ySKdo.png" alt="a"></a></p> <p>How do you get this?</p> <pre><code>T(n) = 3T(n/4) + n </code></pre> <p>The <span class="math-container">$log_n^4$</span> shown seems to be base 4, and this baffles me. What does the <code>n</code> superscript stand for? I am inclined to think the base of this log should be 3. Can someone explain this to me?</p> <hr> <p>Another example was provided where the tree expands at an exponent of 2: <a href="https://i.stack.imgur.com/gP5tc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gP5tc.png" alt="b"></a></p> <p>The formula given is:</p> <pre><code>T(n) = 2T(n/2) + 2 </code></pre> <p>The <code>log(n)</code> given is said to be of base 2. This makes sense to me, but not the base 4 in given by the first picture.</p>
marty cohen
13,079
<p><span class="math-container">$\begin{array}\\ T(n) &amp;=3T(n/4)+1\\ &amp;=3(3T(n/16)+1)+1\\ &amp;=9T(n/16)+4\\ &amp;=9(3T(n/64)+1)+4\\ &amp;=27T(n/64)+13\\ &amp;.....\\ &amp;=3^kT(n/4^k)+\sum_{j=0}^{k-1}3^j \qquad\text{conjecture}\\ &amp;=3^k(3T(n/4^{k+1})+1)+\sum_{j=0}^{k-1}3^j \qquad\text{induction step}\\ &amp;=3^{k+1}T(n/4^{k+1})+3^k+\sum_{j=0}^{k-1}3^j\\ &amp;=3^{k+1}T(n/4^{k+1})+\sum_{j=0}^{k}3^j \qquad\text{confirmed}\\ \end{array} $</span></p> <p>Note that the induction stops when <span class="math-container">$4^k \ge n$</span>.</p> <p>Also note that <span class="math-container">$\sum_{j=0}^{k-1}3^j =\dfrac{3^k-1}{3-1} =\dfrac{3^k-1}{2} $</span>.</p>
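<p>The closed form obtained by unrolling can be sanity-checked in a few lines of Python (my own check; the base case $T(1)=1$ is an assumption, since the text leaves it unspecified):</p>

```python
def T(n):
    """The recurrence T(n) = 3 T(n/4) + 1, with an assumed base case T(1) = 1."""
    if n <= 1:
        return 1
    return 3 * T(n // 4) + 1

# For n = 4^k, unrolling gives T(n) = 3^k T(1) + sum_{j=0}^{k-1} 3^j
#                                   = 3^k T(1) + (3^k - 1)/2.
for k in range(1, 9):
    n = 4**k
    assert T(n) == 3**k * T(1) + (3**k - 1) // 2
```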
3,375,375
<p>I noticed this issue was throwing off a more sophisticated problem I'm working on. When computing the indefinite integral </p> <p><span class="math-container">$$ I(x) = \int \frac{dx}{1-x} = \log | 1-x | + C,$$</span></p> <p>I realized I could equivalently write</p> <p><span class="math-container">$$ I(x) = - \int \frac{dx}{x-1} = -\log|x-1| +C = \log \frac{1}{|1-x|} + C.$$</span></p> <p>How are these two answers compatible? What am I missing here? </p>
José Carlos Santos
446,262
<p>The first answer is wrong. Note that<span class="math-container">$$\int\frac{\mathrm dx}{a+bx}=\frac1b\log\lvert a+bx\rvert+C.$$</span>In particular<span class="math-container">$$\int\frac{\mathrm dx}{1-x}=-\log\lvert1-x\rvert+C.$$</span></p>
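<p>One can also confirm the corrected antiderivative numerically. This small Python check (my own, using a central difference) verifies that $-\log\lvert 1-x\rvert$ differentiates back to $1/(1-x)$ on both sides of the singularity at $x=1$:</p>

```python
import math

def F(x):
    # candidate antiderivative of 1/(1-x)
    return -math.log(abs(1 - x))

for x in (-2.0, 0.3, 0.9, 1.7):
    h = 1e-6
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference of F
    assert abs(numeric - 1 / (1 - x)) < 1e-4
```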
3,375,375
<p>I noticed this issue was throwing off a more sophisticated problem I'm working on. When computing the indefinite integral </p> <p><span class="math-container">$$ I(x) = \int \frac{dx}{1-x} = \log | 1-x | + C,$$</span></p> <p>I realized I could equivalently write</p> <p><span class="math-container">$$ I(x) = - \int \frac{dx}{x-1} = -\log|x-1| +C = \log \frac{1}{|1-x|} + C.$$</span></p> <p>How are these two answers compatible? What am I missing here? </p>
YiFan
496,634
<p>You forgot to use the chain rule when doing the first integral. <span class="math-container">$$\int\frac{dx}{1-x}=-\log|1-x|+C,$$</span> which is the same as the second one you gave.</p>
45,441
<p>There is a method of constructing representations of classical Lie algebras via Gelfand-Tsetlin bases. It has also been applied to Symmetric groups by Vershik and Okounkov. Does anybody know of any application of the method to complex representations of $GL_n(\mathbb F_q)$? Or, at least, any results in this directions, like what is the centralizer of $GL_{n-1}$ in $\mathbb C[GL_n]$?</p>
Matt Davis
10,738
<p>Not an answer either, but in response to Jim - Schur-Weyl duality doesn't always apply over finite fields. See <a href="http://www.ams.org/mathscinet-getitem?mr=2563588" rel="nofollow">http://www.ams.org/mathscinet-getitem?mr=2563588</a> for one result and some discussion of the related issues. </p>
1,281,507
<blockquote> <p>$$x*y = 3xy - 3x - 3y + 4$$</p> <p>We know that $*$ is associative and has neutral element, $e$.</p> <p>Find $$\frac{1}{1017}*\frac{2}{1017}*\cdots *\frac{2014}{1017}.$$</p> </blockquote> <p>I did find that $e=\frac{4}{3}$, and, indeed, $x*y = 3(x-1)(y-1)+1$. Also,it is easy to check that the law $*$ is commutative.</p> <p>How can I solve this?</p>
Erick Wong
30,402
<p>One easy way to manufacture non-obvious associative/commutative operations is to take a known associative operation and <em>conjugate</em> it with an invertible function $f : \mathbb R \to \mathbb R$. In this case, taking $f(x) = 3(x-1)$ (with inverse function $f^{-1}(x) = 1 + \tfrac x3$), we can see that</p> <p>$$f(x*y) = 9(x-1)(y-1) = f(x)f(y),$$</p> <p>so that the mysterious $*$ operation is just multiplication conjugated by $f$, i.e. $x*y = f^{-1}(f(x)f(y))$. This makes it very clear why it is both associative and commutative: it mimics those properties from multiplication. Also note that $e = f^{-1}(1)$, with $1$ being the neutral element for multiplication.</p> <p>Just as diagonalizing a matrix $A$ into $PDP^{-1}$ lets you compute powers easily, conjugation lets you easily express longer chains:</p> <p>$$f(x*(y*z)) = f(x)f(y*z) = f(x)f(y)f(z).$$</p> <p>More generally, $f(a_1 * a_2 * \cdots * a_n) = \prod_{i=1}^n f(a_i)$, so we can compute</p> <p>$$a_1 * a_2 * \cdots * a_n = f^{-1}\left(\prod_{i=1}^n f(a_i)\right) = 1 + \tfrac13 \prod_{i=1}^n 3(a_i-1) = 1 + 3^{n-1} \prod_{i=1}^n (a_i-1).$$</p> <p>It so happens that for your question, one of the $a_i$s is equal to $1$, simplifying the whole expression. But even without this great simplification, it would be easy to compute $a_1 * a_2 * \cdots * a_n$.</p>
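<p>Both the closed form and the final answer can be verified with exact arithmetic. The following Python check is my own (note that $a_{1017}=1017/1017=1$ makes the product of the $(a_i-1)$ vanish, so the whole expression collapses to $1$):</p>

```python
from fractions import Fraction as F
from functools import reduce

def star(x, y):
    return 3*x*y - 3*x - 3*y + 4

# closed form from above: a1 * a2 * ... * an = 1 + 3^(n-1) * prod(ai - 1)
vals = [F(1, 3), F(5, 2), F(7)]
chain = reduce(star, vals)
closed = 1 + 3**(len(vals) - 1) * reduce(lambda p, a: p * (a - 1), vals, F(1))
assert chain == closed

# the specific product: a_i = i/1017 for i = 1..2014 includes a_i = 1,
# so prod(a_i - 1) = 0 and the answer is 1
assert reduce(star, [F(i, 1017) for i in range(1, 2015)]) == 1
```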
3,954,410
<p>I am solving exercises from Loring Tu.</p> <p>Show that if <span class="math-container">$L : V \rightarrow V$</span> is a linear operator on a vector space V of dimension n, then the pullback <span class="math-container">$L^{\wedge} : A_n(V) \rightarrow A_n(V)$</span> is multiplication by the determinant of L.</p> <p>Attempt:</p> <p>This is a linear operator on a space of dimension 1. It follows that it must be multiplication by a constant. I don't understand why it is the determinant.</p>
ArsenBerk
505,611
<p><strong>HINT:</strong> Instead of induction here, we can use Fermat's Little Theorem as it says <span class="math-container">$n^7 \equiv n \mod 7$</span> for every positive integer <span class="math-container">$n$</span>.</p>
132,238
<p>I'm trying to solve a maximization problem that apparently is too complicated (it's a convex function) and NMaximize just runs endlessly.</p> <p>I'd like to have an approximate result, though. How can I tell <code>NMaximize</code> to just give up after $n$ seconds and give me the best it has found so far?</p>
Michael E2
4,999
<p>Each of the four methods in <code>NMinimize</code> has built-in hooks you can use to get the current values through <code>StepMonitor</code>. There are some great advantages to this approach over ones that hijack the user's objective function and only minor drawbacks:</p> <ul> <li>The regular <code>NMinimize</code> interface can be used.</li> <li>The user's objective function will be used as is, so that any analysis of the function normally done by <code>NMinimize</code> is not prevented by wrapping the objective function in shell that turns the function into a numeric black box.</li> <li>The hooks give direct access to the state of the method algorithm being used. One could hardly ask for more.</li> <li>The hooks are easy to access. Some post-processing is often needed, depending on the method. One might want further to apply <code>FindMinimum[]</code> to the results (not shown).</li> <li>The hooks are undocumented AFAIK, and perhaps they are subject to change. OTOH, the code is open to inspection and this approach can be adapted should new methods be added or old methods improved. I believe the current code has been stable for a fairly long time.</li> </ul> <p>Note that <code>NMaximize[f[x], x]</code> basically calls <code>NMinimize[-f[x], x]</code>, so I will speak primarily in terms of minimization. The raw values you get with the following approaches will also be of <code>-f[x]</code>, when using <code>NMaximize[]</code>. In most examples below, which call <code>NMaximize[]</code>, this is accounted for that the maximum is returned.</p> <p>In <code>NMinimize</code> the objective function consists of the user's function plus a penalty function. There are two values that it keeps track of, <code>val</code> and <code>fval</code>. The value initially optimized is <code>val</code>, which equals <code>fval</code> plus a penalty (often <code>0.</code>), where <code>fval</code> is the value of the function. 
At the end of the method, there is post-processing of the results, sometimes using the equivalent of <code>FindMinimum</code> to polish the results.</p> <p>Each algorithm is different and there is not a uniform user-interface to them. Here are examples of each:</p> <p><strong>"DifferentialEvolution"</strong></p> <p>In this example <code>foo</code> contains the most recent pools of points (<code>vecs</code>) and values. One can use <a href="https://mathematica.stackexchange.com/a/25474/4999">linked lists</a> to keep track of each step (see <code>"SimulatedAnnealing"</code> at the end).</p> <pre><code>(* "DifferentialEvolution" *) TimeConstrained[ NMaximize[ {7 x - 4 x^2 + y - y^2, {x, y} ∈ Disk[]}, {x, y}, Method -&gt; "DifferentialEvolution", StepMonitor :&gt; If[ValueQ@Optimization`NMinimizeDump`fvals, foo = {Optimization`NMinimizeDump`vals, Optimization`NMinimizeDump`vecs, Optimization`NMinimizeDump`fvals}]], 0.1] tolerance = 10^-7; Last@ Sort@ Pick[Transpose[{-foo[[1]], Thread[{x, y} -&gt; #] &amp; /@ foo[[2]]}], UnitStep[foo[[1]] - foo[[3]] - tolerance], 0] (* $Aborted {3.31235, {x -&gt; 0.870649, y -&gt; 0.491204}} *) </code></pre> <p><strong>"NelderMead"</strong></p> <p>Like with <code>"DifferentialEvolution"</code>, <code>foo</code> contains the most recent pools of points (<code>vecs</code>, the "simplex") and values.</p> <pre><code>(* "NelderMead" *) TimeConstrained[ NMaximize[ {7 x - 4 x^2 + y - y^2, {x, y} ∈ Disk[]}, {x, y}, Method -&gt; {"NelderMead", "RandomSeed" -&gt; 1 (* for reproducibility *)}, StepMonitor :&gt; If[ValueQ@Optimization`NMinimizeDump`fvals, foo = {Optimization`NMinimizeDump`vals, Optimization`NMinimizeDump`vecs, Optimization`NMinimizeDump`fvals}]], 0.05] tolerance = 10^-7; Last@ Sort@ Pick[Transpose[{-foo[[1]], Thread[{x, y} -&gt; #] &amp; /@ foo[[2]]}], UnitStep[tolerance + foo[[3]] - foo[[1]]], 1] (* $Aborted {3.31236, {x -&gt; 0.871075, y -&gt; 0.491156}} *) </code></pre> <p>Note that each value <code>val</code> has a penalty. 
Hence the need for a positive <code>tolerance</code>, or all points would be rejected. (This would be cleaned up in post-processing, which was aborted by the time constraint.) As one can see below, the selected point above does not satisfy <code>{x, y} ∈ Disk[]</code>. The user will have to decide how to treat the results in their particular case. (This applies to all methods, in fact.)</p> <pre><code>Norm[{x, y}] /. Last[%] // InputForm (* 1.0000027734542223 *) foo[[1]] - foo[[3]] (* {5.21808*10^-8, 3.6511*10^-8, 5.48056*10^-8} *) </code></pre> <p><strong>"RandomSearch"</strong></p> <p>In <code>"RandomSearch"</code> <code>results</code> is initialized to the pool of initial points. Each point is replaced by the result of a local minimizer (which may be specified with the <code>"Method"</code> suboption to <code>Method</code>). The post-processing code chooses the best result of the minimizer.</p> <pre><code>(* "RandomSearch" *) TimeConstrained[ NMaximize[{7 x - 4 x^2 + y - y^2, {x, y} ∈ Disk[]}, {x, y}, Method -&gt; "RandomSearch", StepMonitor :&gt; If[ValueQ@Optimization`NMinimizeDump`results, foo = Optimization`NMinimizeDump`results]], 0.2] Select[foo, ! FreeQ[#, "Converged"] &amp;] (* $Aborted {{{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}, {{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}, {{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}, {{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}, {{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}, {{-3.31236, {0.871072 -&gt; 0.871072, 0.491155 -&gt; 0.491155}}, {True, "Converged"}}} *) </code></pre> <p><strong>"SimulatedAnnealing"</strong></p> <p>Simulated annealing works somewhat like the random search method in that it starts with a pool of initial points and processes each individually. 
It then does some post-processing and returns the best result found. As it processes each point, it keeps track of the best result of processing that point in the form</p> <pre><code> Optimization`NMinimizeDump`best = {val, vec, fval} </code></pre> <p>To accumulate all the results efficiently, I used <a href="https://mathematica.stackexchange.com/a/25474/4999">link lists</a>. I wrapped the list <code>Optimization`NMinimizeDump`best</code> in an undefined head called <code>hold</code> to make flattening the linked list in post-processing easier. Note it does <strong>not</strong> <a href="http://reference.wolfram.com/language/ref/Hold.html" rel="nofollow noreferrer"><code>Hold[]</code></a> the results.</p> <pre><code>(* "SimulatedAnnealing" *) TimeConstrained[ ClearAll[hold]; foo = {}; NMaximize[{7 x - 4 x^2 + y - y^2, {x, y} ∈ Disk[]}, {x, y}, Method -&gt; "SimulatedAnnealing", StepMonitor :&gt; If[ValueQ@Optimization`NMinimizeDump`best, foo = {hold[Optimization`NMinimizeDump`best], foo}]], 0.02] foo = Flatten@foo /. hold -&gt; Identity; tolerance = 0; First@ Sort@ Pick[ Transpose[{-foo[[All, 1]], Thread[{x, y} -&gt; #] &amp; /@ foo[[All, 2]]}], UnitStep[tolerance + foo[[All, 3]] - foo[[All, 1]]], 1] (* $Aborted {3.25174, {x -&gt; 0.753606, y -&gt; 0.457427}} *) </code></pre>
2,226,337
<p>How many multiples of $5$ are greater than $60,000,$ and can be made from the digits: $$0, 1, 2, 3, 4, 5, 6$$ </p> <p>if <strong>all</strong> digits have to be used and each can only be used once, with no repeats?</p> <p>Am I looking at this in the wrong way, or too simplistically? Or is it simply $2 \times 6!$?</p> <p>Many thanks</p> <p>KM</p>
Bernard
202,857
<p><strong>Hints:</strong></p> <p>How do you know a number is divisible by $5$?</p> <p>The notation of an integer can't begin with $0$.</p> <p><em>Some details:</em></p> <p>A multiple of $5$ ends in a $0$ or a $5$. So two cases:</p> <ul> <li>If it ends in a $0$, you have to consider the $6!$ permutations of the digits $1,\dots,6$.</li> <li>If it ends in a $5$, you have the constraint that the leading digit cannot be $0$. You first choose the leading digit among $\{1,2,3,4,6\}$, then you have to consider the $5!$ permutations of the remaining digits, giving $5\cdot 5!$ numbers in all.</li> </ul> <p>All this makes a total of $$6!+5\cdot 5!=11\cdot 5!=1320\enspace\text{numbers}.$$ They are all greater than $60\,000$ since they are $7$-digit numbers.</p>
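<p>The count is small enough to confirm by brute force. This Python check (my own) enumerates all arrangements of the seven digits:</p>

```python
from itertools import permutations

valid = [p for p in permutations("0123456")
         if p[0] != "0"            # no leading zero
         and p[-1] in "05"]        # multiple of 5
print(len(valid))                  # 1320

# every such number exceeds 60,000 automatically, being 7 digits long
print(min(int("".join(p)) for p in valid))   # 1023465
```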
850,390
<p>Let $f(x)$ be a differentiable function from $\mathbb R$ to $\mathbb R$. If $f(x)$ is even, then $f'(0)=0$. Is this always true?</p>
David K
139,123
<p>Trick question. Remember that "continuous" does not imply "differentiable".</p> <p>If the function is <em>differentiable</em> at $0$, refer to the answer by @Brandon.</p> <p><strong>Update:</strong> The question has been edited so that it says "differentiable" rather than "continuous". The answer above applied to the originally posted question.</p>
104,626
<p>I encountered the following differential equation when I tried to derive the equation of motion of a simple pendulum:</p> <p>$\frac{\mathrm d^2 \theta}{\mathrm dt^2}+g\sin\theta=0$</p> <p>How can I solve the above equation?</p>
Peđa
15,660
<p>Use the substitution $\theta&#39;=v$; therefore we have:</p> <p>$$\theta&#39;&#39;=\frac{dv}{dt}\cdot \frac{dt}{d\theta}\cdot \frac{d\theta}{dt} \Rightarrow \theta&#39;&#39;=\frac{dv}{d\theta}\cdot v \Rightarrow \theta&#39;&#39;=v&#39;\cdot v,$$</p> <p>where $v$ is a function of the variable $\theta$. So the differential equation becomes</p> <p>$v&#39; \cdot v +g \cdot \sin \theta=0,$</p> <p>which is a <a href="http://en.wikipedia.org/wiki/Separation_of_variables" rel="nofollow">separable differential equation</a>.</p>
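<p>For completeness, here is a sketch of how the separation proceeds from this point (my own continuation; $C$ and $C'$ denote integration constants):</p>

```latex
v\,\frac{dv}{d\theta} = -g\sin\theta
\;\Longrightarrow\; \int v\,dv = -g\int \sin\theta\,d\theta
\;\Longrightarrow\; \frac{v^{2}}{2} = g\cos\theta + C,
\qquad\text{so}\qquad
\left(\frac{d\theta}{dt}\right)^{2} = 2g\cos\theta + C'.
```

The remaining integral for $\theta(t)$ is elliptic and has no elementary closed form in general.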
40,241
<p>Let $N$ be a prime number. Let $J(N)$ be the jacobian of $X_\mu(N)$, the moduli space of elliptic curves with $E[N]$ symplectically isomorphic to $Z/NZ \times \mu_N$. Over the complex numbers we get that $J(N)$ is isogenous to a product of irreducible Abelian varieties. Is there a way of describing these Abelian varieties using $J_1(M)$ and $J_0(M)$? Specifically, what can we say about the decomposition of $J(11)$?</p> <p>Note that $X_\mu(N)$ is birationally isomorphic as a curve to the fibre product $X_0(N^2) \times_{X_0(N)} X_1(N)$. (This is because $\Gamma(N)$ is conjugate to $\Gamma_0(N^2) \cap \Gamma_1(N)$, and the group generated by $\Gamma_0(N^2)$ and $\Gamma_1(N)$ is $\Gamma_0(N)$.) Therefore, we have $J_1(N)$ and $J_0(N^2)$ are both some of the factors in $J(N)$. In fact, we know that $J(7)$ is three copies of $J_0(49)$. For N=11, the above fibre product to $X_0(121)$ is an unramified covering. If I was going to make a guess on what $J(11)$ is going to decompose as, I would guess that it is five copies of $J_0^{new}(121)$ and six copies of $J_1(11)$. Is that reasonable? Is there a geometric way of arguing this?</p> <p>Also, I'm guessing that the question about the <a href="https://mathoverflow.net/questions/4763/sl2-z-n-decomposition-of-space-of-cusp-forms-for-gamman"> $SL_2(F_N)$ decomposition of the space of cusp forms </a> is related to this, and Jared Weinstein's thesis will come into play here, but I'm not sure how.</p>
William Stein
8,441
<p>Ernst Kani was very interested in this and related questions around 2000. I remember implementing an algorithm for him in around 2000 when I visited Essen to compute a basis of $S_2(\Gamma(p))$ in terms of $\Gamma_1(p^2)$. I'm sure Kani knows the decomposition of $J(N)$ for small $N$, since I vaguely remember talking about it with him, but I didn't explicitly see it in a cursory glance through the papers at <a href="http://www.mast.queensu.ca/~kani/" rel="nofollow">http://www.mast.queensu.ca/~kani/</a>. You may want to look at the papers up there from around 2000, since many mention X(11) explicitly. You might also just email Kani. </p>
2,403,404
<p>I would be thankful if anyone could answer my question. This is a very basic question. Let's say we wish to minimise the quantity</p> <p>$$\hat{h}= \|h-h_i\|+\lambda\|h-u\|,$$</p> <p>where:</p> <p>$$h=[13,17,20, 17, 20, 14, 17, 18, 16, 15, 15, 12, 19, 13, 17, 13]^\top,\\ h_i=[18, 17, 14, 13, 17, 15, 17, 19, 12, 20, 15, 13, 16, 17, 20, 13]^\top, \\u = [16, 16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]^\top,\\ \text{with }\lambda \in [0,100].$$</p> <p>I know this is a very basic question, but please help me to understand. Also, please suggest a book where I can start from zero and learn to solve these kinds of problems.</p>
pisco
257,943
<p>The locus of $C$ is two lines parallel to $AB$: one line lies above $AB$ by 4 cm, the other lies below $AB$ by 4 cm. We denote the upper line by $l$. We only need to consider the case when $C$ lies on $l$.</p> <p>Let $B'$ be the reflection of $B$ across $l$; then $CA+CB = CA+CB'$, so $CA+CB$ is minimized when $C,A,B'$ are collinear, which implies $CA=CB$; the triangle is isosceles in this case.</p>
2,766,879
<p>Show that there are no primitive Pythagorean triples $(x,y,z)$ with $z\equiv -1 \pmod 4$. </p> <p>I once proved that, for all integers $a,b$, we have that $a^2 + b^2$ is congruent to $0$, or $1$, or $2$ modulo $4$. I feel like it is enough to conclude it by considering $a=x$, $b=y$ and $\gcd(x,y)=1$. But I am not completely sure if it is the way the proof should end.</p>
Robert Z
299,698
<p>Note that the given inequality is $$f(x)-f(1/x)=\frac{(x+1)^3}{x(x-1)}\geq 8,$$ that is for $x&gt;1$, $$h(x)=(x+1)^3-8x(x-1)\geq 0$$ which holds because $h(1)=8$ and $h$ is strictly increasing since $$h'(x)=3(x+1)^2-16x+8=3x^2-10x+11&gt;0 \quad (\Delta=10^2-12\cdot 11&lt;0).$$</p>
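<p>A quick numerical confirmation of the two claims (a Python check of my own, not part of the argument):</p>

```python
def h(x):
    return (x + 1)**3 - 8*x*(x - 1)

def hp(x):
    return 3*x**2 - 10*x + 11          # h'(x)

assert h(1) == 8
# discriminant of h' is negative and h'(0) > 0, so h' > 0 everywhere
assert (-10)**2 - 4*3*11 < 0 and hp(0) > 0
# hence h is increasing, so h(x) >= h(1) = 8 for x >= 1
assert all(h(1 + i/100) >= 8 for i in range(1, 1000))
```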
4,090,970
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be independent exponential random random variables with common parameter <span class="math-container">$\lambda$</span> and let <span class="math-container">$Z = X + Y$</span>. Find <span class="math-container">$f_Z(z)$</span>.</p> <hr /> <p>My approach:</p> <p><strong>Step 1:</strong> <span class="math-container">$$F_Z(z) = P(X + Y \leq Z) = \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{z-x}f_{X,Y}(x,y)dydx$$</span></p> <p><strong>Step 2:</strong> <span class="math-container">$$f_Z(z) = \frac{d}{dz}F_Z(z) = \int\limits_{-\infty}^{\infty}\frac{d}{dz}[\int\limits_{-\infty}^{z-x}f_{X,Y}(x,y)dy]dx = \int\limits_{-\infty}^{\infty}f_{X,Y}(x,z-x)dx$$</span></p> <p><strong>Step 3:</strong> Since the variables are independent: <span class="math-container">$$f_Z(z) = \int\limits_{-\infty}^{\infty}f_{X}(x)*f_{Y}(z-x)dx$$</span></p> <p><strong>Step 4:</strong> Using exponential function formula: <span class="math-container">$\lambda e^{-\lambda x}$</span> and that lower bound for exponential is 0 to infinity: <span class="math-container">$$f_Z(z) = \int\limits_{0}^{\infty} \lambda e^{-\lambda x}* \lambda e^{-\lambda (z-x)} dx = \lambda ^2\int\limits_{0}^{\infty} e^{-\lambda z}dx$$</span></p> <p>I think I took a long turn somewhere because I'm getting an integral of a constant as my result. Where did I go wrong?</p>
herb steinberg
501,262
<p>For <span class="math-container">$x\gt z$</span> the integrand <span class="math-container">$=0$</span> since <span class="math-container">$f_Y(y)=0$</span> for <span class="math-container">$y\lt 0$</span>.</p> <p><span class="math-container">$f_Z(z)=\lambda^2 e^{-\lambda z}\int\limits_0^zdx=z\lambda^2 e^{-\lambda z}$</span>.</p>
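<p>The key simplification, that the integrand $f_X(x)f_Y(z-x)=\lambda^2 e^{-\lambda x}e^{-\lambda(z-x)}=\lambda^2 e^{-\lambda z}$ is constant in $x$ on $[0,z]$, is easy to verify in Python (the values $\lambda=1.5$ and $z=2$ are arbitrary choices of mine):</p>

```python
import math

lam = 1.5

def fX(x):
    # Exponential(lam) density
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

z = 2.0
vals = [fX(x) * fX(z - x) for x in (0.0, 0.5, 1.3, z)]
# the exponentials combine: e^{-lam x} e^{-lam (z-x)} = e^{-lam z} for every x
assert max(vals) - min(vals) < 1e-12
# so the integral over [0, z] is just length times the constant value
assert abs(vals[0] * z - lam**2 * z * math.exp(-lam * z)) < 1e-12
```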
2,828,472
<p>This question is regarding a property of little-o notation given in Apostol's Calculus. The property is given on page 288 and stated as:</p> <blockquote> <p>Theorem 7.8 (c) As $x\to a$ we have $f(x)\cdot o (g(x)) = o(f(x)g(x))$.</p> </blockquote> <p>Here, say $h(x) = o(g(x))$; then we have $f(x) \lim_{x\to a} \frac{h(x)}{g(x)} = 0$, and on the right, if $j(x) = o(f(x)g(x))$, then $\lim_{x\to a} \frac{j(x)}{f(x)g(x)} = 0$. I am confused about how to approach the proof. </p>
hamam_Abdallah
369,188
<p>$$o(g(x))=g(x)\epsilon(x)$$ with $$\lim_{x\to a}\epsilon(x)=0$$</p> <p>then</p> <p>$$f(x)o(g(x))=\Bigl(f(x)g(x)\Bigr)\epsilon(x)=o(f(x)g(x))$$</p>
697,402
<p>I have this limit:</p> <p>$$ \lim_{x\to\infty}\frac{x^3+\cos x+e^{-2x}}{x^2\sqrt{x^2+1}} $$ I tried to solve it by this:</p> <p>$$ \lim_{x\to\infty}\frac{x^3+\cos x+e^{-2x}}{x^2\sqrt{x^2+1}} = \lim_{x\to\infty}\frac{\frac{x^3}{x^3}+\frac{\cos x}{x^3}+\frac{e^{-2x}}{x^3}}{\frac{x^2\sqrt{x^2+1}}{x^3}} = \frac{0+0+0}{\frac{\sqrt{\infty^2+1}}{\infty}}$$ I do not think that I got it right there... Wolfram also says that the answer is $1$, which this does not seems to be. How do I solve this?</p>
Christian Blatter
1,303
<p>A hint: When $x\to\infty$ then numerator and denominator both have order of magnitude $x^3$. Therefore extract a factor $x^3\ne0$ on top and bottom, in the hope that now the numerator and denominator both have a finite limit when $x\to\infty$.</p>
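<p>The hint can be seen numerically. In this Python spot-check (my own), the ratio approaches $1$ as $x$ grows, with the dominant error term of order $1/(2x^2)$ coming from $\sqrt{1+1/x^2}$:</p>

```python
import math

def f(x):
    return (x**3 + math.cos(x) + math.exp(-2*x)) / (x**2 * math.sqrt(x**2 + 1))

for x in (10.0, 100.0, 1000.0):
    # after extracting x^3 top and bottom, f(x) -> 1; 2/x is a generous bound
    assert abs(f(x) - 1.0) < 2.0 / x
```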
66,000
<p>In 2008 I wrote a group theory package. I've recently started using it again, and I found that one (at least) of my functions is broken in Mathematica 10. The problem is complicated to describe, but the essence of it occurs in this line:</p> <pre><code>l = Split[l, Union[#1] == Union[#2] &amp;] </code></pre> <p>Here <code>l</code> is a list of sets. The intent of the line is to split <code>l</code> into sublists of identical sets. Each set is represented as a list of group elements. I say "sets" rather than "lists" because two sets are to be considered identical if they contain the same members in any order. This is the reason for comparing <code>Union</code>s of the sets. </p> <p>This used to work, but now it doesn't. The problem is that sets that, as far as I can tell, are equal, do not compare equal by this test. In fact, the comparison <code>e1 == e2</code> for indistinguishable group elements <code>e1</code> and <code>e2</code> also sometimes fails to yield <code>True</code>. (It remains unevaluated; <code>e1 === e2</code> evaluates to <code>False</code>.) The elements can be fairly complicated objects. For instance, in one case where I'm having this problem, <code>ByteCount[e1]</code> is 2448. But <code>e1</code> and <code>e2</code> are indistinguishable. For instance, <code>ToString[FullForm[e1]] === ToString[FullForm[e2]]</code> yields <code>True</code>. </p> <p>I've shown one line where this failure to compare equal causes a problem. In this one case I could probably work around the problem by defining <code>UpValue</code>s for <code>e1 == e2</code> or <code>e1 === e2</code>. But, unfortunately, the problem raises its head in other contexts as well. For instance, I am trying to use <code>GraphPlot</code> to show a cycle graph of the elements. <code>GraphPlot</code> takes a list of edges of the form <code>ei-&gt;ej</code>.
In order to recognize that edges <code>ei-&gt;ej</code> and <code>ei-&gt;ek</code> are both connected to <code>ei</code>, <code>GraphPlot</code> needs to know that the <code>ei</code> appearing in the first edge is the same as <code>ei</code> in the second. It doesn't, so I get a disconnected graph. Unlike <code>Split</code>, <code>GraphPlot</code> doesn't provide a hook to enable me to tell it how to test vertexes for equality, and it apparently doesn't use <code>Equal</code> or <code>SameQ</code>, either, as <code>UpValue</code>s I define for those are not used. </p> <p>(Sorry about the generic tag -- I couldn't find anything more specific. Suggestions welcome.)</p> <p>EDIT: In response to Szabolcs request, here is the <code>FullForm</code> of such an object:</p> <pre><code>a = sdp[znz[1, 3], aut[List[Rule[znz[1, 3], znz[1, 3]]], List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]], Dispatch[List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]], Function[NonCommutativeMultiply[Slot[2], Slot[1]]]] b = sdp[znz[1, 3], aut[List[Rule[znz[1, 3], znz[1, 3]]], List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]], Dispatch[List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]], Function[NonCommutativeMultiply[Slot[2], Slot[1]]]] a === b (* ==&gt; False *) </code></pre> <p>Note that <code>a</code> and <code>b</code> are identical and <code>ToString[a] === ToString[b]</code> gives <code>True</code>.</p>
Simon Woods
862
<p>Another possible workaround is to wrap <code>Dispatch</code> with a memoized function, so that both expressions <code>a</code> and <code>b</code> contain references to the same internal dispatch table.</p> <p>i.e. define</p> <pre><code>mem : disp[x_] := mem = Dispatch[x] </code></pre> <p>then use <code>disp</code> in place of <code>Dispatch</code> in your code.</p>
2,952,392
<p>Revisit the following discussion: </p> <p><a href="https://math.stackexchange.com/questions/843909/prove-that-the-inverse-image-of-an-open-set-is-open">Prove that the inverse image of an open set is open</a></p> <p>Obviously, the above discussion is based on Euclidean space (which is also a metric space, so the proof is based on the open ball). Can we say the following:</p> <p>Let <span class="math-container">$X,Y$</span> be any two topological spaces, </p> <p><span class="math-container">$f: X \rightarrow Y$</span> be a continuous function. The inverse image of an open set is open under <span class="math-container">$f$</span>. </p> <p>Can this famous theorem apply to any topological space? for example, Zariski space? </p>
Shweta Aggrawal
581,242
<p>This statement is false in general. Consider the following polynomials in <span class="math-container">$\mathbb{C}[X,Y]$</span></p> <p>The polynomials <span class="math-container">$ XY $</span> and <span class="math-container">$ X + Y $</span> have infinitely many zeros in <span class="math-container">$ \mathbb{C}^{2} $</span>.</p> <p>The zero-set of <span class="math-container">$XY$</span> is union of <span class="math-container">$(\{0\}×\mathbb{C})$</span> and <span class="math-container">$(\mathbb{C}×\{0\})$</span>, while the zero-set of <span class="math-container">$X+Y$</span> is <span class="math-container">$\{(a,−a) | a\in \mathbb{C}\}$</span>. </p> <p>Both are uncountable sets!!</p>
2,468,329
<p>Let F be a field and choose an element $u \in F$. Consider the function $\epsilon_u:F[x]\rightarrow F$ given by $$\epsilon_u(a_nx^n+...+a_0)=a_nu^n+...+a_0$$</p> <p>I am asked to show that this is surjective but not injective, as well as finding its kernel.</p> <p>My idea is that this function will just send every element in $F$ to itself, hence the surjectivity. Since there is this one-to-one correspondence between the part of the range and the entire domain, the function cannot possibly be injective. I am not sure if this is the right idea or how to formalize it, and I am also not sure how to find the kernel.</p>
Bernard
202,857
<p>For a polynomial ring over a commutative ring $A$, you have the equivalence $$\forall u\in A\;\forall f\in A [X],\quad(f(u)=0\iff f\;\text{is divisible by}\; X-u).$$</p> <p>Indeed, let's divide $f(X)$ by $X-u$. We get $$f(X)=q(X)(X-u)+r,\qquad r\in A. $$ Clearly, $\;f(u)=0\iff r=0$.</p>
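As a quick numeric sanity check of this equivalence (illustrative only, separate from the ring-theoretic argument), here is a Python sketch that divides a polynomial by $X-u$ via Horner's scheme and confirms the remainder is exactly $f(u)$:

```python
def divide_by_linear(coeffs, u):
    # Synthetic (Horner) division of a polynomial, coefficients given
    # highest-degree first, by the linear factor (X - u).
    q = []
    acc = 0
    for c in coeffs:
        acc = acc * u + c
        q.append(acc)
    return q[:-1], q[-1]  # (quotient coefficients, remainder = f(u))

# f(X) = X^3 - 6X^2 + 11X - 6 = (X - 1)(X - 2)(X - 3); divide by (X - 2)
quotient, r = divide_by_linear([1, -6, 11, -6], 2)
print(quotient, r)  # [1, -4, 3] 0 -> remainder f(2) = 0, so (X - 2) divides f
```

The zero remainder for $u=2$ is the "divisible" direction of the equivalence; a nonzero $u$ like $5$ gives remainder $f(5)$.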
697
<p><a href="https://mathoverflow.net/questions/36307/why-cant-i-post-a-question-on-math-stackexchange-com">This question</a> was posted on MO about not being able to post on math.SE. While MO wasn't the right place for the question, I have to wonder what is. New users who are experiencing difficulty using math.SE can't post about it on meta, so where do they turn? The only thing I can think of is that they have to figure out that it is possible for them to contact the moderators, but nowhere is it explicitly described how to do this. Maybe something should be added to the FAQ.</p>
Community
-1
<p>We need to allow people to post on meta with low reputation.</p>
1,113,415
<p>Is there a website or a book with a list of calculus theorems? Alternatively, what are good ways to remember such a list?</p>
A.D
37,459
<p>You know $\frac{1}{0}$ is undefined, so $\frac{1}{0} - 0.5$ is also undefined: subtracting $0.5$ from an undefined quantity cannot yield a defined value. But the more interesting thing is to know why $\frac{1}{0}$ is undefined in the first place. For that purpose see this link <a href="https://math.stackexchange.com/questions/26445/division-by-0">here</a>.</p>
3,005,208
<p>I want to solve this polynomial analytically. I know the useful answer is between 0 and 1. Is there any way I can write the answer based on a, b, and c? <span class="math-container">$$ 6\cdot a \cdot x^4 + 2 \cdot b \cdot x^3-b \cdot c=0 $$</span> Also, an approximate answer is acceptable, for example, an answer with 2% error. I will appreciate if someone can help me on this subject.</p>
theREALyumdub
175,429
<p>Since you say an approximate answer is alright, up to 2% tolerance, it might be a good idea to use a numerical approximation like Newton's method.</p> <p>The derivative works out to be <span class="math-container">$$ f'(x) = 24ax^3 + 6bx^2 = 6x^2 (4ax + b) $$</span></p> <p>So you can take an <span class="math-container">$x_0$</span> in the range <span class="math-container">$ (0, 1) $</span> and try approximating your missing solution, if you know it is there. Just linearize to <span class="math-container">$ y = f(x_0) + f'(x_0)(x_1 - x_0) $</span>, solve for <span class="math-container">$ x_1 $</span> when <span class="math-container">$ y = 0 $</span>, and repeat until you reach the desired accuracy.</p> <p>Newton's method can sometimes instead get you the wrong root, so you will likely need a bracketing interval for the region you are searching.</p> <p>If your values are in the wrong range, you won't get a solution in <span class="math-container">$ (0, 1) $</span> at all, so there may be some other conditions you are looking at. For instance, suppose for a given triple <span class="math-container">$ (a, b, c) $</span>, we have a solution to <span class="math-container">$ f $</span> in the range <span class="math-container">$(0, 1)$</span>. Taking the forward image <span class="math-container">$ f( \, (0, 1) \, ) = (x , y) $</span> will give us an upper bound that the function attains on the interval, so if we re-pick <span class="math-container">$ c $</span> as <span class="math-container">$$ c' = c - \frac{y}{b} - 1 $$</span> then a new function <span class="math-container">$ g $</span> with triple <span class="math-container">$ (a, b, c') $</span> has no solutions in the range <span class="math-container">$ (0, 1) $</span> because of the vertical shift. So not every function with real parameters <span class="math-container">$a, b, c$</span> will have the desired property.</p>
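A minimal sketch of this suggestion in Python, assuming the illustrative parameter values a = b = c = 1 (the question gives no concrete numbers):

```python
def newton(f, fp, x0, tol=1e-10, max_iter=100):
    # Basic Newton iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

a, b, c = 1.0, 1.0, 1.0  # illustrative values only; the question gives none
f = lambda x: 6*a*x**4 + 2*b*x**3 - b*c
fp = lambda x: 24*a*x**3 + 6*b*x**2
root = newton(f, fp, 0.5)
print(root)  # a root in (0, 1); f(root) is ~0
```

Since $f$ is strictly increasing on $(0,1)$ for positive parameters, starting anywhere inside the interval converges to the single root there.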
3,005,208
<p>I want to solve this polynomial analytically. I know the useful answer is between 0 and 1. Is there any way I can write the answer based on a, b, and c? <span class="math-container">$$ 6\cdot a \cdot x^4 + 2 \cdot b \cdot x^3-b \cdot c=0 $$</span> Also, an approximate answer is acceptable, for example, an answer with 2% error. I will appreciate if someone can help me on this subject.</p>
G Cab
317,234
<p>The exact solution would turn into a "complicated" expression in <span class="math-container">$a,b,c$</span>. </p> <p>If you are looking for an approximate solution, and you know that a real root is near <span class="math-container">$1$</span> (and in fact it is for "normal" positive values of the parameters), then replace <span class="math-container">$x$</span> with <span class="math-container">$1+y$</span>, retain only the terms of degree <span class="math-container">$\le 2$</span>, and solve for <span class="math-container">$y$</span>. The shift to <span class="math-container">$x=1$</span> is because at <span class="math-container">$x=0$</span> the polynomial is quite flat (1st and 2nd derivatives null). </p> <p>Depending on the parameters you might find a better approximation by developing instead at <span class="math-container">$x=1/2$</span>.</p>
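A rough numerical check of this idea in Python, with illustrative parameters a = 1, b = 1, c = 7 chosen (as an assumption, not from the question) so that the root really does sit near x = 1:

```python
import math

a, b, c = 1.0, 1.0, 7.0  # illustrative: chosen so the root lies near x = 1

f = lambda x: 6*a*x**4 + 2*b*x**3 - b*c

# Substitute x = 1 + y and keep terms of degree <= 2:
# f(1 + y) ~ f(1) + f'(1) y + (f''(1)/2) y^2
f0 = 6*a + 2*b - b*c        # f(1)
f1 = 24*a + 6*b             # f'(1)
f2 = (72*a + 12*b) / 2      # f''(1)/2

y = (-f1 + math.sqrt(f1**2 - 4*f2*f0)) / (2*f2)  # quadratic root nearer 0
approx = 1 + y

# Reference root via bisection on [0, 1]
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

print(approx, lo)  # the two values agree to about 1e-4
```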
86,067
<p>So I am having an issue using <code>NDSolve</code> and plotting the function. So I have two different <code>NDSolve</code> calls in my plotting function. (They are technically the same, just have different names; but that can be changed back if at all possible because I want them to be the same.) But the second one is not working. </p> <p>When I remove the plotting code from the <code>Manipulate</code> command, <code>Plot</code> works fine and outputs an answer. I just need the expanded form so that I can manipulate the variables (if there is a way around this, that would be great too!)</p> <p>Here is what I have so far, any help would be appreciated as I have no idea why I am getting this error.</p> <pre><code>Manipulate[ Plot[{ (Evaluate[ ReplaceAll[ Paorta[t], NDSolve[ {Paorta'[t] == 1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t])/ Piecewise[{{Ro, 1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t] &gt; 0}}, x*Ro] - Paorta[t]/Rsystemic), Paorta[0] == 90}, {Paorta[t]}, {t, 0, 10} ] ] ]), (1/2*k*(1 + Cos[ω t]) + 10), (((1/2*k*(1 + Cos[ω t]) + 10) - Evaluate[ ReplaceAll[Pao[t], NDSolve[{Pao'[t] == 1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Pao[t])/ Piecewise[{{Ro, 1/2*k*(1 + Cos[ω t]) + 10 - Pao[t] &gt; 0}}, x*Ro] - Pao[t]/Rsystemic), Pao[0] == 90}, {Pao[t]}, {t, 0, 10} ] ] ])/Ro)}, {t, 0, 10}, ImageSize -&gt; Large, PlotRange -&gt; Full, PlotLegends -&gt; {"Aortic Pressure", "Pressure in Left Ventricle", "Flow"} ], {{Caorta, 1/.48}, 1, 6}, {{Rsystemic, 3.1}, .1, 6}, {{x, 8000}, 1, 10000}, {{ω, 2 π}, π, 3 π}, {{k, 110}, 60, 200}, {{Ro, .01}, .007, .05} ] </code></pre> <p>Thanks in advance!</p>
george2079
2,079
<p>This is tedious: manually drawing the axes.</p> <pre><code> GraphicsRow[{Histogram[data], Show[{Histogram[data , PlotRangePadding -&gt; Scaled[.2], Axes -&gt; False, PlotRange -&gt; {{-3, 3}, {0, 100}}], Graphics[{Line[{Scaled[{.2, .15}], Scaled[{.8, .15}]}], Line[Scaled /@ {{#, .15}, {#, .1}}] &amp; /@ Range[.2, .8, .1], Text[#, Scaled[{.6 (# + 3)/6 + .2, .04}], {0, 0}] &amp; /@ Range[-3, 3, 1], Text[Rotate[ #, Pi/2], Scaled[{.04, (#/100) .6 + .2 }], {0, 0}] &amp; /@ Range[0, 100, 25], Line[Scaled /@ {{.06, #}, {.1, #}}] &amp; /@ Range[.2, .8, .1], Line[Scaled /@ {{.1, .2}, {.1, .8}}]}]}]}] </code></pre> <p><img src="https://i.stack.imgur.com/JtNU2.png" alt="enter image description here"></p> <p>A bit of caution: I'm not certain the axes are precisely aligned. </p>
86,067
<p>So I am having an issue using <code>NDSolve</code> and plotting the function. So I have two different <code>NDSolve</code> calls in my plotting function. (They are technically the same, just have different names; but that can be changed back if at all possible because I want them to be the same.) But the second one is not working. </p> <p>When I remove the plotting code from the <code>Manipulate</code> command, <code>Plot</code> works fine and outputs an answer. I just need the expanded form so that I can manipulate the variables (if there is a way around this, that would be great too!)</p> <p>Here is what I have so far, any help would be appreciated as I have no idea why I am getting this error.</p> <pre><code>Manipulate[ Plot[{ (Evaluate[ ReplaceAll[ Paorta[t], NDSolve[ {Paorta'[t] == 1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t])/ Piecewise[{{Ro, 1/2*k*(1 + Cos[ω t]) + 10 - Paorta[t] &gt; 0}}, x*Ro] - Paorta[t]/Rsystemic), Paorta[0] == 90}, {Paorta[t]}, {t, 0, 10} ] ] ]), (1/2*k*(1 + Cos[ω t]) + 10), (((1/2*k*(1 + Cos[ω t]) + 10) - Evaluate[ ReplaceAll[Pao[t], NDSolve[{Pao'[t] == 1/Caorta ((1/2*k*(1 + Cos[ω t]) + 10 - Pao[t])/ Piecewise[{{Ro, 1/2*k*(1 + Cos[ω t]) + 10 - Pao[t] &gt; 0}}, x*Ro] - Pao[t]/Rsystemic), Pao[0] == 90}, {Pao[t]}, {t, 0, 10} ] ] ])/Ro)}, {t, 0, 10}, ImageSize -&gt; Large, PlotRange -&gt; Full, PlotLegends -&gt; {"Aortic Pressure", "Pressure in Left Ventricle", "Flow"} ], {{Caorta, 1/.48}, 1, 6}, {{Rsystemic, 3.1}, .1, 6}, {{x, 8000}, 1, 10000}, {{ω, 2 π}, π, 3 π}, {{k, 110}, 60, 200}, {{Ro, .01}, .007, .05} ] </code></pre> <p>Thanks in advance!</p>
Virgil
27,697
<p>This can be done more-or-less easily with a combination of options for <code>AxesOrigin</code>, <code>PlotRange</code>, and <code>PlotRangePadding</code> and the <a href="http://library.wolfram.com/infocenter/MathSource/5599/" rel="noreferrer"><code>CustomTicks</code> package</a> (for easy outward-facing ticks).</p> <pre><code>Needs["CustomTicks`"]; GapAxes[plot_Graphics, ticks : {{x__}, {y__}}, scalefactor_: Automatic] := With[ {prange = ticks[[All, 1 ;; 2]], s = Flatten@{scalefactor /. Automatic -&gt; 0.02 {1, 1/(AspectRatio /. Options[plot])}}}, Show[plot, Ticks -&gt; {LinTicks[x], LinTicks[y]}, PlotRange -&gt; (prange + Subtract @@@ prange {{First@s, 0}, {Last@s, 0}}), PlotRangePadding -&gt; (Subtract @@@ prange {{First@s, 0}, {Last@s, 0}}), AxesOrigin -&gt; (prange[[All, 1]] + Subtract @@@ prange {First@s, Last@s}) ] ]; </code></pre> <ol> <li><em><code>plot</code></em> can be any plot or chart. </li> <li><em><code>ticks</code></em> gives the arguments of the <code>LinTicks</code> functions which specify the axes ticks. <em><code>x</code></em> and <em><code>y</code></em> must each contain a range specification (which also doubles as the <code>PlotRange</code> specifiation) as the first two items, but they may also include as additional items any of the other arguments that may be passed to <code>LinTicks</code> (<code>TickDirection -&gt; Out</code>, perhaps). </li> <li>The optional argument <em><code>scalefactor</code></em> specifies how far to separate the axes from the plot as a fraction of the total image dimensions. 
If <em><code>scalefactor</code></em> is not specified, the axes are separated by 2% of the total width.</li> </ol> <hr> <p><strong>Examples</strong></p> <pre><code>data = RandomVariate[HalfNormalDistribution[1/150], 500]; GapAxes[ Histogram[data, {100}], {{0, 700, TickDirection -&gt; Out}, {0, 200, TickDirection -&gt; Out}} ] </code></pre> <p><img src="https://i.stack.imgur.com/YBJ1y.png" alt="chart"></p> <pre><code>GapAxes[ Plot[Tan[x], {x, -3, 3}], {{-3, 3, TickDirection -&gt; Out}, {-6, 6, TickDirection -&gt; Out}} ] </code></pre> <p><img src="https://i.stack.imgur.com/S5cci.png" alt="plot"></p> <hr> <p><strong>Notes:</strong></p> <ol> <li><p>It remains to be seen how robust this <code>GapAxes</code> function will prove to be, but the basic method should be pretty universal. </p></li> <li><p>To see the whole plot when the axes are short, additional <code>ImagePadding</code> may be needed.</p> <pre><code>GapAxes[ Histogram[data, {100}, ImagePadding -&gt; {{Automatic, 50}, {Automatic, Automatic}}], {{0, 600, TickDirection -&gt; Out}, {0, 200, TickDirection -&gt; Out}} ] </code></pre></li> </ol> <p><img src="https://i.stack.imgur.com/Q973Z.png" alt="chart2"></p>
1,722,226
<p>How many solutions are there to the inequality $x_1 + x_2 + x_3 ≤ 11$, where $x_1, x_2$ and $x_3$ are non-negative integers? [Hint: Introduce an auxiliary variable $x_4$ such that $x_1 + x_2 + x_3$ + $x_4$ = 11.]</p> <p>Would my reasoning be correct if I let $x_4 = x_1 + x_2 + x_3, x_4 = 11$</p> <p>Then proceeded as normally with $14\choose11$? I'm a bit unsure of how the auxiliary variable comes to play.</p>
ashleydc
323,353
<p>If you use a "matchstick" approach - using 11 x's and 3 |'s, count the number of x's to the left of each pipe to determine $x_1$, $x_2$, and $x_3$:</p> <p>|||xxxxxxxxxxx => 0 + 0 + 0 </p> <p>xxx|||xxxxxxxx => 3 + 0 + 0 </p> <p>x|x|x|xxxxxxxx => 1 + 1 + 1 </p> <p>xxxxxxxxxxx||| => 11 + 0 + 0 </p> <p>In the above examples, the number of x's that are not used in the equation is what $x_4$ is equal to. So, in the first example, $x_4 = 11$. In the second example $x_4 = 8$. Basically, you use the pipe symbols to partition the x's into 4 compartments, the size of each compartment being equal to $x_1$, $x_2$, $x_3$, $x_4$.</p> <p>One other thing to note, the number of ways to arrange the 3 |'s and 11 x's should be a formula that is familiar to you.</p>
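A brute-force check of this count in Python (purely illustrative): enumerating all non-negative triples with sum at most 11 matches the stars-and-bars value $\binom{14}{3} = 364$, i.e. the number of ways to arrange 11 x's and 3 |'s.

```python
from itertools import product
from math import comb

# Count non-negative integer solutions of x1 + x2 + x3 <= 11 by brute force
count = sum(
    1 for x1, x2, x3 in product(range(12), repeat=3) if x1 + x2 + x3 <= 11
)

# Stars and bars on x1 + x2 + x3 + x4 = 11: choose positions for the 3 bars
print(count, comb(14, 3))  # 364 364
```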
1,860,267
<blockquote> <p>Prove the convergence of</p> <p><span class="math-container">$$\int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x$$</span></p> </blockquote> <p>First I thought the integral does not converge because</p> <p><span class="math-container">$$\int\limits_1^{\infty} -\frac{1}{x} \,\mathrm{d}x \le \int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x$$</span></p> <p>But in this case</p> <p><span class="math-container">$$\int\limits_1^{\infty} \frac{\cos(x)}{x} \, \mathrm{d}x \le \int\limits_1^{\infty} \frac{1}{x^2} \, \mathrm{d}x$$</span></p> <p>it converges concerning the majorant criterion. What's the right way?</p>
Maman
167,819
<p><strong><em>Hint</em></strong>: A bit tricky, but since the difficulty lies near <span class="math-container">$+\infty$</span>, write <span class="math-container">$$\int \limits_{\frac{\pi}{2}}^{N\pi+\frac{\pi}{2}}\frac{\cos(x)}{x}\mathrm{d}x= \sum \limits_{k=1}^{N}\left(\int\limits_{k\pi-\frac{\pi}{2}}^{k\pi+\frac{\pi}{2}}\frac{\cos(x)}{x}\mathrm{d}x\right)$$</span> and use the criterion for alternating series with the sequence <span class="math-container">$$a_k=\int\limits_{k\pi-\frac{\pi}{2}}^{k\pi+\frac{\pi}{2}}\frac{\cos(x)}{x}\mathrm{d}x$$</span></p>
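A numerical illustration of the hint in Python (midpoint-rule quadrature; the step count is an arbitrary choice): the pieces $a_k$ alternate in sign and shrink in magnitude, which is exactly what the alternating-series criterion needs.

```python
import math

def a(k, n=20000):
    # Midpoint-rule approximation of the k-th piece of the integral
    lo, hi = k * math.pi - math.pi / 2, k * math.pi + math.pi / 2
    h = (hi - lo) / n
    return h * sum(
        math.cos(lo + (i + 0.5) * h) / (lo + (i + 0.5) * h) for i in range(n)
    )

terms = [a(k) for k in range(1, 8)]
signs = [t > 0 for t in terms]
mags = [abs(t) for t in terms]
print(signs)  # alternating: [False, True, False, True, False, True, False]
print(mags)   # strictly decreasing toward 0
```

On each interval $\cos$ has constant sign $(-1)^k$, and $1/x$ decreases, which is why the magnitudes decay.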
3,204,950
<p>This question arose from Physics, where the force on an object attached on a spring is proportional to the displacement to the equilibrium (that is, the rest position). Also, if the displacement to the equilibrium is positive, the force will be negative, as it tries to pull the object back (i.e. if you pull a string, the force is opposite to your direction of pull).</p> <p>Therefore, it can be said that:</p> <p><span class="math-container">$$F \propto -x$$</span> Where <span class="math-container">$F$</span> is the force and <span class="math-container">$x$</span> is the displacement from equilibrium</p> <p>Is this the same as: <span class="math-container">$$F \propto x$$</span> According to the relation of proportionality, it should be, but my friend says that putting the second one is not correct.</p> <p>Both are equivalent right?</p>
Adam Latosiński
653,715
<p>Mathematically they are equivalent, but physicists may want to distinguish between a force in the same direction as the displacement and a force opposite to the displacement, as they lead to physically different behavior of the system. Thus the minus sign matters. </p>
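A tiny simulation sketch in Python (semi-implicit Euler with illustrative constants, an assumption for demonstration) showing why the sign matters physically: with $F = -kx$ the motion stays bounded (oscillation), while with $F = +kx$ it blows up exponentially.

```python
def simulate(sign, steps=2000, dt=0.01, k=1.0, m=1.0):
    # Semi-implicit Euler for F = sign * k * x, starting at x = 1, v = 0
    x, v = 1.0, 0.0
    for _ in range(steps):
        v += sign * (k / m) * x * dt
        x += v * dt
    return x

bounded = simulate(-1)  # restoring force F = -kx: oscillation, |x| stays ~1
runaway = simulate(+1)  # force along displacement F = +kx: exponential growth
print(abs(bounded), abs(runaway))
```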
1,904,767
<p>I'm trying to understand regularization in machine learning. One way of regularizing is adding an l1 norm to the error function. This is said to produce sparsity, but I can't understand why.</p> <p>Sparsity is defined as "only a few out of all parameters are non-zero". But if you look at the l1 norm equation, it is the summation of the parameters' absolute values. </p> <p>Sure, a small l1 norm could mean fewer non-zero parameters, but it could also mean that many parameters are non-zero, only with values close to zero.</p> <p>So why does adding l1 regularization guarantee the first case, and not the second?</p> <p>To give a concrete example, we have 2 vectors, A and B.</p> <p>A is [0.1, 0.1, 0.1], which is not sparse.</p> <p>B is [1000, 0, 0], which is sparse.</p> <p>Clearly the l1 norm of A is smaller than that of B.</p> <p>So why do we use the l1 norm to ensure sparsity? They seem to be unrelated?</p>
Bill Yan
362,324
<p>It took me an hour yesterday to finally understand this. I wrote a very detailed blog to explain it.</p> <p><a href="https://medium.com/@shiyan/l1-norm-regularization-and-sparsity-explained-for-dummies-5b0e4be3938a#.nhy58osj5" rel="noreferrer">https://medium.com/@shiyan/l1-norm-regularization-and-sparsity-explained-for-dummies-5b0e4be3938a#.nhy58osj5</a></p> <p>I’m posting a simple version here. </p> <p>Yesterday when I first thought about this, I used two example vectors [0.1, 0.1] and [1000, 0]. The first vector is obviously not sparse, but it has the smaller L1 norm. That’s why I was confused, because looking at the L1 norm alone won’t make this idea understandable. I have to consider the entire loss function as a whole.</p> <p>Suppose you are solving for a large vector x with too little training data. Then there can be many solutions for x.</p> <p><a href="https://i.stack.imgur.com/huBac.png" rel="noreferrer"><img src="https://i.stack.imgur.com/huBac.png" alt="enter image description here"></a></p> <p>Here A is a matrix that contains all the training data, x is the solution vector you are looking for, and b is the label vector.</p> <p>When there is not enough data and your model’s parameter count is large, your matrix A will not be “tall” enough and your x is very long. So the above equation will look like this:</p> <p><a href="https://i.stack.imgur.com/MwTJI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/MwTJI.png" alt="enter image description here"></a></p> <p>Let’s use a simple and concrete example. Suppose we want to find a line that matches a set of points in 2D space. We all know that you need at least 2 points to fix a line. But what if the training data has only one point? Then you will have infinitely many solutions: every line that passes through the point is a solution. Suppose the point is at [10, 5], and a line is defined as a function y = a * x + b. 
Then the problem is finding a solution to this equation:</p> <p><a href="https://i.stack.imgur.com/2dPhE.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/2dPhE.gif" alt="enter image description here"></a></p> <p>Since b = 5 – 10 * a, all points on the line b = 5 – 10 * a are solutions:</p> <p><a href="https://i.stack.imgur.com/UgsLU.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/UgsLU.gif" alt="enter image description here"></a></p> <p>But how do we find the sparse one with the L1 norm?</p> <p>The L1 norm is defined as the summation of the absolute values of all of a vector’s components. For example, if a vector is [x, y], its L1 norm is |x| + |y|.</p> <p>Now if we draw all points that have an L1 norm equal to a constant c, those points form something (in red) like this:</p> <p><a href="https://i.stack.imgur.com/YwPwX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YwPwX.png" alt="enter image description here"></a></p> <p>This shape looks like a tilted square. In high-dimensional space, it will be an octahedron. Notice that on this red shape, not all points are sparse. Only at the tips are points sparse. That is, either the x or the y component of such a point is zero. Now the way to find a sparse solution is to enlarge this red shape from the origin by giving an ever-growing c to “touch” the blue solution line. The intuition is that the touch point is most likely at a tip of the shape. Since the tip is a sparse point, the solution defined by the touch point is also a sparse solution.</p> <p><a href="https://i.stack.imgur.com/tqoAX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tqoAX.png" alt="enter image description here"></a></p> <p>As an example, in this graph, the red shape grows 3 times till it touches the blue line b = 5–10 * a. The touch point, as you can see, is at a tip of the red shape. The touch point [0.5, 0] is a sparse vector. 
Therefore we say, by finding the solution point with the smallest L1 norm (0.5) out of all possible solutions (points on the blue line), we find a sparse solution [0.5, 0] to our problem. At the touch point, the constant c is the smallest L1 norm you can find among all possible solutions.</p> <p>The intuition behind using the L1 norm is that the shape formed by all points whose L1 norm equals a constant c has many tips (spikes) that happen to be sparse (they lie on one of the axes of the coordinate system). Now we grow this shape to touch the solutions we find for our problem (usually a surface or a cross section in high dimension). The probability that the touch point of the 2 shapes is at one of the “tips” or “spikes” of the L1 norm shape is very high. That’s why you want to put the L1 norm into your loss function formula, so that you can keep looking for a solution with a smaller c (at the “sparse” tip of the L1 norm). (So in the real loss function case, you are essentially shrinking the red shape to find a touch point, not enlarging it from the origin.)</p> <p>Does the L1 norm always touch the solution at a tip and find us a sparse solution? Not necessarily. Suppose we still want to find a line out of 2D points, but this time, the only training data is the point [1, 1000]. In this case, the solution line b = 1000 - a is parallel to one of the edges of the L1 norm shape:</p> <p><a href="https://i.stack.imgur.com/CrKnL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CrKnL.png" alt="enter image description here"></a></p> <p>Eventually they touch on an edge, not at a tip. Not only do you not have a unique solution this time, but most of your regularized solutions are still not sparse (other than the two tip points).</p> <p>But again, the probability of touching a tip is very high. I guess this is even more true for high-dimensional, real-world problems. When your coordinate system has more axes, your L1 norm shape has more spikes or tips. 
It must look like a cactus or a hedgehog! I can’t imagine.</p> <p><a href="https://i.stack.imgur.com/tiIxF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/tiIxF.jpg" alt="enter image description here"></a></p> <p>But is the L1 norm the best kind of norm for finding sparse solutions? It turns out that the Lp norm with 0 &lt;= p &lt; 1 gives the best result. This can be explained by looking at the shapes of different norms:</p> <p><a href="https://i.stack.imgur.com/o657Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/o657Z.png" alt="enter image description here"></a></p> <p>As you can see, when p &lt; 1, the shape is more “scary”, with sharper, more protruding spikes. Whereas when p = 2, the shape becomes a smooth, non-threatening ball. Then why not let p &lt; 1? Because when p &lt; 1 the optimization problem becomes non-convex and computationally difficult.</p>
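The 2D example above can be checked numerically. This Python sketch (the grid resolution is an arbitrary choice) scans points on the constraint line b = 5 - 10a and confirms that the minimum-L1 solution is the sparse tip [0.5, 0]:

```python
# All solutions through the training point [10, 5] satisfy b = 5 - 10*a.
# Scan a grid of such (a, b) pairs and pick the one with smallest L1 norm.
candidates = [(i / 1000, 5 - 10 * (i / 1000)) for i in range(-1000, 2001)]
best = min(candidates, key=lambda p: abs(p[0]) + abs(p[1]))
print(best)  # (0.5, 0.0) -- the sparse "tip" solution from the figures
```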
33,582
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p> <pre><code>nar = Compile[{$}, Do[ With[{ n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g, n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7}, If[n == n2, Sow@n]; ], {a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C" ]; Reap[nar@0][[2, 1]] // AbsoluteTiming (*{0.398023, {1741725, 4210818, 9800817, 9926315}}*) </code></pre>
WalkingRandomly
4,786
<p>From a cold start, I would have written it like this:</p> <pre><code> findNarc = Compile[{{stop, _Integer}, {pow, _Integer}}, Do[ If[Total[IntegerDigits[n]^pow] == n, Sow[n]] , {n, 1, stop} ] , RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C"]; </code></pre> <p>However, it is slower than your function (which takes 0.326 seconds on my machine):</p> <pre><code>Reap[findNarc[10000000, 7]] // AbsoluteTiming (*{2.900166, {Null, {{1, 1741725, 4210818, 9800817, 9926315}}}}*) </code></pre> <p>I'd usually use <code>Internal`Bag</code> instead of <code>Sow</code> since it can be compiled whereas <code>Sow</code> cannot (see <a href="https://mathematica.stackexchange.com/questions/845/internalbag-inside-compile">Internal`Bag inside Compile</a> ) but <code>Sow</code> is called so infrequently here that I don't think that's the problem. Besides, you used it as well so both my code and your code would have been hit by the same penalty.</p> <p>So, for the sake of speed, I'd be tempted to just do a very minor modification of your code to give the flexibility for choosing the power:</p> <pre><code>nar = Compile[{{pow, _Integer}}, Do[With[{n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g, n2 = a^pow + b^pow + c^pow + d^pow + e^pow + f^pow + g^pow}, If[n == n2, Sow@n];];, {a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C"]; Reap[nar[7]] // AbsoluteTiming (*{0.329019, {Null, {{1741725, 4210818, 9800817, 9926315}}}}*)
</code></pre>
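For comparison outside Mathematica, the same search can be sketched in Python using the standard trick of iterating over digit multisets (only 11440 of them for 7 digits) rather than all ten million integers:

```python
from itertools import combinations_with_replacement

def narcissistic(ndigits, p):
    # An ndigits-digit number equals the sum of the p-th powers of its
    # digits iff some digit multiset, summed this way, reproduces itself.
    found = []
    for digits in combinations_with_replacement(range(10), ndigits):
        s = sum(d ** p for d in digits)
        if 10 ** (ndigits - 1) <= s < 10 ** ndigits and \
                sorted(int(ch) for ch in str(s)) == list(digits):
            found.append(s)
    return sorted(found)

print(narcissistic(7, 7))  # [1741725, 4210818, 9800817, 9926315]
```

The result matches the Mathematica output above (note the compiled search over 7-digit combinations also skips the trivial solution 1).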
33,582
<p>My code finding <a href="http://en.wikipedia.org/wiki/Narcissistic_number">Narcissistic numbers</a> is not that slow, but it's not in functional style and lacks flexibility: if $n \neq 7$, I have to rewrite my code. Could you give some good advice?</p> <pre><code>nar = Compile[{$}, Do[ With[{ n = 1000000 a + 100000 b + 10000 c + 1000 d + 100 e + 10 f + g, n2 = a^7 + b^7 + c^7 + d^7 + e^7 + f^7 + g^7}, If[n == n2, Sow@n]; ], {a, 9}, {b, 0, 9}, {c, 0, 9}, {d, 0, 9}, {e, 0, 9}, {f, 0, 9}, {g, 0, 9}], RuntimeOptions -&gt; "Speed", CompilationTarget -&gt; "C" ]; Reap[nar@0][[2, 1]] // AbsoluteTiming (*{0.398023, {1741725, 4210818, 9800817, 9926315}}*) </code></pre>
wolfies
898
<p>Not an answer <em>per se</em>, but two clarifications (which are too long for the comment box):</p> <p>1) The Wiki definition you have linked to for a narcissistic number is not really apt. The Wiki page is actually describing the definition for an Armstrong Number, also known as pluperfect digital invariants, or <em>m</em>-narcissistic numbers, such as:</p> <p>$$407 = 4^3 + 0^3 + 7^3$$</p> <p>These require the use of the power term $m$ (the 3 in this example) over and above the digits of integer $n= 407$. By contrast, the correct and proper reference to the term 'narcissistic number' comes from the article by Madachy, J. S. (1966), <em>Mathematics on Vacation</em>, Thomas Nelson &amp; Sons — p.163 to 175, who defines them as numbers:</p> <p>"that are representable, in some way, by mathematically manipulating the digits of the numbers themselves".</p> <p>What the Wiki page describes ... the Armstrong numbers ... is quite different, ... not the 'narcissistic numbers', as the page claims, but the <em>m</em>-narcissistic numbers. But that's wiki for you. </p> <p>2) Finite to infinite: The set of narcissistic numbers involves a finite search (such as the solutions above). The problem becomes rather more tricky if you allow for the use of radicals or factorials ... because the search problem is no longer finite ... rather you can have infinite nesting of square root symbols or factorial symbols. 
</p> <p><a href="https://i.stack.imgur.com/LTsEK.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LTsEK.gif" alt="enter image description here"></a></p> <p>One can get some pretty results when you allow radicals, such as say:</p> <p><a href="https://i.stack.imgur.com/46X6f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/46X6f.png" alt="enter image description here"></a></p> <p>For more detail, please see:</p> <p><a href="http://www.tri.org.au/numQ/pwn/" rel="nofollow noreferrer">http://www.tri.org.au/numQ/pwn/</a></p> <p>or a fun little piece I did entitled:</p> <p>Radical Narcissistic Numbers, <em>Journal of Recreational Mathematics</em>, 33(4), 2004-2005, 250-254.</p> <p>I've been meaning to put up the mma code for this too ... this was done long before the age of multi-processors, so I think I'll have to update the code for parallel cores, which would make an enormous difference here.</p>
3,680,864
<p>I'm trying to understand the relation between the following conditions. I will assume that <span class="math-container">$X$</span> is a Hausdorff topological space and <span class="math-container">$A \subset X$</span>.</p> <ol> <li><span class="math-container">$\overline{A}$</span> is compact;</li> <li>Every net <span class="math-container">$\{x_{\lambda}\}_{\lambda \in \mathbb{L}} \subset A$</span> has a subnet converging to some point;</li> </ol> <p>It is clear to me that <span class="math-container">$1 \Rightarrow 2$</span>. I read that <span class="math-container">$2 \Rightarrow 1$</span> if <span class="math-container">$X$</span> is regular, but I am not able to find a proof. I would like to have a proof and, if possible, an explicit example in which the implication <span class="math-container">$2 \Rightarrow 1$</span> is false.</p>
Eric Wofsey
86,856
<p>I find these things easier to think about in the language of filters. Using the usual correspondence between nets and filters, (2) is equivalent to saying that every filter on <span class="math-container">$X$</span> containing <span class="math-container">$A$</span> has an accumulation point in <span class="math-container">$X$</span>.</p> <p>So, suppose <span class="math-container">$\overline{A}$</span> is not compact, and we will find a filter containing <span class="math-container">$A$</span> with no accumulation point in <span class="math-container">$X$</span>. Since <span class="math-container">$\overline{A}$</span> is not compact and is closed in <span class="math-container">$X$</span>, there is a filter <span class="math-container">$F$</span> containing <span class="math-container">$\overline{A}$</span> with no accumulation point in <span class="math-container">$X$</span>. Let <span class="math-container">$G$</span> be the filter generated by all open elements of <span class="math-container">$F$</span> together with <span class="math-container">$A$</span>.</p> <p>First, I claim <span class="math-container">$G$</span> is a proper filter. Indeed, suppose <span class="math-container">$U\in F$</span> is open. Then <span class="math-container">$U\cap \overline{A}\in F$</span> since <span class="math-container">$\overline{A}\in F$</span>. Since <span class="math-container">$U$</span> is open and <span class="math-container">$A$</span> is dense in <span class="math-container">$\overline{A}$</span>, this means <span class="math-container">$U\cap A$</span> is nonempty. 
Since every element of <span class="math-container">$G$</span> contains a set of the form <span class="math-container">$U\cap A$</span>, this means <span class="math-container">$G$</span> is a proper filter.</p> <p>Second, I claim <span class="math-container">$G$</span> has no accumulation point in <span class="math-container">$X$</span>, and is thus our desired filter since <span class="math-container">$A\in G$</span>. Indeed, let <span class="math-container">$x\in X$</span> be any point. Since <span class="math-container">$x$</span> is not an accumulation point of <span class="math-container">$F$</span>, there is an open neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span> such that <span class="math-container">$X\setminus U\in F$</span>. By regularity, there are disjoint open sets <span class="math-container">$V$</span> and <span class="math-container">$W$</span> such that <span class="math-container">$x\in V$</span> and <span class="math-container">$X\setminus U\subseteq W$</span>. Then <span class="math-container">$W\in G$</span>, and hence <span class="math-container">$X\setminus V\in G$</span>, and hence <span class="math-container">$x$</span> is not an accumulation point of <span class="math-container">$G$</span>.</p> <hr /> <p>Here is another proof which is a bit more complicated at first glance but which nicely conceptualizes the role of regularity.</p> <p>Recall that if <span class="math-container">$X$</span> is a set, then the set <span class="math-container">$\beta X$</span> of ultrafilters on <span class="math-container">$X$</span> has a natural compact Hausdorff topology, which has as a basis the sets <span class="math-container">$U_A=\{F\in\beta X:A\in F\}$</span> for each <span class="math-container">$A\subseteq X$</span>. 
If <span class="math-container">$X$</span> is a Hausdorff space, we will write <span class="math-container">$C_X\subseteq\beta X$</span> for the set of ultrafilters that converge in <span class="math-container">$X$</span> and <span class="math-container">$L:C_X\to X$</span> for the map taking an ultrafilter to its limit. We then have the following remarkable characterization of regularity.</p> <blockquote> <p><strong>Theorem</strong>: Let <span class="math-container">$X$</span> be a Hausdorff space. Then <span class="math-container">$X$</span> is regular iff <span class="math-container">$L:C_X\to X$</span> is continuous.</p> <p><em>Proof</em>: Suppose <span class="math-container">$X$</span> is regular. Let <span class="math-container">$F\in C_X$</span>, write <span class="math-container">$x=L(F)$</span>, and suppose <span class="math-container">$U$</span> is a neighborhood of <span class="math-container">$x$</span>. By regularity, let <span class="math-container">$A$</span> be a closed neighborhood of <span class="math-container">$x$</span> contained in <span class="math-container">$U$</span>. Then <span class="math-container">$A\in F$</span> since <span class="math-container">$A$</span> is a neighborhood of <span class="math-container">$x$</span>, and if <span class="math-container">$G\in C_X$</span> and <span class="math-container">$A\in G$</span> then <span class="math-container">$L(G)\in A$</span> since <span class="math-container">$A$</span> is closed. 
Thus <span class="math-container">$U_A\cap C_X$</span> is a neighborhood of <span class="math-container">$F$</span> in <span class="math-container">$C_X$</span> whose image under <span class="math-container">$L$</span> is contained in <span class="math-container">$U$</span>, as desired.</p> <p>Conversely, suppose <span class="math-container">$X$</span> is not regular; let <span class="math-container">$x\in X$</span> with a neighborhood <span class="math-container">$U$</span> which contains no closed neighborhood of <span class="math-container">$x$</span>. For each neighborhood <span class="math-container">$V$</span> of <span class="math-container">$x$</span>, its closure is not contained in <span class="math-container">$U$</span>, so we can pick an ultrafilter <span class="math-container">$F_V$</span> which contains <span class="math-container">$V$</span> and converges to a point <span class="math-container">$L(F_V)\not\in U$</span>. Consider these <span class="math-container">$(F_V)$</span> as a net in <span class="math-container">$\beta X$</span>, indexed by the directed set of neighborhoods of <span class="math-container">$x$</span> ordered by reverse inclusion. By compactness of <span class="math-container">$\beta X$</span>, this net has a subnet converging to an ultrafilter <span class="math-container">$F$</span>. Since <span class="math-container">$V\in F_V$</span> for all <span class="math-container">$V$</span>, this limit <span class="math-container">$F$</span> must contain every neighborhood of <span class="math-container">$x$</span>; that is, <span class="math-container">$L(F)=x$</span>. However, the net <span class="math-container">$(L(F_V))$</span> is entirely outside the neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span>, so no subnet can converge to <span class="math-container">$x$</span>. 
Thus <span class="math-container">$L$</span> fails to preserve the convergence of this subnet and is not continuous.</p> </blockquote> <p>Using this theorem, proving <span class="math-container">$2\Rightarrow 1$</span> for regular spaces is quite natural. In terms of ultrafilters, (2) says that every ultrafilter containing <span class="math-container">$A$</span> has a limit in <span class="math-container">$X$</span>. Now suppose this is true and let <span class="math-container">$(x_i)$</span> be a net in <span class="math-container">$\overline{A}$</span>. For each <span class="math-container">$i$</span>, we can pick an ultrafilter <span class="math-container">$F_i$</span> containing <span class="math-container">$A$</span> which converges to <span class="math-container">$x_i$</span>. By compactness of <span class="math-container">$\beta X$</span>, there is a subnet of <span class="math-container">$(F_i)$</span> that converges to some ultrafilter <span class="math-container">$F$</span>, which will still contain <span class="math-container">$A$</span>. By (2), <span class="math-container">$F$</span> converges to some <span class="math-container">$x\in\overline{A}$</span>. Since <span class="math-container">$X$</span> is regular, the theorem says that the corresponding subnet of <span class="math-container">$(x_i)$</span> converges to <span class="math-container">$x$</span>. Thus <span class="math-container">$(x_i)$</span> has a convergent subnet. Since <span class="math-container">$(x_i)$</span> was an arbitrary net in <span class="math-container">$\overline{A}$</span>, this means <span class="math-container">$\overline{A}$</span> is compact.</p> <hr /> <p>Finally, here is an example of how <span class="math-container">$2\Rightarrow 1$</span> can be false if <span class="math-container">$X$</span> is not regular. 
Let <span class="math-container">$X$</span> be the closed unit disk and let <span class="math-container">$A\subseteq X$</span> be the open unit disk, and say a subset <span class="math-container">$C\subseteq X$</span> is closed if it contains the closure of <span class="math-container">$C\cap A$</span> with respect to the usual topology. This defines a topology in <span class="math-container">$X$</span> (another way to describe it is you take the usual topology and then enlarge it by declaring that every subset of the unit circle <span class="math-container">$X\setminus A$</span> is closed, and take the topology that generates; so a closed set in <span class="math-container">$X$</span> is just a union of a closed set in the usual topology and an arbitrary subset of <span class="math-container">$X\setminus A$</span>).</p> <p>Now <span class="math-container">$\overline{A}=X$</span> is not compact, since <span class="math-container">$X\setminus A$</span> is closed in <span class="math-container">$X$</span> but is not compact since it is infinite and discrete. However, every net in <span class="math-container">$A$</span> has a limit in <span class="math-container">$X$</span>. Indeed, every net in <span class="math-container">$A$</span> has a subnet converging to some point of <span class="math-container">$X$</span> with respect to the usual topology by compactness, and the same is true of the topology of <span class="math-container">$X$</span> since nets in <span class="math-container">$A$</span> which converge with respect to the usual topology still converge with respect to the topology of <span class="math-container">$X$</span>.</p>
644,057
<p>I am having trouble with this problem:</p> <p>Let $a_n$ be a sequence of positive terms with $$\frac{a_{n+1}}{a_n}\lt \frac{n^2}{(n+1)^2}.$$ Then is the series $\sum a_n$ convergent?</p> <p>Thanks for any help.</p>
Community
-1
<p>Following Daniel Fischer's comment, let $$v_n=\frac{1}{n^2}$$ so we have $$\frac{a_{n+1}}{a_n}&lt;\frac{v_{n+1}}{v_n}\iff \frac{a_{n+1}}{v_{n+1}}&lt;\frac{a_{n}}{v_{n}},$$ hence the sequence $\left(\frac{a_{n}}{v_{n}}\right)$ is decreasing, and then $$ \frac{a_{n}}{v_{n}}&lt;\frac{a_{1}}{v_{1}}=C\iff a_n&lt;C v_n,$$ and since the series $\displaystyle\sum_n v_n$ is convergent, the series $\displaystyle\sum_n a_n$ is also convergent by comparison.</p>
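<p>A quick numerical sanity check of this comparison argument, using the hypothetical choice $a_n = 1/n^3$ (which satisfies the ratio hypothesis, since $n^3/(n+1)^3$ is smaller than $n^2/(n+1)^2$):</p>

```python
from fractions import Fraction

# Hypothetical example: a_n = 1/n^3 satisfies a_{n+1}/a_n < n^2/(n+1)^2.
a = lambda n: Fraction(1, n**3)
v = lambda n: Fraction(1, n**2)

C = a(1) / v(1)  # here C = 1

# The sequence a_n / v_n is decreasing, hence a_n <= C * v_n.
ratios = [a(n) / v(n) for n in range(1, 200)]
assert all(ratios[i] > ratios[i + 1] for i in range(len(ratios) - 1))
assert all(a(n) <= C * v(n) for n in range(1, 200))

# Partial sums of a_n stay below C * sum(v_n) < C * pi^2/6 < 2.
partial = sum(a(n) for n in range(1, 200))
assert partial < 2
```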
2,342,537
<p>Suppose $f:\mathbb R\rightarrow \mathbb R$ s.t. $f(x+y)=f(x)+f(y)$ for all $x,y\in \mathbb R$ and $f$ is not continuous on $\mathbb R$. Prove that</p> <p>(a).$f$ is not bounded below (or above) on any subinterval $(a,b)$ of $\mathbb R$.</p> <p>(b). $f$ is not monotone.</p> <p>On plugging $x=y=0$; $f(0)=f(0)+f(0)$ which gives $f(0)=0$. Also, plug $y=-x$ to get $f(-x)=-f(x)$ i.e. $f$ is an odd function and thus have graph in opposite quadrants. Now, I am stuck how to show that graph will be unbounded? Please help!</p>
Elle Najt
54,092
<p>We can reduce all statements to studying behavior near zero, by the observation that $f(x + a) = f(x) + f(a)$: if $f$ has property $P$ near zero, then it has $P$ near $a$, where $P$ is continuity, monotonicity, or unboundedness below on an interval.</p> <p>Here is a technique to prove continuity from monotonicity.</p> <p>Suppose that $f$ were monotone, and had the property $f(x + y) = f(x) + f(y)$. We want to show that if $x_n \to 0$, then $f(x_n) \to f(0) = 0$. Because of the monotonicity, it is enough to prove this when $x_n = 1/n$ (squeeze theorem). But $nf(1/n) = f(1)$, so $f(1/n) = f(1) / n$, and this goes to zero.</p> <p>(This proves continuity on the right.)</p> <p>For boundedness: If, for example, $f$ were upper bounded on an interval $(-\epsilon, \epsilon)$, say by $M$, then an upper bound on $(-\epsilon/2, \epsilon/2)$ is $M/2$. Continuing in this way, you can prove that it is continuous at $0$. (Again, from the right.)</p> <p>To go from right continuity to continuity at $0$, use your observation that $f$ is odd.</p> <p>I'm leaving some details out, but I think this is the correct sketch.</p>
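<p>The identity $n\,f(1/n)=f(1)$ forced by additivity can be checked with exact rational arithmetic. The sketch below uses $f(q)=cq$ with an arbitrary sample slope $c$; on $\mathbb Q$ every additive function is of this form:</p>

```python
from fractions import Fraction

c = Fraction(7, 3)            # arbitrary sample slope
f = lambda q: c * q           # an additive function on the rationals

# additivity holds on sample pairs
pairs = [(Fraction(1, 2), Fraction(1, 3)), (Fraction(-5, 4), Fraction(2, 7))]
assert all(f(p + q) == f(p) + f(q) for p, q in pairs)

# the key identity from the answer: n * f(1/n) = f(1), so f(1/n) = f(1)/n -> 0
for n in range(1, 50):
    assert n * f(Fraction(1, n)) == f(1)
assert abs(f(Fraction(1, 10**6))) < Fraction(1, 10**5)
```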
376,600
<p>$$\lim_{n\to\infty} \int_{-\infty}^{\infty} \frac{1}{(1+x^2)^n}\,dx $$</p> <p>Mathematica tells me the answer is 0, but how can I go about actually proving it mathematically?</p>
Community
-1
<p>By symmetry of the integrand it suffices to bound the integral over $[0,\infty)$. Since $(1+x^2)^n\geq 1+nx^2$ (Bernoulli's inequality), simply use this chain</p> <p>$$0\leq\int_0^\infty\frac{1}{(1+x^2)^n}\,dx\leq \int_0^\infty\frac{1}{1+nx^2}\,dx=\frac{1}{\sqrt{n}}\int_0^\infty\frac{dt}{1+t^2}=\frac{\pi}{2\sqrt{n}},$$</p> <p>and the right-hand side tends to $0$ as $n\to\infty$.</p>
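<p>A numerical check of this bound is easy to run (a sketch in pure Python; the truncation point and step count are arbitrary choices):</p>

```python
import math

def half_line_integral(n, upper=50.0, steps=200000):
    """Trapezoidal approximation of the integral of 1/(1+x^2)^n over [0, upper].

    For n >= 2 the tail beyond `upper` is below the integral of x**(-4),
    hence below upper**(-3)/3: negligible at this scale.
    """
    f = lambda x: 1.0 / (1.0 + x * x) ** n
    h = upper / steps
    s = 0.5 * (f(0.0) + f(upper)) + sum(f(i * h) for i in range(1, steps))
    return s * h

vals = {n: half_line_integral(n) for n in (2, 5, 10, 50)}
for n, v in vals.items():
    assert 0.0 < v < math.pi / (2.0 * math.sqrt(n))  # the bound from the answer
assert vals[50] < vals[10] < vals[5] < vals[2]       # squeezed toward 0
```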
149,872
<p>How would I show that $|\sin(x+iy)|^2=\sin^2x+\sinh^2y$? </p> <p>I'm not sure how to begin. Does it involve using $\sinh z=\frac{e^{z}-e^{-z}}{2}$ and $\sin z=\frac{e^{iz}-e^{-iz}}{2i}$?</p>
DonAntonio
31,254
<p>$$z=x+iy\Longrightarrow \sin z=\frac{e^{iz}-e^{-iz}}{2i}=\frac{e^{-y+ix}-e^{y-ix}}{2i}=$$$$=\frac{e^{-y}(\cos x+i\sin x)-e^y(\cos x-i\sin x)}{2i}=\frac{1}{2i}\left[i\sin x\left(e^y+e^{-y}\right)-\cos x\left(e^y-e^{-y}\right)\right]=$$$$=\sin x\cosh y+i\cos x\sinh y\Longrightarrow ...$$</p>
1,236,600
<p>A dose of $D$ milligrams of a drug is taken every 12 hours. Assume that the drug's half-life is such that every $12$ hours a fraction $r$, with $0&lt;r&lt;1$ of the drug remains in the blood. Let $d_1= D$ be the amount of the drug in the blood after first dose. It follows that the amount of the drug in the blood after the $n^{\mathrm{th}}$ dose is $$d_n= D\sum_{k=0}^{n-1}r^k.$$</p> <p>At the steady state, $$d_\infty = \lim_{n\to\infty} d_n = \frac D{1-r}.$$</p> <p>$d_\infty$ is the drug level just AFTER a dose, so it is the maximum drug level. Find the minimum drug level $d_{\min}$, just PRIOR to a steady state dose. Verify that $$d_\infty - d_{\min}=D.$$</p> <p>I have no idea how to do this. Any ideas?</p>
Vectornaut
16,063
<p>You defined $d_n$ to be the amount of drug in the blood just after the $n$th dose is taken. Where does the expression $$d_n = D\sum_{k = 0}^{n - 1} r^k$$ come from? The $m$th dose contributed $D$ milligrams of drug at the moment it was taken. The $n$th dose is taken $n - m$ time units later; by then, the contribution of the $m$th dose has decayed to $Dr^{n-m}$ milligrams. Since $d_n$ is the sum of the contributions from all previous doses, plus the current dose, $$d_n = \sum_{m=1}^n Dr^{n-m}.$$ Defining $k = n - m$, we get the expression you wrote.</p> <hr> <p>Let's say $d'_n$ is the amount of drug in the blood just <em>before</em> the $n$th dose is taken. Calculating $d'_n$ is just like calculating $d_n$, except we leave out the $n$th dose: $$d'_n = \sum_{m=1}^{n-1} Dr^{n-m}.$$ This time, since $m$ only goes up to $n-1$, we can define $k = (n - 1) - m$ and get $$d'_n = D\sum_{k=0}^{n-2} r^{k+1}.$$ Pulling out an $r$, $$d'_n = Dr\sum_{k=0}^{n-2} r^k$$ we see that $$d'_n = rd_{n-1}.$$</p> <hr> <p>Here's a graph of how the amount of drug in the blood changes over time. At the time of each does, $d_n$ is highlighted with a pink circle, and the $d'_n$ is highlighted with a purple square.</p> <p><img src="https://i.stack.imgur.com/KRVmh.png" alt="enter image description here"></p> <p>You defined $d_\infty$ to be the maximum amount of drug in the blood at steady state, and $d_\text{min}$ to be the minimum amount. Looking at the graph above, you should be able to see that $$d_\infty = \lim_{n \to \infty} d_n$$ and $$d_\text{min} = \lim_{n \to \infty} d'_n$$ We worked out earlier that $d'_n = rd_{n-1}$. From that, you can figure out that $$d_\text{min} = r d_\infty,$$ so $d_\infty - d_\text{min} = (1-r)d_\infty$. The rest is straightforward.</p>
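<p>The recursion behind these formulas, $d_{n+1} = r\,d_n + D$ with pre-dose level $d'_{n+1} = r\,d_n$, is easy to simulate. The sketch below (with arbitrary sample values of $D$ and $r$) confirms $d_\infty = D/(1-r)$, $d_\text{min} = r\,d_\infty$ and $d_\infty - d_\text{min} = D$:</p>

```python
# Simulate the dosing recursion: level just after each dose is d_{n+1} = r*d_n + D.
D, r = 100.0, 0.6   # arbitrary sample dose and retention fraction

d = D  # d_1
for _ in range(200):          # iterate to (near) steady state
    d_before = r * d          # level just before the next dose
    d = d_before + D          # level just after the next dose

d_inf = D / (1 - r)
d_min = r * d_inf

assert abs(d - d_inf) < 1e-9           # post-dose level converges to D/(1-r)
assert abs(d_before - d_min) < 1e-9    # pre-dose level converges to r*d_inf
assert abs(d_inf - d_min - D) < 1e-12  # the gap is exactly one dose
```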
2,483,611
<p>I believe the answer is 13 * $13\choose4$ * $48\choose9$.</p> <p>There are $13\choose4$ ways to draw 4 of the same cards, multiplied by 13 for each possible rank (A, 2, 3, ..., K). Then there are $48\choose9$ ways to choose the remaining cards.</p> <p>One thing I am not certain of is whether this accounts for the possibility of having two 4-of-a-kinds or three 4-of-a-kinds, but I believe it does, since having two and three means you have one.</p>
Nick Pavlov
477,185
<p><strong>EDIT</strong>: I realized that calling Macavity's answer "working by accident" is not fair. What it does is use the general prescription for solving an inequality involving one irrational expression, just doesn't explicitly state that. In general, $f(x) \geq \sqrt{g(x)}$ is equivalent to $$f(x) \geq 0 \;\; \land \;\; (f(x))^2 \geq g(x) \geq 0 $$ and in your case only the first of the above yields a new restriction on $m$. Be careful though, if the inequality is the other way, it is more complicated. $f(x) \leq \sqrt{g(x)}$ is equivalent to $$ \{ f(x) &lt; 0 \;\;\land\;\; g(x) \geq 0 \} \;\;\lor\;\; \{ f(x) \geq 0 \;\;\land\;\; (f(x))^2 \leq g(x) \} $$</p> <hr> <p>There is another very powerful approach for imposing conditions on the roots of a quadratic by considering its graph. Let $Q(x) = ax^2 + bx + c$. It has two non-negative $x$-intercepts if and only if $$ \begin{align} \Delta \geq 0 \\ aQ(0) = ac \geq 0 \\ \frac{-b}{2a} \geq 0 \end{align} $$ The first is equivalent to having real roots (if you want distinct, make it strict). The second says that you want $0$ to be <strong>outside</strong> of the interval between the roots (which happens iff $Q(0) \geq 0$ for positive $a$ and iff $Q(0) \leq 0$ for negative $a$). The last one makes sure that if it is outside, it is on the <strong>left</strong> side, because it will be to the left of the vertex. So together they are equivalent to saying that $0$ is less than or equal to both roots (if you want strictly positive roots, make the second inequality strict). 
</p> <p>It is easily adapted to when other configurations are desired:</p> <ul> <li><p>for two negative roots, flip the direction of the last one;</p></li> <li><p>for one positive and one negative, flip the second one and get rid of the third one altogether (in fact, the first one also becomes redundant in this case);</p></li> <li><p>if you want roots bigger/smaller than some particular number $t$ instead of $0$, replace $Q(0)$ with $Q(t)$ in the second and $0$ with $t$ in the third.</p></li> <li><p>it is even possible to apply the same method (carefully) to situations where two (or even more) numbers are involved in the conditions on the roots, such as if we want both roots in some interval $(s, t)$, or one inside and one outside (possibly on a particular side), or each in its own interval, etc.</p></li> </ul>
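<p>The three conditions can be cross-checked against directly computed roots; the sketch below does this for a handful of sample quadratics:</p>

```python
import math

def two_nonneg_roots_direct(a, b, c):
    """Check directly whether ax^2 + bx + c has two real roots, both >= 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return min(r1, r2) >= 0

def two_nonneg_roots_criteria(a, b, c):
    """The three conditions from the answer: Delta >= 0, a*Q(0) >= 0, -b/2a >= 0."""
    return (b * b - 4 * a * c >= 0) and (a * c >= 0) and (-b / (2 * a) >= 0)

cases = [(1, -5, 6), (1, 5, 6), (1, -2, 3), (-1, 3, -2), (2, 0, 0), (1, 1, -6)]
for a, b, c in cases:
    assert two_nonneg_roots_direct(a, b, c) == two_nonneg_roots_criteria(a, b, c)
```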
1,903,717
<p>This is actually from an Analysis text but I feel it's a set theory question.</p> <p>Proposition: for every rational number $\epsilon &gt; 0$ there exists a non-negative rational number $x$ s.t. $x^2 &lt; 2 &lt; (x+ \epsilon )^2 $.</p> <p>It provides a proof that I'm having trouble understanding.</p> <p>Proof: let $ \epsilon &gt;0$ be rational. Suppose for the sake of contradiction that there is no non-negative rational number $x$ for which $x^2 &lt; 2 &lt; (x+ \epsilon )^2 $ holds,</p> <p>i.e. whenever $ x^2 &lt; 2$, the statement $(x+ \epsilon )^2 &lt;2 $ also holds.</p> <p>It states by a previous proposition that $(x+ \epsilon )^2 $ cannot equal 2.</p> <p><strong>Then it states "Since $0^2 &lt; 2$ we thus have $ \epsilon ^2 &lt; 2$ which then implies that $ (2\epsilon )^2 &lt; 2$ and indeed a simple induction shows that $ (n\epsilon )^2 &lt; 2$ for every natural number n." Which is what I can't understand.</strong></p> <p>The rest of the proof is strange as well. I'm fine with the statement $ \epsilon ^2 &lt; 2$ as it clearly follows that $ \epsilon ^2 &lt; (x+ \epsilon )^2 $ as $x$ is positive and $ \epsilon ^2$ is on both sides of the expression.</p> <p>If I were proving it I would rewrite</p> <p>$ \epsilon ^2 = n $ $ \epsilon' $ s.t. $n \in \mathbb {N} $ and $ \epsilon' \in \mathbb {Q} $ </p> <p>and then use the Archimedean property to prove this is a contradiction. </p> <p>If anyone can follow/explain what the bold text means I would greatly appreciate it.</p>
Francesco Alem.
175,276
<p>The statement is true. Here's a constructive proof for ya.</p> <p>let $\epsilon\in \mathbf{Q}\,|\, \epsilon &gt;0$</p> <p>choose $k \in \mathbf{N}$ with $k\ge 2$ and $k&gt;2\epsilon$, so that $\frac{\epsilon}{k}&lt;\frac{1}{2}$ and $\frac{\epsilon}{k}&lt;\epsilon$</p> <p>choose $x \in [\sqrt 2 -\frac{\epsilon}{k};\,\,\sqrt2 - \frac{\epsilon}{2k}] \cap\mathbf{Q}$ arbitrarily (such an $x$ exists since the rationals are dense in the reals, a consequence of the Archimedean property).</p> <p>$x$ is chosen in such a way that certainly makes it positive and lower than $\sqrt 2$, since $x\le\sqrt 2 -\frac{\epsilon}{2k}&lt;\sqrt 2$ and $x\ge\sqrt 2 -\frac{\epsilon}{k}&gt;\sqrt 2-\frac12&gt;0$; hence it must hold that $x^2&lt;2$</p> <p>also $x+\epsilon$ is certainly above $\sqrt 2$ since $x+\epsilon\ge\sqrt 2 -\frac{\epsilon}{k}+\epsilon&gt;\sqrt 2$ (because $\frac{\epsilon}{k}&lt;\epsilon$); hence it must hold that $(x+\epsilon)^2&gt;2$</p> <p>it follows that $x^2&lt;2&lt;(x+\epsilon)^2$</p>
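<p>The statement can also be checked mechanically. The sketch below uses a slightly different construction than the proof above (a grid search with step $q=\epsilon/2$, in exact rational arithmetic), but verifies the same conclusion:</p>

```python
from fractions import Fraction

def find_x(eps):
    """Given rational eps > 0, return rational x >= 0 with x^2 < 2 < (x+eps)^2.

    Grid construction: step q = eps/2; take the largest m with (m*q)^2 < 2.
    Then x = m*q satisfies x^2 < 2, while x + eps = (m+2)*q > (m+1)*q and
    ((m+1)*q)^2 >= 2, with equality impossible since sqrt(2) is irrational.
    """
    q = Fraction(eps) / 2
    m = 0
    while ((m + 1) * q) ** 2 < 2:
        m += 1
    return m * q

for eps in [Fraction(1, 10), Fraction(1, 137), Fraction(3, 2)]:
    x = find_x(eps)
    assert x >= 0 and x * x < 2 < (x + eps) ** 2
```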
1,171,150
<p>I am struggling to figure out $$\lim\limits_{n \to \infty} \sqrt[n]{n^2+1} .$$ I've tried manipulating the inside of the square root but I cannot seem to figure out a simplification that helps me find the limit.</p>
abel
9,252
<p>let $$y = \sqrt[n]{n^2 + 1}$$ then $$\ln y = \dfrac{\ln(n^2 + 1)}{n} = \dfrac{2 \ln n}{n} + \dfrac{\ln(1 + 1/n^2)}{n} \to 0 \text{ as } n \to \infty,$$ since each of the two terms tends to $0$. therefore $$\lim_{n \to \infty} y = \lim_{n \to \infty}\sqrt[n]{n^2 + 1} = e^0 = 1$$</p>
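<p>A quick numerical check that $\sqrt[n]{n^2+1}\to 1$:</p>

```python
# Numerical check that (n^2 + 1)**(1/n) -> 1.
vals = [(n * n + 1) ** (1.0 / n) for n in (10, 100, 1000, 10000)]
assert all(v > 1.0 for v in vals)             # each term exceeds 1
assert vals[0] > vals[1] > vals[2] > vals[3]  # and they decrease toward 1
assert abs(vals[-1] - 1.0) < 0.01
```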
2,581,135
<blockquote> <p>Find: $\displaystyle\lim_{x\to\infty} \dfrac{\sqrt{x}}{\sqrt{x+\sqrt{x+\sqrt{x}}}}.$</p> </blockquote> <p>Question from a book on preparation for math contests. All the tricks I know to solve this limit are not working. Wolfram Alpha struggled to find $1$ as the solution, but the solution process presented is not understandable. The answer is $1$.</p> <p>Hints and solutions are appreciated. Sorry if this is a duplicate.</p>
Peter Szilas
408,605
<p>Let $y=\sqrt x$; then $y\to\infty$ as $x\to\infty$.</p> <p>Numerator: $y$</p> <p>Denominator:</p> <p>$\sqrt {y^2 +\sqrt{y^2+y}}= \sqrt{y^2+y\sqrt{1+1/y}}=$</p> <p>$y\sqrt{1+(1/y)\sqrt{1+1/y}}.$</p> <p>Hence</p> <p>$\lim_{y \rightarrow \infty} \dfrac{y}{y \sqrt{1+(1/y) \sqrt{1+1/y}}}= $</p> <p>$\lim_{y \rightarrow \infty} \dfrac{1}{\sqrt{1+(1/y)\sqrt{1+1/y}}} =1.$</p>
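<p>A numerical check of the limit (sample points chosen arbitrarily):</p>

```python
import math

def f(x):
    return math.sqrt(x) / math.sqrt(x + math.sqrt(x + math.sqrt(x)))

# f(x) should approach 1 from below as x grows.
xs = [10.0 ** k for k in range(1, 13, 3)]   # 10, 1e4, 1e7, 1e10
vals = [f(x) for x in xs]
assert all(0.0 < v < 1.0 for v in vals)
assert all(vals[i] < vals[i + 1] for i in range(len(vals) - 1))  # increasing to 1
assert abs(f(1e12) - 1.0) < 1e-5
```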
1,346,073
<p>$$100\frac{d^2y}{dx^2} + y = 0$$</p> <p>Is this worked out by using the auxiliary equation such that:</p> <p>$$100m^2 + 1 = 0$$</p> <p>so $m = \pm i\sqrt{1/100}$ ?</p> <p>So the general solution would be $y(x) = A \cos (x/10) + B \sin(x/10)$?</p> <p>I am not sure if I've gone about this the right way.</p>
ccorn
75,794
<p>If <em>lowest</em> means the same as <em>minimum</em> to you, then yes. Here is a sloppy outline, I will leave it to you to fill in the gaps.</p> <p>For a real symmetric $n\times n$ matrix $A$, consider the Rayleigh quotient $$R_A(u) = \frac{u^\top A\,u}{u^\top u}\quad\text{for}\quad u\in\mathbb{R}^n\setminus\{0\}$$ Every eigenvector $v$ is a stationary point of that quotient, that is, $$\forall i=1,\ldots,n: \left.\frac{\partial R_A}{\partial u_i}\right|_{u=v} = 0$$ and all stationary points are eigenvectors. (Exercise: Prove that.) The eigenvalue corresponding to such eigenvector $v$ is $$\lambda = R_A(v)$$ In particular, the minimum quotient corresponds to the minimum eigenvalue: $$\lambda_{\text{min}} = \min_{u\in\mathbb{R}^n\setminus\{0\}} R_A(u)$$ Now suppose the main diagonal element $a_{ii}$ of $A$ is the minimum of all main diagonal elements. Let $e_i$ be the corresponding basis vector, then: $$\lambda_{\text{min}}\leq R_A(e_i) = a_{ii}$$ and there we are.</p>
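<p>For the $2\times 2$ case the outline is easy to verify concretely, since the smaller eigenvalue has a closed form. A sketch (sample matrices chosen arbitrarily):</p>

```python
import math

def min_eig_2x2(a, b, d):
    """Smaller eigenvalue of the symmetric matrix [[a, b], [b, d]]."""
    mean = (a + d) / 2.0
    radius = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mean - radius

def rayleigh(a, b, d, u):
    """Rayleigh quotient of [[a, b], [b, d]] at the vector u = (x, y)."""
    x, y = u
    return (a * x * x + 2 * b * x * y + d * y * y) / (x * x + y * y)

for a, b, d in [(4.0, 1.0, 2.0), (1.0, -3.0, 1.0), (5.0, 0.0, -2.0)]:
    lam = min_eig_2x2(a, b, d)
    # Rayleigh quotient at a basis vector equals the diagonal entry...
    assert math.isclose(rayleigh(a, b, d, (1.0, 0.0)), a)
    assert math.isclose(rayleigh(a, b, d, (0.0, 1.0)), d)
    # ...so the minimum eigenvalue is at most the minimum diagonal entry.
    assert lam <= min(a, d) + 1e-12
```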
1,496,651
<p>I'm trying to solve a control problem involving a pendulum, in which the equation of motion is:</p> <p>$ml^2\frac{d^2 \theta}{dt^2} = \tau -mgl \cos(\theta)$</p> <p>I need to linearize $\tau -mgl \cos(\theta)$ for $\tau = \tau_0 + \delta \tau, \theta = \theta_0 + \delta \theta$.</p> <p>The answer to this question is supposed to be $mgl \sin(\theta_0)\delta \theta + \delta \tau$, but I just don't see how to get to that answer. I know it probably assumes that $\delta \tau = -mgl\cos(\theta_0)$, as it is the minimal torque required to counteract the force of gravity. But the other part, probably some mathematical trick involving the $\delta \theta$, eludes me.</p>
obareey
111,671
<p>I think you forgot to mention that $(\tau_0, \theta_0)$ is a fixed point, i.e. $\tau_0 - mgl \cos(\theta_0) = 0$. Now, we can use Taylor series expansion around $(\tau_0, \theta_0)$ to obtain</p> <p>$$ml^2 \frac{d^2 (\theta_0 + \delta \theta)}{dt^2} = \tau_0 + \delta \tau - mgl [\cos(\theta_0) - \sin(\theta_0) \delta \theta + O((\delta \theta) ^2)]$$</p> <p>Ignoring the higher order terms for small $\delta \theta$, we can obtain</p> <p>$$ml^2 \frac{d^2 (\delta \theta)}{dt^2} = \delta \tau + mgl \sin(\theta_0) \delta \theta$$</p>
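<p>The dropped higher-order terms are $O((\delta\theta)^2)$, which can be seen numerically: halving $\delta\theta$ should roughly quarter the residual between the exact right-hand side and its linearization. A sketch with arbitrary sample parameters:</p>

```python
import math

m, l, g = 1.0, 1.0, 9.81
theta0 = 0.7                           # arbitrary sample angle
tau0 = m * g * l * math.cos(theta0)    # fixed-point torque: tau0 - mgl*cos(theta0) = 0

def residual(dtheta, dtau=0.0):
    exact = tau0 + dtau - m * g * l * math.cos(theta0 + dtheta)
    linear = dtau + m * g * l * math.sin(theta0) * dtheta
    return abs(exact - linear)

# Halving dtheta should roughly quarter the residual (second-order error).
r1 = residual(1e-3)
r2 = residual(5e-4)
assert 3.5 < r1 / r2 < 4.5
```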
1,496,651
<p>I'm trying to solve a control problem involving a pendulum, in which the equation of motion is:</p> <p>$ml^2\frac{d^2 \theta}{dt^2} = \tau -mgl \cos(\theta)$</p> <p>I need to linearize $\tau -mgl \cos(\theta)$ for $\tau = \tau_0 + \delta \tau, \theta = \theta_0 + \delta \theta$.</p> <p>The answer to this question is supposed to be $mgl \sin(\theta_0)\delta \theta + \delta \tau$, but I just don't see how to get to that answer. I know it probably assumes that $\delta \tau = -mgl\cos(\theta_0)$, as it is the minimal torque required to counteract the force of gravity. But the other part, probably some mathematical trick involving the $\delta \theta$, eludes me.</p>
JMJ
295,405
<p>First: $\tau$ is not a state variable but a forcing term (torque in the equation), so the perturbation $\tau = \tau_0 + \delta\tau$ is unnecessary. </p> <p>Second: In principle, you need not necessarily choose a fixed point equilibrium to linearize about (though it certainly helps for any practical implementation of a controller).</p> <p>First let's derive the general solution, then we'll see how it works for the specific problem.</p> <p>Let $x$ be an arbitrary state vector in the Hilbert space $X$ and suppose the time evolution of $x$ is given via $\dot{x} = f(x,t)$ for a smooth mapping $f:X\rightarrow X$. As obareey states, $f$ enjoys a Taylor expansion about the function $y$ given by $$ f(x) = f(y) + \partial_xf(y)(x-y) + \frac{1}{2}\partial^2_xf(y)(x-y)(x-y)^T+\cdots, $$ where $$ \partial_xf(y) = \frac{\partial f}{\partial x}|_{x = y},\ \ \partial^2_xf(y) = \frac{\partial^2 f}{\partial x\partial x^T}|_{x = y}, $$ etc. The linearization of $f$ is defined to be the truncation of this series to first (that is, linear) order. This produces the linearized equation $$ \dot{x} \approx f(y) + \partial_xf(y)(x-y). $$ Since this approximation is based on the assumption $x = y + \delta$ where $|\delta|/|y| \ll 1$, it follows that $\dot{x} = \dot{y} + \dot{\delta} = f(y) +\dot{\delta}$, and so we have $$ \dot{\delta} = \partial_xf(y)\delta. $$ Since $y$ is presumed known, $x = y + \delta$ is determined once $\delta$ is known, and $\partial_xf(y)$ is just a (possibly time-dependent) matrix $A(t)$. By this method we have reduced the nonlinear equation to the equation $$ \dot{\delta} = A(t)\delta. $$</p> <p>Now on to your problem:</p> <p>The state vector is $x = [\theta, \dot{\theta}]^T$ and the nonlinear model is $$ \frac{d\theta}{dt} = f_\theta = \dot{\theta},\ \ \frac{d\dot{\theta}}{dt} = f_{\dot{\theta}} = \frac{\tau}{m\ell^2}- \frac{g}{\ell}\cos\theta. 
$$ Choosing the constant reference $y = [\theta_0,\dot{\theta}_0]^T$, the linearization is equivalent to finding the partials $$\begin{align} \frac{\partial f_\theta}{\partial \theta} = 0, &amp;\ \frac{\partial f_\theta}{\partial \dot{\theta}} = 1\\ \frac{\partial f_\dot{\theta}}{\partial \theta} = \frac{g}{\ell}\sin\theta_0, &amp;\ \frac{\partial f_\dot{\theta}}{\partial \dot{\theta}}=0\\ \end{align}$$ and therefore the linearized system is of the form $$ \dot{\delta} = \begin{bmatrix} 0 &amp; 1\\ \frac{g}{\ell}\sin\theta_0 &amp; 0\\ \end{bmatrix}\delta $$ where $\delta = [\theta-\theta_0,\dot{\theta}-\dot{\theta}_0]^T$. Choosing the fixed point $\theta_0 = \cos^{-1}(\tau/mg\ell)$ and $\dot{\theta}_0 = 0$ we have $$ \begin{align} \frac{d\theta}{dt} &amp;= \dot{\theta}\\ \frac{d\dot{\theta}}{dt} &amp;= \frac{g}{\ell}\sin(\cos^{-1}(\tau/mg\ell))\theta - \frac{g}{\ell}\sin(\cos^{-1}(\tau/mg\ell))\cos^{-1}(\tau/mg\ell) \end{align} $$</p>
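<p>The Jacobian computed above can be sanity-checked with finite differences at the fixed point; a sketch with arbitrary sample parameters $m$, $\ell$, $g$, $\tau$:</p>

```python
import math

m, l, g, tau = 1.0, 1.0, 9.81, 3.0

def f(theta, thetadot):
    # Nonlinear pendulum state derivatives (f_theta, f_thetadot).
    return thetadot, tau / (m * l ** 2) - (g / l) * math.cos(theta)

theta0 = math.acos(tau / (m * g * l))   # fixed point, with thetadot0 = 0

# Central finite-difference Jacobian at the fixed point.
h = 1e-6
J = [[0.0, 0.0], [0.0, 0.0]]
for j, dx in enumerate([(h, 0.0), (0.0, h)]):
    fp = f(theta0 + dx[0], 0.0 + dx[1])
    fm = f(theta0 - dx[0], 0.0 - dx[1])
    for i in range(2):
        J[i][j] = (fp[i] - fm[i]) / (2 * h)

expected = [[0.0, 1.0], [(g / l) * math.sin(theta0), 0.0]]
for i in range(2):
    for j in range(2):
        assert abs(J[i][j] - expected[i][j]) < 1e-5
```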
636,730
<p>Let $G$ be a group of infinite order. Does there exist an element $x$ belonging to $G$ such that $x$ is not equal to $e$ and the order of $x$ is finite?</p>
user1729
10,513
<p>Easy example: Take the direct product of $\mathbb{Z}$ with your favourite non-trivial finite group $H$, $G=\mathbb{Z}\times H$. It is infinite as it contains $\mathbb{Z}$ as a subgroup, but it contains elements of finite order as it contains your favourite non-trivial finite group $H$ as a subgroup.</p> <p>For example, if $H=C_2$ is cyclic of order two then $G=\mathbb{Z}\times C_2$ contains an element of order two. Using $C_k$, the cyclic group of order $k$, gives an infinite group with an element of order $k$.</p> <p>Interestingly, the other two examples which use $\mathbb{R}$ and $\mathbb{Q}$ are not "finitely generated". A group $G$ is finitely generated if there exists a finite subset $S$ of $G$ such that every element of $G$ is a product of elements from $S$. So in the examples of $\mathbb{R}$ under multiplication, and of rotations of the circle, there exists no such set. In my above example, the set $S$ could consist of the copy of the finite subgroup $H$ and the element $1\in\mathbb{Z}$ (more formally, $S=\{(1, e)\}\cup\{(0, h): h\in H\}$). Note that there are lots of choices for the set $S$: the above is just a single example.</p>
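<p>The direct product construction is easy to model concretely; the sketch below represents $G=\mathbb{Z}\times C_k$ as pairs and checks element orders (the search cap is an arbitrary stand-in for infinite order):</p>

```python
# Model G = Z x C_k as pairs (n, h), with C_k written additively as {0, ..., k-1}.
def mul(a, b, k):
    return (a[0] + b[0], (a[1] + b[1]) % k)

def order(x, k, cap=1000):
    """Return the order of x in Z x C_k, or None if it exceeds `cap`."""
    identity = (0, 0)
    y = x
    for n in range(1, cap + 1):
        if y == identity:
            return n
        y = mul(y, x, k)
    return None

assert order((0, 1), 2) == 2     # (0, 1) in Z x C_2 has order 2
assert order((1, 0), 2) is None  # (1, 0) has infinite order
assert order((5, 1), 2) is None  # a nonzero Z-component forces infinite order
```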
3,224,745
<p>Naive evaluation of <span class="math-container">$\sqrt{a + x} - \sqrt{a}$</span> when <span class="math-container">$|a| &gt;&gt; |x|$</span> suffers from catastrophic cancellation and loss of significance.</p> <p>WolframAlpha gives the Taylor series for <span class="math-container">$\sqrt{a+x}-\sqrt{a}$</span> as: <span class="math-container">$$\frac{x}{2 \sqrt{a}} - \frac{x^2}{8 a^{3/2}} + \frac{x^3}{16 a^{5/2}} - \frac{5 x^4}{128 a^{7/2}} + \frac{7 x^5}{256 a^{9/2}} + O(x^6)$$</span> which (I think) equals: <span class="math-container">$$\sqrt{a} \left( \frac{1}{2} \left(\frac{x}{a}\right) - \frac{1}{8} \left(\frac{x}{a}\right)^2 + \frac{1}{16} \left(\frac{x}{a}\right)^3 - \frac{5}{128} \left(\frac{x}{a}\right)^4 + \frac{7}{256} \left(\frac{x}{a}\right)^5 + O\left(\left(\frac{x}{a}\right)^6\right) \right)$$</span></p> <p>How quickly do the coefficients decrease?</p> <p>How many terms are needed to reach <span class="math-container">$53$</span> bits of accuracy (IEEE <code>double</code> precision) in the result given that <span class="math-container">$10^{-300} &lt; \left|\frac{x}{a}\right| &lt; 1$</span> is known?</p> <p>Alternatively, what are the threshold values of <span class="math-container">$\left|\frac{x}{a}\right|$</span> where the number of terms changes?</p> <p>What about rounding errors, assuming each value is stored in <code>double</code> precision?</p>
Robert Israel
8,508
<p>The Taylor series is</p> <p><span class="math-container">$$ \sqrt{a+x} - \sqrt{a} = \sum_{k=1}^\infty (-1)^{k+1} \frac{(2k)!}{(k!)^2(2k-1)} 4^{-k} a^{1/2-k} x^k$$</span> If <span class="math-container">$|x/a| &lt; 1$</span>, the absolute values of the terms decrease, since if <span class="math-container">$c_k = (2k)!/((k!)^2 (2k-1) 4^k)$</span>, <span class="math-container">$$ \frac{c_{k+1}}{c_k} = \frac{2k-1}{2k+2} &lt; 1$$</span> Thus if <span class="math-container">$a &gt; x &gt; 0$</span> the absolute value of the error is always less than that of the next term. However, if <span class="math-container">$x/a$</span> is close to <span class="math-container">$1$</span> the convergence is rather slow: <span class="math-container">$$c_k \sim \frac{1}{2 \sqrt{\pi} k^{3/2}}$$</span> so that won't be less than <span class="math-container">$2^{-53}$</span> unless <span class="math-container">$k &gt; 1.862 \times 10^{10}$</span> approximately.</p>
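<p>The coefficients $c_k$, their ratio, and the stated asymptotics can all be checked directly (exact arithmetic for the first two, a numerical spot check for the third):</p>

```python
from fractions import Fraction
import math

def c(k):
    """Absolute value of the k-th series coefficient (the series alternates)."""
    return Fraction(math.factorial(2 * k),
                    math.factorial(k) ** 2 * (2 * k - 1) * 4 ** k)

# First few coefficients match the expansion: 1/2, 1/8, 1/16, 5/128 in abs value.
assert [c(k) for k in range(1, 5)] == \
    [Fraction(1, 2), Fraction(1, 8), Fraction(1, 16), Fraction(5, 128)]

# Ratio of consecutive coefficients is (2k-1)/(2k+2) < 1, exactly.
for k in range(1, 50):
    assert c(k + 1) / c(k) == Fraction(2 * k - 1, 2 * k + 2)

# Asymptotics: c_k * 2*sqrt(pi)*k^(3/2) -> 1.
approx = float(c(2000)) * 2 * math.sqrt(math.pi) * 2000 ** 1.5
assert abs(approx - 1.0) < 1e-3
```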
1,618,411
<p>I'm learning the fundamentals of <em>discrete mathematics</em>, and I have been asked to solve this problem:</p> <p>Given the set of natural numbers</p> <p>$$ \mathbb{N} = \{0, 1, 2, 3, ...\} $$</p> <p>write a definition for the less-than relation.</p> <p>I wrote this:</p> <p>$a &lt; b$ if $a + 1 &lt; b + 1$</p> <p>Is it correct?</p>
miracle173
11,206
<p>How can you decide if $3&lt;5$ using your definition? You can say $3&lt;5$ if $4&lt;6$ if $5&lt;7$ and so on, but this sequence will never end.</p> <p>It works the other way round:</p> <ul> <li>if $b \ne 0$: $0 \lt b$ </li> <li>if $a \lt b$: $a+1 \lt b+1 $ </li> </ul> <p>$2 \ne 0$, so $0 \lt 2$, therefore $1 \lt 3$, therefore $2 \lt 4$, and finally $3 \lt 5$</p>
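<p>This terminating recursion can be sketched as a plain function (a hypothetical illustration, not part of the original answer):</p>

```python
def lt(a, b):
    """Decide a < b on the naturals using only the two rules above."""
    if a == 0:
        return b != 0          # rule 1: 0 < b whenever b != 0
    if b == 0:
        return False           # nothing is below 0
    return lt(a - 1, b - 1)    # rule 2 read backwards: a < b iff a-1 < b-1

assert lt(3, 5)
assert not lt(5, 3)
assert not lt(4, 4)
assert lt(0, 1) and not lt(0, 0)
```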
2,613,410
<blockquote> <p>What is the value of <span class="math-container">$2x+3y$</span> if</p> <p><span class="math-container">$x+y=6$</span> &amp; <span class="math-container">$x^2+3xy+2y=60$</span> ?</p> </blockquote> <p>My trial: from the given conditions, substitute <span class="math-container">$y=6-x$</span> in <span class="math-container">$x^2+3xy+2y=60$</span>: <span class="math-container">$$x^2+3x(6-x)+2(6-x)=60$$</span> <span class="math-container">$$x^2-8x+24=0$$</span> <span class="math-container">$$x=\frac{8\pm\sqrt{8^2-4(1)(24)}}{2(1)}=4\pm2i\sqrt2$$</span> this gives us <span class="math-container">$y=2\mp2i\sqrt2$</span> we now have <span class="math-container">$x=4+2i\sqrt2, y=2-2i\sqrt2$</span> or <span class="math-container">$x=4-2i\sqrt2, y=2+2i\sqrt2$</span></p> <p>Substituting these values I got <span class="math-container">$2x+3y=14-2i\sqrt2$</span> or <span class="math-container">$$2x+3y=14+2i\sqrt2$$</span></p> <p>But my book suggests that <span class="math-container">$2x+3y$</span> should be a real value, which I couldn't get. Can somebody please help me solve this problem? Is there a mistake in the question?</p> <p>Thank you.</p>
Dr. Sonnhard Graubner
175,066
<p>we get $$x^2+3x(6-x)+2(6-x)=60$$ simplifying we obtain $$-2x^2+16x-48=0$$ or $$x^2-8x+24=0$$ Can you solve this? You will get $$x_1=4+2\sqrt{2}i$$ or $$x_2=4-2\sqrt{2}i$$ and $$y_1=2-2\sqrt{2}i$$ $$y_2=2+2\sqrt{2}i$$ so $x+y=6$ in both cases.</p>
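<p>A numerical confirmation that both solution pairs satisfy the original constraints and that $2x+3y=14\mp 2\sqrt 2\,i$ is never real, supporting the suspicion that the book's question contains a mistake:</p>

```python
import math

sqrt2 = math.sqrt(2)
solutions = [(complex(4,  2 * sqrt2), complex(2, -2 * sqrt2)),
             (complex(4, -2 * sqrt2), complex(2,  2 * sqrt2))]

values = []
for x, y in solutions:
    assert abs(x + y - 6) < 1e-12                   # first constraint
    assert abs(x ** 2 + 3 * x * y + 2 * y - 60) < 1e-9  # second constraint
    values.append(2 * x + 3 * y)

# 2x + 3y is 14 -/+ 2*sqrt(2)*i in the two cases: never real.
assert all(abs(v.imag) > 2 for v in values)
assert all(abs(v.real - 14) < 1e-12 for v in values)
```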