1,673,452
<p>Let $\{a_j\}_{j=1}^N$ be a finite set of positive real numbers. Suppose </p> <p>$$\sum_{j=1}^{N} a_j = A,$$ prove</p> <p>$$\sum_{j=1}^{N} \frac{1}{a_j} \geq \frac{N^2}{A}.$$ </p> <p>Hints on how to proceed?</p>
DanielWainfleet
254,665
<p>For positive $a,b$ we have $a/b+b/a= (\sqrt {a/b}-\sqrt {b/a})^2+2\geq 2 .$</p> <p>Therefore $\sum_1^na_i \sum_1^n 1/a_i= \sum_1^n a_i(1/a_i)+\sum_{1\leq i&lt;j\leq n}(a_i/a_j+a_j/a_i)=n+\sum_{1\leq i&lt;j\leq n}(a_i/a_j+a_j/a_i)\geq n+\sum_{1\leq i&lt;j\leq n}2=n+\binom {n}{2}2=n^2.$ (This is, in essence, the Cauchy-Schwarz Inequality.)</p>
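As a quick numerical sanity check of the inequality (not part of the proof above), the Python sketch below tests $\sum_j 1/a_j \ge N^2/A$ on random positive inputs; all names are illustrative.

```python
import random

# Check sum(1/a_j) >= N^2 / A on random positive inputs.
random.seed(0)
for _ in range(1000):
    N = random.randint(1, 20)
    a = [random.uniform(0.01, 100.0) for _ in range(N)]
    A = sum(a)
    assert sum(1.0 / x for x in a) >= N * N / A - 1e-9

# Equality holds exactly when all the a_j are equal.
a = [3.0] * 7
assert abs(sum(1.0 / x for x in a) - 49.0 / sum(a)) < 1e-12
```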
25,782
<p>Hello I'm having trouble showing the following:</p> <p>Let $u$ be a positive measure. If $\int_E f\, du= \int_E g\, du$ for all measurable $E$ then $f=g$ a.e.</p> <p>I was trying to argue by contradiction: if $f= g$ a.e. fails, then the set $E=\{x: f(x)\neq g(x)\}$ satisfies $u(E) \gt 0$. Then let $E^+=\{x: f(x)\gt g(x)\}$ and $E^-=\{x: f(x)\lt g(x)\}$. Now, if $E^+$ or $E^-$ is measurable and has positive measure then $\int_{E^+} f\, du \gt \int_{E^+} g\, du$ or $\int_{E^-} f\, du \lt \int_{E^-} g\, du$, a contradiction.</p> <p>As you can see, the argument hinges on $E^+$ or $E^-$ being measurable. This is the part I'm having trouble with.</p>
Willie Wong
1,543
<p><strong>hint</strong>:</p> <ol> <li>The difference of two measurable functions is measurable.</li> <li>$(0,\infty)\subset\mathbb{R}$ is Borel, so for a measurable function $F$, the set on which it takes positive values is a measurable set. </li> <li>The collection of measurable sets forms a $\sigma$-algebra; in particular, the intersection of two measurable sets is measurable. </li> </ol>
2,590,068
<p>$$\epsilon^\epsilon=?$$ Where $\epsilon^2=0$, $\epsilon\notin\mathbb R$. There is a formula for exponentiation of dual numbers, namely: $$(a+b\epsilon)^{c+d\epsilon}=a^c+\epsilon(bca^{c-1}+da^c\ln a)$$ However, this formula breaks down in multiple places for $\epsilon^\epsilon$, yielding many undefined expressions like $0^0$ and $\ln 0$. So, here's my question: what is $\epsilon^\epsilon$ equal to for <a href="https://en.wikipedia.org/wiki/Dual_number" rel="noreferrer">dual numbers</a>?</p>
zoli
203,663
<p><strong>To prove that $\epsilon^{\epsilon}$ does not exist</strong></p> <p>If $\epsilon^{\epsilon}$ has a value in the set of dual numbers and if $$\epsilon^{\epsilon}\not=\epsilon$$ then $$\epsilon^{\epsilon}-\epsilon=a+b\epsilon$$ and either $a$ or $b$ is different from zero. From here $$\epsilon\left[\epsilon^{\epsilon}-\epsilon\right]=\epsilon^2\left[\epsilon^{\epsilon-1}-1\right]=a\epsilon$$ so $a=0$ and $b\not=0$; then $$\epsilon^{\epsilon-1}-1=b\epsilon.$$</p> <p>Multiplying both sides by $\epsilon$ we get that $$\epsilon^{\epsilon}=\epsilon.$$ This is a contradiction if $\epsilon^{\epsilon}$ exists at all.</p> <hr> <p>On the other hand, if we assume again that $\epsilon^{\epsilon}$ exists in the set of dual numbers and if $$\epsilon^{\epsilon}=\epsilon$$ then multiplying both sides by $\epsilon$ we get that $$\epsilon\epsilon^{\epsilon}=0.$$ Since $\epsilon^{\epsilon}\not=0$ by the indirect hypothesis, $\epsilon$ has to be $0$. That is, $$\epsilon^{\epsilon}\not= \epsilon.$$ This is another contradiction, showing that $\epsilon^{\epsilon}$ cannot exist.</p>
2,033,485
<p>I have an equilateral triangle with each vertex a known distance of $N$ units from the center of the triangle.</p> <p>What formula would I need to use to determine the length of any side of the triangle?</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>Let $L$ be the length of the triangle's sides.</p> <p>Then, by the law of cosines applied to the two radii joining the center to adjacent vertices (central angle $\frac{2\pi}{3}$),</p> <p>$$L^2=N^2+N^2-2N\cdot N\cos\left(\frac{2\pi}{3}\right)=3N^2$$</p> <p>and</p> <p>$$L=2N\sin\left(\frac{\pi}{3}\right)=N\sqrt{3}$$</p>
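A quick coordinate check of this formula (an illustrative sketch, not part of the hint): place the three vertices on a circle of radius $N$ and measure one side.

```python
import math

# Place the vertices of an equilateral triangle on a circle of radius N
# (the distance from the center to each vertex) and measure one side.
N = 5.0
angles = [math.pi / 2 + 2 * math.pi * k / 3 for k in range(3)]
verts = [(N * math.cos(t), N * math.sin(t)) for t in angles]
(x1, y1), (x2, y2) = verts[0], verts[1]
side = math.hypot(x1 - x2, y1 - y2)

# Agrees with L = N * sqrt(3).
assert abs(side - N * math.sqrt(3)) < 1e-9
```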
2,033,485
<p>I have an equilateral triangle with each vertex a known distance of $N$ units from the center of the triangle.</p> <p>What formula would I need to use to determine the length of any side of the triangle?</p>
Community
-1
<p>If you label the triangle $ABC$ (from bottom left to right and top point $C$), take the center to be $M$, and let $Q$ be the midpoint of $AB$, then consider the triangle $AMQ$. It has angle $AQM = 90$ degrees and angle $MAQ = 30$ degrees, since $AM$ bisects the $60$-degree angle of the equilateral triangle at $A$. This means you have a $1:2:\sqrt{3}$ right triangle with hypotenuse $|AM|=N$. This should be enough to solve it.</p>
2,674,853
<blockquote> <p>Suppose $E \subset \mathbb R^d$ has measure $0$ and $f: \mathbb R^d \longrightarrow \mathbb R$ is measurable. Does $f (E)$ necessarily have measure $0$?</p> </blockquote> <p>I tried to find a counter-example, though I failed. It is clear that a countable subset will not work, for otherwise its image is again countable in $\mathbb R$ and hence has measure $0$. So, to find a counter-example, we need an uncountable set of measure $0$ that some measurable function maps onto a set of positive measure in $\mathbb R$. How do we find it?</p> <p>Please help me find this example; it would really help me.</p> <p>Thank you in advance.</p>
Angina Seng
436,618
<p>You can take the Cantor set in $\Bbb R$ (measure zero) and map it onto the closed interval $[0,1]$ by a continuous increasing function. So, even continuous functions on $\Bbb R$ don't preserve measure zero.</p>
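For the curious, here is a rough numerical sketch of such a map using the Cantor function evaluated digit by digit; `cantor` and `preimage_in_cantor_set` are hypothetical helper names, and the finite digit depth is a floating-point compromise, so this only illustrates the idea that every $y\in[0,1]$ is hit by a point of the Cantor set.

```python
def cantor(x, depth=48):
    """Evaluate the Cantor function at x in [0, 1] via ternary digits."""
    total, frac = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        d = min(int(x), 2)   # clamp handles x == 1.0 exactly
        x -= d
        if d == 1:           # x falls in a removed middle third
            total += frac
            break
        total += (d // 2) * frac
        frac /= 2
    return total

def preimage_in_cantor_set(y, depth=48):
    """A Cantor-set point mapping to y: ternary digits = 2 * binary digits of y."""
    x, p = 0.0, 1.0 / 3.0
    for _ in range(depth):
        y *= 2
        b = min(int(y), 1)
        y -= b
        x += 2 * b * p
        p /= 3
    return x

assert cantor(0.0) == 0.0
assert abs(cantor(1.0) - 1.0) < 1e-12
for target in [0.3, 0.5, 0.9]:
    assert abs(cantor(preimage_in_cantor_set(target)) - target) < 1e-6
```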
34,521
<p>I have been using the following result:</p> <p>Given a polynomial $f(x,t)$ of degree $n$ in $\mathbb{Q}[x,t]$, if a rational specialization of $t$ results in a separable polynomial $g(x)$ of the same degree, then the Galois group of $g$ over $\mathbb{Q}$ is a subgroup of that of $f$ over $\mathbb{Q}(t)$.</p> <p>However, I have been unable to prove this for myself, and cannot seem to find a proof of it anywhere. Is there an elementary proof? And if not, can anyone direct me to a source containing one, or at least explain the general principle?</p> <p>My need to understand the result arose from considering the following:</p> <p>If I specialize $t$ such that $g$ factorizes as $x^k.h(x)$, where $h$ is an irreducible polynomial of degree $n-k$, is it legitimate to surmise that the Galois group of $h$ over $\mathbb{Q}$ is a subgroup of the original?</p> <p>If nothing else, I'd be very grateful for an answer to this!</p>
damiano
4,344
<p>You can find the first statement, for instance, in van der Waerden, Modern Algebra I, Section 61. In particular, under the inclusion of the Galois group of the reduction in the Galois group of the original polynomial, the cycle structures match up.</p> <p>For the statement when you drop the separability assumption, the answer is no. Here is an example in which the cycle structures of the two groups are incompatible. While this does not completely settle your second question, it might convince you that the answer is "no".</p> <p>The polynomial $f=x^4+x^2+9$ is irreducible over $\mathbb{Q}$ and has discriminant equal to $420^2$. Since the discriminant is a square, the Galois group of $f$ is contained in the alternating group on four letters and therefore contains no transposition. The reduction of the polynomial modulo 3 is $x^2(x^2+1)$, whose separable part has Galois group generated by a transposition. Even though the Galois group of $f$ contains elements of order two, the cycle structures do not match up.</p>
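The numerical facts in this example can be double-checked in Python; the sketch below (illustrative only) finds the four roots of $x^4+x^2+9$ via the substitution $u=x^2$, evaluates the discriminant as the product of squared root differences, and reduces the coefficients mod 3.

```python
import cmath
from itertools import combinations

# f(x) = x^4 + x^2 + 9: substitute u = x^2 and solve u^2 + u + 9 = 0,
# then take both square roots of each u-root to get the four roots of f.
root_disc = cmath.sqrt(complex(1 - 4 * 9))      # = i * sqrt(35)
u_roots = [(-1 + root_disc) / 2, (-1 - root_disc) / 2]
roots = []
for u in u_roots:
    r = cmath.sqrt(u)
    roots += [r, -r]

# Discriminant of a monic quartic = product of (r_i - r_j)^2 over i < j.
disc = 1 + 0j
for r, s in combinations(roots, 2):
    disc *= (r - s) ** 2

assert abs(disc.real - 420 ** 2) < 1e-6 * 420 ** 2
assert abs(disc.imag) < 1e-4

# Coefficients [1, 0, 1, 0, 9] of f reduce mod 3 to [1, 0, 1, 0, 0],
# i.e. f = x^2 * (x^2 + 1) modulo 3.
assert [c % 3 for c in [1, 0, 1, 0, 9]] == [1, 0, 1, 0, 0]
```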
34,521
<p>I have been using the following result:</p> <p>Given a polynomial $f(x,t)$ of degree $n$ in $\mathbb{Q}[x,t]$, if a rational specialization of $t$ results in a separable polynomial $g(x)$ of the same degree, then the Galois group of $g$ over $\mathbb{Q}$ is a subgroup of that of $f$ over $\mathbb{Q}(t)$.</p> <p>However, I have been unable to prove this for myself, and cannot seem to find a proof of it anywhere. Is there an elementary proof? And if not, can anyone direct me to a source containing one, or at least explain the general principle?</p> <p>My need to understand the result arose from considering the following:</p> <p>If I specialize $t$ such that $g$ factorizes as $x^k.h(x)$, where $h$ is an irreducible polynomial of degree $n-k$, is it legitimate to surmise that the Galois group of $h$ over $\mathbb{Q}$ is a subgroup of the original?</p> <p>If nothing else, I'd be very grateful for an answer to this!</p>
KConrad
3,272
<p>Here is a broader setup for your question. Let $A$ be a Dedekind domain with fraction field $F$, $E/F$ be a finite Galois extension, and $B$ be the integral closure of $A$ in $E$. Pick a prime $\mathfrak p$ in $A$ and a prime $\mathfrak P$ in $B$ lying over $\mathfrak p$. The decomposition group $D(\mathfrak P|\mathfrak p)$ naturally maps by reduction mod $\mathfrak P$ to the automorphism group $\text{Aut}((B/\mathfrak P)/(A/\mathfrak p))$ and Frobenius showed this is surjective. The kernel is the inertia group, so if $\mathfrak p$ is unramified in $B$ then we get an isomorphism from $D(\mathfrak P|\mathfrak p)$ to $\text{Aut}((B/\mathfrak P)/(A/\mathfrak p))$, whose inverse is an embedding of the automorphism group of the residue field extension into $\text{Gal}(E/F)$.</p> <p>If we take $A = {\mathbf Z}$ then we're in the number field situation and this is where Frobenius elements in Galois groups come from. </p> <p>In your case you want to take $A = {\mathbf Q}[t]$, so $F = {\mathbf Q}(t)$. You did not give any assumptions about $f(x,t)$ as a polynomial in ${\mathbf Q}[x,t]$. (Stylistic quibble: I think it is better to write the polynomial as $f(t,x)$, specializing the first variable, but I'll use your notation.) Let's assume $f(x,t)$ is absolutely irreducible, so the ring $A' = {\mathbf Q}[x,t]/(f)$ is integrally closed. [EDIT: I should have included the assumption that $f$ is smooth, as otherwise $A'$ will <em>not</em> be integrally closed, but this "global" int. closed business is actually not so important. See comments below.] Write $F'$ for the fraction field of $A'$. After a linear change of variables we can assume $f(x,t)$ has a constant coefficient for the highest power of $x$, so $A'$ is the integral closure of $A$ in $F'$. </p> <p>Saying for some rational $t_0$ that the specialization $g(x) = f(x,t_0)$ is separable in ${\mathbf Q}[x] = (A/(t-t_0))[x]$ implies the prime $(t-t_0)$ is unramified in $A'$. 
Let $E$ be the Galois closure of $F'/F$ and $B$ be the integral closure of $A$ in $E$. A prime ideal that is unramified in a finite extension is unramified in the Galois closure, so $(t-t_0)$ is unramified in $B$. For any prime $\mathfrak P$ in $B$ that lies over $(t-t_0)$, the residue field $B/\mathfrak P$ is a finite extension of $A/(t-t_0) = \mathbf Q$ and since $E/F$ is Galois the field $B/\mathfrak P$ is normal over $A/(t-t_0)$. These residue fields have characteristic 0, so they're separable: $B/\mathfrak P$ is a finite Galois extension of $\mathbf Q$. I leave it to you to check that $B/\mathfrak P$ is the Galois closure of $g(x) = f(x,t_0)$ over $\mathbf Q$. Then the isomorphism of $D(\mathfrak P|(t-t_0))$ with $\text{Aut}((B/\mathfrak P)/\mathbf Q) = \text{Gal}((B/\mathfrak P)/\mathbf Q)$ provides (by looking at the inverse map) an embedding of the Galois group of $g$ over $\mathbf Q$ into the Galois group of $f(x,t)$ over $F = {\mathbf Q}(t)$.</p> <p>I agree with Damiano that there are problems when the specialization is not separable. In that case what happens is that the Galois group of the residue field extension is identified not with the decomposition group (a subgroup of the Galois group of $E/F$) but with the quotient group $D/I$ where $I = I(\mathfrak P|\mathfrak p)$ is the inertia group, and you don't generally expect a proper quotient group of a subgroup to naturally embed into the original group. </p>
3,582,585
<p>Consider the experiment of throwing a die, if a multiple of 3 comes up, throw the die again and if any other number comes, toss a coin. Find the conditional probability of the event <strong>‘the coin shows a tail’</strong>, given that <em>‘at least one throw of die shows a 3’</em>.</p> <p>I don't know how to deal with this kind of problem. Moreover, there are three answers I am encountering. The first answer is zero, which I intuitively feel wrong, the second answer is <span class="math-container">$\frac{1}{10}$</span> and third is <span class="math-container">$\frac{1}{2}$</span> </p>
lab bhattacharjee
33,337
<p>Let <span class="math-container">$$3^{15a}=5^{5b}=15^{3c}=k$$</span></p> <p>What if <span class="math-container">$k=1$</span></p> <p>Else</p> <p><span class="math-container">$3=k^{1/15a}$</span> etc.</p> <p>As <span class="math-container">$15=3\cdot5$</span></p> <p><span class="math-container">$k^{1/3c}=k^{1/5b}\cdot k^{1/15a}=k^{1/5b+1/15a}$</span></p> <p><span class="math-container">$\implies\dfrac1{3c}=\dfrac1{5b}+\dfrac1{15a}=\dfrac{3a+b}{15ab}$</span></p>
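A quick numerical check of the derived relation, for one arbitrary $k>1$ (an illustrative sketch, not part of the answer):

```python
import math

# Solve 3^(15a) = 5^(5b) = 15^(3c) = k for a, b, c (taking k > 1)
# and check the derived relation 1/(3c) = 1/(5b) + 1/(15a).
k = 7.5
a = math.log(k) / (15 * math.log(3))
b = math.log(k) / (5 * math.log(5))
c = math.log(k) / (3 * math.log(15))

assert abs(3 ** (15 * a) - k) < 1e-9          # a really solves 3^(15a) = k
assert abs(1 / (3 * c) - (1 / (5 * b) + 1 / (15 * a))) < 1e-12
```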
3,582,585
<p>Consider the experiment of throwing a die, if a multiple of 3 comes up, throw the die again and if any other number comes, toss a coin. Find the conditional probability of the event <strong>‘the coin shows a tail’</strong>, given that <em>‘at least one throw of die shows a 3’</em>.</p> <p>I don't know how to deal with this kind of problem. Moreover, there are three answers I am encountering. The first answer is zero, which I intuitively feel wrong, the second answer is <span class="math-container">$\frac{1}{10}$</span> and third is <span class="math-container">$\frac{1}{2}$</span> </p>
Z Ahmed
671,540
<p>Take <span class="math-container">$$3^{15a}=5^{5b}=15^{3c}=K \implies a=\frac{\log K}{15\log 3}, b=\frac{\log K}{5\log 5}, c=\frac{\log K}{3 \log 15} $$</span> Then <span class="math-container">$$ F=5 ab-bc-3ac=abc\left(\frac5c-\frac1a-\frac3b\right)=abc\left(\frac {15 \log 15}{\log K}-\frac{15 \log 3}{\log K}-\frac{15 \log 5}{\log K}\right)$$</span> <span class="math-container">$$=abc\,\frac{15 \log\frac{15}{3\cdot 5}}{\log K}=abc\,\frac{15\log 1}{\log K}=0.$$</span></p>
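The vanishing of $F$ can be confirmed numerically for several values of $K$ (an illustrative sketch, not part of the derivation):

```python
import math

# For several K, compute a, b, c from 3^(15a) = 5^(5b) = 15^(3c) = K
# and confirm F = 5ab - bc - 3ac vanishes.
results = []
for K in [2.0, 10.0, 1234.5]:
    a = math.log(K) / (15 * math.log(3))
    b = math.log(K) / (5 * math.log(5))
    c = math.log(K) / (3 * math.log(15))
    results.append(5 * a * b - b * c - 3 * a * c)

assert all(abs(F) < 1e-12 for F in results)
```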
4,577,266
<blockquote> <p>Let <span class="math-container">$X_n$</span> be an infinite arithmetic sequence with positive integer terms whose first term is divisible by the common difference. Suppose the term <span class="math-container">$x_i$</span> has exactly <span class="math-container">$m&gt;1$</span> distinct prime factors, for some <span class="math-container">$i\in\Bbb N$</span>.</p> <p><strong>Prove that infinitely many terms have exactly <span class="math-container">$m$</span> distinct prime factors.</strong></p> </blockquote> <p>Some of my thoughts:</p> <p>Let <span class="math-container">$x_1$</span> have <span class="math-container">$n$</span> distinct prime factors and <span class="math-container">$x_2=x_1+d$</span>, where <span class="math-container">$d\mid x_1\implies x_1=dk$</span>. Hence <span class="math-container">$x_2=x_1+d=d(1+k)$</span>. But we cannot say anything about the prime factors of <span class="math-container">$x_2$</span>, so we do not know how many prime factors <span class="math-container">$x_2$</span> has; likewise for <span class="math-container">$x_3$</span>, and so on.</p> <p>I tried to build a proof by induction, but I was not successful.</p>
templatetypedef
8,955
<p>It might be easier to think of things this way. Start at <span class="math-container">$w$</span> and follow the edge from <span class="math-container">$w$</span> to <span class="math-container">$x$</span>. Now walk down path <span class="math-container">$P$</span> toward <span class="math-container">$y$</span>. Since <span class="math-container">$w$</span> is contained in path <span class="math-container">$P$</span>, as we walk down <span class="math-container">$P$</span> we’ll eventually bump into <span class="math-container">$w$</span>. Overall this means that we started at <span class="math-container">$w$</span>, walked down a series of nodes, and ended back up at <span class="math-container">$w$</span> without repeating any other nodes. That’s a cycle, which is where the contradiction comes from.</p> <p>The proof uses a similar argument but instead starts at <span class="math-container">$x$</span> and walks towards <span class="math-container">$y$</span> down <span class="math-container">$P$</span>. At some point on the path you will encounter <span class="math-container">$w$</span> (because <span class="math-container">$w$</span> appears somewhere on <span class="math-container">$P$</span>), at which point we follow the edge from <span class="math-container">$w$</span> back to <span class="math-container">$x$</span> and close the loop. This is the same cycle as above, just expressed differently.</p>
213,639
<blockquote> <p>Where is axiom of regularity actually used? Why is it important? Are there some proofs, which are substantially simpler thanks to this axiom?</p> </blockquote> <p>This question was to some extent provoked by <a href="https://math.stackexchange.com/users/3515/dan-christensen">Dan Christensen</a>'s <a href="https://math.stackexchange.com/questions/204865/axiom-of-regularity#comment467198_204866">comment</a>: <em>Would regularity ever be used in a formal development of, say, number theory or real analysis? I can't imagine it.</em></p> <p>I have to admit that I do not know other use of this axiom than the proof that every set has <a href="http://en.wikipedia.org/wiki/Von_Neumann_universe" rel="noreferrer">rank</a> in cumulative hierarchy, and a few easy consequences of this axiom, which are mentioned in <a href="http://en.wikipedia.org/wiki/Axiom_of_regularity#Elementary_implications_of_regularity" rel="noreferrer">Wikipedia article</a>.</p> <p>I remember seeing an introductory book in axiomatic set theory, which did not even mention this axiom. (And that book went through plenty of stuff, such as introducing ordinals, transfinite induction, construction of natural numbers.)</p> <p>Wikipedia article on <a href="http://en.wikipedia.org/wiki/Non-well-founded_set_theory" rel="noreferrer">Non-well-founded set theory</a> links to Metamath page for <a href="http://us.metamath.org/mpegif/axreg.html" rel="noreferrer">Axiom of regularity</a> and says: <em>Scroll to the bottom to see how few Metamath theorems invoke this axiom.</em></p> <p>Based on the above, it seems that quite a lot of stuff can be done without this axiom.</p> <p>Of course, it's quite possible that this axiom becomes important in some areas of set theory which are not familiar to me, such as forcing or working without Axiom of Choice. (It might be difficult to define cardinality without AC and regularity, as mentioned <a href="https://math.stackexchange.com/a/53771/">here</a>.) 
But even if this axiom is important only for some advanced stuff - which I will probably never get to - I'd be glad to know that.</p>
Community
-1
<p>Here's an (IMO) interesting example. Surely you're familiar with the set-theoretic interpretation of ordered pairs given by $(x,y) = \{ \{ x \}, \{ x, y \} \}$. You may wonder, why not use $(x,y) = \{ x, \{ x, y\}\}$?</p> <p>We can... <em>if</em> we assume some amount of regularity. If there exists a set $S$ satisfying $S = \{ \{S, T\}, U \}$ with $T \neq U$, then we would have</p> <p>$$ (S, T) = \{ S, \{ S, T \} \} = \{ \{ \{ S, T \}, U \}, \{ S, T \} \} = ( \{ S, T \}, U ) $$</p> <p>which contradicts the property of ordered pairs $(S, T) = (X, Y) \implies S = X \wedge T = Y$, however regularity forbids the existence of such a set $S$.</p>
3,742,189
<p>In a game I play there's a 1/1000 chance of getting a good item on each run. After 3200 runs I got only 1.</p> <p>How can I calculate how likely that is? I remember there are graphs with 1-sigma and 2-sigma vertical lines from which you can tell what to expect with 90% or 95% confidence. Sorry if this has been asked before, but I don't remember the name of such a graph!</p> <p>Thanks in advance.</p>
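For what it's worth, the probability in the question can be computed exactly from the binomial distribution; the snippet below is a sketch, assuming "at most one good item in 3200 runs" is the event of interest, and shows the chance is roughly 17%, so unlucky but not extraordinary.

```python
from math import comb

# Exact binomial model: n = 3200 runs, success chance p = 1/1000 per run.
n, p = 3200, 1 / 1000

def binom_pmf(k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

expected = n * p                           # 3.2 good items on average
p_at_most_one = binom_pmf(0) + binom_pmf(1)

assert abs(expected - 3.2) < 1e-12
assert 0.15 < p_at_most_one < 0.20         # roughly a 17% chance
```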
Siong Thye Goh
306,553
<p>From <span class="math-container">$5+3x &lt; 14$</span> and <span class="math-container">$-x&lt;1$</span>, by adding them, you have concluded that those inequalities imply <span class="math-container">$x&lt;5$</span>, and <span class="math-container">$x&lt;5$</span> is indeed necessarily true. But from <span class="math-container">$x&lt;5$</span> alone, we can't conclude that <span class="math-container">$5+3x &lt; 14$</span> and <span class="math-container">$-x&lt;1$</span> both hold.</p> <p>Instead, you should simplify the first inequality as much as possible to get an upper bound for <span class="math-container">$x$</span> and use the second inequality to get a lower bound for <span class="math-container">$x$</span>.</p>
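To make the distinction concrete, here is a tiny Python check (illustrative names only) showing that $x=4$ satisfies the added-up consequence $x<5$ but not the original system, whose actual solution set is $-1<x<3$.

```python
# The system 5 + 3x < 14 and -x < 1 simplifies to -1 < x < 3.
# Adding the two inequalities only yields the weaker consequence x < 5.
def system_holds(x):
    return 5 + 3 * x < 14 and -x < 1

assert system_holds(0) and system_holds(2.9)
assert not system_holds(-1) and not system_holds(3)
# x = 4 satisfies the sum's conclusion x < 5 but not the original system:
assert 4 < 5 and not system_holds(4)
```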
3,726,772
<p>Let <span class="math-container">$V$</span> be a finite-dimensional vector space and let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be linear operators on <span class="math-container">$V$</span> satisfying the commutation relation <span class="math-container">$AB=BA$</span>.</p> <p>If we denote the degree of <span class="math-container">$A$</span>'s minimal polynomial by <span class="math-container">$\deg(A)$</span>, how can I prove the inequality <span class="math-container">$\deg(A+B)\leq \deg(A)\deg(B)$</span>?</p> <p>I grasp the idea that, in a polynomial expression in <span class="math-container">$A+B$</span>, I can expand the <span class="math-container">$(A+B)^k$</span> terms and freely exchange the order of multiplication of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, but I can't proceed further.</p>
Ben Grossmann
81,360
<p>Note that <span class="math-container">$\deg(A)$</span> is the dimension of the subspace consisting of all polynomials in <span class="math-container">$A$</span>.</p> <p>Let <span class="math-container">$m = \deg(A), n = \deg(B)$</span>. Every polynomial <span class="math-container">$p(A)$</span> can be written as a linear combination of the powers <span class="math-container">$I,A,\dots,A^{m-1}$</span> of <span class="math-container">$A$</span>, and every polynomial <span class="math-container">$p(B)$</span> can be written as a linear combination of the powers <span class="math-container">$I,B,\dots,B^{n-1}$</span> of <span class="math-container">$B$</span>. Because <span class="math-container">$A$</span> and <span class="math-container">$B$</span> commute, we conclude that for any bivariate polynomial <span class="math-container">$p(x,y)$</span>, <span class="math-container">$p(A,B)$</span> can be written as a linear combination of the elements of <span class="math-container">$S = \{A^jB^k : 0 \leq j \leq m-1, \ 0 \leq k \leq n-1\}$</span>. Note that <span class="math-container">$S$</span> contains <span class="math-container">$\deg(A)\deg(B)$</span> elements, so its span is at most <span class="math-container">$\deg(A) \deg(B)$</span> dimensional.</p> <p>Because <span class="math-container">$ \operatorname{span}\{I,(A+B),(A+B)^2,\dots\} \subset \operatorname{span}(S), $</span> we can conclude that <span class="math-container">$$ \deg(A + B) = \dim \operatorname{span}\{I,(A+B),(A+B)^2,\dots\} \leq \dim \operatorname{span}(S) \leq \deg(A)\deg(B).$$</span></p> <hr /> <p>For another perspective, we could note that the map from <span class="math-container">$\Bbb F[x,y]$</span> to the algebra of operators on <span class="math-container">$V$</span> defined by <span class="math-container">$p\mapsto p(A,B)$</span> is an <span class="math-container">$\Bbb F$</span>-algebra homomorphism.</p>
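As a numerical illustration of the inequality (not a proof), the sketch below computes minimal-polynomial degrees by exact linear algebra over the rationals; `min_poly_degree`, `mat_mul`, and the sample matrices are ad hoc helpers, not a standard API.

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_poly_degree(M):
    """Degree of the minimal polynomial: the first power M^d lying in the
    span of I, M, ..., M^(d-1), found by exact Gaussian elimination."""
    n = len(M)
    basis = []                      # (pivot index, reduced vector) pairs
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # I
    d = 0
    while True:
        v = [P[i][j] for i in range(n) for j in range(n)]  # flatten P
        for pivot, row in basis:    # reduce v against the recorded basis
            if v[pivot]:
                c = v[pivot] / row[pivot]
                v = [x - c * y for x, y in zip(v, row)]
        piv = next((i for i, x in enumerate(v) if x), None)
        if piv is None:             # P = M^d depends on earlier powers
            return d
        basis.append((piv, v))
        P = mat_mul(P, M)
        d += 1

F = Fraction
A = [[F(0), F(1)], [F(0), F(0)]]    # nilpotent, minimal polynomial x^2
B = [[F(2), F(0)], [F(0), F(2)]]    # scalar, minimal polynomial x - 2
assert mat_mul(A, B) == mat_mul(B, A)   # A and B commute
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert min_poly_degree(S) <= min_poly_degree(A) * min_poly_degree(B)
```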
844,832
<p>How do I find the derivative of this function? $$ 7\sinh(\ln t)$$</p> <p>I don't know where to start, so I looked it up in Wolfram Alpha, which gave $$ 7\left(\frac{t^2-1}{2t}\right). $$ I did not get that. How did they jump from $$ 7\sinh(\ln t) $$ to this step? Is there an identity that I am missing?</p>
amWhy
9,003
<p>As Jean-Claude has shown you:</p> <p>$$\sinh (\ln t)=\frac{e^{\ln t}-e^{-\ln t}}{2}=\frac{t-\frac{1}{t}}{2}=\frac{t^2-1}{2t}$$</p> <p>So $$f(t) = 7\sinh (\ln t)=7\cdot\frac{e^{\ln t}-e^{-\ln t}}{2}=7\cdot\frac{t-\frac{1}{t}}{2}=\frac{7(t^2-1)}{2t}$$</p> <p>So Wolfram has not returned the value of the <em>derivative</em> of $f(t)$; it gave you an alternative representation of $f(t)$. Now, using the quotient rule, you can find $$f'(t) = \left(\frac{7(t^2-1)}{2t}\right)' = \frac 72 \cdot \dfrac{(t^2 - 1)'\cdot t - (t^2 - 1)\cdot (t)'}{t^2}$$ $$ = \frac 72 \cdot\frac {2t\cdot t - (t^2 - 1)\cdot 1}{t^2}$$ $$ = \frac 72\cdot \frac{2t^2 - t^2 + 1}{t^2} $$ $$= \frac 72\cdot \frac{t^2 + 1}{t^2} = \frac 72\left(1+ \frac 1{t^2}\right)$$</p>
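As a sanity check (not part of the answer above), one can compare the closed-form derivative against a central finite difference; the helper names below are illustrative.

```python
import math

def f(t):
    return 7 * math.sinh(math.log(t))

def fprime(t):
    # the closed form derived above: (7/2) * (1 + 1/t^2)
    return 3.5 * (1 + 1 / t ** 2)

# Compare against a central finite difference at a few points t > 0.
h = 1e-6
for t in [0.5, 1.0, 2.0, 10.0]:
    numeric = (f(t + h) - f(t - h)) / (2 * h)
    assert abs(numeric - fprime(t)) < 1e-5 * max(1.0, abs(fprime(t)))
```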
3,269,112
<p>Theorem:</p> <p><span class="math-container">$ x \lt y + \epsilon$</span> for all <span class="math-container">$\epsilon \gt 0$</span> if and only if <span class="math-container">$x \leq y$</span></p> <p>Suppose to the contrary that <span class="math-container">$x \lt y + \epsilon$</span> for all <span class="math-container">$\epsilon \gt 0$</span> but <span class="math-container">$x \gt y$</span>.</p> <p>Set <span class="math-container">$\epsilon_0 = x - y \gt 0$</span>.</p> <p>Notice that <span class="math-container">$y + \epsilon_0 = x$</span>; hence, by the trichotomy property, this contradicts the hypothesis for <span class="math-container">$\epsilon = \epsilon_0$</span>. Thus <span class="math-container">$x \leq y$</span>.</p> <p>I would like some clarification. I am wondering why this is proven by contradiction, and what <span class="math-container">$ \epsilon_0$</span> is. I know that it stands for the initial value of <span class="math-container">$ \epsilon$</span>; it appears that the initial value is set to <span class="math-container">$0$</span>. How does <span class="math-container">$y + \epsilon_0 = x$</span>? I am guessing that <span class="math-container">$y + \epsilon_0$</span> cannot be greater than <span class="math-container">$x$</span>, because that would be like saying <span class="math-container">$x$</span> can be greater than itself, which is false.</p>
Jose Brox
146,587
<p>We have that if <span class="math-container">$G/Z(G)$</span> is cyclic then <span class="math-container">$G$</span> is abelian. But since <span class="math-container">$G$</span> abelian means <span class="math-container">$G=Z(G)$</span>, this forces <span class="math-container">$|G/Z(G)|=1$</span>. Now, if <span class="math-container">$|G/Z(G)|$</span> were a prime number then <span class="math-container">$G/Z(G)$</span> would be cyclic and then <span class="math-container">$|G/Z(G)|$</span> would be 1, which is impossible. Hence we can never have <span class="math-container">$|G/Z(G)|$</span> prime. We have proved that either <span class="math-container">$G$</span> is abelian or <span class="math-container">$|G/Z(G)|$</span> has at least two prime factors (which may be equal).</p>
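This can be sanity-checked on a concrete nonabelian group; the sketch below (illustrative, modeling $S_3$ as permutation tuples) finds a trivial center, so $|G/Z(G)| = 6$, which indeed is not prime.

```python
from itertools import permutations

# Model S_3 as permutation tuples of (0, 1, 2) under composition.
elems = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

center = [z for z in elems
          if all(compose(z, g) == compose(g, z) for g in elems)]
quotient_order = len(elems) // len(center)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

assert len(center) == 1          # S_3 has trivial center
assert quotient_order == 6       # |G/Z(G)| = 6 = 2 * 3
assert not is_prime(quotient_order)
```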
208,008
<p>Let $k$ and $n$ be two fixed integers. Let $C$ denote the circle with radius $4n$ (in the plane $\mathbb{R}^2$). Suppose $\{C_1,C_2\}$ is a set of two arbitrary tangent circles with radius $2n$ inside $C$. Also, let $\{C_{11},C_{12}\}$ and $\{C_{21},C_{22}\}$ be sets of two arbitrary tangent circles with radius $n$ inside $C_1$ and $C_2$, respectively. Is there a finite set of $k$ points in the circle $C$ such that each of $C_1$ and $C_2$ contains an odd number of the points and each of $C_{11}$, $C_{12}$, $C_{21}$ and $C_{22}$ contains an even number of the points?</p> <p>Below I drew a snapshot of the problem at a fixed time, since the circles can actually revolve inside the original circles while remaining tangent at all times:</p> <p><img src="https://i.stack.imgur.com/7z52i.jpg" alt="enter image description here"></p> <p>Motivation: One of my friends is working on the effects of critical points in a bounded region of the plane. He is an engineer and does not like abstract mathematics. Based on his problem, I identified some special points in the bounded region and assigned properties to them. We then encoded these points and their properties in the computer and computed some parameters of the phenomenon on that region using simple line integrals and other mathematical tools. I helped him with all of this procedure, and in my private time I abstracted the problem to the version you can see here.</p> <p>In my opinion, for arbitrary $k$ the answer is no, and if these points exist, I think they are placed symmetrically in the plane.</p> <p>I think the following more general claim is true:</p> <p>Let $S$ be a set of points distributed in the circle $C$ in such a way that any two tangent circles $C_1$ and $C_2$ in $C$ contain an even number of points of $S$. Then $S$ contains an even number of points.</p> <p>I appreciate any answer or helpful comment. </p>
Joseph O'Rourke
6,094
<p>This is <em>not</em> an answer, just an animation illustrating the challenge. The blue circle $C$ contains $9$ points, and the red circles $C_1$ and $C_2$ each contain $2$ points, except at four discrete times (times corresponding to rotations that are multiples of $\pi/4$) when they contain just $1$ each (e.g., at the start/end). <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <img src="https://i.stack.imgur.com/P1WN8.gif" alt="a.gif"> <br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <sup> (Reload the page to repeat the animation.) </sup></p> <hr />
3,265,403
<p>While trying to compute the line integral of a function along a path K, I need to parametrize my path K in terms of a single variable, say <span class="math-container">$t$</span>. My path is defined by the following set: <span class="math-container">$$K=\{(x,y)\in(0,\infty)\times[-42,42]\mid x^2-y^2=1600\}$$</span> I know how to calculate the line integral; that is not my issue. My problem is to parametrize <span class="math-container">$x^2-y^2=1600$</span>. I tried using the identities: <span class="math-container">$$\sin^2(t)+\cos^2(t)=1$$</span> <span class="math-container">$$\sec^2(t)-\tan^2(t)=1$$</span> But I did not get anywhere with my parametrization (see below for my attempt). I would welcome any help or hints, and if you happen to know some good reading to learn more about parametrization, I am also interested. <span class="math-container">$$r(t)=1600\sec^2(t)-1600\tan^2(t)=1600$$</span> for <span class="math-container">$$x=40\sec(t) \land y=40\tan(t)$$</span></p>
A.G.
115,996
<p>Notice that <span class="math-container">$x^2-y^2=(x+y)(x-y)=1600$</span>, therefore you are dealing with a hyperbola with asymptotes <span class="math-container">$x+y=0$</span> and <span class="math-container">$x-y=0$</span> as shown here:</p> <p><a href="https://i.stack.imgur.com/B5Ebv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B5Ebv.png" alt="enter image description here"></a></p> <p>It is natural to try <span class="math-container">$x+y=t$</span>, therefore <span class="math-container">$$ x-y= {1600\over t}. $$</span> That gives <span class="math-container">$$ x = {t+ {1600\over t} \over 2} \quad y= {t- {1600\over t} \over 2}. $$</span> As far as bounds go, <span class="math-container">$-42\leq y \leq 42$</span> therefore <span class="math-container">$$ -84 \leq t- {1600\over t} \leq 84 \rightarrow 16\leq t \leq 100. $$</span></p>
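The parametrization and its bounds can be verified numerically; the following sketch (illustrative helper names) checks the defining equation and the endpoint values of $y$.

```python
# The parametrization x = (t + 1600/t)/2, y = (t - 1600/t)/2 for 16 <= t <= 100.
def point(t):
    return ((t + 1600 / t) / 2, (t - 1600 / t) / 2)

for t in [16, 40, 58, 100]:
    x, y = point(t)
    assert abs(x * x - y * y - 1600) < 1e-9   # point lies on the hyperbola
    assert x > 0 and -42 <= y <= 42           # and inside the stated window

# The endpoints of the t-interval hit y = -42 and y = 42 exactly.
assert abs(point(16)[1] + 42) < 1e-12
assert abs(point(100)[1] - 42) < 1e-12
```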
1,711,653
<p>Let's define:</p> <p>$f(t) = A_1 \cos(\omega_1t) + A_2 \cos(\omega_2t) $</p> <p>I am interested in finding an expression for the peak of this function. It is not true in general that this peak will have the value:</p> <p>$\max f(t) = \sqrt{A_1^2 + A_2^2 + 2A_1A_2}$</p> <p>To find the value of $\max f$, I did the following manipulations:</p> <p>$\omega_2 = \omega_1 + \Delta \omega$</p> <p>so I can express the second cosine in terms of a single radian frequency:</p> <p>$f(t) = A_1 \cos(\omega_1t) + A_2 \cos (\omega_1 t + \Delta \omega t)$</p> <p>and after a little algebra:</p> <p>$f(t) = [A_1 + A_2 \cos(\Delta \omega t)]\cos(\omega_1t) - A_2 \sin(\Delta \omega t) \sin(\omega_1t )$</p> <p>I can then transform the sum of two isochronic $\sin$ and $\cos$ into a single $\cos$ with a certain amplitude and phase:</p> <p>$f(t) = \sqrt{A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta \omega t)} \; \cos \left[\omega_1t + \tan^{-1} \left( \frac{A_2 \sin(\Delta \omega t)}{A_1 + A_2 \cos (\Delta \omega t)} \right) \right]$</p> <p>But that's about as far as I can take it: since a trig function of $\Delta \omega t$ is present both in the amplitude and in the phase of the cosine, I am not sure how to proceed. In particular, it is not guaranteed that the maximum of the function equals the maximum of the amplitude factor.</p> <p>I find that the <em>straight</em> approach of taking the derivative of $f(t)$ and finding its zeros would probably be too cumbersome, so I would ask more experienced people what they think the best way to proceed would be.</p>
Jean Marie
305,862
<p>If the ratio $r=\dfrac{\omega_1}{\omega_2}$ is not rational, the function $f$ can reach values arbitrarily close to $|A_1|+|A_2|$.</p> <p>A graphical example is given below for the case $f(t)=2 \cos(1.5t)-3 \cos(\sqrt{2}t)$, with values arbitrarily close to $|2|+|-3|=5$.</p> <p>A sketch of proof: as remarked by Yves Daoust, the case $A_1&gt;0$ and $A_2&gt;0$ is evident. Let us consider, for the sake of definiteness, the case $A_1&gt;0$ and $A_2&lt;0$. </p> <p>It suffices to find a value of $t$ that is simultaneously:</p> <ul> <li><p>very close to $\dfrac{2k\pi}{\omega_1}$, so that the value of the first cosine is very close to $1$, and</p></li> <li><p>very close to $\dfrac{(2\ell+1)\pi}{\omega_2}$, so that the second cosine is very close to $-1$,</p></li> </ul> <p>with $k,\ell \in \mathbb{Z}$. These approximate identities:</p> <p>$$t \approx \dfrac{2k\pi}{\omega_1} \approx \dfrac{(2\ell+1)\pi}{\omega_2} \ \ \ (1)$$</p> <p>can be reinterpreted by saying that it suffices to find integers $k$ and $\ell$ such that:</p> <p>$$\dfrac{\omega_1}{\omega_2}\approx\dfrac{2k}{2\ell+1},$$</p> <p>otherwise said, to find a "very good" rational approximation of $r=\dfrac{\omega_1}{\omega_2}$ of the form $\dfrac{2k}{2\ell+1}$. This is a consequence of the density in the reals of the rationals of this form.</p> <p><a href="https://i.stack.imgur.com/1kL7P.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1kL7P.jpg" alt="enter image description here"></a></p>
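Not part of the original answer, but the graphical example can be reproduced numerically. A Python sketch scanning the example function $f(t)=2\cos(1.5t)-3\cos(\sqrt2\,t)$ on a modest window; already near $t\approx 12\pi$ the value creeps above $4.9$, illustrating the approach to $|2|+|-3|=5$:

```python
import math

# Scan f(t) = 2*cos(1.5 t) - 3*cos(sqrt(2) t) on a fine grid; since 1.5/sqrt(2)
# is irrational, values creep arbitrarily close to |2| + |-3| = 5.
def f(t):
    return 2 * math.cos(1.5 * t) - 3 * math.cos(math.sqrt(2) * t)

best = max(f(k * 0.001) for k in range(50_000))  # t in [0, 50)
print(best)   # already close to 5 on this short window (near t = 12*pi)
```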
3,245,270
<p>From Statistical Inference by Casella and Berger:</p> <blockquote> <p>Let <span class="math-container">$X_1, \dots X_n$</span> be a random sample from a discrete distribution with <span class="math-container">$f_X(x_i) = p_i$</span>, where <span class="math-container">$x_1 \lt x_2 \lt \dots$</span> are the possible values of <span class="math-container">$X$</span> in ascending order. Let <span class="math-container">$X_{(1)}, \dots, X_{(n)}$</span> denote the order statistics from the sample. Define <span class="math-container">$Y_i$</span> as the number of <span class="math-container">$X_j$</span> that are less than or equal to <span class="math-container">$x_i$</span>. Let <span class="math-container">$P_0 = 0, P_1 = p_1, \dots, P_i = p_1 + p_2 + \dots + p_i$</span>.</p> </blockquote> <p>If <span class="math-container">$\{X_j \le x_i\}$</span> is a "success" and <span class="math-container">$\{X_j \gt x_i\}$</span> is a "failure", then <span class="math-container">$Y_i$</span> is binomial with parameters <span class="math-container">$(n, P_i)$</span>.</p> <p><strong>Then the event <span class="math-container">$\{X_{(j)} \le x_i\}$</span> is equivalent to the event <span class="math-container">$\{Y_i \ge j\}$</span></strong></p> <p>Can someone explain why these two are equivalent?</p> <p><span class="math-container">$\{X_{(j)} \le x_i\} = \{s \in \text{dom}(X_{(j)}) : X_{(j)}(s) \le x_i\}$</span></p> <p><span class="math-container">$\{Y_i \ge j\} = \{s' \in \text{dom}(Y_i) : Y_i(s') \ge j\}$</span></p> <p>I'm having trouble understanding how these random variable functions show this equivalence.</p>
jgon
90,543
<p>The <a href="https://en.wikipedia.org/wiki/Sierpi%C5%84ski_space" rel="nofollow noreferrer">Sierpinski space</a> is not pseudometrizable. The <a href="https://en.wikipedia.org/wiki/Kolmogorov_space#The_Kolmogorov_quotient" rel="nofollow noreferrer">Kolmogorov quotient</a> of a pseudometric space <a href="https://en.wikipedia.org/wiki/Pseudometric_space#Metric_identification" rel="nofollow noreferrer">is metric</a>. However the Sierpinski space is already <span class="math-container">$T_0$</span>, but it's not Hausdorff, and thus not metric.</p> <p>The Sierpinski space is the topological space on two points with the topology <span class="math-container">$$\{\varnothing, \{0\}, \{0,1\}\}.$$</span></p>
473,874
<p>Three people, call them A, B and C, went sightseeing. <br/> A, B and C each individually saw a bird that no other saw (e.g., if A saw such a bird, the same bird was not seen by B or C). <br/> Each pair saw a yellow bird that the other pairs didn't see (e.g., if the pair A,B saw a bird, the same bird was not seen by the pairs B,C or C,A). <br/> All three people together saw a yellow bird. <br/> <br/> Question: find the total number of birds, and the number of yellow birds, seen altogether.</p>
Hagen von Eitzen
39,174
<p>If I don't overlook any subtlety in the problem, then in the minimal case each bird determines a nonempty subset of $\{A,B,C\}$ (its observers); hence there are at least $7$ birds, and at least $4$ of these are yellow.</p>
473,874
<p>Three people, call them A, B and C, went sightseeing. <br/> A, B and C each individually saw a bird that no other saw (e.g., if A saw such a bird, the same bird was not seen by B or C). <br/> Each pair saw a yellow bird that the other pairs didn't see (e.g., if the pair A,B saw a bird, the same bird was not seen by the pairs B,C or C,A). <br/> All three people together saw a yellow bird. <br/> <br/> Question: find the total number of birds, and the number of yellow birds, seen altogether.</p>
Willemien
88,985
<p>I disagree: I think only 6 birds are needed.<br> A sees bird A, not seen by B &amp; C<br> B sees bird B, not seen by A &amp; C<br> C sees bird C, not seen by A &amp; B<br> A and B see yellow bird AB, not seen by C<br> A and C see yellow bird AC, not seen by B<br> B and C see yellow bird BC, not seen by A</p> <p>A, B and C have all seen a yellow bird (nowhere is it written that it has to be the same bird), so we are done.</p> <p>If the birds A, B and C are all blue, then A, B and C have all seen a blue bird as well.</p> <p>But they did go to the wrong place: at a reasonable bird-watching spot you will see hundreds of birds.</p>
422,233
<p>I was asked to find a minimal polynomial of $$\alpha = \frac{3\sqrt{5} - 2\sqrt{7} + \sqrt{35}}{1 - \sqrt{5} + \sqrt{7}}$$ over <strong>Q</strong>.</p> <p>I'm not able to find it without the help of WolframAlpha, which says that the minimal polynomial of $\alpha$ is $$19x^4 - 156x^3 - 280x^2 + 2312x + 3596.$$ (Truely it is - $\alpha$ is a root of the above polynomial and the above polynomial is also irreducible over <strong>Q</strong>.)</p> <p>Can anyone help me with this?</p> <p>Thank you!</p>
GregHL
364,116
<p>I know nothing about Galois theory, haven't really thought much about this one, and am replying quickly, so I might be saying stupid things, but I guess we could use the fact that $\alpha \in \mathbb{Q}(\sqrt{5},\sqrt{7})$.</p> <p>Define $x \mapsto \bar{x}$ as the map sending, for example, $\sqrt{7}$ to $-\sqrt{7}$, and $x \mapsto x^{*}$ as the map sending $\sqrt{5}$ to $-\sqrt{5}$. Then generate a monic polynomial over $\mathbb{Q}$ by taking: $$p(X) := (X-\alpha)(X-\bar{\alpha})(X-\alpha^{*})(X-\bar{\alpha}^{*}).$$</p> <p>If we want the coefficients of the polynomial to be integers, we define $$D := 1 - \sqrt{5} + \sqrt{7},$$ i.e. the denominator of $\alpha$, then multiply $p(X)$ by $\pm D \bar{D} D^{*} \bar{D}^{*}$ (you should obviously get $19$) to get a polynomial whose coefficients are all integers, which should be the one you're looking for (at least I made a quick try in Python and it seems to work...).</p> <p>Those are just the main ideas for getting a polynomial in $\mathbb{Z}[X]$, or a monic polynomial in $\mathbb{Q}[X]$, that sends some $\alpha$ to $0$, by using what I guess is called the "Galois group". A priori this obviously doesn't guarantee you will always get <em>the</em> minimal polynomial of $\alpha$; in general it gives you a multiple of it. (As I said, I know nothing about Galois theory, so there are probably much more efficient methods than this...)</p>
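As an added check (not from either post): the quartic reported in the question can be confirmed numerically by evaluating it at a floating-point approximation of $\alpha$. The residual is zero up to rounding error:

```python
import math

# Numerically confirm that alpha is a root of the quartic from the question.
s5, s7, s35 = math.sqrt(5), math.sqrt(7), math.sqrt(35)
alpha = (3 * s5 - 2 * s7 + s35) / (1 - s5 + s7)

def p(x):
    return 19 * x**4 - 156 * x**3 - 280 * x**2 + 2312 * x + 3596

print(alpha, p(alpha))        # p(alpha) vanishes up to floating-point rounding
assert abs(p(alpha)) < 1e-6
```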
666,217
<p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p> <p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
JLA
30,952
<p>Let $a=r\cos\theta$, $b=r\sin\theta$ with $0\le r\le\sqrt{2}$, so that $a^2+b^2=r^2\le 2$. Then $a+b=r(\cos\theta+\sin\theta)=r\sqrt{2}\sin\left(\theta+\frac{\pi}{4}\right)\le\sqrt{2}\cdot\sqrt{2}=2$, with equality at $r=\sqrt{2}$, $\theta=\frac{\pi}{4}$. So $a+b\le 2$.</p>
666,217
<p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p> <p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
Steven Stadnicki
785
<p>Yet another method, inspired by looking at the problem geometrically (try drawing the region $a^2+b^2\leq2$ and the line $a+b=2$): let $s=a+b$, $t=a-b$. Then $a=\frac12(s+t)$ and $b=\frac12(s-t)$, so $a^2+b^2=\frac14\bigl((s^2+2st+t^2)+(s^2-2st+t^2)\bigr) = \frac12(s^2+t^2)$ and $a+b=s$, so the problem becomes: </p> <blockquote> <p>if $s^2+t^2\leq 4$ then show that $s\leq 2$.</p> </blockquote> <p>But this is manifestly true; $t^2\geq 0$, so if $s^2+t^2\leq 4$ then $s^2\leq 4$ and so $s\leq 2$.</p>
1,530,702
<p>Can anybody help me out with getting an expression of the values of $\lambda$ for a matrix $A$ for which $det(A-\lambda I)$ equals the determinant of a matrix with on the main diagonal $-\lambda$, on the diagonal above the main diagonal $\dfrac{1}{2}$ and on the diagonal under the main diagonal $\frac{1}{2} \lambda$.</p>
Bernard
202,857
<p>The determinants of such tridiagonal matrices of order <span class="math-container">$n$</span> are computed with the linear recurrence of order <span class="math-container">$2$</span>: <span class="math-container">$$D_n=-\lambda D_{n-1}-\frac\lambda4 D_{n-2}$$</span> and the initial conditions <span class="math-container">$\; D_0=1,\enspace D_1=-\lambda$</span>.</p> <p><em>How to find a closed formula for</em> <span class="math-container">$D_n$</span> <em>:</em></p> <p>We look for basic solutions that are geometric progressions <span class="math-container">$r^n\ (r\ne 0)$</span>; this leads to <span class="math-container">$$r^n=-\lambda r^{n-1}-\frac\lambda4 r^{n-2}\iff r^2=-\lambda r-\frac\lambda4. $$</span> So the possible values of <span class="math-container">$r$</span> are the roots <span class="math-container">$r_1,r_2$</span> of the <em>characteristic equation</em>: <span class="math-container">$$r^2+\lambda r+\frac\lambda4=0.$$</span> The general solution is a linear combination of the basic solutions: <span class="math-container">$\;\alpha r_1^n+\beta r_2^n$</span>, where <span class="math-container">$\alpha, \beta$</span> are determined by the initial conditions.</p>
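An added cross-check (not from the original answer): the recurrence can be verified against a direct cofactor-expansion determinant in exact rational arithmetic. Note that the $1\times 1$ determinant is $\det[-\lambda]=-\lambda$, so the recurrence is seeded with $D_0=1$, $D_1=-\lambda$; the value $\lambda=3$ below is an arbitrary test choice.

```python
from fractions import Fraction

# Cross-check D_n = -lam*D_{n-1} - (lam/4)*D_{n-2} against a direct determinant
# of the tridiagonal matrix (diagonal -lam, superdiagonal 1/2, subdiagonal lam/2).
lam = Fraction(3)

def det(M):                       # Laplace expansion along the first row
    n = len(M)
    if n == 0:
        return Fraction(1)
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def tridiag(n):
    M = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = -lam
        if i + 1 < n:
            M[i][i + 1] = Fraction(1, 2)
            M[i + 1][i] = lam / 2
    return M

D = [Fraction(1), -lam]           # D_0 = 1, D_1 = -lam
for n in range(2, 7):
    D.append(-lam * D[n - 1] - lam / 4 * D[n - 2])
for n in range(1, 7):
    assert det(tridiag(n)) == D[n]
print(D[1:7])
```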
1,787,806
<p>I've recently had this problem in an exam and couldn't solve it.</p> <p>Find the remainder of the following sum when dividing by 7 and determine if the quotient is even or odd:</p> <p>$$\sum_{i=0}^{99} 2^{i^2}$$</p> <p>I know the basic modular arithmetic properties but this escapes my capabilities. In our algebra course we've seen congruence, divisibility, division algorithm... how could I approach it?</p>
Gareth McCaughan
8,474
<p>$2^{i^2}$ mod 7 is determined by $i^2$ mod 6, hence by $i$ mod 6. That should suffice to sort out the remainder. Now the parity of the quotient depends only on the remainder of the sum mod 14, which you can get from knowing the sum mod 7 and mod 2.</p>
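Following the hint numerically (an added illustration, not part of the answer): the remainder comes from the sum mod 7, and the quotient's parity from the sum mod 14, since $S = 7q + r$ gives $S \bmod 14 = 7(q \bmod 2) + r$ when $r < 7$. A brute-force big-integer computation agrees:

```python
# Remainder mod 7 and quotient parity via the sum mod 14, per the hint.
r7  = sum(pow(2, i * i, 7)  for i in range(100)) % 7
r14 = sum(pow(2, i * i, 14) for i in range(100)) % 14
parity = (r14 - r7) // 7          # parity of the quotient q, since S = 7q + r7

# Exact big-integer brute force agrees:
S = sum(2 ** (i * i) for i in range(100))
assert S % 7 == r7 and (S // 7) % 2 == parity
print(r7, "odd" if parity else "even")
```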
159,761
<p>I have two lists:</p> <pre><code>list1 = {"a", "b"}; list2 = {{{1, 2}, {3, 4}}, {{1, 2}}}; </code></pre> <p>My goal is to create a new list which would be:</p> <pre><code>{"a u 1:2","a u 2:3","b u 1:2"} </code></pre> <p>In other words first element in <code>list1</code> would be distributed before each subelement of first element in <code>list2</code> etc.</p> <p>There are some answers using <code>MapThread</code> e.g. <a href="https://mathematica.stackexchange.com/questions/79943/add-elements-of-one-list-to-sublists-of-another-list">here</a>. But that is not satisfactory, actually, it does not work, just try e.g.</p> <pre><code>subl = {{{1, 2}, {3, 4}}, {{5, 6}}}; list = {11, 12}; MapThread[Append, {subl, list}] </code></pre> <p>As it returns: {{{1, 2}, {3, 4}, 11}, {{5, 6}, 12}}</p> <p>while the result I am seeking should look like:</p> <pre><code>{{{1,2,11},{3,4,11}},{{5,6,12}}} </code></pre> <p>And level specification returns errors:</p> <pre><code>MapThread::mptd: Object {{{1,2},{3,4}},{{5,6}}} at position {2, 1} in MapThread[Append,{{{{1,2},{3,4}},{{5,6}}},{11,12}},2] has only 1 of required 2 dimensions. </code></pre> <p>or</p> <pre><code>MapThread::intnm: Non-negative machine-sized integer expected at position 3 in MapThread[Append,{{{{1,2},{3,4}},{{5,6}}},{11,12}},{2}]. 
</code></pre> <p>Thus I do not think this is a duplicate, or at least I have not found an answer that would work in this case.</p> <p>I have:</p> <pre><code>Map[Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]], list2, {2}] </code></pre> <p>Producing</p> <pre><code>{{"u 1:2", "u 3:4"}, {"u 1:2"}} </code></pre> <p>so I had thought simply:</p> <pre><code>MapThread[ StringJoin[#2, #1] &amp;, {Map[ Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]], list2, {2}], list1},{2}] </code></pre> <p>but that gives an error, and </p> <pre><code>MapThread[ StringJoin[#2, #1] &amp;, {Map[ Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]], list2, {2}], list1}] </code></pre> <p>on the first level gives:</p> <pre><code>{"a u 1:2u 3:4", "b u 1:2"} </code></pre> <p>I tried to repartition the lists so that they are similar in size, but that did not work. The solution that works is:</p> <pre><code>listC = Map[Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]], list2, {2}] MapThread[Function[{u, v}, StringJoin[u, #] &amp; /@ v], {list1, listC}] </code></pre> <p>But I do not like it due to the <code>/@v</code> part. I would really like to find a general solution to this problem: redistribute (prepend, append, join strings) elements of one list across an arbitrary dimension of another list, which my solution does not permit; I exploited the fact that in this particular case I only need to apply one level deeper.</p>
WReach
142
<p>The first <code>Map</code> attempt listed in the question gets us close -- a tweak involving <code>MapIndexed</code> can get us all the way:</p> <pre><code>MapIndexed[StringRiffle[{list1[[#2[[1]]]], #}, " u ", ":"] &amp;, list2, {2}] // Flatten (* {"a u 1:2", "a u 3:4", "b u 1:2"} *) </code></pre> <p>Another way is to reshape input lists first and then map <code>StringRiffle</code> across the result:</p> <pre><code>StringRiffle[#, " u ", ":"] &amp; /@ Inner[Thread@*List, list1, list2, Join] (* {"a u 1:2", "a u 3:4", "b u 1:2"} *) </code></pre> <p>The <code>MapIndexed</code> approach is the more general of the two as it allows elements to matched up across the two lists by simple indexing. In contrast, the second approach might look shorter, but it produces more intermediate lists and the exact restructuring operations must be designed for each unique situation.</p> <p><code>ReplacePart</code> can be used in a manner that parallels the <code>MapIndexed</code> approach:</p> <pre><code>ReplacePart[list2, {i_, j_} :&gt; StringRiffle[{list1[[i]], list2[[i, j]]}, " u ", ":"]] // Flatten </code></pre> <p>Sometimes this is easier to read than the <code>MapIndexed</code> variant.</p>
2,637,914
<p>I would like to teach students about the pertinence of the Axiom of Infinity. Are there any high school-level theorems of arithmetic, algebra, or calculus, whose proof depends on the Axiom of Infinity? If there are no such examples, what would be the simplest theorem which demands the Axiom of Infinity?</p> <p>It seems we can still generate endless numbers without the Axiom of Infinity, but this axiom lets us treat infinite sets as a whole -- is this true?</p>
Tsemo Aristide
280,301
<p>Suppose that $\sum_{i=1}^{i=n}{i\over{i+1}}\leq {n^2\over{n+1}}$, </p> <p>$\sum_{i=1}^{i=n+1}{i\over{i+1}}\leq {n^2\over{n+1}}+{{n+1}\over{n+2}}$.</p> <p>${n^2\over{n+1}}+{{n+1}\over{n+2}}=$</p> <p>${{n^2(n+2)+(n+1)^2}\over{{(n+1)(n+2)}}}$</p> <p>$\leq {{n(n^2+2n+1)+(n+1)^2}\over{{(n+1)(n+2)}}}$</p> <p>$={{n(n+1)^2+(n+1)^2}\over{(n+1)(n+2)}}={{(n+1)^2}\over{n+2}}$.</p>
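An added sanity check (not part of the original induction argument): the inequality $\sum_{i=1}^{n}\frac{i}{i+1}\le\frac{n^2}{n+1}$ can be verified in exact rational arithmetic for the first few values of $n$:

```python
from fractions import Fraction

# Exact check of sum_{i=1}^{n} i/(i+1) <= n^2/(n+1) for n = 1..50.
total = Fraction(0)
for n in range(1, 51):
    total += Fraction(n, n + 1)
    assert total <= Fraction(n * n, n + 1)
print("holds for n = 1..50")
```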
1,077,594
<p>Let $C[a,b]$ be the space of continuous functions on $[a,b]$ with the norm $$ \left\Vert{f}\right\Vert=\max_{a \leq t \leq b}\left| f(t)\right| $$</p> <p>Then $C[a,b]$ is a Banach space. </p> <p>Let's view $C^1[a,b]$ as a subspace of it. My question is, is this $C^1[a,b]$ a Banach space?</p> <p>I think it is, since for every Cauchy sequence $\{f_n\}$ in $C^1[a,b]$, it is also a Cauchy sequence in $C[a,b]$, so it converges to a function $f$ in $C[a,b]$. But convergence in $C[a,b]$ is uniform, so $f$ is in $C[a,b]$ too, which follows that $C^1[a,b]$ is complete, i.e. a Banach space.</p> <p>However, I just read a theorem named <a href="http://en.wikipedia.org/wiki/Closed_graph_theorem" rel="nofollow">Closed Graph Theorem</a>, stating that</p> <blockquote> <p>(Closed Graph Theorem) Let $X$ and $Y$ be two Banach space, and $T$ a closed linear operator from $A\subset X$ to $Y$. If $A$ is closed in $X$, then $T$ is continuous.</p> </blockquote> <p>Apply this theorem to the above case, let $X=C^1[a,b]$, $Y=C[a,b]$ and $T=\frac{d}{dt}$ from $X$ to $Y$. We can prove that $T$ is a closed linear operator. Note that $X$ is closed in $X$, so by the above theorem $T$ is continuous.</p> <p>However, it is easy to prove that differential operator is NOT continuous.</p> <p>I am sure the Closed Graph Theorem and the last statement is true, so I think $C^1[a,b]$ is not Banach.</p> <p>Could anyone tell me why?</p>
sabachir
201,840
<p>We use $$f\left( z \right) = \frac{e^{iz}}{1 + z^2}$$ and then take real parts of the resulting integral, using the contour $C$ made of the segment $[-R,R]$ and the upper semicircle $S_R$:</p> <h2><img src="https://i.stack.imgur.com/NMBBI.jpg" alt="enter image description here"></h2> <p>Only the pole at $i$ lies inside $C$ (and $-i \notin C$), with $$\operatorname{Res}\left( f;i \right) = \lim_{z \to i} \left\{ \left( z - i \right)\frac{e^{iz}}{1 + z^2} \right\} = \frac{e^{-1}}{2i}.$$ By the residue theorem, $$\oint_C \frac{e^{iz}}{1 + z^2}\,dz = \int_{-R}^{+R} \frac{e^{ix}\,dx}{1 + x^2} + \int_{S_R} \frac{e^{iz}\,dz}{1 + z^2} = 2\pi i\,\frac{e^{-1}}{2i} = \frac{\pi}{e}.$$ Separating real and imaginary parts, $$\int_{-R}^{+R} \frac{\cos x \,dx}{1 + x^2} + i\int_{-R}^{+R} \frac{\sin x \,dx}{1 + x^2} + \int_{S_R} \frac{e^{iz}\,dz}{1 + z^2} = \frac{\pi}{e}.$$ As $R \to +\infty$, the integral over $S_R$ tends to $0$ (see the estimate below) and the sine integral vanishes because its integrand is odd, so $$2\int_0^{+\infty} \frac{\cos x \,dx}{1 + x^2} = \frac{\pi}{e}, \qquad \text{i.e.} \qquad \lim_{R \to +\infty} \int_0^{+R} \frac{\cos x \,dx}{1 + x^2} = \frac{\pi}{2}e^{-1}.$$</p> <p>Note: this is why one integrates $e^{iz}$ rather than $\cos z$: if $y = \operatorname{Im}(z)$, then $\left|\cos z\right|$ grows like $e^{|y|}$ for large $|z|$, whereas for $y &gt; 0$ we have $\left|e^{iz}\right| = e^{-y} \le 1$. This gives the estimate $$\left| \int_{S_R} \frac{e^{iz}}{z^2 + 1}\,dz \right| \le \int_{S_R} \frac{e^{-y}}{R^2 - 1}\,\left| dz \right| \le \frac{\pi R}{R^2 - 1} \to 0 \quad (R \to \infty),$$ and by the residue theorem $$\int_{-\infty}^{+\infty} \frac{e^{ix}}{x^2 + 1}\,dx = 2\pi i \sum_{\operatorname{Im} a &gt; 0} \operatorname{Res}_a \frac{e^{iz}}{z^2 + 1} = \frac{\pi}{e}.$$</p>
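The closed form $\int_0^\infty \frac{\cos x}{1+x^2}\,dx = \frac{\pi}{2e}$ can be confirmed numerically (an added check, not part of the answer). A stdlib-only Simpson's rule sketch; after one integration by parts the tail beyond $R$ is $O(1/R^2)$, so $R=200$ already gives several correct digits:

```python
import math

# Composite Simpson's rule for cos(x)/(1+x^2) on [0, R], compared to pi/(2e).
def g(x):
    return math.cos(x) / (1 + x * x)

R, N = 200.0, 200_000              # N must be even
h = R / N
s = g(0) + g(R) + sum((4 if k % 2 else 2) * g(k * h) for k in range(1, N))
integral = s * h / 3

print(integral, math.pi / (2 * math.e))
assert abs(integral - math.pi / (2 * math.e)) < 1e-3
```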
1,219,129
<p>For any vector space $V$ over $\mathbb{C}$, let $X$ be a set whose cardinality is the dimension of $V$. Then $V \cong \bigoplus\limits_{i \in X} \mathbb{C}$ as vector spaces.</p> <p>Is there a similar description of arbitrary Hilbert spaces? Is there something they all "look" like?</p>
TZakrevskiy
77,314
<p>On the one hand, you can apply the same argument as for a generic vector space and obtain that $$H \cong \bigoplus\limits_{i \in X} \mathbb{C}.$$ Note that this relies on the notion of the Hamel dimension of a vector space.</p> <p>On the other hand, you can say that a Hilbert space has its Hilbert dimension equal to the cardinality of a certain set $E$; then $$H\cong \ell_2 (E)$$ (i.e. the set of square-summable families indexed by elements of $E$), so, for example, all infinite-dimensional separable Hilbert spaces are "all the same". Note that the Hilbert dimension is rarely the same as the Hamel dimension.</p>
2,115,532
<blockquote> <p>Let $\mu$ be a $\sigma$-finite measure on $(X,\mathcal{A})$. Then there are finite measures $(\mu_n)_{n \in \mathbb{N}}$ on $(X,\mathcal{A})$ such that $$\mu = \sum_{n \in \mathbb{N}}\mu_n$$</p> </blockquote> <p>So if $\mu$ is $\sigma$-finite, we have that $$X = \bigcup_{n \in \mathbb{N}}X_n$$ for some measurable sets $X_n$ with $\mu(X_n) &lt; \infty$ for every $n \in \mathbb{N}$. My first idea was something like restricting the measure to the sets $X_n$, but each $\mu_n$ must be defined on all of $\mathcal{A}$. Any hint?</p>
C. Dubussy
310,801
<p>Since $\theta \in [0,2\pi]\mapsto e^{i\theta}$ is a parametrization of the circle, one has $$\int_{0}^{2\pi} e^{e^{i\theta}}d\theta = \int_{C(0,1)} \frac{e^z}{iz} dz = 2\pi$$ by Cauchy's theorem.</p>
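The value $2\pi$ is easy to confirm numerically (an added check, not part of the answer). The trapezoidal rule converges extremely fast for a smooth $2\pi$-periodic integrand, so even a modest grid nails the result to machine precision:

```python
import cmath, math

# Trapezoidal rule for the integral of e^{e^{i theta}} over [0, 2*pi].
N = 4096
integral = sum(cmath.exp(cmath.exp(2j * math.pi * k / N)) for k in range(N)) * (2 * math.pi / N)

print(integral)                          # 2*pi + 0j, up to rounding
assert abs(integral - 2 * math.pi) < 1e-9
```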
1,553,354
<p>Help me to find an example of a sequence of differentiable functions defined on $[0,1]$ that converge uniformly to a function $f$ on $[0,1]$ such that there exists $x \in (0,1)$ such that $f$ is not differentiable at $x$.</p>
Prahlad Vaidyanathan
89,789
<p>Any continuous function is a uniform limit of polynomials, so pick your favourite non-differentiable continuous function, and that would work!</p> <p>In fact, if $f$ is such a function, say $f(x) := |x- 0.5|$, then take the sequence of Bernstein polynomials $$ B_n(f)(x) := \sum_{k=0}^n {n\choose k} f\left(\frac{k}{n}\right) x^k (1-x)^{n-k}. $$ Then $B_n(f) \to f$ uniformly.</p> <p>Edit: A simpler example: take $f(x) = |x-0.5|$ and consider $$ f_n(x) = \sqrt{\frac{1}{n^2} + f(x)^2}. $$ Then check that each $f_n$ is differentiable and $$ f(x) \leq f_n(x) \leq f(x) + 1/n \quad\forall x\in [0,1], $$ so $f_n \to f$ uniformly.</p>
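A quick numerical check of the sandwich bound in the simpler example (added for illustration, not part of the original answer); it uses $\sqrt{a^2+b^2}\le a+b$ for $a,b\ge 0$:

```python
import math

# Check f <= f_n <= f + 1/n on a grid, for f(x) = |x - 0.5| on [0, 1].
def f(x):
    return abs(x - 0.5)

def fn(n, x):
    return math.sqrt(1 / n**2 + f(x) ** 2)

for n in (1, 5, 50):
    for k in range(1001):
        x = k / 1000
        assert f(x) <= fn(n, x) <= f(x) + 1 / n + 1e-12
print("uniform bound f <= f_n <= f + 1/n verified on the grid")
```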
117,608
<p>We know that if $G$ is a simple group with $p+1$ Sylow $p$-subgroups, then $G$ is 2-transitive. Now let $G$ be an almost simple group with $p+1$ Sylow $p$-subgroups. Is $G$ a 2-transitive group?</p>
Derek Holt
35,840
<p>I am sure that the answer is yes, but you might have to do a bit of work to write down a completely rigorous proof.</p> <p>Let $S \unlhd G$ with $S$ simple and $G \le {\rm Aut}(S)$, and suppose that $G$ has $p+1$ Sylow $p$-subgroups. If $p$ divides $|S|$, then $S$ has at most $p+1$ and hence exactly $p+1$ Sylow $p$-subgroups, and so $S$ and hence $G$ act 2-transitively by conjugation on the set of Sylow $p$-subgroups of $S$ (or of $G$).</p> <p>Using the classification, we find that the only finite simple groups $S$ that have an automorphism of prime order $p$ that does not divide $|S|$ are groups $S=L(r^{p^k})$ of Lie type that have a field automorphism of order $p^k$. So a Sylow $p$-subgroup $P$ of $G$ is cyclic of order $p^l$ for some $l \le k$. Then the centralizer of $P$ in $S$ is isomorphic to $L(r^{p^{k-l}})$, which has index greater than $p+1$ in $S$, so it is not possible for $G$ to have $p+1$ Sylow $p$-subgroups in this situation.</p>
187,974
<p>If $ \cot a + \frac 1 {\cot a} = 1 $, then what is $ \cot^2 a + \frac 1{\cot^2 a}$? </p> <p>the answer is given as $-1$ in my book, but how do you arrive at this conclusion?</p>
Jyrki Lahtonen
11,619
<p>Hint: $$ x^2+\frac1{x^2}=\left(x+\frac1x\right)^2-2. $$</p>
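An added remark on the hint: no real $x$ satisfies $x+\frac1x=1$ (for real $x$, $|x+\frac1x|\ge 2$), which is why the book's answer is negative; the premise forces $x$ to be a complex root of $x^2-x+1=0$, i.e. $x=e^{\pm i\pi/3}$. A quick numeric confirmation of the hint's identity at such an $x$:

```python
import cmath, math

# x + 1/x = 1 forces x^2 - x + 1 = 0, i.e. x = e^{+-i pi/3}; then the hint gives
# x^2 + 1/x^2 = (x + 1/x)^2 - 2 = 1 - 2 = -1.
x = cmath.exp(1j * math.pi / 3)
assert abs(x + 1 / x - 1) < 1e-12
print(x * x + 1 / (x * x))        # -1 up to rounding
```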
187,974
<p>If $ \cot a + \frac 1 {\cot a} = 1 $, then what is $ \cot^2 a + \frac 1{\cot^2 a}$? </p> <p>the answer is given as $-1$ in my book, but how do you arrive at this conclusion?</p>
Apratim Ran Chak
299,618
<p>$\cot a + \frac{1}{\cot a} = 1.$</p> <p>Squaring both sides, we get $$\cot^2 a + \frac{1}{\cot^2 a} + 2 = 1.$$ Hence $$\cot^2 a + \frac{1}{\cot^2 a} = -1.$$</p>
2,461,615
<p>I am still at college. I need to solve this problem.</p> <p>The total amount to receive in 1 year is 17500 CAD. And the university pays its students each 2 weeks (26 payments per year). </p> <p>How much does a student have to receive for 4 months? I have calculated this in 2 ways (both seem ok) but results are different. Which one is the right one and why? </p> <pre><code>a) 17500CAD / 12 months = 1458.33CAD each month 1458.33CAD x 4 months = 5833 (total amount of money in 4 months) If money has to be given each 2 weeks: 5833 / 8 = 729.125 CAD b) 17500 / 26 = 673.08 each 2 weeks 673.08 x 8 = 5384.62 (total amount of money in 4 months) </code></pre> <p>I think the right one is a), because b) is assuming the student has been receiving money for the whole year (26 payments). But it is not the case.</p> <p>Thank you</p>
Guillemus Callelus
361,108
<p>If you consider that a month has $4$ weeks, you have a total of $48$ weeks, not $52$.</p>
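The two computations from the question, side by side (a Python illustration, added here): the gap is exactly the 48-versus-52-weeks point, since 4 calendar months span about $\frac{52}{12}\cdot 4 \approx 17.33$ weeks of pay, while 8 biweekly payments cover only 16 weeks.

```python
# Computation a): monthly rate over 4 months, split into 8 payments.
per_month = 17500 / 12
a_total = per_month * 4

# Computation b): biweekly rate, 8 payments.
per_payment = 17500 / 26
b_total = per_payment * 8

print(round(a_total, 2), round(b_total, 2))   # 5833.33 vs 5384.62
assert round(a_total, 2) == 5833.33
assert round(b_total, 2) == 5384.62
```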
2,216,601
<p>Alright, so I have this transformation that I know isn't a one-to-one transformation, but I'm not sure why.</p> <p>A transformation is defined as $f(x,y)=(x+y, 2x+2y)$.</p> <p>Now my knowledge is that you need to fulfill the 2 conditions: additivity and scalar multiplication. I tried both of them and somehow both of them are met perfectly.</p> <p>However, the transformation is NOT one-to-one. This is because the column vectors of the transformation's matrix are linearly dependent.</p> <p>So how am I supposed to relate these 2 seemingly unrelated facts to check whether the transformation is one-to-one?</p>
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p>$$f (1,-1)=(0,0) $$ and</p> <p>$$f (0,0)=(0,0) $$</p>
1,943,328
<p>I know about $S_n$, $D_n$ and $A_n$. And from my limited understanding there seem to be many more. I would like to know whether there is some kind of relation that links a small set of non-abelian groups to create the other ones, something like what the Fundamental Theorem of Finite Abelian Groups does for abelian groups.</p>
P Vanchinathan
28,915
<p>There is a construction called the semi-direct product. It is somewhat like a direct product with a twist. This creates a non-abelian group even if the two factors were abelian. And there is a generalisation called group extensions which creates more non-abelian groups.</p> <p>It is difficult to classify them, because the direct product of two groups with one of them non-abelian will result in a non-abelian group. So one tries to classify simple groups: those not admitting proper non-trivial normal subgroups. Even here the landscape is vast: the classification of finite simple groups runs to thousands of journal pages.</p> <p>The closest analogue of the Fundamental Theorem of Arithmetic is the Jordan-Hölder Theorem for groups. But the same simple ("prime") components can be "put together" in various ways to produce many different non-abelian groups.</p> <p>If you want some important examples in the finite case: non-singular matrices of size $n\times n$ (with entries from finite fields).</p> <p>This group has various interesting subgroups: triangular matrices, etc.</p>
1,983,129
<p>In the tripple integral to calculate the volume of a sphere why does setting the limits as follows not work?</p> <p>$$ \int_{0}^{2\pi} \int_{0}^{\pi} \int_{0}^{R} p^2 \sin{\phi} \, dp\,d\theta\,d\phi $$</p>
GEdgar
442
<p>Geometrically...</p> <p>The center of gravity of the set of vertices of the polygon $\{1, \epsilon, \cdots, \epsilon^{n-1}\}$ is the center of that polygon. Proof: the polygon is invariant under rotation by $\epsilon$ about the center, so the center of gravity is also invariant under that rotation.</p>
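A numeric illustration of this rotation-invariance argument (added, not part of the answer): averaging the vertices $1,\epsilon,\dots,\epsilon^{n-1}$, with $\epsilon$ a primitive $n$-th root of unity, gives the center $0$, i.e. the $n$-th roots of unity sum to $0$ for every $n\ge 2$:

```python
import cmath, math

# The vertices 1, eps, ..., eps^{n-1} average to the polygon's center:
# the n-th roots of unity sum to 0 for every n >= 2.
for n in range(2, 13):
    total = sum(cmath.exp(2j * math.pi * k / n) for k in range(n))
    assert abs(total) < 1e-9
print("vertex sums vanish for n = 2..12")
```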
2,046,521
<p>Of course, faster calculations help solve problems quickly. But does that also mean that faster calculations open more opportunities for a career in mathematics (like a researcher)? I like mathematics and can spend weeks trying to solve any problem or understanding any concept. But nowadays, there are many contests that focus on faster calculations rather than problem solving. I am very slow at calculations due to which I end up doing badly in these types of contests. Does that mean I am lagging somewhere? Can this cause hindrance in pursuing a career in mathematics? </p>
SchrodingersCat
278,967
<p>This is a vague concept. True mathematics does not deal with numerical calculations, let alone faster calculations. Science has devised machines called calculators to perform this task of faster calculations. What real mathematics deals with are <em>concepts</em>, mathematical subtleties, reasoning, logic and new lines of thought to <em>imagine</em> things that we cannot see. Strictly speaking, mathematics is the only science which ventures into things that cannot be seen or felt in reality, nor properly visualised. A classic example of this notion is $n$-dimensional space: generalising every possible theory, theorem, law and equation to $n$ dimensions, assuming some kind of universal symmetry, perfection and order in this generalisation. If you really want to do research in mathematics, leave all calculations to the calculator and start to develop <em>the art of thinking</em>. That's what's most required to carry on in this line.</p>
2,046,521
<p>Of course, faster calculations help solve problems quickly. But does that also mean that faster calculations open more opportunities for a career in mathematics (like a researcher)? I like mathematics and can spend weeks trying to solve any problem or understanding any concept. But nowadays, there are many contests that focus on faster calculations rather than problem solving. I am very slow at calculations due to which I end up doing badly in these types of contests. Does that mean I am lagging somewhere? Can this cause hindrance in pursuing a career in mathematics? </p>
Christopher.L
347,503
<p>In general I would say that problem-solving skills are more important than being 'skilled' in arithmetic. The contests you mention often test both problem solving and speed.</p> <p>However, being fast at doing calculations often means that you understand theorems and proofs faster, and thus tend to get stuck less often. This, in turn, might mean that you are just able to read and learn more during a shorter period of time. This is of course not true for everyone.</p> <p>I don't think that you should feel hindered by the fact that you feel that you are slow at calculations. I know several mathematicians (myself included) who sometimes feel inferior to others when it comes to calculating things fast. Often I feel that this has more to do with self-confidence. If I try to do calculations fast in a somewhat stressful situation (especially if someone goes "You're the mathematician, calculate this!"), I'm often blocked by the stress, and tend to make silly mistakes. Because of this, I often double-check my calculations, in my head, even really easy ones, and this slows down calculations substantially.</p> <p>In conclusion, when it comes to pursuing a career in mathematics, don't feel that this is too much of an obstacle; you're not alone. Get an app that trains basic arithmetic in your head and don't let this get in the way of your studies. It is far more important that you have an ability to connect different areas of mathematics and an ability to be creative, and, most importantly, that you really think that mathematics is interesting, fun and important in its own right. Get away from the thought that mathematics is a tool that has to be applied to some other problem (often physical). Being fast at calculations is handy but, unless you are really slow, it is more important in social situations than in doing research in mathematics.</p>
557,543
<p>Does there exist a positive decreasing sequence $\{a_i\}$ with $\sum_{i\in\mathbb{N}} a_i$ convergent, such that for every nonempty $I\subset\mathbb{N}$, $\sum_{i\in I}a_i$ is an irrational number?</p> <p>Such an example would give rise to a <strong>closed perfect set containing no rationals</strong>. I can only do it for infinite $I$ (for example, let $a_i=10^{-p_i}$, where $p_i$ is the $i$th prime), but the set of infinite sums is not closed.</p>
Andrés E. Caicedo
462
<p>Another easy-to-describe example of a perfect set of irrationals consists of the set of all $x\in(0,1)$ whose continued fraction has the form $$\cfrac1{a_0+\cfrac1{a_1+\cfrac1{a_2+\cfrac1{\dots}}}},$$ where each $a_i$ is either $1$ or $2$. In fact, the set of irrationals in $(0,1)$ is precisely the set of numbers whose continued fraction is infinite. Many perfect sets can be obtained by varying the idea above. Descriptive set-theorists recognize the example at once: The use of continued fractions naturally identifies the set of irrationals with the set $\mathbb N^{\mathbb N}$ of infinite sequences of naturals (sometimes called <em>Baire space</em>). The Cantor set is naturally identified with $\{1,2\}^{\mathbb N}$, and this set corresponds, via continued fractions, to the given example. See <a href="https://math.stackexchange.com/q/1064/462">here</a>.</p> <p>For another example, now that we mentioned the Cantor set $C$, it suffices to show that there is a translate of it consisting only of irrationals. See <a href="https://math.stackexchange.com/q/381690/462">here</a> for this. Curiously, we do not seem to know any concrete examples of reals $r$ such that $C+r$ only contains irrationals. </p> <p>Regarding your question, an interesting problem is to see what possible sets can be obtained as the set of sums of subseries of a given series. That is, given a sequence $\{x_n\}_{n=1}^\infty$ of positive numbers converging to zero, we consider the set $\Sigma(\{x_n\})$ of all numbers that are the sum of a (finite, or infinite, or even empty) subsequence of $\{x_n\}$. 
The question is what sets are $\Sigma(\{x_n\})$ for some such sequence $\{x_n\}$.</p> <p>This question is solved by Guthrie and Nymann, and a nice self-contained exposition of the result can be found in a recent paper by Zbigniew Nitecki: </p> <ul> <li>If $\sum_n x_n$ diverges, then $\Sigma(\{x_n\})=[0,\infty)$.</li> <li>Otherwise, $\Sigma(\{x_n\})$ is one of the following: <ol> <li>A finite union of (non-trivial) closed bounded intervals. </li> <li>A Cantor set. </li> <li>A "symmetric Cantorval".</li> </ol></li> </ul> <p>The term "Cantorval" is due to Pedro Mendes and Fernando Oliveira. A symmetric Cantorval is, by definition, a non-empty compact set $S\subseteq\mathbb R$ such that</p> <ol> <li>$S=\overline{\stackrel{\circ} S}$, that is, $S$ is the closure of its interior, and </li> <li>Both endpoints of any non-trivial connected component of $S$ are accumulation points of trivial (that is, one-point) components of $S$.</li> </ol> <p>Just as all Cantor sets are homeomorphic, so all Cantorvals are homeomorphic as well. </p> <p>Nitecki's paper, <em>Subsum sets: Intervals, Cantor sets, and Cantorvals</em>, can be downloaded at the <a href="http://arxiv.org/abs/1106.3779" rel="nofollow noreferrer">ArXiv</a>. It is a nice paper, and I recommend it. It also addresses the case where the sequence is not decreasing (but still converges to zero), or when not all of its terms are positive. This added generality does not change anything: Either one obtains an unbounded interval, containing $0$ (and may equal $\mathbb R$), or a perfect set whose convex hull is $[a,b]$ and is symmetric about the midpoint of this interval, where $a,b$ are the extended reals that are the sum of the negative and positive parts of the sequence, respectively. 
In this case, the set is again one of the three possibilities listed above.</p> <p>Note that the only way we only get irrational numbers in $\Sigma(\{x_n\})\setminus\{0\}$ is if $\Sigma(\{x_n\})$ is a Cantor set.</p> <p>An easy example that this is possible is described in André's answer. I suspect that the set of (non-empty) subseries of $\sqrt2\sum_{n=0}^\infty\frac1{n!}$ is another example, but even though all infinite subseries of $\sum_{n=0}^\infty\frac1{n!}$ are obviously irrational, to show that they are in fact transcendental seems rather more elaborate than to argue in terms of Liouville numbers, as in his answer.</p>
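The interval-versus-Cantor-set part of Nitecki's trichotomy is easy to glimpse numerically. The sketch below is my own illustration (the two sequences, the cutoff $N=12$, and the gap thresholds are choices of mine, not taken from the paper): it enumerates all finite subsums of the first $N$ terms of $x_n=1/2^n$, whose full subsum set is the interval $[0,1]$, and of $x_n=2/3^n$, whose subsum set is the classical middle-thirds Cantor set, and compares the largest gap between consecutive subsums.

```python
def finite_subsums(terms):
    """Sums of all subsets of `terms` (2**len(terms) values, empty sum included)."""
    sums = [0.0]
    for t in terms:
        sums = sums + [s + t for s in sums]
    return sorted(sums)

def max_gap(values):
    """Largest gap between consecutive sorted values."""
    return max(b - a for a, b in zip(values, values[1:]))

N = 12
interval_like = [1 / 2 ** n for n in range(1, N + 1)]  # subsum set of the full series: [0, 1]
cantor_like   = [2 / 3 ** n for n in range(1, N + 1)]  # subsum set: middle-thirds Cantor set

g_int = max_gap(finite_subsums(interval_like))  # equals 2**-N, shrinks to 0 as N grows
g_can = max_gap(finite_subsums(cantor_like))    # stays near 1/3, the "middle third" gap
print(g_int, g_can)
```

For the first sequence the largest gap is $2^{-N}$ and vanishes in the limit (an interval), while for the second it stays pinned near $1/3$ no matter how large $N$ is (a Cantor set).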
182,785
<p>I have plotted a graph of two functions:</p> <pre><code>η = 52; h = 0.5682; dpdx = -4.092*10^(-2); Fg = dpdx; Fl = dpdx/η; Bl = ((Fg - Fl) h^2 - Fg)/(2 h - 2 η*h + 2 η); Cg = -Fg/2 - η*Bl; Bg = η*Bl; Ut1[y_] := Fg*y^2/2 + Bg*y + Cg; Ut2[y_] := Fl*y^2/2 + Bl*y; Plot1 = Plot[Ut1[y]*1000, {y, h, 1}]; Plot2 = Plot[Ut2[y]*1000, {y, 0, h}, PlotStyle -&gt; Orange]; Show[{Plot1, Plot2}, PlotRange -&gt; All, AxesLabel -&gt; {"y", "U"}, AxesStyle -&gt; FontSize -&gt; 14] </code></pre> <p>The result:</p> <p><a href="https://i.stack.imgur.com/9BNfN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9BNfN.png" alt="enter image description here"></a></p> <p>How can I flip and transform the graph this way?</p> <p><a href="https://i.stack.imgur.com/0JvMC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0JvMC.jpg" alt="enter image description here"></a></p> <p>PS: The numbers on the axes have to be legible. </p>
kglr
125
<p>You can also post-process the <code>Show</code> output using <a href="https://reference.wolfram.com/language/ref/RotationTransform.html" rel="nofollow noreferrer"><code>RotationTransform</code></a> and <a href="https://reference.wolfram.com/language/ref/ReflectionTransform.html" rel="nofollow noreferrer"><code>ReflectionTransform</code></a>:</p> <pre><code>show = Show[{Plot1, Plot2}, PlotRange -&gt; All, AxesLabel -&gt; {"y", "U"}, AxesStyle -&gt; 14]; Show[MapAt[GeometricTransformation[#, Composition[ReflectionTransform[{-1, 0}], RotationTransform[Pi/2]]] &amp;, show, {1}], AxesOrigin -&gt; {0, 0}, AxesLabel -&gt; { "U", "y"} ] </code></pre> <p><a href="https://i.stack.imgur.com/ko3EZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ko3EZ.png" alt="enter image description here"></a></p>
2,801,433
<p>I have made the following conjecture, and I do not know if this is true.</p> <blockquote> <blockquote> <p><strong>Conjecture:</strong></p> </blockquote> <p><span class="math-container">\begin{equation*}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longrightarrow}2\verb| such that we denote by | p_n\verb| the | n^\text{th} \verb| prime.|\end{equation*}</span></p> </blockquote> <p>Is my conjecture true? It seems like it, according to a plot made by Wolfram|Alpha, but if it does, then it converges.... <em>very</em>.... <em>very</em>, slowly. In fact, let <span class="math-container">$k=5000$</span>, then the sum is approximately equal to <span class="math-container">$1.97$</span>, which just proves how slow it would be.</p> <p>Is there a way of showing whether or not this is indeed convergent? For any other higher values of <span class="math-container">$k$</span>, it seems that it is just too much for Wolfram|Alpha to calculate, and it does not give me a result when I let <span class="math-container">$k=\infty$</span>. Also, for users who might not understand the notation, we can similarly write that <span class="math-container">$$\sum_{n=1}^\infty\frac{1}{\pi^{1/n}p_n}=2\qquad\text{ or }\qquad\lim_{k\to\infty}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}=2.$$</span> Also, without Wolfram|Alpha, I have <em>no idea</em> how to approach this problem in terms of proving it or disproving it. Does the sum even converge <em>at all</em>? If so, to what value? Any help would be much appreciated.</p> <hr /> <p>Thank you in advance.</p> <p><strong>Edit:</strong></p> <p>I looked at <a href="https://math.stackexchange.com/questions/2070991/is-sum-limits-n-1-infty-frac1nk1-frac12-for-k-to-infty?rq=1">this post</a> to see if I could rewrite my conjecture as something else in order to help myself out. 
Consequently, I wrote that <span class="math-container">$$\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longleftrightarrow}4\sum_{n=1}^\infty\frac{1}{n^k+1}\tag{$\text{LHS}=2$}$$</span> since both sums look very similar. Could <em>this</em> be of use?</p>
marty cohen
13,079
<p>If $a &gt; 1$ then, since $p_n \sim n \ln n$ and $1/a^{1/n} =e^{-\ln a/n} \sim 1-\ln a/n$, we have $\sum_{k=1}^n 1/(a^{1/k}p_k) \sim \sum_{k=1}^n 1/p_k-\sum_{k=1}^n\ln a/(kp_k) \sim \ln \ln n-c$ for some $c$, since the second sum converges.</p> <p>Therefore the sum diverges like $\ln \ln n$.</p>
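A quick numerical experiment (my own sketch; the sieve bound of $1{,}400{,}000$ is chosen just to cover the $100{,}000$th prime, $p_{100000}=1299709$) is consistent with this divergence: the partial sums creep past the conjectured limit $2$.

```python
from math import pi

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i in range(limit + 1) if sieve[i]]

primes = primes_up_to(1_400_000)[:100_000]   # p_100000 = 1299709 < 1.4e6
s, checkpoints = 0.0, {}
for k, p in enumerate(primes, start=1):
    s += 1.0 / (pi ** (1.0 / k) * p)
    if k in (5_000, 100_000):
        checkpoints[k] = s
print(checkpoints)  # partial sums at k = 5000 and k = 100000
```

At $k=5000$ the partial sum is still below $2$ (about $1.97$, matching the figure quoted in the question), but by $k=100000$ it has already passed $2$, in line with the $\ln\ln n$ growth.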
3,355,270
<p>I know that</p> <p><span class="math-container">$$\int \frac{1}{x}dx = \ln |x| + C$$</span></p> <p>however I have seen differential equation notes and solutions claim that the integrating factor for <span class="math-container">$P(x)=-\frac{1}{x}$</span> is</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{x}$$</span></p> <p>For example, consider the IVP</p> <p><span class="math-container">$$\frac{dy}{dx}-\frac{y}{x}=xe^x, ~~~y(1)=e+1$$</span></p> <p>We have that <span class="math-container">$P(x)=-\frac{1}{x}$</span> so we could find the integrating factor exactly as above</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{x}\tag{1a}$$</span></p> <p>then our equation would become</p> <p><span class="math-container">$$\frac{1}{x}\frac{dy}{dx}-\frac{y}{x^2}=e^x \implies \frac{d}{dx}\Big(\frac{1}{x}y\Big)=e^x$$</span></p> <p>which after integrating produces</p> <p><span class="math-container">$$\frac{y}{x}=e^x+C$$</span></p> <p>Applying the initial condition <span class="math-container">$y(1)=e+1$</span> gives <span class="math-container">$C=1$</span>. Then</p> <p><span class="math-container">$$\frac{y}{x}=e^x+1$$</span></p> <p>or</p> <p><span class="math-container">$$y=xe^x+x\tag{1b}$$</span></p> <p>If instead we found the integrating factor as</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{|x|}\tag{2a}$$</span></p> <p>then we would carry the <span class="math-container">$|x|$</span> through the computation. 
We have</p> <p><span class="math-container">$$\frac{1}{|x|}\frac{dy}{dx}-\frac{y}{x|x|}=e^x \implies \frac{d}{dx}\Big(\frac{1}{|x|}y\Big)=e^x$$</span></p> <p>which after integrating forms</p> <p><span class="math-container">$$\frac{y}{|x|}=e^x+C$$</span></p> <p>Applying the initial condition <span class="math-container">$y(1)=e+1$</span> once again gives <span class="math-container">$C=1$</span>. Then</p> <p><span class="math-container">$$\frac{y}{|x|}=e^x+1$$</span></p> <p>or</p> <p><span class="math-container">$$y=|x|e^x+|x|\tag{2b}$$</span></p> <p>I have seen different people claim that both solutions are correct. I'm not sure if we can drop the absolute value sign at some point in the computation. </p>
Arthur
15,500
<p>The differential equation breaks down at <span class="math-container">$x=0$</span>, so what happens for negative <span class="math-container">$x$</span> is something we cannot tell from the given information. We only care about positive <span class="math-container">$x$</span> because we <em>can</em> only care about positive <span class="math-container">$x$</span>, and therefore the absolute value signs do nothing.</p> <p>Even for the simpler differential equation <span class="math-container">$y'=\frac yx$</span>, we get general solution <span class="math-container">$$ y(x)=\cases{ax&amp; for $x&gt;0$\\bx&amp; for $x&lt;0$} $$</span> In fact, the true antiderivative of <span class="math-container">$\frac1x$</span> is <span class="math-container">$$ \cases{\ln x+c_1&amp; for $x&gt;0$\\\ln(-x)+c_2 &amp; for $x&lt;0$} $$</span> and you don't have any information to help you pin down <span class="math-container">$c_2$</span> (or really <span class="math-container">$c_2-c_1$</span>) for your integrating factor, which is another manifestation of our inability to tell what happens for negative <span class="math-container">$x$</span>.</p> <p>Of course, assuming something like <span class="math-container">$y$</span> being differentiable at <span class="math-container">$x=0$</span> (if that's even something that can happen; that's not the case with all differential equations) will be enough to stitch together the negative and positive branches.</p>
3,355,270
<p>I know that</p> <p><span class="math-container">$$\int \frac{1}{x}dx = \ln |x| + C$$</span></p> <p>however I have seen differential equation notes and solutions claim that the integrating factor for <span class="math-container">$P(x)=-\frac{1}{x}$</span> is</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{x}$$</span></p> <p>For example, consider the IVP</p> <p><span class="math-container">$$\frac{dy}{dx}-\frac{y}{x}=xe^x, ~~~y(1)=e+1$$</span></p> <p>We have that <span class="math-container">$P(x)=-\frac{1}{x}$</span> so we could find the integrating factor exactly as above</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{x}\tag{1a}$$</span></p> <p>then our equation would become</p> <p><span class="math-container">$$\frac{1}{x}\frac{dy}{dx}-\frac{y}{x^2}=e^x \implies \frac{d}{dx}\Big(\frac{1}{x}y\Big)=e^x$$</span></p> <p>which after integrating produces</p> <p><span class="math-container">$$\frac{y}{x}=e^x+C$$</span></p> <p>Applying the initial condition <span class="math-container">$y(1)=e+1$</span> gives <span class="math-container">$C=1$</span>. Then</p> <p><span class="math-container">$$\frac{y}{x}=e^x+1$$</span></p> <p>or</p> <p><span class="math-container">$$y=xe^x+x\tag{1b}$$</span></p> <p>If instead we found the integrating factor as</p> <p><span class="math-container">$$\mu(x)=e^{\int P(x)dx}=e^{-\int\frac{1}{x}dx}=e^{-\ln|x|}=\frac{1}{|x|}\tag{2a}$$</span></p> <p>then we would carry the <span class="math-container">$|x|$</span> through the computation. 
We have</p> <p><span class="math-container">$$\frac{1}{|x|}\frac{dy}{dx}-\frac{y}{x|x|}=e^x \implies \frac{d}{dx}\Big(\frac{1}{|x|}y\Big)=e^x$$</span></p> <p>which after integrating forms</p> <p><span class="math-container">$$\frac{y}{|x|}=e^x+C$$</span></p> <p>Applying the initial condition <span class="math-container">$y(1)=e+1$</span> once again gives <span class="math-container">$C=1$</span>. Then</p> <p><span class="math-container">$$\frac{y}{|x|}=e^x+1$$</span></p> <p>or</p> <p><span class="math-container">$$y=|x|e^x+|x|\tag{2b}$$</span></p> <p>I have seen different people claim that both solutions are correct. I'm not sure if we can drop the absolute value sign at some point in the computation. </p>
Oscar Lanzi
248,217
<p>We can get the integrating factor without logarithms. Use the quotient rule for differentiation:</p> <p><span class="math-container">$\dfrac{d(u/v)}{dx}=\dfrac{v(du/dx)-u(dv/dx)}{v^2}$</span></p> <p>Put <span class="math-container">$u=y, v=x$</span> to get</p> <p><span class="math-container">$\dfrac{d(y/x)}{dx}=\dfrac{x(dy/dx)-y}{x^2}=\dfrac{1}{x}\left(\dfrac{dy}{dx}-\dfrac{y}{x}\right)$</span></p> <p>which gives the integrating factor <span class="math-container">$\frac1x$</span> and the exact derivative in one blow: multiplying the equation by <span class="math-container">$\frac1x$</span> turns its left side into <span class="math-container">$\dfrac{d(y/x)}{dx}$</span>. Since this applies for both positive and negative <span class="math-container">$x$</span> we have the <span class="math-container">$(1b)$</span> solution unambiguously.</p>
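As a sanity check on the $(1b)$ candidate $y = xe^x + x$ from the question: its exact derivative is $y' = e^x + xe^x + 1$, so $y' - y/x = xe^x$ for every $x\neq 0$, negative $x$ included. A small numeric sketch (my own, just verifying the algebra at a few sample points):

```python
from math import exp, isclose

def y(x):  return x * exp(x) + x           # the (1b) candidate from the question
def dy(x): return exp(x) + x * exp(x) + 1  # its exact derivative

# y' - y/x = (e^x + x e^x + 1) - (e^x + 1) = x e^x, for every x != 0
for x in (-2.0, -0.5, 0.25, 1.0, 3.0):
    assert isclose(dy(x) - y(x) / x, x * exp(x), rel_tol=1e-9, abs_tol=1e-9)
print("y = x e^x + x satisfies y' - y/x = x e^x on both sides of x = 0")
```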
798,897
<p>In our lecture we ran out of time, so our prof told us a few properties about measure: He said that a measure is $\sigma$-additive iff it has a right-side continuous function that it creates. And he was not only referring to probability measures. After going through my lecture notes, I thought that this would imply that there can be no other measures than ones having a right-side continuous function (I think they are called Lebesgue-Stieltjes measures) as $\sigma$-additivity is a prerequisite to be a measure. So somehow, this does not fit together. Does anybody know what he could have meant here? Or was he only referring to probability measures?</p> <p>Is anything unclear about my question?</p>
Community
-1
<p>There are many other measures. For example, the counting measure: $\mu(A)$ is the number of elements of $A$, with $\mu(A)=\infty$ if $A$ is infinite. This is not a Lebesgue-Stieltjes measure. Neither are the <a href="http://en.wikipedia.org/wiki/Hausdorff_measure" rel="nofollow noreferrer">Hausdorff measures</a> $\mathcal H^d$ with $0&lt;d&lt;1$. Indeed, all of these measures have $\mu([a,b])=\infty$ whenever $a&lt;b$, which is something a Lebesgue-Stieltjes measure cannot satisfy. </p> <p>A Borel measure on $\mathbb R$ is a <a href="http://en.wikipedia.org/wiki/Lebesgue%E2%80%93Stieltjes_integration" rel="nofollow noreferrer">Lebesgue-Stieltjes measure</a> if and only if it is <a href="http://en.wikipedia.org/wiki/Regular_Borel_measure" rel="nofollow noreferrer">regular</a>; equivalently, if it is finite on bounded sets. See <em>Real Analysis</em> by Royden, section 12.3. </p> <hr> <p>Additional remark from comments: if $\mu$ is a <em>finitely additive measure</em> that is finite on bounded sets, then the $\sigma$-additivity of $\mu$ is equivalent to its CDF being right-continuous. One direction is <a href="https://math.stackexchange.com/q/679172/">here</a>, the other direction is <a href="https://math.stackexchange.com/q/668513/">here</a>.</p>
3,839,878
<p>Am currently doing a question that asks about the relationship between a quadratic and its discriminant.</p> <p>If we know that the quadratic <span class="math-container">$ax^2+bx+c$</span> is a perfect square, then can we say anything about the discriminant?</p> <p>Specifically, can we be sure that the discriminant equals 0?</p> <p>So far, I have tried to complete the square for the general quadratic, and got to:</p> <p><span class="math-container">$a((x+\frac{b}{2a})^2-\frac{b^2}{4a^2}+\frac ca)$</span></p> <p>But am now stuck. What should I do next, or is there a totally different route I should be taking?</p>
The Chaz 2.0
7,850
<p>By hypothesis, the quadratic is a perfect square if it is <em>something</em> (linear) squared, say <span class="math-container">$$(ux + v)^2 = u^2x^2 + 2uvx + v^2$$</span></p> <p>Then the discriminant <span class="math-container">$b^2 - 4ac = (2uv)^2 - 4(u^2)(v^2) = 0$</span></p>
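The computation $(2uv)^2 - 4u^2v^2 = 0$ can be spot-checked mechanically; here is a short sketch of my own over random integer values of $u$ and $v$:

```python
import random

random.seed(0)
for _ in range(1000):
    u = random.randint(-50, 50)
    v = random.randint(-50, 50)
    # (ux + v)^2 = u^2 x^2 + 2uv x + v^2, so a = u^2, b = 2uv, c = v^2
    a, b, c = u * u, 2 * u * v, v * v
    assert b * b - 4 * a * c == 0
print("discriminant of (ux+v)^2 is always 0")
```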
3,839,878
<p>Am currently doing a question that asks about the relationship between a quadratic and its discriminant.</p> <p>If we know that the quadratic <span class="math-container">$ax^2+bx+c$</span> is a perfect square, then can we say anything about the discriminant?</p> <p>Specifically, can we be sure that the discriminant equals 0?</p> <p>So far, I have tried to complete the square for the general quadratic, and got to:</p> <p><span class="math-container">$a((x+\frac{b}{2a})^2-\frac{b^2}{4a^2}+\frac ca)$</span></p> <p>But am now stuck. What should I do next, or is there a totally different route I should be taking?</p>
tomi
215,986
<p>A perfect square takes the form <span class="math-container">$(px+q)^2$</span></p> <p>By expanding the brackets this can be shown to be equal to <span class="math-container">$p^2x^2+2pqx+q^2$</span></p> <p>You want <span class="math-container">$ax^2+bx+c \equiv p^2x^2+2pqx+q^2$</span></p> <p>Comparing coefficients of <span class="math-container">$x^2$</span> gives you <span class="math-container">$a=p^2$</span></p> <p>Comparing coefficients of <span class="math-container">$x$</span> gives you <span class="math-container">$b=2pq$</span></p> <p>Comparing constant terms gives you <span class="math-container">$c=q^2$</span></p> <p>The discriminant is <span class="math-container">$b^2-4ac$</span></p> <p>Using the expressions above means that <span class="math-container">$b^2-4ac=(2pq)^2-4(p^2)(q^2)=4p^2q^2-4p^2q^2=0$</span></p>
977,232
<p>We have in and out degree of a directed graph G. if G does not includes loop (edge from one vertex to itself) and does not include multiple edge (from each vertex to another vertex at most one directed edge), we want to check for how many of the following we have a corresponding graph. the vertex number start from 1 to n and the degree sequence are sort by vertex numbers.</p> <p>a) $d_{in}=(0,1,2,3), d_{out}=(2,2,1,1)$</p> <p>b) $d_{in}=(2,2,1), d_{out}=(2,2,1)$</p> <p>c) $d_{in}=(1,1,2,3,3), d_{out}=(2,2,3, 1,2)$</p> <p>I want to find a nice way instead of drawing graph.</p> <p>for (C):</p> <p><img src="https://i.stack.imgur.com/nR4VK.jpg" alt="enter image description here"></p>
user140161
140,161
<p>You are right. For $(t^2, -3t, (6-t)^{1/2})$ to be parallel to $(2,-3,1)$ it must be $p$ times $(2,-3,1)$, where $p$ is a scalar. Set up two equations by equating any two of the $i$, $j$, and $k$ components and solve them simultaneously for $p$ and $t$. When you have obtained these values, plug back into the original vector to confirm that the vectors are indeed parallel.</p>
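To make the method concrete (assuming, as the answer suggests, that the vector in question is $(t^2,\,-3t,\,\sqrt{6-t})$): equating the $j$-components gives $p=t$, the $i$-components then give $t^2=2t$, so $t=2$ and $p=2$, which the $k$-component confirms since $\sqrt{6-2}=2$. A small script of my own verifying the cross product vanishes:

```python
from math import sqrt, isclose

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

t, p = 2, 2
vec = (t ** 2, -3 * t, sqrt(6 - t))   # (4, -6, 2)
ref = (2, -3, 1)
assert all(isclose(c, 0.0, abs_tol=1e-12) for c in cross(vec, ref))  # parallel
assert all(isclose(x, p * y) for x, y in zip(vec, ref))              # vec = p * ref
print("t = 2 (with p = 2) makes the vectors parallel")
```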
1,136,278
<p>Prove that $n(n-1)&lt;3^n$ for all $n≥2$, by induction. What I did: </p> <p>Step 1- Base case: take $n=2$:</p> <p>$2(2-1)&lt;3^2$</p> <p>$2&lt;9$ Thus it holds.</p> <p>Step 2- Hypothesis: </p> <p>Assume: $k(k-1)&lt;3^k$</p> <p>Step 3- Induction: We wish to prove that:</p> <p>$(k+1)(k)$&lt;$3^k.3^1$</p> <p>We know that $k≥2$, so $k+1≥3$ </p> <p>Then $3k&lt;3^k.3^1$</p> <p>Therefore, $k&lt;3^k$, which is true for all values of $n≥k≥2$</p> <p>Is that right, or is the method wrong? Are there any other methods?</p>
rafforaffo
91,488
<p>I think that your solution is fine. However, I would phrase it slightly differently.</p> <p>Step 2. To be completely formal, I would say: Let $k\geq 2$ and assume $k(k-1)&lt;3^k$.</p> <p>Step 3. We need to show $k(k+1)&lt;3^{k+1}$. We have $$k(k+1)=k(k-1)+2k&lt;3^k+2k&lt;3^k+3^k+3^k=3^{k+1}$$ where we have used the inductive hypothesis and the fact that $k&lt;3^k$, which holds for all $k\geq 0$.</p> <p>Notice that you can prove that the inequality is true for all $n\geq0$ (indeed the base case will become trivial!).</p>
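Both the inequality and the auxiliary fact $k<3^k$ used in the inductive step can of course be spot-checked by brute force; a quick sketch:

```python
# brute-force check of n(n-1) < 3^n, and of the auxiliary fact n < 3^n
for n in range(200):
    assert n * (n - 1) < 3 ** n
    assert n < 3 ** n
print("n(n-1) < 3^n holds for n = 0, ..., 199")
```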
1,677,359
<p>$\sum_{i=0}^n 2^i = 2^{n+1} - 1$</p> <p>I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks</p>
GoodDeeds
307,825
<p>Let $$\tag1S=1+2+2^2+\cdots+2^n$$ Multiplying both sides by $2$, $$\tag22S=2+2^2+2^3+\cdots+2^{n+1}$$ Subtracting $(1)$ from $(2)$, $$S=2^{n+1}-1$$</p> <p>This is a specific example of the sum of a geometric series. In general, for $r\neq 1$, $$a+ar+ar^2+\cdots+ar^n=a\left(\frac{r^{n+1}-1}{r-1}\right)$$</p>
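Both the specific identity and the general geometric-sum formula are easy to verify exactly in a few lines (a sketch of my own, using rational arithmetic so there is no rounding):

```python
from fractions import Fraction

# the specific identity: 1 + 2 + 2^2 + ... + 2^n = 2^(n+1) - 1
for n in range(20):
    assert sum(2 ** i for i in range(n + 1)) == 2 ** (n + 1) - 1

# the general geometric sum a + ar + ... + ar^n = a(r^(n+1) - 1)/(r - 1), valid for r != 1
for r in (Fraction(3), Fraction(1, 2), Fraction(-2)):
    a = Fraction(5)
    for n in range(10):
        lhs = sum(a * r ** i for i in range(n + 1))
        assert lhs == a * (r ** (n + 1) - 1) / (r - 1)
print("both identities verified exactly")
```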
1,677,359
<p>$\sum_{i=0}^n 2^i = 2^{n+1} - 1$</p> <p>I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks</p>
copper.hat
27,978
<p>\begin{eqnarray} 2^{n+1} &amp;=&amp; 2^n+2^n \\ &amp;=&amp;2^n + 2^{n-1} + 2^{n-1} \\ &amp;\vdots&amp; \\ &amp;=&amp; 2^n + 2^{n-1} +2^{n-1} + \cdots + 2 +2 \\ &amp;=&amp; 2^n + 2^{n-1} +2^{n-1} + \cdots + 2 +1 + 1 \end{eqnarray}</p>
1,677,359
<p>$\sum_{i=0}^n 2^i = 2^{n+1} - 1$</p> <p>I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks</p>
choco_addicted
310,026
<p>Mathematical induction will also help you.</p> <ul> <li>(Base step) When $n=0$, $\sum_{i=0}^0 2^i = 2^0 = 1= 2^{0+1}-1$.</li> <li>(Induction step) Suppose that there exists $n$ such that $\sum_{i=0}^n 2^i = 2^{n+1}-1$. Then $\sum_{i=0}^{n+1}2^i=\sum_{i=0}^n 2^i + 2^{n+1}= (2^{n+1}-1)+2^{n+1}=2^{n+2}-1.$</li> </ul> <p>Therefore given identity holds for all $n\in \mathbb{N}_0$.</p> <hr> <p><strong>Edit:</strong> If you want to apply combinations and Pascal's triangle, observe \begin{align} 2^0&amp;=\binom{0}{0}\\ 2^1&amp;=\binom{1}{0}+\binom{1}{1}\\ 2^2&amp;=\binom{2}{0}+\binom{2}{1}+\binom{2}{2}\\ 2^3&amp;=\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\\ \vdots&amp;=\vdots\\ 2^n&amp;=\binom{n}{0}+\binom{n}{1}+\binom{n}{2}+\binom{n}{3}+\cdots+\binom{n}{n} \end{align} <a href="http://www.artofproblemsolving.com/wiki/index.php/Combinatorial_identity" rel="nofollow">Hockey stick identity</a> says that $$ \sum_{i=r}^n \binom{i}{r}=\binom{n+1}{r+1}. $$ and so \begin{align} \binom{0}{0}+\binom{1}{0}+\cdots+\binom{n}{0}&amp;=\binom{n+1}{1}\\ \binom{1}{1}+\binom{2}{1}+\cdots+\binom{n}{1}&amp;=\binom{n+1}{2}\\ \binom{2}{2}+\binom{3}{2}+\cdots+\binom{n}{2}&amp;=\binom{n+1}{3}\\ \vdots&amp;=\vdots\\ \binom{n}{n}&amp;=\binom{n+1}{n+1} \end{align} Add all terms, then we get \begin{align} \sum_{i=0}^n 2^i &amp;= \sum_{i=1}^{n+1} \binom{n+1}{i}\\ &amp;=\sum_{i=0}^{n+1}\binom{n+1}{i}-1\\ &amp;=2^{n+1}-1. \end{align}</p>
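The hockey stick identity and the row sums of Pascal's triangle used above can both be sanity-checked numerically; a short sketch of my own using `math.comb`:

```python
from math import comb

def hockey_stick(n, r):
    """Check sum_{i=r}^{n} C(i, r) == C(n+1, r+1)."""
    return sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)

# hockey stick identity on a grid of (n, r)
assert all(hockey_stick(n, r) for n in range(30) for r in range(n + 1))

# row sums of Pascal's triangle: sum_i C(n, i) == 2^n
assert all(sum(comb(n, i) for i in range(n + 1)) == 2 ** n for n in range(30))
print("hockey stick and row-sum identities hold on the tested range")
```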
745,613
<p>I've been pondering this since yesterday. I</p> <blockquote> <p>Is it true that given two irreducible polynomials <span class="math-container">$f(x)$</span> and <span class="math-container">$ g(x)$</span> will <span class="math-container">$f(g(x))$</span> or <span class="math-container">$g(f(x))$</span> be irreducible?</p> </blockquote>
voldemort
118,052
<p>This need not be true. Note that in <span class="math-container">$\mathbb{R}[x]$</span> any irreducible polynomial has degree either <span class="math-container">$1$</span> or <span class="math-container">$2$</span>. So, you can take two irreducible polynomials of degree 2, and compose them to get a reducible polynomial.</p>
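To make this concrete (the choice of polynomials is mine): take $f=g=x^2+1$, both irreducible over $\mathbb R$. Then $f(g(x)) = (x^2+1)^2+1 = x^4+2x^2+2$ has no real roots, yet it factors into two real quadratics built from its conjugate pairs of complex roots. A numeric sketch:

```python
import cmath

# f(g(x)) with f = g = x^2 + 1:  x^4 + 2x^2 + 2
target = [1, 0, 2, 0, 2]        # coefficients, highest degree first

r = cmath.sqrt(-1 + 1j)         # one root, since x^4 + 2x^2 + 2 = 0 gives x^2 = -1 +- i
# roots come in conjugate pairs {r, conj(r)} and {-r, -conj(r)};
# each pair yields a real quadratic x^2 - 2*Re(root)*x + |root|^2
q1 = [1, -2 * r.real, abs(r) ** 2]
q2 = [1,  2 * r.real, abs(r) ** 2]

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (highest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

product = poly_mul(q1, q2)
assert all(abs(a - b) < 1e-9 for a, b in zip(product, target))
print("x^4 + 2x^2 + 2 factors into two real quadratics: reducible over R")
```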
745,613
<p>I've been pondering this since yesterday. I</p> <blockquote> <p>Is it true that given two irreducible polynomials <span class="math-container">$f(x)$</span> and <span class="math-container">$ g(x)$</span> will <span class="math-container">$f(g(x))$</span> or <span class="math-container">$g(f(x))$</span> be irreducible?</p> </blockquote>
Pipicito
93,689
<p>I don't know a good answer to this question. But here are a couple of simple ideas for easy cases.</p> <p>1) If $\mathbb{K}$ is a field then for any two irreducible polynomials $f$ and $g$ in $\mathbb{K}[x]$ such that $g$ has degree one we have that $f \circ g$ is irreducible. To see this, write $g=ax+b$ with $a\neq 0$. Then if $f \circ g$ were reducible you would have $p$ and $q$ in $\mathbb{K}[x]$, both of them with degree at least one, with $f\circ g= p\cdot q$. But as $\mathbb{K}$ is a field you can find the "inverse" of $g$, call it $t=a^{-1}x-ba^{-1}$. So now $ f = f \circ x = f\circ( g \circ t) = (f\circ g) \circ t = (p\cdot q) \circ t = (p\circ t)\cdot (q \circ t)$. But this contradicts the irreducibility of $f$ since $p\circ t$ and $q \circ t$ have degree at least one.</p> <p>2) If in 1) we replace $\mathbb{K}$ with an arbitrary ring then it's no longer true. For example in $\mathbb{Z}[x]$ you can take $f=5x+7$ and $g=2x+3$. Both of them are irreducible but $f \circ g = 2\cdot (5x+11)$. Note that in $\mathbb{Z}[x]$ the polynomial $2$ is not a unit.</p> <p>3) If in 1) we take $f$ to be of degree one then it's no longer true. Even more, if $g$ is any polynomial of degree two or more we can always find a polynomial $f$ of degree one such that $f\circ g$ is reducible. Since $g$ has degree two or more we can write $g= x\cdot h + a$, where $a$ is the "constant term" of $g$. Now take $f=x-a$. It turns out that $f\circ g= x \cdot h$. This last equality gives a true factorization since the degree of $f\circ g$ is the same as the degree of $g$ because $f$ has degree one and $\mathbb{K}$ is, in particular, an integral domain. Note that the reasoning works fine in any integral domain.</p>
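Point 2) is easy to verify mechanically; in the sketch below (my own check) I compose $f=5x+7$ with $g=2x+3$ and confirm that the content of $f\circ g$ is $2$, a non-unit of $\mathbb Z[x]$, while $f$ and $g$ themselves are primitive:

```python
from math import gcd

def compose_linear(f, g):
    """f(g(x)) for linear integer polynomials f = (f1, f0) meaning f1*x + f0."""
    f1, f0 = f
    g1, g0 = g
    return (f1 * g1, f1 * g0 + f0)

f = (5, 7)   # 5x + 7, irreducible in Z[x] (primitive, degree 1)
g = (2, 3)   # 2x + 3, irreducible in Z[x] (primitive, degree 1)
fg = compose_linear(f, g)
assert fg == (10, 22)                  # f(g(x)) = 10x + 22 = 2(5x + 11)
content = gcd(fg[0], fg[1])
assert content == 2                    # 2 is not a unit in Z[x]: f(g(x)) is reducible
assert gcd(*f) == 1 and gcd(*g) == 1   # f and g are primitive
print("f(g(x)) = 10x + 22 = 2*(5x + 11)")
```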
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
Gerald Edgar
127
<p>Fatou's Lemma states: for nonnegative measurable functions $f_n$, $$ \int_E \liminf_{n\to\infty} f_n\;d\mu \le \liminf_{n \to \infty}\int_E f_n\;d\mu $$ The mnemonic is $$ \text{ILLLLLI}, $$ meaning "the Integral of the Lower Limit is Less than the Lower Limit of the Integral".</p>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
Nick C
470
<p>Recently, a student in my beginning algebra course offered the following to the class, regarding signed number multiplication:</p> <p>Assuming positivity is like <em>love</em>, and negativity is like <em>hate</em>, then...</p> <ul> <li>"If you love love, that's love." $\Rightarrow$ <em>positive</em> $\times$ <em>positive</em> = <em>positive</em></li> <li>"If you love hate, that's hate." $\Rightarrow$ <em>positive</em> $\times$ <em>negative</em> = <em>negative</em></li> <li>"If you hate love, that's hate." $\Rightarrow$ <em>negative</em> $\times$ <em>positive</em> = <em>negative</em></li> <li>"But if you hate hate, that's love." $\Rightarrow$ <em>negative</em> $\times$ <em>negative</em> = <em>positive</em></li> </ul> <p>[Read this by treating the first instance of <em>love</em> or <em>hate</em> as a verb.]</p>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
ZirJohn
9,865
<p>For the 4 quadrants of a Cartesian graph I say "All Students Take Calculus" counterclockwise (in order) to remember which trig fxns are positive in which quadrants.</p>
1,451,745
<p>Can someone check my logic here?</p> <p><strong>Question:</strong> How many ways are there to choose a $k$-person committee from a group of $n$ people? </p> <p><strong>Answer 1:</strong> there are ${n \choose k}$ ways. </p> <p><strong>Answer 2:</strong> condition on eligibility. Assume the creator of the committee is already in the committee. This leaves us with choosing $k - 1$ people from a group of $n - 1$ potentially eligible people. If all remaining people are eligible, there are ${n - 1 \choose k - 1}$ possible committees, if there are $n - 2$ eligible people, there are ${n - 2 \choose k - 1}$ committees, if there are $n - 3$ eligible people, there are ${n - 3 \choose k - 1}$ committees,..., if there are $k - 1$ eligible people there are ${k - 1 \choose k - 1}$ committees. Therefore,$${n - 1 \choose k - 1} + {n - 2 \choose k - 1} + {n - 3 \choose k - 1} + \dots + {k - 1 \choose k - 1} = {n \choose k}.$$</p>
dcstup
126,393
<p>Usually the logic is:</p> <p>If $A$ is in the committee, the problem reduces to choosing $k-1$ people from $n-1$ people.</p> <p>If $A$ is not in the committee, the problem reduces to choosing $k$ people from $n-1$ people, which we keep dividing:</p> <p>$\,$ If $B$ is in the committee, the problem reduces to choosing $k-1$ people from $n-2$ people.</p> <p>$\,$ If $B$ is not in the committee, the problem reduces to choosing $k$ people from $n-2$ people, which we keep dividing, and so on.</p> <p>So summing them up gives you the left hand side, while directly solving the problem gives you the right hand side.</p> <p>I don't think your logic applies because in the first place there is no such idea as "creator" in the question and assuming it with your real-world experience is not really mathematical.</p> <p>Apart from the creator problem, the remaining part of the logic does not apply too because it does not follow <a href="https://en.wikipedia.org/wiki/MECE_principle" rel="nofollow">MECE principle</a> (i.e. you have to make sure you did not double count and you counted all cases), since in the case where you choose $k-1$ people from $n-1$ or $n-2$ eligible candidates, there is no guarantee that you will not pick the same combination.</p>
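The identity itself is easy to sanity-check numerically; a quick sketch (the helper name is my own, and this is independent of either argument above):

```python
from math import comb

# Hockey-stick identity: C(k-1, k-1) + C(k, k-1) + ... + C(n-1, k-1) = C(n, k)
def hockey_stick_holds(n, k):
    lhs = sum(comb(m, k - 1) for m in range(k - 1, n))
    return lhs == comb(n, k)

# exhaustive check for all 1 <= k <= n <= 15
checks = [hockey_stick_holds(n, k) for n in range(1, 16) for k in range(1, n + 1)]
```

Every case in the range checks out, e.g. $\binom{3}{3}+\binom{4}{3}+\dots+\binom{9}{3}=210=\binom{10}{4}$.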
4,008,488
<p>While looking for the answer to my question I came across <a href="https://math.stackexchange.com/questions/1840801/why-is-ata-invertible-if-a-has-independent-columns?rq=1">this</a> post. It may be a silly idea, but if <span class="math-container">$A^{t}$</span> has independent rows, can I just transpose it and get <span class="math-container">$A$</span> with independent columns and proceed as is shown in the linked solution? Or does it not work this way?</p>
egreg
62,967
<p>Suppose <span class="math-container">$A$</span> is <span class="math-container">$m\times n$</span> and that <span class="math-container">$AA^T$</span> is not invertible. Then there exists <span class="math-container">$v\ne0$</span> (an <span class="math-container">$m\times1$</span> column vector) such that <span class="math-container">$$ AA^Tv=0 $$</span> This implies <span class="math-container">$v^TAA^Tv=0$</span>, hence <span class="math-container">$(A^Tv)^T(A^Tv)=\|A^Tv\|^2=0$</span>, so <span class="math-container">$A^Tv=0$</span>. Thus <span class="math-container">$v^TA=0$</span> and therefore <span class="math-container">$A$</span> doesn't have linearly independent rows.</p>
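A small pure-Python illustration (the matrices are my own examples, not from the answer): independent rows give $\det(AA^T)\neq 0$, while dependent rows give $\det(AA^T)=0$.

```python
# Form A A^T for an m x n matrix A given as a list of rows, then take the
# 2 x 2 determinant; no external linear-algebra library is assumed.
def matmul_AAT(A):
    m, n = len(A), len(A[0])
    return [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
            for i in range(m)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A_indep = [[1, 0, 2], [0, 1, 3]]   # independent rows
A_dep = [[1, 2, 3], [2, 4, 6]]     # second row is twice the first

d_indep = det2(matmul_AAT(A_indep))
d_dep = det2(matmul_AAT(A_dep))
```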
4,351,990
<p>I have just finished my undergrad and while I haven't studied much in representation theory I find it a very fascinating subject. My current interest is in differential equations, and I am wondering is there any ongoing research that combines these two areas?</p>
markvs
454,915
<p>There are strong connections. For example look at <a href="https://mathoverflow.net/questions/335116/is-there-a-connection-between-representation-theory-and-pdes">this question in</a> MO and its answers.</p> <p>There are also older books and papers about connections between these subjects. For example, &quot;Theory of Differential Equations : Representation Theory and Automorphic Functions&quot; by I.M. Gelfand, M. I. Graev, and I. I. Pyatetskii-Shapiro.</p>
270,849
<p>I am trying to show that </p> <p>$P(E\mid E\cup F) \geq P(E \mid F)$.</p> <p>This is intuitively clear. But when expanding I get $P(E)\ P(F)\geq P(E\cup F)\ P(E \cap F)$. How to continue?</p>
mathemagician
49,176
<p>Let $a=P(E\cap F^c)$, $b=P(E\cap F)$ and $c=P(F\cap E^c)$. You have that $P(E)=a+b$, $P(F)=b+c$. Since $E\cup F=(E\cap F^c)\cup(E\cap F)\cup (F\cap E^c)$ and since the union is disjoint you have that $P(E\cup F)=a+b+c$. Therefore, the problem you stated reduces to showing $(a+b)(b+c)\geq b(a+b+c)$ which follows trivially since $ac=P(E\cap F^c)P(F\cap E^c)\geq 0$. </p>
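The algebra behind the last step can be checked exhaustively on a grid of rationals (a sketch of mine; the grid points need not be genuine probability assignments, since the inequality is purely algebraic):

```python
from fractions import Fraction
from itertools import product

grid = [Fraction(i, 4) for i in range(5)]   # 0, 1/4, 1/2, 3/4, 1

# (a+b)(b+c) - b(a+b+c) simplifies to ac, which is nonnegative
holds = all((a + b) * (b + c) >= b * (a + b + c)
            for a, b, c in product(grid, repeat=3))
slack_is_ac = all((a + b) * (b + c) - b * (a + b + c) == a * c
                  for a, b, c in product(grid, repeat=3))
```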
4,722
<p>Is there a necessary and sufficient condition for the boundary of a planar region to be a finite union of Jordan curves?</p>
some guy on the street
1,631
<p><em>It seems</em> I can't post comments of my own? This is not a complete answer.</p> <p>@buzzard, I'd say yours probably <em>isn't</em> a facetious comment, in that I can imagine a union of two Jordan curves --- that is, an intersection of two connected open planar sets --- looking particularly wild. For example, take as one a Jordan curve with positive measure, and for the second take a small isotopy of the first.</p> <p>With Harald, I prefer to assume that "region" means "open subset"; this simplifies distinctions between "(connected) component" and "maximal connected subset".</p> <p>Obvious note: neither the region nor its complement can have an infinite sequence of separated components; in other words, the (open) region's closure and its (closed) complement both have a finite number of components. But I don't believe this is sufficient; again, I'm thinking of rather fractalous regions, but they're trickier to describe.</p>
4,722
<p>Is there a necessary and sufficient condition for the boundary of a planar region to be a finite union of Jordan curves?</p>
Greg Kuperberg
1,450
<p>I can think of an important necessary condition: The boundary has to be locally contractible; in particular, locally connected. The topologist's sine curve is not locally connected, while the Hawaiian earring is not locally simply connected, and both occur in boundaries of open sets.</p> <p>If you throw in the condition that the open set should be the interior of its closure (although that does not always happen even if the boundary is a union of Jordan curves), then at the moment I can't think of a counterexample.</p>
2,368,827
<p>I would like to know how a piecewise function and its derivative would look like under these circumstances. Suppose that the function is continuous (and also nice like poly, trig etc) but defined differently for points $\le a$ and point $\gt a $</p> <p>1) The function is differentiable at $a$. Then the derivative would be continuous at $a$, but would it be differentiable at $a$?</p> <p>2) The function is continuous at $a$ but not differentiable at $a$. Then the derivative would not be defined at $a$ but defined elsewhere. Is this correct? Also would the left and right limits of the derivative be equal at $a$?</p>
Mundron Schmidt
448,151
<p>You have to be very careful. Consider $$ f(x)=\begin{cases} x^2\sin\left(\frac1x\right) &amp; x&gt;0\\0&amp; x\leq 0\end{cases} $$ This function is differentiable at $0$ since $$ \lim_{h\to 0^+}\left|\frac{f(h)-f(0)}h\right|=\lim_{h\to 0^+}h\left|\sin\left(\frac1h\right)\right|=0\text{ and }\lim_{h\to 0^-}\frac{f(h)-f(0)}h=0. $$ But $$ f'(x)=\begin{cases} 2x\sin\left(\frac1x\right)-\cos\left(\frac1x\right) &amp; x&gt;0\\0&amp;x\leq 0\end{cases} $$ is not continuous at $0$!</p>
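A numeric sketch (the sample points are my choice) of why $f'$ has no limit as $x\to 0^+$: along $x=1/(2\pi n)$ it is near $-1$, while along $x=1/((2n+1)\pi)$ it is near $+1$.

```python
import math

def fprime(x):
    # f'(x) = 2x sin(1/x) - cos(1/x) for x > 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

n = 1000
x_even = 1 / (2 * math.pi * n)        # 1/x = 2*pi*n, so cos(1/x) = 1
x_odd = 1 / ((2 * n + 1) * math.pi)   # 1/x = (2n+1)*pi, so cos(1/x) = -1

near_minus_one = fprime(x_even)
near_plus_one = fprime(x_odd)
```

Both sample points lie within $10^{-3}$ of $0$, yet the derivative values differ by about $2$.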
1,656,136
<p>I'm trying to track down an example of a ring in which there exists an infinite chain of ideals under inclusion. (i.e. $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq...$)</p>
RghtHndSd
86,816
<p>Edit: By "ring" I mean commutative ring with identity.</p> <p>Every ring is a quotient of an infinite polynomial ring. Let $R$ be any ring, and let $\{r_i\}_{i \in I}$ be a set of generators (under the ring operations). For example, we could simply take every single element of the ring $R$. Then define a ring homomorphism $\varphi : \mathbb{Z}[x_i : i \in I] \rightarrow R$ by sending $x_i \rightarrow r_i$. This is a surjective homomorphism, and thus $$R \cong \mathbb{Z}[x_i : i \in I]/\ker \varphi.$$</p> <p>In particular, any non-Noetherian integral domain is a quotient of an infinite polynomial ring. So in some sense, your example is a fundamental one.</p>
1,656,136
<p>I'm trying to track down an example of a ring in which there exists an infinite chain of ideals under inclusion. (i.e. $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq...$)</p>
Will Byrne
214,346
<p>By definition, such a ring is non-Noetherian. A good example of a non-Noetherian ring is $F[X_1, X_2, X_3,...]$, the ring of polynomials over a field F in countably infinite indeterminates. In this ring, we have the infinite chain of generated ideals $(X_1) \subsetneq (X_1, X_2) \subsetneq (X_1, X_2, X_3) \subsetneq...$.</p> <p>Claim: $X_{n+1} \notin (X_1, ..., X_n)$</p> <p>Proof: Suppose for contradiction that $X_{n+1} \in (X_1, ..., X_n)$. Then, we would be able to write $X_{n+1} = a_1 \cdot X_1 \,+ ... + \,a_n \cdot X_n$ for some $a_1, ..., a_n \in F[X_1, X_2, X_3,...]$. If we take all the indeterminate arguments of each $a_i$ to be $0$ (there are finitely many arguments for each), it follows that $X_{n+1} = a_k \cdot X_k\, + ... + \,a_l\cdot X_l$, where $a_k,...,a_l$ are the constant functions among these $a_1, ..., a_n$. This is a contradiction because $X_{n+1}$ can now be written as a linear combination of certain other indeterminates, when its choice should be unconstrained. </p> <p>To see this last step more clearly, it can help to actually consider $f(X_{n+1}) = X_{n+1}$ and $g(X_k,...,X_l) = a_k \cdot X_k\, + ... + \,a_l\cdot X_l$. Note that both of these polynomials reside in $F[X_1, X_2, X_3,...]$. Moreover, by the results above, $f(X_{n+1}) \equiv g(X_k,...,X_l)$. Now take $X_{n+1} = 1$ and $X_k= ...=X_l=0$. Then, it follows that </p> <p>\begin{equation} 1 = f(1) = g(0,...,0) = a_k \cdot 0\, + ... + \,a_l\cdot 0 = 0 \end{equation} Hence, $1=0$, which is a contradiction because F is a field and taken to have nontrivial multiplication. The claim follows.</p> <p>For more info on Noetherian rings, check out the <a href="https://en.wikipedia.org/wiki/Noetherian_ring" rel="nofollow">wikipedia page</a>.</p>
1,656,136
<p>I'm trying to track down an example of a ring in which there exists an infinite chain of ideals under inclusion. (i.e. $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq...$)</p>
zyx
14,120
<p>For a bi-infinite chain of ideals (and bounded transcendence degree), start from $k[X]$ and adjoin $2^n$-th roots of $X$ for all $n$. </p> <p>The inclusion ordering of the principal ideals $(X^{2^i})$ for $i \in \mathbb{Z}$ is equivalent to the reversed ordering of integers, via the correspondence $2^i \leftrightarrow i$.</p>
2,664,341
<blockquote> <p>Simplify $$\frac{1}{\sqrt[3]1+\sqrt[3]2+\sqrt[3]4}+\frac{1}{\sqrt[3]4+\sqrt[3]6+\sqrt[3]9}+\frac{1}{\sqrt[3]9+\sqrt[3]{12}+\sqrt[3]{16}}$$</p> </blockquote> <p>I have no idea how to do this. I tried using the idea of multiplying each term by its conjugate, but I seem to be getting nowhere. Is there any hint as to how to approach this?</p>
I was suspended for talking
474,690
<p>Hint: Let $f_n := f\chi_{E_n}$, where $\chi$ is the characteristic function. Then $f \chi_E = \lim_{n\to\infty} f_n$. Hence you would like to move the limit inside the integral, which theorem would be useful for that?</p>
633,858
<p>If $G$ is a cyclic group of order 24, then how many elements of order 4 are in $G$? I can't understand how to find it, step by step. </p>
Mikasa
8,581
<p>If $G=\langle a\rangle =\{a^0,a^1,a^2,\cdots,a^{23}\}$ then $$|a|=24\\|a^2|=k\to 24\mid2k\to12|k\to k=12.~\text{(because 12 is the least positive integer in this case)}\\|a^3|=k\to 24|3k\to8|k\to k=8.~\text{(because 8 is the least positive integer in this case)}\\|a^4|=k\to 24|4k\to6|k\to k=6.~\text{(because 6 is the least positive integer in this case)}\\|a^5|=k\to 24|5k,~\text{but} \gcd(24,5)=1\to 24|k\to\\ k=24.~\text{(because 24 is the least positive integer in this case)}\\ \vdots\\$$ By this mechanical way you can see why above <strong>lemma</strong> works.</p>
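The mechanical computation above is the standard formula $|a^k| = 24/\gcd(24,k)$; enumerating all exponents (my own sketch) confirms there are exactly two elements of order $4$, namely $a^6$ and $a^{18}$:

```python
from math import gcd

N = 24
# order of a^k in a cyclic group <a> of order N is N / gcd(N, k)
orders = {k: N // gcd(N, k) for k in range(N)}
order_four = sorted(k for k, d in orders.items() if d == 4)
```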
2,993,979
<p>I tried to determine if <span class="math-container">$n\cdot \arctan (\frac 1n)$</span> is divergent or convergent. </p> <p>My solution is in the two pictures. I really have no clue as to how to solve it, so I tried something, but it cannot be right. At least that's what I think.</p> <p>I am sorry in advance for my bad maths.</p> <p>Appreciate all help :)<a href="https://i.stack.imgur.com/Uf3nt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uf3nt.png" alt="enter image description here"></a></p>
paf
333,517
<p><span class="math-container">$$n\arctan\left(\dfrac1n\right) = \dfrac{\arctan\left(\dfrac1n\right)}{\dfrac1n} = \dfrac{\arctan\left(t\right)}{t} =\dfrac{\arctan\left(t\right) - \arctan 0}{t-0} $$</span> if <span class="math-container">$t:=1/n$</span>. This last expression tends to <span class="math-container">$\arctan'(0)=\dfrac1{1+0^2}=1$</span>.</p> <p>Thus, the series <span class="math-container">$\displaystyle \sum n\arctan\left(\dfrac1n\right)$</span> is divergent since its general term doesn't tend to 0.</p>
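A quick numeric confirmation (mine) that the general term tends to $1$ rather than $0$:

```python
import math

# n * arctan(1/n) -> 1 as n -> infinity
terms = {n: n * math.atan(1 / n) for n in (10, 100, 10**4, 10**6)}
```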
2,993,979
<p>I tried to determine if <span class="math-container">$n\cdot \arctan (\frac 1n)$</span> is divergent or convergent. </p> <p>My solution is in the two pictures. I really have no clue as to how to solve it, so I tried something, but it cannot be right. At least that's what I think.</p> <p>I am sorry in advance for my bad maths.</p> <p>Appreciate all help :)<a href="https://i.stack.imgur.com/Uf3nt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uf3nt.png" alt="enter image description here"></a></p>
saulspatz
235,128
<p>The <span class="math-container">$n$</span>th term doesn't go to zero, so the series diverges. </p>
2,993,979
<p>I tried to determine if <span class="math-container">$n\cdot \arctan (\frac 1n)$</span> is divergent or convergent. </p> <p>My solution is in the two pictures. I really have no clue as to how to solve it, so I tried something, but it cannot be right. At least that's what I think.</p> <p>I am sorry in advance for my bad maths.</p> <p>Appreciate all help :)<a href="https://i.stack.imgur.com/Uf3nt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uf3nt.png" alt="enter image description here"></a></p>
Community
-1
<p>From </p> <p><span class="math-container">$$1-x^2\le\frac1{1+x^2}\le1$$</span> you deduce, by integrating from <span class="math-container">$0$</span>,</p> <p><span class="math-container">$$x-\frac{x^3}3\le\arctan x\le x$$</span> and for <span class="math-container">$x&gt;0$</span></p> <p><span class="math-container">$$1-\frac{x^2}3&lt;\frac{\arctan x}x&lt;1.$$</span></p> <p><a href="https://i.stack.imgur.com/ZM2ti.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZM2ti.png" alt="enter image description here"></a></p>
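The final sandwich can be spot-checked numerically (the sample points are my own choice):

```python
import math

# check 1 - x^2/3 < arctan(x)/x < 1 at assorted x > 0
xs = [0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 10.0]
sandwich = all(1 - x * x / 3 < math.atan(x) / x < 1 for x in xs)
```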
2,706,141
<p>I've been working on a math problem recently, a small subpart of which is this. I don't want to post the whole problem and be spoon-fed it, but I've been struggling with this subpart, and since my math skills are still trivial the solution may require maths which I have to learn, so:</p> <p>Can the product $LR$, where $L$ is the hypotenuse of a right-angled triangle and $R$ is its base, be expressed using trigonometric relations of <strong>only</strong> $\theta$? Where $\theta$ is the angle between the hypotenuse and <strong>height H</strong> of the right-angled triangle?</p> <p>If yes, derive the expression; otherwise, prove it is not possible.</p>
MrYouMath
262,304
<p>Hint: Use partial fractions</p> <p>$$\int\dfrac{dx}{(\sqrt{3}x-\sqrt{7})(\sqrt{3}x+\sqrt{7})}=a\int\dfrac{dx}{\sqrt{3}x-\sqrt{7}}+b\int\dfrac{dx}{\sqrt{3}x+\sqrt{7}}$$</p> <p>In which the coefficients can be directly determined by Euler's trick:</p> <p>$$a=\left.\dfrac{1}{\sqrt{3}x+\sqrt{7}}\right|_{x=\dfrac{\sqrt{7}}{\sqrt{3}}}$$ $$b=\left.\dfrac{1}{\sqrt{3}x-\sqrt{7}}\right|_{x=-\dfrac{\sqrt{7}}{\sqrt{3}}}.$$</p>
203,111
<p>Assume $(A_{i})_{i\in\Bbb N}$ to be an infinite sequence of sets of natural numbers, satisfying</p> <p>$$A_{0}\subseteq A_{1}\subseteq A_{2}\subseteq A_{3}\cdots\subseteq\Bbb N\tag{*}$$</p> <p>For each property $p_{i}$ shown below, state whether </p> <p>• the hypothesis (*) is sufficient to conclude that $p_{i}$ holds; or</p> <p>• the hypothesis (*) is sufficient to conclude that $p_{i}$ does not hold; or</p> <p>• the hypothesis (*) is not sufficient to conclude anything about the truth of $p_{i}$ .</p> <p>Justify your answers (briefly).</p> <ol> <li><p>$p_{1}$ : $\forall k\in\Bbb N.\ A_{k}=\bigcup_{i=0}^{k}A_{i}$</p></li> <li><p>$p_{2}$ : for all $i$, if $A_{i}$ is infinite, then $A_{i}=A_{i+1}$</p></li> <li><p>$p_{3}$ : if $\forall i\in\Bbb N.\ A_{i}\neq A_{i+1}$, then $\bigcup_{i=0}^{\infty}A_{i}=\Bbb N$</p></li> <li><p>$p_{4}$ : if $\forall i\in\Bbb N.\ A_{i}$ is finite, then $\bigcup_{i=0}^{\infty}A_{i}$ is finite</p></li> <li><p>$p_{5}$ : if $\forall i\in\Bbb N.\ A_{i}$ is finite, then $\bigcup_{i=0}^{\infty}A_{i}$ is infinite</p></li> <li><p>$p_{6}$ : if $\forall i\in\Bbb N.\ A_{i}$ is infinite, then $\bigcup_{i=0}^{\infty}A_{i}$ is infinite</p></li> </ol>
Brian M. Scott
12,042
<p>You really shouldn’t have any trouble with $p_1$ or $p_6$. I’ll do $p_3$ as an illustration.</p> <p>First, $p_3$ is consistent with (*). For example, let $A$ be the set of odd positive integers, and for $n\in\Bbb N$ let $A_n=A\cup\{2k:k\le n\}$: then $2(n+1)\in A_{n+1}\setminus A_n$, and $\bigcup_{n\in\Bbb N}A_n=\Bbb N$.</p> <p>On the other hand, $\lnot p_3$ is also consistent with (*). This time let $A_n=A\cup\{2k+2:k\le n\}$. Once again the sets $A_n$ form a strictly increasing sequence of subsets of $\Bbb N$, but $0\notin\bigcup_{n\in\Bbb N}A_n$.</p> <p>Thus, (*) is not sufficient to decide $p_3$ one way or the other.</p>
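Both examples can be replayed on a finite truncation of $\Bbb N$ (my own encoding, cutting off at $100$):

```python
LIMIT = 100
N = set(range(LIMIT))
odds = {m for m in N if m % 2 == 1}

def A(n):
    # first example: A_n = odds U {2k : k <= n}; the union over n is all of N
    return odds | {2 * k for k in range(n + 1)}

def B(n):
    # second example: B_n = odds U {2k+2 : k <= n}; 0 never appears
    return odds | {2 * k + 2 for k in range(n + 1)}

strictly_increasing = all(A(n) < A(n + 1) and B(n) < B(n + 1) for n in range(20))
union_A = set().union(*(A(n) for n in range(LIMIT // 2)))
union_B = set().union(*(B(n) for n in range(LIMIT // 2 - 1)))
```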
380,177
<p>In mathematics, I want to know what is indeed the difference between a <strong>ring</strong> and an <strong>algebra</strong>?</p>
Community
-1
<p>In the contexts I'm used to, rings are defined to have a constant $1$ and the corresponding axioms making it a multiplicative unit. Algebras, however, do not.</p> <p>And while you can talk about (for some fixed ring $R$) "rings over $R$" just as you can "algebras over $R$", the latter phrase is far more common than the former.</p>
498,785
<p>I'm trying to solve this problem, but I'm not even sure how to formulate it in a coherent mathematical manner, or even what branch of mathematics this might fall in to.</p> <p>Basically I have a set of weights, where each weight individually must remain in the range $[0,1]$. I want to change the mean of the weights to some new mean, also in the $[0,1]$ range, by modifying all the weights slightly (that is, I can't add or remove weights; only modify their values).</p> <p>Also, ideally, after changing the mean to a new value, if I do the algorithm again, and try to return to the original mean, I'll get the same original weights. That is, the mapping function can work as its own inverse. Which I think implies certain things about the distribution of the values of the weights before and after the mapping, but I'm not sure how to describe it in mathematical terms.</p> <p>Last, the amount of movement of individual weights should be minimized, probably in a least squares sort of way. That is, I'd prefer to move all the values a slight amount over moving a single value from 0 to 1, for instance.</p> <p>Does anyone know how I might go about this sort of remapping? Basically I have four requirements:</p> <ol> <li>After modifying the original weights, the new values stay within $[0,1]$.</li> <li>The new mean of the modified weights must be the mean I wanted</li> <li>The mapping can be applied again to get back to the original weights.</li> <li>The change in weights is minimized in a least squares-esque manner.</li> </ol>
JoshS
95,647
<p>I'd use an exponential transform, e.g. raise each value to the same real power. I'm not sure it's possible to get a closed form for the power at which to raise to achieve a specific target mean, though. Iterative approximation should work for this, if performance is not too much of a concern.</p> <p>see <a href="https://math.stackexchange.com/questions/498235/is-there-a-closed-form-for-sum-i-1n-a-ix-nk">https://math.stackexchange.com/questions/498235/is-there-a-closed-form-for-sum-i-1n-a-ix-nk</a></p>
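A sketch of the iterative approximation (my own; it assumes every weight lies strictly between $0$ and $1$ and that the target mean is reachable, since $w^p$ is then decreasing in $p$):

```python
def remap_mean(weights, target, iters=100):
    # bisect on the exponent p in log space until mean(w^p) hits the target
    lo, hi = 1e-6, 1e6
    for _ in range(iters):
        p = (lo * hi) ** 0.5
        mean = sum(w ** p for w in weights) / len(weights)
        if mean > target:
            lo = p          # mean too high -> a larger exponent lowers it
        else:
            hi = p
    return [w ** p for w in weights]

w = [0.2, 0.5, 0.9]
out = remap_mean(w, 0.7)    # raises the mean from 0.5333... toward 0.7
```

Unlike a uniform additive shift, this transform can never push a weight outside $[0,1]$, and it preserves the ordering of the weights.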
498,785
<p>I'm trying to solve this problem, but I'm not even sure how to formulate it in a coherent mathematical manner, or even what branch of mathematics this might fall in to.</p> <p>Basically I have a set of weights, where each weight individually must remain in the range $[0,1]$. I want to change the mean of the weights to some new mean, also in the $[0,1]$ range, by modifying all the weights slightly (that is, I can't add or remove weights; only modify their values).</p> <p>Also, ideally, after changing the mean to a new value, if I do the algorithm again, and try to return to the original mean, I'll get the same original weights. That is, the mapping function can work as its own inverse. Which I think implies certain things about the distribution of the values of the weights before and after the mapping, but I'm not sure how to describe it in mathematical terms.</p> <p>Last, the amount of movement of individual weights should be minimized, probably in a least squares sort of way. That is, I'd prefer to move all the values a slight amount over moving a single value from 0 to 1, for instance.</p> <p>Does anyone know how I might go about this sort of remapping? Basically I have four requirements:</p> <ol> <li>After modifying the original weights, the new values stay within $[0,1]$.</li> <li>The new mean of the modified weights must be the mean I wanted</li> <li>The mapping can be applied again to get back to the original weights.</li> <li>The change in weights is minimized in a least squares-esque manner.</li> </ol>
Alecos Papadopoulos
87,400
<p>What you describe is a minimization problem under constraints. I will provide a mathematical formalization, and work the problem up to a point.</p> <p>We have $n$ weights $w_1,...,w_n$, with $w_i \in [0,1], \; i=1,...n$. We want to arrive at new weights $w_i^* = w_i+d_i$, with $ |d_i|\le 1-w_i$. This is a constraint that guarantees that the new weights will range also in $[0,1]$. We want to achieve a specific mean value for the new set of weights, $\frac 1n \sum_{i=1}^nw_i^* = \bar w^* \Rightarrow \sum_{i=1}^n(w_i+d_i) = n\bar w^*$. And we want to determine the new weights under a least-squares criterion. Then we have the minimization problem</p> <p>$$ \min_{d_1,...,d_n}\sum_{i=1}^nd_i^2 \\ s.t. \qquad \sum_{i=1}^n(w_i+d_i) = n\bar w^* \\ \qquad d_i^2\le (1-w_i)^2 \qquad i=1,...,n $$</p> <p>The Lagrangean of the problem is </p> <p>$$L =\sum_{i=1}^nd_i^2 - q\left(\sum_{i=1}^n(w_i+d_i) - n\bar w^*\right) -\sum_{i=1}^{n}\mu_i\left(d_i^2- (1-w_i)^2\right) \qquad q\neq 0\;,\; \mu_i\ge 0$$</p> <p>First-order necessary conditions are $$\frac {\partial L}{\partial d_i} = 0 \Rightarrow 2d_i - q - 2\mu_id_i =0 \qquad i=1,...,n$$</p> <p>$$\Rightarrow 2(1-\mu_i) d_i = q \qquad i=1,...,n$$</p> <p>From this relation we conclude that <em>all weights must change</em>, because $q\neq 0 \Rightarrow d_i\neq 0\qquad i=1,...,n$.</p> <p>Moreover we obtain the following relation, since the multiplier $q$ is common in all equations:</p> <p>$$\frac {d_i}{d_j} = \frac {1-\mu_j}{1-\mu_i}$$</p> <p>We can draw some conclusions from this relation. 
</p> <p><em>If</em> the optimal solution dictates that <em>no new weight will be equal to zero or unity</em>, then all $\mu_i$ multipliers will be zero (non-binding) and we will have $$\frac {d_i}{d_j} = 1 \Rightarrow d_1=...=d_n =d$$ Then, using the constraint we obtain $$\sum_{i=1}^n(w_i+d_i) = n\bar w^* \Rightarrow \sum_{i=1}^nw_i+nd = n\bar w^* \Rightarrow d=\bar w^* - \bar w$$ But such a solution will be feasible if either $\bar w^* &gt; \bar w$ <em>and</em> $\forall\; w_i,\; w_i&lt; 1-(\bar w^* - \bar w)$,<br> or if $\bar w^* &lt; \bar w$ <em>and</em> $\forall\; w_i,\; w_i&gt; \bar w-\bar w^* $. Only in these two cases the specific solution will lead to all new weights being in $[0,1]$. If neither of these two cases holds (because, say, some initial weights are either zero or unity, or because you want to make a large change in the value of the average weight), then the solution will necessarily dictate that some new weights will be equal to zero or unity.</p> <p>By writing and working the problem <em>starting</em> from the new weights, you can derive what conditions should hold so that you can return back to the original set of weights.</p>
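In the interior (non-binding) case the optimum reduces to shifting every weight by the same amount $d=\bar w^*-\bar w$; a small sketch of mine, which also exhibits the invertibility the asker wanted:

```python
def shift_to_mean(weights, target_mean):
    # interior solution: every weight moves by d = target mean - current mean
    d = target_mean - sum(weights) / len(weights)
    new = [w + d for w in weights]
    if not all(0 <= w <= 1 for w in new):
        raise ValueError("uniform shift infeasible: a weight would leave [0, 1]")
    return new

w0 = [0.2, 0.4, 0.6]
w1 = shift_to_mean(w0, 0.5)   # mean moves from 0.4 to 0.5
w2 = shift_to_mean(w1, 0.4)   # shifting back recovers w0
```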
576,519
<p>Assume that $x+\frac{1}{x} \in \mathbb{N}$. Prove by induction that $$x^2+\frac1{x^2}, x^3+\frac1{x^3}, \dots , x^n+\frac1{x^n}$$ are also members of $\mathbb{N}$.</p> <p>I have my <em>base</em>: it is indeed true for $n=1$.</p> <p>I can assume it is true for $x^k+x^{-k}$ and then prove it is true for $x^{k+1}+x^{-(k+1)}$, but I'm stuck there.</p>
lhf
589
<p>Let $a_n = x^n + \frac{1}{x^n}$. Then $x^2 = a_1x - 1$ implies $a_{n+2} = a_1a_{n+1} -a_n$ for all $n$.</p> <p>Since $a_0=2$ and $a_1 \in \mathbb Z$, we have $a_n \in \mathbb Z$ for all $n \in \mathbb N$ by induction.</p>
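The recurrence is easy to check numerically against direct powers; the concrete value $a_1=3$ (so $x=(3+\sqrt5)/2$) is my own choice of example:

```python
import math

a1 = 3                            # assume x + 1/x = 3, i.e. x = (3 + sqrt(5)) / 2
x = (3 + math.sqrt(5)) / 2

a = [2, a1]                       # a_0 = x^0 + x^0 = 2
for _ in range(18):
    a.append(a1 * a[-1] - a[-2])  # a_{n+2} = a_1 a_{n+1} - a_n

direct = [x**n + x**(-n) for n in range(20)]
rel_err = max(abs(r - d) / d for r, d in zip(a, direct))
all_integers = all(isinstance(v, int) for v in a)
```

The recurrence stays in the integers ($2, 3, 7, 18, 47, 123, \dots$) while matching $x^n+x^{-n}$ to floating-point accuracy.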
884,362
<blockquote> <p>Compute the integral $$\int_{0}^{2\pi}\frac{x\cos(x)}{5+2\cos^2(x)}dx$$</p> </blockquote> <p>My Try: I substitute $$\cos(x)=u$$</p> <p>but it did not help. Please help me to solve this.Thanks </p>
lab bhattacharjee
33,337
<p>Using $\displaystyle\int_a^bf(x)\ dx=\int_a^bf(a+b-x)\ dx,$</p> <p>$$I=\int_0^{2\pi}\frac{x\cos x}{5+2\cos^2x}dx=\int_0^{2\pi}\frac{(2\pi-x)\cos(2\pi-x)}{5+2\cos^2(2\pi-x)}\ dx=\int_0^{2\pi}\frac{(2\pi-x)\cos x}{5+2\cos^2 x}\ dx$$</p> <p>$$2I=2\pi\int_0^{2\pi}\frac{\cos x}{5+2\cos^2x}dx$$</p> <p>$$\implies I=\pi\int_0^{2\pi}\frac{\cos x}{7-2\sin^2x}dx$$</p> <p>Set $\displaystyle\sin x=u$</p>
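A numeric check of the symmetry step (a midpoint-rule sketch of mine): the two sides agree, and both come out $0$, which is what the final substitution $\sin x=u$ over a full period gives.

```python
import math

def midpoint(f, a, b, n=40000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lhs = midpoint(lambda x: x * math.cos(x) / (5 + 2 * math.cos(x) ** 2),
               0, 2 * math.pi)
rhs = math.pi * midpoint(lambda x: math.cos(x) / (7 - 2 * math.sin(x) ** 2),
                         0, 2 * math.pi)
```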
4,264,558
<p>I calculated the homogeneous solution already, I'm just struggling a bit with the right side. Would <span class="math-container">$y_p$</span> be <span class="math-container">$= ++e^x$</span> or <span class="math-container">$= ++e^{2x}$</span>?</p> <p>Would the power in front of the root be the roots found from the homogeneous part?</p> <p>Sorry, it's been a while since I did ODE's. All help is appreciated.</p>
MachineLearner
647,466
<p>Hint: <span class="math-container">$\cos(x) = 1-\dfrac{x^2}{2}+O(x^4)$</span> or <span class="math-container">$\cos(1/x) = 1 - \dfrac{1}{2x^2}+O\left(\dfrac{1}{x^4}\right)$</span></p>
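A quick check (mine) that the error of the quadratic truncation is $O(x^4)$, with $\mathrm{err}/x^4$ approaching the next Taylor coefficient $1/24$:

```python
import math

# cos x = 1 - x^2/2 + x^4/24 - ..., so |cos x - (1 - x^2/2)| / x^4 -> 1/24
xs = [0.1, 0.05, 0.01]
ratios = [abs(math.cos(x) - (1 - x * x / 2)) / x**4 for x in xs]
```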
1,677,868
<p>The sequence is:</p> <p>$$a_n = \frac {2^{2n} \cdot1\cdot3\cdot5\cdot...\cdot(2n+1)} {(2n!)\cdot2\cdot4\cdot6\cdot...\cdot(2n)} $$</p>
Brian M. Scott
12,042
<p>Investigating the ratio of consecutive terms, as suggested by <strong>André Nicolas</strong>, is probably easiest. Alternatively, note that</p> <p>$$2\cdot 4\cdot 6\cdot\ldots\cdot(2n)=2^nn!\;,$$</p> <p>and</p> <p>$$1\cdot3\cdot5\cdot\ldots\cdot(2n+1)\le 2\cdot4\cdot6\cdot\ldots\cdot 2n\cdot(2n+2)=2^{n+1}(n+1)!\;,$$</p> <p>so</p> <p>$$0&lt;a_n\le\frac{2^{3n+1}(n+1)!}{2^nn!^2}=\frac{2^{2n+1}(n+1)}{n!}&lt;\frac{4^{n+1}(n+1)}{n!}&lt;\frac{4^{n+2}}{(n-1)!}\;,$$</p> <p>and consider what you know about</p> <p>$$\lim_{n\to\infty}\frac{4^n}{n!}\;.$$</p>
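The closing limit is easy to confirm numerically (my sketch); once $n>4$, each successive term shrinks by the factor $4/(n+1)$:

```python
from math import factorial

# 4^n / n! -> 0, so the squeezed sequence a_n -> 0 as well
vals = {n: 4**n / factorial(n) for n in (5, 10, 20, 40)}
```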
2,208,943
<p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p> <p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th-century Euler-style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p> <p>Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
JMJ
295,405
<p>One good motivating example I have is the <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow noreferrer">Weierstrass Function</a>, which is continuous everywhere but differentiable nowhere. Throughout the 18th and 19th centuries (until this counter example was discovered) it was thought that every continuous function was also (almost everywhere) differentiable and a large number of "proofs" of this assertion were attempted. Without a rigorous definition of concepts like "continuity" and "differentiabiliy", there is no way to analyze these sort of pathological cases. </p> <p>In integration, a number of <a href="https://math.stackexchange.com/questions/970431/example-for-non-riemann-integrable-functions">functions which are not Riemann integrable</a> (see also <a href="https://math.stackexchange.com/questions/364184/lebesgue-integral-but-not-a-riemann-integral">here</a>) were discovered, paving the way for the Stieltjes and more importantly the Lebesgue theories of integration. Today, the majority of integrals considered in pure mathematics are Lebesgue integrals. </p> <p>A large number of these cases, especially pertaining to differentiation, integration, and continuity were all motivating factors in establishing analysis on a rigorous footing. </p> <p>Lastly, the development of rigorous mathematics in the late 19th and early 20th centuries changed the focus of mathematical research. Before this revolution, mathematics--especially analysis--was extremely concrete. One did research into a specific function or class of functions--e.g. Bessel functions, Elliptic functions, etc.--but once rigorous methods exposed the underlying structure of many different classes and types of functions, research began to focus on the abstract nature of these structures. As a result, virtually all research in pure mathematics these days is abstract, and the major tool of abstract research is rigor. </p>
2,208,943
<p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p> <p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th-century Euler-style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p> <p>Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
asv
403,888
<p>You can try to read this <a href="https://en.wikipedia.org/wiki/Fluxion" rel="noreferrer">https://en.wikipedia.org/wiki/Fluxion</a> to understand the motivations to introduce the definition of limit.</p> <p>Important is the example in the indicated web page:</p> <blockquote> <p>If the fluent ${\displaystyle y}$ is defined as ${\displaystyle y=t^{2}}$ (where ${\displaystyle t}$ is time) the fluxion (derivative) at ${\displaystyle t=2}$ is:</p> <p>${\displaystyle {\dot {y}}={\frac {\Delta y}{\Delta t}}={\frac {(2+o)^{2}-2^{2}}{(2+o)-2}}={\frac {4+4o+o^{2}-4}{2+o-2}}=4+o}$ </p> <p>Here ${\displaystyle o}$ is an infinitely small amount of time and according to Newton, we can now ignore it because of its infinite smallness. He justified the use of ${\displaystyle o}$ as a non-zero quantity by stating that fluxions were a consequence of movement by an object.</p> <p>Criticism</p> <p>Bishop George Berkeley, a prominent philosopher of the time, slammed Newton's fluxions in his essay The Analyst, published in 1734. Berkeley refused to believe that they were accurate because of the use of the infinitesimal ${\displaystyle o}$. He did not believe it could be ignored and pointed out that if it was zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus.</p> <p>Towards the end of his life Newton revised his interpretation of ${\displaystyle o}$ as infinitely small, preferring to define it as approaching zero, using a similar definition to the concept of limit. He believed this put fluxions back on safe ground. 
By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents and remains in use today.</p> </blockquote> <p>You can also find more information in: <a href="http://www.mr-ideahamster.com/classes/assets/a_evepsilon.pdf" rel="noreferrer">"The American Mathematical Monthly, March 1983, Volume 90, Number 3, pp. 185–194."</a> </p>
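Not part of the quoted text, but a quick numeric sketch (my own illustration) of the fluxion computation above: the difference quotient of $y=t^2$ at $t=2$ works out to exactly $4+o$, so it tends to the fluxion value $4$ as $o$ shrinks.

```python
# Numeric sketch (my addition): the difference quotient of y = t^2 at t = 2
# equals 4 + o, so it approaches the fluxion value 4 as o shrinks.
def diff_quotient(t, o):
    return ((t + o) ** 2 - t ** 2) / o

for o in (1e-1, 1e-3, 1e-6):
    print(o, diff_quotient(2.0, o))  # 4 + o, up to float rounding
```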
1,393,265
<p>How can I prove that $(n!)^{1/n}$ tends to infinity as $n$ tends to infinity? I tried to do this by expanding $n!$ as $n\times (n-1)\times (n-2)\cdots 4\times3\times2\times 1$ and taking out $n$ common from each factor so that I can have $n$ outside the radical sign. But then the last terms would be $(4/n)\times(3/n)\times(2/n)\times (1/n)$, which would tend to zero and would present the indeterminate form $0\cdot \infty$, so how should I proceed from there? I would appreciate a little help.</p>
Ben Grossmann
81,360
<p><strong>Hint:</strong> for any $a &gt; 1$, we have $n! &gt; a^n$ for sufficiently large $n$. </p>
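To see the hint numerically (my own sketch, not part of the answer): by Stirling's approximation $(n!)^{1/n}$ grows roughly like $n/e$, so it eventually passes any fixed $a > 1$.

```python
import math

# Sketch: (n!)^(1/n) computed via lgamma (log(n!) = lgamma(n + 1)) to
# avoid overflow; it grows roughly like n / e, past any fixed a > 1.
def nth_root_of_factorial(n):
    return math.exp(math.lgamma(n + 1) / n)

for n in (10, 100, 1000):
    print(n, nth_root_of_factorial(n))
```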
403,631
<p>$a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| &lt; 1 $ <br/> Hint: $u_{2n} = u_{n}^2$</p> <p>I have no idea how to prove this; it looks obvious, but I have found the proof is really hard... I am doing a real analysis course and there's a lot of proving and I am stuck here. Any advice? Practice makes perfect? </p>
Did
6,179
<p>Replacing $a$ by $|a|$, one can assume without loss of generality that $a$ is a nonnegative real number. If $a=0$, the result is direct. If $0\lt a\lt1$, the sequence defined by $u_n=a^n$ is decreasing and positive hence it converges to some finite nonnegative limit $\ell$. Since $u_{n+1}=au_n$, $\ell=a\ell$. Since $a\ne1$, the only possible limit is $\ell=0$, QED.</p> <p>The hint that $u_{2n}=u_n^2$ can probably be used as follows, once one knows that the limit $\ell$ exists and is finite: $\ell=\ell^2$ hence $\ell=0$ or $1$ and, since $u_n\leqslant u_1=a\lt1$ for every $n\geqslant1$, $\ell\ne1$ hence $\ell=0$.</p>
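A numeric illustration of the argument (my own sketch, not part of the answer): iterating $u_{n+1}=a\,u_n$ with $0&lt;a&lt;1$ drives the sequence toward the only admissible fixed point $\ell = 0$.

```python
# Sketch: iterate u_{n+1} = a * u_n with 0 < a < 1; the limit l must
# satisfy l = a * l, and since a != 1 the only possibility is l = 0.
a = 0.9
u = 1.0
for _ in range(1000):
    u = a * u
print(u)  # about 1.7e-46
```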
92,020
<p>Let $W_t$ be a Brownian motion with $m$ independent components on $(\Omega,F,P)$.<br> Let $G(\omega,t)=[g_{ij}(\omega,t)]_{1\leq i\leq n,1\leq j\leq m}$ in $V^{n\times m}[S,T]$ such that<br> $$\limsup_{\omega,t \in\Omega\times[S,T]} \sum_{i=1}^{n} \sum_{j=1}^{m} | g_{ij}(\omega,t)|&lt;\infty$$ and<br> $$\int_S^T E|G(\omega,t)^6|dt&lt;\infty.$$<br> I have to prove that:<br> $$E\left|\int_S^T G(\omega,t)dW_t\right|^6 \le 15^3(T-S)^2\int_S^TE|G(\omega,t)^6|dt&lt;\infty.$$<br> I also have a hint:<br> $$\int_S^T \int_\Omega H(\omega,t)^4 K(\omega,t)^2dtdP(\omega) \le \left\{\int_S^T \int_\Omega H(\omega,t)^6 dtdP(\omega) \right\}^{4/6} \left\{ \int_S^T \int_\Omega K(\omega,t)^6dtdP(\omega)\right\}^{2/6}.$$ </p> <p>My idea was to use Itō's isometry in order to pass from $dW_t$ to $dt$, but I don't know if that is possible with exponent $6$. Maybe a change of variable? Anyway, I can't figure out where the coefficient $15^3(T-S)^2$ comes from... </p> <p>Thank you for your help</p> <p>EDIT: I found a very interesting article by Novikov called "On moment inequalities and identities for stochastic integrals", which analyses a very similar case. I have not had time to properly study this work, but the key was applying Itō's formula to a specific function. </p>
Fers
21,345
<p>I've found something...<br> Let's define $$\eta(t) = \int_S^t G(\omega,s)dW_s$$ </p> <p>If we apply Itō's formula we obtain (for the general case, with $2n=6$) $$ E \left[\eta(T)^{2n} \right] = E \left[ \frac{2n(2n-1)}{2}\int_S^T\eta(s)^{2n-2} G^2(s,\omega)ds \right] \le$$ and, using the hint, $$ \le \frac{2n(2n-1)}{2} \left\{ E\int_S^T\eta(s)^{2n}ds \right\}^{\frac{2n-2}{2n}} \left\{ E\int_S^T G^{2n}(s,\omega)ds \right\}^{\frac{1}{n}} $$</p> <p>Then I'm stuck: I don't know how to compute $ \left\{ E\int_S^T\eta(s)^{2n}ds \right\}^{\frac{2n-2}{2n}} $</p>
177,519
<p>Let $\mathfrak{g}$ be a simple Lie algebra over $\mathbb{C}$ and let $\hat{\mathfrak{g}}$ be the Kac-Moody algebra obtained as the canonical central extension of the algebraic loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. In a sequence of papers, Kazhdan and Lusztig constructed a braided monoidal structure on (a certain subcategory of) the category of representations of $\hat{\mathfrak{g}}$ of central charge $k - h$ where $k \in \mathbb{C}^* \;\backslash\; \mathbb{Q}_{\geq 0}$ and $h$ is the Coxeter number of $\mathfrak{g}$. They then showed that the resulting braided category is equivalent to the braided category of finite dimensional representations of the quantum group $U_q(\mathfrak{g})$ for $q = e^{\frac{\pi i}{k}}$. </p> <p>My question then is this: is there any conceptual explanation as to why these two braided categories should be equivalent (one which does not resort to computing both sides and seeing that they are the same)? The representations of $\hat{\mathfrak{g}}$ of various central charges can be considered as twists of the representation theory of the loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. On the other hand, the representation theory of $U_q(\mathfrak{g})$ is a braided deformation (which can be thought of as a form of twisting) of the representation theory of $\mathfrak{g}$ itself. Moreover, the equivalence above only holds for non-trivially deformed/twisted cases. The limiting case of the representations of $\mathfrak{g}$ is recovered by (carefully) taking $q=1$, which corresponds to $k \rightarrow \infty$ and hence does not participate in the game. On the other hand, to obtain central charge $0$ we would need to take $k=h$, which is also excluded (as the proof of Kazhdan-Lusztig assumes $k \notin \mathbb{Q}_{\geq 0}$). Is there any reason why these two Lie algebras would have the same twisted/deformed representations, but not the same representations?</p>
BWW
50,658
<p>I don't have the references to hand but as no-one else has offered an explanation, here is how I understand it. The representation category of the Kac-Moody algebra is the fusion category of a rational conformal field theory. The category associated to the quantum group needs to be defined a bit more carefully than in your post. However this is the category associated to a 2+1 topological field theory. There is a relation, I hesitate to say correspondence, between rational conformal field theories and 2+1 TQFT. Then the claim is that these two examples correspond.</p>
4,244,187
<blockquote> <p>Find the equation of the tangent line to <span class="math-container">$\sin^{-1}(x) + \sin^{-1}(y) = \frac{\pi}{6}$</span> at the point <span class="math-container">$(0,\frac{1}{2})$</span></p> </blockquote> <p>This is in the context of learning implicit differentiation.</p> <p>First, I apply <span class="math-container">$\frac{dy}{dx}$</span> operator to both sides of the equation yielding:</p> <p><span class="math-container">$-\sin^{-2}(x) - \sin^{-1}(y)\frac{dy}{dx} = 0$</span></p> <p>Second, I want to solve for <span class="math-container">$\frac{dy}{dx}$</span>.</p> <p><span class="math-container">$\frac{dy}{dx} = -\sin^{-2}(x)\sin(y)$</span>.</p> <p>Third, I substitute the point <span class="math-container">$(0,\frac{1}{2})$</span> into the above equation to find the slope of the tangent line.</p> <p><span class="math-container">$\frac{dy}{dx}\mid_{(0,\frac{1}{2})} = -\sin^{-2}(0)\sin(\frac{1}{2}) = -0.479$</span></p> <p>Finally, I substitute the slope into the point-slope equation of the line to obtain</p> <p><span class="math-container">$y = -0.479x + 0.2395$</span></p> <p>Is this correct?</p>
Elias Costa
19,266
<p>The trace <span class="math-container">$\mathop{\rm trace}(U^\ast V)$</span> of the product of two matrices <span class="math-container">$U,V\in \mathbb{C}^{n\times n}$</span> behaves as an inner product</p> <p><span class="math-container">$$ \langle U, V \rangle= \mathop{\rm trace}(U^\ast V) =\sum_{i=1}^{n}\sum_{j=1}^{n} U_{ij}\cdot \overline{V_{ij}} $$</span> so the Cauchy-Schwarz inequality holds: <span class="math-container">$$ \langle U, V \rangle \leq \| U \|\cdot \|V\| $$</span> Then <span class="math-container">$$ \mathop{\rm trace }\left(U V W Z \right) \leq \|UV\|\|WZ\| $$</span> Since the matrix norm <span class="math-container">$\|\;\;\|$</span> satisfies the submultiplicative inequality <span class="math-container">$\|T\cdot S\| \leq \|T\|\cdot \|S\|$</span>, we have <span class="math-container">$$ \mathop{\rm trace }\left(UVWZ \right) \leq \|U\|\|V\|\|W\|\|Z\| $$</span> If <span class="math-container">$\|U\|\leq 1 $</span>, <span class="math-container">$\|V\|\leq 1$</span>, <span class="math-container">$\|W\|= 1$</span> and <span class="math-container">$\|Z\|= 1$</span> we have <span class="math-container">$$ \mathop{\rm trace }\left(UVWZ \right)\leq 1 \qquad \mathop{\rm trace }\left(UV\right)\leq 1 \qquad \mathop{\rm trace }\left(WZ \right)= 1 $$</span> Now choose <span class="math-container">$U$</span>, <span class="math-container">$V$</span>, <span class="math-container">$W$</span> and <span class="math-container">$Z$</span> properly in terms of <span class="math-container">$A$</span>, <span class="math-container">$A^\ast$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$\sqrt{n}\;$</span> and <span class="math-container">$\;\sqrt{m}$</span> to obtain the desired inequality.</p> <p><strong>UPDATE 09/08/2021</strong></p> <p>To make sure you correctly use the hypotheses:</p> <ol> <li><span class="math-container">$\mathbf{A} $</span> is Hermitian positive definite ( <span
class="math-container">$v^TAv&gt;0 \quad \forall v\in \mathbb{C}^{n\times 1}$</span>);</li> <li><span class="math-container">$\mathbf{B}$</span> is Hermitian positive semidefinite ( <span class="math-container">$v^TBv\geq 0 \quad \forall v\in \mathbb{C}^{n\times 1}$</span>);</li> <li><span class="math-container">$\mathbf{C}$</span> is Hermitian positive semidefinite ( <span class="math-container">$v^TCv\geq 0 \quad \forall v\in \mathbb{C}^{n\times 1}$</span>);;</li> <li><span class="math-container">$\left\|\mathbf{A}\right\|^2 \leq n$</span> ( and by 1. we have <span class="math-container">$\left\|\mathbf{A}\right\|^2&gt;0$</span>) ;</li> <li><span class="math-container">$\mathbf{B} \mathbf{B}^* = \mathbf{B}$</span>;</li> <li><span class="math-container">$\mathbf{C} \mathbf{C}^* = \mathbf{C}$</span>;</li> <li><span class="math-container">$\left\|\mathbf{B}\right\|^2 = m &lt; n$</span>.</li> <li><span class="math-container">$\left\|\mathbf{C}\right\|^2 = m &lt; n$</span>.</li> </ol>
1,512,171
<p>I want to show that there exists a diffeomorphism $\phi$ such that the following diagram commutes: $$ \require{AMScd} \begin{CD} TS^1 @&gt;{\phi}&gt;&gt; S^1\times\mathbb{R}\\ @V{\pi}VV @V{\pi_1}VV \\ S^1 @&gt;{id_{S^1}}&gt;&gt; S^1 \end{CD}$$ where $\pi$ is the associated projection of $TS^1$, and $\pi_1(x,y)=x$ is the standard projection onto the first component.</p> <p>A hint was given along with the exercise that I should find a nowhere vanishing vector field on $S^1$. However, I don't know how to find one exactly, or what to do after finding such a vector field. I have seen an analogous example where $\phi$ was given without justification, with $S^1$ and $\mathbb{R}$ both replaced by $\mathbb{R}^n$. The definition of that $\phi$ was:$$\phi(a^i\frac{\partial}{\partial x^i}(p)) = (p,(a^1,...,a^n)).$$Perhaps the nowhere vanishing vector field on $S^1$ is used in an analogous formula?</p> <p>Could anyone give some additional hints or a sketch of a proof?</p> <p><strong>EDIT:</strong> Thinking about it, if I get the nowhere vanishing vector field, say, $u$, then because $S^1$ is a 1-manifold, I have that $T_pS^1$ is 1-dimensional as well. So that means that $T_pS^1$ is spanned by $u_p$. So I am thinking we use, for all $v_p\in TS^1$, the unique coefficient $\alpha\in\mathbb{R}$ such that $v_p = \alpha u_p$. So perhaps:$$\phi(v_p)=(p,\alpha),$$is our diffeomorphism? In that case, is there a condition met by $S^1$ that guarantees it has a nowhere vanishing vector field (i.e. so I don't have to find an exact formula for one)?</p>
Element118
274,478
<p>If $f$ is not continuous, the Riemann integral may not exist. In this case, by partitioning the interval into subintervals and picking sample points within each, it is possible to pick only rational numbers, forcing the value of the Riemann sums to $0$. It is also perfectly valid to pick only irrational numbers, forcing the value of the Riemann sums to be $1$. Since no single value can be assigned, the function cannot be Riemann integrated.</p> <p>The Lebesgue integral instead deals with the measure of the irrational numbers and the measure of the rational numbers: since the measure of the rational numbers is $0$, while the measure of the irrational numbers is $1$, the integral evaluates to $1$.</p>
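A concrete sketch of that sample-point ambiguity (my own illustration, assuming the function is $1$ on irrationals and $0$ on rationals): tagging each sample point as rational or irrational, the same partition yields Riemann sums of $0$ or $1$.

```python
from fractions import Fraction

# Riemann sums for the indicator of the irrationals on [0, 1]: sample
# points k/n are rational (f = 0), while points like k/n + sqrt(2)/(10*n)
# are irrational (f = 1). Same partition, two different sums.
def f(point_is_rational):
    return 0 if point_is_rational else 1

n = 1000
width = Fraction(1, n)
rational_sum = sum(f(True) * width for _ in range(n))
irrational_sum = sum(f(False) * width for _ in range(n))
print(rational_sum, irrational_sum)  # 0 and 1
```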
1,341,440
<p>I came across a claim in a paper on branching processes which says that the following is an <em>immediate consequence</em> of the B-C lemmas:</p> <blockquote> <p>Let $X, X_1, X_2, \ldots$ be nonnegative iid random variables. Then $\limsup_{n \to \infty} X_n/n = 0$ if $EX&lt;\infty$, and $\limsup_{n \to \infty} X_n/n = \infty$ if $EX=\infty$.</p> </blockquote> <p>So to apply the BC lemmas to these, I want to essentially show that $$(1) \; \textrm{If } EX&lt;\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \epsilon\}) = 0 \quad \forall \epsilon&gt;0$$ $$(2) \; \textrm{If } EX=\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \delta\}) = 1 \quad \forall \delta&gt;0$$</p> <p>But I keep getting stuck. For example if I want to apply the first BC lemma to (1), then using Markov's inequality only gives $P(X_n &gt; n\epsilon) &lt; EX/n\epsilon$, which isn't summable. Am I missing something right under my nose?</p>
Adelafif
229,367
<p>If $x^2$ is not in the center, then the subgroup it generates intersects the center only in $e$. It follows that $\langle x^2\rangle Z=G$ and $G$ is abelian; but then $x^2$ is in the center, a contradiction.</p>
173,112
<blockquote> <p>Solve for $x$. $12x^3+8x^2-x-1=0$ all solutions are rational and between $\pm 1$</p> </blockquote> <p>As mentioned in my previous answers, I'm guessing I have to use the Rational Root Theorem. But I've done my research and I do not understand what to plug in or anything about it at all. Can someone please dumb this theorem down so I can try to solve this equation. I also <strong>do not</strong> want anyone to solve this problem for me. Thanks!</p>
Ross Millikan
1,827
<p>The rational root theorem tells you that any rational roots of the polynomial have numerators that divide the constant term and denominators that divide the coefficient of the highest power. So here the numerator must be $\pm1$ and the denominator can be any of $1,2,3,4,6,12$. So you have $12$ possibilities for rational roots. You test each to see if it satisfies the polynomial. As there are only three roots of a cubic, you can quit when you have them.</p>
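As a sketch of the procedure (my own code, not part of the answer), one can enumerate the twelve candidates $\pm\frac{1}{1},\pm\frac{1}{2},\pm\frac{1}{3},\pm\frac{1}{4},\pm\frac{1}{6},\pm\frac{1}{12}$ and test each exactly with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Test every rational-root candidate p/q of 12x^3 + 8x^2 - x - 1:
# p divides the constant term 1, q divides the leading coefficient 12.
def poly(x):
    return 12 * x**3 + 8 * x**2 - x - 1

candidates = {Fraction(p, q) for p, q in product((1, -1), (1, 2, 3, 4, 6, 12))}
roots = sorted(x for x in candidates if poly(x) == 0)
print(roots)  # [Fraction(-1, 2), Fraction(1, 3)]
```

Here $-\frac{1}{2}$ turns out to be a double root, so only two distinct values appear among the three roots.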
173,112
<blockquote> <p>Solve for $x$. $12x^3+8x^2-x-1=0$ all solutions are rational and between $\pm 1$</p> </blockquote> <p>As mentioned in my previous answers, I'm guessing I have to use the Rational Root Theorem. But I've done my research and I do not understand what to plug in or anything about it at all. Can someone please dumb this theorem down so I can try to solve this equation. I also <strong>do not</strong> want anyone to solve this problem for me. Thanks!</p>
Arturo Magidin
742
<p>The Rational Root Theorem tells you that if the equation has any rational solutions (it need not have any), then when you write them as a reduced fraction $\frac{a}{b}$ (reduced means that $a$ and $b$ have no common factors), then $a$ must divide the constant term of the polynomial, and $b$ must divide the leading term.</p> <p>Here, the constant term is $1$, so that means that $a$ must be either $\pm 1$. And the leading term is $12$; the integers that divide $12$ are $\pm 1$, $\pm 2$, $\pm 3$, $\pm 4$, $\pm 6$, and $\pm 12$. That gives twelve possible things to try.</p> <p>As soon as you find a root $r$, you should stop and factor out $x-r$ from the polynomial, thus reducing the problem to one with smaller degree.</p> <p>For example, say you wanted to find the roots of $$6x^3 -25x^2+ 10x - 1.$$ The rational root theorem says that any rational root $\frac{a}{b}$ must have $a$ dividing $1$ (so $a=1$ or $a=-1$), and $b$ dividing $6$ (so $b=\pm 1$, $b=\pm 2$, $b=\pm 3$, or $b=\pm 6$). It doesn't tell you <em>all</em> of them are roots, just that <strong>if</strong> there are any rational roots, then they must be among $$\pm 1,\quad\pm\frac{1}{2},\quad\pm\frac{1}{3},\quad \pm\frac{1}{6}.$$ Now, you can just test them. $\pm\frac{1}{1}$ does not work (plugging in $1$ gives $-10 $, plugging in $-1$ gives $-42$). Plugging in $\frac{1}{2}$, $-\frac{1}{2}$, $\frac{1}{3}$, $-\frac{1}{3}$ doesn't work either. Then when you plug in $\frac{1}{6}$, we get $$\frac{6}{6^3} - \frac{25}{6^2} + \frac{10}{6} - 1 = \frac{1}{36}-\frac{25}{36}+\frac{60}{36} - \frac{36}{36} = 0,$$ so $x=\frac{1}{6}$ is a root. We can then factor out $x-\frac{1}{6}$ from the original polynomial, $$6x^3 -25x^2 + 10x -1 = \left(x - \frac{1}{6}\right)\left(6x^2-24x+6\right),$$ so we now just need to find the roots of the <em>other</em> factor, $6x^2-24x+6 = 6(x^2-4x+1)$.
We can solve this using the quadratic formula, and the roots are $$\frac{4+\sqrt{16-4}}{2}=2 + \sqrt{3},\qquad \text{and}\qquad \frac{4-\sqrt{16-4}}{2} = 2-\sqrt{3}.$$ So the rational root theorem gave us a <em>finite</em> collection of possible roots; we check them, and if we get lucky and find a root among them, we can use it to reduce the degree of the polynomial by $1$ (by factoring out $x-r$) and so exchange the original problem for a simpler one (instead of a polynomial of degree 3, we now have a polynomial of degree 2).</p>
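A quick check of the worked example (my own sketch, not part of the answer): exact arithmetic confirms $x=\frac{1}{6}$ is a root, and the quadratic-formula roots $2\pm\sqrt{3}$ satisfy the remaining factor $x^2-4x+1$.

```python
import math
from fractions import Fraction

# Verify the worked example: 1/6 is a root of 6x^3 - 25x^2 + 10x - 1,
# and 2 +/- sqrt(3) are roots of the quotient factor x^2 - 4x + 1.
def p(x):
    return 6 * x**3 - 25 * x**2 + 10 * x - 1

print(p(Fraction(1, 6)))  # 0
for r in (2 + math.sqrt(3), 2 - math.sqrt(3)):
    print(r, r**2 - 4 * r + 1)  # residuals near 0
```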
1,390,976
<p>Similar to <a href="https://math.stackexchange.com/questions/54763/what-do-algebra-and-calculus-mean">What do Algebra and Calculus mean?</a>, what is the difference between a logic and a calculus?</p> <p>I am learning about the different kinds of logics, and often when I look them up in a different resource, some people call it a logic, others call it a calculus (<em><a href="https://en.wikipedia.org/wiki/Propositional_calculus" rel="noreferrer">propositional calculus</a></em> and <em>propositional logic</em>). Or some calculus is defined as a logical system, like the <a href="https://en.wikipedia.org/wiki/Situation_calculus" rel="noreferrer">situation calculus</a>:</p> <blockquote> <p>The <strong>situation calculus</strong> is a <em>logic formalism</em> designed for representing and reasoning about dynamical domains.</p> </blockquote> <p>When do you call something a calculus vs. a logic?</p> <p>It seems that the definitions of "a logic" and "a calculus" are often circular. A logic is a calculus, and a calculus is a logic. Or a calculus is rules for calculating, while a logic is rules for inference. But in this sense, they're both systems of rules, so maybe they are both just generally "formal systems", and when focusing on inference it's a "logic", and when focusing on calculation it's a "calculus"?</p>
Lance
5,266
<p>The short answer, after a few more months of letting this sit, is that in reality there is no difference between algebra, logic, and calculus. They are all just saying "a formal collection of mathematical rules". But each of these words has a history, and so when authors use them, they are mentally invoking that history of the word. Because these words were used in the development of different ideas, the logic/calculus/algebra typically prioritize different ideas.</p> <p>It is like asking "what is the difference between book and tome"? They are both the exact same thing; you are just highlighting different aspects of it by invoking mental imagery. In the algebra/logic/calculus case, the mental imagery is the history of its use.</p>
1,390,976
<p>Similar to <a href="https://math.stackexchange.com/questions/54763/what-do-algebra-and-calculus-mean">What do Algebra and Calculus mean?</a>, what is the difference between a logic and a calculus?</p> <p>I am learning about the different kinds of logics, and often when I look them up in a different resource, some people call it a logic, others call it a calculus (<em><a href="https://en.wikipedia.org/wiki/Propositional_calculus" rel="noreferrer">propositional calculus</a></em> and <em>propositional logic</em>). Or some calculus is defined as a logical system, like the <a href="https://en.wikipedia.org/wiki/Situation_calculus" rel="noreferrer">situation calculus</a>:</p> <blockquote> <p>The <strong>situation calculus</strong> is a <em>logic formalism</em> designed for representing and reasoning about dynamical domains.</p> </blockquote> <p>When do you call something a calculus vs. a logic?</p> <p>It seems that the definitions of "a logic" and "a calculus" are often circular. A logic is a calculus, and a calculus is a logic. Or a calculus is rules for calculating, while a logic is rules for inference. But in this sense, they're both systems of rules, so maybe they are both just generally "formal systems", and when focusing on inference it's a "logic", and when focusing on calculation it's a "calculus"?</p>
Community
-1
<p><strong>In proof theory there is a difference between logic and calculi</strong></p> <p>There might be one semantic consequence relation $\vDash$, but many different syntactic consequence relations $\vdash_1$, $\vdash_2$, $\vdash_3$, ... . That's the usual modus operandi of mathematical logic in the formal sciences, especially <a href="https://en.wikipedia.org/wiki/Proof_theory" rel="nofollow noreferrer">proof theory</a>.</p> <p>The semantic consequence relation $\vDash$ might be viewed extensionally and is a litmus test for the syntactic consequence relations. The $\vdash_1$, $\vdash_2$, $\vdash_3$, ... are then studied intensionally, whereby various properties can be studied individually and comparatively. </p> <p>For example, Gerhard Gentzen in his landmark 1934 paper "Investigations in Logical Deduction" showed cut elimination and subsequently related a natural deduction style calculus to a sequent calculus.</p>
2,451,350
<p>Currently I am reading about functional data analysis. A common assumption is that the expected value of some random function is $0$, i.e. $\mathbb{E}(x) = 0$ where $x \in L^2$, the space of all square integrable functions with inner product $\langle x,y \rangle = \int x(t)y(t) \text{d}t$. </p> <p>My question might appear a little trivial to many of you, but I just want to be certain that I don't get this basic concept of zero expectation wrong: does $\mathbb{E}(x) = 0$ mean that $\mathbb{E}\left[x(t)\right] = 0 ~\forall t$?</p> <p>Thanks for your help!</p>
José Carlos Santos
446,262
<ol> <li>If $Ax=b$, then $A(u+v)=Au+Av=b+0=b$.</li> <li>What I wrote above holds for every solution $v$ of the equation $Av=0$, not just to one or some of them.</li> <li>If $Ax=b$ has a solution $u$, then, since the equation $Ax=0$ has, at least, one solution $v\neq0$, then every $\lambda v$ ($\lambda\in\mathbb R$) is also a solution of $Ax=0$, and therefore every $u+\lambda v$ is a solution of $Ax=b$.</li> </ol> <hr /> <ol> <li>If $u$ is a solution of $Bx=0$, then $(AB)u=A(Bu)=A0=0$. Therefore, every solution of $Bx=0$ is also a solution of $ABx=0$.</li> </ol>
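A concrete instance of item 3 (my own sketch, not part of the answer): for a singular $A$ with a particular solution $u$ of $Ax=b$ and a nonzero $v$ with $Av=0$, every $u+\lambda v$ solves $Ax=b$.

```python
# Item 3 in action: A is singular, u solves A x = b, v spans the null
# space of A, and u + lam * v solves A x = b for every scalar lam.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 2.0], [2.0, 4.0]]   # second row = 2 * first row, so A is singular
b = [3.0, 6.0]
u = [3.0, 0.0]                 # one particular solution of A x = b
v = [-2.0, 1.0]                # a nonzero solution of A x = 0
for lam in (-1.0, 0.5, 7.0):
    print(lam, matvec(A, [ui + lam * vi for ui, vi in zip(u, v)]))  # always b
```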
139,021
<p>Can you, please, recommend a good text about algebraic operads?</p> <p>I know the main one, namely <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Loday, Vallette "Algebraic operads"</a>. But it is very big and there is no way you can read it fast. There are also notes by <a href="https://arxiv.org/abs/1202.3245" rel="nofollow noreferrer">Vallette, "Algebra+Homotopy=Operad"</a>, but they don't contain much information and are too combinatorial. So what I am looking for is a fairly concise introduction to the theory of algebraic operads, one that is more algebraic than combinatorial and that gives enough information to actually start working with operads.</p> <p>Thank you very much for your help!</p> <p><strong>Edit</strong>: I have also found this interesting paper, <a href="http://arxiv.org/pdf/math/9906063v2.pdf" rel="nofollow noreferrer">Modules and Morita Theorem for Operads</a> by Kapranov--Manin. Maybe it's a bit too concise for a first reading about operads, but it has a lot of really nice examples and theorems.</p> <p>There are also <a href="http://folk.uib.no/nmajv/Operader.ps" rel="nofollow noreferrer">notes</a> by Vatne (only in PostScript).</p>
Dan Petersen
1,310
<p>The book of Markl, Stasheff and Shnider is also a standard reference. </p> <p>Also, a good jumping-in point could be Ginzburg and Kapranov's "Koszul duality for operads".</p>
1,034,335
<p>I'm preparing for my calculus exam and I'm unsure how to approach the question: "Explain the difference between convergence of a sequence and convergence of a series." </p> <p>I understand the following:</p> <p>Let the sequence $a_n$ be given by $a_n =\frac{1}{n^2}$ </p> <p>Then $\lim_{n\to\infty} a_n=\lim_{n\to\infty} \frac{1}{n^2}=0$, so $a_n$ converges to $0$.</p> <p>And the series $\sum_{i=1}^{n}a_i=1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ... +\frac{1}{n^2}$</p> <p>And this series converges. But, I don't understand <em>why</em> or <em>how</em> the convergence of the series and the convergence of the sequence are different.</p> <p>I looked online and I find a lot of answers on how to determine convergence or divergence, but the only difference I've found is that you use limits to test sequences, while series have more complex testing requirements. Please help!</p>
Juan123
527,550
<p>You could say that all series are sequences, in the sense that you can turn a series into a sequence by taking its partial sums and checking whether or not that sequence approaches a limit; but certainly not all sequences are series. </p>
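A numeric illustration of the distinction (my own sketch, not part of the answer): the terms $a_n = 1/n^2$ tend to $0$, while the partial sums tend to a different limit, $\pi^2/6$.

```python
import math

# Sequence vs. series for a_n = 1/n^2: the terms go to 0, while the
# partial sums approach pi^2 / 6 (the Basel problem).
N = 100000
term = 1 / N**2
partial_sum = sum(1 / n**2 for n in range(1, N + 1))
print(term, partial_sum, math.pi**2 / 6)
```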