qid | question | author | author_id | answer |
|---|---|---|---|---|
3,395,044 | <p>Wikipedia states:</p>
<blockquote>
<p>In mathematics, a <strong>formal power series</strong> is a generalization of a <strong>polynomial</strong>, where the number of terms is allowed to be infinite; this implies giving up the possibility of replacing the variable in the <strong>polynomial</strong> with an arbitrary number. Thus a <strong>formal power series</strong> differs from a <strong>polynomial</strong> in that it may have infinitely many terms, and differs from a <strong>power series</strong>, whose variables can take on numerical values. </p>
</blockquote>
<p>What I am getting from this is that in both polynomials and formal power series, the variables "don't represent numbers". But I'm not exactly sure what this means, or what they <em>do</em> represent. Also it seems to be inconsistent with how I've been using polynomials, which is very much as "variables representing numbers".</p>
<p>So basically I'm conceptually confused about what this means, and can't really understand how they're being used.</p>
| Jean-Claude Arbaut | 43,608 | <p>The best way to see this: forget about the <span class="math-container">$X$</span> in a polynomial or a <strong>formal</strong> power series, they are really sequences of coefficients, with no constraint on their values, with some specific rules of computation for addition and multiplication.</p>
<p>A power series, however, involves some limiting process, and that requires specific conditions, namely that the series converges.</p>
<p>For instance, you can manipulate <span class="math-container">$S=\sum_{n=0}^{\infty} x^n$</span> as a <strong>formal</strong> power series, and you won't consider convergence, only the operations on it, for instance <span class="math-container">$S^2=1+2x+3x^2+\dots$</span>. That is, the coefficients of <span class="math-container">$S$</span> are <span class="math-container">$(1,1,1,\dots)$</span> while the coefficients of <span class="math-container">$S^2$</span> are <span class="math-container">$(1,2,3,\dots)$</span>. But you may also consider <span class="math-container">$T=\sum_{n=0}^{\infty} n! x^n$</span>; it is a valid formal power series.</p>
<p>Now, for a power series, you require convergence. It's possible to prove that a power series in <span class="math-container">$x$</span> converges for all complex numbers <span class="math-container">$x$</span> such that <span class="math-container">$|x|<R$</span>, for some real (or infinite) <span class="math-container">$R$</span>. This <span class="math-container">$R$</span> is unique and is called the radius of convergence. For instance, the series <span class="math-container">$S$</span> above has radius <span class="math-container">$1$</span>. It converges for <span class="math-container">$|x|<1$</span> to the number <span class="math-container">$\frac{1}{1-x}$</span>. The series <span class="math-container">$T$</span> has radius <span class="math-container">$0$</span>: it never converges if <span class="math-container">$x\ne0$</span>. As a power series, it's almost useless, but as a formal power series, it can still be useful (we don't care that it does not converge).</p>
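<p>The coefficient-sequence view is easy to make concrete. The following sketch is an illustration, not part of the original answer: represent a truncated formal power series as a list of coefficients and multiply via the Cauchy product. No convergence question ever arises, so the same code handles both <span class="math-container">$S$</span> and the everywhere-divergent <span class="math-container">$T=\sum n!\,x^n$</span>.</p>

```python
from math import factorial

def mul(a, b):
    """Cauchy product of two coefficient lists, truncated to the shorter length."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

S = [1] * 6                            # S = 1 + x + x^2 + ...
T = [factorial(n) for n in range(6)]   # T = sum n! x^n, radius of convergence 0

print(mul(S, S))       # coefficients of S^2: [1, 2, 3, 4, 5, 6]
print(mul(S, T)[:3])   # first coefficients of S*T: [1, 2, 4]
```
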
<hr>
<p>There is a similar distinction between a polynomial and a polynomial function. But here it's even trickier, because in the usual undergraduate courses polynomials are considered with coefficients in <span class="math-container">$\Bbb R$</span> or <span class="math-container">$\Bbb C$</span>, and there many properties of a polynomial relate directly to properties of the associated polynomial function.</p>
<p>When the coefficients are in a finite field, it's more surprising. For instance, in <span class="math-container">$\Bbb F_2$</span>, the finite field with two elements, the polynomial <span class="math-container">$X^2+X$</span> is not the null polynomial (the null polynomial has null coefficients). However, the function <span class="math-container">$x\to x^2+x$</span> only takes the value <span class="math-container">$0$</span>.</p>
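<p>The <span class="math-container">$\Bbb F_2$</span> example can be checked in a couple of lines (illustrative only, not from the answer): the coefficient list of <span class="math-container">$X^2+X$</span> is nonzero, yet the associated function on <span class="math-container">$\Bbb F_2$</span> is identically zero.</p>

```python
# Coefficients of X^2 + X over F_2, as a list [c0, c1, c2]
coeffs = [0, 1, 1]

def evaluate(coeffs, x, p=2):
    """Evaluate the polynomial at x in the field Z/pZ."""
    return sum(c * x**i for i, c in enumerate(coeffs)) % p

assert any(c != 0 for c in coeffs)                     # not the null polynomial...
assert all(evaluate(coeffs, x) == 0 for x in (0, 1))   # ...but the null function on F_2
```
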
|
1,983,129 | <p>In the triple integral to calculate the volume of a sphere,
why does setting the limits as follows not work?</p>
<p>$$ \int_{0}^{2\pi} \int_{0}^{\pi}
\int_{0}^{R} p^2 \sin{\phi}
\, dp\,d\theta\,d\phi $$</p>
| Ben Grossmann | 81,360 | <p>Yes. In particular, there's no reason to go through the calculation $\left(\cos{2 \pi k \over n} + i \sin{2 \pi k \over n}\right)^n = 1$. From the problem statement, we <em>already</em> know that $\epsilon^n = 1$.</p>
|
1,983,129 | <p>In the triple integral to calculate the volume of a sphere,
why does setting the limits as follows not work?</p>
<p>$$ \int_{0}^{2\pi} \int_{0}^{\pi}
\int_{0}^{R} p^2 \sin{\phi}
\, dp\,d\theta\,d\phi $$</p>
| hamam_Abdallah | 369,188 | <p>For each root $ (\epsilon \neq1 )$ of the equation $x^n=1, \;$ let
$$S_\epsilon=\sum_{k=0}^{n-1}\epsilon^k.$$</p>
<p>Then, for each one of these $\epsilon$, </p>
<p>$$\epsilon S_\epsilon=S_\epsilon,$$</p>
<p>so $(\epsilon-1)S_\epsilon=0$ and thus</p>
<p>$$S_\epsilon=0,$$</p>
<p>since $\epsilon\neq 1$.</p>
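<p>Numerically, the claim is easy to sanity-check with <code>cmath</code> (a sketch, not part of the original answer): for every <span class="math-container">$n$</span>-th root of unity <span class="math-container">$\epsilon\neq 1$</span>, the geometric sum <span class="math-container">$S_\epsilon$</span> vanishes, while the excluded root <span class="math-container">$\epsilon=1$</span> gives <span class="math-container">$n$</span>.</p>

```python
import cmath

def S(eps, n):
    """Geometric sum 1 + eps + ... + eps^(n-1)."""
    return sum(eps**k for k in range(n))

n = 7
for m in range(1, n):                        # m = 0 gives eps = 1, where S = n
    eps = cmath.exp(2 * cmath.pi * 1j * m / n)
    assert abs(eps**n - 1) < 1e-9            # eps is an n-th root of unity
    assert abs(S(eps, n)) < 1e-9             # and S_eps = 0 since eps != 1
assert abs(S(1.0, n) - n) < 1e-9             # the excluded root eps = 1
```
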
|
2,046,521 | <p>Of course, faster calculations help solve problems quickly. But does that also mean that faster calculations open more opportunities for a career in mathematics (like a researcher)? I like mathematics and can spend weeks trying to solve any problem or understanding any concept. But nowadays, there are many contests that focus on faster calculations rather than problem solving. I am very slow at calculations due to which I end up doing badly in these types of contests. Does that mean I am lagging somewhere? Can this cause hindrance in pursuing a career in mathematics? </p>
| Mathily | 375,170 | <p>I don't think being slow at computations will make pursuing research math impossible, but it might make it harder. Getting an undergraduate degree in math (at an American university) still requires a fair amount of test taking and problem sets which involve computation. If you're really slow it may depress your grades, which is a barrier you'd have to overcome. However, this becomes less and less important as you progress through a course of study.</p>
<p>Secondly, there are many areas of research mathematics where 'computation' plays a large role. There is quite a lot of algebraic computation in discrete mathematics, for example, so it might be the case that it impacts what field you choose to study.</p>
<p>Being faster at recognizing when/how to apply basic theorems and techniques will make you more efficient, but in the end it's inventiveness that is necessary for substantial breakthroughs.</p>
|
793,693 | <p>Since I was interested in maths, I have a question. Is infinity a real or complex quantity? Or it isn't real or complex?</p>
| Michael Hardy | 11,667 | <p>There are many different things that could be called "the infinite" in mathematics. <b>None</b> of them is a real number or a complex number, but some are used in discussing functions of real or complex numbers.</p>
<ul>
<li><p>There are things called $+\infty$ and $-\infty$. Those appear in such expressions as
$$
\lim_{x\to-\infty}\frac{1}{1+2^x} = 1 \text{ and }\lim_{x\to+\infty}\frac{1}{1+2^x}=0.
$$</p></li>
<li><p>There is also an $\infty$ that is approached as $x$ goes in either direction along the line. That occurs in
$$
\lim_{x\to\infty} \frac{x}{1-x} = -1\text{ and }\lim_{x\to1} \frac{x}{1-x}=\infty.
$$
The second limit above could be said to approach $+\infty$ as $x$ approaches $1$ from one direction and $-\infty$ if from the other direction, but if one has just one $\infty$ at both ends of the line, then one makes the rational function a continuous function at the point where it has a vertical asymptote. This may be regarded as the same $\infty$ that appears in the theory of complex variables.</p></li>
<li><p>There are "points at infinity" in projective geometry. This is similar to the "infinity" in the bullet point immediately preceding this one. Two parallel lines meet at infinity, and it's the same point at infinity regardless of which of the two directions you take along the lines. But two non-parallel lines pass through <em>different</em> points at infinity rather than the same point at infinity. Thus any two lines in the projective plane intersect exactly once.</p></li>
<li><p>There are cardinalities of infinite sets such as $\{1,2,3,\ldots\}$ (which is countably infinite) or $\mathbb R$ (which is uncountably infinite). When it is said that Euclid proved there are infinitely many prime numbers, this sort of "infinity" is referred to.</p></li>
<li><p>One regards an integral $\int_a^b f(x)\,dx$ as a sum of infinitely many infinitely small quantities, and a derivative $dy/dx$ as a quotient of two infinitely small quantities. This is a different idea from all of the above.</p></li>
<li><p>Consider the step function $x\mapsto\begin{cases} 0 & \text{if }x<0, \\ 1 & \text{if }x\ge 0. \end{cases}$ One can say that its rate of change is infinite at $x=0$. This "infinity" admits multiplication by real numbers, so that for example, the rate of change at $0$ of the function that is $3.2$ times this function, is just $3.2$ times the "infinity" that is the rate of change of the original step function at $0$. This is made precise in the very useful theory of Dirac's delta function.</p></li>
<li><p>There is the "infinite" of Robinson's nonstandard analysis. In that theory, we learn that if $n$ is an infinite positive integer, then every "internal" one-to-one function that maps $\{1,2,3,\ldots,n-3\}$ into $\{1,2,3,\ldots,n\}$ omits exactly three elements of the latter set from its image. Nothing like that holds for cardinalities of infinite sets discussed above.</p></li>
<li><p>I'm sure there are other examples that I'm missing here.</p></li>
</ul>
|
2,220,738 | <p>How do I go about finding the dimension of the subspace
$$S:=\{p(x) \in P_4 : p(x) = 2p(x) \text{ for all } x\in\mathbb{R}\}$$ of $P_4$?</p>
<p>My textbook says $dim(P_n)=n+1$, but this does not give me the correct answer. All help is appreciated.</p>
| Eman Yalpsid | 94,959 | <p>Let $n=4$, then $p \in S \leq P_n$ has the form $\sum_{j=0}^{n}a_jx^j$ for some $a_j \in R$. <br> Therefore $p = 2p$ implies that for all $x \in R$,
$$\sum_{j=0}^{n}a_jx^j = 2\sum_{j=0}^{n}a_jx^j \iff 0 = \sum_{j=0}^{n}(2-1)a_jx^j = p(x).$$
What's the dimension of a space where every element has this property? </p>
<p>Now that we are given $R = \mathbb R$, it's even easier: which element of $P_n \leq \mathbb R[x]$ has infinitely many roots?</p>
|
182,785 | <p>I haved plot a graph from two functions:</p>
<pre><code>η = 52;
h = 0.5682;
dpdx = -4.092*10^(-2);
Fg = dpdx;
Fl = dpdx/η;
Bl = ((Fg - Fl) h^2 - Fg)/(2 h - 2 η*h + 2 η);
Cg = -Fg/2 - η*Bl;
Bg = η*Bl;
Ut1[y_] := Fg*y^2/2 + Bg*y + Cg;
Ut2[y_] := Fl*y^2/2 + Bl*y;
Plot1 = Plot[Ut1[y]*1000, {y, h, 1}];
Plot2 = Plot[Ut2[y]*1000, {y, 0, h}, PlotStyle -> Orange];
Show[{Plot1, Plot2}, PlotRange -> All, AxesLabel -> {"y", "U"},
AxesStyle -> FontSize -> 14]
</code></pre>
<p>The result:</p>
<p><a href="https://i.stack.imgur.com/9BNfN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9BNfN.png" alt="enter image description here"></a></p>
<p>How can I flip and transform the graph this way?</p>
<p><a href="https://i.stack.imgur.com/0JvMC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0JvMC.jpg" alt="enter image description here"></a></p>
<p>PS: The numbers on the axes have to be legible. </p>
| Bob Hanlon | 9,362 | <p>Use <a href="https://reference.wolfram.com/language/ref/ParametricPlot.html" rel="nofollow noreferrer"><code>ParametricPlot</code></a> to flip the axes.</p>
<pre><code>η = 52;
h = 0.5682;
dpdx = -4.092*10^(-2);
Fg = dpdx;
Fl = dpdx/η;
Bl = ((Fg - Fl) h^2 - Fg)/(2 h - 2 η*h + 2 η);
Cg = -Fg/2 - η*Bl;
Bg = η*Bl;
Ut1[y_] := Fg*y^2/2 + Bg*y + Cg;
Ut2[y_] := Fl*y^2/2 + Bl*y;
plot3 = ParametricPlot[{Ut1[y]*1000, y}, {y, h, 1}];
plot4 = ParametricPlot[{Ut2[y]*1000, y}, {y, 0, h}, PlotStyle -> Orange];
Show[{plot3, plot4}, PlotRange -> All, AxesLabel -> {"U", "y"},
AxesStyle -> FontSize -> 14]
</code></pre>
<p><a href="https://i.stack.imgur.com/Jgizy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jgizy.png" alt="enter image description here"></a></p>
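<p>For readers without Mathematica, here is a hedged Python sketch of the same idea: <code>ParametricPlot[{Ut[y], y}, ...]</code> simply plots the point list <span class="math-container">$(U(y)\cdot 1000,\ y)$</span> instead of <span class="math-container">$(y,\ U(y)\cdot 1000)$</span>, i.e. it swaps the coordinates. The boundary checks at the end (zero velocity at <span class="math-container">$y=0$</span> and <span class="math-container">$y=1$</span>, continuity at <span class="math-container">$y=h$</span>) are inferred from the constants above, not stated in the answer.</p>

```python
# Constants copied from the Mathematica code above
eta = 52
h = 0.5682
dpdx = -4.092e-2
Fg = dpdx
Fl = dpdx / eta
Bl = ((Fg - Fl) * h**2 - Fg) / (2 * h - 2 * eta * h + 2 * eta)
Cg = -Fg / 2 - eta * Bl
Bg = eta * Bl

Ut1 = lambda y: Fg * y**2 / 2 + Bg * y + Cg   # profile on [h, 1]
Ut2 = lambda y: Fl * y**2 / 2 + Bl * y        # profile on [0, h]

# "Flipping" the plot = emitting (U*1000, y) pairs instead of (y, U*1000)
n = 50
flipped = [(Ut2(h * i / n) * 1000, h * i / n) for i in range(n + 1)] \
        + [(Ut1(h + (1 - h) * i / n) * 1000, h + (1 - h) * i / n) for i in range(n + 1)]

assert abs(Ut2(0.0)) < 1e-12          # no-slip at the lower wall
assert abs(Ut1(1.0)) < 1e-12          # no-slip at the upper wall
assert abs(Ut1(h) - Ut2(h)) < 1e-12   # the two branches meet at y = h
```

<p>Feeding <code>flipped</code> to any 2-D line plotter reproduces the swapped-axes figure.</p>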
|
2,801,433 | <p>I have made the following conjecture, and I do not know if this is true.</p>
<blockquote>
<blockquote>
<p><strong>Conjecture:</strong></p>
</blockquote>
<p><span class="math-container">\begin{equation*}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longrightarrow}2,\ \text{ where } p_n \text{ denotes the } n^{\text{th}} \text{ prime}.\end{equation*}</span></p>
</blockquote>
<p>Is my conjecture true? It seems like it, according to a plot made by Wolfram|Alpha, but if it does converge, then it converges <em>very</em>, <em>very</em> slowly. In fact, for <span class="math-container">$k=5000$</span> the sum is approximately equal to <span class="math-container">$1.97$</span>, which shows just how slow the convergence would be.</p>
<p>Is there a way of showing whether or not this is indeed convergent? For any other higher values of <span class="math-container">$k$</span>, it seems that it is just too much for Wolfram|Alpha to calculate, and it does not give me a result when I let <span class="math-container">$k=\infty$</span>. Also, for users who might not understand the notation, we can similarly write that <span class="math-container">$$\sum_{n=1}^\infty\frac{1}{\pi^{1/n}p_n}=2\qquad\text{ or }\qquad\lim_{k\to\infty}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}=2.$$</span> Also, without Wolfram|Alpha, I have <em>no idea</em> how to approach this problem in terms of proving it or disproving it. Does the sum even converge <em>at all</em>? If so, to what value? Any help would be much appreciated.</p>
<hr />
<p>Thank you in advance.</p>
<p><strong>Edit:</strong></p>
<p>I looked at <a href="https://math.stackexchange.com/questions/2070991/is-sum-limits-n-1-infty-frac1nk1-frac12-for-k-to-infty?rq=1">this post</a> to see if I could rewrite my conjecture as something else in order to help myself out. Consequently, I wrote that <span class="math-container">$$\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longleftrightarrow}4\sum_{n=1}^\infty\frac{1}{n^k+1}\tag{$\text{LHS}=2$}$$</span> since both sums look very similar. Could <em>this</em> be of use?</p>
| Claude Leibovici | 82,404 | <p>Numerically, this does not seem to be true.</p>
<p>Considering $$S_m=\sum_{n=1}^{10^m}\frac{1}{\pi^{1/n}p_n}$$ and using arbitrary-precision arithmetic, I obtained the following numbers
$$\left(
\begin{array}{cc}
m & S_m \\
1 & 0.891549393 \\
2 & 1.437754209 \\
3 & 1.787152452 \\
4 & 2.038881140 \\
5 & 2.235759176 \\
6 & 2.397832041
\end{array}
\right)$$</p>
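<p>These partial sums are cheap to reproduce. The sketch below is illustrative (a simple trial-division prime generator stands in for whatever prime table was actually used) and recovers the first two rows of the table.</p>

```python
from math import pi, fsum

def first_primes(n):
    """First n primes by trial division -- fine for small n."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

def S(k):
    """Partial sum of 1 / (pi^(1/n) * p_n) over the first k primes."""
    ps = first_primes(k)
    return fsum(1 / (pi ** (1 / n) * p) for n, p in enumerate(ps, start=1))

print(S(10), S(100))   # approx 0.891549 and 1.437754, matching the table
```
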
<p><strong>Edit</strong></p>
<p>After marty cohen's answer, based on the above data, a quick and dirty fit for the model
$$S_k=\sum_{n=1}^{k}\frac{1}{\pi^{1/n}p_n}=a+b\,\log(\log(k))$$ gives $(R^2=0.999947)$
$$\begin{array}{clclclclc}
\text{} & \text{Estimate} & \text{Standard Error} & \text{Confidence Interval} \\
a & 0.17085 & 0.02635 & \{0.08600,0.25471\} \\
b & 0.84291 & 0.01302 & \{0.80146,0.88436\} \\
\end{array}$$</p>
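<p>The "quick and dirty fit" can be reproduced with an ordinary least-squares line in the variable <span class="math-container">$x=\log\log k$</span>. This sketch uses only the six table values above; the fitted constants land close to the quoted estimates.</p>

```python
from math import log

ks = [10**m for m in range(1, 7)]
Ss = [0.891549393, 1.437754209, 1.787152452,
      2.038881140, 2.235759176, 2.397832041]

# ordinary least squares for S = a + b * log(log(k))
xs = [log(log(k)) for k in ks]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(Ss) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, Ss)) \
    / sum((x - xbar)**2 for x in xs)
a = ybar - b * xbar

print(a, b)   # close to the quoted estimates a ~ 0.17085, b ~ 0.84291
```
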
|
4,394,983 | <p>I am tasked with proving that <span class="math-container">$\operatorname{Th}(\mathbb{Z}, <)$</span> has continuum many models.
For this we are given the following construction.</p>
<blockquote>
<p>Let <span class="math-container">$\alpha \in \mathcal{C} = \{0,1\}^{\mathbb{N}}$</span>. We define for each <span class="math-container">$\alpha$</span> the set <span class="math-container">$V_{\alpha}$</span>: <span class="math-container">$$V_{\alpha} = \{q \in \mathbb{Q}\mid\exists n[2n \leq q \leq 2n+1]\ \lor\ \exists n[\alpha(n) = 1\ \land\ q = 2n + \frac{3}{2}]\}$$</span>
Define for each such <span class="math-container">$V_{\alpha}$</span> the set <span class="math-container">$W_{\alpha} := V_{\alpha}\times \mathbb{Z}$</span>.</p>
</blockquote>
<p>Now consider the structures <span class="math-container">$(V_{\alpha}, <)$</span> and <span class="math-container">$(W_{\alpha}, <')$</span> with <span class="math-container">$<$</span> the usual ordering on <span class="math-container">$\mathbb{Q}$</span> and <span class="math-container">$<'$</span> the lexicographic ordering on <span class="math-container">$\mathbb{Q}^2$</span>.</p>
<p>There are now three things to do:</p>
<ol>
<li><p>Prove: <span class="math-container">$\forall\alpha\in\mathcal{C}\forall\beta\in\mathcal{C}[\alpha\neq\beta\to (V_{\alpha}, <) \ncong (V_{\beta}, <)]$</span>.</p>
</li>
<li><p>Prove: <span class="math-container">$(V_{\alpha}, <) \cong (V_{\beta}, <)$</span> if and only if <span class="math-container">$(W_{\alpha}, <') \cong (W_{\beta}, <')$</span>.</p>
</li>
<li><p>Prove: <span class="math-container">$(W_{\alpha}, <') \equiv (\mathbb{Z}, <)$</span>. That is that both structures are elementary equivalent.</p>
</li>
</ol>
<p><strong>My question</strong></p>
<p>I have difficulties proving 3. Thus far I tried using Ehrenfeucht-Fraïssé games but as of now no result yet. This is probably because <span class="math-container">$(W_{\alpha}, <')$</span> can be thought of as a plane and <span class="math-container">$(\mathbb{Z}, <)$</span> as a line. Successfully devising a winning strategy basically comes down to good choices when the first player chooses an element in <span class="math-container">$W_{\alpha}$</span>.
I know how to win if the game takes at most 3 moves from each player. Yet generalizing this has proven difficult.</p>
<p>Also I have some trouble with 1. I see why this is true: if <span class="math-container">$\alpha \neq \beta$</span> then either <span class="math-container">$V_{\alpha}$</span> or <span class="math-container">$V_{\beta}$</span> has one more "successor". But I have yet to translate this into a formal proof.</p>
<p>Can I get help on these problems?
I thank you in advance.</p>
| Olivier Roche | 649,615 | <p>Point 1. : let <span class="math-container">$\alpha \neq \beta \in \mathcal{C}$</span>. Then there is <span class="math-container">$n \in \mathbb{N}$</span> such that <span class="math-container">$\alpha(n) \neq \beta(n)$</span>. WLOG, <span class="math-container">$\alpha(n) = 0$</span> and <span class="math-container">$\beta(n) = 1$</span>. Now take <span class="math-container">$q := 2 n + \frac{3}{2}$</span>.</p>
<p><span class="math-container">$q \notin V_\alpha$</span> but <span class="math-container">$q\in V_\beta$</span>, hence <span class="math-container">$V_\alpha \neq V_\beta$</span>.</p>
<p>Point 3. : The theory of <span class="math-container">$(\mathbb{Z}, <)$</span> is the theory of discrete linear orders without endpoints (which happens to be a complete theory). So, all you have to do is to check that <span class="math-container">$(W_{\alpha}, <')$</span> is a discrete linear order without endpoints.</p>
|
3,839,878 | <p>Am currently doing a question that asks about the relationship between a quadratic and its discriminant.</p>
<p>If we know that the quadratic <span class="math-container">$ax^2+bx+c$</span> is a perfect square, then can we say anything about the discriminant?</p>
<p>Specifically, can we be sure that the discriminant equals 0?</p>
<p>So far, I have tried to complete the square for the general quadratic, and got to:</p>
<p><span class="math-container">$a((x+\frac{b}{2a})^2-\frac{b^2}{4a^2}+\frac ca)$</span></p>
<p>But am now stuck. What should I do next, or is there a totally different route I should be taking?</p>
| poetasis | 546,655 | <p>Let us write <span class="math-container">$ax^2+bx+c=(jx+k)^2-d$</span>, so that solving <span class="math-container">$ax^2+bx+c=0$</span> amounts to solving <span class="math-container">$(jx+k)^2=d$</span>.</p>
<p><span class="math-container">\begin{equation}
d=(jx+k)^2\implies j^2x^2+2jkx+k^2-d=0\implies a=j^2\quad b=2jk\quad c=k^2-d\\
x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
=\frac{-2jk\pm\sqrt{2^2j^2k^2-4j^2(k^2-d)}}{2j}
=\frac{-2jk\pm 2j\sqrt{d}}{2j}
=\pm\sqrt{d}-k
\end{equation}</span></p>
<p>Reading off the discriminant from the computation above: <span class="math-container">$b^2-4ac = 4j^2k^2-4j^2(k^2-d) = 4j^2d$</span>. So the discriminant vanishes exactly when <span class="math-container">$d=0$</span>, that is, exactly when <span class="math-container">$ax^2+bx+c=(jx+k)^2$</span> is a perfect square.</p>
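<p>A quick numeric sanity check of this (not from the original answer): expand <span class="math-container">$(jx+k)^2$</span> for a few integer choices of <span class="math-container">$j,k$</span> and confirm the discriminant <span class="math-container">$b^2-4ac$</span> of the resulting quadratic is <span class="math-container">$0$</span>.</p>

```python
for j in (1, 2, 5):
    for k in (-3, 0, 7):
        a, b, c = j * j, 2 * j * k, k * k   # coefficients of (j x + k)^2
        assert b * b - 4 * a * c == 0       # discriminant of a perfect square
```
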
|
333,467 | <p>I was reading in my analysis textbook that the map $ f: {\mathbf{GL}_{n}}(\mathbb{R}) \to {\mathbf{GL}_{n}}(\mathbb{R}) $ defined by $ f(A) := A^{-1} $ is a continuous map. I also saw that $ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $. My question is:</p>
<blockquote>
<p>What is the unique extension of $ f $ to $ {\mathbf{M}_{n}}(\mathbb{R}) $?</p>
</blockquote>
| Haskell Curry | 39,362 | <p>Just to add to the two answers above. If you refer to my solution in the thread <a href="https://math.stackexchange.com/questions/290971/left-topological-zero-divisors-in-banach-algebras/291196#291196">Left topological zero-divisors in Banach algebras.</a>, you will see that if $ X \in \partial({\text{GL}_{n}}(\mathbb{R})) \subseteq {\text{M}_{n}}(\mathbb{R}) $, where $ \partial $ denotes the topological boundary operator, then there exists a sequence $ (X_{n})_{n \in \mathbb{N}} $ in $ {\text{GL}_{n}}(\mathbb{R}) $ such that $ \displaystyle \lim_{n \to \infty} X_{n} = X $ but $ \displaystyle \lim_{n \to \infty} \| X_{n}^{-1} \| = \infty $. This proves that $ (\bullet)^{-1}: {\text{GL}_{n}}(\mathbb{R}) \to {\text{GL}_{n}}(\mathbb{R}) $ cannot be extended continuously to $ {\text{M}_{n}}(\mathbb{R}) $.</p>
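<p>A concrete instance of this boundary behaviour (illustrative only, using hand-rolled <span class="math-container">$2\times 2$</span> inverses rather than any particular library): the invertible matrices <span class="math-container">$X_n = \operatorname{diag}(1/n, 1)$</span> converge to the singular matrix <span class="math-container">$\operatorname{diag}(0,1)$</span>, while <span class="math-container">$\|X_n^{-1}\|$</span> blows up, so no continuous extension of <span class="math-container">$A \mapsto A^{-1}$</span> can exist at that boundary point.</p>

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def max_norm(m):
    """Entrywise max norm, enough to witness the blow-up."""
    return max(abs(x) for row in m for x in row)

norms = []
for n in (10, 100, 1000):
    Xn = [[1.0 / n, 0.0], [0.0, 1.0]]   # invertible, converging to diag(0, 1)
    norms.append(max_norm(inv2(Xn)))    # ||Xn^{-1}|| grows like n

for n, nm in zip((10, 100, 1000), norms):
    assert abs(nm - n) < 1e-6
```
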
|
1,677,359 | <p>$\sum_{i=0}^n 2^i = 2^{n+1} - 1$</p>
<p>I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks</p>
| André Nicolas | 6,312 | <p>We give a combinatorial interpretation.</p>
<p>We are counting the "words" of length $n+1$, over the alphabet $\{0,1\}$, that are not all $0$'s. There are $2^{n+1}-1$ such words. </p>
<p>We count these words another way. Maybe the first $1$ is at the beginning. There are $2^n$ such words.</p>
<p>Maybe the word begins with $01$. There are $2^{n-1}$ such words. </p>
<p>Maybe the word begins with $001$. There are $2^{n-2}$ such words.</p>
<p>And so on. Finally, maybe the first $1$ is at the right end. There are $2^0$ such words.</p>
<p>So the total number of words of length $n+1$ that are not all $0$'s is $2^n+2^{n-1}+2^{n-2}+\cdots +2^0$. This is our given sum, backwards. </p>
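<p>The double count is easy to verify by brute force for small <span class="math-container">$n$</span> (an illustrative sketch, not part of the answer): enumerate all length-<span class="math-container">$(n+1)$</span> binary words, discard the all-zeros word, and group the rest by the position of their first <span class="math-container">$1$</span>.</p>

```python
from itertools import product

for n in range(1, 8):
    # all binary words of length n+1 that are not all 0's
    words = [w for w in product((0, 1), repeat=n + 1) if any(w)]
    assert len(words) == 2 ** (n + 1) - 1               # direct count
    # group by the position of the first 1: 2^n + 2^(n-1) + ... + 2^0
    by_first_one = [sum(1 for w in words if w.index(1) == i) for i in range(n + 1)]
    assert by_first_one == [2 ** (n - i) for i in range(n + 1)]
```
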
|
1,666,396 | <p>I can show the convergence of the following infinite product and some bounds for it:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\sqrt{1+\frac{1}{2}} \sqrt[3]{1+\frac{1}{3}} \sqrt[4]{1+\frac{1}{4}} \cdots<$$</p>
<p>$$<\left(1+\frac{1}{4} \right)\left(1+\frac{1}{9} \right)\left(1+\frac{1}{16} \right)\cdots=\prod_{k \geq 2} \left(1+\frac{1}{k^2} \right)=\frac{\sinh \pi}{2 \pi}=1.83804$$</p>
<p>Here I used Euler's product for $\frac{\sin x}{x}$.</p>
<p>The next upper bound is not as easy to evaluate, but still possible, taking two more terms in Taylor's series for $\sqrt[k]{1+\frac{1}{k} }$:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{k-1}{2k^4}+\frac{2k^2-3k+1}{6k^6} \right)=$$</p>
<p>$$=\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{1}{2k^3}+\frac{5}{6k^4}-\frac{1}{2k^5}+\frac{1}{6k^6} \right)<$$</p>
<p>$$<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{108}+\frac{\pi^6}{5670}-1-\frac{\zeta (3)}{2}-\frac{\zeta (5)}{2} \right)=1.81654$$</p>
<p>The numerical value of the infinite product is approximately:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=1.758743628$$</p>
<p>The ISC found no closed form for this number.</p>
<blockquote>
<p>Is there some way to evaluate this product or find better bounds in closed form?</p>
</blockquote>
<hr>
<p><strong>Edit</strong></p>
<p>Clement C suggested taking logarithm and it was a very useful suggestion, since I get the series:</p>
<p>$$\ln \prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}= \frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots$$</p>
<p>I don't know how to find the closed form, but I can certainly use it to find the boundaries (since the series for logarithm are very simple).</p>
<p>$$\frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots>\sum^{\infty}_{k=2} \frac{1}{k^2}-\frac{1}{2}\sum^{\infty}_{k=2} \frac{1}{k^3}$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}-\frac{1}{2}-\frac{\zeta (3)}{2} \right)=1.72272$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{5}{6}-\frac{\zeta (3)}{2} \right)=1.77065$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{7}{12}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.75438$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{47}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.76048$$</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{37}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4} -\frac{\zeta (7)}{6}\right)=1.75803$$</p>
</blockquote>
<p>This method generates much better bounds than my first idea. The last two are very good approximations.</p>
<hr>
<p><strong>Edit 2</strong></p>
<p>Actually, would it be correct to write (it gives the correct value of the product):</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\frac{1}{2} \exp \left( \sum_{k \geq 2} \frac{(-1)^k \zeta(k)}{k-1} \right)$$</p>
</blockquote>
| Rob | 274,944 | <p>As already discussed above by Yuriy S and others, the product is intimately linked with the series $\sum_{n=1}^{\infty} { \ln(n+1) \over n(n+1) } \approx 1.2577\dots$. The connection is derived as follows:</p>
<p>$$ \prod_{k=2}^{\infty} \left( 1+ {1\over k} \right)^{1/k} \\
=\exp \left( \ln \left( \prod_{k=2}^{\infty} \left( 1+ {1\over k} \right)^{1/k} \right) \right) \\
= \exp \left( \sum_{k=2}^{\infty} \ln \left( \left( 1+ {1\over k} \right)^{1/k} \right) \right) \\
= \exp \left( \sum_{k=2}^{\infty} { \ln\left({k+1 \over k}\right)\over k } \right) \\
=\exp \left( \sum_{k=2}^{\infty} {\ln(k+1)\over k } -\sum_{k=2}^{\infty} {\ln(k) \over k} \right)\\
=\exp \left( \sum_{k=3}^{\infty} {\ln(k) \over k-1} -\sum_{k=3}^{\infty} {\ln(k) \over k} - {1 \over 2}\ln 2 \right)\\
=\exp \left( \sum_{k=3}^{\infty} {\ln(k) \over k(k-1)} - {1 \over 2}\ln 2 \right)\\
=\exp \left( \sum_{k=2}^{\infty} {\ln(k+1) \over k(k+1)} - {1 \over 2}\ln 2 \right)\\
=\exp \left( \sum_{k=1}^{\infty} {\ln(k+1) \over k(k+1)} - {1 \over 2}\ln 2 - {1 \over 2}\ln 2 \right)\\
=e^{\sum_{k=1}^{\infty} {\ln(k+1) \over k(k+1)}} \cdot e^{-\ln 2}\\
={1 \over 2}e^{\sum_{k=1}^{\infty} {\ln(k+1) \over k(k+1)}} $$</p>
<p>Therefore the product has a closed form only if $\sum_{k=1}^{\infty} {\ln(k+1) \over k(k+1)}$ has a closed form.</p>
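<p>Both sides of the identity can be checked numerically. This sketch (not from the answer) truncates at <span class="math-container">$10^5$</span> terms, where the tails are already below <span class="math-container">$10^{-3}$</span>, and compares the partial product with half the exponential of the partial series.</p>

```python
from math import log, exp, fsum

N = 100_000

# left side: partial product, computed stably as the exponential of a log-sum
log_prod = fsum(log(1 + 1 / k) / k for k in range(2, N + 1))
lhs = exp(log_prod)

# right side: (1/2) * exp of the partial series sum ln(k+1) / (k(k+1))
series = fsum(log(k + 1) / (k * (k + 1)) for k in range(1, N + 1))
rhs = 0.5 * exp(series)

print(lhs, rhs)   # both approach the quoted value 1.758743...
```
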
|
1,752,021 | <blockquote>
<p>Let $G=S_3\times S_3$ where $S_3$ is the symmetric group. Let $p=
\begin{pmatrix}
1 & 2 & 3 \\
2 & 3 & 1 \\
\end{pmatrix}
$, let $L=(p)$, $K=L\times L$ and $H=\{(I_3,I_3),(p,p),(p^2,p^2)\}$. Show that $K\triangleleft G$, $H\triangleleft K$, but $H$ is not a normal subgroup of $G$.</p>
</blockquote>
<p>I wonder if there is a quick way to do this exercise, without having to work out each of the products.</p>
| JKim | 290,442 | <p>$K \trianglelefteq G$ is easy to see because $L \trianglelefteq S_3$, as $L$ has index 2 in $S_3$. For $H \trianglelefteq K$, we can use the fact:</p>
<p>If $H$ has prime index $p$ in $G$ and no prime divisor of $|G|$ is less than $p$, then $H \trianglelefteq G$. </p>
<p>$|K:H| = 3$ and there is no smaller prime dividing $|K|=9$, so we see that $H \trianglelefteq K$. </p>
<p>Now, $H$ not being normal in $G$ is settled by the single computation $((1,2),e)*(p,p)*((1,2),e) = ((1,3,2),(1,2,3)) = (p^2,p)\notin H$.</p>
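<p>The closing computation can be machine-checked with permutations represented as image tuples (a sketch, not part of the original answer). Note the second coordinate: conjugating by <span class="math-container">$((1\,2), e)$</span> leaves it as <span class="math-container">$e\cdot p\cdot e = p$</span>, so the conjugate is <span class="math-container">$(p^2, p)$</span>, whose coordinates differ and which therefore lies outside <span class="math-container">$H$</span>.</p>

```python
def compose(f, g):
    """(f o g)(i) = f(g(i)); permutations as image tuples on {0, 1, 2}."""
    return tuple(f[g[i]] for i in range(3))

e = (0, 1, 2)
p = (1, 2, 0)               # the 3-cycle (1 2 3), written 0-based
p2 = compose(p, p)          # p^2 = (1 3 2)
t = (1, 0, 2)               # the transposition (1 2), 0-based

H = {(e, e), (p, p), (p2, p2)}

# conjugate (p, p) by ((1 2), e); the transposition is its own inverse
g = (compose(compose(t, p), t), compose(compose(e, p), e))

assert g == (p2, p)         # first coordinate flips p to p^2, second stays p
assert g not in H           # hence H is not normal in G
```
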
|
14,007 | <p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p>
<p>My question essentially boils down to: </p>
<blockquote>
<p>What are tips/tricks/techniques for creating quiz and exam questions that both</p>
<ol>
<li>test students at various levels of Bloom's hierarchy and</li>
<li>minimize the amount of work for the grader</li>
</ol>
<p>?</p>
</blockquote>
<p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p>
<p>I have some ideas:</p>
<ul>
<li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li>
<li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li>
<li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li>
</ul>
<p>I'm curious to hear what other things people have used.</p>
| Daniel R. Collins | 5,563 | <p>One I recently learned -- for the order of the signs in factoring a sum or difference of cubes, remember <strong>SOAP</strong>: Same sign, Opposite sign, Always a Plus.</p>
<blockquote>
<p>Sum of Cubes: <span class="math-container">$x^3 + a^3 = (x + a)(x^2 - ax + a^2)$</span></p>
<p>Difference of Cubes: <span class="math-container">$x^3 - a^3 = (x - a)(x^2 + ax + a^2)$</span></p>
</blockquote>
<p>Credit: <a href="https://openstax.org/details/books/college-algebra" rel="nofollow noreferrer">OpenStax College Algebra</a></p>
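<p>A tiny numeric check of both identities over a grid of integers (illustrative only):</p>

```python
for x in range(-3, 4):
    for a in range(-3, 4):
        # Sum of cubes: Same sign, Opposite sign, Always a Plus
        assert (x + a) * (x * x - a * x + a * a) == x**3 + a**3
        # Difference of cubes
        assert (x - a) * (x * x + a * x + a * a) == x**3 - a**3
```
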
|
14,007 | <p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p>
<p>My question essentially boils down to: </p>
<blockquote>
<p>What are tips/tricks/techniques for creating quiz and exam questions that both</p>
<ol>
<li>test students at various levels of Bloom's hierarchy and</li>
<li>minimize the amount of work for the grader</li>
</ol>
<p>?</p>
</blockquote>
<p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p>
<p>I have some ideas:</p>
<ul>
<li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li>
<li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li>
<li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li>
</ul>
<p>I'm curious to hear what other things people have used.</p>
| JRN | 77 | <p>Here is my way of memorizing the three main trigonometric functions. An angle $\theta$ is in standard position locating a point $(x,y)$ on a circle with a radius $r$ centered at the origin. There is a convertible (a car with the roof removed) being driven down a road during the daytime.</p>
<p>The sun (sine) is above (vertical $y$) the road ($r$): $\sin\theta=y/r$.</p>
<p>The car (cosine*) is moving horizontally ($x$) over the road ($r$): $\cos\theta=x/r$.</p>
<p>When the sun is above ($y$) the car ($x$), the driver gets a tan: $\tan\theta=y/x$.</p>
<p>*In Filipino (my language), "car" is "kotse" which is pretty close to "cos."</p>
|
14,007 | <p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p>
<p>My question essentially boils down to: </p>
<blockquote>
<p>What are tips/tricks/techniques for creating quiz and exam questions that both</p>
<ol>
<li>test students at various levels of Bloom's hierarchy and</li>
<li>minimize the amount of work for the grader</li>
</ol>
<p>?</p>
</blockquote>
<p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p>
<p>I have some ideas:</p>
<ul>
<li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li>
<li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li>
<li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li>
</ul>
<p>I'm curious to hear what other things people have used.</p>
| Steven Gubkin | 117 | <p>I tell students to visualize $<$ and $>$ as mouths. They always want to eat the bigger number.</p>
|
1,657,694 | <p>The Algorithms course I am taking on Coursera does not require discrete math to find discrete sums. Dr. Sedgewick recommends replacing sums with integrals in order to get basic estimates.</p>
<p>For example: $$\sum _{ i=1 }^{ N }{ i } \sim \int _{ x=1 }^{ N }{ x } dx \sim \frac { 1 }{ 2 } N^2$$</p>
<p>How would I go about doing this for the problem below?</p>
<p>What is the order of growth of the worst case running time of the following code fragment
as a function of N?</p>
<p><code>int sum = 0;
for (int i = 1; i <= N; i++)
for (int j = 1; j <= i*i*i; j++)
sum++;</code></p>
<p>What I've got so far:</p>
<p>$$\sum _{ i=1 }^{ N } \sum _{ j=1 }^{ i^{ 3 } } 1\approx \int _{ i=1 }^{ N }\int _{ j=1 }^{ i^{ 3 } } 1\,\mathrm dj\,\mathrm di$$</p>
<p>I'm not sure if this is correct, and I'm confused as to how to set these types of problems up. I've taken integral calculus, but it was almost 6 months ago. A hint in the right direction would go a long way. </p>
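<p>An empirical sanity check of the integral estimate (added for illustration, not part of the original post): the inner loop body runs $\sum_{i=1}^{N} i^3$ times in total, which should approach $N^4/4$ as $N$ grows. A quick Python sketch:</p>

```python
# Check that the nested loop's operation count grows like N^4/4,
# matching the integral estimate sum_{i=1}^{N} i^3 ~ int_1^N x^3 dx.
def inner_increments(N):
    count = 0
    for i in range(1, N + 1):
        count += i * i * i   # the inner loop runs i^3 times
    return count

for N in (10, 100, 400):
    ratio = inner_increments(N) / (N ** 4 / 4)
    print(N, ratio)  # the ratio approaches 1 as N grows
```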
| DanielWainfleet | 254,665 | <p>Let $D$ be the set of open intervals removed from $[0,1]$ in the construction of the Cantor set.</p>
<p>For $d=(a,b)\in D,$ let $(x_{n,d})_{n\in N}$ be strictly decreasing, converging to $a$; and let $(y_{n,d})_{n\in N}$ be strictly increasing, converging to $b$; with $x_{1,d}=y_{1,d}=(a+b)/2.$</p>
<p>Let $C$ be the Cantor set .</p>
<p>Let $E=C\cup \{x_{n,d}:n\in N\land d\in D\}\cup \{y_{n,d}:n\in N\land d\in D\}.$</p>
<p>Let $F=\bar E.$ (I think $E$ is closed, but I haven't verified it.)</p>
<p>Observe that the set $F^i$ of isolated points of $F$ is equal to $E\backslash C,$ but $\overline {F^i}$ includes the set $V$ of all the endpoints of the members of $D,$ and $\bar V=C.$ We conclude that $F^i$ is dense in $F.$ And $F$ is uncountable because $C\subset F.$</p>
|
3,429,489 | <blockquote>
<p>Let <span class="math-container">$\mathcal F = \{f \mid f : \mathbb R \rightarrow \mathbb R\}$</span> and
define relationship <span class="math-container">$R$</span> on <span class="math-container">$\mathcal F$</span> as follows:</p>
<p><span class="math-container">$$R = \{(f,g) \in \mathcal F \times \mathcal F \mid \exists h \in \mathcal F (f = h \circ g)\}$$</span></p>
<p>Prove that for all <span class="math-container">$f \in \mathcal F$</span>, <span class="math-container">$i_{\mathbb R}Rf$</span> iff <span class="math-container">$f$</span> is
one-to-one.</p>
</blockquote>
<p><span class="math-container">$$i_{\mathbb R} = \{(x,x) \mid x \in \mathbb R\}$$</span></p>
<p>My attempt:</p>
<p><span class="math-container">$(\rightarrow)$</span></p>
<p>Suppose <span class="math-container">$i_{\mathbb R}Rf$</span>. Then there exists a function <span class="math-container">$h \in \mathcal F$</span> such that </p>
<p><span class="math-container">$$i_{\mathbb R} = h \circ f $$</span></p>
<p>Suppose <span class="math-container">$(x,a_1) \in f$</span> and <span class="math-container">$(x,a_2) \in f$</span></p>
<p>Since <span class="math-container">$a_1,a_2 \in \mathbb R$</span> and <span class="math-container">$h: \mathbb R \rightarrow \mathbb R$</span>, there exist <span class="math-container">$b_1,b_2 \in \mathbb R$</span> such that <span class="math-container">$(a_1,b_1) \in h$</span> and <span class="math-container">$(a_2,b_2) \in h$</span></p>
<p>Then we have <span class="math-container">$(x,b_1) \in i_{\mathbb R}$</span> and <span class="math-container">$(x,b_2) \in i_{\mathbb R}$</span>, which means that <span class="math-container">$b_1 = b_2$</span> </p>
<p>Since <span class="math-container">$h$</span> is a function, <span class="math-container">$a_1 = a_2$</span></p>
<p><span class="math-container">$(\leftarrow)$</span></p>
<p>Suppose <span class="math-container">$f$</span> is one-to-one. Define <span class="math-container">$f^{-1}$</span> as </p>
<p><span class="math-container">$$f^{-1} = \{(y,x) \mid (x,y) \in f\} $$</span></p>
<p>Then <span class="math-container">$f^{-1} \circ f = i_{\mathbb R}$</span>, and thus <span class="math-container">$i_{\mathbb R}Rf$</span></p>
<p><span class="math-container">$\Box$</span></p>
<hr>
<p>Is it correct?</p>
| Community | -1 | <p>[EDITED] I'm not competent enough to assess the proof you have proposed (though it seems right to me). Let me try an indirect proof of (-->) using contraposition.</p>
<p><a href="https://i.stack.imgur.com/25CyC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/25CyC.png" alt="enter image description here"></a></p>
|
3,301,115 | <p>I'm currently taking an introductory course in graph theory, and this problem is giving me a bit of a hard time. Where would I even start? Thanks a bunch!</p>
| chenbai | 59,487 | <p>hint:</p>
<p><span class="math-container">$\dfrac{a}{\sqrt{2-2bc}} \le \dfrac{a}{\sqrt{1+a^2}}$</span></p>
<p>if you can prove <span class="math-container">$f(x)=\sqrt{\dfrac{x}{1+x}}$</span> is concave function, then the problem is solved.</p>
|
2,147,458 | <p>Solve the following integral:
$$
\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sin\frac{9x}{2}}{\sin\frac{x}{2}}dx
$$</p>
| xpaul | 66,420 | <p>By using
$$ \sin A-\sin B=2\cos(\frac{A+B}{2})\sin(\frac{A-B}{2})$$
one has
\begin{eqnarray}
&&\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sin\frac{9x}{2}}{\sin\frac{x}{2}}dx\\
&=&\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sum_{n=1}^4\bigg[\sin\frac{(2n+1)x}{2}-\sin\frac{(2n-1)x}{2}\bigg]+\sin\frac{x}{2}}{\sin\frac{x}{2}}dx\\
&=&\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sum_{n=1}^42\cos(nx)\sin \frac{x}{2}+\sin\frac{x}{2}}{\sin\frac{x}{2}}dx\\\\
&=&\frac{2}{\pi}\int_{-\pi}^\pi\bigg[2\sum_{n=1}^4\cos(nx)+1\bigg]dx\\
&=&\frac{2}{\pi}\cdot2\pi\\
&=&4.
\end{eqnarray}
Here $$ \int_{-\pi}^{\pi}\cos(nx)dx=0$$
is used.</p>
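<p>The closed form above can be cross-checked numerically (an added sanity check, not part of the original derivation). The midpoint rule is used so the removable singularity of the integrand at $x=0$ is never sampled:</p>

```python
import math

# Numeric check that (2/pi) * Integral_{-pi}^{pi} sin(9x/2)/sin(x/2) dx = 4.
def integrand(x):
    return math.sin(9 * x / 2) / math.sin(x / 2)

def midpoint_integral(f, a, b, n=200_000):
    # Composite midpoint rule; midpoints never land on x = 0.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

value = (2 / math.pi) * midpoint_integral(integrand, -math.pi, math.pi)
print(value)  # approximately 4
```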
|
14,712 | <p>I have matrix <code>in</code> as shown, consisting of real numbers and 0. How can I sort it to become <code>out</code> as shown?</p>
<pre><code>in ={
{0, 0, 3.411, 0, 1.343},
{0, 0, 4.655, 2.555, 3.676},
{0, 3.888, 0, 3.867, 1.666}
};
out ={
{1.343, 3.411, 0, 0, 0},
{2.555, 3.676, 4.655, 0, 0},
{1.666, 3.867, 3.888, 0, 0}
};
</code></pre>
<p>This is related to a <a href="https://mathematica.stackexchange.com/questions/14663/">question I asked</a>. It is much easier to add the columns by sorting it this way than in previous question, and easier to visualize than trying to take the first non-zero value in a row.</p>
| Andy Ross | 43 | <p>You can map <code>Sort</code> over the rows using a custom ordering function which treats 0 as infinity.</p>
<pre><code>data = RandomChoice[{0, 1}, {5, 5}]*RandomReal[{1, 10}, {5, 5}];
f[0|0.]= \[Infinity];
f[x_] := x
Sort[#, f[#1] <= f[#2] &] & /@ data
(*{{6.07883, 7.33113, 0., 0., 0.}, {2.74761, 0., 0., 0., 0.},
{6.09223, 8.11442, 0., 0., 0.}, {3.16126, 4.72089, 7.72369, 0.,0.},
{9.25964, 0., 0., 0., 0.}}*)
</code></pre>
|
14,712 | <p>I have matrix <code>in</code> as shown, consisting of real numbers and 0. How can I sort it to become <code>out</code> as shown?</p>
<pre><code>in ={
{0, 0, 3.411, 0, 1.343},
{0, 0, 4.655, 2.555, 3.676},
{0, 3.888, 0, 3.867, 1.666}
};
out ={
{1.343, 3.411, 0, 0, 0},
{2.555, 3.676, 4.655, 0, 0},
{1.666, 3.867, 3.888, 0, 0}
};
</code></pre>
<p>This is related to a <a href="https://mathematica.stackexchange.com/questions/14663/">question I asked</a>. It is much easier to add the columns by sorting it this way than in previous question, and easier to visualize than trying to take the first non-zero value in a row.</p>
| ssch | 1,517 | <p>Since what you are doing is basically sorting each row, but 0 is treated as highest value. One way is to replace all zeros with <code>Infinity</code> before sorting and changing back after </p>
<pre><code>r = RandomChoice[{0, Random[]}, {3, 5}];
r // MatrixForm
(Sort[#] & /@ (r /. {(0 | 0.) -> Infinity})) /. {Infinity -> 0} // MatrixForm
</code></pre>
<p><img src="https://i.stack.imgur.com/HTWbm.png" alt="Output"></p>
<p><strong>Edit</strong> I like Andy Ross's solution better</p>
|
14,712 | <p>I have matrix <code>in</code> as shown, consisting of real numbers and 0. How can I sort it to become <code>out</code> as shown?</p>
<pre><code>in ={
{0, 0, 3.411, 0, 1.343},
{0, 0, 4.655, 2.555, 3.676},
{0, 3.888, 0, 3.867, 1.666}
};
out ={
{1.343, 3.411, 0, 0, 0},
{2.555, 3.676, 4.655, 0, 0},
{1.666, 3.867, 3.888, 0, 0}
};
</code></pre>
<p>This is related to a <a href="https://mathematica.stackexchange.com/questions/14663/">question I asked</a>. It is much easier to add the columns by sorting it this way than in previous question, and easier to visualize than trying to take the first non-zero value in a row.</p>
| user1066 | 106 | <pre><code>PadRight[#, Length@in[[1]], 0] & /@ Sort /@ DeleteCases[in, 0 | 0., 2]
</code></pre>
<p>=></p>
<blockquote>
<p>{{1.343, 3.411, 0, 0, 0}, {2.555, 3.676, 4.655, 0, 0}, {1.666, 3.867,
3.888, 0, 0}}</p>
</blockquote>
|
2,936,028 | <p>The question is:</p>
<p>Prove that If the sum of the elements of each row of a square matrix is k, then the sum of the elements in each row of the inverse matrix is 1/k ?</p>
<p>In the text book the answer is:</p>
<p>Let A be <span class="math-container">${m\times m}$</span>, non-singular, with the stated property. Let B be its inverse. Then for <span class="math-container">$n\leqslant m$</span>,
<span class="math-container">$$
1 = \sum\limits_{r=1}^m \sigma_{nr} = \sum\limits_{r=1}^m\sum \limits_{s=1}^mb_{ns}a_{sr} = \sum\limits_{s=1}^m\sum \limits_{r=1}^mb_{ns}a_{sr}
= k\sum\limits_{s=1}^m b_{ns}$$</span></p>
<p>(A is singular if K = 0).</p>
<p>I have trouble to understand this proof. Is there another way to prove it?</p>
| peterwhy | 89,922 | <p>Let <span class="math-container">$A$</span> be the invertible square matrix.</p>
<p>The product <span class="math-container">$A \pmatrix{1\\1\\\vdots\\1} $</span> gives a column matrix, with elements equal to sum of elements in a row of <span class="math-container">$A$</span>.</p>
<p><span class="math-container">$$\begin{align*}
A \pmatrix{1\\1\\\vdots\\1} &= \pmatrix{k\\k\\\vdots\\k} \\
A^{-1}A\pmatrix{1\\1\\\vdots\\1} &= A^{-1} \pmatrix{k\\k\\\vdots\\k}\\
\pmatrix{1\\1\\\vdots\\1} &= A^{-1} \pmatrix{k\\k\\\vdots\\k}\\
\pmatrix{1/k\\1/k\\\vdots\\1/k} &= A^{-1} \pmatrix{1\\1\\\vdots\\1}\\
\end{align*}$$</span></p>
<p>i.e. each row of <span class="math-container">$A^{-1}$</span> sums to <span class="math-container">$\frac1k$</span>, if <span class="math-container">$k\ne 0$</span>.</p>
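<p>A concrete numerical illustration of this row-sum fact (added for illustration; the matrix is an arbitrary example), using an exact $2\times 2$ inverse:</p>

```python
from fractions import Fraction

# Rows of A sum to k = 3; the rows of A^{-1} should then sum to 1/3.
A = [[Fraction(1), Fraction(2)],
     [Fraction(4), Fraction(-1)]]

a, b = A[0]
c, d = A[1]
det = a * d - b * c
# Standard 2x2 inverse formula.
A_inv = [[ d / det, -b / det],
         [-c / det,  a / det]]

assert all(sum(row) == 3 for row in A)
print([sum(row) for row in A_inv])  # each row sums to 1/3
```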
|
2,905,022 | <p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for x. My problem with this equation though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p>
<p><strong>Note</strong> that I only want hints.</p>
<p>Thanks for the help!</p>
| Batominovski | 72,152 | <p>Let $A:=\sqrt{3x+1}$ and $B:=\sqrt{3x-3}$. Then, $A+\sqrt{3}B=2$ and $A^2-B^2=4$. That is,
$$(A+\sqrt{3}B)^2=A^2-B^2\,.$$</p>
<blockquote class="spoiler">
<p> Suppose that we are solving over the reals. Thus, $\left(2\sqrt{3}A+4B\right)\,B=0$. Since $A$ and $B$ are nonnegative and cannot simultaneously be $0$, we conclude that $B=0$. </p>
</blockquote>
<hr>
<p>Alternatively, we note that $x\geq 1$ so that $\sqrt{x-1}$ is a real number. Thus, $$\sqrt{3x+1}\geq \sqrt{3\cdot 1+1}=2\,.$$ For the required equality to hold, we must then have $\sqrt{3x+1}=2$. Therefore, ....</p>
|
2,905,022 | <p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for x. My problem with this equation though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p>
<p><strong>Note</strong> that I only want hints.</p>
<p>Thanks for the help!</p>
| user247327 | 247,327 | <p>Similar to what others have said but I think a little simpler: write $3\sqrt{x- 1}+ \sqrt{3x+ 1}= 2$ as $3\sqrt{x- 1}= 2- \sqrt{3x+ 1}$ and square both sides: $9(x- 1)= 4- 4\sqrt{3x+ 1}+ 3x+ 1$. Now write that as $6x- 14= -4\sqrt{3x+ 1}$ and square again: $36x^2- 168x+ 196= 16(3x+ 1)= 48x+ 16$.</p>
<p>$36x^2- 216x+ 180= 0$.</p>
<p>$x^2- 6x+ 5= 0$.</p>
<p>Of course, squaring twice might have introduced "spurious" solutions so solve that quadratic equation and check the solutions in the original equation. </p>
|
206,305 | <p>Prove: $s_n \to s \implies \sqrt{s_n} \to \sqrt{s}$ by the definition of the limit. $s \geq 0$ and $s_n$ is a sequence of non-negative real numbers.</p>
<p>This is my preliminary computation:</p>
<p>$|\sqrt{s_n} - \sqrt{s}| < \epsilon$</p>
<p>multiply by the conjugate:</p>
<p>$|\dfrac{s_n - s}{\sqrt{s_n}+\sqrt{s}}| < \epsilon$</p>
<p>Thus we can use the fact that $|\sqrt{s_n} - \sqrt{s}| <
\dfrac{|s_n - s|}{\sqrt{s}} < \epsilon$</p>
<p>After this I am lost...</p>
| Mathematics | 22,687 | <p>well, you can actually take $\epsilon={\sqrt{s}}\epsilon'$ for any $\epsilon'>0$ given that for all $n>N, \epsilon'>|s_n-s|$</p>
|
206,305 | <p>Prove: $s_n \to s \implies \sqrt{s_n} \to \sqrt{s}$ by the definition of the limit. $s \geq 0$ and $s_n$ is a sequence of non-negative real numbers.</p>
<p>This is my preliminary computation:</p>
<p>$|\sqrt{s_n} - \sqrt{s}| < \epsilon$</p>
<p>multiply by the conjugate:</p>
<p>$|\dfrac{s_n - s}{\sqrt{s_n}+\sqrt{s}}| < \epsilon$</p>
<p>Thus we can use the fact that $|\sqrt{s_n} - \sqrt{s}| <
\dfrac{|s_n - s|}{\sqrt{s}} < \epsilon$</p>
<p>After this I am lost...</p>
| Pedro | 23,350 | <p><strong>ADD</strong> You got to</p>
<p>$$\left| {\frac{{{s_n} - s}}{{\sqrt {{s_n}} + \sqrt s }}} \right| < \frac{{\left| {{s_n} - s} \right|}}{{\sqrt s }}$$</p>
<p>Since $s_n\to s$, for every $\epsilon >0$ there is an $n_0$ for which $$\left| {{s_n} - s} \right| < \varepsilon \sqrt s $$
whenever $n\geq n_0$ (i.e. $\varepsilon \sqrt s$ is <em>also</em> an $\epsilon'>0$). Then, for this $n_0$,
$$\left| {\frac{{{s_n} - s}}{{\sqrt {{s_n}} + \sqrt s }}} \right| < \frac{{\left| {{s_n} - s} \right|}}{{\sqrt s }} < \frac{{\varepsilon \sqrt s }}{{\sqrt s }} = \varepsilon $$</p>
<p>which means $\sqrt {s_n}\to\sqrt s$.</p>
<hr>
<p>You're almost done. You arrived at</p>
<p>$$\left|\dfrac{s_n - s}{\sqrt{s_n}+\sqrt{s}}\right| $$</p>
<p>You know that $s_n\to s$, so you can make $|s_n-s|$ as small as you wish. Now, we need to know how to handle $\sqrt{s_n}+\sqrt{s}$. Since $s_n\to s$, there is an $n_0$ for which</p>
<p>$$|s-s_n|<3s/4$$</p>
<p>Since $s_n>0$,$s\geq 0$, then </p>
<p>$$s-s_n\leq|s-s_n|<3s/4$$ </p>
<p>This means that</p>
<p>$$s_n>s/4$$</p>
<p>then</p>
<p>$$2\sqrt {{s_n}} > \sqrt s $$</p>
<p>or, since $\sqrt s>0$</p>
<p>$$\eqalign{
& 2\sqrt {{s_n}} + 2\sqrt s > 3\sqrt s \cr
& \sqrt {{s_n}} + \sqrt s > \frac{{3\sqrt s }}{2} \cr
& \frac{1}{{\sqrt {{s_n}} + \sqrt s }} < \frac{2}{{3\sqrt s }} \cr} $$</p>
<p>Again, since $s_n\to s$, there is an $n_1$ for which</p>
<p>$$|s-s_n|<\epsilon \frac{3\sqrt s}{{2 }}$$</p>
<p>Then, taking $n\geq \max\{n_0,n_1\}$ we have</p>
<p>$$\left| {\frac{{{s_n} - s}}{{\sqrt {{s_n}} + \sqrt s }}} \right| < \frac{2}{{3\sqrt s }}\left| {{s_n} - s} \right| < \frac{2}{{3\sqrt s }}\frac{{3\sqrt s }}{2}\varepsilon = \varepsilon $$</p>
|
99,506 | <p>I am trying to show that the binary expansion of a given positive integer is unique.</p>
<p>According to this link, <a href="http://www.math.fsu.edu/~pkirby/mad2104/SlideShow/s5_3.pdf" rel="nofollow">http://www.math.fsu.edu/~pkirby/mad2104/SlideShow/s5_3.pdf</a>, All I see is that I can recopy theorem 3-1's proof?</p>
<p>Is this a polished enough argument? Thanks</p>
| Amit Kumar Gupta | 8,953 | <p>Assume for contradiction that $n$ is the smallest positive integer with two different binary expansions.</p>
<p>Then $n=a_m\dots a_0=b_m\dots b_0$, allowing leading zeroes in at most one of the expansions.</p>
<p>Let $l$ be the smallest index so that $a_l\neq b_l$. It follows that $a_m\dots a_l = b_m\dots b_l$ are distinct representations of the same number. $l$ must be $0$ due to minimality of $n$, but then our original number would be both odd and even, contradiction.</p>
|
1,852,889 | <blockquote>
<p>A fair coin is tossed independently four times . The probability of event "the no. Of times head show up is more than the no. Of times tails shows up" is</p>
</blockquote>
<p>The answer is $5/16$.</p>
<p>I did</p>
<p>$${_4\mathsf C}_4 (1/2)^4 (1/2)^0 + {_4\mathsf C}_3(1/2)^3 (1/2)$$</p>
<p>Is it correct ?</p>
| samerivertwice | 334,732 | <p>There are two cases in which heads shows up more often than tails: tails comes up once or not at all.</p>
<p>All heads: this can happen in only 1 way.</p>
<p>One tail and three heads: the tail can come 1st, 2nd, 3rd, or 4th, so there are 4 successful ways.</p>
<p>There are $2^4=16$ possible outcomes in total as 4 tosses each have 2 outcomes.</p>
<p>That makes a total of 5 successful ways out of 16 possible outcomes in total:</p>
<p>$$\frac{1+4}{2^4}=\frac{5}{16}$$</p>
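<p>The count can be confirmed by brute force (an added illustration): enumerate all $2^4$ outcomes of four fair tosses and keep those with more heads than tails.</p>

```python
from itertools import product

# All 2^4 = 16 equally likely outcomes of four fair coin tosses.
outcomes = list(product("HT", repeat=4))
favorable = [o for o in outcomes if o.count("H") > o.count("T")]
print(len(favorable), len(outcomes))  # 5 16, giving probability 5/16
```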
|
1,852,889 | <blockquote>
<p>A fair coin is tossed independently four times . The probability of event "the no. Of times head show up is more than the no. Of times tails shows up" is</p>
</blockquote>
<p>The answer is $5/16$.</p>
<p>I did</p>
<p>$${_4\mathsf C}_4 (1/2)^4 (1/2)^0 + {_4\mathsf C}_3(1/2)^3 (1/2)$$</p>
<p>Is it correct ?</p>
| Em. | 290,196 | <p>You're correct. You claimed that you calculated it, but it simply looks like you meant ${_4 \mathsf C}_4$ <strong>not</strong> ${_5\mathsf C }_4$. If you fix that, then it is correct. In other words, the number of heads is 3 or 4 are the only two cases where the number of heads is greater than the number of tails, this will result in the correct answer. Because the events are disjoint, we can add up the probabilities and so
$${_4\mathsf C }_3 \left(\frac{1}{2}\right)^3\left(\frac{1}{2}\right)^1+{_4\mathsf C }_4 \left(\frac{1}{2}\right)^4\left(\frac{1}{2}\right)^0 = \frac{5}{16} $$</p>
|
633,858 | <p>If $G$ is a cyclic group of order 24, then how many elements of order 4 does $G$ have?
I can't understand how to find it; please show me step by step. </p>
| user44441 | 44,441 | <p>Any element of order 4 will generate a cyclic group of order 4. Inside any finite cyclic group, there is a unique cyclic subgroup of any order dividing the order of the group. Therefore, there is a unique group of order 4 inside this group of order 24 and every element of order 4 is inside this subgroup. In a group of order 4, the number of elements of order 4 is $\varphi(4) = 2$. So there are precisely two elements of order 4 in a cyclic group of order 24. In the usual representation of this as $\mathbf{Z}/24\mathbf{Z}$, these would be the elements $18$ and $6$.</p>
|
3,298,445 | <p>A random variable is defined by its distribution function. The density function is the derivative of the distribution function, so the density function exists iff the distribution function is absolutely continuous. However, can we construct a distribution function without a density function, other than for a finite discrete random variable (i.e., where the distribution function is a step function)?</p>
<p>I think the difficulty is that the distribution function is non-decreasing and upper-bounded by 1.</p>
| forgottenarrow | 531,585 | <p>Adding to Masacroso's answer, there are examples of continuous random variables that do not have densities. The standard construction involves the <em>Cantor function</em>.</p>
<p>If you're not familiar with the Cantor set, this set is constructed via the following iteration:</p>
<ul>
<li>Start with <span class="math-container">$C_0 = [0,1]$</span>.</li>
<li>To get <span class="math-container">$C_{k+1}$</span> take every interval in <span class="math-container">$C_k$</span> and remove an open interval around the middle third. Thus, <span class="math-container">$C_1 = [0,1/3]\cup[2/3,1]$</span>, <span class="math-container">$C_2 = [0,1/9]\cup[2/9,1/3]\cup[2/3,7/9]\cup[8/9,1]$</span> etc.</li>
<li><span class="math-container">$C = \bigcap_{k=0}^\infty C_k$</span>.</li>
</ul>
<p>It turns out that <span class="math-container">$C$</span> is an uncountably infinite set, but it has Lebesgue measure 0. We can characterize <span class="math-container">$C$</span> as the set of all real numbers in <span class="math-container">$[0,1]$</span> whose decimal expansion in base 3 does not have any 1's. Thus, <span class="math-container">$1/3 = 0.1_3 = 0.0\overline{222}_3$</span> and <span class="math-container">$8/27 = 0.022_3$</span> are in the Cantor set, but <span class="math-container">$5/9 = 0.12_3$</span> is not.</p>
<p>So now we can define the <em>Cantor function</em>. First, consider the function <span class="math-container">$g:C \rightarrow [0,1]$</span> defined in the following manner. To compute <span class="math-container">$g(x)$</span>,</p>
<ul>
<li>Write out <span class="math-container">$x$</span> in base 3 decimal form.</li>
<li>Replace all 2s with 1s.</li>
<li>Switch to binary.</li>
</ul>
<p>For example, <span class="math-container">$g(1/3) = g(0.0\overline{222}_3) = 0.0\overline{111}_2 = 0.1_2 = 1/2$</span>. Or <span class="math-container">$g(8/27) = g(0.022_3) = 0.011_2 = 3/8$</span>. <span class="math-container">$g$</span> is a strictly increasing, surjective function.</p>
<p>Define <span class="math-container">$f:[0,1] \rightarrow [0,1]$</span> in the following way. If <span class="math-container">$x \in C$</span>, then <span class="math-container">$f(x) = g(x)$</span>. Otherwise, <span class="math-container">$f(x) = \max_{y< x, y \in C} g(y)$</span>.</p>
<p><span class="math-container">$f$</span> is a continuous, increasing, surjective function from <span class="math-container">$[0,1]$</span> to <span class="math-container">$[0,1]$</span>. Furthermore, since <span class="math-container">$f$</span> is constant outside the Cantor set, the derivative of <span class="math-container">$f$</span> is <span class="math-container">$0$</span> almost everywhere (recall that the Cantor set has Lebesgue measure <span class="math-container">$0$</span>). But, <span class="math-container">$f(0) = 0$</span> and <span class="math-container">$f(1) = 1$</span>. Therefore, the random variable with cumulative distribution function <span class="math-container">$f$</span> is a continuous random variable without a density.</p>
|
2,706,141 | <p>I've been working on a math problem recently, and this is a small subpart of it. I don't want to post the whole problem and be spoon-fed the answer, but I've been struggling with this part, and since my math skills are still basic the solution may require maths I have yet to learn.</p>
<p>Can the product $\mathtt(LR)$ where L is the hypotenuse of a right angled triangle and R is its base be expressed using trigonometric relations of <strong>only</strong> $\theta$? Where $\theta$ is the angle between the hypotenuse and <strong>height H</strong> of the right angled triangle?</p>
<p>If yes derive the expression? Otherwise prove it not possible.</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> Just express $\dfrac1{3x^2-7}$ as$$\frac13\left(\frac a{x-\sqrt{\frac73}}+\frac b{x+\sqrt{\frac73}}\right).$$</p>
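<p>The decomposition in the hint can be checked numerically (added illustration; the coefficients $a=1/(2r)$, $b=-1/(2r)$ with $r=\sqrt{7/3}$ are worked out here, not given in the hint):</p>

```python
import math

# Verify 1/(3x^2 - 7) = (1/3) * (a/(x - r) + b/(x + r)) with r = sqrt(7/3).
r = math.sqrt(7 / 3)
a = 1 / (2 * r)
b = -1 / (2 * r)

def direct(x):
    return 1 / (3 * x * x - 7)

def decomposed(x):
    return (a / (x - r) + b / (x + r)) / 3

for x in (2.0, 5.0, -3.0):
    assert math.isclose(direct(x), decomposed(x))
print("decomposition agrees")
```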
|
2,706,141 | <p>I've been working on a math problem recently, and this is a small subpart of it. I don't want to post the whole problem and be spoon-fed the answer, but I've been struggling with this part, and since my math skills are still basic the solution may require maths I have yet to learn.</p>
<p>Can the product $\mathtt(LR)$ where L is the hypotenuse of a right angled triangle and R is its base be expressed using trigonometric relations of <strong>only</strong> $\theta$? Where $\theta$ is the angle between the hypotenuse and <strong>height H</strong> of the right angled triangle?</p>
<p>If yes derive the expression? Otherwise prove it not possible.</p>
| shere | 524,467 | <p>you can use this substitution: $x=\dfrac{\sqrt{7}}{\sqrt{3}}\sin u$ so $\mathrm dx= \dfrac{\sqrt{7}}{\sqrt{3}}\cos u\,\mathrm du$ and we have:</p>
<p>$$\int \dfrac{\frac{\sqrt{7}}{\sqrt{3}}\cos u}{7\cos^{2}u}\,\mathrm du=\frac{\sqrt{7}}{7\sqrt{3}}\int \sec u\,\mathrm du$$</p>
<p>which is easy to find.</p>
|
380,177 | <p>In mathematics, I want to know: what exactly is the difference between a <strong>ring</strong> and an <strong>algebra</strong>?</p>
| Stephen | 146,439 | <p>For a commutative ring $k$, a $k$-<em>algebra</em> is a ring $A$ together with an extra datum: a homomorphism from $k$ into the center of $A$.</p>
<p>The definition allows non-associative algebras if you allow non-associative rings. The most well-understood case occurs when $k$ is a field, and the right way to think about the general case is as an algebraic family of algebras over fields---one for each prime ideal of $k$.</p>
|
1,212,425 | <p>This is a homework problem that I cannot figure out. I have figured out that if $n^2 + 1$ is a perfect square it can be written as such:</p>
<p>$n^2 + 1 = k^2$.</p>
<p>and if $n$ is even it can be written as such:</p>
<p>$n = 2m$</p>
<p>I believe I'm supposed to use the fact that if $n \pmod{4} \equiv 0$ or $1$ then it's a perfect square (maybe that's wrong).</p>
<p>I cannot figure this out.</p>
| Dan Brumleve | 1,284 | <p>If $n^2+1$ is a perfect square then $n=0$, and $0$ is even. You can prove this by factoring the equation as $(k-n) \cdot (k+n) = 1$.</p>
|
1,212,425 | <p>This is a homework problem that I cannot figure out. I have figured out that if $n^2 + 1$ is a perfect square it can be written as such:</p>
<p>$n^2 + 1 = k^2$.</p>
<p>and if $n$ is even it can be written as such:</p>
<p>$n = 2m$</p>
<p>I believe I'm supposed to use the fact that if $n \pmod{4} \equiv 0$ or $1$ then it's a perfect square (maybe that's wrong).</p>
<p>I cannot figure this out.</p>
| Divide1918 | 706,588 | <p>Suppose n is odd, write <span class="math-container">$n=2m+1$</span>. Then <span class="math-container">$n^2+1=(2m+1)^2+1=4m^2+4m+2$</span>, which is even. Therefore if this is a perfect square, it must be divisible by <span class="math-container">$4$</span>. But clearly <span class="math-container">$n^2+1\equiv 2\pmod 4$</span>, a contradiction.</p>
|
576,519 | <p>Assume that $x+\frac{1}{x} \in \mathbb{N}$. Prove by induction that $$x^2+\frac1{x^2}, x^3+\frac1{x^3}, \dots , x^n+\frac1{x^n}$$ are also members of $\mathbb{N}$.</p>
<p>I have my <em>base</em>: it is indeed true for $n=1$.</p>
<p>I can assume it is true for $x^k+x^{-k}$ and then prove it is true for $x^{k+1}+x^{-(k+1)}$, but I'm stuck there.</p>
| Community | -1 | <p><strong>Hint</strong></p>
<p>By induction using</p>
<p>$$(x^n+x^{-n})(x+x^{-1})=x^{n+1}+x^{-(n+1)}+x^{n-1}+x^{1-n}\in\mathbb N$$</p>
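<p>A numeric illustration of the hint (added; the value $x+1/x=3$ is an arbitrary example): the identity gives the recurrence $a_{n+1}=a_n a_1 - a_{n-1}$ with $a_n = x^n + x^{-n}$ and $a_0 = 2$, so every $a_n$ is an integer.</p>

```python
import math

# With x + 1/x = 3, i.e. x = (3 + sqrt(5))/2, the sequence
# a_n = x^n + x^(-n) runs 3, 7, 18, 47, ... (all integers).
x = (3 + math.sqrt(5)) / 2

def a(n):
    return x ** n + x ** (-n)

vals = [round(a(n)) for n in range(1, 5)]
print(vals)  # [3, 7, 18, 47]

# Check the recurrence a_{n+1} = a_n * a_1 - a_{n-1}, with a_0 = 2.
assert all(vals[i + 1] == vals[i] * vals[0] - (vals[i - 1] if i > 0 else 2)
           for i in range(len(vals) - 1))
```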
|
884,362 | <blockquote>
<p>Compute the integral
$$\int_{0}^{2\pi}\frac{x\cos(x)}{5+2\cos^2(x)}dx$$</p>
</blockquote>
<p>My Try: I substitute $$\cos(x)=u$$</p>
<p>but it did not help. Please help me to solve this. Thanks.</p>
| Claude Leibovici | 82,404 | <p><strong>This is not an answer to the post but a reply to David's comment</strong></p>
<p>The antiderivative does not express in terms of elementary functions. For your curiosity, I write it down, but, as said, it looks like a nightmare.</p>
<p>$$4 \sqrt{14}\int\frac{x\cos(x)}{5+2\cos^2(x)}dx=-2 i \text{Li}_2\left(-\frac{i \left(-7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)+2 i \text{Li}_2\left(\frac{i \left(-7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)+2 i \text{Li}_2\left(-\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)-2 i \text{Li}_2\left(\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)+2 x \log \left(1-\frac{i \left(\sqrt{35}-7\right) e^{-i
x}}{\sqrt{14}}\right)-\pi \log \left(1-\frac{i \left(\sqrt{35}-7\right) e^{-i
x}}{\sqrt{14}}\right)-2 x \log \left(1+\frac{i \left(\sqrt{35}-7\right) e^{-i
x}}{\sqrt{14}}\right)+\pi \log \left(1+\frac{i \left(\sqrt{35}-7\right) e^{-i
x}}{\sqrt{14}}\right)-2 x \log \left(1-\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)+\pi \log \left(1-\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)+2 x \log \left(1+\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)-\pi \log \left(1+\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)-4 \sin ^{-1}\left(\frac{\sqrt{7+\sqrt{14}}}{2^{3/4}
\sqrt[4]{7}}\right) \log \left(1-\frac{i \left(\sqrt{35}-7\right) e^{-i
x}}{\sqrt{14}}\right)+4 \sin ^{-1}\left(\frac{\sqrt{7+\sqrt{14}}}{2^{3/4}
\sqrt[4]{7}}\right) \log \left(1+\frac{i \left(7+\sqrt{35}\right) e^{-i
x}}{\sqrt{14}}\right)-\pi \log \left(\sqrt{14} \sin (x)-7\right)+\pi \log
\left(\sqrt{14} \sin (x)+7\right)-4 i \sinh
^{-1}\left(\frac{\sqrt{7-\sqrt{14}}}{2^{3/4} \sqrt[4]{7}}\right) \log
\left(1+\frac{i \left(\sqrt{35}-7\right) e^{-i x}}{\sqrt{14}}\right)+4 i \sinh
^{-1}\left(\frac{\sqrt{7-\sqrt{14}}}{2^{3/4} \sqrt[4]{7}}\right) \log
\left(1-\frac{i \left(7+\sqrt{35}\right) e^{-i x}}{\sqrt{14}}\right)+8 i \sin
^{-1}\left(\frac{\sqrt{7+\sqrt{14}}}{2^{3/4} \sqrt[4]{7}}\right) \tan
^{-1}\left(\frac{\left(\sqrt{14}-7\right) \cot \left(\frac{1}{4} (2 x+\pi
)\right)}{\sqrt{35}}\right)+8 \sinh ^{-1}\left(\frac{\sqrt{7-\sqrt{14}}}{2^{3/4}
\sqrt[4]{7}}\right) \tan ^{-1}\left(\frac{\left(7+\sqrt{14}\right) \cot
\left(\frac{1}{4} (2 x+\pi )\right)}{\sqrt{35}}\right)$$</p>
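<p>A side remark (added, not from the original reply): despite the unwieldy antiderivative, the definite integral over $[0,2\pi]$ is $0$ — the substitution $x\to 2\pi-x$ shows it equals $\pi\int_0^{2\pi}\frac{\cos x}{5+2\cos^2 x}\,dx$, which vanishes by symmetry. This is easy to confirm numerically:</p>

```python
import math

# Numeric check that Integral_0^{2 pi} x cos(x)/(5 + 2 cos^2(x)) dx = 0.
def integrand(x):
    c = math.cos(x)
    return x * c / (5 + 2 * c * c)

def midpoint_integral(f, a, b, n=200_000):
    # Composite midpoint rule.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

value = midpoint_integral(integrand, 0, 2 * math.pi)
print(value)  # approximately 0
```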
|
2,136,937 | <p>Find $$\lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16}$$</p>
<p>Note that
$$z=\exp(\pi i/3)=\cos(\pi/3)+i\sin(\pi/3)=\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$
$$z^2=\exp(2\pi i/3)=\cos(2\pi/3)+i\sin(2\pi/3)=-\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$
$$z^3=\exp(3\pi i/3)=\cos(\pi)+i\sin(\pi)=1$$
$$z^4=\exp(4\pi i/3)=\cos(4\pi/3)+i\sin(4\pi/3)=-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}$$</p>
<p>So,
\begin{equation*}
\begin{aligned}
\lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16} & = \dfrac{1+8}{-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}+4\left(-\dfrac{1}{2}+\dfrac{\sqrt{3}}{2}\right)+16} \\
& = \dfrac{9}{\dfrac{27}{2}+i\frac{3\sqrt{3}}{2}} \\
& = \dfrac{6}{9+i\sqrt{3}} \\
& = \dfrac{9}{14}-i\dfrac{\sqrt{3}}{2} \\
\end{aligned}
\end{equation*}</p>
<p>But, when I check my answer on wolframalpha, their answer is $$\dfrac{245}{626}-i\dfrac{21\sqrt{3}}{626}.$$</p>
<p>Can someone tell me what I am doing wrong?</p>
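(A numerical evaluation, added here for illustration, confirms WolframAlpha's value; note in particular that $\cos(\pi)=-1$, so $z^3=-1$ rather than $1$.)

```python
import cmath

z = cmath.exp(1j * cmath.pi / 3)           # z = 1/2 + i*sqrt(3)/2
# The function is continuous at z (nonzero denominator), so the limit
# is just the value at z.  Note z**3 = e^{i*pi} = -1, so the numerator is 7.
value = (z**3 + 8) / (z**4 + 4*z + 16)

expected = 245/626 - 1j * 21 * 3**0.5 / 626   # WolframAlpha's answer
print(abs(value - expected))                  # tiny (rounding error only)
```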
| Dan Velleman | 414,884 | <p>If you like "How To Prove It," you could try:</p>
<p>Velleman, Calculus: A Rigorous First Course, Dover Publications, 2016.</p>
|
4,264,558 | <p>I calculated the homogeneous solution already; I'm just struggling a bit with the right side. Would <span class="math-container">$y_p$</span> be <span class="math-container">$= ++e^x$</span> or <span class="math-container">$= ++e^{2x}$</span>?</p>
<p>Would the power in front of the root be the roots found from the homogeneous part?</p>
<p>Sorry, it's been a while since I did ODE's. All help is appreciated.</p>
| Paradox | 969,042 | <p>In general, if you are to calculate
<span class="math-container">$$
\lim_{x \to a} [f(x) + g(x)]
$$</span>
where <span class="math-container">$a$</span> can be a real number or <span class="math-container">$\pm \infty$</span>, you cannot blindly move the limit inside the parentheses like this:
<span class="math-container">$$
\Big[\lim_{x\to a} f(x) + \lim_{x \to a} g(x) \Big]
$$</span>
The reason you cannot do that is that, when proving this limit rule, it is assumed that both
<span class="math-container">$$\lim_{x\to a} f(x)$$</span>
and
<span class="math-container">$$
\lim_{x \to a} g(x)
$$</span></p>
<p>are defined. To be defined in this context means that they actually are equal to a real number (they converge). If any one of them, say <span class="math-container">$\lim_{x \to \infty} f(x)$</span>, does not converge to any real number, it could be because for example:</p>
<ol>
<li><p><span class="math-container">$\lim_{x \to \infty} f(x)$</span> oscillates between several different values as <span class="math-container">$x$</span> goes to <span class="math-container">$\infty$</span>, and thus does not settle for a limit value. An example of this is the function <span class="math-container">$\cos(x)$</span>.</p>
</li>
<li><p><span class="math-container">$\lim_{x \to \infty} f(x) $</span> does not have an upper bound. That is, no matter what number <span class="math-container">$M$</span> you come up with, there is a point <span class="math-container">$x_0$</span> on the real line such that <span class="math-container">$f(x) > M$</span> whenever <span class="math-container">$x > x_0$</span>.</p>
</li>
</ol>
<p>In this latter case we might informally write this as <span class="math-container">$\lim_{x \to \infty} f(x) = \infty$</span>, but note that this is just notation. <span class="math-container">$\infty$</span> is not a number that one can do arithmetic with as with real numbers. What the notation really means is once again that <span class="math-container">$f(x)$</span> <em>grows without limit</em> if we only increase <span class="math-container">$x$</span> enough. Therefore, an expression such as <span class="math-container">$\infty - \infty$</span> does not have any mathematical meaning, because <span class="math-container">$\infty$</span> is just a symbol and not a number.</p>
<p>The same caution applies to many other limit rules such as <span class="math-container">$\lim_{x \to a}(f(x)g(x)) = \lim_{x \to a}f(x) \lim_{x \to a}g(x)$</span>, because you encounter the same problems of doing arithmetic with the symbol <span class="math-container">$\infty$</span> otherwise.</p>
<hr />
<p>One can sometimes convert a limit where the variable goes to <span class="math-container">$\infty$</span> to a limit where the variable goes to <span class="math-container">$0$</span> by doing a change of variables. In this case, we can make the change of variables <span class="math-container">$u = \dfrac{1}{2x}$</span>. It is then true that <span class="math-container">$u \to 0$</span> when <span class="math-container">$x \to \infty$</span>. The limit calculation becomes<br />
<span class="math-container">\begin{align}
&\lim_{x\to \infty} \Big[ x^2 - x^2 \cos(\frac{1}{x}) \Big] =\\
&\lim_{u \to 0} \Big[\dfrac{1}{4u^2} - \dfrac{1}{4u^2}\cos(u + u) \Big] = \\
&\lim_{u \to 0} \Big[\dfrac{1}{4u^2} (1 - \cos(u)^2 + \sin(u)^2) \Big] = \\
&\lim_{u \to 0} \dfrac{2 \sin(u)^2}{4u^2} = \\
&\lim_{u \to 0}\dfrac{1}{2} \lim_{u \to 0}\dfrac{\sin(u)}{u} \lim_{u \to 0}\dfrac{\sin(u)}{u} = \\
&\dfrac{1}{2} \cdot 1 \cdot 1 = \dfrac{1}{2} \\
\end{align}</span></p>
<p>Where we have used the facts that</p>
<ul>
<li><span class="math-container">$\lim_{u \to 0}\dfrac{1}{2}$</span> and the standard limit <span class="math-container">$\lim_{u \to 0}\dfrac{\sin(u)}{u}$</span> are real numbers.</li>
<li><span class="math-container">$\cos(u + u) = \cos(u)^2 - \sin(u)^2$</span></li>
<li><span class="math-container">$1 = \cos(u)^2 + \sin(u)^2$</span></li>
</ul>
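For what it's worth, the convergence to $\tfrac12$ can be checked numerically (a small illustration, not part of the original answer):

```python
import math

def f(x):
    return x**2 * (1 - math.cos(1/x))

# As x grows, f(x) approaches 1/2.  (Avoid extremely large x: the
# subtraction 1 - cos(1/x) loses precision once 1/x is near machine epsilon.)
for x in (10.0, 100.0, 1000.0):
    print(x, f(x))
```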
|
1,740,151 | <p>$$\lim_\limits {x \to \pi} \frac{(e^{\sin x} -1)}{(x-\pi)}$$</p>
<p>I found $-1$ as the answer and what I did was: </p>
<p>$\lim_\limits {x \to \pi} \frac{(e^{\sin x} -1)}{(x-\pi)}$ $\Rightarrow$ $\lim \frac{(f(x) - f(a))}{(x-a)}$ $\Rightarrow$ $f(x)=(e^{\sin x})$ </p>
<p>$f(a)=1$ </p>
<p>$x=x$ </p>
<p>and $a=\pi$ </p>
<p>So I concluded that the limit of the first function would be the same as the derivative of $f(x)$ at $a=\pi$, so I did:</p>
<p>$\frac{d}{dx} (e^{\sin x})\big|_{x=\pi} = \cos(\pi)\,e^{\sin \pi} = -1$</p>
<p>But isn't this the same as using L'Hôpital's rule? </p>
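As a numerical sanity check (an illustration added here, not part of the original post), evaluating the difference quotient near $\pi$ does approach $-1$:

```python
import math

def g(x):
    # the original quotient (e^{sin x} - 1) / (x - pi)
    return (math.exp(math.sin(x)) - 1) / (x - math.pi)

for h in (1e-2, 1e-4, 1e-6):
    print(h, g(math.pi + h))   # tends to -1 as h shrinks
```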
| Asaf Karagila | 622 | <p>This is false without the axiom of choice.</p>
<p>Mostowski constructed a model of $\sf ZFA$ (set theory with atoms), and in that model for every $n\in\Bbb N$ there is some $A$ such that: $$|A|<|A|^2<\ldots<|A|^n=|A|^{n+1}$$
So taking a large enough $n$ (e.g. $n=2$) we can take $X=A^{n-1}$ and $Y=A^n$. The Jech-Sochor theorem is enough to transfer <em>this</em> part of the model to a model without atoms.</p>
<p>So all in all, $\sf ZF$ cannot prove that if $|X|<|Y|$, then $|X|^2<|Y|^2$.</p>
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| lhf | 589 | <p>You may enjoy these books. The first one is a classic.</p>
<ul>
<li><p><a href="http://store.doverpublications.com/0486605094.html" rel="noreferrer">The History of the Calculus and Its Conceptual Development</a>,
by Carl B. Boyer</p></li>
<li><p><a href="http://bookstore.ams.org/hmath-24" rel="noreferrer">A History of Analysis</a>,
edited by Hans Niels Jahnke</p></li>
<li><p><a href="http://www.springer.com/br/book/9780387945514" rel="noreferrer">Analysis by Its History</a>, by Ernst Hairer and Gerhard Wanner</p></li>
<li><p><a href="http://www.maa.org/press/books/a-radical-approach-to-real-analysis" rel="noreferrer">A Radical Approach to Real Analysis</a>, by David Bressoud</p></li>
</ul>
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| Count Iblis | 155,436 | <p>Rigor is essential in mathematics, there is just no other way to do math than to proceed on the basis of rigorously proven theorems. This does not mean that calculus necessarily needs to be set up in the same way as it currently is. You may rail against the rigorous definition of limits, but you need to come up with an alternative if you don't like the way things are done. There are plenty of examples for imperfections in mathematics as was practiced in previous centuries, see Stella Biderman's and ABL's answers for details.</p>
<p>A more compelling objection against the way real analysis is done, is i.m.o. that we haven't gone far enough in neutralizing infinitely large or infinitely small objects. So, as is pointed out in asv's answer, the limit procedure does away with the ill-defined fluxions. To make such quantities well defined requires setting up the formalism of non-standard analysis, which is extremely complicated to do. But we still have not excised all infinite objects; take e.g. the set of real numbers, <a href="https://arxiv.org/abs/math/0411418" rel="nofollow noreferrer">as pointed out here</a> this is an extremely complicated matter that's easily glossed over.</p>
<p>Therefore, it's worthwhile to explore the opposite idea where limits are used also at a higher level to get rid all infinite objects. This has not yet been done, but there have been mathematicians who have railed against the idea of infinite sets, which has led to formalisms such as <a href="https://en.wikipedia.org/wiki/Finitism" rel="nofollow noreferrer">finitism</a> and <a href="https://en.wikipedia.org/wiki/Ultrafinitism" rel="nofollow noreferrer">ultrafinitism</a>. A proper finitist foundation can allow one to always work on a discrete set and then approach the continuum only in a proper scaling limit where both the set is made larger and larger but also the functions that are defined on that set are coarse grained so that they become smooth functions in that continuum limit (we then don't get exotic objects like non-measurable functions). This more elaborate limiting procedure would i.m.o. at least, lead to a much simpler and much more natural set up of real analysis. </p>
<p>I'm in no doubt that the mathematicians who have fallen in love with exotic objects would strongly disagree with me, but it's difficult to explain to an engineering student why he/she has to navigate around all these exotic mathematical artifacts in order to study a topic such as fluid dynamics. </p>
|
275,371 | <p>I was wondering if it is possible to decompose any symmetric matrix into a positive definite and a negative definite component. I can't seem to think of a counterexample if the statement is false.</p>
| adam W | 43,193 | <p>Yes, see one of my <a href="https://math.stackexchange.com/q/209834/43193">question</a>s with the details. I will type up some more:</p>
<p>Given $A$ such that $A = A^\top$, $A$ with both positive and negative eigenvalues, the LDU factorization will have $U=L^\top$ (follows directly from symmetry) and $D$ diagonal with both positive and negative values. So
$$A=L(D_p + D_n)L^\top$$</p>
<p>where $D$ is separated into the positive portion $D_p$ and the negative portion $D_n$. They have all positive or all negative values and zeros. Thus when the matrix is decomposed as</p>
<p>\begin{align}
A &= LD_pL^\top + LD_nL^\top \\
&= P + N \\
\end{align}</p>
<p>it is separated with $P$ symmetric positive semidefinite, and $N$ symmetric negative semidefinite.</p>
<p>As was pointed out in the comments, $0=-1+1$. Thus, to obtain strict definiteness for both parts, perturb the split while retaining the value of $D = D_p + D_n$: for instance, replace $D_p$ by $D_p + \epsilon I$ and $D_n$ by $D_n - \epsilon I$ for any $\epsilon > 0$. Since $L$ is invertible, the congruences $L(D_p + \epsilon I)L^\top$ and $L(D_n - \epsilon I)L^\top$ are then positive definite and negative definite, respectively.</p>
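The $P+N$ split is easy to verify numerically. The sketch below uses an eigendecomposition rather than the $LDL^\top$ factorization of the answer — an equivalent way to exhibit one valid pair (the test matrix is my own random choice):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T                       # a symmetric matrix, typically indefinite

w, Q = np.linalg.eigh(A)          # A = Q diag(w) Q^T
P = Q @ np.diag(np.maximum(w, 0)) @ Q.T   # positive semidefinite part
N = Q @ np.diag(np.minimum(w, 0)) @ Q.T   # negative semidefinite part

print(np.allclose(A, P + N))      # the two parts sum back to A
```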
|
1,407,797 | <p>P is the midpoint of the median from vertex A of triangle ABC. Q is the point of intersection between lines AC and BP.</p>
<p><a href="https://i.stack.imgur.com/ka8E8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ka8E8.png" alt="enter image description here"></a></p>
| Macavity | 58,320 | <p>Starting you off in more detail....</p>
<p>With $\mathbb{a, b}$ denoting the vertices $A, B$ and $C$ being the Origin, one gets $$\mathbb m = \tfrac12\mathbb b, \quad \mathbb p = \tfrac12(\mathbb{m+a})=\tfrac12\mathbb a+\tfrac14 \mathbb b$$</p>
<p>Now $Q$ is located on the intersection of $\vec{BP} = t\mathbb b+(1-t)\mathbb p$ and $\vec {CA} = s\mathbb a$, so we solve to get $t = -\frac13, s = \frac23$, giving $\mathbb q = \frac23 \mathbb a$.</p>
<p>Actually, we have all that is needed to answer the questions by now...</p>
|
1,407,797 | <p>P is the middle of the median line from vertex A, of ABC triangle. Q is the point of intersection between lines AC and BP.</p>
<p><a href="https://i.stack.imgur.com/ka8E8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ka8E8.png" alt="enter image description here"></a></p>
| mathlove | 78,967 | <p>Let $\vec a=\vec{CA},\vec b=\vec{CB}$. Then, we have
$$\vec{CM}=\frac 12\vec b$$$$\vec{CP}=\frac 12\vec{CA}+\frac 12\vec{CM}=\frac 12\vec a+\frac 14\vec b\tag 1$$</p>
<p>Also, setting $QC:AQ=s:1-s,BP:PQ=t:1-t$ gives</p>
<p>$$\vec{CP}=t\vec{CQ}+(1-t)\vec{CB}=ts\vec a+(1-t)\vec b\tag2$$</p>
<p>Now comparing $(2)$ with $(1)$ will give you the answer.</p>
|
329,513 | <p>$$
\int \frac{\sqrt{\frac{x+1}{x-2}}}{x-2}dx
$$</p>
<p>I tried:
$$
t =x-2
$$
$$
dt = dx
$$
but it didn't work.
Do you have any other ideas?</p>
| Pantelis Sopasakis | 8,357 | <p>$\renewcommand{\Re}{\mathbb{R}}\newcommand{\<}{\langle}\newcommand{\>}{\rangle}\newcommand{\barre}{\bar{\Re}}$Let us first introduce the <em>convex conjugate</em> of an extended-real-valued convex proper function $f:\Re^n\to\barre$ which is a function $f^*:\Re^n\to\barre$ defined as</p>
<p>$$
f^*(y) = \sup_x \<x,y\> - f(x).
$$</p>
<p>Given a (primal) optimization problem</p>
<p>$$
\mathsf{P}: \mathrm{Minimize}_{x\in\Re^n}\ f(x)
$$</p>
<p>Its Fenchel dual is</p>
<p>$$
\mathsf{D}: \mathrm{Maximize}_{y\in\Re^n}\ -f^*(y)
$$</p>
<p>and the second dual is</p>
<p>$$
\mathsf{P}': \mathrm{Minimize}_{x\in\Re^n}\ f^{**}(x)
$$</p>
<p>In general $f^{**}\leq f$. In the context of <a href="https://en.wikipedia.org/wiki/Fenchel%27s_duality_theorem" rel="nofollow noreferrer">Fenchel duality</a>, your question is equivalent to asking under what conditions $f=f^{**}$.</p>
<p>Necessary and sufficient conditions are provided by the <a href="https://en.wikipedia.org/wiki/Fenchel%E2%80%93Moreau_theorem" rel="nofollow noreferrer">Fenchel-Moreau Theorem</a> according to which it is necessary and sufficient that $f$ is proper, convex and lower semi-continuous (i.e., it has a closed epigraph).</p>
<p>Note that $f=f^{**}$ implies <a href="https://en.wikipedia.org/wiki/Strong_duality" rel="nofollow noreferrer">strong duality</a>.</p>
<p><strong>References:</strong></p>
<ol>
<li>H.H. Bauschke and P.L. Combettes, <em>Convex Analysis and Monotone Operator Theory in Hilbert Spaces,</em> Springer, 2011.</li>
<li>R.T. Rockafellar, <em>Convex Analysis,</em> Princeton University Press, 1970.</li>
</ol>
<p><strong>Update:</strong> In the case of <a href="https://en.wikipedia.org/wiki/Duality_(optimization)" rel="nofollow noreferrer">Lagrangian duality</a> where we consider problems of the form
\begin{align}
\mathrm{Minimize}_{x\in\Re^n} f(x)\\
\text{subject to}: x\in C,
\end{align}
where $f:\Re^n\to\Re$ is a convex function and $C$ is a nonempty convex set, we can write this as
\begin{align}
\mathrm{Minimize}_x F(x) := f(x) + \delta_C(x),
\end{align}
where $\delta_C$ is the <em>indicator</em> function of $C$ defined as
\begin{align}
\delta_C(x) = \begin{cases}
0,&\text{ if } x\in C,\\
+\infty,&\text{ otherwise}
\end{cases}
\end{align}
The set $C$ is given by $C=\{x\in\Re^n: g(x) \leq 0\}$.
The Lagrangian dual (where we "dualize" the constraints by introducing a dual variable $y$ and a cost $\<y,g(x)\>$, and so on) is equivalent to the Fenchel dual. </p>
<p>Then, we may apply the above: the second dual is equivalent to the dual provided that $F^{**}=F$, so, if (By the Fenchel-Moreau Theorem) $F$ is proper, convex and lower semicontinuous. I'll leave it up to you to tell what this means for $f$ and $C$.</p>
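As a toy illustration of the Fenchel–Moreau statement (a grid-based approximation; the function and the grids are my own choices), one can check numerically that for a proper, convex, lower semi-continuous function the biconjugate recovers the function:

```python
import numpy as np

xs = np.linspace(-10, 10, 4001)
f_vals = 0.5 * xs**2                     # f(x) = x^2/2 is its own conjugate

def conjugate(vals, grid, points):
    # g*(p) = sup_x <x,p> - g(x), approximated by a max over the grid
    return np.array([np.max(p * grid - vals) for p in points])

ys = np.linspace(-3, 3, 61)
f_star = conjugate(f_vals, xs, ys)       # should be ~ y^2/2
xs2 = np.linspace(-2, 2, 41)
f_biconj = conjugate(f_star, ys, xs2)    # biconjugate: should recover ~ x^2/2

print(np.max(np.abs(f_biconj - 0.5 * xs2**2)))   # small discretization error
```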
|
4,021,994 | <p>I was taught in high school algebra to translate word problems into algebraic expressions. So when I encountered <a href="https://artofproblemsolving.com/wiki/index.php/2016_AMC_10A_Problems/Problem_3" rel="nofollow noreferrer">this</a> problem I tried to reason out an algebra formula for it</p>
<blockquote>
<p>For every dollar Ben spent on bagels, David spent 25 cents less. Ben
paid $12.50 more than David. How much did they spend in the bagel store
together?</p>
</blockquote>
<p>To solve this I imagined a series of comparisons when Ben spends <span class="math-container">$x$</span>, David spends <span class="math-container">$.75x$</span>. Loop this relationship until <span class="math-container">$x - .75x \approx 12.50$</span>. Good. Done. <span class="math-container">$x = 50$</span>, then add David's for the answer. Coming from computers, I would have set this up in code where a loop (recursion) would increase <span class="math-container">$x$</span> until the condition <span class="math-container">$x - .75x = 12.50$</span> was met, then the "loop counter/accumulator" would be how much Ben spent, i.e., <span class="math-container">$50$</span>, etc.</p>
<p>I'm a beginner with math, but it seems like there should be a better approach, something with series and sequences or even calculus derivatives, something better than my brute-force computer algorithm. Can someone enlighten? The "answer" given at the site (see link) is its own brute-force and hardly satisfying. I'm thinking there should be something more formal -- at least for the first part that derives <span class="math-container">$50$</span>.</p>
<p><strong>Update</strong></p>
<p>I think everyone so far has missed my point. Many of you simply re-did the problem again. I'm wondering if there is a more <em>formal</em> way to do this other than just "figuring it out" (FIO). The whole FIO routine is murky. It looks like a limit problem; it looks like a system of equations, but I'm not experienced enough to know exactly. If there isn't, then let's call it a day....</p>
| Robert Israel | 8,508 | <p>Another way, with no <span class="math-container">$x$</span>'s needed:</p>
<p>The first condition says the difference between their spending is <span class="math-container">$1/4$</span> of what Ben spends. That difference is <span class="math-container">$\$12.50$</span>, so Ben's amount is <span class="math-container">$4 \times \$ 12.50 = \$50$</span>. David's is then <span class="math-container">$\$50 - \$12.50 = \$37.50$</span>, so together they spent <span class="math-container">$\$87.50$</span>.</p>
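In code, the "loop until the gap reaches $12.50" search the asker describes collapses to a single division, because the gap is always exactly one quarter of Ben's total (a trivial sketch, for illustration):

```python
ben = 12.50 / 0.25        # the gap is 1/4 of Ben's spending
david = 0.75 * ben
total = ben + david
print(ben, david, total)  # 50.0 37.5 87.5
```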
|
319,262 | <p>If the first 10 positive integers are placed around a circle, in any order, must there exist 3 integers in consecutive locations around the circle that have a sum greater than or equal to 17? </p>
<p>This was from a textbook called "Discrete math and its application", however it does not provide solution for this question. </p>
<p>May I know how to tackle this question. </p>
<p>Edit: I relooked at the actual question and realized it asks for a sum greater than or equal to 17. My apologies.</p>
| Brandon | 212,011 | <p>Solution: For any arrangement of the first 10 positive integers around a circle, there are exactly
10 choices of 3 consecutive numbers, and each number appears in exactly 3 of these 10 triples. Hence the
sum over all 10 triples is
(1 + 2 + · · · + 10) × 3 = 165.
This implies that at least one triple has a sum greater than or equal to 17. Indeed,
if there were no such triple, every triple would sum to at most 16, so the sum over all 10 triples would be
at most 160, contradicting the fact that it equals 165.</p>
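The averaging argument can be spot-checked by brute force over random arrangements (an illustrative sketch; the theorem of course covers all arrangements, not just a sample):

```python
import random

def triple_sums(arr):
    # sums of the 10 consecutive triples around the circle
    n = len(arr)
    return [arr[i] + arr[(i + 1) % n] + arr[(i + 2) % n] for i in range(n)]

rng = random.Random(1)
nums = list(range(1, 11))
ok = True
for _ in range(1000):
    rng.shuffle(nums)
    s = triple_sums(nums)
    # each number lies in exactly 3 triples, so the sums total 165,
    # and therefore the largest triple-sum is at least ceil(16.5) = 17
    ok = ok and sum(s) == 165 and max(s) >= 17
print(ok)
```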
|
1,092,665 | <p>My question is really simple, how can I write symbolically this phrase: </p>
<blockquote>
<p>$x=\sum a_mx^m$ where $m$ range over
$\{1,\ldots,g\}\setminus\{t_1,\ldots,t_u\}$</p>
</blockquote>
<p>Being more specific, I would like to know how to write with mathematical symbols this part: "range over $\{1,\ldots,g\}\setminus\{t_1,\ldots,t_u\}$"</p>
<p>Thanks</p>
| Ross Millikan | 1,827 | <p>Often we see something like$$x=\sum_{m\in\{1,\ldots,g\}\setminus\{t_1,\ldots,t_u\}} a_mx^m$$</p>
|
177,519 | <p>Let $\mathfrak{g}$ be a simple lie algebra over $\mathbb{C}$ and let $\hat{\mathfrak{g}}$ be the Kac-Moody algebra obtained as the canonical central extension of the algebraic loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. In a sequence of papers, Kazhdan and Lusztig constructed a braided monoidal structure on (a certain subcategory of) the category of representations of $\hat{\mathfrak{g}}$ of central charge $k - h$ where $k \in \mathbb{C}^* \;\backslash\; \mathbb{Q}_{\geq 0}$ and $h$ is the coxeter number of $\mathfrak{g}$. They then showed that the resulting braided category is equivalent to the braided category of finite dimensional representations of the quantum group $U_q(\mathfrak{g})$ for $q = e^{\frac{\pi i}{k}}$. </p>
<p>My question then is this: is there any conceptual explanation as to why these two braided categories should be equivalent (which does not resort to computing both sides and seeing that they are same)? The representations of $\hat{\mathfrak{g}}$ of various central charges can be considered as twists of the representation theory of the loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. On the other hand, the representation theory of $U_q(\mathfrak{g})$ is a braided deformation (which can be thought of as a form of twisting) of the representation theory of $\mathfrak{g}$ itself. Moreover, the equivalence above only holds for non-trivially deformed/twisted cases. The limiting case of the representations of $\mathfrak{g}$ is recovered by (carefully) taking $q=1$, which corresponds to $k \rightarrow \infty$ and hence does not participate in the game. On the other hand, to obtain central charge $0$ we would need to take $k=h$ which is also excluded (as the proof Kazhdan-Lustig assumes $k \notin \mathbb{Q}_{\geq 0}$). Is there any reason why these two lie algebras would have the same twisted/deformed representations, but not the same representations?</p>
| dhy | 51,424 | <p>(Written on my phone - apologies for any typos.) </p>
<p>A few comments:</p>
<p>a) First, as to the source of the braided monoidal structure on the Kazhdan-Lusztig category. The category of integrable affine Lie algebra reps is naturally a factorization category, which is close morally to an E2/braided monoidal category, see DBZ's answer to mathoverflow.net/questions/53988/what-is-the-motivation-for-a-vertex-algebra/54008#54008. In some cases this can be made precise, and this gives you your braided monoidal category here.</p>
<p>b) Now as to why one might expect these two braided monoidal categories to agree, I think the key is exactly the $\kappa\rightarrow\infty$ case you mention (corresponding to $q=1.$) A general "limiting to infinity" procedure is described in <a href="https://arxiv.org/abs/1708.05108" rel="noreferrer">https://arxiv.org/abs/1708.05108</a>, but here I will proceed in a more ad hoc way. Before taking this limit, let me restate the category we are interested in. The Kazhdan-Lusztig category is the category of finitely-generated $U_{\kappa}(g((t)))$-modules equipped with an action of $G$, with the conditions that the two induced $g$ actions agree and that elements of $tg[[t]]$ act nilpotently.</p>
<p>For our $\kappa\rightarrow\infty$ limit, we need to figure out how to degenerate $U_{\kappa}(g((t)))$. Writing $\kappa=c\kappa'$ for a fixed nondegenerate $\kappa'$, we can describe $U_{\kappa}(g((t)))$ as the free algebra on $g((t))$, mod the relations $[s,t]=[s,t]_0+c\kappa'(s,t).$ We can't directly limit $c$ to infinity, but note that we can rescale the generators and rewrite the relations as $[s,t]=\frac{1}{c}[s,t]_0+\kappa'(s,t),$ which we can limit to $[s,t]=\kappa'(s,t).$ So we can reasonably set $U_{\infty}(g((t)))$ to be the tensor product of $\operatorname{Sym} g$ and a Weyl algebra on $tg[[t]]\oplus (tg[[t]])^*$ (recall $\kappa'$ induces a perfect pairing between $g[[t]]$ and $t^{-1}g[t^{-1}]$.)</p>
<p>Now our original integrability condition limits to the conditions that $tg[[t]]$ acts nilpotently, that $g$ acts by zero (not that it agrees with the action of $G$ - recall our rescaling!), and that the $G$ actions on our representation and on the Weyl algebra are compatible. The only representations satisfying these conditions are of the form $V\otimes W$, where $V$ is a finite dimensional representation of $G$ and $W$ is the standard rep of the Weyl algebra. So our final category is indeed the category of $G$-reps.</p>
<p>c) A more highbrow way to limit $\kappa$ to infinity is via geometric Langlands. GL switches level infinity with critical level, and the Kazhdan-Lusztig category with the Whittaker category on the affine Grassmannian for the Langlands dual group (since here we are working with abelian categories and critical level, you can think instead of the category of spherical D-modules.) Now geometric Satake tells you this gives you back $Rep(G)$.</p>
<p>This is kind of perverse, in that you're using Langlands duality twice. The point here though is that this suggests one can prove the Kazhdan-Lusztig equivalence via similar methods to geometric Satake, by describing both sides via root datum. I believe Dennis Gaitsgory has a new proof following these lines.</p>
|
3,386,530 | <p>Let <span class="math-container">$(\Omega,\mathcal{F},\mathbb{P})$</span> be a probability space and <span class="math-container">$(\mathcal{X},d)$</span> be a complete, separable, locally compact metric space. Suppose that <span class="math-container">$X,X_1,X_2,X_3,... : \Omega\to\mathcal{X}$</span> are <span class="math-container">$\mathbb{P}$</span>-i.i.d. random variables.</p>
<p>Define: <span class="math-container">$$\forall m\in\mathbb{N}, \pi_m: \mathcal{X}\times\mathcal{X}^m\to\{1,...,m\}, (x,x_1,...,x_m)\mapsto \min\left(\operatorname{argmin}_{k\in\{1,...,m\}}\left(d\left(x,x_1\right),...,d\left(x,x_m\right)\right)\right).$$</span>
Define:
<span class="math-container">$$\forall m\in\mathbb{N}, Z_m:\Omega\to\mathcal{X}, \omega\mapsto X_{\pi_m\left(X(\omega),X_1(\omega),...,X_m(\omega)\right)}(\omega).$$</span></p>
<p>If <span class="math-container">$A$</span> is a open set of <span class="math-container">$(\mathcal{X},d)$</span>, is it true that:
<span class="math-container">$$\limsup_{m\to+\infty}\mathbb{P}_{Z_m}(A)\le\mathbb{P}_{X}(A)?$$</span></p>
<blockquote>
<p><strong>Edit 1</strong>: or maybe that there exists a constant <span class="math-container">$C>0$</span> independent of <span class="math-container">$A$</span> such that:
<span class="math-container">$$\limsup_{m\to+\infty}\mathbb{P}_{Z_m}(A)\le C\cdot \mathbb{P}_{X}(A)?$$</span></p>
</blockquote>
<p>If it is false in general, what if we add the hypothesis that <span class="math-container">$\mathcal{X}=\mathbb{R}^n$</span>, <span class="math-container">$d$</span> is the Euclidean distance and <span class="math-container">$\mathbb{P}_X$</span> is absolutely continuous w.r.t. Lebesgue measure in <span class="math-container">$\mathbb{R}^n$</span>?</p>
<blockquote>
<p><strong>Edit 2:</strong> in this last case, we have that if <span class="math-container">$B$</span> is a ball of <span class="math-container">$\mathbb{R}^n$</span>, then <span class="math-container">$\mathbb{P}_X(\partial B)=0$</span> so, since <span class="math-container">$\mathbb{P}_{Z_m}\to \mathbb{P}_{X}$</span> in distribution (as explained below by WoolierThanThou), we have that <span class="math-container">$\mathbb{P}_{Z_m}(B)\to\mathbb{P}_{X}(B), m\to \infty$</span>. Now, since by the Besicovitch covering theorem there exists a universal constant <span class="math-container">$C_n\in\mathbb{N}$</span> such that every open subset <span class="math-container">$A$</span> is the union of at most <span class="math-container">$C_n$</span> open sets <span class="math-container">$A_1,...,A_{C_n}$</span>, each of which is a disjoint countable union of open balls, say <span class="math-container">$A_i = \cup_{j\in I_i} B_{i,j}$</span>, we have that:
<span class="math-container">$$\mathbb{P}_{Z_m}(A)\le \sum_{i=1}^{C_n}\mathbb{P}_{Z_m}(A_i)= \sum_{i=1}^{C_n}\sum_{j\in I_i}\mathbb{P}_{Z_m}(B_{i,j}) = (*)$$</span>
Now, if only we could exchange the limit and the series, we obtain that
<span class="math-container">$$(*)\to \sum_{i=1}^{C_n}\sum_{j\in I_i}\mathbb{P}_{X}(B_{i,j})= \sum_{i=1}^{C_n}\mathbb{P}_{X}(A_i)\le \sum_{i=1}^{C_n}\mathbb{P}_{X}(A) = C_n \mathbb{P}_{X}(A) $$</span>
Can anyone see a reason why we could exchange the limit and the series?</p>
</blockquote>
| egreg | 62,967 | <p>The proof is correct. Here's a different way to present the same idea.</p>
<p>First a useful lemma: <em>if <span class="math-container">$A\subseteq B$</span> and <span class="math-container">$x$</span> is a limit point of <span class="math-container">$A$</span>, then <span class="math-container">$x$</span> is a limit point of <span class="math-container">$B$</span></em>. The proof is just applying definitions.</p>
<p>Now, if <span class="math-container">$E$</span> is an open subset of <span class="math-container">$\mathbb{R}^n$</span> and <span class="math-container">$x\in E$</span>, then there exists <span class="math-container">$r>0$</span> such that <span class="math-container">$B(x,r)$</span> (the open ball centered at <span class="math-container">$x$</span> of radius <span class="math-container">$r$</span>) is contained in <span class="math-container">$E$</span>. Thus we are reduced to prove that <span class="math-container">$x$</span> is a limit point of <span class="math-container">$B(x,r)$</span>. A neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span> contains a ball <span class="math-container">$B(x,s)$</span>, so <span class="math-container">$U\cap B(x,r)\supseteq B(x,s)\cap B(x,r)=B(x,\min(s,r))$</span>, which contains points distinct from <span class="math-container">$x$</span>.</p>
|
395,791 | <p>I am searching for examples of manifolds which are not symmetric spaces but where Jacobi fields can be computed in closed form. For now, I am aware of</p>
<ul>
<li>Gaussian distribution with the Wasserstein metric: <a href="https://arxiv.org/pdf/2012.07106.pdf" rel="noreferrer">https://arxiv.org/pdf/2012.07106.pdf</a></li>
<li>Kendall shape space: <a href="https://arxiv.org/pdf/1906.11950.pdf" rel="noreferrer">https://arxiv.org/pdf/1906.11950.pdf</a></li>
</ul>
<p>Are there many others? Thank you for your help.</p>
| R W | 8,588 | <p><strong>Damek-Ricci spaces</strong> are obtained by equipping certain solvable Lie groups with appropriate invariant Riemannian metrics. This class is larger than the class of symmetric spaces (this was precisely the point of their construction as counterexamples to the <a href="https://en.wikipedia.org/wiki/Lichnerowicz_conjecture" rel="nofollow noreferrer">Lichnerowicz conjecture</a> ), and the Jacobi fields on them are explicitly described in Section 4.2 of <a href="https://www.sciencedirect.com/science/article/pii/S0001870813000455" rel="nofollow noreferrer">Isoparametric hypersurfaces in Damek–Ricci spaces</a> by Díaz-Ramos and Domínguez-Vázquez.</p>
|
370,058 | <p>How can I take this integral?</p>
<p>$$\int_{0}^{x} (z- u)_+^2 du $$</p>
<p>where the subscript <code>+</code> means: if $z$ is bigger than $u$, the expression equals $z - u$; otherwise it equals zero.</p>
| Community | -1 | <p>If $t\leq\tau$, we have
$$\int_0^t (\tau-u)_+^2 du = \int_0^t (\tau-u)^2 du = \dfrac{(t-\tau)^3+\tau^3}3$$
If $t\geq\tau$, we have
$$\int_0^t (\tau-u)_+^2 du = \int_0^{\tau} (\tau-u)^2 du = \dfrac{\tau^3}3$$
Hence, we get that
$$\int_0^t (\tau-u)_+^2 du = \begin{cases} \dfrac{(t-\tau)^3+\tau^3}3 & t \leq \tau\\ \dfrac{\tau^3}3 & t \geq \tau\end{cases}$$
The above can be rewritten as
$$\int_0^t (\tau-u)_+^2 du = \dfrac{\tau^3-(\tau-t)_+^3}3$$</p>
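<p>As a quick numerical sanity check (my addition, not part of the original answer), the closed form can be compared with a midpoint-rule approximation of the integral in pure Python:</p>

```python
# Numerically verify: integral_0^t (tau - u)_+^2 du = (tau^3 - (tau - t)_+^3) / 3
def pos(x):
    """The positive part (x)_+ = max(x, 0)."""
    return max(x, 0.0)

def closed_form(t, tau):
    return (tau**3 - pos(tau - t)**3) / 3

def riemann(t, tau, n=100000):
    """Midpoint-rule approximation of integral_0^t (tau - u)_+^2 du."""
    h = t / n
    return sum(pos(tau - (i + 0.5) * h)**2 for i in range(n)) * h

# cover all three regimes: t < tau, t > tau, t = tau
for t, tau in [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]:
    assert abs(closed_form(t, tau) - riemann(t, tau)) < 1e-6
print("piecewise formula verified")
```

The three test pairs exercise both branches of the piecewise answer and the boundary case.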
|
4,447,522 | <p>Show that <span class="math-container">$$\cot\left(\dfrac{\pi}{4}+\beta\right)+\dfrac{1+\cot\beta}{1-\cot\beta}=-2\tan2\beta$$</span>
I'm supposed to solve this problem only with sum and difference formulas (identities).</p>
<p>So the LHS is <span class="math-container">$$\dfrac{\cot\dfrac{\pi}{4}\cot\beta-1}{\cot\dfrac{\pi}{4}+\cot\beta}+\dfrac{1+\cot\beta}{1-\cot\beta}=\dfrac{\cot\beta-1}{1+\cot\beta}+\dfrac{1+\cot\beta}{1-\cot\beta}=\dfrac{4\cot\beta}{1-\cot^2\beta}$$</span> I also tried to work with <span class="math-container">$\sin\beta$</span> and <span class="math-container">$\cos\beta$</span> and arrived at <span class="math-container">$$\dfrac{4\sin\beta\cos\beta}{\sin^2\beta-\cos^2\beta}$$</span> I don't see how to get <span class="math-container">$-2\tan2\beta$</span> from here (even with other identities).</p>
| user2661923 | 464,411 | <p>The alternative approach is to stay with your original approach and complete it.</p>
<p><span class="math-container">$\displaystyle \frac{4\cot\beta}{1-\cot^2\beta}
= \frac{\frac{4}{\tan(\beta)}}{1 - \frac{1}{\tan^2(\beta)}}
= \frac{\frac{4}{\tan(\beta)}}{\frac{\tan^2(\beta) - 1}{\tan^2(\beta)}}
$</span></p>
<p><span class="math-container">$\displaystyle = ~\frac{4\tan(\beta)}{\tan^2(\beta) - 1}
= (-2) \times \frac{2\tan(\beta)}{1 - \tan^2(\beta)} = (-2) \times \tan(2\beta).$</span></p>
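<p>The whole chain of identities can be spot-checked numerically (my addition, not part of the original answer), staying away from the poles at <span class="math-container">$\beta = \pi/4$</span>:</p>

```python
import math

def lhs(beta):
    # cot(pi/4 + beta) + (1 + cot(beta)) / (1 - cot(beta))
    cot = lambda x: 1.0 / math.tan(x)
    return cot(math.pi / 4 + beta) + (1 + cot(beta)) / (1 - cot(beta))

def rhs(beta):
    return -2 * math.tan(2 * beta)

# sample angles avoiding beta = 0 and beta = pi/4 (where a factor blows up)
for beta in [0.1, 0.3, 1.0, -0.7, 2.0]:
    assert abs(lhs(beta) - rhs(beta)) < 1e-9
print("identity verified at sample angles")
```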
|
3,999,488 | <p><strong>Question:</strong> How is the differentiation of <span class="math-container">$xy=constant$</span> equal to <span class="math-container">$x\text{d}y+y\text{d}x$</span>?</p>
<p><strong>My Approach:</strong> I first tried using partial differentiation, which I know very little of. Basically, it's the differentiation of the function with respect to one variable at a time, while keeping the other constant, right?</p>
<p>So using that, shouldn't I get the answer as <span class="math-container">$x+y$</span>?</p>
<p>All help will be appreciated greatly.</p>
| Community | -1 | <p>In general, the <em>differential</em> of the function <span class="math-container">$f(x,y)$</span> is given by
<span class="math-container">$$
df=f_x(x,y)dx+f_y(x,y)dy
$$</span>
where <span class="math-container">$f_x$</span> and <span class="math-container">$f_y$</span> denote the partial derivatives.</p>
<p>In your example, if you set <span class="math-container">$f(x,y)=xy$</span>, you have
<span class="math-container">$$
df=ydx+xdy
$$</span></p>
<hr />
<p>Notes.</p>
<p>The expression <span class="math-container">$xy=C$</span> is not a function, but an equation. You do not differentiate an equation <em>per se</em>, although you can differentiate both <em>sides</em> of an equation.</p>
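<p>The differential can also be checked against the actual increment of <span class="math-container">$f(x,y)=xy$</span> (a small numeric sketch, my addition): the exact increment is <span class="math-container">$y\,dx + x\,dy + dx\,dy$</span>, so the differential matches to first order.</p>

```python
# Check df = y dx + x dy for f(x, y) = x*y against the actual increment:
# f(x+dx, y+dy) - f(x, y) = y*dx + x*dy + dx*dy, so the error is O(dx*dy).
def f(x, y):
    return x * y

x, y = 3.0, 5.0
dx, dy = 1e-5, 2e-5
increment = f(x + dx, y + dy) - f(x, y)
df = y * dx + x * dy
assert abs(increment - df) < 1e-9   # leftover term is dx*dy = 2e-10
print("df = y dx + x dy matches to first order")
```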
|
3,476,022 | <p>I was watching this Mathologer video (<a href="https://youtu.be/YuIIjLr6vUA?t=1652" rel="noreferrer">https://youtu.be/YuIIjLr6vUA?t=1652</a>) and he says at 27:32</p>
<blockquote>
<p>First, suppose that our initial <em>chunk</em> is part of a parabola, or if you like a cubic, or any polynomial. If I then tell you that my <em>mystery function</em> is a polynomial, there's always going to be exactly one polynomial that continues our initial <em>chunk</em>. In other words, <strong>a polynomial is completely determined by any part of it.</strong> [...] Again, just relax if all this seems a little bit too much.</p>
</blockquote>
<p>So he didn't give a proof of the theorem in bold text – I think this is very important.</p>
<p>I understand that there always exists a polynomial of degree <span class="math-container">$n$</span> that passes through a set of <span class="math-container">$n+1$</span> points (i.e. there are <strong>finitely many</strong> custom points to be passed by, the <em>chunk</em> has to be discrete, like <span class="math-container">$(1,1),(2,2),(3,3),(4,5)$</span>). But there also exists some polynomial of degree <span class="math-container">$m$</span> (<span class="math-container">$m\ne n$</span>) that passes through the same set of points.</p>
<p>But how do I prove that there exists one and only one polynomial that passes through a set of <strong>infinitely many</strong> points?</p>
| Alberto Saracco | 715,058 | <p>“There is one and only one polynomial” means two things:</p>
<p>1) There is at most one polynomial.</p>
<p>2) There is at least one polynomial.</p>
<p>Only the first affirmation is true.</p>
<hr>
<p>1) There is at most one polynomial:</p>
<p>Proof by contradiction.</p>
<p>Assume <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are two different polynomials passing trough <span class="math-container">$(x_i,y_i)$</span>, <span class="math-container">$i\in\mathbb N$</span>. Let <span class="math-container">$n$</span> be the maximum of their degrees.</p>
<p>There is a unique polynomial of degree at most <span class="math-container">$n$</span> through <span class="math-container">$(x_i,y_i)$</span> for <span class="math-container">$i=1,...,n+1$</span>. But we already know that <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> do. So <span class="math-container">$P=Q$</span>.</p>
<hr>
<p>2) There may be <strong>no</strong> polynomial.</p>
<p>Example: let us consider the points <span class="math-container">$(n,e^n)$</span>, <span class="math-container">$n\in\mathbb N$</span>. Suppose <span class="math-container">$P$</span> is a polynomial passing through them.</p>
<p>Notice that <span class="math-container">$f(x)=e^x$</span> passes through them as well.</p>
<p>Hence if
<span class="math-container">$$\lim_{x\to+\infty}\frac{e^x}{P(x)}$$</span>
exists, it must be <span class="math-container">$1$</span>, since it is <span class="math-container">$1$</span> when <span class="math-container">$x\in\mathbb N$</span>. But (as it is easily proved using De L'Hospital), that limit exists and is <span class="math-container">$\pm\infty$</span>. Contradiction. Hence, there is no such polynomial.</p>
<hr>
<p>Conclusion: It is false that there exists a polynomial that passes through any infinite set of points, but if you know that the function is a polynomial beforehand, then it is uniquely determined.</p>
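<p>A small numerical illustration of point 1 (my addition, not part of the original answer): interpolating <span class="math-container">$n+1$</span> samples of a degree-<span class="math-container">$n$</span> polynomial recovers it everywhere, which is why two polynomials agreeing at infinitely many points must coincide. The sketch uses exact rational arithmetic:</p>

```python
from fractions import Fraction

def lagrange_eval(pts, x):
    """Evaluate the unique interpolating polynomial of degree <= len(pts)-1 at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

P = lambda x: 2 * x**3 - x + 5          # a "mystery" cubic
pts = [(x, P(x)) for x in range(4)]      # any 4 sample points determine it
for x in [-10, 7, 100]:                  # interpolant agrees with P everywhere
    assert lagrange_eval(pts, x) == P(x)
print("cubic recovered exactly from 4 points")
```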
|
1,175,993 | <p>I want to show $T=d/dx$ is unbounded on $C^1[a,b]$ with $b>1$. Take a sequence $f(x)=x^n$, and $\|T\|=\sup_{x\in[a,b]}\frac{\|Tx\|}{\|x\|}=\frac{\|n\cdot b^{n-1}\|}{\|b\|}$. I want to claim as $n$ goes to infinity, the operator norm goes to infinity, and hence it's unbounded. But the definition of operator norm only says I can take sup w.r.t. $x$, and I'm confused about why I can take sup w.r.t. $n$ here.</p>
| Pedro M. | 21,628 | <p>Your definition of operator norm does not compile: $Tx$ makes no sense for $x \in [a,b]$, as $T$ operates on functions. It should have been $$\sup_{f \in C^1([a,b])} \frac{\|Tf\|}{\|f\|} \geq \frac{\|Tf_n\|}{\|f_n\|} = nb^{n-2},$$
and thus $T$ is unbounded ($f_n(x) = x^n$).</p>
|
1,787,985 | <p>I have the following differential equation: $$\Big(\frac{dy}{dx}\Big)^2=\frac{y^2-A^2}{A^2}.$$ I am looking to obtain a solution $$y(x)=A\cosh\Big({\frac{x+B}{A}}\Big),$$ where B and A are constants. </p>
<p>I have tried square-rooting and Taylor expanding, substitution of an integral but am getting nowhere. </p>
<p>Sorry if this is an inappropriate homework question but it has been annoying me for too long today...</p>
| Omar Antolín-Camarena | 1,070 | <p>Let $V$ have basis $e_1, \ldots, e_n$. There is a basis $\delta_, \ldots, \delta_n$ of $V^\vee$ called the dual basis characterized by the property $\delta_i(e_j) = \begin{cases}1 & \text{if }i=j \\ 0 & \text{otherwise}\end{cases}$.</p>
<p>The element $"\mathrm{id}" \in T \otimes T^\vee$ corresponding to the identity $V \to V$ is then $\sum_{i=1}^n e_i \otimes \delta_i$. As you can see the tensor powers of $"\mathrm{id}"$ are fairly complicated and permuting the contravariant components is definitely not the identity.</p>
<p>So, why does that element correspond to the identity? Let's go backwards: given $\sum_i v_i \otimes \alpha_i \in V \otimes V^\vee$, the linear transformation $V \to V$ corresponding to it is given by $F(x) = \sum_i \alpha_i(x)v_i$. If you try this on the candidate for $"\mathrm{id}"$, you see that it corresponds to $x \mapsto \sum_{i=1}^n \delta_i(x) e_i$. Since the $\delta_i$ give the coordinates with respect to the basis $e_1, \ldots, e_n$, that sum is precisely $x$.</p>
<hr>
<p>EDIT: About the correspondence between maps $T^{a,b} \to T^{c,d}$ and elements of $T^{b+c,a+d}$. The correspondence described above $V\otimes V^\vee \to \mathrm{Hom}(V,V)$ generalizes straightforwardly to give a map $V \otimes W^\vee \to \mathrm{Hom}(W,V)$ ---just use the same formula! (Warning: this map is an isomorphism only if $W$ is finite dimensional.) Then we have $T^{b+c,a+d} \cong T^{c,d} \otimes T^{b,a} \cong T^{c,d} \otimes (T^{a,b})^\vee \to \mathrm{Hom}(T^{a,b},T^{c,d})$.</p>
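<p>This correspondence is easy to check concretely (my addition, not part of the original answer): representing $\delta_i$ as the $i$-th coordinate functional, the map $x \mapsto \sum_i \delta_i(x)\,e_i$ returns $x$ unchanged, confirming that $\sum_i e_i \otimes \delta_i$ acts as the identity.</p>

```python
# Check that sum_i e_i (x) delta_i acts as the identity: x -> sum_i delta_i(x) e_i = x
n = 4
e = [[1 if j == i else 0 for j in range(n)] for i in range(n)]  # standard basis
delta = [lambda x, i=i: x[i] for i in range(n)]                 # dual basis

def F(x):
    """Map corresponding to sum_i e_i tensor delta_i."""
    out = [0] * n
    for i in range(n):
        c = delta[i](x)                      # coordinate of x along e_i
        out = [o + c * ei for o, ei in zip(out, e[i])]
    return out

x = [3, -1, 4, 2]
assert F(x) == x
print("sum_i e_i tensor delta_i is the identity map")
```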
|
1,787,985 | <p>I have the following differential equation: $$\Big(\frac{dy}{dx}\Big)^2=\frac{y^2-A^2}{A^2}.$$ I am looking to obtain a solution $$y(x)=A\cosh\Big({\frac{x+B}{A}}\Big),$$ where B and A are constants. </p>
<p>I have tried square-rooting and Taylor expanding, substitution of an integral but am getting nowhere. </p>
<p>Sorry if this is an inappropriate homework question but it has been annoying me for too long today...</p>
| Alex Saad | 184,547 | <p>Not really an answer, but I wanted to post this here in case anyone else ends up thinking about this thing. It might help set you on the right track.</p>
<p>Anyway, after browsing through <a href="http://www.math.washington.edu/~julia/AMS_SFSU_2014/Serganova.pdf" rel="nofollow noreferrer">this</a> collection of slides (see "Ingredients of Construction" slide) I now know that the morphisms in this category can be described using <strong>walled Brauer diagrams</strong> (see e.g. page 6-7 <a href="http://www.math.uni-bonn.de/ag/stroppel/brauer.pdf" rel="nofollow noreferrer">here</a>). I haven't completely tied up all the holes in my understanding but it has helped quite a lot.</p>
<p>The rule for going from an element $\sigma \in S_{a+d}$ to a map $\epsilon(\sigma):T^{a,b}\to T^{c,d}$ is as follows: draw the element $\sigma$ as in my diagram below as a collection of strands joining the $a+d = b+c$ points along two lines from top to bottom. Additionally, there is a green "wall" between the $a$th and $(a+1)$st points on the top connected to the point between the $c$th and $(c+1)$st on the bottom (this wall represents the "split" between covariant and contravariant factors in the map). Then "flip" the right (contravariant) part of the diagram along the green wall to get a pictorial description of a map $T^{a,b}\to T^{c,d}$. One obtains a similar diagram, but with different numbers of nodes on the top and bottom (the top has $a+b$ nodes, the bottom has $c+d$ nodes). Lines that crossed the green wall then become "semi-loops" attached to one edge, which I have taken to mean "evaluation" and "coevaluation". The other lines show which factors are sent to which other factors in the corresponding map</p>
<p>The universal formula mentioned above feels within reach when you follow the formal rule</p>
<p>$$\text{closed loop in a Brauer diagram} = \text{factor of } \text{rank}(V) \text{ multiplying the diagram without the loop.}$$</p>
<p>Assuming this rule works in this vague interpretation, I managed to demonstrate the formula in the image below for a particular element $(2,3)$ of the symmetric group $S_5$. Why one must follow this rule is still unclear to me, but I guess it is something like this: in a composition
$$T^{a,b}\xrightarrow{\epsilon(\sigma)} T^{c,d}\xrightarrow{\epsilon(\tau)}T^{e,f}$$
where $a+d = b+c$ and $c+f = e+d$, there is some number $N$ of evaluations and coevaluations occuring between elements of $V$ and $V^\vee$, which can be seen as closed loops in the centre of the Brauer diagram. When each of these pairings between coevaluations and evaluations occurs, a sum of basis vectors of the form $\delta_i(e_i)$ (notation in the accepted answer to this question) multiplies the morphism by a factor of the <strong>rank</strong> of $V$. Therefore every time a loop occurs in the diagram, we thus pick up another factor of the rank. </p>
<p>When you calculate what the corresponding map $\epsilon(\tau\sigma): T^{a,b}\to T^{e,f}$ does, some "double-overlapping" of the wall kills off certain strands in the diagrams, which would get mapped to closed loops after "flipping" the diagram. Multiplying by the number of strands that get killed by this gives you the same thing: hence
$$\epsilon(\tau)\circ\epsilon(\sigma) = \text{rank} (V)^N \epsilon(\tau\sigma).$$</p>
<p>This is quite a hard procedure to describe, and after reading about it there seems to be lots of extremely deep maths related to this construction including links to supersymmetry in physics. I hope someone else may find this to be a helpful and/or interesting note. Of course, if anyone would like to correct me or add to what I have written in this answer I would be very happy!</p>
<p><a href="https://i.stack.imgur.com/MURye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MURye.jpg" alt="enter image description here"></a></p>
|
168,053 | <p>If g is a positive, twice differentiable function that is decreasing and has limit zero at infinity, does g have to be convex? I am sure, from drawing a graph of a function which starts off as being concave and then becomes convex from a point on, that g does not have to be convex, but can someone show me an example of an actual functional form that satisfies this property?</p>
<p>We know that since g has limit at infinity, g cannot be concave everywhere, but I am sure that there is a functional example of a function g:[0,∞)↦(0,∞) which is decreasing, has limit zero at infinity, and is not everywhere convex; I just can't come up with it. Any ideas?</p>
<p>Thank you!</p>
| copper.hat | 27,978 | <p>Let $f(x) = \begin{cases} e^{1\over x}, & x < 0 \\
0, & \text{otherwise} \end{cases}$.</p>
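<p>A concrete closed-form example of the kind the asker wants (my addition, not from the answer above): <span class="math-container">$g(x)=\frac{1}{1+x^2}$</span> on <span class="math-container">$[0,\infty)$</span> is positive, strictly decreasing, smooth, tends to <span class="math-container">$0$</span>, yet <span class="math-container">$g''(x)=\frac{6x^2-2}{(1+x^2)^3}$</span> is negative for <span class="math-container">$x<1/\sqrt 3$</span>, so <span class="math-container">$g$</span> is not convex there. A quick numeric check:</p>

```python
# g(x) = 1/(1+x^2) on [0, inf): positive, decreasing, limit 0, but not convex,
# since g''(x) = (6x^2 - 2)/(1+x^2)^3 < 0 for x < 1/sqrt(3).
def g(x):
    return 1.0 / (1.0 + x * x)

def second_diff(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

assert g(0) > g(1) > g(10) > 0          # positive and decreasing
assert g(1000) < 1e-5                    # tends to 0
assert second_diff(g, 0.0) < 0           # concave near 0  (g''(0) = -2)
assert second_diff(g, 2.0) > 0           # convex further out
print("g decreases to 0 but is not convex on [0, inf)")
```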
|
139,021 | <p>Can you, please, recommend a good text about algebraic operads?</p>
<p>I know the main one, namely, <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Loday, Vallette "Algebraic operads"</a>. But it is very big and there is no way you can read it fast. Also there are notes by <a href="https://arxiv.org/abs/1202.3245" rel="nofollow noreferrer">Vallette "Algebra+Homotopy=Operad"</a>, but they don't have much information and are too combinatorial. So what I am looking for is a pretty concise introduction to the theory of algebraic operads, that will be more algebraic then combinatorial, and that will give enough information to actually start working with operads.</p>
<p>Thank you very much for your help!</p>
<p><strong>Edit</strong>: I have also found this interesting paper <a href="http://arxiv.org/pdf/math/9906063v2.pdf" rel="nofollow noreferrer">Modules and Morita Theorem for Operads</a> by Kapranov--Manin. Maybe it's a bit too concise for the first time reading about operads, but it has a lot of really nice examples and theorems.</p>
<p>There are also <a href="http://folk.uib.no/nmajv/Operader.ps" rel="nofollow noreferrer">notes</a> by Vatne (only in PostScript).</p>
| Al-Amrani | 34,304 | <p>A recent good book is <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Algebraic Operads by Jean-Louis Loday and Bruno Vallette</a>.</p>
|
139,021 | <p>Can you, please, recommend a good text about algebraic operads?</p>
<p>I know the main one, namely, <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Loday, Vallette "Algebraic operads"</a>. But it is very big and there is no way you can read it fast. Also there are notes by <a href="https://arxiv.org/abs/1202.3245" rel="nofollow noreferrer">Vallette "Algebra+Homotopy=Operad"</a>, but they don't have much information and are too combinatorial. So what I am looking for is a pretty concise introduction to the theory of algebraic operads, that will be more algebraic then combinatorial, and that will give enough information to actually start working with operads.</p>
<p>Thank you very much for your help!</p>
<p><strong>Edit</strong>: I have also found this interesting paper <a href="http://arxiv.org/pdf/math/9906063v2.pdf" rel="nofollow noreferrer">Modules and Morita Theorem for Operads</a> by Kapranov--Manin. Maybe it's a bit too concise for the first time reading about operads, but it has a lot of really nice examples and theorems.</p>
<p>There are also <a href="http://folk.uib.no/nmajv/Operader.ps" rel="nofollow noreferrer">notes</a> by Vatne (only in PostScript).</p>
| Pedro | 21,326 | <p>The book by M. Bremner and V. Dotsenko titled <em>Algebraic Operads: an algorithmic companion</em> (published in 2016) is (in my perhaps biased opinion) a must-have for those wishing to complement their reading of Loday--Vallette. As the authors explain :</p>
<blockquote>
<p>It is fairly accurate to say that the aim of this book is to create an
accessible companion book to [180] which would, in the spirit of [64] contain enough hands-on methods for working with specific operads: making experiments, formulating conjectures and, hopefully, proving theorems, as well as, in the spirit of [252], include enough interesting examples to stimulate the reader toward those experiments, conjectures and theorems.</p>
</blockquote>
<p>As the back-matter explains, it contains a systematic treatment of Groebner bases in several contexts, starting with non-commutative polynomials, and then moving to richer structure like twisted and shuffle algebras, and operads (ns, shuffle, symmetric), the main topic of the book. Like the book of Loday--Vallette, many instances of the book record relatively recent results concerning operads and related structures, and at the same time provides the reader with many challenging exercises (sometimes prompting them to use a CAS, if necessary) that provide invaluable insight for those aiming to make concrete computations using rewriting systems and their kin to study and prove results about operads.</p>
<p>[64] is <em>Ideals, Varieties and Algorithms</em> by Cox, Little and O'Shea,
[180] is <em>Algebraic Operads</em> by Loday and Vallette and
[252] is <em>Combinatorial and Asymptotic Methods in Algebra</em> by Ufnarovski.</p>
|
2,473,220 | <p>From how I understood the question and judging from solutions I've been provided with (see graph below),</p>
<p><a href="https://i.stack.imgur.com/73RU3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/73RU3.png" alt="enter image description here"></a></p>
<p>$f(x)$ starts from an $x$-position that should be an integer, and I assume this pattern repeats for all integers. </p>
<p>I also assume the graph follows the function $f(x)=x$, whereby $0\le y \le 0.5$ to make sure the function returns to the nearest integer. </p>
<p>If not, can $f(x)$ be equal to any function as long as it occupies the distance from $x$ to the next integer? For example, $f(x)=2x$ whereby $0\le y \le 1$</p>
<p><a href="https://i.stack.imgur.com/cSaYk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cSaYk.jpg" alt="enter image description here"></a></p>
<p>And can we say the critical points are all integers? Or maybe I did not understand the question well.</p>
| Rory Daulton | 161,807 | <p>Your first graph is the correct one for $f(x)$, and $f$ is well-defined by the definition in your title--although I might rather say "...to a nearest integer" since more than one could be "nearest."</p>
<p>You are correct that all integers are critical points of $f$. Those are the bottom corners in your graph. However, you missed some critical points, the ones halfway between two consecutive integers--i.e. the values $n+\frac 12$ where $n$ is an integer. Those are the top corners in your graph.</p>
<p>All other points have a derivative of $1$ or $-1$, so we have found all the critical points of $f$.</p>
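<p>The picture can be confirmed numerically (my addition, not part of the original answer): the function $f(x) = $ distance from $x$ to a nearest integer vanishes at integers, equals $\tfrac12$ at half-integers, and has slope $\pm 1$ everywhere else.</p>

```python
import math

def f(x):
    """Distance from x to a nearest integer."""
    return min(x - math.floor(x), math.ceil(x) - x)

# corners: value 0 at every integer, value 1/2 at every half-integer
for n in range(-3, 4):
    assert f(float(n)) == 0.0
    assert f(n + 0.5) == 0.5

# away from the corners the slope is +1 or -1
h = 1e-6
for x in [0.2, 0.7, 3.1, -1.6]:
    slope = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(abs(slope) - 1.0) < 1e-6
print("corners at integers and half-integers, slope +/-1 elsewhere")
```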
|
2,780,403 | <p>I've tried to solve this but I don't seem to get anywhere.</p>
<p>The question states:</p>
<blockquote>
<p>Tom's home is $1800$ m from his school. One morning he walked part of the way to school and then ran the rest. If it took him $20$ mins or less to get to school, and he walks at $70$ m/min and runs at $210$ m/min, how far did he run?</p>
</blockquote>
<p>My attempts to solve it got me stuck here:</p>
<blockquote>
<p>$$\frac{x}{70}+\frac{y}{210}≥\frac{1800}{20}$$</p>
</blockquote>
<p>Any help would be appreciated! </p>
| trancelocation | 467,003 | <p>The gradient at $P(1,1,1)$ is $\nabla T_P= (2,4,4)$.</p>
<p>So, the <strong>magnitude</strong> of the rate of change is
$$|\nabla T_P| = 2\cdot |(1,2,2)| = 6$$</p>
<p>Nevertheless, the direction of maximum decrease is in the direction of $-(2,4,4)$, which has the same direction as $-(1,2,2)$ which give the same unit vector
$$u =-\frac{1}{3}(1,2,2)$$</p>
<p>Edit for completeness:</p>
<p>The directional derivative in the direction of $u$ is
$$\nabla T_P\cdot u = -\frac{2}{3}|(1,2,2)|^2=-6$$</p>
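<p>The arithmetic can be double-checked in a few lines (my addition, not part of the original answer):</p>

```python
import math

grad = (2, 4, 4)                               # gradient of T at P(1,1,1)
assert math.sqrt(sum(c * c for c in grad)) == 6.0   # |grad T_P| = 2*|(1,2,2)| = 6

u = tuple(-c / 3 for c in (1, 2, 2))           # unit vector of steepest descent
assert abs(math.sqrt(sum(c * c for c in u)) - 1.0) < 1e-12
assert abs(sum(gc * uc for gc, uc in zip(grad, u)) + 6.0) < 1e-12  # derivative -6
print("steepest-descent rate is -6")
```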
|
2,496,817 | <p>My task is to prove the above, with $m,n \in \mathbb{N}$</p>
<p>Here is what I have:</p>
<p>$7 | (100m + n) \iff (100m +n) \mod 7 = 0$</p>
<p>$\iff (100m \mod 7 + n \mod 7) \mod 7 = 0 $</p>
<p>$\iff (2m +n) \mod 7 = 0$ </p>
<p>That is where I am stuck.</p>
| lhf | 589 | <p>More generally,</p>
<blockquote>
<p>$7 \mid (100m + n) \iff 7 \mid (m + 4n)$</p>
</blockquote>
<p>Indeed, let $a=100m + n$ and $b=m + 4n$. Then
$$
a-2b = 98m -7n \equiv 0 \bmod 7
$$
Therefore, $a \equiv 2b \bmod 7$. The result follows because $b \equiv 0 \bmod 7$ iff $2b \equiv 0 \bmod 7$.</p>
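<p>Both equivalences are easy to spot-check by brute force (a small addition, not part of the original answer):</p>

```python
# Brute-force check of both equivalences for small m, n:
# 7 | (100m + n)  <=>  7 | (2m + n)  <=>  7 | (m + 4n)
for m in range(200):
    for n in range(200):
        a = (100 * m + n) % 7 == 0
        b = (2 * m + n) % 7 == 0
        c = (m + 4 * n) % 7 == 0
        assert a == b == c
print("all three conditions agree for 0 <= m, n < 200")
```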
|
942,470 | <p>I am trying to count how many functions there are from a set $A$ to a set $B$. The answer to this (and many textbook explanations) are readily available and accessible; I am <strong>not</strong> looking for the answer to that question and <strong>please do not post it</strong>. Instead I want to know what fundamental mistake(s) I am making in counting the number of these functions. My reasoning is below, which I know is wrong after checking this question: <a href="https://math.stackexchange.com/questions/639326/how-many-functions-there-is-from-3-element-set-to-2-element-set">How many functions there is from 3 element set to 2 element set?</a>.</p>
<hr>
<p>For an example case, I consider counting how many functions there are from set $A = \{0,1\}$ to set $B = \{a,b\}$. My understanding of the term <em>function</em> is that it is any possible mapping between elements of set $A$ to elements of set $B$. Thus, a possible function $F: A \times B$ is the function that maps each element of $A$ to no element of $B$, i.e. $f_0(0) = \emptyset, f_0(1) = \emptyset$. Another possible function is $f_1(0) = a, f_1(1) = \{a, b\}$. </p>
<p>I notice a pattern here: for each element of the set $A$, there are $|\mathcal P (B)|$ unique combinations of elements that it can map to. In this case, $\mathcal P(B) = \{\{a,b\}, \{a\}, \{b\}, \emptyset\}$. To count these functions, then, we can use the product rule, since the choice of what each element of $A$ maps to does not affect what another element of $A$ can map to (since we consider all functions). </p>
<p>There are $4$ choices for $0$ and $4$ choices for $1$. Therefore there are $16$ unique functions $F: A \times B$. For a sanity check, I've listed out all <strong>16</strong> possible functions.</p>
<p>$f_0(0) = \emptyset, f_0(1) = \emptyset$</p>
<p>$f_1(0) = \emptyset, f_1(1) = \{a\}$</p>
<p>$f_2(0) = \emptyset, f_2(1) = \{b\}$</p>
<p>$f_3(0) = \emptyset, f_3(1) = \{a, b\}$</p>
<p>$f_4(0) = \{a\}, f_4(1) = \emptyset$</p>
<p>$f_5(0) = \{a\}, f_5(1) = \{a\}$</p>
<p>$f_6(0) = \{a\}, f_6(1) = \{b\}$</p>
<p>$f_7(0) = \{a\}, f_7(1) = \{a, b\}$</p>
<p>$f_8(0) = \{b\}, f_8(1) = \emptyset$</p>
<p>$f_9(0) = \{b\}, f_9(1) = \{a\}$</p>
<p>$f_{10}(0) = \{b\}, f_{10}(1) = \{b\}$</p>
<p>$f_{11}(0) = \{b\}, f_{11}(1) = \{a, b\}$</p>
<p>$f_{12}(0) = \{a,b\}, f_{12}(1) = \emptyset$</p>
<p>$f_{13}(0) = \{a,b\}, f_{13}(1) = \{a\}$</p>
<p>$f_{14}(0) = \{a,b\}, f_{14}(1) = \{b\}$</p>
<p>$f_{15}(0) = \{a,b\}, f_{15}(1) = \{a, b\}$</p>
<p>The generalization: The number of functions $F: A \times B$ is $|\mathcal P(B)|^{|A|}$.</p>
<hr>
<p>Now I know my reasoning is completely wrong, but why? Am I double counting? Do I misunderstand the definition of a function? </p>
| Daniel McLaury | 3,296 | <p>A function $f : A \to B$ sends each element of $A$ to exactly one element of $B$.</p>
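<p>This is exactly where the count $|B|^{|A|}$ comes from: for each element of $A$ one chooses a single element of $B$, not a subset of $B$. A small enumeration (my addition) for $A = \{0,1\}$, $B = \{a,b\}$:</p>

```python
from itertools import product

A = [0, 1]
B = ['a', 'b']

# A function assigns exactly one element of B to each element of A,
# so a function is one choice from B for every element of A.
functions = [dict(zip(A, choice)) for choice in product(B, repeat=len(A))]

assert len(functions) == len(B) ** len(A)   # |B|^|A| = 2^2 = 4, not 16
print(len(functions), "functions from A to B")
```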
|
3,096,572 | <p>I am trying to find whether the following is stable absolutely using the improved Euler and the Adams-Bashforth 2 scheme,
<span class="math-container">$u'=\begin{bmatrix} -20&0&0\\ 20&-1&0\\0&1&0\end{bmatrix}u=Au$</span>, where the timestep is <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>From my notes we know that Euler's method for <span class="math-container">$u'=\lambda u$</span> is absolutely stable iff <span class="math-container">$|1+h\lambda|<1$</span>. But I am confused as to what the "<span class="math-container">$\lambda$</span>" is. Is it simply the eigenvalue(s) of the problem, which then confuses me if we are using other methods, or is it just a scalar? If it is the eigenvalues, then how do you derive the absolute stability for the system using the Adams-Bashforth 2 scheme?</p>
<p>So far I found, for Euler, that <span class="math-container">$\lambda=-20,-1,-0.1$</span> which is not stable for our timestep. However for deriving the stability for the Adams-Bashforth I got as far as,</p>
<p><span class="math-container">$$u^{n+1}=u^nA(1-\frac{A}{2}+\frac{A^2}{4})$$</span>
but struggle with how to proceed. Any advice would be appreciated.</p>
| Lutz Lehmann | 115,115 | <p>In the improved Euler method, your ODE gives the step
<span class="math-container">\begin{align}
k_1&=Au_n\\
k_2&=A(u_n+hk_1)=(A+hA^2)u_n\\
u_{n+1}&=u_n+\frac h2 (k_1+k_2)=(I+hA+\frac{h^2}2A^2)u_n
\end{align}</span>
What you want for stability is that the matrix factor is contracting if all eigenvalues of <span class="math-container">$A$</span> have negative part. That means you need to find <span class="math-container">$h$</span> so that <span class="math-container">$|1+(1+hλ)^2|<2$</span> for all relevant eigenvalues.</p>
<hr>
<p>In the simplest two-step Adams-Bashforth method <span class="math-container">$u_{n+1}=u_n+\frac h2(3Au_n-Au_{n-1})$</span> you actually need to solve the linear recursion problem. Considering the eigenspaces this again reduces to an equation for the eigenvalues, the characteristic equation of the recursion for an eigenvalue <span class="math-container">$λ$</span> is
<span class="math-container">$$
q^2=q+\frac h2(3λq-λ)
$$</span>
which has roots around <span class="math-container">$q=1+hλ$</span> and <span class="math-container">$q=\dfrac{hλ}{2+3hλ}$</span>, compute the exact values with the solution formula, which both need to have absolute value smaller <span class="math-container">$1$</span>.</p>
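<p>Since the matrix in the question is lower triangular, its eigenvalues are its diagonal entries <span class="math-container">$-20$</span>, <span class="math-container">$-1$</span> and <span class="math-container">$0$</span>. A short sketch (my addition) evaluates the improved-Euler amplification factor <span class="math-container">$R(z)=1+z+\frac{z^2}{2}$</span> at <span class="math-container">$z=h\lambda$</span> for the asker's step size <span class="math-container">$h=\frac12$</span>:</p>

```python
# Improved (Heun) Euler amplification factor R(z) = 1 + z + z^2/2, z = h*lambda.
# Absolute stability needs |R(z)| < 1, equivalently |1 + (1+z)^2| < 2.
h = 0.5

def R(z):
    return 1 + z + z * z / 2

# eigenvalues of the lower-triangular matrix: -20, -1, 0 (0 is only marginal)
for lam, stable in [(-20.0, False), (-1.0, True), (0.0, False)]:
    z = h * lam
    assert (abs(R(z)) < 1) == stable
print("h = 1/2: lambda = -20 gives |R| = 41 (unstable), lambda = -1 gives 0.625")
```

With <span class="math-container">$h=\frac12$</span> the eigenvalue <span class="math-container">$-20$</span> gives <span class="math-container">$|R(-10)|=41$</span>, so the scheme is not absolutely stable for this system at that step size.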
|
4,507,155 | <p>In <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a>, I got the answer that <span class="math-container">$\gcd \left(\frac{p^2+1}{2}, \frac{p^5-1}{2} \right)=1$</span>, where <span class="math-container">$p$</span> is prime number.</p>
<p>I am looking for more general case, that is for <span class="math-container">$p$</span> prime,</p>
<blockquote>
<p>When is <span class="math-container">$\gcd \left(\frac{p^r+1}{2}, \frac{p^t-1}{2} \right)=1$</span> ?</p>
</blockquote>
<p>where <span class="math-container">$t=r+s$</span> such that <span class="math-container">$\gcd(r,s)=1$</span>.</p>
<p>I am excluding the cases <span class="math-container">$r=1=s$</span>.</p>
<hr />
<p>In the <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a>, it was <span class="math-container">$r=2,~t=5$</span>. So <span class="math-container">$t=5=2+3=r+s$</span> with <span class="math-container">$s=3$</span> so that <span class="math-container">$\gcd(r,s)=1$</span>.</p>
<p>In this current question:</p>
<p>Since <span class="math-container">$\gcd(r,s)=1$</span>, we also have <span class="math-container">$\gcd(r,t)=1$</span>.
I have the following intuition:</p>
<p><strong>Case-I</strong>:</p>
<p>Assume <span class="math-container">$r$</span>=even and <span class="math-container">$s$</span>=odd so that <span class="math-container">$\gcd(r,s)=1$</span> as well as <span class="math-container">$\gcd(r,t)=1$</span>. We can also assume <span class="math-container">$r$</span>=odd and <span class="math-container">$s$</span>=even.</p>
<p>I think the same strategies of <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a> can be applied to show that <span class="math-container">$$\gcd \left(\frac{p^r+1}{2}, \frac{p^t-1}{2} \right)=1.$$</span></p>
<p><strong>Case II:</strong></p>
<p>The problem arises when both <span class="math-container">$r$</span> and <span class="math-container">$s$</span> are odd numbers so that <span class="math-container">$t=r+s$</span> is even.</p>
<p>If I take <span class="math-container">$r=3, ~s=5$</span>, then <span class="math-container">$t=8$</span>.</p>
<p>For prime <span class="math-container">$p=3$</span>, <span class="math-container">$\frac{p^r+1}{2}=\frac{3^3+1}{2}=14$</span> and <span class="math-container">$\frac{p^t-1}{2}=\frac{3^8-1}{2}=3280$</span> so that the gcd is <span class="math-container">$2$</span> at least.</p>
<p>For other primes also we can find gcd is not <span class="math-container">$1$</span>.</p>
<hr />
<p>So I think it is possible only for Case I, where among <span class="math-container">$r$</span> and <span class="math-container">$s$</span>, one is odd and another is even so that <span class="math-container">$t$</span> is odd.</p>
<p>In other word, <span class="math-container">$t$</span> can not be even number.</p>
<p>But I need to be ensured with a general method.</p>
<p>So the question reduces to</p>
<blockquote>
<p>How to prove <span class="math-container">$\gcd \left(\frac{p^m+1}{2}, \frac{p^n-1}{2} \right)=1$</span> ?</p>
</blockquote>
<p>provided <span class="math-container">$\gcd(m,n)=1$</span> and <span class="math-container">$n$</span> is odd number and <span class="math-container">$p$</span> is prime number.</p>
<p>Thanks</p>
| Joseph Camacho | 731,433 | <p>Suppose that
<span class="math-container">$$\gcd\left(\frac{p^m + 1}{2}, \frac{p^n - 1}{2}\right) \neq 1.$$</span>
There are two cases:</p>
<ol>
<li>Both <span class="math-container">$p^m + 1$</span> and <span class="math-container">$p^n - 1$</span> are multiples of <span class="math-container">$4$</span>.</li>
<li>Both <span class="math-container">$p^m + 1$</span> and <span class="math-container">$p^n - 1$</span> are multiples of the same odd prime <span class="math-container">$q$</span>.</li>
</ol>
<p>In case 1, <span class="math-container">$p^m + 1 \equiv 0 \pmod 4$</span> implies that <span class="math-container">$p^m \equiv 3 \pmod 4$</span>, so that <span class="math-container">$p \equiv 3 \pmod 4$</span> and <span class="math-container">$m$</span> is odd. But then <span class="math-container">$p^n \equiv 3 \pmod 4$</span> as well, since <span class="math-container">$n$</span> is odd. This contradicts the fact that <span class="math-container">$p^n - 1$</span> is a multiple of <span class="math-container">$4$</span>.</p>
<p>In case 2, let <span class="math-container">$r$</span> be the order of <span class="math-container">$p$</span> modulo <span class="math-container">$q$</span>. Since <span class="math-container">$p^m \equiv -1 \pmod q$</span>, <span class="math-container">$r$</span> must be even (as <span class="math-container">$r \mid 2m$</span>, but <span class="math-container">$r \not \mid m$</span>). But this makes it impossible for <span class="math-container">$p^n \equiv 1 \pmod q$</span>, since <span class="math-container">$n$</span> is odd and thus cannot be a multiple of <span class="math-container">$r$</span>.</p>
<p>Since neither case is possible, it must be that <span class="math-container">$\gcd\left(\frac{p^m + 1}{2}, \frac{p^n - 1}{2}\right) = 1$</span>.</p>
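<p>As a quick numerical sanity check of the statement (not part of the proof), one can test the gcd directly for small odd primes and exponents; the helper below is purely illustrative:</p>

```python
from math import gcd

def half_gcd(p, m, n):
    # gcd((p^m + 1)/2, (p^n - 1)/2) for an odd prime p
    return gcd((p**m + 1) // 2, (p**n - 1) // 2)

# n odd and gcd(m, n) = 1: the claim is that this gcd is always 1
ok = all(half_gcd(p, m, n) == 1
         for p in (3, 5, 7, 11, 13)
         for m in range(1, 8)
         for n in range(1, 10, 2)      # odd n only
         if gcd(m, n) == 1)
print(ok)   # expect: True
```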
|
3,112,043 | <p>The first part of the problem is:</p>
<p>Prove that for all integers <span class="math-container">$n \ge 1$</span> and real numbers <span class="math-container">$t>1$</span>,
<span class="math-container">$$ (n+1)t^n(t-1)>t^{n+1}-1>(n+1)(t-1)$$</span></p>
<p>I have done the first part by induction on <span class="math-container">$n$</span> for any real <span class="math-container">$t>1$</span>.</p>
<p>However, I don't know how to do the second part, which is:</p>
<p>Use this to prove that if <span class="math-container">$m$</span> and <span class="math-container">$n$</span> are positive integers,</p>
<p><span class="math-container">$$ \frac{m^{n+1}}{n+1}<1^n+2^n+\dots+m^n<\Big(1+\frac{1}{m}\Big)^{n+1}\frac{m^{n+1}}{n+1} $$</span></p>
<p>I factorized <span class="math-container">$t^{n+1}-1$</span> into <span class="math-container">$(t-1)(1+t+t^2+\dots+t^n)$</span> and cancelled <span class="math-container">$t-1$</span> to obtain <span class="math-container">$(n+1)t^n>1+t+t^2+\dots+t^n>n+1$</span>. I have no idea what to do next. This is the only way I could think of in order to get a "sum", but I couldn't see any relation between the two sums (if there is any...).</p>
<p>The last part of the question is:</p>
<p>Find
<span class="math-container">$$ \lim_{m\rightarrow \infty} \frac{1^n+2^n+\dots+m^n}{m^{n+1}}$$</span></p>
<p>I think I know how to do the last part. It should be divide the inequality by <span class="math-container">$m^{n+1}$</span>, then apply Squeeze Theorem. I believe the answer is <span class="math-container">$\frac{1}{n+1}$</span>.</p>
| didgogns | 392,996 | <p>We only need to prove the second part of the question.</p>
<p>For the left inequality, let's apply induction on <span class="math-container">$m$</span>.</p>
<p>Base case: <span class="math-container">$\frac{1}{n+1}<1$</span>, trivial.</p>
<p>Induction case: Note that <span class="math-container">$1^n+\cdots+m^n>\frac{(m-1)^{n+1}}{n+1}+m^n$</span> by induction hypothesis. By first part, <span class="math-container">$(1+\frac{1}{m-1})^{n+1}-1<(n+1)(1+\frac{1}{m-1})^{n}\frac{1}{m-1}$</span>, which is equivalent to <span class="math-container">$m^{n+1}-(m-1)^{n+1}<(n+1)m^{n}$</span> or <span class="math-container">$(m-1)^{n+1}>m^{n+1}-(n+1)m^{n}$</span>. Applying it, we get <span class="math-container">$$\frac{(m-1)^{n+1}}{n+1}+m^n>m^n+\frac{m^{n+1}-(n+1)m^{n}}{n+1}=\frac{m^{n+1}}{n+1}$$</span>
therefore induction case is also proved.</p>
<p>For the right inequality, it's simpler.</p>
<p>From the first part, <span class="math-container">$\Big(1+\frac{1}{m}\Big)^{n+1}\frac{m^{n+1}}{n+1}>\frac{m^{n+1}}{n+1}(\frac{n+1}{m}+1)=m^n+\frac{m^{n+1}}{n+1}$</span>. Now apply induction on <span class="math-container">$m$</span>.</p>
<p>For the base case, it's <span class="math-container">$1<\frac{2^{n+1}}{n+1}$</span> which is not hard.</p>
<p>For the induction case, from the induction hypothesis and above inequality,<span class="math-container">$$1^n+\cdots+m^n<\Big(1+\frac{1}{m-1}\Big)^{n+1}\frac{(m-1)^{n+1}}{n+1}+m^n=\frac{m^{n+1}}{n+1}+m^n<\Big(1+\frac{1}{m}\Big)^{n+1}\frac{m^{n+1}}{n+1}$$</span></p>
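<p>The chain of inequalities is easy to spot-check numerically for small <span class="math-container">$m$</span> and <span class="math-container">$n$</span>; a throwaway script (illustrative only):</p>

```python
def check_bounds(m, n):
    # verify m^(n+1)/(n+1) < 1^n + ... + m^n < (1 + 1/m)^(n+1) * m^(n+1)/(n+1)
    s = sum(k**n for k in range(1, m + 1))
    lower = m**(n + 1) / (n + 1)
    upper = (1 + 1 / m)**(n + 1) * m**(n + 1) / (n + 1)
    return lower < s < upper

ok = all(check_bounds(m, n) for m in range(1, 30) for n in range(1, 8))
print(ok)   # expect: True
```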
|
637,199 | <p>If $K^T=K$, $K^3=K$, $K1=0$ and $K\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]=\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]$,</p>
<p>how can I find the trace of $K$ and the determinant of $K$?</p>
<p>I think for determinant of $K$, since $K^3-K=(K^2-I)K=0$, then $K^2=I$ since $K$ is nonzero. Then this implies $|K|^2=1$ implies $|K|=\pm 1$, where the two lines | | denotes the determinant.</p>
<p>But I'm not sure if $tr(K^2)=tr(I)=3$?</p>
| copper.hat | 27,978 | <p>There are three matrices that satisfy the given conditions.</p>
<p>$K^3=K$ shows that all eigenvalues are 0 or satisfy $\lambda^2 = 1$. Since $K$ is symmetric (and presumably real?) we have $\lambda \in \{-1,0,1\}$.</p>
<p>Since $K e = 0$, we see that $K$ is singular, hence $\det K = 0$. We are given that one eigenvalue is 1, so we only need to determine the other eigenvalue to finish.</p>
<p>Let $u_1 = {1 \over \sqrt{3}} (1,1,1)^T$, $u_2 = {1\over \sqrt{14}} (1, 2 , -3)^T$, and $u_3 = u_1 \times u_2 = {1 \over \sqrt{42}} (-5,4,1)^T$. It is easy to check that these are orthonormal, $Ku_1 = 0$, $Ku_2 = u_2$, and, since $K$ is symmetric, we have $K u _3 = \lambda u_3$ for some $\lambda \in \{-1,0,1\}$.</p>
<p>Hence we can write $K = u_2 u_2^T + \lambda u_3 u_3^T$. It is easy to check that for any $\lambda \in \{-1,0,1\}$ that $K$ is symmetric, $K^3 = u_2 u_2^T + \lambda^3 u_3 u_3^T = K$ and that the other two conditions hold as well. Hence we do not have enough information to completely determine $K$.</p>
<p>The possible values for the trace are $\operatorname{tr} K = 0 + 1 + \lambda$, that is $\{0, 1, 2\}$.</p>
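<p>To make the three candidate matrices concrete, here is a small numerical sketch (plain Python, no libraries; the vectors <span class="math-container">$u_1,u_2,u_3$</span> are the ones from the answer) verifying symmetry, the eigen-relations, and the three possible traces:</p>

```python
from math import sqrt, isclose

u1 = [1 / sqrt(3)] * 3
u2 = [1 / sqrt(14), 2 / sqrt(14), -3 / sqrt(14)]
u3 = [-5 / sqrt(42), 4 / sqrt(42), 1 / sqrt(42)]

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

traces = []
for lam in (-1, 0, 1):
    # K = u2 u2^T + lam * u3 u3^T
    K = [[outer(u2, u2)[i][j] + lam * outer(u3, u3)[i][j] for j in range(3)]
         for i in range(3)]
    # K is symmetric
    assert all(isclose(K[i][j], K[j][i]) for i in range(3) for j in range(3))
    # K annihilates the all-ones direction (K1 = 0)
    assert all(abs(v) < 1e-12 for v in matvec(K, u1))
    # K u2 = u2
    assert all(isclose(a, b) for a, b in zip(matvec(K, u2), u2))
    traces.append(sum(K[i][i] for i in range(3)))

print(sorted(round(t) for t in traces))   # expect: [0, 1, 2]
```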
|
431,236 | <p>I have a cylinder of radius 4 and height 10 that is at a 30 degree angle. I need to find the volume.</p>
<p>I have no clue how to do this, I have spent quite a while on it and went through many ideas but I think my best idea was this.</p>
<p>I know that the radius is 4 so if I cut the cylinder in half from corner to corner I will have two side lengths giving me a third side length. So this gives</p>
<p>$$\sqrt{116} = height$$ </p>
<p>Or the length of the tall sides.</p>
<p>Now I just plug this into my formula</p>
<p>$$\pi r^2 h$$</p>
<p>$$\pi \cdot 16\cdot\sqrt{116}$$</p>
<p>This is about $34\pi$ which is way off. What did I do wrong?</p>
| Lucozade | 83,956 | <p>You should use the standard formula for volume of a cylinder, except that the height is now the projected height of the axis length onto the perpendicular to the base plane. Thus, if the length of the axis of your cylinder is 10, then the projected height is 10*sin(30deg) = 5 (to see this, consider the projection of the cylinder in the plane: you get a right-angled triangle with hypotenuse 10 and angles 30, 90, 60, in which you need the side opposite the 30deg angle). That should give V = pi*r^2 * 5 = 251.328.</p>
<p>EDIT:
If you find it difficult to understand why it is this way, recall how one can calculate the area of a parallelogram from that of a rectangle with equal base length and height: by cutting off one triangle from the rectangle from the left side and pasting it back on the right side, you obtain a parallelogram of equal area. The slanted and straight cylinders are in fact the same: if you project them in the back plane, you see a parallelogram and a rectangle, respectively. Oops, when writing this I realized I may have made a mistake: it should not have been multiplied by sin(30deg) after all, because of what I just said. The issue is rather how the radius is measured: is it for the cross-section perpendicular to the axis of the cylinder, or is it the radius of the base, which is at an angle to this axis? In the former case, you will need to find the projected radius first.</p>
|
431,236 | <p>I have a cylinder of radius 4 and height 10 that is at a 30 degree angle. I need to find the volume.</p>
<p>I have no clue how to do this, I have spent quite a while on it and went through many ideas but I think my best idea was this.</p>
<p>I know that the radius is 4 so if I cut the cylinder in half from corner to corner I will have two side lengths giving me a third side length. So this gives</p>
<p>$$\sqrt{116} = height$$ </p>
<p>Or the length of the tall sides.</p>
<p>Now I just plug this into my formula</p>
<p>$$\pi r^2 h$$</p>
<p>$$\pi *16*\sqrt116$$</p>
<p>This is about $34\pi$ which is way off. What did I do wrong?</p>
| DJohnM | 58,220 | <p>To expand on the answer above, consider an ordinary cylinder as a stack of thin discs: coins, poker chips, whatever. The stack has a height, given by the total thickness of all the coins, and a certain volume, given by the total volume of all the coins.</p>
<p>Now, sort of scrunch the stack over at an angle, so that each coin is offset by a little from the coin below it, each in the same direction. The new cylinder will have the same height (the number and thickness of the coins hasn't changed), and the same volume (the number of coins and the individual volumes haven't changed). And the same will be true no matter how you change the angle.</p>
<p>The surface area of the cylinder is not, unfortunately, as simple...</p>
|
4,019,956 | <p><strong>Preface</strong></p>
<p>We will use the following facts</p>
<p>i) The sequence <span class="math-container">$ \left\lbrace a_n \right\rbrace $</span> is convergent to <span class="math-container">$a$</span> if for each <span class="math-container">$ \varepsilon >0$</span> there exists <span class="math-container">$ N>0 $</span> such that <span class="math-container">$ \vert a_n -a \vert < \varepsilon $</span> whenever <span class="math-container">$ n>N$</span>.</p>
<p>(ii) <span class="math-container">$ |A+B| \leqslant |A|+|B| $</span>; and <span class="math-container">$ \vert -A+B \vert = \vert A-B \vert $</span></p>
<p>(iii) <span class="math-container">$ (-1)^{2n+1} =-1$</span> and <span class="math-container">$ (-1)^{2n}=1 $</span></p>
<p>(iv) The statement <span class="math-container">$ 8<8 $</span> is a contradiction. Contradiction=false in every way.</p>
<p><strong>Proof</strong></p>
<p>Suppose the sequence is convergent to <span class="math-container">$ \omega $</span>. Then for each <span class="math-container">$ \varepsilon >0 $</span>, there exists <span class="math-container">$ N>0 $</span>, such that <span class="math-container">$ \vert 4+3(-1)^n - \omega \vert < \varepsilon $</span>, whenever <span class="math-container">$ n>N $</span>. When <span class="math-container">$ n $</span> is even, we have <span class="math-container">$ \vert 7-\omega \vert < \varepsilon $</span>, whenever <span class="math-container">$ n>N $</span>. When <span class="math-container">$ n $</span> is odd, we have <span class="math-container">$ \vert 1-\omega \vert < \varepsilon $</span>, whenever <span class="math-container">$ n> N $</span>. We choose <span class="math-container">$ \varepsilon=1 $</span> and consider <span class="math-container">\begin{align*}
8 &=\vert 7+1 \vert=\vert 7 -\omega + \omega +1\vert\\
&=\vert (7-\omega)+4 -3+\omega \vert \\
& \leqslant \vert 7 -\omega \vert + \vert 4 \vert+ \vert -3+\omega \vert \\
&= \vert 7 -\omega \vert + \vert 4 \vert+ \vert -2 -1+\omega \vert \\
& \leqslant \vert 7 -\omega \vert + \vert 4 \vert+\vert -2 \vert + \vert -1+\omega \vert \\
&= \vert 7 -\omega \vert + 6 + \vert 1-\omega \vert \\
& < 2 \varepsilon +6 \\
&= 8
\end{align*}</span>
This is a contradiction.</p>
<p>Question: Do you like this proof? Is it correct?</p>
| trancelocation | 467,003 | <p>Just complete the square:</p>
<p><span class="math-container">\begin{eqnarray*}a^2+ab+b^2-a-2b
& = & \left(a+\frac b2\right)^2 + 3\left(\frac b2\right)^2 - a - 2b \\
& = & \left(a+\frac b2\right)^2 + 3\left(\frac b2\right)^2 - \left(a+\frac b2\right) - 3\frac b2 \\
& \stackrel{u=a+\frac b2, v= \frac b2}{=} & \left(u-\frac 12\right)^2 -\frac 14+3\left(v-\frac 12\right)^2-\frac 34 \\
& \geq & -1
\end{eqnarray*}</span></p>
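<p>A crude grid search (illustrative only) confirms that the minimum value <span class="math-container">$-1$</span> is attained at <span class="math-container">$u=v=\frac12$</span>, i.e. at <span class="math-container">$(a,b)=(0,1)$</span>:</p>

```python
def f(a, b):
    return a * a + a * b + b * b - a - 2 * b

# sample a 601 x 601 grid around the predicted minimiser (a, b) = (0, 1)
m = min(f(i / 100, j / 100) for i in range(-300, 301) for j in range(-300, 301))
print(m, f(0, 1))   # both should be -1
```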
|
1,998,244 | <p>Given the equation of a damped pendulum:</p>
<p>$$\frac{d^2\theta}{dt^2}+\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2+\sin\theta=0$$</p>
<p>with the pendulum starting with $0$ velocity, apparently we can derive:</p>
<p>$$\frac{dt}{d\theta}=\frac{1}{\sqrt{\sqrt2\left[\cos\left(\frac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\left(\frac{\pi}{4}-\phi\right)\right]}}$$</p>
<p>where $\phi$ is the initial angle from the vertical. How can we derive that? Obviously $\frac{dt}{d\theta}$ is the reciprical of $\frac{d\theta}{dt}$, but I don't see how to deal with the second derivative.</p>
<p>I've found a similar derivation at <a href="https://en.wikipedia.org/wiki/Pendulum_(mathematics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pendulum_(mathematics)</a>, where the formula</p>
<p>$${\frac {d\theta }{dt}}={\sqrt {{\frac {2g}{\ell }}(\cos \theta -\cos \theta _{0})}}$$</p>
<p>is derived in the "Energy derivation of Eq. 1" section. However, that uses a conservation of energy argument which is not applicable for a damped pendulum.</p>
<p>So how can I derive that equation?</p>
| Dion Leijnse | 385,638 | <p>It says that there are 18 with at least one sister, and 5 without brothers/sisters. So there have to be at least 23. Now we have to look if something implies that this has to be higher. Because these eighteen said that they had at least one sister, those 17 with at least one brother can all be combined with this, so we get a minimum of 23. </p>
|
3,745,273 | <p>I am looking for a way to solve :</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{x\sin(3x)}{x^4+1}\,dx $$</span></p>
<p>without making use of complex integration.</p>
<p>What I tried was making use of integration by parts, but that didn't reach any conclusive result. (i.e. I integrated <span class="math-container">$\sin(3x)$</span> and differentiated the rest)</p>
<p>I can't see a clear starting point to solve this question. Any help appreciated.</p>
<p>This problem is posted by Vilakshan Gupta on <a href="https://brilliant.org/problems/integrate-it-8/?ref_id=1591957" rel="nofollow noreferrer">Brilliant</a>.</p>
| Nanayajitzuki | 611,558 | <p>I will only write the key steps for the central issue. For <span class="math-container">$a>0$</span> (this makes the problem easy to handle without the absolute value), let
<span class="math-container">$$
f(a) = \int_{0}^{\infty} \frac{\sin(ax)}{x(x^4+1)} \,\mathrm{d}x
$$</span>
then you have the ODE
<span class="math-container">$$
f^{(4)}(a) + f(a) = \int_{0}^{\infty} \frac{\sin(ax)}{x} \,\mathrm{d}x =\frac{\pi}{2}
$$</span>
with boundary values <span class="math-container">$f(0)=f^{(2)}(0)=0$</span>, <span class="math-container">$f^{(1)}(0)=\pi/(2\sqrt2)$</span>, <span class="math-container">$f^{(3)}(0)=-\pi/(2\sqrt2)$</span>
solved as
<span class="math-container">$$
f(a) = \frac{\pi}{2}\left(1-e^{-a/\sqrt2}\cos\left(\tfrac{a}{\sqrt2}\right)\right)
$$</span>
then you just need to find <span class="math-container">$-f^{(2)}(3)$</span> to obtain the final answer.</p>
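<p>Differentiating the closed form twice gives <span class="math-container">$-f''(a)=\frac{\pi}{2}e^{-a/\sqrt2}\sin\left(\frac{a}{\sqrt2}\right)$</span> (my computation, worth double-checking), and a rough trapezoidal quadrature of <span class="math-container">$\int_0^\infty \frac{x\sin(3x)}{x^4+1}\,dx$</span> agrees with it; note the original integral runs over the whole real line, so it is twice this one-sided value:</p>

```python
from math import sin, exp, pi, sqrt

def integrand(x):
    return x * sin(3 * x) / (x**4 + 1)

# trapezoidal rule on [0, 60]; the neglected tail beyond 60 is below ~1e-4
h, N = 1e-3, 60000
num = h * (sum(integrand(i * h) for i in range(1, N)) + integrand(60.0) / 2)
# integrand(0) = 0, so the left endpoint contributes nothing

# closed form for -f''(3): (pi/2) * exp(-3/sqrt(2)) * sin(3/sqrt(2))
closed = (pi / 2) * exp(-3 / sqrt(2)) * sin(3 / sqrt(2))
print(num, closed)
```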
|
2,421,145 | <p>I am practicing problems around NFA and DFA.</p>
<p>I have seen many questions on how to convert NFA to DFA and DFA to Regular expression etc.</p>
<p>But I have seen very different question and I am stuck on how to proceed with the following question? </p>
<p>Given DFA. Convert this DFA to NFA with 5 states.
<a href="https://i.stack.imgur.com/GyDyp.png" rel="nofollow noreferrer">DFA IMAGE ATTACHED</a></p>
<p>My plan is to find the language and a regular expression first, and then convert the regular expression to an NFA with 5 states. But I had a hard time just finding the language it accepts.</p>
<p>How should I approach these problems? And how does one find the language or a regular expression for a large DFA? Are there any algorithms or rules?</p>
<p>Edit: </p>
<p>Regular exp for the language and NFA with 5 states are added in the diagram.</p>
<p><a href="https://i.stack.imgur.com/t2z3x.jpg" rel="nofollow noreferrer">reg exp and NFA</a></p>
| SSequence | 469,108 | <p>I believe at least the intention of question seems to understand the language of the DFA and then use that to build the NFA. </p>
<p>I think I understand the language of DFA. Hopefully you can use that description to find a corresponding NFA. It seems to me that it shouldn't be too difficult, but if you have trouble with corresponding NFA you can mention it in comments.</p>
<p>First of all we can observe that the DFA rejects all strings with length less than or equal to 3. Now suppose we have a string of length 4 or more as input. Denote the input string by x. First we note that the alphabet set is $\Sigma=\{0,1\}$. Then the input string can always be written as:
$$x=s.abcd$$
Here $s$ is some arbitrary string (that can also be empty) and $a,b,c,d \in \Sigma$. Now the given DFA accepts a string of length four or more if and only if the alphabet corresponding to $a$ is $0$.</p>
<p>In short description, the DFA checks the fourth-last alphabet of any input string $x$. If the fourth-last alphabet is equal to $0$ it accepts it and otherwise rejects it. </p>
<p>Note that we can understand it better why it is so, if we mark the states a little different. For example, look at the state marked [001] for example. It can really be marked as [<strong>1</strong>001]. The state [01] can be marked as [<strong>11</strong>01]. Similarly state [0] can be marked as [<strong>111</strong>0]. The start state of DFA can be marked as [<strong>1111</strong>]. State [011] can be marked as [<strong>1</strong>011] and so on...</p>
<p>These states essentially store the information about the last four alphabet of the input string.</p>
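<p>For the construction half of the problem: assuming the description above (the fourth-from-last symbol is <span class="math-container">$0$</span>), the classic 5-state NFA simply guesses which <span class="math-container">$0$</span> is fourth from the end and then reads exactly three more symbols. A small simulation (illustrative; the transition table below is my own encoding, not taken from the question) checks it against the direct definition on all short strings:</p>

```python
from itertools import product

# States 0..4; state 0 loops on both symbols, then on a '0' the NFA guesses
# "this is the fourth-from-last symbol" and must read exactly three more
# symbols to reach the accepting state 4 (state 4 has no outgoing moves).
DELTA = {
    (0, '0'): {0, 1}, (0, '1'): {0},
    (1, '0'): {2},    (1, '1'): {2},
    (2, '0'): {3},    (2, '1'): {3},
    (3, '0'): {4},    (3, '1'): {4},
}

def accepts(w):
    current = {0}
    for ch in w:
        current = set().union(*(DELTA.get((q, ch), set()) for q in current))
    return 4 in current

# brute-force comparison with the direct description of the language
ok = all(accepts(''.join(w)) == (len(w) >= 4 and w[-4] == '0')
         for n in range(0, 9)
         for w in product('01', repeat=n))
print(ok)   # expect: True
```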
|
339,142 | <p>I'm trying to understand the difference between the sense, orientation, and direction of a vector. According to
<a href="http://www.eng.auburn.edu/users/marghdb/MECH2110/c1_2110.pdf">this</a>,
sense is specified by two points on a line parallel to a vector. Orientation is specified by the relationship between the vector and given reference lines (which I'm interpreting to be some basis).</p>
<p>However, these two definitions seem to be synonymous with direction. How do these 3 terms differ?</p>
| Mehdi | 67,913 | <p>I think (though I am not sure) that the direction of a vector is an intrinsic property: one can define the direction of a vector without any reference to the outside world. Orientation, however, is an extrinsic property; it depends on the relation between the vector and the outside world (for example, how it is placed with respect to the other vectors of a basis). Here I am assuming the usual definition of orientation for a vector in <span class="math-container">$R^n$</span>; there are certainly more general definitions of orientation.</p>
|
1,955,591 | <p>I have to prove that ' (p ⊃ q) ∨ ( q ⊃ p) ' is a tautology.I have to start by giving assumptions like a1 ⇒ p ⊃ q and then proceed by eliminating my assumptions and at the end i should have something like ⇒(p ⊃ q) ∨ ( q ⊃ p) but could not figure out how to start.</p>
| marty cohen | 13,079 | <p>(p ⊃ q) ∨ ( q ⊃ p) </p>
<p>I assume that "⊃" means "implies". Actually, since your statement is symmetric in p and q, it doesn't matter whether "a ⊃ b" means "a implies b" or "b implies a".</p>
<p>Since "a ⊃ b" is equivalent to "~a ∨ b", your statement is equivalent to "(~p ∨ q) ∨ (~q ∨ p)".</p>
<p>Since "∨" is commutative and associative, this is the same as "(p ∨ ~p) ∨ (q ∨ ~q)", which is always true.</p>
<p>Is this what you wanted?</p>
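<p>For what it's worth, a two-line truth-table check of the same statement:</p>

```python
# (p -> q) v (q -> p), with "a -> b" written as (not a) or b
taut = all(((not p) or q) or ((not q) or p)
           for p in (False, True) for q in (False, True))
print(taut)   # expect: True
```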
|
4,137,362 | <p>If we refer to the <code>Minimum</code> of a set of numbers, we mean the lowest number. <code>Min(12, 7, 18) = 7</code> and <code>Min(5, -8) = -8</code>.</p>
<p>Is there a technical term for the 'number closest to zero'? e.g. where <code>fn(5, -8) = 5</code> and <code>fn(-5, 8) = -5</code></p>
<p>Similarly, what would be the opposite be (e.g 'number farthest from zero')?</p>
<p>I am aware that there are two solutions in some situations, e.g. <code>fn(5, -5)</code> and that any application of this would have to bear that in mind. However, this question is just about whether or not there is some existing terminology for this function.</p>
| SolubleFish | 918,393 | <p>If <span class="math-container">$c_2$</span> or <span class="math-container">$c_3$</span> is non zero, then the exponentials in <span class="math-container">$x(t)$</span> will mean that <span class="math-container">$\|x(t)\| \to \infty$</span>.</p>
<p>The solution with <span class="math-container">$c_1= 1$</span>, <span class="math-container">$c_2 = c_3 = 0$</span> is constant and of norm <span class="math-container">$1$</span>.</p>
|
4,137,362 | <p>If we refer to the <code>Minimum</code> of a set of numbers, we mean the lowest number. <code>Min(12, 7, 18) = 7</code> and <code>Min(5, -8) = -8</code>.</p>
<p>Is there a technical term for the 'number closest to zero'? e.g. where <code>fn(5, -8) = 5</code> and <code>fn(-5, 8) = -5</code></p>
<p>Similarly, what would be the opposite be (e.g 'number farthest from zero')?</p>
<p>I am aware that there are two solutions in some situations, e.g. <code>fn(5, -5)</code> and that any application of this would have to bear that in mind. However, this question is just about whether or not there is some existing terminology for this function.</p>
| the_candyman | 51,370 | <p>The solutions with <span class="math-container">$|c1| \leq 1$</span> and <span class="math-container">$c_2 = c_3 = 0$</span> are constant and of norm less or equal to <span class="math-container">$1$</span>, i.e.:</p>
<p><span class="math-container">$$x(t) = \begin{bmatrix}c_1 \\ 0 \\ 0 \end{bmatrix} ~\forall t \geq 0,$$</span></p>
<p>and</p>
<p><span class="math-container">$$\|x(t)\| = |c_1| \leq 1 ~\forall t \geq 0.$$</span></p>
<p>In all other cases, the exponentials will blow up to infinity.</p>
|
140,500 | <p>The diagonals of a rectangle are both 10 and intersect at (0,0). Calculate the area of this rectangle, knowing that all of its vertices belong to the curve $y=\frac{12}{x}$.</p>
<p>At first I thought it would be easy - a rectangle with vertices of (-a, b), (a, b), (-a, -b) and (a, -b). However, as I spotted no mention of the rectangle's sides being perpendicular to the axes, that is obviously wrong, which got me stuck. I thought that maybe we could proceed in a similar way - we know that if the rectangle is somehow rotated (and we need to take that into account), the distances from the Y axis of the points symmetric about (0,0) are still just two variables. So we would have: (-b, -12/b), (a, 12/a), (-a, -12/a), (b, 12/b). I then tried to calculate the distance between the first two and between the second and the third, which I could then use along with the Pythagorean theorem and a diagonal. However, the distance between the first two is $\sqrt{(a+b)^2+(\frac{12}{a}+\frac{12}{b})^2}$ which is unfriendly enough to make me think it's the wrong way. Could you please help me?</p>
| bgins | 20,321 | <p>To follow J.M.'s hint, you need to solve the two equations
$$
\eqalign{
x^2+y^2&=25\\
xy&=12
}
$$
and a nice way to do this would be to notice that then
$$
\left(x+y\right)^2=x^2+2xy+y^2=25+2\cdot12=49
$$
so that $x+y=\pm7$. Next, if you note that
$$
\left(x-y\right)^2=x^2-2xy+y^2=25-2\cdot12=1
$$
you will see that $x-y=\pm1$. This gives us the system
$$
\eqalign{
x+y&=\pm7\\
x-y&=\pm1
}
$$
From which we can add or subtract, then divide by two, to get
$$
x,y=\pm\frac{7\pm1}{2}=\pm3,\pm4
$$
from which we can work out the combination of signs
(in the first and third quadrants) that satisfy the
equations above:
$$
\{(3,4),(4,3),(-3,-4),(-4,-3)\}
$$
Since the (within- and between-quadrant) sides of
the rectangle are $\sqrt{1^2+1^2}=\sqrt2$ and
$\sqrt{7^2+7^2}=7\sqrt2$, the area is their product,
$$A = \sqrt2\cdot7\sqrt2 = 14\,.$$</p>
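<p>A short script (illustrative) confirms the vertices lie on the curve, that the diagonals have length <span class="math-container">$10$</span> and join opposite vertices through the origin, and that the area is <span class="math-container">$14$</span>:</p>

```python
from math import dist, isclose

# vertices in cyclic order
P = [(3, 4), (4, 3), (-3, -4), (-4, -3)]

# all four lie on y = 12/x
assert all(isclose(y, 12 / x) for x, y in P)

# both diagonals (between opposite vertices) have length 10
assert isclose(dist(P[0], P[2]), 10) and isclose(dist(P[1], P[3]), 10)

sides = [dist(P[i], P[(i + 1) % 4]) for i in range(4)]
area = sides[0] * sides[1]
print(sides, area)   # sides sqrt(2) and 7*sqrt(2); area 14
```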
|
816,088 | <blockquote>
<p>The sum of two variable positive numbers is $200$.
Let $x$ be one of the numbers and let the product of these two numbers be $y$. Find the maximum value of $y$.</p>
</blockquote>
<p><em>NB</em>: I'm currently on the stationary points of the calculus section of a text book. I can work this out in my head as $100 \times 100$ would give the maximum value of $y$. But I need help making this into an equation and differentiating it. Thanks!</p>
| Samrat Mukhopadhyay | 83,973 | <p>The problem is the following: $$\mbox{Find}\\ y=\max_{x\ge 0}\left(x(200-x)\right)$$ So differentiate the function $f(x)=x(200-x)$ to get $x=100$ as the maximizer (since $f''(100)=-2<0$). So, $y=100^2$.</p>
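<p>A quick integer sweep (illustrative only) agrees with the stationary-point calculation:</p>

```python
f = lambda x: x * (200 - x)
best = max(range(1, 200), key=f)
print(best, f(best))   # expect: 100 10000
```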
|
17,713 | <p>I am a bit perplexed in trying to find values <span class="math-container">$a,b,c$</span> so that the approximation is as precise as possible:</p>
<p><span class="math-container">$$\sum_{k=n}^{\infty}\frac{(\ln(k))^{2}}{k^{3}} \approx \frac{1}{n^{2}}[a(\ln (n))^{2}+b \ln(n) + c]$$</span></p>
<p>I can see from Wolfram that <span class="math-container">$\ln(x)$</span> could be written in many different <a href="http://www.wolframalpha.com/input/?i=ln(x)+series" rel="nofollow noreferrer">ways</a> and that the sum starting from <span class="math-container">$1$</span> can be written with the Riemann zeta function <a href="http://www.wolframalpha.com/input/?i=sum((ln(x)%5E2%2Fx%5E3)" rel="nofollow noreferrer">here</a>. They are probably not what I am looking for here. The decreasing exponent hints to me that perhaps the term in the sum can be approximated with its derivatives, the first derivative <a href="http://www.wolframalpha.com/input/?i=Dervivate((ln(x)%5E2%2Fx%5E3)" rel="nofollow noreferrer">here</a> and the second <a href="http://www.wolframalpha.com/input/?i=Derivate(Dervivate((ln(x)%5E2%2Fx%5E3))" rel="nofollow noreferrer">here</a>, but when I try to approximate, something goes wrong; perhaps the premise is wrong. Let <span class="math-container">$z=\frac{\ln(k)^{2}}{k^{3}}$</span> then</p>
<p><span class="math-container">$$\sum_{k=n}^{\infty} z \approx z' z-z' z^3/2!$$</span></p>
<p>by the Taylor approximation for the odd function (<span class="math-container">$\ln(x)$</span> is odd, <span class="math-container">$x^3$</span> is odd and the oddity is preserved after the operations, maybe wrong, so <span class="math-container">$z$</span> is odd like <span class="math-container">$\sin(x)$</span>), I get:</p>
<p><span class="math-container">$$\sum_{k=n}^{\infty} z \approx (\frac{1}{x^3}(2\ln(x) - 3(\ln(x))^2)) -\frac{1}{x^2}(6 \ln(x)^2-7 \ln(x)+1)$$</span></p>
<p>something wrong because the first term has <span class="math-container">$x^3$</span> instead of <span class="math-container">$x^2$</span>. I am sorry if this hard to read but stuck here. So is it the correct way to approximate the sum? Is it really with Riemann zeta function or does Taylor work here, as I am trying to do above?</p>
<p><em>I labeled the question with <code>parity</code> because I think it is central here. I feel I may be misunderstanding how the oddity works over different operations, hopefully shown in my vague explanations, and hence the error.</em></p>
| Nabyl Bod | 5,838 | <p>Invoke an <a href="http://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">Integral test for convergence</a>
then two integrations by parts give you (up to a mistake in my calculus):</p>
<p>$a=1/2$, $b=1/2$, and $c=1/4$.</p>
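<p>These values match what the integral test suggests, since <span class="math-container">$\int_n^\infty \frac{\ln^2 x}{x^3}\,dx=\frac{1}{n^2}\left[\tfrac12\ln^2 n+\tfrac12\ln n+\tfrac14\right]$</span>. A rough numerical check (illustrative; the tolerance is loose because replacing the sum by the integral already introduces a relative error of order <span class="math-container">$1/n$</span>):</p>

```python
from math import log

def tail(n, N=500_000):
    # truncated tail sum; the remainder past N is utterly negligible here
    return sum(log(k)**2 / k**3 for k in range(n, N))

def approx(n, a=0.5, b=0.5, c=0.25):
    L = log(n)
    return (a * L * L + b * L + c) / n**2

t, ap = tail(50), approx(50)
print(t, ap, abs(t - ap) / t)   # relative error on the order of 1/n
```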
|
3,075,979 | <p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p>
<p>I tried using mathematical induction but using binomial formula also it becomes little bit complicated.</p>
<p>Please show me your proof.</p>
<p>Sorry if this question was already asked; I did not find it. In that case, just sharing the link will be enough.</p>
| Klaas van Aarsen | 134,550 | <p>We have:
<span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}
=\frac{15k^7+21k^5+70k^3-k}{3\cdot 5\cdot 7}
$$</span>
To prove this is an integer we need that:
<span class="math-container">$$15k^7+21k^5+70k^3-k\equiv 0 \pmod{3\cdot 5\cdot 7}$$</span>
According to the <em>Chinese Remainder Theorem</em>, this is the case iff
<span class="math-container">$$\begin{cases}15k^7+21k^5+70k^3-k\equiv 0 \pmod{3} \\
15k^7+21k^5+70k^3-k\equiv 0 \pmod{5}\\
15k^7+21k^5+70k^3-k\equiv 0 \pmod{7}\end{cases} \iff
\begin{cases}k^3-k\equiv 0 \pmod{3} \\
k^5-k\equiv 0 \pmod{5}\\
k^7-k\equiv 0 \pmod{7}\end{cases}$$</span>
<em>Fermat's Little Theorem</em> says that <span class="math-container">$k^p\equiv k \pmod{p}$</span> for any prime <span class="math-container">$p$</span> and integer <span class="math-container">$k$</span>.</p>
<p>Therefore the original expression is an integer.</p>
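<p>(As a non-inductive sanity check, one can simply test divisibility of <span class="math-container">$15k^7+21k^5+70k^3-k$</span> by <span class="math-container">$105$</span> over a range of integers:)</p>

```python
# 105 = 3 * 5 * 7; the numerator should vanish mod 105 for every integer k
ok = all((15 * k**7 + 21 * k**5 + 70 * k**3 - k) % 105 == 0
         for k in range(-500, 501))
print(ok)   # expect: True
```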
|
100,955 | <p>I'm trying to find the most general harmonic polynomial of form $ax^3+bx^2y+cxy^2+dy^3$. I write this polynomial as $u(x,y)$. </p>
<p>I calculate
$$
\frac{\partial^2 u}{\partial x^2}=6ax+2by,\qquad \frac{\partial^2 u}{\partial y^2}=2cx+6dy
$$
and conclude $3a+c=0=b+3d$. Does this just mean the most general harmonic polynomial has form $ax^3-3dx^2y-3axy^2+dy^3$ with the above condition on the coefficients? "Most general" is what my book states, and I'm not quite sure what it means.</p>
<p>Also, I want to find the conjugate harmonic function, say $v$. I set $\frac{\partial v}{\partial y}=\frac{\partial u}{\partial x}$ and $\frac{\partial v}{\partial x}=-\frac{\partial u}{\partial y}$ and find
$$
v=dx^3+3ax^2y+bxy^2-ay^3+K
$$
for some constant $K$. By integrating $\frac{\partial v}{\partial x}$, finding $v$ as some polynomial in $x$ and $y$ plus some function in $y$, and then differentiating with respect to $y$ to determine that function in $y$ up to a constant. Is this the right approach?</p>
<p>Finally, the question asks for the corresponding analytic function. Is that just $u(x,y)+iv(x,y)$? Or something else? Thanks for taking the time to read this.</p>
| David E Speyer | 448 | <p>This is a community wiki answer to note that the question was answered in the comments, and thus remove this question from the unanswered list -- the answer is yes, the OP is correct.</p>
|
2,759,827 | <blockquote>
<p>Let $\{x_n\}$ be a bounded sequence and $s=\sup\{x_n|n\in\mathbb N\}.$
Show that if $s\notin \{x_n|n\in\mathbb N\}, $then there exists a
subsequence convereges to $s$.</p>
</blockquote>
<p>$s-1$ cannot be an upper bound and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_1\in \mathbb N: s-1<x_{n_1}<s.$</p>
<p>$s-\frac{1}{2}$ cannot be an upper bound and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_2\in \mathbb N: s-\frac{1}{2}<x_{n_2}<s.$</p>
<p>$\dots$</p>
<p>$s-\frac{1}{k}$ cannot be an upper bound and $s\notin \{x_n|n\in\mathbb N\}$, so $\exists n_k\in \mathbb N: s-\frac{1}{k}<x_{n_k}<s.$</p>
<p>$\dots$</p>
<p><strong>My doubt</strong></p>
<p>I can create a sequence of numbers like this. How do we guarantee that we can choose $n_2>n_1$ such that $s-\frac{1}{2}<x_{n_2}<s$ holds, and similarly for the other cases? If we can show this, then the required subsequence would be $\{x_{n_k}\}$.</p>
| Alex Ortiz | 305,215 | <p>Using the definition of supremum and the assumption that $s\notin \{x_n\}$, show that for any $\epsilon > 0$, there are infinitely many $n$ such that $s-\epsilon < x_n < s$. Having done so, find an increasing sequence $n_1 < n_2 < n_3 < \dots,$ such that $s-1/n_i < x_{n_i} <s$.</p>
|
7,110 | <p>In 1974, Aharoni proved that every separable metric space (X, d) is Lipschitz isomorphic to a subset of the Banach space c_0.
Thus, for some constant L, there is a map K: X --> c_0 that satisfies the inequality d(u,v) <= || Ku - Kv || <= Ld(u,v) for all u and v in X.
Now, suppose X = l_1 (in this case, L = 2 is best possible). I have the following</p>
<p><strong>Conjecture</strong>: Let K: l_1 --> c_0 be a Lipschitz embedding. Then K cannot be <em>monotone</em> w.r.t. the natural duality pairing (.,.) between l_1 and c_0,
i.e., there are some u and v in l_1 such that (u - v, Ku - Kv) < 0.</p>
| fedja | 1,131 | <p>To answer Bill Johnson's question, a monotone linear bi-Lipschitz embedding (actually, an isometric one) $\ell^1\to\ell^\infty$ is very easy to construct. Just take any antisymmetric matrix $A$ of $\pm 1$s with the property that for each $n$ every combination of signs in the first $n$ positions appears in some row of $A$ (you can easily build it by induction) and take $Lx=x+Ax$. Unfortunately, I do not see how to convert it into a mapping to $c_0$. </p>
|
4,301,632 | <blockquote>
<p><span class="math-container">$$X^2 = \begin{bmatrix}1&a\\0&1\\\end{bmatrix}$$</span> where <span class="math-container">$a \in \Bbb R \setminus \{0\}$</span>. Solve for matrix <span class="math-container">$X$</span>.</p>
</blockquote>
<hr />
<p>I was practicing for matrix equations and this is the first one where it has squared matrix and another number, in this case, <span class="math-container">$a$</span>. I would be grateful if you could help. If you can please suggest a book that has matrix equations to practice. Thank you!</p>
| Theo Bendit | 248,286 | <p>Let
<span class="math-container">$$M(a) = \begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}.$$</span>
Note <span class="math-container">$M(a)$</span> has one eigenvalue: <span class="math-container">$1$</span>, and is not diagonalisable. This means that, if <span class="math-container">$X^2 = M(a)$</span>, then <span class="math-container">$X$</span> has eigenvalue <span class="math-container">$1$</span> or <span class="math-container">$-1$</span>, but not both (if it had both, then <span class="math-container">$X$</span> would be diagonalisable, and so would <span class="math-container">$X^2 = M(a)$</span>).
We also note that any eigenvector of <span class="math-container">$X$</span> will be an eigenvector of <span class="math-container">$X^2$</span>, and <span class="math-container">$X^2$</span> has only the eigenvector <span class="math-container">$\begin{bmatrix} 1 \\ 0\end{bmatrix}$</span>. Thus, <span class="math-container">$X$</span> must have only this eigenvector as well. (Of course, when I say a matrix has <em>only</em> one eigenvector, I mean that there is one eigenspace, and it is one-dimensional, so we can only find one linearly independent eigenvector.)</p>
<p>Now, multiplication by <span class="math-container">$\begin{bmatrix} 1 \\ 0\end{bmatrix}$</span> on the right reveals the first column of a <span class="math-container">$2\times 2$</span> matrix. Therefore, the first column of <span class="math-container">$X$</span> is
<span class="math-container">$$X \begin{bmatrix} 1 \\ 0\end{bmatrix} = \pm 1 \cdot \begin{bmatrix} 1 \\ 0\end{bmatrix} = \begin{bmatrix} \pm1 \\ 0\end{bmatrix},$$</span>
depending on whether the unique eigenvalue is <span class="math-container">$1$</span> or <span class="math-container">$-1$</span>. This makes the matrix <span class="math-container">$X$</span> upper-triangular (due to the <span class="math-container">$0$</span> in the bottom left corner), so the eigenvalues of <span class="math-container">$X$</span> lie on the diagonal. That is, <span class="math-container">$X$</span> takes the form:
<span class="math-container">$$\begin{bmatrix} \pm 1 & * \\ 0 & \pm 1\end{bmatrix} = \pm\begin{bmatrix} 1 & * \\ 0 & 1\end{bmatrix},$$</span>
where the three <span class="math-container">$\pm$</span>s are all the same. That is,
<span class="math-container">$$X = \pm M(b)$$</span>
for some <span class="math-container">$b$</span>. Note that <span class="math-container">$(\pm M(b))^2 = M(b)^2$</span>, so we are looking for <span class="math-container">$b$</span> such that
<span class="math-container">$$\begin{bmatrix}1 & b \\ 0 & 1 \end{bmatrix}^2 = \begin{bmatrix}1 & a \\ 0 & 1 \end{bmatrix},$$</span>
which was solved by Angel in their answer: <span class="math-container">$b = \frac{a}{2}$</span>. This gives us precisely two solutions:
<span class="math-container">$$X = \begin{bmatrix}1 & \frac{a}{2} \\ 0 & 1 \end{bmatrix}, -\begin{bmatrix}1 & \frac{a}{2} \\ 0 & 1 \end{bmatrix}.$$</span></p>
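<p>A quick numerical check of the two solutions, taking $a = 3$ as a representative value:</p>

```python
import numpy as np

a = 3.0
M = np.array([[1.0, a], [0.0, 1.0]])
X = np.array([[1.0, a / 2], [0.0, 1.0]])

assert np.allclose(X @ X, M)         # X = M(a/2) squares to M(a)
assert np.allclose((-X) @ (-X), M)   # and so does -X
```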
|
2,554,448 | <p>Beside using l'Hospital 10 times to get
$$\lim_{x\to 0} \frac{x(\cosh x - \cos x)}{\sinh x - \sin x} = 3$$ and lots of headaches, what are some elegant ways to calculate the limit?</p>
<p>I've tried to write the functions as powers of $e$ or as power series, but I don't see anything which could lead me to the right result.</p>
| egreg | 62,967 | <p>It's not difficult to show that
$$
\lim_{x\to0}\frac{\cosh x-1}{x^2}=
\lim_{x\to0}\frac{\cosh^2x-1}{x^2(\cosh x+1)}=
\lim_{x\to0}\frac{\sinh^2x}{x^2}\frac{1}{\cosh x+1}=\frac{1}{2}
$$
Similarly,
$$
\lim_{x\to0}\frac{1-\cos x}{x^2}=\frac{1}{2}
$$
hence
$$
\lim_{x\to0}\frac{\cosh x-\cos x}{x^2}=1
$$
Therefore your limit is the same as
$$
\lim_{x\to0}\frac{x^3}{\sinh x-\sin x}
$$
If you apply l'Hôpital once, you get
$$
\lim_{x\to0}\frac{3x^2}{\cosh x-\cos x}=3
$$
by the same limit computed before.</p>
<p>With Taylor expansion:
$$
\lim_{x\to0}
\frac{x(1+\frac{x^2}{2}-1+\frac{x^2}{2}+o(x^2))}
{x+\frac{x^3}{6}-x+\frac{x^3}{6}+o(x^3)}
=3
$$</p>
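<p>Either route can be sanity-checked numerically; since the ratio is $3 + O(x^4)$, even modestly small $x$ is already very close to $3$ (a sketch):</p>

```python
import math

def f(x):
    return x * (math.cosh(x) - math.cos(x)) / (math.sinh(x) - math.sin(x))

# The relative error of the ratio is O(x^4), so modestly small x suffices
# and avoids catastrophic cancellation in the denominator sinh x - sin x.
for x in (0.1, 0.01):
    assert abs(f(x) - 3.0) < 1e-3
```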
|
1,425,935 | <p>How would I solve this trigonometric equation?</p>
<p>$$3\cos x \cot x + \sin 2x = 0$$</p>
<p>I got to this stage: $$3 \cos x = -2 \sin^2x$$</p>
<p>Is this a dead end, or is there an easier way to solve this equation? </p>
| Aaron Maroja | 143,413 | <p><strong>Hint:</strong> Use $\sin^2 x = 1 - \cos^2 x $ then $$-2 \cos^2 x + 3 \cos x + 2 = 0$$</p>
<p>take $y = \cos x$. </p>
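<p>Completing the hint: the quadratic gives $\cos x = -\frac12$ (the root $y=2$ is infeasible since $|\cos x|\le 1$), so $x = \pm\frac{2\pi}{3} + 2k\pi$. A numeric sketch verifying both the roots and one resulting solution:</p>

```python
import math

# Roots of -2*y**2 + 3*y + 2 = 0 (the hint's quadratic in y = cos x).
disc = math.sqrt(3 ** 2 - 4 * (-2) * 2)                  # sqrt(25) = 5
roots = sorted([(-3 + disc) / (2 * -2), (-3 - disc) / (2 * -2)])
assert roots == [-0.5, 2.0]   # y = 2 is infeasible since |cos x| <= 1

# cos x = -1/2, e.g. x = 2*pi/3, solves the original equation.
x = 2 * math.pi / 3
lhs = 3 * math.cos(x) * (math.cos(x) / math.sin(x)) + math.sin(2 * x)
assert abs(lhs) < 1e-12
```

<p>(The case $\cos x = 0$ was factored out on the way to the quadratic; it is worth checking separately, since there $\cot x = 0$ and $\sin 2x = 0$, so those points also satisfy the original equation.)</p>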
|
1,955,212 | <p>I couldn't find this on the whole internet. My life depends on solving this. Please help.
I must write a formula for this sequence
<span class="math-container">$-8, -14, 8, 14, -8, -14, 8, 14$</span>.</p>
| coffeemath | 30,316 | <p>$f(x)=14 \cos (\pi x/2)-8 \sin(\pi x/2)$ does it, for $x=1,2,3,4,\cdots .$</p>
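<p>Checking the formula against all eight given terms (rounding away floating-point dust):</p>

```python
import math

def f(x):
    return 14 * math.cos(math.pi * x / 2) - 8 * math.sin(math.pi * x / 2)

values = [round(f(x)) for x in range(1, 9)]
assert values == [-8, -14, 8, 14, -8, -14, 8, 14]
```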
|
201,173 | <p>I have problem solving this equation:
$$
\left(\frac{1+iz}{1-iz}\right)^4 = \frac12 + i {\sqrt{3}\over 2}
$$
I know how to solve equations that are like:
$$
w^4 = \frac12 + i {\sqrt{3}\over 2}
$$
And I have solved it to:
$$
w = \cos(\frac{\pi}{12} + \frac{\pi k}{2}) + i\sin(\frac{\pi}{12} + \frac{\pi k}{2})
$$
But now is:
$$
w = \frac{1+iz}{1-iz}
$$
How does one get the complex z? Or am I solving it wrong?</p>
| Did | 6,179 | <p>$$w=\frac{1+\mathrm iz}{1-\mathrm iz}\iff z=\mathrm i\cdot\frac{1-w}{1+w}$$
<strong>Edit:</strong> Along the way, one uses the identities $(1-\mathrm iz)\cdot w=1+\mathrm iz$ and $1-w=-\mathrm i\cdot(1+w)\cdot z$.</p>
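<p>This inversion can be verified numerically (my own sketch): take each fourth root $w$ of $\frac12 + i\frac{\sqrt3}{2} = e^{i\pi/3}$, map it to $z = \mathrm i\,\frac{1-w}{1+w}$, and substitute back. Each $z$ comes out real, since this Cayley-type map sends the unit circle to the real axis:</p>

```python
import cmath
import math

target = complex(0.5, math.sqrt(3) / 2)      # 1/2 + i*sqrt(3)/2 = e^{i*pi/3}

w0 = cmath.exp(1j * cmath.pi / 12)           # principal fourth root of target
zs = []
for unit in (1, 1j, -1, -1j):                # the four fourth roots w = w0 * i^k
    w = w0 * unit
    z = 1j * (1 - w) / (1 + w)
    assert abs(z.imag) < 1e-12               # z is real since |w| = 1
    assert abs(((1 + 1j * z) / (1 - 1j * z)) ** 4 - target) < 1e-12
    zs.append(z.real)

assert len(set(round(z, 9) for z in zs)) == 4   # four distinct real solutions
```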
|
119,584 | <p>It is known that there are multiplicative version concentration inequalities for
sums of independent random variables. For example, the following
multiplicative version <strong>Chernoff</strong> bound.</p>
<hr>
<p><strong>Chernoff bound:</strong></p>
<p>Let $X_1,\ldots,X_n$ be independent random variables with $X_i \in [0,1]$. Let $Y=\sum_{i=1}^n X_i$. Then for any $\delta>0$,</p>
<p>$\Pr\left(Y \ge (1+\delta)EY \right) \le e^{-c\cdot(EY)\delta ^2},$</p>
<p>where $c$ is some absolute constant, e.g., c=1/3.</p>
<hr>
<p>Now consider <strong>dependent</strong> random variables. A slight variant of <strong>Azuma</strong>'s inequality states the following.</p>
<hr>
<p><strong>Azuma's Inequality:</strong></p>
<p>Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in
[0,1]$. Assume that there exists $\mu$, such that $ \Pr \left( \sum_{i=1}^n \mathbb{E}[X_i|X_{1},\ldots,X_{i-1}] \le \mu\right) = 1$. Let $Y=\sum_{i=1}^n X_i$. Then for any $\lambda > 0$,</p>
<p>$\Pr\left(Y \ge n\mu+\lambda \right) \le e^{-2 \lambda^2/n}.$</p>
<hr>
<p>Azuma's inequality is additive. My question is whether a multiplicative version of Azuma's inequality, such as the following, holds.</p>
<hr>
<p><strong>My question:</strong> does the following hold?</p>
<p>Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in
[0,1]$. Assume that there exists $\mu$, such that $\Pr\left( \sum_{i=1}^n \mathbb{E}[X_i|X_1,\ldots,X_{i-1}] \le \mu\right) = 1.$ Let $Y=\sum_{i=1}^n X_i$. Then for any $\delta>0$</p>
<p>$\Pr\left(Y \ge (1+\delta)n\mu \right) \le e^{-c\cdot n\mu \delta^2},$</p>
<p>where $c$ is some absolute constant.</p>
<hr>
<p><strong>Note</strong>: the standard Azuma's inequality does not imply the
multiplicative version when $n\mu \ll
\sqrt{n}$.</p>
| Neal | 27,773 | <p>Yes, such bounds are possible. You can adapt the proof of Azuma's inequality to the multiplicative-error case, if you set it up correctly.
For example:</p>
<p><strong>Lemma 10 [<a href="http://www.cs.ucr.edu/~neal/Koufogiannakis13Nearly.pdf" rel="nofollow noreferrer">this paper</a>]</strong>.
<em>Let $Y=\sum_{t=1}^T x_t$ and $Z=\sum_{t=1}^T z_t$
be sums of non-negative random variables,
where $T$ is a random stopping time with finite expectation,
and, for all $t$,
$|x_t-z_t|\le 1$ and
$$\textstyle
E\big[\,x_{t} - z_{t} ~|\, \sum_{s< t} x_s, \sum_{s< t} z_s\,\big]
~\le~
0.$$
Let $\epsilon\in[0,1]$ and $A\in\mathbb{R}$. Then
$$\Pr\big[\,(1-\epsilon) Y \,\ge\, Z + A\, \big] ~\le~ \exp({-\epsilon}A).$$</em></p>
<hr>
<p>To apply this to your question, take $T=n$, $z_t=\mu$, and $A=\epsilon\, n\, \mu$. Then you get
$$\Pr\big[\,Y \,\ge\, \frac{1+\epsilon}{1-\epsilon}n\mu\, \big] ~\le~ \exp({-\epsilon^2}n\mu).$$</p>
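<p>This specialization can be sanity-checked by simulation. The sketch below is my own and uses i.i.d. Bernoulli($\mu$) terms — a special case of the dependent setting — with fixed parameters; the empirical tail probability should sit well below $\exp(-\epsilon^2 n\mu)$:</p>

```python
import math
import random

random.seed(1)                                # reproducible simulation
n, mu, eps = 100, 0.3, 0.2
threshold = (1 + eps) / (1 - eps) * n * mu    # approx 45
bound = math.exp(-(eps ** 2) * n * mu)        # approx 0.30

trials = 10_000
hits = sum(
    sum(random.random() < mu for _ in range(n)) >= threshold
    for _ in range(trials)
)

# The empirical tail probability sits (far) below the claimed bound.
assert hits / trials <= bound
```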
|
18,048 | <p>When taking a MOOC in calculus the exercises contain 5 options to select from. I then solve the question and select the option that matches my answer. Obviously only one of the options is correct. But there are (quite a few) times where my solution is wrong even though it is one of the available options. </p>
<p>My question is, how does a maths teacher know how to create wrong options that are as plausible as the correct answer?</p>
<p>I've found these two good questions, but I am wondering how an educator would come up with wrong answers in general.</p>
<p><a href="https://matheducators.stackexchange.com/questions/16573/what-are-some-common-ways-students-get-confused-about-finding-an-inverse-of-a-fu/16576#16576">What are some common ways students get confused about finding an inverse of a function?</a></p>
<p><a href="https://matheducators.stackexchange.com/questions/12045/how-are-students-messing-up-in-this-khan-academy-surface-area-problem-right-tr">How are students messing up in this Khan Academy surface area problem? (right triangular prism: 3-4-5 triangular base, height of 11)</a></p>
| Amy B | 5,321 | <p>I freelance as an item writer, someone who writes questions for standardized tests. When making up alternate choices, I always have to justify my reasons for the "wrong answers" or distractors.
Here are some strategies I use.</p>
<ol>
<li>I focus on common misconceptions for students at that grade level. This is easier after years of classroom experience. </li>
<li>In a multi-step problem I often use the answer to an intermediate step. </li>
<li>When calculator use isn't allowed, I use answers with common
calculation errors.</li>
<li>I look for errors students might make if they don't read the problem carefully.</li>
</ol>
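<p>Strategies like these can even be mechanized. The sketch below is a hypothetical illustration (the function and the specific error rules are mine, not from the answer): for a problem like "solve $3x + 5 = 20$", it derives distractors from an intermediate step, a sign error, and a careless misreading:</p>

```python
def linear_equation_choices(a, b, c):
    """Choices for 'solve a*x + b = c': the answer plus common-error distractors."""
    correct = (c - b) / a
    distractors = {
        c - b,        # stopped at an intermediate step (forgot to divide by a)
        (c + b) / a,  # sign error when moving b to the other side
        (b - c) / a,  # subtracted in the wrong order (careless reading)
    }
    distractors.discard(correct)              # never duplicate the right answer
    return correct, sorted({correct} | distractors)

correct, choices = linear_equation_choices(3, 5, 20)
assert correct == 5.0
assert correct in choices and len(choices) == 4
```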
|
18,048 | <p>When taking a MOOC in calculus the exercises contain 5 options to select from. I then solve the question and select the option that matches my answer. Obviously only one of the options is correct. But there are (quite a few) times where my solution is wrong even though it is one of the available options. </p>
<p>My question is, how does a maths teacher know how to create wrong options that are as plausible as the correct answer?</p>
<p>I've found these two good questions, but I am wondering how an educator would come up with wrong answers in general.</p>
<p><a href="https://matheducators.stackexchange.com/questions/16573/what-are-some-common-ways-students-get-confused-about-finding-an-inverse-of-a-fu/16576#16576">What are some common ways students get confused about finding an inverse of a function?</a></p>
<p><a href="https://matheducators.stackexchange.com/questions/12045/how-are-students-messing-up-in-this-khan-academy-surface-area-problem-right-tr">How are students messing up in this Khan Academy surface area problem? (right triangular prism: 3-4-5 triangular base, height of 11)</a></p>
| JRN | 77 | <p>If a teacher has taught the course before, and has asked questions that are free-response (not multiple-choice), then the teacher can look at the incorrect answers previously given by the students.</p>
<p>If not, then the teacher can ask other teachers who have had this experience.</p>
<p>Errors that "appear to stem from consistent application of a faulty method, algorithm, or rule" are called "systematic errors" (or "bugs") in the literature. Use these terms to search the mathematics education literature. (See my answer <a href="https://matheducators.stackexchange.com/a/10879/77">here</a>, for example.)</p>
|
18,048 | <p>When taking a MOOC in calculus the exercises contain 5 options to select from. I then solve the question and select the option that matches my answer. Obviously only one of the options is correct. But there are (quite a few) times where my solution is wrong even though it is one of the available options. </p>
<p>My question is, how does a maths teacher know how to create wrong options that are as plausible as the correct answer?</p>
<p>I've found these two good questions, but I am wondering how an educator would come up with wrong answers in general.</p>
<p><a href="https://matheducators.stackexchange.com/questions/16573/what-are-some-common-ways-students-get-confused-about-finding-an-inverse-of-a-fu/16576#16576">What are some common ways students get confused about finding an inverse of a function?</a></p>
<p><a href="https://matheducators.stackexchange.com/questions/12045/how-are-students-messing-up-in-this-khan-academy-surface-area-problem-right-tr">How are students messing up in this Khan Academy surface area problem? (right triangular prism: 3-4-5 triangular base, height of 11)</a></p>
| niicole16 | 13,676 | <p>So. I started working as a teaching assistant for a course, and the professor showed me "test generating" software. It comes down to the fact that the editor of a textbook creates various open, multiple choice, and true/false questions and answers, and you - using the software - can just click "generate X number of MC questions". No need to think about it at all. </p>
<p>So to create a midterm, I just downloaded the software, hit "generate" and got all the questions and answers. I know this does not exactly answer your question, but I did not know this existed and it completely changed the "they must think so hard about all of this" idea I had in my head, so I had to share this. Especially as I was shown this only AFTER coming up with questions and answers for the previous midterms.</p>
|