qid | question | author | author_id | answer |
|---|---|---|---|---|
2,685,424 | <p>What I don't understand is why we can't find the general solution of a non-homogeneous differential equation from the non-homogeneous equation itself. Currently we use the homogeneous equation as well. </p>
<p>Why is the general solution not obtainable from the non-homogeneous equation alone?</p>
| Christian Blatter | 1,303 | <p>What you are observing here is a fundamental principle valid in the "linear world". You are given an equation or system of equations
$$Ax=b\ ,\tag{1}$$
whereby $A$ operates linearly on the input vector $x$, and $b$ is a given constant vector. Such an equation may have no solutions. If it has solutions then they are of the form
$$x=y_{\rm hom} +x_p\ ,$$
whereby $x_p$ is a <em>particular solution</em> of the original equation $(1)$ (maybe found by guessing), and $y_{\rm hom}$ is the <em>general solution</em> of the associated homogeneous equation
$$Ay=0\ .\tag{2}$$
Note that the set of solutions of $(2)$ is a <em>vector space</em>, which means that any linear combination of "special solutions" of $(2)$ found by "special methods" is again a solution of $(2)$. One last thing: If the RHS of $(1)$ is the sum of two "simpler" vectors: $b=b_1+b_2$, and we can find "particular solutions" $x_p^{(1)}$, $x_p^{(2)}$ of $(1)$ for these "simpler" RHSs then $x_p=x_p^{(1)}+x_p^{(2)}$ is a solution of $(1)$ for the given $b$.</p>
<p>What I have described here applies to inhomogeneous systems of linear algebraic equations, to inhomogeneous linear systems of ODEs, and to inhomogeneous linear PDEs with homogeneous and inhomogeneous boundary conditions. It is a truly <em>fundamental principle</em>: The set of all solutions of $(1)$ is an <em>affine space</em>, i.e., a linear space $V$, albeit translated away from the origin by some (not uniquely determined) vector $x_p$.</p>
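<p>This superposition principle is easy to check numerically; the sketch below uses an arbitrary singular matrix (all values are illustrative choices, not taken from the discussion above):</p>

```python
import numpy as np

# A singular matrix, so the homogeneous system Ay = 0 has nontrivial solutions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

x_p = np.array([1.0, 1.0])     # a particular solution: A @ x_p = b
y_hom = np.array([2.0, -1.0])  # a homogeneous solution: A @ y_hom = 0

# Every x_p + c * y_hom solves Ax = b, illustrating x = y_hom + x_p.
for c in (0.0, 1.0, -3.5):
    assert np.allclose(A @ (x_p + c * y_hom), b)
print("particular + homogeneous = general solution, verified")
```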
|
454,426 | <blockquote>
<p>In set theory and combinatorics, the cardinal number $n^m$ is the size of the set of functions from a set of size $m$ into a set of size $n$.</p>
</blockquote>
<p>I read this from this <a href="http://en.wikipedia.org/wiki/Empty_product#0_raised_to_the_0th_power" rel="nofollow noreferrer">Wikipedia page</a>.</p>
<p>I don't understand, however, why this is true. I reason with this example in which $M$ is a set of size $5$, and $N$ is a set of size $3$. For each element in set $M$, there are three functions to map the element from the set of size $5$ to an element in the set of size $3$. </p>
<p>By my reasoning, that means the total number of functions is just $3*5$, i.e. $3$ functions for each of the $5$ elements in the set. Why is it actually $3^5$? I saw on <a href="https://math.stackexchange.com/questions/209361/size-of-the-set-of-functions-from-x-to-y">this thread</a> that the number of functions from a set of size $n$ to a set of size $m$ is equivalent to "How many $m$-digit numbers can I form using the digits $1,2,...,n$ and allowing repetition?" I know how to answer that question, but I don't know why it's the same thing as finding the number of functions from the size $n$ set to the size $m$ set. </p>
| Chris Culter | 87,023 | <p>Consider a small example: the number of functions from a 2-element set $\{a,b\}$ to a 3-element set $\{1,2,3\}$. They are:</p>
<p>$$\begin{align}
a,b\mapsto1,1\\
a,b\mapsto1,2\\
a,b\mapsto1,3\\
a,b\mapsto2,1\\
a,b\mapsto2,2\\
a,b\mapsto2,3\\
a,b\mapsto3,1\\
a,b\mapsto3,2\\
a,b\mapsto3,3 \end{align}$$</p>
<p>See? There are $9=3^2$, not $6=3\times 2$.</p>
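<p>The same count can be reproduced mechanically: a function is exactly a tuple of images, one per domain element (the set elements below are arbitrary):</p>

```python
from itertools import product

# Each function {a, b} -> {1, 2, 3} is a choice of an image for a and for b,
# i.e. an element of {1, 2, 3} x {1, 2, 3}.
functions = list(product([1, 2, 3], repeat=2))
print(len(functions))  # 9 = 3**2

# In general there are n**m functions from an m-element set to an n-element set.
m, n = 5, 3
assert len(list(product(range(n), repeat=m))) == n ** m  # 3**5 = 243
```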
|
<p>Let <span class="math-container">$T: X \to Y$</span> be a linear operator between normed spaces <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.
The definition of the operator norm
<span class="math-container">$$
\| T \|
:= \sup_{x \neq 0} \frac{\| Tx \|}{\| x \|}
$$</span>
is well known.</p>
<p>Now, let <span class="math-container">$T$</span> be bijective.
When can we say that
<span class="math-container">$$
\| T^{-1} \|
= \inf_{x \neq 0} \frac{\| x \|}{\| T x \|}
$$</span>
holds?</p>
| Aweygan | 234,668 | <p>First of all, you need to assume that <span class="math-container">$T^{-1}$</span> is bounded. If <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are Banach spaces, this holds automatically, but not for normed spaces. For example, consider <span class="math-container">$X=Y=c_{00}(\mathbb N)$</span>, the space of sequences with finite support, and let <span class="math-container">$T$</span> be defined by <span class="math-container">$(Tx)(n)=\frac{1}{n}x(n)$</span>. Then <span class="math-container">$T$</span> a bounded linear bijection (under any <span class="math-container">$p$</span>-norm), but it's inverse is not bounded.</p>
<p>Under this additional hypothesis, your question is equivalent to asking when <span class="math-container">$\|T^{-1}\|=\|T\|^{-1}$</span>. I am not aware of any well-known conditions that guarantee this, and it doesn't happen a lot. In fact, if <a href="https://en.wikipedia.org/wiki/Condition_number#Matrices" rel="nofollow noreferrer">wikipedia</a> is to be believed, this only happens in finite-dimensional spaces when <span class="math-container">$T$</span> is a scalar multiple of an isometry. </p>
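<p>A quick numerical illustration of the two cases (the matrices are arbitrary examples; note that $\inf_{x\neq 0}\|x\|/\|Tx\| = 1/\|T\|$, so the question is indeed when $\|T^{-1}\| = 1/\|T\|$): for a generic invertible matrix the two quantities differ, while for a scalar multiple of an orthogonal matrix they coincide.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def op_norm(M):
    # Operator (spectral) norm: the largest singular value of M.
    return np.linalg.norm(M, 2)

# Generic invertible matrix: ||T^{-1}|| and 1/||T|| usually differ.
T = rng.standard_normal((3, 3))
print(op_norm(np.linalg.inv(T)), 1.0 / op_norm(T))

# Scalar multiple of an orthogonal matrix (an isometry up to scale): equality.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
S = 2.5 * Q
assert np.isclose(op_norm(np.linalg.inv(S)), 1.0 / op_norm(S))
```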
|
<p>I can use the exponent laws only for <span class="math-container">$m,n \in \mathbb{N}$</span>, and need to prove them for <span class="math-container">$m,n \in \mathbb{Z}$</span>.</p>
<p>note that <span class="math-container">$0 \neq a \in \mathbb{R}$</span></p>
<p>I proved some cases (mainly the trivial ones) and I'm having a hard time proving this case:</p>
<p>Assuming <span class="math-container">$m<0 \wedge n>0$</span> and <span class="math-container">$\left | n \right | > \left | m \right |$</span> we know that <span class="math-container">$(m+n) \in \mathbb{N}$</span></p>
<p>I was thinking about expressing <span class="math-container">$n+m$</span> as <span class="math-container">$k\in\mathbb{N}$</span> but then I need to handle the cases where <span class="math-container">$k = 1$</span> and <span class="math-container">$k > 1$</span></p>
<p>I started with <span class="math-container">$a^{m+n} = \frac{1}{a^{-m-n}}$</span> but then again I get that <span class="math-container">$-m \in\mathbb{N}$</span> and that <span class="math-container">$-n<0$</span> so I can't seem to apply any exponent rules with the naturals. </p>
<p>How should I approach this?</p>
<p><strong>edit</strong>:</p>
<p>Would it be legit performing:</p>
<p><span class="math-container">$a^{m+n} = \frac{1}{a^{-m-n}} = \frac{1}{a^{-m}*\frac{1}{a^{n}}} = \frac{1}{\frac{1}{a^m}*\frac{1}{a^n}} = a^m*a^n$</span></p>
| J.G. | 56,861 | <p>Defining <span class="math-container">$a^{-k}:=\frac{1}{a^k}$</span> for <span class="math-container">$k>0$</span>, we first prove <span class="math-container">$a^{m+1}=a^m a$</span> for all <span class="math-container">$m\in\Bbb Z$</span>: the desired result is the definition of <span class="math-container">$a^{m+1}$</span> for <span class="math-container">$m\ge 0$</span>, is trivial if <span class="math-container">$m=-1$</span>, and extends to <span class="math-container">$m=-k$</span> for all <span class="math-container">$k>0$</span> viz. <span class="math-container">$$a^{-k+1}=\frac{1}{a^{k-1}}=\frac{a}{a^k}=a^{-k}a.$$</span>Now your original problem is solved for <span class="math-container">$n=1$</span>, all <span class="math-container">$n\ge 1$</span> follow from the inductive step <span class="math-container">$$a^{m+l+1}=a^ma^l\cdot a=a^m a^{l+1},$$</span>and <span class="math-container">$n=0$</span> is trivial. Finally, if <span class="math-container">$n=-k$</span> with <span class="math-container">$k>0$</span> then <span class="math-container">$$a^m a^n=\frac{a^m}{a^k}=a^{m-k},$$</span>with the last <span class="math-container">$=$</span> following from <span class="math-container">$a^sa^k=a^{s+k}$</span> with <span class="math-container">$s:=m-k$</span>. This is valid by a previous step because <span class="math-container">$k>0$</span>.</p>
|
<p>I can use the exponent laws only for <span class="math-container">$m,n \in \mathbb{N}$</span>, and need to prove them for <span class="math-container">$m,n \in \mathbb{Z}$</span>.</p>
<p>note that <span class="math-container">$0 \neq a \in \mathbb{R}$</span></p>
<p>I proved some cases (mainly the trivial ones) and I'm having a hard time proving this case:</p>
<p>Assuming <span class="math-container">$m<0 \wedge n>0$</span> and <span class="math-container">$\left | n \right | > \left | m \right |$</span> we know that <span class="math-container">$(m+n) \in \mathbb{N}$</span></p>
<p>I was thinking about expressing <span class="math-container">$n+m$</span> as <span class="math-container">$k\in\mathbb{N}$</span> but then I need to handle the cases where <span class="math-container">$k = 1$</span> and <span class="math-container">$k > 1$</span></p>
<p>I started with <span class="math-container">$a^{m+n} = \frac{1}{a^{-m-n}}$</span> but then again I get that <span class="math-container">$-m \in\mathbb{N}$</span> and that <span class="math-container">$-n<0$</span> so I can't seem to apply any exponent rules with the naturals. </p>
<p>How should I approach this?</p>
<p><strong>edit</strong>:</p>
<p>Would it be legit performing:</p>
<p><span class="math-container">$a^{m+n} = \frac{1}{a^{-m-n}} = \frac{1}{a^{-m}*\frac{1}{a^{n}}} = \frac{1}{\frac{1}{a^m}*\frac{1}{a^n}} = a^m*a^n$</span></p>
| zwim | 399,263 | <p><strong>Important note:</strong> I consider always <span class="math-container">$m,n,k\in\mathbb N$</span>, and use <span class="math-container">$-n,-m,-k$</span> for negative exponents.</p>
<p>If both negative exponents then <span class="math-container">$$a^{-m-n}=(\frac 1a)^{m+n}\color{red}=(\frac 1a)^n(\frac 1a)^m=a^{-n}a^{-m}$$</span></p>
<p><span class="math-container">$\color{red}=\quad$</span> base <span class="math-container">$\frac 1a\neq 0$</span> and <span class="math-container">$m,n$</span> positive exponents.</p>
<p>If one is negative and the other positive, since the expression is symmetric we don't lose in generality assuming <span class="math-container">$m\le n$</span>. </p>
<p>Then consider <span class="math-container">$n-m=k\ge 0$</span>.</p>
<p><span class="math-container">$$\require{cancel}a^{-n}=a^{0-n}=a^{k-k-n}=a^{\cancel{n}-m-k-\cancel{n}}=a^{-m-k}\color{blue}=a^{-m}a^{-k}$$</span></p>
<p><span class="math-container">$\color{blue}=\quad$</span> base <span class="math-container">$a\neq 0$</span> and <span class="math-container">$-m,-k$</span> both negative exponents.</p>
<p>Rearranging gives <span class="math-container">$a^k=a^na^{-m}$</span> which is the desired result.</p>
|
1,105,126 | <p><img src="https://i.stack.imgur.com/XtrB7.png" alt="enter image description here"></p>
<p>My attempt at the solution is to let P(n) be $10^{3n} + 13^{n+1}$</p>
<p>P(1)= $10^3 + 13^2 = 1169$</p>
<p>Thus P(1) is true.</p>
<p>Suppose P(k) is true for all $k \in N$
$\Rightarrow P(k) = 10^{3k} + 13^{k+1} = 10^{3k} + 13\cdot13^{k}$</p>
<p>$P(k+1) = 10^{3k+3} + 13^{k+2} \\ P(k+1) = 1000\cdot10^{3k} + 169\cdot13^k \\ P(k+1) = (10^{3k} + 13\cdot13^k) + 999\cdot10^{3k} + 12\cdot13^{k+1}$</p>
<p>My solution is not divisible by 7. I've always use this method to solve these types of questions. Can someone point out my error?</p>
| drhab | 75,923 | <p>In $200$ minutes working together they deliver $5+4=9$ papers. So how many minutes they need for delivering $1$ paper?</p>
|
453,212 | <p>Consider a number Q in a made up base system:</p>
<p>The base system is as follows:</p>
<p>It encodes a number as a sum of odd numbers:</p>
<p>1 3 5 7 9 ...</p>
<p>The digits mark which odd numbers appear, so a number is representable if it can be expressed as a sum of distinct odds. For example, the number 16 is expressed as:</p>
<p>1111 = 7 + 5 + 3 + 1</p>
<p>The system is also redundant as 16 can also be expressed as:</p>
<p>11000</p>
<p>My question:</p>
<p>Given a natural number $u$, how can $u$ be expressed in this system quickly, if it can be expressed at all?</p>
| qaphla | 85,568 | <p>If $u$ is even and greater than two: represent $u$ as $110\dots0$, where the number of $0$s is equal to $\frac{u}{2} - 2$.</p>
<p>If $u$ is odd: represent $u$ as $10\dots0$, where the number of $0$s is equal to $\frac{u - 1}{2}$.</p>
<p>$2$ cannot be represented in this system, but is the only nonnegative number for which this is the case.</p>
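<p>A minimal encoder along these lines can be sketched as follows; for even $u$ it uses the decomposition $u = (u-1) + 1$, which is one valid choice among many (the function names are made up for illustration):</p>

```python
def to_odd_system(u: int) -> str:
    """Encode u >= 1 as a 0/1 string whose i-th digit from the right
    stands for the odd number 2*i + 1 (so "11000" = 9 + 7 = 16)."""
    if u == 2:
        raise ValueError("2 has no representation in this system")
    if u % 2 == 1:
        # u is itself odd: a single 1 at position (u - 1) // 2.
        return "1" + "0" * ((u - 1) // 2)
    # u even, u > 2: use u = (u - 1) + 1, a sum of two distinct odds.
    return "1" + "0" * ((u - 4) // 2) + "1"

def from_odd_system(s: str) -> int:
    return sum(2 * i + 1 for i, digit in enumerate(reversed(s)) if digit == "1")

assert from_odd_system("11000") == 16
for u in [1, 3, 4, 6, 16, 17, 100]:
    assert from_odd_system(to_odd_system(u)) == u
print(to_odd_system(16))  # "10000001" = 15 + 1
```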
|
1,570,754 | <p>Let $I$ be an interval and $f\colon I \to \mathbb{R}$ a differentiable function. Suppose the following definitions:</p>
<p>For $x_0 \in I$ the point $(x_0,f(x_0))$ is called <em>saddle point</em> if $f'(x_0) = 0$ but $x_0$ is not a local extremum of $f$.</p>
<p>For $x_W \in I$ the point $(x_W,f(x_W))$ is called <em>point of inflection</em> if there is a neighborhood $U$ of $x_W$ in $I$ such that $f'$ is strictly monotonically increasing (resp. decreasing) for $x < x_W$ on $U$ and strictly monotonically decreasing (resp. increasing) for $x > x_W$ on $U$.</p>
<p>What is the logical relation between saddle points and points of inflection? </p>
<p>My first intuitive guess was that a point $(x,f(x))$ is a saddle point <em>iff</em> it is a point of inflection and $f'(x) = 0$. However the implication "$\implies$" seems to be wrong. Consider the following counterexample:
$$
f(x) =
\begin{cases} x^4 \cdot \sin\left(\frac{1}{x}\right) & x \neq 0 \\
0 & x = 0
\end{cases}
$$</p>
<p>Then $(0,0)$ is a saddle point but not a point of inflection because the derivative oscillates on every neighborhood of $0$.</p>
<p>Is this correct so far? Is the other implication true? If so, how to prove it?</p>
| Altrouge | 298,678 | <p>OK for the first part.</p>
<p>Let's take a look at the second part (the $\Leftarrow$ implication). It is indeed true.</p>
<p>Let us suppose $(x,f(x))$ is a point of inflexion and $f'(x) = 0$. Then there exists an interval $J = [a,b]$, $x \in J$ and $J \subset I$, where we can suppose by symmetry, that $f'$ is increasing on $[a,x]$ and decreasing on $[x,b]$ (otherwise, we can study $-f$).</p>
<p>If $f' < 0$ on $[a,x[$ and $f' < 0$ on $]x,b]$, we have:
$\forall u \in [a,x[$, $f(u) > f(x)$ and $\forall v \in ]x,b]$, $f(v) < f(x) $. Consequently, $(x,f(x))$ is not an extremum and is a saddle point.</p>
<p>Then, let us look at the left interval, with a proof by contradiction. Suppose there exists $c$ in $[a,x[$ such that $f'(c) \geq 0$. As $f'$ is strictly increasing on $[c,x]$, we also have a $d \in ]c,x[$ such that $f'(d) > 0$ and $\forall u \in [d,x[$, $f'(u) > f'(d) > 0$. </p>
<p>By integrating on $[x-h,x]$ with $h \in ]0,x-d[$, we have $f(x)-f(x-h) > f'(d)h$, hence $\frac{f(x-h)-f(x)}{h} < -f'(d) < 0$. Consequently, by taking the limit, $f'(x) \neq 0$, and we have our contradiction.</p>
<p>We can apply a similar reasoning to the right interval, and conclude that $f' < 0$ on $[a,x[$ and $f' < 0$ on $]x,b]$, which by the first case means that $(x,f(x))$ is a saddle point.</p>
|
1,570,754 | <p>Let $I$ be an interval and $f\colon I \to \mathbb{R}$ a differentiable function. Suppose the following definitions:</p>
<p>For $x_0 \in I$ the point $(x_0,f(x_0))$ is called <em>saddle point</em> if $f'(x_0) = 0$ but $x_0$ is not a local extremum of $f$.</p>
<p>For $x_W \in I$ the point $(x_W,f(x_W))$ is called <em>point of inflection</em> if there is a neighborhood $U$ of $x_W$ in $I$ such that $f'$ is strictly monotonically increasing (resp. decreasing) for $x < x_W$ on $U$ and strictly monotonically decreasing (resp. increasing) for $x > x_W$ on $U$.</p>
<p>What is the logical relation between saddle points and points of inflection? </p>
<p>My first intuitive guess was that a point $(x,f(x))$ is a saddle point <em>iff</em> it is a point of inflection and $f'(x) = 0$. However the implication "$\implies$" seems to be wrong. Consider the following counterexample:
$$
f(x) =
\begin{cases} x^4 \cdot \sin\left(\frac{1}{x}\right) & x \neq 0 \\
0 & x = 0
\end{cases}
$$</p>
<p>Then $(0,0)$ is a saddle point but not a point of inflection because the derivative oscillates on every neighborhood of $0$.</p>
<p>Is this correct so far? Is the other implication true? If so, how to prove it?</p>
| Narasimham | 95,860 | <p>Can only be brief, sorry you can fill in the gaps. Information available in Wiki.. For z = f(x,y) ; second derivative test. <0 for max, >0 for min, test fails but at saddle points the both signs prevail. E.g., monkey saddle.$ f(x,y) = x^3 - 3 x y^2 $. when considering inflection points along certain directions ( 3 of 6 directions). Like a Col point in mountainous range, one direction upward, one downward, one is neither, topography negative Gauss curvature, inflection along asymptotic direction, normal curvature vanishes.. See also Minimax, Nash equilibrium.</p>
|
677,859 | <p>$f(x)= f(x+1)+3$ and $f(2)= 5$, determine the value of $f(8)$.</p>
<p>I don't understand how $f(x)$ can equal $f(x+1)+3$</p>
| WhizKid | 87,019 | <p>Essentially re arrange the equation to: $f(x+1)=f(x)-3$</p>
<p>so $f(8)=f(7)-3=f(6)-3-3=.....=f(2)-18=5-18=-13$ using $f(2)=5$</p>
<p>so $f(8)=-13$</p>
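<p>The recurrence can also be checked mechanically via its closed form (an illustrative sketch, not part of the original answer):</p>

```python
def f(x):
    # f(x+1) = f(x) - 3, i.e. f decreases by 3 per unit step, with f(2) = 5.
    return 5 - 3 * (x - 2)

assert f(2) == 5
print(f(8))  # -13
```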
|
<p>Dividing by a whole number I can describe by simply saying: split this "cookie" into two pieces, and you now have half a cookie. </p>
<p>Does anyone have an easy way to describe dividing by a fraction? For example, $1/2$ divided by $1/2$ is $1$.</p>
| fleablood | 280,126 | <p>$a \div b $ means "how many $b $s does it take to get $a $"</p>
<p>So "$2 \frac 12 \div \frac 12$" is "how many $\frac 12$s does it take to get $2\frac 12$?" The answer is $5$.</p>
<p>So how many half cookies does it take to make half a cookie? The answer is one.</p>
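<p>Exact rational arithmetic confirms the "how many $b$s fit in $a$" reading (a small sketch):</p>

```python
from fractions import Fraction

# 2 1/2 divided by 1/2: how many halves make two and a half?
assert Fraction(5, 2) / Fraction(1, 2) == 5

# 1/2 divided by 1/2: one half-cookie makes half a cookie.
assert Fraction(1, 2) / Fraction(1, 2) == 1
print("division as 'how many fit' verified")
```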
|
203,827 | <p>Suppose I have the following lists: </p>
<pre><code>prod = {{"x1", {"a", "b", "c", "d"}}, {"x2", {"e", "f",
"g"}}, {"x3", {"h", "i", "j", "k", "l"}}, {"x4", {"m",
"n"}}, {"x5", {"o", "p", "q", "r"}}}
</code></pre>
<p>and </p>
<pre><code>sub = {{"m", "n"}, {"o", "p", "r", "q"}, {"g", "f", "e"}};
</code></pre>
<p>for each element in <code>sub</code> I want to go through <code>prod</code> and select the entry if the element exists, such that I get the following output: </p>
<pre><code> {{"x2", {"e", "f", "g"}}, {"x4", {"m", "n"}}, {"x5", {"o", "p", "q","r"}}}
</code></pre>
<p>I tried doing: </p>
<pre><code>Table[Select[
prod[[All, 2]][[i]], # == ContainsAny[Map[Sort, sub]][[i]] &], {i,
Length[sub]}]
</code></pre>
<p>yet it doesn't work, am I missing something? </p>
| MelaGo | 63,360 | <pre><code>sortedsub = Sort /@ sub;
Select[prod, MemberQ[sortedsub, Sort[#[[2]]]] &]
</code></pre>
<blockquote>
<p>{{"x2", {"e", "f", "g"}}, {"x4", {"m", "n"}}, {"x5", {"o", "p", "q", "r"}}}</p>
</blockquote>
|
90,712 | <p>How many <em>unique</em> pairs of integers between $1$ and $100$ (inclusive) have a sum that is even? The solution I got was</p>
<p>$${100 \choose 1}{99 \choose 49}$$</p>
<p>I don't have a way to verify it, but I figured you pick one card from the 100, then you can pick 49 of the other cards (if the first card is even the other has to be even and if the first card is odd the other has to be odd as well).</p>
| TurlocTheRed | 397,318 | <p>An equivalent variation: To get an even sum, both numbers have to be even or both numbers have to be odd. For either case, there are C(50,2) possible combinations. So the final answer is 2*C(50,2). </p>
|
<p>So I have a question about the existence of a function <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> such that <span class="math-container">$f$</span> is not the pointwise limit of a sequence of continuous functions <span class="math-container">$\mathbb{R} \to \mathbb{R}$</span>.</p>
<p>I constructed a family of continuous functions <span class="math-container">$k_i:\mathbb{R} \to \mathbb{R}$</span> for an arbitrary function <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> such that <span class="math-container">$\lim_{n \to \infty} k_n(x)=f(x)$</span>; however, I am pretty sure my construction has a flaw in it, but I could not understand why it's wrong. Can someone tell me what I am doing wrong? Because it seems very unrealistic to be able to do this for any <span class="math-container">$f$</span>.</p>
<p>here is my construction steps:</p>
<p>1. Pick any arbitrary <span class="math-container">$x,y \in \mathbb{R}$</span> such that <span class="math-container">$x<y$</span>, and connect <span class="math-container">$f(x)$</span> to <span class="math-container">$f(y)$</span>.</p>
<p>2. Pick any arbitrary <span class="math-container">$z \in \mathbb{R}-\{x,y\}$</span>; if <span class="math-container">$f(y)<f(z)$</span> connect <span class="math-container">$f(z)$</span> to <span class="math-container">$f(y)$</span>, and if <span class="math-container">$f(z)<f(x)$</span> connect <span class="math-container">$f(z)$</span> to <span class="math-container">$f(x)$</span>; if <span class="math-container">$f(x)<f(z)<f(y)$</span>, connect <span class="math-container">$f(x)$</span> to <span class="math-container">$f(z)$</span> and <span class="math-container">$f(z)$</span> to <span class="math-container">$f(y)$</span>.</p>
<p>Now we repeat this uncountably many times with the other elements of <span class="math-container">$\mathbb{R}-\{x,y,z\}$</span>, performing step 2 for each new element against all previously chosen elements by an uncountable number of comparisons (after uncountably many iterations we will have uncountably many picked elements of <span class="math-container">$\mathbb{R}$</span>).</p>
<p>Call the set of picked elements <span class="math-container">$S$</span>. When we pick another element from <span class="math-container">$\mathbb{R}-S$</span>, say <span class="math-container">$t$</span>, we consider <span class="math-container">$ x_1= \max \{ x ; x \in S \land x<t \}$</span> and <span class="math-container">$x_2 = \min \{ x ; x \in S \land x>t \}$</span>, and we connect <span class="math-container">$f(x_1)$</span> to <span class="math-container">$f(t)$</span> and then <span class="math-container">$f(t)$</span> to <span class="math-container">$f(x_2)$</span>, and call all the connected lines over <span class="math-container">$S \cup \{t\}$</span> the function <span class="math-container">$k_t$</span>. We will do this an uncountable number of times.</p>
<p>I'm pretty sure there is a flaw in my argument, but can you please help me find it? I really appreciate your kindness and support.</p>
| principal-ideal-domain | 131,887 | <p>If you want to cover it simply use rectangles of width <span class="math-container">$1$</span> and height of the maximum of the function for that inverval of length <span class="math-container">$1$</span>. So you have
<span class="math-container">$$m_2(A) \le \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.$$</span></p>
|
13,989 | <p>Suppose $E_1$ and $E_2$ are elliptic curves defined over $\mathbb{Q}$.
Now we know that both curves are isomorphic over $\mathbb{C}$ iff
they have the same $j$-invariant.</p>
<p>But $E_1$ and $E_2$ could also be isomorphic over a subfield of $\mathbb{C}$,
as is the case for $E$ and its quadratic twist $E_d$. Now the general question is:</p>
<blockquote>
<p>Let $E_1$ and $E_2$ be defined over $\mathbb{Q}$ and isomorphic over $\mathbb{C}$, and let $K$ be
the smallest subfield of $\mathbb{C}$ such that $E_1$ and $E_2$ become isomorphic over $K$.
What can be said about $K$? Is it always a finite extension of $\mathbb{Q}$? If so, what can be
said about the extension $K|\mathbb{Q}$?</p>
</blockquote>
<p>My second question goes in the opposite direction. I start again with
quadratic twists. Let $E$ be an elliptic curve over $\mathbb{Q}$ and consider the quadratic extension
$\mathbb{Q}(\sqrt{d})|\mathbb{Q}$. Describe the curves over $\mathbb{Q}$ (or isomorphism classes over $\mathbb{Q}$)
which become isomorphic to $E$ over $\mathbb{Q}(\sqrt{d})$. I think the answer is $E$ and $E_d$.
Again I would like to know what happens if we take a larger extension.</p>
<blockquote>
<p>Let $E$ be an elliptic curve over $\mathbb{Q}$ and $K|\mathbb{Q}$ a finite extension.
Describe the isomorphism classes of elliptic curves over $\mathbb{Q}$ which become isomorphic
to $E$ over K.</p>
</blockquote>
<p>I have no idea what is the right context to answer such questions.</p>
| Sam Derbyshire | 362 | <p>A concrete explanation: for elliptic curves defined using short Weierstrass equations $E_i : y^2 = x^3 + a_ix + b_i$ over $K$ (not of characteristic $2$ or $3$), all isomorphisms over $L$ (an extension of $K$) are just given by $f(x,y) = (\lambda^2 x,\lambda^3 y)$ for some $\lambda \in L^\times$, so we then need $\lambda^4 a_1 = a_2$ and $\lambda^6 b_1 = b_2$ (from the expression for $j$). In this case, basic algebra shows that if $j \neq 0,1728$ then the isomorphism involves extracting a square root, so can take place in a degree 2 extension of $\mathbb{Q}$. If $j=1728$, then $b_i=0$ and we just need to extract a fourth root, so we have (at most) a degree $4$ extension over which $E_1$ and $E_2$ become isomorphic. Similarly, if $j=0$, we can need up to a degree $6$ extension.</p>
<p>You can arrange so that you need to extract the root of any given element in $K^\times$, this describes the behaviour over various extensions.</p>
<p>I believe that the situation in characteristic $2$ or $3$ gets rather trickier, and you might need an extension of degree up to $24$.</p>
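<p>The invariance of the $j$-invariant under the scaling $(a_1,b_1)\mapsto(\lambda^4 a_1,\lambda^6 b_1)$ can be checked with exact rationals; the curve and $\lambda$ below are arbitrary illustrative choices, not examples from the answer:</p>

```python
from fractions import Fraction as F

def j_invariant(a, b):
    # j = 1728 * 4a^3 / (4a^3 + 27b^2) for the curve y^2 = x^3 + ax + b.
    return 1728 * 4 * a**3 / (4 * a**3 + 27 * b**2)

a1, b1 = F(-1), F(1)               # E1: y^2 = x^3 - x + 1 (arbitrary example)
lam = F(3, 2)                      # an arbitrary nonzero scaling factor
a2, b2 = lam**4 * a1, lam**6 * b1  # E2, isomorphic via (x, y) -> (lam^2 x, lam^3 y)

assert j_invariant(a1, b1) == j_invariant(a2, b2)
print(j_invariant(a1, b1))
```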
|
3,130,059 | <p>I have faced this differential problem: <span class="math-container">$(y'(x))^3 = 1/x^4$</span>. </p>
<p>From the fundamental theorem of algebra I know there exist 3 solutions <span class="math-container">$y_1$</span>, <span class="math-container">$y_2$</span>, <span class="math-container">$y_3$</span>, but formally how can I proceed to deduce that? </p>
| Fred | 380,717 | <p>If <span class="math-container">$y'(x)^3=\frac{1}{x^4}$</span>, then <span class="math-container">$y(x)=-3 x^{-1/3}+c$</span> ........</p>
|
<p>My math book says a linear equation has exactly one solution, because $ax + b = 0$ gives $x =-\frac{b}{a}$. But I've solved many linear equations with multiple solutions before. (I'm not very good at math. Need help...)</p>
| Rodrigo de Azevedo | 339,790 | <p>If we have $2$ unknowns, then the linear system</p>
<p>$$a_1 x_1 + a_2 x_2 = b$$</p>
<p>has, in general, infinitely many solutions. Why is that? Assuming that $a_1 \neq 0$, we write</p>
<p>$$x_1 = \frac{b}{a_1} - \left(\frac{a_2}{a_1}\right) x_2$$</p>
<p>Let $x_2 = \gamma$, where $\gamma \in \mathbb{R}$. Then, the solution set is a <strong>line</strong> parameterized as follows</p>
<p>$$\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \begin{bmatrix} \frac{b}{a_1}\\ 0\end{bmatrix} + \gamma \begin{bmatrix} - \frac{a_2}{a_1}\\ 1\end{bmatrix}$$ </p>
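<p>A quick check that every point of this parameterized line satisfies the equation (the coefficients below are arbitrary illustrative choices):</p>

```python
import numpy as np

# One equation in two unknowns: a1*x1 + a2*x2 = b has a line of solutions.
a1, a2, b = 2.0, 3.0, 6.0

def sol(gamma):
    # Particular point plus gamma times a direction spanning the null space.
    return np.array([b / a1, 0.0]) + gamma * np.array([-a2 / a1, 1.0])

for gamma in np.linspace(-5.0, 5.0, 11):
    x1, x2 = sol(gamma)
    assert np.isclose(a1 * x1 + a2 * x2, b)
print("every point on the parameterized line solves the equation")
```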
|
<p>Do the columns of a matrix always represent different vectors? If so, I don't understand how, if I have a <span class="math-container">$3\times3$</span> matrix where the rows represent the dimensions and I multiply it by a <span class="math-container">$3\times1$</span> column vector with the same dimensions, it gives me a vector. Some sources say the result comes from the dot product of each row with the vector - is this correct? If so, <span class="math-container">$a_{12}\cdot b_{21}$</span> would give zero, right?</p>
| Watercrystal | 571,790 | <p>Well, on the first level your answer is "because that is how we defined things", but this is hardly a satisfying explanation.
The real reason is that matrices are just a convenient way to represent linear functions between (finite dimensional) vector spaces, i.e. functions of the form <span class="math-container">$f \colon V \to W$</span> (where <span class="math-container">$V$</span> and <span class="math-container">$W$</span> are vector spaces over some field <span class="math-container">$F$</span>) such that <span class="math-container">$f(ax + y) = af(x) + f(y)$</span> for all <span class="math-container">$a \in F$</span> and all <span class="math-container">$x, y \in V$</span>.
These linear functions, also called <em>homomorphisms</em>, are the main things we actually study in linear algebra and since they are (structure-preserving) maps between vector spaces, the "output" of such a map is a vector.
For brevity's sake I won't explain why or how matrices represent linear functions, but essentially every linear function (relative to the bases of our choosing) can be represented as a <span class="math-container">$\dim W \times \dim V$</span> matrix <span class="math-container">$A_f$</span> such that <span class="math-container">$f(x) = A_fx$</span>.
This is the real reason we define matrices as we do: To make them convenient representations of the objects we study.</p>
<p>Also let me quickly say that it is an absolute shame how many linear algebra courses never touch on linear functions but rather only focus on doing matrix stuff which, in my opinion, makes the subject more technical and obstructs the students' ability to get a good intuition about it.</p>
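<p>A small numerical sketch of this correspondence (the matrix is an arbitrary example): the matrix-vector product is linear, and the columns of the matrix are the images of the standard basis vectors, which connects back to the question of what the columns "represent".</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # represents some linear f: R^2 -> R^3

x, y = rng.standard_normal(2), rng.standard_normal(2)
c = 2.7

# Linearity of x -> A @ x: f(c x + y) = c f(x) + f(y).
assert np.allclose(A @ (c * x + y), c * (A @ x) + A @ y)

# The j-th column of A is the image of the j-th standard basis vector.
assert np.allclose(A @ np.array([1.0, 0.0]), A[:, 0])
print("A acts as a linear map; columns are images of basis vectors")
```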
|
<p>I understand that a bounded linear operator <span class="math-container">$T$</span> is called "polynomially compact" if there is a nonzero polynomial <span class="math-container">$p$</span> such that <span class="math-container">$p(T)$</span> is compact. </p>
<p>Can anyone help me with examples of polynomially compact operators? </p>
| RedLapm | 779,251 | <p>This should be the correct way to solve:</p>
<p><span class="math-container">$\mathbf I \vec x - \mathbf A \vec x = \vec d$</span></p>
<p><span class="math-container">$(\mathbf I - \mathbf A)\vec x = \vec d$</span></p>
<p><span class="math-container">$(\mathbf I - \mathbf A)^{-1}(\mathbf I - \mathbf A)\vec x = (\mathbf I - \mathbf A)^{-1}\vec d$</span></p>
<p><span class="math-container">$\vec x = (\mathbf I - \mathbf A)^{-1}\vec d$</span></p>
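<p>Numerically one would solve the linear system directly rather than form the inverse; a sketch with arbitrary example data, chosen so that $\mathbf I - \mathbf A$ is invertible:</p>

```python
import numpy as np

# Arbitrary example data with I - A invertible (det(I - A) = 0.45 != 0).
A = np.array([[0.2, 0.1],
              [0.3, 0.4]])
d = np.array([1.0, 2.0])

x = np.linalg.solve(np.eye(2) - A, d)  # solve (I - A)x = d directly

assert np.allclose(x - A @ x, d)       # check I x - A x = d
print(x)
```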
|
<p>Let $F$ be a field. What is $\operatorname{Spec}(F)$? I know that $\operatorname{Spec}(R)$ for a ring $R$ is the set of prime ideals of $R$. But a field doesn't have any non-trivial ideals.</p>
<p>Thanks a lot!</p>
| Rene Schoof | 36,713 | <p>$Spec(Z)$ does not have the cofinite topology. The non-empty
open sets are precisely the cofinite sets that contain the zero ideal.</p>
|
2,041,484 | <p>Solve the system of equations for all real values of $x$ and $y$
$$5x(1 + {\frac {1}{x^2 +y^2}})=12$$
$$5y(1 - {\frac {1}{x^2 +y^2}})=4$$</p>
<p>I know that $0<x<{\frac {12}{5}}$ which is quite obvious from the first equation.<br>
I also know that $y \in \mathbb R$ $\sim${$y:{\frac {-4}{5}}\le y \le {\frac 45}$}</p>
<p>I don't know what to do next.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>for $$x,y\ne 0$$ we obtain
$$1+\frac{1}{x^2+y^2}=\frac{12}{5x}$$
$$1-\frac{1}{x^2+y^2}=\frac{4}{5y}$$
adding both we get
$$5=\frac{6}{x}+\frac{2}{y}$$
from here we obtain
$$y=\frac{2x}{5x-6}$$
Can you proceed?
After substitution and factorizing we get this equation:
$$- \left( x-2 \right) \left( 5\,x-2 \right) \left( 5\,{x}^{2}-12\,x+9
\right)=0
$$</p>
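<p>The two rational roots produced by this factorization can be checked exactly (a quick sketch; the quadratic factor has discriminant $144 - 180 < 0$, so it contributes no further real solutions):</p>

```python
from fractions import Fraction as F

def check(x, y):
    # Verify 5x(1 + 1/(x^2+y^2)) = 12 and 5y(1 - 1/(x^2+y^2)) = 4 exactly.
    r2 = x * x + y * y
    return 5 * x * (1 + 1 / r2) == 12 and 5 * y * (1 - 1 / r2) == 4

assert check(F(2), F(1))         # from the factor x - 2, with y = 2x/(5x-6)
assert check(F(2, 5), F(-1, 5))  # from the factor 5x - 2
print("both real solutions verified")
```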
|
<p>Let <span class="math-container">$H_1=(H_1, (\cdot, \cdot )_1)$</span> and <span class="math-container">$H_2=(H_2, (\cdot, \cdot )_2)$</span> be Hilbert spaces. Suppose that <span class="math-container">$H_1$</span> is continuously and densely embedded in <span class="math-container">$H_2$</span>. Symbolically, <span class="math-container">$H_1 \stackrel d{\hookrightarrow} H_2$</span>. Let <span class="math-container">$X \subset H_1$</span> and <span class="math-container">$Y \subset H_2$</span> be closed subspaces such that <span class="math-container">$X \subset Y$</span>, with <span class="math-container">$X,Y \neq \emptyset$</span>. I know that
<span class="math-container">$$
X=(X, (\cdot, \cdot )_1) \quad \text{and} \quad Y=(Y, (\cdot, \cdot )_2) \tag{1}
$$</span>
are Hilbert spaces.</p>
<p><strong>Question.</strong> Is <span class="math-container">$X \stackrel d{\hookrightarrow} Y$</span>?</p>
<p>It's clear to me that, since <span class="math-container">$X \subset Y$</span> and due to <span class="math-container">$(1)$</span>, <span class="math-container">$X$</span> is continuously embedded in <span class="math-container">$Y$</span>. But I don't know (and couldn't prove) whether <span class="math-container">$X$</span> is dense in <span class="math-container">$Y$</span>. I tried using the Hahn-Banach theorem, but without success.</p>
| alepopoulo110 | 351,240 | <p>Of course not, the question fails trivially in this generality: take any Hilbert space <span class="math-container">$H$</span> and let <span class="math-container">$H_1=H_2=H$</span>. Let <span class="math-container">$X,Y$</span> be closed subspaces with <span class="math-container">$X\subset Y$</span> and <span class="math-container">$X\neq Y$</span>. If <span class="math-container">$X$</span> embeds densely in <span class="math-container">$Y$</span> that means that the identity function <span class="math-container">$i:X\to Y$</span> maps <span class="math-container">$X$</span> to a dense subspace of <span class="math-container">$Y$</span>, i.e. <span class="math-container">$X$</span> is dense in <span class="math-container">$Y$</span>, i.e. <span class="math-container">$\bar{X}=Y$</span>. But <span class="math-container">$X$</span> is closed, so <span class="math-container">$X=Y$</span>, a contradiction since <span class="math-container">$Y\neq X$</span>.</p>
|
4,363,327 | <p>Consider the one-layer neural network <span class="math-container">$y=\mathbf{w}^T\mathbf{x} +b$</span> and the optimization objective <span class="math-container">$J(\mathbf{w}) = \mathbb{E}\left[ \frac12 (1-y\cdot t)^2 \right]$</span> where <span class="math-container">$t\in\{-1,1\}$</span> is the label of our data point. I am asked to compute the Hessian of <span class="math-container">$J$</span> at the current location <span class="math-container">$\mathbf{w}$</span> in the parameter space. I know that the correct solution is <span class="math-container">$H=\frac{\partial^2J}{\partial \mathbf{w}^2} = \mathbb{E}\left[ \mathbf{x}\mathbf{x}^T \right]$</span>.</p>
<p>I am having issues arriving at this exact formulation because of how differentiation of row/column vectors works. My solution goes as follows:
We first determine the first derivative.</p>
<p><span class="math-container">\begin{align*}
&\frac{\partial \mathbb{E}\left[ \frac12 \left( 1 - yt \right)^2 \right]}{\partial \mathbf{w}}\\
&= \mathbb{E}\left[ \frac{\partial \frac12 \left( 1 - yt \right)^2 }{\partial \mathbf{w}} \right]\\
&= \mathbb{E}\left[ \frac{\partial \frac12 \left( 1 - yt \right)^2 }{\partial y} \frac{\partial y}{\partial \mathbf{w}} \right]\\
&= \mathbb{E}\left[ -t\cdot(1-yt) \frac{\partial \mathbf{w}^T\mathbf{x}+b}{\partial \mathbf{w}} \right]\\
&= \mathbb{E}\left[ -t\cdot(1-yt) \mathbf{x} \right]\\
\end{align*}</span></p>
<p>Note that I think <span class="math-container">$\mathbf{x}\in\mathbb{R}^d$</span> is considered a <em>column</em> vector, and according to the matrix cookbook, <span class="math-container">$ \frac{\partial \mathbf{w}^T\mathbf{x}+b}{\partial \mathbf{w}} = \mathbf{x}$</span>, not <span class="math-container">$\mathbf{x}^T$</span> (I have found sources saying otherwise...)</p>
<p>We now differentiate this again, in order to derive the Hessian.
<span class="math-container">\begin{align*}
&\frac{\partial \mathbb{E}\left[ -t\cdot(1-yt) \mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \mathbb{E}\left[ \frac{\partial -t(1-yt)\mathbf{x}}{\partial y} \frac{\partial \mathbf{w}^T\mathbf{x}+b}{\partial \mathbf{w}} \right]\\
&= \mathbb{E}\left[ \frac{\partial -t(1-yt)\mathbf{x}}{\partial y} \mathbf{x} \right]\\
&= \mathbb{E}\left[ \frac{\partial (-t+yt^2)\mathbf{x}}{\partial y} \mathbf{x} \right]\\
&= \mathbb{E}\left[ \underbrace{t^2}_{= 1} \mathbf{x} \mathbf{x} \right]\\
&= \mathbb{E}\left[ \mathbf{x} \mathbf{x} \right]\\
&\neq \mathbb{E}\left[ \mathbf{x}\mathbf{x}^T \right]
\end{align*}</span></p>
<p>So here, we have column vector times column vector which is not really defined. I do not know where to get the transpose from, though. I tried deriving the whole thing again with the assumption that <span class="math-container">$\frac{\partial \mathbf{w}^T\mathbf{x}+b}{\partial \mathbf{w}} = \mathbf{x}^T$</span>, instead of <span class="math-container">$\mathbf{x}$</span>. Then we either get <span class="math-container">$\mathbb{E}\left[ \mathbf{x}^T\mathbf{x}^T \right]$</span> similar to before, or, <em>if</em> we assume that the derivative of a row vector w.r.t. scalar is a column vector, we get <span class="math-container">$\mathbb{E}\left[ \mathbf{x}\mathbf{x}^T \right]$</span>, which is what we want. However,this would be a very weird assumption, to me, because why would the derivative of a row vector w.r.t. to a scalar be a column vector? And furthermore, it contradicts the matrix cookbook, which says that <span class="math-container">$\frac{\partial \mathbf{w}^T\mathbf{x}+b}{\partial \mathbf{w}} = \mathbf{x}$</span>.</p>
<p>I would be very glad for help here. Where did I go wrong? Which assumptions of row/column vectors are correct? Thank you so much for your help!</p>
<p>Last, I found an alternative way of solving it where you don't have that issue of transposing or not, by just inserting the definition of the prediction <span class="math-container">$y$</span>, but I still would like to know where the issue in my solution above lies.</p>
<p><span class="math-container">\begin{align*}
&\frac{\partial \mathbb{E}\left[ -t\cdot(1-yt) \mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \frac{\partial \mathbb{E}\left[ -t\mathbf{x} + t^2y\mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \frac{\partial \mathbb{E}\left[ -t\mathbf{x} + t^2(w^T\mathbf{x} + b)\mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \frac{\partial \mathbb{E}\left[ -t\mathbf{x} + t^2(w^T\mathbf{x})\mathbf{x} + b\mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \frac{\partial \mathbb{E}\left[ -t\mathbf{x} + t^2\mathbf{x}^Tw\mathbf{x} + b\mathbf{x} \right]}{\partial \mathbf{w}}\\
&= \mathbb{E}\left[ \underbrace{t^2}_{= 1} \mathbf{x}\mathbf{x}^T \right]\\
&= \mathbb{E}\left[ \mathbf{x}\mathbf{x}^T \right]\\
\end{align*}</span></p>
| Trevor Gunn | 437,127 | <p>Neither is "the correct way of doing it," nor is either wrong. There are two conventions, and as long as you follow the same convention consistently, you will get sensible answers.</p>
<p>If you treat the derivative as a linear operator approximating your function, then the derivative of a function <span class="math-container">$\mathbf{R}^n \to \mathbf{R}^m$</span> is a linear function <span class="math-container">$\mathbf{R}^n \to \mathbf{R}^m$</span>. In the case that <span class="math-container">$m = 1$</span>, that gives you a row vector.</p>
<p>Now let's start:</p>
<p><span class="math-container">\begin{align}
D_w (w^Tx + b)^2 &= 2(w^Tx + b)x^T \\
H_w (w^Tx + b)^2 &= D_w [2(w^Tx + b)x^T]
\end{align}</span></p>
<p>Row vectors, right? But now stop right there! The function <span class="math-container">$f(w) = 2(w^Tx + b)x^T$</span> is a function from <span class="math-container">$\mathbf{R}^n \to \mathbf{R}^n$</span> <strong>even if</strong> the output is a row vector. More precisely, we want to pick a basis (the dual basis) for the space of row vectors and calculate our derivative in that basis. Representing a row vector in the dual basis is just taking the transpose, so we get</p>
<p><span class="math-container">$$
H_w (w^Tx + b)^2 = D_w [2(w^Tx + b)x] = 2x D_w (w^Tx) = 2xx^T
$$</span></p>
<p>where we use the rule <span class="math-container">$D_x cf(x) = c D_x f(x)$</span>.</p>
<hr />
<p>This is how I think about the Hessian. As the second derivative where we need to compute each derivative in the appropriate basis and that means taking the transpose on the second step. I learned to think about it this way studying differential geometry/topology.</p>
<p>Sources that don't want to discuss derivatives on manifolds, might simply define <span class="math-container">$H(f) = D(\nabla f)$</span> (the derivative/Jacobian of the gradient). Where by definition <span class="math-container">$\nabla f = (D f)^T$</span>.</p>
<p>When you do it basis to basis, every map <span class="math-container">$\mathbf{R}^n \to \mathbf{R}^m$</span> is the same type of thing. If you aren't considering bases, then you need to worry about row vectors and column vectors, and there are 4 kinds of maps <span class="math-container">$\mathbf{R}^n \to \mathbf{R}^m$</span> (row to row, row to column, etc.). Then you need a consistent way of computing those derivatives.</p>
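The claim that the Hessian of $g(\mathbf w)=(\mathbf w^T\mathbf x+b)^2$ is $2\mathbf x\mathbf x^T$ can be checked with finite differences; a sketch with arbitrary made-up test values (NumPy assumed):

```python
import numpy as np

# Finite-difference check that the Hessian of g(w) = (w.x + b)^2 is 2 x x^T.
# The values of x, b, w below are arbitrary made-up test data.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
b = 0.7
w = rng.normal(size=3)

def g(w):
    return (w @ x + b) ** 2

eps = 1e-4
H = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        ei = np.eye(3)[i] * eps
        ej = np.eye(3)[j] * eps
        # central mixed second difference for d^2 g / (dw_i dw_j)
        H[i, j] = (g(w + ei + ej) - g(w + ei - ej)
                   - g(w - ei + ej) + g(w - ei - ej)) / (4 * eps ** 2)

print(np.allclose(H, 2 * np.outer(x, x), atol=1e-4))  # True
```

Because $g$ is an exact quadratic in $\mathbf w$, the central difference is exact up to floating-point rounding.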
|
204,043 | <p>I am looking at the following optimization problem</p>
<p><span class="math-container">$$
\begin{align*}
\max\ & 1000 r_1 + \frac{1}{2}r_2 + \frac{1}{3}r_3\\
\text{s.t. }& 1000^2 r_1 + \frac{1}{4}r_2 + \frac{1}{9}r_3 = \frac{1}{9},\\
& 1000^2 p_1 + \frac{1}{4}p_2 + \frac{1}{9}p_3 = \frac{1}{9},\\
& r_1 + r_2 + r_3 + r_4 = 2\\
& p_1 + p_2 + p_3 + p_4 = 2\\
& 0\leq p_i\leq 1,\quad i = 1,\dots,4\\
& r_i^2\leq p_i,\quad i = 1,\dots,4
\end{align*}
$$</span>
with the following Mathematica code (it is clear that <span class="math-container">$r_1=p_1=0$</span>, <span class="math-container">$r_2=p_2=0$</span>, <span class="math-container">$r_3=p_3=1$</span>, <span class="math-container">$r_4=p_4=1$</span> is a feasible solution so I picked it to be the initial solution)</p>
<pre><code>FindMaximum[{1000 r1 + 1/2 r2 + 1/3 r3,
{1000^2 r1 + 1/4 r2 + 1/9 r3 == 1/9,
r1 + r2 + r3 + r4 == 2,
r1^2 <= p1, r2^2 <= p2, r3^2 <= p3, r4^2 <= p4,
1000^2 p1 + 1/4 p2 + 1/9 p3 == 1/9,
p1 + p2 + p3 + p4 == 2,
0 <= p1 <= 1, 0 <= p2 <= 1, 0 <= p3 <= 1, 0 <= p4 <= 1}
},
{{r1, 0}, {r2, 0}, {r3, 1}, {r4, 1}, {p1, 0}, {p2, 0}, {p3, 1}, {p4, 1}}, AccuracyGoal -> 10]
</code></pre>
<p>Mathematica returns the following answer</p>
<pre><code>{0.45521, {r1 -> -6.30073*10^-8, r2 -> 0.268328, r3 -> 0.963328,
r4 -> 0.768345, p1 -> 0., p2 -> 0.0719999, p3 -> 0.928, p4 -> 1.}}
</code></pre>
<p>If you look at the constraint that <span class="math-container">$1000^2 p_1 + \frac{1}{4}p_2 + \frac{1}{9}p_3 = \frac{1}{9}$</span>, you can see that in the solution Mathematica returns,
the left-hand side evaluates to <span class="math-container">$0.12111\cdots$</span>, so the constraint is not satisfied.</p>
<p>Is this a bug or just a numerical inaccuracy? Just the value seems too large compared with <span class="math-container">$1/9$</span> -- the additive error is not at the scale of <span class="math-container">$10^{-4}$</span> or <span class="math-container">$10^{-5}$</span>.</p>
| user64494 | 7,152 | <p>I don't think it is a bug. The feasible set consists of only one element:</p>
<pre><code>Reduce[{1000^2 r1 + 1/4 r2 + 1/9 r3 == 1/9, r1 + r2 + r3 + r4 == 2,
r1^2 <= p1, r2^2 <= p2, r3^2 <= p3, r4^2 <= p4, 1000^2 p1 + 1/4 p2 + 1/9 p3 == 1/9,
p1 + p2 + p3 + p4 == 2, 0 <= p1 <= 1, 0 <= p2 <= 1, 0 <= p3 <= 1, 0 <= p4 <= 1},
{r1, r2, r3, r4, p1, p2, p3, p4}, Reals]
</code></pre>
<blockquote>
<p>r1 == 0 && r2 == 0 && r3 == 1 && r4 == 1 && p1 == 0 && p2 == 0 &&
p3 == 1 && p4 == 2 - p3</p>
</blockquote>
<p>and Mathematica does her best.</p>
|
81,588 | <p>A certain function passes through the points $(-3,5)$ and $(5,2)$. We are asked to find this function; of course this is simplest if we consider the point-slope form </p>
<p>$$y-y_1=m(x-x_1)$$</p>
<p>but could we find a more general form of equation, for example quadratic or cubic?</p>
| rogerl | 27,542 | <p>One more proof, similar to Greg Martin's: Suppose $\alpha$ is a root of $f(x)=x^p-x+a$ in some splitting field; then, since we are in characteristic $p$,
\begin{equation*}
(\alpha+1)^p - (\alpha+1) + a
= \alpha^p + 1 - \alpha - 1 + a
= \alpha^p - \alpha + a = 0,
\end{equation*}
so that $\alpha+1$ is also a root. It follows that the roots of $f$ are $\alpha+i$ for $0\le i < p$. If $f$ factors in $\mathbb{F}_p[x]$, say $f = gh$ with $0 < \deg g < p$, then the sum of the roots of $g$ is $k\alpha + r$ where $\deg g = k$ and $k, r\in\mathbb{F}_p$. Since $g\in \mathbb{F}_p[x]$, this sum lies in $\mathbb{F}_p$, and since $0 < k < p$ makes $k$ invertible, it follows that $\alpha\in \mathbb{F}_p$. But that implies that $f$ splits in $\mathbb{F}_p$, which is not the case (for example, neither $0$ nor $1$ is a root, since $f(0)=f(1)=a\neq 0$). Thus $f$ is irreducible.</p>
|
467 | <p>Moderators have started incorporating the old faq material in the new <a href="https://mathoverflow.net/help">help system</a>. It wasn't a perfect fit, a lot of stuff is no longer relevant, redundant, missing or broken. You can help by going through the <a href="https://mathoverflow.net/help">help center</a> and post anything that needs fixing here.</p>
<p>In case substantial editing is needed it would help to add a proposal; we may not use it verbatim but it will nevertheless save a lot of effort on our end.</p>
<p>Note that this is not the right place for policy discussion. If you think we should be wearing shirts instead of wearing pants, start a new question with your proposal...</p>
| Neil Strickland | 10,366 | <p>I don't know what things you can or can't edit, but ideally there should be a couple of lines about the scope of the site and the relationship with MSE right near the top of <a href="https://mathoverflow.net/help">https://mathoverflow.net/help</a>. I suggest:</p>
<blockquote>
<p>This site is for questions about research level mathematics, of the
kind that you might encounter when writing or reading a PhD thesis,
research paper or graduate level book. For other kinds of questions
about mathematics, please use math.stackexchange.com instead.</p>
</blockquote>
|
467 | <p>Moderators have started incorporating the old faq material in the new <a href="https://mathoverflow.net/help">help system</a>. It wasn't a perfect fit, a lot of stuff is no longer relevant, redundant, missing or broken. You can help by going through the <a href="https://mathoverflow.net/help">help center</a> and post anything that needs fixing here.</p>
<p>In case substantial editing is needed it would help to add a proposal; we may not use it verbatim but it will nevertheless save a lot of effort on our end.</p>
<p>Note that this is not the right place for policy discussion. If you think we should be wearing shirts instead of wearing pants, start a new question with your proposal...</p>
| Neil Strickland | 10,366 | <p>I suggest that the topics in the 'Asking' menu should be reordered as follows. Roughly speaking, stuff that all new posters should read comes at the top, and stuff that only becomes relevant when there is a problem goes further down.</p>
<ul>
<li>What topics can I ask about here?</li>
<li>How do I ask a good question?</li>
<li>What types of questions should I avoid asking?</li>
<li>What are tags, and how should I use them?</li>
<li>What should I do when someone answers my question?</li>
<li>What should I do if no one answers my question?</li>
<li>What does it mean if a question is "closed" or "on hold"?</li>
<li>Why are some questions marked as duplicate?</li>
<li>What if I disagree with the closure of a question? How can I reopen it?</li>
<li>Why do I see a message that my question does not meet quality standards?</li>
<li>Why and how are some questions deleted?</li>
<li>Why are questions no longer being accepted from my account?</li>
</ul>
|
2,506,182 | <p>The question speaks for itself: since the question comes from a contest environment, one where the use of calculators is obviously not allowed, can anyone perhaps supply me with an easy way to calculate the first or last digit in such situations?</p>
<p>My intuition said that I can look at the cases of $3$ with an exponent ending in a $1$, so I looked at $3^1=3$, $3^{11}=177,147$ and $3^{21}= 10,460,353,203$. So there is a slight pattern, but I'm not sure if it holds, and even if it does I will have shown it holds just for exponents ending in $1$, so I was wondering whether there is an easier way of knowing.</p>
<p>Any help is appreciated, thank you.</p>
| Aaron Montgomery | 485,314 | <p>Hint: Consider the first few powers of $3$ (say, the first five) and look for a pattern.</p>
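The hint pays off because the last digits of powers of $3$ cycle with period $4$; a quick check in Python:

```python
# Last digits of 3^1 .. 3^12: they cycle with period 4.
digits = [pow(3, k, 10) for k in range(1, 13)]
print(digits)  # [3, 9, 7, 1, 3, 9, 7, 1, 3, 9, 7, 1]

# Hence the last digit of 3^n depends only on n mod 4;
# e.g. 21 = 1 (mod 4), matching 3^21 = 10,460,353,203 ending in 3.
print(pow(3, 21, 10))  # 3
```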
|
1,056,041 | <p>Look at problem 8:</p>
<blockquote>
<p>Let $n\geq 1$ be a fixed integer. Calculate the distance:
$$\inf_{p,f}\max_{x\in[0,1]}|f(x)-p(x)|$$ where $p$ runs over
polynomials with degree less than $n$ with real coefficients and $f$
runs over functions $$ f(x)=\sum_{k=n}^{+\infty}c_k\, x^k$$ defined on
the closed interval $[0,1]$, where $c_k\geq 0$ and
$\sum_{k=n}^{+\infty}c_k = 1.$</p>
</blockquote>
<p>This is what I have so far.</p>
<p>Clearly for $n=1$, we have $1/2$.
I am conjecturing that for $n>1$ we have $(n-1)^{n-1} / n^n$ or something similar to that (just put $x^{n-1}$ and $x^n$, then use AM-GM). It's just weird that the pattern does not fit, so it's probably wrong. Any ideas?</p>
| d125q | 112,944 | <p>You can think of the number of favorable arrangements in the following way: choose the empty box in $\binom{n}{1}$ ways. For each such choice, choose the box that will have at least $2$ balls (there has to be one such box) in $\binom{n - 1}{1}$ ways. And for this box, choose the balls that will go inside in $\binom{n}{2}$ ways. Now permute the remaning balls in $(n - 2)!$ ways.</p>
<p>Thus, the number of favorable arrangements is:</p>
<p>$$
\binom{n}{1} \binom{n - 1}{1} \binom{n}{2} (n - 2)! = \binom{n}{2} n!
$$</p>
|
2,663,130 | <p>Let $f:\mathbb{R}^2\to \mathbb{R}^2$ be the function $f(x,y)=(\frac{1}{2}x+y,x-2y)$. Find the image of the set $A\subset\mathbb{R}^2$ bounded by the lines $x-2y=0, x-2y+2=0, x+2y-2=0, x+2y-3=0.$</p>
<p>The set $A$ is a parallelogram with vertices $(1,\frac{1}{2}), (\frac{3}{2},\frac{3}{4}), (\frac{1}{2},\frac{5}{4}), (0,1)$.</p>
<p>What is $f(A)$?</p>
<p>Any help is welcome. Thanks in advance. </p>
| Peter Szilas | 408,605 | <p>Hint: $\dfrac{n}{n+1}= 1- \dfrac{1}{n+1}.$</p>
|
2,557,520 | <p>PS: Before posting this, I tried to grasp <a href="https://math.stackexchange.com/questions/1996141/if-ffx-x2-x1-what-is-f0">this</a> (although I didn't completely understand mfl's answer either).</p>
<p>I was shown the way to solve it: if I set $$\frac{3x-2}{2}=0 \Rightarrow x=\frac{2}{3} \Rightarrow x^2-x-1=-\frac{11}{9},$$ then that value is also $f(0)$ (i.e. $f(0)=-11/9$), which is true, but I don't quite get it: why is the value of $x^2-x-1$ at this $x$ (i.e. $-11/9$) also the value of $f(0)$? I asked the person who solved it for me if there was a proof of this, but she couldn't provide one. So, I would be very happy and obliged if someone here would show a proof that $f(0)=-11/9$.</p>
| John | 7,163 | <p>Hint: To calculate $f(0)$ given an expression for $f((3x-2)/2)$, you could find the value of $x$ that makes the argument zero, and then use that value in the expression.</p>
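The hint can be checked with exact rational arithmetic; a tiny sketch, assuming the problem was $f\!\left(\frac{3x-2}{2}\right)=x^2-x-1$:

```python
from fractions import Fraction as F

x = F(2, 3)                      # chosen so that (3x - 2)/2 = 0
assert (3 * x - 2) / 2 == 0      # the argument of f really is 0
print(x * x - x - 1)             # -11/9, which is therefore f(0)
```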
|
72,478 | <p>I am trying to find the intervals on which $f$ is increasing or decreasing, the local minima and maxima, and the concavity and inflection points for $f(x)=\sin x+\cos x$ on the interval $[0,\pi]$.</p>
<p>I know at $\pi/4$ the derivative will equal zero. So that gives me my critical numbers, positive and negative $\pi/4$, so now I need to find the intervals, which is not making any sense to me. I thought they could only change at critical numbers, but $\pi$ and $2\pi$ give different values: I am getting a positive for $2\pi$ and a negative for $\pi$. How can this happen if the only critical number is $\pi/4$?</p>
| NoChance | 15,180 | <p>$f(x)=\sin(x)+\cos(x)$</p>
<p>$f'(x)=\cos(x)-\sin(x)$</p>
<p>critical points are when $f'(x)=0$: </p>
<p>i.e, at:</p>
<p>$\cos(x)=\sin(x)$, which is satisfied by values of x such as:</p>
<p>...,$-7{\pi}/4$ , $-3{\pi}/4$, ${\pi}/4$, $5{\pi}/4$,...</p>
<p>now, you need to examine the second derivative's sign at the above points:</p>
<p>$f''(x)=-\sin(x)-\cos(x)$</p>
<p>at $-7{\pi}/4$ , $f''(x)$ is (-) --> Local Max.</p>
<p>at $-3{\pi}/4$, $f''(x)$ is (+) --> Local Min.</p>
<p>at ${\pi}/4$ , $f''(x)$ is (-) --> Local Max.</p>
<p>at $5{\pi}/4$, $f''(x)$ is (+) --> Local Min.</p>
<p>(Of these, only ${\pi}/4$ lies in your interval $[0,\pi]$.)</p>
<p>The link (<a href="https://math.stackexchange.com/questions/72444/graphing-and-differentiation/72464#72464">example</a>) may help.</p>
<p>also, These plots may help:
<img src="https://i.stack.imgur.com/Mqhuj.jpg" alt="enter image description here"></p>
|
2,949,224 | <p>How do I show that if <span class="math-container">$\sqrt{n}(X_n - \theta)$</span> converges in distribution, then <span class="math-container">$X_n$</span> converges in probability to <span class="math-container">$\theta$</span>? </p>
<p>Setting <span class="math-container">$Y_n = \sqrt{n}(X_n - \theta)$</span>, convergence in distribution (to a random variable <span class="math-container">$Y$</span>) means: <span class="math-container">$P(Y_n \leq y) \to P(Y \leq y)$</span> at every continuity point <span class="math-container">$y$</span> of the limit distribution. </p>
<p>Convergence in probability requires that <span class="math-container">$P(|X_n - \theta| \geq \epsilon) \rightarrow 0 $</span> for every <span class="math-container">$\epsilon > 0$</span>. </p>
<p>My reasoning so far is the following. Given convergence in distribution, I can use Prohorov's theorem: for every positive <span class="math-container">$\epsilon$</span> there is a positive <span class="math-container">$M$</span> with <span class="math-container">$P(|Y_n|>M)< \epsilon$</span> for all <span class="math-container">$n$</span>. Now, I need to show that this translates into <span class="math-container">$P(|X_n - \theta| \geq \epsilon)\to 0$</span>, and this will be convergence in probability. I'm quite stuck, however; any hints are appreciated.</p>
| yurnero | 178,464 | <p>We have:
<span class="math-container">$$
\frac{1}{\sqrt{n}}\to 0\implies\frac{1}{\sqrt{n}}\overset{L}\to 0\implies X_n-\theta=\frac{1}{\sqrt{n}}[\sqrt{n}(X_n-\theta)]\overset{L}{\to} 0\implies X_n-\theta\overset{P}{\to} 0.
$$</span>
Here, <span class="math-container">$\overset{L}{\to}$</span> indicates convergence in distribution whereas <span class="math-container">$\overset{P}{\to}$</span> convergence in probability. The second implication above uses Slutsky's Theorem and last implication uses the fact that convergence in distribution to a constant implies convergence in probability to the same constant.</p>
|
3,657,428 | <p>My textbook says that</p>
<blockquote>
<p>If <span class="math-container">$f(x)$</span> is piecewise continuous on <span class="math-container">$(a,b)$</span> and satisfies <span class="math-container">$f(x) = \frac{1}{2} [f(x_{-})+f(x_{+})]$</span> for all <span class="math-container">$x\in(a,b)$</span>, and if <span class="math-container">$f(x_{0})\neq 0$</span>, then <span class="math-container">$|f(x)|>0$</span> on some interval containing <span class="math-container">$x_{0}$</span>.</p>
</blockquote>
<p>Why is this true?</p>
<p>Edit:</p>
<p>My attempt to prove this claim:</p>
<p>Suppose <span class="math-container">$f \in PC (a,b), f (x) = \frac{1}{2} [f(x-) + f(x+)]$</span>, and <span class="math-container">$f(x_{0}) \neq 0$</span> for some <span class="math-container">$x_{0} \in (a,b)$</span>.</p>
<p>Case I:
If <span class="math-container">$f$</span> is continuous at <span class="math-container">$x_{0}$</span>, then by the definition of continuity, there is a neighborhood <span class="math-container">$U$</span> containing <span class="math-container">$x_{0}$</span> where <span class="math-container">$f(x)\neq0$</span> for <span class="math-container">$x \in U$</span>.</p>
<p>Case II:
If <span class="math-container">$x_{0}$</span> is a point of discontinuity, then because <span class="math-container">$f$</span> is piecewise continuous, the values
<span class="math-container">$$f(x-)=\lim_{\varepsilon\to0+} f(x-\varepsilon), \quad f(x+) = \lim_{\varepsilon \to 0+} f(x+\varepsilon)$$</span></p>
<p>always exist. For fixed <span class="math-container">$x_{0} \in (a,b)$</span> such that <span class="math-container">$f(x_{0})\neq0$</span>, the condition <span class="math-container">$f(x_{0}) = \frac{1}{2} [f(x_{0}^{-}) + f(x_{0}^{+})]$</span> implies that <span class="math-container">$f(x_{0}^{-})$</span> and <span class="math-container">$f(x_{0}^{+})$</span> are not simultaneously zero. Then:</p>
<p>II.i) if <span class="math-container">$f(x_{0}^{-})$</span> is zero, choose the interval <span class="math-container">$[x_{0},x_{0}+\varepsilon)$</span>.</p>
<p>II.ii) if <span class="math-container">$f(x_{0}^{+})$</span> is zero, choose the interval <span class="math-container">$[x_{0}-\varepsilon, x_{0})$</span>.</p>
<p>II.iii) if <span class="math-container">$f(x_{0}^{-}), f(x_{0}^{+})$</span> are not zero, choose the interval <span class="math-container">$(x_{0}-\varepsilon, x_{0}+\varepsilon)$</span>.</p>
| Wlod AA | 490,755 | <p>When <span class="math-container">$\ f(x_0)>0\ $</span> then <span class="math-container">$\ f(x_-)> 0\ $</span> or <span class="math-container">$\ f(x_+) > 0,\ $</span> since their average is <span class="math-container">$\ f(x_0)>0. $</span>
Thus, <span class="math-container">$\ f\ $</span> is positive in an interval
<span class="math-container">$\ (x_0-h;x_0)\ $</span> or in <span class="math-container">$\ (x_0;x_0+h)$</span>, respectively (or even in both), for a certain <span class="math-container">$\ h>0.\ $</span> The case <span class="math-container">$\ f(x_0)<0\ $</span> is symmetric: apply the argument to <span class="math-container">$\ -f.$</span></p>
<p><strong><em>Remark 1:</strong> One may add <span class="math-container">$\ x_0\ $</span> to the considered intervals.</em></p>
<p><strong><em>Remark 2:</strong> The result is more general, the assumption about piecewise continuity can be replaced by a weaker (and much more elegant). It is enough to assume that both limits <span class="math-container">$\ f(x_-)\ $</span> and <span class="math-container">$\ f(x_+)\ $</span> exist for every <span class="math-container">$\ x\in(a;b).$</span> And let's keep the other assumptions intact.</em></p>
|
2,075,485 | <p>Let $[a,b]$ be a finite closed interval on $\mathbb{R}$, $f$ be a continuous differentible function on $[a,b]$. Prove that
$$\max_{x\in [a,b]} |f(x)|\le \Bigg|\frac{1}{b-a} \int_a^b f(x)dx\Bigg|+\int_a^b |f'(x)|dx$$</p>
<p>I think this is similar to Sobolev embedding theorem but have no idea about how to use it. I have also tried to transform the inequality into $$(b-a)\int_a^b (|f(t)|-|f'(x)|)dx\le \Bigg|\int_a^b f(x)dx\Bigg|,$$ where $f(t)$ is the maximum, but still don't know how to proceed. Thanks for any help.</p>
| na1201 | 397,984 | <p>Although Kobe has explained this well, I had already started writing an answer, so here it is.
Let $c\in[a,b]$ be a point where $|f|$ attains its maximum, and let $\xi\in[a,b]$ be the point given by the mean value theorem for integrals. Using the triangle inequality we have $|f(c)| \leq |f(\xi)|+|f(c) - f(\xi)|$. By the fundamental theorem of calculus, $f(c) - f(\xi) = \int_{\xi}^cf'(x)dx$. Taking absolute values we have $$|f(c) - f(\xi)| = \left|\int_{\xi}^cf'(x)dx\right| \leq \int_a^b |f'(x)|dx. \hspace{2mm} (A)$$
By the mean value theorem we also have $$ f(\xi) = \frac{\int_a^bf(x)dx}{b-a},$$ therefore $$ |f(\xi)| = \left|\frac{\int_a^bf(x)dx}{b-a}\right|. \hspace{3mm}(B)$$
Hence by (A) and (B) we have the result.</p>
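The inequality can also be spot-checked numerically for a concrete function, say $f(x)=\sin(3x)$ on $[0,1]$ (a sketch using a midpoint Riemann sum with made-up test data; a check, not a proof):

```python
import math

# Spot-check of max|f| <= |average of f| + integral of |f'| for f(x) = sin(3x)
# on [0, 1], via a midpoint Riemann sum.
a, b, n = 0.0, 1.0, 10_000
f = lambda x: math.sin(3 * x)
fp = lambda x: 3 * math.cos(3 * x)

h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]       # midpoints
max_f = max(abs(f(x)) for x in xs)
avg_f = abs(sum(f(x) for x in xs) * h / (b - a))  # |average of f|
tv = sum(abs(fp(x)) for x in xs) * h              # approximates int |f'|

print(max_f <= avg_f + tv)  # True
```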
|
1,440,106 | <p>I am currently studying how to prove the Fibonacci Identity by Simple Induction, shown <a href="http://mathforum.org/library/drmath/view/52718.html">here</a>, however I do not understand how $-(-1)^n$ becomes $(-1)^{n+1}$. Can anybody explain to me the logic behind this?</p>
| mweiss | 124,095 | <p>The negative sign <em>outside</em> the parentheses can be re-written as $-1$:
$$-(-1)^n = (-1)(-1)^n$$
That first factor of $(-1)$ can be written as $(-1)^1$, so we have
$$(-1)^1(-1)^n$$
and finally the two factors can be combined by adding the exponents:
$$(-1)^{n+1}$$</p>
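A quick mechanical check of the exponent rule (Python sketch):

```python
# -(-1)^n equals (-1)^(n+1) for every non-negative integer n.
print(all(-((-1) ** n) == (-1) ** (n + 1) for n in range(10)))  # True
```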
|
1,440,106 | <p>I am currently studying how to prove the Fibonacci Identity by Simple Induction, shown <a href="http://mathforum.org/library/drmath/view/52718.html">here</a>, however I do not understand how $-(-1)^n$ becomes $(-1)^{n+1}$. Can anybody explain to me the logic behind this?</p>
| Kushal Bhuyan | 259,670 | <p>Simple: there is a $-$ sign in front, so you can write $-(-1)^n=(-1)(-1)^n=(-1)^{n+1}$.</p>
|
804,532 | <p>I am working my way through some old exam papers but have come up against a problem. One question on sequences and induction goes:</p>
<p>A sequence of integers $x_1, x_2,\cdots, x_k,\cdots$ is defined recursively as follows: $x_1 = 2$ and $x_{k+1} = 5x_k,$ for $k \geq 1.$</p>
<p>i) calculate $x_2, x_3, x_4$</p>
<p>ii) deduce a formula for the $n$th term i.e. $x_n$ in terms of $n$ and then prove its validity, using the principles of mathematical induction.</p>
<p>It is the last part that is giving me trouble. I think $x_2, x_3$ and $x_4$ are $10, 50$ and $250$ respectively. I also think I managed to work out the formula: it is $f(n) = 2 \cdot 5^{n-1}.$</p>
<p>However, I'm not sure how I'm supposed to prove this using induction... I thought induction was only used when you're adding the numbers in a sequence? I've looked everywhere and can't find any answer specific enough to this question to help. Any help appreciated. Thanks.</p>
| mfl | 148,513 | <p>Your answer is correct.</p>
<p>To write down the proof by induction you have:</p>
<p>Your formula is correct for $n=1$ because $x_1=2\cdot 5^{1-1}=2\cdot 5^0=2.$</p>
<p>Now you suppose that it is correct for $n,$ that is, $x_n=2\cdot 5^{n-1},$ and you need to prove that it holds for $n+1.$ We have:</p>
<p>$x_{n+1}=5x_n= 5\cdot (2\cdot 5^{n-1})=2\cdot 5^n$ (where we have used the induction hypothesis in the second equality), which finishes the proof.</p>
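The closed form can also be checked mechanically against the recursion (a quick sketch):

```python
# Verify x_n = 2 * 5^(n-1) against the recursion x_1 = 2, x_{k+1} = 5 * x_k.
x = 2
for n in range(1, 11):
    assert x == 2 * 5 ** (n - 1)
    x = 5 * x
print("closed form matches the recursion for n = 1..10")
```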
|
1,593,282 | <p>Say we have the function $f:A \rightarrow B$ which is pictured below.<a href="https://i.stack.imgur.com/WfCxs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WfCxs.jpg" alt="enter image description here"></a></p>
<p>This function is not bijective, so the inverse function $f^{-1}: B \rightarrow A$ does not exist. However, we can see that elements $d$ and $e$ in set $B$ are each mapped to by a single element in $A$, so they are kind of "nice" in this sense, almost like a bijective function.</p>
<p>Would it be acceptable to use the notation $f^{-1}(d)=6$ and $f^{-1}(e)=7$ here, even though not all the elements of $B$ have a single inverse? Or does the use of $f^{-1}$ always imply that the entire function $f$ has an inverse?</p>
| AnotherPerson | 185,237 | <p>If it is not injective then no, because we don't know what preimage you are referring to when you write $f^{-1}(x)$ if $x$ has multiple preimages. But what you can write is $f^{pre}(x)$ indicating the preimage (or set of preimages) of $x$. </p>
|
3,600,528 | <p>Is there a general formula for determining the multiplicity of <span class="math-container">$2$</span> in <span class="math-container">$n!\;?$</span>
I was working on a sequence containing subsequences of 0,1: a 0 stands for an even quotient, a 1 for an odd quotient.
Start with k=3 (k should be odd at the start); if the current value is odd find (k-1)/2, otherwise k/2. This subsequence goes on until we reach 1.</p>
<p>Assign 0,1 according as the quotient is even or odd, respectively. Here k(n+1) = k(n) + 2, k(n) is odd, n>=1. Do this for all k>=3. A 1 is added before each subsequence, as the subsequence is generated by an odd integer >= 3. This sequence goes on like this: 11, 101, 111, 1001, 1101, 1011, 1111, 10001, 11001, 10101, 11101, 10011, 11011, 10111, 11111,..</p>
<p>The subsequences, with increasing k, are replicas of an earlier subsequence with one step fewer to reach 1 (one bit fewer), with one extra bit of 1 or 0 depending on k.</p>
<p>Example: for k=5 the subsequence is 101, as the quotient in the first step is 2, and in the second step is 1.</p>
<p>I want to find the subsequence at the nth step.
Also, can this sequence help in determining the multiplicity of 2 in n! ? </p>
| J. W. Tanner | 615,567 | <p><strong>Hint:</strong></p>
<p><span class="math-container">$n!$</span> is the product of the numbers from <span class="math-container">$1$</span> to <span class="math-container">$n$</span>.</p>
<p>How many multiples of <span class="math-container">$2$</span> are there in <span class="math-container">$\{1,2,...,n\}$</span>?</p>
<p>How many multiples of <span class="math-container">$4$</span> are there in <span class="math-container">$\{1,2,...,n\}$</span>?</p>
<p>How many multiples of <span class="math-container">$8$</span> are there in <span class="math-container">$\{1,2,...,n\}$</span>?</p>
<p>...</p>
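<p>The hint above leads to Legendre's formula, <span class="math-container">$v_2(n!)=\sum_{i\ge1}\lfloor n/2^i\rfloor$</span>. A quick Python sketch (an editorial illustration, not part of the original answer) checking the formula against a direct count of factors of 2:</p>

```python
from math import factorial

def v2_legendre(n):
    # Legendre's formula: the multiplicity of 2 in n! is sum of floor(n / 2^i)
    total, power = 0, 2
    while power <= n:
        total += n // power
        power *= 2
    return total

def v2_direct(n):
    # count the factors of 2 in n! directly
    m, count = factorial(n), 0
    while m % 2 == 0:
        m //= 2
        count += 1
    return count

assert all(v2_legendre(n) == v2_direct(n) for n in range(1, 200))
print(v2_legendre(100))  # → 97, the multiplicity of 2 in 100!
```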
|
259,119 | <p>I have a string like this:</p>
<p><code>string="there is a humble-bee in Hanna's garden";</code></p>
<p>Now I want to exclude those words that contain "-" and "'". My own solution would be:</p>
<p><code>StringDelete[string,Cases[StringSplit[string," "], _?(StringContainsQ[#, {"'", "-"}] &)]]</code></p>
<p>so the outcome is:</p>
<p><code>"there is a in garden"</code></p>
<p>But I was wondering whether there is a more elegant solution?</p>
| Daniel Huber | 46,318 | <p>You can give a string pattern to "StringDelete" like:</p>
<pre><code>string = "there is a humble-bee in Hanna's garden";
pat = WordCharacter ... ~~ ("-" | "'") ~~ WordCharacter ...;
StringDelete[string, pat]
(*"there is a in garden"*)
</code></pre>
|
182,101 | <p>With respect to assignments/definitions, when is it appropriate to use $\equiv$ as in </p>
<blockquote>
<p>$$M \equiv \max\{b_1, b_2, \dots, b_n\}$$</p>
</blockquote>
<p>which I encountered in my analysis textbook as opposed to the "colon equals" sign, where this example is taken from Terence Tao's <a href="http://terrytao.wordpress.com/">blog</a> :</p>
<blockquote>
<p>$$S(x, \alpha):= \sum_{p\le x} e(\alpha p) $$</p>
</blockquote>
<p>Is it user-background dependent, or are there certain circumstances in which one is more appropriate than the other?</p>
| Michael Hardy | 11,667 | <p>The $\equiv$ symbol has different standard meanings in different contexts:</p>
<ul>
<li>Congruence in number theory, and various generalizations;</li>
<li>Geometric congruence;</li>
<li>Equality for <b>all</b> values of the variables, as opposed to an equation in which one seeks the values that make the equation true;</li>
<li>$x$ is <em>defined</em> to be $y$;</li>
<li>probably a bunch of others.</li>
</ul>
<p>But I suspect "$:=$" is not used for anything other than definitions.</p>
<p>So the latter at least avoids ambiguity. But if you're reading something written by someone who doesn't see it that way, you still want to understand what is being said, so you should be aware of usage conventions that you might reasonably consider less than optimal.</p>
|
165,328 | <p>What is the difference between $\cap$ and $\setminus$ symbols for operations on sets?</p>
| Asaf Karagila | 622 | <p>Their definition is different:</p>
<ul>
<li><p>$A\cap B=\{x\mid x\in A\text{ and } x\in B\}$, we take all the elements which appear both in $A$ and in $B$, but not just in one of them. </p></li>
<li><p>$A\setminus B=\{x\mid x\in A\text{ and } x\notin B\}$, we take only the part of $A$ which is not a part of $B$. </p></li>
</ul>
<p>Amongst the different properties, the intersection ($\cap$) is commutative and associative while difference ($\setminus$) is not. Namely it is generally true that:</p>
<p>$$A\cap B=B\cap A\\ A\setminus B\neq B\setminus A$$</p>
<p>and similarly:
$$A\cap (B\cap C) = (A\cap B)\cap C\\ A\setminus(B\setminus C)\neq (A\setminus B)\setminus C$$</p>
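<p>These properties can be checked concretely; a small Python illustration (not part of the original answer):</p>

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {4, 5, 6}

# intersection: elements in both A and B
assert A & B == {3, 4}
# set difference: the part of A that is not in B
assert A - B == {1, 2}

# intersection is commutative, difference is not
assert A & B == B & A
assert A - B != B - A            # B - A == {5}

# intersection is associative, difference is not
assert A & (B & C) == (A & B) & C
assert A - (B - C) != (A - B) - C
print("all set identities verified")
```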
|
214,475 | <p>Function:
<a href="https://i.stack.imgur.com/sH7mh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sH7mh.png" alt="enter image description here"></a></p>
<p>I am to solve for <span class="math-container">$T_{12}(4.8), T_{24}(1.2)$</span>, using <strong>If</strong> and <strong>Which</strong> functions.</p>
<p>I started with this function and keep getting a recursion limit error:</p>
<pre><code>t[n_] := (7/2 x) t[n - 1] - (7/2) t[n + 1]
</code></pre>
| AsukaMinato | 68,689 | <pre><code>T[x_, n_] := Block[{temp = n}, Which[
temp == 0, Return[1], temp == 1, Return[x], temp > 1,
1/x T[x, temp - 2] - 2/7 T[x, temp - 1]]]
</code></pre>
<p>T[1, 4] gives</p>
<blockquote>
<p>167/343</p>
</blockquote>
<p>Or, in the form you asked for:</p>
<pre><code>T[n_] := Block[{temp = n},
Which[temp == 0, Return[1], temp == 1, Return[x], temp > 1,
1/x T[temp - 2] - 2/7 T[temp - 1]]]
T[4] /. x -> 1
</code></pre>
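<p>The same recurrence can be checked outside Mathematica. A minimal Python sketch using exact rationals (assuming, as in the answer above, the recurrence <span class="math-container">$T_n = \frac{1}{x}T_{n-2} - \frac{2}{7}T_{n-1}$</span> with <span class="math-container">$T_0=1$</span>, <span class="math-container">$T_1=x$</span>):</p>

```python
from fractions import Fraction

def T(x, n):
    # T_0 = 1, T_1 = x, T_n = (1/x) T_{n-2} - (2/7) T_{n-1}
    x = Fraction(x)
    if n == 0:
        return Fraction(1)
    if n == 1:
        return x
    return T(x, n - 2) / x - Fraction(2, 7) * T(x, n - 1)

print(T(1, 4))  # → 167/343, matching the Mathematica result
```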
|
3,247,176 | <p>I have this statement:</p>
<blockquote>
<p>It can be assured that | p | ≤ 2.4, if it is known that:</p>
<p>(1) -2.7 ≤ p <2.3</p>
<p>(2) -2.2 < p ≤ 2.6</p>
</blockquote>
<p>My development was:</p>
<p>First, <span class="math-container">$ -2.4 \leq p \leq 2.4$</span></p>
<p>With <span class="math-container">$1)$</span> by itself, that can't be insured, same argument for <span class="math-container">$2)$</span></p>
<p>Now, i will use <span class="math-container">$1)$</span> and <span class="math-container">$2)$</span> together, and the intersection between this intervals are: <span class="math-container">$(-2.2, 2.3)$</span>. So, this also does not allow me to ensure that | p | ≤ 2.4, since, there are some numbers that are outside the intersection of these two intervals, for example 2.35 is outside this interval. </p>
<p>But according to the guide, the correct answer must be <span class="math-container">$1), 2)$</span> together. And i don't know why.</p>
<p>Thanks in advance.</p>
| Vineet | 196,541 | <p><span class="math-container">$ -2.4 \leq p \leq 2.4$</span></p>
<p><span class="math-container">$ -2.7 \leq p < 2.3$</span></p>
<p><span class="math-container">$-2.2 < p \leq 2.6 $</span></p>
<p>Your answer is the common intersection of these inequalities: </p>
<p><span class="math-container">$ p \in (-2.2, 2.3)$</span></p>
<p>Since every <span class="math-container">$p$</span> in this interval satisfies <span class="math-container">$-2.4 \leq p \leq 2.4$</span>, statements (1) and (2) together do assure that <span class="math-container">$|p| \leq 2.4$</span>; neither one alone pins <span class="math-container">$p$</span> into that range.</p>
|
361,740 | <p>Spivak's <em>Calculus on Manifolds</em> asks the reader to prove this (problem 1-8, pp.4-5):</p>
<blockquote>
<p>If there is a basis $x_1, x_2, ..., x_n$ of $\mathbb{R}^n$ and numbers $\lambda_1, \lambda_2, ..., \lambda_n$ such that $T(x_i) = \lambda_i x_i$, $1 \leq i \leq n$, prove that $T$ is angle-preserving iff $\left| \lambda_i \right| = c, 1 \leq i \leq n$.</p>
</blockquote>
<p>Here "angle-preserving" means that the linear map $T$ satisfies $$\frac{ \langle x, y \rangle}{\|x\| \|y\|} = \frac{ \langle T(x), T(y) \rangle}{\|T(x)\| \|T(y)\|},$$
and that $T$ is injective.</p>
<p>My first problem with this question is that the claim is false. Taking $n = 2$, $x_1 = (1, 0)$, $x_2 = (1,1)$, $T(x_1) = -x_1$, $T(x_2) = x_2$, and setting $x = x_1$, $y = x_1 + x_2$, the expression in the RHS above evaluates to $0$, while the expression in the LHS evaluates to $\frac{2}{\sqrt{5}}$.</p>
<p>My second, bigger problem is that I'm not really understanding what's going on. An earlier part of the problem had me show that norm-preserving matrices are angle-preserving; this I'm not sure I get. Thus, I'm not sure what true "version" of this statement the author had in mind (was he trying to get a converse?) and I don't know what to do.</p>
<hr>
<p>Here's my guess:</p>
<p>Looking at some transformations in $\mathbb{R}^2$ (just drawing them), it looks like some of them could be "flip the sign of a basis vector" <strong>IF</strong> it's an orthogonal basis. However, I can't seem to recover the usual "rotation through an angle $\theta$" transformation this way, so I'm not sure that requiring the basis be orthogonal makes the statement true. </p>
<p>Also, I'm not sure how to take the inner product of vectors which aren't in standard coordinates. Or am I missing something here? </p>
| user1551 | 1,551 | <p>You are correct. This is one of the few errors in Spivak's <em>Calculus on Manifolds</em>. For this particular exercise, see the following questions:</p>
<ul>
<li><a href="https://math.stackexchange.com/questions/177005/question-about-angle-preserving-operators">Question about Angle-Preserving Operators</a></li>
<li><a href="https://math.stackexchange.com/questions/354848/action-of-angle-preserving-linear-transformation-on-basis-vectors">Action of angle-preserving linear transformation on basis vectors</a></li>
</ul>
<p>copper.hat's <a href="https://math.stackexchange.com/a/177040/1551">answer</a> to the first question cited in the above used the same counterexample as yours.</p>
<p>For your second question, I don't think there is any conflict. Spivak simply meant that if $T$ is a norm-preserving map, then it automatically enjoys the property of being angle-preserving. He did not say that the converse is true. In other words, norm preservation is a stronger condition than angle preservation. In fact, $T$ is norm-preserving map if and only if its matrix w.r.t. the standard basis is a real orthogonal matrix $Q$, and $T$ is angle-preserving if and only if its matrix w.r.t. the standard basis is $\lambda Q$ for some scalar $\lambda>0$ and some real orthogonal matrix $Q$ (for a proof, see the aforementioned answer by copper.hat).</p>
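<p>The counterexample can also be verified numerically. In the standard basis, $T(e_1)=T(x_1)=-x_1=(-1,0)$ and $T(e_2)=T(x_2-x_1)=x_2+x_1=(2,1)$; a small Python check (editorial illustration only) showing the two cosines differ — one is zero, the other is not:</p>

```python
from math import sqrt

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def cos_angle(u, v):
    return dot(u, v) / (sqrt(dot(u, u)) * sqrt(dot(v, v)))

def T(v):
    # matrix with columns T(e1) = (-1, 0), T(e2) = (2, 1),
    # so that T(x1) = -x1 and T(x2) = x2
    return (-v[0] + 2 * v[1], v[1])

x1, x2 = (1, 0), (1, 1)
assert T(x1) == (-1, 0)   # -x1
assert T(x2) == (1, 1)    # x2

x, y = x1, (x1[0] + x2[0], x1[1] + x2[1])   # y = x1 + x2 = (2, 1)
print(cos_angle(x, y))        # 2/sqrt(5) ≈ 0.894
print(cos_angle(T(x), T(y)))  # 0.0 — the angle is not preserved
```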
|
361,740 | <p>Spivak's <em>Calculus on Manifolds</em> asks the reader to prove this (problem 1-8, pp.4-5):</p>
<blockquote>
<p>If there is a basis $x_1, x_2, ..., x_n$ of $\mathbb{R}^n$ and numbers $\lambda_1, \lambda_2, ..., \lambda_n$ such that $T(x_i) = \lambda_i x_i$, $1 \leq i \leq n$, prove that $T$ is angle-preserving iff $\left| \lambda_i \right| = c, 1 \leq i \leq n$.</p>
</blockquote>
<p>Here "angle-preserving" means that the linear map $T$ satisfies $$\frac{ \langle x, y \rangle}{\|x\| \|y\|} = \frac{ \langle T(x), T(y) \rangle}{\|T(x)\| \|T(y)\|},$$
and that $T$ is injective.</p>
<p>My first problem with this question is that the claim is false. Taking $n = 2$, $x_1 = (1, 0)$, $x_2 = (1,1)$, $T(x_1) = -x_1$, $T(x_2) = x_2$, and setting $x = x_1$, $y = x_1 + x_2$, the expression in the RHS above evaluates to $0$, while the expression in the LHS evaluates to $\frac{2}{\sqrt{5}}$.</p>
<p>My second, bigger problem is that I'm not really understanding what's going on. An earlier part of the problem had me show that norm-preserving matrices are angle-preserving; this I'm not sure I get. Thus, I'm not sure what true "version" of this statement the author had in mind (was he trying to get a converse?) and I don't know what to do.</p>
<hr>
<p>Here's my guess:</p>
<p>Looking at some transformations in $\mathbb{R}^2$ (just drawing them), it looks like some of them could be "flip the sign of a basis vector" <strong>IF</strong> it's an orthogonal basis. However, I can't seem to recover the usual "rotation through an angle $\theta$" transformation this way, so I'm not sure that requiring the basis be orthogonal makes the statement true. </p>
<p>Also, I'm not sure how to take the inner product of vectors which aren't in standard coordinates. Or am I missing something here? </p>
| Matt S | 109,082 | <p>A possible "true 'version' of this statement the author had in mind" is</p>
<blockquote>
<p>Suppose <span class="math-container">$T(x_i)=\lambda_ix_i$</span> for some basis <span class="math-container">$x_1,\dots,x_n$</span> of <span class="math-container">$\mathbb R^n$</span> and numbers <span class="math-container">$\lambda_1,\dots,\lambda_n.$</span></p>
<ul>
<li>If <span class="math-container">$T$</span> is angle preserving, then every <span class="math-container">$|\lambda_i|=|\lambda_j|.$</span></li>
<li>If every <span class="math-container">$\lambda_i=\lambda_j$</span>, then <span class="math-container">$T$</span> is angle preserving.</li>
</ul>
</blockquote>
<p>The second part is trivial. A proof of (the contrapositive of) the first part can be found at <a href="https://math.stackexchange.com/a/177042">https://math.stackexchange.com/a/177042</a>.</p>
|
1,994,021 | <p>In one of the research articles it is written that the following limit is equal to zero $$\lim_{x \to 0 }\frac{d}{2^{b+c/x}-1}\left[a2^{b+c/x}-a-a\frac{c\ln{(2)}2^{b+c/x}}{2x}-\frac{c\ln{(2)}}{2x^2}\frac{2^{b+c/x}}{\sqrt{2^{b+c/x}-1}}\right]\left(e^{-ax\sqrt{2^{b+c/x}-1}}\right)=0$$ where $a,b,c,d$ are all positive constants. I am unable to solve it. Please help me in getting there. Many thanks in advance.</p>
| zhw. | 228,045 | <p>Try something simpler. Get rid of most of the constants. Instead of $x\to 0^+,$ replace $x$ by $1/x$ and let $x\to \infty.$ (For me it's easier to think this way.) Throw away the $1$'s you keep subtracting, they're nothing compared with $2^x.$ So here's what I looked at:</p>
<p>$$\tag 1 \frac{1}{2^x}\left [ 2^x + x2^x + x^22^{x/2}\right ]e^{-2^{x/2}/x}.$$</p>
<p>That's a lot less than</p>
<p>$$3x^22^xe^{-2^{x/2}/x}.$$</p>
<p>Now apply $\ln$ to get</p>
<p>$$\ln 3 + 2 \ln x + x\ln 2 - 2^{x/2}/x.$$</p>
<p>That has to go to $-\infty$ because of the exponential growth of $2^{x/2}.$ That tells me that $(1)\to 0.$ And that's pretty good evidence that your original expression $\to 0.$ Now you have an idea where you're going with that messy thing.</p>
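<p>As a numeric sanity check of $(1)$ (an editorial illustration only, not a proof), the values decay rapidly toward $0$ as $x$ grows:</p>

```python
from math import exp

def f(x):
    # expression (1): (1/2^x) [2^x + x 2^x + x^2 2^(x/2)] e^(-2^(x/2)/x)
    return (1 / 2**x) * (2**x + x * 2**x + x**2 * 2**(x / 2)) * exp(-2**(x / 2) / x)

for x in (10, 20, 30):
    print(x, f(x))
# the exponential term dominates the polynomial growth
```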
|
223,008 | <p>Ok so my teacher said we can use this sentence:
<strong>If $a$ is not a multiple of $5$, then $a^2$ is not a multiple of $5$ either.</strong></p>
<p>to prove this sentence:
<strong>If $a^2$ is a multiple of $5$, then $a$ itself is a multiple of $5$</strong></p>
<p>I don't understand the logic behind it, I mean what's the link between them, how can we conclude the 2nd sentence to be true if the 1st one is true?</p>
<p>Thanks a lot guys!</p>
| Mark Bennet | 2,906 | <p>There are two possibilities:</p>
<p>$a$ is a multiple of 5 - in which case we prove that $a^2$ is a multiple of 5</p>
<p>$a$ is not a multiple of 5 - in which case we prove that $a^2$ is not a multiple of 5</p>
<p>So we have proved those facts.</p>
<p>Now suppose we are given a square number, and it is a multiple of 5. Can it come from the second line - no; so it must come from the first line.</p>
<p>In fact we don't need to prove the first line to show that a square number which is a multiple of 5 cannot come from the second line. And your teacher has dispensed with it.</p>
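<p>A brute-force check of both directions of the statement (an empirical editorial illustration, not a proof):</p>

```python
# For every a, a^2 is a multiple of 5 exactly when a is a multiple of 5
for a in range(1, 10_000):
    assert (a * a % 5 == 0) == (a % 5 == 0)
print("verified for a < 10000")
```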
|
1,597,247 | <p>Give the continued fraction expansion of two real numbers $a,b \in \mathbb R$, is there an "easy" way to get the continued fraction expansion of $a+b$ or $a\cdot b$?</p>
<p>If $a,b$ are rational it is easy, as you can convert them back to 'rational' form, add or multiply, and then convert the result back to continued fraction form. But is there a way that requires <em>no conversion</em>?</p>
<p>Other than that I found no clues whether there is an "easy" way to do it for irrational numbers.</p>
| djechlin | 79,767 | <p>You need a representation of the real number to start with. Real numbers such as $e = \sum\frac1{n!}$ are easy to work with (in particular, using that representation). Real numbers such as $\gamma = \lim_{n\rightarrow\infty}\left(\sum_{k=1}^n\frac1k - \ln n\right)$ are going to be a bit "harder" to work with. And try real numbers such as "the number of nontrivial zeros of the Riemann zeta function".</p>
<p>You could start with a decimal expansion of a real number, but isn't that begging the question? A decimal expansion is a mediocre rational approximation. A continued fraction is a very good rational approximation. So you need to consume more and more decimal digits to be able to compute the next term of the continued fraction.</p>
<p>Rational numbers you can solve using Euclid's algorithm. Square roots are actually relatively easy and don't require any complex operations. I'm not sure about other algebraic numbers.</p>
<p>You can write an "explicit" formula down for an arbitrary real number using a <em>lot</em> of floor functions.</p>
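<p>For the rational case mentioned above, Euclid's algorithm does give the continued fraction directly; a short Python sketch (editorial illustration):</p>

```python
from fractions import Fraction

def continued_fraction(r):
    # Euclid's algorithm: repeatedly take the integer part and invert the remainder
    r = Fraction(r)
    terms = []
    while True:
        a = r.numerator // r.denominator   # floor of r
        terms.append(a)
        frac = r - a
        if frac == 0:
            return terms
        r = 1 / frac

def from_cf(terms):
    # rebuild the rational from its continued fraction terms
    r = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        r = a + 1 / r
    return r

cf = continued_fraction(Fraction(649, 200))
print(cf)  # → [3, 4, 12, 4]
assert from_cf(cf) == Fraction(649, 200)
```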
|
290,527 | <p>What would be a good metric on $C^k(0,1)$, space of $k$ times continuously differentiable real valued functions on $(0,1)$ and $C^\infty(0,1)$, space of infinitely differentiable real valued functions on $(0,1)$? </p>
<p>It is of course open to interpretation what good would mean, I want it to bring a good notion of convergence and unitize the openness of the interval as well as the $k$ times/infinite continuously differentiable property. This is a question that I want to think more about to understand metric spaces better. Thank you.</p>
<p>EDIT: And what changes if it is on $[0,1]$?</p>
| Sh4pe | 14,497 | <p>The space $C^k([a,b])$ is a normed space for each $k$ and for each pair $a < b$ of real numbers with the norm</p>
<p>$\|f\|_{C^0} = \sup_{x\in[a,b]} |f(x)|$ for $k=0$</p>
<p>and</p>
<p>$\|f\|_{C^k} = \sum_{|s|\le k} \|\partial^sf\|_{C^0}$ for $k>1$</p>
<p>(For the $C^0$-case, it is important that the interval is closed, as there are continuous functions that live on $(a,b)$ but are not bounded (i.e. they run towards $\infty$ if $x\to a$ or $x\to b$).)</p>
<p>Proving the norm axioms for the $C^0$ case is not trivial but should be contained in textbooks on functional analysis. With this, proving the norm axioms for the $C^k$-case is trivial.</p>
<p>Now, every norm induces a metric: $d(f,g) := \|f-g\|_{C^k([a,b])}$.</p>
<p>Also, these norms make $C^k$ a Banach space (each Cauchy sequence converges <em>in</em> the space itself). A proof for this statement should also be contained in functional analysis textbooks.</p>
<p>Hope this helps you a bit.</p>
<p><strong>P.S.</strong>: A good functional analysis textbook in German would be "Hans Wilhelm Alt: Lineare Funktionalanalysis" in my opinion. All the proofs I hinted at above and much more can be found there.</p>
<p><strong>P.P.S:</strong> To address some more of your questions: Generally, the notion of a norm is 'better' (as in more good ;) ) than just a metric, since it gives you some knowledge of the 'length' or 'size' of elements, not just the distance between them. As mentioned above, this is more general.</p>
<p>In the finite dimensional case, one can show that all norms are equivalent, which means that they induce the same topology (with every metric you can define 'open balls', which are a basis of a topology). This can be proved using compactness of the closed unit ball, which in finite dimensions is guaranteed by the Heine-Borel theorem.</p>
<p>However, in the infinite dimensional case (like $C^k$), there are different norms that are not equivalent. The norm for $C^k([a,b])$ I stated above is, however, the generally most used norm on these spaces and should be sufficient for basic linear functional analysis - as far as I know. </p>
<p>I cannot say if this is the 'best' metric, but this is a widely used one. For everything I just wrote, the book I mentioned above is a good reference especially in the first chapters where topology, norms and metrics are treated.</p>
|
2,725,839 | <p>The question is below.<a href="https://i.stack.imgur.com/k3UMf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k3UMf.png" alt="enter image description here"></a></p>
<p>I was able to solve part (a) because the $x$-coordinate would just be the circumference of the circle, which is $2\pi$. Therefore, $P = (2\pi, 0)$. </p>
<p>I am confused with parts (b) and after, but I know that the $y$-coordinate will remain to be $1$ because that's the radius of the circle and how far it is off from the $x$-axis. Any help with how to find the $x$-coordinate will help for part (b). Thank you in advance. </p>
| CY Aries | 268,334 | <p>The $x$-coordinate is just the length of the "rotated arc", i.e., $\displaystyle \frac{\pi}{2}$.</p>
|
131,842 | <p>Let $X,Y$ be normed linear spaces. Let $T: X\to Y$ be linear. If $X$ is finite dimensional, show
that $T$ is continuous. If $Y$ is finite dimensional, show that $T$ is continuous if and only if $\ker T$ is closed. </p>
<p>I am able to show that $X$, finite dimensional $\implies$ $T$ is bounded, hence continuous. </p>
<p>For the second part: This is what I have: </p>
<p>Suppose $T$ is continuous. By definition $\ker T = \{ x\in X : Tx = 0 \} = T^{-1}(\{0\})$, so $\ker T$ is the preimage of the closed set $\{0\}$ under a continuous map. Hence $\ker T$ is closed. </p>
<p>First, is what I have attempted okay. How about the other direction? </p>
| Balbichi | 24,690 | <p>I will do the case $Y=\mathbb{R}$. Clearly if $f$ is continuous then its kernel is a closed set. For the converse, assume that $f\neq0$ and that $f^{-1}(\{0\})$ is a closed set. Pick some $e$ in $X$ with $f(e)=1$. Suppose, by way of contradiction, that $\|f\|=\infty$. Then there exists a sequence $\{x_n\}$ in $X$ with $\|x_n\|=1$ and $f(x_n)\ge n$ for all $n$. Note that the sequence $\{y_n\}$ defined by $y_n=e-\frac{x_n}{f(x_n)}$ satisfies $y_n\in f^{-1}(\{0\})$ for all $n$ and $y_n\rightarrow e$. Since the set $f^{-1}(\{0\})$ is closed, it follows that $e$ must belong to it, and consequently $f(e)=0$, which is a contradiction. Thus $f$ is a continuous linear functional.</p>
|
4,623,022 | <p>I have a question that I have been curious about for years.</p>
<p>In differential geometry, since the exterior derivative satisfies property <span class="math-container">$d^2=0$</span>, we can make a de Rham cohomology from it.</p>
<p>Then if we write <span class="math-container">$\iota_X:\Omega^n\rightarrow\Omega^{n-1}$</span> as the interior derivative(also called as interior product) for a vector field <span class="math-container">$X$</span>, then <span class="math-container">$\iota_X^2=0$</span> holds. Can we make a homology for a suitable vector field <span class="math-container">$X$</span> from this?</p>
<p>And if you can create such a homology, are there any useful properties about it? Like the de Rham theorem.</p>
<p>I would really appreciate it if you could let me know.</p>
| Mariano Suárez-Álvarez | 274 | <p>You can do this calculation purely algebraically. I'd suggest doing first the following variation.</p>
<p>Consider a vector space <span class="math-container">$V$</span>, a vector <span class="math-container">$v$</span> in <span class="math-container">$V$</span>, the exterior algebra <span class="math-container">$\Lambda^\bullet V$</span>, and the map <span class="math-container">$w\mapsto v\wedge w$</span> from the exterior algebra to itself of degree <span class="math-container">$+1$</span>. One can easily check that this complex is exact if and only if the vector v is non-zero.</p>
<p>If you do that, which is easier than your problem because of less notation, you can look at the next thing:</p>
<p>Let now <span class="math-container">$A$</span> be the graded vector space <span class="math-container">$\Lambda^\bullet V^*$</span>, the exterior algebra on the dual space of <span class="math-container">$V$</span>, let <span class="math-container">$v$</span> be a vector in <span class="math-container">$V$</span>, and now let <span class="math-container">$d$</span> be the map of degree <span class="math-container">$-1$</span> on <span class="math-container">$A$</span> that is given by contraction with <span class="math-container">$v$</span>. Can you compute the homology of <span class="math-container">$A$</span>?</p>
|
804,871 | <p>Prove that if x and y are odd natural numbers, then $x^2+y^2$ is never a perfect square.</p>
<p>Let $x=2m+1$ and $y=2l+1$ where m,l are integers.</p>
<p>$x^2+y^2=(2m+1)^2+(2l+1)^2=4(m^2+m+l^2+l)+2$</p>
<p>Where do I go from here?</p>
| Deepak | 151,732 | <p>The square of an integer is congruent to 0 or 1 (mod 4). In fact the square of an even number will always be congruent to 0 (mod 4) (and remember your sum is even). What you end up with is congruent to 2 (mod 4), which means it's not a perfect square.</p>
|
804,871 | <p>Prove that if x and y are odd natural numbers, then $x^2+y^2$ is never a perfect square.</p>
<p>Let $x=2m+1$ and $y=2l+1$ where m,l are integers.</p>
<p>$x^2+y^2=(2m+1)^2+(2l+1)^2=4(m^2+m+l^2+l)+2$</p>
<p>Where do I go from here?</p>
| BlackAdder | 74,362 | <p>You can now look at all the natural numbers modulo $4$. We know that numbers must either be even or odd, hence they have the form
$$2n\text{ or } 2n+1.$$
In modulo $4$, they are just $2n\text{ or } 2n+1\mod4$. Now, look at the squares of these numbers, we have that
$$(2n)^2\equiv 4n^2\equiv0\mod4.$$
Also, the odds give
$$(2n+1)^2\equiv 4n^2+4n+1\equiv1\mod4.$$</p>
<p>Now we know that squares MUST be either $0$ or $1$ modulo $4$. What is your number modulo $4$?</p>
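<p>An exhaustive check for small odd <span class="math-container">$x, y$</span> (an empirical editorial illustration of the mod-4 argument):</p>

```python
from math import isqrt

def is_square(n):
    return isqrt(n) ** 2 == n

# x^2 + y^2 ≡ 2 (mod 4) for odd x, y, so it is never a perfect square
for x in range(1, 200, 2):
    for y in range(1, 200, 2):
        s = x * x + y * y
        assert s % 4 == 2
        assert not is_square(s)
print("no counterexample below 200")
```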
|
141,522 | <p>First, a summary of the general problem I'm trying to solve:
I want to get <strong>a</strong> set of inequalities for a very complex function (if you are interested, it is the no-arbitrage conditions of the Black-Scholes equation with a volatility given by an SVI function)</p>
<p>So basically I'm trying to find the parameters that best fit a model given some conditions.</p>
<p>Long story short these are my problems:</p>
<p>1) I want only ONE set of inequalities; Reduce and Solve give me many solutions but in doing so take a little too much time. (This shouldn't be difficult)</p>
<p>2) Only get "general" inequalities: (this is the hard one). I have parameters and variables: I can set the parameters but the variables are independent. That is, I need solutions that don't depend on the variables, but only on the parameters.</p>
<p>As an easy example, let's say my model is: <code>a*b*(1+x^2)</code>, where x is a Real variable and <em>a</em> and <em>b</em> are my parameters. Note, if my condition is something like: <code>f(a,b,x)>2</code> I want this result:</p>
<p><code>a*b>2</code></p>
<p>Instead of this result which is what I got:</p>
<p><code>a*b>2/(1+x^2)</code></p>
<p>The second term depends on x^2 so it doesn't give me defined boundaries for my conditions which I can use to fit the model to the data (As I need general terms of <em>a</em> and <em>b</em> that should fit any x)</p>
<p>EDIT: I found the function <code>ForAll</code>, which solves my simple example but doesn't work on the actual problem, as I also have conditions on x (Is there any similar command without such conditions?)</p>
<p>Thanks in advance for your time, and sorry if this is simple, I couldn't find a solution within this site.</p>
| mikado | 36,788 | <p>I have found <code>CylindricalDecomposition</code> very useful for analysing inequalities. The result you get will depend on the order in which you list the variables in the second argument.</p>
<p>I think the result you are looking for is given by</p>
<pre><code>f = a*b*(1 + x^2);
CylindricalDecomposition[f > 0, {x, a, b}]
(* (a < 0 && b < 0) || (a > 0 && b > 0) *)
</code></pre>
|
1,583,747 | <p>I just started a course on queueing theory, yet equations are taken for granted without any derivations, which is very frustrating... Thus</p>
<ol>
<li>Why is the mean number of people in a queue system following an $M/M/1$ system</li>
</ol>
<p>$$E(L)=\frac{\rho}{1-\rho}$$</p>
<p>with $\rho=\frac{\lambda}{\mu}$ with $\lambda$ the clients arrival rate and $\mu$ the service rate.</p>
<ol start="2">
<li>And the mean number of people waiting in the queue:</li>
</ol>
<p>$$E(L^q)=\frac{\rho^2}{1-\rho}$$</p>
| JKnecht | 298,619 | <p>For every common queue system you can follow this route:</p>
<ol>
<li><p>Set up the balance equations: Inflow equal outflow in steady state.</p></li>
<li><p>Solve the balance equations. Always straight forward.</p></li>
<li><p>Calculate $p_0$ and $E[L]$ </p></li>
</ol>
<p>And it's not hard to see how this gives $E[L]$.</p>
<p>The fact that the first sum below equals 1 gives $p_0$; the second gives $E[L]$:</p>
<p>$$\sum_n P_n = 1$$</p>
<p>$$E[L] = \sum_n nP_n$$</p>
<p>The range of the sum depends on which kind of queue you have.
In your case it runs from $0$ to $\infty$ because you have unlimited places in the queueing system. If it's limited to K places you sum to K.</p>
<ol start="4">
<li>Once you have $E[L]$ most of what you are interested in follows
from simple and obvious equations. </li>
</ol>
<p>E.g. $E[L^{q}]$ follows from:</p>
<p>(omitting the E[] from here on, as I'm used to)</p>
<p>$L = L_q + L_s$ (average number of people in the queue plus average number of people getting service)</p>
<p>$L_s = W_s*\lambda$ (the average time in service times the inflow rate;
$\lambda =\lambda_e = \lambda*(1-p_K)$ when you have a limited number of places in the queue)</p>
<p>$W_s = 1 / \mu$ (where $1/\mu$ is the mean service time)</p>
<p>And with those equations you can solve for $L_q$</p>
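<p>These formulas can be sanity-checked numerically from the $M/M/1$ steady-state distribution $p_n=(1-\rho)\rho^n$ (a small Python illustration, not part of the original answer):</p>

```python
def mm1_moments(rho, nmax=10_000):
    # steady-state probabilities p_n = (1 - rho) rho^n, truncated at nmax
    p = [(1 - rho) * rho**n for n in range(nmax)]
    EL = sum(n * pn for n, pn in enumerate(p))                     # mean number in system
    ELq = sum((n - 1) * pn for n, pn in enumerate(p) if n >= 1)    # mean number waiting
    return EL, ELq

rho = 0.5
EL, ELq = mm1_moments(rho)
print(EL)   # ≈ rho / (1 - rho) = 1.0
print(ELq)  # ≈ rho^2 / (1 - rho) = 0.5
```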
|
1,583,747 | <p>I just started a course on queueing theory, yet equations are taken for granted without any derivations, which is very frustrating... Thus</p>
<ol>
<li>Why is the mean number of people in a queue system following an $M/M/1$ system</li>
</ol>
<p>$$E(L)=\frac{\rho}{1-\rho}$$</p>
<p>with $\rho=\frac{\lambda}{\mu}$ with $\lambda$ the clients arrival rate and $\mu$ the service rate.</p>
<ol start="2">
<li>And the mean number of people waiting in the queue:</li>
</ol>
<p>$$E(L^q)=\frac{\rho^2}{1-\rho}$$</p>
| JKnecht | 298,619 | <p>There is a pretty good <a href="https://www.youtube.com/watch?v=AsTuNP0N7DU" rel="nofollow">series of video lectures on youtube</a> that might help you out.</p>
|
3,237,242 | <p>I have the following problem:</p>
<p>I need to prove that given the following integral</p>
<p><span class="math-container">$\int_{0}^{1}{c(k,l)x^k(1-x)^l}dx = 1$</span>,</p>
<p>we the constant <span class="math-container">$c(k,l) = (k+l+1) {{k+l}\choose{k}} = \frac{(k+l+1)!}{k!l!}$</span>,</p>
<p>with the use of two dimensional mathematical induction on <span class="math-container">$min(k,l)$</span>.
Here <span class="math-container">$k$</span> and <span class="math-container">$l$</span> are two nonnegative integers.</p>
<p>(THUS: I need to prove that <span class="math-container">$c(k,l)$</span> is equal to <span class="math-container">$(k+l+1) {{k+l}\choose{k}}$</span>)</p>
<p>For the base step I have proved that <span class="math-container">$c(k, 0) = c(0, k) = k + 1$</span> for all <span class="math-container">$k$</span>.</p>
<p>I am given a hint that for the induction step I could try using integration by parts to show <span class="math-container">$c(k,l) = \frac{k+1}{l} c(k+1,l−1)$</span>.</p>
<p>By integrating the following by parts I indeed managed to show the latter:</p>
<p><span class="math-container">$\int_{0}^{1}{c(k+1,l-1)x^{k+1}(1-x)^{l-1}dx}=1$</span>.</p>
<p>However, I don't really see how this helps me to complete my proof, since I don't really get the idea of two dimensional induction.</p>
<p>Can someone maybe clarify this a bit for me, and help me further with my proof?</p>
| Community | -1 | <p>Integration by parts relates <span class="math-container">$I_{k,l}$</span> to <span class="math-container">$I_{k+1,l-1}$</span>. So by setting <span class="math-container">$l=n-k$</span>, you relate <span class="math-container">$I_{k,n-k}$</span> to <span class="math-container">$I_{k+1,n-(k+1)}$</span>, which forms an ordinary induction on <span class="math-container">$k$</span>. The <span class="math-container">$I_{0,n}$</span> are elementary.</p>
|
80,783 | <p>How do you convert $(12.0251)_6$ (in base 6) into fractions?</p>
<p>I know how to convert a fraction into base $x$ by constantly multiplying the fraction by $x$ and simplifying, but I'm not sure how to go the other way?</p>
| wendy.krieger | 78,024 | <p>You could use continued fractions. </p>
<pre><code> Cf A / B
12.0251 1 / 0 In the left, we produce a
1.0000 -12.0000 8 8 / 1 continued fraction from
-0.5420 0.0251 12 97 / 12 12.0251 and 1.0000, in base 6
0.0140 -0.0140 1 105 / 13 There are Cf of the positive
-0.0111 0.0111 1 202 / 25 number, which gives the negative
0.0025 -0.0054 2 511 / 63 number.
-0.0013 0.0013 1 713 / 88 The right is Cf(r-1) + Cf(r-2)
&c starting at 1 in A and 0,1 in B.
</code></pre>
<p>You stop when the error gets precise enough, or you get a zero in either column. </p>
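<p>A direct way to do the conversion is to read the digits as an exact rational number; a Python sketch (editorial illustration):</p>

```python
from fractions import Fraction

def base_to_fraction(int_digits, frac_digits, base):
    # value = integer digits in the given base, plus digit_j * base^(-j) for the fraction
    value = Fraction(0)
    for d in int_digits:
        value = value * base + d
    for j, d in enumerate(frac_digits, start=1):
        value += Fraction(d, base**j)
    return value

# (12.0251)_6 = 8 + 103/1296
r = base_to_fraction([1, 2], [0, 2, 5, 1], 6)
print(r)  # → 10471/1296
```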
|
3,999,325 | <p>Let me start with some objects. Consider the <span class="math-container">$\mathrm{C}^*$</span>-algebra <span class="math-container">$A$</span> defined by:
<span class="math-container">$$A=M_1(\mathbb{C})\oplus M_2(\mathbb{C})\subset B(\mathbb{C}^3).$$</span>
Let <span class="math-container">$x\in\mathbb{C}^3$</span> be given by <span class="math-container">$(e_1+e_2)/\sqrt{2}\,$</span> (the one dimensional factor acts on <span class="math-container">$e_1$</span>).</p>
<p>We have a vector state given by <span class="math-container">$\rho_x(f)=\langle x,f(x)\rangle$</span>, and</p>
<p><span class="math-container">$$\rho_x\left((c_1)+\left(\begin{array}{cc}c_{11} & c_{12} \\ c_{21} & c_{22}\end{array}\right)\right)=\frac12 c_1+\frac{1}{2}c_{11}.$$</span></p>
<p>If I read something about quantum mechanics, such a state, as it is given by a unit vector, is called a pure state.</p>
<p>However this contradicts <a href="https://math.stackexchange.com/a/3273820/19352">the answer</a> to this question... and when I pick up Murphy he says that a pure state is such that if <span class="math-container">$\rho_0\leq \rho_x$</span> is a positive linear functional, then <span class="math-container">$\rho_0=t\rho_x$</span> for <span class="math-container">$t\in[0,1]$</span>.</p>
<p>However for <span class="math-container">$\rho_x$</span> we have the linear functional:</p>
<p><span class="math-container">$$\rho_0\left((c_1)+\left(\begin{array}{cc}c_{11} & c_{12} \\ c_{21} & c_{22}\end{array}\right)\right)=\frac12 c_1$$</span></p>
<p>is a bounded linear functional such that <span class="math-container">$\rho_x-\rho_{0}$</span> is a positive linear functional --- half the state given by the vector <span class="math-container">$e_2$</span>, not <span class="math-container">$t\rho_x$</span>... and so Murphy would say <span class="math-container">$\rho_x$</span> is <em>not</em> pure.</p>
<p><strong>Can you help me with my confusion?</strong></p>
| JP McCarthy | 19,352 | <p>The confusion is that the (introductory) quantum mechanical texts that I am reading are using the full <span class="math-container">$B(\mathsf{H})$</span> rather than closed self-adjoint subalgebras.</p>
<p>For example in full <span class="math-container">$B(\mathbb{C}^3)$</span>, the state associated to the same vector is:</p>
<p><span class="math-container">$$\rho_x\left([c_{ij}]_{i,j=1}^3\right)=\frac{c_{11}+c_{12}+c_{21}+c_{22}}{2},$$</span>
a different beast to <span class="math-container">$\rho_x\in A^*$</span>.</p>
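<p>Both formulas can be checked numerically by representing the operators as <span class="math-container">$3\times 3$</span> matrices (a sketch with arbitrary real test entries; complex entries work the same way):</p>

```python
import math

s = 1 / math.sqrt(2)
x = [s, s, 0.0]   # the unit vector (e1 + e2)/sqrt(2)

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# An element of A = M_1(C) (+) M_2(C) as a block-diagonal operator on C^3:
c1, c11, c12, c21, c22 = 2.0, 4.0, 1.0, 1.0, 3.0
f_A = [[c1, 0, 0], [0, c11, c12], [0, c21, c22]]
assert math.isclose(inner(x, matvec(f_A, x)), 0.5 * c1 + 0.5 * c11)

# The same vector, but now as a state on all of B(C^3):
f_B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert math.isclose(inner(x, matvec(f_B, x)),
                    (f_B[0][0] + f_B[0][1] + f_B[1][0] + f_B[1][1]) / 2)
```

<p>The first assertion recovers <span class="math-container">$\frac12 c_1+\frac12 c_{11}$</span>, the second <span class="math-container">$(c_{11}+c_{12}+c_{21}+c_{22})/2$</span>, matching the two formulas above.</p>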
|
4,011,864 | <p><span class="math-container">$$\lim_{n \to \infty}(3^n+1)^{\frac{1}{n}}$$</span></p>
<p>I'm fairly sure I can't bring the limit inside the 1/n and I don't think I can use l'Hôpital's rule. I'm pretty sure I'm meant to use the sandwich theorem but I'm not quite sure how to do that in this circumstance.</p>
| Aryan | 866,404 | <p>Since <span class="math-container">$\ln\big(3^n+1\big)^{\frac{1}{n}}=\frac{\ln(3^n+1)}{n}$</span> , one can observe <span class="math-container">$\displaystyle\lim_{n\to\infty}{(3^n+1)^{\frac{1}{n}}}=\displaystyle\lim_{n\to\infty}e^{\frac{\ln(3^n+1)}{n}}=e^{\tiny{{\displaystyle\lim_{n\to\infty}{\frac{\ln(3^n+1)}{n}}}}}$</span><br />
Now by L'hopital's rule, <span class="math-container">$$\displaystyle\lim_{n\to\infty}{\frac{\ln(3^n+1)}{n}}=\displaystyle\lim_{n\to\infty}{\frac{\frac{3^n\cdot\ln(3)}{3^n+1}}{1}}=\displaystyle\lim_{n\to\infty}{\frac{3^n\cdot\ln(3)}{3^n+1}}=\ln(3)$$</span><br />
So finally <span class="math-container">$\displaystyle\lim_{n\to\infty}{\frac{\ln(3^n+1)}{n}}=\ln(3)$</span> and therefore <span class="math-container">$\displaystyle\lim_{n\to\infty}{(3^n+1)^{\frac{1}{n}}}=e^{\tiny{{\displaystyle\lim_{n\to\infty}{\frac{\ln(3^n+1)}{n}}}}}=e^{ln(3)}=3$</span></p>
<p>P.S. The result was also clear from the fact that <span class="math-container">$1$</span> does not have a significant impact on the limit as <span class="math-container">$n\to\infty$</span>, and hence the result could've been achieved by ignoring the <span class="math-container">$1$</span> in the parentheses.</p>
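<p>The computation can be sanity-checked numerically; working with logarithms, as in the derivation above, avoids overflowing floating-point arithmetic:</p>

```python
import math

# Numerically check that (3**n + 1)**(1/n) -> 3 as n grows.
# Work with logs: (1/n) * log(3**n + 1); math.log accepts big ints.
for n in [10, 100, 1000]:
    val = math.exp(math.log(3**n + 1) / n)
    print(n, val)   # the printed values approach 3
```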
|
4,011,864 | <p><span class="math-container">$$\lim_{n \to \infty}(3^n+1)^{\frac{1}{n}}$$</span></p>
<p>I'm fairly sure I can't bring the limit inside the 1/n and I don't think I can use l'Hôpital's rule. I'm pretty sure I'm meant to use the sandwich theorem but I'm not quite sure how to do that in this circumstance.</p>
| Kyky | 423,726 | <p>Consider the natural logarithm of the limit, <span class="math-container">$$\lim_{n\to\infty}\frac1n\ln\left(3^n+1\right)$$</span></p>
<p>Note that <span class="math-container">$\lim_{n\to\infty}\ln\left(3^n\right)-\ln\left(3^n+1\right)=\ln\left(\lim_{n\to\infty}\frac{3^n}{3^n+1}\right)=\ln1=0$</span>. Hence we have:</p>
<p><span class="math-container">$$\lim_{n\to\infty}\frac1n\ln\left(3^n+1\right)$$</span></p>
<p><span class="math-container">$$=\lim_{n\to\infty}\frac1n\ln\left(3^n+1\right)+\lim_{n\to\infty}\frac1n\left[\ln\left(3^n\right)-\ln\left(3^n+1\right)\right]$$</span></p>
<p><span class="math-container">$$=\lim_{n\to\infty}\frac1n\left[\ln\left(3^n+1\right)+\ln\left(3^n\right)-\ln\left(3^n+1\right)\right]$$</span></p>
<p><span class="math-container">$$=\lim_{n\to\infty}\frac1nn\ln3$$</span></p>
<p><span class="math-container">$$=\ln3$$</span></p>
<p>Now <span class="math-container">$$\lim_{n \to \infty}(3^n+1)^{\frac1n}$$</span></p>
<p><span class="math-container">$$=\exp\left(\lim_{n\to\infty}\frac1n\ln\left(3^n+1\right)\right)$$</span></p>
<p><span class="math-container">$$=\exp(\ln3)=3$$</span></p>
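<p>Since the question asked about the sandwich theorem: from <span class="math-container">$3^n < 3^n+1 \le 2\cdot 3^n$</span> one gets <span class="math-container">$3 \le (3^n+1)^{1/n} \le 3\cdot 2^{1/n}$</span>, and the upper bound tends to <span class="math-container">$3$</span>. A quick numerical sketch of the squeeze:</p>

```python
# Sandwich bounds: 3 <= (3**n + 1)**(1/n) <= 3 * 2**(1/n) for every n >= 1.
for n in [5, 10, 20]:
    val = (3**n + 1) ** (1 / n)
    upper = 3 * 2 ** (1 / n)
    assert 3 <= val <= upper
    print(n, val, upper)   # both columns close in on 3
```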
|
134,523 | <p>Is there any recursively axiomatized system with infinitely many proofs of some proposition (or propositions)? That is, we would have at least one proposition which can be deduced from the recursively axiomatized system in infinitely many ways. Could anyone give an example or a proof?</p>
<p><strong>EDIT</strong>: We define a proof, or a reduced proof, as one in which no proposition in the argument can be omitted; otherwise it would be invalid. Or we can view it as a question of formal languages, that is: is there a formal language with at least one word which can be parsed in infinitely many ways? Of course there must be no rewriting such as "irrelevant padding or stupid detours".</p>
| Noah Schweber | 8,133 | <p>How about the theorem "There exists at least one prime number?" There are infinitely many distinct proofs of this result, none of which includes another as a subproof. Certainly this example is trivial in some sense, but I think it is not obvious how to pin down why this shouldn't be counted.</p>
<p>Perhaps a better example is "There is at least one Turing machine which halts." Now - in a precise sense - any possible logical deduction is paralleled by a proof that some specific machine halts, so if there are infinitely many "distinct" deductions at all, then there are infinitely many "distinct" proofs of this proposition.</p>
<p>EDIT: This is entirely tangential, but I think you may find it interesting. There is a strong and fruitful tradition in mathematical logic - especially on the constructive side - of drawing analogies between (or equating) proofs and algorithms; so an at-least-vaguely related question is, "How do we tell if two algorithms are the same?" This paper (<a href="http://research.microsoft.com/en-us/um/people/gurevich/Opera/192.pdf" rel="nofollow">http://research.microsoft.com/en-us/um/people/gurevich/Opera/192.pdf</a>) by Andreas Blass, Nachum Dershowitz, and Yuri Gurevich argues that there is no entirely satisfactory answer to this question. I think at least one thing to take away from this is that one should not be cavalier about the notion of "distinct proof": finding a satisfactory such notion would be a huge advancement in logic!</p>
|
449,296 | <p>I am trying to work out how mathematicians decide on which axioms to use and why not other axioms; I mean, surely there is an infinite number of available axioms. What I am trying to get at is: surely, if mathematicians choose the axioms, then we are inventing our own maths? Surely that is what it is, but as it has been practiced and built on for so long, it is too late to change all this, so we have had to adapt and create new rules? I'm not sure how clear what I am asking is, or whether it is even understandable, but I would appreciate any answers or comments. Thanks.</p>
| Ronnie Brown | 28,586 | <p>Of course the (presumably) first axiom system was that of Euclid's Geometry. This book though was a systematisation of knowledge at the time, and it seems reasonable to suppose it started as a teaching course. If you are giving a course, you have to decide where you are going to start, and it seems reasonable to start with basic assumptions. Of course we do not know what was the evolution of this book! </p>
<p>Axioms often evolve through practice and trial and error. People carry out a particular kind of argument and then realise it can be carried out under certain abstract assumptions. </p>
<p>The virtues of abstraction are several. </p>
<p>1) To cover several examples, and in this to make analogies. </p>
<p>2) To be available for new examples, and so new analogies. </p>
<p>3) To simplify proofs. </p>
<p>The last advantage may be surprising, but the reason is that the axioms sort out the essentials which are required for the proof, and allow the casting off of excess baggage. </p>
<p>All this is intended to emphasise that axioms arise from lots of study, of examples, of proofs, and of other axiomatic systems. </p>
<p>To illustrate: we all know that 2+3 = 3 +2, and 2 x 3 = 3 x 2. To say these are examples of a commutativity law is to make an analogy between addition and multiplication of numbers. </p>
<p>Things get more exciting when you get an analogy between addition of knots and multiplication of numbers: see this <a href="http://www.popmath.org.uk" rel="nofollow">knot exhibition</a>. </p>
|
449,296 | <p>I am trying to work out how mathematicians decide on which axioms to use and why not other axioms; I mean, surely there is an infinite number of available axioms. What I am trying to get at is: surely, if mathematicians choose the axioms, then we are inventing our own maths? Surely that is what it is, but as it has been practiced and built on for so long, it is too late to change all this, so we have had to adapt and create new rules? I'm not sure how clear what I am asking is, or whether it is even understandable, but I would appreciate any answers or comments. Thanks.</p>
| Andreas Blass | 48,510 | <p>There are (at least) four types of sources for axiomatic systems. Here are the scenarios that I have in mind:</p>
<p>(1) Some mathematical structure, like the plane in geometry or the system of natural numbers, has been recognized as useful for applications and has therefore been studied extensively. So many facts are known about it. In this situation, one might want to organize those facts in a logical system, showing which facts are consequences of which other facts. Of course, to avoid circularity, some facts have to be taken as basic, and then other facts are shown to be consequences of these. The basic facts are called axioms or postulates, and it is desirable to make them as simple and as few as possible, so that one is not assuming things that could rather be proved. Among the axiom systems that arose in this way are Euclid's axioms for geometry (and, in a more rigorous age, Hilbert's axioms for geometry) and Peano's axioms for arithmetic.</p>
<p>(2) Questions have arisen about the legitimacy of some arguments, so it becomes necessary to say exactly what the assumptions are that underlie those arguments. The clearest example of this is Zermelo's (1908) axiomatization of set theory. The immediate problem facing Zermelo was the axiom of choice. It had been used as if obvious, for example in the proof that the union of countably many countable sets is countable. But, when Zermelo pointed it out as an explicit statement and used it in his proof (1904) that all sets can be well-ordered, he got a lot of flak. There were also other points in need of clarification, such as Cantor's distinction between consistent multiplicities (sets) and inconsistent ones. So Zermelo set up a system of axioms on which to base not only the proof of his well-ordering theorem but also the other set-theoretic arguments of the time. (Nowadays, we can view Zermelo's axioms, as well as later extensions by Fraenkel and others, as falling under scenario (1) above, as systematizations of the known facts about the cumulative hierarchy of sets. But, as far as I know, the cumulative hierarchy is not mentioned in Zermelo's writings until 1930. So I regard their introduction in 1908 as a different scenario.)</p>
<p>(3) People notice that very similar ideas and proofs are occurring in different areas. The elementary arithmetic of addition of integers, or real numbers, or complex numbers is very similar to the behavior of the operation of composition of permutations of finite sets or of rotations of space. In this situation, it is worthwhile to isolate the basic features common to these different contexts and deduce other common features from the basic ones (axioms) once and for all, rather than treating each context individually. Thus, the examples I just mentioned are all subsumed by the axioms for groups. Notice that here the axioms are intended to apply to many different structures (numbers, permutations, etc.) whereas in (1) (and perhaps also (2)), the axioms are intended to describe one specific structure. In (1), the existence of different models of the axioms is an unintended feature or bug; in (3) it is the main reason for formulating the axioms.</p>
<p>(4) Just plain curiosity. For example, given Euclid's axioms for plane geometry, let's see what happens if we replace the parallel postulate by some contrary assumption. Nowadays, such non-Euclidean geometries are seen as descriptions of interesting structures (like the hyperbolic plane), but when such axioms were first considered, no such structures were known, and in fact these "strange" axioms were expected to be contradictory. In principle, anybody can make up and study whatever axioms (s)he wants. Whether anyone else will pay attention, though, is a more difficult question. Axiomatic systems that don't fit under (1), (2), or (3) above had better come with some serious motivation, or the person who introduces and uses them is likely to be ignored.</p>
|
4,122,419 | <blockquote>
<p>From the triangle <span class="math-container">$\triangle ABC$</span> we have <span class="math-container">$AB=3$</span>, <span class="math-container">$BC=5$</span>, <span class="math-container">$AC=7$</span>. If
the point <span class="math-container">$O$</span> placed inside the triangle <span class="math-container">$\triangle ABC$</span> so that
<span class="math-container">$\vec{OA}+2\vec{OB}+3\vec{OC}=0$</span> , then what is the ratio of the area
of <span class="math-container">$\triangle ABC$</span> to the area of <span class="math-container">$\triangle AOC$</span> ?</p>
<p><span class="math-container">$1)\frac32\qquad\qquad2)\frac53\qquad\qquad3)2\qquad\qquad4)3\qquad\qquad5)\frac72$</span></p>
</blockquote>
<p>Knowing the lengths of the sides of <span class="math-container">$\triangle ABC$</span>, I concluded it is an obtuse triangle (because <span class="math-container">$3^2+5^2<7^2 $</span>). I'm not sure how to use <span class="math-container">$\vec{OA}+2\vec{OB}+3\vec{OC}=0$</span> to solve the problem, but from the fourth choice I realized this happens when the point <span class="math-container">$O$</span> is the centroid of <span class="math-container">$\triangle ABC$</span>, so that might be the answer.</p>
| Andrei | 331,661 | <p>Let's call the length of the race <span class="math-container">$L$</span>, and the time required by A to finish it <span class="math-container">$t$</span>. Then <span class="math-container">$$v_A=\frac Lt\\v_B=\frac{L-10}t\\v_C=\frac{L-20}t$$</span>
When they race for the second time, we want to calculate the time required by each contestant to get to the finish line:
<span class="math-container">$$t_A=\frac{L+10}{v_A}=\frac{L+10}L t\\t_B=\frac L{v_B}=\frac{L}{L-10}t\\t_C=\frac{L-10}{v_C}=\frac{L-10}{L-20}t$$</span>
One can immediately see that the times might be different. For example, assuming that the track is long enough:
<span class="math-container">$$t_A-t_B=\left(\frac{L+10}L-\frac{L}{L-10}\right)t=\frac{-100}{L(L-10)}t<0$$</span>
That means that A still arrives earlier than B.</p>
<p><strong>Note</strong> Another way to think about this, for A it will take longer than <span class="math-container">$t$</span> to get to the finish line. In fact, after time <span class="math-container">$t$</span> all contestants will be 10 meters away from the finish line. Since A is faster than B, who is faster than C, the order stays the same.</p>
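<p>A quick numerical check of the finishing times derived in this answer (the track length <span class="math-container">$L$</span> and reference time <span class="math-container">$t$</span> are arbitrary; <span class="math-container">$L=100$</span>, <span class="math-container">$t=1$</span> here):</p>

```python
# Finishing times from the formulas above; L (track length) and t are arbitrary.
L, t = 100.0, 1.0
t_A = (L + 10) / L * t          # t_A = (L+10)/v_A with v_A = L/t
t_B = L / (L - 10) * t          # t_B = L/v_B    with v_B = (L-10)/t
t_C = (L - 10) / (L - 20) * t   # t_C = (L-10)/v_C with v_C = (L-20)/t
print(t_A, t_B, t_C)
assert t_A < t_B < t_C          # A still finishes first; the order is unchanged
```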
|
4,013,796 | <p>If we make a regular polygon with n vertices (n edges) and triangulate on the inside with n-3 edges, then triangulate on the outside with (n-3) edges (or draw dotted lines inside again), a Maximal Planar Graph is formed. Edges shouldn't be repeated and there's no loops or directions.</p>
<p>How many distinct graphs of this type are there?</p>
<p>It's connected to an earlier question where it was asked 'How many Distinct Maximal Planar Graphs are there? <a href="https://math.stackexchange.com/questions/4009628/how-many-distinct-maximal-planar-graphs-exist-with-n-vertices">How many distinct Maximal Planar Graphs exist with $n$ vertices?</a></p>
<p>Will Orrick gave the OEIS numbers A000109. It was then wondered if there was a known formula for those numbers or bounds on them. It's conjectured that graphs of the type in this question might constitute most Maximal Planar Graphs, so a formula for the 'polygon' types might be an approximate formula or a lower bound for all types of Maximal Planar Graphs.</p>
<p>Dividing the polygon on the inside can be done in C(n-2) ways, where C(n) are the Catalan numbers A000108, so a connection was looked for - and A000109 seem to be close to C(n-2)*2^(n-13) at least for n=9 to 23.</p>
<p>So, the answer to this question would be of interest, as would any thoughts on connecting the A000108 and A000109 numbers. Lots of ways have been tried so far, e.g. since the next Catalan number can be formed by adding the products of the ones before it, e.g. C(4) = C(3)*C(1) + C(2)*C(2) + C(1)*C(3), perhaps something similar happens for A000109, or by incorporating numbers from both sequences. Lots of coincidences (probably) have been found, including that the number 233 in A000109, say X(10), is C(9-2) minus the sum of the Catalan numbers before it.</p>
<p>Think I'm going to go crazy looking for patterns any longer! Any suggestions please!</p>
| John Hunter | 721,154 | <p>Here is the work so far. In what follows, the number of distinct Hamiltonian Maximal Planar Graphs with n vertices is <span class="math-container">$X(n)$</span>. It should follow A000109 <a href="https://oeis.org/A000109/list" rel="nofollow noreferrer">https://oeis.org/A000109/list</a> up to <span class="math-container">$n=11$</span>, when the first non-Hamiltonian graph occurs.</p>
<p>Also used are the number of distinct ways to divide a polygon <a href="https://oeis.org/A000207" rel="nofollow noreferrer">https://oeis.org/A000207</a> . <span class="math-container">$A(n)$</span> will be a number from this list corresponding to the number of distinct ways to triangulate a polygon with <span class="math-container">$n$</span> sides (offset by <span class="math-container">$2$</span> from the list in the link) . There is a formula for these, not written here.</p>
<p>Since a Maximal Planar Hamiltonian graph can be represented by a regular polygon triangulated by 'inside' edges and also triangulated again with 'outside' edges, without repeating edges, the following is suggested.
<span class="math-container">$$
\begin{array}{c|cc}
n & A(n) & X(n)\\
\hline
3 & 1 & 1\\
4 & 1 & 1\\
5 & 1 & 1\\
6 & 3 & 2\\
7 & 4 & 5\\
8 & 12 & 14\\
9 & 27 & 50\\
10 & 82 & 233\\
11 & 228 & 1249\\
\end{array}
$$</span></p>
<p>Dividing the pentagon on the inside can be done in <span class="math-container">$1$</span> way, with a fan <span class="math-container">$F$</span> from one vertex.
There is only one distinct way to do it on the outside, another fan, so <span class="math-container">$1\times1=1$</span>.</p>
<p>For the hexagon, let's start with the maximum number of edges possible from either set of edges leaving a vertex: it's <span class="math-container">$3$</span>, making a fan <span class="math-container">$F$</span>. Let the hexagon be <span class="math-container">$ABCDEF$</span>, and say the <span class="math-container">$3$</span> inside edges leave vertex <span class="math-container">$B$</span>. The outside edges can't repeat any of these, so one of the outside edges must also join <span class="math-container">$A$</span> to <span class="math-container">$C$</span> to ensure it's triangulated without repeating edges. This leaves a pentagon to triangulate with the outside edges, but using one fewer edge (one was used for <span class="math-container">$AC$</span>) - the number of distinct ways for that is <span class="math-container">$A(5)$</span>.</p>
<p>There are two remaining ways to divide the hexagon with inside edges, a triangle <span class="math-container">$T$</span> and a <span class="math-container">$Z$</span> shape.
Next the case where the maximum number from one set of edges leaving a vertex is <span class="math-container">$2$</span>. If we draw diagrams using the <span class="math-container">$2$</span> remaining ways to divide the hexagon but not using a fan <span class="math-container">$F$</span>, we find an extra distinct case from this, so <span class="math-container">$X(6)=A(5)+1$</span>.</p>
<hr />
<p>In a similar way using tracing paper, with the <span class="math-container">$4$</span> ways to divide a heptagon, it's found that <span class="math-container">$X(7) = A(6) + 2$</span>. The <span class="math-container">$A(6)$</span> is the number using a Fan of <span class="math-container">$4$</span> edges at one corner of the heptagon, similar to above, and the <span class="math-container">$2$</span> comes by combining the other <span class="math-container">$3$</span> ways, each of the <span class="math-container">$2$</span> new cases came when one of the set of edges has <span class="math-container">$3$</span> edges from one set leaving a vertex. The case where the maximum number is <span class="math-container">$2$</span> edges leaving a vertex gave no extra distinct cases.</p>
<p>So a pattern is now being looked for of this type
<span class="math-container">$$
X(n) = A(n-1)+aX(n-1)+bX(n-2)+cX(n-3)\ldots
$$</span>
where <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> etc...are integer coefficients.</p>
<p>The pattern isn't clear and might involve the factors of the number <span class="math-container">$n$</span>, perhaps the Catalan numbers? So a definite reason isn't given here for the coefficients, perhaps it's coincidence, but it's possible to build up the <span class="math-container">$X(n)$</span> in this way e.g.
<span class="math-container">$$
\begin{array}{r|rcl|l}
n & X(n) &\!=\!& A(n-1)+aX(n-1)+\ldots &\\
\hline
6 & 2 &\!\!=\!\!& 1 + 1 &\\
7 & 5 &\!\!=\!\!& 3 + 2 &\\
8 & 14 &\!\!=\!\!& 4 + 2\times5 &\\
9 & 50 &\!\!=\!\!& 12 + 2\times14 + 2\times5 &\\
10 & 233 &\!\!=\!\!& 27 + 3\times50 + 4\times14 &\\
11 & 1248 &\!\!=\!\!& 82 + 4\times233 + 3\times50 + 6\times14 & \text{1248 instead of 1249 as there is one}\\
& & & & \text{non Hamiltonian graph for $n=11$.}
\end{array}
$$</span></p>
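<p>The arithmetic in these guessed decompositions does check out, which a few lines of code confirm (values copied from the tables above; the coefficients are conjectural, not derived):</p>

```python
# A(n): ways to triangulate an n-gon up to symmetry (table above, OEIS A000207);
# X(n): values from OEIS A000109 (the n=11 entry includes one non-Hamiltonian graph).
A = {5: 1, 6: 3, 7: 4, 8: 12, 9: 27, 10: 82}
X = {5: 1, 6: 2, 7: 5, 8: 14, 9: 50, 10: 233, 11: 1249}

assert X[6] == A[5] + 1
assert X[7] == A[6] + 2
assert X[8] == A[7] + 2 * X[7]
assert X[9] == A[8] + 2 * X[8] + 2 * X[7]
assert X[10] == A[9] + 3 * X[9] + 4 * X[8]
assert X[11] - 1 == A[10] + 4 * X[10] + 3 * X[9] + 6 * X[8]  # 1248 Hamiltonian
print("all decompositions check out")
```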
|
2,311,848 | <p>$X$ and $Y$ are independent r.v.'s and we know $F_X(x)$ and $F_Y(y)$. Let $Z=max(X,Y)$. Find $F_Z(z)$.</p>
<p>Here's my reasoning: </p>
<p>$F_Z(z)=P(Z\leq z)=P(max(X,Y)\leq z)$. </p>
<p>I claim that we have 2 cases here: </p>
<p>1) $max(X,Y)=X$. If $X<z$, we are guaranteed that $Y<z$, so $F_Z(z)=P(Z\leq z)=P(X<z)=F_X(z)$</p>
<p>2) $max(X,Y)=Y$. Similarly, $F_Z(z)=P(Z\leq z)=P(Y<z)=F_Y(z)$</p>
<p>Since we're interested in either case #1 or #2, </p>
<p>$F_Z(z)=F_X(z)+F_Y(z)-F_X(z)*F_Y(z)$</p>
<p>However, it's wrong and I know it. But I would like to know where the flaw in my reasoning is. I <strong><em>know the answer</em></strong> to this problem, I just want to know at what moment my reasoning fails.</p>
| Community | -1 | <p>$\max(X,Y)\le z$ means that <em>both</em> $X$ and $Y$ are $\le z$.</p>
|
2,311,848 | <p>$X$ and $Y$ are independent r.v.'s and we know $F_X(x)$ and $F_Y(y)$. Let $Z=max(X,Y)$. Find $F_Z(z)$.</p>
<p>Here's my reasoning: </p>
<p>$F_Z(z)=P(Z\leq z)=P(max(X,Y)\leq z)$. </p>
<p>I claim that we have 2 cases here: </p>
<p>1) $max(X,Y)=X$. If $X<z$, we are guaranteed that $Y<z$, so $F_Z(z)=P(Z\leq z)=P(X<z)=F_X(z)$</p>
<p>2) $max(X,Y)=Y$. Similarly, $F_Z(z)=P(Z\leq z)=P(Y<z)=F_Y(z)$</p>
<p>Since we're interested in either case #1 or #2, </p>
<p>$F_Z(z)=F_X(z)+F_Y(z)-F_X(z)*F_Y(z)$</p>
<p>However, it's wrong and I know it. But I would like to know where the flaw in my reasoning is. I <strong><em>know the answer</em></strong> to this problem, I just want to know at what moment my reasoning fails.</p>
| Math-fun | 195,344 | <p>I think you are fine with separating the cases, but then do not take care of them correctly. Since when you say in your case 1 that the maximum is $X$, you are "conditioning on" $X>Y$ and that changes the space over which you calculate the probabilities. </p>
<p>We have two cases that either of which happens:</p>
<p>case 1: $X<Y<z$</p>
<p>case 2: $Y<X<z$.</p>
<p>That is
\begin{align}
\Pr(\max\{X,Y\}<z)&=\Pr(X<Y<z)+\Pr(Y<X<z)\\
&=\int_{x=-\infty}^z\int_{y=x}^zf_Y(y)f_X(x)dydx+\int_{y=-\infty}^z\int_{x=y}^zf_Y(y)f_X(x)dxdy\\
&=\int_{y=-\infty}^z\int_{x=-\infty}^yf_Y(y)f_X(x)dxdy+\int_{y=-\infty}^z\int_{x=y}^zf_Y(y)f_X(x)dxdy\\
&=\int_{y=-\infty}^zf_Y(y)\left(\int_{x=-\infty}^yf_X(x)dx+\int_{x=y}^zf_X(x)dx\right)dy\\
&=\int_{y=-\infty}^zf_Y(y)\left(\int_{x=-\infty}^zf_X(x)dx\right)dy\\
&=F_Y(z)F_X(z).
\end{align}</p>
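<p>A quick Monte Carlo sketch of the product formula, with <span class="math-container">$X,Y$</span> i.i.d. Uniform(0,1) so that <span class="math-container">$F_X(z)=F_Y(z)=z$</span> and <span class="math-container">$F_Z(z)=z^2$</span>:</p>

```python
import random

random.seed(0)            # fixed seed for a reproducible estimate
N = 200_000
z = 0.5

# Empirical P(max(X, Y) <= z) for X, Y independent Uniform(0,1).
hits = sum(1 for _ in range(N)
           if max(random.random(), random.random()) <= z)
empirical = hits / N
print(empirical, z * z)
assert abs(empirical - z * z) < 0.01   # agrees with F_X(z) * F_Y(z) = z**2
```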
|
536,073 | <p>I came across several questions like this in the problem section of a book on coding theory & cryptography and I have no idea how to tackle them. There must be a certain trick that allows for efficiently solving such problems by hand.</p>
| André Nicolas | 6,312 | <p>There are a lot of tricks. A useful one for your problems is <strong>Fermat's Theorem</strong>, which says that if $p$ is prime and $a$ is not divisible by $p$, then $a^{p-1}\equiv 1\pmod{p}$.</p>
<p>We look for example at $2^{170}$ modulo $19$. Note that $170=9\cdot 18+8$. Thus
$$2^{170}=(2^{18})^9 2^8\equiv 1^9\cdot 2^8\equiv 2^8\pmod{19}.$$</p>
<p>The number $2^8$ is easy to handle. But even here there are tricks, which, for larger numbers, cut down enormously on computation time. You might want to look up the very useful <a href="http://en.wikipedia.org/wiki/Modular_exponentiation" rel="nofollow"><strong>binary method</strong></a> for modular (and other) exponentiation. Our number $2^8$ provides a too simple example. Note that $2^4=16\equiv -3\pmod{19}$. So $2^8\equiv (-3)^2\pmod{19}$. </p>
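<p>Python's three-argument <code>pow</code> performs modular exponentiation (using fast binary exponentiation internally), so the Fermat reduction can be verified directly:</p>

```python
# Reduce the exponent with Fermat's theorem, then compare against
# Python's built-in modular exponentiation (three-argument pow).
p, a, e = 19, 2, 170
r = e % (p - 1)                      # 170 = 9*18 + 8, so r = 8
assert r == 8
assert pow(a, e, p) == pow(a, r, p)  # 2**170 = 2**8 (mod 19)
print(pow(a, e, p))                  # 2**8 = 256 = 9 = (-3)**2 (mod 19)
```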
|
536,073 | <p>I came across several questions like this in the problem section of a book on coding theory & cryptography and I have no idea how to tackle them. There must be a certain trick that allows for efficiently solving such problems by hand.</p>
| Dennis Meng | 35,665 | <p>For those specific examples, <a href="http://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow">Fermat's Little Theorem</a> is the way to go. It states that if $p$ is prime and $gcd(a,p) = 1$, then $$a^{p-1} \equiv 1 \bmod{p}$$.</p>
<p>For the more general case where the modulus is not prime, you have <a href="http://en.wikipedia.org/wiki/Euler%27s_theorem" rel="nofollow">Euler's theorem</a>, which states that if $\gcd(a, m) = 1$, then $$a^{\phi(m)} \equiv 1 \bmod{m},$$ where $\phi$ is <a href="http://en.wikipedia.org/wiki/Euler%27s_totient_function" rel="nofollow">Euler's totient function</a>.</p>
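<p>A small illustration of Euler's theorem (the brute-force totient below is only a sketch, sensible for small $m$):</p>

```python
from math import gcd

def totient(m):
    """Euler's phi by brute-force count -- fine only for small m."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

m, a = 10, 7
assert gcd(a, m) == 1
assert totient(m) == 4                    # phi(10) = 4
assert pow(a, totient(m), m) == 1         # Euler: a**phi(m) = 1 (mod m)
assert pow(a, 123, m) == pow(a, 123 % totient(m), m)   # exponent reduction
```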
|
34,775 | <p><strong>What is the goal of MSE? Is it to get a repository of interesting questions and well-written answers. Or are we instead an online math tutoring site where we help anyone as long as they seem to be trying. These two goals are often in contradiction with each other!</strong></p>
<p>I am afraid that we are headed in the direction of being an online tutoring site, at least for a couple months in the spring and fall when school is in session. What I have noticed this past spring was that MSE was inundated with "newbie" users coming on here and asking on average a high volume of problem-set questions each--oftentimes 10 questions/week per person. Now, in all fairness, the users were demonstrating some effort in their questions. But it was clear that they were struggling with the basics, so their questions were hardly what you would consider to be "good" questions. And yet, these users still received a lot of help on their questions from the more established posters on MSE nonetheless. And so it continued on through March and April. It is like MSE was filling the role of Teaching Assistant or whatever for these students.</p>
<p>[It seems to have quieted down now that the most recent semester is about to end, but it will pick up again. Just wait until the fall! Or maybe even later this summer. If not even sooner than that.]</p>
<p>If the desire is to move back away from being a homework-tutoring site, it is probably going to be hard for the site to stop this without making changes on the admin level. [A possibility would be lowering the number of votes to close from 5 to 3. Another possibility would be to make a tag or section of MSE dedicated to someone learning the basics.] <strong>Meanwhile, I'm not seeing how the EoQS, as currently implemented, is changing this.</strong> This site is nonetheless being clogged with many boring or poorly-written questions, which are still getting rewarded with a long string of comments doing their best to tutor the student, and at least one of those comments [comments are under the purview of EoQS<span class="math-container">$^1$</span>] has the answer to the student's questions. These questions may get a couple votes to close and maybe a downvote too, but then they also get a pity upvote. And so we get many more such questions, because users are being rewarded for asking them--whether there is an "Answer" or not. There does seem to be a critical mass of users on MSE who do feel that this should be a site where struggling students can come for help with their basic homework even if their questions don't meet the MSE Guidelines, as long as they are demonstrating some effort.</p>
<p>If you cannot already tell, my vote would be MSE moving towards a repository of high-quality questions and answers, and away from being a homework-tutoring site.</p>
<p><strong>ETA: In any event though, I do think EoQS would work better if the way it were administered were shifted.</strong> What if the following changes were implemented:</p>
<p>(a) Reduce the number of votes needed to <em>close</em> [<strong>NOT</strong> delete!] a question from 5 down to 4 or 3. I think a reason why EoQS came to be in the first place was the proliferation of too many really bad questions that get too much oxygen.</p>
<p>(b) Enforce <em>comments</em> as much as answers. In particular, no more rewarding bad questions by answering in the comments. If we don't want a bad question answered in the answer box, then we don't want a bad question answered in the comments either. Likewise, if a question is worth keeping around, then it is worth being answered, <em>in the answer box</em>, as answering in the comments really helps no one.</p>
<p>I'm not necessarily for more enforcement, I am for smarter enforcement. The net result of what we are doing now w EoQS are question after question of debatable quality, with a long string of comments--in place of a well-written answer written where it is supposed to be--the answer box. The worst of both worlds--still no quality control but now messy flow. Should those questions be allowed to stay? Maybe. I get from the comments and whatnot that it is a debate. But if so, then at the very least, the formatting should be right.</p>
<p>Please advise.</p>
<p><strong>ETA 5/20/2022 18:30 EDT: Reading the other posts and comments here, I think the biggest problem with EoQS, as I see it, is the unclear and contradictory objectives here, and so what gets enforced as bad content is often absurd.</strong> I understand that really confused students are going to end up asking questions that are really duplicates [even with context]. For example, every semester we see a bunch of questions such as:</p>
<p><em>Is <span class="math-container">$\{(x_1,x_2) \in \mathbb{R}^2; 2x_1+x_2=5\}$</span> a vector space?</em></p>
<p>We will also get a bunch of questions about the probability of drawing <span class="math-container">$2$</span> red cards or a certain hand from a deck of <span class="math-container">$52$</span> cards, and so on. Just as we did last semester and the semester before that. The consensus on here, going by what I'm reading in the comments anyway, is that those questions should get respect on here if the student is showing effort. Alright, fine and great. If this is what the board decides then let's give those questions respect.
<strong>But then if these questions are fine and allowed, then what is the point of EoQS again? What is the point of shutting down more interesting questions again then? Sometimes an answer to a duplicate gives a different take that may be useful to the next person. And just as much, why are the ones who <em>answer</em> a lot of hard questions getting put into the corner then. They are the ones contributing to the knowledge base here! And they were never really contributing to the problem EoQS was supposedly about fixing.</strong></p>
<p>It often just all seems too arbitrary and capricious....</p>
<p><span class="math-container">$^1$</span> <strong>EoQS = Enforcement of Quality Standards</strong></p>
| discipulus | 1,060,368 | <p>(writing this from new user, but I have some experience on this in other SE's and MSE in the recent years)</p>
<ul>
<li>I believe that any change that is imposed from "above" will ultimately be difficult to manage and cannot be successful without drastic change in site that would (I bet on that) make this site much less popular and with less visits. And I don't say it is a bad or good thing. I just say it.</li>
<li>We may like it or not, but what happened to MSE happened organically. And I don't think it is different from other SE sites, nor even from other open communities, where at the beginning we see deep discussions and then they disappear.</li>
<li>The only way that it can be mitigated is an automatic engine to mark question as duplicates. (and then maybe if user posts the same question twice after being duplicated to be banned from site). As we don't have this automatic tool I don't see a practical way to change things.</li>
<li>It seems the function of MSE is neither "get a repository of interesting questions" nor "online math tutoring site". I got the impression that many people here enjoy answering questions more than they enjoy finding the duplicate.</li>
<li>The constant stream of low-level questions gives the opportunity (sometimes very rare) for some users to finally post an answer. The community does (as I recall) want the users "to pay back". Without these new "duplicate" and low-level questions it would be virtually impossible. It might help keep the user here on site to review more questions and to learn.</li>
</ul>
|
61,316 | <p>Hi all,</p>
<p>I heard a claim that if I have a matrix $A\in\mathbb R^{n\times n}$ such that $A^n \to 0 \ (\text{for }n\to\infty )$
(that is, every entry of $A^n$ converges to $0$ where $n\to \infty$)
then $I-A$ is invertible.</p>
<p>Does anyone know if there is a name for such a matrix, or how (for general knowledge) to prove this?</p>
| Community | -1 | <p>The matrices you are looking for are exactly those that have spectral radius (the max. of the absolute value of the eigenvalues) strictly less than one.
I do not know whether there is a more specific name.
(A matrix such that a finite power would be exactly the zero-matrix would be called nilpotent; but this is a different property.)</p>
<p>Regarding the invertibility of $I-A$.
Note that (first only formally) $(I-A) (I + A + A^2 + \dots )=I$</p>
<p>To make this rigorous it suffices to show that $(I + A + A^2 + \dots )$ converges. </p>
<p>This can be done by noting that the spectral radius is 'almost' a matrix norm;
more precisely, for $\varepsilon>0$ and all sufficiently large $k$ one has $||A^k|| \le (r + \varepsilon)^k$ where $r$ is the spectral radius. Now, you just have to sum a geometric series. For some more details and or background see e.g. <a href="http://en.wikipedia.org/wiki/Spectral_radius" rel="nofollow">http://en.wikipedia.org/wiki/Spectral_radius</a> and <a href="http://en.wikipedia.org/wiki/Matrix_norm" rel="nofollow">http://en.wikipedia.org/wiki/Matrix_norm</a> </p>
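<p>To see the Neumann series argument numerically, here is a quick sketch in Python/NumPy (my own, not part of the original answer; the matrix is an arbitrary example with spectral radius below one):</p>

```python
import numpy as np

# An arbitrary 2x2 matrix with spectral radius < 1 (eigenvalues 0.5 and 0.3).
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])
assert max(abs(np.linalg.eigvals(A))) < 1

# Partial sums of the Neumann series I + A + A^2 + ...
S = np.zeros_like(A)
P = np.eye(2)          # current power A^k, starting from A^0 = I
for _ in range(200):
    S += P
    P = P @ A          # advance A^k -> A^(k+1)

# The series should sum to the inverse of I - A.
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))   # True
```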
|
238,970 | <p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how it can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iteration functions. (The first three being addition, multiplication, and exponentiation, while the succeeding iteration function is pentation) </p>
<p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p>
<p>My question is simply, or maybe not so simply, what is the value of a number "raised" to a fractional tetration number. What would the value of 3 with a tetration number of 4/3 be?</p>
<p>Thanks for anyone's insight</p>
| The_Sympathizer | 11,172 | <p>Ah yes, a fave topic of mine. Basically, there is no universally agreed-on way to do this. The problem is that, in general, there isn't a unique way to interpolate the values of tetration at integer "height" (which is what the "number of exponents in the 'tower'" may be called). So in theory, you could define it to be anything.</p>
<p>In the case of exponentiation, one has the useful identity $a^{n + m} = a^n a^m$, which enables for a "natural" extension to non-integer values of the exponent. Namely, you can see, for example, that $a^1 = a^{1/2 + 1/2} = (a^{1/2})^2$, from which we can say that we need to define $a^{1/2} = \sqrt{a}$ if we want that identity to hold in the extended exponentiation. No such identities exist for tetration.</p>
<p>You may also want to look at Qiaochu Yuan's answer here, where he explores some of this from a viewpoint of higher math:</p>
<p><a href="https://math.stackexchange.com/a/56710/11172">https://math.stackexchange.com/a/56710/11172</a></p>
<p>One could, perhaps, compare this problem to the question of the interpolation of factorial $n!$ to non-integer values of $n$. There is, in general, no simple identity that provides a natural extension for this, either. <em>But</em>, when an extension is desired, the usual choice is to use what is called the "Gamma function", defined by</p>
<p>$$\Gamma(x) = \int_{0}^{\infty} e^{-t} t^{x-1} dt$$.</p>
<p>Then, you can extend $n!$ to non-integer $x$ by $x! = \Gamma(x+1)$. However, usually one does not use $x!$ for non-integer factorials, but rather the Gamma function notation.</p>
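<p>As a small aside illustrating this interpolation (my own example, using Python's standard-library Gamma function):</p>

```python
import math

# Gamma(n+1) reproduces n! at the non-negative integers...
for n in range(8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# ...and interpolates in between, e.g. a "factorial" of 3.5:
print(math.gamma(3.5 + 1))   # approx 11.6317
```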
<p>One can give a uniqueness theorem involving some simple analytical conditions; it is called the Bohr-Mollerup theorem. In addition, the gamma function has various nice number-theoretic and analytic properties, and turns up in a number of different areas of math.</p>
<p>But in the case of tetration, there are no nice integral representations known. Henryk Trappmann and some others recently proved a theorem that gives a simple uniqueness criterion for the <em>inverse</em> of tetration (with respect to the "height") here, presuming extension not just to the real, but the complex numbers:</p>
<p><a href="http://www.ils.uec.ac.jp/~dima/PAPERS/2009uniabel.pdf" rel="noreferrer">http://www.ils.uec.ac.jp/~dima/PAPERS/2009uniabel.pdf</a></p>
<p>The solution that satisfies the condition is one that was developed by Hellmuth Kneser in the 1940s. I call it "Kneser's tetrational function" or simply "Kneser's function". It defies simple description.</p>
<p>On this site:</p>
<p><a href="http://math.eretrandre.org/tetrationforum/index.php" rel="noreferrer">http://math.eretrandre.org/tetrationforum/index.php</a></p>
<p>an algorithm was posted to compute the Kneser solution (though I'm not sure if it's been proven) for various bases of tetration. Using this solution, the answer to your question would be</p>
<p>$$^{4/3} 3_\mathrm{Kneser} = 4.834730793026332...$$</p>
<p>Other interpolations for tetration have been proposed, some of which give different results. But this is the only one that seems to satisfy "nice" properties like analyticity and has a simple uniqueness theorem via its inverse. Yet as I said in the beginning, I don't believe that it's universally agreed by the general mathematical community that this is "the" answer.</p>
|
238,970 | <p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how it can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iteration functions. (The first three being addition, multiplication, and exponentiation, while the succeeding iteration function is pentation) </p>
<p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p>
<p>My question is simply, or maybe not so simply, what is the value of a number "raised" to a fractional tetration number. What would the value of 3 with a tetration number of 4/3 be?</p>
<p>Thanks for anyone's insight</p>
| Gottfried Helms | 1,714 | <p>Here is a quick-and-dirty implementation in Pari/GP to get some intuition about what is going on at all. The "Kneser method" is much more involved, but it seems there is a good possibility that the simple method below (I call it the "polynomial method") is asymptotic to/approximates the Kneser method when the size of the matrices gets increased without bound.</p>
<pre><code>n=32 \\ Size for matrix and power series. If n=48 ...
default(realprecision,800) \\ ... choose realprecision at least 2000!
\\ For n=64 Pari/GP needs much more digits
\\ and time
default(format,"g0.12") \\ display only 12 significant digits
[b =3, bl=log(b)] \\ we choose exponentiation/tetration to base bb=3
Bb = matrix(n,n,r,c,(bl*(c-1))^(r-1)/(r-1)!) ; \\ create the Carleman-matrix
\\ for iterable z1 = 3^z0
tmpM=mateigen(Bb); \\ invoke diagonalization to
tmpW = tmpM^-1; \\ allow fractional powers of
tmpD=diag(tmpW*Bb*tmpM); \\ the matrix Bb
\\ ==============================================================================
h=4/3 \\ the tetration-"height" can be fractional;
\\ and is best in the interval 0..1
coeffs=tmpM * vectorv(n,r, tmpW[r,2]*tmpD[r]^h)
\\ coeffs of the new power series for h=4/3
z0 = 1.0 \\ default starting value with
\\ tetration is usually z0=1
z1 = sum(k=0, n-1, z0^k * coeffs[1+k]) \\ = z0 tetrated to height ...
\\ ... 4/3 with base 3
\\ results:
\\ 4.8347111352647465948 \\ n=32 use matrix-size n=32
\\ 4.8347252436478228906 \\ n=48 when run with matrixsize n=48
\\ \\ n=64 : expected to approximate Kneser-value
\\ \\ if matrix size is increased
\\ 4.834730793026332... \\ reference by kneser-method as shown by @Mike4ty4
</code></pre>
|
238,970 | <p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how it can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iteration functions. (The first three being addition, multiplication, and exponentiation, while the succeeding iteration function is pentation) </p>
<p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p>
<p>My question is simply, or maybe not so simply, what is the value of a number "raised" to a fractional tetration number. What would the value of 3 with a tetration number of 4/3 be?</p>
<p>Thanks for anyone's insight</p>
| Mark Hunter | 700,582 | <p>Some attempts at defining <span class="math-container">$e$</span> raised to itself <span class="math-container">$s$</span> times, when <span class="math-container">$s$</span> is a real number rather than just a whole number, involve adding a sort of socket variable to turn the problem into finding continuous iterates of <span class="math-container">$exp(x)$</span>.</p>
<p>We want to find a unique function <span class="math-container">$exp_s (x)$</span> of a single variable <span class="math-container">$x$</span> where <span class="math-container">$s$</span> is a parameter for the number of iterations.</p>
<p>... <span class="math-container">$exp_1$</span>(x) = <span class="math-container">$exp(x)$</span></p>
<p>and for <span class="math-container">$s$</span> and <span class="math-container">$t$</span> real </p>
<p>... <span class="math-container">$exp_s (exp_t (x)) = exp_{s + t} (x)$</span>.</p>
<p>Then <span class="math-container">$exp_s (1)$</span> will answer your question.</p>
<p>It’s hard to find a continuous iterate of a function that doesn't have a fixed point, like <span class="math-container">$exp(x)$</span> on the real line. H. Kneser’s method involves going into the complex plane to find a fixed point. See the links other people here have provided.</p>
<p>The trouble is that there is more than one fixed point in the complex plane, which leads to singularities and non-uniqueness. There are ways of dealing with this but they don’t convince everybody.</p>
<p>George Szekeres (1911 - 2005) tackled the problem solely in the real domain. His method is explained, and some serious gaps in his argument patched up, in the article:</p>
<p>“The Fourth Operation”
<a href="http://ariwatch.com/VS/Algorithms/TheFourthOperation.htm" rel="nofollow noreferrer">http://ariwatch.com/VS/Algorithms/TheFourthOperation.htm</a></p>
|
550,441 | <p>Say I roll a 6-sided die until its sum exceeds $X$. What is E(rolls)?</p>
| Community | -1 | <p>Let $h(s)$ be the expected number of rolls to exceed $X$, starting with a sum of $s$.
Then "first step analysis" gives the recursive formula $h(s)=1+{1\over 6}\sum_{j=1}^6 h(s+j)$ for $0\leq s\leq X$, while $h(s)=0$ for $s>X$. You use this equation to calculate $h(s)$ for $s=X,X-1,X-2,\dots$ and eventually work your way back to $h(0)$, the answer to your question. </p>
<p>The answer $h(0)$ will be very close to $X/3.5$ for large $X$. </p>
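<p>The recursion is straightforward to evaluate numerically, working back from $s=X$ down to $s=0$ (a sketch in Python, not from the original answer):</p>

```python
def expected_rolls(X):
    """Expected number of rolls of a fair die until the running sum exceeds X."""
    # h[s] = expected rolls remaining when the current sum is s;
    # h[s] = 0 for s > X, since the target is already exceeded.
    h = [0.0] * (X + 7)
    for s in range(X, -1, -1):
        h[s] = 1 + sum(h[s + j] for j in range(1, 7)) / 6
    return h[0]

print(expected_rolls(1))     # 7/6: one extra roll is needed only after a first 1
print(expected_rolls(1000))  # about 286.48, close to 1000/3.5 = 285.71...
```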
|
2,995,495 | <p>I'm trying to prove that, for every <span class="math-container">$x \geq 1$</span>:</p>
<p><span class="math-container">$$\left|\arctan (x)-\frac{π}{4}-\frac{(x-1)}{2}\right| \leq \frac{(x-1)^2}{2}.$$</span> </p>
<p>I could do it graphically on <span class="math-container">$\Bbb R$</span>, but how to make a formal algebraic proof?</p>
| hamza boulahia | 406,464 | <h2> Hint: </h2>
<p>Use the Taylor expansion of <span class="math-container">$\arctan(x)$</span> near <span class="math-container">$1$</span> to the second order. And the fact that <span class="math-container">$\arctan(x)$</span> is concave when <span class="math-container">$x\geq1$</span></p>
<hr>
<h2> Answer: </h2>
<p>So the Taylor expansion of <span class="math-container">$\arctan(x)$</span> around <span class="math-container">$1$</span> to the second order is given by:
<span class="math-container">$$ \arctan(x)=\frac{\pi}{4}+\frac{(x-1)}{2}-\frac{(x-1)^2}{4}+o((x-1)^2)$$</span>
Hence, <span class="math-container">$$ \arctan(x)-\frac{\pi}{4}-\frac{(x-1)}{2}=-\frac{(x-1)^2}{4}+o((x-1)^2)$$</span>
We can write: <span class="math-container">$$o((x-1)^2)=\frac{(x-1)^2}{2}\varepsilon(x-1),\quad\varepsilon(x-1)\xrightarrow{x\rightarrow 1}{0} $$</span></p>
<p>Then we have,<span class="math-container">\begin{align} \arctan(x)-\frac{\pi}{4}-\frac{(x-1)}{2}&=-\frac{(x-1)^2}{4}+\frac{(x-1)^2}{2}\varepsilon(x-1)\\
&=\frac{(x-1)^2}{2}\bigg(-\frac{1}{2}+\varepsilon(x-1)\bigg)\end{align}</span>
So when <span class="math-container">$x$</span> is sufficiently close to <span class="math-container">$1$</span> we have that <span class="math-container">$$-1<\bigg(-\frac{1}{2}+\varepsilon(x-1)\bigg)<0$$</span>
So,
<span class="math-container">\begin{align} \left|\arctan (x)-\frac{π}{4}-\frac{(x-1)}{2}\right|&=\left| \frac{(x-1)^2}{2}\bigg(-\frac{1}{2}+\varepsilon(x-1)\bigg)\right|\\
&= \frac{(x-1)^2}{2}\bigg(\frac{1}{2}-\varepsilon(x-1)\bigg)\\
&\leq \frac{(x-1)^2}{2}\tag{$\star$}\end{align}</span></p>
<p>And since for <span class="math-container">$x\geq1$</span>, the second derivative of <span class="math-container">$f(x)=\left|\arctan (x)-\frac{π}{4}-\frac{(x-1)}{2}\right|$</span> is the opposite of that of <span class="math-container">$\arctan(x)$</span> so <span class="math-container">$f(x)$</span> is convex over <span class="math-container">$[1,+\infty[$</span>. Furthermore, the second derivative of <span class="math-container">$g(x)=\frac{(x-1)^2}{2}$</span> is <span class="math-container">$1$</span> so <span class="math-container">$g(x)$</span> is also convex over <span class="math-container">$[1,+\infty[$</span>.</p>
<p>The first derivatives of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, over <span class="math-container">$[1,+\infty[$</span>, are:
<span class="math-container">$$ f'(x)=\frac{1}{2}-\frac{1}{1+x^2},\quad\quad g'(x)=x-1$$</span>
We clearly have <span class="math-container">$$ g'(x)\geq f'(x),\quad\forall x\geq1$$</span>
And since <span class="math-container">$g(1)=f(1)=0$</span> then the result <span class="math-container">$(\star)$</span> holds true over <span class="math-container">$[1,+\infty[$</span>.</p>
|
2,995,495 | <p>I'm trying to prove that, for every <span class="math-container">$x \geq 1$</span>:</p>
<p><span class="math-container">$$\left|\arctan (x)-\frac{π}{4}-\frac{(x-1)}{2}\right| \leq \frac{(x-1)^2}{2}.$$</span> </p>
<p>I could do it graphically on <span class="math-container">$\Bbb R$</span>, but how to make a formal algebraic proof?</p>
| zhw. | 228,045 | <p>Hint: For each <span class="math-container">$x >1,$</span> Taylor gives</p>
<p><span class="math-container">$$\arctan (x)=\frac{\pi}{4}+\frac{(x-1)}{2} + \frac{\arctan'' (c_x)}{2}(x-1)^2,$$</span></p>
<p>where <span class="math-container">$1<c_x<x.$</span> Thus all you need to show is that <span class="math-container">$|\arctan'' (c)|\le 1$</span> for all <span class="math-container">$c\ge 1.$</span></p>
|
112,137 | <p>I'm guessing the answer to this question is well-known:</p>
<p>Suppose that $Y:C \to P$ and $F:C \to D$ are functors with $D$ cocomplete. Then one can define the point-wise Kan extension $\mathbf{Lan}_Y\left(F\right).$ Under what conditions does $\mathbf{Lan}_Y\left(F\right)$ preserve colimits? Notice that if $C=P$ and $Y=id_C,$ then $\mathbf{Lan}_Y\left(F\right)=F,$ so this is not true in general. Would $F$ preserving colimits imply this?</p>
<p>Dually, under what conditions does a right Kan extension preserve limits?</p>
<p>Thank you.</p>
| Tom Leinster | 586 | <p>$F$ preserving colimits doesn't imply that $\text{Lan}_Y(F)$ preserves colimits, even if all the categories are cocomplete. </p>
<p>Consider, for example, the case $C = D$ and $F = 1_C$. Then the left Kan extension $\text{Lan}_Y(1_C)$ exists if and only if $Y$ has a right adjoint, and if it does exist, it <em>is</em> the right adjoint of $Y$. (This is Theorem X.7.2 of <em>Categories for the Working Mathematician</em>.) Of course, $1_C$ preserves colimits, but right adjoints usually don't.</p>
<p>(From your notation, I guess you're generalizing from the case where $P$ is the category of Presheaves on $C$ and $Y$ is the Yoneda embedding. In that case, as I bet you know, $\text{Lan}_Y(F) = - \otimes F$ not only preserves colimits but has a right adjoint.) </p>
|
112,137 | <p>I'm guessing the answer to this question is well-known:</p>
<p>Suppose that $Y:C \to P$ and $F:C \to D$ are functors with $D$ cocomplete. Then one can define the point-wise Kan extension $\mathbf{Lan}_Y\left(F\right).$ Under what conditions does $\mathbf{Lan}_Y\left(F\right)$ preserve colimits? Notice that if $C=P$ and $Y=id_C,$ then $\mathbf{Lan}_Y\left(F\right)=F,$ so this is not true in general. Would $F$ preserving colimits imply this?</p>
<p>Dually, under what conditions does a right Kan extension preserve limits?</p>
<p>Thank you.</p>
| John Bourke | 17,696 | <p>The pointwise left Kan extension of F along Y is a coend of functors $Lan_{Y}(F) = \int^{x}P(Yx,-).Fx$ where each functor $P(Yx,-).Fx$ is the composite of the representable $P(Yx,-):P \to Set$ and the copower functor $(-.Fx):Set \to D$. As a coend (colimit) of the $P(Yx,-).Fx$, the left Kan extension preserves any colimit preserved by each of these functors.</p>
<p>Now the copower functor $(-.Fx)$ is left adjoint to the representable $D(Fx,-)$ and so preserves all colimits, so that $P(Yx,-).Fx$ preserves any colimit preserved by $P(Yx,-)$.
Therefore $Lan_{Y}(F)$ preserves any colimit preserved by each representable $P(Yx,-):P \to Set$ for $x \in C$.</p>
<p>If Y is the Yoneda embedding we have $P(Yx,-)=[C^{op},Set](Yx,-)=ev_{x}$ the evaluation functor at x which preserves all colimits, so that left Kan extensions along Yoneda preserve all colimits. </p>
<p>Or if each $P(Yx,-)$ preserves filtered colimits then left Kan extensions along Y preserve filtered colimits.</p>
<p>I think this is all well known but don't know a reference.</p>
|
480,195 | <p>Three friends bought 3 pens together, each paying 10 dollars. The next day they got 5 dollars cash back, so they took 1 dollar each and donated 2 dollars. Now the pen cost for each guy is 9 dollars (\$10 -\$1).</p>
<p>But if you add it all up, 9+9+9 = 27 dollars, plus the donated 2 dollars, the total is 29 dollars. </p>
<p>Where is the other \$1?</p>
| Tomas | 83,498 | <p>The last conclusion is simply wrong. You are right, they paid $27$ dollars altogether. The pens, however, cost $25$ dollars ($30$ dollars initially, minus the $5$ dollar refund), so the difference is exactly the two-dollar donation. </p>
<p>There is no sense in adding the $2$ dollars, since the nine dollars each friend spent includes the donation.</p>
<p><strong>EDIT:</strong> As T. Bongers noted, this is a known fallacy, so you might want to check <a href="http://en.wikipedia.org/wiki/Missing_dollar_riddle" rel="nofollow">Wikipedia</a> or google for "missing dollar" for more detailed explanations.</p>
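<p>The bookkeeping can be written out explicitly (a trivial sketch in Python, added for illustration):</p>

```python
paid_initially = 3 * 10                  # 30 dollars handed over
refund = 5
returned_to_friends = 3 * 1              # 1 dollar back to each friend
donation = refund - returned_to_friends  # 2 dollars

spent_per_friend = 10 - 1                # 9 dollars each, donation included
total_spent = 3 * spent_per_friend       # 27 dollars

pen_cost = paid_initially - refund       # the pens actually cost 25 dollars
# 27 spent = 25 for the pens + 2 donated; adding the donation to 27
# counts it twice, which is where the phantom 29 comes from.
assert total_spent == pen_cost + donation
print(total_spent, pen_cost, donation)  # 27 25 2
```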
|
2,601,412 | <p>"A game is played by tossing an unfair coin ($P(head) = p$) until $A$ heads or $A$ tails (not necessarily consecutive) are observed. What is the expected number of tosses in one game?"</p>
<p>My approach is the following:</p>
<p>Let's represent a head by $H$ and a tail by $T$, and call $H_n$ the event "the game ends with $A$ heads when the coin is tossed for the n-th time" and $T_n$ the event "the game ends with $A$ tails when the coin is tossed for the n-th time".</p>
<p>First, I analyse $H_n$.</p>
<p>For $n<A$, $P(H_n) = 0$ because at least $A$ tosses are needed.
For $n>2A-1$, $P(H_n) = 0$ because by then we will surely have at least $A$ heads or tails.</p>
<p>$$P(H_A) = p^A$$</p>
<p>$$P(H_{A+1}) = \left(\binom{A+1}{A} -1 \right) p^A(1-p)$$
$$P(H_{A+2}) = \left(\binom{A+2}{A} -\binom{A+1}{A} \right) p^A(1-p)^2$$</p>
<p>We can generalize it to:
$$P(H_{A+i}) = \left(\binom{A+i}{A} -\binom{A+i-1}{A} \right) p^A(1-p)^{i}$$</p>
<p>The expression for $T_n$ is analogous:
$$P(T_{A+i}) = \left(\binom{A+i}{A} -\binom{A+i-1}{A} \right) p^i(1-p)^{A}$$</p>
<p>And the expectation for the number of tosses in one game is:
$$\sum_{i=0}^{A-1} P(H_{A+i}) (A+i) + \sum_{i=0}^{A-1} P(T_{A+i}) (A+i)$$</p>
<p><strong>Is it correct? Is there a more elegant way of doing it?</strong></p>
<p>EDIT:</p>
<p>For each of the cases $(A,p) \in \{3,5,10\} \times \{0.5,0.6,0.7\}$, I simulated $10^7$ games. The maximum relative difference between the simulated average and the expectation given by the formula above was $0.013\%$. I am assuming the formula is correct.</p>
| BallBoy | 512,865 | <p>The approach seems correct. There are a couple of different ways to get to the terms in the sum, and rearrange the sum, but no significantly more elegant method that I see.</p>
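<p>The formula from the question is easy to implement and spot-check (a sketch in Python, not from the original post; <code>math.comb</code> supplies the binomial coefficients and returns zero when the lower index exceeds the upper one, which handles the $i=0$ term):</p>

```python
from math import comb

def expected_tosses(A, p):
    """E[tosses] until A heads or A tails are observed, per the question's formula."""
    total = 0.0
    for i in range(A):
        w = comb(A + i, A) - comb(A + i - 1, A)    # count of valid orderings
        total += (A + i) * w * (p**A * (1 - p)**i       # game ends with A heads
                                + p**i * (1 - p)**A)    # game ends with A tails
    return total

print(expected_tosses(1, 0.5))   # 1.0: the game always ends on the first toss
print(expected_tosses(2, 0.5))   # 2.5: by hand, 2*(1/2) + 3*(1/2)
```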
|
317,753 | <p>I am taking real analysis in university. I find that it is difficult to prove some certain questions. What I want to ask is:</p>
<ul>
<li>How do we come out with a proof? Do we use some intuitive idea first and then write it down formally?</li>
<li>What books do you recommended for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems? </li>
</ul>
| amWhy | 9,003 | <p>While this doesn't speak, directly, to Real Analysis, it is a recommendation that will help you there, and in other courses you encounter, or will encounter soon:</p>
<p>In terms of both reading and writing proofs, in general, an excellent book to work through and/or have as a reference is Velleman's great text <strong><em><a href="http://rads.stackoverflow.com/amzn/click/0521675995" rel="nofollow">How to Prove It: A Structured Approach</a></em></strong>. The best way to overcome doubt and apprehension about proofs, whether trying to understand them or to write them, is to be patient, persist, and <strong><em>dig in and do it</em></strong>! (often, write and rewrite, read and reread, until you're convinced and you're convinced you can convince others!)</p>
<hr>
<p>One helpful (and free-to-use) online resource is the website maintained by MathCS.org: <a href="http://www.mathcs.org/analysis/reals/" rel="nofollow"><strong><em>Interactive Real Analysis</em></strong></a>. </p>
<blockquote>
<p>"Interactive Real Analysis is an online, interactive textbook for Real Analysis or Advanced Calculus in one real variable. It deals with sets, sequences, series, continuity, differentiability, integrability (Riemann and Lebesgue), topology, power series, and more."</p>
</blockquote>
|
1,005,576 | <p>How can I write this term in a compact form where $a$ only appears once on the RHS (in particular without cases)?</p>
<p>$T(a) =
\begin{cases}
a^2 &,\text{ if $a \leq 0$}\\
2a^2 &,\text{ if $a > 0$}\\
\end{cases}$</p>
<p>I have already thought about $T(a) = \max\{\sqrt{2}a,|a|\}^2$ or $T(a) = \frac{3+\text{sgn}(a)}{2}a^2$, but in both cases $a$ appears twice.</p>
| matheburg | 155,537 | <p>Another class of solutions to this problem is the "trivial substitution class".</p>
<p><strong>Examples</strong></p>
<p>$$T(a) = 2\int_0^a x\cdot(\max\{\operatorname{sgn}(x),0\}+1)\,dx$$</p>
<p>or even more trivial</p>
<p>$$T(a) = \max\{\sqrt{2}x,|x|\}^2\bigg|_{x=a}$$</p>
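<p>A quick numerical check (my own, not part of the answer) that the $\max$-based and $\operatorname{sgn}$-based closed forms from the question agree with the piecewise definition:</p>

```python
import math

def T_piecewise(a):
    return a * a if a <= 0 else 2 * a * a

def T_max(a):
    # max{sqrt(2)*x, |x|}^2 evaluated at x = a
    return max(math.sqrt(2) * a, abs(a)) ** 2

def T_sgn(a):
    # ((3 + sgn(a)) / 2) * a^2
    sgn = (a > 0) - (a < 0)
    return (3 + sgn) / 2 * a * a

for a in [-3.0, -0.5, 0.0, 0.5, 3.0]:
    assert math.isclose(T_max(a), T_piecewise(a), abs_tol=1e-12)
    assert math.isclose(T_sgn(a), T_piecewise(a), abs_tol=1e-12)
print("both closed forms match the piecewise definition")
```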
|
22,101 | <p>The general rule used in LaTeX doesn't work: for example, typing <code>M\"{o}bius</code> and <code>Cram\'{e}r</code> doesn't give the desired outputs.</p>
| Community | -1 | <p>I also wanted to type Möbius, etc, just as in LaTeX, by typing <kbd>M</kbd><kbd>\</kbd><kbd>"</kbd><kbd>o</kbd>... so I've made a <a href="https://github.com/normalhuman/MathShortcuts2" rel="nofollow">userscript</a> for that. To use it,</p>
<ol>
<li>Install a userscript manager (e.g., Tampermonkey extension for Chrome or Greasemonkey extension for Firefox)</li>
<li><a href="https://raw.githubusercontent.com/normalhuman/MathShortcuts2/master/MathShortcuts2.user.js" rel="nofollow">Click here</a> to add the userscript. </li>
</ol>
<p>Besides the common diacritics, the script enables shortcuts for "blackboard bold" letters and math operator names: see the <a href="https://github.com/normalhuman/MathShortcuts2/blob/master/README.md" rel="nofollow">readme file</a>.</p>
|
22,101 | <p>The general rule used in LaTeX doesn't work: for example, typing <code>M\"{o}bius</code> and <code>Cram\'{e}r</code> doesn't give the desired outputs.</p>
| Rob | 510,296 | <p>Slightly less ugly than one answer offered is: \ddot{\mathsf a} <span class="math-container">$\ddot{\mathsf a}$</span> - but it's slightly bold and for all the extra typing it's better to find a webpage with the characters and copy/paste them (assuming you're using a cellphone or a keyboard without the characters).</p>
<p>Sources from webpages are:</p>
<ul>
<li><p>language specific ones such as: <a href="https://en.m.wikipedia.org/wiki/Open_central_unrounded_vowel" rel="nofollow noreferrer">open central unrounded vowel</a>, <a href="https://en.m.wikipedia.org/wiki/Germanic_umlaut" rel="nofollow noreferrer">Germanic umlaut</a> - those offer explanations of history and usage, along with a litany of links.</p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Diaeresis_(diacritic)" rel="nofollow noreferrer">Big list of Diaeresis</a> (single, double, above, below, offset: dots and lines) at Wikipedia, and ones linked to <a href="https://en.wikipedia.org/wiki/Cedilla" rel="nofollow noreferrer">various diacritics</a>.</p></li>
<li><p>Search websites such as <a href="http://www.amp-what.com/unicode/search/space" rel="nofollow noreferrer">&what;</a> allow searches for all the Unicode characters.</p></li>
</ul>
<p>You can copy them from here also: ä ö ü </p>
<p>Latin
Ä ä
Ǟ ǟ
Ą̈ ą̈
B̈ b̈
C̈ c̈
Ë ë
Ḧ ḧ
Ï ï
Ḯ ḯ
K̈ k̈
M̈ m̈
N̈ n̈
Ö ö
Ȫ ȫ
Ǫ̈ ǫ̈
Ṏ ṏ
P̈ p̈
Q̈ q̈
Q̣̈ q̣̈
S̈ s̈
T̈ ẗ
Ü ü
Ǖ ǖ
Ǘ ǘ
Ǚ ǚ
Ǜ ǜ
Ṳ ṳ
Ṻ ṻ
Ṳ̄ ṳ̄
ᴞ
V̈ v̈
Ẅ ẅ
Ẍ ẍ
Ÿ ÿ
Z̈ z̈</p>
<p>Greek
Ϊ ϊ
ῒ ΐ ῗ
Ϋ ϋ
ῢ ΰ ῧ
ϔ</p>
<p>Cyrillic
Ӓ ӓ
Ё ё
Ӛ ӛ
Ӝ ӝ
Ӟ ӟ
Ӥ ӥ
Ї ї
Ӧ ӧ
Ӫ ӫ
Ӱ ӱ
Ӵ ӵ
Ӹ ӹ
Ӭ ӭ</p>
<p>The only MathJax used in this answer is in the first sentence, the remaining characters can all be copy/pasted off of this webpage and onto another Stack Exchange edit or comment box, as-is.</p>
|
1,845,663 | <p>$\left\{a,b,c\right\}\in \mathbb{R}^3$ are linearly independent vectors.</p>
<p>Find the value of $\lambda $, so the dimension of the subspace generated by the vectors:</p>
<p>$2a-3b,\:\:\left(\lambda -1\right)b-2c,\:\:3c-a,\:\:\lambda c-b$ is 2.</p>
<p>So, if I understand this correctly the span of the given vectors should have 2 linearly independent vectors, so I construct the matrix:</p>
<p>$$A=\begin{pmatrix}2&0&-1&0\\ -3&\lambda -1&0&-1\\ 0&-2&3&\lambda \end{pmatrix}$$</p>
<p>And this matrix should have rankA = 2? And now I should just find a value for lambda that satisfies this condition? Is my logic correct?</p>
| Zau | 307,565 | <p>As Michael said, by using row reduction:</p>
<p>$$\begin{pmatrix}2&0&-1&0\\ -3&\lambda -1&0&-1\\ 0&-2&3&\lambda \end{pmatrix}$$</p>
<p>$$ \iff \begin{pmatrix}2&0&-1&0\\ 0&\lambda -1&-\frac{3}{2}&-1\\ 0&-2&3&\lambda \end{pmatrix} $$</p>
<p>$$ \iff \begin{pmatrix}2&0&-1&0\\ 0&-2&3&\lambda\\0&\lambda -1&-\frac{3}{2}&-1 \end{pmatrix}$$</p>
<p>$$ \iff \begin{pmatrix}2&0&-1&0\\ 0&-2&3&\lambda\\0&0&\frac{3}{2}(\lambda-2)&\frac{1}{2}(\lambda+1)(\lambda-2) \end{pmatrix}$$</p>
<p>Notice that the first and second rows have non-zero pivotal elements, $2$ and $-2$, in different positions, so the first and second row vectors are linearly independent no matter what value $\lambda$ takes.</p>
<p>By the same logic, if $\lambda \neq 2$, the three row vectors are linearly independent. Therefore, the rank of $A$ is $3$.</p>
<p>If $\lambda = 2$, the third row vector is $ ( \ 0 \ 0\ 0\ 0 )$, which shows that the three row vectors are linearly dependent. By the argument above, $ \operatorname{rank} A = 2 $</p>
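<p>The conclusion is easy to confirm numerically (a sketch using NumPy's rank routine, not part of the original answer; the columns are the coordinates of the four given vectors in the basis $\{a,b,c\}$):</p>

```python
import numpy as np

def span_matrix(lam):
    # Coordinates of 2a-3b, (lam-1)b-2c, 3c-a, lam*c-b in the basis {a, b, c}.
    return np.array([[ 2.0,       0.0, -1.0,  0.0],
                     [-3.0, lam - 1.0,  0.0, -1.0],
                     [ 0.0,      -2.0,  3.0,  lam]])

print(np.linalg.matrix_rank(span_matrix(2)))   # 2: the span has dimension 2
print(np.linalg.matrix_rank(span_matrix(0)))   # 3: other values of lam give rank 3
```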
|
1,845,663 | <p>$\left\{a,b,c\right\}\in \mathbb{R}^3$ are linearly independent vectors.</p>
<p>Find the value of $\lambda $, so the dimension of the subspace generated by the vectors:</p>
<p>$2a-3b,\:\:\left(\lambda -1\right)b-2c,\:\:3c-a,\:\:\lambda c-b$ is 2.</p>
<p>So, if I understand this correctly the span of the given vectors should have 2 linearly independent vectors, so I construct the matrix:</p>
<p>$$A=\begin{pmatrix}2&0&-1&0\\ -3&\lambda -1&0&-1\\ 0&-2&3&\lambda \end{pmatrix}$$</p>
<p>And this matrix should have rankA = 2? And now I should just find a value for lambda that satisfies this condition? Is my logic correct?</p>
| Marc van Leeuwen | 18,880 | <p>I'd say column reduction is easier here, since there are already two linearly independent columns that do not involve $\lambda$ at all; the remaining columns must be linear combinations of them. Column-reduction of $A$ (starting with moving the second column to the end) gives
$$
A'=\begin{pmatrix}2&0&0&0\\
-3&1&0&0\\
0&-2&\lambda-2 &2\lambda-4\end{pmatrix}
$$
Now it is clear that each of the last columns will only be a linear combination of the first two columns if it is zero, which just happens to occur for the same value of $\lambda$, namely $\lambda=2$; this is your answer.</p>
|
878,115 | <p>Question1:
I found 30 boxes. In 10 boxes I found 15 balls. In 20 boxes I found 0 balls.
After I collected all 15 balls I put them back randomly inside the boxes.</p>
<p>What is the chance that all balls are in only 10 boxes or fewer?</p>
<p>Question2:
I found 30 boxes. In 10 boxes I found 15 balls. In 20 boxes I found 0 balls. In two of the boxes I found 3 balls each. (So one box has to contain 2 balls and the other seven boxes have to contain 1 ball each.)
After I collected all 15 balls I put them back randomly inside the boxes.</p>
<p>What is the chance that I find 6 balls or more in only 2 boxes?</p>
<p>I wrote a C# program and ran the experiment 1 million times.
My result: with a chance of about 12.4694%, all balls are in 10 boxes or fewer.</p>
| hardmath | 3,111 | <p>Random trials/Monte Carlo simulations are notoriously slow to converge, with an expected error inversely proportional to the square root of the number of trials.</p>
<p>In this case it is not hard (given a programming language that provides big integers) to do an exact count of cases. Effectively the outcomes are partitions of the 15 balls into some number of boxes (we have thirty boxes to work with, so at least half will be empty).</p>
<p>I wrote a Prolog program to do this (Amzi! Prolog has arbitrary precision integers built in), and got the following results:</p>
<p>$$ Pr(\text{10 or fewer boxes occupied}) = \frac{59486359170743424000}{30^{14}} \approx
0.124371 $$</p>
<p>$$ Pr(\text{2 boxes hold 6 or more balls}) = \frac{30415369655816064000}{30^{14}} \approx
0.063591 $$</p>
<p>The reason I'm dividing by $30^{14}$ in these probabilities is because I normalized the counting to begin with one case where a ball is in one box. If we counted that as thirty cases, we'd need to divide by $30^{15}$. So this keeps the totals slightly smaller. Each ball we add increases the total number of cases by a factor of $30$.</p>
<p>I wrote a recursive rule to build cases for $n+1$ balls from cases for $n$ balls. The first few cases have the following counts:</p>
<pre><code> /* case(Balls,Partition,LengthOfPartition,Count) */
case(1,[1],1,1). /* Count is nominally 1 to begin */
case(2,[2],1,1).
case(2,[1,1],2,29).
case(3,[3],1,1).
case(3,[1,2],2,87).
case(3,[1,1,1],3,812). /* check: for Sum = 3, sum of Count is 900 */
</code></pre>
<p>The <a href="https://oeis.org/A000041" rel="nofollow">number of cases generated</a> is modest enough for a desktop, daunting to manage by hand. For $n=15$ there are $176$ partitions. It simplified the Prolog code to maintain the partitions as lists in ascending order.</p>
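<p>The first probability can also be reproduced with a short exact computation (a sketch, assuming the balls land independently and uniformly in the boxes — equivalent to the normalization above — and using inclusion–exclusion for the surjection counts):</p>

```python
from fractions import Fraction
from math import comb

BALLS, BOXES = 15, 30

def surjections(n, k):
    # ways to place n labeled balls so that all k given boxes are
    # occupied (inclusion-exclusion)
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))

# exact probability that at most 10 boxes end up occupied
p_at_most_10 = Fraction(
    sum(comb(BOXES, k) * surjections(BALLS, k) for k in range(1, 11)),
    BOXES ** BALLS,
)
```

<p>This matches the value $\approx 0.124371$ above.</p>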
|
927,261 | <p>I was doing a presentation on Limits and I was using this $$f(x)=\frac{x^2+2x-8}{x^2-4}$$ to explain different types of limits. </p>
<p>I know that the function is not defined at $x=-2$ or $x=2$. I showed the graph and everyone was ok with the graph at $x=-2$ but one member of the audience didn't like how the graph looked at $x=2$. </p>
<p>I think they didn't understand that a function doesn't need to be defined at a point to have a limit there. I said there was a hole at $x=2$, but now I'm not sure, because when I restricted the domain to be close to $x=2$, this was displayed. </p>
<p><img src="https://i.stack.imgur.com/8Iayy.jpg" alt="graph of f(x) near x=2"></p>
<p>I used "discont=true" as an option of the plot command. </p>
<p>I computed both the left and right limits of $f(x)$ as $x\to 2$; both limits equal $3/2$. I don't think there is any up-and-down behavior like $\sin(1/x)$.</p>
<p>Is this a problem with maple or have I missed something about limits? </p>
| acer | 12,448 | <p>As mentioned, you are seeing artefacts of floating-point computation. These can be alleviated by increasing the working precision, by adjusting the <code>Digits</code> environment variable.</p>
<p>The default value of <code>Digits</code> is 10. Also, for an expression containing only arithmetic operations (and elementary functions), Maple's <code>plot</code> command will try to use its faster <code>evalhf</code> double-precision interpreter if Digits is less than 15, which is <code>trunc(evalhf(Digits))</code>.</p>
<p>For your example the <code>plot</code> command's option <code>discont=[showremovable]</code> will mark the plot of the expression f at x=2 with a symbol. This works for your f which is an explicit expression, for which Maple uses its symbolic <code>discont</code> procedure to find the point discontinuity. It may not work if f were instead an operator (procedure), since in that case Maple would fall back to using its purely numeric <code>fdiscont</code> procedure.</p>
<pre><code>restart:
f := (x^2+2*x-8)/(x^2-4):
Digits := 20: # increased working precision -- default is 10
plot( f, x=2-1e-6 .. 2+1e-6, discont=[showremovable] );
</code></pre>
<p><img src="https://i.stack.imgur.com/rozRY.png" alt="enter image description here"></p>
<p>Using a wider domain the plotting command does not compute enough evaluations near x=2 to produce the artefacts you saw.</p>
<pre><code>Digits := 10: # the default
plot( f, x=-3..3, discont=[showremovable] );
</code></pre>
<p><img src="https://i.stack.imgur.com/5rJHJ.png" alt="enter image description here"></p>
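<p>The same cancellation artefact, and the same cure, can be reproduced outside Maple (a sketch using Python's <code>mpmath</code> for the higher-precision evaluation):</p>

```python
from mpmath import mp, mpf

def f(x):
    return (x**2 + 2*x - 8) / (x**2 - 4)

# very close to x = 2 both numerator and denominator are tiny differences
# of O(1) quantities, so double precision loses most significant digits
noisy = f(2 + 1e-12)               # ordinary float arithmetic: visible noise

mp.dps = 30                        # analogue of raising Digits in Maple
clean = f(mpf(2) + mpf('1e-12'))   # accurate, close to the limit 3/2
```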
|
1,766,264 | <p>A store sells 8 kinds of candy. How many ways can you pick out 15 candies in total to throw, unordered, into a bag and take home?</p>
<p>There are 15 candies here,
so do we choose 8 out of 15, i.e. $^{15}C_8$? Am I right?</p>
| André Nicolas | 6,312 | <p>Call the various types of candy Type 1, Type 2, and so on up to Type 8. Let $x_1$ be the number of Type 1 candies we get, $x_2$ the number of Type 2 candies we get, and so on up to $x_8$.</p>
<p>Then the $x_i$ are non-negative integers, and $x_1+x_2+\cdots +x_8=15$.</p>
<p>Conversely, if $x_1,x_2,\dots, x_8$ are non-negative integers with the sum of the $x_i=15$, we can produce a candy selection by choosing $x_1$ of Type 1, $x_2$ of Type 2, and so on up to Type 8.</p>
<p>So the number of different candy selections is the number of solutions of $$x_1+x_2+\cdots +x_8=15\tag{1}$$ in non-negative integers.</p>
<p>By Stars and Bars (please see Wikipedia) Equation (1) has $\binom{15+8-1}{15}$ solutions, or equivalently $\binom{15+8-1}{8-1}$ solutions.</p>
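<p>The count is small enough to verify by brute force (a sketch; <code>combinations_with_replacement</code> enumerates exactly the unordered candy selections):</p>

```python
from itertools import combinations_with_replacement
from math import comb

CANDIES, TYPES = 15, 8

formula = comb(CANDIES + TYPES - 1, CANDIES)   # stars and bars
brute = sum(1 for _ in combinations_with_replacement(range(TYPES), CANDIES))
```

<p>Both give $\binom{22}{15} = 170544$.</p>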
|
718,266 | <p>Is there a simple intuitive graphical explanation of Clifford Algebra for the layman? Since Clifford Algebra is a Geometric Algebra, surely the best way to present those concepts is with graphical figures.</p>
| Paul Siegel | 1,509 | <p>I'm not sure I agree with the premise of the question; I would say that the point of introducing Clifford algebras is to work with certain geometric data that cannot be easily visualized.</p>
<p>But Clifford algebras do make contact with conventional geometry via the twisted adjoint representation. Given a unit vector $x$ in a Euclidean space $V$, $x$ acts on $V$ via:
$$\rho_x(v) := -x v x^{-1}$$
where the product on the right-hand side is given by multiplication in the Clifford algebra $Cl(V)$. It is not immediately obvious that $\rho_x(v)$ is an element of $V$, but a simple calculation shows that $\rho_x(v)$ is in fact the reflection of $v$ across the hyperplane perpendicular to $x$. Every rotation is the product of an even number of reflections, so we have a surjective group homomorphism
$$\rho \colon Spin(V) \to SO(V)$$
where $Spin(V)$ is by definition the multiplicative subgroup of $Cl(V)$ generated by products of an even number of unit vectors in $V$. To really understand $Cl(V)$, you have to understand why this map isn't an isomorphism.</p>
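<p>The reflection formula can be checked concretely in $Cl(\mathbb{R}^3)$, where the Pauli matrices give a representation (a numerical sketch, not part of the argument above):</p>

```python
import numpy as np

# Pauli matrices represent the generators of Cl(R^3): v -> v . sigma
sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
], dtype=complex)

def to_cl(v):
    return sum(c * s for c, s in zip(v, sigma))

def from_cl(M):
    # recover coordinates via trace(M sigma_i) = 2 v_i
    return np.real(np.array([np.trace(M @ s) / 2 for s in sigma]))

x = np.array([1.0, 2.0, 2.0]) / 3.0            # unit vector
v = np.array([0.3, -1.2, 2.0])
X, V = to_cl(x), to_cl(v)

rho_x_v = from_cl(-X @ V @ np.linalg.inv(X))   # twisted adjoint action
reflection = v - 2 * np.dot(x, v) * x          # reflection across x-perp
```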
<p>Let's work in dimension $3$. Every rotation has an axis, and there are two choices of unit vector along that axis. Choose the unit vector which has the property that the rotation occurs counter-clockwise if the vector is pointing at your eye. Now multiply the vector by the angle of the rotation (a number from $0$ to $\pi$), yielding a point in the ball of radius $\pi$ in $\mathbb{R}^3$. This space is nearly a model of $SO(3)$, but we need to account for the fact that a clockwise rotation by $\pi$ is the same as a counter-clockwise rotation by $\pi$. Thus $SO(3)$ is really the ball of radius $\pi$ with antipodal points on the boundary identified; topologically this is $\mathbb{R}P^3$.</p>
<p>You may recall that $\mathbb{R}P^3$ is double covered by the sphere $S^3$, and indeed topologically the map $Spin(3) \to SO(3)$ is just the double cover $S^3 \to \mathbb{R}P^3$. For higher dimensional $V$ it is no longer true that $SO(n) = \mathbb{R}P^n$, but $Spin(n)$ is still a simply connected double cover of $SO(n)$. So in a sense $Cl(V)$ keeps track of an extra bit of orientation data that you can't really see in the symmetries of $V$.</p>
|
4,272,964 | <p>I want to solve the equation following in a set of complex numbers:</p>
<p><span class="math-container">$$z^2 + \bar z = \frac 1 2$$</span></p>
<p><strong>My work so far</strong></p>
<p>Apparently I have a problem with transforming equation above into form that will be easy to solve. I tried to multiply sides by <span class="math-container">$z$</span> and use fact that: <span class="math-container">$z\bar z = |z|^2$</span> but it doesn't seem great idea. After that I tried the following:</p>
<p><span class="math-container">$$\bar z = \frac 1 2 - z^2 \Leftrightarrow |z| = | \frac 1 2 - z^2|$$</span></p>
<p>and then rewrite as <span class="math-container">$z = Re(z) +Im(z)$</span> but also result was not satisfying. Could you please give me a hand with solving this equation?</p>
| Reveillark | 122,262 | <p>Write <span class="math-container">$z=x+iy$</span>, so <span class="math-container">$z^2=x^2-y^2+i2xy$</span>. So, equating real and imaginary parts,
<span class="math-container">$$
x^2-y^2+x=\frac{1}{2}
$$</span>
and
<span class="math-container">$$
2xy-y=0
$$</span>
So this means that <span class="math-container">$y=0$</span> or <span class="math-container">$x=\frac{1}{2}$</span>. Can you see where to go from there?</p>
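<p>Finishing the case split gives four roots, which can be checked numerically (a sketch):</p>

```python
# y = 0 gives x^2 + x = 1/2, i.e. x = (-1 +/- sqrt(3)) / 2
# x = 1/2 gives y^2 = 1/4, i.e. y = +/- 1/2
roots = [
    complex((-1 + 3 ** 0.5) / 2, 0),
    complex((-1 - 3 ** 0.5) / 2, 0),
    complex(0.5, 0.5),
    complex(0.5, -0.5),
]

# residual of z^2 + conj(z) - 1/2 at each candidate root
residuals = [abs(z * z + z.conjugate() - 0.5) for z in roots]
```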
|
1,793,182 | <p>My task was to find the directional derivative of function:<br>
$$z = y^2 - \sin(xy)$$ at the point $(0, -1)$ in direction of vector $u = (-1, 10) $. </p>
<p>The result I found was $-21/\sqrt{101}$. But I can't figure out what is the interpretation of this result. </p>
<p>Does it mean that the function grows fastest with that derivative or with something else?</p>
| Venkata Karthik Bandaru | 303,300 | <p>[This is from Prop 2.3.4 of "A Course in Metric Geometry" by Burago-Burago-Ivanov. As in the book, <span class="math-container">${ \vert p q \vert }$</span> stands for <span class="math-container">${ d(p,q) }$</span>]</p>
<p><strong>Def</strong>: Let <span class="math-container">${ (X, d) }$</span> be a metric space and <span class="math-container">${ \gamma : [a,b] \to X }$</span> a path. The supremum of sums <span class="math-container">${ \Sigma (\gamma, Y) = \sum _1 ^N d(\gamma(y _{i-1}), \gamma(y _i)) }$</span> taken over all partitions <span class="math-container">${Y = \lbrace a = y _0 \leq y _1 \leq \ldots \leq y _N = b \rbrace }$</span> is called length of <span class="math-container">${ \gamma }$</span> and written <span class="math-container">${ L _d (\gamma) }.$</span></p>
<p><strong>Thm</strong>: Let <span class="math-container">${ (X,d) }$</span> be a metric space. Then <span class="math-container">${ L _d }$</span> is lower semicontinuous on <span class="math-container">${ \mathscr{C}([a,b], X) }$</span> wrt pointwise convergence.<br />
(That is, if paths <span class="math-container">${ \gamma _j : [a,b] \to X }$</span> have pointwise limit <span class="math-container">${ \gamma , }$</span> then <span class="math-container">${ \liminf L _d (\gamma _j) \geq L _d (\gamma) }$</span>)<br />
<strong>Pf</strong>: [<strong>Case 1</strong> : <span class="math-container">${ L(\gamma) \lt \infty }$</span>]<br />
Let <span class="math-container">${ \epsilon \gt 0 }.$</span> Pick a partition <span class="math-container">${ Y = \lbrace y _0 = a \leq \ldots \leq y _N = b \rbrace }$</span> with <span class="math-container">${L(\gamma) - \Sigma (\gamma, Y) }$</span> <span class="math-container">${ \lt \epsilon }.$</span> Pick a <span class="math-container">${ J }$</span> such that <span class="math-container">${ \vert \gamma _j (y ) \gamma (y) \vert }$</span> <span class="math-container">${ \lt \epsilon /N }$</span> whenever <span class="math-container">${ y \in Y },$</span> <span class="math-container">${ j \geq J }.$</span><br />
Now <span class="math-container">${ L(\gamma) }$</span> <span class="math-container">${ \leq \epsilon + {\color{red}{ \Sigma (\gamma, Y) } } }$</span> <span class="math-container">${ = \epsilon + \sum \vert \gamma(y _{i-1}) \gamma(y _i) \vert }$</span> <span class="math-container">${ \leq \epsilon + \sum _{i=1} ^{N} \left( \vert \gamma _j (y _{i-1}) \gamma _j (y _i) \vert + 2 \epsilon /N \right) }$</span> <span class="math-container">${ = \epsilon + {\color{blue}{\Sigma (\gamma _j, Y) + 2\epsilon} } }$</span> <span class="math-container">${ \leq L (\gamma _j) + 3 \epsilon , }$</span> whenever <span class="math-container">${ j \geq J }.$</span><br />
So <span class="math-container">${ L(\gamma) }$</span> <span class="math-container">${ \leq \liminf L(\gamma _j) + 3 \epsilon .}$</span> Since <span class="math-container">${ \epsilon \gt 0 }$</span> was arbitrary, <span class="math-container">${ L(\gamma) \leq \liminf L(\gamma _j) }$</span> as needed.<br />
[<strong>Case 2</strong> : <span class="math-container">${ L(\gamma) = \infty }$</span>]<br />
Let <span class="math-container">${ \epsilon \gt 0 }.$</span> Pick a partition <span class="math-container">${ Y = \lbrace y _0 = a \leq \ldots \leq y _N = b \rbrace }$</span> with <span class="math-container">${ \Sigma (\gamma, Y) \gt \frac{1}{\epsilon} }.$</span> Pick a <span class="math-container">${ J }$</span> such that <span class="math-container">${ \vert \gamma _j (y ) \gamma (y) \vert }$</span> <span class="math-container">${ \lt \epsilon /N }$</span> whenever <span class="math-container">${ y \in Y },$</span> <span class="math-container">${ j \geq J }.$</span><br />
Now as above, <span class="math-container">${ L(\gamma _j) }$</span> <span class="math-container">${ \geq { \color{blue}{\Sigma (\gamma _j , Y)} } }$</span> <span class="math-container">${ \geq {\color{red}{\Sigma (\gamma, Y)} } {\color{blue}{- 2\epsilon}} }$</span> <span class="math-container">${ \geq \frac{1}{\epsilon} - 2 \epsilon }$</span> whenever <span class="math-container">${ j \geq J }.$</span><br />
So <span class="math-container">${ L(\gamma _j) \to \infty }$</span> in this case, as needed.</p>
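<p>A standard example showing why only <em>lower</em> semicontinuity can hold: staircase paths from $(0,0)$ to $(1,1)$ converge uniformly to the diagonal, yet every staircase has length $2$ while the diagonal has length $\sqrt 2$ (a numerical sketch):</p>

```python
import math

def polyline_length(pts):
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def staircase(j):
    # vertices of the j-step staircase from (0,0) to (1,1)
    pts = [(0.0, 0.0)]
    for k in range(j):
        pts.append(((k + 1) / j, k / j))        # step right
        pts.append(((k + 1) / j, (k + 1) / j))  # step up
    return pts

staircase_lengths = [polyline_length(staircase(j)) for j in (1, 10, 100)]
diagonal_length = math.dist((0.0, 0.0), (1.0, 1.0))
```

<p>So $\liminf L(\gamma_j) = 2 > \sqrt 2 = L(\gamma)$: the inequality of the theorem holds, and can be strict.</p>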
|
1,865,364 | <p>After having seen a lengthy and painful calculation showing
$\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}3, \sqrt[\leftroot{-2}\uproot{2}3]{2}]/\mathbb Q)\cong S_3$, I'm wondering whether there's a slick proof $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)\cong S_p$ for odd prime $p$, because these calculations are getting intractable fast.</p>
<p>What are some slick proofs of this fact (assuming it is indeed correct).</p>
<p><strong>Correction:</strong> What <strong>IS</strong> $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)$ for prime $p$?</p>
| M. Van | 337,283 | <p>Here is an 'easy' group it is isomorphic to:</p>
<p>$$
\left\{\begin{pmatrix}
a & b \\ 0 & 1
\end{pmatrix} : a, b \in \mathbb{F}_p, a \neq 0 \right\}
$$
with the following isomorphism. If $\sigma \in \text{Gal}(\mathbb{Q}(\zeta,\sqrt[p]{2}))$ with $\sigma(\zeta)= \zeta^a$ and $ \sigma ( \sqrt{2} ) = \zeta^b \sqrt[p]{2}$, then send $\sigma$ to
$$
\begin{pmatrix}
a & b \\ 0 & 1
\end{pmatrix}.
$$
This was actually an exercise in a Galois Theory course I followed this year :)</p>
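<p>For small $p$ this description is easy to check by machine (a sketch: the matrices form the affine group of $\mathbb{F}_p$, of order $p(p-1)$; for $p=3$ this is the nonabelian group of order $6$, i.e. $S_3$, matching the case in the question):</p>

```python
from itertools import product

def mul(g, h, p):
    # [[a,b],[0,1]] [[c,d],[0,1]] = [[a*c, a*d + b],[0,1]]  (entries mod p)
    (a, b), (c, d) = g, h
    return (a * c % p, (a * d + b) % p)

p = 3
G = {(a, b) for a in range(1, p) for b in range(p)}   # a != 0

closed = all(mul(g, h, p) in G for g, h in product(G, G))
nonabelian = any(mul(g, h, p) != mul(h, g, p) for g, h in product(G, G))
```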
|
2,943,461 | <p>I'm stumped on a math puzzle and I can't find an answer to it anywhere!
A man is filling a pool from 3 hoses. Hose A could fill it in 2 hours, hose B in 3 hours, and hose C in 6 hours. However, there is a blockage in hose A, so he starts by using hoses B and C. When the blockage in hose A has been cleared, hoses B and C are turned off and hose A takes over. How long does the pool take to fill?
Any help would be strongly appreciated :)</p>
| farruhota | 425,072 | <blockquote>
<p>But the blockage in hose A is still bothering me, does it make a difference?</p>
</blockquote>
<p>Finetuning MRobinson's solution. </p>
<p>Let the pool can fit <span class="math-container">$x$</span> units of water. </p>
<p>Let the rates of hoses be: <span class="math-container">$r_A=\frac{x}{2}; r_B=\frac x3; r_C=\frac x6$</span> per hour.</p>
<p>Assume the two hoses <span class="math-container">$B$</span> and <span class="math-container">$C$</span> worked <span class="math-container">$t_1$</span> hours and then only <span class="math-container">$A$</span> worked for <span class="math-container">$t_2$</span> hours. Then:
<span class="math-container">$$\left(\frac x3+\frac x6\right)t_1+\frac x2\cdot t_2=x \Rightarrow \frac12(t_1+t_2)=1 \Rightarrow t_1+t_2=2.$$</span>
Interpretation: Regardless of <span class="math-container">$t_1$</span> and <span class="math-container">$t_2$</span> hours, the total time is <span class="math-container">$2$</span> hours. For example, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> could have worked for <span class="math-container">$0.5$</span> hours, then <span class="math-container">$A$</span> must have worked for <span class="math-container">$1.5$</span> hours, totalling <span class="math-container">$2$</span> hours. </p>
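<p>The invariance of the total time is easy to confirm with exact arithmetic (a sketch over a few choices of switch-over time $t_1$):</p>

```python
from fractions import Fraction

rate_bc = Fraction(1, 3) + Fraction(1, 6)   # pools per hour for B and C together
rate_a = Fraction(1, 2)                     # pools per hour for A

def total_time(t1):
    # t1 hours of B+C together, then A alone finishes the rest
    remaining = 1 - rate_bc * t1
    return t1 + remaining / rate_a

times = [total_time(Fraction(k, 4)) for k in range(8)]  # t1 = 0, 1/4, ..., 7/4
```

<p>Every choice gives exactly $2$ hours, because $r_B + r_C = r_A = \tfrac12$ pool per hour.</p>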
|
4,351,504 | <p>A question from Herstein's Abstract Algebra book goes-</p>
<blockquote>
<p>Let <span class="math-container">$(R,+,\cdot)$</span> be a ring with unit element. Using its elements we define a ring <span class="math-container">$(\tilde R,\oplus,\odot)$</span> by defining <span class="math-container">$a\oplus b = a + b + 1$</span> and <span class="math-container">$a\odot b = a\cdot b + a + b$</span> where <span class="math-container">$a,b\in R$</span>.</p>
<ol>
<li>Prove that <span class="math-container">$\tilde R$</span> is a ring under the operations <span class="math-container">$\oplus$</span> and <span class="math-container">$\odot$</span>.</li>
<li>What is the zero element of <span class="math-container">$\tilde R$</span>?</li>
<li>What is the unit element of <span class="math-container">$\tilde R$</span>?</li>
<li>Prove that <span class="math-container">$R$</span> is isomorphic to <span class="math-container">$\tilde R$</span>.</li>
</ol>
</blockquote>
<p>Parts 1,2 and 3 seemed quite easy for me, and the answers I got for 2 and 3 are <span class="math-container">$-1$</span> and <span class="math-container">$0$</span> respectively.</p>
<p>But, I got stuck with part 4. I understood that I had to construct an isomorphism <span class="math-container">$\phi:R\to \tilde R$</span> such that <span class="math-container">$0\mapsto -1$</span> and <span class="math-container">$1\mapsto 0$</span>. But, I couldn't construct the bijection explicitly. A little google search revealed the answer to be <span class="math-container">$\phi (x)=x-1$</span> and that works.</p>
<p>My question is, how do we come up with that isomorphism? How do we construct that function when all we know are the two weird sum and product definitions, and <span class="math-container">$0\mapsto -1$</span> and <span class="math-container">$1\mapsto 0$</span>? Some <em>"stacking"</em> showed <a href="https://math.stackexchange.com/q/2004269/943723">some</a> <a href="https://math.stackexchange.com/q/2003399/943723">similar</a> <a href="https://math.stackexchange.com/a/15006/943723">questions</a> where people have suggested something called <a href="https://math.stackexchange.com/search?q=user%3A242+transport+">"transporting ring structure"</a> which I honestly can't grasp properly. I'm not even sure whether that is really the answer to my question.</p>
<p>I would like to have some help from the experts here.</p>
<p>Also please change the title of the question if you can think of a better one :|</p>
| Jan Eerland | 226,665 | <p>Well, we are trying to find:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right):=\mathscr{L}_\text{s}^{-1}\left[-\sqrt{\frac{\text{k}}{\text{s}}}\cdot\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(x\right)}\tag1$$</span></p>
<p>Using the linearity of the inverse Laplace transform and the convolution property:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right)=\sqrt{\text{k}}\cdot\int_x^0\mathscr{L}_\text{s}^{-1}\left[\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(\sigma\right)}\cdot\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\sqrt{\text{s}}}\right]_{\left(x-\sigma\right)}\space\text{d}\sigma\tag2$$</span></p>
<p>It is well known and not hard to prove that:</p>
<ul>
<li><span class="math-container">$$\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\sqrt{\text{s}}}\right]_{\left(x-\sigma\right)}=\frac{1}{\sqrt{\pi}}\cdot\frac{1}{\sqrt{x-\sigma}}\tag3$$</span></li>
<li><span class="math-container">$$\mathscr{L}_\text{s}^{-1}\left[\exp\left(-\text{n}\cdot\sqrt{\frac{\text{s}}{\text{k}}}\right)\right]_{\left(\sigma\right)}=\frac{\text{n}\exp\left(-\frac{\text{n}^2}{4\text{k}\sigma}\right)}{2\sqrt{\text{k}\pi}\sigma^\frac{3}{2}}\tag4$$</span></li>
</ul>
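<p>Both inverse transforms can be spot-checked numerically (a sketch using <code>mpmath</code>'s numerical Laplace inversion with the Talbot method, and the illustrative parameter values $\text{n} = \text{k} = 1$):</p>

```python
from mpmath import mp, invertlaplace, sqrt, exp, pi, mpf

mp.dps = 30
n = k = 1          # illustrative parameter values
t = mpf('0.7')     # sample evaluation point

# (3): L^{-1}[1/sqrt(s)](t) = 1/sqrt(pi t)
val3 = invertlaplace(lambda s: 1 / sqrt(s), t, method='talbot')
ref3 = 1 / sqrt(pi * t)

# (4): L^{-1}[exp(-n sqrt(s/k))](t) = n exp(-n^2/(4 k t)) / (2 sqrt(k pi) t^(3/2))
val4 = invertlaplace(lambda s: exp(-n * sqrt(s / k)), t, method='talbot')
ref4 = n * exp(-n**2 / (4 * k * t)) / (2 * sqrt(k * pi) * t ** mpf('1.5'))
```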
<p>So:</p>
<p><span class="math-container">$$\text{y}_\text{k}\left(\text{n}\space;x\right)=\frac{\text{n}}{2\pi}\int_x^0\frac{\exp\left(-\frac{\text{n}^2}{4\text{k}\sigma}\right)}{\sigma^\frac{3}{2}}\cdot\frac{1}{\sqrt{x-\sigma}}\space\text{d}\sigma\tag5$$</span></p>
|