| qid | question | author | author_id | answer |
|---|---|---|---|---|
383,063 | <p>I need some help with this exercise:</p>
<p>Suppose $A\subseteq{G}$ is abelian, and $|G:A|$ is a prime power. Show that $G'\lt{G}$</p>
<p>Thank you very much in advance.</p>
| 包殿斌 | 324,668 | <p>If we assume that <span class="math-container">$A$</span> is a normal abelian subgroup of <span class="math-container">$G$</span>, then we can use the idea in the comments above, i.e., show that there is a nontrivial degree-1 representation of <span class="math-container">$G$</span>.</p>
<p>First we have <span class="math-container">$|G|=\sum_{\chi\in Irr(G)}\chi^2(1)=n_1+\sum_{\chi\in Irr(G),\chi(1)>1}\chi^2(1)$</span>. Ito's theorem gives <span class="math-container">$\chi(1)\big||G:A|$</span>, which is a power of a prime <span class="math-container">$p$</span>, so <span class="math-container">$p$</span> divides <span class="math-container">$\chi(1)$</span> for every nonlinear <span class="math-container">$\chi$</span>. Since <span class="math-container">$p$</span> also divides <span class="math-container">$|G|$</span>, the number <span class="math-container">$n_1$</span> of linear characters of <span class="math-container">$G$</span> is divisible by <span class="math-container">$p$</span>. So <span class="math-container">$n_1\neq 1$</span>, and hence <span class="math-container">$G/G'$</span> is nontrivial.</p>
|
104,875 | <p>I've been looking for a solution to this problem for other applications too, for some time, but haven't come up with a solution that does not involve <code>Animate</code> or similar (and it never works).</p>
<p>Take this example:
plot a function (say <code>f=a/x</code>) for different values of <code>a</code>. The y-axis plot range is based on <code>a/1</code>, but there are, say, 3 possible plot ranges:</p>
<pre><code>range=Which[f<2,2,f<5,5,f<10,10]
</code></pre>
<p>for <code>1<=a<=10</code>.
Each time <code>range</code> changes as <code>a</code> changes (with <code>Manipulate</code> slider), the <code>FrameStyle</code> for the changing y-axis should flash red then return back to black.</p>
<p>Every time I've encountered this issue I was using <code>Manipulate</code> (and need to find a solution to this while still using <code>Manipulate</code>).</p>
<p>Here's what I want to show but WITHOUT having to use the method I used to create this, which was:</p>
<pre><code>Which[
f[1] < 1.9, Black,
1.9 <= f[1] <= 2.1, Red,
2.1 < f[1] < 4.9, Black,
4.9 <= f[1] <= 5.1, Red,
5.1 < f[1] < 9.9, Black,
9.9 <= f[1] <= 10, Red
]
</code></pre>
<p>for the <code>FrameStyle</code>. Here's what it did, for clarification:</p>
<p><a href="https://i.stack.imgur.com/suFLd.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/suFLd.gif" alt="enter image description here"></a></p>
| Edmund | 19,542 | <p>You can avoid the <code>Which</code> by dynamically setting <code>PlotRange</code> and <code>FrameStyle</code> as a function of <code>f</code></p>
<pre><code>Manipulate[
Plot[f[x], {x, 0, 10},
PlotRange -> {Automatic, {0, Max[axisStep Quotient[f[1], axisStep], 2]}},
Frame -> True,
FrameStyle ->
{With[{z = Mod[f[1], axisStep]},
If[flashRange > z \[Or] z > axisStep - flashRange,
Directive[Thick, Red], Automatic]],
Automatic}
],
{{a, 1}, 1, 12},
Initialization :> (axisStep = 5; flashRange = 0.3; f[x_] := a/x),
TrackedSymbols :> {a}]
</code></pre>
<p><a href="https://i.stack.imgur.com/Eigdl.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eigdl.gif" alt="enter image description here"></a></p>
<p>Hope this helps.</p>
|
204,150 | <p>If I had a list of let's say 20 elements, how could I split it into two separate lists that contain every other 5 elements of the initial list?</p>
<p>For example:</p>
<pre><code>list={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
function[list]
(*
{1,2,3,4,5,11,12,13,14,15}
{6,7,8,9,10,16,17,18,19,20}
*)
</code></pre>
<p>Follow-up question:</p>
<p>Thanks to the numerous answers! Is there a way to revert this process? Say we start from two lists and I would like to end up with the <code>list</code> above:</p>
<pre><code>list1={1,2,3,4,5,11,12,13,14,15}
list2={6,7,8,9,10,16,17,18,19,20}
function[list1,list2]
(*
{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}
*)
</code></pre>
| Coolwater | 9,754 | <p>You could use:</p>
<pre><code>function = Flatten[Transpose[Fold[Partition, #, {5, 2}]], {{1}, {2, 3}}] &;
</code></pre>
<p>If the length of the input is not a multiple of <code>10</code>, it will effectively cut off the remainder at the end.</p>
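For readers who want the same transformation outside Mathematica, here is a sketch in Python (the helper names <code>split_alternating</code> and <code>interleave</code> are mine, not from the answer); it also covers the follow-up question's inverse:

```python
def split_alternating(lst, block=5):
    # Cut the list into consecutive blocks of `block` elements, then send
    # even-indexed blocks to one output and odd-indexed blocks to the other.
    blocks = [lst[i:i + block] for i in range(0, len(lst), block)]
    first = [x for b in blocks[0::2] for x in b]
    second = [x for b in blocks[1::2] for x in b]
    return first, second

def interleave(list1, list2, block=5):
    # Inverse operation: alternate blocks of `list1` and `list2`.
    blocks1 = [list1[i:i + block] for i in range(0, len(list1), block)]
    blocks2 = [list2[i:i + block] for i in range(0, len(list2), block)]
    merged = []
    for b1, b2 in zip(blocks1, blocks2):
        merged.extend(b1)
        merged.extend(b2)
    return merged
```

Like the Mathematica one-liner, this implicitly assumes the input length is a multiple of twice the block size; otherwise trailing elements are dropped by `zip`.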
|
44,552 | <p>I was pushing my way through a physics book when the author separated the variables of the Schrödinger equation and I lost the plot:</p>
<p>$$\Psi (x, t) = \psi (x) T(t)$$</p>
<p>can someone please explain how this technique works and is used? It can be in general maths or in the context of this problem. Thanks </p>
| Ross Millikan | 1,827 | <p>Some functions (not all) $\psi (x,t)$ can be written as a product of a function of $x$ and another function of $t$. For example, $\psi (x,t)=xt$ can be, while $\psi_2 (x,t)=x^2+t^2$ cannot. The author is guessing that this will yield a solution to the problem and will go on to show that it does. After some manipulation the equation comes to something like $f(x)=g(t)$, where the left side does not depend on $t$ and the right side does not depend on $x$. Then you argue that since the left side does not depend on $t$, the right side really doesn't either, so both sides must equal some constant. So now you are solving $f(x)=g(t)=c$. As equations in single variables, these are usually easier to solve. Proving that this yields <em>a</em> solution is easy. Proving that <em>all</em> solutions come as a linear combination of solutions of this form is harder.</p>
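For the Schrödinger equation specifically, the manipulation described above looks like this (a sketch assuming a time-independent potential $V(x)$; the derivation is standard but not part of the original answer):

```latex
% Substitute \Psi(x,t)=\psi(x)T(t) into
%   i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \Psi + V(x)\,\Psi
% and divide through by \psi(x)T(t):
i\hbar\,\frac{T'(t)}{T(t)}
  \;=\;
\frac{1}{\psi(x)}\left(-\frac{\hbar^2}{2m}\,\psi''(x) + V(x)\,\psi(x)\right)
  \;=\; E \quad (\text{a constant})
```

so each side reduces to an ordinary differential equation in one variable, with the separation constant $E$ playing the role of the energy.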
|
2,480,528 | <blockquote>
<p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$ then prove it. </p>
</blockquote>
<p>I assumed that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$ after doing a few cases from above, then I tried to prove it with induction. Would this be a fair approach, or are there other approaches that would work?</p>
| xpaul | 66,420 | <p>Let $z=x+yi$ and then from
$$ |1+z|=|1-i\bar{z}| $$
it is easy to obtain
$$ (x+1)^2+y^2=(-y+1)^2+x^2, $$
so
$$ x=-y $$
and hence $z=(1-i)x$.</p>
|
2,480,528 | <blockquote>
<p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$ then prove it. </p>
</blockquote>
<p>I assumed that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$ after doing a few cases from above, then I tried to prove it with induction. Would this be a fair approach, or are there other approaches that would work?</p>
| ivanculet | 493,187 | <p>Take this
$$|1+z|=|1-i\bar{z}|$$
then square these babies
$$(1+z)(1+\bar{z})=(1-i\bar{z})(1+iz)$$
$$1+z+\bar{z}+z\bar{z}=1-i\bar{z}+iz+z\bar{z} $$
so
$$ z+\bar{z}=i(z-\bar{z})$$
$$2Re(z)=i(2iIm(z))=-2Im(z)$$
therefore $$Re(z)=-Im(z)$$</p>
|
3,452,493 | <p>I remember hearing / reading about a scenario during WW2 where the US / Western Powers were thinking about attacking Japan via an overland route from India. The problem involved how to leapfrog the supplies from India over to China and there begin the fight with the Japanese. There’s a logic to it as the distance is far less than from California to Japan. But some mathematicians realized that it would take too long to stock up the supplies and so the island hopping strategy was finalized.</p>
<p>The problem, in a nutshell, is Plane A can travel X miles with a full load of supplies. But then it needed to have enough gas to fly back to the home base in order to get more supplies. (Or, there would be some combinations where some planes carry only supplies in the cargo areas while others carry extra gas to allow the planes to return to base to restock up supplies.)</p>
<p>So the questions are: </p>
<ol>
<li>What is the name of this math problem (if such a name exists) where one determines how many trips would it require to bring supplies from place A to place B.</li>
<li>Has anyone any information if this calculation ever took place. (This second question may have to be asked at history.stackexchange). I’ve made some perfunctory searches and haven’t found anything - which leads me to question if this story is apocryphal. </li>
</ol>
| user284331 | 284,331 | <p><span class="math-container">\begin{align*}
\left(1+\dfrac{a}{n}\right)^{n}\geq 1+n\cdot\dfrac{a}{n}=1+a>a,
\end{align*}</span>
so <span class="math-container">$1+a/n>a^{1/n}$</span>.</p>
<p><span class="math-container">\begin{align*}
\left(\dfrac{na}{1+na}\right)^{n}&=\left(\dfrac{1}{1+\dfrac{1}{na}}\right)^{n}\\
&=\dfrac{1}{\left(1+\dfrac{1}{na}\right)^{n}}\\
&\leq\dfrac{1}{1+n\cdot\dfrac{1}{na}}\\
&=\dfrac{1}{1+\dfrac{1}{a}}\\
&<\dfrac{1}{\dfrac{1}{a}}\\
&=a,
\end{align*}</span>
so <span class="math-container">$na/(1+na)<a^{1/n}$</span>.</p>
|
4,287,733 | <blockquote>
<p>How can we show <span class="math-container">$$\frac{1-x^n}{1-c^n} + \left(1-\frac{1-x}{1-c}\right)^n \leq 1 $$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>, <span class="math-container">$0 \leq c \leq x
\leq 1, c \neq 1$</span>?</p>
</blockquote>
<p>The context is <a href="https://math.stackexchange.com/questions/4287507/on-the-conditioned-tail-of-a-maximum-of-a-sequence-of-random-variables/4287567#4287567">this</a> probability problem, but of course this problem might be of independent interest to inequality enthusiasts.</p>
<p>In my attempt, I have <a href="https://www.desmos.com/calculator/rbxehr6quf" rel="nofollow noreferrer">graphed</a> the inequality on Desmos. We might note that the derivative changes sign in the range <span class="math-container">$c < x < 1$</span>, so it seems unlikely that differentiation would help.</p>
| MathematicsStudent1122 | 238,417 | <p>Using the idea proposed by <strong>Ivan Kaznacheyeu</strong>, we can generalize this to the following inequality:</p>
<blockquote>
<p>For any <span class="math-container">$n \in \mathbb{N}$</span>, suppose <span class="math-container">$(x_i)_{i=1}^{n} \in [0,1]^n$</span>, <span class="math-container">$(c_i)_{i=1}^{n} \in [0,1)^n$</span> with <span class="math-container">$x_i \geq c_i \ \forall i$</span>. Then <span class="math-container">$$\frac{1 - x_1x_2 \cdots x_n}{1-c_1c_2 \cdots c_n} + \prod_{i=1}^{n} \left(1 - \frac{1-x_i}{1-c_i}\right) \leq 1$$</span></p>
</blockquote>
<p>Without loss of generality, suppose <span class="math-container">$x_\ell > c_\ell$</span> for each index <span class="math-container">$\ell$</span>. This is fine, since if <span class="math-container">$x_{\ell} = c_{\ell}$</span> for some <span class="math-container">$\ell$</span>, the product <span class="math-container">$\prod_{i=1}^{n} \left(1 - \frac{1-x_i}{1-c_i}\right)$</span> is <span class="math-container">$0$</span>, so the inequality clearly holds. In addition suppose <span class="math-container">$n \geq 2$</span>. (Otherwise we have equality).</p>
<p>Now, using the same manipulations as that in the answer by <strong>Ivan Kaznacheyeu</strong>, it suffices to prove <span class="math-container">$$\frac{1-\prod_{j} c_j}{\prod_{j} (1 - c_j)} \leq \frac{\prod_{j} x_j-\prod_{j} c_j}{\prod_{j} (x_j - c_j)}$$</span></p>
<p>Define <span class="math-container">$f(x_1, x_2, \cdots x_n) = \frac{\prod_{j} x_j-\prod_{j} c_j}{\prod_{j} (x_j - c_j)}$</span>, noting that <span class="math-container">$f(1,1, \cdots, 1)$</span> is exactly the LHS above.</p>
<p>It thus suffices to show that for each index <span class="math-container">$i$</span>, <span class="math-container">$\frac{\partial}{\partial x_i} f(x_1, x_2, \cdots x_n) \leq 0$</span> in the polytope <span class="math-container">$[0,1]^n \cap \bigcap_{i=1}^{n} \{x \in \mathbb{R}^n: x_i > c_i\}$</span>.</p>
<p>By the quotient rule, <span class="math-container">$$\frac{\partial}{\partial x_i} f(x_1, x_2, \cdots x_n) \leq 0 \Longleftrightarrow$$</span><span class="math-container">$$ \left(\prod_{j \neq i} x_j\right)\prod_{j} (x_j - c_j) - \left(\prod_{j } x_j - \prod_{j} c_j\right) \prod_{j \neq i} (x_j - c_j) \leq 0 \Longleftrightarrow$$</span><span class="math-container">$$ \prod_{j \neq i} (x_j - c_j)\left((x_i - c_i)\prod_{j \neq i} x_j - \left(\prod_{j } x_j - \prod_{j} c_j\right) \right) \leq 0 \Longleftrightarrow$$</span><span class="math-container">$$ (x_i - c_i)\prod_{j \neq i} x_j - \left(\prod_{j } x_j - \prod_{j} c_j\right) = \prod_{j} c_j - c_i \prod_{j \neq i} x_j \leq 0$$</span></p>
<p>where the last equality is straightforward algebra, and the inequality <span class="math-container">$\prod_{j} c_j - c_i \prod_{j \neq i} x_j \leq 0$</span> holds since <span class="math-container">$0 \leq c_j < x_j$</span> for each <span class="math-container">$j$</span>.</p>
|
78,143 | <p>I don't know the meaning of a geometrically injective morphism $f$ of schemes.</p>
<p>What's the definition of "geometrically injective"?</p>
<p>I can't find it. I am hoping for your answer.</p>
<p>Thanks.</p>
| Leo Alonso | 6,348 | <p>A map of schemes $f \colon X \to Y$ is <em>geometrically injective</em> if it is injective on <em>geometric points</em>, i.e. points with values in an algebraically closed field. In more detail, let $K$ be an algebraically closed field. For every pair of maps ($K$-valued points) $x, y \colon \operatorname{Spec}(K) \to X$ with the same image in $Y$, i.e. $f \circ x = f \circ y$, we have $x = y$.</p>
<p>In other words the map
$$
\operatorname{Hom}(\operatorname{Spec}(K), X) \longrightarrow \operatorname{Hom}(\operatorname{Spec}(K), Y)
$$
given by composition with $f$, is injective for every algebraically closed field $K$.</p>
|
2,800,015 | <p>Prove that $p(x)=\frac{6}{(\pi x)^2}$ for $x=1,2,\ldots$ is a probability function, and that $E[X]$ doesn't exist.</p>
<p><b> My work </b></p>
<p>I know $\sum _{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$</p>
<p>Moreover,</p>
<p>$p(1)=\frac{6}{\pi^2}$<br>
$p(2)=\frac{6}{4\pi^2}$<br>
$p(3)=\frac{6}{9\pi^2}$<br>
$p(4)=\frac{6}{16\pi^2}$<br>
.<br>
.<br>
.<br></p>
<p>Then, to prove that $p$ is a probability function, I need to show</p>
<p>$\sum_{x=1}^{\infty}\frac{6}{(\pi x)^2}=1
$</p>
<p>Then,
$\sum_{x=1}^{\infty}\frac{6}{(\pi x)^2}=\sum_{x=1}^{\infty}\frac{6}{\pi^2 x^2}=\frac{6}{\pi^2}\sum_{x=1}^{\infty}\frac{1}{x^2}=\frac{6}{\pi^2}\times\frac{\pi^2}{6}=1$</p>
<p>In consequence,
$p$ is a probability function.</p>
<blockquote>
<p>Moreover, i need prove $E[X]$ doesn't exist.</p>
</blockquote>
<p>Here I'm a little stuck. Can someone help me?</p>
| Ted Shifrin | 71,348 | <p><strong>HINT</strong>: What can you say about $\sum\limits_{n=1}^\infty n\cdot \dfrac 6{(\pi n)^2}$?</p>
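A sketch of where the hint leads (the computation is implied by, but not spelled out in, the answer):

```latex
E[X] \;=\; \sum_{n=1}^{\infty} n \cdot \frac{6}{(\pi n)^2}
     \;=\; \frac{6}{\pi^{2}} \sum_{n=1}^{\infty} \frac{1}{n}
     \;=\; \infty
```

since the harmonic series diverges; hence $E[X]$ does not exist.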
|
3,831,387 | <p><span class="math-container">$X,Y\sim N(0,1)$</span> and are independent, consider <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span>.</p>
<p>I can see why <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span> are independent based on the fact that their joint distribution is equal to the product of their marginal distributions. Just, I'm having trouble understanding <em>intuitively</em> why this is so.</p>
<p>This is how I see it : When you look at <span class="math-container">$X+Y=u$</span>, the set <span class="math-container">$\{(x,u-x)|x\in\mathbb{R}\}$</span> is the list of possibilities for <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p>
<p>And intuitively, I understand independence of two random variables <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as, the probability of the event <span class="math-container">$A=a$</span> being completely unaffected by the event <span class="math-container">$B=b$</span> happening.</p>
<p>But when you look at <span class="math-container">$X+Y=u$</span> given that <span class="math-container">$X-Y=v$</span>, the set of possibilities has only one value <span class="math-container">$(\frac{u+v}{2},\frac{u-v}{2})$</span>.</p>
<p>So, <span class="math-container">$\mathbb{P}(X+Y=u|X-Y=v)\neq \mathbb{P}(X+Y=u)$</span>.</p>
<p>Doesn't this mean that <span class="math-container">$X+Y$</span> is affected by the occurrence of <span class="math-container">$X-Y$</span>?
So, they would have to be dependent?
I'm sorry if this comes off as really stupid, it has been driving me crazy, even though I am sure that they are independent, it just doesn't feel right.</p>
<p>Thank you.</p>
| John Dawkins | 189,130 | <p>Intuitively, it's because the joint density of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is rotation invariant, and the transformation from <span class="math-container">$(X,Y)$</span> to <span class="math-container">$((X+Y)/\sqrt{2},(X-Y)/\sqrt{2})$</span> is a rotation. Therefore <span class="math-container">$(X+Y,X-Y)$</span> has the same distribution as <span class="math-container">$(\sqrt{2}X, \sqrt{2}Y)$</span>, and the random variables in this latter pair are independent.</p>
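A complementary check (a standard computation, not part of the original answer): the pair $(X+Y, X-Y)$ is jointly Gaussian, and jointly Gaussian variables with zero covariance are independent:

```latex
\operatorname{Cov}(X+Y,\;X-Y)
  \;=\; \operatorname{Var}(X) - \operatorname{Var}(Y)
  \;=\; 1 - 1 \;=\; 0 .
```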
|
3,597,172 | <h2>The problem</h2>
<p>Let <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span></p>
<p>Determine <span class="math-container">$f(x)$</span> knowing that </p>
<p><span class="math-container">$ 3f(x) + 2 = 2f(\left \lfloor{x}\right \rfloor) + 2f(\{x\}) + 5x $</span>, where <span class="math-container">$ \left \lfloor{x}\right \rfloor $</span> is the floor function and <span class="math-container">$\{x\} = x - \left \lfloor{x}\right \rfloor$</span> (also known as the fractional part)</p>
<h2>My thoughts</h2>
<p>We can observe that for <span class="math-container">$x = 0$</span> we obtain <span class="math-container">$f(0) = 2$</span>.</p>
<p>Considering <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> we get <span class="math-container">$ 3f(\left \lfloor{x}\right \rfloor) + 2 = 2f(\left \lfloor\left \lfloor{x}\right \rfloor\right \rfloor) + 2f(\{\left \lfloor{x}\right \rfloor\}) + 5\left \lfloor{x}\right \rfloor $</span></p>
<p>And for <span class="math-container">$f(\{x\})$</span> we get <span class="math-container">$ 3f(\{x\}) + 2 = 2f(\left \lfloor\{x\}\right \rfloor) + 2f(\{\{x\}\}) + 5\{x\} $</span></p>
<p>I did this in the hope of defining <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> and <span class="math-container">$f(\{x\})$</span> and thus replacing them in the initial condition.</p>
| José Carlos Santos | 446,262 | <p>Yes, this is the matrix of a surjective linear map. Look at the first and the fourth columns: <span class="math-container">$\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]$</span> and <span class="math-container">$\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]$</span> respectively. It follows from this that the vectors <span class="math-container">$\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]$</span> and <span class="math-container">$\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]$</span> belong to the range and that therefore the range is <span class="math-container">$k^2$</span>, where <span class="math-container">$k$</span> is the field that you are working with.</p>
|
271,255 | <p>I'm reading <em>Mathematica Programming</em> by Leonid Shifrin. And it said</p>
<blockquote>
<p><code>ClearAll</code> serves to clear all definitions (including attributes) for a given symbol (or symbols), and not to clear definitions of all global symbols in the system (it is a common mistake to mix these two things).</p>
</blockquote>
<p>So what is the difference between these two things, that is, between a given symbol and a global symbol?</p>
| user293787 | 85,954 | <p>Let me rewrite the paragraph that you quote:</p>
<p><em>In <code>ClearAll</code> the "all" stands for "all definitions". It does not stand for "all symbols". Therefore calling <code>ClearAll[f]</code> means to "clear all definitions associated to the symbol <code>f</code>".</em></p>
<p>Two examples:</p>
<ul>
<li>Suppose you first define <code>f</code> to be the squaring function as in <code>f[x_]:=x^2</code> and you then call <code>ClearAll[f]</code>, then <code>f</code> will no longer be the squaring function, so <code>f[12]</code> will not give <code>144</code> but <code>f[12]</code>. Other symbols <code>g</code>, <code>h</code>, ... that you may have defined are not affected by <code>ClearAll[f]</code>.</li>
<li><code>ClearAll[]</code> does not mean "clear all symbols", but it means "clear all definitions of no symbol", so it does nothing at all.</li>
</ul>
<p>To clarify what Leonid Shifrin means by "given symbol": If you call <code>ClearAll[f]</code> then <code>f</code> is the "given symbol". If you call <code>ClearAll[f,g]</code> then <code>f</code> and <code>g</code> are the "given symbols".</p>
<p>By contrast, you should read "all global symbols" as simply "all symbols". The qualification "global" refers to <a href="https://reference.wolfram.com/language/tutorial/ModularityAndTheNamingOfThings.html#27982" rel="nofollow noreferrer">contexts</a> which is a slightly more advanced topic that you can ignore at first.</p>
<p><strong>Note.</strong> In this answer I used "all definitions" as an abbreviation for "all definitions, all attributes, all messages" and so on.</p>
|
2,403,608 | <p>I was asked to solve for the <span class="math-container">$\theta$</span> shown in the figure below.</p>
<p><a href="https://i.stack.imgur.com/3Yxqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Yxqv.png" alt="enter image description here" /></a></p>
<p>My work:</p>
<p>The <span class="math-container">$\Delta FAB$</span> is an equilateral triangle, having interior angles of <span class="math-container">$60^\circ.$</span> I don't think <span class="math-container">$\Delta HIG$</span> and <span class="math-container">$\Delta DEC$</span> are right triangles.</p>
<p>So far, that's all I know. I'm confused on how to get <span class="math-container">$\theta.$</span> How do you get the <span class="math-container">$\theta$</span> above?</p>
| haqnatural | 247,767 | <p>Hope it will help; ask if anything is not clear.
<a href="https://i.stack.imgur.com/C8xli.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C8xli.png" alt="enter image description here"></a></p>
|
627,815 | <p>What is the best way to write 'exclusively divisible by' a given number in terms of set notation? E.g., the set of natural numbers that are divisible by $2$ and only $2$; the set of natural numbers that are divisible by $3$ and only $3$; $\dots 5$, $7\dots$ etc.</p>
| Tom Collinge | 98,230 | <p>The natural numbers are an example of a “Euclidean ring”. A ring is almost a “field” but doesn't require multiplicative inverses, and the Euclidean property as applied to natural numbers means that any number $a$ can be written as $a = qb + r$ with $0 \le r < b$. The Euclidean property has a corresponding but slightly more complicated definition for rings in general.</p>
<p>In any ring an “ideal” is a subset of the ring which is closed wrt addition, and “strongly closed” wrt multiplication, i.e. any element of the ideal multiplied by any element of the ring is an element of the ideal.</p>
<p>A property of ideals in Euclidean rings is that all the elements of an ideal have a greatest common divisor (GCD).</p>
<p>So, the numbers in N which are all divisible by 2 (or 3 ,..) are “ideals” of the "Euclidean ring" of natural numbers of the form 2N, 3N, etc where 2N is the set of all numbers of the form 2 (= GCD) x n in N.</p>
|
3,492,155 | <p>I am looking for the values <span class="math-container">$ x \in R $</span> which satisfy the following equation :</p>
<p><span class="math-container">$ e^{-\alpha x} = \frac{a}{x - c} $</span></p>
<p>Where <span class="math-container">$ \alpha $</span>, <span class="math-container">$ a $</span> and <span class="math-container">$ c $</span> are real valued constants.</p>
<p>If <span class="math-container">$ c = 0 $</span>, we get <span class="math-container">$ x = - \frac{W(-a \alpha)}{\alpha} $</span>, where <span class="math-container">$ W$</span> denotes the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert W function</a>, but with <span class="math-container">$ c \neq 0 $</span> I don't see an obvious solution.</p>
<p>Otherwise, could I find an approximate solution with numerical methods in limited time? (My system needs to run in real time.)</p>
| Claude Leibovici | 82,404 | <p>Doing what @user721481 suggested, you should arrive at
<span class="math-container">$$x=c-\frac{W\left(-a \alpha e^{\alpha c}\right)}{\alpha }$$</span></p>
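If the Lambert W function is not available at runtime, a few Newton iterations on the equivalent root-finding problem $(x-c)e^{-\alpha x}-a=0$ are usually fast enough for real-time use. A minimal sketch (the function name, sample constants, and starting guess are illustrative assumptions, not from the answer):

```python
import math

def solve_exp_equation(alpha, a, c, x0, tol=1e-12, max_iter=100):
    # Newton's method on g(x) = (x - c) * exp(-alpha * x) - a, whose root
    # satisfies exp(-alpha * x) = a / (x - c).
    x = x0
    for _ in range(max_iter):
        e = math.exp(-alpha * x)
        g = (x - c) * e - a
        dg = e * (1.0 - alpha * (x - c))   # g'(x)
        step = g / dg
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")
```

A starting guess on the correct side of the pole at $x=c$ matters; for a guaranteed answer one could first bracket the root and fall back to bisection.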
|
4,614,853 | <p>I'm trying to prove that the BinPacking problem is NP-hard, granted the partition problem <em>is NP-hard</em>: if I have a set E of positive integers, can I split it into two subsets such that the sums of the integers in both subsets are equal?</p>
<p>The polynomial reduction I found would be the following:</p>
<ul>
<li>My objects O are defined as what's in E (the weight of each object is the integer value and |E| is the number of objects)</li>
<li>Number of bags = 2</li>
<li>Capacity of each bag is half of the sum of everything in E</li>
</ul>
<p>Is this wrong because my polynomial reduction only reduces to BinPacking with <strong>2 bags</strong> whose <strong>capacity are the same</strong>?</p>
<p>I’m skeptical because, if I carry out this proof and find an algorithm in polynomial time for the partition problem, I will only have proof that BinPacking with 2 bags whose capacity are the same is solvable in a polynomial time.</p>
| Apass.Jack | 580,448 | <p>It is pleasantly surprising that you got a simple reduction from the partition problem to the BinPacking problem. Instead of being skeptical, you should celebrate your elegant proof.</p>
<p>Here is a little lemma for this occasion.</p>
<p>Let us represent <a href="https://en.wikipedia.org/wiki/Decision_problem#Definition" rel="nofollow noreferrer">a decision problem</a> by <span class="math-container">$(I, Y)$</span>, where <span class="math-container">$I$</span> is an infinite set of inputs and <span class="math-container">$Y\subseteq I$</span> is the subset of inputs for which the answer is YES.</p>
<p><strong>(All is at least as hard as a part)</strong> Lemma: Let <span class="math-container">$I^-\subseteq I$</span>. If <span class="math-container">$(I^-, Y\cap I^-)$</span> is NP-hard, so is <span class="math-container">$(I,Y)$</span>.<br />
Proof: Obvious. (Please make sure you can prove it rigorously.)</p>
<p>Since you have proved that the partial BinPacking problem (with 2 bags of equal capacity) is NP-hard, the whole BinPacking problem is NP-hard as well.</p>
<hr />
<p>If we only reason in terms of NP-hardness, a polynomial-time algorithm for the partition problem would imply only "that BinPacking with 2 bags whose capacity are the same is solvable in a polynomial time". It does not imply the existence of a polynomial-time algorithm for the (whole) BinPacking problem.</p>
<p>However, since the partition problem is NP-complete and (the decision version of) the BinPacking problem is an NP problem, a polynomial-time algorithm for the partition problem will immediately give a polynomial-time algorithm for the BinPacking problem once we combine it with a polynomial-time reduction of the BinPacking problem to the partition problem.</p>
<hr />
<p>What you see here is a very general phenomenon.</p>
<p>In fact, people have been researching intensively, for an NP-hard problem <span class="math-container">$(I,Y)$</span>, how much smaller <span class="math-container">$I^-$</span> can be so that <span class="math-container">$(I^-, Y\cap I^-)$</span> remains NP-hard. A very small such <span class="math-container">$I^-$</span> will illustrate where the NP-hardness of <span class="math-container">$(I, Y)$</span> comes from. Here we can say that the NP-hardness of the BinPacking problem is already present in the much smaller case of two bags with equal capacity.</p>
<p>When <span class="math-container">$I^-$</span> becomes small enough, it may happen that it is "hard" to compute the YES/NO answer for most large elements of <span class="math-container">$I^-$</span>. We could then say that the elements of <span class="math-container">$I^-$</span> are hard instances of <span class="math-container">$I$</span>. You can take a look at <a href="https://en.wikipedia.org/wiki/Partition_problem#Hard_instances_and_phase-transition" rel="nofollow noreferrer">Hard instances and phase-transition</a>.</p>
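The reduction from the question can be sketched as follows (a toy illustration; the exhaustive checker is only for verifying the reduction on tiny inputs, since real BinPacking is NP-hard):

```python
from itertools import combinations

def partition_to_binpacking(weights):
    # The reduction from the question: same items, 2 bins,
    # each of capacity half the total weight.
    total = sum(weights)
    if total % 2:  # odd total: the Partition instance is trivially a NO
        return None
    return {"items": list(weights), "bins": 2, "capacity": total // 2}

def binpacking_two_bins(instance):
    # Exhaustive decider for the 2-bin special case (exponential time):
    # try every subset as the contents of the first bin.
    items, cap = instance["items"], instance["capacity"]
    total = sum(items)
    for r in range(len(items) + 1):
        for subset in combinations(range(len(items)), r):
            s = sum(items[i] for i in subset)
            if s <= cap and total - s <= cap:
                return True
    return False
```

With capacity equal to half the total, the two bins can hold everything only when both are filled exactly, which is precisely a YES answer to Partition.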
|
89,810 | <p>I have defined a table </p>
<pre><code>Table[Table[
Graphics3D[
Cuboid[radijDensity[[j, i]] {-Sin[kotiDensity[[j, i]]],
1 - Cos[kotiDensity[[j, i]]], 0}, {radijDensity[[j, i]]*
Sin[kotiDensity[[j, i]]],
radijDensity[[j, i]] (1 - Cos[kotiDensity[[j, i]]]) +
visina[[j, i + 1]], 0}]], {i, 1, n + m - 10}], {j, 1,
Length[force[[All, 1]]], 1}]
</code></pre>
<p>of cuboids.</p>
<p>But what I want is to have all of them on the same plot, but at different z-axis values, and I don't know how to do it. Something like <code>ListPlot3D</code>, except that I want it to show those cuboids.</p>
| Szabolcs | 12 | <blockquote>
<p>Something like ListPlot3D just that I want it to show those cuboids.</p>
</blockquote>
<p>If you need to place the same shape at multiple points either in 2D or 3D, the best solution is <a href="http://reference.wolfram.com/mathematica/ref/Translate.html" rel="nofollow noreferrer"><code>Translate</code></a>. It can take more than one translation vector as the second argument.</p>
<p>Example:</p>
<pre><code>pts = RandomReal[10, {20, 3}];
Graphics3D[
Translate[
Cuboid[],
pts
]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/YMwpJ.png" alt="Mathematica graphics"></p>
|
4,398,207 | <p>I have the following exercise:</p>
<blockquote>
<p>Determine whether the functor that sends an abelian group to its <span class="math-container">$n$</span>-torsion subgroup, for <span class="math-container">$n\geq 2$</span>, is exact.</p>
</blockquote>
<p>I know that I need to take an exact pair <span class="math-container">$f\colon M\to N$</span> and <span class="math-container">$g\colon N\to L$</span> in <span class="math-container">$\textrm{Ab}$</span>, i.e. with <span class="math-container">$\operatorname{im}(f)=\ker(g)$</span>, and then show that <span class="math-container">$\operatorname{im}(F(f))=\ker(F(g))$</span> also holds for the functor <span class="math-container">$F$</span>. Then the functor is exact.</p>
<p>But somehow I first don't see how this functor works: it sends an abelian group <span class="math-container">$A$</span> to the subgroup <span class="math-container">$\{g\in A:g^n=e\}$</span>, i.e. all the elements whose order divides <span class="math-container">$n$</span>.</p>
<p>Could someone maybe explain this a bit to me?</p>
<p>Thanks for your help</p>
| Andreas Blass | 48,510 | <p>Let <span class="math-container">$f:\mathbb Z\to\mathbb Z$</span> be the function <span class="math-container">$n\mapsto2n$</span>. Then the short exact sequence
<span class="math-container">$$
0\to\mathbb Z\overset{f}\to\mathbb Z\to\mathbb Z/2\to0
$$</span>
is sent by the "torsion part" functor to
<span class="math-container">$$
0\to0\to0\to\mathbb Z/2\to0,
$$</span>
which is not exact at <span class="math-container">$\mathbb Z/2$</span>.</p>
|
2,110,561 | <p>So I want to find the volume of the body D defined as the region under a sphere with radius 1 with center at (0, 0, 1) and above the cone given by $z = \sqrt{x^2+y^2}$. The answer should be $\pi$. A hint is included that you should use spherical coordinates. I've started by making an equation for the sphere, $x^2+y^2+(z-1)^2=1$. I used the transformation $(x, y, z) = (\rho\sin\phi\cos\theta, \rho\sin\phi\sin\theta, \rho\cos\phi+1)$. Now I'm struggling to define the region D. I got that $0\leq\theta\leq2\pi$, but I can't find the bounds for $\phi$ and $\rho$.</p>
| Kuifje | 273,220 | <p>In spherical coordinates:
$$
E=\{(\rho,\theta,\phi)|0\le \theta\le 2\pi, 0\le \phi \le \pi/4, 0\le \rho \le 2\cos \phi \}
$$
It follows that</p>
<p>$$
\boxed{
V= \iiint_E dV = \int_0^{2\pi}\int_0^{\pi/4}\int_0^{2\cos\phi}\rho^2\sin\phi \;d\rho d\phi d\theta = \pi
}
$$</p>
<p><strong>Note</strong>: to find your bounds for $\phi$ and $\rho$:</p>
<ul>
<li>The upper bound for $\phi$ is precisely the cone $z=\sqrt{x^2+y^2}\; \Leftrightarrow \; \rho \cos \phi = \rho \sin \phi \; \Leftrightarrow \; \phi = \pi/4$</li>
<li>The upper bound for $\rho$ is precisely the sphere $x^2+y^2+(z-1)^2=1\; \Leftrightarrow \;\rho^2\sin^2\phi+(\rho \cos\phi-1)^2=1\; \Leftrightarrow \;\rho=2\cos\phi$</li>
</ul>
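As a numeric sanity check of the boxed value, one can do the <span class="math-container">$\rho$</span> integral in closed form (<span class="math-container">$\int_0^{2\cos\phi}\rho^2\,d\rho=\frac{8\cos^3\phi}{3}$</span>) and apply Simpson's rule to what remains (a stdlib-Python sketch, not part of the derivation):

```python
import math

# After integrating rho analytically, the remaining phi-integrand is
# (2 cos(phi))^3 / 3 * sin(phi); the theta integral contributes 2*pi.
def integrand(phi):
    return (2.0 * math.cos(phi)) ** 3 / 3.0 * math.sin(phi)

def simpson(f, a, b, n=1000):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

V = 2 * math.pi * simpson(integrand, 0.0, math.pi / 4)
print(V)  # ≈ 3.14159... = pi
```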
|
1,600,054 | <p>The graph of $y=x^x$ looks like this:</p>
<p><a href="https://i.stack.imgur.com/JdbSv.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JdbSv.gif" alt="Graph of y=x^x."></a></p>
<p>As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.</p>
<p>I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that the minimum of $x^x$ occurs at $x=1/e$. <strong>Is this just pure coincidence, or could someone provide an intuitive explanation</strong> (i.e. more than just a proof) <strong>of why this is?</strong></p>
| Narasimham | 95,860 | <p>Try to minimize the logarithm of $ y= x^x,$ i.e. minimize $\log y = x \, \log x$ </p>
<p>Its derivative is</p>
<p>$$ 1 + \log(x) $$</p>
<p>Setting this equal to zero shows that the minimum of $y(x)$ occurs at </p>
<p>$$ x= \dfrac{1}{e}= 0.36788$$</p>
<p>and the minimum value is</p>
<p>$$ y_{min}= \dfrac{1}{{e}^{\frac{1}{e}}} \approx 0.6922$$</p>
<p>The tiny red dot shown in your graph for minimum is:</p>
<p>$$ (x,y) = (0.36788, 0.6922) $$</p>
<p>Note that the minimum value is not $\dfrac{1}{e}$! It is the reciprocal of the $e^{\text{th}}$ root of $e$.</p>
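The numbers in the answer are easy to reproduce with a brute-force grid search (stdlib Python; the grid step is an arbitrary choice):

```python
import math

f = lambda x: x ** x  # defined for x > 0

# Grid search on (0, 1]; the minimum is interior.
xs = [i * 1e-5 for i in range(1, 100001)]
x_min = min(xs, key=f)

print(x_min)     # ≈ 0.36788 = 1/e
print(f(x_min))  # ≈ 0.6922  = e^(-1/e)
```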
|
187,197 | <p>I have a logic expression:
<code>f0[a0_, a1_, a2_, a3_] := a0 And Not a1 And Not a2 And a3 Or Not a0 And a2 And a3 Or Not a0 And a1 And a3</code>, I know I should use <code>BooleanTable</code>, but it cannot generate a table like below.</p>
<p>How to generate a truth table in mathematica like below?</p>
<p><a href="https://i.stack.imgur.com/XJ58w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XJ58w.png" alt="enter image description here"></a></p>
| Conor Cosnett | 36,681 | <pre><code>truthTableFormattor[rawData_] := Insert[Insert[
Grid[rawData /. {0 -> 0,
1 -> Item[1, Background -> Lighter[Magenta]]},
FrameStyle -> Gray,
Frame -> All], {Background -> {None, {GrayLevel[0.7], {White}}},
Dividers -> {Black, {2 -> Black}}, Frame -> True,
Spacings -> {2, {2, {0.7}, 2}}}, 2], {Dividers -> All,
Spacings -> .7 {1, 1}}, 2];
truthTable[f__] := Module[{}, atoms = Cases[Most[{f}],
(a_ /;Length[a] == 0 \[And] Not[StringQ[a]])];heads =
ToString[TraditionalForm@#] & /@ {f};
rawData = Transpose@Boole[BooleanTable[#, atoms] & /@ {f}];
If[Last[{f}] === 1,
Transpose@Boole[BooleanTable[#, atoms] & /@ Most[{f}]],
If[Last[{f}] === "rev",
truthTableFormattor[{ToString[
TraditionalForm@#] & /@ (Most@{f})}~Join~
Transpose[(Reverse /@
Boole[BooleanTable[#, atoms] & /@ (Most@{f})])]],
truthTableFormattor[{heads}~Join~rawData]]]];
f0[a0_, a1_, a2_, a3_] := a0 \[And] \[Not] a1 \[And] \[Not] a2 \[And] a3;
truthTable[a0, a1, a2, a3, f0[a0, a1, a2, a3]]
</code></pre>
<p><a href="https://i.stack.imgur.com/p5SVB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p5SVB.png" alt="enter image description here"></a></p>
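For readers without Mathematica, the same kind of truth table can be produced with a few lines of plain Python (a rough sketch of the idea, not a port of the code above; `f0` here is the full three-term expression from the question):

```python
from itertools import product

def f0(a0, a1, a2, a3):
    # a0 ∧ ¬a1 ∧ ¬a2 ∧ a3  ∨  ¬a0 ∧ a2 ∧ a3  ∨  ¬a0 ∧ a1 ∧ a3
    return (a0 and not a1 and not a2 and a3) or \
           (not a0 and a2 and a3) or \
           (not a0 and a1 and a3)

print("a0 a1 a2 a3 | f0")
for a0, a1, a2, a3 in product([0, 1], repeat=4):
    print(a0, a1, a2, a3, "|", int(f0(a0, a1, a2, a3)))
```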
|
424,445 | <p>I'm studying pattern recognition and statistics, and in almost every book I open on the subject I bump into the concept of <strong>Mahalanobis distance</strong>. The books give sort-of-intuitive explanations, but still not good enough for me to really understand what is going on. If someone asked me "What is the Mahalanobis distance?" I could only answer: "It's this nice thing, which measures distance of some kind" :) </p>
<p>The definitions usually also involve eigenvectors and eigenvalues, which I have a little trouble connecting to the Mahalanobis distance. I understand the definition of eigenvectors and eigenvalues, but how are they related to the Mahalanobis distance? Does it have something to do with a change of basis in linear algebra, etc.?</p>
<p>I have also read these former questions on the subject:</p>
<p><a href="https://stats.stackexchange.com/questions/41222/what-is-mahanalobis-distance-how-is-it-used-in-pattern-recognition">https://stats.stackexchange.com/questions/41222/what-is-mahanalobis-distance-how-is-it-used-in-pattern-recognition</a></p>
<p><a href="https://math.stackexchange.com/questions/261557/intuitive-explanations-for-gaussian-distribution-function-and-mahalanobis-distan">Intuitive explanations for Gaussian distribution function and mahalanobis distance</a></p>
<p><a href="http://www.jennessent.com/arcview/mahalanobis_description.htm" rel="nofollow noreferrer">http://www.jennessent.com/arcview/mahalanobis_description.htm</a></p>
<p>The answers are good and the pictures nice, but still I don't <strong>really</strong> get it...I have an idea but it's still in the dark. Can someone give a "How would you explain it to your grandma"-explanation so that I could finally wrap this up and never again wonder what the heck a Mahalanobis distance is? :) Where does it come from, what, why? </p>
<p>I will post this question on two different forums so that more people could have a chance answering it and I think many other people might be interested besides me :) </p>
<p>Thank you in advance for help!</p>
| Adnan Baysal | 953,976 | <p>I found <a href="https://www.machinelearningplus.com/statistics/mahalanobis-distance/" rel="nofollow noreferrer">this link</a> useful for understanding what Mahalanobis distance measures actually. The following image captures the essence very well: <a href="https://i.stack.imgur.com/LECVD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LECVD.jpg" alt="Uncorrelated vs. Correlated data in 2D" /></a></p>
<p>If your dataset has a strong correlation as in the plot on the right, you probably want Point 2 to be more distant to black point in the center than the Point 1, although they have the same Euclidean distance. As Avitus pointed out, multiplying with the inverse of the correlation matrix deforms the Eucledean distance so that Point 2 becomes more distant than Point 1.</p>
|
903,049 | <p>I have to find the series expansion and interval of convergence for the function $\ln(1 - x)$.</p>
<p>For the expansion, I have gone through the process and obtained the series:</p>
<p>$-x - (x^2/2) - (x^3/3) - . . . - (-1)^k((-x)^k)/k$</p>
<p>I know that the interval of convergence will be $(-1,1)$, but am having trouble with the ratio test component to achieve this result. i.e. I am having trouble breaking down/simplifying the equation.</p>
<p>Thanks very much</p>
| amWhy | 9,003 | <p>Don't let the $(-1)^k$ or $(-x)^k = (-1)^kx^k$ trouble you. They have the effect of canceling each other out for odd $k$, and besides, for the ratio test, we apply it taking the absolute value of the general term $|a_k|$.</p>
<p>$$|a_k| = \frac{(x)^k }{k}$$</p>
<p>$$\frac{a_{k+1}}{a_k} = \frac{\frac{(x)^{k+1}}{k+1}}{\frac{(x)^k }{k}} = \frac{xk}{k+1}$$</p>
|
2,502,711 | <p>If the cross ratio $Z_1, Z_2, Z_3$ and $Z_4$ is real, then</p>
<p>which of the following statement is true? </p>
<p>1)$Z_1, Z_2$ and $Z_3$ are collinear</p>
<p>2)$Z_1, Z_2$ and $Z_3$ are concyclic</p>
<p>3)$Z_1, Z_2$ and $Z_3$ are collinear when atleast one $Z_1, Z_2$ or $Z_3$ is real </p>
<p>My attempt : By theorem: <a href="https://math.stackexchange.com/questions/482166/cross-ratio-is-real-on-image-of-real-axis">Cross ratio is real on image of real axis</a></p>
<p>I am confused due to $Z_4$ is not being given. Please help me.</p>
| aleden | 468,742 | <p>$$\int_{0}^\infty e^{-azx}e^{-sx}dx=\int_{0}^\infty e^{-x(az+s)}dx=-\frac{e^{-x(az+s)}}{az+s}|_{0}^\infty=\frac{1}{az+s}$$</p>
|
1,840,159 | <blockquote>
<p>Question: Prove that a group of order 12 must have an element of order 2.</p>
</blockquote>
<p>I believe I've made great stride in my attempt.</p>
<p>By corollary to Lagrange's theorem, the order of any element $g$ in a group $G$ divides the order of a group $G$.</p>
<p>So, $ \left | g \right | \mid \left | G \right |$.
Hence, the possible orders of $g$ are $\left | g \right |\in\left \{ 1,2,3,4,6,12 \right \}.$</p>
<p>Suppose $\left | g \right |=12.$
Then, $g^{12}=\left ( g^{6} \right )^{2}=e.$
So, $\left | g^{6} \right |=2$</p>
<p>Using the same idea and applying it to $\left | g \right |\in\left \{ 6,4,2 \right \},$
we see that a suitable power of $g$ has order 2.</p>
<p>However, when $\left | g \right |=3$ or $\left | g \right |=1$, this argument does not produce an element of order 2.</p>
<p>How can I take this attempt further?</p>
<p>Thanks in advance. Useful <strong>hints</strong> would be helpful.</p>
| N. S. | 9,176 | <p><strong>Hint:</strong> Here is a simple proof idea that every group of even order must have an element of order $2$.</p>
<p>Pair every element in $G \backslash \{ e \}$ with its inverse. If all pairs consisted of two different elements, then $G \backslash \{ e \}$ would have an even number of elements — but $|G|-1$ is odd. </p>
<p>What does it mean that $a=a^{-1}$?</p>
<blockquote class="spoiler">
<p> $a=a^{-1} \Leftrightarrow a^2=e$. And since $a \neq e$ we get that $ord(a)=2$.</p>
</blockquote>
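The pairing argument is easy to see concretely in the additive group $\mathbb{Z}_{12}$, where $a=a^{-1}$ reads $a\equiv -a \pmod{12}$ (a tiny stdlib-Python illustration):

```python
n = 12
elements = list(range(n))  # Z_12 under addition mod n; identity is 0

# Non-identity elements equal to their own inverse:
self_paired = [a for a in elements if a != 0 and a == (-a) % n]
print(self_paired)  # [6] — the unique element of order 2

# The remaining non-identity elements pair off with their inverses,
# so their count is even:
rest = [a for a in elements if a != 0 and a != (-a) % n]
print(len(rest))  # 10
```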
|
1,527,137 | <p>Usually one has the matrix and wishes to estimate the eigenvalues, but here it's the other way around: I have the positive eigenvalues of an unknown real positive definite matrix and I would like to say something about it's diagonal elements.</p>
<p>The only result I was able to find is that the sum of the eigenvalues coincides with the trace of the matrix, does anyone know of anything more specific? Or perhaps can point me to any literature that discusses this problem?</p>
| Ilya | 5,887 | <p>For any fixed partition $a=x_0\leq\dots\leq x_n =b$ we have
$$
S^f(b) -S^f(a) + f(b) - f(a) \geq \sum_{i = 0}^{n-1} |f(x_{i+1}) - f(x_i)| + \sum_{i = 0}^{n-1} (f(x_{i+1}) - f(x_i)) \geq 0
$$
since $y + |y| \geq 0$ for all $y$. Hence, when you take $\sup$ over partitions, the inequality still holds.</p>
|
104,297 | <p>How would I go about solving</p>
<p>$(1+i)^n = (1+\sqrt{3}i)^m$ for integer $m$ and $n$?</p>
<p>I have tried </p>
<pre><code>Solve[(1+I)^n == (1+Sqrt[3] I)^m && n ∈ Integers && m ∈ Integers, {n, m}]
</code></pre>
<p>but this does not give the answer in the 'correct' form.</p>
| Eric Towers | 16,237 | <p>One can get a truly ridiculous solution by <code>ComplexExpand[]</code>ing the real and imaginary parts:</p>
<pre><code>Reduce[ComplexExpand[{Re /@ #, Im /@ #}] &[
(1 + I)^n == (1 + Sqrt[3] I)^m],
{m, n}, Integers]
(* ... an astonishing mess involving 14 integer parameters ... *)
</code></pre>
<p>However, pulling out the magnitude and argument is much nicer. </p>
<pre><code>Reduce[ComplexExpand[{Abs /@ #, Arg /@ #}] &[
(1 + I)^n == (1 + Sqrt[3] I)^m],
{m, n}, Integers]
(* C[1] ∈ Integers && m == 12 C[1] && n == 24 C[1] *)
</code></pre>
<p>(This essentially automates <a href="https://mathematica.stackexchange.com/a/104311/16237">@bbgodfrey's answer</a>.)</p>
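The smallest positive solution $(m,n)=(12,24)$ (i.e. `C[1] == 1` above) can be spot-checked in plain Python; both sides come out to $2^{12}=4096$, since $|1+i|^{24}=2^{12}$ and $|1+\sqrt3\,i|^{12}=2^{12}$ with arguments that are multiples of $2\pi$:

```python
import math

lhs = (1 + 1j) ** 24
rhs = (1 + math.sqrt(3) * 1j) ** 12

print(lhs)  # ≈ (4096+0j)
print(rhs)  # ≈ (4096+0j)
```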
|
705,829 | <p>I'm trying to solve a problem here.</p>
<p>It says: "Prove that a triangle is isosceles if $\large b=2a\sin\left(\frac{\beta}{2}\right)$."
$B-\beta$
I've tried to prove it but I can't</p>
<p>Can anyone help me?</p>
| sirfoga | 83,083 | <p><strong>Hint</strong> : for the law of sines you have $$ \frac{a}{\sin\alpha} = \frac{b}{\sin\beta} = \frac{2a\sin\left(\frac{\beta}{2}\right)}{\sin\beta} \Rightarrow ...$$ </p>
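Following the hint through: since $\sin\beta=2\sin\frac{\beta}{2}\cos\frac{\beta}{2}$, the hypothesis forces $\sin\alpha=\cos\frac{\beta}{2}$, i.e. $\alpha=\frac{\pi}{2}-\frac{\beta}{2}$, and then $\gamma=\pi-\alpha-\beta=\alpha$, so the triangle is isosceles. A numeric spot-check (stdlib Python, arbitrary $\beta$):

```python
import math

beta = 0.7                      # any angle in (0, pi)
alpha = math.pi / 2 - beta / 2  # forced by b = 2 a sin(beta/2)
gamma = math.pi - alpha - beta

# Law of sines: sides proportional to the sines of the opposite angles.
a, b, c = math.sin(alpha), math.sin(beta), math.sin(gamma)

print(abs(b - 2 * a * math.sin(beta / 2)))  # ≈ 0: the hypothesis holds
print(abs(a - c))                           # ≈ 0: the triangle is isosceles
```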
|
3,226,320 | <p>I have some data points that need to be fit to the curve defined by</p>
<p><span class="math-container">$$y(x)=\frac{k}{(x+a)^2} - b$$</span></p>
<p>I have considered that it can be done by the least squares method. However, the analytical solution gives me a negative <span class="math-container">$a$</span>, so it puts the first point on the left branch of this hyperbola and I need all the points to fit to the right branch, thus <span class="math-container">$a$</span> must be positive. All my points have positive <span class="math-container">$x$</span> and <span class="math-container">$y$</span> is non-increasing.</p>
<p>Is there any way to add this type of constraint to analytical solution?</p>
<p>I would also kindly appreciate any links to related and/or useful information on iterative numerical solution. I need to program everything manually for my mobile app, so I can't use any external software or libraries.</p>
| JJacquelin | 108,514 | <p>I cannot see any difficulty with your data points x = {0, 1, 2, 3}, y = {-23, -32, -38, -40}.</p>
<p>With least squares fitting my result is shown on the figure below. The computed value of <span class="math-container">$a$</span> is positive as expected.</p>
<p>If you obtain a negative <span class="math-container">$a$</span> or other aberrant results this is probably due to the software that you use.</p>
<p>Since the regression is non-linear, the usual softwares proceed with iterative calculus which requires initial values for the sought parameters. The computation of preliminary approximates of the parameters is the main weakness of the softwares. If the "guessed" starting values are not good enough the further iterative computation may lead to incorrect results.</p>
<p>Of course I cannot be sure that this is the true explanation of the trouble in your case of calculus without more information about the algorithm of your software, especially for the approximation of the starting values of the parameters.</p>
<p><a href="https://i.stack.imgur.com/5a61C.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5a61C.gif" alt="enter image description here"></a></p>
<p>IN ADDITION after the discussion in comments :</p>
<p>So, you want to write your own program. I suggest a simplified way for non-linear regression in case of the function
<span class="math-container">$$y=\frac{k}{(x+a)^2}-b$$</span>
Start with a guessed value <span class="math-container">$a=a_0$</span>.
From the data <span class="math-container">$(x_k, y_k)$</span> compute transformed data <span class="math-container">$(X_k,y_k)$</span> with
<span class="math-container">$$X_k=\frac{1}{(x_k+a_0)^2}$$</span>
Then make a linear regression for the unknown parameters <span class="math-container">$k,b$</span> with respect to the linear function
<span class="math-container">$$y=kX-b$$</span>
Compute a corrected value of <span class="math-container">$a_0$</span> and iterate the process.</p>
<p>Of course it is possible to proceed "by hand" with successive corrections of <span class="math-container">$a_0$</span> by trial and error but this should be tiresome. </p>
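To make the scheme concrete, here is a rough stdlib-Python sketch. It is my own illustrative variant: instead of hand-tuned corrections of $a_0$, it scans a grid of candidate $a$ values, solves the linear subproblem $y=kX-b$ for each, and keeps the best fit; the synthetic data and all names are assumptions for the demo:

```python
# Synthetic noise-free data from y = k/(x+a)^2 - b with a=1, b=2, k=3.
xs = [0.05 * i for i in range(1, 101)]            # x in (0, 5]
ys = [3.0 / (x + 1.0) ** 2 - 2.0 for x in xs]

def linear_fit(X, Y):
    """Least squares for Y ≈ k*X - b; returns (k, b, sse)."""
    n = len(X)
    sx, sy = sum(X), sum(Y)
    sxx = sum(u * u for u in X)
    sxy = sum(u * v for u, v in zip(X, Y))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - k * sx) / n          # model: Y = k*X + c, so b = -c
    sse = sum((v - k * u - c) ** 2 for u, v in zip(X, Y))
    return k, -c, sse

best = None
a_val = 0.1
while a_val <= 3.0:
    X = [1.0 / (x + a_val) ** 2 for x in xs]
    k, b, sse = linear_fit(X, ys)
    if best is None or sse < best[3]:
        best = (a_val, k, b, sse)
    a_val += 0.001

a_hat, k_hat, b_hat, _ = best
print(a_hat, k_hat, b_hat)  # ≈ 1.0, 3.0, 2.0
```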
|
3,226,320 | <p>I have some data points that need to be fit to the curve defined by</p>
<p><span class="math-container">$$y(x)=\frac{k}{(x+a)^2} - b$$</span></p>
<p>I have considered that it can be done by the least squares method. However, the analytical solution gives me a negative <span class="math-container">$a$</span>, so it puts the first point on the left branch of this hyperbola and I need all the points to fit to the right branch, thus <span class="math-container">$a$</span> must be positive. All my points have positive <span class="math-container">$x$</span> and <span class="math-container">$y$</span> is non-increasing.</p>
<p>Is there any way to add this type of constraint to analytical solution?</p>
<p>I would also kindly appreciate any links to related and/or useful information on iterative numerical solution. I need to program everything manually for my mobile app, so I can't use any external software or libraries.</p>
| JJacquelin | 108,514 | <p>Supposing that the OP is looking for a conventional method of regression, the present answer would not be convenient. This is why I post it as a distinct answer.</p>
<p>The calculus below is ultra simple since there is no iteration and no need for initial guessed values. </p>
<p><a href="https://i.stack.imgur.com/ii9uM.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ii9uM.gif" alt="enter image description here"></a></p>
<p>Numerical example :</p>
<p><a href="https://i.stack.imgur.com/XPtNv.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XPtNv.gif" alt="enter image description here"></a></p>
<p>Result :</p>
<p><a href="https://i.stack.imgur.com/mIXxJ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mIXxJ.gif" alt="enter image description here"></a></p>
<p><span class="math-container">$\textbf{Comment :}$</span></p>
<p>At first sight, while comparing the approximate values of parameters <span class="math-container">$a,b,k$</span> , it seems that the present result is not close to the previous result obtained with a classical non-linear regression. </p>
<p>In fact the curves drawn on the respective graphs are almost indistinguishable. Moreover the respective standard deviations are close : <span class="math-container">$0.403$</span> for the first method compared to <span class="math-container">$0.441$</span> for the second.</p>
<p>This is an unexpectedly good result, because in the case of a small number of points the numerical integration introduces additional deviations. (The numerical integration is involved in the computation of the <span class="math-container">$S_i$</span> ).</p>
<p><span class="math-container">$\textbf{For information :}$</span></p>
<p>In this non-conventional method, instead of fitting the function, where the parameters act non-linearly, one fits an integral equation where the same parameters act linearly. The original function is a solution of the integral equation. This transforms the non-linear regression into a linear regression. For more explanation and examples : <a href="https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales" rel="nofollow noreferrer">https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales</a></p>
<p>In the present case a convenient integral equation is :
<span class="math-container">$$ay+xy+2bx+\int y\;dx=\text{constant}$$</span>
(which one can check by substituting <span class="math-container">$y=\frac{k}{(x+a)^2}-b$</span> and <span class="math-container">$\int y\,dx=-\frac{k}{x+a}-bx+C$</span>).
One observes that the parameter <span class="math-container">$k$</span> disappears from this equation. That is why an additional linear regression is necessary to compute the missing parameter. </p>
<p>Of course the criterion of fitting is not the same as in the conventional methods. If a specific criterion is specified in the wording of the problem, one cannot avoid a non-linear regression adapted to that criterion. In that case one can start the iterative process from the values of parameters provided by the above regression with the integral equation. This avoids the uncertain search for good "guessed" values of parameters.</p>
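For this particular model one can verify by direct substitution (using $\int y\,dx=-\frac{k}{x+a}-bx+C$) that $ay+xy+2bx+\int y\,dx$ is constant, so the fit reduces to ordinary linear least squares. A stdlib-Python sketch on synthetic data (the trapezoid step and all names are my own choices, not the linked paper's code):

```python
# True parameters for the synthetic data: y = k/(x+a)^2 - b
a_true, b_true, k_true = 1.0, 2.0, 3.0
xs = [0.0125 * i for i in range(401)]                 # x in [0, 5]
ys = [k_true / (x + a_true) ** 2 - b_true for x in xs]

# S_i ≈ ∫_{x_0}^{x_i} y dx by the trapezoid rule (S_0 = 0).
S = [0.0]
for i in range(1, len(xs)):
    S.append(S[-1] + 0.5 * (ys[i] + ys[i - 1]) * (xs[i] - xs[i - 1]))

# Identity: a*y + x*y + 2b*x + S = const, i.e. -(x*y + S) = a*y + 2b*x - const.
cols = [ys, xs, [1.0] * len(xs)]
t = [-(x * y + s) for x, y, s in zip(xs, ys, S)]
A = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
rhs = [sum(u * v for u, v in zip(ci, t)) for ci in cols]

def solve3(M, v):
    """Cramer's rule for a 3x3 linear system."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    return [det([[v[i] if c == j else M[i][c] for c in range(3)]
                 for i in range(3)]) / d for j in range(3)]

a_hat, two_b_hat, _ = solve3(A, rhs)
b_hat = two_b_hat / 2.0

# A second, ordinary linear regression recovers k from y + b = k/(x+a)^2.
X2 = [1.0 / (x + a_hat) ** 2 for x in xs]
Y2 = [y + b_hat for y in ys]
k_hat = sum(u * v for u, v in zip(X2, Y2)) / sum(u * u for u in X2)

print(a_hat, b_hat, k_hat)  # ≈ 1.0, 2.0, 3.0
```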
|
4,181,524 | <p><span class="math-container">$$\text{I need to prove the following lemma : }\frac{\zeta'(s)}{\zeta(s)} = - \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^s}$$</span></p>
<p><strong>My attempt:</strong></p>
<p><span class="math-container">$$\text{So, }\zeta'(s) = \sum_{n=1}^{\infty}\frac{d}{ds}\frac{1}{n^s} = -\sum_{n=1}^{\infty} \frac{\ln(n)}{n^s}$$</span></p>
<p>This gives : <span class="math-container">$$\frac{\zeta'(s)}{\zeta(s)} = -\frac{\sum_{n=1}^{\infty} \frac{\ln(n)}{n^s}}{\sum_{n=1}^{\infty}\frac{1}{n^s}} $$</span></p>
<p>How do I proceed ? Please help.</p>
| Kenta S | 404,616 | <p>HINT:<span class="math-container">$$\frac{\zeta'(s)}{\zeta(s)}=\frac{d}{ds}\ln\zeta(s)=-\sum_p\frac d{ds}\ln(1-p^{-s}).$$</span></p>
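The identity is easy to sanity-check numerically by truncating all the Dirichlet series at the same point (stdlib Python; recall $\Lambda(n)=\log p$ when $n$ is a power of a prime $p$, and $0$ otherwise):

```python
import math

def von_mangoldt(n):
    """Λ(n) = log p if n = p^m for a prime p, else 0."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n) if n > 1 else 0.0  # n itself prime (or n = 1)

s, N = 3.0, 3000
zeta = sum(1.0 / n ** s for n in range(1, N + 1))
zeta_prime = -sum(math.log(n) / n ** s for n in range(1, N + 1))
rhs = -sum(von_mangoldt(n) / n ** s for n in range(2, N + 1))

print(zeta_prime / zeta, rhs)  # the two values agree closely
```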
|
4,539,637 | <blockquote>
<p>If the digits <span class="math-container">$7,7,3,2$</span>, and 1 are randomly arranged from left to right, what is the probability both of the 7 digits are to the left of the 1 digit?</p>
</blockquote>
<p>The answer is <span class="math-container">$1/3$</span> because the digits <span class="math-container">$7,7,1$</span> appear, equally likely, in one of the three relative orders <span class="math-container">$1 7 7$</span>, <span class="math-container">$7 7 1$</span>, <span class="math-container">$7 1 7$</span>, and only <span class="math-container">$7 7 1$</span> has both sevens to the left of the one.</p>
<p>I thought about re-arranging the two sevens as one glued digit,
"77"1; then the three and the two have two positions, and we re-arrange the glued "77" and the 1 using <span class="math-container">$3 \choose 1$</span> and divide by <span class="math-container">$5!$</span>.</p>
| user97357329 | 630,243 | <p>You may find a simple solution in <strong>(Almost) Impossible Integrals, Sums, and Series</strong>, Sect. <strong>6.60</strong>, page <span class="math-container">$533$</span>, showing that
<span class="math-container">$$\sum_{i=1}^\infty \sum_{j=1}^\infty \frac{\Gamma(i) \Gamma(j) \Gamma(x)}{\Gamma(i+j+x)}=\frac{1}{2}\left(\psi^{(1)}\left(\frac{x}{2}\right)-\psi^{(1)}\left(\frac{x+1}{2}\right)\right),$$</span>
where <span class="math-container">$\psi^{(1)}(x)$</span> is Trigamma function.</p>
|
89,000 | <p>Let $f:I \rightarrow \mathbb{R}$, where $I\subset \mathbb{R}$ is an interval, be midconvex, that is
$$f\left(\frac{x+y}{2}\right) \leq \frac{f(x)+f(y)}{2}$$ for all $x,y \in I$.
Assume that for some $x_0, y_0 \in \mathbb{R}$ such that $x_0 < y_0$ holds equality
$$f\left(\frac{x_0+y_0}{2}\right)= \frac{f(x_0)+f(y_0)}{2}.$$
What can we say about $f$ restricted to $[x_0, y_0]$ ? Maybe $f|_{[x_0,y_0]}$ is
a sum of some additive function and some constant?</p>
<p>Thanks.</p>
<p>Added.</p>
<p>Maybe then $f(qx_0+(1-q)y_0)=qf(x_0)+(1-q)f(y_0)$ for $q=\frac{k}{2^n}$, where $n \in \mathbb{N}$, $k=0,1,...,2^n$ ?</p>
| Robert Israel | 8,508 | <p>Consider the following example. Let $B$ be a Hamel basis of $\mathbb R$ over the rationals $\mathbb Q$. Thus each $x \in \mathbb R$ can be written uniquely as
$x = \sum_{b \in B} c_b(x) b$, where all but finitely many of the $c_b(x)$ are $0$ for any particular $x$. Let $f(x) = c_{b_0}(x)^2$ for some given $b_0 \in B$.
Since $c_b\left(\frac{x+y}{2}\right) = \frac{c_b(x) + c_b(y)}{2}$, $f$ is midpoint-convex. For any $x_0$ and $y_0$ such that $c_b(x_0) = c_b(y_0) = 0$ we have
$f\left(\frac{x_0+y_0}{2}\right) = \frac{f(x_0)+f(y_0)}{2} = 0$. But there is a dense set of points $ y_1$ where $f\left(\frac{x_0+y_1}{2}\right) < \frac{f(x_0)+f(y_1)}{2} $</p>
|
3,571,047 | <p>Here's what I have so far:</p>
<p><span class="math-container">$$\frac{\partial f}{\partial y}|_{(a,b)} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2 + 2tb + t^2) - \sin(a^2 + b^2)}{t} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2)[\cos(2tb + t^2) - 1] + \cos(a^2 + b^2)\sin(2tb + t^2)}{t}$$</span>
I can see that the side left of the plus sign might go to <span class="math-container">$0$</span>, and the side on the right would probably go to <span class="math-container">$\cos(a^2+b^2)$</span>, but I'm missing a <span class="math-container">$2b$</span> multiplying my solution!</p>
| Hadi | 645,692 | <p>We can play around with the substitution choice to ensure that the integral is expressed purely in terms of <span class="math-container">$t$</span>.</p>
<p>From the substitution choice we infer:
<span class="math-container">$$t=\sqrt{x}+1$$</span>
<span class="math-container">$$\implies t-1=\sqrt{x}$$</span>
<span class="math-container">$$\implies (t-1)^2=x.$$</span></p>
<p>As GEdgar said, after using the substitution <span class="math-container">$t=\sqrt{x}+1$</span>, you should obtain: <span class="math-container">$$dt=\frac{1}{2\sqrt{x}}dx$$</span>
<span class="math-container">$$\implies dx=2\sqrt{x}\,dt=2(t-1)\,dt.$$</span></p>
<p>We now make use of these equalities back in the integral:</p>
<p><span class="math-container">$$\int \frac{x}{\sqrt{x}+1}dx$$</span>
<span class="math-container">$$=\int \frac{2(t-1)^2(t-1)}{t}dt$$</span>
<span class="math-container">$$=2\int \frac{(t-1)^3}{t}dt.$$</span></p>
<p>We can expand the numerator and then divide each term by <span class="math-container">$t$</span>, like so:</p>
<p><span class="math-container">$$2\int\frac{t^3-3t^2+3t-1}{t}dt$$</span>
<span class="math-container">$$=2\int t^2-3t+3-\frac{1}{t}dt$$</span></p>
<p>Can you proceed from here?</p>
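(For reference, completing the integration gives $2\int\left(t^2-3t+3-\frac1t\right)dt=\frac{2t^3}{3}-3t^2+6t-2\ln t+C$ with $t=\sqrt x+1$.) A numeric cross-check against direct quadrature (stdlib Python):

```python
import math

def F(x):
    """Antiderivative: 2*(t^3/3 - 3t^2/2 + 3t - ln t), with t = sqrt(x) + 1."""
    t = math.sqrt(x) + 1.0
    return 2.0 * (t ** 3 / 3.0 - 1.5 * t ** 2 + 3.0 * t - math.log(t))

def simpson(f, a, b, n=2000):  # composite Simpson's rule, n even
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

g = lambda x: x / (math.sqrt(x) + 1.0)
print(F(4.0) - F(0.0))       # fundamental theorem of calculus
print(simpson(g, 0.0, 4.0))  # the two values agree
```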
|
3,001,747 | <p>I'd need some help evaluating this limit:</p>
<p><span class="math-container">$$\lim_{x \to 0} \frac{\ln\sin mx}{\ln \sin x}$$</span></p>
<p>I know it's supposed to equal 1 but I'm not sure how to get there.</p>
| J.G. | 56,861 | <p>You want <span class="math-container">$1+\lim_{x\to 0}\dfrac{\ln\frac{\sin mx}{\sin x}}{\ln\sin x}$</span>. The numerator <span class="math-container">$\to \ln m$</span>, the denominator <span class="math-container">$\to -\infty$</span>. The result is therefore <span class="math-container">$1+0=1$</span>.</p>
|
2,703,639 | <p>a) $f: L^1(0,3) \rightarrow \mathbb{R}$</p>
<p>b) $f: C[0,3] \rightarrow \mathbb{R}$</p>
<p>for part a I got $\|f\| = 1$ because $\|f(x)\|=|\int_0^2x(t)dt| \leq \int_0^2|x(t)|dt \leq\int_0^3|x(t)|dt = \|x(t)\|_1$ so $\|f\|=1$</p>
<p>for b, I think it's similar: $\|f(x)\|=|\int_0^2x(t)dt| \leq \int_0^2|x(t)|dt \leq\int_0^3|x(t)|dt = \|x(t)\|_1$ and I have a theorem that says there is some c>0 s.t. $\|x(t)\|_1 \leq c\|x(t)\|_{max}$ so by taking the infimum of all such c, we get that $\|f\| = 0$.</p>
<p>Are these correct? I am particularly worried about my answer for b</p>
| angryavian | 43,949 | <p>For part a), you have only shown $\|f\| \le 1$. But it should not be hard to find some $x$ such that $|f(x)| = \|x\|$.</p>
<p>For part b), note $|f(x)| \le \int_0^2 |x(t)|\,dt \le 2 \|x\|_{\max}$, so $\|f\| \le 2$. If you take some $x \in C[0,3]$ with $x(t)=1$ for $t \in [0,2]$ and $\|x\|_{\max}=1$, then $|f(x)|=2=2\|x\|_{\max}$, so $\|f\|=2$.</p>
|
57,213 | <p>Let <span class="math-container">$A \in \mathbb{Q}^{6 \times 6}$</span> be the block matrix below:</p>
<p><span class="math-container">$$A=\left(\begin{array}{rrrr|rr}
-3 &3 &2 &2 & 0 & 0\\
-1 &0 &1 &1 & 0 & 0\\
-1&0 &0 &1 & 0 & 0\\
-4&6 &4 &3 & 0 & 0\\
\hline
0 & 0 & 0 & 0 & 0 &1 \\
0 & 0 & 0 & 0 & -9 &6
\end{array}\right).$$</span></p>
<p>I found out that the minimal polynomial of <span class="math-container">$A$</span> is <span class="math-container">$(x-3)^3(x+1)^2$</span>, and now let</p>
<p><span class="math-container">$$f(x)=2x^9+x^8+5x^3+x+a$$</span></p>
<p>a polynomial, <span class="math-container">$a\in N$</span>. I need to find out for which <span class="math-container">$a$</span> the matrix <span class="math-container">$f(A)$</span> is invertible.</p>
<p>It has some similarity <a href="https://math.stackexchange.com/questions/57123/prove-that-if-gt-is-relatively-prime-to-the-characteristic-polynomial-of-a">to my last question</a>, but I still can't understand and solve it. Thanks again.</p>
| Gerry Myerson | 8,269 | <p>Expanding on the comment: </p>
<p>If $A$ has eigenvalue $\lambda$, then $f(A)$ has eigenvalue $f(\lambda)$. So $f(A)$ is not invertible if $f(\lambda)=0$. </p>
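Concretely for this problem: the roots of the minimal polynomial are $3$ and $-1$, so $f(A)$ is invertible exactly when $f(3)\neq 0$ and $f(-1)\neq 0$. A quick evaluation (stdlib Python):

```python
def f(x, a):
    return 2 * x ** 9 + x ** 8 + 5 * x ** 3 + x + a

# f(3, a) = 46065 + a > 0 for every natural a, so it never vanishes.
print(f(3, 0))   # 46065
# f(-1, a) = a - 7, which vanishes precisely when a = 7.
print(f(-1, 7))  # 0
print(f(-1, 8))  # 1
# Hence f(A) is invertible for every natural a except a = 7.
```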
|
2,372,171 | <p>Let $T$ be a bounded linear operator on a Hilbert space $H$. I have to show that the following are equivalent:</p>
<p>(i) $T$ is unitary</p>
<p>(ii) For every orthonormal basis $\{u_{\alpha}:\alpha\in \Lambda\}$, $\{T(u_{\alpha}):\alpha\in \Lambda\}$ is an orthonormal basis.</p>
<p>(iii) For some orthonormal basis $\{u_{\alpha}:\alpha\in \Lambda\}$, $\{T(u_{\alpha}):\alpha\in \Lambda\}$ is an orthonormal basis.</p>
<p>I have proved that (i)$\implies$ (ii). Also (ii)$\implies$ (iii) is obvious. </p>
<p>How to show that (iii)$\implies$ (i)? Please suggest anything?</p>
| Prahlad Vaidyanathan | 89,789 | <p>For $(iii) \Rightarrow (i)$, you want to show that
$$
\langle Tx,Ty\rangle = \langle x,y\rangle \qquad (\ast)
$$
for all $x,y\in H$. First note that $(\ast)$ is true if $x,y\in S := \{u_{\alpha}\}$. By sesqui-linearity, $(\ast)$ is true if $x,y\in \text{span}(S)$. However, $\text{span}(S)$ is dense in $H$, so if $x\in H,v\in \text{span}(S)$ and $\epsilon > 0$, then $\exists u\in \text{span}(S)$ such that $\|x-u\| < \epsilon$. Hence,
$$
|\langle Tx,Tv\rangle - \langle x,v\rangle| \leq |\langle Tx,Tv\rangle - \langle Tu,Tv\rangle| + |\langle Tu,Tv\rangle - \langle u,v\rangle| + |\langle u,v\rangle - \langle x,v\rangle|
$$
Since $T$ is bounded, you can make the right hand side as small as you want, proving that
$$
\langle Tx,Tv\rangle = \langle x,v\rangle
$$
Similarly, one can replace $v$ by an arbitrary $y\in H$, proving $(\ast)$ for all $x,y\in H$.</p>
<hr>
<p>Edit: My apologies, in addition to proving $(\ast)$, one also needs to prove that $T$ is surjective. To see this, one simply notes that $R(T)$, the range of $T$ contains an orthonormal basis for $H$, so if $x\in H$, then write
$$
x = \sum a_{\alpha} T(u_{\alpha}) \qquad (\dagger)
$$
so that the series $y = \sum a_{\alpha} u_{\alpha}$ converges because $(\dagger)$ converges. Now $y\in H$ and $x = T(y)$ since $T$ is continuous. Hence, $R(T) = H$.</p>
|
283,747 | <p>Let $BG$ denote the classifying space of a finite group $G$. For which group cohomology classes $c\in H^2(G;\mathbb{Z}/2)$ does there exist a real vector bundle $E$ over $BG$ such that $w_2(E)=c$?</p>
| Mark Grant | 8,103 | <p>You can set this up as an obstruction theory problem. Your cohomology class is represented by a map $c: BG\to K(\mathbb{Z}/2,2)$, and the question is whether this map lifts through the universal Stiefel-Whitney class $w_2:BO\to K(\mathbb{Z}/2,2)$. </p>
<p>The primary obstruction to such a lift turns out to be the class $$\beta Sq^2(c) = \beta(c\cup c)\in H^5(BG;\mathbb{Z}),$$ where $\beta$ is the Bockstein associated to the coefficient sequence $\mathbb{Z}\to \mathbb{Z}\to \mathbb{Z}/2$. This and several other relevant facts can be found in </p>
<p><em>Teichner, Peter</em>, <a href="http://dx.doi.org/10.2307/2160595" rel="noreferrer"><strong>6-dimensional manifolds without totally algebraic homology</strong></a>, Proc. Am. Math. Soc. 123, No.9, 2909-2914 (1995). <a href="https://zbmath.org/?q=an:0858.57033" rel="noreferrer">ZBL0858.57033</a>.</p>
<p>As Neil Strickland mentions, there are higher obstructions which can possibly be defined in terms of secondary cohomology operations, provided you know enough about the Postnikov tower of $BO$.</p>
|
2,647,123 | <p>I'm asked to find a $3\times3$ matrix in which no entry is $0$ but $A^2=0$. </p>
<p>The problem is that if I brute force it, I am left with a system of 6 equations (not all of which are linear...) and 6 unknowns. While I could in theory solve that, is there a more intuitive way of solving this problem, or am I going to have to brute force the solution?</p>
<p>Any suggestions would be greatly appreciated.</p>
| M. Winter | 415,941 | <p>Here is a geometric approach. Think about a matrix $P$ which orthogonally projects all of $\Bbb R^3$ onto a one-dimensional subspace spanned by a vector $n$:</p>
<p>$$P_{ij}=n_i n_j$$</p>
<p>After projection, rotate your space by $90^\circ$ around some axis $v$ orthogonal to $n$ by using a rotation matrix $R$. Your desired matrix can then be</p>
<p>$$M:=RP.$$</p>
<p>If you choose appropriate $n$ and $v$, your matrix will have no zero entries. </p>
<hr>
<p><strong>Example</strong></p>
<p>Choose $n=(1,2,3)$ and $v=(-2, 1, 0)$. We have $n\cdot v=0$. The resulting matrix is</p>
<p>$$
M=\frac1{\sqrt{5}}
\begin{pmatrix}
3& 6& 9&\\
6& 12& 18\\
-5& -10& -15
\end{pmatrix}.
$$</p>
<p>You can also drop the $1/\sqrt 5$ in front of the matrix. This will not change the result.</p>
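One can check directly (stdlib Python, with the $\frac1{\sqrt5}$ dropped) that the example matrix squares to zero while having no zero entries:

```python
M = [[3, 6, 9],
     [6, 12, 18],
     [-5, -10, -15]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

print(matmul(M, M))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(all(M[i][j] != 0 for i in range(3) for j in range(3)))  # True
```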
<hr>
<p><strong>Why does it work</strong></p>
<p>If you project your space onto a line $\ell$ and then rotate this line by $90^\circ$ degree, all vectors are now orthogonal to the line $\ell$. So by applying the matrix again, the complete rotated line will be projected into the zero-vector. Because all vectors were previously projected into the rotated line, everything gets mapped into zero on the second run.</p>
<p><a href="https://i.stack.imgur.com/zI1UB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zI1UB.png" alt="enter image description here"></a></p>
|
1,441,349 | <p>Given is that $a∈ℤ_n^*$ and $d|ord(a)$.</p>
<p>I need to show that $ord(a^d)= ord(a)/d$.</p>
<p>I started with the following:</p>
<p>$ord(a^d) = e$, the smallest positive integer such that $(a^d)^e\equiv 1\pmod n$</p>
<p>$ord(a)/d =f/d$ where $ord(a)=f$, the smallest positive integer such that $a^f\equiv 1\pmod n$</p>
<p>Now I want to prove that $e=f/d$. </p>
<p>I have tried multiplying and dividing the formulas, but I am not able to prove it. How do I do this?</p>
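Before attempting the proof, it may help to watch the statement hold in a concrete case such as $n=13$, $a=2$ (a small stdlib-Python check):

```python
def order(a, n):
    """Multiplicative order of a modulo n (a must be a unit mod n)."""
    e, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        e += 1
    return e

n, a = 13, 2
f = order(a, n)                 # ord(2 mod 13) = 12
for d in [1, 2, 3, 4, 6, 12]:   # the divisors of ord(a)
    print(d, order(pow(a, d, n), n), f // d)  # last two columns agree
```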
| Noah Schweber | 28,111 | <p>I think you're asking, "When do we have '$\emptyset\models\psi$'?"</p>
<p>If this is the case, the answer is: only when $\psi$ is a tautology. By definition, $\emptyset\models\psi$ iff every valuation making every formula in $\emptyset$ true, makes $\psi$ true. However, every valuation <em>at all</em> makes every formula in $\emptyset$ true, since there aren't any. So $\emptyset\models\psi$ if and only if $\psi$ is true under every valuation.</p>
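This "true under every valuation" reading is easy to mechanize; here is a toy stdlib-Python valuation checker (illustrative only):

```python
from itertools import product

def is_tautology(formula, n_vars):
    """True iff the formula holds under every valuation of its variables."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Since there are no premises, ∅ ⊨ ψ iff ψ is true under every valuation:
print(is_tautology(lambda p: p or not p, 1))  # True  — a tautology
print(is_tautology(lambda p, q: p and q, 2))  # False — fails some valuation
```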
<hr>
<p>You mention that propositional logic doesn't use quantifiers; I think this is an instance of confusing the theory and the metatheory. When we reason <em>about</em> propositional logic, we do so in a much more complicated logical system, where - among other things - we are allowed to use quantifiers. Definitions like that of "$\models$" take place in this metatheory.</p>
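<p>The definition in the first paragraph can be checked mechanically for small formulas (a sketch; formulas are modeled as Boolean functions, and "every valuation" becomes a brute-force loop):</p>

```python
from itertools import product

def empty_set_models(formula, n_vars):
    # "∅ ⊨ ψ" holds iff ψ is true under every valuation of its variables
    return all(formula(*v) for v in product([False, True], repeat=n_vars))

assert empty_set_models(lambda p: p or not p, 1)     # a tautology
assert not empty_set_models(lambda p, q: p or q, 2)  # not a tautology
```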
|
366,654 | <p>Find all values of the real number p for which the series converges:</p>
<p>$$\sum \limits_{k=2}^{\infty} \frac{1}{\sqrt{k} (k^{p} - 1)}$$ </p>
<p>I tried using the root test and the ratio test, but I got stuck on both. </p>
| preferred_anon | 27,150 | <p>$$\frac{dy}{dx}+\frac{y}{x-2}=5(x-2)\sqrt{y}$$
Multiply through by $\frac{1}{2}\sqrt \frac{x-2}{y}$ to get
$$\frac{\sqrt{x-2}}{2\sqrt{y}}\frac{dy}{dx}+\frac{\sqrt{y}}{2\sqrt{x-2}}=\frac{5}{2}(x-2)^{3/2}$$
The LHS is the derivative of $\sqrt{x-2}\sqrt{y}$, so we can integrate:
$$\sqrt{x-2}\sqrt{y}=\int\frac{5}{2}(x-2)^{3/2}dx=(x-2)^{5/2}+A$$
For some arbitrary constant $A$.
Thus, dividing by $\sqrt{x-2}$:<br>
$$\sqrt{y}=(x-2)^{2}+A(x-2)^{-1/2}$$
Then, finally, squaring:
$$y=(x-2)^{4}+2A(x-2)^{3/2}+A^{2}(x-2)^{-1}$$<br>
While I think @André's solution is more elegant, this one is perhaps an alternative.</p>
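<p>A quick numerical sanity check of the integration (a sketch; <code>A = 1</code> is an arbitrary sample value of the constant): since the antiderivative of $\frac52(x-2)^{3/2}$ is $(x-2)^{5/2}$, the closed form $\sqrt{y}=(x-2)^{2}+A(x-2)^{-1/2}$ can be tested against the original equation at a few points.</p>

```python
A = 1.0
h = 1e-6  # step for a central finite difference

def y(x):
    s = (x - 2)**2 + A * (x - 2)**-0.5   # s = sqrt(y)
    return s * s

for x in (2.5, 3.0, 4.0, 7.0):
    dy = (y(x + h) - y(x - h)) / (2 * h)  # numerical y'
    lhs = dy + y(x) / (x - 2)
    rhs = 5 * (x - 2) * y(x)**0.5
    assert abs(lhs - rhs) < 1e-4 * max(1.0, abs(rhs))
```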
|
2,218,716 | <p>Explain why 1/i is − i. </p>
<p>(That is: explain why the multiplicative inverse of i is the complex number − i.)</p>
<p>And then the hint that I was given was, what property defines the multiplicative inverse?</p>
<p>I know how to algebraically prove 1/i = -i, but need help writing the proof.</p>
| Stella Biderman | 123,230 | <p>By the definition of multiplicative inverse, $1/i$ satisfies $(1/i)i=1$. Presumably you know that $i^2=-1$. Then you have that $$(-i)i=-i^2=1=(1/i)i$$ By cancellation, $-i=1/i$.</p>
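<p>In case a machine check helps convince: Python's built-in complex numbers (with <code>1j</code> playing the role of $i$) confirm the cancellation argument.</p>

```python
# 1j plays the role of i
assert 1 / 1j == -1j       # the multiplicative inverse of i is -i
assert (-1j) * 1j == 1     # indeed (-i) * i = -i^2 = 1
```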
|
3,249,735 | <p>My question relates to this problem:</p>
<p>Prove by induction that 54 divides <span class="math-container">$2^{2k+1}-9k^2+3k-2$</span>. </p>
<p>My solving so far gives this answer: (after all calculations)</p>
<p><span class="math-container">$2^{2(k+1)+1}-9(k+1)^2+3(k+1)-2= 54 \cdot2^2k+27k^2-27k$</span></p>
<p>It is obvious that <span class="math-container">$27=\frac{1}{2}54$</span> divides this expression, but how do I figure it out if 54 divides it too? The end result is correct (checked!) </p>
| Deepak | 151,732 | <p><span class="math-container">$27k^2-27k = 27k(k-1)$</span>.</p>
<p>Exactly one of <span class="math-container">$k$</span> and <span class="math-container">$k-1$</span> is even, so... </p>
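<p>Both the original claim and the role of the remainder term <span class="math-container">$27k^2-27k=27k(k-1)$</span> can be confirmed by brute force (a sketch):</p>

```python
def f(k):
    return 2**(2*k + 1) - 9*k**2 + 3*k - 2

# 54 divides f(k) for every k >= 1 ...
assert all(f(k) % 54 == 0 for k in range(1, 300))
# ... and the induction-step remainder 27k(k-1) is always a multiple of 54,
# because one of k and k-1 is even
assert all((27 * k * (k - 1)) % 54 == 0 for k in range(1, 300))
```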
|
819,830 | <p>Is the idea of a proof by contradiction to prove that the desired conclusion is both true and false or can it be any derived statement that is true and false (not necessarily relating to the conclusion)? Or can it simply be an absurdity that you know is false but through your derivation comes out true?</p>
| Kaj Hansen | 138,538 | <p>If revolving around the $x$-axis makes more sense to you, then we can consider revolving the inverse function of $f(x) = \sqrt{x}$ around the $x$-axis with the appropriate bounds.</p>
<p>Of course, $f^{-1}(x) = x^2$, and so revolving $y = x^2$ around the $x$-axis from $x = 0$ to $x = 2$ will yield the same result. </p>
|
3,579,346 | <p>I've been learning some introductory analysis on manifolds and have had a small issue ever since the notion of tangent spaces at points on a differentiable manifold was introduced.</p>
<p>In our lectures, we began with the definition using equivalence classes of curves. But it is also possible to define tangent spaces using derivations of smooth functions (and apparently several other ways too, but for now I'm only familiar with these two).</p>
<p>It seems intuitively sensible to call both these pictures (the curve and derivative ones) "equivalent": let the point of interest be <span class="math-container">$p$</span> and pick a local chart <span class="math-container">$\phi$</span>. Then we form a quotient of the set of curves through <span class="math-container">$p$</span> (parametrized so that <span class="math-container">$p=\gamma(0)$</span>), declaring <span class="math-container">$\gamma_1\sim\gamma_2$</span> iff <span class="math-container">$(\phi\,\circ\,\gamma_1)'(0)=(\phi\,\circ\,\gamma_2)'(0)$</span>. This is one particular version of a tangent space at <span class="math-container">$p$</span>. But we could also define it as the space of derivations, i.e. linear maps from <span class="math-container">$C^\infty(M)$</span> to <span class="math-container">$\mathbb{R}$</span> satisfying the Leibnitz rule <span class="math-container">$$D(fg)=D(f)g(p)+f(p)D(g)$$</span>
For any equivalence class of curves <span class="math-container">$[\gamma]$</span> at <span class="math-container">$p$</span>, the operator defined on <span class="math-container">$C^\infty(M)$</span> by
<span class="math-container">$$
D_{[\gamma]}(f)=(f\circ\gamma)'(0)
$$</span>
is a derivation; conversely, it is true that every derivation is such a directional derivative (proof: <a href="https://math.stackexchange.com/questions/1146901/equivalence-of-definitions-of-tangent-space">Equivalence of definitions of tangent space</a>).</p>
<p>Most of this a recap of a part of <a href="https://en.wikipedia.org/wiki/Tangent_space" rel="noreferrer">Wikipedia</a>. At any rate, both of these notions seem to give in some sense "the same" tangent spaces.</p>
<p>Here is my problem: I don't actually understand what precisely it is we are checking for when trying to decide if some two definitions are equivalent; right now, all I would personally try to do is show isomorphism of vector spaces and then try to convince myself that this isomorphism respects some vague notion of direction. But then <span class="math-container">$\mathbb{R}^{\mathrm{dim}(M)}$</span> is certainly isomorphic to any tangent space of the manifold <span class="math-container">$M$</span>, at least as a vector space. Nevertheless, just declaring <span class="math-container">$T_pM=\mathbb{R}^{\mathrm{dim}(M)}$</span> doesn't strike me as a successful construction of a tangent space.</p>
<p>Now, there are two levels to my question, ordered by "degree of abstraction", so to speak (presumably they also get harder to answer). I do, however, believe they are connected.</p>
<p>First, is there some precise notion of vector space isomorphisms respecting direction on a manifold? Specifically, is <span class="math-container">$\mathbb{R}^{\mathrm{dim}(M)}$</span> a valid tangent space or is it not, or do I perhaps have to specify some additional structure on it and then check that the additional structure relates to, say, the curve definition in a correct way? (I suppose this last case would require taking one definition of the tangent space as the absolute foundation and comparing all others to it, which I find somewhat unsatisfying.)</p>
<p>Second, is there perhaps an abstract, "external" definition of a tangent space? What I'm talking about could be something like, "Given a smooth manifold <span class="math-container">$M$</span>, a point <span class="math-container">$p\in M$</span> and a vector space <span class="math-container">$V$</span>, this vector space is called a <em>tangent space at <span class="math-container">$p$</span></em> if it satisfies some properties <span class="math-container">$X,Y,Z...$</span>" where these <span class="math-container">$X,Y,Z$</span> don't depend on the type of objects in <span class="math-container">$V$</span> or other particular details specific to <span class="math-container">$V$</span>.</p>
<p>The motivation behind asking this is related to the situation with ordered pairs of objects (yes, this is quite a leap): I can use the Kuratowski definition or infinitely many others, and in each case, I will be able to eventually convince myself that, indeed, this thing before me works just as well to encode "ordered-ness" of objects as any other. But I don't have to keep referring to one of these specific cases, I just need to describe how pairs should arise and behave in general: there is a two-place function <span class="math-container">$f$</span> that sends two objects <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to <span class="math-container">$(x,y)$</span> and there are two projections <span class="math-container">$\pi_1,\pi_2$</span> that pull <span class="math-container">$x$</span> and <span class="math-container">$y$</span> back out. (For a precise definition see <a href="https://www.logicmatters.net/resources/pdfs/GentleIntro.pdf#chapter.7" rel="noreferrer">this PDF</a>, I summarised the discussion from there. It goes on to define products also within category theory.) Furthermore, I would find it highly suspect if some theorem about ordered pairs referred to the particulars of the Kuratowski definition - all the relevant information about <span class="math-container">$(x,y)$</span> should be recoverable from just the abstract setup described above (or better yet, in the linked PDF). Is there some way of treating tangent spaces in this same spirit?</p>
<p>I know this question is vague, but I honestly don't know how better to phrase it, I hope I've at least gotten the mindset across if nothing else.</p>
| painday | 433,808 | <p>I think what OP wants is a definition using the concept of functors. Below is the repost of my answer <a href="https://mathoverflow.net/q/429683">https://mathoverflow.net/q/429683</a> to the question "<strong>An easy way to to explain the equivalence definitions of tangent spaces?</strong>"</p>
<hr />
<p><strong>In short:</strong> To give a definition of tangent spaces, one actually defines a functor from the category of (pointed) manifolds to the category of vector spaces that is a <strong>natural extension</strong> of
the tangent space of Euclidean spaces and satisfies the principle of localization (see below). It can be shown that <strong>the tangent space functor is uniquely determined by these two conditions</strong>, up to a natural isomorphism.</p>
<p>Hence, to check that a concrete definition of tangent space is reasonable, one only needs to show that this definition generalizes the concept of tangent spaces of Euclidean spaces and satisfies the principle of localization.
If you have two such definitions, then they are equivalent in the sense that there exists a natural isomorphism between the functors representing the two definitions.</p>
<p><strong>In detail:</strong> The category of pointed manifold <span class="math-container">$\mathcal M$</span> has objects in the form <span class="math-container">$(x,M),$</span> where <span class="math-container">$M$</span> is any smooth manifold and <span class="math-container">$x$</span> an element of <span class="math-container">$M.$</span>
A morphism from <span class="math-container">$(x,M)$</span> to <span class="math-container">$(y,N)$</span> is an ordered quadruple <span class="math-container">$(x,f,M,N),$</span> in which <span class="math-container">$f$</span> is a <strong>functional relation</strong> on <span class="math-container">$M\times N$</span> with <span class="math-container">$\mathrm{dom}(f)$</span> a (not necessarily open)neighborhood of <span class="math-container">$x,$</span> <span class="math-container">$f$</span> differentiable at <span class="math-container">$x$</span> and <span class="math-container">$f(x)=y.$</span></p>
<p>More exactly, <span class="math-container">$f$</span> is a subset of <span class="math-container">$M\times N$</span> such that if <span class="math-container">$(x,y),(x,y')\in f$</span> then <span class="math-container">$y=y'.$</span>
By definition, if <span class="math-container">$O$</span> is an open neighborhood of <span class="math-container">$x$</span> in <span class="math-container">$M$</span> such that <span class="math-container">$\mathrm{dom}(f)\subset O,$</span> and <span class="math-container">$O'$</span> is an open subset of <span class="math-container">$N$</span> with <span class="math-container">$\mathrm{Im}(f)\subset O',$</span> then <span class="math-container">$(x,f,O,O')$</span> is a morphism from <span class="math-container">$(x,O)$</span> to <span class="math-container">$(y,O').$</span></p>
<p>The composition of morphisms is given by
<span class="math-container">\begin{equation}
\boxed{(y,g,N,Q)\circ (x,f,M,N):=(x,g\circ f,M,Q)}
\end{equation}</span>
where <span class="math-container">$g\circ f:=\{(x_1,x_3)|\ \exists \ x_2\in N,\ s.t. \ (x_1,x_2)\in f,\ (x_2,x_3)\in g \}$</span> is the composition of relations.</p>
<p>If we consider Euclidean spaces only, then we get a full subcategory <span class="math-container">$\mathcal M(\mathbb E)$</span> of <span class="math-container">$\mathcal M.$</span> Now we define the <strong>canonical tangent space</strong> of <span class="math-container">$(x,\mathbb R^n)$</span> to be simply <span class="math-container">$\mathbb R^n,$</span> and the <strong>canonical tangent map</strong> <span class="math-container">$T_{\rm canonical}(x,f,\mathbb R^n,\mathbb R^m):=D_xf,$</span>
then we get a functor <span class="math-container">$T_{\rm canonical}:\mathcal M(\mathbb E)\to \mathsf{Vec}(\mathbb R).$</span>
<span class="math-container">\begin{equation}
\boxed{T_{\rm canonical}\big[(x,\mathbb R^n)\stackrel{(x,f,\mathbb R^n,\mathbb R^m)}{\longrightarrow} (y,\mathbb R^m)\big]:=\mathbb R^n \stackrel{D_xf}{\longrightarrow}\mathbb R^m}
\end{equation}</span></p>
<p>A tangent space functor defined on the full category <span class="math-container">$\mathcal M$</span> must generalize the concept of tangent spaces of Euclidean spaces, that is, it must be an extension of <span class="math-container">$T_{\rm canonical}.$</span> In fact this <strong>extension should be natural</strong>, that is to say,
the restriction of <span class="math-container">$T$</span> to <span class="math-container">$\mathcal M(\mathbb E)$</span> is nutural isomorphic to <span class="math-container">$T_{\rm canonical}.$</span> In other words,
for any object <span class="math-container">$(x,\mathbb R^n)$</span> there is an isomorphism <span class="math-container">$\alpha_{(x,\mathbb R^m)}:T(x,\mathbb R^n)\to T_{\rm canonical}(x,\mathbb R^n)=\mathbb R^n$</span> such that
if <span class="math-container">$(x,f,\mathbb R^n,\mathbb R^m):(x,\mathbb R^n)\to (y,\mathbb R^m)$</span> is a morphism, then the following diagram commutes:
<a href="https://i.stack.imgur.com/AcCIg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AcCIg.png" alt="enter image description here" /></a>
We say that a functor <span class="math-container">$T:\mathcal M\to\mathsf{Vec}(\mathbb R)$</span> gives a definition of tangent spaces, if it satisfies the followng two conditions:</p>
<ol>
<li><span class="math-container">$T$</span> is a natural extension of <span class="math-container">$T_{\rm canonical}.$</span></li>
<li>(<strong>Principle of localization</strong>)If <span class="math-container">$(x,f,M,N),(x,g,M,N)$</span> are morphisms and <span class="math-container">$f,g$</span> coincide on a neighborhood of <span class="math-container">$x$</span> then <span class="math-container">$T(x,f,M,N)=T(x,g,M,N).$</span></li>
</ol>
<p>Of course, if <span class="math-container">$T$</span> and <span class="math-container">$\widetilde T$</span> are two such functors, then their restriction to <span class="math-container">$\mathcal M(\mathbb E)$</span> are naturally isomorphic through the transformation <span class="math-container">$\beta:=\widetilde\alpha^{-1}\circ\alpha,$</span> since we have the commutative diagram
<a href="https://i.stack.imgur.com/5ckXm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ckXm.png" alt="enter image description here" /></a></p>
<p>To show the uniqueness of tangent space functor, we need to find a natural isomorphism between <span class="math-container">$T$</span> and <span class="math-container">$\widetilde T.$</span></p>
<p>For any object <span class="math-container">$(x,M)$</span> of <span class="math-container">$\mathcal M,$</span> we define a vector space isomorphism
<span class="math-container">$\gamma_{(x,M)}:T(x,M)\to \widetilde T(x,M)$</span> by
<span class="math-container">\begin{equation}\boxed{\gamma_{(x,M)}:=\widetilde T(\varphi(x),\varphi^{-1},\mathbb R^n,M)\circ \beta_{(\varphi(x),\mathbb R^n)}\circ T(x,\varphi,M,\mathbb R^n)}\end{equation}</span>
in which <span class="math-container">$(U,\varphi)$</span> is a coordinate chart for <span class="math-container">$x$</span> in <span class="math-container">$M.$</span> To show that <span class="math-container">$\gamma_{(x,M)}$</span> is well-defined, we have to verify that if
<span class="math-container">$(V,\psi)$</span> is another coordinate chart for <span class="math-container">$x$</span> in <span class="math-container">$M$</span> then
<span class="math-container">\begin{equation}\widetilde T(\varphi(x),\varphi^{-1},\mathbb R^n,M)\circ \beta_{(\varphi(x),\mathbb R^n)}\circ T(x,\varphi,M,\mathbb R^n)=\widetilde T(\psi(x),\psi^{-1},\mathbb R^n,M)\circ \beta_{(\psi(x),\mathbb R^n)}\circ T(x,\psi,M,\mathbb R^n)\end{equation}</span></p>
<p>In fact, principle of localization yields the commutative diagram
<a href="https://i.stack.imgur.com/XbIeX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XbIeX.png" alt="enter image description here" /></a></p>
<p>A similar digram commutes for the functor <span class="math-container">$\widetilde T.$</span> On the other hand, since <span class="math-container">$\psi\circ\varphi^{-1}$</span> is a functional relation on <span class="math-container">$\mathbb R^n\times\mathbb R^n,$</span> it follows that the following diagram commutes:
<a href="https://i.stack.imgur.com/W2qXq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2qXq.png" alt="enter image description here" /></a></p>
<p>Thus we have a combined commutative diagram</p>
<p><a href="https://i.stack.imgur.com/R7H18.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R7H18.png" alt="enter image description here" /></a></p>
<p>which is desired.</p>
<p>It remains to show that <span class="math-container">$\gamma$</span> is a natural isomorphism between <span class="math-container">$T$</span> and <span class="math-container">$\widetilde T.$</span>
Let <span class="math-container">$(x,f,M,N)$</span> be a morphism from <span class="math-container">$(x,M)$</span> to <span class="math-container">$(y,N)$</span> and let <span class="math-container">$(U,\varphi)$</span> and <span class="math-container">$(V,\psi)$</span> be a coordinate chart for <span class="math-container">$x$</span> and <span class="math-container">$y,$</span> respectively, then <span class="math-container">$(\varphi(x),\psi\circ f\circ\varphi,\mathbb R^n,\mathbb R^n)$</span> is a morphism from <span class="math-container">$(\varphi(x),\mathbb R^n)$</span> to <span class="math-container">$(\psi(y),\mathbb R^n),$</span> hence
<span class="math-container">\begin{equation}\label{key}\widetilde T(\varphi(x),\psi\circ f\circ\varphi^{-1},\mathbb R^n,\mathbb R^n)\circ\beta_{\varphi(x),\mathbb R^n}=\beta_{\psi(y),\mathbb R^n}\circ T(\varphi(x),\psi\circ f\circ\varphi^{-1},\mathbb R^n,\mathbb R^n)\end{equation}</span>
Plugging
<span class="math-container">\begin{equation}(\varphi(x),\psi\circ f\circ\varphi^{-1},\mathbb R^n,\mathbb R^n)=(y,\psi,N,\mathbb R^n)\circ (x,f,M,N)\circ(\varphi(x),\varphi^{-1},\mathbb R^n,M)\end{equation}</span>
into the former equation gives
<span class="math-container">\begin{equation}\widetilde T(x,f,M,N)\circ \widetilde T(\varphi(x),\varphi^{-1},\mathbb R^n,M)\circ \beta_{(\varphi(x),\mathbb R^n)}\circ T(x,\varphi,M,\mathbb R^n)=\widetilde T(\psi(y),\psi^{-1},\mathbb R^n,N)\circ\beta_{(\psi(y),\mathbb R^n)}\circ T(y,\psi,N,\mathbb R^n)\circ T(x,f,M,N)\end{equation}</span>
hence we have the commutative diagram
<a href="https://i.stack.imgur.com/TTgFA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TTgFA.png" alt="enter image description here" /></a>
as expected.</p>
<hr />
<p><strong>Remark:</strong>
The principle of localization is equivalent to the following statement: for any two open subsets <span class="math-container">$U,U'$</span> of any manifold <span class="math-container">$M,$</span> if <span class="math-container">$x\in U\cap U'$</span> then there is an isomorphism <span class="math-container">$\Delta^{(x,M)}_{U,U'}$</span>(abbreviated as <span class="math-container">$\Delta_{U,U'}$</span>) satisfying the cocycle condition <span class="math-container">$\Delta_{U',U''}\circ \Delta_{U,U'}=\Delta_{U,U''},\ \Delta_{U,U}=\mathrm{id}$</span> and
if <span class="math-container">$(x,f,M,N)$</span> is a morphism from <span class="math-container">$(x,M)$</span> to <span class="math-container">$(y,N),$</span> then the following diagram commutes:
<a href="https://i.stack.imgur.com/1Flgp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Flgp.png" alt="enter image description here" /></a>
Here <span class="math-container">$f|_{U}^V:=f\cap (U\times V)$</span> and it's easy to see that <span class="math-container">$(x,f|_U^V,U,V)$</span> is a morphism from <span class="math-container">$(x,U)$</span> to <span class="math-container">$(y,V).$</span></p>
<p>In fact, assume that the principle of localization hold, then defining
<span class="math-container">\begin{equation}\Delta_{U,U'}^{(x,M)}:=T(x,\mathrm{id}_{U\cap U'},U,U')\end{equation}</span>
will make the cocycle diagram commute.</p>
<p>Conversely,assume that the cocycle diagram commute, and assume <span class="math-container">$f$</span> coincide <span class="math-container">$g$</span> on a neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x,$</span> then we have the commutative diagram
<a href="https://i.stack.imgur.com/pC8qk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pC8qk.png" alt="enter image description here" /></a>
Since <span class="math-container">$\Delta_{U,M}\circ\Delta_{M,U}=\mathrm{id},\ \Delta_{N,N}=\mathrm{id},$</span> it follows that <span class="math-container">$T(x,f,M,N)=T(x,g,M,N),$</span> which establishes the equivalence of the two statements.</p>
|
611,788 | <p>I'm here to ask you guys if my logic is correct.
I have to calculate the limit of this:
$$\lim_{n\rightarrow\infty}\sqrt[n]{\sum_{k=1}^n (k^{999} + \frac{1}{\sqrt k})}$$
At first glance, I see it's the limit $$\lim_{n\rightarrow\infty}\sqrt[n]{1^{999} + \frac{1}{\sqrt 1} + 2^{999} + \frac{1}{\sqrt 2} + \dots + n^{999} + \frac {1}{\sqrt n} }$$
And here is my main doubt: can I assume that if the limit of a larger sequence goes to one, then my original sequence goes to one too? </p>
<p>I came up with something like this.</p>
<p>If $$\lim_{n\rightarrow\infty} \sqrt[n]{\sum_{k=1}^n k^{1000}}$$ goes to one (because, obviously, the expression under the original sum is much smaller than under the second sum), then the original limit goes to one too. </p>
<p>But I made it even simpler.
$$\sum_{k=1}^n k^{1000} < n^{1001}$$, so...
$$\lim_{n\rightarrow\infty} \sqrt[n]{n^{1001}}$$ This limit obviously goes to 1, and my final answer is that the original limit goes to one too.
I did some kind of bounding, but I'm not sure if I am allowed to do it. Any answers and tips would be greatly appreciated :-) Thanks.</p>
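<p>A rough numerical illustration of this bounding idea (a sketch, not a proof; the $\frac{1}{\sqrt k}$ part of the sum is dropped here, since it contributes at most about $2\sqrt n$ to a vastly larger sum — exact big-integer arithmetic keeps the huge powers manageable, and <code>math.log</code> accepts arbitrarily large Python integers):</p>

```python
import math

def log_nth_root(n):
    # log of the n-th root of sum_{k<=n} k^999 (the dominant part of the sum)
    s = sum(k**999 for k in range(1, n + 1))
    return math.log(s) / n

vals = [log_nth_root(n) for n in (100, 500, 2000)]
# the logs decrease toward 0, i.e. the n-th roots decrease toward 1
assert vals[0] > vals[1] > vals[2] > 0
```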
| hmakholm left over Monica | 14,366 | <p>Yes -- in fact the 3-sphere can be <em>completely</em> covered by <em>disjoint</em> great circles, the <a href="http://en.wikipedia.org/wiki/Hopf_fibration">Hopf fibration</a>.</p>
<p>(Terminology nitpick: The subset $\{ (x,y,z,w)\in\mathbb R^4\mid x^2+y^2+z^2+w^2=1\}$ is usually known as the <strong>three</strong>-sphere, counting the dimension of the surface itself rather than the space it happens to embed into).</p>
|
611,788 | <p>I'm here to ask you guys if my logic is correct.
I have to calculate the limit of this:
$$\lim_{n\rightarrow\infty}\sqrt[n]{\sum_{k=1}^n (k^{999} + \frac{1}{\sqrt k})}$$
At first glance, I see it's the limit $$\lim_{n\rightarrow\infty}\sqrt[n]{1^{999} + \frac{1}{\sqrt 1} + 2^{999} + \frac{1}{\sqrt 2} + \dots + n^{999} + \frac {1}{\sqrt n} }$$
And here is my main doubt: can I assume that if the limit of a larger sequence goes to one, then my original sequence goes to one too? </p>
<p>I came up with something like this.</p>
<p>If $$\lim_{n\rightarrow\infty} \sqrt[n]{\sum_{k=1}^n k^{1000}}$$ goes to one (because, obviously, the expression under the original sum is much smaller than under the second sum), then the original limit goes to one too. </p>
<p>But I made it even simpler.
$$\sum_{k=1}^n k^{1000} < n^{1001}$$, so...
$$\lim_{n\rightarrow\infty} \sqrt[n]{n^{1001}}$$ This limit obviously goes to 1, and my final answer is that the original limit goes to one too.
I did some kind of bounding, but I'm not sure if I am allowed to do it. Any answers and tips would be greatly appreciated :-) Thanks.</p>
| Jeremy Daniel | 115,164 | <p>Firstly, you should speak of the $3$-dimensional sphere, viewed in the $4$-dimensional euclidian space. </p>
<p>Secondly, the term equator in this situation should certainly refer to the intersection in $\mathbb{R}^4$ of the sphere $S^3$ and a hyperplane, that is a $2$-dimensional object.</p>
<p>For your question, consider the vectors $(\cos(\phi)\cos(\theta), \sin(\phi)\cos(\theta), \cos(\phi)\sin(\theta), \sin(\phi)\sin(\theta))$. Then, when you fix $\phi$, you get a circle with the coordinate $\theta$, and if I'm not mistaken the circles corresponding to $\phi_1$ and $\phi_2$ are disjoint is $\phi_1 \neq \phi_2$.</p>
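<p>The last claim can be tested numerically for sample values (a sketch; $\phi_1=0.3$ and $\phi_2=1.0$ are arbitrary choices): two unit vectors coincide only if their inner product equals $1$, and for these two circles the inner products stay bounded away from $1$.</p>

```python
import math

def circle_point(phi, theta):
    return (math.cos(phi) * math.cos(theta),
            math.sin(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi) * math.sin(theta))

thetas = [2 * math.pi * k / 200 for k in range(200)]
c1 = [circle_point(0.3, t) for t in thetas]
c2 = [circle_point(1.0, t) for t in thetas]

# every point lies on the unit 3-sphere
assert all(abs(sum(x * x for x in p) - 1) < 1e-12 for p in c1 + c2)

# the two circles never meet: their pairwise inner products stay below 1
top = max(sum(a * b for a, b in zip(p, q)) for p in c1 for q in c2)
assert top < 0.99
```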
|
2,361,336 | <p>Note: This is <strong>not a duplicate</strong> as I am asking for a proof, not a criteria, and this is a specific proof, not just any proof – <strong>please treat like any other question on a specific math problem.</strong> Please do not close. thanks!</p>
<p><a href="https://i.stack.imgur.com/5R2aE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5R2aE.png" alt="enter image description here"></a></p>
<p>I am having trouble proving the above as I don't know how to express the various cases/outcomes of <em>N</em> when 11 is added to <em>M</em>. Take for example N=<strong>9759</strong>, then M=9-7+5-9=-2</p>
<p>However, M+11 could give many different numbers, depending on where and what integers are added.</p>
<p>So for M to equal 11, for example,</p>
<p>(i) N=<strong>9757</strong>946 is one possibility</p>
<p>(ii) N=946<strong>9757</strong> is another possibility</p>
<p>Although they are essentially the same, mathematically they are different (I think?) because:</p>
<p>(i) M=<strong>9-7+5-7</strong>+9-4+6</p>
<p>(ii) M=9-4+6 <strong>-9+7-5+7</strong></p>
<p>so the (-1)^n coefficient changes for the digits 9, 7, 5, 7</p>
<hr>
<p>These proofs for divisibility by 3 may help:</p>
<p><a href="https://i.stack.imgur.com/yw1xT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yw1xT.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/bo0jy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bo0jy.png" alt="enter image description here"></a></p>
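<p>While a proof is what is being asked for, the divisibility-by-11 criterion itself is easy to confirm by brute force (a sketch; the direction of the alternating sum does not matter modulo 11, since reversing it only flips the overall sign):</p>

```python
def alt_sum(n):
    # alternating sum of decimal digits, taken from the leftmost digit
    return sum((-1)**i * int(d) for i, d in enumerate(str(n)))

# N is divisible by 11 exactly when its alternating digit sum is
assert all((n % 11 == 0) == (alt_sum(n) % 11 == 0) for n in range(1, 20000))

# the two arrangements from the question both give M = 11
assert alt_sum(9757946) == 11 and alt_sum(9469757) == 11
assert 9757946 % 11 == 0 and 9469757 % 11 == 0
```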
| Bernard | 202,857 | <p>We don't really need induction here (or, at a pinch, finite induction).</p>
<p>Multiplying by $(x-x_0)^{n-k}$, we deduce that $\dfrac{Q(x)}{(x-x_0)^k}\to0$ for all $k=0,1,\dots n$.</p>
<p>In particular, $Q(x_0)=0$, whence $a_0=0$.</p>
<p>Next $\dfrac{Q(x)}{x-x_0}=a_1+a_2(x-x_0)+\dots+a_n(x-x_0)^{n-1}\to 0$, so $a_1=0$.</p>
<p>The result follows looking successively at all $\dfrac{Q(x)}{(x-x_0)^k}$.</p>
|
4,248,766 | <p>How many functions <span class="math-container">$f: \{1,...,n_1\} \to \{1,...,n_2\}$</span> are there such that if <span class="math-container">$f(k)=f(l)$</span> for some <span class="math-container">$k,l \in \{1,...,n_1\}$</span>, then <span class="math-container">$k=l$</span>?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Assuming that <span class="math-container">$n_2\ge n_1$</span> we have <span class="math-container">$$n_1!\binom{n_2}{n_1}$$</span> functions.</p>
<p>Note that <span class="math-container">$f(1)$</span> has <span class="math-container">$n_2$</span> choices and <span class="math-container">$f(2)$</span> has <span class="math-container">$(n_2-1)$</span> choices and so forth.</p>
<p>Thus we have <span class="math-container">$$(n_2)(n_2-1)(n_2-2)...(n_2-n_1+1)=n_1!\binom{n_2}{n_1}$$</span></p>
<p>functions.</p>
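<p>A brute-force count over small cases (a sketch; only tiny <span class="math-container">$n_1, n_2$</span> are feasible this way) agrees with the formula:</p>

```python
from itertools import product
from math import comb, factorial

def count_injections(n1, n2):
    # count maps {1,...,n1} -> {1,...,n2} that hit no value twice
    return sum(
        len(set(f)) == n1
        for f in product(range(1, n2 + 1), repeat=n1)
    )

for n1, n2 in [(1, 3), (2, 4), (3, 3), (3, 5)]:
    assert count_injections(n1, n2) == factorial(n1) * comb(n2, n1)
```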
|
4,639,011 | <p>I'm interested in finding the asymptotics as <span class="math-container">$n\to\infty$</span> of
<span class="math-container">$$b_n:= \frac{e^{-n}}{(n-1)!}\int_0^\infty\prod_{k=1}^{n-1}(x+k)\,e^{-x}dx=e^{-n}\int_0^\infty\frac{e^{-x}}{x\,B(n;x)}dx$$</span>
Using consecutive applications of Laplace's method, I managed to get <a href="https://artofproblemsolving.com/community/c7h3012973_integral_inequality_concepts" rel="nofollow noreferrer">(here)</a>
<span class="math-container">$$b_n\sim(e-1)^{-n}$$</span>
but this approach is not rigorous, and I cannot find even the next asymptotic term, let alone a full asymptotic series.</p>
<p>So, my questions are:</p>
<ul>
<li>how we can handle the beta function in this (and similar) expressions as <span class="math-container">$n\to\infty$</span>;</li>
<li>whether we can obtain the asymptotics in a rigorous way?</li>
</ul>
| Gary | 83,800 | <p><strong>First approach.</strong> We have
<span class="math-container">\begin{align*}
b_n & = \frac{{{\rm e}^{ - n} }}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{{\Gamma (x + n)}}{{\Gamma (x + 1)}}{\rm e}^{ - x} {\rm d}x}
\\ & = \frac{{{\rm e}^{ - n} }}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{1}{{\Gamma (x + 1)}}{\rm e}^{ - x} \left( {\int_0^{ + \infty } {s^{x + n - 1} {\rm e}^{ - s} {\rm d}s} } \right)\!{\rm d}x}
\\ & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{1}{{\Gamma (x + 1)}}\left( {\int_0^{ + \infty } {t^{x + n - 1} {\rm e}^{ - {\rm e}t} {\rm d}t} } \right)\!{\rm d}x}
\\ & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - {\rm e}t} \left( {\int_0^{ + \infty } {\frac{{t^x }}{{\Gamma (x + 1)}}{\rm d}x} } \right)\!{\rm d}t} .
\end{align*}</span>
Employing <a href="https://math.stackexchange.com/q/941764">Ramanujan's formula</a>
<span class="math-container">$$
\int_0^{ + \infty } {\frac{{t^x }}{{\Gamma (1 + x)}}{\rm d}x} = {\rm e}^t - \int_{ - \infty }^{ + \infty } {\frac{{{\rm e}^{ - t{\rm e}^y } }}{{y^2 + \pi ^2 }}{\rm d}y} ,
$$</span>
yields the exact expression
<span class="math-container">\begin{align*}
b_n & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - ({\rm e} - 1)t} {\rm d}t} - \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - {\rm e}t} \int_{ - \infty }^{ + \infty } {\frac{{{\rm e}^{ - t{\rm e}^y } }}{{y^2 + \pi ^2 }}{\rm d}y}\, {\rm d}t}
\\ & = \frac{1}{{({\rm e} - 1)^n }} - \int_{ - \infty }^{ + \infty } {\frac{1}{{({\rm e} + {\rm e}^y )^n }}\frac{1}{{y^2 + \pi ^2 }}{\rm d}y} .
\end{align*}</span>
Since
<span class="math-container">$$
\int_{ - \infty }^{ + \infty } {\frac{1}{{({\rm e} + {\rm e}^y )^n }}\frac{1}{{y^2 + \pi ^2 }}{\rm d}y} \le \frac{1}{{{\rm e}^n }}\int_{ - \infty }^{ + \infty } {\frac{{{\rm d}y}}{{y^2 + \pi ^2 }}} = \frac{1}{{{\rm e}^n }},
$$</span>
we indeed have
<span class="math-container">$$
b_n \sim \frac{1}{{({\rm e} - 1)^n }}
$$</span>
as <span class="math-container">$n\to +\infty$</span>.</p>
<p><strong>Second approach.</strong> Changing the order of summation and integration yields
<span class="math-container">$$
\sum\limits_{n = 1}^\infty {b_n z^n } = z\int_0^{ + \infty } {\frac{{{\rm d}x}}{{({\rm e} - z)^{x + 1} }}} = \frac{z}{{({\rm e} - z)\log ({\rm e} - z)}}
$$</span>
for sufficiently small <span class="math-container">$z$</span>. Now note that
<span class="math-container">$$
\frac{z}{{({\rm e} - z)\log ({\rm e} - z)}} = \frac{{\rm e} - 1}{{({\rm e} - 1) - z}} + H(z)
$$</span>
where <span class="math-container">$H(z)$</span> is holomorphic in the disc <span class="math-container">$|z|<\mathrm{e}$</span>. The first term may be expanded as
<span class="math-container">$$
\frac{{\rm e} - 1}{{({\rm e} - 1) - z}} = \sum\limits_{n = 0}^\infty {\frac{1}{{({\rm e} - 1)^{n} }}z^n } .
$$</span>
On the other hand, by the Cauchy–Hadamard theorem, the <span class="math-container">$n$</span>th Maclaurin series coefficient of <span class="math-container">$H(z)$</span> is <span class="math-container">$\mathcal{O}((\mathrm{e}-\varepsilon)^{-n})$</span> as <span class="math-container">$n\to+\infty$</span>, for any <span class="math-container">$\varepsilon>0$</span>. Thus
<span class="math-container">$$
b_n \sim \frac{1}{{({\rm e} - 1)^n }}
$$</span>
as <span class="math-container">$n\to +\infty$</span>.</p>
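<p>As a numerical sanity check, here is a short Python sketch (my own addition, using the exact expression for $b_n$ derived in the first approach and plain Simpson quadrature) confirming that the relative error of $b_n \approx (\mathrm{e}-1)^{-n}$ decays geometrically:</p>

```python
import math

e = math.e

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def correction(n):
    # int_{-oo}^{oo} dy / ((e + e^y)^n (y^2 + pi^2)), truncated to [-200, 200]
    def g(y):
        # work in log space so (e + e^y)^n never overflows for large y
        log_base = y + math.log1p(math.exp(1 - y)) if y > 1 else math.log(e + math.exp(y))
        t = -n * log_base - math.log(y * y + math.pi ** 2)
        return math.exp(t) if t > -700 else 0.0
    return simpson(g, -200.0, 200.0, 40000)

ratios = {}
for n in (5, 10, 15):
    b_n = (e - 1) ** (-n) - correction(n)
    ratios[n] = b_n * (e - 1) ** n
    # the bound above gives 0 < 1 - ratio <= ((e-1)/e)^n, so the ratio tends to 1
    assert 0 < 1 - ratios[n] <= ((e - 1) / e) ** n
```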
|
2,589,837 | <p>Let $\Omega \subset \mathbb R^d$ be a smooth, bounded and connected domain. Let $A\in \mathbb R^{d\times d}$ be symmetric and uniformly elliptic, i.e. there is $C>0$ such that $$C^{-1}\|x\|^2\leq Ax\cdot x\leq C\|x\|^2.$$</p>
<p>How can I prove that $$a(u,v)=\int_\Omega A \nabla u\cdot \nabla v,$$
is continuous? I know that $$|a(u,v)|\leq \int_\Omega \|A\nabla u\|\|\nabla v\|.$$
I suppose that $|A\nabla u|\leq C|\nabla u|$, but I can't prove it since $|A\nabla u|$ is not $A\nabla u\cdot \nabla u$. I also tried $$|A\nabla u|^2=A^2\nabla u\cdot \nabla u,$$
but is $A^2$ also uniformly elliptic? If yes, how can I prove it? If not, how can I conclude?</p>
| gerw | 58,577 | <p>If $A$ is a symmetric matrix, then $Ax \cdot x \le C \|x\|^2$ implies that all eigenvalues of $A$ are bounded from above by $C$. Hence, $\|A x\| \le C \|x\|$.</p>
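<p>To see the bound concretely, here is a small Python check (my own illustration; the matrix and the grid of unit vectors below are arbitrary sample choices):</p>

```python
import math

# a sample symmetric 2x2 matrix [[a, b], [b, d]]
a, b, d = 2.0, 1.0, 3.0
# its largest eigenvalue; for a symmetric matrix this is the best constant C
C = ((a + d) + math.sqrt((a - d) ** 2 + 4 * b * b)) / 2

for k in range(360):
    t = 2 * math.pi * k / 360
    x = (math.cos(t), math.sin(t))                 # unit vector
    Ax = (a * x[0] + b * x[1], b * x[0] + d * x[1])
    quad = Ax[0] * x[0] + Ax[1] * x[1]             # A x . x
    assert quad <= C + 1e-12                       # A x . x <= C ||x||^2
    assert math.hypot(Ax[0], Ax[1]) <= C + 1e-12   # and indeed ||A x|| <= C ||x||
```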
|
2,589,837 | <p>Let $\Omega \subset \mathbb R^d$ be a smooth, bounded and connected domain. Let $A\in \mathbb R^{d\times d}$ be symmetric and uniformly elliptic, i.e. there is $C>0$ such that $$C^{-1}\|x\|^2\leq Ax\cdot x\leq C\|x\|^2.$$</p>
<p>How can I prove that $$a(u,v)=\int_\Omega A \nabla u\cdot \nabla v,$$
is continuous? I know that $$|a(u,v)|\leq \int_\Omega \|A\nabla u\|\|\nabla v\|.$$
I suppose that $|A\nabla u|\leq C|\nabla u|$, but I can't prove it since $|A\nabla u|$ is not $A\nabla u\cdot \nabla u$. I also tried $$|A\nabla u|^2=A^2\nabla u\cdot \nabla u,$$
but is $A^2$ also uniformly elliptic? If yes, how can I prove it? If not, how can I conclude?</p>
| Community | -1 | <p>Note that for a symmetric matrix A we have </p>
<p>$$ \|A\| =\sup_{\|x\|_2=1}\{\|Ax\|_2\}= \sup_{\|x\|_2=1}\{|\langle Ax,x\rangle_2|\}$$
Therefore, since for all $x\in \Bbb R^d$
$$|Ax\cdot x| =\left|\sum_{i,j=1}^{d}A_{ij}x_ix_j \right|\le C\|x\|_2^2$$</p>
<p>we automatically get, for all $x\in \Bbb R^d$,</p>
<p>$$\|Ax\|_2\le C\|x\|_2$$</p>
<p>Now replacing $x_i= \partial_iu $ and making use of by Cauchy Schwartz inequality we have </p>
<p>$$|A\nabla u\cdot \nabla v| \le\|A\nabla u\|_2\| \nabla v\|_2\le C \|\nabla u\|_2\| \nabla v\|_2$$</p>
<p>Integrating both sides and applying Cauchy–Schwarz again we obtain:
$$\int_\Omega|A\nabla u\cdot \nabla v| \le C\int_\Omega \|\nabla u\|_2\| \nabla v\|_2\le C\|\nabla u\|_{L^2(\Omega)}\| \nabla v\|_{L^2(\Omega)}$$</p>
|
119,876 | <pre><code>Module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>gives</p>
<pre><code>{x$17312, x$17312_, x_ -> x, x_ :> x}
f[x_]=x
p[x_]:=x
</code></pre>
<p>but I'd like to get</p>
<pre><code>{x$17312, x$17312_, x$17312_ -> x$17312, x$17312_ :> x$17312}
f[x$17312_]=x$17312
p[x$17312_]:=x$17312
</code></pre>
<p>I thought <code>Module[{x}, body_]</code> operates something like the following, which would do what I want:</p>
<pre><code>module[{x_Symbol}, body_] := ReleaseHold[Hold@body /. x -> Unique@x];
SetAttributes[module, HoldAll];
module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>I guess there are some cases with nested scoping constructs that need to be considered for special treatment, but why can't it do the replacement in <code>Set, SetDelayed, Rule, RuleDelayed</code>?</p>
<hr>
<p>Motivation</p>
<p>I want to use<code>f@x_ = Integrate[y^2, {y, 0, x}]</code> instead of <code>f@x_ := Evaluate@Integrate[y^2, {y, 0, x}]</code> and to be safe I want to scope the variable/pattern label <code>x</code> to something unique.</p>
<p>See also <a href="https://mathematica.stackexchange.com/questions/119878/why-does-syntax-highlighting-in-set-and-rule-not-color-pattern-names-on-the">Why does syntax highlighting in `Set` and `Rule` not color pattern names on the RHS?</a></p>
| masterxilo | 6,804 | <p>I think a third kind of behaviour would also be possible: one could argue that <code>f@x_ = x</code> and <code>x_ -> x</code> within <code>Module[{x}, ...]</code> should become <code>f@x_ = x$123</code> and <code>x_ -> x$123</code>, because then the sequence</p>
<pre><code>ClearAll[Global`x];
x = 1;
0 /. x_ -> x
</code></pre>
<p>would do the same whether executed in the global context or within a <code>Module[{x}, ...]</code>. It currently does not: <em>With a fresh kernel</em> (or after clearing <code>x</code>), the program gives 1 in the global context and <code>0</code> within a <code>Module[{x}, ...]</code>:</p>
<pre><code>ClearAll[Global`x];
Module[{x}, x = 1; 0 /. x_ -> x]
</code></pre>
<p>Use <code>Hold, Unique, ReleaseHold</code> as you demonstrated if you want some specific renaming to happen within pattern labels of <code>Set, SetDelayed, Rule, RuleDelayed</code> - <code>Module</code> will not descend into these by design.</p>
|
2,971,143 | <p>Let me choose <span class="math-container">$n=1$</span> for my induction basis: <span class="math-container">$2 > 1$</span>, true.</p>
<p>Induction Step : <span class="math-container">$2^n > n^2 \rightarrow 2^{n+1} > (n+1)^2 $</span></p>
<p><span class="math-container">$2^{n+1} > (n+1)^2 \iff$</span></p>
<p><span class="math-container">$2\cdot 2^n > n^2 + 2n + 1 \iff$</span></p>
<p><span class="math-container">$0 > n^2 + 1 + 2n - 2\cdot 2^n \iff$</span></p>
<p><span class="math-container">$0 > n^2 -2^n + 1 + 2n - 2^n \iff$</span> IH: <span class="math-container">$0 > n^2 - 2^n$</span></p>
<p><span class="math-container">$0 > 1 + 2n - 2^n > n^2 - 2^n + 1 + 2n - 2^n \iff$</span></p>
<p><span class="math-container">$2^n > 1 + 2n > n^2$</span>, which can be proved with induction for <span class="math-container">$n \geq 3$</span></p>
<p><span class="math-container">$2^n > n^2$</span>, true by assumption</p>
<p>I have shown that, starting from the induction basis, I can conclude the general statement. But as I said in the headline, the inequality does not hold for <span class="math-container">$n=2$</span>, so something must be wrong in the proof. </p>
| Mohammad Riazi-Kermani | 514,496 | <p>Obviously your inequality is not true for <span class="math-container">$n=4$</span> </p>
<p>Thus you better start at <span class="math-container">$n\ge 5 $</span> which is proved the same way. </p>
|
2,971,143 | <p>Let me choose <span class="math-container">$n=1$</span> for my induction basis: <span class="math-container">$2 > 1$</span>, true.</p>
<p>Induction Step : <span class="math-container">$2^n > n^2 \rightarrow 2^{n+1} > (n+1)^2 $</span></p>
<p><span class="math-container">$2^{n+1} > (n+1)^2 \iff$</span></p>
<p><span class="math-container">$2\cdot 2^n > n^2 + 2n + 1 \iff$</span></p>
<p><span class="math-container">$0 > n^2 + 1 + 2n - 2\cdot 2^n \iff$</span></p>
<p><span class="math-container">$0 > n^2 -2^n + 1 + 2n - 2^n \iff$</span> IH: <span class="math-container">$0 > n^2 - 2^n$</span></p>
<p><span class="math-container">$0 > 1 + 2n - 2^n > n^2 - 2^n + 1 + 2n - 2^n \iff$</span></p>
<p><span class="math-container">$2^n > 1 + 2n > n^2$</span>, which can be proved with induction for <span class="math-container">$n \geq 3$</span></p>
<p><span class="math-container">$2^n > n^2$</span>, true by assumption</p>
<p>I have shown that, starting from the induction basis, I can conclude the general statement. But as I said in the headline, the inequality does not hold for <span class="math-container">$n=2$</span>, so something must be wrong in the proof. </p>
| Kolja | 386,635 | <p>You chose <span class="math-container">$n=1$</span> as induction base, but the induction step works only for <span class="math-container">$n\geq 3$</span>, i.e. you showed that <span class="math-container">$2^n > n^2$</span> implies <span class="math-container">$2^{n+1}>(n+1)^2$</span> only when <span class="math-container">$n\geq3$</span>. That's where the problem is. Then you might try to use <span class="math-container">$n=3$</span> as the base case, but unfortunately for <span class="math-container">$n=3$</span> the statement is not true. That's why you will have to use <span class="math-container">$n=5$</span> as the base case, and the induction proof will be correct.</p>
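<p>A quick brute-force check in Python (my own addition) shows exactly where the inequality fails and that the extra ingredient <span class="math-container">$2^n > 2n+1$</span> needed in the induction step holds from <span class="math-container">$n=3$</span> on:</p>

```python
# where does 2^n > n^2 fail among small n?
fails = [n for n in range(1, 100) if not 2 ** n > n ** 2]
print(fails)  # -> [2, 3, 4]: n = 1 works, and n = 5, 6, ... all work

# the fact the induction step really needs, valid from n = 3 onwards
assert all(2 ** n > 2 * n + 1 for n in range(3, 100))
```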
|
2,179,317 | <p>We know that, if $\mathcal D$ is a domain containing the origin $(0,0,0)$, then</p>
<p>$$\int_{\mathcal D} \delta(\vec r) d \vec r= \int_{\mathcal D} \delta(x) \delta(y) \delta(z) dx dy dz=1$$</p>
<p>However, <a href="http://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">we also know that</a> the delta distribution can be expressed in spherical coordinates as</p>
<p>$$\delta(r,\theta,\phi)=\frac{\delta(r)}{2 \pi r^2}$$</p>
<p>If we take $\mathcal D = \mathbb R^3$, we would then have</p>
<p>$$\int_{\mathbb R^3} \delta(\vec r) d\vec r =\int_0^\infty 4 \pi r^2 \frac{\delta(r)}{2 \pi r^2} dr= \int_0^\infty 2 \delta(r) dr = 1$$</p>
<p>That is to say,</p>
<p>$$ \int_0^\infty \delta(r) dr = \frac 1 2$$</p>
<p>Now, this seems very odd, but maybe it can have some sense. Indeed, we know that for every $\epsilon >0$</p>
<p>$$\int_{-\epsilon}^{\epsilon} \delta(x) dx = 1$$</p>
<p>So, since $\delta(x)$ is even, it is possible that we can say (maybe in a not very rigorous way...) that</p>
<p>$$\int_{0}^{\epsilon} \delta(x) dx = \int_{-\epsilon}^{0} \delta(x) dx = \frac 1 2$$</p>
<p>Does this make sense? If so, can we make it rigorous, i.e. show that every sequence of functions converging to $\delta$ has this property? </p>
<p>And if not, can we nevertheless give some sense to</p>
<p>$$ \int_0^\infty \delta(r) dr = \frac 1 2 \ \ ?$$</p>
<p><strong>Update</strong></p>
<p><a href="http://www.fen.bilkent.edu.tr/~ercelebi/mp03.pdf" rel="nofollow noreferrer">Here</a>, I found another formula for the delta distribution in spherical coordinates, that is to say:</p>
<p>$$\delta(\vec r ) = \frac{\delta (r)}{4 \pi r^2}$$</p>
<p>This seems to make much more sense, because we would have </p>
<p>$$\int_0^\infty \delta(r) dr = 1$$</p>
<p>However, there are two issues at this point:</p>
<ol>
<li>Which one is the correct form for the delta in spherical coordinates?</li>
<li>Is the integral $\int_0^\infty \delta(r) dr$ well defined? (See also Ruslan's answer).</li>
</ol>
| Ruslan | 64,206 | <p>First, your integral identity doesn't make any sense. An example of an asymmetric nascent delta function, which will have another result is</p>
<p>$$\delta_s(x)=\frac1{s\sqrt\pi}\exp\left(-\frac{(x-s)^2}{s^2}\right).$$</p>
<p>For it we'll have</p>
<p>$$\int\limits_0^\infty \delta_s(x)\,dx=\frac12+\frac{\operatorname{erf}(1)}2.$$</p>
<p>Now to your transformation of variables. It'd be better to avoid using ill-defined integrals when working with the $\delta$ distribution. To get well-defined integrals we can use spherical coordinates with other ranges of variables. For example, we can take $r\in(-\infty,\infty)$, $\phi\in[0,\pi)$, $\theta\in[0,\pi)$. Then your final integral will be</p>
<p>$$\int\limits_{\mathbb R^3} \delta(\vec r) d\vec r =
\int\limits_{-\infty}^\infty 2 \pi r^2 \frac{\delta(r)}{2 \pi r^2} dr= \int\limits_{-\infty}^\infty \delta(r) dr = 1.$$</p>
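<p>To make the contrast concrete, here is a small Python check (my own addition, using Simpson quadrature) comparing the asymmetric nascent delta above with a symmetric Gaussian one centred at $0$:</p>

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def delta_asym(x, s):
    # the asymmetric nascent delta from above, centred at x = s
    return math.exp(-(((x - s) / s) ** 2)) / (s * math.sqrt(math.pi))

def delta_sym(x, s):
    # a symmetric Gaussian nascent delta, centred at 0
    return math.exp(-((x / s) ** 2)) / (s * math.sqrt(math.pi))

half_plus = 0.5 + math.erf(1.0) / 2   # ~0.9214, independent of s
for s in (1.0, 0.1, 0.01):
    asym = simpson(lambda x: delta_asym(x, s), 0.0, 20.0 * s, 4000)
    sym = simpson(lambda x: delta_sym(x, s), 0.0, 20.0 * s, 4000)
    assert abs(asym - half_plus) < 1e-6   # not 1/2
    assert abs(sym - 0.5) < 1e-6          # the symmetric family does give 1/2
```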
|
4,220,972 | <p>I'm studying for the GRE and a practice test problem is, "For all real numbers x and y, if x#y=x(x-y), then x#(x#y) = ?"</p>
<p>I do not know what the # sign means. This is apparently an algebra function but I cannot find any such in several searches. I'm an older student and haven't had basic algebra in over 45 years and this was certainly not in my recent linear algebra class.</p>
| zd_ | 381,421 | <p>By Chebyshev's inequality,
<span class="math-container">$$
\mathbb{P}(|X| > k) \leq \frac{\mathbb{E}(|X|^p)}{k^p},
$$</span>
for any <span class="math-container">$p>0$</span>.</p>
<p>Since we have <span class="math-container">$\mathbb{E}(X^2)$</span>,
<span class="math-container">$$
\mathbb{P}(|X| > k) \leq \frac{\mathbb{E}(|X|^2)}{k^2},
$$</span>
and since <span class="math-container">$\mathbb{P}(-1<X<1) = 1 - \mathbb{P}(|X| > 1)$</span>,
substituting <span class="math-container">$k=1$</span> yields the desired lower bound.</p>
|
1,179,497 | <p>Let $(F,+,\cdot)$ be a field. </p>
<p>Then to prove that $(F,+)$ and $(F-\{0\},\cdot)$ are not isomorphic as groups.</p>
<p>I am facing difficulty in finding the map to bring a contradiction!!</p>
| quid | 85,306 | <p>Show: </p>
<ul>
<li><p>if the additive group contains an element of order $2$ then the multiplicative does not. </p></li>
<li><p>if the additive group contains no element of order $2$ then the multiplicative does. </p></li>
</ul>
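<p>For prime fields this can be checked by brute force; a small Python sketch (my own illustration for $\mathbb{Z}_p$) confirms that exactly one of the two groups contains an element of order $2$:</p>

```python
def additive_has_order2(p):
    # is there a nonzero x with x + x = 0 in (Z_p, +)?
    return any(x != 0 and (x + x) % p == 0 for x in range(p))

def multiplicative_has_order2(p):
    # is there an x != 1 with x * x = 1 in (Z_p \ {0}, *)?
    return any(x != 1 and (x * x) % p == 1 for x in range(1, p))

for p in (2, 3, 5, 7, 11, 13):
    # exactly one of the two groups has an element of order 2,
    # so they cannot be isomorphic
    assert additive_has_order2(p) != multiplicative_has_order2(p)
```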
|
2,111,402 | <p>Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove"</p>
<p>"Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd"</p>
<p>So my approach was:</p>
<p>Suppose instead, IF $n^2$ is odd THEN $n$ is even</p>
<p>Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even).</p>
<p>$n = 2k+1$ where $k$ is an integer. (definition of odd)</p>
<p>$n^2 = (2k+1)^2$</p>
<p>$n^2 = 4k^2 + 4k + 1$</p>
<p>$n^2 = 2(2k^2 + 2k) + 1$</p>
<p>$n^2 = 2q + 1$ where $q = 2k^2 + 2k$</p>
<p>therefore $n^2$ is odd by definition of odd.</p>
<p>Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true.</p>
<p>Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.</p>
| jupiterd | 409,586 | <p>Your proof looks correct to me, but I would like to share with you my strategy for proof by contradiction.</p>
<p>Consider the if/then statement $p\Rightarrow q$. In your case, $p$ represents "$n^{2}$ is odd" and $q$ represents "$n$ is odd". To achieve the proof by contradiction, we want to show that when $p$ is true, then it is impossible for $\neg q$, that is to say $n$ is even, to also be true. So we will assume that $p$ <strong>and</strong> $\neg q$ are both true (i.e., at the same time).</p>
<p>Applying this to your particular problem, we have $n^{2}$ is odd and $n$ is even. By definition, $n=2k$ for some integer $k$. Then $$n^{2}=(2k)^{2}=4k^{2}=2\cdot(2k^{2}).$$ Since the integers are multiplicatively closed, we have that $n^{2}$ is $2$ times an integer. Hence $n^{2}$ is even, which contradicts the assumption that $n^{2}$ is odd. </p>
<p>This tells us two things:</p>
<ol>
<li>The logical statement "$n^{2}$ is odd and $n$ is even" is a false statement.</li>
<li>The logical statement "$n$ is even implies $n^{2}$ is even" is a true statement. This is the contrapositive ($\neg q\Rightarrow\neg p$) of the original statement, which always has the same truth value as the original statement.</li>
</ol>
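<p>Both directions can also be sanity-checked by brute force; a tiny Python sketch (my own addition):</p>

```python
for n in range(-100, 101):
    if n % 2 == 0:
        assert (n * n) % 2 == 0   # n even  =>  n^2 even  (the contrapositive)
    if (n * n) % 2 == 1:
        assert n % 2 == 1         # n^2 odd =>  n odd     (the original claim)
```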
|
1,982,102 | <p>If I wanted to figure out for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract. Or is the rule I always have to add $1$ when I subtract?</p>
| dxiv | 291,201 | <p>It depends on whether the range includes or not its endpoints. For example, let $n \in \mathbb{N}$. Then:</p>
<ul>
<li>$45 \lt n \lt 55\quad$ has $\;55-45-1=9$ solutions;</li>
<li>$45 \le n \lt 55\;$ and $\;45 \lt n \le 55\quad$ both have $\;55-45=10$ solutions;</li>
<li>$45 \le n \le 55\quad$ has $\;55-45+1=11$ solutions.</li>
</ul>
<p><hr>
[ EDIT ] In the original post, <code>the first question I do is question 45</code> and <code>the last question I do is question 55</code> imply that the range is inclusive of both endpoints $45,55$ which falls in the latter case among the above, so the correct answer is $55-45+1=11$.</p>
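<p>The four cases can each be verified in one line with a quick Python check (my own illustration, using Python's half-open <code>range</code>):</p>

```python
ns = range(1, 1000)
assert len([n for n in ns if 45 <  n <  55]) == 9
assert len([n for n in ns if 45 <= n <  55]) == 10
assert len([n for n in ns if 45 <  n <= 55]) == 10
assert len([n for n in ns if 45 <= n <= 55]) == 11
# equivalently: range(45, 56) includes 45 and excludes 56
assert len(range(45, 56)) == 11
```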
|
1,982,102 | <p>If I wanted to figure out for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract. Or is the rule I always have to add $1$ when I subtract?</p>
| Deusovi | 256,930 | <p>If you want to use pure subtraction, here's how you'd do it:</p>
<p>At the beginning, you were at the start of problem <strong>45</strong>.</p>
<p>At the end, you were at the end of problem 55, <em>which is the start of problem <strong>56</strong></em>.</p>
<p>And $56-45=11$. You finished $11$ problems.</p>
<hr>
<p>The reason for the offset is that you're measuring from a different place in each problem - the <em>start</em> of 45, but the <em>end</em> of 55.</p>
|
2,781,827 | <p>I need to find a symmetric matrix of real values (not the zero matrix) of any order that is orthogonal to any diagonal matrix of real values.
Any hints?</p>
| max_zorn | 506,961 | <p>Put zeros on the diagonal and ones on the off-diagonal. </p>
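<p>Assuming "orthogonal" means the Frobenius (trace) inner product $\langle X, Y\rangle = \operatorname{tr}(X^T Y)$, the hint can be verified with a short Python sketch (the size and diagonal entries below are arbitrary examples):</p>

```python
def frobenius_inner(X, Y):
    # trace inner product tr(X^T Y), entrywise for square matrices
    n = len(X)
    return sum(X[i][j] * Y[i][j] for i in range(n) for j in range(n))

n = 4
# symmetric, nonzero, with zeros on the diagonal and ones off the diagonal
S = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

for diag in ([1, 2, 3, 4], [-5.5, 0.0, 2.5, 7.0]):
    D = [[diag[i] if i == j else 0 for j in range(n)] for i in range(n)]
    # <S, D> only sees the diagonal of S, which is identically zero
    assert frobenius_inner(S, D) == 0
```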
|
4,524,554 | <p>Suppose we have an <span class="math-container">$n \times n$</span> matrix, <span class="math-container">$A$</span>. This matrix represents a linear function from <span class="math-container">$\Bbb R^n$</span> to <span class="math-container">$\Bbb R^n$</span>. Let's say we found a subspace spanned by the vectors <span class="math-container">$u_1, u_2, \dots u_k$</span> where <span class="math-container">$k<n$</span> such that <span class="math-container">$A$</span> maps any vector from the span of <span class="math-container">$u_1, u_2, \dots u_k$</span> back to this same span. Now, we want to define a linear transformation that does exactly what <span class="math-container">$A$</span> does but is defined from the span of <span class="math-container">$u_1, u_2, \dots u_k$</span> to the span of <span class="math-container">$u_1, u_2, \dots u_k$</span>.</p>
<p>I read in a proof I'm trying to follow that the standard way to do this is to define a transformation matrix, <span class="math-container">$A'$</span> that is <span class="math-container">$k \times k$</span> and its columns are the vectors <span class="math-container">$A u_j \forall j \in 1 \dots k$</span>, expressed in the coordinate system of <span class="math-container">$u_1, u_2 \dots u_k$</span>. This leads to the conclusion that:</p>
<p><span class="math-container">$$A' = U^T A U$$</span></p>
<p>Is this statement correct? And if so, any intuition for why its true and how do I prove it?</p>
| Anne Bauval | 386,889 | <p>No, <span class="math-container">$f$</span> is not necessarily of this form. It is given by: for a fixed nonempty set <span class="math-container">$A$</span> of primes,</p>
<p><span class="math-container">$f(n)=1$</span> if some <span class="math-container">$p\in A$</span> divides <span class="math-container">$n$</span>, and <span class="math-container">$0$</span> otherwise.</p>
<p>This is because <span class="math-container">$f(xy)=1$</span> iff <span class="math-container">$f(x)=1$</span> or <span class="math-container">$f(y)=1$</span>, hence <span class="math-container">$f$</span> is completely determined by its value on primes.</p>
|
2,377,816 | <p>I was solving problems based on Bayes theorem from the book "A First Course in Probability by Sheldon Ross". The problem reads as follows:</p>
<blockquote>
<p>An insurance company believes that there are two types of people: accident prone and not accident prone. Company statistics state that an accident-prone person has an accident in any given year with probability $0.4$, whereas the probability is $0.2$ for a not-accident-prone person. If we assume $30\%$ of the population is accident prone, what is the conditional probability that a new policyholder will have an accident in his or her second year of policy ownership, given that the policyholder has had an accident in the first year?</p>
</blockquote>
<p>The solution given is as follows:</p>
<blockquote>
<p><strong>Book Solution</strong><br>
$$
\begin{align}
P(A)=0.3 & & (given)\\
\therefore P(A^c)=1-P(A)=0.7 & & \\
P(A_1|A)=P(A_2|AA_1)=0.4 & &(given)\\
P(A_1|A^c)=P(A_2|A^cA_1)=0.2 & & (given)
\end{align}
$$
$$
P(A_1)=P(A_1|A)P(A)+P(A_1|A^c)P(A^c)
=(.4)(.3)+(.2)(.7)=.26 \\
P(A|A_1)=\frac{(.4)(.3)}{.26}=\frac{6}{13} \\
P(A^c|A_1)=1-P(A|A_1)=\frac{7}{13}
$$
$$
\begin{align}
P(A_2|A_1)& =P(A_2|AA_1)P(A|A_1)+P(A_2|A^cA_1)P(A^c|A_1) &&...(I)\\
&=(.4)\frac{6}{13}+(.2)\frac{7}{13}\approx .29\\
\end{align}
$$</p>
</blockquote>
<p>I dont understand the statement $(I)$. </p>
<blockquote>
<p><strong>My Solution</strong><br>
Shouldnt it be like this:
$$P(A_2|A_1)=P(A_2|AA_1)P(AA_1)+P(A_2|A^cA_1)P(A^cA_1)$$
Continuing further:<br>
$$
\begin{align}
P(A_2|A_1)&=P(A_2|AA_1)P(A_1|A)P(A)+P(A_2|A^cA_1)P(A_1|A^c)P(A^c)\\
&=(.4)(.4)(.3)+(.2)(.2)(.7)=0.076
\end{align}
$$</p>
</blockquote>
<p>Am I wrong? If yes, where did I go wrong?</p>
<p><strong>Added Later</strong> </p>
<p>After going through the comments and thinking more, it seems that I am struggling to apply the law of total probability (and my above solution is quite wrong). The basic form of the law of total probability that I have come across so far is as follows:
$$P(A)=P(A|\color{red}{B})P(\color{red}{B})+P(A|\color{magenta}{B^c})P(\color{magenta}{B^c})$$
This is the first time I am seeing this law applied to a conditional probability, as done in the book solution:
$$P(A_2|A_1)=P(A_2|AA_1)P(A|A_1)+P(A_2|A^cA_1)P(A_c|A_1)$$
as it involves three events ($A,A_1,A_2$). The book did not explain this. Though in the current problem it looks "somewhat" intuitive, 
<ol>
<li><p>can someone generalize it, so as to make my understanding more clear? Say for $n$ events? </p></li>
<li><p>Also, in $P(A_2|A_1)=P(A_2|\color{red}{AA_1})P(\color{red}{A|A_1})+P(A_2|\color{magenta}{A^cA_1})P(\color{magenta}{A^c|A_1})$, I feel the red-colored parts should be the same and the magenta-colored parts should be the same, as in the simple form of the law of total probability. </p></li>
<li><p>I felt it should be $P(A_2|\color{red}{(A_1|A)})P(\color{red}{A_1|A})+P(A_2|\color{magenta}{(A_1|A^c)})P(\color{magenta}{A_1|A^c})$. Am I absolutely stupid here? </p></li>
<li><p>For a moment I felt its related to:$P(E_1E_2E_2...E_n)=P(E_1)P(E_2|E_1)P(E_3|E_1E_2)...P(E_n|E_1...E_{n-1})$. Is it so?</p></li>
</ol>
<p>I am now screwed at my ability to apply law of total probability. Please enlighten me.</p>
| Satish Ramanathan | 99,745 | <p>Another rationale for the answer is to get the cue from the statements:</p>
<p>1) The probability that an accident-prone driver will have an accident in a given year is 0.4</p>
<p>2) The probability that a driver who is not accident prone will have an accident in a given year is 0.2</p>
<p>3) Probability that a person is accident prone is 0.3</p>
<p>These are all given:</p>
<p>What we can derive are the joint probabilities $P(\text{Having an accident and Accident Prone}) =.4\times .3$</p>
<p>$P( \text{Having an accident and Not Accident Prone}) = .2\times .7$</p>
<p>Now use Bayes theorem to find $P(\text{if the person is accident prone/he has had an accident}) = \frac{.4\times .3} {.4\times .3+.2\times .7}$</p>
<p>$P(\text{if the person is not accident prone/he has had an accident}) = \frac{.2\times .7} {.4\times .3+.2\times .7}$</p>
<p>Now the new person has had an accident in the first year. (Given).</p>
<p>We need to find out the probabilities of whether he could be classified as accident prone or not accident prone which is what you found in the last two steps.</p>
<p>Having found that Now use Total Probability rule to find out if he will have an accident in the second year ( which has got little to do with the first year) using the first two facts.</p>
<p>$P(\text{Accident on the second year}) = P(\text{accident on a given year/Accident Prone})*P(\text{if the person is accident prone/he has had an accident}) + P(\text{accdient on a given year/ No Accident Prone})* P(\text{if the person is not accident prone/he has had an accident})$</p>
|
3,905,197 | <p>Stirling's Formula states that <span class="math-container">$\Gamma(z+1) \sim \sqrt{2 \pi z} (\frac{z}{\mathbb{e}})^{z}$</span> as <span class="math-container">$z \rightarrow \infty$</span>. I need to prove the following identity using Stirling's formula:</p>
<p><span class="math-container">$$ (2n)! \sim \frac{2^{2n} (n!)^{2}}{\sqrt{\pi n}} $$</span> as <span class="math-container">$n \rightarrow \infty$</span>.</p>
<p>In Stirling's formula I plugged in <span class="math-container">$z = 2n$</span> to get:</p>
<p><span class="math-container">$$ \Gamma(2n+1) = 2n\Gamma(2n) \sim \sqrt{4n \pi} (\frac{2n}{\mathbb{e}})^{2n} $$</span>
where the first equality follows from the functional equation for the gamma function. Simplifying a little bit more I achieve:</p>
<p><span class="math-container">$$ (2n)! \sim \frac{\sqrt{\pi n} 2^{2n} n^{2n-1}}{\mathbb{e}^{2n}} $$</span></p>
<p>But, I don't know how to prove the identity from here. Can someone give me some hints on how to move on from this step?</p>
| Claude Leibovici | 82,404 | <p>Hoping that you enjoy hypergeometric functions
<span class="math-container">$$I_n=\int_0^\infty
\frac{1}{x^{2n+3}}
\left (
\sin x -
\sum_{k=0}^n \frac{(-1)^k x^{2k+1}}{(2k + 1)!}
\right )
\,dx$$</span></p>
<p><span class="math-container">$$\sum_{k=0}^n \frac{(-1)^k x^{2k+1}}{(2k + 1)!}=\frac{(-1)^n x^{2 n+3} \,
_1F_2\left(1;n+2,n+\frac{5}{2};-\frac{x^2}{4}\right)}{(2 n+3)!}+\sin (x)$$</span></p>
<p><span class="math-container">$$I_n=\frac{(-1)^{n+1}}{\Gamma (2 n+4)}\int_0^\infty \, _1F_2\left(1;n+2,n+\frac{5}{2};-\frac{x^2}{4}\right)\,dx$$</span>
<span class="math-container">$$I_n=(-1)^{n+1}\,\frac{ \pi}{2\,\Gamma (2 n+3)}$$</span></p>
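<p>As a numerical check of the <span class="math-container">$n=0$</span> case, the closed form predicts <span class="math-container">$\int_0^{+\infty} (\sin x - x)/x^3\,{\rm d}x = -\pi/4$</span>; a short Python sketch (my own addition, plain Simpson quadrature with an explicit tail estimate) agrees:</p>

```python
import math

def f(x):
    # (sin x - x) / x^3, using the Taylor value near 0 to avoid cancellation
    if x < 1e-4:
        return -1.0 / 6.0 + x * x / 120.0
    return (math.sin(x) - x) / x ** 3

# composite Simpson on [0, L]; the tail is dominated by -x/x^3 = -1/x^2,
# whose exact contribution on [L, oo) is -1/L (the sin x / x^3 tail is O(1/L^3))
L, N = 200.0, 100000
h = L / N
s = f(0.0) + f(L)
for i in range(1, N):
    s += (4 if i % 2 else 2) * f(i * h)
I0 = s * h / 3 - 1.0 / L

assert abs(I0 - (-math.pi / 4)) < 1e-3
```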
|
2,245,408 | <blockquote>
<p>How is the following result of a parabola with focus <span class="math-container">$F(0,0)$</span> and directrix <span class="math-container">$y=-p$</span>, for <span class="math-container">$p \gt 0$</span> reached? It is said to be <span class="math-container">$$r(\theta)=\frac{p}{1-\sin \theta} $$</span></p>
</blockquote>
<p>I started by saying the the standard equation of a parabola, in Cartesian form is <span class="math-container">$y= \frac{x^2}{4p} $</span>, where <span class="math-container">$p \gt 0 $</span> and the focus is at <span class="math-container">$F(0,p)$</span> and the directrix is <span class="math-container">$y=-p$</span>. So for the question above, would the equation in Cartesian form be <span class="math-container">$$y= \frac{x^2}{4 \cdot \left(\frac{1}{2}p\right)}=\frac{x^2}{2p}?$$</span></p>
<p>I thought this because the vertex is halfway between the directrix and the focus of a parabola.</p>
<p>Then I tried to use the facts:
<span class="math-container">$$r^2 = x^2 +y^2 \\
x =r\cos\theta \\
y=r\sin\theta.$$</span></p>
<p>But I couldn't get the form required, any corrections, or hints?</p>
<p>Cheers.</p>
| Chappers | 221,811 | <p>No, that can't work, because you've shown that these parabolae have different foci.</p>
<p>Since there are lots of parabolae ($x^2=4ay$, $y^2=4ax$, any rotation and translation of one of these), it's easier to work backwards. So let's do that: $\sin{\theta} = y/r$, so
$$ p= r(1-\sin{\theta}) = r-y, $$
so
$$ x^2+y^2 = r^2 = (p+y)^2 = p^2+2py+y^2, $$
and
$$ x^2 = 2py+p^2. $$
In particular, if we look more closely, we notice that
$$ \sqrt{x^2+y^2} = p+y $$
is the definition of a parabola: the distance from $(0,0)$ to the point is the same as $y+p$, the distance from the horizontal line $y=-p$.</p>
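<p>One can also spot-check the equivalence numerically; here is a short Python sketch (my own addition, with an arbitrary sample value $p = 2$):</p>

```python
import math

p = 2.0   # sample value; focus at the origin, directrix y = -p
for k in range(12):
    theta = -1.3 + 0.45 * k            # sample angles, skipping sin(theta) = 1
    if abs(1.0 - math.sin(theta)) < 1e-6:
        continue
    r = p / (1.0 - math.sin(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    # Cartesian form found above: x^2 = 2 p y + p^2
    assert abs(x * x - (2 * p * y + p * p)) < 1e-6
    # focus-directrix property: distance to (0,0) equals distance to y = -p
    assert abs(math.hypot(x, y) - (y + p)) < 1e-6
```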
|
2,756,686 | <p>I have a second derivative that I need to use to find inflection points to create a graph. The second derivative is $$f^{\prime\prime}(x)=-4\pi^2\cos(\pi(x-1))$$</p>
<p>So I set the equation to $0$ and solve for $x$</p>
<p>$$-4\pi^2\cos(\pi(x-1))=0$$</p>
<p>I divide by the constant $-4\pi^2$ and get </p>
<blockquote>
<p>$$\cos(\pi(x-1))=0$$</p>
</blockquote>
<p>But I am basically stuck at this point. I know I need to take the inverse cosine of both sides. The result I am getting is $x=3/2$, but the answer in the book is $x=1/2$, $3/2$. Can someone help me figure out how to solve the last steps of this problem?</p>
| Phil H | 554,494 | <p>$\cos\frac{\pi}{2} = \cos\frac{3\pi}{2}$. There is a solution in another region of the 4 quadrants.</p>
|
3,410,150 | <p>If we try solving it by finding <span class="math-container">$f''(x)$</span> directly, it is very long and difficult, so my teacher suggested another way: find the nature of all the roots of <span class="math-container">$f(x) =f'(x)$</span>. On finding the nature of the roots, we got them to be real (but not all distinct), and then he said that since all the roots of <span class="math-container">$f(x) = f'(x)$</span> are real, all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct. I did not understand how to prove that if all the roots of <span class="math-container">$f(x) = f'(x)$</span> are real then all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct. Can anyone please help me to prove this?</p>
<p>Is this statement (all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct) true only for this question or is it true in general for all function <span class="math-container">$f(x)$</span> whose all roots of <span class="math-container">$f(x) = 0$</span> are real?</p>
<p>If instead of <span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span> we had <span class="math-container">$f(x) = (x-a)^4(x-b)^4$</span>
then would we say that as all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct or rather we would say that as all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real.</p>
<p>If anyone has any other way of solving this question (for <span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span>, what is the nature of the roots of <span class="math-container">$f''(x) = f'(x)$</span>?), please share it.</p>
| Community | -1 | <p><strong>Proof the roots are real and distinct</strong></p>
<p>We can scale the axes and translate the curve so that, without loss of generality, the function is <span class="math-container">$f(x)=(x-1)^3(x+1)^3$</span>.</p>
<p>Then <span class="math-container">$f'(x)=6x(x-1)^2(x+1)^2, f''(x)=6(x-1)(x+1)(5x^2-1)$</span>.</p>
<p>Then the roots are <span class="math-container">$-1,1$</span> and the roots of <span class="math-container">$x^3-5x^2-x+1=0$</span>, which are real and distinct as required.</p>
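<p>The five solutions of $f'(x)=f''(x)$ can be confirmed numerically; in the Python sketch below (my own addition) the cubic $x^3-5x^2-x+1=0$ comes from dividing $6x(x-1)^2(x+1)^2 = 6(x-1)(x+1)(5x^2-1)$ by $6(x-1)(x+1)$:</p>

```python
fp  = lambda x: 6 * x * (x - 1) ** 2 * (x + 1) ** 2        # f'(x)
fpp = lambda x: 6 * (x - 1) * (x + 1) * (5 * x ** 2 - 1)   # f''(x)
g   = lambda x: x ** 3 - 5 * x ** 2 - x + 1                # remaining cubic factor

def bisect(a, b):
    # g changes sign on [a, b]; plain bisection
    for _ in range(200):
        m = (a + b) / 2
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

roots = [-1.0, 1.0] + [bisect(*iv) for iv in ((-1, 0), (0, 1), (1, 6))]
assert len({round(r, 8) for r in roots}) == 5    # five real, pairwise distinct roots
for r in roots:
    assert abs(fp(r) - fpp(r)) < 1e-6            # each really solves f' = f''
```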
|
528,591 | <p>I need to prove there are zero divisors in $\mathbb{Z}_n$ if and only if $n$ is not prime.
What should I consider first? </p>
| Cameron Buie | 28,900 | <p><strong>Hint</strong>: If $n$ is not prime, then either $n=1$ (in which case this is trivial) or there are some integers $k,m\in\{2,...,n-1\}$ such that $n=km$ (as you've correctly deduced). What can we conclude from there about certain elements of $\Bbb Z_n$?</p>
<p>If $n$ is prime, and $[km]=[k][m]=[0]$ for some $[k],[m]\in\Bbb Z_n,$ then $n\mid km,$ and so...what?</p>
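<p>Following the hint, a brute-force Python check (my own addition) confirms the equivalence for small moduli:</p>

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_zero_divisor(n):
    # a zero divisor: nonzero classes [a], [b] with [a][b] = [0] in Z_n
    return any((a * b) % n == 0 for a in range(1, n) for b in range(1, n))

for n in range(2, 60):
    assert has_zero_divisor(n) == (not is_prime(n))
```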
|
1,172,893 | <p>My textbook says I should solve the following integral by first making a substitution, and then using integration by parts:</p>
<p>$$\int cos\sqrt x \ dx$$</p>
<p>The problem is, after staring at it for a while I'm still not sure what substitution I should make, and hence I'm stuck at the first step. I thought about doing something with the $\sqrt x$, but that doesn't seem to lead anywhere as far as I can tell. Same with the $cos$. Any hints?</p>
| John Hughes | 114,036 | <p>Try $x = u^2$, and $dx = 2u ~ du$. </p>
|
1,172,893 | <p>My textbook says I should solve the following integral by first making a substitution, and then using integration by parts:</p>
<p>$$\int \cos\sqrt x \ dx$$</p>
<p>The problem is, after staring at it for a while I'm still not sure what substitution I should make, and hence I'm stuck at the first step. I thought about doing something with the $\sqrt x$, but that doesn't seem to lead anywhere as far as I can tell. Same with the $\cos$. Any hints?</p>
| Frank Lu | 41,622 | <p>Let $x=t^2$, then $dx=2tdt$, so
$$\int\cos\sqrt{x}dx=\int 2t\cos tdt$$</p>
<p>Then use integration by parts.</p>
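<p>Carrying the parts computation through gives the candidate antiderivative $2(\sqrt{x}\sin\sqrt{x}+\cos\sqrt{x})+C$ (worth verifying yourself); as an illustration, a quick numeric differentiation check of that candidate:</p>

```python
import math

# Candidate antiderivative from t = sqrt(x) and integration by parts:
# F(x) = 2 * (sqrt(x) * sin(sqrt(x)) + cos(sqrt(x)))
def F(x):
    s = math.sqrt(x)
    return 2 * (s * math.sin(s) + math.cos(s))

# Central-difference derivative of F should reproduce cos(sqrt(x)).
h = 1e-6
max_err = max(
    abs((F(x + h) - F(x - h)) / (2 * h) - math.cos(math.sqrt(x)))
    for x in (0.5, 1.0, 2.0, 5.0, 10.0)
)
print(max_err)
```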
|
713,098 | <p>The answer to my question might be obvious to you, but I have difficulty with it. </p>
<p>Which equations are correct:</p>
<p>$\sqrt{9} = 3$</p>
<p>$\sqrt{9} = \pm3$</p>
<p>$\sqrt{x^2} = |x|$</p>
<p>$\sqrt{x^2} = \pm x$</p>
<p>I'm confused. When it's right to take an absolute value? When do we have only one value and why? When two and why? </p>
<p>Thank you very much in advance for your help!</p>
| PossibilityZero | 135,599 | <p>The mathematical symbol √ refers to the positive one of the two possible square roots.</p>
<p>If the question is written as "What is the square root of 9?", then the answer is both 3 and -3. </p>
<p>However, if the question is "Evaluate √9," the answer would only be 3. Consequently, -√9 = -3.</p>
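<p>Programming languages follow the same convention: the square-root function returns only the principal (nonnegative) root, even though two numbers square to 9 (a small illustration):</p>

```python
import math

r = math.sqrt(9)      # the principal (nonnegative) root only
print(r)              # 3.0, never -3.0

# both candidates still square back to 9
squares = [c * c for c in (r, -r)]
print(squares)
```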
|
2,715,374 | <p>We know that \begin{equation*}
a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}=[a_0,a_1, \cdots, a_n]
\end{equation*}</p>
<p>If $\frac{p_n}{q_n}=[a_0,a_1, \cdots, a_n]$.</p>
<blockquote>
<p>How to prove that $$
\begin{pmatrix}
p_n & p_{n-1} \\
q_n & q_{n-1} \\
\end{pmatrix}=\begin{pmatrix}
a_0 &1 \\
1 & 0 \\
\end{pmatrix}\begin{pmatrix}
a_1 &1 \\
1 & 0 \\
\end{pmatrix}\cdots\begin{pmatrix}
a_n &1 \\
1 & 0 \\
\end{pmatrix}
$$.</p>
</blockquote>
<p>I am getting the answer while checking with $n=0,1,2,3$. I think that it could be done by induction but after assuming $k=n-1$ when I am going to prove $k=n$ the calculation is getting messy. Please help me out in proving this.</p>
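<p>(For reference, the small-case check can be scripted. The illustrative sketch below compares the usual convergent recurrences $p_n=a_np_{n-1}+p_{n-2}$, $q_n=a_nq_{n-1}+q_{n-2}$ — which are what the induction uses — with the running matrix product:)</p>

```python
def matmul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def check(a):
    # convergent recurrences, seeds p_{-1}=1, p_{-2}=0, q_{-1}=0, q_{-2}=1
    p_prev2, p_prev1 = 0, 1
    q_prev2, q_prev1 = 1, 0
    M = [[1, 0], [0, 1]]
    for a_n in a:
        p = a_n * p_prev1 + p_prev2
        q = a_n * q_prev1 + q_prev2
        M = matmul(M, [[a_n, 1], [1, 0]])
        assert M == [[p, p_prev1], [q, q_prev1]]
        p_prev2, p_prev1 = p_prev1, p
        q_prev2, q_prev1 = q_prev1, q
    return M

M = check([3, 7, 15, 1, 292])   # partial quotients of pi
print(M)
```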
| Henno Brandsma | 4,280 | <p>The false implication $x \neq y \to ax \neq ay$ is used. This is true iff $a$ is non-zero (in a field), because then the equivalent (by contraposition) $ax = ay \to x=y$ can be shown by division by $a$. So indeed it's a division by $0$ error implicitly.</p>
|
676,573 | <p>Exercise: Write the polynomial $1 + 2x -x^2 + 5x^3 - x^4$ at powers of $(x-1)$.</p>
<p>I presume this exercise is solved using Taylor Series, since it belongs to that chapter, but have no idea how to solve it. Otherwise, it's very straightforward.</p>
<p>Note: The above exercise is <strong>not</strong> homework.</p>
| Mark Bennet | 2,906 | <p>Here is a method. Let $t=x-1$ so that $x=t+1$.</p>
<p>$$p(t)=1+2(t+1)-(t+1)^2+5(t+1)^3-(t+1)^4$$</p>
<p>Then it is just arithmetic to get the answer.</p>
<p>You can also use Taylor Series - sometimes these problems are set to solve by Taylor Series to show the method works.</p>
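<p>Either route yields $p(x)=6+11(x-1)+8(x-1)^2+(x-1)^3-(x-1)^4$; a short illustrative script doing the shift by repeated synthetic division (an equivalent of the $t=x-1$ substitution — each division by $(x-1)$ peels off the next coefficient):</p>

```python
def taylor_shift(coeffs_asc, a):
    """coeffs_asc: coefficients of p ascending in x.
    Returns coefficients of p ascending in (x - a)."""
    desc = coeffs_asc[::-1]          # descending order for synthetic division
    out = []
    while desc:
        acc, b = 0, []
        for d in desc:               # one synthetic division by (x - a)
            acc = acc * a + d
            b.append(acc)
        out.append(b[-1])            # remainder = next shifted coefficient
        desc = b[:-1]                # quotient, still descending
    return out

# p(x) = 1 + 2x - x^2 + 5x^3 - x^4, rewritten in powers of (x - 1)
shifted = taylor_shift([1, 2, -1, 5, -1], 1)
print(shifted)
```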
|
1,383,956 | <p>Having two points <span class="math-container">$A(xa, ya)$</span> and <span class="math-container">$B(xb, yb)$</span> and knowing a value <span class="math-container">$k$</span> representing the length of a perpendicular segment in the middle of <span class="math-container">$[AB]$</span>, how can I find the other point of the segment?</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/fy8KZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fy8KZ.png" alt="" /></a></p>
<p>The known values are <span class="math-container">$xa$</span>, <span class="math-container">$ya$</span>, <span class="math-container">$xb$</span>, <span class="math-container">$yb$</span>. Also, it's obvious that <span class="math-container">$xm = \frac{(xa + xb)}{2}$</span> and <span class="math-container">$ym = \frac{(ya + yb)}{2}$</span></p>
</blockquote>
<p>How to find <span class="math-container">$N(xn, yn)$</span>?</p>
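<p>(For reference, the construction the figure suggests is direct to code: move a distance <span class="math-container">$k$</span> from the midpoint along the unit normal of <span class="math-container">$AB$</span>; this gives one of the two symmetric choices of <span class="math-container">$N$</span>. Illustrative sketch:)</p>

```python
import math

def perpendicular_point(A, B, k):
    """Point at distance k from the midpoint of AB, perpendicular to AB.
    The opposite choice of N is obtained by negating k."""
    xa, ya = A
    xb, yb = B
    xm, ym = (xa + xb) / 2, (ya + yb) / 2
    dx, dy = xb - xa, yb - ya
    L = math.hypot(dx, dy)
    # unit normal to AB: rotate the direction vector by 90 degrees
    nx, ny = -dy / L, dx / L
    return (xm + k * nx, ym + k * ny)

N = perpendicular_point((0, 0), (4, 0), 3)
print(N)
```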
| Hetebrij | 252,750 | <p>About part (4), you have $P ( X \ge 0.5 \mid X \ge 0.3) = \frac{ P( X \ge 0.5 \textrm{ and } X \ge 0.3)}{ P( X \ge 0.3) }$. And $ P( X \ge 0.5 \textrm{ and } X \ge 0.3) \neq P( X \ge 0.15)$. So you must divide by $P( X \ge 0.3)$ instead of $P(X \ge 0.5)$. And we have $P(X \ge 0.5 \textrm{ and } X \ge 0.3) = P(X \ge 0.5)$ as $X \ge 0.5$ already implies $X \ge 0.3$.</p>
|
367,204 | <p>I'm trying to prove that $\mathbb Z_p^*$ ($p$ prime) is a group using the Fermat's little theorem to show that every element is invertible.</p>
<p>Thus using the Fermat's little theorem, for each $a\in Z_p^*$, we have $a^{p-1}\equiv1$ (mod p). The problem is to prove that p-1 is the least positive integer which $a^{p-1}\equiv1$ (mod p).</p>
<p><strong>Remark:</strong> $\mathbb Z_p^*$ is $\{\overline 1,...,\overline {p-1}\}$ with multiplication.</p>
<p>I need help.</p>
<p>Thanks a lot.</p>
| Vittorino Mandujano Cornejo | 1,013,589 | <p>In my case I built an isomorphism between <span class="math-container">$\mathbb{Z}^*_p = \{1,2,3,...,p-1\}$</span> and the group of automorphisms of a group G of order p, which are exactly the maps of the form <span class="math-container">$\varphi_{x}(g)=g^{x}$</span> (where <span class="math-container">$x \in \mathbb{Z}^*_p$</span> and <span class="math-container">$g \in G$</span>; I hope you know the proof of this first part). That's the key: since G has prime order, it is cyclic with no proper nontrivial subgroups, so for all <span class="math-container">$x \in \mathbb{Z}^*_p$</span> and <span class="math-container">$g \neq e$</span>, <span class="math-container">$\varphi_{x}(g) = g^{x} \neq e$</span>.
Now, the automorphisms form a group under composition, and for <span class="math-container">$x,y \in \mathbb{Z}^*_p$</span>, <span class="math-container">$\varphi_{x} \circ \varphi_{y} = \varphi_{xy}$</span>, so <span class="math-container">$\mathbb{Z}^*_p$</span> is a group too!</p>
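<p>Fermat's little theorem also hands over the inverses explicitly: <span class="math-container">$a^{p-2}$</span> inverts <span class="math-container">$a$</span> modulo <span class="math-container">$p$</span>, since <span class="math-container">$a\cdot a^{p-2}=a^{p-1}\equiv 1$</span>. A quick illustrative check:</p>

```python
def check_inverses(p):
    # Fermat: a^(p-1) ≡ 1 (mod p), so a^(p-2) is the inverse of a.
    for a in range(1, p):
        assert pow(a, p - 1, p) == 1
        assert a * pow(a, p - 2, p) % p == 1
    return True

ok = all(check_inverses(p) for p in [2, 3, 5, 7, 11, 13, 101])
print(ok)
```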
|
675,718 | <p>I have a non-linear system of equations, $$\left\{ \begin{array}{rcl} x^2 - xy + 8 = 0 \\ x^2 - 8x + y = 0 \\ \end{array} \right.$$
I have tried equating the expressions (because both equal 0), which tells me: $$x^2 - xy + 8 = x^2 - 8x + y$$
Moving all expressions to the right yields: $$0 = xy - 8x + y - 8$$
Factoring the equation: $$0 = x(y-8) + 1(y-8)$$
$$0 = (x+1)(y-8)$$
$$x=-1$$
$$y=8$$
Problem solved, right? No. When you plug in the values into the equations above, you get a false statement. Allow me to demonstrate:
$$x^2 - 8x + y = 0$$
$$(-1)^2 - 8(-1) + (8) = 0$$
$$1 - (-8) + 8 = 0$$
$$1 + 8 + 8 = 0$$
$$17 = 0$$
Can someone please help me solve this system of nonlinear equations? I am stuck.</p>
| Acrobat | 262,162 | <p>Express $y$ from the second equation. Then substitute it into the first in place of $y$, and solve the resulting equation in $x$ (it is cubic, but $x=-1$ is an obvious root, which leaves a quadratic). Find the roots, then for each root $x$ find the corresponding $y$. In the end you get the solution pairs. It's all quite simple.</p>
<p><em>(Translated from the Russian original.)</em></p>
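<p>(The slip above is reading $(x+1)(y-8)=0$ as "$x=-1$ <em>and</em> $y=8$" instead of "$x=-1$ <em>or</em> $y=8$". Handling the two cases separately, with $y=8x-x^2$ from the second equation, leads to the solutions $(-1,-9)$ and $(4\pm2\sqrt2,\,8)$; a quick verification:)</p>

```python
import math

def residuals(x, y):
    # the two original equations; both vanish at a genuine solution
    return (x * x - x * y + 8, x * x - 8 * x + y)

solutions = [(-1.0, -9.0),
             (4 + 2 * math.sqrt(2), 8.0),
             (4 - 2 * math.sqrt(2), 8.0)]

for x, y in solutions:
    r1, r2 = residuals(x, y)
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9
print("all three solutions check out")
```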
|
1,282,843 | <p>I'm having trouble proving the following statement:</p>
<blockquote>
<p>$x(u, v) = (u - u^3/3 + uv^2,\ v - v^3/3 + u^2v,\ u^2 - v^2)$ is a minimal surface and $x$ is not injective</p>
</blockquote>
<p>Proving that $x(u,v)$, which is also known as the Enneper surface, is minimal is not a problem. However, I can't prove that $x$ is not injective.</p>
<p>Is there any smart way to do this rather than trying some couples $ (a,b)$ and (c,d) hoping that $x(a,b)$ will be equal to $x(c,d)$ with $(a,b)$ and $(c,d) $ different couples?</p>
| Community | -1 | <p>One way is to look for symmetries. The second and third components are even functions of $u$, while the first is an odd function of $u$. So, if $(u_0, v_0)$ is a point such that the first component of $x$ is zero, then $x(u_0,v_0)=x(-u_0,v_0)$.</p>
<p>It's not hard to find a solution of $u−u^3/3+uv^2=0$ with nonzero $u$: for example, choose $v=0$ and solve for $u$.</p>
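<p>A concrete collision, following the hint ($v=0$ gives $u-u^3/3=0$, so $u=\pm\sqrt3$), checked numerically:</p>

```python
import math

def enneper(u, v):
    return (u - u ** 3 / 3 + u * v * v,
            v - v ** 3 / 3 + u * u * v,
            u * u - v * v)

u0 = math.sqrt(3)
p, q = enneper(u0, 0), enneper(-u0, 0)
print(p, q)

# (sqrt(3), 0) and (-sqrt(3), 0) map to the same point: x is not injective
same = all(abs(a - b) < 1e-12 for a, b in zip(p, q))
print(same)
```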
|
1,837,807 | <p>Let $\mathbb{N}$ denote the set of natural numbers, then a subbasis on $\mathbb{N}$ is </p>
<p>$$S = \{(-\infty, b), b \in \mathbb{N}\} \cup \{(a,\infty), a \in \mathbb{N}\}$$</p>
<p>Let $\leq$ be the relation on $\mathbb{N}$ identified with "less or equal to"</p>
<p>Then I saw a claim that says: (the order topology) $(\mathbb{N}, \leq)$ is a discrete space</p>
<p><strong>Proof:</strong> Let $x \in \mathbb{N}, \{x\} = (x-1,x+1) = (-\infty,x+1) \cap (x-1, \infty) \quad\quad\square$</p>
<p>I have trouble with the last equivalent. If $(-\infty,x+1)$ and $(x-1, \infty)$ are subbasic elements i.e. open sets, then intersection of opens are open and I have no problem with that. But my confusion is whether $(-\infty,x+1)$ and $(x-1, \infty)$ are subbasic open sets to begin with.</p>
<p>The confusion stems from definition of the subbasis which is $$S = \{(-\infty, b), b \in \mathbb{N}\} \cup \{(a,\infty), a \in \mathbb{N}\}$$</p>
<p>Then every element in the subbasis has to be like $(-\infty, b)\cup (a,\infty)$. So it doesn't seem we can get rid of the $(a,\infty)$ part to get a pure subbasic element of the form $(-\infty, b)$. So $(-\infty, b)$ is not a subbasic element.</p>
<p>Can someone help?</p>
| Henno Brandsma | 4,280 | <p>The subbase is a union of two families: the family of left intervals, and the family of right intervals. So the proof is correct.</p>
<p>If your interpretation would hold, you would write it as </p>
<p>$$S = \{(-\infty,b) \cup (a, \infty): a,b \in \mathbb{N} \}$$</p>
<p>which is different: a family of unions, parametrised by 2 parameters, and it would be only one family.</p>
<p>As it stands, there are two families, both consisting of one type (left or right), and we take both of them together.</p>
<p>The separate families are also subbases but for different, coarser topologies. Your interpretation gives a different topology yet again. So be careful reading the notation!</p>
|
1,714,902 | <p>(Question edited to shorten and clarify it, see the history for the original)</p>
<p>Suppose we are given two $n\times n$ matrices $A$ and $B$. I am interested in finding the closest matrix to $B$ that can be achieved by multiplying $A$ with orthogonal matrices. To be precise, the problem is</p>
<p>$$\begin{align}
\min_{U,V}\ & \|UAV^T-B\|_F \\
\text{s.t.}\ & U^TU = I \\
& V^TV = I,
\end{align}$$
where $\|\cdot\|_F$ is the Frobenius norm.</p>
<p>Without loss of generality*, we can restrict our attention to <em>diagonal</em> matrices with nonnegative diagonal entries $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$. My hypothesis is that in this case the optimal $UAV^T$ is still diagonal, with its entries being the permutation of $a_i$ which minimizes $\sum_i (a_{\pi_i} - b_i)^2$. In other words, $U=V=P$, where $P$ is the permutation matrix corresponding to said permutation $\pi$. This appears to be true based on numerical tests, but I don't know how to prove it. Is there an elegant proof?</p>
<hr>
<p>*For arbitrary $A$ and $B$, take their singular value decompositions $A=U_A\Sigma_AV_A^T$ and $B=U_B\Sigma_BV_B^T$ to obtain
$$\begin{align}
\|UAV^T-B\|_F &= \|UU_A\Sigma_AV_AV^T-U_B\Sigma_BV_B^T\|_F \\
&= \|U'\Sigma_AV'^T-\Sigma_B\|_F,
\end{align}$$
where $U'=U_B^{-1}UU_A$ and $V'=V_B^{-1}VV_A$ are orthogonal. So we can work with $\Sigma_A$ and $\Sigma_B$ instead.</p>
| reuns | 276,986 | <p>you want to minimize over $U$ an unitary matrix :</p>
<p>$$\|U A-B\|_F^2 = \sum_{n=1}^N \|U A_n - B_n\|^2$$</p>
<p>where $A_n,B_n$ are the rows of $A,B$. since any permutation matrix is unitary, you can also consider minimizing : </p>
<p>$$\sum_{n=1}^N \|U A_{\phi_n} - B_n\|^2$$
over $U$ and a permutation $\phi$. if $v,w$ are collinear vectors pointing in the same direction, then $$\|v - w \|^2 = (\|v\|-\|w\|)^2,$$ and since $Uv$ can be any vector of norm $\|v\|$, for arbitrary $v,w$ we get $$\min_U \|U v - w \|^2 = (\|v\|-\|w\|)^2.$$ hence :</p>
<p>$$\|U A-B\|_F^2 \ge \sum_{n=1}^N (\|A_{\phi_n}\|-\|B_n\|)^2$$</p>
<p>finally, when $A,B$ are diagonal matrices with nonnegative diagonal entries, minimizing this bound over the pairings gives :</p>
<p>$$\min_\phi \sum_{n=1}^N ( A_{\phi_n,\phi_n}- B_{n,n} )^2$$</p>
<p>and in the two-sided problem $UAV^T$ this value is attained by taking $U=V=P$, the permutation matrix representing the optimal permutation $\phi$, since $PAP^T$ is again diagonal with entries $A_{\phi_n,\phi_n}$ :</p>
<p>$$\min_P \sum_{n=1}^N \| \, (PAP^T)_{n}- B_{n}\|^2 $$</p>
<p>and by taking the SVD, we have the general solution even when $A,B$ are not diagonal.</p>
<p>(and I leave you to see how this answers your question on $U = V$) </p>
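<p>(For what it's worth, the hypothesis can be probed numerically in the $2\times2$ case by gridding over all rotations and reflections for $U$ and $V$ and comparing with the best permutation; a brute-force illustrative sketch with an arbitrary example $a=(3,1)$, $b=(1,2)$, where the grid includes the exact quarter-turns realizing the swap:)</p>

```python
import math
from itertools import permutations

def rot(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def objective(U, Vt, a, b):
    # || U diag(a) Vt - diag(b) ||_F^2
    M = mm(mm(U, [[a[0], 0.0], [0.0, a[1]]]), Vt)
    return sum((M[i][j] - (b[i] if i == j else 0.0)) ** 2
               for i in range(2) for j in range(2))

a, b = (3.0, 1.0), (1.0, 2.0)
F = [[1.0, 0.0], [0.0, -1.0]]                     # reflection

angles = [k * math.pi / 30 for k in range(60)]    # includes pi/2 and 3*pi/2
best = float("inf")
for t in angles:
    for s in angles:
        for U in (rot(t), mm(rot(t), F)):
            for Vt in (rot(s), mm(F, rot(s))):
                best = min(best, objective(U, Vt, a, b))

# best permutation value: min over pairings of sum (a_pi - b)^2
best_perm = min(sum((a[p[i]] - b[i]) ** 2 for i in range(2))
                for p in permutations(range(2)))
print(best, best_perm)
```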
|
3,451,301 | <p>The following classical generalization</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^{n}H_n}{n^{2a}}=-\left(a+\frac 12\right)\eta(2a+1)+\frac12\zeta(2a+1)+\sum_{j=1}^{a-1}\eta(2j)\zeta(2a+1-2j)$$</span>
where <span class="math-container">$\eta(a)=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^a}=(1-2^{1-a})\zeta(a)$</span> is the Dirichlet eta function.</p>
</blockquote>
<p>was proved by <strong>G. Bastien</strong> <a href="https://arxiv.org/pdf/1301.7662.pdf?fbclid=IwAR2cz1aKTBT9iBvuMvzyFd4maDw0zlq2FfF7WmFd8hA99pSIkzH7gVDeHe0" rel="nofollow noreferrer">here</a> page 7 Eq. 17 and also by <strong>Cornel</strong> <a href="https://www.researchgate.net/publication/333999069_A_new_powerful_strategy_of_calculating_a_class_of_alternating_Euler_sums" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p>I am trying to prove it in a different way but came across an integral that can be calculated by Beta function but I want it in <span class="math-container">$\zeta$</span> if possible to get the right result.</p>
<p>Here is my approach which follows from the same idea of my solution <a href="https://math.stackexchange.com/q/3449946">here</a>:</p>
<p>By using <span class="math-container">$$\frac1{n^{2a}}=-\frac1{(2a-1)!}\int_0^1x^{n-1}\ln^{2a-1}(x)\ dx$$</span></p>
<p>we can write</p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2a}}=-\frac1{(2a-1)!}\int_0^1\frac{\ln^{2a-1}(x)}{x}\left(\sum_{n=1}^\infty(-x)^nH_n\right)\ dx$$</span></p>
<p><span class="math-container">$$=\frac1{(2a-1)!}\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\frac1{(2a-1)!}I_a\tag1$$</span></p>
<hr />
<p><span class="math-container">$$I_a=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\underbrace{\int_1^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx}_{x\mapsto 1/x}$$</span></p>
<p><span class="math-container">$$=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx+\color{blue}{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{1+x}dx}-\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p>By adding</p>
<p><span class="math-container">$$I_a=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x}dx-\color{blue}{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{1+x}dx}$$</span></p>
<p>to both sides, the blue integral nicely cancels out and we get</p>
<p><span class="math-container">$$2I_a=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx+\underbrace{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x}dx}_{IBP}-\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p><span class="math-container">$$=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\frac{1+2a}{2a}\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p>where</p>
<p><span class="math-container">$$\int_0^1\frac{\ln^{2a}(x)}{1+x}dx=\sum_{n=1}^\infty(-1)^{n-1}\int_0^1 x^{n-1}\ln^{2a}(x)dx=(2a)!\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^{2a+1}}=(2a)!\eta(2a+1)$$</span></p>
<p>so</p>
<p><span class="math-container">$$I_a=\frac12\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\left(a+\frac12\right)(2a-1)!\eta(2a+1)\tag2$$</span></p>
<p>Plug <span class="math-container">$(2)$</span> in <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2a}}=-\left(a+\frac12\right)\eta(2a+1)+\frac1{2(2a-1)!}\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx\tag{3}$$</span></p>
<p>So any idea how to evaluate the integral in <span class="math-container">$(3)$</span> in a way that completes my proof?</p>
<hr />
| Ali Shadhar | 432,085 | <p>A different proof with a big bonus:</p>
<p>By the definition of the skew harmonic number we have
<span class="math-container">\begin{gather}
\sum_{n=1}^\infty\frac{(-1)^n \overline{H}_n}{n^{2q}}=\sum_{n=1}^\infty \frac{(-1)^n}{n^{2q}}\left(\ln(2)-\int_0^1 \frac{(-x)^n}{1+x}\mathrm{d}x\right)\\
=\ln(2)\sum_{n=1}^\infty \frac{(-1)^n}{n^{2q}}-\sum_{n=1}^\infty\frac{1}{n^{2q}}\int_0^1\frac{x^n}{1+x}\mathrm{d}x\\
=-\ln(2)\eta(2q)-\sum_{n=1}^\infty\frac{1}{n^{2q}}(\ln(2)+H_{\frac n2}-H_n)\\
=-\ln(2)\eta(2q)-\ln(2)\zeta(2q)-\sum_{n=1}^\infty\frac{H_{\frac n2}}{n^{2q}}+\sum_{n=1}^\infty\frac{H_n}{n^{2q}}.\label{fbi}
\end{gather}</span>
To get the first sum, set <span class="math-container">$p=2$</span> and replace <span class="math-container">$q$</span> by <span class="math-container">$2q$</span> in (<a href="https://math.stackexchange.com/q/3708866">1</a>),
<span class="math-container">\begin{gather}
\sum_{n=1}^\infty\frac{H_{\frac n2}}{n^{2q}}=2 \sum_{n=1}^\infty\frac{H_{2n}}{(2n)^{2q}}-\sum_{j=1}^{2q-2}(-2)^{-j}\zeta(2q-j)\zeta(j+1)\\
=\sum_{n=1}^\infty\frac{H_{n}}{n^{2q}}+\sum_{n=1}^\infty\frac{(-1)^nH_{n}}{n^{2q}}-\sum_{j=1}^{2q-2}(-2)^{-j}\zeta(2q-j)\zeta(j+1).
\end{gather}</span>
Thus we have
<span class="math-container">\begin{gather}
\sum_{n=1}^\infty\frac{(-1)^n \overline{H}_n}{n^{2q}}+\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2q}}=\sum_{j=1}^{2q-2}(-2)^{-j}\zeta(2q-j)\zeta(j+1)\\
-\ln(2)(\eta(2q)+\zeta(2q)).\tag{a}
\end{gather}</span>
To establish another relation, let <span class="math-container">$a_n=\frac{\overline{H}_{n}}{n^{2q}}$</span> in
<span class="math-container">$$\sum_{n=1}^\infty a_{2n}=\frac12\sum_{n=1}^\infty a_n+\frac12\sum_{n=1}^\infty (-1)^n a_n,$$</span>
we obtain
<span class="math-container">\begin{gather*}
\sum_{n=1}^\infty \frac{(-1)^n\overline{H}_{n}}{n^{2q}}+\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n^{2q}}=2\sum_{n=1}^\infty \frac{\overline{H}_{2n}}{(2n)^{2q}}\\
\{\text{substitute $\overline{H}_{2n}=H_{2n}-H_n$}\}\\
=2\sum_{n=1}^\infty \frac{H_{2n}}{(2n)^{2q}}-2\sum_{n=1}^\infty \frac{H_{n}}{(2n)^{2q}}\\
=\sum_{n=1}^\infty \frac{(-1)^nH_{n}}{n^{2q}}+\sum_{n=1}^\infty \frac{H_{n}}{n^{2q}}-2\sum_{n=1}^\infty \frac{H_{n}}{(2n)^{2q}}\\
=\sum_{n=1}^\infty \frac{(-1)^nH_{n}}{n^{2q}}+(1-2^{1-2q})\sum_{n=1}^\infty \frac{H_{n}}{n^{2q}}.
\end{gather*}</span>
Rearrange the terms,
<span class="math-container">\begin{equation}
\sum_{n=1}^\infty \frac{(-1)^n\overline{H}_{n}}{n^{2q}}-\sum_{n=1}^\infty \frac{(-1)^nH_{n}}{n^{2q}}=(1-2^{1-2q})\sum_{n=1}^\infty \frac{H_{n}}{n^{2q}}-\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n^{2q}}.\tag{b}
\end{equation}</span>
Taking the difference of <span class="math-container">$(a)$</span> and <span class="math-container">$(b)$</span> then dividing by <span class="math-container">$2$</span> gives
<span class="math-container">\begin{gather*}
\sum_{n=1}^\infty \frac{(-1)^nH_{n}}{n^{2q}}=-\frac12\ln(2)(\zeta(2q)+\eta(2q))+\frac12\sum_{j=1}^{2q-2}(-2)^{-j}\zeta(2q-j)\zeta(j+1)\\
+\frac12\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n^{2q}}-\frac{1-2^{1-2q}}{2}\sum_{n=1}^\infty \frac{H_{n}}{n^{2q}}
\end{gather*}</span></p>
<p>and combining <span class="math-container">$(a)$</span> and <span class="math-container">$(b)$</span> then dividing by <span class="math-container">$2$</span> gives
<span class="math-container">\begin{gather*}
\sum_{n=1}^\infty \frac{(-1)^n\overline{H}_{n}}{n^{2q}}=-\frac12\ln(2)(\zeta(2q)+\eta(2q))+\frac12\sum_{j=1}^{2q-2}(-2)^{-j}\zeta(2q-j)\zeta(j+1)\\
-\frac12\sum_{n=1}^\infty \frac{\overline{H}_{n}}{n^{2q}}+\frac{1-2^{1-2q}}{2}\sum_{n=1}^\infty \frac{H_{n}}{n^{2q}}.
\end{gather*}</span></p>
<p>Substitute</p>
<p><a href="https://math.stackexchange.com/q/469785"><span class="math-container">$$\sum_{m=1}^\infty\frac{H_m}{m^q}
=\frac{q+2}{2}\zeta(q+1)-\frac12\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)$$</span></a></p>
<p>and</p>
<p><a href="https://math.stackexchange.com/q/3706722"><span class="math-container">$$\sum_{m=1}^\infty\frac{\overline{H}_m}{m^q}=\left(1-2^{-q}-\frac{q}{2}\right)\zeta(q+1)+(2-2^{1-q})\ln(2)\zeta(q)$$</span>
<span class="math-container">$$+\frac12\sum_{j=1}^{q-2}(1-2^{1-q+j})(1-2^{-j})\zeta(q-j)\zeta(j+1)$$</span></a></p>
<p>we get
<span class="math-container">$$
\sum_{n=1}^\infty\frac{(-1)^{n}H_n}{n^{2q}}=-\,\frac{2^{2q+1}q-2q-1}{2^{2q+1}}\zeta(2q+1)$$</span>
<span class="math-container">$$+\frac14\sum_{j=1}^{2q-2}\left[2-2^{-j}-(-2)^{1-j}-2^{j-2q+1}\right]\zeta(2q-j)\zeta(j+1)\tag{c}$$</span></p>
<p>and</p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n \overline{H}_n}{n^{2q}}=(2^{1-2q}-2)\ln(2)\zeta(2q)+\frac{2^{2q+1}q-2q-1}{2^{2q+1}}\zeta(2q+1)$$</span>
<span class="math-container">$$+\frac14\sum_{j=1}^{2q-2}\left[2^{j-2q+1}-(-2)^{1-j}+2^{-j}-2\right]\zeta(2q-j)\zeta(j+1)\tag{d}.$$</span></p>
<hr />
<p>Note: the form of <span class="math-container">$(c)$</span> is different from the form in the question body, but the two are certainly equivalent, as both give the right results for any <span class="math-container">$q\in N.$</span></p>
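<p>A numerical spot check of <span class="math-container">$(c)$</span> at <span class="math-container">$q=1$</span>, where the <span class="math-container">$j$</span>-sum is empty and the identity reduces to the classical <span class="math-container">$\sum_{n\ge1}(-1)^nH_n/n^2=-\frac58\zeta(3)$</span>:</p>

```python
# Partial sums of sum_{n>=1} (-1)^n H_n / n^2 against -(5/8) * zeta(3),
# the q = 1 case of identity (c).  Apery's constant hardcoded.
ZETA3 = 1.2020569031595943

s = 0.0
H = 0.0
sign = -1.0
N = 200_000
for n in range(1, N + 1):
    H += 1.0 / n
    s += sign * H / (n * n)
    sign = -sign

target = -(5.0 / 8.0) * ZETA3
print(s, target)
```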
|
2,189,445 | <p>I try to solve this:
$$
\frac{\partial^{2} I}{\partial b \partial a} = I.
$$
I guessed $ I = C e^{a+b} $, but it's not the general solution. So, how does one find the general solution?</p>
| Bram28 | 256,001 | <p>The Barber Paradox is a straight-forward contradiction, and your logical analysis shows exactly that. Good job!</p>
<p>OK, but then what about the professor's claim?</p>
<p>The professor used the Barber's paradox to illustrate the self-reference or diagonalization method that is behind Godel's proof of the incompleteness of arithmetic. That is, at some point in Godel's proof, he introduces a sentence (the Godel sentence) $G$ that roughly says "I cannot be derived from the axioms". This $G$ feels very much like the statement about the barber, except where the barber sentence is a genuine contradiction, this $G$ is not a contradiction ... It is in fact true: If it were false, then the statement $G$ <em>can</em> be proven from the axioms, so assuming the axioms reflect basic truths, sentence $G$ would have to be true as well. So: If $G$ is false, then $G$ is true. So: $G$ is true! And so, $G$ is some truth that cannot be proven from the axioms. Which axioms? <em>Any</em> (recursive) set of axioms for arithmetic (each axiom set will have its own Godel sentence!) Meaning: there is no (recursive ... think: finite) set of axioms for arithmetic such that all arithmetical truths can be derived from it. And that is Godel's incompleteness result. Relatedly: there is no decision procedure for arithmetic: for any procedure that tries to decide whether some claim about arithmetic is true or false: if it is sound (i.e. it never gets the wrong answer), then for some claims it cannot figure out whether it is true or false.</p>
<p>So ... yeah ... fascinating stuff, but we're far beyond any basic Barber's paradox/contradiction here!</p>
|
314,238 | <p>Let $R$ be a ring and $\mathfrak{m},\mathfrak{m'}$ two ideals of $R$.</p>
<p>Suppose that $\frac{R}{\mathfrak{m}}$ and $\frac{R}{\mathfrak{m'}}$ are isomorphic. Can I say that $\mathfrak{m}$ and $\mathfrak{m'}$ are isomorphic too?</p>
| Tom Oldfield | 45,760 | <p>Unfortunately not!</p>
<p>For a particularly interesting example, consider $R = C[0,1]$, the ring of continuous functions from $[0,1]$ to $\mathbb{R}$. Take the ideal $I = \{f \in R : f(x) = 0 \space \forall x \in [0,\frac12]\}$. The quotient $R/I$ is then isomorphic to $C[\frac12, 1]$, which is in turn isomorphic to $R$ itself! </p>
<p>Certainly then $R/I \cong R/\{0\}$, but $I \not\cong \{0\}$. This also shows that in a general ring, the quotient may carry very little information about the ideal being quotiented out.</p>
|
3,121,361 | <p>Given <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>. Group operation is defined as:
<span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p>
<p>How to prove closure property to prove that G is a group?</p>
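<p>(A quick numerical sanity check of closure before attempting a proof; illustrative only, with an arbitrary choice <span class="math-container">$c=2$</span>:)</p>

```python
import random

c = 2.0

def velocity_add(x, y):
    # the group operation from the question
    return (x + y) / (1 + x * y / c ** 2)

random.seed(0)
inside = all(
    -c < velocity_add(random.uniform(-c, c), random.uniform(-c, c)) < c
    for _ in range(1000)
)
print(inside)
```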
| trancelocation | 467,003 | <p>I would also use Binomial theorem but would estimate a bit differently using a telescoping sum:</p>
<p><span class="math-container">\begin{eqnarray*}
\left(1+\frac{1}{n}\right)^{n}
& = & 1+1 +\sum_{k=2}^n\frac{n(n-1)\cdots (n-k+1)}{n^k}\cdot\frac{1}{k!}\\
& < & 2 +\sum_{k=2}^n\frac{1}{(k-1)k}\\
& = & 2 +\sum_{k=2}^n\left( \frac{1}{k-1}-\frac{1}{k}\right)\\
& = & 2 + 1- \frac{1}{n} = 3-\frac{1}{n}
\end{eqnarray*}</span></p>
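<p>The final bound checks out numerically for <span class="math-container">$n\ge2$</span> (at <span class="math-container">$n=1$</span> the sum in the second line is empty and the strict inequality degenerates):</p>

```python
# Check the telescoping bound (1 + 1/n)^n < 3 - 1/n for n >= 2.
for n in range(2, 2001):
    assert (1 + 1 / n) ** n < 3 - 1 / n
print("bound holds for n = 2..2000")
```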
|
2,555,499 | <p>Let $v_1=(1,1)$ and $v_2=(-1,1)$ vectors in $\mathbb{R}^2$. They are <strong>clearly linearly independent</strong> since each is not an scalar multiple of the other. The following information about a linear transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ is given: $$f(v_1)=10 \cdot v_1 \text{ and } f(v_2)=4 \cdot v_2$$</p>
<ol>
<li><p><strong>Give the transformation matrix $_vF_v$ with respect to ordered basis $\mathcal{B}=(v_1,v_2)$</strong></p></li>
<li><p><strong>Give the transformation matrix $_eF_e$ with respect to the ordered standard basis $e=(e_1,e_2)$ of $\mathbb{R}^2$</strong></p></li>
</ol>
<p>Recall that
$$ \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}=\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix} $$
We need a matrix $_eF_e$ such that:
$$_eF_e\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}=\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 10 & 0 \\ 0 & 4 \end{bmatrix}$$
then
$$_eF_e=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}
=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix}$$
Okay so I'm pretty sure that $$_eF_e=_eF_v \cdot _vF_v \cdot _vF_e$$
And i figured I could find $_eF_e=\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix}$ in the following equation $$\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix} \text{ } \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}= \begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \\ \Rightarrow
\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \text{ } \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}= \begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} \\ \Rightarrow {}_eF_e=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} $$</p>
<p>Now, how can I find ${}_v{F}_v$? I got a feeling that I'm making it more difficult than necessary</p>
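<p>(For what it's worth: since $f(v_1)=10v_1$ and $f(v_2)=4v_2$, the matrix $_vF_v$ should simply be $\begin{bmatrix} 10 & 0 \\ 0 & 4 \end{bmatrix}$, as already used implicitly above; a quick consistency check against $_eF_e$:)</p>

```python
def mm(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

C = [[1, -1], [1, 1]]              # columns are v1, v2
C_inv = [[0.5, 0.5], [-0.5, 0.5]]
vFv = [[10, 0], [0, 4]]            # f(v1) = 10 v1, f(v2) = 4 v2

# change of basis: eFe = C * vFv * C^{-1}
eFe = mm(mm(C, vFv), C_inv)
print(eFe)
```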
| user334639 | 221,027 | <p>Writing $x \div y \times z$ should be avoided as it is just a bad idea.</p>
<p>It doesn't matter how many people will say it has one correct interpretation, the number of people who will be confused or misinterpret that is just too big.</p>
<p>If you mean $(x \div y) \times z$, then just write $x \times z \div y$.
The "one correct interpretation" would be this one, but it's so silly to write $x \div y \times z$ instead of $x \times z \div y$ that many people will think that's not what you mean.</p>
<p>If you mean $x \div (y \times z)$, then just write $\frac{x}{yz}$. If you don't want something so vertical, write $x/(yz)$ or $x(yz)^{-1}$. And if you think that's too ugly, well, you can write $x/yz$, it's less ugly although a little ambiguous.</p>
<p>Writing $x/yz$ for $\frac{x}{yz}$ will often be my choice. That's under the implicit convention that, unlike $\times$, juxtaposition has precedence over $\div$, or under the assumption that if I meant $xz/y$ I would have written $xz/y$. But some people will argue that it's technically wrong.</p>
|
1,849,588 | <p>The exercise asks if exists a definition of $Nat(x)$ such that $Nat(x) \Rightarrow Nat(S(x))$, and $\exists x $such that $ Nat(x)$ is false without the use of axiom of infinity. Here $Nat(x) \Leftrightarrow x$ is a natural number, and $S(x)$ is the successor of $x$. </p>
<p>I tried to define $$S(a)=\{a\}.$$ Then I define $$a \in C_a \Leftrightarrow (a\in C_a \Rightarrow S(s) \in C_a).$$
So $$\emptyset\in \cap C_i \forall i. $$
At the end I define $$Nat (a) \Leftrightarrow a\in C_\emptyset.$$
Can you ckech my idea or suggest another one?</p>
| Eric Wofsey | 86,856 | <p>I don't understand your attempt. In particular, you haven't actually defined what "$C_a$" is. You wrote "$a \in C_a \Leftrightarrow (a\in C_a \Rightarrow S(s) \in C_a)$", but this is circular when taken as a definition (since $C_a$ appears on the right-hand side) and even if it weren't, it would only define when $a\in C_a$, not when $x\in C_a$ for arbitrary $x$.</p>
<p>The other answer describes the standard way to define $Nat(x)$. Actually, though, for this problem as stated, you can do something much much simpler. Note that the task of just naming a predicate $Nat(x)$ such that $Nat(x) \Rightarrow Nat(S(x))$ and $\exists x (\neg Nat(x))$ is much easier than actually defining the natural numbers. For instance, you could define $Nat(x)$ to be $x\neq x$, and then it satisfies these properties. Slightly less trivially, you could define $Nat(x)$ to be $x\neq \emptyset$, or $x\neq A$ for your favorite set $A$ which is not of the form $S(y)$ for any $y$.</p>
|
1,849,588 | <p>The exercise asks if exists a definition of $Nat(x)$ such that $Nat(x) \Rightarrow Nat(S(x))$, and $\exists x $such that $ Nat(x)$ is false without the use of axiom of infinity. Here $Nat(x) \Leftrightarrow x$ is a natural number, and $S(x)$ is the successor of $x$. </p>
<p>I tried to define $$S(a)=\{a\}.$$ Then I define $$a \in C_a \Leftrightarrow (a\in C_a \Rightarrow S(s) \in C_a).$$
So $$\emptyset\in \cap C_i \forall i. $$
At the end I define $$Nat (a) \Leftrightarrow a\in C_\emptyset.$$
Can you ckech my idea or suggest another one?</p>
| Kaa1el | 95,485 | <p>Yes, all you need is some kind of type theory with an induction principle.</p>
<p>You just define it like this:</p>
<pre><code>N : Type
0 : N
s : N -> N
</code></pre>
<p>It works in many proof verifiers and maybe in some functional programming languages.</p>
|
1,071,564 | <p>Let's a call a directed simple graph $G$ on $n$ labelled vertices <strong>good</strong> if every vertex has outdegree 1 and, when considered as if it were undirected, it is connected. How many good graphs of size $n$ are there?</p>
<p>Here's my work so far. Let's call this number $T(n)$. Clearly, $T(2) = 1$: there's only the loop on two vertices.</p>
<p><img src="https://i.stack.imgur.com/NRt8v.png" alt="T(2)"></p>
<p>We also have that $T(3) = 8$. We can count them using the following argument: let's call a <strong>possible shape</strong> a directed simple graph on $n$ <em>unlabelled</em> vertices which is good. For $n = 3$ we have the following shapes:</p>
<p><img src="https://i.stack.imgur.com/Cbtpp.png" alt="T(3)"></p>
<p>There are $3!$ <em>labelled</em> good graphs of the first shape: fix the outside vertex in 3 possible ways, then fix the loop in two possible ways. There are also $\frac{3!}{3}$ <em>labelled</em> good graphs of the second shape: it's simply the number of cycles on 3 elements. So in total we have: $$T(3) = 3! + \frac{3!}{3} = 8\text{.}$$</p>
<p>We also know that $T(4) = 78$. Let's list all possible shapes:</p>
<p><img src="https://i.stack.imgur.com/SdBDj.png" alt="T(4)"></p>
<p>From top left to bottom right, it's easy to check that we have $4!$ <em>labelled</em> good graphs of the first shape, $2\cdot {4 \choose 2}$ of the second, $2\cdot {4 \choose 2}$ of the third, $4!$ of the fourth and $\frac{4!}{4}$ of the last. In total: $$T(4) = 4! + \left(2\cdot {4 \choose 2} + 2\cdot {4 \choose 2}\right) + 4! + \frac{4!}{4} = 3\cdot 4! + \frac{4!}{4}\text{.}$$</p>
<p><s>I <em>think</em> that $T(5) = 884$, but I won't draw all possible shapes or count their labelings for brevity.</s></p>
<p>I computed $T(5)$ again, and now I get 944. This also invalidates the following conjecture.</p>
<p><strong>CONJECTURE [DISPROVEN]:</strong> I'm <em>conjecturing</em> that there's a simple-ish formula for $T(n)$. It's something like $$T(n) = (2^{n-2} - 1) \cdot n! + \frac{n!}{n} + S(n)$$ where $S(n)$ is some function I currently don't understand such that $S(2) = S(3) = S(4) = 0$, while $S(5) = 5\cdot 4$.</p>
| David | 119,775 | <p>My solution does not agree with your answer for $T(5)$, but let's give it a try anyway . . .</p>
<p>To construct such a graph on $n$ vertices, consider the vertices with indegree $0$. If there are none of these then the graph is a (directed) cycle, and there are $(n-1)!$ possibilities. If there are $k$ specified vertices with indegree $0$, then we obtain our graph by choosing a graph of the required type on $n-k$ vertices, then deciding which of these $n-k$ vertices are to be the targets of the edges from the $k$ vertices of indegree $0$. The number of possibilities is $T(n-k)(n-k)^k$. Now the maximum value of $k$ is $n-2$ (can't be $n-1$ because then the remaining vertex would have to be adjacent to itself, which you do not allow). Therefore, by inclusion/exclusion, we have
$$T(n)=(n-1)!+\sum_{k=1}^{n-2}(-1)^{k-1}\!\!\binom nk T(n-k)(n-k)^k\ .$$
The initial value is $T(2)=1$, and for $n=3,4,5$ this gives $8,78,944$.</p>
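<p>As a quick computational check of this recurrence (a sketch of mine, not part of the original argument; the memoised helper <code>T</code> is hypothetical naming):</p>

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def T(n):
    # (n-1)! directed cycles, plus inclusion/exclusion over the
    # k vertices of indegree 0, following the displayed formula
    if n == 2:
        return 1
    total = factorial(n - 1)
    for k in range(1, n - 1):
        total += (-1) ** (k - 1) * comb(n, k) * T(n - k) * (n - k) ** k
    return total

print([T(n) for n in range(2, 6)])  # [1, 8, 78, 944]
```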
<p>My results seem to match <a href="http://oeis.org/A000435">OEIS A000435</a>, though I can't see any connection with your problem.</p>
<p>Comments please!</p>
|
1,832,320 | <p>I know there are n linearly independent and n + 1 affinely independent vectors in $\mathbb{R}^n$. But how many convexly independent there are?</p>
<p>I think there are infinity number of them because if I have a convex polytope I can always add another point that is "outside" of said polytope. </p>
<p>But I'm not sure if my reasoning is correct.</p>
| Luísa Borsato | 296,426 | <p>You can use that those metrics are equivalent (because they derived from norms). Then, you just have to verify if the sets $B_i$ are open or not in the $d_i$ metric. </p>
|
1,010 | <p>For periodic/symmetric tilings, it seems somewhat "obvious" to me that it just comes down to working out the right group of symmetries for each of the relevant shapes/tiles, but it's not clear to me whether that carries over in any nice algebraic way to more complicated objects such as a <a href="http://en.wikipedia.org/wiki/Penrose_tiling" rel="noreferrer">penrose tiling</a>,
and following <a href="http://en.wikipedia.org/wiki/Quasicrystal" rel="noreferrer">wikipedia</a> just leads to the statement that groupoids come into play, but with no references to example constructions! Moreover, at least naively thinking about it, it seems any such algebraic approach should naturally also apply to fractals.</p>
<ol>
<li>what references am I somehow not able to find that do a good job talking about this further?</li>
<li>is my "intuition" that the mathematical structure for at least some classes of fractals and quasicrystals being equivalent correct?</li>
</ol>
| Danny Calegari | 1,672 | <p>Aperiodic tilings can be thought of (in a sometimes useful way) as leaves of laminations; the groupoid in question (as in Emily's answer) is then the holonomy groupoid of the lamination.</p>
<p>There is a standard description of the Penrose tiles in this way; think of an irrational plane (i.e. an $R^2$) in $R^n$ for some $n>2$, and consider the set of 2-dimensional faces of the $Z^n$ lattice in $R^n$ that intersect a (uniform) thickened tubular neighborhood of your plane. Project each such 2-dimensional face perpendicularly down to your plane; the result is an aperiodic tiling. If the irrational plane happens to be chosen with extra symmetries (e.g. it could be an eigenspace of a finite order element in $GL(n,Z)$) one gets a tile set with extra "partial symmetries". The Penrose tiling is of this kind: think of $Z/5Z$ permuting the coordinate axes in $R^5$. This fixes the vector $(1,1,1,1,1)$ and has two perpendicular irrational eigenspaces on which it acts as an order 5 rotation; translates of these eigenspaces give rise to the "standard" Penrose tilings.</p>
<p>The lamination in this case is the "irrational foliation" of the torus $R^5/Z^5$ by planes with slope equal to the slope of the $R^2$ (and one can easily imagine generalizations).</p>
|
2,418,954 | <p>Using Vieta's formulas, I can get $$\begin{align} \frac{1}{x_1^3} + \frac{1}{x_2^3} + \frac{1}{x_3^3} &= \frac{x_1^3x_2^3 + x_1^3x_3^3 + x_2^3x_3^3}{x_1^3x_2^3x_3^3} \\ &= \frac{x_1^3x_2^3 + x_1^3x_3^3 + x_2^3x_3^3}{\left (-\frac{d}{a} \right)^3}\end{align}$$
But then I don't know how to substitute the numerator.</p>
| Andreas | 317,854 | <p>Let $y = 1/x$. So you ask for the value of $f(y) = y_1^3 + y_2^3 + y_3^3$ where the $y_i$ are solutions of $dy^3 + cy^2 + by + a = 0$. With this equation, write
$$- d f(y) = c (y_1^2 + y_2^2 + y_3^2) + b (y_1 + y_2 + y_3) + 3 a.$$</p>
<p>Note the identity $$y_1^2 + y_2^2 + y_3^2=(y_1+y_2+y_3)^2-2(y_1y_2+y_1y_3+y_2y_3).$$ So you have </p>
<p>$$- d f(y) = c (\color{red}{y_1+y_2+y_3})^2-2c (\color{green}{y_1y_2+y_1y_3+y_2y_3}) + b (\color{red}{y_1 + y_2 + y_3}) + 3 a.$$ </p>
<p>Now Vieta's formulae give (note the order of the constants!)</p>
<p>$$y_1+y_2+y_3 = -\frac cd\quad \text{and}\quad y_1y_2+y_1y_3+y_2y_3 = \frac bd.$$
So you obtain</p>
<p>$$- d f(y) = c \left(\color{red}{-\frac cd}\right)^2-2c \left(\color{green}{\frac bd}\right) + b \left(\color{red}{-\frac cd}\right) + 3 a,$$ </p>
<p>hence the final result</p>
<p>$$
f(y) = y_1^3 + y_2^3 + y_3^3 = - \frac{c^3}{d^3} + \frac{3bc}{d^2} - \frac{3a}d = \frac{-c^3+ 3 bcd - 3 ad^2}{d^3}.
$$</p>
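<p>As a numerical sanity check of the final formula (not part of the derivation; the cubic is taken as $ax^3+bx^2+cx+d=0$ as above, and the sample coefficients are arbitrary):</p>

```python
import numpy as np

# arbitrary sample coefficients for a x^3 + b x^2 + c x + d = 0
a, b, c, d = 1.0, 2.0, 3.0, 4.0
roots = np.roots([a, b, c, d])
lhs = np.sum(1.0 / roots ** 3)                        # sum of 1/x_i^3
rhs = (-c ** 3 + 3 * b * c * d - 3 * a * d ** 2) / d ** 3
assert abs(lhs - rhs) < 1e-9
```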
|
1,477,325 | <p>It is known that an integrable function is a.e. finite. Is an a.e. finite function integrable? What if the measure is finite?</p>
| Giovanni | 263,115 | <p>No, just consider the constant function $1$. It is not integrable on the real line. </p>
<p>You don't even need an unbounded domain. Let $f(x) = \frac 1x$ and integrate over $[0,1]$ to find a counterexample to your statement.</p>
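<p>Both counterexamples can be confirmed symbolically (a quick check with SymPy, assuming it evaluates these improper integrals):</p>

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# f(x) = 1/x is finite at every point of (0, 1], yet not integrable on [0, 1]
assert sp.integrate(1 / x, (x, 0, 1)) == sp.oo
# and on a space of infinite measure, even the constant 1 fails to be integrable
assert sp.integrate(sp.Integer(1), (x, 0, sp.oo)) == sp.oo
```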
|
2,261,500 | <p>I try to prove that statement using only Bachet-Bézout theorem (I know that it's not the best technique). So I get $k$ useful equations with $n_1$ then $(k-1)$ useful equations with $n_2$ ... then $1$ useful equation with $n_{k-1}$. I multiply all these equations to obtain $1$ for one side. For the other side I'm lost (there are too many terms) but I want to make appear the form $n_1 L_1+...+n_k L_k$.</p>
<p>Supposing the existence of all the integers we need :</p>
<p>$\underbrace{(a_1n_1+a_2n_2)(a_1n_1+a_3n_3)...(a_1n_1+a_kn_k)}_{\textit{k equations}} \underbrace{(b_2n_2+b_3n_3)...(b_2n_2+b_kn_k)}_{\textit{(k-1) equations}}...\underbrace{(\mu_{k-1} n_{k-1}+\mu_{k} n_k)}_{\textit{1 equation}}=1$</p>
<p>Maybe we can reduce the number of useful equations or start an induction to identify a better form for the product.</p>
<p>Thanks in advance !</p>
| Tengu | 58,951 | <p>The condition $\forall i \ne j \in \{1,2 \ldots, k \}, \gcd (n_i,n_j)=1$ is stronger than needed. A <strong>single</strong> pair $i \ne j$ with $\gcd(n_i,n_j)=1$ is already enough to imply $\gcd (n_1, \ldots, n_k)=1$.</p>
<p>Indeed, let $d=\gcd (n_1, \ldots, n_k)$ then $d \mid n_i, d \mid n_j$ so $d \mid \gcd (n_i,n_j)=1$ so $d=1$. Thus, $\gcd (n_1, \ldots, n_k)=1$.</p>
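<p>A small illustrative example (the numbers are mine, chosen only so that one pair is coprime):</p>

```python
from functools import reduce
from math import gcd

ns = [4, 9, 6]                   # gcd(4, 9) = 1, the other pairs are not coprime
assert gcd(ns[0], ns[1]) == 1    # one coprime pair...
assert reduce(gcd, ns) == 1      # ...already forces the overall gcd to be 1
```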
|
1,637,748 | <p>I was given the following thing to prove:</p>
<p>$$\lim_{n \to \infty} {d(n) \over n} = 0$$
where $d(n)$ is the number of divisors of n.</p>
<p>I'm not sure how to approach this question. One way I thought of is to use the unique factorization theorem to turn the expression into:</p>
<p>$$\lim_{n \to \infty} {\prod (x_i + 1) \over \prod p_i^{x_i}}$$</p>
<p>And then to use <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule">L'Hôpital's rule</a> for each $x_i$, so I get something like this:</p>
<p>$$\lim_{n \to \infty} {1 \over \ln (\sum p_i) \prod p_i^{x_i}}$$</p>
<p>That equals zero.</p>
<ol>
<li>Is this a good approach?</li>
<li>Is there a different way to solve this?</li>
</ol>
| wythagoras | 236,048 | <p><strong>Hint:</strong> For every divisor $p$ of $n$ greater than $\sqrt{n}$, there is a divisor $q$ smaller than $\sqrt{n}$ such that $n=pq$. It follows that we can divide the divisors into pairs where one of the elements is smaller than $\sqrt{n}$, and hence $d(n) \leq 2\sqrt{n}$. Can you use this to evaluate the limit?</p>
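<p>The pairing in the hint also gives a way to count divisors, and the bound $d(n) \leq 2\sqrt{n}$ is easy to check numerically (a sketch; the helper <code>d</code> is my own naming):</p>

```python
def d(n):
    """Count the divisors of n by pairing each divisor p <= sqrt(n) with n // p."""
    count, p = 0, 1
    while p * p <= n:
        if n % p == 0:
            count += 1 if p * p == n else 2  # perfect squares pair p with itself
        p += 1
    return count

for n in (10, 100, 10_000, 1_000_000):
    assert d(n) <= 2 * n ** 0.5
    print(n, d(n) / n)   # the ratio shrinks toward 0
```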
|
3,368,402 | <p>I am utilizing set identities to prove <span class="math-container">$(A-B)-C = (A-C)-(B-C)$</span>.</p>
<p><span class="math-container">$\begin{array}{|l}(A−B)− C = \{ x | x \in ((x\in (A \cap \bar{B})) \cap \bar{C}\} \quad \text{Def. of Set Minus}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{B})) \wedge (x\in\bar{C})\} \quad \text{Def. of intersection}
\\ \quad \quad \quad \quad \quad =\{ x | (A\wedge\overline{C}\wedge\overline{B})\vee(\overline{C}\wedge\overline{B}\wedge C)\} \quad \text{Association Law}
\\
\quad \quad \quad \quad \quad =\{ x | ((x\in A) \wedge (x\in\bar{C})) \wedge ((x\in \bar{B}) \wedge (x\in\bar{C}))\} \quad \text{Idempotent Law}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap\bar{C})) \cap (x\in (\bar{B} \cap\bar{C})))\} \quad \text{Def. of union}
\\
\quad \quad \quad \quad \quad =\{ x | (((x\in (A\cap \bar{C})) \cap \overline{(x\in (B\cup C)))} \} \quad \text{DeMorgan's Law}
\\
\quad \quad \quad \quad \quad =\{ x | x \in (A - C) - (B \cup C) \} \quad \text{Def. Set Minus}
\\
=(A-C)-(B-C)
\end{array}$</span></p>
<p>So it looks like I screwed up on the final step. Is there something that I am forgetting to do properly or where am I supposed to go from that final step? </p>
| Matthew Leingang | 2,785 | <p>You might also just try to prove this in words. Suppose <span class="math-container">$x \in (A-B)-C$</span>.</p>
<ul>
<li>Then <span class="math-container">$x \in (A-B)$</span> and <span class="math-container">$x \notin C$</span>.</li>
<li>Since <span class="math-container">$x \in (A-B)$</span>, we know <span class="math-container">$x \in A$</span> and <span class="math-container">$x \notin B$</span>.</li>
<li>Since <span class="math-container">$x \in A$</span> and <span class="math-container">$x \notin C$</span>, we know <span class="math-container">$x \in (A-C)$</span>.</li>
<li>Since <span class="math-container">$x \notin B$</span>, we know <span class="math-container">$x \notin (B-C)$</span>.</li>
<li>Since <span class="math-container">$x \in (A-C)$</span> and <span class="math-container">$x \notin (B-C)$</span>, we know <span class="math-container">$x \in ((A-C)-(B-C))$</span>.</li>
</ul>
<p>Since this is true for all <span class="math-container">$x \in (A-B)-C$</span>, we know
<span class="math-container">$$
(A-B)-C \subseteq ((A-C)-(B-C))
$$</span></p>
<p>Now do it the other way around.</p>
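<p>Independently of the proof, you can brute-force the identity over all subsets of a small universe to convince yourself it holds (my own check, not part of the exercise):</p>

```python
from itertools import combinations

def powerset(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# exhaustive check over every triple of subsets of a 3-element universe
U = [0, 1, 2]
for A in powerset(U):
    for B in powerset(U):
        for C in powerset(U):
            assert (A - B) - C == (A - C) - (B - C)
print("identity holds for all", len(powerset(U)) ** 3, "triples")  # 512 triples
```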
|
440,615 | <p>Let $R$ be the region in the first quadrant bounded above by the circle $(x-1)^2 + y^2 = 1$ and below by the line $y = x$ . Sketch the region $R$ and evaluate the double integral $\iint 2y \;\mathrm dA$ . </p>
| Y.H. Chan | 71,563 | <p>First you need to find the region: you can see it <a href="http://www.wolframalpha.com/input/?i=%28x-1%29%5E2%2By%5E2%3D1+and+y%3Dx+and+x%3D0+and+x%3D1" rel="nofollow">here</a>. The minor segment is your required region, and it is bounded by
$$x=0$$$$x=1$$$$y=x$$$$y=\sqrt{1-(x-1)^2}=\sqrt{2x-x^2}$$
so all you need to do is change $dA$ into $dydx$ and put those upper and lower limits on the integrals.</p>
<p>$$\iint 2y\;\mathrm dA=\int^1_0\int^{\sqrt{2x-x^2}}_x 2y\;\mathrm dy\mathrm dx=\int^1_02x-2x^2\:\mathrm dx=\frac13$$</p>
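<p>A quick numerical confirmation with SciPy (a sketch; the lambda bounds mirror the limits above):</p>

```python
from scipy.integrate import dblquad

# dblquad integrates f(y, x) with y running from gfun(x) to hfun(x)
val, err = dblquad(lambda y, x: 2 * y, 0, 1,
                   lambda x: x, lambda x: (2 * x - x * x) ** 0.5)
assert abs(val - 1 / 3) < 1e-8
```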
|
2,137,706 | <p>$$\sum_{k=0}^m {n \choose k} {n-k \choose m-k} = 2^m {n \choose m}, m<n$$</p>
<p>I know that $2^m$ represents the number of subsets of a set of length $m$, which I can see there being a connection to the ${n \choose k}$ term, but I can't see how the combination it's multiplied by affects this.</p>
| Nicolás Vilches | 413,494 | <h1>A "double-counting" proof</h1>
<p>Let $m<n$ be two positive integers. We have $n$ objects, and we want to choose $m$ of them to paint them, then choose some of the painted objects (possibly none of them) to store them on a box. We have two ways to do this</p>
<p>First, we can select the $m$ objects to be painted ($\binom{n}{m}$ ways), and then the subset to be placed inside the box ($2^m$ ways). The multiplicative principle gives us</p>
<p>$$ \binom{n}{m}2^m $$</p>
<p>to do this. Another way is to first select the number of objects we want to put in the box, $k$ (of course, $0\leq k \leq m$), then select the box-objects ($\binom{n}{k}$ ways), and then the rest of the painted objects ($m-k$ objects), in $\binom{n-k}{m-k}$ ways. This way, we have</p>
<p>$$ \binom{n}{k} \binom{n-k}{m-k}$$ </p>
<p>ways to do this, and adding the (disjoint) cases for $k=0, 1, \dots, m$ we have</p>
<p>$$ \sum_{k=0}^m \binom{n}{k} \binom{n-k}{m-k}$$ </p>
<p>options. We counted the same number in two different ways, so they have to be equal:</p>
<p>$$ \sum_{k=0}^m \binom{n}{k} \binom{n-k}{m-k} = 2^m \binom{n}{m} $$</p>
<h1>An algebraic proof</h1>
<p>$$\begin{align*}
& \sum_{k=0}^m \binom{n}{k} \binom{n-k}{m-k} \\
=& \sum_{k=0}^m \frac{n!}{k! (n-k)!}\cdot \frac{(n-k)!}{(m-k)!(n-m)!} \\
=& \sum_{k=0}^m \frac{n!}{k!(m-k)!(n-m)!}\\
=& \sum_{k=0}^m \frac{n!}{(n-m)!m!}\cdot \frac{m!}{k!(m-k)!} \\
=& \sum_{k=0}^m \binom{n}{m} \binom{m}{k} \\
=& \binom{n}{m} \sum_{k=0}^m \binom{m}{k} \\
=& \binom{n}{m} \cdot 2^m
\end{align*} $$</p>
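<p>Both proofs can be backed up by a brute-force check for small $n$ and $m$ (my own verification, not part of either argument):</p>

```python
from math import comb

# verify the identity for all m < n up to a small bound
for n in range(1, 12):
    for m in range(n):
        lhs = sum(comb(n, k) * comb(n - k, m - k) for k in range(m + 1))
        assert lhs == 2 ** m * comb(n, m)
```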
|
1,310,233 | <p>Fundamental theorem of calculus states that the derivative of the
integral is the original function, meaning that
$$
f(x)=\frac{d}{dx}\int_{a}^{x}f(y)dy.\tag{*}
$$
To motivate the statement of the Lebesgue differentiation theorem, observe
that (*) may be written in terms of symmetric differences as
$$
f(x)=\lim_{r\to 0^+}\frac{1}{2r}\int_{x-r}^{x+r}f(y)dy.\tag{**}
$$
An $n$-dimensional version of (**) is
$$
f(x)=\lim_{r\to 0^+}\frac{1}{|B(x,r)|}\int_{B(x,r)}f(y)dy.\tag{***}
$$</p>
<p>where the integral is with respect $n$-dimensional Lebesgue measure. The Lebesgue differentiation theorem states that (***) holds pointwise $\mu$-a.e. for any locally integrable function $f$.</p>
<p>My question is how could we write (**) by using (*) ? If we define $F(x)=\int_{a}^{x}f(y)dy$. The quotient
$$
\frac{F(x+r)-F(x)}{r}=\frac{\int_{a}^{x+r}f(y)dy-\int_{a}^{x}f(y)dy}{r}=\frac{1}{r}\int_{x}^{x+r}f(y)dy
$$
How could we say that
$$\frac{1}{r}\int_{x}^{x+r}f(y)dy\overset{?}{=}\frac{1}{2r}\int_{x-r}^{x+r}f(y)dy$$ </p>
| aexl | 33,764 | <p>Let $g: \Bbb R \to \Bbb R$ be differentiable at a point $x \in \Bbb R$, i.e. the limit
$$ g'(x) = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} $$
exists.
So it follows, that
$$\begin{align*} \frac{g(x+h) - g(x-h)}{2h}
&= \frac{\left[ g(x+h) - g(x) \right] + \left[ g(x) - g(x-h) \right]}{2h} \\
&= \frac 1 2 \frac{g(x+h) - g(x)}{h} + \frac 1 2 \frac{g(x + (-h)) - g(x)}{(-h)} \\
&\to \frac 1 2 g'(x) + \frac 1 2 g'(x) = g'(x) \quad \text{for } h \to 0 \; .
\end{align*}$$
For the rest, see John's answer.</p>
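<p>A numerical illustration of the same fact (the choice $g = \sin$, so $g' = \cos$, and the evaluation point are mine):</p>

```python
import numpy as np

# compare the one-sided and symmetric difference quotients of g = sin at x = 0.3
x, r = 0.3, 1e-6
one_sided = (np.sin(x + r) - np.sin(x)) / r
symmetric = (np.sin(x + r) - np.sin(x - r)) / (2 * r)
# both tend to g'(x) = cos(x); the symmetric quotient converges faster (O(r^2) vs O(r))
assert abs(symmetric - np.cos(x)) < 1e-8
assert abs(one_sided - np.cos(x)) < 1e-5
```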
|