| qid | question | author | author_id | answer |
|---|---|---|---|---|
69,542 | <p>My first question here would fall into the 'ask Johnson' category if there was one (no pressure Bill). I'm interested in constructing a uniformly convex Banach space with conditional structure without using interpolation. The constructions of Ferenczi and Maurey-Rosenthal both use interpolation. </p>
<p>Using existing methods for constructing spaces with conditional structure I think it is possible to construct a hereditarily indecomposable space whose natural basis satisfies a lower $\ell_2$ estimate on any $n$ disjointly supported block vectors supported after the $n^{th}$ position on the basis, and an upper $\ell_2$ estimate on all finite block sequences. The space $X$ is sure to be reflexive and probably doesn't contain $\ell_\infty$ finitely represented. </p>
<p>I would like to have some way of showing that $X$ is uniformly convex and this is where I'm stuck. Perhaps one could show that $\ell_1$ is not finitely represented in $X$ but as far as I can see this is not good enough (or is it?). </p>
<p>My question: If a space is reflexive and does not contain $\ell_1$ finitely represented is it necessarily uniformly convex? </p>
<p>I suspect the answer is no but I don't have a counterexample. </p>
<p>Another question: Are there any known conditions on a basis, which (1) do not imply the basis is unconditional and (2) do imply the space is uniformly convex? </p>
| Adi Tcaciuc | 7,872 | <p>I think James also showed that if $X$ does not contain almost isometric copies of $\ell_1^2$ (he called such a space uniformly non-square) then $X$ <strong>is</strong> superreflexive. This is no longer true for $n>2$, as James later constructed a non-reflexive, uniformly non-octahedral (no almost isometric copies of $\ell_1^3$) space, thus also having non-trivial type. </p>
<p>Maybe you can check whether your space is uniformly non-square. Connecting it with your last question, I think that you would have to verify that $\exists \delta>0$ such that for any normalized block vectors $x$ and $y$ (but not necessarily disjointly supported) there exists a choice of signs such that $||x\pm y||<2-\delta.$ I don't think this condition implies unconditionality. </p>
<p>Hopefully this makes sense...</p>
|
139,232 | <p>Let $O$ be an operad in $\mathtt{SETS}$. Assume that $O(0)$ is empty and $O(1)$ only consists of the identity. Assume for simplicity that $O$ is monochromatic, i.e. we have no labels on the in/outputs. Assume also for simplicity that the operad is plain, i.e. neither symmetric nor braided. So the operads in question consist of a set of $n$-ary operations $O(n)$ for each $n\in\mathbb{N}$ together with an associative composition and there is a unit element in $O(1)$ (but no more elements, as required above).</p>
<p>Now $O$ freely generates a monoidal category $S(O)$: The objects are natural numbers and an arrow from $m$ to $n$ consists of a sequence of operations in $O$ with a total of $m$ inputs and a total of $n$ outputs. For example if $a\in O(3)$ and $b\in O(5)$, then $(a,b)$ is an arrow from $3+5=8$ to $2$. Composition is given by composition in the operad.</p>
<p>I know that $S(O)$ is aspherical when $O$ is free and also in some other special cases. Here I consider categories as spaces via the usual geometric realization, i.e. the geometric realization of the nerve of the category.</p>
<p>Question: Is the category $S(O)$ always aspherical?</p>
| James Griffin | 110 | <p>This needs some checking, but for the free non-symmetric operad generated by a single operation of arity 2, the category (PRO) you get is, I think, the free monoidal category generated by a single object $X$ and a single morphism $\alpha$ from $X^2$ to $X$.</p>
<p>But the nerve of this category is a classifying space for Thompson's group F. That its fundamental group is F can be seen from <a href="http://arxiv.org/abs/math/0508617" rel="nofollow">http://arxiv.org/abs/math/0508617</a>. I seem to recall that the nerve is actually a locally CAT(0) complex, which implies that it is a classifying space. One reference of interest is Guba and Sapir's "Diagram Groups"; see also Brown's <a href="http://www.math.cornell.edu/~kbrown/papers/homology.pdf" rel="nofollow">http://www.math.cornell.edu/~kbrown/papers/homology.pdf</a>. The monoidal structure means that although F isn't abelian, the homology has a (non-unital) Pontryagin product.</p>
<p>So these categories are perhaps better studied than you may have guessed!</p>
|
<p>Given $n+1$ data pairs $(x_0,y_0),\dots,(x_n,y_n)$, for $j=0,1,2,\dots,n$ we have
$p_j=\prod_{i\neq j}(x_j-x_i)$ and $\psi(x)=\prod_{i=0}^n(x-x_i)$.</p>
<p>I am having trouble determining what $\psi(x_j)$ is and what $\psi'(x_j)$ would be. </p>
<p>I feel like $\psi(x_j)= 0$ because it would contain the $x_j-x_j$ term... But I feel like I am missing something...</p>
| Svinto | 263,547 | <p><strong>Hint:</strong> Which function does $f_n$ converge pointwise to? Is it continuous?</p>
|
4,481,695 | <p>I tried substituting <span class="math-container">$x+3$</span> to see if I could simplify in any way, but couldn't think of anything. Also tried using <span class="math-container">$\ln$</span> and <span class="math-container">$\exp$</span>, but in the end just got to <span class="math-container">$\ln(0)$</span>. Can someone give me a tip?</p>
| Átila Correia | 953,679 | <p><strong>HINT</strong></p>
<p>Are you acquainted with the definition of the derivative?</p>
<p><span class="math-container">\begin{align*}
\lim_{x\to-3}\frac{4^{\frac{x+3}{5}} - 1}{x + 3} = \lim_{x\to-3}\frac{4^{\frac{x+3}{5}} - 4^{\frac{-3 + 3}{5}}}{x - (-3)}
\end{align*}</span></p>
<p>Can you take it from here?</p>
|
2,480,528 | <blockquote>
<p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$ then prove it. </p>
</blockquote>
<p>I assumed that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$ after working out a few cases from above, and then I tried to prove it by induction. Would this be a fair approach, or are there other approaches that would work? </p>
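Before attempting the induction, the conjectured closed form can be sanity-checked for small $n$ with exact rational arithmetic (a quick Python sketch; the function name is just for illustration):

```python
from fractions import Fraction

def prod_formula(n):
    """Compute prod_{i=1}^{2n-1} (1 - (-1)^i / i) exactly."""
    p = Fraction(1)
    for i in range(1, 2 * n):
        p *= 1 - Fraction((-1) ** i, i)
    return p

# The conjectured value is 2n/(2n-1); check it for the first several n.
for n in range(1, 30):
    assert prod_formula(n) == Fraction(2 * n, 2 * n - 1)
```

This confirms the pattern (e.g. $n=2$ gives $2\cdot\frac12\cdot\frac43=\frac43$), so induction is a reasonable route.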
| Robert Z | 299,698 | <p>Something went wrong in your evaluation.
After the substitution $z=a+ib$, you should have
$$(1+a)^2+b^2=|1+a+ib|^2=|1-b-ia|^2=(1-b)^2+(-a)^2$$
that is, after a few simplifications, $a=-b$.</p>
|
4,066,512 | <p>Find <span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}$</span>.</p>
<p>I don't know how to handle double summations like this very well. Can someone expand this to show how the <span class="math-container">$i=j$</span> thing works?</p>
<p>I tried the following:
<span class="math-container">${n \choose i}{i \choose j}={n \choose j}{n-j \choose i-j}$</span></p>
<p><span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}=\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose j}{n-j \choose i-j}=\sum_{j=0}^{n} {n \choose j}{n-j \choose 0}=2^n$</span></p>
<p>Where am I going wrong?</p>
| robjohn | 13,854 | <p><span class="math-container">$$
\begin{align}
\sum\limits_{i=j}^n\binom{n-j}{i-j}
&=\sum\limits_{i=0}^{n-j}\binom{n-j}{i}\tag1\\[3pt]
&=2^{n-j}\tag2
\end{align}
$$</span>
Explanation:<br />
<span class="math-container">$(1)$</span>: substitute <span class="math-container">$i\mapsto i+j$</span><br />
<span class="math-container">$(2)$</span>: evaluate <span class="math-container">$(1+1)^{n-j}$</span> with the Binomial Theorem</p>
<p>If step <span class="math-container">$(1)$</span> is confusing, break it into two steps:</p>
<ol>
<li><span class="math-container">$i\mapsto k+j$</span>: since <span class="math-container">$i$</span> ranges from <span class="math-container">$j$</span> to <span class="math-container">$n$</span>, <span class="math-container">$k=i-j$</span> ranges from <span class="math-container">$0$</span> to <span class="math-container">$n-j$</span></li>
<li><span class="math-container">$k\mapsto i$</span>: simply change the variable of summation back</li>
</ol>
<p>Thus, this sum is <span class="math-container">$2^{n-j}$</span>, not <span class="math-container">$\binom{n-j}{0}$</span>.</p>
<p>Now, the rest is evaluating either
<span class="math-container">$$
2^n\sum_{j=0}^n\color{#C00}{\binom{n}{j}}2^{-j}=2^n\left(1+2^{-1}\right)^n\tag{3a}
$$</span>
or
<span class="math-container">$$
\sum_{j=0}^n\color{#C00}{\binom{n}{n-j}}2^{n-j}=(1+2)^n\tag{3b}
$$</span>
with the Binomial Theorem (the parts in red are equal).</p>
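Both step $(2)$ and the final value can be checked numerically for small $n$ (a quick Python sketch):

```python
from math import comb

def double_sum(n):
    """The original double sum: over j, then i from j to n."""
    return sum(comb(n, i) * comb(i, j)
               for j in range(n + 1) for i in range(j, n + 1))

for n in range(8):
    for j in range(n + 1):
        # step (2): the inner sum collapses to 2^(n-j), not C(n-j, 0)
        assert sum(comb(n - j, i - j) for i in range(j, n + 1)) == 2 ** (n - j)
    # the full double sum equals 3^n, matching (3a)/(3b)
    assert double_sum(n) == 3 ** n
```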
|
4,066,512 | <p>Find <span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}$</span>.</p>
<p>I don't know how to handle double summations like this very well. Can someone expand this to show how the <span class="math-container">$i=j$</span> thing works?</p>
<p>I tried the following:
<span class="math-container">${n \choose i}{i \choose j}={n \choose j}{n-j \choose i-j}$</span></p>
<p><span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}=\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose j}{n-j \choose i-j}=\sum_{j=0}^{n} {n \choose j}{n-j \choose 0}=2^n$</span></p>
<p>Where am I going wrong?</p>
| crankk | 202,579 | <p>Change the order of summation (overall there are only finitely many summands). For that note that</p>
<p><span class="math-container">\begin{align*}
M:&=\{ (i,j)\in\mathbb{N}^2~:~j=0,...,n~~i=j,...,n\}\\
&= \{(i,j)\in \mathbb{N}^2~:~i=0,...,n~~j=0,...,i\}.
\end{align*}</span></p>
<p>Therefore we can change the representation of <span class="math-container">$M$</span> and obtain</p>
<p><span class="math-container">\begin{align*}
\sum_{j=0}^n \sum_{i=j}^n {n\choose i}{i\choose j} &= \sum_{(i,j)\in M} {n \choose i}{i \choose j}= \sum_{i=0}^n \sum_{j=0}^i {n \choose i}{i \choose j}\\
&= \sum_{i=0}^n {n \choose i} \sum_{j=0}^i {i\choose j} = \sum_{i=0}^n {n\choose i} 2^i = (1+2)^n = 3^n.
\end{align*}</span></p>
|
3,382,241 | <p>I am trying to find the smallest <span class="math-container">$n \in \mathbb{N}\setminus \{ 0 \}$</span>, such that <span class="math-container">$n = 2 x^2 = 3y^3 = 5 z^5$</span>, for <span class="math-container">$x,y,z \in \mathbb{Z}$</span>. Is there a way to prove this by the Chinese Remainder Theorem?</p>
| Sam | 616,072 | <p>First of all, the number of ways in which you can fill 5 distinct boxes with 25 identical balls when none of them is empty will be <span class="math-container">$25-5+5-1\choose 5-1$</span> or <span class="math-container">${24\choose 4} = 10626$</span>.</p>
<p>If you want to use the inclusion-exclusion principle: total ways to fill the boxes <span class="math-container">$={25+5-1\choose 5-1}={25\choose 4}=23751$</span></p>
<p>Ways in which 1 box is empty and rest are filled <span class="math-container">$={5\choose 1}\times {24\choose 3}=10120$</span></p>
<p>Ways in which 2 boxes are empty and rest are filled <span class="math-container">$={5\choose 2}\times {24\choose 2}=2760$</span></p>
<p>Ways in which 3 boxes are empty and rest are filled <span class="math-container">$={5\choose 3}\times {24\choose 1}=240$</span></p>
<p>Ways in which 4 boxes are empty and rest are filled <span class="math-container">$={5\choose 4}\times {24\choose 0}=5$</span></p>
<p>Total number of ways to fill the boxes <span class="math-container">$=23751-10120-2760-240-5=10626$</span></p>
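Each subtracted term above counts distributions with exactly $k$ empty boxes, so a single subtraction suffices; this agrees with the direct stars-and-bars count (a small Python check):

```python
from math import comb

total = comb(25 + 5 - 1, 5 - 1)   # all distributions: C(29, 4) = 23751
# exactly k empty boxes: choose the k empty ones, fill the rest with no box empty
exactly_empty = [comb(5, k) * comb(24, 4 - k) for k in range(1, 5)]

assert total == 23751
assert exactly_empty == [10120, 2760, 240, 5]
assert total - sum(exactly_empty) == comb(24, 4) == 10626
```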
|
2,800,015 | <p>Prove that $p(x)=\frac{6}{(\pi x)^2}$ for $x=1,2,\dots$ is a probability function, and that $E[X]$ doesn't exist.</p>
<p><b> My work </b></p>
<p>I know $\sum _{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$</p>
<p>Moreover,</p>
<p>$p(1)=\frac{6}{\pi^2}$<br>
$p(2)=\frac{6}{4\pi^2}$<br>
$p(3)=\frac{6}{9\pi^2}$<br>
$p(4)=\frac{6}{16\pi^2}$<br>
$\vdots$</p>
<p>Then, to prove that $p$ is a probability function I need to prove</p>
<p>$\sum_{x=1}^{\infty}\frac{6}{(\pi x)^2}=1
$</p>
<p>Indeed,
$\sum_{x=1}^{\infty}\frac{6}{(\pi x)^2}=\sum_{x=1}^{\infty}\frac{6}{\pi^2 x^2}=\frac{6}{\pi^2}\sum_{x=1}^{\infty}\frac{1}{x^2}=\frac{6}{\pi^2}\times\frac{\pi^2}{6}=1$</p>
<p>In consequence,
$p$ is a probability function.</p>
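(A quick numeric check of the normalization, which is the sum $\sum_{x\ge 1} p(x)$: the partial sums approach $1$. A short Python sketch:)

```python
import math

def p(x):
    return 6 / (math.pi * x) ** 2

# The tail beyond N is about 6/(pi^2 N), so partial sums approach 1.
partial = sum(p(x) for x in range(1, 100001))
assert abs(partial - 1) < 1e-4
```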
<blockquote>
<p>Moreover, I need to prove that $E[X]$ doesn't exist.</p>
</blockquote>
<p>Here I'm a little stuck. Can someone help me?</p>
| LeoCenturion | 565,209 | <p>$\mathbb{E}[X] = \sum_{x=1}^\infty x\,\frac{6}{\pi^2 x^2} = \frac{6}{\pi^2} \sum_{x=1}^\infty \frac{1}{x}$, which is the harmonic series and is divergent. Thus the random variable $X$ with the probability function described above doesn't have a finite expected value. </p>
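The divergence can also be seen numerically: the partial sums of $\mathbb{E}[X]$ are $\frac{6}{\pi^2}H_N$, which grow like $\frac{6}{\pi^2}\ln N$ (a short Python illustration):

```python
import math

def partial_expectation(N):
    return sum(x * 6 / (math.pi ** 2 * x ** 2) for x in range(1, N + 1))

# Partial sums keep growing, roughly like (6/pi^2)(ln N + gamma),
# so there is no finite expectation.
gamma = 0.5772156649015329  # Euler-Mascheroni constant
for N in (10, 100, 1000, 10000):
    approx = 6 / math.pi ** 2 * (math.log(N) + gamma)
    assert abs(partial_expectation(N) - approx) < 0.05
```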
|
306,588 | <p>I'll first explain what Mobius inversion says, and then state what I am fairly sure the equivariant version is. I can write out a proof, but I also can't believe this hasn't been done already; this is a request for references to where it has already been done.</p>
<p><b>Ordinary Mobius Inversion</b> Let $P$ be a finite poset with minimal element $0$. Let $u$ be a function $P \to \mathbb{Z}$ and define $v: P \to \mathbb{Z}$ by $v(p) = \sum_{q \geq p} u(q)$. Mobius inversion aims to recover $u(0)$ from the values of $v$. It says that $u(0) = \sum_{q \in P} \mu(q) v(q)$. The function $\mu : P \to \mathbb{Z}$ can be described topologically: Let $(0,q)$ be the poset $\{ r \in P : 0 < r < q \}$ and let $\Delta((0,q))$ be the order complex, which is the simplicial complex whose faces are totally ordered subsets of $(0,q)$. Then $\mu(q)$ is the reduced Euler characteristic of $\Delta((0,q))$. </p>
<p><b>The equivariant situation</b> Let $P$ be a finite poset with minimal element $0$ and let $G$ be a group acting on $P$. For each $p \in P$, let $U(p)$ be a finite dimensional $\mathbb{C}$-vector space. Define $V(p) : = \bigoplus_{q \geq p} U(q)$, so $V(0) = \bigoplus_p U(p)$. Let $G$ act on $V(0)$, with $g U(p)= U(gp)$. My goal is to recover the class of $U(0)$, in the representation ring $Rep(G)$, from the $V(p)$'s. </p>
<p>For $p \in P$, let $G_p$ be the stabilizer of $p$, so $U(p)$ and $V(p)$ are $G_p$-reps. Let $\mu_{eq}(q)$ be the equivariant reduced Euler characteristic of $q$, meaning the sum $\sum (-1)^j [\tilde{H}^j(\Delta((0,q)))]$ computed in the representation ring $Rep(G_q)$. Let $G\backslash P$ be a set of orbit representatives for $G$ acting on $P$. </p>
<p>Then I claim that
$$[U(0)] = \sum_{q \in G \backslash P} \mathrm{Ind}_{G_q}^G \left[ \mu_{eq}(q) \otimes V(q) \right]$$
in $Rep(G)$.</p>
<p>Has anyone seen this before?</p>
| Cihan | 21,848 | <p>This is more of a long comment. I am not sure I understand your construction, but the sort of alternating sum you take has a preimage in the Burnside ring, and is often called the "Lefschetz invariant" by finite group theorists. One of the Representation and Cohomology books by Benson has a section about this. Also see this <a href="https://ac.els-cdn.com/0097316587900781/1-s2.0-0097316587900781-main.pdf?_tid=19eca468-6f74-484e-8dc9-7fa3f37f571b&acdnat=1532610135_e9590fac7ba3655380e7467bfdaac055" rel="nofollow noreferrer">paper</a> by Thévenaz.</p>
|
3,597,172 | <h2>The problem</h2>
<p>Let <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span></p>
<p>Determine <span class="math-container">$f(x)$</span> knowing that </p>
<p><span class="math-container">$ 3f(x) + 2 = 2f(\left \lfloor{x}\right \rfloor) + 2f(\{x\}) + 5x $</span>, where <span class="math-container">$ \left \lfloor{x}\right \rfloor $</span> is the floor function and <span class="math-container">$\{x\} = x - \left \lfloor{x}\right \rfloor$</span> (also known as the fractional part)</p>
<h2>My thoughts</h2>
<p>We can observe that for <span class="math-container">$x = 0$</span> we obtain <span class="math-container">$f(0) = 2$</span>.</p>
<p>Considering <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> we get <span class="math-container">$ 3f(\left \lfloor{x}\right \rfloor) + 2 = 2f(\left \lfloor\left \lfloor{x}\right \rfloor\right \rfloor) + 2f(\{\left \lfloor{x}\right \rfloor\}) + 5\left \lfloor{x}\right \rfloor $</span></p>
<p>And for <span class="math-container">$f(\{x\})$</span> we get <span class="math-container">$ 3f(\{x\}) + 2 = 2f(\left \lfloor\{x\}\right \rfloor) + 2f(\{\{x\}\}) + 5\{x\} $</span></p>
<p>I did this in the hope of defining <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> and <span class="math-container">$f(\{x\})$</span> and thus replacing them in the initial condition.</p>
| Ilmari Karonen | 9,602 | <p>Just in case, check the conventions your textbook is using, and specifically whether matrices conventionally act from left (on column vectors) or from right (on row vectors).</p>
<p>In particular, for your matrix <span class="math-container">$$A = \begin{bmatrix}1 & -2 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$</span> the map <span class="math-container">$v \mapsto Av$</span> (where <span class="math-container">$v$</span> is a column vector) is surjective but not injective, while the map <span class="math-container">$u \mapsto uA$</span> (where <span class="math-container">$u$</span> is a row vector) is injective but not surjective.</p>
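Concrete witnesses for the column-vector case can be checked directly (a small self-contained Python sketch; <code>matvec</code> is just a helper defined here):

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, -2, 3, 0],
     [0,  0, 0, 1]]

# Not injective: a nonzero vector in the kernel of v -> Av.
assert matvec(A, [2, 1, 0, 0]) == [0, 0]
# Surjective: preimages of the standard basis vectors of R^2.
assert matvec(A, [1, 0, 0, 0]) == [1, 0]
assert matvec(A, [0, 0, 0, 1]) == [0, 1]
```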
|
271,255 | <p>I'm reading <em>Mathematica Programming</em> by Leonid Shifrin. And it said</p>
<blockquote>
<p><code>ClearAll</code> serves to clear all definitions (including attributes) for a given symbol (or symbols), and not to clear definitions of all global symbols in the system (it is a common mistake to mix these two things).</p>
</blockquote>
<p>So what is the difference between these two things, namely clearing the definitions of a given symbol and clearing the definitions of all global symbols?</p>
| MarcoB | 27,951 | <p>Leonid, I think, was cautioning against using <code>ClearAll[]</code> alone in hopes that this would clear all definitions and provide a "clean slate". The closest equivalent to that might be <code>ClearAll["Global`*"]</code>, although that still only clears user-generated definitions and not those in other contexts. Or you could use <code>Quit[]</code> or the menu entries to quit the currently running kernel and start afresh, for a more thorough cleanup that will forget all transient definitions, including packages loaded, modifications to other contexts, et cetera.</p>
<p>In other words: <code>ClearAll</code> is a "more powerful" version of <code>Clear</code>, but it still works only <em>for the specific symbol(s) provided</em> in its input. It does not, perhaps confusingly, "clear all symbols" in the session. Instead, it "clears all properties" of a given symbol.</p>
<hr />
<p>It may be worth mentioning, for completeness, that <code>Clear</code> clears values of symbols, but it does not clear attributes, messages, or defaults associated with them; <code>ClearAll</code> clears those as well. You may also be interested in the discussion in <a href="https://mathematica.stackexchange.com/q/113783/27951">Good clearing practices</a> on this forum.</p>
|
812,939 | <p>How can one find the subsequential limits of the sequence $\sin n$ ($n$ a natural number)? I know that the set of such limits is $[-1,1]$, but how to prove it?</p>
<p><strong>Edit:</strong>
Is it true that $\sin(n)$ takes a different value for every natural number $n$? How to prove it? If that is true, it is possible to find an injection from $\mathbb{N}$ into $[-1,1]$. </p>
| André Nicolas | 6,312 | <p><strong>Outline:</strong> Consider the points $P(n)=(\cos n,\sin n)$ as $n$ ranges over the positive integers. It is enough to show that this set of points is dense in the unit circle. </p>
<p>Since $\pi$ is irrational, we have $P(m)\ne P(n)$ if $m\ne n$. It follows by the Pigeonhole Principle that given any $\epsilon \gt 0$, there exist $m_0$ and $n_0$, with $m_0\lt n_0$, such that $0\lt d(P(m_0),P(n_0))\lt \epsilon$. In particular, $0 \lt |\sin(m_0) -\sin(n_0)|\lt \epsilon$. </p>
<p>Then for any point $P$ on the unit circle, there are infinitely many positive integers $k$ such that the distance of $P(m_0+k(n_0-m_0))$ from $P$ is less than $\epsilon$. In particular, for any $x$ there exist infinitely many $k$ such that
$|\sin x -\sin(m_0+k(n_0-m_0))|\lt \epsilon$. </p>
<p><strong>Remark:</strong> The modified question asks whether $\sin m$ and $\sin n$ can be equal if $m\ne n$. Note that $\sin x=\sin y$ if and only if $y=x+2k\pi$ or $y=(2k+1)\pi -x$ for some integer $k$. Since $\pi$ is irrational, this cannot happen if $x=m$ and $y=n$, where $m$ and $n$ are distinct integers. In essence this fact was used in the proof outlined in the answer. </p>
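The density can be illustrated numerically (not a proof): for any target value in $[-1,1]$, some $\sin n$ with $n$ not too large comes close. The cutoff $N$ and tolerance below are ad hoc choices.

```python
import math

def best_error(target, N=100000):
    """Smallest |sin(n) - target| over natural numbers n <= N."""
    return min(abs(math.sin(n) - target) for n in range(1, N + 1))

# A handful of targets in [-1, 1], all approached within the tolerance.
for target in (-1.0, -0.5, 0.0, 0.3, 0.99):
    assert best_error(target) < 1e-2
```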
|
812,939 | <p>How to find the sequences of $\sin n$ (n=natural number) sub-sequence limits? I know that it is a $[-1;1]$, but how to proof?</p>
<p><strong>Edit:</strong>
Is it true, that sin(n), with all natural numbers have different value? How to proof? If that is true, it is posible found bijection between N->[-1;1] </p>
| Math.StackExchange | 86,086 | <p>It's easy to prove that $\forall n,m\in\mathbb{N}, \sin n \neq \sin m$ if $n \neq m $. </p>
<p>If $\sin{n}=\sin{m}$, then $n=m+2\pi k (k\in\mathbb{Z})$ by periodicity. So, $\pi=\dfrac{n-m}{2k}\in\mathbb{Q}$ But since the left side is clearly irrational number, this contradiction implies the above fact. </p>
|
2,270,861 | <p>What follows is part of Exercise 1.34 from Pillay's <em>Introduction to Stability Theory</em>. Suppose the following:</p>
<ol>
<li>$M \prec N$.</li>
<li>$N$ is $|M|^+$-saturated.</li>
<li>$p \in S_1(M)$, $q \in S_1(N)$.</li>
<li>$q \supset p$ is a coheir of $p$.</li>
</ol>
<p>Construct a sequence $(a_i \mid i < \omega)$ of elements in $N$ inductively as follows: let $a_0$ realize $p$, and given $a_0, \dots, a_n$, let $a_{n+1}$ realize $q \upharpoonright Ma_0\dots a_n$. The exercise is to show that $(a_i)_i$ is an indiscernible sequence over $M$.</p>
<p>My attempt is to show, by induction on $n$, that for $\bar ab$ and $\bar a'b'$, subsequences of $(a_i)_i$ of length $n$, the type of $\bar a$ over $M$ is the same as that of $\bar a'$. Suppose that the following are equivalent for a formula $\phi$ without parameters, $\bar cd $ a subsequence of $(a_i)_i$, and $\bar m \in M$:</p>
<ol>
<li>$N \models \phi(\bar c, d; \bar m)$</li>
<li>There exists $d' \in M$ such that $N \models \phi(\bar c, d';\bar m)$.</li>
</ol>
<p>Note that since $q$ is a coheir, 1 implies 2. If that is true, what I need to prove is a straightforward consequence of the induction hypothesis.</p>
<p>My question is whether or not the two conditions are equivalent, and how to show it.</p>
| Chappers | 221,811 | <p>You'll kick yourself: the integrand of $F'(y)+2yF(y)$ is the derivative with respect to $x$ of
$$ e^{-x^2}\sin{2xy}, $$
which vanishes at the endpoints. Hence the integral is zero.</p>
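Presumably $F(y)=\int_0^\infty e^{-x^2}\cos(2xy)\,dx$ here (the question's definition of $F$ isn't shown), so the combined integrand is $e^{-x^2}(2y\cos 2xy - 2x\sin 2xy)$. A finite-difference check that this is indeed $\partial_x\!\left[e^{-x^2}\sin 2xy\right]$:

```python
import math

def g(x, y):
    return math.exp(-x * x) * math.sin(2 * x * y)

def integrand(x, y):
    # integrand of F'(y) + 2y F(y), assuming F(y) = int e^{-x^2} cos(2xy) dx
    return math.exp(-x * x) * (2 * y * math.cos(2 * x * y)
                               - 2 * x * math.sin(2 * x * y))

h = 1e-6
for x in (0.1, 0.7, 1.5):
    for y in (0.3, 1.2):
        numeric = (g(x + h, y) - g(x - h, y)) / (2 * h)  # central difference
        assert abs(numeric - integrand(x, y)) < 1e-6
```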
|
3,673,950 | <p>I realize that a matrix transformation must be a linear transformation, but a linear transformation is not necessarily a matrix transformation.
Can someone give me an example of a linear transformation that is not a matrix transformation?</p>
| user786879 | 786,879 | <p>I'm going to assume that, by "matrix transformation", you mean a linear transformation of the form
<span class="math-container">$$T : \Bbb{R}^n \to \Bbb{R}^m : v \mapsto Av$$</span>
where <span class="math-container">$A$</span> is an <span class="math-container">$m \times n$</span> real matrix. We could also replace the reals with complex numbers (or indeed any field, if you're feeling extra spicy).</p>
<p>Now, the answer depends on how far down the rabbit hole you are. Some courses simply deal with <span class="math-container">$\Bbb{R}^n$</span> only, and linear maps can only be between various finite powers of <span class="math-container">$\Bbb{R}$</span>. If this is the case, then all linear maps (between powers of <span class="math-container">$\Bbb{R}$</span>) are matrix transformations.</p>
<p>You'll no doubt come across this in your course at some point. In essence, we can build <span class="math-container">$A$</span> by computing <span class="math-container">$Te_1, \ldots, Te_n$</span>, where <span class="math-container">$e_1, \ldots, e_n$</span> are the standard basis for <span class="math-container">$\Bbb{R}^n$</span>, and forming <span class="math-container">$A$</span> by putting <span class="math-container">$Te_1, \ldots, Te_n$</span> as the columns of <span class="math-container">$A$</span> (in order).</p>
<p>Other courses expand beyond this, and talk about finite-dimensional vector spaces (in the abstract sense). One discovers that, for example, the set <span class="math-container">$P_3(\Bbb{R})$</span> of polynomials of degree <span class="math-container">$3$</span> or less, with real coefficients, is a <span class="math-container">$4$</span>-dimensional vector space over <span class="math-container">$\Bbb{R}$</span>, and hence it makes perfect sense to make linear transformations to and from this space. For example,</p>
<p><span class="math-container">$$T : P_3(\Bbb{R}) \to P_3(\Bbb{R}) : p(x) \mapsto p(0)x^2 + 3xp'(x)$$</span></p>
<p>is a linear transformation.</p>
<p>Note that it can't be a matrix transformation in the above sense, as it does not map between the right spaces. The vectors here are polynomials, not column vectors which can be multiplied to matrices.</p>
<p>That said, there still is a way to "represent" <span class="math-container">$T$</span> by a matrix. It's not that <span class="math-container">$T$</span> is multiplication by a matrix, but there are still matrices that can help us evaluate <span class="math-container">$T$</span> in a meaningful way. In fact, this can be done for any linear map <span class="math-container">$T$</span> between any two finite-dimensional spaces over the same field.</p>
<p>I won't go into too much detail about this here, but essentially it involves choosing bases <span class="math-container">$B$</span> and <span class="math-container">$B'$</span> for the domain space and the codomain space respectively, evaluating the map on the vectors in <span class="math-container">$B$</span>, and writing the results as coordinate column vectors with respect to <span class="math-container">$B'$</span>. These coordinate vectors are the columns of <span class="math-container">$A$</span>. We can then multiply this matrix by coordinate vectors with respect to <span class="math-container">$B$</span> to get the transformed vector as a coordinate vector with respect to <span class="math-container">$B'$</span>. If you haven't covered this stuff yet, you probably will soon.</p>
<p>It's important to know that choosing different bases will yield different matrices. That is, you can't really "identify" a linear map with a single matrix. There has to be some choices of bases first.</p>
<p>And then, there are infinite-dimensional spaces, such as the set <span class="math-container">$C[0, 1]$</span> of continuous real-valued functions on the domain <span class="math-container">$[0, 1]$</span>. We can define linear maps to and/or from such spaces too. In this case, there's no way to even represent linear maps like this as matrices; they're too complicated for just a finite array of numbers!</p>
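For the finite-dimensional example above, the matrix of $T(p) = p(0)x^2 + 3xp'(x)$ with respect to the basis $\{1, x, x^2, x^3\}$ of $P_3(\Bbb R)$ can be computed by hand or with a short script (a sketch, with polynomials stored as coefficient lists):

```python
def T(c):
    """Apply T(p) = p(0) x^2 + 3 x p'(x), with p given by coefficients [c0, c1, c2, c3]."""
    d = [3 * k * ck for k, ck in enumerate(c)]  # coefficients of 3 x p'(x)
    d[2] += c[0]                                # add p(0) x^2
    return d

# Columns of the matrix are T applied to the basis vectors 1, x, x^2, x^3.
basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
cols = [T(e) for e in basis]
M = [[cols[j][i] for j in range(4)] for i in range(4)]
assert M == [[0, 0, 0, 0],
             [0, 3, 0, 0],
             [1, 0, 6, 0],
             [0, 0, 0, 9]]
```

For instance $T(1) = x^2$ gives the first column $(0,0,1,0)^T$, and $T(x^3) = 9x^3$ gives the last.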
|
3,673,950 | <p>I realize that a matrix transformation must be a linear transformation, but a linear transformation is not necessarily a matrix transformation.
Can someone give me an example of a linear transformation that is not a matrix transformation?</p>
| Disintegrating By Parts | 112,478 | <p>Let <span class="math-container">$X=C[0,1]$</span> be the linear space of continuous complex functions on <span class="math-container">$[0,1]$</span>, and let <span class="math-container">$(Tf)(x)=xf(x)$</span>. <span class="math-container">$T$</span> is a linear transformation on <span class="math-container">$C[0,1]$</span>. <span class="math-container">$C[0,1]$</span> does not have a finite basis. So you cannot represent <span class="math-container">$T$</span> as a matrix.</p>
|
2,956,050 | <p>I think the title is pretty clear about the problem. Should I try to find the joint probability of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> and decide if <span class="math-container">$\, f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$</span>? If so, how am I going to find a joint distribution like this? Or is there some shortcut to prove it?</p>
| Leucippus | 148,155 | <p>For <span class="math-container">$a+e^a\ln x = x+e^a\ln a = a+e^x\ln a$</span> there are some oddities. For instance take the first and third segments, ie <span class="math-container">$a + e^{a} \, \ln x = a + e^{x} \, \ln a$</span> then <span class="math-container">$x=a$</span>. For the second and third segments, <span class="math-container">$x + e^{a} \, \ln a = a + e^{x} \, \ln a$</span> then <span class="math-container">$x = a$</span>. For the first and second the solution for <span class="math-container">$x$</span> is in terms of the Lambert W function as seen by:
<span class="math-container">$$a + e^{a} \, \ln x = x + e^{a} \, \ln a$$</span>
has solution
<span class="math-container">$$x = - a \, e^{a} \, W\left(- e^{-a} \, e^{-e^{-a}} \right).$$</span>
This is the interesting case.</p>
|
2,956,050 | <p>I think the title is pretty clear about the problem. Should I try to find the joint probability of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> and decide if <span class="math-container">$\, f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$</span>? If so, how am I going to find a joint distribution like this? Or is there some shortcut to prove it?</p>
| Claude Leibovici | 82,404 | <p><span class="math-container">$x=a$</span> is clearly a solution.</p>
<p>To find <span class="math-container">$a$</span>, Consider <span class="math-container">$$a+e^a\ln x = x+e^a\ln a $$</span> and solve for <span class="math-container">$x$</span>. The result is given in terms of Lambert function
<span class="math-container">$$x_1=-e^a\,W\left(-a e^{-e^{-a} a-a}\right)\tag1$$</span>
Now consider <span class="math-container">$$x+e^a\ln a = a+e^x\ln a$$</span> for which the solution is now
<span class="math-container">$$x_2=-W\left(-a^{-e^a} e^a \log (a)\right)+a-e^a \log (a)\tag2$$</span></p>
<p>Graphing <span class="math-container">$f(a)=x_1-x_2$</span>, we see a zero close to <span class="math-container">$a=1.3$</span>.</p>
<p><strong>Edit</strong></p>
<p>As Leucippus commented, there is a problem with <span class="math-container">$x_2$</span> since the argument of the Lambert function must be <span class="math-container">$\geq - \frac 1 e$</span>, and this happens at <span class="math-container">$a=1.3098$</span>, which is the solution of <span class="math-container">$e^a\,\log(a)=1$</span>.</p>
|
187,197 | <p>I have a logic expression:
<code>f0[a0_, a1_, a2_, a3_] := a0 && !a1 && !a2 && a3 || !a0 && a2 && a3 || !a0 && a1 && a3</code>. I know I should use <code>BooleanTable</code>, but on its own it does not generate a table like the one below.</p>
<p>How to generate a truth table in Mathematica like below?</p>
<p><a href="https://i.stack.imgur.com/XJ58w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XJ58w.png" alt="enter image description here"></a></p>
| Community | -1 | <p><code>f = a0 && ! a1 && ! a2 && a3</code></p>
<p><code>TableForm[BooleanTable[{a0, a1, a2, a3, f}, {a0, a1, a2, a3}],
TableHeadings -> {None, {a0, a1, a2, a3, f}}]</code></p>
<p>Apparently there are resources like <a href="http://mathworld.wolfram.com/TruthTable.html" rel="nofollow noreferrer">this one</a> or <a href="https://mathematica.stackexchange.com/questions/180111/what-is-the-easiest-way-to-get-truth-table-of-logical-expression?rq=1">this one</a> </p>
|
2,013,115 | <p>I tried to do this problem in the following way:</p>
<p>As, $x^2+1 + \langle 3 , x^2+1 \rangle= 0 + \langle 3 , x^2+1 \rangle \implies x^2+1 \equiv 0 \implies x^2 \equiv -1.$</p>
<p>Also, $3+ \langle 3 , x^2+1 \rangle=0 +\langle 3 , x^2+1 \rangle \implies 3 \equiv 0$.</p>
<p>Now, any element of $\mathbb{Z}[x]/\langle 3 , x^2+1 \rangle$ is of the form $p(x)+\langle 3 , x^2+1 \rangle$ where $p(x) \in \mathbb{Z}[x]$. So, divide $p(x)$ by $x^2+1$ to get a polynomial $ax+b$ of degree at most one as a remainder. So, any element of $\mathbb{Z}[x]/\langle 3 , x^2+1 \rangle$ can be written as $ax+b+ \langle 3 , x^2+1 \rangle$ where $a,b \in \mathbb{Z}/3\mathbb{Z}.$ Let $I=\langle 3, x^2+1\rangle$. So, the elements of the ring are $I, 1+I, 2+I, x+I,(x+1)+I,(x+2)+I,2x+I,(2x+1)+I,(2x+2)+I.$</p>
<p>Is my solution correct?</p>
<p>Thanks!</p>
| Stahl | 62,500 | <p>Looks right, except for one nit-picky thing: the $a$ and $b$ in your $ax + b$ aren't elements of $\Bbb Z/3\Bbb Z$, they're elements of $\{0,1,2\}\subseteq\Bbb Z$ (you could also take $a,b\in S$ for any fixed system $S$ of representatives of $\Bbb Z/3\Bbb Z$ in $\Bbb Z$).</p>
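<p>As a quick computational sanity check (my addition, not part of the original answer), one can represent the nine elements as pairs <span class="math-container">$(a,b)$</span> standing for <span class="math-container">$ax+b$</span> and verify the structure by brute force in Python; as a bonus, since <span class="math-container">$x^2+1$</span> is irreducible mod <span class="math-container">$3$</span>, the quotient is the field <span class="math-container">$\Bbb F_9$</span>:</p>

```python
# Elements of Z[x]/(3, x^2+1) as pairs (a, b) meaning a*x + b with a, b in Z/3Z.
# Multiplication uses x^2 = -1:  (ax+b)(cx+d) = (ad+bc)x + (bd - ac)  (mod 3).
def mul(p, q):
    a, b = p
    c, d = q
    return ((a*d + b*c) % 3, (b*d - a*c) % 3)

elements = [(a, b) for a in range(3) for b in range(3)]
assert len(elements) == 9

# Bonus: x^2+1 is irreducible mod 3, so the quotient is the field F_9;
# every nonzero element has a multiplicative inverse (identity is (0, 1) = 1).
for p in elements:
    if p != (0, 0):
        assert any(mul(p, q) == (0, 1) for q in elements)
print("9 elements; every nonzero element is invertible")
```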
|
3,965,668 | <p>I started off my proof, of course, by stating that the different right triangles I would be comparing should have the same area $A$. I was able to show that what the question is asking is true visually and computationally using the Pythagorean Theorem, and even using the triangle inequality, but I don't really know how to set it up in such a way that I can use the Euler-Lagrange equation in order to prove that this is true.</p>
<p>Equations:</p>
<p><span class="math-container">$$\frac{1}{2}xy=A$$</span></p>
<p>So</p>
<p><span class="math-container">$$y(x) = 2A/x \Longrightarrow y'(x) = -\frac{2A}{x^2}$$</span></p>
| mathcounterexamples.net | 187,663 | <p><strong>Hint</strong></p>
<p>The answer is positive and follows from:</p>
<ul>
<li>The fact that a branch of the logarithm is uniquely defined on an open disk <span class="math-container">$D(a,r)$</span> such that <span class="math-container">$0 \notin D(a,r)$</span> by its value <span class="math-container">$f(a)$</span> at <span class="math-container">$a$</span>.</li>
<li>The image of <span class="math-container">$[0,1]$</span> under <span class="math-container">$\gamma$</span> is path connected, hence connected.</li>
<li>Moreover this image is included in the simply connected open subset <span class="math-container">$\Omega$</span> for which <span class="math-container">$0 \notin \Omega$</span>.</li>
</ul>
|
4,539,637 | <blockquote>
<p>If the digits <span class="math-container">$7,7,3,2$</span>, and 1 are randomly arranged from left to right, what is the probability both of the 7 digits are to the left of the 1 digit?</p>
</blockquote>
<p>The answer is <span class="math-container">$1/3$</span> because, among the three equally likely orderings of the two sevens and the one (<span class="math-container">$1 7 7$</span>, <span class="math-container">$7 7 1$</span>, <span class="math-container">$7 1 7$</span>), exactly one, namely <span class="math-container">$7 7 1$</span>, has both sevens to the left of the <span class="math-container">$1$</span>.</p>
<p>I thought about re-arranging the two sevens as one digit
"77"1 then the three and two have two positions, and we re-arrange the "77" and 1 glued together using <span class="math-container">$3 \choose 1$</span> and divide by <span class="math-container">$5!$</span></p>
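<p>A brute-force enumeration (my addition, not part of the original question) confirms the value <span class="math-container">$1/3$</span>:</p>

```python
from itertools import permutations

# Brute force over all arrangements of the digits 7, 7, 3, 2, 1:
# count those in which both 7s appear to the left of the 1.
digits = (7, 7, 3, 2, 1)
arrangements = list(permutations(digits))  # 5! = 120 tuples (the two 7s counted as distinct)

favorable = sum(
    1 for a in arrangements
    if max(i for i, d in enumerate(a) if d == 7) < a.index(1)
)
print(favorable / len(arrangements))  # → 0.333...
```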
| user170231 | 170,231 | <p>I have not been able to simplify the last result, but I did manage to figure out the path from double sum to hypergeometric function.</p>
<p>It turns out that either alternative to the "third option" mentioned in OP are more useful. We have for instance</p>
<p><span class="math-container">$$\frac{\Gamma(m) \Gamma(n) \Gamma(x)}{\Gamma(m+n+x)} = \frac{\Gamma(m) \Gamma(x)}{\Gamma(m+x)} \cdot \frac{\Gamma(m+x) \Gamma(n)}{\Gamma(m+n+x)}$$</span></p>
<p>and we use the result from the first part of OP, rewritten in terms of <span class="math-container">$\Gamma$</span>.</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{\Gamma(n) \Gamma(x)}{\Gamma(n+x)} = \frac1{x-1} = \frac{\Gamma(x-1)}{\Gamma(x)} \implies \sum_{n=1}^\infty \frac{\Gamma(n) \Gamma(m+x)}{\Gamma(m+n+x)} = \frac{\Gamma(m+x-1)}{\Gamma(m+x)} \qquad (1)$$</span></p>
<p>Then to more closely align the result with the hypergeometric function, we rewrite the <span class="math-container">$\Gamma$</span>-s as Pochhammer symbols, in particular</p>
<p><span class="math-container">$$\begin{cases}(1)_n = \Gamma(n+1) = n! \\ (x)_n = \frac{\Gamma(n+x)}{\Gamma(x)} \\ (x+1)_n = \frac{\Gamma(n+x+1)}{\Gamma(x+1)}\end{cases} \qquad (2)$$</span></p>
<p>So we have</p>
<p><span class="math-container">$$\begin{align*}
\sum_{m=1}^\infty \sum_{n=1}^\infty \frac{\Gamma(m) \Gamma(n) \Gamma(x)}{\Gamma(m+n+x)} &= \sum_{m=1}^\infty \sum_{n=1}^\infty \frac{\Gamma(m) \Gamma(x)}{\Gamma(m+x)} \cdot \frac{\Gamma(m+x) \Gamma(n)}{\Gamma(m+n+x)} \\[1ex]
&= \sum_{m=1}^\infty \frac{\Gamma(m) \Gamma(x)}{\Gamma(m+x)} \cdot \frac{\Gamma(m+x-1)}{\Gamma(m+x)} & (1) \\[1ex]
&= \sum_{m=0}^\infty \frac{\Gamma(m+1) \Gamma(x)}{\Gamma(m+x+1)} \cdot \frac{\Gamma(m+x)}{\Gamma(m+x+1)} \\[1ex]
&= \sum_{m=0}^\infty \frac{\Gamma(m+1) \Gamma(x)^2}{\Gamma(m+x+1)} \cdot \frac{\Gamma(m+x)}{\Gamma(m+x+1) \Gamma(x)} \\[1ex]
&= \frac1{x^2} \sum_{m=0}^\infty \frac{\Gamma(m+1) \Gamma(x+1)^2}{\Gamma(m+x+1)} \cdot \frac{\Gamma(m+x)}{\Gamma(m+x+1) \Gamma(x)} & \Gamma(x+1)=x\Gamma(x) \\[1ex]
&= \frac1{x^2} \sum_{m=0}^\infty \frac{\Gamma(m+1) \cdot \frac{\Gamma(m+x)}{\Gamma(x)}}{\frac{\Gamma(m+x+1)^2}{\Gamma(x+1)^2}} \\[1ex]
&= \frac1{x^2} \sum_{m=0}^\infty \frac{[(1)_m]^2 (x)_m}{[(x+1)_m]^2} \cdot \frac1{m!} & (2) \\[1ex]
&= \frac1{x^2} \, {}_3F_2 \left(\left.\begin{array}{c}1,1,x\\x+1,x+1\end{array}\right\vert 1\right)
\end{align*}$$</span></p>
|
3,136,568 | <p><a href="https://i.stack.imgur.com/STONY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/STONY.jpg" alt="enter image description here"></a></p>
<p>I seem to be struggling with this particular question. It is my understanding that in this situation, where du does not equal dx, that you must manipulate the original problem to accommodate for this? However I admit, I arrived at my answer through use of an online calculator and would like to have a clear understanding of this concept. </p>
| user170231 | 170,231 | <p>With <span class="math-container">$u=5t$</span>, the differential is <span class="math-container">$\mathrm du=5\,\mathrm dt$</span>, or <span class="math-container">$\mathrm dt=\frac{\mathrm du}5$</span>. The "manipulation" you mention is really just a matter of keeping track of this factor of <span class="math-container">$\frac15$</span>. Then you have</p>
<p><span class="math-container">$$\int\sqrt{25t^2-4}\,\mathrm dt=\int\sqrt{(5t)^2-2^2}\,\mathrm dt=\color{red}{\frac15}\int\sqrt{u^2-2^2}\,\mathrm du$$</span></p>
<p>which according to the table has an antiderivative of</p>
<p><span class="math-container">$$\color{red}{\frac15}\left(\frac{5t}2\sqrt{(5t)^2-2^2}-\frac{2^2}2\ln\left|5t+\sqrt{(5t)^2-2^2}\right|+M\right)$$</span></p>
<p><span class="math-container">$$\frac t2\sqrt{25t^2-4}-\frac25\ln\left|5t+\sqrt{25t^2-4}\right|+M$$</span></p>
<p><span class="math-container">$$\frac1{10}\left(5t\sqrt{25t^2-4}-4\ln\left|5t+\sqrt{25t^2-4}\right|\right)+M$$</span></p>
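<p>As a sanity check (my addition), the antiderivative above can be verified numerically by comparing its central difference quotient with the integrand:</p>

```python
import math

def F(t):
    # Candidate antiderivative of sqrt(25 t^2 - 4) from the table computation
    return 0.1 * (5*t*math.sqrt(25*t**2 - 4)
                  - 4*math.log(abs(5*t + math.sqrt(25*t**2 - 4))))

def f(t):
    return math.sqrt(25*t**2 - 4)

# Check F'(t) ≈ f(t) at a few points where the integrand is defined (t > 2/5)
h = 1e-6
for t in (0.5, 1.0, 2.0, 5.0):
    numeric = (F(t + h) - F(t - h)) / (2*h)
    assert abs(numeric - f(t)) < 1e-5
print("antiderivative checks out")
```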
|
2,820,779 | <p>So I have this integral</p>
<p>$$\int_{\sqrt[3]{4}}^{\sqrt[3]{3+e}}x^2 \ln(x^3-3)\,dx.$$</p>
<p>I was thinking of using u-substitution to make everything easier.</p>
<p>I made $u = x^3-3$ and $du = 3x^2\,dx$.</p>
<p>So I would then re-write my integral as </p>
<p>$$\frac13\int_{\sqrt[3]{4}}^{\sqrt[3]{3+e}} \ln(x^3-3)\,dx.$$</p>
<p>How would I proceed from here. Should I plug in the integral values? Wouldn't I need to integrate the $\ln()$? Should I use u substitution again? Please help!</p>
<p>I already asked this question, so don't mark it as a duplicate. Unfortunately I couldn't understand what other people were writing.</p>
| JohnKnoxV | 431,468 | <p>After your substitution, your integral should be
$ 1/3 \int_1^e \ln(u) du$.</p>
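<p>Since <span class="math-container">$\int_1^e \ln u\,du = [u\ln u - u]_1^e = 1$</span>, the original integral equals <span class="math-container">$1/3$</span>. A quick numerical check (my addition, using a simple composite Simpson rule) agrees:</p>

```python
import math

# Numerically integrate the original integrand x^2 ln(x^3 - 3) over
# [4^(1/3), (3+e)^(1/3)] and compare with (1/3)∫_1^e ln(u) du = 1/3.
def simpson(g, a, b, n=10_000):  # n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1)*h) for k in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*k*h) for k in range(1, n//2))
    return s * h / 3

a, b = 4 ** (1/3), (3 + math.e) ** (1/3)
value = simpson(lambda x: x**2 * math.log(x**3 - 3), a, b)
print(value)  # ≈ 0.3333...
```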
|
117,619 | <p>I need to evaluate the following real convergent improper integral using residue theory (it is vital that I use residue theory, so other methods are not needed here).
I also need to use the following contour (specifically a keyhole contour to exclude the branch cut):</p>
<p><a href="https://i.stack.imgur.com/4wwwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4wwwj.png" alt=""></a></p>
<p>$$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$</p>
| Batominovski | 72,152 | <p>Since a solution involving contour integration has been given, I am providing an alternative method without contour integration. Let $u:=\sqrt{x}$. Then, the integral $I:=\displaystyle\int_0^\infty\,\frac{\sqrt{x}}{x^3+1}\,\text{d}x$ equals
$$I=2\,\int_0^\infty\,\frac{u^2}{u^6+1}\,\text{d}u=\int_{-\infty}^{+\infty}\,\frac{u^2}{u^6+1}\,\text{d}u\,.$$
Note that
$$\frac{u^2}{u^6+1}=\frac{1}{3}\,\left(\frac{u^2+1}{u^4-u^2+1}\right)-\frac13\,\left(\frac{1}{u^2+1}\right)\,.$$
Now, let $v:=u-\frac{1}{u}$. Then,
$$\frac{u^2+1}{u^4-u^2+1}=\frac{1+\frac{1}{u^2}}{\left(u-\frac{1}{u}\right)^2+1}=\left(\frac{1}{v^2+1}\right)\,\frac{\text{d}v}{\text{d}u}\,.$$
Thus,
$$\begin{align}\int\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u&=\int\,\frac{1}{v^2+1}\,\text{d}v\\&=\text{arctan}(v)+C\\&=\text{arctan}\left(u-\frac{1}{u}\right)+C\,,\end{align}$$
where $C$ is a constant of integration.
Thus,
$$\int_0^{+\infty}\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u=\pi=\int_{-\infty}^0\,\frac{u^2+1}{u^4-u^2+1}\,\text{d}u\,.$$
On the other hand,
$$\int_{-\infty}^{+\infty}\,\frac{1}{u^2+1}\,\text{d}u=\Big.\big(\text{arctan}(u)\big)\Big|_{u=-\infty}^{u=+\infty}=\pi\,,$$
making
$$I=\frac{1}{3}\,(2\pi)-\frac{1}{3}\,\pi=\frac{\pi}{3}\,.$$
This result agrees with the computation made by Amir Alizadeh approximately six years ago.</p>
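<p>A numerical cross-check of the value <span class="math-container">$\pi/3$</span> (my addition): the symmetry <span class="math-container">$u\mapsto 1/u$</span> folds the tail <span class="math-container">$[1,\infty)$</span> onto <span class="math-container">$(0,1]$</span> with the same integrand, so the whole integral reduces to a proper integral over the unit interval:</p>

```python
import math

# Check ∫_0^∞ sqrt(x)/(x^3+1) dx = π/3 numerically, via the u-form
# 2 ∫_0^∞ u^2/(u^6+1) du.  With v = 1/u one sees
# ∫_1^∞ u^2/(u^6+1) du = ∫_0^1 v^2/(v^6+1) dv, so the original
# integral equals 4 ∫_0^1 u^2/(u^6+1) du.
def integrand(u):
    return u**2 / (u**6 + 1)

n = 200_000
h = 1.0 / n
total = sum(integrand((k + 0.5) * h) for k in range(n)) * h  # midpoint rule
value = 4 * total
print(value, math.pi / 3)
```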
|
2,491,394 | <p>So, my problem is with Axiom 5 of the proof, where Gödel considers necessary existence as a property. However, by his own definition, a 'property' applies to objects including those whose necessary existence has not even been proven, as can be inferred from Theorem 1. This, to me, seems like the perfect example of question begging, and if such logic is to be used on other examples, the conclusions may be contradictory. For example, I can prove that a Godlike object doesn't exist using the same logic and assuming Gödel's axioms:</p>
<ol>
<li><span class="math-container">$Df. 1:A_φ(x)⇔(◇∃x⇒(◻∃x∧φ(x)))$</span></li>
<li><span class="math-container">$Ax. 1:(P(φ)∧◻∀x(φ(x)⇒ψ(x)))⇒P(ψ)$</span></li>
<li><span class="math-container">$Ax. 2:P(¬φ)⇔¬P(φ)$</span></li>
<li><span class="math-container">$Th. 1:P(φ)⇒◇∃x(φ(x))$</span></li>
<li><span class="math-container">$Ax. 3:P(◇∃x⇒◻∃x)$</span></li>
<li><span class="math-container">$Th. 2:∀φ(P(φ)⇒◻∃x(A_φ(x))$</span></li>
</ol>
<p>Ax.3 is inferred from Gödel's fifth axiom, where necessary existence is a positive property. From here, I can conclude that any positive property that one can think of exists. For example, if being a unicorn is a positive property (which it is) then invisible flying unicorns also exist (because God is also flying and invisible, so these are positive properties).</p>
<p>Note that I didn't, in any way, deviate from the axioms in Gödel's original theorem, and I didn't add any extra ones.</p>
<p>Obviously, though, it is very unlikely that I've just proven Gödel's proof to be wrong, so my 'theorem' must be wrong. However, I've followed through each of the steps in my 'proof' many times over and didn't manage to find any deviation from Gödel's axioms either time. Can anyone help me with this?</p>
| Christopher Rose | 296,508 | <p>I haven't read too far into it, but I think these authors embedded the higher-order modal argument into higher-order logic, and proved their embedding is consistent:
<a href="https://www.google.com/url?sa=t&source=web&rct=j&url=http://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf&ved=0ahUKEwiciojM14_XAhVDRCYKHVc1CpwQFghYMAQ&usg=AOvVaw0iNVCeRqSVtD-hbE4IoPjn" rel="nofollow noreferrer">https://www.google.com/url?sa=t&source=web&rct=j&url=http://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf&ved=0ahUKEwiciojM14_XAhVDRCYKHVc1CpwQFghYMAQ&usg=AOvVaw0iNVCeRqSVtD-hbE4IoPjn</a></p>
<p>However, the same authors were able to put the original argument into an extremely novel higher order modal logic prover, and they found an inconsistency that had never before appeared in the literature:
<a href="https://www.google.com/url?sa=t&source=web&rct=j&url=https://www.ijcai.org/Proceedings/16/Papers/137.pdf&ved=0ahUKEwiIu4yg2o_XAhUGJiYKHSTZANAQFghsMAY&usg=AOvVaw3A4fyJ-0UUFYRmeWfjgUAX" rel="nofollow noreferrer">https://www.google.com/url?sa=t&source=web&rct=j&url=https://www.ijcai.org/Proceedings/16/Papers/137.pdf&ved=0ahUKEwiIu4yg2o_XAhUGJiYKHSTZANAQFghsMAY&usg=AOvVaw3A4fyJ-0UUFYRmeWfjgUAX</a></p>
<p>I don't fully understand your objection, and I think they agree that Axiom 5 is one of the faulty steps, but in any case these papers seem like a good place to start. Pretty deep stuff!</p>
|
1,456,444 | <p>How can I go about solving this Pigeonhole Principle problem? </p>
<p>So I think the possible numbers would be: $[3+12], [4+11], [5+10], [6+9], [7+8]$</p>
<p>I am trying to put this in words...</p>
| Shailesh kumar | 31,530 | <p>You can divide your set into three groups:</p>
<ol>
<li>selected</li>
<li>can't be selected</li>
<li>total - {selected + can't be selected}</li>
</ol>
<p>As your selected set grows, you can observe that the second group grows as well, and the two stay equal in size. Eventually, the third group becomes empty.</p>
<p>So the next time you want to select an element, you have no choice other than selecting it from group 2 (pigeonhole principle: with $n$ boxes and $n+1$ balls, you cannot put all the balls into different boxes).</p>
|
849,093 | <p>After being introduced to non-elementary functions through an attempt to evaluate $\int x \tan (x)\,dx$, an interesting question occurred to me:
can non-elementary functions be decomposed into elementary ones? For instance, the logarithm, an elementary function, can be described in terms of multiplication (e.g. $\ln x=y$ roughly says that $x$ is $y$ iterations of multiplying by $e$), another elementary operation. So, is such a decomposition possible, transforming a complicated non-elementary function into elementary ones that can be easily evaluated?</p>
| David K | 139,123 | <p>It seems you mean a very general kind of "decomposition" in which you are allowed to rewrite the functions in terms of some procedure involving some sequence of "elementary" operations.</p>
<p>It seems to me that any "decomposition" in any reasonable sense would have to be capable of being described using a finite number of words and symbols. This would allow even procedures with an infinite number of steps converging on the answer, such as power series, as long as the <em>rules</em> for the procedure are finite.</p>
<p>It also seems reasonable that we can only have a fixed finite set of elementary operations to work with. Otherwise it's hard to see how they can all be elementary.</p>
<p>Given the possibility of writing any finite set of rules using a fixed finite set of operations, words, and symbols, there are only a countably infinite number of possible "decompositions" that can be written. (I'm assuming there is no limit on the <em>length</em> of a "decomposition"; if there were such a limit, the number of "decompositions" would be finite.)</p>
<p>But there are an uncountable number of functions from $\mathbb{N}$ to $\mathbb{N}$, let alone functions from $\mathbb{R}$ to $\mathbb{R}$. Therefore we cannot decompose all functions.</p>
|
819,830 | <p>Is the idea of a proof by contradiction to prove that the desired conclusion is both true and false or can it be any derived statement that is true and false (not necessarily relating to the conclusion)? Or can it simply be an absurdity that you know is false but through your derivation comes out true?</p>
| DonAntonio | 31,254 | <p>You want the area between $\;y=\sqrt x\;,\;\;y=2\;$ and $\;x=0\;$ revolved around the $\;x$-axis, thus you get</p>
<p>$$\pi\int\limits_0^4\left(2^2-(\sqrt x)^2\right)dx=\pi\left(16-\frac{1}{2}\cdot4^2\right)=8\pi$$</p>
|
611,788 | <p>I'm here to ask you guys if my logic is correct.
I have to calculate the limit of this:
$$\lim_{n\rightarrow\infty}\sqrt[n]{\sum_{k=1}^n (k^{999} + \frac{1}{\sqrt k})}$$
At first, I see it's the limit $$\lim_{n\rightarrow\infty}\sqrt[n]{1^{999} + \frac{1}{\sqrt 1} + 2^{999} + \frac{1}{\sqrt 2} + \dots + n^{999} + \frac {1}{\sqrt n} }$$
And now comes my main doubt: can I assume that if the limit of a larger sequence goes to one, then my original sequence goes to one too? </p>
<p>I came up with something like this.</p>
<p>If this limit $$\lim_{n\rightarrow\infty} \sqrt[n]{\sum_{k=1}^n k^{1000}}$$ goes to one (because, roughly, the original sum is smaller than this one), then the original limit goes to one too. </p>
<p>But I made it even simpler:
$$\sum_{k=1}^n k^{1000} < n^{1001}\,,$$ so
$$\lim_{n\rightarrow\infty} \sqrt[n]{n^{1001}}$$ obviously goes to $1$. And my final answer is that the original limit goes to one too.
I did some kind of bounding, but I'm not sure whether I'm allowed to. Any answers and tips will be greatly appreciated :-) Thanks.</p>
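<p>A numerical illustration (my addition, not part of the question): writing <span class="math-container">$a_n$</span> for the expression under the limit, one has <span class="math-container">$\log a_n \approx \frac{1000\ln n}{n} \to 0$</span>, so <span class="math-container">$a_n \to 1$</span>, though the convergence is very slow:</p>

```python
import math

# a_n = (sum_{k=1}^n (k^999 + 1/sqrt(k)))^(1/n).  The k^999 part dominates
# (the 1/sqrt(k) terms add less than n in total), so compute with exact
# integers and take logs to avoid float overflow.  log(a_n) ~ 1000 ln(n)/n -> 0.
def log_a(n):
    s = sum(k**999 for k in range(1, n + 1))  # exact big integer, ignoring the tiny 1/sqrt(k) part
    return math.log(s) / n

for n in (10, 100, 1000):
    print(n, log_a(n), 1000 * math.log(n) / n)
```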
| Igor Rivin | 109,865 | <p>Yes. Take any orthonormal basis of $\mathbb{R}^4,$ call it $v_1, v_2, v_3, v_4$ Then the circles $\cos t v_1 + \sin t v_2$ will be disjoint from $\cos s v_3 + \sin s v_4.$</p>
|
73,424 | <p>I'm working on a PhD project that involves parameter estimation for diffusion processes. I'm based in a machine learning research group, and the emphasis here is strongly on "practical" research. </p>
<p>I've developed some theory, and now I'm starting to look for real-world problems to apply it to. To this end, I'd like to ask for some examples of phenomena that are 'naturally' modelled as diffusion processes. An ideal answer would include some justification of why the continuous-time setting is more appropriate than, say, a discrete-time Markov chain.</p>
<p>Two great examples of the kind of thing I'm looking for can be found at the Azimuth project website (<a href="http://www.azimuthproject.org/azimuth/show/Hopf+bifurcation" rel="nofollow">here</a> and <a href="http://www.azimuthproject.org/azimuth/show/Quantitative+ecology" rel="nofollow">here</a>). The first article discusses a noisy analogue of a dynamical system that exhibits a <a href="http://en.wikipedia.org/wiki/Hopf_bifurcation" rel="nofollow">Hopf bifurcation</a>. It's suggested that this system might be a sensible first step in modelling oscillatory weather patterns such as El Nino. The second article is somewhat related, and discusses noisy predator-prey systems.</p>
<p>Thanks in advance for your help.</p>
| ShawnD | 17,219 | <p>You might look at the introduction of Bernt Oskendal's <em>Stochastic Differential Equations</em>. He gives seven motivating problems (at least he does in the edition I have) for studying stochastic calculus. One big area that you have not mentioned is the applications of stochastic calculus to finance.</p>
|
73,424 | <p>I'm working on a PhD project that involves parameter estimation for diffusion processes. I'm based in a machine learning research group, and the emphasis here is strongly on "practical" research. </p>
<p>I've developed some theory, and now I'm starting to look for real-world problems to apply it to. To this end, I'd like to ask for some examples of phenomena that are 'naturally' modelled as diffusion processes. An ideal answer would include some justification of why the continuous-time setting is more appropriate than, say, a discrete-time Markov chain.</p>
<p>Two great examples of the kind of thing I'm looking for can be found at the Azimuth project website (<a href="http://www.azimuthproject.org/azimuth/show/Hopf+bifurcation" rel="nofollow">here</a> and <a href="http://www.azimuthproject.org/azimuth/show/Quantitative+ecology" rel="nofollow">here</a>). The first article discusses a noisy analogue of a dynamical system that exhibits a <a href="http://en.wikipedia.org/wiki/Hopf_bifurcation" rel="nofollow">Hopf bifurcation</a>. It's suggested that this system might be a sensible first step in modelling oscillatory weather patterns such as El Nino. The second article is somewhat related, and discusses noisy predator-prey systems.</p>
<p>Thanks in advance for your help.</p>
| Tim van Beek | 1,478 | <p>Please also have a look at the page about <a href="http://www.azimuthproject.org/azimuth/show/Stochastic+resonance" rel="nofollow">stochastic resonance</a> on Azimuth (there are links to review papers in the reference section of that page).</p>
<p>While stochastic resonance has been "invented" to explain the glacial cycles, there are a lot of other systems that exhibit this phenomenon. As an exercise in parametric estimation, you could try to estimate the model parameters of a bistable symmetric potential to temperature series of Earth's history.</p>
<p>It is also quite customary to model stochastic resonance both in continuous time and space and as a discrete system (with discrete time or as a discrete two state Markov system in continuous or discrete time). So this is a natural playing ground for comparing both approaches.</p>
<p>In the case of the glacial cycles there are several periodic or quasi periodic external forcings like the Milankovich cycles, which are most naturally modelled in continuous time. In simulations you'll always use a discrete approximation, of course, so a continuous time model should in this context be viewed as a means to change the time step in your approximation according to your needs, which is a kind of modelling freedom that a discrete Markov model does not have. The continuous time model allows you to compare results obtained for different discretizations, while you have to build in the discretization into a discrete Markov model a priori without any chance to check if that is a good approximation.</p>
<p>Edit, Addendum: Two books written for practitioners in physics and other natural sciences with lots of applications of diffusion processes are</p>
<ul>
<li><p>Hannes Risken: The Fokker-Planck equation. Methods of solution and applications.</p></li>
<li><p>Crispin Gardiner: Stochastic methods. A handbook for the natural and social sciences.</p></li>
</ul>
<p>I think both are also cited on the Azimuth project.</p>
|
2,174,912 | <p>I'm just so stumped right now. I want to rewrite $x^{n}$ as something raised to the power $2n+1$. Right now I have that:
$$(\sqrt{x})^{2n} = x^n$$
But I don't know what to do to x to get:
$$x^n = \{\text{something done to $x$}\}^{2n+1}$$</p>
| Community | -1 | <p>Recall the following exponent law:</p>
<p>$$x^{ab} = (x^a)^b$$</p>
<p>In your case, you have this:</p>
<p>$$x^n = (x^a)^{2n+1}$$</p>
<p>Note that this is just a more mathematical way of stating exactly what you have in your question:</p>
<blockquote>
<p>But I don't know what to do to x to get:
$$x^n = \{\text{something done to $x$}\}^{2n+1}$$</p>
</blockquote>
<p>So for your particular problem, you have $ab = n$ and $b = 2n+1$, and you want to find the value of $a$ that makes all of this work. Well, since you know $ab = n$, then $a = n/b$, and since you know $b = 2n+1$, then we can conclude $a = \dfrac n{2n+1}$.</p>
<p>$$ x^n = \left(x^{\frac n{2n+1}}\right)^{2n+1} $$</p>
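<p>A quick numerical confirmation (my addition) of the resulting identity <span class="math-container">$x^n = \left(x^{n/(2n+1)}\right)^{2n+1}$</span>:</p>

```python
# Check the identity x^n = (x^(n/(2n+1)))^(2n+1) numerically for a few values.
for x in (0.5, 2.0, 10.0):
    for n in (1, 2, 5):
        a = n / (2*n + 1)
        lhs = x ** n
        rhs = (x ** a) ** (2*n + 1)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, lhs)
print("identity holds")
```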
|
2,197,967 | <p>Can someone explain how the RHS is concluded? I checked with sample numbers and it is all correct, but I can't figure out how $\binom{12}{6}$ comes into play.
$$
\binom{12}{0} + \binom{12}{1} + \binom{12}{2} + \binom{12}{3} + \binom{12}{4} + \binom{12}{5} = (2^{12} - \binom{12}{6}) / 2
$$</p>
| PSPACEhard | 140,280 | <p><strong>Hint:</strong> We have
$$
\sum_{i=0}^n \binom{n}{i} = 2^n
$$
and
$$
\binom{n}{i} = \binom{n}{n-i}
$$</p>
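<p>Combining the two hints: the symmetry <span class="math-container">$\binom{12}{k} = \binom{12}{12-k}$</span> pairs the terms <span class="math-container">$k\le 5$</span> with the terms <span class="math-container">$k\ge 7$</span>, so the full sum <span class="math-container">$2^{12}$</span> splits into two equal halves plus the middle term <span class="math-container">$\binom{12}{6}$</span>, which is exactly the RHS. A quick check (my addition):</p>

```python
from math import comb

# Verify sum_{k=0}^{5} C(12,k) = (2^12 - C(12,6)) / 2: the full row sums to
# 2^12, and symmetry C(12,k) = C(12,12-k) pairs the lower half with the
# upper half, leaving the middle term C(12,6) alone.
lhs = sum(comb(12, k) for k in range(6))
rhs = (2**12 - comb(12, 6)) // 2
print(lhs, rhs)  # → 1586 1586
```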
|
119,876 | <pre><code>Module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>gives</p>
<pre><code>{x$17312, x$17312_, x_ -> x, x_ :> x}
f[x_]=x
p[x_]:=x
</code></pre>
<p>but I'd like to get</p>
<pre><code>{x$17312, x$17312_, x$17312_ -> x$17312, x$17312_ :> x$17312}
f[x$17312_]=x$17312
p[x$17312_]:=x$17312
</code></pre>
<p>I thought <code>Module[{x}, body_]</code> operates something like the following, which would do what I want:</p>
<pre><code>module[{x_Symbol}, body_] := ReleaseHold[Hold@body /. x -> Unique@x];
SetAttributes[module, HoldAll];
module[{x},
f@x_ = x;
p@x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<p>I guess there are some cases with nested scoping constructs that need to be considered for special treatment, but why can't it do the replacement in <code>Set, SetDelayed, Rule, RuleDelayed</code>?</p>
<hr>
<p>Motivation</p>
<p>I want to use <code>f@x_ = Integrate[y^2, {y, 0, x}]</code> instead of <code>f@x_ := Evaluate@Integrate[y^2, {y, 0, x}]</code>, and to be safe I want to scope the variable/pattern label <code>x</code> to something unique.</p>
<p>See also <a href="https://mathematica.stackexchange.com/questions/119878/why-does-syntax-highlighting-in-set-and-rule-not-color-pattern-names-on-the">Why does syntax highlighting in `Set` and `Rule` not color pattern names on the RHS?</a></p>
| Mr.Wizard | 121 | <p>This issue has been discussed before in</p>
<ul>
<li><a href="https://mathematica.stackexchange.com/q/72758/121">I define a variable as local to a module BUT then the module uses its global value! Why?</a> </li>
</ul>
<p>Regarding your motivation a solution of mine, which you linked to yourself, is shown in</p>
<ul>
<li><a href="https://mathematica.stackexchange.com/questions/1992/how-to-make-a-function-like-set-but-with-a-block-construct-for-the-pattern-name">How to make a function like Set, but with a Block construct for the pattern names</a></li>
</ul>
<p>What is left is to implement a <code>Module</code> alternative as you attempted, or to use some alternative to <code>Rule</code>, <code>Set</code>, etc., as shown in other answers. I shall explore making a <code>Module</code> alternative more robust. To pick apart your starting code:</p>
<ol>
<li><p><code>Unique@x</code> will evaluate <code>x</code>; this is unacceptable.</p></li>
<li><p>The local Symbol lacks the <a href="http://reference.wolfram.com/language/ref/Temporary.html" rel="nofollow noreferrer"><code>Temporary</code></a> attribute and will not be garbage-collected.</p></li>
<li><p>Only a single local Symbol may be specified.</p></li>
<li><p>There is no provision for assignments within the first parameter.</p></li>
</ol>
<p>Here is my attempt to fix these limitations.</p>
<pre><code>SetAttributes[module, HoldAll]
clean = Replace[#, (Set | SetDelayed)[s_Symbol, _] :> s, {2}] &;
module[{sets : (_Symbol | _Set | _SetDelayed) ..}, body_] :=
(List @@ #;
Unevaluated[body] /.
List @@ MapAt[HoldPattern, {All, 1}] @
Thread[clean[Hold[sets] :> #], Hold]) & @ Module[{sets}, Hold[sets]]
</code></pre>
<p>Now:</p>
<pre><code>x = 1;
module[{x},
f @ x_ = x;
p @ x_ := x;
{x, x_, x_ -> x, x_ :> x}
]
?f
?p
</code></pre>
<blockquote>
<pre><code>{x$533, x$533_, x$533_ -> x$533, x$533_ :> x$533}
f[x$533_]=x$533
p[x$533_]:=x$533
</code></pre>
</blockquote>
<p>And also:</p>
<pre><code>module[{x, y = 3, z := Print["foo!"]},
{x, y, x_, y_, x_ -> x y, x_ :> x, z_ :> x y}
]
</code></pre>
<blockquote>
<pre><code>{x$533, 3, x$533_, y$533_, x$533_ -> 3 x$533, x$533_ :> x$533,
z$533_ :> x$533 y$533}
</code></pre>
</blockquote>
|
402,802 | <p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p>
<p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p>
<p>Please tell me which is correct.</p>
| robjohn | 13,854 | <p><strong>Hint 1:</strong> $|\sin(x)|$ and $|\cos(x)|$ have period $\pi$</p>
<p><strong>Hint 2:</strong> $(|\sin(x)|+|\cos(x)|)^2=1+|2\sin(x)\cos(x)|=1+|\sin(2x)|$</p>
|
3,244,073 | <p>Let <span class="math-container">$A = \{1, 3, 5, 9, 11, 13\}$</span> and let <span class="math-container">$\odot$</span> define the binary operation of multiplication modulo <span class="math-container">$14$</span>.</p>
<p>Prove that <span class="math-container">$(A, \odot)$</span> is a group. </p>
<p>While completing this question I was able to show that the set was closed under <span class="math-container">$\odot$</span>, that the associative law held, and that the set contained an identity element. However, I was unable to show that every element has an inverse.</p>
<p>I drew up the following Cayley table for the set:</p>
<p><span class="math-container">$$\begin{bmatrix}
\odot & 1 & 3 & 5 & 9 & 11 & 13 \\
1 & 1 & 3 & 5 & 9 & 11 & 13 \\
3 & 3 & 9 & 1 & 13 & 5 & 11 \\
5 & 5 & 1 & 11 & 3 & 13 & 9 \\
9 & 9 & 13 & 3 & 11 & 1 & 5 \\
11 & 11 & 5 & 13 & 1 & 9 & 3 \\
13 & 13 & 11 & 9 & 5 & 3 & 1 \\
\end{bmatrix}$$</span></p>
<p>Any help with showing that this set has inverses would be much appreciated. Thanks in advance :)</p>
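<p>For completeness, a brute-force check (my addition, not part of the original question) that every element of <span class="math-container">$A$</span> has an inverse modulo <span class="math-container">$14$</span>, matching the table above, where <span class="math-container">$1$</span> appears exactly once in every row:</p>

```python
# Brute-force check that (A, ⊙) with multiplication mod 14 has inverses:
# for each a in A, find b in A with a*b ≡ 1 (mod 14).
A = [1, 3, 5, 9, 11, 13]
inverses = {a: next(b for b in A if (a * b) % 14 == 1) for a in A}
print(inverses)  # e.g. 3 and 5 are mutually inverse, since 15 ≡ 1 (mod 14)
```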
| dan_fulea | 550,003 | <p>The function to be integrated (and any related function that will ultimately solve the problem) hates and wants to avoid the points <span class="math-container">$0,1$</span>, so the pink contour would be a first guess to apply complex analysis (<em>C.A.</em> for short below):</p>
<p><a href="https://i.stack.imgur.com/TAUjp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TAUjp.png" alt="Complex analysis problem, stackexchange, 3244071"></a></p>
<p>Let <span class="math-container">$a$</span> be <span class="math-container">$a=2\pi i$</span>.
We quickly find a polynomial <span class="math-container">$P=P_a$</span>, so that <span class="math-container">$P(X+a)-P(X)=X^2$</span>, this is
<span class="math-container">$$
P(X) = \frac 1{3a}\left(X^3-\frac 32aX^2+\frac 12a^2X\right)
\ .
$$</span>
Then
<span class="math-container">$$
\int_{\color{magenta}{\text{Pink contour}}}
\frac {P(\ln z)}{(z-1)^2}\; dz
=2\pi\; i\sum_{r\text{ Residue}}\operatorname{Res}_{z=r}
\frac {P(\ln z)}{(z-1)^2}
=0
\ .
$$</span>
Now we let <span class="math-container">$R$</span> go to <span class="math-container">$+\infty$</span>. </p>
<ul>
<li><p>The upper blue segment produces an integral of the shape <span class="math-container">$$\displaystyle
\int_{0+\varepsilon}^\infty
\frac{P(\ln x)}{(x-1)^2}\; dx\ ,
$$</span> </p></li>
<li><p>then the semicircle does not contribute, since for <span class="math-container">$z=R e^{it}$</span> the numerator is in <span class="math-container">$O(\ln^3 R)$</span>, the denominator grows like <span class="math-container">$R^2$</span>, so the whole fraction is in <span class="math-container">$O(R^{-2}\ln^3 R)$</span>, while the contour length is in <span class="math-container">$O(R)$</span>,</p></li>
<li><p>and finally the lower pink segment produces an integral of the shape
<span class="math-container">$$\displaystyle
\int_\infty^{0+\varepsilon}
\frac{P(\ln x+2\pi i)}{(x-1)^2}\; dx\ ,
$$</span>
<strong>if we succeed in pushing it beyond the holes</strong>.</p></li>
</ul>
<p>Here I have to apologize for using <span class="math-container">$\ln$</span> for the real function <span class="math-container">$\log$</span>, and also for the branch of the logarithm being used, which is the reason for getting <span class="math-container">$\ln x+2\pi i$</span> by the monodromy of the logarithm when the contour comes back.</p>
<p>By the choice of <span class="math-container">$P$</span>, the two integrals come together to build
(up to sign)
<span class="math-container">$$
\int_{0+\varepsilon}^\infty
\frac{P(\ln x+2\pi i)-P(\ln x)}{(x-1)^2}\; dx
=
\int_{0+\varepsilon}^\infty
\frac{x^2}{(x-1)^2}\; dx
\ ,
$$</span>
and this value comes then <strong>during "pushing"</strong> exactly from the holes in <span class="math-container">$0,1$</span>. Let us compute the two contributions. Let <span class="math-container">$C(z_0,\varepsilon)$</span> be the circle centered at <span class="math-container">$z_0\in\Bbb C$</span> with radius <span class="math-container">$\varepsilon>0$</span>. Then,
using the substitution <span class="math-container">$z=z_0+\varepsilon e^{it}$</span>, <span class="math-container">$t\in[0,2\pi]$</span>,
<span class="math-container">$$
\left|
\int_{C(0,\varepsilon)}
\frac
{\ln^k z}
{(z-1)^2}\; dz
\right|
\sim 2\pi\varepsilon\cdot |\ln\varepsilon+O(1)|^k
\in O(\varepsilon^{1/2})
$$</span>
produces no contribution, and
<span class="math-container">$$
\begin{aligned}
\int_{C(1,\varepsilon)}
\frac
{\ln^k z}
{(z-1)^2}\; dz
&=
\int_0^{2\pi}
\frac
{\displaystyle\ln^k(1+\varepsilon e^{it})}
{\displaystyle(1+\varepsilon e^{it}-1)^2}
\; i\varepsilon e^{it}\; dt
\\
&=
\int_0^{2\pi}
\frac
{\displaystyle
\left(
\frac 11\varepsilon e^{it}
-\frac 12\varepsilon^2 e^{2it}
+\frac 13\varepsilon^3 e^{3it}
\pm\dots
\right)^k}
{\displaystyle \varepsilon^2 e^{2it}}
\; i\varepsilon e^{it}\; dt
\\
&\to
\begin{cases}
2\pi i = a&\text{ for }k=1\ ,\\
0&\text{ for }k\ge 2\ ,
\end{cases}
\qquad
\text{ when }
\varepsilon\to0\ .
\end{aligned}
$$</span>
The coefficient of <span class="math-container">$x$</span> in <span class="math-container">$P$</span> was <span class="math-container">$\frac a6$</span>, so we get the contribution from <span class="math-container">$1$</span> in the form (well, computations were done up to sign)
<span class="math-container">$$
\pm \frac 16a^2=\pm\frac 16(2\pi i)^2=\mp \color{blue}{\frac23\pi^2}\ .
$$</span>
(We choose the plus sign, of course, since the integral is of a positive function.)</p>
<p><span class="math-container">$\color{red}\square$</span></p>
<p>Job done, but...</p>
<hr>
<p>This method applies <em>word for word</em> to the integrals
<span class="math-container">$$
\int_0^\infty
\frac{\ln^4 x}{(x-1)^2}\; dx\ ,\qquad
\int_0^\infty
\frac{\ln^6 x}{(x-1)^2}\; dx\ ,\qquad
\int_0^\infty
\frac{\ln^8 x}{(x-1)^2}\; dx\ ,\qquad
\dots
$$</span>
in the following way:</p>
<ul>
<li>we build the corresponding Bernoulli polynomial <span class="math-container">$Q$</span> so that <span class="math-container">$Q(x+1)-Q(x)=(2n+1)x^{2n}$</span>. An <span class="math-container">$a$</span>-homogenized, <span class="math-container">$1/(2n+1)$</span>-normed version of it delivers <span class="math-container">$P$</span> with <span class="math-container">$P(0)=0$</span>, <span class="math-container">$P(x+a)-P(x)=x^{2n}$</span>, and store the coefficient <span class="math-container">$c_1$</span> of <span class="math-container">$P(x)$</span> in <span class="math-container">$x^1$</span>, </li>
<li>we use the same contour of integration, the same argument applies,</li>
<li>there is no contribution from the singularity in zero,</li>
<li>the singularity in one contributes with <span class="math-container">$a=2\pi i$</span> again.</li>
</ul>
<hr>
<p>Numerical experiments.</p>
<p>Numerically, I prefer to compute the half value,
<span class="math-container">$$
\begin{aligned}
J(2n)
&= \int_1^\infty \frac{\ln^{2n} x}{(x-1)^2}\; dx
\qquad\text{ Substitution: }
y=\frac{1}{x},\ x=\frac{1}{y},\ dx=-\frac 1{y^2}\; dy
\\
&= \int_0^1 \frac{\ln^{2n} y}{(1-y)^2}\; dy
=\frac 12\int_0^\infty \frac{\ln^{2n} x}{(x-1)^2}\; dx
\ .
\end{aligned}
$$</span>
Then the expected value of the integral is numerically validated:</p>
<pre><code>sage: for n in [2, 4, 6, 8, 10]:
....: Q = bernoulli_polynomial(x, n+1)/(n+1)
....: c1 = Q.coefficient( x^1 )
....: print "n = %s" % n
....: print "B(x, %s) / %s is %s" % (n+1, n+1, Q)
....: print "Its coefficient in x^1 is %s" % c1
....: f(x) = log(x)^n / (x-1)^2
....: myintegral = numerical_integral( lambda x: f(x), (0,1) )[0]
....: print "f(x) is %s" % (f(x))
....: print "Integral of f on (0, 1) is ~ %f" % myintegral
....: print "| c1 * (2 pi)^%2s / 2 | is ~ %f" % (n, (abs(c1)*(2*pi)^n/2).n())
....: print
....:
n = 2
B(x, 3) / 3 is 1/3*x^3 - 1/2*x^2 + 1/6*x
Its coefficient in x^1 is 1/6
f(x) is log(x)^2/(x - 1)^2
Integral of f on (0, 1) is ~ 3.289868
| c1 * (2 pi)^ 2 / 2 | is ~ 3.289868
n = 4
B(x, 5) / 5 is 1/5*x^5 - 1/2*x^4 + 1/3*x^3 - 1/30*x
Its coefficient in x^1 is -1/30
f(x) is log(x)^4/(x - 1)^2
Integral of f on (0, 1) is ~ 25.975758
| c1 * (2 pi)^ 4 / 2 | is ~ 25.975758
n = 6
B(x, 7) / 7 is 1/7*x^7 - 1/2*x^6 + 1/2*x^5 - 1/6*x^3 + 1/42*x
Its coefficient in x^1 is 1/42
f(x) is log(x)^6/(x - 1)^2
Integral of f on (0, 1) is ~ 732.487004
| c1 * (2 pi)^ 6 / 2 | is ~ 732.487005
n = 8
B(x, 9) / 9 is 1/9*x^9 - 1/2*x^8 + 2/3*x^7 - 7/15*x^5 + 2/9*x^3 - 1/30*x
Its coefficient in x^1 is -1/30
f(x) is log(x)^8/(x - 1)^2
Integral of f on (0, 1) is ~ 40484.398950
| c1 * (2 pi)^ 8 / 2 | is ~ 40484.399002
n = 10
B(x, 11) / 11 is 1/11*x^11 - 1/2*x^10 + 5/6*x^9 - x^7 + x^5 - 1/2*x^3 + 5/66*x
Its coefficient in x^1 is 5/66
f(x) is log(x)^10/(x - 1)^2
Integral of f on (0, 1) is ~ 3632409.110395
| c1 * (2 pi)^10 / 2 | is ~ 3632409.114224
sage:
</code></pre>
<p>Also for two bigger values:</p>
<pre><code>sage: for n in [20, 30 ]:
....: Q = bernoulli_polynomial(x, n+1)/(n+1)
....: c1 = Q.coefficient( x^1 )
....: print "n = %s" % n
....: print "c1 is %s" % QQ(c1).factor()
....: f(x) = log(x)^n / (x-1)^2
....: myintegral = numerical_integral( lambda x: f(x), (0, 1), eps_abs=1e-4 )[0]
....: print "f(x) is %s" % (f(x))
....: print "Integral of f on (0, 1) is ~ %r" % myintegral
....: print "| c1 * (2 pi)^%2s / 2 | is ~ %r" % (n, (abs(c1)*(2*pi)^n/2).n())
....: print
....:
n = 20
c1 is -1 * 2^-1 * 3^-1 * 5^-1 * 11^-1 * 283 * 617
f(x) is log(x)^20/(x - 1)^2
Integral of f on (0, 1) is ~ 2.432904323980626e+18
| c1 * (2 pi)^20 / 2 | is ~ 2.43290432907279e18
n = 30
c1 is 2^-1 * 3^-1 * 5 * 7^-1 * 11^-1 * 31^-1 * 1721 * 1001259881
f(x) is log(x)^30/(x - 1)^2
Integral of f on (0, 1) is ~ 2.6525284549431265e+32
| c1 * (2 pi)^30 / 2 | is ~ 2.65252860059228e32
sage:
</code></pre>
|
1,908,844 | <p>The following example is taken from the book "Introduction to Probability Models" of Sheldon M. Ross (Chapter 5, example 5.4).</p>
<blockquote>
<p>The dollar amount of damage involved in an automobile accident is an
exponential random variable with mean 1000. Of this, the insurance
company only pays that amount exceeding (the deductible amount of)
400. Find the expected value and the standard deviation of the amount the insurance company pays per accident."</p>
</blockquote>
<p>In the solution, the author states that: </p>
<blockquote>
<p>By the lack of memory property of the exponential, it follows that if
a damage amount exceeds 400, then the amount by which it exceeds it is
exponential with mean 1000.</p>
</blockquote>
<p>After reading several implications of this property, I easily map such statement to something like: if you have been waiting for 400s without seeing the bus, then the expected time until the next bus is always 1000s. (Please correct me if I'm wrong)</p>
<p>In case I've understood well, what makes me confuse is this next equation:</p>
<p>$$
E[Y|I=1] = 1000
$$</p>
<p>where:</p>
<p>$X$: the dollar amount of damage resulting from an accident</p>
<p>$Y=(X-400)^+$: the amount paid by the insurance company (where $a^+$ is $a$ if $a>0$ and 0 if $a<=0$).</p>
<p>$I = 1*(X > 400) + 0*(X<=400)$</p>
<p>I don't get why that equality holds given the memoryless property. Straightforwardly, I think with respect to 400 subtraction, it should be something like: $E[Y|I] = 1000 - 400 = 600$ (or some other value). Can anyone give me an explanation about this?</p>
<p>In case you are not clear about my description, please refer to this <a href="https://books.google.ca/books?id=A3YpAgAAQBAJ&pg=PA281&lpg=PA281&dq=probability%20model%20dollar%20amount%20of%20damage%20exponential&source=bl&ots=CaFTvM6Rtw&sig=t0nrAFc-6hX0ByxD3bAD-E3M7EM&hl=en&sa=X&ved=0ahUKEwiA4oaN4enOAhUGfxoKHRZHDEYQ6AEIHDAA#v=onepage&q=probability%20model%20dollar%20amount%20of%20damage%20exponential&f=false" rel="nofollow">link</a> with <strong>example 5.4</strong>.</p>
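<p>The memoryless property in the quoted solution can be checked directly by simulation (a sketch, not from the book; the function name <code>mean_excess</code> and its parameters are ours): conditional on a damage exceeding the 400 deductible, the mean overshoot is again about 1000, not 600.</p>

```python
import random

def mean_excess(mean, deductible, trials=200_000, seed=1):
    """Monte Carlo estimate of E[X - d | X > d] for X ~ Exponential(mean)."""
    random.seed(seed)
    total, count = 0.0, 0
    for _ in range(trials):
        x = random.expovariate(1.0 / mean)  # exponential with the given mean
        if x > deductible:
            total += x - deductible
            count += 1
    return total / count

# Memorylessness: the overshoot past the deductible is again
# exponential with mean 1000, so this should be close to 1000, not 600.
estimate = mean_excess(1000, 400)
```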
| epi163sqrt | 132,007 | <blockquote>
<p>This identity is known as <em><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow">Vandermonde's identity</a></em>.</p>
</blockquote>
<p>In order to show the relationship with binomials $(1+x)^n$ it is convenient to introduce the <em>coefficient of</em> operator $[x^i]$ to denote the coefficient of $x^i$ in a series. This way we can write e.g.
\begin{align*}
[x^i](1+x)^n=\binom{n}{i}
\end{align*}</p>
<blockquote>
<p>We obtain
\begin{align*}
\sum_{i=0}^k\binom{n}{i}\binom{m}{k-i}
&=\sum_{i=0}^\infty [x^i](1+x)^n[y^{k-i}](1+y)^m\tag{1}\\
&= [y^k](1+y)^m\sum_{i=0}^\infty y^i [x^i](1+x)^n\tag{2}\\
&=[y^k](1+y)^m(1+y)^n\tag{3}\\
&=[y^k](1+y)^{m+n}\tag{4}\\
&=\binom{m+n}{k}
\end{align*}
and the claim follows.</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we apply the <em>coefficient of</em> operator twice. We also extend the upper limit of the series to $\infty$ without changing anything since we are adding zeros only.</p></li>
<li><p>In (2) we use the linearity of the <em>coefficient of</em> operator and apply the rule $[x^{p-q}]A(x)=[x^p]x^qA(x)$.</p></li>
<li><p>In (3) we use the <em>substitution rule</em> of the <em>coefficient of</em> operator:
\begin{align*}
A(y)=\sum_{i=0}^\infty a_i y^i=\sum_{i=0}^\infty y^i [x^i]A(x)
\end{align*}</p></li>
<li><p>In (4) we select the coefficient of $x^{m+n}$.</p></li>
</ul>
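<p>The identity derived above is also easy to spot-check numerically (a small sketch; the helper name is ours, and <code>math.comb</code> conveniently returns <code>0</code> when the lower index exceeds the upper one):</p>

```python
from math import comb

def vandermonde_lhs(m, n, k):
    # sum_{i=0}^{k} C(n, i) * C(m, k - i)
    return sum(comb(n, i) * comb(m, k - i) for i in range(k + 1))

# spot-check sum_i C(n,i) C(m,k-i) = C(m+n,k) over a small grid
checks = [(vandermonde_lhs(m, n, k), comb(m + n, k))
          for m in range(8) for n in range(8) for k in range(m + n + 1)]
```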
|
1,574,003 | <p>I know that if A and C are finite sets then |AxC|=|A||C|. This makes the problem quite simple but the sets may not be finite. </p>
<p>I am guessing that the concept of cardinally of infinite sets and ℵ <sub>0</sub> are part of the solution but those are concepts that my class did not go into much and I do not understand very well.</p>
<p>This is my first post to stack exchange so please inform me of any wrong doings.</p>
| Clive Newstead | 19,542 | <p><strong>Hint:</strong> Let $f : A \to B$ and $g : C \to D$ be bijections. Find a bijection $A \times C \to B \times D$ in terms of $f$ and $g$.</p>
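<p>A concrete finite illustration of the hint, encoding bijections as dictionaries (the helper name is ours): the induced map sends $(a,c)\mapsto(f(a),g(c))$.</p>

```python
def product_bijection(f, g):
    """Given bijections f: A -> B and g: C -> D (as dicts), return
    the induced map A x C -> B x D, (a, c) |-> (f(a), g(c))."""
    return {(a, c): (f[a], g[c]) for a in f for c in g}

f = {1: 'x', 2: 'y', 3: 'z'}   # a bijection A -> B
g = {10: 'p', 20: 'q'}         # a bijection C -> D
h = product_bijection(f, g)
```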
|
1,466,198 | <p>I was solving some mathematical questions and have come across the situation, where I need to divide 3900/139. Here is my question, </p>
<p>Can I round 139 up to 140 for ease of division?</p>
<p>If so, how will I know what percentage of error I am introducing? How can I ensure that the value I add is small enough that the result will not be greatly affected?</p>
| fleablood | 280,126 | <p>N/139 = real answer</p>
<p>N/140 = your answer</p>
<p>your answer/real answer = (N/140)/(N/139) = 139/140. </p>
<p>Your answer will be smaller by a factor of 1/140 (about 0.7% too small). </p>
<p>====</p>
<p>In general, if you replace $p$ with $(p + n)$, your result will be off by a factor of $n/(p+n)$.</p>
<p>Replace 487 with 500 and you'll be off by a factor of 13/500.</p>
<p>Basically your answer will be off by the same proportion as your rounding was off.</p>
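<p>The rule of thumb checks out numerically (a sketch; the helper name is ours):</p>

```python
def relative_shortfall(numerator, p, n):
    """Relative amount by which N/(p+n) undershoots N/p; equals n/(p+n)."""
    exact = numerator / p
    approx = numerator / (p + n)
    return (exact - approx) / exact

err_139 = relative_shortfall(3900, 139, 1)   # rounding 139 up to 140
err_487 = relative_shortfall(1000, 487, 13)  # rounding 487 up to 500
```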
|
2,111,402 | <p>Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove"</p>
<p>"Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd"</p>
<p>So my approach was:</p>
<p>Suppose instead, IF $n^2$ is odd THEN $n$ is even</p>
<p>Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even).</p>
<p>$n = 2k+1$ where $k$ is an integer. (definition of odd)</p>
<p>$n^2 = (2k+1)^2$</p>
<p>$n^2 = 4k^2 + 4k + 1$</p>
<p>$n^2 = 2(2k^2 + 2k) + 1$</p>
<p>$n^2 = 2q + 1$ where $q = 2k^2 + 2k$</p>
<p>therefore $n^2$ is odd by definition of odd.</p>
<p>Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true.</p>
<p>Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.</p>
| NeedForHelp | 392,893 | <p>To prove
$$
n^2\text{ is odd}\implies n\text{ is odd}\tag{1}
$$
by contradiction, you need to prove that
$$
n^2\text{ is odd}\wedge n\text{ is even}\tag{2}
$$
is false. That is, you need to suppose that $n^2$ is odd <strong>and</strong> that $n$ is even and obtain a contradiction from those two statements.</p>
<p>This method of proof becomes clearer when the implication
$$
n^2\text{ is odd}\implies n\text{ is odd}
$$
is written in a logically equivalent way as
$$
\neg((n^2\text{ is odd})\wedge\neg(n\text{ is odd}))\tag{3}
$$
The proof by contradiction assumes the negation of the statement and obtains a known contradiction from it. In this case, you see that the negation of $(3)$ is $(2)$.</p>
<p>You propose to show
$$
n^2\text{ is odd}\implies n\text{ is even}\tag{4}
$$
is false in order to show $(1)$. That is incorrect.</p>
<p>For example, one could prove that
$$
x>0\implies\sin(x)\geq0
$$
is false and yet
$$
x>0\implies\sin(x)<0
$$
is also false.</p>
<p>In fact, what you did is show the converse of $(1)$. That is, you showed
$$
n\text{ is odd}\implies n^2\text{ is odd}
$$</p>
<p>In this case, in order to prove $(1)$, a proof of its contrapositive is the simplest way to go. Indeed, if $n=2k$ is even, then $n^2=(2k)^2=2(2k^2)$ is even. Here there is no real difference between the proof by contradiction and the proof by contrapositive: the hypothesis that $n^2$ is odd in $(2)$ doesn't need to be used.</p>
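<p>Neither direction of the parity fact is deep, and the implication can also be checked exhaustively on a range of integers (a throwaway sketch):</p>

```python
def implication_holds(n):
    """'n^2 odd implies n odd' for a single integer n
    (vacuously true whenever n^2 is even)."""
    if (n * n) % 2 == 1:
        return n % 2 == 1
    return True

results = [implication_holds(n) for n in range(-1000, 1001)]
```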
|
3,637,283 | <p>How would I find the fourth roots of <span class="math-container">$-81i$</span> in the complex numbers? </p>
<p>Here is what I currently have: </p>
<p><span class="math-container">$w = -81i$</span> </p>
<p><span class="math-container">$r = 9$</span> </p>
<p><span class="math-container">$\theta = \arctan (-81)$</span>? </p>
<p>Although I am not sure it's correct or if I am on the right track. May I have some help please? </p>
| vonbrand | 43,946 | <p>Use Euler's formula: If the complex number is <span class="math-container">$z = \rho e^{i \theta} = \rho (\cos \theta + i \sin \theta)$</span> (polar coordinates; <span class="math-container">$\rho, \theta$</span> are reals), then:</p>
<p><span class="math-container">$\begin{align*}
z^\alpha
&= \rho^\alpha \cdot e^{i \alpha \theta}
\end{align*}$</span></p>
<p>In the particular case that <span class="math-container">$\alpha = 1 / n$</span> for a natural number <span class="math-container">$n$</span>, as <span class="math-container">$e^{i \theta} = e^{i (\theta + 2 k \pi)}$</span> :</p>
<p><span class="math-container">$\begin{align*}
z^{1/n}
&= \rho^{1/n} \cdot e^{i (\theta + 2 k \pi) / n}, \qquad k = 0, 1, \ldots, n - 1
\end{align*}$</span></p>
<p>I.e., the <span class="math-container">$n$</span>-th roots are situated on a circle of radius <span class="math-container">$\rho^{1/n}$</span> around 0, distributed evenly one at an angle <span class="math-container">$\theta /n$</span> and the others <span class="math-container">$2 \pi / n$</span> apart. For <span class="math-container">$n = 4$</span>, they form a square.</p>
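<p>For the original question this recipe is easy to run numerically: <span class="math-container">$|-81i| = 81$</span>, so each fourth root has modulus <span class="math-container">$81^{1/4} = 3$</span>. A sketch (the helper name is ours):</p>

```python
import cmath
import math

def nth_roots(w, n):
    """The n distinct n-th roots of a nonzero complex number w."""
    rho, theta = abs(w), cmath.phase(w)
    return [rho ** (1.0 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-81j, 4)   # |-81i| = 81, so every root has modulus 3
```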
|
3,408,082 | <blockquote>
<p><span class="math-container">$\textbf{Definition}$</span>: We say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p>
</blockquote>
<p>There is only one intersecting function: the identity. The reason for this is that <span class="math-container">$f[\{x\}] \cap \{x\} \neq \varnothing$</span> forces <span class="math-container">$f(x)=x$</span>.</p>
<p>If we impose restrictions on the subsets <span class="math-container">$A$</span> we consider (say, for instance, we rule out singletons), must <span class="math-container">$f$</span> still be the identity? Let's try this. We can first introduce some definitions.</p>
<blockquote>
<p><span class="math-container">$\textbf{Definition:}$</span> For any cardinal <span class="math-container">$\ell$</span>, we say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <span class="math-container">$\ell$</span>-<em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq \ell$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p>
<p><span class="math-container">$\textbf{Definition:}$</span> For any <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>, its <em>deviation</em> is the cardinality of the set
<span class="math-container">$\{x \in \mathbb{R} : f(x) \neq x\}$</span>.</p>
</blockquote>
<p>I'm going to write down some results for the <span class="math-container">$2$</span>-intersecting case. Suppose <span class="math-container">$f$</span> has the property that for every <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq 2$</span> that <span class="math-container">$f[A] \cap A \neq \varnothing$</span>. There indeed exists a non-identity example here, one with deviation <span class="math-container">$3$</span>, in fact. Let <span class="math-container">$f(x) = x$</span> for <span class="math-container">$x \in \mathbb{R} \setminus \{1,2,3\}$</span> and let <span class="math-container">$f(1)=2, f(2)=3, f(3)=1$</span>. Is there a <span class="math-container">$2$</span>-intersecting <span class="math-container">$f$</span> with deviation <span class="math-container">$\geq 4$</span>? No. The argument for this is combinatorial. Suppose an <span class="math-container">$f$</span> with this property existed, and let <span class="math-container">$A=\{x_1, x_2, x_3, x_4\}$</span> be a set of four elements with <span class="math-container">$f(x_i) \neq x_i$</span> for each <span class="math-container">$x_i \in A$</span>. We first note that <span class="math-container">$f$</span> restricted to <span class="math-container">$A$</span> defines a function <span class="math-container">$f_{A}:A \to A$</span>. To justify this claim, we can argue by contradiction. Suppose for some <span class="math-container">$x_i \in A$</span> that <span class="math-container">$f(x_i) = y \notin A$</span>. Let <span class="math-container">$x_j, x_k$</span> be distinct elements in <span class="math-container">$A$</span>, also both distinct from <span class="math-container">$x_i$</span>. 
Then since <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} \neq \varnothing$</span>, and <span class="math-container">$f[\{x_i, x_k\}] \cap \{x_i, x_k\} \neq \varnothing$</span>, it's easy to see this implies <span class="math-container">$f(x_k) = f(x_j) = x_i$</span> , which implies <span class="math-container">$f[\{x_k, x_j\}] \cap \{x_k, x_j\} = \varnothing$</span>, contradiction. We also note the restricted function <span class="math-container">$f_{A}$</span> is injective. To see this, suppose, say, <span class="math-container">$f(x_i) = x_{m}$</span> and <span class="math-container">$f(x_j) = x_{m}$</span>, and each of <span class="math-container">$i,j, m$</span> are distinct. Then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>, contradiction. Since <span class="math-container">$A$</span> is finite, this implies it's bijective too. Hence <span class="math-container">$f_{A}$</span> corresponds to a permutation in <span class="math-container">$S_4$</span> without fixed points. It cannot have two cycles, since if <span class="math-container">$x_j, x_i$</span> are in distinct two-cycles, then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>. Hence it's cyclic. But if it's cyclic, then <span class="math-container">$f[\{x_{i_2}, x_{i_4}\}] \cap \{x_{i_2}, x_{i_4}\} = \varnothing$</span>, where <span class="math-container">$x_{i_2}, x_{i_4}$</span> are respectively the second and fourth elements of the cycle. This establishes the contradiction.</p>
<p>What eludes me is the general case. The combinatorics seem to get harder if <span class="math-container">$\ell \geq 3$</span>.</p>
<blockquote>
<p><span class="math-container">$\textbf{Problem 1:}$</span> The finite case. Let <span class="math-container">$\ell$</span> be a finite cardinal (i.e., a positive integer). Then is it true that any <span class="math-container">$\ell$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> has finite deviation? If this is true (which I suspect), is there a closed form for the largest possible deviation in terms of <span class="math-container">$\ell$</span>?</p>
<p><span class="math-container">$\textbf{Problem 2:}$</span> The infinite case. What is the maximum deviation of an <span class="math-container">$\aleph_{0}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? What is the maximum deviation of a <span class="math-container">$\mathfrak{c}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? In particular, are there such functions with infinite deviation?</p>
</blockquote>
<p>You will notice that we don't really use any of the analytic or algebraic structure of <span class="math-container">$\mathbb{R}$</span> here, so really these notions can be generalized to functions <span class="math-container">$f:X \to Y$</span> for arbitrary sets <span class="math-container">$X,Y$</span>. We could, however, ask different kinds of questions similar to the ones described above which make use of the structure of <span class="math-container">$\mathbb{R}$</span>. Instead of considering subsets <span class="math-container">$A$</span> with sufficiently large cardinality, we could alternatively consider subsets <span class="math-container">$A$</span> which are (nondegenerate) intervals, as has been suggested in the comments, or possibly subsets which are nonempty open sets. In a broad sense, we're interested in finding "highly non-identity" functions <span class="math-container">$f$</span> satisfying <span class="math-container">$f[A] \cap A \neq \varnothing$</span> for <span class="math-container">$A$</span> in some 'large' collection of subsets of <span class="math-container">$\mathbb{R}$</span>. If you have the solution to a different problem, but one which is similar in the broad sense described above, you're free to share it.</p>
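<p>The four-point case analysis above can also be confirmed by brute force: every fixed-point-free permutation <span class="math-container">$\sigma$</span> of a 4-element set admits a 2-subset <span class="math-container">$A$</span> with <span class="math-container">$\sigma[A]\cap A=\varnothing$</span>. A sketch:</p>

```python
from itertools import permutations, combinations

def has_disjoint_pair(sigma):
    """Is there a 2-subset A of the domain with sigma[A] disjoint from A?"""
    pts = range(len(sigma))
    return any(not ({sigma[i], sigma[j]} & {i, j})
               for i, j in combinations(pts, 2))

derangements = [s for s in permutations(range(4))
                if all(s[i] != i for i in range(4))]
failures = [has_disjoint_pair(s) for s in derangements]
```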
| antkam | 546,005 | <p>A bunch of partial results... (<strong>updated</strong> 10/25 10:35 EDT with a stronger result for finite case)</p>
<p>Define the <em>deviant set</em> as <span class="math-container">$D(f) = \{x \in \mathbb{R}: f(x) \neq x\}$</span>. Obviously, <em>deviation</em> <span class="math-container">$= | D(f)|$</span>.</p>
<blockquote>
<p>Claim: For positive integer <span class="math-container">$n \ge 2$</span>, the max deviation among all <span class="math-container">$n$</span>-intersecting functions is at least <span class="math-container">$\color{red}{m = 3(n-1)} $</span>.</p>
</blockquote>
<p>The bound is achieveable via this construction:</p>
<ul>
<li>Take <span class="math-container">$D =$</span> any <span class="math-container">$3(n-1)$</span> integers, and form them into <span class="math-container">$n-1$</span> disjoint <span class="math-container">$3$</span>-cycles.</li>
</ul>
<p>E.g. for <span class="math-container">$n=3$</span> we construct <span class="math-container">$f$</span> s.t. it permutes <span class="math-container">$D(f) = \{1,2,3,4,5,6\}$</span> as two cycles <span class="math-container">$(123)(456)$</span>.</p>
<p>Then any size-<span class="math-container">$n$</span> <span class="math-container">$A \subset D$</span> must intersect some <span class="math-container">$3$</span>-cycle at least at two numbers, since there are only <span class="math-container">$n-1$</span> such cycles. Those two numbers <span class="math-container">$x \neq y$</span> and their corresponding <span class="math-container">$f(x) \neq f(y)$</span> all belong to the same <span class="math-container">$3$</span>-cycle, and by pigeonhole we have <span class="math-container">$\{x,y\} \cap \{f(x),f(y)\} \neq \varnothing$</span> and therefore <span class="math-container">$A \cap f(A) \neq \varnothing$</span>. Thus, <span class="math-container">$f$</span> is <span class="math-container">$n$</span>-intersecting and has <span class="math-container">$|D(f)| = 3(n-1)$</span>.</p>
<p>I am <em>guessing</em> this bound is tight, but I can't quite prove it yet...</p>
<blockquote>
<p>Claim: If the family of <span class="math-container">$A$</span> is the family of any interval, i.e. <span class="math-container">$f$</span> is "interval-intersecting", then the max deviation is <span class="math-container">$\mathfrak{c}$</span>, the cardinal of the continuum.</p>
</blockquote>
<p>Constructive example: Take <span class="math-container">$D =$</span> the <a href="https://en.wikipedia.org/wiki/Cantor_set" rel="nofollow noreferrer">Cantor set</a> and <span class="math-container">$\forall x \in D: f(x) = 0$</span>. Any interval <span class="math-container">$A$</span> must include some <span class="math-container">$y \notin D$</span>, i.e. some <span class="math-container">$y = f(y)$</span>, so <span class="math-container">$f$</span> is interval-intersecting. OTOH, the cardinality of the Cantor set is <span class="math-container">$\mathfrak{c}$</span>.</p>
<blockquote>
<p>Claim: Any <span class="math-container">$f$</span> is <span class="math-container">$\ell$</span>-intersecting for any cardinal <span class="math-container">$\ell > |D(f)|$</span> strictly (because any <span class="math-container">$A$</span> with <span class="math-container">$|A| =\ell$</span> must include some <span class="math-container">$y \notin D(f)$</span>).</p>
<p>Corollary: Some <span class="math-container">$\aleph_{0}$</span>-intersecting <span class="math-container">$f$</span> exists with any finite deviation, and some <span class="math-container">$\mathfrak{c}$</span>-intersecting <span class="math-container">$f$</span> exists with <span class="math-container">$\aleph_{0}$</span> deviation.</p>
</blockquote>
<p>Remarks: This still leaves open the <span class="math-container">$\aleph_{0}$</span>-intersecting and <span class="math-container">$\mathfrak{c}$</span>-intersecting cases, i.e. whether they can have deviations of <span class="math-container">$\aleph_{0}$</span> and <span class="math-container">$\mathfrak{c}$</span> respectively.</p>
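<p>The <span class="math-container">$3(n-1)$</span> lower-bound construction above is easy to verify by brute force for small <span class="math-container">$n$</span>: subsets containing a fixed point intersect their image trivially, so it suffices to check subsets of the deviant set <span class="math-container">$D$</span>. A sketch (function names are ours):</p>

```python
from itertools import combinations

def three_cycle_map(n):
    """n - 1 disjoint 3-cycles on D = {0, ..., 3(n-1) - 1}."""
    f = {}
    for c in range(n - 1):
        a = 3 * c
        f[a], f[a + 1], f[a + 2] = a + 1, a + 2, a
    return f

def n_intersecting_on_D(f, n):
    """Check f[A] meets A for every subset A of D with |A| >= n."""
    D = list(f)
    return all({f[x] for x in A} & set(A)
               for k in range(n, len(D) + 1)
               for A in combinations(D, k))

ok = all(n_intersecting_on_D(three_cycle_map(n), n) for n in (2, 3, 4))
```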
|
1,982,102 | <p>If I wanted to figure out for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract. Or is the rule I always have to add $1$ when I subtract?</p>
| Rob Arthan | 23,171 | <p>Suggestion: if you'd done questions $1$ to $N$, you'd have done $N$ questions. So if you start at question $45$ and finish at question $55$, subtract $44$ from both $45$ and $55$ to reduce to the easy case where the question numbers begin with $1$.</p>
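<p>In code terms this is the familiar inclusive-range count (a trivial sketch):</p>

```python
def count_inclusive(first, last):
    """How many questions you did when going from `first` to `last` inclusive."""
    return last - first + 1

n_done = count_inclusive(45, 55)
```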
|
66,199 | <p>Say I have the following lists of rules:</p>
<pre><code>case1 = {a -> 1, b -> 3, c -> 4, e -> 5}
case2 = {c -> 3, a -> 1, w -> 2}
case3 = {x -> 5, y -> 2, z -> 0, c -> 2}
</code></pre>
<p>How do I write a function <code>myfun[]</code>, to select the value of "c" in each case?</p>
<p>I want</p>
<pre><code>myfun[case1]
</code></pre>
<p>to return 4;</p>
<pre><code>myfun[case2]
</code></pre>
<p>to return 3;</p>
<pre><code>myfun[case3]
</code></pre>
<p>to return 2.</p>
| Dr. belisarius | 193 | <pre><code>myfun[x_] := c /. x
myfun /@ {case1, case2, case3}
(* {4, 3, 2} *)
</code></pre>
<p>But please note that if you inadvertently assign a value to the symbol <code>c</code>, it goes astray and can't be repaired by tricks done only on <code>myfun[]</code> since it "corrupts" your cases lists.<br>
Considering the above, perhaps it is safer to work with Formal symbols if you really need this kind of construct.</p>
<pre><code>case1F = {\[FormalA] -> 1, \[FormalB] -> 3, \[FormalC] -> 4, \[FormalE] -> 5};
myfun1[x_] := \[FormalC] /. x
myfun1[case1F]
(*4*)
</code></pre>
|
66,199 | <p>Say I have the following lists of rules:</p>
<pre><code>case1 = {a -> 1, b -> 3, c -> 4, e -> 5}
case2 = {c -> 3, a -> 1, w -> 2}
case3 = {x -> 5, y -> 2, z -> 0, c -> 2}
</code></pre>
<p>How do I write a function <code>myfun[]</code>, to select the value of "c" in each case?</p>
<p>I want</p>
<pre><code>myfun[case1]
</code></pre>
<p>to return 4;</p>
<pre><code>myfun[case2]
</code></pre>
<p>to return 3;</p>
<pre><code>myfun[case3]
</code></pre>
<p>to return 2.</p>
| Bob Hanlon | 9,362 | <p>An alternative to @belisarius' use of formal symbols to overcome the problem caused by c having been assigned a value</p>
<pre><code>case1 = {a -> 1, b -> 3, c -> 4, e -> 5};
case2 = {c -> 3, a -> 1, w -> 2};
case3 = {x -> 5, y -> 2, z -> 0, c -> 2};
myfun[x_] := Cases[x, (c -> val_) :> val][[1]]
myfun /@ {case1, case2, case3}
</code></pre>
<blockquote>
<p>{4, 3, 2}</p>
</blockquote>
<pre><code>c = 5;
myfun /@ {case1, case2, case3}
</code></pre>
<blockquote>
<p>{4, 3, 2}</p>
</blockquote>
|
1,439,004 | <p>I am trying to come up with a counting argument for: $\sum_{k=1}^{n}q^{k-1} = \frac{q^n-1}{q-1}$. I am trying to base it off of counting the left side as the sum of the (k-1) length words from an alphabet of size q for $k=1$ to $k=n-1$, but I can't seem to come up with a fitting argument to count the right side of the equation.</p>
| Graham Kemp | 135,106 | <p>The proof is not a combinatorial argument. It's an algebraic argument.</p>
<p>$$\require{cancel}\begin{align}
(q-1)\sum_{k=1}^n q^{k-1} & = (q-1)(q^{n-1}+q^{n-2}+\ldots + q^1+q^0)
\\[1ex] & = (q^n+\cancel{q^{n-1}+\ldots + q^2 + q^1}) -( \cancel{q^{n-1}+ \ldots+q^2+q^1}+q^0)
\\[2ex] & = q^n -1
\end{align}$$</p>
<p>Therefore $\sum\limits_{k=1}^n q^{k-1} = \dfrac{q^n-1}{q-1}$</p>
<p>That is all.</p>
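<p>For what it's worth, both sides can be compared exactly with rational arithmetic, and the left side does count all words of length $0$ to $n-1$ over a $q$-letter alphabet (a sketch; the helper name is ours):</p>

```python
from fractions import Fraction

def geometric_sum(q, n):
    """sum_{k=1}^{n} q^(k-1), i.e. the number of words of
    length < n over an alphabet of size q."""
    return sum(Fraction(q) ** (k - 1) for k in range(1, n + 1))

checks = [geometric_sum(q, n) == Fraction(q ** n - 1, q - 1)
          for q in (2, 3, 5, 10) for n in range(1, 12)]
word_count = geometric_sum(2, 4)   # binary words of length 0..3
```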
|
1,038,579 | <p>The question is from Joseph .J Rotman's book - Introduction to the Theory of Groups and it goes like this:
<br/> Let $A,B,C$ be subgroups of $G$ with $A\leq B$. Prove that if $AC=BC$ and $A\cap C=B\cap C$ (we do not assume that either $AB$ or $AC$ is a subgroup), then $A=B$.
<br/><br/>
I need you guys to tell me if something is wrong - any criticism is welcomed.<br/>
Proof: the map $\varphi:A\rightarrow B/A\cap C $ defined by $\varphi(a)=a(A\cap C)$ is a homomorphism, with $\ker \varphi=A\cap C$
<br/>Though it is clear to me (and maybe it shouldn't be...), I don't know if it is well founded.
<br/>Thus by the first isomorphism theorem, $A\cap C \vartriangleleft B$ and $A/A\cap C \cong Im \varphi$
<br/>Suppose $\varphi$ is not a surjection; then there exists $b\in B$ such that $b(A\cap C)\neq a(A\cap C)$ for all $a\in A$, and that makes $AC\neq BC$. So our map is a surjection, meaning $A/A\cap C \cong B/A\cap C$. From that I figure that $A\smallsetminus C$ and $B\smallsetminus C$ have the same number of elements, therefore $B$ and $A$ are of the same size. Add that to the fact that $A\leq B$, and we get $A=B$.
<br/>QED???</p>
| user 59363 | 192,084 | <p>One can also do shorter: take any $b\in B$. Since $B\subseteq BC=AC$ there exist $a\in A,c\in C$ such that $b=ac$. Then $a^{-1}b=c$; looking at the left hand side, this is in $B$ (since $A\subseteq B$), looking at the right hand side, this is in $C$, altogether it is in $B\cap C$ and hence in $A\cap C$. In particular, $c\in A$ and therefore $b=ac\in A$, as required.</p>
|
3,905,197 | <p>Stirling's Formula states that <span class="math-container">$\Gamma(z+1) \sim \sqrt{2 \pi z} (\frac{z}{\mathbb{e}})^{z}$</span> as <span class="math-container">$z \rightarrow \infty$</span>. I need to prove the following identity using Stirling's formula:</p>
<p><span class="math-container">$$ (2n)! \sim \frac{2^{2n} (n!)^{2}}{\sqrt{\pi n}} $$</span> as <span class="math-container">$n \rightarrow \infty$</span>.</p>
<p>In Stirling's formula I plugged in <span class="math-container">$z = 2n$</span> to get:</p>
<p><span class="math-container">$$ \Gamma(2n+1) = 2n\Gamma(2n) \sim \sqrt{4n \pi} (\frac{2n}{\mathbb{e}})^{2n} $$</span>
where the first equality follows from the functional equation for the gamma function. Simplifying a little bit more I achieve:</p>
<p><span class="math-container">$$ (2n)! \sim \frac{2\sqrt{\pi n}\, 2^{2n} n^{2n}}{\mathbb{e}^{2n}} $$</span></p>
<p>But, I don't know how to prove the identity from here. Can someone give me some hints on how to move on from this step?</p>
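<p>Before proving it, the target asymptotic can at least be checked numerically via log-gamma: the logarithm of the ratio $(2n)!\,\sqrt{\pi n}\,/\,(2^{2n}(n!)^2)$ should tend to $0$. A sketch (the helper name is ours):</p>

```python
from math import lgamma, log, pi

def log_ratio(n):
    """log of (2n)! * sqrt(pi n) / (2^(2n) (n!)^2); should tend to 0."""
    return (lgamma(2 * n + 1) + 0.5 * log(pi * n)
            - 2 * n * log(2.0) - 2.0 * lgamma(n + 1))

vals = [log_ratio(n) for n in (10, 100, 1000, 10_000)]
```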
| robjohn | 13,854 | <p>This can be done via Integration by Parts.
<span class="math-container">$$
\begin{align}
&\int_0^\infty\frac1{x^{2n+3}}\left(\sin(x)-\sum_{k=0}^n\frac{(-1)^kx^{2k+1}}{(2k+1)!}\right)\mathrm{d}x\\
&=\frac{-1}{(2n+1)(2n+2)}\int_0^\infty\frac1{x^{2n+1}}\left(\sin(x)-\sum_{k=0}^{n-1}\frac{(-1)^kx^{2k+1}}{(2k+1)!}\right)\mathrm{d}x\tag1\\
&=\frac{(-1)^{n+1}}{(2n+2)!}\int_0^\infty\frac1x\,\sin(x)\,\mathrm{d}x\tag2\\[3pt]
&=\frac{(-1)^{n+1}}{(2n+2)!}\frac\pi2\tag3
\end{align}
$$</span>
Explanation:<br />
<span class="math-container">$(1)$</span>: integrate by parts twice<br />
<span class="math-container">$(2)$</span>: repeat <span class="math-container">$(1)$</span> <span class="math-container">$n$</span> more times<br />
<span class="math-container">$(3)$</span>: see <span class="math-container">$(9)$</span> from <a href="https://math.stackexchange.com/a/338582">this answer</a></p>
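As a sanity check, the <span class="math-container">$n=0$</span> case of this identity, <span class="math-container">$\int_0^\infty\frac{\sin(x)-x}{x^3}\,\mathrm{d}x=-\frac\pi4$</span>, can be verified numerically. This is only a sketch: the split point <code>A</code> is an arbitrary choice, and SciPy's oscillatory weight handles the tail.

```python
import math
from scipy.integrate import quad

def integrand(x):
    # Taylor fallback avoids catastrophic cancellation of sin(x) - x near 0
    if x < 1e-2:
        return -1/6 + x**2/120 - x**4/5040
    return (math.sin(x) - x) / x**3

A = 10.0
near, _ = quad(integrand, 0.0, A)
# tail: the oscillatory sin(x)/x^3 part, minus the -x/x^3 = -1/x^2 part (= -1/A)
osc, _ = quad(lambda x: x**-3, A, math.inf, weight='sin', wvar=1)
total = near + osc - 1.0 / A
assert abs(total - (-math.pi / 4)) < 1e-6
```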
|
2,245,408 | <blockquote>
<p>How is the following result of a parabola with focus <span class="math-container">$F(0,0)$</span> and directrix <span class="math-container">$y=-p$</span>, for <span class="math-container">$p \gt 0$</span> reached? It is said to be <span class="math-container">$$r(\theta)=\frac{p}{1-\sin \theta} $$</span></p>
</blockquote>
<p>I started by saying the the standard equation of a parabola, in Cartesian form is <span class="math-container">$y= \frac{x^2}{4p} $</span>, where <span class="math-container">$p \gt 0 $</span> and the focus is at <span class="math-container">$F(0,p)$</span> and the directrix is <span class="math-container">$y=-p$</span>. So for the question above, would the equation in Cartesian form be <span class="math-container">$$y= \frac{x^2}{4 \cdot \left(\frac{1}{2}p\right)}=\frac{x^2}{2p}?$$</span></p>
<p>I thought this because the vertex is halfway between the directrix and the focus of a parabola.</p>
<p>Then I tried to use the facts:
<span class="math-container">$$r^2 = x^2 +y^2 \\
x =r\cos\theta \\
y=r\sin\theta.$$</span></p>
<p>But I couldn't get the form required, any corrections, or hints?</p>
<p>Cheers.</p>
| Cye Waldman | 424,641 | <p>The equation for a parabola in the complex plane is</p>
<p>$$z=\frac{1}{2}p(u+i)^2\\
y=pu\\
x=\frac{1}{2}p(u^2-1)
$$</p>
<p>I think you would have to say</p>
<p>$$r=|z|\\
\theta=\arg(z)$$</p>
<p>to get the true polar form.</p>
<p>Ref: Zwikker, C. (1968). <em>The Advanced Geometry of Plane Curves and Their Applications</em>, Dover Press.</p>
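To connect this with the polar form asked about: one can at least check numerically that the focus–directrix definition (focus at the origin, directrix <span class="math-container">$y=-p$</span>, hence <span class="math-container">$x^2+y^2=(y+p)^2$</span>, i.e. <span class="math-container">$x^2=2py+p^2$</span>) agrees with <span class="math-container">$r(\theta)=p/(1-\sin\theta)$</span>. A quick sketch (the sample points are arbitrary):

```python
import math

p = 2.0  # distance from the focus (at the origin) to the directrix y = -p
for x in [-3.0, -0.5, 0.0, 1.0, 4.0]:
    y = (x*x - p*p) / (2*p)        # Cartesian form x^2 = 2py + p^2
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    assert abs(r - p / (1 - math.sin(theta))) < 1e-9
```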
|
4,251,233 | <p>Find</p>
<p><span class="math-container">$\int\frac{x+1}{x^2+x+1}dx$</span></p>
<p><span class="math-container">$\int \frac{(x+1)\,dx}{x^2+x+1}=\int \frac{x+1}{(x+\frac{1}{2})^2+\frac{3}{4}}dx$</span></p>
<p>From here I don't know what to do. Should I substitute <span class="math-container">$t = x+1$</span>?</p>
<p>That does not work. Should I use integration by parts? I don't think it will work here.</p>
<p>I also completed the square, as above, but I'm not sure how to continue.</p>
| Z Ahmed | 671,540 | <p>Hint: Write <span class="math-container">$(x+1)=(2x+1)/2+1/2$</span></p>
|
4,251,233 | <p>Find</p>
<p><span class="math-container">$\int\frac{x+1}{x^2+x+1}dx$</span></p>
<p><span class="math-container">$\int \frac{(x+1)\,dx}{x^2+x+1}=\int \frac{x+1}{(x+\frac{1}{2})^2+\frac{3}{4}}dx$</span></p>
<p>From here I don't know what to do. Should I substitute <span class="math-container">$t = x+1$</span>?</p>
<p>That does not work. Should I use integration by parts? I don't think it will work here.</p>
<p>I also completed the square, as above, but I'm not sure how to continue.</p>
| Siong Thye Goh | 306,553 | <p><span class="math-container">\begin{align}
\int \frac{x+1}{x^2+x+1}\, dx &= \int \frac{x+\frac12 + \frac12}{x^2+x+1} \, dx \\
&= \int \frac{x+\frac12}{x^2+x+1} \, dx + \frac12 \int \frac1{x^2+x+1}\, dx \\
&=\frac12\ln |x^2+x+1| + \frac12 \int\frac1{(x+\frac12)^2 + \frac34} \, dx
\end{align}</span></p>
<p>I leave the rest as an exercise.</p>
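For anyone who wants to check the final result, SymPy confirms it (a sketch; differentiating the antiderivative recovers the integrand):

```python
import sympy as sp

x = sp.symbols('x')
integrand = (x + 1) / (x**2 + x + 1)
F = sp.integrate(integrand, x)
# F combines a log term from the first integral with an arctan from the second
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```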
|
3,410,150 | <p>If we try solving this by computing <span class="math-container">$f''(x)$</span> directly, the calculation is very long and difficult, so my teacher suggested another way: find the nature of all the roots of <span class="math-container">$f(x) =f'(x)$</span>. On doing so we found them to be real (but not all distinct), and he then claimed that since all the roots of <span class="math-container">$f(x) = f'(x)$</span> are real, all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct. I did not understand how to prove this implication. Can anyone please help me prove it?</p>
<p>Is this statement (all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct) true only for this question or is it true in general for all function <span class="math-container">$f(x)$</span> whose all roots of <span class="math-container">$f(x) = 0$</span> are real?</p>
<p>If instead of <span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span> we had <span class="math-container">$f(x) = (x-a)^4(x-b)^4$</span>
then would we say that as all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all the roots of <span class="math-container">$f'(x)= f''(x)$</span> are real and distinct or rather we would say that as all roots of <span class="math-container">$f(x) = f'(x)$</span> are real so all roots of <span class="math-container">$f'(x)= f''(x)$</span> are real.</p>
<p>If anyone has any other way of solving this question
<span class="math-container">$f(x) = (x-a)^3(x-b)^3$</span> then what is the nature of the roots of <span class="math-container">$f''(x) = f'(x)$</span> please share it.</p>
| N. S. | 9,176 | <p>Recall the Leibniz differentiation formula</p>
<p><span class="math-container">$$(fg)^{(n)}=\sum_{k=0}^n \binom{n}{k} f^{(k)} g^{(n-k)}$$</span></p>
<p>Then
<span class="math-container">$$\left( (x-a)^3(x-b)^3 \right)'=3(x-a)^2(x-b)^3+3(x-a)^3(x-b)^2=3(x-a)^2(x-b)^3[x-a+x-b]$$</span>
<span class="math-container">$$\left( (x-a)^3(x-b)^3 \right)''=6(x-a)(x-b)^3+18(x-a)^2(x-b)^2+6(x-a)^3(x-b)=6(x-a)(x-b)[(x-a)^2+3(x-a)(x-b)+(x-b)^2]
$$</span></p>
<p>Equating them, you get <span class="math-container">$x=a, x=b$</span> as solutions together with the roots of
<span class="math-container">$$(x-a)(x-b)[x-a+x-b]=2[(x-a)^2+3(x-a)(x-b)+(x-b)^2] $$</span></p>
<p>This is a cubic equation, for which the nature of the roots can be easily studied.</p>
<p><strong>Note</strong> that if you have instead <span class="math-container">$(x-a)^n (x-b)^m$</span> you would still end up with a cubic, after canceling <span class="math-container">$(x-a)^{n-2} (x-b)^{m-2}$</span>. If I didn't make any mistake, your cubic would be</p>
<p><span class="math-container">$$(x-a)(x-b)[m(x-a)+n(x-b)]=2[m(m-1)(x-a)^2+2mn(x-a)(x-b)+n(n-1)(x-b)^2] $$</span> </p>
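The factorisations above can be double-checked symbolically; here is a sketch with SymPy for the case <span class="math-container">$(x-a)^3(x-b)^3$</span>:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = (x - a)**3 * (x - b)**3
Q = (x - a)**2 + 3*(x - a)*(x - b) + (x - b)**2
# f' - f'' factors as 3(x-a)(x-b) times the cubic derived in the answer
cubic = (x - a)*(x - b)*(2*x - a - b) - 2*Q
residual = sp.diff(f, x) - sp.diff(f, x, 2) - 3*(x - a)*(x - b)*cubic
assert sp.expand(residual) == 0
```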
|
262,425 | <p>I'm trying to integrate a function that involves a <em>finite</em> sum:</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty}\sum_{j=1}^n (e^{-b t^2}r_j) \,dt$$</span></p>
<p>I think it should be possible to take the exponent <em>outside</em> the sum:</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty}\left(e^{-b t^2} \sum_{j=1}^n r_j \right)dt=\sum_{j=1}^n r_j \times \int_{-\infty}^{\infty}e^{-b t^2} dt$$</span></p>
<p>I write it in Mathematica like this:</p>
<pre><code>$Assumptions=_\[Element]Reals
Assuming[
b>0,
Integrate[Sum[Exp[-b t^2]*r[j],{j,1,n}],{t,-\[Infinity],+\[Infinity]}]
]
</code></pre>
<p>This, however, simply returns the integral unchanged:</p>
<p><span class="math-container">$$\int_{-\infty }^{\infty } \left(\sum _{j=1}^n e^{-b t^2} r(j)\right)\, dt$$</span></p>
<p>If I specify a number for <span class="math-container">$n$</span>, I get the expected result:</p>
<p><span class="math-container">$$\frac{\sqrt{\pi } (r(1)+r(2)+r(3)+r(4)+r(5))}{\sqrt{b}}$$</span></p>
<hr />
<p>How do I extract <span class="math-container">$e^{-bt^2}$</span> outside the sum? Alternatively, how do I bring the integral inside the sum? More generally, how do I integrate this?</p>
| Michael E2 | 4,999 | <p>Using <code>linearExpand</code> from <a href="https://mathematica.stackexchange.com/questions/64422/how-to-do-algebra-on-unsolved-integrals/64447#64447">How to do algebra on unevaluated integrals?</a> :</p>
<pre><code>Clear[linearExpand];
linearExpand[e_, x_, head_] :=
e //. {op : head[arg_Plus, __] :> Distribute[op],
head[arg1_Times, rest__] :>
With[{dependencies = Internal`DependsOnQ[#, x] & /@ List @@ arg1},
Pick[arg1, dependencies, False] head[
Pick[arg1, dependencies, True], rest]]};
linearExpand[Sum[Exp[-b t^2]*r[j], {j, 1, n}], j, Sum]
</code></pre>
<img src="https://i.stack.imgur.com/CQSLx.png" width="100">
<pre><code>Assuming[b > 0,
Integrate[
linearExpand[Sum[Exp[-b t^2]*r[j], {j, 1, n}], j, Sum],
{t, -\[Infinity], +\[Infinity]}]
]
</code></pre>
<img src="https://i.stack.imgur.com/dYAWI.png" width="100">
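For comparison (not part of the original answer), the same factor-out-and-integrate step also goes through in SymPy; a sketch, with <code>r</code> an undefined function as in the question:

```python
import sympy as sp

t, b = sp.symbols('t b', positive=True)
j, n = sp.symbols('j n', integer=True, positive=True)
r = sp.Function('r')

# exp(-b t^2) does not depend on j, so it factors out of the finite sum
gauss = sp.integrate(sp.exp(-b*t**2), (t, -sp.oo, sp.oo))
assert sp.simplify(gauss - sp.sqrt(sp.pi/b)) == 0
result = gauss * sp.Sum(r(j), (j, 1, n))

# matches the n = 5 result quoted in the question
explicit = sum(sp.integrate(sp.exp(-b*t**2)*r(k), (t, -sp.oo, sp.oo))
               for k in range(1, 6))
assert sp.simplify(explicit - gauss*sum(r(k) for k in range(1, 6))) == 0
```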
|
1,005,186 | <p>the difference between a positive integer, n, and its cube is 4896. Compute n.
Please give a solution and a detailed explanation! Thank you very much!
I tried and got 17, but what I did was try numbers one by one, so I would really appreciate it if anyone could show me a proper, systematic way to tackle this question.</p>
| Did | 6,179 | <blockquote>
<p>I don't know what limits to use.</p>
</blockquote>
<p>Note that $x=w/u$, $y=u$, $z=\sqrt{v}$ with $0\leqslant x,y,z\leqslant1$ hence the domain of integration is $$0\leqslant w/u,u,\sqrt{v}\leqslant1,$$ or, equivalently, $$0\leqslant w\leqslant u\leqslant1,\qquad0\leqslant v\leqslant1.$$</p>
<blockquote>
<p>Find the joint pdf of $W:=XY$ and $V:=Z^2$.</p>
</blockquote>
<p>This can be simplified by noting that $W$ and $V$ are independent hence their marginal densities suffice to solve the question.</p>
|
1,005,186 | <p>the difference between a positive integer, n, and its cube is 4896. Compute n.
Please give a solution and a detailed explanation! Thank you very much!
I tried and got 17, but what I did was try numbers one by one, so I would really appreciate it if anyone could show me a proper, systematic way to tackle this question.</p>
| Vladimir Vargas | 187,578 | <p>Notice that:</p>
<p>$$f_{WVU}(w,v,u)=|\boldsymbol{J(h)}|f_{XYZ}(h(w,v,u))=|\boldsymbol{J(h)}|f_X\left(\dfrac{w}{u}\right)\chi_{[0,1]}(w)f_Y(u)\chi_{[w,1]}(u)f_Z(\sqrt{v})\chi_{[0,1]}(v).$$</p>
|
1,172,893 | <p>My textbook says I should solve the following integral by first making a substitution, and then using integration by parts:</p>
<p>$$\int \cos\sqrt x \ dx$$</p>
<p>The problem is, after staring at it for a while I'm still not sure what substitution I should make, and hence I'm stuck at the first step. I thought about doing something with the $\sqrt x$, but that doesn't seem to lead anywhere as far as I can tell. Same with the $\cos$. Any hints?</p>
| abel | 9,252 | <p>make a subs $u = \sqrt x, x = u^2, dx = 2u du$ now the integral $\int \cos \sqrt x \, dx$ is transformed into $$2\int u \cos u \, du = 2 \int u d (\sin u) =2\left( u\sin u - \int \sin u \, du\right) = 2\left( u\sin u +\cos u +C\right)$$</p>
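The resulting antiderivative is easy to verify by differentiating back (a quick SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.sqrt(x)
F = 2*(u*sp.sin(u) + sp.cos(u))
# d/dx [ 2(sqrt(x) sin(sqrt(x)) + cos(sqrt(x))) ] = cos(sqrt(x))
assert sp.simplify(sp.diff(F, x) - sp.cos(sp.sqrt(x))) == 0
```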
|
713,098 | <p>The answer to my question might be obvious to you, but I have difficulty with it. </p>
<p>Which equations are correct:</p>
<p>$\sqrt{9} = 3$</p>
<p>$\sqrt{9} = \pm3$</p>
<p>$\sqrt{x^2} = |x|$</p>
<p>$\sqrt{x^2} = \pm x$</p>
<p>I'm confused. When it's right to take an absolute value? When do we have only one value and why? When two and why? </p>
<p>Thank you very much in advance for your help!</p>
| Klaas van Aarsen | 134,550 | <p>In the real numbers, $\sqrt x$ is <em>defined</em> to be positive.</p>
<p>In the complex numbers, $\sqrt z$ is a <em>multivalued function</em> that indeed yields 2 values. In that case we have a <em>principal value</em> of $\sqrt 9$ that is $3$.</p>
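This is also the convention implemented by numeric libraries; a trivial illustration in Python:

```python
import math

assert math.sqrt(9) == 3.0             # the principal (non-negative) root
for x in [-5.0, -0.5, 0.0, 3.0]:
    assert math.sqrt(x * x) == abs(x)  # sqrt(x^2) = |x|, never "±x"
```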
|
2,715,374 | <p>We know that \begin{equation*}
a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}=[a_0,a_1, \cdots, a_n]
\end{equation*}</p>
<p>If $\frac{p_n}{q_n}=[a_0,a_1, \cdots, a_n]$.</p>
<blockquote>
<p>How to prove that $$
\begin{pmatrix}
p_n & p_{n-1} \\
q_n & q_{n-1} \\
\end{pmatrix}=\begin{pmatrix}
a_0 &1 \\
1 & 0 \\
\end{pmatrix}\begin{pmatrix}
a_1 &1 \\
1 & 0 \\
\end{pmatrix}\cdots\begin{pmatrix}
a_n &1 \\
1 & 0 \\
\end{pmatrix}
$$.</p>
</blockquote>
<p>I have checked that the identity holds for $n=0,1,2,3$. I think it can be proved by induction, but after assuming the case $k=n-1$, the calculation for $k=n$ gets messy. Please help me out in proving this.</p>
| user | 505,767 | <p>When we multiply for $(x-n)$ we need to set $x\neq n$ that is precisely the solution we obtain.</p>
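Before attempting the induction asked about in the question, the matrix identity can at least be checked computationally; a sketch (the partial quotients chosen are arbitrary):

```python
from fractions import Fraction

def convergent_matrix(a):
    # running product of [[a_k, 1], [1, 0]] for k = 0..n
    p, r, q, s = 1, 0, 0, 1          # identity matrix [[p, r], [q, s]]
    for ak in a:
        p, r, q, s = p*ak + r, p, q*ak + s, q
    return p, r, q, s                # = [[p_n, p_{n-1}], [q_n, q_{n-1}]]

def cf_value(a):
    # evaluate [a0; a1, ..., an] from the bottom up
    v = Fraction(a[-1])
    for ak in reversed(a[:-1]):
        v = ak + 1/v
    return v

a = [2, 3, 1, 4, 2]
pn, pn1, qn, qn1 = convergent_matrix(a)
assert Fraction(pn, qn) == cf_value(a)
assert Fraction(pn1, qn1) == cf_value(a[:-1])  # second column: previous convergent
```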
|
367,204 | <p>I'm trying to prove that $\mathbb Z_p^*$ ($p$ prime) is a group, using Fermat's little theorem to show that every element is invertible.</p>
<p>Thus, using Fermat's little theorem, for each $a\in Z_p^*$ we have $a^{p-1}\equiv1$ (mod p). The problem is to prove that $p-1$ is the least positive integer $r$ for which $a^{r}\equiv1$ (mod p).</p>
<p><strong>Remark:</strong> $\mathbb Z_p^*$ is $\{\overline 1,...,\overline {p-1}\}$ with multiplication.</p>
<p>I need help.</p>
<p>Thanks a lot.</p>
| egreg | 62,967 | <p>You can't show that $p-1$ is the least positive integer $r$ such that $a^r\equiv 1\pmod{p}$, because in general it isn't: for instance, the least integer for $a=1$ is $1$.</p>
<p>But all you need is to find an element which acts as an inverse:</p>
<p>$$a\cdot a^{p-2} \equiv 1 \pmod{p}$$</p>
<p>so that, for any $\overline{x}\in\mathbb{Z}^*_p$ you have</p>
<p>$$\overline{x}\cdot\overline{x}^{\,p-2} = \overline{1}$$</p>
<p>and so</p>
<p>$$\overline{x}^{\,-1}=\overline{x}^{\,p-2}$$</p>
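This inverse formula is easy to exercise numerically (a small sketch; the prime is arbitrary):

```python
p = 101  # any prime
for x in range(1, p):
    inv = pow(x, p - 2, p)  # Fermat: x * x^(p-2) = x^(p-1) ≡ 1 (mod p)
    assert (x * inv) % p == 1
```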
|
3,516,241 | <p>Consider the equation:</p>
<p><span class="math-container">$$ x ^ 4 - (2m - 1) x^ 2 + 4m -5 = 0 $$</span></p>
<p>with <span class="math-container">$m \in \mathbb{R}$</span>. I have to find the values of <span class="math-container">$m$</span> such that the given equation has all of its roots real.</p>
<p>This is what I did:</p>
<p>Let <span class="math-container">$ u = x^2, \hspace{.25cm} u\ge 0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$ u ^ 2 - (2m - 1)u + 4m -5 = 0 $$</span></p>
<p>Now since we have </p>
<p><span class="math-container">$$ u = x ^ 2$$</span></p>
<p>That means</p>
<p><span class="math-container">$$x = \pm \sqrt{u}$$</span></p>
<p>That means that the roots <span class="math-container">$x$</span> are real only if <span class="math-container">$u \ge 0$</span>.</p>
<p>So we need to find the values of <span class="math-container">$m$</span> such that all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>. If all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>, that means that the sum of <span class="math-container">$u$</span>'s is <span class="math-container">$\ge 0$</span> <strong>and</strong> the product of <span class="math-container">$u$</span>'s is <span class="math-container">$ \ge 0 $</span>. Using Vieta's formulas</p>
<p><span class="math-container">$$S = u_1 + u_2 = - \dfrac{b}{a} \hspace{2cm} P = u_1 \cdot u_2 = \dfrac{c}{a}$$</span></p>
<p>where <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> are the coefficients of the quadratic, we can solve for <span class="math-container">$m$</span>. We get:</p>
<p><span class="math-container">$$S = - \dfrac{-(2m - 1)}{1} = 2m - 1$$</span></p>
<p>We need <span class="math-container">$S \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{1}{2}$</span> <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$P = \dfrac{4m - 5 }{1} = 4m - 5$$</span></p>
<p>We need <span class="math-container">$P \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{5}{4}$</span> <span class="math-container">$(2)$</span></p>
<p>Intersecting <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> we get the final answer:</p>
<p><span class="math-container">$$ m \in \bigg [ \dfrac{5}{4}, \infty \bigg )$$</span></p>
<p>My question is: Is this correct? Is my reasoning sound? Is there another way (maybe even a better way!) to solve this?</p>
| P. Lawrence | 545,558 | <p>Put m=2. Then u is not real, so x is not real. Instead, if <span class="math-container">$ m \ge 5/4 $</span>, write out the two quadratic factors and use the condition for them (it's the same condition for each factor) to have real roots. After simplification, you finally get <span class="math-container">$$ (2m-7)(2m-3) \ge 0 $$</span> so
<span class="math-container">$$ 1.25 \le m \le 1.5 $$</span> or
<span class="math-container">$$ m \ge 3.5 $$</span></p>
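The resulting condition <span class="math-container">$m\in[5/4,\,3/2]\cup[7/2,\infty)$</span> can be spot-checked numerically; a sketch with NumPy's root finder (the sample values deliberately avoid the interval endpoints, where double roots make the numerics delicate):

```python
import numpy as np

def all_roots_real(m, tol=1e-7):
    roots = np.roots([1, 0, -(2*m - 1), 0, 4*m - 5])
    return bool(np.all(np.abs(roots.imag) < tol))

assert all_roots_real(1.3)       # inside [5/4, 3/2]
assert all_roots_real(4.0)       # inside [7/2, oo)
assert not all_roots_real(2.0)   # between the intervals: u is complex
assert not all_roots_real(1.0)   # below 5/4: one value of u is negative
```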
|
3,451,301 | <p>The following classical generalization</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^{n}H_n}{n^{2a}}=-\left(a+\frac 12\right)\eta(2a+1)+\frac12\zeta(2a+1)+\sum_{j=1}^{a-1}\eta(2j)\zeta(2a+1-2j)$$</span>
where <span class="math-container">$\eta(a)=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^a}=(1-2^{1-a})\zeta(a)$</span> is the Dirichlet eta function.</p>
</blockquote>
<p>was proved by <strong>G. Bastien</strong> <a href="https://arxiv.org/pdf/1301.7662.pdf?fbclid=IwAR2cz1aKTBT9iBvuMvzyFd4maDw0zlq2FfF7WmFd8hA99pSIkzH7gVDeHe0" rel="nofollow noreferrer">here</a> page 7 Eq. 17 and also by <strong>Cornel</strong> <a href="https://www.researchgate.net/publication/333999069_A_new_powerful_strategy_of_calculating_a_class_of_alternating_Euler_sums" rel="nofollow noreferrer">here</a>.</p>
<hr />
<p>I am trying to prove it in a different way but came across an integral that can be calculated by Beta function but I want it in <span class="math-container">$\zeta$</span> if possible to get the right result.</p>
<p>Here is my approach which follows from the same idea of my solution <a href="https://math.stackexchange.com/q/3449946">here</a>:</p>
<p>By using <span class="math-container">$$\frac1{n^{2a}}=-\frac1{(2a-1)!}\int_0^1x^{n-1}\ln^{2a-1}(x)\ dx$$</span></p>
<p>we can write</p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2a}}=-\frac1{(2a-1)!}\int_0^1\frac{\ln^{2a-1}(x)}{x}\left(\sum_{n=1}^\infty(-x)^nH_n\right)\ dx$$</span></p>
<p><span class="math-container">$$=\frac1{(2a-1)!}\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\frac1{(2a-1)!}I_a\tag1$$</span></p>
<hr />
<p><span class="math-container">$$I_a=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\underbrace{\int_1^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx}_{x\mapsto 1/x}$$</span></p>
<p><span class="math-container">$$=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx+\color{blue}{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{1+x}dx}-\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p>By adding</p>
<p><span class="math-container">$$I_a=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx=\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x}dx-\color{blue}{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{1+x}dx}$$</span></p>
<p>to both sides, the blue integral nicely cancels out and we get</p>
<p><span class="math-container">$$2I_a=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx+\underbrace{\int_0^1\frac{\ln^{2a-1}(x)\ln(1+x)}{x}dx}_{IBP}-\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p><span class="math-container">$$=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\frac{1+2a}{2a}\int_0^1\frac{\ln^{2a}(x)}{1+x}dx$$</span></p>
<p>where</p>
<p><span class="math-container">$$\int_0^1\frac{\ln^{2a}(x)}{1+x}dx=\sum_{n=1}^\infty(-1)^{n-1}\int_0^1 x^{n-1}\ln^{2a}(x)dx=(2a)!\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^{2a+1}}=(2a)!\eta(2a+1)$$</span></p>
<p>so</p>
<p><span class="math-container">$$I_a=\frac12\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx-\left(a+\frac12\right)(2a-1)!\eta(2a+1)\tag2$$</span></p>
<p>Plug <span class="math-container">$(2)$</span> in <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2a}}=-\left(a+\frac12\right)\eta(2a+1)+\frac1{2(2a-1)!}\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx\tag{3}$$</span></p>
<p>So any idea how to evaluate the integral in <span class="math-container">$(3)$</span> in a way that completes my proof?</p>
<hr />
| Ali Shadhar | 432,085 | <p>In the question body in Eq <span class="math-container">$(3)$</span>, we reached</p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^nH_n}{n^{2a}}=-\left(a+\frac12\right)\eta(2a+1)+\frac1{2(2a-1)!}\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx\tag{1}$$</span></p>
<hr />
<p>From following the same approach of <a href="https://math.stackexchange.com/q/3538399">this solution</a>, we have</p>
<p><span class="math-container">\begin{align}
I_a=\int_0^\infty\frac{\ln^{2a-1}(x)\ln(1+x)}{x(1+x)}dx&= - \frac{\partial^{2a-1}}{\partial m^{2a-1}} \frac{\partial}{\partial n} \operatorname{B}(m,n-m) \, \Bigg \rvert_{m=0, \, n=1} \\
&= - \frac{\partial^{2a-1}}{\partial m^{2a-1}} \operatorname{\Gamma}(m) \frac{\partial}{\partial n} \frac{\operatorname{\Gamma}(n-m)}{\operatorname{\Gamma}(n)} \, \Bigg \rvert_{m=0,\, n=1} \\
&= - \frac{\partial^{2a-1}}{\partial m^{2a-1}} \operatorname{\Gamma}(m) \operatorname{\Gamma}(1-m) [\operatorname{\psi}^{(0)} (1-m) + \gamma] ~\Bigg \rvert_{m=0} \\
&= - \frac{\partial^{2a-1}}{\partial m^{2a-1}}\frac{\pi}{\sin(\pi m)} [\operatorname{\psi}^{(0)} (1-m) + \gamma] ~\Bigg \rvert_{m=0} \\
\end{align}</span></p>
<hr />
<p><strong>Edit</strong>: Thanks to @Gary for finding the <a href="https://math.stackexchange.com/q/4441148">closed form </a>of this limit. I will mention it here with more details:</p>
<p>We have
<span class="math-container">$$
\psi(1-z)+\gamma=\sum\limits_{n= 1}^\infty {\frac{1}{{n!}}\left[ {\frac{{d^n\psi(1-z)}}{{dz^n }}}\right]_{z=0}z^n}=\sum\limits_{n = 1}^\infty {(-1)^n \frac{{\psi^{(n)}(1)}}{{n!}}z^n}
$$</span>
for <span class="math-container">$|z|<1$</span>. Thus,
<span class="math-container">$$
\frac{{\psi(1-z)+\gamma}}{z}=\sum\limits_{n=0}^\infty {(-1)^{n+1} \frac{{\psi^{(n+1)}(1)}}{{(n+1)!}}z^n}
$$</span>
for <span class="math-container">$|z|<1$</span> (the left-hand side is defined as a limit when <span class="math-container">$z=0$</span>). We also have
<span class="math-container">$$
\pi z\csc(\pi z)=\sum\limits_{n=0}^\infty {\frac{{(-1)^{n-1}2\pi^{2n} (2^{2n -1}-1)B_{2n}}}{{(2n)!}}z^{2n}}=2\sum\limits_{n=0}^\infty {\eta (2n)z^{2n} }
$$</span>
for <span class="math-container">$|z|<1$</span> (the left-hand side is defined as a limit when <span class="math-container">$z=0$</span>).</p>
<p>so we have
<span class="math-container">$$F(z)=\pi\csc(\pi z)[\psi(1-z)+\gamma]=\left(2\sum\limits_{n = 0}^\infty {\eta (2n)z^{2n}}\right)\left(\sum\limits_{n=0}^\infty {(-1)^{n+1} \frac{{\psi^{(n+1)}(1)}}{{(n+1)!}}z^n}\right).$$</span>
Apply Cauchy product:</p>
<p><span class="math-container">$$\left(\sum_{n=0}^\infty a_{2n} z^{2n}\right)\left(\sum_{n=0}^\infty b_{n+1} z^n\right)=\sum_{n=0}^\infty\left(\sum_{k=0}^{\lfloor \frac{n}{2}\rfloor}a_{2k} b_{n-2k+1}\right)z^n$$</span></p>
<p>we get</p>
<p><span class="math-container">$$F(z)=2\sum_{n=0}^\infty\left(\underbrace{\sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} \eta(2k) (-1)^{n-2k+1}\frac{\psi^{(n-2k+1)}(1)}{(n-2k+1)!}}_{f_n}\right)z^n=2\sum_{n=0}^\infty f_n z^n.$$</span></p>
<p>Note that</p>
<p><span class="math-container">\begin{align}
\lim_{z\to0}\frac{\partial^{2a-1}}{\partial z^{2a-1}}F(z)&=2\lim_{z\to0}\frac{\partial^{2a-1}}{\partial z^{2a-1}} \sum_{n=0}^\infty f_n z^n\\
&=2\lim_{z\to0}\frac{\partial^{2a-1}}{\partial z^{2a-1}}\left(f_0 z^0+f_1 z^1+f_2 z^2+...\right)\\
&=2(2a-1)! f_{2a-1}\\
&=2(2a-1)!\sum_{k=0}^{\lfloor \frac{2a-1}{2}\rfloor} \eta(2k)(-1)^{2a-2k}\frac{\psi^{(2a-2k)}(1)}{(2a-2k)!}\\
&=2(2a-1)!\sum_{k=0}^{a-1}\frac{\eta(2k)}{(2a-2k)!}\psi^{(2a-2k)}(1).
\end{align}</span>
Substitute <span class="math-container">$\psi^{(a)}(1)=(-1)^{a-1}a!\zeta(a+1)$</span>,</p>
<p><span class="math-container">$$\lim_{z\to 0}\frac{\partial^{2a-1}}{\partial z^{2a-1}}F(z)=-2(2a-1)!\sum_{k=0}^{a-1}\eta(2k)\zeta(2a-2k+1)=-I_a.\tag{2}$$</span></p>
<p>Plug <span class="math-container">$(2)$</span> in <span class="math-container">$(1)$</span> we get</p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^{n}H_n}{n^{2a}}=-\left(a+\frac 12\right)\eta(2a+1)+\sum_{k=0}^{a-1}\eta(2k)\zeta(2a-2k+1).$$</span></p>
<hr />
<p>acknowledgement:</p>
<p>Big thanks to @ComplexYetTrivial and @Gary for their solutions <a href="https://math.stackexchange.com/q/3538399">here</a> and <a href="https://math.stackexchange.com/q/4441148">here</a>. With their help, the proof is now completed rigorously which I think is a new proof in the literature.</p>
<hr />
<p>More rigorous proof of <span class="math-container">$\displaystyle\pi z\csc(\pi z)=\sum_{n=0}^\infty \eta(2n) x^{2n}:$</span></p>
<p>Set <span class="math-container">$x=0$</span> in the Fourier series of <span class="math-container">$\cos(z x)$</span>:
<span class="math-container">$$\cos(zx)=\frac{2z\sin(\pi z)}{\pi}\left[\frac1{2z^2}-\sum_{n=1}^\infty\frac{(-1)^n\cos(nx)}{n^2-z^2}\right],$$</span>
we get
<span class="math-container">\begin{align}
\pi z \csc(\pi z)&=1-2\sum_{n=1}^\infty\frac{(-1)^n z^2}{n^2-z^2}\\
&=1-2\sum_{n=1}^\infty (-1)^n \frac{z^2/n^2}{1-z^2/n^2}\\
&=1-2\sum_{n=1}^\infty (-1)^n \left(\sum_{k=1}^\infty (z^2/n^2)^k\right)\\
&=1-2\sum_{k=1}^\infty z^{2k} \left(\sum_{n=1}^\infty \frac{(-1)^n}{n^{2k}}\right)\\
&=1-2\sum_{k=1}^\infty z^{2k} \left(-\eta(2k)\right)\\
&=2\left(\frac12+\sum_{k=1}^\infty \eta(2k) z^{2k}\right)\\
&=2\sum_{k=0}^\infty \eta(2k) z^{2k}.
\end{align}</span></p>
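As a final sanity check (not needed for the proof), the identity can be verified numerically, e.g. for <span class="math-container">$a=2$</span>; a sketch with plain truncated sums, using <span class="math-container">$\eta(0)=1/2$</span>:

```python
N = 200_000

def eta(s):
    return sum((-1)**(n - 1) / n**s for n in range(1, N))

def zeta(s):
    return sum(1.0 / n**s for n in range(1, N))

a = 2
H, lhs = 0.0, 0.0
for n in range(1, 20_000):
    H += 1.0 / n
    lhs += (-1)**n * H / n**(2*a)

# RHS: -(a + 1/2) eta(2a+1) + sum_{k=0}^{a-1} eta(2k) zeta(2a-2k+1), eta(0) = 1/2
rhs = -(a + 0.5)*eta(2*a + 1) + 0.5*zeta(2*a + 1) + eta(2)*zeta(2*a - 1)
assert abs(lhs - rhs) < 1e-6
```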
|
2,189,445 | <p>I try to solve this:
$$
\frac{\partial^{2} I}{\partial b \partial a} = I.
$$
I guessed $ I = C e^{a+b} $, but it's not the general solution. So, how to find the last one?</p>
| Peter Smith | 35,151 | <p>(A) The "Barber paradox" is not really a paradox, properly so called. </p>
<p>What we have here is a perfectly good proof by reductio that there can't exist someone in the village who shaves all and only those in the village who don't shave themselves. For suppose there is such a person, $B$. Then, by hypothesis, for all $y$, $B$ shaves $y$ if and only if $y$ doesn't shave $y$, where the variable $y$ runs over people in the village. So, in particular $B$ shaves $B$ if and only if $B$ doesn't shave $B$ -- contradiction!</p>
<p>Now, we have no antecedent reason to suppose there might be a barber who shaves all and only those who don't shave themselves. Hence it should be no particular surprise to learn that as a matter of simple logic that there can't be one.</p>
<p>And it is indeed a matter of simple logic. Generalizing, take any binary relation $R$ defined over the domain $U$ of widgets. Then this is a straightforward theorem of first-order logic: $$\neg\exists x\forall y(Rxy \leftrightarrow \neg Ryy)$$ with the quantifiers running over the domain $U$. Hence, whatever relation $R$ you take, there can't be a widget which is $R$-related to all and only those widgets which are not $R$-related to themselves. (Exercise, prove that formal wff in your favourite system of first-order logic!)</p>
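For what it's worth, the suggested exercise can also be discharged in a proof assistant; here is one possible rendering in Lean 4 (a sketch, using only core Lean):

```lean
-- For any binary relation R there is no b with: R b y ↔ ¬ R y y for all y.
theorem no_barber {α : Type} (R : α → α → Prop) :
    ¬ ∃ b, ∀ y, R b y ↔ ¬ R y y :=
  fun ⟨b, h⟩ =>
    have hb : R b b ↔ ¬ R b b := h b
    have hn : ¬ R b b := fun hr => hb.mp hr hr
    hn (hb.mpr hn)
```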
<p>So far, there is nothing paradoxical going on. We can only speak really of a paradox or antinomy when a (seemingly) compelling proof clashes with other (seemingly) compelling ideas. And that isn't yet the case here. </p>
<p>(B) Suppose though -- to make the famous historical connection -- that we are thinking not about the shaving relation but about set-membership. Then we have this particular instance of our first-order theorem: $$\neg\exists x\forall y(y \in x \leftrightarrow y \notin y)$$ with the quantifier running over all sets. Hence there is no set containing all and only the normal sets (where a set is normal if and only if it doesn't contain itself).</p>
<p>But now note that unlike the Barber case, this result does clash with some assumptions that we might rather naturally have had about sets. For suppose we start off wedded to the ideas that (i) we can collect together all the $X$s into a set of $X$s, whatever $X$s might be -- so in particular, there is a super-collection $U$ that collects together all the sets (so $U$ is a set of all the sets), and (ii) for any given set $X$ and property $P$, there will be a subset of it containing just those members of $X$ with property $P$, and (iii) there is a perfectly good property of being a normal set, i.e. one which doesn't have itself as a member.</p>
<p>Then (i), (ii) and (iii) commit us to the existence of a set $Russell$, the subset of the super-collection $U$ which contains all and only the normal sets -- so $Russell$ is the set of all sets which are not members of themselves. </p>
<p>But as we've seen the $Russell$ set can't exist because of our simple logical theorem. Which shows that we can't hold (i), (ii) and (iii) together. And <em>this</em> traditionally is called a paradox -- though it is, strictly speaking, only paradoxical to the extent to which we previously might have been firmly wedded to (i), (ii) and (iii).</p>
|
314,238 | <p>Let $R$ be a ring and $\mathfrak{m},\mathfrak{m'}$ two ideals of $R$.</p>
<p>Suppose that $\frac{R}{\mathfrak{m}}$ and $\frac{R}{\mathfrak{m'}}$ are isomorphic. Can I say that $\mathfrak{m}$ and $\mathfrak{m'}$ are isomorphic too?</p>
| Zev Chonoles | 264 | <p>No; for example, let $R=\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4\mathbb{Z}$, let $$\mathfrak{m}=\mathbb{Z}/2\mathbb{Z}\times2\mathbb{Z}/4\mathbb{Z}=\{(\overline{0},\overline{0}),(\overline{1},\overline{0}),(\overline{0},\overline{2}),(\overline{1},\overline{2})\}$$
and
$$\mathfrak{m}'=0\times\mathbb{Z}/4\mathbb{Z}=\{(\overline{0},\overline{0}),(\overline{0},\overline{1}),(\overline{0},\overline{2}),(\overline{0},\overline{3})\}.$$ Then
$$R/\mathfrak{m}\cong R/\mathfrak{m}'\cong\mathbb{Z}/2\mathbb{Z}$$
but $\mathfrak{m}$ and $\mathfrak{m}'$ are not isomorphic as abelian groups, and hence certainly not isomorphic as $R$-modules.</p>
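The non-isomorphism comes down to element orders, which a few lines of code can confirm (a sketch):

```python
from itertools import product

# inside R = Z/2 x Z/4 (componentwise addition):
m  = {(a, b) for a, b in product(range(2), range(4)) if b % 2 == 0}
mp = {(0, b) for b in range(4)}

def order(x):
    # additive order of x in Z/2 x Z/4
    n, (a, b) = 1, x
    while (a, b) != (0, 0):
        a, b = (a + x[0]) % 2, (b + x[1]) % 4
        n += 1
    return n

assert len(m) == len(mp) == 4          # same size...
assert max(order(x) for x in m) == 2   # ...but m has exponent 2,
assert max(order(x) for x in mp) == 4  # while m' has an element of order 4
```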
|
314,238 | <p>Let $R$ be a ring and $\mathfrak{m},\mathfrak{m'}$ two ideals of $R$.</p>
<p>Suppose that $\frac{R}{\mathfrak{m}}$ and $\frac{R}{\mathfrak{m'}}$ are isomorphic. Can I say that $\mathfrak{m}$ and $\mathfrak{m'}$ are isomorphic too?</p>
| Math Gems | 75,092 | <p>Let $\rm\:R = \Bbb Q[x_1,x_2,x_3,\ldots].\:$ Then $\rm\: R\,\cong\, R/(x_1\!)\,\cong\, R/(x_1,x_2)\cong R/(x_1,x_2,x_3)\,\cong\, \cdots$</p>
|
3,121,361 | <p>Given that <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>, the group operation is defined as:
<span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p>
<p>How can I prove the closure property, in order to show that <span class="math-container">$G$</span> is a group?</p>
| Peter Szilas | 408,605 | <p>Hint:</p>
<p>1)<span class="math-container">$\binom{n}{k}\frac{1}{n^k} \le \frac {1}{k!}, k \in
\mathbb{N}$</span>.</p>
<p>2)<span class="math-container">$(1+ \frac{1}{n})^n =$</span></p>
<p><span class="math-container">$\sum_{k=0}^{n} \binom{n}{k}(\frac{1}{n})^k \le \sum_{k=0}^{n}\frac{1}{k!}$</span></p>
<p>3) Upper bound:</p>
<p><span class="math-container">$\sum_{k=0}^{n} \frac{1}{k!} \le 1+ \sum_{k=0}^{n}\frac{1}{2^k} < 3$</span>.</p>
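The chain of inequalities in these hints is easy to check numerically (a small sketch; the sample values of <span class="math-container">$n$</span> are arbitrary):

```python
import math

for n in [1, 5, 50, 150]:
    lhs = (1 + 1/n)**n
    partial = sum(1/math.factorial(k) for k in range(n + 1))
    assert lhs <= partial < 3   # (1 + 1/n)^n <= sum_{k=0}^n 1/k! < 3
```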
|
413,719 | <p>Would I be correct in saying that they correspond to all points in $\mathbb{R}^3$? Or a line in $\mathbb{R}^3$?</p>
| Community | -1 | <p>On their own they're simply vectors in $\mathbb{R}^3$. However, if you were to take all linear combinations of these vectors</p>
<p>$$c_1(1,0,0) + c_2(0,1,0) + c_3(0,0,1) \text { where } c_1,c_2,c_3 \in \mathbb{R}$$</p>
<p>Then this would give you the entire space of $\mathbb{R}^3$. In more technical language, $\beta = \{(1,0,0), (0,1,0),(0,0,1)\}$ is a basis for $\mathbb{R}^3$ and therefore, it generates $\mathbb{R}^3$. </p>
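In computational terms, the claim that <span class="math-container">$\beta$</span> generates <span class="math-container">$\mathbb{R}^3$</span> is just the solvability of a linear system (a trivial NumPy sketch; the sample vector is arbitrary):

```python
import numpy as np

B = np.eye(3)                  # rows: (1,0,0), (0,1,0), (0,0,1)
v = np.array([2.5, -1.0, 7.0])
c = np.linalg.solve(B.T, v)    # coefficients c1, c2, c3
assert np.allclose(c @ B, v)   # v is a linear combination of the basis
```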
|
1,071,564 | <p>Let's call a directed simple graph $G$ on $n$ labelled vertices <strong>good</strong> if every vertex has out-degree 1 and, when considered as if it were undirected, it is connected. How many good graphs of size $n$ are there?</p>
<p>Here's my work so far. Let's call this number $T(n)$. Clearly, $T(2) = 1$: there's only the loop on two vertices.</p>
<p><img src="https://i.stack.imgur.com/NRt8v.png" alt="T(2)"></p>
<p>We also have that $T(3) = 8$. We can count them using the following argument: let's call a <strong>possible shape</strong> a directed simple graph on $n$ <em>unlabelled</em> vertices which is good. For $n = 3$ we have the following shapes:</p>
<p><img src="https://i.stack.imgur.com/Cbtpp.png" alt="T(3)"></p>
<p>There are $3!$ <em>labelled</em> good graphs of the first shape: fix the outside vertex in 3 possible ways, then fix the loop in two possible ways. There are also $\frac{3!}{3}$ <em>labelled</em> good graphs of the second shape: it's simply the number of cycles on 3 elements. So in total we have: $$T(3) = 3! + \frac{3!}{3} = 8\text{.}$$</p>
<p>We also know that $T(4) = 78$. Let's list all possible shapes:</p>
<p><img src="https://i.stack.imgur.com/SdBDj.png" alt="T(4)"></p>
<p>From top left to bottom right, it's easy to check that we have $4!$ <em>labelled</em> good graphs of the first shape, $2\cdot {4 \choose 2}$ of the second, $2\cdot {4 \choose 2}$ of the third, $4!$ of the fourth and $\frac{4!}{4}$ of the last. In total: $$T(4) = 4! + \left(2\cdot {4 \choose 2} + 2\cdot {4 \choose 2}\right) + 4! + \frac{4!}{4} = 3\cdot 4! + \frac{4!}{4}\text{.}$$</p>
<p><s>I <em>think</em> that $T(5) = 884$, but I won't draw all possible shapes or count their labelings for brevity.</s></p>
<p>I computed $T(5)$ again, and now I get 944. This also invalidates the following conjecture.</p>
<p><strong>CONJECTURE [DISPROVEN]:</strong> I'm <em>conjecturing</em> that there's a simple-ish formula for $T(n)$. It's something like $$T(n) = (2^{n-2} - 1) \cdot n! + \frac{n!}{n} + S(n)$$ where $S(n)$ is some function I currently don't understand such that $S(2) = S(3) = S(4) = 0$, while $S(5) = 5\cdot 4$.</p>
| Asinomás | 33,907 | <p>Let $f(n)$ be what you want. Call a graph acceptable if its connected components (when viewed as an undirected graph) are all good. (This is the same as saying every vertex has out-degree 1.)</p>
<p>Then the number of acceptable graphs is $(n-1)^n$.</p>
<p>But we can also count the number of acceptable graphs by classifying on the number of vertices in the connected component of a special vertex (call it $v$).</p>
<p>We then arrive at the recursion $(n-1)^n=\sum_{i=1}^n\binom{n-1}{i-1}f(i)(n-i-1)^{n-i}$.</p>
<p>So $f(n)=(n-1)^n-\sum_{i=1}^{n-1}f(i)\binom{n-1}{i-1}(n-i-1)^{n-i}$</p>
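<p>The recursion is easy to check computationally; a short sketch (my addition; note $f(1)=0$, since a lone vertex cannot have out-degree 1 in a simple graph):</p>

```python
from math import comb

def f(n, memo={1: 0}):
    # f(n) = (n-1)^n minus the acceptable graphs whose component
    # containing the special vertex has i < n vertices
    if n not in memo:
        memo[n] = (n - 1)**n - sum(
            comb(n - 1, i - 1) * f(i) * (n - i - 1)**(n - i)
            for i in range(1, n))
    return memo[n]

print([f(n) for n in range(2, 7)])  # [1, 8, 78, 944, 13800]
```

<p>The first values match the question's hand counts $T(2)=1$, $T(3)=8$, $T(4)=78$ and the corrected $T(5)=944$.</p>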
|
1,071,564 | <p>Let's call a directed simple graph $G$ on $n$ labelled vertices <strong>good</strong> if every vertex has outdegree 1 and, when considered as if it were undirected, it is connected. How many good graphs of size $n$ are there?</p>
<p>Here's my work so far. Let's call this number $T(n)$. Clearly, $T(2) = 1$: there's only the loop on two vertices.</p>
<p><img src="https://i.stack.imgur.com/NRt8v.png" alt="T(2)"></p>
<p>We also have that $T(3) = 8$. We can count them using the following argument: let's call a <strong>possible shape</strong> a directed simple graph on $n$ <em>unlabelled</em> vertices which is good. For $n = 3$ we have the following shapes:</p>
<p><img src="https://i.stack.imgur.com/Cbtpp.png" alt="T(3)"></p>
<p>There are $3!$ <em>labelled</em> good graphs of the first shape: fix the outside vertex in 3 possible ways, then fix the loop in two possible ways. There are also $\frac{3!}{3}$ <em>labelled</em> good graphs of the second shape: it's simply the number of cycles on 3 elements. So in total we have: $$T(3) = 3! + \frac{3!}{3} = 8\text{.}$$</p>
<p>We also know that $T(4) = 78$. Let's list all possible shapes:</p>
<p><img src="https://i.stack.imgur.com/SdBDj.png" alt="T(4)"></p>
<p>From top left to bottom right, it's easy to check that we have $4!$ <em>labelled</em> good graphs of the first shape, $2\cdot {4 \choose 2}$ of the second, $2\cdot {4 \choose 2}$ of the third, $4!$ of the fourth and $\frac{4!}{4}$ of the last. In total: $$T(4) = 4! + \left(2\cdot {4 \choose 2} + 2\cdot {4 \choose 2}\right) + 4! + \frac{4!}{4} = 3\cdot 4! + \frac{4!}{4}\text{.}$$</p>
<p><s>I <em>think</em> that $T(5) = 884$, but I won't draw all possible shapes or count their labelings for brevity.</s></p>
<p>I computed $T(5)$ again, and now I get 944. This also invalidates the following conjecture.</p>
<p><strong>CONJECTURE [DISPROVEN]:</strong> I'm <em>conjecturing</em> that there's a simple-ish formula for $T(n)$. It's something like $$T(n) = (2^{n-2} - 1) \cdot n! + \frac{n!}{n} + S(n)$$ where $S(n)$ is some function I currently don't understand such that $S(2) = S(3) = S(4) = 0$, while $S(5) = 5\cdot 4$.</p>
| Marko Riedel | 44,883 | <p>I would like to contribute some ideas even though I don't have as much
time as I'd like at the moment. If I understand this problem correctly
then the class of graphs under consideration, call it $\mathcal{Q}$, is
in a set-of relationship with the class of endofunctions, call it
$\mathcal{E}$, with the latter being sets of the former.</p>
<p>The following <a href="https://math.stackexchange.com/questions/2691262/">MSE link A</a> has closely related material, as does this <a href="https://math.stackexchange.com/questions/1463544/">MSE link B</a> where aspects of "Random Mapping Statistics" by Flajolet and Odlyzko are discussed.</p>
<p><P>
This gives the species equation
$$\def\textsc#1{\dosc#1\csod}
\def\dosc#1#2\csod{{\rm #1{\small #2}}}
\mathcal{E} = \textsc{SET}(\mathcal{Q})$$
which is in terms of generating functions
$$E(z) = \exp Q(z)
\quad\text{or}\quad
Q(z) = \log E(z).$$</p>
<p>Recall the popular <em>labeled rooted tree function</em> which represents the
species
$$\mathcal{T} = \mathcal{Z} \times \textsc{SET}(\mathcal{T})$$
and has the functional equation
$$T(z) = z \exp T(z).$$</p>
<p>We also have that $$T(z) = \sum_{n\ge 1} n^{n-1} \frac{z^n}{n!}$$
(Cayley's formula)
and since there are $n^n$ endofunctions we obtain
$$E(z) = 1 + z\frac{d}{dz} T(z) = 1 + z T'(z).$$</p>
<p>But from the functional equation we get
$$T'(z) = \exp T(z) + z \exp T(z) T'(z) = \frac{T(z)}{z} + T(z) T'(z)$$
so that
$$T'(z) = \frac{1}{z} \frac{T(z)}{1-T(z)}.$$</p>
<p>This finally yields
$$E(z) = 1 + \frac{T(z)}{1-T(z)}$$
and hence $$Q(z) = \log\left(1+\frac{T(z)}{1-T(z)}\right).$$</p>
<p>This gives the sequence
$$1, 3, 17, 142, 1569, 21576, 355081, 6805296,
\\ 148869153, 3660215680,\ldots$$
which points us to <a href="https://oeis.org/A001865" rel="nofollow noreferrer">OEIS A001865</a>
where we learn that indeed we have the right exponential generating
function. Note that the formula for $E(z)$ also proves that
$\mathcal{E} = \textsc{SEQ}(\mathcal{T}).$</p>
<p><P>
Now to extract coefficients from this for a closed formula we expand
the logarithm to get for the count $q_n$ the formula</p>
<p>$$n! [z^n] \sum_{k\ge 1} \frac{(-1)^{k+1}}{k}
\left(\frac{T(z)}{1-T(z)}\right)^k.$$</p>
<p>Observe that we can restrict this to $k\le n$ because the tree
function term starts at $z,$ getting
$$n! [z^n] \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\left(\frac{T(z)}{1-T(z)}\right)^k.$$</p>
<p>We still need the terms of the fraction in the tree function,
which can be done by Lagrange inversion. We have
$$[z^n] \left(\frac{T(z)}{1-T(z)}\right)^k
= \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\left(\frac{T(z)}{1-T(z)}\right)^k \; dz.$$</p>
<p>Put $T(z) = w$ so that $z=w/\exp(w)=w\exp(-w)$ and
$dz=(\exp(-w)-w\exp(-w))\; dw$ to get
$$\frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(w(n+1))}{w^{n+1}}
\left(\frac{w}{1-w}\right)^k
(\exp(-w)-w\exp(-w))\; dw
\\ = \frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(wn)}{w^{n+1-k}}
\frac{1}{(1-w)^{k-1}} \; dw .$$</p>
<p>Extracting the residue at zero yields for $k\ge 2$
$$\sum_{q=0}^{n-k} \frac{n^{n-k-q}}{(n-k-q)!}
{q+k-2\choose k-2}$$
and for $k=1,$ $$\frac{n^{n-1}}{(n-1)!}.$$</p>
<p>Collecting these into one formula finally yields
$$n^n +
n! \sum_{k=2}^n \frac{(-1)^{k+1}}{k}
\sum_{q=0}^{n-k} \frac{n^{n-k-q}}{(n-k-q)!}
{q+k-2\choose k-2}.$$</p>
<p>This computation is closely related to the material at this
<a href="https://math.stackexchange.com/questions/689526/">MSE link</a>.</p>
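<p>As a sketch (my addition, not part of the original derivation), the closed formula can be evaluated with exact rational arithmetic and reproduces the sequence from OEIS A001865 quoted above:</p>

```python
from fractions import Fraction
from math import comb, factorial

def q(n):
    # n^n + n! * sum_{k=2}^{n} (-1)^{k+1}/k *
    #       sum_{r=0}^{n-k} n^{n-k-r}/(n-k-r)! * C(r+k-2, k-2)
    s = Fraction(0)
    for k in range(2, n + 1):
        inner = sum(Fraction(n**(n - k - r), factorial(n - k - r)) * comb(r + k - 2, k - 2)
                    for r in range(n - k + 1))
        s += Fraction((-1)**(k + 1), k) * inner
    return n**n + int(factorial(n) * s)

print([q(n) for n in range(1, 7)])  # [1, 3, 17, 142, 1569, 21576]
```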
<p><strong>Addendum.</strong> I just noticed in one of the other posts that fixed points are not admitted in those endofunctions. The above material admits fixed points as in the discussion and the diagram at the <a href="https://en.wikipedia.org/wiki/Pseudoforest" rel="nofollow noreferrer">Wikipedia entry</a>.</p>
<p><P><strong>Addendum Thu Dec 18 19:57:09 CET 2014.</strong>
The case when there are no fixed points goes as follows.
There are now $(n-1)^n$ endofunctions with no fixed points,
which gives $$E(z) = 1 + \sum_{n\ge 1} (n-1)^n \frac{z^n}{n!}.$$</p>
<p>Now observe that when we apply Lagrange inversion to
$$\frac{\exp(-T(z))}{1-T(z)}$$ we obtain</p>
<p>$$[z^n] \frac{\exp(-T(z))}{1-T(z)}
= \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\frac{\exp(-T(z))}{1-T(z)}\; dz$$</p>
<p>which using the same substitution as before becomes
$$\frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(w(n+1))}{w^{n+1}}
\frac{\exp(-w)}{1-w}
(\exp(-w)-w\exp(-w))\; dw
\\ = \frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(w(n-1))}{w^{n+1}}\; dw
= \frac{(n-1)^n}{n!}.$$
But $\exp(-T(z)) = \frac{z}{T(z)}$ and we finally get for
$E(z)$ the closed form
$$\frac{z}{T(z)(1-T(z))}.$$</p>
<p>This gives for $Q(z)$ that
$$Q(z) = \log\left(\frac{z}{T(z)(1-T(z))}\right)$$
(note that $E(z)$ has a constant term in its expansion at zero which
is one) which produces the sequence
$$0, 1, 8, 78, 944, 13800, 237432, 4708144, 105822432, 2660215680,
\\ 73983185000, 2255828154624,\ldots $$
which points us to <a href="https://oeis.org/A000435" rel="nofollow noreferrer">OEIS A000435</a>, confirming
the result from the accepted answer. Note that the OEIS
says that this sequence was the first in the database, so we are
content to be referencing it in this computation.</p>
<p>To get a closed form re-write $Q(z)$ as follows:
$$Q(z) = \log\left(1 + \frac{z}{T(z)(1-T(z))} -1\right)$$
to get the formula
$$n! [z^n] \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\left(\frac{z}{T(z)(1-T(z))} -1\right)^k$$
This is
$$n! [z^n] \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\sum_{q=0}^k {k\choose q} (-1)^{k-q}
\left(\frac{z}{T(z)(1-T(z))}\right)^q.$$
We can drop the term for $q=0$ when $n\ge 1,$ getting
$$n! [z^n] \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\sum_{q=1}^k {k\choose q} (-1)^{k-q}
\left(\frac{z}{T(z)(1-T(z))}\right)^q.$$</p>
<p>Use Lagrange inversion to extract coefficients from the tree function
term.
$$[z^n] \left(\frac{z}{T(z)(1-T(z))}\right)^q
= \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{n+1}}
\left(\frac{z}{T(z)(1-T(z))}\right)^q
\; dz$$</p>
<p>which using the same substitution as before becomes
$$\frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(w(n+1-q))}{w^{n+1-q}}
\left(\frac{1}{w(1-w)}\right)^q
(\exp(-w)-w\exp(-w))\; dw
\\ = \frac{1}{2\pi i}
\int_{|w|=\epsilon} \frac{\exp(w(n-q))}{w^{n+1}}
\frac{1}{(1-w)^{q-1}} \; dw$$</p>
<p>Extracting coefficients we get
$$\sum_{p=0}^n \frac{(n-q)^{n-p}}{(n-p)!} {p+q-2\choose q-2}$$
when $q\ge 2$ and for $q=1$
$$\frac{(n-1)^n}{n!}.$$</p>
<p>Substituting this into the sum formula yields
$$n! \times \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\times k \times (-1)^{k-1} \frac{(n-1)^n}{n!}
\\ + n! \times \sum_{k=1}^n \frac{(-1)^{k+1}}{k}
\sum_{q=2}^k {k\choose q} (-1)^{k-q}
\sum_{p=0}^n \frac{(n-q)^{n-p}}{(n-p)!} {p+q-2\choose q-2}.$$
This simplifies to
$$n\times (n-1)^n
+ n! \times \sum_{k=2}^n \frac{(-1)^{k+1}}{k}
\sum_{q=2}^k {k\choose q} (-1)^{k-q}
\sum_{p=0}^n \frac{(n-q)^{n-p}}{(n-p)!} {p+q-2\choose q-2}.$$</p>
<p>This formula can be used to compute the count of these graphs for
large $n$ where the tree function formula would no longer be
practicable.</p>
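<p>A sketch of that computation (my addition), again with exact rational arithmetic so no precision is lost; it reproduces OEIS A000435:</p>

```python
from fractions import Fraction
from math import comb, factorial

def fixed_point_free(n):
    # n*(n-1)^n + n! * sum_{k=2}^{n} (-1)^{k+1}/k * sum_{q=2}^{k} C(k,q)(-1)^{k-q}
    #             * sum_{p=0}^{n} (n-q)^{n-p}/(n-p)! * C(p+q-2, q-2)
    s = Fraction(0)
    for k in range(2, n + 1):
        for q in range(2, k + 1):
            inner = sum(Fraction((n - q)**(n - p), factorial(n - p)) * comb(p + q - 2, q - 2)
                        for p in range(n + 1))
            s += Fraction((-1)**(k + 1), k) * comb(k, q) * (-1)**(k - q) * inner
    return n * (n - 1)**n + int(factorial(n) * s)

print([fixed_point_free(n) for n in range(1, 6)])  # [0, 1, 8, 78, 944]
```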
<p><P>
The sequence for $n=30$ to $n=34$ reads</p>
<pre>
38086159100543376291945674612050231296000000,
3117962569860399657478478640723143576082043800,
263711778692997479722657378560127779200642842624,
23019602620026625886784119896351926037410391377792,
2071846675499818842878197235287956993753027358752768
</pre>
<p><P><strong>Additional observations.</strong>
The OEIS entry <a href="https://oeis.org/A000435" rel="nofollow noreferrer">OEIS A000435</a>
says that this sequence is the normalized total height of all labeled
rooted trees on $n$ nodes (sum of height of all nodes in all trees
scaled by $n$). The question arises on how to prove this.
<P></p>
<p>The height parameter is represented by the bivariate generating
function $T(z, u)$ where $T(z, 1) = T(z)$ (the ordinary tree
function) and we have the functional equation</p>
<p>$$T(z, u) = z
+ z\frac{T(uz,u)^1}{1!}
+ z\frac{T(uz,u)^2}{2!}
+ z\frac{T(uz,u)^3}{3!}
+ z\frac{T(uz,u)^4}{4!}
+ \cdots$$
or
$$T(z, u) = z \exp T(uz, u).$$</p>
<p>The exponential generating function for the sum of the height of all
nodes of all rooted labeled trees by the number of nodes is given by
$$G(z) = \left.\frac{\partial}{\partial u} T(z, u)\right|_{u=1}.$$</p>
<p>We thus differentiate the functional equation, getting
$$G(z) = \left. z \exp T(uz, u)
\left(z \frac{\partial}{\partial z} T(z, u) +
\frac{\partial}{\partial u} T(z, u) \right)\right|_{u=1}$$
which becomes
$$G(z) = T(z) (z T'(z) + G(z))$$
so that
$$G(z) (1-T(z)) = z T(z) T'(z)$$
or
$$G(z) = \frac{z T(z)}{1-T(z)} T'(z).$$</p>
<p>We may substitute the expression for the derivative of the tree
function that we obtained earlier into this to get
$$G(z) = \frac{z T(z)}{1-T(z)}
\frac{1}{z} \frac{T(z)}{1-T(z)}
= \left(\frac{T(z)}{1-T(z)}\right)^2.$$</p>
<p>We have computed the exponential generating function of the total
height of all labelled trees on $n$ nodes, which gives the sequence
$$0, 2, 24, 312, 4720, 82800, 1662024, 37665152, 952401888,
\\ 26602156800, 813815035000 $$
which is <a href="https://oeis.org/A001864" rel="nofollow noreferrer">OEIS A001864</a>.</p>
<p>The normalized height is this sequence divided by $n.$ To verify that
this is indeed the same as the count of endofunctions with no fixed
point we must show that
$$z \frac{d}{dz} Q(z) = G(z)$$
i.e. that the endofunctions times $n$ give the total height or
alternatively, the total height divided by $n$ give the endofunctions.
<P>
But the left is
$$ z \frac{T(z)(1-T(z))}{z} \\ \times
\left(\frac{1}{T(z)(1-T(z))}
- z \frac{1-2T(z)}
{T(z)^2 (1-T(z))^2} T'(z) \right)$$
which gives
$$1- z \frac{1 - 2T(z)}{T(z)(1-T(z))} T'(z)
= 1- \frac{1 - 2T(z)}{T(z)(1-T(z))} \frac{T(z)}{1-T(z)}
= 1- \frac{1 - 2T(z)}{(1-T(z))^2}
\\ = \frac{1-2T(z)+T(z)^2 -(1- 2T(z))}{(1-T(z))^2}
= \left(\frac{T(z)}{1-T(z)}\right)^2,$$
thus concluding the proof.</p>
|
1,832,320 | <p>I know there are n linearly independent and n + 1 affinely independent vectors in $\mathbb{R}^n$. But how many convexly independent there are?</p>
<p>I think there are infinitely many of them because if I have a convex polytope I can always add another point that is "outside" of said polytope. </p>
<p>But I'm not sure if my reasoning is correct.</p>
| Tsemo Aristide | 280,301 | <p>Hint: On finite dimensional spaces, two metrics are equivalent.</p>
|
2,611,656 | <p>Suppose $x = 1/t$. So now $x$ is a function of $t$, i.e., $x(t)$.</p>
<p>So $$\frac{dx(t)}{dt} = -t^{-2} \Rightarrow dx(t) = -t^{-2}dt$$</p>
<p>This problem is from the textbook: <code>advanced mathematical methods for scientists and engineers</code></p>
<p><a href="https://i.stack.imgur.com/SqKnQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SqKnQ.png" alt="enter image description here"></a></p>
<p>How to go from "$dx = -t^{-2}dt$" to "$\frac{d}{dx} = -t^2\frac{d}{dt}$"? </p>
<p>It seems that I just divide the previous term by $1$ and then multiply it by $d$. </p>
<p>However, it seems unrealistic to me; can anyone please explain this carefully to me? More specifically, can I multiply $d$, which is like an operator to me. thanks!</p>
| Peter Szilas | 408,605 | <p>Correct me if wrong :</p>
<p>$x=x(t)$, differentiable on an interval $I$, and $x'(t)\not=0,$</p>
<p>then the inverse function $t=t(x)$ exists and is differentiable.</p>
<p>Let $F(t(x))$ arbitrary, differentiable, then:</p>
<p>$\dfrac{d}{dx} F(t(x)) = \dfrac{d}{dt} F(t)\times\dfrac{d}{dx}t(x),$ i.e.,</p>
<p>$[\dfrac{d}{dx}]F(t(x)) = [\dfrac{d}{dx}t(x) \dfrac{d}{dt}]F(t(x))$, </p>
<p>or $\dfrac{d}{dx} = t'(x)\dfrac{d}{dt}$, as an operator equation.</p>
<p>With:</p>
<p>$t'(x) = \dfrac{1}{x'(t)}= \dfrac{1}{-t^{-2}}$:</p>
<p>$\dfrac{d}{dx} = -t^2 \dfrac{d}{dt}$.</p>
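<p>The operator identity can also be sanity-checked numerically with central differences (my sketch; $F$ is an arbitrary smooth test function, and the evaluation point $t_0=2$ is arbitrary):</p>

```python
def F(t):
    # arbitrary smooth test function
    return t**3 + 2.0*t

h = 1e-6
t0 = 2.0
x0 = 1.0 / t0                     # x = 1/t, so t(x) = 1/x

# left side: d/dx applied to F(t(x)), by a central difference in x
lhs = (F(1.0/(x0 + h)) - F(1.0/(x0 - h))) / (2.0*h)

# right side: -t^2 d/dt applied to F, evaluated at t0
rhs = -t0**2 * (F(t0 + h) - F(t0 - h)) / (2.0*h)

print(lhs, rhs)  # both close to -56
```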
|
2,979,315 | <p>Let <span class="math-container">$X$</span> be a continuous random variable with uniform distribution between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Compute the distribution of <span class="math-container">$Y = \sin(2\pi X)$</span>.</p>
<p><span class="math-container">$\sin(2\pi \cdot0)$</span> and <span class="math-container">$\sin(2\pi \cdot1) =0$</span>. So, the inverse image of the function has multiple roots. How can I find the PDF of <span class="math-container">$Y$</span> then?</p>
| José Carlos Santos | 446,262 | <p>Let <span class="math-container">$R$</span> be the set of all reflections. Fix <span class="math-container">$r_0\in R$</span>. For each <span class="math-container">$r\in R$</span>, <span class="math-container">$r\circ{r_0}^{-1}$</span> is an isometry of the cube. But, since <span class="math-container">$r_0$</span> and <span class="math-container">$r$</span> both have determinant <span class="math-container">$-1$</span>, <span class="math-container">$r\circ{r_0}^{-1}$</span> has determinant <span class="math-container">$1$</span>. In other words, <span class="math-container">$r\circ{r_0}^{-1}$</span> belongs to <span class="math-container">$SO(3,\mathbb{R})$</span>. So, <span class="math-container">$r=g\circ r_0$</span>, for some <span class="math-container">$g\in SO(3,\mathbb{R})$</span>. And, if <span class="math-container">$r'\in R$</span> and <span class="math-container">$g'\in SO(3,\mathbb{R})$</span> are such that <span class="math-container">$r'=g'\circ r_0$</span>, then <span class="math-container">$r=r'\iff g=g'$</span>. On the other hand, if <span class="math-container">$g\in SO(3,\mathbb{R})$</span>, then <span class="math-container">$g\circ r_0\in R$</span>. This proves that <span class="math-container">$\#R=\#SO(3,\mathbb{R})=24$</span>.</p>
|
2,261,500 | <p>I try to prove that statement using only Bachet-Bézout theorem (I know that it's not the best technique). So I get $k$ useful equations with $n_1$ then $(k-1)$ useful equations with $n_2$ ... then $1$ useful equation with $n_{k-1}$. I multiply all these equations to obtain $1$ for one side. For the other side I'm lost (there are too many terms) but I want to make appear the form $n_1 L_1+...+n_k L_k$.</p>
<p>Supposing the existence of all the integers we need :</p>
<p>$\underbrace{(a_1n_1+a_2n_2)(a_1n_1+a_3n_3)...(a_1n_1+a_kn_k)}_{\textit{k equations}} \underbrace{(b_2n_2+b_3n_3)...(b_2n_2+b_kn_k)}_{\textit{(k-1) equations}}...\underbrace{(\mu_{k-1} n_{k_1}+\mu_{k} n_k)}_{\textit{1 equation}}=1$</p>
<p>Maybe we can reduce the number of useful equations or start an induction to identify a better form for the product.</p>
<p>Thanks in advance !</p>
| Thomas Andrews | 7,933 | <p>You are making this way too complicated.</p>
<p>There exist integers $x_1,x_2$ such that $n_1x_1+n_2x_2=1$. Letting $x_3=x_4=\cdots=x_k=0$, we then have $\sum_{i=1}^{k} n_ix_i=1$, so the $n_i$ must be relatively prime.</p>
<p>Basically, $\gcd(n_1,n_2,\dots,n_k)\mid\gcd(n_1,n_2)$.</p>
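<p>A small sketch of the argument in code (my addition; the moduli $6$ and $35$ are an arbitrary pair with $\gcd=1$, padded with two extra moduli):</p>

```python
from math import gcd

def bezout(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, x, y = bezout(b, a % b)
    return (g, y, x - (a // b) * y)

n = [6, 35, 10, 21]
assert gcd(n[0], n[1]) == 1        # the hypothesis: one coprime pair

g, x1, x2 = bezout(n[0], n[1])
coeffs = [x1, x2] + [0] * (len(n) - 2)   # pad the remaining coefficients with 0
print(sum(c * m for c, m in zip(coeffs, n)))  # 1
```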
|
1,637,748 | <p>I was given the following thing to prove:</p>
<p>$$\lim_{n \to \infty} {d(n) \over n} = 0$$
where $d(n)$ is the number of divisors of n.</p>
<p>I'm not so sure how to approach this question. One way I thought of is to use the UFT to turn the expression into:</p>
<p>$$\lim_{n \to \infty} {\prod (x_i + 1) \over \prod p_i^{x_i}}$$</p>
<p>And then to use <a href="https://en.wikipedia.org/wiki/L'H%C3%B4pital's_rule">L'Hôpital's rule</a> for each $x_i$, so I get something like this:</p>
<p>$$\lim_{n \to \infty} {1 \over \ln (\sum p_i) \prod p_i^{x_i}}$$</p>
<p>That equals zero.</p>
<ol>
<li>Is this a good approach?</li>
<li>Is there a different way to solve this?</li>
</ol>
| W-t-P | 181,098 | <p>It is actually true that $\lim_{n\to\infty} \frac{d(n)}{n^c}=0$ for any $c>0$, but if you only want it for $c=1$, the simplest proof would be the following. The divisors $\delta\mid n$ can be organized into pairs $(\delta,n/\delta)$, and in every pair the smallest divisor is $\min\{\delta,n/\delta\}\le\sqrt n$. It follows that the number of pairs is at most $\sqrt n$. Thus $d(n)\le 2\sqrt n$ and the assertion follows.</p>
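<p>The pairing bound $d(n)\le 2\sqrt n$ is easy to confirm computationally; a sketch (my addition):</p>

```python
from math import isqrt

def d(n):
    # count divisors by pairing each divisor a <= sqrt(n) with n // a
    count = 0
    for a in range(1, isqrt(n) + 1):
        if n % a == 0:
            count += 1 if a * a == n else 2
    return count

assert all(d(n) <= 2 * isqrt(n) for n in range(1, 10001))
print(max(d(n) / n for n in range(1000, 10001)))  # already below 0.05
```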
|
114,371 | <p>A sector $P_1OP_2$ of an ellipse is given by angles $\theta_1$ and $\theta_2$. </p>
<p><img src="https://i.stack.imgur.com/mdmq2.png" alt="A sector of an ellipse"></p>
<p>Could you please explain me how to find the area of a sector of an ellipse?</p>
| Community | -1 | <p>Scale the entire figure along the $y$ direction by a factor of $a/b$. The ellipse becomes a circle of radius $a$, and the two angles become $\tan^{-1}(\frac ab\tan\theta_1)$ and $\tan^{-1}(\frac ab\tan \theta_2)$. The area of the original elliptical sector is $b/a$ times the area of the circular sector between these two angles, which is straightforward to find.</p>
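<p>A sketch of this computation (my addition): the scaling argument gives the closed form $\frac{ab}{2}\left(\tan^{-1}(\tfrac ab\tan\theta_2)-\tan^{-1}(\tfrac ab\tan\theta_1)\right)$ for first-quadrant angles, checked here against direct numerical integration of the polar form of the ellipse:</p>

```python
from math import atan, tan, sin, cos

def ellipse_sector_area(a, b, t1, t2):
    # sector of x^2/a^2 + y^2/b^2 = 1 between polar angles t1 <= t2,
    # both in [0, pi/2) so that atan picks the correct branch
    phi1 = atan(a / b * tan(t1))
    phi2 = atan(a / b * tan(t2))
    return (b / a) * 0.5 * a**2 * (phi2 - phi1)

def numeric_area(a, b, t1, t2, n=100000):
    # midpoint rule on r(t)^2/2 with r(t) = ab / sqrt(b^2 cos^2 t + a^2 sin^2 t)
    h = (t2 - t1) / n
    total = 0.0
    for i in range(n):
        t = t1 + (i + 0.5) * h
        r2 = (a * b)**2 / (b**2 * cos(t)**2 + a**2 * sin(t)**2)
        total += 0.5 * r2 * h
    return total

print(ellipse_sector_area(2.0, 1.0, 0.3, 1.0), numeric_area(2.0, 1.0, 0.3, 1.0))
```

<p>Other quadrants need the matching branch of $\tan^{-1}$, exactly as the scaled angles in the answer require.</p>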
|
440,615 | <p>Let $R$ be the region in the first quadrant bounded above by the circle $(x-1)^2 + y^2 = 1$ and below by the line $y = x$ . Sketch the region $R$ and evaluate the double integral $\iint 2y \;\mathrm dA$ . </p>
| André Nicolas | 6,312 | <p>We will do it using rectangular coordinates, though polar is tempting in any problem that involves circles.</p>
<p>The sketch is an essential part of the work. (I do not think I could do the calculation of the integral without a picture.)</p>
<p>We have a circle centre $(1,0)$ and radius $1$, and the familiar line $y=x$. These two curves meet. It will be useful to know where. Put $y=x$ in the equation $(x-1)^2+y^2=1$. Expand. We quickly get $2x^2-2x=0$. So the curves meet at $(0,0)$ and $(1,1)$. </p>
<p>Shall we integrate first with respect to $x$ or with respect to $y$? We take a quick think about the alternatives. I think $y$ first will be best.</p>
<p>If we integrate first with respect to $y$, then $y$ will travel from $x$ to the top of the circle. From $y^2=1-(x-1)^2$, we find that the top half of the circle has equation $y=\sqrt{1-(x-1)^2}=\sqrt{2x-x^2}$. After we have integrated with respect to $y$, we integrate with respect to $x$, with $x$ travelling from $0$ to $1$. So we want
$$\int_0^1 \left(\int_x^{\sqrt{2x-x^2}} 2y\,dy \right)\,dx.$$
The integrations are easy. And even the square roots disappear!</p>
<p><strong>Remark:</strong> If we choose to integrate first with respect to $x$, note that the <strong>left half</strong> of the circle has equation $x=1-\sqrt{1-y^2}$. So if we integrate first with respect to $x$, then $x$ travels from $1-\sqrt{1-y^2}$ to $y$, and then $y$ travels from $0$ to $1$. The integration with respect to $x$ is easy, we get $2yx$. The substitution now yields a slightly messy function of $y$, but doable. </p>
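<p>As a numeric sanity check (my addition): carrying out the inner integration gives $2x-2x^2$, so the double integral evaluates to $1-\tfrac23=\tfrac13$; a midpoint-rule sketch confirms this:</p>

```python
def inner(x):
    # integral of 2y dy from y = x to y = sqrt(2x - x^2):
    # [y^2] between the limits = (2x - x^2) - x^2 = 2x - 2x^2
    return 2*x - 2*x**2

n = 100000
h = 1.0 / n
approx = sum(inner((i + 0.5) * h) for i in range(n)) * h
print(approx)  # approximately 1/3
```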
|
898,495 | <p>A standard pack of 52 cards with 4 suits (each having 13 denominations) is well shuffled and dealt out to 4 players (N, S, E and W).</p>
<p>They each receive 13 cards.</p>
<p>Suppose N and S have exactly 10 cards of a specified suit between them. </p>
<p>What is the probability that the 3 remaining cards of the suit are in one player's hand (either E or W)? Can you please just help me understand how to solve this conditional probability question?</p>
| Philwy | 810,389 | <p><a href="https://i.stack.imgur.com/FlXWg.png" rel="nofollow noreferrer">Click to see my answer.</a></p>
<p>See the picture. Thank you.</p>
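<p>Since the linked picture is not reproduced in this text, here is a hedged sketch (my addition, assuming the intended computation is the standard 3&ndash;0 split probability): the 3 remaining suit cards sit among the 26 cards dealt to E and W, and E's hand is a uniform 13-subset of those 26.</p>

```python
from math import comb

# P(East holds all 3 of the remaining suit cards):
# choose the other 10 of East's 13 cards from the 23 non-suit cards
p_east = comb(3, 3) * comb(23, 10) / comb(26, 13)

# by symmetry the same holds for West, and the two events are disjoint
p_one_hand = 2 * p_east
print(round(p_one_hand, 4))  # 0.22
```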
|
1,868,440 | <p>In a game, there are <code>N</code> numbers and <code>2</code> players (<code>A</code> and <code>B</code>). If <code>A</code> and <code>B</code> alternately pick a number and replace it with one of its divisors other than itself, how would I conclude who would make the last move? (Notice that eventually, when the list gets replaced by all 1's, you can't make any more moves.) Any help would be appreciated, thank you :)</p>
| PM 2Ring | 207,316 | <p>As I mentioned in the comments, this is essentially a version of the ancient game known as Nim. This game has been thoroughly analysed and the <a href="https://en.wikipedia.org/wiki/Nim" rel="nofollow">Wikipedia article</a> gives a good description of the winning strategy for Nim, so that will not be repeated here. Instead, I'll explain why the OP's game is just Nim in disguise.</p>
<p>For those unfamiliar with Nim, here's a brief description, from the linked Wikipedia article:</p>
<blockquote>
<p>Nim is a mathematical game of strategy in which two players take turns
removing objects from distinct heaps. On each turn, a player must
remove at least one object, and may remove any number of objects
provided they all come from the same heap. The goal of the game is to
be the player to remove the last object.</p>
</blockquote>
<p>Each of the $N$ numbers in the OP's game corresponds to a Nim heap. The objects in a given heap are the prime factors of that heap's number.</p>
<p>For example, let's say we start with the 3 numbers 7, 30, 100. That gives us 3 heaps:<br>
{7}, {2, 3, 5}, and {2, 2, 5, 5}.</p>
<p>If $d$ is a divisor of $n$ the list of prime factors of $d$ is a sublist of the list of prime factors of $n$. So when we make a move in the OP's game, replacing $n$ by $d$, we remove the prime factors of $n/d$ from the $n$ heap, which will leave the prime factors of $d$ in the heap.</p>
<p>Eg, we can replace 30 by 5 by removing the factors of $30 / 5 = 6$, i.e. by removing the 2 and the 3. </p>
<p>When a heap is empty it has the value of the empty product, which is one.</p>
<p>FWIW, we can convert this game into actual Nim form by writing each of the prime factors of each of the $N$ numbers onto cards or other small objects.</p>
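<p>Under this encoding the winner can be read off with the usual Nim test: XOR the heap sizes, i.e. the prime-factor counts. A sketch (my addition), using the example heaps above:</p>

```python
from functools import reduce
from operator import xor

def big_omega(n):
    # number of prime factors of n counted with multiplicity
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

heaps = [big_omega(n) for n in [7, 30, 100]]
print(heaps)                 # [1, 3, 4]
nim_value = reduce(xor, heaps)
print(nim_value)             # 6: nonzero, so the player to move wins
```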
|
19,962 | <p><a href="http://en.wikipedia.org/wiki/Covariance_matrix" rel="nofollow">http://en.wikipedia.org/wiki/Covariance_matrix</a></p>
<pre><code>Cov(Xi,Xj) = E((Xi-Mi)(Xj-Mj))
</code></pre>
<p>Is the above equivalent to:</p>
<pre><code>(Xi-Mi)(Xj-Mj)
</code></pre>
<p>I don't understand why the expectancy of (Xi-Mi)(Xj-Mj) would be different than just (Xi-Mi)(Xj-Mj)</p>
<p>Addendum:</p>
<p>Let's say I have two sets of data:</p>
<p>Set 1: 1,2,3 avg: 2</p>
<p>Set 2: 4,5 avg: 4.5</p>
<p>Is the following a covariance matrix? </p>
<pre><code>(1-2)*(4-4.5) , (2-2)*(4-4.5) , (3-2)*(4-4.5)
(1-2)*(5-4.5) , (2-2)*(5-4.5) , (3-2)*(5-4.5)
</code></pre>
<p>I'm reading online and it seems like a covariance matrix composed of two data sets should be a 2x2 matrix, but in this case, I have a 2x3 matrix. How do I go from what I have to the correct 2x2 matrix?</p>
| Henry | 6,460 | <p>There is a difference between a random variable and its expectation. Take a standard fair six sided die. It can take any of the values {1,2,3,4,5,6}, but its expectation is 3.5. </p>
<p>Similarly $(X_i - \mu_i)(X_j - \mu_j)$ is a random variable with several possible values while $\mathrm{E}[(X_i - \mu_i)(X_j - \mu_j)]$ is a number depending on the joint distribution of $X_i$ and $X_j$.</p>
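<p>To make the die example concrete (my sketch): $(X-\mu)^2$ takes several values, one per face, while its expectation is a single number.</p>

```python
from fractions import Fraction

faces = range(1, 7)
mu = Fraction(sum(faces), 6)              # E[X] = 7/2

# the random variable (X - mu)^2 takes six possible values...
values = [(x - mu)**2 for x in faces]

# ...but its expectation E[(X - mu)^2] is one number (the variance)
variance = sum(values) / 6
print(mu, variance)  # 7/2 35/12
```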
|
4,425,234 | <p>Let <span class="math-container">$f:[1,\infty)\rightarrow [1,\infty)$</span> be a function such that for every <span class="math-container">$x\in [1,\infty)$</span>, <span class="math-container">$f(f(x))=2x^{2}-3x+2$</span>. I am required to show that <span class="math-container">$f$</span> is bijective and also to study the injectivity of the function <span class="math-container">$g:[1,\infty)\rightarrow \mathbb{R}$</span>, <span class="math-container">$g(x)=x^{2}+(f(x)-4)x-2f(x)+7$</span>, for every <span class="math-container">$x\in\mathbb{R}$</span>.</p>
<p>For the first task I selected <span class="math-container">$x,y \in [1,\infty)$</span> such that <span class="math-container">$x\neq y$</span>. Then, <span class="math-container">$f(f(x))=f(f(y))$</span> iff <span class="math-container">$2x^{2}-3x=2y^{2}-3y$</span>, meaning that <span class="math-container">$2x-3=2y-3 \iff x=y$</span>, which is not true. Thus, the function is not injective.</p>
<p>For every <span class="math-container">$x \in [1,\infty)$</span>, we want to show that there is a <span class="math-container">$z$</span> in <span class="math-container">$[1,\infty)$</span> such that <span class="math-container">$z=2x^{2}-3x+2$</span>; because <span class="math-container">$2x^2-3x+2=2x(x-1)-(x-1)+1=(2x-1)(x-1)+1$</span> and <span class="math-container">$x\geq 1$</span>, then <span class="math-container">$2x\geq 1$</span> and <span class="math-container">$z \geq 1$</span>, so there exists <span class="math-container">$z \in [1,\infty)$</span> such that <span class="math-container">$z=2x^{2}-3x+2$</span>. Thus the function is surjective.</p>
<p>I am quite clueless on how to study the injectivity of the other function, not knowing who <span class="math-container">$f$</span> is and what properties does it have.</p>
| Ross Millikan | 1,827 | <p><span class="math-container">$f(t_n,y_n)$</span> is (the approximation of) the derivative of <span class="math-container">$y$</span> at <span class="math-container">$t_n$</span>. <span class="math-container">$h$</span> is the time step, so the change in <span class="math-container">$y$</span> over the time step is the length of the time step times the slope of the tangent at that point. For example, if <span class="math-container">$f(t,y)=3y$</span> and we start from <span class="math-container">$t=1,y=1$</span> we have <span class="math-container">$f(1,1)=3$</span>. If our timestep is <span class="math-container">$0.1$</span> we have <span class="math-container">$h=0.1, y(1.1)\approx 1+0.1 \cdot 3 = 1.3$</span>. The true solution is <span class="math-container">$y=e^{3(t-1)}$</span> and the true <span class="math-container">$y(1.1)=e^{0.3}\approx 1.350$</span> to three places.</p>
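<p>The worked step can be written as code (a minimal sketch, my addition):</p>

```python
def euler_step(f, t, y, h):
    # one forward-Euler step: y_{n+1} = y_n + h * f(t_n, y_n)
    return y + h * f(t, y)

def f(t, y):
    # right-hand side of y' = 3y from the example
    return 3.0 * y

y1 = euler_step(f, 1.0, 1.0, 0.1)
print(round(y1, 12))  # 1.3
```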
|
1,213,613 | <p>I am stuck trying to solve this equation. </p>
<p>B<sup>2</sup>(B + 11.97) = 238.67</p>
<p>This is for my math class, we solved this equation and got to that final form and I know that one solution is 3.88 but I don't know how to get it mathematically. I tried using Horner's Method but I don't know how to make it work because 238.67 is not a whole number.
Anyone has any ideas or an explanation how to get that 3.88 solution?</p>
| Claude Leibovici | 82,404 | <p>You have $$B^2(B + 11.97) = 238.67$$ Rewrite it as $$B^2\Big(B+\frac{1197}{100}\Big)=\frac{23867}{100}$$ Now define $B=\frac{x}{100}$ so the equation becomes $$\frac{x^2}{10000}\Big(\frac{x}{100}+\frac{1197}{100}\Big)=\frac{23867}{100}$$ Multiply by $100$ inside the parentheses and the rhs to get $$\frac{x^2}{10000}(x+{1197})=23867$$ Finally multiply lhs and rhs by $10000$. Use Horner's method for an approximate solution of the equation for $x$ and go back to $B$ from $B=\frac{x}{100}$. </p>
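<p>If only a numeric root is needed, plain bisection is an alternative to the polynomial evaluation scheme (a sketch, my addition; the bracket $[0,10]$ is chosen by inspection):</p>

```python
def g(B):
    # the equation rearranged to g(B) = 0
    return B**2 * (B + 11.97) - 238.67

lo, hi = 0.0, 10.0           # g(0) < 0 < g(10), so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2.0
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid
print(round(lo, 2))  # 3.88
```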
|
1,213,613 | <p>I am stuck trying to solve this equation. </p>
<p>B<sup>2</sup>(B + 11.97) = 238.67</p>
<p>This is for my math class, we solved this equation and got to that final form and I know that one solution is 3.88 but I don't know how to get it mathematically. I tried using Horner's Method but I don't know how to make it work because 238.67 is not a whole number.
Anyone has any ideas or an explanation how to get that 3.88 solution?</p>
| Mark Bennet | 2,906 | <p>Treat it as $B^2(A+11.97)=238.67$ with $A\approx B$ to get a quick estimate, which can be improved with standard methods.</p>
<p>Start by approximating $B^2(A+12)=240$ with $A=0$ so that $B^2\approx 20$ and $B\approx 4$. So set $A=4$ and $B^2\approx 15$ so that $B\approx 3.9$</p>
<p>That's close with no calculator - to get a more accurate solution put the accurate constants in and use a calculator from this point on.</p>
|
2,414,472 | <blockquote>
<p>Let $(a_n)_{n\geq2}$ be a sequence defined as
$$
a_2=1,\qquad a_{n+1}=\frac{n^2-1}{n^2}a_n.
$$
Show that
$$
a_n=\frac{n}{2(n-1)},\quad\forall n\geq2
$$
and determine $\lim_{n\rightarrow+\infty}a_n$.</p>
</blockquote>
<p>I cannot show that $a_n$ is $\frac{1}{2}\frac{n}{n-1}$. Can someone help? </p>
<p>Thank You</p>
| Franklin Pezzuti Dyer | 438,055 | <p>Try induction.</p>
<p>First of all, notice that if $n=2$, then
$$\frac{1}{2}\frac{n}{n-1}=1=a_2$$
which proves that the explicit formula holds for $a_2$. Then suppose that for some $k$, the formula holds. Then
$$a_k=\frac{1}{2}\frac{k}{k-1}$$
and so, using the recursive definition,
$$a_{k+1}=\frac{k^2-1}{k^2}a_k$$
$$a_{k+1}=\frac{1}{2}\frac{(k+1)(k-1)}{k^2}\frac{k}{k-1}$$
$$a_{k+1}=\frac{1}{2}\frac{k+1}{k}$$
$$a_{k+1}=\frac{1}{2}\frac{(k+1)}{(k+1)-1}$$
and so $a_{k+1}$ also follows the formula.</p>
<p>Let $S_k$ be the statement
$$a_k=\frac{1}{2}\frac{k}{k-1}$$
We have proven $S_2$ and $S_k\implies S_{k+1}$, so
$$S_2\implies S_3\implies S_4\implies ...$$
or, in other words, $S_k$ is true for all $k\ge 2$.</p>
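As a quick mechanical check of the induction (my own sketch, using exact rational arithmetic so no rounding can hide an error):

```python
# Check: a_2 = 1, a_{n+1} = (n^2 - 1)/n^2 * a_n, claimed closed form n/(2(n-1)).
from fractions import Fraction

a = Fraction(1)  # a_2
for n in range(2, 50):
    closed = Fraction(n, 2 * (n - 1))
    assert a == closed                      # a_n matches the closed form
    a = Fraction(n * n - 1, n * n) * a      # advance to a_{n+1}

# The limit of n / (2(n-1)) as n -> infinity is 1/2.
```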
|
2,569,267 | <p><a href="https://gowers.wordpress.com/2011/10/16/permutations/" rel="nofollow noreferrer">This</a> article claims:</p>
<blockquote>
<p>we simply replace the number 1 by 2, the number 2 by 4, and the number 4 by 1</p>
<p>....I start with the numbers arranged as follows: 1 2 3 4 5 6. After doing the permutation (124) the numbers are arranged as 2 4 3 1 5 6.</p>
</blockquote>
<p>I always thought <span class="math-container">$(124)$</span> was read left to right as "1 goes to 2, 2 goes to 4, and 4 goes to 1" and therefore the outcome should be 4, 1, 3, 2, 5, 6.</p>
<p>According to my understanding, the article did the permutation reading from right to left. Is the blog following a convention of reading right to left, or do I just have it wrong?</p>
| Dietrich Burde | 83,966 | <p>The text says "Let me illustrate this with $n=6$ and those two permutations. I start with the numbers arranged as follows: 1 2 3 4 5 6. After doing the permutation $(124)$ the numbers are arranged as 2 4 3 1 5 6." So indeed $1$ goes to $2$, and $2$ goes to $4$, and $4$ goes to $1$, if you read this top-down:
$$
\begin{matrix} 1 & 2 & 3 & 4 & 5 & 6\cr 2 & 4 & 3 & 1 & 5 & 6\end{matrix}
$$</p>
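The two reading conventions can be made concrete with a short sketch (the variable names and dictionary encoding are my own):

```python
# The cycle (1 2 4): "1 goes to 2, 2 goes to 4, 4 goes to 1".
cycle = {1: 2, 2: 4, 4: 1}

arrangement = [1, 2, 3, 4, 5, 6]

# The blog's convention: replace each number k in the arrangement by cycle(k).
replaced = [cycle.get(k, k) for k in arrangement]
print(replaced)  # [2, 4, 3, 1, 5, 6]

# The asker's reading: the number k moves to where cycle(k) was,
# which amounts to applying the inverse map to each entry.
inverse = {v: k for k, v in cycle.items()}
moved = [inverse.get(k, k) for k in arrangement]
print(moved)  # [4, 1, 3, 2, 5, 6]
```

The two conventions are inverse to each other, which is exactly why the blog's outcome and the asker's expected outcome differ.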
|
2,878,448 | <p>What could be a possible approach to find the proof of:</p>
<blockquote>
<p>$\binom{2k+1}{k}$ is odd when $k=2^m-1$, otherwise $\binom{2k+1}{k}$ is even.</p>
</blockquote>
<p>I have seen some similar problems in <a href="https://math.stackexchange.com/questions/317163/prove-if-n-2k-1-then-binomni-is-odd-for-0-leq-i-leq-n">https://math.stackexchange.com/questions/317163</a> and <a href="https://math.stackexchange.com/questions/2046338/2k-1-choose-a-is-odd?noredirect=1&lq=1">https://math.stackexchange.com/questions/2046338</a>, but I still don't know why $\binom{2k+1}{k}$ is even when $k \neq 2^m-1$. </p>
<p>Any answer will be appreciated. Thanks!</p>
| P. Grabowski | 549,313 | <p>You could just use Legendre's formula or have some fun with the binomial expansion of $\left ( 1 + x \right)^{2k+1}$ modulo 2.</p>
<p><a href="https://en.wikipedia.org/wiki/Legendre%27s_formula" rel="nofollow noreferrer">Legendre's formula (Wikipedia)</a></p>
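An empirical check of the claimed pattern (not a proof; it only tests small $k$):

```python
# C(2k+1, k) should be odd exactly when k = 2^m - 1, i.e. k+1 is a power of 2.
from math import comb

for k in range(1, 300):
    odd = comb(2 * k + 1, k) % 2 == 1
    power_minus_one = (k + 1) & k == 0   # bit trick: k+1 is a power of two
    assert odd == power_minus_one
```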
|
3,461,762 | <blockquote>
<p>Is it true that, for any Pythagorean triple <span class="math-container">$4ab > c^2$</span>?</p>
</blockquote>
<p>So this came up in a proof I was working on and it seems experimentally correct from what I've tried and I would imagine the proof is similar to proving,</p>
<p><span class="math-container">$$ab < \frac{c^2}{2}$$</span></p>
<p>The idea I have for this approach is,</p>
<p><span class="math-container">$$4ab > c^2$$</span>
<span class="math-container">$$4ab > a^2 + b^2$$</span>
(Then maybe something with the triangle inequality?)</p>
<blockquote>
<p>So is this statement true (it seems to be), and how can I prove it?</p>
<p>A counter-example would also be acceptable.</p>
</blockquote>
| fleablood | 280,126 | <p>If <span class="math-container">$a^2 + b^2 = c^2$</span> then </p>
<p><span class="math-container">$c^2 < 4ab$</span> is the same thing as claiming <span class="math-container">$a^2 + b^2 < 4ab$</span></p>
<p>This is true if and only if <span class="math-container">$a^2 - 2ab + b^2 < 2ab$</span> which is to say</p>
<p><span class="math-container">$(a-b)^2 < 2ab$</span></p>
<p>There shouldn't be any reason that should be true in general. If we let <span class="math-container">$m = $</span> the average/midpoint of <span class="math-container">$a,b$</span>, that is, <span class="math-container">$m = \frac {a+b}2$</span>, and <span class="math-container">$e = $</span> the radius of the interval <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, that is <span class="math-container">$e = |m-a| =|b-m| = \frac{|a-b|}2$</span>, then we have </p>
<p><span class="math-container">$(a-b)^2 < 2ab \implies$</span></p>
<p><span class="math-container">$4e^2 < 2(m-e)(m+e) = 2(m^2 - e^2)\implies$</span></p>
<p><span class="math-container">$3e^2 < m^2$</span></p>
<p>Which utterly need not be true!</p>
<p>Of course finding a <em>Pythagorean triplet</em> where <span class="math-container">$a= m\pm e$</span> and <span class="math-container">$b=m\mp e$</span> are integers so that <span class="math-container">$a^2 + b^2 = 2(m^2 + e^2)$</span> is not merely an integer but a perfect square may put a wrinkle in finding a counterexample.</p>
<p>But not an insurmountable wrinkle.</p>
<p>We need, for a counter-example, <span class="math-container">$3e^2 \ge m^2$</span>, which just requires <span class="math-container">$e$</span> to be significantly large, which is (it would seem) utterly irrelevant to <span class="math-container">$2(m^2 + e^2)$</span> being a perfect square.</p>
<p>... But to find a counter example:</p>
<p>.....</p>
<p>Back to square 1: we can find a Pythagorean triple <span class="math-container">$a^2 + b^2 = c^2$</span> by letting <span class="math-container">$b$</span> be any odd integer with <span class="math-container">$b^2 = 2a+1$</span>, so that <span class="math-container">$c^2 = (a+1)^2$</span> ... retro-solving by letting <span class="math-container">$a = \frac {b^2-1}2$</span>.</p>
<p>So we want a counter example of <span class="math-container">$c^2 = a^2 + b^2 = a^2 + 2a + 1 = (a+1)^2 \ge 4ab$</span>, or, substituting <span class="math-container">$a = \frac {b^2-1}2$</span>, we want a case where</p>
<p><span class="math-container">$(\frac {b^2+1}2)^2 \ge 4\frac{b^2 -1}2b$</span>.</p>
<p>Surely we can find a counter example.</p>
<p><span class="math-container">$(b^2 + 1)^2 \ge 8(b^2-1)b$</span></p>
<p><span class="math-container">$b^4 + 2b^2 + 1 \ge 8b^3 -8b$</span></p>
<p><span class="math-container">$b^4 -8b^3 +2b^2 + 8b + 1 \ge 0$</span></p>
<p>yeah that can be solved...</p>
<p>(<i>Obviously as <span class="math-container">$b\to \infty$</span> the <span class="math-container">$b^4-8b^3 + 2b^2 + 8b + 1\to \infty$</span> so there is some <span class="math-container">$K$</span> where <span class="math-container">$b > K$</span> will always have <span class="math-container">$b^4 -8b^3 +2b^2 + 8b + 1\ge 0$</span>.</i>) </p>
<p>Taking a sledgehammer and letting <span class="math-container">$b > 8$</span>, say, <span class="math-container">$b=9$</span> and <span class="math-container">$a = \frac {b^2 -1}2=40$</span> so <span class="math-container">$40^2 + 9^2 = 41^2> 4*9*40=36*40$</span></p>
<p>Yeah... it's not true.</p>
<p>(<i>Actually our sledgehammer was fairly precise. <span class="math-container">$b^4 -8b^3 +2b^2 + 8b + 1$</span> has only two positive real roots: one between <span class="math-container">$1$</span> and <span class="math-container">$2$</span> and the other between <span class="math-container">$7$</span> and <span class="math-container">$8$</span>. So among primitive triplets where <span class="math-container">$b$</span> is odd and <span class="math-container">$b^2 = 2a+1$</span>, <span class="math-container">$a = 40, b=9, c=41$</span> is the smallest primitive triplet (and thus smallest of all triplets) where <span class="math-container">$c^2 > 4ab$</span>.</i>)</p>
<p>(<i>So if <span class="math-container">$\frac {c}{\gcd(a,b,c)} < 41$</span> then <span class="math-container">$c^2 < 4ab$</span>; the converse fails, e.g. for <span class="math-container">$(a,b,c)=(28,45,53)$</span>.</i>)</p>
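The counterexample found above is easy to confirm directly:

```python
# Verify that (a, b, c) = (40, 9, 41) is a Pythagorean triple with c^2 > 4ab,
# so the proposed inequality 4ab > c^2 does not hold for every triple.
a, b, c = 40, 9, 41
assert a * a + b * b == c * c        # 1600 + 81 = 1681, so it is a triple
print(c * c, 4 * a * b)              # 1681 vs 1440

# A triple where the inequality does hold, for contrast: (3, 4, 5).
assert 5 * 5 < 4 * 3 * 4
```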
|
1,206,460 | <p>This is the question:
Prove that the set of all the words in the English language is countable (the set's cardinality is $\aleph_0$).
A word is defined as a finite sequence of letters in the English language.</p>
<p>I'm not really sure how to start this. I know that a finite union of countable sets is countable, and I think this is the way to start.</p>
<p>Thanks in advance!</p>
| peter.petrov | 116,591 | <p>The set $S_n$ of the English words with length $n$ is finite (this is almost obvious). So it's also countable. Why is it finite? The set $A_n$ of all sequences with length $n$ made up of Latin characters is finite, as it contains $26^n$ elements. Only some of these sequences are meaningful/actual English words. So $S_n \subset A_n$. So $S_n$ is also finite.</p>
<p>The set $T$ for which you have to prove that it is countable is:</p>
<p>$T = S_1 \cup S_2 \cup S_3 \cup ... $ </p>
<p>Now you have this theorem:<br>
"A countable union of countable sets is also countable" </p>
<p>Applying it you get that T is also countable.<br>
Thus your statement has been proved.</p>
|
1,206,460 | <p>This is the question:
Prove that the set of all the words in the English language is countable (the set's cardinality is $\aleph_0$).
A word is defined as a finite sequence of letters in the English language.</p>
<p>I'm not really sure how to start this. I know that a finite union of countable sets is countable, and I think this is the way to start.</p>
<p>Thanks in advance!</p>
| barak manos | 131,263 | <p>There are $26$ letters in the English language.</p>
<p>Consider each letter as one of the digits on base $27$:</p>
<ul>
<li>$A=1$</li>
<li>$B=2$</li>
<li>$C=3$</li>
<li>$\dots$</li>
<li>$Z=26$</li>
</ul>
<p>Then map each word to the corresponding integer on base $27$, for example:</p>
<p>$\text{BAGDAD}=217414_{27}=2\cdot27^5+1\cdot27^4+7\cdot27^3+4\cdot27^2+1\cdot27^1+4\cdot27^0$.</p>
<p>This mapping yields that the cardinality of your set is $\leq|\mathbb{N}|$, hence this set is countable.</p>
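The injection can be sketched as follows (the function name is my own):

```python
# Map a word over A..Z to an integer by reading its letters as base-27
# digits 1..26 (digit 0 is unused, so distinct words map to distinct values).
def encode(word):
    n = 0
    for ch in word:
        n = 27 * n + (ord(ch) - ord('A') + 1)
    return n

print(encode("BAGDAD"))  # 2*27^5 + 1*27^4 + 7*27^3 + 4*27^2 + 1*27 + 4

# Injectivity in action: different words get different integers.
assert encode("AB") != encode("BA") and encode("A") != encode("AA")
```

Since the map is injective into $\mathbb N$, the set of all words is countable.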
|
1,448,476 | <p>Let's assume that we are given $f_{X}(x)=0.5e^{-|x|}$, with x being in the set of all real numbers and Y=$|X|^{1/3}$. If I'm asked to find the pdf of Y, do I just follow the formula and do the following?</p>
<p>$f_{Y}(y)=f_{X}(g^{-1}(y))\left|\frac{d}{dy}g^{-1}(y)\right|$ to get something like:
$0.5e^{-|y^{1/3}|} \cdot |y^{-2/3}/3|$</p>
<p>Is it just a matter of following the formula or are there other things to consider?</p>
| Leo | 244,657 | <p>I let:</p>
<p>$$ y=f(x)=x^2\cos(1/x) $$</p>
<p>then let,</p>
<p>$$ u=x^2 $$ so $$ du/dx=2x $$ and $$ v=\cos(1/x) $$</p>
<p>so using the chain rule $$dv/dx=-\sin(1/x)*-1/x^2$$ $$dv/dx=1/x^2.\sin(1/x)$$</p>
<p>Then applying the product rule: $dy/dx=v*du/dx+u*dv/dx$,</p>
<p>$$f'(x)=2x\cos(1/x)+x^2.1/x^2\sin(1/x)=2x\cos(1/x)+\sin(1/x)$$</p>
<p>$$f'(x)=2x\cos(1/x)+\sin(1/x)$$</p>
<p>Checking with wolfram alpha: <a href="http://www.wolframalpha.com/input/?i=d%2Fdx%28%28x%5E2%29cos%281%2Fx%29%29" rel="nofollow">http://www.wolframalpha.com/input/?i=d%2Fdx%28%28x^2%29cos%281%2Fx%29%29</a></p>
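A finite-difference sanity check of the result (the step size $h$ and the sample points are arbitrary choices of mine):

```python
# Compare f'(x) = 2x cos(1/x) + sin(1/x) against a central difference
# quotient of f(x) = x^2 cos(1/x) at a few points away from 0.
import math

def f(x):
    return x * x * math.cos(1 / x)

def fprime(x):
    return 2 * x * math.cos(1 / x) + math.sin(1 / x)

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-5
```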
|
3,328,822 | <blockquote>
<p>How do I evaluate <span class="math-container">$$\displaystyle\int^{\infty}_0 \exp\left[-\left(4x+\dfrac{9}{x}\right)\right] \sqrt{x}\;dx?$$</span> </p>
</blockquote>
<p>To my knowledge the following integral should be related to the Gamma function.</p>
<p>I have tried using the substitution <span class="math-container">$t^2 = x$</span>, and I got
<span class="math-container">$$
2e^{12}\displaystyle \int^{\infty}_0 \exp\left[-\left(2t + \dfrac{3}{t}\right)^2\right] t^2 \; dt
$$</span>
after substitution. But it seems like I can do nothing about this integral anymore. Can anyone kindly give me a hint, or guide me to the answer?</p>
| dan_fulea | 550,003 | <p>(I started an answer involving only "plain computations", but was not quick enough; maybe it is time now to complete and submit, rather than delete the typed formulas and quit the post. I am posting an alternative solution in the hope it looks simpler from some point of view, although there is a lot to be typed.)</p>
<hr>
<p>We have to compute the integral:
<span class="math-container">$$
\begin{aligned}
J&=
\int_0^{\infty} \exp\left(-\left(4x+\frac{9}{x}\right)\right) \; \sqrt{x}\;dx
\\
&\qquad\text{Substitution, so formally: $t=2\sqrt x$, $t^2=4x$, $x=t^2/4$, $dx=\frac 12t\; dt$}
\\
&=
\int_0^{\infty}
\exp\left(-\left(t^2+\frac{36}{t^2}\right)\right) \; \frac 12 t\cdot
\frac 12t\; dt
\\
&=
\frac 14
e^{-12}
\underbrace{
\int_0^{\infty}
\exp\left(-\left(t-\frac{6}t\right)^2\right)
t^2\; dt}_{\text{Notation: }K}
\\[3mm]
&\qquad\text{ and we want to show the above is equal to...}
\\
&\overset{(?)}=
\frac 14
e^{-12}\cdot\frac14\cdot 13\sqrt \pi\ .
\\[3mm]
&\qquad\text{ So we consider the integral...}
\\
K&=
\int_0^{\infty}
\exp\left(-\left(t-\frac{6}t\right)^2\right)
t^2\; dt
\\
&\qquad\text{ Substitution $\displaystyle
s =t-\frac 6t
$, so formally $t^2-st-6=0$,}
\\
&\qquad\text{ we use $t=\frac 12(s+\sqrt{s^2+24})$,
formally
$\displaystyle dt=\frac12
\left(1+\frac s{\sqrt{s^2+24}}\right)\; ds
$...}
\\
&=
\int_{-\infty}^{\infty}
e^{-s^2}\cdot
\frac 14
(s^2+\color{blue}{2s}\sqrt{s^2+24}+(s^2+24))
\;
\frac 12
\left(1+\frac {\color{red}{s}}{\sqrt{s^2+24}}\right)\; ds
\\
&\qquad\text{ now expand the parentheses, and ignore the odd part...}
\\
&=
\frac14\cdot\frac 12
\int_{\Bbb R}
e^{-s^2}\;\Big(\
s^2\ +\
(s^2+24)\ + \
\color{blue}{2s}\cdot\color{red}{s}
\ \Big)
\;ds
\\
&=
\frac 14\cdot\frac 12
\cdot26\sqrt\pi\ .
\\[3mm]
&\qquad\text{ Putting all together:}
\\
J&=
\frac 14
e^{-12}
\cdot K
\\
&=
\frac 14
e^{-12}
\cdot
\frac 14\cdot\frac 12
\cdot26\sqrt\pi
\\
&=
\color{magenta}{
\frac {13}{16}\cdot
e^{-12}
\cdot\sqrt\pi}\ .
\end{aligned}
$$</span></p>
<hr>
<p>Numerical validation, <a href="https://www.sagemath.org" rel="nofollow noreferrer">sage</a> code:</p>
<pre><code>sage: J = integral( exp(-4*x-9/x) * sqrt(x), x, 0, oo )
sage: J.n()
8.848395438034755e-06
sage: ( 13. / 16. * exp(-12) * sqrt(pi) ).n()
8.84839543773073e-6
sage: var('s');
sage: integral( exp(-s^2) * (s^2+ (s^2+24) + 2*s*s), s, -oo, +oo )
26*sqrt(pi)
</code></pre>
|
1,450,497 | <p>Consider the class of topological spaces $\langle X,\mathcal T\rangle$ such that the following are equivalent for $A\subseteq X$:</p>
<ul>
<li>$A$ is a $G_\delta$ set with respect to $\mathcal T$</li>
<li>$A\in\mathcal T$ or $X\smallsetminus A\in\mathcal T$</li>
</ul>
<p>Open sets, of course, are always $G_\delta$. So, equivalently, we are considering the topological spaces such that</p>
<ul>
<li>closed sets are $G_\delta$ and</li>
<li>non-open $G_\delta$ sets are closed.</li>
</ul>
<p>Clearly, not all spaces satisfy this equivalence. For example, with respect to the typical topology, $\Bbb R$ has (many!) $G_\delta$ subsets that are neither open nor closed. On the other hand, with respect to the order topology, the set $\omega_1\cup\{\omega_1\}$ has $\{\omega_1\}$ as a closed subset that is not $G_\delta$ (unless, of course, $\omega_1$ is a countable union of countable sets, as may happen in models of $\mathsf{ZF}$).</p>
<p>On the other hand, there are certainly spaces that <em>do</em> satisfy the equivalence, with the discrete and trivial topologies on any set giving us two (not-very-enlightening) examples.</p>
<hr>
<p>I wonder, then, has the described class of topological spaces been studied in much depth? If so, I am curious about the following:</p>
<ol>
<li>Are there any non-trivial topologies that make such a space?</li>
<li>Is there any common nomenclature for such spaces?</li>
<li>Are there sets of conditions on a topology that imply (or are implied by) a space being part of the class of such spaces?</li>
<li>Furthermore, the members of this class may clearly vary from model to model of $\mathsf{ZF},$ so is there any such set of conditions such that one implication or the other (or both) is equivalent to a Choice principle?</li>
</ol>
<hr>
<p><strong>Edit</strong>: The examples so far (aside from indiscrete spaces on sets with at least two points) have the property that every subset of the underlying set is open or closed. Is it possible that all such space are indiscrete or have all subsets open or closed?</p>
<p>Another thing that is readily apparent (now that I'm a little more awake) is that $\langle X,\mathcal T\rangle$ has the desired property if and only if $$\sigma(\mathcal T)=\mathcal T\cup\{X\setminus U:U\in\mathcal T\},$$ where $\sigma(\mathcal T)$ is the (Borel) $\sigma$-algebra (on $X$) generated by $\mathcal T.$</p>
| marty cohen | 13,079 | <p>A really trivial proof for positive $x$ and $y$.</p>
<p>If $x|y$, then $y = ax$ where $a \ge 1$.</p>
<p>If $y|x$, then $x = by$ where $b \ge 1$.</p>
<p>Therefore $x = by = bax$, so $ba = 1$. Since $a\ge 1$ and $b \ge 1$, we must have $a = b = 1$, so $x = y$.</p>
|
2,101,756 | <p>From the power series definition of the polylogarithm and from the integral representation of the Gamma function it is easy to show that:
\begin{equation}
Li_{s}(z) := \sum\limits_{k=1}^\infty k^{-s} z^k = \frac{z}{\Gamma(s)} \int\limits_0^\infty \frac{\theta^{s-1}}{e^\theta-z} d \theta
\end{equation}
The identity holds whenever $Re(s) > 0$. Now my question is twofold. </p>
<p>Firstly, how do we analytically continue that function to the area $Re(s) <0$? Clearly this must be possible because it was already Riemann who found a corresponding reflection formula by deforming the integration contour to the complex plane and evaluating that integral both in a clock-wise and in a anti-clockwise direction.</p>
<p>My second question would be how do we compute two dimensional functions of that kind. To be precise I am interested in quantities like this:</p>
<p>\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) := \sum\limits_{1 \le k_1 < k_2 < \infty }(k_1+\xi_1)^{-s_1} (k_2+\xi_2)^{-s_2} z_1^{k_1} z_2^{k_2-k_1}
\end{equation}
Clearly if both $Re(s_1) >0$ and $Re(s_2) >0$ the quantity above has the following integral representation:
\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) = \frac{z_1 z_2}{\Gamma(s_1) \Gamma(s_2)} \int\limits_{{\mathbb R}_+^2} \frac{\theta_1^{s_1-1} \theta_2^{s_2-1} e^{-\theta_1 \xi_1-\theta_2 \xi_2}}{\left(e^{\theta_1+\theta_2}-z_1\right)\left(e^{\theta_2}-z_2\right)} d\theta_1 \, d\theta_2
\end{equation}
However, how do I compute the quantity if any of the real parts of the $s$-parameters becomes negative?</p>
| Tushant Mittal | 272,305 | <p>Just take $\log_3 (y) = t $ and $\log_3 (x) = s $.
The equations become
$$ s + t =5$$
$$ st = 6 $$
Thus, s,t are the roots of the equation $a^2-5a+6 =0$ whose obvious roots are 2,3.</p>
<p>Therefore, $(x=9,\,y=27)$ or $(x=27,\,y=9)$ are the solutions. </p>
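A quick numerical check, assuming base-3 logarithms as in the substitution above (so the solutions must be $\{3^2, 3^3\} = \{9, 27\}$):

```python
# With s = log_3(x) and t = log_3(y), verify s + t = 5 and s*t = 6
# for x = 27, y = 9 (swapping x and y works the same way).
import math

x, y = 27, 9
s = math.log(x, 3)
t = math.log(y, 3)
assert abs(s + t - 5) < 1e-9
assert abs(s * t - 6) < 1e-9
```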
|
3,115,347 | <p>Let <span class="math-container">$f:(0,\infty) \to \mathbb R$</span> be a differentiable function and <span class="math-container">$F$</span> on of its primitives. Prove that if <span class="math-container">$f$</span> is bounded and <span class="math-container">$\lim_{x \to \infty}F(x)=0$</span>, then <span class="math-container">$\lim_{x\to\infty}f(x)=0$</span>.</p>
<p>I've seen this problem on a Facebook page yesterday. Can anybody give me some tips to solve it, please? It looks pretty interesting and I have no idea of a proof now.</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>Let <span class="math-container">$D_i$</span> be the outcome of the <span class="math-container">$i$</span>-th die. Denote <span class="math-container">$D = (D_1,\dots,D_4)$</span>.</p>
<blockquote>
<p>So the denominator/sample space should be <span class="math-container">$6^4$</span>, right?</p>
</blockquote>
<ul>
<li>Sample space <span class="math-container">$\Omega = \{1,\dots,6\}^4$</span></li>
<li>We use the <em>cardinality</em> of <span class="math-container">$\Omega$</span> as the denominator, and that of the desired event <span class="math-container">$E$</span> as the numerator for (a) and (b).</li>
<li>(a): take <span class="math-container">$E = \{1,2,3\}^4$</span>, so <span class="math-container">$P(D \in E) = |E| / |\Omega| = 3^4/6^4 = 1/2^4 = 1/16$</span></li>
<li>(b): take <span class="math-container">$E = \{1,2,3,4\}^4$</span>, so <span class="math-container">$P(D \in E) = |E| / |\Omega| = 4^4/6^4 = 2^4/3^4 = 16/81$</span></li>
<li>(c): take <span class="math-container">$E = \{1,2,3,4\}^4 \setminus \{1,2,3\}^4$</span>, so <span class="math-container">$P(D \in E) = (4^4-3^4)/6^4 = 175/1296$</span></li>
</ul>
<hr>
<p>Edit in response to OP's question in comments:</p>
<p><span class="math-container">$$\begin{aligned}
& \{\max(D_1,\dots,D_4) = 4\} \\
&= \{\forall i \in \{1,\dots,4\}, D_i \in \{1,\dots,4\} \text{ and } \exists i \in \{1,\dots,4\}, D_i = 4\} \\
&= \{\forall i \in \{1,\dots,4\}, D_i \in \{1,\dots,4\} \text{ and } \neg (\forall i \in \{1,\dots,4\}, D_i \ne 4) \} \\
&= \{\forall i \in \{1,\dots,4\}, D_i \in \{1,\dots,4\} \text{ and } \neg (\forall i \in \{1,\dots,4\}, D_i \in \{1,2,3\}) \} \\
&= \{\forall i \in \{1,\dots,4\}, D_i \in \{1,\dots,4\}\} \cap \{(\forall i \in \{1,\dots,4\}, D_i \in \{1,2,3\}) \}^\complement \\
&= \{1,2,3,4\}^4 \setminus \{1,2,3\}^4
\end{aligned}$$</span></p>
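All three probabilities can also be confirmed by brute-force enumeration of the <span class="math-container">$6^4$</span> outcomes:

```python
# Enumerate the sample space {1,...,6}^4 and count the three events exactly.
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=4))
total = len(outcomes)                           # 6^4 = 1296

p_a = Fraction(sum(max(d) <= 3 for d in outcomes), total)   # all dice <= 3
p_b = Fraction(sum(max(d) <= 4 for d in outcomes), total)   # all dice <= 4
p_c = Fraction(sum(max(d) == 4 for d in outcomes), total)   # maximum is 4

assert p_a == Fraction(1, 16)
assert p_b == Fraction(16, 81)
assert p_c == Fraction(175, 1296)
```

Note that the counts reproduce the set identity used above: the event for (c) is the difference of the events for (b) and (a).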
|
3,941,106 | <p>Let <span class="math-container">$K\subseteq\mathbb R$</span> be compact and <span class="math-container">$h:K\to\mathbb R$</span> be continuous and <span class="math-container">$\varepsilon>0$</span>. By the Stone-Weierstrass theorem, there is a polynomial <span class="math-container">$p:K\to\mathbb R$</span> with <span class="math-container">$\left\|h-p\right\|_\infty<\varepsilon$</span>.</p>
<blockquote>
<p>Why can we choose <span class="math-container">$p$</span> such that <span class="math-container">$\left\|p\right\|_\infty\le\left\|h\right\|_\infty$</span>?</p>
</blockquote>
<p>I'm not sure whether it's really crucial that <span class="math-container">$p$</span> is a polynomial. In general, if <span class="math-container">$K$</span> is a compact Hausdorff space, <span class="math-container">$\mathcal C$</span> is a subalgebra of <span class="math-container">$C(K)$</span> with <span class="math-container">$1\in\mathcal C$</span> and <span class="math-container">$f:K\to\mathbb R$</span> is continuous with <span class="math-container">$c:=\left\|f\right\|_\infty\ne0$</span>, then there is a <span class="math-container">$\tilde g\in\mathcal C$</span> with <span class="math-container">$$\left\|\frac fc-\tilde g\right\|_\infty<\frac\varepsilon2.$$</span> Now <span class="math-container">$g:=c\tilde g\in\mathcal C$</span> and <span class="math-container">$\left\|f-g\right\|_\infty<\varepsilon$</span>. But only if <span class="math-container">$\left\|\tilde g\right\|\le1$</span>, we obtain <span class="math-container">$\left\|g\right\|_\infty\le c=\left\|f\right\|_\infty$</span> ...</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$\|p\|\leq \epsilon +\|h\|$</span>. Let <span class="math-container">$q=\frac {\|h\|} {\epsilon+\|h\|} p$</span>. Then <span class="math-container">$q$</span> is a polynomial and <span class="math-container">$\|q\| \leq \|h\|$</span>. Now <span class="math-container">$\|q-h\| \leq \|q-p\|+\|p-h\|$</span>. Can you finish?</p>
|
4,004 | <p>This is related to <a href="https://math.stackexchange.com/q/133615/26306">this post</a>, please read the comments.</p>
<p>What is the usual way of dealing with that kind of problems on math.SE?
(By "that kind of problems" I mean someone posting tasks from an ongoing contest.)</p>
<p>I mean I did email the contest coordinator and flag the post, but it seems that there is more than one user and more than one question involved. Also, I do not know whether the OP is a contestant or e.g. a friend that wishes to learn the answer himself. The whole situation is not trivial and I do not see any way to prevent such abuse on future occasions (one cannot possibly be aware of all the contests in the world).</p>
<p>Any comments/ideas/explanations will be appreciated.</p>
| Michael Joyce | 17,673 | <p>In my opinion, one thing that should be done is to make an effort to change the culture from one of posting complete solutions to problems to one of posting hints and general strategies that explain the methodology behind the problem. This does not directly address contest cheating, as it is still possible to post questions and make use of hints, but it at least makes it take more effort to cheat and thus provides a disincentive. A large number of cheaters are very lazy individuals (as the recent example illustrates) and will resist doing the last 20% of the work even if the other 80% is done for them.</p>
<p>Overall, it is a very big negative for this site if lazy students view the site as a source of easy access to people providing full solutions to their questions on demand, regardless of whether they are contest problems or homework problems. And for students who are legitimately struggling and post questions because they are stuck after making a good faith effort, it seems to me that they usually benefit more from well thought out hints and nudges than from a completely posted answer. Math is ultimately learned by doing (with a large amount of reliance on ideas others have developed), not by watching others do. Of course, many questioners make an effort to understand a fully posted solution, not merely copy it, but it still is not the same as if they had come to part of the discovery of how to solve their problem on their own.</p>
<p>As far as more practical suggestions, I think you did the right thing by alerting the organizers of the contest. To a certain extent, the onus is on them to weigh the pros and cons of having such an open contest where cheating is certainly possible. Given the nature of their contest, one would hope that they would have put some forethought into how they would deal with such a situation as developed here. Hopefully, they can investigate the threads and perhaps get IP address information that helps them identify the guilty person(s).</p>
|
4,004 | <p>This is related to <a href="https://math.stackexchange.com/q/133615/26306">this post</a>, please read the comments.</p>
<p>What is the usual way of dealing with that kind of problems on math.SE?
(By "that kind of problems" I mean someone posting tasks from an ongoing contest.)</p>
<p>I mean I did email the contest coordinator and flag the post, but it seems that there is more than one user and more than one question involved. Also, I do not know whether the OP is a contestant or e.g. a friend that wishes to learn the answer himself. The whole situation is not trivial and I do not see any way to prevent such abuse on future occasions (one cannot possibly be aware of all the contests in the world).</p>
<p>Any comments/ideas/explanations will be appreciated.</p>
| Community | -1 | <p>Sequence of same or almost same math questions (possibly from online math contest). I have flagged the moderators for attention. Unfortunately, I do not know what online math contest it is to email the coordinators.</p>
<p><a href="https://math.stackexchange.com/questions/222524/whats-the-probability-that-x-y-is-less-than-c-where-x-y-are-real-numbers-and">https://math.stackexchange.com/questions/222524/whats-the-probability-that-x-y-is-less-than-c-where-x-y-are-real-numbers-and</a></p>
<p><a href="https://math.stackexchange.com/questions/222518/x-y-c-number-of-possible-x-and-y">https://math.stackexchange.com/questions/222518/x-y-c-number-of-possible-x-and-y</a></p>
<p><a href="https://math.stackexchange.com/questions/222470/what-is-the-density-of-the-sum-z-xy">What is the density of the sum $Z = X+Y$?</a></p>
<p><a href="https://math.stackexchange.com/questions/222189/probability-with-real-number">Probability with real number</a></p>
<p><a href="https://stackoverflow.com/questions/13106597/how-to-get-sequence-of-numbers-generated-by-ideal-random-number-generator-taking">https://stackoverflow.com/questions/13106597/how-to-get-sequence-of-numbers-generated-by-ideal-random-number-generator-taking</a></p>
<p>The first two have been deleted after I intimated the OP that it is incorrect to post such questions.</p>
|
1,522,216 | <p>I want to show that following:
$$\left(\frac{n^2-1}{n^2}\right)^n\sqrt{\frac{n+1}{n-1}}\leq 1; ~~n\geq 2$$ and $n$ is an integer. </p>
<p>After some simplifications, I got left hand-side as
$$LHS:\left(1-\frac{1}{n}\right)^{n-\frac{1}{2}} \left(1+\frac{1}{n}\right)^{n+\frac{1}{2}}$$
It is clear that the 1st term is less than 1, but I do not have any clue how I can show that multiplication is less than 1.</p>
<p>Can someone give me some hints? </p>
| Archis Welankar | 275,884 | <p>If the collinearity is X-Z-Y, then since XY is 10 and XZ is 3, YZ has to be 7. If it is Z-X-Y, then YZ is 13. So the product is $13\cdot 7=91$.</p>
|
1,657,557 | <p>For example, how would I enter y^(IV) - 16y = 0? </p>
<p>Typing out "fourth derivative", or putting four ' marks, does not seem to work. </p>
| Adriano | 76,987 | <p>Typing <a href="http://www.wolframalpha.com/input/?t=crmtb01&f=ob&i=y%27%27%27%27%20-%2016y%20%3D%200" rel="nofollow"><code>y'''' - 16y = 0</code></a> or <a href="http://www.wolframalpha.com/input/?t=crmtb01&i=(d%5E4%2Fdx%5E4%20y)%20-%2016y%20%3D%200" rel="nofollow"><code>(d^4/dx^4 y) - 16y = 0</code></a> both seem to work.</p>
|
3,354,566 | <p>I see integrals defined as anti-derivatives but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus.</p>
<p>This emerged as a sticking point in <a href="https://math.stackexchange.com/questions/3354502/are-integrals-thought-of-as-antiderivatives-to-avoid-using-faulhaber">this question</a>.</p>
| AccidentalFourierTransform | 289,977 | <h2>Weak derivatives.</h2>
<p>This is essentially the way one defines a <a href="https://en.wikipedia.org/wiki/Weak_derivative" rel="noreferrer">weak derivative</a>. If a function is not differentiable in the traditional sense, but it is integrable, then one may define a weaker notion of derivative through duality: the derivative of <span class="math-container">$f$</span> is the function <span class="math-container">$f'$</span> such that
<span class="math-container">$$
\int f' u=-\int f u'
$$</span>
for all smooth functions <span class="math-container">$u$</span>. One can prove that the function <span class="math-container">$f'$</span> is in fact <span class="math-container">$L^p$</span>-unique. If <span class="math-container">$f$</span> is differentiable in the standard sense, then it is also differentiable in the weak sense, and both derivatives agree.</p>
<p>For example, the Dirichlet function is nowhere continuous, let alone differentiable. But its weak derivative exists, and is in fact the zero function. Indeed,
<span class="math-container">$$
0=\int 1_{\mathbb Q} u'=-\int 1'_{\mathbb Q} u
$$</span>
implies that <span class="math-container">$1'_{\mathbb Q}=0$</span> almost everywhere.</p>
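For a smooth example, the defining identity <span class="math-container">$\int f'u=-\int fu'$</span> can be checked numerically (my own sketch; <span class="math-container">$f(x)=x^2$</span> and the standard bump function <span class="math-container">$u$</span> on <span class="math-container">$(0,1)$</span> are arbitrary choices, as is the grid size):

```python
# Check the duality defining the weak derivative for f(x) = x^2, f'(x) = 2x,
# and a smooth bump u supported on (0, 1), via Riemann sums on a fine grid.
import math

N = 20_000
h = 1.0 / N

def u(x):
    return math.exp(-1.0 / (x * (1.0 - x))) if 0.0 < x < 1.0 else 0.0

def uprime(x):
    # u'(x) = u(x) * (1 - 2x) / (x(1-x))^2 on (0, 1), and 0 outside
    if not 0.0 < x < 1.0:
        return 0.0
    return u(x) * (1.0 - 2.0 * x) / (x * (1.0 - x)) ** 2

lhs = h * sum(2.0 * (i * h) * u(i * h) for i in range(1, N))       # int f' u
rhs = -h * sum((i * h) ** 2 * uprime(i * h) for i in range(1, N))  # -int f u'
assert abs(lhs - rhs) < 1e-6
```

Because <span class="math-container">$u$</span> and all its derivatives vanish at the endpoints, no boundary terms appear, which is exactly what makes the integration-by-parts identity hold.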
|