| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,969,195 | <p>I have a cube with rotations <span class="math-container">$\{r, r^2, s, t\}$</span>. <strong>I want to find the cardinality of the conjugacy classes of these elements.</strong> (I know they are 6, 3, 8 and 6, respectively.) I couldn't find any formula, so I tried to do it by hand for <span class="math-container">$r$</span>, which seemed to work (see the left side of my note picture), but for <span class="math-container">$r^2$</span> I ended up with the same elements in its conjugacy class as for <span class="math-container">$r$</span>. I haven't tried <span class="math-container">$s$</span> or <span class="math-container">$t$</span> yet. Is there some algebra like there is for dihedral groups (such as <span class="math-container">$sr^b=r^{-b}s$</span>)?</p>
<p><strong>I also didn't know in which order I had to apply the elements</strong> (<span class="math-container">$s^2tr^2$</span> versus <span class="math-container">$r^2s^2t$</span> for example).</p>
<p>I am using this to calculate the distinct orbits of a group (correct terminology?) using the Counting Theorem</p>
<p><a href="https://i.stack.imgur.com/PT6oo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PT6oo.png" alt="Rotations"></a>
<a href="https://i.stack.imgur.com/hl2Me.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hl2Me.jpg" alt="enter image description here"></a></p>
| Baran Zadeoglu | 575,644 | <p>Hint: Instead of thinking about the cube, maybe you can think about its diagonals. Can you recover the same information from the diagonals? If so, which group does this look like? By diagonals I mean the line segments between pairs of vertices at distance <span class="math-container">$\sqrt{3}$</span>, assuming the cube has edge length <span class="math-container">$1$</span>. <br/>
On a side note, I believe <span class="math-container">$r$</span> shouldn't be conjugate to <span class="math-container">$r^2$</span>. You might want to check that. </p>
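<p>Following the hint: the rotation group acts faithfully on the four long diagonals, giving an isomorphism with $S_4$, where conjugacy classes are determined by cycle type. The class sizes can then be checked by brute force (a quick sketch, not part of the original answer):</p>

```python
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    """Sorted cycle lengths of a permutation given in one-line notation."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, cur = 0, start
        while cur not in seen:
            seen.add(cur)
            cur = perm[cur]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

# conjugacy classes of S_4, grouped by cycle type
sizes = Counter(cycle_type(p) for p in permutations(range(4)))
print(sorted(sizes.values()))  # [1, 3, 6, 6, 8]
```

<p>The sizes 6, 3, 8, 6 for $r$, $r^2$, $s$, $t$ correspond to the cycle types $(4)$, $(2,2)$, $(1,3)$ and $(1,1,2)$; in particular $r$ and $r^2$ lie in different classes.</p>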
|
2,148,631 | <p>$a_{n+1} = 1-\frac{1}{a_n + 1}$<br>
$a_1 = \sqrt 2$</p>
<p>I need to prove:<br>
1. $a_n$ is irrational for every $n$.<br>
2. $a_n$ converges.</p>
<p>My ideas: </p>
<ol>
<li>Induction - as I know that $a_1 $ is irrational, I'm assuming $a_n$ is also irrational, and then: $a_{n+1} = 1-\frac{1}{a_n + 1} \implies a_{n+1} = \frac{a_n}{a_n + 1}$. Now it can be explained that $\frac{a_n}{a_n + 1}$ has got to be irrational.</li>
<li>No good ideas here, I can show that $a_n > 0$ for every $n$ by induction. Then $\frac{a_{n+1}}{a_n} = \frac{1}{a_n + 1} \le 1$. No idea how to continue</li>
</ol>
| Laray | 396,534 | <p>Show convergence by proving that $a_n$ is bounded below by $0$, as you already did.
Then you can show that the sequence is decreasing: assuming $a_{n+1}<a_n$, follow each step of the construction of the next element (keeping track of the direction of the inequality) to show that $a_{n+2}<a_{n+1}$.
Since it is decreasing and bounded below, it must converge. </p>
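<p>A quick numerical experiment supports this (a sketch, not part of the original answer; note that the recursion gives $1/a_{n+1} = 1/a_n + 1$, so in fact $a_n = 1/(1/a_1 + n - 1) \to 0$):</p>

```python
import math

a = math.sqrt(2)
terms = [a]
for _ in range(50):
    a = 1 - 1 / (a + 1)   # equivalently a_{n+1} = a_n / (a_n + 1)
    terms.append(a)

# positive and strictly decreasing, hence convergent
assert all(t > 0 for t in terms)
assert all(x > y for x, y in zip(terms, terms[1:]))

# closed form: 1/a_n = 1/a_1 + (n - 1)
print(terms[-1], 1 / (1 / math.sqrt(2) + 50))
```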
|
177,091 | <p>What would $\int\limits_{-\infty}^\infty e^{ikx}dx$ be equal to where $i$ refers to imaginary unit? What steps should I go over to solve this integral? </p>
<p>I saw this in the Fourier transform, and am unsure how to solve this.</p>
| robjohn | 13,854 | <p>$$
\begin{align}
\int_{-\infty}^\infty e^{ixy}\overbrace{\ \ \ e^{-\epsilon x^2}\ \ \ }^{\to1}\,\mathrm{d}x
&=e^{-\frac{y^2}{4\epsilon}}\color{#C00}{\int_{-\infty}^\infty e^{-\epsilon\left(x-\frac{iy}{2\epsilon}\right)^2}\,\mathrm{d}x}\tag{1}\\
&=\underbrace{\color{#C00}{\sqrt{\frac\pi\epsilon}}\,e^{-\frac{y^2}{4\epsilon}}}_{\to2\pi\delta(y)}\tag{2}
\end{align}
$$
Using <a href="https://en.wikipedia.org/wiki/Cauchy%27s_integral_theorem" rel="noreferrer">Cauchy's Integral Theorem</a>, the red integral in $(1)$ is simply $\int_{-\infty}^\infty e^{-\epsilon x^2}\,\mathrm{d}x=\sqrt{\frac\pi\epsilon}$ .</p>
<p>As $\epsilon\to0$, we get that $(2)$ approximates $2\pi\delta(y)$. That is, the integral of $(2)$ is $2\pi$ for all $\epsilon$, and as $\epsilon\to0$, the main mass of the function is squeezed into a very small region about $0$.</p>
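<p>The claim that the integral of $(2)$ is $2\pi$ for all $\epsilon$ is easy to check numerically (a quick sketch, not part of the answer):</p>

```python
import numpy as np

for eps in (1.0, 0.1, 0.01):
    y = np.arange(-20, 20, 1e-3)
    f = np.sqrt(np.pi / eps) * np.exp(-y**2 / (4 * eps))
    integral = f.sum() * 1e-3           # Riemann sum of (2) over y
    print(eps, integral / (2 * np.pi))  # ratio is ~1.0 in each case
```

<p>Analytically this is just the Gaussian integral: $\sqrt{\pi/\epsilon}\cdot\sqrt{4\pi\epsilon}=2\pi$, independent of $\epsilon$.</p>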
|
177,091 | <p>What would $\int\limits_{-\infty}^\infty e^{ikx}dx$ be equal to where $i$ refers to imaginary unit? What steps should I go over to solve this integral? </p>
<p>I saw this in the Fourier transform, and am unsure how to solve this.</p>
| felipeh | 73,723 | <p>I just want to clarify what is meant by saying that "as a distribution",
$$
2\pi\delta(k) = \int_{-\infty}^\infty e^{ikx}\,dx.
$$
The right hand side certainly doesn't make sense, and the left hand side can only make sense for $k\not=0$. What we mean though is that for any smooth function $\phi$ with rapid decay,
$$
2\pi \phi(0) = \int_{-\infty}^\infty \int_{-\infty}^\infty e^{ikx} \phi(k) \,dk\,dx.
$$
Now both sides make sense. This isn't trivial to prove, and in fact it implies the Fourier inversion formula (using the relationship between modulation and translation under the Fourier transform). The proof often goes through a calculation similar to what @robjohn's answer describes.</p>
|
1,036,964 | <blockquote>
<p>Let $\{e_1, e_2\}$ and $\{f_1, f_2, f_3\}$ the canonical ordered bases of $\mathbb{R}^2$ and $\mathbb{R}^3$ respectively. Find the coordinates of $x \otimes y$ with respect to the basis $\{e_i\otimes f_j\}$ of $\mathbb{R}^2\otimes\mathbb{R}^3$ where $x = (1, 1)$ and $y = (1, -2, 1)$.</p>
</blockquote>
<p>Since I have that $(1, 1) = e_1 + e_2$ and $(1, -2, 1) = f_1 - 2f_2 + f_3$, then $x \otimes y = (1, 1) \otimes (1, -2, 1) = (e_1 + e_2)\otimes (f_1 - 2f_2 + f_3) = \dots ?$ How can I compute this last tensor product? Thanks. Also, any further reading recommendation about examples of simple tensor product calculations, and not only the abstract background, would be appreciated ;)</p>
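<p>(For what it's worth, the coordinates can also be checked numerically: with respect to the lexicographically ordered basis $\{e_i \otimes f_j\}$, the coordinate vector of $x \otimes y$ is the Kronecker product of the coordinate vectors — a quick sketch:)</p>

```python
import numpy as np

x = np.array([1, 1])        # (1, 1) = e1 + e2
y = np.array([1, -2, 1])    # (1, -2, 1) = f1 - 2 f2 + f3

# coordinates of x (tensor) y in the basis e1(x)f1, e1(x)f2, e1(x)f3, e2(x)f1, ...
print(np.kron(x, y))        # [ 1 -2  1  1 -2  1]
```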
| Brian M. Scott | 12,042 | <p>Clearly $0\in f[A]$. Suppose now that $t\in(0,8)$. Choose any $u\in(t,8)$, and set $y=\frac{u}4$; then $0<y<2$. Moreover, $0<\frac{t}y=\frac{4t}u<4$, so $\sqrt{\frac{t}y}<2$. Let $x=-\sqrt\frac{t}y$; then $\langle x,y\rangle\in A$, and $f(x,y)=x^2y=t$.</p>
<p>The case $t\in(-8,0)$ is easier, since you can start by setting $y=-2$; what must $x$ be in that case?</p>
|
3,932,883 | <p>Can someone suggest how to find the infinite series sum for</p>
<p><span class="math-container">$$\frac{k(k+1)3^k}{k!}$$</span> where k goes from <span class="math-container">$1$</span> to infinity.</p>
<p>I know that <span class="math-container">$\sum_0\frac{3^k}{k!}=e^3$</span> but I'm not sure if that helps here.</p>
| Albus Dumbledore | 769,226 | <p><strong>Hint</strong>:let <span class="math-container">$$f(x)=xe^x=\sum_{k=0}^{\infty}\frac{x^{k+1}}{k!}$$</span> what is <span class="math-container">$3f''(3)?$</span></p>
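<p>A numerical check of the hint (a sketch, not part of the original answer): since $f''(x)=(x+2)e^x$, the hint gives <span class="math-container">$3f''(3)=15e^3$</span>, which a direct partial sum of the series matches:</p>

```python
import math

partial = sum(k * (k + 1) * 3**k / math.factorial(k) for k in range(1, 60))
closed = 15 * math.exp(3)   # 3 f''(3), using f''(x) = (x + 2) e^x

print(partial, closed)      # both are approximately 301.283
```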
|
527,665 | <p>I need to bound a sum of the form $\sum_{j=1}^m |z+j|^{-n}$ with $\Re z\ge1$ and $n \ge 2$. I am searching for a bound of the form
$$\sum_{j=1}^m |z+j|^{-n} \le C |z|^{-n+1}.$$</p>
<p>It is easy to see that</p>
<p>$$\sum_{j=1}^m |z+j|^{-n} \le \int_{0}^{m+1} |z+x|^{-n}dx \le \int_{0}^{\infty} |z+x|^{-n}dx.$$</p>
<p>My problem is that I cannot compute the integral. By setting $z=r+i s$, we have
$$\int_0^\infty ((r+x)^2+s^2)^{-\frac n2} dx.$$</p>
<p>My idea was to use integration by parts to transform it to an integrable form, but so far I cannot find how I am going to do that. Any help is welcome.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\expo}[1]{{\rm e}^{#1}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\pp}{{\cal P}}%
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert #1 \right\vert}$
$\large\it\mbox{Hint:}$
<br><br>
$\large n > 1:$</p>
<blockquote>
\begin{align}
{\cal T}_{n}
&=
\int_{0}^{\infty}{\dd x \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2}}
=
{n \over 2}\int_{0}^{\infty}
{x\bracks{2\pars{r + x}} \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2 + 1}}\,\dd x
\\[3mm]&=
n\int_{0}^{\infty}
{\bracks{\pars{r + x}^{2} + s^{2}} - \bracks{r\pars{r + x} + s^{2}} \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2 + 1}}\,\dd x
\\[3mm]&=
n\int_{0}^{\infty}\!\!\!\!\!\!
{\dd x \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2}}\,
+
\left.r\,
{1 \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2}}\right\vert_{0}^{\infty}
-
ns^{2}\int_{0}^{\infty}\!\!\!\!\!
{\dd x \over \bracks{\pars{r + x}^2 + s^{2}}^{n/2 + 1}}\,
\\[3mm]&=
n{\cal T}_{n}
-
{r \over \bracks{r^2 + s^{2}}^{n/2}}
-
ns^{2}{\cal T}_{n + 2}
\end{align}
</blockquote>
<blockquote>
\begin{align}
{\cal T}_{n + 2}
&\equiv
{1 \over s^{2}}\,{n - 1 \over n}\,{\cal T}_{n}
-
{r \over n s^{2}}\,{1 \over \pars{r^{2} + s^{2}}^{n/2}}
\\[3mm]
{\cal T}_{2}
&=
\int_{0}^{\infty}{\dd x \over \pars{r + x}^{2} + s^{2}}
=
{1 \over \verts{s}}\,\bracks{{\pi \over 2} - \arctan\pars{r \over \verts{s}}}
\\[3mm]
{\cal T}_{3}
&=
\int_{0}^{\infty}{\dd x \over \bracks{\pars{r + x}^{2} + s^{2}}^{3/2}}
=
{1 \over s^{2}}\,\pars{1 - {r \over \sqrt{r^{2} + s^{2}\,}\,}}
\end{align}
</blockquote>
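<p>The closed forms for ${\cal T}_{2}$, ${\cal T}_{3}$ and the recursion can be verified numerically (a sketch, with arbitrarily chosen sample values $r = 1.3$, $s = 0.7$):</p>

```python
import math
from scipy.integrate import quad

r, s = 1.3, 0.7   # arbitrary sample values with r >= 1

def T(n):
    """T_n = integral_0^inf dx / ((r + x)^2 + s^2)^(n/2), computed numerically."""
    return quad(lambda x: ((r + x)**2 + s**2) ** (-n / 2), 0, math.inf)[0]

T2_closed = (1 / abs(s)) * (math.pi / 2 - math.atan(r / abs(s)))
T3_closed = (1 / s**2) * (1 - r / math.sqrt(r**2 + s**2))

# recursion with n = 2:  T_4 = (1/s^2) * (1/2) * T_2 - r / (2 s^2 (r^2 + s^2))
T4_rec = (1 / s**2) * 0.5 * T(2) - r / (2 * s**2 * (r**2 + s**2))

print(T(2), T2_closed)
print(T(3), T3_closed)
print(T(4), T4_rec)
```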
|
523,074 | <p>I have never done integration in my life and I am in first year of university. Is it harder than taking the derivative? I've heard its just going backwards. Also, my high school taught me only differentiation, I don't know why we never touched on integration. I'm going to be starting it next week and I want to know what I'm facing. Is it generally considered harder than differentiation? Thank you in advance.</p>
| Aravindan Sai Narayanan | 1,038,658 | <p>Integration is the reverse of differentiation through the Fundamental Theorem of Calculus. Integration is generally tougher because you will learn and use techniques like partial fractions, trigonometric substitutions, algebraic substitutions, integration by parts, etc. Certain real-valued integrals can only be solved easily using techniques from complex analysis, like Cauchy's Residue Theorem. Historically, integration is finding the limit of a sum of infinitesimals. Moreover, integrating a function smooths it, while differentiating a function makes it less smooth. Integration requires one to understand the function's behaviour throughout an entire interval, while differentiation only requires one to understand the behaviour of the function's rate of change at a point and in its neighbourhood.</p>
|
18,090 | <p>I am wondering if anyone could prove the following equivalent definition of recurrent/persistent state for Markov chains:</p>
<p>1) $P(X_n=i \text{ for some } n\ge1 \mid X_0=i)=1$</p>
<p>2) Let $T_{ij}=\min\{n: X_n=j \mid X_0=i\}$; a state $i$ is recurrent if for every $j$ such that $P(T_{ij}<\infty)>0$, one has $P(T_{ij}<\infty)=1$.</p>
| Community | -1 | <p>A recurrent state will satisfy property (2), but not the other way around. Take a two state Markov chain that jumps from $i$ to $j$ with probability one, then stays there forever. Then $i$ is transient, but $P(T_{ij}<\infty)=1$. </p>
<p><strong>Edit:</strong>
Here's a sketch of (1) implies (2) in terms of "taboo probabilities".
We assume that $i$ is recurrent, and all of the probabilities below are conditioned on $X_0=i$.</p>
<p>Define $T_i=\inf(n\geq 1 : X_n=i)$ and $T_j=\inf(n\geq 1 : X_n=j)$.
We want to show that $P(T_j<\infty)>0$ implies $P(T_j<\infty)=1$. It suffices to assume $i\not= j$, since $P(T_i<\infty)=1$. </p>
<p>Since $i$ is recurrent, we have $P(T_i < \infty)=1$ so that
$P(T_j < T_i)+P(T_i < T_j)=1,$ since $T_i=T_j$ implies $T_i=\infty$. </p>
<p>Also, if $P(T_j < \infty)>0$ we must have $P(T_j < T_i)>0$,
by considering the shortest possible path from $i$ to $j$ with non-zero probability.</p>
<p>Finally, letting $n$ represent the number of times we return to $i$ without
hitting $j$, the strong Markov property gives </p>
<p>$$P(T_j < \infty)=\sum_{n=0}^\infty P(T_i < T_j)^n P(T_j < T_i)
=\sum_{n=0}^\infty P(T_i < T_j)^n [1-P(T_i < T_j)]=1.$$ </p>
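<p>(A quick Monte Carlo sanity check of (1) $\Rightarrow$ (2), on an arbitrarily chosen finite irreducible chain — where every state is recurrent — estimating $P(T_{ij} < \infty) \approx 1$; not part of the proof above:)</p>

```python
import random

random.seed(0)
# symmetric walk on a 3-cycle: from each state, move to either neighbour w.p. 1/2
neighbours = {0: (1, 2), 1: (0, 2), 2: (0, 1)}

i, j = 0, 2
hits, trials = 0, 2000
for _ in range(trials):
    state = i
    for _ in range(200):            # 200 steps is far more than enough here
        state = random.choice(neighbours[state])
        if state == j:
            hits += 1
            break

print(hits / trials)                # -> 1.0 (up to the step cap)
```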
|
4,159,451 | <p>Kindly see the boldened sentence below. What probability concept does "the most likely value" refer to? How do I symbolize it?</p>
<p>I don't think the author's referring to Expected Value, because it's additive.</p>
<blockquote>
<p> The principle of additivity is so intuitively appealing that it’s easy to think
it’s obvious. But, just like the pricing of life annuities, it’s not obvious! To see
that, substitute other notions in place of expected value and watch everything
go haywire. Consider:<br />
<em><strong>The most likely value of the sum of a bunch of things is the sum of the
most likely values of each of the things.</strong></em> [emphasis mine]<br />
That’s totally wrong. Suppose I choose randomly which of my three
children to give the family fortune to. The most likely value of each child’s
share is zero, because there’s a two in three chance I’m disinheriting them.
But the most likely value of the sum of those three allotments—in fact, its
<em>only possible</em> value—is the amount of my whole estate.</p>
</blockquote>
<p>Ellenberg, <em>How Not to Be Wrong</em> (2014), page 213.</p>
| A rural reader | 874,661 | <p>It sounds like you're trying to describe a situation in which <span class="math-container">$\operatorname{mode}(X_1 + \cdots +X_n) = \operatorname{mode} X_1 + \cdots + \operatorname{mode} X_n$</span>.</p>
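<p>The three-children example can be made concrete with a short enumeration (a sketch, not part of the answer; the estate is normalized to $1$):</p>

```python
from collections import Counter

# equally likely outcomes: the whole estate (1) goes to exactly one of 3 children
outcomes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def mode(values):
    """Most likely value among equally likely outcomes of a discrete variable."""
    return Counter(values).most_common(1)[0][0]

share_modes = [mode([o[c] for o in outcomes]) for c in range(3)]
sum_mode = mode([sum(o) for o in outcomes])

print(share_modes, sum_mode)   # [0, 0, 0] 1  -- mode of sum != sum of modes
```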
|
1,361,517 | <p>$K_1 K_2 \dotsb K_{11}$ is a regular $11$-gon inscribed in a circle, which has a radius of $2$. Let $L$ be a point, where the distance from $L$ to the circle's center is $3$. Find
$LK_1^2 + LK_2^2 + \dots + LK_{11}^2$.</p>
<p>Any suggestions as to how to solve this problem? I'm unsure what method to use. </p>
| JimmyK4542 | 155,509 | <p>Use complex coordinates. Let $L = 0$ (the origin), and let $K_n = 3 + 2e^{i\tfrac{2\pi n}{11}}$ for $n = 1,\ldots,11$. </p>
<p>Clearly $K_1,\ldots,K_{11}$ are $11$ equally spaced points on the circle of radius $2$ centered at $3$.</p>
<p>Then for each $n$, you have $LK_n^2 = \left|3 + 2e^{i\tfrac{2\pi n}{11}}\right|^2 = \left(3 + 2e^{i\tfrac{2\pi n}{11}}\right)\left(3 + 2e^{-i\tfrac{2\pi n}{11}}\right) = \cdots$ (I'll let you finish expanding this). </p>
<p>From here, computing the sum $LK_1^2+LK_2^2+\cdots+LK_{11}^2$ is easy (assuming you know how to sum a geometric series).</p>
<blockquote class="spoiler">
<p> I believe you should get $LK_1^2+LK_2^2+\cdots+LK_{11}^2 = 143$ as the answer.</p>
</blockquote>
<p>EDIT: You don't really need to know how to sum a geometric series if you know that the sum of the $k$-th roots of unity is zero for any positive integer $k$. </p>
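<p>A direct numerical check confirms the value (a quick sketch, with $L$ at the origin and the centre at $3$, as in the answer):</p>

```python
import cmath

K = [3 + 2 * cmath.exp(2j * cmath.pi * n / 11) for n in range(1, 12)]
total = sum(abs(k)**2 for k in K)
print(total)   # 143.0 (up to rounding)
```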
|
466,909 | <p>How many ways can you split the numbers 1 to 5 into two groups of varying size?
For example: '1 and 2,3,4,5' or '1,2 and 3,4,5' or '1,2,3 and 4,5'. How many combinations are there like this? What is the formula?</p>
| wonderwalrus | 673,760 | <p>Here's another way of looking at the problem of splitting the numbers into group A and group B:</p>
<p>For every number, there are two choices: into group A or group B. If there are n numbers, then there are <span class="math-container">$2^n$</span> combinations.</p>
<p>We know that the order of the groups isn't important. For example, using the numbers in the original question, A={1,2,3} and B={4,5} is the same as A={4,5} and B={1,2,3}. As such, each combination in <span class="math-container">$2^n$</span> has a duplicate. Thus, <span class="math-container">$\frac{2^n}{2}$</span> or <span class="math-container">$2^{n-1}$</span>.</p>
<p>There is one case where a group is empty. Since groups must be non-empty, we subtract this one case. Thus, <span class="math-container">$2^{n-1}-1$</span>.</p>
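<p>The count is easy to verify by brute force (a quick sketch, not part of the answer; for $n=5$ the formula gives <span class="math-container">$2^4-1=15$</span>):</p>

```python
from itertools import combinations

numbers = range(1, 6)
splits = set()
for size in range(1, len(numbers)):
    for group in combinations(numbers, size):
        a = frozenset(group)
        b = frozenset(numbers) - a
        splits.add(frozenset({a, b}))   # unordered pair of non-empty groups

print(len(splits))   # 15 == 2**(5 - 1) - 1
```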
|
1,778,601 | <p>Let $G$, $H$ be finite groups. </p>
<p>For a homomorphism $\varphi \colon$ $G \to H$ we learned that for $g \in G$, $ord(\varphi(g))$ must divide $|G|$ and $|H|$. </p>
<p>So then, (in essence), what we did was look at the common factors of $|G|$ and $|H|$ and set $\varphi(x) = \gamma $ where $x$ is a generator of $G$ and $\gamma \in H$ s.t. $ord(\gamma)$ divides both $|G|$ and $|H|$.</p>
<hr>
<p>However, I came across an example where this seemed to break-down – I was hoping for some guidance in this case. </p>
<p>For homomorphisms between $C_4$ and $C_2 \times C_2$, we have $|C_4| = |C_2 \times C_2| = 4$. </p>
<p>This implies (with our strategy above) that if $C_4 = \lbrace 1, x, x^2, x^3 \rbrace$, $x^4 = 1$ then $\varphi(x)$ can have order 1, 2 or 4. </p>
<p>However, the example goes on to state that we cannot set $\varphi(x)$ to have an order of 4, as it is unattainable.</p>
<p>Does this mean that my condition of $\varphi(x)$ dividing both $|G|$ and $|H|$ is necessary but not sufficient? For such cases (as in my example) how would I go about correctly stating all the homomorphisms under a time pressure? (i.e. is there a way for me to check, without too much calculation, that setting $\varphi(x)$ to an element with order 4 would not produce a homomorphism?)</p>
<p>I'm sure my mistake is somewhat elementary, any help on this would be much appreciated!</p>
<hr>
<p>Follow up question: I don't understand why the line $\varphi_i$ is a homomorphism iff ord($\varphi_i$) divides ord(g) is true in this image <a href="https://imgur.com/ICFuuQG" rel="nofollow noreferrer">Link</a> </p>
| lulu | 252,071 | <p>To elaborate on the discussion in the comments: Indicator variables can be very helpful for problems like these. Accordingly, let $X_i$ be the indicator variable for the $i^{th}$ value. Thus $X_i=1$ if your draw of $x$ elements gets one of value $i$, and $X_i=0$ otherwise. It is easy to compute $E[X_i]$...if $p_i$ denotes the probability that the $i^{th}$ value is drawn then we see that $E[X_i]=p_i$ and $$1-p_i=\frac {(m-1)k}{mk}\times \frac {(m-1)k-1}{mk-1}\times \cdots \times \frac {(m-1)k-(x-1)}{mk-(x-1)}=\frac {(n-k)!(n-x)!}{(n-k-x)!(n!)}$$ Where $m$ is the number of value types, $n=km$ is the total number of items, and $x$ is the number of draws.</p>
<p>The desired answer is then: $$E=\sum_{i=1}^mE[X_i]=m\left(1-\frac {(n-k)!(n-x)!}{(n-k-x)!(n!)}\right)$$</p>
<p>Note: it is not difficult to check that this matches the answer given by @MarkoRiedel (I have used $x$ for the number of draws, following the OP, instead of $p$).</p>
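<p>The formula is easy to sanity-check against a brute-force enumeration for small parameters (a sketch, not part of the answer; here $m=3$ value types, $k=2$ copies each, $x=3$ draws, chosen arbitrarily):</p>

```python
from itertools import combinations
from math import comb

m, k, x = 3, 2, 3
n = m * k
items = [v for v in range(m) for _ in range(k)]   # [0, 0, 1, 1, 2, 2]

# brute force: average number of distinct values over all equally likely draws
draws = list(combinations(range(n), x))
brute = sum(len({items[i] for i in d}) for d in draws) / len(draws)

# the formula: E = m * (1 - C(n-k, x) / C(n, x)), equivalent to the factorial form
formula = m * (1 - comb(n - k, x) / comb(n, x))

print(brute, formula)   # both equal 2.4
```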
|
309,872 | <blockquote>
<p>Let $a$ and $b$ be elements of a group $G$. If $|a| = 12, |b| = 22$ and $\langle a \rangle \: \cap \: \langle b\rangle \ne e$, prove that $a^6 = b^{11}$.</p>
</blockquote>
<p>Any ideas where to start would be helpful.</p>
<p>Thanks.</p>
| Potato | 18,240 | <p>I sketch a solution; you should be able to fill in the details.</p>
<p>The intersection of the groups generated by $a$ and $b$ must have order dividing each, by Lagrange's theorem. Since this order is not 1, looking at the prime factorizations tells us it must be 2. Hence it contains a single non-identity element $c$ of order $2$. The only such element in the group generated by $a$ is $a^6$, and the only such element in the group generated by $b$ is $b^{11}$. So these are equal.</p>
|
4,531,998 | <p>As you increase the value of <span class="math-container">$n$</span>, you generate Pythagorean triples whose first term is even. Is there any visual proof of the following explicit formula, and where does it come from or how does one derive it?</p>
<p><span class="math-container">$(2n)^2 + (n^2 - 1)^2 = (n^2 + 1)^2$</span></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th>
<th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th>
<th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th>
</tr>
</thead>
<tbody>
<tr>
<td><span class="math-container">$(2*0)^2+(0^2-1)^2=(0^2+1)^2$</span></td>
<td><span class="math-container">$(2*1)^2+(1^2-1)^2=(1^2+1)^2$</span></td>
<td><span class="math-container">$(2*2)^2+(2^2-1)^2=(2^2+1)^2$</span></td>
</tr>
<tr>
<td><span class="math-container">$(2*0)^2+(0-1)^2=(0+1)^2$</span></td>
<td><span class="math-container">$(2*1)^2+(1-1)^2=(1+1)^2$</span></td>
<td><span class="math-container">$(2*2)^2+(4-1)^2=(4+1)^2$</span></td>
</tr>
<tr>
<td><span class="math-container">$0^2+1^2=1^2$</span></td>
<td><span class="math-container">$2^2+0^2=2^2$</span></td>
<td><span class="math-container">$4^2+3^2=5^2$</span></td>
</tr>
<tr>
<td><span class="math-container">$0+1=1$</span></td>
<td><span class="math-container">$4+0=4$</span></td>
<td><span class="math-container">$16+9=25$</span></td>
</tr>
<tr>
<td><span class="math-container">$1=1$</span></td>
<td><span class="math-container">$4=4$</span></td>
<td><span class="math-container">$25=25$</span></td>
</tr>
</tbody>
</table>
</div> | Mark Bennet | 2,906 | <p>This formula can be derived in various ways, and with connections to different areas of mathematics. One way of seeing why it generates all the triples is to note that the unit circle <span class="math-container">$$x^2+y^2=1$$</span> in polar co-ordinates has <span class="math-container">$y=\sin \theta, x=\cos \theta$</span> so this takes in all the points of the circle (for angles <span class="math-container">$-\pi\le \theta \lt \pi$</span> it traverses the circle once).</p>
<p>Now, with <span class="math-container">$t = \tan \frac {\theta}2$</span> we have the half angle formulae <span class="math-container">$y=\dfrac {2t}{1+t^2}, x=\dfrac {1-t^2}{1+t^2}$</span> which parametrise the unit circle with rational functions (the point at <span class="math-container">$x=-1, y=0$</span> corresponds to the singularity in the tangent function).</p>
<p>If we substitute back into the original equation for the circle, and multiply through by <span class="math-container">$(1+t^2)^2$</span> we get a pythagorean relationship <span class="math-container">$$(1-t^2)^2+(2t)^2 = (2t)^2+(t^2-1)^2=(t^2+1)^2$$</span></p>
<p>We can do this for every rational point on the unit circle, and for each integer <span class="math-container">$t$</span> we recover such a rational point.</p>
<p>There are "more elementary" ways of showing how pythagorean triples work, but this is worth noting as an example of the rational parametrisation of a curve - a topic which has its own theory, generalising the case of the circle. You can draw a diagram to show how a pythagorean triangle with integer sides corresponds to a rational point on the unit circle.</p>
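<p>The identity itself is a one-line polynomial check, and small <span class="math-container">$n$</span> reproduce familiar triples (a quick sketch, not part of the answer):</p>

```python
for n in range(2, 7):
    a, b, c = 2 * n, n**2 - 1, n**2 + 1
    assert a**2 + b**2 == c**2
    print((a, b, c))   # (4, 3, 5), (6, 8, 10), (8, 15, 17), (10, 24, 26), (12, 35, 37)
```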
|
2,010,768 | <p>I have the question </p>
<p>Solve the simultaneous equations </p>
<p>$$\begin{cases}
3^{x-1} = 9^{2y} \\
8^{x-2} = 4^{1+y}
\end{cases}$$</p>
<p><a href="https://i.stack.imgur.com/xN9Ny.jpg" rel="nofollow noreferrer">source image</a></p>
<p>I know that $x-1=4y$ and $3x-6=2+2y$.</p>
<p>However, when I checked the solutions, this should become $6x-16=4y$.</p>
<p>How is this? </p>
| Dr. Sonnhard Graubner | 175,066 | <p>write the system in the form
$$3^{x-1}=3^{4y}$$ and
$$2^{3(x-2)}=2^{2(1+y)}$$
and you will get
$$x-1=4y$$
and
$$3(x-2)=2(1+y)$$
can you proceed?</p>
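<p>Carrying the hint to the end (worked out here, not stated in the answer): $x-1=4y$ and $3(x-2)=2(1+y)$ give $x=3$, $y=\tfrac12$, which indeed satisfies the original system:</p>

```python
import math

x, y = 3, 0.5   # from x - 1 = 4y and 3(x - 2) = 2(1 + y)

assert math.isclose(3 ** (x - 1), 9 ** (2 * y))   # both sides are 9
assert math.isclose(8 ** (x - 2), 4 ** (1 + y))   # both sides are 8
print("solution checks out:", (x, y))
```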
|
6,424 | <p>Hello Mathematica users,</p>
<p>I have problems trying to eliminate the variable $Q(x,y)$ from a PDE, but Mathematica's output is just <code>True</code>. The thing is, I already found how to eliminate it by hand: apply $d/dx$ to the second equation, then determine $dQ[x,y]/dx$ from the first equation and substitute it into the second. </p>
<p>Why can't Mathematica do that? The result in that case is a fourth-order PDE that no longer contains <code>Q[x,y]</code>, which is the aim; I obtained it by hand.</p>
<pre><code> Eliminate[{A0*D[w[x,y],{y,2}]-B0*D[w[x,y],{x,2}]-
C0*D[Q[x,y],{x,1}]+C0*D[w[x,y],{x,2}]==0,
E0*D[Q[x,y],{y,2}]-FF0*D[Q[x,y],{x,2}]-CC0*D[w[x,y],{x,1}]-Q[x,y]==0},
{(Q^(0,2))[x,y],(Q^(1,0))[x,y],Q[x,y]}]
</code></pre>
| Vitaliy Kaurov | 13 | <p>Generally if you have n variables you need to have at least n+1 equations to get meaningful elimination and end up with at least one equation. With m equations and k variables you get m-k resulting equations after elimination (in a well defined system). Your case is two variables and two equations which results of course in this:</p>
<pre><code> Eliminate[{x == 2 + y, y == z - 5 x}, {x, y}]
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
<p>While meaningful request for <code>Eliminate</code> would be for example</p>
<pre><code> Eliminate[{x == 2 + y, y == z - 5 x}, x]
</code></pre>
<blockquote>
<p>-10 + z == 6 y</p>
</blockquote>
<p>My suspicion is you just would like to express your variables via the rest of stuff, which can be done with <code>Solve</code>:</p>
<pre><code>eq={A0*Derivative[0, 2][w][x, t] + C0*Derivative[2, 0][w][x, y] ==
C0*Derivative[1, 0][Q][x, t] + B0*Derivative[2, 0][w][x, t],
Q[x, y] + CC0*Derivative[1, 0][w][x, y] + FF0*Derivative[2, 0][Q][x, y] ==
E0*Derivative[0, 2][Q][x, t]};
eq // Column // TraditionalForm
</code></pre>
<p><img src="https://i.stack.imgur.com/Rcw7m.png" alt="enter image description here"></p>
<pre><code>Solve[eq, {Derivative[0, 2][Q][x, t], Derivative[1, 0][Q][x, t]}] //
First // Column // TraditionalForm
</code></pre>
<p><img src="https://i.stack.imgur.com/EBcjj.png" alt="enter image description here"></p>
|
6,424 | <p>Hello Mathematica users,</p>
<p>I have problems trying to eliminate the variable $Q(x,y)$ from a PDE, but Mathematica's output is just <code>True</code>. The thing is, I already found how to eliminate it by hand: apply $d/dx$ to the second equation, then determine $dQ[x,y]/dx$ from the first equation and substitute it into the second. </p>
<p>Why can't Mathematica do that? The result in that case is a fourth-order PDE that no longer contains <code>Q[x,y]</code>, which is the aim; I obtained it by hand.</p>
<pre><code> Eliminate[{A0*D[w[x,y],{y,2}]-B0*D[w[x,y],{x,2}]-
C0*D[Q[x,y],{x,1}]+C0*D[w[x,y],{x,2}]==0,
E0*D[Q[x,y],{y,2}]-FF0*D[Q[x,y],{x,2}]-CC0*D[w[x,y],{x,1}]-Q[x,y]==0},
{(Q^(0,2))[x,y],(Q^(1,0))[x,y],Q[x,y]}]
</code></pre>
| Daniel Lichtblau | 51 | <p><code>Eliminate</code>, and its cousin <code>GroebnerBasis</code>, work with algebraic equations. If you require differential elimination you will need to take derivatives (prolongations, that is). Here is a blind approach: just take some derivatives, sort the variables into two sets, and eliminate all the <code>Q</code> stuff.</p>
<pre><code>dpolys = {A0*D[w[x, y], {y, 2}] - B0*D[w[x, y], {x, 2}] -
C0*D[Q[x, y], {x, 1}] + C0*D[w[x, y], {x, 2}],
E0*D[Q[x, y], {y, 2}] - FF0*D[Q[x, y], {x, 2}] -
CC0*D[w[x, y], {x, 1}] - Q[x, y]};
derivs = {D[dpolys[[1]], x], D[dpolys[[1]], y]};
allpolys = Join[dpolys, derivs];
bigger = Join[allpolys, D[allpolys, x], D[allpolys, y]];
In[111]:= params = {A0, B0, C0, CC0, E0, FF0};
vars = Complement[Variables[bigger], params];
qvars = Select[Variables[bigger], ! FreeQ[#, Q] &];
wvars = Complement[vars, qvars];
In[110]:= GroebnerBasis[bigger, wvars, qvars,
MonomialOrder -> EliminationOrder]
{(-A0)*Derivative[0, 2][w][x, y] + A0*E0*Derivative[0, 4][w][x, y] +
B0*Derivative[2, 0][w][x, y] - C0*Derivative[2, 0][w][x, y] -
C0*CC0*Derivative[2, 0][w][x, y] -
B0*E0*Derivative[2, 2][w][x, y] +
C0*E0*Derivative[2, 2][w][x, y] -
A0*FF0*Derivative[2, 2][w][x, y] +
B0*FF0*Derivative[4, 0][w][x, y] -
C0*FF0*Derivative[4, 0][w][x, y]}
</code></pre>
|
2,251,112 | <p>A small college has 1095 students. What is the approximate probability that more than five students were born on Christmas day? Assume that the birthrates are constant throughout the year and that each year has 365 days.</p>
<p>I tried using<br>
$X \sim \mathrm{Poisson}(3)$ and calculating $P(X\gt5)$. My calculation turned out to be about $0.22$, which was wrong. (What was wrong with my approximation?) The given solution used a Normal approximation to get an answer of $0.0735$. When I tried using a Normal approximation, I was still unable to get that answer. Here is how I attempted it. </p>
<p>$ N\sim(3,1092/365)$.<br>
$ P(X\gt5)=P(N\gt4.5) $ #continuity correction<br>
$= P\left(Z\gt\frac{4.5-3}{\sqrt{\frac{1092}{365}}}\right)$<br>
=$ P(Z>0.86721)$.<br>
=$0.193$. </p>
<p>Any help is much appreciated.</p>
| Cato | 357,838 | <p>Use the Poisson probability distribution with $\lambda = 1095 / 365 = 3$,
then calculate </p>
<p>using</p>
<p>$P(n) = \frac{\lambda^n}{n!} \exp(-\lambda)$</p>
<p>1 - P(0) - P(1) - P(2) - P(3) - P(4) - P(5) = </p>
<p>$1 - \exp(-\lambda)(1 + \lambda + \lambda^2 / 2! + \lambda^3 / 3! + \lambda^4/4! + \lambda^5/5!) = .084$</p>
<p>note this is an approximation,</p>
<p>also assuming that 5 is NOT included</p>
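<p>Comparing this Poisson approximation with the exact binomial tail (a sketch, not part of the answer):</p>

```python
import math

n, p = 1095, 1 / 365
lam = n * p   # = 3

poisson_tail = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(6))
binom_tail = 1 - sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6))

print(poisson_tail, binom_tail)   # approximately 0.084 in both cases
```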
|
79,868 | <p>Given a function $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ satisfying suitable conditions (exponential decay at infinity, continuous, and bounded variation) is good enough, its <em>Mellin transform</em> is defined by the function</p>
<p>$$M(f)(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y},$$</p>
<p>and $f(y)$ can be recovered by the Mellin inversion formula:</p>
<p>$$f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} y^{-s} M(f)(s) ds.$$</p>
<p>This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0, 1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that </p>
<p>$$\frac{1}{2\pi i} \int_{2 - i \infty}^{2 + i \infty} n^{-s} \frac{ds}{s}$$</p>
<p>is equal to 1 if $0 < n < 1$, and is 0 if $n > 1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma > 0$, and the integral is equal to $1/2$ if $n = 1$.)</p>
<p>I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $- \infty$ or $+ \infty$ depending on whether $n > 1$), but this proof doesn't prove the general Mellin inversion formula.</p>
<p>My question is:</p>
<blockquote>
<p>What do the Mellin transform and the inversion formula mean? Morally, why are they true?</p>
</blockquote>
<p>For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is an integral over a vertical line in the complex plane?</p>
<p>I found some resources -- Wikipedia; <a href="https://mathoverflow.net/questions/383/motivating-the-laplace-transform-definition">this MO question</a> is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.</p>
| paul garrett | 15,629 | <p>[Some next-day edits in response to comments] As counterpoint to other viewpoints, one can say that Mellin inversion is "simply" Fourier inversion in other coordinates. Depending on one's temperament, this "other coordinates" thing ranges from irrelevancy to substance... The question about moral imperatives for Fourier inversion is addressed a bit below.</p>
<p>[Added: the exponential map $x\rightarrow e^x$ gives an isomorphism of the topological group of additive reals to multiplicative. Thus, the harmonic analysis on the two is necessarily "the same", even if the formulaic aspects look different. The occasional treatment of values (and derivatives) at $0$ for functions on the positive reals, as in "Laplace transforms", is a relative detail, which certainly has a corresponding discussion for Fourier transforms.]</p>
<p>The specific riff in Perron's identity in analytic number theory amounts to (if one tolerates a change-of-coordinates) guessing/discerning an L^1 function on the line whose Fourier transform is (in what function space?!) the characteristic function of a half-line. </p>
<p>Since the char fcn of a half-line is not in L^2, and does not go to 0 at infinity, there are bound to be analytical issues... but these are <em>technical</em>, not conceptual. </p>
<p>[Added: the Fourier transform families $x^{\alpha-1}e^{-x}\cdot \chi_{x>0}$ and $(1+ix)^{-\alpha}$, (up to constants) where $\chi$ is the characteristic function, when translated to multiplicative coordinates, give one family approaching the desired "cut-off" effect of the Perron integral. There are other useful families, as well.]</p>
<p>To my taste, the delicacies/failures/technicalities of not-quite-easily-legal aspects of Fourier transforms are mostly crushed by simple ideas about Sobolev spaces and Schwartz' distributions... tho' these do not change the underlying realities. They only relieve us of some of the burden of misguided fussiness of some self-appointed guardians of a misunderstanding of the Cauchy-Weierstrass tradition.</p>
<p>[Added: surely such remarks will strike some readers as inappropriate poesy... but it is easy to be more blunt, if desired. Namely, in various common contexts there is a pointless, disproportionate emphasis on "rigor". Often, elementary analysis is the whipping-boy for this impulse, but also one can see elementary number theory made senselessly difficult in a similar fashion. Supposedly, the audience is being made aware of a "need/imperative" for care about delicate details. However, in practice, one can find oneself in the role of the Dilbertian "Mordac the Preventer (of information services)" [see wiki] proving things like the intermediate value theorem to calculus students: it is obviously true, first, or else one's meaning of "continuous" or "real numbers" needs adjustment; nevertheless, the traditional story is that this intuition must be delegitimized, and then a highly stylized substitute put in its place. What was the gain? Yes, something foundational, but time has passed, and we have only barely recovered, at some expense, what was obviously true at the outset.</p>
<p>On another hand, Bochner's irritation with "distributions theory" was that it was already clear to <em>him</em> that things worked like this, and <em>he</em> could already answer all the questions about generalized functions... so why be impressed with Schwartz' "mechanizing" it? For me, the answer is that Schwartz arranged a situation so that "any idiot" could use generalized functions, whereas previously it was an "art". Yes, sorta took the fun out of it... but maybe practical needs over-rule preservation of secret-society clubbiness?] </p>
<p>Why should there be Fourier inversion? (for example...) Well, we can say we <em>want</em> such a thing, because it diagonalizes the operator $d/dx$ on the line (and more complicated things can be said in more complicated situations). </p>
<p>Among other things, this renders "engineering math" possible... That is, one can understand and justify the almost-too-good-to-be-true ideas that seem "necessary" in applied situations... where I can't help but add "like modern number theory". :)</p>
<p>[Added: being somewhat an auto-didact, I was not aware until relatively late that "proof" was absolutely sacrosanct. To the point of fetishism? In fact, we do seem to collectively value insightful conjecture and not-quite-justifiable heuristics, and interesting unresolved ideas offer more chances for engagement than do settled, ironclad, finished discussions. For that matter, the moments that one intuits "the truth", and then begins looking for reasons, are arguably more memorable, more fun, than the moments at which one has dotted i's and crossed t's in the proof of a not-particularly-interesting lemma whose truth was fairly obvious all along. More ominous is the point that sometimes we can see that something is true and works despite being unable to "justify" it. Heaviside's work is an instance. Transatlantic telegraph worked fine despite...]</p>
<p>In other words: spectral decomposition and synthesis. Who couldn't love it?!</p>
<p>[Added: and what recourse do we have than to hope that reasonable operators are diagonalizable, etc? Serre and Grothendieck (and Weil) knew for years that the Lefschetz fixed-point theorem should have an incarnation that would express zeta functions of varieties in terms of cohomology, before being able to make sense of this. Ngo (Loeser, Clucker, et alteri)'s proof of the fundamental lemma in the number field case via model theoretic transfer from the function field case is not something I'd want to have to "justify" to negativists!]</p>
|
79,868 | <p>Given a function $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ satisfying suitable conditions (exponential decay at infinity, continuous, and bounded variation) is good enough, its <em>Mellin transform</em> is defined by the function</p>
<p>$$M(f)(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y},$$</p>
<p>and $f(y)$ can be recovered by the Mellin inversion formula:</p>
<p>$$f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} y^{-s} M(f)(s) ds.$$</p>
<p>This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0, 1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that </p>
<p>$$\frac{1}{2\pi i} \int_{2 - i \infty}^{2 + i \infty} n^{-s} \frac{ds}{s}$$</p>
<p>is equal to 1 if $0 < n < 1$, and is 0 if $n > 1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma > 0$, and the integral is equal to $1/2$ if $n = 1$.)</p>
<p>I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $- \infty$ or $+ \infty$ depending on whether $n > 1$), but this proof doesn't prove the general Mellin inversion formula.</p>
<p>My question is:</p>
<blockquote>
<p>What do the Mellin transform and the inversion formula mean? Morally, why are they true?</p>
</blockquote>
<p>For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is an integral over the complex plane?</p>
<p>I found some resources -- Wikipedia; <a href="https://mathoverflow.net/questions/383/motivating-the-laplace-transform-definition">this MO question</a> is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.</p>
| Phil Isett | 7,193 | <p>As others have pointed out, the Mellin inversion theorem is just the Fourier inversion theorem in disguise for the particular group ${\mathbb R_+}$ with invariant measure $\frac{dx}{x}$. The goal of the Fourier transform is to express a general function as a linear combination (i.e. integral) of the characters of the group, so that in this basis the operations of translations and all commuting operations will be diagonalized. For the ${\mathbb R_+}$, these characters look like $x \mapsto x^{-s}$ (the minus sign because of the normalization you chose in the question), and they are unitary (take values in the circle) for imaginary $s$ -- the operation of multiplying characters is just addition in the $s$ variable, so in the inversion formula you have the measure $ds$. There's also this funny thing about how there are $s$ with positive real part -- this is because in the "physical space" ${\mathbb R_+}$ you're always talking about distributions which are compactly supported away from $0$ when you use this transform. Let's ignore that.</p>
<p>Since Mellin inversion is a disguised Fourier inversion, the real question is: why is the Fourier inversion formula on ${\mathbb R}$ true? To me the most convincing answer is the following: we can decompose a general function $f(x) = \int f(y) \delta(x-y) dy$ (this is the definition of $\delta$ but you have to take approximate delta-functions to make this rigorously work like a decomposition), so if we want to express a general function as a combination of the characters $x \mapsto e^{2 \pi i \xi x}$, it suffices to consider the $\delta$ function</p>
<p>$\delta(x) = \int u(\xi) e^{2 \pi i \xi x} d\xi $</p>
<p>One interpretation of this formal idea is that the distributions $\delta(x-y)$ are just like your usual standard basis functions.</p>
<p>Now, observe that because $\delta(x)$ is invariant under multiplication by $e^{2 \pi i \eta x}$ for any $\eta$, the distribution $u(\xi)$ is translation invariant, and therefore must be constant. After you find the constant, plugging in $\delta(x) = C \int e^{2 \pi i \xi x} d\xi$ into $f(x) = \int f(y) \delta(x-y) dy$ gives the Fourier inversion formula. Complete, rigorous proofs all follow more or less these lines, but there are many flavors of how you like to phrase it. Of course, we can write the whole argument with multiplicative characters as well.</p>
<p>Edit: The above argument assumes uniqueness of the representation, but one can also remark that if there is even a single function $f(x)$ for which $\int f(x) dx \neq 0$ and which can be realized as a linear combination $\int \hat{f}(\xi) e^{2 \pi i \xi \cdot x} d\xi$, then by rescaling, renormalizing and taking a limit, we obtain $\delta(x) = C \lim_{\epsilon \to 0} \epsilon^{-1} f(x/\epsilon)$, leading formally to the formula $\delta(x) = C \int e^{2 \pi i \xi \cdot x} d\xi$. One common rigorous execution of this philosophy is performed by taking $f$ to be a Gaussian.</p>
|
79,868 |
| Community | -1 | <p>Another property: The (inverse) Mellin transform interchanges $q$-expansions of modular forms and Dirichlet $L$-series.</p>
|
76,747 | <pre><code>Plot[D[Abs[x], x], {x, -10, 10}, Exclusions -> {0}]
</code></pre>
<p>gives out several lines of error messages and an empty plot.</p>
<pre><code>Plot[Derivative[1][Abs[#1] & ][x], {x, -10, 10}, Exclusions -> {0}]
</code></pre>
<p>just gives out an empty plot.</p>
<p>How do I plot $\left(\left|x\right|\right)^\prime$?</p>
| ubpdqn | 1,997 | <p>I suspect the simplest way is just define the absolute value function yourself, e.g.</p>
<pre><code>f[x_] := Piecewise[{{x, x > 0}, {-x, x < 0}}]
Plot[Evaluate[D[f[x], x]], {x, -1, 1}, Exclusions -> {0}]
</code></pre>
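<p>Away from <span class="math-container">$0$</span>, the derivative of <span class="math-container">$|x|$</span> is the sign function, which is what the plot should show. For readers outside Mathematica, a quick numerical sanity check of that fact (a throwaway Python sketch of my own, not part of the answer):</p>

```python
import math

def abs_prime(x, h=1e-6):
    # Central-difference approximation to d|x|/dx; valid only away from 0,
    # where |x| is differentiable.
    return (abs(x + h) - abs(x - h)) / (2 * h)

samples = [-3.0, -0.5, 0.25, 7.0]
derivs = [abs_prime(x) for x in samples]
# Each derivative should equal sign(x): -1 for negative x, +1 for positive.
```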
|
3,851,399 | <p>I am attempting to prove the following statement</p>
<blockquote>
<p>Prove that if n is any integer then 4 either divides n^2 or n^2 − 1</p>
</blockquote>
<p>I have started with the case of n = 2k</p>
<pre><code>Consider the case n = 2k
n = 2k
n^2 = 4k^2
⇒ n = 4k
∴ 4 divides n^2 as there is some integer k in which n = k
</code></pre>
<p>Would this be considered a correct proof for the first case? Are there any additions I should make?</p>
<p>I also attempted case 2, where n = 2k+1; however, I am less sure of the direction I have taken, and it is incomplete, so some advice on this would also be appreciated.</p>
<pre><code>Consider the case n = 2k+1
n = 2k+1
n^2 = (2k+1)^2
n^2 = 4k^2 + 1
n^2 - 1 = 4k^2
</code></pre>
| Alessio K | 702,692 | <p>Consider the case <span class="math-container">$n=2k$</span>, then <span class="math-container">$$n^2=4k^2$$</span>
<span class="math-container">$$\therefore \space 4|n^2 \space \text{that is if $n$ is even then $n^2$ is divisible by $4$}$$</span></p>
<p>For the case <span class="math-container">$n=2k+1$</span> we have <span class="math-container">$$n^2-1=(2k+1)^2-1=(2k)^2+2k+2k+1-1=4k^2+4k=4(k^2+k)$$</span>
<span class="math-container">$$\therefore \space 4|(n^2-1) \space \text{that is if $n$ is odd then $n^2-1$ is divisible by $4$}$$</span></p>
<p>Since all integers are of the form <span class="math-container">$2k$</span> or <span class="math-container">$2k+1$</span> for some <span class="math-container">$k\in \mathbb Z$</span>, we are done.</p>
|
1,879,346 | <p>Ok, so I have this question from my math book.</p>
<blockquote>
<p>The vertices A,B,C of a triangle are (3,5),(2,6) and (-4,-2)
respectively. Find the coordinates of the circum-centre and also the
radius of the circum-circle of the triangle.</p>
</blockquote>
<p>How can we solve this? Can we use the distance formula?
Answer: The circum-radius was found to be <em>R=5</em>.The coordinates of circum-centre were found to be <em>(-1,2)</em>. A diagram would be appreciated. Thank you!</p>
| David Quinn | 187,299 | <p>HINT...You could find the equations of the perpendicular bisectors of two of the sides and where they meet will be the circumcentre. Then use the distance formula to work out the circumradius.</p>
|
2,046,773 | <p>Given a function $f$ that is differentiable at a point $x_0$, if we define (using the Riemann integral)</p>
<p>$$F(x) = \int_a^x f$$</p>
<p>Can we necessarily say that $F^{\prime}(x)$ is continuous at $x_0$? Going back and forth between $f$ and $F$ confuses me a bit. I <em>think</em> that the Fundamental Theorem of Calculus gives us some relation between $F^{\prime}(x_0)$ and $f(x_0)$, but I'm not sure. </p>
| user251257 | 251,257 | <p>The statement is true in the sense that <span class="math-container">$F'$</span> is continuous at <span class="math-container">$x_0$</span> on its domain, which need not be the entire interval <span class="math-container">$(a, b)$</span>. We may drop the assumption that <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>; we may also replace Riemann integrability with Lebesgue integrability.</p>
<p><strong>Claim:</strong></p>
<p>Let <span class="math-container">$D = \{ x\in (a, b) \mid F \text{ is differentiable at } x \}$</span>. For <span class="math-container">$x_n\in D$</span> with <span class="math-container">$x_n\to x_0$</span> it follows <span class="math-container">$F'(x_n) \to F'(x_0) = f(x_0)$</span>.</p>
<p><em>Proof:</em></p>
<p>Notice that as <span class="math-container">$f$</span> is continuous at <span class="math-container">$x_0$</span> it follows
<span class="math-container">$$F'(x_0) = \lim_{y\to x_0} \frac{1}{y - x_0} \int_{x_0}^y f(t) dt = f(x_0).$$</span></p>
<p>Fix <span class="math-container">$\varepsilon > 0$</span>.</p>
<ol>
<li>As <span class="math-container">$f$</span> is continuous at <span class="math-container">$x_0$</span>, a <span class="math-container">$\delta > 0$</span> exists such that for every <span class="math-container">$x\in [a, b] \cap (x_0 - \delta, x_0 + \delta)$</span> it follows
<span class="math-container">$$|f(x) - f(x_0)| < \frac\varepsilon2.$$</span></li>
<li>As <span class="math-container">$x_n\to x_0$</span>, a <span class="math-container">$N> 0$</span> exists such that for every <span class="math-container">$n \ge N$</span> it follows <span class="math-container">$$|x_n - x_0| < \frac{\delta}{2}.$$</span></li>
<li>As <span class="math-container">$F$</span> is differentiable at <span class="math-container">$x_n$</span>, a <span class="math-container">$\eta_n > 0$</span> exists such that for every <span class="math-container">$y\in [a, b] \cap (x_n - \eta_n, x_n + \eta_n)$</span> with <span class="math-container">$y\ne x_n$</span> it follows
<span class="math-container">$$ \left|\frac{F(y) - F(x_n)}{y - x_n} - F'(x_n) \right| < \frac\varepsilon2. $$</span></li>
</ol>
<p>Putting these together, for every <span class="math-container">$y\in[a,b]$</span> with <span class="math-container">$y\ne x_n$</span> and <span class="math-container">$|y - x_n| < \min(\eta_n, \frac{\delta}{2})$</span> it follows
<span class="math-container">\begin{align*}
\left| F'(x_n) - f(x_0) \right|
&\le \left| F'(x_n) - \frac{F(y) - F(x_n)}{y - x_n} \right| + \left| \frac{F(y) - F(x_n)}{y - x_n} - f(x_0) \right| \\
& \le \frac\varepsilon 2 + \left| \frac{1}{y - x_n}\int_{x_n}^y f(t) - f(x_0) dt \right| \\
& \le\frac\varepsilon 2 + \frac{1}{y - x_n}\int_{x_n}^y \underbrace{|f(t) - f(x_0)|}_{\le \varepsilon / 2} dt \\
&\le \varepsilon,
\end{align*}</span>
as <span class="math-container">$|t - x_0| \le |t - x_n| + |x_n - x_0| \le |y - x_n| + |x_n - x_0| < \delta$</span>.</p>
<p>That is, <span class="math-container">$F'(x_n)$</span> converges to <span class="math-container">$f(x_0) = F'(x_0)$</span>.</p>
<p><em>Notes:</em></p>
<p>If <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>, a similar estimation shows that
<span class="math-container">$$\lim_{n\to\infty} \frac{F'(x_n) - F'(x_0)}{x_n - x_0} = f'(x_0)$$</span></p>
<hr />
<p>Old Answer for reference only:</p>
<p>This rather an extensive comment than an answer.</p>
<p>Since <span class="math-container">$f$</span> is Riemann integrable, its set of continuity
<span class="math-container">$$ C = \left\{ x\in[a,b] \mid \lim_{t\to x} f(t) = f(x) \right\} $$</span>
has full measure, that is <span class="math-container">$|C| = b-a$</span>. Thus, <span class="math-container">$F'$</span> exists on <span class="math-container">$C$</span> and agrees with <span class="math-container">$f$</span> on <span class="math-container">$C$</span>. Trivially, <span class="math-container">$F'$</span> restricted onto <span class="math-container">$C$</span> is continuous.</p>
<p>That says nothing about the domain of <span class="math-container">$F'$</span>, which might be larger than <span class="math-container">$C$</span>, and whether <span class="math-container">$F'$</span> is continuous at <span class="math-container">$x_0$</span>.</p>
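<p>A numerical illustration of the claim (my own sketch, not part of the proof): take <span class="math-container">$f(t)=|t|$</span>, which is continuous at <span class="math-container">$x_0=0$</span> but not differentiable there; the symmetric difference quotient of <span class="math-container">$F(x)=\int_{-1}^x f$</span> still approaches <span class="math-container">$f(x_0)$</span>:</p>

```python
def f(t):
    return abs(t)

def F(x, steps=20000):
    # Trapezoid-rule approximation of the integral of f from -1 to x.
    a = -1.0
    h = (x - a) / steps
    return h * (0.5 * (f(a) + f(x)) + sum(f(a + i * h) for i in range(1, steps)))

d = 1e-3
diff_quotient = (F(d) - F(-d)) / (2 * d)                  # approximates F'(0) = f(0) = 0
diff_quotient_half = (F(0.5 + d) - F(0.5 - d)) / (2 * d)  # approximates F'(1/2) = f(1/2) = 1/2
```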
|
2,385,338 | <p>In a bag, there are 6 white disks, 6 black disks and 8 red disks. A disk is drawn at random from the bag. The colour is recorded and the disk is returned to the bag. This process is repeated 10 times.
Find the probability that fewer than 4 red disks are drawn. </p>
<p>I approached this question with n = 8 and p = 0.4, q = 0.6, but I'm not sure what to do with the repetition of 10 times. How do I do this question?</p>
| Jordan Abbott | 468,755 | <p>As the disc is returned to the bag it will be the <em>same</em> probability of a red disc coming out each time. As it is <em>less than</em> four red discs, we need to find the probability of $0, 1, 2$ and $3$ red discs being chosen. Your values of $p$ and $q$ are indeed correct, however I'm a bit confused as to why you've chosen $n = 8$. There are $10$ trials, and as such I have chosen $n = 10$ and $r = 0, 1, 2$ and $3$, and putting this into the formula $$^nC_r\times p^{r}\times q^{n-r}$$ and adding up the four values for r you should get your answer.</p>
<p>Hope this helps :)</p>
<p>EDIT: Answer:</p>
<blockquote class="spoiler">
<p>$(^{10}C_0 \times 0.4^0 \times 0.6^{10-0}) + (^{10}C_1 \times 0.4^1 \times 0.6^{10-1})...$<br> I work out the answer to be 0.382...</p>
</blockquote>
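<p>A quick computation of the same sum (a throwaway Python check of the stated answer, written by me):</p>

```python
from math import comb

n, p, q = 10, 0.4, 0.6
# P(fewer than 4 red discs) = sum of P(X = r) for r = 0, 1, 2, 3, with X ~ Bin(10, 0.4)
prob = sum(comb(n, r) * p ** r * q ** (n - r) for r in range(4))
# prob comes out to roughly 0.382, matching the answer above
```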
|
1,012,730 | <p>A purely imaginary number is one which contains no non-zero real component. </p>
<p>If I had a sequence of numbers, say $\{0+20i, 0-i, 0+0i\}$, could I call this purely imaginary? </p>
<p>My issue here is that because $0+0i$ belongs to multiple sets, not just purely imaginary, is there not a valid case to say that the sequence isn't purely imaginary?</p>
| Seth | 31,659 | <p>A complex number is said to be purely imaginary if its real part is zero. Zero is purely imaginary, as its real part is zero. </p>
|
1,012,730 |
| user191016 | 191,016 | <p>0 is both purely real and purely imaginary. The given set is purely imaginary. That's not a contradiction, since "purely real" and "purely imaginary" are not mutually exclusive. Somewhat similarly baffling: "all members of X are even integers" and "all members of X are odd integers" together are not a contradiction. They just mean that X is the empty set.</p>
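<p>Python's built-in complex type makes the convention concrete ("purely imaginary" = zero real part); a trivial check of the example set (my own, illustrative only):</p>

```python
members = [20j, -1j, 0j]        # the set {0+20i, 0-i, 0+0i}
all_purely_imaginary = all(z.real == 0 for z in members)
zero_is_also_real = (0j).imag == 0   # 0 qualifies as purely real as well
```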
|
3,570,943 | <p>From the comments I got, my question amounts to: can the "inverse of inverse" law be derived from the definition of division? </p>
<hr>
<p>Division is defined as : <span class="math-container">$\dfrac AB = A.\dfrac1B$</span>, that is, </p>
<p>" dividisng by A by B is, by definition, mutiplicating A by the inverse of B". </p>
<p>My question is: </p>
<p>How do I derive from this definition, the equality: </p>
<p><span class="math-container">$\frac{a}{b/c}$</span> = <span class="math-container">$\frac{ac}{b}$</span> </p>
<p>I tried this: </p>
<p><span class="math-container">$\frac{a}{b/c}$</span></p>
<p>= <span class="math-container">$\frac{a}{b\times1/c}$</span> ( applying the definition of division to the denominator) </p>
<p>=<span class="math-container">$\frac{a\times1}{b\times1/b}$</span> ( using " 1 is the identity for multiplication") </p>
<p>= <span class="math-container">$\frac ab$$\times$$\frac{1}{1/c}$</span> ( using <span class="math-container">$\frac{ab}{cd}$</span> = <span class="math-container">$\frac{a\times b}{c\times d}$</span> in the reverse sense)</p>
<p>But could not go further. </p>
<p>How to recover <span class="math-container">$\frac {c}{1}$</span> from <span class="math-container">$\frac{1}{1/c}$</span> using exclusively the definition of division <span class="math-container">$\frac AB$</span> = A.<span class="math-container">$\frac1B$</span> ? </p>
<p>It seems to me I am moving in a circle, since apparently I would need the formula I want to prove to obtain the last equality I want. </p>
| Abhirup Adhikary | 742,851 | <p><span class="math-container">$1/(1/c)$</span> </p>
<p>Here, according to the definition, we need to multiply 1 with the multiplicative inverse of (1/c)</p>
<p>Multiplicative inverse of a number is a number which when multiplied by the given number yields 1</p>
<p>So, here multiplicative inverse of (1/c) is c (you can check it by calculating),</p>
<p>Hence, <span class="math-container">$1/(1/c)$</span> = c</p>
<p><em>I would not say the comments to the question are less informative or different from my answer but I presented this answer in my own way to make it clear.</em></p>
|
3,570,943 |
| fleablood | 280,126 | <p>Just do it:</p>
<p><span class="math-container">$\frac a {\frac bc}$</span> will by definition be <span class="math-container">$a\cdot \frac 1{\frac bc}$</span></p>
<p>So we need to figure out what <span class="math-container">$\frac 1{\frac bc}$</span> is.</p>
<p>And it is the value <span class="math-container">$k$</span> so that <span class="math-container">$\frac bc\cdot k = 1$</span>.[1]</p>
<p>So we need to solve for <span class="math-container">$(\frac bc)k =(b\cdot \frac 1c)\cdot k = 1$</span>.</p>
<p>We know that <span class="math-container">$(b\cdot \frac 1c)\cdot c = b\cdot(\frac 1c\cdot c)= b\cdot 1 = b$</span>.</p>
<p>And so <span class="math-container">$(b\cdot \frac 1c)\cdot (c \cdot \frac 1b) = b\cdot (\frac 1c \cdot c)\cdot \frac 1b = b\cdot 1 \cdot \frac 1b = b\cdot \frac 1b = 1$</span>.</p>
<p>So <span class="math-container">$\frac 1{\frac bc} = c\cdot \frac 1b = \frac cb$</span>.</p>
<p>So <span class="math-container">$\frac a{\frac bc} = a\cdot \frac 1{\frac bc}=a\frac cb = a(c\cdot \frac 1b)=(ac)\frac 1b = \frac {ac}b$</span> </p>
<p>......</p>
<p>[1]</p>
<p>This assumes that for any <span class="math-container">$m \ne 0$</span> that there exists a <span class="math-container">$k$</span> that <span class="math-container">$mk =1$</span> <em>and</em> that <span class="math-container">$k$</span> is unique.</p>
<p>That such a <span class="math-container">$k$</span> exists is the definition of <span class="math-container">$\mathbb Q$</span> is a field.</p>
<p>We can prove <span class="math-container">$k$</span> is unique. </p>
<p>If <span class="math-container">$k$</span> and <span class="math-container">$j$</span> are both inverses of <span class="math-container">$m$</span> then <span class="math-container">$mk =1=km $</span> and <span class="math-container">$mj = 1=jm$</span>.</p>
<p>That means that <span class="math-container">$kmj =(km)j=1\cdot j = j$</span> but <span class="math-container">$kmj=k(mj) =k\cdot 1 =k$</span>. So <span class="math-container">$j = k$</span>. </p>
<p>And inverses are unique.</p>
<p>We can use that to prove that <span class="math-container">$\frac 1{\frac 1c} = c$</span>.</p>
<p>because <span class="math-container">$c \times \frac 1c = 1$</span> that means.... <em>by definition</em> that <span class="math-container">$c$</span> is the inverse of <span class="math-container">$\frac 1c$</span>. SO <span class="math-container">$c = \frac 1{\frac 1c}$</span>.</p>
|
1,778,982 | <p>I can prove that $G$ is cyclic, but I am not sure how to prove the orders. I know I need to use the Fundamental Theorem of Cyclic Groups but I'm not sure how to apply it. Is there something obvious I am missing?</p>
| Alex Wertheim | 73,817 | <p>The subgroups of any finite cyclic group $G$ are in bijective correspondence with the divisors of $|G|$. Write $|G| = p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{n}^{k_{n}}$ for distinct primes $p_{1}, p_{2}, \ldots, p_{n}$ and positive integers $k_{1}, \ldots, k_{n}$. Then each divisor of $|G|$ can be specified by a tuple $(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n})$ such that $0 \leqslant \alpha_{i} \leqslant k_{i}$, and each distinct tuple gives rise to a distinct divisor of $|G|$. Then one can see that the only way that $|G|$ can have exactly two nontrivial divisors (i.e., two divisors other than $1$ and $|G|$) is if $|G| = p^{3}$ (in which case the nontrivial divisors are $p, p^{2}$) or $|G| = pq$ (in which case the divisors are $p, q$). </p>
|
2,608,455 | <p>Could someone please help me with how do I calculate the sum of the $$\sum_{n=1}^{\infty}\frac{1}{4n^{2}-1}$$ infinite series? I see that $$\lim_{n\rightarrow\infty}\frac{1}{4n^{2}-1}=0$$ so the series is convergent based on the Cauchy's convergence test. But how do I calculate the sum? Thank you.</p>
| José Carlos Santos | 446,262 | <p>Since$$\frac1{4n^2-1}=\frac1{(2n-1)(2n+1)}=\frac12\left(\frac1{2n-1}-\frac1{2n+1}\right),$$your series is a telescopic series.</p>
|
3,043,319 | <p>I am just beginning with functions of two variables, and I don't really understand how to start this problem.
Function <span class="math-container">$$f(x,y)= (x^2+4y^2-5)(x-1)$$</span>
Find where <span class="math-container">$$f(x,y)=0; \quad f(x,y)>0; \quad f(x,y)<0.$$</span></p>
<p>To find where <span class="math-container">$$f(x,y)=0$$</span> I already have the ellipse <span class="math-container">$$\frac{x^2}{5} +\frac{y^2}{5/4}=1$$</span> and the straight line <span class="math-container">$$x=1,$$</span> but for the other two I don't know how to proceed. </p>
<p>Thank you very much.</p>
| J.G. | 56,861 | <p>Since <span class="math-container">$f(x)=x$</span> bijects <span class="math-container">$A$</span> to <span class="math-container">$A$</span>, <span class="math-container">$\sim$</span> is reflexive. Write a similar proof <span class="math-container">$\sim$</span> is symmetric, using the fact bijections have inverses; write a similar proof <span class="math-container">$\sim$</span> is transitive, using a composition of bijections.</p>
|
616,454 | <p>How can I calculate this integral?</p>
<p>$$ \int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x$$ </p>
<p>I tried using integration by parts, but it doesn't lead me any improvement. So I made an attempt through the replacement $$ \cos(3x) = t$$ and it becomes $$\frac{1}{-3}\int \exp\left(2\left(\dfrac{\arccos(t)}{3}\right)\right)\, \mathrm{d}t$$ but I still can not calculate the new integral. Any ideas?</p>
<p><strong>SOLUTION</strong>:</p>
<p>$$\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x = \int {\sin(3x)}\, \mathrm{d}\left(\frac{\exp(2x)}{2}\right)=$$ </p>
<p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{1}{2}\int {\exp(2x)}\, \mathrm{d}(\sin(3x))=$$</p>
<p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\exp(2x)}{\cos(3x)}\mathrm{d}x=$$</p>
<p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\cos(3x)}\,\mathrm{d}\left(\frac{\exp(2x)}{2}\right)=$$</p>
<p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+\frac{3}{4}\int {\exp(2x)}\mathrm{d({\cos(3x)})}=$$</p>
<p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}-\frac{9}{4}\int {\sin(3x)}{\exp(2x)}\mathrm{d}x$$</p>
<p>$$ =>(1+\frac{9}{4})\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x= \frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+c$$</p>
<p>$$=\frac{1}{13}\exp(2x)(2\sin(3x)-3\cos(3x))+c$$</p>
| Mhenni Benghorbal | 35,472 | <p>You need to use integration by parts or use the identity</p>
<p>$$ \sin y = \frac{e^{iy} - e^{-iy} }{2i} $$</p>
<p>Which makes the integral easier.</p>
|
190,392 | <p>How can one prove that the torus and a ball have the same cardinality?
The proof is to use the Cantor–Bernstein theorem.
I know that they are subsets of $\mathbb{R}^{3}$, so I can write $\leq \aleph$, but I do not know how to prove $\geqslant \aleph$.
Thanks</p>
| Asaf Karagila | 622 | <p><strong>Hint:</strong> </p>
<p>Show that a circle is equipotent with $[0,2\pi)$ by fixing a base point, and sending each point on the circle to its angle, where $0$ is the base point.</p>
<p>Since both the torus and the ball contain a circle, both have at least $\aleph$ many elements.</p>
|
190,392 | <p>How can one prove that the torus and the ball have the same cardinality?
The proof should use the Cantor–Bernstein theorem.
I know that they are subsets of $\mathbb{R}^{3}$, so I can write $\leq \aleph$, but I do not know how to prove $\geq \aleph$.
Thanks</p>
| Andrea Mori | 688 | <p>Let's assume known the fact that $\Bbb R^m$ and $\Bbb R^n$ have the same cardinality for all pairs of positive integers $(m,n)$.</p>
<p>Then the following follows easily: Let $X\subset\Bbb R^n$ such that there exists a subset $A\subset X$ homeomorphic to an open subset of $\Bbb R^m$ for some $m\leq n$. Then $|X|=|\Bbb R^n|$.</p>
<p>Indeed, such an $A$ obviously contains an open segment $I$ (e.g. any radius of any open ball contained in $A$) and the chain of inclusions
$$
I\subset A\subset X\subset\Bbb R^n
$$
together with the fact that $|I|=|\Bbb R|=|\Bbb R^n|$ shows that $|X|=|\Bbb R^n|$.</p>
<p>It is clear that both the ball $B$ and the torus $T$ (no matter whether you take the solid shapes or just their surfaces) satisfy the property required for $X$, thus $|B|=|\Bbb R^3|=|T|$.</p>
|
2,888,464 | <p>Let $S$ be the set of polynomials $f(x)$ with integer coefficients satisfying </p>
<p>$f(x) \equiv 1$ mod $(x-1)$</p>
<p>$f(x) \equiv 0$ mod $(x-3)$</p>
<p>Which of the following statements are true?</p>
<p>a) $S$ is empty .</p>
<p>b) $S$ is a singleton.</p>
<p>c)$S$ is a finite non-empty set.</p>
<p>d) $S$ is countably infinite.</p>
<p>My try: I took $x = 5$; then $f(5) \equiv 1$ mod $4$ and $f(5) \equiv 0$ mod $2$, which is impossible, so $S$ is empty.</p>
<p>Am I correct? Is there any formal way to solve this?</p>
| mfl | 148,513 | <p>We have that</p>
<p>$$f(x)\equiv 1\bmod{(x-1)}\iff \exists a(x)|f(x)=(x-1)a(x)+1.$$</p>
<p>$$f(x)\equiv 0\bmod{(x-3)}\iff \exists b(x)|f(x)=(x-3)b(x).$$</p>
<p>So, if $f$ exists then it must be </p>
<p>$$(x-1)a(x)+1=(x-3)b(x).$$ We have for $x=1:$</p>
<p>$$1=-2b(1)\implies b(1)=-\dfrac 12,$$ which is not possible, since $b(x)$ is a polynomial with integer coefficients and $1\in\mathbb{Z}$, so $b(1)$ must be an integer. (Note that in a similar way we have $a(3)=-\frac12$.) Thus we conclude that there is no polynomial with integer coefficients satisfying the given conditions. </p>
<p>As it was said in comments your proof is essentially correct. If $f$ exists then it must be </p>
<p>$$f(5)=4a(5)+1\equiv 1\bmod{4}$$ and $$f(5)=2b(5)\equiv 0\bmod{2}.$$ Since both congruences can't hold together you can conclude that the polynomial doesn't exist.</p>
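<p>For illustration, a hedged brute-force check (the search bounds are arbitrary, mine): over $\mathbb{Z}[x]$ the two congruences amount to $f(1)=1$ and $f(3)=0$, and since $(3-1)\mid f(3)-f(1)$ for integer polynomials, no such polynomial can exist. A small search confirms that nothing turns up:</p>

```python
from itertools import product

def poly_eval(coeffs, x):
    # coeffs[i] is the coefficient of x**i
    return sum(c * x ** i for i, c in enumerate(coeffs))

# f ≡ 1 mod (x-1) forces f(1) = 1, and f ≡ 0 mod (x-3) forces f(3) = 0;
# search all integer polynomials of degree <= 3 with coefficients in [-5, 5]
hits = [c for c in product(range(-5, 6), repeat=4)
        if poly_eval(c, 1) == 1 and poly_eval(c, 3) == 0]
assert hits == []   # none exist, matching the divisibility argument
```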
|
3,200,431 | <p>We can say that any field <span class="math-container">$\mathbb{K}$</span> is a <span class="math-container">$1$</span>-dimensional vector space over itself: <span class="math-container">$\mathbb{K}_{\mathbb{K}}$</span>. So any vector of another finite-dimensional vector space <span class="math-container">$V_{\mathbb{K}}$</span> can, after choosing some basis, be represented as an element of the isomorphic space <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n} = \prod_{i=1}^n\mathbb{K}_i$</span>, where <span class="math-container">$n$</span> is the dimension of <span class="math-container">$V_{\mathbb{K}}$</span>. But we can define operations on elements of <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n}$</span> as on the direct product of groups: <span class="math-container">$(x_1,\dots,x_n) + (x'_1, \dots, x'_n) = (x_1 + x'_1, \dots ,x_n + x'_n)$</span>, and similarly for the second field operation: <span class="math-container">$(x_1,\dots,x_n) \times (x'_1, \dots, x'_n) = (x_1 \times x'_1, \dots ,x_n \times x'_n)$</span>.</p>
<p>But usually we do this with only one field operation, <span class="math-container">$+$</span>. Why? </p>
| James Silipo | 365,160 | <p>The category of fields has no products. If it did, the projection onto one factor would be a field homomorphism, hence injective, which is clearly false! </p>
|
2,572,564 | <p>So I have a straight line in the classic <span class="math-container">$y = mx + b$</span> form, and I'm just trying to translate the formula for the line a certain distance along its normal.</p>
<p>For example, with <a href="https://i.stack.imgur.com/r8vct.jpg" rel="nofollow noreferrer">this graph</a>, how would I translate the red line to the blue one if (for example) <span class="math-container">$x$</span> was <span class="math-container">$4$</span>?</p>
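<p>For reference, a hedged sketch of the translation itself (plain Python; the helper name is mine): shifting <span class="math-container">$y = mx + b$</span> a distance <span class="math-container">$d$</span> along its unit normal leaves the slope unchanged and moves the intercept by <span class="math-container">$d\sqrt{1+m^2}$</span>:</p>

```python
import math

def translate_line(m, b, d):
    """Shift y = m*x + b a distance d along its normal (toward +y for d > 0)."""
    return m, b + d * math.sqrt(1 + m * m)

# Example: shift y = 2x + 1 by distance 4 along the normal
m2, b2 = translate_line(2, 1, 4)

# Check: distance between parallel lines is |b2 - b1| / sqrt(1 + m^2)
assert abs(abs(b2 - 1) / math.sqrt(1 + 2 * 2) - 4) < 1e-12
```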
| Michael Rozenberg | 190,319 | <p>$$\lim_{x\rightarrow0}\frac{\ln\left(1+\frac{x^4}{3}\right)}{\sin^6x}=\lim_{x\rightarrow0}\left(\left(\frac{x}{\sin{x}}\right)^6\cdot\ln\left(1+\frac{x^4}{3}\right)^{\frac{3}{x^4}}\cdot\frac{1}{3x^2}\right)=+\infty$$</p>
|
1,368,447 | <p><img src="https://i.stack.imgur.com/TL80w.jpg" alt="enter image description here"></p>
<p>So I tried the good old Calculus 1 approach and turned this into an optimization problem. The equations got REALLY hairy, but it was okay since this was the graphing calculator section of the exam. I called the longer part of the horizontal diagonal $x_2$, the shorter part of the horizontal diagonal $x_1$, and half of the vertical diagonal $v$. </p>
<p>After getting $x_1$ in terms of $x_2$ and some tedious algebra, I got that $$2y=\sqrt{4 - \sqrt{x_2^2 - 21}} + \sqrt{25 - x_2^2}.$$ I then multiplied both sides of the equation by the total length of the horizontal diagonal, and graphed the right-hand side of the equation on my calculator. The logic was to find out where $(x_1 + x_2)2y$ would reach a maximum point, since the product of the diagonals of a kite equal twice the area of the kite. </p>
<p>After graphing that disaster I got a somewhat reasonable answer. I got the horizontal diagonal to be equal to roughly $4.5829$, and the vertical diagonal to be equal to $4$. However, I don't have an answer key, so I don't know if I am correct. Any feedback would be appreciated!</p>
| Hagen von Eitzen | 39,174 | <p>The upper (and/or) lower half of the kite is a triangle. Such a triangle with two given side lengths has maximal area when it is a right triangle. Now you can find the exact lengths of the diagonals with Pythagoras & Cie.</p>
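<p>A hedged numerical illustration of the fact used here, that a triangle with two fixed side lengths has maximal area at the right angle (the side lengths below are illustrative, not taken from the problem):</p>

```python
import math

def triangle_area(a, b, theta):
    # Area of a triangle with sides a, b and included angle theta
    return 0.5 * a * b * math.sin(theta)

a, b = 2.0, 5.0   # two fixed side lengths (illustrative only)
angles = [i * math.pi / 1000 for i in range(1, 1000)]
best = max(angles, key=lambda t: triangle_area(a, b, t))

# The maximizer is (numerically) the right angle
assert abs(best - math.pi / 2) < 0.01
```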
|
1,656,294 | <p>Apart from the Dirichlet function (which equals $1$ for rational $x$ and $0$ for irrational $x$), and similar constructs (equal to some constant $a$ for rational $x$ and $b$ for irrational $x$, where $a \neq b$), what are the other functions which are defined on all real numbers and discontinuous at all points?</p>
<p>Are there infinitely many such functions? </p>
| Win Vineeth | 311,216 | <p>Let me try it this way: <br> In $(0, \frac{\pi}{4})$, $\tan x < 1$ and $\cot x > 1$. <br> I make a statement - <em>anything less than one raised to a positive power is less than one!</em> </p>
<p>Another statement - <em>anything greater than one raised to a positive power is greater than one.</em></p>
<p>So I say $\cot(x)^{\cot(x)} > \tan(x)^{\tan(x)}$.</p>
<p>Well, from your statements: <em>since $x^x$ is decreasing</em> doesn't mean $\cot(x)^{\cot(x)}$ should decrease, right? </p>
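<p>A hedged numerical spot check of the claimed inequality on $(0, \pi/4)$ (plain Python; the sample grid is arbitrary):</p>

```python
import math

# Sample points strictly inside (0, pi/4)
for i in range(1, 50):
    x = i * (math.pi / 4) / 50
    t, c = math.tan(x), 1 / math.tan(x)
    assert c ** c > t ** t   # cot(x)^cot(x) > tan(x)^tan(x)
```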
|
1,397,016 | <p>We put
<span class="math-container">$\|f\|_{(N, \alpha)}:= \sup_{x\in \mathbb R} (1+|x|)^{N} | D^{\alpha}f(x)|; $</span> and the Schwartz space,</p>
<p><span class="math-container">$S(\mathbb R): = \{f\in C^{\infty}(\mathbb R): \|f\|_{(N, \alpha)}< \infty , \forall \alpha, N \in \mathbb N \cup \{0\} \}.$</span></p>
<p>It is well-known that
(1) <span class="math-container">$\mathcal{S}(\mathbb R)$</span> is a Fréchet space with the topology defined by the norms <span class="math-container">$\|\cdot\|_{(N, \alpha)}$</span> (2) <span class="math-container">$\mathcal{S}(\mathbb R) \ast \mathcal{S} (\mathbb R) \subset \mathcal{S}(\mathbb R).$</span>
[For the proof you may see Folland , Real Analysis chapter 8]</p>
<p>And we may say <span class="math-container">$\mathcal{S}(\mathbb R)$</span> is a Fréchet algebra with respect to convolution.</p>
<p><strong>My naive questions are:</strong></p>
<blockquote>
<p>(1) What is a notion of approximate identity in the Fréchet algebra <span class="math-container">$\mathcal{S}(\mathbb R)$</span>? How to define it? (I am familiar with the notion of approximate identity in <span class="math-container">$(L^1, \ast)$</span>; and in fact in <span class="math-container">$L^{1},$</span> the approximate identity is uniformly bounded; for instance we may take <span class="math-container">$\phi_{t}(x)=t^{-1}\phi(t^{-1 }x), t>0$</span> for <span class="math-container">$\phi \in L^{1}$</span>)</p>
<p>(2) Does there exists an approximate identity in <span class="math-container">$\mathcal{S}(\mathbb R)$</span>? Is it uniformly bounded?</p>
</blockquote>
| David C. Ullrich | 248,223 | <p>There's certainly an approximate identity in $\mathcal S$.</p>
<p>For $\phi\in\mathcal S$ and $t>0$ define $\phi_t(x)=t^{-1}\phi(x/t)$. Then if $\int\phi=1$ it follows that $\phi_t*f\to f$ in $\mathcal S$ for every $f\in\mathcal S$.</p>
<p>Say $\psi=\hat\phi$. It's easiest to verify that you have an approximate identity if you choose $\phi$ so that $\psi=1$ in a neighborhood of the origin. Note that $\hat\phi_t=\psi^t$, where $\psi^t(x)=\psi(tx)$. Since the Fourier transform is an isomorphism you only need to check that $$(1-\psi^t)\,\hat f\to0$$in $\mathcal S$, which is not too hard.</p>
<p>"Not too hard"... Ok, let's assume that $0\le\psi\le1$ and $\psi(x)=1$ for all $|x|\le 1$. Then $1-\psi^t$ vanishes on $[-1/t,1/t]$. So $$\sup_x|x|^N|(1-\psi^t(x))f(x)|\le\sup_{|x|>1/t}|x|^N|f(x)|\le t\sup_x|x|^{N+1}|f(x)|\to0.$$That takes care of $||.||_{N,0}$. I'll do $\alpha=1$; the rest is the same but the notation gets complicated. First,$$D((1-\psi^t)f)(x)=-\frac1t\psi'\left(\frac xt\right)f(x)+(1-\psi^t(x))f'(x)=\chi_t(x),$$say. Now $$\sup_x|x|^N|\chi_t(x)|=\sup_{|x|>1/t}|x|^N|\chi_t(x)|\le t^2\sup_x|x|^{N+2}|\chi_t(x)|\le ct.$$</p>
<hr>
<p>If by bounded you mean bounded as a subset of $\mathcal S$ then no it's not bounded.</p>
|
1,397,016 | <p>We put
<span class="math-container">$\|f\|_{(N, \alpha)}:= \sup_{x\in \mathbb R} (1+|x|)^{N} | D^{\alpha}f(x)|; $</span> and the Schwartz space,</p>
<p><span class="math-container">$S(\mathbb R): = \{f\in C^{\infty}(\mathbb R): \|f\|_{(N, \alpha)}< \infty , \forall \alpha, N \in \mathbb N \cup \{0\} \}.$</span></p>
<p>It is well-known that
(1) <span class="math-container">$\mathcal{S}(\mathbb R)$</span> is a Fréchet space with the topology defined by the norms <span class="math-container">$\|\cdot\|_{(N, \alpha)}$</span> (2) <span class="math-container">$\mathcal{S}(\mathbb R) \ast \mathcal{S} (\mathbb R) \subset \mathcal{S}(\mathbb R).$</span>
[For the proof you may see Folland , Real Analysis chapter 8]</p>
<p>And we may say <span class="math-container">$\mathcal{S}(\mathbb R)$</span> is a Fréchet algebra with respect to convolution.</p>
<p><strong>My naive questions are:</strong></p>
<blockquote>
<p>(1) What is a notion of approximate identity in the Fréchet algebra <span class="math-container">$\mathcal{S}(\mathbb R)$</span>? How to define it? (I am familiar with the notion of approximate identity in <span class="math-container">$(L^1, \ast)$</span>; and in fact in <span class="math-container">$L^{1},$</span> the approximate identity is uniformly bounded; for instance we may take <span class="math-container">$\phi_{t}(x)=t^{-1}\phi(t^{-1 }x), t>0$</span> for <span class="math-container">$\phi \in L^{1}$</span>)</p>
<p>(2) Does there exists an approximate identity in <span class="math-container">$\mathcal{S}(\mathbb R)$</span>? Is it uniformly bounded?</p>
</blockquote>
| Jochen | 38,982 | <p>Concerning <em>bounded approximate identities</em>: $\mathcal S$ is a Montel space, that is, bounded sets are relatively compact ($\mathcal S$ is even nuclear). If it had a bounded approximate identity, compactness would give a limit, which would then be an identity element, which certainly does not exist.</p>
|
3,340,723 | <p>How can we find the total number of ways in which we can divide <span class="math-container">$n$</span> elements into two subsets such that neither of them is empty and the union of both sets is equal to the whole set?</p>
<p>Eg. If <span class="math-container">$S=\{1,2,3\}$</span>, the answer can be <span class="math-container">$A=\{1,2\}$</span>, <span class="math-container">$B=\{3\}$</span> or <span class="math-container">$A = \{1,3\}$</span> and <span class="math-container">$B=\{2\}$</span> or <span class="math-container">$A=\{2,3\}$</span> and <span class="math-container">$B=\{1\}$</span>.</p>
| ab123 | 454,871 | <p>Choosing elements of one group determines the other group as well. So, the total number of ways needs summing over all possible ways of "choosing" elements of one group:</p>
<p>For any general <span class="math-container">$n$</span>:
<span class="math-container">$$\dfrac{{n \choose 1} + {n \choose 2} + \dots + {n \choose n - 1}}{2} = \dfrac{\sum\limits_{i = 0}^n {n \choose i} - {n \choose 0} - {n \choose n}}{2} = \dfrac{2^n - 2}{2} = 2^{n-1} - 1$$</span></p>
<p>We divided by <span class="math-container">$2$</span> because forming one group simultaneously generates the other, so we counted everything twice (e.g. think of how you counted a way of choosing <span class="math-container">$1$</span> element out of <span class="math-container">$4$</span> to make group <span class="math-container">$1$</span>, and counted it again when considering a way of choosing <span class="math-container">$3$</span> elements to make group <span class="math-container">$2$</span>).</p>
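<p>A hedged brute-force confirmation of the count <span class="math-container">$2^{n-1}-1$</span> (plain Python; the function name is mine), enumerating unordered pairs <span class="math-container">$\{A, B\}$</span> that partition the set:</p>

```python
from itertools import combinations

def two_block_partitions(n):
    # Enumerate unordered pairs {A, B} with A ∪ B = S, A ∩ B = ∅, both non-empty
    s = frozenset(range(n))
    seen = set()
    for r in range(1, n):
        for a in combinations(s, r):
            a = frozenset(a)
            seen.add(frozenset({a, s - a}))
    return len(seen)

for n in range(2, 9):
    assert two_block_partitions(n) == 2 ** (n - 1) - 1
```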
|
3,054,558 | <p>I've just learned a theorem which states that: a metric space has the structure of a topological space in which the open sets are unions of balls. </p>
<p>But the theorem only told me there "exists" one topology with respect to the metric. When we refer to the topology on a metric space <span class="math-container">$S$</span>, do we mean the topology generated by open balls? </p>
| Perturbative | 266,135 | <p>Yes, a metric space <span class="math-container">$(S, d)$</span> induces a topology <span class="math-container">$\mathcal{T}$</span> which is generated by the basis <span class="math-container">$$\mathcal{B} = \{B(x, r) \ | \ x \in S \ \text{ and } \ r > 0\}$$</span></p>
<p>When authors refer to the topology on this metric space <span class="math-container">$(S, d)$</span>, they usually mean the topology <span class="math-container">$\mathcal{T}$</span> above, which you can think of as the topology generated by open balls.</p>
|
10,978 | <p>So, in <a href="https://math.meta.stackexchange.com/a/10083/43351">this post</a>, I suggested to replace <a href="https://math.stackexchange.com/questions/tagged/rootsystem" class="post-tag" title="show questions tagged 'rootsystem'" rel="tag">rootsystem</a> with <a href="https://math.stackexchange.com/questions/tagged/root-system" class="post-tag" title="show questions tagged 'root-system'" rel="tag">root-system</a>. A user suggested <a href="https://math.stackexchange.com/questions/tagged/root-systems" class="post-tag" title="show questions tagged 'root-systems'" rel="tag">root-systems</a> instead, which is indeed better.</p>
<p>So having 7 upvotes and no downvotes, I decided to implement the change. But then I ran into the system:</p>
<p><img src="https://i.stack.imgur.com/D3pO9.png" alt="can't create tag"></p>
<p>How could I ever go about implementing this change if the system prohibits its creation in the first place? Can it only be done by moderators?</p>
<p>(As an aside, I think the red text should say "Your <strong>edit</strong> couldn't be submitted." instead of "question".)</p>
| Community | -1 | <p>Ask a moderator to perform this change by making a meta post. Moderators can also directly merge the two tags to rename the old tag, which is easier than retagging all old questions by hand.</p>
|
4,287,450 | <p>I tried solving this by finding $dy/dx$ and setting the denominator to zero, yielding an equation whose solution can only be approximated. But that approximate value is not right, as seen on Desmos.</p>
| Claude Leibovici | 82,404 | <p>As @Hosam Hajjir wrote, the slope is infinite when
<span class="math-container">$$\cos (\theta) - \theta \sin (\theta)=0$$</span> and the solutions are closer and closer to <span class="math-container">$n\pi$</span>.</p>
<p>Expanding as a series around <span class="math-container">$\theta=n \pi$</span>, we have
<span class="math-container">$$\cos (\theta) - \theta \sin (\theta)=\sum_{k=0}^\infty (-1)^n \frac{(k+1) \cos \left(\frac{\pi k}{2}\right)-\pi n \sin \left(\frac{\pi
k}{2}\right)}{\Gamma (k+1)}(\theta-n\pi)^k$$</span> Now, using series reversion, for <span class="math-container">$n > 0$</span>
<span class="math-container">$$\theta_n=t+\frac{1}{t}-\frac{4}{3 t^3}+\frac{53}{15 t^5}-\frac{1226}{105
t^7}+O\left(\frac{1}{t^9}\right) \quad \text{with} \quad t=n \pi$$</span></p>
<p>Trying
<span class="math-container">$$\left(
\begin{array}{ccc}
n & \text{estimate} & \text{solution} \\
1 & 3.424580679 & 3.425618459 \\
2 & 6.437295608 & 6.437298179 \\
3 & 9.529334335 & 9.529334405 \\
4 & 12.64528722 & 12.64528722
\end{array}
\right)$$</span></p>
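<p>A hedged numerical check of the table above (plain Python bisection, no external libraries; the tolerances are mine):</p>

```python
import math

def f(t):
    # Slope of the curve is infinite where cos(t) - t*sin(t) = 0
    return math.cos(t) - t * math.sin(t)

def bisect(lo, hi, tol=1e-12):
    # Plain bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def estimate(n):
    # Truncated reversed series from the answer, with t = n*pi
    t = n * math.pi
    return t + 1/t - 4/(3*t**3) + 53/(15*t**5) - 1226/(105*t**7)

for n in range(1, 5):
    root = bisect(n * math.pi + 1e-9, n * math.pi + 1.0)
    assert abs(root - estimate(n)) < 2e-3  # agreement improves rapidly with n
```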
|
3,649,125 | <p>Bridge is a game of four players in which each player is dealt 13 cards from a standard 52 card deck. Bridge players (such as myself) are interested in the number of possible deals, where each player is distinct. This can be counted by</p>
<p><span class="math-container">$$\binom{52}{13}\binom{39}{13}\binom{26}{13}\binom{13}{13}=5.364\times10^{28}$$</span></p>
<p>However, this number is misleadingly large, since bridge players usually only care about the face cards (jack, queen, king, and ace) in each suit. We often consider the cards with denominations 2-10 as indistinguishable.<b> Supposing we distinguish only face cards, what is the number of possible deals? </b></p>
<p><a href="http://www.rpbridge.net/7z74.htm" rel="nofollow noreferrer">This source</a> puts the figure at <span class="math-container">$8.110\times10^{15}$</span> based on a computer program. I am curious if there is a more elegant mathematical solution.</p>
| Mike Earnest | 177,399 | <p>I will answer a much more general problem:</p>
<blockquote>
<p>Suppose you have a collection of playing cards, each falling into one of <span class="math-container">$k$</span> types, so that cards of the same type are indistinguishable. For each <span class="math-container">$i\in \{1,\dots,k\}$</span>, there are <span class="math-container">$n_i$</span> cards of type <span class="math-container">$i$</span>. How many ways are there to deal these cards to <span class="math-container">$p$</span> players so that for each <span class="math-container">$j\in \{1,\dots,p\}$</span>, the <span class="math-container">$j^{th}$</span> player receives <span class="math-container">$m_j$</span> cards?</p>
</blockquote>
<p>This can be solved using generating functions. Specifically, the enumerator for dealing <span class="math-container">$n_i$</span> indistinguishable cards to <span class="math-container">$p$</span> players is
<span class="math-container">$$
\sum_{a_1+\dots+a_p=n_i}x_1^{a_1}\dots x_p^{a_p}=h_{n_i}(x_1,x_2,\dots,x_p),
$$</span>
where <span class="math-container">$h_{n_i}$</span> is the <span class="math-container">$(n_i)^{th}$</span> homogeneous symmetric polynomial in <span class="math-container">$p$</span> variables. Each summand corresponds to a particular way to deal the <span class="math-container">$n_i$</span> indistinguishable cards to the <span class="math-container">$p$</span> players; the summand <span class="math-container">$x_1^{a_1}\dots x_p^{a_p}$</span> corresponds to giving <span class="math-container">$a_j$</span> cards of type <span class="math-container">$i$</span> to the <span class="math-container">$j^{th}$</span> player, for <span class="math-container">$j\in \{1,\dots,p\}$</span>. </p>
<p>Furthermore, the action of dealing out all of the cards is simply accomplished by multiplying the enumerators for each type of card together. Each summand in this product with powers <span class="math-container">$x_1^{b_1}\cdots x_p^{b_p}$</span> comes with a coefficient equal to the number of ways to deal the cards so that player <span class="math-container">$j$</span> receives <span class="math-container">$b_j$</span> cards for <span class="math-container">$j\in \{1,\dots,p\}$</span>. Therefore, </p>
<blockquote>
<p>The number of dealings is equal to the coefficient of <span class="math-container">$x_1^{m_1}\cdots x_p^{m_p}$</span> in <span class="math-container">$\prod_{i=1}^k h_{n_i}(x_1,\dots,x_p)$</span>.</p>
</blockquote>
<p>In your case, you want the coefficient of <span class="math-container">$x_1^{13}x_2^{13}x_3^{13}x_4^{13}$</span> in <span class="math-container">$h_{36}\cdot h_1^{16}$</span>. If you considered the lower ranks within a suit to be indistinguishable, yet distinct from other suits (as the author of the linked webpage did), then you would want the coefficient of the same monomial in <span class="math-container">$h_{9}^4\cdot h_1^{16}$</span>.</p>
<p>We can leverage the knowledge of symmetric functions to attack this computationally. Specifically, let <span class="math-container">$\lambda$</span> be the decreasing sorted list of numbers of cards in each type, <span class="math-container">$(n_1,n_2,\dots,n_k)$</span>, and let <span class="math-container">$\mu$</span> be the sorted list <span class="math-container">$(m_1,m_2,\dots,m_p)$</span>. It can be shown<span class="math-container">${}^1$</span> that the coefficient of <span class="math-container">$x_1^{m_1}\cdots x_p^{m_p}$</span> in <span class="math-container">$\prod_{i=1}^k h_{n_i}(x_1,\dots,x_p)$</span> is equal to <span class="math-container">$N_{\lambda,\mu}$</span>, defined to be the number of <span class="math-container">$k\times p$</span> matrices with entries in the nonnegative integers whose vector of row sums is equal to <span class="math-container">$\lambda$</span> and whose vector of column sums is equal to <span class="math-container">$\mu$</span>. </p>
<p>I am not sure what the best way to compute <span class="math-container">$N_{\lambda,\mu}$</span> is. This procedure is built into the symmetric functions package of Sage. The following program computes the number of deals when the ranks <span class="math-container">$2$</span> to <span class="math-container">$9$</span> are indistinguishable <em>within a suit</em> but different suits are distinct. It gives the count of <span class="math-container">$8110864720503360$</span>, taking about <span class="math-container">$8$</span> minutes to run on the online interpreter CoCalc. This agrees with your source. Furthermore, the program is easily configurable to work for any number of suits, players, and number of ranks considered indistinguishable. </p>
<pre><code>from time import time
Sym = SymmetricFunctions(QQ)
m = Sym.monomial()
h = Sym.homogeneous()
suits = 4
low_ranks = 9
high_ranks = 4
players = 4
lamb = [low_ranks]*suits + [1]*(high_ranks*suits)
targ = [(low_ranks + high_ranks) * suits // players] * (players)
t0 = time()
print('Number of hands: ', m(h(lamb)).coefficient(targ) )
print('Seconds to compute: ', time() - t0 )
</code></pre>
<p><span class="math-container">${}^1$</span> Stanley, <em>Enumerative Combinatorics, volume 2</em>, Chapter 7, Section 5.</p>
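<p>For tiny margins, <span class="math-container">$N_{\lambda,\mu}$</span> can also be computed directly with a memoized row-by-row recursion. This is only a hedged pure-Python sketch (the function names are mine) and is not feasible for the full bridge problem:</p>

```python
from functools import lru_cache

def count_matrices(row_sums, col_sums):
    """Count nonnegative integer matrices with given row and column sums."""
    if sum(row_sums) != sum(col_sums):
        return 0

    def compositions(s, caps):
        # All ways to write s as an ordered sum with entry i at most caps[i]
        if len(caps) == 1:
            if s <= caps[0]:
                yield (s,)
            return
        for x in range(min(s, caps[0]) + 1):
            for rest in compositions(s - x, caps[1:]):
                yield (x,) + rest

    @lru_cache(maxsize=None)
    def rec(i, remaining):
        # remaining holds the column sums still to be filled by rows i..end
        if i == len(row_sums):
            return 1 if all(r == 0 for r in remaining) else 0
        return sum(rec(i + 1, tuple(r - x for r, x in zip(remaining, row)))
                   for row in compositions(row_sums[i], remaining))

    return rec(0, tuple(col_sums))

# Tiny sanity checks
assert count_matrices((1, 1), (1, 1)) == 2   # the two permutation matrices
assert count_matrices((2, 2), (2, 2)) == 3
```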
|
3,888,805 | <p>"Six men and three women are standing in a supermarket queue. Three of the people in the queue are chosen to take part in a customer survey. How many different choices are possible if at least one woman must be included?"</p>
<p>I went about solving this question by considering 3 cases, first with a single woman and then with 2 and finally with 3.</p>
<p><span class="math-container">$$
{3 \choose 1} \ \times \ {6 \choose 2} \ + \ {3 \choose 2} \ \times \ {6 \choose 1} \ + \ {3 \choose 3}
$$</span></p>
<p>Which simplifies to <span class="math-container">$ \ 45 \ + \ 18 \ + \ 1$</span> <strong>leading to <span class="math-container">$64$</span> different choices</strong>.</p>
<p>This approach is in fact correct. However, another approach came to my mind as well.</p>
<p>If at least 1 woman must be included in the group then we can simply choose 1 of the 3 women, and fill the remaining 2 slots from the remaining 8 'people'.</p>
<p><span class="math-container">$$
{3 \choose 1} \ \times \ {8 \choose 2}
$$</span></p>
<p><strong>This, however, gives the answer <span class="math-container">$84$</span>.</strong> This answer is most certainly wrong but I am unable to explain why the method is incorrect. If someone could explain why this leads to the wrong answer that would be very nice.</p>
| Derek Luna | 567,882 | <p>Because you can choose Becky with <span class="math-container">${3 \choose 1}$</span> and Kate with <span class="math-container">${8 \choose 2}$</span>, and in another way choose Kate with <span class="math-container">${3 \choose 1}$</span> and Becky with <span class="math-container">${8 \choose 2}$</span> all other choices being the same. You repeat cases depending on which order the women are chosen if there is more than <span class="math-container">$1$</span>.</p>
|
3,888,805 | <p>"Six men and three women are standing in a supermarket queue. Three of the people in the queue are chosen to take part in a customer survey. How many different choices are possible if at least one woman must be included?"</p>
<p>I went about solving this question by considering 3 cases, first with a single woman and then with 2 and finally with 3.</p>
<p><span class="math-container">$$
{3 \choose 1} \ \times \ {6 \choose 2} \ + \ {3 \choose 2} \ \times \ {6 \choose 1} \ + \ {3 \choose 3}
$$</span></p>
<p>Which simplifies to <span class="math-container">$ \ 45 \ + \ 18 \ + \ 1$</span> <strong>leading to <span class="math-container">$64$</span> different choices</strong>.</p>
<p>This approach is in fact correct. However, another approach came to my mind as well.</p>
<p>If at least 1 woman must be included in the group then we can simply choose 1 from the 3 women, and fill the remaining 2 slots from the remnant 8 'people'.</p>
<p><span class="math-container">$$
{3 \choose 1} \ \times \ {8 \choose 2}
$$</span></p>
<p><strong>This, however, gives the answer <span class="math-container">$84$</span>.</strong> This answer is most certainly wrong but I am unable to explain why the method is incorrect. If someone could explain why this leads to the wrong answer that would be very nice.</p>
| Toby Mak | 285,313 | <p>There is a much simpler approach. "At least one woman" is the complement of "zero women", i.e. "all men", so subtracting that case from the total gives us:</p>
<p><span class="math-container">$${9 \choose 3} - {6 \choose 3} = 64$$</span></p>
<hr />
<p>If you are interested in another subtractive approach, following from N. F. Taussig's answer, you can again subtract from <span class="math-container">${3 \choose 1} \cdot {8 \choose 2} = 84$</span>.</p>
<p>The cases where there are two women and one man (Abigail, Beatrice, Charles) are counted <span class="math-container">$2$</span> times, when they should only be counted <span class="math-container">$1$</span> time. Therefore, you need to subtract the number of combinations of this type <span class="math-container">$1$</span> time. This is just <span class="math-container">${3 \choose 2} \cdot 6 = 18$</span>.</p>
<p>Similarly, the cases where there are three women are counted <span class="math-container">$3$</span> times, when they should only be counted <span class="math-container">$1$</span> time, as there is only one <em>combination</em> possible with all <span class="math-container">$3$</span> women. Therefore, the number of combinations should be subtracted by <span class="math-container">$2$</span>.</p>
<p>Hence <span class="math-container">$84 - 18 - 2 = 64$</span>, which is the correct answer.</p>
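<p>Both the correct count and the overcount can be confirmed by brute force (a hedged sketch; the labels are stand-ins for the nine people):</p>

```python
from itertools import combinations

people = ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'W1', 'W2', 'W3']
women = {'W1', 'W2', 'W3'}

# Committees of 3 with at least one woman: C(9,3) - C(6,3) = 64
valid = [c for c in combinations(people, 3) if women & set(c)]
assert len(valid) == 64

# The flawed count 3*C(8,2) = 84 tallies (chosen woman, remaining pair) pairs,
# so committees with 2 or 3 women are counted more than once
overcount = sum(1 for w in women
                for _ in combinations([p for p in people if p != w], 2))
assert overcount == 84
```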
|
3,556,490 | <p>Can someone explain to me how to solve the equation <span class="math-container">$\frac{x-n}{x+n} = e^{-x}$</span>, where <span class="math-container">$n$</span> is a non-zero natural number? Unfortunately, I do not even have an idea how to start. Any hint is much appreciated. </p>
<p>Many thanks in advance. </p>
| egreg | 62,967 | <p>You can observe that any positive solution must satisfy <span class="math-container">$x>n$</span>, because for <span class="math-container">$0\le x\le n$</span> the left-hand side is nonpositive while the right-hand side is positive. So, restricting to <span class="math-container">$x>n$</span>, we can consider the equivalent equation
<span class="math-container">$$
\log(x+n)-\log(x-n)=x
$$</span>
so we can study the function <span class="math-container">$f(x)=\log(x+n)-\log(x-n)-x$</span>. We can note that
<span class="math-container">$$
\lim_{x\to n}f(x)=\infty,\qquad \lim_{x\to\infty}f(x)=-\infty
$$</span>
Also
<span class="math-container">$$
f'(x)=\frac{1}{x+n}-\frac{1}{x-n}-1=\frac{n^2-2n-x^2}{x^2-n^2}
$$</span>
Since <span class="math-container">$x^2>n^2$</span>, we obtain that the derivative is negative. Hence the equation has a single solution with <span class="math-container">$x>n$</span>.</p>
<p>Note that <span class="math-container">$f(n+1)=\log(2n+1)-(n+1)<0$</span>, so the single solution is in <span class="math-container">$(n,n+1)$</span>.</p>
<p>Indeed, if <span class="math-container">$g(t)=\log(2t+1)-(t+1)$</span>, we have
<span class="math-container">$$
g'(t)=\dfrac{2}{2t+1}-1=\frac{1-2t}{1+2t}
$$</span>
which is negative for <span class="math-container">$t>1/2$</span> and <span class="math-container">$g(1/2)=\log2-\frac{3}{2}<0$</span>.</p>
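<p>A hedged numerical confirmation that the root lies in $(n, n+1)$ and solves the original equation (plain Python bisection; the iteration count and tolerances are arbitrary):</p>

```python
import math

def f(x, n):
    # f(x) = log(x+n) - log(x-n) - x, defined for x > n
    return math.log(x + n) - math.log(x - n) - x

for n in range(1, 6):
    lo, hi = n + 1e-9, n + 1.0
    assert f(lo, n) > 0 and f(hi, n) < 0   # sign change on (n, n+1)
    for _ in range(60):                    # bisection
        mid = (lo + hi) / 2
        if f(mid, n) > 0:
            lo = mid
        else:
            hi = mid
    root = (lo + hi) / 2
    # The root solves the original equation (x-n)/(x+n) = e^{-x}
    assert abs((root - n) / (root + n) - math.exp(-root)) < 1e-9
```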
|
115,477 | <p>Let us say that a finite set $A$ in the plane is $1$-separated if:</p>
<p><strong>1)</strong> it has an even number of points;</p>
<p><strong>2)</strong> no open ball of diameter $1$ contains more than $|A|/2$ points.</p>
<p>For a $1$-separated set $A$ define $G(A)$ to be a graph where two points $x,y$ in $A$ are joined by an edge iff the distance between them is at least $1$.</p>
<blockquote>
<p><em>Question</em>: can one find a finite set of graphs $G _ 1,\dots,G _ n$ such
that any $1$-separated set $A $ can be
partitioned into non-empty
$1$-separated sets $A _ 1,\dots,A _ k$
such that $G(A _ i)$ is isomorphic to
one of the $G _ j$'s?</p>
</blockquote>
<p><em>Comment</em>: The definition makes sense on the real line (the ball of diameter $1$ is replaced by an interval of length $1$). In that case we can take $n=1$ and $G_1$ to be a graph on two vertices joined by an edge (that is, $G(A)$ contains a matching). </p>
| Alfred | 29,755 | <p>I think Domotorp is correct. Take a regular $(2n-1)$-gon such that its longest diagonal is 1, along with its center. Then $A$ cannot be partitioned, and $G(A)=C_{2n-1}$.</p>
|
45,800 | <p>Let's say I want to calculate the following scalar-by-matrix derivative</p>
<p>$$\frac{\partial}{\partial A} \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right],$$</p>
<p>with $\vec X$ and $A$ being an $n \times 1$ and an $n \times m$ matrix, respectively.
Is there a way in Mathematica to get the result</p>
<p>$$2 \vec X (\vec X^T A)$$</p>
<p>without explicitly defining (for instance)</p>
<pre><code>n=3
m=2
A=Array[a,{n,m}]
X=Array[x,{n,1}]
</code></pre>
<p>and calculating</p>
<pre><code>D[Tr[Transpose[Transpose[X].A].Transpose[X].A],{A}]
</code></pre>
<p>? The problem with this approach is that the Mathematica result cannot be easily cast back into a human-readable form like</p>
<p>$$2 \vec X (\vec X^T A).$$</p>
| dtn | 67,019 | <p>I tried the <code>NCAlgebra</code> package:</p>
<pre><code><< NC`;
<< NCAlgebra`;
SNC[X, A]
NCGrad[tr[tp[tp[X] ** A] ** (tp[X] ** A)], A]
</code></pre>
<p>And got this result:</p>
<pre><code>2 A^T ** X ** X^T
</code></pre>
<p>This is a result similar to that which can be obtained by hand.</p>
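<p>A hedged finite-difference check of the identity $\partial_A \operatorname{tr}[(X^TA)^T(X^TA)] = 2XX^TA$, in plain Python with small illustrative matrices (not part of either package's output; the helper names are mine):</p>

```python
def matmul(P, Q):
    # Naive matrix product for lists of lists
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def objective(A, X):
    # tr[(X^T A)^T (X^T A)] = sum of squares of the entries of X^T A
    M = matmul(transpose(X), A)
    return sum(v * v for row in M for v in row)

X = [[1.0], [2.0], [-1.0]]                  # n x 1 (illustrative values)
A = [[0.5, -1.0], [2.0, 0.3], [1.0, 4.0]]   # n x m

# Analytic gradient 2 X X^T A
G = [[2 * v for v in row] for row in matmul(matmul(X, transpose(X)), A)]

# Compare against central finite differences in each entry of A
h = 1e-6
for i in range(3):
    for j in range(2):
        Ap = [row[:] for row in A]; Ap[i][j] += h
        Am = [row[:] for row in A]; Am[i][j] -= h
        num = (objective(Ap, X) - objective(Am, X)) / (2 * h)
        assert abs(num - G[i][j]) < 1e-4
```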
|
1,730,569 | <p>All spaces are assumed Hausdorff. We call a topological space <em>compact</em> if every open cover has a finite subcover. We call it <em>Lindelöf</em> if every open cover has a countable subcover, and <em>hereditarily Lindelöf</em> if moreover every subspace is Lindelöf. </p>
<p>It is obvious that every compact space is Lindelöf. We have that every closed subset of a compact space is compact, i.e. it is Lindelöf and 'so much more'. This leads to the natural question, is every open set Lindelöf, too? (this would suffice to make the space hereditarily Lindelöf; see for example <a href="https://math.stackexchange.com/questions/494918/showing-that-if-every-open-subspace-is-lindel%C3%B6f-then-the-space-is-hereditarily">here</a>). </p>
| Henno Brandsma | 4,280 | <p>The OP's own answer, the one-point compactification of an uncountable discrete space, is a classical one. Another is $\{0,1\}^X$, where $X$ is uncountable, where the set of all points with exactly one $1$ is uncountable and discrete (and so not Lindelöf too). </p>
<p>Of course, every closed subspace of a Lindelöf space is also Lindelöf, so by the same argument, every Lindelöf space should be hereditarily so... </p>
|
2,993,299 | <p>If anyone can help me with how to go about solving these kinds of equations I would really appreciate it. :-)</p>
<p><span class="math-container">$$\sqrt{36-2x^2} = 4$$</span></p>
<p>Solve for <span class="math-container">$x$</span>.</p>
| William Elliot | 426,203 | <p>From <span class="math-container">$2$</span> onward, <span class="math-container">$0 < 1 - 1/x < 1$</span>.<br />
Thus <span class="math-container">$$2 < x < x/(1 - 1/x) < (1 + 1/x)/(1 - 1/x)$$</span><br />
Desired conclusion follows.</p>
|
1,882,509 | <p>As we know, by the multiplicative identity, 2 to the power of 3 means
1*2*2*2 = 8; but why is 2 to the power -3 equal to 1/8? If I think of it as the reverse of the multiplicative identity, then where does the 1 come from?
I don't want only a mathematical proof, but also the intuitive story behind it.
Why does it turn into a fraction, and why a positive fraction?
Why does the minus sign vanish?</p>
| Aaron | 9,863 | <p>When you are starting out with arithmetic, first you have counting, then you have addition of positive numbers, and then you have repeated addition, which is multiplication. There are obvious real world ways to give meaning to these expressions in terms of counting of objects when the operations involve positive integers. If you write $n*m=m+m+\ldots +m$ where you have added $m$ to itself $n$ times, then you can extend the formula to when $m$ is not a positive integer. But in this case, what is $m*n$ when $m$ is not a positive integer? Or when neither $m$ nor $n$ are?</p>
<p>At this point, these expressions don't have any meaning, and you have to decide how you want to give them meaning. What mathematicians have found is that one should look for properties that hold in the familiar case, declare that any reasonable extension will still have these properties, and use these properties to define what happens in the unfamiliar case.</p>
<p>So with multiplication, we have that $x(y+z)=xy+xz$, $xy=yx$, $x(yz)=(xy)z$, and $1*x=x*1=x$, and the question is, can we extend multiplication to negative/rational/real/complex numbers while maintaining these properties?</p>
<p>Since $0+a=a$, $xy=x(0+y)=x*0+xy$, and subtracting $xy$ from both sides, we have $x*0=0$. Thus, if we are to extend multiplication to $0$, we must have that $x*0=0$. Similarly, since $x+(-x)=0$, we can multiply $0=y(x+(-x))=yx+y(-x)$, and so $y(-x)=-yx$. So if we can extend multiplication and still have the distributive property, it forces the results to be what we (now) know them to be. It is a bit of work to make sure that all the other properties still hold.</p>
<p>This brings us to your particular question. What should $x^{-n}$ be? If $m,n>0$, we have $x^{m+n}=x^m x^n$, and so we ask "Can we make sense of $x^0$ and $x^{-n}$ in a way where this property continues to hold?</p>
<p>Since $0+n=n$, we would want $x^n=x^{0+n}=x^0x^n$, and dividing both sides by $x^n$ (assuming it is nonzero, which it is when $x\neq 0$), that gives us $x^0=1$. Similarly, $1=x^0=x^{n+(-n)}=x^nx^{-n}$, and dividing by $x^n$ we have $x^{-n}=1/x^n$. Thus, assuming our property of exponentiation continues to hold, this is how we must extend exponentiation to negative exponents. Of course, there is more work to make sure this definition actually works.</p>
<p>So why is it defined the way it is? Because we discovered useful properties in the case where everything was a positive number, and we wanted our extension to still have these properties. Of course, this has its limits. For example, you can't use the basic properties to make sense of $(-8)^{\pi}$ in a reasonable and unambiguous way. Like I mentioned above, after we use the properties to come up with a potential definition, we need to make sure it works, and it just doesn't for some types of exponents. But where it does work, we happily use it.</p>
|
3,430,744 | <p>I'm new to the Fourier transform but I somewhat know Fourier series. Since the Fourier series of <span class="math-container">$\sin(2\pi t)$</span> is just <span class="math-container">$\sin(2\pi t)$</span>, I thought of playing with this function to better understand what the Fourier transform is doing. </p>
<p>As a start I sampled <span class="math-container">$5$</span> points from <span class="math-container">$f(t)=\sin(2\pi t)$</span>:
<span class="math-container">$$x=[0,1,0,-1,0]$$</span>
<a href="https://i.stack.imgur.com/HfQvI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HfQvI.png" alt="enter image description here"></a></p>
<p>From <a href="https://www.wolframalpha.com/input/?i=DFT%280%2C1%2C0%2C-1%2C0%29" rel="nofollow noreferrer">Wolfram</a>, its Fourier transform is
<span class="math-container">$$X=[0, 0.5 + 0.688191 i, -0.5 - 0.16246 i, -0.5 + 0.16246 i, 0.5 - 0.688191 i]$$</span></p>
<p>To my knowledge, the above numbers are the weights of the frequencies present in the original function. </p>
<ul>
<li>The first number <span class="math-container">$0$</span> says the dc component(<span class="math-container">$0$</span> frequency) is <span class="math-container">$0$</span>.</li>
<li>Does the second number <span class="math-container">$ 0.5 + 0.688191 i$</span> represent the amplitude for <span class="math-container">$\sin(2\pi\cdot 1\cdot t)$</span>? </li>
<li>Similarly does the third number <span class="math-container">$ −0.5−0.16246i$</span> represent the amplitude for <span class="math-container">$\sin(2\pi\cdot 2\cdot t)$</span>? </li>
</ul>
<p>They don't make sense to me at all, because the original function has only one frequency. Can anyone help me understand what these numbers represent for my specific example?</p>
| maxmilgram | 615,636 | <p>Basically this would be very hard to solve if it weren't for the specific choice of coefficients. Observe that it can be rewritten as
<span class="math-container">$$
\frac{dy}{dx}-2x=\sqrt{4x^2-4y}\\
\frac{d}{dx}(y-x^2)=\sqrt{4x^2-4y}\\
-\frac{dz}{dx}=2\sqrt{z}
$$</span>
For <span class="math-container">$z=x^2-y$</span>. Can you take it from here?</p>
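<p>If a sanity check helps: on the decreasing branch one gets <span class="math-container">$z=(C-x)^2$</span>, i.e. <span class="math-container">$y=x^2-(C-x)^2$</span> for <span class="math-container">$x<C$</span>, which can be verified numerically (a Python sketch; the constant <span class="math-container">$C$</span> and sample points are arbitrary):</p>

```python
# Check that y(x) = x^2 - (C - x)^2 satisfies dy/dx - 2x = sqrt(4x^2 - 4y)
# for x < C, i.e. that z = x^2 - y = (C - x)^2 solves -dz/dx = 2*sqrt(z).
import math

def residual(x, C=3.0, h=1e-6):
    y = lambda t: t * t - (C - t) ** 2
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    return dydx - 2 * x - math.sqrt(4 * x * x - 4 * y(x))
```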
|
3,030,317 | <p>I have four points on a rectangular grid <span class="math-container">$(x_1,y_1)$</span>, <span class="math-container">$(x_1,y_2)$</span>, <span class="math-container">$(x_2,y_1)$</span> and <span class="math-container">$(x_2,y_2)$</span>. I also have the value of a third variable <span class="math-container">$z$</span> at each of these points, as well as the partial derivatives <span class="math-container">$\frac{\partial z}{\partial x}$</span> and <span class="math-container">$\frac{\partial z}{\partial y}$</span> at each of these points.</p>
<p>I would like to perform 2-d interpolation to obtain the value of <span class="math-container">$z$</span> at any point <span class="math-container">$(x,y)$</span> within the grid block.</p>
<p>I can easily perform 2-d linear interpolation using the four values of <span class="math-container">$z$</span>, however I would like to increase the smoothness by using the eight derivatives I already have.</p>
<p>I have read up about bicubic interpolation but this requires four 2nd derivatives which I do not have.</p>
<p>Is there a method using the 12 bits of data I have which gives a smoother surface than the linear solution?</p>
| Federico | 180,428 | <p>Here is a silly way of doing it.</p>
<p>Lets work in <span class="math-container">$[0,1]^2$</span> for simplicity.</p>
<p>Consider the functions
<span class="math-container">$$
f(x,y)=x(1-x)^2(1-y) \qquad \text{and} \qquad
g(x,y)=(1-x)^2(1+2x)(1-y)^2(1+2y) .
$$</span>
We have 12 values we are interested in: the 4 values of a function at the vertices and the 8 values of the partial derivatives. As it happens, among these 12 values the function <span class="math-container">$g$</span> has only the value at <span class="math-container">$(0,0)$</span> that is nonzero (it is <span class="math-container">$1$</span>) and the function <span class="math-container">$f$</span> has only one derivative at <span class="math-container">$(0,0)$</span> which is nonzero (it is <span class="math-container">$1$</span>).</p>
<p>Therefore the functions
<span class="math-container">$$
g(x,y) \quad g(1-x,y) \quad g(x,1-y) \quad g(1-x,1-y)
$$</span>
and
<span class="math-container">$$
f(x,y) \quad f(1-x,y) \quad f(x,1-y) \quad f(1-x,1-y)
$$</span>
<span class="math-container">$$
f(y,x) \quad f(y,1-x) \quad f(1-y,x) \quad f(1-y,1-x)
$$</span>
form a basis for the problem.</p>
<p>Moreover, since I chose the functions well, meaning that the 12 values are all <span class="math-container">$0$</span> except 1 which is <span class="math-container">$1$</span>, finding the coefficients to use is a trivial job.</p>
<hr>
<p>Graph of <span class="math-container">$f$</span>. The function vanishes at the vertices and only one partial derivative is nonzero:
<a href="https://i.stack.imgur.com/GhWSQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GhWSQ.png" alt="enter image description here"></a></p>
<p>Graph of <span class="math-container">$g$</span>. Only one of the values at the vertices is nonzero, and all the partial derivatives vanish:
<a href="https://i.stack.imgur.com/RnjNS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RnjNS.png" alt="enter image description here"></a></p>
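<p>A short numerical check of the defining property (a Python sketch using finite differences; not part of the construction itself) — among the 12 corner data, <span class="math-container">$g$</span> contributes only the value at <span class="math-container">$(0,0)$</span> and <span class="math-container">$f$</span> only the <span class="math-container">$x$</span>-derivative at <span class="math-container">$(0,0)$</span>:</p>

```python
# Collect the 12 interpolation data (4 corner values + 8 corner partials)
# of the basis functions on the unit square and check which are nonzero.

def f(x, y): return x * (1 - x) ** 2 * (1 - y)
def g(x, y): return (1 - x) ** 2 * (1 + 2 * x) * (1 - y) ** 2 * (1 + 2 * y)

def corner_data(F, h=1e-6):
    # Returns (value, dF/dx, dF/dy) at the corners (0,0), (0,1), (1,0), (1,1).
    out = []
    for cx in (0.0, 1.0):
        for cy in (0.0, 1.0):
            fx = (F(cx + h, cy) - F(cx - h, cy)) / (2 * h)
            fy = (F(cx, cy + h) - F(cx, cy - h)) / (2 * h)
            out.append((F(cx, cy), fx, fy))
    return out
```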
|
182,307 | <p>I have a long flat list that needs to be partitioned. The list is formatted so the "header" is repeated, followed by the values. Essentially, it looks something like this:</p>
<pre><code>list={a,a,1,2,3,b,b,5,6,c,c,1,5,a,a,7,8,9,1}
</code></pre>
<p>I am looking for an output of:</p>
<pre><code>{{a,1,2,3},{b,5,6},{c,1,5},{a,7,8,9,1}}
</code></pre>
<p>The output above would then let me create the association list I need.</p>
<p>Obviously <code>Partition</code> won't work because the sublists are of different lengths. I have looked at various ways to identify where the repeated "header" data is, but that doesn't help with the splits. </p>
| kglr | 125 | <pre><code>Join[Most /@ Rest @ Most @ #, {Last @ #}] & @ Split[list, UnsameQ]
</code></pre>
<blockquote>
<p>{{a, 1, 2, 3}, {b, 5, 6}, {c, 1, 5}, {a, 7, 8, 9, 1}}</p>
</blockquote>
<p>You can also use use <code>Split</code> twice and reorganize the result:</p>
<pre><code>Rest /@ Flatten /@ Split[Split[list], Length @ #2 == 1 &]
</code></pre>
<blockquote>
<p>{{a, 1, 2, 3}, {b, 5, 6}, {c, 1, 5}, {a, 7, 8, 9, 1}}</p>
</blockquote>
<p>And yet an alternative way:</p>
<pre><code>Take[list, {#, #2 - 2}] & @@@
Partition[Last /@ SequencePosition[list, {a_, a_}], 2, 1, 1, {Length[list] + 2}]
</code></pre>
<blockquote>
<p>{{a, 1, 2, 3}, {b, 5, 6}, {c, 1, 5}, {a, 7, 8, 9, 1}} </p>
</blockquote>
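<p>If the same grouping is ever needed outside Mathematica, the logic can be sketched in plain Python (assuming, as in the example list, that values never repeat consecutively except at the doubled headers):</p>

```python
# Partition a flat list whose groups start with a doubled header,
# e.g. [a, a, 1, 2, 3, b, b, 5, 6] -> [[a, 1, 2, 3], [b, 5, 6]].
def split_on_doubled_headers(xs):
    groups = []
    i = 0
    while i < len(xs):
        if i + 1 < len(xs) and xs[i] == xs[i + 1]:   # doubled header found
            groups.append([xs[i]])
            i += 2                                   # keep one copy, skip dup
        else:
            groups[-1].append(xs[i])
            i += 1
    return groups
```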
|
116,220 | <p>Say two graphs are not isomorphic but are both strongly regular with the same set of parameters. Are there any parameters (other than the usual such as order, degrees, eigenvalues and multiplicities, etc.) that are determined, e.g., independence number, chromatic number, etc.?</p>
<p>Thanks for any help</p>
| Brendan McKay | 9,025 | <p>The number of cycles of length 3,4,5 are determined. If the girth is 4, the number of 6-cycles is determined too.</p>
|
4,412,371 | <p>Would appreciate some help with proving the following statement:</p>
<p>Let <span class="math-container">$a_{n}$</span> be a sequence, and <span class="math-container">$\lim _{n\rightarrow \infty }a_{n}=1 $</span>.</p>
<p>Prove that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>.
It's easy to see that if <span class="math-container">$a_{n}$</span> is monotonically decreasing, then we can choose the sequence <span class="math-container">$b_{n}=\sum ^{n}a_{n}k=1$</span>, then <span class="math-container">$\lim _{n\rightarrow \infty }b_{n}=\infty$</span> and since <span class="math-container">$\forall n\in \mathbb{N} \left( a_{n+1}\leq a_{n}\right)$</span>, we can assume that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>, but what happens when <span class="math-container">$a_{n}$</span> is monotonically increasing?</p>
| José Carlos Santos | 446,262 | <p>There is some <span class="math-container">$N\in\Bbb N$</span> such that <span class="math-container">$n\geqslant N\implies a_n\geqslant\frac12$</span>. So, if <span class="math-container">$n\geqslant N$</span>,<span class="math-container">$$a_1+a_2+\cdots+a_n\geqslant a_1+\cdots+a_{N-1}+\frac{n-N+1}2.$$</span>Can you take it from here?</p>
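<p>A quick numerical illustration (a Python sketch; the concrete sequence <span class="math-container">$a_n=n/(n+1)$</span>, which increases to <span class="math-container">$1$</span>, is only an example):</p>

```python
# Partial sums of a_n = n/(n+1), a sequence tending to 1.
# Since eventually a_n >= 1/2, the partial sums exceed any fixed bound.
def partial_sum(n):
    return sum(k / (k + 1) for k in range(1, n + 1))
```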
|
1,241,586 | <p>I want to prove that for all positive integers
$n$, $2^1+2^2+2^3+...+2^n=2^{n+1}-2$.
By mathematical induction:</p>
<p>1) it holds for $n=1$, since $2^1=2^2-2=4-2=2$</p>
<p>2) if $2+2^2+2^3+...+2^n=2^{n+1}-2$, then prove that $2+2^2+2^3+...+2^n+2^{n+1}=2^{n+2}-2$ holds.</p>
<p>I have no idea how to proceed with step 2), could you give me a hint?</p>
| marwalix | 441 | <p>It is straightforward</p>
<p>$$2+2^2+\cdots+2^n+2^{n+1}=(2+2^2+\cdots+2^n)+2^{n+1}$$</p>
<p>Use the induction assumption</p>
<p>$$\begin{align} 2+2^2+\cdots+2^n+2^{n+1}&=2^{n+1}-2+2^{n+1}\\&=2\cdot 2^{n+1}-2\\&=2^{n+2}-2\end{align}$$</p>
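<p>A quick computational check of the identity for the first few <span class="math-container">$n$</span> (Python, purely illustrative):</p>

```python
# Verify 2^1 + 2^2 + ... + 2^n = 2^(n+1) - 2 for small n.
def geometric_sum(n):
    return sum(2 ** k for k in range(1, n + 1))
```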
|
2,437,466 | <p>A point, $p$, is defined as a cluster point of a set $S$ if $\forall \epsilon > 0,$ there exists an open ball of radius $\epsilon$ centered at $p$ that contains infinitely many points in $S$.</p>
<p>We want to prove that a subset $S$ of a metric space $(E,d)$ is closed iff $S$ contains all of its cluster points.</p>
<p>I believe I have the forward direction, which seems to be pretty straightforward, but I am having difficulty with the $(\Leftarrow)$ direction. My idea was to somehow use the fact that $S$ contains all of its cluster points to show that every infinite subset of $S$ contains a cluster point. Then we can say $S$ is sequentially compact, therefore it is compact, and hence closed.</p>
<p>I'm just mostly unsure of how to show the first part. Thanks for any comments.</p>
| José Carlos Santos | 446,262 | <p>Note that $$\frac{\frac{4^{n+1}}{n+2}}{\frac{4^n}{n+1}}=4\frac{n+1}{n+2}$$ and that $$\frac{\frac{(2(n+1))!}{(n+1)!^2}}{\frac{(2n)!}{n!^2}}=\frac{(2n+2)!}{(2n)!}\cdot\frac{n!^2}{(n+1)!^2}=\frac{(2n+2)(2n+1)}{(n+1)^2}=2\frac{2n+1}{n+1}. $$So, when is it true that$$4\frac{n+1}{n+2}\leqslant2\frac{2n+1}{n+1}?$$ Well, the previous inequaliy is equivalent to $2(n+1)^2\leqslant(2n+1)(n+2)=2n^2+5n+2$, which clearly holds always. So, since you proved that $\frac{4^2}{2+1}<\frac{(2\times2)!}{2!^2}$, the inequality that you want to prove is true whenever $n\geqslant2$.</p>
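<p>The inequality <span class="math-container">$\frac{4^n}{n+1}\leqslant\frac{(2n)!}{n!^2}$</span> itself is easy to check directly for small <span class="math-container">$n$</span> (a Python sketch, illustrative only):</p>

```python
# Check 4^n / (n+1) <= binomial(2n, n) for small n: it holds for n >= 1,
# with equality at n = 1 and strict inequality from n = 2 on.
from math import comb

def lhs(n):
    return 4 ** n / (n + 1)

def rhs(n):
    return comb(2 * n, n)
```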
|
178,828 | <p>Let $\left\{ x_n \right\}_{n\geq0}$ be a sequence of real numbers such that $$x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1},\ n\geq 1,$$for some $0<\lambda<1$ </p>
<p>(a) Show that $x_n=x_0+(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k$ </p>
<p>(b) Hence, or otherwise, show that $x_n$ converges and find the limit.</p>
<p>Note that $x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1}$ $$\implies x_{n+1}-x_n=(\lambda-1)(x_n-x_{n-1})=\cdots=(\lambda-1)^n(x_1-x_0),\ \forall n$$ </p>
<p>Hence we get $x_n-x_0=(x_n-x_{n-1})+(x_{n-1}-x_{n-2})+\cdots+(x_1-x_0)=(\lambda-1)^{n-1}(x_1-x_0)+(\lambda-1)^{n-2}(x_1-x_0)+\cdots+(x_1-x_0)=(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k.$</p>
<p>Help me in convergence part.</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> The <strong>geometric series</strong> $1+r+r^2+r^3+\cdots$ converges to $\frac{1}{1-r}$ if $|r|\lt 1$.</p>
<p>This is a fact that is undoubtedly already familiar to you. It can be proved by showing that
$$1+r+r^2+\cdots +r^{n-1}=\frac{1-r^n}{1-r},\tag{$1$}$$
and then noting that if $|r|\lt 1$, then $r^n\to 0$ as $n \to \infty$.</p>
<p>One way to prove $(1)$ is to show that
$$(1-r)(1+r+r^2+\cdots +r^{n-1})=1-r^n.$$
This can be done by multiplying out the left-hand side and observing the mass cancellation.</p>
<p>In our case, $r=\lambda-1$ and therefore $\frac{1}{1-r}=\frac{1}{2-\lambda}$.</p>
|
178,828 | <p>Let $\left\{ x_n \right\}_{n\geq0}$ be a sequence of real numbers such that $$x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1},\ n\geq 1,$$for some $0<\lambda<1$ </p>
<p>(a) Show that $x_n=x_0+(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k$ </p>
<p>(b) Hence, or otherwise, show that $x_n$ converges and find the limit.</p>
<p>Note that $x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1}$ $$\implies x_{n+1}-x_n=(\lambda-1)(x_n-x_{n-1})=\cdots=(\lambda-1)^n(x_1-x_0),\ \forall n$$ </p>
<p>Hence we get $x_n-x_0=(x_n-x_{n-1})+(x_{n-1}-x_{n-2})+\cdots+(x_1-x_0)=(\lambda-1)^{n-1}(x_1-x_0)+(\lambda-1)^{n-2}(x_1-x_0)+\cdots+(x_1-x_0)=(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k.$</p>
<p>Help me in convergence part.</p>
| i. m. soloveichik | 32,940 | <p>Since $x_n−x_0=(x_1−x_0)\sum_{k=0}^{n-1}(\lambda-1)^k=A\frac{1-(\lambda-1)^n}{2-\lambda}$, where $A=x_1-x_0$. then $x_n=x_0+A\frac{1-(\lambda-1)^n}{2-\lambda}$. Since $0<\lambda<1$ then $(\lambda-1)^n\rightarrow 0$ as $n\rightarrow \infty$ and thus $$\lim_{n\rightarrow \infty} x_n=x_0+\frac{x_1-x_0}{2-\lambda}.$$</p>
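<p>A quick numerical confirmation (Python; the sample values of $x_0$, $x_1$, $\lambda$ are arbitrary):</p>

```python
# Iterate x_{n+1} = lam*x_n + (1-lam)*x_{n-1} and compare with the
# claimed limit x_0 + (x_1 - x_0)/(2 - lam).
def iterate(x0, x1, lam, steps):
    a, b = x0, x1
    for _ in range(steps):
        a, b = b, lam * b + (1 - lam) * a
    return b
```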
|
1,426,891 | <blockquote>
<p>Why does $$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy ?$$</p>
</blockquote>
<p>This came up while studying Fourier analysis. What's the underlying theorem?</p>
| Olivier Oloa | 118,798 | <p><strong>Hint.</strong> One may observe that
$$
A\int_{-\infty}^{+\infty}e^{-x^2}dx=\int_{-\infty}^{+\infty}A\:e^{-x^2}dx
$$ for any constant $A$. Then putting $A:=\int_{-\infty}^{+\infty}e^{-y^2}dy$ and using <a href="https://en.wikipedia.org/wiki/Fubini's_theorem" rel="nofollow">Fubini's theorem</a>, which is allowed here, yields the desired result.</p>
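<p>Numerically, the common value squares to $\pi$ (since $\int_{-\infty}^{+\infty}e^{-t^2}dt=\sqrt{\pi}$), which even a crude midpoint Riemann sum confirms; truncating to $[-10,10]$ is harmless because of the rapid decay of $e^{-t^2}$ (Python sketch):</p>

```python
# Midpoint Riemann sum for the Gaussian integral; its square should be pi.
import math

def gauss_integral(a=-10.0, b=10.0, n=200_000):
    h = (b - a) / n
    return h * sum(math.exp(-(a + (k + 0.5) * h) ** 2) for k in range(n))
```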
|
1,426,891 | <blockquote>
<p>Why does $$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy ?$$</p>
</blockquote>
<p>This came up while studying Fourier analysis. What's the underlying theorem?</p>
| Salihcyilmaz | 227,487 | <p>$$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right)^2 = \left(\int_{-\infty}^{\infty}e^{-t^2}dt\right) \cdot \left(\int_{-\infty}^{\infty}e^{-u^2}du\right) $$since $u$ and $t$ vary independently we have$$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right) \cdot \left(\int_{-\infty}^{\infty}e^{-u^2}du\right)= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(u^2 + t^2)}du\,dt$$ </p>
|
2,985,962 | <p>I was asking my friends a riddle about identifying hats. Each person has to correctly identify the colour of their own hat that was put on their head randomly.
There is no defined number of either colour. So they could be all white or all black or any combination in between.</p>
<p>They gave an answer that gets 50% right but I fired back that getting 50% right is what you would expect, on average, for straight guesses.
They claimed that that would depend on the colours of the hats that are on the people's heads. In other words, if everyone was wearing black then the 50% rule would not apply.
This just doesn't "feel" right to me.</p>
<p>Who is correct?</p>
<p>Edit:</p>
<p>This is the puzzle I asked.
You have 100 people standing one behind the other such that the last person can see all the people in front of him/her and so on.
So the last one sees 99 and the next sees 98, etc.
They each have a hat put on their head which is black or white. They have no idea how many of each exist.</p>
<p>Assuming they plan a strategy in advance, how many can get their hat right?
They said that the best way is for the back person to say the colour of the hat on person 99. Person 99 can say his colour. Then 98 will say the colour of the one in front, etc.
This way I am guaranteed at least 50 right, and maybe more if two consecutive people have the same colour.
My claim was that 50% guaranteed is the same as random (ignoring the extra lucky one if there are consecutive hats). Their counter-claim was that the 50% random guess would only be right if there were exactly 50 of each colour.</p>
| John Hughes | 114,036 | <p>Skip forward to below the "=====" for the answer that you're probably looking for. Read the first part to find out why I put it after the "=====". </p>
<p>To make sense of this, you need to think about probability spaces. And to do that right, you need more information about the meaning of the words in your question. </p>
<p>Case 1: There's a distribution <span class="math-container">$d$</span> from which the hats for each person are drawn independently randomly. The players guess black/white uniformly randomly. In this case, the expected number of correct guesses is 50 out of 100. </p>
<p>Case 2: There's a distribution as before, again with independent drawing of hat colors, but the players get to look at others' hats before guessing; they then guess black with a probability proportional to the number of black hats they see (out of the total 99 hats they see). (Roughly: if 95 others have black hats, and 4 have white hats, you guess "black" 95 out of 99 times (perhaps by rolling a die to generate your guess). The expected number of correct guesses in this case is always at least 50, but can be far greater. If the distribution <span class="math-container">$d$</span> is highly skewed, this strategy wins big. Note that the players are still "guessing randomly" here ... just not <em>uniformly</em> randomly. </p>
<p>Case 3: The hat-placer is an adversary, and has thought about strategies you might employ. The hat-placer carefully chooses a number <span class="math-container">$k$</span> of black hats and <span class="math-container">$100-k$</span> white hats, and then distributes these randomly among the players by picking uniformly randomly a permutation of the numbers 1...100. (Note that this still meets the condition "that was put on their head randomly"). The players guess uniformly randomly from "black" or "white", without observing the others' hats. The expected number of correct guesses is again 50. </p>
<p>Case 4: Same adversarial setup as in case 3, but the players use the 'bayesian' approach of case 2. In this case, the adversary will presumably optimize, which will turn out to set <span class="math-container">$k = 50$</span>, and again the expected number of correct guesses is 50. </p>
<p>====</p>
<p>Anyhow, case 2 makes the point that saying what distribution is being used in each step of randomness in the problem is critical to assessing expected values. Just saying "randomly" doesn't guarantee uniform randomness. And "straight guesses" doesn't actually mean much of anything to me, although I'm guessing that to you it means "uniformly randomly chosen from 'black' and 'white'." </p>
<p>Let me ramble on a little further still, and formulate the problem a little differently. </p>
<p>You have a fixed but unknown list of 100 bits, <span class="math-container">$b_1, \ldots, b_{100}$</span>, each either a <span class="math-container">$0$</span> or a <span class="math-container">$1$</span>. </p>
<p>You generate another list of 100 bits, <span class="math-container">$c_i$</span>, <span class="math-container">$i = 1, \ldots, 100$</span>, chosen independently identically distributed from the uniform distribution on the set <span class="math-container">$\{0, 1\}$</span>. </p>
<p>You ask "What is the expected number of <span class="math-container">$i$</span> for which <span class="math-container">$b_i = c_i$</span>?"</p>
<p>The answer in this case is <span class="math-container">$50$</span>, and does not depend on the initial bit sequence <span class="math-container">$b$</span>. The proof is straightforward: the probability space consists of all possible <span class="math-container">$c$</span>-sequences; there are <span class="math-container">$2^{100}$</span> of these, each equally probable. </p>
<p>If we look at the <span class="math-container">$i$</span>th digit of each of these sequences, in half of them <span class="math-container">$c_i$</span> is zero; in the other half, <span class="math-container">$c_i = 1$</span>. Hence the probability that <span class="math-container">$c_i$</span> equals <span class="math-container">$b_i$</span> is exactly <span class="math-container">$1/2$</span>, and the expected value of the event <span class="math-container">$c_i = b_i$</span> is <span class="math-container">$1/2$</span>. By linearity of expectation, the expected number of matching bits is the sum of the expected number of matching first-bits, matching second-bits, and so on, hence is <span class="math-container">$100$</span> times <span class="math-container">$1/2$</span>, or <span class="math-container">$50$</span>. </p>
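<p>The key step — that the expectation is <span class="math-container">$n/2$</span> for <em>every</em> fixed <span class="math-container">$b$</span> — can be checked exhaustively for a small <span class="math-container">$n$</span> (a Python sketch; <span class="math-container">$n=8$</span> keeps the sum over all <span class="math-container">$2^n$</span> sequences cheap):</p>

```python
# For a fixed bit string b, average the number of matching positions
# over all 2^n equally likely guess strings c; the result is n/2
# regardless of b.
from itertools import product

def expected_matches(b):
    n = len(b)
    total = sum(sum(bi == ci for bi, ci in zip(b, c))
                for c in product((0, 1), repeat=n))
    return total / 2 ** n
```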
|
1,406,462 | <p>I've been studying an introductory book on set theory that uses the <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZFC</a> set of axioms, and it's been really exciting to construct and define numbers and operations in terms of sets. But now I've seen that there are other alternatives to this approach to construct mathematics like <a href="https://en.wikipedia.org/wiki/New_Foundations">New Foundations</a> (NF), category theory and others. </p>
<p>My question is, what was the motivation to look for an alternative for ZFC? Does ZFC have any weakneses that served as motivation? I know there are some concepts that don't exist in ZFC like the universal set or a set that belongs to itself. So I thought that maybe some branches of mathematics <em>needed</em> these concepts in order to develop them further, is that true?</p>
| Daniel Ranard | 263,791 | <p>If you're looking for motivation to pursue other foundations, I recommend the article "Rethinking Set Theory" by Tom Leinster: <a href="http://arxiv.org/pdf/1212.6543v1.pdf" rel="noreferrer">http://arxiv.org/pdf/1212.6543v1.pdf</a>. In particular, he's providing a gentle, well-motivated introduction to Lawvere's Elementary Theory of the Category of Sets (ETCS). </p>
<p>Leinster mentions at least two complaints. First, in ZFC-style set theory, if you've been working from the axioms to define all of your objects, then it will be sensible to ask questions like "Is $7 \in \pi$?" because every mathematical object is a set, and all elements of sets are sets. But we might be more comfortable with foundations that don't allow these types of questions, even if we don't normally bother with them. (By the way, the above point is often called "[someone]'s paradox" but I forget whose.)</p>
<p>Another complaint is that the axioms of ZFC don't have much in common with the high-level mathematical operations we do all the time, whereas the axioms described by Leinster are about composition and equality of maps, etc.: the axioms directly describe the higher-level manipulations. Perhaps you view this as an advantage; perhaps not.</p>
|
2,495,789 | <p><span class="math-container">$$\begin{pmatrix}4&-6&2&-6\\ -2&3&-1&3\end{pmatrix}$$</span></p>
<p>Am I supposed to move the $0 $ to the other side of the equation and find the inverse of the matrix above or is there another way?</p>
| FredetikS | 497,201 | <p>You have 4 variables and 2 equations. This implies that you have infinitely many solutions.</p>
|
125,645 | <p>I stumbled upon the following integral:</p>
<p>$$\int e^{-\left(\frac{\gamma}{2} + ip \right)^2} H_n \left( q-\frac{\gamma}{2} \right) H_n \left( q+\frac{\gamma}{2} \right) \, d\gamma \, ,$$</p>
<p>and I really need a closed form of it. I'm pretty sure it exists, and probably in terms of $\Gamma$ functions, since</p>
<p>$$ \int e^\frac{-\gamma^2}{2} H_n(\gamma) d\gamma = 4^n \sqrt{2} \,\Gamma \left( n + \frac{1}{2} \right) \, .$$</p>
<p>The problem is that <em>Mathematica</em> can't compute this in closed form, although it can do it term by term and hint that a closed expression probably exists:</p>
<pre><code>Table[Integrate[
HermiteH[i, q + 1/2 \[Gamma]] HermiteH[i,
q - 1/2 \[Gamma]] Exp[-(\[Gamma]/2 + I p)^2], {\[Gamma], -Infinity,
Infinity}],{i,0,2}]
</code></pre>
<p>gives</p>
<p>$$\left\{2 \sqrt{\pi },4 \sqrt{\pi } \left(2 p^2+2 q^2-1\right),16 \sqrt{\pi } \left(2 \left(p^4+2 p^2 \left(q^2-1\right)+q^4\right)-4
q^2+1\right)\right\} \, .$$</p>
<p>Can someone give me a tip?</p>
| yarchik | 9,469 | <p>It is interesting to solve this integral without relying on <em>Gradshteyn and Ryzhik</em>. As I suggested in the comments, one can use the generating function approach. The solution consists of 7 steps. Human input is needed on step 5. All other steps are pretty much automatic.</p>
<ol>
<li><p>Substitute generating functions
$\exp(2xt-t^2)=\sum_n\frac{1}{n!}H_n(x)t^n$ and integrate</p>
<pre><code>g[1] = Integrate[Exp[-(Y/2 + I p)^2] Exp[2(q-Y/2)t-t^2] Exp[2(q+Y/2)s-s^2],
{Y, -Infinity, Infinity}]
</code></pre>
<p>We have for <code>g[1]=</code>$2 \sqrt{\pi } e^{-2 i p (s-t)+2 q (s+t)-2 s t}=\sum_{mn}\frac{1}{m!n!}I_{m,n}(p,q)t^ms^n$, where we are interested in $I_{n,n}(p,q)$.</p></li>
<li><p>Next we need to extract from this generating function coefficients
in front of $t^ns^n$, i.e., diagonal elements of the expansion. Let
us introduce the substitution <code>{t -> x/y, s -> y}</code> and search for
coefficients in front of $x^n y^0$.</p>
<pre><code>g[2] = g[1] /. {t -> x/y, s -> y};
</code></pre></li>
<li><p>It is the exponential function with the $2\sqrt\pi$ prefactor and
power:</p>
<pre><code>pow = Cases[g[2], Exp[x_] -> x] // First;
</code></pre>
<p>i.e., <code>g[2]=2 Sqrt[Pi] Exp[pow]</code></p></li>
<li><p>We simplify the argument of the exponent little bit:</p>
<pre><code>g[3] = Collect[pow, {y, x}]
</code></pre>
<p>$2(q+ip)\,x/y+2(q-ip)\,y -2 x$</p></li>
<li><p>Now, we can easily extract the coefficient in front of $y^0$ in the
series expansion of <code>g[2]</code> (this is the only nontrivial step in this
derivation). For the summation, we rely on the amazing capabilities
of Mathematica:</p>
<pre><code>g[4] = 2 Sqrt[Pi] E^(-2 x) Sum[1/(2 n)! Binomial[2n,n] (2 I p + 2 q)^n x^n
(-2 I p + 2 q)^n,{n, 0, Infinity}]
(*2 E^(-2 x) Sqrt[Pi] Hypergeometric0F1[1, 4 (p^2 + q^2) x]*)
</code></pre></li>
<li><p>At this point we have the generating function, i.e., $2\sqrt{\pi} e^{-2x}\,{}_0F_1\left[;1;4(p^2 + q^2) x\right]=\sum_n \frac{1}{(n!)^2}I_{n,n}x^n$. Finally, we extract
corresponding series coefficients.</p>
<pre><code>g[5] = (n!)^2 SeriesCoefficient[g[4], {x, 0, n}];
</code></pre></li>
<li><p>The solution in terms of the <em>generalized hypergeometric function</em>
can be simplified as follows:</p>
<pre><code>FullSimplify[FunctionExpand[g[5]], Assumptions -> n >= 0 && n \[Element] Integers]
</code></pre></li>
</ol>
<p>And we are done:</p>
<pre><code>g[6]=(-1)^n 2^(1 + n) n! Sqrt[Pi] LaguerreL[n, 2 (p^2 + q^2)]
</code></pre>
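<p>As an independent cross-check outside Mathematica (a Python sketch; the sample point $(p,q)$ is arbitrary), the closed form $(-1)^n\,2^{n+1}\,n!\,\sqrt{\pi}\,L_n\!\left(2(p^2+q^2)\right)$ reproduces the three term-by-term integrals quoted in the question for $n=0,1,2$:</p>

```python
# Compare the closed form with the explicit n = 0, 1, 2 integrals quoted
# in the question.  Laguerre polynomials via the standard three-term
# recurrence (k+1) L_{k+1}(x) = (2k+1-x) L_k(x) - k L_{k-1}(x).
import math

def laguerre(n, x):
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

def closed_form(n, p, q):
    return ((-1) ** n * 2 ** (n + 1) * math.factorial(n)
            * math.sqrt(math.pi) * laguerre(n, 2 * (p * p + q * q)))
```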
|
3,537,549 | <p>In the game of poker, five cards from a standard deck of 52 cards are dealt to each player.</p>
<p>Assume there are four players and the cards are dealt five at a time around the table until all four players have received five cards.</p>
<p>a. What is the probability of the first player receiving a royal flush (the ace, king, queen, jack, and 10 of the same suit).</p>
<p>b. What is the probability of the second player receiving a royal flush?</p>
<p>c. If the cards are dealt one at a time to each player in turn, what is the probability of the first player receiving a royal flush?</p>
<p>I know that the probability of a royal flush is 1/649740, because 52C5 = 2598960 and 4/2598960 = 1/649740.</p>
<p>But I am struggling to understand how to determine the probability of the first player getting it, etc...</p>
<p>Any help would be greatly appreciated.</p>
| Claude Leibovici | 82,404 | <p>If you want an exact value <span class="math-container">$$\frac{R}{2\sqrt{R}-1} \int_{\pi/2}^{\pi} d\theta \, e^{R t \cos{\theta}}=\frac{R}{2\sqrt{R}-1}\frac{\pi}{2} \, (I_0(R t)-\pmb{L}_0(R t))$$</span> where appear Bessel and Struve functions</p>
|
1,506,719 | <p>Let us have measurable spaces $(S_1, \Sigma_1)$ and $(S_2, \Sigma_2)$.
The idea of a measurable function $f$ with respect to $\Sigma_1,\Sigma_2$ is the following: $f: S_1 \to S_2$ has to be such that
$$\forall E_2: f^{-1}(E_2) \in \Sigma_1, $$
where
$ E_2 \in \Sigma_2, E_1 \in \Sigma_1 $.</p>
<hr>
<p>Another author looks at elements of $S_1$, let's say $x$, and says that the function $f$ is measurable if:
$$\{x: f(x) \in E_2 \} \in \Sigma_1 $$</p>
<hr>
<p>My question is:
WHY do we have $f$ from $S_1 \to S_2$ but not $f$ from $E_1 \to E_2$?!</p>
<p>Or, if we wish to travel from sets, then the inverse function $f^{-1}$ will be from the $S_2 \to S_1$.</p>
<p>I made a sketch of the 1st definition of this measurable function, and my picture does not seem to be coherent with what I naturally want to find.
<a href="https://i.stack.imgur.com/F4hvb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F4hvb.jpg" alt="enter image description here"></a></p>
<blockquote>
<p>I mean for me it would be natural to
define that $f^{-1}$ goes back
the same route from which it started ($S_2 \to S_1$).</p>
</blockquote>
<p>Wikipedia and a few books I have been reading do not shed light on this topic beyond what is already unclear to me. Other googling did not help either. Thank you.</p>
<hr>
<hr>
<p><strong><em>UPDATE</em></strong>
Thank you all very much. All three answers complement each other and give me a wider picture of what to think about. I was thinking the whole day, and will continue for a while, but I still have trouble, and many ideas somehow mix in my head. </p>
<p>To summarize. The first take home message I got:</p>
<p>1) From simple calculus. We build image and preimage functions firstly on elements of some set. Then we have something like a function on sets.</p>
<p>2) There are no restrictions (that an inverse exists, or that it is an injective, surjective or bijective function) on a measurable function $f$.</p>
<p>3) There was a great insight why exactly preimage is what we have to use in building the mapping. But I agree it somewhere in my heart, and still try to make it more understandable to brain and I fail. I will post below more text about what I understand and what I do not.</p>
<p>4) What about the second definition, where we have just the image? I mean, are these definitions equivalent? It does not use the preimage function.</p>
<p>5) I am also trying to find analogue in Probability Theory, where we have measurable space $(\Omega,\mathcal{F})$, measurable function(r.v.) $X$ that makes mapping $X: \Omega \to \mathcal{R}$, where $(\mathcal{R},\sigma(\mathcal{R}))$ is another measurable space.</p>
<p>My question here is the following.
Let $w \in \Omega, X(w) \to r$ where $r \in \mathcal{R}$.
$X^{-1}(r) = w$.
Here we do not use correspondence of subsets of sigma algebras. Am I right? I omitted which r and w should be because I think I can write something stupid=) </p>
<p>6) And to conclude. The general problem/question for me was and remains that I still try to imagine in my head that we should have, for a measurable function, some kind of correspondence of image and preimage, but this is not enough (or there is another reason, as stated), as I read from the answers.
Maybe it is possible to make an example/picture answering why the function (image) goes from set X to set Y, while the preimage function need not work with the same "matter", in our case certain subsets of the sigma algebras generated by the initial sets (or, shortly, part of my question is: why is a measurable function not an inverse function mapping between the same sets X and Y?).</p>
<p>Am I right that this definition of a measurable function ($\forall E_2: f^{-1}(E_2) \in \Sigma_1$) allows certain sets $E_1$ to have no correspondence with all the subsets of the sigma algebra $\Sigma_2$? If yes, is this the point where $f$ may have a mapping from $E_1$ to the empty set $\emptyset$?</p>
<p>FIN1. Is such a mnemonic mathematically correct: </p>
<p>A function $f$ that maps set $X$ to set $Y$ (here I mean the image of every element $x$ of $X$ is some element $y$ of $Y$) is measurable if,
for any subset in $\sigma(Y)$, its preimage is inside $\sigma(X)$. </p>
<p>FIN2.And again, </p>
<blockquote>
<p>For every function $f$, subset $A$ of the domain and subset $B$ of the
codomain we have $A \subset f^{−1}(f(A))$ and $f(f^{−1}(B)) \subset B$. </p>
<p>If $f$ is injective we have $A = f^{−1}(f(A))$ and if ''f'' is
surjective we have $f(f^{−1}(B)) = B$.</p>
</blockquote>
<p>So, in order to understand this better I wanted at least to show myself visually that this holds.</p>
<p>I have included a picture of my thoughts, which gives me the completely opposite result. Should I post this as a separate question?
<a href="https://i.stack.imgur.com/UjSTu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UjSTu.jpg" alt="enter image description here"></a></p>
| John Dawkins | 189,130 | <p>In measure theory (or probability theory) to study the behavior of a function like $f$ we often look at sets of the form $\{x\in S_1:f(x)\in E_2\}$, and it is useful to have a briefer notation for such a set, whence $f^{-1}(E_2)$. And this notion is more than just notation: the set mapping $f^{-1}$ so defined maps the power set of $S_2$ into the power set of $S_1$, and a function $f:S_1\to S_2$ is $\Sigma_1/\Sigma_2$ measurable provided $f^{-1}$ maps $\Sigma_2$ into $\Sigma_1$.</p>
<p>And there is a certain coherence to the notation. We have the direct image $f(E_1)$ of a subset of $S_1$, defined by $f(E_1)=\{f(x): x\in E_1\}$. If $f$ happens to be one-to-one (and onto $E_2$), then $f^{-1}(E_2)$ as defined in the preceding paragraph is the same as the direct image of $E_2$ under the <em>point mapping</em> $f^{-1}$ (here the inverse function of $f$). The set mapping $f^{-1}$ make sense even when $f$ is not one-to-one. The dual use of the notation $f^{-1}$ (sometimes a point mapping, sometimes a set mapping) may be a source of confusion, but usually context indicates the proper interpretation.</p>
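<p>The point that the set mapping $f^{-1}$ makes sense even when $f$ is not one-to-one is easy to see concretely; a small illustrative sketch (the function and finite "space" here are made up for illustration):</p>

```python
# The set mapping f^{-1} is defined even when f has no point inverse.
# Here f(x) = x^2 on a small finite "space" S1.
S1 = {-2, -1, 0, 1, 2}

def f(x):
    return x * x

def preimage(B):
    """Set mapping f^{-1}: subsets of the codomain -> subsets of S1."""
    return {x for x in S1 if f(x) in B}

# f is not injective, so the point inverse does not exist, yet:
roots_of_4 = preimage({4})        # the two elements mapping into {4}
roots_of_1_or_4 = preimage({1, 4})
empty = preimage({3})             # nothing in S1 squares to 3
print(roots_of_4, roots_of_1_or_4, empty)
```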
|
40,473 | <p>I found this « Worm on the rubber band » problem in Concrete Mathematics book.</p>
<p>A slow worm $W$ starts at one end of a meter-long rubber band and crawls one centimetre per minute toward the other end.</p>
<p>At the end of each minute, a keeper of the band $K$ stretches it one meter.</p>
<p>Does the worm ever reach the finish?</p>
<p>The given solution is: $W$ reaches the finish if and when $H(n)$ ever surpasses 100, where $H(n)$ is the $n$th Harmonic number.</p>
<p>How does one solve this generalized problem with continuous data, with $W$ crawling with a velocity $u=f(t)$ and $K$ stretching the band with velocity $v=g(t)$, where $u(t)$ and $v(t)$ are both arbitrary functions of time?</p>
<p>For example, I want to find if and when the worm will reach the end of a rubber band of length $L$ if (with $a$ and $b$ constants) $u(t)=a\cdot t$ and $v(t)=b\cdot t$? </p>
| Luboš Motl | 10,599 | <p>Bonjour, Jean-Pierre, the relative position of the insect on the rubber band is
$$r(0)=x_W(0) / x_K(0)$$
and you will have to know it to solve the problem. It will be a number between 0 and 1.</p>
<p>Now, you want to find $t$ such that $x_W(t)=x_K(t)$ i.e. $r(t)=1$. To find it, you must calculate the time derivative of $r(t)$.
$$r'(t) = \frac{u(t)}{x_K(t)} . $$
It's because the relative position of the insect is determined essentially by its velocity $u(t)$ but the longer the rubber band is, i.e. the greater is $x_K(t)$, the less the insect manages to change the relative position.</p>
<p>Given your data, $x_K(t)$ is the indefinite integral (that you have to calculate) of the function $v(t)$ you have mentioned and $u(t)$ is something you mentioned. So both the numerator and denominator are, in principle, known functions of $t$, so you solve the differential equation above.</p>
<p>One may also write the general compact formula for a given $r(0),u(t),v(t)$. But it's more interesting to know a major example. If $v(t)$ is constant, $x_K(t)$ goes like $Kt+x_0$ and if $u(t)$ is constant, the equation above says $r'(t)\sim 1/(t+t_0)$. This is solved by $r(t)\sim \ln((t+t_0)/T)$ and when you invert it, you will find out that the $t$ for which $r(t)=1$ is an exponential function of the parameters. It's a straightforward exercise, and given the risk that this is a homework, I don't want to write every detail and the final answer here.</p>
<p>So the ant will finally get there but you will need an exponentially long time! For more general functions, the computation is more complicated.</p>
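<p>The differential equation $r'(t)=u(t)/x_K(t)$ is easy to cross-check numerically in the constant-speed case; a sketch (the specific values $u=1$, $v=2$, $L=10$ are arbitrary choices):</p>

```python
import math

# Continuous worm problem with constant worm speed u, constant
# stretching speed v, and initial band length L:
# the relative position satisfies r'(t) = u / (L + v*t), r(0) = 0.
u, v, L = 1.0, 2.0, 10.0

# Closed form: r(t) = (u/v) * ln(1 + v*t/L), so r = 1 at
# t* = (L/v) * (exp(v/u) - 1): finite, but exponential in v/u.
t_star = (L / v) * (math.exp(v / u) - 1)

# Euler integration of the ODE as a cross-check.
t, r, dt = 0.0, 0.0, 1e-4
while r < 1.0:
    r += u / (L + v * t) * dt
    t += dt

print(t_star, t)    # the two finish times agree closely
```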
|
2,124,560 | <p>A pentagon ABCDE contains 5 triangles whose areas are each one. The triangles are ABC, BCD, CDE, DEA, and EAB. Find the area of ABCDE?</p>
<p>Is there a theorem for overlapping triangle areas?</p>
| Michael Lugo | 173 | <p>It was asked how to find the area algebraically.</p>
<p>Assume that the pentagon <span class="math-container">$ABCDE$</span> is regular and the area <span class="math-container">$[ABC] = 1$</span>. Let <span class="math-container">$s$</span> be its side length. The angle at B is <span class="math-container">$3\pi/5$</span>. The area of <span class="math-container">$ABC$</span> is then <span class="math-container">$s^2/2 \sin (3\pi/5)$</span>, which must equal 1. The area of the pentagon is <span class="math-container">$1/4 \times 5 \times s^2 \times \cot (\pi/5)$</span> (see for example the Wikipedia article on regular polygons, <a href="https://en.wikipedia.org/wiki/Regular_polygon#Area" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Regular_polygon#Area</a>). Therefore we have</p>
<p><span class="math-container">$$ {[ABCDE] \over [ABC]} = {1/4 \times 5s^2 \times \cot (\pi/5) \over s^2/2 \times \sin(3\pi/5)} = {5 \over 2} {\cot (\pi/5) \over \sin(3\pi/5) } $$</span></p>
<p>So we need to evaluate
<span class="math-container">$$x = {\cot (\pi/5) \over \sin(3\pi/5)}.$$</span> Recalling the definition of the cotangent, and using <span class="math-container">$\sin(3\pi/5)=\sin(\pi-3\pi/5)=\sin(2\pi/5)$</span>, we get</p>
<p><span class="math-container">$$x = {\cos(\pi/5) \over \sin(\pi/5) \sin(2\pi/5)} $$</span></p>
<p>Now using the double angle formula, <span class="math-container">$\sin(2\pi/5) = 2 \sin(\pi/5) \cos(\pi/5)$</span> and so we have</p>
<p><span class="math-container">$$x = {\cos \pi/5 \over 2 \sin^2 \pi/5 \cos \pi/5} = {1 \over 2 \sin^2 \pi/5} $$</span></p>
<p>and knowing that <span class="math-container">$$ \sin {\pi \over 5} = {1 \over 4} \sqrt{10 - 2\sqrt{5}}$$</span> (this is a standard fact one can look up) gives</p>
<p><span class="math-container">$$ x= {8 \over (10 - 2 \sqrt{5})} = {8(10 + 2\sqrt{5}) \over (10+2\sqrt{5})(10-2\sqrt{5})} = {80 + 16 \sqrt{5} \over 80} = {5 + \sqrt{5} \over 5}$$</span></p>
<p>Plugging this back in we get, as desired,</p>
<p><span class="math-container">$${[ABCDE] \over[ABC]} = {5 \over 2} {5 + \sqrt{5} \over 5} = {5 + \sqrt{5} \over 2}. $$</span></p>
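<p>A quick numeric cross-check of this ratio (an illustrative sketch using the shoelace formula on a regular pentagon inscribed in the unit circle):</p>

```python
import math

# Regular pentagon ABCDE inscribed in the unit circle, in order.
verts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
         for k in range(5)]

def shoelace(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2

# [ABCDE] / [ABC], with ABC three consecutive vertices.
ratio = shoelace(verts) / shoelace(verts[:3])
print(ratio, (5 + math.sqrt(5)) / 2)   # both approximately 3.6180339887
```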
|
2,018,631 | <p>Let C(x, y) : "x and y have chatted over the Internet"</p>
<p>where the domain for the variables x and y consists of all students in your class.</p>
<p>a ) There are two students in your class who have not
chatted with each other over the Internet.</p>
<p><strong>My answer:</strong> $\exists x \exists y[(x \not =y) \land \lnot C(x, y)]$</p>
<p>I googled it and mine found correct.</p>
<p>b ) There are <strong>exactly</strong> two students in your class who have not
chatted with each other over the Internet.</p>
<p><strong>My answer:</strong> $\exists x \exists y[(x \not =y) \land \lnot C(x, y) \land \forall a \forall b(\lnot C(a, b) \iff ((a = x \land b = y)\lor(a = y \land b = x)))]$</p>
<p>Am I correct for question b?</p>
| Bram28 | 256,001 | <p>Your answer is correct. In fact, given that you use a biconditional, you can leave out the $\lnot C(x, y)$: </p>
<p>$\exists x \exists y[(x \not =y) \land \forall a \forall b(\lnot C(a, b) \iff ((a = x \land b = y)\lor(a = y \land b = x)))]$</p>
<p>(do you see why this works?)</p>
<p>Please do make sure that you can use $a$ and $b$ as variables in your system: some systems regard those as individual constants.</p>
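<p>Why leaving out the conjunct works can also be verified by brute force over small models; an illustrative sketch enumerating every possible chat relation on a 3-student class:</p>

```python
from itertools import product

students = range(3)
pairs = [(a, b) for a in students for b in students]

def holds(C, with_redundant_conjunct):
    """Truth of: exists x,y with x != y [and not C(x,y)] such that
       forall a,b: not C(a,b) <-> ((a=x and b=y) or (a=y and b=x))."""
    for x in students:
        for y in students:
            if x == y:
                continue
            if with_redundant_conjunct and C[(x, y)]:
                continue
            if all((not C[(a, b)]) == ((a, b) in {(x, y), (y, x)})
                   for a, b in pairs):
                return True
    return False

# Enumerate all 2^9 chat relations C on 3 students.
all_equivalent = all(
    holds(dict(zip(pairs, bits)), True) == holds(dict(zip(pairs, bits)), False)
    for bits in product([False, True], repeat=len(pairs))
)
print(all_equivalent)   # True: the biconditional already forces not C(x,y)
```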
|
1,816,783 | <p>When can one conclude that a character table has non-real entries?</p>
<p>In particular, by constructing the character table for $\mathbb{Z}/3\mathbb{Z}$ or $A_4$ how does one determine that some of the entries will be nonreal?
Is the reason that the same complex values in the table for $\mathbb{Z}/3\mathbb{Z}$ also appear in the table for $A_4$ that $A_4 / \{1,(12)(34),(13)(24),(14)(23)\}\cong \mathbb{Z}/3\mathbb{Z}$?</p>
<p>Here's what I have for $\mathbb{Z}/3\mathbb{Z}$.</p>
<p><a href="https://i.stack.imgur.com/z8Oln.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z8Oln.jpg" alt="enter image description here"></a> </p>
<p>Using short orthogonality relations I conclude that $$1+a^2+b^2=1+c^2+d^2=1+a^2+c^2=1+b^2+d^2=3,$$ and $$1+a+b=1+c+d=1+ac+bd=0,$$ but I don't see how to conclude from this that $a=d=\omega$ and $b=c=\bar{\omega}=\omega^2$, where $\omega=e^{2\pi i/3}.$</p>
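<p>One route to the conclusion, sketched below; note that the orthogonality relations for complex character values involve complex conjugates (e.g. $1+|a|^2+|b|^2=3$ and $1+a\bar c+b\bar d=0$), which is easy to drop by accident:</p>

```latex
% Every nontrivial entry is a value \chi(g) with g^3 = e, hence
\chi(g)^3 = \chi(g^3) = \chi(e) = 1,
\qquad\text{so } a, b, c, d \in \{1, \omega, \omega^2\}.
% Row orthogonality against the trivial character gives 1 + a + b = 0;
% among cube roots of unity the only pair summing to -1 is
\{a, b\} = \{\omega, \omega^2\}, \qquad \{c, d\} = \{\omega, \omega^2\}.
% Hermitian orthogonality of the two nontrivial rows,
1 + a\bar{c} + b\bar{d} = 0,
% rules out (c, d) = (a, b) (that sum would be 1 + 1 + 1 = 3), leaving
a = d = \omega, \qquad b = c = \omega^2
% up to relabeling the two nontrivial characters.
```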
| Patrick Stevens | 259,262 | <p>A question from the Cambridge Part II Maths course on Representation Theory:</p>
<blockquote>
<p>Let $x$ be an element of order $n$ in finite group $G$. Say, without detailed proof, why:</p>
<ol>
<li>if $\chi$ is a character of $G$, then $\chi(x)$ is a sum of $n$th roots of unity;</li>
<li>$\tau(x)$ is real for every character $\tau$ iff $x$ is conjugate to $x^{-1}$;</li>
<li>$x$ and $x^{-1}$ have the same number of conjugates in $G$.</li>
</ol>
<p>Prove that the number of irreducible characters which take only real values is equal to the number of self-inverse conjugacy classes.</p>
</blockquote>
|
1,653,569 | <p>Consider, in the first-order NBG theory of sets, the following axioms:
$$\vdash\exists x\forall y(y\notin x)$$
and
$$\vdash\forall y(y\notin\varnothing)$$</p>
<p>The second one defines the constant $\varnothing$.
Moreover, from the second axiom we get the first by the $\exists$I inference rule.</p>
<p>So my question is: can we replace the axiom $$\vdash\exists x\forall y(y\notin x)$$ with $$\vdash\forall y(y\notin\varnothing)$$?</p>
| Noah Schweber | 28,111 | <p>Well, you can only write that second sentence if you have a constant symbol for "$\emptyset$". Since the language of NBG is $\{\in\}$, this isn't technically a sentence you can write.</p>
<p>Now, you could make an argument that maybe we should add a constant symbol to the language - but clearly we don't need it (by the first formulation).</p>
<p>As a side note, we don't really need the axiom of emptyset - we can get the emptyset via separation applied to an arbitrary set via the formula "$x\not=x$" (this uses the fact that the semantics for first-order logic require the domain to be nonempty).</p>
|
2,354,094 | <p>I've been stumped by this problem:</p>
<blockquote>
<p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that
$$f\circ g=h$$
$$g\circ h=f$$
$$h\circ f=g$$
or prove that no three such functions exist.</p>
</blockquote>
<p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p>
<p>How do I prove this, or how do I find these functions if they do exist?</p>
<p>All help is appreciated!</p>
<p>The functions should also be continuous.</p>
| Jim Ferry | 458,592 | <p>A simple, isometric solution exists in $\mathbb{R}^n$ for $n \ge 2$: let $f$ be a reflection in the first coordinate, and $g$ be a reflection in the second. A bijection between $\mathbb{R}^n$ and $\mathbb{R}$ then yields a solution to the stated problem, though continuity is lost.</p>
|
2,829,121 | <p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p>
<p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p>
<p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
| AgentS | 168,854 | <p>Key thing to keep in mind is that the length of the rope doesn't change. This means you're basically modeling all the points whose "sum of distances from the given two fixed points" doesn't change.</p>
<p>Say $A$ and $B$ are the fixed points and $L$ is the length of the rope, then the point P traces the curve given by the equation :
$$\text{(distance between P and A)} + \text{(distance between P and B)}= L$$</p>
<p>Try plugging in $P = (x,y)$, $A = (-c,0)$ and $B=(c,0)$ and see what you get.</p>
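<p>Carrying that computation out (a sketch; here $2a$ denotes the rope length $L$, and $b^2 = a^2 - c^2$ is introduced as an abbreviation):</p>

```latex
\sqrt{(x+c)^2 + y^2} + \sqrt{(x-c)^2 + y^2} = 2a
% isolate one radical, square, and simplify:
\;\Longrightarrow\; a\sqrt{(x-c)^2 + y^2} = a^2 - cx
% square once more and collect terms:
\;\Longrightarrow\; (a^2 - c^2)\,x^2 + a^2 y^2 = a^2(a^2 - c^2)
% divide by a^2(a^2 - c^2) and set b^2 = a^2 - c^2:
\;\Longrightarrow\; \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 .
```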
|
2,829,121 | <p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p>
<p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p>
<p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
| Doug M | 317,176 | <p>$x^2+y^2 = 1$ is our unit circle</p>
<p>$\frac {x}{a}$ dilates $x$ by a factor of $a.$ That is it stretches everything horizontally by a factor of $a$</p>
<p>Similarly, $\frac {y}{b}$ dilates $y$ by a factor of $b.$</p>
<p>$\frac {x^2}{a^2} + \frac {y^2}{b^2} = 1\\
\left(\frac xa\right)^2 + \left(\frac yb\right)^2 = 1$</p>
<p>The equation of an ellipse just shows how to distort a circle.</p>
<p>While we can derive the distance between the foci and the "length of the rope", neither is entirely obvious from the equation.</p>
<p>The lengths of the major and minor axes $(2a, 2b)$, however, are. </p>
|
26,074 | <p>Okay, so this was labeled as a "fun problem", but I'm having trouble knowing how to approach it.</p>
<p>I'm given: $\lim\limits_{x \to 0^+} f(x) = A$ and $\lim\limits_{x \to 0^-} f(x) = B$.</p>
<p>I need to find (or at least know where to start):</p>
<p>a) $\lim\limits_{x \to 0^+} f(x^3 - x)$</p>
<p>b) $\lim\limits_{x \to 0^-} f(x^3 - x)$</p>
<p>Any insight on how to approach this (or even a solution) would be greatly appreciated.</p>
<p>Thanks</p>
| Community | -1 | <p>Let $z(x)=x^3-x$</p>
<p>You have
$$\lim_{x\rightarrow 0^- }f(z(x)) =\lim_{z\rightarrow 0^+ }f(z) $$ </p>
<p>i.e., $z$ approaches $0$ from above when $x$ approaches $0$ from below: for small $\delta>0$ we have $z(-\delta)=\delta-\delta^3>0$ and $z(\delta)=\delta^3-\delta<0$, and vice versa for the other limit. </p>
<p>So the one-sided limits in $x$ are swapped in $z$; i.e., the answer for a) is $B$ and for b) is $A$. </p>
<p>To get a visual hint, <a href="http://www.wolframalpha.com/input/?i=x%5E3+-x+plot" rel="nofollow">plot</a> $x^3-x$</p>
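<p>A quick numeric look at the sign of $z(x)=x^3-x$ near $0$ makes the swap concrete (an illustrative sketch):</p>

```python
def z(x):
    return x**3 - x

# Just to the right of 0, z is negative; just to the left, positive,
# so the one-sided limits of f(z(x)) swap relative to those of f.
samples = (0.1, 0.01, 0.001)
right_negative = all(z(x) < 0 for x in samples)
left_positive = all(z(-x) > 0 for x in samples)
print(right_negative, left_positive)   # True True
```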
|
1,701,761 | <p><a href="https://i.stack.imgur.com/Qletj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qletj.png" alt="Trapezium "></a> </p>
<blockquote>
<p>Prove that ABED is a parallelogram </p>
</blockquote>
<p><strong>Given:</strong></p>
<ol>
<li><p>ABCD is a trapezium</p></li>
<li><p>F and G are the midpoints of AB and DC respectively</p></li>
<li><p>FHG is a straight line</p></li>
<li><p>AD is equal to and parallel to BE</p></li>
</ol>
<p>My attempts have included trying to show that $\angle BAD + \angle ADE = 180^{\circ}$ and trying to show that the opposite angles are equal. But these have not led me anywhere as at some point I am required to assume that AB and DE are parallel, which is what has to be proven. I'd like a hint; any help is much appreciated.</p>
| Narasimham | 95,860 | <p>Steps 1,2,3 need not be given.</p>
<p>Since AD = BE and AD is parallel to BE, the quadrilateral ABED has one pair of opposite sides that are equal and parallel, so ABED is a parallelogram.</p>
<p>Hence AB = DE and AB is parallel to DE, by the definition and properties of a parallelogram.</p>
|
2,788,002 | <p>If I convert, say, #FF$1919$ to binary, I can do it byte by byte, in three groups like:</p>
<p>FF: $1111$ $1111$</p>
<p>$19$: $0001$ $1001$</p>
<p>$19$: $0001$ $1001$</p>
<p>So can I write that #FF$1919 =$ $1111$ $1111$ $0001$ $1001$ $0001$ $1001$?</p>
<p>Or do I have to write them as separate bytes like above?</p>
| Rhys Hughes | 487,658 | <p>Yes that is absolutely fine. You could have also done it as:
$$F: 1111$$
$$F: 1111$$
$$1: 0001$$
$$9: 1001$$
$$1: 0001$$
$$9: 1001$$
and combined in the same way</p>
<p>So $FF1919: 1111\space 1111\space 0001\space 1001\space 0001\space 1001$</p>
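<p>Because each hexadecimal digit corresponds to exactly 4 bits, the per-digit conversions can simply be concatenated; a quick machine check (illustrative sketch):</p>

```python
# Convert each hex digit of FF1919 to a 4-bit group and concatenate.
bits = "".join(format(int(d, 16), "04b") for d in "FF1919")
print(bits)    # 111111110001100100011001
```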
|
66,634 | <p>Here is a quote from <em>Lectures on Ergodic Theory</em> by Halmos:</p>
<blockquote>
  <p>I cannot resist the temptation of concluding these comments with an alternative "proof" of the ergodic theorem. If $f$ is a complex valued function on the nonnegative integers, write $\int f(n)\,dn=\lim_{n\to\infty} \frac{1}{n}\sum_{j=1}^n f(j)$ and whenever the limit exists call such functions integrable. If $T$ is a measure preserving transformation on a space $X$, then $$ \int\int|f(T^nx)|\,dn\,dx=\int\int|f(T^nx)|\,dx\,dn=\int\int|f(x)|\,dx\,dn=\int|f(x)|\,dx<\infty. $$ Hence, by "Fubini's theorem" (!), $f(T^nx)$ is an integrable function of its two arguments, and therefore, for almost every fixed $x$ it is an integrable function of $n$. Can any of this nonsense be made meaningful?</p>
</blockquote>
<p>Can any of this nonsense be made meaningful using nonstandard analysis? I know that Kamae gave a short proof of the ergodic theorem using nonstandard analysis in <em>A simple proof of the ergodic theorem using nonstandard analysis</em>, Israel Journal of Mathematics, Vol. 42, No. 4, 1982. However, I have to say that I am not satisfied with his proof. It is tricky and not very illuminating, at least for me. Besides, it does not look anything like the so called proof proposed by Halmos. Actually, Kamae's idea can be made standard in a very straightforward manner. See for instance <em>A simple proof of some ergodic theorems</em> by Katznelson and Weiss in the same issue of the Israel Journal of Mathematics. By the way, Kamae's paper is 7 pages and Katznelson-Weiss paper is 6 pages.</p>
<p>To summarize, is there a not necessarily short but conceptually clear and illuminating proof of the ergodic theorem using nonstandard analysis, possibly based on the intuition of Halmos?</p>
| John Galt | 15,214 | <p>There is a short proof in Katok's book, <em>Introduction to the Modern Theory of Dynamical Systems</em>.</p>
|
1,273,789 | <p>Problem: X has mean and variance of 20.<br>
What can be said about $P(0<X<40)$? </p>
<p>Chebyshev formula = $P(|X-\mu|\geq k)\leq\frac{\sigma^2}{k^2}$ </p>
<p>The first step has $P(|X-20|\geq 20)\leq\frac{\sigma^2}{20\times20}$ </p>
<p>My question is: How did they get this part $P(|X-20|\geq 20)$? (Did $P(0<X<40)$ turn into that???). </p>
| André Nicolas | 6,312 | <p>Let $A$ be the event $0\lt X\lt 40$. Then $A$ <strong>fails to happen</strong> if either $X\ge 40$ or $X\le 0$. </p>
<p>The event "$X\ge 40$ or $X\le 0$" happens precisely if $|X-20|\ge 20$. This is clear from the geometry. We have $X\ge 40$ or $X\le 0$ if the <strong>distance</strong> of $X$ from $20$ is $\ge 20$.</p>
<p>In our case, the Chebyshev Inequality tells us that $\Pr(|X-20|\ge 20)$ is $\le \frac{20}{20^2}$, that is, $0.05$.</p>
<p><strong>Remark:</strong> Informally, the Chebyshev Inequality, as you stated it, says that the sum of the probabilities in the "tails" of the distribution is "small." That is equivalent to saying that the probability of landing somewhere in the middle is large.</p>
<p>Going back to the event $A$: since
$$\Pr(0\lt X\lt 40)=1-\Pr(|X-20|\ge 20),$$
we can conclude that $\Pr(0\lt X\lt 40)\ge 1-0.05=0.95$.</p>
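<p>The bound is easy to illustrate by simulation; a sketch (the normal distribution below is just one arbitrary example with mean $20$ and variance $20$; Chebyshev's guarantee holds for any such distribution):</p>

```python
import random

# Chebyshev guarantees P(0 < X < 40) >= 1 - 20/20^2 = 0.95 for *any*
# distribution with mean 20 and variance 20; check one example.
random.seed(0)
mu, sigma = 20.0, 20.0 ** 0.5
n = 100_000
hits = sum(1 for _ in range(n) if 0 < random.gauss(mu, sigma) < 40)
p_hat = hits / n
print(p_hat)    # comfortably above the 0.95 lower bound for a normal
```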
|
67,039 | <p>I just noticed that <code>Exit</code> and <code>Quit</code> can work without brackets i.e. a single</p>
<pre><code>Exit
</code></pre>
<p>or</p>
<pre><code>Quit
</code></pre>
<p>will quit the kernel. Quite surprising!</p>
<p>This isn't mentioned in their documentation. Is there a list of functions with this feature? </p>
<p>Well, I admit I asked this question mainly to show the discovery above :D</p>
| J. M.'s persistent exhaustion | 50 | <p>At the OP's request, I am adding the following:</p>
<p><code>Nothing</code> is a function with very peculiar behavior:</p>
<pre><code>Nothing === Nothing[] === Nothing[x, y]
True
</code></pre>
<p>This means that</p>
<pre><code>Nothing & /@ {{a}, {b}}
</code></pre>
<p>and</p>
<pre><code>Nothing /@ {{a}, {b}}
</code></pre>
<p>and</p>
<pre><code>Nothing @@@ {{a}, {b}}
</code></pre>
<p>all give <code>{}</code> as the result.</p>
|
50,441 | <p>Here is a silly question of mine:</p>
<p>If <code>mathematica</code> takes <code>3</code> seconds to evaluate the function <code>f[3]</code>,
and I then plug this value <code>f[3]</code> into a new function as <code>g[f[3],f[3]]</code> (let's ignore the evaluation time of the new function <code>g</code> itself), how long does it take to evaluate <code>g[f[3],f[3]]</code>: 3 seconds or 6 seconds?</p>
<p>If it is 6 seconds, how can I make <code>mathematica</code> temporarily remember the value of <code>f[3]</code>?</p>
<p>thanks</p>
| Kuba | 5,478 | <p><code>With</code> keeps the code clear for longer cases but sometimes I'd do:</p>
<pre><code>g[#,#] & @ f[3]
</code></pre>
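<p>The caching idea can also be made persistent; in Mathematica the common memoization idiom is <code>f[x_] := f[x] = ...</code>, which stores each computed value as a downvalue. The same idea sketched in Python terms, purely as a cross-language illustration:</p>

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def f(x):
    # Stand-in for the expensive 3-second computation: count calls.
    global calls
    calls += 1
    return x * x

def g(a, b):
    return a + b

# Without caching, g(f(3), f(3)) would pay for f twice ("6 seconds");
# with caching the second f(3) is a free lookup ("3 seconds").
result = g(f(3), f(3))
print(result, calls)    # 18 1
```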
|
76,493 | <p>i'm interested in geometric interpretations of many linear algebra notions (check also related <a href="https://mathoverflow.net/questions/75241/geometric-interpretation-of-matrix-minors">geometric interpretation of matrix minors</a>). it came to me recently that a geometric description of the adjugate matrix (for example in the case of a 3×3 matrix) might be quite hard—feel welcome to fill the gap!—but what caught my attention is the functoriality of the adjugate matrix ($\scriptstyle \mathbf I^\mathrm D = \mathbf I$ and $\scriptstyle (\mathbf{AB})^\mathrm D = \mathbf B^\mathrm D \mathbf A^\mathrm D$); my question is:</p>
<blockquote>
<p>what kind of functor is the adjugating (for linear endomorphisms)?</p>
</blockquote>
<p>it seems to have a strong relationship with the (hermitian) adjoint but has slightly different properties (it commutes with transpose). thanks in advance!</p>
| joel | 17,755 | <p>i found quite a satisfying answer to this question which gives enough trails for further enquiry. the answer involves exterior algebra and gradation (check also hodge duality).</p>
<blockquote>
<p>if <span class="math-container">$\mathrm A\colon V \to V$</span> is a linear map, then it naturally induces a graded operator <span class="math-container">$\mathrm A^*$</span> in the exterior algebra of <span class="math-container">$V$</span>. when <span class="math-container">$V$</span> has finite dimension, there are two pieces of the exterior algebra which are isomorphic to <span class="math-container">$V$</span>, namely <span class="math-container">$\Lambda^1(V)$</span>, the dual space of <span class="math-container">$V$</span>, and <span class="math-container">$\Lambda^{n-1}(V)$</span>, being the space of <span class="math-container">$(n-1)$</span>-multivectors of <span class="math-container">$V$</span>.</p>
<p>the restriction of <span class="math-container">$\mathrm A^*$</span> to <span class="math-container">$\Lambda^1(V)$</span> is the <em>dual map</em> (also called <em>adjoint map</em>). generally the dual space is not canonically isomorphic to <span class="math-container">$V$</span> if it has only linear structure; this changes when some sort of duality is established thanks to an extra structure, e.g. inner product. this is the adjoint linear map acting in <span class="math-container">$V$</span> with matrix <span class="math-container">$\mathbf A^\mathrm T$</span> depending on the choice of basis.
on the other hand <span class="math-container">$\Lambda^{n-1}(V)$</span> is naturally isomorphic to <span class="math-container">$V$</span> (the isomorphism is given by the determinant), and so the restriction of <span class="math-container">$\mathrm A^*$</span> to <span class="math-container">$\Lambda^{n-1}(V)$</span> naturally induces a linear map on <span class="math-container">$V$</span> denoted <span class="math-container">$\operatorname{adj} \mathrm A$</span> with matrix <span class="math-container">$\mathbf A^\mathrm D$</span>. since this is natural, a matrix adjugate can be defined in terms of this.</p>
<p>both notions of adjointness (classical and conjugate transpose) are quite related. the simple one is just duality and makes more sense for operators in general vector spaces (or better in inner product space). the seemingly not-so-simple one is natural, makes sense for operators or matrices, is usually expressed in terms of matrices, and appears often in relation to quadratic forms.</p>
</blockquote>
<p><sub>source: <a href="https://groups.google.com/d/msg/sage-devel/YjImMWVVwo4/5ok9dcxrZFMJ" rel="noreferrer">google groups on sage developement</a></sub></p>
|
190,604 | <p><strong>Background</strong></p>
<p>I am currently trying to solve exercise 1.1.18 in Hatcher's Algebraic Topology. The part of the exercise I am interested in is the following:</p>
<blockquote>
<p>Using the technique in the proof of Proposition 1.14, show that if a space $X$ is obtained from a path-connected subspace $A$ by attaching an n-cell $e^n$ with $n ≥ 2$, then the inclusion $A \hookrightarrow X$ induces a surjection on $\pi_1$ .</p>
</blockquote>
<p>I know that $i:A \hookrightarrow X$ induces a homomorphism $i_*:\pi_1(A)\rightarrow \pi_1(X)$, so I only need to show this is a surjection. I think I understand the idea of the proof, which is to show that every loop $f\in \pi_1(X)$ is homotopic to a loop which is contained entirely in $A$. Hatcher's suggestion is to follow the proof $\pi_1(S^n)=0$ for $n\geq 2$, meaning that we should be able to push the sections of $f$ which are in the attached $n$-cell $e^n$ out. This is causing me a bit of trouble.</p>
<p><strong>Attempt</strong></p>
<p>Since $X$ is defined to be the result of attaching an $n$-cell to $A$ via some attaching map $\varphi:\partial D^n\rightarrow A$, it has the form $X=A \amalg e^n/\sim$, where $x\sim \varphi(x)$ for all $x \in \partial D^n$. Note first that since $A$ and $e^n$ are path connected, the adjunction space $X=A\cup_\varphi e^n$ is path connected. As such, our choice of base point does not affect the structure of $\pi_1(X)$, so let $x_0 \in A$ be the base point of $\pi_1(X)$ we are working over. Let $f \in \pi_1(X,x_0)$. Let $E=\text{Int}(e^n)$ and consider $f^{-1}(E)$. This is an open subset of $(0,1)$, so it is the union of a possibly infinite collection of subsets of $(0,1)$ of the form $(a_i,b_i)$. Let $f_i$ denote the restriction of $f$ to $(a_i,b_i)$. Note that $f_i$ lies in $e^n$ and, in particular, $f(a_i)$ and $f(b_i)$ lie on the boundary of $e^n$, so they are elements of $A$. For $n\geq 2$ we can homotope $f_i$ to a path $g_i$ from $f(a_i)$ to $f(b_i)$ that goes along the boundary of $e^n$, which is homeomorphic to $S^{n-1}$, so it is path connected for $n\geq 2$. Since $e^n$ is homeomorphic to $D^n$, where $n\geq 2$, it is simply connected so $f_i$ and $g_i$ are homotopic. Repeating this process for all $f_i$, we obtain a loop $g$ homotopic to $f$ such that $g(I)\subseteq A$. </p>
<p>What really bothers me about this is how I could form a single homotopy from $f$ to $g$ out of possibly infinitely many individual homotopies from $f_i$ to $g_i$. I believe I need there to be only finitely many $f_i$'s, but I don't see how to show it. </p>
<p><strong>Note:</strong> This is not homework.</p>
| Community | -1 | <p>If you have a little knowledge of the Van-Kampen theorem then you can prove it like this: You have $A \subseteq X$ and you are attaching an $n-cell$ $e^n$. Now recall that you have a characteristic map $\Phi : D^n \to X$ that restricts to $\varphi$ on $S^{n-1}$. We know that $\Phi$ is a homeomorphism between the interior of $D^n$ and its image. So set </p>
<p>$$V = \textrm{Im}\left(\textrm{int}(D^n)\right),\hspace{4mm} U = X - \Phi(\textrm{origin}).$$</p>
<p>Now $V$ is the continuous image of a path-connected space, and $U$ is also path-connected. Their intersection is homeomorphic to the image of $\textrm{int}(D^n) - \{\text{origin}\}$, which is path-connected. Furthermore $X = U \cup V$, and choosing some $x \in U \cap V$ as our basepoint, the Seifert-van Kampen theorem now gives that</p>
<p>$$\pi_1(X,x) \cong \bigg( \pi_1(U,x) \ast \pi_1(V,x)\bigg)/N$$</p>
<p>where $N$ is some normal subgroup in the free product. Now because $U$ is homotopy equivalent to $A$ and $\pi_1(V,x) = 0$, it follows that</p>
<p>$$\pi_1(X,x) \cong \text{some quotient of $\pi_1(A,x)$}$$</p>
<p>via the induced map $\overline{i_\ast}$. Now $i_\ast = \overline{i_\ast} \circ g$ where $g : \pi_1(A) \to \pi_1(A)/N$. The composition of two surjective maps is surjective, from which it follows that $i_\ast$ is surjective.</p>
|
2,425,669 | <p><em>Having fun with the calculator, I realized that :</em></p>
<p><code>(a^c)</code> and <code>(a^b)</code></p>
<p>with <code>c > b</code> and <code>c > 4</code> and <code>b = 2</code></p>
<pre><code>a^c / a^b = a^(c-2)
</code></pre>
<p><strong><em>So, for example:</em></strong></p>
<p><code>3^5 / 3^2 = 27</code> is <strong><em>the same as</em></strong> <code>3^(5-2) = 27</code></p>
<p><strong>I know it's basic, but how is this happening?</strong></p>
<p><strong>What is this formally called? I noticed it while asking "how many times greater is one quantity than the other?"</strong></p>
<p><em>I think it's "exponential growth"</em></p>
| Bernard | 202,857 | <p>A fraction with five factors of $3$ in the numerator and two factors of $3$ in the denominator can be simplified by the elementary rules on fractions: cancel the two common factors, leaving $3\cdot3\cdot3=27$.</p>
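<p>As a side note (not part of the original answer), the general quotient rule $a^c/a^b=a^{c-b}$ is easy to check numerically; a minimal Python sketch using exact integer arithmetic:</p>

```python
# Numeric sanity check of the quotient rule for powers: a^c / a^b = a^(c-b),
# using exact integer division so there is no floating-point error.
for a in (2, 3, 5):
    for b in range(5):
        for c in range(b, 9):
            assert a**c // a**b == a**(c - b)

# The example from the question: 3^5 / 3^2 = 3^(5-2) = 27.
print(3**5 // 3**2)  # 27
```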
|
180,073 | <p>The function <code>FullSimplify</code> can easily reformat this expression</p>
<pre><code>FullSimplify[Cos[omegan t]^2 + Sin[omegan t]^2]
(* output: 1 *)
</code></pre>
<p>However, it cannot simplify the same expression if contained in a larger expression:</p>
<pre><code>sol = DSolve[m*(x''[t] + omegan^2 * x[t]) == B*Exp[I*omegad*t], x, t]
{{x->Function[{t},C[1] Cos[omegan t]+C[2] Sin[omegan t]-(B E^(I omegad t) (Cos[omegan t]^2+Sin[omegan t]^2))/(m (omegad-omegan) (omegad+omegan))]}}
</code></pre>
<p>Result:</p>
<p>$-\frac{B e^{i \omega_d t} \left(\sin ^2(\omega_n t)+\cos ^2(\omega_n t)\right)}{m (\omega_d-\omega_n) (\omega_d+\omega_n)}+c_2 \sin (\omega_n t)+c_1 \cos (\omega_n t)$</p>
<p>Instead of:</p>
<p>$-\frac{B e^{i \omega_d t}}{m (\omega_d^2-\omega_n^2)}+c_2 \sin (\omega_n t)+c_1 \cos (\omega_n t)$</p>
<p>What is the command I'm missing?</p>
| mikado | 36,788 | <p><code>Collect</code> with its optional 3rd argument is very effective here</p>
<pre><code>Collect[x[t] /. Flatten[sol], B, FullSimplify]
(* -((B E^(I omegad t))/(m omegad^2 - m omegan^2)) + C[1] Cos[omegan t] + C[2] Sin[omegan t] *)
</code></pre>
<p>In fact, <code>Collect</code> may not be needed here.</p>
<p>Note that simplification does not reach into the body of <code>Function</code>. You probably need to evaluate it before attempting to simplify.</p>
|
1,938,353 | <p>A detailed solution will be helpful. Given that $a_1=3$ and $a_{n+1}=3^{a_n}$, find the remainder when $a_{2004}$ is divided by 100.</p>
| Stefan4024 | 67,746 | <p>Note that $a_{2004} = 3^{3^{.^{.^{.^{3}}}}}$ where the length of the exponent tower is $2003$. All we need to apply is Euler's theorem. Since $\phi(100) = 40$, we now consider $3^{3^{.^{.^{.^{3}}}}} \bmod {40}$, where the tower length is $2002$. Continue in this manner and we will consider $\phi(40) = 16$, and later on $\phi(16) = 8$.</p>
<p>Now we do a simple back tracking. As we know that $3$ to an odd exponent is equal to $3$ modulo $8$ we have: $3^{3^{.^{.^{.^{3}}}}} \equiv 3 \pmod 8$. Then $3^{3^{.^{.^{.^{3}}}}} \equiv 3^3 \equiv 11 \pmod{16}$. Then $3^{3^{.^{.^{.^{3}}}}} \equiv 3^{11} \equiv 27 \pmod{40}$. And at the end:</p>
<p>$$a_{2004} = 3^{3^{.^{.^{.^{3}}}}} \equiv 3^{27} \equiv 87 \pmod{100}$$</p>
<p><strong>NOTE:</strong> For the exponents I used the same symbol, although the length of the tower varies, but I guess that should be understandable.</p>
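<p>Each step of the back-tracking above can be verified with Python's built-in three-argument <code>pow</code>, which performs modular exponentiation (a checking sketch added here; not part of the original answer):</p>

```python
# Verify the chain of reductions with modular exponentiation.
# pow(base, exp, mod) computes base**exp % mod efficiently.
assert pow(3, 3, 8) == 3      # 3^(odd) ≡ 3 (mod 8)
assert pow(3, 3, 16) == 11    # 3^3 ≡ 11 (mod 16)
assert pow(3, 11, 40) == 27   # 3^11 ≡ 27 (mod 40)
assert pow(3, 27, 100) == 87  # 3^27 ≡ 87 (mod 100)
print(pow(3, 27, 100))  # 87
```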
|
1,938,353 | <p>A detailed solution will be helpful. Given that $a_1=3$ and $a_{n+1}=3^{a_n}$, find the remainder when $a_{2004}$ is divided by 100.</p>
| Parcly Taxel | 357,390 | <p>$$a_1=3$$
$$a_2=3^3=27$$
$$a_3=3^{27}$$
Consider $a_4=3^{3^{27}}$. By Euler's theorem this is congruent to $3^{3^{27}\bmod40}$ modulo 100. Since $3^4\equiv1\bmod40$, $3^{27}\equiv3^3\equiv27\bmod40$. Thus we have the relation
$$3^{3^{27}}\equiv3^{27}\bmod100$$
In other words, exponentiating by 3 does not change the residue. Hence $a_n\equiv87\bmod100$ for $n\ge3$, and the answer is 87.</p>
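<p>Numerically, $87$ is a fixed point of $z\mapsto 3^z\bmod 100$, which is why the residue stabilises; a short Python check (not part of the original answer):</p>

```python
# a_3 = 3^27 ≡ 87 (mod 100), and 87 is a fixed point of z -> 3^z mod 100.
assert pow(3, 27, 100) == 87
assert pow(3, 87, 100) == 87

# Why exponents with the same residue mod 20 give the same power mod 100:
# the multiplicative order of 3 mod 100 is 20, and 27 ≡ 87 ≡ 7 (mod 20).
assert pow(3, 20, 100) == 1
assert 27 % 20 == 87 % 20 == 7
print(pow(3, pow(3, 27, 100), 100))  # 87
```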
|
628,254 | <p>I am searching for some good book which section is devoted to modular arithmetic. I am self learner so I strongly prefer that book has exercises best with answers or solutions. I have CS background and has taken course on discrete mathematics but besides some basic facts on modulo operation it lacked some introduction to modular arithmetic.</p>
| user119261 | 119,261 | <p>May I ask what your goals are? Modular arithmetic gets fun when you get to deal with symmetries; the study of symmetry in this context is called group theory. From there you can look at such things as public key cryptography. There is also quite a lot of fun to be had just working with mod 2. If you want to look at such things as Galois shift registers, which you may know as CRC32, that's another direction you can go in. For an entry into the world of finite mathematics I like Peter Cameron's book on combinatorics. It has lots of games and fun situations to examine. You may want to start with something simpler, such as any introductory level book on group theory. That always deals with modular arithmetic.</p>
<p>Have fun!</p>
|
2,696,998 | <blockquote>
<p>Show that for any integer <span class="math-container">$k ≥ 0$</span> we have</p>
<p><span class="math-container">$$\frac{1}{(1-z)^{k+1}} = \sum\limits_{n=0}^{\infty}\binom {n+k}{n}z^{n}$$</span></p>
<p>and that it converges absolutely for any <span class="math-container">$|z| < 1$</span>.</p>
</blockquote>
<p>I really do not just know how to approach this question.</p>
| Angina Seng | 436,618 | <p>If $T$ is diagonalisable, then $E=E_1\oplus E_2\oplus\cdots\oplus E_k$
where the $E_j$ are the eigenspaces of $T$.
Then prove that any $W$ which is preserved by $T$ is a sum
$W_1+\cdots+W_k$ where $W_j\subseteq E_j$. It may help to note that
the projection maps from $E$ to $E_j$ are polynomials in $T$.</p>
|
951,414 | <p>Consider the absolute value equation:</p>
<p>|x| + |x-2| + |x-4| = 6</p>
<p>How to find the solution(s)?</p>
<p>My attempt:</p>
<p>For |x|, we got x, for x>=0 and -x, for x <0</p>
<p>For |x-2|, we got x-2, for x >= 0 and -(x-2), for x<0</p>
<p>For |x-4|, we got x-4, for x>=0 and -(x-4), for x<0</p>
<p>After this, I'm confused about how to find the solutions. Is there any easy way to find them?</p>
<p>Thanks</p>
| Ben Frankel | 112,897 | <p>Define $n = x - 2$, and rewrite the equation.</p>
<p>$$|n-2| + |n| + |n+2| = 6$$</p>
<p>In this form we can easily see that if $n$ is a solution, so is $-n$. Thus WLOG we will assume that $n$ is positive. We can then remove some redundant $|\cdot|$s.</p>
<p>$$|n-2| + (n) + (n + 2) = 6 \implies |n-2| + 2n = 4$$</p>
<p>Clearly then, because $|n-2| \geq 0$, we have $|n-2| + 2n = 4 \implies n \leq 2$. This means that $n-2 \leq 0$ and so we can remove another $|\cdot|$.</p>
<p>$$-(n-2) + 2n = 4 \implies n = 2$$</p>
<p>And thus we have two solutions, $n = \pm 2$ or $\boxed{x = 0, 4}$.</p>
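<p>A brute-force check of the solution set, using exact rational arithmetic so the equality test is reliable (a Python sketch; not part of the original answer):</p>

```python
from fractions import Fraction

def f(x):
    """Left-hand side |x| + |x-2| + |x-4| of the equation."""
    return abs(x) + abs(x - 2) + abs(x - 4)

# The two claimed solutions satisfy the equation exactly:
assert f(0) == 6 and f(4) == 6

# An exact scan over step-0.01 rationals in [-10, 10] finds no others:
sols = [Fraction(k, 100) for k in range(-1000, 1001)
        if f(Fraction(k, 100)) == 6]
print(sols)  # [Fraction(0, 1), Fraction(4, 1)]
```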
|
717,991 | <p>I've been stuck on this problem for a few weeks now. Any help?</p>
<p>Prove:
$\sum_{i=0}^{n}\prod_{j=0,j\neq i}^{n}\frac{x-x_j}{x_i-x_j}=1$</p>
<p>The sum of lagrange polynomials should be one, otherwise affine combinations of with these make no sense.</p>
<p>EDIT:
Can anybody prove this by actually working out the sum and product? The other proofs make no sense to me. Imagine explaining this to someone who has never heard of lagrange.</p>
| JP McCarthy | 19,352 | <p>HINT: Throw in the values $f(x_i)$: what happens if they are all $1$? The sum then becomes the Lagrange interpolant of the constant function $1$, and interpolation is exact on polynomials of degree at most $n$.</p>
|
2,846,928 | <blockquote>
<p>If I have the a complex number $z \in \mathbb{C}$ with absolute value $|z| = 1$, how do I show that $-i \frac {z-1}{z+1}$ is real? </p>
</blockquote>
| David K | 139,123 | <p>A geometric reason is that $1$ and $-1$ are opposite ends of a diameter of the unit circle, and $z$ is on the circumference.
This implies that $z,$ $1,$ and $-1$ are vertices of a right triangle, with the right angle at $z.$
But $z-1$ and $z+1$ have the same lengths and directions as the legs of that triangle;
In particular, the difference in their directions is $\frac\pi2$ (a right angle).
Hence the ratio of these two numbers has argument $\pm\frac\pi2,$ that is, it is purely imaginary, and multiplication by $-i$ produces a real number. </p>
<hr>
<p>For an algebraic approach, the complex conjugate of $z+1$ is $z^*+1$ ($1$ added to the conjugate of $z$).
Multiply numerator and denominator by this same amount to get a real number in the denominator.
Use the facts that $zz^*=\lvert z\rvert^2$ and that $z-z^*$ is purely imaginary.</p>
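<p>Both arguments can be sanity-checked numerically with Python's complex type; in fact $-i\frac{z-1}{z+1}=\tan(\theta/2)$ for $z=e^{i\theta}$ (a sketch added here, not part of the original answer):</p>

```python
import cmath
import math

# For several points z = e^{i*theta} on the unit circle (avoiding z = -1),
# check that -i (z-1)/(z+1) is real, and in fact equals tan(theta/2).
for theta in (0.3, 1.0, 2.0, -1.2):
    z = cmath.exp(1j * theta)          # |z| = 1
    w = -1j * (z - 1) / (z + 1)
    assert abs(w.imag) < 1e-12         # numerically real
    assert abs(w.real - math.tan(theta / 2)) < 1e-12
```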
|