| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,403,741 | <blockquote>
<p>Solve for $\theta$ the following equation.
$$\sqrt {3} \cos \theta - 3 \sin \theta = 4 \sin 2\theta \cos 3\theta.$$</p>
</blockquote>
<p>I tried writing the sine and cosine expansions, but it is becoming too long. Please help me.</p>
| Michael Rozenberg | 190,319 | <p>Because $13=6+6+1=2+2+9$. </p>
<p>Now we need to check other sums.</p>
|
244,679 | <p>I have a two variable function <code>z[x,y] = f[x,y] + g[x,y]</code>, such that I know the functional form of <code>f[x,y]</code> but not of <code>g[x,y]</code>. I have to do some symbolic calculations with the function <code>z[x,y]</code>, but I would like to keep only the first order in <code>g[x,y]</code> (treating <code>g</code> as small). So, for example, I would like Mathematica to approximate <code>(z[x,y])^3 = (f[x,y] + g[x,y])^3 = f[x,y]^3 + 3*f[x,y]^2*g[x,y]</code>, or <code>(D[z[x,y], x])^2 = (D[f[x,y], x])^2 + 2*D[f[x,y], x]*D[g[x,y], x]</code>. Is there a way to do it? I have tried the most naive way, namely to use <code>Series['exp'[z],{g, 0, 1}]</code>, treating <code>g[x,y]</code> as a parameter rather than a function, but (as expected) it doesn't work. Do you know a way to do it?</p>
<p>Thank you very much in advance for anyone who will reply!</p>
| Adam | 74,641 | <p>The following (and my comment) overlooks the fact that <code>u1</code> and <code>f</code> are dependent.</p>
<hr />
<p>When I look at <code>Normal@f</code> for various values of <code>dos</code>, I find the "edge" elements of <code>f</code> (considered as a <code>dos</code> by <code>dos</code> 2d grid) are 0 and the "inner" elements of <code>f</code> are <code>1.0I-7.0</code>. Here, then, is a quicker way to construct such arrays</p>
<pre><code>f=Join[
0&/@Range@dos,
Join@@(Append[Prepend[1.0I-7.0&/@Range[dos-2],0],0]&/@Range[dos-2]),
0&/@Range@dos]
</code></pre>
<p>There are definitely faster/more efficient ways to construct <code>f</code>, but I think this may be a case of over-early-optimization. If you're interested in situations for arbitrary rules <code>lll</code>, it may simply not be possible to speed things up.</p>
<p><s>A good rule of thumb is to present the computationally hardest example you can think of in the question (i.e. a randomized set of rules, although this is probably not what you're interested in). Of course, simple cases are useful for understanding the problem at hand. And then there's the necessity of splitting up your problem into small, single-question-size parts.</p>
<p>I think there's more that can be elucidated with further code exchange.</s></p>
|
<p>Two people have two uniform sticks of equal length, each of which can be cut at any point. Each person cuts his stick into $n$ parts ($n$ is an odd number). Each person's $n$ parts are then permuted randomly and compared with the other person's pieces one by one. When one person's piece is longer than the other's, he gets one point. The person with more points wins the game. How can one of the players maximize the probability of winning? What is the best strategy for cutting the stick?</p>
| rschwieb | 29,335 | <p>The previous answers that basically say "No, addition/subtraction is not defined between matrices of different dimensions" are the correct answer to your question.</p>
<p>Actually though, something like this is done formally in <a href="http://en.wikipedia.org/wiki/Clifford_algebra" rel="nofollow">Clifford algebras</a>. There are elements of the algebra identified with scalars, elements identified as vectors, (and even more elements with different identifications). Because they all live in an algebra, addition, subtraction and multiplication are defined between all of them.</p>
<p>However, this last item is probably not the answer you're looking for, because the addition is just formal: a scalar $\lambda$ plus a vector $v$ is just "$\lambda +v$", and there isn't a formula which presents it as another scalar, or another vector.</p>
|
167,446 | <p>Let $p$ be a prime number, $C_p$-cyclic group of order $p$, and $G$ an elementary p-group of order $p^n$. Let us denote by Cext$(G,C_p)$ the group of all central extensions of $C_p$ by $G$. Is the number of non isomorphic groups in Cext$(G,C_p)$ known as a function of $n$? </p>
| Derek Holt | 35,840 | <p>I voted to close because I was unsure which way around the extension went but, as Yves said, the question is almost trivial if $C_p^n$ is the normal subgroup.</p>
<p>So, suppose that $N \unlhd G$ with $N=C_p$ and $G/N \cong C_p^n$.</p>
<p>Recall that a $p$-group of this form is called <em>extraspecial</em> if $N=Z(G)$. It is a standard result that extraspecial groups have order $p^{2k+1}$ for some $k \ge 1$, and that for each $p$ and $k$ there are exactly two isomorphism classes of extraspecial groups. (They all arise as central products of extraspecial groups of order $p^3$. For $p$ odd, one of these groups has exponent $p$ and the other does not. For $p=2$, they are central products of $D_8$ and $Q_8$ and the isomorphism type depends on the parity of the number of $D_8$s or $Q_8$s.)</p>
<p>For each $k \ge 1$, we can define a group $S_{p,k}$, which has order $p^{2k+2}$, and is a central product of an extraspecial group of order $p^{2k+1}$ with $C_{p^2}$. It is not hard to show the two types of extraspecial groups give rise to isomorphic groups $S_{p,k}$.</p>
<p>So, in the problem, let $G= \langle z,x_1,x_2,\ldots,x_n \rangle$ with $z \in N$, and order the generators such that, for some $i$, $Z(G) = \langle z,x_{i+1},x_{i+2},\ldots,x_n \rangle$ .</p>
<p>If $i=0$, then $G$ is abelian and $G \cong C_p^{n+1}$ or $C_{p^2} \times C_p^{n-1}$. So suppose that $i>0$.</p>
<p>Then $x_1,\ldots,x_i$ generate an extraspecial group, so $i=2k$ is even. Now there are two cases.</p>
<p>If $Z(G)$ is elementary abelian, then $G \cong E \times C_p^{n-2k}$, where $E$ is extraspecial of order $p^{2k+1}$. For each $k$ with $0 < 2k \le n$, there are two isomorphism types of groups of this form, one for each of the two types of extraspecial group.</p>
<p>Otherwise $Z(G) \cong C_{p^2} \times C_p^{n-2k-1}$, and $G \cong S_{p,k} \times C_p^{n-2k-1}$. For each $k$ with $0 \le 2k \le n-1$, there is a single isomorphism class of groups of this form.</p>
|
2,651,054 | <p>I have this expression:
$$(x + y + z')(x' + y' + z)$$ which I am trying to simplify. I decide to multiply it out in order to get, $${\color{red}{(xx')}}+(xy')+(xz)+(yx')+{\color{red}{(yy')}}+(yz)+(z'x')+(z'y')+{\color{red}{(z'z)}}.$$
I know that the $xx', yy'$ and $zz'$ would just be $0$, however, now I am stuck. I can't seem to find something to pull out and simplify further.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>$$z^3+2=a_0+a_1(z-1)+a_2(z-1)^2+a_3(z-1)^3.$$</p>
<p>$$ z=1 \implies a_0 =3$$ Differentiate, you get $$3z^2 =a_1+2a_2 (z-1)+ 3a_3(z-1)^2$$ </p>
<p>$$ z=1 \implies a_1=3$$</p>
<p>Differentiate, you get</p>
<p>$$ 6z=2a_2+6a_3(z-1)$$</p>
<p>$$ z=1 \implies a_2=3$$</p>
<p>Differentiate, you get</p>
<p>$$ a_3 =1$$</p>
<p>Thus $$z^3+2=3+3(z-1)+3(z-1)^2+(z-1)^3.$$</p>
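A quick numeric sanity check of this expansion (an illustrative Python sketch, not part of the original answer):

```python
# Verify z^3 + 2 == 3 + 3(z-1) + 3(z-1)^2 + (z-1)^3 at sample points.
def lhs(z):
    return z**3 + 2

def rhs(z):
    w = z - 1
    return 3 + 3*w + 3*w**2 + w**3

for z in [-2.0, 0.0, 1.0, 2.5, 10.0]:
    assert abs(lhs(z) - rhs(z)) < 1e-9
```

Since both sides are cubics, agreement at more than three points already forces them to be identical.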
|
2,180,700 | <p>A and B toss a fair coin 10 times. In each toss, if its a head A's score gets incremented by 1, if its a tail B's score gets incremented by 1.</p>
<p>After 10 tosses, the person with the greatest score wins the game.</p>
<p>What is the probability that A wins?</p>
<p>And if B alone gets an extra toss. What is the probability that A wins?</p>
<p>According to me,
The cases where A can win are </p>
<p>(Score of A, Score of B)</p>
<p>(6,4)</p>
<p>(7,3)</p>
<p>(8,2)</p>
<p>(9,1)</p>
<p>(10,0)</p>
<p>These are A's winning cases. Now I am confused on how to proceed.</p>
<p>One method I can think of is that in each of these 5 cases the probability of them happening is (1/2)^10. So the probability of A winning is 5*(1/2)^10.</p>
<p>But I think I am not taking into consideration the various occurrences of the winning tosses from the total tosses.</p>
<p>So should the probability of A winning be like </p>
<p>(10C6 + 10C7 + 10C8 + 10C9 + 10C10 ) / 2^10</p>
<p>Which is the number of possible outcomes for A divided by the total number of outcomes. Where 10C6 is the number of ways of selecting 6 from 10 items</p>
| user247327 | 247,327 | <p>It should be obvious, from symmetry, that, in the first case, where the coin is flipped 10 times, A and B have the same probability of winning. But if there are 5 heads and 5 tails, neither wins. The probability of that is $(1/2)^{10}\begin{pmatrix}10 \\ 5 \end{pmatrix}= \frac{252}{1024}= \frac{63}{256}$, so the probability that there is no tie, that either A or B wins, is $1- \frac{63}{256}= \frac{193}{256}$. Since A and B are equally likely to win, the probability A wins is half that, $\frac{193}{512}$.</p>
<p>I don't understand what you mean by "if B alone gets an extra toss." In the original question, it didn't matter who flipped the coin, just whether the flip resulted in "heads" or "tails". Are you just saying that B wins if "heads" and "tails" come up an equal number of times? In that case, the probability that A wins is the <strong>same</strong>. The only difference is that, now, the probability that B wins is the probability that A does NOT win.</p>
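The winning probability can also be checked directly by summing binomial terms (an illustrative Python sketch, not from the original thread):

```python
from math import comb

# A wins iff there are more than 5 heads in 10 tosses.
p_a_wins = sum(comb(10, k) for k in range(6, 11)) / 2**10

# Cross-check via symmetry: P(A wins) = (1 - P(tie)) / 2.
p_tie = comb(10, 5) / 2**10
assert abs(p_a_wins - (1 - p_tie) / 2) < 1e-12
```

Both routes give 193/512.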
|
957,940 | <p>I'm "walking" through the book "A walk through combinatorics" and stumbled on an example I don't understand. </p>
<blockquote>
<p><strong>Example 3.19.</strong> A medical student has to work in a hospital for five
days in January. However, he is not allowed to work two consecutive
days in the hospital. In how many different ways can he choose the
five days he will work in the hospital?</p>
<p><strong>Solution</strong>. The difficulty here is to make sure that we do not choose two consecutive days. This can be assured by the following
trick. Let $a_1, a_2, a_3, a_4, a_5$ be the dates of the five days of
January that the student will spend in the hospital, in increasing
order. Note that the requirement that there are no two consecutive
numbers among the $a_i$, and $1 \le a_i \le 31$ for all $i$ is equivalent
to the requirement that $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$. In other words, there is an obvious bijection between the
set of 5-element subsets of [31] containing no two consecutive
elements and the set of 5-element subsets of [27].</p>
<p>*** Instead of choosing the numbers $a_i$, we can choose the numbers $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$, that is, we can
simply choose a five-element subset of [27], and we know that there
are $\binom{27}{5}$ ways to do that.</p>
</blockquote>
<p>What I don't understand here is $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$:</p>
<ul>
<li>Why do the subtracting numbers increment with every other $a_i$? </li>
<li>Why 27?</li>
</ul>
<p>And the very last sentence (***) is unclear to me.</p>
<ul>
<li>Why is there no talk about "non-consecutive"? Why choosing 5 elements of 27 is equivalent to choosing 5 non-consecutive elements out of 31? I miss the connection. </li>
</ul>
<p>I'd be very thankful if you could help me to understand this example!</p>
| Chris Taylor | 4,873 | <p>Let's look at a simpler example - choosing 3 days from 6, such that there are no two consecutive days. Obviously there are four ways to do this - any of $(1,3,5)$, $(1,3,6)$, $(1,4,6)$ or $(2,4,6)$.</p>
<p>Formalizing the problem, you need to choose $(a_1,a_2,a_3)$ such that $1 \leq a_i \leq 6$ and there are no consecutive choices. Without loss of generality, let $a_1 < a_2 < a_3$ (which ensures the days are unique). Since you require $a_1$ and $a_2$ to be at least two days apart, you require $a_1 \leq a_2 - 2$, which is equivalent to $a_1 < a_2-1$ (since the $a_i$ are integers). You also require $a_2 \leq a_3-2$, which is equivalent to $a_2 < a_3-1$, which is equivalent to $a_2-1 < a_3-2$. Clearly you require $a_3 \leq 6$, which is equivalent to $a_3-2 \leq 4$, so you can put your constraints together as</p>
<p>$$1 \leq a_1 < a_2-1 < a_3-2 \leq 4$$</p>
<p>Now defining $n_i = a_i - i + 1$ you get</p>
<p>$$1 \leq n_1 < n_2 < n_3 \leq 4$$</p>
<p>and the solution is obvious.</p>
<p>Generalizing to the case where you have $N$ days and need to choose $K$ of them with no two adjacent, you have</p>
<p>$$1 \leq a_1 < a_2-1 < a_3-2 < a_4-3 < \cdots < a_K - K + 1 \leq N-K+1$$</p>
<p>which should allow you to solve your problem.</p>
|
957,940 | <p>I'm "walking" through the book "A walk through combinatorics" and stumbled on an example I don't understand. </p>
<blockquote>
<p><strong>Example 3.19.</strong> A medical student has to work in a hospital for five
days in January. However, he is not allowed to work two consecutive
days in the hospital. In how many different ways can he choose the
five days he will work in the hospital?</p>
<p><strong>Solution</strong>. The difficulty here is to make sure that we do not choose two consecutive days. This can be assured by the following
trick. Let $a_1, a_2, a_3, a_4, a_5$ be the dates of the five days of
January that the student will spend in the hospital, in increasing
order. Note that the requirement that there are no two consecutive
numbers among the $a_i$, and $1 \le a_i \le 31$ for all $i$ is equivalent
to the requirement that $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$. In other words, there is an obvious bijection between the
set of 5-element subsets of [31] containing no two consecutive
elements and the set of 5-element subsets of [27].</p>
<p>*** Instead of choosing the numbers $a_i$, we can choose the numbers $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$, that is, we can
simply choose a five-element subset of [27], and we know that there
are $\binom{27}{5}$ ways to do that.</p>
</blockquote>
<p>What I don't understand here is $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$:</p>
<ul>
<li>Why do the subtracting numbers increment with every other $a_i$? </li>
<li>Why 27?</li>
</ul>
<p>And the very last sentence (***) is unclear to me.</p>
<ul>
<li>Why is there no talk about "non-consecutive"? Why choosing 5 elements of 27 is equivalent to choosing 5 non-consecutive elements out of 31? I miss the connection. </li>
</ul>
<p>I'd be very thankful if you could help me to understand this example!</p>
| Florian D'Souza | 176,860 | <p>The number 27 is there because he has to work 5 days in January, and January has 31 days. So, after he picks the 1st day to work, he can only pick from among 30 days for the 2nd day, et cetera until he has 27 days from which he can pick his 5th day.</p>
<p>The subtractions, I believe, are just illustrating that once the student picks his 1st day, the 2nd day has to have at least 1 day between it and the 1st, and these gaps increase for the 3rd, 4th, and 5th day. For example, let's say he chooses to work January 1st. Then, he cannot work January 2nd, but he could work January 3, 4,.... So, if our $a_1 = 1$, then $a_2=3, 4, \dots$ or $a_1 < a_2 - 1$. You can just carry this reasoning forward to see why you have to subtract 2, 3, and 4 days as well.</p>
|
957,940 | <p>I'm "walking" through the book "A walk through combinatorics" and stumbled on an example I don't understand. </p>
<blockquote>
<p><strong>Example 3.19.</strong> A medical student has to work in a hospital for five
days in January. However, he is not allowed to work two consecutive
days in the hospital. In how many different ways can he choose the
five days he will work in the hospital?</p>
<p><strong>Solution</strong>. The difficulty here is to make sure that we do not choose two consecutive days. This can be assured by the following
trick. Let $a_1, a_2, a_3, a_4, a_5$ be the dates of the five days of
January that the student will spend in the hospital, in increasing
order. Note that the requirement that there are no two consecutive
numbers among the $a_i$, and $1 \le a_i \le 31$ for all $i$ is equivalent
to the requirement that $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$. In other words, there is an obvious bijection between the
set of 5-element subsets of [31] containing no two consecutive
elements and the set of 5-element subsets of [27].</p>
<p>*** Instead of choosing the numbers $a_i$, we can choose the numbers $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$, that is, we can
simply choose a five-element subset of [27], and we know that there
are $\binom{27}{5}$ ways to do that.</p>
</blockquote>
<p>What I don't understand here is $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$:</p>
<ul>
<li>Why do the subtracting numbers increment with every other $a_i$? </li>
<li>Why 27?</li>
</ul>
<p>And the very last sentence (***) is unclear to me.</p>
<ul>
<li>Why is there no talk about "non-consecutive"? Why choosing 5 elements of 27 is equivalent to choosing 5 non-consecutive elements out of 31? I miss the connection. </li>
</ul>
<p>I'd be very thankful if you could help me to understand this example!</p>
| najayaz | 169,139 | <p>These kind of problems can be generalized to a single formula in the following manner:</p>
<p>Suppose I have $n$ objects arranged in a row and I pick any $r$ objects such that no two of them are consecutive. I want to calculate in how many ways I can do this. To do that I first randomly select $r$ objects and take them out. Now there are $n-r$ objects left in the line, and as a consequence there are $n-r+1$ gaps (including the two ends) into which the removed objects can be returned. The number of ways in which I can choose $r$ of these gaps is $\binom{n-r+1}{r}$. Since selection and deselection are equivalent in combinatorics, this is the answer to my problem.</p>
<p>Now coming to your problem, in January there are 31 days and you have to select any 5 non-consecutive ones. The formula says that the answer would be:$$\binom{31-5+1}{5}$$which is nothing but $\binom{27}{5}$.</p>
<p>I'm sorry for not adhering to the solution in your book but I really think that this way is easier to understand.</p>
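The general formula can be cross-checked against brute-force enumeration on small cases (an illustrative Python sketch, not part of the original answer):

```python
from itertools import combinations
from math import comb

def count_nonconsecutive(n, r):
    # Brute-force count of r-subsets of {1,...,n} with no two adjacent.
    return sum(
        1
        for s in combinations(range(1, n + 1), r)
        if all(b - a >= 2 for a, b in zip(s, s[1:]))
    )

# The formula C(n-r+1, r) matches brute force on all small cases.
for n in range(1, 12):
    for r in range(0, n + 1):
        assert count_nonconsecutive(n, r) == comb(n - r + 1, r)
```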
|
2,942,263 | <p>I am curious whether there is an algebraic verification for <span class="math-container">$y = x + 2\sqrt{x^2 - \sqrt{2}x + 1}$</span> having its minimum value of <span class="math-container">$\sqrt{2 + \sqrt{3}}$</span> at <span class="math-container">$\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}$</span>. I have been told the graph of it is that of a hyperbola.</p>
| Cesareo | 397,348 | <p>Without derivatives,</p>
<p>Considering the functions</p>
<p><span class="math-container">$$
f(x,y)=y-x-2\sqrt{x^2-\sqrt 2 x+1} = 0\\
y = \lambda
$$</span></p>
<p>Their intersection is at the solution for</p>
<p><span class="math-container">$$
f(x,\lambda)=\lambda-x-2\sqrt{x^2-\sqrt 2 x+1} = 0
$$</span></p>
<p>or squaring</p>
<p><span class="math-container">$$
(x-\lambda)^2-4(x^2-\sqrt 2 x+1)=0
$$</span></p>
<p>Solving for <span class="math-container">$x$</span> we have</p>
<p><span class="math-container">$$
x = \frac{1}{3} \left(2 \sqrt{2}\pm 2 \sqrt{\lambda ^2-\sqrt{2} \lambda -1}-\lambda\right)
$$</span></p>
<p>but at tangency between <span class="math-container">$f(x,y)=0, y = \lambda$</span> we have</p>
<p><span class="math-container">$$
\lambda ^2-\sqrt{2} \lambda -1 = 0\Rightarrow \lambda = \frac{1}{2} \left(\sqrt{2}+\sqrt{6}\right)
$$</span></p>
<p>as the feasible minimum.</p>
<p><a href="https://i.stack.imgur.com/qSYQZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qSYQZ.jpg" alt="enter image description here"></a></p>
|
2,942,263 | <p>I am curious whether there is an algebraic verification for <span class="math-container">$y = x + 2\sqrt{x^2 - \sqrt{2}x + 1}$</span> having its minimum value of <span class="math-container">$\sqrt{2 + \sqrt{3}}$</span> at <span class="math-container">$\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}$</span>. I have been told the graph of it is that of a hyperbola.</p>
| roman | 53,017 | <p>One of the approaches may be as follows:</p>
<p>Suppose there exists some <span class="math-container">$a$</span> which is the minimum. Then:</p>
<p><span class="math-container">$$
\begin{align}
x + 2\sqrt{x^2 - \sqrt2 x+1} &= a \\
2\sqrt{x^2 - \sqrt2 x+1} &= a-x
\end{align}
$$</span></p>
<p>Square both sides:</p>
<p><span class="math-container">$$
(a-x)^2 = 4x^2 - 4\sqrt2x + 4
$$</span></p>
<p>Applying some transformations you can get:</p>
<p><span class="math-container">$$
3x^2 + x(2a-4\sqrt2) + 4 -a^2 =0
$$</span></p>
<p>Now you want the discriminant to be equal to zero which means only one root will exist, so:</p>
<p><span class="math-container">$$
D = 16a^2 - 16\sqrt2a-16 = 16(a^2 - \sqrt2a - 1) =0
$$</span></p>
<p>The equation in terms of <span class="math-container">$a$</span> has two solutions:</p>
<p><span class="math-container">$$
a_1 = {1-\sqrt3 \over \sqrt2} \\
a_2 = {1+\sqrt{3} \over \sqrt2}
$$</span></p>
<p>If you plug them into the initial equation only one of them is going to produce a valid statement. Therefore that will be your minimum, which appears to be <span class="math-container">${1+\sqrt3 \over \sqrt2}$</span></p>
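A numeric check of the minimum (an illustrative Python sketch, not part of the original answer):

```python
from math import sqrt

def y(x):
    # The function from the question; the radicand is always positive.
    return x + 2 * sqrt(x**2 - sqrt(2) * x + 1)

# Scan a grid and compare the minimum with (1 + sqrt(3)) / sqrt(2),
# which equals sqrt(2 + sqrt(3)).
xs = [-3 + i * 1e-4 for i in range(60001)]
y_min = min(y(x) for x in xs)
expected = (1 + sqrt(3)) / sqrt(2)
assert abs(y_min - expected) < 1e-6
assert abs(expected - sqrt(2 + sqrt(3))) < 1e-9
```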
|
1,002,719 | <p>If we have</p>
<p>$f: \{1, 2, 3\} \to \{1, 2, 3\}$</p>
<p>and</p>
<p>$f \circ f = id_{\{1,2,3\}}$</p>
<p>is the following then always true for every function?</p>
<p>$f = id_{\{1,2,3\}}$</p>
| Milo Brandt | 174,927 | <p>Notice that we could easily show that $f$ is bijective, since if $f\circ f$ is bijective, as the identity is, it must be that $f$ is injective (since were it not, $f\circ f$ could not be either) and that $f$ is surjective (for the same reason).</p>
<p>In particular, this implies that $f$ will be a member of the symmetric group $S_3$. We could apply <a href="http://en.wikipedia.org/wiki/Cauchy's_theorem_(group_theory)" rel="nofollow">Cauchy's theorem</a> to show that there must be some non-identity element $f$ of $S_3$ such that $f^2$ is the identity (where $f^2=f\circ f$ - an iterate of $f$) - which implies that your statement is false.</p>
<p>However, interestingly, a consequence of <a href="http://en.wikipedia.org/wiki/Lagrange's_theorem_(group_theory)" rel="nofollow">Lagrange's theorem</a> is that $f^6$ is always the identity, which in turn can be used to show that if $f^5=f\circ f\circ f\circ f\circ f$ is the identity, then $f$ is as well.</p>
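The counterexamples guaranteed by Cauchy's theorem can be exhibited concretely by enumeration (an illustrative Python sketch; permutations are represented as tuples `p` with `p[i-1]` the image of `i`):

```python
from itertools import permutations

identity = (1, 2, 3)

def compose(f, g):
    # (f o g)(i) = f(g(i)) for permutations of {1, 2, 3}.
    return tuple(f[g[i - 1] - 1] for i in (1, 2, 3))

# Non-identity permutations f with f o f = id: the three transpositions.
involutions = [
    p for p in permutations((1, 2, 3))
    if compose(p, p) == identity and p != identity
]
assert len(involutions) == 3
```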
|
458,088 | <p>I would like to find an approximation when $ n \rightarrow\infty$ of $ \frac{n!}{(n-2x)!}(n-1)^{-2x} $. Using Stirling formula, I obtain $$e^{\frac{-4x^2+x}{n}}. $$ The result doesn't seem right!</p>
<p>Below is how I derive my approximation. I use mainly Stirling's approximation and $e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n$.</p>
<p>$$\frac{n!}{(n-1)^{2 x} (n-2 x)!}\approx \frac{\left(\sqrt{2 \pi } n^{n+\frac{1}{2}} e^{-n}\right) (n-1)^{-2 x}}{\sqrt{2 \pi } e^{2 x-n} (n-2 x)^{-2 f+n+\frac{1}{2}}}\approx \frac{n^{n+\frac{1}{2}} e^{-2 x} n^{-2 x} \left(1-\frac{1}{n}\right)^{-2 x}}{(n-2 x)^{n-2 x+\frac{1}{2}}}\approx e^{-2 x} \left(\frac{n}{n-2 x}\right)^{n-2 x+\frac{1}{2}} \left(\left(1-\frac{1}{n}\right)^n\right)^{-\frac{2 x}{n}}\approx e^{-2 x} \left(\frac{n}{n-2 x}\right)^{n-2 x+\frac{1}{2}} e^{\frac{2 x}{n}}\approx \left(\frac{n-2 x}{n}\right)^{-n+2 x-\frac{1}{2}} \left(\frac{n-2 x}{n}\right)^n e^{\frac{2 x}{n}}\approx \left(\frac{n-2 x}{n}\right)^{2 x-\frac{1}{2}} e^{\frac{2 x}{n}}\approx \left(\left(1-\frac{2 x}{n}\right)^n\right)^{\frac{2 x}{n}-\frac{1}{2 n}} e^{\frac{2 x}{n}}\approx \left(e^{-2 x}\right)^{\frac{2 x}{n}-\frac{1}{2 n}}=e^{\frac{x-4 x^2}{n}}$$</p>
<p>I would appreciate your input!</p>
| zyx | 14,120 | <p>Online topology game illustrating a recent theorem</p>
<p><a href="http://www.sci.osaka-cu.ac.jp/math/OCAMI/news/gamehp/etop/gametop.html" rel="nofollow">http://www.sci.osaka-cu.ac.jp/math/OCAMI/news/gamehp/etop/gametop.html</a></p>
|
1,329,398 | <p>So, I've posted a question regarding Wikipedia's quartic page. This was from the first question.</p>
<blockquote>
<p>I'm trying to implement the general quartic solution for use in a ray tracer, but I'm having some trouble. The solvers I've found do cause some strange false negatives leaving holes in the tori I'm testing with.</p>
<p>Most implementations use the depressed quartic solutions, I don't understand the math involved and can't figure out why I'm having false non-intersections (link to layman explanation would be great). So I'm trying to implement the general solution at <a href="https://en.wikipedia.org/wiki/Quartic_function#General_formula_for_roots" rel="nofollow noreferrer">this wikipedia page</a>. I got the stuff up until the special cases implemented, but at that point I have an issue.</p>
</blockquote>
<p>With lots of rays being traced, most of the special cases become common. I've found a set of coefficients that <a href="http://www.wolframalpha.com/input/?i=0.999999999x%5E4-24.591812430x%5E3%2B226.372338960x%5E2-924.44347730x%2B1412.89015972" rel="nofollow noreferrer">Wolfram Alpha tells me has two real roots</a>, but my code was just returning NaN. On further searching I found my S was coming up as <span class="math-container">$\sqrt{-4.9 \times 10^{-11}}$</span>. This is a floating-point precision error, meaning the radicand should equate to 0, so I need the special case for S=0. The article says we need to "change choice of cubic root in Q", but it does not explain how to do this. I did try changing the sign of Q when S=0, but that doesn't work. Does anyone know what this means and how I can do it?</p>
| asmeurer | 781 | <p>As noted in a comment by Robert Israel, if you are using floating point numbers, it's better to use a root-finding algorithm than the exact formula for quartic roots, as root finding algorithms tend to have much better numerical stability, and avoid issues like this one. They also tend to work much more generally (i.e., they won't be limited to just a single degree of polynomial, or just polynomials of degree < 5). </p>
|
49,074 | <p>It might sound silly, but I am always curious whether Hölder's inequality $$\sum_{k=1}^n |x_k\,y_k| \le \biggl( \sum_{k=1}^n |x_k|^p \biggr)^{\!1/p\;} \biggl( \sum_{k=1}^n |y_k|^q \biggr)^{\!1/q}
\text{ for all }(x_1,\ldots,x_n),(y_1,\ldots,y_n)\in\mathbb{R}^n\text{ or }\mathbb{C}^n.$$
can be derived from the Cauchy-Schwarz inequality.</p>
<p>Here $\frac{1}{p}+\frac{1}{q}=1$, $p>1$.</p>
| Did | 6,179 | <p>Yes it can, assuming nothing more substantial than the fact that midpoint convexity implies convexity. Here are some indications of the proof in the wider context of the integration of functions.</p>
<p>Consider positive $p$ and $q$ such that $1/p+1/q=1$ and positive functions $f$ and $g$ sufficiently integrable with respect to a given measure for all the quantities used below to be finite. Introduce the function $F$ defined on $[0,1]$ by
$$
F(t)=\int f^{pt}g^{q(1-t)}.
$$
One sees that
$$
F(0)=\int g^q=\|g\|_q^q,\quad F(1)=\int f^p=\|f\|_p^p,\quad F(1/p)=\int fg=\|fg\|_1.
$$
Furthermore, for every $t$ and $s$ in $[0,1]$,
$$
F({\textstyle{\frac12}}(t+s))=\int h_th_s,\qquad h_t=f^{pt/2}g^{q(1-t)/2},\ h_s=f^{ps/2}g^{q(1-s)/2},
$$
hence Cauchy-Schwarz inequality yields
$$
F({\textstyle{\frac12}}(t+s))^2\le\int h_t^2\cdot\int h_s^2=F(t)F(s).
$$
Thus, the function $(\log F)$ is midpoint convex hence convex. In particular, $1/p=(1/p)1+(1/q)0$ with $1/p+1/q=1$ hence
$$
F(1/p)\le F(1)^{1/p}F(0)^{1/q},
$$
which is HΓΆlder's inequality $\|fg\|_1\le\|f\|_p\|g\|_p$.</p>
|
3,345,063 | <p><a href="https://i.stack.imgur.com/T3Ue9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T3Ue9.png" alt="Given a triangle whose apex angle is \theta" /></a></p>
<p><strong>Given a triangle with two circles and apex angle equals <span class="math-container">$\theta$</span>.</strong></p>
<p><em><strong>Find the ratio of radius of the two circles in terms of <span class="math-container">$\theta$</span></strong></em>.</p>
<p>My approach: treat the circles as incircle and excircle by drawing a line parallel to base.<br />
We know that <span class="math-container">$$ \frac{r}{R}=\frac{s-a}{s}$$</span> where <span class="math-container">$s=$</span>semiperimeter, and <span class="math-container">$$\sin \frac\theta 2= \sqrt{\frac{(s-b)(s-c)}{bc}},$$</span>
and proceeding for an equilateral triangle and a right-angled triangle,
I get <span class="math-container">$$ \left(1+\sin \frac\theta 2\right)/\left(1-\sin \frac\theta 2\right).$$</span></p>
<p>I have proved it for the following angles: <span class="math-container">$\frac{\pi}{4}$</span> and <span class="math-container">$\frac{\pi}{3}$</span>.</p>
<p>But I am yet to prove it for a scalene triangle.</p>
<p>Thanks.</p>
| Henry | 6,460 | <p>The question does not actually mention the base edge</p>
<p>You introduced the tangent where the two circles touch, which necessarily creates a small isosceles triangle. You can then make the base tangent parallel to this, ensuring a similar large triangle which is then isosceles</p>
<p>So there is no scalene case that you need to consider</p>
<hr />
<p>Looking back, that explanation was a little brief. Consider this diagram where the red triangle and circles are given. Your calculations only involved the blue lines and two isosceles triangles even though the original triangle was scalene.</p>
<p><a href="https://i.stack.imgur.com/klcV3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/klcV3.png" alt="triangles and circles" /></a></p>
|
1,855,650 | <p>Need to solve:</p>
<p>$$2^x+2^{-x} = 2$$</p>
<p>I can't use substitution in this case. Which is the best approach?</p>
<p>Event in this form I do not have any clue:</p>
<p>$$2^x+\frac{1}{2^x} = 2$$</p>
| Zau | 307,565 | <p>Hint:
By the AM-GM inequality, $2^x+2^{-x} \geq 2 \sqrt{2^x \times 2^{-x}} = 2$.
Equality holds when $2^x = 2^{-x}$, i.e. when $x = 0$.</p>
<p>Alternative solution:</p>
<p>Easy to show that $x = 0$, is one possible solution.</p>
<p>$$\frac{d(2^x+2^{-x})}{dx} = ( 2^x -2^{-x}) \log 2$$
which is positive over $(0,+\infty)$ and negative over $(-\infty,0)$.</p>
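Both observations are easy to confirm numerically (an illustrative Python sketch, not part of the original answer):

```python
# 2^x + 2^(-x) >= 2, with equality only at x = 0 (by AM-GM).
def h(x):
    return 2**x + 2**(-x)

assert h(0) == 2
for x in [-3.0, -0.5, 0.1, 1.0, 4.0]:
    assert h(x) > 2
```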
|
1,839,057 | <p>Where n is an integer, $n\ge1$ and $(A,B)$ just constants </p>
<blockquote>
<p>$$I=\int_{-n}^{n}{x+\tan{x}\over A
+B(x+\tan{x})^{2n}}dx=0$$</p>
</blockquote>
<p>It is obvious that</p>
<p>$$\int_{-n}^{n}(x+\tan{x})\,dx=0$$</p>
<p>Let make a substitution for <em>I</em> $$u=x+\tan{x}\rightarrow du=1+\sec^2{x}dx$$</p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}{du\over 2+\tan^2{u}}=0$$</p>
<p>I can't find a standard integral of this. I am stuck at this point on how to continue any further; I would appreciate some help, please.</p>
<p>Also note that </p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}du=0$$</p>
<p>And</p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}{du\over C+D\tan^{2k}{u}}=0$$</p>
<p>Where <em>A</em>,<em>B</em>,<em>C</em> and <em>D</em> are just constants </p>
<p>$n,k\ge1$ are both integers</p>
| Hosein Rahnama | 267,844 | <p>Let us call the integrand as</p>
<p>$$f(x)={x+\tan{x}\over A
+B(x+\tan{x})^{2n}}$$</p>
<p>Then it is quite evident that it is an odd function of $x$</p>
<p>$$f(-x)=-f(x)$$</p>
<p>and hence you can easily conclude</p>
<p>$$\int_{-n}^{n}f(x)dx=0$$</p>
|
1,839,057 | <p>Where n is an integer, $n\ge1$ and $(A,B)$ just constants </p>
<blockquote>
<p>$$I=\int_{-n}^{n}{x+\tan{x}\over A
+B(x+\tan{x})^{2n}}dx=0$$</p>
</blockquote>
<p>It is obvious that</p>
<p>$$\int_{-n}^{n}(x+\tan{x})\,dx=0$$</p>
<p>Let make a substitution for <em>I</em> $$u=x+\tan{x}\rightarrow du=1+\sec^2{x}dx$$</p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}{du\over 2+\tan^2{u}}=0$$</p>
<p>I can't find a standard integral of this. I am stuck at this point on how to continue any further; I would appreciate some help, please.</p>
<p>Also note that </p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}du=0$$</p>
<p>And</p>
<p>$$\int_{-n}^{n}{u\over A+Bu^{2n}}{du\over C+D\tan^{2k}{u}}=0$$</p>
<p>Where <em>A</em>,<em>B</em>,<em>C</em> and <em>D</em> are just constants </p>
<p>$n,k\ge1$ are both integers</p>
| Mc Cheng | 327,363 | <p>Let:<br>
$$f(x)={x+\tan{x}\over A
+B(x+\tan{x})^{2n}}$$
Set $x \mapsto -u$ </p>
<p>\begin{align}
f(-u)&={-u+\tan(-u)\over A
+B(-u+\tan(-u))^{2n}} \\
&={-u-\tan{u}\over A
+B(-1)^{2n}(u+\tan{u})^{2n}} \\
&=(-1){u+\tan{u}\over A
+B(u+\tan{u})^{2n}} \\
&=-f(u)
\end{align}
Since <em>x</em> and <em>u</em> are dummy variables, $f(-x)=-f(x)$. Thus $f(x)$ is an odd function.<br>
Your integral can be written as:<br>
\begin{align}
I&=\int_{0}^{n}f(x)dx+\int_{0}^{n}f(-x)dx \\
&=\int_{0}^{n}f(x)dx-\int_{0}^{n}f(x)dx \\
&= 0
\end{align}
Therefore, we obtain: </p>
<blockquote>
<p>$$\int_{-n}^{n}{x+\tan{x}\over A
+B(x+\tan{x})^{2n}}dx=0$$</p>
</blockquote>
|
681,608 | <blockquote>
<p>Prop. 6.9: Let $X \to Y$ be a finite morphism of non-singular curves, then for any divisor $D$ on $Y$ we have $\deg f^*D=\deg f\deg D$.</p>
</blockquote>
<p>I can not understand two points in the proof:</p>
<p>(1) (Line 9) Now $A'$ is torsion free, and has rank equal to $r=[K(X):K(Y)]$.</p>
<p>Since it is a torsion-free module over PID $O_Q$, I see it is free, but how to calculate its rank?</p>
<p>(2) (Line 15) Clearly $tA'=\bigcap(tA'_{\mathfrak m_i}\cap A')$ so ... I don't know how to show this claim.</p>
| SomeEE | 126,056 | <p>Line 9</p>
<p>$A'$ is a localization of the ring A which is defined as the integral closure of B in K(X). This gives us $Quot(A) = Quot(A') = K(X)$ and so $Quot(A')$ is $r$ dimensional over $Quot(B)$. $A'$ is torsion free and finitely generated over the PID $\mathcal{O}_{Q}$ so $A' = \mathcal{O}_{Q}^{\oplus n}$ for some $n$. Passing to quotient fields we see that $n=r$.</p>
<p>Line 15</p>
<p>For this, I think it may be easiest to use the Dedekind property. You know that $tA' = Q^e = m_1^{n_1}...m_j^{n_j}$, $tA'_{m_i} = m_i^{n_i}$, and that $tA'_{m_i} \cap A' = m_i^{n_i}$.</p>
|
192,883 | <p>Can anyone please give an example of why the following definition of $\displaystyle{\lim_{x \to a} f(x) =L}$ is NOT correct?:</p>
<p>$\forall$ $\delta >0$ $\exists$ $\epsilon>0$ such that if $0<|x-a|<\delta$ then $|f(x)-L|<\epsilon$</p>
<p>I've been trying to solve this for a while, and I think it would give me a greater understanding of why the limit definition is what it is, because this alternative definition seems quite logical and similar to the real one, yet it supposedly shouldn't work.</p>
| Charlie Parker | 118,359 | <p>The main issue with the definition that you suggested is that it's trivially satisfied, as other people pointed out. If you want to get $\delta$-close (really close) to $a$, your definition only asks you to find <strong>some</strong> $\epsilon$ (that's the problem: it's too flexible). Just take $\epsilon$ large enough that $|f(x) - L| < \epsilon$ on the punctured $\delta$-neighbourhood (possible whenever $f$ is bounded there). Then the consequent of your implication will always hold as long as $f(x)$ is defined in that neighbourhood. As Pedro pointed out brilliantly, the issue is that your definition provides too much freedom to choose $\epsilon$. In fact we intuitively want that as $\epsilon$ gets smaller and smaller, $x$ gets closer and closer to $a$ (or at least that the largest admissible $\delta$ gets smaller and smaller). This does hold true for the original definition of limits (though it's not explicit in the definition), as shown in <a href="https://math.stackexchange.com/a/2842939/119482">my answer here</a>. Therefore, to my understanding, your question shows why people came up with the definition of limits the way they did, because:</p>
<ol>
<li>it's not clear how to define it cleanly starting first with $\delta$'s and still have the property that we get closer and closer to the limit value $L$ as we get closer to the limit point $a$.</li>
<li>as your question shows, if we start with $\delta$ first, then $\epsilon$ can't be specified with a single quantifier: mere existence ($\exists\,\epsilon$) is too flexible, while "for every" is too restrictive. Indeed, $\forall \delta$, $\forall \epsilon$ would demand that every function value in the neighbourhood around $a$ be arbitrarily close to the limit value, which can't hold for any sensible notion of distance unless everything maps to one point.</li>
<li>therefore, we instead place the restriction on $\epsilon$ and say we want to be able to get arbitrarily close to the limit $L$ (since approaching $L$ is nearly the whole point!). Now we just need, for each such $\epsilon$, a set of points around $a$ on which $f$ stays that close; and if the definition also makes those points approach $a$ as we demand to be closer to $L$, all the better. The original definition does satisfy that, and since it uses friendly quantifiers it's more compact (and elegant, I guess), easier to use, and has the properties we want. Perhaps the second point, which I find really important, is not explicit, but would you prefer a really long, complicated definition just to make it explicit? The answer must be no for mathematicians, and poof, that's why the definition is the way it is!</li>
</ol>
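<p>To make the triviality concrete, here is a small numerical illustration; the bounded function $f(x)=\sin(1/x)$ (which has no limit at $0$), the fake "limit" $L=42$, and the sample points are all illustrative choices. For every $\delta$ the single value $\epsilon=44$ works, since $|\sin(1/x)-42|\le 43$:</p>

```python
import math

f = lambda x: math.sin(1.0 / x)   # bounded, but has NO limit as x -> 0
a, L = 0.0, 42.0                  # a deliberately absurd "limit" value

for delta in (1.0, 0.1, 1e-3, 1e-6):
    eps = 44.0  # the SAME epsilon works for every delta, since |f| <= 1
    xs = [a + delta * t for t in (0.9, 0.5, 0.1, 0.01)]  # 0 < |x - a| < delta
    assert all(abs(f(x) - L) < eps for x in xs)
print("flipped definition is satisfied with L = 42")
```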
|
1,097,134 | <p>this is something that came up when working with one of my students today and it has been bothering me since. It is more of a maths question than a pedagogical question so i figured i would ask here instead of MESE.</p>
<p>Why is $\sqrt{-1} = i$ and not $\sqrt{-1}=\pm i$?</p>
<p>With positive numbers the square root function always returns both a positive and negative number, is it different for negative numbers? </p>
| Quality | 153,357 | <p>In fact, there are two complex square roots of $-1$: $i$ and $-i$. So the answer to your question is that both $i$ and $-i$ square to $-1$; writing $\sqrt{-1}=i$ singles out one of them by convention.</p>
|
124,662 | <p>Is topology on $\mathbb{R}/\mathbb{Z}$ compact? If it is, how to prove it? </p>
<p>$\mathbb{R}/\mathbb{Z}$ denotes the set of equivalence classes of the set of real numbers, two real numbers being equivalent if and only if their difference is an integer.</p>
| azarel | 20,998 | <p>$\bf Hint:$ Find a compact subset $K$ of $\mathbb R$ so that the quotient map $\pi: \mathbb R\to \mathbb {R/Z}$ restricted to $K$ is onto.</p>
|
92,660 | <p>Let $X$ be a nonsingular projective variety over $\mathbb{C}$, and let $\widetilde{X}$ be the blow-up of X at a point $p\in X$.
What relationships exist between the degrees of the Chern classes of $X$ (i.e. of the tangent bundle of $X$) and the degrees of the Chern classes of $\widetilde{X}$?</p>
<p>Thanks.</p>
| Liviu Nicolaescu | 20,302 | <p>Assume $X$ is smooth compact of dimension $n$ and $x_0\in X$ is the point where we perform the blowup. Set $ X_* := X \setminus x_0 $, $ \tilde{X}_* := \tilde{X} \setminus E$. Denote by $N$ a tubular neighborhood of $E$ in $\tilde{X}_* $. By Mayer-Vietoris, the Chern classes of $ \tilde{X} $ are determined once we know their restrictions to $ X_* $ and $ N $.</p>
<p>We identify $ \tilde{X}_* $ with $ X_* $ via the blowdown map $p:\tilde{X}_* \to X_* $. The restriction of $c_k( \tilde{X}) $ to $X_*$ is equal to the restriction of $c_k(X)$. The restriction of $c_k(\tilde{X})$ to $N$ is easy to determine since</p>
<p>$$TN \cong \pi^* T\mathbb{CP}^{n-1} \oplus \pi^* H^*, $$</p>
<p>where $\pi: N\to E= \mathbb{CP}^{n-1}$ is the natural projection and $H\to \mathbb{CP}^{n-1}$ is the hyperplane line bundle. Thus, </p>
<p>$$ c_k(\tilde{X})|_N = c_k( N ) = \pi^*c_k(\mathbb{CP}^{n-1} ) +\pi^* c_{k-1}(\mathbb{CP}^{n-1} ) \pi^* c_1(H^*) $$</p>
<p>$$ = \pi^*c_k(\mathbb{CP}^{n-1} ) - \pi^* c_{k-1}(\mathbb{CP}^{n-1} )\cup \pi^*[H]. $$</p>
<p>Things can be simplified a bit if we introduce the notation $h=\pi^*[H]\in H^2(N,\mathbb{Z})$ and we observe that for some integers $\nu_k$ and $\nu_{k-1}$</p>
<p>$$ \pi^*c_k(\mathbb{CP}^{n-1} ) =\nu_k h^k, $$</p>
<p>$$ \pi^* c_{k-1}(\mathbb{CP}^{n-1} )=\nu_{k-1} h^{k-1}. $$</p>
<p>Then</p>
<p>$$ c_k(N) = ( \nu_k -\nu_{k-1} ) h^k. $$</p>
<p>As for the integers $\nu_k$ they are determined from the equality</p>
<p>$$ 1+c_1( \mathbb{CP}^{n-1} )+\cdots + c_{n-1}( \mathbb{CP}^{n-1} )= (1+H)^n - H^n $$</p>
<p>$$= \sum_{k=0}^{n-1}\binom{n}{k} H^k. $$</p>
|
209,856 | <p>Of course, I can use Stirling's approximation, but for me it is quite interesting, that, if we define $k = (n-1)!$, then the left function will be $(nk)!$, and the right one will be $k! k^{n!}$. I don't think that it is a coincidence. It seems, that there should be smarter solution for this, other than Stirling's approximation.</p>
| cactus314 | 4,997 | <p>For $(nk)!$ your factors are $1,2,3,\dots, k$ then $k+1, \dots, 2k,2k+1 \dots, k!$.</p>
<p>For $k! k^{n!}$ your factors are $1,2,3,\dots, k$ but then constant $k,\dots,k$.</p>
<p>So every factor of <strong>(nk)!</strong> is > or = to each factor of <strong>k!k^(n!)</strong></p>
|
4,096,771 | <p>Given a sequence of iid random variables <span class="math-container">$(Y_i)_{i=1}^\infty$</span> on a probability space <span class="math-container">$(\Omega, \mathcal{F}, \mathbb{P})$</span> such that <span class="math-container">$\mathbb{E}|Y_i| < \infty$</span> and <span class="math-container">$\mathbb{E}Y_i = 0$</span>, consider the discrete time process given by <span class="math-container">$$X_0 := 0, \quad X_n = \sum_{i=1}^n Y_i, \quad n \in \{1,2,...\}.$$</span>
and also the filtration given by <span class="math-container">$\mathcal{F_n} = \sigma(Y_1, \dots, Y_n)$</span> and show that <span class="math-container">$(X_n)_{n=1}^\infty$</span> is a (discrete) martingale with respect to <span class="math-container">$(\mathcal{F_n})_{n=1}^\infty$</span>.</p>
<p>So far, in answering this question I believe that I have proven the first two properties of a martingale:</p>
<ul>
<li><p>We have that, since <span class="math-container">$X_n$</span> is the sum of <span class="math-container">$\mathcal{F}$</span>-measurable variables, we know that it is too <span class="math-container">$\mathcal{F}$</span>-measurable for each <span class="math-container">$n$</span>. Hence it is adapted.</p>
</li>
<li><p><span class="math-container">$\mathbb{E}|X_n| \leq \mathbb{E}(\sum_{i=1}^n| Y_i|) = \sum_{i=1}^n\mathbb{E}| Y_i| < \infty$</span>.</p>
</li>
</ul>
<p>However, I am not sure about how one could go about proving the final property of a martingale, that <span class="math-container">$\mathbb{E}(X_t | \mathcal{F_s}) = X_s$</span> for all <span class="math-container">$s \leq t$</span>.</p>
<p>Might anyone have any ways of demonstrating such a proof?</p>
| stochastic-conch | 765,353 | <p>Since we are in discrete time, it is enough to show that <span class="math-container">$\mathbb{E}[X_{n+1}\mid\mathcal{F}_{n}]=X_n$</span> a.s. for all <span class="math-container">$n\in\mathbb{N}$</span>. Notice that <span class="math-container">$X_{n+1}=Y_{n+1}+X_n$</span>, so that
<span class="math-container">$$
\mathbb{E}[X_{n+1}\mid\mathcal{F}_n]=\mathbb{E}[Y_{n+1}\mid\mathcal{F}_n]+\mathbb{E}[X_n\mid\mathcal{F}_n]=\mathbb{E}[Y_{n+1}]+X_n=X_n\text{ a.s.},
$$</span>
since <span class="math-container">$Y_{n+1}$</span> and <span class="math-container">$\mathcal{F}_n$</span> are independent, and <span class="math-container">$X_n$</span> is <span class="math-container">$\mathcal{F}_n$</span>-measurable.</p>
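<p>For intuition, the key identity <span class="math-container">$\mathbb{E}[X_{n+1}\mid\mathcal{F}_n]=X_n$</span> can be checked exactly in a toy case, a simple random walk with <span class="math-container">$Y_i=\pm1$</span> each with probability <span class="math-container">$1/2$</span> (an illustrative choice, not the general setting): conditioning on <span class="math-container">$\mathcal{F}_n$</span> amounts to fixing the path <span class="math-container">$(Y_1,\dots,Y_n)$</span>, and averaging over <span class="math-container">$Y_{n+1}$</span> returns <span class="math-container">$X_n$</span> on every path.</p>

```python
from itertools import product
from fractions import Fraction

n = 4
half = Fraction(1, 2)

# For every path (y_1, ..., y_n), E[X_{n+1} | Y_1..Y_n = path] is the
# average of X_n + 1 and X_n - 1, each taken with probability 1/2.
for path in product((-1, 1), repeat=n):
    X_n = sum(path)
    cond_exp = half * (X_n + 1) + half * (X_n - 1)
    assert cond_exp == X_n
print("E[X_{n+1} | F_n] = X_n on all", 2**n, "paths")
```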
|
424,694 | <p>Let <span class="math-container">$p$</span> be a prime, and consider <span class="math-container">$$S_p(a)=\sum_{\substack{1\le j\le a-1\\(p-1)\mid j}}\binom{a}{j}\;.$$</span>
I have a rather complicated (15 lines) proof that <span class="math-container">$S_p(a)\equiv0\pmod{p}$</span>. This must be
extremely classical: is there a simple direct proof ?</p>
| Ofir Gorodetsky | 31,469 | <p>Let <span class="math-container">$$P(x)=(1+x)^a-1-x^a=\sum_{1 \le j \le a-1} \binom{a}{j}x^j.$$</span> Working in a field <span class="math-container">$F$</span> where <span class="math-container">$|\{\mu \in F: \mu^{p-1}=1\}|=p-1$</span> (roots of unity of order <span class="math-container">$p-1$</span> exist), we have
<span class="math-container">$$ \frac{1}{p-1}\sum_{\mu^{p-1}=1}P(x \mu) = \sum_{\substack{1 \le j \le a-1\\(p-1)\mid j}} \binom{a}{j}x^j.$$</span>
We now specialize the field to <span class="math-container">$\mathbb{F}_p$</span>, and let <span class="math-container">$x=1$</span>:
<span class="math-container">$$ -\sum_{\mu \in \mathbb{F}_p^{\times}}P( \mu) = \sum_{\substack{1 \le j \le a-1\\(p-1)\mid j}} \binom{a}{j}.$$</span>
To conclude, observe that <span class="math-container">$P(0)=0$</span> and <span class="math-container">$|\mathbb{F}_p| = p$</span>, so <span class="math-container">$$\sum_{\mu \in \mathbb{F}_p^{\times}} P(\mu) = \sum_{\mu \in \mathbb{F}_p} P(\mu) = \sum_{\mu \in \mathbb{F}_p} ((1+\mu)^a - \mu^a)=0,$$</span>
because <span class="math-container">$\mu\mapsto \mu+1$</span> is a permutation of <span class="math-container">$\mathbb{F}_p$</span>.</p>
<hr />
<p>A slightly more general congruence is due to Glaisher (1899), as I've found in a survey by Granville, see <a href="https://dms.umontreal.ca/%7Eandrew/Binomial/sumsofbcs.html" rel="noreferrer">equation (11) here</a>. The precise reference is Glaisher, J. W. L., "A congruence theorem relating to sums of binomial-theorem coefficients.", Quart. J. 30, 150-156, 349-360, 361-383 (1899). See <a href="https://zbmath.org/?q=an%3A29.0152.05" rel="noreferrer">here for the zbMath review</a> in German. There is no entry in MathSciNet.</p>
<p>Namely, Glaisher proved that for any <span class="math-container">$0 \le j_0 \le p-1$</span> we have
<span class="math-container">$$\sum_{\substack{1 \le j \le a \\ j \equiv j_0 \bmod (p-1)}} \binom{a}{j} \equiv \binom{k}{j_0} \bmod p$$</span>
where <span class="math-container">$k$</span> is any positive integer with <span class="math-container">$k \equiv a \bmod p$</span>. Applying this with <span class="math-container">$j_0=0$</span> and <span class="math-container">$k=a$</span>, and observing that the term <span class="math-container">$j=a$</span> contributes <span class="math-container">$1 \bmod p$</span>, your result follows. Clicking on equation (11) in the link above leads to a <a href="https://dms.umontreal.ca/%7Eandrew/Binomial/pfeq11.html" rel="noreferrer">detailed proof</a> which utilizes Lucas' theorem.</p>
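<p>The congruence <span class="math-container">$S_p(a)\equiv 0 \pmod p$</span> is also easy to confirm by brute force for small primes (the ranges below are arbitrary illustrative choices):</p>

```python
from math import comb

def S(p, a):
    # sum of C(a, j) over 1 <= j <= a-1 with (p-1) | j
    return sum(comb(a, j) for j in range(1, a) if j % (p - 1) == 0)

for p in (3, 5, 7, 11, 13):
    for a in range(2, 60):
        assert S(p, a) % p == 0
print("S_p(a) ≡ 0 (mod p) for all tested p and a")
```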
|
2,916,037 | <p>Some cute results have every digit doubled. </p>
<p>\begin{align}
99225500774400 = {} & \frac{40!}{31!} \\[8pt]
33554433 = {} & 2^{25} +1 \\[8pt]
222277 = {} & -22^{2^2}+77^3 \\[8pt]
8811551199 = {} & 95^5 + 64^5 \\[8pt]
7755660000 = {} & 95^5 + 65^4 \\[8pt]
334444448888 = {} & 6942^3 - 10^8 \\[8pt]
11881133 = {} & 26^5 - 3^5 \\[8pt]
0.0011223344556677\ldots = {} & \frac 1 {891} \\[8pt]
22116600446655008888446677444 = {} & 148716510336462^2 \\[8pt]
\end{align}</p>
<p>The last is from <a href="http://oeis.org/A079036" rel="nofollow noreferrer">A079036</a>. What are other cute examples?</p>
| Deepesh Meena | 470,829 | <p>$$\frac{2244}{9999}=0.\overline{2244}$$
$$\frac{22446688}{10^8-1}=0.\overline{22446688}$$
$$\frac{112233445566778899}{10^{18}-1}=0.\overline{112233445566778899}$$
$$\frac{2}{909}=0.\overline{0022}$$
$$\frac{1133}{9999}=0.\overline{1133}$$</p>
<p>$$\frac{1}{11}=0.\overline{09}$$</p>
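<p>The pattern behind all of these, $\frac{m}{10^k-1}=0.\overline{m}$ when $m$ is written with $k$ digits (leading zeros allowed), can be checked with a short long-division routine:</p>

```python
def decimal_digits(num, den, count):
    """First `count` digits after the decimal point of num/den, 0 <= num < den."""
    out = []
    r = num
    for _ in range(count):
        r *= 10
        out.append(str(r // den))
        r %= den
    return "".join(out)

assert decimal_digits(2244, 9999, 8) == "22442244"
assert decimal_digits(2, 909, 8) == "00220022"
assert decimal_digits(1133, 9999, 8) == "11331133"
assert decimal_digits(1, 11, 6) == "090909"
print("repeating blocks confirmed")
```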
|
2,055,559 | <blockquote>
<p>Let <span class="math-container">$a,b,c$</span> be the lengths of the sides of a triangle; then prove that:</p>
<p><span class="math-container">$a^2b(a-b)+b^2c(b-c)+c^2a(c-a)\ge0$</span></p>
</blockquote>
<p>Please help me!!!</p>
| Nat | 401,304 | <p>The claim that each term is separately nonnegative is false in general (for instance, when $a>c$ the term $c^2a(c-a)$ is negative), and since the expression is only cyclic, not symmetric, we cannot simply relabel so that $a>b>c$. A standard proof uses the Ravi substitution: since $a,b,c$ are the sides of a triangle, write $a=y+z$, $b=z+x$, $c=x+y$ with $x,y,z>0$.</p>
<p>Expanding, one finds $$a^2b(a-b)+b^2c(b-c)+c^2a(c-a)=2\left(xy^3+yz^3+zx^3-xyz(x+y+z)\right).$$</p>
<p>Dividing by $2xyz>0$, it suffices to show $$\frac{y^2}{z}+\frac{z^2}{x}+\frac{x^2}{y}\ge x+y+z,$$ which follows from the Cauchy-Schwarz inequality: $$\frac{y^2}{z}+\frac{z^2}{x}+\frac{x^2}{y}\ge\frac{(x+y+z)^2}{z+x+y}=x+y+z.$$</p>
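<p>Independently of any particular proof, the inequality itself can be sanity-checked by enumerating all small integer triangles:</p>

```python
def expr(a, b, c):
    return a**2 * b * (a - b) + b**2 * c * (b - c) + c**2 * a * (c - a)

vals = []
for a in range(1, 21):
    for b in range(1, 21):
        for c in range(1, 21):
            # keep only genuine triangles
            if a < b + c and b < a + c and c < a + b:
                vals.append(expr(a, b, c))

assert min(vals) >= 0  # the inequality holds on every triangle tested
print("minimum value over tested triangles:", min(vals))
```

Equality occurs for equilateral triangles, so the minimum over this grid is $0$.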
|
356,530 | <p>How many bit strings of length $17$ contain at least five $1$s? I'm really confused as to how to start this question and would really appreciate any help you guys could give me!</p>
| Brian M. Scott | 12,042 | <p>HINT: There are $2^{17}$ bit strings of length $17$. Count those with exactly $0,1,2,3$, and $4$ ones and subtract from $2^{17}$. For example, there is just one bit string with no ones. To find the number with exactly $2$ ones, observe that there are $\binom{17}2$ ways to choose positions for the ones, and once you've done that, you have no more freedom of choice. Thus, there are $\binom{17}2$ $17$-bit strings with exactly $2$ ones. You can probably finish it from there pretty easily: just follow the same model.</p>
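<p>Following the hint (count the strings with at most four ones and subtract), a short computation gives the final number; the reading of the problem as "at least five ones in a bit string of length $17$" is inferred from the hint itself:</p>

```python
from math import comb

total = 2**17
at_most_four = sum(comb(17, k) for k in range(5))  # k = 0, 1, 2, 3, 4
answer = total - at_most_four

print(total, at_most_four, answer)  # 131072 3214 127858
```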
|
273,499 | <blockquote>
<p>Show that every group $G$ of order 175 is abelian and list all isomorphism types of these groups. [HINT: Look at Sylow $p$-subgroups and use the fact that every group of order $p^2$ for a prime number $p$ is abelian.]</p>
</blockquote>
<p>What I did was this. $|G| = 175$. Splitting 175 gives us $175 = 25 \cdot 7$. Now we want to calculate the Sylow $p$-subgroups, i.e. we want</p>
<p>$$P:\quad n_7 \equiv 1 \pmod 7 \hspace{1.5cm} n_7\mid 25$$
$$Q:\quad n_{25} \equiv 1 \pmod{25} \hspace{1.5cm} n_{25} \mid 7$$</p>
<p>After listing all elements that are $\equiv 1 \pmod 7$ and $\equiv 1 \pmod{25}$, you see that the only (available) ones are $n_7 = n_{25} = 1$. This tells us that both groups $P,Q$ are normal subgroups of $G$. I think, by definition of a normal subgroup, they are abelian and so this tells us that $G$ is abelian. To list all the isomorphism types, we want the semidirect product (SDP) such that</p>
<p>$$P \rightarrow Aut(Q) = C_7 \rightarrow C_{20}$$</p>
<p>As there are no elements of order 7 in $C_{20}$, the only SDP we have is the trivial SDP, i.e the direct product</p>
<p>$$C_7 \times C_{25} \cong C_{175}$$</p>
<p>We know that $175 = 5^2 \cdot 7$ and so multiplying the powers shows us that there are 2 non-isomorphic groups:</p>
<p>$$C_{25} \times C_7$$
$$C_5 \times C_5 \times C_7$$</p>
<p>My question for this is: is my reasoning also correct for things like showing the groups are abelian? I saw something which said something about $P \cap Q = I_G$ and they used this, but I don't understand what it was.</p>
<p>The next question: assuming that I had two possibilities for my $p$-subgroup, i.e. $n_p = 1$ or $x$, how would I go about answering this question? (I am doing a question like this now and am stuck, as I have two possible values for the number of Sylow $p$-subgroups.)</p>
| rschwieb | 29,335 | <p>This line seems especially mistaken: "I think, by definition of a normal subgroup, they are abelian and so this tells us that G is abelian." Certainly normal subgroups need not be abelian: for an example you can take the alternating subgroup of the symmetric group for any $n>5$.</p>
<p>The Sylow theorems tell you that $n_7\in \{1,5,25\}$ and that it is 1 mod 7, and so the only possibility is that it is 1.</p>
<p>The Sylow theorems tell you that $n_5\in \{1,7\}$ and that it is 1 mod 5, and so the only possibility is that it is 1.</p>
<p>Thus for both 5 and 7 you have unique (=normal for Sylow subgroups) subgroups. Let's call them $F$ and $S$ respectively. Clearly $FS$ is a subgroup of $G$ of size 175 by the reasoning you gave. (The reason that $F\cap S$ is trivial is that the intersection is a subgroup of both $F$ and $S$, so it must have order dividing both the order of $F$ and of $S$, but the greatest common divisor is 1.)</p>
<p>$S$ is obviously abelian, as it is cyclic (of prime order!). The question is whether or not a group of size 25 must be abelian. There are a lot of ways to see that, but the one that comes to my mind is to say that it definitely has a nontrivial center. If its center $C$ were of order $5$, then $F/C$ would be cyclic of order 5. However, by a lemma (If $G/Z$ is cyclic for a central subgroup $Z$, then $G$ is abelian) $F$ would have to be abelian.</p>
<p>So $G$ is a product of two abelian subgroups, and so is abelian itself.</p>
<p>And also, your conclusion about the two types of abelian groups of order 175 is correct. Initially you wrote that there were "two isomorphic types," but (I edited that to correct it and ) I hope that was just a slip and that you really did mean "two non-isomorphic types".</p>
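<p>The Sylow counting used above is small enough to verify mechanically: list the divisors of $25$ congruent to $1$ modulo $7$, and the divisors of $7$ congruent to $1$ modulo $5$.</p>

```python
def sylow_counts(modulus, group_part):
    """Divisors d of group_part with d ≡ 1 (mod modulus)."""
    return [d for d in range(1, group_part + 1)
            if group_part % d == 0 and d % modulus == 1]

assert sylow_counts(7, 25) == [1]  # n_7 = 1: the Sylow 7-subgroup is normal
assert sylow_counts(5, 7) == [1]   # n_5 = 1: the Sylow 5-subgroup is normal
print("both Sylow subgroups of a group of order 175 are normal")
```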
|
89,188 | <p>Norimatsu's lemma says that on a smooth projective complex variety $X$ of dimension $n$, then we have $H^i(X,\mathcal O_X(-A-E))=0$ for $i < n$ when $A$ is an ample divisor and $E$ is a simple normal crossings (SNC) divisor. Does this statement remain true if $E$ is an effective divisor with SNC support? In other words, if $\sum E_i$ is SNC, do we still have these vanishings for $\mathcal O_X(-A-\sum a_i E_i)$ where $a_i >0$? Or is there a counterexample?</p>
| Francesco Polizzi | 7,460 | <p>Here is a counterexample.</p>
<p>Let $X$ be a smooth cubic surface in $\mathbf{P}^3$ and $E$ a line on it; moreover take $A=-K_X$.</p>
<p>Then by Serre duality $$H^1(X, -A-aE)=H^1(X, K_X + A + aE)=H^1(X, aE).$$</p>
<p>On the other hand $h^0(X, aE)=1$ since $E^2=-1$ and $h^2(X, aE)=h^0(X, K_X-aE)=0$ since $K_X$ is not effective. </p>
<p>Now Riemann-Roch yields $$\chi(X, aE)=\frac{aE(aE-K_X)}{2}+\chi(\mathcal{O}_X)=\frac{-a^2+a}{2}+1,$$
hence $$h^1(X, aE)=\frac{a(a-1)}{2}$$ which is not zero for $a \geq 2$.</p>
|
2,794,704 | <p>Can the following sum be further simplified? $${1\over 20}\sum_{n=1}^{\infty}\left(n^2+n\right)\left(\frac45\right)^{n-1}$$
(It's part of a probability problem)</p>
| Claude Leibovici | 82,404 | <p>$$f(x)=\sum_{n=1}^{\infty}n(n+1)x^{n-1}=\frac 1x\sum_{n=1}^{\infty}[n(n-1)+2n]x^{n}$$
$$f(x)=\frac {x^2}x\sum_{n=1}^{\infty}n(n-1)x^{n-2}+\frac {2x}x\sum_{n=1}^{\infty}nx^{n-1}=x \left(\sum_{n=1}^{\infty}x^{n}\right)''+2\left(\sum_{n=1}^{\infty}x^{n}\right)'$$ Use now
$$\sum_{n=1}^{\infty}x^{n}=\frac x{1-x}$$</p>
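<p>Carrying this out with $\left(\frac{x}{1-x}\right)'=\frac{1}{(1-x)^2}$ and $\left(\frac{x}{1-x}\right)''=\frac{2}{(1-x)^3}$ gives $f(x)=\frac{2x}{(1-x)^3}+\frac{2}{(1-x)^2}=\frac{2}{(1-x)^3}$, so the original sum is $\frac{1}{20}\,f\!\left(\frac45\right)=\frac{1}{20}\cdot 250=12.5$. A quick numerical confirmation:</p>

```python
x = 0.8  # the ratio 4/5 from the problem

# partial sum of (1/20) * sum_{n>=1} (n^2 + n) x^(n-1)
partial = sum((n * n + n) * x ** (n - 1) for n in range(1, 500)) / 20

# closed form (1/20) * 2 / (1 - x)^3
closed_form = (2 / (1 - x) ** 3) / 20

assert abs(partial - 12.5) < 1e-9
assert abs(closed_form - 12.5) < 1e-9
print(partial, closed_form)
```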
|
3,141,510 | <p>Calculate sum
<span class="math-container">$$ \sum_{k=2}^{2^{2^n}} \frac{1}{2^{\lfloor \log_2k \rfloor} \cdot 4^{\lfloor \log_2(\log_2k )\rfloor}} $$</span></p>
<p>I hope to solve this using Iverson notation:</p>
<h2>my try</h2>
<p><span class="math-container">$$ \sum_{k=2}^{2^{2^n}} \frac{1}{2^{\lfloor \log_2k \rfloor} \cdot 4^{\lfloor \log_2(\log_2k )\rfloor}} = \sum_{k,l,m}2^{-l}4^{-m} [2^l \le k < 2^{l+1}][2^{2^m} \le k < 2^{2^m+1}] $$</span></p>
<p>and now:
<span class="math-container">$$ [2^l \le k < 2^{l+1}][2^{2^m} \le k < 2^{2^m+1}] \neq 0 $$</span> if and only if <span class="math-container">$$2^l \le k < 2^{l+1} \wedge 2^{2^m} \le k < 2^{2^m+1} $$</span></p>
<p>I can assume that <span class="math-container">$l$</span> is constant (we know the value of <span class="math-container">$l$</span>) and treat <span class="math-container">$m$</span> as a variable depending on <span class="math-container">$l$</span>. OK, so: <br>
<span class="math-container">$$2^l \le 2^{2^m} \wedge 2^{2^m+1} \le 2^{l+1} $$</span>
but it gives me that <span class="math-container">$l=2^m$</span>
I think that it is not true (but also I don't see the mistake). Even if it is true, how can it be finished? </p>
| epi163sqrt | 132,007 | <p>Here is an answer following rather closely OP's approach.</p>
<blockquote>
<p>We obtain for <span class="math-container">$n\in\mathbb{Z}, n\geq 0$</span>:
<span class="math-container">\begin{align*}
\color{blue}{\sum_{k=2}^{2^{2^n}}}&\color{blue}{\frac{1}{2^{\left\lfloor\log_2 k\right\rfloor}4^{\left\lfloor\log_2\left(\log_2 k\right)\right\rfloor}}}\\
&=\sum_{k=2}^{2^{2^n}}\sum_{l,m}2^{-l}4^{-m}\left[l=\left\lfloor\log_2 k\right\rfloor\right]\left[m=\left\lfloor\log_2\left(\log_2\left(k\right)\right)\right\rfloor\right]\tag{1}\\
&=\sum_{k=2}^{2^{2^n}}\sum_{l,m}2^{-l}4^{-m}\left[l\leq \log_2 k<l+1\right]\left[m\leq\log_2\left(\log_2\left(k\right)\right)<m+1\right]\tag{2}\\
&=\sum_{k=2}^{2^{2^n}}\sum_{l,m}2^{-l}4^{-m}\left[2^l\leq k<2^{l+1}\right]\left[2^{2^m}\leq k<2^{2^{m+1}}\right]\\
&=\sum_{p=0}^{n-1}4^{-p}\sum_{k=2^{2^p}}^{2^{2^{p+1}}-1}\sum_{l}2^{-l}\left[2^l\leq k<2^{l+1}\right]+2^{-2^n}4^{-n}\tag{3}\\
&=\sum_{p=0}^{n-1}4^{-p}\sum_{q=2^p}^{2^{p+1}-1}2^{-q}\sum_{k=2^q}^{2^{q+1}-1}1+2^{-2^n}4^{-n}\tag{4}\\
&=\sum_{p=0}^{n-1}4^{-p}\sum_{q=2^p}^{2^{p+1}-1}2^{-q}2^q+2^{-2^n}4^{-n}\\
&=\sum_{p=0}^{n-1}4^{-p}2^p+2^{-2^n}4^{-n}\\
&=\sum_{p=0}^{n-1}2^{-p}+2^{-2^n}4^{-n}\\
&=\frac{1-\left(\frac{1}{2}\right)^n}{1-\frac{1}{2}}+2^{-2^n}4^{-n}\tag{5}\\
&\,\,\color{blue}{=2-\frac{1}{2^{n-1}}+\frac{1}{2^{2^n}4^n}}\\
\end{align*}</span></p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we use <em><a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer">Iverson brackets</a></em> to get rid of the floor function.</p></li>
<li><p>In (2) we use an equivalent representation of the <em><a href="https://en.wikipedia.org/wiki/Floor_and_ceiling_functions#Equivalences" rel="nofollow noreferrer">floor function</a></em>.</p></li>
<li><p>In (3) we see the intervals inside the Iverson brackets suggest a partitioning of the interval <span class="math-container">$\left[2,2^{2^n}\right]$</span>. We do a first partitioning with respect to <span class="math-container">$m$</span> as a union of right half-open intervals and add the value for <span class="math-container">$k=2^{2^n}$</span>. This way we see <span class="math-container">$m$</span> takes precisely one value, namely <span class="math-container">$m=p$</span>.</p></li>
<li><p>In (4) we continue similarly as we did in (3). This way we see <span class="math-container">$l$</span> takes precisely one value, namely <span class="math-container">$l=q$</span>.</p></li>
<li><p>In (5) we use the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Formula" rel="nofollow noreferrer">finite geometric series</a></em> formula.</p></li>
</ul>
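<p>The closed form can be confirmed exactly in rational arithmetic by brute force for small <span class="math-container">$n$</span>, computing both floors with integer operations only:</p>

```python
from fractions import Fraction

def floor_log2(k):
    # largest l with 2^l <= k
    return k.bit_length() - 1

def floor_log2_log2(k):
    # largest m with 2^(2^m) <= k, valid for k >= 2
    m = 0
    while 2 ** 2 ** (m + 1) <= k:
        m += 1
    return m

for n in range(1, 5):
    brute = sum(
        Fraction(1, 2 ** floor_log2(k) * 4 ** floor_log2_log2(k))
        for k in range(2, 2 ** 2 ** n + 1)
    )
    closed = 2 - Fraction(1, 2 ** (n - 1)) + Fraction(1, 2 ** 2 ** n * 4 ** n)
    assert brute == closed
print("closed form verified for n = 1, 2, 3, 4")
```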
|
1,876,708 | <p>Let $m<n$. Why $\mathbb R^m$ is closed in $\mathbb R^n$ ? For example, let us take $\mathbb R^3$ and the subspace $\mathbb R^2$. It looks weird to me that $\mathbb R^2$ is closed in $\mathbb R^3$. To me it looks impossible. It may be open, but not closed. Any explanation is welcome.</p>
| Surb | 154,545 | <p>Let $\boldsymbol x=(x,y,0)\in\mathbb R^2$. We denote $$B(\boldsymbol x,\varepsilon)=\{\boldsymbol u\in \mathbb R^3\mid \|\boldsymbol x-\boldsymbol u\|\leq \varepsilon\}.$$</p>
<p>I think it's very easy to show (even to see), that for all $\varepsilon>0$, $$B(\boldsymbol x,\varepsilon)\cap \mathbb R^2\neq \emptyset\quad \text{and}\quad B(\boldsymbol x,\varepsilon)\cap (\mathbb R^2)^c\neq \emptyset.$$
In particular, $\mathbb R^2\subset \partial \mathbb R^2.$ To conclude that $\mathbb R^2$ is closed in $\mathbb R^3$, note that the complement is open: if $\boldsymbol u=(x,y,z)$ with $z\neq 0$, then $B(\boldsymbol u,|z|/2)$ misses $\mathbb R^2$ entirely.</p>
|
548,776 | <p>I am new in Topos theory. I have actually just started learning. I am reading MacLane-Moerdijk's book, as it was suggested to me as the best introduction to the subject. Unfortunately I can not make sense of the following.</p>
<p>In section 5 of Chapter I, (page 41) they build what is needed to prove that in the presheaf category $[\mathcal{C}^{\textrm{op}},\mathbf{Set}]$, any object is the colimit of a diagram of representable objects. They define the category of elements of $P$ as follows: The objects are all pairs $(C,p)$ where $C$ is an object of $\mathcal{C}$ and $p$ is an element $p\in P(C)$. Its morphisms $(C',p')\to (C,p)$ are those morphisms $u:C'\to C$ of $\mathcal{C}$ for which $pu=p'$. I do not understand the morphisms of this category. $p$ is a set and $u$ a morphism in $\mathcal{C}$ so what does $pu$ mean? Do they mean $P(u)(p)=p'$ maybe?</p>
<p>Thanks for any help.</p>
| Ittay Weiss | 30,953 | <p>$u:C'\to C$ is a morphism, and $P$ is a presheaf, thus $P(u):P(C)\to P(C')$ is just a function between two sets, which you can apply to $p\in P(C)$ and get an element in $P(C')$, that is simply the element $P(u)(p)$, and the requirement is, as you write, just $P(u)(p)=p'$. </p>
<p>Notice that there is no guess work involved here and really there is no other way to interpret it. Also, notice that they introduce this shorthand notation a few pages earlier when they introduce presheaf categories.</p>
|
8,816 | <p>What is the result of multiplying several (or perhaps an infinite number) of binomial distributions together?</p>
<p>To clarify, an example.</p>
<p>Suppose that a bunch of people are playing a game with k (to start) weighted coins, such that heads appears with probability p < 1. When the players play a round, they flip all their coins. For each heads, they get a coin to flip in the next round. This is repeated every round until they have a round with no heads.</p>
<p>How would I calculate the probability distribution of the number of coins a player will have after n rounds? Especially if n is extremely high and p extremely close to 1?</p>
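<p>Under the reading that each head yields exactly one coin in the next round, the count after a round is $\mathrm{Binomial}(\text{current count}, p)$, so each of the original $k$ coins independently survives $n$ rounds with probability $p^n$ and the count after $n$ rounds is $\mathrm{Binomial}(k, p^n)$. A small exact check of this claim (the values $k=3$, $p=4/5$, $n=3$ are arbitrary illustrative choices):</p>

```python
from fractions import Fraction
from math import comb

k, n = 3, 3          # starting coins and number of rounds (illustrative)
p = Fraction(4, 5)   # probability of heads (illustrative)

# exact distribution of the coin count, evolved one round at a time:
# given i coins, the next count is Binomial(i, p)
dist = {k: Fraction(1)}
for _ in range(n):
    new = {}
    for i, pr in dist.items():
        for j in range(i + 1):
            step = comb(i, j) * p**j * (1 - p) ** (i - j)
            new[j] = new.get(j, Fraction(0)) + pr * step
    dist = new

# each original coin survives n rounds with probability p^n
pn = p**n
for j in range(k + 1):
    assert dist[j] == comb(k, j) * pn**j * (1 - pn) ** (k - j)
print("after n rounds the coin count is Binomial(k, p^n)")
```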
| Mariano SuΓ‘rez-Γlvarez | 1,409 | <p>In [Matsusaka, T. Polarized varieties, fields of moduli and generalized Kummer varieties of polarized abelian varieties. Amer. J. Math. 80 1958 45--82.] it is proved that a non-singular projective variety has a maximal algebraic group of automorphisms (that is, every group which acts on the variety by automorphisms contains a maximal algebraic subgroup)</p>
<p>[Matsumura, Hideyuki; Oort, Frans. Representability of group functors, and automorphisms of algebraic schemes. Invent. Math. 4 1967 1--25.] shows the automorphism group (of an algebraic proper scheme over a field) is representable by a group scheme.</p>
|
1,459,334 | <p>How can we show that additive inverse of a real number equals the number multiplied by -1, i.e. how can we show that $(-1)*u = -u$ for all real numbers $u$?</p>
| timber | 275,791 | <p>\begin{align}
&& (1 + (-1)) &= 0 \\
&\implies& (1+(-1))\cdot u &= 0\cdot u = 0\\
&\implies& u + (-1)\cdot u &= 0\\
&\implies& (-u) + (u + (-1)\cdot u) &= -u\\
&\implies& (-u + u) + (-1)\cdot u &= -u\\
&\implies& (-1)\cdot u &= -u
\end{align}</p>
|
1,282,486 | <p>Given the function $f(x) = |8x^3 - 1|$ in the set $A = [0, 1].$
Prove that the function is not differentiable at $x = \frac12.$ </p>
<p>The answer in my book is as follows:</p>
<p>$$\lim_{x \to \frac12-} \dfrac{f(x)-f(1/2)}{x-1/2} = -6$$
$$\lim_{x \to \frac12+} \dfrac{f(x)-f(1/2)}{x-1/2} = 6$$ </p>
<p>Can anyone explain how the $6$'s were derived. I understand that as $x$ tends to $\frac12$ from the negative side, the bottom will be negative, so thats why the first one is a minus.</p>
<p>But how do you get to the $6$, what am I missing? Obviously $f(\frac12)=0$ but what do you make $f(x)=$ as $x$ tends to $\frac12$</p>
<p>Thanks</p>
| copper.hat | 27,978 | <p>Let $f_+(x)= 8x^3-1$, $f_-(x) = 1-8x^3$, these are both smooth.</p>
<p>In the following $f$ is the function the OP defined. (I am not redefining $f$.)</p>
<p>If $x \ge { 1\over 2}$ then $f(x) = f_+(x)$.</p>
<p>If $x \le { 1\over 2}$ then $f(x) = f_-(x)$.</p>
<p>Hence the one sided derivatives of $f$ will match the derivatives of $f_+,f_-$
as appropriate. Explicitly, $f_+'(x)=24x^2$ and $f_-'(x)=-24x^2$, so at $x=\frac12$ the right-hand derivative is $24\cdot\frac14=6$ and the left-hand derivative is $-24\cdot\frac14=-6$.</p>
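<p>These one-sided values can also be seen numerically from the difference quotients:</p>

```python
def f(x):
    return abs(8 * x**3 - 1)

h = 1e-6
right = (f(0.5 + h) - f(0.5)) / h     # approaches +6
left = (f(0.5 - h) - f(0.5)) / (-h)   # approaches -6

assert abs(right - 6) < 1e-3
assert abs(left + 6) < 1e-3
print(right, left)
```

Since the two one-sided quotients tend to different values, $f$ is not differentiable at $x=\frac12$.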
|
422,143 | <p>Let $f$ be a differentiable function on $\mathbb R$ with $f(x)= e^{f'(x)}$ and
$f(0)=1$.</p>
<p>I have proved that $f(x)=1$ for every $x\lt0$. I'm stuck for $x\gt0$.</p>
| TZakrevskiy | 77,314 | <p>Take $f(x)=1$ and then apply the <a href="http://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="nofollow">Picard–Lindelöf theorem</a></p>
<p><strong>Edit.</strong></p>
<p>Well, for short, this is the theorem that guarantees local existence and uniqueness of solutions of Cauchy problems for ODEs (i.e. one of the most important theorems in the theory of ODEs) under some hypotheses on the equation and initial data. You can look up the precise formulation on the wiki, but here is what we do: in a neighborhood of the initial data the logarithm function is well-defined and is a diffeomorphism, so we can safely apply it to both sides of the equation:</p>
<p>$$f'(x)=\ln(f(x)), f(0)=1.$$
We check the conditions of our theorem: the map $(x,f)\to \ln(f)$ is continuous in $x$ (in fact, it doesn't depend on it) and is Lipschitz in $f$ in some neighborhood of $f(0)=1$ (it's sufficient to check that it's $\mathcal C^1$ in that neighborhood). Thus, we can guarantee the existence and uniqueness of a solution $f(x)$ of our ODE on $x\in (-\varepsilon,\varepsilon)$ for some $\varepsilon>0$. On the other hand, the function $f(x)=1\,\forall x$ fits our ODE; thus we conclude that this function is indeed the only solution to our ODE.</p>
|
19,880 | <p>I want to write down the degree-6 Maclaurin polynomial of $\ln(\cos(x))$. I'm having trouble understanding what I need to do, let alone explaining why it's true rigorously.</p>
<p>The known expansions of $\ln(1+x)$ and $\cos(x)$ gives:</p>
<p>$$\forall x \gt -1,\ \ln(1+x)=\sum_{n=1}^{k} (-1)^{n-1}\frac{x^n}{n} + R_{k}(x)=x-\frac{x^2}{2}+\frac{x^3}{3}+R_{3}(x)$$
$$\cos(x)=\sum_{n=0}^{k} (-1)^{n}\frac{x^{2n}}{(2n)!} + T_{2k}(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+T_{4}(x)$$</p>
<p>Writing $\ln(1+x)$ with $t=x+1$ gives:</p>
<p>$$\forall t \gt 0,\ \ln(t)=\sum_{n=1}^{k} (-1)^{n-1}\frac{(t-1)^n}{n} + R_{k}(t)$$</p>
<p>But now I'm clueless. </p>
<ul>
<li>Do I just 'plug' $\cos(x)$ expansion in $\ln(t)$? Can I even do that?</li>
<li>Isn't it a problem that $\ln(x)$ isn't defined for $x\leq 0$ but $|\cos(x)| \leq 1$?</li>
</ul>
| Chris Card | 1,470 | <p>Try starting from the definition of the MacLaurin series (e.g. as defined here: <a href="http://mathworld.wolfram.com/MaclaurinSeries.html" rel="nofollow">http://mathworld.wolfram.com/MaclaurinSeries.html</a>)?</p>
|
359,212 | <p>I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$?
Thanks a lot!</p>
| Jim | 56,747 | <p>There are also extensions of $\mathbb F_p$. For example $\mathbb F_2[t]/(t^2 + t + 1)$ is a field with $4$ elements. It has characteristic $2$.</p>
<p>You might try perusing the <a href="http://en.wikipedia.org/wiki/Finite_field">wikipedia page</a> which has more examples.</p>
|
359,212 | <p>I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$?
Thanks a lot!</p>
| azimut | 61,691 | <p>For each power $p^r$ of $p$, there is (up to isomorphism) a unique finite field of characteristic $p$ and size $p^r$; it is denoted by $\mathbb F_{p^r}$.
For the construction, you take an irreducible polynomial $f\in\mathbb F_p[x]$ of degree $r$. Then $\mathbb F_{p^r} \cong \mathbb F_p[x]/(f)$.
The smallest non-trivial example is $\mathbb F_4 = \mathbb F_{2^2}$. A concrete construction for it is given in the answer of Jim.</p>
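<p>As a quick illustration of this construction (the pair encoding below is my own sketch, taking $f = x^2+x+1$ over $\mathbb F_2$ as in the smallest example): multiply in $\mathbb F_4 = \mathbb F_2[x]/(x^2+x+1)$ and check that every nonzero element is invertible.</p>

```python
# Elements of F_4 = F_2[x]/(x^2 + x + 1), encoded as pairs (a0, a1) = a0 + a1*x.
def mul(p, q):
    a0, a1 = p
    b0, b1 = q
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2,
    # and x^2 = x + 1 in this quotient (coefficients mod 2)
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

nonzero = [(1, 0), (0, 1), (1, 1)]
# every nonzero element has a multiplicative inverse, so the quotient is a field
invertible = all(any(mul(p, q) == (1, 0) for q in nonzero) for p in nonzero)
```

<p>For instance $x \cdot (x+1) = x^2 + x = 1$, matching the table one would compute by hand.</p>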
|
359,212 | <p>I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$?
Thanks a lot!</p>
| Seirios | 36,434 | <p>As said in the other answers, if $P$ is an irreducible polynomial on $\mathbb{F}_p$ of degree $d$, then $k=\mathbb{F}_p[X]/ (P)$ is a field of cardinality $p^d$ and is of characteristic $p$. For a nice presentation of $k$, you can say that $k=\mathbb{F}_p(A) \subset M_d(\mathbb{F}_p)$ where $A$ is the <a href="http://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow">companion matrix</a> of $P$.</p>
<p>For instance, the example given by Jim becomes: $k= \left\{ \left( \begin{array}{cc} b & -a \\ a & b-a \end{array} \right) \mid a,b \in \mathbb{F}_2 \right\}$.</p>
|
2,571,031 | <p>I need to solve the quadratic programming problem $$ \text{minimize}\,\, \sum_{j=1}^{n}(x_{j})^{2} \\ \text{subject to}\,\,\, \sum_{j=1}^{n}x_{j}=1,\\ 0 \leq x_{j}\leq u_{j}, \, \, j=1,\cdots , n $$</p>
<p>I know that the first thing I need to do is form the Lagrangian. </p>
<p>Now, for a problem in standard form (note that below, $\overline{x}$, $\overline{\lambda}$, $\overline{\mu}$ denote vectors): $$ \text{minimize} \, \, f_{0}(\overline{x}) \\ \text{subject to} \,\,\, f_{i}(\overline{x}) \leq 0, \,\,\, i=1,\cdots, m \\ h_{i}(\overline{x}) = 0, \,\,\, i = 1,\cdots, p $$ the Lagrangian looks like this: $\displaystyle L(\overline{x},\overline{\lambda}, \overline{\mu}) = f_{0}(\overline{x}) + \sum_{i=1}^{m}\lambda_{i}f_{i}(\overline{x}) + \sum_{i=1}^{p}\mu_{i}h_{i}(\overline{x})$</p>
<p>In this case, I am being thrown off by the fact that my sole $h_{i}(\overline{x})$ happens to be a sum that adds up to $1$, and if I want my $f_{i}(\overline{x})$'s to be $\leq 0$, I'm going to need to rewrite the last line of constraints as $x_{j} - u_{j} \leq 0$, $j = 1,\cdots , n$ and $-x_{j} \leq 0$, $j = 1, \cdots, n$.</p>
<p>Then, would my Lagrangian be $\displaystyle L(\overline{x},\overline{\lambda}, \overline{\mu}) = \sum_{j=1}^{n}(x_{j})^{2} + \sum_{j=1}^{n}\lambda_{j}(x_{j}-u_{j}) + \sum_{j=1}^{n}\nu_{j} (-x_{j}) + \mu\left[\left(\sum_{j=1}^{n}x_{j} \right)-1\right]$ ?</p>
<p><strong>And then, how would I go about completing the problem?</strong> I've never done a problem with this many Lagrange variables in it before, nor with this many constraints, and so I'm finding it a little overwhelming...</p>
<p>Thank you ahead of time for your time and patience!</p>
| max_zorn | 506,961 | <p>I ran into space limitations, so here is what I wanted as a comment:</p>
<p>Hi @ALannister: </p>
<p>1] if the $u_i$'s are large, then the solution is $\tfrac{1}{n}\mathbf{e}_n$, where $\mathbf{e}_n$ is the vector of all $1$'s in $\mathbb{R}^n$. </p>
<p>2] view your problem as a projection problem: you want to project the origin onto the intersection of the box with the hyperplane with normal vector $\mathbf{e}_n$ and offset value $1$. </p>
<p>3] For notational convenience, assume $u_1\leq u_2\leq\cdots\leq u_n$. </p>
<p>4] As you blow up the ball (intersected with the nonnegative orthant), it will either hit first the hyperplane or the boundary of the box you are given. </p>
<p>5] If it hits the hyperplane first, then you are done (the problem is really unconstrained). </p>
<p>6] If you hit the box boundary first, you will hit it at $x_1=u_1$. This value is now fixed. </p>
<p>7] The remaining variables $x_2,\ldots x_n$ are now in a box of one less dimension, and the hyperplane has now normal vector $\mathbf{e}_{n-1}$, the all-ones in $\mathbb{R}^{n-1}$, and the offset is $1-u_1$. </p>
<p>8] Repeat this argument until you are done. This leads to your solution. </p>
<p>I suspect this is known. Is this a homework problem from a book? If so, please let us know where this problem comes from; it is neat. </p>
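<p>A small sketch of steps 1]-8] in code (the function name and the feasibility assumption $\sum_i u_i \ge 1$ are mine, not from the argument above): at each stage either split the remaining budget evenly over the remaining variables, or clamp the smallest remaining bound and recurse.</p>

```python
def project(u):
    """Min sum x_i^2 s.t. sum x_i = 1, 0 <= x_i <= u_i, assuming sum(u) >= 1.
    Follows the geometric recursion: clamp the smallest bound, then repeat."""
    u = sorted(u)                      # step 3]: assume u_1 <= u_2 <= ... <= u_n
    budget = 1.0
    out = []
    n = len(u)
    for i, ui in enumerate(u):
        share = budget / (n - i)       # even split over the remaining variables
        if share <= ui:                # step 5]: the hyperplane is hit first
            out.extend([share] * (n - i))
            break
        out.append(ui)                 # step 6]: x_i = u_i is now fixed
        budget -= ui                   # step 7]: smaller problem, smaller offset
    return out
```

<p>For example, with $u = (0.1, 0.5, 0.9)$ this gives $x = (0.1, 0.45, 0.45)$, which indeed beats e.g. $(0.1, 0.5, 0.4)$ in sum of squares.</p>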
|
1,988,420 | <p>An Ant is on a vertex of a triangle. Each second, it moves randomly to an adjacent vertex. What is the expected number of seconds before it arrives back at the original vertex?</p>
<p>My solution: I don't know how to use Markov chains yet, but I'm guessing that could be a way to do this. I was wondering if there was an intuitive way to solve this problem. I would have guessed 3 seconds as an answer. </p>
<p>I'm assuming that if it is at Vertex A, there is a 1/2 chance of going to Vertex B or C. So minimum number of seconds is 2 seconds. Max number could be infinite if it keeps bouncing back between B and C without returning to A.</p>
<p>I'm still not sure how to do this puzzle.</p>
| Graham Kemp | 135,106 | <p>The ant moves $1$ time, then moves another $N$ times to return to the original vertex. After the first move, the ant either takes one more move back to the original vertex, or it moves to the other non-original vertex, then continues trying from there. Recursively then:</p>
<p>$$\mathsf E(N) ~=~ \tfrac 12+\tfrac 12(1+\mathsf E(N)) \\ ~ \\ \therefore \quad \mathsf E(N)+1=3$$</p>
<p>The expected time the ant takes to move and return to the origin is $3$ seconds.</p>
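<p>A quick Monte Carlo sanity check (the simulation below is my own sketch, not part of the argument): walk on vertices $\{0,1,2\}$ and average the number of moves until the ant is back where it started.</p>

```python
import random

def return_time(trials=20000, seed=1):
    """Average number of moves until the ant is back at its starting vertex."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while True:
            # from any vertex, the ant moves to one of the two other vertices
            pos = rng.choice([v for v in (0, 1, 2) if v != pos])
            steps += 1
            if pos == 0:
                break
        total += steps
    return total / trials

estimate = return_time()
```

<p>The estimate comes out close to the expected $3$ seconds.</p>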
|
512,037 | <p>This is a question from our reviewer for our exam for linear algebra. I just want to have some ideas how to tackle the problem.</p>
<p>If $A$ is an $n\times n$ matrix with integer coefficients, such that the sum of each row's elements is equal to $m$, show that $m$ divides the determinant.</p>
| achille hui | 59,379 | <p>Given any $n \times n$ matrix $A = (a_{ij})$ whose row sums all equal to $m$, i.e.</p>
<p>$$\sum_{j=1}^n a_{ij} = m, \quad\text{ for } 1 \le i \le n \tag{*1}$$</p>
<p>Let $\text{adj}(A) = (b_{ij})$ be its <a href="http://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow">adjugate matrix</a>.
We know</p>
<p>$$\text{adj}(A) A = \det(A) I_n
\quad\iff\quad
\sum_{j=1}^n b_{ij} a_{jk} = \det(A) \delta_{ik}\quad\text{ for } 1 \le i, k \le n
\tag{*2}$$
where $\delta_{ik}$ is the <a href="http://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow">Kronecker's delta</a>. In RHS of (*2), if we fix $i$ to $1$ and then sum over $k$, $(*1)$ will imply:</p>
<p>$$m \sum_{j=1}^n b_{1j} = \sum_{j=1}^n \sum_{k=1}^n b_{1j} a_{jk} = \det(A) \sum_{k=1}^n \delta_{1k} = \det(A)$$
When $a_{ij}$ are all integers, so are the $b_{ij}$. As a result $\displaystyle \sum_{k=1}^n b_{1k}$ is an integer and hence $m$ divides $\det(A)$.</p>
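<p>A brute-force check of the statement (this snippet is my own addition; the determinant routine is a plain cofactor expansion, fine for small $n$): generate random integer matrices with constant row sum $m$ and verify that $m$ divides the determinant.</p>

```python
import random

def det(M):
    """Integer determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

rng = random.Random(0)
for _ in range(200):
    n, m = rng.randint(2, 4), rng.randint(-10, 10)
    # n - 1 free integer entries per row; the last entry forces the row sum to m
    A = [[rng.randint(-5, 5) for _ in range(n - 1)] for _ in range(n)]
    for row in A:
        row.append(m - sum(row))
    d = det(A)
    if m == 0:
        assert d == 0      # row sums 0 force A(1,...,1)^T = 0, so det(A) = 0
    else:
        assert d % m == 0
```
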
|
897,633 | <p><strong>First question:</strong></p>
<p>Let's say we have a hypothesis test:</p>
<p>${ H }_{ 0 }:u=100$
and ${ H }_{ 1 }:u\neq 100$.</p>
<p>The sample has a size of 10 and gives an average $u=103$ and a p-value = 0.08.
The level of significance is 0.05.</p>
<p>I'm asked the following question (exam):</p>
<p>A) We can conclude that $u=100$</p>
<p>B) We cannot conclude that $u\neq100$</p>
<p>The 2 answers are rather similar, but not the same. I would say B) but I'm not so sure given what I've read.</p>
<p>The p-value here indicates that we cannot reject the null hypothesis, so we cannot accept H1 ?</p>
<p><strong>Second question:</strong></p>
<p>What does it mean exactly that a test is significant ?</p>
<p>Does it mean that we can reject the null hypothesis ?</p>
<p>Thanks in advance.</p>
<p>Regards,</p>
| user99680 | 99,680 | <p>Aren't you also given either the population or sample deviation? If any of these are given, you assume the data is distributed a certain way, e.g., normally, with $\mu=100, \sigma=\sigma_0$. Knowing this, i.e., the (assumed) parameters of the population you are sampling from, then allows you to compute the probability of obtaining the value of 103 under this distribution (say , but not neccesarily, normal) $\mu=100, \sigma=\sigma_0$. If the probability of this value 103 is less than , say,(the significance level) 5% , then you reject ;otherwise you do not reject. </p>
<p>So rejecting a hypothesis at a significance level of k% given the assumption $\mu=\mu_0$, and given some value $s$ for the sample (or population) deviation, when the sample mean is $\mu_2$, just means that the probability of obtaining a value of $\mu_2$ in a population with mean $\mu_0$ and deviation $s$ is less than $k$%. Of course there are variants of this test for different parameters.</p>
<p>As above poster wrote, if the p-value-- the probability of observing the value you observed-- is less than the significance value/level, then you reject , but you reject at the given significance value/level. The explanation for what is going on is in the above paragraphs ; let me know if it was not clear.</p>
|
3,193,305 | <p>A random variable <span class="math-container">$x$</span> takes values in the set <span class="math-container">$\{1, 2, \ldots ,n\}$</span>. Let <span class="math-container">$x$</span> have distribution function <span class="math-container">$f(k) = Y(n)\cdot g^k$</span> where <span class="math-container">$g$</span> is a fixed number between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Find <span class="math-container">$Y(n)$</span>, which is a constant in terms of <span class="math-container">$n$</span>. </p>
<p>I do not know how to determine <span class="math-container">$Y(n)$</span>. Can I integrate <span class="math-container">$f(k)$</span> ? Thank You. </p>
| Robert Lewis | 67,071 | <p>Given the equation</p>
<p><span class="math-container">$y'' + a(x)y = 0, \tag 1$</span></p>
<p>with Wronskian</p>
<p><span class="math-container">$W[y_1, y_2] = y_1y_2' - y_1'y_2, \tag 2$</span></p>
<p>we have</p>
<p><span class="math-container">$W' = (y_1y_2' - y_1'y_2)' = y_1'y_2' + y_1y_2'' - y_1''y_2 - y_1'y_2' = y_1y_2'' - y_1''y_2; \tag 3$</span></p>
<p>now, (1) implies</p>
<p><span class="math-container">$y_i'' = -a(x)y_i, \; i = 1, 2; \tag 4$</span></p>
<p>substituting (4) into (3) we find</p>
<p><span class="math-container">$W' = -a(x)y_1y_2 + a(x)y_1y_2 = 0; \tag 5$</span></p>
<p>it follows that <span class="math-container">$W$</span> is constant; thus if we initialize the <span class="math-container">$y_i$</span>, <span class="math-container">$y_i'$</span>, <span class="math-container">$i = 1, 2$</span> such that </p>
<p><span class="math-container">$W[y_1, y_2] = 1, \tag 6$</span></p>
<p><span class="math-container">$W$</span> will remain fixed at <span class="math-container">$1$</span> over the entire range of <span class="math-container">$x$</span> for which the <span class="math-container">$y_i$</span> exist. This may be accomplished by setting</p>
<p><span class="math-container">$y_1(x_0) = 1 = y_2'(x_0), \tag 7$</span></p>
<p>and</p>
<p><span class="math-container">$y_1'(x_0) = 0 = y_2(x_0); \tag{7'}$</span></p>
<p>then</p>
<p><span class="math-container">$W[y_1, y_2](x_0) = 1, \tag 8$</span></p>
<p>whence</p>
<p><span class="math-container">$\forall x, \; W[y_1, y_2](x) = 1, \tag 9$</span></p>
<p>provided the <span class="math-container">$y_i$</span> satisfy (1) with the stated conditions at <span class="math-container">$x_0$</span>. </p>
<p><strong><em>Note Added in Edit, Friday 19 April 2019 8:41 AM PST:</em></strong> The equation (1) is in fact a special case of the more general second order equation</p>
<p><span class="math-container">$y'' + b(x)y' + a(x)y = 0; \tag{10}$</span></p>
<p>computing </p>
<p><span class="math-container">$W' = y_1y_2'' - y_1''y_2 \tag{11}$</span></p>
<p>using</p>
<p><span class="math-container">$y_i'' = -b(x)y_i' - a(x)y_i, \; i = 1, 2, \tag{12}$</span></p>
<p>we find that instead of (5) we obtain</p>
<p><span class="math-container">$W' = y_1(-b(x)y_2' - a(x)y_2) - (-b(x)y_1' - a(x)y_1)y_2 = -b(x)y_1y_2' + b(x)y_1'y_2$</span>
<span class="math-container">$= -b(x)(y_1y_2' - y_1'y_2) = -b(x)W; \tag{13}$</span></p>
<p>the solution of this simple first-order, linear, homogeneous equation for <span class="math-container">$W(x)$</span> is well-known to take the form</p>
<p><span class="math-container">$W(x) = \exp \left ( \displaystyle -\int_{x_0}^x b(s) \; ds \right ) W(x_0), \tag{14}$</span></p>
<p>from which we see that <span class="math-container">$W(x)$</span> will not in general be constant. Indeed, if <span class="math-container">$W(x)$</span> <em>is</em> a constant, so that</p>
<p><span class="math-container">$W(x) = W(x_0), \; \forall x \in I, \tag{15}$</span></p>
<p>then</p>
<p><span class="math-container">$\exp \left ( \displaystyle -\int_{x_0}^x b(s) \; ds \right ) = 1, \; \forall x \in I, \tag{16}$</span></p>
<p>which implies that</p>
<p><span class="math-container">$\displaystyle \int_{x_0}^x b(s) \; ds = 0, \forall x \in I; \tag{17}$</span></p>
<p>if <span class="math-container">$b(x)$</span> is assumed continuous we may differentiate this equation to obtain</p>
<p><span class="math-container">$b(x) = 0, \forall x \in I, \tag{18}$</span></p>
<p>which shows that <span class="math-container">$W(x)$</span> is constant if and only if <span class="math-container">$b(x)$</span> vanishes. <strong><em>End of Note.</em></strong></p>
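<p>A numerical illustration of the constancy of <span class="math-container">$W$</span> for equations of type (1) (the integrator below is a standard RK4 sketch of my own, not part of the derivation): with the stated initial conditions, the Wronskian stays at <span class="math-container">$1$</span> even for a non-constant coefficient such as <span class="math-container">$a(x) = 1 + x^2$</span>.</p>

```python
def wronskian_check(a, x0=0.0, x1=2.0, steps=4000):
    """Integrate y'' + a(x) y = 0 for two solutions with W(x0) = 1
    and return the Wronskian at x1 (classic fourth-order Runge-Kutta)."""
    h = (x1 - x0) / steps
    # state u = (y1, y1', y2, y2'); initial data y1=1, y1'=0, y2=0, y2'=1
    u = [1.0, 0.0, 0.0, 1.0]

    def f(x, u):
        return [u[1], -a(x) * u[0], u[3], -a(x) * u[2]]

    x = x0
    for _ in range(steps):
        k1 = f(x, u)
        k2 = f(x + h / 2, [u[i] + h / 2 * k1[i] for i in range(4)])
        k3 = f(x + h / 2, [u[i] + h / 2 * k2[i] for i in range(4)])
        k4 = f(x + h, [u[i] + h * k3[i] for i in range(4)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]
        x += h
    return u[0] * u[3] - u[1] * u[2]   # y1 y2' - y1' y2
```

<p>For <span class="math-container">$a(x)=1$</span> the two solutions are <span class="math-container">$\cos x$</span> and <span class="math-container">$\sin x$</span>, and the Wronskian is the Pythagorean identity.</p>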
|
1,464,522 | <blockquote>
<p>Let <span class="math-container">$O_n(\mathbb Z)$</span> be the group of orthogonal matrices (matrices <span class="math-container">$B$</span> s.t. <span class="math-container">$BB^T=I$</span>) with entries in <span class="math-container">$\mathbb Z$</span>.<br>
1) How do I show that <span class="math-container">$O_n(\mathbb Z)$</span> is a finite group and find its order?<br>
2) I need to show also that symmetric group <span class="math-container">$S_n$</span> is a subgroup of <span class="math-container">$O_n(\mathbb Z)$</span>.</p>
</blockquote>
<p>So it needs to satisfy associativity/identity/inverse.</p>
<p>It is easy to see that every orthogonal matrix <span class="math-container">$A \in O_n(\mathbb Z)$</span> has an inverse, namely <span class="math-container">$A^T$</span>. Moreover, the product of two orthogonal matrices is orthogonal since <span class="math-container">$(AB)^T = B^T A^T$</span>. If <span class="math-container">$A, B \in O_n(\mathbb Z)$</span> then <span class="math-container">$(AB)^T(AB) = B^T A^T AB = B^T I B = B^T B = I$</span> (note <span class="math-container">$B^TB = I$</span> follows from <span class="math-container">$BB^T = I$</span> for square matrices), hence <span class="math-container">$O_n(\mathbb Z)$</span> is closed under multiplication, since <span class="math-container">$I \in O_n(\mathbb Z)$</span>.</p>
| Empy2 | 81,790 | <p>HINT: Every column vector has length $1$, and all $b_{ij}$ are integers, so exactly one of $b_{i1}$ is $\pm1$ and all the others are zero.</p>
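<p>Following this hint (the brute force below is my own illustration): since every row and column has length $1$, all entries lie in $\{-1,0,1\}$, so small cases can be enumerated directly. The counts match the order $2^n\, n!$ of the group of signed permutation matrices, which also contains $S_n$ as the permutation matrices.</p>

```python
from itertools import product

def orthogonal_count(n):
    """Brute-force count of n x n integer matrices B with B B^T = I.
    Entries are searched in {-1, 0, 1}, which suffices by the hint."""
    count = 0
    for entries in product((-1, 0, 1), repeat=n * n):
        B = [entries[i * n:(i + 1) * n] for i in range(n)]
        # check (B B^T)_{ij} = delta_{ij}
        if all(sum(B[i][k] * B[j][k] for k in range(n)) == (i == j)
               for i in range(n) for j in range(n)):
            count += 1
    return count
```
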
|
1,652,165 | <p>On an empty shelf you have to arrange $3$ cans of soup, $4$ cans of beans, and $5$ cans of tomato sauce. What is the probability that none of the cans of soup are next to each other?</p>
<p>I tried working this out but get very stuck because I'm not sure that I'm including all the possible outcomes. </p>
| Ross Millikan | 1,827 | <p>I would use inclusion/exclusion. There are $12!$ ways to arrange the cans. To have two soup next to each other, group two cans of soup into a pair. There are six ways to do that, then $11!$ ways to order the cans with the pair together. We have counted the ways to have all three together twice, however, which is $6 \cdot 10!$. So the total chance is $\frac {6(11!-10!)}{12!}=\frac 5{11}$ that two are together and $\frac 6{11}$ that no two are together.</p>
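<p>A quick exhaustive check of the $\frac 6{11}$ (my own addition): since cans of the same kind are interchangeable, it is enough to look at which $3$ of the $12$ positions hold soup, all $\binom{12}{3}$ choices being equally likely.</p>

```python
from fractions import Fraction
from itertools import combinations

# count the 3-element position sets (out of 12 shelf slots) with no two adjacent
total = good = 0
for pos in combinations(range(12), 3):   # pos is sorted
    total += 1
    good += all(b - a >= 2 for a, b in zip(pos, pos[1:]))
prob = Fraction(good, total)
```

<p>This gives $120/220 = 6/11$, agreeing with the inclusion/exclusion count.</p>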
|
1,640,285 | <p>A single-celled spherical organism contains $70$% water by volume. If it loses $10$% of its water content, how much would its surface area change by approximately?</p>
<ol>
<li>$3\text{%}$</li>
<li>$5\text{%}$</li>
<li><p>$6\text{%}$</p></li>
<li><p>$7\text{%}$</p></li>
</ol>
| Harish Chandra Rajpoot | 210,295 | <p>Let $r$ be the original radius; the original volume of water is $$V_1=\frac{70}{100}\times \frac{4}{3}\pi r^3=\frac{14}{15}\pi r^3$$ & volume of organic material
$$V_2=\frac{30}{100}\times \frac{4}{3}\pi r^3=\frac{2}{5}\pi r^3$$
& surface area $S_0=4\pi r^2$</p>
<p>when it loses $10$% of water then remaining volume of water $$V_1'=\frac{90}{100}\times V_1= \frac{90}{100}\times\frac{70}{100} \times \frac{4}{3}\pi r^3=\frac{21}{25}\pi r^3$$
If $r'$ is the radius of the remaining cell, then the new volume of the cell is $$\frac{4}{3}\pi r'^3=V_1'+V_2=\frac{21}{25}\pi r^3+\frac{2}{5}\pi r^3=\frac{31}{25}\pi r^3$$
$$\implies r'=\left(0.93 \right)^{1/3}r$$
hence the new surface area of cell $$S'=4\pi r'^2=4\pi \left(\left(0.93\right)^{1/3}r\right)^2=4\pi \left(0.93 \right)^{2/3}r^2$$</p>
<p>hence % change in surface area of the cell $$\frac{S'-S_0}{S_0}\times 100$$
$$=\frac{4\pi \left(0.93 \right)^{2/3}r^2-4\pi r^2}{4\pi r^2}\times 100=\left(\left(0.93\right)^{2/3}-1\right)\times 100\approx -4.7229,$$ i.e. a decrease of approximately $5\ \text{%}$.</p>
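<p>The arithmetic, checked in a couple of lines (my own addition):</p>

```python
# Volumes in units of pi*r^3: water is 70% of 4/3, organic material is 30%.
water = 0.9 * (14 / 15)                     # after losing 10% of the water
organic = 2 / 5
ratio_cubed = (water + organic) / (4 / 3)   # equals (r'/r)^3 = 0.93
change_pct = (ratio_cubed ** (2 / 3) - 1) * 100
```

<p><code>change_pct</code> comes out around $-4.72$, i.e. the surface area shrinks by roughly $5$%.</p>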
|
990,796 | <p>I have a homework question in a discrete mathematics class that asks me to determine how many 7-digit id numbers <strong>do not</strong> contain three consecutive sixes. </p>
<p>It seems clear that I should approach this by determining the number that <strong>do</strong> have three consecutive sixes and subtracting that from $10^7$, the total number of possibilities. </p>
<p>If it were asking simply for the number of values that contain three sixes, it would simply be $10^4$, or $9^4$ for <em>exactly</em> three sixes. But what's got me thinking is the requirement that they appear consecutively. By sketching out this: </p>
<pre><code>___ ___ ___ ___ ___ ___ ___
x x x 1
x x x 2
x x x 3
x x x 4
x x x 5
</code></pre>
<p>I determined that there are five possible placements, so I'm thinking that the number of values with three consecutive sixes is $5 * 10^4$. But because arrangements 1&4, 1&5 and 2&5 can coincide, this overcounts by $3 *10 = 30$, so I get a final total of $5 * 10^4 - 30$ values with three consecutive sixes. </p>
<p>Questions: </p>
<ul>
<li>Is my basic approach here logically sound? </li>
<li>Making that sketch felt awfully "hacky" - is there a mathematical technique I could have used instead - particularly when I'm dealing with bigger values? </li>
</ul>
| Tim | 171,636 | <p>This short python script:</p>
<pre><code>x = 0
for i in range(10**6, 10**7):
    if '666' in str(i):
        x += 1
</code></pre>
<p>spat out an answer of 42291 at the end!</p>
<p>It works by checking every 7-digit number from 1,000,000 to 9,999,999, not every 7-digit string from 0000000 to 9999999, so id numbers with leading zeros are not counted. I am not sure which you are looking for. This may be your problem. </p>
<p>9000000 - 42291 = 8957709</p>
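<p>A cross-check with a tiny counting automaton (my own addition, not from the script above): track the current trailing run of sixes while building the string digit by digit. It reproduces the 42291 for the numbers 1,000,000 to 9,999,999, and gives 45991 when leading zeros are allowed.</p>

```python
def count_with_666(length):
    """Digit strings of the given length (leading zeros allowed) containing '666'."""
    c = [1, 0, 0]                      # counts by trailing-run-of-6s state (0, 1, 2)
    for _ in range(length):
        # any of the 9 non-6 digits resets the run; a 6 extends it (3 sixes = done)
        c = [9 * sum(c), c[0], c[1]]
    return 10 ** length - sum(c)       # complement of the '666'-avoiding strings
```

<p>A string starting with <code>0</code> contains <code>666</code> iff its last six digits do, so the two counts differ by exactly <code>count_with_666(6)</code>.</p>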
|
1,684,741 | <p>I'm able to show it isn't absolutely convergent as the sequence $\{1^n\}$ clearly doesn't converge to $0$ as it is just an infinite sequence of $1$'s. How do I prove the series isn't conditionally convergent to prove divergence!</p>
| Obinna Nwakwue | 307,490 | <p>As far as I know, here is a way you can do this. Let a set $F$ be equal to: $$F = \{1, 16, 81, 256, 625, 1296, \cdot \cdot \cdot \}$$ which is every positive perfect fourth power. Now, let $d = y^4$ in which $d \in F$. Now, you are going to have to test each number (sorry, but guess and check is the only way I know for this one), and you'll get $d = 81$. So, for $d \ \text{mod} \ 7 = 4$, $d = 81$. But, you need to take the fourth root of that. When you calculate $\sqrt[4]{81}$, your answer is 3. So, your final answer is $y = 3$.</p>
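<p>The guess-and-check can be automated (my own addition): since fourth powers mod $7$ only depend on $y \bmod 7$, it is enough to scan the residues, and both $y \equiv 3$ and $y \equiv 4 \pmod 7$ turn out to work (as expected, since $4 \equiv -3 \pmod 7$).</p>

```python
# residues y (mod 7) whose fourth power is congruent to 4 mod 7
solutions = [y for y in range(1, 7) if y ** 4 % 7 == 4]
```
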
|
2,972,950 | <p>Everything in this question is in the complex plane.</p>
<p>As the book describes a property of a winding number, it says that:</p>
<blockquote>
<p>Outside of the [line segment from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>] the function <span class="math-container">$(z-a) / (z-b)$</span> is never real and <span class="math-container">$\leq 0$</span>.</p>
</blockquote>
<p>Here, the above statement should be interpreted as "never (real and <span class="math-container">$\leq 0$</span>)".</p>
<p>If anyone could explain why this is true that would be great. I do get why any point on the line segment (other than <span class="math-container">$b$</span>, in which case the denominator is <span class="math-container">$0$</span>) has to satisfy the condition that <span class="math-container">$(z-a) / (z-b)$</span> is real and <span class="math-container">$\leq 0$</span>, but I am not sure how to prove why any point not on the line has to satisfy the condition also.</p>
<p>Here, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary complex number in a region determined by a closed curve in the complex plane; both points lie on the same region. </p>
| d0SO'N | 605,549 | <p>Sorry, I misread the question at first; thanks to Eric for pointing it out. Also see his answer for the geometric perspective.</p>
<p>Anyways, if the statement were true, then there exists <span class="math-container">$c$</span> such that <span class="math-container">$k(c - b) = c - a$</span> for a real number <span class="math-container">$k \le 0$</span>. That is,
<span class="math-container">$$a - kb = (1 - k)c\\
\text{Let } l = (1 - k)^{-1}, \text{where } 0 \lt l \le 1.\\
c = {a - kb\over 1 - k} = la + (1 - l)b = a + (1 - l)(-a + b)$$</span></p>
<p>Yet, <span class="math-container">$c$</span> cannot intersect the line segment from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>. That is, <span class="math-container">$c \neq a + m(b - a)$</span> for any real number <span class="math-container">$0 \le m \le 1$</span>. But we have just found <span class="math-container">$0 \le 1 - l \lt 1$</span> above. So the statement can only be false.</p>
<p><span class="math-container">$$ $$</span></p>
<p>Original answer:</p>
<p>I'm not sure if that statement as displayed is actually true. Say f{z} = (z - a)/(z - b), and let a = -1 - i, b = 1 + i. f{c} = 3 for c = 2 + 2i.</p>
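<p>A numeric check of the computation above (the sample endpoints are my own choice): for each real $t \le 0$, solving $(z-a)/(z-b) = t$ for $z$ lands on a segment parameter $s \in [0,1)$.</p>

```python
# sample endpoints; any distinct complex a, b would do
a, b = complex(-1, -1), complex(1, 1)

def segment_param(t):
    """Solve (z - a)/(z - b) = t for z, then return s with z = a + s*(b - a)."""
    z = (a - t * b) / (1 - t)
    return (z - a) / (b - a)

params = [segment_param(-k / 10) for k in range(200)]   # t = 0, -0.1, ..., -19.9
```

<p>Every $s$ comes out real with $0 \le s < 1$, matching $m = 1 - l$ in the proof.</p>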
|
1,301,116 | <p>We know that if $f \in \mathcal R[a,b]$ and if $a = c_0 < c_1<\cdots<c_m =b$, then the restrictions of $f$ to each subinterval $[c_{i-1},c_i]$ are Riemann integrable.</p>
<p>Is the converse true, i.e. if $f : [a,b] \to \Bbb R$, $a = c_0 < c_1<\cdots<c_m =b$, and the restrictions of $f$ to each subinterval $[c_{i-1},c_i]$ belong to $\mathcal R[c_{i-1},c_i]$, is $f$ then Riemann integrable? </p>
<p>I am finding it difficult to show this.</p>
| Tim Raczkowski | 192,581 | <p>Must be true. Probably easiest to show that if $f$ is not Riemann integrable over some $[c_{i-1},c_i]$, then it is not integrable over $[a,b]$.</p>
|
25,488 | <p>I have noticed a common pattern followed by many students in crisis:</p>
<ul>
<li>They experience a crisis or setback (injury, illness, tragedy, etc)</li>
<li>This causes them to miss a lot of class.</li>
<li>They may stay away from class longer than they "need to" because of shame: they feel that since they have been absent, coming back to class will cause their teacher to be critical of them.</li>
<li>Once they do come back, they are overwhelmed. They might need to do twice the amount of work in all of their courses just to catch up. In classes where early material becomes a needed prerequisite for later material, it can become impossible to follow the course content being presented when they do return.</li>
<li>This experience of failure can create long term patterns of giving up under pressure.</li>
</ul>
<p>I have known many <strong>very</strong> capable students who fall into this trap and end up flunking out of University. I could have fallen into this trap myself. As a high school student and college student I got close on several occasions. I have a lot of empathy for this kind of situation.</p>
<p>Question: What kinds of policies and practices can institutions and individual faculty members adopt which help students out of this trap?</p>
| Stef | 16,159 | <p>I've personally known two students who went through such a crisis, and didn't flunk. What set them apart from what you describe is that in both cases, they advised the university of their personal issue when it happened, and were able to come up with an acceptable plan together with the university.</p>
<p>This completely removed the "shame" part that you describe. In one case, the student still had to retake the year because they ended up missing too much, but there was no shame in that, just an extra year at university. In the other case, the student managed to validate the year and go on to the next year even though they missed several months of class. But in neither case did they flunk or really experience "failure".</p>
<p>A few lessons to draw from that:</p>
<ul>
<li>Contact the student as soon as the tragic event happens;</li>
<li>Devise a plan together with the student;</li>
<li>Communication is key; defuse any unwarranted feelings of "shame" the student might have for missing class;</li>
<li>Do not wait until it's too late! Start communicating as soon as you can.</li>
</ul>
|
25,488 | <p>I have noticed a common pattern followed by many students in crisis:</p>
<ul>
<li>They experience a crisis or setback (injury, illness, tragedy, etc)</li>
<li>This causes them to miss a lot of class.</li>
<li>They may stay away from class longer than they "need to" because of shame: they feel that since they have been absent, coming back to class will cause their teacher to be critical of them.</li>
<li>Once they do come back, they are overwhelmed. They might need to do twice the amount of work in all of their courses just to catch up. In classes where early material becomes a needed prerequisite for later material, it can become impossible to follow the course content being presented when they do return.</li>
<li>This experience of failure can create long term patterns of giving up under pressure.</li>
</ul>
<p>I have known many <strong>very</strong> capable students who fall into this trap and end up flunking out of University. I could have fallen into this trap myself. As a high school student and college student I got close on several occasions. I have a lot of empathy for this kind of situation.</p>
<p>Question: What kinds of policies and practices can institutions and individual faculty members adopt which help students out of this trap?</p>
| Tommi | 2,083 | <p>One obvious solution to the problem is obligatory student presence at contact teaching (maybe 80 %). This will make students concerned about being present and can stop the process where shame at not having been there causes more absence.</p>
<p>In particular, an attendance policy usually carries with it an assumption of communication: that students tell of absences and their reasons, and the faculty also keeps track and takes contact if the absences start approaching the limit. This assumption of communication undercuts the spiral of shame.</p>
<p>Of course, this has its own issues in terms of keeping track of attendance and taking away agency from the students. Hence, it is not an obviously correct solution, but maybe still worth considering.</p>
|
3,509,441 | <p>Given a complex matrix <span class="math-container">$A$</span> which is <span class="math-container">$n \times n$</span>, how would I go about showing that the trace of <span class="math-container">$A^*A$</span> is <span class="math-container">$$\sum_{i=1}^n \sum_{j = 1}^n | a_{ij} |^2$$</span></p>
<p>Here <span class="math-container">$A^*$</span> refers to the complex conjugate - transpose of <span class="math-container">$A$</span>. I know that the trace of any <span class="math-container">$n \times n$</span> matrix is defined to be <span class="math-container">$$\sum _{i = 1}^n a_{ii}$$</span> </p>
| Vsotvep | 176,025 | <p>Here's an alternative proof, using induction.</p>
<p>For <span class="math-container">$n=0$</span> the case is easy: there is only one pair of disjoint subsets of <span class="math-container">$\varnothing$</span>, namely <span class="math-container">$(\varnothing,\varnothing)$</span>.</p>
<p>Suppose that there are <span class="math-container">$3^n$</span> pairs of disjoint subsets of <span class="math-container">$\{1,\dots,n\}$</span>. For each pair of disjoint sets <span class="math-container">$(A,B)$</span>, where both <span class="math-container">$A,B\subseteq\{1,\dots,n\}$</span> we can make three different pairs of disjoint subsets of <span class="math-container">$\{1,\dots,n,n+1\}$</span>, namely <span class="math-container">$(A,B)$</span>, <span class="math-container">$(A\cup\{n+1\},B)$</span> and <span class="math-container">$(A,B\cup\{n+1\})$</span>. I'll leave it to you to show that this gives unique sets for every <span class="math-container">$(A,B)$</span>. Hence the number of pairs of disjoint subsets of <span class="math-container">$\{1,\dots,n,n+1\}$</span> is three times the number of pairs of disjoint subsets of <span class="math-container">$\{1,\dots,n\}$</span>, which is <span class="math-container">$3\cdot 3^n=3^{n+1}$</span> by induction hypothesis.</p>
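<p>A brute-force confirmation for small $n$ (my own addition; equivalently, each element independently goes to $A$, to $B$, or to neither, which also gives $3^n$):</p>

```python
from itertools import combinations

def subsets(xs):
    """Yield all subsets of the list xs as sets."""
    for r in range(len(xs) + 1):
        for c in combinations(xs, r):
            yield set(c)

def disjoint_pairs(n):
    """Count ordered pairs (A, B) of disjoint subsets of {1, ..., n}."""
    ground = list(range(1, n + 1))
    return sum(1 for A in subsets(ground)
                 for B in subsets(ground) if not (A & B))
```
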
|
438,263 | <p>Is there a concrete example of a <span class="math-container">$4$</span> tensor <span class="math-container">$R_{ijkl}$</span> with the same symmetries as the Riemannian curvature tensor, i.e.
<span class="math-container">\begin{gather*}
R_{ijkl} = - R_{ijlk},\quad R_{ijkl} = R_{jikl},\quad R_{ijkl} = R_{klij}, \\
R_{ijkl} + R_{iklj} + R_{iljk} = 0.
\end{gather*}</span>
for which there is no metric for which it is the Riemannian curvature tensor?</p>
<p>The existence of such a curvature was already shown by <a href="https://mathoverflow.net/questions/202211/equations-satisfied-by-the-riemann-curvature-tensor">Robert Bryant</a>, however, I'm looking for a concrete example.</p>
| Ofir Gorodetsky | 31,469 | <p>If you can motivate the problem and make some partial progress on it, you can try and publish it as a paper in a specialized journal, or at the very least upload it to the arXiv.</p>
<p>If you only have empirical evidence, there are journals that are receptive to this kind of this ("Mathematics of Computation" and "Experimental Mathematics" spring to mind).</p>
<p>If it concerns elementary mathematics and is relevant to a wide enough audience you can try a popular journal such as "American Mathematical Monthly".</p>
<p>Other than that, you can</p>
<ul>
<li>put in on your website, if you have one.</li>
<li>share it with experts, as they have the highest chance of solving it, and at the very least of assessing its importance and difficulty and possibly guiding you towards relevant literature or a proof.</li>
<li>share it in the problem session of a relevant conference. Problems from such sessions tend to be published in the form of conference proceedings.</li>
</ul>
|
1,315,805 | <blockquote>
<p>Let the series $$\sum_{n=1}^\infty \frac{2^n \sin^n x}{n^2}$$. For $x\in (-\pi/2, \pi/2)$, when is the series converges?</p>
</blockquote>
<p>By the root-test:</p>
<p>$$\sqrt[n]{a_n} = \sqrt[n]{\frac{2^n\sin^n x}{n^2}} = \frac{2\sin x}{n^{2/n}} \to 2\sin x$$</p>
<p>Thus, the series converges $\iff 2\sin x < 1 \iff \sin x < \frac{1}{2}$</p>
<p>Is that right?</p>
| T. Eskin | 22,446 | <p>It's almost done. Remember that the root test has absolute values inside the root, so you're looking at $\lim_{n\to\infty}|a_{n}|^{\frac{1}{n}}$. As you noticed, this gives us the convergence in $-1<2\sin x<1$. For the boundary value $2\sin x=1$ you have the convergent series $\sum n^{-2}$, and for the boundary value $2\sin x =-1$ you have the series $\sum (-1)^{n}n^{-2}$, which converges by the alternating series test. So your sum converges for all $-1\leq 2\sin x \leq 1$, i.e for all $x\in [-\frac{\pi}{6},\frac{\pi}{6}]$.</p>
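<p>A quick numeric sanity check of the endpoints (my own addition): at $x = \pi/6$ the series is $\sum n^{-2} = \pi^2/6$, at $x = -\pi/6$ it is $\sum (-1)^{n} n^{-2} = -\pi^2/12$, and just outside the interval the partial sums blow up.</p>

```python
from math import sin, pi

def partial_sum(x, terms=100000):
    """Partial sum of sum_{n>=1} (2 sin x)^n / n^2."""
    r = 2 * sin(x)
    return sum(r ** n / n ** 2 for n in range(1, terms + 1))
```
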
|
36,774 | <p>Do asymmetric random walks also return to the origin infinitely often?</p>
| athos | 26,632 | <p>No, it doesn't.</p>
<p>For a random walk, consider the following point of view $v_k$:</p>
<ol>
<li>let $+1$ and $-k$ be the two "frozen points", $p_k$ is the probability hitting $+1$, $1-p_k$ hitting $-k$, or, equivalently</li>
<li>let $-1$ and $+k$ be the two "frozen points", $q_k$ is the probability hitting $-1$, $1-q_k$ hitting $+k$.</li>
</ol>
<p>With two such "forks", we can construct view $v_{k+1}$, such that</p>
<ol>
<li>let $+1$ and $-(k+1)$ be the two "frozen points", $p_{k+1}$ is the probability hitting $+1$, $1-p_{k+1}$ hitting $-(k+1)$, or, equivalently</li>
<li>let $-1$ and $+(k+1)$ be the two "frozen points", $q_{k+1}$ is the probability hitting $-1$, $1-q_{k+1}$ hitting $+(k+1)$.</li>
</ol>
<p>We have: $p_{k+1} = \frac{p_k}{1-(1-p_k)(1-q_k)}$ and $q_{k+1} = \frac{q_k}{1-(1-p_k)(1-q_k)}$.</p>
<p>Obviously, $p_{k+1}/q_{k+1} = p_k/q_k$, so points $(p_k, q_k)$ line up on a line from the origin $(0,0)$.</p>
<p>Also, $1/p_k > p_{k+1}/p_k = q_{k+1}/q_k < 1/q_k$, so points $(p_k,q_k)$ are bounded in the square $(0,0)$, $(1,0)$, $(1,1)$ and $(0,1)$.</p>
<p>So if $p_1 = 1-q_1 < 1/2$, the probability of ever hitting $+1$ is $p_\infty = p_1/q_1 < 1$ (and symmetrically when $q_1 < 1/2$). $p_1 = q_1 = 1/2$ is the only situation with probability $1$ of hitting both $+1$ and $-1$.</p>
<p>Hence, the probability of returning to origin, is less than $1$, unless the random walk is symmetric.</p>
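<p>The "fork" recursion above is easy to iterate numerically (a hypothetical sketch; $p_1$ is the probability of an up-step and $q_1 = 1-p_1$):</p>

```python
def hitting_probabilities(p1, q1, iterations=200):
    # Iterate p_{k+1} = p_k / (1 - (1-p_k)(1-q_k)), and likewise for q_k,
    # as in the recursion derived above; returns (p_k, q_k) after the
    # given number of steps.
    p, q = p1, q1
    for _ in range(iterations):
        denom = 1 - (1 - p) * (1 - q)
        p, q = p / denom, q / denom
    return p, q
```

<p>With $p_1=0.4,\ q_1=0.6$ the pair converges to $(p_1/q_1,\,1)=(2/3,\,1)$: the walk hits $-1$ almost surely but hits $+1$ only with probability $2/3$, so it returns to the origin with probability less than $1$. In the symmetric case $p_1=q_1=1/2$ both probabilities increase toward $1$, but only slowly: after $k$ iterations each equals $(k+1)/(k+2)$.</p>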
|
296,727 | <p><b>Assuming that G is a finite cyclic group, let "a" be the product of all the elements in the group.</b> </p>
<p>i. <b> If G has odd order, then a=e.</b> Is this because there is an even number of non-trivial elements, which pair off with their inverses within the product?</p>
<p>ii. <b> If G has even order then a is not equal to e. </b> Here there is an odd number of non-trivial elements. There must also be an odd number of elements that are their own inverses, and therefore a cannot equal e?</p>
<p>Thanks for any help! I'm just reading a textbook and trying practice problems!</p>
| Calvin Lin | 54,563 | <p>Since we know it is a finite cyclic group,</p>
<p><strong>Hint:</strong> $ 1 + 2 + \ldots + n = \frac {n (n+1)}{2}$</p>
<p>This is a multiple of $n$ if $n$ is odd, and not a multiple of $n$ if $n$ is even.</p>
<hr>
<p>Your statements are true, and don't just apply to a finite cyclic group. I would prefer your solution over mine, but I think the above is what the textbook had in mind.</p>
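<p>The hint is easy to check in additive notation (a small sketch with a hypothetical helper; $\mathbb{Z}_n$ models the cyclic group of order $n$, so the "product" of all elements is the sum $0+1+\cdots+(n-1) \bmod n$):</p>

```python
def product_of_all_elements(n):
    # In the additive cyclic group Z_n, the "product" of all elements
    # is the sum 0 + 1 + ... + (n-1), reduced mod n.
    return sum(range(n)) % n
```

<p>For odd $n$ this is $0$ (the identity); for even $n$ it is $n/2$, the unique element of order $2$, so the product equals the identity exactly when $n$ is odd.</p>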
|
3,062,597 | <p>For the statement of Rouché's theorem, I've always seen that both <span class="math-container">$f$</span> and <span class="math-container">$g$</span> have to be holomorphic on and inside a simple closed curve <span class="math-container">$ C $</span>. However, I am solving a problem which seems to suggest that I should use Rouché's theorem even though I only know that <span class="math-container">$ f $</span> is holomorphic in the unit disk <span class="math-container">$ D $</span> and continuous in <span class="math-container">$ \bar{D} $</span>. I also checked Wikipedia's page on Rouché's theorem, which says that <span class="math-container">$ f $</span> and <span class="math-container">$ g $</span> only need to be holomorphic inside the region, not on the boundary. Is this sufficient?</p>
 | Lutz Lehmann | 115,115 | <p>Yes, it is sufficient since by continuity the inequality assumption of Rouché's theorem extends to some neighborhood inside the boundary curve, and thus inside the holomorphic domain. In other words, shift the curve along some inside normal vector field a little bit, which is possible because of the compactness of the curve, to get a situation that conforms to the version of the theorem as you know it. </p>
|
4,620,319 | <p>Let's assume that for <span class="math-container">$0<\beta<\alpha<\frac{\pi}{2}$</span>, <span class="math-container">$\sin(\alpha+\beta) = \frac{4}{5}$</span>, and <span class="math-container">$\sin(\alpha-\beta) = \frac{3}{5}$</span>. Then, how could we find <span class="math-container">$\cot(\beta)$</span>?</p>
<p><span class="math-container">$$\sin(\alpha+\beta)+\sin(\alpha-\beta) = 2\sin(\alpha)\cos(\beta) = \frac{7}{5}$$</span></p>
<p><span class="math-container">$$\sin(\alpha+\beta)-\sin(\alpha-\beta) = 2\sin(\beta)\cos(\alpha) = \frac{1}{5}$$</span></p>
<p><span class="math-container">$$\tan(\alpha)\cot(\beta) = 7$$</span></p>
<p>But I am not sure where this would lead us.</p>
| Lai | 732,917 | <p><span class="math-container">$$
\begin{aligned}
& \sin ^2(\alpha+\beta)+\sin ^2(\alpha-\beta)=\frac{16}{25}+\frac{9}{25}=1 \\
\Rightarrow \quad & \sin ^2(\alpha+\beta)=1-\sin ^2(\alpha-\beta)=\cos ^2(\alpha-\beta)\\ \Rightarrow \quad &
(\sin \alpha \cos \beta+\sin \beta \cos \alpha)^2=(\cos \alpha \cos \beta+\sin \alpha \sin \beta)^2 \\\Rightarrow \quad &\left(\cos ^2 \beta-\sin ^{2} \beta\right)\left(\cos ^2 \alpha-\sin ^{2} \alpha\right)=0 \\ \Rightarrow \quad
&\cos ^2 \beta=\sin ^2 \beta \quad \textrm{ or }\cos ^2 \alpha=\sin ^2 \alpha \\\Rightarrow \quad & \cot\beta =1 \textrm{ or } \tan \alpha =1 \\ \Rightarrow \quad & \cot \beta=1 \quad \textrm{ or } \quad 7 \quad \textrm{ (By }\tan \alpha \cot \beta = 7)
\end{aligned}
$$</span></p>
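<p>A numeric cross-check of the two cases (a hypothetical sketch; the constraint $0<\beta<\alpha<\frac{\pi}{2}$ forces $\alpha-\beta=\arcsin\frac35$, while $\alpha+\beta$ may be either $\arcsin\frac45$ or $\pi-\arcsin\frac45$):</p>

```python
import math

sum_candidates = [math.asin(4 / 5), math.pi - math.asin(4 / 5)]  # alpha + beta
diff = math.asin(3 / 5)                                          # alpha - beta

cot_betas = []
for s in sum_candidates:
    a, b = (s + diff) / 2, (s - diff) / 2
    if 0 < b < a < math.pi / 2:          # keep only admissible solutions
        cot_betas.append(round(1 / math.tan(b), 6))
```

<p>Both candidates turn out admissible, giving $\cot\beta = 7$ and $\cot\beta = 1$, in agreement with the derivation.</p>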
|
234,851 | <p>Find the length of the curve $x=0.5y\sqrt{y^2-1}-0.5\ln(y+\sqrt{y^2-1})$ from y=1 to y=2.</p>
<p>My attempt involves finding $\frac {dy}{dx}$ of that function first, which leaves me with a massive equation.</p>
<p>Next, I used this formula, </p>
<p>$$\int_1^2\sqrt{1+(\frac{dy}{dx})^2}$$</p>
<p>this attempt leaves me with such a messy long equation that eventually took up 2 pages, and still left me unsolved. I am convinced there must be an easier way.</p>
<p>Any hints please? thanks in advance.</p>
| mythealias | 31,292 | <p>You should get $$\frac{dx}{dy} = \sqrt{y^2 - 1}$$
So make sure that you are not making a mistake there.</p>
<p>From there use the correct equation for $L$ as mentioned by Vafa.</p>
|
234,851 | <p>Find the length of the curve $x=0.5y\sqrt{y^2-1}-0.5\ln(y+\sqrt{y^2-1})$ from y=1 to y=2.</p>
<p>My attempt involves finding $\frac {dy}{dx}$ of that function first, which leaves me with a massive equation.</p>
<p>Next, I used this formula, </p>
<p>$$\int_1^2\sqrt{1+(\frac{dy}{dx})^2}$$</p>
<p>this attempt leaves me with such a messy long equation that eventually took up 2 pages, and still left me unsolved. I am convinced there must be an easier way.</p>
<p>Any hints please? thanks in advance.</p>
| preferred_anon | 27,150 | <p>We have $$2x=y\sqrt{(y^{2}-1)}-\ln(y+\sqrt{y^{2}
-1})$$
Make the substitution $y=\cosh(u)$. Over the interval you are concerned with,
$$2x=\cosh(u)\sinh(u)-u=\frac{1}{2}\sinh(2u)-u$$
Note that $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}=\sinh(u)\frac{du}{dx}$. Differentiate the above with respect to $x$:
$$2=\cosh(2u)\frac{du}{dx}-\frac{du}{dx}=\frac{du}{dx}(\cosh(2u)-1)=2\sinh^2(u)\frac{du}{dx}$$
Therefore $\frac{du}{dx}=\frac{1}{\sinh^2(u)}$ and $\frac{dy}{dx}=\frac{1}{\sinh{u}}$, and $\frac{dx}{dy}=\sinh(u)$</p>
<p>Your arc-length is $$\int_{y=1}^{y=2}\sqrt{1+\left(\frac{dx}{dy}\right)^2} dy=\int_{y=1}^{y=2}\sqrt{1+\sinh^{2}(u)} \sinh(u)du$$
Since $1+\sinh^2(u)=\cosh^2(u)$, you should be able to compute this fairly easily.</p>
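<p>Either route gives the same integrand: since $(dx/dy)^2=\sinh^2(u)=y^2-1$, we get $1+(dx/dy)^2=y^2$ and the arc length is $\int_1^2 y\,dy=\frac32$. A quick numerical sanity check (hypothetical helper names; a central difference approximates the derivative):</p>

```python
import math

def x_of_y(y):
    return 0.5 * y * math.sqrt(y**2 - 1) - 0.5 * math.log(y + math.sqrt(y**2 - 1))

def dxdy(y, h=1e-6):
    # central-difference approximation of dx/dy
    return (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)

# Midpoint rule for the arc length integral of sqrt(1 + (dx/dy)^2) on [1, 2];
# since the integrand equals y, the result should be 3/2.
steps = 1000
length = sum(
    math.sqrt(1 + dxdy(1 + (i + 0.5) / steps) ** 2) / steps
    for i in range(steps)
)
```
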
|
3,140,696 | <p>I am trying to figure out the proper definition of a small circle on a biaxial ellipsoid of revolution. One definition is the intersection of the ellipsoid with a cone emanating from the center of the ellipsoid.</p>
<p>The other way I can imagine to define it is a plane intersecting the ellipsoid in which the plane does not also intersect the center of the ellipsoid (or else it would be a great circle).</p>
<p><a href="https://en.wikipedia.org/wiki/Circle_of_a_sphere" rel="nofollow noreferrer">This Wikipedia article</a> also discusses sphere-intersection, but it is limited to spheres, and I am interested in ellipsoids.</p>
<p>Does anyone know if these two methods, cone intersection and plane intersection, result in the same curve? If not, which one is a small circle, and what would be the name of the other resulting curve?</p>
| Intelligenti pauca | 255,730 | <p>The intersection with a plane is, in general, an ellipse. The intersection with a cone is a more complicated curve, in general not lying on a plane. If what you want is a planar circle lying entirely on the ellipsoid, then I'm afraid that is not possible, unless particular conditions are met (e.g. a rotationally symmetric ellipsoid, intersected by a plane perpendicular to the axis of symmetry).</p>
<p>EDIT.</p>
<p>If what you want is a closed line whose points have all the same distance <span class="math-container">$r$</span> from a given point <span class="math-container">$P$</span> on the surface of the ellipsoid, you can obtain it from the intersection of the ellipsoid with a sphere of radius <span class="math-container">$r$</span> centred at <span class="math-container">$P$</span>.</p>
|
165,489 | <p>I have a problem: find the smallest $n$ such that $1355297$ divides $10^{6n+5}-54n-46$. I tried everything using my scientific calculator, but I never got the correct results, and finally I gave up. Could you help me find the first 2 solutions of this equation? (Thanks.)</p>
| Michael E2 | 4,999 | <p>A more <em>Mathematica</em> way to do an exhaustive search:</p>
<pre><code>pp = 1355297;
Pick[#,
Mod[10^5 NestList[Mod[10^6 #, pp] &, 10^6, Length@# - 1] - (54 n + 46 /. n -> #), pp],
0] &@ Range[10^7] // AbsoluteTiming
(* {0.665031, {2331259, 3776127, 5366598, 5505709, 5652052, 7317951, 8306396, 8955772}} *)
</code></pre>
<p>The same applied to chunks, which quits after at least 2 solutions are found.</p>
<pre><code>Last@NestWhile[
 Function[args, (* args = {current range, current solutions} *)
With[{range = Range @@ args[[1]], sols = args[[2]]},
With[{s = Flatten[{
sols,
Pick[range,
Mod[10^5 NestList[Mod[10^6 #, pp] &,
PowerMod[10^6, First@range, pp],
Length@range - 1] - (54 n + 46 /. n -> range),
pp],
0]
}]},
{args[[1]] + 10^6, s} (* increment range *)
]]
],
 {{1, 10^6}, {}}, (* {initial range, initial solutions} *)
Length[#[[2]]] < 2 &, (* min. number of solutions *)
1,
100 (* MaxIterations *)
] // RepeatedTiming
(* {0.27, {2331259, 3776127}} *)
</code></pre>
<p>Somewhat surprisingly, it is faster than Henrik's compiled-to-C solution:</p>
<pre><code>cf[2] // RepeatedTiming
(* {0.283, {2331259, 3776127}} *)
</code></pre>
<p><em>Remark:</em> Disappointingly, <a href="http://reference.wolfram.com/language/ref/PowerMod.html" rel="noreferrer"><code>PowerMod</code></a> is slow, so I used <code>NestList</code>:</p>
<pre><code>foo = NestList[Mod[10^6 #, pp] &, 10^6, Length@# - 1] &[Range[10^7]]; // AbsoluteTiming
(* {0.480641, Null} *)
murf = PowerMod[10^6, Range[10^7], pp]; // AbsoluteTiming
(* {7.97349, Null} *)
foo == murf
(* True *)
</code></pre>
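<p>For readers without Mathematica, the search can be cross-checked in Python (a hypothetical sketch; like the <code>NestList</code> trick above, the modular power is updated incrementally instead of being recomputed from scratch at each $n$):</p>

```python
P = 1355297

def first_solutions(limit, count=2):
    # n such that P divides 10^(6n+5) - 54n - 46, for n = 1..limit
    sols = []
    power = pow(10, 11, P)   # 10^(6*1+5) mod P
    step = pow(10, 6, P)     # advancing n by 1 multiplies the power by 10^6
    for n in range(1, limit + 1):
        if (power - 54 * n - 46) % P == 0:
            sols.append(n)
            if len(sols) == count:
                break
        power = power * step % P
    return sols
```

<p>This reproduces the first two solutions found above, $n = 2331259$ and $n = 3776127$.</p>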
|
4,452,885 | <p>Let <span class="math-container">$z = e^{i\theta}, \theta \in \mathbb{R}$</span>. Then, does there exist <span class="math-container">$n \in \mathbb{N}$</span> such that:</p>
<p><span class="math-container">$$1 - z^n = re^{2 \pi i \tau}$$</span></p>
<p>for some <span class="math-container">$\tau \in \mathbb{Q}$</span>?</p>
<p>Naturally, this exists if <span class="math-container">$\theta$</span> is a rational multiple of <span class="math-container">$\pi$</span>. However, does this hold for any <span class="math-container">$\theta$</span>?</p>
<p>Although this question appears quite simple, I have no idea how I would approach it, and I suspect that its proof or disproof would be very difficult.</p>
<p>Link to <a href="https://math.stackexchange.com/questions/4452775/does-this-property-of-certain-fields-have-a-better-description?noredirect=1#comment9325845_4452775"><em>motivation</em></a> (it may appear entirely unrelated (it almost is); I wish to show that <span class="math-container">$\mathbb{C}$</span> has a certain property that I defined on fields with the addition of some analysis.)</p>
| Conrad | 298,272 | <p><span class="math-container">$\arg (1-z^n)=n\theta/2-\pi/2=2\pi \tau+2k\pi$</span> for some <span class="math-container">$k \in \mathbb Z$</span> so <span class="math-container">$\theta=\frac{4\pi (\tau+k+1/4)}{n}$</span> is a rational multiple of <span class="math-container">$\pi$</span></p>
|
2,716,363 | <p>I understand the core principles of proof by induction and how series summations work. However, I am struggling to rearrange the expression in the final (induction) step.</p>
<p>Prove by induction for all positive integers n,</p>
<p>$$\sum_{r=1}^n r^3 = \frac{1}{4}n^2(n+1)^2$$</p>
<p>After both proving for $n=1$ and assuming it holds true for $n=k$:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}k^2(k+1)^2+(k+1)^3$$</p>
<p>However I am unsure of how to proceed from here, the textbook says that the next step is to rearrange to give:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}(k+1)^2(k^2+4(k+1))$$</p>
<p>However I don't understand how they did this, can someone please clarify what they have done or suggest an alternative method to rearrange this equation to prove that the statement holds true for $k+1$ to give:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}(k+1)^2((k+1)+1)^2$$</p>
| E-A | 499,337 | <p>They factored out the $1/4 (k+1)^2$. So, from the first part of that sum, they got a $k^2$, and from the second part, they got a $4 (k+1)$. Just multiply it through if you want to see why it holds.</p>
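<p>The factoring step is a polynomial identity, so it can be checked exactly at a handful of integers (a hypothetical sketch; everything is multiplied by $4$ to stay in integer arithmetic):</p>

```python
# Four times each expression, so all arithmetic is exact integer arithmetic
def before(k):    # 4 * (1/4 k^2 (k+1)^2 + (k+1)^3)
    return k**2 * (k + 1)**2 + 4 * (k + 1)**3

def factored(k):  # 4 * (1/4 (k+1)^2 (k^2 + 4(k+1)))
    return (k + 1)**2 * (k**2 + 4 * (k + 1))

def target(k):    # 4 * (1/4 (k+1)^2 ((k+1)+1)^2)
    return (k + 1)**2 * (k + 2)**2
```

<p>Degree-4 polynomials that agree at five or more points are identical, so agreement on, say, $k=0,\dots,49$ proves the identity.</p>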
|
696,511 | <p>Find the absolute maximum and minimum values of the function:</p>
<p>$$f(x,y)=2x^3+2xy^2-x-y^2$$</p>
<p>on the unit disk $D=\{(x,y):x^2+y^2\leq 1\}$.</p>
| MGA | 60,273 | <p>We first take the partial derivatives with respect to each variable and set them to zero:</p>
<p>$$\frac{\partial f}{\partial x}=6x^2 + 2y^2 - 1=0$$</p>
<p>$$\frac{\partial f}{\partial y}=4xy - 2y=0$$</p>
<p>From the second equation, we have either: $y=0$ or $x=\frac{1}{2}$. Now we substitute each value into the first equation to get the corresponding variable: for $y=0$, we get $x=\pm \frac{1}{\sqrt{6}}$, while for $x=\frac{1}{2}$ we have no real solutions, so we discard this. </p>
<p>Note that both $(\frac{1}{\sqrt{6}},0)$ and $(-\frac{1}{\sqrt{6}},0)$ satisfy our constraint. We evaluate $f(\frac{1}{\sqrt{6}},0)$ and $f(-\frac{1}{\sqrt{6}},0)$, which give us $f\approx-0.27$ and $f\approx 0.27$, respectively.</p>
<p>We now need to check the perimeter of the unit disk. On this disk, $y^2 = 1-x^2$, so we substitute this into the original function to get:</p>
<p>$$ f = x^2 + x - 1 $$</p>
<p>We set $df/dx=0$ to get $x=-\frac{1}{2}$, and we evaluate $f$ at this point to get $-\frac{5}{4}$.</p>
<p>Finally we have to check the extreme values of $x$ on the unit disk: $x=-1$ and $x=1$, which give us $f=-1$ and $f=1$, respectively.</p>
<p>You now have your global minimum and maximum.</p>
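<p>A brute-force scan over a polar grid of the disk (a hypothetical sketch; the grid includes the boundary circle $r=1$) supports these values:</p>

```python
import math

def f(x, y):
    return 2 * x**3 + 2 * x * y**2 - x - y**2

values = [
    f(r * math.cos(t), r * math.sin(t))
    for r in (i / 200 for i in range(201))                # radii 0..1
    for t in (2 * math.pi * j / 400 for j in range(400))  # angles 0..2*pi
]
# Expected: minimum near -5/4 at (-1/2, ±sqrt(3)/2), maximum 1 at (1, 0).
```
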
|
5,528 | <p>Let H be a subgroup of G. (We can assume G finite if it helps.) A complement of H in G is a subgroup K of G such that HK = G and |H∩K|=1. Equivalently, a complement is a transversal of H (a set containing one representative from each coset of H) that happens to be a group.</p>
<p>Contrary to my initial naive expectation, it is neither necessary nor sufficient that one of H and K be normal. I ran across both of the following counterexamples in Dummit and Foote:</p>
<ul>
<li><p>It is not necessary that H or K be normal. An example is S<sub>4</sub> which can be written as the product of H=⟨(1234), (12)(34)⟩≅D<sub>8</sub> and K=⟨(123)⟩≅ℤ<sub>3</sub>, neither of which is normal in S<sub>4</sub>.</p></li>
<li><p>It is not sufficient that one of H or K be normal. An example is Q<sub>8</sub> which has a normal subgroup isomorphic to Z<sub>4</sub> (generated by i, say), but which cannot be written as the product of that subgroup and a subgroup of order 2.</p></li>
</ul>
<p>Are there any general statements about when a subgroup has a complement? The <A href="http://en.wikipedia.org/wiki/Complement_%28group_theory%29">Wikipedia page</A> doesn't have much to say. In practice, there are many situations where one wants to work with a transversal of a subgroup, and it's nice when one can find a transversal that is also a group. Failing that, one can ask for the smallest subgroup of G containing a transversal of H.</p>
| Steve D | 1,446 | <p>Version 2 of "<a href="https://arxiv.org/abs/math/0703471v2" rel="nofollow noreferrer">Factorization problems for finite groups</a>", which discusses bicrossed products (knit products), gives an elementary introduction, classifies some examples, and shows that $A_6$ cannot be written as such a product.</p>
|
508,104 | <p>I want to understand more about this proof from Lang's Algebra:</p>
<p>Let $B$ be a subgroup of a free abelian group $A$ with basis $(x_i)_{i=1...n}$. It has already been shown that $B$ has a basis of cardinality $\leq n$.</p>
<blockquote>
<p>...
We also observe that our proof shows that there exists at least one basis
of $B$ whose cardinality is $\leq n$. We shall therefore be finished when we prove
the last statement, that any two bases of $B$ have the same cardinality. Let $S$
be one basis, with a finite number of elements $m$. Let $T$ be another basis, and
suppose that $T$ has at least $r$ elements. It will suffice to prove that $r \leq m$ (one can then use symmetry). </p>
<p>Let $p$ be a prime number. Then <strong>$B/pB$ is a direct sum of cyclic groups of order $p$, with $m$ terms in the sum</strong>. Hence its order is $p^m$. Using the basis $T$ instead of $S$, we conclude that $B/pB$ contains an <strong>$r$-fold</strong> product of cyclic groups of order $p$, whence $p^r \leq p^m$ and $r \leq m$, as was to be shown. (Note that we did not assume a priori that T was finite.) </p>
</blockquote>
<p>I've bolded the parts I'm having trouble with. How do I show the first part and what's an $r$-fold product?</p>
<p>Alternative proofs to the problem welcome.</p>
<p>I know that $pB = \{ \sum_{i} p k_i x_i\ | \sum_{i} k_i x_i \in B\}$ and that it forms a normal subgroup, $A$ being abelian.</p>
| Boris Novikov | 62,565 | <p>1) "$r$-fold" means a direct sum of $r$ exemplars of a group.</p>
<p>2) Every element of $B/pB$ has the order $p$. It is known that such Abelian group is a direct sum of cyclic groups of order $p$.</p>
|
3,165,460 | <p>I am reading a survey on Frankl's Conjecture. It is stated without commentary that the set of complements of a union-closed family is intersection-closed. I need some clearer indication of why this is true, though I guess it is supposed to be obvious. </p>
| Eric Wofsey | 86,856 | <p>There are no uncountable scattered subsets of <span class="math-container">$[0,1]$</span>. This follows from the theory of Cantor-Bendixson rank, for instance. Given any scattered <span class="math-container">$A\subset[0,1]$</span>, there must be some ordinal <span class="math-container">$\alpha$</span> such that the <span class="math-container">$\alpha$</span>th Cantor-Bendixson derivative <span class="math-container">$A^\alpha$</span> of <span class="math-container">$A$</span> is empty. The least such <span class="math-container">$\alpha$</span> must be countable, since the Cantor-Bendixson derivatives are a descending chain of closed subsets of <span class="math-container">$A$</span> and <span class="math-container">$A$</span> is second-countable. Since every subset of <span class="math-container">$A$</span> can have only countably many isolated points (again by second-countability), this means <span class="math-container">$A$</span> is countable.</p>
|
960,865 | <p>How can I prove that $2$ is a primitive root mod $37$ without calculating all powers of $2$ mod $37$?</p>
| Jyrki Lahtonen | 11,619 | <p>As Peter Taylor explained, it suffices to verify that neither $2^{12}$ nor $2^{18}$ is congruent to $1$ modulo $37$. </p>
<p>If $2^{18}$ were $\equiv1\pmod{37}$, then (because there exists a primitive root) two would have to be a quadratic residue modulo $37$. But $37\equiv 5\pmod8$, so one of the supplements to the law of quadratic reciprocity tells us that this is not the case.</p>
<p>That leaves $2^{12}$. Here I recommend lab's way: $2^6\equiv-10$, so $2^{12}\equiv(-10)^2=100\not\equiv1$. Case closed.</p>
<hr>
<p>In general similar techniques can be used to check whether a given number is a primitive root modulo a prime $p$, if you can fully factor $p-1$. Assume that we have the factorization
$$
p-1=\prod_jq_j^{a_j},
$$
where $a_j$ are positive integers, and $q_j$ are primes for all $j$. Then we can conclude that $a$ is a primitive root, iff $a^{(p-1)/q_j}\not\equiv1\pmod p$ for all $j$. This is because all the proper factors of $p-1$ are factors of at least one of the numbers $(p-1)/q_j$. In the case $q_j=2$ we can try and use the law of quadratic reciprocity instead. To that end we need to be able to factor $a$. We can also always use <a href="http://en.wikipedia.org/wiki/Exponentiation_by_squaring" rel="nofollow">the square-and-multiply algorithm</a> to calculate high modular powers relatively efficiently.</p>
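<p>This recipe translates directly into code (a hypothetical sketch; trial division suffices to factor $p-1$ for small $p$, and the three-argument <code>pow</code> is exactly modular square-and-multiply):</p>

```python
def prime_factors(n):
    # distinct prime factors of n, by trial division
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(a, p):
    # a is a primitive root mod prime p iff a^((p-1)/q) != 1 (mod p)
    # for every prime q dividing p - 1
    return all(pow(a, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))
```

<p>For $p=37$ we have $p-1=36=2^2\cdot3^2$, so only $2^{18}$ and $2^{12}$ need checking, exactly as in the argument above; $2$ passes, while a square such as $4=2^2$ necessarily fails.</p>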
|
1,318,462 | <p>I am struggling with the following problem. Any help will be appreciated.</p>
<p>If the following statement true then please give a proof otherwise give a counterexample.</p>
<ol>
<li><p>If $a^{27} \equiv 1 \pmod{37}$, then $a^9 \equiv 1 \pmod{37}$ </p></li>
<li><p>$a^{9} \equiv 1 \pmod{37}$, then $a^3 \equiv 1 \pmod{37}$ </p></li>
<li><p>$a^{5} \equiv 1 \pmod{37}$, then $a^3 \equiv 1 \pmod{37}$ </p></li>
</ol>
<p>Thank you.</p>
| Bill Dubuque | 242 | <p>$(1)\ \ $ By Fermat $\ 1 \equiv a^{36}\equiv a^{27} a^9\,$ so $\,a^{27}\equiv 1\,\Rightarrow\,a^9\equiv 1$ </p>
<p>$(3)\ \ $ Similarly $\ a^{36}\equiv 1\equiv a^5\,\Rightarrow\ a = a^{36}/(a^5)^7\equiv 1$</p>
<p>$(2)\ \ $ By Fermat $\,(2^4)^9\equiv 1\,$ but $\,(2^4)^3\equiv 8^4\equiv (-10)^2\equiv -11\not\equiv 1$</p>
<p><strong>Remark</strong> $\ $ Generally $\, a^{36}\equiv 1\equiv a^k\,\Rightarrow\, a^{(36,k)}\equiv 1\,$ by $\, (36,k) = 36\,i + j k\ $ by Bezout. These propertes willl become clearer when you study cyclic groups.</p>
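<p>All three parts are small enough to check exhaustively mod $37$ (a hypothetical sketch; the loops simply test every residue class):</p>

```python
p = 37

# (1) a^27 = 1 implies a^9 = 1 (mod 37)
assert all(pow(a, 9, p) == 1 for a in range(1, p) if pow(a, 27, p) == 1)

# (3) a^5 = 1 implies a = 1, hence a^3 = 1 (mod 37)
assert all(a == 1 for a in range(1, p) if pow(a, 5, p) == 1)

# (2) counterexample a = 2^4 = 16: a^9 = 2^36 = 1 but a^3 = 2^12 != 1
assert pow(16, 9, p) == 1 and pow(16, 3, p) != 1
```
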
|
2,155,180 | <p>Let $f,g$ be analytic on some domain $\Omega \subset \mathbb{C}$. By Cauchy's formula, we have
$$
\frac{1}{2\pi i} \oint_{\partial\Omega}
\frac{f(z) \, g(z)}{z - z_0}
\, dz
=
f(z_0) \, g(z_0)
=
-\frac{1}{4\pi^2}
\oint_{\partial\Omega}
\frac{f(u)}{u - z_0}
\, du
\,
\oint_{\partial\Omega}
\frac{g(v)}{v - z_0}
\, dv
.
$$
Is there a way how I can get from the first expression to the last without the intermediate step? </p>
| Manuel Guillen | 416,103 | <p>\begin{align*}
20x &\equiv 49 &\pmod{23} \\
-3x &\equiv 3 &\pmod{23} \\
x &\equiv -1 &\pmod{23} \\
&\equiv 22 &\pmod{23}
\end{align*}</p>
|
343,281 | <p>Consider the following note written by Gerhard Gentzen in early 1932, on the onset of his work on a consistency proof for arithmetic:</p>
<blockquote>
<p>The axioms of arithmetic are obviously correct, and the principles of proof obviously preserve correctness. Why cannot one simply conclude consistency, i.e., what is the meaning of the second incompleteness theorem, the one by which consistency of arithmetic cannot be proved by arithmetic means? Where is the Godel-point hiding?</p>
</blockquote>
<p>The first question one might ask when reading this statement (plus three questions) is, how is it that Gentzen concludes that, "The axioms of arithmetic [read 'arithmetic' as meaning, <span class="math-container">$PA$</span>--my comment] are obviously correct."? Well, one might infer that Gentzen infers that "The axioms of arithmetic are obviously correct" by virtue of the fact that the axioms of <span class="math-container">$PA$</span> satisfy the following structure:</p>
<p><span class="math-container">$$\langle \mathfrak N, +, \times, = \rangle$$</span></p>
<p>where <span class="math-container">$\mathfrak N = \{ |, ||, |||,\ldots\},$</span> '<span class="math-container">$+$</span>' as meaning concatenation, '<span class="math-container">$\times$</span>' as meaning the Hilbert-Bernays definition of multiplication (e.g., || <span class="math-container">$\times$</span> ||| means replacing each | in || by |||, i.e., ||||||), and '=' as simply meaning equality as defined by the axioms of equality, i.e. for the axiom of equality 'a=a' one has, for the elements of <span class="math-container">$\mathfrak N$</span>, the following equalities:</p>
<p>{ |=|, ||=||, |||=|||,...} [given this, and the closure of <span class="math-container">$\mathfrak N$</span> under <span class="math-container">$+$</span> and <span class="math-container">$\times$</span>, how is it possible that <span class="math-container">$PA$</span>, satisfying this structure, could ever derive, say, '||=|||'?]</p>
<p>In his answer to Noah Schweber's mathoverflow question, "What are some proofs of Godel's Theorem which are <em>essentially different</em> from the original proof?", Ron Maimon mentions the "Jech/Woodin Set theory model proof". In regards to Gentzen's point of view (at least in early 1932), it might behoove one to take a close look at Prof. Jech's three-page paper (<em>Proceedings of the American Mathematical Society</em>, Volume 121, Number 1, May 1994, pp. 311-313).</p>
<p>Why? Because of "Remark 2" on pg. 312 which states:</p>
<blockquote>
<p>Even though our proof of Godel's Theorem [Second Incompleteness Theorem--my comment] uses the Completeness Theorem, it can be modified to apply to weaker theories such as Peano Arithmetic (<span class="math-container">$PA$</span>). To prove that <span class="math-container">$PA$</span> does not prove its own consistency, (unless it is inconsistent), we argue as follows:</p>
<p>Assume that <span class="math-container">$PA$</span> is consistent and that "<span class="math-container">$PA$</span> is consistent" is provable in <span class="math-container">$PA$</span>. There is a conservative extension <span class="math-container">$\Gamma$</span> [let it be <span class="math-container">$ACA_0$</span> as in Noah Schweber's answer--my comment] of <span class="math-container">$PA$</span> in which the Completeness Theorem is provable [Theorem 5.5, p. 443, of Takeuti's <em>Proof Theory</em>, 2nd ed.--my expansion of his comment by his reference], and moreover, <span class="math-container">$PA$</span> <span class="math-container">$\vdash$</span> (<span class="math-container">$\Gamma$</span> is a conservative extension of <span class="math-container">$PA$</span>). Therefore, <span class="math-container">$\Gamma$</span> <span class="math-container">$\vdash$</span> (<span class="math-container">$\Gamma$</span> is a conservative extension of a consistent theory) and thus proves its own consistency. Consequently, <span class="math-container">$\Gamma$</span> proves that <span class="math-container">$\Gamma$</span> has a model.</p>
<p>Now let <span class="math-container">$\Sigma$</span> be a sufficiently strong finite subset of of <span class="math-container">$\Gamma$</span> that proves that <span class="math-container">$\Sigma$</span> has a model; the proof above leads to a contradiction.</p>
</blockquote>
<p>Is this where the Godel-point is hiding with regards to Gentzen's statement and first question?</p>
<blockquote>
<p>The axioms of arithmetic are obviously correct, and the principles of proof obviously prove correctness. Why cannot one simply conclude consistency....?</p>
</blockquote>
<p>Would the 'Godel-point' in question be, following Prof. Jech's Main Theorem,</p>
<blockquote>
<p>It is unprovable in <span class="math-container">$ACA_0$</span> (unless <span class="math-container">$ACA_0$</span> is inconsistent) that there exists a model of <span class="math-container">$PA$</span>. ?</p>
</blockquote>
<p>Now as regards Noah Schweber's very nice answer, I have two questions regarding the following passage</p>
<blockquote>
<p>...However, we are <em>not</em> guaranteed that our model <span class="math-container">$\mathfrak M$</span> [of <span class="math-container">$ACA_0$</span>-- my comment] thinks that its first-order part actually satisfies <span class="math-container">$PA$</span>. That is, the "obvious truth" of the <span class="math-container">$PA$</span> axioms is not actually that obvious.</p>
<p>This is an example of a failure on an <span class="math-container">$\omega$</span>-rule: while for each axiom <span class="math-container">$\varphi$</span> of <span class="math-container">$PA$</span> we do in fact have "<span class="math-container">$Num$</span>(<span class="math-container">$\mathfrak M$</span>) <span class="math-container">$\vDash$</span> <span class="math-container">$\varphi$</span>" (appropriately phrased) is true in <span class="math-container">$\mathfrak M$</span>, we do <em>not</em> get from this that "<span class="math-container">$Num$</span>(<span class="math-container">$\mathfrak M$</span>) <span class="math-container">$\vDash$</span> each <span class="math-container">$PA$</span> axiom" is true in <span class="math-container">$\mathfrak M$</span>. And this is just like how being able to check each individual derivation in <span class="math-container">$PA$</span> doesn't give us a way to check all derivations at once, so it really shouldn't be suprising.</p>
</blockquote>
<ol>
<li>How does the above passage relate to Gentzen's note, especially the phrase</li>
</ol>
<blockquote>
<p>That is, the "obvious truth" of the <span class="math-container">$PA$</span> axioms is not actually that obvious.</p>
</blockquote>
<ol start="2">
<li>What perspective is Gentzen taking in his note (external or internal) and why does it matter what <span class="math-container">$\mathfrak M$</span> 'thinks' (so to speak) as regards Gentzen's note?</li>
</ol>
<p>Now two questions for Panu Raatikainen: as regards your statement</p>
<blockquote>
<p>In general, we just cannot see that they [the theories "which include elementary arithmetic and happen to be consistent"--my paraphrase of your earlier comment] are consistent.</p>
</blockquote>
<ol>
<li><p>Why not?</p></li>
<li><p>What was Gentzen 'seeing' when he made his statement ("The axioms of arithmetic are obviously correct, and the principles of proof obviously preserve correctness"), and why was his 'seeing' incorrect (i.e., leading to inconsistency)?</p></li>
</ol>
| Panu Raatikainen | 102,468 | <p>Gentzen's remark has some bite in the case the standard first-order arithmetic PA, because we plausibly know a bit more arithmetically.
But he apparently did not understand the generality of the incompleteness theorems: they hold for any theory which includes elementary arithmetic and happens to be consistent. In general, we just cannot see that they are consistent (even if they happen to be.)
And as soon as Gentzen would define what exactly he means by "arithmetic means", we can prove that the consistency of the theory of those arithmetic means cannot be proved by those arithmetic means. </p>
|
13,835 | <p>Given that $$X = \{(x,y,z) \in \mathbb{R}^3 |\, x^2 + y^2 + z^2 - 2(xy + xz + yz) = k\}\,,$$ where $k$ is a constant. Also given that a group $G$ is represented by $$\langle g_1,g_2,g_3|\, g_1^2 = g_2^2 = g_3^2 = 1_G\rangle\,.$$ $G$ acts on $X$ such that $$g_1 \cdot (x,y,z) = (2(y+z) - x,y,z)\,,$$ $$g_2 \cdot (x,y,z) = (x,2(x+z)-y,z)$$ and $$g_3 \cdot (x,y,z) = (x,y,2(x+y)-z)\,.$$ So given $(x_0,y_0,z_0) \in X$, how do I go about plotting the orbit of the point $(x_0,y_0,z_0)$, ie $$\{g \cdot (x_0,y_0,z_0) \,|\, g \in G\}\,,$$ on a 3D graph? Would <code>ListPointPlot3D</code> be useful to plot all the points? How about other plotting functions?</p>
| whuber | 91 | <p>Because the image of the group under this (linear) representation is infinite, we will need to limit the orbits. </p>
<h3>Working in the abstract group</h3>
<p>Presuming it may eventually be of interest to depict multiple orbits, let's compute a large number of group elements once and for all. It seems efficient to do this abstractly, in terms of the given presentation, before performing the matrix multiplications, because (a) the representation appears to be faithful (it introduces no new relations among the group elements) and (b) we can eliminate many unnecessary matrix multiplications at the outset. To this end, let's create a new object--<code>word</code>--to represent an abstract element of any group all of whose generators have order 2.</p>
<pre><code>Unprotect[NonCommutativeMultiply]; Clear[NonCommutativeMultiply];
word[g___] ** word[h___ ] := word[g, h];
word[g1___, g_] ** word[g_, h2___] := word[g1, h2];
Protect[NonCommutativeMultiply];
</code></pre>
<p>The first line is generic--that's how multiplication works in a free group--and the second line expresses the relations $g_i^2=1$.</p>
<p><code>NestList</code> will create all words involving up to <code>n</code> products of generators, starting with the identity (as expressed by the empty word). Applying <code>Union</code> at each stage eliminates duplicates:</p>
<pre><code>twoGroup[generators_List, n_Integer] :=
Flatten[NestList[(Flatten[Outer[NonCommutativeMultiply, #, generators]] // Union) &,
{word[]}, n]] // Union
</code></pre>
<p>(If you do not flatten the list, it will be partitioned into sublists corresponding to the word length.)</p>
<h3>Computing with a linear representation of the group</h3>
<p>The group action can now be computed by converting these abstract words into products of matrices and performing the multiplications once and for all. Because matrix multiplication (<code>Dot</code>) does not know the dimension of the representation for the empty word, we need to make provision for that special case.</p>
<pre><code>rep = {Subscript[g, 1] -> {{-1, 2, 2}, {0, 1, 0}, {0, 0, 1}},
Subscript[g, 2] -> {{1, 0, 0}, {2, -1, 2}, {0, 0, 1}},
Subscript[g, 3] -> {{1, 0, 0}, {0, 1, 0}, {2, 2, -1}},
word[] -> IdentityMatrix[3], word -> Dot};
</code></pre>
<p>Applying these replacement rules to the result of <code>twoGroup</code> will produce a list of matrices corresponding to all the abstract group elements it outputs (including the identity). (When the group action is not faithful, there may be duplicates among these matrices.)</p>
<h3>Plotting point orbits in a surface</h3>
<p>We're all set. But before showing the orbits, let's plot the surface as a reference.</p>
<pre><code>f[x_, y_, z_] := (x - y - z)^2 - 4 y z;
surface[{x0_, y0_, z0_}, cmax_] :=
ContourPlot3D[f[x, y, z], {x, -cmax, cmax}, {y, -cmax, cmax}, {z, -cmax, cmax},
Contours -> {f[x0, y0, z0]}, ContourStyle -> Opacity[0.5], Mesh -> None];
</code></pre>
<p>This has been formulated to draw the surface passing through a specified point $x_0$, limited within a specified cube. Now we can specify any point $x$, the maximum word length $n$, and let it fly (using, say, <code>ListPointPlot3D</code>):</p>
<pre><code>orbit[x_List, g_List] := #.x & /@ g;
Module[{x = {-1/6, -1/3, 1/4}, n = 9, points},
points = orbit[x, twoGroup[word /@ {Subscript[g,1], Subscript[g,2], Subscript[g,3]}, n] /. rep];
Show[surface[x, Max[Abs[points]]],
ListPointPlot3D[points, PlotStyle -> Directive[Black, PointSize[0.01]]]]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/MbSem.png" alt="Plot"></p>
<p>Using these functions, generalizations are now easy--plotting multiple orbits, symbolizing the points by the lengths of the words they correspond to, etc. Just be a little careful: the orbit sizes grow exponentially; there are $3(2^{n})-2$ words of length through $n$.</p>
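<p>The word count stated above can be cross-checked by brute force (an editorial sketch in Python, not part of the original answer): a word in the generators is reduced exactly when no generator appears twice in a row, since each <span class="math-container">$g_i^2 = 1$</span>.</p>

```python
from itertools import product

# Cross-check of the word count for the free product Z/2 * Z/2 * Z/2:
# a word in the generators g1, g2, g3 is reduced exactly when no
# generator appears twice in a row (since each g_i^2 = 1).
def reduced_words(n, gens=(1, 2, 3)):
    """All reduced words of length <= n (the empty word included)."""
    words = [()]
    for k in range(1, n + 1):
        words += [w for w in product(gens, repeat=k)
                  if all(w[i] != w[i + 1] for i in range(k - 1))]
    return words

n = 9
assert len(reduced_words(n)) == 3 * 2**n - 2  # 1534 words for n = 9
```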
|
3,039,040 | <p>In the equation <span class="math-container">$3^x=2y^2-1$</span>,
<span class="math-container">$x$</span>, <span class="math-container">$y$</span> are natural numbers.
I found <span class="math-container">$x=1$</span> or <span class="math-container">$2$</span> (mod <span class="math-container">$4$</span>), and <span class="math-container">$y^2=1$</span> or <span class="math-container">$4$</span> (mod <span class="math-container">$120$</span>)
but I don't even know if the number of solutions is infinite.
Is there a way to find the solution of this indeterminate equation?</p>
| YiFan | 496,634 | <p>Take both sides of the equation modulo <span class="math-container">$3$</span>. If <span class="math-container">$x\geq 1$</span> then <span class="math-container">$2y^2\equiv 1$</span> modulo <span class="math-container">$3$</span>, but if <span class="math-container">$y\equiv 0$</span>, then <span class="math-container">$2y^2\equiv 0$</span>, and if <span class="math-container">$y\equiv\pm 1$</span>, <span class="math-container">$2y^2\equiv 2(\pm1)^2\equiv2$</span>. So there are no solutions in this case. We must have <span class="math-container">$x=0$</span>, thus <span class="math-container">$y=1$</span>.</p>
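<p>A brute-force search over a small range (an editorial sketch in Python; the bounds are arbitrary) corroborates this: the only solution found is <span class="math-container">$(x,y)=(0,1)$</span>, so if the naturals are taken to start at <span class="math-container">$1$</span>, there are none at all.</p>

```python
# Brute-force search for solutions of 3^x = 2*y^2 - 1 over a small
# range, as a sanity check of the mod-3 argument above.
solutions = [(x, y) for x in range(20) for y in range(1, 2000)
             if 3**x == 2 * y**2 - 1]
print(solutions)  # [(0, 1)]
```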
|
11,973 | <p>I have a list of strings called <code>mylist</code>:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
</code></pre>
<p>I would like to split <code>mylist</code> by "section headers." Strings that begin with the character <code>[</code> are section headers in my application. Thus, I would like to split <code>mylist</code> in such a way as to obtain this output:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
<p>(The <code>a</code>s, <code>b</code>s, and <code>c</code>s represent <em>any</em> characters; the string inside the section header does <em>not</em> necessarily match the strings that follow in that section. Also, the number of strings in each section can vary.)</p>
<p>I have tried:</p>
<pre><code>SplitBy[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>But this is not correct; I obtain:</p>
<pre><code>{{"[a]"}, {"a", "a"}, {"[b]"}, {"b", "b"}, {"[ c ]"}, {"c", "c"}}
</code></pre>
<p>Likewise, using <code>Split</code> (since it applies the test function only to adjacent elements) does not work. The command:</p>
<pre><code>Split[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>yields:</p>
<pre><code>{{"[a]", "a"}, {"a"}, {"[b]", "b"}, {"b"}, {"[ c ]", "c"}, {"c"}}
</code></pre>
<p>Do you have any advice? Thanks.</p>
| faysou | 66 | <p>Here's an answer based on the solution of Murta that recursively parses a list using different delimiters, which can be patterns or string patterns. This can be useful, for example, to parse debug output where loops are involved.</p>
<pre><code>splitByPattern[l_List,p_?System`Dump`validStringExpressionQ]:=splitByPattern[l, _String?(StringMatchQ[#, p] &)];
splitByPattern[l_List,p_]:=Split[l,!MatchQ[#2,p]&];
splitByPatternFold[l_,{},True|False]:=l;
splitByPatternFold[l_,{p_},False]:=splitByPattern[l,p];
splitByPatternFold[l_,{p_},True]:=Join[{First@l},splitByPattern[Rest@l,p]];
splitByPatternFold[l_,{p_,rest__},False]:=splitByPatternFold[#,{rest},True]&/@splitByPattern[l,p];
splitByPatternFold[l_,{p_,rest__},True]:=Join[{First@l},splitByPatternFold[#,{rest},True]&/@splitByPattern[Rest@l,p]];
splitByPatternFold[l_List,patterns_List,hasHeader_:False]:=splitByPatternFold[l,patterns,hasHeader];
</code></pre>
<p>To access the split elements you can use this function</p>
<pre><code>splitAccess[l_, indices_] :=
Module[{offsets = Table[1, {Length@indices}]},
offsets[[1]] = 0;
l[[Sequence @@ (indices + offsets)]]
]
</code></pre>
<p>Example</p>
<pre><code>l={a, b, c, d, e, f, a, b, c, d, e, f};
x = splitByPatternFold[l,{a,b,c,d,e}]
> {{a,{b,{c,{d,{e,f}}}}},{a,{b,{c,{d,{e,f}}}}}}
splitAccess[x,{2,1}]
> {b, {c, {d, {e, f}}}}
</code></pre>
<p>The answer to the question would be written as</p>
<pre><code>mylist={"[a]",a,"a","[b]",b,"b","[ c ]",c,"c"};
splitByPattern[mylist,"[*"]
</code></pre>
<p>Note that not all elements need to be strings when a string pattern is given as the argument.</p>
|
11,973 | <p>I have a list of strings called <code>mylist</code>:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
</code></pre>
<p>I would like to split <code>mylist</code> by "section headers." Strings that begin with the character <code>[</code> are section headers in my application. Thus, I would like to split <code>mylist</code> in such a way as to obtain this output:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
<p>(The <code>a</code>s, <code>b</code>s, and <code>c</code>s represent <em>any</em> characters; the string inside the section header does <em>not</em> necessarily match the strings that follow in that section. Also, the number of strings in each section can vary.)</p>
<p>I have tried:</p>
<pre><code>SplitBy[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>But this is not correct; I obtain:</p>
<pre><code>{{"[a]"}, {"a", "a"}, {"[b]"}, {"b", "b"}, {"[ c ]"}, {"c", "c"}}
</code></pre>
<p>Likewise, using <code>Split</code> (since it applies the test function only to adjacent elements) does not work. The command:</p>
<pre><code>Split[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>yields:</p>
<pre><code>{{"[a]", "a"}, {"a"}, {"[b]", "b"}, {"b"}, {"[ c ]", "c"}, {"c"}}
</code></pre>
<p>Do you have any advice? Thanks.</p>
| Lou | 150 | <p>Here's my version based on Position.</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
split[lst_List, pat_String] := Module[{len, pos},
len = Length[lst];
pos = Partition[Flatten[{Position[lst, _String?(StringMatchQ[#, pat ~~ __] &)],len + 1}], 2, 1];
lst[[#[[1]] ;; #[[2]] - 1]] & /@ pos]
</code></pre>
<p>usage</p>
<pre><code>split[mylist, "["]
</code></pre>
<p>Out</p>
<p>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}</p>
|
2,083,460 | <p>While trying to answer <a href="https://stackoverflow.com/questions/41464753/generate-random-numbers-from-lognormal-distribution-in-python/41465013#41465013">this SO question</a> I got stuck on a messy bit of algebra: given</p>
<p>$$
\log m = \log n + \frac32 \, \log \biggl( 1 + \frac{v}{m^2} \biggr)
$$</p>
<p>I need to solve for $m$. I no longer remember enough logarithmic identities to attempt to do this by hand. Maxima can't do it at all, and Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=solve+[log+m+%3D+log+n+%2B+%283%2F2%29*log%281+%2B+v%2F%28m%5E2%29%29]+for+m" rel="nofollow noreferrer">coughs up a hairball</a> that appears to be the zeroes of a quartic, with no obvious relationship to the original equation.</p>
<p>Is there a short, tidy solution? Failing that, an explanation of how WA managed to turn this into a quartic, and the quartic itself, would be ok.</p>
| Jan Eerland | 226,665 | <p>Assuming that all variables are real and positive:</p>
<p>$$\ln\text{m}=\ln\text{n}+\frac{3}{2}\cdot\ln\left(1+\frac{\text{v}}{\text{m}^2}\right)\space\Longleftrightarrow\space\frac{2}{3}\cdot\left(\ln\text{m}-\ln\text{n}\right)=\ln\left(1+\frac{\text{v}}{\text{m}^2}\right)$$</p>
<p>Now, use:</p>
<p>$$\ln\text{a}-\ln\text{b}=\ln\frac{\text{a}}{\text{b}}$$</p>
<p>So, we get:</p>
<p>$$\frac{2}{3}\cdot\ln\frac{\text{m}}{\text{n}}=\ln\left(1+\frac{\text{v}}{\text{m}^2}\right)$$</p>
<p>Now, use:</p>
<p>$$\exp\left(\text{c}\cdot\ln\text{d}\right)=e^{\text{c}\cdot\ln\text{d}}=\text{d}^\text{c}$$</p>
<p>So, we get (taking the $\exp$ of both sides):</p>
<p>$$\exp\left\{\frac{2}{3}\cdot\ln\frac{\text{m}}{\text{n}}\right\}=\exp\left\{\ln\left(1+\frac{\text{v}}{\text{m}^2}\right)\right\}=\color{red}{\left(\frac{\text{m}}{\text{n}}\right)^{\frac{2}{3}}=1+\frac{\text{v}}{\text{m}^2}}$$</p>
<p>Now, we want to solve $\text{m}$:</p>
<p>$$\left(\frac{\text{m}}{\text{n}}\right)^{\frac{2}{3}}=\frac{\text{m}^{\frac{2}{3}}}{\text{n}^{\frac{2}{3}}}=1+\frac{\text{v}}{\text{m}^2}=\frac{\text{m}^2+\text{v}}{\text{m}^2}\space\Longleftrightarrow\space\text{m}^8=\text{n}^2\cdot\left(\text{m}^2+\text{v}\right)^3$$</p>
|
2,083,460 | <p>While trying to answer <a href="https://stackoverflow.com/questions/41464753/generate-random-numbers-from-lognormal-distribution-in-python/41465013#41465013">this SO question</a> I got stuck on a messy bit of algebra: given</p>
<p>$$
\log m = \log n + \frac32 \, \log \biggl( 1 + \frac{v}{m^2} \biggr)
$$</p>
<p>I need to solve for $m$. I no longer remember enough logarithmic identities to attempt to do this by hand. Maxima can't do it at all, and Wolfram Alpha <a href="http://www.wolframalpha.com/input/?i=solve+[log+m+%3D+log+n+%2B+%283%2F2%29*log%281+%2B+v%2F%28m%5E2%29%29]+for+m" rel="nofollow noreferrer">coughs up a hairball</a> that appears to be the zeroes of a quartic, with no obvious relationship to the original equation.</p>
<p>Is there a short, tidy solution? Failing that, an explanation of how WA managed to turn this into a quartic, and the quartic itself, would be ok.</p>
| projectilemotion | 323,432 | <p>$$\log {m}=\log {n}+\frac{3}{2} \log {\left(1+\frac{v}{m^2}\right)}$$
$$\log {m}=\log {n}+ \log {\left(\left(1+\frac{v}{m^2}\right)^{\frac{3}{2}}\right)}$$
$$\log {m}=\log {\left({n\cdot\left(1+\frac{v}{m^2}\right)^{\frac{3}{2}}}\right)}$$
Exponentiate both sides:
$$m=n\cdot\left(1+\frac{v}{m^2}\right)^{\frac{3}{2}}$$
$$m^2=n^2\cdot\left(1+\frac{v}{m^2}\right)^3$$
$$m^2=n^2\cdot\left(1+\frac{3v}{m^2}+\frac{3v^2}{m^4}+\frac{v^3}{m^6}\right)$$
$$m^2=\frac{n^2}{m^6}\left(m^6+3v\cdot m^4+3v^2 \cdot m^2+v^3\right)$$
$$m^8=n^2 m^6+3vn^2 m^4+ 3n^2v^2 m^2+n^2v^3$$
Substitute $m^2=u$ and you will obtain a <strong>quartic</strong> expression.
$$u^4-n^2u^3-3vn^2 u^2-3n^2 v^2 u-n^2v^3=0$$
This is going to be a long solution, however you can use the <a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">general formula</a> for the solution to a quartic equation.</p>
<p>Wikipedia suggests using substitutions in order to solve it.</p>
<p>The full formula without substitution for $ax^4+bx^3+cx^2+dx+e$ is below (I cannot write it in $\LaTeX$ because it is so long).</p>
<p><a href="https://i.stack.imgur.com/vvxXd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vvxXd.png" alt="General formula without substitution"></a></p>
<p>This suggests that <a href="http://www.wolframalpha.com/input/?i=ln(m)%3Dln(n)%2B(3%2F2)*ln(1%2Bv%2F(m%5E2))%20solution%20for%20m" rel="nofollow noreferrer">your solution in Wolfram Alpha</a> is probably correct.</p>
|
2,593,627 | <p>I struggle to find the language to express what I am trying to do. So I made a diagram.</p>
<p><a href="https://i.stack.imgur.com/faHgE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/faHgE.png" alt="Graph3parallelLines"></a></p>
<p>So my original line is the red line. From (2.5,2.5) to (7.5,7.5).</p>
<p>I want to shift the line away from itself a certain distance but maintaining the original angle of the line(so move it a certain distance away at a 90 degree angle). So after the shifting the line by the distance of +1 it would become the blue line or by -1 it would be come the yellow line.</p>
<p>I don't know a lot of the maths terminology so if anybody could manage an explanation in layman terms it would be appreciated.</p>
<p>Thanks,
C</p>
| Narasimham | 95,860 | <p>You forgot marking the axes.</p>
<p>For the straight line $y=mx+c$ with slope $m=1$, shifting the line a perpendicular distance $d$ changes the intercept $c$ by $d\sqrt{1+m^2} = d\sqrt2$. So to shift up parallelly by a distance of $1$, add $\sqrt2$ to $c$,</p>
<p>and to shift down parallelly, add $-\sqrt2$.</p>
<p>So the red line</p>
<p>$$ y= m x +0 $$ </p>
<p>becomes the blue line above</p>
<p>$$ y= mx+ \sqrt2$$</p>
<p>and the yellow line below</p>
<p>$$ y= mx - \sqrt2$$</p>
<p>as shown in the graph with axes marked. Also, if you are looking for a variable distance between lines, the polar (normal) form is convenient:</p>
<p>$$ x \cos\alpha + y \sin \alpha = p $$</p>
<p>$p$ is the pedal (normal) length from the origin, and $ \alpha $ is the positive angle the pedal makes with the x-axis. In the graphs you respectively have</p>
<p>$$ \alpha = 3 \pi/4 ; \quad p_{blue,red,yellow}= ( 1, 0, -1) $$</p>
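<p>The intercept offset for a given perpendicular shift can be checked numerically (an editorial sketch in Python): for a line of slope $m$, a perpendicular shift of $d$ corresponds to an intercept change of $d\sqrt{1+m^2}$, which is $\sqrt2$ for $m=1$, $d=1$.</p>

```python
import math

# For a line y = m*x + c, shifting it a perpendicular distance d means
# changing the intercept by delta_c = d * sqrt(1 + m**2).
def intercept_offset(m, d):
    return d * math.sqrt(1 + m**2)

delta = intercept_offset(1, 1)  # unit perpendicular shift of y = x
print(delta)                    # sqrt(2) ~ 1.41421356...

# Verify: the distance from any point on y = x + delta to the line
# y = x is |x - y| / sqrt(2), which should come out to exactly 1.
x0 = 0.0
dist = abs(x0 - (x0 + delta)) / math.sqrt(2)
assert math.isclose(dist, 1.0)
```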
|
233,238 | <p>I am just practicing making some new designs with Mathematica and I thought of this recently. I want to make a tear drop shape (doesn't matter the orientation) constructed of mini cubes. I am familiar with the preliminary material, I am just having some difficulty getting it to work.</p>
| kglr | 125 | <p>A few additional methods:</p>
<pre><code>Level[#, {-1}] & /@ list
Apply[Sequence, list, {-2}]
Map[Splice, list, {-2}]
Join[{#}, #2] & @@@ list
Prepend[#2, #]& @@@ list
MapAt[Splice, {All, 2}] @ list
ReplacePart[{_, 2, 0} -> Sequence] @ list
Map[Map @ Apply @ Sequence] @ list
Delete[{2, 0}] /@ list
FlattenAt[#, 2] & /@ list
</code></pre>
<p>... and for the Halloween:</p>
<p><a href="https://i.stack.imgur.com/yo9GE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yo9GE.png" alt="enter image description here" /></a></p>
<pre><code>☺ = ## & @@@ # & /@ # &;
☺ @ list
</code></pre>
<blockquote>
<pre><code>{{Position, Code}, {1, 0, 1}, {2, 100, 11}, {3, 110, 111}, {4, 1000, 1001},
{5, 1100, 1011}, {6, 1110, 1111}}
</code></pre>
</blockquote>
<p><a href="https://i.stack.imgur.com/Gh6AQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gh6AQ.png" alt="enter image description here" /></a></p>
<pre><code>☺☺ = {#, ## & @@ #2} & @@@ # &;
☺☺ @ list
</code></pre>
<blockquote>
<pre><code>{{Position, Code}, {1, 0, 1}, {2, 100, 11}, {3, 110, 111}, {4, 1000, 1001},
{5, 1100, 1011}, {6, 1110, 1111}}
</code></pre>
</blockquote>
|
1,873,180 | <p>The final result should be $C(n) = \frac{1}{n+1}\binom{2n}{n}$, for reference.</p>
<p>I've worked my way down to this expression in my derivation:</p>
<p>$$C(n) = \frac{(1)(3)(5)(7)...(2n-1)}{(n+1)!} 2^n$$</p>
<p>And I can see that if I multiply the numerator by $2n!$ I can convert that product chain into $(2n)!$ like so, also taking care to multiply the denominator all the same:</p>
<p>$$C(n) = \frac{(2n)!}{(n+1)!(2n!)} 2^n =\frac{1}{n+1} \cdot \frac{(2n)!}{n!n!} 2^{n-1} = \frac{1}{n+1} \binom{2n}{n} 2^{n-1}$$</p>
<p>Where did I go wrong? Why can't I get rid of that $2^{n-1}$?</p>
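<p>A numeric check (an editorial sketch in Python, not part of the question) confirms there is no leftover power of two: since $(1)(3)(5)\cdots(2n-1)\cdot 2^n\, n! = (2n)!$, the $2^n$ already present is exactly consumed in converting the odd product into $(2n)!$, and the product form equals $\frac{1}{n+1}\binom{2n}{n}$ on the nose.</p>

```python
import math

# The odd product 1*3*5*...*(2n-1) equals (2n)! / (2^n * n!), so
# multiplying it by 2^n and dividing by (n+1)! already yields the
# Catalan number C(2n, n) / (n + 1) -- no extra power of two remains.
def catalan_product_form(n):
    odd_product = math.prod(range(1, 2 * n, 2))
    return odd_product * 2**n // math.factorial(n + 1)

def catalan_binomial_form(n):
    return math.comb(2 * n, n) // (n + 1)

assert all(catalan_product_form(n) == catalan_binomial_form(n)
           for n in range(1, 15))
```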
| Gregory Simon | 208,825 | <p>$\int f(x)\, dx$ generally means the set of all anti-derivatives of $f(x)$.</p>
<p>If you have a set $A$ and a set $B$ then in this context $A-B$ should be interpreted as $\{a-b:\ a\in A, \ b\in B\}$.</p>
<p>Therefore, $\int f(x)\, dx - \int f(x)\, dx = \mathbb R$ (assuming we are talking about real integrable functions). The resulting subtraction is whatever set the "$C$" in the "$+C$" is allowed to come from, as this is the set of all $a-b$ with $a$ and $b$ both antiderivatives of $f(x)$.</p>
<p>But this is more of a question about ambiguity of definitions. You should avoid writing this in practice, if at all possible.</p>
|
3,869,990 | <p><span class="math-container">$\newcommand{\C}{\mathcal{C}}$</span>
<span class="math-container">$\newcommand{\D}{\mathcal{D}}$</span>
<span class="math-container">$\newcommand{\A}{\mathcal{A}}$</span>
<span class="math-container">$\newcommand{\S}{\mathcal{S}}$</span>
<span class="math-container">$\newcommand{\Psh}{\mathrm{Psh}}$</span>
<span class="math-container">$\newcommand{\Lan}{\mathrm{Lan}}$</span>
<span class="math-container">$\newcommand{\Hom}{\mathrm{Hom}}$</span>
<span class="math-container">$\newcommand{\Set}{\mathbf{Set}}$</span>
<span class="math-container">$\newcommand{\yo}{\mathscr{Y}}$</span></p>
<p>If <span class="math-container">$\A$</span> is any category, I'll write <span class="math-container">$\yo_\A : \A \rightarrow \Psh(\A)$</span> the Yoneda embedding.</p>
<p>Let <span class="math-container">$\C$</span> be a small category, <span class="math-container">$\D$</span> be any category, and <span class="math-container">$F : \C \rightarrow \D$</span> be fully faithful.</p>
<p>Then if I define <span class="math-container">$F_1 : \Psh(\C) \rightarrow \Psh(\D)$</span> by the formula :</p>
<p><span class="math-container">$$ F_1 = \Lan_{\yo_\C}(\yo_\D \circ F) $$</span></p>
<p>i.e. for <span class="math-container">$A \in \Psh(\C)$</span> :</p>
<p><span class="math-container">$$ F_1 A = \int^{c \in \C} \Hom(\yo_\C c, A) \otimes \yo_D F c = \int^{c \in \C} Ac \otimes \yo_D F c $$</span></p>
<p>and similarly <span class="math-container">$F_2 : \Psh(\Psh(\C)) \rightarrow \Psh(\Psh(\D))$</span> by the formula :</p>
<p><span class="math-container">$$ F_2 = \Lan_{\yo_{\Psh(\C)}\circ\yo_\C}(\yo_{\Psh(D)}\circ \yo_\D \circ F) $$</span></p>
<p>i.e. for <span class="math-container">$A \in \Psh(\Psh(\C))$</span> :</p>
<p><span class="math-container">$$ F_2 A = \int^{c \in \C} \Hom(\yo_{\Psh(\C)} \yo_\C c, A) \otimes \yo_{\Psh(D)} \yo_D F c = \int^{c \in \C} A \yo_\C c \otimes \yo_{\Psh(D)} \yo_D F c $$</span></p>
<p>Now my question is the following : are <span class="math-container">$F_1$</span> and <span class="math-container">$F_2$</span> fully faithful ???</p>
<p>I seem to have a positive answer for <span class="math-container">$F_1$</span>. Indeed :</p>
<p><span class="math-container">\begin{align*}
\Hom_{\Psh(\D)}(F_1 A, F_1 B) & = \int_{c\in \C} \Hom_{\Psh(\D)} \left (Ac \otimes \yo_\D Fc , F_1 B \right ) \\
& = \int_{c\in \C} \Hom_{\Set} \left (Ac, \Hom_{\Psh(\D)}(\yo_\D Fc, F_1 B ) \right) \\
& \overset{\text{Yoneda}}{=} \int_{c\in \C} \Hom_{\Set} \left (A(c), (F_1 B) F c \right) \\
& = \Hom_{\Psh(\C)}(A, F_1 B \circ F )
\end{align*}</span></p>
<p>Moreover for every <span class="math-container">$a \in \C$</span>:</p>
<p><span class="math-container">\begin{align*}
(F_1 B \circ F) a & = \int^{c \in \C} Bc \otimes \yo_{\D} Fc (Fa) \\
& = \int^{c \in \C} B(c) \otimes \Hom_{\D}(Fa,Fc). \\
& = \int^{c \in \C} B(c) \otimes \Hom_{\C}(a,c) \overset{\text{Ninja Yoneda}}{=} B(a).
\end{align*}</span></p>
<p>So <span class="math-container">$ F_1 B \circ F = B$</span>, and thus :</p>
<p><span class="math-container">$$ \Hom_{\Psh(\D)}(F_1 A, F_1 B) = \Hom_{\Psh(\C)}(A, B). $$</span></p>
<p>So <span class="math-container">$F_1$</span> is fully faithful. If I try to do the exact same proof with <span class="math-container">$F_2$</span>, I seem to obtain in the end~:</p>
<p><span class="math-container">$$ \Hom_{\Psh(\Psh(\D))}(F_2 A, F_2 B) = \Hom_{\Psh(\C)} (A\circ \yo_\C , B \circ \yo_\C). $$</span></p>
<p>so I guess it's <strong>not</strong> fully faithful ??? But it's kinda weird, because in the case where <span class="math-container">$\Psh(\mathcal{A})$</span> is taken to mean <span class="math-container">$[\A:\S]$</span> for a certain small category <span class="math-container">$\S$</span>, and if we have a functor <span class="math-container">$\yo'_\A : \A \rightarrow [\A:\S]$</span>, then we certainly have (don't we ? I'm a bit confused) :</p>
<p><span class="math-container">$$ \Lan_{\yo'_{\Psh(\C)}\circ \yo'_{\C}}(\yo'_{\Psh(\D)}\circ\yo'_{\D}\circ F) = \Lan_{\yo'_{\Psh(\C)}}(\yo'_{\Psh(\D)}\circ \Lan_{\yo'_{\C}}(\yo'_{\D}\circ F)) $$</span></p>
<p>and so if this operation sends embeddings on embeddings, it should still do so when applied twice !!! I'm surprised that such a thing could break just because <span class="math-container">$\Set$</span> is not small. I mean, everything has looked pretty formal. If <span class="math-container">$\C$</span> was a category where every Hom-set was smaller that some cardinal <span class="math-container">$\kappa$</span> (which happens), then up to equivalence you could replace <span class="math-container">$\Psh(C)$</span> by <span class="math-container">$[\C:\kappa]$</span> (which is small) and you still have a Yoneda embedding, for example... so it should be able to "stand" another extension !</p>
<p>TL;DR :
If <span class="math-container">$F : \C \rightarrow \D$</span> is a fully faithful functor with <span class="math-container">$\C$</span> small, I seem to be able to define a fully faithful functor <span class="math-container">$F_1 : \Psh(\C)\rightarrow \Psh(\D)$</span> by Yoneda extension but the "next step" seems to fail and the simplest functor <span class="math-container">$F_2 : \Psh(\Psh(C)) \rightarrow \Psh(\Psh(D))$</span> which I can define (without taking colimits over non-small categories, of course !) doesn't seem to be fully faithful : why does the pattern seem to break ?</p>
<p>Thanks to whoever helps me out of this ^^</p>
<p>(EDIT : When we look at the definition for <span class="math-container">$F_2 A$</span>, it is clear that it only depends on <span class="math-container">$A \circ \yo$</span> : that's probably the problem... How should I define <span class="math-container">$F_2$</span> ? Why does it seem to me that when everything is small there is no problem ?)</p>
<p>(EDIT2 : is this a MO question ? I never feel confident to ask anything there so I don't know)</p>
| fosco | 685 | <p>Your <span class="math-container">$F_1$</span> is just the left Kan extension along <span class="math-container">$F^\text{op}$</span>; it is going to be fully faithful because of a general fact about Kan extensions: extending along a fully faithful functor gives an isomorphism <span class="math-container">$H\cong Lan_F(HF)$</span>, and this is the unit of the adjunction <span class="math-container">$Lan_F \dashv -\circ F$</span>, which is invertible iff the left adjoint is full and faithful.</p>
<p>As for your <span class="math-container">$F_2$</span>, it does not make sense if you don't take <a href="https://ncatlab.org/nlab/show/small+presheaf" rel="nofollow noreferrer">small presheaves</a> (the "category" <span class="math-container">$Psh(Psh(C))$</span> is not a "category", because it is not locally small). :-)</p>
|
3,869,990 | <p><span class="math-container">$\newcommand{\C}{\mathcal{C}}$</span>
<span class="math-container">$\newcommand{\D}{\mathcal{D}}$</span>
<span class="math-container">$\newcommand{\A}{\mathcal{A}}$</span>
<span class="math-container">$\newcommand{\S}{\mathcal{S}}$</span>
<span class="math-container">$\newcommand{\Psh}{\mathrm{Psh}}$</span>
<span class="math-container">$\newcommand{\Lan}{\mathrm{Lan}}$</span>
<span class="math-container">$\newcommand{\Hom}{\mathrm{Hom}}$</span>
<span class="math-container">$\newcommand{\Set}{\mathbf{Set}}$</span>
<span class="math-container">$\newcommand{\yo}{\mathscr{Y}}$</span></p>
<p>If <span class="math-container">$\A$</span> is any category, I'll write <span class="math-container">$\yo_\A : \A \rightarrow \Psh(\A)$</span> the Yoneda embedding.</p>
<p>Let <span class="math-container">$\C$</span> be a small category, <span class="math-container">$\D$</span> be any category, and <span class="math-container">$F : \C \rightarrow \D$</span> be fully faithful.</p>
<p>Then if I define <span class="math-container">$F_1 : \Psh(\C) \rightarrow \Psh(\D)$</span> by the formula :</p>
<p><span class="math-container">$$ F_1 = \Lan_{\yo_\C}(\yo_\D \circ F) $$</span></p>
<p>i.e. for <span class="math-container">$A \in \Psh(\C)$</span> :</p>
<p><span class="math-container">$$ F_1 A = \int^{c \in \C} \Hom(\yo_\C c, A) \otimes \yo_D F c = \int^{c \in \C} Ac \otimes \yo_D F c $$</span></p>
<p>and similarly <span class="math-container">$F_2 : \Psh(\Psh(\C)) \rightarrow \Psh(\Psh(\D))$</span> by the formula :</p>
<p><span class="math-container">$$ F_2 = \Lan_{\yo_{\Psh(\C)}\circ\yo_\C}(\yo_{\Psh(D)}\circ \yo_\D \circ F) $$</span></p>
<p>i.e. for <span class="math-container">$A \in \Psh(\Psh(\C))$</span> :</p>
<p><span class="math-container">$$ F_2 A = \int^{c \in \C} \Hom(\yo_{\Psh(\C)} \yo_\C c, A) \otimes \yo_{\Psh(D)} \yo_D F c = \int^{c \in \C} A \yo_\C c \otimes \yo_{\Psh(D)} \yo_D F c $$</span></p>
<p>Now my question is the following : are <span class="math-container">$F_1$</span> and <span class="math-container">$F_2$</span> fully faithful ???</p>
<p>I seem to have a positive answer for <span class="math-container">$F_1$</span>. Indeed :</p>
<p><span class="math-container">\begin{align*}
\Hom_{\Psh(\D)}(F_1 A, F_1 B) & = \int_{c\in \C} \Hom_{\Psh(\D)} \left (Ac \otimes \yo_\D Fc , F_1 B \right ) \\
& = \int_{c\in \C} \Hom_{\Set} \left (Ac, \Hom_{\Psh(\D)}(\yo_\D Fc, F_1 B ) \right) \\
& \overset{\text{Yoneda}}{=} \int_{c\in \C} \Hom_{\Set} \left (A(c), (F_1 B) F c \right) \\
& = \Hom_{\Psh(\C)}(A, F_1 B \circ F )
\end{align*}</span></p>
<p>Moreover for every <span class="math-container">$a \in \C$</span>:</p>
<p><span class="math-container">\begin{align*}
(F_1 B \circ F) a & = \int^{c \in \C} Bc \otimes \yo_{\D} Fc (Fa) \\
& = \int^{c \in \C} B(c) \otimes \Hom_{\D}(Fa,Fc). \\
& = \int^{c \in \C} B(c) \otimes \Hom_{\C}(a,c) \overset{\text{Ninja Yoneda}}{=} B(a).
\end{align*}</span></p>
<p>So <span class="math-container">$ F_1 B \circ F = B$</span>, and thus :</p>
<p><span class="math-container">$$ \Hom_{\Psh(\D)}(F_1 A, F_1 B) = \Hom_{\Psh(\C)}(A, B). $$</span></p>
<p>So <span class="math-container">$F_1$</span> is fully faithful. If I try to do the exact same proof with <span class="math-container">$F_2$</span>, I seem to obtain in the end~:</p>
<p><span class="math-container">$$ \Hom_{\Psh(\Psh(\D))}(F_2 A, F_2 B) = \Hom_{\Psh(\C)} (A\circ \yo_\C , B \circ \yo_\C). $$</span></p>
<p>so I guess it's <strong>not</strong> fully faithful ??? But it's kinda weird, because in the case where <span class="math-container">$\Psh(\mathcal{A})$</span> is taken to mean <span class="math-container">$[\A:\S]$</span> for a certain small category <span class="math-container">$\S$</span>, and if we have a functor <span class="math-container">$\yo'_\A : \A \rightarrow [\A:\S]$</span>, then we certainly have (don't we ? I'm a bit confused) :</p>
<p><span class="math-container">$$ \Lan_{\yo'_{\Psh(\C)}\circ \yo'_{\C}}(\yo'_{\Psh(\D)}\circ\yo'_{\D}\circ F) = \Lan_{\yo'_{\Psh(\C)}}(\yo'_{\Psh(\D)}\circ \Lan_{\yo'_{\C}}(\yo'_{\D}\circ F)) $$</span></p>
<p>and so if this operation sends embeddings on embeddings, it should still do so when applied twice !!! I'm surprised that such a thing could break just because <span class="math-container">$\Set$</span> is not small. I mean, everything has looked pretty formal. If <span class="math-container">$\C$</span> was a category where every Hom-set was smaller that some cardinal <span class="math-container">$\kappa$</span> (which happens), then up to equivalence you could replace <span class="math-container">$\Psh(C)$</span> by <span class="math-container">$[\C:\kappa]$</span> (which is small) and you still have a Yoneda embedding, for example... so it should be able to "stand" another extension !</p>
<p>TL;DR :
If <span class="math-container">$F : \C \rightarrow \D$</span> is a fully faithful functor with <span class="math-container">$\C$</span> small, I seem to be able to define a fully faithful functor <span class="math-container">$F_1 : \Psh(\C)\rightarrow \Psh(\D)$</span> by Yoneda extension but the "next step" seems to fail and the simplest functor <span class="math-container">$F_2 : \Psh(\Psh(C)) \rightarrow \Psh(\Psh(D))$</span> which I can define (without taking colimits over non-small categories, of course !) doesn't seem to be fully faithful : why does the pattern seem to break ?</p>
<p>Thanks to whoever helps me out of this ^^</p>
<p>(EDIT : When we look at the definition for <span class="math-container">$F_2 A$</span>, it is clear that it only depends on <span class="math-container">$A \circ \yo$</span> : that's probably the problem... How should I define <span class="math-container">$F_2$</span> ? Why does it seem to me that when everything is small there is no problem ?)</p>
<p>(EDIT2 : is this a MO question ? I never feel confident to ask anything there so I don't know)</p>
| Béranger Seguin | 290,401 | <p><span class="math-container">$$ \newcommand{\C}{\mathcal{C}} $$</span>
<span class="math-container">$$ \newcommand{\D}{\mathcal{D}} $$</span>
<span class="math-container">$$ \newcommand{\Hom}{\mathrm{Hom}} $$</span>
<span class="math-container">$$ \newcommand{\Psh}{\mathrm{Psh}} $$</span></p>
<p>Thanks a lot to Fosco for the indication to use "small presheaves" instead of presheaves. Not only does it make what I write meaningful, but I'm also able to prove what I wanted to prove if <span class="math-container">$\mathrm{Psh}$</span> means small presheaves !</p>
<p>Take <span class="math-container">$\mathcal{C}$</span> not necessarily small. If <span class="math-container">$A,B$</span> are small presheaves on <span class="math-container">$\mathcal{C}$</span>, then there is a small subcategory <span class="math-container">$\mathcal{C}'$</span> (we can take the same twice, e.g. by taking a coproduct of full subcategories) of <span class="math-container">$\mathcal{C}$</span> and two presheaves <span class="math-container">$A^*,B^*$</span> on <span class="math-container">$\mathcal{C}'$</span> such that~:</p>
<p><span class="math-container">$$ A = \int_{c\in \C'} A^*(c) \Hom_{\C}(-,c) $$</span>
<span class="math-container">$$ B = \int_{c\in \C'} B^*(c) \Hom_{\C}(-,c) $$</span></p>
<p>Moreover it is easy to check that <span class="math-container">$\Hom_{\Psh(\C)}(A,B) = \Hom_{\Psh(\C ')}(A^*, B^*)$</span> (because the inclusion of a full subcategory is fully faithful, so <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are Kan extensions along a fully faithful functor...).</p>
<p>Now the elements of <span class="math-container">$\Psh(\mathcal{D})$</span> I want to define are simply~:</p>
<p><span class="math-container">$$ F^* A = \int_{c\in \C'} A^*(c) \Hom_{\D}(-,Fc) $$</span>
<span class="math-container">$$ F^* B = \int_{c\in \C'} B^*(c) \Hom_{\D}(-,Fc) $$</span></p>
<p>(it is easy to check that it is just Lan along <span class="math-container">$F^{op}$</span>, as you noticed).</p>
<p>And indeed we have :</p>
<p><span class="math-container">\begin{align*} \Hom(F^* A, F^* B) & = \int_{c\in \C'} \Hom(A^* c \otimes \Hom_{\D}(-,Fc), F^* B) \\
& = \int_{c\in \C'} \Hom(A^* c, (F^* B)Fc) \text{ by tensor-Hom adjunction and Yoneda lemma} \\
& = \Hom(A^*, (F^* B) \circ F)
\end{align*}</span></p>
<p>and :</p>
<p><span class="math-container">\begin{align*}
(F^* B)(Fa) & = \int_{c\in \C'} B^*(c) \Hom_{\D}(Fa,Fc) \\
& = \int_{c\in \C'} B^*(c) \Hom_{\C}(a,c) \text{ because } F \text{ is fully faithful}\\
& = B^* a \text{ by Ninja Yoneda}
\end{align*}</span></p>
<p>So indeed~:</p>
<p><span class="math-container">$$ \Hom(F^* A, F^* B) = \Hom(A^*, B^*) = \Hom(A,B) $$</span></p>
<p>so <span class="math-container">$F^*$</span> is fully faithful ; i.e. the Yoneda extension sends embeddings to embeddings ! And since I never supposed <span class="math-container">$\mathcal{C}$</span> small I can just iterate it by taking <span class="math-container">$\C_{new} = \Psh(\C)$</span> etc. This is exactly what I needed ! Thanks a lot !</p>
<p>(I guess you're gonna make a proof 2000 times simpler than mine too, but I don't think I made mistakes)</p>
|
1,937,762 | <p>Let's say I have a ratio of polynomials as follows</p>
<p>$P(x)=\frac{a_0x^n+a_1x^{n-2}+a_2x^{n-4}+...}{b_0x^n+b_1x^{n-2}+b_2x^{n-4}+...}$.</p>
<p>The polynomials are finite. Is there a procedure to convert it into a polynomial</p>
<p>$P(x) = A_0 + A_1 f(x) + A_2g(x) + ...$</p>
<p>where $f(x)$ and $g(x)$ are some power functions of x. Maybe there is a way to expand it in series if it is not divisible...</p>
| Andrew D. Hwang | 86,418 | <p>Let's re-index the polynomials, writing
$$
a(x) = \sum_{j=0}^{n} a_{j} x^{j},\qquad
b(x) = \sum_{j=0}^{m} b_{j} x^{j},\qquad
P(x) = \frac{a(x)}{b(x)}.
$$
(That is, let the coefficient index <em>match</em> the power of $x$; start with the constant terms.)</p>
<ol>
<li><p>If $b_{0} = b(0) \neq 0$, the rational function $P$ is analytic in a neighborhood of $0$. Put $c_{j} = b_{j}/b_{0}$, and write
$$
b(x) = b_{0}\Bigl(1 + \sum_{j=1}^{m} c_{j} x^{j}\Bigr)
= b_{0}\bigl(1 + c(x)\bigr).
$$
In an open neighborhood of $0$ where $|c(x)| < 1$, the reciprocal may be expanded as a geometric series:
$$
\frac{1}{b(x)}
= \frac{1}{b_{0}\bigl(1 + c(x)\bigr)}
= \frac{1}{b_{0}} \sum_{k=0}^{\infty} \bigl(-c(x)\bigr)^{k},
$$
or
$$
P(x) = \frac{a(x)}{b_{0}} \sum_{k=0}^{\infty} \bigl(-c(x)\bigr)^{k}.
$$
Since $x$ divides $c(x)$, the series on the right may be expanded explicitly to any desired finite order $\ell$ by adding up the first $\ell$ terms, a purely algebraic process (no infinite series).</p></li>
<li><p>If $b(0) = 0$ but $b$ is not identically zero, $P(x)$ has a pole at $0$. Factor out the largest power of $x$ dividing $b(x)$,
$$
b(x) = \sum_{k=\ell}^{m} b_{k} x^{k}
= x^{\ell} \sum_{k=0}^{m-\ell} b_{k+\ell} x^{k}
= x^{\ell} \bar{b}(x),\quad b_{\ell} = \bar{b}(0) \neq 0,
$$
then proceed as in 1. (with the factor $x^{\ell}$ in the denominator):
$$
P(x) = \frac{1}{x^{\ell}} \frac{a(x)}{\bar{b}(x)}.
$$
This isn't a power series, of course, but a <a href="https://en.wikipedia.org/wiki/Laurent_series" rel="noreferrer">Laurent series</a>.</p></li>
</ol>
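<p>For concreteness, the truncation in 1. can be carried out mechanically: matching coefficients in $b(x)P(x) = a(x)$ gives the recurrence $p_{j} = \bigl(a_{j} - \sum_{k \ge 1} b_{k} p_{j-k}\bigr)/b_{0}$ for the series coefficients $p_{j}$ of $P$. A small sketch in Python (the helper below is my own illustration, not a standard library routine):</p>

```python
from fractions import Fraction

def series_coeffs(a, b, order):
    """Power-series coefficients of a(x)/b(x) up to x^order.

    a, b: coefficient lists indexed by the power of x (constant term first),
    with b[0] != 0, so the quotient is analytic at 0 (case 1. above).
    """
    if b[0] == 0:
        raise ValueError("b(0) must be nonzero; factor out x first (case 2.)")
    p = []
    for j in range(order + 1):
        aj = Fraction(a[j]) if j < len(a) else Fraction(0)
        # b_0 p_j + b_1 p_{j-1} + ... = a_j, solved for p_j
        s = sum(Fraction(b[k]) * p[j - k]
                for k in range(1, min(j, len(b) - 1) + 1))
        p.append((aj - s) / b[0])
    return p

# 1/(1 - x) = 1 + x + x^2 + ...: all coefficients equal 1
print(series_coeffs([1], [1, -1], 5))
```

<p>In case 2. one would first strip the factor $x^{\ell}$ from $b(x)$ and divide the resulting series by $x^{\ell}$ at the end.</p>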
|
3,858,517 | <p>Is it possible to count exactly the number of binary strings of length <span class="math-container">$n$</span> that contain no two adjacent blocks of 1s of the same length? More precisely, if we represent the string as <span class="math-container">$0^{x_1}1^{y_1}0^{x_2}1^{y_2}\cdots 0^{x_{k-1}}1^{y_{k-1}}0^{x_k}$</span> where all <span class="math-container">$x_i,y_i \geq 1$</span> (except perhaps <span class="math-container">$x_1$</span> and <span class="math-container">$x_k$</span> which might be zero if the string starts or ends with a block of 1's), we should count a string as valid if <span class="math-container">$y_i\neq y_{i+1}$</span> for every <span class="math-container">$1\leq i \leq k-2$</span>.</p>
<p>Positive examples : 1101011 (block sizes are 2-1-2), 00011001011 (block sizes are 2-1-2), 1001100011101 (block sizes are 1-2-3-1)</p>
<p>Negative examples : 1100011 (block sizes are <strong>2-2</strong>), 0001010011 (block sizes are <strong>1-1</strong>-2), 1101011011 (block sizes are 2-1-<strong>2-2</strong>)</p>
<p>The sequence for the first <span class="math-container">$16$</span> integers <span class="math-container">$n$</span> is: 2, 4, 7, 13, 24, 45, 83, 154, 285, 528, 979, 1815, 3364, 6235, 11555, 21414. For <span class="math-container">$n=3$</span>, only the string 101 is invalid, whereas for <span class="math-container">$n=4$</span>, the invalid strings are 1010, 0101 and 1001.</p>
| G Cab | 317,234 | <p>Consider a binary string with <span class="math-container">$s$</span> ones and <span class="math-container">$m$</span> zeros in total.<br />
Let's put an additional (dummy) fixed zero at the start and at the end of the string.
We call a <em>run</em> the (possibly empty) block of consecutive <span class="math-container">$1$</span>'s between two zeros, thereby including runs of null length. With this scheme we have a fixed number of <span class="math-container">$m+1$</span> runs.</p>
<p><a href="https://i.stack.imgur.com/N75Lp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N75Lp.png" alt="Bernoulli_runs_2" /></a></p>
<p>The number of different strings with the above numbers of zeros and ones is obviously
<span class="math-container">$$
\left( \matrix{ m + s \cr s \cr} \right) = \left( \matrix{ m + 1 + s - 1 \cr s \cr} \right)
$$</span>
which corresponds to the <a href="https://en.wikipedia.org/wiki/Composition_%28combinatorics%29" rel="nofollow noreferrer">weak compositions</a>
of <span class="math-container">$s$</span> into <span class="math-container">$m+1$</span> parts.</p>
<p>The number of compositions of <span class="math-container">$s$</span> into <span class="math-container">$k$</span> non-null parts (<em>strong</em> compositions) is instead
<span class="math-container">$$ \binom{s-1}{k-1} $$</span>
and
<span class="math-container">$$
\eqalign{
& \left( \matrix{ m + s \cr s \cr} \right)
= \sum\limits_{\left( {1\, \le } \right)\,k\,\left( { \le \,\min \left( {m + 1,s} \right)} \right)}
{\left( \matrix{ m + 1 \cr k \cr} \right)\left( \matrix{ s - 1 \cr k - 1 \cr} \right)} = \cr
& = \sum\limits_{\left( {1\, \le } \right)\,k\,\left( { \le \,\min \left( {m + 1,s} \right)} \right)}
{\left( \matrix{ m + 1 \cr m + 1 - k \cr} \right)\left( \matrix{ s - 1 \cr k - 1 \cr} \right)} \cr}
$$</span></p>
<p>So we can concentrate on strong compositions with no equal consecutive parts.<br />
Consider the strong composition of <span class="math-container">$s$</span> into <span class="math-container">$k$</span> parts, the last of which is <span class="math-container">$r$</span>
<span class="math-container">$$
\left[ {r_{\,1} ,\,r_{\,2} ,\; \cdots ,\,r_{\,k - 1} ,r\;} \right]
\quad \left| {\;r_{\,1} + \,r_{\,2} + \; \cdots + \,r_{\,k - 1} + r = s} \right.
$$</span>
whose number is
<span class="math-container">$$
C_T(s,k,r) = \left[ {k = 1} \right] + \left( \matrix{ s - r - 1 \cr k - 2 \cr} \right)
= \left( \matrix{ s - r - 1 \cr s - r - k + 1 \cr} \right)
\quad \left| \matrix{ \;1 \le k \le s \hfill \cr \;1 \le r \le s \hfill \cr} \right.
$$</span>
where <span class="math-container">$[P]$</span> denotes the <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer"><em>Iverson bracket</em></a>.<br />
Then the sum over <span class="math-container">$r$</span> will correctly give
<span class="math-container">$$
\eqalign{
& \sum\limits_{r = 1}^s {C_T (s,k,r)}
= \sum\limits_{r = 1}^s {\left( \matrix{ s - r - 1 \cr s - r - k + 1 \cr} \right)}
= \sum\limits_{j = 0}^{s - 1}
{\left( \matrix{ j - 1 \cr j - k + 1 \cr} \right)} = \cr
& = \sum\limits_{\left( {k - 1\, \le } \right)\,j\,\left( { \le \,s - 1} \right)}
{\left( \matrix{ s - 1 - j \cr s - 1 - j \cr} \right)\left( \matrix{ j - 1 \cr j - k + 1 \cr} \right)}
= \left( \matrix{ s - 1 \cr s - k \cr} \right) \cr}
$$</span></p>
<p>Let's indicate with <span class="math-container">$C_G (s,p,r), \; C_B (s,p,r)$</span> the number of good and bad strong compositions of <span class="math-container">$s$</span> into <span class="math-container">$p$</span> parts whose last part
equals <span class="math-container">$r$</span>.</p>
<p>Then we have the relations
<span class="math-container">$$
\left\{ \matrix{
C_T (s,p,r) = \left[ {1 \le p \le s} \right]\left[ {1 \le r \le s} \right]\left( \matrix{
s - r - 1 \cr
s - r - p + 1 \cr} \right) \hfill \cr
C_G (s,1,r) = C_T (s,1,r) = \left[ {r = s} \right]\quad C_B (s,1,r) = 0 \hfill \cr
C_G (s,p,r) + C_B (s,p,r) = C_T (s,p,r) \hfill \cr
C_B (s,p,r) = \sum\limits_{k = 1}^{s - r} {C_B (s - r,p - 1,k)} + C_G (s - r,p - 1,r) = \hfill \cr
= \sum\limits_{k = 1}^{s - r} {C_B (s - r,p - 1,k)} - C_B (s - r,p - 1,r) + C_T (s - r,p - 1,r) \hfill \cr
C_G (s,p,r) = \sum\limits_{k = 1}^{s - r} {C_G (s - r,p - 1,k)} - C_G (s - r,p - 1,r) \hfill \cr} \right.
$$</span></p>
<p>In particular for the good strong compositions we can write the recurrence
<span class="math-container">$$
C_G (s,p,r) = \sum\limits_{k = 1}^{s - r} {C_G (s - r,p - 1,k)} - C_G (s - r,p - 1,r) + \left[ {1 = p} \right]\left[ {r = s} \right]
$$</span></p>
<p>After computing <span class="math-container">$C_G$</span>, we can sum on <span class="math-container">$r$</span> and then go back along the previous steps
to compute the good weak compositions in terms of <span class="math-container">$s,m$</span> and finally the number in <span class="math-container">$n$</span>,
i.e.:
<span class="math-container">$$
N_G (n) = \sum\limits_{s = 1}^n {\sum\limits_{p = 1}^{n - s + 1}
{\left( \matrix{ n - s + 1 \cr p \cr} \right)
\sum\limits_{r = 1}^s {C_G (s,p,r)} } }
$$</span>
which in fact for <span class="math-container">$0 \le n \le 16$</span> gives
<span class="math-container">$$
0, \, 1, \, 3, \, 6, \, 12, \, 23, \, 44, \, 82, \, 153,
\, 284, \, 527, \, 978, \, 1814, \, 3363, \, 6234, \, 11554, \, 21413
$$</span>
not counting as good the all zeros string.</p>
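<p>As an independent sanity check, the counts in the question can be reproduced by brute force over all <span class="math-container">$2^n$</span> strings (a throwaway Python sketch, separate from the derivation above):</p>

```python
from itertools import groupby

def is_valid(bits):
    # lengths of the maximal blocks of 1s, left to right
    runs = [len(list(g)) for b, g in groupby(bits) if b == '1']
    return all(x != y for x, y in zip(runs, runs[1:]))

def count_valid(n):
    return sum(is_valid(format(i, '0%db' % n)) for i in range(2 ** n))

print([count_valid(n) for n in range(1, 9)])  # [2, 4, 7, 13, 24, 45, 83, 154]
```

<p>which matches the question's data, i.e. <span class="math-container">$N_G(n)+1$</span> once the all-zeros string is counted back in.</p>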
|
286,798 | <blockquote>
<p>Find the limit $$\lim_{n \to \infty}\left[\left(1-\frac{1}{2^2}\right)\left(1-\frac{1}{3^2}\right)\cdots\left(1-\frac{1}{n^2}\right)\right]$$</p>
</blockquote>
<p>I take log and get $$\lim_{n \to \infty}\sum_{k=2}^{n} \log\left(1-\frac{1}{k^2}\right)$$</p>
| lab bhattacharjee | 33,337 | <p>The $r$th term is $\frac{r^2-1}{r^2}=\frac{(r-1)}r\frac{(r+1)}r$</p>
<p>So, the product of $n$ terms is $$\frac{3\cdot1}{2^2}\,\frac{4\cdot2}{3^2}\,\frac{5\cdot3}{4^2}\cdots \frac{(n-1)(n-3)}{(n-2)^2}\,\frac{(n-2)n}{(n-1)^2}\,\frac{(n-1)(n+1)}{n^2}$$
$$=\frac12\,\frac32\,\frac23\,\frac43\cdots\frac{n-2}{n-1}\,\frac n{n-1}\,\frac{n-1}n\,\frac{n+1}n=\frac12\,\frac{(n+1)}n$$</p>
<p>as the 1st half of any term is cancelled by the last half of the previous term except for the 1st & the last term.</p>
|
2,637,812 | <p>Here is a dice game question about probability.</p>
<p>Play a game with $2$ dice. What is the probability of getting a sum greater than $7$?</p>
<p>I know the probability for this one is easy: $\cfrac{1+2+3+4+5}{36}=\cfrac 5{12}$.</p>
<p>I don't know how to solve the follow-up question:</p>
<p>Play a game with $200$ dice. What is the probability of getting a sum greater than $700$?</p>
| Prasun Biswas | 215,900 | <p>$$\frac{x^2-\log(1+x^2)}{x^2\sin^2x}=\frac{\dfrac 1{x^2}-\dfrac{\log(1+x^2)}{x^4}}{\dfrac{\sin^2x}{x^2}}$$</p>
<p>Taking $z=x^2$, note that the numerator becomes $$\dfrac 1z-\dfrac{\log(1+z)}{z^2}=\dfrac{z-\log(1+z)}{z^2}\to\dfrac{1-\frac 1{1+z}}{2z}\to\dfrac{1}{2(z+1)^2}\to\frac 12$$</p>
<p>as $z=x^2\to 0$ when $x\to 0$, where the arrowed steps use L'Hopital</p>
<p>The denominator goes to $1$ since $\dfrac{\sin x}x\to 1$ as $x\to 0$</p>
<p>Thus, combining the limits of the numerator and the denominator, the limit is $1/2$.</p>
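<p>The value is easy to confirm numerically (my own quick script; <code>math.log1p</code> avoids the catastrophic cancellation in $x^2-\log(1+x^2)$ for small $x$):</p>

```python
import math

def h(x):
    num = x * x - math.log1p(x * x)   # x^2 - log(1 + x^2)
    den = (x * math.sin(x)) ** 2      # x^2 sin^2 x
    return num / den

for x in (0.5, 0.1, 0.01):
    print(x, h(x))  # values approach 0.5 as x -> 0
```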
|
919,699 | <p>I am very bad at problems involving expected return and was hoping some one could help me out.</p>
<p>You are offered a chance to play a game for $48 against 99 other players (100 including you). The game consists of 16 rounds, and in each round you have 4 chances to win. The first winner picked in each round gets 30 dollars, the second winner gets 50 dollars, the third gets 30 dollars, and the fourth gets 100 dollars. It is also possible for one person to win more than one of the prizes in one round. The question is: what is the expected payout of playing this game?</p>
<p><strong>Note</strong>: All rounds are independent of each other and each player has an equal chance to win each prize in each round.</p>
| amcalde | 168,694 | <p>Your expected winnings for a round are:
$$(1/100)\cdot 30 + (1/100)\cdot 50 + (1/100)\cdot 30 + (1/100) \cdot 100 = \frac{21}{10}$$</p>
<p>Do this 16 times. You expect to win
$$16 \cdot \frac{21}{10}\,\$ = \frac{168}{5}\,\$ = 33.6\,\$$$</p>
<p>Not worth the money.</p>
<p>Payout is $-48\$ + 33.6\$ = -14.4\$$</p>
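<p>By linearity of expectation the four per-round prizes simply add, so the whole computation fits in a few lines (variable names are mine):</p>

```python
prizes = [30, 50, 30, 100]         # each won with probability 1/100 per round
rounds, players, entry = 16, 100, 48

per_round = sum(prizes) / players  # 210/100 = 2.1 dollars per round
winnings = rounds * per_round      # 33.6 dollars over 16 rounds
payout = winnings - entry          # -14.4 dollars
print(per_round, winnings, payout)
```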
|
397,040 | <p>What is the domain for $$\dfrac{1}{x}\leq\dfrac{1}{2}$$</p>
<p>according to the rules of taking the reciprocals, $A\leq B \Leftrightarrow \dfrac{1}{A}\geq \dfrac{1}{B}$, then the domain should be simply $$x\geq2$$</p>
<p>However, negative numbers less than $-2$ also satisfy the original inequality. What am I missing in my understanding?</p>
| user49685 | 49,685 | <p>No, you are not missing anything, that rule only works for $A; B > 0$, so $\dfrac{1}{x} \le \dfrac{1}{2} \Leftrightarrow \left[ \begin{array}{l} x < 0 \\ x \ge 2 \end{array} \right.$</p>
<p>Of course it's easy to see that for $x < 0$, we'll always have $\dfrac{1}{x} < 0 < \dfrac{1}{2}$.</p>
|
4,058,600 | <p>Please pardon the elementary question: for some reason I'm not grokking why all possible poker hand combinations are equally probable, as all textbooks and websites say. Just intuitively I would think getting 4 of a number is much more improbable than getting 1 of each number, if I were to draw 4 cards. For example, ignoring order, to get 4 of a single number there are only <span class="math-container">$4 \choose 4$</span> distinct possibilities, whereas for 1 of each number I would have <span class="math-container">${4 \choose 1}^4$</span> distinct possibilities.</p>
| 311411 | 688,046 | <p>Often the books and websites speak a little too loosely. I would say that "getting 4 of a kind" is not a poker hand. It is a set of many different poker hands. A "possible poker hand" is completely specific, e.g. the hand 4 of spades, 4 of hearts, 4 of clubs, 7 of clubs, 8 of diamonds.</p>
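<p>The point can be made concrete by enumeration: every specific 4-card hand is equally likely, with probability $1/\binom{52}{4}$, but the <em>category</em> "four of a kind" contains only $13$ hands, while "four distinct ranks" contains $\binom{13}{4}\cdot 4^4 = 183040$ of them. A quick sketch in Python (my own illustration):</p>

```python
from itertools import combinations

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
hands = list(combinations(deck, 4))  # C(52, 4) = 270725 equally likely hands

four_of_a_kind = sum(1 for h in hands if len({r for r, _ in h}) == 1)
four_distinct_ranks = sum(1 for h in hands if len({r for r, _ in h}) == 4)

print(len(hands), four_of_a_kind, four_distinct_ranks)  # 270725 13 183040
```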
|
698,743 | <blockquote>
<p>Let the real coefficient polynomials
$$f(x)=a_{n}x^n+a_{n-1}x^{n-1}+\cdots+a_{1}x+a_{0}$$
$$g(x)=b_{m}x^m+b_{m-1}x^{m-1}+\cdots+b_{1}x+b_{0}$$
where $a_{n}b_{m}\neq 0,n\ge 1,m\ge 1$, and let
$$g_{t}(x)=b_{m}x^m+(b_{m-1}+t)x^{m-1}+\cdots+(b_{1}+t^{m-1})x+(b_{0}+t^m).$$
Show that</p>
<p><strong>there exists a positive $\delta$ such that, for every $t$ with $0<|t|<\delta$,
$f(x)$ and $g_{t}(x)$ are relatively prime.</strong></p>
</blockquote>
<p>I feel this result is very nice, because even when two polynomials are not relatively prime, a tiny perturbation of one of them makes them relatively prime.</p>
<p>But I can't prove it.</p>
<p>Thank you very much</p>
| Lutz Lehmann | 115,115 | <p>You just need to check that the resultant $r(t)=Res_x(f(x), g_t(x))$, which is a polynomial in $t$, is not the zero polynomial. Any root of $r(t)$ marks a value of $t$ where both polynomials have a common factor. If $r(t)\ne 0$, the polynomials $f(x)$ and $g_t(X)$ are relatively prime.</p>
<p>And as in the other answers, as $r(t)$ only has a finite number of roots...</p>
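<p>For a concrete toy instance (my own example, not from the question): take $f(x)=x^2-1$ and $g(x)=(x-1)^2$, which share the factor $x-1$, so $g_t(x)=x^2+(t-2)x+(1+t^2)$. The resultant can be evaluated as the determinant of the Sylvester matrix; it vanishes at $t=0$ but is not the zero polynomial in $t$, so every small nonzero $t$ avoiding its finitely many roots gives coprime polynomials. A sketch, assuming NumPy is available:</p>

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest power first)."""
    n, m = len(f) - 1, len(g) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):
        S[i, i:i + n + 1] = f
    for i in range(n):
        S[m + i, i:i + m + 1] = g
    return S

def res(f, g):
    # Res_x(f, g) = determinant of the Sylvester matrix
    return np.linalg.det(sylvester(f, g))

f = [1.0, 0.0, -1.0]                         # x^2 - 1
g_t = lambda t: [1.0, t - 2.0, 1.0 + t * t]  # g_0 = (x - 1)^2

print(res(f, g_t(0.0)))  # ~0: f and g_0 share the root x = 1
print(res(f, g_t(0.5)))  # nonzero: f and g_t are coprime
```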
|
536,187 | <p>Let $\mathbb{R}^{\omega}$ be the countable product of $\mathbb{R}$. Make it a topological space using the box topology. Let $\pi_{n}$ denote the usual projection maps. </p>
<p>Fix $N \in \mathbb{Z}_+$ and define $A_N = \{x \in \mathbb{R}^\omega$ $|$ $\pi_{k}(x) = 0$ $\forall k>N\}$. Show that $A_N$ is closed in the box topology. </p>
<p>We know that $A_N$ is closed in the box topology if it has a complement that is open in the box topology. </p>
| Stefan Hamcke | 41,672 | <p><strong>Hint:</strong> If $y\notin A_N$, then for some $k>N$ we have $\pi_k(y)=r\ne0$. Can you find a box around $y$ which does not intersect $A_N$? Think about what it means for a box $B$ to be disjoint from $A_N$.</p>
|
516,544 | <p>The following is an <a href="http://placement.freshersworld.com/placement-papers/IBM/Placement-Paper-Whole-Testpaper-37851" rel="nofollow">aptitude problem (question no: 29-32)</a>, which I am trying to solve:</p>
<blockquote>
<p>Questions 29 - 32:</p>
<p>A, B, C, D, E and F are six positive integers such that</p>
<p>B + C + D + E = 4A</p>
<p>C + F = 3A</p>
<p>C + D + E = 2F</p>
<p>F = 2D</p>
<p>E + F = 2C + 1</p>
<p>If A is a prime number between 12 and 20, then</p>
<ol>
<li>The value of F is</li>
</ol>
<p>(A) 14</p>
<p>(B) 16</p>
<p>(C) 20</p>
<p>(D) 24</p>
<p>(E) 28</p>
<ol>
<li>Which of the following must be true?</li>
</ol>
<p>(A) D is the lowest integer and D = 14</p>
<p>(B) C is the greatest integer and C = 23</p>
<p>(C) B is the lowest integer and B = 12</p>
<p>(D) F is the greatest integer and F = 24</p>
<p>(E) A is the lowest integer and A = 13</p>
</blockquote>
<p>Now there are 5 equations in 6 unknowns, so I am at a loss as to how I should start solving the problem. Any help is appreciated.</p>
| Prasoon Shrivastav | 106,352 | <p>B + C + D + E = 4A ------1</p>
<p>C + F = 3A ----------------2</p>
<p>C + D + E = 2F -----------3</p>
<p>F = 2D --------------------4</p>
<p>E + F = 2C + 1-----------5</p>
<p>Also A can be 13, 17 or 19</p>
<p>Eliminating D from all the equations we get</p>
<p>B+2F = 4A ------6</p>
<p>C+F = 3A -------7</p>
<p>C+E = 3F/2 -----8</p>
<p>E+F = 2C+1 ----9</p>
<p>Adding 7 and 8</p>
<p>2C+E+F = 3(A+F/2)</p>
<p>4C+1 = 3(A+F/2)</p>
<p>Substituting value of C from 7</p>
<p>4(3A - F) + 1 = 3(A+F/2)</p>
<p>=> A = (11F - 2)/18, i.e. F = (18A + 2)/11
=> Of A = 13, 17, 19, only A = 17 makes F an integer</p>
<p>Therefore A = 17, F = 28, C = 23 , D = 14, B = 12, E = 19</p>
<p>1.) F = 28</p>
<p>2.) B = 12 and the lowest</p>
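<p>The elimination can also be checked mechanically with a brute force over the allowed primes (a throwaway Python sketch; the bound on D is arbitrary):</p>

```python
# F = 2D, C = 3A - F, E = 2C + 1 - F and B = 4A - C - D - E follow from
# equations 4, 2, 5 and 1; equation 3 and positivity are left to check.
sols = []
for A in (13, 17, 19):             # primes between 12 and 20
    for D in range(1, 100):
        F = 2 * D
        C = 3 * A - F
        E = 2 * C + 1 - F
        B = 4 * A - C - D - E
        if min(B, C, E) > 0 and C + D + E == 2 * F:
            sols.append((A, B, C, D, E, F))

print(sols)  # [(17, 12, 23, 14, 19, 28)]
```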
|