<p>The following question was asked in a theoretical computer science entrance exam in India:</p> <blockquote> <p>A spider is at the bottom of a cliff, $n$ inches from the top. Every step it takes brings it one inch closer to the top with probability $1/3$, and one inch away from the top with probability $2/3$, unless it is at the bottom, in which case it always gets one inch closer. What is the expected number of steps for the spider to reach the top, as a function of $n$?</p> </blockquote> <p>How do we solve this problem? The recurrence for the expectation seems tricky because of the boundary conditions :(</p>
joriki
<p>The recurrence for the expectation $a_k$ (the expected number of steps when the spider is $k$ inches from the top) is</p> <p>$$ a_k=1+\frac13a_{k-1}+\frac23a_{k+1}\;, $$</p> <p>valid for $1\le k\le n-1$, with general solution $a_k=c_1+c_22^{-k}-3k$. The boundary conditions are $a_0=0$ and $a_n=1+a_{n-1}$, yielding $c_1+c_2=0$ and $c_22^{-n}-3n=1+c_22^{-(n-1)}-3(n-1)$. Solving the second equation for $c_2$ yields $c_2=-4\cdot2^n$, so $a_k=4\cdot2^n\left(1-2^{-k}\right)-3k$ and $a_n=4\cdot\left(2^n-1\right)-3n$.</p>
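The closed form $a_n=4\cdot\left(2^n-1\right)-3n$ can be sanity-checked by solving the recurrence and boundary conditions exactly in rational arithmetic; a small Python sketch (the helper name `expected_steps` is mine):

```python
from fractions import Fraction as F

def expected_steps(n):
    # a_k = expected steps when the spider is k inches from the top:
    # a_0 = 0, a_k = 1 + (1/3) a_{k-1} + (2/3) a_{k+1}, and a_n = 1 + a_{n-1}.
    # Write each a_k as p + q*t with t = a_1 unknown, then solve for t
    # from the boundary condition at the bottom.
    ps = [F(0), F(0)]   # constant parts: a_0 = 0, a_1 = t
    qs = [F(0), F(1)]   # coefficients of t
    for k in range(1, n):
        # rearranging the recurrence: a_{k+1} = (3 a_k - 3 - a_{k-1}) / 2
        ps.append((3 * ps[k] - 3 - ps[k - 1]) / 2)
        qs.append((3 * qs[k] - qs[k - 1]) / 2)
    # boundary at the bottom: a_n = 1 + a_{n-1}
    t = (1 + ps[n - 1] - ps[n]) / (qs[n] - qs[n - 1])
    return ps[n] + qs[n] * t
```

Running this confirms the formula, e.g. `expected_steps(2) == 6`, matching $4\cdot(2^2-1)-3\cdot2$.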
<p>Male basketball players at the high school, college, and professional ranks use a regulation basketball that weighs an average of 22.0 ounces with a standard deviation of 1.0 ounce. Assume that the weights of basketballs are approximately normally distributed. (Round to the nearest tenth of a percent.)</p> <p>a. What % of regulation basketballs weigh less than 19.5 ounces?</p> <p>b. What % of regulation basketballs weigh more than 23.1 ounces?</p> <p>c. What % of regulation basketballs weigh between 19.5 and 22.5 ounces?</p> <p>d. What is the weight of a basketball if its weight is at the 85th percentile (or top 15%)?</p>
Techie91
<p>It looks like you need to find the z-score and use the Z table, where $\bar{x} = 22$ ounces and $\sigma = 1$ ounce. You need to use the equation $Z = (x-\bar{x})/\sigma$.</p> <p>a) You need to find $P(Z \le z)$ for $x = 19.5$. Solve for $z$, then look up $P(Z \le z)$ in the Z table, where the table entry is the area under the curve to the left of $z$. Look up how to use the Z table; here is one you can use. There are several Z tables available, so make sure you are consistent.</p> <p><a href="http://d2r5da613aq50s.cloudfront.net/wp-content/uploads/360197.image0.jpg" rel="nofollow noreferrer">http://d2r5da613aq50s.cloudfront.net/wp-content/uploads/360197.image0.jpg</a></p> <p>b) Same as a, except you want the area to the right, $1 - P(Z \le z)$.</p> <p>c) You need to find $P(X \le 22.5)$ and $P(X \le 19.5)$, and take the difference to get the area between the two points.</p> <p>d) Here $P(Z \le z) = 0.85$. Look this up in the Z table, and work backwards to find $x$.</p>
Brandon
<p>I will demonstrate the usage of NormalCDF and InverseNorm in a graphing calculator to solve problems like these, though they can be solved like the answer given by Techie91.</p> <p>The notation for NormalCDF is <em>NormalCDF(lower bound, upper bound, mean, standard deviation)</em></p> <p>Part A:</p> <p>$$NormalCDF (-100, 19.5, 22, 1) =.006$$</p> <p>Part B:</p> <p>$$NormalCDF (23.1, 100, 22, 1) =.136$$</p> <p>Part C:</p> <p>$$NormalCDF(19.5,22.5,22,1) = .685$$</p> <p><em>Note that these are proportions, so multiply by 100 to get percentages.</em></p> <p><strong>The usage of -100 or 100 serves as a lower or upper bound, respectively. You could probably get the same results using 50 (or some value that's far enough from the mean). I like to use 1000 or even 10000 just to be safe.</strong></p> <p>The notation for InverseNorm is <em>InverseNorm(area to the left or right <strong>(depending what the question asks, such as top 15% or bottom 15%)</strong>, mean, standard deviation)</em></p> <p>Part D: $$InverseNorm(.85,22,1) = 23.04$$ </p> <p><em>This is of course a weight, in this case ounces.</em></p>
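These calculator values can be cross-checked against the standard normal CDF $\Phi(z)=\frac12\left(1+\operatorname{erf}(z/\sqrt2)\right)$; a Python sketch, with bisection standing in for InverseNorm:

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 22.0, 1.0
a = phi((19.5 - mu) / sigma)                        # P(X < 19.5)
b = 1.0 - phi((23.1 - mu) / sigma)                  # P(X > 23.1)
c = phi((22.5 - mu) / sigma) - phi((19.5 - mu) / sigma)

# 85th percentile: invert phi by bisection
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if phi(mid) < 0.85:
        lo = mid
    else:
        hi = mid
d = mu + sigma * (lo + hi) / 2                      # 85th-percentile weight
```

Part B comes out to about $0.1357$, i.e. $13.6\%$ after rounding, and the percentile weight to about $23.04$ ounces.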
<p>I have some lines of code that generate numerical solutions to equations. Then I want to combine two of these in a piecewise function. The way I did it is the following (lf1[r] and lf4[r] are the aforementioned numerical solutions):</p> <pre><code>test[r_] := Piecewise[{{lf1[r], 0.688199 &lt;= r &lt;= 10}, {lf4[r], 0.687159 &lt;= r &lt;= 0.688199}}] Show[Plot[test[r], {r, 0.676319, rmax}, PlotStyle -&gt; {Thick}, BaseStyle -&gt; {18, FontFamily -&gt; "Times New Roman"}, AxesLabel -&gt; {"\[Rho]", "L(\[Rho])"}, PlotRange -&gt; {{0, rmax}, {0, 1.4}}], Plot[x, {x, 0, 1.4}, PlotStyle -&gt; {Thick, Black}]] </code></pre> <p>The plot is the following </p> <p><a href="https://i.stack.imgur.com/hQdkB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hQdkB.png" alt="enter image description here"></a></p> <p>Then I would like to have different colours in the different sectors of the piecewise function. I found some excellent answers here and I tried to adapt them, particularly those <a href="https://mathematica.stackexchange.com/questions/1128/plotting-piecewise-function-with-distinct-colors-in-each-section?noredirect=1&amp;lq=1">in this link</a>. However, I am facing some difficulties that I do not understand. 
</p> <p>Example 1: Different colours, wrong plot.</p> <pre><code>pwSplit[_[pairs : {{_, _} ..}]] := Piecewise[{#}, Indeterminate] &amp; /@ pairs pwSplit[_[pairs : {{_, _} ..}, expr_]] := Append[pwSplit@{pairs}, pwSplit@{{{expr, Nor @@ pairs[[All, 2]]}}}] pw = Piecewise[{{lf4[r], 0.687159 &lt;= r &lt;= 0.688199}, {lf1[r], 0.688199 &lt;= r &lt;= 10}}]; Plot[Evaluate[pwSplit@pw], {r, 0, 1}, PlotStyle -&gt; Thick, Axes -&gt; True] </code></pre> <p>The plot is <a href="https://i.stack.imgur.com/KH8L5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KH8L5.png" alt="enter image description here"></a></p> <p>Example 2: This time I don't get multiple colours, and I also get a wrong plot (there is a black flat line at the bottom that should not be there).</p> <pre><code>f = Piecewise[{{lf1[#], 0.688199 &lt;= # &lt;= 10}, {lf4[#], 0.687159 &lt;= # &lt;= 0.688199}}] &amp;; colorFunction = f; piecewiseParts = Length@colorFunction[[1, 1]]; colors = ColorData[1][#] &amp; /@ Range@piecewiseParts; colorFunction[[1, 1, All, 1]] = colors; Show[Plot[f[x], {x, 0, 10}, ColorFunction -&gt; colorFunction, ColorFunctionScaling -&gt; False, PlotRange -&gt; {{0, rmax}, {0, 1.4}}, PlotStyle -&gt; {Thick}], Plot[x, {x, 0, 1.4}, PlotStyle -&gt; {Thick, Black}]] </code></pre> <p><a href="https://i.stack.imgur.com/1XkYG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1XkYG.png" alt="enter image description here"></a></p> <p>I don't understand what I am doing wrong in either case, and it is not clear whether I should modify something due to the fact that I have numerical functions rather than analytic ones. </p>
Bob Hanlon
9,362
<pre><code>Clear["Global`*"] test[r_] := Piecewise[{{Exp[r], r &lt; -1}, {1 - r^2, -1 &lt; r &lt; 1}, {Sin[Pi r], r &gt; 1}}]; plotRng = {-3, 3}; </code></pre> <p>Extract plot intervals from <a href="https://reference.wolfram.com/language/ref/Piecewise.html" rel="noreferrer"><code>Piecewise</code></a> and the specified <code>plotRng</code></p> <pre><code>intervals = {Cases[test[r][[1, All, -1]], _?NumericQ, 2], plotRng} // Flatten // Union // Partition[#, 2, 1] &amp;; </code></pre> <p><a href="https://reference.wolfram.com/language/ref/Plot.html" rel="noreferrer"><code>Plot</code></a> each interval separately and combine with <a href="https://reference.wolfram.com/language/ref/Show.html" rel="noreferrer"><code>Show</code></a></p> <pre><code>Module[{n = 4}, Show[Plot[test[r], {r, Sequence @@ #}, PlotStyle -&gt; ColorData[97][n++]] &amp; /@ intervals, PlotRange -&gt; {plotRng, Automatic}]] </code></pre> <p><a href="https://i.stack.imgur.com/C3DLO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/C3DLO.png" alt="enter image description here"></a></p>
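The breakpoint-splitting step (the Union/Partition combination) can also be mirrored outside Mathematica; a Python sketch of the same interval logic (the function name is mine):

```python
def plot_intervals(breakpoints, plot_range):
    # Collect the Piecewise breakpoints that fall inside the plot range,
    # add the range endpoints, and split into consecutive sub-intervals
    # (the analogue of Union followed by Partition[#, 2, 1] &).
    lo, hi = plot_range
    pts = sorted({lo, hi, *(b for b in breakpoints if lo < b < hi)})
    return list(zip(pts, pts[1:]))

# breakpoints of the example Piecewise: r = -1 and r = 1
intervals = plot_intervals([-1, 1], (-3, 3))   # [(-3, -1), (-1, 1), (1, 3)]
```

Breakpoints outside the plot range simply drop out, leaving the full range as a single interval.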
<p>Find an isomorphism between <span class="math-container">$\mathbb F_2[x]/(x^3+x+1)$</span> and <span class="math-container">$\mathbb F_2[x]/(x^3+x^2+1)$</span>.</p> <hr /> <p>It is easy to construct an injection <span class="math-container">$f$</span> satisfying <span class="math-container">$f(a+b)=f(a)+f(b)$</span> and <span class="math-container">$f(ab)=f(a)f(b)$</span>. However, I am stuck on how to construct such a mapping that is bijective.</p> <p>Thank you for your help!</p>
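For these particular polynomials, one candidate worth checking (my suggestion, not from the question) is to send the class of $x$ in $\mathbb F_2[x]/(x^3+x+1)$ to the class of $x+1$ on the other side: a map defined on the generator extends to a well-defined ring homomorphism exactly when the image satisfies the defining polynomial. A Python sketch, with polynomials over $\mathbb F_2$ stored as bitmasks:

```python
# F2[x] polynomials as bitmasks: bit i holds the coefficient of x^i.
def pmul(a, b):
    # carry-less multiplication = product in F2[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    # remainder of a modulo m in F2[x]
    d = m.bit_length() - 1
    while a and a.bit_length() - 1 >= d:
        a ^= m << (a.bit_length() - 1 - d)
    return a

m2 = 0b1101          # x^3 + x^2 + 1
t = 0b011            # candidate image of x: the class of x + 1
# evaluate t^3 + t + 1 in F2[x]/(x^3 + x^2 + 1)
value = pmod(pmul(pmul(t, t), t), m2) ^ t ^ 0b1
assert value == 0    # so x -> x + 1 does satisfy x^3 + x + 1 there
```

Since both quotients are fields with $2^3=8$ elements and a ring homomorphism between fields is automatically injective, any such $f$ is forced to be bijective, which settles the bijectivity concern.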
mrnovice
<p>We have $$\tan\alpha =\frac{1}{3} =\frac{\text{opposite side}}{\text{adjacent side}}$$</p> <p>Then construct a right angled triangle, with hypotenuse equal to $\sqrt{1^2+3^2} =\sqrt{10}$</p> <p>Then $$\cos\alpha = \frac{\text{adjacent}}{\text{hypotenuse}} = \frac{3}{\sqrt{10}}$$</p> <p>and $$\sin\alpha =\frac{\text{opposite}}{\text{hypotenuse}} =\frac{1}{\sqrt{10}}$$</p> <p>Then we can compute $$f(\alpha) = 25\sin(\alpha)\cos(\alpha)+\frac{75}{4}\sin(\alpha) = 25\cdot\frac{1}{\sqrt{10}}\cdot\frac{3}{\sqrt{10}}+\frac{75}{4}\cdot\frac{1}{\sqrt{10}}$$</p> <p>$$f(\alpha) = \frac{75}{10}+\frac{75}{4\sqrt{10}} = \frac{15}{2}+\frac{75\sqrt{10}}{40} =\frac{15}{2}+\frac{15\sqrt{10}}{8}$$</p> <p>$$f(\alpha) = \frac{60+15\sqrt{10}}{8}$$</p>
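A quick numerical cross-check of the closed form; a short Python sketch:

```python
import math

alpha = math.atan(1 / 3)   # so tan(alpha) = 1/3, alpha in the first quadrant
f = 25 * math.sin(alpha) * math.cos(alpha) + (75 / 4) * math.sin(alpha)
closed_form = (60 + 15 * math.sqrt(10)) / 8
# both come out to about 13.4293
```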
<p>In geometry class, it is usually first shown that the medians of a triangle intersect at a single point. Then it is explained that this point is called the centroid and that it is the balance point and center of mass of the triangle. Why is that the case?</p> <p>This is the best explanation I could think of. I hope someone can come up with something better.</p> <p>Choose one of the sides of the triangle. Construct a thin rectangle with one side coinciding with the side of the triangle and extending into it. The center of mass of this rectangle is near the midpoint of the side of the triangle. Continue constructing thin rectangles, with each one on top of the previous one and having its lower side meet the two other sides of the triangle. In each case the centroid of the rectangle is near a point on the median. Making the rectangles thinner, in the limit all the centroids are on the median, and therefore the center of mass of the triangle must lie on the median. This follows because the center of mass of the combination of two regions lies on the segment joining the centroids of the two regions.</p>
Random
<p>The centroid is the center of mass of the configuration where we have three point masses of mass 1 at each of the three vertices of the triangle.</p> <p>Notice that we can replace two of those point masses by a mass of 2 at their midpoint (that is, at their center of mass). Therefore, the total center of mass is clearly on the line between a point and the midpoint of the opposite side, that is the median.</p>
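This mass-point argument can be verified exactly for a sample triangle; a Python sketch (vertices chosen arbitrarily) checking that the vertex average sits two thirds of the way along a median, as a mass 1 at the vertex balanced against a mass 2 at the opposite midpoint requires:

```python
from fractions import Fraction as F

A = (F(0), F(0)); B = (F(4), F(0)); C = (F(1), F(3))
G = tuple((a + b + c) / 3 for a, b, c in zip(A, B, C))   # vertex average

# midpoint of the side opposite A = center of mass of unit masses at B and C
M = tuple((b + c) / 2 for b, c in zip(B, C))

# mass 1 at A vs mass 2 at M puts the balance point 2/3 of the way to M
t = F(2, 3)
on_median = tuple(a + t * (m - a) for a, m in zip(A, M))
```

With exact rationals the two points agree identically, not just up to rounding.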
<blockquote> <p>Prove that if $f\in L^1([0,1],\lambda)$ is not constant almost everywhere then there exists an interval so that $\int_I\!f\,\mathrm{d}\lambda\neq 0$. Here $\lambda$ is the Lebesgue measure.</p> </blockquote> <p>Since this is obviously true for continuous functions, I've been trying to use the fact that continuous functions with compact support are dense in $L^1$, but I'm not sure how to set it up.</p>
peterwhy
<p>Setting $f(x) = 0$, $$\begin{align*} 1+\frac ax + \frac a{x^2} &amp;= 0\\ x^2+ax+a &amp;= 0\\ x &amp;= \frac{-a\pm\sqrt{a^2-4a}}{2} \end{align*}$$ So the $x$-intercepts, if any, depend on $a$.</p> <hr> <p>Differentiating $f(x)$ w.r.t. $x$, $$f'(x) = -\frac a{x^2}-\frac {2a}{x^3} = -\frac{ax(x+2)}{x^4}$$ Setting $f'(x) = 0$, $$\begin{align*} -\frac a{x^2}-\frac {2a}{x^3} &amp;= 0\\ -ax-2a&amp;= 0\\ x&amp;=-2 \end{align*}$$ The $x$-coordinate of the stationary point does not depend on $a$, but the $y$-coordinate does.</p> <hr> <p>Differentiating $f'(x)$ w.r.t. $x$, $$f''(x) = \frac{2a}{x^3} + \frac{6a}{x^4} = \frac{2a(x+3)}{x^4}$$ Setting $f''(x) = 0$, $$\begin{align*} \frac{2a}{x^3} + \frac{6a}{x^4} &amp;= 0\\ 2ax + 6a &amp;= 0\\ x&amp;= -3 \end{align*}$$ The $x$-coordinate of the inflexion point does not depend on $a$, but the $y$-coordinate does.</p> <p>Consider the signs in terms of $x$ (for concreteness, the sign rows below take $a&gt;0$), $$\begin{array}{r|c|c|c|c|c|c} x&amp;(-\infty,-3)&amp;-3&amp;(-3,-2)&amp;-2&amp;(-2,0)&amp;(0,\infty)\\\hline f(x)&amp;&amp;1-\frac a3+\frac a9&amp;&amp;1-\frac a2+\frac a4&amp;&amp;+\\\hline f'(x)&amp;-&amp;-&amp;-&amp;0&amp;+&amp;-\\\hline f''(x)&amp;-&amp;0&amp;+&amp;+&amp;+&amp;+ \end{array}$$</p> <hr> <p>There is also a horizontal asymptote: $$\lim_{x\to \infty}\left(1+\frac ax+\frac a{x^2}\right) = \lim_{x\to\infty} 1 + \lim_{x\to\infty} \frac ax+ \lim_{x\to\infty}\frac a{x^2} = 1$$</p>
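The claim that the $x$-coordinates of the stationary and inflexion points are independent of $a$ can be checked exactly; a Python sketch:

```python
from fractions import Fraction as F

def fp(x, a):
    # f'(x) = -a/x^2 - 2a/x^3
    return -F(a, 1) / x**2 - 2 * F(a, 1) / x**3

def fpp(x, a):
    # f''(x) = 2a/x^3 + 6a/x^4
    return 2 * F(a, 1) / x**3 + 6 * F(a, 1) / x**4

for a in (1, 2, -3):
    assert fp(F(-2), a) == 0    # stationary point at x = -2 for every a
    assert fpp(F(-3), a) == 0   # inflexion point at x = -3 for every a
```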
<blockquote> <p>Write the power series of <span class="math-container">$$f(x)=\frac{\sin(3x)}{2x}, \quad x\neq 0.$$</span></p> </blockquote> <p>I tried to get a series for <span class="math-container">$\sin(3x)$</span> about <span class="math-container">$x=0$</span> and multiply the series by <span class="math-container">$2x$</span>.</p> <p>Is that right?</p>
vlad333rrty
<p>Just write the Taylor series for <span class="math-container">$\sin(3x)$</span> and then divide by <span class="math-container">$2x$</span>: <span class="math-container">$$\frac{1}{2} \sum_{n=0}^{\infty} (-1)^n \frac{3^{2n+1}x^{2n}}{(2n+1)!}$$</span></p>
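A quick numerical check that the partial sums agree with $\frac{\sin(3x)}{2x}$; a Python sketch (the term count is arbitrary):

```python
import math

def series(x, terms=10):
    # (1/2) * sum over n of (-1)^n 3^(2n+1) x^(2n) / (2n+1)!
    return 0.5 * sum((-1)**n * 3**(2 * n + 1) * x**(2 * n)
                     / math.factorial(2 * n + 1)
                     for n in range(terms))

for x in (0.1, 0.5, 1.0):
    assert abs(series(x) - math.sin(3 * x) / (2 * x)) < 1e-9
```

At $x=0$ the series gives $3/2$, the removable-singularity value of $f$.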
<p>When taking the log of a matrix we have various choices, but fixing a particular choice, we should have</p> <p>$$P^{-1}\log{(A)} P = \log(P^{-1}AP),$$</p> <p>right? (Here $P \in GL$.)</p> <p>It is supported by the notion that we can exponentiate both sides and it comes out true. Is there some snag here that I'm missing?</p> <p>Also, once we choose a branch of logarithm, do we always have a bijection between $M_n(\mathbb{C})$ and $GL(n, \mathbb{C})$ given by $A \mapsto e^A$?</p> <p>More specifically, given a Jordan block associated with a nonzero eigenvalue $\lambda$</p> <p>$$\left( \begin{matrix} \lambda &amp; 1 &amp; &amp; &amp; \\ &amp; \lambda &amp;1 &amp; &amp; \\ &amp;&amp;\ddots&amp;\ddots \\ &amp;&amp;&amp;\ddots&amp;1 \\ &amp;&amp;&amp;&amp;\lambda \end{matrix} \right)$$</p> <p>We can choose a basis so that the above linear operator has the form</p> <p>$$\left( \begin{matrix} \lambda &amp; \lambda &amp; \lambda/2! &amp; \dotsb &amp; \lambda/n!\\ &amp; \lambda &amp;\lambda &amp; \ddots &amp; \vdots \\ &amp;&amp;\ddots&amp;\ddots&amp; \lambda/2! \\ &amp;&amp;&amp;\ddots&amp;\lambda \\ &amp;&amp;&amp;&amp;\lambda \end{matrix} \right)$$</p> <p>and then a log of such a matrix is </p> <p>$$\left( \begin{matrix} \log{\lambda} &amp; 1 &amp; &amp; &amp; \\ &amp; \log{\lambda} &amp;1 &amp; &amp; \\ &amp;&amp;\ddots&amp;\ddots \\ &amp;&amp;&amp;\ddots&amp;1 \\ &amp;&amp;&amp;&amp;\log{\lambda} \end{matrix} \right),$$</p> <p>where we have to choose a value for $\log{\lambda}$ (in principle we could choose a different value of $\log{\lambda}$ for each diagonal entry).</p> <p>Once we fix a choice of branch of log, it seems that we can get a unique log for each matrix in $GL_n(\mathbb{C})$. </p>
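The conjugation identity asked about can be checked numerically, at least for a diagonalizable matrix with eigenvalues off the branch cut; a Python sketch (`mat_log` is my helper, taking the principal log via an eigendecomposition):

```python
import numpy as np

def mat_log(A):
    # principal matrix log via eigendecomposition; assumes A is
    # diagonalizable with eigenvalues off the negative real axis
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.log(w.astype(complex))) @ np.linalg.inv(V)

A = np.array([[2.0, 1.0], [0.5, 3.0]])   # distinct positive eigenvalues
P = np.array([[1.0, 2.0], [0.0, 1.0]])
Pi = np.linalg.inv(P)

lhs = Pi @ mat_log(A) @ P        # P^{-1} log(A) P
rhs = mat_log(Pi @ A @ P)        # log(P^{-1} A P)
assert np.allclose(lhs, rhs)
```

Conjugation preserves the eigenvalues, so the same branch choices are made on both sides, which is why the identity holds here.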
André Nicolas
<p>It is not correct to divide. If it is useful, you might rewrite as $$\sum_{i=0}^kn_i(b^i-1)\equiv 0\pmod{b-1}.$$</p> <p>It is useful, for example, if you want to <strong>prove</strong> that your relationship holds for all choices of the $n_i$. That is because one can show that $b^i-1\equiv 0\pmod{b-1}$.</p> <p>That's just a fancy way of saying that $b-1$ divides $b^n-1$. The result is not hard to see, since we have $$b^n-1=(b-1)(b^{n-1}+b^{n-2}+\cdots+b+1).$$</p> <p><strong>Remark:</strong> It is quite a bit easier to work directly with <strong>congruences</strong>. Note that $b\equiv 1\pmod{b-1}$. So $b^i\equiv 1\pmod{b-1}$ for all $i$. It follows immediately that $$\sum_0^k n_ib^i\equiv \sum_0^k (n_i)(1)=\sum_0^k n_i.$$</p> <p>That will give you a generalization of the old-fashioned "casting out nines" process to a base-$b$ system.</p>
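The base-$b$ casting-out-nines congruence $\sum_0^k n_ib^i\equiv \sum_0^k n_i \pmod{b-1}$ is easy to check numerically; a Python sketch:

```python
def digit_sum(n, b):
    # sum of the base-b digits of n
    s = 0
    while n:
        s += n % b
        n //= b
    return s

# a number is congruent to its base-b digit sum modulo b - 1
for b in (2, 8, 10, 16):
    for n in (0, 7, 81, 12345, 999999):
        assert n % (b - 1) == digit_sum(n, b) % (b - 1)
```

For $b=10$ this is exactly the classical casting-out-nines check.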
<p>I have problems understanding a few concepts of elementary set theory. I've chosen a couple of problems (from my problem set) which should help me understand these concepts. To be clear: it's not homework, I'm just trying to understand elementary set theory concepts by reading solutions. <hr/> <strong>Problem 1</strong></p> <p>(I don't understand this; I mean - not at all)</p> <p>Let $f: A \to B$, where $A,B$ are non-empty sets, and let $R$ be an equivalence relation on the set $B$. We define an equivalence relation $S$ on $A$ by the condition:</p> <p>$aSb \Leftrightarrow f(a)R f(b)$. Determine which inclusion is always true:</p> <p>(a) $f([a]_S) \subseteq [f(a)]_R$</p> <p>(b) $[f(a)]_R \subseteq f([a]_S)$</p> <p>Notes:</p> <p>$[a]_S$ is an equivalence class <hr/> <strong>Problem 2</strong></p> <p>(I suppose that (a) is true &amp; (b) is false)</p> <p>Which statement is true, and which is false (+ proof):</p> <p>(a) If $f: A \xrightarrow{1-1} B$ and $f(A) \not= B$ then $|A| &lt; |B|$</p> <p>(b) If $|A| &lt; |B|$ and $C \not= \emptyset$ then $|A \times C| &lt; |B \times C|$ <hr/> <strong>Problem 3</strong></p> <p>(I don't know how to think about $\mathbb{Q}^{\mathbb{N}}$ and $\{0,1\}^*$.)</p> <p>Which sets have the same cardinality:</p> <p>$P(\mathbb{Q}), \mathbb{R}^{\mathbb{N}},\mathbb{Z}, \mathbb{Q}^{\mathbb{N}}, \mathbb{R} \times \mathbb{R}, \{ 0,1 \}^*, \{ 0,1 \}^{\mathbb{N}},P(\mathbb{R})$</p> <p>where $\{ 0,1 \}^*$ means all finite sequences/words that contain $1$ and $0$, for example $000101000100$ or $1010101010101$ etc. $P(A)$ is a power set. 
<hr/> <strong>Problem 4</strong></p> <p>(I don't understand this; I mean - not at all)</p> <p>What are the maximal/minimal/greatest/least elements in the set:</p> <p>$\{\{2; 3; 3; 5; 2\}; \{1; 2; 3; 4; 6\}; \{3\}; \{2; 1; 2; 1\}; \{1; 2; 3; 4; 5\}; \{3; 4; 2; 4; 1\}; \{2; 1; 2; 2; 1\}\}$</p> <p>ordered by subset inclusion? <hr/> <strong>Problem 5</strong></p> <p>How many equivalence relations on $\mathbb{N}$ are also partial orders? <hr/> These are simple problems, but I really need to understand how to solve this kind of problem. I would appreciate your help.</p>
Ross Millikan
<p>For problem 3, $\mathbb{Q^N}$ means the set of functions from $\mathbb{N}$ into $\mathbb{Q}$. So you need to decide how many of them there are. Certainly there are at least as many as functions from $\mathbb{N}$ into 2, which is $2^\mathbb{N}$, as you could just restrict the functions values to 0 and 1. Have you seen that before?</p> <p>For problem 4, "ordered by subset inclusion" means $a \lt b \Leftrightarrow a \subset b$. So, for example, {3} &lt; {1;2;3;4;6} and three others of the given sets. Now look at your definitions of maximum/minimum/greatest/least and see which sets satisfy them.</p>
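For problem 4, the family can also be made concrete (repeated elements inside the braces collapse, since these are sets); a Python sketch of the four notions under inclusion:

```python
# the seven listed sets; duplicates inside braces and duplicate sets collapse
family = {frozenset(s) for s in [
    {2, 3, 3, 5, 2}, {1, 2, 3, 4, 6}, {3}, {2, 1, 2, 1},
    {1, 2, 3, 4, 5}, {3, 4, 2, 4, 1}, {2, 1, 2, 2, 1},
]}

# '<' and '>' on frozensets are proper-subset / proper-superset tests
minimal  = {a for a in family if not any(b < a for b in family)}
maximal  = {a for a in family if not any(b > a for b in family)}
least    = [a for a in family if all(a <= b for b in family)]   # at most one
greatest = [a for a in family if all(b <= a for b in family)]   # at most one
```

Here there are two minimal and two maximal elements, but no least or greatest element, since the two maximal sets are incomparable.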
<p>Suppose that I have two linear functions</p> <pre><code>f[x_] := f0 + f1 x g[x_] := g0 + g1 x </code></pre> <p>and a (possibly rather complicated) set of conditional expressions, obtained through Reduce. For example, we might have something like this:</p> <pre><code>conditions = (f0 == f1 &amp;&amp; g0 == 0) || (f0 == g1 &amp;&amp; g0 == f1) </code></pre> <p>What I would like to do is write something like</p> <pre><code>{f[x],g[x]} /. conditions </code></pre> <p>and receive as output the set of pairs of $f$ and $g$ adhering to that formula. In this case we'd have</p> <pre><code>{{a + ax, bx}, {a + bx, b + ax}} </code></pre> <p>(or maybe <code>{{f0 + f0x, g1x}, {f0 + f1x, f1 + f0x}}</code> to stick with original variable names).</p> <p>How can I do this?</p>
TransferOrbit
<p>First convert your conditions to a list of <a href="http://reference.wolfram.com/mathematica/ref/Rule.html" rel="nofollow noreferrer"><code>Rule</code></a>s</p> <pre><code>myrules = Apply[List, conditions /. {Equal -&gt; Rule}, {0, 1}] </code></pre> <p>which gives</p> <p><img src="https://i.stack.imgur.com/HO9YV.jpg" alt="enter image description here"></p> <p>Then <a href="http://reference.wolfram.com/mathematica/ref/Apply.html" rel="nofollow noreferrer"><code>Apply</code></a> those <a href="http://reference.wolfram.com/mathematica/ref/Rule.html" rel="nofollow noreferrer"><code>Rule</code></a>s to your <a href="http://reference.wolfram.com/mathematica/ref/List.html" rel="nofollow noreferrer"><code>List</code></a> using a <a href="http://reference.wolfram.com/mathematica/ref/Function.html" rel="nofollow noreferrer">pure function</a> and <a href="http://reference.wolfram.com/mathematica/ref/Map.html" rel="nofollow noreferrer"><code>Map</code></a> (<a href="http://reference.wolfram.com/mathematica/ref/Map.html" rel="nofollow noreferrer"><code>/@</code></a>)</p> <pre><code>ReplaceAll[{f[x], g[x]}, #] &amp; /@ myrules </code></pre> <p>which produces</p> <p><img src="https://i.stack.imgur.com/3tr2G.jpg" alt="enter image description here"></p>
<p>Let <span class="math-container">$k$</span> be a positive integer. Is it true that there are infinitely many primes of the form <span class="math-container">$p=6r-1$</span> such that <span class="math-container">$r$</span> and <span class="math-container">$k$</span> are coprime? Feel free to assume any well known (even if hard to prove) theorem, such as Dirichlet's theorem on arithmetic progressions.</p> <p>Update: can we do it assuming only the version that for any coprime <span class="math-container">$a$</span> and <span class="math-container">$d$</span> there are infinitely many primes <span class="math-container">$p$</span> with <span class="math-container">$p\equiv a \pmod d$</span>?</p> <p>Any help appreciated!</p>
Mohammed M. Zerrak
<p>Assume the contrary: there are only finitely many primes of the form <span class="math-container">$6r-1$</span> with <span class="math-container">$\gcd(k,r)=1$</span>. Since <span class="math-container">$k$</span> has only finitely many prime divisors, some prime <span class="math-container">$q$</span> dividing <span class="math-container">$k$</span> divides <span class="math-container">$r$</span> for all but finitely many of the primes <span class="math-container">$6r-1$</span>; that is, all but finitely many of them have the form <span class="math-container">$6qs-1$</span>. Now the quantitative form of Dirichlet's theorem states that for coprime <span class="math-container">$a$</span> and <span class="math-container">$b$</span>,</p> <p><span class="math-container">$$\#\{p\le x : p\equiv b \pmod a,\ p\in\mathbb P\} \sim \frac{1}{\varphi(a)}\frac{x}{\log x},$$</span> so in particular <span class="math-container">$$\#\{p\le x : p\equiv -1 \pmod 6\} \sim \frac{1}{\varphi(6)}\frac{x}{\log x}.$$</span></p> <p>But under our assumption the sets <span class="math-container">$\{p\le x : p\equiv -1\pmod 6\}$</span> and <span class="math-container">$\{p\le x : p\equiv -1\pmod {6q}\}$</span> differ by only finitely many elements, so they have the same asymptotics: <span class="math-container">$$\frac{1}{\varphi(6)}\frac{x}{\log x} \sim \frac{1}{\varphi(6q)}\frac{x}{\log x},$$</span> which forces <span class="math-container">$\varphi(6q)=\varphi(6)$</span>, which is absurd.</p> <p>Conclusion: there exist infinitely many primes <span class="math-container">$6r-1$</span> with <span class="math-container">$\gcd(k,r)=1$</span>.</p>
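Independently of the argument, the claim itself can be sanity-checked numerically; a Python sketch (the bound and the choice $k=10$ are mine):

```python
from math import gcd

def is_prime(n):
    # simple trial division, fine at this scale
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

k = 10
# primes p = 6r - 1 below 10000 with gcd(r, k) = 1
hits = [p for p in range(5, 10000, 6)         # p ≡ -1 (mod 6)
        if is_prime(p) and gcd((p + 1) // 6, k) == 1]
# such primes do not appear to run out: 5, 17, 41, ...
```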
<p>Most people learn in linear algebra that it's possible to calculate the eigenvalues of a matrix by finding the roots of its characteristic polynomial. However, this method is actually very slow, and while it's easy to remember and possible for a person to use by hand, there are many better techniques available (which do not rely on factoring a polynomial).</p> <p>So I was wondering, why on earth is it actually important to have techniques available to solve polynomial equations? (To be specific, I mean solving over $\mathbb{C}$.)</p> <p>I actually used to be fairly interested in how to do it, and I know a lot of the different methods that people use. I was just thinking about it, though, and I'm actually not sure what sort of applications there are for those techniques.</p>
J. M. ain't a mathematician
<p>(This was supposed to be a comment, but it got too long.)</p> <p>Robert raises a good question in the comments. Of course, general analytical expressions for roots of polynomials of high degree aren't really used much in practice, precisely because the expressions themselves are a bit unwieldy, and the special functions (theta functions, hypergeometric functions) that are involved in the closed-form expressions have to be numerically evaluated anyway, and for numerics, there's a bunch of more efficient numerical methods for getting a pile of roots than evaluating special functions.</p> <p>Now, there are a number of reasons for the interest in solving polynomials <em>numerically</em>: for instance, the behavior of solutions to difference and differential equations can be easily analyzed by looking at the roots of a "characteristic polynomial". In signal processing and a bunch of other applications, one is often interested where in the complex plane the roots of a certain polynomial are, whether they are located within a disk, or to the left or right of a half plane (on the other hand, if one just wants to check for existence of roots in such regions, there are computationally less intensive methods). As lhf mentions, CAGD often relies on the solution of polynomials, one application being in finding intersections of shapes represented as piecewise polynomials.</p> <p>Eigenvalues of matrices, pencils, and matrix polynomials have a wide variety of applications, which I won't go into detail here; just search around.</p> <p>So yes, there'll always be a use for a better way to find roots of polynomials than current methods.</p>
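One concrete link between the two directions mentioned here: numerical root-finders often invert the textbook recipe and compute polynomial roots as eigenvalues of the companion matrix. A Python sketch (the matrix layout follows one standard companion convention):

```python
import numpy as np

# roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1.0, -6.0, 11.0, -6.0]          # monic, highest degree first
n = len(coeffs) - 1

# companion matrix: ones on the subdiagonal, -a0, -a1, ... in the last column
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)
C[:, -1] = [-c for c in coeffs[:0:-1]]

roots = np.sort(np.linalg.eigvals(C).real)
assert np.allclose(roots, [1.0, 2.0, 3.0])
```

The characteristic polynomial of `C` is exactly the input polynomial, so a good eigenvalue algorithm doubles as a polynomial solver.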
<p>I have the following question regarding support vector machines: So we are given a set of training points $\{x_i\}$ and a set of binary labels $\{y_i\}$.</p> <p>Now usually the hyperplane classifying the points is defined as:</p> <p>$ w \cdot x + b = 0 $</p> <p><strong>First question</strong>: Here $x$ does not denote the points of the training set, but the points on the separating (hyper)plane, right?</p> <p>In a next step, the lecture notes state that the function:</p> <p>$f(x) = \mathrm{sign}\, (w \cdot x + b)$</p> <p>correctly classifies the training data.<br> <strong>Second question</strong>: Now I don't understand that, since it was stated earlier that $ w \cdot x + b = 0 $, so how can something which is defined to be zero have a sign?</p> <p>Two additional questions:</p> <p>(1) You have added that a slack variable might be introduced for non-linearly separable data - how do we know that the data is not linearly separable, as far as I understand the mapping via kernel has as a purpose to map the data into a vector space where in the optimal case it would be linearly separable (and why not using a non-linear discriminant function altogether instead of introducing a slack variable?)</p> <p>(2) I've seen that for the optimization only one of the inequalities $w \cdot x + b \geqslant 1$ is being used as a linear constraint - why?</p>
nullgeppetto
<p>I suggest that you thoroughly study a really insightful <a href="http://research.microsoft.com/pubs/67119/svmtutorial.pdf" rel="nofollow">tutorial</a> about Support Vector Machines for Pattern Recognition due to C. Burges. It isn't elementary, but it gives significant information about SV learning. It really helped me as a beginner!</p> <p>Concerning your <strong>first question</strong>: $\mathcal{H}\colon\mathbf{w}\cdot\mathbf{x}+b=0$ describes the locus of points belonging to the hyperplane $\mathcal{H}$. $\mathbf{x}$ denotes an arbitrary vector in your input space, that is, the space where your training samples (i.e. $\mathbf{x}_i$, $i=1,\ldots,l$) live. Often this is some subset of $\mathbb{R}^n$.</p> <p>As for your <strong>second question</strong>, granted that you have learned some separating hyperplane $\mathcal{H}\colon\mathbf{w}\cdot\mathbf{x}+b=0$ (and this is exactly what a linear SVM does), then given some unseen (that is, a new, without truth label) datum $\mathbf{x}_t$, you have to classify it to some class, by giving it a truth label $y_t\in\{\pm1\}$. The way you do so is by checking the sign of the decision function $f(\mathbf{x})=\mathbf{w}\cdot\mathbf{x}+b$, that is, $y_t=\operatorname{sgn}(f(\mathbf{x}_t))$. $\operatorname{sgn}(f(\mathbf{x}_t))&gt;0$ means that your datum is "above" the separating hyperplane and thus has to be classified as positive, while $\operatorname{sgn}(f(\mathbf{x}_t))&lt;0$ means that your datum is "under" the separating hyperplane and thus has to be classified as negative. 
If $\operatorname{sgn}(f(\mathbf{x}_t))=0$ it means that your testing sample belongs to the separating hyperplane, and thus there cannot be any decision about its truth label.</p> <p>Note that if $-1&lt;f(\mathbf{x}_t)&lt;+1$ then the testing datum lies inside the so-called margin of the classifier (take a look at the tutorial above for more information).</p> <p><strong>EDIT</strong></p> <p>I would like to add some comments below motivated by the discussion between @Pegah and me in the comments.</p> <blockquote> <p>I guess because in other contexts the vectors $\mathbf{x}$ in the equation for hyperplanes refer to the points on the hyperplane, but here $\mathbf{x}$ refers to input vectors which are not necessarily on it, which seemed ambiguous to me.</p> </blockquote> <p>This is not correct, actually. $\mathbf{x}\in\mathbb{R}^n$ refers to any point in the Euclidean $n$-dimensional space. Now, if and only if such a point belongs to the hyperplane $\mathcal{H}$, it satisfies the equation $\mathbf{w}\cdot\mathbf{x}+b=0$. In other words, the hyperplane $\mathcal{H}$ is the locus of the $n$-dimensional points that satisfy $\mathbf{w}\cdot\mathbf{x}+b=0$. For all other points it must hold that $\mathbf{w}\cdot\mathbf{x}+b&gt;0$ or $\mathbf{w}\cdot\mathbf{x}+b&lt;0$.</p> <blockquote> <p>I have another question though if you don't mind: I do understand that we can label new input vectors with $-1$ and $+1$, but why (as in the script linked by you) do we assume that $\mathbf{w}\cdot\mathbf{x}+b\leq-1$ and $\mathbf{w}\cdot\mathbf{x}+b\geq+1$ respectively? I don't see that explained anywhere, and am wondering why it could not be $a$ and $-a$ for some $a\in\mathbb{R}$. Is it simply scaling (and are hence both sides of the inequality scaled, i.e. divided by $a$)?</p> </blockquote> <p>The above inequalities are our constraints to state that the training data are linearly separable, which -of course- is a very strict and coarse assumption. 
This leads to the so-called hard-margin SVM, whose solution is a separating hyperplane of the form $\mathcal{H}$. Keep in mind that you will never achieve such a hyperplane if your training data are not linearly separable, since the objective function of the hard-margin SVM (i.e. the dual Lagrangian) will grow arbitrarily large. Then, you need to relax these constraints by introducing the slack variables $\xi_i$, which permit the training data to be non-linearly separable. The reason why the $+1$, $-1$'s appear here is that there is indeed a normalization by the norm of $\mathbf{w}$.</p> <hr> <p>Concerning your <strong>two additional questions</strong>:</p> <blockquote> <p>(1) You have added that a slack variable might be introduced for non-linearly separable data - how do we know that the data is not linearly separable, as far as I understand the mapping via kernel has as a purpose to map the data into a vector space where in the optimal case it would be linearly separable (and why not using a non-linear discriminant function altogether instead of introducing a slack variable?)</p> </blockquote> <p>First of all, you do not know <em>a-priori</em> that your data are linearly separable (in your input space). It is reasonable, though, that you want to solve the problem in both cases. We introduce the slack variables such that a training sample may enter the wrong class, contributing this way to the total loss. What we desire is to minimize this total loss. To be more specific, let $\xi_i\geq0$ be the aforementioned slack variables. When $0\leq\xi_i&lt;1$, then the sample lies inside the margin, but it is not yet misclassified. If $\xi_i=1$ then the sample lies on the separating hyperplane itself (still not misclassified). However, if $\xi_i&gt;1$, then the sample lies in the wrong class. In all the above cases (i.e., whenever $\xi_i&gt;0$ holds) there is an error which must be measured (and minimized!). We measure the total error by summing up all the $\xi_i$'s, that is, $C\sum_{i}\xi_i$. 
$C$ is a user-specified positive parameter (a larger $C$ corresponding to assigning a higher penalty to errors).</p> <p>Now, what about the kernels? We use a kernel because we know that by going to the feature space (i.e., the space where the kernel maps our input space) it is very likely that our data become linearly separable. Then we employ our linear method to classify them (linearly!). What is linear in the feature space is not linear in the original input space (granted that the selected kernel is not the linear kernel $k(\mathbf{x},\mathbf{x}')=\mathbf{x}\cdot\mathbf{x}'$). When we go back to the input space, the decision function we have found in the feature space is not linear.</p> <blockquote> <p>(2) I've seen that for the optimization only one of the inequalities $w \cdot x + b \geqslant 1$ is being used as a linear constraint - why?</p> </blockquote> <p>This is not quite right, actually. The constraints are still $y_i(\mathbf{w}\cdot\mathbf{x}_i+b)+\xi_i\geq1$, $\forall i$.</p>
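The role of the slack variables and of the penalty term $C\sum_i\xi_i$ can be made concrete in a few lines. This is only an illustrative sketch with made-up weights, data, and labels, not part of the discussion above:

```python
import numpy as np

# Illustrative weight vector, bias, points and labels -- made-up values.
w = np.array([1.0, -1.0])
b = 0.0
X = np.array([[2.0, 0.0], [0.0, 2.0], [0.5, 0.0]])
y = np.array([1.0, -1.0, 1.0])

# Slack variables: xi_i = max(0, 1 - y_i (w . x_i + b)).
#   xi_i = 0      -> outside the margin, correctly classified
#   0 < xi_i < 1  -> inside the margin but on the correct side
#   xi_i = 1      -> on the separating hyperplane
#   xi_i > 1      -> misclassified
margins = y * (X @ w + b)
xi = np.maximum(0.0, 1.0 - margins)

C = 10.0
penalty = C * xi.sum()   # the term C * sum_i xi_i that the soft-margin SVM minimizes
print(xi, penalty)       # [0.  0.  0.5] 5.0
```

Here the third point sits inside the margin, so it contributes a nonzero slack even though it is on the correct side.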
855,463
<p>I have the following question regarding support vector machines: So we are given a set of training points $\{x_i\}$ and a set of binary labels $\{y_i\}$.</p> <p>Now usually the hyperplane classifying the points is defined as:</p> <p>$ w \cdot x + b = 0 $</p> <p><strong>First question</strong>: Here $x$ does not denote the points of the training set, but the points on the separating (hyper)plane, right?</p> <p>In a next step, the lecture notes state that the function:</p> <p>$f(x) = \mathrm{sign}\, (w \cdot x + b)$</p> <p>correctly classify the training data.<br> <strong>Second question</strong>: Now I don't understand that, since it was stated earlier that $ w \cdot x + b = 0 $, so how can something which is defined to be zero have a sign?</p> <p>Two additonal questions:</p> <p>(1) You have added that a slack variable might be introduced for non-linearly separable data - how do we know that the data is not linearly separable, as far as I understand the mapping via kernel has as a purpose to map the data into a vector space where in the optimal case it would be linearly separable (and why not using a non-linear discriminant function altogether instead of introducing a slack variable?)</p> <p>(2) I've seen that for the optimization only one of the inequalities $w \cdot x = b \geqslant 1$ is being used as a linear constrained - why?</p>
Whadupapp
162,069
<p>To answer both questions at once: The set $\{x \in \mathbb R^n \mid w'x+b=0\}$ defines the separating hyperplane. This hyperplane separates the space $\mathbb R^n$ into two halfspaces $H_+ = \{x \in \mathbb R^n \mid w'x+b&gt;0\}$ and $H_- = \{x \in \mathbb R^n \mid w'x+b&lt;0\}$.</p> <p>Thus, for an unknown example $x$ computing $f(x)= sign(w'x+b)$ tells you which halfspace (defined by the hyperplane) the example lies in and thus which label should be assigned.</p>
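The halfspace test described above is one line of code; the hyperplane and test points below are made up for illustration:

```python
import numpy as np

# A concrete hyperplane w'x + b = 0 in R^2 (illustrative values).
w = np.array([1.0, 2.0])
b = -3.0

def f(x):
    # sign(w'x + b): +1 in the halfspace H_+, -1 in H_-, 0 on the hyperplane
    return int(np.sign(w @ x + b))

print(f(np.array([4.0, 4.0])))  # w'x + b = 9  -> 1
print(f(np.array([0.0, 0.0])))  # w'x + b = -3 -> -1
print(f(np.array([1.0, 1.0])))  # w'x + b = 0  -> 0
```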
297,251
<p>In his well-known <a href="http://matwbn.icm.edu.pl/ksiazki/fm/fm101/fm101110.pdf" rel="nofollow noreferrer">paper</a> Bellamy constructs an indecomposable continuum with exactly two composants. The setup is as follows:</p> <p>We have an inverse system $\{X(\alpha); f^\alpha_\beta: \beta,\alpha &lt; \omega_1\}$ of metric indecomposable continua and retractions. For each $X(\alpha)$ a composant $C(\alpha) \subset X(\alpha)$ is specified and each $C(\alpha)$ maps into $C(\beta) $. </p> <p>The inverse limit $X$ has exactly two composants. The first is the union $\bigcup\{X(\beta): \beta &lt; \omega_1\}$ where we identify $X(\beta)$ with the set of sequences $(x_\alpha) \in X$ with $x_\beta = x_{\beta+1} = x_{\beta+2} = \ldots . $ The second composant is the inverse limit $\{C(\alpha); f^\alpha_\beta: \beta,\alpha &lt; \omega_1\}$. Observe there is no reason <em>a priori</em> for the second composant to be nonempty. However I do not believe an example is known.</p> <p>My question is an easier one. Can you think of an example of a metric continuum $M$ and an $\omega_1$-indexed decreasing nest of dense semicontinua <strong>with empty intersection</strong>? We call a set $S \subset M$ a <em>semicontinuum</em> if for each $x,y \in S$ there exists a continuum $K$ with $\{x,y\} \subset K \subset S$.</p> <p>If the second composant were empty, the family $\{f^\alpha_0(C(\alpha)): \alpha &lt; \omega_1\}$ would be such a nest for $M = X_0$.</p> <p>If we index by $\omega$ instead, an example is easy to come by. Let $M$ be the unit disc and $Q = \{q_1,q_2, \ldots\}$ an enumeration of the rational points on the boundary. Let $S(n)$ be formed by drawing the straight line segment from each element of $\{q_n,q_{n+1}, \ldots\}$ to each rational point of $(0,1/n) \times \{0\}$. 
Then add in $(0,1/n) \times \{0\}$ itself to make the space a semicontinuum.</p> <p>Indexing by $\omega_1$ must somehow get around the fact that any $\omega_1$-indexed decreasing nest of closed subsets of a metric continuum eventually stabilizes.</p> <p>It feels like this would be easier if we assume the Continuum Hypothesis.</p>
Balazs
6,107
<p>The author of <a href="http://people.bath.ac.uk/masgks/Theses/kasprzyk.pdf" rel="nofollow noreferrer">this</a> and <a href="https://arxiv.org/pdf/math/0311284.pdf" rel="nofollow noreferrer">this</a> might be able to send you lists of vertices, from which you can generate the pictures yourself, if you ask nicely. </p>
3,625,739
<p>My question is from Humphreys <em>Introduction to Lie Algebras and Representation Theory</em>. There is a lemma in section 6.3 which says, if <span class="math-container">$\phi: L \to \mathfrak{gl}(V)$</span> is a representation of a semisimple Lie algebra <span class="math-container">$L$</span> then <span class="math-container">$\phi(L) \subset \mathfrak{sl}(V)$</span> and in particular <span class="math-container">$L$</span> acts trivially on any one-dimensional <span class="math-container">$L$</span>-module.<br> I understand how the first part of the statement follows but not why <span class="math-container">$L$</span> acts trivially on any one-dimensional <span class="math-container">$L$</span>-module. First, "acts trivially" is never defined. Does this mean <span class="math-container">$x \cdot v = \phi(x)v = 0$</span> for all <span class="math-container">$x \in L$</span> and <span class="math-container">$v \in V$</span>? If so how do we get from <span class="math-container">$\phi(x)$</span> is a trace zero matrix to <span class="math-container">$\phi(L)(V) = 0$</span> where <span class="math-container">$V$</span> is a one-dimensional <span class="math-container">$L$</span>-module?</p>
wasatar
641,315
<p>I like the answer above, but I just realized this is very clear: <span class="math-container">$\mathfrak{sl}(V)$</span> consists exactly of the trace-zero matrices, and when <span class="math-container">$\dim V=1$</span> we just have <span class="math-container">$\mathfrak{sl}(V)=0$</span>.</p>
2,416,910
<p>Let $A:=\{-k,\ldots,-2,-1,0,1,2,\ldots,k\}$, $k&lt;\infty$, $k\in \mathbb{N}$.</p> <p>Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be such that $f(-x)=-f(x)$ for all $x\in A$ and $f(x+y)=f(x)+f(y)$ for all $x,y\in A$ such that $x+y\in A$.</p> <p>Does it follow that there is $\alpha\neq 0$ such that $f(x)=\alpha x$ for all $x\in A$?</p>
Davide Giraudo
9,849
<p>Let <span class="math-container">$X$</span> be a random variable taking the value <span class="math-container">$a$</span> with probability <span class="math-container">$p$</span> and <span class="math-container">$-a$</span> with probability <span class="math-container">$1-p$</span>, with <span class="math-container">$a&gt;0$</span> and <span class="math-container">$0&lt;p&lt;1$</span> to be specified later. If <span class="math-container">$$\tag{*} \mathbb E\left[\psi_2\left(\lvert X-\mathbb EX\rvert\right)\right]&gt;1\geqslant \mathbb E\left[\psi_2\left(\lvert X \rvert\right)\right], $$</span> then <span class="math-container">$\lVert X-\mathbb EX\rVert_{\psi_2}&gt;\lVert X\rVert_{\psi_2}$</span>. Let us compute the quantities involved in <span class="math-container">$(*)$</span>. First, observe that <span class="math-container">$\lvert X \rvert=a$</span> hence <span class="math-container">$\mathbb E\left[\psi_2\left(\lvert X \rvert\right)\right]=\psi_2(a)$</span>. Moreover, <span class="math-container">$$ \mathbb EX= pa+(1-p)(-a)=2pa-a $$</span> hence <span class="math-container">$$ \mathbb E\left[\psi_2\left(\lvert X-\mathbb EX\rvert\right)\right] =p\psi_2\left(\left\lvert a-(2pa-a) \right\rvert\right)+(1-p)\psi_2\left(\left\lvert -a-(2pa-a) \right\rvert\right)\\= p\psi_2\left( 2a (1-p) \right)+(1-p)\psi_2\left( 2a p \right). $$</span> In order to fulfill (*), we thus need <span class="math-container">$$ p\psi_2\left( 2a (1-p) \right)+(1-p)\psi_2\left( 2a p \right)&gt;1\geqslant\psi_2(a). $$</span> We have to choose <span class="math-container">$a$</span> as large as possible, hence the choice <span class="math-container">$a=\sqrt{\ln 2}$</span>. Then we have to find <span class="math-container">$p$</span> such that <span class="math-container">$$ f(p):= p\exp\left(4(1-p)^2\ln 2\right)+(1-p)\exp\left(4 p^2\ln 2\right)&gt;2. 
$$</span> Letting <span class="math-container">$p=1/4$</span> gives <span class="math-container">$$f(1/4)= \frac{7}{4} \times 2^{1/4}&gt;2.$$</span></p>
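The final inequality can be confirmed numerically; the function $f$ below is exactly the one defined in the answer, evaluated at $p=1/4$:

```python
import math

def f(p):
    # f(p) = p*exp(4(1-p)^2 ln 2) + (1-p)*exp(4 p^2 ln 2), as in the answer
    return p * math.exp(4 * (1 - p) ** 2 * math.log(2)) \
         + (1 - p) * math.exp(4 * p ** 2 * math.log(2))

val = f(0.25)
closed_form = (7 / 4) * 2 ** 0.25
print(val, closed_form)   # both approximately 2.0813..., strictly greater than 2
```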
50,479
<p>Can the number of solutions of $xy(x-y-1)=n$ for $x,y,n \in \mathbb{Z}$ be unbounded as $n$ varies?</p> <p>$x,y$ are integral points on an elliptic curve and are easy to find by enumerating the divisors of $n$ (assuming $n$ can be factored).</p> <p>If yes, will a large number of solutions give an elliptic curve of moderate rank?</p> <p>If one drops the $-1$, i.e. $xy(x-y)=n$, the number of solutions can be unbounded via multiples of rational point(s) and then multiplying by a cube. (Explanation): Another unbounded case for varying $a, n$ is $xy(x-y-a)=n$. If $(x,y)$ is on the curve then $(d x,d y)$ is on $xy(x-y-a d)=n d^3$. Find many rational points and multiply by a suitable $d$. Not using the group law seems quite tricky to me. The constant $-1$ was included on purpose in the initial post.</p> <p>I would be interested in this computational experiment: find an $n$ that gives a lot of solutions, say $100$ (I can't do it), and check which points are linearly independent; this gives a lower bound on the rank.</p> <p>What I find intriguing is that <strong>all integral points</strong> in this model come from factorization/divisors only.</p> <p><strike> Current record is n=<strong>179071200</strong> with 22 solutions with positive x,y. Due to Matthew Conroy.</p> <p>Current record is n=<strong>391287046550400</strong> with 26 solutions with positive x,y. Due to Aaron Meyerowitz</p> <p>Current record is n=<strong>8659883232000</strong> with 28 solutions with positive x,y. Found by Tapio Rajala. </strike></p> <p>Current record is n=<strong>2597882099904000</strong> with 36 solutions with positive x,y. Found by Tapio Rajala.</p> <p>EDIT: $ab(a+b+9)=195643523275200$ has 48 positive integer points. – Aaron Meyerowitz (<em>note this is a different curve and 7 &lt;= rank &lt;= 13</em>)</p> <p>A variation: $(x^2-x-17)^2 - y^2 = n$ appears to be eligible for the same question. 
The quartic model is a difference of two squares, and checking whether the first square is of the form $x^2-x-17$ is easy.</p> <p>Is it possible for some relation among the primes or divisors of a certain form to produce records? Someone is trying in $\mathbb{Z}[t]$: <a href="https://mathoverflow.net/questions/51193/can-the-number-of-solutions-xyx-y-1n-for-x-y-n-in-zt-be-unbounded-as-n">Can the number of solutions xy(x−y−1)=n for x,y,n∈Z[t] be unbounded as n varies?</a> I read an article I didn't quite understand about maximizing the Selmer rank by choosing the primes carefully.</p> <p>EDIT: The curve was chosen at random just to give a clear computational challenge.</p> <p>EDIT: On second thought, can a symbolic approach work? Set $n=d_1 d_2 ... d_k$ where the $d_i$ are variables. Pick, say, some 100 pairs ($d_i$, $y_i$) for ($x$,$y$) (or a product of $d_i$ for $x$). The result is a nonlinear system (last time I tried this I failed to make it work in practice).</p> <p>EDIT: A related search term seems to be the <strong>"Thue-Mahler" equation</strong></p> <p>Related: <a href="https://mathoverflow.net/questions/50661/unboundedness-of-number-of-integral-points-on-elliptic-curves">unboundedness of number of integral points on elliptic curves?</a></p> <p>Crossposted on MATH.SE: <a href="https://math.stackexchange.com/questions/14932/can-the-number-of-solutions-xyx-y-1-n-for-x-y-n-in-z-be-unbounded-as-n">https://math.stackexchange.com/questions/14932/can-the-number-of-solutions-xyx-y-1-n-for-x-y-n-in-z-be-unbounded-as-n</a></p>
Aaron Meyerowitz
8,008
<p>$n=938995200$ also has $22$ solutions. It might be nicer to put $a=y$ and $b=x-y-1$ so $x=a+b+1$ and one has $ab(a+b+1)=n$. For $n=391287046550400$ there are $26$ solutions with $a,b&gt;0$ which grows to $78$ if one allows negative values and shrinks to $13$ if $[a,b]$ and $[b,a]$ are considered the same (and must be positive). </p> <p>It would seem reasonable that if one chose a number $n$ with "lots" of factors relative to the size of $n$ then any of the curves $ab(a+b+j)=n$ with a "small" j would have a fair number of points and at least some of them would have a large number of points (so one could start with $n$ and look for the most fruitful $j$). That could be made more precise (at least with regard to expectation), maybe not by me though. </p> <p><strong>later</strong> The following sounded plausible but does not turn out to work that well</p> <p>This suggests seeking $n$ from the <a href="http://oeis.org/A002182" rel="nofollow">highly composite numbers</a> (more divisors than any smaller number.) In fact $391287046550400$ is on that list! (Although I did not know that when I found it) However I tried $n=106858629141264000$ from further down the <a href="http://oeis.org/A002182/b002182.txt" rel="nofollow">longer list</a> linked there and only found two points for $j=1$. I did not look at other $j$. </p> <p><strong>Continued</strong> I found $391287046550400$ by looking for products $ab(a+b+1)=n$ with all prime factors under 30 (and no prime over 7 repeated in $n$), and looking for $n$ which turned up frequently. Then I decided to look at the hcn and found that $n$ on the list. 
However it appears that up to about $1.7 \times 10^{28}$ (which is something like the first 260 such) the appropriate curve has 26 positive points in that one case, 14 in another, and 12 and 10 just a handful of times.</p> <p>Among those $n$ values, the curve $ab(a+b-7)=481880599200$ has $28$ positive integer points and the curve $ab(a+b+9)=195643523275200$ has $48$ positive integer points, but those are the only ones $ab(a+b+j)=n$ which beat $ab(a+b+1)=391287046550400$ with a smaller $n$ and $|j|&lt;50$.</p> <p><strong>even later THEN UPDATED</strong> Consider these four integers</p> <p>$$\begin{eqnarray} 2888071057872000=&amp;&amp;2^7\cdot 3^3\cdot 5^3\cdot 7\cdot 11\cdot 13\cdot 17\cdot 19\cdot 23\cdot 29\cdot 31\\ 8659883232000=&amp;&amp;2^8\cdot 3^3\cdot 5^3\cdot 7\cdot 11\cdot 13\cdot 17\cdot 19\cdot 31\\ 32607253879200=&amp;&amp;2^{5}\cdot3^{3}\cdot5^{2}\cdot7^{2}\cdot11\cdot 13\cdot 17\cdot 19\cdot 23\cdot 29\\ 1248124550400=&amp;&amp;2^{8}\cdot 3^{3}\cdot 5^{2}\cdot 7^{2}\cdot 13\cdot 17\cdot 23\cdot 29 \end{eqnarray}$$</p> <p>The first has $32,768$ divisors, which makes it an hcn since every smaller integer has fewer. The second is the excellent value of $n$ found by Tapio which makes $ab(a+b+1)=n$ have 28 positive solutions. The third is also an hcn, and the fourth makes $ab(a+b-1)=n$ have 28 positive solutions (or if you prefer, $xy(x-y+1)=n$, an equation which seems as hard as the chosen one). That is where I would look for similar examples: $n$ an hcn, maybe modified by putting in or taking out a couple of large primes and fiddling with the exponents of the smallest primes. Brute-force calculations will not prove anything, of course.</p>
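For small $n$, the counts of positive integer points on $ab(a+b+1)=n$ can be checked by enumerating divisors, as suggested in the question. A brute-force sketch (far too slow for the record values quoted above, which need a smarter search):

```python
def divisors(n):
    # All positive divisors of n, via trial division up to sqrt(n).
    ds = []
    d = 1
    while d * d <= n:
        if n % d == 0:
            ds.append(d)
            if d != n // d:
                ds.append(n // d)
        d += 1
    return ds

def positive_points(n):
    # Count ordered pairs (a, b) of positive integers with a*b*(a+b+1) == n:
    # a must divide n, and b must divide n // a with b*(a+b+1) == n // a.
    count = 0
    for a in divisors(n):
        m = n // a
        count += sum(1 for b in divisors(m) if b * (a + b + 1) == m)
    return count

print(positive_points(3))   # 1: only (a, b) = (1, 1)
print(positive_points(8))   # 2: (1, 2) and (2, 1)
```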
4,290,960
<p>In machine learning (and not only), it is very common to see concatenation of different feature vectors into a single one of higher dimension which is then processed by some function. For example, feature vectors computed for an image at different scales are concatenated to form a multi-scale feature vector which is then further processed.</p> <p>However, combining vectors by concatenation seems somehow artificial to me (we simply stack them and then use a function that operates on a higher-dimensional space):</p> <p><span class="math-container">$$\mathbf{z} = \mathbf{v} \oplus \mathbf{w} = [v_1, \dots, v_n]^T \oplus [w_1, \dots, w_m]^T = [v_1,\dots, v_n, w_1,\dots, w_m]^T \in \mathbb{R}^{n+m},$$</span></p> <p><span class="math-container">$$f(\mathbf{z}): \mathbb{R}^{n+m} \to \mathbb{R}^{k}.$$</span></p> <p>First, I would like to ask if there is a formal definition of concatenation as a mapping to a higher-dimensional space (perhaps in the form of a matrix multiplication). What can be said about the space where the concatenated vectors live? In particular, if the second vector is fixed, the points represented by the first vector will be mapped to a higher-dimensional space, but they will be confined to the subspace <span class="math-container">$\mathbb{R}^{n}\subset \mathbb{R}^{n+m}$</span> perpendicular to the rest of the axes <span class="math-container">$n+1,\dots,n+m$</span>. It's like manifold embedding.</p> <p>Finally, I was wondering if there are alternatives to concatenation for effective combination of feature vectors?</p>
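The concatenation described above can indeed be written with matrix multiplications: $\mathbf{z}=P\mathbf{v}+Q\mathbf{w}$, where $P$ stacks the identity $I_n$ on top of a zero block and $Q$ stacks a zero block on top of $I_m$. A small sketch (the matrices $P$, $Q$ are one concrete formalization, not the only one):

```python
import numpy as np

n, m = 3, 2
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0])

# Concatenation as a sum of two linear injections into R^(n+m):
# z = P v + Q w,  with P = [I_n; 0] and Q = [0; I_m].
P = np.vstack([np.eye(n), np.zeros((m, n))])
Q = np.vstack([np.zeros((n, m)), np.eye(m)])
z = P @ v + Q @ w

print(z)  # [1. 2. 3. 4. 5.]
assert np.array_equal(z, np.concatenate([v, w]))
```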
Ewan Delanoy
15,381
<p>If we solve the quadratic in <span class="math-container">$c$</span>, we obtain</p> <p><span class="math-container">$$ c=ab \pm \sqrt{(a^2-1)(b^2-1)} \tag{1} $$</span></p> <p>We see that <span class="math-container">$(a^2-1)(b^2-1)$</span> is a perfect square. Let <span class="math-container">$d \gt 0$</span> be the square-free kernel of <span class="math-container">$a^2-1$</span>, so that <span class="math-container">$a^2-1=dA^2$</span> for some positive integer <span class="math-container">$A$</span>. Then <span class="math-container">$b^2-1=dB^2$</span> for some positive integer <span class="math-container">$B$</span>.</p> <p>If <span class="math-container">$x_0^2-dy_0^2=1$</span> is the fundamental solution of <span class="math-container">$x^2-dy^2=1$</span>, then we have two exponents <span class="math-container">$n$</span> and <span class="math-container">$m$</span> such that</p> <p><span class="math-container">$$ a+A\sqrt{d}=z_0^n, b+B\sqrt{d}=z_0^m \tag{2} $$</span> where <span class="math-container">$z_0=x_0+y_0\sqrt{d}$</span>.</p> <p>Multiplying or dividing the two relations above, we obtain</p> <p><span class="math-container">$$ ab+dAB +(aB+bA)\sqrt{d} = z_0^{n+m}, \quad ab-dAB +(-aB+bA)\sqrt{d} = z_0^{n-m} \tag{3} $$</span></p> <p>If <span class="math-container">$n$</span> is even, say <span class="math-container">$n=2p$</span> for some integer <span class="math-container">$p$</span>, we can write <span class="math-container">$a+A\sqrt{d}=z_1^2$</span> where <span class="math-container">$z_1=z_0^p$</span>. If we write <span class="math-container">$z_1=x_1+y_1\sqrt{d}$</span>, then <span class="math-container">$x_1^2-dy_1^2=1$</span>, and <span class="math-container">$a=x_1^2+dy_1^2=2x_1^2-1$</span>, so <span class="math-container">$\frac{a+1}{2}=x_1^2$</span> is a perfect square.</p> <p>Similarly, if <span class="math-container">$m$</span> is even we deduce that <span class="math-container">$\frac{b+1}{2}$</span> is a perfect square. 
Finally, if <span class="math-container">$n$</span> and <span class="math-container">$m$</span> are both odd, then <span class="math-container">$n\pm m$</span> are both even and we deduce from (3) and (1) that <span class="math-container">$\frac{c+1}{2}$</span> is a perfect square. This finishes the proof.</p>
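The question statement is not reproduced above, but formula (1) is the root formula of the quadratic $c^2-2abc+(a^2+b^2-1)=0$; assuming that is the intended equation, the identity can be checked numerically:

```python
import math

# Numerical check that formula (1) gives the roots of the (assumed) quadratic
# c^2 - 2ab*c + (a^2 + b^2 - 1) = 0.
for a in range(2, 20):
    for b in range(2, 20):
        disc = (a * a - 1) * (b * b - 1)
        for sign in (1, -1):
            c = a * b + sign * math.sqrt(disc)
            assert abs(c * c - 2 * a * b * c + a * a + b * b - 1) < 1e-6
print("formula (1) solves c^2 - 2abc + a^2 + b^2 - 1 = 0 for all tested a, b")
```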
51,903
<p>Does anyone know of a tool which</p> <ol> <li>Can display formulas neatly, preferably like this website without hassle. (Unlike Wikipedia with <code>:&lt;math&gt;</code>) </li> <li>Has a wiki-like structure: i.e. categories of pages, individual articles with hyperlinked sections, subsections, etc. </li> <li>Preferably can be used online and does not require installation of some software.</li> <li>Comes with a free host, i.e. for people with little money and no university server.</li> </ol> <p>So basically I need a notebook on steroids :)</p> <p><em>Update</em>: Edited this question to remove the essay I wrote on blogs. You may refer to the revision history if you're interested. I am keeping it open in case anyone knows of further alternatives beyond those already mentioned. At present I have settled on and am fairly content with Drupal.</p>
kuch nahi
8,365
<p>After spending a lot of time searching I have finally set it up. The answer is <a href="http://drupal.org/drupal-7.0" rel="nofollow noreferrer">Drupal 7</a>. It addresses all complaints except one (it requires an offline installation). I am currently maintaining it locally (with remote desktop) but I will probably get one of the free hosts when I choose to upload it. </p> <ol> <li>The equations look gorgeous (easy install for the MathJax module). For example:</li> </ol> <p><img src="https://i.stack.imgur.com/mPvsj.png" alt="enter image description here"></p> <p>2. It is possible to have a blog + a wiki.</p> <p>3. The learning curve is not so steep.</p> <p>I had actually installed MediaWiki and converted many of my LaTeX files to wikitext, but this was so good I abandoned the MediaWiki project. I post this here in case someone else finds it useful.</p>
180,323
<p>Apologies if this is a naive question. Consider two Galois extensions, K and L, of the rational numbers. For each extension, consider the set of rational primes that split completely in that extension, say Split(K) and Split(L).</p> <p>If Split(K) = Split(L), then is it necessarily true that K and L are isomorphic as Galois extensions of the rationals?</p> <p>If so, for a given set of rational primes, S, is there a way to construct the extension over which S is the set of completely split primes?</p> <p>References welcomed! Thanks, Martin</p>
Ari Shnidman
949
<p>For your first question, it is true that $L$ and $K$ are isomorphic. This follows from the Chebotarev density theorem, for example. </p> <p>For the second question, it's not really known how to do this in general. But class field theory lets you describe the sets $S$ which arise from abelian extensions (over any number field even, not just $\mathbb{Q}$). It's the non-abelian extensions that are harder to understand.</p> <p>A reference for all of this is the introduction to Milne's <a href="http://www.jmilne.org/math/CourseNotes/cft.html" rel="nofollow">notes</a> on class field theory. </p>
9,880
<p>The ban on edits that change only a few characters can be quite annoying in some cases. I don't have a problem with trying to keep trivial spelling corrections and such out of the peer review queue, but sometimes a small edit is semantically significant. For example, it makes perfect sense to edit an otherwise good question/answer to change a $0$ to a $1$ or to change $a\implies b\implies c$ to $(a\implies b)\implies c$.</p>
Community
-1
<p>In most cases, if there is one error to fix there are more of them in the post, and you should fix them all. Each small edit bumps the post to the front page, and too many unnecessary edits are harmful due to that.</p> <p>The six-character minimum is only active for suggested edits, which additionally require review by other community members. These edits have an even higher cost due to the necessary review by other users. Of course this rule might sometimes prevent a useful edit, but it is a trade-off against the potential unnecessary bumping and additional reviews necessary for those small edits.</p> <p>If you see a small mistake you want to fix, look for other problems with the post you could fix. And once you have 2k reputation you can fix those significant mistakes that only require a few characters to change.</p>
481,673
<p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p> <p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
Arash
92,185
<p>It is much more complicated than it seems. If you put $x=y=0$, you get either $g(0)=0$ or $g(0)=2$. If $g(0)\neq 0$ then put $y=0$ in the equation and you get, for all other $x$, $g(x)=2$.</p> <p>First solution: $g(x)=2$</p> <p>Now if $g(0)=0$ there are other possibilities. Choose $x=y=2$ and then either $g(2)=0$ or $g(2)=2$. </p> <p>If $g(2)=0$, then either $g(1)=0$ or $g(1)=3$. For $g(1)=0$, you get $g(n)=0$ for all $n\in\mathbb{Z}$. Also you get: $$ g(x+n)=g(nx)+g(x)\rightarrow g(x+1)=2g(x) \rightarrow g(x+n)=2^ng(x) $$ On the other hand you have: $$ g(mx)=(2^m-1)g(x) $$ Now write $g(2x+2n)$ in two ways: $$ g(2x+2n)=2^{2n}g(2x)=2^{2n}3g(x)\text{ or } g(2x+2n)=3g(x+n)=2^{n}3g(x). $$ Now for all $n$ we have $2^{n}3g(x)=2^{2n}3g(x)$, which means $g(x)=0$. So you get:</p> <p>Second solution: $g(x)=0$ for $g(0)=g(2)=g(1)=0$.</p> <p>Now if $g(1)=3$, we can see that $g(x+1)+g(x)=3$, which gives $g(2k)=0$ and $g(2k+1)=3$. Also $g(x)=g(x+2)=g(x+2n)$. Moreover, similarly to before: $$ g(x+2n)=g(2nx)+g(x)\rightarrow g(x+2)=g(2x)+g(x) \rightarrow g(2x)=0. $$ So for all $x$, $g(2x)=0$, which is in contradiction with $g(1)=3$ (choose $x=0.5$). So $g(1)$ cannot be $3$. </p> <p>Now we turn to the case where $g(0)=0$ and $g(2)=2$. By a similar argument, $g(1)$ is either $1$ or $2$. The case $g(1)=2$ leads to a contradiction in a similar way (using $x=2$), and so you should consider $g(1)=1$, which by induction gives $g(n)=n$. To prove it for rationals, start by observing that $g(n+x)=n+g(x)$. Then choose $x=n$ and $y=\frac{1}{n}$, which gives you $g(\frac{1}{n})=\frac{1}{n}$. Finally it can be shown that $g(\frac{m}{n})=\frac{m}{n}$, and then you can show that $g(x)\neq x$ leads to a contradiction:</p> <p>Third solution: $g(x)=x$.</p>
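The three claimed solutions (and the failure of a non-solution) can be sanity-checked numerically on a grid of sample points; this is of course no substitute for the case analysis above:

```python
def check(g):
    # Numerically test g(x+y) + g(x)g(y) = g(xy) + g(x) + g(y) on a grid of
    # sample points -- a sanity check, not a proof.
    pts = [-3.0, -1.5, -1.0, 0.0, 0.5, 1.0, 2.0, 3.5]
    return all(
        abs(g(x + y) + g(x) * g(y) - g(x * y) - g(x) - g(y)) < 1e-9
        for x in pts for y in pts
    )

print(check(lambda x: 0))      # True
print(check(lambda x: 2))      # True
print(check(lambda x: x))      # True
print(check(lambda x: x * x))  # False: x^2 is not a solution
```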
481,673
<p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p> <p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
Caleb Stanford
68,107
<p>Here's a much less murky solution with strong influence from Calvin Lin's excellent answer. $$ g(x + y) + g(x)g(y) = g(xy) + g(x) + g(y) \tag{0} $$ First of all, plugging in $y = 1$ gives $$ g(x + 1) + g(x)g(1) = 2g(x) + g(1)$$</p> <p>For readability let $g(1) = k$ and we have $$ g(x + 1) = (2-k)g(x) + k \tag{1} $$</p> <p>Plugging in $y + 1$ for $y$ in (0) and reducing with (1): \begin{align*} g(x + y + 1) + g(x)g(y+1) &amp;= g(xy + x) + g(x) + g(y+1) \\ (2-k)g(x + y) + k + g(x) \left[ (2-k)g(y) + k \right] &amp;= g(xy + x) + g(x) + (2-k)g(y) + k \\ (2-k)\left[g(x + y) + g(x) g(y) - g(x) - g(y)\right] + g(x) &amp;= g(xy + x) \end{align*}</p> <p>Using (0) again we conclude $$ g(xy + x) = (2-k)g(xy) + g(x) $$</p> <p>So as long as $x$ is nonzero, letting $z = xy$ we have $$ g(z + x) = (2 - g(1))g(z) + g(x) \tag{2} $$ for all real $x$ and $z$, $x \ne 0$.</p> <p>From here on out we just work with the much cleaner equation (2). Setting $z = 0$ gives $(2 - g(1))g(0) = 0$, so either $g(1) = 2$ or $g(0) = 0$. If $g(1) = 2$, then $g(z + x) = g(x)$ so $g(x) = 2$ everywhere. Otherwise, $g(0) = 0$, so plug in $z = -x$ to (2): $$ 0 = (2 - g(1))g(-x) + g(x) \implies g(x) = (g(1) - 2) g(-x) $$</p> <p>It follows by applying this to $g(-(-x))$ that $g(x) = (g(1) - 2)^2 g(x)$. If $g(x)$ is 0 everywhere, this is a solution to the original (0); otherwise, this implies $g(1) - 2 = \pm 1$, so $g(1) = 1$ or $g(1) = 3$.</p> <p>In the case $g(1) = 1$, $g(z + x) = g(z) + g(x)$. Feeding additivity back into (0) gives $g(xy) = g(x)g(y)$ as well, and a function that is both additive and multiplicative on $\mathbb{R}$ is either identically $0$ or the identity; since $g(1) = 1$, we get $g(x) = x$.</p> <p>In the case $g(1) = 3$, $g(z + x) = -g(z) + g(x)$. But exchanging $x$ and $z$, $g(z + x) = -g(x) + g(z)$, so $g$ is uniformly 0 (which we already assumed was not the case).</p>
2,535,656
<p><a href="https://i.stack.imgur.com/sjkal.png" rel="nofollow noreferrer">This is the example for numbers to the 3rd power.</a></p> <p><a href="https://i.stack.imgur.com/kzdHS.png" rel="nofollow noreferrer">Here is the same thing, just with numbers to the 4th power.</a></p> <p>While messing around with numbers I discovered a correlation between $n^x$ and $x!$. What is the reason for this? Is it easily explainable?</p> <p>In the sheets, every column is obtained by taking differences of consecutive numbers in the previous column.</p>
Qiaochu Yuan
232
<p>The relevant keyword here is <a href="https://en.wikipedia.org/wiki/Finite_difference" rel="nofollow noreferrer">finite differences</a>. In general, if <span class="math-container">$f(n)$</span> is a sequence, we can construct from it a new sequence, the <strong>forward difference</strong></p> <p><span class="math-container">$$(\Delta f)(n) = f(n+1) - f(n).$$</span></p> <p>There is also a backward difference. For example, if <span class="math-container">$f(n) = n^2$</span>, then</p> <p><span class="math-container">$$(\Delta f)(n) = (n + 1)^2 - n^2 = 2n + 1.$$</span></p> <blockquote> <p><strong>Exercise #1:</strong> Show that if <span class="math-container">$f(n) = an^d + \dots $</span> is a polynomial of degree <span class="math-container">$d$</span> with leading coefficient <span class="math-container">$a$</span>, then <span class="math-container">$(\Delta f)(n) = d an^{d-1} + \dots$</span> is a polynomial of degree <span class="math-container">$d - 1$</span> with leading coefficient <span class="math-container">$da$</span>.</p> <p><strong>Exercise #2:</strong> Use induction on exercise #1 to show that if <span class="math-container">$f(n) = n^d$</span>, then <span class="math-container">$\Delta^d f$</span> (the result of taking the forward difference <span class="math-container">$d$</span> times) is the constant sequence with constant value <span class="math-container">$d!$</span>.</p> </blockquote> <p>If you've taken calculus this should remind you a lot of what happens when you differentiate polynomials. There are several more very nice things to say about finite differences but this is enough to explain your observation.</p>
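Exercise #2 can be checked directly in code; the sketch below applies the forward difference $d$ times to $f(n)=n^d$ for $d=4$:

```python
import math

def forward_difference(f):
    # (Delta f)(n) = f(n + 1) - f(n)
    return lambda n: f(n + 1) - f(n)

d = 4
f = lambda n: n ** d
for _ in range(d):                 # apply Delta a total of d times
    f = forward_difference(f)

print([f(n) for n in range(5)])    # constant sequence: [24, 24, 24, 24, 24] = 4!
assert all(f(n) == math.factorial(d) for n in range(10))
```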
2,535,656
<p><a href="https://i.stack.imgur.com/sjkal.png" rel="nofollow noreferrer">This is the example for numbers to the 3rd power.</a></p> <p><a href="https://i.stack.imgur.com/kzdHS.png" rel="nofollow noreferrer">Here is the same thing, just with numbers to the 4th power.</a></p> <p>While messing around with numbers I discovered a correlation between $n^x$ and $x!$. What is the reason for this? Is it easily explainable?</p> <p>In the sheets, every column is obtained by taking differences of consecutive numbers in the previous column.</p>
Michael Hardy
11,667
<p>The binomial theorem says</p> <p>$$ (a+b)^x = a^x + xa^{x-1}b + \frac{x(x-1)}2 a^{x-2} b^2 + \frac{x(x-1)(x-2)} 6 a^{x-3} b^3 + \cdots. $$ The differences in the column to the right of the one that lists values of $a^x$ are $$ (a+1)^x - a^x = \left( a^x + x a^{x-1} + \frac{x(x-1)} 2a^{x-2} + \cdots \right) - a^x. $$ Since $a^x$ cancels, you have $xa^{x-1}$ as the highest-degree term in the resulting polynomial function of $a.$</p> <p>Then for the next column, for the same reason you get $x(x-1)a^{x-2}$ as the highest-degree term.</p> <p>And next you get $x(x-1)(x-2)a^{x-3},$ and so on until finally you have $x(x-1)(x-2)\cdots 1 a^{x-x}.$</p>
3,663,267
<p>In a linear algebra class I was given a theorem: If <span class="math-container">$\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\}$</span> is a linearly independent subset of <span class="math-container">$\mathbb{R}^{n}$</span> then <span class="math-container">$ \mathbb{R}^{n}=\operatorname{span}\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\} $</span></p> <p>I understand what this means but it got me thinking - what do we really mean when we say a set is equal to <span class="math-container">$\mathbb{R}^{n}$</span>? If we have the two vectors (1,0,0) and (0,1,0), then they are linearly independent and their span gives us a plane. Now, would we call this span equal to <span class="math-container">$\mathbb{R}^{2}$</span> or is it just isomorphic to <span class="math-container">$\mathbb{R}^{2}$</span>? If we only have the latter, then is any subset of <span class="math-container">$\mathbb{R}^{3}$</span> actually equal to <span class="math-container">$\mathbb{R}^{2}$</span>, or can we only talk about a set being equal to <span class="math-container">$\mathbb{R}^{2}$</span> when we have not explicitly defined vectors that can live in <span class="math-container">$\mathbb{R}^{3}$</span> (as in vectors that have 3 coordinates)?</p>
Naba Kumar Bhattacharya
556,399
<p>Define, <span class="math-container">$e_{ij}$</span> to be the <span class="math-container">$n \times n$</span> matrix having <span class="math-container">$ij$</span>-th entry 1 and all others 0. Try to find out the products of several forms of those matrices. You can refer to the problem 6 of chapter 7.2 (pg 238-239) of Abstract algebra by Dummit and Foote.</p>
416,514
<p>I really think I have no talents in topology. This is a part of a problem from <em>Topology</em> by Munkres:</p> <blockquote> <p>Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$. </p> </blockquote> <p>I always have the feeling that it is easy to understand the problem emotionally but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well, could anyone give me some advice. Thanks so much!</p>
not all wrong
37,268
<p><em>Hint</em>: Compact $\iff$ sequentially compact for metric spaces. Can you construct a sequence which must tend to the $a$ you want to find?</p>
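For intuition only (not a proof): on a concrete compact set one can watch the infimum being attained. The sketch below is my own illustration, with $A$ taken to be the unit circle and $x = (3,0)$; ever-finer samples of $A$ give a minimizing sequence whose limit $(1,0)$ realizes $d(x,A) = 2$.

```python
import math

# Compact set A: the unit circle. d(x, A) = inf over a in A of |x - a|;
# compactness is what lets a minimizing sequence converge to a point
# of A where the infimum is attained.
x = (3.0, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Grid-search stand-in for a minimizing sequence.
best = None
for n in range(1, 2000):
    t = 2 * math.pi * n / 2000
    a = (math.cos(t), math.sin(t))
    if best is None or dist(x, a) < dist(x, best):
        best = a

print(best, dist(x, best))  # close to (1, 0) and 2.0
```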
3,476,181
<p>I'm trying to solve this group theory problem, and I'm really not sure how to approach this. The question is:</p> <p>Show that <span class="math-container">$\mathbb{F}_7[\sqrt{-1}] := \{a+b\sqrt{-1}$</span> | <span class="math-container">$a,b \in \mathbb{F}_7\}$</span> is a ring.</p> <p>I have been stuck on this problem for hours and I really cannot figure it out. </p> <p>This is my progress so far:</p> <p><span class="math-container">$(a+b\sqrt{-1}) + (c+d\sqrt{-1}) = (a+c) + (b+d)\sqrt{-1}$</span></p> <p>I'm not sure what I'm doing, I'll appreciate it if anyone can help me out. Thanks in advance!</p>
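Not the pen-and-paper proof the exercise wants, but as a sanity check the closure axioms can be verified by brute force over all $49 \times 49$ pairs. The encoding of $a + b\sqrt{-1}$ as a pair $(a, b)$ below is my own choice; the multiplication rule uses $(\sqrt{-1})^2 = -1$.

```python
# Elements of F_7[sqrt(-1)] as pairs (a, b) meaning a + b*sqrt(-1),
# with arithmetic mod 7 and (sqrt(-1))^2 = -1.
F7 = range(7)

def add(u, v):
    return ((u[0] + v[0]) % 7, (u[1] + v[1]) % 7)

def mul(u, v):
    # (a + b i)(c + d i) = (ac - bd) + (ad + bc) i
    return ((u[0] * v[0] - u[1] * v[1]) % 7,
            (u[0] * v[1] + u[1] * v[0]) % 7)

elems = [(a, b) for a in F7 for b in F7]

# closure: every sum and product lands back in the 49-element set
closed = all(add(u, v) in elems and mul(u, v) in elems
             for u in elems for v in elems)
print(closed)  # True
```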
acupoftea
726,513
<p>This answer is translated (with small modifications) from <a href="https://www.mimuw.edu.pl/~rytter/TEACHING/JAO/higman.pdf" rel="nofollow noreferrer">here</a>.</p> <p><span class="math-container">$\Sigma$</span> is a finite alphabet.<br> <span class="math-container">$\Sigma^\ast$</span> is the set of finite strings over <span class="math-container">$\Sigma$</span> (<a href="https://en.wikipedia.org/wiki/Kleene_star" rel="nofollow noreferrer">Kleene star</a>).<br> <span class="math-container">$x\preceq y$</span> means that <span class="math-container">$x$</span> is a subsequence of <span class="math-container">$y$</span>.</p> <p>We'll prove that there is no infinite set <span class="math-container">$S \subseteq \Sigma^\ast$</span> such that no element of it is a subsequence of another (Higman's lemma). </p> <p>Assume the thesis is false. Then there is an infinite sequence <span class="math-container">$x_1, x_2,\ldots$</span> such that</p> <ol> <li><span class="math-container">$x_i\in\Sigma^\ast$</span></li> <li><span class="math-container">$i&lt;j \implies \textit{not} (x_i \preceq x_j) $</span> (notice that <span class="math-container">$x_i \succ x_j$</span> is possible)</li> </ol> <p>From infinite sequences meeting the criteria 1-2 take one that's minimal in the sense that <span class="math-container">$|x_1|$</span> is minimal and with <span class="math-container">$|x_1|$</span> fixed <span class="math-container">$|x_2|$</span> is minimal, etc.</p> <p>Take an infinite subsequence <span class="math-container">$x_{i_1}, x_{i_2},\ldots $</span> where the first letter of each element is <span class="math-container">$a$</span> (constant for all elements). Such a subsequence exists: <span class="math-container">$\Sigma$</span> is finite, so some letter is the first letter of infinitely many <span class="math-container">$x_i$</span> (and no <span class="math-container">$x_i$</span> can be the empty string, since the empty string is a subsequence of everything). Remove the first letter from each of those elements, getting the sequence <span class="math-container">$x_{i_1}', x_{i_2}',\ldots $</span>. 
Then, the infinite sequence <span class="math-container">$$x_1, x_2, \ldots, x_{i_1-1}, x_{i_1}', x_{i_2}', x_{i_3}', \ldots$$</span> meets the criteria 1-2 and is "smaller" than <span class="math-container">$x_1, x_2, \ldots$</span>, a contradiction.</p>
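The subsequence relation $\preceq$ used throughout is easy to implement, which makes the antichain condition in the proof concrete. The sketch below is my addition (not from the linked notes); it checks small finite string sets for the antichain property that Higman's lemma rules out for infinite sets.

```python
def is_subsequence(x, y):
    """True iff x can be obtained from y by deleting letters."""
    it = iter(y)
    # 'c in it' consumes the iterator, so letters must appear in order
    return all(c in it for c in x)

def is_antichain(strings):
    return all(not is_subsequence(s, t)
               for s in strings for t in strings if s != t)

print(is_antichain(["ab", "ba"]))   # True: neither embeds in the other
print(is_antichain(["ab", "aab"]))  # False: "ab" is a subsequence of "aab"
```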
176,167
<p>$\newcommand{\scp}[2]{\langle #1,#2\rangle}\newcommand{\id}{\mathrm{Id}}$ Let $f$ and $g$ be two proper, convex and lower semi-continuous functions (on a Hilbert space $X$ or $X=\mathbb{R}^n$) and let $g$ be continuously differentiable. Consequently, the subdifferential $\partial f$ and the gradient $\nabla g$ are monotone operators, i.e. $$ \scp{\nabla g(x)-\nabla g(y)}{x-y}\geq 0 $$ for all $x,y$ and for $u\in\partial f(x)$ and $v\in\partial f(y)$ $$ \scp{u-v}{x-y}\geq 0. $$ My question:</p> <blockquote> <p>Is the operator $x\mapsto x - (\id + \gamma\partial f)^{-1}(x-\gamma\nabla g(x))$ monotone for $\gamma\geq 0$?</p> </blockquote> <p>Note that I ask for all values $\gamma\geq 0$ specifically for large values.</p> <p>An equivalent question is</p> <blockquote> <p>Does for $\gamma\geq 0$ it holds that $$ \scp{(\id + \gamma\partial f)^{-1}(x-\gamma\nabla g(x)) - (\id + \gamma\partial f)^{-1}(y-\gamma\nabla g(y))}{x-y}\leq \|x-y\|^2 $$</p> </blockquote> <p>Thoughts:</p> <p>For $f=0$ and $g=0$ it's clear (for $g=0$ this follows since the "proximal operator" $x\mapsto (\id + \gamma\partial f)^{-1}(x)$ is non-expansive, for $f=0$, the monotonicity of $\gamma\nabla g$ works in the right direction and does the trick). </p> <p>Also for small $\gamma$ (smaller that $2/L$ if $L$ is the Lipschitz constant of $\nabla g$, if is has one) the thing is clear as then the mapping $x\mapsto (\id + \gamma\partial f)^{-1}(x - \gamma\nabla g(x))$ is again non-expasive (and used as iteration in the so-called proximal gradient method). However, for large $\gamma$ I could not make use of any of these observation since using Cauchy-Schwarz for the inner product ruins the estimate then.</p> <p>Moreover all examples I tried numerically (in various dimensions and for various functions $f$ and $g$) suggested that the claim holds.</p> <p>Intuitively, all these together makes me think that the answer to my questions is yes, but I failed to prove it. 
None of the inequalities I found in Bauschke/Combettes' "Convex Analysis and Monotone Operator Theory in Hilbert Spaces" were helpful either. </p>
Connor Mooney
16,659
<p>This inequality is not true. Here is a counterexample:</p> <p>Let ${\bf x},{\bf y} \in \mathbb{R}^2$ with ${\bf y} = 0$ and ${\bf x} = (1,\epsilon)$ for $\epsilon$ small positive. Let $g(x,y) = Cy^2$ with $C(\epsilon)$ very large, chosen say so that $${\bf x} - \nabla g({\bf x}) = (1, -M)$$ for some $M$ large.</p> <p>We now construct $f$ convex so that $\nabla(|x|^2/2 + f)(2,-10) = (1,-M)$ and $\nabla f(0) = 0$, which would violate the desired inequality. We rewrite this as $\nabla f(2,-10) = (-1,10-M)$, $\nabla f(0) = 0$. Take $$f(x,y) = \frac{1}{2}(C_1y^2 + C_2(x+y)^2)$$ for $C_1,C_2$ to be chosen momentarily. Then $$\nabla f(2,-10) = (-8C_2, -10C_1 - 8C_2), \quad \nabla f(0) = 0.$$ Taking $C_2 = 1/8$ and $C_1$ so that the second component is $10-M$ (if $M$ is large then $C_1$ can be chosen positive, making $f$ convex), we are done.</p> <p>The constants in this example are somewhat arbitrary; geometrically, the point is that we can make $g$ "very monotone" in one direction so that the non-expansivity of $(Id + \nabla f)^{-1}$ can't save us.</p>
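One can verify this counterexample numerically. The sketch below (my addition) fixes $\gamma = 1$, $\epsilon = 0.01$, $M = 100$, computes the resolvent of the quadratic $f$ by solving the $2\times 2$ linear system by hand, and checks that the monotonicity inner product is negative.

```python
# Counterexample with concrete numbers: y = 0, x = (1, eps),
# g(u1, u2) = C*u2^2, f(u1, u2) = (C1*u2^2 + C2*(u1 + u2)^2)/2, gamma = 1.
eps = 0.01
M = 100.0
C = (M + eps) / (2 * eps)      # makes x - grad g(x) = (1, -M)
C1 = (M - 11.0) / 10.0         # chosen so that prox_f(1, -M) = (2, -10)
C2 = 1.0 / 8.0

def grad_g(u):
    return (0.0, 2.0 * C * u[1])

def prox_f(v):
    # Solve (I + Hess f) p = v by Cramer's rule;
    # Hess f = [[C2, C2], [C2, C1 + C2]].
    a, b = 1.0 + C2, C2
    c, d = C2, 1.0 + C1 + C2
    det = a * d - b * c
    return ((d * v[0] - b * v[1]) / det,
            (a * v[1] - c * v[0]) / det)

def T(u):
    w = (u[0] - grad_g(u)[0], u[1] - grad_g(u)[1])
    p = prox_f(w)
    return (u[0] - p[0], u[1] - p[1])

x, y = (1.0, eps), (0.0, 0.0)
Tx, Ty = T(x), T(y)
inner = (Tx[0] - Ty[0]) * (x[0] - y[0]) + (Tx[1] - Ty[1]) * (x[1] - y[1])
print(inner)  # about -0.9, so T is not monotone
```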
2,418,181
<p><strong>Question:</strong> Let $P_1,\ldots,P_n$ be propositional variables. When is the statement $P_1 \oplus \cdots \oplus P_n$ true?</p> <p>I'm currently learning the basics of discrete math. I am stuck on this last question of my assignment... not really sure how to go about solving it.</p> <p>I do know that a propositional variable can either be true or false.</p> <p>Thanks</p>
Anurag A
68,092
<p>Idea: You may think of $\oplus$ as acting like mod $2$ addition by taking $T$ as $1$ and $F$ as $0$. Then \begin{align*} T \oplus F &amp;=1+0=1 \pmod{2}\\ T \oplus T &amp;=1+1=0 \pmod{2}\\ F \oplus F &amp;=0+0=0 \pmod{2} \end{align*}</p> <p>Let $k$ be the number of statements that are true among $P_1, P_2, \ldots ,P_n$, so the remaining $n-k$ are false. Then $\oplus_{i=1}^{n}P_i$ is true if and only if $k$ is odd because based on the idea I have suggested above, we can think of $\oplus_{i=1}^{n}P_{i}$ as $\underbrace{1+1+\dotsb+1}_{k} \equiv k \pmod{2}.$</p>
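The mod-$2$ reading can be confirmed by brute force over all truth assignments (a quick sketch of mine, not required for the assignment):

```python
from itertools import product
from functools import reduce
from operator import xor

n = 4
for assignment in product([False, True], repeat=n):
    x = reduce(xor, assignment)         # P1 xor P2 xor ... xor Pn
    odd = sum(assignment) % 2 == 1      # odd number of true P_i?
    assert x == odd
print("XOR of n propositions is true iff an odd number of them are true")
```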
3,684,158
<p>You are given two circles:</p> <p>Circle G: <span class="math-container">$(x-3)^2 + y^2 = 9$</span></p> <p>Circle H: <span class="math-container">$(x+3)^2 + y^2 = 9$</span></p> <p>Two lines that are tangent to the circles at points <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively intersect at a point <span class="math-container">$P$</span> such that <span class="math-container">$AP + BP = 10$</span>.</p> <p>Find the locus of all points <span class="math-container">$P$</span>.</p> <p><a href="https://i.stack.imgur.com/zKl67.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zKl67.png" alt="enter image description here"></a></p> <hr> <p>This problem is solvable if we set point <span class="math-container">$P = (x,y)$</span> and solve the equation <span class="math-container">$AP + BP = 10$</span>. After substituting <span class="math-container">$GP^2 = AP^2 + 3^2$</span> and <span class="math-container">$HP^2 = BP^2 + 3^2$</span>, we get the following equation for an ellipse:</p> <p><span class="math-container">$16x^2 +25y^2 = 625$</span></p> <p>That's a lot of algebra to do, so my question is: What is the geometric reasoning behind why the locus is an ellipse (without using analytic geometry), and are there other elegant proofs that avoid heavy calculation?</p>
K. Miyamoto
991,812
<p>This is not a complete answer to your original question, but there is a nice geometrical explanation on why any point <span class="math-container">$P$</span> on the ellipse <span class="math-container">$16x^2+25y^2=625$</span> satisfies <span class="math-container">$|\overline{AP}|+|\overline{BP}|=10$</span>. This configuration can be obtained by considering a hyperboloid of revolution, two spheres tangent to it, and an intersecting plane.<br/> <a href="https://i.stack.imgur.com/inaW1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/inaW1.png" alt="Figure 1" /></a><br/> Figure 1<br/> <a href="https://i.stack.imgur.com/05GPB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/05GPB.png" alt="Figure 2" /></a><br/> Figure 2<br/><br/> Suppose we have a cylindrical surface <span class="math-container">$S:x^2+y^2=25$</span>, two spheres <span class="math-container">$S_1:x^2+y^2+(z-5)^2=25$</span> and <span class="math-container">$S_2:x^2+y^2+(z+5)^2=25$</span>, and a plane <span class="math-container">$\Pi:3x-4z=0$</span> (See Figure 1). Then your configuration appears on <span class="math-container">$\Pi$</span> as the intersections of <span class="math-container">$S$</span>,<span class="math-container">$S_1$</span>,<span class="math-container">$S_2$</span> and <span class="math-container">$\Pi$</span> (see Figure 2). Let <span class="math-container">$G=S_1 \cap \Pi$</span>, <span class="math-container">$H=S_2 \cap \Pi$</span>, <span class="math-container">$E=S \cap \Pi$</span>. <span class="math-container">$C_1=S \cap S_1$</span>, and <span class="math-container">$C_2=S \cap S_2$</span>. Take any point <span class="math-container">$P$</span> on <span class="math-container">$E$</span>. 
Let <span class="math-container">$A$</span> (<span class="math-container">$B$</span> <em>resp.</em>) be the point of tangency of a tangent line from <span class="math-container">$P$</span> to <span class="math-container">$G$</span> (<span class="math-container">$H$</span> <em>resp.</em>). Draw a generator line <span class="math-container">$l$</span> of <span class="math-container">$S$</span> passing through <span class="math-container">$P$</span>. Denote the intersection of <span class="math-container">$l$</span> and <span class="math-container">$C_1$</span> (<span class="math-container">$C_2$</span> <em>resp.</em>) as <span class="math-container">$A'$</span> (<span class="math-container">$B'$</span>, <em>resp.</em>). <br/><br/> Because both <span class="math-container">$\overline{AP}$</span> and <span class="math-container">$\overline{A'P}$</span> are tangent line segments from <span class="math-container">$P$</span> to <span class="math-container">$S_1$</span>, they have the same length (see Figure 3; note that the two triangles <span class="math-container">$PAO_{S1}$</span> and <span class="math-container">$PA'O_{S1}$</span> are congruent, where <span class="math-container">$O_{S1}$</span> is the center of <span class="math-container">$S_1$</span>). Thus <span class="math-container">$|\overline{AP}|=|\overline{A'P}|$</span>. Similarly, with <span class="math-container">$S_2$</span>, <span class="math-container">$|\overline{BP}|=|\overline{B'P}|$</span>. Trivially <span class="math-container">$|\overline{A'B'}|=|\overline{A'P}|+|\overline{B'P}|$</span> is constant. Therefore, we can conclude that <span class="math-container">$|\overline{AP}|+|\overline{BP}|$</span> is also constant. 
So while this is not a rigorous proof of your original question because we need to consider the converse to this proposition, you can intuitively see why conic sections and two double contact circles have such a property.<br/><a href="https://i.stack.imgur.com/1I7Fy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1I7Fy.png" alt="Figure 3" /></a><br/>Figure 3<br/><br/>By the way, let <span class="math-container">$\Pi_1$</span> (<span class="math-container">$\Pi_2$</span> <em>resp.</em>) be the plane containing <span class="math-container">$C_1$</span> (<span class="math-container">$C_2$</span> <em>resp.</em>), and let <span class="math-container">$d_1=\Pi \cap \Pi_1$</span> and <span class="math-container">$d_2=\Pi \cap \Pi_2$</span>. Denote the foot of the perpendicular line from <span class="math-container">$P$</span> to <span class="math-container">$d_1$</span> (<span class="math-container">$d_2$</span> <em>resp.</em>) as <span class="math-container">$C$</span> (<span class="math-container">$D$</span> <em>resp.</em>) Then,<br/><span class="math-container">$$\frac{|\overline{AP}|}{|\overline{CP}|}=\frac{|\overline{BP}|}{|\overline{DP}|}=e$$</span> ,where <span class="math-container">$e$</span> is the eccentricity of <span class="math-container">$E$</span>. This indicates that <span class="math-container">$d_1$</span> and <span class="math-container">$d_2$</span> have a property analogous to the directrix of a conic. 
Compare Figure 1 with Figure 4, where two spheres <span class="math-container">$S_1':x^2+y^2+(z-6.25)^2=25$</span> and <span class="math-container">$S_2':x^2+y^2+(z+6.25)^2=25$</span> are tangent to <span class="math-container">$\Pi$</span> (see also <a href="https://mathworld.wolfram.com/DandelinSpheres.html" rel="nofollow noreferrer">Dandelin Spheres</a>).<br/><a href="https://i.stack.imgur.com/1zhI2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1zhI2.png" alt="Figure 4:Dandelin spheres" /></a><br/>Figure 4<br/><br/></p> <p>For more details, see <a href="https://www.jstor.org/stable/27642608" rel="nofollow noreferrer">Apostol, Tom M., and Mamikon A. Mnatsakanian. “New Descriptions of Conics via Twisted Cylinders, Focal Disks, and Directors.”</a>.</p>
1,147,773
<p>Do you have any explicit example of an infinite-dimensional vector space with an explicit basis?</p> <p>Not a Hilbert basis, but a family of linearly independent vectors which spans the space: any $x$ in the space is a <strong>finite</strong> linear combination of elements of the basis.</p> <p>In general the existence of such a basis follows from the Axiom of Choice, but I wonder if there is at least one nontrivial (not finite-dimensional) case where we have some explicit construction.</p>
Surb
154,545
<p>What do you think about $$X=\text{span}\{e_1,e_2,e_3,...\}$$</p> <p>where $$e_i=(0,...,0,\underset{\underset{i^{th}\ place}{\uparrow}}{1},0,...).$$</p>
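To make the "finite linear sums" in this example concrete, one can represent each vector by its finite support; the dict-based encoding below is my own illustration, not part of the answer.

```python
# A vector in span{e_1, e_2, ...} is a finitely supported map i -> coefficient.
def e(i):
    return {i: 1.0}

def add(u, v):
    w = dict(u)
    for i, c in v.items():
        w[i] = w.get(i, 0.0) + c
        if w[i] == 0.0:
            del w[i]          # keep the support finite and minimal
    return w

def scale(a, u):
    return {i: a * c for i, c in u.items()} if a != 0 else {}

# x = 3*e_1 - 2*e_7: a finite combination, despite infinitely many e_i
x = add(scale(3, e(1)), scale(-2, e(7)))
print(x)  # {1: 3.0, 7: -2.0}
```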
3,225,151
<p><span class="math-container">$ZFC+V=L$</span> implies that <span class="math-container">$P(\mathbb{N})$</span> is a subset of <span class="math-container">$L_{\omega_1}$</span>. But I’m wondering what layer of the constructible Universe contains a smaller set.</p> <p>My question is, what is the smallest ordinal <span class="math-container">$\alpha$</span> such that for all formulas <span class="math-container">$\phi(n)$</span> in the language of second-order arithmetic, the set <span class="math-container">$\{n\in\mathbb{N}:\phi(n)\}\in L_\alpha$</span>? Does this depend on whether we assume <span class="math-container">$V=L$</span>?</p> <p>I’m guessing <span class="math-container">$\alpha&gt;\omega_1^{CK}$</span>, and that <span class="math-container">$\alpha$</span> is greater than the ordinal <span class="math-container">$\beta_0$</span> discussed in my question <a href="https://math.stackexchange.com/q/3224027/71829">here</a>. But can we say anything more about it?</p>
Noah Schweber
28,111
<p>It is consistent that there is no such <span class="math-container">$\alpha$</span>.</p> <p>More precisely, it is consistent with ZFC that there is a formula <span class="math-container">$\varphi$</span> in the language of second-order arithmetic such that <span class="math-container">$\{x:\varphi(x)\}$</span> is not constructible. For example, <span class="math-container">$0^\sharp$</span> if it exists has this property (it's <span class="math-container">$\Delta^1_3$</span>-definable if it exists).</p> <hr> <p>EDIT: Of course, if V = L then such an <span class="math-container">$\alpha$</span> trivially exists. <strong>Throughout the rest of this answer we assume V=L</strong>.</p> <p>The key point is that there is a "definable translation" between first-order formulas over <span class="math-container">$L_{\omega_1}$</span> and second-order formulas of arithmetic:</p> <ul> <li><p>One direction is immediate: any second-order arithmetic formula can be rephrased in <span class="math-container">$L_{\omega_1}$</span> since sets of naturals are already elements of <span class="math-container">$L_{\omega_1}$</span>. </p></li> <li><p>The other direction is the interesting one. 
Given a well-founded tree <span class="math-container">$T\subset\omega^{&lt;\omega}$</span> <em>(note that we can definably conflate subsets of <span class="math-container">$\omega$</span> and subsets of <span class="math-container">$\omega^{&lt;\omega}$</span>, and that the set of well-founded trees is second-order definable)</em>, we recursively define a map <span class="math-container">$Set_T$</span> from nodes of <span class="math-container">$T$</span> to sets, by setting <span class="math-container">$$Set_T(\sigma)=\{Set_T(\sigma^\smallfrown \langle k\rangle): k\in\omega, \sigma^\smallfrown\langle k\rangle\in T\};$$</span> for example, if <span class="math-container">$\sigma$</span> is a leaf of <span class="math-container">$T$</span> then <span class="math-container">$Set_T(\sigma)=\emptyset$</span>. We then let <span class="math-container">$Set(T)=Set_T(\langle\rangle)$</span> be the set assigned to the empty string (= the root of <span class="math-container">$T$</span>). It's easy to check that the relations "<span class="math-container">$Set(T_0)=Set(T_1)$</span>" and "<span class="math-container">$Set(T_0)\in Set(T_1)$</span>" are definable in second-order arithmetic, and this gives us an interpretation of <span class="math-container">$L_{\omega_1}$</span> into <span class="math-container">$\mathcal{P}(\omega)$</span>.</p></li> </ul> <p>The projectively-definable reals are precisely the parameter-freely definable elements of the first-order structure <span class="math-container">$(\omega,\mathcal{P}(\omega); +,\times,\in)$</span>, and the translation above identifies these with the set <span class="math-container">$M$</span> of parameter-freely definable elements of the first-order structure <span class="math-container">$(L_{\omega_1}; \in)$</span> (which I'll conflate with <span class="math-container">$L_{\omega_1}$</span>). 
</p> <p>The final point is that since <span class="math-container">$L$</span> has definable Skolem functions, <span class="math-container">$M$</span> is in fact an elementary submodel of <span class="math-container">$L_{\omega_1}$</span> and hence<span class="math-container">$^1$</span> <span class="math-container">$M=L_\eta$</span> for some <span class="math-container">$\eta$</span>. This <span class="math-container">$\eta$</span> is exactly our <span class="math-container">$\alpha$</span>. That is:</p> <blockquote> <p>Assuming V=L, <span class="math-container">$\alpha$</span> is the height of the smallest elementary submodel of <span class="math-container">$L_{\omega_1}$</span>.</p> </blockquote> <p>In particular, this is massively bigger than <span class="math-container">$\beta_0$</span>, since <span class="math-container">$\beta_0$</span> is parameter-freely definable in <span class="math-container">$L_{\omega_1}$</span>.</p> <hr> <p><span class="math-container">$^1$</span>This is a cute fact. The Condensation Lemma alone doesn't kill this off: in order to apply Condensation we need to know that <span class="math-container">$M$</span> is transitive. But a priori, it's not clear that it needs to be - for example, a countable elementary submodel of <span class="math-container">$L_{\omega_2}$</span> obviously <em>can't</em> be transitive, since it must contain <span class="math-container">$\omega_1$</span> as an element. </p> <p>So what's special about <span class="math-container">$\omega_1$</span> here? 
The trick here is the following:</p> <blockquote> <p>Suppose <span class="math-container">$A$</span> is a "sufficiently closed" transitive set (= contains <span class="math-container">$\omega$</span> and such that every countable element of <span class="math-container">$A$</span> is countable within <span class="math-container">$A$</span>) - for example, <span class="math-container">$A=L_{\omega_1}$</span> - and <span class="math-container">$B$</span> is an elementary substructure of <span class="math-container">$A$</span> (conflating a transitive set with the corresponding <span class="math-container">$\{\in\}$</span>-structure as usual). Then the set of countable ordinals in <span class="math-container">$B$</span> is closed downwards.</p> </blockquote> <p><em>Rough proof</em>: Suppose <span class="math-container">$\theta$</span> is a (WLOG infinite) countable ordinal in <span class="math-container">$B$</span> and <span class="math-container">$\gamma&lt;\theta$</span>. Since <span class="math-container">$A$</span> computes countability correctly we have in <span class="math-container">$A$</span> an <span class="math-container">$f: \omega\cong\theta$</span>. By elementarity "going down," <span class="math-container">$B$</span> contains some <span class="math-container">$g$</span> which <span class="math-container">$B$</span> thinks is a bijection from <span class="math-container">$\omega$</span> to <span class="math-container">$\theta$</span>; by elementarity "going up," <span class="math-container">$A$</span> also thinks <span class="math-container">$g$</span> is. So (working in <span class="math-container">$A$</span>) there is some <span class="math-container">$n\in\omega$</span> such that <span class="math-container">$g(n)=\gamma$</span>; but since <span class="math-container">$n\in\omega$</span> we have <span class="math-container">$n\in B$</span> (we can't "lose" natural numbers!) and so <span class="math-container">$g(n)=\gamma\in B$</span> as well. 
<span class="math-container">$\Box$</span></p> <p>We can generalize the above observation using further closedness assumptions: e.g. if <span class="math-container">$B$</span> is an elementary submodel of a sufficiently closed transitive set <span class="math-container">$A$</span> with <span class="math-container">$\omega_1\subseteq B$</span> then <span class="math-container">$B\cap\omega_2$</span> is closed downwards (running the above argument, we only need that <span class="math-container">$dom(g)\subset B$</span>).</p>
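As a purely illustrative aside (my addition, and only a toy for finite trees, not the definable translation itself): the map $Set_T$ described above can be coded directly, using frozensets so that extensional equality of the resulting sets is just Python equality.

```python
# A finite tree: a set of tuples over omega, closed under prefixes.
# Set_T(sigma) = { Set_T(sigma + (k,)) : sigma + (k,) in T }.
def set_of(tree, sigma=()):
    children = {s[-1] for s in tree
                if len(s) == len(sigma) + 1 and s[:len(sigma)] == sigma}
    return frozenset(set_of(tree, sigma + (k,)) for k in children)

empty = set_of({()})                    # leaf root -> the empty set
one = set_of({(), (0,)})                # {emptyset} = the ordinal 1
two = set_of({(), (0,), (1,), (1, 0)})  # {emptyset, {emptyset}} = 2

print(empty, one in two, empty in two)  # frozenset(), True, True
```

Note how two different trees can code the same set, mirroring the need for the definable relation "$Set(T_0)=Set(T_1)$".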
1,638,490
<p>Yes, I know the title is bizarre. I was urinating and forgot to lift the seat up. That made me wonder: assuming I maintain my current position, is it possible for the toilet seat (assume it is a closed, but otherwise freely deformable curve) to be moved/deformed such that the stream does not pass through the hole anymore, without it intersecting the curve (in other words, spraying urine everywhere!)?</p>
dxiv
291,201
<p>Here is a topologically equivalent reformulation.</p> <blockquote> <p>Suppose you are ice fishing on a frozen lake. Is it possible for the hole in the ice to deform in such a way that the rod and line no longer pass through it, without breaking either?</p> </blockquote> <p>The answer is yes, but (in both cases) you'll wish you had never asked.</p>
289,405
<p>Consider an elementary class $\mathcal{K}$. It is quite common in model theory that a structure $K$ in $\mathcal K$ comes with a closure operator $$\text{cl}: \mathcal{P}(K) \to \mathcal{P}(K), $$ which establishes a <a href="https://en.wikipedia.org/wiki/Pregeometry_(model_theory)" rel="nofollow noreferrer">pregeometry</a> on $K$.</p> <p>Any pregeometry yields a notion of dimension, say:</p> <p>$$\text{dim} (K) = \min \{|A|: A \subset K \text{ and } \text{cl}(A) = K\}$$ I am interested in some natural properties shared by dimensions induced by pregeometries.</p> <p>What kind of properties am I looking for? An example is the following:</p> <blockquote> <p>Suppose $K = \bigcup K_i$ (a non-redundant increasing chain) and that its dimension is infinite; is it true that $\text{dim}(K) = \sum_i \text{dim}(K_i)$?</p> </blockquote> <p>I already know that there are many results like "trivial geometries are modular" but this is not the kind of result I am looking for. I am looking for structural properties of dimension because I am interested in giving an axiomatic definition of dimension.</p>
Ivan Di Liberti
104,432
<p>The following is valid when, for $A\subseteq M\prec N$, $\text{cl}(A)$ in $M$ equals $\text{cl}(A)$ in $N$, i.e. closures don't grow in elementary extensions.</p> <hr> <p>The answer to my question looks to me to be <strong>yes</strong>.</p> <p>Let $K = \bigcup K_i$. We know that $K_i$ has a basis $A_i$.</p> <p>Step 1. We can suppose that $A_i \subset A_{i+1}$. In fact, if this is not the case, consider $\text{cl}(A_i)$ inside $K_{i+1}$. We throw elements into $A_i$ until it becomes a basis $A_{i+1}^*$ of $K_{i+1}$, and it must be the case that $|A_{i+1}^*| = |A_{i+1}|$ by the exchange property. By transfinite induction we can replace $\{A_i\}$ so that they form an increasing chain.</p> <p>Step 2. $\bigcup A_i$ is a basis of $K$. In fact $$K = \bigcup K_i = \bigcup \text{cl} (A_i) \subset \text{cl} (\bigcup A_i)$$ by monotonicity.</p> <p>Step 3. $|\bigcup A_i| = \sum |A_i|$ as soon as they are infinite.</p> <p>Is this correct? Am I really using that $A_i \subset A_{i+1}$? </p>
2,967,626
<p>I have a graph that is 5280 units wide, and 5281 is the length of the arc.</p> <p><img src="https://i.stack.imgur.com/4rFK4.png"></p> <p>Knowing the width of this arc, and how long the arc length is, how would I calculate exactly how high the highest point of the arc is from the line at the bottom?</p>
Nicholas Parris
566,184
<p>The final solution comes out to be very nice, so I suspect this is a homework problem. As such, I will just give you an outline. <span class="math-container">$$0 &lt; \frac{2 b^2 r^2}{z} - \left(2 r ^2 - 2 b r \sqrt{1 - \frac{b^2}{z^2}}\right) z$$</span> Simplifying terms, dividing both sides by <span class="math-container">$z$</span>, letting <span class="math-container">$\frac{b}{z}=\alpha$</span> and pulling the square root to one side, <span class="math-container">$$b \sqrt{1 - \alpha^2} &gt; r-r\alpha^2$$</span> Now just factor the square root from both sides and then square both sides. You should end up with <span class="math-container">$$a^2+b^2&gt;r^2$$</span></p>
55,071
<p><strong>Bug introduced in 10.0.0 and persisting through 10.3.0 or later</strong></p> <hr> <p>I've upgraded my home installation of <em>Mathematica</em> from version 9 to 10 today on a Windows 8.1 machine, and I'm getting a weird font issue - the fonts are not anti-aliased, and look unbalanced and weird. Just look:</p> <blockquote> <p><img src="https://i.stack.imgur.com/cnuXE.png" alt="HALP"></p> </blockquote> <p>For comparison, here is what it looks like on Linux with <em>Mathematica</em> V10:</p> <blockquote> <p><img src="https://i.stack.imgur.com/Qf6kO.png" alt="Mathematica graphics"></p> </blockquote> <p>At this point you may object, as these issues are too minor to get worked up about. But neither I nor my otherwise benign OCD can work like this. Any ideas?</p> <p><strong>EDIT</strong> I've just had an idea: maybe I need to manually remove old fonts left over from the old <em>Mathematica</em> 9 installation. I've read somewhere that they were not going to pollute the main font catalog with symbol fonts in the next release, and maybe the new <em>Mathematica</em> is using old fonts for some reason. I can't test it right now myself, unfortunately.</p>
Tetsuo Ichii
18,579
<p>This problem is probably due to the MathematicaMono font, which was introduced in v10.</p> <p>Defining the problem:<br> Some characters ("[","_","]","=", etc.) are rendered badly with strange thinning in v10 at some notebook magnifications. This is obvious when you compare the renderings from v10 with those from v9. </p> <p><img src="https://i.stack.imgur.com/w4wZL.png" alt="comparison v9 v.s. v10"></p> <p>Analyzing the problem:<br> I found that all of these ugly-looking characters were rendered in the MathematicaMono font (specifically, the MathematicaMono-Bold.ttf font in this example) by using the method described in my comment on the question. MathematicaMono fonts are new in v10: we only have fonts named Mathematica1Mono, Mathematica2Mono, and so on, up to v9. These results suggested that the problem is caused by the new MathematicaMono fonts.<br></p> <p>Next, to test this hypothesis, I replaced MathematicaMono-Bold.ttf with Mathematica1mb.ttf (which contains the Mathematica1Mono-Bold font) copied from my v9 installation folder. I renamed the name property of Mathematica1mb.ttf using the FontForge program ("Mathematica1Mono-Bold" to "MathematicaMono-Bold") and installed it in the v10 font folder as MathematicaMono-Bold.ttf.<br></p> <p>After the substitution, the notebook was rendered as in v9, at least for characters like "_" and "=": <img src="https://i.stack.imgur.com/Nmz7n.png" alt="v10 after font substitution"></p> <p>Sadly, "[" and "]" were not fixed because the Mathematica1Mono font lacks glyphs for these characters. But anyway, the substitution experiment partially confirmed my hypothesis.</p> <p>What's wrong in the MathematicaMono font?:<br> I have no answer yet, so I cannot provide the complete solution. But I found a strange thing in the MathematicaMono font. 
In all of the newly introduced MathematicaXXX.ttf fonts in v10, the "Win Ascent" and "Win Descent" OpenType properties are set to strangely large values (5000 and 3500) compared with the values in the v9 Mathematica fonts (1747 and 479). This makes the previews of MathematicaXXX fonts strangely small when you open the fonts in the Windows font viewer. However, I could not fix the original problem even when I edited the "Win Ascent" and "Win Descent" values of the MathematicaMono-Bold font in FontForge.</p> <p>I hope my answer helps someone to solve this problem.</p> <p><strong>Update 7/22</strong>: I found clear evidence that the problem is in the MathematicaMono fonts. After you "install" MathematicaMono.ttf and MathematicaMono-Bold.ttf, you can use the MathematicaMono fonts in software other than Mathematica. Here are the MathematicaMono fonts rendered in Microsoft Word:<br> <img src="https://i.stack.imgur.com/7XshK.png" alt="MathematicaMono fots in MS Word"><br> The rendering problems reported in Mathematica were completely reproduced in MS Word! This indicates that the problem is not in the Mathematica front end but in the font itself. </p>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
Piquito
219,998
<p>$$\sin x + \sin y = 1\Rightarrow 2\sin\frac{x+y}{2}\cos \frac{x-y}{2}=1\\\cos x+\cos y=0\Rightarrow 2\cos\frac{x+y}{2}\cos \frac{x-y}{2}=0$$ From the second equation it follows that $$ \cos \frac{x+y}{2}=0\quad\text{or}\quad \cos \frac{x-y}{2}=0.$$ The second possibility ($\cos \frac{x-y}{2}=0$) is discarded by incompatibility: it would turn the first equation into $0=1$. The first possibility ($\cos \frac{x+y}{2}=0$) gives, from the first given equation,</p> <p>$$\begin{cases}x+y=\pi\\x-y=\pm\frac{2\pi}{3}\end{cases}\iff(x,y)=\left(\frac{\pi}{6},\frac{5\pi}{6}\right)\text{ or }\left(\frac{5\pi}{6},\frac{\pi}{6}\right)$$</p> <p>Thus the solutions are given by $$(x,y)=( \frac{\pi}{6}+2m\pi,\space \frac{5\pi}{6}+2n\pi)$$ together with the pair obtained by swapping $x$ and $y$. It is worthwhile to look at the graphic solutions: all intersections of the red lattice with the blue closed curves.</p> <p><a href="https://i.stack.imgur.com/VDNZC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VDNZC.png" alt="enter image description here"></a></p>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
fleablood
280,126
<p>There are so many different ways to solve it that the real question is which way.</p> <p>What reaches out to grab me is:</p> <p>$\cos x + \cos y = 0$</p> <p>$\cos x = - \cos y$ which means either $y = \pi - x$ (within a period of $2\pi$) or $y = x + \pi$ (within a period of $2\pi$).</p> <p>If $y = x + \pi$ then $\sin y = - \sin x$ and $\sin y + \sin x = 0 \ne 1$ which is impossible.</p> <p>If $y = \pi - x$ then $\sin y = \sin x$ and $\sin y + \sin x = 2 \sin x$. If this is so (and it's our only option) then $\sin x = 1/2$ which means $x \in \{\pi/6, 5\pi/6\}$.</p> <p>So $(x,y) = (\pi/6, 5\pi/6)$ or $(x,y)= (5\pi/6, \pi/6)$ (within periods of $2\pi$)</p>
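A quick numerical sanity check (a small Python sketch, with tolerances chosen loosely) confirms that both ordered pairs satisfy the original system:

```python
import math

def residuals(x, y):
    # Return (sin x + sin y - 1, cos x + cos y) for the system above;
    # both components should vanish at a solution.
    return (math.sin(x) + math.sin(y) - 1.0, math.cos(x) + math.cos(y))

r1 = residuals(math.pi / 6, 5 * math.pi / 6)
r2 = residuals(5 * math.pi / 6, math.pi / 6)
# Both residual pairs are zero up to floating-point error.
```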
31,948
<p>A Finsler manifold is defined as a differentiable manifold with a metric defined on it so that the length of any well-defined curve of finite arc length is given by a generalized arc length integral of an asymmetric norm over each tangent space defined at a point. This generalizes the Riemannian manifold structure, since the norm is no longer required to be induced by an inner product, and therefore the tangent space structure of a Finsler manifold is not necessarily Euclidean. </p> <p>A colleague of mine and I recently got into an argument over whether or not Finsler manifolds are semi- or pseudo-Riemannian. I say no: by definition, a semi-Riemannian manifold (like a Lorentzian manifold in relativity theory) is still required to have a metric tensor as its normed structure, which is clearly an inner product. We simply weaken the condition of positive definiteness (i.e. the associated quadratic form of the norm is real valued) to nondegeneracy (i.e. the tangent space is isomorphic with its dual space). Both conditions require the distance structure to be induced by an inner product. </p> <p>My colleague's argument is that the key property of semi-Riemannian manifolds is that they admit local signed coordinate structures that allow the distinction of different kinds of tangent spaces on the manifold. Local isomorphisms can be defined (especially in infinite-dimensional extensions of classical relativistic spaces) that make certain Finsler manifolds equivalent to relativistic models of space-time. </p> <p>I honestly don't know enough about the research that's been done on this. Is he right? This seems very bizarre to me, but it may indeed be possible to use specially constructed mappings to convert Finsler spaces to semi-Riemannian ones and vice versa. I seriously doubt it could be done globally without running into serious topological barriers. </p> <p>I'd like the geometers to chime in on this, particularly ones who are well-versed in relativistic geometry: Am I right? Can Finsler manifolds be defined in such a manner as to be true semi-Riemannian manifolds? Can local isomorphisms or diffeomorphisms be defined to interconvert them? </p>
Will Jagy
3,324
<p>The answer is no. Any smooth manifold admits a Riemannian metric using paracompactness and partitions of unity: in short, a convex sum of positive definite symmetric matrices is positive definite symmetric. So any manifold has such a structure. But there are topological obstructions to the existence of global pseudo-Riemannian metrics of other prescribed signatures. <a href="http://en.wikipedia.org/wiki/Semi-Riemannian_manifold#Properties_of_pseudo-Riemannian_manifolds" rel="nofollow">http://en.wikipedia.org/wiki/Semi-Riemannian_manifold#Properties_of_pseudo-Riemannian_manifolds</a> </p> <p>EDIT, 16 July: I was looking for an example of said topological obstructions, and with an assist from Willie Wong it has worked out: the ordinary sphere $$ \mathbb S^2 \subseteq \mathbb R^3 $$ does not possess a signature $(+,-) $ metric, which I suppose ought to be called Lorentzian for this dimension. The topological obstruction is that $ \mathbb S^2$ cannot have a smooth (tangent) "line field," just as it cannot have a smooth nonzero tangent vector field by Brouwer. Now, a Lorentzian metric would give (pointwise) null cones, in this case a pair of distinct but intersecting lines in each tangent plane. As we are using $\mathbb R^3, $ we can cheat and define a line field from the angle bisector of the $+$ part of the cone. </p>
651,707
<p>Let $A=(a_{ij})\in \mathbb{M}_n(\mathbb{R})$ be defined by</p> <p>$$ a_{ij} = \begin{cases} i, &amp; \text{if } i+j=n+1 \\ 0, &amp; \text{ otherwise} \end{cases} $$ Compute $\det (A)$</p> <hr> <blockquote> <p>After calculation I get that it may be $(-1)^{n-1}n!$. Am I right?</p> </blockquote>
user76568
74,917
<p>These matrices have nonzero entries only on the anti-diagonal, the entry in row $i$ being $i$. In the Leibniz expansion of the determinant, the only nonzero term is the product of the anti-diagonal entries, $n!$, multiplied by the sign of the order-reversing permutation $i\mapsto n+1-i$. That permutation has $\binom{n}{2}=\frac{n(n-1)}{2}$ inversions, so the answer is:</p> <p>$$|A|=(-1)^{n(n-1)/2}\,n!=(-1)^{\lfloor n/2\rfloor}\,n!$$</p> <p>For example, $n=2,3$ give $-2$ and $-6$, while $n=4,5$ give $+24$ and $+120$ (note the sign is <strong>not</strong> simply $(-1)^n$: for $n=5$ the determinant is positive).</p>
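A brute-force check for small $n$ (a Python sketch using exact integer cofactor expansion; the helper names are ad hoc) confirms the sign pattern $(-1)^{n(n-1)/2}\,n!$:

```python
from math import factorial

def det(m):
    # Exact integer determinant via cofactor expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def anti_diag(n):
    # a_{ij} = i when i + j = n + 1 (1-based indices), else 0.
    return [[i + 1 if i + j == n - 1 else 0 for j in range(n)]
            for i in range(n)]

dets = [det(anti_diag(n)) for n in range(1, 8)]
signs = [(-1) ** (n * (n - 1) // 2) * factorial(n) for n in range(1, 8)]
```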
2,617,467
<p>I have been trying to solve this limit for more than an hour and I'm stuck.</p> <blockquote> <p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}$$</p> </blockquote> <p>What I have so far is:</p> <p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}\\ \lim_{n\to\infty} \frac{3^{n}}{n!+2^{(n+1)}}+\lim_{n\to\infty} \frac{\sqrt{n}}{n!+2^{(n+1)}}\\\lim_{n\to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}+\lim_{n\to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}\\ \lim_{n\to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}*0\\ \lim_{n\to\infty} \frac{3^{n}}{1+0}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1+0}*0\\ \lim_{n\to\infty} \frac{3^{n}}{1}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1}*0\\ \lim_{n\to\infty}{3^{n}}*0+\lim_{n\to\infty} \sqrt{n}*0\\ \lim_{n\to\infty}{\infty}*0+\lim_{n\to\infty} \infty*0$$</p> <p>This is an indeterminate form, and I don't know how to continue. Can someone help me? Please let me know if something isn't clear. Thank you</p>
Arnaud Mortier
480,423
<p>My advice: don't systematically replace part of an expression by its limit as soon as you know it, while keeping the $n$ in the rest of the expression. </p> <p>For instance when you have $\sqrt{n} \dfrac{1}{n!} $, you rush into replacing $\dfrac{1}{n!} $ by $0$ and then you're stuck. Analyse the situation: $\sqrt{n}$ is actually a very slow sequence compared to $n!$. You can write $\sqrt{n} \dfrac{1}{n!} \leq n\dfrac{1}{n!} =\dfrac{1}{(n-1)!} $ which solves the problem!</p> <p>Also, be very careful when you have a sum, like in your first step: it may happen that $\lim (u_n+v_n)$ exists but neither of $\lim (u_n)$ and $\lim (v_n)$ does. In other words $u_n$ needs $+v_n$ to converge. In that case your cause would be lost from the very first step. However here you are dealing with positive sequences only, which prevents this phenomenon from happening.</p>
2,617,467
<p>I have been trying to solve this limit for more than an hour and I'm stuck.</p> <blockquote> <p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}$$</p> </blockquote> <p>What I have so far is:</p> <p>$$ \lim_{n\to\infty} \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}\\ \lim_{n\to\infty} \frac{3^{n}}{n!+2^{(n+1)}}+\lim_{n\to\infty} \frac{\sqrt{n}}{n!+2^{(n+1)}}\\\lim_{n\to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}+\lim_{n\to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}\frac{1}{n!}\\ \lim_{n\to\infty} \frac{3^{n}}{1+\frac{2^{(n+1)}}{n!}}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1+\frac{2^{(n+1)}}{n!}}*0\\ \lim_{n\to\infty} \frac{3^{n}}{1+0}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1+0}*0\\ \lim_{n\to\infty} \frac{3^{n}}{1}*0+\lim_{n\to\infty} \frac{\sqrt{n}}{1}*0\\ \lim_{n\to\infty}{3^{n}}*0+\lim_{n\to\infty} \sqrt{n}*0\\ \lim_{n\to\infty}{\infty}*0+\lim_{n\to\infty} \infty*0$$</p> <p>This is an indeterminate form, and I don't know how to continue. Can someone help me? Please let me know if something isn't clear. Thank you</p>
Guy Fsone
385,707
<p>We have $$ \frac{3^{n}+\sqrt{n}}{n!+2^{(n+1)}}\le \frac{3^{n}+\sqrt{n}}{n!} =\frac{3^{n}}{n!} + \frac{1}{\sqrt{n}(n-1)!}\to0$$</p> <p>Indeed, the convergence of the series $$e^3 =\sum_{n=0}^{\infty}\frac{3^{n}}{n!}\implies \frac{3^{n}}{n!} \to0$$</p>
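Numerically the decay is easy to watch (a small Python sketch; `a(n)` denotes the general term of the sequence):

```python
from math import factorial, sqrt

def a(n):
    # General term (3^n + sqrt(n)) / (n! + 2^(n+1)) from the question.
    return (3 ** n + sqrt(n)) / (factorial(n) + 2 ** (n + 1))

vals = [a(n) for n in (1, 5, 10, 20)]
# Once n! overtakes 3^n, the terms collapse toward 0 very quickly.
```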
3,792,242
<p>A large part of the set theory is devoted to infinities of various kinds, and this has been built on Cantor's groundbreaking work on uncountable sets. However, even Cantor's proof is based on the assumption that certain sets exist, namely that the power set of a countably infinite set exists. Sure, after assuming that the set of natural numbers has a power set, Cantor's proof shows that this power set is uncountable. However, he does not address why such a power set should exist in the first place. Is the existence of the power set of an infinite set assumed as an axiom of set theory?</p> <p>Intuitively, I found the existence of such a power set troubling. By definition, most of the elements of this set consist of purely random infinite sets. For example, the power set of natural numbers can be partitioned into:</p> <ol> <li>Those elements that can be constructed by an algorithm (i.e. a Turing Machine), such as all the finite sets, or the infinite sets with a constructive algorithm (e.g., set of odd numbers, set of natural numbers that build the digits of <span class="math-container">$\pi$</span> in some form such as {3, 14, 159, ...}, etc.)</li> <li>All the other elements that no Turing Machine can generate them, e.g., an infinite set of entirely random natural numbers.</li> </ol> <p>These purely random sets have (1) an infinite amount of information packed into them, and (2) we cannot even construct them. Both these aspects are unsettling to me.</p> <p>Setting the intuition aside, is there a concrete reason why mathematicians accept the existence of the power set of an infinite set? Can we prove that such a set must exist? Has there been any inquiry on whether the existence of such a set would result in inconsistencies or not?</p>
Andreas Blass
48,510
<p>This should perhaps be a comment, but it's too long. The question emphasizes a dichotomy among the subsets of <span class="math-container">$\mathbb N$</span>: computable sets versus all the others. But there's actually a much richer spectrum here: computable sets, computably enumerable sets (like the set of code numbers of Turing machines that eventually halt when started on an empty tape), arithmetically definable sets, predicative sets, constructible sets (in the sense of Gödel), etc.</p> <p>Early in the 20th century, there was considerable discussion (or dispute) as to whether mathematics should work with completely arbitrary sets or should require some degree of definability. This question played a central role in discussions about the axiom of choice. Eventually, when mathematical logic had been developed to the point where one could talk precisely about the many levels of definability, it became clear that the natural way to proceed is to let &quot;set&quot; mean &quot;completely arbitrary set&quot;; if one wants to work instead with definable sets, then one should say so and one should specify exactly what sorts of definitions are to be allowed. This decision in favor of arbitrary sets is the basis for most set theorists' acceptance of the axiom of choice.</p> <p>It is entirely possible for someone (like me) who accepts arbitrary sets (and the ZFC axioms that are intended to describe them) to also consider more restricted universes in which only the computable sets are guaranteed to exist. One important axiom system often used to describe such a universe is called <span class="math-container">$\text{RCA}_0$</span> (abbreviating &quot;recursive comprehension axiom&quot;). 
Another axiom system, <span class="math-container">$\text{ACA}_0$</span>, describes a universe with arithmetical sets; KP+Inf provides for the existence of hyperarithmetical sets; etc.</p> <p>From the point of view of mathematics in general, it is interesting to ask whether various well-known theorems (like the Bolzano-Weierstrass theorem or the existence of prime ideals in non-degenerate rings, or the dominated convergence theorem) are provable with such limited supplies of sets. The detailed study of such questions is called &quot;reverse mathematics&quot;, because to verify that one has just the right axiom system A for proving some theorem T, the usual method is to deduce A from T --- the reverse of the usual business of mathematics, deducing theorems from axioms. The standard reference for this topic is Stephen Simpson's book &quot;Systems of Second-Order Arithmetic&quot;.</p> <p>I should also mention that sets at the other end of the spectrum, random sets that have very high information content, have also been studied extensively by computability theorists. As with definability, randomness also has a spectrum; there are different levels of randomness. And the two spectra are (somewhat) related: A subset of <span class="math-container">$\mathbb N$</span> is random (to some degree) if it lies in all definable (to some degree) sets of probability 1 (in the standard probability space of subsets of <span class="math-container">$\mathbb N$</span>).</p> <p>The &quot;undefinable&quot; end of the spectrum also contains non-random sets of various sorts, for example the so-called generic sets, which lie in all definable sets that are comeager (i.e., their complement is of first Baire category). As with randomness, there are levels of genericity, related to levels of definability.</p>
220,907
<p>If I have these three lists:</p> <pre><code>list1={{0.01,87.,0.},{0.03,87.,0.18353},{0.1,87.,0.494987},{0.3,87.,0.899803},{1.,87.,1.08076},{3.,87.,1.10593},{10.,87.,1.04781},{10.,87.,1.02449},{10.,87.,0.964193},{30.,87.,1.0602},{30.,87.,1.04075},{30.,87.,1.05987},{100.,87.,1.14661},{100.,87.,1.00639},{100.,87.,1.09384},{300.,87.,1.067},{300.,87.,1.15047},{300.,87.,1.10715},{1000.,87.,1.05152},{1000.,87.,1.06942},{1000.,87.,1.17143},{3000.,87.,1.12162},{10000.,87.,1.13136}} list2={{0.01,75.,0.},{0.03,75.,0.},{0.1,75.,0.0959691},{0.3,75.,0.37954},{1.,75.,0.678807},{3.,75.,0.90385},{10.,75.,0.965262},{10.,75.,1.01025},{10.,75.,1.01675},{30.,75.,1.04836},{30.,75.,1.11345},{30.,75.,1.09146},{100.,75.,1.16961},{100.,75.,1.19018},{100.,75.,1.16968},{300.,75.,1.20834},{300.,75.,1.22955},{300.,75.,1.19569},{1000.,75.,1.25479},{1000.,75.,1.32295},{1000.,75.,1.22151},{3000.,75.,1.28794},{10000.,75.,1.25897}} list3={{0.01,55.,0.},{0.03,55.,0.},{0.1,55.,0.},{0.3,55.,0.},{1.,55.,0.},{3.,55.,0.},{10.,55.,0.},{10.,55.,0.},{10.,55.,0.},{30.,55.,0.},{30.,55.,0.},{30.,55.,0.},{100.,55.,0.},{100.,55.,0.},{100.,55.,0.},{300.,55.,0.0721335},{1000.,55.,0.214175},{3000.,55.,0.748622},{10000.,55.,1.05191}} </code></pre> <p>How can I make the list to combine such as they get organized from smallest to highest number based on the first element of the list (which go from 0.01 to 10.000) as to get <code>{0.01,87,0,0.01,75,0,0.01,55,0,.......10000.,87.,1.13136,10000.,75.,1.25897,10000.,55.,1.05191}</code></p> <p>Thank you!</p>
SuperCiocia
35,368
<p>From the description that you give, this is what you want:</p> <pre><code>listtotal = Sort[Join[list1, list2, list3], #1[[1]] &lt; #2[[1]] &amp;] </code></pre> <p>Or even</p> <pre><code>listtotal = Sort[Join[list1, list2, list3], #1[[1]] &lt; #2[[1]] &amp;] // Flatten </code></pre> <p>if you want the list flattened.</p> <p>This, however, does not exactly match your desired output; but in the output you give, the numbers are not ordered...</p>
4,180,344
<p>I started learning from Algebra by Serge Lang. On page 5, he presented an equation:</p> <blockquote> <p>Let <span class="math-container">$G$</span> be a commutative monoid, and <span class="math-container">$x_1,\ldots,x_n$</span> elements of <span class="math-container">$G$</span>. Let <span class="math-container">$\psi$</span> be a bijection of the set of integers <span class="math-container">$(1,\ldots,n)$</span> onto itself. Then <span class="math-container">$$\prod_{\nu = 1}^n x_{\psi(\nu)} = \prod_{\nu=1}^n x_\nu$$</span></p> </blockquote> <p>In this equation, taking the mapping <span class="math-container">$\psi(\nu) = \nu $</span> trivially gives the same value, so I don't understand why <span class="math-container">$x_{\psi(\nu)}$</span> bothers to index with a mapping rather than a number.</p>
Rob Arthan
23,171
<p>Suggestion: consider the special case when <span class="math-container">$n = 3$</span> and write <span class="math-container">$a, b, c$</span> for <span class="math-container">$x_1, x_2, x_3$</span> respectively. What Lang is saying in that case is that we have all of the following equalities: <span class="math-container">$$ abc = bac = cab = acb = bca = cba $$</span> I.e., the product <span class="math-container">$x_1x_2x_3$</span> is independent of the way we order the factors <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span>. Now try to figure out how to generalise this to arbitrary <span class="math-container">$n$</span>: what you will need is something to represent how the factors have been reordered: that is Lang's bijection <span class="math-container">$\psi$</span> on the index set <span class="math-container">$\{1, \ldots, n\}$</span>.</p>
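A toy check (Python sketch; the instance $x_1,x_2,x_3=2,3,5$ in the commutative monoid $(\mathbb{Z},\cdot)$ is chosen just for illustration) confirms that all $3!=6$ orderings give one and the same product:

```python
from itertools import permutations

x = (2, 3, 5)  # x_1, x_2, x_3 in the commutative monoid (Z, *)

products = set()
for order in permutations(x):   # each ordering corresponds to one bijection psi
    p = 1
    for v in order:
        p *= v
    products.add(p)
# Commutativity forces every reordered product to coincide,
# so the set of products has exactly one element.
```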
1,742,418
<p>So far I have this:</p> <p>What is the chance that at least one is a 2?</p> <p>Each die has probability $\frac{5}{6}$ of not showing a 2, so the probability of no 2 on any of the four dice is $\frac{5^4}{6^4}$.</p> <p>The probability of getting at least one 2 is $1 - \frac{5^4}{6^4} $</p> <p>Now I have no idea how to proceed from here.</p>
ervx
325,617
<p>Your answer to the first question is correct. For the second question, we use the formula for conditional probability. </p> <p>Let $A$ be the event that the first die is a $1$.</p> <p>Let $B$ be the event that at least one is a $2$.</p> <p>Then,</p> <p>$$ P(A|B)=\frac{P(A\cap B)}{P(B)}. $$</p> <p>You already concluded that $P(B)=1-\frac{5^{4}}{6^{4}}$. So, we need to find $P(A\cap B)$. Since the first die must be a $1$, one of the next three must be $2$, so, we have $P(A\cap B)=\left(\frac{1}{6}\right)\cdot \left(1-\frac{5^{3}}{6^{3}}\right)$.</p> <p>Hence, </p> <p>$$ P(A|B)=\frac{\left(\frac{1}{6}\right)\cdot \left(1-\frac{5^{3}}{6^{3}}\right)}{1-\frac{5^{4}}{6^{4}}}=\frac{91}{671}. $$</p>
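The value $\frac{91}{671}$ can be confirmed by exhaustively enumerating all $6^4=1296$ outcomes (a Python sketch with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

b_count = 0   # outcomes with at least one 2
ab_count = 0  # outcomes with first die equal to 1 AND at least one 2
for roll in product(range(1, 7), repeat=4):
    if 2 in roll:
        b_count += 1
        if roll[0] == 1:
            ab_count += 1

p_b = Fraction(b_count, 6 ** 4)        # P(B)
p_a_given_b = Fraction(ab_count, b_count)  # P(A | B)
```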
4,344,538
<p><strong>Lemma.</strong> Let <span class="math-container">$X$</span> be a topological vector space over <span class="math-container">$\mathbb{K}$</span> and <span class="math-container">$v\in X$</span>. Then the following mapping is continuous <span class="math-container">$$\begin{align*} \varphi _v:\mathbb{K}&amp;\rightarrow X \\ \xi &amp;\mapsto \xi v. \end{align*}$$</span></p> <p><em>Proof.</em> For any <span class="math-container">$\xi\in\mathbb{K}$</span>, we have <span class="math-container">$\varphi_v(\xi)=M(\psi _v\left ( \xi \right ))$</span>, where <span class="math-container">$\psi_v:\mathbb{K}\rightarrow\mathbb{K}\times X $</span> given by <span class="math-container">$\psi_v(\xi)=(\xi,v)$</span> is clearly continuous by definition of the product topology, and <span class="math-container">$M:\mathbb{K}\times X\rightarrow X$</span> is the scalar multiplication in the topological vector space <span class="math-container">$X$</span>, which is continuous by definition of a topological vector space. Hence, <span class="math-container">$\varphi_v$</span> is continuous as a composition of continuous mappings.</p> <p>My question is:</p> <ul> <li>Why is <span class="math-container">$\psi_v$</span> continuous by definition of the product topology? I have looked for the definition here <a href="https://en.wikipedia.org/wiki/Product_topology#Definition" rel="nofollow noreferrer">Product topology</a> but there is no information that I need.</li> </ul> <p>Can someone help me? Thanks.</p>
Paul Frost
349,785
<p>Let <span class="math-container">$\xi \in \mathbb K$</span> and <span class="math-container">$U$</span> be an open neighborhood of <span class="math-container">$(\xi,v)$</span> in <span class="math-container">$ \mathbb K \times X$</span>. By definition of the product topology there exist open neigbhorhoods <span class="math-container">$V$</span> of <span class="math-container">$\xi$</span> in <span class="math-container">$\mathbb K$</span> and <span class="math-container">$W$</span> of <span class="math-container">$v$</span> in <span class="math-container">$X$</span> such that <span class="math-container">$V \times W \subset U$</span>. Then <span class="math-container">$\psi_v(V) = V \times \{v\} \subset V \times W \subset U$</span>. This shows that <span class="math-container">$\psi_v$</span> is continuous in <span class="math-container">$\xi$</span>.</p>
1,110,411
<p>How can I prove that a continuous map $f : \mathbb{R}P^2 \to S^1$ is homotopic to the constant map? I know that in the projective space every point is a line but I do not get why the above has to be true.</p>
WWK
115,619
<p>Hint:</p> <p>$\pi_1(\mathbb{R}P^2)=\mathbb{Z}_2$, $\pi_1(S^1)=\mathbb{Z}$. So the induced map $f_*:\pi_1(\mathbb{R}P^2) \rightarrow \pi_1(S^1)$ can only be trivial. So by the lifting criterion, $f$ can be lifted to $\tilde{f}:\mathbb{R}P^2 \rightarrow \mathbb{R}$, and since $\mathbb{R}$ is contractible, $f$ is nullhomotopic. </p>
1,110,411
<p>How can I prove that a continuous map $f : \mathbb{R}P^2 \to S^1$ is homotopic to the constant map? I know that in the projective space every point is a line but I do not get why the above has to be true.</p>
Anthony Conway
116,246
<p>As $\pi_1(\mathbb{R}P^2)$ is finite, the subgroup $f_*(\pi_1(\mathbb{R}P^2))$ of $\pi_1(S^1)=\mathbb{Z}$ must be trivial: indeed there are no non-trivial finite subgroups of $\mathbb{Z}$. Denote by $p: \mathbb{R} \longrightarrow S^1$ the universal cover of the circle. Using the lifting criterion for covering spaces (because we now have $0=f_*(\pi_1(\mathbb{R}P^2)) \subset p_*(\pi_1(\mathbb{R}))=0$), your map $f$ lifts to a map $\tilde{f}: \mathbb{R}P^2 \rightarrow \mathbb{R}$. In other words $f$ factors through a contractible space (by definition of "lift", $f=p\tilde{f}$). Consequently $f$ is nullhomotopic.</p>
1,536,487
<p>$A$ is a real symmetric matrix and $E_1$ is a matrix whose $11$ entry is $1$ and rest are $0$. Further $A$ and $E_1$ don't commute. What can be said about eigenvalues of $A+E_1$</p>
Ron Gordon
53,268
<p>Use the binomial theorem:</p> <p>$$(\bar{z_1} t + \bar{z_2} (1-t))^n = \sum_{k=0}^n \binom{n}{k} \bar{z_1}^k \bar{z_2}^{n-k} t^k (1-t)^{n-k} $$</p> <p>Thus, the integral is</p> <p>$$(z_2-z_1)\sum_{k=0}^n \binom{n}{k} \bar{z_1}^k \bar{z_2}^{n-k} \int_0^1 dt \, t^k (1-t)^{n-k} = (z_2-z_1)\frac1{n+1} \bar{z_2}^n \sum_{k=0}^n \left (\frac{\bar{z_1}}{\bar{z_2}} \right )^k = \frac1{n+1} (\bar{z_2}^{n+1}-\bar{z_1}^{n+1}) \frac{z_2-z_1}{\bar{z_2}-\bar{z_1}}$$</p>
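A numerical spot check of the closed form (a Python sketch; the sample values $z_1=1+2i$, $z_2=3-i$, $n=4$ and the midpoint-rule resolution are arbitrary choices):

```python
z1, z2, n = 1 + 2j, 3 - 1j, 4   # arbitrary sample values
N = 100000
h = 1.0 / N

# Midpoint rule for (z2 - z1) * Integral_0^1 (conj(z1) t + conj(z2)(1-t))^n dt
num = (z2 - z1) * h * sum(
    (z1.conjugate() * t + z2.conjugate() * (1 - t)) ** n
    for t in ((k + 0.5) * h for k in range(N))
)

# Closed form from the computation above.
closed = ((z2.conjugate() ** (n + 1) - z1.conjugate() ** (n + 1)) / (n + 1)
          * (z2 - z1) / (z2.conjugate() - z1.conjugate()))
```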
1,536,487
<p>$A$ is a real symmetric matrix and $E_1$ is a matrix whose $11$ entry is $1$ and rest are $0$. Further $A$ and $E_1$ don't commute. What can be said about eigenvalues of $A+E_1$</p>
DanielWainfleet
254,665
<p>When a function $f:\mathbb{C}\to \mathbb{C}$ has a continuous complex derivative $f'$, and you restrict its domain to $\mathbb{R}$, it is easily shown that for $t\in \mathbb{R}$ we have $$Re (f'(t))=\frac {d Re (f(t))}{dt} \land Im f'(t)=\frac {d Im (f(t))}{dt}$$ and that these are continuous in $t$. Therefore $$\int_0^1f'(t)dt=\int_0^1Re (f'(t))dt+i\int_0^1Im (f'(t))dt= \int_0^1\frac {dRe (f(t))}{dt}dt+i\int_0^1\frac {dIm (f(t))}{dt}dt$$ $$=[Re (f(t))]_{t=0}^{t=1}+i[Im (f(t))]_{t=0}^{t=1} = f(1)-f(0).$$ In your question, we can let $f'(z)=(z \bar z_1+(1-z)\bar z_2)^n$ so (obviously) we can let $f(z)=(n+1)^{-1}(\bar z_1-\bar z_2)^{-1}(z\bar z_1+(1-z)\bar z_2)^{n+1}$. And the rest is obvious.</p>
1,350,062
<p>In almost all of the physics textbooks I have ever read, the author will write the oscillating function as</p> <blockquote> <p>$$x(t)=\cos\left(\omega t+\phi\right)$$</p> </blockquote> <p>My question is that, is there any practical or historical reason why we should prefer $\cos$ to $\sin$ here? One possible explanation I can think of is that, to trigger a harmonic oscillation movement, we usually push the mass (to the maximum displacement) from the balance point at the initial moment, for which the cosine function will be neater to use than sine ($\phi=0$). But is it really the case?</p>
tomi
215,986
<p>Another reason is to use the initial position as a parameter.</p> <p>Compare $z=z_0\lambda^t$ and $h=h_0\cos \omega t$</p> <p>You can't do the same with $\sin$ - at best you can say perhaps $d=d_{max}\sin \omega t$</p> <p>By the way, this argument fails once you introduce a phase shift!</p>
2,983,519
<p>I have to calculate the limit of this formula as <span class="math-container">$n\to \infty$</span>.</p> <p><span class="math-container">$$a_n = \frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl)$$</span></p> <p>I tried the Squeeze Theorem, but I get something like this:</p> <p><span class="math-container">$$\frac{1}{\sqrt{2}}\leftarrow\frac{n}{\sqrt{2n^2}}\le\frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl) \le \frac{n}{\sqrt{n^2+n}}\to1$$</span></p> <p>As you can see, the limits of two other sequences aren't the same. Can you give me some hints? Thank you in advance.</p>
Jack D'Aurizio
44,121
<p>Rearrange it as <span class="math-container">$$ \frac{1}{n}\left(\sqrt{\frac{n}{n+1}}+\sqrt{\frac{n}{n+2}}+\ldots+\sqrt{\frac{n}{n+n}}\right) = \frac{1}{n}\sum_{k=1}^{n}\frac{1}{\sqrt{1+\frac{k}{n}}}$$</span> which is a Riemann sum for <span class="math-container">$$ \int_{0}^{1}\frac{dx}{\sqrt{1+x}}=2\sqrt{2}-2.$$</span> Since <span class="math-container">$\frac{1}{\sqrt{1+x}}$</span> is a convex function on <span class="math-container">$[0,1]$</span>, the Hermite-Hadamard and Karamata's inequalities give us that <span class="math-container">$\{a_n\}_{n\geq 1}$</span> is an <em>increasing</em> sequence convergent to <span class="math-container">$2\sqrt{2}-2$</span>. Additionally it is not difficult to check that <span class="math-container">$a_n= 2\sqrt{2}-2-\Theta\left(\frac{1}{n}\right)$</span> as <span class="math-container">$n\to +\infty$</span>.</p>
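Both the limit $2\sqrt{2}-2\approx 0.82843$ and the monotone increase are easy to observe numerically (a Python sketch):

```python
from math import sqrt

def a(n):
    # a_n = (1/sqrt(n)) * sum_{k=1}^{n} 1/sqrt(n + k)
    return sum(1 / sqrt(n * (n + k)) for k in range(1, n + 1))

vals = [a(n) for n in (10, 100, 1000, 10000)]
limit = 2 * sqrt(2) - 2
# The sampled values increase toward the limit from below.
```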
198,612
<p>Let $(P,\leq)$ be a partially ordered set. A <em>down-set</em> is a set $d\subseteq P$ such that $x\in d$ and $x'\in P, x'\leq x$ imply $x'\in d$. If the down-set is totally ordered, we say it is a totally ordered down-set (tods).</p> <p>Let $d_1, d_2$ be tods. We say that they are <em>incompatible</em> if neither $d_1\subseteq d_2$ nor $d_2\subseteq d_1$ holds. A set of pairwise incompatible tods is called a <em>club</em>. A club $C$ said to be <em>complete</em> if for every maximal chain $m\subseteq P$ there is $c\in C$ such that $c\subseteq m$.</p> <p>Given a club $D$ consisting of finite members only, is there a complete club $C$ also consisting of finite sets only, and $C \supseteq D$?</p>
Bjørn Kjos-Hanssen
4,600
<p>Here's a simplified version of Dominic van der Zypen's counterexample: order finite binary strings by extension, with the empty string at the bottom. Consider the club $ D$ consisting of the tods generated by strings of the form $0^n 1$.</p>
3,379,756
<p>Why does the closed form of the summation <span class="math-container">$$\sum_{i=0}^{n-1} 1$$</span> equal <span class="math-container">$n$</span> instead of <span class="math-container">$n-1$</span>?</p>
heropup
118,193
<p>Suppose <span class="math-container">$n = 2$</span>. Then <span class="math-container">$n - 1 = 1$</span>, and the sum becomes <span class="math-container">$$\sum_{i=0}^1 1.$$</span> Clearly, there are two terms in this sum, corresponding to <span class="math-container">$i = 0$</span> and <span class="math-container">$i = 1$</span>. So the sum evaluates to <span class="math-container">$2$</span>.</p> <p>In the general case, when the lower index of summation begins at <span class="math-container">$1$</span>, then the counting of the terms is in a sense "natural" and we have <span class="math-container">$$\sum_{i=1}^{n-1} 1 = n-1.$$</span> But when the lower index of summation starts at <span class="math-container">$i = 0$</span>, then there is an extra term.</p>
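In Python terms, the two index conventions correspond exactly to `range(0, n)` and `range(1, n)` (a toy check with $n=7$):

```python
n = 7
terms_from_zero = sum(1 for i in range(0, n))  # i = 0, 1, ..., n-1: n terms
terms_from_one = sum(1 for i in range(1, n))   # i = 1, ..., n-1: n-1 terms
```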
2,322,481
<p>Look at this limit. I think this equality is true, but I'm not sure.</p> <p>$$\lim_{k\to\infty}\frac{\sum_{n=1}^{k} 2^{2\times3^{n}}}{2^{2\times3^{k}}}=1$$ For example, for $k=3$ the ratio is $1.000000000014$</p> <blockquote> <p>Is this limit <strong>mathematically correct</strong>?</p> </blockquote>
marty cohen
13,079
<p>Is $\lim_{k\to\infty}\frac{\sum_{n=1}^{k} 2^{2\times3^{n}}}{2^{2\times3^{k}}}=1 $?</p> <p>Let</p> <p>$\begin{array}\\ s(k) &amp;={2^{-2\times3^{k}}}\sum_{n=1}^{k} 2^{2\times3^{n}}\\ &amp;=\sum_{n=1}^{k} 2^{2\times3^{n}-2\times3^{k}}\\ &amp;=1+\sum_{n=1}^{k-1} 2^{2\times3^{n}-2\times3^{k}}\\ &amp;\ge 1\\ \text{and}\\ s(k) &amp;\le 1+\sum_{n=1}^{k-1} 2^{2\times3^{k-1}-2\times3^{k}}\\ &amp;= 1+(k-1)2^{2\times(3^{k-1}-3^{k})}\\ &amp;= 1+(k-1)2^{2\times(-2\times 3^{k-1})}\\ &amp;= 1+\dfrac{k-1}{2^{4\times 3^{k-1}}}\\ &amp;\to 1 \qquad\text{as } k \to \infty\\ \end{array} $</p> <p>Therefore $\lim_{k \to \infty} s(k) = 1 $.</p> <p>Note that this holds for $\lim_{k\to\infty}\dfrac{\sum_{n=1}^{k} a^{b\times c^{n}}}{a^{b\times c^{k}}} $ for any $a &gt; 1, b &gt; 0, c &gt; 1$.</p>
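Exact rational arithmetic (a Python sketch using `fractions.Fraction`; Python integers are arbitrary precision, so the huge powers are exact) reproduces both the near-$1$ ratio at $k=3$ and the upper bound $1+(k-1)/2^{4\times 3^{k-1}}$:

```python
from fractions import Fraction

def s(k):
    # Exact value of (sum_{n=1}^{k} 2^(2*3^n)) / 2^(2*3^k).
    return Fraction(sum(2 ** (2 * 3 ** n) for n in range(1, k + 1)),
                    2 ** (2 * 3 ** k))

ratios = {k: s(k) for k in (2, 3, 4)}
bounds_ok = all(
    1 <= s(k) <= 1 + Fraction(k - 1, 2 ** (4 * 3 ** (k - 1)))
    for k in (2, 3, 4)
)
```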
78,968
<p>Let $K_0$ be a bounded convex set in $\mathbf{R}^n$ within which lie two sets $K_1$ and $K_2$. Assume that,</p> <ol> <li>$K_1\cup K_2=K_0$ and $K_1\cap K_2=\emptyset$.</li> <li>The boundary between $K_1$ and $K_2$ is unknown. (To avoid the trivial case, we assume that the boundary is not a hyperplane.)</li> <li>Either $K_1$ or $K_2$ is a convex set, but we don't know which one is.</li> <li>We have two initial points $x,y$ on hand, where $x\in K_1$ and $y\in K_2$. (Please see the <em>update</em> below.)</li> </ol> <p>Essentially, $K_0$ can be viewed as a black box. Further assume that one can query any point in $K$ with a <em>membership oracle</em>, namely a procedure that given a point $x\in K$, reports the set contains $x$. </p> <p><strong>The goal is to determine which set is convex using as few membership queries as possible.</strong> </p> <p>Is there an algorithm for doing this?</p> <hr> <p><strong>Update:</strong></p> <p>To make sampling in $K_1$ and $K_2$ (also $K_0$) feasible, I further assume we have two initial points $x$ and $y$, where $x\in K_1$ and $y\in K_2$. Therefore, one can employ for instance hit-and-run algorithm with starting point $x$ (or $y$) to sample points in $K_1$ (or $K_2$).</p>
Niels J. Diepeveen
3,457
<p>Given the constraints, I think the following approach will be close to optimal.</p> <p>Start by randomly querying points until you have 1 point in one set and 2 in the other. You then have the endpoints of two segments crossing the boundary.</p> <p>By using bisection on each of those segments, you can find pairs of points in either set arbitrarily close to the boundary. If one of the sets, say $K_1$, is strictly convex, ultimately you will find two points in $K_2$ close enough to the boundary that their mean is in $K_1$. This then shows that $K_2$ is not convex.</p>
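Here is a toy 2-D implementation of this strategy (a hedged Python sketch; the concrete instance, a disk $K_1$ of radius $1/2$ inside the square $K_0=[-1,1]^2$, and all names are invented for illustration):

```python
def in_k1(p):
    # Toy membership oracle: K1 is the closed disk of radius 0.5,
    # K2 is the rest of the square K0 = [-1, 1]^2.
    return p[0] ** 2 + p[1] ** 2 <= 0.25

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def near_boundary_k2(inside, outside, steps=40):
    # Bisection along the segment; keeping the K2 endpoint converges
    # to a K2 point arbitrarily close to the boundary.
    for _ in range(steps):
        m = midpoint(inside, outside)
        if in_k1(m):
            inside = m
        else:
            outside = m
    return outside

p = near_boundary_k2((0.0, 0.0), (0.9, 0.0))
q = near_boundary_k2((0.0, 0.0), (0.0, 0.9))
k2_not_convex = in_k1(midpoint(p, q))  # mean of two K2 points lies in K1
```

After enough bisection steps, `p` and `q` are $K_2$ points essentially on the boundary, and their midpoint lands in $K_1$, certifying that $K_2$ is not convex.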
78,968
<p>Let $K_0$ be a bounded convex set in $\mathbf{R}^n$ within which lie two sets $K_1$ and $K_2$. Assume that,</p> <ol> <li>$K_1\cup K_2=K_0$ and $K_1\cap K_2=\emptyset$.</li> <li>The boundary between $K_1$ and $K_2$ is unknown. (To avoid the trivial case, we assume that the boundary is not a hyperplane.)</li> <li>Either $K_1$ or $K_2$ is a convex set, but we don't know which one is.</li> <li>We have two initial points $x,y$ on hand, where $x\in K_1$ and $y\in K_2$. (Please see the <em>update</em> below.)</li> </ol> <p>Essentially, $K_0$ can be viewed as a black box. Further assume that one can query any point in $K_0$ with a <em>membership oracle</em>, namely a procedure that, given a point $x\in K_0$, reports which set contains $x$. </p> <p><strong>The goal is to determine which set is convex using as few membership queries as possible.</strong> </p> <p>Is there an algorithm for doing this?</p> <hr> <p><strong>Update:</strong></p> <p>To make sampling in $K_1$ and $K_2$ (also $K_0$) feasible, I further assume we have two initial points $x$ and $y$, where $x\in K_1$ and $y\in K_2$. Therefore, one can employ for instance the hit-and-run algorithm with starting point $x$ (or $y$) to sample points in $K_1$ (or $K_2$).</p>
whuber
1,489
<p>Let $K_0$ be the unit disk in $\mathbb{R}^2$. $K_1$ consists of the point at $(1,0)$ and some other point $\zeta$ uniformly and randomly chosen on the boundary of $K_0$ (the unit circle). It is not convex. $K_2$ is the complement of $K_1$ in $K_0$. It <em>is</em> convex. Therefore this is a valid instance of the problem.</p> <p>Suppose you start with $x$ at $(1,0)$ any $y$ in $K_2$, say at $(0,0)$. The probability that any given algorithm ever queries $\zeta$ is $0$. Therefore, almost surely, the result of all queries will be to detect additional points in $K_2$, but never any additional points in $K_1$. One will therefore be unable to determine which of $K_1$ or $K_2$ is the convex subset.</p> <p>Contemplate a similar situation in which $K_1$ is the intersection of a ball of radius $\delta$ around $(1,0)$ with $K_0$ and again $x$ is at $(1,0)$. This time, $K_1$ is convex and $K_2$ is not. Given any algorithm that purports to solve the problem and any natural number $N$, choose $\delta$ so small that the algorithm does not probe $K_1$ within $N$ steps. (If it is a stochastic algorithm, the tiny <em>diameter</em> of $K_1$ implies the chance of probing it will be vanishingly small). This shows the number of steps in the algorithm is unbounded (or the expected number of steps is arbitrarily large if the algorithm is stochastic). The best we can say is that the two sets can be differentiated using a number of steps proportional to a ratio of the areas.</p> <p>To distinguish among the two cases, you are essentially undertaking a search for a second point in $K_1$ (which is tiny), but all your probes are always returning points in $K_2$. Evidently a completely random search (uniformly over $K_0$) is going to perform as well as anything else might. 
(Once you obtain $2$ points within each of $K_1$ and $K_2$, the game is almost over: it's straightforward to construct a small number of additional points that tell you which is the convex subset.)</p> <p>These considerations don't rule out a solution to the problem, but they imply either that the solution must depend on additional assumed properties of $K_1$ and $K_2$; <em>e.g.</em>, that they both have positive measure with a known nonzero lower bound, or else that a different model of space is needed; <em>e.g.</em>, that it is totally discrete and contains a finite number of points (which is a more realistic model of what computers usually do).</p>
675,857
<p>I am trying to figure out an easily understandable approach: given a small value of $\phi(n)$, list all possible $n$ with proof. I read the paper linked below, but it is really beyond my level to fathom.</p> <p>Attempt for $\phi(n)=110$:</p> <p>$$n=(2^x)\cdot(3^b)\cdot(11^c)\cdot(23^d)\quad\text{ since }\quad p-1 \mid \phi(n)=110 \text{ for every prime } p\mid n$$ and $x =\{0,1\}$, $b=\{0,1\}$, $c=\{0,1,2\}$, $d=\{0,1\}$.</p> <p>So there are a total of $2\cdot2\cdot3\cdot2 =24$ options to test whether $\phi(n)=110$.</p> <p>I am not sure whether this is enough to show that there are no other numbers.</p> <p>Look at this paper: <a href="http://arxiv.org/pdf/math/0404116v3.pdf" rel="nofollow">http://arxiv.org/pdf/math/0404116v3.pdf</a></p>
David
119,775
<p>If $$n=\prod_{p\,{\rm prime}}p^{\alpha(p)}\ ,$$ then you need $$\prod_{p\,{\rm prime}}p^{\alpha(p)-1}(p-1)=110\ .$$ Since $11\mid110$ you must have $p=11$ as one of the factors on the LHS. (It can't be $p-1=11$ as $12$ is not prime.). Then the exponent must be $\alpha(11)-1=1$ since $11^2\not\mid110$, and the $p-1$ factor is $10$. This accounts for all the non-trivial factors on the LHS, but you could also have a factor of $1$ in the form $$1=2^{1-1}(2-1)\ .$$ So there are two solutions, $n=11^2=121$ and $n=2\times11^2=242$.</p>
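The two solutions found here can be confirmed by brute force: using the standard bound $\phi(n)\ge\sqrt{n/2}$, any solution of $\phi(n)=110$ satisfies $n\le 2\cdot 110^2=24200$, which is small enough to enumerate (a quick check, assuming Python):

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

solutions = [n for n in range(1, 2 * 110 ** 2 + 1) if phi(n) == 110]
print(solutions)  # [121, 242]
```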
2,335,627
<p>Is the following statement true? If so, how can I prove it?</p> <blockquote> <p>If the power series $\sum_{n=0}^{\infty} a_n x^n$ converges for all $x \in (x_0, x_1)$ then it converges absolutely for all $x \in (-\max\{|x_0|, |x_1|\}, \max\{|x_0|, |x_1|\})$.</p> </blockquote>
hamam_Abdallah
369,188
<p>All power series are based on this</p> <p>$$0 &lt;a &lt;b\implies 0 &lt; \frac {a}{b}&lt;1$$</p> <p>and</p> <p>$|q|&lt;1\implies \sum q^n $ converges.</p> <p>The rest is artistic.</p>
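Spelling the hint out as the standard comparison argument (a sketch; the bound $M$ below is whatever bound the convergent series supplies): if $\sum a_n x_1^n$ converges at some point $x_1$, its terms tend to $0$, so they are bounded, say $|a_n x_1^n|\le M$. Then for any $x$ with $|x|$ smaller than $|x_1|$,

```latex
\left|a_n x^n\right| = \left|a_n x_1^n\right| \cdot \left|\frac{x}{x_1}\right|^n
  \le M q^n, \qquad q = \left|\frac{x}{x_1}\right| < 1,
```

so $\sum |a_n x^n|$ converges by comparison with the geometric series $\sum M q^n$. Taking points of $(x_0,x_1)$ with modulus arbitrarily close to $\max\{|x_0|,|x_1|\}$ then gives the claimed absolute convergence on the whole symmetric interval.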
509,236
<p>I had previously solved the problem of proving that $n^3-n-4$ must be a multiple of $5$, given that $n-3$ is a multiple of $5$. I did so by algebraically manipulating $n^3-n-4$ into:</p> <p>$$ 2(n-3)+(n-1)(n-1)((n-3)+5) $$</p> <p>Given that the first term is a given multiple of $5$, and the second term is a product of a multiple of $5$, I could prove directly that the sum of these terms was a multiple of $5$.</p> <p>With the new (reverse) problem, I can't do this direct algebraic manipulation, and I believe the way to go about this problem is proof by cases, where the list of exhaustive cases would be when the remainder is $0$, $1$, $2$, $3$, or $4$. </p> <p>My question is, what steps should I take to set up and show the proof for cases? I am new to these numerical theory problems, and so any basic guidance would be very much appreciated.</p>
Brian M. Scott
12,042
<p>Write $n$ in the form $5k+r$, where $r\in\{0,1,2,3,4\}$. Show that $n^3-r^3$ is a multiple of $5$, as of course is $n-r$, so that</p> <p>$$(n^3-n-4)-(r^3-r-4)=(n^3-r^3)-(n-r)$$</p> <p>is a multiple of $5$. Thus, if $n^3-n-4$ is a multiple of $5$, so is $r^3-r-4$. Now show that $3$ is the only value of $r$ for which $r^3-r-4$ is a multiple of $5$.</p>
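The final check in this hint is a five-case enumeration, small enough to verify mechanically (a quick check, assuming Python):

```python
# r^3 - r - 4 modulo 5 for each possible remainder r
residues = {r: (r ** 3 - r - 4) % 5 for r in range(5)}
print(residues)  # {0: 1, 1: 1, 2: 2, 3: 0, 4: 1} -- only r = 3 gives 0
```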
509,236
<p>I had previously solved the problem of proving that $n^3-n-4$ must be a multiple of $5$, given that $n-3$ is a multiple of $5$. I did so by algebraically manipulating $n^3-n-4$ into:</p> <p>$$ 2(n-3)+(n-1)(n-1)((n-3)+5) $$</p> <p>Given that the first term is a given multiple of $5$, and the second term is a product of a multiple of $5$, I could prove directly that the sum of these terms was a multiple of $5$.</p> <p>With the new (reverse) problem, I can't do this direct algebraic manipulation, and I believe the way to go about this problem is proof by cases, where the list of exhaustive cases would be when the remainder is $0$, $1$, $2$, $3$, or $4$. </p> <p>My question is, what steps should I take to set up and show the proof for cases? I am new to these numerical theory problems, and so any basic guidance would be very much appreciated.</p>
Cameron Buie
28,900
<p>We can write $n=5k+r,$ where $r\in\{0,1,2,3,4\}$ by division with remainder. Then $$\begin{align}n^3-n-4 &amp;= (5k+r)^3-(5k+r)-4\\ &amp;= (125k^3+75k^2r+15kr^2+r^3)-5k-r-4\\ &amp;= 5(25k^3+15k^2r+3kr^2-k)+r^3-r-4.\end{align}$$ All that remains, then, is to show that $r^3-r-4$ is not a multiple of $5$ when $r=0,1,2,4.$</p>
2,006,676
<p>I'm trying to show that for a non-empty set $X$, the following statements are true for logical statements $P(x)$ and $Q(x)$:</p> <ul> <li>$∃x∈X$, ($P(x)$ or $Q(x)$) $\iff$ $(∃x∈X, P(x))$ or $(∃x∈X, Q(x))$</li> <li>$∃x∈X$, ($P(x)$ and $Q(x)$) $\implies$ $(∃x∈X, P(x))$ and $(∃x∈X, Q(x))$</li> </ul> <p>Is it possible to use truth tables to show this? I can't think of any other way to go about it. Any help would be appreciated.</p>
rschwieb
29,335
<p>$T$ is the complement of the prime ideal $(x)$, so you are just localizing at a prime ideal, and therefore $T^{-1}R$ is a local ring. The units are those things outside the unique maximal ideal. Describe those elements.</p>
3,167,062
<p>Recently friend of mine showed me a way to write <span class="math-container">$f(x)=0$</span>, however I could not find anything about that in the Internet. Is it correct?</p> <p><span class="math-container">$f(x)=x^2-1\stackrel{set}{=}0 \\ x = 1 \lor x = -1$</span></p>
Lazar Ljubenović
24,269
<p>Apparently, this is known as <strong>perp product</strong>; however, the only online reference I could find after a (rather quick) search is <a href="http://geomalgorithms.com/vector_products.html" rel="nofollow noreferrer">this <em>geomalgorithms</em> page</a>. It's also <a href="http://mathworld.wolfram.com/PerpDotProduct.html" rel="nofollow noreferrer">on Wolfram</a>, where it mentions that Hill introduced it in 1994, in a chapter in “Graphic Gems IV”.</p> <p>He firstly defines <strong>perp operator</strong> (perpendicular operator) which gives a rotation of a vector by 90 degeres:</p> <p><span class="math-container">$$ \vec v^\perp = (v_x, v_y)^\perp = (-v_y, v_x). $$</span></p> <p>Then a <strong>perp product</strong> (perpendicular product) is defined, denoted as an infix operator <span class="math-container">$\perp$</span>:</p> <p><span class="math-container">$$ \vec v \perp \vec w := \vec v ^ \perp \cdot \vec w = v_x w_y - v_y w_x. $$</span></p> <p>The idea of using the perp product in the algorithm seems to come from the following property: <span class="math-container">$$\vec v \perp \vec w = 0 \Leftrightarrow \text{$\vec v$ and $\vec w$ are collinear}. $$</span></p> <p>At the end of the algorithm, the denominators are <span class="math-container">$\vec r \perp \vec s$</span>, which is then discussed for being zero, implying collinearity of the vectors and parallelity of the segments.</p> <hr> <p>Either way, a similar but much more intuitive discussion of the problem with the same idea of using parametric equations and perp product is given at <a href="http://geomalgorithms.com/a05-_intersect-1.html" rel="nofollow noreferrer">this page called “Intersections of Lines and Planes” on <em>geomalgorithms</em></a>. It also gives more details in the algorithm at the bottom, considering cases where one or both segments are degenerated into a single point.</p>
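The two definitions quoted in this answer translate directly to code (a sketch, assuming vectors are 2-tuples):

```python
def perp(v):
    # perp operator: rotate (v_x, v_y) by 90 degrees to (-v_y, v_x)
    return (-v[1], v[0])

def perp_dot(v, w):
    # perp product: v^perp . w = v_x * w_y - v_y * w_x
    return perp(v)[0] * w[0] + perp(v)[1] * w[1]

print(perp_dot((1, 2), (2, 4)))  # 0: the vectors are collinear
print(perp_dot((1, 0), (0, 1)))  # 1
```

The zero test in the first print is exactly the collinearity criterion the segment-intersection algorithm relies on.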
2,159,136
<p>I arrived at this question while solving a question paper. The question is as follows:</p> <blockquote> <p>If $f_k(x)=\frac{1}{k}\left(\sin^kx + \cos^kx\right)$, where $x$ belongs to $\mathbb{R}$ and $k&gt;1$, then $f_4(x)-f_6(x)=?$</p> </blockquote> <p>I started as </p> <p>$$\begin{align} f_4(x)-f_6(x)&amp;=\frac{1}{4}(\sin^4x + \cos^4x) - \frac{1}{6}(\sin^6x + \cos^6x) \tag{1}\\[4pt] &amp;=\frac{3}{12}\sin^4x + \frac{3}{12}\cos^4x - \frac{2}{12}\sin^6x - \frac{2}{12}\cos^6x \tag{2}\\[4pt] &amp;=\frac{1}{12}\left(3\sin^4x + 3\cos^4x - 2\sin^6x - 2\cos^6x\right) \tag{3}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(3-2\sin^2x\right) + \cos^4x\left(3-2\cos^2x\right)\right] \tag{4}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(1-2\cos^2x\right) + \cos^4x\left(1-2\sin^2x\right)\right] \tag{5} \\[4pt] &amp;\qquad\quad \text{(substituting $\sin^2x=1-\cos^2x$ and $\cos^2x=1-\sin^2x$)} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x-2\cos^2x\sin^4x+\cos^4x-2\sin^2x\cos^4x\right) \tag{6} \\[4pt] &amp;=\frac{1}{12}\left[\sin^4x+\cos^4x-2\cos^2x\sin^2x\left(\sin^2x+\cos^2x\right)\right] \tag{7} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x+\cos^4x-2\cos^2x\sin^2x\right) \tag{8} \\[4pt] &amp;\qquad\quad\text{(because $\sin^2x+\cos^2x=1$)} \\[4pt] &amp;=\frac{1}{12}\left(\cos^2x-\sin^2x\right)^2 \tag{9} \\[4pt] &amp;=\frac{1}{12}\cos^2(2x) \tag{10}\\[4pt] &amp;\qquad\quad\text{(because $\cos^2x-\sin^2x=\cos2x$)} \end{align}$$</p> <p>Hence the answer should be ...</p> <p>$$f_4(x)-f_6(x)=\frac{1}{12}\cos^2(2x)$$</p> <p>... but the answer given was $\frac{1}{12}$.</p> <p>I know this might be a very simple question, but trying many times also didn't give me the right answer. Please tell me where I am going wrong.</p>
kotomord
382,886
<p>Hint: let $x=\frac{\pi}{4}$.</p> <p>With your answer, $f_4(x)-f_6(x) = 0$.</p> <p>But by direct computation, $f_4(x)-f_6(x) = \frac{0.5}{4} - \frac{0.25}{6} = \frac{1}{8}-\frac{1}{24} =\frac{1}{12}$, so there is a sign error somewhere: in step $(5)$, $3-2\sin^2x=1+2\cos^2x$, not $1-2\cos^2x$.</p>
2,159,136
<p>I arrived at this question while solving a question paper. The question is as follows:</p> <blockquote> <p>If $f_k(x)=\frac{1}{k}\left(\sin^kx + \cos^kx\right)$, where $x$ belongs to $\mathbb{R}$ and $k&gt;1$, then $f_4(x)-f_6(x)=?$</p> </blockquote> <p>I started as </p> <p>$$\begin{align} f_4(x)-f_6(x)&amp;=\frac{1}{4}(\sin^4x + \cos^4x) - \frac{1}{6}(\sin^6x + \cos^6x) \tag{1}\\[4pt] &amp;=\frac{3}{12}\sin^4x + \frac{3}{12}\cos^4x - \frac{2}{12}\sin^6x - \frac{2}{12}\cos^6x \tag{2}\\[4pt] &amp;=\frac{1}{12}\left(3\sin^4x + 3\cos^4x - 2\sin^6x - 2\cos^6x\right) \tag{3}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(3-2\sin^2x\right) + \cos^4x\left(3-2\cos^2x\right)\right] \tag{4}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(1-2\cos^2x\right) + \cos^4x\left(1-2\sin^2x\right)\right] \tag{5} \\[4pt] &amp;\qquad\quad \text{(substituting $\sin^2x=1-\cos^2x$ and $\cos^2x=1-\sin^2x$)} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x-2\cos^2x\sin^4x+\cos^4x-2\sin^2x\cos^4x\right) \tag{6} \\[4pt] &amp;=\frac{1}{12}\left[\sin^4x+\cos^4x-2\cos^2x\sin^2x\left(\sin^2x+\cos^2x\right)\right] \tag{7} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x+\cos^4x-2\cos^2x\sin^2x\right) \tag{8} \\[4pt] &amp;\qquad\quad\text{(because $\sin^2x+\cos^2x=1$)} \\[4pt] &amp;=\frac{1}{12}\left(\cos^2x-\sin^2x\right)^2 \tag{9} \\[4pt] &amp;=\frac{1}{12}\cos^2(2x) \tag{10}\\[4pt] &amp;\qquad\quad\text{(because $\cos^2x-\sin^2x=\cos2x$)} \end{align}$$</p> <p>Hence the answer should be ...</p> <p>$$f_4(x)-f_6(x)=\frac{1}{12}\cos^2(2x)$$</p> <p>... but the answer given was $\frac{1}{12}$.</p> <p>I know this might be a very simple question, but trying many times also didn't give me the right answer. Please tell me where I am going wrong.</p>
Fred
380,717
<p>Hint: let $f(x):=f_4(x)-f_6(x)$. Then show that $f'(x)=0$ for all $x$. Hence $f$ is constant. Furthermore: $f(0)=\frac{1}{12}$</p>
2,159,136
<p>I arrived at this question while solving a question paper. The question is as follows:</p> <blockquote> <p>If $f_k(x)=\frac{1}{k}\left(\sin^kx + \cos^kx\right)$, where $x$ belongs to $\mathbb{R}$ and $k&gt;1$, then $f_4(x)-f_6(x)=?$</p> </blockquote> <p>I started as </p> <p>$$\begin{align} f_4(x)-f_6(x)&amp;=\frac{1}{4}(\sin^4x + \cos^4x) - \frac{1}{6}(\sin^6x + \cos^6x) \tag{1}\\[4pt] &amp;=\frac{3}{12}\sin^4x + \frac{3}{12}\cos^4x - \frac{2}{12}\sin^6x - \frac{2}{12}\cos^6x \tag{2}\\[4pt] &amp;=\frac{1}{12}\left(3\sin^4x + 3\cos^4x - 2\sin^6x - 2\cos^6x\right) \tag{3}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(3-2\sin^2x\right) + \cos^4x\left(3-2\cos^2x\right)\right] \tag{4}\\[4pt] &amp;=\frac{1}{12}\left[\sin^4x\left(1-2\cos^2x\right) + \cos^4x\left(1-2\sin^2x\right)\right] \tag{5} \\[4pt] &amp;\qquad\quad \text{(substituting $\sin^2x=1-\cos^2x$ and $\cos^2x=1-\sin^2x$)} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x-2\cos^2x\sin^4x+\cos^4x-2\sin^2x\cos^4x\right) \tag{6} \\[4pt] &amp;=\frac{1}{12}\left[\sin^4x+\cos^4x-2\cos^2x\sin^2x\left(\sin^2x+\cos^2x\right)\right] \tag{7} \\[4pt] &amp;=\frac{1}{12}\left(\sin^4x+\cos^4x-2\cos^2x\sin^2x\right) \tag{8} \\[4pt] &amp;\qquad\quad\text{(because $\sin^2x+\cos^2x=1$)} \\[4pt] &amp;=\frac{1}{12}\left(\cos^2x-\sin^2x\right)^2 \tag{9} \\[4pt] &amp;=\frac{1}{12}\cos^2(2x) \tag{10}\\[4pt] &amp;\qquad\quad\text{(because $\cos^2x-\sin^2x=\cos2x$)} \end{align}$$</p> <p>Hence the answer should be ...</p> <p>$$f_4(x)-f_6(x)=\frac{1}{12}\cos^2(2x)$$</p> <p>... but the answer given was $\frac{1}{12}$.</p> <p>I know this might be a very simple question, but trying many times also didn't give me the right answer. Please tell me where I am going wrong.</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>$$\sin^6x+\cos^6x=(\sin^2x+\cos^2x)^3-3\sin^2x\cos^2x(\sin^2x+\cos^2x)=1-3\sin^2x\cos^2x$$</p> <p>$$\sin^4x+\cos^4x=(\sin^2x+\cos^2x)^2-2\sin^2x\cos^2x=1-2\sin^2x\cos^2x$$</p>
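Combining the two identities with $s=\sin^2x\cos^2x$ gives $f_4(x)-f_6(x)=\frac{1-2s}{4}-\frac{1-3s}{6}=\frac14-\frac16=\frac1{12}$, independent of $x$ (the $s$ terms cancel). A numerical spot-check (assuming Python):

```python
import math

def f(k, x):
    return (math.sin(x) ** k + math.cos(x) ** k) / k

samples = [0.0, 0.3, math.pi / 4, 1.0, 2.5]
diffs = [f(4, x) - f(6, x) for x in samples]
print(diffs)  # every entry equals 1/12 up to rounding
```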
1,629,592
<p>I tried to prove this by first considering the possible last digit of $p$ when $p=n^2+5$, but that reasoning got me nowhere. Then I tried to prove it by contrapositive; however, I just couldn't really find where to start.<br> Hence I'm here asking for some hints (only hints, no solution please).</p> <p>Many thanks,<br> D.</p>
quid
85,306
<p>Consider the last digits of squares, so $n^2$, these are $0,1,4,5,6,9$. </p> <p>From there you get the last digits of $n^2 +5$. </p> <p>Now, drop those from the list that cannot be the last digit of a prime. </p>
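The enumeration this hint describes, filtered by which last digits a prime larger than $5$ can have, is quick to carry out (assuming Python):

```python
square_endings = sorted({(n * n) % 10 for n in range(10)})
candidate_endings = sorted({(d + 5) % 10 for d in square_endings})
prime_friendly = [d for d in candidate_endings if d in (1, 3, 7, 9)]

print(square_endings)     # [0, 1, 4, 5, 6, 9]
print(candidate_endings)  # [0, 1, 4, 5, 6, 9] -- the same set, shifted by 5
print(prime_friendly)     # [1, 9]
```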
1,629,592
<p>I tried to prove this by first considering the possible last digit of $p$ when $p=n^2+5$, but that reasoning got me nowhere. Then I tried to prove it by contrapositive; however, I just couldn't really find where to start.<br> Hence I'm here asking for some hints (only hints, no solution please).</p> <p>Many thanks,<br> D.</p>
John
7,163
<p>Hint: The possible values of $n^2$ (mod $10$) are $0, 1, 4, 5, 6, 9$. Which ones can you eliminate since $n^2 + 5$ is not prime?</p>
58,912
<p>In the board game <a href="http://en.wikipedia.org/wiki/Hex_%28board_game%29" rel="nofollow">Hex</a>, players take turns coloring hexagons either red or blue. One player tries to connect the top and bottom edges of the board, colored red; the other tries to connect the left and right edges, colored blue. It is known that a game of Hex will never end in a tie: no matter how it is played, there will always be either a blue path connecting the blue edges, or a red path connecting the red edges.</p> <p>My question is, if this fact always holds for a finite grid of hexagons, does it also hold on the plane? If the top and bottom edges of a square are colored red, the left and right edges are colored blue, and the interior of the square is colored arbitrarily, must there be either a red path connecting the red edges, or a blue path connecting the blue edges?</p> <p>More formally, let $S$ be any subset of $[0, 1]^2$. $S$ will represent the points that are red. Must there be either a path within $S$ whose endpoints are of the form $(x, 0)$ and $(x, 1)$, or a path within $[0, 1]^2 - S$ whose endpoints are of the form $(0, y)$ and $(1, y)$?</p>
Tanner Swett
13,524
<p>An answer to this came to me today as I was thinking about how I might describe connectedness to a layperson. Path-connectedness is easy enough: a set is path-connected if, given two points in the set, you can get from one to the other by following a path within the set. If I'm not mistaken, connectedness in the plane can be defined like so: a set is connected if you <em>can't</em> draw a curve separating the set into two parts, such that the curve lies entirely <em>outside</em> the set.</p> <p>So, if we have an example of a set that is connected but not path-connected, we ought to be able to make it into a counterexample. And sure enough, the topologist's sine curve seems to do the trick. Let $S$ be the graph of the function</p> <p>$$f(x) =\frac12 + \frac14 \sin \left (\frac{1}{x - 1/2} \right), \text{ if $x \ne 1/2$}; \qquad 1/2, \text{ otherwise}.$$</p> <p>Since this function is not continuous (and its graph is not path-connected), it is impossible to go from the left edge to the right edge while staying inside $S$. But since the function's graph is connected, it is impossible to go from the top edge to the bottom edge while staying <em>outside</em> the function—if this were possible, we would be able to separate the graph into the part to the left of the curve, and the part to the right of the curve, showing that the graph is disconnected.</p>
638,922
<p>If $q \in \mathbb{H}$ satisfies $qi = iq$, prove that $q \in \mathbb{C}$.</p> <p>This seems kind of intuitive, since quaternions extend the complex numbers. I am thinking of using $ij = k,\ ji = -k$ and the analogous identities for all combinations of $i,j,k$, which I think means that I have to use $ijk = i^2 = -1$.</p>
David Holden
79,543
<p>let $q=a+bi+cj+dk = (a,b,c,d)$ so</p> <p>$$ qi = (-b,a,d,-c) \\ iq = (-b,a,-d,c) $$</p>
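These two coordinate formulas can be checked with a small Hamilton-product routine (a sketch; tuples $(a,b,c,d)$ encode $a+bi+cj+dk$):

```python
def qmul(p, q):
    # Hamilton product of quaternions (a, b, c, d) = a + b i + c j + d k
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

i = (0, 1, 0, 0)
q = (1, 2, 3, 4)
print(qmul(q, i))  # (-2, 1, 4, -3) = (-b, a, d, -c)
print(qmul(i, q))  # (-2, 1, -4, 3) = (-b, a, -d, c)
```

Equating the two products forces $d=-d$ and $c=-c$, i.e. $c=d=0$, leaving $q=a+bi\in\mathbb{C}$.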
638,922
<p>If $q \in \mathbb{H}$ satisfies $qi = iq$, prove that $q \in \mathbb{C}$.</p> <p>This seems kind of intuitive, since quaternions extend the complex numbers. I am thinking of using $ij = k,\ ji = -k$ and the analogous identities for all combinations of $i,j,k$, which I think means that I have to use $ijk = i^2 = -1$.</p>
DonAntonio
31,254
<p>If you put $\;q=a+bi+cj+dk\;$ , then</p> <p>$$\begin{align*}qi=ai-b-ck+dj\\ iq=ai-b+ck-dj\end{align*}$$</p> <p>Well, what do you deduce about the coefficients $\;a,b,c,d\in\Bbb R\;$ above?</p>
3,995,913
<p>We are looking at the following expression:</p> <p><span class="math-container">$$\frac{d}{dx}\int_{u(x)}^{v(x)}f(x) dx$$</span></p> <p>The solution is straightforward for this: <span class="math-container">$\frac{d}{dx}\int_{u(x)}^{v(x)}f(t) dt$</span>. Do we evaluate the given expression in like manner? Do we treat the <span class="math-container">$f(x) dx$</span> as if it were <span class="math-container">$f(t) dt$</span>?</p>
EDX
763,728
<p>Be careful with the names of the variables!</p> <p>The <span class="math-container">$x$</span> in <span class="math-container">$v(x)$</span> is not the same as the integration variable in <span class="math-container">$dx$</span>; let us call the latter <span class="math-container">$t$</span>, so the integrand is <span class="math-container">$f(t)\,dt$</span>.</p> <p>Consider your integral as a difference of primitives:</p> <p><span class="math-container">$$I(x)=F(v(x))-F(u(x))$$</span> with <span class="math-container">$F$</span> a primitive of <span class="math-container">$f$</span>. From here it is easy to apply the <strong>chain rule</strong> and obtain an expression for <span class="math-container">$I'(x)$</span>.</p>
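Differentiating $I(x)=F(v(x))-F(u(x))$ by the chain rule gives $I'(x)=f(v(x))\,v'(x)-f(u(x))\,u'(x)$. A worked instance with hypothetical choices $f(t)=t^2$, $u(x)=x$, $v(x)=x^2$ (assuming Python):

```python
def f(t):
    return t * t

def I_prime(x):
    # chain rule: f(v(x)) v'(x) - f(u(x)) u'(x) with v = x^2, u = x
    return f(x ** 2) * 2 * x - f(x) * 1

def I_prime_direct(x):
    # here I(x) = (x^6 - x^3) / 3, differentiated by hand
    return 2 * x ** 5 - x ** 2

for x in (0.5, 1.0, 2.0, -1.5):
    print(x, I_prime(x), I_prime_direct(x))  # the two columns agree
```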
1,470,378
<p>I am not sure I formatted it right, but I was studying for calculus and came across a problem I couldn't compute.</p> <p>$$\int \frac{x^2}{x^2-2}dx$$</p> <p>I have not learned partial fractions yet, so if this is a case where that is used, the technique might not work for me.</p> <p>What I have tried: integration by parts and substitution. I raised the denominator to the $-1$ power. I got 1/2ln|x^2-2|*3/2x^3, but I am certain that isn't right.</p>
Empty
174,970
<p><strong>Hint 1:</strong></p> <p>Put $x=\sqrt 2\sec \theta$. Then $\,dx=\sqrt 2 \sec\theta \tan \theta\,d\theta$.</p> <p><strong>Hint 2:</strong></p> <p>$\frac{x^2}{x^2-2}=1+\frac{2}{x^2-2}$</p>
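Hint 2 together with the split $\frac{2}{x^2-2}=\frac{1}{\sqrt2}\left(\frac{1}{x-\sqrt2}-\frac{1}{x+\sqrt2}\right)$ leads to the candidate antiderivative $F(x)=x+\frac{1}{\sqrt2}\ln\left|\frac{x-\sqrt2}{x+\sqrt2}\right|$ (this split is a partial-fraction step, which the asker hasn't met yet, so treat it as a check rather than the intended route). A central-difference test (assuming Python):

```python
import math

S = math.sqrt(2)

def F(x):
    # candidate antiderivative, valid on |x| > sqrt(2)
    return x + (1 / S) * math.log(abs((x - S) / (x + S)))

def integrand(x):
    return x ** 2 / (x ** 2 - 2)

h = 1e-6
for x in (2.0, 3.0, -5.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    print(x, numeric, integrand(x))  # the numerical derivative matches
```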
4,371,541
<p>The series representation <span class="math-container">$\sum1/n^s$</span> of the Riemann zeta function <span class="math-container">$\zeta(s)$</span> is known to converge for <span class="math-container">$\sigma&gt;1$</span>, where <span class="math-container">$s=\sigma+it$</span>.</p> <p>In order to show an infinite series is holomorphic, we need to show the series <strong>converges uniformly</strong>.</p> <p>However, the series <span class="math-container">$\sum 1/n^s$</span> doesn't converge uniformly in the domain <span class="math-container">$\sigma&gt;1$</span>. Intuitively, the closer <span class="math-container">$s$</span> is closer to <span class="math-container">$s=1$</span> the more quickly it diverges.</p> <p>To address this, various sources make an argument based on <strong>locally uniform convergence</strong>.</p> <p>The argument is that the series is uniformly convergent on any <span class="math-container">$\sigma&gt;a$</span> where <span class="math-container">$a&gt;1$</span>. The argument proceeds by saying that the union of all these domains is <span class="math-container">$\sigma&gt;1$</span>, so the series is uniformly convergent here.</p> <p><strong>This is where I am confused.</strong> Surely <span class="math-container">$\sigma&gt;a$</span> with <span class="math-container">$a&gt;1$</span> means <span class="math-container">$\sigma&gt;1$</span> which we agreed at the top is a domain in which the series does not converge uniformly.</p> <p><strong>Question:</strong> What am I misunderstanding about the standard argument? Where is my error?</p>
reuns
276,986
<p>As you said the series converges uniformly on <span class="math-container">$\Re(s) \ge 1+\epsilon$</span> (so that it is analytic there, by say Morera's theorem) and only locally uniformly on <span class="math-container">$\Re(s) &gt;1$</span> (so that it is analytic there as well). Analyticity is also immediate from expanding <span class="math-container">$\zeta(z+s)=\sum_{n\ge 1} n^{-z}\sum_{k\ge 0} \frac{s^k (-\log n)^k }{k!}$</span> for <span class="math-container">$|s|&lt;\Re(z)-1$</span>, using the absolute convergence to change the order of summation, obtaining <span class="math-container">$\zeta(z+s)= \sum_{k\ge 0}s^k \sum_{n\ge 1}\frac{ n^{-z} (-\log n)^k}{k!}$</span></p>
1,383,613
<p>Give a recursive definition of the sequence $a_n = 2^n$, $n = 2, 3, 4,\ldots$, where $a_1 = 2$.</p> <p>I'm just not sure how to approach these problems.</p> <p>Then it asks to give a definition for:</p> <p>$a_n = n^2-3n$, $n = 0, 1, 2,\ldots$</p> <p>Thanks for all the help!</p>
David
119,775
<p>Note that there are lots of answers to these questions. IMO those given by Terra Hyde are probably what your instructor is expecting, but you could also say:</p> <p>for the first one, $$a_{n+1}=a_n+2^n$$ and for the second $$a_{n+1}=a_n+a_{n-1}-n^2+7n-6$$ among many, many other possibilities.</p>
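Both recurrences given in this answer can be verified mechanically against the closed forms (a quick check, assuming Python):

```python
def a1(n):
    return 2 ** n          # first sequence: a_n = 2^n

def a2(n):
    return n * n - 3 * n   # second sequence: a_n = n^2 - 3n

# first recurrence: a_{n+1} = a_n + 2^n
assert all(a1(n + 1) == a1(n) + 2 ** n for n in range(1, 20))

# second recurrence: a_{n+1} = a_n + a_{n-1} - n^2 + 7n - 6
assert all(a2(n + 1) == a2(n) + a2(n - 1) - n * n + 7 * n - 6 for n in range(1, 20))

print("both recurrences check out")
```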
4,102,035
<p>I'm trying to solve this question and reach a proof. I have already proven that <span class="math-container">$P(B|A)&gt;P(B)$</span> and <span class="math-container">$P(B^c|A)&lt;P(B^c)$</span>, using the information from the question, but I am struggling to find out how to prove that <span class="math-container">$P(B^c|A^c)&gt;P(B^c)$</span>. <br> <br> <strong>What I have tried</strong>: I tried to write <span class="math-container">$(A^c \cap B^c)$</span> as <span class="math-container">$(A\cup B)^c$</span>, and here's what I have reached: <br> <span class="math-container">$P(B^c|A^c)=\frac {P((A \cup B)^c)}{P(A^c)}=\frac {1-(P(A)+P(B)-P(A\cap B))}{1-P(A)}$</span>, so in order to do my proof, I tried to substitute this into the inequality I'm trying to prove: <br> <span class="math-container">$\frac {1-(P(A)+P(B)-P(A\cap B))}{1-P(A)}-P(B^c)$</span>, and started trying to prove that it is <span class="math-container">$&gt;0$</span> (so I can then add <span class="math-container">$P(B^c)$</span> and complete my proof). <br> But stuff got really messy and I couldn't really reach a point where I can say it's bigger than zero. <br> <br> I would really appreciate any feedback and help.</p>
Rahul Madhavan
439,353
<p>You have said that you already showed <span class="math-container">$P(B|A)&gt;P(B)$</span>. I will use this. <span class="math-container">\begin{align*} P(B|A)&amp;&gt;P(B)\\ \frac{P(B\cap A)}{P(A)}&amp;&gt;P(B)\\ P(B\cap A)&amp;&gt;P(B)P(A)\tag 1 \end{align*}</span> Now consider <span class="math-container">$P(A^C)P(B^C)$</span> <span class="math-container">\begin{align*} P(A^C)P(B^C) &amp;=(1-P(A))(1-P(B))\\ &amp;=1-P(A)-P(B)+P(A)P(B)\\ &amp;&lt;1-P(A)-P(B)+P(A\cap B)\tag{By 1}\\ &amp;=1-(P(A)+P(B)-P(A\cap B))\\ &amp;=1-P(A\cup B)\\ &amp;=P\left((A\cup B)^C\right)\\ &amp;=P\left(A^C\cap B^C\right)\tag{by DeMorgan's laws}\\ \end{align*}</span> Therefore: <span class="math-container">\begin{align*} P(A^C)P(B^C)&amp;&lt;P\left(A^C\cap B^C\right)\\ P(B^C)&amp;&lt;\frac{P\left(A^C\cap B^C\right)}{P(A^C)}\\ P(B^C)&amp;&lt;P\left(B^C|A^C\right)\\ \end{align*}</span></p>
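A concrete joint distribution makes this inequality chain tangible (hypothetical numbers, assuming Python):

```python
# joint probabilities: P(A and B), P(A and not-B), P(not-A and B), P(not-A and not-B)
p_ab, p_abc, p_acb, p_acbc = 0.3, 0.1, 0.2, 0.4

p_a = p_ab + p_abc   # P(A) = 0.4
p_b = p_ab + p_acb   # P(B) = 0.5

assert p_ab / p_a > p_b                # hypothesis: P(B|A) > P(B)
assert p_ab > p_a * p_b                # step (1):   P(A and B) > P(A)P(B)
assert p_acbc / (1 - p_a) > 1 - p_b    # conclusion: P(not-B | not-A) > P(not-B)
print("all inequalities hold")
```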
4,102,035
<p>I'm trying to solve this question and reach a proof. I have already proven that <span class="math-container">$P(B|A)&gt;P(B)$</span> and <span class="math-container">$P(B^c|A)&lt;P(B^c)$</span>, using the information from the question, but I am struggling to find out how to prove that <span class="math-container">$P(B^c|A^c)&gt;P(B^c)$</span>. <br> <br> <strong>What I have tried</strong>: I tried to write <span class="math-container">$(A^c \cap B^c)$</span> as <span class="math-container">$(A\cup B)^c$</span>, and here's what I have reached: <br> <span class="math-container">$P(B^c|A^c)=\frac {P((A \cup B)^c)}{P(A^c)}=\frac {1-(P(A)+P(B)-P(A\cap B))}{1-P(A)}$</span>, so in order to do my proof, I tried to substitute this into the inequality I'm trying to prove: <br> <span class="math-container">$\frac {1-(P(A)+P(B)-P(A\cap B))}{1-P(A)}-P(B^c)$</span>, and started trying to prove that it is <span class="math-container">$&gt;0$</span> (so I can then add <span class="math-container">$P(B^c)$</span> and complete my proof). <br> But stuff got really messy and I couldn't really reach a point where I can say it's bigger than zero. <br> <br> I would really appreciate any feedback and help.</p>
João Víctor Melo
852,373
<p>A solution could be the following:</p> <p><span class="math-container">$$P(B^c | A^c) = P(A^c \cap B^c)/P(A^c) = [1-(P(A)+P(B)-P(A \cap B))]/P(A^c) $$</span></p> <p>Now we observe that the above value is bigger than:</p> <p><span class="math-container">$$[1-P(A) - P(B) + P(A)\times P(B)]/P(A^c)$$</span></p> <p>since <span class="math-container">$P(A \cap B) &gt; P(A)\times P(B)$</span>, which follows from <span class="math-container">$P(B|A) &gt; P(B)$</span>.</p> <p>Note that I used <span class="math-container">$1-P(A) = P(A^c)$</span> above and factored <span class="math-container">$P(B)[P(A) -1] = -P(B)\times P(A^c)$</span>, so the lower bound simplifies to <span class="math-container">$P(A^c)P(B^c)/P(A^c)=P(B^c)$</span>.</p> <p>And so,</p> <p><span class="math-container">$P(B^c|A^c) &gt; P(B^c)$</span>.</p>
4,117,377
<p>In calculating the integral</p> <p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</span></p> <p>by contour integration, we use</p> <p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx = \int_{-\infty}^{\infty} \frac{\operatorname{Im}(e^{ix})}{x}dx =\operatorname{Im}\left(\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$$</span></p> <p>but in the process, we have gone from an integral which is well-defined with no real singularities to one with a real singularity which in fact is just undefined as an improper integral. Therefore, in the source I am reading, we take the cauchy principal value (CPV) of the integral on the RHS instead of treating it as an improper integral. This principal value is calculated by use of the Residue Theorem.</p> <p>My question: There are different ways to treat singularities in integrals. How do we know that this one (the CPV) will give us the correct result for <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$</span>? Of course, knowing the answer by other methods, we can compare and see it was correct, but I'm looking to understand why the reasoning is valid.</p> <p><strong>Response to 1st answer</strong>: Simply saying that the integral converges is not enough. We need some way to know that in particular the CPV is the correct notion of integration for the exponential integral. Clearly, not any notion of integration which converges must give the correct result.</p> <p><strong>Response to 2nd answer</strong>: The question I ask is: by what reasoning is the notion of CPV in <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx =\operatorname{Im}\left(CPV\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$</span> justified. Of course the first integral is the same as an improper integral or CPV, this just doesn't answer the question.</p>
saulspatz
235,128
<p>The value of the improper integral is <span class="math-container">$$\lim_{\substack{z\to\infty \\ y\to-\infty}}\int_y^z\frac{\sin x}{x}\,\mathrm{d}x$$</span> Clearly, if this limit exists, then so does the Cauchy principal value, and it has the same value as the integral.</p>
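The convergence of these symmetric truncations to $\pi$ can be seen numerically with composite Simpson's rule; the oscillatory tail of size $O(1/R)$ limits the attainable accuracy (a rough check, assuming Python):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return total * h / 3

approx = simpson(sinc, -400.0, 400.0, 80_000)
print(approx, math.pi)  # agreement to roughly 1/R
```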
4,117,377
Community
-1
<p>I don't think there is a way in which we can rigorously use the expression <span class="math-container">$$\operatorname{Im}\left(\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$$</span></p> <p>If you test the following in Wolfram Alpha, for example</p> <ul> <li><code>integral from 0.0000001 to inf of e^(ix)/x dx</code></li> <li><code>integral from 0.0000000001 to inf of e^(ix)/x dx</code></li> <li><code>integral from 0.00000000000001 to inf of e^(ix)/x dx</code></li> </ul> <p>you can see that the 'value' of this integral (we are using the Cauchy principal value of course) would be &quot;<span class="math-container">$\infty + i \tfrac{\pi}{2}$</span>&quot;. This is not a valid complex number; it's just a divergent integral, so we can't really take the imaginary part and just throw away the infinity.</p> <p>You may be able to do manipulations with the divergent integral and still get right answers, but I wouldn't trust it.</p> <p>Note that the convergence of a Riemann integral is stronger than the convergence of the C.P.V.</p>
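The blow-up of the real part can be seen without Wolfram Alpha. Below is a rough numeric sketch (added here; the substitution x = e^t just tames the 1/x factor): the cosine part of <span class="math-container">$\int_\varepsilon^1 e^{ix}/x\,dx$</span> grows like <span class="math-container">$\log(1/\varepsilon)$</span> as <span class="math-container">$\varepsilon\to0$</span>, while the sine part stays bounded.

```python
import math

def cos_part(eps, n=200000):
    # integral of cos(x)/x from eps to 1; substituting x = exp(t)
    # turns it into the integral of cos(exp(t)) dt over [log(eps), 0]
    a, b = math.log(eps), 0.0
    h = (b - a) / n
    s = 0.5 * (math.cos(math.exp(a)) + math.cos(math.exp(b)))
    s += sum(math.cos(math.exp(a + j * h)) for j in range(1, n))
    return h * s

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, cos_part(eps), math.log(1 / eps))
# cos_part tracks log(1/eps) plus a constant, i.e. it diverges as eps -> 0
```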
1,548,841
<p>As you can see in the image, a rectangle gets translated to another position in the coordinate system.</p> <p>The original coordinates are <code>A1(8,2)</code> and <code>B1(9,3)</code>; from the length <code>7</code> and the height <code>3</code> you can also deduce the remaining vertices of the rectangle.</p> <p><b>Now the rectangle gets moved.</b></p> <p>Now <code>A1</code> is at <code>A2(16,9)</code> and <code>B1</code> is located at <code>B2(16,11)</code>.</p> <p>This means that the rectangle got translated, rotated and stretched.</p> <p><b>How can I calculate the coordinates of the upper-left corner?</b></p> <p><a href="https://i.stack.imgur.com/ajwDY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ajwDY.jpg" alt="enter image description here"></a></p> <hr> <p><i>I first tried to calculate the stretching factor, but then I got stuck when trying to calculate the angle and the translation.</i></p> <p><b>Thanks for your help</b></p>
mathlove
78,967
<p>This answer uses complex numbers.</p> <p>We want to find $a,b,\theta\in\mathbb R$ where $0\lt\theta\lt \pi$ such that $$(16+9i)-(a+bi)=\sqrt 2(8+2i)(\cos\theta+i\sin\theta)$$ $$(16+11i)-(a+bi)=\sqrt 2(9+3i)(\cos\theta+i\sin\theta)$$ Then, solving $$16-a=\sqrt 2(8\cos\theta-2\sin\theta)$$ $$9-b=\sqrt 2(8\sin\theta+2\cos\theta)$$ $$16-a=\sqrt 2(9\cos\theta-3\sin\theta)$$ $$11-b=\sqrt 2(9\sin\theta+3\cos\theta)$$ gives $$a=10,\quad b=-1,\quad \theta=\frac{\pi}{4}$$</p> <p>Thus, the coordinate you want is $$\sqrt 2(3+4i)\left(\cos\frac{\pi}{4}+i\sin\frac{\pi}{4}\right)+(10-i)=9+6i,$$ i.e. $$\color{red}{(9,6)}$$</p>
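The solution is easy to verify numerically (a check added here, not part of the original argument); note that <span class="math-container">$\sqrt2\,e^{i\pi/4}=1+i$</span>:

```python
import cmath, math

w = math.sqrt(2) * cmath.exp(1j * math.pi / 4)  # scale-rotation factor, equals 1 + i
t = 10 - 1j                                     # translation a + bi = 10 - i

assert abs(w * (8 + 2j) + t - (16 + 9j)) < 1e-12   # A1 maps to A2
assert abs(w * (9 + 3j) + t - (16 + 11j)) < 1e-12  # B1 maps to B2
print(w * (3 + 4j) + t)  # the upper-left corner: 9 + 6i, i.e. (9, 6)
```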
567,668
<p>In my analysis class we are discussing integrable functions and I need help thinking through something. I don't want any answers, just hints about what I should be looking at.</p> <p><img src="https://i.stack.imgur.com/qKte6.png" alt="enter image description here"></p>
user729424
729,424
<p>If <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> have order <span class="math-container">$p$</span>, for some prime <span class="math-container">$p$</span>, then <span class="math-container">$G_1\times G_2$</span> will have <span class="math-container">$p+3\ge5$</span> subgroups, all of which are normal since <span class="math-container">$G_1\times G_2$</span> is Abelian. In every other case, <span class="math-container">$G_1\times G_2$</span> will have exactly four normal subgroups: <span class="math-container">$\left\{1\right\}\times\left\{1\right\}$</span>, <span class="math-container">$G_1\times\left\{1\right\}$</span>, <span class="math-container">$\left\{1\right\}\times G_2$</span>, and <span class="math-container">$G_1\times G_2$</span>.</p> <p>Proof: First note that <span class="math-container">$\left\{1\right\}\times\left\{1\right\}$</span>, <span class="math-container">$G_1\times\left\{1\right\}$</span>, <span class="math-container">$\left\{1\right\}\times G_2$</span>, and <span class="math-container">$G_1\times G_2$</span> are all normal subgroups of <span class="math-container">$G_1\times G_2$</span>. Since simple groups always have at least two elements, these four subgroups are distinct.</p> <p>Case One: <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> are isomorphic Abelian groups. In this case <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> are finite groups of order <span class="math-container">$p$</span>, for some prime <span class="math-container">$p$</span>. Every non-trivial proper subgroup of <span class="math-container">$G_1\times G_2$</span> has order <span class="math-container">$p$</span>, and is therefore cyclic.
Every non-identity element of <span class="math-container">$G_1\times G_2$</span> generates a cyclic group of order <span class="math-container">$p$</span>, and every cyclic subgroup of order <span class="math-container">$p$</span> contains <span class="math-container">$p-1$</span> non-identity elements. Hence, there are <span class="math-container">$\dfrac{p^2-1}{p-1}=p+1$</span> subgroups of order <span class="math-container">$p$</span>, and there are <span class="math-container">$p+3$</span> subgroups altogether, all of which are normal since <span class="math-container">$G_1\times G_2$</span> is Abelian.</p> <p>Case Two: <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> are non-isomorphic Abelian groups. In this case <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> are finite groups of order <span class="math-container">$p_1$</span>, <span class="math-container">$p_2$</span>, for distinct primes <span class="math-container">$p_1$</span>, <span class="math-container">$p_2$</span>. In this case <span class="math-container">$G_1\times G_2$</span> is cyclic. Hence, <span class="math-container">$G_1\times G_2$</span> has a unique subgroup of order <span class="math-container">$d$</span> for each <span class="math-container">$d$</span> that divides the order of the group. Hence <span class="math-container">$G_1\times G_2$</span> has exactly four subgroups. So the four normal subgroups listed above are the only subgroups of <span class="math-container">$G_1\times G_2$</span>.</p> <p>Case Three: One of <span class="math-container">$G_1$</span>, <span class="math-container">$G_2$</span> is not Abelian. 
Without any loss of generality, we can suppose it's <span class="math-container">$G_1$</span>.</p> <p>For <span class="math-container">$i=1,2$</span> let <span class="math-container">$\pi_i:G_1\times G_2\to G_i$</span> be the projection homomorphism, which send <span class="math-container">$(x_1,x_2)\mapsto x_i$</span>. For any normal subgroup <span class="math-container">$N$</span> of <span class="math-container">$G_1\times G_2$</span>, it is easy to check that <span class="math-container">$\pi_i(N)$</span> is normal in <span class="math-container">$G_i$</span>. Since each <span class="math-container">$G_i$</span> has exactly two normal subgroups, there are four possibilities. If one of <span class="math-container">$\pi_1(N)$</span>, <span class="math-container">$\pi_2(N)$</span> is trivial, then it is easy to show that <span class="math-container">$N$</span> is one of <span class="math-container">$\left\{1\right\}\times\left\{1\right\}$</span>, <span class="math-container">$G_1\times\left\{1\right\}$</span>, <span class="math-container">$\left\{1\right\}\times G_2$</span>. So suppose <span class="math-container">$\pi_i(N)=G_i$</span> for <span class="math-container">$i=1,2$</span>. Our goal is to show that in this case <span class="math-container">$N=G_1\times G_2$</span>.</p> <p>Let <span class="math-container">$K=\left\{x\in G_1\,|\,(x,1)\in N\right\}$</span>. It is easy to show that <span class="math-container">$K$</span> is normal in <span class="math-container">$G_1$</span>. So either <span class="math-container">$K$</span> is trivial or <span class="math-container">$K=G_1$</span>. It will follow from the fact that <span class="math-container">$G_1$</span> is non-Abelian that <span class="math-container">$K=G_1$</span>. Pick <span class="math-container">$g,x\in G_1$</span> that don't commute, so that <span class="math-container">$gxg^{-1}x^{-1}$</span> is non-trivial. 
Since <span class="math-container">$x\in G_1=\pi_1(N)$</span>, there is a <span class="math-container">$y\in G_2$</span> with <span class="math-container">$(x,y)\in N$</span>. Since <span class="math-container">$N$</span> is normal, if we conjugate <span class="math-container">$(x,y)$</span> by <span class="math-container">$(g,1)$</span>, we get that <span class="math-container">$(gxg^{-1},y)\in N$</span>. Since <span class="math-container">$(x,y),(gxg^{-1},y)\in N$</span>, it follows that </p> <p><span class="math-container">$$(gxg^{-1},y)\cdot(x,y)^{-1}=(gxg^{-1}x^{-1},1)\in N.$$</span></p> <p>Hence <span class="math-container">$gxg^{-1}x^{-1}$</span> is a non-trivial element of <span class="math-container">$K$</span>, so that <span class="math-container">$K=G_1$</span>.</p> <p>Now let <span class="math-container">$(x,y)\in G_1\times G_2$</span>. We want to show that <span class="math-container">$(x,y)\in N$</span>. Since <span class="math-container">$y\in G_2=\pi_2(N)$</span>, there is a <span class="math-container">$a\in G_1$</span> with <span class="math-container">$(a,y)\in N$</span>. Also, since <span class="math-container">$a\in G_1=K$</span>, <span class="math-container">$(a,1)\in N$</span>. So <span class="math-container">$(a,y)\cdot(a,1)^{-1}=(1,y)\in N$</span>. Also <span class="math-container">$x\in G_1=K$</span>, so that <span class="math-container">$(x,1)\in N$</span>. Hence <span class="math-container">$(x,1)\cdot(1,y)=(x,y)\in N$</span>. Hence <span class="math-container">$N=G_1\times G_2$</span>. <span class="math-container">$\square$</span></p>
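Case One's count of <span class="math-container">$p+3$</span> subgroups is small enough to confirm by brute force (an illustration added here; it relies on the fact, used above, that every proper nontrivial subgroup of a group of order <span class="math-container">$p^2$</span> is cyclic):

```python
def subgroup_count(p):
    # number of subgroups of Z_p x Z_p, for p prime
    elements = [(a, b) for a in range(p) for b in range(p)]
    subgroups = set()
    for g in elements:
        # cyclic subgroup generated by g (order 1 or p)
        cyc = frozenset(((k * g[0]) % p, (k * g[1]) % p) for k in range(p))
        subgroups.add(cyc)
    subgroups.add(frozenset(elements))  # the whole group
    return len(subgroups)

for p in (2, 3, 5, 7, 11):
    assert subgroup_count(p) == p + 3
```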
1,789,742
<p>I'm reading the book generatingfunctionology by Herbert Wilf and I came across a partial fraction expansion on page 20 that I cannot understand. The derivation is as follows:</p> <p>$$ \frac{1}{(1-x)(1-2x)...(1-kx)} = \sum_{j=1}^{k} \frac{\alpha_j}{1-jx} $$</p> <p>The book says to fix $r, 1 \leq r \leq k$, and multiply both sides by $1-rx$. Doing so, I get:</p> <p>$$ \frac{1}{(1-x)...(1-(r-1)x)(1-(r+1)x)...(1-kx)} = \frac{\alpha_1(1-rx)}{1-x} + ... + \frac{\alpha_{r-1}(1-rx)}{1-(r-1)x} + \alpha_r + \frac{\alpha_{r+1}(1-rx)}{1-(r+1)x} + ... + \frac{\alpha_k(1-rx)}{1-kx} $$</p> <p>In contrast, the book has:</p> <p>$$ \alpha_r = \frac{1}{(1-x)...(1-(r-1)x)(1-(r+1)x)...(1-kx)} $$</p> <p>I don't understand how the other fractions on the right side of my result cancel out to $0$. I tried with a small example where $k=3$ and I couldn't isolate $\alpha_2$ nicely after multiplying both sides by $1-2x$. Any pointers on this would be greatly appreciated.</p> <p>After this, the book goes on by letting $x=1/r$, resulting in the following:</p> <p>$$ \begin{aligned} \alpha_r &amp;= \frac{1}{(1-1/r)(1-2/r)...(1-(r-1)/r)(1-(r+1)/r)...(1-k/r)} \\ &amp;= (-1)^{k-r}\frac{r^{k-1}}{(r-1)!(k-r)!} &amp;&amp; (1 \leq r \leq k) \end{aligned} $$</p> <p>I also can't figure out how this is derived (I suspect it's using an identity that I'm not aware of.) Any help would be much appreciated. Thanks a lot!</p>
John Wayland Bales
246,513
<p>It appears that the $\alpha_j$ coefficients are being computed by the standard Heaviside method (sometimes called the "cover up" method).</p> <p><a href="https://en.wikipedia.org/wiki/Heaviside_cover-up_method" rel="nofollow">Heaviside's Method for partial fraction expansion</a></p>
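For concreteness, here is the cover-up rule in exact arithmetic for the <span class="math-container">$k=3$</span> case of Wilf's product (a small sketch added here; <code>alpha</code> is a made-up helper name): cover up the <span class="math-container">$1-rx$</span> factor and evaluate the rest at <span class="math-container">$x=1/r$</span>.

```python
from fractions import Fraction

k = 3

def alpha(r):
    # "cover up" the (1 - r x) factor and evaluate the rest at x = 1/r
    x = Fraction(1, r)
    prod = Fraction(1)
    for j in range(1, k + 1):
        if j != r:
            prod /= 1 - j * x
    return prod

# recombining the partial fractions at a sample point matches the product
x = Fraction(1, 10)
lhs = Fraction(1)
for j in range(1, k + 1):
    lhs /= 1 - j * x
rhs = sum(alpha(r) / (1 - r * x) for r in range(1, k + 1))
assert lhs == rhs
print([alpha(r) for r in range(1, k + 1)])  # alpha_1 = 1/2, alpha_2 = -4, alpha_3 = 9/2
```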
1,789,742
Ismail Bello
267,865
<p>To find the coefficients of the expansion, you can follow the usual "cover up" rule, which is essentially the result you have shown, albeit buried in indices and what not: $$ \frac{1}{(1-x)(1-2x)...(1-kx)} = \sum_{j=1}^{k} \frac{\alpha_j}{1-jx} $$ Say we want to find an arbitrary $\alpha_j$ for some value of $j \in \{1,2,...,k\}$. We call this $r$ to emphasise it, and to avoid confusion with the summation index. So we multiply both sides by $1-rx$: $$ \frac{1-rx}{(1-x)(1-2x)...(1-kx)} = \sum_{j=1}^{k} \alpha_j\frac{1-rx}{1-jx} $$ Since $r$ will coincide with one of the $j$'s, we can "pull this out" from the right hand side: $$ RHS = \alpha_r + \sum_{\underset{j\ne r}{j=1}}^{k} \alpha_j\frac{1-rx}{1-jx} $$ And for the left hand side, the $1-rx$ will "cancel" one of the denominator factors: $$ LHS = \frac{1}{\underset{\text{without the $1-rx$}}{(1-x)(1-2x)...(1-kx)}} $$ The expansion holds for all values of $x$, so we can set $x=1/r$ and simplify both sides to get the result. I'll stop here because I think doing it is fun :)</p>
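The resulting closed form (with sign factor <span class="math-container">$(-1)^{k-r}$</span>) can be double-checked against the cover-up values in exact arithmetic; this is just a verification sketch added here:

```python
from fractions import Fraction
from math import factorial

def alpha_coverup(k, r):
    # evaluate the product with the (1 - r x) factor removed, at x = 1/r
    x = Fraction(1, r)
    prod = Fraction(1)
    for j in range(1, k + 1):
        if j != r:
            prod /= 1 - j * x
    return prod

def alpha_closed(k, r):
    # Wilf's closed form: (-1)^(k-r) * r^(k-1) / ((r-1)! (k-r)!)
    return Fraction((-1) ** (k - r) * r ** (k - 1),
                    factorial(r - 1) * factorial(k - r))

for k in range(1, 9):
    for r in range(1, k + 1):
        assert alpha_coverup(k, r) == alpha_closed(k, r)
```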
1,686,904
<blockquote> <p>How can we prove that <span class="math-container">$$\binom{n}{1}\binom{n}{2}^2\binom{n}{3}^3\cdots \binom{n}{n}^n \leq \left(\frac{2^n}{n+1}\right)^{\binom{n+1}{2}}$$</span></p> </blockquote> <p><span class="math-container">$\bf{My\; Try::}$</span> Using <span class="math-container">$\bf{A.M\geq G.M\;,}$</span> we get</p> <p><span class="math-container">$$\binom{n}{0}+\binom{n}{1}+\binom{n}{2} + \cdots+\binom{n}{n}\geq (n+1)\cdot \left[\binom{n}{0}\cdot \binom{n}{1}\cdot \binom{n}{2} \cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$</span></p> <p>So <span class="math-container">$$2^n\geq (n+1)\left[\binom{n}{1}\cdot \binom{n}{2}\cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$</span></p> <p>How can I solve it after that? Help me.</p> <p>Thanks.</p>
Macavity
58,320
<p>You have the right approach, just need a small detour. </p> <p><strong>Hint</strong> Start with: $$\sum_k k \binom{n}{k} = n 2^{n-1} \tag{why?}$$</p> <p>Now apply AM-GM and watch your exponentiation.</p>
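Both the hint and the target inequality are easy to spot-check numerically (a sketch added here; logarithms are used to avoid overflow for larger <span class="math-container">$n$</span>):

```python
from math import comb, log

for n in range(1, 16):
    # the hint: sum_k k*C(n,k) = n * 2^(n-1)
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)

    # the inequality, compared on a log scale:
    # sum_k k*log C(n,k)  <=  C(n+1,2) * log(2^n/(n+1))
    lhs = sum(k * log(comb(n, k)) for k in range(1, n + 1))
    rhs = (n * (n + 1) / 2) * (n * log(2) - log(n + 1))
    assert lhs <= rhs + 1e-9
```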
244,489
<p>Given:</p> <p>${AA}\times{BC}=BDDB$</p> <p>Find $BDDB$:</p> <ol> <li>$1221$</li> <li>$3663$</li> <li>$4884$</li> <li>$2112$</li> </ol> <p>The way I solved it:</p> <p>First step - expansion &amp; dividing by constant ($11$): $AA\times{BC}$=$11A\times{BC}$</p> <ol> <li>$1221$ =&gt; $1221\div11$ =&gt; $111$</li> <li>$3663$ =&gt; $3663\div11$ =&gt; $333$</li> <li>$4884$ =&gt; $4884\div11$ =&gt; $444$</li> <li>$2112$ =&gt; $2112\div11$ =&gt; $192$</li> </ol> <p>Second step - each result is now equal to $A\times{BC}$. We're choosing the multipliers $A$ and $BC$ manually and in accordance with the initial condition. It takes <strong>a lot</strong> of time to pick a number and check whether it can be a multiplier.</p> <p>That way I get two pairs:</p> <p>$22*96$=$2112$</p> <p>$99*37$=$3663$</p> <p>Of course $99*37$=$3663$ is the right one.</p> <p>Is there a more efficient way to do this? Am I missing something?</p>
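One more efficient route is to let the machine do the second step: the search space is only <span class="math-container">$9\times90$</span> products, so a brute-force sweep (a sketch added here, not part of the original attempt) finds every consistent assignment at once, including <span class="math-container">$99\times37=3663$</span>.

```python
solutions = []
for A in range(1, 10):
    for B in range(1, 10):
        for C in range(10):
            prod = 11 * A * (10 * B + C)  # AA * BC
            if 1000 <= prod <= 9999:
                d = [int(ch) for ch in str(prod)]
                # digit pattern B D D B, with the same B as in BC
                if d[0] == d[3] == B and d[1] == d[2]:
                    solutions.append((A, 10 * B + C, prod))
print(solutions)  # contains (9, 37, 3663), i.e. 99 * 37 = 3663
```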
aditsu quit because SE is EVIL
51,012
<p>For the "square in triangle" subproblem, I think the solution always looks like <img src="https://i.stack.imgur.com/MAqih.png" alt="test"> </p> <p>where $0 \leq a \leq b$. If $a &gt; b$, then just flip the figure around the $x=y$ line.</p> <p>With the notations in the figure (if you don't like them, you can file a complaint), we have $t = s(\cos a + \sin a + \cos a / \tan b)$ so we have to minimize $f(a) = \sin a + (1 + 1 / \tan b) \cos a$.</p> <p>Analyzing the derivative, I found that the function is increasing up to $\tan a = 1 / (1 + 1 / \tan b)$ and then decreasing, so the minimum is at one of the extremes: $a = 0$ or $a = b$</p> <p>We get $f(0) = 1 + 1 / \tan b = (\sin b + \cos b) / \sin b$ and $f(b) = \sin b + \cos b + \cos^2b / \sin b = (1 + \sin b \cos b) / \sin b$.</p> <p>Since $0 &lt; \sin b$, $\cos b &lt; 1$, we have $(1 - \sin b)(1 - \cos b) / \sin b &gt; 0$ therefore $f(0) &lt; f(b)$ so we get the biggest square for $a = 0$.</p>
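The endpoint argument can be sanity-checked on a grid (a numeric sketch added here; the sample values of <span class="math-container">$b$</span> are arbitrary): on <span class="math-container">$[0,b]$</span> the minimum of <span class="math-container">$f(a)=\sin a+(1+1/\tan b)\cos a$</span> is indeed attained at <span class="math-container">$a=0$</span>.

```python
import math

def f(a, b):
    return math.sin(a) + (1 + 1 / math.tan(b)) * math.cos(a)

for b in (0.3, 0.7, 1.0, 1.4):
    grid = [b * j / 1000 for j in range(1001)]
    best = min(grid, key=lambda a: f(a, b))
    assert best == 0.0        # minimum of f over [0, b] sits at the endpoint a = 0
    assert f(0, b) < f(b, b)  # and f(0) < f(b), as derived above
```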
128,408
<p>The following:</p> <pre><code>mat = {{1., -3, -2}, {-1, 3, -2}, {0., 0., 0.}} MatrixFunction[Sinc, mat] </code></pre> <p>returns:</p> <pre><code>MatrixFunction::drvnnum: Unable to compute the matrix function because the subsequent derivative of the function Sinc[#1] &amp; at a numeric value is not numeric. </code></pre> <p>The documentation for this error gives examples where the function is not differentiable. But here, <code>Sinc</code> is, so why do I get this error?</p>
J. M.'s persistent exhaustion
50
<p>Two things come into play:</p> <ol> <li><p><code>Sinc'[0]</code> yields <code>Indeterminate</code>, when it should be $0$.</p></li> <li><p>This value needs to be evaluated in the computation of <code>MatrixFunction[Sinc, mat]</code>.</p></li> </ol> <p>In detail: for the computation of <code>MatrixFunction[]</code> for an inexact matrix $\mathbf A$, a preliminary step is to always perform the <a href="http://mathworld.wolfram.com/SchurDecomposition.html" rel="nofollow noreferrer">Schur decomposition</a> $\mathbf A=\mathbf Q\mathbf T\mathbf Q^\top$ (effectively, a similarity transformation into a triangular matrix). The reason for this is that the computation of a matrix function is easiest when a matrix is triangular.</p> <p>Here, then, is the required Schur decomposition:</p> <pre><code>MatrixForm /@ ({q, t} = SchurDecomposition[mat]) </code></pre> <p><img src="https://i.stack.imgur.com/BB7Z3.png" alt="Schur factors"></p> <p>The problem arises in the computation of <code>MatrixFunction[Sinc, t]</code>, since as mentioned previously, the evaluation of <code>Sinc'</code> at $0$ is needed, as evidenced by the following general formula:</p> <pre><code>MatrixFunction[f, {{0, a, b}, {0, c, d}, {0, 0, 0}}] // MatrixForm </code></pre> <p><img src="https://i.stack.imgur.com/3l1Cd.png" alt="MatrixFunction evaluation for a special triangle"></p> <p>A workaround in this case would be to use the explicit piecewise form of <code>Sinc[]</code>:</p> <pre><code>mySinc = Piecewise[{{1, # == 0}, {Sin[#]/#, True}}] &amp;; MatrixFunction[mySinc, mat] </code></pre> <p><img src="https://i.stack.imgur.com/H1Fzk.png" alt="sinc of a matrix"></p> <p>You can check that the result of <code>q.MatrixFunction[mySinc, t].Transpose[q]</code> is the same.</p> <p>However, <strong>this is not robust</strong>.
Consider the following evaluation:</p> <pre><code>MatrixFunction[f, {{0, a, b}, {0, 0, d}, {0, 0, 0}}] // MatrixForm </code></pre> <p><img src="https://i.stack.imgur.com/VsvBh.png" alt="MatrixFunction evaluation for a defective matrix"></p> <p>where it can be seen that the <em>second</em> derivative is now involved. Neither <code>Sinc''</code> nor <code>mySinc''</code> will yield the right answer if evaluated on a matrix like this or similar to this, since one function is indeterminate and the other gives $0$ when the correct value should be $-\frac13$. For matrices of larger dimension where even higher derivatives might be involved, this is even more troublesome.</p> <hr> <h2>What to do when <code>MatrixFunction[]</code> fails?</h2> <p>I've always maintained that one of the nice things about <em>Mathematica</em> is that if all else fails, one can always go back to the mathematical definition. The situation is no different for <code>MatrixFunction[]</code>; as I have previously mentioned here and in other threads, the function seems to be based on evaluating a given function on a triangular matrix that is similar to the original matrix given to it, generated through either <code>JordanDecomposition[]</code> (in the exact case) or <code>SchurDecomposition[]</code> (in the inexact case). Since these seem to be failing here, we could use a different definition of the function of a matrix. Luckily, <a href="http://dx.doi.org/10.2307/2306996" rel="nofollow noreferrer">there are several of them</a>.</p> <p>In particular, one of the most useful definitions of a matrix function is the following Cauchy-like contour integral:</p> <p>$$f(\mathbf A) = \frac{1}{2\pi i} \oint_\gamma f(z)\, (z \mathbf I- \mathbf A)^{-1}\,\mathrm dz$$</p> <p>where $\gamma$ is a closed contour enclosing the eigenvalues of $\mathbf A$, and where $f(z)$ is analytic within. 
(I previously mentioned this <a href="https://mathematica.stackexchange.com/a/92293">here</a>.)</p> <p>As it turns out, this definition is useful for both exact and inexact computations. Let's deal with the exact case first, using an exact version of the matrix in the OP:</p> <pre><code>mat = {{1, -3, -2}, {-1, 3, -2}, {0, 0, 0}}; </code></pre> <p>A slick way to use the contour integral definition is to recognize that the required computation is equivalent to the sum of the residues of the integrand at the eigenvalues of the original matrix, by virtue of the <a href="http://mathworld.wolfram.com/ResidueTheorem.html" rel="nofollow noreferrer">residue theorem</a> and the <a href="http://mathworld.wolfram.com/CauchyIntegralTheorem.html" rel="nofollow noreferrer">Cauchy integral theorem</a>.</p> <p>Thus, we can compute the supposed result of <code>MatrixFunction[Sinc, mat]</code> in this way:</p> <pre><code>Sum[Map[Residue[#, {z, λ}] &amp;, Sinc[z] Inverse[z IdentityMatrix[Length[mat]] - mat], {2}], {λ, Union[Eigenvalues[mat]]}] </code></pre> <p><img src="https://i.stack.imgur.com/H0ioE.png" alt="sine cardinal of a matrix"></p> <p>(Note that <code>Residue[]</code> had to be mapped manually, as it does not automatically thread over lists.)</p> <hr> <p>To demonstrate the application of the contour integral method to the inexact case, allow me to consider a different matrix and a different function with a removable singularity.</p> <p>In particular, consider the matrix</p> <pre><code>rm = {{2/3, 0, -2/3, -1/3}, {0, 1, 0, 0}, {1/3, 0, 2/3, -2/3}, {2/3, 0, 1/3, 2/3}}; </code></pre> <p>and suppose that you want to evaluate the function $f(\mathbf A)=(\mathbf A-\mathbf I)^{-1}\log\mathbf A$. 
The eigenvalues of this matrix are</p> <pre><code>eig = Eigenvalues[rm] {(-1)^(1/3), -(-1)^(2/3), 1, 1} </code></pre> <p>and thus, <code>MatrixFunction[Log[#]/(# - 1) &amp;, rm]</code> is not expected to work in this case (and neither will <code>Inverse[]</code> + <code>MatrixLog[]</code>). For numerical evaluation, we use <code>NIntegrate[]</code> along with an appropriate choice of contour.</p> <p>A particularly convenient contour can be obtained by treating the eigenvalues as points in the complex plane, and then slightly dilating the axis-aligned bounding rectangle that contains the eigenvalues. This can be done like so:</p> <pre><code>With[{ε = 1/20}, corners = #1 + I #2 &amp; @@@ Tuples[Transpose[(List @@ BoundingRegion[ReIm[eig], "MinRectangle"]) + {-ε, ε}]][[{2, 1, 3, 4, 2}]]]; </code></pre> <p>after which, one can then evaluate the contour integral:</p> <pre><code>NIntegrate[Log[z]/(z - 1) Inverse[z IdentityMatrix[4] - rm], {z, Sequence @@ corners} // Evaluate]/(2 π I) // Chop {{0.9379331214115499, 0, 0.271266454744782, 0.3333333333333591}, {0, 1.0000000000001494, 0, 0}, {-0.3333333333333591, 0, 0.9379331214115499, 0.271266454744782}, {-0.271266454744782, 0, -0.3333333333333591, 0.9379331214115499}} </code></pre> <p>(Some <code>NIntegrate::izero</code> messages are thrown, but these are due to the zero entries and are harmless in this case.)</p> <p>This compares favorably with the result of the residue-based method:</p> <pre><code>Sum[Map[Residue[#, {z, λ}] &amp;, Log[#]/(# - 1) &amp;[z] Inverse[z IdentityMatrix[4] - rm], {2}], {λ, Union[Eigenvalues[rm]]}] // FullSimplify </code></pre> <p><img src="https://i.stack.imgur.com/aULDS.png" alt="symbolic result"></p> <pre><code>N[%] {{0.9379331214114057, 0., 0.2712664547447392, 0.3333333333333333}, {0., 1., 0., 0.}, {-0.3333333333333333, 0., 0.9379331214114057, 0.2712664547447392}, {-0.2712664547447392, 0., -0.3333333333333333, 0.9379331214114057}} </code></pre> <hr> <p>For people who are interested in further 
details, I recommend reading <em><a href="http://books.google.com/books?id=S6gpNn1JmbgC" rel="nofollow noreferrer">Functions of Matrices: Theory and Computation</a></em> by Nick Higham.</p>
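The contour-integral definition is also easy to reproduce outside <em>Mathematica</em>. Below is a self-contained numeric sketch in Python (added here; trapezoid rule on the circle <span class="math-container">$z=2+3e^{i\theta}$</span>, which encloses the eigenvalues <span class="math-container">$4,0,0$</span> of the OP's matrix, with the <span class="math-container">$3\times3$</span> resolvent inverted by the adjugate formula):

```python
import cmath, math

A = [[1, -3, -2], [-1, 3, -2], [0, 0, 0]]

def inv3(M):
    # inverse of a 3x3 complex matrix via the adjugate formula
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def sinc_contour(A, n=800):
    # (1/(2 pi i)) * contour integral of sinc(z) (zI - A)^(-1) dz
    F = [[0j] * 3 for _ in range(3)]
    for k in range(n):
        e = cmath.exp(2j * math.pi * k / n)
        z = 2 + 3 * e                      # circle of radius 3 about 2
        R = inv3([[(z if r == c else 0) - A[r][c] for c in range(3)]
                  for r in range(3)])
        w = 3 * e / n                      # dz / (2 pi i), trapezoid weight
        fz = cmath.sin(z) / z              # z is never 0 on this contour
        for r in range(3):
            for c in range(3):
                F[r][c] += fz * w * R[r][c]
    return F

F = sinc_contour(A)
print([[round(x.real, 4) for x in row] for row in F])
# recovers {{0.7027, 0.8919, -0.2973}, {0.2973, 0.1081, 0.2973}, {0, 0, 1}}
```

Since the integrand is analytic on the contour, the periodic trapezoid rule converges geometrically, so even modest <code>n</code> gives machine-precision agreement with the Schur-based result.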
128,408
Dr. Wolfgang Hintze
16,361
<p><strong>EDIT #2</strong></p> <p>The regularization method is more generally applicable than the Cauchy method. </p> <p>Here's an example for which (according to J.M.) the Cauchy method fails but the regularization method works.</p> <p>The matrix is that of the OP (integer form)</p> <pre><code>matx = {{1, -3, -2}, {-1, 3, -2}, {0, 0, 0}}; Eigenvalues[matx] (* Out[90]= {4, 0, 0} *) </code></pre> <p>and the function has an essential singularity at 0. We obtain immediately</p> <pre><code>Limit[MatrixFunction[1/# Exp[-1/#] &amp;, matx + \[Epsilon] IdentityMatrix[3]], \[Epsilon] -&gt; 0] // FullSimplify (* Out[50]= {{1/(16 E^(1/4)), -(3/(16 E^(1/4))), 1/(16 E^(1/4))}, {-(1/(16 E^(1/4))), 3/(16 E^(1/4)), -(1/(16 E^(1/4)))}, {0, 0, 0}} *) </code></pre> <p>Really simple, and fast.</p> <p><strong>EDIT</strong></p> <p>J.M.'s example treated with the regularization method</p> <p>The matrix:</p> <pre><code>rm = {{2/3, 0, -(2/3), -(1/3)}, {0, 1, 0, 0}, {1/3, 0, 2/3, -(2/3)}, {2/3, 0, 1/3, 2/3}} </code></pre> <p>The task is to calculate the matrix function</p> <pre><code>MatrixFunction[Log[#]/(# - 1) &amp;, rm] During evaluation of In[29]:= MatrixFunction::fnand: The function Log[#1]/(#1-1)&amp; is not analytic or defined at 1. &gt;&gt; </code></pre> <p>The direct approach fails.</p> <p>But regularization makes life easy:</p> <pre><code>MatrixFunction[Log[#]/(# - 1) &amp;, rm + \[Epsilon] IdentityMatrix[4]]; Limit[%, \[Epsilon] -&gt; 0] // FullSimplify (* Out[33]= {{1/9 (3 + Sqrt[3] \[Pi]), 0, 1/9 (-3 + Sqrt[3] \[Pi]), 1/ 3}, {0, 1, 0, 0}, {-(1/3), 0, 1/9 (3 + Sqrt[3] \[Pi]), 1/9 (-3 + Sqrt[3] \[Pi])}, {1/9 (3 - Sqrt[3] \[Pi]), 0, -(1/3), 1/9 (3 + Sqrt[3] \[Pi])}} *) </code></pre> <p><strong>Summary</strong></p> <p>It may seem that the problem comes from the singularity of the matrix <code>mat</code>, and is naturally related to the problem with <code>Sin[x]/x/.x-&gt;0</code>. But J.M. showed that problems arise if the matrix is defective.
</p> <p>There is no problem with <code>MatrixFunction[Sinc, mat1]</code> if <code>Det[mat1] != 0</code>.</p> <p>We shall proceed here without exploring the root cause and show that the <code>MatrixFunction</code> of the OP can be computed exactly using two methods (a) power series, and (b) regularization, with the result:</p> <pre><code>MatrixFunction[Sinc, mat] == </code></pre> <p>$$\begin{pmatrix} \frac{1}{16} (\sin (4)+12) &amp; \frac{1}{16} (-3) (\sin (4)-4) &amp; \frac{1}{16} (\sin (4)-4) \\ \frac{1}{16} (4-\sin (4)) &amp; \frac{1}{16} (3 \sin (4)+4) &amp; \frac{1}{16} (4-\sin (4)) \\ 0 &amp; 0 &amp; 1 \\ \end{pmatrix}$$</p> <p><strong>Derivation</strong></p> <p><em>Power series</em></p> <p>We shall calculate the matrix function by a power series. This requires nothing but the powers of the matrix which always exist for a square matrix.</p> <p>Here is the power series of <code>Sinc[]</code>:</p> <pre><code>Sum[(-1)^k x^(2 k)/(2 k + 1)!, {k, 0, ∞}] (* Out[10]= Sin[x]/x *) </code></pre> <p>The matrix in question is </p> <pre><code>mat = {{1., -3, -2}, {-1, 3, -2}, {0.`, 0.`, 0.`}} (* Out[9]= {{1., -3, -2}, {-1, 3, -2}, {0., 0., 0.}} *) </code></pre> <p>In order to calculate the powers symbolically we take numerical constants and define</p> <pre><code>mat1 = Floor[mat] (* Out[11]= {{1, -3, -2}, {-1, 3, -2}, {0, 0, 0}} *) </code></pre> <p>From the first few even powers we can easily guess the general form:</p> <pre><code>mp[k_] := {{4^(2 k - 1), -3 4^(2 k - 1), 4^(2 k - 1)}, {-4^(2 k - 1), 3 4^(2 k - 1), -4^(2 k - 1)}, {0, 0, 0}}; </code></pre> <p>Notice that, due to the singularity of mat1, the zeroth power does not exist.</p> <pre><code>MatrixPower[mat1, 0] </code></pre> <blockquote> <p>During evaluation of In[15]:= MatrixPower::sing: Matrix {{1,-3,-2},{-1,3,-2},{0,0,0}} is singular. 
>></p> </blockquote> <pre><code>(* Out[15]= MatrixPower[{{1, -3, -2}, {-1, 3, -2}, {0, 0, 0}}, 0] *) </code></pre> <p>Hence we take care of the zeroth power by simply adding the unit matrix. We find finally</p> <pre><code>f = DiagonalMatrix[Array[1 &amp;, 3]] + Sum[(-1)^k mp[k]/(2 k + 1)!, {k, 1, ∞}] (* Out[46]= {{1 + 1/16 (-4 + Sin[4]), -(3/16) (-4 + Sin[4]), 1/16 (-4 + Sin[4])}, {1/16 (4 - Sin[4]), 1 + 3/16 (-4 + Sin[4]), 1/16 (4 - Sin[4])}, {0, 0, 1}} *) </code></pre> <p><em>Regularization</em></p> <p>If we make the matrix into a regular one by adding a small matrix</p> <pre><code>mat2 = mat1 + ε DiagonalMatrix[Array[1 &amp;, 3]]; Det[mat2] (* Out[48]= ε (4 ε + ε^2) *) </code></pre> <p>we can apply <code>MatrixFunction</code> without any problem:</p> <pre><code>f2 = MatrixFunction[Sinc, mat2]; </code></pre> <p>and then take the limit</p> <pre><code>f20 = Limit[f2, ε -&gt; 0]; </code></pre> <p>This gives the same result as before</p> <pre><code>f == f20 (* Out[52]= True *) </code></pre> <p><em>Numerical matrix, partial series</em></p> <p>Defining the partial series as</p> <pre><code>fn[nn_] := DiagonalMatrix[Array[1 &amp;, 3]] + Sum[(-1)^k MatrixPower[mat, 2 k]/(2 k + 1)!, {k, 1, nn}] </code></pre> <p>we can calculate the numerical result up to the nn-th term.</p> <p>For <code>nn = 8</code> we find</p> <pre><code>fn[8] // N (* Out[64]= {{0.7027, 0.8919, -0.2973}, {0.2973, 0.1081, 0.2973}, {0., 0., 1.}} *) </code></pre> <p>in fair agreement with the exact values:</p> <pre><code>f // N (* Out[65]= {{0.7027, 0.8919, -0.2973}, {0.2973, 0.1081, 0.2973}, {0., 0., 1.}} *) </code></pre> <p><em>Numerical matrix, regularization</em></p> <p>Regularizing the numerical matrix of the OP gives</p> <pre><code>mat4 = mat + ε DiagonalMatrix[Array[1 &amp;, 3]]; </code></pre> <p>Application of <code>MatrixFunction</code> with a given small <code>ε</code> </p> <pre><code>f4 = MatrixFunction[Sinc, mat4] /.
ε -&gt; 10^(-8) </code></pre> <blockquote> <p>During evaluation of In[96]:= JordanDecomposition::jdimp: Unable to find the Jordan decomposition of the matrix with the given precision. Try higher precision or SchurDecomposition instead. >></p> </blockquote> <pre><code>(* Out[96]= {{0.7027, 0.8919, -0.2973}, {0.2973, 0.1081, 0.2973}, {0., 0., 1.}} *) </code></pre> <p>results in the known numerical result despite the error message.</p> <p><em>Examples of singular matrices</em></p> <p>An extreme example is the zero matrix</p> <pre><code>mat3 = {{0, 0}, {0, 0}}; </code></pre> <p>Nevertheless, there's no problem with Sinc:</p> <pre><code>MatrixFunction[Sinc, mat3] (* Out[73]= {{1, 0}, {0, 1}} *) </code></pre> <p>Matrices of the type {{a, b}, {0, 0}} with a, b ∈ {0, 1}:</p> <pre><code>mm = {#, {0, 0}} &amp; /@ Union[Tuples[{1, 0, 0, 0}, 2]] (* Out[15]= {{{0, 0}, {0, 0}}, {{0, 1}, {0, 0}}, {{1, 0}, {0, 0}}, {{1, 1}, {0, 0}}} *) Table[{k, MatrixFunction[Sinc, mm[[k]]]}, {k, 1, 4}] During evaluation of In[18]:= MatrixFunction::fnanc: The function Sinc[#1]&amp; is not analytic at 0. &gt;&gt; (* Out[18]= { {1, {{1, 0}, {0, 1}}}, {2, MatrixFunction[Sinc, {{0, 1}, {0, 0}}]}, {3, {{Sinc[1], 0}, {0, 1}}}, {4, {{Sinc[1], -1 + Sinc[1]}, {0, 1}}} } *) </code></pre> <p>Notice that only for mm[[2]] does Mathematica complain about the presumed non-analyticity of Sinc at 0 and return the input unevaluated. </p> <p>Again the <strong>regularization method</strong> removes the problem:</p> <pre><code>meps = \[Epsilon] IdentityMatrix[2]; Table[{k, MatrixFunction[Sinc, meps + mm[[k]]]}, {k, 1, 4}]; Limit[%, \[Epsilon] -&gt; 0] (* Out[21]= {{1, {{1, 0}, {0, 1}}}, {2, {{1, 0}, {0, 1}}}, {3, {{Sin[1], 0}, {0, 1}}}, {4, {{Sin[1], -1 + Sin[1]}, {0, 1}}}} *) </code></pre>
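<p>As an independent sanity check, the same power series can be summed numerically outside Mathematica. The plain-Python sketch below assumes nothing beyond the series definition (the helper names <code>mat_mul</code> and <code>matrix_sinc</code> are illustrative, and the truncation order is chosen large enough for double precision); it reproduces the closed-form matrix given above:</p>

```python
import math

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_sinc(A, terms=30):
    """sinc(A) = sum_{k>=0} (-1)^k A^(2k) / (2k+1)!.

    The series uses only nonnegative matrix powers, so it is defined even
    for singular A -- the k = 0 term is simply the identity matrix.
    """
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]       # holds A^(2k), starting at k = 0
    A2 = mat_mul(A, A)
    for k in range(1, terms):
        power = mat_mul(power, A2)
        c = (-1) ** k / math.factorial(2 * k + 1)
        result = [[result[i][j] + c * power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

A = [[1.0, -3.0, -2.0], [-1.0, 3.0, -2.0], [0.0, 0.0, 0.0]]
s4 = math.sin(4.0)
expected = [[(12 + s4) / 16, -3 * (s4 - 4) / 16, (s4 - 4) / 16],
            [(4 - s4) / 16, (3 * s4 + 4) / 16, (4 - s4) / 16],
            [0.0, 0.0, 1.0]]
print(all(abs(matrix_sinc(A)[i][j] - expected[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

<p>Since only nonnegative matrix powers appear, no inverse of the singular matrix is ever needed, which is exactly why the series and regularization methods succeed.</p>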
3,304,542
<p>Let <span class="math-container">$X,Y$</span> be Banach spaces; show that <span class="math-container">$x_{n} \xrightarrow{w} x$</span> and <span class="math-container">$T \in BL(X,Y)\Rightarrow Tx_{n} \xrightarrow{w} Tx$</span></p> <p>Question: Does sequential continuity (which <span class="math-container">$T$</span> clearly has) necessarily imply that <span class="math-container">$T$</span> is weak-sequentially continuous? If so, then the above is trivial. </p> <p>Otherwise:</p> <p><span class="math-container">$\vert\ell(Tx_{n})-\ell(Tx)\vert=\vert\ell(Tx_{n}-Tx)\vert=\vert T^{*}\ell(x_{n}-x)\vert\xrightarrow{n\to \infty} 0$</span> since <span class="math-container">$T^{*}\ell \in X^{*}$</span>. I am somewhat unsure about this, since I have not used boundedness of <span class="math-container">$T$</span> anywhere.</p>
S. Maths
515,871
<p>As in the previous answer you can prove sequential continuity. A stronger result is the following: <strong>bounded linear operators are continuous in the weak topology.</strong></p> <p>Remark that the weak topology is in general not metrizable (on infinite-dimensional spaces), so continuity cannot be characterized by sequences. </p>
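<p>A one-line sketch of the sequential statement, making explicit where boundedness of <span class="math-container">$T$</span> enters (this is exactly the step the question was unsure about):</p>

```latex
\ell \in Y^{*} \;\Longrightarrow\;
|(\ell \circ T)(x)| \le \|\ell\|\,\|T\|\,\|x\|
\;\Longrightarrow\; \ell \circ T \in X^{*},
\qquad\text{hence}\qquad
\ell(Tx_{n}) = (\ell \circ T)(x_{n}) \longrightarrow (\ell \circ T)(x) = \ell(Tx).
```

<p>So boundedness is used exactly once: to guarantee that <span class="math-container">$\ell\circ T$</span> is again a continuous functional, i.e. <span class="math-container">$\ell\circ T=T^{*}\ell\in X^{*}$</span>.</p>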
3,522,681
<blockquote> <p>There are 8 people in a room: 4 males (M) and 4 females (F). What is the probability that no M-F pair has the same birthday? It is OK for males to share a birthday and for females to share a birthday. Assume there are <span class="math-container">$10$</span> total birthdays. </p> </blockquote> <p>I give a solution below. I am not sure whether it is correct; is there a more general way to approach it? I break it into 5 cases; summing these cases gives the total number of ways no M-F pair shares a birthday. Dividing the sum by <span class="math-container">$10^8$</span> gives the desired probability.</p> <p>Case 1: all men have different birthdays <span class="math-container">$N_1 = 10 \cdot 9 \cdot 8 \cdot 7 \cdot (10-4)^4$</span></p> <p>Case 2: one exact pair of men + two single men <span class="math-container">$N_2 = {\sideset{_{10}}{_1} C} \cdot {\sideset{_4}{_2} C} \cdot 9 \cdot 8 \cdot (10-3)^4$</span></p> <ul> <li>The first term chooses the single birthday for the pair of men. </li> <li>The second term selects the 2 men in the pair. 
</li> <li>The <span class="math-container">$9\cdot 8$</span> is the number of ways the two single men can choose their birthdays.</li> <li>The final term is the number of ways the <span class="math-container">$4$</span> women can choose from the remaining <span class="math-container">$10-3 = 7$</span> birthdays, avoiding the <span class="math-container">$3$</span> birthdays already used by the men.</li> </ul> <p>Case 3: two exact pairs of men <span class="math-container">$N_3 = {\sideset{_{10}}{_2} C} \cdot {\sideset{_4}{_2} C} \cdot {\sideset{_2}{_2} C} \cdot (10-2)^4$</span></p> <p>Case 4: one triple and one single man <span class="math-container">$N_4 = {\sideset{_{10}}{_1} C} \cdot {\sideset{_4}{_3} C} \cdot {\sideset{_1}{_1} C} \cdot {\sideset{_9}{_1} C} \cdot (10-2)^4$</span></p> <p>Case 5: all men have the same birthday <span class="math-container">$N_5 = {\sideset{_{10}}{_1} C} \cdot (10-1)^4$</span></p> <p>The sum of Cases <span class="math-container">$1$</span> to <span class="math-container">$5$</span> is the total number of ways with no shared M-F birthday. The last term in each case is the number of choices for the 4 women, <span class="math-container">$(10-k)^4$</span>, where <span class="math-container">$k$</span> is the number of distinct birthdays used by the men. I do not believe the order of the people matters: I calculate assuming all the men come first. Please comment on my approach.</p> <p>I have not found an understandable solution on this website.</p>
ironX
534,898
<p>Let <span class="math-container">$A$</span> be the event that no M-F pair shares the same birthday. Let <span class="math-container">$B_1$</span> be the event that all females share ONE birthday. </p> <p>Let <span class="math-container">$N(A \cap B_1)$</span> be the number of possible configurations realizing the event <span class="math-container">$A \cap B_1$</span>. </p> <p>I think <span class="math-container">$$N(A \cap B_1) = {10 \choose 1} {9 \choose 1} + \left [ {10 \choose 2}\cdot (2^4-2) \right ] \cdot {8 \choose 1} + \left [ {10 \choose 3} \cdot {4 \choose 2} \cdot 3! \right ] \cdot{7 \choose 1} + \left [ {10 \choose 4} \cdot 4! \right ] \cdot {6 \choose 1} $$</span></p> <p>The first term is {all men share the same birthday} <span class="math-container">$\cap B_1$</span>;</p> <p>the second term is {the men use exactly two distinct birthdays} <span class="math-container">$\cap B_1$</span> (choose the two birthdays, then distribute the 4 men onto them surjectively in <span class="math-container">$2^4-2$</span> ways);</p> <p>the third term is {the men use exactly three distinct birthdays} <span class="math-container">$\cap B_1$</span> (here <span class="math-container">${4 \choose 2} \cdot 3!$</span> counts the surjections of the 4 men onto the 3 birthdays);</p> <p>the fourth term is {the men use 4 distinct birthdays} <span class="math-container">$\cap B_1$</span>. </p> <p>I think we can calculate <span class="math-container">$N(A \cap B_i)$</span> for <span class="math-container">$i = 2,3,4$</span> in the same way, and then the result would be: </p> <p><span class="math-container">$$\frac{N(A \cap B_1) + N(A \cap B_2) + N(A \cap B_3) + N(A \cap B_4)}{10^8}$$</span></p> <p>Let me know any errors.</p>
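<p>Both case breakdowns can be sanity-checked exhaustively: enumerate all <span class="math-container">$10^4$</span> birthday assignments for the men and, for each, count the <span class="math-container">$(10-k)^4$</span> ways the women can avoid the <span class="math-container">$k$</span> distinct birthdays the men used. A short Python sketch of this check (exhaustive, hence exact):</p>

```python
from itertools import product

DAYS, MEN, WOMEN = 10, 4, 4

# For every assignment of men's birthdays, each woman may use any of the
# DAYS - k days not used by the men (k = number of distinct men's birthdays).
favourable = sum((DAYS - len(set(men))) ** WOMEN
                 for men in product(range(DAYS), repeat=MEN))
total = DAYS ** (MEN + WOMEN)

# The five cases from the question, summed:
cases = [10 * 9 * 8 * 7 * 6**4,    # Case 1: all four men distinct
         10 * 6 * 9 * 8 * 7**4,    # Case 2: one exact pair + two singles
         45 * 6 * 8**4,            # Case 3: two exact pairs
         10 * 4 * 9 * 8**4,        # Case 4: one triple + one single
         10 * 9**4]                # Case 5: all four men share one birthday
print(favourable, favourable / total, sum(cases) == favourable)
# 19550250 0.1955025 True
```

<p>So the case analysis in the question agrees with the brute-force count, and the probability is <span class="math-container">$19550250/10^8 = 0.1955025$</span>.</p>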
2,812,472
<p>I am trying to show that</p> <p>$$ \frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n! $$ at $x = 1$. I tried to prove it by induction but failed, because I lack axioms and rules for this type of derivative. </p> <p>Can someone give me a hint?</p>
Logic_Problem_42
338,002
<p>Since $x^2-1=(x-1)(x+1)$, one has to differentiate $(x-1)^n(x+1)^n$. By the Leibniz rule the result has the form $\sum a_{km}(x-1)^k(x+1)^m$ with $k,m\le n$. If we insert $x=1$, then all summands with $k&gt;0$ become zero. So it remains to consider the summands of the form $\sum_{k=0} a_{km}(x+1)^m$. There is in fact only one such summand, namely the one in which all $n$ derivatives fall on the factor $(x-1)^n$: it equals $(x+1)^n\frac{d^n}{dx^n}(x-1)^n= n!(x+1)^n$. In every other summand $(x-1)^n$ is differentiated fewer than $n$ times, so $k&gt;0$ in $a_{km}(x-1)^k(x+1)^m$. So, at the point $x=1$ you have $n!(1+1)^n=2^n\cdot n!$.</p>
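<p>The identity is easy to confirm for small $n$ by exact polynomial arithmetic; here is a minimal Python sketch (the helper names are illustrative, and no computer-algebra system is assumed):</p>

```python
from math import factorial

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def nth_derivative_at_one(n):
    """d^n/dx^n (x^2 - 1)^n evaluated at x = 1, using exact integers."""
    p = [1]
    for _ in range(n):
        p = poly_mul(p, [-1, 0, 1])          # multiply by (x^2 - 1)
    # n-fold differentiation sends c*x^k to (k!/(k-n)!) * c * x^(k-n);
    # evaluating at x = 1 then just sums the resulting coefficients.
    return sum(factorial(k) // factorial(k - n) * c
               for k, c in enumerate(p) if k >= n)

print(all(nth_derivative_at_one(n) == 2**n * factorial(n) for n in range(8)))
# True
```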
1,060,213
<p>I have searched the site quickly and have not come across this exact problem. I have noticed that a Pythagorean triple <code>(a,b,c)</code>, where <code>c</code> is the hypotenuse and <code>a</code> is prime, is always of the form <code>(a,b,b+1)</code>: the hypotenuse is one more than the non-prime side. Why is this so?</p>
Ross Millikan
1,827
<p>You don't need the formulas for generating Pythagorean triples. If you have $a^2=c^2-b^2=(c+b)(c-b)$ with $a$ prime, the only way to factor $a^2$ into distinct factors is $a^2 \cdot 1$, so $c-b=1, c+b=a^2$. </p> <p>Added: this shows how to find all the triangles with a side of $a$, prime or not. You factor $a^2$ into two factors which are distinct and of the same parity. The same-parity requirement comes from the fact that you need $b$ and $c$ to be integers. So if $a=12$, we can write $144= 2 \cdot 72, 4\cdot 36, 6\cdot 24, 8 \cdot 18$, but we can't write $144=1 \cdot 144, 9\cdot 16, 12 \cdot 12$. We then get triangles $(12,35,37),(12,16,20),(12,9,15),(12,5,13)$.</p>
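<p>The factoring recipe above translates directly into code; a short Python sketch (the function name is just for illustration):</p>

```python
def triples_with_leg(a):
    """All Pythagorean triples (a, b, c) with leg a, found by factoring
    a^2 = (c - b)(c + b) into distinct factors d < e of the same parity."""
    out = []
    for d in range(1, a):                    # d = c - b; d < e forces d < a
        e, r = divmod(a * a, d)
        if r == 0 and d < e and (d + e) % 2 == 0:
            out.append((a, (e - d) // 2, (e + d) // 2))
    return out

print(triples_with_leg(12))  # [(12, 35, 37), (12, 16, 20), (12, 9, 15), (12, 5, 13)]
print(triples_with_leg(5))   # [(5, 12, 13)]
```

<p>For any odd prime leg the list has exactly one entry, with $c=b+1$, matching the argument above.</p>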
2,197,915
<p>Suppose $X$ is a scheme. I have been studying (finite rank) locally free $\mathcal{O}_{X}$-modules, and more generally, quasi-coherent sheaves on $X$, mainly from Ravi Vakil's excellent <a href="http://math.stanford.edu/~vakil/216blog/FOAGfeb0717public.pdf" rel="noreferrer">notes</a> as well as Hartshorne. Let $\mathcal{F}$ be any quasi-coherent $\mathcal{O}_{X}$-module and let $\mathcal{G}$ be a locally free $\mathcal{O}_{X}$-module. I understand that for any affine $U = \operatorname{Spec} A \subset X$, we can find an $A$-module $M$ such that $\mathcal{F} \vert_{U} \simeq \widetilde{M}.$ Hence this property also holds for locally free $\mathcal{O}_{X}$-modules. However, how do I know that for any such affine subset we have $\mathcal{G} \vert_{U} \simeq \mathcal{O}_{U}^{\oplus n}$? I know that there exists a cover (not necessarily affine) $\left\lbrace U_{i} \right\rbrace_{i \in I}$ such that this holds locally at each $U_{i}$, but I am not sure how to show that the "freeness" property must then hold for any affine subset.</p> <p>I strongly suspect an argument can be made from the transition functions, but I haven't been able to make any progress.</p> <p>Any help is appreciated.</p> <p>Thanks</p>
Alex Mathers
227,652
<p>I don't think this is true. There exist (finitely generated) $A$-modules $M$ which are locally free but not free themselves, which means that $\widetilde M$ is a locally free $\mathcal O_{Spec(A)}$-module on $Spec(A)$, but not a free $\mathcal O_{Spec(A)}$-module. That is, there is a cover of $Spec(A)$ by affine opens (the distinguished open sets) on which the restriction of $\widetilde M$ is free, but $\widetilde M$ is not free on all affine opens because it's not free on $Spec(A)$ itself.</p>
2,197,915
<p>Suppose $X$ is a scheme. I have been studying (finite rank) locally free $\mathcal{O}_{X}$-modules, and more generally, quasi-coherent sheaves on $X$, mainly from Ravi Vakil's excellent <a href="http://math.stanford.edu/~vakil/216blog/FOAGfeb0717public.pdf" rel="noreferrer">notes</a> as well as Hartshorne. Let $\mathcal{F}$ be any quasi-coherent $\mathcal{O}_{X}$-module and let $\mathcal{G}$ be a locally free $\mathcal{O}_{X}$-module. I understand that for any affine $U = \operatorname{Spec} A \subset X$, we can find an $A$-module $M$ such that $\mathcal{F} \vert_{U} \simeq \widetilde{M}.$ Hence this property also holds for locally free $\mathcal{O}_{X}$-modules. However, how do I know that for any such affine subset we have $\mathcal{G} \vert_{U} \simeq \mathcal{O}_{U}^{\oplus n}$? I know that there exists a cover (not necessarily affine) $\left\lbrace U_{i} \right\rbrace_{i \in I}$ such that this holds locally at each $U_{i}$, but I am not sure how to show that the "freeness" property must then hold for any affine subset.</p> <p>I strongly suspect an argument can be made from the transition functions, but I haven't been able to make any progress.</p> <p>Any help is appreciated.</p> <p>Thanks</p>
Georges Elencwajg
3,217
<p>It is just not true that given a locally free sheaf $\mathcal G$ of rank $r$ and an arbitrary affine open subset $U\subset X$ the restriction $\mathcal G\vert U$ is isomorphic to the free $\mathcal O_U$-module $\mathcal O_U^{\oplus r}$.<br> Here is a counterexample: </p> <p>Consider a smooth projective curve $\overline X$ of positive genus (over $\mathbb C$, say) and a point $P\in \overline X$.<br> The curve $X=\overline X\setminus \{P\}$ is then affine.<br> Now let $Q\in X$ be an arbitrary point and consider the line bundle $\mathcal G=\mathcal O(Q)$, which is a locally free sheaf of rank one.<br> Although $U=X$ is affine, that line bundle is not trivial, i.e. it is not isomorphic to $\mathcal O_X$:<br> Indeed, if it were, there would exist a rational function $f\in Rat(X)$ with divisor $div f=1.Q$, and that rational function would extend to a rational function $\overline f\in Rat(\overline X)$ with divisor necessarily of the form $div (\overline f)=-1.P+1.Q$ (recall that the divisor of a rational function on $\overline X $ must have degree zero).<br> But this is a contradiction: on a smooth projective curve of positive genus two distinct points cannot be linearly equivalent. </p> <p><strong>Algebraic remark</strong><br> In the dictionary mentioned in the question translating quasi-coherent sheaves on $X$ into $A$-modules, the result above says that there exists a finitely generated projective module $\Gamma(X,\mathcal O_X(Q))$ of rank $1$ over $A=\Gamma(X,\mathcal O_X)$ which isn't isomorphic to $A$ as an $A$-module.</p>
332,001
<p><a href="https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(geometry)#Generalizations_and_related_results" rel="nofollow noreferrer">Wikipedia states</a> that A. D. Alexandrov generalized <em>Cauchy's rigidity theorem for polyhedra</em> to higher dimensions. </p> <p>The relevant statement in the article is not linked to any source. The sources at the end of the Wikipedia page seem to be only about <span class="math-container">$3$</span>-dimensional polyhedra as well, in particular Alexandrov's book "Convex polyhedra".</p> <blockquote> <p>Where can I find a reference for that statement?</p> </blockquote>
Joseph O'Rourke
6,094
<p>This may help:</p> <blockquote> <p>Bauer, C. "Infinitesimal Rigidity of Convex Polytopes." <em>Discrete Comput Geom</em> (1999) 22: 177. <a href="https://doi.org/10.1007/PL00009453" rel="noreferrer">https://doi.org/10.1007/PL00009453</a></p> </blockquote> <p>"Aleksandrov [1] proved that a simple convex <span class="math-container">$d$</span>-dimensional polytope, <span class="math-container">$d \ge 3$</span>, is infinitesimally rigid if the volumes of its facets satisfy a certain assumption of stationarity. We extend this result..."</p> <p>[1] is the 1958 <em>Convex Polyhedra</em> book.</p>
3,677,303
<p>Is there an example of a topological space that is both sequentially compact and Lindelöf but not compact?</p>
J.-E. Pin
89,374
<p>There is no such space. Indeed, every <a href="https://en.wikipedia.org/wiki/Sequentially_compact_space" rel="nofollow noreferrer">sequentially compact space</a> is <a href="https://en.wikipedia.org/wiki/Countably_compact_space" rel="nofollow noreferrer">countably compact</a>. And a countably compact space is compact if and only if it is Lindelöf.</p>
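<p>For completeness, a sketch of the two implications used here, under the usual definitions (countably compact: every countable open cover admits a finite subcover):</p>

```latex
\textbf{(1) Sequentially compact $\Rightarrow$ countably compact:} if a countable
open cover $\{U_k\}_{k\in\mathbb{N}}$ had no finite subcover, pick
$x_n \notin U_1 \cup \dots \cup U_n$. A convergent subsequence would have its
limit in some $U_k$, forcing $x_n \in U_k$ for infinitely many $n \ge k$,
a contradiction.

\textbf{(2) Countably compact ${}+{}$ Lindel\"of $\Rightarrow$ compact:} every open
cover has a countable subcover (Lindel\"of), which in turn has a finite
subcover (countable compactness).
```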
3,677,303
<p>Is there an example of a topological space that is both sequentially compact and Lindelöf but not compact?</p>
MAS
417,353
<p>We know every sequentially compact space is countably compact. We also know that a countably compact <span class="math-container">$+$</span> Lindelöf space is compact. So there is no example of the kind you ask for. </p> <p>See the diagram on page <span class="math-container">$2$</span> <a href="https://cpb-us-w2.wpmucdn.com/sites.uwm.edu/dist/0/158/files/2016/10/751.F10.IIIB-C-16qokmp.pdf" rel="nofollow noreferrer">here</a>.</p>