| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,157,338 | <p>If we have a linear recurrence relation on a sequence <span class="math-container">$\{x_n\}$</span>, then I know how to find the worst case asymptotic growth. We consider the largest absolute value <span class="math-container">$\alpha$</span> of any root of the characteristic polynomial. Then, independent of the initial values, the asymptotic growth is <span class="math-container">$x_n=\mathcal{O}(\alpha^n)$</span>. I realize that sometimes you get an extra polynomial factor for repeated roots (roots of higher multiplicity), but let us ignore all polynomial factors for convenience.</p>
<p>For example if <span class="math-container">$x_n=x_{n-1}+x_{n-2}$</span>, then <span class="math-container">$x_n=\mathcal{O}(\phi^n)$</span> where <span class="math-container">$\phi$</span> is the golden ratio, because that is the largest absolute value of any root of <span class="math-container">$x^2-x-1$</span>.</p>
<p>My question is about the following. Say we have two linear recurrence relations on a sequence <span class="math-container">$\{x_n\}$</span> such that the first holds for some <span class="math-container">$n$</span>, but the second holds for other <span class="math-container">$n$</span>. For example, say we know that for all <span class="math-container">$n$</span> we have either <span class="math-container">$x_n=x_{n-1}+x_{n-2}$</span>, or <span class="math-container">$x_n=2x_{n-2}+4x_{n-3}$</span>, but we do not know which of the two holds for which values of <span class="math-container">$n$</span>.</p>
<p>My intuition tells me that, to find the worst case asymptotic growth, I just have to take the worst asymptotic growth of any of the recurrences in question. In the example I just gave we hence have <span class="math-container">$x_n=\mathcal{O}(2^n)$</span> by the second recurrence. However, I cannot come up with a rigorous argument for why this should be the case. So my question is whether my intuition is correct.</p>
<p>For some context, I am studying branching algorithms for <span class="math-container">$NP$</span>-hard problems. Oftentimes you want to have multiple cases for how you should branch, which is where the multiple recurrences come from. However, there is no clear way to predict which case will pop up, and hence which recurrence holds for which index. Note that you can get arbitrarily many different linear recurrences.</p>
| eyeballfrog | 395,748 | <p>Hint: express the operator as an infinite sum of derivatives, then work in Fourier space.</p>
|
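The intuition in the question can be checked numerically. The sketch below (Python, not from the thread; the initial values and the adversarial "always pick the larger value" rule are my assumptions) builds the worst-case mixed sequence and confirms it never exceeds $2^n$, matching the dominant root $t=2$ of the characteristic polynomial $t^3-2t-4=(t-2)(t^2+2t+2)$ of the second recurrence:

```python
# Adversarial check: at each step apply whichever of the two recurrences
# yields the larger value, then verify the result never exceeds 2^n.
# Illustrative sketch, not a proof; initial values chosen arbitrarily.

def worst_case_sequence(n_terms):
    x = [1.0, 1.0, 1.0]  # x_1, x_2, x_3 (assumed starting values)
    for n in range(3, n_terms):
        r1 = x[n - 1] + x[n - 2]          # x_n = x_{n-1} + x_{n-2}
        r2 = 2 * x[n - 2] + 4 * x[n - 3]  # x_n = 2 x_{n-2} + 4 x_{n-3}
        x.append(max(r1, r2))             # the adversary picks the worse case
    return x

xs = worst_case_sequence(60)
# Largest characteristic root of t^3 - 2t - 4 is t = 2, so expect x_n = O(2^n).
growth_ok = all(xs[n] <= 2.0 ** (n + 1) for n in range(len(xs)))
```

A simple induction mirrors the check: if $x_k \le 2^k$ for all $k < n$, then $2x_{n-2}+4x_{n-3} \le 2^{n-1}+2^{n-1}=2^n$ and $x_{n-1}+x_{n-2} < 2^n$, so the bound propagates regardless of which recurrence fires.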
3,028,986 | <p>How is this integral
<span class="math-container">$$\dfrac{1}{4} \int_{0}^{4\pi} \left| \cos \theta \right| \; d\theta$$</span>
equal to
<span class="math-container">$$\dfrac{1}{2} \int_{0}^{2\pi} \left| \cos \theta \right| \; d\theta$$</span> </p>
<p>While attempting to solve this integral, I found this in a solution manual. I know how to integrate it, but I don't see how these two are equal to one another.</p>
| TurlocTheRed | 397,318 | <p>Remember the integral represents a limit of a sum of areas. <span class="math-container">$d\theta$</span> represents a change in <span class="math-container">$\theta$</span>, the width of a rectangle. The value of the function represents the height. If the change in <span class="math-container">$\theta$</span> is constant, then all the rectangles have the same width. If you reach an interval where the heights of the rectangles, i.e. the function values, are the same as they were on some previous interval, you are adding congruent rectangles. Adding contributions to the area from this section of the interval is the same as doubling the area already covered while integrating through the earlier region. The cosine is a periodic function of period <span class="math-container">$2\pi$</span>. So it achieves the same function values going from 0 to <span class="math-container">$2\pi$</span> as it does from <span class="math-container">$2\pi$</span> to <span class="math-container">$4\pi$</span>.</p>
<p>Do you see any examples in those integral expressions in which part of an interval is ignored while what remains is doubled? </p>
|
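A quick numerical sanity check (a Python sketch, not part of the thread) confirms that the two expressions evaluate to the same number, namely $2$:

```python
import math

def midpoint_integral(f, a, b, n=200_000):
    """Plain composite midpoint rule; ample accuracy for this integrand."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f = lambda t: abs(math.cos(t))
lhs = 0.25 * midpoint_integral(f, 0.0, 4.0 * math.pi)  # (1/4) * integral over [0, 4pi]
rhs = 0.5 * midpoint_integral(f, 0.0, 2.0 * math.pi)   # (1/2) * integral over [0, 2pi]
```

Both come out to $2$, since $\int_0^{2\pi}|\cos\theta|\,d\theta = 4$ and the integral over $[0,4\pi]$ is twice that by periodicity.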
3,028,986 | <p>How is this integral
<span class="math-container">$$\dfrac{1}{4} \int_{0}^{4\pi} \left| \cos \theta \right| \; d\theta$$</span>
equal to
<span class="math-container">$$\dfrac{1}{2} \int_{0}^{2\pi} \left| \cos \theta \right| \; d\theta$$</span> </p>
<p>While attempting to solve this integral, I found this in a solution manual. I know how to integrate it, but I don't see how these two are equal to one another.</p>
| Paramanand Singh | 72,031 | <p>Use the formula <span class="math-container">$$\int_{0}^{2a}f(x)\,dx=2\int_{0}^{a}f(x)\,dx$$</span> provided <span class="math-container">$f$</span> satisfies <span class="math-container">$f(2a-x)=f(x)$</span>. The formula above is proved by splitting the integral as sum of integrals over <span class="math-container">$[0,a]$</span> and <span class="math-container">$[a, 2a]$</span> and then using substitution <span class="math-container">$x=2a-t$</span> in second integral.</p>
<p>Using this formula repeatedly we have <span class="math-container">$$\int_{0}^{4\pi}|\cos x|\, dx=2\int_{0}^{2\pi}|\cos x|\, dx=4\int_{0}^{\pi}|\cos x|\, dx=8\int_{0}^{\pi/2}|\cos x|\, dx=8$$</span></p>
|
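The chain of equalities in the answer can also be verified numerically; the following Python sketch (mine, not from the thread) checks each halving step:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f = lambda x: abs(math.cos(x))
I4pi  = midpoint_integral(f, 0.0, 4.0 * math.pi)
I2pi  = midpoint_integral(f, 0.0, 2.0 * math.pi)
Ipi   = midpoint_integral(f, 0.0, math.pi)
Ihalf = midpoint_integral(f, 0.0, math.pi / 2)
```

Each integral is twice the next one, and the whole chain collapses to $8\int_0^{\pi/2}\cos x\,dx = 8$.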
2,138,448 | <p>Survival game: Consider $3$ players, $A, B$ and $ C$, taking turns shooting at each other. Any player can shoot at only one opponent at a time (and each of them has to make a shot whenever it is his/her turn). </p>
<p>Each shot of $A$ is successful with probability $1/3$, each shot of $B$ is successful with probability $1$, and each shot of C is successful with probability $1/2$ (with all the outcomes being independent). </p>
<p>$A$ goes first, then $B$,
then $C$, then $A$, and so on, until one of them dies. Then, the remaining two will be shooting at each other, so that nobody ever makes two shots in a row: e.g. if $A$ gets $C$ shot, then $B$ goes next. </p>
<p>The game continues until only one player is left. Assume that every player is trying to find a strategy that maximises his/her probability of survival. Assume also that every player acts optimally and knows that the other players will act optimally too. </p>
<p>Who should player A shoot at first? What is the probability of survival of $ A$ (assuming he/she acts optimally)?</p>
<p>*Hint. A strategy is a sequence of decisions on who to shoot at at any given turn, given who is still left in the game. It is clear that, after $B$ shoots for the first time, there will be at most two players left, and, hence, for the remaining players, there will be no need to make any choices. Therefore, it is convenient to solve the problem recursively, starting from the decision of $B$, and assuming that all players are alive by the time $B$ shoots (otherwise, again, there are no decisions for $B$ to make). </p>
<p>It is clear that,
given a choice between $A$ and $C$, $B$ will shoot at $C$, because playing against $A$ only
will give $B$ a higher probability of survival than playing against $C$ only (i.e. $2/3$
vs. $1/2$). Knowing this, A needs to choose whether it is optimal to shoot at $B$ or at
$C$. Considering the possible outcomes produced by each of the two choices, you will notice that, in one case, the survival probability can be computed by hand, and, in the other case, it can be reduced to the computation of an exit probability of a simple Markov chain.</p>
<p>My trial:</p>
<ol>
<li><p>if $A$ shoots $C$: $P$($A$ hits $C$ $\cdot$ $B$ misses $A$ $\cdot$ $A$ hits $B$) + $P$($A$ misses $C$ $\cdot$ $B$ hits $C$ $\cdot$ $A$ hits $B$) $= 0 + 1/3 \cdot 1 \cdot 1/3 = 1/9$</p></li>
<li><p>if $A$ shoots $B$: $P$($A$ hits $B$ $\cdot$ $C$ misses $A$ $\cdot$ $A$ hits $C$) + $P$($A$ misses $B$ $\cdot$ $B$ hits $C$ $\cdot$ $A$ hits $B$) $= 1/3 \cdot 1/2 \cdot 1/3 + 2/3 \cdot 1 \cdot 1/3 = 5/18$</p></li>
</ol>
<p>So $A$ should shoot $B$ first; is that correct? </p>
| PMar | 415,956 | <p>This problem has appeared in 'the literature' before. If one assumes that each player is allowed to miss deliberately at any time, then A can do better by deliberately missing his first shot. Analysis of this action is just like analysis of A initially shooting at C, except the case where A kills C is avoided; this raises A's probability of survival to a full 1/3.</p>
|
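The "exit probability of a simple Markov chain" mentioned in the hint can be illustrated with a small Monte Carlo simulation (a Python sketch with assumed parameters, not from the thread). For a two-player duel where the first shooter hits with probability $1/3$ and the second with probability $1/2$, solving $p = 1/3 + (2/3)(1/2)\,p$ gives survival probability $p = 1/2$ for the first shooter, and the simulation agrees:

```python
import random

def duel_survival(p_first, p_second, trials=200_000, seed=0):
    """Estimate P(first shooter survives) in an alternating two-player duel;
    p_first / p_second are the shooters' hit probabilities."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    wins = 0
    for _ in range(trials):
        while True:
            if rng.random() < p_first:   # first shooter hits: survives
                wins += 1
                break
            if rng.random() < p_second:  # second shooter hits: first dies
                break
    return wins / trials

# e.g. A (hit prob 1/3) against C (hit prob 1/2), A shooting first
p_est = duel_survival(1 / 3, 1 / 2)
```

The recursion works because after a double miss the duel returns to its starting state, which is exactly the Markov-chain structure the hint alludes to.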
221,017 | <p>If I have the following list:</p>
<pre><code>https://pastebin.com/nqyf4yY5
</code></pre>
<p>How can I find the closest value to "89" in the "T[C]" column and its corresponding value in the "DH,aged-DH,unaged (J/g)" column?</p>
<p>Thank you in advance,</p>
| Bob Hanlon | 9,362 | <p>Given your <code>data</code></p>
<pre><code>data[[5]] // InputForm
(* {"Time(s)", "T[C]", "K(T)=k^(1/n)",
"dx/dT", "x(t)",
"DH,aged-DH,unaged (J/g)",
"Check dx"} *)
values = data[[6 ;;]];
</code></pre>
<p>You are asking for data that corresponds to headers for columns {2, 6}.</p>
<p>The entry for the value of <code>T[C]</code> (column 2) closest to <code>89</code></p>
<pre><code>entry = values[[Position[values[[All, 2]],
Nearest[values[[All, 2]], 89][[1]]][[1, 1]]]]
(* {3.87*10^-6, 89.2592, 5.13099, 0.0107504, 0.0102723, 0.0123268, 0.0000417117} *)
</code></pre>
<p>The desired values are</p>
<pre><code>entry[[{2, 6}]]
(* {89.2592, 0.0123268} *)
</code></pre>
|
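For readers without Mathematica, the same nearest-value lookup can be sketched in Python (the parsed rows below are hypothetical stand-ins for the pastebin data; only the middle row is taken from the answer's output):

```python
# Hypothetical parsed rows:
# [Time(s), T[C], K(T), dx/dT, x(t), DH,aged-DH,unaged (J/g), Check dx]
rows = [
    [1.20e-6, 85.1034, 5.01000, 0.0101000, 0.0095000, 0.0110000, 4.00000e-5],
    [3.87e-6, 89.2592, 5.13099, 0.0107504, 0.0102723, 0.0123268, 4.17117e-5],
    [7.50e-6, 93.4201, 5.27000, 0.0112000, 0.0109000, 0.0131000, 4.30000e-5],
]

# Row whose T[C] (column index 1) is closest to 89:
entry = min(rows, key=lambda r: abs(r[1] - 89))
t_value, dh_value = entry[1], entry[5]
```

This is the direct analogue of the `Nearest`/`Position` combination in the Mathematica answer: minimize the absolute distance in column 2, then read off columns {2, 6}.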
3,314,561 | <p>Consider the triangle <span class="math-container">$PAT$</span>, with angle <span class="math-container">$P = 36$</span> degrees, angle <span class="math-container">$A = 56$</span> degrees and <span class="math-container">$PA=10$</span>. The points <span class="math-container">$U$</span> and <span class="math-container">$G$</span> lie on sides TP and TA respectively, such that PU = AG = 1. Let M and N be the midpoints of segments PA and UG. What is the degree measure of the acute angle formed by the lines MN and PA?</p>
<p>It would be very helpful if anyone had a solution using complex numbers to this problem.</p>
| Jeff | 313,346 | <p>First, you should not believe in anything in mathematics, in particular weak solutions of PDEs. They are sometimes a useful tool, as others have pointed out, but they are often not unique. For example, one needs an additional entropy condition to obtain uniqueness of weak solutions for scalar conservation laws, like Burgers' equation. Also note that there are compactly supported weak solutions of the Euler equations, which is absurd (a fluid that starts at rest, no force is applied, and then it does something crazy and comes back to rest). They are a useful tool, connected to physics sometimes, but that is it.</p>
<p>In general, it is naive to ignore applications when studying or looking for motivations for theoretical objects in PDEs. Nearly all applications of PDEs are in physical sciences, engineering, materials science, image processing, computer vision, etc. These are the motivations for studying particular types of PDEs, and without these applications, there would be almost zero mathematical interest in many of the PDEs we study. For instance, why do we spend so much time studying parabolic and elliptic equations, instead of focusing effort on bizarre fourth order equations like <span class="math-container">$u_{xxxx}^\pi = u_y^2e^{u_z}$</span>? (hint: there are physical applications of elliptic and parabolic equations). We study an extremely small sliver of all possible PDEs, and without a mind towards applications, there is no reason to study these PDEs instead of others. </p>
<p>You say you do not know anything about physics; well I would encourage you to learn about some physics and connections to PDEs (e.g., heat equation or wave equation) before learning about theoretical properties of PDEs, like weak solutions.</p>
<p>PDEs are only models of the physical phenomenon we care about. For example, consider conserved quantities. If <span class="math-container">$u(x,t)$</span> denotes the density (say heat content, or density of traffic along a highway) of some quantity along a line at position <span class="math-container">$x$</span> and time <span class="math-container">$t$</span>, then if the quantity is truly conserved, it satisfies (trivially) a conservation law like
<span class="math-container">$$\frac{d}{dt} \int_a^b u(x,t) \, dx = F(a,t) - F(b,t), \ \ \ \ \ (*)$$</span>
where <span class="math-container">$F(x,t)$</span> denotes the flux of the density <span class="math-container">$u$</span>, that is, the amount of heat/traffic/etc flowing to the right per unit time at position <span class="math-container">$x$</span> and time <span class="math-container">$t$</span>. The equation simply says that the only way the amount of the substance in the interval <span class="math-container">$[a,b]$</span> can change is by the substance moving into the interval at <span class="math-container">$x=a$</span> or moving out at <span class="math-container">$x=b$</span>.</p>
<p>The function <span class="math-container">$u$</span> need not be differentiable in order to satisfy the equation above. However, it is often more convenient to assume <span class="math-container">$u$</span> and <span class="math-container">$F$</span> are differentiable, set <span class="math-container">$b = a+h$</span> and send <span class="math-container">$h\to 0$</span> to obtain (formally) a differential equation
<span class="math-container">$$\frac{\partial u}{\partial t} + \frac{\partial F}{\partial x} = 0. \ \ \ \ \ (+)$$</span>
This is called a conservation law, and we can obtain a closed PDE by taking some physical modeling assumption on the flux <span class="math-container">$F$</span>. For instance, in heat flow, Newton's law of cooling says <span class="math-container">$F=-k\frac{\partial u}{\partial x}$</span> (or for diffusion, Fick's law of diffusion is identical). For traffic flow, a common flux is <span class="math-container">$F(u)=u(1-u)$</span>, which gives a scalar conservation law.</p>
<p>Whatever physical model you choose, you have to understand that (*) is the real equation you care about, and (+) is just a convenient way to write the equation. It would seem absurd to say that if one cannot find a classical solution of (+), then we should throw up our hands and admit defeat.</p>
<p>Most applications of PDEs, such as optimal control, differential games, fluid flow, etc., have a similar flavor. One writes down a function, like a value function in optimal control, and the function is in general just Lipschitz continuous. Then one wants to explore more properties of this function and finds that it satisfies a PDE (the Hamilton-Jacobi-Bellman equation), but since the function is not differentiable we look for a weak notion of solution (here, the viscosity solution) that makes our Lipschitz function the unique solution of the PDE. This point is that without a mind towards applications, one is shooting in the dark and you will not find elegant answers to such questions.</p>
|
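The relationship between (*) and (+) can be illustrated numerically: a conservative ("flux-form") discretization preserves the discrete analogue of $\int u\,dx$ exactly, even when the solution is not smooth. The following Python sketch is mine, not from the answer; the traffic flux $F(u)=u(1-u)$ is the one mentioned above, and the Lax-Friedrichs numerical flux is a standard, assumed choice:

```python
# Conservative finite-volume update for u_t + F(u)_x = 0 with the traffic
# flux F(u) = u(1-u), periodic boundary, and a Lax-Friedrichs numerical flux.
# Because the update is in flux form, the discrete total mass sum(u)*dx
# telescopes and is conserved exactly -- the discrete analogue of (*).

N, dx, dt, steps = 200, 1.0 / 200, 0.001, 500

def F(u):
    return u * (1.0 - u)

def lax_friedrichs_flux(ul, ur, alpha=1.0):
    # alpha bounds |F'(u)| = |1 - 2u| on [0, 1]
    return 0.5 * (F(ul) + F(ur)) - 0.5 * alpha * (ur - ul)

u = [0.2 + 0.6 * (N // 4 <= i < N // 2) for i in range(N)]  # a "traffic jam"
mass0 = sum(u) * dx
for _ in range(steps):
    flux = [lax_friedrichs_flux(u[i], u[(i + 1) % N]) for i in range(N)]
    u = [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(N)]
mass1 = sum(u) * dx
```

Even as the jam profile steepens, the total mass is unchanged up to floating-point rounding, which is exactly the point of preferring the integral form (*) over pointwise smoothness.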
3,314,561 | <p>Consider the triangle <span class="math-container">$PAT$</span>, with angle <span class="math-container">$P = 36$</span> degrees, angle <span class="math-container">$A = 56$</span> degrees and <span class="math-container">$PA=10$</span>. The points <span class="math-container">$U$</span> and <span class="math-container">$G$</span> lie on sides TP and TA respectively, such that PU = AG = 1. Let M and N be the midpoints of segments PA and UG. What is the degree measure of the acute angle formed by the lines MN and PA?</p>
<p>It would be very helpful if anyone had a solution using complex numbers to this problem.</p>
| user7530 | 7,530 | <p>To the excellent longer answers above I will add a short one: weak solutions in a conveniently-chosen (and in particular, <em>finite-dimensional</em>) function space can often be explicitly computed, whereas strong solutions often cannot (even if one can prove a solution must theoretically exist). Computability has obvious and immense practical importance.</p>
<p>Of course, one does <em>not</em> simply believe in the weak solutions: one proves existence, approximability, and conservation theorems, etc, for the weak solutions.</p>
|
3,314,561 | <p>Consider the triangle <span class="math-container">$PAT$</span>, with angle <span class="math-container">$P = 36$</span> degrees, angle <span class="math-container">$A = 56$</span> degrees and <span class="math-container">$PA=10$</span>. The points <span class="math-container">$U$</span> and <span class="math-container">$G$</span> lie on sides TP and TA respectively, such that PU = AG = 1. Let M and N be the midpoints of segments PA and UG. What is the degree measure of the acute angle formed by the lines MN and PA?</p>
<p>It would be very helpful if anyone had a solution using complex numbers to this problem.</p>
| ktoi | 149,608 | <p>The existing answers provide good reasons towards the question in the title, but from the perspective of a geometer I feel the applications in physics aren't quite as convincing. It's true that the singular phenomena that arise in, for example, conservation laws require a suitable notion of a generalised solution, but why is it also useful for geometric problems?</p>
<p>One way I think of weak solutions is that they provide a <em>candidate</em> for a strong solution. Suppose you want to solve a particular PDE problem with suitable data and you can prove the following:</p>
<ol>
<li>A weak solution exists.</li>
<li>Any classical solution, if it exists, is also a weak solution.</li>
<li>The weak solution is suitably unique.</li>
</ol>
<p>Then from the above you can infer that if a classical solution exists, it must be the unique weak solution. Hence the problem of existence is effectively reduced to proving the regularity of the weak solution.</p>
<p>Hence in nice cases where existence can be established in general (e.g. linear elliptic problems), weak solutions provide a way of solving PDE problems using the above methodology. This method is effective for the technical reason that it allows us to work in spaces with better compactness properties.</p>
<p>If a solution doesn't always exist however, things get more interesting. If you can still establish the first three points, the solubility criterion is reduced to a regularity problem and we can then look for necessary/sufficient conditions based on this.</p>
<p><strong>Example</strong> (Harmonic map flow): If <span class="math-container">$(M,g)$</span> and <span class="math-container">$(N,h)$</span> are Riemannian manifolds, a classical problem in geometric analysis is whether a non-trivial harmonic map <span class="math-container">$u : M \rightarrow N$</span> exists. In the case when <span class="math-container">$M$</span> is a closed surface, we have the following sufficient condition for existence due to Eells and Sampson; non-trivial harmonic maps <span class="math-container">$M \rightarrow N$</span> exist provided there exists no non-trivial harmonic map <span class="math-container">$S^2 \rightarrow N.$</span></p>
<p>This theorem can be proved using the harmonic map flow to "evolve" a given map <span class="math-container">$u_0$</span> into a harmonic map <span class="math-container">$u_*,$</span> which is the work of Struwe. This method doesn't always work as the flow may develop singularities in general, but the non-existence condition about harmonic spheres provides a sufficient condition to prevent these singularities from forming.</p>
|
3,931,807 | <p>I need to find max and min of <span class="math-container">$f(x,y)=x^3 + y^3 -3x -3y$</span> with the following restriction: <span class="math-container">$x + 2y = 3$</span>.</p>
<p>I used the Lagrange multiplier theorem and found that <span class="math-container">$(1,1)$</span> is the minimum of <span class="math-container">$f$</span>. Apparently, the maximum is at <span class="math-container">$(-13/7, 17/7)$</span>, but I could not find it via Lagrange's theorem.</p>
<p>Here's what I did:</p>
<p>I put up the linear system:</p>
<p><span class="math-container">$\nabla f(x,y) = \lambda \, \nabla g(x,y)$</span></p>
<p><span class="math-container">$g(x,y) = 0$</span></p>
<p>then,</p>
<p><span class="math-container">$(3x^2 -3, 3y^2 -3) = \lambda (1,2)$</span></p>
<p><span class="math-container">$x + 2y -3 = 0$</span></p>
<p>Solving for <span class="math-container">$\lambda$</span>, I got <span class="math-container">$\lambda = 0$</span>, which gave me <span class="math-container">$x = 1$</span> and <span class="math-container">$y = 1$</span>.</p>
<p>How can I find the maximum if lambda only gives one value, which is <span class="math-container">$0$</span>?</p>
| Math Lover | 801,574 | <p>This is from your working -</p>
<p><span class="math-container">$(3x^2 -3, 3y^2 -3) = \lambda (1,2)$</span></p>
<p><span class="math-container">$3x^2 - 3 = \lambda, 3y^2-3 = 2\lambda$</span></p>
<p>Equating <span class="math-container">$\lambda$</span> from both equations,</p>
<p><span class="math-container">$6x^2-6 = 3y^2-3 \implies 2x^2 - y^2 = 1$</span></p>
<p>Substitute <span class="math-container">$x$</span> from <span class="math-container">$x+2y = 3$</span></p>
<p><span class="math-container">$2(3-2y)^2 - y^2 = 1$</span></p>
<p><span class="math-container">$\implies 7y^2 - 24y + 17 = 0 \, $</span> or <span class="math-container">$(7y-17)(y-1) = 0$</span></p>
<p>Can you take it from here and find possible points for extrema?</p>
|
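The candidate points found above can be checked exactly with a short Python sketch (mine, not part of the answer) using rational arithmetic:

```python
from fractions import Fraction as Fr

def f(x, y):
    return x**3 + y**3 - 3*x - 3*y

# Candidate y-values are the roots of 7y^2 - 24y + 17 = (7y - 17)(y - 1):
ys = [Fr(1), Fr(17, 7)]
candidates = [(3 - 2 * y, y) for y in ys]  # x recovered from x + 2y = 3

# Both candidates satisfy the stationarity condition 2x^2 - y^2 = 1:
stationary = all(2 * x**2 - y**2 == 1 for x, y in candidates)

# Compare f at the two points to identify the constrained min and max:
values = {(x, y): f(x, y) for x, y in candidates}
```

This gives $f(1,1) = -4$ (the constrained minimum) and $f(-13/7,\,17/7) = 304/49$ (the constrained maximum), with no floating-point ambiguity.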
255,827 | <p>I've had trouble coming up with one.</p>
<p>I know that if I could find </p>
<p>an irreducible poly $p(x)$ over $\mathbb{Q}$
which has roots $\alpha, \beta, \gamma\in Q(\alpha)$,</p>
<p>then $|\mathbb{Q}(\alpha) : \mathbb{Q}| $ = 3 and would be a normal extension,
as $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha,\beta,\gamma)$ would be a splitting field of $f$ over $\mathbb{Q}$.</p>
<p>However, this is a lot of conditions to find by luck...</p>
<p>Any help appreciated!</p>
| Gregor Botero | 31,955 | <p>Try to find a polynomial with discriminant $D$ that satisfies $\sqrt{D}\in\mathbb{Q}$.</p>
<p>Why does this help?</p>
<p>First, the only possibilities for the Galois group $G$ are $S_3$ and $A_3$, as Ben Millwood remarked.</p>
<p>Second, every element of $G$ must fix $\sqrt{D}\in\mathbb{Q}$. But you can check that every transposition of two roots of $f$ does not fix $\sqrt{D}$. Therefore $G$ cannot contain any transposition and must be isomorphic to $A_3$.</p>
<p>Spoiler:</p>
<blockquote class="spoiler">
<p> Use $f(x) = x^3 -3x -1$</p>
</blockquote>
|
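For the spoiler polynomial, the discriminant criterion can be checked directly (a small Python sketch, not part of the answer): for $x^3 + px + q$ the discriminant is $D = -4p^3 - 27q^2$:

```python
import math

# f(x) = x^3 - 3x - 1, i.e. p = -3, q = -1
p, q = -3, -1
disc = -4 * p**3 - 27 * q**2   # discriminant of a depressed cubic

root = math.isqrt(disc)
is_rational_square = (root * root == disc)  # sqrt(D) rational => Galois group A_3

# Rational root test: the only candidates are +/-1, and neither is a root,
# so the cubic is irreducible over Q.
no_rational_root = all(x**3 - 3*x - 1 != 0 for x in (1, -1))
```

Here $D = 108 - 27 = 81$ and $\sqrt{81} = 9 \in \mathbb{Q}$, so every element of the Galois group fixes $\sqrt{D}$ and the group is $A_3$, as the answer argues.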
21,491 | <p>The question is prompted by change of basis problems -- the book keeps multiplying the bases by matrix $S$ from the left in order to keep subscripts nice and obviously matching, but in examples bases are multiplied by $S$ (the change of basis matrix) from whatever side. So is matrix multiplication commutative if at least one matrix is invertible?</p>
| Eric Naslund | 6,075 | <p>Definitely not. Yuan's comment is also not correct: diagonal matrices do not necessarily commute with non-diagonal matrices. Consider $$\left[\begin{array}{cc}
1 & 1\\
0 & 1\end{array}\right]\left[\begin{array}{cc}
a & 0\\
0 & b\end{array}\right]=\left[\begin{array}{cc}
a & b\\
0 & b\end{array}\right]
$$</p>
<p>Changing the order I get
$$
\left[\begin{array}{cc}
a & 0\\
0 & b\end{array}\right]\left[\begin{array}{cc}
1 & 1\\
0 & 1\end{array}\right]=\left[\begin{array}{cc}
a & a\\
0 & b\end{array}\right]
$$
Which is different for $a\neq b$. </p>
<p>Hope that helps. (Sometimes change of basis matrices can go on different sides for different reasons, but without seeing the exact text you are talking about I can't comment)</p>
|
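The counterexample can be checked mechanically with the concrete values $a=2$, $b=3$ (a Python sketch, not from the answer):

```python
def matmul(A, B):
    """Naive matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

U = [[1, 1],
     [0, 1]]      # invertible (det = 1)
D = [[2, 0],
     [0, 3]]      # diagonal with distinct entries a = 2, b = 3

UD = matmul(U, D)  # [[a, b], [0, b]] = [[2, 3], [0, 3]]
DU = matmul(D, U)  # [[a, a], [0, b]] = [[2, 2], [0, 3]]
```

Both matrices are invertible, yet the products differ, confirming that invertibility does not buy commutativity.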
1,232,532 | <p>First, I'm not looking for an answer here, I'm just looking to understand the problem so that I can prove it. I'm trying to analyze the worst-case running time of an algorithm, and I must express it in summation notation. What's holding me back is that I don't understand how to express <code>doSomething(n-j)</code> in a summation (I know that <code>doSomething(k)</code> takes <code>c * k</code> operations for some constant <code>c > 0</code> (stated in the problem), so it is not constant in this case). The other two loops have starting points (e.g. <code>i = 1</code> or <code>j = i</code>). Anyway, the pseudo-code is stated below: </p>
<pre><code>function(n)
for int i from 1 to n
for int j from i to n
doSomething(n - j)
endfor
endfor
endfunction
</code></pre>
<p>I can express the nested for loop in summation as follow:</p>
<p>$\sum_{i=1}^n \sum_{j=i}^n doSomething(n-j)$</p>
<p>I think I need one more summation, it's just that I don't know how to express it, maybe something like:</p>
<p>$\sum_{k=?}^{n-j}$</p>
<p>I could be wrong here.
Could anyone please provide me with some hints on this problem? Thanks a lot.</p>
<p>EDIT: since <code>doSomething(k)</code> takes <code>c * k</code> operations, can I express it as follows:</p>
<p>$\sum_{i=1}^n \sum_{j=i}^n c*k$</p>
| Tom | 230,703 | <p>When you squared both sides, you generated an extraneous solution (two solutions instead of one).
I don't know where you received this problem, but is it possible this is a "trick"-type problem? I mean, you could simply flip the sign on both sides, so
$-\sqrt{2x-1} = x$.
Hope this helped a little.</p>
|
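Returning to the question itself: since <code>doSomething(n-j)</code> costs $c\,(n-j)$, the total work is $\sum_{i=1}^n \sum_{j=i}^n c\,(n-j)$, which has the closed form $c\,(n-1)n(n+1)/6$. A short Python sketch (mine, not from the thread) confirms the closed form against a direct count:

```python
def operation_count(n, c=1):
    """Directly count the work of the nested loops, with doSomething(k) costing c*k."""
    total = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            total += c * (n - j)   # cost of doSomething(n - j)
    return total

def closed_form(n, c=1):
    # sum_{i=1}^n sum_{j=i}^n (n - j) = (n-1) n (n+1) / 6
    return c * (n - 1) * n * (n + 1) // 6

match = all(operation_count(n) == closed_form(n) for n in range(1, 30))
```

So no third summation sign is needed: replacing the call by its cost $c(n-j)$ and summing the inner arithmetic series is enough, and the running time is $\Theta(n^3)$.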
3,394,050 | <p>I'm having trouble with this problem.</p>
<blockquote>
<p>Using logical equivalencies prove that <span class="math-container">$(p \land q)\implies (p \lor q)$</span> is a tautology.</p>
</blockquote>
| user0102 | 322,814 | <p>According to the equivalence <span class="math-container">$(a\rightarrow b) \Longleftrightarrow \neg a\vee b$</span>, the De Morgan's laws, the associativity and commutativity of the logical operator <span class="math-container">$\vee$</span>, one has</p>
<p><span class="math-container">\begin{align*}
(p\wedge q) \longrightarrow (p\vee q) & \Longleftrightarrow \neg(p\wedge q)\vee(p\vee q) \Longleftrightarrow (\neg p\vee \neg q)\vee(p\vee q)\\\\
& \Longleftrightarrow (\neg p\vee p)\vee(\neg q\vee q) \Longleftrightarrow t\vee t \Longleftrightarrow t
\end{align*}</span></p>
<p>where <span class="math-container">$t$</span> represents a tautology.</p>
|
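The same tautology can be confirmed by brute force over the four truth assignments (a Python sketch, not part of the answer):

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is (not a) or b
    return (not a) or b

# (p and q) -> (p or q) evaluates to True for every assignment:
is_tautology = all(implies(p and q, p or q)
                   for p, q in product([False, True], repeat=2))
```

Exhausting the truth table is the semantic counterpart of the equational proof above: both establish that no assignment falsifies the formula.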
3,007,443 | <p>I've heard the words "internal" and "external" generalization of concepts in category theory.</p>
<p>Specifically, i heard the idea that the concept of 'power set' has an internal and an external generalization in category theory.</p>
<p>What is the difference between these two?</p>
| Giorgio Mossa | 11,888 | <p>First of all one need to understand the concept of internalization.</p>
<p>Generally many classical constructions which can be given inside some specific category (usually <span class="math-container">$\mathbf{Set}$</span>) can be expressed in the language of category theory in terms of objects, arrows and more generally diagrams.</p>
<p>Once one has a such diagrammatic definition of the construction it is possible to use the same definition to other categories, providing a new version of the construction <em>internal</em> to the new category.</p>
<p>So <em>internalization</em> is about defining concepts in terms of diagrams in a (possibly structured) category, in such a way that once one interprets these concepts in some specific categories (usually <span class="math-container">$\mathbf{Set}$</span>) one recovers the classical notions that have been internalized.</p>
<p>As an example you can consider an <a href="https://ncatlab.org/nlab/show/monoid+in+a+monoidal+category" rel="nofollow noreferrer">internal monoid in a monoidal category</a>, which is a diagram made of morphisms of the form <span class="math-container">$X \otimes X \to X$</span> and <span class="math-container">$I \to X$</span> that make commute certain diagrams.</p>
<p>Externalization is about turning the internalized data into <span class="math-container">$\mathbf{Set}$</span>-theoretic data.
More technically externalization is the process of mapping the internal data via the yoneda embedding. </p>
<p>So the externalization of an internal data (which amounts to a diagram satisfying certain properties) in a category <span class="math-container">$\mathbf C$</span> is basically the corresponding diagram internal to <span class="math-container">$[\mathbf C^\text{op},\mathbf{Set}]$</span>.</p>
<p>Continuing with the example of a monoidal category <span class="math-container">$\mathbf C$</span>, the externalization turns the data of an internal monoid <span class="math-container">$(X,X \otimes X \to X,I \to X)$</span> into a monoid object <span class="math-container">$$(\hom(-,X),\hom(-,X)\times\hom(-,X) \to \hom(-,X),\hom(-,I) \to \hom(-,X))$$</span> in <span class="math-container">$[\mathbf C^\text{op},\mathbf{Set}]$</span>.</p>
<p>So far it should be clear why internalization is basically a generalization of classical notions: because classical notion are special version (i.e. usually internal to <span class="math-container">$\mathbf{Set}$</span>) of the internal concept.</p>
<p>Externalization provides a different way to generalize, or if you like internalize, concepts.
Nevertheless, this would be difficult to explain in the general case, so I prefer to stop here.</p>
<p>Anyway if you feel the need for additional details feel free to ask.</p>
<p>I hope this helps.</p>
|
2,170,382 | <p>I'm working on a question that asks to:</p>
<p>Find the area in the first quadrant bounded by the curves;
$\ xy = 1, xy=5, y=e^2x, y=e^5x $. </p>
<p>I would very much appreciate help solving this question (including the method of how to find the transformation expressions for $\ u$ and $v$ to use in the Jacobian). Thanks!</p>
| Julián Aguirre | 4,791 | <p>A change of variable $u=x\,y$, $v=y/x$ will transform the domain of integration into a rectangle.</p>
|
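Following the hint, with $u = xy$ and $v = y/x$ the region becomes the rectangle $1 \le u \le 5$, $e^2 \le v \le e^5$. Assuming the standard inversion $x = \sqrt{u/v}$, $y = \sqrt{uv}$ (my working, not stated in the answer), the Jacobian works out to $|\partial(x,y)/\partial(u,v)| = 1/(2v)$, so the area is $(5-1)\cdot\tfrac12(\ln e^5 - \ln e^2) = 6$. A Python sketch checking this numerically:

```python
import math

def midpoint_2d(f, a, b, c, d, n=400):
    """Composite 2-D midpoint rule on the rectangle [a,b] x [c,d]."""
    hu, hv = (b - a) / n, (d - c) / n
    return hu * hv * sum(f(a + (i + 0.5) * hu, c + (j + 0.5) * hv)
                         for i in range(n) for j in range(n))

# Area = double integral of the Jacobian 1/(2v) over the (u, v) rectangle
area = midpoint_2d(lambda u, v: 1.0 / (2.0 * v),
                   1.0, 5.0, math.e**2, math.e**5)
```

The quadrature lands on $6$ to within the rule's discretization error, consistent with the change-of-variables computation.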
2,600,776 | <blockquote>
<p>A continuos random variable $X$ has the density
$$
f(x) = 2\phi(x)\Phi(x), ~x\in\mathbb{R}
$$
then</p>
<p>(<em>A</em>) $E(X) > 0$</p>
<p>(<em>B</em>) $E(X) < 0$</p>
<p>(<em>C</em>) $P(X\leq 0) > 0.5$</p>
<p>(<em>D</em>) $P(X\ge0) < 0.25$</p>
<p>\begin{eqnarray}
\Phi(x) &=& \text{Cumulative distribution function of } N(0,1)\\
\phi(x) &=& \text{Density function of } N(0, 1)
\end{eqnarray}</p>
</blockquote>
<p>I don't have the slightest clue where to start. Can someone give me a little push? I saw some answers to similar questions, but I didn't understand how I should integrate when calculating the expectation. </p>
| StubbornAtom | 321,264 | <p>The required expectation is nothing but $2\mathbb E(X\Phi(X))$ where $X\sim\mathcal N(0,1)$.</p>
<p>Integrating by parts (taking $\Phi(x)$ as the 1st function and $x\phi(x)$ as the 2nd function) and using the fact that $\phi'(x)=-x\phi(x)$, it can be shown that </p>
<p>$$\mathbb E(X\Phi(X))=\int_{\mathbb R}x\Phi(x)\phi(x)\,\mathrm{d}x$$</p>
<p>$$\qquad\qquad\qquad\quad=\int_{\mathbb R}\frac{1}{2\pi}e^{-x^2}\,\mathrm{d}x=\frac{1}{2\sqrt{\pi}}$$ </p>
<p><strong>Edit.</strong></p>
<p>Integrating by parts, $\displaystyle\int_{\mathbb{R}}\Phi(x)x\phi(x)\,\mathrm{d}x=\lim_{A\to\infty}\left(-\Phi(x)\phi(x)\big|_{-A}^A\right)+\int_{\mathbb{R}}\phi(x)\phi(x)\,\mathrm{d}x$</p>
<p>The limit is $0$ because $\phi(x)\to0$ whenever $x\uparrow\infty$ or $x\downarrow-\infty$</p>
|
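The conclusion $E(X) = 2\,\mathbb E(X\Phi(X)) = 1/\sqrt{\pi} > 0$ (so option (A) holds) can be verified numerically with a short Python sketch (mine, not part of the answer), using $\Phi(x) = \tfrac12\bigl(1 + \operatorname{erf}(x/\sqrt2)\bigr)$:

```python
import math

def phi(x):   # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def midpoint_integral(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# E(X) for the density f(x) = 2*phi(x)*Phi(x); the tails beyond |x| = 10
# are negligible.
EX = midpoint_integral(lambda x: x * 2 * phi(x) * Phi(x), -10.0, 10.0)
exact = 1.0 / math.sqrt(math.pi)   # = 2 * (1 / (2*sqrt(pi)))
```

A positive mean of about $0.564$ rules out options (B), (C), and (D) at a glance.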
1,831,243 | <p>The Kronecker "delta" function is generally defined as
$\delta(i,j)=1$ if $i$ is equal to $ j$, otherwise $0$.</p>
<p>How about if $j$ is not an integer? I mean, let $j$ be a half-open interval defined as $j=(0,1]$ and let $i$ take any value in the interval $[0,1]$;
then can we use the Kronecker delta to tell whether $i$ belongs to $j$? In other words, can we define $\delta(i,j)=1$ if $i$ is in $j$, and $0$ otherwise?</p>
| parsiad | 64,601 | <p>What you are looking for is the indicator function:
$$
\mathbf{1}_{A}(x)=\begin{cases}
1 & \text{if }x\in A\\
0 & \text{otherwise}
\end{cases}
$$
In your case, $A=(0,1]$ (I have used $x$ instead of $i$ and $A$ instead of $j$ since it is more customary to use lower case letters at the end of the alphabet for real numbers and upper case letters for sets).</p>
<p>Note, as asmeurer points out, that the Kronecker delta $\delta_{i,j}$ can be written in terms of the indicator function: $\delta_{i,j}=\mathbf{1}_{\{j\}}(i)=\mathbf{1}_{\{i\}}(j)$.</p>
<hr>
<p>That having been said, the answer to your original question is yes, since there is nothing stopping you from defining $\delta_{i,j}=\mathbf{1}_{j}(i)$ when $j$ is a subset of the real numbers and $i$ is a real number, but this might confuse your readers, and thus I <strong>strongly</strong> recommend against it.</p>
|
478,517 | <blockquote>
<p>Construct a topological mapping of the open disk $|z|<1$ onto the whole plane.</p>
</blockquote>
<p>I represent $z=re^{i\theta}$. I thought about the bijection from $(0,1)$ to $(0,\infty)$, which is given by $x\rightarrow \dfrac1x-1$. Applying this to the norm, we get the mapping $re^{i\theta}\rightarrow\left(\dfrac1r-1\right)e^{i\theta}$. The only problem is that the point $0$ has not been mapped to or from yet. If I map $0$ to itself, the map becomes discontinuous.</p>
| njguliyev | 90,209 | <p>Hint: Try another bijection between $(0,1)$ and $(0,+\infty)$.</p>
<blockquote class="spoiler">
<p> $\tan \frac{\pi x}{2}$.</p>
</blockquote>
|
125,592 | <p>I'm having trouble trying to solve this exercise. I have to calculate the convolution of two signals:</p>
<p>$$y(t)=e^{-kt}u(t)*\frac{\sin\left(\frac{\pi t}{10}\right)}{(\pi t)} $$</p>
<p>where $u(t)$ is the Heaviside function</p>
<p>I applied the formula that says that the Fourier transform of the convolution of these two signals is equal to</p>
<p>$$Y(f)=X(f)W(f)$$</p>
<p>where $X(f)$ is the Fourier transform of the first signal and $W(f)$ is the Fourier transform of the second signal</p>
<p>The Fourier transform of $e^{-kt}u(t)$ is $X(f)=\frac{1}{k+j2\pi f}$. I have to bring the second signal as close as possible to the form $\operatorname{sinc}\left(\frac{\pi t}{10}\right)$, so I rewrite it as
$\frac{\sin\left(\frac{\pi t}{10}\right)}{\left(\frac{\pi t}{10}\right)}\cdot\left(\frac{1}{10}\right)$, which is equal to ${\left(\frac{1}{10}\right)}\operatorname{sinc}\left(\frac{\pi t}{10}\right)$</p>
<p>right or not?</p>
<p>Edit</p>
<p>If something is not clear, please advise me</p>
| example | 27,652 | <p>It is also possible with partial integration, though getting the closed formula from the other solution is not as easy to see.</p>
<p>$$ C(n):=\int_0^{2\pi}\!\!\!\cos^n(x)\,dx =\int_0^{2\pi}\!\!\!\cos^{n-1}(x)\cos(x)\,dx $$
partial integration (the boundary term $\left.\cos^{n-1}(x)\sin(x)\right|_0^{2\pi}$ vanishes) gives</p>
<p>$$ = (n-1)\int_0^{2\pi}\!\!\!\cos^{n-2}(x)\sin^2(x)\,dx$$
$$ =(n-1)\int_0^{2\pi}\!\!\!\cos^{n-2}(x)\left(1-\cos^2(x)\right)\,dx $$
$$ \Rightarrow \int_0^{2\pi}\!\!\!\cos^n(x)\,dx = \frac{n-1}{n}\int_0^{2\pi}\!\!\!\cos^{n-2}(x)\,dx $$</p>
<p>So in short: $C(0)=2\pi$, $C(1)=0$ and
$$C(n)=\frac{n-1}{n}C(n-2) = \frac{(n-1)!!}{n!!} 2\pi\quad \text{for }n\text{ even} .$$</p>
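<p>The closed form can be confirmed numerically. The Python sketch below is my own check, using a simple midpoint rule (which is very accurate for periodic integrands); it compares $\int_0^{2\pi}\cos^n(x)\,dx$ with $\frac{(n-1)!!}{n!!}\,2\pi$ for even $n$ and with $0$ for odd $n$:</p>

```python
import math

def C(n, pts=20_000):
    # Midpoint rule on [0, 2*pi]; essentially exact for trig polynomials
    h = 2 * math.pi / pts
    return h * sum(math.cos((k + 0.5) * h) ** n for k in range(pts))

def double_factorial(n):
    # n!! for n >= 1; by convention 0!! = (-1)!! = 1
    return math.prod(range(n, 0, -2)) if n > 0 else 1

for n in range(0, 9):
    if n % 2 == 1:
        expected = 0.0
    else:
        expected = double_factorial(n - 1) / double_factorial(n) * 2 * math.pi
    assert abs(C(n) - expected) < 1e-9, (n, C(n), expected)
print("closed form verified for n = 0..8")
```

<p>For instance $C(2)=\pi$ and $C(4)=\frac34\pi$, exactly as the recursion predicts.</p>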
|
1,608,645 | <p>Is there supposed to be a fast way to compute recurrences like these?</p>
<p>$T(1) = 1$</p>
<p>$T(n) = 2T(n - 1) + n$</p>
<p>The solution is $T(n) = 2^{n+1} - n - 2$. </p>
<p>I can solve it with:</p>
<ol>
<li><p>Generating functions.</p></li>
<li><p>Subtracting successive terms until it becomes a pure linear recurrence $T(n) = 4T(n-1) - 5T(n-2) + 2T(n-3)$ and then solving it using the powers-of-roots approach. </p></li>
<li><p>Repeated substitution, which gives a few simple closed-forms but one messy sum $\sum_{k=1}^{n-2} 2^k k$ which to me is not easy to do quickly.</p></li>
</ol>
<p>Each one of these approaches takes me several minutes to flesh out, but I feel like this is supposed to be one of those questions I should be able to answer in a few seconds and move on. What am I missing? Is there some quick trick to doing these recurrences?</p>
| Robert Israel | 8,508 | <p>Just as in linear algebra, the general solution of a linear non-homogeneous equation is a particular solution + the general solution of the homogeneous equation.</p>
<p>The homogeneous equation $T(n) = 2 T(n-1)$ has the obvious solutions $c 2^n$.</p>
<p>For a particular solution of the non-homogeneous equation $T(n) = 2 T(n-1) + n$, since the non-homogeneous term is a polynomial of degree $1$ it's natural to look for a solution that is again a polynomial of degree $1$: try $a n + b$ and you see that $-n - 2$ fits the bill. </p>
<p>EDIT: What I mean is that substituting $T(n) = a n + b$ gives you
$a n + b = 2 (a (n-1) + b) + n$, which simplifies to $(a + 1) n + b - 2 a = 0$, and since this is true for all $n$ you must have $a+1 = 0$ and $b-2a = 0$, i.e. $a=-1$ and $b = -2$.</p>
<p>So your general solution is $T(n) = c 2^n - n - 2$, and you plug in the initial condition $n = 1$ to see that $c = 2$. </p>
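<p>A quick way to convince yourself is to compare the closed form against the recurrence directly; a small Python check (my own addition, not part of the answer):</p>

```python
def T_recursive(n):
    # Unroll T(1) = 1, T(k) = 2*T(k-1) + k
    t = 1
    for k in range(2, n + 1):
        t = 2 * t + k
    return t

def T_closed(n):
    # The claimed closed form T(n) = 2^(n+1) - n - 2
    return 2 ** (n + 1) - n - 2

for n in range(1, 25):
    assert T_recursive(n) == T_closed(n)
print("closed form matches the recurrence for n = 1..24")
```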
|
1,608,645 | <p>Is there supposed to be a fast way to compute recurrences like these?</p>
<p>$T(1) = 1$</p>
<p>$T(n) = 2T(n - 1) + n$</p>
<p>The solution is $T(n) = 2^{n+1} - n - 2$. </p>
<p>I can solve it with:</p>
<ol>
<li><p>Generating functions.</p></li>
<li><p>Subtracting successive terms until it becomes a pure linear recurrence $T(n) = 4T(n-1) - 5T(n-2) + 2T(n-3)$ and then solving it using the powers-of-roots approach. </p></li>
<li><p>Repeated substitution, which gives a few simple closed-forms but one messy sum $\sum_{k=1}^{n-2} 2^k k$ which to me is not easy to do quickly.</p></li>
</ol>
<p>Each one of these approaches takes me several minutes to flesh out, but I feel like this is supposed to be one of those questions I should be able to answer in a few seconds and move on. What am I missing? Is there some quick trick to doing these recurrences?</p>
| Claude Leibovici | 82,404 | <p>Let me make the problem slightly more complex with, for example, $$T_n=a \, T_{n-1}+b+c n+d n^2$$ ($a,b,c,d$ being given); set $$T_n=U_n+\alpha +\beta n+\gamma n^2$$ Now, replace in the original expression
$$U_n+\alpha+\beta n +\gamma n^2=a\left(U_{n-1}+\alpha +\beta (n-1)+\gamma (n-1)^2\right)+b+c n+d n^2$$ that is to say $$U_n-aU_{n-1}=a\left(\alpha +\beta (n-1)+\gamma (n-1)^2\right)+b+c n+d n^2-(\alpha+\beta n +\gamma n^2)$$ Expanding the rhs and grouping for a given power of $n$ then gives
$$(a \alpha -a \beta +a \gamma -\alpha +b)+n (a \beta -2 a \gamma -\beta +c)+n^2 (a
\gamma -\gamma +d)$$ Require that, for every $n$, this expression equals $0$. This gives three linear equations for the three unknowns $\alpha,\beta,\gamma$; these are easy to solve, and you then finish with the simplest recurrence equation $$U_n=a\,U_{n-1}$$ Then, go back to $T_n$.</p>
<p>Of course, you can generalize the problem to any recurrence of the form $$T_n=a \, T_{n-1}+\sum_{i=0}^k c_in^i$$</p>
|
3,762,174 | <p>I have some confusion about an integration step. My confusion is marked with red and green circles below.<a href="https://i.stack.imgur.com/4fc1I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4fc1I.png" alt="enter image description here" /></a></p>
<p>I'm not getting why <span class="math-container">$$\int_{0}^ {x} = \int_{\frac{-1}{2}}^{y}$$</span> and <span class="math-container">$$\int_{0}^{x} = \int_{y}^{1} $$</span> ?</p>
<p>I don't see how this is derived.</p>
| Alex | 38,873 | <p>In the integral in your problem, the bounds are <span class="math-container">$0<y<x$</span>, so, as @zkutch wrote, if you plot the graph <span class="math-container">$y=f(x)=x$</span>, the area will be the lower triangle if you split the unit square <span class="math-container">$([0,1]\times[0,1])$</span> with this function. The same area corresponds to the bounds <span class="math-container">$y<x<1$</span>. Do the same with <span class="math-container">$[-\frac{1}{2},0]$</span> interval.</p>
<p>To verify that the integrals can be interchanged, the hypothesis of the Fubini-Tonelli theorem must be checked:
<span class="math-container">$$
\int_{B}|f| = \int_{B}f^{+} + \int_{B}f^{-}
$$</span>
must be finite. Since your function is bounded on a compact set, both of these integrals are finite, so the integrals can be interchanged.</p>
|
14,612 | <p>For finding counterexamples. That does not sound convincing enough, at least not always. Why does the study of the Cantor set have merit as an object in its own right? </p>
| Lee Mosher | 7,258 | <p>Spaces that are homeomorphic to the Cantor set arise naturally in many mathematical settings, particularly in dynamical systems.</p>
<p>For one dynamical example, the Cantor set is homeomorphic to the phase space of any infinite <a href="https://en.wikipedia.org/wiki/Bernoulli_process" rel="nofollow noreferrer">Bernoulli process</a>.</p>
<p>For another, the "nonescaping set" of many simple dynamical systems in the real line (or the complex plane) is homeomorphic to the Cantor set (this is a "Cantor dust" example as in the answer of @GeraldEdgar). Consider for example the dynamical system
<span class="math-container">$$z_n = (z_{n-1})^2 + 10
$$</span>
(You can replace <span class="math-container">$10$</span> by any real or complex number of magnitude <span class="math-container">$>2$</span>). One can prove that there is a subset <span class="math-container">$C \subset \mathbb C$</span> homeomorphic to the Cantor set such that if <span class="math-container">$z_0 \in C$</span> then the sequence <span class="math-container">$(z_n)$</span> is bounded (in fact it stays in <span class="math-container">$C$</span>), whereas if <span class="math-container">$z_0 \not\in C$</span> then <span class="math-container">$\lim_{n \to \infty} |z_n| =\infty$</span>. In short, points not in <span class="math-container">$C$</span> escape to infinity, points in <span class="math-container">$C$</span> do not.</p>
<p>Also, there are important theoretical descriptions/properties of the Cantor set, for example:</p>
<ul>
<li>A topological space is homeomorphic to the Cantor set if and only if it is compact, metrizable, has no isolated points, and every component is a point. </li>
<li>Any compact zero-dimensional metrizable topological space is homeomorphic to a subspace of the Cantor set.</li>
</ul>
<p>Cantor sets even occur naturally in number theory! The <a href="https://en.wikipedia.org/wiki/P-adic_number#Topology" rel="nofollow noreferrer"><span class="math-container">$p$</span>-adic integers <span class="math-container">$\mathbb Z_p$</span> are homeomorphic to the Cantor set</a>.</p>
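<p>To illustrate the escaping behavior in the dynamical example (my own addition, not from the original answer), one can iterate $z\mapsto z^2+10$ from a few arbitrary starting points and watch the orbit blow up; the exceptional non-escaping points are exactly the Cantor set $C$:</p>

```python
def orbit_escapes(z0, c=10, max_iter=60, radius=1e6):
    # Iterate z -> z^2 + c and report at which step |z| first exceeds `radius`
    z = z0
    for k in range(max_iter):
        if abs(z) > radius:
            return k
        z = z * z + c
    return None  # did not escape within max_iter steps (candidate point of C)

# A few arbitrary sample starting points; all escape after a handful of steps.
for z0 in (0, 1 + 1j, -2.5, 0.3 - 0.7j):
    steps = orbit_escapes(z0)
    print(z0, "escapes after", steps, "iterations")
```

<p>Since $|c|=10>2$, almost any starting point you pick lies outside $C$ and escapes; finding points of $C$ itself requires the more careful symbolic-dynamics construction alluded to in the answer.</p>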
|
2,867,042 | <blockquote>
<p>Find the value of
$$\tan\theta \tan(\theta+60^\circ)+\tan\theta \tan(\theta-60^\circ)+\tan(\theta + 60^\circ) \tan(\theta-60^\circ) + 3$$
(The answer is $0$.)</p>
</blockquote>
<p>My try: Let $\theta$ be $A$, $60^\circ -\theta$ be $B$, and $60^\circ + \theta$ be $C$. I simplified the result and got the expression
$$1 + 1/\cos A\cos B\cos C$$ but after that I can't simplify it.</p>
| Batominovski | 72,152 | <p>I agree with Jamie Radcliffe. Let $[n]:=\{1,2,\ldots,n\}$ and $\mathcal{P}_n$ denote the power set $\mathcal{P}\big([n]\big)$. Suppose that $\mathcal{P}_n$ has a decomposition into pairwise disjoint symmetric chains
$$\mathcal{P}_n=\bigcup_{k=0}^{\left\lfloor \frac{n}{2}\right\rfloor}\,\bigcup_{r=1}^{t_k}\,\mathcal{C}_k^r\,,$$
where $\mathcal{C}_k^1,\mathcal{C}_k^2,\ldots,\mathcal{C}_k^{t_k}$ are symmetric chains of length $n+1-2k$ for each $k=0,1,2,\ldots,\left\lfloor\frac{n}{2}\right\rfloor$. For example, when $n=4$, we have
$$\begin{align}\mathcal{P}_4&=\big\{\emptyset,\{1\},\{1,2\},\{1,2,3\},\{1,2,3,4\}\big\}
\\
&\phantom{aaaaa}\cup\big\{\{2\},\{2,3\},\{2,3,4\}\big\}\cup\big\{\{3\},\{3,4\},\{3,4,1\}\big\}\cup\big\{\{4\},\{4,1\},\{4,1,2\}\big\}
\\
&\phantom{aaaaa}\cup\big\{\{1,3\}\big\}\cup\big\{\{2,4\}\big\}\,.\end{align}$$</p>
<p>We shall prove that
$$t_k=l(n,k)=\binom{n}{k}-\binom{n}{k-1}\text{ for every }k=0,1,2,\ldots,\left\lfloor\frac{n}{2}\right\rfloor$$
by induction on $k$. We start with $t_0=1=l(n,0)$. This is trivial because $\emptyset\in\mathcal{P}_n$ has to be in exactly one symmetric chain. </p>
<p>Now, suppose that $k\in\Biggl\{1,2,\ldots,\left\lfloor\frac{n}2\right\rfloor\Biggr\}$ and that $t_j=l(n,j)$ for all $j=0,1,2,\ldots,k-1$. The number of $k$-subsets of $[n]$ that already lie in some $\mathcal{C}_j^s$ with $j<k$ is
$$\sum_{j=0}^{k-1}\,t_j=\sum_{j=0}^{k-1}\,l(n,j)=\sum_{j=0}^{k-1}\,\Biggl(\binom{n}{j}-\binom{n}{j-1}\Biggr)=\binom{n}{k-1}\,.$$
Hence, there are exactly $\displaystyle\binom{n}{k}-\binom{n}{k-1}=l(n,k)$ subsets of $[n]$ of size $k$ left. Each of these subsets must belong in exactly one symmetric chain of length $n+1-2k$. Therefore,
$$t_k=l(n,k)\,,$$
as desired.</p>
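<p>A small consistency check of the counting argument (my own addition): the chains of length $n+1-2k$ must together cover all $2^n$ subsets of $[n]$, i.e. $\sum_k l(n,k)(n+1-2k)=2^n$. In Python:</p>

```python
from math import comb

def l(n, k):
    # Number of symmetric chains of length n+1-2k in the decomposition
    return comb(n, k) - (comb(n, k - 1) if k >= 1 else 0)

for n in range(1, 16):
    # Each chain of length n+1-2k covers exactly n+1-2k subsets,
    # and together the chains must cover all 2^n subsets of [n]
    covered = sum(l(n, k) * (n + 1 - 2 * k) for k in range(n // 2 + 1))
    assert covered == 2 ** n

# The n = 4 example in the text: one chain of length 5, three of length 3,
# two of length 1, i.e. chain counts (1, 3, 2)
assert [l(4, k) for k in range(3)] == [1, 3, 2]
print("chain counts consistent for n = 1..15")
```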
|
2,396,073 | <p>Let $\omega_1$ be the first uncountable ordinal. In some book, the set $\Omega_0:=[1,\omega_1)=[1,\omega_1]\backslash\{\omega_1\}$ is called the set of countable ordinals. Why? It is obvious that it is an uncountable set, because $[1,\omega_1]$ is uncountable. The most plausible reason, I think, is that for any $x\prec \omega_1$, the set $[1,x)$ is countable. </p>
| Ross Millikan | 1,827 | <p>It is just like $\omega$ being the set of all finite ordinals. Every member of $\omega$ is finite but $\omega$ itself is infinite. Similarly, $\Omega_0$ is uncountable but all its members are (finite or) countable.</p>
|
2,706,165 | <p>So if $y=\log(3-x) = \log(-x+3)$ then you reflect $\log(x)$ in the $y$ axis to get $\log(-x)$.</p>
<p>Then because it is $+3$ inside brackets you then shift to the left by $3$ giving an asymptote of $x=-3$ and the graph crossing the $x$ axis at $(-4,0)$. </p>
<p>However this does not work. The answer shows the $+3$ in the bracket shifting the curve to the right by $3$ giving an asymptote of $x=3$ and the curve crossing the $x$ axis at $(2,0)$. </p>
<p>Why does it do this? Can anyone please explain?</p>
| pjs36 | 120,540 | <p>There is a small set of algebraic operations that correspond to geometric transformations:</p>
<p>When we have the graph of a function $y = f(x)$...</p>
<p><strong>Shifting:</strong></p>
<ul>
<li><p>The substitution $x \mapsto x - h$ shifts a graph $h$ units to the right (that'd be left, if $h$ is negative)</p></li>
<li><p>The substitution $y \mapsto y - k$ shifts a graph $k$ units down (up if $k$ is negative)</p></li>
</ul>
<p><strong>Reflecting:</strong></p>
<ul>
<li><p>The substitution $x \mapsto -x$ reflects the graph across the $y$-axis; a "left/right flip"</p></li>
<li><p>The substitution $y \mapsto -y$ reflects the graph across the $x$-axis; an "up/down flip"</p></li>
</ul>
<p>Now, here's the key thing: These transformations have to be written <em>exactly like this</em>, only replacing $x$ or $y$ with something.</p>
<hr>
<p>So, when we break down $y = \ln(3 - x)$ as you have...</p>
<p>$$
y = \ln(x)
\xrightarrow{x\ \mapsto\ -x}
y = \ln(-x)
\longrightarrow
y = \ln(-x + 3)
$$</p>
<p>the last transformation, $-x \mapsto -x + 3$, is <strong>not</strong> one of our basic transformations: We are adding $3$ to $-x$, not $x$. As written, we simply can't recognize this as corresponding to any of our basic transformations. But, if we think about it a little differently...</p>
<p>$$
y = \ln(x)
\xrightarrow{{x}\ \mapsto\ -x}
y = \ln(-\color{red}{x})
\xrightarrow{\color{red}{x}\ \mapsto\ \color{red}{x - 3}}
y = \ln\bigl(-(\color{red}{x - 3})\bigr) = \ln(-x + 3)
$$</p>
<p>which we recognize as the sequence of transformations 1) Flip the graph left/right, and 2) Shift to the right 3 units.</p>
<p>There is an alternative:</p>
<p>$$
y = \ln(x)
\xrightarrow{x\ \mapsto\ x + 3}
y = \ln(x + 3)
\xrightarrow{x\ \mapsto\ -x}
y = \ln(-x + 3)
$$</p>
<p>so we see the transformation can also be achieved by 1) Shifting the graph $3$ units left, then 2) Flipping left and right.</p>
<hr>
<p>So, long story short: To recognize a graph as the transformation of another graph, you <em>have</em> to figure out how to only use substitutions like "add this to $x$", or "make $x$ negative", not adding things to $-x$, or $2x$, etc.</p>
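<p>Both orders of operations can be checked numerically; a short Python sketch (my own check, not in the original answer) confirms they produce the same function $y=\ln(3-x)$:</p>

```python
import math

f = math.log  # the base graph y = ln(x)

def flip_then_shift(x):
    # 1) x -> -x (flip across the y-axis), then 2) x -> x - 3 (shift right 3)
    return f(-(x - 3))

def shift_then_flip(x):
    # 1) x -> x + 3 (shift left 3), then 2) x -> -x (flip across the y-axis)
    return f(-x + 3)

for x in [-5.0, -1.0, 0.0, 1.5, 2.9]:  # any x < 3 is in the domain
    target = math.log(3 - x)
    assert abs(flip_then_shift(x) - target) < 1e-12
    assert abs(shift_then_flip(x) - target) < 1e-12
print("both orders reproduce y = ln(3 - x)")
```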
|
35,151 | <p>Many complexity theorists assume that $P\ne NP.$ If this is proved, how would it impact quantum computing and quantum algorithms? Would the proof immediately disallow quantum algorithms from ever solving NP-Complete problems in Quantum Polynomial time?</p>
<p><a href="http://en.wikipedia.org/wiki/QMA" rel="nofollow">According to Wikipedia</a>, quantum complexity classes BQP and QMA are the bounded-error quantum analogues of P and NP. Is it likely that a proof that $P\ne NP$ can be adapted to the quantum setting to show that $BQP \ne QMA?$</p>
| Greg Kuperberg | 1,450 | <p>David is right about one thing. Scott had a discussion about this on his blog and I was also involved.</p>
<p>On the one hand, many complexity theorists simply also assume that BQP does not contain NP, just as they assume that P does not contain NP. The evidence for the former is not as dramatic as that for the latter, but there is at least an oracle separation. I.e., there is an oracle A such that BQP<sup>A</sup> does not contain NP<sup>A</sup>. Now, there are some famous cases where two complexity classes are equal or there is an inclusion, even though there is also a credible oracle separation. But the oracle separations for BQP vs NP seem realistic. Besides, apart from tangible evidence, I for one consider BQP to be surprisingly powerful but not incredibly powerful. It's my intuition partly because I expect BQP to be realistic and I don't expect the universe to be perverse. I think of BQP as an extension of randomized computation based on quantum probability.</p>
<p>On the other hand, P vs PSPACE is already an unfathomable open problem. The two main barrier results for P vs NP, Baker-Gill-Solovay and Razborov-Rudich, apply to P vs PSPACE equally well. Since PSPACE contains both NP and BQP, if you were to show that either one does not equal P, then in particular you would show that PSPACE does not equal P. Actually, I don't know a good reason to try to prove that P ≠ NP rather than to first prove that P ≠ PSPACE, since the latter is at least formally easier.</p>
|
347,171 | <p>Let <span class="math-container">$\text{ppTop}$</span> denote the category of pointed and path connected topological spaces with morphisms base-preserve continuous maps. The fundamental group gives a functor <span class="math-container">$FG: \text{ppTop}\to \text{Gp}$</span> where GP is the category of groups.</p>
<p>Now we consider the category <span class="math-container">$\text{pTop}$</span> consisting of path-connected topological spaces and we can naturally define the fundamental groupoids instead of fundamental groups on <span class="math-container">$\text{pTop}$</span>. If we want to define the fundamental group then we need to choose a base point. Notice that there is a forgetful functor <span class="math-container">$\text{For}:\text{ppTop}\to \text{pTop}$</span>.</p>
<blockquote>
<blockquote>
<p>My question is: could we lift the functor <span class="math-container">$FG: \text{ppTop}\to \text{Gp}$</span> to a functor <span class="math-container">$\widetilde{FG}: \text{pTop}\to \text{Gp}$</span> such that <span class="math-container">$\widetilde{FG}\circ \text{For}=FG$</span>? If not, how to construct a contradiction?</p>
</blockquote>
</blockquote>
| NWMT | 38,698 | <p>There is a more topological way. If you assume that <span class="math-container">$X$</span> admits a universal covering <span class="math-container">$\tilde X$</span> (so <span class="math-container">$X$</span> is path connected and semilocally simply connected, I believe), then <span class="math-container">$G=\pi_1(X)$</span> is realized by the deck transformations, i.e. the self-homeomorphisms of <span class="math-container">$\tilde X$</span> that preserve the fibers of <span class="math-container">$p:\tilde X \to X$</span>.</p>
<p>I am a bit worried, however, that my answer simply sweeps basepoints under the rug. They are certainly used in the construction I know of the universal cover <span class="math-container">$\tilde X$</span>, but we can forget about them afterwards and simply use the covering map <span class="math-container">$p$</span>.</p>
<p><strong>Edit:</strong> I still think this works, but I'm not sure if this construction is functorial in the sense you need.</p>
|
2,448,696 | <p>Show that $\frac{1}{n}<\ln n$, for all $n>1$ where n is a positive integer</p>
<p>I've tried using induction by multiplying both sides by $\ln(k+1)$ and $\frac{1}{k+1}$, but all it does is make things more complicated. I've tried using the facts that $k>1$ and $k+1>2$ during the inductive $k+1$ step, but I'm still stuck. </p>
<p>Looking for clues for this question.</p>
| Peter Szilas | 408,605 | <p>Consider:</p>
<p>$e \lt n^n$ for $n\gt 1.$</p>
<p>Take $\log$ of both sides:</p>
<p>$1 \lt n\log(n)$, hence:</p>
<p>$1/n \lt \log(n)$ for $n \gt 1$.</p>
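<p>A quick numerical sanity check of both the starting inequality and the conclusion (my addition, in Python):</p>

```python
import math

for n in range(2, 200):
    # The starting inequality e < n^n for integers n > 1 ...
    assert math.e < n ** n
    # ... and the conclusion after taking logs and dividing by n
    assert 1 / n < math.log(n)
print("1/n < ln(n) holds for n = 2..199")
```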
|
623,428 | <blockquote>
<p>Suppose $$
Y = X^TAX,
$$ where $Y$ and $A$ are both known $n\times n$, real, symmetric matrices. The unknown matrix $X$ is restricted to $n\times n$.</p>
</blockquote>
<p>I think there should be at least one real valued solution for $X$. How do I solve for $X$? </p>
| rschwieb | 29,335 | <p>A solution is not possible for all $Y$ and $A$.</p>
<p>For example, suppose $\operatorname{rank}(Y)>\operatorname{rank}(A)$. Then $\operatorname{rank}X^\top AX\leq\operatorname{rank}(A)<\operatorname{rank}(Y)$, so we can't hope for equality.</p>
<p>Another restriction is that $\det(Y)=\det(X)^2\det(A)$. So for example, if the determinant of $Y$ and determinant of $A$ have opposite signs, no $X$ can exist.</p>
<p>The most natural case where such an equation makes sense is in the context of symmetric bilinear forms. In that case, it's known that if $Y$ and $A$ have the same <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia" rel="nofollow">signatures</a>, then there exists a nonsingular $X$ satisfying the equation, and the proof of <a href="https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia" rel="nofollow">Sylvester's law of inertia</a> provides a method to calculate it.</p>
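<p>The determinant restriction is easy to see numerically. The Python sketch below is my own illustration, with hand-rolled $2\times2$ helpers and arbitrarily chosen matrices; it verifies $\det(X^\top AX)=\det(X)^2\det(A)$:</p>

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Any X satisfies det(X^T A X) = det(X)^2 * det(A)
X = [[2.0, 1.0], [0.5, 3.0]]
A = [[1.0, 2.0], [2.0, -1.0]]          # symmetric, det(A) = -5
Y = matmul(transpose(X), matmul(A, X))
assert abs(det(Y) - det(X) ** 2 * det(A)) < 1e-9

# So if det(Y) and det(A) have opposite signs, no real X can exist:
# e.g. Y = I (det = 1) is unreachable from this A (det = -5).
print(det(Y), det(X) ** 2 * det(A))
```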
|
1,561,370 | <p>Is there any graphical interface in <a href="http://gap-system.org" rel="noreferrer">GAP</a>? Something like <a href="https://www.rstudio.com/" rel="noreferrer">RStudio</a> for <a href="https://www.r-project.org/" rel="noreferrer">R</a> or <a href="http://andrejv.github.io/wxmaxima/" rel="noreferrer">WxMaxima</a> for <a href="http://maxima.sourceforge.net/" rel="noreferrer">Maxima</a>. I'm using GAP under a Linux system.
Thanks</p>
| Russ Woodroofe | 562,386 | <p>I want to follow up on Alexander's answer. <a href="https://cocoagap.sourceforge.io/" rel="nofollow noreferrer">Gap.app</a>, which is one of the <a href="http://www.gap-system.org/Packages/undep.html" rel="nofollow noreferrer">Undeposited Implementations for GAP</a> that Alexander mentions briefly, is back in active development, with a new release this week. It is a front-end and GAP distribution for macOS. It fully supports the xgap library, and also does some other useful things like provide easy save and load of sessions, command completion, etc. See</p>
<p><a href="https://cocoagap.sourceforge.io/" rel="nofollow noreferrer">https://cocoagap.sourceforge.io/</a> .</p>
<p>(Disclosure: I am the author of this program.)</p>
<p>Here's a nice screenshot from <a href="https://www.math.u-psud.fr/~lelievre/" rel="nofollow noreferrer">Samuel Lelièvre</a>:<br>
<img src="https://sourceforge.net/p/cocoagap/screenshot/scr_2018-05-15T131943Z.png"></p>
<p>Unfortunately, if you're on Linux, Gap.app doesn't currently help you so much. Gap.app uses Objective C, and there is reasonable potential for compiling under Gnustep or similar. I have an undergraduate looking at that possibility, and it is possible that he will make some progress.</p>
<p>By the way, xgap still largely works. It uses the X Athena widgets, so it does look and feel very dated. There is a bug in the current release of GAP that affects xgap (and prevents most display), but you can work around it by typing <code>GAPInfo.TermEncoding:="latin1";</code> as your first command in a session.</p>
<p>One more thing. Alexander attributes xgap to Max Neunhöffer. The project was actually originated by Frank Celler, and taken over by Max Neunhöffer when Frank left mathematics. Now Max Neunhöffer has also left mathematics, and xgap maintenance is handled by Max Horn.</p>
|
3,365,361 | <p>Suppose that <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> is analytic at <span class="math-container">$x=0$</span>, and <span class="math-container">$T(x)$</span> its Taylor series at <span class="math-container">$x=0$</span>, with radius of convergence <span class="math-container">$R>0$</span>. Is it true that <span class="math-container">$f(x)=T(x)$</span> whenever <span class="math-container">$|x|<R$</span> ?</p>
| Bernard | 202,857 | <p>Another reason is that the trace is the sum of the eigenvalues, and two similar matrices have the same eigenvalues.</p>
|
118,540 | <p>Let $X$ be a projective surface defined over a field $k$ of characteristic $0$, and let $G$ be a finite group acting biregularly on $X$.</p>
<p>Assuming that $X$ is rational over $k$, is the quotient $X/G$ always rational?</p>
<p>If $k=\mathbb{C}$, we can use Castelnuovo's theorem and see that $X/G$ is unirational and hence rational. If $k=\mathbb{R}$, then $X/G$ is geometrically rational and also connected for the transcendental topology, and is thus rational.</p>
<p>But what happens for a general $k$, in particular when $k=\mathbb{Q}$?</p>
Jason Starr | 13,265 | <p>This is not always true, and cubic threefolds give a counterexample over the field $k=\mathbb{C}(t)$. Let $\mathcal{Y}$ be a smooth cubic hypersurface in $\mathbb{P}^4_{\mathbb{C}}$. Let $L\subset \mathcal{Y}$ be a line. Denote by $\mathcal{X}$ the (locally closed) subvariety of $\mathcal{Y}\times L$ parameterizing pairs $(y,p)$ such that the intersection of $\text{Span}(L,y)$ with $\mathcal{Y}$ is a plane cubic $L\cup C$, where $C$ is a plane conic intersecting $L$ transversally at $p$. This condition on the conic $C$ is valid for all $y$ in a dense open subset of $\mathcal{Y}\setminus L$. Define an involution, $$ i:\mathcal{X} \to \mathcal{X}, \ i(y,p) = (y,q), $$ where $C\cap L$ equals $\{ p,q \}$. This involution defines an action on $\mathcal{X}$ by the cyclic group $G$ of order $2$. The quotient is the (dense, open) image $U$ of the projection $\text{pr}_{\mathcal{Y}}:\mathcal{X}\to \mathcal{Y}$.</p>
<p>How does this give a counterexample for <I>surfaces</I>? Let $\Pi$ be a linear $2$-plane containing $L$. Let $$f:(\mathbb{P}^4_{\mathbb{C}} \setminus \Pi) \to \mathbb{P}^1_{\mathbb{C}}$$
be linear projection away from $\Pi$. Let $U_\Pi$ be $U\setminus \Pi$, and let $\mathcal{X}_{\Pi}$ be the inverse image of $U_\Pi$ in $\mathcal{X}$. Of course this is a $G$-invariant, dense, open subset of $\mathcal{X}$. The claim is that a general fiber of $f\circ \text{pr}_{\mathcal{Y}}:\mathcal{X}_{\Pi} \to \mathbb{P}^1$ is a rational surface. Then letting $k$ be the function field of $\mathbb{P}^1_{\mathbb{C}}$, and letting $Y$ and $X$ be the generic fiber of $f$, resp. $f\circ \text{pr}_{\mathcal{Y}}$, this gives a counterexample.</p>
<p>Consider the morphism $$(f\circ \text{pr}_{\mathcal{Y}}, \text{pr}_{L}): \mathcal{X}_{\Pi} \to \mathbb{P}^1_{\mathbb{C}} \times_{\mathbb{C}} L.$$ A general point of the target parameterizes a pair $([H],p)$, where $H$ is a hyperplane in $\mathbb{P}^4_{\mathbb{C}}$ containing $\Pi$, and where $p$ is a point of $L$. Consider the "projective linear" tangent space to $\mathcal{Y}$ at $p$, i.e., the unique hyperplane $\Sigma$ in $\mathbb{P}^4_{\mathbb{C}}$ with maximal order of contact with $\mathcal{Y}$ at $p$. Then $\Sigma$ contains $L$. The intersection of $\Sigma$ and $H$ is a linear $2$-plane $\Xi$ that contains $L$. If $p$ and $H$ are general then $\Xi$ is not equal to $\Pi$, and the intersection of $\Xi$ with $\mathcal{Y}$ is a plane cubic $L\cup C$, where $C$ is a plane conic that intersects $L$ transversally at $p$ and $i(p)$. Thus the fiber of $(f\circ \text{pr}_{\mathcal{Y}}, \text{pr}_{L})$ over $([H],p)$ is $C\setminus \{p,i(p)\}$. Therefore, at least after passing to a dense open subset of the target, the morphism $(f\circ \text{pr}_{\mathcal{Y}}, \text{pr}_{L})$ is a dense open subset of a conic bundle. Moreover, this conic bundle has a section; namely send $([H],p)$ to the point $p$ of the conic $C$. A conic bundle with a section is birational to $\mathbb{P}^1$ over the base. Thus the composite morphism $$f\circ \text{pr}_{\mathcal{Y}}:\mathcal{X}_\Pi \to \mathbb{P}^1_{\mathbb{C}} $$ is birational to $$\text{pr}_1:\mathbb{P}^1_{\mathbb{C}} \times_{\mathbb{C}} \mathbb{P}^1_{\mathbb{C}}\times_{\mathbb{C}} L \to \mathbb{P}^1_{\mathbb{C}}.$$ </p>
<p>If memory serves, this description of $\mathcal{X}$ as a conic bundle is described in the appendix to Clemens and Griffiths where they explain Mumford's Prym construction.</p>
<p><B>Edit.</B> Of course the point is that the Clemens-Griffiths theorem proves that $\mathcal{Y}$ is not rational over $\mathbb{C}$. If the generic fiber $Y$ of $f$ were rational over $k=\mathbb{C}(t)$, then $\mathcal{Y}$ would be rational over $\mathbb{C}$.</p>
<p><B>Edit. </B> I decided to add the following comment to the answer. In his book "Cubic Forms", Manin seems to give examples of quartic del Pezzo surfaces $Y$ over number fields that have a rational point, that have a degree $2$ double-cover $X$ that is rational (so that $Y$ is $X/G$ for $G$ a cyclic group of order $2$), yet with $Y$ irrational. The reference is Theorem IV.29.2, Theorm IV.29.4 and Remark IV.29.4.1, pp. 157-158 with r=5, and also Section IV.31, pp. 174--182. </p>
|
800,363 | <p>What is </p>
<blockquote>
<p>$$\lim_{x\to 0}\left(\frac{x}{e^{-x}+x-1}\right)^x$$</p>
</blockquote>
<p>Using the expansion of <a href="http://en.wikipedia.org/wiki/Exponential_function" rel="nofollow">$e^x$</a>, I get that the function</p>
<blockquote>
<p>$$y=\left(\frac{x}{e^{-x}+x-1}\right)^x$$</p>
</blockquote>
<p>is not defined for negative numbers.</p>
<p>Hence the limit at $0^{-}$ does not exist, which implies that the limit at $0$ does not exist.</p>
<p>However <a href="http://www.wolframalpha.com/input/?i=lim%28x%2F%28%28e%5E%28-x%29%29%2Bx-1%29%29%5Ex+as+x+tends+to+0" rel="nofollow">WA</a> says that it should be $1$. :(</p>
<p>Am I wrong?</p>
| Did | 6,179 | <p>WA interprets the number
$$
u(x)=\left(\frac{x}{\mathrm e^{-x}+x-1}\right)^x
$$
when $x\gt0$ as
$$
u(x)=\exp\left(x\log\left(\frac{x}{\mathrm e^{-x}+x-1}\right)\right),
$$
and when $x\lt0$ as
$$
u(x)=\exp\left(x\log\left(\frac{-x}{\mathrm e^{-x}+x-1}\right)+\mathrm i\pi x\right).
$$
Then both limits are indeed $1$ (as one sees when one looks closely at the plot on the WA page).</p>
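<p>Numerically, WA's interpretation for $x>0$ does indeed tend to $1$; a short Python check (my addition, not part of the answer):</p>

```python
import math

def u(x):
    # The expression for x > 0, where the base is positive
    return (x / (math.exp(-x) + x - 1)) ** x

values = [u(10.0 ** -k) for k in (2, 4, 6)]
print(values)  # decreases toward 1 as x goes to 0 from the right

assert values[0] > values[1] > values[2] > 1
assert abs(values[2] - 1) < 1e-3
```

<p>Near $0^+$ the denominator behaves like $x^2/2$, so the base blows up like $2/x$, yet $(2/x)^x\to1$ because $x\ln(2/x)\to0$.</p>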
|
119,636 | <p>I want to know the general formula for $\sum_{n=0}^{m}nr^n$ for some constant r and how it is derived.</p>
<p>For example, when r = 2, the formula is given by:
$\sum_{n=0}^{m}n2^n = 2(m2^m - 2^m +1)$
according to <a href="http://www.wolframalpha.com/input/?i=partial+sum+of+n+2%5En" rel="noreferrer">http://www.wolframalpha.com/input/?i=partial+sum+of+n+2%5En</a></p>
<p>Thanks!</p>
| Community | -1 | <p>Hint: </p>
<ul>
<li><p>The Geometric series...</p></li>
<li><p>Differentiation.</p></li>
</ul>
|
119,636 | <p>I want to know the general formula for $\sum_{n=0}^{m}nr^n$ for some constant r and how it is derived.</p>
<p>For example, when r = 2, the formula is given by:
$\sum_{n=0}^{m}n2^n = 2(m2^m - 2^m +1)$
according to <a href="http://www.wolframalpha.com/input/?i=partial+sum+of+n+2%5En" rel="noreferrer">http://www.wolframalpha.com/input/?i=partial+sum+of+n+2%5En</a></p>
<p>Thanks!</p>
| Marc van Leeuwen | 18,880 | <p>Observe that your formula $\sum_{n=0}^{m}nr^n$ can be obtained from $\sum_{n=0}^{m}x^n$ by applying $x\frac d{dx}$ (deriving and then multiplying by $x$) and then substituting $r$ for $x$. Now for geometric series one has the well known formula
$$
\sum_{n=0}^mx^n=\frac{x^0-x^{m+1}}{1-x}
$$
and applying $x\frac d{dx}$ to the right hand side gives
$$
\frac{x-(m+1)x^{m+1}+mx^{m+2}}{(1-x)^2} = x\frac{1-x^m+mx^m(x-1)}{(1-x)^2}
=x\left(\frac{1-x^m}{(1-x)^2}-\frac{mx^m}{1-x}\right)
$$
so that your answer should be
$$
\sum_{n=0}^{m}nr^n=r\left(\frac{1-r^m}{(1-r)^2}-\frac{mr^m}{1-r}\right).
$$
For $r=2$ this gives $2(1+(m-1)2^m)$, in accordance with what you found.</p>
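<p>The formula is easy to verify against a brute-force sum; here is a short Python check (my addition, with arbitrarily chosen test values of $r$ and $m$):</p>

```python
def brute(r, m):
    # Direct evaluation of sum_{n=0}^{m} n * r^n
    return sum(n * r ** n for n in range(m + 1))

def closed(r, m):
    # The closed form r*((1 - r^m)/(1 - r)^2 - m*r^m/(1 - r)), valid for r != 1
    return r * ((1 - r ** m) / (1 - r) ** 2 - m * r ** m / (1 - r))

for r in (2.0, 3.0, 0.5, -1.5):
    for m in range(0, 12):
        assert abs(brute(r, m) - closed(r, m)) < 1e-6

# The r = 2 special case quoted from Wolfram|Alpha
m = 7
assert brute(2, m) == 2 * (m * 2 ** m - 2 ** m + 1)
print("closed form verified")
```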
|
1,158,642 | <p>Let $E$ be a vector bundle of rank $r$ and let $\phi:E\rightarrow \mathbb C_p$ be a non-vanishing map to the skyscraper sheaf.
Consider the kernel $F$ of this map, which is a sub-bundle of $E$; every fiber of $F$ has rank $r$, except the one over $p$, which has rank $r-1$.
So why do we say that $F$ has rank $r$?
Thanks </p>
| Georges Elencwajg | 3,217 | <p>Let $X$ be a smooth curve, $E$ a vector bundle on $X$ of rank $r$ and $\mathcal E$ the locally free sheaf of sections of $E$.<br>
Suppose you have an exact sequence of sheaves $ 0\to \mathcal F\to \mathcal E \to \mathbb C_p \to 0$ with $\mathbb C_p$ the skyscraper sheaf concentrated at $p$ with stalk $\mathbb C$ . </p>
<p>Is $\mathcal F$ a locally free sheaf of rank $r$, thus corresponding to a vector bundle $F$ of rank $r$ ? Yes!<br>
Is $\mathcal F$ a subsheaf of $\mathcal E$ ? Yes!<br>
Is $F$ a subbundle of $E$ ? <strong>NO !</strong> </p>
<p>And therein lies the confusion: the morphism of <strong>stalks</strong> $\mathcal F_p\to \mathcal E_p$ is injective but tensoring with $\mathbb C$ is not an exact functor so that the resulting morphism of <strong>fibers</strong> $ \mathcal F_p\otimes _{\mathcal O_{X_p}}\mathbb C=F(p)\to \mathcal E_p \otimes _{\mathcal O_{X_p}} \mathbb C=E(p)$ is not injective.<br>
In other words $F$ is not a subbundle of $E$. </p>
<p><strong>Toy example</strong><br>
Just think of the ideal $\mathcal F=\mathcal I\subset \mathcal E=\mathcal O_X$ of functions vanishing at $p$ and stare at the exact sequence of <strong>sheaves</strong> $$ 0\to \mathcal I\to \mathcal O_X \to \mathbb C_p \to 0 $$</p>
|
732,996 | <p><img src="https://i.stack.imgur.com/kXJEt.png" alt="enter image description here"></p>
<p>Hi! I am working on some ratio and root test online homework problems for my calc2 class and I am not sure how to completely solve this problem. I guessed on the second part that it converges, but I'm not sure how to find the value that it converges to. If someone could possibly help me with this problem, it would be greatly appreciated. </p>
| Ellya | 135,305 | <p>$\rho=\lim_{n\rightarrow\infty}|\frac{a_{n+1}}{a_n}|=\lim_{n\rightarrow\infty}|\frac{\frac{1}{(2n+2)!}}{\frac{1}{(2n)!}}|=\lim_{n\rightarrow\infty}|\frac{(2n)!}{(2n+2)!}|=\lim_{n\rightarrow\infty}\frac{1}{(2n+1)(2n+2)}=\lim_{n\rightarrow\infty}\frac{1}{4n^2+6n+2}=0$</p>
<p>I think you may have overlooked the fact that it was a factorial?</p>
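<p>A small numerical illustration of this limit (assuming, as above, that the terms are $a_n=\frac{1}{(2n)!}$; not from the original answer):</p>

```python
from math import factorial

# ratio a_{n+1}/a_n = (2n)!/(2n+2)! = 1/((2n+1)(2n+2)) -> 0
ratios = [factorial(2 * n) / factorial(2 * n + 2) for n in range(1, 8)]
for n, r in zip(range(1, 8), ratios):
    assert abs(r - 1 / ((2 * n + 1) * (2 * n + 2))) < 1e-15
assert ratios[-1] < ratios[0]  # the ratios shrink toward 0
```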
|
605,772 | <p>Solving $x^2-a=0$ with Newton's method, you can derive the sequence $x_{n+1}=(x_n + a/x_n)/2$ by taking the first order approximation of the polynomial equation, and then use that as the update. I can successfully prove that the error of this method converges quadratically. However, I can't seem to prove this for the residual, and this is likely a simple problem in arithmetic:</p>
<p>\begin{align*}
|x_{n+1}^2 - {a}| &= \left|\frac{1}{4}\Big(x_n+\frac{a}{x_n}\Big)^2 - {a}\right| \\
&= \left|\frac{1}{4}\Big(x_n^2+2a +\frac{a^2}{x_n^2}\Big) - {a}\right| \\
&= \left|\big(\frac{1}{2}x_n\big)^2-\frac{1}{2}a +\big(\frac{a}{2x_n}\big)^2\right| \\
&= \frac{1}{4}\left|x_n^2-2a +\big(\frac{a}{x_n}\big)^2\right| \\
&= \frac{1}{4}\left|\big(x_n+\frac{a}{x_n}\big)^2-2a +\big(\frac{a}{x_n}\big)^2\right|
\end{align*}</p>
<p>I get stuck here, as well as trying other expansions/factorizations. Is there a way to have this simplify?</p>
| Ross Millikan | 1,827 | <p>After the second line, you can go to $ |\frac 14(x_n-\frac a{x_n})^2|$</p>
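<p>To spell the hint out: $x_{n+1}^2-a=\frac14\left(x_n-\frac a{x_n}\right)^2=\frac{(x_n^2-a)^2}{4x_n^2}$, so the residual is (up to the factor $1/(4x_n^2)$) squared at each step. A quick numerical sketch (the starting values are arbitrary, not from the question):</p>

```python
a, x = 2.0, 1.5          # solve x^2 - a = 0 starting from a rough guess
for _ in range(4):
    r = x * x - a                # residual before the step
    x_new = (x + a / x) / 2      # Newton step
    r_new = x_new * x_new - a
    # identity from the hint: r_new = r^2 / (4 x^2)
    assert abs(r_new - r * r / (4 * x * x)) < 1e-12
    x = x_new
print(x)  # ≈ sqrt(2)
```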
|
3,498,199 | <p>Suppose if a matrix is given as</p>
<p><span class="math-container">$$ \begin{bmatrix}
4 & 6\\
2 & 9
\end{bmatrix}$$</span></p>
<p>We have to find its eigenvalues and eigenvectors.</p>
<p>Can we first apply elementary row operations and then find the eigenvalues?</p>
<p>Also, is there any way to tell from the matrix whether it is diagonalizable or not?</p>
| Luca Citi | 197,925 | <p>As others have noted you can't apply arbitrary elementary row operations to a matrix and expect the eigenvalues/vectors be preserved. The closest you can do is to apply them to both rows and columns in a specific way as follows.</p>
<p>Consider the matrix
<span class="math-container">$$
T =
\begin{bmatrix}
1 & 0\\
\alpha & 1
\end{bmatrix}
$$</span>
and its inverse
<span class="math-container">$$
T^{-1} =
\begin{bmatrix}
1 & 0\\
-\alpha & 1
\end{bmatrix}.
$$</span></p>
<p>Pre-multiplying a matrix by <span class="math-container">$T$</span> is like performing the operation <span class="math-container">$R_2 \leftarrow R_2 + \alpha R_1$</span> while post-multiplying by <span class="math-container">$T^{-1}$</span> is like performing <span class="math-container">$C_1 \leftarrow C1 - \alpha C_2$</span>.</p>
<p>Since <span class="math-container">$A$</span> and <span class="math-container">$T A T^{-1}$</span> are similar, they have the same eigenvalues and eigenvectors. One can apply analogue operations to larger matrices. If several such operations are applied they have to be applied in opposite order among rows and columns, so that <span class="math-container">$T_1 \dots T_n A T_n^{-1} \dots T_1^{-1}$</span> is similar to <span class="math-container">$A$</span>.</p>
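<p>A small computational illustration (a sketch in exact rational arithmetic, using the matrix from the question; the choice $\alpha=-1/2$ is mine, picked to eliminate the entry below the pivot): the row operation alone changes the characteristic polynomial, while the combined <span class="math-container">$TAT^{-1}$</span> preserves it.</p>

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly(A):
    # for a 2x2 matrix: x^2 - tr(A)*x + det(A); return (tr, det)
    return (A[0][0] + A[1][1], A[0][0] * A[1][1] - A[0][1] * A[1][0])

A = [[F(4), F(6)], [F(2), F(9)]]
alpha = F(-1, 2)                      # R2 <- R2 + alpha*R1
T = [[F(1), F(0)], [alpha, F(1)]]
Tinv = [[F(1), F(0)], [-alpha, F(1)]]

row_op_only = matmul(T, A)            # just the row operation
similar = matmul(matmul(T, A), Tinv)  # row + matching column operation

assert char_poly(similar) == char_poly(A)      # eigenvalues preserved
assert char_poly(row_op_only) != char_poly(A)  # row op alone changes them
tr, det = char_poly(A)
print(int(tr), int(det))  # 13 24
```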
|
221,351 | <p>I asked the following question (<a href="https://math.stackexchange.com/questions/1487961/reference-for-every-finite-subgroup-of-operatornamegl-n-mathbbq-is-con">https://math.stackexchange.com/questions/1487961/reference-for-every-finite-subgroup-of-operatornamegl-n-mathbbq-is-con</a>) on math.stackexchange.com and received no answers, so I thought I would ask it here. I've asked several people in my department who were all stumped by the question.</p>
<p>The question is: why is every finite subgroup of $\operatorname{GL}_n(\mathbb{Q})$ conjugate to a finite subgroup of $\operatorname{GL}_n(\mathbb{Z})$?</p>
<p>Note that at least for $n=2$ the question of isomorphism is much easier, since one can (with some effort) work out exactly which finite groups can be subgroups of $\operatorname{GL}_2(\mathbb{Q})$. Further, there are isomorphic finite subgroups of $\operatorname{GL}_2(\mathbb{Q})$ that are not conjugate to each other. For example, the groups generated by $-I_{2 \times 2}$ and by $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ are both isomorphic to $C_2$, but they cannot be conjugate to each other because the eigenvalues of the two generators are different.</p>
<p>If there is a relatively simple proof, that would be ideal, but a reference with a potentially long proof is fine as well.</p>
<p>Thanks for any assistance.</p>
| Geoff Robinson | 14,450 | <p>David Speyer has answered the question, but let me add some background. The general result is that if $R$ is a principal ideal domain with field of fractions $K$, and $G$ is a finite group, then every finite dimensional representation of $G$ over $K$ may be realised over $R$ ( ie, is equivalent to a representation over $R$). The general proof is essentially the one that David gives. </p>
<p>As well as the integral case considered in the question, the general result was important for the development by Richard Brauer of modular representation theory: the idea of "reduction (mod $p$)" of a complex representation relies on it: the representation of the finite group $G$ is first realised over a suitable finite extension $K$ of $\mathbb{Q}$, and then $K$ may be viewed as the field of fractions of a localisation $R$ at a prime ideal $\pi$ containing $p$ of the ring of algebraic integers in $K$. Then since $R$ is a PID, the $K$-representation may then be realized over $R$. Then the given representation may be reduced (mod $\pi$), yielding representation of $G$ over the finite field $R/\pi$.</p>
<p>In general, it is not a straightforward issue to decide whether a representation of a finite group $G$ over a number field $K$ may be realised over the ring of integers of $K$. Some of the issues are well illustrated in the article "Three letters to Walter Feit" by J-P. Serre (which is visible online), which considers special cases of this question.</p>
|
465,255 | <p>Does there exists any form of Algebra where operators can be assumed as variables?</p>
<p>For example:
$$
1+2\times3=7
$$
can be considered as:
$$
1\:(\mathrm{\,X})\:2\:(\mathrm{Y})\:3=7
$$
?</p>
| vishva8kumara | 614,215 | <p>I don't think you have to do any calculation to get the answer. This can be just estimated.</p>
<ul>
<li><p>4 Questions with 25 points each is max 100 points.</p>
</li>
<li><p>400 students answering 4 questions - 200 ~ 350 gets each answer right.</p>
</li>
</ul>
<p>Average must be between 50 and 100.</p>
<p>Only <strong>D</strong> fits the estimation range</p>
|
2,697,069 | <p>Two series of functions are given in which I cannot figure out how to find $M_n$ of the second problem. $$1.\space \sum_{n=1}^{\infty} \frac{1}{1+x^n}, x\in[k,\infty)\\ 2. \space \sum_{n=1}^{\infty} (\cos x)^n, x\in(0,\pi)$$.. </p>
<p>I have determined the $M_n$ for problem no. $1.$ [$\space|\sum_{n=1}^{\infty} \frac{1}{1+x^n}|<|\sum_{n=1}^{\infty} \frac{1}{1+k^n}|<\sum_{n=1}^{\infty} \frac{1}{k^n}$] </p>
<p>From problem no. $2.$, since $-1\leq \cos x\leq1$, therefore for higher $n$ the values of $\cos x$ will lie between $[-1,1]$ and in $(0,\pi)$ $\cos x$ is decreasing. But is it correct to choose $n$ as $M_n$, so that $$|f_n(x)|=|(\cos x)^n|<n,$$ where $n$ is decreasing. </p>
<p>I am not sure what the $M_n$ should be. Any help or suggestion please? Any help is greatly appreciated.</p>
| Rohan Shinde | 463,895 | <p>$$2x=5y$$
$$\Rightarrow y=\frac {2x}{5}$$
$$\frac y3=\frac z4\Rightarrow \frac {x}{15}=\frac z8$$
$$\Rightarrow \frac xz=\frac {15}{8}$$</p>
|
2,697,069 | <p>Two series of functions are given in which I cannot figure out how to find $M_n$ of the second problem. $$1.\space \sum_{n=1}^{\infty} \frac{1}{1+x^n}, x\in[k,\infty)\\ 2. \space \sum_{n=1}^{\infty} (\cos x)^n, x\in(0,\pi)$$.. </p>
<p>I have determined the $M_n$ for problem no. $1.$ [$\space|\sum_{n=1}^{\infty} \frac{1}{1+x^n}|<|\sum_{n=1}^{\infty} \frac{1}{1+k^n}|<\sum_{n=1}^{\infty} \frac{1}{k^n}$] </p>
<p>From problem no. $2.$, since $-1\leq \cos x\leq1$, therefore for higher $n$ the values of $\cos x$ will lie between $[-1,1]$ and in $(0,\pi)$ $\cos x$ is decreasing. But is it correct to choose $n$ as $M_n$, so that $$|f_n(x)|=|(\cos x)^n|<n,$$ where $n$ is decreasing. </p>
<p>I am not sure what the $M_n$ should be. Any help or suggestion please? Any help is greatly appreciated.</p>
| TheSimpliFire | 471,884 | <p>We have $$2x=5y\implies \color{red}{x=\frac52}\color{blue}y$$ and $$\frac{y}{3} = \frac{z}{4}\implies \color{blue}{y=\frac34z}$$ so $$\color{red}{x=\frac52}\cdot\color{blue}{\frac34z}\implies \boxed{\frac xz=\frac{15}8}$$</p>
|
10,427 | <p>I like Mathematica, but its syntax baffles me.</p>
<p>I am trying to figure out how to minimize the whitespace around a graphic.</p>
<p>For example,</p>
<pre><code>ParametricPlot3D[{r Cos[t], r Sin[t], r^2}, {r, 0, 1}, {t, 0, 2 \[Pi]},
Boxed -> True, Axes -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/jGpvA.png" alt="3d bounding box on"></p>
<p>Puts the 3d bounding box at the limits of the view. But if I don't show the 3d bounding box,</p>
<pre><code>ParametricPlot3D[{r Cos[t], r Sin[t], r^2}, {r, 0, 1}, {t, 0, 2 \[Pi]},
Boxed -> False, Axes -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/E8skr.png" alt="3d bounding box off"></p>
<p>there is all this white space around the actual object.</p>
<p>Is there some way (syntax) that can put the view just around the visible objects?</p>
<h1>Edit in response to answers</h1>
<p>Ok, from the below answers, I have two solutions; 1) use ImageCrop, or 2) use <code>Method->{"ShrinkWrap" -> True}</code>. However both of these options do a little something strange to the plot I want (maybe it is just a problem with the plot itself). </p>
<p>So the actual plot I am after is,</p>
<pre><code>Module[{r = 1, \[Theta] = \[Pi]/2, \[CurlyPhi] = \[Pi]/6, \[Psi] = \[Pi]/12},
Framed@Show[
Graphics3D[
{
{Arrowheads[.025],
Arrow[{{0, 0, 0}, {1.1, 0, 0}}], Text["x", {1.2, 0, 0}],
Arrow[{{0, 0, 0}, {0, 1.1, 0}}], Text["y", {0, 1.2, 0}],
Arrow[{{0, 0, 0}, {0, 0, 1.1}}], Text["z", {0, 0, 1.2}],
Arrow[{{0, 0, 0}, r {Cos[\[Theta]] Sin[\[CurlyPhi]],
Sin[\[Theta]] Sin[\[CurlyPhi]], Cos[\[CurlyPhi]]}}]},
{Specularity[White, 50], Opacity[.1], Sphere[{0, 0, 0}, r]}
},
Boxed -> False,
ImageSize -> 600,
PlotRange -> 1.1 {{-r, r}, {-r, r}, {0, r}}
]]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/emm3z.png" alt="enter image description here"></p>
<p>Which has too much whitespace. If I replace <code>Framed@Show[</code> with <code>Framed@ImageCrop@Show[</code> I
get,
<img src="https://i.stack.imgur.com/AUCNY.png" alt="enter image description here"></p>
<p>which actually crops some of the (hemi)sphere. If just use <code>Method -> {"ShrinkWrap" -> True},</code> in the <code>Show</code> options, I get,</p>
<p><img src="https://i.stack.imgur.com/WhX2Y.png" alt="Mathematica graphics"></p>
<p>which looks almost correct, but the <code>x</code> and <code>z</code> text labels are now not included. Seems like I can't win!</p>
| Yves Klett | 131 | <p><code>ImageCrop</code> seems to be a bit buggy (at least right here in Version 8.04, Win 64). It tends to crop lightly coloured areas rather aggressively. You could try the following work-around, which works more reliably:</p>
<pre><code>imcrop[img_] := ImagePad[img, -BorderDimensions[img, 0]]
g = Graphics3D[{Specularity[White, 50], Opacity[.1],
Sphere[{0, 0, 0}, 1]}, Boxed -> False,
PlotRange -> 1.1 {{-1, 1}, {-1, 1}, {0, 1}}
];
Column[Framed /@ {g, ImageCrop[g], imcrop[g]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/5r9qY.png" alt="Mathematica graphics"></p>
<p>For your graphics it seems to work without additional changes to <code>Opacity</code> or similar:</p>
<pre><code>g = Module[{r =
1, \[Theta] = Pi/2, \[CurlyPhi] = Pi/6, \[Psi] = Pi/12},
Show[Graphics3D[{{Arrowheads[.025],
Arrow[{{0, 0, 0}, {1.1, 0, 0}}], Text["x", {1.2, 0, 0}],
Arrow[{{0, 0, 0}, {0, 1.1, 0}}], Text["y", {0, 1.2, 0}],
Arrow[{{0, 0, 0}, {0, 0, 1.1}}], Text["z", {0, 0, 1.2}],
Arrow[{{0, 0, 0},
r {Cos[\[Theta]] Sin[\[CurlyPhi]],
Sin[\[Theta]] Sin[\[CurlyPhi]],
Cos[\[CurlyPhi]]}}]}, {Specularity[White, 50], Opacity[.1],
Sphere[{0, 0, 0}, r]}}, Boxed -> False, ImageSize -> 600,
PlotRange -> 1.1 {{-r, r}, {-r, r}, {0, r}}]]];
Framed@imcrop[g]
</code></pre>
<p><img src="https://i.stack.imgur.com/IH3X1.png" alt="Mathematica graphics"></p>
|
2,571,909 | <p>$$\left|\frac{-10}{x-3}\right|>\:5$$</p>
<ul>
<li>Find the values that $x$ can take. </li>
</ul>
<p>I know that</p>
<p>$$\frac{-10}{x-3}>\:5$$
or
$$\frac{-10}{x-3}<\:-5$$</p>
| nonuser | 463,553 | <p>$$\left|\frac{-10}{x-3}\right|>\:5$$
so $$10> 5|x-3| \Longrightarrow -2<x-3<2 \Longrightarrow 1<x<5; x\ne 3$$</p>
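<p>A brute-force sanity check of the solution set $1&lt;x&lt;5$, $x\ne 3$ (sampling on a grid; illustrative only):</p>

```python
def holds(x):
    return x != 3 and abs(-10 / (x - 3)) > 5

xs = [i / 100 for i in range(-500, 1001)]   # grid on [-5, 10]
sol = [x for x in xs if holds(x)]
assert all(1 < x < 5 and x != 3 for x in sol)              # nothing outside
assert all(holds(x) for x in xs if 1 < x < 5 and x != 3)   # everything inside
print(min(sol), max(sol))  # 1.01 4.99
```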
|
590,205 | <p>I've been trying to tackle this problem for some while now, but don't know how to start correctly. I know that the cone on $(0,1)$ is given by $$\text{Cone}((0,1)) = (0,1) \times [0,1]/((0,1)\times\{1\}).$$ But how do I show that it cannot be embedded in a Euclidean space? To me it looks like it is possible (an open cylinder with the "ceiling" collapsed to one point). I'm guessing that my problem also lies in what a quotient really is, because I can't really get a good feeling for it.</p>
<p>I don't want the answer, I just want a push in the right direction so that I can think about how to solve it.</p>
<p>Edit:
New insight: when thinking about the cone, I picture something like this (I guess), but that would mean it can be embedded in $\mathbb{R}^2$, which contradicts the question.<img src="https://i.stack.imgur.com/uSiCy.png" alt="Cone as I think it should be">
<p>Thanks</p>
| Stefan Hamcke | 41,672 | <p>The cone $C(J)$ where $J=\mathrm{int}(I)$ is not first countable. Consider the subspace $$B:=S\times I:=\left\{\frac1n\middle|n\in\Bbb N\right\}\times I$$ of $J\times I$. It is closed, and each closed and saturated subset $A$ of $B$ is either disjoint from $J\times\{1\}$, in this case it is saturated in $J\times I$, or it contains $J\times\{1\}\cap B$, but in that case its saturation is $A\cup J\times\{1\}$ which is closed. So each closed and saturated subset of $B$ is the intersection of a closed and saturated set in $J\times I$ with $B$. Therefore the restriction of the quotient map $q:J\times I\to C(J)$ to $B$ is a quotient map and $C(S)$ is a subspace of $C(J)$.</p>
<p>Now, $C(S)$ is a CW complex with infinitely many cells meeting the apex of the cone, so it is not first countable. Hence the superspace $C(J)$ cannot be a subspace of a metric space.</p>
|
1,200,358 | <blockquote>
<p>Assume the $n$-th partial sum of a series $\sum_{n=1}^\infty a_n$ is the following:
$$S_n=\frac{8n-6}{4n+6}.$$
Find $a_n$ for $n > 1$.</p>
</blockquote>
<p>I'm really stuck on what to do here.</p>
| ASB | 111,607 | <p>Observe that for each $n\in \mathbb{N}$, </p>
<p>$a_{n+1}=S_{n+1}-S_n= \dfrac{8(n+1)-6}{4(n+1)+6}-\dfrac{8n-6}{4n+6}$ and $a_1=S_1=\dfrac{8\cdot 1-6}{4\cdot 1+6}=\dfrac{1}{5}$.</p>
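<p>One can check that the recovered terms really do rebuild the given partial sums (a sketch using exact fractions):</p>

```python
from fractions import Fraction

def S(n):
    return Fraction(8 * n - 6, 4 * n + 6)

a = {1: S(1)}
for n in range(2, 25):
    a[n] = S(n) - S(n - 1)        # a_n = S_n - S_{n-1} for n > 1

running = Fraction(0)
for n in range(1, 25):
    running += a[n]
    assert running == S(n)        # partial sums are reproduced exactly
print(a[2], a[3])  # 18/35 2/7
```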
|
2,672,097 | <p>What are the must-know concepts and best resources for preparing the <strong>mathematical background for advanced machine learning studies</strong>?</p>
<p>Currently, looking into the book <strong>What is Mathematics? by Richard Courant</strong> to strengthen my fundamentals. Are there any better references that can help? <strong>And would it be worth spending time on such basic concepts like number system, congruences etc?</strong></p>
<p>Also, looking for more study material that can help me take a step towards a <strong>deeper understanding</strong> of the subject towards the discipline of data science and machine learning.</p>
| user3658307 | 346,641 | <p>It definitely depends on what you want to do, since ML is a relatively large and diverse field now. A quick summary might be something like this:</p>
<p><strong>Basics</strong> (i.e. needed for the more advanced ones below)</p>
<ul>
<li>Linear algebra (e.g. matrix operations and decompositions, vector spaces)</li>
<li>Multivariate calculus (e.g. gradients and jacobians for optimization)</li>
<li>Basic probability and statistics (e.g. basic distributions & estimators)</li>
<li>Algorithmic analysis</li>
<li>Basic signal processing (e.g. convolutions, Fourier series)</li>
</ul>
<p><strong>Mathematical Theory</strong> (e.g. PAC theory)</p>
<ul>
<li>Analysis & measure theory (e.g. advanced probability)</li>
<li>Functional analysis</li>
</ul>
<p><strong>Probabilistic Modelling</strong> (e.g. Bayesian deep learning, generative modelling)</p>
<ul>
<li>Stochastic processes & information theory (e.g. MCMC, variational inference)</li>
<li>Advanced statistics (e.g. properties of estimators, convergence of distributions)</li>
</ul>
<p><strong>Implementation-Oriented ML</strong></p>
<ul>
<li>Optimization (e.g. convex optimization)</li>
<li>Numerical analysis (e.g. discretizations)</li>
<li>Computational numerics (e.g. error accumulation, matrix algorithms)</li>
</ul>
<p>(Just to link some relevant questions on how to study basic ML mathematically to this one:
<a href="https://math.stackexchange.com/questions/425230/where-to-start-machine-learning?rq=1">[1]</a>,
<a href="https://math.stackexchange.com/questions/1331498/studying-machine-learning?rq=1">[2]</a>,
<a href="https://math.stackexchange.com/questions/908987/mathematics-disciplines-underpinning-machine-learning?rq=1">[3]</a>,
<a href="https://math.stackexchange.com/questions/1205684/mathematical-introduction-to-machine-learning?rq=1">[4]</a>,
<a href="https://math.stackexchange.com/questions/668574/what-all-maths-do-i-need-to-know-to-become-good-at-machine-learning?rq=1">[5]</a>,
<a href="https://math.stackexchange.com/questions/1146143/what-mathematics-should-i-study-to-understand-neural-nets-machine-learning?rq=1">[6]</a>,
<a href="https://math.stackexchange.com/questions/1349526/what-maths-courses-are-needed-for-machine-learning?rq=1">[7]</a>,
<a href="https://math.stackexchange.com/questions/1857677/what-is-a-good-book-for-math-students-to-learn-machine-learning-in-depth?rq=1">[8]</a>,
<a href="https://math.stackexchange.com/questions/1331498/studying-machine-learning?rq=1">[9]</a>,
<a href="https://math.stackexchange.com/questions/1470198/sources-to-learn-and-understand-advanced-probability-in-ml-models?rq=1">[10]</a>
)</p>
|
1,088,734 | <p>It's possible the integral bellow. What way I must to use for solve it.</p>
<p>$$\int \sin(x)x^2dx$$</p>
| kmbrgandhi | 132,855 | <p>Here's a hint: every monic cubic can be factored (over $\mathbb{C}$) in the following way:
$$p(x) = (x-r_1)(x-r_2)(x-r_3)$$
You know, from the given, that $r_1$, $r_2$, $r_3$ are <em>distinct</em> positive integers, and you know that $r_1r_2r_3 = 26$. It turns out that there is only one possible unordered triple $(r_1, r_2, r_3)$ that satisfies these properties – can you show this?</p>
<p>Once you find these roots, you can use Vieta's, as others have mentioned, to find the values of $a$ and $b$.</p>
|
4,000,576 | <blockquote>
<p>What is the value of the following integral:
<span class="math-container">$$\int_0^{2\pi}\frac{1}{4\cos^2(t)+9\sin^2(t)}\mathrm{d}t$$</span>
<span class="math-container">$\frac\pi9$</span> ; <span class="math-container">$\frac\pi6$</span> ; <span class="math-container">$\frac\pi3$</span> ; <span class="math-container">$\frac\pi2$</span> or <span class="math-container">$\frac\pi4$</span>?</p>
</blockquote>
<p>a full solution for this problem would be much appreciated</p>
| Quanto | 686,284 | <p>Integrate with Fourier series as follows</p>
<p><span class="math-container">$$\int_0^{2\pi}\frac{1}{4\cos^2 t+9\sin^2 t}{d}t
=\int_0^{2\pi}\frac{2}{13-5\cos 2t} {d}t\\
=\int_0^{2\pi}\left( \frac16 + \frac13\sum_{n=1}^\infty \frac{1}{5^n} \cos (2n t) \right)dt
=\frac\pi3
$$</span></p>
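<p>A numerical cross-check of the value (midpoint rule; a sketch, not part of the derivation):</p>

```python
from math import cos, sin, pi

def f(t):
    return 1 / (4 * cos(t) ** 2 + 9 * sin(t) ** 2)

N = 10_000
h = 2 * pi / N
approx = h * sum(f((k + 0.5) * h) for k in range(N))  # midpoint rule
assert abs(approx - pi / 3) < 1e-9   # matches pi/3
print(approx)
```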
|
582,283 | <p>$H$ is a subgroup of $G$ with $H$ not equal to $G$.</p>
<p>Let $S=G-H$. I am being asked to prove that $\langle S \rangle=G$.</p>
<p>Any tips for solving this? I think I can do it in $S_3$, but I can't prove the general case.</p>
| Marc van Leeuwen | 18,880 | <p>Hint. The complement $S$ contains at least one element. You can fix any one $s\in S$, and produce any $h\in H$ by a <em>single</em> multiplication involving $s$.</p>
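<p>Not a proof, but the statement is easy to sanity-check by machine in the $S_3$ case mentioned in the question (a sketch; permutations are represented as tuples, and the subgroup enumeration is brute force):</p>

```python
from itertools import permutations, combinations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations act on {0, 1, 2}
    return tuple(p[q[i]] for i in range(3))

G = set(permutations(range(3)))   # S_3, six elements
e = (0, 1, 2)

def generated(S):
    gen = set(S) | {e}
    while True:
        new = {compose(a, b) for a in gen for b in gen} - gen
        if not new:
            return gen
        gen |= new

# proper subgroups H < S_3: subsets of size < 6 containing e and
# closed under composition (closure suffices for finite subsets)
proper = [set(c) for r in range(1, 6) for c in combinations(sorted(G), r)
          if e in c and all(compose(a, b) in c for a in c for b in c)]

for H in proper:
    assert generated(G - H) == G  # the complement generates all of S_3
print(len(proper))  # 5 proper subgroups: {e}, three of order 2, A_3
```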
|
1,304,344 | <p>How do I find the following:</p>
<p>$$(0.5)!(-0.5)!$$</p>
<p>Can someone help me step by step here?</p>
| Tim Raczkowski | 192,581 | <p>Use the gamma function:</p>
<p>$$\Gamma(x)=\int_0^\infty e^{-t}t^{x-1}\,dt.$$</p>
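<p>With the standard convention $x!=\Gamma(x+1)$ (assumed here), the two values are $\Gamma(3/2)=\frac{\sqrt\pi}{2}$ and $\Gamma(1/2)=\sqrt\pi$, so the product is $\pi/2$. A quick numerical check:</p>

```python
from math import gamma, pi, sqrt

half_fact = gamma(1.5)       # (0.5)!  = Gamma(3/2) = sqrt(pi)/2
neg_half_fact = gamma(0.5)   # (-0.5)! = Gamma(1/2) = sqrt(pi)
assert abs(half_fact - sqrt(pi) / 2) < 1e-12
assert abs(neg_half_fact - sqrt(pi)) < 1e-12
assert abs(half_fact * neg_half_fact - pi / 2) < 1e-12
print(half_fact * neg_half_fact)  # ≈ 1.5707963... = pi/2
```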
|
4,468,112 | <p>Let <span class="math-container">$a,b\in\mathbb R$</span> with <span class="math-container">$a<b$</span>, <span class="math-container">$$\mathcal D_{[a,\:b]}:=\{(t_0,\ldots,t_k):k\in\mathbb N\text{ and }a=t_0<\cdots<t_k\}$$</span> and <span class="math-container">$$\mathcal T_\varsigma:=\{(\tau_1,\ldots,\tau_k):\tau_i\in[t_{i-1},t_i]\text{ for all }i\in\{1,\ldots,k\}\}\;\;\;\text{for }\varsigma=(t_0,\ldots,t_k)\in\mathcal D_{[a,\:b]}.$$</span> Moreover, let <span class="math-container">$f:[a,b]\to\mathbb R$</span> be continuous and <span class="math-container">$g:[a,b]\to\mathbb R$</span> be of bounded variation. We can show that <span class="math-container">$$\int_a^bf\:{\rm d}g:=\lim_{\substack{|\varsigma|\to0+\\\varsigma\in\mathcal D_{[a,\:b]}\\\tau\in\mathcal T_\varsigma}}S_{\varsigma,\:\tau}(f,g)$$</span> is well-defined, where <span class="math-container">$$|\varsigma|:=\max_{1\le i\le k}(t_i-t_{i-1})\;\;\;\text{for }\varsigma=(t_0,\ldots,t_k)\in\mathcal D_{[a,\:b]}$$</span> and <span class="math-container">$$S_{\varsigma,\:\tau}(f,g):=\sum_{i=1}^kf(\tau_i)(g(t_i)-g(t_{i-1}))\;\;\;\text{for }\varsigma=(t_0,\ldots,t_k)\in\mathcal D_{[a,\:b]}\text{ and }\tau\in\mathcal T_\varsigma.$$</span></p>
<blockquote>
<p>Assuming that <span class="math-container">$g$</span> is differentiable (not necessarily <em>continuously</em> differentiable), are we able to show that <span class="math-container">$$\int_a^bf\:{\rm d}g=\int_a^bf(s)g'(s)\:{\rm d}s\tag1?$$</span></p>
</blockquote>
<p>Let <span class="math-container">$\varsigma=(t_0,\ldots,t_k)\in\mathcal D_{[a,\:b]}$</span>. By the mean value theorem, there is a <span class="math-container">$\tau\in\mathcal T_\varsigma$</span> with <span class="math-container">$$S_{\varsigma,\:\tau}(f,g)=\sum_{i=1}^kf(\tau_i)g'(\tau_i)(t_i-t_{i-1})=S_{\varsigma,\:\tau}(fg',\operatorname{id}_{[a,\:b]})\tag2,$$</span> but does the right-hand side tend to the right-hand side of <span class="math-container">$(1)$</span> as <span class="math-container">$|\varsigma|\to0+$</span>? This is clearly the case when <span class="math-container">$g'$</span> is continuous though ...</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$\int_{a}^{b} f(x) dg= \int f(x)\frac{dg}{dx} dx =\int_{a}^t f(x) g'(x) dx+\int_{t}^{b} f(x) g'(x) dx,$</span></p>
<p>where <span class="math-container">$g'(x)$</span> has a finite jump discontinuity at <span class="math-container">$x=t$</span>.</p>
<p><strong>Edit:</strong></p>
<p>For example let <span class="math-container">$f(x)=x, g(x)=|x|$</span>, then <span class="math-container">$g'(x)=\operatorname{sgn}(x)$</span> is discontinuous at <span class="math-container">$x=0$</span>, so we have</p>
<p><span class="math-container">$I=\int_{-1}^2 x d|x|= \int_{-1}^{2} x \frac{d|x|}{dx} dx=\int_{-1}^{2} x ~\text{sgn}(x)~ dx= \int_{-1}^{0} x (-1) dx+ \int_{0}^{2} x.1 dx=\frac{1}{2}+2=\frac{5}{2}.$</span></p>
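<p>The value $5/2$ can also be confirmed by a direct Riemann–Stieltjes sum (numerical sketch, with midpoint tags):</p>

```python
N = 30_000
lo, hi = -1.0, 2.0
h = (hi - lo) / N

def f(x): return x
def g(x): return abs(x)

total = 0.0
for i in range(N):
    t0, t1 = lo + i * h, lo + (i + 1) * h
    tau = 0.5 * (t0 + t1)               # tag in [t0, t1]
    total += f(tau) * (g(t1) - g(t0))   # Riemann-Stieltjes sum
assert abs(total - 2.5) < 1e-6
print(total)  # ≈ 2.5
```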
|
962,691 | <p>I'm trying to integrate $ \int_0^1\frac {u^2 + 1}{u - 2}du$</p>
<p>I've calculated that this equates to $ [\frac{u^2}{2}+2u +5\ln(u-2)]_0^1 $</p>
<p>But then I have to evaluate $ln(-1)$ and $ln(-2)$ which are obviously not defined in the real plane. I have drawn the graph and I know for certain that this integral exists. Any guidance on what I'm missing would be great.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>rewrite your integrand in the form $\frac{u^2-4}{u-2}+\frac{5}{u-2}$</p>
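<p>Carrying the hint through: $\frac{u^2+1}{u-2}=(u+2)+\frac{5}{u-2}$, so the antiderivative is $\frac{u^2}{2}+2u+5\ln|u-2|$ (the absolute value is what resolves the asker's $\ln$ of negative numbers), giving $\frac52-5\ln 2$. A numerical cross-check (sketch):</p>

```python
from math import log

closed = 2.5 - 5 * log(2)   # [u^2/2 + 2u + 5*ln|u-2|] from 0 to 1

def integrand(u):
    return (u * u + 1) / (u - 2)

N = 100_000
h = 1.0 / N
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))  # midpoint rule
assert abs(approx - closed) < 1e-8
print(closed)  # ≈ -0.965736
```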
|
1,649,907 | <p>Please kindly forgive me if my question is too naive, i'm just a <em>prospective</em> undergraduate who is simply and deeply fascinated by the world of numbers.</p>
<p>My question is: Suppose we want to prove that $f(x) > \frac{1}{a}$, and we <em>know</em> that $g(x) > a$, where $f,g$ and $a$ are all positive and $a$ is a nonzero real number.
<em>If we can show</em> that $f(x)g(x) > 1$, would that imply our required proof ?</p>
<p>EDIT: As demonstrated by various users in the solutions below, the answer is definitely <em>no</em>.
What about if we now want to prove the <em>reverse</em> inequality $f(x) \leq \frac{1}{a}$ given that $g(x) < a$, if we can show that $f(x)g(x)<1$, i guess our required result would follow ?</p>
| Ross Millikan | 1,827 | <p>No, you could have $g$ huge and $f$ tiny. Let $a=10, g=1000, f=1/50$</p>
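<p>The counterexample is easy to verify directly (constants from the answer):</p>

```python
a, g, f = 10, 1000, 1 / 50

assert g > a          # the hypothesis g(x) > a holds
assert f * g > 1      # f(x)*g(x) > 1 holds as well
assert f < 1 / a      # ...yet f(x) > 1/a fails, since 1/50 < 1/10
print(f, 1 / a)  # 0.02 0.1
```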
|
23,953 | <p>I cited the diagonal proof of the uncountability of the reals as an example of a <a href="https://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics/23708#23708">`common false belief'</a> in mathematics, not because there is anything wrong with the proof but because it is commonly believed to be Cantor's second proof. The stated purpose of <a href="http://resolver.sub.uni-goettingen.de/purl?GDZPPN002113910" rel="nofollow noreferrer">the paper where Cantor published the diagonal argument </a> is to prove the existence of uncountable infinities, avoiding the theory of irrational numbers. I have no problem believing that Cantor himself realized that a diagonal proof of the uncountability of <strong>R</strong> was possible but I have not even found an allusion to this in his collected works. The earliest appearance in print that I know is on page 43 of <em>The theory of sets of points</em> by W. H. Young and Grace Chisholm Young (1906). I would be very grateful for any reference to some scrap of paper where Cantor himself mentions the possibility of using the diagonal method to prove the set of reals uncountable. </p>
| Jason Dyer | 441 | <p>From <a href="http://books.google.com/books?id=vrQLbbxGNMsC" rel="nofollow">Labyrinth of thought: a history of set theory and its role in modern mathematics</a> by José Ferreirós and José Ferreirós Domínguez:</p>
<blockquote>
<p>page 184 (quoting a margin note of
Cantor's) </p>
<p>Besides, the theorem of paragraph 2
presents itself as the reason why the
collections of real numerical
magnitudes that constitute what is
called a continuum (say all real
numbers that are greater or equal to 0 and less than or equal to
1) cannot be univocally correlated
with the collection (<em>v</em>) [of all
natural numbers]; thus I find the
clear distinction between a continuum
and a collection of the kind of the
totality of all real algebraic
numbers.</p>
</blockquote>
<p>The book also discusses why this was a margin note and not Cantor's main concern: his goal was a new proof of Liouville's theorem that within any given interval there are infinitely many transcendental numbers.</p>
|
23,953 | <p>I cited the diagonal proof of the uncountability of the reals as an example of a <a href="https://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics/23708#23708">`common false belief'</a> in mathematics, not because there is anything wrong with the proof but because it is commonly believed to be Cantor's second proof. The stated purpose of <a href="http://resolver.sub.uni-goettingen.de/purl?GDZPPN002113910" rel="nofollow noreferrer">the paper where Cantor published the diagonal argument </a> is to prove the existence of uncountable infinities, avoiding the theory of irrational numbers. I have no problem believing that Cantor himself realized that a diagonal proof of the uncountability of <strong>R</strong> was possible but I have not even found an allusion to this in his collected works. The earliest appearance in print that I know is on page 43 of <em>The theory of sets of points</em> by W. H. Young and Grace Chisholm Young (1906). I would be very grateful for any reference to some scrap of paper where Cantor himself mentions the possibility of using the diagonal method to prove the set of reals uncountable. </p>
| John Stillwell | 1,587 | <p>Cantor's diagonal argument first appears in his 1891 paper
"Über eine elementare Frage der Mannigfaltigkeitslehre", <em>Jahresbericht der Deutschen
Mathematiker-Vereinigung</em> 1: 75–78, in which he generalizes the argument to prove that
any set has more subsets than elements. The 1891 paper has the diagonal argument as
we know it today, but even his 1874 proof begins to look like a diagonal argument if
you look at it closely. The proof uses the least upper bound $x$ of an increasing sequence
$x_1,x_2,x_3,\ldots$ and $x$ "diagonalizes" the sequence in the sense that $x$ differs
from each $x_i$ in some decimal place. The position of the place of difference increases with
$i$, so the places of difference lie on a "jagged diagonal".</p>
<p>A more clearcut use of diagonalization before Cantor's 1891 proof, in my opinion, is in
<a href="http://gdz.sub.uni-goettingen.de/dms/load/img/?IDDOC=27181" rel="nofollow">this 1875 paper</a> by Paul du Bois-Reymond. Given a sequence of positive integer valued
functions $f_1,f_2,f_3,\ldots$, du Bois-Reymond constructs a function $f$ that grows
faster than each $f_i$. In particular, $f$ differs from $f_i$ on the value $i$.</p>
|
1,445,913 | <p>Given 2 lines r and s. </p>
<ul>
<li>r and s don't have an intersection point</li>
<li>none of them touch the origin (0,0,0)</li>
</ul>
<p>What approach should I use to find the equation of the line that crosses the origin and also crosses r and s?</p>
<p>if necessary, we can consider r and s as:</p>
<pre><code> x = at + d x = gt + j
r: y = bt + e s: y = ht + k
z = ct + f z = it + l
</code></pre>
| Yes | 155,328 | <p>If $n \geq 3$, then $n-1$ and $n+1$ are $> 1$, so $n^{2}-1 = (n-1)(n+1) > 1$. Every prime is by definition $\neq 1$ and divisible only by $1$ and itself. But $n-1$ and $n+1$ divide $n^{2}-1$. Thus $n^{2}-1$ is composite by definition.</p>
|
362,881 | <p>I am going to try to explain this as easily as possible. I am working on a computer program that takes input from a joystick and controls a servo direction and speed. I have the direction working just fine now I am working on speed. To control the speed of rotation on the servo I need to send it so many pulses per second using PWM. The servo that I am using takes arguments for speed between 120-150. 120 is %100 speed and 150 is %0 or stopped. 135 is %50 speed. How would I convert percentage from 0-100 into a number between 120-150 including 1/10ths? I hope this makes sense if you need me to explain further please let me know. I really don't know what tag this falls under either.</p>
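<p>For the mapping itself: with 0% → 150 (stopped) and 100% → 120 (full speed), the conversion is linear, value = 150 − 0.3 × percent. A sketch (the function name and the rounding to tenths are my assumptions, not from the original post):</p>

```python
def percent_to_servo(percent):
    """Map a speed percentage (0-100) to a servo speed value (150-120).

    0% -> 150 (stopped), 100% -> 120 (full speed), linear in between,
    rounded to one decimal place since the servo accepts tenths.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(150 - 0.3 * percent, 1)

print(percent_to_servo(0), percent_to_servo(50), percent_to_servo(100))
# 150.0 135.0 120.0
```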
| Michael Hardy | 11,667 | <p>$$
\int_{-\infty}^\infty f(x)\delta(x)\,dx = f(0).
$$
$$
\int_{-\infty}^\infty f(x)\Big(e^x \delta(x)\Big)\,dx = \text{what?}
$$
But look at that last integral this way:
$$
\int_{-\infty}^\infty \Big(f(x)e^x\Big) \delta(x)\,dx.
$$
This is equal to the value of the function $x\mapsto f(x)e^x$ at $x=0$, because the delta function is defined that way. Letting $g(x)=e^x\delta(x)$, we now have
$$
\int_{-\infty}^\infty f(x)\delta(x)\,dx = \int_{-\infty}^\infty f(x)g(x)\,dx \text{ for all suitable functions }f.
$$
The definition of generalized functions is such that this implies that $g=\delta$. ("Suitable" will mean test functions or Schwartz functions or whatever it is you're using in that role in the context in which you're working.)</p>
<p>So be careful to understand the definition.</p>
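<p>The identity $e^x\delta(x)=\delta(x)$ can also be checked symbolically, for instance with SymPy's <code>DiracDelta</code> (a sketch, using $f(x)=\cos x$ as a stand-in test function):</p>

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.cos(x)   # a stand-in for an arbitrary suitable test function

# Both integrals sift out the value of the smooth factor at x = 0,
# so e^x * delta(x) acts exactly like delta(x).
lhs = sp.integrate(f * sp.exp(x) * sp.DiracDelta(x), (x, -sp.oo, sp.oo))
rhs = sp.integrate(f * sp.DiracDelta(x), (x, -sp.oo, sp.oo))
```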
|
248,267 | <p>It is known that the transformation rule for the Christoffel symbols under a change of coordinate frame is:</p>
<p>$$ \tilde \Gamma^{\mu}_{\nu\kappa} = {\partial \tilde x^\mu \over \partial x^\alpha} \left [ \Gamma^\alpha_{\beta \gamma}{\partial x^\beta \over \partial \tilde x^\nu}{\partial x^\gamma \over \partial \tilde x^\kappa} + {\partial ^2 x^\alpha \over \partial \tilde x^\nu \partial \tilde x^\kappa} \right ]$$</p>
<p>Is there any way to prove this rule using only the definition of the Christoffel symbols via the metric tensor? That is, using:</p>
<p>$$ \Gamma^\mu _{\nu\kappa} = \frac{1}{2}g^{\mu\lambda}\left(g_{\lambda\kappa,\nu}+g_{\nu\lambda,\kappa}-g_{\nu\kappa,\lambda} \right)$$</p>
<p>All proofs I've seen of the transformation law involve another method.</p>
| Yuri Vyatkin | 2,002 | <p>This is very straightforward, just substitute the transformation rules and collect the terms.</p>
<p>Here are some details.</p>
<p>The inverse metric transforms, as we know, by the rule:
$$
g^{\mu \lambda} = \frac{\partial{\bar{x}}^\mu}{\partial{x}^\alpha} \frac{\partial{\bar{x}}^\lambda}{\partial{x}^\delta} g^{\alpha \delta}
$$</p>
<p>The partial derivatives need some calculations that can be presented as
$$
\begin{align*}
g_{\lambda \kappa , \nu} & = \frac{\partial}{\partial{\bar{x}^\nu}} \Big( \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} g_{\delta \gamma} \Big) \\
&= \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} \frac{\partial{x^\beta}}{\partial{\bar{x}^\nu}} g_{\delta \gamma , \beta} + g_{\delta \gamma} \frac{\partial}{\partial{\bar{x}^\nu}} \Big( \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} \Big)
\end{align*}
$$</p>
<p>Similarly,
$$
g_{\nu \lambda , \kappa} = \frac{\partial{x^\beta}}{\partial{\bar{x}^\nu}} \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} g_{\beta \delta , \gamma} + g_{\beta \delta} \frac{\partial}{\partial{\bar{x}^\kappa}} \Big( \frac{\partial{x^\beta}}{\partial{\bar{x}^\nu}} \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} \Big)
$$
and
$$
g_{\nu \kappa , \lambda} = \frac{\partial{x^\beta}}{\partial{\bar{x}^\nu}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} \frac{\partial{x^\delta}}{\partial{\bar{x}^\lambda}} g_{\beta \gamma , \delta} + g_{\beta \gamma} \frac{\partial}{\partial{\bar{x}^\lambda}} \Big( \frac{\partial{x^\beta}}{\partial{\bar{x}^\nu}} \frac{\partial{x^\gamma}}{\partial{\bar{x}^\kappa}} \Big)
$$</p>
<p>Substituting these identities into your "definition"
$$
\Gamma^\mu _{\nu\kappa} = \frac{1}{2}g^{\mu\lambda}\left(g_{\lambda\kappa,\nu}+g_{\nu\lambda,\kappa}-g_{\nu\kappa,\lambda} \right)
$$
and taking into account that
$$
\Gamma^\alpha _{\beta \gamma} = \frac{1}{2}g^{\alpha \delta}\left(g_{\delta \gamma , \beta}+g_{\beta \delta , \gamma} - g_{\beta \gamma , \delta} \right)
$$
it is not difficult now to show the required transformation rule for the Christoffel symbols.</p>
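<p>For a concrete sanity check of the rule, one can work it out with SymPy in a case where everything is explicit: Cartesian coordinates (where the flat metric gives $\Gamma^\alpha_{\beta\gamma}=0$, so only the inhomogeneous term of the transformation rule survives) transformed to polar coordinates. This is a sketch for checking, not part of the proof:</p>

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
new = [r, th]                          # the "barred" coordinates
old = [r*sp.cos(th), r*sp.sin(th)]     # Cartesian x, y in terms of them

# Jacobian dx^a/dxbar^m and its inverse dxbar^m/dx^a
J = sp.Matrix(2, 2, lambda a, m: sp.diff(old[a], new[m]))
Jinv = J.inv()

# Flat Cartesian metric: Gamma^a_{bc} = 0, so the transformation rule
# reduces to its inhomogeneous second-derivative term.
G_rule = [[[sp.simplify(sum(Jinv[m, a] * sp.diff(old[a], new[n], new[k])
            for a in range(2)))
            for k in range(2)] for n in range(2)] for m in range(2)]

# Christoffel symbols computed directly from the polar metric diag(1, r^2)
g = sp.diag(1, r**2)
ginv = g.inv()
G_metric = [[[sp.simplify(sum(ginv[m, l] * (sp.diff(g[l, k], new[n])
              + sp.diff(g[n, l], new[k]) - sp.diff(g[n, k], new[l])) / 2
              for l in range(2)))
              for k in range(2)] for n in range(2)] for m in range(2)]
```

<p>Both computations agree, giving the familiar $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=1/r$.</p>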
|
3,807,900 | <p>I have just met this exercise in functional analysis, asking us to determine if these two subspaces of the Hilbert space <span class="math-container">$\ell^2$</span> of square-summable complex sequences are closed:</p>
<blockquote>
<ol>
<li>The set of all sequences <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span> satisfying
<span class="math-container">$$\sum_{n=1}^{\infty} \frac{1}{n} x_n = 0 $$</span></li>
</ol>
</blockquote>
<blockquote>
<ol start="2">
<li>The set of all sequences <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span> satisfying
<span class="math-container">$$\sum_{n=1}^{\infty} x_n = 0 $$</span></li>
</ol>
</blockquote>
<p>I know what I am supposed to do: to prove the subspace is closed I need to consider a general Cauchy sequence in the subspace and show its limit is also in the subspace and to prove it is not closed I only need to find one Cauchy sequence in the subspace whose limit is not in it. However, these two subspaces have me stuck, I do not know if they are closed or not so I have no idea on this. I thank all helpers.</p>
| Matematleta | 138,929 | <p>Hints:</p>
<p>For <span class="math-container">$1).\ $</span> define the linear functional <span class="math-container">$x\mapsto \sum_{n=1}^{\infty} \frac{1}{n} x_n$</span>, and show it is bounded (Cauchy–Schwarz), hence continuous; the set in question is its kernel, hence closed.</p>
<p>For <span class="math-container">$2).\ $</span> Consider the sequence of sequences</p>
<p><span class="math-container">$(1,-1,0,0,\cdots )$</span></p>
<p><span class="math-container">$(1,-1/2,-1/2,0,0\cdots )$</span></p>
<p><span class="math-container">$(1,-1/3,-1/3,-1/3,0,0\cdots )$</span></p>
<p><span class="math-container">$\cdots$</span></p>
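<p>A numerical illustration of why the sequence in hint <span class="math-container">$2$</span> works (a sketch, truncating to finitely many nonzero entries): each listed sequence lies in the subspace, since its entries sum to <span class="math-container">$0$</span>, yet its <span class="math-container">$\ell^2$</span> distance to <span class="math-container">$e_1=(1,0,0,\dots)$</span> is <span class="math-container">$1/\sqrt{k}\to 0$</span>, and <span class="math-container">$e_1$</span> is not in the subspace.</p>

```python
import math

def seq(k):
    # the k-th sequence from the hint: (1, -1/k, ..., -1/k, 0, 0, ...)
    return [1.0] + [-1.0 / k] * k

def l2_dist_to_e1(k):
    # distance to e1 = (1, 0, 0, ...): only the k entries -1/k contribute
    return math.sqrt(k * (1.0 / k) ** 2)   # = 1 / sqrt(k)

sums = [abs(sum(seq(k))) for k in (1, 10, 100)]
dists = [l2_dist_to_e1(k) for k in (1, 10, 100)]
```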
|
3,154,212 | <p>I'm working a lot with series these days, and I would like to know if there are any texts, papers, articles that might suggest a general outline for finding <span class="math-container">$n$</span>th partial sums of convergent series. Most of my searching turns up methods for finding the sums of geometric/telescoping/power series etc., but I'd like to know if there are any general guidelines that are followed for finding partial sums for something like <span class="math-container">$$\sum_{k=1}^{\infty}\frac{1}{k^2}$$</span> or <span class="math-container">$$\sum_{k=1}^{\infty}\frac{6^k}{(3^{k+1}-2^{k+1})(3^k-2^k)}$$</span>.</p>
<p>I've seen solutions to both of these, and they're beautiful and unintuitive. So I wonder what, if any, methods might be used to get an edge on finding their <span class="math-container">$n$</span>th sums.</p>
<p>Aside from listing partial sums and looking for patterns, what approaches do mathematicians commonly use to solve problems like this? Or, if listing partial sums is the best route to take, what can or should be done to improve pattern recognition?</p>
| gandalf61 | 424,513 | <p>If <span class="math-container">$u=v$</span> then we can take <span class="math-container">$w=u=v$</span>.</p>
<p>So let's assume that <span class="math-container">$u \ne v$</span>. Then <span class="math-container">$u$</span> and <span class="math-container">$v$</span> span a <span class="math-container">$2$</span> dimensional subspace of <span class="math-container">$\mathbb{R}^n$</span>. Let's call this subspace <span class="math-container">$V$</span>.</p>
<p><span class="math-container">$(u,v)$</span> is a basis of <span class="math-container">$V$</span>. Another possible basis is <span class="math-container">$(u+v, u-v)$</span>. For any <span class="math-container">$au+bv \in V$</span> we have</p>
<p><span class="math-container">$au+bv = \frac{a+b}{2}(u+v) + \frac{a-b}{2}(u-v)$</span></p>
<p>In particular</p>
<p><span class="math-container">$u = \frac 1 2 (u+v) + \frac 1 2 (u-v) \\ v = \frac 1 2 (u+v) - \frac 1 2 (u-v)$</span></p>
<p>Note that <span class="math-container">$(u+v).(u-v) = u.u - v.v = 0$</span> since <span class="math-container">$|u|=|v|$</span>. So <span class="math-container">$u+v$</span> and <span class="math-container">$u-v$</span> are perpendicular (in geometric terms we have just proved that the two diagonals of a rhombus are perpendicular).</p>
<p>So if we let <span class="math-container">$w=u+v$</span> then what is <span class="math-container">$r_w(u)$</span> ?</p>
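<p>A quick numerical check of the key orthogonality <span class="math-container">$(u+v)\cdot(u-v)=0$</span> for vectors of equal length (a sketch with random vectors in <span class="math-container">$\mathbb{R}^5$</span>):</p>

```python
import random

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(5)]
v = [random.uniform(-1, 1) for _ in range(5)]
# rescale v so that |u| = |v|, matching the hypothesis above
scale = (dot(u, u) / dot(v, v)) ** 0.5
v = [scale * q for q in v]

diag_dot = dot([p + q for p, q in zip(u, v)],
               [p - q for p, q in zip(u, v)])   # = |u|^2 - |v|^2 = 0
```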
|
10,880 | <p>I am posting to formally register my disapproval of <a href="https://math.stackexchange.com/users/93658/anti-gay">this user's</a> name.</p>
<p>I believe it constitutes hate speech. If you look at the comments on this user's answers, you will see that many others do too. The name is already causing a lot of trouble, and the user has not even been around for an entire day yet.</p>
<p>I don't think this site should tolerate hate speech. I think the name is intentionally offensive and the user should be suspended. At the very least, I would like their name changed.</p>
<p>Update: This user is now tossing around homophobic slurs in the main chatroom.</p>
<p>Update 2: This user has been temporarily suspended and their name has been changed.</p>
<p>Update 3: The user has been deleted.</p>
| Douglas S. Stones | 139 | <p>Reminds me of this SMBC comic:</p>
<p><a href="http://www.smbc-comics.com/?id=1904" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ACzEL.gif" alt="enter image description here"></a></p>
|
69,050 | <p>The basic concept of a quotient group is often confusing to me. Can anyone explain the intuitive concept and the necessity of the quotient group?
I thought it would be nice to ask here, as any basic undergraduate can then learn the intuition by seeing the question.
My Question is :</p>
<ol>
<li>Why is the name Quotient Group used? Normally, in the case of division, let us take the example of $\large \frac{16}{4}$: the quotient of the division is '$4$', which means that there are four '$4$'s in $16$; that is, we can find only $4$ elements with value $4$.
<blockquote>
<p>So how can we apply the same logic in the case of Quotient Groups,like consider the Group $A$ and normal subgroup $B$ of $A$, So if $A/B$ refers to "Quotient group", then does it mean:</p>
<blockquote>
<p>Are we finding how many copies of $B$ are present in $A$??, like in the case of normal division, or is it something different ??</p>
</blockquote>
</blockquote></li>
</ol>
<p>I understood the notion of cosets and quotient groups, but I want a different perspective to add color to the concept. Can anyone tell me the necessity and background for the invention of quotient groups?</p>
<p>Note: I tried my best with formatting and typing; if any errors still persist, I ask everyone to explain the reason for their downvote (if any), so that I can correct myself. Thank you.</p>
| Jyrki Lahtonen | 11,619 | <p>Often I think of a quotient group in terms of (loss of) information. When we move from a group to its quotient group we lose some information about the identity of the elements. For example, when we map an element of the additive group of integers <span class="math-container">$\mathbf{Z}$</span> to the quotient group <span class="math-container">$\mathbf{Z}/10\mathbf{Z}$</span> we lose the information of all the other digits save the least significant one. In other words after moving down to the quotient group we can no longer tell the difference between 9, 999, or 314159. In this sense we then <em>equate</em> 9 with 999, et cetera.</p>
<p>Why would we want do this, as it amounts to loss of information? Well, there are several reasons. Sometimes we are only really interested in the residual information. For example, when we study the set of numbers of the form <span class="math-container">$a+b\root 3\of 2+c\root 3\of 4$</span>, where <span class="math-container">$a,b,c$</span> are integers, and we want to start adding, subtracting and multiplying them, we quickly notice that those operations are very similar to the corresponding operations involving polynomials <span class="math-container">$a+bx+cx^2$</span>. The difference is that we are only interested in the value of the polynomial at a single point <span class="math-container">$x=\root 3\of 2$</span>. This shows in the multiplication rule, because the polynomial <span class="math-container">$x^3$</span> takes the value <span class="math-container">$2$</span>. In order to make this correspondence between polynomials and numbers more accurate we are forced to equate the polynomial <span class="math-container">$x^3-2$</span> with the polynomial <span class="math-container">$0$</span>. This time we get a quotient ring instead of a quotient group (see algebra textbooks for such details), but the idea is that some things we have learned about polynomial algebra will carry over to our set of numbers, and that gives us the benefit of economy of thinking. We don't need to relearn everything from scratch, if the next time we are interested in <span class="math-container">$\root 3\of 3$</span> instead.</p>
<p>Sometimes quotient groups are forced upon us. We are not in possession of all the information. A simple example is the following. Assume that somebody is counting coins, but the only counting aid available to him is a light switch. Every time he tallies one more coin he will toggle the light switch: lit, dark, lit, dark,... He may or may not be able to keep track of the actual tally, but if somebody else comes to the room, or the tallyman gets confused, the status of the light switch will only tell whether an odd or an even number of coins have been counted, i.e. we have moved from the group <span class="math-container">$\mathbf{Z}$</span> to the quotient group <span class="math-container">$\mathbf{Z}/2\mathbf{Z}$</span>. Another very common quotient group in mathematics is used to describe an angle of rotation. Let's say that we are studying a planar object spinning about its center of mass. It may have completed God knows how many full revolutions, but when we enter the room and observe its position, we have no way of knowing anything else but the current direction pointed at by, say, a small arrow somebody painted on the object for this purpose. A full revolution corresponds to an angle of rotation <span class="math-container">$2\pi$</span>, so the total angle of rotation will have an uncertainty that can be any integer multiple of <span class="math-container">$2\pi$</span>. In other words, we can only see an element of the quotient group <span class="math-container">$\mathbf{R}/2\pi\mathbf{Z}$</span>, not an element of <span class="math-container">$\mathbf{R}$</span>.</p>
|
876,310 | <p>So I <em>think</em> I understand what differentials are, but let me know if I'm wrong.</p>
<p>So let's take $y=f(x)$ such that $f: [a,b] \subset \Bbb R \to \Bbb R$. Instead of defining the derivative of $f$ in terms of the differentials $\text{dy}$ and $\text{dx}$, we take the derivative $f'(x)$ as our "primitive". Then to define the differentials we do as follows:</p>
<p>We find some $x_0 \in [a,b]$ where there is some neighborhood of $x_0$, $N(x_0)$, such that all $f(x)$ in $\{f(x) \in \Bbb R \mid x \in N(x_0)\}$ are differentiable. Then we choose another point in $N(x_0)$, let's call it $x_1$, such that $x_1 \ne x_0$. Then let $dx = \Delta x = x_1 - x_0$. Now this $\Delta x$ doesn't actually have to be very small like we're taught in Calculus 1 (in particular it's not infinitesimal, it's finite). In fact, as long as $f(x)$ is differentiable for all $x \in [-10^{10}, 10^{10}]$ we could choose $x_0 = -10^{10}$ and $x_1 = 10^{10}$.</p>
<p>Then we know that $\Delta y = f'(x_0) \Delta x + \epsilon(\Delta x)$, where $\epsilon(\Delta x)$ is some nonlinear function of $\Delta x$. If $f(x)$ is smooth, we know that $\epsilon(\Delta x)$ is equal to the sum of powers of $\Delta x$ with some coefficients, by Taylor's theorem. But of course, $\epsilon(\Delta x)$ won't be so easy to describe if $f(x)$ is only once differentiable. So we define $dy$ as $dy = f'(x_0) dx$: that is, $dy$ is the <em>linear part</em> of $\Delta y$. This has the very useful property that $\lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} = \frac{dy}{dx} = f'(x_0)$. This is then <em>not a definition</em> of the derivative, but a consequence of our definitions.</p>
<p>It can be seen from this $dy$ really depends on what we choose as $dx$, but $f'$ is independent of both. </p>
<p>This definition can be extended to functions of multiple variables, like $z = f(x, y)$ as well, by letting $\Delta x = dx,\ \Delta y=dy$ and defining $dz$ as $dz = \frac{\partial f(x_0, y_0)}{\partial x}dx + \frac{\partial f(x_0, y_0)}{\partial y} dy$. So $dz$ is the linear part of $\Delta z$. Does all of the above look correct?</p>
<p>If so, then where I'm having a problem is: <br>1) how then <em>do</em> we define the derivative of $f(x)$ if not by $f'(x_0) = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}$? <br>2) how do we apply this definition of $dx$ to $\int_a^b f(x)dx$? It seems like the inherent arbitrariness of $dx$ is really going to get in the way of a good definition of the integral.</p>
| Community | -1 | <p>$\mathrm{d}y$ depends only on $y$: it doesn't depend on any choice of $x$ or anything else: that's one of the big advantages to differentials (as opposed to, say, partial derivatives).</p>
<p>A differential is a gadget that expresses <em>how</em> something varies. There are three main things you can do with such a gadget:</p>
<ul>
<li>You can compare two differentials: e.g. if $x$ and $y$ are dependent on one another in a differentiable way, then they are multiples of each other. e.g. if $y = f(x)$, then $\mathrm{d}y = f'(x) \mathrm{d}x$.</li>
<li>Given a differential, you can ask if it has an antiderivative: e.g. $2x \mathrm{d}x$ is the differential (often called the "exterior derivative") of $x^2$.</li>
<li>You can compute a (path) integral to 'add up' along a path all of the variations the differential expresses. e.g. $\int_0^1 2x \mathrm{d}x$ means we 'accumulate' all of the variations $2x \mathrm{d}x$ as we go from $x=0$ to $x=1$. And as we know $2x \mathrm{d}x = \mathrm{d}(x^2)$, our intuition is satisfied in the sense that accumulating how $x^2$ varies from $x=0$ to $x=1$ works out to $1^2 - 0^2$.</li>
</ul>
<p>You can also ask the differential to give you an ordinary number expressing a variation along a (tangent) vector. A common notation for this is, e.g. in $(x,y)$ coordinates, to let the symbol $\partial/\partial x$ and $\partial/\partial y$ denote vectors, and for a differential $\omega$, the notation $\frac{\partial}{\partial x} \omega$ means ordinary number that $\omega$ yields for a variation by the vector $\partial/\partial x$.</p>
<p>e.g. we have
$$ \frac{\partial}{\partial x} \mathrm{d}x = 1
\qquad \qquad \frac{\partial}{\partial x} \mathrm{d}y = 0
\qquad \qquad \frac{\partial}{\partial y} \mathrm{d}x = 0
\qquad \qquad \frac{\partial}{\partial y} \mathrm{d}y = 1$$</p>
<p>This is consistent with the notation for partial derivatives you've learned, in that, e.g.,</p>
<p>$$ \frac{\partial}{\partial x} f = \frac{\partial}{\partial x}
\mathrm{d} f $$</p>
<p>where the left hand side is the meaning taken from introductory multivariable calculus, and the right hand side is the meaning I describe above. (usually first introduced in differential geometry)</p>
<p>Incidentally, I think partial derivative notation is absolutely terrible, and I avoid using it whenever possible. I also think differentials are more intuitive than partial derivatives as well, and I prefer to do all of my calculus in terms of differentials these days. A convenient analog to $f'$ for multivariable functions is to let, e.g., $f_1$ denote the derivative of $f$ in its first argument, $f_2$ denote the derivative in the second argument, and so forth. So I would prefer to write</p>
<p>$$ \mathrm{d}f(x,y) = f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y $$</p>
<p>rather than anything resembling the traditional notion of partial derivatives. If I want derivatives in the direction where $y$ is held constant, I express that as setting $\mathrm{d}y = 0$ rather than resorting to partial derivatives.</p>
<p>This use of combining vectors with differentials is related to the (unfortunately common) mistake / abuse of notation that you often see, where the notation $\mathrm{d}x$ is treated as an actual change in $x$, rather than as a gadget that can tell you what the change in $x$ is.</p>
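<p>The statement that $\mathrm{d}f = f_1\,\mathrm{d}x + f_2\,\mathrm{d}y$ is the <em>linear part</em> of the change in $f$ can be checked numerically: the remainder after subtracting it is second order in the displacement. A sketch with an arbitrary example function:</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)                 # an arbitrary example function
f1, f2 = sp.diff(f, x), sp.diff(f, y)    # f_1, f_2 in the notation above

x0, y0, h = 1.2, 0.7, 1e-5               # base point and a small displacement
delta_f = float(f.subs({x: x0 + h, y: y0 + h}) - f.subs({x: x0, y: y0}))
df = float(f1.subs({x: x0, y: y0}) * h + f2.subs({x: x0, y: y0}) * h)
# delta_f - df is O(h^2), i.e. vastly smaller than either term
```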
|
105,750 | <p>Given a <code>ContourPlot</code> with a set of contours, say, this:</p>
<p><a href="https://i.stack.imgur.com/cKoyo.jpg"><img src="https://i.stack.imgur.com/cKoyo.jpg" alt="enter image description here"></a></p>
<p>is it possible to get the contours separating domains with the different colors in the form of lists? </p>
<p>For example, how to extract the boundaries of the blue domain in the image above?
Or just for the sake of trial, from such a simple example:</p>
<pre><code> ContourPlot[x*Exp[-x^2 - y^2], {x, 0, 3}, {y, -3, 3},
PlotRange -> {0, 0.5}, ColorFunction -> "Rainbow"]
</code></pre>
<p><a href="https://i.stack.imgur.com/Beuzu.jpg"><img src="https://i.stack.imgur.com/Beuzu.jpg" alt="enter image description here"></a></p>
<p>The same task, let us find the lists corresponding to the blue domain boundaries.</p>
<p>To make it clear, I am not asking how to get the lines from the underlying function. This I understand. I am asking how to extract the contour lines that are generated by Mma.</p>
<p>Let us put this question another way. Is it possible to define the areas with the same color as separate geometric regions in the sense of computational geometry, and then work with these domains separately?</p>
| Michael E2 | 4,999 | <p>To answer the last question, the contour domains (since V8) are enclosed separately in <a href="http://reference.wolfram.com/language/ref/GraphicsGroup.html"><code>GraphicsGroup</code></a>, each which you can cull and turn into a region:</p>
<pre><code>plot = ContourPlot[x*Exp[-x^2 - y^2], {x, 0, 3}, {y, -3, 3},
PlotRange -> {0, 0.5}, ColorFunction -> "Rainbow"];
regs = With[{coords = First@Cases[plot, GraphicsComplex[p_, ___] :> p, Infinity]},
BoundaryDiscretizeGraphics@
GraphicsComplex[coords, #] & /@ Cases[plot, _GraphicsGroup, Infinity]
];
Multicolumn[regs, 5]
</code></pre>
<p><img src="https://i.stack.imgur.com/Z4DPe.png" alt="Mathematica graphics"></p>
|
909,734 | <p>I have answered this question to the best of my knowledge, but somehow I feel as if I am missing something. Can I further prove this statement or add anything to it? </p>
<p>Question: </p>
<p>Let $m \in \mathbb N$. Prove that the congruence modulo $m$ relation on $\mathbb Z$ is transitive. </p>
<p>My attempt:</p>
<p>Let $a\equiv b \pmod{m}$ and $b\equiv c \pmod{m}$.</p>
<p>Then $a-b \equiv 0 \pmod{m}$ and $b-c\equiv 0 \pmod{m}$.</p>
<p>Adding, $a-c\equiv 0 \pmod{m}$, so $a\equiv c\pmod{m}$.</p>
| G Tony Jacobs | 92,129 | <p>I've often seen the congruence relation modulo $m$ defined as "$a\equiv b \pmod{m}$ means m|(a-b)". If that's the definition that you're working with, then the fact you need to use in this proof is: m|p and m|q imply m|(p+q).</p>
<p>Then, using $a-b$ for $p$, and $b-c$ for $q$, you have your result.</p>
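<p>A brute-force check of the fact used here (that $m|p$ and $m|q$ imply $m|(p+q)$, applied with $p=a-b$ and $q=b-c$):</p>

```python
def congruent(a, b, m):
    # m | (a - b), i.e. a = b (mod m)
    return (a - b) % m == 0

# exhaustively verify transitivity on a small range
ok = all(congruent(a, c, m)
         for m in range(1, 8)
         for a in range(-10, 10)
         for b in range(-10, 10)
         for c in range(-10, 10)
         if congruent(a, b, m) and congruent(b, c, m))
```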
|
619,477 | <blockquote>
<p>Alice opened her grade report and exclaimed, "I can't believe Professor Jones flunked me in Probability." "You were in that course?" said Bob.
"That's funny, I was in it too, and I don't remember ever seeing you there."
"Well," admitted Alice sheepishly, "I guess I did skip class a lot." "Yeah, me too," said Bob. Prove that either Alice or Bob missed at least half of the classes. </p>
</blockquote>
<p>Proof:</p>
<p>Let $A$ be the set of lectures Alice attended and missed, let's assume she attended them in no particular order, similarly for Bob $B$ is the set of all lectures Bob attended and missed in random order.
Let $f$ be one-to-one and onto, we define $f:A\to B$ to be the mapping that matches the lectures Alice attended to the lectures that Bob missed and the lectures that Alice missed to the ones that Bob attended. If we consolidate the contiguous entries in the sets $A$ and $B$ into two groups, the group of lectures that Alice attended and the group of lectures she didn't attend and similarly for $B$ then the function $f$ can only be one-to-one and onto if both Alice and Bob attended the same number of lectures they missed.</p>
<p>I understand I've shown that Bob and Alice missed half the classes; how do I show that they could've missed more with this method?</p>
| Community | -1 | <p>I'm not sure if this is what you're looking for, but why not use the pigeonhole principle?</p>
<p>Let $L_n$ represent the $n$th lecture. During $L_1$, either Bob or Alice attended, or neither attended. During $L_2$, either Bob or Alice attended, or neither attended. This is true for every lecture up to $L_n$. </p>
<p>Now, we know every lecture had to have been missed, by either Bob or Alice or both. Then there are at least $n$ non-attendances/objects that need to be distributed amongst two "bins", let's label them Bob and Alice. Just for clarity, if both Bob and Alice miss, let's say that counts as two non-attendances. </p>
<p>If the number of total absences is odd, then we have at least $n$ (where $n$ is the total days in the class) non-attendances to distribute into two boxes, since the most optimistic case assumes either Alice or Bob attends class every day. Since the number of total absences is odd, the absences cannot be distributed into the two boxes evenly, so someone must have missed more than half the class.</p>
<p>If the number of total absences is even, then we have at least $n$ non-attendances to distribute into two boxes. If they are distributed evenly, each box holds at least $n/2$ absences, so each person missed at least half the lectures; if they are not, one box holds more than half. Either way, some box holds at least half of the $n$ absences.</p>
<p>Therefore, one person must have missed at least half of the lectures.</p>
|
197,730 | <blockquote>
<p>Prove that the states of the 8-puzzle are divided into two disjoint sets such that any
state in one of the sets is reachable from any other state in that set, but not from any state in the other set. To do so, you can use the following fact: think of the board as a one-dimensional array, arranged in row-major order. Define an inversion as any pair of contiguous tiles (other than the blank tile) in this arrangement such that the higher
numbered tile precedes the lower numbered tile. Let N be the sum of the total number of
inversions plus the number of the row in which the blank tile appears. Then (N mod 2) is
invariant under any legal move in the puzzle.</p>
</blockquote>
<p>I know how to show that any state in one set is not reachable from another set, due to the invariant, but I'm trying to show that the union of the two disjoint sets encompass the entire state space. One thing I've tried is calculating the total possible arrangements (9!), and then the number of possible arrangements in each of the disjoint sets, but I haven't thought of a good way to calculate the latter.</p>
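<p>One way to build confidence in the invariant before proving it is to simulate it. The sketch below uses the more common <em>all-pairs</em> inversion count (for a board of odd width like the 8-puzzle, its parity alone is already invariant) rather than the contiguous-pair-plus-blank-row quantity quoted above; it applies random legal moves and checks that the parity never changes:</p>

```python
import random

def inversions(board):
    # all-pairs inversion count over the numbered tiles (0 = blank, skipped)
    tiles = [t for t in board if t != 0]
    return sum(1 for i in range(len(tiles)) for j in range(i + 1, len(tiles))
               if tiles[i] > tiles[j])

def legal_moves(board):
    b = board.index(0)
    row, col = divmod(b, 3)
    out = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = row + dr, col + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            nb = board[:]
            nb[b], nb[3 * nr + nc] = nb[3 * nr + nc], nb[b]
            out.append(nb)
    return out

random.seed(1)
board = list(range(9))                  # a solved state, blank = 0
start_parity = inversions(board) % 2
parities = []
for _ in range(500):
    board = random.choice(legal_moves(board))
    parities.append(inversions(board) % 2)
```

<p>The proof the simulation suggests: a horizontal move leaves the row-major order of the numbered tiles unchanged, while a vertical move carries a tile past exactly two numbered tiles, changing the inversion count by an even amount.</p>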
| user21820 | 21,820 | <p>Don't know why this wasn't answered, but the basic idea is just to prove that you can move any 3 pieces into the top-left $2 \times 2$ square and cycle them before moving them back, hence proving that you can perform any 3-cycle. Then prove that the set of 3-cycles generates the alternating group. To do so, prove that whenever there are more than 3 pieces out of place you can decrease that number by at least one, and that if you have exactly 3 out of place they must form a 3-cycle.</p>
|
3,014,766 | <p>I am supposed to find the derivative of <span class="math-container">$ 2^{\frac{x}{\ln x}} $</span>. My answer is <span class="math-container">$$ 2^{\frac{x}{\ln x}} \cdot \ln 2 \cdot \frac{\ln x-x\cdot \frac{1}{x}}{\ln^{2}x}\cdot \frac{1}{x} .$$</span> Is it correct? Thanks. </p>
| David G. Stork | 210,401 | <p><em>Mathematica</em> gives:</p>
<p><span class="math-container">$$\frac{\log (2) 2^{\frac{x}{\log (x)}} (\log (x)-1)}{\log ^2(x)}$$</span></p>
|
268,360 | <p>Why is $\log_xy=\frac{\log_zy}{\log_zx}$? Can we prove this using the laws of exponents?</p>
| Community | -1 | <p>Let $x^a=y$, $z^b=x$ and $z^c=y$. Then $z^{ab}=(z^b)^a=x^a=y=z^c$ so that $ab=c$.</p>
|
268,360 | <p>Why is $\log_xy=\frac{\log_zy}{\log_zx}$? Can we prove this using the laws of exponents?</p>
| Michael Hardy | 11,667 | <p>I will presume that what was meant was $\displaystyle\log_x y = \frac{\log_z y}{\log_z x}$.</p>
<p>Notice that this is true if and only if $(\log_x y)(\log_z x) = \log_z y$, and that holds if and only if $\displaystyle z^{(\log_x y)(\log_z x)}=y$.</p>
<p>So
$$
z^{\Big((\log_x y)(\log_z x)\Big)} = \Big(z^{\log_z x}\Big)^{\log_x y} = x^{\log_x y} = y.
$$</p>
|
268,360 | <p>Why is $\log_xy=\frac{\log_zy}{\log_zx}$? Can we prove this using the laws of exponents?</p>
| Alan | 54,910 | <p>Demonstrate: $ \log_xy=\frac{\log_ay}{\log_ax}; x,y,a \in \mathbb{R} $</p>
<p>We initially have:$$ f(x,y)= \log_xy$$
We transform it to the exponential form: $$ x^{f(x,y)}=y$$
We apply logarithm of base $a$ for $a \in \mathbb{R}$ on both sides of the equation:$$\log_a{x^{f(x,y)}}=\log_ay$$
Applying the logarithm power rule $\log_ab^c=c\log_ab$, we have: $$ f(x,y) \log_ax=\log_ay$$
Solving for $f(x,y)$: $$ f(x,y)=\frac{\log_ay}{\log_ax} $$</p>
<p>We initially had $f(x,y)=\log_xy$ and then obtained $f(x,y)=\frac{\log_ay}{\log_ax}$, so:
$$ \log_xy = \frac{\log_ay}{\log_ax} \\ Q.E.D.$$</p>
<p>** If any of the wording in this demonstration is not appropriate, please let me know. I'm not a native English speaker! **</p>
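<p>A quick numerical spot-check of the change-of-base identity (arbitrary values; all bases positive and different from 1):</p>

```python
import math

x, y, z = 7.0, 100.0, 3.0

lhs = math.log(y, x)                      # log_x y
rhs = math.log(y, z) / math.log(x, z)     # log_z y / log_z x
```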
|
3,802,806 | <p>I and a friend are trying to find all endomorphisms <span class="math-container">$f$</span> of <span class="math-container">$\mathcal{M}_n(\mathbb{R})$</span> such that <span class="math-container">$f({}^t M)={}^t f(M)$</span> for all <span class="math-container">$M$</span>. We believe they are of the form <span class="math-container">$M\mapsto\lambda M+\mu {}^t M$</span> for a fixed <span class="math-container">$(\lambda,\mu)\in\mathbb{R}^2$</span>. Any help is appreciated, thank you.</p>
| user1551 | 1,551 | <p>Let <span class="math-container">$f$</span> be a linear endomorphism on <span class="math-container">$M_n(\mathbb R)$</span>. Then
<span class="math-container">$$
f(M^T)=f(M)^T\quad\forall M\tag{1}
$$</span>
if and only if
<span class="math-container">$$
f(M)=\frac12\left(g(M)+g(M^T)^T\right)\quad\forall M\tag{2}
$$</span>
for some endomorphism <span class="math-container">$g$</span>.</p>
<p>Given <span class="math-container">$g$</span>, it is straightforward to verify <span class="math-container">$(1)$</span> when <span class="math-container">$f$</span> is defined by <span class="math-container">$(2)$</span>. Conversely, given any <span class="math-container">$f$</span> that satisfies <span class="math-container">$(1)$</span>, condition <span class="math-container">$(2)$</span> is satisfied by taking <span class="math-container">$g=f$</span>.</p>
<p>Alternatively, note that <span class="math-container">$(1)$</span> is satisfied if and only if <span class="math-container">$f$</span> preserves both symmetric and skew-symmetric matrices. Hence such an <span class="math-container">$f$</span> takes the form of <span class="math-container">$f(M)=h\left(\frac{M+M^T}{2}\right)+k\left(\frac{M-M^T}{2}\right)$</span> where <span class="math-container">$h$</span> is an endomorphism defined on the space <span class="math-container">$\mathcal H_n$</span> of all symmetric matrices and <span class="math-container">$k$</span> is an endomorphism defined on the space <span class="math-container">$\mathcal K_n$</span> of all skew-symmetric matrices. Therefore the dimension of the space of all such <span class="math-container">$f$</span>s is
<span class="math-container">$$
\dim\operatorname{End}(\mathcal H_n)+\dim\operatorname{End}(\mathcal K_n)
=\left[\frac{n(n+1)}{2}\right]^2+\left[\frac{n(n-1)}{2}\right]^2
=\frac{n^2(n^2+1)}{2}.
$$</span></p>
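<p>A numerical spot-check of the characterization <span class="math-container">$(2)$</span> (a sketch with a random <span class="math-container">$g$</span>, using NumPy and representing endomorphisms as matrices acting on vectorized matrices):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# an arbitrary endomorphism g of M_n(R), as a matrix acting on vec(M)
G = rng.standard_normal((n * n, n * n))

def g(M):
    return (G @ M.reshape(-1)).reshape(n, n)

def f(M):
    # the construction (2)
    return 0.5 * (g(M) + g(M.T).T)

M = rng.standard_normal((n, n))
lhs = f(M.T)   # should equal f(M).T, i.e. condition (1)
rhs = f(M).T
```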
|
130,465 | <p>I just started university, so I'm pretty new to all this math. My problem is to solve this <code>recursive sequence</code>: $a_{n+1} = a_{n}^3$ with $a_{0} = \frac{1}{2}$ and $n \in \mathbb{N}$</p>
<p>I have to analyse convergence, and if it is convergent I have to find the limit of this sequence.</p>
<p>I don't know how to start, and this <code>to the power of three</code> confuses me.</p>
<p>Thanks in advance!</p>
| MathematicalPhysicist | 13,374 | <p>When the limit exists we have $\lim_{n \rightarrow \infty} a_n = \lim_{n \rightarrow \infty} a_{n+1}=a$, so to find the limit you need to solve the equation: $a^3=a$.</p>
<p>But that's not enough: we can see from the definition of $a_n$ that it is non-negative (why?) and decreasing, so it converges (a decreasing sequence bounded below converges to the infimum of the sequence); of the solutions of $a^3=a$, only $0$ lies in $[0,\frac{1}{2}]$, so the limit is zero.</p>
<p>To show that the sequence is decreasing, we need to show that $a_n \geq a_{n+1}=a^3_n \ \forall n\in \mathbb{N}$; this happens when $a_n (1-a^2_n)\geq 0$, which always holds because $0\leq a_n \leq 1$ (why?).</p>
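<p>A quick numerical sketch (an illustration only, not part of the proof) shows the claimed behaviour: the iterates are positive, strictly decreasing, and collapse to $0$ extremely fast.</p>

```python
a = 0.5            # a_0 = 1/2
terms = [a]
for _ in range(5):
    a = a ** 3     # a_{n+1} = a_n^3
    terms.append(a)

# positive, strictly decreasing, and already astronomically small
print(terms[:3])   # [0.5, 0.125, 0.001953125]
```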
|
130,465 | <p>I just started university, so I'm pretty new to all this math. My problem is to solve this <code>recursive sequence</code>: $a_{n+1} = a_{n}^3$ with $a_{0} = \frac{1}{2}$ and $n \in \mathbb{N}$.</p>
<p>I have to analyse convergence and, if it is convergent, find the limit of this sequence.</p>
<p>I don't know how to start, and this <code>to the power of three</code> confuses me.</p>
<p>Thanks in advance!</p>
| Georgy | 139,717 | <p>Another idea is to define a new sequence $b_n=\log a_n$. Then your recursive equation
$$a_{n+1}=a_n^3$$
becomes
$$b_{n+1}=3b_n$$
which is pretty straightforward.</p>
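<p>Spelling out the consequence (a small addition to the answer): with <span class="math-container">$b_0=\log a_0=\log\frac{1}{2}$</span>,
<span class="math-container">$$
b_n=3^n b_0,\qquad a_n=e^{b_n}=\left(\tfrac{1}{2}\right)^{3^n}\xrightarrow[n\to\infty]{}0.
$$</span></p>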
|
2,820,796 | <p>In how many ways can 25 identical books be placed in 5 identical boxes?</p>
<p>I know the process by counting, but that is too lengthy.
I want a different approach by which I can easily calculate the required number in the exam hall in a few minutes.</p>
<p>Process of counting:
this problem can be treated as partitions of 25 into at most 5 parts.</p>
<p>25 = 25+0+0+0+0</p>
<p>25 = 24+1+0+0+0</p>
<p>25 = 23+1+1+0+0
...
In this way many combinations are made: about 377.</p>
<p>How can we calculate it without this process of manual counting?</p>
| Andrew Woods | 153,896 | <p>The answer of Foobaz John defined $p_k$ and $p_{\le k}$.</p>
<p>Notice first of all that $p_{\le k}(n)=p_k(n+k)$. (That's because we can add one object to each part to ensure that there are no parts of size zero.) Thus, while we must be careful to distinguish them, the tables for these two functions are very similar.</p>
<p>Let's write down the table for $p_k(n)$ up to $k=5$.</p>
<p>The column for $k=1$ is identically $1$, so we can omit it. The column for $k=2$ can be filled in with $\lfloor\tfrac12n\rfloor$; after that, we use the recurrence $p_k(n)=p_{k-1}(n-1)+p_k(n-k)$ to get:
$$\begin{array}{|c|cccc|}\hline&2&3&4&5\\\hline2&1\\3&1&1\\4&2&1&1\\5&2&2&1&1\\6&3&3&2&1\\\vdots&\vdots&\vdots&\vdots&\vdots\end{array}$$
With practice, the table can be continued fairly rapidly, but it will take a few minutes to get to row $25$, and any error will propagate. An exam ought not to contain such a problem, unless the numbers are very small.
However, formulas do exist. I won't attempt to prove them.</p>
<p>$$\begin{align*}p_2(n)&=\lfloor\tfrac12n\rfloor\\
p_3(n)&=[\tfrac1{12}n^2]\\
p_4(n)&=[\tfrac1{144}(n^3+3n^2\underbrace{-9n}_{\text{if }n\text{ odd}})]\end{align*}$$
In the second and third formulas, $[\ldots]$ signifies the nearest integer.</p>
<p>The equivalent formula for $k=5$ is $$p_5(n)=[\tfrac1{2880}(n^4+10n^3+10n^2-75n-45n(-1)^n)]$$</p>
<p>However, rather than memorize this, we could use the recurrence together with an earlier formula.</p>
<p>$$\begin{align*}p_{\le 5}(25)=p_5(30)&=p_4(29)+p_4(24)+p_4(19)+p_4(14)+p_4(9)+p_4(4)\\&=185+108+54+23+6+1\\&=377\end{align*}$$</p>
|
34,874 | <p>If you visit this <a href="http://www.springerlink.com/content/ug8h1563j3484211/" rel="nofollow">link</a>, you'll see at the top of the PDF view. Basic properties of finite abelian groups:</p>
<p>Every quotient group of a finite abelian group is isomorphic to a subgroup.</p>
<p>If the above statement is true, it would make some proofs in Serge Lang's Algebra easier, particularly in the p-Sylow groups section.</p>
<p>I know that there is a correspondence between subgroups of G/N and subgroups of G containing N, but the corresponding groups are not necessarily isomorphic or are they?</p>
| Pete L. Clark | 1,149 | <p>The result you are interested in is Theorem 19 on page 8 of</p>
<p><a href="http://alpha.math.uga.edu/%7Epete/4400algebra2point5.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/4400algebra2point5.pdf</a></p>
<p>As I explain there, this fact is a kind of duality statement, but it lies deeper than the fact that passage to the dual group takes injections to surjections and conversely (Proposition 16). To deduce Theorem 19 from Proposition 16, one needs the fact that a finite abelian group is [oy vey -- <em>at least</em>] non-canonically isomorphic to its own dual group (Theorem 20), which I go on to prove in Section 5 of these notes in the most elementary way I know how.</p>
<p>Note that the first step in the proof of Theorem 20 develops the Sylow theory of finite abelian groups from scratch -- this is much easier than the nonabelian case.</p>
|
12,114 | <p>I retired after 25 years of teaching and moved to Israel a year ago. My Hebrew is okay, but before moving here, I had no experience talking about math in Hebrew. I have been learning Hebrew math vocabulary by reading math textbooks and taking an online math course in Hebrew. </p>
<p>I recently started volunteering in an after school program to help Hebrew-speaking students prepare for tests in mathematics. These tests (called bagruyot) are standardized tests taken at the end of high school and used to determine entrance to college. I have no trouble understanding the problems on the practice exams, but I see that it is difficult for me to teach in Hebrew.</p>
<p>Currently my strategies (when I can't explain clearly in Hebrew) include: acting out problems with students, writing out solutions in math symbols, and finding similar problems with solutions in the textbooks. This is not how I taught in my native language!</p>
<p>I am interested in any tips to becoming a better teacher in a language that is foreign to me but native to my students.</p>
| Morten Engelsmann | 3,502 | <p>If your students are willing to take the time, I would say you can add a lot of value to their understanding and skills by approaching the challenge from a "Socratic" point of view.</p>
<ul>
<li><p>Facilitate conceptualization through "concept cards":
I have my students make a mindmap with the math concept in the center.
On one side, 2-5 examples.
On the other side, the definition(s) needed to handle this math object.
Below the center, 2-4 examples of the math in use.
Above the center, a few negations: what this math concept is not.</p></li>
<li><p>Within a specific field, glossary lists may help the student (and you!)
in demarcating -- or at least characterizing -- the field and starting your students (and you) on the linguistic
exploration of the math field (of the week...)</p></li>
<li><p>Certainly, the tacit, read vocabulary recommended by @KCd.
But do not underestimate what <a href="https://www.theguardian.com/education/2015/aug/02/sugata-mitra-school-in-the-cloud" rel="noreferrer">Sugata Mitra</a> dubbed
the 'granny method': stand behind your pupils and adore
<em>whatever</em> they are doing.
Your mere presence offers your professional skillset to the students;
there is a task for them to go grab it, which should be capitalized on.
Math, after all, is a pretty international language, just as Esperanto is.</p></li>
<li><p>Let students explain to other Hebrew-speaking students.
Even if your Hebrew is insufficient, expressing oneself helps
the speaker grasp the concept, and the fellow Hebrew speaker may know
things that supplement what the speaker does not understand --
or you can contribute at this point.</p></li>
</ul>
|
244,241 | <p>How can I find the minimum distance between a cone and a point?</p>
<p><strong>Cone properties :</strong><br/>
position - $(0,0,z)$<br/>
radius - $R$<br/>
height - $h$</p>
<p><strong>Point properties:</strong><br/>
position - $(0,0,z_1)$</p>
| Tom Oldfield | 45,760 | <p>One basic example is with eigenvalues and eigenvectors of matrices. Often real matrices are not diagonalisable over $\mathbb{R}$ because they have complex eigenvalues, and knowing things about these eigenvalues can tell us a lot about the transformation that the matrix represents. The obvious example is the $2D$ rotation matrix $\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta &\cos\theta \end{pmatrix} $ with eigenvalues $e^{\pm i\theta}$, which tell us the angle of rotation that this real matrix gives us. Admittedly a simple example, but I'm sure there are plenty more.</p>
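<p>The eigenvalue claim is easy to check numerically: the rotation matrix has trace $2\cos\theta$ and determinant $1$, so its characteristic polynomial is $\lambda^2-2\cos\theta\,\lambda+1$, with roots $e^{\pm i\theta}$. A small sketch (the angle $0.7$ is an arbitrary choice):</p>

```python
import cmath
import math

theta = 0.7                      # arbitrary test angle
# characteristic polynomial of [[cos t, -sin t], [sin t, cos t]]:
# lambda^2 - (trace)*lambda + det = lambda^2 - 2*cos(t)*lambda + 1
b = -2 * math.cos(theta)
disc = cmath.sqrt(b * b - 4)     # negative discriminant: complex conjugate roots
roots = {(-b + disc) / 2, (-b - disc) / 2}
expected = {cmath.exp(1j * theta), cmath.exp(-1j * theta)}
# each numerical root matches e^{+i theta} or e^{-i theta}
assert all(min(abs(r - e) for e in expected) < 1e-12 for r in roots)
```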
<p>Another result that comes to mind is in quantum mechanics! A big area of science right now, it deals with complex wave functions like you wouldn't believe (or maybe you would; it seems like you've done enough maths to have taken a course or two in quantum mechanics!). A lot of problems have complex solutions, and certainly the relation between $e^{i\theta}$ and trigonometry is used to no end, particularly in solving second order differential equations (which the Schrödinger equation frequently reduces to).</p>
<p>Probably the biggest way that the complex results are translated back to the real world is that the probability of finding a particle in a given region is the integral over that region of the wavefunction's magnitude squared. The complex wave function is reduced to a real integral to give us a probability, which is certainly a real world result!</p>
<p>A lot of interesting solutions, known as stationary states of the Schrödinger equation, give us wavefunctions whose time dependence looks like $e^{\frac{iE_nt}{\hbar}}$. Here $E_n$ is the energy of the state and $\hbar$ is Planck's (reduced) constant. The point is, the magnitude of these solutions is independent of time. This means that if a particle has this wavefunction, then we know exactly what its energy is for all time. Further, since the Schrödinger equation is linear, we can superpose solutions to get more solutions, and in fact these stationary states form a basis, so we can find the wavefunction for any particle as a combination of these stationary states.</p>
|
4,539,167 | <p><span class="math-container">$$
g(x) =\min_y f(x, y) =\min_y x^TAx + 2x^TBy + y^TCy
$$</span>
where <span class="math-container">$x\in \mathbb R^{n\times 1}$</span>, <span class="math-container">$y\in \mathbb R^{m\times 1}$</span>, <span class="math-container">$A\in \mathbb R^{n\times n}$</span>, <span class="math-container">$B\in \mathbb R^{n\times m}$</span>, <span class="math-container">$C\in \mathbb R^{m\times m}$</span>.
How to compute the derivative: <span class="math-container">$\frac{d g}{d x}$</span>?</p>
| greg | 357,854 | <p><span class="math-container">$
\def\a{\lambda}
\def\B{BC^{-1}B^T}
\def\o{{\tt1}}\def\p{\partial}
\def\LR#1{\left(#1\right)}
\def\op#1{\operatorname{#1}}
\def\trace#1{\op{Tr}\LR{#1}}
\def\qiq{\quad\implies\quad}
\def\grad#1#2{\frac{\p #1}{\p #2}}
\def\c#1{\color{red}{#1}}
\def\CLR#1{\c{\LR{#1}}}
\def\fracLR#1#2{\LR{\frac{#1}{#2}}}
$</span>Although not explicitly stated I'll assume that <span class="math-container">$A\:{\rm and}\:C$</span> are symmetric matrices.</p>
<p>Perform the inner minimize by calculating the gradient wrt <span class="math-container">$y$</span> and setting it to zero
<span class="math-container">$$\eqalign{
\a &= y^TCy + 2(B^Tx)^Ty + x^TAx \qquad\qquad\qquad\qquad\qquad \\
d\a &= (2Cy + 2B^Tx)^Tdy + 0 \\
\grad{\a}{y} &= 2\LR{Cy + B^Tx} \:=\: 0 \\
w &= y_{opt} \,=\, -C^{-1}B^Tx \\
}$$</span>
This produces an explicit expression for <span class="math-container">$g(x)$</span>
<span class="math-container">$$\eqalign{
g(x) &= f(x,w) \\
&= x^TAx + 2x^TBw + w^TCw \\
&= x^TAx - 2x^TBC^{-1}B^Tx + \LR{C^{-1}B^Tx}^TC\LR{C^{-1}B^Tx} \\
&= x^T\LR{A-2\B+\B}x \\
&= x^T\LR{A-\B}x \\
}$$</span>
whose gradient is a trivial calculation
<span class="math-container">$$\eqalign{
\grad{g}{x} &= 2\LR{A-BC^{-1}B^T}x \qquad\qquad\qquad\qquad\qquad\qquad\quad \\
}$$</span></p>
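<p>A scalar sanity check of the final formula (with $n=m=1$, so the matrices $A,B,C$ become numbers with $C&gt;0$; the particular values below are arbitrary, chosen only for illustration):</p>

```python
A, B, C = 3.0, 1.5, 2.0        # C > 0 so the inner minimum over y exists
x = 0.8

def f(y):
    """f(x, y) for fixed x in the scalar case."""
    return A * x * x + 2 * x * B * y + C * y * y

y_opt = -B * x / C             # from setting 2*C*y + 2*B*x = 0
g = f(y_opt)
# g(x) = (A - B^2/C) x^2, matching x^T (A - B C^{-1} B^T) x
assert abs(g - (A - B * B / C) * x * x) < 1e-12
# y_opt really is a minimizer of f
assert all(f(y_opt) <= f(y_opt + d) for d in (-0.1, -0.01, 0.01, 0.1))
```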
|
1,986,249 | <blockquote>
<p>Let q be a positive integer such that $q \geq 2$ and such that for any
integers a and b, if $q|ab$, then $q|a$ or $q|b$. Show that $\sqrt{q}$
is irrational.</p>
</blockquote>
<p>Proof;</p>
<p>Let us assume $\sqrt{q}$ is a rational number, so $\sqrt{q} = \frac{m}{n}$ where $n \neq 0$ and $\gcd (m,n)=1$, meaning $q=\frac{m^2}{n^2}$.</p>
<p>Since $n^2 \nmid m^2$, $q|m^2 \Rightarrow q|m$, so $m=qt$ where $t\in \mathbb{Z}$</p>
<p>By substitute $m=qt$ in the equation $qn^2 = m^2$, we get $n^2=qt^2$.</p>
<p>This tells us that $q|n^2$ and $t^2|n^2$, which contradicts the assumption $\gcd (m,n)=1$; therefore, $\sqrt{q}$ is irrational.</p>
<p>I got this proof with the assistance of the course, but is there any flaw or mistake? What other methods are there for proving this statement? Can you give at least one different method? And how can I improve this proof?</p>
| Ojas | 382,895 | <p><strong>Proof using <a href="http://mathworld.wolfram.com/BezoutsIdentity.html" rel="nofollow">Bézout's Identity</a></strong></p>
<p>For $\sqrt{q}$ to be irrational, $q$ must not be a perfect square. Thus, we only concern us with non-perfect square $q$.</p>
<p>Assume that $\sqrt{q}$ is rational. Therefore $\sqrt{q} = \frac{m}{n}$ where $\gcd{(m, n)} = 1$.</p>
<p>By Bézout's Identity, there exist integers $x$ and $y$ such that $mx + ny = 1$</p>
<p>Now $\sqrt{q} = \sqrt{q}(1) = \sqrt{q}(mx + ny) = (\sqrt{q}m)x + (\sqrt{q}n)y = qnx + my = \text{an integer}$</p>
<p>This makes $\sqrt{q}$ an integer, so $q$ would be a perfect square, contradicting our initial assumption. Hence, $\sqrt{q}$ is irrational.</p>
|
2,802,959 | <p>If I write
$$
x\in [0,1] \tag 1
$$
does it mean $x$ could be ANY number between $0$ and $1$?</p>
<p>Is it correct to call $[0,1]$ a set? Or should I instead write $\{[0,1]\}$? </p>
<p>Q2:</p>
<p>If I instead have
$$
x\in \{0,1\} \tag 2
$$
does it mean $x$ could be only $0$ OR $1$?</p>
| Eff | 112,061 | <blockquote>
<p>If I write $x\in[0,1]$ does it mean that $x$ can be ANY number between $0$ and $1$?</p>
</blockquote>
<p><strong>Yes.</strong></p>
<p>If $x\in [0,1]$ then $x$ can be any number between $0$ and $1$ (inclusive). Another way to write this is $0 \leq x \leq 1$.</p>
<p>A related notation is $(0,1)$, or sometimes in European writing $]0,1[$, which is the open interval excluding end points, i.e. $0<x < 1$.</p>
<blockquote>
<p>Is it correct to call $[0,1]$ a set? Or should I instead write $\{[0,1]\}$?</p>
</blockquote>
<p><strong>Yes</strong>, $[0,1]$ is a set (it is also called an interval because it contains every real number between its endpoints). The set is $[0,1] = \{x\in\mathbb R\mid 0 \leq x \leq 1\}.$</p>
<p>However, $\{[0,1]\}$ is also a set. A different set. They are different sets because $[0,1]$ has an infinite (uncountable) number of elements (i.e. any real number between $0$ and $1$), whereas $\{[0,1]\}$ has only <em>one</em> element, namely $[0,1]\in\{[0,1]\}$.</p>
<blockquote>
<p>If I instead have $x\in\{0,1\}$ does it mean that $x$ could be only $0$ OR $1$?</p>
</blockquote>
<p><strong>Yes</strong>.</p>
<p>If $x\in\{0,1\}$ then $x$ is either $0$ or $x$ is $1$, and <em>not</em> for example $0.312$.</p>
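<p>In code terms (a plain illustration of the distinction; the sample value $0.312$ echoes the example above):</p>

```python
x = 0.312
in_interval = 0 <= x <= 1      # x in [0, 1]: any real between 0 and 1
in_pair = x in {0, 1}          # x in {0, 1}: only the two listed values
print(in_interval, in_pair)    # True False
```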
|
2,802,959 | <p>If I write
$$
x\in [0,1] \tag 1
$$
does it mean $x$ could be ANY number between $0$ and $1$?</p>
<p>Is it correct to call $[0,1]$ a set? Or should I instead write $\{[0,1]\}$? </p>
<p>Q2:</p>
<p>If I instead have
$$
x\in \{0,1\} \tag 2
$$
does it mean $x$ could be only $0$ OR $1$?</p>
| Chappers | 221,811 | <p>$[0,1]$ is (defined as) the set $\{ x \in \mathbb{R} : 0 \leq x \leq 1 \}$, i.e. it is a set that contains every real number between $0$ and $1$ (inclusive). It contains an uncountable number of elements.</p>
<p>$\{0,1\}$ is a set containing 2 elements: $0$, and $1$.</p>
|
3,522,752 | <p>Solve the following equation:
<span class="math-container">$$y=x+a\tan^{-1}p$$</span>
<span class="math-container">$$\text{where } p=\frac{dy}{dx}$$</span>
Differentiating both sides w.r.t. $x$,
<span class="math-container">$$\frac{dy}{dx}=1+\frac{a}{1+p^2}\frac{dp}{dx}\\
\implies p=1+\frac{a}{1+p^2}\frac{dp}{dx}$$</span>
I have got this far, but what should I do next? Please help!</p>
| user577215664 | 475,762 | <p><span class="math-container">$$\frac{dy}{dx}=1+\frac{a}{1+p^2}\frac{dp}{dx}$$</span>
<span class="math-container">$$\implies p=1+\frac{a}{1+p^2}\frac{dp}{dx}$$</span>
It's separable
<span class="math-container">$$ \frac {dp}{(p-1)({1+p^2})}=\frac {dx} a$$</span>
Use partial fraction decomposition and integrate.</p>
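<p>For the record (this step is not carried out above), the decomposition is
<span class="math-container">$$\frac{1}{(p-1)(1+p^2)}=\frac12\cdot\frac{1}{p-1}-\frac12\cdot\frac{p+1}{p^2+1},$$</span>
so integrating gives
<span class="math-container">$$\frac{x}{a}+C=\frac12\ln|p-1|-\frac14\ln\left(p^2+1\right)-\frac12\arctan p.$$</span></p>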
<hr>
<p><strong>Edit</strong></p>
<p>It's better to keep the original equation
<span class="math-container">$$y=x+a\arctan (y')$$</span>
<span class="math-container">$$y'=\tan \left (\frac {y-x}{a} \right )$$</span>
Substitute <span class="math-container">$y-x=u \implies u'=y'-1$</span>
The equation becomes:
<span class="math-container">$$u'+1= \tan \left (\frac {u}{a} \right )$$</span>
This last DE is separable:
<span class="math-container">$$\int \frac {du}{\tan \left (\frac {u}{a} \right )-1} =\int dx$$</span></p>
|
118,406 | <p>I have a single flat directory with over a million files. I just wanted to take a sample of the first few files, but <code>FileNames</code> doesn't include an "only the first n" option, and so it took over a minute:</p>
<p><a href="https://i.stack.imgur.com/s5cBS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5cBS.png" alt="enter image description here"></a></p>
<p>Is there a faster way?</p>
| Alexey Golyshev | 23,402 | <p>The new function <code>FileSystemMap</code> in Mathematica 11, with its <code>MaxItems</code> option (<a href="https://reference.wolfram.com/language/ref/FileSystemMap.html">documentation</a>), can be useful here.</p>
<pre>
dir = "C:\\Users\\Alexey\\Documents";
n = 10;
f = FileSystemMap[#&, dir, MaxItems -> n] // Keys;
</pre>
|
1,070,008 | <p>Is being $T_1$ a topological invariant?
Is being first-countable a topological invariant?
I need a little hint as to whether or not these properties are topological invariants.</p>
| Matthew Leingang | 2,785 | <p>A <em>topological invariant</em> is a property that is preserved under homeomorphism. So your first question is equivalent to:</p>
<blockquote>
<p>If $X$ and $Y$ are homeomorphic, and $X$ is $T_1$, is $Y$ also $T_1$?</p>
</blockquote>
<p>Let $f\colon Y \to X$ be a homeomorphism. Given that $X$ is $T_1$, and $f$ is continuous and invertible, can you show $Y$ is $T_1$?</p>
|
1,456,407 | <p><a href="https://i.stack.imgur.com/oy6T7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oy6T7.jpg" alt="enter image description here"></a></p>
<p>We need to find the area of the shaded region, where the curves are given in polar form as $r = 2 \sin\theta$ and $r=1$.</p>
<p>I formulated the double integral as follows:</p>
<p>We find the area in the first quadrant and then multiply it by $2$.</p>
<p>The area of the circle $r=1$ in the first quadrant is $\frac{\pi}{4}$; we need to subtract the area of the curve $r = 2\sin\theta$ from this, so the area is given by:</p>
<p>$\left[\dfrac{\pi}{4} - \int^{\pi/6}_{0}\int^{2\sin\theta}_{0}r\,dr\,d\theta\right]\times2$</p>
<p>Is this correct?
The solution says, "first consider $0< \theta < \dfrac{\pi}{6}$ and then $\dfrac{\pi}{6}< \theta < \dfrac{\pi}{2}$, etc."</p>
| mathlove | 78,967 | <p>Note that
$$42^{2k}-1=(42^k)^2-1=(42^k-1)(42^k+1)$$
where $1\lt 42^k-1\lt 42^k+1$.</p>
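<p>The factorization is easy to confirm numerically; a small sketch (the modulus $1763=42^2-1$ is used here purely for illustration, since $42^2-1$ divides every term $42^{2k}-1$):</p>

```python
for k in range(1, 8):
    n = 42 ** (2 * k) - 1
    # difference of squares: 42^{2k} - 1 = (42^k - 1)(42^k + 1)
    assert n == (42 ** k - 1) * (42 ** k + 1)
    # in particular 1763 = 42^2 - 1 divides each term
    assert n % (42 ** 2 - 1) == 0
print("ok")
```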
|
467,609 | <blockquote>
<p>Find the value of
$$\int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx $$</p>
</blockquote>
<p>I have tried using $\int_a ^bf(x) dx=\int_a^b f(a+b-x)dx$</p>
<p>$\displaystyle \int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx=\int _0 ^ \pi \dfrac{\pi-x}{1+\sin^2(x)} dx=I$</p>
<p>I couldn't go any further with that!</p>
| Ahaan S. Rungta | 85,039 | <p>Now, note that $$ \left( \displaystyle\int_0^\pi \dfrac {\pi}{1+\sin^2(x)} \, \mathrm{d}x \right) - I = I \implies I = \dfrac {\displaystyle\int_0^\pi \dfrac {\pi}{1+\sin^2(x)} \, \mathrm{d}x}{2}. $$</p>
<p>Try to find $ \displaystyle\int_0^\pi \dfrac {1}{1+\sin^2(x)} \mathrm{d}x $. </p>
|
467,609 | <blockquote>
<p>Find the value of
$$\int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx $$</p>
</blockquote>
<p>I have tried using $\int_a ^bf(x) dx=\int_a^b f(a+b-x)dx$</p>
<p>$\displaystyle \int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx=\int _0 ^ \pi \dfrac{\pi-x}{1+\sin^2(x)} dx=I$</p>
<p>I couldn't go any further with that!</p>
| N. S. | 9,176 | <p>You are almost there</p>
<p>$$2I=I+I= \displaystyle \int _0 ^ \pi \dfrac{x}{1+\sin^2(x)} dx+\int _0 ^ \pi \dfrac{\pi-x}{1+\sin^2(x)} dx=\pi\int _0 ^ \pi \dfrac{1}{1+\sin^2(x)} dx$$</p>
<p>The last integral can be calculated with the substitution $t =\tan(\frac{x}{2})$ or by writing $\sin(x)=\frac{1}{\csc(x)}$ (but be careful, as $\csc(x)$ is not defined at $0, \pi$).</p>
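<p>For the record (an extra step, not carried out in the answer): a standard computation gives $\int_0^\pi \frac{dx}{1+\sin^2(x)}=\frac{\pi}{\sqrt2}$, hence $I=\frac{\pi^2}{2\sqrt2}$. A quick numerical quadrature agrees:</p>

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

I = simpson(lambda x: x / (1 + math.sin(x) ** 2), 0, math.pi)
closed_form = math.pi ** 2 / (2 * math.sqrt(2))
print(I, closed_form)   # both approximately 3.4896
```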
|
1,269,738 | <p>I'm looking for problems that due to modern developments in mathematics would nowadays be reduced to a rote computation or at least an exercise in a textbook, but that past mathematicians (even famous and great ones such as Gauss or Riemann) would've had a difficult time with. </p>
<p>Some examples that come to mind are <em><a href="http://en.wikipedia.org/wiki/Group_testing">group testing problems</a></em>, which would be difficult to solve without a notion of error-correcting codes, and -- for even earlier mathematicians -- calculus questions such as calculating the area of some $n$-dimensional body.</p>
<p>The questions have to be understandable to older mathematicians and elementary in some sense. That is, past mathematicians should be able to appreciate them just as well as we can. </p>
| Eric Stucky | 31,888 | <p>This sum-of-squares theorem of Fermat may qualify as an example:</p>
<blockquote>
<p>An odd prime $p$ is expressible as the sum of squares $x^2+y^2$ if and only if $p\equiv 1 \text{ mod } 4$.</p>
</blockquote>
<p>You can read <a href="http://en.wikipedia.org/wiki/Proofs_of_Fermat%27s_theorem_on_sums_of_two_squares" rel="nofollow">this Wikipedia article</a> (as of the most recent update to this answer) to see the difference in mental effort in the original proof by Euler, as opposed to a modern treatment using the fact that the Gaussian integers are a Euclidean domain.</p>
<hr>
<p>A dual example: I think Brouwer would be astonished and pleased to know that the Brouwer fixed point theorem can now be proven for the simplex (and, with more effort, for convex polytopes) with absolutely no knowledge of topology; just some affine geometry and combinatorial intuition to prove <a href="http://en.wikipedia.org/wiki/Sperner's_lemma" rel="nofollow">Sperner's Lemma</a>, and basic analysis to translate to the continuous setting.</p>
<p>It's still not an "easy" proof but it is an example of a classical problem that we now can solve with considerably <em>less</em> machinery, instead of the above example, whose ease of proof can be chalked up to <em>more</em> machinery.</p>
|
4,504,080 | <p>If it is given that <span class="math-container">$$\displaystyle \frac{1}{(20-x)(40-x)}+\displaystyle \frac{1}{(40-x)(60-x)}+\cdots+\displaystyle \frac{1}{(180-x)(200-x)}= \frac{1}{256}$$</span> then how can one find the maximum value of <span class="math-container">$x$</span>? I tried solving it with the <span class="math-container">$V_n$</span> method, but it gets tedious.</p>
| Alex Youcis | 16,497 | <p>I will post this answer just as a complement to <strong>Thomas Preu</strong>'s nice computation above.</p>
<p>Let us write <span class="math-container">$\Gamma:=\mathrm{Gal}(\mathbf{C}/\mathbf{R})$</span>. Also, let me model <span class="math-container">$\mathbf{Z}/2\mathbf{Z}$</span> as <span class="math-container">$\{\pm 1\}$</span> -- I will identify them in what follows. As already observed we have a short exact sequence of <span class="math-container">$\Gamma$</span>-groups</p>
<p><span class="math-container">$$1\to \mathbf{C}^\times\to \mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})\to \mathbf{Z}/2\mathbf{Z}\to 1,$$</span></p>
<p>where the latter term has trivial <span class="math-container">$\Gamma$</span>-action and the first the usual <span class="math-container">$\Gamma$</span>-action. We then know that we get (e.g. see [Serre, §5.5, Proposition 38]) an exact sequence of pointed sets</p>
<p><span class="math-container">$$1\to \mathbf{R}^\times\to \mathrm{Aut}(\mathbf{G}_{m,\mathbf{R}})\to \mathbf{Z}/2\mathbf{Z}\to H^1(\Gamma,\mathbf{C}^\times)\to H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))\to H^1(\Gamma, \mathbf{Z}/2\mathbf{Z}).$$</span></p>
<p>As already observed by <strong>Thomas Preu</strong>, we know by Hilbert's Theorem 90 (e.g. see [Poonen, Proposition 1.3.15]) that <span class="math-container">$H^1(\Gamma,\mathbf{C}^\times)$</span> is trivial. Moreover, as <span class="math-container">$\Gamma$</span> acts trivially on <span class="math-container">$\mathbf{Z}/2\mathbf{Z}$</span> (which is abelian) we have that</p>
<p><span class="math-container">$$H^1(\Gamma,\mathbf{Z}/2\mathbf{Z})=Z^1(\Gamma,\mathbf{Z}/2\mathbf{Z})=\mathrm{Hom}(\Gamma,\mathbf{Z}/2\mathbf{Z})=\mathbf{Z}/2\mathbf{Z}.$$</span></p>
<p>Now, by the definition of exact sequence of pointed sets, the fiber over <span class="math-container">$1$</span> of the map</p>
<p><span class="math-container">$$H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))\to H^1(\Gamma, \mathbf{Z}/2\mathbf{Z})\qquad (1)$$</span></p>
<p>is trivial, and so to understand <span class="math-container">$H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))$</span> it suffices to understand the fiber over the non-trivial element in the map in <span class="math-container">$(1)$</span>.</p>
<p>That said, let us observe that by [Serre, §5.5, Corollary 2] and our discussion above we know that the fiber of the map in <span class="math-container">$(1)$</span> over the non-trivial element can be understood as</p>
<p><span class="math-container">$$H^1(\Gamma, {}_\varsigma(\mathbf{C}^\times))/H^0(\Gamma,{}_\varsigma (\mathbf{Z}/2\mathbf{Z}))$$</span></p>
<p>where <span class="math-container">$\varsigma$</span> is any element of <span class="math-container">$Z^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))$</span> whose associated cocycle is non-trivial, and where <span class="math-container">$_\varsigma(-)$</span> is as in [Serre, §5.3]. Now, explicitly let us take <span class="math-container">$\varsigma$</span> corresponding <span class="math-container">$S^1$</span> which is given by</p>
<p><span class="math-container">$$\varsigma\colon \Gamma\to \mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})=\mathbf{C}^\times\rtimes (\mathbf{Z}/2\mathbf{Z}),\qquad \sigma\mapsto (1,-1),$$</span></p>
<p>where <span class="math-container">$\sigma$</span> is the complex conjugation map. We can then explicitly compute that <span class="math-container">$_\varsigma (\mathbf{C}^\times)$</span> has the same underlying group as <span class="math-container">$\mathbf{C}^\times$</span>, but now with the action</p>
<p><span class="math-container">$$\sigma\cdot \alpha=\sigma(\alpha)^{-1}.$$</span></p>
<p>Similarly, <span class="math-container">$_\varsigma(\mathbf{Z}/2\mathbf{Z})$</span> has the same underlying group, and now</p>
<p><span class="math-container">$$\sigma\cdot -1= (-1) (-1) (-1)^{-1}=-1,$$</span></p>
<p>or, in other words, <span class="math-container">$_\varsigma(\mathbf{Z}/2\mathbf{Z})$</span> is still just <span class="math-container">$\mathbf{Z}/2\mathbf{Z}$</span> with the trivial action.</p>
<p>Now, as <span class="math-container">$_\varsigma(\mathbf{C}^\times)$</span> is abelian, and <span class="math-container">$\Gamma$</span> is cyclic, we can use <strong>Thomas Preu</strong>'s favorite formula to compute that</p>
<p><span class="math-container">$$H^1(\Gamma,{}_\varsigma(\mathbf{C}^\times))=\ker(N)/\mathrm{im}(\Delta).$$</span></p>
<p>Observe here now that</p>
<p><span class="math-container">$$N\colon {}_\varsigma(\mathbf{C}^\times)\to {}_\varsigma(\mathbf{C}^\times),\qquad \alpha\mapsto \sigma(\alpha)^{-1}\alpha,$$</span></p>
<p>and so the kernel of this map is those <span class="math-container">$\alpha$</span> such that <span class="math-container">$\sigma(\alpha)=\alpha$</span>, which is precisely <span class="math-container">$\mathbf{R}^\times$</span>. On the other hand,</p>
<p><span class="math-container">$$\Delta\colon {}_\varsigma(\mathbf{C}^\times)\to {}_\varsigma(\mathbf{C}^\times),\qquad \alpha\mapsto \sigma(\alpha)^{-1}\alpha^{-1},$$</span></p>
<p>which has image precisely <span class="math-container">$\mathbf{R}^{>0}$</span>. So, we see that</p>
<p><span class="math-container">$$H^1(\Gamma,{}_\varsigma(\mathbf{C})^\times)=\mathbf{R}^\times/\mathbf{R}^{>0}.$$</span></p>
<p>Now, the action of <span class="math-container">$_\varsigma(\mathbf{Z}/2\mathbf{Z})$</span> on this group is by inversion, which is clearly the trivial action, and so all in all we see that the fiber over the non-trivial element of the map in <span class="math-container">$(1)$</span> is in bijection with</p>
<p><span class="math-container">$$H^1(\Gamma,{}_\varsigma(\mathbf{C})^\times)=\mathbf{R}^\times/\mathbf{R}^{>0},$$</span></p>
<p>which has <span class="math-container">$2$</span> elements. Thus, we see that</p>
<p><span class="math-container">$$\# H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))=1+2=3.$$</span></p>
<p><strong>Remark:</strong></p>
<ol>
<li>This was VERY long-winded because I wanted to show all the steps to illustrate the method. That said, in practice I was able to immediately compute the correct answer was 3 using this method. The utility (at least here) is that I never had to actually think about non-abelian group cohomology and, ultimately, could use <strong>Thomas Preu</strong>'s favorite formula.</li>
<li>Bonus question. This computation of <span class="math-container">$H^1(\Gamma,{}_\varsigma(\mathbf{C})^\times)$</span> looks quite similar, at the end, to the calculation of <span class="math-container">$\mathrm{Br}(\mathbf{R})=H^2(\Gamma,\mathbf{C}^\times)$</span>. On the other hand, the curve <span class="math-container">$Y$</span> has smooth compactification a Brauer--Severi variety (a twist of <span class="math-container">$\mathbf{P}^n_\mathbf{R}$</span> for some <span class="math-container">$n$</span>, in this case <span class="math-container">$n=1$</span>) which is exactly what the Brauer group computes. What is the relationship?</li>
</ol>
<p><strong>EDIT:</strong> Just for fun here is an answer to my second remark.</p>
<p>Let us observe that we have a natural map</p>
<p><span class="math-container">$$\mathrm{Twist}(\mathbf{G}_{m,\mathbf{R}})\to \mathrm{Twist}(\mathbf{P}^1_{\mathbf{R}}),\qquad C\mapsto \overline{C},$$</span></p>
<p>where <span class="math-container">$\overline{C}$</span> is the (unique) smooth compactification of <span class="math-container">$Y$</span> (e.g. see <a href="https://stacks.math.columbia.edu/tag/0BXX" rel="nofollow noreferrer">Tag 0BXX</a>). This map is not an injection though:</p>
<p><span class="math-container">$$\overline{S^1}\cong \overline{\mathbf{G}_{m,\mathbf{R}}}\cong \mathbf{P}^1_{\mathbf{R}}.$$</span></p>
<p>We can model this map of sets group theoretically, and this can actually help us recompute <span class="math-container">$H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))$</span>. Namely, we have an injection of <span class="math-container">$\Gamma$</span>-groups (we can even upgrade this to an embedding of algebraic groups)</p>
<p><span class="math-container">$$\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})\hookrightarrow \mathrm{Aut}(\mathbf{P}^1_{\mathbf{C}})\qquad \mathbf{(2)}$$</span></p>
<p>given by sending <span class="math-container">$f$</span> to its extension (in the sense of <a href="https://stacks.math.columbia.edu/tag/0BXY" rel="nofollow noreferrer">Tag 0BXY</a>) which is still an automorphism by <a href="https://stacks.math.columbia.edu/tag/0BY1" rel="nofollow noreferrer">Tag 0BY1</a>. One can explicitly check that the following diagram commutes</p>
<p><span class="math-container">$$\begin{matrix}\mathrm{Twist}(\mathbf{G}_{m,\mathbf{R}}) & \to & \mathrm{Twist}(\mathbf{P}^1_{\mathbf{R}})\\ \downarrow & & \downarrow\\ H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})) & \to & H^1(\Gamma,\mathrm{Aut}(\mathbf{P}^1_{\mathbf{C}}))\end{matrix},$$</span></p>
<p>where the vertical maps are bijections.</p>
<p>We may explicitly describe the image of <span class="math-container">$\mathbf{(2)}$</span> as the set of automorphisms <span class="math-container">$\varphi$</span> of <span class="math-container">$\mathbf{P}^1_\mathbf{C}$</span> such that <span class="math-container">$\varphi(S)=S$</span> where <span class="math-container">$S=\{[1:0],[0:1]\}$</span>. Using the identification of <span class="math-container">$\Gamma$</span>-groups</p>
<p><span class="math-container">$$\mathrm{Aut}(\mathbf{P}^1_\mathbf{C})=\mathrm{PGL}_2(\mathbf{C}),$$</span></p>
<p>where the latter acts by fractional linear transformations (e.g. see [Mumford, Chapter 0, §5]), we may then explicitly identify the following <span class="math-container">$\Gamma$</span>-groups</p>
<p><span class="math-container">$$\begin{aligned}\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}) &= \left\{\varphi \in \mathrm{Aut}(\mathbf{P}^1_\mathbf{C}):\varphi(S)=S\right\}\\ &=\left\{\begin{pmatrix}a & 0\\ 0 & d\end{pmatrix}\right\}\cup \left\{\begin{pmatrix}0 & b\\ c & 0\end{pmatrix}\right\}\\ &= C(\gamma)\end{aligned},$$</span></p>
<p>where <span class="math-container">$\gamma=\left(\begin{smallmatrix}-1 & 0 \\ 0 & 1\end{smallmatrix}\right)$</span> and <span class="math-container">$C(\gamma)$</span> is the centralizer of <span class="math-container">$\gamma$</span> in <span class="math-container">$\mathrm{PGL}_2(\mathbf{C})$</span>. In particular, observe that we have an identification of <span class="math-container">$\Gamma$</span>-sets</p>
<p><span class="math-container">$$ \mathrm{Aut}(\mathbf{P}^1_{\mathbf{C}})/\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})\cong \mathcal{O}_\gamma,$$</span></p>
<p>where the latter is the conjugacy class of <span class="math-container">$\gamma$</span> in <span class="math-container">$\mathrm{PGL}_2(\mathbf{C})$</span>.</p>
<p>Now, by [Serre, §5.4, Proposition 36] and the above discussion we have an exact sequence of pointed sets</p>
<p><span class="math-container">$$1\to \mathrm{Aut}(\mathbf{G}_{m,\mathbf{R}})\to \mathrm{Aut}(\mathbf{P}^1_{\mathbf{R}})\to \mathcal{O}_\gamma^\Gamma\to H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))\to H^1(\Gamma,\mathrm{Aut}(\mathbf{P}^1_{\mathbf{C}})).$$</span></p>
<p>Now, with the identification of <span class="math-container">$\Gamma$</span>-groups <span class="math-container">$\mathrm{Aut}(\mathbf{P}^1_\mathbf{C})=\mathrm{PGL}_2(\mathbf{C})$</span> and the short exact sequence of <span class="math-container">$\Gamma$</span>-groups</p>
<p><span class="math-container">$$1\to \mathbf{C}^\times\to \mathrm{GL}_2(\mathbf{C})\to \mathrm{PGL}_2(\mathbf{C})\to 1$$</span></p>
<p>we may use [Serre, §5.7, Proposition 43] to obtain an exact sequence of pointed sets</p>
<p><span class="math-container">$$H^1(\Gamma,\mathrm{GL}_2(\mathbf{C}))\to H^1(\Gamma,\mathrm{PGL}_2(\mathbf{C}))\to H^2(\Gamma,\mathbf{C}^\times)=:\mathrm{Br}(\mathbf{R})\cong \mathbf{R}^\times/\mathbf{R}^{>0}.$$</span></p>
<p>Again by Hilbert's theorem 90 (e.g. in this context see [GS, Example 2.3.4]) the first term vanishes, and in fact we get an injection <span class="math-container">$H^1(\Gamma,\mathrm{PGL}_2(\mathbf{C}))\to H^2(\Gamma,\mathbf{C}^\times)$</span> (see [GS, Theorem 4.4.5]), which must be an isomorphism as the target is isomorphic to <span class="math-container">$\mathbf{Z}/2\mathbf{Z}$</span> and the source is non-trivial (as <span class="math-container">$\mathrm{Twist}(\mathbf{P}^1_\mathbf{R})$</span> contains the non-trivial element given by <span class="math-container">$\overline{Y}$</span>). Thus, we see that <span class="math-container">$H^1(\Gamma,\mathrm{PGL}_2(\mathbf{C}))$</span> is a two-element set.</p>
<p>We see that <span class="math-container">$\mathbf{G}_m$</span>, corresponding to the trivial cocycle, and <span class="math-container">$Y$</span>, corresponding to the cocycle</p>
<p><span class="math-container">$$\vartheta\colon \Gamma\to \mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}})\subseteq \mathrm{PGL}_2(\mathbf{C}),\qquad \sigma\mapsto \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix},$$</span></p>
<p>form a set of representatives of <span class="math-container">$H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))$</span> which surjects onto <span class="math-container">$H^1(\Gamma,\mathrm{Aut}(\mathbf{P}^1_\mathbf{C}))$</span>. Thus, to determine the size of <span class="math-container">$H^1(\Gamma,\mathrm{PGL}_2(\mathbf{C}))$</span> it suffices to determine the fibers over (the images of) each of these cocycles. By [Serre, §5.4, Corollary 2] these fibers are in bijection, respectively, with the following sets</p>
<p><span class="math-container">$$\mathcal{O}_\gamma^\Gamma/\mathrm{PGL}_2(\mathbf{R}),\qquad \left({}_\vartheta \mathrm{PGL}_2(\mathbf{C})/{}_\vartheta C(\gamma)\right)^\Gamma/({}_\vartheta \mathrm{PGL}_2(\mathbf{C}))^\Gamma.\qquad \mathbf{(3)}$$</span></p>
<p>It is a fun exercise in linear algebra (<strong>NB:</strong> I want to thank my friend Alexander Bertoloni Meli for helping me do this exercise) to compute that</p>
<p><span class="math-container">$$\mathcal{O}_\gamma=\mathrm{PGL}_2(\mathbf{R})\cdot \gamma\sqcup \mathrm{PGL}_2(\mathbf{R})\cdot \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$$</span></p>
<p>The non-triviality is a shadow of the deep notion of <em>geometric conjugacy vs. rational conjugacy</em>. In any case, using this it's easy to compute that the sizes of the sets in <span class="math-container">$\mathbf{(3)}$</span> are <span class="math-container">$2$</span> and <span class="math-container">$1$</span> respectively, rederiving the calculation that <span class="math-container">$\# H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))=3$</span>.</p>
<p>Finally, an answer to my second remark. From the above we have a natural map</p>
<p><span class="math-container">$$H^1(\Gamma, {}_\varsigma(\mathbf{C}^\times))=H^1(\Gamma,{}_\varsigma(\mathbf{C}^\times))/H^0(\Gamma,{}_\varsigma(\mathbf{Z}/2\mathbf{Z}))\hookrightarrow H^1(\Gamma,\mathrm{Aut}(\mathbf{G}_{m,\mathbf{C}}))\to \mathrm{Br}(\mathbf{R}),$$</span></p>
<p>and this map is a bijection, so it's not very surprising that they produced 'the same computation'.</p>
<p><strong>References:</strong></p>
<p>[GS] Gille, P. and Szamuely, T., 2017. Central simple algebras and Galois cohomology (Vol. 165). Cambridge University Press.</p>
<p>[Mumford] Mumford, D., Fogarty, J. and Kirwan, F., 1994. Geometric invariant theory (Vol. 34). Springer Science & Business Media.</p>
<p>[Poonen] Poonen, B., 2017. Rational points on varieties (Vol. 186). American Mathematical Soc..</p>
<p>[Serre] Serre, J.P., 1994. Cohomologie galoisienne (Vol. 5). Springer Science & Business Media.</p>
|
576,553 | <p>Please, forgive me if this is an elementary question, as well as my sloppy phrasing and notation.</p>
<p>Suppose we have two discrete probability distributions $p = {\lbrace p_i \rbrace}$ and $q={\lbrace q_i \rbrace}$, $i=1,\dots,n$, where $p_i=P(p=p_i)$ and $q_i=P(q=q_i)$. Let's represent them as vectors $\boldsymbol{p} = [p_i], \boldsymbol{q}= [q_i] \in \mathbb{R}^n$.</p>
<p>If we take two p-norms $||\cdot||_a$ and $||\cdot||_b$, excluding the 1-norm and the max-norm, then if $||\boldsymbol{p}||_a>||\boldsymbol{q}||_a$ is it the case that $||\boldsymbol{p}||_b>||\boldsymbol{q}||_b$ also holds? In other words, will all the p-norms induce the same 'ranking' of $\boldsymbol{p}$ and $\boldsymbol{q}$?</p>
<p>Would anything change if at least one of the $||\cdot||_a$ and $||\cdot||_b$ were p-quasinorms i.e., $a,b\in(0,1)$ instead?</p>
| suvrit | 18,934 | <p>Neither of the two versions holds...</p>
<p>Here are counterexamples. Let $a=1.5$, $b=2$. Let
\begin{equation*}
p=[3,1,4]/8\quad q = [2,2,5]/9;
\end{equation*}
Then, we have</p>
<p>\begin{equation*}
\begin{split}
\|p\|_a &= 0.7329,\quad \|q\|_a = 0.7299\\
\|p\|_b &= 0.6374,\quad \|q\|_b = 0.6383.
\end{split}
\end{equation*}</p>
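<p>These figures are easy to reproduce numerically; here is a quick sanity check (Python, my own addition, not part of the original answer):</p>

```python
# Check the counterexample: the 1.5-norm and the 2-norm rank p and q oppositely.
def pnorm(v, a):
    """The l_a norm of a vector v."""
    return sum(abs(x) ** a for x in v) ** (1.0 / a)

p = [3 / 8, 1 / 8, 4 / 8]   # both are probability vectors (entries sum to 1)
q = [2 / 9, 2 / 9, 5 / 9]

print(pnorm(p, 1.5), pnorm(q, 1.5))  # ~0.7329 > ~0.7299
print(pnorm(p, 2.0), pnorm(q, 2.0))  # ~0.6374 < ~0.6383
```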
<p>If we use quasinorms, then we can get similar counterexamples.</p>
<p>However, not all is bad. There is a version of the conjecture that does hold: if $p \prec q$, then all $p$-norms have the desired monotonicity. Here $p \prec q$ denotes the <em>majorization</em> order:
\begin{equation*}
\begin{split}
&\sum\nolimits_{i=1}^k p_i^\downarrow \le \sum\nolimits_{i=1}^k q_i^\downarrow\quad\text{for}\ k=1,\ldots,n-1\\
&\sum\nolimits_{i=1}^n p_i^\downarrow = \sum\nolimits_{i=1}^n q_i^\downarrow.
\end{split}
\end{equation*}
In this case, $\|p\|_a \le \|q\|_a$ for all $a \ge 1$.</p>
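<p>A small numerical illustration of the majorization claim (my own example and code, assuming the standard definition above):</p>

```python
def pnorm(v, a):
    return sum(abs(x) ** a for x in v) ** (1.0 / a)

def majorized_by(p, q):
    """True if p ≺ q: sorted-descending partial sums of q dominate those of p,
    with equal totals."""
    ps, qs = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for x, y in zip(ps, qs):
        sp, sq = sp + x, sq + y
        if sp > sq + 1e-12:
            return False
    return abs(sum(p) - sum(q)) < 1e-12

p, q = [0.4, 0.3, 0.3], [0.5, 0.3, 0.2]
assert majorized_by(p, q)
# The a-norm is symmetric and convex, hence Schur-convex, so p ≺ q
# gives ||p||_a <= ||q||_a for every a >= 1:
for a in (1.0, 1.5, 2.0, 3.0, 10.0):
    assert pnorm(p, a) <= pnorm(q, a) + 1e-12
```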
|
2,312,913 | <p>In the triangle $ABC$, let $E$ be a point on $BC$ such that $BE : EC = 3 : 2$. Pick points $D$ and $F$ on the sides $AB$ and $AC$, correspondingly, so that $3AD = 2AF$. Let $G$ be the point of intersection of $AE$ and $DF$. Given that $AB = 7$ and $AC = 9$, find the ratio $DG : GF$.</p>
<p>I have been working on trying to solve this problem. I am having difficulty relating the length of $AB =7$ and the ratios given to find $AD:BD$. Similarly I am having trouble finding ratio of $FC:AF$. I am sure that I can solve this problem if someone can give me a hint on how to find those ratios. Any help would be appreciated.</p>
| fonfonx | 247,205 | <p>Let us count the number of groups without any girls: there are $25 \choose 20$ possibilities (just pick 20 boys out of 25).</p>
<p>Let us count the total number of possible groups: here are $50 \choose 20$ possibilities (just pick 20 kids out of 50).</p>
<p>Consequently you have ${50 \choose 20} - {25 \choose 20}$ groups with at least one girl.</p>
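<p>For the record, the count can be checked directly (a quick sketch of my own; <code>math.comb</code> needs Python 3.8+):</p>

```python
from math import comb

total = comb(50, 20)          # choose any 20 of the 50 kids
no_girls = comb(25, 20)       # all 20 chosen among the 25 boys
at_least_one_girl = total - no_girls
print(at_least_one_girl)
```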
<p>Generally when you have to count the number of elements in a set such that <em>at least 1 element verifies a property</em> it is easier to count the number of elements such that <em>0 element verifies the property</em> and then compute the difference with the total number of elements in your set.</p>
|
3,620,767 | <p><a href="https://imgur.com/a/i24lMmS" rel="nofollow noreferrer">https://imgur.com/a/i24lMmS</a></p>
<p>I tried solving this problem, but couldn't find an answer. Any suggestions? Thanks!</p>
| sammy gerbil | 203,175 | <p><a href="https://i.stack.imgur.com/xXuI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xXuI8.png" alt="enter image description here"></a></p>
<p>The required area <span class="math-container">$A$</span> of the large square is the area of the middle square AFGC plus four times the area of a triangle congruent to ADC : <span class="math-container">$$A=37^2+2ab$$</span></p>
<p>Triangles ABE and ACD are similar, therefore <span class="math-container">$$\frac{b}{a}=\frac{b-16}{16}$$</span> <span class="math-container">$$ab=16(a+b)=16\sqrt{A}$$</span></p>
<p>Substituting for the value of <span class="math-container">$ab$</span> : <span class="math-container">$$A=37^2+32\sqrt{A}$$</span> from which the value of <span class="math-container">$A$</span> can be found.</p>
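<p>Numerically (my own sketch): setting <span class="math-container">$s=\sqrt{A}$</span> turns the last line into the quadratic <span class="math-container">$s^2-32s-1369=0$</span>, whose positive root gives the area.</p>

```python
from math import sqrt

# A = 37^2 + 32*sqrt(A); with s = sqrt(A) this is s^2 - 32s - 1369 = 0.
s = (32 + sqrt(32 ** 2 + 4 * 1369)) / 2   # positive root of the quadratic
A = s * s
assert abs(A - (1369 + 32 * s)) < 1e-9    # consistency with the original equation
print(A)  # ~3170.96
```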
|
2,011,754 | <p>Can somebody help me to solve this equation?</p>
<p>$$\left(\frac{iz}{2+i}\right)^3=-8$$
I'm translating this into</p>
<p>$(\frac{iz}{2+i})=-2$</p>
<p>But I reckon it's wrong ...</p>
| haqnatural | 247,767 | <p>You are in the right way
$$\frac { iz }{ 2+i } =\sqrt [ 3 ]{ 8\cdot \left( -1 \right) } =-2\left( \cos { \frac { k\pi }{ 3 } +i\sin { \frac { k\pi }{ 3 } } } \right) ,k=0,1,2$$ check for instance $k=0$ </p>
<p>$$\frac { iz }{ 2+i } =-2\\ iz=-4-2i\\ z=\frac { -4-2i }{ i } =\frac { -4i+2 }{ { i }^{ 2 } } =-2+4i$$</p>
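<p>One can verify all three roots and the resulting $z$ with Python's complex arithmetic (my own check, not part of the original answer):</p>

```python
import cmath

# The three cube roots of -8, written as -2*(cos(2k*pi/3) + i*sin(2k*pi/3)):
roots = [-2 * cmath.exp(2j * k * cmath.pi / 3) for k in range(3)]
for w in roots:
    assert abs(w ** 3 + 8) < 1e-9          # each really cubes to -8
    z = w * (2 + 1j) / 1j                  # solve i*z/(2+i) = w for z
    assert abs((1j * z / (2 + 1j)) ** 3 + 8) < 1e-9

# The k = 0 root w = -2 gives the solution computed above:
z0 = -2 * (2 + 1j) / 1j
print(z0)  # (-2+4j)
```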
|
1,506,532 | <p>How do I prove that the connected undirected graph having 10 nodes and 10 edges
contains a cycle.</p>
| happymath | 129,901 | <p><strong>Hint</strong>: Any tree with $n$ vertices can have atmost $n-1$ edges.</p>
|
1,506,532 | <p>How do I prove that the connected undirected graph having 10 nodes and 10 edges
contains a cycle.</p>
| TechJ | 281,154 | <p>Assume that it is not cyclic, so then it means it is a tree.</p>
<p>But for tree we have number of edges 1 less than number of vertices i.e. $n=n-1$</p>
<p>$n=10-1=9$ which contradicts the given statement.</p>
<p>Hence our assumption is wrong, so there exists a cycle.</p>
|
1,508,863 | <p>I have this homework problem assigned but I'm confused as to how to solve it:</p>
<p>For $n>2$ and $a\in\mathbb{Z}$ with $\gcd(a,n)=1$, show that $o_n(a)=m$ is odd $\implies o_n(-a)=2m$.</p>
<p>(where $o_n(a)=m$ means that $a$ has order $m$ modulo $n$).</p>
<p>We were also given this hint: Helpful to consider when $o_p(-a)$ is odd and when it is even.</p>
<p>Thanks for any help!</p>
| Brian M. Scott | 12,042 | <p>The number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $[n]=\{1,\ldots,n\}$ is </p>
<p>$$d_n=n!\sum_{k=0}^n\frac{(-1)^k}{k!}\;,$$</p>
<p>so</p>
<p>$$\frac{n!}e-d_n=n!\sum_{k>n}\frac{(-1)^k}{k!}\;,$$</p>
<p>which is less than $\frac1{n+1}$ in absolute value. Thus for $n\ge 1$, $d_n$ is the integer nearest $\frac{n!}e$, and</p>
<p>$$d_n=\begin{cases}
\left\lfloor\frac{n!}e\right\rfloor,&\text{if }n\text{ is odd}\\\\
\left\lceil\frac{n!}e\right\rceil,&\text{if }n\text{ is even}\;.
\end{cases}$$</p>
<p>The recurrence $d_n=nd_{n-1}+(-1)^n$ is also well-known. We have $d_0=1$, so an easy induction shows that $d_n$ is odd when $n$ is even, and even when $n$ is odd. Thus, for odd $n$ we have $\left\lfloor\frac{n!}e\right\rfloor=d_n$ is even, and for even $n$ we have $\left\lfloor\frac{n!}e\right\rfloor=\left\lceil\frac{n!}e\right\rceil-1$ is again even.</p>
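<p>Both facts used above — that $d_n$ is the integer nearest $\frac{n!}e$, and the stated parities — are easy to confirm for small $n$ (my own check):</p>

```python
from math import e, factorial, floor

# d_n via the recurrence d_n = n*d_{n-1} + (-1)^n, with d_0 = 1.
d = 1
for n in range(1, 12):
    d = n * d + (-1) ** n
    assert d == round(factorial(n) / e)       # d_n is the integer nearest n!/e
    assert d % 2 == (1 if n % 2 == 0 else 0)  # odd for even n, even for odd n
    assert floor(factorial(n) / e) % 2 == 0   # floor(n!/e) is always even
```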
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.