qid | question | author | author_id | answer |
|---|---|---|---|---|
2,965,193 | <p>Basically the question asks us to prove that, given any integers <span class="math-container">$$x_1,x_2,x_3,x_4,x_5$$</span> three of them, say <span class="math-container">$$x_a,x_b,x_c$$</span> satisfy the equation <span class="math-container">$$x_a^2 + x_b^2 + x_c^2 = 3k$$</span> for some integer <span class="math-container">$k$</span>. I know I am supposed to use the pigeonhole principle, and I know that if I have 5 pigeons and 2 holes, then at least one hole will contain at least 3 pigeons. But what I am confused about is how to define the holes. Do I just say that a hole has the property that if 3 integers are in it, then those 3 integers squared sum to a multiple of 3?</p>
| fleablood | 280,126 | <p>Basic pigeonhole. If every integer is of type <span class="math-container">$A$</span> or of type <span class="math-container">$B$</span> and you have <span class="math-container">$n$</span> integers, then at least <span class="math-container">$\lceil \frac n2 \rceil$</span> of them will be of the same type.</p>
<p>I hope you can convince yourself of that on your own.</p>
<p>...</p>
<p>Let <span class="math-container">$x_i = 3*M_i + r_i$</span> where <span class="math-container">$r_i = 0, 1,$</span> or <span class="math-container">$-1$</span>. All numbers will be one of these three options.</p>
<p>Then <span class="math-container">$x_i^2 = 9*M_i^2 + 6*M_i*r_i + r_i^2$</span>. Let <span class="math-container">$V_i = 3*M_i^2 + 2*M_i*r_i$</span> so <span class="math-container">$x_i^2 = 3V_i + r_i^2$</span> where <span class="math-container">$r_i^2 = 0$</span> or <span class="math-container">$1$</span>. All numbers will be one of these <span class="math-container">$2$</span> options.</p>
<p>Let all integers <span class="math-container">$x_i$</span> where <span class="math-container">$r_i^2 = 0$</span> be of type <span class="math-container">$A$</span>, and let all integers <span class="math-container">$x_j$</span> where <span class="math-container">$r_j^2 =1$</span> be of type <span class="math-container">$B$</span>.</p>
<p>So if you have <span class="math-container">$5$</span> integers and each is of type <span class="math-container">$A$</span> or type <span class="math-container">$B$</span>, then by pigeonhole you must have at least <span class="math-container">$3$</span> of the same type.</p>
<p>So suppose <span class="math-container">$x_a, x_b, x_c$</span> are three that are of one of these two types.</p>
<p>If the three are of type <span class="math-container">$A$</span>, where <span class="math-container">$r_i^2 = 0$</span>, then you will have <span class="math-container">$x_a^2 + x_b^2 + x_c^2 = 3V_a + 0 + 3V_b + 0 + 3V_c + 0$</span>. And that is divisible by <span class="math-container">$3$</span>.</p>
<p>If the three are of the type <span class="math-container">$B$</span> where <span class="math-container">$r_i^2 = 1$</span> then you will have <span class="math-container">$x_a^2 + x_b^2 + x_c^2 = 3V_a + 1 + 3V_b+1 + 3V_c + 1 = 3V_a + 3V_b + 3V_c + 3$</span> which is divisible by <span class="math-container">$3$</span>.</p>
<p>So you will always have at least three integers whose sum of squares is divisible by <span class="math-container">$3$</span>.</p>
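<p>As a sanity check (a Python sketch, not part of the original answer), the claim can be brute-forced over residues mod <span class="math-container">$3$</span>, since squares mod <span class="math-container">$3$</span> depend only on the residue:</p>

```python
from itertools import combinations, product

def has_triple(nums):
    # True if some 3 of the numbers have squares summing to a multiple of 3
    return any((a*a + b*b + c*c) % 3 == 0
               for a, b, c in combinations(nums, 3))

# Squares mod 3 depend only on the residue, so checking every
# 5-tuple of residues 0, 1, 2 covers all integers.
assert all(has_triple(t) for t in product(range(3), repeat=5))
```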
|
3,347,391 | <blockquote>
<p>Find the maximum and minimum values of <span class="math-container">$x^2 + y^2 + z^2$</span> subject to the equality constraints <span class="math-container">$x + y + z = 1$</span> and <span class="math-container">$x y z + 1 = 0$</span></p>
</blockquote>
<p>My try:</p>
<p>Let <span class="math-container">$u=x^2+y^2+z^2$</span>
<span class="math-container">$$x+y+z-1=0$$</span>
<span class="math-container">$$xyz+1=0$$</span>
<span class="math-container">$$(xdx+ydy+zdz)+m(dx+dy+dz)+n(yzdx+xzdy+xydz)=0$$</span>
<span class="math-container">$$x+m+yzn=0$$</span>
<span class="math-container">$$y+m+xzn=0$$</span>
<span class="math-container">$$z+m+xyn=0$$</span></p>
<p>Multiplying by <span class="math-container">$x, y$</span> and <span class="math-container">$z$</span> respectively, then adding the three equations above and using <span class="math-container">$x+y+z=1$</span> and <span class="math-container">$xyz=-1$</span>, I get
<span class="math-container">$$u+m-3n=0.$$</span>
What should I do after that? Please help me. Thanks in advance.</p>
| Jack D'Aurizio | 44,121 | <p>Let us assume that <span class="math-container">$x,y,z$</span> are the roots of a monic, cubic polynomial in the <span class="math-container">$t$</span>-variable,
<span class="math-container">$$ q(t) = t^3-t^2+ct+1. $$</span>
This polynomial has three real roots iff its discriminant is non-negative, i.e. iff
<span class="math-container">$$ 4c^3-c^2+18c+23\leq 0.$$</span>
We want to maximize/minimize <span class="math-container">$x^2+y^2+z^2 = (x+y+z)^2-2c = 1-2c$</span> subject to the previous constraint, but the polynomial <span class="math-container">$4c^3-c^2+18c+23$</span> has a unique real root at <span class="math-container">$c=-1$</span>. It follows that under the constraints <span class="math-container">$x+y+z=1$</span> and <span class="math-container">$xyz=-1$</span> the quantity <span class="math-container">$x^2+y^2+z^2$</span> can take any value <span class="math-container">$\geq 3$</span>; the minimum <span class="math-container">$3$</span> is attained at the cyclic permutations of <span class="math-container">$(1,1,-1)$</span>, and there is no maximum.</p>
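<p>A quick numerical check of this reasoning (an illustrative Python sketch, not part of the original argument): for several <span class="math-container">$c\leq -1$</span> the cubic indeed has three real roots with sum <span class="math-container">$1$</span>, product <span class="math-container">$-1$</span>, and sum of squares <span class="math-container">$1-2c$</span>.</p>

```python
import numpy as np

# Discriminant condition from the answer: 4c^3 - c^2 + 18c + 23 <= 0,
# with unique real root at c = -1.
disc = lambda c: 4*c**3 - c**2 + 18*c + 23
assert abs(disc(-1.0)) < 1e-12

for c in (-1.0, -2.0, -5.0):
    r = np.roots([1, -1, c, 1])              # roots of t^3 - t^2 + c t + 1
    assert np.all(np.abs(r.imag) < 1e-6)     # three (numerically) real roots
    x = r.real
    assert np.isclose(x.sum(), 1.0)          # x + y + z = 1
    assert np.isclose(x.prod(), -1.0)        # xyz = -1
    assert np.isclose((x**2).sum(), 1 - 2*c) # sum of squares = 1 - 2c >= 3
```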
|
226,346 | <p>I have the three-dimensional Laplacian <span class="math-container">$\nabla^2 T(x,y,z)=0$</span> representing the temperature distribution in a cuboid-shaped wall which is exposed to two fluids flowing perpendicular to each other on the two <span class="math-container">$z$</span> faces, i.e. at <span class="math-container">$z=0$</span> (ABCD) and <span class="math-container">$z=w$</span> (EFGH). All the remaining faces, i.e. <span class="math-container">$x=0,L$</span> and <span class="math-container">$y=0,l$</span>, are insulated. The following figure depicts the situation.<a href="https://i.stack.imgur.com/T4kKK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T4kKK.png" alt="enter image description here" /></a></p>
<p>The boundary conditions on the lateral faces are therefore:</p>
<p><span class="math-container">$$-k\frac{\partial T(0,y,z)}{\partial x}=-k\frac{\partial T(L,y,z)}{\partial x}=-k\frac{\partial T(x,0,z)}{\partial y}=-k\frac{\partial T(x,l,z)}{\partial y}=0 \tag 1$$</span></p>
<p>The boundary conditions on the two z-faces are of Robin type and have the following form:</p>
<p><span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z} = p_c\bigg(T(x,y,0)-e^{-b_c y/l}\left[t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c s/l}T(x,s,0)ds\right]\bigg) \tag 2$$</span></p>
<p><span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z} = p_h\bigg(e^{-b_h x/L}\left[t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\right]-T(x,y,w)\bigg) \tag 3$$</span></p>
<p><span class="math-container">$t_{hi}, t_{ci}, b_h, b_c, p_h, p_c, k$</span> are all constants <span class="math-container">$>0$</span>.</p>
<p>I have two questions:</p>
<p><strong>(1)</strong> With the insulated conditions mentioned in <span class="math-container">$(1)$</span> does a solution exist for this system?</p>
<p><strong>(2)</strong> Can someone help in solving this analytically?
I tried to solve this using separation of variables, but obtained the results which I describe below (in short, I end up with a <em>trivial solution</em>):</p>
<p>I include the code for reference:</p>
<pre><code>T[x_, y_, z_] = (C1*E^(γ z) + C2 E^(-γ z))*
Cos[n π x/L]*Cos[m π y/l] (*Preliminary T based on homogeneous Neumann x,y faces *)
tc[x_, y_] =
E^(-bc*y/l)*(tci + (bc/l)*
Integrate[E^(bc*s/l)*T[x, s, 0], {s, 0, y}]);
bc1 = (D[T[x, y, z], z] /. z -> 0) == pc (T[x, y, 0] - tc[x, y]);
ortheq1 =
Integrate[(bc1[[1]] - bc1[[2]])*Cos[n π x/L]*
Cos[m π y/l], {x, 0, L}, {y, 0, l},
Assumptions -> {L > 0, l > 0, bc > 0, pc > 0, tci > 0,
n ∈ Integers && n > 0,
m ∈ Integers && m > 0}] == 0 // Simplify
th[x_, y_] =
E^(-bh*x/L)*(thi + (bh/L)*
Integrate[E^(bh*s/L)*T[s, y, w], {s, 0, x}]);
bc2 = (D[T[x, y, z], z] /. z -> w) == ph (th[x, y] - T[x, y, w]);
ortheq2 =
Integrate[(bc2[[1]] - bc2[[2]])*Cos[n π x/L]*
Cos[m π y/l], {x, 0, L}, {y, 0, l},
Assumptions -> {L > 0, l > 0, bc > 0, pc > 0, tci > 0,
n ∈ Integers && n > 0,
m ∈ Integers && m > 0}] == 0 // Simplify
soln = Solve[{ortheq1, ortheq2}, {C1, C2}];
CC1 = C1 /. soln[[1, 1]];
CC2 = C2 /. soln[[1, 2]];
expression1 := CC1;
c1[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] :=
Evaluate[expression1];
expression2 := CC2;
c2[n_, m_, L_, l_, bc_, pc_, tci_, bh_, ph_, thi_, w_] :=
Evaluate[expression2];
γ1[n_, m_] := Sqrt[(n π/L)^2 + (m π/l)^2];
</code></pre>
<p>I have used <code>Cos[n π x/L]*Cos[m π y/l]</code> considering the homogeneous Neumann condition on the lateral faces i.e. <span class="math-container">$x$</span> and <span class="math-container">$y$</span> faces.</p>
<p>Declaring some constants and then carrying out the summation:</p>
<pre><code>m0 = 30; n0 = 30;
L = 0.025; l = 0.025; w = 0.003; bh = 0.433; bc = 0.433; ph = 65.24; \
pc = 65.24;
thi = 120; tci = 30;
Vn = Sum[(c1[n, m, L, l, bc, pc, tci, bh, ph, thi, w]*
E^(γ1[n, m]*z) +
c2[n, m, L, l, bc, pc, tci, bh, ph, thi, w]*
E^(-γ1[n, m]*z))*Cos[n π x/L]*Cos[m π y/l], {n,
1, n0}, {m, 1, m0}];
</code></pre>
<p>On executing and plotting at <code>z=0</code> using <code>Plot3D[Vn /. z -> 0, {x, 0, L}, {y, 0, l}]</code> I get the following:</p>
<p><a href="https://i.stack.imgur.com/HxIiR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HxIiR.jpg" alt="enter image description here" /></a></p>
<p>which is basically 0. On looking further I found that the constants <code>c1, c2</code> evaluate to <code>0</code> for any value of <code>n,m</code>.</p>
<p><strong>More specifically I would like to know if some limiting solution could be developed to circumvent the problem of the constants evaluating to zero</strong></p>
<hr />
<p><strong>Origins of the b.c.</strong><span class="math-container">$2,3$</span></p>
<p>Actual bc(s): <span class="math-container">$$\frac{\partial T(x,y,0)}{\partial z}=p_c (T(x,y,0)-t_c) \tag 4$$</span>
<span class="math-container">$$\frac{\partial T(x,y,w)}{\partial z}=p_h (t_h-T(x,y,w))\tag 5$$</span></p>
<p>where <span class="math-container">$t_h,t_c$</span> are defined by the equations:</p>
<p><span class="math-container">$$\frac{\partial t_c}{\partial y}+\frac{b_c}{l}(t_c-T(x,y,0))=0 \tag 6$$</span>
<span class="math-container">$$\frac{\partial t_h}{\partial x}+\frac{b_h}{L}(t_h-T(x,y,w))=0 \tag 7$$</span></p>
<p><span class="math-container">$$t_h=e^{-b_h x/L}\bigg(t_{hi} + \frac{b_h}{L}\int_0^x e^{b_h s/L}T(x,s,w)ds\bigg) \tag 8$$</span></p>
<p><span class="math-container">$$t_c=e^{-b_c y/l}\bigg(t_{ci} + \frac{b_c}{l}\int_0^y e^{b_c s/l}T(x,s,0)ds\bigg) \tag 9$$</span></p>
<p>It is known that <span class="math-container">$t_h(x=0)=t_{hi}$</span> and <span class="math-container">$t_c(y=0)=t_{ci}$</span>. I had solved <span class="math-container">$6,7$</span> using the method of integrating factors and used the given conditions to reach <span class="math-container">$8,9$</span> which were then substituted into the original b.c.(s) <span class="math-container">$4,5$</span> to reach <span class="math-container">$2,3$</span>.</p>
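<p>As a consistency check of this derivation (a sketch not in the original post; the wall temperature is left as an unknown symbolic function for a fixed <span class="math-container">$x$</span>), SymPy can verify that expression <span class="math-container">$(9)$</span> satisfies ODE <span class="math-container">$(6)$</span>:</p>

```python
import sympy as sp

y, s = sp.symbols('y s')
b, l, tci = sp.symbols('b_c l t_ci', positive=True)
T = sp.Function('T')   # wall temperature T(x, ., 0) for a fixed x

# Eq. (9): tc = e^{-bc y/l} ( t_ci + (bc/l) * Integral_0^y e^{bc s/l} T(s) ds )
tc = sp.exp(-b*y/l)*(tci + (b/l)*sp.Integral(sp.exp(b*s/l)*T(s), (s, 0, y)))

# Substitute into ODE (6): d tc/dy + (bc/l) (tc - T(y)) must vanish identically.
residual = sp.diff(tc, y) + (b/l)*(tc - T(y))
assert sp.simplify(residual) == 0
```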
<hr />
<p><strong>Non-dimensional formulation</strong>
The non-dimensional version of the problem can be written as:</p>
<p>(In this section <span class="math-container">$x,y,z$</span> are non-dimensional; <span class="math-container">$x=x'/L,y=y'/l,z=z'/w, \theta=\frac{t-t_{ci}}{t_{hi}-t_{ci}}$</span>)</p>
<p>Also, <span class="math-container">$\beta_h=h_h (lL)/C_h, \beta_c=h_c (lL)/C_c$</span> (However, this information might not be needed)</p>
<p><span class="math-container">$$\lambda_h \frac{\partial^2 \theta_w}{\partial x^2}+\lambda_c \frac{\partial^2 \theta_w}{\partial y^2}+\lambda_z \frac{\partial^2 \theta_w}{\partial z^2}=0 \tag A$$</span></p>
<p>In <span class="math-container">$(A)$</span> <span class="math-container">$\lambda_h=1/L^2, \lambda_c=1/l^2, \lambda_z=1/w^2$</span></p>
<p><span class="math-container">$$\frac{\partial \theta_h}{\partial x}+\beta_h (\theta_h-\theta_w) = 0 \tag B$$</span></p>
<p><span class="math-container">$$\frac{\partial \theta_c}{\partial y} + \beta_c (\theta_c-\theta_w) = 0 \tag C$$</span></p>
<p>The z-boundary conditions then become:</p>
<p><span class="math-container">$$\frac{\partial \theta_w(x,y,0)}{\partial z}=r_c (\theta_w(x,y,0)-\theta_c) \tag D$$</span>
<span class="math-container">$$\frac{\partial \theta_w(x,y,w)}{\partial z}=r_h (\theta_h-\theta_w(x,y,w))\tag E$$</span></p>
<p><span class="math-container">$$\theta_h(0,y)=1, \theta_c(x,0)=0$$</span></p>
<p>Here <span class="math-container">$r_h,r_c$</span> are non-dimensional quantities (<span class="math-container">$r_c=\frac{h_c w}{k}, r_h=\frac{h_h w}{k}$</span>).</p>
| Alex Trounev | 58,388 | <p>We can solve this problem using the method explained in my answer <a href="https://mathematica.stackexchange.com/questions/262458/unbounded-solution-at-boundaries-with-few-combination-of-values-in-this-bvp-solu/262563?noredirect=1#comment654595_262563">here</a> and in the paper attached to this <a href="https://community.wolfram.com/groups/-/m/t/2457908" rel="nofollow noreferrer">post</a>. We solve the following system of equations in the unit cube:</p>
<pre><code>eq1 = λh D[θw[x, y, z], x,
x] + λc D[θw[x, y, z], y,
y] + λz D[θw[x, y, z], z, z] ==
0; bc1 = {(D[θw[x, y, z], z] +
rh (θw[x, y, z] - θwh[x, y]) == 0) /. z -> 1,
(D[θw[x, y, z], z] -
rc (θw[x, y, z] - θwc[x, y]) == 0) /.
z -> 0}; eq2 =
D[θwh[x, y], x] +
bh (θw[x, y, 1] - θwh[x, y]) == 0;
bc2 = θwh[0, y] == 1;
eq3 = D[θwc[x, y], y] -
bc (θw[x, y, 0] - θwc[x, y]) == 0;
bc3 = θwc[x, 0] == 0;
</code></pre>
<p>First we generate base functions and solution to the problem as follows</p>
<pre><code>UE[m_, t_] := Cos[m t] Exp[-m t] (*base functions; E is protected in Mathematica, and the code below calls UE*)
nn = 5;
dx = 1/(nn); xl = Table[l*dx, {l, 0, nn}]; ycol =
xcol = Table[(xl[[l - 1]] + xl[[l]])/2, {l, 2,
nn + 1}]; zcol = ycol; Psijk =
Table[UE[n, t1], {n, 0, nn - 1}]; Int1 = Integrate[Psijk, t1];
Int2 = Integrate[Int1, t1];
Psi[y_] := Psijk /. t1 -> y; int1[y_] := Int1 /. t1 -> y;
int2[y_] := Int2 /. t1 -> y; M = nn;
M = nn; U1 = Array[a1, {M, M, M}]; U2 = Array[a2, {M, M, M}]; U3 =
Array[a3, {M, M, M}]; B1 = Array[b1, {M, M}]; B2 =
Array[b2, {M, M}]; B3 = Array[b3, {M, M}]; G1 =
Array[g1, {M, M}]; G2 = Array[g2, {M, M}]; G3 =
Array[g3, {M, M}]; G4 = Array[g4, {M, M}]; G5 = Array[g5, {M, M}];
H1 = Array[h1, {M}]; H2 = Array[h2, {M}];
thx[x_, y_] := (Psi[x] . G5 . Psi[y]);
tcy[x_, y_] := (Psi[x] . G4 . Psi[y]);
th[x_, y_] := (int1[x] . G5 . Psi[y]) + H2 . Psi[y];
tc[x_, y_] := (Psi[x] . G4 . int1[y]) + H1 . Psi[x];
u1[x_, y_, z_] := (int2[x] . U1 . Psi[y]) . Psi[z] +
x Psi[y] . G1 . Psi[z] + Psi[y] . B1 . Psi[z];
u2[x_, y_, z_] := (Psi[x] . U2 . int2[y]) . Psi[z] +
y Psi[x] . G2 . Psi[z] + Psi[x] . B2 . Psi[z];
u3[x_, y_, z_] := (Psi[x] . U3 . Psi[y]) . int2[z] +
z Psi[x] . G3 . Psi[y] + Psi[x] . B3 . Psi[y];
uz[x_, y_, z_] := (Psi[x] . U3 . Psi[y]) . int1[z] +
Psi[x] . G3 . Psi[y];
uy[x_, y_, z_] := (Psi[x] . U2 . int1[y]) . Psi[z] +
Psi[x] . G2 . Psi[z];
ux[x_, y_, z_] := (int1[x] . U1 . Psi[y]) . Psi[z] +
Psi[y] . G1 . Psi[z];
uxx[x_, y_, z_] := (Psi[x] . U1 . Psi[y]) . Psi[z];
uyy[x_, y_, z_] := (Psi[x] . U2 . Psi[y]) . Psi[z];
uzz[x_, y_, z_] := (Psi[x] . U3 . Psi[y]) . Psi[z];
</code></pre>
<p>Parameters of the model, equations on the grid and variables definition</p>
<pre><code>(*Another set of parameters can be bh=bc=2.065, rh=rc=0.861, λh=λc=0.0118, λz=0.8162. These parameters correspond to a miniaturized steel (k=16 W/mK) wall where L=l=25 mm, w=3 mm, with water (cp=4178 J/kgK) flowing on either side with a mass flow rate of 0.9775 gm/sec. The heat transfer coefficient (h) is set to 4590 W/sq.m K.*)
bh = bc = 2.065; rh =
rc = 0.861; λh = λc = 0.0118; λz = 0.8162;
eq = Join[
Flatten[Table[(λh uxx[xcol[[i]], ycol[[j]],
zcol[[k]]] + λc uyy[xcol[[i]], ycol[[j]],
zcol[[k]]] + λz uzz[xcol[[i]], ycol[[j]],
zcol[[k]]]) == 0, {i, M}, {j, M}, {k, M}]],
Flatten[Table[
u1[xcol[[i]], ycol[[j]], zcol[[k]]] -
u2[xcol[[i]], ycol[[j]], zcol[[k]]] == 0, {i, M}, {j, M}, {k,
M}]], Flatten[
Table[u1[xcol[[i]], ycol[[j]], zcol[[k]]] -
u3[xcol[[i]], ycol[[j]], zcol[[k]]] == 0, {i, M}, {j, M}, {k,
M}]], Flatten[
Table[ux[1, ycol[[i]], zcol[[j]]] == 0, {i, M}, {j, M}]],
Flatten[Table[uy[xcol[[i]], 1, zcol[[j]]] == 0, {i, M}, {j, M}]],
Flatten[Table[ux[0, ycol[[i]], zcol[[j]]] == 0, {i, M}, {j, M}]],
Flatten[Table[uy[xcol[[i]], 0, zcol[[j]]] == 0, {i, M}, {j, M}]],
Flatten[Table[
uz[xcol[[i]], ycol[[j]], 1] +
rh (u3[xcol[[i]], ycol[[j]], 1] - th[xcol[[i]], ycol[[j]]]) ==
0, {i, M}, {j, M}]],
Flatten[Table[
uz[xcol[[i]], ycol[[j]], 0] -
rc (u3[xcol[[i]], ycol[[j]], 0] - tc[xcol[[i]], ycol[[j]]]) ==
0, {i, M}, {j, M}]],
Flatten[Table[
thx[xcol[[i]], ycol[[j]]] -
bh (u3[xcol[[i]], ycol[[j]], 1] - th[xcol[[i]], ycol[[j]]]) ==
0, {i, M}, {j, M}]],
Flatten[Table[
tcy[xcol[[i]], ycol[[j]]] -
bc (u3[xcol[[i]], ycol[[j]], 0] - tc[xcol[[i]], ycol[[j]]]) ==
0, {i, M}, {j, M}]], Table[th[0, ycol[[i]]] == 1., {i, M}],
Table[tc[xcol[[i]], 0] == 0., {i, M}]];
var = Join[Flatten[U1], Flatten[U2], Flatten[U3], Flatten[B1],
Flatten[B2], Flatten[B3], Flatten[G1], Flatten[G2], Flatten[G3],
Flatten[G4], Flatten[G5], H1, H2];
</code></pre>
<p>Solution and visualization</p>
<pre><code>{v, mat} = CoefficientArrays[eq, var];
sol1 = LinearSolve[mat, -v];
rul = Table[var[[i]] -> sol1[[i]], {i, Length[var]}];
{Plot3D[Evaluate[tc[x, y] /. rul], {x, 0, 1}, {y, 0, 1},
ColorFunction -> "Rainbow", MeshStyle -> White,
PlotLegends -> Automatic, PlotLabel -> \[Theta]c],
Plot3D[Evaluate[th[x, y] /. rul], {x, 0, 1}, {y, 0, 1},
ColorFunction -> "Rainbow", MeshStyle -> White,
PlotLegends -> Automatic, PlotLabel -> \[Theta]h],
Table[Plot3D[Evaluate[u1[x, y, z] /. rul], {x, 0, 1}, {y, 0, 1},
ColorFunction -> "Rainbow", MeshStyle -> White,
PlotLegends -> Automatic, PlotLabel -> \[Theta]w[z]], {z, 0,
1, .2}]}
</code></pre>
<p><a href="https://i.stack.imgur.com/WtWYE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WtWYE.png" alt="Figure 1" /></a></p>
<p>We can compare this code with the code developed by bbgodfrey above, but we need to change the parameters as well:</p>
<pre><code>DSolveValue[{D[θc[y], y] + b (θc[y] - 1) ==
0, θc[0] == 0}, θc[y], y] // Simplify;
a00 = Simplify[Integrate[%, {y, 0, 1}]];
an0 = Simplify[Integrate[%% 2 Cos[n π y], {y, 0, 1}],
Assumptions -> n ∈ Integers];
DSolveValue[{D[θc[y], y] + b (θc[y] - Cos[m Pi y]) ==
0, θc[0] == 0}, θc[y], y] // Simplify;
a0m = Simplify[Integrate[%, {y, 0, 1}],
Assumptions -> m ∈ Integers];
amm = Simplify[Integrate[%% 2 Cos[m π y], {y, 0, 1}],
Assumptions -> m ∈ Integers];
anm = FullSimplify[Integrate[%%% 2 Cos[n π y], {y, 0, 1}],
Assumptions -> (m | n) ∈ Integers];
a[nn_?IntegerQ, mm_?IntegerQ] :=
Which[nn == 0 && mm == 0, a00, mm == 0, an0, nn == 0, a0m, nn == mm,
amm, True, anm] /. {n -> nn, m -> mm};
λh D[θw[x, y, z], x,
x] + λc D[θw[x, y, z], y,
y] + λz D[θw[x, y, z], z, z];
Simplify[(% /. θw ->
Function[{x, y, z},
Cos[nh Pi x] Cos[nc Pi y] θwz[z]])/(Cos[nh Pi x] Cos[
nc Pi y])] /. π^2 (nc^2 λc + nh^2 λh) ->
k[nh, nc]^2 λz;
Flatten@DSolveValue[% == 0, θwz[z], z] /. {C[1] -> c1[nh, nc],
C[2] -> c2[nh, nc]};
sz = c2[nh, nc] Sinh[k[nh, nc] z]/Cosh[k[nh, nc]] +
c1[nh, nc] Sinh[k[nh, nc] (1 - z)]/Sinh[k[nh, nc]];
sθc[nh_?IntegerQ, nc_?IntegerQ] :=
Sum[a[nc, m] c1[nh, m], {m, 0, maxc}] /. b -> bc;
(D[sz, z] == rc (sz - sθc[nh, nc])) /. z -> 0;
Solve[%, c2[nh, nc]] // Flatten // Apart;
sz1 = Simplify[sz /. %] // Apart;
szθh[nh_?IntegerQ, nc_?IntegerQ] := Evaluate[sz1 /. z -> 1];
sθh[nh_?IntegerQ, nc_?IntegerQ] :=
Evaluate[Sum[a[nh, m] szθh[m, nc], {m, 0, maxh}]];
eq = Simplify[(D[sz1, z] + rh (sz1 - sθh[nh, nc])) /. z -> 1] -
rh (DiscreteDelta[nh] - a[nh, 0]) DiscreteDelta[nc];
maxh = 3; maxc = 3; λh = 1; λc = 1; λz = 1; \
bh = 1; bc = 1; rh = 1; rc = 1;
ks = Flatten@
Table[k[nh, nc] ->
Sqrt[π^2 (nc^2 λc +
nh^2 λh)/λz], {nh, 0, maxh}, {nc, 0, maxc}];
eql = N[Collect[
Flatten@Table[
eq /. Sinh[k[0, 0]] -> k[0, 0], {nh, 0, maxh}, {nc, 0,
maxc}] /. b -> bh, _c1, Simplify] /. ks] /.
c1[z1_, z2_] :> Rationalize[c1[z1, z2]];
Union@Cases[eql, _c1, Infinity];
coef = NSolve[Thread[eql == 0], %] // Flatten;
sol = Total@
Simplify[
Flatten@Table[
sz1 Cos[nh Pi x] Cos[nc Pi y] /.
Sinh[z k[0, 0]] -> z k[0, 0], {nh, 0, maxh}, {nc, 0, maxc}],
Trig -> False] /. ks /. %;
(*{c1[0,0]->0.3788,c1[0,1]->-0.0234913,c1[0,2]->-0.00123552,c1[0,3]->-\
0.00109202,c1[1,0]->0.00168554,c1[1,1]->-0.0000775391,c1[1,2]->-5.\
40917*10^-6,c1[1,3]->-4.63996*10^-6,c1[2,0]->4.19045*10^-6,c1[2,1]->-\
1.24251*10^-7,c1[2,2]->-1.17696*10^-8,c1[2,3]->-1.02576*10^-8,c1[3,0]->\
1.65131*10^-7,c1[3,1]->-3.41814*10^-9,c1[3,2]->3.86348*10^-10,c1[3,3]->\
-3.48432*10^-10}*)
</code></pre>
<p>Visualization</p>
<pre><code>pl1 = Table[
Plot3D[sol, {x, 0, 1}, {y, 0, 1}, PlotLegends -> Automatic,
AxesLabel -> {x, y}, Mesh -> None, ColorFunction -> "Rainbow",
PlotLabel -> Row[{"z = ", z}]], {z, 0, 1, .2}]
</code></pre>
<p><a href="https://i.stack.imgur.com/EO2RK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EO2RK.png" alt="Figure 2" /></a></p>
<p>Numerical table for comparison on the grid</p>
<pre><code>tab1 =
Table[{x, y, z, sol}, {x, 0, 1, .2}, {y, 0, 1, .2}, {z, 0, 1, .2}]
(*Out[]= {{{{0., 0., 0., 0.354583}, {0., 0., 0.2, 0.416427}, {0., 0.,
0.4, 0.472633}, {0., 0., 0.6, 0.527367}, {0., 0., 0.8,
0.583573}, {0., 0., 1., 0.645417}}, {{0., 0.2, 0., 0.361377}, {0.,
0.2, 0.2, 0.419296}, {0., 0.2, 0.4, 0.474032}, {0., 0.2, 0.6,
0.528107}, {0., 0.2, 0.8, 0.584005}, {0., 0.2, 1.,
0.645725}}, {{0., 0.4, 0., 0.375098}, {0., 0.4, 0.2,
0.426074}, {0., 0.4, 0.4, 0.47755}, {0., 0.4, 0.6, 0.530014}, {0.,
0.4, 0.8, 0.585128}, {0., 0.4, 1., 0.646529}}, {{0., 0.6, 0.,
0.387889}, {0., 0.6, 0.2, 0.433593}, {0., 0.6, 0.4,
0.481704}, {0., 0.6, 0.6, 0.532322}, {0., 0.6, 0.8,
0.586504}, {0., 0.6, 1., 0.647516}}, {{0., 0.8, 0.,
0.398835}, {0., 0.8, 0.2, 0.439582}, {0., 0.8, 0.4,
0.484998}, {0., 0.8, 0.6, 0.534165}, {0., 0.8, 0.8,
0.587608}, {0., 0.8, 1., 0.648311}}, {{0., 1., 0., 0.403914}, {0.,
1., 0.2, 0.441963}, {0., 1., 0.4, 0.486258}, {0., 1., 0.6,
0.534865}, {0., 1., 0.8, 0.588028}, {0., 1., 1.,
0.648613}}}, {{{0.2, 0., 0., 0.354275}, {0.2, 0., 0.2,
0.415995}, {0.2, 0., 0.4, 0.471893}, {0.2, 0., 0.6,
0.525968}, {0.2, 0., 0.8, 0.580704}, {0.2, 0., 1.,
0.638623}}, {{0.2, 0.2, 0., 0.361064}, {0.2, 0.2, 0.2,
0.418862}, {0.2, 0.2, 0.4, 0.473291}, {0.2, 0.2, 0.6,
0.526709}, {0.2, 0.2, 0.8, 0.581138}, {0.2, 0.2, 1.,
0.638936}}, {{0.2, 0.4, 0., 0.374776}, {0.2, 0.4, 0.2,
0.425637}, {0.2, 0.4, 0.4, 0.476809}, {0.2, 0.4, 0.6,
0.528616}, {0.2, 0.4, 0.8, 0.582265}, {0.2, 0.4, 1.,
0.639752}}, {{0.2, 0.6, 0., 0.38756}, {0.2, 0.6, 0.2,
0.433153}, {0.2, 0.6, 0.4, 0.480962}, {0.2, 0.6, 0.6,
0.530926}, {0.2, 0.6, 0.8, 0.583645}, {0.2, 0.6, 1.,
0.640754}}, {{0.2, 0.8, 0., 0.398498}, {0.2, 0.8, 0.2,
0.439139}, {0.2, 0.8, 0.4, 0.484255}, {0.2, 0.8, 0.6,
0.532769}, {0.2, 0.8, 0.8, 0.584753}, {0.2, 0.8, 1.,
0.641561}}, {{0.2, 1., 0., 0.403574}, {0.2, 1., 0.2,
0.441519}, {0.2, 1., 0.4, 0.485515}, {0.2, 1., 0.6,
0.53347}, {0.2, 1., 0.8, 0.585174}, {0.2, 1., 1.,
0.641868}}}, {{{0.4, 0., 0., 0.353471}, {0.4, 0., 0.2,
0.414872}, {0.4, 0., 0.4, 0.469986}, {0.4, 0., 0.6,
0.52245}, {0.4, 0., 0.8, 0.573926}, {0.4, 0., 1.,
0.624902}}, {{0.4, 0.2, 0., 0.360248}, {0.4, 0.2, 0.2,
0.417735}, {0.4, 0.2, 0.4, 0.471384}, {0.4, 0.2, 0.6,
0.523191}, {0.4, 0.2, 0.8, 0.574363}, {0.4, 0.2, 1.,
0.625224}}, {{0.4, 0.4, 0., 0.373936}, {0.4, 0.4, 0.2,
0.424502}, {0.4, 0.4, 0.4, 0.474899}, {0.4, 0.4, 0.6,
0.525101}, {0.4, 0.4, 0.8, 0.575498}, {0.4, 0.4, 1.,
0.626064}}, {{0.4, 0.6, 0., 0.3867}, {0.4, 0.6, 0.2,
0.432008}, {0.4, 0.6, 0.4, 0.47905}, {0.4, 0.6, 0.6,
0.527413}, {0.4, 0.6, 0.8, 0.576888}, {0.4, 0.6, 1.,
0.627096}}, {{0.4, 0.8, 0., 0.397621}, {0.4, 0.8, 0.2,
0.437988}, {0.4, 0.8, 0.4, 0.482342}, {0.4, 0.8, 0.6,
0.529259}, {0.4, 0.8, 0.8, 0.578005}, {0.4, 0.8, 1.,
0.627926}}, {{0.4, 1., 0., 0.402688}, {0.4, 1., 0.2,
0.440365}, {0.4, 1., 0.4, 0.483601}, {0.4, 1., 0.6,
0.52996}, {0.4, 1., 0.8, 0.578429}, {0.4, 1., 1.,
0.628242}}}, {{{0.6, 0., 0., 0.352484}, {0.6, 0., 0.2,
0.413496}, {0.6, 0., 0.4, 0.467678}, {0.6, 0., 0.6,
0.518296}, {0.6, 0., 0.8, 0.566407}, {0.6, 0., 1.,
0.612111}}, {{0.6, 0.2, 0., 0.359246}, {0.6, 0.2, 0.2,
0.416355}, {0.6, 0.2, 0.4, 0.469074}, {0.6, 0.2, 0.6,
0.519038}, {0.6, 0.2, 0.8, 0.566847}, {0.6, 0.2, 1.,
0.61244}}, {{0.6, 0.4, 0., 0.372904}, {0.6, 0.4, 0.2,
0.423112}, {0.6, 0.4, 0.4, 0.472587}, {0.6, 0.4, 0.6,
0.52095}, {0.6, 0.4, 0.8, 0.567992}, {0.6, 0.4, 1.,
0.6133}}, {{0.6, 0.6, 0., 0.385643}, {0.6, 0.6, 0.2,
0.430607}, {0.6, 0.6, 0.4, 0.476735}, {0.6, 0.6, 0.6,
0.523265}, {0.6, 0.6, 0.8, 0.569393}, {0.6, 0.6, 1.,
0.614357}}, {{0.6, 0.8, 0., 0.396543}, {0.6, 0.8, 0.2,
0.436578}, {0.6, 0.8, 0.4, 0.480024}, {0.6, 0.8, 0.6,
0.525113}, {0.6, 0.8, 0.8, 0.570518}, {0.6, 0.8, 1.,
0.615207}}, {{0.6, 1., 0., 0.4016}, {0.6, 1., 0.2,
0.438952}, {0.6, 1., 0.4, 0.481282}, {0.6, 1., 0.6,
0.525816}, {0.6, 1., 0.8, 0.570946}, {0.6, 1., 1.,
0.615531}}}, {{{0.8, 0., 0., 0.351689}, {0.8, 0., 0.2,
0.412392}, {0.8, 0., 0.4, 0.465835}, {0.8, 0., 0.6,
0.515002}, {0.8, 0., 0.8, 0.560418}, {0.8, 0., 1.,
0.601165}}, {{0.8, 0.2, 0., 0.358439}, {0.8, 0.2, 0.2,
0.415247}, {0.8, 0.2, 0.4, 0.467231}, {0.8, 0.2, 0.6,
0.515745}, {0.8, 0.2, 0.8, 0.560861}, {0.8, 0.2, 1.,
0.601502}}, {{0.8, 0.4, 0., 0.372074}, {0.8, 0.4, 0.2,
0.421995}, {0.8, 0.4, 0.4, 0.470741}, {0.8, 0.4, 0.6,
0.517658}, {0.8, 0.4, 0.8, 0.562012}, {0.8, 0.4, 1.,
0.602379}}, {{0.8, 0.6, 0., 0.384793}, {0.8, 0.6, 0.2,
0.429482}, {0.8, 0.6, 0.4, 0.474887}, {0.8, 0.6, 0.6,
0.519976}, {0.8, 0.6, 0.8, 0.563422}, {0.8, 0.6, 1.,
0.603457}}, {{0.8, 0.8, 0., 0.395675}, {0.8, 0.8, 0.2,
0.435447}, {0.8, 0.8, 0.4, 0.478174}, {0.8, 0.8, 0.6,
0.521826}, {0.8, 0.8, 0.8, 0.564553}, {0.8, 0.8, 1.,
0.604325}}, {{0.8, 1., 0., 0.400723}, {0.8, 1., 0.2,
0.437817}, {0.8, 1., 0.4, 0.479432}, {0.8, 1., 0.6,
0.522529}, {0.8, 1., 0.8, 0.564984}, {0.8, 1., 1.,
0.604656}}}, {{{1., 0., 0., 0.351387}, {1., 0., 0.2,
0.411972}, {1., 0., 0.4, 0.465135}, {1., 0., 0.6, 0.513742}, {1.,
0., 0.8, 0.558037}, {1., 0., 1., 0.596086}}, {{1., 0.2, 0.,
0.358132}, {1., 0.2, 0.2, 0.414826}, {1., 0.2, 0.4, 0.46653}, {1.,
0.2, 0.6, 0.514485}, {1., 0.2, 0.8, 0.558481}, {1., 0.2, 1.,
0.596426}}, {{1., 0.4, 0., 0.371758}, {1., 0.4, 0.2,
0.421571}, {1., 0.4, 0.4, 0.47004}, {1., 0.4, 0.6, 0.516399}, {1.,
0.4, 0.8, 0.559635}, {1., 0.4, 1., 0.597312}}, {{1., 0.6, 0.,
0.384469}, {1., 0.6, 0.2, 0.429054}, {1., 0.6, 0.4,
0.474184}, {1., 0.6, 0.6, 0.518718}, {1., 0.6, 0.8,
0.561048}, {1., 0.6, 1., 0.5984}}, {{1., 0.8, 0., 0.395344}, {1.,
0.8, 0.2, 0.435016}, {1., 0.8, 0.4, 0.477471}, {1., 0.8, 0.6,
0.520568}, {1., 0.8, 0.8, 0.562183}, {1., 0.8, 1.,
0.599277}}, {{1., 1., 0., 0.400389}, {1., 1., 0.2, 0.437385}, {1.,
1., 0.4, 0.478728}, {1., 1., 0.6, 0.521272}, {1., 1., 0.8,
0.562615}, {1., 1., 1., 0.599611}}}}*)
</code></pre>
<p>Results computed with our code for <code>λh = 1; λc = 1; λz = 1; bh = 1; bc = 1; rh = 1; rc = 1;</code><br />
<a href="https://i.stack.imgur.com/KJEfD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KJEfD.png" alt="Figure 3" /></a></p>
<p>Numerical table</p>
<pre><code>tab2 =
Table[{x, y, z, u3[x, y, z] /. rul}, {x, 0, 1, .2}, {y, 0,
1, .2}, {z, 0, 1, .2}]
(*Out[]= {{{{0., 0., 0., 0.351913}, {0., 0., 0.2, 0.415671}, {0., 0.,
0.4, 0.472425}, {0., 0., 0.6, 0.527563}, {0., 0., 0.8,
0.584311}, {0., 0., 1., 0.648035}}, {{0., 0.2, 0., 0.362043}, {0.,
0.2, 0.2, 0.41954}, {0., 0.2, 0.4, 0.474254}, {0., 0.2, 0.6,
0.528512}, {0., 0.2, 0.8, 0.584869}, {0., 0.2, 1.,
0.648406}}, {{0., 0.4, 0., 0.374862}, {0., 0.4, 0.2,
0.426163}, {0., 0.4, 0.4, 0.477734}, {0., 0.4, 0.6,
0.530404}, {0., 0.4, 0.8, 0.585983}, {0., 0.4, 1.,
0.649196}}, {{0., 0.6, 0., 0.388098}, {0., 0.6, 0.2,
0.433759}, {0., 0.6, 0.4, 0.481916}, {0., 0.6, 0.6,
0.532732}, {0., 0.6, 0.8, 0.587369}, {0., 0.6, 1.,
0.650192}}, {{0., 0.8, 0., 0.398678}, {0., 0.8, 0.2,
0.439659}, {0., 0.8, 0.4, 0.485185}, {0., 0.8, 0.6,
0.534565}, {0., 0.8, 0.8, 0.588468}, {0., 0.8, 1.,
0.650978}}, {{0., 1., 0., 0.405092}, {0., 1., 0.2, 0.442489}, {0.,
1., 0.4, 0.486624}, {0., 1., 0.6, 0.535347}, {0., 1., 0.8,
0.588936}, {0., 1., 1., 0.651302}}}, {{{0.2, 0., 0.,
0.351575}, {0.2, 0., 0.2, 0.415109}, {0.2, 0., 0.4,
0.471481}, {0.2, 0., 0.6, 0.525731}, {0.2, 0., 0.8,
0.580452}, {0.2, 0., 1., 0.63791}}, {{0.2, 0.2, 0.,
0.361696}, {0.2, 0.2, 0.2, 0.418975}, {0.2, 0.2, 0.4,
0.47331}, {0.2, 0.2, 0.6, 0.526681}, {0.2, 0.2, 0.8,
0.581013}, {0.2, 0.2, 1., 0.638289}}, {{0.2, 0.4, 0.,
0.374505}, {0.2, 0.4, 0.2, 0.425594}, {0.2, 0.4, 0.4,
0.476789}, {0.2, 0.4, 0.6, 0.528573}, {0.2, 0.4, 0.8,
0.582132}, {0.2, 0.4, 1., 0.639099}}, {{0.2, 0.6, 0.,
0.387731}, {0.2, 0.6, 0.2, 0.433186}, {0.2, 0.6, 0.4,
0.48097}, {0.2, 0.6, 0.6, 0.530903}, {0.2, 0.6, 0.8,
0.583524}, {0.2, 0.6, 1., 0.640118}}, {{0.2, 0.8, 0.,
0.398303}, {0.2, 0.8, 0.2, 0.439083}, {0.2, 0.8, 0.4,
0.484238}, {0.2, 0.8, 0.6, 0.532737}, {0.2, 0.8, 0.8,
0.584628}, {0.2, 0.8, 1., 0.640923}}, {{0.2, 1., 0.,
0.404713}, {0.2, 1., 0.2, 0.441912}, {0.2, 1., 0.4,
0.485677}, {0.2, 1., 0.6, 0.53352}, {0.2, 1., 0.8,
0.585098}, {0.2, 1., 1., 0.641255}}}, {{{0.4, 0., 0.,
0.350792}, {0.4, 0., 0.2, 0.413998}, {0.4, 0., 0.4,
0.469592}, {0.4, 0., 0.6, 0.522253}, {0.4, 0., 0.8,
0.573834}, {0.4, 0., 1., 0.625097}}, {{0.4, 0.2, 0.,
0.360894}, {0.4, 0.2, 0.2, 0.417859}, {0.4, 0.2, 0.4,
0.47142}, {0.4, 0.2, 0.6, 0.523204}, {0.4, 0.2, 0.8,
0.574398}, {0.4, 0.2, 1., 0.625487}}, {{0.4, 0.4, 0.,
0.373682}, {0.4, 0.4, 0.2, 0.42447}, {0.4, 0.4, 0.4,
0.474897}, {0.4, 0.4, 0.6, 0.525099}, {0.4, 0.4, 0.8,
0.575526}, {0.4, 0.4, 1., 0.626319}}, {{0.4, 0.6, 0.,
0.386887}, {0.4, 0.6, 0.2, 0.432053}, {0.4, 0.6, 0.4,
0.479075}, {0.4, 0.6, 0.6, 0.527431}, {0.4, 0.6, 0.8,
   ... (remaining tabulated values of the output elided for brevity) ...*)
{ListPlot[{Flatten[tab1, 2][[All, 4]], Flatten[tab2, 2][[All, 4]]},
PlotLegends -> {"tab1", "tab2"}, ImageSize -> Large,
PlotRange -> All],
ListPlot[Flatten[tab1, 2][[All, 4]] - Flatten[tab2, 2][[All, 4]],
PlotRange -> All]}
</code></pre>
<p><a href="https://i.stack.imgur.com/qejeo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qejeo.png" alt="Figure 4" /></a></p>
<p>Note that the difference between the two methods is about <span class="math-container">$2\times 10^{-3}$</span>.</p>
|
15,033 | <p>I have one incoming edge and multiple outgoing edges, and I want to pick the outgoing edge such that the angle between it and the incoming edge is the smallest of all. We know the coordinates of the vertex $V$.</p>
<p>The angle must start from the incoming edge ($e_1$) and end at another edge ($e_2$, $e_3$, $e_4$). </p>
<p>Also, the angle must be formed in such a way that the face constructed from $e_1$ to $e_2$ is traversed in the counter-clockwise direction. In other words, the face that you construct when you link all the vertices $V$ in $e_1$ and $e_i$ together must be a counter-clockwise cycle.</p>
<p>So in the case below, edge $e_2$ must be picked because the $\theta$ is the smallest.</p>
<p><img src="https://docs.google.com/drawings/pub?id=1ducKqeABywKY4A1WS17exNytbvNnDxSAovK0cZnUs3Y&w=960&h=720"></p>
<p>Is there an algorithm that does this elegantly? The few algorithms I have devised are just plain ugly and require a lot of hacks. </p>
<p>Update: One method that I think of ( and <a href="https://math.stackexchange.com/questions/15033/find-the-outgoing-edge-with-the-smallest-angles-given-one-incident-edges-and-mul/15035#15035">proposed below</a>) is to use the cross product for each edge against the incoming edge $e_1$, and get the minimum $\theta_{i}$. i.e.,</p>
<p>$$e_{1} \times e_{i} = |e_{1}||e_{i}| \sin\, \theta_{i}$$</p>
<p>But the problem is that this doesn't really work. Consider the following test cases:
$$e_1=(1,0)$$
$$e_2=(-1/\sqrt{2},\,1/\sqrt{2})$$
$$e_3=(1/\sqrt{2},\,-1/\sqrt{2})$$</p>
<p>One would find that</p>
<p>$$\theta_2 = -45^\circ$$
$$\theta_3=-45^\circ$$</p>
<p>Both are equal, even though we know that $e_2$ should be selected. </p>
| picakhu | 4,728 | <p>Assuming that finding the coordinates (of your vectors) is not too difficult. </p>
<p>You can use the definitions of the dot and/or cross products, and then solve for the angles. </p>
<p>So you can for example use $| a \times b | = |a||b| \sin\, \theta$</p>
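As the question's update illustrates, the signed cross product alone cannot break the tie; a common fix (a sketch — the function name and conventions here are illustrative, not from the answer) is to normalize the atan2-based signed angle from $e_1$ into $(0, 2\pi]$ and take the minimum:

```python
import math

def pick_next_edge(e_in, candidates):
    """Pick the outgoing edge with the smallest counter-clockwise angle
    measured from the incoming edge direction e_in.

    e_in, candidates: 2D vectors as (x, y) tuples, all anchored at V.
    Using atan2 avoids the sign ambiguity of the cross product alone."""
    def ccw_angle(v):
        # signed angle from e_in to v, folded into (0, 2*pi]
        a = math.atan2(v[1], v[0]) - math.atan2(e_in[1], e_in[0])
        return a % (2 * math.pi) or 2 * math.pi
    return min(candidates, key=ccw_angle)

s = 1 / math.sqrt(2)
e1 = (1.0, 0.0)
e2 = (-s, s)    # 135 degrees CCW from e1
e3 = (s, -s)    # 315 degrees CCW from e1
print(pick_next_edge(e1, [e2, e3]))  # picks e2
```

On the question's failing test case this selects $e_2$, because $e_2$ lies $135^\circ$ counter-clockwise from $e_1$ while $e_3$ lies $315^\circ$.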
|
2,390,670 | <p>For an array of length $n$ filled with random numbers ranging from 0 (inclusive) to $n$ (exclusive), what percent of the array contains unique numbers?</p>
<p>I was able to make a program that tries to calculate this with repeated trials and ended up with ~63.212%.</p>
<p>My Question is what equation could calculate this instead of me just repeating trials.</p>
| kimchi lover | 457,779 | <p>Your number is suspiciously close to $1-1/e$. The fraction of values represented exactly $k$ times in your array should be close to $\exp(-1)/k!$, so it looks like your program counted the number of distinct values in the array, rather than the number of values represented only once.</p>
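The diagnosis is easy to test numerically; this sketch (the parameter choices are arbitrary) measures the average fraction of distinct values, which should approach $1-1/e\approx 0.632$:

```python
import random
from math import exp

def distinct_fraction(n, trials=200, rng=random.Random(0)):
    """Average fraction of values in [0, n) that occur at least once in an
    array of n uniform random draws (what the asker's program apparently
    measured)."""
    total = 0.0
    for _ in range(trials):
        arr = [rng.randrange(n) for _ in range(n)]
        total += len(set(arr)) / n
    return total / trials

est = distinct_fraction(2000)
print(round(est, 3), round(1 - exp(-1), 3))  # both close to 0.632
```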
|
242,203 | <p>What's the derivative of the integral $$\int_1^x\sin(t) dt$$</p>
<p>Any ideas? I'm getting a little confused.</p>
| amWhy | 9,003 | <p>You can use the fundamental theorem of calculus, but if you have not yet covered that theorem: in short, you'll be taking the derivative, with respect to $x$, of the integral of $\sin(t)\,dt$ when the integral is evaluated from $1$ to $x$:</p>
<p>$$\frac{d}{dx}\left(\int_1^x \sin(t) \text{d}t\right) = \frac{d}{dx} [-\cos t]_1^x = \frac{d}{dx}\left(-\cos(x) - (-\cos(1))\right) = \sin(x).$$</p>
<p>and you'll no doubt be encountering the Fundamental Theorem of Calculus very, very soon:</p>
<p>For any <em>integrable</em> function $f$, and constant $a$: $$\frac{d}{dx} \left(\int_a^x f(t)dt \right)= f(x),$$ </p>
<p>(provided $f$ is continuous at $x$).</p>
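A quick numerical sanity check of the statement (a sketch; the quadrature and step sizes are arbitrary choices):

```python
import math

def integral_sin(x, steps=10_000):
    """Midpoint-rule approximation of the integral of sin(t) dt from 1 to x."""
    h = (x - 1.0) / steps
    return sum(math.sin(1.0 + (k + 0.5) * h) for k in range(steps)) * h

def derivative(f, x, h=1e-3):
    """Central finite-difference derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.0
print(derivative(integral_sin, x0), math.sin(x0))  # both ~0.909
```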
|
1,595,658 | <p>Given the function $f:\mathbb{N}^+ \to \mathbb{N}^+$ where $f(k) = \sum_{i=0}^k 4^i$. </p>
<p>Examining the prime factorizations of $f(k)$ for $k = 1,\ldots,48$, many factors appear in a regular pattern. </p>
<p>QUESTION: </p>
<ol>
<li>Is there a proof that these patterns continue for larger values of k?</li>
<li>Is there a known use for these patterns in number theory?</li>
</ol>
| Gottfried Helms | 1,714 | <p>I've once "invented" a small notation-scheme for the handy expression of the primefactorization of that function. </p>
<ol>
<li>Let's define $[a:p] = 1 $ if $p$ divides $a$ and $=0$ if not.
($p$ need not be a single prime factor here) </li>
<li>Let's define $\{a,p\} = m $ giving the (highest) exponent to which
$[a:p^m]=1$</li>
</ol>
<p>Next </p>
<ol start="3">
<li>let's define $\lambda_{a,p}$ and $\alpha_{a,p}$ implicitly by<br>
$\lambda_{a,p}$ being the minimal positive exponent in $ \{a^{\lambda_{a,p}} -1,p\}= \alpha_{a,p}$ such that $\alpha_{a,p} \gt 0$ . </li>
</ol>
<p><em>(Notes: we know $\lambda_{a,p}$ also as the "order of the cyclic subgroup modulo $p$". Of course there are cases where $p$ never divides that expression, but we might formally assign $\lambda_{a,p}=\infty$ in those cases.)</em> </p>
<p>For instance, let $a=2,p=7$ then $\lambda_{a,p}=3$ and $\alpha_{a,p}=1$ because $ \{2^3 - 1, 7\}=1$, or let $a=3,p=11$ then $\lambda_{a,p}=5$ and $\alpha_{a,p}=2$ because $ \{3^5 - 1, 11\}=2$. </p>
<p>With this it is possible to formulate algebraically manipulatable formulae for the primefactorization of expressions $f(n)=a^n-1 $ as well as of $f(n)=a^n-b^n$. </p>
<p>For your example $a=4$ we get
$$f(n) = 4^n-1 = 3^{e3} \cdot 5^{e5} \cdot 7^{e7} \cdot 11^{e11} \cdot \ldots $$where
$$ \begin{array}{}
e_3 &=& [n:1](1 + \{n,3\}) \\
e_5 &=& [n:2](1 + \{n,5\}) \\
e_7 &=& [n:3](1 + \{n,7\}) \\
e_{11} &=& [n:5](1 + \{n,11\}) \\
e_{13} &=& [n:6](1 + \{n,13\}) \\
\vdots &=& \vdots \\
e_{1093} &=& [n:182](2 + \{n,1093\}) \\
\vdots &=& \vdots \\
e_{3511} &=& [n:1755](2 + \{n,3511\}) \\
\vdots &=& \vdots
\end{array}$$
where I inserted also the two known Wieferich primes $1093,3511$ which show that the $\alpha_{a,p}$ can sometimes be bigger than 1. The formula for one primefactor $p$ is in general $ \{a^n-1,p\} = [n:\lambda](\alpha + \{n,p\}) $ <em>(plus special handling for the primefactor 2, but it does not occur here)</em> </p>
<p>Unfortunately, the $\lambda$ as well as the $\alpha$ must be determined "empirically" for each pair of (base,primefactor), but where it is of help that $\lambda_{a,p}$ is either equal to the Euler's-totient function $\phi(p)$ or a true divisor of it. But after those two values are determined, we can work algebraically with varying $n$, products, quotients even of different bases and so on.<br>
For instance we can easily derive the primefactorization for the related expression $g(n)=4^n+1$ because $g(n) = f(2n)/f(n)$ and we find the primefactorization for $g(n)$ for instance for the primefactor $5$ by
$$ \begin{array}{rrlll}
\{g(n),5\}&=& \{f(2n),5\} &- \{f(n),5\} \\
&=& [2n:2](1+\{2n,5\}) &-[n:2](1+\{n,5\}) \\
&=& 1 \cdot (1+\{2n,5\}) &-[n:2](1+\{n,5\}) \\
&=& 1 \cdot (1+\{n,5\}) &-[n:2](1+\{n,5\}) \\
&=& (1 -[n:2])(1+\{n,5\}) \\
\end{array}$$
which means, the primefactor $5$ occurs only if $n$ is odd (=not divisible by 2) ; its exponents is 1 at $n=1$ (of course $4^1+1$ is divisible by $5$) and increases as far as $n$ contains itself powers of $5$, so we should have
$$ \{g(5),5\} = 1 \cdot (1+\{5,5\}) = 1+1 =2 $$
and indeed $4^5+1 = 1025$ is divisible by $5^2=25$. </p>
<p>Of course, because you have defined your function $F(k)$ such that it is
$$ F(k) = f(k+1)/3 = f(k+1)/f(1), $$ as Prof. Israel has already pointed out, we have to remove the constant summand $1$ in the display of $e_3$ and also correct our result by replacing $n$ with $n+1$ - but I think that does not affect the clarity and intuitiveness of the whole scheme as given in the examples. </p>
<p>I've done the proof for this "algebraicity" of the exponents of the primefactors in <a href="http://go.helms-net.de/math/expdioph/CyclicSubgroups_work.pdf" rel="nofollow">a small treatise of mine</a>, but it seems it has made its way independently into the wider world under the name <em>"lemma of the</em> $LTE$ <em>(lifting the exponent)"</em>, and I've sometimes seen answers here on MSE which relate to this (and where I expect they point to the required proofs - which are simple and elementary and can be done by yourself).</p>
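The closing example (the factor $5$ in $4^n+1$) can be brute-force checked; a small Python sketch, with <code>mult(p, m)</code> standing in for the $\{m,p\}$ notation above:

```python
def mult(p, m):
    """Exponent of the prime p in m, i.e. {m, p} in the answer's notation."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

# Predicted above: 5 divides 4**n + 1 iff n is odd, with exponent 1 + {n, 5}.
for n in range(1, 60):
    predicted = (1 + mult(5, n)) if n % 2 == 1 else 0
    assert mult(5, 4**n + 1) == predicted

print(mult(5, 4**5 + 1))  # 2, since 4^5 + 1 = 1025 = 25 * 41
```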
|
2,563,402 | <p>For which values of $a$ does the system
$$x_1 + x_2 + x_3 = 1$$
$$x_1 + 2x_2 + ax_3 = 2$$
$$2x_1 + ax_2 + 4x_3 = a^2$$
have (i) a unique solution, (ii) no solution, (iii) infinitely many solutions? Where the system has infinitely many solutions, write the solutions in parametric form.</p>
<p>So I tried to row reduce the matrix and got up till this point:</p>
<p>$$\left[\begin{array}{ccc|c}1&1&1&1\\ 0&1&a-1&1\\ 0&a-2&2&a^2-2\end{array}\right]$$</p>
<p>But I'm a little confused on how to continue further. How exactly would I change the $a-2$ to a $0$ and $2$ to a $1$? </p>
<p>Any help?</p>
| Karn Watcharasupat | 501,685 | <p>You are on the right track :)</p>
<p>So you have
\begin{align}
&\left[\begin{array}{ccc|c}
1&1&1&1\\
0&1&a-1&1\\
0&a-2&2&a^2-2
\end{array}\right]\\
\xrightarrow{R_3-(a-2)R_2\to R_3}\
&\left[\begin{array}{ccc|c}
1&1&1&1\\
0&1&a-1&1\\
0&0&2-(a-2)(a-1)&(a^2-2)-(a-2)\\
\end{array}\right]\\
=&\left[\begin{array}{ccc|c}
1&1&1&1\\
0&1&a-1&1\\
0&0&a(3-a)&a(a-1)\\
\end{array}\right]\\
\end{align}</p>
<p>Can you proceed from here? </p>
<blockquote class="spoiler">
<p> Consider the cases where $a=0,3$.</p>
</blockquote>
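The pivot $a(3-a)$ in the last row vanishes exactly at $a=0$ and $a=3$; a quick computational spot check (a sketch — for $a=0$ one can verify directly that $(t,\,1-t/2,\,-t/2)$ solves the system for every $t$):

```python
from fractions import Fraction

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def coeff_matrix(a):
    return [[1, 1, 1], [1, 2, a], [2, a, 4]]

# det = 3a - a^2 = a(3 - a): zero exactly at a = 0 and a = 3
for a in range(-5, 6):
    assert det3(coeff_matrix(a)) == 3*a - a*a

# a = 0: the family (t, 1 - t/2, -t/2) solves the system for every t
for t in [Fraction(k, 3) for k in range(-6, 7)]:
    x = (t, 1 - t/2, -t/2)
    assert x[0] + x[1] + x[2] == 1
    assert x[0] + 2*x[1] == 2
    assert 2*x[0] + 4*x[2] == 0
print("checks passed")
```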
|
2,563,402 | <p>For which values of $a$ does the system
$$x_1 + x_2 + x_3 = 1$$
$$x_1 + 2x_2 + ax_3 = 2$$
$$2x_1 + ax_2 + 4x_3 = a^2$$
have (i) a unique solution, (ii) no solution, (iii) infinitely many solutions? Where the system has infinitely many solutions, write the solutions in parametric form.</p>
<p>So I tried to row reduce the matrix and got up till this point:</p>
<p>$$\left[\begin{array}{ccc|c}1&1&1&1\\ 0&1&a-1&1\\ 0&a-2&2&a^2-2\end{array}\right]$$</p>
<p>But I'm a little confused on how to continue further. How exactly would I change the $a-2$ to a $0$ and $2$ to a $1$? </p>
<p>Any help?</p>
| Raffaele | 83,382 | <p>$\det \left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 2 & a \\
2 & a & 4 \\
\end{array}
\right)=3 a-a^2$</p>
<p>$3a-a^2\ne 0$, i.e. $a\ne 0$ and $a\ne 3$: the system has one and only one solution</p>
<p>If $a=0$ the system becomes </p>
<p>$\begin{cases}
x_1 + x_2 + x_3=1\\
x_1 + 2 x_2=2\\
2 x_1 + 4 x_3=0\\
\end{cases}
$</p>
<p>Infinitely many solutions: $(t,1-t/2,-t/2)$</p>
<p>if $a=3$ then $\text{rank}(A)=2$</p>
<p>$
A|B=\left(
\begin{array}{ccc|c}
1 & 1 & 1 & 1 \\
1 & 2 & 3 & 2 \\
2 & 3 & 4 & 9 \\
\end{array}
\right)$</p>
<p>$\text{rank}(A|B)=3\ne \text{rank}(A)$ </p>
<p>so the system is inconsistent (no solution)</p>
<p>Hope this helps</p>
|
3,135,440 | <p>This is throwing me off a bit, I believe mainly because of the way the question is worded. Would this simply be $4$ out of $36$?</p>
| Robert Shore | 640,080 | <p>By the way, the simplest way to solve this problem is probably to observe</p>
<p><span class="math-container">$$y= \frac{x+1}{x-1} = 1+\frac{2}{x-1} \Rightarrow \frac{dy}{dx} = -\frac{2}{(x-1)^2}.$$</span></p>
<p>I'm guessing the point of the problem was to convince yourself that the various techniques result in the same answer, but I didn't want you to overlook the value of simple division.</p>
|
3,909,191 | <p>I am not too sure as to what the relation is but I think <span class="math-container">$R = \{(1, 2), (2, 3), (3, 4), ..., (n - 1, n)\} $</span>.</p>
<p>Any guidance would be appreciated.</p>
| Claude Leibovici | 82,404 | <p>For this equation, I would strongly recommend using the trigonometric method for three real roots, since
<span class="math-container">$$\Delta=6914880 \left(175 X^4 Y^2+230 X^2 Y^4+147 Y^6\right) >0 \quad \forall X,Y$$</span>
<span class="math-container">$$p=-\frac{49}{240} \left(5 X^2+9 Y^2\right)\qquad\qquad q=\frac{49}{4320} X \left(27 Y^2-35 X^2\right)$$</span> which gives for the roots
<span class="math-container">$$L'_{k}=-\frac X 6+\frac{7 \sqrt{5 X^2+9 Y^2}}{6 \sqrt{5}}\times $$</span> <span class="math-container">$$\cos \left(\frac{2 \pi k}{3}-\frac{1}{3} \cos ^{-1}\left(-\frac{\sqrt{5} X
\left(27 Y^2-35 X^2\right)}{7 \left(5 X^2+9 Y^2\right)^{3/2}}\right)\right)$$</span> with <span class="math-container">$k=0,1,2$</span>.</p>
|
2,924,165 | <p>Assume that $\mathbb R$ is an ordered field (i.e. $\mathbb R$ is a model of real numbers). We define the set of natural numbers $\mathbb N$ as the smallest inductive set containing $1_\mathbb R$ (multiplicative identity of the field $\mathbb R$), where by definition a set $X\subset \mathbb R$ is inductive if $x\in X$ implies $x+1_\mathbb R\in X$.</p>
<p>Now I wish to prove that every nonempty subset $M$ of $\mathbb N$ contains a minimal element. My book proves it as follows: If $1_\mathbb R\in M$ then $1_\mathbb R$ is the minimal element, otherwise consider the set $E:=\mathbb N - M$, which contains $1_\mathbb R$. The set $E$ must contain a natural number $n$ such that all natural numbers not larger than $n$ belong to $E$ but $n+1$ belongs to $M$; <strong>if there were no such $n$, the set $E$ which contains $1_\mathbb R$ would contain along with each of its elements $n$, the number $n+1_\mathbb R$ too, hence it would equal the whole $\mathbb N$, a contradiction.</strong> The number $n+1_\mathbb R$ so found is the minimal element of $M$.</p>
<p>But I do not think the bold-faced argument is correct (why would $E$ contain $n+1$ if $n\in E$?), or if it is correct, according to which axioms is it correct?</p>
| Liandarin | 595,286 | <p>It may be useful to consider the change of coordinates by rotating the $xy$-plane by $45^\circ$ as:
$$u = x+y,\; v=x-y.$$
Then the top surface of the cylinder is described by the simple equation $z=2-u$ and you can calculate the elliptical surface area in terms of principle axes.</p>
<p>I hope, this is helpful.</p>
|
4,006,571 | <p>Prove this statement using a proof by contradiction: <br />
Let <span class="math-container">$n$</span> be a natural number. If <span class="math-container">$x_1,\ldots,x_n \in \mathbb{N} \cup \{0\}$</span> and <span class="math-container">$\sum_{i=1}^{n}{x_i} = n+1$</span> then there is an <span class="math-container">$i \in [n]$</span> such that <span class="math-container">$x_i \geq 2$</span></p>
<p>I'm not sure how to approach this problem with the proof by contradiction. So far, I assume that there is no <span class="math-container">$i \in [n] : x_i \geq 2$</span>. Then, <span class="math-container">$\sum_{i=1}^{1}{x_i} < 2$</span>. Finally, because <span class="math-container">$\sum_{i=1}^{1}{x_i} = n + 1 = 2$</span>, I can conclude that by contradiction this is false.</p>
<p>I'm not entirely sure if this logic is a valid proof by contradiction or what a better proof would be. Any help is appreciated</p>
| Wuestenfux | 417,848 | <p>If <span class="math-container">$x_i\leq 1$</span> for all <span class="math-container">$i$</span>, then <span class="math-container">$x_1+\ldots+x_n\leq 1+\ldots +1 = n\cdot 1 = n$</span>.</p>
<p>Contraposition:
If <span class="math-container">$x_1+\ldots+x_n> n$</span>, there is an <span class="math-container">$i$</span> with <span class="math-container">$x_i\geq 2$</span>.</p>
<p><span class="math-container">$P\Rightarrow Q \Leftrightarrow \neg Q\Rightarrow\neg P$</span>.</p>
|
2,762,323 | <p>I need to find the asymptotic behavior of $$\sum_{j=1}^N \frac{1}{1 - \cos\frac{\pi j}{N}}$$
as $N\to\infty$.</p>
<p>I found (using a computer) that this asymptotically will be equivalent to $\frac{1}{3}N^2$, but don't know how to prove it mathematically.</p>
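The numerical observation is easy to reproduce (a quick sketch, not a proof):

```python
import math

def S(N):
    """The sum in question, evaluated in floating point."""
    return sum(1.0 / (1.0 - math.cos(math.pi * j / N)) for j in range(1, N + 1))

for N in (100, 1000, 10000):
    print(N, S(N) / N**2)  # ratios approach 1/3
```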
| Mir Aaliya | 698,391 | <p>I came across a map like this: (X, T1) and (X, T2) are topological spaces with topologies T1 and T2, where T1 is stronger than T2. Can we say that a map from (X, T1) to (X, T2) is an inclusion map? Anybody please.</p>
|
2,543,215 | <p>I’ve read a post asking whether a subring of a PID is always a PID. The answer is no, but the post itself gave me more questions.</p>
<ol>
<li><p>Is that possible for a PID that is a subring of a non-PID?</p></li>
<li><p>Is that possible for a subring of a PID that is not a UFD? </p></li>
</ol>
<p>Some hints or examples are really appreciated!</p>
<p>Thank you!</p>
| Edward Evans | 312,721 | <p>The answer to your first question is yes: as in the comments, $\Bbb Z$, which is a PID, is a subring of $\Bbb Z[X]$, but $\Bbb Z[X]$ is not a PID (look at the ideal $(2, X)$ for the canonical example).</p>
<p>For your second question, pick your favourite ring which is not a unique factorisation domain and then extend to its field of fractions. For example, $\Bbb Z[\sqrt{-5}]$ is not a unique factorisation domain, while $\Bbb Q(\sqrt{-5})$ is a field and hence a principal ideal domain.</p>
|
1,809,017 | <p>Let $U$ be an open set containing $0$ and $f:U \rightarrow \mathbb C$ a holomorphic function such that $f(0)=0$ and $f'(0)=2$. Prove that there exists an open neighbourhood $0 \in V \subset U $ and a holomorphic injective function $h:V \rightarrow V$ such that $h(f(z))=2h(z)$. Since I don't have any idea where to start, I'd appreciate a small hint rather than a full solution. Thank you for all your answers.</p>
| Hrhm | 332,390 | <p>Let's integrate this in terms of $z$. Let $A(z)$ be the area of the horizontal cross section of $x^2+4y^2-(2-z)^2\leq 0$ at height $z$. Instead of integrating from $-\sqrt{2}\leq z\leq\sqrt{2}$, we will integrate from $4-\sqrt{2}\leq z \leq 4+\sqrt{2}$. (This is only so that we do not have to deal with ugly negative signs. Since the graph $x^2+4y^2-(2-z)^2= 0$ is symmetric across the plane $z=2$, the answer will be the same.)</p>
<p>Then we are trying to find $\displaystyle\int_{4-\sqrt{2}}^{4+\sqrt{2}}A(z)\,dz$. </p>
<p>Since $x^2+4y^2-(2-z)^2= 0$ is an elliptic cone, each horizontal cross section is an ellipse:</p>
<p>$$x^2+4y^2-(2-z)^2= 0$$
$$x^2+4y^2=(2-z)^2$$
$$\left(\frac{x}{2-z}\right)^2+\left(\frac{2y}{(2-z)}\right)^2=1$$</p>
<p>Ergo, $A(z)=\displaystyle\frac{\pi(2-z)^2}{2}$</p>
<p>Finally, $\displaystyle\int_{4-\sqrt{2}}^{4+\sqrt{2}}\frac{(2-z)^2\pi}{2}\text{ }dz=\frac{\pi\left(4+\sqrt{2}-2\right)^3}{6}-\frac{\pi\left(4-\sqrt{2}-2\right)^3}{6}=\frac{14\sqrt{2}\pi}{3}$</p>
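A numerical cross-check of the final value (a sketch using a midpoint rule; the step count is arbitrary):

```python
import math

def A(z):
    return math.pi * (2 - z) ** 2 / 2  # area of the elliptical cross section

def integrate(f, a, b, steps=100_000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

lo, hi = 4 - math.sqrt(2), 4 + math.sqrt(2)
print(integrate(A, lo, hi), 14 * math.sqrt(2) * math.pi / 3)  # both ~20.73
```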
|
1,331,063 | <p>Let $\cal{H}$ be a Hilbert space, $T$ a bounded linear operator on $\cal{H}$, $S$ a trace class operator, then can one verify that
$$|Tr(TS)|\leq\|T\|\cdot|Tr(S)|?$$</p>
| Martin Argerami | 22,857 | <p>No. Let
$$
S=\begin{bmatrix}0&1\\0&0\end{bmatrix},\ \ T=\begin{bmatrix}0&0\\1&0\end{bmatrix}.
$$
Then
$$
\text{Tr}(TS)=1,\ \ \text{ and } \|T\|\,|\text{Tr}(S)|=0.
$$</p>
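The counterexample is immediate to verify with a few lines of plain $2\times 2$ matrix arithmetic (since $\text{Tr}(S)=0$, the right-hand side vanishes):

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

S = [[0, 1], [0, 0]]
T = [[0, 0], [1, 0]]
print(trace(matmul(T, S)), trace(S))  # 1 and 0: |Tr(TS)| > ||T|| * |Tr(S)|
```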
|
264,594 | <p>I need to make a proof but I can't come to the solution:
<p>For every vertex of an oriented graph with vertices $U_{1},U_{2},\ldots,U_{n}$ we've got $s_{+}(U)$, the number of edges which come into the vertex $U$, and $s_{-}(U)$, the number of edges which leave the vertex.
<p>Prove that $\sum_{i=1}^{n} |s_{+}(U_{i})-s_{-}(U_{i})|$ is an even number.
<p>Until now I came to the statement that when we remove absolute values we get number 0.</p>
| Douglas S. Stones | 139 | <p>For any integer $a$, we can check that $|a| \equiv \pm a \equiv a \pmod 2$. Thus:
\begin{align*}
\sum_{i=1}^{n} |s_{+}(U_{i})-s_{-}(U_{i})| & \equiv \sum_{i=1}^{n} \big( s_{+}(U_{i})-s_{-}(U_{i}) \big) & \text{since } |a| \equiv a \pmod 2 \\
& \equiv \sum_{i=1}^{n} s_{+}(U_{i})- \sum_{i=1}^{n} s_{-}(U_{i}) \\
& = \text{nr edges} - \text{nr edges} \\
& = 0 \pmod 2.
\end{align*}</p>
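The parity claim is easy to spot-check on random directed multigraphs (a sketch; self-loops and parallel edges are allowed and do not change the parity):

```python
import random

def degree_parity_check(n, m, rng):
    """Build a random directed multigraph with n vertices and m edges and
    verify that sum_i |s_plus(i) - s_minus(i)| is even."""
    s_in = [0] * n
    s_out = [0] * n
    for _ in range(m):
        u, v = rng.randrange(n), rng.randrange(n)
        s_out[u] += 1
        s_in[v] += 1
    return sum(abs(s_in[i] - s_out[i]) for i in range(n)) % 2 == 0

rng = random.Random(1)
print(all(degree_parity_check(rng.randrange(2, 20), rng.randrange(0, 50), rng)
          for _ in range(200)))  # True
```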
|
1,111,854 | <p>For example:</p>
<p><img src="https://i.stack.imgur.com/xEFpG.jpg" alt="enter image description here"></p>
<p>The last three lines have a |t=ti, what does that mean?</p>
| AlexR | 86,940 | <p>The notation is defined by
$$\begin{align*}
f(x) \Big|_{x=a}^b & := f(b) - f(a)\\
f(x) \Big|_{x=a} & := f(a)
\end{align*}$$
(note the "inconsistency" in the sign of $f(a)$)<br>
The notation allows for notational improvements of statements like
$$\left. \frac{\partial}{\partial x} \phi(x,s) \right|_{x=x_0}$$
Which says: first compute the partial derivative of $\phi$ w.r.t. $x$ and then "plug in" $x_0$ for $x$. Compare this to
$$\frac{\partial}{\partial x} \phi(x_0,s) = 0$$
Because there is no free variable $x$ in the argument list of $\phi$.<br>
Another problem can occur with the total derivative:
$$\phi_x(x,y(x),z_0(x)) + y'(x) \phi_y(x,y(x),z_0(x)) = \\
\left. \frac{\mathrm d}{\mathrm dx} \phi(x, y(x), z) \right|_{z=z_0(x)} \ne \frac{\mathrm d}{\mathrm dx} \phi(x,y(x),z_0(x)) \\
= \phi_x(x,y(x),z_0(x)) + y'(x) \phi_y(x,y(x), z_0(x)) + z_0'(x) \phi_z(x,y(x),z_0(x))$$
because the chain rules to be applied are different.</p>
|
624,002 | <p>Determine whether $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are isomorphic groups or not.</p>
<p>pf) Suppose that these are isomorphic. Note that $\mathbb{Z}\times \mathbb{Z}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. Since $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ are isomorphic, $\mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}$ are isomorphic. But the first one is isomorphic to the trivial group and the second one is isomorphic to $\mathbb{Z}$. It is a contradiction.</p>
<p>Is my proof right? If not, is there another proof?</p>
| zcn | 115,654 | <p>Your proof is not quite correct - an (abstract) homomorphism $\mathbb{Z}^2 \to \mathbb{Z}^3$ need not send $\mathbb{Z}^2$ to $\mathbb{Z}^2 \times \{0\}$. Here's my preferred way of showing they are not isomorphic (and the argument generalizes):</p>
<p>For any abelian group $A$, the set of group homomorphisms $\text{Hom}(\mathbb{Z}, A)$ has the same cardinality as $A$ (a bijection is given by $a \leftrightarrow (1 \mapsto a)$). Combining this with the fact that $\text{Hom}(\mathbb{Z}^n, A) \cong A^n$ gives that $\text{Hom}(\mathbb{Z}^2, A)$ and $\text{Hom}(\mathbb{Z}^3, A)$ have different cardinalities (if $1 < |A| < \infty$).</p>
|
3,808,077 | <p>I'm trying to show that <span class="math-container">$P(A\triangle B)=P(A)+P(B)-2P(A\cap B)$</span>. Knowing that <span class="math-container">$A\triangle B=(A\cap B^{c})\cup(A^{c} \cap B)$</span>.</p>
<p>So, what I did was this:</p>
<p><span class="math-container">\begin{equation*}
\begin{aligned}
P(A\triangle B)&=P(A)+P(B)-2P(A\cap B)\\
P((A\cap B^{c})\cup(A^{c} \cap B))&= P(A)+P(B)-2P(A\cap B)\\
P(A\cap B^{c})+P(A^{c}\cap B)-P((A\cap B^{c})\cap(A^{c}\cap B))&=P(A)+P(B)-2P(A\cap B)\\
\end{aligned}
\end{equation*}</span></p>
<p>And the truth is, I got stuck there. I thought I'd solve it assuming that they were independent events but I don't know if I'm doing it right. I would appreciate your help and thank you in advance.</p>
| Michael Rozenberg | 190,319 | <p>Because <span class="math-container">$$P(A\Delta B)=P\left((A\setminus B)\cup(B\setminus A)\right)=$$</span>
<span class="math-container">$$=P\left((A\setminus(A\cap B))\cup(B\setminus(A\cap B))\right)=$$</span>
<span class="math-container">$$=P(A\setminus(A\cap B))+P(B\setminus(A\cap B))=$$</span>
<span class="math-container">$$=P(A)-P(A\cap B)+P(B)-P(A\cap B)=P(A)+P(B)-2P(A\cap B).$$</span></p>
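For intuition, the identity can also be checked mechanically with a uniform probability measure on a small finite universe (a sketch, not a substitute for the proof):

```python
import random

rng = random.Random(7)
U = range(20)
P = lambda s: len(s) / 20  # uniform probability measure on the 20-point universe
for _ in range(100):
    A = {x for x in U if rng.random() < 0.5}
    B = {x for x in U if rng.random() < 0.5}
    lhs = P(A ^ B)                      # P(A triangle B)
    rhs = P(A) + P(B) - 2 * P(A & B)
    assert abs(lhs - rhs) < 1e-12
print("identity holds on all samples")
```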
|
3,293,955 | <p>Here is the curve i wish to plot with a function:</p>
<p><a href="https://i.stack.imgur.com/4Smib.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Smib.png" alt="eliptical curve"></a></p>
<p>I expect the curve to be 1/4 of an ellipse but I only have the coordinates to work with (minx,miny and maxx,maxy). I've been using the graphing tool at: <a href="http://itools.subhashbose.com/grapher" rel="nofollow noreferrer">http://itools.subhashbose.com/grapher</a> but I haven't been able to remember way back to high school when we worked on functions like these (it's been over 30 years). Any help greatly appreciated.</p>
<p>Note: answers come in quick, wish i had specified earlier the format of the equation ideally looks like: <code>f(x) = ?</code></p>
| rubikscube09 | 294,517 | <p>For the hint, first notice that by dominated convergence:</p>
<p><span class="math-container">$$
\|f\|_{L^1} = \int_\mathbb{R^k} |f(y)|\mathrm{d}y = \lim_{r \to \infty} \int_{B_r} |f(y)| \mathrm{d}y
$$</span>
for any <span class="math-container">$f \in L^1 (\mathbb{R}^k)$</span>. Note also that <span class="math-container">$\|f\|_{L^1} \neq 0$</span> thanks to the assumption that <span class="math-container">$f$</span> is not a.s. 0. Now, the maximal function is given by:
<span class="math-container">$$
(Mf)(x) = \sup_{B_r \ni x} \frac{1}{C(k)|r|^k}\int_{B_r(z)}|f(y)|\mathrm{d}y
$$</span>
where <span class="math-container">$B_r(z)$</span> is a ball centered around <span class="math-container">$z$</span> with radius <span class="math-container">$r$</span> that contains our point <span class="math-container">$x$</span>. By the above, we may fix <span class="math-container">$\epsilon>0$</span> such that for <span class="math-container">$r$</span> sufficiently large, (i.e for <span class="math-container">$r \geq R_0$</span>), we have:
<span class="math-container">$$
\|f\|_{L^{1}} - \int_{B_r(z)}|f(y)|\mathrm{d}y <\varepsilon
$$</span>
Fix <span class="math-container">$x_0$</span> such that <span class="math-container">$|x_0| \geq R_0$</span>. Then, by definition of <span class="math-container">$(Mf)(x)$</span>, we have:
<p><span class="math-container">$$
(Mf)(x) = \sup_{B_r \ni x} \frac{1}{C(k)|r|^k}\int_{B_r(z)}|f(y)|\mathrm{d}y
\geq \frac{1}{C(k)|x_0|^k }\int_{B_{|x_0|}(z)}|f(y)|\mathrm{d}y
$$</span>
For <span class="math-container">$|x| \geq |x_0|$</span>, we may apply the above <span class="math-container">$\varepsilon$</span> inequality, and conclude that:
<span class="math-container">$$
\frac{1}{C(k)|x_0|^k }\int_{B_{|x_0|}(z)}|f(y)|\mathrm{d}y \geq \frac{1}{C(k)|x_0|^k } (\|f\|_{L^1} -\epsilon) \geq \frac{1}{C(k)|x|^k}(\|f\|_{L^1} - \epsilon)
$$</span>
Multiplying all constants through on the RHS gives us a constant depending only on <span class="math-container">$f$</span> and dimension.</p>
|
954,376 | <p>Beltrami made (out of thin paper or stiff or starched cloth not mentioned) a model of a surface of constant negative Gauss curvature <span class="math-container">$ K=-1/c^2$</span>. The original might have resembled a <em>large saddle shaped</em> Pringles chip, and frills might have developed by sagging with time, if one were to make a guess.</p>
<p>Also it is now easy to guess that the mold he used had dimensions comparable to the length and breadth of the table on which this is/was placed. The narrow neck he obtained is/was by <em>hand rolling to a tight paper scroll</em>, imo.</p>
<p>Is there a higher definition photograph with more details available?</p>
<p>More importantly, does anyone know its modern day parametrization?</p>
<p>The original photograph is given on page 133 as Fig 56, Roberto Bonola <em>Non-Euclidean Geometry</em>, 1955 Edition, Dover, ISBN 0-486-60027-0. (Thanks in advance to anyone for scanning and posting it here).</p>
<p>The following is inserted from Daina Tamania's blog
( <a href="http://hyperbolic-crochet.blogspot.co.uk/2010/07/story-about-origins-of-model-of.html" rel="nofollow noreferrer">http://hyperbolic-crochet.blogspot.co.uk/2010/07/story-about-origins-of-model-of.html</a> ):</p>
<p><img src="https://i.stack.imgur.com/sf7Ul.png" alt="Beltrami pseudosphere made by himself" /></p>
<p>It may be recalled that all such shapes can be scrolled up tight, somewhat like a table napkin rolled by hand into a smaller ring, maintaining parallelism among previous parallels (due to isometry, or the constant-$K$ property), into a newer, smaller scroll.</p>
| Willemien | 88,985 | <p>Let's start at the beginning.</p>
<p>There is no 3-dimensional Euclidean surface that represents the complete hyperbolic plane (Hilbert's theorem).</p>
<p>There are surfaces of revolution that have (outside some boundaries) a constant negative curvature $K$.</p>
<p>All surfaces of revolution revolving around the $x_3$ axis can be formulated as:</p>
<p>$ x_1 = \phi(v) \cos u $ , $ x_2 = \phi(v) \sin u $ and $ x_3 = \psi(v) $</p>
<p>For a surface of revolution to have (outside some boundaries) a constant negative curvature of $K = -1 $, we also need:</p>
<ul>
<li>$(\phi')^2+(\psi')^2 = 1 $ and</li>
<li>$\phi'' = \phi $</li>
</ul>
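For a unit-speed profile curve (the first condition), the Gaussian curvature of such a surface of revolution reduces to $K=-\phi''/\phi$, so the second condition gives $K=-1$. A quick finite-difference spot check for the parabolic-type profile $\phi(v)=e^v$ (a sketch; the step size is arbitrary):

```python
import math

def phi(v):
    return math.exp(v)  # profile of the parabolic (tractoid) type

def gauss_K(v, h=1e-4):
    # K = -phi''(v) / phi(v) for a unit-speed profile curve
    second = (phi(v + h) - 2 * phi(v) + phi(v - h)) / (h * h)
    return -second / phi(v)

for v in (-2.0, -1.0, -0.5):  # v <= 0 keeps phi'(v) <= 1, so psi stays real
    print(round(gauss_K(v), 6))  # each ~ -1.0
```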
<p>The meridian curve (generatrix) $ x_1 = \phi(v) , x_2 = 0 $ and $ x_3 = \psi(v) $ is a geodesic on the surface.</p>
<p>Combining this with the hyperbolic plane, the meridian curve is a hyperbolic line.</p>
<p>There are 3 types of these surfaces:</p>
<ul>
<li>the "pseudo spherical surfaces of the elliptic type":</li>
</ul>
<p>also called the "conic type" $\phi(v)=C\sinh v$ and $\psi(v)=\int_0^v \sqrt{1-C^2\cosh^2v} dv$</p>
<p>The meridians are hyperbolic lines through the vertex and the parallels are arcs of hyperbolic circles with the vertex as center.</p>
<ul>
<li>the "pseudo spherical surface of the parabolic type":</li>
</ul>
<p>This is the one that is also known as the pseudosphere or the tractoid</p>
<p>$\phi(v)=e^v$ and $\psi(v)=\int_0^v \sqrt{1-e^{2v}} dv$ or</p>
<p>$\phi(v)= \frac{1}{\cosh v} $ and $\psi(v) = v -\tanh v $</p>
<p>(there are also other formulas)</p>
<p>The meridians are hyperbolic lines asymptotic to each other (horoparallel) and the parallels are arcs of horocycles.</p>
<ul>
<li>the "pseudo spherical surface of the hyperbolic type":</li>
</ul>
<p>$\phi(v)=C\cosh v$ and $\psi(v)=\int_0^v \sqrt{1-C^2\sinh^2v} dv$</p>
<p>PS: this is not a one-sheeted hyperboloid (a hyperboloid doesn't have constant curvature).</p>
<p>The meridians are hyperbolic lines that share a common perpendicular, and the parallels are the common perpendicular and arcs of hypercycles (equidistant to this common perpendicular).</p>
<p>I think this is the surface that Riemann describes in his famous Habilitationsvortrag (section II §5):</p>
<blockquote>
<p>The theory of surfaces of constant curvature will serve for a geometric illustration. It is easy to see that surface whose curvature is positive may always be rolled on a sphere whose radius is unity divided by the square root of the curvature; but to review the entire manifoldness of these surfaces, let one of them have the form of a sphere and the rest the form of surfaces of revolution touching it at the equator. [...] The surface with curvature zero will be a cylinder standing on the equator; <strong>the surfaces with negative curvature will touch the cylinder externally and be formed like the inner portion (towards the axis) of the surface of a ring.</strong></p>
</blockquote>
<p>Pictures of all three types of "pseudospherical surfaces" can be seen at:</p>
<p>virtualmathmuseum.org/Surface/gallery_o.html#PseudosphericalSurfaces </p>
<p>and</p>
<p>demonstrations.wolfram.com/SurfacesOfRevolutionWithConstantGaussianCurvature/ </p>
<ul>
<li><p>So now back to the question which did Beltrami use?</p></li>
<li><p>First of all does it matter?</p></li>
</ul>
<p>Surfaces that have the same constant curvature are isometric, and you can take this very literally:</p>
<p>a triangle whose sides have lengths a, b and c on one surface is congruent to a triangle on any other such surface whose sides have the same lengths (theorem of Minding, 1839).</p>
<p>The different surfaces are just different "cut-outs" of the hyperbolic plane.</p>
<p>But besides that I am not sure about it.</p>
<p>Reading Stillwell's translation of Beltrami's "Saggio" in "Sources of hyperbolic geometry" it first reads like he is using the hyperbolic type (this is the only surface where there are easily identifiable orthogonal hyperbolic lines) but later he changes to the tractoid.</p>
<p>Maybe he changed to the tractoid because this is the only one with an easy formulation.</p>
<p>But again does it really matter?</p>
<ul>
<li>About the pictures:</li>
</ul>
<p>The rolled-out version to me looks like a small part of the hyperbolic type, if only it were rolled out over a curved surface (a cylinder orthogonal to the length of the surface, bumped up) so that the frills disappear and it becomes the side of a ring, or indeed a big pringle.</p>
<p>Maybe it is best to compare it to a small part of the surface generated in the Wolfram demonstration "negative curvature, first type", with the slider at the right and the yellow side up, taking only a small part of it (around $30^\circ$ of the big circle).</p>
<p>Hope this helps.</p>
|
2,508,499 | <p>How does it hold that $\mathbb R \subseteq B(0,2)$ where
$\big<\mathbb R,d\big>$ and d is a discrete metric?</p>
<p>By doing so we showed that $\mathbb R $ is bounded</p>
| John Griffin | 466,397 | <p>Mimicking my answer to your previous question:</p>
<p>Recall that
$$
B(0,2) = \{x\in\mathbb{R} \mid d(0,x)<2\}.
$$
If $d$ is the discrete metric, then
$$
d(0,x)=\begin{cases}
0 & \text{if $x=0$}\\
1 & \text{otherwise}
\end{cases}
$$ so that $d(0,x)<2$ for every $x\in\mathbb{R}$. Therefore $B(0,2)=\mathbb{R}$, so that $\mathbb{R}$ is bounded in the discrete metric.</p>
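<p>(Added illustration, not part of the original answer.) The argument can be seen concretely in code: under the discrete metric, $d(0,x)\le 1<2$ for every sampled real, so each sample lies in $B(0,2)$.</p>

```python
def d(x, y):
    # discrete metric: distance 0 if the points coincide, 1 otherwise
    return 0 if x == y else 1

samples = [0, 1, -3.5, 1e9, 2.718]   # arbitrary sample reals
in_ball = [d(0, x) < 2 for x in samples]
```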
|
3,061,575 | <p>It is a principle and proof from Introduction to Set Theory, Hrbacek and Jech. </p>
<p>In the proof, lines 1 and 2, I couldn't understand why <span class="math-container">$Q(0)$</span> is true. </p>
<p><span class="math-container">$Q(0)$</span> means that "<span class="math-container">$P(k)$</span> holds for all <span class="math-container">$k<0$</span>". </p>
<p>I understood there are no <span class="math-container">$k<0$</span>. </p>
<p>And then I couldn't proceed. </p>
<p><a href="https://i.stack.imgur.com/qmEH5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qmEH5.jpg" alt="figure for question"></a></p>
| Andreas | 317,854 | <p>Apply the integration <span class="math-container">$\int_0^1 \cos(\pi x) f(x) dx$</span>, then
<span class="math-container">$$\int_0^1 \cos(\pi x) \sin(\pi x) dx=a_0 \int_0^1 \cos(\pi x) dx +\sum_{n=1}^{\infty}a_n \int_0^1 \cos(\pi x) \cos(n\pi x) dx$$</span></p>
<p>Since <span class="math-container">$\int_0^1 \cos(\pi x) \cos(n\pi x) dx = 0$</span> for <span class="math-container">$n \ne 1$</span>, you have
<span class="math-container">$$\int_0^1 \cos(\pi x) \sin(\pi x) dx=a_0 \int_0^1 \cos(\pi x) dx +a_1 \int_0^1 \cos^2 (\pi x)\ dx$$</span> which is <span class="math-container">$0 = 0 + 0.5 a_1$</span>, hence <span class="math-container">$a_1 = 0$</span>.</p>
<p>Likewise just integrate:</p>
<p><span class="math-container">$$\int_0^1 \sin(\pi x) dx=a_0 +\sum_{n=1}^{\infty}a_n \int_0^1 \cos(n\pi x) dx$$</span> which gives <span class="math-container">$2/\pi = a_0+ 0$</span>.</p>
<p>So the result is <span class="math-container">$(a_0+a_1)\pi = 2$</span>.</p>
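<p>(A numerical cross-check I have added; the grid size is an arbitrary choice.) A midpoint-rule quadrature reproduces both coefficient values: <span class="math-container">$\int_0^1\sin(\pi x)\,dx = 2/\pi$</span> and <span class="math-container">$\int_0^1\cos(\pi x)\sin(\pi x)\,dx = 0$</span>.</p>

```python
import math

N = 100_000                          # number of midpoint-rule subintervals
xs = [(k + 0.5) / N for k in range(N)]

# integral of sin(pi x) over [0,1]; the answer's a_0 = 2/pi
a0 = sum(math.sin(math.pi * x) for x in xs) / N

# integral of cos(pi x) sin(pi x) over [0,1]; forces a_1 = 0
cross_term = sum(math.cos(math.pi * x) * math.sin(math.pi * x) for x in xs) / N
```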
|
939,110 | <p>This is a different but related question to one I asked earlier. I link to it here:</p>
<p><a href="https://math.stackexchange.com/questions/938953/to-show-that-f-is-injective-i-dont-get-this-statement">"To show that f is injective" - I don't get this statement</a></p>
<p>I am pretty new to "functions", having just gone through a quick primer on "propositional logic". So the $\rightarrow$ symbol, which represents a conditional statement, looks very similar in the definition of injective below.</p>
<p>Suppose that $f: A \to B$ (Is this to be read in English as "If $A$ is true then $B$ is true"?)</p>
<p>To show that $f$ is injective - Show that if $f(x) = f(y)$ for arbitrary $x, y \in A$ with $x \neq y$, then $x = y$.</p>
<p>How do I read this in English, specifically the part where there is a comma. I am not sure if this is stating that the ordered pair $x,y$ is an element of set $A$ or just the $y$ element itself. If the author of the text (Rosen) is talking about $x, y$ as an ordered pair then it would help to use parentheses.</p>
| Nishant | 100,692 | <p>Hmm, how about $\mathbb Q/\mathbb Z$?</p>
|
1,711,087 | <p>Find the number of elements in the set if the average of these numbers is seven less than the number of elements in the set.</p>
| Stella Biderman | 123,230 | <p>We know that $\sum a_i=30$, $\frac{1}{N}\sum a_i= N-7$. Can you figure out how to solve these two equations? Try substitution.</p>
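<p>(My completion of the hint, using the total $\sum a_i = 30$ assumed in the answer.) Substituting the first equation into the second:</p>

```latex
\frac{30}{N} = N - 7
\;\Longrightarrow\; N^2 - 7N - 30 = 0
\;\Longrightarrow\; (N-10)(N+3) = 0
\;\Longrightarrow\; N = 10 \quad (\text{since } N > 0).
```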
|
1,438,999 | <p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p>
<p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
| Miguel Mars | 226,969 | <p>I think the other answers fit best to solve what you were thinking, but I will write this in case you want to know more.</p>
<p>If we are working in a polynomial ring like $\mathbb Z _2 \left[ x \right]$, then this equality is always true, as we get that $\left(x-a \right)^2 = \left(x+a \right)^2$ for all $a\in \mathbb Z _2$. This is because in $\mathbb Z _2 = \left\{ 0,1 \right\}$ we have that any element is its own additive opposite (that is to say, $0+0=0$ and $1+1=0$), so $a=-a$ in any of the two cases.</p>
<p>This doesn't happen when working on $\mathbb R$, for example, but I thought it was an interesting thing to add.</p>
|
523,932 | <p>I've got a system of equations which is:<br></p>
<p>$\begin{cases} x=2y+1\\xy=10\end{cases}$</p>
<p>I have gone into this: $x=\dfrac {10}y$.
<br>
How can I find the $x$ and $y$?</p>
| Felix Marin | 85,343 | <p>$$
1 = \left(x - 2y\right)^{2} = x^{2} - 4xy + 4y^{2}
$$</p>
<p>$$
1 + 80 = x^{2} + 4xy + 4y^{2} = \left(x + 2y\right)^{2}
$$</p>
<p>$$
x + 2y = \pm 9\,,
\quad
x = {1 \pm 9 \over 2}\,,
\quad
y = {\pm 9 - 1 \over 4}
$$</p>
<p>$$
\color{#ff0000}{\large\left(x, y\right)
=
\quad
\left(5,2\right)\,,\quad
\left(-4, -\,{5 \over 2}\right)}
$$</p>
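<p>(Verification added by me.) Both candidate pairs can be checked against the original system with exact rational arithmetic:</p>

```python
from fractions import Fraction

# the two solutions read off above
solutions = [(Fraction(5), Fraction(2)), (Fraction(-4), Fraction(-5, 2))]

# each pair must satisfy x = 2y + 1 and xy = 10
checks = [x == 2 * y + 1 and x * y == 10 for x, y in solutions]
```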
|
1,618,699 | <p>Let A be a non-empty subset of $\mathbb{R}$ that is bounded above and put $s=\sup A$<br>
Show that if $s\notin A$ then the set $A\cap (s-ε,s)$ is infinite for any $ε>0$. </p>
<p>This has to be solved using contradiction, by supposing $A\cap (s-ε,s)$ is a finite set. But I am not sure how to proceed after this.</p>
| JMP | 210,189 | <p>The construction comes from analytic continuation.</p>
<p>A function can only be considered valid where it is finite. Often if we extend its domain to complex numbers it remains finite for longer, indeed almost entirely.</p>
<p>A classic example is Riemann's extension of the zeta function.</p>
<p>$\zeta(-1)=-\frac{1}{12}$ with Riemann's zeta function, and $\infty$ with the basic zeta function.</p>
<p>The difference between the two values might be considered relevant, and could provide an insight as to how analytic continuation works.</p>
|
3,050,295 | <p>I'm having a problem with the following equation:</p>
<p>I'm applying <span class="math-container">$(a+b)^2$</span> and <span class="math-container">$(a-b)^2$</span>, but am unable to get the correct answer.</p>
<blockquote>
<p><span class="math-container">$$(\sqrt{3}+i)^{2017} + (\sqrt{3} - i)^{2017}$$</span></p>
</blockquote>
| Coupeau | 626,733 | <p>I would use Euler's formula. Say <span class="math-container">$z = \sqrt{3} + i$</span>.<br>
So <span class="math-container">$\tan(\phi) = 1/\sqrt{3}$</span><br>
and <span class="math-container">$\phi = \pi/6$</span><br>
and <span class="math-container">$r = |z|= \sqrt{1^2 + \sqrt{3}^2} = \sqrt{4} = 2$</span><br>
Meaning <span class="math-container">$z = 2 \cdot e^{i \pi/6}$</span>, <span class="math-container">$\overline z = 2 \cdot e^{-i \pi/6}$</span></p>
<p><span class="math-container">$z^{2017} = 2^{2017} \cdot e^{i \cdot 2017 \cdot\pi/6}$</span>, <span class="math-container">$(\overline z)^{2017} = 2^{2017} \cdot e^{-i \cdot 2017 \cdot\pi/6}$</span></p>
<p>Converting this back from <span class="math-container">$w= |w| \cdot e^{i \cdot \psi}$</span> to the form <span class="math-container">$w = a + ib$</span>
with <span class="math-container">$\operatorname{Re}(w) = a = |w|\cos(\psi)$</span> and <span class="math-container">$\operatorname{Im}(w) = b = |w|\sin(\psi)$</span></p>
<p>gives us
<span class="math-container">$$z^{2017} + (\overline z)^{2017} = \\\
\operatorname{Re}(z^{2017}) + \operatorname{Re}((\overline z)^{2017}) + i\cdot (\operatorname{Im}(z^{2017}) + \operatorname{Im}((\overline z)^{2017})) = \\\
2^{2017} \cos(2017 \cdot \pi / 6) + 2^{2017} \cos(- 2017 \cdot \pi /6) + i\cdot (2^{2017} \sin(2017 \cdot \pi / 6) + 2^{2017} \sin(- 2017 \cdot \pi / 6)) = \\\
2^{2017} \cos(2017 \cdot \pi / 6) + 2^{2017} \cos(2017 \cdot \pi /6) + i\cdot (2^{2017} \sin(2017 \cdot \pi / 6) - 2^{2017} \sin(2017 \cdot \pi / 6)) = \\\
2^{2017} (2 \cdot \cos(2017 \cdot \pi /6)) = \\\
2^{2017} (2 \cdot \cos(\pi /6)) \quad \text{(since } \tfrac{2017\pi}{6} = 336\pi + \tfrac{\pi}{6}\text{)} = \\\
2^{2017} \left(2 \cdot \frac{\sqrt{3}}{2}\right) = \\\
2^{2017} \sqrt{3}
$$</span></p>
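<p>(A sanity check I have added.) Floats cannot hold $2^{2017}$, but the angle-reduction step only depends on the exponent mod $12$; since $2017 \equiv 13 \equiv 1 \pmod{12}$, the same closed form can be tested at $n=13$:</p>

```python
import math

z = complex(math.sqrt(3), 1)     # |z| = 2, arg z = pi/6
n = 13                           # 13 ≡ 2017 (mod 12), small enough for floats

value = z**n + z.conjugate()**n
expected = 2**n * math.sqrt(3)   # the analogue of 2^2017 * sqrt(3)
```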
|
345,844 | <p>Should be simple enough, yet I can't show that there are no monomorphisms $\mathbb{Z}^3\rightarrow \mathbb{Z}^2$. (It is true, right?)</p>
| N. S. | 9,176 | <p><strong>Hint</strong> If $f(x): \mathbb Z^3 \to \mathbb Z^2$ is any morphism then </p>
<p>$$f(1,0,0), f(0,1,0), f(0,0,1) \in \mathbb Z^2 \subset \mathbb Q^2$$</p>
<p>Then, they must be linearly dependent over $\mathbb Q$ since $ \mathbb Q^2$ is a 2 dimensional $\mathbb Q$-vector space. It is trivial to prove that linear dependence over $\mathbb Q$ implies linear dependence over $\mathbb Z$.</p>
<p><strong>Alternately</strong> $f$ is given by a $2 \times 3$ matrix with rational entries. Then the reduced row echelon form of $A$ has all rational entries so $\ker(A)$ contains a vector with all entries rational. Prove now that you can find another vector with all entries integer.</p>
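<p>(A toy instance I have added; the matrix is a hypothetical example, not from the answer.) For a concrete $2 \times 3$ matrix, a rational kernel vector can indeed be scaled to one with integer entries:</p>

```python
from fractions import Fraction

A = [[1, 2, 3],
     [4, 5, 6]]            # hypothetical example matrix

v = [1, -2, 1]             # integer kernel vector (scaled from the rational one)

# A applied to v, using exact rational arithmetic
image = [sum(Fraction(A[i][j]) * v[j] for j in range(3)) for i in range(2)]
```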
|
345,844 | <p>Should be simple enough, yet I can't show that there are no monomorphisms $\mathbb{Z}^3\rightarrow \mathbb{Z}^2$. (It is true, right?)</p>
| Mikasa | 8,581 | <p>If we had such a monomorphism $$\phi:\mathbb Z^3\to\mathbb Z^2$$ then according to the first isomorphism theorem we would get $\mathbb Z^3\cong\phi(\mathbb Z^3)\leq\mathbb Z^2$, which is impossible, since a subgroup of $\mathbb Z^2$ is free of rank at most $2$.</p>
|
2,661,443 | <p>For the equation $2^x = 7$</p>
<p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p>
<p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p>
<p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p>
<p>Using both methods the answer comes to the same which is $2.807$</p>
<p>My question is twofold:</p>
<ol>
<li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li>
<li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li>
</ol>
<p>Thank you</p>
| Davislor | 422,808 | <p>As others have said, the formula works for a logarithm of any base, because of the change-of-base formula. However, the accepted answer says, “The practical reason for using base 10 was a little old fashioned: it allowed the use of tables of logarithms instead of a calculator and reducing these calculations to addition and subtraction.” This raises the question, <em>why did those tables use base 10?</em> After all, we just explained that the same tricks work for any base at all. If you just needed to pick one base to compile a table of and print as a book, or mark on a slide rule, wouldn’t the least-arbitrary choice have been e?</p>
<p>The answer behind the answer is that base-10 logarithms are the easiest for humans without calculators and used to decimal numbers to work with. The log of 1 is 0, the log of 10 is 1, the log of 100 is 2, and so on up, so any one-digit number has a log of zero point something, any two-digit number has a log of one point something, and so forth. So, without a calculator, what’s the log of 300? Well, log 300 is log 3·100, which is log 3 + log 100. The square root of 10 is a little more than 3, so log 3 is a little less than 0.5, and log 100 is exactly 2, so intuitively it’s a little less than 2.5. That makes it really easy to find the log of any number in scientific notation, or move a decimal point left or right.</p>
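<p>(Numerical check added by me.) Both the change-of-base identity and the mental estimate for the log of 300 are easy to confirm:</p>

```python
import math

# change of base: log_2(7) via base-10 logs agrees with the direct computation
via_base_10 = math.log10(7) / math.log10(2)
direct = math.log2(7)

# "a little less than 2.5": log10(300) = log10(3) + 2
log_300 = math.log10(300)
```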
<p>If you did a lot of these, which engineers once had to, you’d quickly get an intuitive sense for it. And doing these problems with base 10 lets you use that number sense to catch silly mistakes: “No, that can’t be right, it’s something thousand, so the log has to be three point something.”</p>
|
77,379 | <p>The task is to show, for an $a\in \mathbb{C}^{\ast}$, that $aB_{1}(1)= B_{|a|}(a)$ </p>
<p>where B denotes a disc </p>
<p>Okay, maybe this is correct: </p>
<p>$aB_{1}(1) = a(e^{i\phi}) = ae^{i\phi} = |a|e^{i\phi} = B_{|a|}(a)$</p>
<p>But this seems very wrong! </p>
| savick01 | 18,493 | <p>Well, your idea is OK. But you should improve some things:</p>
<p>First: If B is a disk and you write $e^{i\phi}$, then it is a parametrization of a circle, not the whole disk. But actually a disk is a union of circles plus the center point, so the strategy is OK.</p>
<p>Second: As @Adam wrote, you can't write $ae^{i\phi}=|a|e^{i\phi}$, because it is not equal if $a \neq |a|$. However it is true that $ae^{i\phi} \in B_0^{|a|}$. So in that way, you can prove that $aB_0^1 \subseteq B_0^{|a|}$ (1). So what remains is that it is not an inclusion but an equality. Since $a \cdot a^{-1}=1$ it is enough to show that $ a^{-1}B_0^{|a|} \subseteq B_0^1$ and proving that is actually the same as (1).</p>
<p>Third: You have to remember that the center of your circle is $1$ and when you write $e^{i\phi}$ it generates a circle around $0$, so you have to add $1$ to it.</p>
<p>P.S. There is no $a^{-1}$ for $a=0$, so it is a special case, but a very simple one.</p>
<p>Your second attempt also should lead to success, but you shouldn't multiply $a$ by the distance between $z$ and $1$ (notice that it led you to write an inequality between complex numbers, which makes no sense!) - you should multiply $a$ by $z$ and look at what the distance is between $az$ and $a$, assuming that the distance between $z$ and $1$ is smaller than $1$.</p>
|
727,752 | <blockquote>
<p>If S is a compact subset of R and T is a closed subset of S,then T is compact.</p>
<p>(a) Prove this using definition of compactness.</p>
<p>(b) Prove this using the Heine-Borel theorem.</p>
</blockquote>
<p>My solution: firstly I should suppose an open cover of T, and I still need to think of the
set S-T. But if S-T were open in R, it could be done, because the open cover of T together with S-T would be an open cover of R. The reality is that S-T is not necessarily an open set in R. My question is: how can we find an open cover which covers S-T but misses T? I don't know how to do this!</p>
<p>In terms of part (b), I know it is bounded, but how do I prove that T is closed?</p>
| user137301 | 137,301 | <p>$T$ is a closed subset of $S$ if and only if $T=C\cap S$ for some $C$ closed in $\mathbb{R}$. But $S$ is closed too, being compact, so $T$ is closed in $\mathbb{R}$ because it is the intersection of two closed sets. This takes care of the remaining part of $(b)$. For $(a)$, $\mathbb{R}\setminus T$ is an open set containing $S\setminus T$.</p>
|
405,772 | <p>I encountered a conformal mapping on the complex plane:$$z\rightarrow e^{i\pi z}$$
and I am not sure where it sends the point at infinity. If I could say something along the lines of: $$\text{Im}(\infty) = \infty$$
Then it would map it to the origin, but there is still a voice in my head saying that this equality is nonsense. And from the usual definition of the imaginary part:$$\text{Im}(z)=\frac{z-\bar{z}}{2i}$$ it makes even less sense.
I'd appreciate some enlightenment.</p>
| gt6989b | 16,192 | <p>You are seeking</p>
<p>$$
\lim_{z \to \infty} e^{i\pi z}.
$$</p>
<p>Note that if you consider $z = x + iy$, then by Euler's formula, we have
$$
e^{i\pi z} = e^{i\pi (x+iy)} = \frac{\cos(\pi x) + i \sin(\pi x)}{e^{\pi y}}.
$$</p>
<p>Now everything depends on along which path $z \to \infty$: if $x$ is fixed and $y \to \infty$, the limit converges to $0$. Alternatively, if $y$ is fixed and $x \to \infty$, the limit does not exist.</p>
<p>Since the value of the limit is path-dependent, the limit does not exist.</p>
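<p>(A numerical illustration I have added.) Since $|e^{i\pi z}| = e^{-\pi y}$ for $z = x + iy$, the two paths behave very differently:</p>

```python
import cmath
import math

def f(z):
    return cmath.exp(1j * math.pi * z)

# path 1: x = 0 fixed, y -> infinity: moduli shrink like e^{-pi y}
along_imaginary = [abs(f(complex(0, y))) for y in (1, 5, 10)]

# path 2: y = 0 fixed, x -> infinity: values stay on the unit circle, oscillating
along_real = [f(complex(x, 0)) for x in (10, 10.5, 11)]
```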
|
4,188,020 | <p>I know that there exists a connection on a principal bundle and via parallel transport it is possible to define a a covariant derivative on the associated bundle.</p>
<p>However, can we also define a covariant derivative on the principal bundle. I.e. something that can differentiate a section along a vector field? Or do we need a linear structure like the one in a vector bundle to 'take derivatives'?</p>
| jw_ | 671,015 | <p>It is defined in <a href="https://en.wikipedia.org/wiki/Exterior_covariant_derivative" rel="nofollow noreferrer">Wikipedia</a> as <span class="math-container">$D\phi(v_0,...,v_k)=d\phi(hv_0,...,hv_k)$</span> where <span class="math-container">$h$</span> is the projection onto the horizontal subspace according to the given principal connection of the principal G-bundle <span class="math-container">$\pi:P\to M$</span>.</p>
<p>If <span class="math-container">$\rho:G\to GL(V)$</span> is a representation of <span class="math-container">$G$</span> on some vector space <span class="math-container">$V$</span>, then a tensorial (or basic, i.e. G-equivariant and horizontal) k-form of type <span class="math-container">$\rho$</span> on <span class="math-container">$P$</span> can be identified with a <span class="math-container">$P×_\rho V$</span>-valued k-form on M.</p>
<p>As other answers/comments said, it is not a covariant derivative in the common sense, but it is a covariant derivative in the sense that if <span class="math-container">$\phi$</span> is a <span class="math-container">$P×_\rho V$</span>-valued k-form on M, identified with the tensorial form <span class="math-container">$\hat\phi$</span> on <span class="math-container">$P$</span>, and <span class="math-container">$\nabla\phi$</span> is the <span class="math-container">$P×_\rho V$</span>-valued (k+1)-form on M, where <span class="math-container">$\nabla$</span> is the covariant derivative on <span class="math-container">$P×_\rho V$</span> corresponding to the given principal connection, and <span class="math-container">$\widehat{\nabla\phi}$</span> is the identification of <span class="math-container">$\nabla\phi$</span> with a tensorial form on <span class="math-container">$P$</span>, then <span class="math-container">$D\hat\phi=\widehat{\nabla\phi}$</span>.</p>
|
4,613,214 | <p>I have to do a large modulo but my answer is incorrect.<br />
I am given:<br />
<span class="math-container">$$ 111^{4733} \mod 9467 $$</span></p>
<ul>
<li>9467 prime</li>
<li>111 and 9467 are coprime</li>
<li>Also note that 4733*2=9466<br />
So we can apply Euler's theorem</li>
</ul>
<p><span class="math-container">$$ 111^{4733} = 111^{9466 \cdot \frac{1}{2}} = (111^{9466})^{\frac{1}{2}}=(1)^{\frac{1}{2}}=1 \mod 9467 $$</span></p>
<p>However, the correct answer is
<span class="math-container">$$ 111^{4733} \equiv 9466 \mod 9467 $$</span>
What is the approach to solving it?<br />
<strong>EDIT1:</strong> I understand that exponent 1/2 is not allowed and I also do not want to use Legendre symbol, as we have not studied it in the course. Besides, I want to solve it without a calculator. Moreover, I should mention that this is not homework, but this is homework-solutions that were given to us to prepare for the exam. I just try to understand the approach of how to solve this and similar exponents. Hence, the complete solution would be fine.<br />
<strong>EDIT2:</strong> It turned out that a calculator was allowed in this specific exercise. Other than that I will mark the Legendre symbol, as the correct solution for this.</p>
| Ted | 15,012 | <p>Your calculation is wrong because of your 1/2 exponent. The operation <span class="math-container">$x^{1/2}$</span> doesn't make sense when you are working modulo <span class="math-container">$p$</span>.</p>
<p>A number may have 2 square roots mod <span class="math-container">$p$</span>. This is of course also true for, say, the positive real numbers. But over the positive real numbers you can say "choose the <em>positive</em> square root" and get a reasonably well behaved function satisfying <span class="math-container">$(xy)^{1/2} = x^{1/2} y^{1/2}$</span> and other expected properties. On the other hand, when working mod <span class="math-container">$p$</span>, the concept of <em>positive</em> has no meaning. There is no way to make a consistent choice that will satisfy expected properties.</p>
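<p>(Added check, runnable in Python.) Euler's criterion says <span class="math-container">$111^{(p-1)/2} \bmod p$</span> is <span class="math-container">$\pm 1$</span> according to whether <span class="math-container">$111$</span> is a quadratic residue mod <span class="math-container">$p$</span>; the built-in three-argument <code>pow</code> does the modular exponentiation efficiently:</p>

```python
p = 9467
e = (p - 1) // 2            # = 4733, the exponent from the question

result = pow(111, e, p)     # fast modular exponentiation
# result == p - 1 == 9466, i.e. 111^4733 ≡ -1 (mod 9467),
# so 111 is a quadratic non-residue mod 9467
```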
|
4,242,561 | <p>Let <span class="math-container">$T: R^3 \rightarrow R^3$</span> be a linear transformation such that <span class="math-container">$T(x,y,z) = (x,0,0)$</span>. Which implies that the matrix that represents the transformation is <span class="math-container">\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}</span>
Which of these would be the correct way to name the matrix?</p>
<p><span class="math-container">$T= \begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}$</span> or <span class="math-container">$[T] = \begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}$</span>
or perhaps none of them is right? I'm getting extremely confused with the notation. I tried to search online for a while for an answer, but I've been unable to find a proper answer regarding the notation.</p>
| Mark S. | 26,369 | <p>It depends on your book/the context.</p>
<p>If the context is such that matrices assume the standard basis and stand for the corresponding linear transformation, then the first (writing that <span class="math-container">$T$</span> is the matrix) is fine.</p>
<p>If we distinguish matrices from linear transformations but assume the standard basis when one is not specified, then the second <span class="math-container">$[T]$</span> might be more appropriate.</p>
<p>If we need to specify the basis always, and <span class="math-container">$\mathcal E$</span> represents the standard basis, then you should probably write something like <span class="math-container">$[T]_{\mathcal E}$</span> instead of either.</p>
<p>In general, context typically makes it clear.</p>
|
2,086,006 | <p>You have $7$ boxes in front of you and $140$ kittens are sitting side-by-side inside the
boxes, $20$ in each box. You want to take some kittens as your pets. However, the
kittens are very cowardly. Each time you choose a kitten from a box, the kittens that
are in that box to the left of it go to the box on the left, and the kittens that are in that box
to the right go to the box on the right. If they don’t find a box in that direction, they
simply run away. After taking a few kittens, you see that all the other kittens have run
away. At least how many kittens have you taken?</p>
| DonAntonio | 31,254 | <p>$$n\left(\sqrt[n]{ea}-\sqrt[n]a\right)=n\sqrt[n]a\left(\sqrt[n]e-1\right)=\sqrt[n]a\,\frac{\sqrt[n]e-1}{\frac1n}\xrightarrow[n\to\infty]{}1\cdot\left[\left(e^x\right)'\right]_{x=0}=1\cdot e^0=1$$</p>
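<p>(Numerical check I have added; $a=5$ is an arbitrary positive constant.) The expression indeed approaches $1$ as $n$ grows:</p>

```python
import math

a = 5.0
values = [n * ((math.e * a) ** (1 / n) - a ** (1 / n))
          for n in (10**3, 10**5, 10**7)]
```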
|
2,080,716 | <p>I have the quadratic form
$$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p>
<p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
| Kanwaljit Singh | 401,635 | <p>It's simple: find the sum of the terms divisible by 21,
and add this sum back.</p>
<p><strong>Sum of first 100 terms excluding terms divisible by 3 and 7 = Sum of first 100 terms - Sum of terms divisible by 3 - Sum of terms divisible by 7 + Sum of terms divisible by 21.</strong></p>
<p>Because when you subtract the sum of the terms divisible by 3 and the sum of the terms divisible by 7, the terms which are divisible by both get subtracted twice. So add back the sum of the terms divisible by 21.</p>
<blockquote>
<p>Update - Answer with explanation.</p>
</blockquote>
<p>Numbers divisible by 3 are 3, 6, 9, ...., 99.</p>
<p>Numbers divisible by 7 are 7, 14, 21, 28, ...., 98.</p>
<p>Numbers divisible by 21 are 21, 42, 63, 84.</p>
<p>Now we have sum of first 100 numbers not divisible by 3 and 7 is</p>
<p>(1 + 2 + 3 + .... + 100) - (3 + 6 + 9 + .... + 99) - (7 + 14 + 21 + .... + 98) + (21 + 42 + 63 + 84)</p>
<p>= (1 + 2 + 3 + .... + 100) - 3(1 + 2 + 3 + .... + 33) - 7(1 + 2 + 3 + .... + 14) + 21(1 + 2 + 3 + 4)</p>
<p>Using formula -</p>
<p>$$\sum_{k = 1}^{n} k = \tfrac{1}{2} (n)(n+1)$$</p>
<p>We have,</p>
<p>$ = \frac12 (100)(101) - 3 \cdot \frac12 (33)(34) - 7\cdot \frac12 (14)(15) + 21 \cdot \frac12 (4)(5)$</p>
<p>$= 5050 - 1683 - 735 + 210$</p>
<p>$= 2842$</p>
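<p>(Brute-force verification added by me.) A direct enumeration confirms both the individual pieces and the final total:</p>

```python
# sum of 1..100 skipping multiples of 3 and of 7
total = sum(k for k in range(1, 101) if k % 3 != 0 and k % 7 != 0)

all_terms = sum(range(1, 101))                          # 5050
div3  = sum(k for k in range(1, 101) if k % 3 == 0)     # 1683
div7  = sum(k for k in range(1, 101) if k % 7 == 0)     # 735
div21 = sum(k for k in range(1, 101) if k % 21 == 0)    # 210
```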
|
3,355,542 | <p>Let <span class="math-container">$f \in L^{1} [0,1]$</span> such that for all smooth function <span class="math-container">$h: [0,1] \to \mathbb R$</span> with <span class="math-container">$h(0) = h(1) = 0$</span> one has <span class="math-container">$\int_{0}^{1} f(t) h'(t) = 0$</span>. Prove that <span class="math-container">$f$</span> admits a representative which is almost everywhere differentiable on <span class="math-container">$[0,1]$</span> with <span class="math-container">$f' =0$</span>. </p>
<p>I know that without the boundary condition <span class="math-container">$h(0)=h(1) =0$</span> the statement above is well known. </p>
<p>(From the comments below:)
My goal in asking this question was in fact to clarify the answer provided here: <a href="http://www.mathoverflow.net/a/341462/108824" rel="nofollow noreferrer">MO link</a>. See my comments under the answer. </p>
| Aphelli | 556,825 | <p>Consider the linear form <span class="math-container">$L_f:\phi \longmapsto \int_0^1{f\phi}$</span>, defined on the vector space <span class="math-container">$V$</span> of functions <span class="math-container">$\phi$</span> that are smooth and compactly supported in <span class="math-container">$(0,1)$</span>. </p>
<p>By the assumption you made on <span class="math-container">$f$</span>, you can easily show that if <span class="math-container">$\phi$</span> is in the kernel of <span class="math-container">$L_1$</span> (the functional for <span class="math-container">$f\equiv 1$</span>, i.e. <span class="math-container">$\phi\longmapsto\int_0^1\phi$</span>; hint: if <span class="math-container">$L_1\phi=0$</span>, then <span class="math-container">$\phi$</span> has an antiderivative belonging to <span class="math-container">$V$</span>), then <span class="math-container">$L_f(\phi)=0$</span>. </p>
<p>By standard linear algebra, this implies that <span class="math-container">$L_f=L_{\alpha}$</span> for some constant <span class="math-container">$\alpha$</span>. By standard analysis it implies that <span class="math-container">$f=\alpha$</span> ae. </p>
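<p>(Spelling out the "standard linear algebra" step; this elaboration is mine.) Fix any <span class="math-container">$w\in V$</span> with <span class="math-container">$\int_0^1 w = 1$</span> and set <span class="math-container">$\alpha = L_f(w)$</span>. For arbitrary <span class="math-container">$\phi\in V$</span>,</p>

```latex
\phi \;=\; \underbrace{\left(\phi - \Bigl(\int_0^1 \phi\Bigr) w\right)}_{\in\,\ker L_1}
\;+\; \Bigl(\int_0^1 \phi\Bigr) w,
\qquad\text{hence}\qquad
L_f(\phi) \;=\; \alpha \int_0^1 \phi \;=\; L_\alpha(\phi).
```

<p>Then <span class="math-container">$\int_0^1 (f-\alpha)\phi = 0$</span> for every <span class="math-container">$\phi\in V$</span>, which gives <span class="math-container">$f=\alpha$</span> almost everywhere.</p>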
|
1,986,172 | <p>I am asked to simplify $(\sqrt{t^3}) \times (\sqrt{t^5})$.</p>
<p>I get up to $\sqrt[3]{t^3}\times \sqrt{t^5}$ but I am not sure how to simplify this further as now roots are involved and not just powers.</p>
<p>When I checked the solutions the final answer should be $t^4$ but I'm not sure how this is achieved.</p>
| Emilio Novati | 187,568 | <p>If my edit is correct you have:
$$
\sqrt{t^3}\times \sqrt{t^5}=\sqrt{t^3\times t^5 }=\sqrt{t^8}=t^4
$$</p>
<p>or, with fractional exponents:
$$
\sqrt{t^3}\times \sqrt{t^5}=t^{\frac{3}{2}}t^{\frac{5}{2}}=t^{\frac{3}{2}+\frac{5}{2}}=t^{\frac{8}{2}}=t^4
$$</p>
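<p>(A quick numerical spot-check I have added, at the arbitrary point $t=2.5$.)</p>

```python
import math

t = 2.5                                   # arbitrary nonnegative sample
lhs = math.sqrt(t**3) * math.sqrt(t**5)   # sqrt(t^3) * sqrt(t^5)
rhs = t**4                                # claimed simplification
```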
|
211,290 | <p>Is it possible to import graphs generated by <code>geng</code> (a tool from <a href="http://pallini.di.uniroma1.it/" rel="noreferrer">the nauty suite</a>) one by one, rather than all at once. If one could also specify not only the order but also the number of edges that would be great, but the main thing is to be able to get one at a time rather than dump them all in memory at once. Thanks</p>
| Bill | 18,890 | <p>I guessed where to insert a missing <code>)</code> in Eq2. If that was right then</p>
<pre><code>Plot[ReIm[Eq1[d]-0.08*Eq2[d]],{d,-1/2,1/2}]
</code></pre>
<p>shows you approximately where the two roots are so you can give <code>FindRoot</code> good starting estimates.</p>
|
4,041,140 | <p>This is a problem which is homework in my math course. The problem states that you must find two distinct, non-zero matrices (size 2x2) such that A * B + A + B = 0.</p>
<p>I'm not really looking for an answer, but rather the methodology I should be using to come to this answer. It seems like the easiest way to do this would be through brute force, but I am somewhat slow when it comes to math and so I was hoping there might be an easier method out there. Thank you in advance.</p>
| José Carlos Santos | 446,262 | <p>Take any number <span class="math-container">$a\ne-1$</span>. Then take <span class="math-container">$b=-\frac a{a+1}$</span>. With this choice, <span class="math-container">$ab+a+b=0$</span>. Now, take <span class="math-container">$A=\left[\begin{smallmatrix}a&0\\0&a\end{smallmatrix}\right]$</span> and <span class="math-container">$B=\left[\begin{smallmatrix}b&0\\0&b\end{smallmatrix}\right]$</span>.</p>
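<p>(Verification added by me, with $a = 1$, so $b = -\tfrac12$.) Exact arithmetic confirms $AB + A + B = 0$ for the scalar matrices suggested:</p>

```python
from fractions import Fraction

def scalar_matrix(c):
    # 2x2 scalar matrix c * I
    return [[c, 0], [0, c]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

a = Fraction(1)            # any a != -1 works
b = -a / (a + 1)           # here b = -1/2
A, B = scalar_matrix(a), scalar_matrix(b)

result = matadd(matadd(matmul(A, B), A), B)   # A*B + A + B
```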
|
1,048,526 | <p>I'm trying to bound the quantity
<span class="math-container">$\langle \nabla \Psi(x),\bar{x}-x \rangle$</span> above, with the bound depending on <span class="math-container">$\|x-\bar{x}\|$</span> and perhaps also on <span class="math-container">$\|x-y\|$</span> for fixed (but not varying) points <span class="math-container">$y$</span>. Here <span class="math-container">$\Psi(x):X\rightarrow \mathbb{R}$</span> with <span class="math-container">$X$</span> a finite dimensional Banach space (or <span class="math-container">$\mathbb{R}^n$</span>, whatever)
And <span class="math-container">$\Psi(x)$</span> is a <span class="math-container">$\mu$</span>-strongly convex function (with <span class="math-container">$\mu$</span>>0) that can be written as <span class="math-container">$\Psi=f+g$</span> with <span class="math-container">$f$</span> convex and differentiable with <span class="math-container">$\nabla f$</span> <span class="math-container">$L$</span>-Lipschitz continuous and <span class="math-container">$g$</span> <span class="math-container">$\mu$</span>-strongly convex.</p>
<p>I know that if <span class="math-container">$\Psi$</span> was differentiable and its gradient was <span class="math-container">$L$</span>-Lipschitz continuous one could fix some point <span class="math-container">$x^*$</span> on the optimal set and bound as</p>
<p><span class="math-container">$\langle \nabla \Psi(x), \bar{x}-x \rangle \leq \|\nabla \Psi(x)\|\|\bar{x}-x\| = \|\nabla \Psi(x)-\nabla \Psi(x^*)\|\|\bar{x}-x\| \leq L\|x-x^*\|\|\bar{x}-x\|$</span></p>
<p>And the bound is done.
So my question is, is there an analogous of this property on the non-differentiable case? Like, I know that I can pick a point <span class="math-container">$x^*$</span> on the optimal set such that <span class="math-container">$0 \in \partial \Psi(x^*)$</span>, but then can I say that for a <span class="math-container">$v \in \partial \Psi(x)$</span> it holds</p>
<p><span class="math-container">$\|v\| = \|v-0\| \leq L\|x-x^*\|$</span> or something on that line?</p>
<p>Any help is appreciated</p>
| Dirk | 3,148 | <p>The answer is no. On the real line consider $\Phi(x)=|x|$ (and add some smooth convex function with minimum at zero if you like). Then the minimum is at zero, but the subgradient at any positive point is $1$.</p>
|
3,858,362 | <p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span>
We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4>0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
| Michael Rozenberg | 190,319 | <p>It's <span class="math-container">$$x^2(x-4)-4(x-4)=0$$</span> or
<span class="math-container">$$(x-4)(x^2-4)=0.$$</span>
Can you end it now?</p>
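<p>To double-check the factorization and the domain restriction numerically, here is a short Python sketch (my own addition, not part of the original answer):</p>

```python
# Roots suggested by the factorization (x - 4)(x^2 - 4) = 0
cubic = lambda x: x**3 - 4*x**2 - 4*x + 16
roots = [4, 2, -2]
assert all(cubic(r) == 0 for r in roots)

# The domain requires x^2 - 5x + 4 > 0, i.e. x < 1 or x > 4,
# so x = 4 and x = 2 are excluded and only x = -2 remains.
valid = [r for r in roots if r**2 - 5*r + 4 > 0]
print(valid)
```

<p>Only $x=-2$ survives the domain condition.</p>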
|
3,008,162 | <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be well-ordered sets, and suppose <span class="math-container">$f:A\to B$</span> is an
order-reversing function. Prove that the image of <span class="math-container">$f$</span> is finite.</p>
<p>I started by supposing not. Then we must have that the image of <span class="math-container">$f$</span>, or the set <span class="math-container">$\{f(x)\in B:x\in A\}$</span>, has infinite cardinality. If this is the case then we must have that <span class="math-container">$\vert{\{f(x)\in B:x\in A\}}\vert\geq \aleph_0$</span>, which also means there exists a strictly order-preserving function <span class="math-container">$g:\mathbb{N}\to \{f(x)\in B:x\in A\}$</span>. </p>
<p>The contradiction I am trying to reach is that this would imply that there exists an order-reversing function from <span class="math-container">$\mathbb{N}$</span> to an infinite image which is a subset of a well-ordered set which can't happen but I don't know how to close the gap in the argument. </p>
| Hagen von Eitzen | 39,174 | <p>By the givens, each non-empty subset of <span class="math-container">$f(A)$</span> has both a minimal and a maximal element: a minimal one because <span class="math-container">$B$</span> is well-ordered, and a maximal one because its preimage in <span class="math-container">$A$</span> has a minimal element and <span class="math-container">$f$</span> is order-reversing. Conclude that <span class="math-container">$f(A)$</span> cannot contain a subset isomorphic to <span class="math-container">$\omega$</span>, and hence must be finite.</p>
|
2,199,303 | <p>Consider the DE $$y''+\lambda y=0$$ where $\lambda$ is a constant </p>
<p>subject to the boundary conditions $$y(0)=0$$ and $$y(a)=0$$ where $a$ is a positive constant</p>
<p>I want to find the eigenvalues and eigenfunctions related to this problem</p>
<p>My attempt:</p>
<p>The auxiliary equation is $$m^2-\lambda=0\implies m=\sqrt{-\lambda}$$</p>
<p>Thus we have two cases</p>
<p>Case (i)
$$\lambda<0$$ in which we have real different roots
so let $$\lambda=-k^2\implies m=\pm k$$
thus the solution is $$y=Ae^{kx}+Be^{-kx}$$</p>
<p>Now when $y(0)=0$ we have that $$0=Ae^{k(0)}+Be^{-k(0)}\implies -B=A$$ since $k\ne 0$</p>
<p>Now when $y(a)=0$ we have that $$0=Ae^{ka}+Be^{-ka}\implies Ae^{2ka}=-B=A$$ so $A(e^{2ka}-1)=0$, and since $ka\ne 0$ this forces $A=B=0$; thus there are no eigenvalues when $\lambda <0$</p>
<p>Can anyone tell me if I have made a mistake somewhere in this, because I'm unsure what to do from here. I know the next case to check will be when $\lambda>0$, which would give complex roots, but I would like to know if my approach up to now is correct. Thanks for taking the time to read this; any help would be appreciated. Also, I know that when I multiply through by $$q^*=\frac{1}{a_2}e^{\int\frac{a_1}{a_2}dx}=1$$ the equation is already in normal self-adjoint form.</p>
| Disintegrating By Parts | 112,478 | <p>I'll suggest a general method that separates the equations at the endpoints. This does not directly answer your question, but the method is worth knowing.</p>
<p>For the second order equation $y''+\lambda y=0$, there is no non-trivial solution with $y(0)=0=y'(0)$. So, by scaling $y$ if necessary, any non-trivial solution with $y(0)=0$ must be a solution of
$$
y''+\lambda y = 0, \;\; y(0)=0,\;\; y'(0)=1,
$$
which has unique solution
$$
y_{\lambda}(x) = \frac{\sin(\sqrt{\lambda}x)}{\sqrt{\lambda}}.
$$
This version of the solution works for $\lambda=0$ by taking a limit:
$$
y_{0}=\lim_{\lambda\rightarrow 0}y_{\lambda}(x) = x.
$$
(This technique eliminates special cases because the limits always exist.)</p>
<p>Any non-trivial solution of the boundary value problem where $y(0)=0=y(a)$ must be a non-zero constant multiple of $y_{\lambda}$, which means that the only values of $\lambda$ for which there are non-trivial solutions must satisfy $y_{\lambda}(a)=0$. $\lambda=0$ does not work, which is easily checked because $y_0(x)=x$ does not vanish at $a$. The non-zero $\lambda$ must satisfy
$$
\sin(\sqrt{\lambda}a) = 0 \\
\implies \sqrt{\lambda}a = \pm \pi,\pm 2\pi,\pm 3\pi,\cdots \\
\implies \lambda = \frac{n^2\pi^2}{a^2},\;\; n=1,2,3,\cdots.
$$</p>
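<p>As a quick numerical check of the eigenvalue formula (this snippet is my illustration, not part of the original derivation), one can verify that $y_{\lambda}(a)$ vanishes at $\lambda=n^2\pi^2/a^2$:</p>

```python
import math

a = 2.5  # an arbitrary positive constant for the check
for n in range(1, 6):
    lam = (n * math.pi / a) ** 2                         # candidate eigenvalue n^2 pi^2 / a^2
    y_a = math.sin(math.sqrt(lam) * a) / math.sqrt(lam)  # y_lambda evaluated at x = a
    assert abs(y_a) < 1e-12                              # boundary condition y(a) = 0 holds
print("eigenvalues check out for n = 1..5")
```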
|
1,325,432 | <p>$f(x) = x$ , $f(x+2\pi) = f(x) $ on $ [-\pi , \pi] $ </p>
<p>How do I know whether this function is even or odd? My book says odd, but I don't understand how to work this out.</p>
<p>also why does $a_0 = 0$ and $a_n = 0$? </p>
<p>Since it's an odd function, I thought we use the even extension? </p>
<p>i.e $$ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)dx $$</p>
<p>but the answer is </p>
<p>$$ b_{n} = \frac{-2}{n}\cos(n\pi) = \frac{2(-1)^{n+1}}{n} $$</p>
| albo | 22,610 | <p>If $f(-x)=-f(x)$, then we say $f$ is odd. On the other hand if $f(-x)=f(x)$ we say $f$ is even.</p>
<p>The general Fourier representation of $f$ is </p>
<p>$$
f(x)=\frac{a_0}{2}+\sum_{n=1}^\infty \biggl[ a_{n}\cos\biggl(\frac{n\pi x}{L}\biggl)+b_{n} \sin\biggl(\frac{n\pi x}{L}\biggl) \biggl]\qquad for~-L\leq x\leq L
$$
where
\begin{align*}
a_0&=\frac{1}{L}\int_{-L}^{L} f(x)dx \\
a_n&=\frac{1}{L}\int_{-L}^{L} f(x)\cos\biggl(\frac{n\pi x}{L}\biggl)dx \\
b_n&=\frac{1}{L}\int_{-L}^{L} f(x)\sin\biggl(\frac{n\pi x}{L}\biggl)dx
\end{align*}
And we represent even functions using Fourier cosine since cosine is an even function. For the same reason we use sine function to represent odd function.</p>
<p>In your case $f$ is an odd function that is why the coefficient of cosine becomes zero. </p>
<p>To calculate $b_n$, just plug in the values $L=\pi$ and $f(x)=x$
$$
b_n =\frac{1}{\pi}\int_{-\pi}^{\pi} x\sin(nx)dx
$$
<strong>Hint</strong>: use integration by parts to evaluate the integral. </p>
<p>Hence the answer!</p>
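<p>If you want to confirm the closed form numerically, here is a small Python check (my addition; it approximates the integral with a midpoint rule instead of integration by parts):</p>

```python
import math

def bn_numeric(n, steps=100_000):
    # midpoint-rule approximation of (1/pi) * integral_{-pi}^{pi} x sin(nx) dx
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        total += x * math.sin(n * x)
    return total * h / math.pi

for n in range(1, 6):
    closed_form = 2 * (-1) ** (n + 1) / n   # the b_n stated above
    assert abs(bn_numeric(n) - closed_form) < 1e-5
print("b_n matches 2(-1)^(n+1)/n for n = 1..5")
```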
|
4,160 | <p>I am a guest here, having responded to a general invitation extended to the <a href="https://stats.stackexchange.com/questions">Cross Validated</a> community, to possibly contribute answers whenever some question related to Statistics comes up in this site.
I do not teach Mathematics, but I do occasionally teach Statistics. And one of the less obvious and most difficult aspects to teach, is the difference between Descriptive and Inferential Statistics. </p>
<p>To me of course, it seems pretty clear: Descriptive Statistics just summarize some characteristics of a specific data set. Inferential Statistics is our attempt to draw inferences about something "larger" than the data set available. How do we manage that? <em>By making a whole new set of assumptions</em>. And what do we do after? <em>We use the exact same results we derived from Descriptive Statistics</em> -but which now lead us to totally different conclusions in nature and in scope. </p>
<p>And here lies the problem: these additional assumptions are simply sewn alongside the <em>tools</em> of Descriptive Statistics. And students get uneasy: in the previous (Descriptive Statistics) class, this was just the "average of the data set", "a centrality measure". How on earth the exact same number, calculated in the exact same way, has now become "an estimate of the population mean" that moreover has been derived through the interaction of the data set with a <em>function of random variables</em>,(the estimator), a function that is (say), "unbiased and asymptotically consistent"? </p>
<p>The problem is not whether the concepts themselves need work and mental effort to understand. The problem is that this "switch of vision" of the same thing (the data set), from a "vector of numbers" to a "set of realizations of distinct random variables that belong to the same (statistical) population, forming a sample of this population" is so big, that, the consequent use of the exact same tools and results seems in total disharmony: surely, such a big step in the set up should lead to some brand new tools also... and it does, but mostly "later on", while the tools of Descriptive Statistics remain prominent, with maybe minor modifications (like bias-correction). </p>
<p>The bigger problem, is that the true problem takes time to show: students may play along, some may even like this new stochastic and probabilistic world, -but I keep getting the feeling that deep down, they feel that all the theoretical apparatus of Inferential Statistics is just an ingenious way to make something out of nothing (or too much out of too little), since after all, we keep adding the values in the data set and we divide by their number... </p>
<p>Since "too much out of too little" is demonstrably <em>not</em> the case (if the general public knew how many procedures they consider deterministically controlled are in reality driven by statistical algorithms, I suspect they would have a serious panic attack), I believe it is important to find ways to deal with this.<br>
One way would be to <em>start</em> with Inferential Statistics, and get rid of the notion that "Descriptive Statistics are a good introduction and familiarization step" (I just argued that they are not). </p>
<p><strong>My question(s)?</strong> <strong>1)</strong> To those that teach Statistics, what are your experiences and how do you deal with the passage from Descriptive to Inferential Statistics?<br>
And to everybody,<br>
<strong>2)</strong> what are some other fields in Mathematics where such "structural breaks" happen, i.e. where objects and concepts already taught, acquire a <em>totally different meaning</em> by activating a new set of assumptions? And how do you teach that?</p>
| DavidButlerUofA | 1,853 | <p><strong>On the (false) distinction between descriptive and inferential statistics</strong></p>
<p>In my view, it is rare that your only purpose is simply to describe the data you have. Even a simple graph is usually used to make a tentative hypothesis about the relationship between variables or the distribution of a single variable, and this is really the beginnings of inferential statistics. Also, presenting means in a research paper is usually to forward your argument about differences in populations. In that sense the mean is already a representation of the population and not technically only a representation of your sample.</p>
<p>On the other hand, if you really want to investigate the data you currently have, then summary stats are not usually fit for the purpose - you are often much more interested in identifying outliers and investigating them further. For example you might notice a negative skew in your class results, but then you would naturally be interested in those students in that left tail and how you might help them, so you end up looking at the individual data points. Even if you do calculate a summary, such as the pass rate, then your thoughts naturally turn to how you can make the pass rate higher in future students, and now you're thinking about a population beyond your current class.</p>
<p>Even the usual measures of central tendency are problematic. Many students I talk to express concern that they remove so much of the data they are trying to describe, and their natural instinct is instead to talk about clumps in the data or the percentage in a certain zone. It seems to me that the choice of the mean is actually guided by the estimation if a parameter, rather than actually being natural way for people to summarise data.</p>
<p>Finally, in practice, when a statistician does analyse data with the view to make conclusions about the population, they <em>always</em> draw graphs and calculate summaries. But we have a hard time convincing students that they should do this. I wonder if the dichotomy we set up between descriptive statistics and inferential statistics encourages students not to draw graphs. It seems they have learned that graphs only describe the data you have and they think they therefore cannot help in the process of making inferences.</p>
<p><strong>Big ideas of variability, population and distribution</strong></p>
<p>It distresses me that even though we know that stats is built on the big ideas of variability, population and distribution, we tend to forget about them when teaching certain parts of the course.</p>
<p>Even when teaching probability, we often focus on specific events, whereas in fact the only reason any one event has a probability is because it is in comparison to all the other events that could have been.</p>
<p><a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1751-5823.2007.00029.x/full#ss4" rel="nofollow">This article</a> reviews research into learning statistics, and the section I have linked to is the part about those big ideas and is well worth a read.</p>
<p><strong>My experience teaching all stats as inference</strong></p>
<p>To orient you, I am coordinator of a Maths Learning Centre at a university, so I talk to many students from many disciplines about their stats. I also do guest lectures on stats within courses (such as the med students during their research course).</p>
<p>Taking the previous discussion into account, I decided that when helping students understand statistics, I would try to remove the distinction between descriptive and inferential stats. I tell them that the main purpose of data collection and analysis is actually to make conclusions about populations, and that descriptive statistics are the essential first step. </p>
<p>I talk about distributions very early on, usually when they are studying descriptive stats. I introduce the big idea that any number you might measure or calculate has a distribution that describes all the possibilities and how likely they are. Your data is always a sample of the data you could have had, and the mean is just one mean that could have happened. Yet it has the tendency to be close to that central clump of the distribution, which is why it's pretty good as a measure of centre if your view is towards inference.</p>
<p>I find that recognising at the outset that almost all stats has a view to inference matches best with the reasons why they are forced to learn stats (generally it's a research methods course), and also helps to unify what they are learning in their stats course.</p>
<p>The concept of "all things have a distribution" helps to get them to understand why the calculations for test statistics need to be different, because we need to create something with a distribution we actually know. It also helps them to pull away from the data itself to imagine the bigger picture of the distribution/population it came from.</p>
|
1,968,541 | <p>$$144x^5 − 121x^4 + 100x^3 − 81x^2 − 64x + 49 = 0 $$</p>
<p>I re-wrote it as </p>
<p>$$ 12^2x^5 - 11^2x^4 + 10^2x^3 - 9^2x^2 - 8^2x + 7^2 = 0 $$</p>
<p>And then as $$ \sum_{k=0}^{5} (k+7)^2(-1)^kr^k = 0 $$</p>
<p>But I don't know what to do with that. Thanks for any help!</p>
| Len West | 377,227 | <p>Assume $n$ is an integer solution.</p>
<p>Then (x-n) would be a factor of the polynomial.</p>
<p>Then n would have to be a divisor of 49.</p>
<p>The only possibilities for n are positive or negative 1, 7, & 49. </p>
<p>Substitution of these 6 possibilities shows that none are solutions.</p>
<p>Therefore no integer solutions exist.</p>
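<p>The substitution step can be checked mechanically; the following Python lines (my addition) confirm that none of the six candidates is a root:</p>

```python
# the quintic from the question
p = lambda x: 144*x**5 - 121*x**4 + 100*x**3 - 81*x**2 - 64*x + 49
candidates = [1, -1, 7, -7, 49, -49]   # the divisors of the constant term 49
assert all(p(c) != 0 for c in candidates)
print("no integer roots among", candidates)
```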
|
2,106,003 | <p>I was just reading about the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">Banach–Tarski paradox</a>, and after trying to wrap my head around it for a while, it occurred to me that it is basically saying that for any set A of infinite size, it is possible to divide it into two sets B and C such that there exists some mapping of B onto A and C onto A.</p>
<p>This seems to be such a blatantly obvious, intuitively self-evident fact, that I am sure I must be missing something. It wouldn't be such a big deal if it was really that simple, which means that I don't actually understand it.</p>
<p>Where have I gone wrong? Is this not a correct interpretation of the paradox? Or is there something else I have missed, some assumption I made that I shouldn't have?</p>
| hexomino | 314,970 | <p>In addition to the other answer, you might be interested to learn that extending the statement of the Banach-Tarski paradox to $\mathbb{R}$ or $\mathbb{R}^2$ doesn't work.</p>
<p>See <a href="https://terrytao.wordpress.com/2009/01/08/245b-notes-2-amenability-the-ping-pong-lemma-and-the-banach-tarski-paradox-optional/">here</a> for a more detailed discussion.</p>
<p>This shows that the statement is in fact deeper than providing bijections between infinite sets.</p>
|
1,255,311 | <p><img src="https://i.stack.imgur.com/5V9e0.png" alt="enter image description here"></p>
<p>I understand inner product space with vectors, but the conversion to functions is throwing me off. Also why do they use an integral here, I've always seen summations. I think I'm missing something with notation here. Any help/hints would be appreciated. </p>
| Brandon Suarez | 235,266 | <p>\begin{align}Z&=Ae^{it}+Be^{-it}\\
&=A(\cos(t)+i\sin(t))+B(\cos(t)-i\sin(t))\\
&=(A+B)\cos(t)+i(A-B)\sin(t)\\&=x+iy\\
\implies
x&=(A+B)\cos(t)\\
y&=(A-B)\sin(t)\end{align}</p>
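<p>A quick numerical sanity check of this identity (my addition, assuming real $A$ and $B$):</p>

```python
import cmath
import math
import random

random.seed(0)
# verify A e^{it} + B e^{-it} = (A+B) cos t + i (A-B) sin t for real A, B
for _ in range(100):
    A, B = random.uniform(-5, 5), random.uniform(-5, 5)
    t = random.uniform(-10, 10)
    lhs = A * cmath.exp(1j * t) + B * cmath.exp(-1j * t)
    rhs = (A + B) * math.cos(t) + 1j * (A - B) * math.sin(t)
    assert abs(lhs - rhs) < 1e-12
print("identity verified on 100 random samples")
```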
|
2,506,279 | <blockquote>
<p>If $\lim_{x\to \infty}xf(x^2+1)=2$ then find
$$\lim_{x\to 0}\dfrac{2f'(1/x)}{x\sqrt{x}}=?$$</p>
</blockquote>
<p>My Try :
$$g(x):=xf(x^2+1)\\g'(x)=f(x^2+1)+2x^2f'(x^2+1)$$
Now what?</p>
| algoHolic | 409,509 | <p>What a difference a day makes :¬)</p>
<p>It turns out that <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer"><em>the Wikipedia page's sigma notation</em></a> was correct after all. It's just that their worked-out calculation is in fact wrong. Misleading, at best. Plus the textual explanation isn't as clear as it should be. </p>
<p>The key to working out the calculation correctly — <em>and keeping it aligned with the sigma notation in <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer">the wiki</a></em> — is in the last sentence of the snippet in my original post...</p>
<blockquote>
<p>...where ${\displaystyle n}$ is the normalized significand's nth bit from the left [<strong><em>the MSB?</em></strong>], where counting starts with 1...</p>
</blockquote>
<p>They're not using ${\displaystyle n}$ to denote the ordinal position of the bits. They're using ${\displaystyle n}$ simply as an index for iterating over the sequence of bits. If I'm not mistaken, that is precisely what the common usage of sigma notation expects.</p>
<p>The crucial snippet I just re-quoted is telling the reader that the first bit to count is the MSB (<em>the left-most bit</em>). And that the iteration index ${\displaystyle n}$ that will be used in the calculation — ${\displaystyle bit_{n} \times 2^{-n}}$ — needs to start at 1.</p>
<p>Once the penny dropped, I was able to correct my spreadsheet accordingly. Now it calculates the correct answer for pi. And the formulas I use are exactly what is prescribed by the original sigma notation.</p>
<p>I've learned enough in the last 24 hours to take a stab at answering my own questions...</p>
<ol>
<li><em>What is the <strong>correct</strong> sigma notation for converting the binary representation of pi to its decimal representation</em>?</li>
</ol>
<p>The original sigma notation is actually correct after all.</p>
<ol start="2">
<li><em>Why doesn't the example on the wikipedia page include ${\displaystyle2^{3}+\cdots+2^{5}+\cdots+2^{6}\cdots}$ in its calculation?</em></li>
</ol>
<p><a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer"><em>The Wikipedia page</em></a> excluding ${\displaystyle2^{3}+\cdots+2^{5}+\cdots+2^{6}\cdots}$ out of its calculation was not simply "<em>abuse of notation</em>" as I first suspected. It is just plain wrong. Probably a typo.</p>
<ol start="3">
<li><em>What do I need to correct in my spreadsheet formula to make it arrive at the same answer that the wikipedia page arrives at?</em></li>
</ol>
<p>The formulas I was using in my original spreadsheet were correct all along. I did need to correct which cell my calculation was starting from, however.</p>
<ol start="4">
<li><em>Does that description need correcting in its confusing reference to both the <strong>MSB</strong> and the <strong>LSB</strong> as "the first bit"?</em></li>
</ol>
<p>The original description most certainly does need to be edited to be made clearer in regards to its confusingly calling both the MSB and the LSB "<em>the first bit</em>".</p>
<ol start="5">
<li><em>Does the sigma notation in that description need correcting to make it clear that ALL of the bits should be included in the calculation?</em></li>
</ol>
<p>The actual sigma notation is correct. But the expanded calculation certainly does need to be corrected.</p>
<ol start="6">
<li><em>Going by what I described of my spreadsheet calculation, what step(s) or what fundamental mathematical concept have I overlooked or misunderstood?</em></li>
</ol>
<p>The placement of bits in incorrect cells of my spreadsheet was a result of my confusion over the ${\displaystyle n}$ index variable prescribed in <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer"><em>the Wikipedia page</em></a>. Once I remembered the role of the index variable in sigma notation, I understood then what I was doing wrong.</p>
<hr>
<p><a href="https://ideone.com/MZnW6g" rel="nofollow noreferrer"><em>Here's some Java code</em></a> based on <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer"><em>the Wikipedia page's</em></a> sigma notation...</p>
<pre><code>import static java.lang.System.out;
class Q2506277Converter {
private static strictfp float binary32ToDecimal( String s /* the significand */ ) {
byte n = 1; /* the 'n' in the sigma notation */
byte p = 24; /* the 'p' in the sigma notation */
byte bitN = 0; /* the 'bit_n' in the sigma notation */
byte e = 1; /* the 'e' in the sigma notation */
float sum = 0.0f; /* the summation in the sigma notation */
for(; n <= p-1; n++ ){
bitN = Byte.parseByte( s.substring( n, n+1 ) );
sum += (float)(bitN * Math.pow(2, -n) ); /* the '2^-n' in the sigma notation */
}
return (1 + sum) * (float)Math.pow( 2, e ); /* the '2^1' in the sigma notation */
}
public static void main( String[] args ){
/* IEEE 754 encoding significand */
String significand = "110010010000111111011011";
out.printf( "%.9f%n", binary32ToDecimal( significand ) );
}
}
</code></pre>
<hr>
<p>I don't know Python well enough to write one-liners. But <a href="https://ideone.com/FNI1sh" rel="nofollow noreferrer"><em>here's some Python code</em></a> based on the <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer"><em>Wikipedia page's sigma notation</em></a>...</p>
<pre><code># decode the 24-bit significand (implicit leading 1 included) back to a decimal value
e = 1                          # the exponent 'e' in the sigma notation
total = 0.0                    # the running summation
# skip the leading (implicit) bit; index the 23 fraction bits starting at n = 1
for n, bitN in enumerate("110010010000111111011011"[1:], start=1):
    total += int(bitN) * 2**(-n)       # bit_n * 2^-n
result = (1 + total) * 2**e            # (1 + sum) * 2^e
print(result)
</code></pre>
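<p>As a further check (my addition), one can pull the actual binary32 bit pattern of pi out of Python's <code>struct</code> module and confirm it matches the significand string used above:</p>

```python
import math
import struct

# big-endian IEEE 754 binary32 encoding of pi, as a 32-character bit string
bits = ''.join(f'{b:08b}' for b in struct.pack('>f', math.pi))
sign, exponent, fraction = bits[0], bits[1:9], bits[9:]

assert sign == '0'
assert int(exponent, 2) - 127 == 1                    # unbiased exponent e = 1
assert '1' + fraction == "110010010000111111011011"   # implicit bit + 23 stored bits
print("bit pattern matches the significand used above")
```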
|
618,986 | <p>I'm having trouble with this question, I'd like someone to point me in the right direction.</p>
<p>Let $A$ be an $n \times n$ symmetric matrix with real values.
Show that there is an $n \times n$ real symmetric matrix $B$ such that $B^3=A$. Are there more matrices like this $B$, or is it the only one?</p>
<p>What I was thinking:</p>
<p>I don't have a clear way to solve it. I think we need to use the fact that if a real matrix is symmetric, then it is normal, and so has an orthonormal basis of eigenvectors...Other then that I don't really know anything.</p>
| Thomas Russell | 32,374 | <p>Note that $\forall \mathbf{M}\in\mathbb{R}^{n\times n}$ such that $\mathbf{M}$ is symmetric, we have $\mathbf{M}=\mathbf{P}\mathbf{\Lambda}\mathbf{P}^{-1}=\mathbf{P}\mathbf{\Lambda}\mathbf{P}^{T}$, for some orthogonal matrix $\mathbf{P}$.</p>
<p>Therefore we have $\mathbf{B}^{3}=\left(\mathbf{P}\mathbf{\Lambda}_{\mathbf{B}}\mathbf{P}^{T}\right)^{3}=\mathbf{P}\mathbf{\Lambda}_{\mathbf{B}}^{3}\mathbf{P}^{T}$, and $\mathbf{A}=\mathbf{P}\mathbf{\Lambda}_{\mathbf{A}}\mathbf{P}^{T}$, therefore we have $\mathbf{\Lambda}_{B}^{3}=\mathbf{\Lambda}_{\mathbf{A}}$, where $\mathbf{\Lambda}_{\mathbf{A}}=\operatorname{diag}(\lambda_{1},\dots,\lambda_{n})$ and $\lambda_{i}$ are the eigenvalues of $\mathbf{A}$.</p>
<p>Therefore we have:</p>
<p>$$\mathbf{B}=\mathbf{P}\operatorname{diag}(\sqrt[3]{\lambda_{1}},\dots,\sqrt[3]{\lambda_{n}})\mathbf{P}^{T}$$</p>
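<p>A concrete $2\times 2$ instance (my own illustration, with the eigendecomposition written out by hand rather than computed by a library):</p>

```python
# A = [[2,1],[1,2]] has eigenvalues 3, 1 with orthonormal eigenvectors
# (1,1)/sqrt(2) and (1,-1)/sqrt(2); then B = P diag(3^(1/3), 1) P^T is:
c = 3 ** (1 / 3)
B = [[(c + 1) / 2, (c - 1) / 2],
     [(c - 1) / 2, (c + 1) / 2]]

def matmul(X, Y):
    # plain 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B3 = matmul(matmul(B, B), B)
A = [[2, 1], [1, 2]]
assert all(abs(B3[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("B^3 reproduces A")
```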
|
293,245 | <p>Most true statements independent of PA that I know of is equivalent to some consistency statement. For example</p>
<ul>
<li>Con(PA), Con(PA + Con(PA)), Con(PA + Con(PA) + Con (PA + Con(PA)), $\dots$</li>
<li>Goodstein's theorem is equivalent to Con(PA)</li>
<li>Any conjunction or disjunction of the above.</li>
</ul>
<p>Is every true statement independent of PA equivalent to some consistency statement?</p>
<p>By "equivalent to some consistency statement", I mean that $PA \vdash S \iff Con(T)$, for some theory $T$. Also, $T$ should be either finite, or specified by a Turing machine that outputs its axioms (and such that PA proves that the Turing machine never stops outputting statements), so that the description of $T$ doesn't throw PA off.</p>
<p>EDIT: In particular, are there any <span class="math-container">$\Pi^0_1$</span> examples?</p>
| user103227 | 103,227 | <p>The theory <span class="math-container">$PA + Con(PA)$</span> has the property you are asking for; this is the so-called Friedman-Goldfarb-Harrington principle (see, e.g., <a href="https://projecteuclid.org/euclid.ndjfl/1093883515" rel="nofollow noreferrer" title="Fifty years of self-reference in arithmetic">Fifty years of self-reference in arithmetic</a>, p. 366). Formally, for <em>every</em> <span class="math-container">$\Pi_1$</span> sentence <span class="math-container">$\pi$</span>, there is a <span class="math-container">$\Pi_1$</span> sentence <span class="math-container">$\psi$</span> such that <span class="math-container">$PA + Con(PA) \vdash \pi \leftrightarrow Con(PA + \psi)$</span>.</p>
<p>EDIT:</p>
<p><s>That being said, my hunch is that the answer must be "no" for plain $PA$ even though I can't produce a counterexample at the moment.</s></p>
<p>As observed by Will Sawin, we have that for every $\Pi_1$ sentence $\pi$, there is a $\Pi_1$ sentence $\psi$ such that $PA \vdash \pi \leftrightarrow Con(EA + \psi)$, which gives a positive answer to the OP's modified question.</p>
<p>My hunch is that $Con(EA + \psi)$ can not be replaced by $Con(PA + \psi)$ in the above.</p>
|
2,553,284 | <p>I know that
$$\ln e^2=2$$
But what about this?
$$(\ln e)^2$$
A calculator gave 1. I'm really confused.</p>
| Reader Manifold | 376,599 | <p>$\ln e = 1$, so $(\ln e)^2 = 1^2 = 1$. In $\ln e^2$ the square sits inside the logarithm, so $\ln e^2 = \ln(e^2) = 2$; the two expressions square different things.</p>
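<p>A short Python check makes the distinction visible (my addition):</p>

```python
import math

assert abs(math.log(math.e) - 1.0) < 1e-15        # ln e = 1
assert abs(math.log(math.e) ** 2 - 1.0) < 1e-15   # (ln e)^2 = 1
assert abs(math.log(math.e ** 2) - 2.0) < 1e-15   # ln(e^2) = 2
print("(ln e)^2 = 1 but ln(e^2) = 2")
```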
|
157,497 | <p>Let's suppose I have created a 3d image of grayscale images with:</p>
<pre><code>image3d = Image3D[Table[readImage[i], {i, numberOfImages}]];
</code></pre>
<p>and </p>
<pre><code>image3dSlices = Image3DSlices[image3d]
</code></pre>
<p>To show the 3d image I can use:</p>
<pre><code>image3d
</code></pre>
<p>or </p>
<pre><code>Image3D[image3dSlices[[startImageNumber;;endImageNumber]]]
</code></pre>
<p>Is it somehow possible to convert the image data so that I could use <code>ListPlot3D</code> or <code>ListDensityPlot3D</code>? Please see also <a href="https://mathematica.stackexchange.com/questions/157419/image3d-plot-physical-axes-scales">here</a>: </p>
| Akku14 | 34,287 | <p>Here is another simple, quite mechanical way.</p>
<p>Straightforwardly apply operators suited to simplifying the expression, simplify, and later apply the inverse operator.</p>
<pre><code>t1 = Sum[Sin[n x]/n, {n, \[Infinity]}]
(* 1/2 I (Log[1 - E^(I x)] - Log[E^(-I x) (-1 + E^(I x))]) *)
t2 = t1 // Exp
(* E^(1/2 I (Log[1 - E^(I x)] - Log[E^(-I x) (-1 + E^(I x))])) *)
t3 = t2 // FullSimplify
(* (1 - E^(-I x))^(-(I/2)) (1 - E^(I x))^(I/2) *)
t4 = t3 // #^(2/I) &
(* ((1 - E^(-I x))^(-(I/2)) (1 - E^(I x))^(I/2))^(-2 I) *)
t5 = t4 // PowerExpand[#, Assumptions -> 0 <= x <= 2 Pi] &
(* E^(4 \[Pi] Floor[
1/2 + Re[Log[1 - E^(-I x)]]/(4 \[Pi]) - Re[Log[1 - E^(I x)]]/(
4 \[Pi])])/(1 - E^(-I x)) - E^(
I x + 4 \[Pi] Floor[
1/2 + Re[Log[1 - E^(-I x)]]/(4 \[Pi]) - Re[Log[1 - E^(I x)]]/(
4 \[Pi])])/(1 - E^(-I x)) *)
t6 = t5 // FullSimplify[#, 0 <= x <= 2 Pi] &
(* -E^(I x) *)
t7 = t6 // #^(I/2) &
(* (-E^(I x))^(I/2) *)
t8 = t7 // PowerExpand[#, Assumptions -> 0 <= x <= 2 Pi] &
(* E^(-(\[Pi]/2) - x/2 - \[Pi] Floor[-(x/(2 \[Pi]))]) *)
t9 = t8 // FullSimplify[#, 0 <= x <= 2 Pi] &
(* E^(-(\[Pi]/2) - x/2 + \[Pi] Ceiling[x/(2 \[Pi])]) *)
t10 = t9 // Log
(* Log[E^(-(\[Pi]/2) - x/2 + \[Pi] Ceiling[x/(2 \[Pi])])] *)
t11 = t10 // PowerExpand[#, Assumptions -> 0 <= x <= 2 Pi] &
(* -(Pi/2) - x/2 + Pi Ceiling[x/(2 Pi)] *)
t12 = t11 // FullSimplify[#, 0 < x <= 2 Pi] &
(* (Pi - x)/2 *)
</code></pre>
|
3,266,930 | <blockquote>
<p>Let <span class="math-container">$X$</span> be a positive random variable on <span class="math-container">$(\Omega,\mathscr{A},P)$</span>, and suppose <span class="math-container">$X\in L_p$</span> for <span class="math-container">$1<p<\infty$</span>.
Prove <span class="math-container">$\lim_{x\to\infty} x^p P(X>x)=0$</span></p>
</blockquote>
<p>Using Chebyshev inequality:</p>
<p><span class="math-container">$\lim_{x\to\infty} x^p P(X>x)\leqslant\lim_{x\to\infty} x^p\frac{1}{x^p} \int_{X>x}|X|^p dP=\lim_{x\to\infty} \int_{X>x}|X|^p dP $</span></p>
<p>Is it true <span class="math-container">$ \lim_{x\to\infty} \int_{X>x}|X|^p dP=0 $</span>?</p>
<p><strong>Questions:</strong></p>
<p>Is my reasoning right? How do I prove <span class="math-container">$ \lim_{x\to\infty} \int_{X>x}|X|^p dP=0 $</span>?</p>
<p>Thanks in advance!</p>
| jgon | 90,543 | <p>Let the sum be <span class="math-container">$S$</span>. Then multiply by <span class="math-container">$a$</span>.
We get
<span class="math-container">$$aS=\sum_{i=1}^{\phi(n)}a^{i+1}=a^{\phi(n)+1}+\sum_{i=2}^{\phi(n)}a^i=a+\sum_{i=2}^{\phi(n)}a^i=S,$$</span>
since even if <span class="math-container">$a$</span> doesn't have order <span class="math-container">$\phi(n)$</span>, we still know <span class="math-container">$a^{\phi(n)}=1$</span>, hence <span class="math-container">$a^{\phi(n)+1}=a$</span>.</p>
<p>Then <span class="math-container">$(a-1)S=0$</span>, and <span class="math-container">$a-1$</span> is a unit. Thus <span class="math-container">$S=0$</span>.</p>
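<p>An empirical check of this fact (my addition; it tests every <span class="math-container">$a$</span> for which both <span class="math-container">$a$</span> and <span class="math-container">$a-1$</span> are units mod <span class="math-container">$n$</span>):</p>

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(3, 40):
    for a in range(2, n):
        if gcd(a, n) == 1 and gcd(a - 1, n) == 1:   # a and a-1 both units
            S = sum(pow(a, i, n) for i in range(1, phi(n) + 1)) % n
            assert S == 0
print("the sum vanishes mod n in every tested case")
```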
|
1,724,812 | <p>I'm struggling with the following limit:</p>
<p>$$\lim_{x \to 0} \frac{1-(\cos x)^{\sin x}}{x^2}$$</p>
<p>Don't know where to start with this. Hints/solutions very appreciated.</p>
| Paramanand Singh | 72,031 | <p>We have
\begin{align}
L &= \lim_{x \to 0}\frac{1 - (\cos x)^{\sin x}}{x^{2}}\notag\\
&= \lim_{x \to 0}\frac{1 - \exp(\sin x\log \cos x)}{x^{2}}\notag\\
&= -\lim_{x \to 0}\frac{\exp(\sin x\log \cos x) - 1}{\sin x\log \cos x}\cdot\frac{\sin x\log \cos x}{x^{2}}\notag\\
&= -\lim_{x \to 0}\frac{\log\cos x}{x}\notag\\
&= -\lim_{x \to 0}\frac{\log\cos^{2} x}{2x}\notag\\
&= -\frac{1}{2}\lim_{x \to 0}\frac{\log(1 - \sin^{2} x)}{\sin^{2}x}\cdot\frac{\sin^{2}x}{x^{2}}\cdot x\notag\\
&= -\frac{1}{2}\cdot(-1)\cdot 1\cdot 0\notag\\
&= 0\notag
\end{align}
Like most of the limit problems seen on MSE, this one is also evaluated using standard limits without the use of advanced tools like L'Hospital's Rule or Taylor's series.</p>
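<p>A quick numerical look at the ratio (my addition) is consistent with the limit $0$; near the origin the expression behaves like $x/2$:</p>

```python
import math

def f(x):
    # the expression (1 - (cos x)^(sin x)) / x^2
    return (1 - math.cos(x) ** math.sin(x)) / x**2

# sample approaching 0 from the right; the values shrink roughly like x/2
samples = [f(10.0 ** (-k)) for k in range(1, 6)]
assert all(abs(v) < 0.06 for v in samples)
assert abs(f(1e-4)) < 1e-3
print(samples)
```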
|
3,854,286 | <p>This was an exercise in my class, please help:</p>
<blockquote>
<p>Put <span class="math-container">$A = {\mathbb Q}[x,y]$</span> and <span class="math-container">$B = {\mathbb Q}[x,z]$</span>. Consider the morphism <span class="math-container">$f \colon A \to B$</span> of <span class="math-container">${\mathbb Q}$</span>-algebras given by <span class="math-container">$x \mapsto x$</span>, <span class="math-container">$y \mapsto x z$</span>. Then <span class="math-container">$B$</span> is an <span class="math-container">$A$</span>-module. Is <span class="math-container">$B$</span> flat ? [Hint: Consider the inclusion <span class="math-container">$(x,y) \subset A$</span>.]</p>
</blockquote>
<p>My guess is that it isn't flat. Using their hint, I found that the map</p>
<p><span class="math-container">$(x,y)\otimes_A B\to A\otimes_A B$</span></p>
<p>sends <span class="math-container">$x\otimes z - y\otimes 1$</span> to <span class="math-container">$0$</span>, so if this was nonzero I would be done. But I have a hard time proving that <span class="math-container">$x\otimes z - y\otimes 1$</span> is nonzero.</p>
| Aphelli | 556,825 | <p>Given what the element is, it is enough to show (by the universal property of the tensor product) that there is an <span class="math-container">$A$</span>-bilinear map <span class="math-container">$\beta: (x,y) \times B \rightarrow C$</span> with <span class="math-container">$\beta(x,z) \neq \beta(y,1)$</span>.</p>
<p>Take <span class="math-container">$C=\mathbb{Q}$</span> where <span class="math-container">$x$</span> and <span class="math-container">$y$</span> act as zero, and define <span class="math-container">$\beta(P,Q)=Q(0,1)\frac{\partial P}{\partial y}(0,0)$</span>. This is <span class="math-container">$\mathbb{Q}$</span>-bilinear, and <span class="math-container">$\beta(y,1)=1$</span> by definition, while <span class="math-container">$\beta(x,z)=0$</span>. Moreover, <span class="math-container">$\beta(xP,Q) = 0 = \beta(P,xQ)$</span>, <span class="math-container">$\beta(yP,Q)=0$</span> because <span class="math-container">$P \in (x,y)$</span> already, and <span class="math-container">$\beta(P,y \cdot Q)=\beta(P,xzQ)=\beta(P,x\cdot (zQ)) = 0$</span> by the above (<span class="math-container">$\cdot$</span> denoting the <span class="math-container">$A$</span>-actions), so <span class="math-container">$\beta$</span> is <span class="math-container">$A$</span>-bilinear.</p>
<p>Here is another proof with almost no calculation. If <span class="math-container">$B$</span> is flat over <span class="math-container">$A$</span>, then <span class="math-container">$B/xB$</span> is flat over <span class="math-container">$A/(x)=\mathbb{Q}[y]$</span>, and given that <span class="math-container">$A/(x)$</span> is a PID, <span class="math-container">$B/xB$</span> must be torsion-free. But <span class="math-container">$B/xB$</span> is nonzero, yet <span class="math-container">$y \cdot B/xB=0$</span>.</p>
|
529,260 | <p>Let $V$ be a complex vector space of dimension $n$ with a scalar product, and let $u$ be a unit vector in $V$. Let $H_u: V \to V$ be defined as</p>
<p>$$H_u(v) = v - 2 \langle v,u \rangle u$$</p>
<p>for all $v \in V$. I need to find the minimal polynomial and the characteristic polynomial of this linear operator, but the only way I know to find the characteristic polynomial is using the associated matrix of the operator.</p>
<p>I don't know how to find this matrix because I don't know how to deal with the scalar product. Is there some other way to find the characteristic polynomial? If not, how can I find the associated matrix of this linear operator?</p>
<p>Thanks in advance.</p>
| James | 68,019 | <p>$H_{u}=I-2P_{u}$, where $P_{u}:v\mapsto\left\langle v,u\right\rangle u$
is the projection onto the one-dimensional subspace spanned by $u$.
Decompose the space orthogonally as $V=\mathbb{C}u\oplus(\mathbb{C}u)^{\perp}$,
then </p>
<p>$$
P_{u}=\left[\begin{array}{cc}
1 & 0\\
0 & 0_{n-1}
\end{array}\right]
$$
and
$$
H_{u}=I_{n}-2P_{u}=I_{n}-2\left[\begin{array}{cc}
1 & 0\\
0 & 0_{n-1}
\end{array}\right]=\left[\begin{array}{cc}
-1 & 0\\
0 & I_{n-1}
\end{array}\right]
$$</p>
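<p>From this block form one can read off what the question asked for: the characteristic polynomial is $(\lambda+1)(\lambda-1)^{n-1}$ and the minimal polynomial is $(\lambda+1)(\lambda-1)$ (for $n\ge 2$). As a quick numerical sanity check (a sketch in plain Python, using real coordinates for simplicity), $H_u$ sends $u$ to $-u$, fixes vectors orthogonal to $u$, and squares to the identity:</p>

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def H(u, v):
    # H_u(v) = v - 2 <v, u> u, for a unit vector u (real coordinates here)
    c = 2 * dot(v, u)
    return [vi - c * ui for vi, ui in zip(v, u)]

u = [1 / 3, 2 / 3, 2 / 3]        # a unit vector
w = [2 / 3, 1 / 3, -2 / 3]       # orthogonal to u

Hu = H(u, u)                     # expect -u   (eigenvalue -1)
Hw = H(u, w)                     # expect  w   (eigenvalue +1)
v = [0.3, -1.2, 2.5]
HHv = H(u, H(u, v))              # H_u is an involution, so expect v
```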
|
970,062 | <p>To show that two quadratic forms are not equivalent, we can compare ranks, or discriminants, or find some element which is represented by only one of them, etc. But is there a general criterion to show that two binary (right now I am only concerned with binary) quadratic forms <em>are</em> equivalent?
Here is an example which uses a variable transformation, but finding which transformation will work is tough every time.</p>
<p>$\textbf{Example-}$ $X^2-Y^2$ and $4XY$: replacing $X,Y$ by $X+Y,X-Y$ does the job here. Now there are criteria which give an equivalent condition to check whether two binary forms are equivalent or not, like </p>
<p>$\textbf{Criterion 1-}$ On algebraically closed fields, same dimension and same rank suffice.</p>
<p>$\textbf{Criterion 2-}$ On Reals, same rank and same signature suffices.</p>
<p>$\textbf{Criterion 3-}$ On finite fields,same discriminant is enough.</p>
<p>But in general if we are given field as some arbitrary $K$, then what to do? I am very new at quadratic forms. Any help will be appreciated. </p>
<p>I know that we can check similarity of their corresponding matrices, but is there any other way, beside checking matrices are similar or not. Like how would you proceed on these couple of question below. If there is no other method, can somebody please explain me the method to check whether two matrices are similar or not, I am really weak with matrices.</p>
<p>$\textbf{Question 1-}$ Prove $X^2-4XY+3Y^2, X^2-Y^2,aX^2-aY^2$ are equivalent over $K$.</p>
<p>$\textbf{Question 2-}$ Show </p>
<p>i) $X^2-Y^2 \sim XY$ over $K$, ii) $3X^2-2Y^2 \sim X^2-6Y^2$ (over $Q$)</p>
| Nero | 88,078 | <p>Revuz and Yor: Continuous Martingales and Brownian Motion.
Karatzas and Shreve: Brownian Motion and Stochastic Calculus.</p>
<p>Both are indispensable.</p>
|
3,492,435 | <p>I am reading <em><strong>Foundations of Constructive Analysis</strong></em> by Errett Bishop. In the first chapter he describes a particular construction of the real numbers. There is an intermediate definition before his primary introduction of the real numbers:</p>
<blockquote>
<p>A sequence <span class="math-container">${\{x_n\}}$</span> of rational numbers is regular if</p>
<p><span class="math-container">$|x_m - x_n | \le m^{-1} + n^{-1}\;\;\;\;\;(m, n\in \Bbb Z^+)$</span></p>
<p><em>Chapter 1 (2.1)</em></p>
</blockquote>
<p>What does the negative superscript mean in this definition? Since clearly you cannot take an integer to a negative power. Am I correct in interpreting <span class="math-container">$m$</span> and <span class="math-container">$n$</span> on the right hand side of the equation as the actual elements of the sequence? I am fairly sure the definition seems to parallel the Cauchy Sequence.</p>
| Michael Hardy | 11,667 | <p>Replacing <span class="math-container">$m$</span> by <span class="math-container">$m+1$</span> in <span class="math-container">$x^m$</span> is antidifferentiating (modulo multiplication by a constant).</p>
<p>Replacing <span class="math-container">$n$</span> by <span class="math-container">$n-1$</span> in <span class="math-container">$(1-x)^n$</span> is differentiating (modulo multiplication by a constant).</p>
<p>Differentiating one factor and antidifferentiating the other is exactly what happens in integration by parts.</p>
<p>So integrate by parts:
<span class="math-container">\begin{align}
& \int_0^1 x^m(1-x)^n\,dx \\[10pt]
= {} & \int_0^1 (1-x)^n \Big( x^m \, dx\Big) \\[10pt]
= {} & \int u \,dv = uv - \int v\,du \\[10pt]
= {} & \left[ (1-x)^n \frac{x^{m+1}} {m+1} \right]_0^1 - \int_0^1 \frac{x^{m+1}} {m+1} \cdot n(1-x)^{n-1}(-1) \, dx \\[10pt]
= {} & \frac n {m+1} \int_0^1 x^{m+1} (1-x)^{n-1} \, dx.
\end{align}</span></p>
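<p>Iterating this recursion all the way down gives the classical value $\int_0^1 x^m(1-x)^n\,dx=\frac{m!\,n!}{(m+n+1)!}$. A quick numerical check of one step of the recursion (a sketch using a plain Simpson-rule integrator, with the sample exponents $m=1$, $n=3$):</p>

```python
def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I(m, n):
    return simpson(lambda x: x ** m * (1 - x) ** n, 0.0, 1.0)

lhs = I(1, 3)              # integral of x (1-x)^3 over [0, 1]
rhs = 3 / 2 * I(2, 2)      # n/(m+1) * I(m+1, n-1) with m = 1, n = 3
closed = 1 * 6 / 120       # m! n! / (m+n+1)! = 1! 3! / 5! = 1/20
```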
|
3,492,435 | <p>I am reading <em><strong>Foundations of Constructive Analysis</strong></em> by Errett Bishop. In the first chapter he describes a particular construction of the real numbers. There is an intermediate definition before his primary introduction of the real numbers:</p>
<blockquote>
<p>A sequence <span class="math-container">${\{x_n\}}$</span> of rational numbers is regular if</p>
<p><span class="math-container">$|x_m - x_n | \le m^{-1} + n^{-1}\;\;\;\;\;(m, n\in \Bbb Z^+)$</span></p>
<p><em>Chapter 1 (2.1)</em></p>
</blockquote>
<p>What does the negative superscript mean in this definition? Since clearly you cannot take an integer to a negative power. Am I correct in interpreting <span class="math-container">$m$</span> and <span class="math-container">$n$</span> on the right hand side of the equation as the actual elements of the sequence? I am fairly sure the definition seems to parallel the Cauchy Sequence.</p>
| marwalix | 441 | <p>Just integrate <span class="math-container">$I_{m,n}$</span> by parts. Taking <span class="math-container">$u=(1-x)^n$</span> and <span class="math-container">$dv=x^mdx$</span> the integration by parts formula</p>
<p><span class="math-container">$$\int_0^1u\cdot dv =\left[u\cdot v\right]_0^1-\int_0^1v\cdot du$$</span></p>
<p>gives</p>
<p><span class="math-container">$$I_{m,n}={1\over m!n!}\left[(1-x)^n{x^m\over m+1}\right]_0^1+{1\over (m+1)!(n-1)!}\int_0^1x^{m+1}(1-x)^{n-1}dx$$</span></p>
|
423,159 | <p>What do you call a linear map of the form <span class="math-container">$\alpha X$</span>, where <span class="math-container">$\alpha\in\Bbb R$</span> and <span class="math-container">$X\in\mathrm O(V)$</span> is an orthogonal map (<span class="math-container">$V$</span> being some linear space with inner product)? Are there established names, historical names, some naming attempts that haven't caught on?</p>
<ul>
<li><p>"<a href="https://en.wikipedia.org/wiki/Conformal_map" rel="nofollow noreferrer">Conformal</a>" aka. "angle-preserving" feels rather close, but I believe these terms are more commonly used in the sense of "locally angle-preserving" (i.e. it is not implicitly understood to be linear). Also, <span class="math-container">$\alpha=0$</span> is explicitly allowed in my context, which is not quite angle-preserving.</p>
</li>
<li><p>I first thought "<a href="https://en.wikipedia.org/wiki/Homothety" rel="nofollow noreferrer">homotheties</a>" are what I am looking for, but these only capture the scaling part, not the rotation part.</p>
</li>
<li><p>Roto-scaling or scale-rotation is apparently also already taken and is more general than what I need (see the comment by Carlo).</p>
</li>
</ul>
<p>At the risk of letting this become too "opinion-based", let me also say that I am open for suggestions.</p>
| paul garrett | 15,629 | <p>"Orthogonal similitude" would be consistent with a very-common use of "symplectic similitude" (in automorphic forms and repn theory) for <span class="math-container">$g\in GL_n$</span> such that <span class="math-container">$g^\top J g=\nu(g)\cdot J$</span> for skew-symmetric matrix <span class="math-container">$J$</span>.</p>
<p>And <span class="math-container">$GSp(J)$</span> is the "symplectic similitude group", and <span class="math-container">$GO(S)$</span> (with <span class="math-container">$S$</span> the non-degenerate quadratic form) is the orthogonal similitude group...</p>
|
277,594 | <p><a href="https://i.stack.imgur.com/yX9my.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/yX9my.gif" alt="enter image description here" /></a></p>
<pre><code>Manipulate[
ParametricPlot[{Sec[t], Tan[t]}, {t, 0, u}, PlotStyle -> Dashed,
PerformanceGoal -> "Quality", Exclusions -> All,
PlotRange -> 2], {u, 0.001, 2 Pi}]
</code></pre>
<p>I found that for parametric curves with singularities, using <code>ParametricPlot</code> with the dashed line style, there is shaking in some animations. Is there a simple way to eliminate the shaking?</p>
| bmf | 85,558 | <p>I think you want something like the following (?)</p>
<pre><code>Sum[A[i, j] x[i] x[j], {i, 1, 2}, {j, 1, 2}] /. {x[i_] x[j_] ->
c[i, j], x[i_] x[i_] -> c[i, i]}
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/RXsBV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RXsBV.jpg" alt="res" /></a></p>
</blockquote>
|
1,710,304 | <p>I have a Boolean algebra expression that I'm not able to simplify fully.</p>
<p>\begin{align}
&(c+ab)(d+b(a+c))\\
&(c+ab)(d+ba+bc)\\
&cd+ abc + bc^2+abd+a^2 b^2 + ab^2 c\\
&\text{using boolean laws $x^2=x$ and $x+x=x$}\\
&cd + bc + abd + ab + (abc + abc)\\
&cd + bc + abd + ab + abc
\end{align}
And now I get stuck. Mathematica simplifies this to $ac+bc+bd$, but I just don't see how.</p>
| parsiad | 64,601 | <p>You might have typed this into Mathematica incorrectly. Here's the solution:</p>
<p>\begin{align*}
(c+ab)(d+b(a+c)) & =(c+ab)(d+ab+bc)\\
& =cd+abc+bc+abd+ab+abc\\
& =cd+abc+bc+abd+ab\\
& =cd+bc+ab(c+d+1)\\
& =cd+bc+ab
\end{align*}</p>
<p>Here's <a href="http://www.wolframalpha.com/input/?i=(c%20or%20(a%20and%20b))%20and%20(d%20or%20(b%20and%20(a%20or%20c)))" rel="nofollow">Wolfram Alpha computing the same thing</a> (DNF).</p>
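<p>A brute-force truth-table check (an illustrative Python sketch) confirms that the expression equals $cd+bc+ab$ and differs from the reported $ac+bc+bd$, which supports the guess that something was mistyped:</p>

```python
from itertools import product

def orig_expr(a, b, c, d):
    # (c + ab)(d + b(a + c)) with + as OR and juxtaposition as AND
    return (c or (a and b)) and (d or (b and (a or c)))

def simplified(a, b, c, d):
    return (c and d) or (b and c) or (a and b)      # cd + bc + ab

def reported(a, b, c, d):
    return (a and c) or (b and c) or (b and d)      # ac + bc + bd

rows = list(product([False, True], repeat=4))
matches = all(orig_expr(*r) == simplified(*r) for r in rows)
mismatch = any(orig_expr(*r) != reported(*r) for r in rows)
```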
|
2,172,975 | <p>I am reading <a href="http://www.deeplearningbook.org/contents/linear_algebra.html" rel="nofollow noreferrer">http://www.deeplearningbook.org/contents/linear_algebra.html</a> Chapter $2$, page $44$ ($3$rd paragraph) of this book and got confused. Can any body help me to understand this paragraph? Thanks in advance.</p>
<p><em>While any real symmetric matrix $A$ is guaranteed to have an eigendecomposition, the eigendecomposition may not be unique. If any two or more eigenvectors
share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a $Q$ using those eigenvectors instead.</em></p>
| DonAntonio | 31,254 | <p>I think it simply says: if $\;u,v\;$ are two <em>linearly independent</em> eigenvectors corresponding to one and the same eigenvalue $\;\lambda\;$, then <strong>any</strong> nonzero linear combination $\;\alpha u+\beta v\;$ is also an eigenvector for the same eigenvalue, and we can thus choose one of these linear combinations instead of $\;u\;$, or of $\;v\;$, say.</p>
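<p>A concrete numerical illustration (a plain-Python sketch): take $A=\operatorname{diag}(2,2,3)$, whose eigenvalue $2$ has a two-dimensional eigenspace. Both $Q_1=I$ and any $Q_2$ rotating within that eigenspace give $Q\Lambda Q^\top=A$, so the eigendecomposition is not unique:</p>

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(row) for row in zip(*X)]

Lam = [[2, 0, 0], [0, 2, 0], [0, 0, 3]]   # eigenvalue 2 is repeated
Q1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = math.pi / 6                            # any rotation angle works here
Q2 = [[math.cos(t), -math.sin(t), 0],      # rotates inside the 2-eigenspace
      [math.sin(t),  math.cos(t), 0],
      [0, 0, 1]]

A1 = matmul(matmul(Q1, Lam), transpose(Q1))
A2 = matmul(matmul(Q2, Lam), transpose(Q2))
```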
|
1,981,928 | <p>While I was studying properties of limits and sequences, I found a theorem that says: if {$s_n$}, {$t_n$} are convergent sequences, then $s_n \le t_n$ for all $n \in \mathbb{N}$ implies that $$\lim_{n\rightarrow \infty} s_n \le \lim_{n\rightarrow \infty} t_n$$
This proof is quite easy to construct, as you can say:</p>
<blockquote>
<p>Given $\epsilon>0$ choose $N \in \mathbb{N}$ such that $|s_n-S| < \epsilon/2$ and $|t_n-T| < \epsilon/2$ then $S-T = (S-s_n) + (t_n-T) + s_n - t_n$ and use triangle inequality to finish</p>
</blockquote>
<p>but I heard that following is NOT TRUE, and I don't know why? </p>
<p><strong>if $s_n<t_n$, can we say $\lim_{n\rightarrow \infty} s_n < \lim_{n\rightarrow \infty} t_n$?</strong></p>
| Eff | 112,061 | <p><strong>It is not true</strong>. However, if <span class="math-container">$a_n < b_n$</span> (or <span class="math-container">$a_n \leq b_n$</span>, as in your theorem) for all <span class="math-container">$n\in\mathbb{N}$</span> and their limits exist, then you <strong>can</strong> instead say that
<span class="math-container">$$\lim_{n\to \infty} a_n \leq \lim_{n\to \infty} b_n. $$</span></p>
<p>For example, if <span class="math-container">$a_n = 1-\frac{1}{n}$</span> and <span class="math-container">$b_n = 1$</span>, then <span class="math-container">$a_n < 1$</span> for all <span class="math-container">$n$</span> but <span class="math-container">$\lim_{n\to \infty} a_n = \lim_{n\to \infty} b_n = 1.$</span></p>
|
9,335 | <p>How to prove $\limsup(\{A_n \cup B_n\}) = \limsup(\{A_n\}) \cup \limsup(\{B_n\})$? Thanks!</p>
| Gabriel Ebner | 474 | <p>Another nice way is to use characteristic functions:</p>
<p>The map $\chi : \mathcal{P}(\Omega) \to \{0,1\}^\Omega$ assigns to every subset of $\Omega$ its characteristic function.</p>
<ul>
<li>$\chi$ is bijective.</li>
<li>$\chi$ is continuous, i.e. $\chi_{\lim\sup_{n\to\infty} A_n} = \lim\sup_{n\to\infty}\, \chi_{A_n}$ (pointwise limit)</li>
<li>$\chi$ is a homomorphism, i.e. $\chi_{A \cup B} = \chi_A + \chi_B - \chi_A \chi_B$</li>
</ul>
<p>Now your question reduces to the computation of an ordinary limit.</p>
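<p>This reduction is easy to test numerically for eventually periodic sequences of sets, where the tail union $\bigcup_{n\ge N}A_n$ stabilises and equals $\limsup_n A_n$ (an illustrative Python sketch with made-up sets):</p>

```python
def tail_union(seq, N=100, H=200):
    # For an eventually periodic sequence of sets, the union over n >= N
    # stabilises once N passes the transient part, and then equals
    # limsup_n A_n (the intersection over N of the unions over n >= N).
    out = set()
    for n in range(N, H):
        out |= seq(n)
    return out

def A(n):
    return {0} if n % 2 == 0 else {1}    # alternates {0}, {1}

def B(n):
    return {2}                           # constant {2}

def AuB(n):
    return A(n) | B(n)

lhs = tail_union(AuB)                # limsup of the unions
rhs = tail_union(A) | tail_union(B)  # union of the limsups
```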
|
323,559 | <p>Developable surfaces in <span class="math-container">$\mathbb{R}^{3}$</span> have lots of applications outside geometry (e.g., cartography, architecture, manufacturing).</p>
<p>I am curious about potential or actual applications of flat submanifolds of <span class="math-container">$\mathbb{R}^{d}$</span>, where <span class="math-container">$d >3$</span>, to other fields of mathematics and science.</p>
<p>By <em>flat</em> I mean locally isometric to Euclidean space.</p>
| Francesco Polizzi | 7,460 | <p>The torus <span class="math-container">$T$</span> can be embedded as a flat submanifold of <span class="math-container">$\mathbb{R}^4$</span>, the so-called <a href="https://en.wikipedia.org/wiki/Clifford_torus" rel="nofollow noreferrer">Clifford torus</a>. It is possible to put infinitely many different complex structures on <span class="math-container">$T$</span>, and by Poincaré-Koebe Uniformization Theorem the resulting complex curves (known as <em>elliptic curves</em>) have the structure of a <span class="math-container">$1$</span>-dimensional group variety over <span class="math-container">$\mathbb{C}$</span>, their group law being induced by the translations of their universal cover <span class="math-container">$\mathbb{R}^2$</span>. </p>
<p>Reduction over <span class="math-container">$\mathbb{F}_p$</span> of elliptic curves defined over <span class="math-container">$\mathbb{Q}$</span> are extensively used in <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography" rel="nofollow noreferrer">modern cryptography</a>. </p>
|
323,559 | <p>Developable surfaces in <span class="math-container">$\mathbb{R}^{3}$</span> have lots of applications outside geometry (e.g., cartography, architecture, manufacturing).</p>
<p>I am curious about potential or actual applications of flat submanifolds of <span class="math-container">$\mathbb{R}^{d}$</span>, where <span class="math-container">$d >3$</span>, to other fields of mathematics and science.</p>
<p>By <em>flat</em> I mean locally isometric to Euclidean space.</p>
| Piotr Hajlasz | 121,665 | <ol>
<li><p>Crystallographic groups define flat compact manifolds and they are used to describe symmetries of crystals. </p></li>
<li><p>Flat tori are used in computational physics and chemistry: if you want to investigate dynamics, say of a gas, and for computational reasons you can only consider 1000 particles, you cannot place the particles in <span class="math-container">$\mathbb{R}^3$</span> because they would escape. The trick is to place the particles in <span class="math-container">$\mathbb{S}^1\times \mathbb{S}^1\times \mathbb{S}^1$</span>,
which is represented as a "periodic" cube: if a particle leaves the cube through one side, it enters the cube on the opposite side.</p></li>
<li><p>Math and art: By the famous Nash-Kuiper theorem, a flat torus <span class="math-container">$\mathbb{S}^1\times \mathbb{S}^1$</span> does admit a <span class="math-container">$C^1$</span> isometric embedding into <span class="math-container">$\mathbb{R}^3$</span>. This is a very surprising result. There have been sculptures showing this embedding and you can see it on youtube: <a href="https://www.youtube.com/watch?v=RYH_KXhF1SY" rel="nofollow noreferrer">https://www.youtube.com/watch?v=RYH_KXhF1SY</a> </p></li>
</ol>
|
118,275 | <p>In the construction of Soergel's bimodules in representation theory, it's essential for him to work with <em>split</em> Grothendieck groups. Here he starts with a certain small additive category $\mathcal{A}$ and writes $\langle \mathcal{A} \rangle$ for its split Grothendieck group: the free abelian group on objects $\langle A \rangle$ corresponding as usual to isomorphism classes, modulo sums $\langle C \rangle = \langle A \rangle + \langle B \rangle$ corresponding only to the situation $C \cong A \oplus B$.</p>
<p>This is a less familiar situation than the usual Grothendieck group with sums corresponding to short exact sequences which may or may not split. </p>
<blockquote>
<p>Where does the notion of split Grothendieck group originate, and why?</p>
</blockquote>
<p>This is mostly asked out of curiosity, but I'm also looking for further interesting examples.</p>
| Angelo | 4,790 | <p>The split Grothendieck group for vector bundles on a complete variety appears in Nori's PhD thesis on the fundamental group scheme; this was published in the Proceedings of the Indian Academy of Science in 1981. It is used to define and study finite vector bundles. Nori does not give any references, so as far as I know the construction might be due to him.</p>
|
3,964,862 | <p>The arithmetic mean has the nice property of minimising the sum of squares, or in other words, minimising the sum of squared Euclidean distances. Formally, given a set of points <span class="math-container">$x_1, \dots, x_n \in \mathbb{R}^d$</span>, the arithmetic mean, <span class="math-container">$\mu = \frac{1}{n} \sum_{i=1}^n x_i$</span>, is the unique solution to the equation
<span class="math-container">$$\mu = \underset{x \in \mathbb{R}^d}{\operatorname{argmin}} \sum_{i=1}^n d^2(x_i, x) \, ,$$</span>
where <span class="math-container">$d$</span> denotes the Euclidean distance. I was wondering if there exists a distance measure on <span class="math-container">$\mathbb{R}$</span> that has a similar property for the geometric mean, i.e.</p>
<hr />
<p><strong>Question</strong></p>
<p>Is there a distance measure <span class="math-container">$d$</span> on <span class="math-container">$\mathbb{R}$</span> such that for any set of points <span class="math-container">$x_1, \dots, x_n \in \mathbb{R}$</span> we have
<span class="math-container">$$ \sqrt[n]{x_1 \cdot \dots \cdot x_n} = \underset{x \in \mathbb{R}}{\operatorname{argmin}} \sum_{i=1}^n d^2(x_i, x) \; \textbf{?}$$</span></p>
| Matthew H. | 801,306 | <p>Consider the metric <span class="math-container">$d:(0,\infty)^2 \rightarrow [0,\infty)$</span> defined by <span class="math-container">$$d(x,y)=\Bigg|\ln\Big(\frac{y}{x}\Big)\Bigg|$$</span> This is the metric you're looking for. Given a dataset <span class="math-container">$\{x_1,\ldots ,x_n\}\subseteq (0,\infty)$</span> define a function <span class="math-container">$f$</span> on <span class="math-container">$(0,\infty)$</span> by <span class="math-container">$$f(x)=\sum_{i=1}^nd^2(x,x_i)=\sum_{i=1}^n(\ln(x)-\ln(x_i))^2$$</span> Taking a derivative yields <span class="math-container">$$f'(x)=\frac{2}{x}\sum_{i=1}^n(\ln(x)-\ln(x_i))=0 \iff \ln(x)=\frac{1}{n}\sum_{i=1}^n\ln(x_i)$$</span> This shows <span class="math-container">$x=(x_1 \times \dots \times x_n)^{1/n}$</span> is a critical point of <span class="math-container">$f$</span>. Now since <span class="math-container">$f$</span> is continuous on <span class="math-container">$(0,\infty)$</span>, is bounded below by <span class="math-container">$0$</span>, and <span class="math-container">$f(x)\rightarrow \infty$</span> as <span class="math-container">$x \rightarrow 0^+$</span> or as <span class="math-container">$x \rightarrow \infty$</span>, we see that <span class="math-container">$f$</span> attains a global minimum at its critical point.</p>
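<p>A quick numerical check of this answer (a plain-Python sketch with a made-up dataset): the geometric mean gives a smaller value of $f$ than nearby points:</p>

```python
import math

data = [1.0, 2.0, 8.0]
gm = math.exp(sum(math.log(x) for x in data) / len(data))   # geometric mean

def f(x):
    # sum of squared log-distances d(x, x_i)^2 = (ln x - ln x_i)^2
    return sum((math.log(x) - math.log(xi)) ** 2 for xi in data)

neighbours = [gm * (1 + eps) for eps in (-0.1, -0.01, 0.01, 0.1)]
is_min = all(f(gm) <= f(y) for y in neighbours)
```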
|
202,247 | <p>I'm working on some problem in algebraic geometry. I need a reference to the following result:</p>
<p>Let $h\in\mathbb{N}$ with $h\geq1$ and let $F\in\mathbb{C}\left[x_{1},\ldots,x_{h}\right]$
be a non zero polynomial. The complement manifold $\mathbb{C}^{h}\setminus\left\lbrace F=0\right\rbrace$ is a
nonempty open connected subspace of $\mathbb{C}^{h}.$</p>
<p>Probably this is contained in some old work of Zariski (or even older). Please do not hesitate to suggest some bibliographical references.</p>
| Georges Elencwajg | 450 | <p><strong>Claim:</strong> The complement $U=\mathbb C^h\setminus \{F=0\}$ is path-connected and thus connected.<br>
<strong>Proof:</strong><br>
Given $a,b\in U$ consider the affine complex line $L_{a,b}=L$ joining $a$ to $b$.<br>
The polynomial $F\mid L$ is not zero since it is not zero at $a$ nor at $b$.<br>
Thus it has only finitely many zeros on $L$ ($\cong \mathbb C $ !) and we can find a path from $a$ to $b$ in $L$ avoiding these zeros: that path is contained in $U$. </p>
|
3,276,572 | <p>Let <span class="math-container">$\lVert \cdot \rVert$</span> be a (submultiplicative) matrix norm.</p>
<p>Do we have, for all matrices of determinant $1$, the following lower bound:</p>
<p><span class="math-container">$$\lVert M \rVert \geq 1$$</span></p>
<p>I'm quite confused: I could not find any counterexample, yet the statement seems very fishy to me. I tried to experiment with:</p>
<p><span class="math-container">\begin{bmatrix}
1& x \\
0& 1
\end{bmatrix}</span></p>
<p>But its Frobenius norm, $\sqrt{2+x^2} \geq \sqrt{2}$, can never be smaller than $1$.</p>
| Raito | 112,314 | <p>Well, in fact, I just got it.</p>
<p><span class="math-container">$SL_n(\Bbb K)$</span> is closed.</p>
<p>If I suppose that <span class="math-container">$\lVert M \rVert < 1$</span>, then <span class="math-container">$M^n \in SL_n(\Bbb K)$</span> for all <span class="math-container">$n$</span>, while submultiplicativity gives <span class="math-container">$\lVert M^n \rVert \leq \lVert M \rVert^n \to 0$</span>, so <span class="math-container">$M^n \to 0$</span>. Since <span class="math-container">$SL_n(\Bbb K)$</span> is closed and does not contain <span class="math-container">$0$</span>, we derive a contradiction.</p>
<p>Which yields the result.</p>
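<p>A quick numerical illustration (a sketch in plain Python): random real determinant-one matrices always have Frobenius norm at least $1$, indeed at least $\sqrt 2$, since $a^2+b^2+c^2+d^2\ge 2|ad-bc|$:</p>

```python
import math
import random

random.seed(0)

def random_sl2():
    # a random real 2x2 matrix [[a, b], [c, d]] with det = ad - bc = 1
    a = 0.0
    while abs(a) < 1e-6:
        a = random.uniform(-3, 3)
    b = random.uniform(-3, 3)
    c = random.uniform(-3, 3)
    d = (1 + b * c) / a
    return a, b, c, d

def frobenius(a, b, c, d):
    return math.sqrt(a * a + b * b + c * c + d * d)

norms = [frobenius(*random_sl2()) for _ in range(1000)]
min_norm = min(norms)
```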
|
1,728,920 | <p>I am a software engineer trying to wrap his head around <strong>Fast Fourier Transform (FFT)</strong>. Specifically, I need to implement it as part of some software I am writing. Now I can handle the implementation of the algorithm/operations, and in fact will likely just use an open source math library to do most of the heavy lifting for me. But there's something fundamental here that I just <em>want</em> to understand.</p>
<p>According to <a href="https://en.wikipedia.org/wiki/Fourier_analysis" rel="nofollow">Wikipedia</a>:</p>
<blockquote>
<p>Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.</p>
</blockquote>
<p>and:</p>
<blockquote>
<p>For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.</p>
</blockquote>
<p>So here we have two examples:</p>
<ol>
<li>Decomposing a general function into component trig functions to study heat transfer; and</li>
<li>Decomposing a sampling of a musical note into component frequencies</li>
</ol>
<p>So I get the <em>what</em> here; that is, what FFT/Fourier Analysis is attempting to do (decomposing a function into component functions), and like I said, I can handle the <em>how</em> later (either coding up FFT myself or using a library), <strong>but what I'm stuck on is the <em>why</em>.</strong></p>
<p><strong>Why do this?</strong> Using one of the examples above, <em>why</em> decompose a sound sample (musical note) into component frequencies? What is there to be gained/learned from doing this? What extra information does this expose for analytical purposes? Why can we better understand heat transfer by using FFT to decompose some "heat function" into smaller/component trig functions?</p>
| mathreadler | 213,607 | <p>Sound is moving waves in matter. Sin and Cos are waves too.</p>
<p>Heat transfer is often modelled with differential equations. Complex exponentials (which are the basis functions in the Fourier Transform) are eigenfunctions to the operation of differentiation.</p>
|
1,946,881 | <p>Looking around I have found lots of material on continuous time Markov processes on finite or countable state spaces, say $\{0,1,\ldots,J\}$ for some $J\in\mathbb{N}$ or just $\mathbb{N}$. Similarly I have earlier worked with (discrete time) Markov chains on general state spaces, following the modern classic by Meyn & Tweedie. </p>
<p>My question concerns monographs on continuous time Markov processes on general state spaces, say some subset of $\mathbb{R}^k$, $k\in\mathbb{N}$. Are there any good references - preferably but not necessarily suited for an ambitious master student - on this topic?</p>
| Hagen von Eitzen | 39,174 | <p>Consider any $G$ with at least two elements and define $a*b=a$.
Then for all $x\in G$, there exists $e\in G$ (namely, $e=x$) such that $x*e=e*x=x$. However, there is no $e\in G$ such that for all $x\in G$, we have $e*x=x$ (because $e*x=e$)</p>
|
1,946,881 | <p>Looking around I have found lots of material on continuous time Markov processes on finite or countable state spaces, say $\{0,1,\ldots,J\}$ for some $J\in\mathbb{N}$ or just $\mathbb{N}$. Similarly I have earlier worked with (discrete time) Markov chains on general state spaces, following the modern classic by Meyn & Tweedie. </p>
<p>My question concerns monographs on continuous time Markov processes on general state spaces, say some subset of $\mathbb{R}^k$, $k\in\mathbb{N}$. Are there any good references - preferably but not necessarily suited for an ambitious master student - on this topic?</p>
| celtschk | 34,930 | <p>For example, consider the set $G=\{a,b,c\}$ with $x*y=y$ for all $x,y\in G$.</p>
<p>Let's first test the group axiom:</p>
<blockquote>
<p>There exists an $e\in G$ such that for all $x\in G$, $x∗e=e∗x=x$</p>
</blockquote>
<p>So let's check:</p>
<ul>
<li>Could it be that $e=a$? No, because $a*b=b\ne a$.</li>
<li>Could it be that $e=b$? No, because $a*c=c\ne a$.</li>
<li>Could it be that $e=c$? No, because $c*a=a\ne c$.</li>
</ul>
<p>So no matter which element we choose for $e$, the condition is not fulfilled for all $x$, so the axiom is <em>not</em> fulfilled for my example.</p>
<p>Now let's check your alternative:</p>
<blockquote>
<p>For all $x\in G$, there exists an $e\in G$ such that $x∗e=e∗x=x$</p>
</blockquote>
<p>So we have to check all $x\in G$.</p>
<ul>
<li>For $x=a$, there exists such an $e$, namely $e=a$, as $a*a=a*a=a$.</li>
<li>For $x=b$, there exists such an $e$, namely $e=b$, as $b*b=b*b=b$.</li>
<li>For $x=c$, there exists such an $e$, namely $e=c$, as $c*c=c*c=c$.</li>
</ul>
<p>So your alternative axiom <em>is</em> fulfilled for my example.</p>
<p>I hope that example clears up the difference between both statements.</p>
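<p>The two quantified statements can also be checked mechanically (a small Python sketch of the example above, with $x*y=y$ on $G=\{a,b,c\}$):</p>

```python
G = ["a", "b", "c"]

def op(x, y):
    # the operation from the example: x * y = y
    return y

# Group axiom: there EXISTS an e such that FOR ALL x, x*e == e*x == x.
axiom_group = any(all(op(x, e) == x and op(e, x) == x for x in G) for e in G)

# Weaker variant: FOR ALL x there EXISTS an e with x*e == e*x == x.
axiom_weak = all(any(op(x, e) == x and op(e, x) == x for e in G) for x in G)
```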
|
2,655,518 | <p>Given that $2ac=bc$,
find the ratio ( $K$ ):
what is the ratio of their areas?
<a href="https://i.stack.imgur.com/9NPRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NPRi.png" alt="enter image description here"></a>I found that it is $2$ or $1/2$;
is that true? </p>
<p>If the question isn't clear, please let me know and I will make an effort to make it understandable. </p>
| TheSimpliFire | 471,884 | <p>If you have an operator that yields more than one solution, then obviously those solutions are different from one another (otherwise there would only be one). In particular, it makes no sense to set them equal.</p>
<p>Your example involves the function $f(x)=\sqrt x$, with $f(25)=\pm5$. Although both $-5$ and $5$ satisfy $\sqrt{25}$, $-5$ is clearly not the same as $5$. In other words, saying that the negative square root is equal to that of the positive makes no sense.</p>
<p>As suggested in a comment, to avoid this confusion, we use the <em>principal square root</em>: $f(x)=|\sqrt x|$</p>
<p>This may be extended to equations outputting two or more solutions. An example of this is solving the cubic $$x^3-2x^2-x+2=(x+1)(x-1)(x-2)=0$$ Here, $-1$, $1$ and $2$ are solutions, but $-1\neq1\neq2$.</p>
|
1,231,095 | <p>How does one find $\mathcal{L}^{-1}\{\ln[\frac{s^2+a^2}{s^2+b^2}]\}$?</p>
<p>I've tried splitting it up into $\mathcal{L}^{-1}\{\ln(s^2+a^2)\}-\mathcal{L}^{-1}\{\ln(s^2+b^2)\}$. However, I can't think of any way to actually take the inverse transform of $\mathcal{L}^{-1}\{\ln(s^2+a^2)\}$.</p>
| Fatemeh Shiravand | 230,494 | <p>You can write
$$
\begin{align}
F^{\prime}(s)&=[\ln(s^{2}+a^{2})-\ln(s^{2}+b^{2})]^\prime\\
&=\frac{2s}{s^{2}+a^{2}}-\frac{2s}{s^{2}+b^{2}}\\
&\to 2\cos (at)-2\cos (bt)=-t\,f(t)\qquad\text{(since }\mathcal{L}\{t\,f(t)\}=-F^{\prime}(s)\text{)}\\
f(t)&=\mathcal{L}^{-1}\left\{\ln\frac{s^{2}+a^{2}}{s^{2}+b^{2}}\right\}=\frac{2\cos (bt)-2\cos (at)}{t}
\end{align}
$$</p>
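<p>As a quick sanity check, here is a small Python sketch (the values $a=1$, $b=3$, $s=2$ are arbitrary choices of mine, not from the question) comparing the closed-form derivative $F'(s)=\frac{2s}{s^{2}+a^{2}}-\frac{2s}{s^{2}+b^{2}}$ against a central finite difference of $F(s)=\ln\frac{s^{2}+a^{2}}{s^{2}+b^{2}}$:</p>

```python
import math

def F(s, a=1.0, b=3.0):
    # F(s) = ln((s^2 + a^2) / (s^2 + b^2))
    return math.log((s * s + a * a) / (s * s + b * b))

def dF(s, a=1.0, b=3.0):
    # closed-form derivative: 2s/(s^2 + a^2) - 2s/(s^2 + b^2)
    return 2 * s / (s * s + a * a) - 2 * s / (s * s + b * b)

h = 1e-6
s = 2.0
numeric = (F(s + h) - F(s - h)) / (2 * h)
print(abs(numeric - dF(s)))  # tiny: the formulas agree
```

<p>The agreement supports the differentiation step; the inverse transform then follows term by term.</p>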
|
2,751,819 | <p>I need some help solving this.
I have tried:</p>
<p>$$
\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
=\frac{1}{\operatorname{det}A}\cdot \begin{bmatrix}
d & -b \\
-c & a \\
\end{bmatrix}$$
I ended up with $$a=\frac{d}{\operatorname{det}A},$$
and
$$d=\frac{a}{\operatorname{det}A}.$$
Then
$$\operatorname{tr}(A)=a+d=\frac{a+d}{\operatorname{det}A},$$
but I don't really think it works.</p>
| lhf | 589 | <p>By the Cayley–Hamilton theorem or <a href="https://math.stackexchange.com/questions/1494369/show-that-a-matrix-a-pmatrixab-cd-satisfies-a2-adaad-bci-o">direct verification</a>, we have $A^{2}-\operatorname {tr}(A)A+\det(A)I=0$.</p>
<p>From $A^2=I$, we get $\operatorname {tr}(A)A=(\det(A)+1)I$.</p>
<p>Taking traces on both sides, we get $\operatorname {tr}(A)^2=2(\det(A)+1)$.</p>
<p>From $A^2=I$, we also get $\det(A)^2=1$ and so $\det(A)=\pm1$.</p>
<p>If $\det(A)=1$, then $\operatorname {tr}(A)^2=4$ and so $\operatorname {tr}(A)=\pm 2$.</p>
<p>If $\det(A)=-1$, then $\operatorname {tr}(A)^2=0$ and so $\operatorname {tr}(A)=0$.</p>
<p>These three possibilities occur for the matrices below:
$$
\operatorname {tr}\begin{pmatrix}1&0\\0&1\end{pmatrix} = 2
\qquad
\operatorname {tr}\begin{pmatrix}-1&\hphantom-0\\\hphantom-0&-1\end{pmatrix} = -2
\qquad
\operatorname {tr}\begin{pmatrix}1&\hphantom-0\\0&-1\end{pmatrix} = 0
$$</p>
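<p>For a concrete check, the following Python sketch (the test matrix is an arbitrary choice of mine) verifies the Cayley&ndash;Hamilton identity for a sample $2\times2$ matrix and confirms $A^2=I$ and $\operatorname{tr}(A)^2=2(\det(A)+1)$ for the three matrices above:</p>

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

I2 = [[1, 0], [0, 1]]

# Cayley-Hamilton for a sample 2x2 matrix: A^2 - tr(A) A + det(A) I = 0
A = [[3, 1], [2, 5]]
A2 = mat_mul(A, A)
residual = [[A2[i][j] - trace(A) * A[i][j] + det(A) * I2[i][j]
             for j in range(2)] for i in range(2)]
assert residual == [[0, 0], [0, 0]]

# the three involutions from the answer: A^2 = I and tr(A)^2 = 2(det(A) + 1)
traces = []
for A in ([[1, 0], [0, 1]], [[-1, 0], [0, -1]], [[1, 0], [0, -1]]):
    assert mat_mul(A, A) == I2
    assert trace(A) ** 2 == 2 * (det(A) + 1)
    traces.append(trace(A))
print(traces)  # [2, -2, 0]
```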
|
3,715,475 | <p>About a year ago I asked <a href="https://math.stackexchange.com/questions/3258617/alaoglu-theorem-over-the-p-adics">here</a> whether the Banach-Alaoglu Theorem works over the <span class="math-container">$p$</span>-adics. The satisfactory answer I got is that the "usual" proof only uses local compactness, and so the Banach-Alaoglu Theorem holds for any local field.</p>
<p>Now I would like to look at other, more general non-Archimedean fields. I know that Hahn-Banach holds for all spherically complete such fields, and so I was wondering if it is possible to prove Banach-Alaoglu for such fields as well? Because Hahn-Banach works, a related question is whether in the complex setting there is a proof of Banach-Alaoglu that uses Hahn-Banach, but not local compactness of <span class="math-container">$\mathbb{R}$</span> or <span class="math-container">$\mathbb{C}$</span>.</p>
| Chilote | 113,061 | <p>In the general non-Archimedean case, instead of using the concept of compactness, it is more suitable to use the concept of compactoidness. I think the theorem you are looking for is
<a href="https://i.stack.imgur.com/1GihY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1GihY.png" alt="enter image description here" /></a></p>
<p>(Page 273.) All the details and preliminaries are in the spectacular book <em>Locally Convex Spaces over Non-Archimedean Valued Fields</em>, C. Perez-Garcia and W. H. Schikhof, Cambridge University Press, 2010.</p>
|
11,073 | <p>I have three simple graphs in one plot. Now I am trying to make a button for each graph so you can hide or show it in the plot. So far I have only been able to make a checkbox with the Manipulate function, but I don't know how to tell the checkbox that it should hide my graph when unchecked and display it when checked. </p>
<p>Here is what I was able to make so far. I know it looks simple, but I also have many other Mathematica documents which in the end just need this button.</p>
<p><img src="https://i.stack.imgur.com/QMqcr.png" alt="enter image description here"> </p>
| Vitaliy Kaurov | 13 | <p>One way to do this is to use <code>Opacity</code> to hide a graph and empty label "" to hide a label:</p>
<pre><code>Manipulate[
Plot[{0.5 x + 1, x, 2 x - 2}, {x, -1, 5},
PlotRange -> {-1, 5}, AspectRatio -> 1,
PlotStyle -> {Opacity[a], Opacity[b], Opacity[c]},
Epilog -> {
Text[If[a == 1, "f(x)", ""], {4.5, 2.7}],
Text[If[b == 1, "y=x", ""], {4.5, 4}],
Text[If[c == 1, "g(x)", ""], {3, 4.5}]
}],
{{a, 1, "f(x)"}, {1, 0}},
{{b, 1, "y=x"}, {1, 0}},
{{c, 1, "g(x)"}, {1, 0}}, ControlPlacement -> Left,
FrameMargins -> 0]
</code></pre>
<p><img src="https://i.stack.imgur.com/AUxJT.png" alt="enter image description here"></p>
|
11,073 | <p>I have three simple graphs in one plot. Now I am trying to make a button for each graph so you can hide or show it in the plot. So far I have only been able to make a checkbox with the Manipulate function, but I don't know how to tell the checkbox that it should hide my graph when unchecked and display it when checked. </p>
<p>Here is what I was able to make so far. I know it looks simple, but I also have many other Mathematica documents which in the end just need this button.</p>
<p><img src="https://i.stack.imgur.com/QMqcr.png" alt="enter image description here"> </p>
| Mike Honeychurch | 77 | <p><code>Manipulate</code> is probably the easiest for this specific case but here is an alternative:</p>
<pre><code> DynamicModule[{select = {1, 2, 3}},
Column[{
CheckboxBar[
Dynamic[select], {1 -> "g(x)", 2 -> "y=x",
3 -> "f(x)"}],
Dynamic@Plot[Evaluate@{0.5 x + 1, 2 x - 2, x}[[select]], {x, -1, 5},
PlotRange -> {-1, 5},
AspectRatio -> 1,
ImageSize -> 300,
PlotStyle -> ColorData[1, "ColorList"][[select]]
]
}]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/O3aUv.jpg" alt="enter image description here"></p>
<pre><code>DynamicModule[{select = {1, 2, 3}},
Row[{
CheckboxBar[
Dynamic[select], {1 -> "g(x)", 2 -> "y=x", 3 -> "f(x)"},
Appearance -> "Vertical"],
Dynamic@Plot[Evaluate@{0.5 x + 1, 2 x - 2, x}[[select]], {x, -1, 5},
PlotRange -> {-1, 5},
AspectRatio -> 1,
ImageSize -> 300,
PlotStyle -> ColorData[1, "ColorList"][[select]]
]
}]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/r18e0.jpg" alt="enter image description here"></p>
|
3,989,921 | <p>Answered (I think!):</p>
<p>The triple product's purpose is to find a direction to the origin, perpendicular to the baseline. This is trivial in 2D, as there are only two perpendicular orientations, but the "cylinder" distinction is made in 3D because there are infinitely many perpendicular orientations - hence the triple product. The diagram in one of the answers shows this nicely!</p>
<p>Original Question:</p>
<p>I was looking at a video explaining the G-J-K algorithm for finding the two closest features on two convex shapes - when it came to a bit about, given a line, what is the direction in 3D space, from a point on the line, which points to the origin. Seems easy enough: just invert the position vector to point backwards to the origin - and that's exactly what the code does in an earlier step of the algorithm. Bizarrely, they say that as "the origin could be in a cylinder around the line in 3D space", the direction to the origin is given by the following:</p>
<p>Let X be the vector between points A and B on the line (from A to B), and Y be the inverse position vector from A to the origin. The direction to the origin, apparently, is (X cross Y) cross X! I expanded this out, saw an interesting pattern but could not link it to the problem:</p>
<p>For two vectors A, B; (A cross B) cross A I found to be equal to:</p>
<p>Ax(AyBy + AzBz) - Bx(AyAy + AzAz)</p>
<p>Ay(AxBx + AzBz) - By(AxAx + AzAz)</p>
<p>Az(AxBx + AyBy) - Bz(AxAx + AyAy)</p>
<p>Notated as a column vector, with "Vn" meaning the Nth coordinate of vector V. By "cross" I mean the 3D cross product.</p>
<p>I know this isn't the computer science stack exchange - I would like the mathematical meaning of "(A cross B) cross A", and a reason why the code does this, and not just another inverse position vector as it did previously, to find a direction to the origin. Many thanks for any help, and I will repost this on a different stack if it's too offensive here!</p>
| Community | -1 | <p>I must be missing something, but taking the question in isolation, namely "Let X be the vector between points A and B. The direction to the origin, apparently, is:"</p>
<p>... Welp, If we know all about the endpoints of the line, namely <span class="math-container">$\vec{A}$</span> and <span class="math-container">$\vec{B}$</span>, then some point on the line connecting the endpoints must be <span class="math-container">$$\vec{X} = \alpha \vec{A} + \left(1-\alpha\right) \vec{B} \:,$$</span> where <span class="math-container">$\alpha$</span> is a dimensionless parameter between zero and one, inclusive. If that form is agreeable, just take <span class="math-container">$-\vec{X}$</span> to reach back to the origin, or calculate <span class="math-container">$-\hat{X}$</span> to simply point there.</p>
|
3,989,921 | <p>Answered (I think!):</p>
<p>The triple product's purpose is to find a direction to the origin, perpendicular to the baseline. This is trivial in 2D, as there are only two perpendicular orientations, but the "cylinder" distinction is made in 3D because there are infinitely many perpendicular orientations - hence the triple product. The diagram in one of the answers shows this nicely!</p>
<p>Original Question:</p>
<p>I was looking at a video explaining the G-J-K algorithm for finding the two closest features on two convex shapes - when it came to a bit about, given a line, what is the direction in 3D space, from a point on the line, which points to the origin. Seems easy enough: just invert the position vector to point backwards to the origin - and that's exactly what the code does in an earlier step of the algorithm. Bizarrely, they say that as "the origin could be in a cylinder around the line in 3D space", the direction to the origin is given by the following:</p>
<p>Let X be the vector between points A and B on the line (from A to B), and Y be the inverse position vector from A to the origin. The direction to the origin, apparently, is (X cross Y) cross X! I expanded this out, saw an interesting pattern but could not link it to the problem:</p>
<p>For two vectors A, B; (A cross B) cross A I found to be equal to:</p>
<p>Ax(AyBy + AzBz) - Bx(AyAy + AzAz)</p>
<p>Ay(AxBx + AzBz) - By(AxAx + AzAz)</p>
<p>Az(AxBx + AyBy) - Bz(AxAx + AyAy)</p>
<p>Notated as a column vector, with "Vn" meaning the Nth coordinate of vector V. By "cross" I mean the 3D cross product.</p>
<p>I know this isn't the computer science stack exchange - I would like the mathematical meaning of "(A cross B) cross A", and a reason why the code does this, and not just another inverse position vector as it did previously, to find a direction to the origin. Many thanks for any help, and I will repost this on a different stack if it's too offensive here!</p>
| Aleksejs Fomins | 250,854 | <p>Double cross product is a very common technique to project a vector onto the surface. Consider the triple product</p>
<p><span class="math-container">$$-\vec{n} \times \vec{n} \times \vec{v}$$</span></p>
<p>where <span class="math-container">$\vec{n}$</span> is a normal vector to some surface. <span class="math-container">$\vec{n}$</span> is typically normalized, meaning that its length <span class="math-container">$|\vec{n}| = 1$</span>. Any cross-product with a normal is going to be orthogonal to that normal, so it will land on the surface the normal is orthogonal to. It will also be orthogonal to the original <span class="math-container">$\vec{v}$</span>. Now the second cross product will rotate the vector counter-clockwise 90 degrees, still within that surface. The result of that will be that it will now be parallel to <span class="math-container">$\vec{v}$</span> in some sense. More precisely, we will have obtained the projection of the vector <span class="math-container">$\vec{v}$</span> onto the surface, for which <span class="math-container">$\vec{n}$</span> is a normal. It turns out that the cross product rotates the vector in the wrong direction, so we also need to add a minus sign in front to correct for that.</p>
<p><a href="https://i.stack.imgur.com/phy8C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/phy8C.png" alt="enter image description here" /></a></p>
|
1,488,501 | <p>Let $\varphi:\mathbb{R}\backslash\{3\}\to \mathbb{R}$ be a periodic function such that for all $x\in \mathbb{R}$ $$\varphi(x+4)=\frac{\varphi(x)-5}{\varphi(x)-3}$$ Find the period of $\varphi$.</p>
| Paul Sinclair | 258,282 | <p>Defining $f(x) = {x - 5\over x-3}$, we have $\varphi(x + 4) = f(\varphi(x))$. More generally, $\varphi(x + 4k) = f^{(k)}(\varphi(x))$. Assuming that the period is a multiple of $4$, for some $k$, $\varphi(x) = y = f^{(k)}(y)$.</p>
<p>I suspect the trick is to find a value of $k$ such that $y = f^{(k)}(y)$ has real solutions.</p>
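<p>Following this hint, one can check that $f(x)=\frac{x-5}{x-3}$ satisfies $f^{(4)}=\mathrm{id}$ while $f$, $f^{(2)}$, $f^{(3)}$ do not fix every point, so $y=f^{(4)}(y)$ for every $y$ and hence $\varphi(x+4\cdot 4)=\varphi(x)$, giving period $16$. A small Python sketch (the starting value $7.0$ is an arbitrary choice of mine):</p>

```python
def f(x):
    # the Mobius transformation from the functional equation
    return (x - 5) / (x - 3)

y = 7.0
orbit = [y]
for _ in range(4):
    y = f(y)
    orbit.append(y)
print(orbit)  # 7.0 -> 0.5 -> 1.8 -> 2.666... -> back to (approximately) 7.0
```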
|
736,036 | <p><strong>Problem</strong> - The least number which leaves remainders 2, 3, 4, 5 and 6 on dividing by 3, 4, 5, 6 and 7 is?</p>
<p><strong>Solution</strong> - Here 3-2 = 1, 4-3 = 1, 5-4 = 1 and so on.</p>
<p>So required number is (LCM of 3, 4, 5, 6, 7) - 1 = 419</p>
<p><strong>My confusion</strong> - </p>
<p>I didn't get the solution of this. Please explain how by subtracting the conclusion was drawn that the number will be the (LCM - 1)?</p>
| Charles | 1,778 | <p>You can see by inspection that -1 works. Then any other solution must be congruent to -1 mod lcm(3,4,5,6,7) = 420 and hence the least positive solution is 419.</p>
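<p>A brute-force Python sketch confirms both that $\operatorname{lcm}(3,4,5,6,7)-1=419$ leaves the required remainders and that no smaller positive number does:</p>

```python
from math import lcm  # variadic lcm requires Python 3.9+

divisors = [3, 4, 5, 6, 7]

# candidate from the LCM argument: N = -1 (mod each divisor)
candidate = lcm(*divisors) - 1
assert candidate == 419
assert all(candidate % d == d - 1 for d in divisors)

# verify it is the least positive such number
least = next(n for n in range(1, 10**6)
             if all(n % d == d - 1 for d in divisors))
print(least)  # 419
```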
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then G' is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually show only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of G where <span class="math-container">$f=1$</span>.</p>
| Greg Markowsky | 387,394 | <p>Another fun way is to use Parseval's identity (one should never pass up an opportunity to do so). Suppose your region contains <span class="math-container">$0$</span> (if not, just translate it so that it does). Then <span class="math-container">$f$</span> has a power series representation <span class="math-container">$f(z) = \sum_{n=0}^\infty a_n z^n$</span> which converges near <span class="math-container">$0$</span>, and for <span class="math-container">$r$</span> sufficiently small we have</p>
<p><span class="math-container">$$
\frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^2 d\theta = \sum_{n=0}^\infty |a_n|^2 r^{2n}.
$$</span></p>
<p>Our assumptions imply that the left side is equal to <span class="math-container">$1$</span> for all such <span class="math-container">$r$</span>, but the only way the right can be constant is if <span class="math-container">$a_n=0$</span> for <span class="math-container">$n \geq 1$</span>. I guess this just shows that <span class="math-container">$f$</span> is locally constant, but then the derivative is <span class="math-container">$0$</span> at every point in <span class="math-container">$G$</span>, and this means <span class="math-container">$G$</span> is globally constant.</p>
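<p>Here is a numerical illustration of the identity in Python (the polynomial $f(z)=1+2z+3z^2$ and radius $r=0.5$ are hypothetical choices of mine): the circle average of $|f(re^{i\theta})|^2$ matches $\sum_n |a_n|^2 r^{2n}$.</p>

```python
import cmath
import math

coeffs = [1, 2, 3]           # f(z) = 1 + 2z + 3z^2
r = 0.5
N = 4096                     # resolution of the Riemann sum

def f(z):
    return sum(a * z ** k for k, a in enumerate(coeffs))

# left-hand side: (1 / 2 pi) * integral of |f(r e^{i theta})|^2 d theta
lhs = sum(abs(f(r * cmath.exp(2j * math.pi * k / N))) ** 2
          for k in range(N)) / N

# right-hand side: sum over n of |a_n|^2 r^(2n)
rhs = sum(a ** 2 * r ** (2 * k) for k, a in enumerate(coeffs))

print(lhs, rhs)  # both approximately 2.5625
```

<p>Only a constant $f$ keeps this average independent of $r$, which is the point of the argument above.</p>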
|
1,649,320 | <p>I am having a hard time proving this simple and natural identity of sets. what I do is go round and round in circles:</p>
<p>$$A\cup( A\cap B) = (A\cup A) \cap (A\cup B)$$
$$= A \cap(A\cup B)$$</p>
<p>Now what? I apply the distributive property again and reach the first expression. How can I show this using set properties (distributive, idempotent, associative, de morgan etc)?</p>
| Brian M. Scott | 12,042 | <p>Assume that $A$ and $B$ are subsets of some universal set $X$. Then</p>
<p>$$\begin{align*}
A\cup(A\cap B)&=(\color{red}A\cap X)\cup(\color{red}A\cap B)\\
&=\color{red}A\cap(X\cup B)\\
&=A\cap X\\
&=A\;.
\end{align*}$$</p>
|
1,649,320 | <p>I am having a hard time proving this simple and natural identity of sets. what I do is go round and round in circles:</p>
<p>$$A\cup( A\cap B) = (A\cup A) \cap (A\cup B)$$
$$= A \cap(A\cup B)$$</p>
<p>Now what? I apply the distributive property again and reach the first expression. How can I show this using set properties (distributive, idempotent, associative, de morgan etc)?</p>
| user 1 | 133,030 | <blockquote>
<p><strong>Answer 1</strong>. Clearly $A \subseteq A\cup (A\cap B).$<br>
For the converse, Note that $\color{maroon}A \subseteq A$ and $\color{lime}{A\cap B} \subseteq A$. So $\color{maroon}A\cup \color{lime}{(A\cap B)} \subseteq A.$ </p>
<hr>
</blockquote>
<p><strong>Answer 2</strong>. If $x\in A$, clearly $x\in A\cup (A\cap B).$<br>
Let $x\in A\cup (A\cap B).$ So either $\color{red}{x\in A}$ or $\color{blue}{x\in (A\cap B)}.$ The blue means $\color{blue}{x\in A},$ and $\color{blue}{x\in B}.$ So in both cases (blue&red) we have $x\in A$.</p>
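<p>The absorption laws can also be spot-checked exhaustively on a small universe; a Python sketch (the 4-element universe is an arbitrary choice of mine):</p>

```python
universe = [0, 1, 2, 3]

def subsets(u):
    # enumerate all 2^|u| subsets via bitmasks
    for mask in range(1 << len(u)):
        yield {x for i, x in enumerate(u) if mask >> i & 1}

for A in subsets(universe):
    for B in subsets(universe):
        assert A | (A & B) == A     # absorption: A union (A intersect B) = A
        assert A & (A | B) == A     # dual absorption
pairs_checked = (1 << len(universe)) ** 2
print("absorption holds for all", pairs_checked, "pairs")
```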
|
873,439 | <p>This is a follow-up <a href="https://math.stackexchange.com/questions/872921/prove-that-if-the-square-of-a-number-m-is-a-multiple-of-3-then-the-number-m/872927">question</a>.</p>
<p>The problem is:</p>
<blockquote>
<p>Given two natural numbers, $m$ and $n$, and $n \vert m^2$.</p>
<p>Find necessary and sufficient conditions for $n \vert m$.</p>
</blockquote>
<p>Here are what I find:</p>
<ul>
<li><p>Necessity</p>
<blockquote>
<ol>
<li>$m \geq n$ (trivial)</li>
<li>?</li>
</ol>
</blockquote></li>
<li><p>Sufficiency</p>
<blockquote>
<ol>
<li>$n$ is prime - follows directly from <strong>Matthias</strong>'s answer</li>
<li>$\color{#c00}{n \text{ is} ~square\!-\!free,}$ $\color{#c00}{\text{i.e., has no} ~repeated~prime~factor}$ (stated in <strong>JHance</strong>'s comment to my answer) - as pointed out by <strong>gammatester</strong>, it is wrong.</li>
<li>?</li>
</ol>
</blockquote></li>
</ul>
<p>Help me to complete this list, folks.</p>
| gammatester | 61,216 | <p>Your statement is <strong>wrong</strong>. For every $n$ set $m=n$ and you have $n|m^2$ and $n|m.\;$ But $n$ is not necessarily square-free! Ex. $4|16$ and $4|4$ but $4$ is not square-free.</p>
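<p>A small Python sketch makes both points concrete: square-freeness is not necessary for $n\mid m$, and $n\mid m^2$ alone is not sufficient:</p>

```python
def divides(d, n):
    return n % d == 0

# n = m = 4: here n | m^2 and n | m, yet n = 2^2 is not square-free,
# so "n is square-free" is not a necessary condition
n, m = 4, 4
assert divides(n, m * m) and divides(n, m)
assert divides(2 * 2, n)     # n has the repeated prime factor 2

# and n | m^2 alone does not give n | m:
n, m = 4, 2
assert divides(n, m * m) and not divides(n, m)
print("counterexamples verified")
```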
|
2,860,360 | <p>It is a general question about simple examples of calculating class numbers in quadratic fields. Here are an excerpt from Frazer Jarvis' book <em>Algebraic Number Theory</em>:</p>
<p>"<em>Example 7.20</em> For $K=\mathbb{Q}(\sqrt[3]{2} )$, the discriminant is 108, and $r_{2}=1$. So the Minkowski bound is $\approx 2.940$. So every ideal is equivalent to one whose norm is at most 2. The only ideal of norm 1 is the full ring of integers, which is principal; the ideal $(2)=\mathcal{p}_{2}^{3}$, where $\mathcal{p}_{2}=(\sqrt[3]{2})$ is also principal. Thus every ideal is equivalent to a principal ideal, so the class group is trivial."</p>
<p>The question is why does it suffice to look at the principal ideal generated by 2?</p>
| Davislor | 422,808 | <p>To flesh out Mark Bennet’s answer with a little more detail, there are two questions here:</p>
<ol>
<li><p>If a group of six people all try something they’ll succeed at one time in five, how often will at least one of them succeed?</p></li>
<li><p>If a group of six people all try something they’ll succeed at one time in five, a million times over, what is the expected number of successes?</p></li>
</ol>
<p>As you know, the answer to question 1 is the chance that not all six will fail. $$1 - \left(\frac{4}{5}\right)^6 = \frac{11529}{15625} \approx 0.738$$</p>
<p>Question 2, you already answered: $\frac{1}{5} \cdot 6 \cdot 10^6 = 1.2 \cdot 10^6$. This is not only greater than 737,856, it’s greater than a million.</p>
<p>There’s no contradiction here. We can derive both from the probability of getting <em>exactly</em> one success on six attempts, two successes, and so on up to six.</p>
<p>What are those? Let’s say we have Alice, Bob, Clara, David, Emily and Frank. There’s exactly one success if Alice succeeds (1 time in 5) and each of the other five fails (4 times in 5, to the power of 5), or if the same happens to Bob, Clara, David, Emily or Frank.</p>
<p>There are exactly two successes if Alice and Bob succeed (1 time in 5, to the power of 2) and the other four fail (4 times in 5, to the power of 4), and the same for any other pair of people: Alice and Clara, David and Emily, and so on. How many pairs of people are there? There are six people who could be first, and for each of them, five other people they could be paired with. Then, because picking Alice and David is the same as picking David and Alice, divide by two. As you probably know, this formula is written $\binom{6}{2}$ and pronounced “six choose two.” It’s equal to 15 (AB, AC, AD, AE, AF, BC, BD, BE, BF, CD, CE, CF, DE, DF, EF).</p>
<p>So, the odds of getting more than six or fewer than zero are nonexistent. Otherwise, the odds of getting exactly $i$ successes is $p(i) \equiv \binom{6}{i} \left( \frac{1}{5} \right)^i \left( \frac{4}{5} \right)^{6-i}$. That is, there are $\binom{6}{i}$ sets of $i$ people chosen from 6, a $\left( \frac{1}{5} \right)^i$ probability that all $i$ of them will succeed, and a $\left( \frac{4}{5} \right)^{6-i}$ probability that none of the remaining $6-i$ people will.</p>
<p>This allows us to compute the probabilities of every possible number of successes:</p>
<p>$
\begin{array}{ccccccc}
p(0) & p(1) & p(2) & p(3) & p(4) & p(5) & p(6) \\
\frac{4096}{15625} &\frac{6144}{15625} &\frac{768}{3125} &\frac{256}{3125} &\frac{48}{3125} &\frac{24}{15625} &\frac{1}{15625}
\end{array}
$</p>
<p>We can see that these probabilities add to 1. Furthermore, we can see that the probability that the number of successes is greater than 0 is equal to $1 - p(0)$. What is the expected number of successes? If we conducted a million random trials and counted the number of successes on each, we would add zero times the number of trials with no successes, plus one per the number of trials with exactly one success, two per the number of trials with exactly two successes, and so on up to six. The formula for the expected number of successes per round is therefore</p>
<p>$$ E = 0p(0) + 1p(1) + 2p(2) + ... + 6p(6) = \sum_{i=0}^6 i\cdot p(i) $$</p>
<p>Plugging in the numbers from our table, we see that this is</p>
<p>$$
0\frac{4096}{15625} + 1\frac{6144}{15625} + 2\frac{768}{3125} + 3\frac{256}{3125} + 4\frac{48}{3125} + 5\frac{24}{15625} + 6\frac{1}{15625} = \frac{6}{5}
$$</p>
<p>So, the expected number of successes per round of six people flipping coins is confirmed to be 1.2. On average, that’s 1.2 million successes per one million rounds.</p>
<p>If you’d like to learn more of what mathematicians have discovered about processes like this, you can search for <strong>Bernoulli distribution</strong> and <strong>geometric distribution</strong>.</p>
|
2,771,034 | <p>$\frac{a_n}{b_n} \rightarrow 1$ and $\sum_{n=1}^\infty b_n$ converges, can it be concluded that $\sum_{n=1}^\infty a_n$ converges?<br>
My attempt at an answer to this question: since $\sum_{n=1}^\infty b_n$ converges, $b_n \rightarrow 0$. Because of this, $a_n \rightarrow 0$ equally fast. However, I'm well aware that this does not imply that $\sum_{n=1}^\infty a_n$ converges. I'm stuck at that point, though, as I'm not sure what other conclusions can be drawn. Could anyone help me out?</p>
| G Tony Jacobs | 92,129 | <p>This is the limit comparison test. As long as $\sum b_n$ converges, the limit of $\frac{a_n}{b_n}$ being any real number is enough to guarantee that $\sum a_n$ converges.</p>
<p>Indeed, since $\sum b_n$ is convergent, then so is $\sum kb_n$ for any real $k$. Whatever limit you obtain for $\frac{a_n}{b_n}$, choose some $k$ larger than that, and then look at a direct comparison between $\sum a_n$ and $\sum kb_n$.</p>
<p>(I'm assuming in this discussion that all terms are positive. If not, then you may have to make some adjustments.)</p>
|
71,031 | <p>In 1976 Cappell and Shaneson gave some examples of knots in homotopy 4-spheres and for some time these examples were considered as possible counter-examples to the smooth 4-dimensional Poincare conjecture.</p>
<p>In a series of papers, Akbulut and Gompf have shown most of these Cappell-Shaneson knots actually are knots in the standard <span class="math-container">$S^4$</span>, the most recent reference being <a href="https://arxiv.org/abs/0908.1914" rel="nofollow noreferrer">this</a>.</p>
<p>In principle, one should be able to work through their arguments to derive a picture of these 2-knots in the 4-sphere. Has anyone done this, for <em>any</em> of the Cappell-Shaneson knots?</p>
<p>I know various people have created censi of 2-knots, does anyone know if any Cappell-Shaneson knots appear in those censi? (I have a hard time accepting censuses as plural of census, sorry, it sounds so wrong!)</p>
<p>I'd be happy with any fairly explicit geometric picture of a Cappell-Shaneson knot sitting in <span class="math-container">$S^4$</span>. The two I'm most familiar with is the Whitneyesque motion-diagram, and the "resolution of a knotted 4-valent graph in <span class="math-container">$S^3$</span>" picture. What I want to avoid is the "attach a handle and fuss about and argue that the manifold you've constructed is diffeomorphic to <span class="math-container">$S^4$</span>" situation.</p>
| Scott Carter | 36,108 | <p>There is a paper by Iain Aitchison and Hyam Rubinstein published in a Contemporary Mathematics Series of the AMS (Conference Proceedings) that is the most explicit description of which I know. I wanted to try and draw the corresponding knot diagrams or Yoshikawa diagrams at one time, but never found the time or energy for it. It is a pity.</p>
<p>Daniel Nash may have a paper about this on the ArXiv. Yep, <a href="http://arxiv.org/pdf/1103.5571"> here </a> and <a href="http://arxiv.org/pdf/1101.2981"> here </a>. I am sorry but I don't have mathscinet at home to look up the reference for the first example. </p>
|
95,819 | <p>I have a set of parametric equations in spherical coordinates that supposedly form circle trajectories. See below:</p>
<pre><code>r=C1
theta=C2*Sin[beta]*Sin[phi[t]]
phi=(C2*Sin[beta]*(Cos[theta[t]]/Sin[theta[t]])*Cos[phi[t]])+(C2*Cos[beta])
</code></pre>
<p>C1 and C2 are constants and beta is some angle, say 15 degrees, or (15/180)*Pi radians. </p>
<p>These are circle trajectories on the surface of a sphere, hence the constant r-component. </p>
<p>My question is this: How do I solve these trajectories and plot them on the surface of a sphere. This is what I have done already:</p>
<p>Step 1:</p>
<p>Solve the coupled differential equation with NDSolve. See below:</p>
<pre><code>Answer = NDSolve[
{theta'[t] == C2*Sin[beta]*Sin[phi[t]],
phi'[t] == (C2*Sin[beta]*Cos[theta[t]]*
Cos[phi[t]]/Sin[theta[t]]) + (C2*Cos[beta]),
theta[tMin] == StartingTheta,
phi[tMin] == StartingPhi},
{theta, phi},
{t, tMin, tMax}];
</code></pre>
<p>where I have defined StartingTheta=(45/180)*Pi and StartingPhi=(180/180)*Pi. tMin is 0 and tMax is, say, 1000.</p>
<p>This will form ONE circle trajectory on the surface of the sphere. Changing StartingTheta will give another circle trajectory and so forth.</p>
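<p>For readers without Mathematica, here is a rough Python translation of the same initial-value problem using <code>scipy.integrate.solve_ivp</code> (editor's sketch, not part of the original question; <code>C2 = 1</code> and <code>beta = 15</code> degrees are assumed values, and <code>tMax</code> is taken as one period, <code>2*Pi</code>, rather than 1000):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameter values (C2 and beta are not fixed in the question).
C2 = 1.0
beta = np.radians(15.0)

def rhs(t, y):
    """Right-hand side of the coupled system for (theta, phi)."""
    theta, phi = y
    dtheta = C2 * np.sin(beta) * np.sin(phi)
    dphi = (C2 * np.sin(beta) * np.cos(theta) * np.cos(phi) / np.sin(theta)
            + C2 * np.cos(beta))
    return [dtheta, dphi]

y0 = [np.radians(45.0), np.pi]                 # StartingTheta, StartingPhi
sol = solve_ivp(rhs, (0.0, 2 * np.pi), y0, rtol=1e-9, atol=1e-9)
theta_end, phi_end = sol.y[:, -1]              # values after one period
```

<p>The system is a rigid rotation of the sphere about a tilted axis at rate <code>C2</code>, so after one period <code>theta</code> returns to its starting value, which is a handy sanity check on the solver.</p>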
<p>Step 2: Create the Sphere. I did that - see below: </p>
<pre><code>sphere = ParametricPlot3D[
{Cos[v]*Cos[u],
Sin[v]*Cos[u],
Sin[u]},
{v, 0, 2*Pi},
{u, -Pi/2, Pi/2}];
</code></pre>
<p>Step 3:</p>
<p>Evaluate. This is where I am struggling. See below what I have done so far:</p>
<pre><code>Trajectory = ParametricPlot3D[
Evaluate[
{????????????????}
/. Answer],
{t, tMin, tMax}
]
</code></pre>
<p>At the place where I inserted all the question marks is the problem, I am not sure what form I should be evaluating. I tried a few obvious expressions but I keep getting straight lines.</p>
<p>The next step I suppose is quite easy:</p>
<pre><code>Show[sphere,Trajectory]
</code></pre>
<p>If anybody out there is able to help me with what I should be evaluating in order to plot these circle trajectories on the surface of the sphere, it would be greatly appreciated. </p>
<p>Thanx!</p>
| gpap | 1,079 | <p>I didn't have time to look at this properly as your functions don't work straight out of the box because they require additional definitions. Also, I think there is a derivative missing in your original function definitions. </p>
<p>Anyway, I will use the parameters from Jack LaVigne's answer (but I have modified the range of <code>t</code> so that it spans only one period):</p>
<pre><code>C2 = 1.;
beta = 15. Degree;
StartingTheta = 45. Degree;
StartingPhi = 180. Degree;
tMin = 0;
tMax = 2 π;
Answer = NDSolve[
{theta'[t] == C2*Sin[beta]*Sin[phi[t]],
phi'[t] == (C2*Sin[beta]*Cos[theta[t]]*
Cos[phi[t]]/Sin[theta[t]]) + (C2*Cos[beta]),
theta[tMin] == StartingTheta,
phi[tMin] == StartingPhi},
{theta, phi},
{t, tMin, tMax}];
</code></pre>
<p>Because of the parametric dependence on <code>t</code>, in retrospect, mesh functions are not the optimal way to go (at least as far as I can tell), so it's best to just plot the path together with the sphere. First define the point as a function of <code>t</code>:</p>
<pre><code>ans[t_] = {Cos[v]*Cos[u], Sin[v]*Cos[u], Sin[u]} /.
u -> Answer[[1, 1, 2]][t] /. v -> Answer[[1, 2, 2]][t];
</code></pre>
<p>and then overlap it to the sphere for a full period in <code>t</code>:</p>
<pre><code>Show[
ParametricPlot3D[{Cos[v]*Cos[u], Sin[v]*Cos[u], Sin[u]}, {u, 0,
2*Pi}, {v, -Pi/2, Pi/2}, PlotStyle -> None
],
Graphics3D[
{Red,
Thickness[0.01], Line[Table[ans[t], {t, 0, 2 π, π/64}]]}
]
]
</code></pre>
<p><a href="https://i.stack.imgur.com/GHRzv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GHRzv.png" alt="enter image description here"></a></p>
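<p>The construction above can also be cross-checked numerically outside Mathematica. Below is a hedged Python/SciPy sketch (editor's own, not from the original answer; <code>C2 = 1</code> and <code>beta = 15</code> degrees as above) that re-solves the system, applies the same <code>{Cos[v]*Cos[u], Sin[v]*Cos[u], Sin[u]}</code> mapping as <code>ans[t_]</code>, and confirms the trajectory lies on the unit sphere and closes up after one period:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

C2, beta = 1.0, np.radians(15.0)     # assumed constants, matching the answer

def rhs(t, y):
    """Coupled ODEs for (theta, phi), as in NDSolve above."""
    theta, phi = y
    return [C2 * np.sin(beta) * np.sin(phi),
            C2 * np.sin(beta) * np.cos(theta) * np.cos(phi) / np.sin(theta)
            + C2 * np.cos(beta)]

t_eval = np.linspace(0.0, 2 * np.pi, 129)
sol = solve_ivp(rhs, (0.0, 2 * np.pi), [np.radians(45.0), np.pi],
                t_eval=t_eval, rtol=1e-10, atol=1e-10)

# Map (theta, phi) -> Cartesian points with the same convention as ans[t_].
u, v = sol.y
pts = np.stack([np.cos(v) * np.cos(u), np.sin(v) * np.cos(u), np.sin(u)], axis=1)

on_sphere = np.allclose(np.linalg.norm(pts, axis=1), 1.0)  # stays on unit sphere
closes = np.allclose(pts[0], pts[-1], atol=1e-4)           # closed after one period
```

<p>If either check fails, the first things to inspect are the sign conventions in the ODEs and whether <code>theta</code> ever approaches a pole, where the <code>Sin[theta]</code> denominator blows up.</p>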
|