| qid | question | author | author_id | answer |
|---|---|---|---|---|
57,281 | <p><strong>Bug introduced in 10.0 and fixed in 10.3</strong></p>
<hr>
<p>I'm having trouble calculating the median of a <code>Dataset[]</code> in <em>Mathematica</em> 10.</p>
<p>The situation is as follows. Consider a dataset defined by:</p>
<pre><code>dataset = Dataset[{<|"a"->1,"b"->2|>,<|"a"->3,"b"->4|>}];
</code></pre>
<p>The mean and variance of columns <code>a</code> and <code>b</code> can now be calculated by</p>
<pre><code>mean = dataset[Mean, {"a","b"}]
var = dataset[Variance, {"a","b"}]
</code></pre>
<p>That works perfectly, but</p>
<pre><code>med = dataset[Median, {"a","b"}]
</code></pre>
<p>returns a <code>Failure[]</code>! Somehow, <code>Median[]</code> is not compatible with a list of associations as its argument, while the other functions are.</p>
<p>Can someone explain why this happens and maybe help with a solution?</p>
| Community | -1 | <p>Working since version 10.3:</p>
<pre><code>dataset = Dataset[{<|"a" -> 1, "b" -> 2|>, <|"a" -> 3, "b" -> 4|>}];
med = dataset[Median, {"a", "b"}]
</code></pre>
<p><a href="https://i.stack.imgur.com/cBvSQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBvSQ.png" alt="enter image description here"></a></p>
|
136,067 | <p>Assume $f(x)>0$ defined in $[a,b]$, and for a certain $L>0$, $f(x)$ satisfies the Lipschitz condition $|f(x_1)-f(x_2)|\leq L|x_1-x_2|$.</p>
<p>Assume that for $a\leq c\leq d\leq b$,$$\int_c^d \frac{1}{f(x)}dx=\alpha,\int_a^b\frac{1}{f(x)}dx=\beta$$Try to prove$$\int_a^b f(x)dx \leq \frac{e^{2L\beta}-1}{2L\alpha}\int_c^d f(x)dx$$</p>
| tibL | 29,405 | <p>I got something which is rather close to your result but couldn't get rid of an additional term. I'm hoping someone will find the development useful in order to give a complete answer. The question reminded me somehow of the proof of Gronwall's inequality and my answer is based on that.</p>
<p>Let $h(t)=\int_a^t f(s)ds$; taking the derivative with respect to $t$ yields
$h'(t) = f(t)$. Rewriting $f(t)=f^2(t)/f(t)$, we can find an upper bound for $f^2(t)$; indeed, we have:
$$f^2(t) = \int_a^t (f^2(s))'ds + f^2(a) = 2\int_a^t f(s)f'(s)ds +f^2(a)$$</p>
<p>but then, the Lipschitz continuity implies that $|f'(s)|\le L$ which yields</p>
<p>$$f^2(t) \le 2L \int_a^t f(s)ds + f^2(a)\quad \Longrightarrow \quad h'(t) \le \left(2Lh(t) +f^2(a)\right)\left({1\over f(t)}\right) $$
we can then introduce the additional positive term $\left(\int_c^df(t)dt\right)/\alpha$ with:
$$ h'(t) \le \left(2L h(t)+f^2(a) + {\int_c^d f(s)ds\over \alpha}\right)\left({1\over f(t)}\right) $$
<strong>Remark</strong>: the additional term comes from the $f^2(a)$. We could get rid of it if $\left(\int_c^d f(s)ds\right)/\alpha > f^2(a)$ which I wasn't able to prove.</p>
<p>To simplify notations, let $a=f^2(a) + {\int_c^d f(s)ds\over \alpha},$ $b=2L$ and $g(t)=1/f(t)$. Then the previous inequality reads
$$h'(t) \le g(t)(a+bh(t))$$
which can be rewritten as follows (here is where it starts to look like Gronwall's inequality):</p>
<p>$$ {(a+bh(t))'\over a+bh(t)} \le bg(t) $$</p>
<p>the left-hand side is the logarithmic derivative of $a+bh(t)$, integrating both sides from $a$ to $t$ yields</p>
<p>$$ a+bh(t) \le a\exp\left(b\int_a^t g(s)\,ds \right) $$</p>
<p>plugging the values of $a$, $b$ and using $\int_a^b g(s)\,ds = \beta$, we finally get:</p>
<p>$$\int_a^b f(s)ds = h(b) \le \color{green}{\left[{\exp(2L\beta)-1\over 2L\alpha}\right]\int_c^d f(s)ds} + \color{red}{{f^2(a)\over 2L}\left(\exp(2L\beta)-1\right)} $$</p>
|
2,601,412 | <p>"A game is played by tossing an unfair coin ($P(head) = p$) until $A$ heads or $A$ tails (not necessarily consecutive) are observed. What is the expected number of tosses in one game?"</p>
<p>My approach is the following:</p>
<p>Let's represent a head by $H$ and a tail by $T$, and call $H_n$ the event "the game ends with $A$ heads when the coin is tossed for the n-th time" and $T_n$ the event "the game ends with $A$ tails when the coin is tossed for the n-th time".</p>
<p>First, I analyse $H_n$.</p>
<p>For $n<A$, $P(H_n) = 0$ because at least $A$ tosses are needed.
For $n>2A-1$, $P(H_n) = 0$ because by then we will surely have at least $A$ heads or tails.</p>
<p>$$P(H_A) = p^A$$</p>
<p>$$P(H_{A+1}) = \left(\binom{A+1}{A} -1 \right) p^A(1-p)$$
$$P(H_{A+2}) = \left(\binom{A+2}{A} -\binom{A+1}{A} \right) p^A(1-p)^2$$</p>
<p>We can generalize it to:
$$P(H_{A+i}) = \left(\binom{A+i}{A} -\binom{A+i-1}{A} \right) p^A(1-p)^{i}$$</p>
<p>The expression for $T_n$ is analogous:
$$P(T_{A+i}) = \left(\binom{A+i}{A} -\binom{A+i-1}{A} \right) p^i(1-p)^{A}$$</p>
<p>And the expectation for the number of tosses in one game is:
$$\sum_{i=0}^{A-1} P(H_{A+i}) (A+i) + \sum_{i=0}^{A-1} P(T_{A+i}) (A+i)$$</p>
<p><strong>Is it correct? Is there a more elegant way of doing it?</strong></p>
<p>EDIT:</p>
<p>For each of the cases $(A,p) \in \{3,5,10\} \times \{0.5,0.6,0.7\}$, I simulated $10^7$ games. The maximum relative difference between the simulated average and the expectation given by the formula above was $0.013\%$. I am assuming the formula is correct.</p>
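<p>(The consistency check above can be reproduced outside <em>Mathematica</em>. Below is a small Python sketch — the function names are mine, not part of the question — that evaluates the formula and compares it with a seeded simulation.)</p>

```python
import random
from math import comb

def expected_tosses(A, p):
    # Sum of (A+i) * [P(H_{A+i}) + P(T_{A+i})] for i = 0..A-1, using
    # C(A+i, A) - C(A+i-1, A); note math.comb(n, k) returns 0 when k > n,
    # so the i = 0 term needs no special case.
    return sum(
        (A + i) * (comb(A + i, A) - comb(A + i - 1, A))
        * (p ** A * (1 - p) ** i + p ** i * (1 - p) ** A)
        for i in range(A)
    )

def simulate(A, p, games, seed):
    # Average number of tosses over `games` plays with a seeded RNG.
    rng = random.Random(seed)
    total = 0
    for _ in range(games):
        heads = tails = tosses = 0
        while heads < A and tails < A:
            tosses += 1
            if rng.random() < p:
                heads += 1
            else:
                tails += 1
        total += tosses
    return total / games
```

<p>(For $A=1$ the game always ends on the first toss, so the formula should give exactly 1 for any $p$, which it does.)</p>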
| gar | 138,850 | <p>A closed form may not exist, but we can write it as summations, which can be easily evaluated by computer algebra systems.</p>
<p>1.
\begin{align*}
f(a,0) &= p^a \\
f(0,b) &= (1-p)^b \\
f(a,b) &= p\cdot f(a-1,b) + (1-p)\cdot f(a,b-1) \\
E(A,p) &= A\times \sum_{i=0}^{A-1} \left(f(A,i) + f(i,A)\right)
\end{align*}
2.
\begin{align*}
E(A,p) &= \sum_{i=0}^{A-1} \left(A+i\right) \binom{A+i-1}{i} \left((1-p)^A p^i + p^A (1-p)^i\right) \\
\end{align*}</p>
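<p>(As a sanity check, the two formulations agree numerically. The following Python sketch — function names are mine — evaluates both.)</p>

```python
from math import comb

def e_recursive(A, p):
    # Formulation 1: build f(a, b) by the recursion, then
    # E = A * sum_{i=0}^{A-1} (f(A, i) + f(i, A)).
    f = {}
    for a in range(A + 1):
        for b in range(A + 1):
            if b == 0:
                f[a, b] = p ** a
            elif a == 0:
                f[a, b] = (1 - p) ** b
            else:
                f[a, b] = p * f[a - 1, b] + (1 - p) * f[a, b - 1]
    return A * sum(f[A, i] + f[i, A] for i in range(A))

def e_summation(A, p):
    # Formulation 2: direct summation over the possible game lengths A + i.
    return sum(
        (A + i) * comb(A + i - 1, i)
        * ((1 - p) ** A * p ** i + p ** A * (1 - p) ** i)
        for i in range(A)
    )
```

<p>(For example, with $A=2$ and a fair coin both give $2.5$: the game ends after 2 tosses with probability $1/2$ and after 3 tosses otherwise.)</p>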
|
<p>I'm using NeumannValue boundary conditions for a 3d FEA using NDSolveValue. In one area I have positive flux and in another area I have negative flux. In theory these should balance out (I set the flux inversely proportional to their relative areas) to a net flux of 0, but because of mesh and numerical inaccuracies they don't. Is there a way to constrain total flux = 0 and just set a constant flux for one of my areas?</p>
<p>edit:
here are my boundary conditions:</p>
<pre><code>Subscript[Γ, 1] =
NeumannValue[-1, (Abs[x] - 1)^2 + (Abs[y] - 1)^2 < (650/1000)^2 &&
z < -0.199 ];
Subscript[Γ, 2] =
NeumannValue[4, x^2 + y^2 + (z + 1/5)^2 < (650/1000/2)^2 ];
</code></pre>
<p>and my equations:</p>
<pre><code>Dcof = 9000
ufun3d = NDSolveValue[
{D[u[t, x, y, z], t] - Dcof Laplacian[u[t, x, y, z], {x, y, z}] ==
Subscript[Γ, 1] + Subscript[Γ, 2],
u[0, x, y, z] == 0},
u, {t, 0, 10 }, {x, y, z} ∈ em];
</code></pre>
<p>and my element mesh:</p>
<pre><code>a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
r = RegionUnion[a,b,c,d,e,f];
boundingbox = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, -1/5, 1}}];
r2 = RegionIntersection[r,boundingbox]
em = ToElementMesh[r2];
</code></pre>
<p>And this is what my mesh looks like from the bottom up. </p>
<p><a href="https://i.stack.imgur.com/Q66fl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q66fl.png" alt="enter image description here"></a>
edit2:
I figured I should add a plot of what I think is "wrong" too.<br>
Plotting the diagonal cross section I'd expect the values to be centered around 0, but they're all negative.</p>
<pre><code>ContourPlot[ufun3d[5, xy, xy, z], {xy, -1 , 1 }, {z, -0.2, 1},
ClippingStyle -> Automatic, PlotLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/SfwIa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SfwIa.png" alt="enter image description here"></a></p>
| Tim Laska | 61,809 | <h1>Update (Steady-State Solution)</h1>
<p>I think the fundamental issue is that you are over-constraining your system. Whether you are solving the "heat equation" or not, your operator has the same form as the heat equation, as shown below:</p>
<p><span class="math-container">$$\rho {{\hat C}_p}\frac{{\partial T}}{{\partial t}} + \nabla \cdot {\mathbf{q}} = 0$$</span></p>
<p>If the flux, <span class="math-container">$\mathbf{q}$</span>, needs to be perfectly conserved to conserve quanta, then it is equivalent to saying that the divergence of the flux is 0 or:</p>
<p><span class="math-container">$$\nabla \cdot {\mathbf{q}} = 0$$</span></p>
<p>Therefore, the problem is a steady-state problem because there can be no accumulation in the domain:</p>
<p><span class="math-container">$$\rho {{\hat C}_p}\frac{{\partial T}}{{\partial t}} + \nabla \cdot {\mathbf{q}} = \rho {{\hat C}_p}\frac{{\partial T}}{{\partial t}} + 0 = \rho {{\hat C}_p}\frac{{\partial T}}{{\partial t}} = 0$$</span></p>
<p>So, if you are seeing a response at all, then it is a result of the numerical inaccuracies and not something physical.</p>
<p>If we substitute Fourier's Law for flux to put in terms of a temperature potential, we obtain:</p>
<p><span class="math-container">$$\nabla \cdot {\mathbf{q}} = \nabla \cdot \left( { - {\mathbf{k}}\nabla T} \right) = \nabla \cdot \left( { - {\mathbf{k}}\nabla \left( {T + constant} \right)} \right)$$</span></p>
<p>The problem with this is that there is no unique solution because you can add an infinite number of constants to the temperature and still satisfy the equation. The way to obtain a unique solution is to add a Dirichlet or Robin condition on one of the boundaries and let the solver solve for the flux that balances the solution.</p>
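<p>(The non-uniqueness is easy to see even in one dimension. Here is a minimal Python sketch — not part of the original workflow — of a 1-D finite-difference Laplacian with zero-flux ends: every row sums to zero, so the constant vector lies in the null space and the discrete system stays singular until a Dirichlet value pins the level.)</p>

```python
def neumann_laplacian(n):
    # Standard 1-D second-difference matrix; the boundary rows are modified
    # so that no flux crosses the ends (pure Neumann conditions).
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    A[0][0] = 1.0
    A[n - 1][n - 1] = 1.0
    return A

# A . (1, ..., 1)^T: each entry is a row sum, and every row sum is zero,
# i.e. the constant vector is annihilated and the matrix is singular.
residual = [sum(row) for row in neumann_laplacian(6)]
```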
<p>The following is a workflow that solves for the steady-state flux:</p>
<pre><code>Needs["NDSolve`FEM`"]
Needs["OpenCascadeLink`"]
a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
shape0 = OpenCascadeShape[Cuboid[{-1, -1, 0}, {1, 1, 1}]];
shape1 = OpenCascadeShape[b];
shape2 = OpenCascadeShape[c];
shape3 = OpenCascadeShape[d];
shape4 = OpenCascadeShape[e];
shape5 = OpenCascadeShape[f];
shapeint = OpenCascadeShape[Cuboid[{-1, -1, -1}, {1, 1, 1}]];
union = OpenCascadeShapeUnion[shape0, shape1];
union = OpenCascadeShapeUnion[union, shape2];
union = OpenCascadeShapeUnion[union, shape3];
union = OpenCascadeShapeUnion[union, shape4];
union = OpenCascadeShapeUnion[union, shape5];
int = OpenCascadeShapeIntersection[union, shapeint];
bmesh = OpenCascadeShapeSurfaceMeshToBoundaryMesh[int];
groups = bmesh["BoundaryElementMarkerUnion"];
temp = Most[Range[0, 1, 1/(Length[groups])]];
colors = ColorData["BrightBands"][#] & /@ temp;
bmesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]]
mesh = ToElementMesh[bmesh];
mesh["Wireframe"]
nv = NeumannValue[4, (x)^2 + (y)^2 < 1.01 (650/1000/2)^2 && z == -1/5];
dc = DirichletCondition[
u[x, y, z] == 0, (x)^2 + (y)^2 > 1.01 (650/1000/2)^2 && z == -1/5];
op = Inactive[
Div][{{-9000, 0, 0}, {0, -9000, 0}, {0, 0, -9000}}.Inactive[Grad][
u[x, y, z], {x, y, z}], {x, y, z}];
ufun3d = NDSolveValue[{op == nv, dc}, u, {x, y, z} \[Element] mesh];
ContourPlot[ufun3d[xy, xy, z], {xy, -Sqrt[2], Sqrt[2]}, {z, -0.2, 1},
ClippingStyle -> Automatic, AspectRatio -> Automatic,
PlotLegends -> Automatic, PlotPoints -> {75, 50}]
</code></pre>
<p>The <em>Mathematica</em> (Top) result compares favorably to other FEM solver's such as Altair's AcuSolve (Bottom):</p>
<pre><code>img = Uncompress[
"1:eJzt2+tP02cUB/\
CjYjQMnYuTYHQzLJItGI2OuWA0EpjG6eI07Vi8IFrgZ630Ai3VNjqeGQgCYyAKdlSBAuVS\
ZSgV5A5ekMWBEFEjYkBxBiUoTofxFvjamu2N/8GS8+KcnHOekzxvPm+\
Pb4ROtnMyERncaa1GoZR2TnS3Xq70vVEj6VWRwXq9whwxyTXwccUlV7hrPHyI3l50dKC5G\
ZWVKCpCdjYOHoTJhN27ERaGDRsQHIyAAPj5wccHnp4vp9Dwx9T3GXUtpvMrqeo7KtlMvyk\
peS/tSyTNYdpuI9nvtKqBvr5MX9ykOffJ8znRGw8a+YjuzqPuhdS6nGq+JcePdCyKfomj+\
AMUk0ERuRR6gtbU0rI2WnCdPh2gac8mTBifPv3p3Ll/+fvfCAz8Y/Xqerm8XKHIi41NF+\
LntDSD1SqVlm6qrl538eKKq1cX9ff7PnkyY2xsIkY/\
wOBs9HyOP5eiKQSnNiJPgUwtEvZjTwp2WbDVjvVOBJ3Dkk749mPmI0x+/\
WIqhrxxez6ufIlzQXCuR0E4sqKRZIY5CdFZCC/AxlMIacJX7Zh/G95DmPoCk8bg9RKz/\
sEnI/AbwqL7WNaH4B6suwZZJ7ZeRmQr1C0w1iO+\
CskVOORAjh0223hB3mjB8eFC673CnFtFRzuLslvtRxrtmc7iDEdJen5JmqU09dfS5MSyJH\
NZYowjQek4sO2ECK0Qm8+I7bVCahTRF4S+\
TZjaxU9dIuG6SOkRGX0ia0BYB4VtWJT8LcqfC+crUTsuml7HN4/ua35sbnqwt/\
GOsfGWoaE7tr5DV3dJU9cSXVunqnEqa8qls/\
aI6twdVZbwqkNhZ1K3OFPDKjMVFRblyXxNWbGhuNxU6Iy31SXktqRY29ItHVnZ3TmHe20Z\
A8VpD06mjJxOYk7MiTkxJ+\
bEnJgTc2JOzIk5MSfmxJyYE3NiTsyJOTEn5sScmBNzYk7MiTkxJ+\
bEnJgTc2JOzIk5MSfmxJyYE3NiTsyJOTEn5sScmBNzYk7MiTkxp/8dJ/\
kMIgrVGlRKrRS1VhsnKSV9oNzDNQwxx/17rOfuZEa1ZPB0Fd/\
o1Dq9PEYRKcndd3qyNSHvLX3436WfTDLo1MY4lU6rMrlm7625LwDd/+nVkmKPSqt89/\
KD3ii9BWHVFNA="];
dims = ImageDimensions[img];
colors2 =
RGBColor[#] & /@
ImageData[img][[IntegerPart@(dims[[2]]/2), 1 ;; -1]];
DensityPlot[
ufun3d[X/Sqrt[2], X/Sqrt[2],
z], {X, -(Sqrt[2]), (Sqrt[2])}, {z, -0.2, 1},
ColorFunction -> (Blend[colors2, #] &), PlotLegends -> Automatic,
PlotPoints -> {150, 100}, PlotRange -> All, AspectRatio -> Automatic,
Background -> Black, ImageSize -> Large]
</code></pre>
<p><a href="https://i.stack.imgur.com/psPRa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psPRa.png" alt="Solver Comparison" /></a></p>
<h2>3D Visualization Concepts</h2>
<p>In the comments, @ABCDEMMM requested some 3D visualization of the solution. The example provided <a href="https://www.dealii.org/images/steps/developer/step-37.solution.png" rel="nofollow noreferrer">here</a>, was actually quite complex as it appeared to have elements of clip-planes, iso-surfaces, and volume rendering. It is non-trivial to get all these elements tuned to produce a pleasing and informative visualization. In the process, I also could not get volume rendering (<a href="https://reference.wolfram.com/language/ref/DensityPlot3D.html" rel="nofollow noreferrer"><code>DensityPlot3D</code></a>) and iso-surfaces (<a href="https://reference.wolfram.com/language/ref/ContourPlot3D.html" rel="nofollow noreferrer"><code>ContourPlot3D</code></a>) to play nicely together. Here is an example workflow that combines clip-planes with volume rendering:</p>
<pre><code>minmax = Chop@MinMax[ufun3d["ValuesOnGrid"]];
dpreg = DensityPlot3D[
ufun3d[x, y, z], {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, ColorFunction -> (Blend[colors2, #] &),
PlotLegends -> Automatic, OpacityFunction -> 0.05,
RegionFunction -> Function[{x, y, z, f}, -x + y > 0],
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large]
dp = DensityPlot3D[
ufun3d[x, y, z], {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, ColorFunction -> (Blend[colors2, #] &),
PlotLegends -> Automatic, OpacityFunction -> 0.075,
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large]
scp = SliceContourPlot3D[
ufun3d[x, y, z], {x == -0.9, y == 0.9, z == -0.15,
x - y == 0}, {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, Contours -> 30,
ColorFunction -> (Blend[colors2, #] &), PlotLegends -> Automatic,
RegionFunction -> Function[{x, y, z, f}, x - y <= 0.01],
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large]
Show[dp, scp]
</code></pre>
<p><a href="https://i.stack.imgur.com/hmJrX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hmJrX.jpg" alt="Clip Volume Rendering" /></a></p>
<p>Here is concept for 3D visualization using clip-planes and iso-surfaces:</p>
<pre><code>cp100 = ContourPlot3D[
ufun3d[x, y, z], {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax,
Contours -> (ufun3d[#/Sqrt[2], #/Sqrt[2], 0] & /@ {0.05, 0.32, 0.45,
0.65, 0.72, 0.78, 0.98}), MaxRecursion -> 0,
ColorFunctionScaling -> False,
ColorFunction -> (Directive[Opacity[1],
Blend[colors2, Rescale[#4, minmax]]] &), Mesh -> None,
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large]
cp50 = ContourPlot3D[
ufun3d[x, y, z], {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax,
Contours -> (ufun3d[#/Sqrt[2], #/Sqrt[2], 0] & /@ {0.05, 0.32,
0.45, 0.65, 0.72, 0.78, 0.98}), MaxRecursion -> 0,
ColorFunctionScaling -> False,
ColorFunction -> (Directive[Opacity[0.5],
Blend[colors2, Rescale[#4, minmax]]] &), Mesh -> None,
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large];
cp25 = ContourPlot3D[
ufun3d[x, y, z], {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax,
Contours -> (ufun3d[#/Sqrt[2], #/Sqrt[2], 0] & /@ {0.05, 0.32,
0.45, 0.65, 0.72, 0.78, 0.98}), MaxRecursion -> 0,
ColorFunctionScaling -> False,
ColorFunction -> (Directive[Opacity[0.25],
Blend[colors2, Rescale[#4, minmax]]] &), Mesh -> None,
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large];
scp25 = SliceContourPlot3D[
ufun3d[x, y, z], {x == -0.9, y == 0.9, z == -0.15, z == 0.90,
x - y == 0}, {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, Contours -> 30,
RegionFunction -> Function[{x, y, z, f}, x - y <= 0.1],
ColorFunction -> (Directive[Opacity[0.25], Blend[colors2, #]] &),
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large];
scp50 = SliceContourPlot3D[
ufun3d[x, y, z], {x == -0.9, y == 0.9, z == -0.15, z == 0.90,
x - y == 0}, {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, Contours -> 30,
RegionFunction -> Function[{x, y, z, f}, x - y <= 0.1],
ColorFunction -> (Directive[Opacity[0.5], Blend[colors2, #]] &),
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large];
scp100 = SliceContourPlot3D[
ufun3d[x, y, z], {x == -0.9, y == 0.9, z == -0.15, z == 0.90,
x - y == 0}, {x, -1, 1}, {y, -1, 1}, {z, -0.2, 1},
PlotRange -> minmax, Contours -> 30,
RegionFunction -> Function[{x, y, z, f}, x - y <= 0.1],
ColorFunction -> (Directive[Opacity[1], Blend[colors2, #]] &),
PlotLegends -> Automatic, PlotPoints -> {100, 100, 50},
AspectRatio -> Automatic, Background -> Black, ImageSize -> Large]
Show[scp50, cp25]
</code></pre>
<p><a href="https://i.stack.imgur.com/esG0e.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/esG0e.jpg" alt="Iso-surface and clip plane visualization" /></a></p>
<p>It shows the 3D aspects of the solution and it is something to get you started. It will take time and practice to optimize the appearance of the plots.</p>
<h1>Update (Transient)</h1>
<p>As alluded to in the comments, the <span class="math-container">$t_{max} = 10$</span> in the OP is about 18,000 times larger than it should be for a transient problem. One issue with running that long with a flux boundary condition is that the discretized areas of the boundary surfaces have an error associated with them that will accumulate with time. Therefore, one does not want to run more than necessary after the solution has reached a steady-state.</p>
<p>If we set the <span class="math-container">$t_{max}=0.0001$</span> and run the simulation with flux only boundary conditions, we can get a reasonable answer:</p>
<pre><code>tmax = 0.0001;
nvin = NeumannValue[
4, (x)^2 + (y)^2 < 1.01 (650/1000/2)^2 && z == -1/5];
nvout = NeumannValue[-1, (x)^2 + (y)^2 > 1.01 (650/1000/2)^2 &&
z == -1/5];
ic = u[0, x, y, z] == 0;
op = Inactive[
Div][{{-9000, 0, 0}, {0, -9000, 0}, {0, 0, -9000}}.Inactive[Grad][
u[t, x, y, z], {x, y, z}], {x, y, z}] + D[u[t, x, y, z], t]
ufun3d = NDSolveValue[{op == nvin + nvout, ic},
u, {t, 0, tmax}, {x, y, z} ∈ mesh];
imgs = Rasterize[
DensityPlot[
ufun3d[#, X/Sqrt[2], X/Sqrt[2],
z], {X, -(Sqrt[2]), (Sqrt[2])}, {z, -0.2, 1},
ColorFunction -> (Blend[colors2, #] &),
PlotLegends -> Automatic, PlotPoints -> {150, 100},
PlotRange -> All, AspectRatio -> Automatic, Background -> Black,
ImageSize -> Medium]] & /@ Subdivide[0, tmax, 30];
ListAnimate[imgs, ControlPlacement -> Top]
</code></pre>
<p><a href="https://i.stack.imgur.com/NF1Ry.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NF1Ry.gif" alt="Transient Solution With Smaller tmax" /></a></p>
<p>As you can see, the density plot of the end point of the transient solution is essentially the same up to a constant as the previously calculated steady-state solution.</p>
<h1>Original Answer</h1>
<p>The code posted in the OP does not produce quarter arcs as suggested in the comments. On my machine, I obtain:</p>
<pre><code>a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
r = RegionUnion[a, b, c, d, e, f];
em = ToElementMesh[r];
em["Wireframe"]
</code></pre>
<p><a href="https://i.stack.imgur.com/gqGj9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gqGj9.png" alt="enter image description here" /></a></p>
<p>So, I am answering based on the full cylinders versus quarter arcs.</p>
<p>You will need a DirichletCondition or a Robin Condition somewhere to fully define temperature. Here is a case where I applied a convective heat transfer condition to all but the bottom surfaces. There is a 16x change in area between the center port and the other ports, so I made the flux 16x more in the center. I also used the <a href="https://reference.wolfram.com/language/OpenCascadeLink/tutorial/UsingOpenCascadeLink.html" rel="nofollow noreferrer">OpenCascadeLink</a> to build the geometry since it seems to do a good job at snapping to features.</p>
<pre><code>Needs["NDSolve`FEM`"]
Needs["OpenCascadeLink`"]
a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}];
b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2];
c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000];
d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000];
e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000];
f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000];
shape0 = OpenCascadeShape[Cuboid[{-1, -1, 0}, {1, 1, 1}]];
shape1 = OpenCascadeShape[b];
shape2 = OpenCascadeShape[c];
shape3 = OpenCascadeShape[d];
shape4 = OpenCascadeShape[e];
shape5 = OpenCascadeShape[f];
union = OpenCascadeShapeUnion[shape0, shape1];
union = OpenCascadeShapeUnion[union, shape2];
union = OpenCascadeShapeUnion[union, shape3];
union = OpenCascadeShapeUnion[union, shape4];
union = OpenCascadeShapeUnion[union, shape5];
bmesh = OpenCascadeShapeSurfaceMeshToBoundaryMesh[union];
groups = bmesh["BoundaryElementMarkerUnion"];
temp = Most[Range[0, 1, 1/(Length[groups])]];
colors = ColorData["BrightBands"][#] & /@ temp;
bmesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]]
mesh = ToElementMesh[bmesh];
mesh["Wireframe"]
nv1 = NeumannValue[-1/4, (x - 1)^2 + (y - 1)^2 < (650/1000)^2 &&
z < -0.199];
nv2 = NeumannValue[-1/4, (x + 1)^2 + (y - 1)^2 < (650/1000)^2 &&
z < -0.199];
nv3 = NeumannValue[-1/4, (x + 1)^2 + (y + 1)^2 < (650/1000)^2 &&
z < -0.199];
nv4 = NeumannValue[-1/4, (x - 1)^2 + (y + 1)^2 < (650/1000)^2 &&
z < -0.199];
nvc = NeumannValue[16,
x^2 + y^2 + (z + 1/5)^2 < (650/1000/2)^2 && z < -0.199];
nvconvective = NeumannValue[(0 - u[t, x, y, z]), z > -0.29];
ufun3d = NDSolveValue[{D[u[t, x, y, z], t] -
5 Laplacian[u[t, x, y, z], {x, y, z}] ==
nv1 + nv2 + nv3 + nv4 + nvc + nvconvective, u[0, x, y, z] == 0},
u, {t, 0, 10}, {x, y, z} \[Element] mesh];
ContourPlot[
ufun3d[5, xy, xy, z], {xy, -Sqrt[2], Sqrt[2]}, {z, -0.2, 1},
ClippingStyle -> Automatic, PlotLegends -> Automatic,
PlotPoints -> 200]
</code></pre>
<p><a href="https://i.stack.imgur.com/918J9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/918J9.png" alt="Robin Condition" /></a></p>
<p>You could take advantage of symmetry and create 1/4 sized model. Here is a case where I applied a DirichletCondition to the top surface.</p>
<pre><code>shaped = OpenCascadeShape[Cuboid[{0, 0, -1}, {2, 2, 2}]];
intersection = OpenCascadeShapeIntersection[union, shaped];
bmesh = OpenCascadeShapeSurfaceMeshToBoundaryMesh[intersection];
groups = bmesh["BoundaryElementMarkerUnion"];
temp = Most[Range[0, 1, 1/(Length[groups])]];
colors = ColorData["BrightBands"][#] & /@ temp;
bmesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]]
mesh = ToElementMesh[bmesh];
mesh["Wireframe"]
nv1 = NeumannValue[-1/
4, (Abs[x] - 1)^2 + (Abs[y] - 1)^2 < (650/1000)^2 && z < -0.199];
nvc = NeumannValue[16/4,
x^2 + y^2 + (z + 1/5)^2 < (650/1000/2)^2 && z < -0.199];
dc = DirichletCondition[u[t, x, y, z] == 0, z == 1];
ufun3d = NDSolveValue[{D[u[t, x, y, z], t] -
5 Laplacian[u[t, x, y, z], {x, y, z}] == nv1 + nvc , dc,
u[0, x, y, z] == 0}, u, {t, 0, 10}, {x, y, z} ∈ mesh];
ContourPlot[ufun3d[5, xy, xy, z], {xy, 0, Sqrt[2]}, {z, -0.2, 1},
ClippingStyle -> Automatic, PlotLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/9Qi0k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Qi0k.png" alt="QuarterSym Model" /></a></p>
|
<p>I am running an iterative routine whose results I want to export to a file as each iteration is computed, instead of storing everything in memory and then exporting to a file.</p>
<p>My solution is to write to an "m" file that saves the values in the usual array format that <em>Mathematica</em> understands (e.g. {{2,1},{3,1}} for a 2x2 matrix with the obvious contents). To do that, though, I also need to write the "commas" and the brackets "{","}" manually.</p>
<p>In any case, here is a sample code that achieves that but in a, quite likely, not very clever, efficient and readable way:</p>
<pre><code>sm = 3;
rm = 3;
prior = 0;
SetDirectory[NotebookDirectory[]];
DeleteFile["test.m"]
stream = OpenAppend["test.m"];
Do[next = prior + s + r;
If[r == 1 && s == 1, WriteString[stream, "{"]];
If[r == 1, WriteString[stream, "{"]];
WriteString[stream, ToString[next]];
prior = next;
If[s < sm, If[r == rm, WriteString[stream, "},"]; prior = 0, WriteString[stream, ","]], If[r == rm, WriteString[stream, "}"], WriteString[stream, ","]]];
If[r == rm && s == sm, WriteString[stream, "}"]] ;, {s, 1, sm}, {r, 1, sm}]
Close[stream]
</code></pre>
<p>This generates an "m" file that when I open I can immediately process by defining a matrix with the written data to make further analysis later. It looks like this for the above code:</p>
<p><a href="https://i.stack.imgur.com/Z4l3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z4l3w.png" alt="enter image description here"></a></p>
<p>The problem is that my actual code includes three iterating indices (and the actual expression for calculation is much more complex) so the situation becomes very complicated with this simple solution (mainly, too many IF commands that need to be introduced).</p>
<p>So, the question is, is there a way to make this code shorter, more elegant, clever, efficient and readable so that it is easily generalised and debugged?</p>
<p>Note that this question is also related to <a href="https://mathematica.stackexchange.com/questions/175967/exporting-result-of-calculation-to-file-at-each-step-of-iterative-routine">this question</a> I asked a few days ago.</p>
<p>Thanks.</p>
| rhermans | 10,397 | <p>Something like this?</p>
<p>The strategy is to pre-generate the delimiters and the coordinates, so that a single loop can write each value followed by the next delimiter, whatever that delimiter happens to be in each iteration and for any number of dimensions.</p>
<pre><code>Module[
{
sm = 3,
rm = 3,
stream = OpenWrite["test.m"],
delim, coord, wf,
prev = 0,
expensivefunc
},
wf = WriteString[stream, #] &; (* write function *)
expensivefunc = Function[{x, y}, prev + x + y]; (* heavy task *)
delim = StringSplit[ExportString[Table["%", {sm}, {rm}], "String"],"%"]; (* pre-calculated delimiters *)
coord = Flatten[Table[{s, r}, {s, sm}, {r, rm}], 1]; (* pre-calculated parameters *)
wf[delim[[1]]]; (* Writes first delimiter *)
Table[
prev = Apply[expensivefunc, coord[[k]]]; (* calculates *)
wf[ToString[prev]]; (* Writes value *)
wf[delim[[k + 1]]]; (* Writes delimiter *)
, {k, Length[coord]}
];
Close[stream]
]
</code></pre>
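<p>(For readers outside <em>Mathematica</em>: the same "pre-generate the delimiters, then stream value/delimiter pairs" idea translates directly. The following is a hypothetical Python rendering — the function names are mine, not from the answer.)</p>

```python
import json

def stream_matrix(write, rows, cols, value_fn):
    # Serialize a placeholder array once, then split on the placeholder:
    # delims[k] is exactly the text that belongs before the k-th value.
    template = json.dumps([["%"] * cols for _ in range(rows)])
    delims = template.split('"%"')
    coords = [(r, c) for r in range(rows) for c in range(cols)]
    write(delims[0])
    for k, (r, c) in enumerate(coords):
        write(str(value_fn(r, c)))  # the expensive per-entry computation
        write(delims[k + 1])

# Stream into a list of chunks (instead of a file) to inspect the output.
chunks = []
stream_matrix(chunks.append, 3, 3, lambda r, c: r + c)
streamed = "".join(chunks)  # "[[0, 1, 2], [1, 2, 3], [2, 3, 4]]"
```

<p>(Passing a file object's <code>write</code> method instead of <code>chunks.append</code> streams straight to disk, mirroring the <code>WriteString</code> calls above.)</p>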
|
1,005,576 | <p>How can I write this term in a compact form where $a$ only appears once on the RHS (in particular without cases)?</p>
<p>$T(a) =
\begin{cases}
a^2 &,\text{ if $a \leq 0$}\\
2a^2 &,\text{ if $a > 0$}\\
\end{cases}$</p>
<p>I have already thought about $T(a) = \max\{\sqrt{2}a,|a|\}^2$ or $T(a) = \frac{3+\text{sgn}(a)}{2}a^2$, but in both cases $a$ appears twice.</p>
| matheburg | 155,537 | <p>After discussing the problem with a brilliant friend, we came up with the following solution:</p>
<p>$T(a) = \left[\Re\left((\sqrt[4]{2}-i)\sqrt{a}\right)\right]^4$</p>
<p>However, I am still up for further suggestions!</p>
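<p>(A quick numerical check of the identity, using the principal complex square root — which is the branch the formula needs. Python sketch; the function name is mine.)</p>

```python
import cmath

def T(a):
    # [Re((2^(1/4) - i) * sqrt(a))]^4: for a >= 0 the principal sqrt is real,
    # giving (2^(1/4) * sqrt(a))^4 = 2 a^2; for a < 0 it is i*sqrt(|a|), so the
    # real part is sqrt(|a|), giving sqrt(|a|)^4 = a^2.
    return ((2 ** 0.25 - 1j) * cmath.sqrt(a)).real ** 4
```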
|
2,032,711 | <p>In a triangle $ABC$, if $\sin A+\sin B+\sin C\leq1$,then prove that $$\min(A+B,B+C,C+A)<\pi/6$$
where $A,B,C$ are angles of the triangle in radians.</p>
<p>If we assume $A>B>C$, then $\sum \sin A\leq 3 \sin A$, and $A\geq \frac{A+B+C}{3}=\pi/3$. Also $\sum \sin A\geq 3\sin C$ and $C\leq \frac{A+B+C}{3}=\pi/3$. But I could not proceed with this. Please help me in this regard. Thanks.</p>
| dezdichado | 152,744 | <p>Since you assumed $A\geq B\geq C$, it must be that $\dfrac{A}{2}+C\leq\dfrac{\pi}{2}.$ Hence, $\sin\tfrac{A}{2}<\sin(\tfrac{A}{2}+C) = \cos(\tfrac{B-C}{2})$.
Finally, $$1\geq \sin A+\sin B+\sin C = \sin A+2\sin\tfrac{B+C}{2}\cos\tfrac{B-C}{2} = 2\cos\tfrac{A}{2}\big(\sin\tfrac{A}{2}+\cos\tfrac{B-C}{2}\big)>4\cos\tfrac{A}{2}\sin\tfrac{A}{2} = 2\sin A\Rightarrow$$</p>
<p>$\sin A< \tfrac{1}{2}$, which means that $A\gt\frac{5\pi}{6}$: since $A$ is the largest angle by assumption, $A\geq\pi/3$, so $\sin A<\tfrac12$ forces $A$ past $\tfrac{5\pi}{6}$. Then, $B+C<\tfrac{\pi}{6}$ and we are done. </p>
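<p>(A numerical sanity check of the statement: sample uniform random triangles and verify that whenever $\sin A+\sin B+\sin C\leq1$, the quantity $\min(A+B,B+C,C+A)=\pi-\max(A,B,C)$ is below $\pi/6$. Python sketch; the function name is mine.)</p>

```python
import math
import random

def check_claim(trials, seed):
    # Break [0, pi] at two uniform points to get a random triangle (A, B, C).
    rng = random.Random(seed)
    hits = violations = 0
    for _ in range(trials):
        x, y = sorted(rng.uniform(0, math.pi) for _ in range(2))
        A, B, C = x, y - x, math.pi - y
        if math.sin(A) + math.sin(B) + math.sin(C) <= 1:
            hits += 1
            # min(A+B, B+C, C+A) = pi - max(A, B, C)
            if math.pi - max(A, B, C) >= math.pi / 6:
                violations += 1
    return hits, violations

hits, violations = check_claim(200_000, 1)
```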
|
22,101 | <p>The general rule used in LaTeX doesn't work: for example, typing <code>M\"{o}bius</code> and <code>Cram\'{e}r</code> doesn't give the desired outputs.</p>
| Avatar | 186,146 | <p>If you use the standard implementation of MathJax, <code>M\"{o}bius</code> will not render.</p>
<p>Workaround: </p>
<pre><code>\ddot{o}
</code></pre>
<p>will give: </p>
<p>$$ \ddot{o} $$</p>
<p>Probably this is helpful for some.</p>
<p>Another workaround is to specify another font for the text in MathJax:</p>
<pre><code>"HTML-CSS": {
styles: {
".MathJax .mtext": {
"font-family": "sans-serif !important"
}
}
}
</code></pre>
<p>Or if you have access to the local HTML/CSS files: </p>
<pre><code>.MathJax .mtext {
font-family: sans-serif !important;
}
</code></pre>
|
3,371,302 | <p>I am trying to find all algebraic expressions for <span class="math-container">${i}^{1/4}$</span>.</p>
<p>Using de Moivre's formula, I managed to get this:</p>
<blockquote>
<p><span class="math-container">${i}^{1/4}=\cos(\frac{\pi}{8})+i \sin(\frac{\pi}{8})=\sqrt{\frac{1+\frac{1}{\sqrt{2}}}{2}} + i \sqrt{\frac{1-\frac{1}{\sqrt{2}}}{2}}$</span></p>
</blockquote>
<p>What about other expressions?</p>
| Allawonder | 145,126 | <p>The fourth roots are spaced on a circle, equally partitioning it. Thus, if you know one, you can find the rest by rotating by <span class="math-container">$π/2,$</span> or (which is the same thing) by multiplying by <span class="math-container">$i.$</span></p>
<p>Thus, since one of them as you found is <span class="math-container">$\cos(π/8)+i\sin(π/8),$</span> the others are <span class="math-container">$-\sin(π/8)+i\cos(π/8),$</span> etc.</p>
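<p>(Numerically: starting from the principal root and multiplying by $i$ three times produces all four roots, each of which raises to $i$. A quick Python check:)</p>

```python
import cmath

z0 = cmath.exp(1j * cmath.pi / 8)         # cos(pi/8) + i sin(pi/8)
roots = [z0 * 1j ** k for k in range(4)]  # rotate by pi/2 each time
```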
|
89,197 | <p>I am working on a problem where I have to generate a table of components while each component of the table has 18 entries. Six of the indices among 18 run from 0 to 1 while the other 12 can take values between 0 to 3. After doing that I have to select some of the entries which follow a certain criterion (sum of all values in each component should be three). I have done this for smaller sized entry tables but for this one <em>Mathematica</em> gives up very fast saying <code>General::nomem: The current computation was aborted because there was insufficient memory available to complete the computation</code>. I don't have a larger memory computer available. Can somebody help me with this please? The commands I am using are:</p>
<pre><code>list =
Table[{i, j, k, l, m, n, o, p, q, r, s, u, v, x, y, z, a, b},
{i, 0, 1}, {j, 0, 3}, {k, 0, 3}, {l, 0, 1}, {m, 0, 3}, {n, 0, 3}, {o, 0, 1},
{p, 0, 3}, {q, 0, 3}, {r, 0, 1}, {s, 0, 3}, {u, 0, 3}, {v, 0, 1}, {x, 0, 3},
{y, 0, 3}, {z, 0, 1}, {a, 0, 3}, {b, 0, 3}] // Flatten
list1 = Partition[%, 18];
f1 = Total[#] < 4 &;
f2 = Total[#] > 2 &;
list2 = Select[list1, f1];
list3 = Select[list1, f2];
list4 = Intersection[list2, list3];
</code></pre>
| m_goldberg | 3,066 | <p>I don't know if this will save you sufficient memory, but it will certainly cut down your memory use.</p>
<pre><code>$HistoryLength = 0;
list1 =
Flatten[
Table[{i, j, k, l, m, n, o, p, q, r, s, u, v, x, y, z, a, b},
{i, 0, 1}, {j, 0, 3}, {k, 0, 3}, {l, 0, 1}, {m, 0, 3}, {n, 0, 3}, {o, 0, 1},
{p, 0, 3}, {q, 0, 3}, {r, 0, 1}, {s, 0, 3}, {u, 0, 3}, {v, 0, 1}, {x, 0, 3},
{y, 0, 3}, {z, 0, 1}, {a, 0, 3}, {b, 0,3}], 17];
list2 = Select[list1, 2 < Total[#] < 4 &]
</code></pre>
|
89,197 | <p>I am working on a problem where I have to generate a table of components while each component of the table has 18 entries. Six of the indices among 18 run from 0 to 1 while the other 12 can take values between 0 to 3. After doing that I have to select some of the entries which follow a certain criterion (sum of all values in each component should be three). I have done this for smaller sized entry tables but for this one <em>Mathematica</em> gives up very fast saying <code>General::nomem: The current computation was aborted because there was insufficient memory available to complete the computation</code>. I don't have a larger memory computer available. Can somebody help me with this please? The commands I am using are:</p>
<pre><code>list =
Table[{i, j, k, l, m, n, o, p, q, r, s, u, v, x, y, z, a, b},
{i, 0, 1}, {j, 0, 3}, {k, 0, 3}, {l, 0, 1}, {m, 0, 3}, {n, 0, 3}, {o, 0, 1},
{p, 0, 3}, {q, 0, 3}, {r, 0, 1}, {s, 0, 3}, {u, 0, 3}, {v, 0, 1}, {x, 0, 3},
{y, 0, 3}, {z, 0, 1}, {a, 0, 3}, {b, 0, 3}] // Flatten
list1 = Partition[%, 18];
f1 = Total[#] < 4 &;
f2 = Total[#] > 2 &;
list2 = Select[list1, f1];
list3 = Select[list1, f2];
list4 = Intersection[list2, list3];
</code></pre>
| ciao | 11,467 | <p>I think the comment solution will serve you well:</p>
<pre><code>p1 = Join @@ Permutations /@ IntegerPartitions[3, {18}, Range[0, 3]];
result = Cases[p1, Alternatives @@@ Range[0, {1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 3}]];
</code></pre>
<p>Testing this (on my loungebook, so I limited both yours and this to indices to <em>u</em>), yours took ~30 seconds and the initial table took over 50MB on <em>ByteCount</em>, the above finished under timer resolution with under 19KB used... I'd expect 10-20X faster speed on a workstation, same memory needs.</p>
<p>Same result, modulo sort order.</p>
<p>The advantage will of course grow extending the indices to the full set.</p>
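<p>For readers without Mathematica, the count can be cross-checked in a few lines of Python (a hypothetical analogue of the <code>IntegerPartitions</code>/<code>Cases</code> approach, not part of the original answer): a tuple summing to exactly 3 is a multiset of three unit placements among the 18 positions, filtered by each position's upper bound.</p>

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import comb

# Bounds in the question's order: positions 0, 3, 6, ... run 0..1,
# the others run 0..3.
caps = [1, 3, 3] * 6

# Distribute three "units" over 18 slots, respecting each cap.
valid = [c for c in combinations_with_replacement(range(18), 3)
         if all(n <= caps[i] for i, n in Counter(c).items())]
count = len(valid)

# Inclusion-exclusion check: C(20,3) unrestricted solutions, minus the
# multisets putting two or more units on one of the six 0..1 slots.
check = comb(20, 3) - 6 * 18
```
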
|
1,766,264 | <p>A store sells 8 kinds of candy. How many ways can you pick out 15 candies total to throw unordered into a bag and take home.</p>
<p>Here there are 15 candies, so do we choose 8 out of 15, i.e. $^{15}C_8$? Am I right?</p>
| mathreadler | 213,607 | <p>Something to expand a bit on Andrés' answer (<strong>may be more than you need</strong>, but could be interesting if you are curious). Consider the expression
$$(x_1+\cdots+x_8)^{15}.$$ Each time we pick a term from the factor $x_k$, that symbolizes picking candy type $k$. So the possible candy configurations will be each monomial in the expanded sum. For example, the term ${x_1}^7{x_2}^8$ would symbolize having picked seven of the first kind and eight of the second. In other words, we are looking for the total number of such terms. Now the coefficients of each monomial will be the number of possible orders in which we can pick them. So this approach gives a bit more information than we need (right now). </p>
<p>A minimal example would be $(x_1+x_2)^2 = {x_1}^2 + 2x_1x_2 + {x_2}^2$ where the coefficient 2 means that we can first pick candy 1 and then 2 or first candy 2 and then candy 1 ( two configurations ).</p>
|
1,766,264 | <p>A store sells 8 kinds of candy. How many ways can you pick out 15 candies total to throw unordered into a bag and take home.</p>
<p>Here there are 15 candies, so do we choose 8 out of 15, i.e. $^{15}C_8$? Am I right?</p>
| N. F. Taussig | 173,070 | <p>The number $\binom{15}{8}$ represents the number of ways of making an unordered selection of eight objects from a set of $15$ distinct objects. </p>
<p>In this problem, we are instead selecting $15$ pieces of candy from eight different types of candy. What matters is how many candies of each type we choose. If $x_k$ represents the number of candies of type $k$, $1 \leq k \leq 8$, then
$$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8 = 15$$
Assuming there are at least $15$ candies of each type available, we need to determine the number of solutions of this equation in the non-negative integers. A particular solution corresponds to the placement of seven addition signs in a row of $15$ ones. For instance,
$$1 + 1 1 + 1 1 1 + + 1 1 + 1 1 1 + 1 + 1 1 1$$
corresponds to the solution $x_1 = 1$, $x_2 = 2$, $x_3 = 3$, $x_4 = 0$, $x_5 = 2$, $x_6 = 3$, $x_7 = 1$, and $x_8 = 3$. Thus, the number of solutions of the equation is the number of ways of inserting seven addition signs into a row of $15$ ones, which is
$$\binom{15 + 7}{7} = \binom{22}{7}$$
since we must choose which seven of the twenty-two symbols ($15$ ones and $7$ addition signs) will be addition signs. </p>
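<p>The stars-and-bars count can be confirmed numerically (a small illustrative script I am adding, not part of the original answer) by tallying the solutions of $x_1+\cdots+x_8=15$ one variable at a time:</p>

```python
from math import comb

TOTAL = 15

# ways[s] = number of ways the variables handled so far can sum to s.
ways = [1] + [0] * TOTAL
for _ in range(8):                      # eight candy types
    nxt = [0] * (TOTAL + 1)
    for s in range(TOTAL + 1):
        for v in range(TOTAL + 1 - s):  # next variable takes value v
            nxt[s + v] += ways[s]
    ways = nxt

count = ways[TOTAL]
```

<p>The tally agrees with $\binom{22}{7}$.</p>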
|
267,706 | <p>I'm making an animation of a <a href="https://en.wikipedia.org/wiki/Reuleaux_triangle" rel="nofollow noreferrer">Reuleaux triangle</a> rolling on a straight line like this
<a href="https://i.stack.imgur.com/m0IMm.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m0IMm.gif" alt="rolling Reuleaux triangle" /></a></p>
<p>The animation generated by my code is not continuous. Is there a simple way to eliminate jumping?</p>
<pre><code>Manipulate[
Module[{reuleaux, s},
reuleaux[t_] = {-Cos[Pi/3 (1 + 2 Floor[3 t])] + Sqrt[3] Cos[Pi/6 + Pi t + Pi/3 Floor[3 t]],
-Sin[Pi/3 (1 + 2 Floor[3 t])] + Sqrt[3] Sin[Pi/6 + Pi t + Pi/3 Floor[3 t]]};
s[t_?NumericQ] := NIntegrate[Norm[reuleaux'[s]], {s, 0, t}];
ParametricPlot[{s[u], 0} + (reuleaux[t] - reuleaux[u]).RotationMatrix[ArcTan @@ (reuleaux'[u])] // Evaluate,
{t, 0, 1}, PlotRange -> {{-1, 7}, {-1, 2}}]
], {u, 0.001, 1 + 0.001}]
</code></pre>
<p><a href="https://i.stack.imgur.com/AqsNC.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AqsNC.gif" alt="animation with jump" /></a></p>
<p>Reference link:<br />
<a href="https://community.wolfram.com/groups/-/m/t/1628699" rel="nofollow noreferrer">On rolling polygons and Reuleaux polygons</a><br />
<a href="https://math.stackexchange.com/questions/2279568/formula-to-create-a-reuleaux-polygon">formula-to-create-a-reuleaux-polygon</a><br />
<a href="https://mathematica.stackexchange.com/questions/242180/how-to-roll-a-graph-on-the-y-axis">How to roll a graph on the y-axis</a><br />
<a href="https://mathematica.stackexchange.com/questions/212962/how-to-plot-a-bicycle-with-square-wheels">How to plot a bicycle with square wheels</a></p>
| Adam | 74,641 | <p>Daniel Huber's code seems to let the points and edges slide a little. In order to ensure no slippage, you need to use the perimeter of the shape: each arc has length <span class="math-container">$\pi/3$</span>, so the distance between successive vertex rotations should be <span class="math-container">$\pi/3$</span>.</p>
<p>This code is crude but gets the job done</p>
<pre><code>With[{prims=Circle@@@({CirclePoints@3/√3,{1,1,1},{{2,3},{4,5},{0,1}}π/3}\[Transpose])},
Animate[Graphics@With[{mθ=Mod[θ,2π/3]},With[{tr={⌊3θ/(2π)⌋π/3,0}+
If[mθ<π/3,{mθ(π-√3)/π,1-Cos[π/6-mθ]/√3},{(2π-√3)/6-Cos[mθ]/√3,Sin[mθ]/√3}]},
{Point@tr,TranslationTransform[tr]@*RotationTransform[π/6-θ]/@prims,
Line/@{{{-1,1},{5,1}},{{-1,0},{5,0}}},Point@{π#/3-1/(2√3),0}&/@Range[0,4]}]],
{θ,0,7π/3},AnimationDirection->ForwardBackward]]
</code></pre>
<p>and you get something like</p>
<p><a href="https://i.stack.imgur.com/KkFcP.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkFcP.gif" alt="rolling1" /></a></p>
|
1,865,364 | <p>After having seen a lengthy and painful calculation showing
$\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}3, \sqrt[\leftroot{-2}\uproot{2}3]{2}]/\mathbb Q)\cong S_3$, I'm wondering whether there's a slick proof $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)\cong S_p$ for odd prime $p$, because these calculations are getting intractable fast.</p>
<p>What are some slick proofs of this fact (assuming it is indeed correct).</p>
<p><strong>Correction:</strong> What <strong>IS</strong> $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)$ for prime $p$?</p>
| M. Van | 337,283 | <p>Your statement does not hold. Let $\zeta$ be a primitive $p$-th root of unity. Remember that the order of the Galois group $\text{Gal} \mathbb{Q}(\zeta, \sqrt[p]{2})$ is the degree of the extension $\mathbb{Q}(\zeta, \sqrt[p]{2})/ \mathbb{Q}$. Now $[\mathbb{Q}(\zeta) : \mathbb{Q}]=p-1$ and $[\mathbb{Q}(\sqrt[p]{2}) : \mathbb{Q}]=p$ because $X^p-2$ is irreducible by Eisenstein. We have $[\mathbb{Q}( \zeta, \sqrt[p]{2}) : \mathbb{Q}( \zeta ) ] \leq p$ but $p \mid [ \mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}]=[\mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}(\zeta)][\mathbb{Q}(\zeta): \mathbb{Q}]= [\mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}(\zeta)](p-1)$, and by Euclid's lemma we have $p \mid [\mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}(\zeta)]$ because $ \gcd(p, p-1) =1 $. So $p=[\mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}(\zeta)]$.</p>
<p>Conclusion: $[\mathbb{Q}(\zeta, \sqrt[p]{2}) : \mathbb{Q}]=p(p-1)$. Now if your statement would hold, then $p(p-1)=|S_p|=p!$. This is true for odd $p$ exactly when $p=3$. So for any other odd $p$ this is not true.</p>
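<p>A trivial computational footnote (my addition, for illustration): among odd primes, $p(p-1)=p!$ is equivalent to $(p-2)!=1$, which holds only for $p=3$, as a quick loop confirms:</p>

```python
from math import factorial

# p(p-1) = p! is equivalent to (p-2)! = 1, true only for p = 3
# among the odd primes sampled here.
matches = [p for p in (3, 5, 7, 11, 13) if p * (p - 1) == factorial(p)]
```
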
|
3,660,652 | <p>To which of the seventeen standard quadrics (<a href="https://mathworld.wolfram.com/QuadraticSurface.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/QuadraticSurface.html</a>) do these two equations reduce?
<span class="math-container">\begin{equation}
Q_1^2+3 Q_2 Q_1+\left(3
Q_2+Q_3\right){}^2 = 3 Q_2+2 Q_1 Q_3.
\end{equation}</span>
<span class="math-container">\begin{equation}
-9 Q_2-6 Q_3+3 \left(Q_1^2+\left(3 Q_2+4 Q_3-1\right) Q_1+9 Q_2^2+4 Q_3^2+6 Q_2
Q_3\right) = 0.
\end{equation}</span>
Further, what are the associated transformations needed to accomplish the reductions?</p>
<p>This is a "distilled" form of a previous more expansive question <a href="https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces">https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces</a></p>
| fleablood | 280,126 | <p><span class="math-container">$x = \frac{781 + 256 (3d-1)}{81}$</span> means that <span class="math-container">$x = \frac{781 + 256 (3d-1)}{81}$</span> is an integer and that <span class="math-container">$81$</span> divides into <span class="math-container">$781+256(3d-1)$</span> evenly.</p>
<p><span class="math-container">$\frac {781 + 256(3d-1)}{81}= \frac {525 + 768 d}{81} = \frac {175 + 256d}{27}$</span> so <span class="math-container">$27$</span> divides into <span class="math-container">$175 + 256d$</span> evenly.</p>
<p><span class="math-container">$175+256d = 27*(6 + 9d) + (13+13d)$</span> so <span class="math-container">$27$</span> must divide into <span class="math-container">$13+13d$</span> evenly.</p>
<p>As <span class="math-container">$13$</span> and <span class="math-container">$27$</span> are relatively prime <span class="math-container">$27$</span> must divide into <span class="math-container">$1+d$</span> evenly.</p>
<p>So there must be an integer <span class="math-container">$m$</span> so that <span class="math-container">$1+d = 27m$</span> or <span class="math-container">$d = 27m-1$</span>.</p>
<p>So we can have <span class="math-container">$d= 26, 53, 80, ..... $</span></p>
<p>So long as <span class="math-container">$d=27m -1$</span> then </p>
<p><span class="math-container">$x = \frac {781+256(3(27m-1)-1)}{81}= \frac{525+768(27m-1)}{81}=\frac {27*768m-243}{81}= 256m- 3$</span></p>
<p>.... if you are comfortably with modular arithmetic:</p>
<p><span class="math-container">$x=\frac{781 + 256 (3d-1)}{81}$</span> being an integer means</p>
<p><span class="math-container">$781+256(3d-1) \equiv 0 \pmod {81}$</span></p>
<p><span class="math-container">$256*3d \equiv -525 \pmod {81}$</span></p>
<p><span class="math-container">$39d \equiv -39\pmod{81}$</span></p>
<p><span class="math-container">$d\equiv -1 \pmod {\frac{81}{\gcd(39,81)}}$</span></p>
<p><span class="math-container">$d \equiv -1 \pmod{27}$</span></p>
<p>would be easier. (We only have to do remainders, etc.)</p>
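<p>Either derivation is easy to verify by brute force (an illustrative check I am adding, not part of the original answer): the admissible $d$ are exactly $27m-1$, and the resulting $x$ is $256m-3$.</p>

```python
# d makes x = (781 + 256(3d - 1)) / 81 an integer exactly when
# d = 27m - 1, and then x = 256m - 3.
good = [d for d in range(1, 200)
        if (781 + 256 * (3 * d - 1)) % 81 == 0]
xs = [(781 + 256 * (3 * d - 1)) // 81 for d in good]
```
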
|
319,725 | <p>I am trying to prove the following inequality concerning the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="noreferrer">Beta Function</a>:
<span class="math-container">$$
\alpha x^\alpha B(\alpha, x\alpha) \geq 1 \quad \forall 0 < \alpha \leq 1, \ x > 0,
$$</span>
where as usual <span class="math-container">$B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}dt$</span>.</p>
<p>In fact, I only need this inequality when <span class="math-container">$x$</span> is large enough, but it empirically seems to be true for all <span class="math-container">$x$</span>.</p>
<p>The main reason why I'm confident that the result is true is that it is very easy to plot, and I've experimentally checked it for reasonable values of <span class="math-container">$x$</span> (say between 0 and <span class="math-container">$10^{10}$</span>). For example, for <span class="math-container">$x=100$</span>, the plot is:</p>
<p><a href="https://i.stack.imgur.com/UiRCf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UiRCf.png" alt="Plot of the function to be proven greater than 1"></a></p>
<p>Varying <span class="math-container">$x$</span>, it seems that the inequality is rather sharp, namely I was not able to find a point where that product is larger than around <span class="math-container">$1.5$</span> (but I do not need any such reverse inequality).</p>
<p>I know very little about Beta functions, therefore I apologize in advance if such a result is already known in the literature. I've tried looking around, but I always ended on inequalities trying to link <span class="math-container">$B(a,b)$</span> with <span class="math-container">$\frac{1}{ab}$</span>, which is quite different from what I am looking for, and also only holds true when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are smaller than 1, which is not my setting.</p>
<p>I have tried the following to prove it, but without success: the inequality is well-known to be an equality when <span class="math-container">$\alpha = 1$</span>, and the limit for <span class="math-container">$\alpha \to 0$</span> should be equal to 1, too. Therefore, it would be enough to prove that there exists at most one <span class="math-container">$0 < \alpha < 1$</span> where the derivative of the expression to be bounded vanishes. This derivative can be written explicitly in terms of the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="noreferrer">digamma function</a> <span class="math-container">$\psi$</span> as:
<span class="math-container">$$
x^\alpha B(\alpha, x\alpha) \Big(\alpha \psi(\alpha) - (x+1)\alpha\psi((x+1)\alpha) + x\alpha \psi(x\alpha) + 1 + \alpha \log x \Big).
$$</span>
Dividing by <span class="math-container">$x^\alpha B(\alpha, x\alpha) \alpha$</span>, this becomes
<span class="math-container">$$
-f(\alpha) + \frac{1}{\alpha} + \log x,
$$</span>
where <span class="math-container">$f(\alpha) = -\psi(\alpha) + (x+1)\psi((x+1)\alpha) - x \psi(x\alpha)$</span> is, as proven <a href="http://web.math.ku.dk/~berg/manus/alzberg2.pdf" rel="noreferrer">by Alzer and Berg</a>, Theorem 4.1, a completely monotonic function. Unfortunately, the difference of two completely monotonic functions (such as <span class="math-container">$f(\alpha)$</span> and <span class="math-container">$\frac{1}{\alpha} + C$</span>) can vanish in arbitrarily many points, therefore this does not allow to conclude.</p>
<p>Many thanks in advance for any hint on how to get such a bound!</p>
<p>[EDIT]: As pointed out in the comments, the link to the paper of Alzer and Berg pointed to the wrong version, I have corrected the link.</p>
| esg | 48,831 | <p>One can also use Jensen's inequality. Let (for <span class="math-container">$\sigma>0$</span>) <span class="math-container">$G_\sigma$</span> denote a random variable with <span class="math-container">$\Gamma(1,\sigma)$</span>-distribution, i.e. having Lebesgue density
<span class="math-container">$$f_\sigma(t)=\frac{t^{\sigma-1}}{\Gamma(\sigma)} e^{-t}\;1_{(0,\infty)}(t)\;,$$</span>
then <span class="math-container">$\mathbb{E}(G_\sigma)=\sigma$</span>.
Since <span class="math-container">$\alpha\in (0,1)$</span> the functions <span class="math-container">$t\mapsto t^\alpha$</span> resp. <span class="math-container">$t\mapsto t^{1-\alpha}$</span> on <span class="math-container">$\mathbb{R}_+$</span> are concave. By Jensen's inequality
<span class="math-container">$$\frac{\Gamma(\alpha+\alpha x)}{\Gamma(\alpha x)}=\mathbb{E}(G_{x\alpha}^\alpha)\leq \left(\mathbb{E}(G_{x\alpha})\right)^\alpha=(x\alpha)^{\alpha}$$</span></p>
<p>and
<span class="math-container">$$\frac{1}{\Gamma(\alpha)}=\mathbb{E} G_\alpha^{1-\alpha}\leq\left(\mathbb{E}(G_{\alpha})\right)^{1-\alpha}=\frac{1}{\alpha^{\alpha-1}}$$</span>
Using that gives
<span class="math-container">$$B(\alpha,x \alpha)=\frac{\Gamma(\alpha)\,\Gamma(x\alpha)}{\Gamma(\alpha +x\alpha)}\geq \frac{\Gamma(\alpha)}{\alpha^\alpha x^\alpha}\geq \frac{\Gamma(\alpha)}{\alpha\,\Gamma(\alpha)\,x^\alpha}=\frac{1}{\alpha x^\alpha},$$</span>
as desired.</p>
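<p>The inequality (and the equality case $\alpha=1$) is easy to probe numerically with log-gamma; the following sketch (my addition, using only the Python standard library) evaluates $\alpha x^\alpha B(\alpha, x\alpha)$ on a grid:</p>

```python
from math import exp, lgamma, log

def lhs(alpha, x):
    # alpha * x**alpha * B(alpha, x*alpha), via log-gamma for stability
    log_beta = lgamma(alpha) + lgamma(x * alpha) - lgamma((1 + x) * alpha)
    return exp(log(alpha) + alpha * log(x) + log_beta)

samples = [lhs(a, x)
           for a in (0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0)
           for x in (0.01, 0.5, 1.0, 10.0, 1e4)]
```

<p>For $\alpha=1$ the expression collapses to $x\cdot B(1,x)=x\cdot\frac1x=1$, matching the equality case.</p>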
|
4,351,504 | <p>A question from Herstein's Abstract Algebra book goes-</p>
<blockquote>
<p>Let <span class="math-container">$(R,+,\cdot)$</span> be a ring with unit element. Using its elements we define a ring <span class="math-container">$(\tilde R,\oplus,\odot)$</span> by defining <span class="math-container">$a\oplus b = a + b + 1$</span> and <span class="math-container">$a\odot b = a\cdot b + a + b$</span> where <span class="math-container">$a,b\in R$</span>.</p>
<ol>
<li>Prove that <span class="math-container">$\tilde R$</span> is a ring under the operations <span class="math-container">$\oplus$</span> and <span class="math-container">$\odot$</span>.</li>
<li>What is the zero element of <span class="math-container">$\tilde R$</span>?</li>
<li>What is the unit element of <span class="math-container">$\tilde R$</span>?</li>
<li>Prove that <span class="math-container">$R$</span> is isomorphic to <span class="math-container">$\tilde R$</span>.</li>
</ol>
</blockquote>
<p>Parts 1,2 and 3 seemed quite easy for me, and the answers I got for 2 and 3 are <span class="math-container">$-1$</span> and <span class="math-container">$0$</span> respectively.</p>
<p>But, I got stuck with part 4. I understood that I had to construct an isomorphism <span class="math-container">$\phi:R\to \tilde R$</span> such that <span class="math-container">$0\mapsto -1$</span> and <span class="math-container">$1\mapsto 0$</span>. But, I couldn't construct the bijection explicitly. A little google search revealed the answer to be <span class="math-container">$\phi (x)=x-1$</span> and that works.</p>
<p>My question is, how do we come up with that isomorphism? How do we construct that function when all we know are the two weird sum and product definitions, and <span class="math-container">$0\mapsto -1$</span> and <span class="math-container">$1\mapsto 0$</span>? Some <em>"stacking"</em> showed <a href="https://math.stackexchange.com/q/2004269/943723">some</a> <a href="https://math.stackexchange.com/q/2003399/943723">similar</a> <a href="https://math.stackexchange.com/a/15006/943723">questions</a> where people have suggested something called <a href="https://math.stackexchange.com/search?q=user%3A242+transport+">"transporting ring structure"</a> which I honestly can't grasp properly. I'm not even sure whether that is really the answer to my question.</p>
<p>I would like to have some help from the experts here.</p>
<p>Also please change the title of the question if you can think of a better one :|</p>
| Svyatoslav | 869,237 | <p>We can also try to dig a bit deeper. Knowing that <a href="https://www.google.com/search?q=erf%20laplace%20transform&rlz=1C1GCEU_ruRU866RU868&oq=erf%20laplace%20transform&aqs=chrome..69i57.11914j0j15&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer">Laplace Transform</a> of <span class="math-container">$\operatorname{erf}(-\frac{\sqrt \alpha}{2\sqrt t})$</span> is <span class="math-container">$\frac{1}{s}e^{-\sqrt{\alpha s}}$</span> and taking the first derivative over <span class="math-container">$\sqrt \alpha$</span>, we may suppose that the desired function has the representation
<span class="math-container">$$f(t)=\frac{a}{\sqrt t}e^{-\frac{b}{t}}$$</span>
where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are some constants.
Performing LT
<span class="math-container">$$I(s)=\int_0^\infty f(t)e^{-st}dt=\frac{a}{\sqrt s}\int_0^\infty\frac{dx}{\sqrt x}e^{-x-\frac{bs}{x}}=\frac{2a}{\sqrt s}\int_0^\infty e^{-t^2-\frac{bs}{t^2}}dt$$</span>
<span class="math-container">$$=\frac{2a}{\sqrt s}\int_0^\infty e^{-(t-\frac{\sqrt{bs}}{t})^2}e^{-2\sqrt{bs}}dt$$</span>
Now we can use <a href="https://en.wikipedia.org/wiki/Glasser%27s_master_theorem" rel="nofollow noreferrer">Glasser's Master Theorem</a>, or just use the substitution <span class="math-container">$x=\frac{\sqrt{bs}}{t}$</span> to evaluate the integral:
<span class="math-container">$$I(s)=\frac{{\sqrt\pi}\,a}{\sqrt s}e^{-2\sqrt{bs}}$$</span>
The last action is to choose the appropriate coefficients <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
|
2,360,523 | <blockquote>
<p>Let $a, b, c$ be positive real numbers such that $a+b+c = 1$. Prove that
$$ \displaystyle\sum_{cyc}\frac{ab}{\sqrt{ab+bc}} \leq \frac{1}{\sqrt{2}}$$</p>
</blockquote>
<p>My attempted work :</p>
<p>By C-S, $$ (ab+bc)(1+1) \geq (\sqrt{ab}+\sqrt{bc})^2$$</p>
<p>$$\sqrt{2} \sqrt{ab+bc} \geq \sqrt{ab}+\sqrt{bc}$$</p>
<p>$$\frac{\sqrt{2} ab}{ \sqrt{ab}+\sqrt{bc}} \geq \frac{ab}{\sqrt{ab+bc}}$$</p>
<p>$$\frac{ab}{\sqrt{ab+bc}} \leq \frac{\sqrt{2} ab}{ \sqrt{ab}+\sqrt{bc}}$$</p>
<p>multiply through by $\sqrt{2}$</p>
<p>$$\displaystyle\sum_{cyc} \frac{\sqrt{2} ab}{\sqrt{ab+bc}} \leq \displaystyle\sum_{cyc} \frac{ 2ab}{ \sqrt{ab}+\sqrt{bc}} = \displaystyle\sum_{cyc} \frac{ ab}{ \sqrt{ab}+\sqrt{bc}} + \displaystyle\sum_{cyc} \frac{ bc}{ \sqrt{ab}+\sqrt{bc}} = \displaystyle\sum_{cyc} \frac{ ab+bc}{ \sqrt{ab}+\sqrt{bc}}$$</p>
<p>Please suggest, how to show that </p>
<p>$$\displaystyle\sum_{cyc} \frac{ ab+bc}{ \sqrt{ab}+\sqrt{bc}} \leq \frac{1}{\sqrt{2}}\sqrt{2} = 1 = a+b+c $$</p>
<p>Can we just use basic inequalities ?</p>
| Michael Rozenberg | 190,319 | <p>Your solution is wrong because
$$ 2 \displaystyle\sum_{cyc} \frac{ ab}{ \sqrt{ab}+\sqrt{bc}} \neq \displaystyle\sum_{cyc} \frac{ ab}{ \sqrt{ab}+\sqrt{bc}} + \displaystyle\sum_{cyc}\frac{ bc}{ \sqrt{ab}+\sqrt{bc}} $$</p>
<p>My proof:</p>
<p>By C-S
$$\left(\sum_{cyc}\sqrt{\frac{a^2b}{a+c}}\right)^2\leq(ab+ac+bc)\sum_{cyc}\frac{a}{a+c}=$$
$$=(ab+ac+bc)\left(3-\sum_{cyc}\frac{c}{a+c}\right)=(ab+ac+bc)\left(3-\sum_{cyc}\frac{c^2}{ac+c^2}\right)\leq$$
$$\leq(ab+ac+bc)\left(3-\frac{(a+b+c)^2}{\sum\limits_{cyc}(a^2+ab)}\right).$$
Thus, it remains to prove that
$$(ab+ac+bc)\left(3-\frac{(a+b+c)^2}{\sum\limits_{cyc}(a^2+ab)}\right)\leq\frac{(a+b+c)^2}{2}.$$
Now, let $a^2+b^2+c^2=k(ab+ac+bc)$.</p>
<p>Thus, $k\geq1$ and we need to prove that
$$3-\frac{k+2}{k+1}\leq\frac{k+2}{2}$$ or
$$k(k-1)\geq0.$$
Done!</p>
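<p>A quick random spot check (my addition, not part of the proof) agrees with the inequality, with equality attained at $a=b=c$:</p>

```python
import random
from math import sqrt

random.seed(0)

def cyc_sum(a, b, c):
    # sum_cyc ab / sqrt(ab + bc)
    return (a * b / sqrt(a * b + b * c)
            + b * c / sqrt(b * c + c * a)
            + c * a / sqrt(c * a + a * b))

vals = []
for _ in range(2000):
    a, b, c = (random.random() + 1e-9 for _ in range(3))
    s = a + b + c
    vals.append(cyc_sum(a / s, b / s, c / s))  # normalize so a+b+c = 1

equality_case = cyc_sum(1 / 3, 1 / 3, 1 / 3)
```
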
|
2,360,523 | <blockquote>
<p>Let $a, b, c$ be positive real numbers such that $a+b+c = 1$. Prove that
$$ \displaystyle\sum_{cyc}\frac{ab}{\sqrt{ab+bc}} \leq \frac{1}{\sqrt{2}}$$</p>
</blockquote>
<p>My attempted work :</p>
<p>By C-S, $$ (ab+bc)(1+1) \geq (\sqrt{ab}+\sqrt{bc})^2$$</p>
<p>$$\sqrt{2} \sqrt{ab+bc} \geq \sqrt{ab}+\sqrt{bc}$$</p>
<p>$$\frac{\sqrt{2} ab}{ \sqrt{ab}+\sqrt{bc}} \geq \frac{ab}{\sqrt{ab+bc}}$$</p>
<p>$$\frac{ab}{\sqrt{ab+bc}} \leq \frac{\sqrt{2} ab}{ \sqrt{ab}+\sqrt{bc}}$$</p>
<p>multiply through by $\sqrt{2}$</p>
<p>$$\displaystyle\sum_{cyc} \frac{\sqrt{2} ab}{\sqrt{ab+bc}} \leq \displaystyle\sum_{cyc} \frac{ 2ab}{ \sqrt{ab}+\sqrt{bc}} = \displaystyle\sum_{cyc} \frac{ ab}{ \sqrt{ab}+\sqrt{bc}} + \displaystyle\sum_{cyc} \frac{ bc}{ \sqrt{ab}+\sqrt{bc}} = \displaystyle\sum_{cyc} \frac{ ab+bc}{ \sqrt{ab}+\sqrt{bc}}$$</p>
<p>Please suggest, how to show that </p>
<p>$$\displaystyle\sum_{cyc} \frac{ ab+bc}{ \sqrt{ab}+\sqrt{bc}} \leq \frac{1}{\sqrt{2}}\sqrt{2} = 1 = a+b+c $$</p>
<p>Can we just use basic inequalities ?</p>
| River Li | 584,414 | <p>Some years ago, I came up with a proof.</p>
<p><strong>Proof</strong>: By using the Cauchy-Schwarz inequality, we have
<span class="math-container">\begin{align}
&\frac{xy}{\sqrt{xy+yz}}+\frac{yz}{\sqrt{yz+zx}}+\frac{zx}{\sqrt{zx+xy}}\\
\le \ & \sqrt{(xy+yz+zx)\Big(\frac{xy}{xy+yz}+\frac{yz}{yz+zx}+\frac{zx}{zx+xy}\Big)}\\
= \ & \sqrt{xy + \frac{x^2z}{x+z} + yz + \frac{xy^2}{y+x} + xz + \frac{yz^2}{z+y}}\\
\le \ &
\sqrt{xy + \frac{x(x+z)^2/4}{x+z} + yz + \frac{y(y+x)^2/4}{y+x} + xz + \frac{z(z+y)^2/4}{z+y}}\\
= \ &
\sqrt{\frac{1}{4}(x^2+y^2+z^2)+\frac{5}{4}(xy+yz+zx)}\\
\le \ & \sqrt{\frac{1}{2}(x+y+z)^2}\\
= \ & \frac{1}{\sqrt{2}}.
\end{align}</span></p>
|
332,927 | <p>There are two bowls, with black olives in one and green olives in the other. A boy takes 20 green olives and puts them in the black olive bowl, mixes the black olive bowl, then takes 20 olives from it and puts them in the green olive bowl. The question is:</p>
<p>Are there more green olives in the black olive bowl or black olives in the green olive bowl? Answer with reason.</p>
| Philip C | 12,160 | <p>Suppose you start with $B$ black olives in one bowl and $G$ green olives in the other.</p>
<p>After the first transfer, we have $B$ black olives and $20$ green olives in one bowl and $G-20$ green olives in the other.</p>
<p>For the second transfer, suppose the boy picks $X$ black olives and $20-X$ green olives (for a total of $20$ olives).</p>
<p>Then:<br>
In the black olive bowl, we have $B - X$ black olives and $20-(20-X) = X$ green olives.<br>
In the green olive bowl, we have $X$ black olives and $(G-20)+(20-X) = G-X$ green olives.</p>
<p><strong>Thus we end up with the same number of green olives in the black olive bowl as black olives in the green olive bowl (namely $X$).</strong></p>
<p>(Or in table form, with $(A,B)$ denoting the number of black and green olives respectively):</p>
<pre><code>Black bowl | Green bowl
(B,0) | (0,G)
(B,20) | (0,G-20) Transfer 20 green, from right to left
(B-X,X) | (X,G-X) Transfer X black and 20-X green, from left to right
</code></pre>
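<p>The argument can also be checked by simulation (a sketch I am adding, with made-up bowl sizes): whatever random mix comes back, the two counts coincide.</p>

```python
import random

random.seed(1)

def run_trial(B, G):
    """Return (green olives in the black bowl, black olives in the green bowl)."""
    black_bowl = ['B'] * B
    green_bowl = ['G'] * G
    # 20 green olives go into the black bowl...
    black_bowl += [green_bowl.pop() for _ in range(20)]
    random.shuffle(black_bowl)                 # mix
    # ...then 20 random olives come back.
    green_bowl += [black_bowl.pop() for _ in range(20)]
    return black_bowl.count('G'), green_bowl.count('B')

results = [run_trial(B, G) for B, G in [(50, 60), (100, 40), (30, 200)]]
```
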
|
152,295 | <p>What is the definition of the picture changing operation?
What is a standard reference where it is defined - not just used?</p>
| Carlo Beenakker | 11,260 | <p><A HREF="http://arxiv.org/abs/hep-th/9706033" rel="nofollow noreferrer">Picture changing operators in supergeometry and superstring theory</A>, Alexander Belopolsky (1997).</p>
<p><IMG SRC="https://ilorentz.org/beenakker/MO/picture_change.png"></p>
|
3,196,238 | <p>Let <span class="math-container">$ A = \left\{ (x,y) \in \mathbb{R^2} \mid y= \sin ( \frac{1}{x}) , \ 0 < x \leq 1 \right\}$</span> . Find <span class="math-container">$\operatorname{Cl} A$</span> in topological space <span class="math-container">$\mathbb{R^2}$</span> with dictionary order topology.</p>
<p>I guess <span class="math-container">$ \operatorname{Cl} A = A $</span>? </p>
| Cameron Buie | 28,900 | <p>You're quite right. To prove it, I would take an arbitrary point not in <span class="math-container">$A,$</span> and find an open interval around it containing no points of <span class="math-container">$A.$</span> This shows that the complement of <span class="math-container">$A$</span> is open, so that <span class="math-container">$A$</span> is closed.</p>
|
3,520,327 | <p>Currently in Calculus II and I was introduced to hyperbolic trigonometric functions and it threw me for a loop. I’m really confused on their MEANING... and what they represent. I can use the formulas for them easily but it doesn’t actually make sense to me. Can someone please help me out? Are there any good books you can recommend as well?</p>
| Dan Christensen | 3,515 | <p>In classical logic, <span class="math-container">$p \implies q$</span> means only that it is false that both <span class="math-container">$p$</span> is true and <span class="math-container">$q$</span> is false. </p>
<blockquote>
<p><span class="math-container">$p \implies q \space \space \equiv \space\space \neg (p \land \neg q)$</span></p>
</blockquote>
<p>We can prove <span class="math-container">$p\implies q$</span> by either:</p>
<ol>
<li><p>Assuming <span class="math-container">$p$</span> is true, and then proving that <span class="math-container">$q$</span> must also be true.</p></li>
<li><p>Assuming <span class="math-container">$q$</span> is false, and then proving that <span class="math-container">$p$</span> must also be false.</p></li>
<li><p>Assuming <span class="math-container">$p$</span> is true and <span class="math-container">$q$</span> is false, and then obtainimg a contradiction of the forms <span class="math-container">$r\land \neg r$</span> or <span class="math-container">$r \iff \neg r$</span> </p></li>
<li><p>Proving <span class="math-container">$p$</span> is false. (Then there is no need to prove anything about <span class="math-container">$q$</span>.)</p></li>
<li><p>Proving <span class="math-container">$q$</span> is true. (Then there is no need to prove anything about <span class="math-container">$p$</span>.)</p></li>
</ol>
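<p>These equivalences are finite and mechanically checkable; a small truth-table sketch (my addition) confirms that $\neg(p\land\neg q)$ agrees with the material form $\neg p\lor q$ and with the contrapositive $\neg q\implies\neg p$:</p>

```python
from itertools import product

rows = []
for p, q in product((False, True), repeat=2):
    implies = not (p and not q)        # the classical definition above
    rows.append((p, q, implies))

# Equivalent forms: material conditional and contrapositive.
material_ok = all(imp == ((not p) or q) for p, q, imp in rows)
# contrapositive (not q) -> (not p), i.e. not((not q) and p)
contrapositive_ok = all(imp == (not ((not q) and p)) for p, q, imp in rows)
```
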
|
9,345 | <p>On meta.tex.sx, I've asked a question about a class of questions that might get asked over there (and have been) that are (i) ostensibly about maths usage, but (ii) might best be served by an answer that is primarily about how to handle the notation in Latex (See <a href="https://tex.meta.stackexchange.com/questions/3523/where-usage-meets-tex">https://tex.meta.stackexchange.com/questions/3523/where-usage-meets-tex</a> ). </p>
<p>One of the moderators, Martin Scharrer, suggests that the best course of action for these is to migrate them here: this is the right place for the usage part, and there are many knowledgeable Texnicians over here who might be able to spot and handle the need for Latex-specific content in the answers.</p>
<p>Would this policy be acceptable here?</p>
| Tom Oldfield | 45,760 | <p>If your question is overlapping the two disciplines, it may be best to split it into two distinct parts, and post these parts separately in the respective exchanges. Saying this, it is my opinion that the example in your meta post on the TeX exchange belonged solely here and not there, since no part of the question is about how to use LaTeX to get what you want. </p>
<p>If you were to get an answer here that suggests that you do something tricky in LaTeX (which I imagine would be unlikely) then you could post a question to the TeX exchange, linking your question here, pointing out the answer you liked and asking how to carry out the suggestion.</p>
|
250,364 | <blockquote>
<p><strong>Problem</strong> Prove that $$\log(1 + \sqrt{1+x^2})$$ is uniformly continuous.</p>
</blockquote>
<p>My idea is to consider $|x - y| < \delta$, then show that
$$|\log(1 + \sqrt{1+x^2}) - \log(1 + \sqrt{1+y^2})|
= \bigg|\log\bigg(\dfrac{1 + \sqrt{1+x^2}}{1 + \sqrt{1+y^2}}\bigg)\bigg| < \epsilon$$
But I couldn't find a choice for $x, y$ that could implies the above expression is true. Completing the square doesn't seem to help at all. Any idea?</p>
| lhf | 589 | <p>The derivative of $f(x)=\log(1 + \sqrt{1+x^2})$ is $\frac{x}{1 + x^2 + \sqrt{1 + x^2}}$, which is bounded in the whole real line since it is continuous and tends to $0$ as $x\to\pm\infty$. By the Mean Value Theorem, $f$ is Lipschitz and so uniformly continuous.</p>
|
1,281,967 | <p>This is a dumb question I know.</p>
<p>If I have matrix equation $Ax = b$ where $A$ is a square matrix and $x,b$ are vectors, and I know $A$ and $b$, I am solving for $x$.</p>
<p>But multiplication is not commutative in matrix math. Would it be correct to state that I can solve for $A^{-1}Ax = A^{-1}b \implies x = A^{-1}b$?</p>
| the.polo | 202,381 | <p>Yes, if the matrix is invertible, this is correct and the equation has the unique solution $x=A^{-1}b$.</p>
<p>Here is the <a href="http://en.wikipedia.org/wiki/Invertible_matrix#The_invertible_matrix_theorem" rel="nofollow">list</a> of properties that make a matrix invertible.</p>
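<p>A small concrete check of $x = A^{-1}b$, using exact rational arithmetic (my own illustration, with a hypothetical $2\times 2$ example):</p>

```python
from fractions import Fraction as F

# A is 2x2 and invertible; solve A x = b via x = A^{-1} b
A = [[F(2), F(1)],
     [F(5), F(3)]]
b = [F(4), F(7)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # = 1, so A is invertible
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]

# verify A x = b, confirming x = A^{-1} b solves the equation
check = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
assert check == b
```

Note that in numerical practice one usually solves the linear system directly (e.g. by LU factorization) rather than forming $A^{-1}$ explicitly, but the algebra $x = A^{-1}b$ is exactly as stated in the answer.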
|
1,082,390 | <p>$$\lim_{x \to \infty} \left(\sqrt{4x^2+5x} - \sqrt{4x^2+x}\ \right)$$</p>
<p>I have a lot of approaches, but it seems that I get stuck in all of them, unfortunately. For example, I have tried to multiply both numerator and denominator by the conjugate $\left(\sqrt{4x^2+5x} + \sqrt{4x^2+x}\right)$, then I get $\displaystyle \frac{4x}{\sqrt{4x^2+5x} + \sqrt{4x^2+x}}$, but I can conclude nothing from it. </p>
| lab bhattacharjee | 33,337 | <p>Set $h=\dfrac1x$ </p>
<p>$$4x^2+ax=\frac{4+ah}{h^2}\implies\sqrt{4x^2+ax}=\frac{\sqrt{4+ah}}{\sqrt{h^2}}$$</p>
<p>Now as $h\to0^+,h>0\implies\sqrt{h^2}=|h|=h$</p>
<p>$$\implies\lim_{x \to \infty} (\sqrt{4x^2+5x} - \sqrt{4x^2+x})=\lim_{h\to0^+}\frac{\sqrt{4+5h}-\sqrt{4+h}}h$$</p>
<p>$$=\lim_{h\to0^+}\frac{4+5h-(4+h)}{h(\sqrt{4+5h}+\sqrt{4+h})}=\cdots$$</p>
|
1,082,390 | <p>$$\lim_{x \to \infty} \left(\sqrt{4x^2+5x} - \sqrt{4x^2+x}\ \right)$$</p>
<p>I have a lot of approaches, but it seems that I get stuck in all of them, unfortunately. For example, I have tried to multiply both numerator and denominator by the conjugate $\left(\sqrt{4x^2+5x} + \sqrt{4x^2+x}\right)$, then I get $\displaystyle \frac{4x}{\sqrt{4x^2+5x} + \sqrt{4x^2+x}}$, but I can conclude nothing from it. </p>
| Claude Leibovici | 82,404 | <p>Since you already received answers, let me show you another approach you could use. $$A=\sqrt{4x^2+5x} - \sqrt{4x^2+x}=2x \sqrt{1+\frac{5}{4x}}-2x \sqrt{1+\frac{1}{4x}}=2x \Big(\sqrt{1+\frac{5}{4x}}-\sqrt{1+\frac{1}{4x}}\Big)$$ Now, you may be already know that, when $y$ is small compared to $1$ $$\sqrt{1+y}=1+\frac{y}{2}-\frac{y^2}{8}+O\left(y^3\right)$$ Use this twice, replacing $y$ by $\frac{5}{4x}$ for the first radical and by $\frac{1}{4x}$ for the second radical. You will then have $$A=2x \Big(\frac{1}{2 x}-\frac{3}{16 x^2}+\cdots\Big)=1-\frac{3}{8 x}+\cdots$$ which shows the limit and how it is approached.</p>
|
2,426,897 | <p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p>
<p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p>
<p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
| Mr. Brooks | 162,538 | <p>You can also use the <em>least</em> significant digits to get your bearings. Since $2016 \equiv 16 \pmod{100}$, if $2016$ is a perfect square, then $n$ in $n^2 = 2016$ is an integer satisfying $n \equiv 4, 6 \pmod{10}$. Clearly $n = 4$ or $6$ is too small.</p>
<p>Then, working our way up, we get $(196, 256), (576, 676), (1156, 1296), (1936, 2116)$, the last two corresponding to $44$ and $46$. Of course $45^2 \neq 2017$, but perhaps it equals $2025$. Notice then that $2116 - 2025 = 91 = 2 \times 45 + 1$, the gap between consecutive squares. Likewise, $2025 - 1936 = 89 = 2 \times 44 + 1$, so it checks out.</p>
<p>So the answer is $44 < \sqrt{2017} < 45$.</p>
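<p>The last-two-digits observation can be verified by brute force (a check of my own, not part of the answer): the only residues mod $10$ whose squares end in $16$ modulo $100$ are $4$ and $6$.</p>

```python
# residues mod 10 of integers n with n^2 ≡ 16 (mod 100)
residues = sorted({n % 10 for n in range(100) if (n * n) % 100 == 16})
assert residues == [4, 6]

# and 2017 indeed sits between the consecutive squares 44^2 and 45^2
assert 44**2 < 2017 < 45**2
```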
|
2,426,897 | <p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p>
<p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p>
<p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
| Jam | 161,490 | <p>There are well enough good answers here, but no one's suggested this method yet, so I'll add it.</p>
<h3>Method <span class="math-container">$1$</span> - (Secant Approximation)</h3>
<p>We can take two simple perfect squares that straddle <span class="math-container">$2017$</span>. We're just getting a rough estimate, so we'd prefer numbers to be easy to work with than near <span class="math-container">$2017$</span>. For example, <span class="math-container">$1600\leq 2017\leq 3600$</span>. The secant line passing <span class="math-container">$(1600,40)$</span> and <span class="math-container">$(3600,60)$</span> should approximate <span class="math-container">$\sqrt{x}$</span> in the interval <span class="math-container">$[1600,3600]$</span>. This gives us: <span class="math-container">$\sqrt{x}=24+\frac{x}{100}+\varepsilon_1(x)$</span>. *
So, <span class="math-container">$\sqrt{2017}\approx44$</span>. Finally, we can check the squares of <span class="math-container">$43,44,45,\ldots$</span> and find our answer.</p>
<h3>Method <span class="math-container">$2$</span> - (Mean of <span class="math-container">$2$</span>nd Degree Taylor Polynomials)</h3>
<p>We can make method <span class="math-container">$1$</span> slightly more accurate, though it's not really vital to do so. The <span class="math-container">$2$</span>nd degree Taylor expansion of <span class="math-container">$\sqrt{x}$</span> at <span class="math-container">$x=a$</span> is <span class="math-container">$T_a(x)=\sqrt{a}+\frac{x-a}{2\sqrt{a}}-\frac{(x-a)^2}{8a\sqrt{a}}$</span>. Then the mean of <span class="math-container">$T_{1600}(x)$</span> and <span class="math-container">$T_{3600}(x)$</span> should be a good estimate for <span class="math-container">$\sqrt{x}$</span> near the centre of the interval <span class="math-container">$[1600,3600]$</span>. Then <span class="math-container">$\sqrt{x}=50+\frac{x-1600}{160}+\frac{x-3600}{240}-\frac{1}{2\cdot8}\left(\frac{(x-1600)^2}{40^3}+\frac{(x-3600)^2}{60^3}\right)+\varepsilon_2(x)$</span>.** Hence, <span class="math-container">$\sqrt{2017}\approx45$</span> and we can check the squares of nearby integers, as in method <span class="math-container">$1$</span>.</p>
<p>These methods would also work for any <span class="math-container">$2000<x<3000$</span> and could easily be adapted for other values of <span class="math-container">$x$</span>. They also give a convenient way of finding an initial guess, for methods such as Newton-Raphson (detailed in <em>@mathreadler</em>'s answer).</p>
<hr />
<h3>Accuracy of Methods</h3>
<p>* The error term reaches its maximum at <span class="math-container">$\varepsilon_1(2500)= 1$</span>. So, in the worst case scenario, we'd need to check the <span class="math-container">$3$</span> numbers <span class="math-container">$(y-1),y,(y+1)$</span>, where <span class="math-container">$y$</span> is the estimate of <span class="math-container">$\sqrt{x}$</span> from method <span class="math-container">$1$</span> and where <span class="math-container">$x\in[1600,3600]$</span>.</p>
<p>** For <span class="math-container">$x\in(1769,3110)$</span>, we have <span class="math-container">$\varepsilon_2(x)<0.67<\varepsilon_1(x)$</span> but for <span class="math-container">$x<1769$</span> or <span class="math-container">$x>3110$</span>, we have <span class="math-container">$\varepsilon_2(x)>\varepsilon_1(x)$</span>. In other words, method <span class="math-container">$1$</span> is more accurate than method <span class="math-container">$2$</span> in the centre of <span class="math-container">$[1600,3600]$</span>, but the opposite is true near the bounds of the interval. However, since we're interested in the centre of the interval, this is good. The error term, <span class="math-container">$\varepsilon_2(x)$</span>, reaches a minimum of <span class="math-container">$0$</span> around <span class="math-container">$x=2351$</span>.</p>
|
3,393,466 | <p>I am in final year of my undergraduate in mathematics from a prestigious institute for mathematics. However a thing that I have noticed is that I seem to be slower than my classmates in reading mathematics. As in, how muchever I try, I seem to finish my works at the last moment and I rarely find any time for extra reading. Is there any suggestions or tips that I could try that you know of? Or is it advisable to skip details in favour of saving time?</p>
| RyRy the Fly Guy | 412,727 | <p>I am in the same position as you, @Deepakms. I'm always the last to finish on a math exam, the last to turn in the class assignment, etc... with that being said, I typically ace every exam, and I'm usually the one with the highest grade in the class. </p>
<p>People who invest more time in cultivating a rich understanding of a topic generally do not progress through material quickly. However, when they do finish, they are more often correct, able to teach and explain concepts with greater facility, able to build on prior knowledge more easily, able to generalize what they've learned to diverse situations, and able to be more creative/innovative with what they've learned.</p>
<p>So there is definitely a trade off, and I don't think you should beat yourself up over "taking longer" if the reason you are taking longer is to enrich your understanding or check your answers. If mathematics is a passion of yours or important for your career, then you certainly are not wasting your time.</p>
|
327,750 | <p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty (A_{1}^c \cap\cdots\cap A_{n-1}^c \cap A_n)$$</p>
<p>The result is obvious enough, but how can one prove it?</p>
| dtldarek | 26,306 | <p>This is a similar approach, but using different tools. It came out a bit over-formalized, but perhaps it still might be helpful to you.</p>
<hr>
<p>You want to prove</p>
<p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty (A_{1}^c \cap \ldots\cap A_{n-1}^c \cap A_{n})$$
or more concisely</p>
<p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty\left(\bigcap_{k=1}^{n-1} A_k^c\right) \cap A_n.$$</p>
<p>This is equivalent to </p>
<p>$$\exists n.\ P(n) \iff \exists n.\ (\neg P(1) \land \ldots \land \neg P(n-1) \land P(n))$$</p>
<p>or </p>
<p>$$\exists n.\ P(n) \iff \exists n.\ \Big(\forall k < n. \neg P(k)\Big) \land P(n).$$</p>
<p>Of course $A \land B$ implies $B$ so the $\Leftarrow$ part is trivial. To prove $\Rightarrow$, set $$\mathcal{I} = \big\{n\ \big|\ P(n)\big\}.$$ By $\exists n. P(n)$ we know that $\mathcal{I}$ is non-empty. Now, observe that $\langle \{1,2,3,\ldots\},\leq\rangle$ is a well order (that is a well-founded total order), and as such $\mathcal{I}$ has <em>the</em> smallest element; name it $m$. By the definition of $\mathcal{I}$ we know that $\forall k < m.\ \neg P(k)$, and also $P(m)$, so we have constructed the desired $n$ from the right-hand side.</p>
<p>I hope that helps ;-)</p>
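<p>A finite illustration of the disjointification identity (my own addition, using small example sets): the union of the original sets equals the union of the pieces $A_n \setminus (A_1 \cup \dots \cup A_{n-1})$, and the pieces are pairwise disjoint.</p>

```python
from itertools import combinations

# example family of sets (an assumption for illustration)
A = [{1, 2, 3}, {2, 4}, {3, 4, 5, 6}, {1, 6}]

pieces = []
for n, An in enumerate(A):
    earlier = set().union(*A[:n])   # A_1 ∪ ... ∪ A_{n-1} (empty for n = 0)
    pieces.append(An - earlier)     # corresponds to A_1^c ∩ ... ∩ A_{n-1}^c ∩ A_n

# both sides of the identity agree
assert set().union(*pieces) == set().union(*A)
# and the pieces are pairwise disjoint
for P, Q in combinations(pieces, 2):
    assert P.isdisjoint(Q)
```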
|
1,074,534 | <p>How can I get started on this proof? I was thinking originally:</p>
<p>Let $ n $ be odd. (Proving by contradiction.) Then I don't know how to proceed.</p>
| Aaron | 140,411 | <p>To get you started</p>
<p>Assume that the smallest number that is divisible by 500 different numbers is $n$, and assume $n$ is not divisible by $2$ but is instead divisible by $x$, the smallest integer greater than $1$ that $n$ can be divided by; since $n$ is odd, $x$ must be larger than $2$.</p>
<p><strong>To finish the proof (so do not read if you just want to get started)</strong></p>
<p>we find that $\frac{n*2}{x}$ is divisible by 2 and is smaller than $n$, hence $n$ cannot be the smallest number divisible by 500 different numbers</p>
|
1,074,534 | <p>How can I get started on this proof? I was thinking originally:</p>
<p>Let $ n $ be odd. (Proving by contradiction.) Then I don't know how to proceed.</p>
| Henry | 6,460 | <ul>
<li><p>The smallest number with at least $500$ divisors is $2^6\times 3^2 \times 5^2 \times 7 \times 11 \times 13 = 14414400$</p></li>
<li><p>The smallest number with exactly $500$ divisors is $2^4\times 3^4 \times 5^4 \times 7 \times 11 = 62370000$</p></li>

<li><p>The smallest number with exactly $500$ divisors apart from itself is $2^{166}\times 3^2 = 841824943102600080885322463644579019321817144754176$ </p></li>
</ul>
<p>All three of these are even.</p>
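<p>The divisor counts can be verified directly (a cross-check of my own, not part of the answer); the factorizations give $7\cdot3\cdot3\cdot2\cdot2\cdot2 = 504$, $5\cdot5\cdot5\cdot2\cdot2 = 500$, and $167\cdot3 = 501$ divisors respectively.</p>

```python
def num_divisors(n):
    """Count the divisors of n via trial-division factorization."""
    count, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            count *= e + 1
        d += 1
    if n > 1:
        count *= 2  # one leftover prime factor
    return count

assert num_divisors(14414400) == 504           # at least 500 divisors
assert num_divisors(62370000) == 500           # exactly 500 divisors
assert num_divisors(2**166 * 3**2) == 501      # 500 divisors apart from itself
assert all(m % 2 == 0 for m in (14414400, 62370000, 2**166 * 3**2))
```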
|
201,060 | <p>I posted this question on Math, but there has been silence there since. So, I wonder if anyone here can get any closer to the answer to my question using Mathematica. Here is the question:</p>
<p>Suppose I draw <span class="math-container">$N$</span> random variables from independent but identical uniform distributions, where <span class="math-container">$N$</span> is an even integer. I now sort the drawn values and find the two middlemost of these. Finally, I calculate a simple average of these two middlemost values.</p>
<p>Is there a closed-form description of the progression of distributions that arise as <span class="math-container">$N$</span> increases from <span class="math-container">$N=2$</span> to <span class="math-container">$N=∞$</span> ? The first distribution is easily found to be Triangular, but what about the rest? Plots from simulations in MATLAB, with a uniform distribution on the range 0 to 1, provide the following illustrations:</p>
<p><a href="https://i.stack.imgur.com/2wwIJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2wwIJ.png" alt="enter image description here"></a></p>
| JimB | 19,758 | <p><em>Mathematica</em> does make this pretty easy. The statistic of interest is the typical estimator of the median when the sample size is even. When the sample size is odd the sample median has a beta distribution:</p>
<pre><code>OrderDistribution[{UniformDistribution[{0, 1}], n}, (n + 1)/2]
(* BetaDistribution[(1 + n)/2, 1 + 1/2 (-1 - n) + n] *)
</code></pre>
<p>Now for the case when <span class="math-container">$n$</span> is even. First find the joint distribution of the middle two order statistics. Then find the distribution of the mean of those two statistics.</p>
<pre><code>n = 6;
od = OrderDistribution[{UniformDistribution[{0, 1}], n}, {n/2, n/2 + 1}];
md = TransformedDistribution[(x1 + x2)/2, {x1, x2} \[Distributed] od];
PDF[md, x]
</code></pre>
<p><a href="https://i.stack.imgur.com/RMLAe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RMLAe.png" alt="PDF of distribution"></a></p>
<pre><code>Plot[Evaluate[PDF[md, x]], {x, 0, 1}]
</code></pre>
<p><a href="https://i.stack.imgur.com/3aQF4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3aQF4.png" alt="Density function"></a></p>
<p>To obtain the distribution for general <span class="math-container">$n$</span> when <span class="math-container">$n$</span> is even we have to use some other than <code>TransformedDistribution</code>. We need to integrate the joint density function and treat <span class="math-container">$0<x<1/2$</span>, <span class="math-container">$x=1/2$</span>, and <span class="math-container">$1/2<x<1$</span> separately.</p>
<pre><code>fltOneHalf = 2 Integrate[(x1^(-1 + n/2) (1 - x2)^(-1 + n/2) n!)/((-1 + n/2)!)^2 /.
x2 -> 2 x - x1, {x1, 0, x}, Assumptions -> n > 1 && 0 < x < 1/2]
(* -((4 ((1 - 2 x) x)^(n/2) Gamma[n]*
Hypergeometric2F1[1 - n/2, n/2, (2 + n)/2, x/(-1 + 2 x)])/((-1 + 2 x)*
Gamma[n/2]^2)) *)
fOneHalf = 2 Integrate[(x1^(-1 + n/2) (1 - x2)^(-1 + n/2) n!)/((-1 + n/2)!)^2 /.
x2 -> 1 - x1, {x1, 0, 1/2}, Assumptions -> n > 1]
(* (2^(2 - n) n!)/((-1 + n) ((-1 + n/2)!)^2) *)
(* Because the density is symmetric, we'll take advantage of that *)
fgtOneHalf = FullSimplify[fltOneHalf /. x -> y /. y -> 1 - x]
(* (4 (-1 + (3 - 2 x) x)^(n/2) Gamma[n]*
Hypergeometric2F1[1 - n/2, n/2, (2 + n)/2, (-1 + x)/(-1 + 2 x)])/((-1 + 2 x) Gamma[n/2]^2) *)
</code></pre>
<p>Putting this together in a single function:</p>
<pre><code>pdf[n_, x_] :=
Piecewise[{{-((4 ((1 - 2 x) x)^(n/2)*Gamma[n] Hypergeometric2F1[1 - n/2, n/2, (2 + n)/2,
x/(-1 + 2 x)])/((-1 + 2 x) Gamma[n/2]^2)), 0 < x < 1/2},
{(2^(2 - n) n!)/((-1 + n) ((-1 + n/2)!)^2), x == 1/2},
{(4 (-1 + (3 - 2 x) x)^(n/2) * Gamma[n]*
Hypergeometric2F1[1 - n/2, n/2, (2 + n)/2, (-1 + x)/(-1 + 2 x)])/((-1 + 2 x) Gamma[n/2]^2),
1/2 < x < 1}}, 0]
</code></pre>
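<p>As a cross-check (my own addition, not part of the original answer), a Monte Carlo simulation in Python reproduces the point value from the piecewise formula above: for <span class="math-container">$n=6$</span>, <span class="math-container">$\mathrm{pdf}(1/2) = \frac{2^{2-n} n!}{(n-1)((n/2-1)!)^2} = \frac{45}{20} = 2.25$</span>.</p>

```python
import random

random.seed(7)
n, trials = 6, 200_000
hits, half_width = 0, 0.01

for _ in range(trials):
    u = sorted(random.random() for _ in range(n))
    med = (u[n // 2 - 1] + u[n // 2]) / 2   # mean of the middle two order stats
    if abs(med - 0.5) < half_width:
        hits += 1

# empirical density near x = 1/2
density_at_half = hits / trials / (2 * half_width)
# closed form gives pdf(1/2) = 2^(2-n) n! / ((n-1)((n/2-1)!)^2) = 2.25 for n = 6
assert abs(density_at_half - 2.25) < 0.15
```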
|
3,068,782 | <p>The canonical basis is not a Schauder basis of the space of bounded sequences, but in some way, it uniquely determines every element in the space. Is it a basis in a weaker sense? How is it called?</p>
<p>Thanks a lot.</p>
| SmileyCraft | 439,467 | <p>A subspace of a normed vector space is closed if and only if it is weakly closed <a href="https://math.stackexchange.com/questions/449301/closed-iff-weakly-closed-subspace">Closed <span class="math-container">$\iff$</span> weakly closed subspace</a>. Hence, a set is a Schauder basis if and only if it is a "basis in the weaker sense".</p>
|
2,064,284 | <blockquote>
<p>Prove that the sequence $\{y_n\}$ where $y_{n+2}=\frac{y_{n+1} +2 y_{n}}{3}$, $n\geq 1$, $0<y_1<y_2$, is convergent by using the subsequential criterion, <strong>by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Find the limit also</strong>.</p>
</blockquote>
<p>I can solve it by Cauchy sequence as $|y_m-y_n|\leq |y_{n}-y_{n+1}|+|y_{n+1}-y_{n+2}|\cdots +|y_{m-1}-y_m|\cdots$,
but here, we have to check convergence by using the subsequential criterion, by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Please help. </p>
| Asinomás | 33,907 | <p>Let $P$ be the minimal polynomial of $M$, and suppose $Q$ is a polynomial of the same degree that annihilates $M$.</p>

<p>Write $P$ as $\alpha Q+R$ where $R$ is of degree less than $Q$ (we can do this with the division algorithm).</p>

<p>Notice that $0=P(M)=\alpha Q(M)+R(M)=0+R(M)$. This implies $R$ is the zero polynomial (since otherwise it would be a non-zero polynomial of degree less than $P$ that annihilates $M$). We conclude that $Q$ is a scalar multiple of $P$.</p>
<p>Your case is a particular case of this, you are just saying that $P$ has degree $n$.</p>
|
2,064,284 | <blockquote>
<p>Prove that the sequence $\{y_n\}$ where $y_{n+2}=\frac{y_{n+1} +2 y_{n}}{3}$, $n\geq 1$, $0<y_1<y_2$, is convergent by using the subsequential criterion, <strong>by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Find the limit also</strong>.</p>
</blockquote>
<p>I can solve it by Cauchy sequence as $|y_m-y_n|\leq |y_{n}-y_{n+1}|+|y_{n+1}-y_{n+2}|\cdots +|y_{m-1}-y_m|\cdots$,
but here, we have to check convergence by using the subsequential criterion, by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Please help. </p>
| Will Jagy | 10,400 | <p>There are other things that happen when the minimal polynomial and characteristic polynomial coincide; note that we demand both monic...</p>
<p>First, while there may be eigenvalues with multiplicity greater than one, nevertheless each eigenvalue occurs in a single Jordan block.</p>
<p>Second, if we call our matrix $A,$ then any matrix $B$ that commutes with $A,$ that is $AB=BA,$ is a polynomial in $A,$ of degree no larger than $n-1$ because of Cayley-Hamilton; explicitly,
$$ B = a_0 I + a_1 A + a_2 A^2 + \cdots + a_{n-1} A^{n-1}. $$
The set of such $B$ makes a vector space; it then has dimension $n,$ which is very small. In comparison, the identity matrix commutes with all matrices; in that case the dimension is $n^2.$ Big.</p>
<p>Here's a simple example, one you can check with 2 by 2 and 3 by 3 matrices. If a diagonal matrix $D$ has $n$ different elements on the diagonal, then it commutes only with other diagonal matrices. These other diagonal matrices may have repetition, for example the identity matrix. </p>
|
2,596,098 | <p>For a square matrix $A$ and identity matrix $I$, how does one prove that $$\frac{d}{dt}\det(tI-A)=\sum_{i=1}^n\det(tI-A_i)$$ Where $A_i$ is the matrix $A$ with the $i^{th}$ row and $i^{th}$ column vectors removed?</p>
| copper.hat | 27,978 | <p>Here is one way to see this:</p>
<p>Note that the map $\phi(t_1,...,t_n) = \det ( \sum_k t_k e_k e_k^T -A)$ is smooth, and if $\tau(t) = (t,....,t)$ then $f(t)=\det (tI-A) = \phi(\tau(t))$.</p>
<p>In particular, $f'(t) = \sum_k {\partial \phi(t,....,t) \over \partial t_k}$.</p>
<p>If we adopt the notation $\det B = d(b_1,...,b_n)$, where $b_k$
is the $k$th column of $B$, we have
\begin{eqnarray}
\phi(t,...,t+\delta,...,t) &=& d(te_1-a_1,..., \delta e_k + te_k -a_k,...,te_n -a_n) \\
&=& \phi(t,...,t) + \delta d(te_1-a_1,..., e_k,...,te_n -a_n) \\
&=& \phi(t,...,t) + \delta \det (tI-A_k)
\end{eqnarray}
and so ${\partial \phi(t,....,t) \over \partial t_k} = \det (tI-A_k)$.</p>
|
853,774 | <blockquote>
<p>If $(G,*)$ is a group and $(a * b)^2 = a^2 * b^2$ then $(G, *)$ is abelian for all $a,b \in G$.</p>
</blockquote>
<p>I know that I have to show $G$ is commutative, ie $a * b = b * a$</p>
<p>I have done this by first using $a^{-1}$ on the left, then $b^{-1}$ on the right, and I end up with an expression $ab = b * a$. Am I mixing up the multiplication and $*$ somehow?</p>
<p>Thanks</p>
| mwmjp | 161,182 | <p>For notational ease, let's write $ab$ in place of $a*b$. This is ultimately, an application of left and right-cancellation in a group. Namely, $$(ab)^2=a^2b^2$$ and expanding each side we see that $$abab=aabb.$$ Canceling on the left we get $bab=abb$ and now canceling on the right we have that $ba=ab$. Hence, $G$ is abelian. </p>
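<p>Since the cancellation argument works pair by pair, $(ab)^2 = a^2b^2$ holds for a particular pair $a,b$ exactly when $ab = ba$. This can be checked exhaustively in a small nonabelian group (my own illustration, using $S_3$):</p>

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))

# pairs violating (ab)^2 = a^2 b^2
violations = [(a, b) for a in S3 for b in S3
              if compose(compose(a, b), compose(a, b))
              != compose(compose(a, a), compose(b, b))]
# pairs that fail to commute
noncommuting = [(a, b) for a in S3 for b in S3
                if compose(a, b) != compose(b, a)]

assert violations                             # the identity fails somewhere
assert set(violations) == set(noncommuting)   # exactly at noncommuting pairs
```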
|
3,407,368 | <p>Please help me to think through this.</p>
<p>Take Riemann, for example. Finding a non-trivial zero with a real part not equal to <span class="math-container">$\frac{1}{2}$</span> (i.e., a counterexample) would disprove the conjecture, and also show it to be decidable.</p>
<p>How about demonstrating that Riemann is undecidable? Would that not imply that we can check zeros ad infinitum without resolving the hypothesis? But, checking zeros can only provide a counterexample, i.e., a disproof. </p>
<p>How (if at all) do these statements differ?</p>
<p>Any non-trivial zeros that we can find through brute force checking will have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>All non-trival zeros have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>Is my assumption that all non-trivial zeros is in the infinite set of zeros that can be checked by brute force correct, or even relevant? Or meaningful?</p>
<p>Please be kind. I'm not sure if my question even makes sense.</p>
| Peter | 82,961 | <p>Statements of this form (the Goldbach conjecture is another such statement) that would be proven to be true if they were proven to be undecidable in ZFC, cannot be shown to be undecidable in ZFC within ZFC.</p>
<p>The reason is that such a proof of undecidability could not work in ZFC, because it would prove the statement. </p>
<p>Statements like the continuum hypothesis or the axiom of choice are of another kind. In this case we could prove them to be undecidable in ZFC without running into some contradiction.</p>
<p>A counterexample of the continuum hypothesis for example must be so abstract that we can not construct it in ZFC.</p>
<p>The proof of undecidability would have to come from outside ZFC. In this way, it could be possible to show that the Riemann hypothesis is undecidable in ZFC and thus proving it to be true.</p>
<p>A statement is (by Goedel) provable if and only if it is true in every interpretation. If it is false in at least one interpretation and true in at least one interpretation, it can neither be proven nor disproven, hence is undecidable within the given theory.</p>
|
3,407,368 | <p>Please help me to think through this.</p>
<p>Take Riemann, for example. Finding a non-trivial zero with a real part not equal to <span class="math-container">$\frac{1}{2}$</span> (i.e., a counterexample) would disprove the conjecture, and also show it to be decidable.</p>
<p>How about demonstrating that Riemann is undecidable? Would that not imply that we can check zeros ad infinitum without resolving the hypothesis? But, checking zeros can only provide a counterexample, i.e., a disproof. </p>
<p>How (if at all) do these statements differ?</p>
<p>Any non-trivial zeros that we can find through brute force checking will have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>All non-trival zeros have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>Is my assumption that all non-trivial zeros is in the infinite set of zeros that can be checked by brute force correct, or even relevant? Or meaningful?</p>
<p>Please be kind. I'm not sure if my question even makes sense.</p>
| saulspatz | 235,128 | <p>Certainly, if the Riemann hypothesis is false, it's decidable, since there is a counter-example, as you say. It's conceivable that it's true but undecidable, since we would never get done checking zeros. This doesn't mean that the Riemann hypothesis actually is undecidable, because brute force is not the only way to attack the problem.</p>
<p>Compare Fermat's last theorem. A brute force approach would be to check the quadruples <span class="math-container">$(x,y,z,n)$</span> in some sequence to see if <span class="math-container">$x^n+y^n=z^n$</span>. We would never find a counterexample, because Fermat's last theorem has been decided -- it's true. </p>
|
4,506,093 | <p><em>Please note that the following is not a duplicate:</em></p>
<p><a href="https://math.stackexchange.com/q/657931/104041">Why negating universal quantifier gives existential quantifier?</a></p>
<p><em>I am asking for a particular type of formal proof. I have added the <a href="/questions/tagged/alternative-proof" class="post-tag" title="show questions tagged 'alternative-proof'" rel="tag" aria-labelledby="alternative-proof-container">alternative-proof</a> tag because I can prove it one way, but I would like another way.</em></p>
<hr />
<p>I have recently had the pleasure of finding <a href="https://proof-checker.org/" rel="nofollow noreferrer">Proof Checker</a>. I want to brush up on my logic - in which I am entirely self-taught - and in doing so, I found that I am stuck trying to prove</p>
<p><span class="math-container">$$(\lnot\forall xPx)\to(\exists y(\lnot Py)).\tag{1}$$</span></p>
<p>Well, when I say "prove", I mean using the following rules:</p>
<ul>
<li>modus ponens ->E</li>
<li>modus tollens MT</li>
<li>modus tollendo ponens DS</li>
<li>double negation DNE</li>
<li>addition vI</li>
<li>adjunction ^I</li>
<li>simplification ^E</li>
<li>bicondition <->I</li>
<li>equivalence <->E</li>
<li>repeat Rep</li>
<li>conditional derivation ->I</li>
<li>reductio ad absurdum RAA</li>
<li>universal instantiation AE</li>
<li>universal derivation AI</li>
<li>existential instantiation EE</li>
<li>existential generalization EI</li>
<li>identity introduction =I</li>
<li>substitution of identicals =E</li>
</ul>
<p><strong>. . . in a "Fitch-style" proof.</strong></p>
<p><a href="https://proof-checker.org/rules.html" rel="nofollow noreferrer">These rules can be found on the Proof Checker site</a>.</p>
<hr />
<p>I know the following.</p>
<blockquote>
<p>Suppose the opposite. Then <span class="math-container">$$\lnot((\lnot\forall xPx)\to(\exists y(\lnot Py))).$$</span> The only way for an implication to be false is for its assumption to be true, <span class="math-container">$\lnot\forall xPx$</span>, while its conclusion is false, <span class="math-container">$\lnot(\exists y(\lnot Py))$</span>. From the former, we have <span class="math-container">$\lnot Pa$</span> for some <span class="math-container">$a$</span>. From the latter, we have <span class="math-container">$\lnot\lnot Pa$</span>, from which we have <span class="math-container">$Pa$</span>, a contradiction.</p>
</blockquote>
<p>This is the written form of the <a href="https://en.m.wikipedia.org/wiki/Method_of_analytic_tableaux" rel="nofollow noreferrer">method of analytic tableaux</a> applied to <span class="math-container">$(1)$</span>; namely:</p>
<p><span class="math-container">$$\begin{array}{ccc}
1. & \lnot(\lnot\forall xPx\to\exists y\lnot Py) & \,\\
2. & \lnot\forall xPx & (1)\\
3. & \lnot \exists y\lnot Py & (1)\\
4. & \lnot Pa & (2)\\
5. & \lnot\lnot Pa & (3)\\
6. & Pa & (5)
\end{array}$$</span></p>
<p>It seems to assume (at least in the metalogic) what I set out to prove. I don't know how to convert this to a formal proof.</p>
<hr />
<p>For what it's worth (and I think it's worth very little), I suppose the proof starts out like (or contains) this:</p>
<p><span class="math-container">$$\frac{|1. \lnot \forall xPx}{\vdots}.$$</span></p>
<p>That is, I can start (a subproof) by assuming <span class="math-container">$\lnot\forall xPx$</span>. My problem is that, as far as I can see, none of the given rules allows me to go from <span class="math-container">$\lnot\forall xPx$</span> to <span class="math-container">$\lnot Pa$</span>, which is what I am guessing is the next line, especially if the tableaux method is anything to go by.</p>
<hr />
<p>Please help :)</p>
| peterwhy | 89,922 | <p>This is what I did a few days ago, when I was playing with the proof checker and wanted to understand how <a href="https://proof-checker.org/rules.html" rel="nofollow noreferrer">universal derivation</a> works in that tool.</p>
<p><span class="math-container">$$\begin{array}{|rll}
1 & \neg\forall x\ Px\\
\hline
&\rlap{\begin{array}{|rll}
2 & \neg Pa\\
\hline
3 & \exists y\ \neg Py & \text{2, Existential generalization}
\end{array}}\\
4 & \neg Pa \to (\exists y\ \neg Py) & \text{2-3, Conditional derivation}\\
&\rlap{\begin{array}{|rll}
5 & \neg(\exists y\ \neg Py)\\
\hline
6 & \neg \neg Pa & \text{4, 5, Modus Tollens}\\
7 & Pa & \text{6, Double Negation}\\
8 & \forall x\ Px & \text{7, Universal derivation}\\
9 & \neg \forall x \ Px & \text{1, Repeat}\\
\end{array}}\\
10 & \exists y \ \neg Py & \text{5-9, Reductio Ad Absurdum}
\end{array}$$</span></p>
<p>Here is an image of the same proof with better aligned reasons. But I was using different variable names a few days ago.</p>
<p><a href="https://i.stack.imgur.com/MlZQo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MlZQo.png" alt="enter image description here" /></a></p>
|
4,506,093 | <p><em>Please note that the following is not a duplicate:</em></p>
<p><a href="https://math.stackexchange.com/q/657931/104041">Why negating universal quantifier gives existential quantifier?</a></p>
<p><em>I am asking for a particular type of formal proof. I have added the <a href="/questions/tagged/alternative-proof" class="post-tag" title="show questions tagged 'alternative-proof'" rel="tag" aria-labelledby="alternative-proof-container">alternative-proof</a> tag because I can prove it one way, but I would like another way.</em></p>
<hr />
<p>I have recently had the pleasure of finding <a href="https://proof-checker.org/" rel="nofollow noreferrer">Proof Checker</a>. I want to brush up on my logic - in which I am entirely self-taught - and in doing so, I found that I am stuck trying to prove</p>
<p><span class="math-container">$$(\lnot\forall xPx)\to(\exists y(\lnot Py)).\tag{1}$$</span></p>
<p>Well, when I say "prove", I mean using the following rules:</p>
<ul>
<li>modus ponens ->E</li>
<li>modus tollens MT</li>
<li>modus tollendo ponens DS</li>
<li>double negation DNE</li>
<li>addition vI</li>
<li>adjunction ^I</li>
<li>simplification ^E</li>
<li>bicondition <->I</li>
<li>equivalence <->E</li>
<li>repeat Rep</li>
<li>conditional derivation ->I</li>
<li>reductio ad absurdum RAA</li>
<li>universal instantiation AE</li>
<li>universal derivation AI</li>
<li>existential instantiation EE</li>
<li>existential generalization EI</li>
<li>identity introduction =I</li>
<li>substitution of identicals =E</li>
</ul>
<p><strong>. . . in a "Fitch-style" proof.</strong></p>
<p><a href="https://proof-checker.org/rules.html" rel="nofollow noreferrer">These rules can be found on the Proof Checker site</a>.</p>
<hr />
<p>I know the following.</p>
<blockquote>
<p>Suppose the opposite. Then <span class="math-container">$$\lnot((\lnot\forall xPx)\to(\exists y(\lnot Py))).$$</span> The only way for an implication to be false is for its assumption to be true, <span class="math-container">$\lnot\forall xPx$</span>, while its conclusion is false, <span class="math-container">$\lnot(\exists y(\lnot Py))$</span>. From the former, we have <span class="math-container">$\lnot Pa$</span> for some <span class="math-container">$a$</span>. From the latter, we have <span class="math-container">$\lnot\lnot Pa$</span>, from which we have <span class="math-container">$Pa$</span>, a contradiction.</p>
</blockquote>
<p>This is the written form of the <a href="https://en.m.wikipedia.org/wiki/Method_of_analytic_tableaux" rel="nofollow noreferrer">method of analytic tableaux</a> applied to <span class="math-container">$(1)$</span>; namely:</p>
<p><span class="math-container">$$\begin{array}{ccc}
1. & \lnot(\lnot\forall xPx\to\exists y\lnot Py) & \,\\
2. & \lnot\forall xPx & (1)\\
3. & \lnot \exists y\lnot Py & (1)\\
4. & \lnot Pa & (2)\\
5. & \lnot\lnot Pa & (3)\\
6. & Pa & (5)
\end{array}$$</span></p>
<p>It seems to assume (at least in the metalogic) what I set out to prove. I don't know how to convert this to a formal proof.</p>
<hr />
<p>For what it's worth (and I think it's worth very little), I suppose the proof starts out like (or contains) this:</p>
<p><span class="math-container">$$\frac{|1. \lnot \forall xPx}{\vdots}.$$</span></p>
<p>That is, I can start (a subproof) by assuming <span class="math-container">$\lnot\forall xPx$</span>. My problem is that, as far as I can see, none of the given rules allows me to go from <span class="math-container">$\lnot\forall xPx$</span> to <span class="math-container">$\lnot Pa$</span>, which is what I am guessing is the next line, especially if the tableaux method is anything to go by.</p>
<hr />
<p>Please help :)</p>
| Graham Kemp | 135,106 | <blockquote>
<p>That is, I can start (a subproof) by assuming ¬∀xPx. My problem is that, as far as I can see, none of the given rules allows me to go from ¬∀xPx to ¬Pa, which is what I am guessing is the next line, especially if the tableaux method is anything to go by.</p>
</blockquote>
<p>No, start <em>exactly</em> as the tableau suggests, by first assuming the antecedent is true <em>and</em> the consequent is false, <em>then</em> assuming <span class="math-container">$\neg Pa$</span> holds for an arbitrary variable (<span class="math-container">$a$</span>). Everything else derives from discharging those assumptions.</p>
<p>So, as the third assumption contradicts the second assumption (via Existential Generalisation), we may discharge the third assumption with RAA to deduce <span class="math-container">$Pa$</span>.</p>
<p>Next, since <span class="math-container">$a$</span> is arbitrary, that now contradicts the first assumption (via Universal Derivation), so we may discharge the second assumption with RAA to deduce the desired conclusion.</p>
<p>Finally, we may discharge the first assumption with Conditional Derivation, wrap it all up and put a bow on top.</p>
<p><span class="math-container">$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}
\qquad{\fitch{}{\fitch{~~1.~\neg\forall x~Px}{\fitch{~~2.~\neg\exists y~\neg Py}{\!\begin{array}{|l}\!\fitch{~~3.~\neg Pa}{~~4.~\exists y~\neg Py\hspace{7ex}\textrm{Existential generalisation}~3\\~~5.~\neg\exists y~\neg Py\hspace{5.5ex}\textrm{Repetition}~2}\\\!~~6.~Pa\hspace{15.5ex}\textrm{Reductio Ad Absurdum}~3{-}5\end{array}\\~~7.~\forall x~Px\hspace{12.25ex}\textrm{Universal derivation}~6\\~~8.~\neg\forall x~Px\hspace{10.5ex}\textrm{Repetition}~1}\\~~9.~\exists y~\neg Py\hspace{14ex}\textrm{Reductio Ad Absurdum}~2{-}8}\\10.~\neg\forall x~Px \to \exists y~\neg Py\hspace{6ex}\text{Conditional derivation}~1{-}9}\\\blacksquare}$</span></p>
<p>Or simply: If it is not true that everything satisfies the predicate, then there must be something that does not satisfy the predicate.</p>
|
2,735,984 | <p>I tried to solve this recurrence by taking out $n+1$ as a common factor on the RHS, but I am still left with both $n \cdot a_n$ and $a_n$ terms.</p>
| Sungjin Kim | 67,070 | <p><strong>Hint</strong> </p>
<p>$$a_{n}=\frac{n+1}na_{n-1}+3n+3,$$
$$
na_n= (n+1)a_{n-1} + 3n(n+1)
$$</p>
<p>$$
\frac{a_n}{n+1}=\frac{a_{n-1}}n +3
$$</p>
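<p>As a sanity check on the hint, iterating the recurrence with exact rational arithmetic confirms that $\frac{a_n}{n+1}$ grows by exactly $3$ per step, which gives the closed form $a_n=(n+1)(a_0+3n)$; the helper name and the sample starting value below are my own choices, purely for illustration:</p>

```python
from fractions import Fraction

def check_recurrence(a0, steps=20):
    """Iterate a_n = (n+1)/n * a_{n-1} + 3n + 3 exactly and confirm that
    b_n = a_n/(n+1) is the arithmetic progression a_0 + 3n."""
    a = Fraction(a0)
    for n in range(1, steps + 1):
        a = Fraction(n + 1, n) * a + 3 * n + 3
        assert a / (n + 1) == a0 + 3 * n      # b_n = b_{n-1} + 3
        assert a == (n + 1) * (a0 + 3 * n)    # equivalent closed form
    return a

print(check_recurrence(5))  # a_20 = 21 * (5 + 60) = 1365
```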
|
1,686,568 | <p>I am learning about tensor products of modules, but there is a question which makes me very confused about it! </p>
<p>If $E$ is a right $R$-module and $F$ is a left $R$-module, then suppose we have a balanced map (or bilinear map) $E\times F\to E\otimes F$. If some element $x\otimes y \in E\otimes F$ is $0$, then can we say $x$ or $y$ must be equal to $0$? I know if $x = 0$ or $y = 0$, then $x\otimes y$ is $0$. Are there other cases where $x\otimes y$ is $0$? Can someone give me a specific example? </p>
<p>Thank you very much!</p>
| Tanner Strunk | 346,324 | <p>I found this question while seeking to answer a related one. I think it's worth saying that $0\otimes n = (0\cdot 0)\otimes n = 0\cdot(0\otimes n) = 0\otimes(0\cdot n) = 0\otimes 0 = 0\in M\otimes_A N$ where $M$ and $N$ are left $A$-modules (or right--I can never remember which is which).</p>
<p>(You may also note that $0\cdot n = (1 + (-1))\cdot n = n - n = 0$.)</p>
|
2,835,767 | <p>Let $V \subset L^2(\Omega)$ be a Hilbert space and $\{V_n\}$ a sequence of subspaces such that
\begin{align*}
V_1 \subset V_2 \subset \dots \quad \text{and} \quad \overline{\bigcup_{n \in \mathbb{N}} V_n} = V \, (\text{w.r.t. } V\text{-norm} ).
\end{align*}
For some $f\in L^2(\Omega)$ we define $\phi_n = \sup_{\| v_n\| = 1, v_n \in V_n} \int_\Omega f(x) v_n(x)\, dx$. How can I prove that
\begin{align*}
\lim_{n\to\infty} \phi_n = \sup_{\| v\| = 1, v \in V} \int_\Omega f(x) v(x)\, dx
\end{align*}
holds? Is this convergence uniform?</p>
| ramanujan | 235,721 | <p>A is true. Let $p_1(x) = a_1 x^2$ and $p_2(x)= a_2 x^2$ be two elements of the given set, where $a_1,a_2 \in \mathbb R$. Then $p_1(x)+p_2(x)= (a_1+a_2)x^2$, which also belongs to the same set (because $a_1+a_2 \in \mathbb R$). So it is closed under addition. And for $k \in \mathbb R$ we have $kp(x)=(ka) x^2$, so it is closed under scalar multiplication. Hence it is a subspace.</p>
<p>B is false. Multiplying $x^2$ by $5$ gives $5x^2$, which doesn't belong to the same set. So it is not a subspace.</p>
|
2,655,075 | <p>How many subsets of the set $\{1, 2, \ldots, 11\}$ have median 6?</p>
<p>So I have split this problem into cases. The first case is if 6 is in the subset and the second is where 6 is not. </p>
<p>In case 1, I did 6 with 0, 1, 2, 3, 4, and 5 numbers surrounding it which yielded 1+16+36+16+1 = 70</p>
<p>My struggle is with case 2, how do I find all the unique subsets that don't use 6?</p>
| Misha Lavrov | 383,078 | <p>For a set not containing $6$ to have a median of $6$, it must have an even number of elements, the middle two of which average to $6$. So they can be $5$ and $7$, or $4$ and $8$, or $3$ and $9$, or $2$ and $10$, or $1$ and $11$.</p>
<p>Each of these is handled identically to the case where $6$ occurs in the set and is the median. For example, in the "$4$ and $8$" case, we may add any number of elements from $\{1,2,3\}$ to our set, provided we add the same number of elements from $\{9,10,11\}$, so the number of sets that fall under this case is $\binom30^2 + \binom31^2 + \binom32^2 + \binom33^2 = 1 + 9 + 9 + 1 = 20.$</p>
<p>By the way, you should double-check your arithmetic for your first case. Your approach is right, but you should get $1 + 25 + 100 + 100 + 25 + 1 = 252$ instead of $70$.</p>
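<p>For what it's worth, the count can also be confirmed by brute force over all $2^{11}$ subsets (a check only, not part of the argument): subsets containing $6$ contribute $\binom{10}{5}=252$, and the even-sized subsets avoiding $6$ turn out to contribute $99$ in total, giving $351$ subsets with median $6$.</p>

```python
from itertools import combinations
from statistics import median

counts = {True: 0, False: 0}  # keyed by "does the subset contain 6?"
for size in range(1, 12):
    for subset in combinations(range(1, 12), size):
        if median(subset) == 6:
            counts[6 in subset] += 1

print(counts[True], counts[False], counts[True] + counts[False])  # 252 99 351
```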
|
206,825 | <p>Let's say I have</p>
<p>N1 = -584</p>
<p>N2 = 110</p>
<p>Z = 0.64 </p>
<p>How do I calculate, from Z, the corresponding value in the range N1..N2? Z ranges from 0 to 1.</p>
| Ross Millikan | 1,827 | <p>If by $Z=0.64$ you want a number that $64\%$ of the way from $-584$ to $110,$ the expression is $-584 + 0.64(110-(-584))$</p>
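<p>In code this is ordinary linear interpolation; the function name below is my own, not from the question:</p>

```python
def lerp(n1, n2, z):
    """Map z in [0, 1] linearly onto the range from n1 to n2."""
    return n1 + z * (n2 - n1)

print(lerp(-584, 110, 0.64))  # -139.84, i.e. 64% of the way from -584 to 110
```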
|
1,849,797 | <p>Complex numbers make it easier to find real solutions of real polynomial equations. Algebraic topology makes it easier to prove theorems of (very) elementary topology (e.g. the invariance of domain theorem).</p>
<p>In that sense, what are theorems purely about rational numbers whose proofs are greatly helped by the introduction of real numbers? </p>
<p>By "purely" I mean: not about Cauchy sequences, Dedekind cuts, etc. of rational numbers. (This is of course a meta-mathematical statement and therefore imprecise by nature.)</p>
<p>"No, there is no such thing, because..." would also be a valuable answer.</p>
| goblin GONE | 42,339 | <p>Maybe this is a little trivial, but I consider the ability to rewrite $$A=\{x \in \mathbb{Q} : x^2 < 2\}$$ as $$A=\{x \in \mathbb{Q} : -\sqrt{2} <x<\sqrt{2}\}$$
to be a benefit.</p>
<p>The latter characterization makes the "structure" of this set much clearer; in particular, it's suddenly clear why this set is convex, by which I mean that if $x,y \in A$, then for all $a \in \mathbb{Q}$ satisfying $x<a<y$, we have $a \in A$.</p>
|
2,419,116 | <p>The problem is:</p>
<p>Prove the convergence of the sequence </p>
<p>$\sqrt7,\; \sqrt{7-\sqrt7}, \; \sqrt{7-\sqrt{7+\sqrt7}},\; \sqrt{7-\sqrt{7+\sqrt{7-\sqrt7}}}$, ....</p>
<p>AND evaluate its limit.</p>
<p>If the convergence is proved, I can evaluate the limit by the recurrence relation</p>
<p>$a_{n+2} = \sqrt{7-\sqrt{7+a_n}}$.</p>
<p>A quickly found solution to this quartic equation is 2; the other roots (if I find them all) can be discarded (since they are either too large or negative).</p>
<p>But this method presupposes that I can find all roots of a quartic equation.</p>
<p>Is there some other method that bypasses this?</p>
<p>For example, can I find another recurrence relation such that I don't have to solve a quartic (or cubic) equation? Or at least a quintic equation that involves only quadratic terms (and can thus be reduced to a quadratic equation)?</p>
<p>If these attempts are futile, I shall happily take my above method as an answer.</p>
| Simply Beautiful Art | 272,831 | <p>To prove the limit exists, show that$$a_{4n}>a_{4n+1}>2>a_{4n+3}>a_{4n+2}$$Using induction. For example,$$a_{4n}>2\implies\underbrace{a_{4n+2}=\sqrt{7-\sqrt{7+a_{4n}}}<\sqrt{7-\sqrt{7+2}}=2}_{\huge a_{4n+2}<2}$$Same with</p>
<p>$a_{4n+1}\implies a_{4n+3},\\a_{4n+2}\implies a_{4n+4},\\a_{4n+3}\implies a_{4n+5}.$</p>
<p>Likewise, don't forget to check $a_0$ and $a_1$. And then show that$$a_{4n}>a_{4n+1}>a_{4n+4}>a_{4n+5}\\a_{4n+7}>a_{4n+6}>a_{4n+3}>a_{4n+2}$$So that we can see that $a_n$ is bounded between $a_0$ and $a_2$, and the subsequences $a_{4n}$ and $a_{4n+1}$ are monotone decreasing and $a_{4n+2}$ and $a_{4n+3}$ are monotone increasing. From there, it simply involves showing that they must converge to $2$.</p>
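<p>As a purely numerical check (no substitute for the monotonicity argument above), iterating $a_{n+2}=\sqrt{7-\sqrt{7+a_n}}$ from the first two terms shows both interleaved subsequences approaching $2$, and $2$ is visibly a fixed point since $\sqrt{7-\sqrt{7+2}}=\sqrt4=2$:</p>

```python
from math import sqrt, isclose

def step(x):
    # a_{n+2} = sqrt(7 - sqrt(7 + a_n))
    return sqrt(7 - sqrt(7 + x))

odd, even = sqrt(7), sqrt(7 - sqrt(7))  # a_1 and a_2
for _ in range(60):
    odd, even = step(odd), step(even)

print(odd, even)              # both essentially 2
assert isclose(odd, 2) and isclose(even, 2)
assert step(2) == 2.0         # 2 is a fixed point of the two-step recurrence
```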
|
1,974,114 | <p>Let R be an integral domain with a finite number of elements. Prove that R is a field.</p>
<p>Let a ∈ R \ {0}, and consider the set aR = {ar : r ∈ R}. </p>
<p>I am guessing I will have to show that |aR| = |R|, and deduce that there exists r ∈ R such that ar = 1, but I don't know how to proceed.</p>
| Bernard | 202,857 | <p><strong>Hint:</strong></p>
<p>If $R$ is an integral domain, multiplication by $a\ne 0$ is an injective map from $R$ to itself (injectivity follows by cancellation). Now, for a map between sets with the same finite cardinality,
$$\text{injective}\iff\text{surjective}\iff\text{bijective}. $$</p>
|
462,569 | <blockquote>
<p>Consider the polynomial ring <span class="math-container">$F\left[x\right]$</span> over a field <span class="math-container">$F$</span>. Let <span class="math-container">$d$</span> and <span class="math-container">$n$</span> be two nonnegative integers.</p>
<p>Prove:<span class="math-container">$x^d-1 \mid x^n-1$</span> iff <span class="math-container">$d \mid n$</span>.</p>
</blockquote>
<p>My attempt:</p>
<hr />
<p>Necessity: let <span class="math-container">$n=d t+r$</span>, <span class="math-container">$0\le r<d$</span>.</p>
<p>Since <span class="math-container">$x^d-1 \mid x^n-1$</span>, we can write</p>
<p><span class="math-container">$x^n-1=\left(x^d-1\right)\left(x^{dt+r-d}+\dots+1\right)$</span>...</p>
<p>So it remains to prove <span class="math-container">$r=0$</span>?</p>
<p>I don't know how to continue from here; how should I proceed?</p>
| user49685 | 49,685 | <p>You can use <em>Long Division</em> to prove that if $d$ does not divide $n$, then when dividing $x^n - 1$ by $x^d - 1$, the remainder will be $x^r - 1$. So unless $r = 0$, $x^d - 1 \not | x^n - 1$.</p>
<p>The other way round, i.e $\Leftarrow$ should be obvious.</p>
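<p>The long-division step has an easy-to-test integer analogue: since $a^d \equiv 1 \pmod{a^d-1}$, the remainder of $a^n-1$ modulo $a^d-1$ is $a^{\,n \bmod d}-1$, mirroring the polynomial remainder $x^r-1$. This is only an illustrative check with integers, not a proof for $F[x]$:</p>

```python
# Remainder of a^n - 1 modulo a^d - 1 is a^(n mod d) - 1,
# which vanishes exactly when d divides n.
for a in range(2, 6):
    for d in range(1, 13):
        for n in range(1, 25):
            r = n % d
            assert (a**n - 1) % (a**d - 1) == a**r - 1
            assert ((a**n - 1) % (a**d - 1) == 0) == (r == 0)
print("ok")
```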
|
186,726 | <p>Just a soft-question that has been bugging me for a long time:</p>
<p>How does one deal with mental fatigue when studying math?</p>
<p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p>
<p>I would wish to continue studying, but these circumstances force me to take a break. Is it truly a case of "the spirit is willing but the brain is weak"?</p>
<p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p>
<p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
| Ronnie Brown | 28,586 | <p>I agree with the importance of the points mentioned by Sasha!</p>
<p>But the question is also: what is your process of studying? I found out the hard way that I learn best when I try to write out mathematics to make it as clear as possible to myself, and even as pretty as I can make it. Sometimes this has resulted in rewrites of traditional versions. </p>
<p>Some well known mathematicians say they learn from conversations with others! </p>
<p>August, 2014: A few additional points. </p>
<p>I heard a magician say he practices until the difficult becomes easy, the easy becomes habit, and the habit becomes beautiful. </p>
<p>I have been helped by having a kind of global question: "What is and what can be <a href="http://pages.bangor.ac.uk/~mas010/hdaweb2.html" rel="nofollow">higher dimensional group theory</a>?" in which to place many particular problems. The idea is really about the place of multiple groupoids in mathematics, and, hopefully, physics. This broad programme has allowed lots of flexibility; it has turned out quite technically difficult in places, but allowing many pictures, and intuitions. </p>
<p>I hope that helps. </p>
|
2,223,577 | <p>$\mathbb{Q}[e^{\frac{2\pi i}{5}}]$ is an extension of $\mathbb{Q}$ of degree 4, since $x^4+x^3+x^2+x+1$ is the irreducible polynomial of $\theta=e^{\frac{2\pi i}{5}}$ over $\mathbb{Q}$.</p>
<p>I'm asked if there is a quadratic extension $K$ of $\mathbb{Q}$ inside $\mathbb{Q}[e^{\frac{2\pi i}{5}}]$.
I suspect that the answer is no.</p>
<p>Since a quadratic extension is always of the form $\mathbb{Q}[\sqrt{k}]$ for an integer $k$, a naive way is to show that the equality $$(a+b\theta+c\theta^2+d\theta^3)^2=k$$ for an integer $k$ is impossible.
But that seems tedious. </p>
<p>Another approach would be that in such case, $\mathbb{Q}[e^{\frac{2\pi i}{5}}]$ is itself a quadratic extension, so we are expected to find an element that looks like $\sqrt{a+b\sqrt{k}}$ inside (for rational $a,b$). But then I'm stuck again.</p>
<p>Any ideas?</p>
| Community | -1 | <p>$\theta$ is a fifth root of unity; $\mathbb{Q}(\theta) / \mathbb{Q}$ is an <em>abelian</em> extension. That is, it is a Galois extension with abelian Galois group.</p>
<p>Every abelian group $G$ of order $n$ has, for every $m \mid n$, at least one subgroup $H$ of order $m$.</p>
<p>Consequently, the extension $\mathbb{Q}(\theta) / \mathbb{Q}$ has at least one subextension of every degree dividing $[\mathbb{Q}(\theta) : \mathbb{Q}]$.</p>
<hr>
<p>There are two ways to produce the quadratic subextension.</p>
<p>We can identify the quadratic extension by looking at the ramification in the ring of integers $\mathbb{Z}[\theta]$. For every $p$ except $5$, the algebraic closure of $\mathbb{F}_p$ has four distinct primitive roots of unity. However, over $\mathbb{F}_5$, every fifth root of unity is $1$.</p>
<p>Consequently, the extension ramifies <em>only</em> over the prime $5$. The extension must be either $\mathbb{Q}(\sqrt{5})$ or $\mathbb{Q}(\sqrt{-5})$. Studying how ramification over $2$ works implies that we must be taking the square root of a number that is $1 \bmod 4$, thus the extension is $\mathbb{Q}(\sqrt{5})$.</p>
<p>In general, for odd $p$, $\mathbb{Q}(\zeta_p)$ will contain either $\mathbb{Q}(\sqrt{p})$ or $\mathbb{Q}(\sqrt{-p})$ as a subfield; the correct square root is whichever $\pm p$ is $1 \bmod 4$.</p>
<hr>
<p>Another way to produce the quadratic subextension is to observe that $\mathbb{Q}(\theta)$ has complex embeddings, and that complex conjugation acts on the field.</p>
<p>Thus, it has a subfield fixed by complex conjugation. We can even identify the subfield as:</p>
<p>$$\mathbb{Q}(\theta + \bar{\theta}) \subseteq \mathbb{Q}(\theta)$$</p>
<p>Since complex conjugation has order 2 (or simply by writing down the minimal polynomial of $\theta$ over the subfield), this field extension is order $2$, and thus $[\mathbb{Q}(\theta + \bar{\theta}) : \mathbb{Q}] = 2$.</p>
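<p>Concretely, dividing the minimal polynomial $\theta^4+\theta^3+\theta^2+\theta+1=0$ by $\theta^2$ shows that $c=\theta+\bar\theta=2\cos(2\pi/5)$ satisfies $c^2+c-1=0$, so $c=\frac{\sqrt5-1}{2}$ and $\sqrt5=2c+1\in\mathbb{Q}(\theta)$. A quick floating-point check of these identities:</p>

```python
from cmath import exp, pi
from math import sqrt, isclose

theta = exp(2j * pi / 5)              # primitive fifth root of unity
c = (theta + theta.conjugate()).real  # theta + conj(theta) = 2*cos(2*pi/5)

assert isclose(c**2 + c - 1, 0, abs_tol=1e-12)  # minimal polynomial of c
assert isclose(c, (sqrt(5) - 1) / 2)            # the positive root
assert isclose(2 * c + 1, sqrt(5))              # so sqrt(5) lies in Q(theta)
print(c)  # about 0.618
```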
|
2,210,871 | <p>I'm doing some calculus homework and I got stuck on a question, but eventually figured it out on my own. My textbook doesn't have all the answers included (it only gives answers to even numbered questions for some reason). Anyways I got stuck when I needed to solve for x for this function.</p>
<p>$${\ -3x^3+8x-4{\sqrt{x}}-1=0}$$</p>
<p>I tried to factor it but I couldn't see a way to remove the radical. However, after a long time of confusion, I could intuitively see that the answer to this question was just $x=1$. Is there a possible way to factor this? Is there any way to solve this other than just looking at it and seeing the correct answer?</p>
<p>If you are curious here is the question in my textbook:</p>
<p>"Find the equation of the tangent line to the curve at the point (1,5)"
$${y=(2-{\sqrt{x})}}{(1+{\sqrt{x}}+3x)}$$</p>
<p>Thank you for your time.</p>
| user344249 | 344,249 | <p>Most colleges offer tutoring services that are really effective in helping you learn the material and helping you to actually understand it outside of class. I would take advantage of those as much as you can because they are often free and the people tutoring really want to help you learn and understand it.</p>
|
1,903,416 | <p>Is there a function that can be bijective, with the set of natural numbers as domain and range, other than $f(n) = n$?</p>
| Hagen von Eitzen | 39,174 | <p>There are uncountably many such maps.</p>
<p>In fact, let $A$ be any subset of $\Bbb N=\{1,2,3,\ldots\}$ such that both $A$ and $\Bbb N\setminus A$ are infinite (for example, $A$ could be the set of primes or the set of perfect squares).
Then we can define $a(n):=$ $n$th smallest element of $A$, $b(n):=$ $n$th smallest element of $\Bbb N\setminus A$, and
$$ f(n)=\begin{cases}a(\tfrac n2)&\text{if $n$ is even}\\b(\tfrac{n+1}2)&\text{if $n$ odd}\end{cases}$$
Different $A$ will give different $f$, hence there are at least as many $f$ as there are $A$ - and that's continuum-many.</p>
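<p>As an illustration (the choice of $A$, the helper names, and the prefix sizes below are mine), taking $A$ to be the set of perfect squares gives a concrete non-identity bijection; the snippet checks injectivity on a prefix and that an initial segment of $\Bbb N$ is covered:</p>

```python
from itertools import count, islice

def nth(pred, k):
    """k-th smallest natural number (1-based) satisfying pred."""
    return next(islice((m for m in count(1) if pred(m)), k - 1, None))

def is_square(m):
    return int(m**0.5 + 0.5) ** 2 == m

def f(n):
    # even n -> (n/2)-th square, odd n -> ((n+1)/2)-th non-square
    if n % 2 == 0:
        return nth(is_square, n // 2)
    return nth(lambda m: not is_square(m), (n + 1) // 2)

values = [f(n) for n in range(1, 201)]
assert len(set(values)) == 200             # injective on this prefix
assert set(range(1, 111)) <= set(values)   # covers an initial segment of N
assert values != list(range(1, 201))       # and f is not the identity
print(values[:8])  # [2, 1, 3, 4, 5, 9, 6, 16]
```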
|
25,172 | <p>What would be some good books for learning differential equations, for a student who likes to learn things rigorously and has a good background in analysis and topology?</p>
| guest troll | 19,801 | <p>There's nothing wrong with learning it rigorously, but I would recommend to learn it "non-rigorously" first and then rigorously. E.g. in the context of a "graduate" ODE course. That's because there are some very important aspects of the topic that you may miss if you only concentrate on rigor, first. Like an intuitive understanding of the 2nd order ODE and the whole forcing function, undamped, overdamped, etc. And the sources that concentrate on rigor often assume that you have the non-rigorous understanding...not just that it's harder without the background, but that you never learn some key insights.</p>
<p>In that vein, a couple of texts that I like that will still give you the basics and are slightly rigorous are Murray Spiegel's <em>Applied Differential Equations</em> (3rd edition) and <em>Ordinary Differential Equations</em> by Tenenbaum and Pollard. After that, I would move to a graduate ODE text, probably something in Springer, for the theory.</p>
<p>But really, you are missing a major trick if you only consider ODEs (or worse PDEs) in the context of analysis/topo, with zero physical insights or references to the rich science and engineering applications.</p>
|
919,040 | <p>I want to prove that a function defines a group action:</p>
<blockquote>
<p>We have group $G$ of diagonal $2\times 2$ matrices under matrix multiplication, and the set $X$ of points of the Cartesian plane, eg:</p>
<p>$G = \left\{ \begin{bmatrix} a &0\\0&b \end{bmatrix} : a,b\in \mathbb{R} - \{0\} \right\}$, $X=\{(x,y): x,y \in \mathbb{R}\}$</p>
<p>For each $g =\begin{bmatrix} a &0\\0&b \end{bmatrix}\in G$ and $(x,y)\in X$ where I use minus to denote that $0\not\in$ this set, define the function</p>
<p>$g((x,y)) = (ax,by)$</p>
</blockquote>
<p>How to prove closure, identity and composition? Refer <a href="https://math.stackexchange.com/posts/919040/revisions">to edits</a> for effort shown. Question de-cluttered so people won't deem it too much effort.</p>
| Martin Sleziak | 8,297 | <p>In fact, if we work with column vectors, the group action you described is just the multiplication of matrices.</p>
<p>$$g(x,y)= \begin{pmatrix}a&0\\0&b\end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax \\ by \end{pmatrix}$$</p>
<p>Now the fact that this is indeed a <a href="http://en.wikipedia.org/wiki/Group_action" rel="nofollow">group action</a> follows from the well-known properties of matrix multiplication:</p>
<ul>
<li><em>Closure:</em> If we multiply a $2\times 2$-matrix by a $2\times 1$-matrix, we again get a $2\times 1$-matrix.</li>
<li><em>Identity:</em> Multiplication by the identity matrix does not change anything.</li>
<li><em>Compatibility</em> is a consequence of associativity of matrix multiplication.</li>
</ul>
<hr>
<p>Another way to look at this problem is to view it as the <a href="http://en.wikipedia.org/wiki/Hadamard_product_%28matrices%29" rel="nofollow">coordinatewise multiplication</a> of 2-dimensional vectors. (If you identify the matrix $\begin{pmatrix}a&0\\0&b\end{pmatrix}$ with the vector $(a,b)$.)</p>
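<p>The three bullet points can be spot-checked numerically; the encoding below, which stores a diagonal matrix as the pair $(a,b)$ of its diagonal entries, is simply a convenient illustration:</p>

```python
def act(g, p):
    """Action of the diagonal matrix diag(a, b) on the point (x, y)."""
    (a, b), (x, y) = g, p
    return (a * x, b * y)

def mul(g, h):
    """Product of two diagonal matrices, stored by their diagonals."""
    return (g[0] * h[0], g[1] * h[1])

identity = (1, 1)
g, h, p = (2, -3), (0.5, 4), (7, -1)

assert act(identity, p) == p                                  # identity axiom
assert act(mul(g, h), p) == act(g, act(h, p))                 # compatibility
assert isinstance(act(g, p), tuple) and len(act(g, p)) == 2   # closure
print(act(g, p))  # (14, 3)
```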
|
919,040 | <p>I want to prove that a function defines a group action:</p>
<blockquote>
<p>We have group $G$ of diagonal $2\times 2$ matrices under matrix multiplication, and the set $X$ of points of the Cartesian plane, eg:</p>
<p>$G = \left\{ \begin{bmatrix} a &0\\0&b \end{bmatrix} : a,b\in \mathbb{R} - \{0\} \right\}$, $X=\{(x,y): x,y \in \mathbb{R}\}$</p>
<p>For each $g =\begin{bmatrix} a &0\\0&b \end{bmatrix}\in G$ and $(x,y)\in X$ where I use minus to denote that $0\not\in$ this set, define the function</p>
<p>$g((x,y)) = (ax,by)$</p>
</blockquote>
<p>How to prove closure, identity and composition? Refer <a href="https://math.stackexchange.com/posts/919040/revisions">to edits</a> for effort shown. Question de-cluttered so people won't deem it too much effort.</p>
| Pece | 73,610 | <p>You might have seen that a group action $G \times X \to X$ is actually <em>the same thing</em> as a group morphism $G \to \operatorname{Bij}(X)$. Namely, for a group action $\varphi \colon G \times X \to X$, the group morphism is $\psi \colon g \mapsto \varphi(g,\cdot)$; conversely, any group morphism $\psi \colon G \to \operatorname{Bij}(X)$ gives rise to a group action $\varphi \colon (g,x) \mapsto \psi(g)(x)$.</p>
<p>Here, viewing $2\times 2$-matrices as linear endomorphisms of the plane, the inclusion $i \colon G \hookrightarrow \operatorname{Bij}(\mathbb R^2)$ is a group morphism giving rise to the group action $(g,(x,y)) \mapsto i(g)(x,y) = g(x,y)$ which is precisely the function of the exercise.</p>
<hr>
<p>Remark that here the image of the inclusion $i$ is actually included in the group $\operatorname{Aut}(\mathbb R^2)$ of linear automorphisms of the plane, not only in the group of set-theoretic bijections. It is what we call a (faithful here) <em>linear representation</em> of the group $G$. Representation theory is a beautiful theory, you can look it up if you are curious.</p>
|
2,954,929 | <p>What happens when <span class="math-container">$x < -2$</span> ? Does the whole square root term just "disappear" which leaves us with 1 which is positive and thus the answer to the question is <span class="math-container">$x\le-1$</span>? Or do we have to constrain the domain of <span class="math-container">$x$</span> to: <span class="math-container">$(-2\le x\le-1)$</span>?</p>
| J.G. | 56,861 | <p>If <span class="math-container">$X$</span> is a vector with Hermitian components, <span class="math-container">$$\langle\psi |X\cdot X|\psi\rangle=\sum_i\langle\psi |X_i^2| \psi\rangle=\sum_i\langle\psi |X_i^\dagger X_i| \psi\rangle=\sum_i\Vert X_i|\psi\rangle\Vert^2\ge 0.$$</span></p>
|
3,678,417 | <p>I understand:
<span class="math-container">$$\sum\limits^n_{i=1} i = \frac{n(n+1)}{2}$$</span>
what happens when we restrict the range such that:
<span class="math-container">$$\sum\limits^n_{i=n/2} i = ??$$</span></p>
<p>Originally I thought the answer might just be <span class="math-container">$\frac{n(n+1)}{2}/2$</span>, but I know that's not correct, since starting the summation at n/2 keeps only the larger values among the numbers between <span class="math-container">$1..n$</span>.</p>
| Alex | 38,873 | <p>Asymptotic solution: the difference, let's call it <span class="math-container">$S_d = S_1 - S_2$</span>, can be obtained from the closed form of each sum, i.e. <span class="math-container">$\frac{n(n+1)}{2}$</span> and <span class="math-container">$\frac{n(n+2)}{8}:$</span>
<span class="math-container">$$
S_d = \frac{n(n+1)}{2} - \frac{n(n+2)}{8} = \frac{3 n^2}{8} + \frac{n}{4} = O(n^2)
$$</span>
so asymptotically the difference is of the same order as both sums. </p>
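<p>For even $n$, the exact value of the sum with the lower limit $i=n/2$ included is $\frac{3n^2+6n}{8}$, while the difference $\sum_{i=1}^{n} i - \sum_{i=1}^{n/2} i = \frac{3n^2+2n}{8}$ computed above omits the boundary term $n/2$; either way the leading term is $\frac{3n^2}{8}$. A quick check of both closed forms:</p>

```python
def partial_sum(n):
    """Exact sum of i for i = n/2, ..., n inclusive (n even)."""
    assert n % 2 == 0
    return sum(range(n // 2, n + 1))

for n in range(2, 201, 2):
    assert partial_sum(n) == (3 * n * n + 6 * n) // 8           # inclusive sum
    assert partial_sum(n) - n // 2 == (3 * n * n + 2 * n) // 8  # S_1 - S_2

print(partial_sum(10))  # 5 + 6 + 7 + 8 + 9 + 10 = 45
```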
|
3,264,693 | <p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p>
<p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p>
<p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p>
<p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p>
<p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p>
<p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. As previously noted by Brilliant.org, exponentiation by real numbers can thought of as stretching.</p>
<p><strong>Here is the crux of my issue:</strong></p>
<blockquote>
<p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span> ) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>. i.e. <strong>Why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation but instead treated like a magnitude of stretching ?</strong></p>
</blockquote>
<p>My math skill is not particularly high level so if anyone can offer as much of an intuitive answer as possible, it would be greatly appreciated!</p>
<p>Edit 1: I guess another way of expressing this question is as follows: </p>
<p>Why does a duality exist between real number exponentiation and real number multiplication but a duality does not exist between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p>
<p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p>
<p>Edit 3: Here is a picture that helps clarify my point of confusion.
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p>
<p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
| Gerry Myerson | 8,269 | <p><span class="math-container">$2^{2i}z$</span> <strong>does</strong> cause a stretching of <span class="math-container">$z$</span>. It causes a stretching by the magnitude of <span class="math-container">$2^{2i}$</span>. And the magnitude of <span class="math-container">$2^{2i}$</span> is <strong>one</strong>. So, it stretches by a factor of <span class="math-container">$1$</span>. If you choose to deny that stretching by a factor of <span class="math-container">$1$</span> is stretching, well, then that's where your problem is. </p>
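<p>Python's complex arithmetic makes the contrast easy to see: $2^{2i}=e^{2i\ln 2}$ has magnitude exactly $1$ (a pure rotation through $2\ln 2$ radians), while multiplication by $2i$ scales lengths by $|2i|=2$:</p>

```python
import cmath

w = 2 ** 2j                       # exp(2j * ln 2), a point on the unit circle
assert abs(abs(w) - 1) < 1e-12    # magnitude 1: rotation only, no stretching

z = 3 + 4j                        # |z| = 5
assert abs(abs(w * z) - abs(z)) < 1e-12        # 2^{2i} z keeps the length
assert abs(abs(2j * z) - 2 * abs(z)) < 1e-12   # 2i z doubles it

assert abs(cmath.phase(w) - 2 * cmath.log(2).real) < 1e-12  # rotation angle
print(w, abs(w))
```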
|
2,148,187 | <p>I am given a charge of $Q(t)$ on the capacitor of an LRC circuit with a differential equation</p>
<p>$Q''+2Q'+5Q=3\sin(\omega t)-4\cos(\omega t)$ with the initial conditions $Q(0)=Q'(0)=0$</p>
<p>$\omega >0$ which is constant and $t$ is time. I am then asked find the steady state and transient parts of the solution and the value of $\omega$ for which the amplitude of the steady state charge maximal.</p>
<p>I believe the transient part is just the homogeneous (complementary) solution to the ODE and the steady-state part is the particular solution.</p>
<p>I solved the homogeneous and got</p>
<p>$Q_{tr}=c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)$ which I am pretty sure is right. </p>
<p>The problem is that I am not given a value for $\omega$ so if I were to go ahead and solve it, I would get a mess because I use undetermined coefficients.</p>
<p>So if I go ahead and "guess" a solution, I get $Q_{ss}=A\sin(\omega t)+B\cos(\omega t)$, but if I differentiate this and plug it back into the equation I get a huge mess, so I am not sure that's entirely the right way to approach this problem...</p>
<p>I mean if I actually solved the steady state solution, I get:</p>
<p>$Q(t)= c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)+ \frac{-3\omega^2-8\omega+15}{\omega^4-6\omega^2+25} \sin( \omega t)+\frac{4\omega^2-6\omega-20}{\omega^4-6\omega^2+25} \cos( \omega t)$</p>
<p>Plugging the initial conditions into this would be terrible. Is this even the correct approach?</p>
<p>For the second part of the problem I guess that I would take the derivative of $Q(t)$ and then find the critical points for which there will be a maximum but I am not sure about that.</p>
<p>Any guidance would be much appreciated thanks :) .</p>
| BobaFret | 43,760 | <p>You're right so far! Your $Q(t)$ is correct.</p>
<p>Determining the values of $c_1$ and $c_2$ isn't too bad compared to what you've done so far. You should get:</p>
<p>$$c_1 = \dfrac{1}{2} \dfrac{3\omega^3 + 4 \omega^2 - 9\omega + 20}{\omega^4 - 6\omega^2 + 25}$$</p>
<p>$$c_2 = - \dfrac{2( 2\omega^2 - 3\omega -10)}{\omega^4 - 6\omega^2 + 25}$$</p>
|
2,148,187 | <p>I am given a charge of $Q(t)$ on the capacitor of an LRC circuit with a differential equation</p>
<p>$Q''+2Q'+5Q=3\sin(\omega t)-4\cos(\omega t)$ with the initial conditions $Q(0)=Q'(0)=0$</p>
<p>$\omega >0$ which is constant and $t$ is time. I am then asked to find the steady state and transient parts of the solution and the value of $\omega$ for which the amplitude of the steady state charge is maximal.</p>
<p>I believe the transient part is just the homogeneous solution to the ODE and the steady state part of this solution is the particular solution.</p>
<p>I solved the homogeneous and got</p>
<p>$Q_{tr}=c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)$ which I am pretty sure is right. </p>
<p>The problem is that I am not given a value for $\omega$ so if I were to go ahead and solve it, I would get a mess because I use undetermined coefficients.</p>
<p>So if I go ahead and "guessed" a solution, I get $Q_{ss}=A\sin(\omega t)+B\cos(\omega t)$ but if I differentiated this and actually plugged this into the derivative I get a huge mess so I am not sure if that's entirely the right way to approach this problem...</p>
<p>I mean if I actually solved the steady state solution, I get:</p>
<p>$Q(t)= c_1e^{-t}\sin(2t)+c_2e^{-t}\cos(2t)+ \frac{-3\omega^2-8\omega+15}{\omega^4-6\omega^2+25} \sin( \omega t)+\frac{4\omega^2-6\omega-20}{\omega^4-6\omega^2+25} \cos( \omega t)$</p>
<p>Plugging the initial conditions into this would be terrible. Is this even the correct approach?</p>
<p>For the second part of the problem I guess that I would take the derivative of $Q(t)$ and then find the critical points for which there will be a maximum but I am not sure about that.</p>
<p>Any guidance would be much appreciated thanks :) .</p>
| Jan Eerland | 226,665 | <p>Another way of solving is to use the Laplace transform; we have that:</p>
<p>$$\mathcal{Q}''\left(t\right)+2\cdot\mathcal{Q}'\left(t\right)+5\cdot\mathcal{Q}\left(t\right)=3\cdot\sin\left(\omega t\right)-4\cdot\cos\left(\omega t\right)\tag1$$</p>
<p>Now, in order to take the Laplace transform of both sides, use this:</p>
<ul>
<li>$$\mathscr{L}_t\left[\mathcal{Q}''\left(t\right)\right]_{\left(\text{s}\right)}=\text{s}^2\cdot\text{Q}\left(\text{s}\right)-\text{s}\cdot\mathcal{Q}\left(0\right)-\mathcal{Q}'\left(0\right)\tag2$$</li>
<li>$$\mathscr{L}_t\left[\mathcal{Q}'\left(t\right)\right]_{\left(\text{s}\right)}=\text{s}\cdot\text{Q}\left(\text{s}\right)-\mathcal{Q}\left(0\right)\tag3$$</li>
<li>$$\mathscr{L}_t\left[\mathcal{Q}\left(t\right)\right]_{\left(\text{s}\right)}=\text{Q}\left(\text{s}\right)\tag4$$</li>
<li>$$\mathscr{L}_t\left[\sin\left(\omega t\right)\right]_{\left(\text{s}\right)}=\frac{\omega}{\text{s}^2+\omega^2}\tag5$$</li>
<li>$$\mathscr{L}_t\left[\cos\left(\omega t\right)\right]_{\left(\text{s}\right)}=\frac{\text{s}}{\text{s}^2+\omega^2}\tag6$$</li>
</ul>
<p>Now, using the initial conditions we get that:</p>
<p>$$\text{s}^2\cdot\text{Q}\left(\text{s}\right)+2\cdot\text{s}\cdot\text{Q}\left(\text{s}\right)+5\cdot\text{Q}\left(\text{s}\right)=3\cdot\frac{\omega}{\text{s}^2+\omega^2}-4\cdot\frac{\text{s}}{\text{s}^2+\omega^2}\tag7$$</p>
<p>Solving $\text{Q}\left(\text{s}\right)$ out of equation $(7)$:</p>
<p>$$\text{Q}\left(\text{s}\right)=\frac{3\omega-4\text{s}}{\left(5+\text{s}\left(2+\text{s}\right)\right)\left(\text{s}^2+\omega^2\right)}\tag8$$</p>
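<p>As a sanity check (a sketch assuming Python with sympy is available), solving $(7)$ for $\text{Q}\left(\text{s}\right)$ reproduces the closed form in $(8)$:</p>

```python
import sympy as sp

s, w = sp.symbols("s omega", positive=True)
Q = sp.symbols("Q")

# Equation (7): (s^2 + 2s + 5) Q(s) = (3w - 4s)/(s^2 + w^2)
eq7 = sp.Eq(s**2 * Q + 2 * s * Q + 5 * Q,
            3 * w / (s**2 + w**2) - 4 * s / (s**2 + w**2))

# Solve for Q(s) and compare with equation (8)
Qs = sp.solve(eq7, Q)[0]
eq8 = (3 * w - 4 * s) / ((5 + s * (2 + s)) * (s**2 + w**2))
print(sp.simplify(Qs - eq8))  # 0
```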
|
388,523 | <p>I have this question:</p>
<p>Evaluate $\int r . dS$ over the surface of a sphere, radius a, centred at the origin. </p>
<p>I'm not really sure what '$r$' is supposed to be? I would guess a position vector? If so, I would have $r . dS$ as $(asin\theta cos\phi, a sin\theta sin\phi, acos\theta) . (a^2sin\theta d\theta d\phi)$ which doesn't seem right. Any pointers appreciated, thanks!</p>
| Andrés E. Caicedo | 462 | <p>There is a hierarchy of fast-growing functions $f_\alpha:\mathbb N\to\mathbb N$ indexed by ordinals $\alpha<\epsilon_0$, where $\epsilon_0$ is the first ordinal fixed point of the ordinal exponentiation map $\tau\mapsto\omega^\tau$, that is,
$$ \epsilon_0=\omega^{\omega^{\dots}}. $$
(Note that this is a countable ordinal.) Let $\zeta_0=1$ and $\zeta_{k+1}=\omega^{\zeta_k}$ for $k\in\mathbb N$. </p>
<p>For each $k\ge0$, let $I\Sigma_{k+1}$ denote the restriction of Peano Arithmetic where the induction axiom is only stated for $\Sigma_{k+1}$-formulas, so $\mathsf{PA}$ is the union of all the $I\Sigma_{k+1}$ for $k\in\mathbb N$.</p>
<blockquote>
<p><strong>Theorem.</strong> (Wainer) If $f$ is a recursive function, provably total in $I\Sigma_{k+1}$ , then $f$ is eventually dominated by some $f_\alpha$, $\alpha<\zeta_{k+1}$. In particular, any recursive $f$ provably total in $\mathsf{PA}$ is eventually dominated by some $f_\alpha$, $\alpha<\epsilon_0$. </p>
</blockquote>
<p>Note that the function $f$ must be recursive. There are non-recursive functions that only take the values $0$ and $1$, so this restriction is essential. The specific hierarchy of functions we pick is not so important. All natural hierarchies that have been considered give the same result that $\epsilon_0$ is the limit. The specific hierarchy mentioned in the theorem is defined as follows:</p>
<p>Any ordinal $\alpha<\epsilon_0$ can be written in a unique way as $\alpha=\omega^\beta(\gamma+1)$ where $\beta<\alpha$ (note that $\beta$ could be $0$). By transfinite recursion, define for limit $\alpha<\epsilon_0$ an increasing sequence $(d(\alpha, n)\mid n<\omega)$ cofinal in $\alpha$ by setting
$$ d(\alpha, n) = \omega^\beta\gamma+\left\{\begin{array}{cl}\omega^\delta n&\mbox{ if }\beta=\delta+ 1,\\ \omega^{d(\beta,n)}&\mbox{ if }\beta \mbox{ is limit.}\end{array}\right. $$
The fast growing hierarchy $(f_\alpha)_{\alpha<\epsilon_0}$ of functions $f:\mathbb N\to\mathbb N$, due to Löb and Wainer, can now be defined as follows: </p>
<ol>
<li>$f_0 (n) = n + 1$.</li>
<li>For $\alpha<\epsilon_0$, $f_{\alpha+1} (n) = f^n_\alpha(n)$, where the superindex indicates that $f_\alpha$ is iterated $n$ times. </li>
<li>For limit $\alpha<\epsilon_0$, $f_\alpha(n) = f_{d(\alpha,n)} (n)$. </li>
</ol>
<p>Note that these functions grow very fast indeed. One can prove that these functions are strictly increasing, and that whenever $\alpha<\beta<\epsilon_0$, there is a number $m$ such that $f_\alpha(n)<f_\beta(n)$ for all $n\ge m$. We say that $f_\beta$ <em>eventually dominates</em> $f_\alpha$.</p>
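<p>The finite levels of the hierarchy are easy to experiment with directly. A small sketch in Python (only the successor rule is implemented, since the limit rule requires an ordinal notation; the function name is ad hoc):</p>

```python
def f(k, n):
    """Löb–Wainer hierarchy at finite index k: f_0(n) = n + 1,
    f_{k+1}(n) = f_k iterated n times starting at n."""
    if k == 0:
        return n + 1
    m = n
    for _ in range(n):
        m = f(k - 1, m)
    return m

print(f(1, 5))  # 10   (f_1(n) = 2n)
print(f(2, 5))  # 160  (f_2(n) = n * 2^n)
print(f(3, 2))  # 2048
```

<p>Already $f_3$ is infeasible to evaluate for inputs beyond the smallest, which matches the description of its growth above.</p>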
<p>In Ramsey theory one sometimes encounters the first four or five levels $f_0$–$f_4$ and rarely one needs to go up to $f_\omega$: Note that $f_1(n)=2n$, $f_2(n)=n2^n$, $f_3(n)$ is larger than a stack of $n$ powers of $2$, etc. The function $f_\omega$ grows roughly as fast as the diagonal of <a href="http://en.wikipedia.org/wiki/Ackermann_function">Ackermann's function</a>, so it eventually dominates all primitive recursive functions. <a href="http://en.wikipedia.org/wiki/Goodstein%27s_theorem">Goodstein's problem</a> requires all the functions up to $\epsilon_0$ and therefore the totality of Goodstein's function is not provable in $\mathsf{PA}$. Several famous unprovability results have been established the same way. Typically, unprovability in arithmetic considers $\Pi^0_2$ sentences, statements $\phi$ of the form "For all $n$ there is an $m$ such that $\psi(n,m)$" where $\psi(n,m)$ is an easily verifiable (recursive) statement about $n,m$. This naturally gives rise to a function, $f(n)=m$ iff $m$ is the least number such that $\psi(n,m)$ holds. If $\phi$ is true, then $f$ is a recursive function: To compute $f(n)$, you consider in succession all natural numbers, verifying for each $k$ whether $\psi(n,k)$ holds. As soon as we find one such $k$, that is the value of $f$. This search succeeds, since $\phi$ is true, so $\psi(n,m)$ holds for some $m$, and therefore there is a least such $m$. The issue is whether the formal statement $\phi$ is provable from $\mathsf{PA}$, which is equivalent to the question of whether $\mathsf{PA}$ can prove that $f$ is a total function. In view of Wainer's theorem, this means that we should verify whether $f$ is eventually dominated by some $f_\alpha$, $\alpha<\epsilon_0$. In practice, when $\phi$ is not provable, we usually show that in fact $f$ eventually dominates each $f_\alpha$, $\alpha<\epsilon_0$. This is the case, for example, for Goodstein's function.</p>
<p>I list now some references, taken from my paper on Goodstein's function, available at my <a href="http://andrescaicedo.wordpress.com/papers/#papers">papers page</a>. To get an idea of how fast these functions grow, I suggest you look at the few values of Goodstein's function I compute in that paper.</p>
<blockquote>
<p>Fairtlough, M., and Wainer, S. <strong>Handbook of Proof Theory</strong>. Elsevier-North Holland,
1998, ch. <em>Hierarchies of provably recursive functions</em>, pp. 149–207. </p>
<p>Löb, M., and Wainer, S. <em>Hierarchies of number theoretic functions. I</em>. Arch. Math.
Logik Grundlagenforsch. <strong>14</strong> (1970), 39–51. </p>
<p>Wainer, S. <em>A classification of the ordinal recursive functions</em>. Arch. Math. Logik Grundlagenforsch. <strong>13</strong> (1970), 136–153. </p>
</blockquote>
<p>Note that the hierarchy can be extended significantly past $\epsilon_0$ (maintaining that the functions are increasing, and each new one eventually dominates its predecessors), and unprovability results in stronger systems are sometimes obtained by appropriate generalizations of Wainer's results to these other systems. The question of how to extend these functions is always the question of how to pick for (smallish) countable ordinals $\alpha$ a cofinal sequence in an explicit fashion. This leads to the delicate topic of natural well-orderings, see</p>
<blockquote>
<p>John N. Crossley, Jane Bridge Kister. <em>Natural well-orderings</em>, Archiv für mathematische Logik und Grundlagenforschung, <strong>26 (1)</strong>, (1987), 57-76.</p>
</blockquote>
|
3,232,296 | <ol>
<li><p>For u, v ∈ ℝⁿ, we have ‖u−v‖ ≤ ‖u+v‖. </p></li>
<li><p>The dot product of two vectors is a vector. </p></li>
<li><p>For u, v ∈ ℝⁿ, we have ‖u−v‖ ≤ ‖u‖+‖v‖. </p></li>
<li><p>A homogeneous system of linear equations with more equations than variables will always have at least one parameter in its solution. </p></li>
<li><p>Given a non-zero vector v, there exist exactly two unit vectors that are parallel to v.</p></li>
</ol>
<p>My answers were</p>
<ol>
<li>FALSE
because if we assumed that a= (-1,-2) and b= (3,4) it would make the statement false </li>
<li>FALSE
because the dot product of 2 vectors is a scalar </li>
<li>FALSE
this would have the same assumption as for question 1 </li>
<li>FALSE
I am not sure </li>
<li>TRUE
I am not sure </li>
</ol>
<p>I am not sure which one of my answers is/are wrong </p>
| Paulo Mourão | 673,659 | <p><span class="math-container">$1$</span> and <span class="math-container">$2$</span> are both right.</p>
<p><span class="math-container">$3$</span> is wrong. The triangle inequality actually implies <span class="math-container">$3$</span>:</p>
<p><span class="math-container">$$||u-v||\leq ||u||+||-v||=||u||+||v||$$</span></p>
<p><span class="math-container">$4$</span> is right. Just consider</p>
<p><span class="math-container">$$\left\{
\begin{array}{ll}
x=0 \\
x=0
\end{array}
\right.$$</span>
The only solution is <span class="math-container">$x=0$</span>. This statement would be true the other way around: a homogeneous system of linear equations with more variables than equations will always have at least one parameter in its solution.</p>
<p>And <span class="math-container">$5$</span> is also right: <span class="math-container">$\bf u=\frac{v}{||v||}$</span> is a unit vector. Any other vector parallel to <span class="math-container">$\bf v$</span> (and thus also parallel to <span class="math-container">$\bf u$</span>) is of the form <span class="math-container">$k\bf u$</span> for some real number <span class="math-container">$k$</span>. And <span class="math-container">$k\bf u$</span> is a unit vector if and only if <span class="math-container">$k=\pm 1$</span>. Hence the two unit vectors are <span class="math-container">$\pm \bf u$</span>.</p>
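<p>A quick numerical sketch of statements 1 and 3 (assuming plain Python; the helper functions below are ad hoc):</p>

```python
import math
import random

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

# Statement 1 fails for the counterexample in the question: a = (-1,-2), b = (3,4)
a, b = [-1, -2], [3, 4]
print(norm(sub(a, b)), norm(add(a, b)))  # ||a-b|| > ||a+b|| here

# Statement 3 (the triangle inequality) holds for random vectors
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(3)]
    v = [random.uniform(-10, 10) for _ in range(3)]
    assert norm(sub(u, v)) <= norm(u) + norm(v) + 1e-12
```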
|
3,046,205 | <p>I am trying to figure out the steps between these equal expressions in order to get a more general understanding of product sequences:
<span class="math-container">$$\prod_{k=0}^{n}\left(3n-k\right) + \prod_{k=n}^{2n-3}\left(2n-k\right) = \prod_{j=2n}^{3n}j + \prod_{j=3}^{n}j =\frac{(3n)!}{(2n-1)!}+\frac{n!}{2}$$</span></p>
<p>I know that <span class="math-container">$ n! :=\prod_{k=1}^{n}k$</span> but I can't figure out how that helps me understand the above equation.</p>
<p>edit: Thank you for the great help! Another thing I don't understand, is how I get from <span class="math-container">$\prod_{k=0}^{n}\left(3n-k\right) + \prod_{k=n}^{2n-3}\left(2n-k\right)$</span> to <span class="math-container">$\prod_{j=2n}^{3n}j + \prod_{j=3}^{n}j$</span>. Any help with understanding this is much appreciated, I will try to figure it out myself while I wait for answers.</p>
| Andreas Caranti | 58,401 | <p>Note first that periodic here means there are <span class="math-container">$K$</span>, <span class="math-container">$P > 0$</span> such that <span class="math-container">$a^{i} = a^{i+P}$</span> for <span class="math-container">$i > K$</span>.</p>
<p>One begins with showing that there are <span class="math-container">$m > n$</span> such that <span class="math-container">$a^{m} = a^{n}$</span>. Then for all <span class="math-container">$i > n$</span> we have
<span class="math-container">$$
a^{i} = a^{i - n + n} = a^{i - n} a^{n} = a^{i - n} a^{m} =
a^{i + (m - n)}.
$$</span>
So you can take <span class="math-container">$K = n$</span> and <span class="math-container">$P = m - n$</span>.</p>
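<p>A concrete finite instance of this argument (a Python sketch; powers of 2 modulo 12 are an ad-hoc choice of example):</p>

```python
# Powers of a = 2 in Z/12Z: 2, 4, 8, 4, 8, 4, ...
a, mod = 2, 12
powers = [pow(a, i, mod) for i in range(1, 20)]
print(powers[:6])  # [2, 4, 8, 4, 8, 4]

# Here a^2 = a^4 (both are 4 mod 12), so with n = 2, m = 4 the proof gives
# K = n = 2 and P = m - n = 2: a^i = a^{i+2} for all i > 2.
for i in range(3, 15):
    assert pow(a, i, mod) == pow(a, i + 2, mod)
```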
|
4,156,482 | <blockquote>
<p>Can every continuous function <span class="math-container">$f(x)$</span> from <span class="math-container">$\mathbb{R}\to \mathbb{R}$</span> be continuously "transformed" into a differentiable function?</p>
</blockquote>
<p>More precisely is there always a continuous (non constant) <span class="math-container">$g(x)$</span> such that <span class="math-container">$g(f(x))$</span> is differentiable?<br></p>
<ul>
<li>This seems to hold for simple functions, for instance the function <span class="math-container">$f(x)=|x|$</span> can be transformed into a differentiable function by the function <span class="math-container">$g(x)=x^2$</span>. <br></li>
<li>If <span class="math-container">$g(x)$</span> is additionally required to be increasing everywhere and
differentiable then the answer seems to be <em><strong>no</strong></em> by the inverse
function theorem, because of the existence of continuous nowhere
differentiable functions.</li>
</ul>
| Frank | 460,691 | <p><em>Edit 1.</em> I realized my proof has a mistake. Because I use the inverse function theorem on <span class="math-container">$g \circ f$</span>, my answer only checks out if <span class="math-container">$g \circ f$</span> is required to be <em>continuously differentiable</em>.</p>
<hr />
<p><em>Original answer.</em> Here's a counterexample.</p>
<p>We say a function <span class="math-container">$f$</span> is <strong>locally injective</strong> at a point <span class="math-container">$x$</span> if there exists a small interval <span class="math-container">$(x - \delta, x + \delta)$</span> on which <span class="math-container">$f$</span> is injective. Suppose we have a function <span class="math-container">$f$</span> that is <em>not</em> locally injective at a point <span class="math-container">$x$</span>. Also, suppose we have a function <span class="math-container">$g$</span> where the composition <span class="math-container">$g \circ f$</span> is differentiable at <span class="math-container">$x$</span>. Note that because <span class="math-container">$f$</span> is not locally injective at <span class="math-container">$x$</span>, <span class="math-container">$g \circ f$</span> cannot be locally injective at <span class="math-container">$x$</span>. So by the inverse function theorem, its derivative <span class="math-container">$(g \circ f)'$</span> must necessarily vanish at <span class="math-container">$x$</span>.</p>
<p><strong>Question.</strong> Can we find a function that is everywhere continuous, but <em>nowhere locally injective</em>?</p>
<p>If we can find such a function <span class="math-container">$f$</span>, then any composition <span class="math-container">$g \circ f$</span> must be have an identically zero derivative. This means <span class="math-container">$g \circ f$</span> must be constant, implying <span class="math-container">$g$</span> is constant (at least in the range of <span class="math-container">$f$</span>)!</p>
<p>We can achieve this with a function <span class="math-container">$f$</span> that is everywhere continuous, but nowhere differentiable. There are many such functions, the most famous being the “pathological” <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow noreferrer">Weierstrass function</a>. To prove <span class="math-container">$f$</span> is nowhere locally injective, suppose toward a contradiction that it is locally injective at a point <span class="math-container">$x$</span>. Then there is in a small interval <span class="math-container">$I$</span> about <span class="math-container">$x$</span> where <span class="math-container">$f$</span> is injective and continuous, and hence monotone. By Lebesgue's monotone function theorem referenced in <a href="https://math.stackexchange.com/questions/2853639/a-continuous-nowhere-differentiable-but-invertible-function">this post</a>, <span class="math-container">$f$</span> must be differentiable almost everywhere in <span class="math-container">$I$</span>. This is a contradiction because <span class="math-container">$f$</span> is nowhere differentiable.</p>
<hr />
<p><em>Edit 2.</em> Instead of examining local injectivity, we can maybe examine a slightly stronger condition. Let's say a function <span class="math-container">$f$</span> has a <strong>corner</strong> (not an actual term) at a point <span class="math-container">$x$</span> if for all <span class="math-container">$\delta > 0$</span>, there exists <span class="math-container">$x_1, x_2 \in (x - \delta, x + \delta)$</span> such that <span class="math-container">$x_1 < x < x_2$</span> and <span class="math-container">$f(x_1) = f(x_2)$</span>. The key here is that a “corner” requires the points breaking injectivity to lie on both sides of <span class="math-container">$x$</span>.</p>
<p>We can show that if <span class="math-container">$f$</span> has a corner at <span class="math-container">$x$</span> and <span class="math-container">$g \circ f$</span> is differentiable at <span class="math-container">$x$</span>, then <span class="math-container">$(g \circ f)'$</span> must vanish at <span class="math-container">$x$</span>. For all <span class="math-container">$\delta > 0$</span>, let <span class="math-container">$x_{1, \delta} < x < x_{2, \delta}$</span> be as in the definition of a corner. Taking the left-side points, we get
<span class="math-container">$$
\tag{1}
(g \circ f)'(x) = \lim_{\delta \to 0^+}
\frac{(g \circ f)(x_{1, \delta}) - (g \circ f)(x)}
{x_{1, \delta} - x},
$$</span>
and taking the right-side points, we get
<span class="math-container">$$
\tag{2}
(g \circ f)'(x) =
\lim_{\delta \to 0^+} \frac{(g \circ f)(x_{2, \delta}) - (g \circ f)(x)}
{x_{2, \delta} - x}.
$$</span>
Note that the numerators of <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> are equal because <span class="math-container">$(g \circ f)(x_{1, \delta}) = (g \circ f)(x_{2, \delta})$</span>. However, the denominators <span class="math-container">$x_{1, \delta} - x$</span> and <span class="math-container">$x_{2, \delta} - x$</span> have opposite signs, so we must have <span class="math-container">$(g \circ f)'(x) = 0$</span>. Therefore, we can instead ask the question:</p>
<p><strong>Question.</strong> Can we find a function that is everywhere continuous, but has a corner at every point in <span class="math-container">$\mathbb{R}$</span>?</p>
<p>Because the Weierstrass function behaves like a fractal, this seems like it would be true (but maybe it only has corners densely packed in <span class="math-container">$\mathbb{R}$</span>, in which continuous differentiability comes back to bite us). I'm not sure how to prove this though.</p>
|
4,156,482 | <blockquote>
<p>Can every continuous function <span class="math-container">$f(x)$</span> from <span class="math-container">$\mathbb{R}\to \mathbb{R}$</span> be continuously "transformed" into a differentiable function?</p>
</blockquote>
<p>More precisely is there always a continuous (non constant) <span class="math-container">$g(x)$</span> such that <span class="math-container">$g(f(x))$</span> is differentiable?<br></p>
<ul>
<li>This seems to hold for simple functions, for instance the function <span class="math-container">$f(x)=|x|$</span> can be transformed into a differentiable function by the function <span class="math-container">$g(x)=x^2$</span>. <br></li>
<li>If <span class="math-container">$g(x)$</span> is additionally required to be increasing everywhere and
differentiable then the answer seems to be <em><strong>no</strong></em> by the inverse
function theorem, because of the existence of continuous nowhere
differentiable functions.</li>
</ul>
| andrew bruckner | 946,388 | <p>Gillis, Note on a conjecture of Erdos, Quart. J. of Math., Oxford 10 (1939), 151–154, has an example of a continuous function <span class="math-container">$f$</span>, all of whose level sets are Cantor sets. Unless <span class="math-container">$g$</span> is a constant function, it seems <span class="math-container">$g \circ f$</span> won't be differentiable.
(Gillis made other claims that were false, but his function does have every level set being a Cantor set).</p>
<p>Foran also has an example, but I don't think he published it. He constructs its graph as the intersection of a nested sequence of rectangles in the unit square.</p>
|
3,604,388 | <p>Let <span class="math-container">$P_n$</span> be the statement that <span class="math-container">$\dfrac{d^{2n}}{dx^{2n}}(x^2-1)^n = (2n)!$</span> </p>
<p>Base case: n = 0, <span class="math-container">$\dfrac{d^0}{dx^0}(x^2-1)^0 = 1 = 0!$</span></p>
<p>Assume <span class="math-container">$P_m = \dfrac{d^m}{dx^m}(x^2-1)^m = m!$</span> is true. </p>
<p>Prove <span class="math-container">$P_{m+1} = \dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1} = [2(m+1)]!$</span> </p>
<p><span class="math-container">$\dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1}$</span></p>
<p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(\dfrac{d^2}{dx^2}(x^2-1)^{m+1}\right)$</span> </p>
<p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(2x(m)(m+1)(x^2-1)^{m-1}\right)$</span></p>
<p>= <span class="math-container">$[\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m][2x(m)(m+1)(x^2-1)^{-1}]$</span></p>
<p>From the inductive hypothesis, </p>
<p>= <span class="math-container">$(2m)! [2x(m)(m+1)(x^2-1)^{-1}]$</span> </p>
<p>I got stuck here, and not sure if I have done correctly thus far? I did not know how to get to <span class="math-container">$[2(m+1)]!$</span>. Please advise. Thank you. </p>
| Calvin Lin | 54,563 | <p>You've made several mistakes. </p>
<p><strong>Hint:</strong> What does the product rule say <span class="math-container">$\frac{d}{dx} f(x) g(x)$</span> is equal to?<br>
Now set <span class="math-container">$ f(x) = x^2 -1, g(x) = (x^2 -1)^m$</span>. </p>
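<p>Independently of the induction argument, the identity <span class="math-container">$\dfrac{d^{2n}}{dx^{2n}}(x^2-1)^n = (2n)!$</span> can be sanity-checked symbolically for small <span class="math-container">$n$</span> (a sketch assuming Python with sympy is available):</p>

```python
import sympy as sp

x = sp.symbols("x")
for n in range(1, 5):
    # Differentiate (x^2 - 1)^n a total of 2n times; only the leading
    # term x^{2n} survives, contributing (2n)!.
    deriv = sp.diff((x**2 - 1)**n, x, 2 * n)
    assert deriv == sp.factorial(2 * n)
print("d^{2n}/dx^{2n} (x^2-1)^n = (2n)! verified for n = 1..4")
```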
|
3,834,796 | <p>I found a really interesting question which is as follows:
Prove that the value of
<span class="math-container">$$\sum^{7}_{k=0}[({7\choose k}/{14\choose k})*\sum^{14}_{r=k}{r\choose k}{14\choose r}] = 6^7$$</span></p>
<p>my approach:</p>
<p>I tried to simplify the innermost sigma as well as trying to simplify by using
<span class="math-container">${n\choose k}=n!/(k!\,(n-k)!)$</span>; however, I can't get hold of this one.</p>
<p>My guess is that the summation simplifies into a standard series but I can't say for sure.
Kindly help me out.</p>
| epi163sqrt | 132,007 | <blockquote>
<p>Setting <span class="math-container">$n=7$</span> we obtain
<span class="math-container">\begin{align*}
\color{blue}{\sum_{k=0}^n}&\color{blue}{\binom{n}{k}\binom{2n}{k}^{-1}\sum_{r=k}^{2n}\binom{r}{k}\binom{2n}{r}}\\
&=\sum_{k=0}^n\binom{n}{k}\frac{k!(2n-k)!}{(2n)!}\sum_{r=k}^{2n}\frac{r!}{k!(r-k)!}\,\frac{(2n)!}{r!(2n-r)!}\\
&=\sum_{k=0}^n\binom{n}{k}\sum_{r=k}^{2n}\binom{2n-k}{r-k}\\
&=\sum_{k=0}^n\binom{n }{k}\sum_{r=0}^{2n-k}\binom{2n-k}{r}\\
&=\sum_{k=0}^n\binom{n}{k}2^{2n-k}\\
&=2^{2n}\sum_{k=0}^n\binom{n}{k}\frac{1}{2^k}\\
&=2^{2n}\left(1+\frac{1}{2}\right)^n\\
&\,\,\color{blue}{=6^n}
\end{align*}</span>
and the claim follows.</p>
</blockquote>
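<p>The <span class="math-container">$n=7$</span> case from the question can also be confirmed by exact rational arithmetic (a sketch in Python):</p>

```python
from fractions import Fraction
from math import comb

# Sum_{k=0}^{7} C(7,k)/C(14,k) * Sum_{r=k}^{14} C(r,k) C(14,r), exactly
total = sum(
    Fraction(comb(7, k), comb(14, k))
    * sum(comb(r, k) * comb(14, r) for r in range(k, 15))
    for k in range(8)
)
print(total)  # 279936 = 6^7
assert total == 6**7
```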
|
2,705,980 | <p>I have the following problem:
$$\begin{cases}
y(x) =\left(\dfrac14\right)\left(\dfrac{\mathrm dy}{\mathrm dx}\right)^2 \\
y(0)=0
\end{cases}$$
Which can be written as:</p>
<p>$$ \pm 2\sqrt{y} = \frac{dy}{dx} $$</p>
<p>I then take the positive case and treat it as an autonomous, separable ODE. I get $y(x)=x^2$ as my solution.</p>
<p>In order to solve this problem, I have to divide each side of the equation by $\sqrt{y}$. But since the solution to this IVP is $y(x)=x^2$, zero is in the image of $y(x)$. So at a particular point $1/\sqrt{y}$ is not defined. But the <strong>solution</strong> is defined at $y =0$.</p>
<p>In fact, $y(x)= 0$ for all x is another solution. But aside from this solution the non-trivial solution is defined at zero also.</p>
<p>So is it wrong to divide across by $\sqrt{y}$? And if so, how else do I approach this question?</p>
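<p>Both candidate solutions can be verified directly by substitution (a sketch assuming Python with sympy is available):</p>

```python
import sympy as sp

x = sp.symbols("x")

# Check both candidate solutions of y = (1/4)(y')^2 with y(0) = 0
for sol in (x**2, sp.Integer(0)):
    rhs = sp.Rational(1, 4) * sp.diff(sol, x)**2
    assert sp.simplify(sol - rhs) == 0  # satisfies the ODE
    assert sol.subs(x, 0) == 0          # satisfies y(0) = 0
print("y = x^2 and y = 0 both solve the IVP")
```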
| Michael Burr | 86,421 | <p>Depending on how much you know, there are some quick proofs. If you know the invertible matrix theorem and basis theorem, then you can get this directly.</p>
<p>Since $\{A\bf{v}_1,\dots,A\bf{v}_n\}$ are linearly independent in $\mathbb{R}^n$, they form a basis. Therefore, the image of $A$ contains a basis for $\mathbb{R}^n$, so the columns of $A$ span $\mathbb{R}^n$. This is one of the statements of the invertible matrix theorem, so $A$ is invertible. Finally, being invertible is the same as being nonsingular.</p>
|
2,190,551 | <p>How can I find the degrees of freedom of a $n \times n$ real orthogonal matrix?</p>
<p>I have tried to proceed by principle of induction but I fail.Please tell me the right way to proceed.</p>
<p>Thank you in advance.</p>
| Community | -1 | <p>Answer to @Distracted Kerl and to the question asked by the OP.</p>
<p>There are essentially $2$ proofs of the required result; both uses the famous formula $1+2+\cdots+n=n(n+1)/2$. You chose the most difficult one. </p>
<p>$O(n)$ is an algebraic set. If $A\in O(n)$, then there is an orthogonal $P$ s.t. $P^TAP$ is in your above form.</p>
<p>Step 1. If you leave the $\pm 1$, then you only have $k$ degrees of freedom while you can have $\approx n/2$ degrees of freedom. The key is to consider the open dense subset constituted by $Z=\{A\in O(n); A$ has distinct pairwise conjugate eigenvalues and at most $1$ real eigenvalue$\}$. Such a matrix $A$ is orthogonally similar to </p>
<p>i) $diag(R_{a_1},\cdots,R_{a_p})$ when $n=2p$ ($p$ degrees of freedom) OR</p>
<p>ii) $diag(R_{a_1},\cdots,R_{a_p},\pm 1)$ when $n=2p+1$ ($p$ degrees of freedom).</p>
<p>We assume that the $(a_i)$ are distinct $mod\;2\pi$.</p>
<p>Step 2. We use a stratification using the Grassmannian varieties $G_{m,k}=\{V;V$ is a vector subspace of $\mathbb{R}^m$ of dimension $k\}$; note that </p>
<p>$(*)$ $G_{m,k}$ is an algebraic set of dimension $k(m-k)$.</p>
<p>Case i). To obtain all the matrices $A$ under the action of $P$ is equivalent to choose orthogonal planes $\Pi_1,\cdots,\Pi_{p}$. The numbers of degrees of freedom are </p>
<p>for $\Pi_1$: $2\cdot(2p-2)$ (according to $(*)$ with $m=2p$)</p>
<p>for $\Pi_2$: $2\cdot(2p-4)$ (according to $(*)$ with $m=2p-2$ since $\Pi_2$ is included in the orthogonal of $\Pi_1$), $\cdots$</p>
<p>The sum of degrees of freedom (for the action of $P$) is $4\cdot(1+2+\cdots+(p-1))=2p^2-2p$ (the famous formula).</p>
<p>Finally when we consider the $p$ angles of the rotations $R_{a_i}$ we obtain $p(2p-1)=n(n-1)/2$ degrees of freedom.</p>
<p>Case ii). In the same way as above.</p>
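<p>The count $n(n-1)/2$ can also be seen infinitesimally: at $A=I$ the constraint $A^TA=I$ has differential $H\mapsto H^T+H$, whose kernel (the skew-symmetric matrices) is the tangent space of $O(n)$. A numeric sketch, assuming numpy is available (the function name is ad hoc):</p>

```python
import numpy as np

def dof_orthogonal_group(n):
    """Nullity of the linearized constraint H -> H^T + H at A = I."""
    L = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            L[:, i * n + j] = (E.T + E).ravel()  # column = vec(E^T + E)
    rank = np.linalg.matrix_rank(L)  # = dim of symmetric matrices, n(n+1)/2
    return n * n - rank

for n in range(2, 6):
    assert dof_orthogonal_group(n) == n * (n - 1) // 2
print([dof_orthogonal_group(n) for n in range(2, 6)])  # [1, 3, 6, 10]
```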
|
313,025 | <p>I got two problems asking for the proof of the limit: </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$</p>
</blockquote>
<p>and, </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$</p>
</blockquote>
<p>I feel that these two problems are of the same kind. Would anyone please help me with one of them so I may figure out the other one? Many thanks! </p>
| Julien | 38,053 | <p>I'll do the second one. And it appears that André Nicolas did the first one while I was writing, so everything is fine.</p>
<p>By integration by parts,
$$
x\int_0^{+\infty}\frac{e^{-px}}{p+1}dp=1-\int_0^{+\infty}\frac{e^{-px}}{(p+1)^2}dp\leq 1
$$
for all $x>0$.</p>
<p>Now by Lebesgue dominated convergence theorem, the rhs tends to $1-0=1$ as $x\rightarrow +\infty$.</p>
<p>So the sup over $x>0$ is indeed $1$.</p>
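<p>The bound $x\int_0^{+\infty}\frac{e^{-px}}{p+1}dp\leq 1$ and the approach to $1$ can be observed numerically; a pure-Python sketch using Simpson's rule (the truncation point $60/x$ is an ad-hoc choice that makes the neglected tail negligible):</p>

```python
import math

def simpson(g, a, b, m=20000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def xI(x):
    # x * integral_0^infinity e^{-px}/(p+1) dp, truncated where e^{-px} ~ e^{-60}
    return x * simpson(lambda p: math.exp(-p * x) / (p + 1), 0.0, 60.0 / x)

vals = [xI(x) for x in (1, 10, 100)]
print(vals)  # increasing toward 1, each value below 1
assert vals[0] < vals[1] < vals[2] < 1
```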
|
3,245,428 | <p>Is it true that every tame knot has at least one alternating diagram?</p>
<p>If yes, is it true that we can always obtain an alternating diagram by a finite number of Reidemeister moves from a diagram of a knot? </p>
<p>If yes, how can we do it?</p>
<p>I am reading GTM Introduction to Knot Theory and find they sort of assume this, which makes me think it should be evident but I cannot figure out.</p>
| Kyle Miller | 172,988 | <p>A knot is called <em>alternating</em> if it has an alternating knot diagram. If there is a sequence of Reidemeister moves on a diagram for a knot that results in an alternating diagram, then the knot is an alternating knot. Because Reidemeister moves are a complete set of moves, given a diagram for an alternating knot, there is some sequence of Reidemeister moves that will give an alternating diagram.</p>
<p>Alternating knots are fairly special. If <span class="math-container">$K$</span> is a prime alternating knot that is not a torus knot, then <span class="math-container">$S^3-K$</span> can be given a complete hyperbolic metric of constant negative curvature. Using the geometrization theorem, it follows that any non-trivial <a href="https://en.wikipedia.org/wiki/Satellite_knot" rel="nofollow noreferrer">satellite operation</a> gives a non-alternating knot.</p>
<p>If the knot is prime and alternating, then every minimal-crossing-number diagram is an alternating diagram (see <a href="https://math.stackexchange.com/questions/2660326/alternating-and-non-altenating-knot-projections-with-same-crossing-number">Alternating and Non-Altenating Knot projections with same crossing number?</a>). It's not clear to me if there exists a bounded-time algorithm that can actually find such a sequence of Reidemeister moves, however! (There is always exhaustively trying all sequences of Reidemeister moves, but this is <em>a priori</em> an unbounded-time algorithm.)</p>
|
3,617,600 | <p>I am trying to understand the proof of the First and Second Variation of Arclength formulas for Riemannian Manifolds. I want some verification that the following covariant derivatives commute. I find it intuitive but I want to also have a formal proof.</p>
<p>Some notation: Let <span class="math-container">$\gamma(t,s):\overbrace{[a,b]}^{t}\times \overbrace{(-\epsilon,\epsilon)}^{s}\rightarrow M$</span> be a variation of <span class="math-container">$\gamma_0(t)$</span> with <span class="math-container">$|\dot{\gamma_{0}}|=\lambda \ \forall t \in [a,b] $</span> and <span class="math-container">$V=\gamma_{*}\left(\frac{\partial}{\partial s}\right)$</span> the variational vector field.</p>
<p>Then I would like to prove that <span class="math-container">$\frac{D}{ds}(\dot{\gamma_t})=\frac{D}{dt}V$</span> or in other words that <span class="math-container">$\frac{D}{ds},\frac{D}{dt}$</span> commute. Note that <span class="math-container">$\frac{D}{ds}$</span> is the covariant derivative along the map <span class="math-container">$\gamma$</span> therefore we will use the property for connections along maps that <span class="math-container">$\nabla^{\gamma}_X(Z\circ \gamma(p))=\nabla_{\gamma_*(X)}Z|_p$</span>.</p>
<p>Indeed <span class="math-container">$\frac{D}{ds}\left( \dot{\gamma_t}\right)=\frac{D}{ds}[\gamma_*(\frac{d}{dt})\circ(\gamma(s,t))]=D_{\gamma_*\left(\frac{d}{dt}\right)}\gamma_*(\frac{d}{ds})=D_{\gamma_*\left(\frac{d}{ds}\right)}\gamma_*(\frac{d}{dt})$</span>.</p>
<p>The last equality follows since <span class="math-container">$D_{\gamma_*\left(\frac{d}{dt}\right)}\gamma_*(\frac{d}{ds})-D_{\gamma_*\left(\frac{d}{ds}\right)}\gamma_*(\frac{d}{dt})=\left[\gamma_*(\frac{d}{dt}),\gamma_*(\frac{d}{ds}) \right]=\gamma_*([\frac{d}{ds},\frac{d}{dt}])=0$</span> since <span class="math-container">$\frac{d}{ds},\frac{d}{dt}$</span> are coordinate vector fields.</p>
| Nick A. | 412,202 | <p>HK Lee's answer is 100% correct. I am going to answer my question just to have an answer in the notation that I am more familiar with i.e. with connections along maps. </p>
<p>Say we have <span class="math-container">$f:(N,h)\rightarrow (M,g)$</span> and we endow <span class="math-container">$M$</span> with a connection <span class="math-container">$\nabla$</span>. Then there is a unique connection along <span class="math-container">$f$</span> denoted <span class="math-container">$\nabla^f$</span>, mainly a map <span class="math-container">$T(N)\times f^*(TM)\rightarrow f^*(TM)$</span> satisfying the standard axioms for a connection. What is more, if <span class="math-container">$\nabla$</span> has no torsion then neither does <span class="math-container">$\nabla^f$</span>. In this notation <span class="math-container">$\frac{D}{dt}=\nabla^{\gamma}_{\partial_t}$</span> and <span class="math-container">$\frac{D}{ds}=\nabla^{\gamma}_{\partial_s}$</span>.</p>
<p>Then <span class="math-container">$$T^{\gamma}(\partial_t,\partial_s)=\frac{D}{dt}\gamma_*(\partial_s)-\frac{D}{ds}\gamma_*(\partial_t)-\gamma_*[\partial_t,\partial_s]$$</span></p>
<p>But since we are using the Levi-Civita connection on <span class="math-container">$M$</span> the tensor <span class="math-container">$T^{\gamma}$</span> vanishes, as well as the term <span class="math-container">$[\partial_t,\partial_s]$</span>. Therefore the result follows immediately. </p>
|
3,959,263 | <p>Let <span class="math-container">$G$</span> be a tree with a maximum degree of the vertices equal to <span class="math-container">$k$</span>.
<strong>At least</strong> how many vertices with a degree of <span class="math-container">$1$</span> can be in <span class="math-container">$G$</span> and why?</p>
<p>I think the answer must be <span class="math-container">$k$</span> but I don't know how to prove it.</p>
| Jonas Linssen | 598,157 | <p><strong>Hint</strong> Any tree on <span class="math-container">$\geq 2$</span> vertices has at least two leaves. Consider a tree <span class="math-container">$T$</span> with maximal degree <span class="math-container">$k$</span> and delete a vertex <span class="math-container">$v$</span> with degree <span class="math-container">$k$</span>. You are left with a forest consisting of <span class="math-container">$k$</span> components, each containing a distinct neighbor <span class="math-container">$v_i$</span> of <span class="math-container">$v$</span> in <span class="math-container">$T$</span>. Finish by making a case distinction on whether each component has one vertex or more, using the first sentence.</p>
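<p>The counting claim (a tree whose maximum degree is <span class="math-container">$k$</span> has at least <span class="math-container">$k$</span> leaves) can be brute-force checked over all labeled trees via Prüfer sequences. This is my sketch, not part of the hint; it relies on two standard Prüfer facts: the degree of a vertex is its multiplicity in the sequence plus one, and the leaves are exactly the labels that never appear.</p>

```python
from itertools import product

def leaves_and_maxdeg(seq, n):
    """For a Prufer sequence of length n-2 on labels 0..n-1:
    deg(v) = (multiplicity of v in seq) + 1, and the leaves are
    exactly the labels that never appear in the sequence."""
    counts = [seq.count(v) for v in range(n)]
    leaves = sum(1 for c in counts if c == 0)
    maxdeg = max(counts) + 1
    return leaves, maxdeg

n = 6
for seq in product(range(n), repeat=n - 2):  # all 6^4 labeled trees on 6 vertices
    leaves, maxdeg = leaves_and_maxdeg(seq, n)
    assert leaves >= maxdeg
print("checked all labeled trees on", n, "vertices")
```

The star (Prüfer sequence repeating one label) attains equality: it has <span class="math-container">$n-1$</span> leaves and maximum degree <span class="math-container">$n-1$</span>.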
|
159,585 | <p>This is a kind of a plain question, but I just can't get something.</p>
<p>For a prime number $p$, consider the congruence $(p+5)(p-1) \equiv 0\pmod {16}$.</p>
<p>How come that, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$</p>
<p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p>
<p>Thanks</p>
| Tony | 1,491 | <p>The assertion $(p+5)(p-1) \equiv 0 \pmod{16}$ is equivalent to $16 \mid (p+5)(p-1)$. Then you consider cases: $2^4 \mid (p+5)$, $2^3 \mid (p+5)$ and $2 \mid p-1$, $2^2 \mid p+5$ and $2^2 \mid p-1$, etc. </p>
|
159,585 | <p>This is a kind of a plain question, but I just can't get something.</p>
<p>For a prime number $p$, consider the congruence $(p+5)(p-1) \equiv 0\pmod {16}$.</p>
<p>How come that, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$</p>
<p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p>
<p>Thanks</p>
| Mohamed | 33,307 | <p><strong>Hint :</strong> We can use the existence and uniqueness of the decomposition of every nonzero integer $N$ as $N=2^m q$, where $m$ is an integer and $q$ an odd integer.
We write:</p>
<p>$p+5=2^k u $ and $p-1 = 2^l v $ where $u$ and $v$ are odd. This implies $u2^k-5 = v 2^l +1$, hence $u2^k-v 2^l = 6$, and then we can deduce that $\min(k,l) \leq 1$</p>
|
159,585 | <p>This is a kind of a plain question, but I just can't get something.</p>
<p>For a prime number $p$, consider the congruence $(p+5)(p-1) \equiv 0\pmod {16}$.</p>
<p>How come that, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$</p>
<p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p>
<p>Thanks</p>
| Jackson Walters | 13,181 | <p>$$(p+5)(p-1) \equiv 0 \text{ (mod 16)} $$
$$\Leftrightarrow (p+5)(p-1)=16k $$
$$\Leftrightarrow p^{2}+4p+(-5-16k)=0$$
$$\Rightarrow p=-2 \pm\frac{1}{2}\sqrt{36+64k}$$
$$\Rightarrow p=-2 \pm \sqrt{9+16k}$$</p>
<p>$$k=0: p=-2\pm 3 \Rightarrow p \in \{1,-5\} \equiv \{1,11\} \text{ (mod 16)}$$
$$k=1: p=-2 \pm 5 \Rightarrow p \in \{3,-7\} \equiv \{3,9\} \text{ (mod 16)}$$</p>
<p>No other values of $k$ will yield new solutions for $p \text{ (mod 16)}$. To see this, note that in order for $p$ to be an integer, we need $9+16k=q^{2}$ for some $q$. But mod 16, this reduces to $q^{2} \equiv 9 \text{ (mod 16)}$. Now a quick check shows that $q \equiv \pm 3$ and $q \equiv \pm 5 \text{ (mod 16)}$ are the only solutions, and $p = -2 \pm q$ then always lands in the same four residue classes: $p \in \{1,3,9,11\} \text{ (mod 16)}$.</p>
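<p>A brute-force check over the residues mod $16$ (my addition, not part of the answer) confirms that these four classes are the only ones:</p>

```python
# Enumerate residues p mod 16 for which (p + 5)(p - 1) is divisible by 16.
solutions = [p for p in range(16) if ((p + 5) * (p - 1)) % 16 == 0]
print(solutions)  # [1, 3, 9, 11]
```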
|
1,981,948 | <p>Is there a relationship between group order and element order? </p>
<p>I know that there is a relationship between group order and subgroup order, which is that $[G:H] = \frac{|G|}{|H|}$ where $H$ is the subgroup of $G$ and $[G:H]$ is the index of $H$ in $G$. But is there a relationship between group order and the order of elements in the group?</p>
<p>For example, let the group $G$ be of order $7^{3}$. Does $G$ have an element of order $7$?</p>
| dezdichado | 152,744 | <p>The order of any element must divide the group order if the group is finite. This follows from what you have written, i.e., Lagrange's theorem. </p>
<p>Your second question is a consequence of Cauchy's theorem.</p>
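<p>A small illustration of both theorems (my addition, using the cyclic group $\mathbb{Z}_{343}$ under addition, where element orders are easy to compute; this is an example, not the general proof):</p>

```python
from math import gcd

def additive_order(k, n):
    # The order of k in (Z_n, +) is n / gcd(n, k).
    return n // gcd(n, k)

n = 7 ** 3  # a group of order 7^3, as in the question
orders = {additive_order(k, n) for k in range(n)}
assert all(n % d == 0 for d in orders)  # Lagrange: every element order divides |G|
assert 7 in orders                      # Cauchy: an element of order 7 exists
print(sorted(orders))  # [1, 7, 49, 343]
```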
|
1,360,835 | <p>Reading "A First Look at Rigorous Probability Theory", and in the definition of outer measure of a set A, we take the infimum over the measure of covering sets for A from the semi-algebra (e.g., intervals in [0,1] ).</p>
<p>Is this set over which we are taking the infimum well-defined? For a given real
number x, how can I tell if x is in this set? I have to find a class of sets A1, A2, ... from the semi-algebra such that x = P(A1)+P(A2)+... </p>
<p>It is not clear how to find at least one such class (and hence determine if x is in this set), or determine that I <em>cannot</em> find such a class and hence x is <em>not</em> in the set. Any clarification is appreciated.</p>
| pancini | 252,495 | <p>First of all, never assume what you are trying to prove. It is possible that it makes sense in your mind to assume something and then test to show it is true, but this is not how formal math is done so most instructors wouldn't accept it.</p>
<p>Second, I am a bit confused about what you are trying to show. This is my guess at your problem:</p>
<blockquote>
<p>Let $f(x)$ be a function and let $x_0$ be the solution to $f'(x_0)=0$. Show that $x_0<c$ for some constant $c$.</p>
</blockquote>
<p>We really need more information here. Are $c$ and $f$ given? Do you know that $f$ is continuous? My guess is that you just want to find the values where $f'$ is $0$ and show that every such $x$ is less than $c$.</p>
|
4,076,033 | <p>I know how to check if a vector or a matrix is linearly dependent or independent, but how do I apply it to this problem?</p>
<p>Let V1, V2, V3 be vectors.
How do I prove that the vector V3 = (2, 5, -5) is linearly dependent on V1 = (1,-2,3) and V2 = (4,1,1)?</p>
<p>Will it be enough or correct if I solved the equation
α1 * V1 + α2 * V2 - V3 = 0
and proved it has a solution?</p>
| José Carlos Santos | 446,262 | <p>Yes, that will work.</p>
<p>Or you can check that<span class="math-container">$$\begin{vmatrix}2&1&4\\5&-2&1\\-5&3&1\end{vmatrix}=0.$$</span>It follows from this that there are numbers <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma$</span> such that <span class="math-container">$\alpha v_3+\beta v_1+\gamma v_2=0$</span>. If <span class="math-container">$\alpha=0$</span>, then <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span> would be linearly dependent, and therefore one of them would be the other one times a scalar. But that's clearly not the case. So, <span class="math-container">$\alpha\ne0$</span> and therefore<span class="math-container">$$v_3=-\frac\beta\alpha v_1-\frac\gamma\alpha v_2.$$</span></p>
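<p>For this particular triple, a quick numeric cross-check (my sketch, not part of the answer) recovers the coefficients via Cramer's rule on the first two coordinates and confirms the third coordinate agrees:</p>

```python
# Find a, b with a*v1 + b*v2 = v3, using the first two coordinates,
# then verify the third coordinate agrees.
v1, v2, v3 = (1, -2, 3), (4, 1, 1), (2, 5, -5)

# Solve  a*1 + b*4 = 2  and  a*(-2) + b*1 = 5  by Cramer's rule.
det = v1[0] * v2[1] - v2[0] * v1[1]          # 1*1 - 4*(-2) = 9
a = (v3[0] * v2[1] - v2[0] * v3[1]) / det    # (2*1 - 4*5)/9 = -2
b = (v1[0] * v3[1] - v3[0] * v1[1]) / det    # (1*5 - 2*(-2))/9 = 1
assert all(abs(a * x + b * y - z) < 1e-12 for x, y, z in zip(v1, v2, v3))
print(a, b)  # -2.0 1.0
```

So $v_3 = -2v_1 + v_2$, exhibiting the dependence explicitly.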
|
341,602 | <p>Let <span class="math-container">$L$</span> be a semisimple Lie algebra and let <span class="math-container">$(V,\varphi)$</span> be a finite-dimensional <span class="math-container">$L$</span>-module representation. Our main goal is to prove that <span class="math-container">$\varphi$</span> is completely reducible.
Consider an <span class="math-container">$L$</span>-submodule <span class="math-container">$W$</span> of <span class="math-container">$V$</span> of codimension one, and let <span class="math-container">$0 \longrightarrow W \longrightarrow V \longrightarrow F \longrightarrow 0$</span> be an exact sequence (where <span class="math-container">$F$</span> is an <span class="math-container">$L$</span>-module). From the book of James Humphreys called "<a href="https://link.springer.com/book/10.1007%2F978-1-4612-6398-2" rel="nofollow noreferrer">Introduction to Lie Algebras and Representation Theory</a>", I have understood the following steps:</p>
<ol>
<li>We take another proper submodule of <span class="math-container">$W$</span> denoted by <span class="math-container">$W'$</span> such that the exact sequence <span class="math-container">$0 \longrightarrow W/W' \longrightarrow V/W' \longrightarrow F \longrightarrow 0$</span> splits, so there exists a one dimensional <span class="math-container">$L$</span>-submodule of <span class="math-container">$V/W'$</span> (say <span class="math-container">$\tilde{W}/W'$</span>) complementary to <span class="math-container">$W/W'$</span>.</li>
<li>We proceed by induction on the dimension of <span class="math-container">$W$</span>, so we get an exact sequence <span class="math-container">$0 \longrightarrow W' \longrightarrow \tilde{W} \longrightarrow F \longrightarrow 0$</span> which splits. It follows easily that <span class="math-container">$V=W \oplus X$</span>, where <span class="math-container">$X$</span> is a submodule complementary to <span class="math-container">$W'$</span> in <span class="math-container">$\tilde{W}$</span>.</li>
<li>We suppose that <span class="math-container">$W$</span> is irreducible, so we may use Schur's lemma on <span class="math-container">$c \vert_{W}$</span> to say that <span class="math-container">$\operatorname{Ker} c$</span> is an <span class="math-container">$L$</span>-submodule of <span class="math-container">$V$</span>, where <span class="math-container">$c$</span> is the endomorphism of <span class="math-container">$V$</span> defined in 6.2.</li>
</ol>
<p>The other parts of the proof are very hard, and I didn't understand them. Can someone help me to figure out those parts? If there is another comprehensible method, can someone share it with us?</p>
| F Zaldivar | 3,903 | <p>Since you also ask for another method, perhaps you may try Hans Samelson's approach in his textbook "Notes on Lie Algebras" (I have an <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=254112" rel="nofollow noreferrer">older edition</a> but it has been <a href="https://link.springer.com/book/10.1007/978-1-4613-9014-5" rel="nofollow noreferrer">republished</a> by Springer). It is in Chapter III Section 4. Roughly, the idea is to prove first a lemma of Whitehead that for the Lie algebra <span class="math-container">$L$</span> acting on a finite dimensional vector space <span class="math-container">$V$</span> and a linear function <span class="math-container">$f:L\rightarrow V$</span> satisfying that <span class="math-container">$f[x,y]=xf(y)-yf(x)$</span> (a cocycle condition) there exists a vector <span class="math-container">$v\in V$</span> such that <span class="math-container">$f(x)=xv$</span> for all <span class="math-container">$x\in L$</span> (a coboundary condition). Then, the proof of Weyl's complete reducibility proceeds to split a canonical epimorphism in a more direct fashion. You may enjoy reading this approach.</p>
|
2,547,508 | <p>I am trying to prove that various expressions are real valued functions. Is it possible to state that, because no square roots (or variants such as quartic roots etc) are in that function, it is a real valued function?</p>
| B. Goddard | 362,009 | <p>Well, $\sin^{-1} 2 = 1.570796327-1.316957897i$ (for a usual branch.) I don't know if this counts as using square root. The series for $\sin^{-1}x$ is found by integrating the series for $1/\sqrt{1-x^2},$ so there's a square root floating around.</p>
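<p>The quoted value can be checked numerically (my addition). Note the sign of the imaginary part depends on the library's branch convention, so the check below only pins down its magnitude, $\ln(2+\sqrt 3) \approx 1.3169578969$:</p>

```python
import cmath
import math

z = cmath.asin(2)
print(z)  # real part is pi/2; imaginary part has magnitude ln(2 + sqrt(3))
assert abs(z.real - math.pi / 2) < 1e-9
assert abs(abs(z.imag) - math.log(2 + math.sqrt(3))) < 1e-9
```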
|
2,172,836 | <p>I'm writing a small java program which calculates all possible knight's tours with the knight starting on a random field on a 5x5 board.</p>
<p>It works well, however, the program doesn't calculate any closed knight's tours which makes me wonder. Is there an error in the code, or are there simply no closed knight's tours on a 5x5 board?</p>
<p>If so, what is the minimum required board size for the existence of at least one closed knight's tour?</p>
| Lelouch | 152,626 | <p>The definition of a 'closed' knights tour on a $m \times n$board, is a sequence of steps from a starting square $a_1$ to another square $a_{mn}$ , such that every square is visited exactly once, and the last sqaure is only one knight step away from $a_1$. Having said that, it is obvious, that for $mn $(mod2) $= 1$, there exists no closed tour. </p>
<p>Short proof:</p>
<p>Suppose $a_1$ is black. Clearly then for any odd $i \le mn$, $a_i$ must be black. Since $mn$ is odd, $a_{mn}$ must be black, which implies that it cannot be a square one knight step away from $a_1$. Thus there exists no closed tour for odd $mn$.</p>
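<p>The colouring argument can be made concrete (my sketch, not part of the original answer): every legal knight displacement $(dx, dy)$ has $dx+dy$ odd, so each move flips the colour $(i+j) \bmod 2$ of the current square, and a closed tour is a cycle of $mn$ moves returning to its starting colour, forcing $mn$ to be even:</p>

```python
# Every legal knight displacement changes the colour (i + j) % 2 of a square.
knight_moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]
assert all((dx + dy) % 2 == 1 for dx, dy in knight_moves)

# A closed tour on an m x n board is a cycle of m*n moves; each move flips
# colour, so returning to the start forces m*n to be even.  5*5 is odd:
m, n = 5, 5
print("closed tour possible by parity:", (m * n) % 2 == 0)  # False
```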
|
4,042,741 | <p>I'm really struggling to understand the literal arithmetic being applied to find a complete residue system of modulo <span class="math-container">$n$</span>. Below is the definition my textbook provides along with an example.</p>
<blockquote>
<p>Let <span class="math-container">$k$</span> and <span class="math-container">$n$</span> be natural numbers. A set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> is called a canonical complete residue system modulo <span class="math-container">$n$</span> if every integer is congruent modulo <span class="math-container">$n$</span> to exactly one element of the set</p>
</blockquote>
<p>I'm struggling to understand how to interpret this definition. Two integers, <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, are "congruent modulo <span class="math-container">$n$</span>" if they have the same remainder when divided by <span class="math-container">$n$</span>. So the set <span class="math-container">$\{a_1,a_2,...,a_k\}$</span> would be all integers that share a quotient with <span class="math-container">$b$</span> divided by <span class="math-container">$n$</span>?</p>
<p>After I understand the definition, this is a simple example provided by my textbook</p>
<blockquote>
<p>Find three residue systems modulo <span class="math-container">$4$</span>: the canonical complete residue system, one containing negative numbers, and one containing no two consecutive numbers</p>
</blockquote>
<p>My first point of confusion is "modulo <span class="math-container">$4$</span>". <span class="math-container">$a \bmod n$</span> is the remainder of Euclidean division of <span class="math-container">$a$</span> by <span class="math-container">$n$</span>. So what is meant by simply "modulo <span class="math-container">$4$</span>"? What literal arithmetic do I perform to find a complete residue system using "modulo <span class="math-container">$4$</span>"?</p>
| fleablood | 280,126 | <p>Prelim: The definition is confusing because it is <em>not</em> assuming <span class="math-container">$k = n$</span>. You <em>will</em> be able to prove <span class="math-container">$k = n$</span> <em>later</em> but in mathematics we don't include <em>anything</em> in a definition that we can prove later.</p>
<p>The definition of a complete residue system is a collection of integers <span class="math-container">$\{a_j\}$</span> so that for <em>any</em> integer, that integer will be congruent (have the same remainder) with exactly <em>one</em> of those integers.</p>
<p>In other words (and probably a much more straightforward definition): for every possible remainder, there will be exactly one integer with that remainder.</p>
<p>For instance if <span class="math-container">$n = 7$</span>, the easiest and most obvious complete residue system would be simply <span class="math-container">$\{0,1,2,3,4,5,6\}$</span>. Every integer will be have remainder <span class="math-container">$0,1,2,3,4,5,6$</span> and those are precisely the numbers in there.</p>
<p>Another complete system could be <span class="math-container">$\{63, 8, 15, -4, 32, 75, 146\}$</span>. If an integer <span class="math-container">$b$</span> has remainder <span class="math-container">$0$</span> it is congruent to <span class="math-container">$63$</span>. <span class="math-container">$63$</span> represents all the integers with remainder <span class="math-container">$0$</span>. And if <span class="math-container">$b$</span> has remainder <span class="math-container">$1$</span> then <span class="math-container">$b$</span> is congruent to <span class="math-container">$8$</span>. <span class="math-container">$8$</span> represents all the integers with remainder <span class="math-container">$1$</span>.... And so on.</p>
<p>Every remainder is represented exactly once.</p>
<p>And that is what a complete residue system means. A residue is a representation of one class of remainders (all the integers with remainder <span class="math-container">$4$</span> for example are represented by <span class="math-container">$32\equiv 4 \pmod 7$</span>, for example). And a <em>complete</em> residue system means every residue is represented.</p>
<p>And as there <span class="math-container">$n$</span> possible remainders there will be <span class="math-container">$n$</span> elements in the system so if the system is <span class="math-container">$\{a_1, ...., a_k\}$</span> then <span class="math-container">$k = n$</span>. (If it were me, I wouldn't even bring up the idea this could be in doubt. It just confuses things the first time you see the definition.)</p>
<p>.....</p>
<p>Okay. To do a complete residue system <span class="math-container">$\pmod 4$</span> you need to find a set <span class="math-container">$\{a_1, a_2, \ldots , a_k\}$</span> where every integer <span class="math-container">$\ldots,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,\ldots$</span> is congruent to exactly one of them.</p>
<p>So we need an <span class="math-container">$a_1\equiv -6\pmod 4$</span>. Well <span class="math-container">$-6\equiv 2 \pmod 4$</span> so let's let <span class="math-container">$a_1 = 2$</span>. And we need something <span class="math-container">$\equiv -5 \pmod 4$</span>. Well <span class="math-container">$-5\equiv 3 \pmod 4$</span> and <span class="math-container">$15 \equiv 3 \pmod 4$</span>, so <span class="math-container">$15 \equiv -5 \pmod 4$</span>; let's use <span class="math-container">$a_2 = 15$</span>.</p>
<p>And we need something <span class="math-container">$\equiv - 4 \pmod 4$</span>. Well <span class="math-container">$-4 \equiv 0 \equiv 28\pmod 4$</span> so let's use <span class="math-container">$a_3 = 28$</span>. And we need something <span class="math-container">$\equiv -3\pmod 4$</span> but <span class="math-container">$-3\equiv 1 \equiv 48321 \pmod 4$</span>. So let's let <span class="math-container">$a_4 = 48321$</span>.</p>
<p>And we need something <span class="math-container">$\equiv -2\pmod 4$</span>. But <span class="math-container">$-2 \equiv 2 = a_1$</span> so we already have it. In fact it looks like we have one of each.</p>
<p>So <span class="math-container">$\{2, 15,28, 48321\}$</span> seems to be complete.</p>
<p>If <span class="math-container">$b$</span> is an integer we have either <span class="math-container">$b = 4k$</span> or <span class="math-container">$4k + 1$</span> or <span class="math-container">$4k + 2$</span> or <span class="math-container">$4k + 3$</span>.</p>
<p>If <span class="math-container">$b = 4k$</span> then <span class="math-container">$b \equiv 28\pmod 4$</span>. And if <span class="math-container">$b = 4k + 1$</span> then <span class="math-container">$b\equiv 48321 \pmod 4$</span>. And if <span class="math-container">$b = 4k + 2$</span> then <span class="math-container">$b \equiv 2\pmod 4$</span> and if <span class="math-container">$b= 4k + 3$</span> then <span class="math-container">$b\equiv 15\pmod 4$</span>.</p>
<p>So ....</p>
<p>the definition is:</p>
<blockquote>
<p><span class="math-container">$\{2, 15,28, 48321\}$</span> is called a canonical complete residue system modulo n if every integer is congruent modulo <span class="math-container">$4$</span> to exactly one element of the set</p>
</blockquote>
<p>Well, that <em>is</em> true so <span class="math-container">$\{2, 15,28, 48321\}$</span> is a complete residue system.</p>
<p>That's all.</p>
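<p>The final claim, that every integer is congruent modulo <span class="math-container">$4$</span> to exactly one element of <span class="math-container">$\{2, 15, 28, 48321\}$</span>, can also be verified mechanically (my sketch, added for illustration):</p>

```python
system = [2, 15, 28, 48321]

# Each residue 0..3 is represented exactly once...
residues = sorted(a % 4 for a in system)
assert residues == [0, 1, 2, 3]

# ...so every integer matches exactly one element of the system.
for b in range(-20, 21):
    matches = [a for a in system if (b - a) % 4 == 0]
    assert len(matches) == 1
print("complete residue system mod 4")
```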
|
1,403,486 | <p>As an introduction to multivariable calculus, I'm given a small introduction to some topological terminology and definitions. As the title says, I have to prove that <span class="math-container">$\{(x,y): x>0\}$</span> is connected. My tools for this are:</p>
<blockquote>
<p><strong>Definition 1</strong>: Two disjoint sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, neither empty, are said to be <strong>mutually separated</strong> if neither contains a boundary point of the other.</p>
<p><strong>Definition 2</strong>: A set is <strong>disconnected</strong> if it is the union of separated subsets.</p>
<p><strong>Definition 3</strong>: A set is <strong>connected</strong> if it is not disconnected.</p>
</blockquote>
<p>Because <strong>Definition 3</strong> is a negation, I'm encouraged to do this by contradiction. Suppose <span class="math-container">$\{(x,y): x>0\}$</span> is disconnected. Then it is the union of mutually separated sets. I don't know where to go from here or if there is a way to show directly that the set can't be expressed as the union of mutually separated sets directly. Any guidance would be appreciated.</p>
| graydad | 166,967 | <p>The two definitions really are saying the same thing. And not even in one of those weird ways where two things are equivalent but don't sound at all related. Like the Bolzano-Weierstrass Theorem.</p>
<p>For a proof by contradiction, let's suppose your definition of equal sets ($X \subseteq Y$ and $Y \subseteq X$) holds while the new definition you have is not true. Then we'd have some sets $A,B$ such that $A \subseteq B, B\subseteq A$ but at least one $b \in B$ where $b \notin A$. Since $b \notin A$ and $b \in B$, $B$ cannot be a subset of $A$; to be a subset of $A$ would require that every element of $B$ is contained in $A$. Contradiction. Therefore the two definitions are equivalent. </p>
|
331,859 | <p>I need to find the antiderivative of
$$\int\sin^6x\cos^2x \mathrm{d}x.$$ I tried substituting $u$ for $\sin^2 x$ or $\cos^2 x$, but that doesn't work. I also tried using the identity $1-\cos^2 x = \sin^2 x$, and again, if I substitute $t = \sin^2 x$, I'm stuck with its derivative in the $dt$.</p>
<p>Can I be given a hint?</p>
| lab bhattacharjee | 33,337 | <p>$$\text{ As }\cos2y=2\cos^2y-1=1-2\sin^2y$$</p>
<p>$$\sin^6x\cos^2x=\left(\frac{1-\cos2x}2\right)^3\left(\frac{1+\cos2x}2\right)$$</p>
<p>$$16\sin^6x\cos^2x=(1-3\cos2x+3\cos^2x-\cos^32x)(1+\cos2x)$$</p>
<p>$$=\left(1-3\cos2x+3\frac{(1+\cos4x)}2-\frac{(\cos6x+3\cos2x)}4\right)(1+\cos2x)$$ (applying $\cos3y=4\cos^3y-3\cos y$)</p>
<p>$$64\sin^6x\cos^2x=(10-15\cos2x+6\cos4x-\cos6x)(1+\cos2x)$$</p>
<p>$$=10-15\cos2x+6\cos4x-\cos6x+10\cos2x-15\cos^22x+6\cos4x\cos2x-\cos6x\cos2x$$</p>
<p>$$=10-5\cos2x+6\cos4x-\cos6x+10\cos2x-15\frac{(1+\cos4x)}2+6\frac{(\cos2x+\cos6x)}2-\frac{(\cos4x+\cos8x)}2$$ (applying $2\cos A\cos B=\cos(A-B)+\cos(A+B)$)</p>
<p>$$\text{ So, }128\sin^6x\cos^2x=5-4\cos2x-4\cos4x+4\cos6x-\cos8x$$</p>
<hr>
<p>Alternatively </p>
<p>as we know, $e^{ix}=\cos x+i\sin x,e^{-ix}=\cos x-i\sin x\implies \cos x=\frac{e^{ix}+e^{-ix}}2,\sin x=\frac{e^{ix}-e^{-ix}}{2i}$</p>
<p>So, $$\sin^6x\cos^2x=\left(\frac{e^{ix}-e^{-ix}}{2i}\right)^6\left(\frac{e^{ix}+e^{-ix}}2\right)^2$$</p>
<p>$$=\frac{\left(e^{6ix}+e^{-6ix}-\binom61(e^{4ix}+e^{-4ix})+\binom62(e^{2ix}+e^{-2ix})-\binom63\right)}{-2^6}$$
$$\cdot\frac{\left(e^{2ix}+e^{-2ix}+2\right)}{2^2}$$</p>
<p>$$=\frac{e^{8ix}+e^{-8ix}-(6-2)(e^{6ix}+e^{-6ix})+(1+\binom62-2\cdot\binom61)(e^{4ix}+e^{-4ix})-(\binom63-2\cdot\binom62+\binom61)(e^{2ix}+e^{-2ix})+2\binom62-2\binom63}{-2^8}$$</p>
<p>$$=\frac{2\cos8x-4\cdot2\cos6x+4\cdot2\cos4x+4\cdot2\cos2x-10}{-256}\text{ as }e^{ix}+e^{-ix}=2\cos x$$</p>
<p>Now, simplify and use $\int\cos mxdx=\frac{\sin mx}m+C$</p>
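<p>A quick numerical spot-check of the final reduction, $128\sin^6x\cos^2x=5-4\cos2x-4\cos4x+4\cos6x-\cos8x$ (my addition; the sample points are arbitrary):</p>

```python
import math

def lhs(x):
    return 128 * math.sin(x) ** 6 * math.cos(x) ** 2

def rhs(x):
    return (5 - 4 * math.cos(2 * x) - 4 * math.cos(4 * x)
            + 4 * math.cos(6 * x) - math.cos(8 * x))

# Compare the two sides at a spread of sample points.
for k in range(50):
    x = -3.0 + 0.123 * k
    assert abs(lhs(x) - rhs(x)) < 1e-9
print("identity holds at sampled points")
```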
|
366,724 | <p>Suppose $D$ is an integral domain and that $\phi$ is a nonconstant function from $D$ to
the nonnegative integers such that $\phi(xy) = \phi(x)\phi(y)$. If $x$ is a unit in $D$, show that
$\phi(x) = 1$.</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> First show that if $e$ is the identity element, then $\phi(e)=1$. This should be an easy consequence of $ee=e$. </p>
<p>Then use the fact that if $x$ is a unit, and $y$ is the inverse of $x$, then $\phi(e)=\phi(xy)=\phi(x)\phi(y)$. </p>
<p><strong>Added:</strong> It is all too easy to forget about the possibility that $\phi$ takes on the value $0$. Let $\phi(e)=a$. Then since $e=e^2$, we have $\phi(e)=\phi(e^2)=\phi(e)\phi(e)$. So $a^2=a$. Thus $a=0$ or $a=1$. If $a=0$, then for any $x$, $\phi(x)=\phi(e)\phi(x)=0$.But we were told $\phi$ is non-constant. so $a=1$. </p>
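<p>A concrete instance of this setup (my example, not part of the answer) is the Gaussian integers $\mathbb{Z}[i]$ with $\phi(a+bi)=a^2+b^2$: $\phi$ is multiplicative, and the units $\pm 1, \pm i$ are exactly the elements with $\phi=1$, matching the statement to prove.</p>

```python
def norm(z):          # z = (a, b) represents the Gaussian integer a + bi
    a, b = z
    return a * a + b * b

def mul(z, w):        # Gaussian-integer multiplication
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# phi(xy) = phi(x) phi(y) on a few sample pairs...
pairs = [((3, 2), (1, -4)), ((0, 5), (-2, 7)), ((1, 1), (1, -1))]
for z, w in pairs:
    assert norm(mul(z, w)) == norm(z) * norm(w)

# ...and each unit has phi = 1, as the exercise predicts.
units = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert all(norm(u) == 1 for u in units)
print("norm is multiplicative; units have norm 1")
```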
|
696,869 | <p>Question:
Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible.</p>
<p>My work: Based on the section I read, I will treat $I$ as an identity matrix, which is either a $1 \times 1$ matrix containing a $1$, or a square matrix whose main diagonal is all ones and whose other entries are zero. I will also treat $O$ as a zero matrix, which is a matrix of all zeros.</p>
<p>So the question wants me to show that the square matrix $A$ will make the following equation true. Okay so I pick $A$ to be $[-1]$, a $1 \times 1$ matrix with a $-1$ inside. This was out of pure luck.</p>
<p>This makes $A^2 = [1]$. This makes $2A = [-2]$. The identity matrix is $[1]$.</p>
<p>$1 + -2 + 1 = 0$. I satisfied the equation with my choice of $A$ which makes my choice of the matrix $A$ an invertible matrix.</p>
<p>I know that matrix $A$ times the inverse of $A$ is the identity matrix.</p>
<p>$[-1] * inverse = [1]$. So the inverse has to be $[-1]$.</p>
<p>So the inverse of $A$ is $A$. </p>
<p>It looks right mathematically speaking. </p>
<p>Can anyone tell me how they would pick the square matrix $A$? I picked my matrix out of pure luck. </p>
| vadim123 | 73,324 | <p>The inverse of $A$ is almost certainly not $A$. You have $A^2+2A=-I$, or $A(A+2I)=-I$. Multiplying by $-1$ you get $$A(-A-2I)=I$$</p>
<p>Hence the inverse of $A$ is $-A-2I$.</p>
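<p>As a concrete sanity check (my example, not from the answer): take the non-scalar solution $A = \begin{pmatrix}-1&1\\0&-1\end{pmatrix}$ of $A^2+2A+I=0$ and verify that $-A-2I$ really inverts it.</p>

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 1], [0, -1]]
I = [[1, 0], [0, 1]]

# A satisfies A^2 + 2A + I = 0 ...
A2 = matmul(A, A)
assert all(A2[i][j] + 2 * A[i][j] + I[i][j] == 0
           for i in range(2) for j in range(2))

# ... and -A - 2I is its inverse, as derived above.
inv = [[-A[i][j] - 2 * I[i][j] for j in range(2)] for i in range(2)]
assert matmul(A, inv) == I
print(inv)  # [[-1, -1], [0, -1]]
```

Note that here the inverse $-A-2I$ is genuinely different from $A$, unlike in the lucky $1\times 1$ case $A=[-1]$.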
|
696,869 | <p>Question:
Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible.</p>
<p>My work: Based on the section I read, I will treat $I$ as an identity matrix, which is either a $1 \times 1$ matrix containing a $1$, or a square matrix whose main diagonal is all ones and whose other entries are zero. I will also treat $O$ as a zero matrix, which is a matrix of all zeros.</p>
<p>So the question wants me to show that the square matrix $A$ will make the following equation true. Okay so I pick $A$ to be $[-1]$, a $1 \times 1$ matrix with a $-1$ inside. This was out of pure luck.</p>
<p>This makes $A^2 = [1]$. This makes $2A = [-2]$. The identity matrix is $[1]$.</p>
<p>$1 + -2 + 1 = 0$. I satisfied the equation with my choice of $A$ which makes my choice of the matrix $A$ an invertible matrix.</p>
<p>I know that matrix $A$ times the inverse of $A$ is the identity matrix.</p>
<p>$[-1] * inverse = [1]$. So the inverse has to be $[-1]$.</p>
<p>So the inverse of $A$ is $A$. </p>
<p>It looks right mathematically speaking. </p>
<p>Can anyone tell me how they would pick the square matrix $A$? I picked my matrix out of pure luck. </p>
| user76568 | 74,917 | <p>$$\det{A} \cdot \det{(A+2I)} =\det{[A(A+2I)]}=\det{(-I)}=\pm 1 \implies \det{A} \neq 0 \iff A \space \text{is invertible}$$ </p>
|
2,898,767 | <p>For $M_n (\mathbb{C})$, the vector space of all $n \times n $ complex matrices,</p>
<p>if $\langle A, X \rangle \ge 0$ for all $X \ge 0$ in $M_n{\mathbb{C}}$,then $A \ge 0$</p>
<p>which of the following define an inner product on $M_n(\mathbb{C})$?</p>
<p>$1)$$ \langle A, B\rangle = tr(A^*B)$</p>
<p>$2)$$ \langle A, B\rangle = tr(AB^*)$</p>
<p>$3)$$\langle A, B\rangle = tr(BA)$</p>
<p>Taken from Zhang's linear algebra book, page 112.</p>
<p><a href="https://i.stack.imgur.com/72YYh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/72YYh.png" alt="enter image description here"></a></p>
<p><strong>My attempts:</strong>
I read this Wikipedia article, but could not get any idea on how to clarify these options:</p>
<p><a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Inner_product_space</a></p>
<p>Any hints/solutions will be appreciated, thank you.</p>
| user | 505,767 | <p>The fact is that <a href="http://www.wolframalpha.com/input/?i=domain%20x%5E(1%2F3)" rel="nofollow noreferrer"><strong>Wolfram</strong></a> assumes the domain $x\ge 0$ for the function $\sqrt[3] x$ even if for $x \in \mathbb{R}$ the function is well defined on the whole domain.</p>
|
402,750 | <p>I read that a continuous random variable having an exponential distribution can be used to model inter-event arrival times in a Poisson process. Examples included the times when asteroids hit the earth, when earthquakes happen, when calls are received at a call center, etc. In all these examples, the expected value of the number of events per unit of time, lambda, is known and is constant over time. Moreover, each event's occurrence is independent of previous events' occurrences. And the exponential variable that models inter-event arrivals has the same lambda parameter as the Poisson variable that models the number of events.</p>
<p>And now, my problem. I don't get the connection between the intuition behind the exp distrib and its pdf. It seems obvious that the more time passes by without an earthquake happening, the more likely it is that an earthquake will happen. Assume my understanding of lambda is correct, i.e., lambda = the rate at which the event happens, e.g., 5 earthquakes per minute on some remote, angry planet. On the pdf graphic, the prob of generating a value between 0 and 1 is greater than the prob of choosing a value between 4 and 5, for instance. How is this graphic related to the fact that on average we need to have 5 earthquakes per minute?</p>
| Tim | 74,128 | <p>The link between the Poisson point process and the exponential is probably the key to understanding this.</p>
<p>Let's say that asteroids hit your planet as a Poisson point process with rate $\lambda$ per minute. This means that if $X_i$ is the number of asteroids that arrive in the $i$th minute then the sequence $X_i$ are independent Poisson distributed random variables with mean $\lambda$. Furthermore the exact arrival times of the $X_i$ asteroids in minute $i$ are distributed uniformly in the interval $(i-1,i)$.</p>
<p>The "magic" of the Poisson distribution then means that in any period of $T$ minutes, the number of asteroids that arrive is Poisson distributed with mean $\lambda T$, even if $T$ is not an integer. </p>
<p>So, what is the probability that the first asteroid lands in the $4$th minute?
That's two events, firstly at least one asteroid lands in the $4$th minute, and secondly, that <em>no asteroid arrives in the first three minutes</em>.</p>
<p>That's reasonably easy to work out. The probability that at least one asteroid lands in the $4$th minute is $1-e^{-\lambda}$, and the probability that no asteroid lands in the first three minutes is $e^{-3\lambda}$. So the probability that the first asteroid lands in the $4$th minute is</p>

<p>$$(1-e^{-\lambda})e^{-3\lambda} = e^{-3\lambda}-e^{-4\lambda} = \int_3^4 \lambda e^{-\lambda t}\,dt.$$</p>
<p>The maths works out the same for any interval. But the intuition is hopefully a bit clearer as to why the pdf is decreasing. If the first asteroid arrives at time $t$ it's because no asteroids have arrived in the first $t$ minutes. </p>
<p>As the (infinitesimal) probability that an asteroid arrives at time $t$ doesn't depend on $t$, the probability density that the <em>first</em> asteroid arrives at time $t$ is directly proportional to the probabilty that no asteroids arrive in the interval $(0,t)$ which must be decreasing.</p>
|
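The computation above is easy to confirm by simulation: drawing exponential inter-arrival times and counting how often the first one falls in the fourth minute should reproduce $e^{-3\lambda}-e^{-4\lambda}$. A minimal sketch (the rate $\lambda = 0.5$ is an arbitrary illustrative choice, not from the original post):

```python
# Monte Carlo check: for exponential inter-arrival times with rate lam,
# the probability that the FIRST asteroid lands in the 4th minute
# should equal exp(-3*lam) - exp(-4*lam).
import math
import random

random.seed(0)
lam = 0.5
trials = 200_000

# The time of the first arrival is itself Exponential(lam).
hits = sum(3 <= random.expovariate(lam) < 4 for _ in range(trials))
empirical = hits / trials
exact = math.exp(-3 * lam) - math.exp(-4 * lam)

print(round(empirical, 3), round(exact, 3))  # both close to 0.088
```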
402,750 | <p>I read that a continuous random variable having an exponential distribution can be used to model inter-event arrival times in a Poisson process. Examples included the times when asteroids hit the earth, when earthquakes happen, when calls are received at a call center, etc. In all these examples, the expected value of the number of events per unit of time, lambda, is known and is constant over time. Moreover, each event's occurrence is independent of previous events' occurrences. And the exponential variable that models inter-event arrivals has the same lambda parameter as the Poisson variable that models the number of events.</p>
<p>And now, my problem. It don't get the connection between the intuition behind the exp distrib and its pdf. It seems obvious that the more time it passes by without an earthquake happening, the more likely it is than an earthquake will happen. Assume my understanding of lambda is correct, i.e., lambda = the rate at which the event happens, e.g., 5 earthquakes per minute on some remote, angry planet. On the pdf graphic, the prob of generating a value between 0 and 1 is greater than the prob of choosing a value between 4 and 5 for instance. How is this graphic related to the fact that on average we need to have 5 earthquakes per minute?</p>
| not all wrong | 37,268 | <p>The key property of the exponential distribution is that it is memoryless, so that translation of the PDF in time $x\to x-c$ is just scaling of the PDF $f\to e^{\lambda c} f$ plus cutting it off at zero. Probabilistically this says $P(X>s+t|X>s)=P(X>t)$. That is, it doesn't matter how long it was since the last event, the probability that it takes a certain time <strong>from now</strong> is the same as it would be in any other situation. For somebody who has been waiting the whole time it might seem very unlikely that it will take 20 minutes, but given that it doesn't happen in the first quarter an hour, their new estimate of the likelihood of the <strong>total wait</strong> (not the wait stating at quarter an hour) being at least 20 minutes has gone up - it is now the same they thought the probability of it's taking over 5 minutes was at the start. Thus you have to be careful what you are asking.</p>
<p>Your feeling that the likelihood of an earthquake goes up over time is indeed contrary to the principle of the exponential dist. as Did said. However, one thing is true: e.g. let $p$ be the prob. of the next earthquake being between 15-20 minutes at the start, say at midnight; let $X$ be the time past midnight. Let $q$ be the the probability it happened in that same time given that you know it doesn't happen in the first 10 minutes. Then $q>p$. But let $Y$ be the time taken for the first earthquake after $00:10$. The probability that $Y$ is between 5 and 10 is unchanged by the knowledge that there was nothing between midnight and 10 past midnight. It was always $q$.</p>
<p>As to the mean, well, there's no reason why earlier times being more likely can prevent the mean being anything in particular - if there are large times with some probability then it can all balance out.</p>
|
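The memoryless identity $P(X>s+t\mid X>s)=P(X>t)$ used above is easy to see empirically. A minimal sketch (the rate $\lambda = 0.1$ is an arbitrary choice; $s=10$ and $t=5$ echo the minutes in the answer):

```python
# Simulating the memoryless property P(X > s+t | X > s) = P(X > t)
# for an exponential waiting time X.
import math
import random

random.seed(1)
lam, s, t = 0.1, 10.0, 5.0
samples = [random.expovariate(lam) for _ in range(300_000)]

past_s = [x for x in samples if x > s]
cond = sum(x > s + t for x in past_s) / len(past_s)   # P(X > s+t | X > s)
uncond = sum(x > t for x in samples) / len(samples)   # P(X > t)

# Both should match the exact survival probability exp(-lam * t).
print(round(cond, 2), round(uncond, 2), round(math.exp(-lam * t), 2))
```

Having already waited $s$ minutes leaves the distribution of the remaining wait unchanged, which is exactly why "overdue" reasoning fails for a Poisson process.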
402,750 | <p>I read that a continuous random variable having an exponential distribution can be used to model inter-event arrival times in a Poisson process. Examples included the times when asteroids hit the earth, when earthquakes happen, when calls are received at a call center, etc. In all these examples, the expected value of the number of events per unit of time, lambda, is known and is constant over time. Moreover, each event's occurrence is independent of previous events' occurrences. And the exponential variable that models inter-event arrivals has the same lambda parameter as the Poisson variable that models the number of events.</p>
<p>And now, my problem. It don't get the connection between the intuition behind the exp distrib and its pdf. It seems obvious that the more time it passes by without an earthquake happening, the more likely it is than an earthquake will happen. Assume my understanding of lambda is correct, i.e., lambda = the rate at which the event happens, e.g., 5 earthquakes per minute on some remote, angry planet. On the pdf graphic, the prob of generating a value between 0 and 1 is greater than the prob of choosing a value between 4 and 5 for instance. How is this graphic related to the fact that on average we need to have 5 earthquakes per minute?</p>
| Raskolnikov | 3,567 | <p>Just restating my comment as an answer:</p>
<p>You are confusing "long interarrival times are rare" with "the more time passes by, the more likely it is an earthquake will happen". The first is true for an exponential distribution if your mean interarrival time is short. The second one is never true for an exponential distribution, as Did and the others already have noted.</p>
<p>I find it an interesting question though because this is something you will come across relatively often in newspapers. Statements like "a big eruption of Yellowstone park is overdue" or "the last big asteroid crash dated from ..., considering the mean rate is ..., we are overdue for a crash.".</p>
<p>They create a false sense of impending doom. What they are really stating is that we are experiencing an unusually long interarrival time, but insofar as the process is Poisson, the probability of the event happening in the next days, months or years is still independent of how long it has been since the last event.</p>
<p>Add to that the fact that these processes are only approximately Poisson, or even that the rates are variable, and you realize that one should really relativize all this sensationalist press. For instance, asteroid crashes become ever rarer as planetary orbits get gradually cleared of asteroids through the solar system's history. There are just fewer and fewer asteroids left to crash. So the rate of the process is not constant.</p>
|
296,536 | <p>Let $\mu$ be some positive measure on $\mathbb{R}$. For technical reasons, I would like to know if the limit
$$\lim_{p\rightarrow\infty}\frac {\ln \|f\|_{L^p(\mu)}}{\ln p}$$
exists in $[0,\infty]$ for any $f$ (That is, I want the limit to exist, but perhaps not be finite.)</p>
<p>More generally, I would like to know if
$$\lim_{p\rightarrow\infty}\frac {\frac{d^k}{dp^k} \ln \|f\|_{L^p(\mu)}}{\frac{d^k}{dp^k} \ln p}$$
exists in $[0,\infty]$ for any $f$ such that $f\in L^p(\mu)$ for all $1\leq p<\infty.$
(Although I only really need it for $k=2.$) Note that these limits are related by L'Hospital's rule.</p>
| Willie Wong | 3,948 | <p>The limit doesn't always exist. </p>
<p>For convenience I will use the Lebesgue measure; similar construction can also be done for most $\mu$. </p>
<p>Let $c_n$ be a sequence of increasing positive integers. Consider the function
$$ f(x) = \sum_{n = 0}^\infty c_n^2 \chi_{[n, n + (c_n!)^{-1}]}(x) $$</p>
<p>By the disjoint support we have that
$$ \|f\|_{L^p}^p = \sum_{n = 0}^\infty \frac{(c_n)^{2p}}{c_n !} $$</p>
<p>Let us choose $c_n$ so that
$$ c_{n+1} \geq 10 (c_n)^{2n} $$</p>
<h3>Lim-sup</h3>
<p>If $p = c_N$ for some $N$, we have
$$ \|f\|_{L^p}^p \geq \frac{(c_N)^{2c_N}}{c_N!} \geq p^p $$
This implies
$$ \limsup_{p \to \infty} \frac{\ln \|f\|_p}{\ln p} \geq 1$$</p>
<h2>Lim-inf</h2>
<p>Let $p = (c_N)^{2N}$ for some $N$.
We have
$$ \sum_{n = 0}^N \frac{(c_n)^{2p}}{c_n!} \leq (c_N)^{2p} \cdot e = e\cdot p^{p/N}$$
On the other hand
$$ \sum_{n = N+1}^\infty \frac{(c_n)^{2p}}{c_n!} \leq \sum_{n = N+1}^\infty \frac{(c_n)^{2p}}{c_n (c_n - 1) \cdots (c_n - 2p + 1) \cdot (c_n - 2p)!} $$
Noting that $2p \leq \frac15 c_{N+1}$ we have
$$ \leq \sum_{n = N+1}^\infty \left( \frac54\right)^{2p} \cdot \frac{1}{(c_n - 2p)!} \leq \left( \frac54 \right)^{2p} \cdot e $$
So
$$ \|f\|_{L^p} \leq e^{1/p} \left( p^{1/N} + \frac{25}{16}\right) $$
For all sufficiently large $N$, using that $p^{1/N} = (c_N)^2 > 2$ we have
$$ \ln \|f\|_p \leq \frac{1}{p} + \ln 2 + \frac{1}{N} \ln p $$
and hence
$$ \liminf_{p\to \infty} \frac{\ln \|f\|_p}{\ln p} = 0 $$</p>
<h2>Remarks</h2>
<p>Note that the support of $f$ has Lebesgue measure less than $e$. So the same construction would work even if your measure is finite. What it really needs is enough sets of arbitrarily small measure. </p>
<p>To illustrate this final fact, let us consider the special case of the counting measure on $\mathbb{N}$; in other words, let's look at the $\ell_p$ norms on real sequences. (The argument here is a modification of one given by Alexandre Eremenko.) </p>
<p>By Minkowski's inequality we know that the $\ell_p$ norms are decreasing in $p$. So if we consider the mapping $\psi: [0,1]\ni x \mapsto \ln \|f\|_{\ell_{1/x}}$ for any fixed sequence $f$, we see that $\psi$ is</p>
<ul>
<li>An increasing function of $x$ (by Minkowski)</li>
<li>A convex function of $x$ (by Riesz-convexity)</li>
</ul>
<p>Now the function $y\mapsto e^{-y}$ is convex, and hence the function $\phi: y\mapsto \psi(e^{-y})$ is also convex, as it is the composition of two convex functions, the outer of which is increasing. </p>
<p>Note that $\phi(\ln p) = \ln \|f\|_{\ell_p}$. </p>
<p>We can conclude that the desired limit exists in this case since for any convex function $\phi$, the limit $\lim_{x\to\infty} \phi(x) / x$ exists in the sense given in the question (basically since $\phi'$ is increasing). </p>
|
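The oscillation in the counterexample above can be seen numerically by working in log space (via $\ln(c_n!) = $ `lgamma`$(c_n+1)$ and a log-sum-exp). The sketch below uses the small illustrative choice $c = (2, 10, 1000, 10^{13})$, which satisfies $c_{n+1}\ge 10\,(c_n)^{2n}$: the ratio $\ln\|f\|_p/\ln p$ is $\ge 1$ at $p=c_N$ but drops toward $1/N$ at $p=(c_N)^{2N}$, matching the lim-sup and lim-inf arguments.

```python
# Numerical sketch of the oscillation of ln||f||_p / ln p for the
# construction above, where ||f||_p^p = sum_n c_n^{2p} / c_n!.
import math

c = [2, 10, 1000, 10**13]   # satisfies c_{n+1} >= 10 * c_n^(2n)

def log_norm(p):
    """Return ln ||f||_{L^p}, computed stably in log space."""
    # log of each term, using lgamma(c+1) = ln(c!)
    logs = [2 * p * math.log(cn) - math.lgamma(cn + 1) for cn in c]
    m = max(logs)                                  # log-sum-exp trick
    log_sum = m + math.log(sum(math.exp(L - m) for L in logs))
    return log_sum / p

for N in (1, 2):
    p_hi = c[N]              # limsup argument: ratio >= 1 here
    p_lo = c[N] ** (2 * N)   # liminf argument: ratio ~ 1/N here
    print(N,
          round(log_norm(p_hi) / math.log(p_hi), 3),
          round(log_norm(p_lo) / math.log(p_lo), 3))
```

Already at $N=2$ the ratio at $p=(c_2)^4=10^{12}$ is close to $1/2$, while at $p=c_2=1000$ it exceeds $1$, so the limit cannot exist.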
1,032,926 | <p>I'm trying to understand this proof of the following Lemma, that I found in an article on Existence of Eigenvalues and Eigenvectors, but I don't understand the following step:</p>
<p><em>Let $V$ be a finite-dimensional complex vector space, $T$ a linear operator on $V$, and $c\gt 0$. If for every $v\in V\setminus \{ 0 \}$ and $k\in\mathbb C$ we have $||T(v) - kv||\ge c||v||$, then for every $k\in\mathbb C$ the operator $T - kI$ is one-to-one.</em></p>
<p>Why does this guarantee injectivity of $T-kI$?</p>
| Robert Lewis | 67,071 | <p>We need to assume $c > 0$. So assuming this, we have $\Vert (T - k I)(v) \Vert \ge c\Vert v \Vert > 0$ for $v \ne 0$. Thus $(T - k I)v \ne 0$ for $v \ne 0$. If now $T - kI$ were <em>not</em> injective, then there would exist <em>some</em> $w \in V$ and <em>distinct</em> $y_1, y_2 \in V$ with $(T - k I)y_1 = (T - k I)y_2 = w$. But this implies $(T - k I)(y_1 - y_2) = 0$; since, as we have seen, $y_1 - y_2 \ne 0$ implies $(T - k I)(y_1 - y_2) \ne 0$, we must have $y_1 - y_2 = 0$, contradicting the distinctness of $y_1$, $y_2$. Thus we must have $y_1 = y_2$, and so $T - k I$ is injective for every $k \in \Bbb C$. <strong><em>QED.</em></strong></p>
<p>Hope this helps. Cheers,</p>
<p>and as ever,</p>
<p><strong><em>Fiat Lux!!!</em></strong></p>
|
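The lemma's lower bound $\Vert (T-kI)v\Vert \ge c\Vert v\Vert$ with $c>0$ is exactly what fails at an eigenvalue. A minimal numerical illustration with a hypothetical $2\times 2$ operator (not from the article): approximating $\min_{\Vert v\Vert = 1}\Vert (T-kI)v\Vert$ over a grid of complex unit vectors, the minimum is bounded away from $0$ when $k$ is not an eigenvalue (so $T-kI$ is injective), and collapses to $0$ when $k$ is one.

```python
# For T below (eigenvalues 2 and 3), approximate the best constant
# c = min over unit vectors v of ||(T - kI)v||.  c > 0 iff T - kI is
# injective; at an eigenvalue the minimum is 0.
import math

T = [[2, 1],
     [0, 3]]   # upper triangular: eigenvalues 2 and 3

def min_lower_bound(k, steps=400):
    """Grid-approximate min_{||v||=1} ||(T - kI)v|| over complex unit v."""
    best = float("inf")
    for i in range(steps + 1):
        theta = (math.pi / 2) * i / steps
        for j in range(steps):
            phi = 2 * math.pi * j / steps
            # up to a global phase, v = (cos theta, sin theta * e^{i phi})
            v = (math.cos(theta),
                 math.sin(theta) * complex(math.cos(phi), math.sin(phi)))
            w = ((T[0][0] - k) * v[0] + T[0][1] * v[1],
                 T[1][0] * v[0] + (T[1][1] - k) * v[1])
            best = min(best, math.sqrt(abs(w[0])**2 + abs(w[1])**2))
    return best

print(round(min_lower_bound(0), 2))   # > 0: a valid c exists, T - 0I injective
print(round(min_lower_bound(2), 2))   # 0: k = 2 is an eigenvalue, no c > 0 works
```

For this $2\times 2$ case the true minimum at a given $k$ is the smallest singular value of $T-kI$, which the grid search approximates.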